source | text |
|---|---|
https://en.wikipedia.org/wiki/Double%20factorial | In mathematics, the double factorial of a number n, denoted by n!!, is the product of all the positive integers up to n that have the same parity (odd or even) as n. That is,
n!! = n(n − 2)(n − 4) ⋯
Restated, this says that for even n, the double factorial is
n!! = n(n − 2) ⋯ 4 · 2,
while for odd n it is
n!! = n(n − 2) ⋯ 3 · 1.
For example, 9!! = 9 × 7 × 5 × 3 × 1 = 945. The zero double factorial is 0!! = 1 as an empty product.
The sequence of double factorials for even n = 0, 2, 4, 6, 8, … starts as
1, 2, 8, 48, 384, 3840, 46080, 645120, …
The sequence of double factorials for odd n = 1, 3, 5, 7, 9, … starts as
1, 3, 15, 105, 945, 10395, 135135, …
The term odd factorial is sometimes used for the double factorial of an odd number.
History and usage
In a 1902 paper, the physicist Arthur Schuster introduced a separate notation for this product. One early account states that the double factorial was originally introduced in order to simplify the expression of certain trigonometric integrals that arise in the derivation of the Wallis product. Double factorials also arise in expressing the volume of a hypersphere, and they have many applications in enumerative combinatorics. They occur in Student's t-distribution (1908), though Gosset did not use the double exclamation point notation.
Relation to the factorial
Because the double factorial only involves about half the factors of the ordinary factorial, its value is not substantially larger than the square root of the factorial n!, and it is much smaller than the iterated factorial (n!)!.
The factorial of a positive n may be written as the product of two double factorials:
n! = n!! · (n − 1)!!,
and therefore
n!! = n! / (n − 1)!!,
where the denominator cancels the unwanted factors in the numerator. (The last form also applies when n = 0.)
For an even non-negative integer n = 2k with k ≥ 0, the double factorial may be expressed as
(2k)!! = 2^k · k!.
For odd n = 2k − 1 with k ≥ 1, combining the two previous formulas yields
(2k − 1)!! = (2k)! / (2^k · k!).
For an odd positive integer n = 2k − 1 with k ≥ 1, the double factorial may be expressed in terms of k-permutations of 2k or a falling factorial as
(2k − 1)!! = P(2k, k) / 2^k.
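The identities above are easy to verify numerically. A minimal Python sketch (the function name `double_factorial` is mine, not standard notation):

```python
from math import factorial, prod

def double_factorial(n: int) -> int:
    """Product of the integers from n down to 1 (or 2) with the same parity as n."""
    return prod(range(n, 0, -2))  # empty range -> empty product = 1 for n <= 0

# Examples from the text.
assert double_factorial(9) == 9 * 7 * 5 * 3 * 1 == 945
assert double_factorial(0) == 1                     # empty product

# n! = n!! * (n-1)!!
for n in range(1, 12):
    assert factorial(n) == double_factorial(n) * double_factorial(n - 1)

# (2k)!! = 2^k * k!   and   (2k-1)!! = (2k)! / (2^k * k!)
for k in range(8):
    assert double_factorial(2 * k) == 2**k * factorial(k)
    assert double_factorial(2 * k - 1) * 2**k * factorial(k) == factorial(2 * k)
```

Iterating `range(n, 0, -2)` picks out exactly the same-parity factors, so the one-line product matches the definition for even, odd, and zero arguments alike.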
Applications in enumerative combinatorics
Double factorials are motivated by the fact that they occur frequently in enumerative combinatorics and other settings. For instance, n!! for odd values of n counts
Perfect matchings of the complete graph K_(n+1) for odd n |
https://en.wikipedia.org/wiki/Hyperfactorial | In mathematics, and more specifically number theory, the hyperfactorial of a positive integer n is the product of the numbers of the form k^k from 1 to n.
Definition
The hyperfactorial of a positive integer n is the product of the numbers 1^1, 2^2, …, n^n. That is,
H(n) = 1^1 · 2^2 ⋯ n^n.
Following the usual convention for the empty product, the hyperfactorial of 0 is 1. The sequence of hyperfactorials, beginning with H(0) = 1, is:
1, 1, 4, 108, 27648, 86400000, …
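The definition translates directly into code; a short Python sketch (the name `hyperfactorial` is mine):

```python
from math import prod

def hyperfactorial(n: int) -> int:
    """H(n) = 1^1 * 2^2 * ... * n^n, with H(0) = 1 as the empty product."""
    return prod(k**k for k in range(1, n + 1))

# Reproduce the opening terms of the sequence quoted above.
assert [hyperfactorial(n) for n in range(6)] == [1, 1, 4, 108, 27648, 86400000]
```

Because each factor k^k grows so quickly, the values explode far faster than ordinary factorials; Python's arbitrary-precision integers make the exact computation unproblematic.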
Interpolation and approximation
The hyperfactorials were studied beginning in the 19th century by Hermann Kinkelin and James Whitbread Lee Glaisher. As Kinkelin showed, just as the factorials can be continuously interpolated by the gamma function, the hyperfactorials can be continuously interpolated by the K-function.
Glaisher provided an asymptotic formula for the hyperfactorials, analogous to Stirling's formula for the factorials:
H(n) ~ A · n^((6n^2 + 6n + 1)/12) · e^(−n^2/4),
where A ≈ 1.28243 is the Glaisher–Kinkelin constant.
Other properties
According to an analogue of Wilson's theorem on the behavior of factorials modulo prime numbers, when p is an odd prime number
H(p − 1) ≡ (−1)^((p−1)/2) (p − 1)!! (mod p),
where !! is the notation for the double factorial.
The hyperfactorials give the sequence of discriminants of Hermite polynomials in their probabilistic formulation.
References
External links
Integer sequences
Factorial and binomial topics |
https://en.wikipedia.org/wiki/Superfactorial | In mathematics, and more specifically number theory, the superfactorial of a positive integer n is the product of the first n factorials. They are a special case of the Jordan–Pólya numbers, which are products of arbitrary collections of factorials.
Definition
The nth superfactorial sf(n) may be defined as:
sf(n) = 1! · 2! ⋯ n!.
Following the usual convention for the empty product, the superfactorial of 0 is 1. The sequence of superfactorials, beginning with sf(0) = 1, is:
1, 1, 2, 12, 288, 34560, 24883200, …
Properties
Just as the factorials can be continuously interpolated by the gamma function, the superfactorials can be continuously interpolated by the Barnes G-function.
According to an analogue of Wilson's theorem on the behavior of factorials modulo prime numbers, when p is an odd prime number
sf(p − 1) ≡ (p − 1)!! (mod p),
where !! is the notation for the double factorial.
For every positive integer n, the number sf(4n) / (2n)! is a square number. This may be expressed as stating that, in the formula for sf(4n) as a product of factorials, omitting one of the factorials (the middle one, (2n)!) results in a square product. Additionally, if any n + 1 integers are given, the product of their pairwise differences is always a multiple of sf(n), and equals the superfactorial when the given numbers are consecutive.
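Both properties can be checked directly for small cases; a minimal Python sketch (all function names are mine):

```python
from itertools import combinations
from math import factorial, isqrt, prod

def superfactorial(n: int) -> int:
    """sf(n) = 1! * 2! * ... * n!"""
    return prod(factorial(k) for k in range(1, n + 1))

assert [superfactorial(n) for n in range(6)] == [1, 1, 2, 12, 288, 34560]

# sf(4n) / (2n)! is a perfect square, e.g. sf(4)/2! = 288/2 = 144 = 12^2.
for n in range(1, 4):
    q = superfactorial(4 * n) // factorial(2 * n)
    assert isqrt(q) ** 2 == q

def pairwise_diff_product(xs):
    """Product of b - a over all pairs a < b (by position) in xs."""
    return prod(b - a for a, b in combinations(xs, 2))

# Consecutive integers give exactly sf(n); arbitrary integers a multiple of it.
assert pairwise_diff_product(range(5)) == superfactorial(4)          # 288
assert pairwise_diff_product([3, 7, 10, 12, 20]) % superfactorial(4) == 0
```

The square property follows from pairing (2k)! = 2k · ((2k − 1)!)^2-style factorizations, which is why removing the middle factorial leaves only squared factors.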
References
External links
Integer sequences
Factorial and binomial topics |
https://en.wikipedia.org/wiki/Immunostaining | In biochemistry, immunostaining is any use of an antibody-based method to detect a specific protein in a sample. The term "immunostaining" was originally used to refer to the immunohistochemical staining of tissue sections, as first described by Albert Coons in 1941. However, immunostaining now encompasses a broad range of techniques used in histology, cell biology, and molecular biology that use antibody-based staining methods.
Techniques
Immunohistochemistry
Immunohistochemistry or IHC staining of tissue sections (or immunocytochemistry, which is the staining of cells), is perhaps the most commonly applied immunostaining technique. While the first cases of IHC staining used fluorescent dyes (see immunofluorescence), other non-fluorescent methods using enzymes such as peroxidase (see immunoperoxidase staining) and alkaline phosphatase are now used. These enzymes are capable of catalysing reactions that give a coloured product that is easily detectable by light microscopy. Alternatively, radioactive elements can be used as labels, and the immunoreaction can be visualized by autoradiography.
Tissue preparation or fixation is essential for the preservation of cell morphology and tissue architecture. Inappropriate or prolonged fixation may significantly diminish the antibody binding capability. Many antigens can be successfully demonstrated in formalin-fixed paraffin-embedded tissue sections. However, some antigens will not survive even moderate amounts of aldehyde fixation. Under these conditions, tissues should be rapidly fresh frozen in liquid nitrogen and cut with a cryostat. The disadvantages of frozen sections include poor morphology, poor resolution at higher magnifications, greater difficulty in cutting than paraffin sections, and the need for frozen storage. Alternatively, vibratome sections do not require the tissue to be processed through organic solvents or high heat, which can destroy antigenicity, nor to be disrupted by freeze-thawing. The disadvantage of vibr |
https://en.wikipedia.org/wiki/Draco%20%28programming%20language%29 | Draco was a shareware programming language created by Chris Gray. It was first developed for CP/M systems; an Amiga version followed in 1987.
Although Draco, a blend of Pascal and C, was well suited for general-purpose programming, its uniqueness as a language was its main weak point. Gray used Draco for the Amiga to create a port of Peter Langston's game Empire.
References
External links
CP/M distribution
Draco author Chris Gray's compiler page covering Draco
Freeware Draco-to-C converter at Aminet
Source code of Draco at Aminet
Algol programming language family
Amiga development software
CP/M software |
https://en.wikipedia.org/wiki/Glossary%20of%20differential%20geometry%20and%20topology | This is a glossary of terms specific to differential geometry and differential topology. The following three glossaries are closely related:
Glossary of general topology
Glossary of algebraic topology
Glossary of Riemannian and metric geometry.
See also:
List of differential geometry topics
Words in italics denote a self-reference to this glossary.
A
Atlas
B
Bundle – see fiber bundle.
basic element – A basic element x with respect to an element ξ is an element of a cochain complex (e.g., the complex of differential forms on a manifold) that is closed (dx = 0) and whose contraction by ξ is zero.
C
Chart
Cobordism
Codimension – The codimension of a submanifold is the dimension of the ambient space minus the dimension of the submanifold.
Connected sum
Connection
Cotangent bundle – the vector bundle of cotangent spaces on a manifold.
Cotangent space
D
Diffeomorphism – Given two differentiable manifolds M and N, a bijective map f from M to N is called a diffeomorphism if both f and its inverse are smooth functions.
Doubling – Given a manifold M with boundary, doubling is taking two copies of M and identifying their boundaries. As a result we get a manifold without boundary.
E
Embedding
F
Fiber – In a fiber bundle, the preimage of a point x in the base is called the fiber over x.
Fiber bundle
Frame – A frame at a point of a differentiable manifold M is a basis of the tangent space at the point.
Frame bundle – the principal bundle of frames on a smooth manifold.
Flow
G
Genus
H
Hypersurface – A hypersurface is a submanifold of codimension one.
I
Immersion
Integration along fibers
L
Lens space – A lens space is a quotient of the 3-sphere (or (2n + 1)-sphere) by a free isometric action of the cyclic group Z_k.
M
Manifold – A topological manifold is a locally Euclidean Hausdorff space. (In Wikipedia, a manifold need not be paracompact or second-countable.) A manifold is a differentiable manifold whose chart overlap functions |
https://en.wikipedia.org/wiki/Hadamard%20matrix | In mathematics, a Hadamard matrix, named after the French mathematician Jacques Hadamard, is a square matrix whose entries are either +1 or −1 and whose rows are mutually orthogonal. In geometric terms, this means that each pair of rows in a Hadamard matrix represents two perpendicular vectors, while in combinatorial terms, it means that each pair of rows has matching entries in exactly half of their columns and mismatched entries in the remaining columns. It is a consequence of this definition that the corresponding properties hold for columns as well as rows.
The n-dimensional parallelotope spanned by the rows of an n×n Hadamard matrix has the maximum possible n-dimensional volume among parallelotopes spanned by vectors whose entries are bounded in absolute value by 1. Equivalently, a Hadamard matrix has maximal determinant among matrices with entries of absolute value less than or equal to 1 and so is an extremal solution of Hadamard's maximal determinant problem.
Certain Hadamard matrices can almost directly be used as an error-correcting code using a Hadamard code (generalized in Reed–Muller codes), and are also used in balanced repeated replication (BRR), used by statisticians to estimate the variance of a parameter estimator.
Properties
Let H be a Hadamard matrix of order n. The transpose of H is closely related to its inverse. In fact:
H Hᵀ = n Iₙ,
where Iₙ is the n × n identity matrix and Hᵀ is the transpose of H. To see that this is true, notice that the rows of H are all orthogonal vectors over the field of real numbers and each have length √n. Dividing H through by this length gives an orthogonal matrix whose transpose is thus its inverse. Multiplying by the length again gives the equality above. As a result,
|det(H)| = n^(n/2),
where det(H) is the determinant of H.
Suppose that M is a complex matrix of order n, whose entries are bounded by |Mij| ≤ 1, for each i, j between 1 and n. Then Hadamard's determinant bound states that
|det(M)| ≤ n^(n/2).
Equality in this bound is attained for a real matrix M |
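The defining property H Hᵀ = n Iₙ can be checked on a concrete matrix. Sylvester's doubling construction (a standard way to build Hadamard matrices of order 2^k, used here only to get an example) in pure Python; the function names are mine:

```python
def sylvester(k):
    """Build a 2^k x 2^k Hadamard matrix by Sylvester's doubling: H -> [[H, H], [H, -H]]."""
    H = [[1]]
    for _ in range(k):
        H = [row + row for row in H] + [row + [-x for x in row] for row in H]
    return H

def times_transpose(H):
    """Compute H * H^T as a list of lists."""
    n = len(H)
    return [[sum(H[i][t] * H[j][t] for t in range(n)) for j in range(n)]
            for i in range(n)]

H = sylvester(3)            # order n = 8
n = len(H)
P = times_transpose(H)
# Rows are mutually orthogonal with squared length n, so H * H^T = n * I.
assert all(P[i][j] == (n if i == j else 0) for i in range(n) for j in range(n))
```

Each doubling step preserves row orthogonality: two copies of an orthogonal pair stay orthogonal, and the sign flip in the lower-right block makes the new cross terms cancel.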
https://en.wikipedia.org/wiki/No%20symbol | The general prohibition sign, also known informally as the no symbol, 'do not' sign, circle-backslash symbol, nay, interdictory circle, prohibited symbol, don't do it symbol, or universal no, is a red circle with a 45-degree diagonal line inside the circle from upper-left to lower-right. It is overlaid on a pictogram to warn that an activity is not permitted, or has accompanying text to describe what is prohibited.
Appearance
According to the ISO standard (and also under a UK Statutory Instrument), the red area must take up at least 35 percent of the total area of the sign within the outer circumference of the "prohibition sign". Under the UK rules the width of a "no symbol" is 80 percent of the height of the printed area.
For computer display and printing, the symbol is supported in Unicode by combining elements rather than with individual code points (see below).
Uses
Motor vehicle traffic signage
The "prohibition" symbol is used on traffic signs, so that drivers can interpret traffic laws quickly while driving. For example:
No left turn or No right turn
No U-turn
No parking (English) or No estacionarse (Spanish)
Road closed to vehicles (Japan), Road closed to vehicles (Germany, but typical in Europe)
Non-motor traffic
The symbol's use is not limited to informing drivers of motorized vehicles, and is commonly used for other forms of traffic:
No horse-riding
No bicycles
No pedestrians
General prohibitions and warnings
The symbol is used for non-traffic purposes to warn or prohibit certain activities:
No smoking (with symbol of a lit cigarette).
No littering (with symbol of person littering or of litter)
No swimming (with symbol of swimmer in water underneath)
Packaging and products
It is also used on packages sent through the mail and sealed boxes of merchandise that are sold in stores. Using a graphical symbol is useful to convey important warnings regardless of language. For example:
Breakable; do not drop
Keep aw |
https://en.wikipedia.org/wiki/Video%20projector | A video projector is an image projector that receives a video signal and projects the corresponding image on a projection screen using a lens system. Video projectors use a very bright ultra-high-performance lamp (a special mercury arc lamp), Xenon arc lamp, LED or solid state blue, RB, RGB or remote fiber-optic RGB lasers to provide the illumination required to project the image, and most modern ones can correct any curves, blurriness, and other inconsistencies through manual settings. If a blue laser is used, a phosphor wheel is used to turn blue light into white light, which is also the case with white LEDs. (White LEDs do not use lasers.) A wheel is used in order to prolong the lifespan of the phosphor, as it is degraded by the heat generated by the laser diode. Remote fiber-optic RGB laser racks can be placed far away from the projector, and several racks can be housed in a single, central room. Each projector can use up to two racks, and several monochrome lasers are mounted on each rack, the light of which is mixed and transmitted to the projector booth using optical fibers. Projectors using RB lasers use a blue laser with a phosphor wheel in conjunction with a conventional solid state red laser.
Video projectors are used for many applications such as conference room presentations, classroom training, home cinema, movie theaters and concerts, having mostly replaced overhead, slide and conventional film projectors. In schools and other educational settings, they are sometimes connected to an interactive whiteboard. In the late 20th century, they became commonplace in home cinema. Although large LCD television screens became quite popular, video projectors are still common among many home theater enthusiasts.
Overview
A video projector, also known as a digital projector, may project onto a traditional reflective projection screen, or it may be built into a cabinet with a translucent rear-projection screen to form a single unified display device.
Common displ |
https://en.wikipedia.org/wiki/Eternal%20September | Eternal September or the September that never ended is Usenet slang for a period beginning around 1993 when Internet service providers began offering Usenet access to many new users. The flood of new users overwhelmed the existing culture for online forums and the ability to enforce existing norms. AOL followed with their Usenet gateway service in March 1994, leading to a constant stream of new users. Hence, from the early Usenet point of view, the influx of new users in September 1993 never ended.
History
During the 1980s and early 1990s, Usenet and the Internet were generally the domain of dedicated computer professionals and hobbyists; new users joined slowly, in small numbers, and observed and learned the social conventions of online interaction without having much of an impact on the experienced users. The only exception to this was September of every year, when large numbers of first-year college students gained access to the Internet and Usenet through their universities. These large groups of new users who had not yet learned online etiquette created a nuisance for the experienced users, who came to dread September every year. Once ISPs like AOL made Internet access widely available for home users, a continuous influx of new users began, making it feel like it was always "September" to the more experienced users.
The full phrase appears to have evolved over a series of months on two separate alt.folklore newsgroups, where a number of threads lamented what posters saw as an increase in low-quality posts across various newsgroups. Several members of the newsgroups referenced aspects of the "September" issue, typically in a joking manner.
In a thread on January 8, 1994, Joel Furr cross-posted asking "Is it just me, or has Delphi unleashed a staggering amount of weirdos on the net?", which garnered a reply from Karl Reinsch "Of course it's perpetually September for Delphi users, isn't it?" The day before, Furr had also posted the same message to alt.fol |
https://en.wikipedia.org/wiki/Andrew%20Project | The Andrew Project was a distributed computing environment developed at Carnegie Mellon University beginning in 1982. It was an ambitious project for its time and resulted in an unprecedentedly vast and accessible university computing infrastructure. The project was named after Andrew Carnegie and Andrew Mellon, the founders of the institutions that eventually became Carnegie Mellon University.
History
The Information Technology Center, a partnership of Carnegie Mellon University (CMU) and the International Business Machines Corporation (IBM), began work on the Andrew Project in 1982. In its initial phase, the project involved both software and hardware, including wiring the campus for data and developing workstations to be distributed to students and faculty at CMU and elsewhere. The proposed "3M computer" workstations included a million pixel display and a megabyte of memory, running at a million instructions per second. Unfortunately, their cost put the computers beyond the reach of students' budgets. The initial hardware deployment in 1985 established a number of university-owned "clusters" of public workstations in various academic buildings and dormitories. The campus was fully wired and ready for the eventual availability of inexpensive personal computers.
Early development within the Information Technology Center, originally called VICE (Vast Integrated Computing Environment) and VIRTUE (Virtue Is Reached Through Unix and Emacs), focused on centralized tools, such as a file server, and workstation tools including a window manager, editor, email, and file system client code.
Initially the system was prototyped on Sun Microsystems machines, and then moved to IBM RT PC series computers running a special IBM Academic Operating System. People involved in the project included James H. Morris, Nathaniel Borenstein, James Gosling, and David S. H. Rosenthal.
The project was extended several times after 1985 in order to complete the software, and was ren |
https://en.wikipedia.org/wiki/Phytochorion | A phytochorion, in phytogeography, is a geographic area with a relatively uniform composition of plant species. Adjacent phytochoria do not usually have a sharp boundary, but rather a soft one, a transitional area in which many species from both regions overlap. The region of overlap is called a vegetation tension zone.
In traditional schemes, areas in phytogeography are classified hierarchically, according to the presence of endemic families, genera or species, e.g., in floral (or floristic, phytogeographic) zones and regions, or also in kingdoms, regions and provinces, sometimes including the categories empire and domain. However, some authors prefer not to rank areas, referring to them simply as "areas", "regions" (in a non-hierarchical sense) or "phytochoria".
Systems used to classify vegetation can be divided in two major groups: those that use physiognomic-environmental parameters and characteristics and those that are based on floristic (i.e. shared genera and species) relationships. Phytochoria are defined by their plant taxonomic composition, while other schemes of regionalization (e.g., vegetation type, physiognomy, plant formations, biomes) may variably take into account, depending on the author, the apparent characteristics of a community (the dominant life-form), environment characteristics, the fauna associated, anthropic factors or political-conservationist issues.
Explanation
Several systems of classifying geographic areas where plants grow have been devised. Most systems are organized hierarchically, with the largest units subdivided into smaller geographic areas, which are made up of smaller floristic communities, and so on. Phytochoria are defined as areas possessing a large number of endemic taxa. Floristic kingdoms are characterized by a high degree of family endemism, floristic regions by a high degree of generic endemism, and floristic provinces by a high degree of species endemism. Systems of phytochoria have both significant similarities |
https://en.wikipedia.org/wiki/DECstation | The DECstation was a brand of computers used by DEC, and refers to three distinct lines of computer systems—the first released in 1978 as a word processing system, and the latter (more widely known) two both released in 1989. These comprised a range of computer workstations based on the MIPS architecture and a range of PC compatibles. The MIPS-based workstations ran ULTRIX, a DEC-proprietary version of UNIX, and early releases of OSF/1.
DECstation 78
The first line of computer systems given the DECstation name were word processing systems based on the PDP-8. These
systems, built into a VT52 terminal, were also known as the VT78.
DECstation RISC workstations
History
The second (and completely unrelated) line of DECstations began with the DECstation 3100, which was released on 11 January 1989. The DECstation 3100 was the first commercially available RISC-based machine built by DEC.
This line of DECstations was the fruit of an advanced development skunkworks project carried out in DEC's Palo Alto Hamilton Ave facility. Known as the PMAX project, its focus was to produce a computer systems family with the economics and performance to compete against the likes of Sun Microsystems and other RISC-based UNIX platforms of the day. The brainchild of James Billmaier, Mario Pagliaro, Armando Stettner and Joseph DiNucci, the systems family was to also employ a truly RISC-based architecture when compared to the heavier and very CISC VAX or the then still under development PRISM architectures. At the time DEC was mostly known for their CISC systems including the successful PDP-11 and VAX lines.
Several architectures were considered from Intel, Motorola and others but the group quickly selected the MIPS line of microprocessors. The (early) MIPS microprocessors supported both big- and little-endian modes (configured during hardware reset). Little-endian mode was chosen both to match the byte ordering of VAX-based systems and the growing number of Intel-based PCs and compu |
https://en.wikipedia.org/wiki/Inferno%20%28operating%20system%29 | Inferno is a distributed operating system started at Bell Labs and now developed and maintained by Vita Nuova Holdings as free software under the MIT License. Inferno was based on the experience gained with Plan 9 from Bell Labs, and the further research of Bell Labs into operating systems, languages, on-the-fly compilers, graphics, security, networking and portability. The name of the operating system, many of its associated programs, and that of the current company, were inspired by Dante Alighieri's Divine Comedy. In Italian, Inferno means "hell", of which there are nine circles in Dante's Divine Comedy.
Design principles
Inferno was created in 1995 by members of Bell Labs' Computer Science Research division to bring ideas derived from their previous operating system, Plan 9 from Bell Labs, to a wider range of devices and networks. Inferno is a distributed operating system based on three basic principles:
Resources as files: all resources are represented as files within a hierarchical file system
Namespaces: a program's view of the network is a single, coherent namespace that appears as a hierarchical file system but may represent physically separated (locally or remotely) resources
Standard communication protocol: a standard protocol, called Styx, is used to access all resources, both local and remote
To handle the diversity of network environments it was intended to be used in, the designers decided a virtual machine (VM) was a necessary component of the system. This is the same conclusion as that of the Oak project that became Java, but it was arrived at independently. The Dis virtual machine is a register machine intended to closely match the architecture it runs on, in contrast to the stack machine of the Java virtual machine. An advantage of this approach is the relative simplicity of creating a just-in-time compiler for new architectures.
The virtual machine provides memory management designed to be efficient on devices with as little as 1 MiB of memory and witho |
https://en.wikipedia.org/wiki/Solid%20geometry | Solid geometry or stereometry is the geometry of three-dimensional Euclidean space (3D space).
A solid figure is the region of 3D space bounded by a two-dimensional surface; for example, a solid ball consists of a sphere and its interior.
Solid geometry deals with the measurements of volumes of various solids, including pyramids, prisms (and other polyhedrons), cubes, cylinders, cones (and truncated cones).
History
The Pythagoreans dealt with the regular solids, but the pyramid, prism, cone and cylinder were not studied until the Platonists. Eudoxus established their measurement, proving the pyramid and cone to have one-third the volume of a prism and cylinder on the same base and of the same height. He was probably also the discoverer of a proof that the volume enclosed by a sphere is proportional to the cube of its radius.
Topics
Basic topics in solid geometry and stereometry include:
incidence of planes and lines
dihedral angle and solid angle
the cube, cuboid, parallelepiped
the tetrahedron and other pyramids
prisms
octahedron, dodecahedron, icosahedron
cones and cylinders
the sphere
other quadrics: spheroid, ellipsoid, paraboloid and hyperboloids.
Advanced topics include:
projective geometry of three dimensions (leading to a proof of Desargues' theorem by using an extra dimension)
further polyhedra
descriptive geometry.
List of solid figures
Whereas a sphere is the surface of a ball, for other solid figures it is sometimes ambiguous whether the term refers to the surface of the figure or the volume enclosed therein, notably for a cylinder.
Techniques
Various techniques and tools are used in solid geometry. Among them, analytic geometry and vector techniques have a major impact by allowing the systematic use of linear equations and matrix algebra, which are important for higher dimensions.
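As an illustration of the vector techniques mentioned above, the volume of a tetrahedron follows from a scalar triple product: V = |det[b − a, c − a, d − a]| / 6. A minimal Python sketch (helper names are mine):

```python
def sub(u, v):
    """Component-wise difference of two 3-vectors."""
    return tuple(x - y for x, y in zip(u, v))

def det3(u, v, w):
    """Determinant of the 3x3 matrix with rows u, v, w (the scalar triple product)."""
    return (u[0] * (v[1] * w[2] - v[2] * w[1])
            - u[1] * (v[0] * w[2] - v[2] * w[0])
            + u[2] * (v[0] * w[1] - v[1] * w[0]))

def tetra_volume(a, b, c, d):
    """Volume of the tetrahedron with vertices a, b, c, d."""
    return abs(det3(sub(b, a), sub(c, a), sub(d, a))) / 6

# The corner tetrahedron of the unit cube has volume 1/6.
assert tetra_volume((0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)) == 1 / 6
```

The factor 1/6 is the three-dimensional analogue of the 1/2 in the triangle-area formula, reflecting Eudoxus's one-third relation applied to a prism of the same base and height.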
Applications
A major application of solid geometry and stereometry is in 3D computer graphics.
See also
Euclidean geometry
Shape
Solid modeling
S |
https://en.wikipedia.org/wiki/List%20of%20factorial%20and%20binomial%20topics | This is a list of factorial and binomial topics in mathematics. See also binomial (disambiguation).
Abel's binomial theorem
Alternating factorial
Antichain
Beta function
Bhargava factorial
Binomial coefficient
Pascal's triangle
Binomial distribution
Binomial proportion confidence interval
Binomial-QMF (Daubechies wavelet filters)
Binomial series
Binomial theorem
Binomial transform
Binomial type
Carlson's theorem
Catalan number
Fuss–Catalan number
Central binomial coefficient
Combination
Combinatorial number system
De Polignac's formula
Difference operator
Difference polynomials
Digamma function
Egorychev method
Erdős–Ko–Rado theorem
Euler–Mascheroni constant
Faà di Bruno's formula
Factorial
Factorial moment
Factorial number system
Factorial prime
Gamma distribution
Gamma function
Gaussian binomial coefficient
Gould's sequence
Hyperfactorial
Hypergeometric distribution
Hypergeometric function identities
Hypergeometric series
Incomplete beta function
Incomplete gamma function
Jordan–Pólya number
Kempner function
Lah number
Lanczos approximation
Lozanić's triangle
Macaulay representation of an integer
Mahler's theorem
Multinomial distribution
Multinomial coefficient, Multinomial formula, Multinomial theorem
Multiplicities of entries in Pascal's triangle
Multiset
Multivariate gamma function
Narayana numbers
Negative binomial distribution
Nörlund–Rice integral
Pascal matrix
Pascal's pyramid
Pascal's simplex
Pascal's triangle
Permutation
List of permutation topics
Pochhammer symbol (also falling, lower, rising, upper factorials)
Poisson distribution
Polygamma function
Primorial
Proof of Bertrand's postulate
Sierpinski triangle
Star of David theorem
Stirling number
Stirling transform
Stirling's approximation
Subfactorial
Table of Newtonian series
Taylor series
Trinomial expansion
Vandermonde's identity
Wilson prime
Wilson's theorem
Wolstenholme prime
Factorial and binomial topics |
https://en.wikipedia.org/wiki/Telescoping%20series | In mathematics, a telescoping series is a series whose general term is of the form a_(n+1) − a_n, i.e. the difference of two consecutive terms of a sequence (a_n).
As a consequence, the partial sums consist of only two terms of (a_n) after cancellation. The cancellation technique, with part of each term cancelling with part of the next term, is known as the method of differences.
For example, the series
1/(1·2) + 1/(2·3) + 1/(3·4) + ⋯
(the series of reciprocals of pronic numbers) simplifies as
∑ 1/(n(n+1)) = ∑ (1/n − 1/(n+1)) = 1.
An early statement of the formula for the sum or partial sums of a telescoping series can be found in a 1644 work by Evangelista Torricelli, De dimensione parabolae.
In general
Telescoping sums are finite sums in which pairs of consecutive terms cancel each other, leaving only the initial and final terms.
Let (a_n) be a sequence of numbers. Then,
∑_(n=1)^N (a_n − a_(n−1)) = a_N − a_0.
If a_n converges to a limit L, the corresponding telescoping series converges, with sum L − a_0.
Telescoping products are finite products in which consecutive terms cancel denominator with numerator, leaving only the initial and final terms.
Let (a_n) be a sequence of nonzero numbers. Then,
∏_(n=1)^N (a_n / a_(n−1)) = a_N / a_0.
If a_n converges to a nonzero limit L, the corresponding infinite product converges to L / a_0.
More examples
Many trigonometric functions also admit representation as a difference, which allows telescopic canceling between the consecutive terms.
Some sums of the form ∑ f(n)/g(n), where f and g are polynomial functions whose quotient may be broken up into partial fractions, will fail to admit summation by this method when the resulting partial-fraction terms do not cancel.
Let k be a positive integer. Then
∑_(n=1)^∞ 1/(n(n+k)) = H_k / k,
where H_k is the kth harmonic number. All of the terms after the first k cancel.
Let k and m, with k ≠ m, be positive integers. Then
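The pronic-number example and a simple telescoping product can be checked with exact rational arithmetic; a minimal Python sketch (function names are mine):

```python
from fractions import Fraction

def pronic_partial_sum(N):
    """Partial sum of 1/(n(n+1)) = (1/n - 1/(n+1)); telescopes to 1 - 1/(N+1)."""
    return sum(Fraction(1, n * (n + 1)) for n in range(1, N + 1))

for N in (1, 10, 1000):
    assert pronic_partial_sum(N) == 1 - Fraction(1, N + 1)

def telescoping_product(N):
    """Product of (n-1)/n for n = 2..N; numerators cancel denominators, leaving 1/N."""
    p = Fraction(1)
    for n in range(2, N + 1):
        p *= Fraction(n - 1, n)
    return p

assert telescoping_product(100) == Fraction(1, 100)
```

Using `Fraction` avoids floating-point error, so the cancellation pattern is confirmed exactly rather than approximately.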
An application in probability theory
In probability theory, a Poisson process is a stochastic process of which the simplest case involves "occurrences" at random times, the waiting time until the next occurrence having a memoryless exponential distribution, and the number of "occurrences" in any time interval having a Poisson distribution whose expected value is proportional to the length of the time interval. Let Xt be the number of "occurrences" before ti |
https://en.wikipedia.org/wiki/Oracle%20Internet%20Directory | Oracle Internet Directory (OID) is a directory service produced by Oracle Corporation, which is compatible with LDAP version 3.
Functionality
OID makes the following features available from within an Oracle database environment:
integration with Oracle 8i and subsequent databases for ease of use and administration
a scalable, multi-platform listing structure for reliable and safe intranet integration
synchronization of OID-based listings (also with distributed applications)
integration of existing public key certificates, digital wallets (e-wallets) and entrance privileges
co-existence with other LDAP implementations via Oracle's Directory Integration Platform (DIP)
administration tools, including:
routing policies
system management objects such as Oracle Directory Manager (also known as "oidadmin" or "ODM")
technical support regarding the quality of the services
delegated administrative service
Implementation
OID uses standard Oracle database structures to store its internal tables.
In Oracle version 9 databases, by default, many Oracle LDAP Table Stores use tablespaces with names beginning with the OLTS_ (and occasionally P1TS_) prefixes. Relevant default schemas used may include ODS (for "Oracle directory server") and ODSCOMMON.
Operation
The OID Control Utility (OIDCTL) serves as a command-line tool for starting and stopping the OID server. The OID Monitor process interprets and executes the OIDCTL commands.
Marketing
In comparing Oracle Internet Directory with its competitors, Oracle Corporation stresses that it uses as its foundation an Oracle database; whereas many competing products (such as Oracle Directory Server Enterprise Edition and Novell eDirectory) do not rely on an enterprise-strength relational database, but instead on embedded database engines similar to Berkeley DB. Integration with the Oracle database makes many of the technologies available for Oracle database available for Oracle Internet Directory, and improveme |
https://en.wikipedia.org/wiki/Linksys | Linksys Holdings, Inc., is an American brand of data networking hardware products mainly sold to home users and small businesses. It was founded in 1988 by the couple Victor and Janie Tsao, both Taiwanese immigrants to the United States. Linksys products include Wi-Fi routers, mesh Wi-Fi systems, Wi-Fi extenders, access points, network switches, and other Wi-Fi networking products. It is headquartered in Irvine, California.
Linksys products are sold direct-to-consumer from its website, through online retailers and marketplaces, as well as off-the-shelf in consumer electronics and big-box retail stores. As of 2020, Linksys products were sold in retail locations and through value-added resellers in 64 countries, and Linksys was the first router company to ship 100 million products.
History
In 1988, spouses Janie and Victor Tsao founded DEW International, later renamed Linksys, in the garage of their Irvine, California home. The Tsaos were immigrants from Taiwan who held second jobs as consultants specializing in pairing American technology vendors with manufacturers in Taiwan. The founders used Taiwanese manufacturing to achieve the company's early success. The company's first products were printer sharers that connected multiple PCs to printers. The company expanded into Ethernet hubs, network cards, and cords. In 1992, the Tsaos began running Linksys full time and moved the company and its growing staff to a formal office. By 1994, it had grown to 55 employees with annual revenues of $6.5 million.
Linksys received a major boost in 1995, when Microsoft released Windows 95 with built-in networking functions that expanded the market for its products. Linksys established its first U.S. retail channels with Fry's Electronics (1995) and Best Buy (1996). In the late 1990s, Linksys released the first affordable multiport router, popularizing Linksys as a home networking brand. By 2003, when the company was acquired by Cisco, it had 305 employees and revenues of more than $500 million.
Cisco expanded the company |
https://en.wikipedia.org/wiki/Brownian%20motor | Brownian motors are nanoscale or molecular machines that use chemical reactions to generate directed motion in space. The theory behind Brownian motors relies on the phenomenon of Brownian motion, random motion of particles suspended in a fluid (a liquid or a gas) resulting from their collision with the fast-moving molecules in the fluid.
On the nanoscale (1-100 nm), viscosity dominates inertia, and the extremely high degree of thermal noise in the environment makes conventional directed motion all but impossible, because the forces impelling these motors in the desired direction are minuscule when compared to the random forces exerted by the environment. Brownian motors operate specifically to utilise this high level of random noise to achieve directed motion, and as such are only viable on the nanoscale.
The concept of Brownian motors is a recent one, the term having been coined only in 1995 by Peter Hänggi, but such motors may have existed in nature for a very long time and may help to explain crucial cellular processes that require movement at the nanoscale, such as protein synthesis and muscular contraction. If this is the case, Brownian motors may have implications for the foundations of life itself.
In more recent times, humans have attempted to apply this knowledge of natural Brownian motors to solve human problems. The applications of Brownian motors are most obvious in nanorobotics, due to that field's inherent reliance on directed motion.
History
20th century
The term “Brownian motor” was originally coined by Swiss theoretical physicist Peter Hänggi in 1995. The Brownian motor, like the phenomenon of Brownian motion that underpinned its underlying theory, was also named after 19th century Scottish botanist Robert Brown, who, while looking through a microscope at pollen of the plant Clarkia pulchella immersed in water, famously described the random motion of pollen particles in water in 1827. In 1905, almost eighty years later, theoretical physicist Alb |
https://en.wikipedia.org/wiki/Apache%20Groovy | Apache Groovy is a Java-syntax-compatible object-oriented programming language for the Java platform. It is both a static and dynamic language with features similar to those of Python, Ruby, and Smalltalk. It can be used as both a programming language and a scripting language for the Java Platform, is compiled to Java virtual machine (JVM) bytecode, and interoperates seamlessly with other Java code and libraries. Groovy uses a curly-bracket syntax similar to Java's. Groovy supports closures, multiline strings, and expressions embedded in strings. Much of Groovy's power lies in its AST transformations, triggered through annotations.
Groovy 1.0 was released on January 2, 2007, and Groovy 2.0 in July 2012. Since version 2, Groovy can be compiled statically, offering type inference and performance near that of Java. Groovy 2.4 was the last major release under Pivotal Software's sponsorship, which ended in March 2015. Groovy has since changed its governance structure to a Project Management Committee in the Apache Software Foundation.
History
James Strachan first talked about the development of Groovy on his blog in August 2003. In March 2004, Groovy was submitted to the JCP as JSR 241 and accepted by ballot. Several versions were released between 2004 and 2006. After the Java Community Process (JCP) standardization effort began, the version numbering changed, and a version called "1.0" was released on January 2, 2007. After various betas and release candidates numbered 1.1, on December 7, 2007, Groovy 1.1 Final was released and immediately renumbered as Groovy 1.5 to reflect the many changes made.
In 2007, Groovy won the first prize at JAX 2007 innovation award. In 2008, Grails, a Groovy web framework, won the second prize at JAX 2008 innovation award.
In November 2008, SpringSource acquired the Groovy and Grails company (G2One). In August 2009 VMware acquired SpringSource.
In April 2012, after eight years of inactivity, the Spec Lead changed the status of JSR 241 |
https://en.wikipedia.org/wiki/Nomenclature%20of%20Territorial%20Units%20for%20Statistics | Nomenclature of Territorial Units for Statistics or NUTS () is a geocode standard for referencing the administrative divisions of countries for statistical purposes. The standard, adopted in 2003, is developed and regulated by the European Union, and thus only covers the EU member states in detail. The Nomenclature of Territorial Units for Statistics is instrumental in the European Union's Structural Funds and Cohesion Fund delivery mechanisms and for locating the area where goods and services subject to European public procurement legislation are to be delivered.
For each EU member country, a hierarchy of three NUTS levels is established by Eurostat in agreement with each member state; the subdivisions in some levels do not necessarily correspond to administrative divisions within the country. A NUTS code begins with a two-letter code referencing the country, as abbreviated in the European Union's Interinstitutional Style Guide. The subdivision of the country is then referred to with one number. A second or third subdivision level is referred to with another number each. Each numbering starts with 1, as 0 is used for the upper level. Where the subdivision has more than nine entities, capital letters are used to continue the numbering. Below the three NUTS levels are local administrative units (LAUs). A similar statistical system is defined for the candidate countries and members of the European Free Trade Association, but they are not part of NUTS governed by the regulations.
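The prefix structure described above (two-letter country code, then one character per level) can be sketched in a few lines; the helper name is hypothetical and this handles only well-formed codes:

```python
def nuts_levels(code: str):
    """Expand a NUTS code into its nested level codes.

    Illustrative helper: a code is a 2-letter country prefix plus one
    character per NUTS level (digits 1-9, or capital letters where a
    level has more than nine entities).
    """
    country, rest = code[:2], code[2:]
    if not (country.isalpha() and 1 <= len(rest) <= 3):
        raise ValueError("expected a 2-letter country code plus 1-3 level characters")
    # Each successively longer prefix names a coarser-to-finer region.
    return [code[: 2 + i] for i in range(1, len(rest) + 1)]

print(nuts_levels("DE212"))  # ['DE2', 'DE21', 'DE212'] -- NUTS 1, 2, 3
```

The example code "DE212" is hypothetical but follows the documented format: "DE" for Germany, then one character for each of the three levels.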
The current NUTS classification, dated 21 November 2016 and effective from 1 January 2018 (since updated to reflect current membership), lists 92 regions at NUTS 1, 244 regions at NUTS 2, and 1215 regions at NUTS 3 level, as well as 99,387 local administrative units (LAUs).
National structures
Not all countries have every level of division, depending on their size. For example, Luxembourg and Cyprus only have local administrative units (LAUs); the three NUTS divisions each correspond to the entire country itself. |
https://en.wikipedia.org/wiki/Masatoshi%20Shima | Masatoshi Shima is a Japanese electronics engineer. He was one of the architects of the world's first microprocessor, the Intel 4004. In 1968, Shima worked for Busicom in Japan, and did the logic design for a specialized CPU that was to be implemented as three custom chips. In 1969, he worked with Intel's Ted Hoff and Stanley Mazor to reduce the three-chip Busicom proposal into a one-chip architecture. In 1970, that architecture was transformed into a silicon chip, the Intel 4004, by Federico Faggin, with Shima's assistance in logic design.
He later joined Intel in 1972. There, he worked with Faggin to develop the Intel 8080, released in 1974. Shima then developed several Intel peripheral chips, some used in the IBM PC, such as the 8259 interrupt controller, 8255 programmable peripheral interface chip, 8253 timer chip, 8257 direct memory access (DMA) chip and 8251 serial communication USART chip. He then joined Zilog, where he worked with Faggin to develop the Zilog Z80 (1976) and Z8000 (1979).
Early life and career
He studied organic chemistry at Tohoku University in Sendai, Miyagi Prefecture, Japan. With poor prospects for employment in the field of chemistry, he went to work for Busicom, a business calculator manufacturer, joining in Spring 1967. There, he learned about software and digital logic design, from 1967 to 1968.
Intel 4004
After Busicom decided to use large-scale integration (LSI) circuits in their calculator products, they began work on what later became known as the "Busicom Project", a chipset for the Busicom 141-PF calculator that led to creating the first microprocessor, the Intel 4004. In April 1968, Shima was asked to design the logic for what was intended to become a future chipset to be designed and produced by a semiconductor company. Shima designed a special-purpose LSI chipset, along with his supervisor Tadashi Tanba, in 1968. His design consisted of seven LSI chips, including a three-chip CPU. Shima's initial design included arithmetic units (adders), mul |
https://en.wikipedia.org/wiki/OpenCores | OpenCores is a community developing digital open-source hardware through electronic design automation (EDA), with a similar ethos to the free software movement. OpenCores hopes to eliminate redundant design work and significantly reduce development costs. A number of companies have been reported as adopting OpenCores IP in chips, or as adjuncts to EDA tools. OpenCores is also sometimes cited as an example of open source in the electronics hardware community.
OpenCores has always been a commercially owned organization. In 2015, its core active users established the independent Free and Open Source Silicon Foundation (FOSSi Foundation) and created another directory on the librecores.org website as the basis for all future development, independent of commercial control. Seven years later, the site was shut down and redirected to a post on the FOSSi Foundation website recommending a simple web search instead, reasoning that "free and open source silicon is no longer a dream".
History
Damjan Lampret, one of the founders of OpenCores, stated on his website that it began in 1999. The new website and its objectives were reported publicly by EE Times in 2000 and CNET News in 2001. Through the following years it was supported by advertising and sponsorship, including by Flextronics.
In mid-2007 an appeal was put out for a new backer. That November, Swedish design house ORSoC AB agreed to take over maintenance of the OpenCores website.
EE Times reported in late 2008 that OpenCores had passed the 20,000 subscriber mark. In October 2010 it reached 95,000 registered users and had approximately 800 projects. In July 2012 it reached 150,000 registered users.
During 2015, ORSoC AB formed a joint venture with KNCMiner AB to develop bitcoin mining machines. As this became the primary focus of the business, they were able to spend less time with the opencores.org project. In response to the growing lack of commitment, the core OpenRISC development team set up the Free and Open Source Silicon |
https://en.wikipedia.org/wiki/Behavior-based%20robotics | Behavior-based robotics (BBR) or behavioral robotics is an approach in robotics that focuses on robots able to exhibit complex-appearing behaviors despite using little internal state to model the immediate environment, mostly gradually correcting their actions via sensory-motor links.
Principles
Behavior-based robotics sets itself apart from traditional artificial intelligence by using biological systems as a model. Classic artificial intelligence typically uses a set of steps to solve problems; it follows a path based on internal representations of events, in contrast to the behavior-based approach. Rather than use preset calculations to tackle a situation, behavior-based robotics relies on adaptability. This adaptability has allowed behavior-based robots to become commonplace in research and data gathering.
Most behavior-based systems are also reactive, which means they need no programmed model of what, for example, a chair looks like, or what kind of surface the robot is moving on. Instead, all the information is gleaned from the input of the robot's sensors. The robot uses that information to gradually correct its actions according to changes in its immediate environment.
Behavior-based robots (BBR) usually show more biological-appearing actions than their computing-intensive counterparts, which are very deliberate in their actions. A BBR often makes mistakes, repeats actions, and appears confused, but can also show the anthropomorphic quality of tenacity. Comparisons between BBRs and insects are frequent because of these actions. BBRs are sometimes considered examples of weak artificial intelligence, although some have claimed they are models of all intelligence.
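The reactive, sensor-driven style described above can be sketched as a priority-ordered behavior loop. The sensor names, thresholds, and behaviors below are invented for illustration, not taken from any particular robot:

```python
import random

def avoid(sensors):
    """Highest-priority behavior: react to a nearby obstacle."""
    if sensors["obstacle_cm"] < 20:
        return "turn_left"

def recharge(sensors):
    """Seek the charger when the battery runs low."""
    if sensors["battery"] < 0.2:
        return "seek_charger"

def wander(sensors):
    """Default behavior: move around with no goal or world model."""
    return random.choice(["forward", "turn_left", "turn_right"])

BEHAVIORS = [avoid, recharge, wander]   # highest priority first

def step(sensors):
    # The first behavior whose trigger fires gets to act; there is no
    # internal map or plan, only the current sensor readings.
    for behavior in BEHAVIORS:
        action = behavior(sensors)
        if action is not None:
            return action

print(step({"obstacle_cm": 10, "battery": 0.9}))   # turn_left
print(step({"obstacle_cm": 100, "battery": 0.1}))  # seek_charger
```

The fixed priority ordering is one simple arbitration scheme; real behavior-based architectures (e.g. subsumption) use richer suppression and inhibition mechanisms.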
Features
Most behavior-based robots are programmed with a basic set of features to start them off. They are given a behavioral repertoire dictating which behaviors to use and when; behaviors such as obstacle avoidance and battery charging can provide a foundation to help the robots learn and succeed. Rather than buil |
https://en.wikipedia.org/wiki/Pappus%27s%20centroid%20theorem | In mathematics, Pappus's centroid theorem (also known as the Guldinus theorem, Pappus–Guldinus theorem or Pappus's theorem) is either of two related theorems dealing with the surface areas and volumes of surfaces and solids of revolution.
The theorems are attributed to Pappus of Alexandria and Paul Guldin. Pappus's statement of this theorem appears in print for the first time in 1659, but it was known before, by Kepler in 1615 and by Guldin in 1640.
The first theorem
The first theorem states that the surface area A of a surface of revolution generated by rotating a plane curve C about an axis external to C and on the same plane is equal to the product of the arc length s of C and the distance d traveled by the geometric centroid of C: \(A = sd.\)
For example, the surface area of the torus with minor radius r and major radius R is \[A = (2\pi r)(2\pi R) = 4\pi^2 R r.\]
Proof
A curve given by the positive function \(y = f(x)\) is bounded by two points given by:
\(x = a\)
and
\(x = b.\)
If \(ds\) is an infinitesimal line element tangent to the curve, the length of the curve is given by:
\[s = \int_a^b ds = \int_a^b \sqrt{1 + f'(x)^2}\, dx.\]
The \(y\) component of the centroid of this curve is:
\[\bar{y} = \frac{1}{s}\int_a^b f(x)\, ds.\]
The area of the surface generated by rotating the curve around the x-axis is given by:
\[A = 2\pi \int_a^b f(x)\, ds.\]
Using the last two equations to eliminate the integral we have:
\[A = 2\pi \bar{y} s = s(2\pi \bar{y}) = sd.\]
The second theorem
The second theorem states that the volume V of a solid of revolution generated by rotating a plane figure F about an external axis is equal to the product of the area A of F and the distance d traveled by the geometric centroid of F. (The centroid of F is usually different from the centroid of its boundary curve C.) That is: \(V = Ad.\)
For example, the volume of the torus with minor radius r and major radius R is \[V = (\pi r^2)(2\pi R) = 2\pi^2 R r^2.\]
This special case was derived by Johannes Kepler using infinitesimals.
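The second theorem can be sanity-checked numerically for the torus: integrating the solid of revolution directly by the shell method should reproduce the Pappus product A·d. The function name and parameter values below are illustrative:

```python
import math

def torus_volume_shells(R, r, steps=100_000):
    """Torus volume by direct shell-method integration.

    Rotating the disc (x - R)^2 + y^2 <= r^2 about the y-axis gives
    V = integral over [R-r, R+r] of 2*pi*x * 2*sqrt(r^2 - (x-R)^2) dx.
    """
    a, b = R - r, R + r
    h = (b - a) / steps
    total = 0.0
    for i in range(steps):
        x = a + (i + 0.5) * h  # midpoint rule
        height = 2 * math.sqrt(max(r * r - (x - R) ** 2, 0.0))
        total += 2 * math.pi * x * height * h
    return total

R, r = 3.0, 1.0
pappus = (math.pi * r**2) * (2 * math.pi * R)   # A * d = pi r^2 * 2 pi R
print(abs(torus_volume_shells(R, r) - pappus) < 1e-3)  # True
```

The agreement is numerical, of course; the theorem itself says the two quantities are exactly equal.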
Proof 1
The area bounded by the two functions:
\(y = f(x)\) and \(y = g(x)\), with \(f(x) \ge g(x)\),
and bounded by the two lines:
\(x = a\)
and
\(x = b\)
is given by:
\[A = \int_a^b \left(f(x) - g(x)\right) dx.\]
The \(x\) component of the centroid of this area is given by:
\[\bar{x} = \frac{1}{A}\int_a^b x\left(f(x) - g(x)\right) dx.\]
If this area is rotated about the y-axis, the volume generated can be calculated using the shell method. It is given |
https://en.wikipedia.org/wiki/Skew-Hermitian%20matrix |
In linear algebra, a square matrix \(A\) with complex entries is said to be skew-Hermitian or anti-Hermitian if its conjugate transpose is the negative of the original matrix. That is, the matrix \(A\) is skew-Hermitian if it satisfies the relation
\[A^\mathsf{H} = -A,\]
where \(A^\mathsf{H}\) denotes the conjugate transpose of the matrix \(A\). In component form, this means that
\[a_{ij} = -\overline{a_{ji}}\]
for all indices \(i\) and \(j\), where \(a_{ij}\) is the element in the \(i\)-th row and \(j\)-th column of \(A\), and the overline denotes complex conjugation.
Skew-Hermitian matrices can be understood as the complex versions of real skew-symmetric matrices, or as the matrix analogue of the purely imaginary numbers. The set of all skew-Hermitian matrices forms the Lie algebra \(\mathfrak{u}(n)\), which corresponds to the Lie group U(n). The concept can be generalized to include linear transformations of any complex vector space with a sesquilinear norm.
Note that the adjoint of an operator depends on the scalar product considered on the \(n\)-dimensional complex or real space \(K^n\). If \(\langle \cdot, \cdot \rangle\) denotes the scalar product on \(K^n\), then saying \(A\) is skew-adjoint means that for all \(u, v \in K^n\) one has \(\langle A u, v \rangle = -\langle u, A v \rangle\).
Imaginary numbers can be thought of as skew-adjoint (since they are like \(1 \times 1\) matrices), whereas real numbers correspond to self-adjoint operators.
Example
For example, the following matrix is skew-Hermitian
because
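A concrete check can be done in a few lines. The 2×2 matrix below is illustrative (not necessarily the example elided above), chosen to satisfy the defining relation:

```python
def conj_transpose(m):
    """Conjugate transpose of a matrix given as a list of rows."""
    return [[m[j][i].conjugate() for j in range(len(m))] for i in range(len(m[0]))]

def is_skew_hermitian(m):
    """Check the defining relation: conjugate transpose equals the negative."""
    ct = conj_transpose(m)
    n = len(m)
    return all(ct[i][j] == -m[i][j] for i in range(n) for j in range(n))

# An illustrative skew-Hermitian matrix:
A = [[-1j, 2 + 1j],
     [-2 + 1j, 0]]
print(is_skew_hermitian(A))  # True
# Diagonal entries must be purely imaginary (zero counts as purely imaginary):
print(all(d.real == 0 for d in (A[0][0], A[1][1])))  # True
```

Note how the off-diagonal pair (2 + i) and (−2 + i) are negatives of each other's conjugates, as the component form requires.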
Properties
The eigenvalues of a skew-Hermitian matrix are all purely imaginary (and possibly zero). Furthermore, skew-Hermitian matrices are normal. Hence they are diagonalizable and their eigenvectors for distinct eigenvalues must be orthogonal.
All entries on the main diagonal of a skew-Hermitian matrix have to be pure imaginary; i.e., on the imaginary axis (the number zero is also considered purely imaginary).
If \(A\) and \(B\) are skew-Hermitian, then \(aA + bB\) is skew-Hermitian for all real scalars \(a\) and \(b\).
\(A\) is skew-Hermitian if and only if \(iA\) (or equivalently, \(-iA\)) is Hermitian.
\(A\) is skew-Hermitian if and only if its real part \(\operatorname{Re}(A)\) is skew-symmetric and its imaginary part \(\operatorname{Im}(A)\) is symmetric.
If is skew-Hermitian, then is Hermitian if i |
https://en.wikipedia.org/wiki/Beat%20frequency%20oscillator | In a radio receiver, a beat frequency oscillator or BFO is a dedicated oscillator used to create an audio frequency signal from Morse code radiotelegraphy (CW) transmissions to make them audible. The signal from the BFO is mixed with the received signal to create a heterodyne or beat frequency which is heard as a tone in the speaker. BFOs are also used to demodulate single-sideband (SSB) signals, making them intelligible, by essentially restoring the carrier that was suppressed at the transmitter. BFOs are sometimes included in communications receivers designed for short wave listeners; they are almost always found in communication receivers for amateur radio, which often receive CW and SSB signals.
The beat frequency oscillator was invented in 1901 by Canadian engineer Reginald Fessenden. What he called the "heterodyne" receiver was the first application of the heterodyne principle.
Overview
In continuous wave (CW) radio transmission, also called radiotelegraphy, wireless telegraphy (W/T) or on-off keying, and designated by the International Telecommunication Union as emission type A1A, information is transmitted by pulses of unmodulated radio carrier wave which spell out text messages in Morse code. The different length pulses of carrier, called "dots" and "dashes" or "dits" and "dahs", are produced by the operator switching the transmitter on and off rapidly using a switch called a telegraph key. The first type of transmission was generated using a spark; the spark fired at around 1000 times a second while the telegraph key was pressed. The resulting damped waves (ITU Class B) could be received on a basic crystal set employing a diode detector and an earphone, heard as a tone at the spark rate. It was only with the introduction of tube transmitters, which could generate a continuous radio frequency carrier, that a BFO was required. The alternative was to modulate the carrier with an audio tone of around 800 Hz and key the modulated carrier to permit use o |
https://en.wikipedia.org/wiki/Coding%20strand | When referring to DNA transcription, the coding strand (or informational strand) is the DNA strand whose base sequence is identical to the base sequence of the RNA transcript produced (although with thymine replaced by uracil). It is this strand which contains codons, while the non-coding strand contains anticodons. During transcription, RNA Pol II binds to the non-coding template strand, reads the anti-codons, and transcribes their sequence to synthesize an RNA transcript with complementary bases.
By convention, the coding strand is the strand used when displaying a DNA sequence. It is presented in the 5' to 3' direction.
Wherever a gene exists on a DNA molecule, one strand is the coding strand (or sense strand), and the other is the noncoding strand (also called the antisense strand, anticoding strand, template strand or transcribed strand).
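The relationships above can be sketched in a few lines: the template strand is the reverse complement of the coding strand, and the mRNA matches the coding strand with T replaced by U. The example sequence is illustrative:

```python
COMPLEMENT = str.maketrans("ACGT", "TGCA")

def template_strand(coding: str) -> str:
    """Template (noncoding) strand: reverse complement of the coding strand."""
    return coding.translate(COMPLEMENT)[::-1]

def transcribe(coding: str) -> str:
    """The mRNA has the coding strand's sequence with T replaced by U."""
    return coding.replace("T", "U")

coding = "ATGGCC"
print(template_strand(coding))  # GGCCAT
print(transcribe(coding))       # AUGGCC
```

Both strands are conventionally written 5′ to 3′, which is why the complement is also reversed.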
Strands in transcription bubble
During transcription, RNA polymerase unwinds a short section of the DNA double helix near the start of the gene (the transcription start site). This unwound section is known as the transcription bubble. The RNA polymerase, and with it the transcription bubble, travels along the noncoding strand in the opposite, 3′ to 5′, direction, while polymerizing the newly synthesized strand in the 5′ to 3′ (downstream) direction. The DNA double helix is rewound by RNA polymerase at the rear of the transcription bubble. The process resembles two adjacent zippers pulled together: the DNA unzips ahead of the bubble and rezips behind it as transcription proceeds in a particular direction. Various factors can cause double-stranded DNA to break, which can reorder genes or cause cell death.
RNA-DNA hybrid
Where the helix is unwound, the coding strand consists of unpaired bases, while the template strand consists of an RNA:DNA composite, followed by a number of unpaired bases at the rear. This hybrid consists of the most recently added nucleotides of the RNA transcript, complementary base-paired to the template strand. The number of base-pairs in the hybrid is |
https://en.wikipedia.org/wiki/Stack%20machine | In computer science, computer engineering and programming language implementations, a stack machine is a computer processor or a virtual machine in which the primary interaction is moving short-lived temporary values to and from a push down stack. In the case of a hardware processor, a hardware stack is used. The use of a stack significantly reduces the required number of processor registers. Stack machines extend push-down automata with additional load/store operations or multiple stacks and hence are Turing-complete.
Design
Most or all stack machine instructions assume that operands will be from the stack, and results placed in the stack. The stack easily holds more than two inputs or more than one result, so a rich set of operations can be computed. In stack machine code (sometimes called p-code), instructions will frequently have only an opcode commanding an operation, with no additional fields identifying a constant, register or memory cell, known as a zero address format. This greatly simplifies instruction decoding. Branches, load immediates, and load/store instructions require an argument field, but stack machines often arrange that the frequent cases of these still fit together with the opcode into a compact group of bits. The selection of operands from prior results is done implicitly by ordering the instructions. Some stack machine instruction sets are intended for interpretive execution of a virtual machine, rather than driving hardware directly.
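The zero-address format can be illustrated with a minimal interpreter sketch (instruction names and encoding are invented for illustration): only pushes carry an operand field, while arithmetic opcodes take their operands implicitly from the top of the stack.

```python
def run(program, stack=None):
    """Evaluate a zero-address (stack machine) program.

    Instructions are either ("push", n) or a bare opcode string whose
    operands come implicitly from the top of the stack.
    """
    stack = stack if stack is not None else []
    ops = {
        "add": lambda a, b: a + b,
        "sub": lambda a, b: a - b,
        "mul": lambda a, b: a * b,
    }
    for instr in program:
        if isinstance(instr, tuple):   # the only instruction with an operand field
            _, value = instr
            stack.append(value)
        else:                          # zero-address: pop operands, push the result
            b, a = stack.pop(), stack.pop()
            stack.append(ops[instr](a, b))
    return stack[-1]

# (2 + 3) * 4 in postfix order: 2 3 add 4 mul
print(run([("push", 2), ("push", 3), "add", ("push", 4), "mul"]))  # 20
```

Note that operand selection is done purely by instruction ordering, exactly as described above; no instruction names a register or memory cell.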
Integer constant operands are pushed by or instructions. Memory is often accessed by separate or instructions containing a memory address or calculating the address from values in the stack. All practical stack machines have variants of the load–store opcodes for accessing local variables and formal parameters without explicit address calculations. This can be by offsets from the current top-of-stack address, or by offsets from a stable frame-base register.
The instruction set carries out most ALU action |
https://en.wikipedia.org/wiki/Base%20station | Base station (or base radio station) is – according to the International Telecommunication Union's (ITU) Radio Regulations (RR) – a "land station in the land mobile service."
The term is used in the context of mobile telephony, wireless computer networking and other wireless communications and in land surveying. In surveying, it is a GPS receiver at a known position, while in wireless communications it is a transceiver connecting a number of other devices to one another and/or to a wider area.
In mobile telephony, it provides the connection between mobile phones and the wider telephone network. In a computer network, it is a transceiver acting as a switch for computers in the network, possibly connecting them to a local area network and/or the Internet. In traditional wireless communications, it can refer to the hub of a dispatch fleet such as a taxi or delivery fleet, the base of a TETRA network as used by government and emergency services, or a CB shack.
Land surveying
In the context of external land surveying, a base station is a GPS receiver at an accurately-known fixed location which is used to derive correction information for nearby portable GPS receivers. This correction data allows propagation and other effects to be corrected out of the position data obtained by the mobile stations, which gives greatly increased location precision and accuracy over the results obtained by uncorrected GPS receivers.
Computer networking
In the area of wireless computer networking, a base station is a radio receiver/transmitter that serves as the hub of the local wireless network, and may also be the gateway between a wired network and the wireless network. It typically consists of a low-power transmitter and wireless router.
Wireless communications
In radio communications, a base station is a wireless communications station installed at a fixed location and used to communicate as part of one of the following:
a push-to-talk two-way radio system, or;
a wi |
https://en.wikipedia.org/wiki/Zariski%20tangent%20space | In algebraic geometry, the Zariski tangent space is a construction that defines a tangent space at a point P on an algebraic variety V (and more generally). It does not use differential calculus, being based directly on abstract algebra, and in the most concrete cases just the theory of a system of linear equations.
Motivation
For example, suppose given a plane curve C defined by a polynomial equation
F(X,Y) = 0
and take P to be the origin (0,0). Erasing terms of higher order than 1 would produce a 'linearised' equation reading
L(X,Y) = 0
in which all terms XaYb have been discarded if a + b > 1.
We have two cases: L may be 0, or it may be the equation of a line. In the first case the (Zariski) tangent space to C at (0,0) is the whole plane, considered as a two-dimensional affine space. In the second case, the tangent space is that line, considered as affine space. (The question of the origin comes up when we take P as a general point on C; it is better to say 'affine space' and then note that P is a natural origin, rather than insist directly that it is a vector space.)
It is easy to see that over the real field we can obtain L in terms of the first partial derivatives of F. When those both are 0 at P, we have a singular point (double point, cusp or something more complicated). The general definition is that singular points of C are the cases when the tangent space has dimension 2.
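A small worked instance of the two cases above, using standard textbook examples consistent with the discussion:

```latex
% Smooth point: F(X,Y) = Y - X^2 at P = (0,0).
% Discarding terms of order > 1 gives L(X,Y) = Y, so the tangent
% space is the line Y = 0 (dimension 1): P is a smooth point.
F(X,Y) = Y - X^2 \;\Longrightarrow\; L(X,Y) = Y.

% Singular point: F(X,Y) = Y^2 - X^3 at P = (0,0).
% Every monomial has degree > 1, so L = 0 and the tangent space is
% the whole plane (dimension 2): P is a singular point (a cusp).
F(X,Y) = Y^2 - X^3 \;\Longrightarrow\; L(X,Y) = 0.
```

In both cases the first partial derivatives of F at the origin recover L, matching the criterion for singular points stated above.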
Definition
The cotangent space of a local ring \(R\), with maximal ideal \(\mathfrak{m}\), is defined to be
\[\mathfrak{m}/\mathfrak{m}^2,\]
where \(\mathfrak{m}^2\) is given by the product of ideals. It is a vector space over the residue field \(k := R/\mathfrak{m}\). Its dual (as a \(k\)-vector space) is called the tangent space of \(R\).
This definition is a generalization of the above example to higher dimensions: suppose given an affine algebraic variety V and a point v of V. Morally, modding out by \(\mathfrak{m}^2\) corresponds to dropping the non-linear terms from the equations defining V inside some affine space, therefore giving a system of linear equations that define the tangent sp |
https://en.wikipedia.org/wiki/Reading%20frame | In molecular biology, a reading frame is a way of dividing the sequence of nucleotides in a nucleic acid (DNA or RNA) molecule into a set of consecutive, non-overlapping triplets. Where these triplets equate to amino acids or stop signals during translation, they are called codons.
A single strand of a nucleic acid molecule has a phosphoryl end, called the 5′-end, and a hydroxyl or 3′-end. These define the 5′→3′ direction. There are three reading frames that can be read in this 5′→3′ direction, each beginning from a different nucleotide in a triplet. In a double stranded nucleic acid, an additional three reading frames may be read from the other, complementary strand in the 5′→3′ direction along this strand. As the two strands of a double-stranded nucleic acid molecule are antiparallel, the 5′→3′ direction on the second strand corresponds to the 3′→5′ direction along the first strand.
In general, at most one reading frame in a given section of a nucleic acid is biologically relevant (the open reading frame). Some viral transcripts can be translated using multiple, overlapping reading frames. There is one known example of overlapping reading frames in mammalian mitochondrial DNA: the coding portions of the genes for two subunits of ATPase overlap.
Genetic code
DNA encodes protein sequence by a series of three-nucleotide codons. Any given sequence of DNA can therefore be read in six different ways: three reading frames in one direction (starting at different nucleotides) and three in the opposite direction. During transcription, the RNA polymerase reads the template DNA strand in the 3′→5′ direction, but the mRNA is formed in the 5′→3′ direction. The mRNA is single-stranded and therefore only contains three possible reading frames, of which only one is translated. The codons of the mRNA reading frame are translated in the 5′→3′ direction into amino acids by a ribosome to produce a polypeptide chain.
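The six reading frames can be enumerated mechanically: three offsets on the given strand and three on its reverse complement. A short sketch (the sequence is illustrative):

```python
def reading_frames(seq: str):
    """Return the six reading frames of a DNA sequence as codon lists."""
    comp = str.maketrans("ACGT", "TGCA")
    rev = seq.translate(comp)[::-1]           # complementary strand, read 5'->3'
    frames = []
    for strand in (seq, rev):
        for offset in range(3):               # three frames per strand
            codons = [strand[i:i + 3] for i in range(offset, len(strand) - 2, 3)]
            frames.append(codons)
    return frames

for frame in reading_frames("ATGGCCTAA"):
    print(frame)
```

The first frame of this example, ATG GCC TAA, happens to run from a start codon to a stop codon, which is the pattern an open reading frame scanner looks for.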
Open reading frame
An open reading frame (ORF) is a reading frame tha |
https://en.wikipedia.org/wiki/Ken%20Kutaragi | is a Japanese engineering technologist and businessman. He is the former chairman and CEO of Sony Interactive Entertainment (SIE), the video game division of Sony Corporation, and current president and CEO of Cyber AI Entertainment. He is known as "The Father of the PlayStation", as he oversaw the development of the original console and its successors and spinoffs, including the PlayStation 2, PlayStation Portable and the PlayStation 3. He departed Sony in 2007, a year after the PlayStation 3 was released.
He had also designed the sound processor for the Super Nintendo Entertainment System. With Sony, he designed the VLSI chip which works in conjunction with the PS1's RISC CPU to handle the graphics rendering.
Early years
Kutaragi was born in Tokyo, Japan. His parents, although not wealthy by Japanese standards, still managed to own their own business, a small printing plant in the city. As Kutaragi grew into childhood, they actively encouraged the young boy to explore his mechanical abilities in the plant, and he worked after school there. Aside from his duties in his parents' factory, Kutaragi was a studious, high-level student; he was often described as a "straight-A student."
Kutaragi always had the desire to "tinker", often taking apart toys as a child rather than playing with them. This curiosity carried over from childhood, leading him as a teenager to learn the intricacies of electronics. Eventually his love of electronics led him to enroll at the University of Electro-Communications, where he earned a degree in electronics in the 1970s.
Immediately after graduation, Kutaragi began working for Sony in their digital research labs in the mid-1970s. Although at the time it was considered a radical decision, Kutaragi felt that Sony was on the "fast track". He quickly gained a reputation as an excellent problem solver and a forward-thinking engineer, earning that reputation by working on many successful projects, including early liquid crystal displays (L |
https://en.wikipedia.org/wiki/Infineon%20Technologies | Infineon Technologies AG is Germany's largest semiconductor manufacturer.
The company was spun-off from Siemens AG in 1999.
Infineon has about 50,280 employees and is one of the ten largest semiconductor manufacturers worldwide. In 2021 the company achieved sales of €11.06 billion.
Markets
Infineon markets semiconductors and systems for automotive, industrial, and multimarket sectors, as well as chip card and security products. Infineon has subsidiaries in the US in Milpitas, California, and in the Asia-Pacific region, in Singapore and Tokyo, Japan.
Infineon has a number of facilities in Europe, including one in Dresden. Infineon's high-power segment is based in Warstein, Germany; Villach, Graz and Linz in Austria; Cegléd in Hungary; and Italy. It also runs R&D centres in France, Singapore, Romania, Taiwan, the UK, Ukraine and India, as well as fabrication units in Malaysia, Singapore, Indonesia, and China. There is also a Shared Service Center in Porto, Portugal.
Infineon is listed in the DAX index of the Frankfurt Stock Exchange.
In 2010, board member Klaus Wucherer was elected to step into the chairman's office upon the retirement of the then-current chairman Max Dietrich Kley following a proxy contest in advance of the shareholders' meeting.
In 2023, it was Germany's largest chip manufacturer.
As of 2011, Infineon comprised four business areas after several restructurings:
Automotive (ATV)
Infineon provides semiconductor products for use in powertrains (engine and transmission control), comfort electronics (e.g., steering, shock absorbers, air conditioning), as well as in safety systems (ABS, airbags, ESP). The product portfolio includes microcontrollers, power semiconductors and sensors. In the fiscal year 2018 (ending September), sales amounted to €3,284 million for the ATV segment.
Green Industrial Power (GIP)
The industrial division of the company (named IPC until 2023) includes power semiconductors and modules which are used for generation, transmission and consum |
https://en.wikipedia.org/wiki/The%20UNIX-HATERS%20Handbook | The UNIX-HATERS Handbook is a semi-humorous edited compilation of messages to the UNIX-HATERS mailing list. The book was edited by Simson Garfinkel, Daniel Weise and Steven Strassmann and published in 1994.
Contents
The book concerns the frustrations of users of the Unix operating system. Many users had come from systems that they felt were far more sophisticated in features and usability, and they were frustrated by the perceived "worse is better" design philosophy that they felt Unix and much of its software encapsulated.
The book is based on messages sent to the UNIX-HATERS mailing list between 1988 and 1993, and contains a foreword by the human factors guru Don Norman and an "anti-foreword" by Dennis Ritchie, one of the creators of the operating system.
Many of the book's complaints about the Unix operating system are based on design decisions and anomalies in the command-line interface.
The front-matter page's dedication says: "To Ken and Dennis, without whom this book would not have been possible.", referring to Ken Thompson and Dennis Ritchie, the creators of Unix.
Release
This book was printed as a trade paperback. Its front cover was designed to be similar to The Scream. An air sickness bag, printed with the phrase "UNIX barf bag", was inserted into the inside back cover of every copy by the publisher.
The book was made available to download for free in electronic format in 2003.
Reception
Later reviewers of the book have noted that some of the issues it raised were later resolved; for example, the ext2 filesystem addressed the lack of block storage discussed in the book.
References
External links
Unix history
1994 non-fiction books
Operating system criticisms
Unix books
Comedy books |
https://en.wikipedia.org/wiki/Group%20scheme | In mathematics, a group scheme is a type of object from algebraic geometry equipped with a composition law. Group schemes arise naturally as symmetries of schemes, and they generalize algebraic groups, in the sense that all algebraic groups have group scheme structure, but group schemes are not necessarily connected, smooth, or defined over a field. This extra generality allows one to study richer infinitesimal structures, and this can help one to understand and answer questions of arithmetic significance. The category of group schemes is somewhat better behaved than that of group varieties, since all homomorphisms have kernels, and there is a well-behaved deformation theory. Group schemes that are not algebraic groups play a significant role in arithmetic geometry and algebraic topology, since they come up in contexts of Galois representations and moduli problems. The initial development of the theory of group schemes was due to Alexander Grothendieck, Michel Raynaud and Michel Demazure in the early 1960s.
Definition
A group scheme is a group object in a category of schemes that has fiber products and some final object S. That is, it is an S-scheme G equipped with one of the equivalent sets of data
a triple of morphisms μ: G ×S G → G, e: S → G, and ι: G → G, satisfying the usual compatibilities of groups (namely associativity of μ, identity, and inverse axioms)
a functor from schemes over S to the category of groups, such that composition with the forgetful functor to sets is equivalent to the presheaf corresponding to G under the Yoneda embedding. (See also: group functor.)
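For concreteness, the "usual compatibilities" in the first description can be written out as equations of morphisms; a sketch, writing pr_i for the projections out of G ×S G, Δ for the diagonal G → G ×S G, and π: G → S for the structure morphism:

```latex
\begin{align*}
  \mu \circ (\mu \times \mathrm{id}_G) &= \mu \circ (\mathrm{id}_G \times \mu)
    && \text{(associativity)} \\
  \mu \circ (e \times \mathrm{id}_G) = \mathrm{pr}_2,
  \quad \mu \circ (\mathrm{id}_G \times e) &= \mathrm{pr}_1
    && \text{(identity)} \\
  \mu \circ (\mathrm{id}_G \times \iota) \circ \Delta &= e \circ \pi
    && \text{(inverse)}
\end{align*}
```

These are exactly the group axioms with elements replaced by morphisms, which is what makes the functor-of-points description in the second bullet equivalent.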
A homomorphism of group schemes is a map of schemes that respects multiplication. This can be precisely phrased either by saying that a map f satisfies the equation fμ = μ(f × f), or by saying that f is a natural transformation of functors from schemes to groups (rather than just sets).
A left action of a group scheme G on a scheme X is a morphism G ×S X→ X that induces a left act |
https://en.wikipedia.org/wiki/Geomatics | Geomatics is defined in the ISO/TC 211 series of standards as the "discipline concerned with the collection, distribution, storage, analysis, processing, presentation of geographic data or geographic information". Under another definition, it consists of products, services and tools involved in the collection, integration and management of geographic (geospatial) data. It is also known as geomatic(s) engineering (geodesy and geoinformatics engineering or geospatial engineering). Surveying engineering was the widely used name for geomatic(s) engineering in the past.
History and etymology
The term was proposed in French ("géomatique") at the end of the 1960s by the scientist Bernard Dubuisson to reflect the then-recent changes in the jobs of surveyor and photogrammetrist. The term was first employed in a French Ministry of Public Works memorandum dated 1 June 1971 instituting a "standing committee of geomatics" in the government.
The term was popularised in English by the French-Canadian surveyor Michel Paradis in his 1981 article "The little Geodesist that could" and in a keynote address at the centennial congress of the Canadian Institute of Surveying (now known as the Canadian Institute of Geomatics) in April 1982. He claimed that by the end of the 20th century the needs for geographical information would reach a scope without precedent in history, and that in order to address these needs it was necessary to integrate into a new discipline both the traditional disciplines of land surveying and the new tools and techniques of data capture, manipulation, storage and diffusion.
Geomatics includes the tools and techniques used in land surveying, remote sensing, cartography, geographic information systems (GIS), global navigation satellite systems (GPS, GLONASS, Galileo, BeiDou), photogrammetry, geophysics, geography, and related forms of earth mapping. The term was originally used in Canada but has since been adopted by the International Organization for Standardizati |
https://en.wikipedia.org/wiki/HOME%20STAR | HOME STAR, (also spelled HOMESTAR), informally known as Cash for Caulkers, is a United States government program proposed in November 2009 to encourage economic growth by offering incentives to homeowners and retailers for improving the energy efficiency of existing homes.
Background
In late 2009 there was a broad perception that the United States economy was beginning to recover from the late-2000s recession, and that government spending authorized by the American Recovery and Reinvestment Act of 2009 had contributed to the recovery; there was also some desire for the government to do more to encourage job growth and a faster recovery.
In mid-November former president Bill Clinton, and John Doerr of Barack Obama's President's Economic Recovery Advisory Board, proposed different versions of an economic stimulus program by which the government would offer tax incentives to encourage people to improve the energy efficiency of their homes. Doerr, in public speeches, called the proposal "cash for caulkers". Separately U.S. Representative Peter Welch proposed a system of energy rebates to Rahm Emanuel, Obama’s Chief of Staff. Obama, in turn, proposed the idea as part of a larger new stimulus program, at a speech at the Brookings Institution on December 8, 2009.
The stated goals of the proposed program are to reduce pollution, particularly greenhouse gases, by reducing household energy use, to save consumers money in the long term through lower power bills, and to stimulate American businesses through the money spent on appliances, materials, and installation. Improving the energy efficiency of "fixed infrastructure", which accounts for approximately 40% of all energy use in the United States, is considered the "low hanging fruit" of energy conservation - a step that achieves results relatively inexpensively and does not require any new technologies or changes to production or consumption methods.
The name "Homestar" is a reference to the popular energ |
https://en.wikipedia.org/wiki/Copper%28I%29%20oxide | Copper(I) oxide or cuprous oxide is the inorganic compound with the formula Cu2O. It is one of the principal oxides of copper, the other being or copper(II) oxide or cupric oxide (CuO). Cuprous oxide is a red-coloured solid and is a component of some antifouling paints. The compound can appear either yellow or red, depending on the size of the particles. Copper(I) oxide is found as the reddish mineral cuprite.
Preparation
Copper(I) oxide may be produced by several methods.<ref>H. Wayne Richardson, "Copper Compounds", in Ullmann's Encyclopedia of Industrial Chemistry, 2002, Wiley-VCH, Weinheim.</ref> Most straightforwardly, it arises via the oxidation of copper metal:
4 Cu + O2 → 2 Cu2O
Additives such as water and acids affect the rate of this process as well as the further oxidation to copper(II) oxides. It is also produced commercially by reduction of copper(II) solutions with sulfur dioxide.
Reactions
Aqueous cuprous chloride solutions react with base to give the same material. In all cases, the color is highly sensitive to the procedural details.
Formation of copper(I) oxide is the basis of the Fehling's test and Benedict's test for reducing sugars. These sugars reduce an alkaline solution of a copper(II) salt, giving a bright red precipitate of Cu2O.
It forms on silver-plated copper parts exposed to moisture when the silver layer is porous or damaged. This kind of corrosion is known as red plague.
Little evidence exists for copper(I) hydroxide CuOH, which is expected to rapidly undergo dehydration. A similar situation applies to the hydroxides of gold(I) and silver(I).
Properties
The solid is diamagnetic. In terms of their coordination spheres, the copper centres are 2-coordinated and the oxide centres are tetrahedral (4-coordinated). The structure thus resembles in some sense the main polymorphs of SiO2, but cuprous oxide's lattices interpenetrate.
Copper(I) oxide dissolves in concentrated ammonia solution to form the colourless complex [Cu(NH3)2]+, which is easily o |
https://en.wikipedia.org/wiki/Profibus | Profibus (usually styled as PROFIBUS, as a portmanteau for Process Field Bus) is a standard for fieldbus communication in automation technology and was first promoted in 1989 by BMBF (German department of education and research) and then used by Siemens. It should not be confused with the Profinet standard for Industrial Ethernet.
Profibus is openly published as type 3 of IEC 61158/61784-1.
Origin
The history of PROFIBUS goes back to a publicly promoted plan for an association which started in Germany in 1986 and for which 21 companies and institutes devised a master project plan called "fieldbus". The goal was to implement and spread the use of a bit-serial field bus based on the basic requirements of the field device interfaces. For this purpose, member companies agreed to support a common technical concept for production (i.e. discrete or factory automation) and process automation. First, the complex communication protocol Profibus FMS (Field bus Message Specification), which was tailored for demanding communication tasks, was specified. Subsequently, in 1993, the specification for the simpler and thus considerably faster protocol PROFIBUS DP (Decentralised Peripherals) was completed. Profibus FMS is used for (non-deterministic) communication of data between Profibus Masters. Profibus DP is a protocol made for (deterministic) communication between Profibus masters and their remote I/O slaves.
There are two variations of PROFIBUS in use today: the most commonly used PROFIBUS DP, and the lesser-used, application-specific PROFIBUS PA:
PROFIBUS DP (Decentralised Peripherals) is used to operate sensors and actuators via a centralised controller in production (factory) automation applications. Its many standardized diagnostic options are a particular focus here.
PROFIBUS PA (Process Automation) is used to monitor measuring equipment via a process control system in process automation applications. This variant is designed for use in explosion/hazardous areas |
https://en.wikipedia.org/wiki/Quadrat | A quadrat is a frame, traditionally square, used in ecology, geography, and biology to isolate a standard unit of area for study of the distribution of an item over a large area. Modern quadrats can for example be rectangular, circular, or irregular. The quadrat is suitable for sampling plants, slow-moving animals, and some aquatic organisms.
A photo-quadrat is a photographic record of the area framed by a quadrat. It may use a physical frame to indicate the area, or may rely on fixed camera distance and lens field of view to automatically cover the specified area of substrate. Parallel laser pointers mounted on the camera can also be used as scale indicators. The photo is taken perpendicular to the surface, or as close as possible to perpendicular for uneven surfaces.
History
The systematic use of quadrats was developed by the pioneering plant ecologists R. Pound and F. E. Clements between 1898 and 1900. The method was then swiftly applied for many purposes in ecology, such as the study of plant succession. Botanists and ecologists such as Arthur Tansley soon took up and modified the method.
The ecologist J. E. Weaver applied the use of quadrats to the teaching of ecology in 1918.
Method
A quadrat is used to methodically count organisms within a smaller area in order to extrapolate to a larger habitat. Quadrats are designed to sample plants or slowly moving animals (such as snails). A suitable size of a quadrat depends on the size of the organisms being sampled. For example, to count plants growing on a school field, one could use a quadrat with sides 0.5 or 1 metre in length. Choice of quadrat size depends to a large extent on the type of survey being conducted. For instance, it would be difficult to gain any meaningful results using a 0.5 m² quadrat in a study of a woodland canopy.
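The extrapolation step from quadrat counts to a whole habitat can be sketched numerically; a minimal example in Python, assuming counts from several randomly placed quadrats of known area (all names and numbers are illustrative):

```python
def estimate_population(counts, quadrat_area_m2, habitat_area_m2):
    """Extrapolate a mean per-quadrat count to a whole habitat.

    counts           -- organisms counted in each randomly placed quadrat
    quadrat_area_m2  -- area of one quadrat in square metres
    habitat_area_m2  -- total area of the habitat in square metres
    """
    mean_count = sum(counts) / len(counts)
    density = mean_count / quadrat_area_m2   # organisms per square metre
    return density * habitat_area_m2

# Five 0.25 m^2 quadrats (0.5 m sides) on a 400 m^2 field:
print(estimate_population([3, 5, 4, 2, 6], 0.25, 400.0))  # -> 6400.0
```

The estimate is only as good as the sampling design, which is why the random placement discussed below matters.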
It is important that sampling in an area is carried out at random, to avoid bias. For example, if one sampled from a school field, but for convenience only placed quadrats next |
https://en.wikipedia.org/wiki/Computational%20number%20theory | In mathematics and computer science, computational number theory, also known as algorithmic number theory, is the study of
computational methods for investigating and solving problems in number theory and arithmetic geometry, including algorithms for primality testing and integer factorization, finding solutions to diophantine equations, and explicit methods in arithmetic geometry.
Computational number theory has applications to cryptography, including RSA, elliptic curve cryptography and post-quantum cryptography, and is used to investigate conjectures and open problems in number theory, including the Riemann hypothesis, the Birch and Swinnerton-Dyer conjecture, the ABC conjecture, the modularity conjecture, the Sato-Tate conjecture, and explicit aspects of the Langlands program.
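For illustration, the most naive of these algorithms, trial division, already yields a primality test and an integer factorization routine; its running time is exponential in the bit length of n, which is precisely what motivates the faster methods studied in the field:

```python
def is_prime(n):
    """Primality by trial division up to sqrt(n)."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def factorize(n):
    """Prime factorization by trial division, smallest factors first."""
    factors = []
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)  # remaining cofactor is prime
    return factors

print(is_prime(1009))   # -> True
print(factorize(2021))  # -> [43, 47]
```

Practical systems replace these loops with Miller-Rabin or AKS for primality and with sieve- or curve-based methods for factoring, which the software packages below implement.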
Software packages
Magma computer algebra system
SageMath
Number Theory Library
PARI/GP
Fast Library for Number Theory
Further reading
References
External links
Number theory
https://en.wikipedia.org/wiki/Quantum%20dot | Quantum dots (QDs), also called semiconductor nanocrystals, are semiconductor particles a few nanometres in size, having optical and electronic properties that differ from those of larger particles as a result of quantum mechanical effects. They are a central topic in nanotechnology and materials science. When the quantum dots are illuminated by UV light, an electron in the quantum dot can be excited to a state of higher energy. In the case of a semiconducting quantum dot, this process corresponds to the transition of an electron from the valence band to the conductance band. The excited electron can drop back into the valence band releasing its energy as light. This light emission (photoluminescence) is illustrated in the figure on the right. The color of that light depends on the energy difference between the conductance band and the valence band, or the transition between discrete energy states when the band structure is no longer well-defined in QDs.
Nanoscale semiconductor materials tightly confine either electrons or electron holes. The confinement is similar to a three-dimensional particle in a box model. The quantum dot absorption and emission features correspond to transitions between discrete quantum mechanically allowed energy levels in the box that are reminiscent of atomic spectra. For these reasons, quantum dots are sometimes referred to as artificial atoms, emphasizing their bound and discrete electronic states, like naturally occurring atoms or molecules. It was shown that the electronic wave functions in quantum dots resemble the ones in real atoms. By coupling two or more such quantum dots, an artificial molecule can be made, exhibiting hybridization even at room temperature. Precise assembly of quantum dots can form superlattices that act as artificial solid-state materials that exhibit unique optical and electronic properties.
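The particle-in-a-box picture can be made quantitative with the textbook infinite-well energies E_n = h²n²/(8mL²); a sketch of how confinement energy grows as the box shrinks (an idealized 1D well with the free-electron mass, not a model of any real quantum dot material):

```python
H = 6.62607015e-34    # Planck constant, J*s
M_E = 9.1093837e-31   # free-electron mass, kg
EV = 1.602176634e-19  # joules per electronvolt

def box_energy_ev(n, length_m):
    """Energy of level n in a 1D infinite square well of width length_m, in eV."""
    return H**2 * n**2 / (8 * M_E * length_m**2) / EV

# Ground-state confinement energy for a 10 nm vs a 5 nm box:
for L in (10e-9, 5e-9):
    print(f"L = {L * 1e9:.0f} nm: E1 = {box_energy_ev(1, L):.4f} eV")
```

Halving the box width quadruples the level spacing (the 1/L² dependence), which is the qualitative reason smaller dots emit bluer light.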
Quantum dots have properties intermediate between bulk semiconductors and discrete atoms or molecules. Their optoelectr |
https://en.wikipedia.org/wiki/Cultured%20meat | Cultured meat, also known as cultivated meat among other names, is a form of cellular agriculture where meat is produced by culturing animal cells in vitro. Cultured meat is produced using tissue engineering techniques pioneered in regenerative medicine. Jason Matheny popularized the concept in the early 2000s after he co-authored a paper on cultured meat production and created New Harvest, the world's first non-profit organization dedicated to in-vitro meat research. Cultured meat has the potential to address the environmental impact of meat production, animal welfare, food security and human health, in addition to its potential mitigation of climate change.
In 2013, Mark Post created a hamburger patty made from tissue grown outside of an animal. Since then, other cultured meat prototypes have gained media attention: SuperMeat opened a farm-to-fork restaurant, called "The Chicken", in Tel Aviv to test consumer reaction to its "Chicken" burger, while the "world's first commercial sale of cell-cultured meat" occurred in December 2020 at Singapore restaurant 1880, where cultured meat manufactured by United States firm Eat Just was sold.
While most efforts focus on common meats such as pork, beef, and chicken which constitute the bulk of consumption in developed countries, companies such as Orbillion Bio focused on high-end or unusual meats including elk, lamb, bison, and Wagyu beef. Avant Meats brought cultured grouper to market in 2021, while other companies have pursued different species of fish and other seafood.
The production process is constantly evolving, driven by companies and research institutions. The applications of cultured meat have led to ethical, health, environmental, cultural, and economic discussions. Data published by the non-governmental organization Good Food Institute showed that in 2021 cultivated meat companies attracted $140 million in Europe. Cultured meat is mass-produced in Israel. The first restaurant to serve cultured meat opened in Singap |
https://en.wikipedia.org/wiki/2D%20geometric%20model | A 2D geometric model is a geometric model of an object as a two-dimensional figure, usually on the Euclidean or Cartesian plane.
Even though all material objects are three-dimensional, a 2D geometric model is often adequate for certain flat objects, such as paper cut-outs and machine parts made of sheet metal. Other examples include circles used as a model of thunderstorms, which can be considered flat when viewed from above.
2D geometric models are also convenient for describing certain types of artificial images, such as technical diagrams, logos, the glyphs of a font, etc. They are an essential tool of 2D computer graphics and are often used as components of 3D geometric models, e.g. to describe the decals to be applied to a car model. Modern architecture practices "digital rendering", a technique used to present a 2D geometric model so that it is perceived as a 3D geometric model, designed through descriptive geometry and computerized equipment.
2D geometric modeling techniques
simple geometric shapes
boundary representation
Boolean operations on polygons
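As a small example of a boundary representation, a polygon stored as a vertex list already supports basic queries; a sketch computing its area with the shoelace formula:

```python
def polygon_area(vertices):
    """Signed area of a simple polygon given by its boundary vertices
    (counter-clockwise order yields a positive result)."""
    area = 0.0
    n = len(vertices)
    for i in range(n):
        x1, y1 = vertices[i]
        x2, y2 = vertices[(i + 1) % n]  # wrap around to close the boundary
        area += x1 * y2 - x2 * y1
    return area / 2.0

# Unit square, counter-clockwise:
print(polygon_area([(0, 0), (1, 0), (1, 1), (0, 1)]))  # -> 1.0
```

Boolean operations on such polygons (union, intersection, difference) are built on the same boundary representation but require clipping algorithms rather than a single formula.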
See also
2D geometric primitive
Computational geometry
Digital image
References
Computer-aided design |
https://en.wikipedia.org/wiki/Mac%20OS%208 | Mac OS 8 is an operating system that was released by Apple Computer on July 26, 1997. It includes the largest overhaul of the classic Mac OS experience since the release of System 7, approximately six years before. It places a greater emphasis on color than prior versions. Released over a series of updates, Mac OS 8 represents an incremental integration of many of the technologies which had been developed from 1988 to 1996 for Apple's overly ambitious OS named Copland. Mac OS 8 helped modernize the Mac OS while Apple developed its next-generation operating system, Mac OS X (renamed in 2012 to OS X and then in 2016 to macOS).
Mac OS 8 is one of Apple's most commercially successful software releases, selling over 1.2 million copies in the first two weeks. As it came at a difficult time in Apple's history, many pirate groups refused to traffic in the new OS, encouraging people to buy it instead.
Mac OS 8.0 introduces the most visible changes in the lineup, including the Platinum interface and a native PowerPC multithreaded Finder. Mac OS 8.1 introduces a new, more efficient file system named HFS Plus. Mac OS 8.5 is the first version of the Mac OS to require a PowerPC processor. It features PowerPC native versions of QuickDraw, AppleScript, and the Sherlock search utility. Its successor, Mac OS 9, was released on October 23, 1999.
Copland
Starting in 1988, Apple's next-generation operating system, which it originally envisioned to be "System 8" was codenamed Copland. It was announced in March 1994 alongside the introduction of the first PowerPC Macs. Apple intended Copland as a fully modern system, including native PowerPC code, intelligent agents, a microkernel, a customizable interface named Appearance Manager, a hardware abstraction layer, and a relational database integrated into the Finder. Copland was to be followed by Gershwin, which promised memory protection spaces and full preemptive multitasking. The system was intended to be a full rewrite of the Mac OS, |
https://en.wikipedia.org/wiki/Sedimentation%20equilibrium | Sedimentation equilibrium in a suspension of different particles, such as molecules, exists when the rate of transport of each material in any one direction due to sedimentation equals the rate of transport in the opposite direction due to diffusion. Sedimentation is due to an external force, such as gravity or centrifugal force in a centrifuge.
It was discovered for colloids by Jean Baptiste Perrin, for which he received the Nobel Prize in Physics in 1926.
Colloid
In a colloid, the colloidal particles are said to be in sedimentation equilibrium if the rate of sedimentation is equal to the rate of movement from Brownian motion. For dilute colloids, this is described using the Laplace-Perrin distribution law:

φ(z) = φ(z₀) exp(−(z − z₀)/l_g)

where
φ(z) is the colloidal particle volume fraction as a function of vertical distance z above reference point z₀,
φ(z₀) is the colloidal particle volume fraction at reference point z₀,
m_b is the buoyant mass of the colloidal particles,
g is the standard acceleration due to gravity,
k_B is the Boltzmann constant,
T is the absolute temperature,
and l_g = k_B T/(m_b g) is the sedimentation length.
The buoyant mass is calculated using

m_b = Δρ V_p

where Δρ is the difference in mass density between the colloidal particles and the suspension medium, and V_p = (4/3)πr³ is the colloidal particle volume found using the volume of a sphere (r is the radius of the colloidal particle).
Sedimentation length
The Laplace-Perrin distribution law can be rearranged to give the sedimentation length l_g. The sedimentation length describes the probability of finding a colloidal particle at a height z above the point of reference z₀. At a length l_g above the reference point, the concentration of colloidal particles decreases by a factor of e.
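To get a feel for the magnitudes, the sedimentation length can be computed directly from l_g = k_B T/(m_b g); a sketch with illustrative values (not taken from the article):

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K
G = 9.80665         # standard acceleration due to gravity, m/s^2

def sedimentation_length(radius_m, delta_rho, temperature_k):
    """l_g = k_B*T / (m_b*g), with buoyant mass m_b = delta_rho * (4/3)*pi*r^3."""
    volume = 4.0 / 3.0 * math.pi * radius_m**3
    buoyant_mass = delta_rho * volume
    return K_B * temperature_k / (buoyant_mass * G)

# 100 nm particles, density mismatch 50 kg/m^3, room temperature:
l_g = sedimentation_length(100e-9, 50.0, 293.0)
print(f"{l_g * 1e3:.2f} mm")  # about 2 mm, far larger than the particle diameter
```

Here l_g is millimetres while the particle diameter is 200 nm, so by the criterion below this colloid stays suspended.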
If the sedimentation length is much greater than the diameter d of the colloidal particles (l_g ≫ d), the particles can diffuse a distance greater than this diameter, and the substance remains a suspension. However, if the sedimentation length is less than the diameter (l_g < d), the particles can only diffuse by a much |
https://en.wikipedia.org/wiki/TasWireless | TasWireless is a group of wireless networking enthusiasts in Tasmania, Australia. Between them they have set up wireless community networks in both Hobart and Launceston. The group has gone through many names, tas.air, www.tas.air.net.au, TPAN (Tasmanian Public Airwave Network) and now TasWireless.
With users from several different backgrounds, including computer networking, amateur radio, amateur television, programming, Linux/BSD server administration, antenna and satellite dish installations, and lots more, they are willing to assist with any community networks in any part of the state.
Introduction
The TasWireless site was first started in 1999. It began as a splinter group from TasLUG, the Tasmanian Linux Users Group. Only a small number of people were interested in wireless networking at this time, fewer than five each in Hobart and Launceston. A node database () for Tasmanian regions was started and the mailing list was put online, but due to the lack of practical experience and knowledge, very little happened. The cost of Wi-Fi cards and wireless access points was also a problem.
In early 2002, cheap SkyNet Global 802.11b PC Cards flooded the market. These cards were liquidated stock and cost around A$50-60 each, while the average retail price was still around A$200. A lot of these cards were shipped to the state and distributed (both by TasWireless admins and otherwise).
Wireless networks in Hobart
The predominant network in Hobart is called StarNet. This was started as a private network by a small group of amateur radio enthusiasts, around April 2002. It included around six or seven sites.
In April 2003, an operator of the TasWireless website stumbled upon one of their nodes, with SSID StarNet, and posted his find to the mailing list.
As a result, all users involved were able to share knowledge and make some minor changes to the network routing.
Another network, RexNet, based in Kingston, was also found; they had alre |
https://en.wikipedia.org/wiki/BiCMOS | Bipolar CMOS (BiCMOS) is a semiconductor technology that integrates two semiconductor technologies, those of the bipolar junction transistor and the CMOS (complementary metal–oxide–semiconductor) logic gate, into a single integrated circuit. In more recent times the bipolar processes have been extended to include high mobility devices using silicon–germanium junctions.
Bipolar transistors offer high speed, high gain, and low output impedance with relatively high power consumption per device, which are excellent properties for high-frequency analog amplifiers including low noise radio frequency (RF) amplifiers that only use a few active devices, while CMOS technology offers high input impedance and is excellent for constructing large numbers of low-power logic gates. In a BiCMOS process the doping profile and other process features may be tilted to favour either the CMOS or the bipolar devices. For example GlobalFoundries offer a basic 180 nm BiCMOS7WL process and several other BiCMOS processes optimized in various ways. These processes also include steps for the deposition of precision resistors, and high Q RF inductors and capacitors on-chip, which are not needed in a "pure" CMOS logic design.
BiCMOS is aimed at mixed-signal ICs, such as ADCs and complete software radio systems on a chip that need amplifiers, analog power management circuits, and logic gates on chip. BiCMOS has some advantages in providing digital interfaces. BiCMOS circuits use the characteristics of each type of transistor most appropriately. Generally this means that high current circuits such as on chip power regulators use metal–oxide–semiconductor field-effect transistors (MOSFETs) for efficient control, and 'sea of logic' use conventional CMOS structures, while those portions of specialized very high performance circuits such as ECL dividers and LNAs use bipolar devices. Examples include RF oscillators, bandgap-based references and low-noise circuits.
The Pentium, Pentium Pro, and SuperS |
https://en.wikipedia.org/wiki/1729%20%28number%29 | 1729 is the natural number following 1728 and preceding 1730. It is notably the first taxicab number.
In mathematics
1729 is the smallest taxicab number, and is variously known as Ramanujan's number or the Ramanujan–Hardy number, after an anecdote of the British mathematician G. H. Hardy when he visited Indian mathematician Srinivasa Ramanujan in hospital. He related their conversation:
The two different ways are:
1729 = 1³ + 12³ = 9³ + 10³
The quotation is sometimes expressed using the term "positive cubes", since allowing negative perfect cubes (the cube of a negative integer) gives the smallest solution as 91 (which is a divisor of 1729; 19 × 91 = 1729).
91 = 6³ + (−5)³ = 4³ + 3³
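These identities are easy to verify numerically; a quick sanity check (Python, illustrative only):

```python
# Check the two representations of 1729, the negative-cube case for 91,
# and the divisor relation quoted in the text.
assert 1729 == 1**3 + 12**3 == 9**3 + 10**3
assert 91 == 6**3 + (-5)**3 == 4**3 + 3**3
assert 1729 == 19 * 91
```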
1729 was also found in one of Ramanujan's notebooks dated years before the incident, and was noted by Frénicle de Bessy in 1657. A commemorative plaque now appears at the site of the Ramanujan-Hardy incident, at 2 Colinette Road in Putney.
The same expression defines 1729 as the first in the sequence of "Fermat near misses", defined, in reference to Fermat's Last Theorem, as numbers of the form 1 + z³ which are also expressible as the sum of two other cubes.
Other properties
1729 is a sphenic number. It is the third Carmichael number, the first Chernick–Carmichael number, the first absolute Euler pseudoprime, and the third Zeisel number. It is a centered cube number, as well as a dodecagonal number, a 24-gonal number, and an 84-gonal number.
Investigating pairs of distinct integer-valued quadratic forms that represent every integer the same number of times, Schiemann found that such quadratic forms must be in four or more variables, and the least possible discriminant of a four-variable pair is 1729.
1729 is the lowest number which can be represented by a Loeschian quadratic form a² + ab + b² in four different ways with a and b positive integers. The integer pairs are (25,23), (32,15), (37,8) and (40,3).
1729 is also the smallest integer side of an equilateral triangle for which there are three sets o |
https://en.wikipedia.org/wiki/Schottky%20barrier | A Schottky barrier, named after Walter H. Schottky, is a potential energy barrier for electrons formed at a metal–semiconductor junction. Schottky barriers have rectifying characteristics, suitable for use as a diode. One of the primary characteristics of a Schottky barrier is the Schottky barrier height, denoted by ΦB (see figure). The value of ΦB depends on the combination of metal and semiconductor.
Not all metal–semiconductor junctions form a rectifying Schottky barrier; a metal–semiconductor junction that conducts current in both directions without rectification, perhaps due to its Schottky barrier being too low, is called an ohmic contact.
Physics of formation
When a metal is put in direct contact with a semiconductor, a so called Schottky barrier can be formed, leading to a rectifying behavior of the electrical contact. This happens both when the semiconductor is n-type and its work function is smaller than the work function of the metal, and when the semiconductor is p-type and the opposite relation between work functions holds.
At the basis of the description of the Schottky barrier formation through the band diagram formalism, there are three main assumptions:
The contact between the metal and the semiconductor must be intimate and without the presence of any other material layer (such as an oxide).
No interdiffusion of the metal and the semiconductor is taken into account.
There are no impurities at the interface between the two materials.
To a first approximation, the barrier between a metal and a semiconductor is predicted by the Schottky–Mott rule to be proportional to the difference of the metal-vacuum work function and the semiconductor-vacuum electron affinity. For an isolated metal, the work function is defined as the difference between its vacuum energy (i.e. the minimum energy that an electron must possess to completely free itself from the material) and the Fermi energy , and it is an invariant property of the specified metal:
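In symbols (standard notation, not recovered from the truncated source: W_M is the metal work function, E_vac the vacuum level, E_F the Fermi energy, and χ_S the semiconductor electron affinity), the work-function definition and the Schottky–Mott estimate for the n-type barrier height read:

```latex
W_M = E_{\mathrm{vac}} - E_F, \qquad
\Phi_B^{(n)} \approx W_M - \chi_S
```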
On th |
https://en.wikipedia.org/wiki/Differential%20centrifugation | In biochemistry and cell biology, differential centrifugation (also known as differential velocity centrifugation) is a common procedure used to separate organelles and other sub-cellular particles based on their sedimentation rate. Although often applied in biological analysis, differential centrifugation is a general technique also suitable for crude purification of non-living suspended particles (e.g. nanoparticles, colloidal particles, viruses). In a typical case where differential centrifugation is used to analyze cell-biological phenomena (e.g. organelle distribution), a tissue sample is first lysed to break the cell membranes and release the organelles and cytosol. The lysate is then subjected to repeated centrifugations, where particles that sediment sufficiently quickly at a given centrifugal force for a given time form a compact "pellet" at the bottom of the centrifugation tube.
After each centrifugation, the supernatant (non-pelleted solution) is removed from the tube and re-centrifuged at an increased centrifugal force and/or time. Differential centrifugation is suitable for crude separations on the basis of sedimentation rate, but finer-grained purifications may be done on the basis of density through equilibrium density-gradient centrifugation. Thus, the differential centrifugation method is the successive pelleting of particles from the previous supernatant, using increasingly higher centrifugation forces. Cellular organelles separated by differential centrifugation maintain a relatively high degree of normal functioning, as long as they are not subjected to denaturing conditions during isolation.
Theory
In a viscous fluid, the rate of sedimentation of a given suspended particle (as long as the particle is denser than the fluid) is largely a function of the following factors:
Gravitational force
Difference in density
Fluid viscosity
Particle size and shape
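For small spherical particles these factors combine in Stokes' law; a minimal sketch with assumed, illustrative values (the function name and the example numbers are mine, not from the article):

```python
def stokes_velocity(d, rho_p, rho_f, mu, g=9.81):
    """Terminal settling velocity (m/s) of a small sphere by Stokes' law:
    v = d**2 * (rho_p - rho_f) * g / (18 * mu)
    d: diameter (m); rho_p, rho_f: particle and fluid density (kg/m^3);
    mu: dynamic viscosity (Pa*s); g: acceleration (m/s^2)."""
    return d**2 * (rho_p - rho_f) * g / (18 * mu)

# Illustrative case: a 1 micrometre organelle-sized particle, 10% denser
# than a water-like buffer, sedimenting under gravity alone.
v = stokes_velocity(d=1e-6, rho_p=1100.0, rho_f=1000.0, mu=1e-3)
```

In a centrifuge, g is replaced by the much larger centrifugal acceleration, which is why larger or denser particles pellet at lower forces and shorter times.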
Larger particles sediment more quickly and at lower centrifugal forces. If a particle |
https://en.wikipedia.org/wiki/Microbody | A microbody (or cytosome) is a type of organelle that is found in the cells of plants, protozoa, and animals. Organelles in the microbody family include peroxisomes, glyoxysomes, glycosomes and hydrogenosomes. In vertebrates, microbodies are especially prevalent in the liver and kidney. Many membrane-bound vesicles called microbodies, containing various enzymes, are present in both plant and animal cells.
Structure
Microbodies, also known as cytosomes, are a diverse group of bodies present in the cytosol. A microbody is usually a vesicle with a spherical shape, ranging from 0.2 to 1.5 micrometres in diameter. Microbodies are found in the cytoplasm of a cell, but they are only visible with the use of an electron microscope. They are surrounded by a single phospholipid bilayer membrane and contain a matrix of intracellular material including enzymes and other proteins, but they do not appear to contain any genetic material that would allow them to self-replicate.
Function
Microbodies contain enzymes that participate in the preparatory or intermediate stages of biochemical reactions within the cell. This facilitates the breakdown of fats, alcohols and amino acids. Generally microbodies are involved in detoxification of peroxides and in photo respiration in plants. Different types of microbodies have different functions:
Peroxisomes
A peroxisome is a type of microbody that functions to help the body break down large molecules and detoxify hazardous substances. It contains oxidative enzymes, such as oxidases, that produce hydrogen peroxide as a byproduct of their reactions. Within the peroxisome, hydrogen peroxide can then be converted to water by enzymes such as catalase and peroxidase. Peroxisomes were discovered and named by Christian de Duve.
Glyoxysomes
Glyoxysomes are specialized peroxisomes found in plants and mold, which help to convert stored lipids into carbohydrates so they can be used for plant growth. In glyoxysomes the fatty acids are hydrolyzed to acetyl-CoA by peroxisomal β-oxidation enzymes |
https://en.wikipedia.org/wiki/Taxicab%20number | In mathematics, the nth taxicab number, typically denoted Ta(n) or Taxicab(n), also called the nth Ramanujan–Hardy number, is defined as the smallest integer that can be expressed as a sum of two positive integer cubes in n distinct ways. The most famous taxicab number is 1729 = Ta(2) = 1³ + 12³ = 9³ + 10³.
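Small taxicab numbers can be found by brute force; a sketch under the assumption that the true Ta(n)'s cube pairs fit within the search bound (the function name and bound are mine; for large n this bound is far too small):

```python
from collections import Counter

def taxicab(n, limit=500):
    """Smallest integer that is a sum of two positive cubes in at least
    n distinct ways, with both cube roots bounded by `limit`. The bound
    can only undercount ways, so the result is the true Ta(n) whenever
    Ta(n)'s representations fit inside it (true here for n = 2)."""
    sums = Counter(a**3 + b**3 for a in range(1, limit)
                                for b in range(a, limit))
    return min(s for s, ways in sums.items() if ways >= n)

taxicab(2)  # 1729 = 1^3 + 12^3 = 9^3 + 10^3
```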
The name is derived from a conversation in about 1919 involving mathematicians G. H. Hardy and Srinivasa Ramanujan. As told by Hardy:
History and definition
The concept was first mentioned in 1657 by Bernard Frénicle de Bessy, who published the Hardy–Ramanujan number Ta(2) = 1729. This particular example of 1729 was made famous in the early 20th century by a story involving Srinivasa Ramanujan. In 1938, G. H. Hardy and E. M. Wright proved that such numbers exist for all positive integers n, and their proof is easily converted into a program to generate such numbers. However, the proof makes no claims at all about whether the thus-generated numbers are the smallest possible and so it cannot be used to find the actual value of Ta(n).
The taxicab numbers subsequent to 1729 were found with the help of computers. John Leech obtained Ta(3) in 1957. E. Rosenstiel, J. A. Dardis and C. R. Rosenstiel found Ta(4) in 1989. J. A. Dardis found Ta(5) in 1994 and it was confirmed by David W. Wilson in 1999. Ta(6) was announced by Uwe Hollerbach on the NMBRTHRY mailing list on March 9, 2008, following a 2003 paper by Calude et al. that gave a 99% probability that the number was actually Ta(6). Upper bounds for Ta(7) to Ta(12) were found by Christian Boyer in 2006.
The restriction of the summands to positive numbers is necessary, because allowing negative numbers allows for more (and smaller) instances of numbers that can be expressed as sums of cubes in n distinct ways. The concept of a cabtaxi number has been introduced to allow for alternative, less restrictive definitions of this nature. In a sense, the specification of two summands and powers of three is also restrictiv |
https://en.wikipedia.org/wiki/Integrated%20circuit%20packaging | Integrated circuit packaging is the final stage of semiconductor device fabrication, in which the die is encapsulated in a supporting case that prevents physical damage and corrosion. The case, known as a "package", supports the electrical contacts which connect the device to a circuit board.
The packaging stage is followed by testing of the integrated circuit.
Design considerations
Electrical
The current-carrying traces that run out of the die, through the package, and into the printed circuit board (PCB) have very different electrical properties compared to on-chip signals. They require special design techniques and need much more electric power than signals confined to the chip itself. Therefore, it is important that the materials used as electrical contacts exhibit characteristics like low resistance, low capacitance and low inductance. Both the structure and materials must prioritize signal transmission properties, while minimizing any parasitic elements that could negatively affect the signal.
Controlling these characteristics is becoming increasingly important as the rest of technology begins to speed up. Packaging delays have the potential to make up almost half of a high-performance computer's delay, and this bottleneck on speed is expected to increase.
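The contribution of package parasitics to delay can be sketched with a first-order RC model (the numbers below are assumptions for illustration, not figures from the article):

```python
def rc_delay(r_ohm, c_farad):
    """First-order RC time constant (seconds) of a package interconnect
    modelled as a series resistance driving a lumped load capacitance."""
    return r_ohm * c_farad

# Hypothetical package trace: 0.5 ohm series resistance, 2 pF load.
tau = rc_delay(0.5, 2e-12)  # 1e-12 s, i.e. 1 ps per trace
```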
Mechanical and thermal
The integrated circuit package must resist physical breakage, keep out moisture, and also provide effective heat dissipation from the chip. Moreover, for RF applications, the package is commonly required to shield electromagnetic interference, that may either degrade the circuit performance or adversely affect neighboring circuits. Finally, the package must permit interconnecting the chip to a PCB. The materials of the package are either plastic (thermoset or thermoplastic), metal (commonly Kovar) or ceramic. A common plastic used for this is epoxy-cresol-novolak (ECN). All three material types offer usable mechanical strength, moisture and heat resistance. Nevertheless, for hi |
https://en.wikipedia.org/wiki/Krohn%E2%80%93Rhodes%20theory | In mathematics and computer science, the Krohn–Rhodes theory (or algebraic automata theory) is an approach to the study of finite semigroups and automata that seeks to decompose them in terms of elementary components. These components correspond to finite aperiodic semigroups and finite simple groups that are combined in a feedback-free manner (called a "wreath product" or "cascade").
Krohn and Rhodes found a general decomposition for finite automata. The authors discovered and proved an unexpected major result in finite semigroup theory, revealing a deep connection between finite automata and semigroups.
Definitions and description of the Krohn–Rhodes theorem
Let T be a semigroup. A semigroup S that is a homomorphic image of a subsemigroup of T is said to be a divisor of T.
The Krohn–Rhodes theorem for finite semigroups states that every finite semigroup S is a divisor of a finite alternating wreath product of finite simple groups, each a divisor of S, and finite aperiodic semigroups (which contain no nontrivial subgroups).
In the automata formulation, the Krohn–Rhodes theorem for finite automata states that given a finite automaton A with states Q and input set I, output alphabet U, then one can expand the states to Q' such that the new automaton A' embeds into a cascade of "simple", irreducible automata: In particular, A is emulated by a feed-forward cascade of (1) automata whose transformation semigroups are finite simple groups and (2) automata that are banks of flip-flops running in parallel. The new automaton A' has the same input and output symbols as A. Here, both the states and inputs of the cascaded automata have a very special hierarchical coordinate form.
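The flip-flop component mentioned above can be made concrete; a minimal Python illustration (the symbol names are mine, not from the theorem's formal statement):

```python
# The flip-flop: a two-state automaton whose inputs either store a bit
# or leave the state unchanged. Its transformation monoid is aperiodic,
# and banks of such flip-flops form the non-group layers of the cascade.
FLIP_FLOP = {
    "set":   lambda q: 1,   # store 1, regardless of current state
    "reset": lambda q: 0,   # store 0, regardless of current state
    "read":  lambda q: q,   # identity: leave the state as-is
}

def run(word, q=0):
    """Feed a sequence of input symbols to the flip-flop; return final state."""
    for symbol in word:
        q = FLIP_FLOP[symbol](q)
    return q

run(["set", "read", "reset", "set"])  # -> 1
```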
Moreover, each simple group (prime) or non-group irreducible semigroup (subsemigroup of the flip-flop monoid) that divides the transformation semigroup of A must divide the transformation semigroup of some component of the cascade, and only the primes that must occur as divisors of the |
https://en.wikipedia.org/wiki/Emoji | An emoji (plural emoji or emojis) is a pictogram, logogram, ideogram, or smiley embedded in text and used in electronic messages and web pages. The primary function of emoji is to fill in emotional cues otherwise missing from typed conversation. Emoji exist in various genres, including facial expressions, common objects, places and types of weather, and animals. They are much like emoticons, except emoji are pictures rather than typographic approximations; the term "emoji" in the strict sense refers to such pictures which can be represented as encoded characters, but it is sometimes applied to messaging stickers by extension. Originally meaning pictograph, the word emoji comes from Japanese e (絵, "picture") + moji (文字, "character"); the resemblance to the English words emotion and emoticon is purely coincidental. The ISO 15924 script code for emoji is Zsye.
Originating on Japanese mobile phones in 1997, emoji became increasingly popular worldwide in the 2010s after being added to several mobile operating systems. They are now considered to be a large part of popular culture in the West and around the world. In 2015, Oxford Dictionaries named the Face with Tears of Joy emoji (😂) the word of the year.
History
Evolution from emoticons (1990s)
The emoji was predated by the emoticon, a concept implemented in 1982 by computer scientist Scott Fahlman when he suggested text-based symbols such as :-) and :-( could be used to replace language. Theories about language replacement can be traced back to the 1960s, when Russian novelist and professor Vladimir Nabokov stated in an interview with The New York Times: "I often think there should exist a special typographical sign for a smile — some sort of concave mark, a supine round bracket." It did not become a mainstream concept until the 1990s when Japanese, American and European companies began developing Fahlman's idea. Mary Kalantzis and Bill Cope point out that similar symbology was incorporated by Bruce Parello, a student at the University of Illinois, |
https://en.wikipedia.org/wiki/Northbridge%20%28computing%29 | In computing, a northbridge (also host bridge, or memory controller hub) is one of two chips comprising the core logic chipset architecture on older motherboards for personal computers. A northbridge is connected directly to a CPU via the front-side bus (FSB) to handle high-performance tasks, and is usually used in conjunction with a slower southbridge to manage communication between the CPU and other parts of the motherboard. Since the 2010s, die shrink and improved transistor density have allowed for increasing chipset integration, and the functions performed by northbridges are now often incorporated into other components (like southbridges or CPUs themselves). As of 2019, Intel and AMD had both released chipsets in which all northbridge functions had been integrated into the CPU. Modern Intel Core processors have the northbridge integrated on the CPU die, where it is known as the uncore or system agent.
On older Intel based PCs, the northbridge was also named external memory controller hub (MCH) or graphics and memory controller hub (GMCH) if equipped with integrated graphics. Increasingly these functions became integrated into the CPU chip itself, beginning with memory and graphics controllers. For Intel Sandy Bridge and AMD Accelerated Processing Unit processors introduced in 2011, all of the functions of the northbridge reside on the CPU. The corresponding southbridge was renamed by Intel as the Platform Controller Hub and by AMD as the Fusion controller hub. AMD FX CPUs continued to require external northbridge and southbridge chips.
Historically, separation of functions between CPU, northbridge, and southbridge chips was necessary due to the difficulty of integrating all components onto a single chip die. However, as CPU speeds increased over time, a bottleneck emerged due to limitations caused by data transmission between the CPU and its support chipset. The trend for integrated northbridges began near the end of the 2000s — for example, the Nvidia GeFo |
https://en.wikipedia.org/wiki/Rebus | A rebus () is a puzzle device that combines the use of illustrated pictures with individual letters to depict words or phrases. For example: the word "been" might be depicted by a rebus showing an illustrated bumblebee next to a plus sign (+) and the letter "n". It was a favourite form of heraldic expression used in the Middle Ages to denote surnames.
For example, in its basic form, three salmon (fish) are used to denote the surname "Salmon". A more sophisticated example was the rebus of Bishop Walter Lyhart (d. 1472) of Norwich, consisting of a stag (or hart) lying down in a conventional representation of water.
The composition alludes to the name, profession or personal characteristics of the bearer, and speaks to the beholder Non verbis, sed rebus, which Latin expression signifies "not by words but by things" (res, rei (f), a thing, object, matter; rebus being ablative plural).
Rebuses within heraldry
Rebuses are used extensively as a form of heraldic expression as a hint to the name of the bearer; they are not synonymous with canting arms. A man might have a rebus as a personal identification device entirely separate from his armorials, canting or otherwise. For example, Sir Richard Weston (d. 1541) bore as arms: Ermine, on a chief azure five bezants, whilst his rebus, displayed many times in terracotta plaques on the walls of his mansion Sutton Place, Surrey, was a "tun" or barrel, used to designate the last syllable of his surname.
An example of canting arms proper are those of the Borough of Congleton in Cheshire consisting of a conger eel, a lion (in Latin, leo) and a tun (barrel). This word sequence "conger-leo-tun" enunciates the town's name. Similarly, the coat of arms of St. Ignatius Loyola contains wolves (in Spanish, lobo) and a kettle (olla), said by some (probably incorrectly) to be a rebus for "Loyola". The arms of Elizabeth Bowes-Lyon feature bows and lions.
Modern rebuses, word plays
A modern example of the rebus used as a form of word play |
https://en.wikipedia.org/wiki/Association%20for%20Progressive%20Communications | The Association for Progressive Communications (APC) is an international network of organizations that was founded in 1990 to provide communication infrastructure, including Internet-based applications, to groups and individuals who work for peace, human rights, protection of the environment, and sustainability. Pioneering the use of ICTs for civil society, especially in developing countries, APC members were often the first Internet providers in their countries.
APC is a worldwide network of social activists who use the internet to make the world a better place. APC is both a network and an organisation. APC members are groups working in their own countries to advance the same mission as APC. APC has more than 59 members from five continents, mostly in Asia, Africa and Latin America. This is both a challenge and a strength, because members span the two extremes of internet development (members in South Korea with incredible connectivity, and members in rural Nigeria who have to power computers using car batteries and solar power) and everything in between.
History
Background and creation
APC was founded in 1990 by:
Institute for Global Communications (IGC), San Francisco, USA
GreenNet, London, United Kingdom
IBASE, Rio de Janeiro, Brazil
Nicarao, Managua, Nicaragua
Pegasus Networks, Byron Bay, Australia
Web Networks, Toronto, Canada
NordNet, Sweden
Activists working with IDOC, a United Nations–sponsored data-management NGO, created a network of like-minded organisations working with information and alternative media. At that point they communicated mainly by fax and regular mail, and people physically travelled around, transporting and sharing databases of information and software on disks.
In 1988, on the verge of APC's creation, Mitra Ardron described the central characteristics of the future APC user, present operations, and the history of APC's precedents: PeaceNet, EcoNet and GreenNet. He also expressed a common commitment to global communication available |
https://en.wikipedia.org/wiki/Southbridge%20%28computing%29 | The southbridge is one of the two chips in the core logic chipset on older personal computer (PC) motherboards, the other being the northbridge. As of 2023, most personal computer devices no longer use a set of two chips, and instead have a single chip acting as the 'chipset', for example Intel's Z790 chipset.
The southbridge typically implemented the slower capabilities of the motherboard in a northbridge/southbridge chipset computer architecture. In systems with Intel chipsets, the southbridge was named the I/O Controller Hub (ICH). AMD has named its southbridge the Fusion Controller Hub (FCH) since the introduction of its Fusion Accelerated Processing Unit (APU), which moved the functions of the northbridge onto the CPU die, making the FCH similar in function to Intel's Platform Controller Hub.
The southbridge can usually be distinguished from the northbridge by not being directly connected to the CPU. Rather, the northbridge ties the southbridge to the CPU. Through the use of controller integrated channel circuitry, the northbridge can directly link signals from the I/O units to the CPU for data control and access.
Current status
Due to the push for system-on-chip (SoC) processors, modern devices increasingly have the northbridge integrated into the CPU die itself; examples are Intel's Sandy Bridge and AMD's Fusion processors, both released in 2011. The southbridge became redundant and was replaced by the Platform Controller Hub (PCH) architecture, introduced with the Intel 5 Series chipset in 2008. AMD did the same with the release of its first APUs in 2011, naming its equivalent the Fusion Controller Hub (FCH); the FCH was used only on AMD's APUs until 2017, when it began to be used with AMD's Zen architecture and the FCH name was dropped. On Intel platforms, all southbridge features and remaining I/O functions are managed by the PCH, which is directly connected to the CPU via the Direct Media Interface (DMI). Intel low-power processors (Haswell-U and onward) an |
https://en.wikipedia.org/wiki/PowerPC%20G4 | PowerPC G4 is a designation formerly used by Apple and Eyetech to describe a fourth generation of 32-bit PowerPC microprocessors. Apple has applied this name to various (though closely related) processor models from Freescale, a former part of Motorola. Motorola and Freescale's proper name of this family of processors is PowerPC 74xx.
Macintosh computers such as the PowerBook G4 and iBook G4 laptops and the Power Mac G4 and Power Mac G4 Cube desktops all took their name from the processor. PowerPC G4 processors were also used in the eMac, first-generation Xserves, first-generation Mac Minis, and the iMac G4 before the introduction of the PowerPC 970.
Apple completely phased out the G4 series for desktop models after it selected the 64-bit IBM-produced PowerPC 970 processor as the basis for its PowerPC G5 series. The last desktop model that used the G4 was the Mac Mini, which now comes with Apple M2 and Apple M2 Pro processors. The last portable to use the G4 was the iBook G4, which was replaced by the Intel-based MacBook. The PowerBook G4 was replaced by the Intel-based MacBook Pro.
The PowerPC G4 processors are also popular in other computer systems, such as the AmigaOne series of computers and the Pegasos from Genesi. Besides desktop computers the PowerPC G4 is popular in embedded environments, like routers, telecom switches, imaging, media processing, avionics and military applications, where one can take advantage of the AltiVec and its SMP capabilities.
PowerPC 7400
The PowerPC 7400 (code-named "Max") debuted in August 1999 and was the first processor to carry the "G4" moniker. The chip operates at speeds ranging from 350 to 500 MHz and contains 10.5 million transistors, manufactured using Motorola's 0.20 μm HiPerMOS6 process. The die measures 83 mm² and features copper interconnects.
Motorola had promised Apple to deliver parts with speed up to 500 MHz, but yields proved too low initially. This forced Apple to take back the advertised 500 MHz models of |
https://en.wikipedia.org/wiki/Space%20frame | In architecture and structural engineering, a space frame or space structure (3D truss) is a rigid, lightweight, truss-like structure constructed from interlocking struts in a geometric pattern. Space frames can be used to span large areas with few interior supports. Like the truss, a space frame is strong because of the inherent rigidity of the triangle; flexing loads (bending moments) are transmitted as tension and compression loads along the length of each strut.
Chief applications include buildings and vehicles.
History
Alexander Graham Bell developed space frames based on tetrahedral geometry from 1898 to 1908. Bell's interest was primarily in using them to make rigid frames for nautical and aeronautical engineering, with the tetrahedral truss being one of his inventions. Max Mengeringhausen developed the space grid system called MERO (acronym of MEngeringhausen ROhrbauweise) in 1943 in Germany, thus initiating the use of space trusses in architecture. This commonly used method, still in use today, has individual tubular members connected at ball-shaped node joints; variations include the space deck system, the octet truss system and the cubic system. Stéphane de Chateau in France invented the Tridirectional SDC system (1957), the Unibat system (1959) and Pyramitec (1960). A method of tree supports was developed to replace individual columns. Buckminster Fuller patented the octet truss () in 1961 while focusing on architectural structures.
Design methods
Space frames are typically designed using a rigidity matrix. The special characteristic of the stiffness matrix in an architectural space frame is the independence of the angular factors. If the joints are sufficiently rigid, the angular deflections can be neglected, simplifying the calculations.
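Because angular deflections are neglected, each pin-jointed member contributes only axial stiffness projected along its direction cosines. A minimal sketch of one member's contribution in the direct stiffness method (function and parameter names are mine, not from the article):

```python
import numpy as np

def bar_stiffness(p1, p2, E, A):
    """Global 6x6 stiffness matrix of a pin-jointed 3D truss member.
    p1, p2: end-node coordinates (m); E: Young's modulus (Pa);
    A: cross-sectional area (m^2). DOFs are (x1, y1, z1, x2, y2, z2)."""
    d = np.asarray(p2, float) - np.asarray(p1, float)
    L = np.linalg.norm(d)
    c = d / L                    # direction cosines of the bar axis
    cc = np.outer(c, c)          # 3x3 projection onto the bar axis
    return (E * A / L) * np.block([[cc, -cc], [-cc, cc]])
```

Summing these member matrices into the global system, node by node, yields the structure's stiffness matrix; its independence from angular factors is what the simplification above refers to.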
Overview
The simplest form of space frame is a horizontal slab of interlocking square pyramids and tetrahedra built from aluminium or tubular steel struts. In many ways this looks like the horizontal jib of a tower crane repe |
https://en.wikipedia.org/wiki/Frequency-dependent%20selection | Frequency-dependent selection is an evolutionary process by which the fitness of a phenotype or genotype depends on the phenotype or genotype composition of a given population.
In positive frequency-dependent selection, the fitness of a phenotype or genotype increases as it becomes more common.
In negative frequency-dependent selection, the fitness of a phenotype or genotype decreases as it becomes more common. This is an example of balancing selection.
More generally, frequency-dependent selection includes when biological interactions make an individual's fitness depend on the frequencies of other phenotypes or genotypes in the population.
Frequency-dependent selection is usually the result of interactions between species (predation, parasitism, or competition), or between genotypes within species (usually competitive or symbiotic), and has been especially frequently discussed with relation to anti-predator adaptations. Frequency-dependent selection can lead to polymorphic equilibria, which result from interactions among genotypes within species, in the same way that multi-species equilibria require interactions between species in competition (e.g. where αij parameters in Lotka-Volterra competition equations are non-zero). Frequency-dependent selection can also lead to dynamical chaos when some individuals' fitnesses become very low at intermediate allele frequencies.
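The approach to a polymorphic equilibrium under negative frequency dependence can be sketched with a toy haploid two-type model (an illustrative model of my own construction, not one from the article, with each type's fitness declining linearly in its own frequency):

```python
def next_freq(p, s=0.5):
    """One generation of negative frequency-dependent selection on a
    haploid two-type population; p is the frequency of type A and
    s the strength of the frequency dependence."""
    w_a = 1.0 - s * p            # type A gets less fit as it spreads
    w_b = 1.0 - s * (1.0 - p)    # likewise for type B
    w_bar = p * w_a + (1.0 - p) * w_b
    return p * w_a / w_bar       # standard selection recursion

p = 0.1
for _ in range(100):
    p = next_freq(p)
# p approaches the stable polymorphic equilibrium at 0.5
```

With positive frequency dependence (sign of s flipped), the same recursion instead drives the common type to fixation.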
Negative
The first explicit statement of frequency-dependent selection appears to have been by Edward Bagnall Poulton in 1884, on the way that predators could maintain color polymorphisms in their prey.
Perhaps the best known early modern statement of the principle is Bryan Clarke's 1962 paper on apostatic selection (a synonym of negative frequency-dependent selection). Clarke discussed predator attacks on polymorphic British snails, citing Luuk Tinbergen's classic work on searching images as support that predators such as birds tended to specialize in common forms of palatable species. Clarke |
https://en.wikipedia.org/wiki/Citizen%20Lab | The Citizen Lab is an interdisciplinary laboratory based at the Munk School of Global Affairs at the University of Toronto, Canada. It was founded by Ronald Deibert in 2001. The laboratory studies information controls that impact the openness and security of the Internet and that pose threats to human rights. The organization uses a "mixed methods" approach which combines computer-generated interrogation, data mining, and analysis with intensive field research, qualitative social science, and legal and policy analysis methods. The organization has played a major role in providing technical support to journalists investigating the use of NSO Group's Pegasus spyware on journalists, politicians and human rights advocates.
History
The Citizen Lab was a founding partner of the OpenNet Initiative (2002–2013) and the Information Warfare Monitor (2002–2012) projects. The organization also developed the original design of the Psiphon censorship circumvention software, which was spun out of the Lab into a private Canadian corporation (Psiphon Inc.) in 2008.
In a 2009 report "Tracking GhostNet", researchers uncovered a suspected cyber espionage network of over 1,295 infected hosts in 103 countries between 2007 and 2009, a high percentage of which were high-value targets, including ministries of foreign affairs, embassies, international organizations, news media, and NGOs. The study was one of the first public reports to reveal a cyber espionage network that targeted civil society and government systems internationally.
In Shadows in the Cloud (2010), researchers documented a complex ecosystem of cyber espionage that systematically compromised government, business, academic, and other computer network systems in India and several other countries, as well as the offices of the Dalai Lama and the United Nations. According to a January 24, 2019 AP News report, Citizen Lab researchers were "being targeted" by "international undercover operatives" for its work on NSO Group.
In Million Dolla |
https://en.wikipedia.org/wiki/IUPAC%20nomenclature%20of%20organic%20chemistry | In chemical nomenclature, the IUPAC nomenclature of organic chemistry is a method of naming organic chemical compounds as recommended by the International Union of Pure and Applied Chemistry (IUPAC). It is published in the Nomenclature of Organic Chemistry (informally called the Blue Book). Ideally, every possible organic compound should have a name from which an unambiguous structural formula can be created. There is also an IUPAC nomenclature of inorganic chemistry.
To avoid long and tedious names in normal communication, the official IUPAC naming recommendations are not always followed in practice, except when it is necessary to give an unambiguous and absolute definition to a compound. IUPAC names can sometimes be simpler than older names, as with ethanol, instead of ethyl alcohol. For relatively simple molecules they can be more easily understood than non-systematic names, which must be learnt or looked up. However, the common or trivial name is often substantially shorter and clearer, and so is often preferred. These non-systematic names are often derived from an original source of the compound. Also, very long names may be less clear than structural formulas.
Basic principles
In chemistry, a number of prefixes, suffixes and infixes are used to describe the type and position of the functional groups in the compound.
The steps for naming an organic compound are:
Identification of the parent hydride (parent hydrocarbon chain). This chain must obey the following rules, in order of precedence:
It should have the maximum number of substituents of the suffix functional group. By suffix, it is meant that the parent functional group should have a suffix, unlike halogen substituents. If more than one functional group is present, the one with highest group precedence should be used.
It should have the maximum number of multiple bonds.
It should have the maximum length.
It should have the maximum number of substituents or branches cited as prefixes.
It should have the ma |
https://en.wikipedia.org/wiki/Wound%20healing | Wound healing refers to a living organism's replacement of destroyed or damaged tissue by newly produced tissue.
In undamaged skin, the epidermis (surface, epithelial layer) and dermis (deeper, connective layer) form a protective barrier against the external environment. When the barrier is broken, a regulated sequence of biochemical events is set into motion to repair the damage. This process is divided into predictable phases: blood clotting (hemostasis), inflammation, tissue growth (cell proliferation), and tissue remodeling (maturation and cell differentiation). Blood clotting may be considered to be part of the inflammation stage instead of a separate stage.
The wound-healing process is not only complex but fragile, and it is susceptible to interruption or failure leading to the formation of non-healing chronic wounds. Factors that contribute to non-healing chronic wounds are diabetes, venous or arterial disease, infection, and metabolic deficiencies of old age.
Wound care encourages and speeds wound healing via cleaning and protection from reinjury or infection. Depending on each patient's needs, it can range from the simplest first aid to entire nursing specialties such as wound, ostomy, and continence nursing and burn center care.
Stages
Hemostasis (blood clotting): Within the first few minutes of injury, platelets in the blood begin to stick to the injured site. They change into an amorphous shape, more suitable for clotting, and they release chemical signals to promote clotting. This results in the activation of fibrin, which forms a mesh and acts as "glue" to bind platelets to each other. This makes a clot that serves to plug the break in the blood vessel, slowing/preventing further bleeding.
Inflammation: During this phase, damaged and dead cells are cleared out, along with bacteria and other pathogens or debris. This happens through the process of phagocytosis, where white blood cells engulf debris and destroy it. Platelet-derived growth factors |
https://en.wikipedia.org/wiki/Canonical%20transformation | In Hamiltonian mechanics, a canonical transformation is a change of canonical coordinates that preserves the form of Hamilton's equations. This is sometimes known as form invariance. It need not preserve the form of the Hamiltonian itself. Canonical transformations are useful in their own right, and also form the basis for the Hamilton–Jacobi equations (a useful method for calculating conserved quantities) and Liouville's theorem (itself the basis for classical statistical mechanics).
Since Lagrangian mechanics is based on generalized coordinates, transformations of the coordinates do not affect the form of Lagrange's equations and, hence, do not affect the form of Hamilton's equations if we simultaneously change the momentum by a Legendre transformation into P_i = ∂L/∂Q̇_i.
Therefore, coordinate transformations (also called point transformations) are a type of canonical transformation. However, the class of canonical transformations is much broader, since the old generalized coordinates, momenta and even time may be combined to form the new generalized coordinates and momenta. Canonical transformations that do not include the time explicitly are called restricted canonical transformations (many textbooks consider only this type).
For clarity, we restrict the presentation here to calculus and classical mechanics. Readers familiar with more advanced mathematics such as cotangent bundles, exterior derivatives and symplectic manifolds should read the related symplectomorphism article. (Canonical transformations are a special case of a symplectomorphism.) However, a brief introduction to the modern mathematical description is included at the end of this article.
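As a concrete illustration (not drawn from the text, and using a hypothetical helper name), the "exchange" transformation Q = p, P = −q is canonical: it preserves the fundamental Poisson bracket {Q, P} = 1, which can be checked numerically with central differences:

```python
# Numerical check that the exchange transformation Q = p, P = -q is canonical.
def poisson_bracket(F, G, q, p, h=1e-6):
    """{F, G} = dF/dq * dG/dp - dF/dp * dG/dq via central differences."""
    dFq = (F(q + h, p) - F(q - h, p)) / (2 * h)
    dFp = (F(q, p + h) - F(q, p - h)) / (2 * h)
    dGq = (G(q + h, p) - G(q - h, p)) / (2 * h)
    dGp = (G(q, p + h) - G(q, p - h)) / (2 * h)
    return dFq * dGp - dFp * dGq

Q = lambda q, p: p   # new coordinate
P = lambda q, p: -q  # new momentum

# Canonical iff the fundamental bracket is preserved: {Q, P} = 1.
assert abs(poisson_bracket(Q, P, 0.3, 0.7) - 1.0) < 1e-6
```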
Notation
Boldface variables such as q represent a list of generalized coordinates that need not transform like a vector under rotation, e.g., q ≡ (q_1, q_2, …, q_N).
A dot over a variable or list signifies the time derivative, e.g., q̇ ≡ dq/dt.
The dot product notation between two lists of the same number of coordinates is a shorthand for the sum of t |
https://en.wikipedia.org/wiki/%CE%9CClinux | μClinux is a variation of the Linux kernel, previously maintained as a fork, that targets microcontrollers without a memory management unit (MMU). It was integrated into the mainline kernel as of 2.5.46; the project continues to develop patches and tools for microcontrollers. The homepage lists Linux kernel releases for 2.0, 2.4 and 2.6 (all of which are end-of-life in mainline).
The letters "μC" are for "microcontroller": the name is pronounced "you-see-Linux", rather than pronouncing the letter mu as in Greek.
History
μClinux was originally created by D. Jeff Dionne and Kenneth Albanowski in 1998. Initially, they targeted the Motorola DragonBall family of embedded 68k processors (specifically the 68EZ328 series used in the Motorola PalmPilot) on a 2.0.33 Linux kernel. After releasing their initial work, a developer community quickly sprang up extending their work to newer kernels and other microprocessor architectures. In early 1999, support was added for the Motorola (now NXP) ColdFire family of embedded microprocessors. ARM processor support was added later.
Although originally targeting 2.0 series Linux kernels, it now has ports based on Linux 2.4 and Linux 2.6. The Linux 2.4 ports were forward ported from the 2.0.36 Linux kernel by Michael Leslie and Evan Stawnyczy during their work at Rt-Control. There were never any μClinux extensions applied to the 2.2 series kernels.
Since version 2.5.46 of the Linux kernel, the major parts of μClinux have been integrated with the mainline kernel for a number of processor architectures.
Greg Ungerer (who originally ported μClinux to the Motorola ColdFire family of processors) continued to maintain and actively push core μClinux support into the 2.6 series Linux kernels. In this regard, μClinux is essentially no longer a separate fork of Linux.
μClinux had support for many architectures, and forms the basis of many products, like network routers, security cameras, DVD or MP3 players, VoIP phone or gateways, scanners, |
https://en.wikipedia.org/wiki/Herbal | A herbal is a book containing the names and descriptions of plants, usually with information on their medicinal, tonic, culinary, toxic, hallucinatory, aromatic, or magical powers, and the legends associated with them. A herbal may also classify the plants it describes, may give recipes for herbal extracts, tinctures, or potions, and sometimes include mineral and animal medicaments in addition to those obtained from plants. Herbals were often illustrated to assist plant identification.
Herbals were among the first literature produced in Ancient Egypt, China, India, and Europe as the medical wisdom of the day accumulated by herbalists, apothecaries and physicians. Herbals were also among the first books to be printed in both China and Europe. In Western Europe herbals flourished for two centuries following the introduction of moveable type (c. 1470–1670).
In the late 17th century, the rise of modern chemistry, toxicology and pharmacology reduced the medicinal value of the classical herbal. As reference manuals for botanical study and plant identification herbals were supplanted by Floras – systematic accounts of the plants found growing in a particular region, with scientifically accurate botanical descriptions, classification, and illustrations. Herbals have seen a modest revival in the Western world since the last decades of the 20th century, as herbalism and related disciplines (such as homeopathy and aromatherapy) became popular forms of alternative medicine.
History
The use of plants for medicinal purposes, and their descriptions, dates back two to three thousand years. The word herbal is derived from the mediaeval Latin liber herbalis ("book of herbs"): it is sometimes used in contrast to the word florilegium, which is a treatise on flowers with emphasis on their beauty and enjoyment rather than the herbal emphasis on their utility. Much of the information found in printed herbals arose out of traditional medicine and herbal knowledge that predated the i |
https://en.wikipedia.org/wiki/Turtles%20all%20the%20way%20down | "Turtles all the way down" is an expression of the problem of infinite regress. The saying alludes to the mythological idea of a World Turtle that supports a flat Earth on its back. It suggests that this turtle rests on the back of an even larger turtle, which itself is part of a column of increasingly larger turtles that continues indefinitely.
The exact origin of the phrase is uncertain. In the form "rocks all the way down", the saying appears as early as 1838. References to the saying's mythological antecedents, the World Turtle and its counterpart the World Elephant, were made by a number of authors in the 17th and 18th centuries.
The expression has been used to illustrate problems such as the regress argument in epistemology.
History
Background in Hindu mythology
Early variants of the saying do not always have explicit references to infinite regression (i.e., the phrase "all the way down"). They often reference stories featuring a World Elephant, World Turtle, or other similar creatures that are claimed to come from Hindu mythology. The first known reference to a Hindu source is found in a letter by Jesuit Emanuel da Veiga (1549–1605), written at Chandagiri on 18 September 1599, in which the relevant passage reads:
Veiga's account seems to have been received by Samuel Purchas, who has a close paraphrase in his Purchas His Pilgrims (1613/1626),
"that the Earth had nine corners, whereby it was borne up by the Heaven. Others dissented, and said, that the Earth was borne up by seven Elephants; the Elephants' feet stood on Tortoises, and they were borne by they know not what." Purchas' account is again reflected by John Locke in his 1689 tract An Essay Concerning Human Understanding, where Locke introduces the story as a trope referring to the problem of induction in philosophical debate. Locke compares one who would say that properties inhere in "Substance" to the Indian who said the world was on an elephant which was on a tortoise, "But being again pressed |
https://en.wikipedia.org/wiki/GNE%20%28encyclopedia%29 | GNE (originally GNUPedia) was a project to create a free content online encyclopedia, licensed under the GNU Free Documentation License, under the auspices of the Free Software Foundation. The project was proposed by Richard Stallman in December 2000 and officially started in January 2001. It was moderated by Héctor Facundo Arena, an Argentine programmer and GNU activist.
History
Immediately upon its creation, GNUPedia was confronted by confusion with the similar-sounding Nupedia project led by Jimmy Wales and Larry Sanger, and controversy over whether this constituted a fork of the efforts to produce a free encyclopedia. In addition, Wales already owned the gnupedia.org domain name. The GNUPedia project changed its name to GNE (an abbreviation for "GNE's Not an Encyclopedia", a recursive acronym similar to that of the GNU Project) and switched to a knowledgebase. GNE was designed to avoid centralization and editors who enforced quality standards, which they viewed as possibly introducing bias. Jonathan Zittrain described GNE as a "collective blog" more than an encyclopedia. Stallman has since lent his support to Wikipedia.
In The Wikipedia Revolution, Andrew Lih explains the reasons behind the demise of GNE:
Richard Stallman who inspired the free software and free culture movement also proposed his own encyclopedia in 1999 and attempted to launch it in the same year that Wikipedia took off. Called Gnupedia it coexisted confusingly in the same space as Bomis's Nupedia, a completely separate product. Keeping with tradition Stallman renamed his project GNE – GNE's not an encyclopedia. But in the end Wikipedia's lead and enthusiastic community was already well established and Richard Stallman put the GNE project into inactive status and put his support behind Wikipedia.
The GNU Project offers the following explanation about GNE:
Just as we were starting a project, GNUpedia, to develop a free encyclopedia, the Nupedia encyclopedia project adopted the GNU Free Docum |
https://en.wikipedia.org/wiki/Canonical%20form | In mathematics and computer science, a canonical, normal, or standard form of a mathematical object is a standard way of presenting that object as a mathematical expression. Often, it is one which provides the simplest representation of an object and allows it to be identified in a unique way. The distinction between "canonical" and "normal" forms varies from subfield to subfield. In most fields, a canonical form specifies a unique representation for every object, while a normal form simply specifies its form, without the requirement of uniqueness.
The canonical form of a positive integer in decimal representation is a finite sequence of digits that does not begin with zero. More generally, for a class of objects on which an equivalence relation is defined, a canonical form consists in the choice of a specific object in each class. For example:
Jordan normal form is a canonical form for matrix similarity.
The row echelon form is a canonical form, when one considers as equivalent a matrix and its left product by an invertible matrix.
In computer science, and more specifically in computer algebra, when representing mathematical objects in a computer, there are usually many different ways to represent the same object. In this context, a canonical form is a representation such that every object has a unique representation (with canonicalization being the process through which a representation is put into its canonical form). Thus, the equality of two objects can easily be tested by testing the equality of their canonical forms.
Despite this advantage, canonical forms frequently depend on arbitrary choices (like ordering the variables), which introduce difficulties for testing the equality of two objects resulting from independent computations. Therefore, in computer algebra, normal form is a weaker notion: A normal form is a representation such that zero is uniquely represented. This allows testing for equality by putting the difference of two objects in normal form.
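The idea can be illustrated with rational numbers (a standard example, sketched here with a hypothetical function name): reduce to lowest terms and fix the sign of the denominator, so equality of rationals reduces to equality of canonical forms.

```python
from math import gcd

def canonical_fraction(num, den):
    """Canonical form of a rational number: lowest terms, positive denominator."""
    if den == 0:
        raise ZeroDivisionError("denominator must be nonzero")
    g = gcd(num, den)          # math.gcd returns a non-negative divisor
    num, den = num // g, den // g
    if den < 0:                # normalize the sign choice
        num, den = -num, -den
    return num, den

# Different representations of the same rational share one canonical form,
# so equality testing reduces to comparing canonical forms:
assert canonical_fraction(2, 4) == canonical_fraction(-3, -6) == (1, 2)
```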
|
https://en.wikipedia.org/wiki/Fisher%27s%20fundamental%20theorem%20of%20natural%20selection | Fisher's fundamental theorem of natural selection is an idea about genetic variance in population genetics developed by the statistician and evolutionary biologist Ronald Fisher. The proper way of applying the abstract mathematics of the theorem to actual biology has been a matter of some debate.
It states:
"The rate of increase in fitness of any organism at any time is equal to its genetic variance in fitness at that time."
Or in more modern terminology:
"The rate of increase in the mean fitness of any organism, at any time, that is ascribable to natural selection acting through changes in gene frequencies, is exactly equal to its genetic variance in fitness at that time".
History
The theorem was first formulated in Fisher's 1930 book The Genetical Theory of Natural Selection. Fisher likened it to the law of entropy in physics, stating that "It is not a little instructive that so similar a law should hold the supreme position among the biological sciences". The model of quasi-linkage equilibrium was introduced by Motoo Kimura in 1965 as an approximation in the case of weak selection and weak epistasis.
Largely as a result of Fisher's feud with the American geneticist Sewall Wright about adaptive landscapes, the theorem was widely misunderstood to mean that the average fitness of a population would always increase, even though models showed this not to be the case. In 1972, George R. Price showed that Fisher's theorem was indeed correct (and that Fisher's proof was also correct, given a typo or two), but did not find it to be of great significance. The sophistication that Price pointed out, and that had made understanding difficult, is that the theorem gives a formula for part of the change in gene frequency, and not for all of it. This is a part that can be said to be due to natural selection.
Due to confounding factors, tests of the fundamental theorem are quite rare, though Bolnick in 2007 did test this effect in a natural population.
References
Further re |
https://en.wikipedia.org/wiki/Fisherian%20runaway | Fisherian runaway or runaway selection is a sexual selection mechanism proposed by the mathematical biologist Ronald Fisher in the early 20th century, to account for the evolution of ostentatious male ornamentation by persistent, directional female choice. An example is the colourful and elaborate peacock plumage compared to the relatively subdued peahen plumage; the costly ornaments, notably the bird's extremely long tail, appear to be incompatible with natural selection. Fisherian runaway can be postulated to include sexually dimorphic phenotypic traits such as behavior expressed by a particular sex.
Extreme and (seemingly) maladaptive sexual dimorphism represented a paradox for evolutionary biologists from Charles Darwin's time up to the modern synthesis. Darwin attempted to resolve the paradox by assuming heredity for both the preference and the ornament, and supposed an "aesthetic sense" in higher animals, leading to powerful selection of both characteristics in subsequent generations. Fisher developed the theory further by assuming genetic correlation between the preference and the ornament, that initially the ornament signalled greater potential fitness (the likelihood of leaving more descendants), so preference for the ornament had a selective advantage. Subsequently, if strong enough, female preference for exaggerated ornamentation in mate selection could be enough to undermine natural selection even when the ornament has become non-adaptive. Over subsequent generations this could lead to runaway selection by positive feedback, and the speed with which the trait and the preference increase could (until counter-selection interferes) increase exponentially.
Modern descriptions of the same mechanism using quantitative genetic and population genetic models were mainly established by Russell Lande and Mark Kirkpatrick in the 1980s, and are now more commonly referred to as the sexy son hypothesis.
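The positive-feedback mechanism can be sketched with a toy Lande-style recursion (an illustrative assumption with made-up parameter values, not a model from the text): the ornament z experiences selection from mating advantage minus viability cost, and the preference p evolves only through its genetic covariance with z.

```python
# Toy runaway sketch: z = mean ornament, p = mean preference.
# beta = a*p - b*z is the net selection gradient on the ornament.
Gz, B = 1.0, 0.6   # additive genetic variance of z; covariance of z and p
a, b = 1.0, 0.5    # mating advantage vs. viability cost per unit trait
z, p = 0.0, 0.1
history = []
for _ in range(20):
    beta = a * p - b * z
    z, p = z + Gz * beta, p + B * beta   # preference changes only via covariance B
    history.append((z, p))
# Because a*B > b*Gz here (0.6 > 0.5), beta grows by a factor 1.1 each
# generation, so ornament and preference increase without bound: runaway.
```

When a*B < b*Gz instead, beta shrinks each generation and the system settles rather than running away, mirroring the role of counter-selection described above.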
History
From Charles Darwin to Ronald Fisher
Charles Darwin |
https://en.wikipedia.org/wiki/Compactor | A compactor is a machine or mechanism used to reduce the size of material such as waste material or bio mass through compaction. A trash compactor is often used by business and public places like hospitals (And in the United States also by homes) to reduce the volume of trash they produce. A baler-wrapper compactor is often used for making compact and wrapped bales in order to improve logistics.
Normally powered by hydraulics, compactors take many shapes and sizes. In landfill sites for example, a large tractor (typically a converted front end loader with some variant of a bulldozer blade attached) with spiked steel wheels called a landfill compactor is used to drive over waste deposited by waste collection vehicles (WCVs).
WCVs themselves incorporate a compacting mechanism which is used to increase the payload of the vehicle and reduce the number of times it has to empty. This usually takes the form of hydraulically powered sliding plates which sweep out the collection hopper and compress the material into what has already been loaded.
Different compactors are used in scrap metal processing, the most familiar being the car crusher. Such devices can be of either the "pancake" type, where a scrap automobile is flattened by a huge descending hydraulically powered plate, or the baling press, where the automobile is compressed from several directions until it resembles a large cube.
Commercial use
Many retail and service businesses, such as fast food, restaurants, and hotels, use compactors to reduce the volume of non-recyclable waste as well as to curb nuisances such as rodents and odors. In the hospitality industry, tolerance for such nuisances is particularly low. These compactors typically come in electric and hydraulic operation, with quite a few loading configurations. The most popular loading configurations fall under the following:
Ground-access;
Walk-on;
Secured indoor chute.
These compactors are almost exclusively of welded steel construction for two reasons |
https://en.wikipedia.org/wiki/Zinc%20oxide | Zinc oxide is an inorganic compound with the formula . It is a white powder that is insoluble in water. ZnO is used as an additive in numerous materials and products including cosmetics, food supplements, rubbers, plastics, ceramics, glass, cement, lubricants, paints, sunscreens, ointments, adhesives, sealants, pigments, foods, batteries, ferrites, fire retardants, semi conductors, and first-aid tapes. Although it occurs naturally as the mineral zincite, most zinc oxide is produced synthetically.
History
Zinc compounds were probably used by early humans, in processed and unprocessed forms, as a paint or medicinal ointment, but their composition is uncertain. The use of pushpanjan, probably zinc oxide, as a salve for eyes and open wounds, is mentioned in the Indian medical text the Charaka Samhita, thought to date from 500 BC or before. Zinc oxide ointment is also mentioned by the Greek physician Dioscorides (1st century AD). Galen suggested treating ulcerating cancers with zinc oxide, as did Avicenna in his The Canon of Medicine. It is used as an ingredient in products such as baby powder and creams against diaper rashes, calamine cream, anti-dandruff shampoos, and antiseptic ointments.
The Romans produced considerable quantities of brass (an alloy of zinc and copper) as early as 200 BC by a cementation process where copper was reacted with zinc oxide. The zinc oxide is thought to have been produced by heating zinc ore in a shaft furnace. This liberated metallic zinc as a vapor, which then ascended the flue and condensed as the oxide. This process was described by Dioscorides in the 1st century AD. Zinc oxide has also been recovered from zinc mines at Zawar in India, dating from the second half of the first millennium BC.
From the 12th to the 16th century zinc and zinc oxide were recognized and produced in India using a primitive form of the direct synthesis process. From India, zinc manufacture moved to China in the 17th century. In 1743, the first European zinc |
https://en.wikipedia.org/wiki/Beamline | In accelerator physics, a beamline refers to the trajectory of the beam of particles, including the overall construction of the path segment (guide tubes, diagnostic devices) along a specific path of an accelerator facility. This part is either
the line in a linear accelerator along which a beam of particles travels, or
the path leading from a particle generator (e.g., a cyclic accelerator, synchrotron light source, cyclotron, or spallation source) to the experimental end-station.
Beamlines usually end in experimental stations that utilize particle beams or synchrotron light obtained from a synchrotron, or neutrons from a spallation source or research reactor. Beamlines are used in experiments in particle physics, materials science, life science, chemistry, and molecular biology, but can also be used for irradiation tests or to produce isotopes.
Beamline in a particle accelerator
In particle accelerators the beamline is usually housed in a tunnel and/or underground, cased inside a concrete housing for shielding purposes. The beamline is usually a cylindrical metal pipe, typically called a beam pipe, and/or a drift tube, evacuated to a high vacuum so there are few gas molecules in the path for the beam of accelerated particles to hit, which otherwise could scatter them before they reach their destination.
There are specialized devices and equipment on the beamline that are used for producing, maintaining, monitoring, and accelerating the particle beam. These devices may be in proximity of or attached directly to the beamline. These devices include sophisticated transducers, diagnostics (position monitors and wire scanners), lenses, collimators, thermocouples, ion pumps, ion gauges, ion chambers (for diagnostic purposes; usually called "beam monitors"), vacuum valves ("isolation valves"), and gate valves, to mention a few.
It is imperative to have all beamline sections, magnets, etc., aligned (often by a survey and an alignment crew by using a laser tracker), b |
https://en.wikipedia.org/wiki/Gorenstein%20ring | In commutative algebra, a Gorenstein local ring is a commutative Noetherian local ring R with finite injective dimension as an R-module. There are many equivalent conditions, some of them listed below, often saying that a Gorenstein ring is self-dual in some sense.
Gorenstein rings were introduced by Grothendieck in his 1961 seminar. The name comes from a duality property of singular plane curves studied by Daniel Gorenstein (who was fond of claiming that he did not understand the definition of a Gorenstein ring). The zero-dimensional case had been studied by Macaulay, and Bass's 1963 paper on the ubiquity of Gorenstein rings publicized the concept.
Frobenius rings are noncommutative analogs of zero-dimensional Gorenstein rings. Gorenstein schemes are the geometric version of Gorenstein rings.
For Noetherian local rings, there is the following chain of inclusions: regular local rings ⊂ complete intersection rings ⊂ Gorenstein rings ⊂ Cohen–Macaulay rings.
Definitions
A Gorenstein ring is a commutative Noetherian ring such that each localization at a prime ideal is a Gorenstein local ring, as defined below. A Gorenstein ring is in particular Cohen–Macaulay.
One elementary characterization is: a Noetherian local ring R of dimension zero (equivalently, with R of finite length as an R-module) is Gorenstein if and only if HomR(k, R) has dimension 1 as a k-vector space, where k is the residue field of R. Equivalently, R has simple socle as an R-module. More generally, a Noetherian local ring R is Gorenstein if and only if there is a regular sequence a1,...,an in the maximal ideal of R such that the quotient ring R/(a1,...,an) is Gorenstein of dimension zero.
For example, if R is a commutative graded algebra over a field k such that R has finite dimension as a k-vector space, R = k ⊕ R1 ⊕ ... ⊕ Rm, then R is Gorenstein if and only if it satisfies Poincaré duality, meaning that the top graded piece Rm has dimension 1 and the product Ra × Rm−a → Rm is a perfect pairing for every a.
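A minimal worked instance of this duality criterion (a standard textbook example, not taken from the text above) contrasts a Gorenstein and a non-Gorenstein zero-dimensional ring:

```latex
% R = k[x]/(x^2) has k-basis {1, x}; its socle is spanned by x alone:
\[
R = k[x]/(x^2):\quad \operatorname{soc}(R) = kx \cong k
\;\Rightarrow\; R \text{ is Gorenstein.}
\]
% R' = k[x,y]/(x^2, xy, y^2) has k-basis {1, x, y}; its socle is spanned
% by both x and y, so it is not simple:
\[
R' = k[x,y]/(x^2,\,xy,\,y^2):\quad
\dim_k \operatorname{soc}(R') = 2
\;\Rightarrow\; R' \text{ is not Gorenstein,}
\]
% even though R' is zero-dimensional and hence Cohen--Macaulay.
```

In the graded picture, R satisfies Poincaré duality with top piece kx of dimension 1, while R' fails it because its top graded piece is two-dimensional.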
Another interpretation of the Gorenstein property as a type of duality, for not necessarily graded rings, is: for a |
https://en.wikipedia.org/wiki/Fungicide | Fungicides are pesticides used to kill parasitic fungi or their spores. They are most commonly chemical compounds, but may include biocontrols and fungistatics. Fungi can cause serious damage in agriculture, resulting in critical losses of yield, quality, and profit. Fungicides are used both in agriculture and to fight fungal infections in animals. Fungicides are also used to control oomycetes, which are not taxonomically/genetically fungi, although sharing similar methods of infecting plants.
Fungicides can be either contact, translaminar, or systemic. Contact fungicides are not taken up into the plant tissue and protect only the plant where the spray is deposited. Translaminar fungicides redistribute the fungicide from the upper, sprayed leaf surface to the lower, unsprayed surface. Systemic fungicides are taken up and redistributed through the xylem vessels. Few fungicides move to all parts of a plant. Some are locally systemic, and some move upward.
Most fungicides that can be bought retail are sold in liquid form. A very common active ingredient is sulfur, present at 0.08% in weaker concentrates, and as high as 0.5% for more potent fungicides. Fungicides in powdered form are usually around 90% sulfur and are very toxic. Other active ingredients in fungicides include neem oil, rosemary oil, jojoba oil, the bacterium Bacillus subtilis, and the beneficial fungus Ulocladium oudemansii.
Fungicide residues have been found on food for human consumption, mostly from post-harvest treatments. Some fungicides are dangerous to human health, such as vinclozolin, which has now been removed from use. Ziram is also a fungicide that is toxic to humans with long-term exposure, and fatal if ingested. A number of fungicides are also used in human health care.
Types
Fungicides can be classified according to their mechanism of action (MOA), specifically the biological process or target site they block. The Fungicide Resistance Action Committee (FRAC) assigns chemicals into clas |
https://en.wikipedia.org/wiki/Refresh%20rate | The refresh rate, also known as vertical refresh rate or vertical scan rate in reference to terminology originating with the cathode-ray tubes (CRTs), is the number of times per second that a raster-based display device displays a new image. This is independent from frame rate, which describes how many images are stored or generated every second by the device driving the display. On CRT displays, higher refresh rates produce less flickering, thereby reducing eye strain. In other technologies such as liquid-crystal displays, the refresh rate affects only how often the image can potentially be updated.
Non-raster displays may not have a characteristic refresh rate. Vector displays, for instance, do not trace the entire screen, only the actual lines comprising the displayed image, so refresh speed may differ with the size and complexity of the image data. For computer programs or telemetry, the term is sometimes applied to how frequently a datum is updated with a new external value from another source (for example, a shared public spreadsheet or hardware feed).
Physical factors
While all raster display devices have a characteristic refresh rate, the physical implementation differs between technologies.
Cathode-ray tubes
Raster-scan CRTs by their nature must refresh the screen since their phosphors will fade and the image will disappear quickly unless refreshed regularly.
In a CRT, the vertical scan rate is the number of times per second that the electron beam returns to the upper left corner of the screen to begin drawing a new frame. It is controlled by the vertical blanking signal generated by the video controller, and is partially limited by the monitor's maximum horizontal scan rate.
The refresh rate can be calculated from the horizontal scan rate by dividing the scanning frequency by the number of horizontal lines, multiplied by a factor that allows time for the beam to return to the top of the screen; by convention this factor is 1.05. For instance, a monitor with a
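That calculation can be sketched as follows (the 1.05 blanking allowance is the convention cited above; real monitors vary):

```python
def refresh_rate(horizontal_scan_hz, lines, blanking_factor=1.05):
    """Vertical refresh rate implied by a horizontal scan rate.

    Each frame requires `lines` horizontal sweeps, padded by the conventional
    1.05x multiplier to leave time for the beam's vertical retrace.
    """
    return horizontal_scan_hz / (lines * blanking_factor)

# A 96 kHz monitor drawing 1200 lines refreshes at roughly 76 Hz.
print(round(refresh_rate(96_000, 1200), 1))  # 76.2
```

With the blanking factor set to 1.0 the formula reduces to the bare line-rate division.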
https://en.wikipedia.org/wiki/Brokaw%20bandgap%20reference | Brokaw bandgap reference is a voltage reference circuit widely used in integrated circuits, with an output voltage around 1.25 V with low temperature dependence. This particular circuit is one type of a bandgap voltage reference, named after Paul Brokaw, the author of its first publication.
Like all temperature-independent bandgap references, the circuit maintains an internal voltage source that has a positive temperature coefficient and another internal voltage source that has a negative temperature coefficient. By summing the two together, the temperature dependence can be canceled. Additionally, either of the two internal sources can be used as a temperature sensor.
In the Brokaw bandgap reference, the circuit uses negative feedback (by means of an operational amplifier) to force a constant current through two bipolar transistors with different emitter areas. By the Ebers–Moll model of a transistor,
The transistor with the larger emitter area requires a smaller base–emitter voltage for the same current.
The difference between the two base–emitter voltages has a positive temperature coefficient (i.e., it increases with temperature).
The base–emitter voltage for each transistor has a negative temperature coefficient (i.e., it decreases with temperature).
The circuit output is the sum of one of the base–emitter voltages with a multiple of the base–emitter voltage differences. With appropriate component choices, the two opposing temperature coefficients will cancel each other exactly and the output will have no temperature dependence.
In the example circuit shown, the opamp ensures that its inverting and non-inverting inputs are at the same voltage. This means that the currents in each collector resistor are identical, so the collector currents of Q1 and Q2 are also identical. If Q2 has an emitter area that is times larger than Q1, its base-emitter voltage will be lower than that of Q1 by a magnitude of . This voltage is generated across and so defines the cu |
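The Ebers–Moll consequence this passage relies on is the standard result that, at equal collector currents, the two base–emitter voltages differ by ΔVBE = (kT/q)·ln N, where N is the emitter-area ratio; this difference is PTAT. A numeric sketch (N = 8 is an illustrative choice, not a value from the article):

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K
Q_E = 1.602177e-19  # elementary charge, C

def delta_vbe(temp_k, area_ratio):
    """Base-emitter voltage difference (kT/q) * ln(N) for two transistors
    carrying equal collector currents with emitter-area ratio N."""
    return (K_B * temp_k / Q_E) * math.log(area_ratio)

# At room temperature with an 8:1 area ratio, dVbe is about 54 mV,
# and it grows linearly with absolute temperature (PTAT).
print(round(delta_vbe(300, 8) * 1000, 1))
```

The linear growth with absolute temperature is exactly the positive temperature coefficient described in the bullet points above.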
https://en.wikipedia.org/wiki/Bandgap%20voltage%20reference | A bandgap voltage reference is a temperature independent voltage reference circuit widely used in integrated circuits. It produces a fixed (constant) voltage regardless of power supply variations, temperature changes, or circuit loading from a device. It commonly has an output voltage around 1.25V, close to the corresponding theoretical band gap of silicon. This circuit concept was first published by David Hilbiber in 1964. Bob Widlar, Paul Brokaw and others followed up with other commercially successful versions.
Operation
The voltage difference between two p–n junctions (e.g. diodes), operated at different current densities, is used to generate a current that is proportional to absolute temperature (PTAT) in a resistor. This current is used to generate a voltage in a second resistor. This voltage in turn is added to the voltage of one of the junctions (or a third one, in some implementations). The voltage across a diode operated at constant current is complementary to absolute temperature (CTAT), with a temperature coefficient of approximately −2mV/K. If the ratio between the first and second resistor is chosen properly, the first order effects of the temperature dependency of the diode and the PTAT current will cancel out. The resulting voltage is about 1.2–1.3V, depending on the particular technology and circuit design, and is close to the corresponding theoretical 1.22eV bandgap of silicon at 0K. The remaining voltage change over the operating temperature of typical integrated circuits is on the order of a few millivolts. This temperature dependency has a typical parabolic residual behavior since the linear (first order) effects are chosen to cancel.
Because the output voltage is by definition fixed around 1.25V for typical bandgap reference circuits, the minimum operating voltage is about 1.4V, since in a CMOS circuit at least one drain-source voltage of a field-effect transistor (FET) has to be added. Therefore, recent work concentrates on finding alternati
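The first-order cancellation described above can be sketched numerically: choose the PTAT gain (set in practice by the resistor ratio) so that its slope exactly offsets the roughly −2 mV/K slope of the diode voltage. The Vbe value and area ratio below are illustrative assumptions, not a design:

```python
import math

K_B, Q_E = 1.380649e-23, 1.602177e-19  # Boltzmann constant, elementary charge

def ptat_gain(area_ratio, ctat_slope_v_per_k=-2e-3):
    """Gain on the PTAT voltage (kT/q)*ln(N) whose slope cancels the
    CTAT slope of a single diode/Vbe drop."""
    ptat_slope = (K_B / Q_E) * math.log(area_ratio)
    return -ctat_slope_v_per_k / ptat_slope

def bandgap_vref(vbe_v, area_ratio, temp_k=300.0):
    """Temperature-compensated reference: Vbe plus the scaled PTAT term."""
    dvbe = (K_B * temp_k / Q_E) * math.log(area_ratio)
    return vbe_v + ptat_gain(area_ratio) * dvbe

# With Vbe ~ 0.65 V at 300 K and an 8:1 area ratio, the sum lands near 1.25 V.
print(round(bandgap_vref(0.65, 8), 3))
```

Note that the compensating PTAT contribution is slope × temperature = 2 mV/K × 300 K = 0.6 V regardless of the area ratio; the ratio only determines how much gain the resistors must supply.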
https://en.wikipedia.org/wiki/Low-density%20parity-check%20code | In information theory, a low-density parity-check (LDPC) code is a linear error correcting code, a method of transmitting a message over a noisy transmission channel. An LDPC code is constructed using a sparse Tanner graph (a subclass of the bipartite graph). LDPC codes are capacity-approaching codes, which means that practical constructions exist that allow the noise threshold to be set very close to the theoretical maximum (the Shannon limit) for a symmetric memoryless channel. The noise threshold defines an upper bound for the channel noise, up to which the probability of lost information can be made as small as desired. Using iterative belief propagation techniques, LDPC codes can be decoded in time linear in their block length.
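Concretely, a parity-check code accepts exactly the words whose syndrome H·c (mod 2) is zero. The toy matrix below is far too small and dense to be a genuine low-density code, but it illustrates the check that iterative decoders repeatedly evaluate:

```python
# Illustrative 3x6 parity-check matrix (each row is one parity constraint).
H = [
    [1, 1, 0, 1, 0, 0],
    [0, 1, 1, 0, 1, 0],
    [1, 0, 1, 0, 0, 1],
]

def syndrome(word):
    """H * word mod 2; an all-zero syndrome means every check is satisfied."""
    return [sum(h * b for h, b in zip(row, word)) % 2 for row in H]

codeword = [1, 0, 1, 1, 1, 0]
print(syndrome(codeword))       # [0, 0, 0] -> valid codeword
corrupted = [0] + codeword[1:]  # flip the first bit
print(syndrome(corrupted))      # nonzero -> error detected
```

In a real LDPC code, H has thousands of columns with only a handful of ones per row, which is what makes belief propagation on the Tanner graph tractable.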
LDPC codes are finding increasing use in applications requiring reliable and highly efficient information transfer over bandwidth-constrained or return-channel-constrained links in the presence of corrupting noise. Implementation of LDPC codes has lagged behind that of other codes, notably turbo codes. The fundamental patent for turbo codes expired on August 29, 2013.
LDPC codes are also known as Gallager codes, in honor of Robert G. Gallager, who developed the LDPC concept in his doctoral dissertation at the Massachusetts Institute of Technology in 1960. LDPC codes have also been shown to have ideal combinatorial properties. In his dissertation, Gallager showed that LDPC codes achieve the Gilbert–Varshamov bound for linear codes over binary fields with high probability. In 2020 it was shown that Gallager's LDPC codes achieve list decoding capacity and also achieve the Gilbert–Varshamov bound for linear codes over general fields.
History
Impractical to implement when first developed by Gallager in 1963, LDPC codes were forgotten until his work was rediscovered in 1996. Turbo codes, another class of capacity-approaching codes discovered in 1993, became the coding scheme of choice in the late 1990s, used for applications such as the Deep |
https://en.wikipedia.org/wiki/Video%20camera%20tube | Video camera tubes were devices based on the cathode ray tube that were used in television cameras to capture television images, prior to the introduction of charge-coupled device (CCD) image sensors in the 1980s. Several different types of tubes were in use from the early 1930s, and as late as the 1990s.
In these tubes, an electron beam was scanned across an image of the scene to be broadcast focused on a target. This generated a current that was dependent on the brightness of the image on the target at the scan point. The size of the striking ray was tiny compared to the size of the target, allowing 480–486 horizontal scan lines per image in the NTSC format, 576 lines in PAL, and as many as 1035 lines in Hi-Vision.
Cathode ray tube
Any vacuum tube which operates using a focused beam of electrons, originally called cathode rays, is known as a cathode ray tube (CRT). These are usually seen as display devices as used in older (i.e., non-flat panel) television receivers and computer displays. The camera pickup tubes described in this article are also CRTs, but they display no image.
Early research
In June 1908, the scientific journal Nature published a letter in which Alan Archibald Campbell-Swinton, fellow of the Royal Society (UK), discussed how a fully electronic television system could be realized by using cathode ray tubes (or "Braun" tubes, after their inventor, Karl Braun) as both imaging and display devices. He noted that the "real difficulties lie in devising an efficient transmitter", and that it was possible that "no photoelectric phenomenon at present known will provide what is required". A cathode ray tube was successfully demonstrated as a displaying device by the German Professor Max Dieckmann in 1906; his experimental results were published by the journal Scientific American in 1909. Campbell-Swinton later expanded on his vision in a presidential address given to the Röntgen Society in November 1911. The photoelectric screen in the proposed transmi |
https://en.wikipedia.org/wiki/Map%20%28mathematics%29 | In mathematics, a map or mapping is a function in its general sense. These terms may have originated from the process of making a geographical map: mapping the Earth's surface to a sheet of paper.
The term map may be used to distinguish some special types of functions, such as homomorphisms. For example, a linear map is a homomorphism of vector spaces, while the term linear function may have this meaning or it may mean a linear polynomial. In category theory, a map may refer to a morphism. The term transformation can be used interchangeably, but transformation often refers to a function from a set to itself. There are also a few less common uses in logic and graph theory.
Maps as functions
In many branches of mathematics, the term map is used to mean a function, sometimes with a specific property of particular importance to that branch. For instance, a "map" is a "continuous function" in topology, a "linear transformation" in linear algebra, etc.
Some authors, such as Serge Lang, use "function" only to refer to maps in which the codomain is a set of numbers (i.e. a subset of R or C), and reserve the term mapping for more general functions.
Maps of certain kinds are the subjects of many important theories. These include homomorphisms in abstract algebra, isometries in geometry, operators in analysis and representations in group theory.
In the theory of dynamical systems, a map denotes an evolution function used to create discrete dynamical systems.
A partial map is a partial function. Related terms such as domain, codomain, injective, and continuous can be applied equally to maps and functions, with the same meaning. All these usages can be applied to "maps" as general functions or as functions with special properties.
As morphisms
In category theory, "map" is often used as a synonym for "morphism" or "arrow", which is a structure-respecting function and thus may imply more structure than "function" does. For example, a morphism in a concrete category |
https://en.wikipedia.org/wiki/Known-plaintext%20attack | The known-plaintext attack (KPA) is an attack model for cryptanalysis where the attacker has access to both the plaintext (called a crib) and its encrypted version (ciphertext). These can be used to reveal further secret information such as secret keys and code books. The term "crib" originated at Bletchley Park, the British World War II decryption operation, where it was defined as:
History
The usage "crib" was adapted from a slang term referring to cheating (e.g., "I cribbed my answer from your test paper"). A "crib" originally was a literal or interlinear translation of a foreign-language text—usually a Latin or Greek text—that students might be assigned to translate from the original language.
The idea behind a crib is that cryptologists were looking at incomprehensible ciphertext, but if they had a clue about some word or phrase that might be expected to be in the ciphertext, they would have a "wedge," a test to break into it. If their otherwise random attacks on the cipher managed to sometimes produce those words or (preferably) phrases, they would know they might be on the right track. When those words or phrases appeared, they would feed the settings they had used to reveal them back into the whole encrypted message to good effect.
In the case of Enigma, the German High Command was very meticulous about the overall security of the Enigma system and understood the possible problem of cribs. The day-to-day operators, on the other hand, were less careful. The Bletchley Park team would guess some of the plaintext based upon when the message was sent, and by recognizing routine operational messages. For instance, a daily weather report was transmitted by the Germans at the same time every day. Due to the regimented style of military reports, it would contain the word Wetter (German for "weather") at the same location in every message. (Knowing the local weather conditions helped Bletchley Park guess other parts of the plaintext as well.) Other operators, too, |
https://en.wikipedia.org/wiki/Mindset%20%28computer%29 | The Mindset is an Intel 80186-based MS-DOS personal computer. It was developed by the Mindset Corporation and released in spring 1984. Unlike other IBM PC compatibles of the time, it has custom graphics hardware supporting a 320x200 resolution with 16 simultaneous colors (chosen from a 512-shade palette) and hardware-accelerated drawing capabilities, including a blitter, allowing it to update the screen 50 times as fast as an IBM standard color graphics adapter. The basic unit was priced at . It is conceptually similar to the more successful Amiga released over a year later. Key engineers of both the Amiga and Mindset were ex-Atari, Inc. employees.
The system didn't sell well and was only on the market for about a year. This was lamented by industry commenters, who saw compatibility taking precedence over innovation. Its distinctive case remains in the permanent collection of the Museum of Modern Art in New York.
History
Roger Badertscher was head of Atari, Inc.'s Home Computer Division until 1982 when he resigned in order to set up a new company to produce a new personal computer. As president of Mindset Corporation, he brought a number of Atari engineers with him.
Design
In most computer systems of the era, the CPU is used to create graphics by drawing bit patterns directly into memory. Separate hardware then reads these patterns and produces the actual video signal for the display. The Mindset added a new custom-designed VLSI vector processor to handle many common drawing tasks, like lines or filling areas. Instead of the CPU doing all of this work by changing memory directly, in the Mindset the CPU sets up those instructions and then hands off the actual bit fiddling to the separate processor.
Badertscher compared the chipset to the Intel 8087 floating point processor, running alongside the Intel 80186 on which the machine is based. There are a number of parallels between the Mindset and the Amiga 1000, another computer designed by ex-Atari engineers that of |
https://en.wikipedia.org/wiki/Geocode | A geocode is a code that represents a geographic entity (location or object). It is a unique identifier of the entity, to distinguish it from others in a finite set of geographic entities. In general the geocode is a human-readable and short identifier.
Typical geocodes and entities represented by it:
Country code and subdivision code. Polygon of the administrative boundaries of a country or a subdivision. The main examples are ISO codes: ISO 3166-1 alpha-2 code (e.g. AF for Afghanistan or BR for Brazil), and its subdivision conventions, such as ISO 3166-2 subdivision codes (e.g. AF-GHO for Ghor province, or BR-AM for Amazonas state).
DGG cell ID. Identifier of a cell of a discrete global grid: a Geohash code (e.g. the ~0.023 km2 cell 6vjyngd near Brazil's geographic center) or an OLC code (e.g. the ~0.004 km2 cell 58PJ642P+4 at the same point).
Postal code. Polygon of a postal area: a CEP code (e.g. 70040 represents a central Brazilian area for postal distribution).
The ISO 19112:2019 standard (section 3.1.2) adopted the term "geographic identifier" instead of geocode, to encompass long labels: a spatial reference in the form of a label or code that identifies a location. For example, for ISO, the country name “People's Republic of China” is a label.
Geocodes are mainly used (in general as an atomic data type) for labelling, data integrity, geotagging and spatial indexing.
In theoretical computer science a geocode system is a locality-preserving hashing function.
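The Geohash code mentioned earlier is a concrete locality-preserving hash: it interleaves longitude and latitude bisection bits and emits them in a base-32 alphabet, so nearby points share long prefixes. A compact sketch of the standard encoding:

```python
GEOHASH32 = "0123456789bcdefghjkmnpqrstuvwxyz"

def geohash(lat, lon, precision):
    """Encode a point by alternately bisecting longitude and latitude;
    every 5 bits become one base-32 character."""
    lat_lo, lat_hi = -90.0, 90.0
    lon_lo, lon_hi = -180.0, 180.0
    code, bits, ch, even = [], 0, 0, True
    while len(code) < precision:
        if even:  # longitude bit
            mid = (lon_lo + lon_hi) / 2
            ch = ch * 2 + (lon >= mid)
            lon_lo, lon_hi = (mid, lon_hi) if lon >= mid else (lon_lo, mid)
        else:     # latitude bit
            mid = (lat_lo + lat_hi) / 2
            ch = ch * 2 + (lat >= mid)
            lat_lo, lat_hi = (mid, lat_hi) if lat >= mid else (lat_lo, mid)
        even, bits = not even, bits + 1
        if bits == 5:
            code.append(GEOHASH32[ch])
            ch, bits = 0, 0
    return "".join(code)

print(geohash(42.6, -5.6, 5))  # "ezs42"
```

Truncating a Geohash enlarges the cell but keeps containing the point, which is exactly the locality-preserving property the sentence above refers to.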
Classification
There are some common aspects of many geocodes (or geocode systems) that can be used as classification criteria:
Ownership: proprietary or free, differing by its licences.
Formation: the geocode can be originated from a name (e.g. an abbreviation of the country's official name) or from a mathematical function (an encoding algorithm to compress latitude-longitude). See geocode system types below (of names and of grids).
Hierarchy: geocode's syntax hierarchy corresponding to the spa |
https://en.wikipedia.org/wiki/Natural%20Area%20Code | The Natural Area Code, or Universal Address, is a proprietary geocode system for identifying an area anywhere on the Earth, or a volume of space anywhere around the Earth. The use of thirty alphanumeric characters instead of only ten digits makes a NAC shorter than its numerical latitude/longitude equivalent.
Two-dimensional system
Instead of numerical longitudes and latitudes, a grid with 30 rows and 30 columns - each cell denoted by the numbers 0-9 and the twenty consonants of the Latin alphabet - is laid over the flattened globe. A NAC cell (or block) can be subdivided repeatedly into smaller NAC grids to yield an arbitrarily small area, subject to the ±1 m limitations of the World Geodetic System (WGS) data of 1984.
A NAC represents an area on the earth—the longer the NAC, the smaller the area (and thereby, location) represented. A ten-character NAC can uniquely specify any building, house, or fixed object in the world. An eight-character NAC specifies an area no larger than 25 metres by 50 metres, while a ten-character NAC cell is no larger than 0.8 metres by 1.6 metres.
Using a base 30 positional numeral system, NAC uses an alternate method which excludes vowels and avoids potential confusion between "0" (zero) and "O" (capital "o"), and "1" (one) and "I" (capital "i"):
For example, the ten-character NAC for the centre of the city of Brussels is HBV6R RG77T.
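A minimal sketch of the two-dimensional encoding, assuming the published base-30 NAC character set and a west-to-east / south-to-north cell ordering (the ordering is inferred from the Brussels example above, not from an official specification):

```python
NAC30 = "0123456789BCDFGHJKLMNPQRSTVWXZ"  # digits plus 20 consonants

def nac_axis(value, lo, hi, length):
    """Encode one coordinate by repeatedly splitting [lo, hi) into 30 cells."""
    frac = (value - lo) / (hi - lo)
    chars = []
    for _ in range(length):
        frac *= 30
        digit = min(int(frac), 29)
        chars.append(NAC30[digit])
        frac -= digit
    return "".join(chars)

def nac(lon, lat, length=5):
    """Five characters of longitude, a space, five of latitude."""
    return nac_axis(lon, -180, 180, length) + " " + nac_axis(lat, -90, 90, length)

# Approximate Brussels coordinates land in the "H... R..." cells,
# consistent with the leading characters of the example above.
print(nac(4.35, 50.85))
```

Each extra character narrows the cell by a factor of 30 per axis, which is why a ten-character NAC pins down an area under a couple of metres across.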
Extension to three dimensions
The full NAC system provides a third coordinate: altitude. This coordinate is the arctangent of the altitude, relative to the Earth's radius, and scaled so that the zero point (000...) is at the centre of the Earth, the midpoint (H00...) is the local radius of the geoid, i.e. the Earth's surface, and the endpoint (ZZZ...) is at infinity.
For example, the three-dimensional NAC for the centre of Brussels, at ground level, is HBV6R RG77T H0000.
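The altitude coordinate can be sketched the same way: mapping arctan(distance from the Earth's centre / Earth radius) onto [0, 1) and encoding it base-30 reproduces "H0000" exactly at the surface (the Earth-radius value below is an illustrative assumption):

```python
import math

NAC30 = "0123456789BCDFGHJKLMNPQRSTVWXZ"
EARTH_RADIUS_M = 6_371_000  # mean radius, illustrative

def nac_altitude(distance_from_centre_m, length=5):
    """Third NAC coordinate: arctan(r / R) scaled so the centre encodes as
    '000..', the surface as 'H00..', and infinity approaches 'ZZZ..'."""
    frac = math.atan(distance_from_centre_m / EARTH_RADIUS_M) / (math.pi / 2)
    chars = []
    for _ in range(length):
        frac *= 30
        digit = min(int(frac), 29)
        chars.append(NAC30[digit])
        frac -= digit
    return "".join(chars)

print(nac_altitude(0))               # "00000" (centre of the Earth)
print(nac_altitude(EARTH_RADIUS_M))  # "H0000" (ground level)
```

Because arctan saturates at 90°, arbitrarily large altitudes still encode to finite strings approaching "ZZZZZ".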
See also
Military Grid Reference System
Universal Transverse Mercator coordinate system
Quadtree
Geohash
References
Furth |
https://en.wikipedia.org/wiki/List%20of%20polygons%2C%20polyhedra%20and%20polytopes | A polytope is a geometric object with flat sides, which exists in any general number of dimensions. The following list of polygons, polyhedra and polytopes gives the names of various classes of polytopes and lists some specific examples.
Polytope elements
Polygon (2-polytope)
Vertex – the ridge or (n−2)-face of the polygon
Edge – the facet or (n−1)-face of the polygon
Polyhedron (3-polytope)
Vertex – the peak or (n−3)-face of the polyhedron
Edge – the ridge or (n−2)-face of the polyhedron
Face – the facet or (n−1)-face of the polyhedron
Polychoron (4-polytope)
Vertex – the (n−4)-face of the polychoron
Edge – the peak or (n−3)-face of the polychoron
Face – the ridge or (n−2)-face of the polychoron
Cell – the facet or (n−1)-face of the polychoron
5-polytope
Vertex – the (n−5)-face of the 5-polytope
Edge – the (n−4)-face of the 5-polytope
Face – the peak or (n−3)-face of the 5-polytope
Cell – the ridge or (n−2)-face of the 5-polytope
Hypercell or Teron – the facet or (n−1)-face of the 5-polytope
Other
Point
Line segment
Vertex figure
Peak – (n−3)-face
Ridge – (n−2)-face
Facet – (n−1)-face
Two dimensional (polygons)
Triangle
Equilateral triangle
Isosceles triangle
Scalene triangle
Right triangle
Oblique triangle
Acute triangle
Obtuse triangle
Quadrilateral
Rectangle
Square
Rhombus
Parallelogram
Trapezoid
Isosceles trapezoid
Kite
Rhomboid
Pentagon
Hexagon
Heptagon
Octagon
Nonagon
Decagon
Hendecagon
Dodecagon
Triskaidecagon
Tetradecagon
Pentadecagon
Hexadecagon
Heptadecagon
Octadecagon
Enneadecagon
Icosagon
Icosihenagon
Icosidigon
Icositrigon
Icositetragon
Icosipentagon
Icosihexagon
Icosiheptagon
Icosioctagon
Icosienneagon
Triacontagon
Tetracontagon
Pentacontagon
Hexacontagon
Heptacontagon
Octacontagon
Enneacontagon
Hectogon
257-gon
Chiliagon
Myriagon
65537-gon
Megagon
Gigagon
Teragon
Apeirogon
Star polygons
Pentagram
Hexagram
Heptagram
Octagram
Enneagram
Decagram
Hendecagram
Dodecagram
Icositetragram
Families
Concave polygon
Cyclic polygon
Regular polygon
Polyform
Gnomon
Golygon
Tilings
|
https://en.wikipedia.org/wiki/Manic%20Miner | Manic Miner is a platform game written for the ZX Spectrum by Matthew Smith. It was published by Bug-Byte in 1983, then later the same year by Software Projects. The first game in the Miner Willy series, the design was inspired by Miner 2049er (1982) for the Atari 8-bit family. Retro Gamer called Manic Miner one of the most influential platform games of all time, and it has been ported to numerous home computers, video game consoles, and mobile phones.
Gameplay
In each of the twenty caverns, each one screen in size, are several flashing objects, which the player must collect before Willy's oxygen supply runs out. Once the player has collected the objects in one cavern, they must then go to the now-flashing portal, which will take them to the next cavern. The player must avoid enemies, listed in the cassette inlay as "...Poisonous Pansies, Spiders, Slime, and worst of all, Manic Mining Robots..." which move along predefined paths at constant speeds. Willy can also be killed by falling too far, so players must time the precision of jumps and other movements to prevent such falls or collisions with the enemies.
Extra lives are gained every 10,000 points, and the game ends when the player has no lives left. Above the final portal is a garden. To the right is a house with a white picket fence and a red car parked in front. To the left is a slope leading to a backyard with a pond and tree; a white animal, resembling a cat or mouse, watches the sun set behind the pond. When Willy gains his freedom, the game restarts from the first level with no increase in difficulty.
The in-game music is In the Hall of the Mountain King from Edvard Grieg's music to Henrik Ibsen's play Peer Gynt. The music that plays during the title screen is an arrangement of The Blue Danube.
Release
There are some differences between the Bug-Byte and Software Projects versions.
In Processing Plant, the enemy at the end of the conveyor belt is a bush in the original, whereas the Software Projects one res |
https://en.wikipedia.org/wiki/Installation%20testing | Most software systems have installation procedures that are needed before they can be used for their main purpose. Testing these procedures to achieve an installed software system that may be used is known as installation testing. These procedures may involve full or partial upgrades, and install/uninstall processes.
Installation testing may look for errors that occur in the installation process that affect the user's perception and capability to use the installed software. There are many events that may affect the software installation and installation testing may test for proper installation whilst checking for a number of associated activities and events. Some examples include the following:
A user must select a variety of options.
Dependent files and libraries must be allocated, loaded or located.
Valid hardware configurations must be present.
Software systems may need connectivity to connect to other software systems.
Installation testing may also be considered as an activity-based approach to how to test something. For example, install the software in the various ways and on the various types of systems that it can be installed. Check which files are added or changed on disk. Does the installed software work? What happens when you uninstall?
This testing is typically performed during operational acceptance testing, by a software testing engineer in conjunction with the configuration manager. Implementation testing is usually defined as testing which places a compiled version of code into the testing or pre-production environment, from which it may or may not progress into production. This generally takes place outside of the software development environment to limit code corruption from other future or past releases (or from the use of the wrong version of dependencies such as shared libraries) which may reside on the development environment.
The simplest installation approach is to run an install program, sometimes called package software. This package so |
https://en.wikipedia.org/wiki/Semi-locally%20simply%20connected | In mathematics, specifically algebraic topology, semi-locally simply connected is a certain local connectedness condition that arises in the theory of covering spaces. Roughly speaking, a topological space X is semi-locally simply connected if there is a lower bound on the sizes of the “holes” in X. This condition is necessary for most of the theory of covering spaces, including the existence of a universal cover and the Galois correspondence between covering spaces and subgroups of the fundamental group.
Most “nice” spaces such as manifolds and CW complexes are semi-locally simply connected, and topological spaces that do not satisfy this condition are considered somewhat pathological. The standard example of a non-semi-locally simply connected space is the Hawaiian earring.
Definition
A space X is called semi-locally simply connected if every point in X has a neighborhood U with the property that every loop in U can be contracted to a single point within X (i.e. every loop in U is nullhomotopic in X). The neighborhood U need not be simply connected: though every loop in U must be contractible within X, the contraction is not required to take place inside of U. For this reason, a space can be semi-locally simply connected without being locally simply connected.
Equivalent to this definition, a space X is semi-locally simply connected if every point in X has a neighborhood U for which the homomorphism from the fundamental group of U to the fundamental group of X, induced by the inclusion map of U into X, is trivial.
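The equivalent condition can be written compactly (standard notation, not taken verbatim from the article):

```latex
% X is semi-locally simply connected iff every point has a neighbourhood
% whose loops become trivial in X under the inclusion-induced map:
\forall x \in X \;\; \exists U \ni x \text{ open} : \quad
\operatorname{im}\!\left( \pi_1(U, x) \xrightarrow{\iota_*} \pi_1(X, x) \right) = \{1\}
```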
Most of the main theorems about covering spaces, including the existence of a universal cover and the Galois correspondence, require a space to be path-connected, locally path-connected, and semi-locally simply connected, a condition known as unloopable (délaçable in French). In particular, this condition is necessary for a space to have a simply connected covering space.
Examples
A simple example of a space that is not semi-locally simply connecte |
https://en.wikipedia.org/wiki/Ken%20Sakamura | Ken Sakamura, as of April 2017, is a Japanese professor and dean of the Faculty of Information Networking for Innovation and Design at Toyo University, Japan. He is a former professor in information science at the University of Tokyo (through March 2017). He is the creator of the real-time operating system (RTOS) architecture TRON.
In 2001, he shared the Takeda Award for Social/Economic Well-Being with Richard Stallman and Linus Torvalds.
Career
As of 2006, Sakamura leads the ubiquitous networking laboratory (UNL), located in Gotanda, Tokyo, and the T-Engine forum for consumer electronics. The joint goal of Sakamura's ubiquitous networking specification and the T-Engine forum, is to enable any everyday device to broadcast and receive information. It is essentially a TRON variant, paired with a competing standard to radio-frequency identification (RFID).
Since the foundation of the T-Engine forum, Sakamura has been working on opening Japanese technology to the world. His prior brainchild, TRON, the universal RTOS used in Japanese consumer electronics, has had limited adoption in other countries. Sakamura has signed deals with Chinese and Korean universities to work together on ubiquitous networking. He has also worked with French software component manufacturer NexWave Solutions, Inc. He is an external board member for Nippon Telegraph and Telephone (NTT), Japan.
Ubiquitous Communicator
The Ubiquitous Communicator (UC) is a mobile computing device designed by Sakamura for use in ubiquitous computing. On 15 September 2004, YRP-UNL announced in Japan that it had begun producing a new model after creating five prototypes over three years. The model was used in trial tests circa late 2004. The new model, weighing about 196 grams, contains new features: RFID reader compatible for ucode, a two megapixel charge-coupled device (CCD) camera, a secondary 300,000 pixel camera for videotelephony, support for wireless network technologies, Bluetooth, Wi-Fi, and IrDA, VoIP phone feature, S |
https://en.wikipedia.org/wiki/Axiomatic%20semantics | Axiomatic semantics is an approach based on mathematical logic for proving the correctness of computer programs. It is closely related to Hoare logic.
Axiomatic semantics defines the meaning of a command in a program by describing its effect on assertions about the program state. The assertions are logical statements (predicates with variables), where the variables define the state of the program.
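In the closely related Hoare logic, such assertions appear as a precondition and postcondition around a command. A standard example (illustrative, not from this article): the assignment axiom and one instance of it:

```latex
% Assignment axiom: Q with E substituted for x must hold beforehand.
\{\, Q[E/x] \,\}\; x := E \;\{\, Q \,\}

% Instance: starting from x = 1, the increment establishes x = 2,
% since substituting x+1 for x in (x = 2) yields (x + 1 = 2), i.e. x = 1.
\{\, x = 1 \,\}\; x := x + 1 \;\{\, x = 2 \,\}
```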
See also
Algebraic semantics (computer science) — in terms of algebras
Denotational semantics — by translation of the program into another language
Operational semantics — in terms of the state of the computation
Formal semantics of programming languages — overview
Predicate transformer semantics — describes the meaning of a program fragment as the function transforming a postcondition to the precondition needed to establish it.
Assertion (computing)
References
Formal specification languages
Logic in computer science
Programming language semantics |
https://en.wikipedia.org/wiki/Language%20of%20flowers | Floriography (language of flowers) is a means of cryptological communication through the use or arrangement of flowers. Meaning has been attributed to flowers for thousands of years, and some form of floriography has been practiced in traditional cultures throughout Europe, Asia, and Africa. Plants and flowers are used as symbols in the Hebrew Bible, particularly of love and lovers in the Song of Songs, as an emblem for the Israelite people, and for the coming Messiah.
Interest in floriography soared in Victorian England and in the United States during the 19th century. Gifts of blooms, plants, and specific floral arrangements were used to send a coded message to the recipient, allowing the sender to express feelings which could not be spoken aloud in Victorian society. Armed with floral dictionaries, Victorians often exchanged small "talking bouquets", called nosegays or tussie-mussies, which could be worn or carried as a fashion accessory.
History
According to Jayne Alcock, grounds and gardens supervisor at the Walled Gardens of Cannington, the renewed Victorian era interest in the language of flowers finds its roots in Ottoman Turkey, specifically the court in Constantinople and an obsession it held with tulips during the first half of the 18th century. During the Victorian age, the use of flowers as a means of covert communication coincided with a growing interest in botany. The floriography craze was introduced to Europe by the Englishwoman Mary Wortley Montagu (1689–1762), who brought it to England in 1717, and Aubry de La Mottraye (1674–1743), who introduced it to the Swedish court in 1727. Joseph Hammer-Purgstall's (1809) appears to be the first published list associating flowers with symbolic definitions, while the first dictionary of floriography appears in 1819 when Louise Cortambert, writing under pen name Madame Charlotte de la Tour, wrote .
Robert Tyas was a popular British flower writer, publisher, and clergyman, who lived from 1811 to 1879; hi |
https://en.wikipedia.org/wiki/Angle%20of%20repose | The angle of repose, or critical angle of repose, of a granular material is the steepest angle of descent or dip relative to the horizontal plane on which the material can be piled without slumping. At this angle, the material on the slope face is on the verge of sliding. The angle of repose can range from 0° to 90°. The morphology of the material affects the angle of repose; smooth, rounded sand grains cannot be piled as steeply as rough, interlocking sands. The angle of repose can also be affected by additions of solvents. If a small amount of water is able to bridge the gaps between particles, the electrostatic attraction of the water to mineral surfaces increases the angle of repose and related quantities such as the soil strength.
When bulk granular materials are poured onto a horizontal surface, a conical pile forms. The internal angle between the surface of the pile and the horizontal surface is known as the angle of repose and is related to the density, surface area and shapes of the particles, and the coefficient of friction of the material. Material with a low angle of repose forms flatter piles than material with a high angle of repose.
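The geometry described above can be sketched with a short calculation. Assuming an ideal conical pile, the height follows from the base radius and the angle of repose via tan(θ) = height / radius; the function name and the 30° figure for dry sand are illustrative assumptions, not values from the source.

```python
import math

def pile_height(base_radius_m: float, angle_of_repose_deg: float) -> float:
    """Estimate the height of an ideal conical pile from its base radius
    and angle of repose, using tan(theta) = height / radius.
    (Illustrative sketch; real piles deviate from a perfect cone.)"""
    return base_radius_m * math.tan(math.radians(angle_of_repose_deg))

# A material with a higher angle of repose forms a taller, steeper pile
# on the same footprint than one with a lower angle.
print(round(pile_height(2.0, 30.0), 3))  # -> 1.155 (e.g. a dry sand, assumed ~30 deg)
print(round(pile_height(2.0, 45.0), 3))  # -> 2.0
```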
The term has a related usage in mechanics, where it refers to the maximum angle at which an object can rest on an inclined plane without sliding down. This angle is equal to the arctangent of the coefficient of static friction μs between the surfaces.
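The mechanics relation above (critical angle equals the arctangent of the coefficient of static friction μs) can be checked numerically; the μs value below is a made-up example, not a figure from the source.

```python
import math

def critical_angle_deg(mu_s: float) -> float:
    """Maximum incline angle (in degrees) at which an object rests
    without sliding: theta = arctan(mu_s)."""
    return math.degrees(math.atan(mu_s))

# For an assumed coefficient of static friction of 0.5:
print(round(critical_angle_deg(0.5), 2))  # -> 26.57
```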
Applications of theory
The angle of repose is sometimes used in the design of equipment for the processing of particulate solids. For example, it may be used to design an appropriate hopper or silo to store the material, or to size a conveyor belt for transporting the material. It can also be used in determining whether or not a slope (of a stockpile, or uncompacted gravel bank, for example) would likely collapse; the talus slope is derived from angle of repose and represents the steepest slope a pile of granular material can take. This angle of repose is also crucial i |
https://en.wikipedia.org/wiki/The%20Codebreakers | The Codebreakers – The Story of Secret Writing () is a book by David Kahn, published in 1967, comprehensively chronicling the history of cryptography from ancient Egypt to the time of its writing. The United States government attempted to have the book altered before publication, and it succeeded in part.
Overview
Bradford Hardie III, an American cryptographer during World War II, contributed insider information, German translations from original documents, and intimate real-time operational explanations to The Codebreakers.
The Codebreakers is widely regarded as the best account of the history of cryptography up to its publication. William Crowell, the former deputy director of the National Security Agency, was quoted in Newsday magazine: "Before he (Kahn) came along, the best you could do was buy an explanatory book that usually was too technical and terribly dull."
The Puzzle Palace (1982), written by James Bamford, gives a history of the writing and publication of The Codebreakers. Kahn, then a journalist, was contracted to write a book on cryptology in 1961. He began writing it part-time, and then he quit his job to work on it full-time. The book was to include information on the NSA and, according to Bamford, the agency attempted to stop its publication. The NSA considered various options, including writing a negative review of Kahn's work to be published in the press to discredit him.
A committee of the United States Intelligence Board concluded that the book was "a possibly valuable support to foreign COMSEC authorities" and recommended "further low-key actions as possible, but short of legal action, to discourage Mr. Kahn or his prospective publishers". Kahn's publisher, Macmillan and Sons, handed over the manuscript to the government for review without Kahn's permission on 4 March 1966. Kahn and Macmillan eventually agreed to remove some material from the manuscript, particularly concerning the relationship between the NSA and its counterpart in the Un |