source | text |
|---|---|
https://en.wikipedia.org/wiki/Anisochronous | In telecommunication, the term anisochronous refers to a signal that is not isochronous: it pertains to transmission in which the time interval separating any two corresponding transitions is not necessarily related to the time interval separating any other two transitions. It can also pertain to a data transmission in which there is always a whole number of unit intervals between any two significant instants in the same block or character, but not between significant instants in different blocks or characters.
In practice, anisochronous typically means that data packets are not arriving in the same order they were transmitted, thus dramatically altering the quality of a multimedia transmission (e.g. voice, video, music), or after processing to restore isochronicity, have had significant amounts of latency added. Isochronous and anisochronous are characteristics, while synchronous and asynchronous are relationships.
References
Telecommunication theory
Synchronization |
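The reordering effect described above can be illustrated with a small sketch: a holding (jitter) buffer that restores sequence order at the cost of delaying delivery. All names here are illustrative, not from the article.

```python
import heapq

def reorder(packets, buffer_size):
    """Restore transmission order from out-of-order arrivals using a
    small holding buffer (a min-heap keyed on sequence number). A deeper
    buffer absorbs more reordering, but adds latency before release."""
    heap = []
    output = []
    for seq, payload in packets:
        heapq.heappush(heap, (seq, payload))
        # Release the lowest-numbered packet once the buffer is full.
        if len(heap) > buffer_size:
            output.append(heapq.heappop(heap))
    while heap:
        output.append(heapq.heappop(heap))
    return output

# Packets arrive out of order after traversing a packet network.
arrivals = [(0, "a"), (2, "c"), (1, "b"), (4, "e"), (3, "d")]
restored = reorder(arrivals, buffer_size=2)
```

The buffer restores order only when no packet is displaced further than the buffer depth; the depth is exactly the latency traded for isochronicity.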
https://en.wikipedia.org/wiki/Backhaul%20%28broadcasting%29 | In the context of broadcasting, backhaul refers to uncut program content that is transmitted point-to-point to an individual television station or radio station, broadcast network or other receiving entity where it will be integrated into a finished TV show or radio show. The term is independent of the medium being used to send the backhaul, but communications satellite transmission is very common. When the medium is satellite, it is called a wildfeed.
Backhauls are also referred to sometimes as clean feeds, being clean in the sense that they lack any of the post-production elements that are added later to the feed's content (i.e. on-screen graphics, voice-overs, bumpers, etc.) during the integration of the backhaul feed into a finished show. In live sports production, a backhaul is used to obtain live game footage (usually for later repackaging in highlights shows) when an off-air source is not readily available. In this instance the feed that is being obtained contains all elements except for TV commercials or radio ads run by the host network's master control. This is particularly useful for obtaining live coverage of post-game press conferences or extended game highlights (melts), since the backhaul may stay up to feed these events after the network has concluded their broadcast.
Electronic news gathering, including live via satellite interviews, reporters' live shots, and sporting events are all examples of radio or television content that is backhauled to a station or network before being made available to the public through that station or network. Cable TV channels, particularly public, educational, and government access (PEG) and local origination channels, may also be backhauled to cable headends before making their way to the subscriber. Finished network feeds are not considered backhauls, even if local insertion is used to modify the content prior to final transmission.
There exists a dedicated group of enthusiasts who use TVRO (TV receive-on |
https://en.wikipedia.org/wiki/Group%20extension | In mathematics, a group extension is a general means of describing a group in terms of a particular normal subgroup and quotient group. If Q and N are two groups, then G is an extension of Q by N if there is a short exact sequence
1 → N → G → Q → 1
If G is an extension of Q by N, then G is a group, N is a normal subgroup of G and the quotient group G/N is isomorphic to the group Q. Group extensions arise in the context of the extension problem, where the groups Q and N are known and the properties of G are to be determined. Note that the phrasing "G is an extension of N by Q" is also used by some.
Since any finite group G possesses a maximal normal subgroup N with simple factor group G/N, all finite groups may be constructed as a series of extensions with finite simple groups. This fact was a motivation for completing the classification of finite simple groups.
An extension is called a central extension if the subgroup N lies in the center of G.
Extensions in general
One extension, the direct product, is immediately obvious. If one requires G and Q to be abelian groups, then the set of isomorphism classes of extensions of Q by a given (abelian) group N is in fact a group, which is isomorphic to Ext¹_ℤ(Q, N);
cf. the Ext functor. Several other general classes of extensions are known but no theory exists that treats all the possible extensions at one time. Group extension is usually described as a hard problem; it is termed the extension problem.
To consider some examples, if G = K × H, then G is an extension of both H and K. More generally, if G is a semidirect product of K and H, written as G = K ⋊ H, then G is an extension of H by K, so such products as the wreath product provide further examples of extensions.
Extension problem
The question of what groups G are extensions of Q by N is called the extension problem, and has been studied heavily since the late nineteenth century. As to its motivation, consider that the composition series of a finite group is a finite sequence of subgroups {A_i}, where each A_{i+1} is an extension of A_i by some simple group. T |
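A minimal sketch of why the extension problem has more than one answer: both the cyclic group Z4 and the Klein four-group Z2 × Z2 are extensions of Z2 by Z2, yet they are not isomorphic, as their element-order profiles show. This is a plain-Python check; function names are illustrative.

```python
from itertools import product

# Two groups of order 4, both extensions of Z2 by Z2.
z4 = list(range(4))
def z4_op(a, b):
    return (a + b) % 4

klein = list(product(range(2), range(2)))
def klein_op(a, b):
    return ((a[0] + b[0]) % 2, (a[1] + b[1]) % 2)

def element_orders(elems, op, identity):
    """Order of each element: the smallest k >= 1 with g^k = identity.
    Isomorphic groups must have identical multisets of element orders."""
    orders = []
    for g in elems:
        x, k = g, 1
        while x != identity:
            x = op(x, g)
            k += 1
        orders.append(k)
    return sorted(orders)

z4_profile = element_orders(z4, z4_op, 0)
klein_profile = element_orders(klein, klein_op, (0, 0))
```

Z4 contains elements of order 4 while the Klein group does not, so knowing the subgroup and quotient (here both Z2) does not determine the extension.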
https://en.wikipedia.org/wiki/Database%20schema | The database schema is the structure of a database described in a formal language supported typically by a relational database management system (RDBMS). The term "schema" refers to the organization of data as a blueprint of how the database is constructed (divided into database tables in the case of relational databases). The formal definition of a database schema is a set of formulas (sentences) called integrity constraints imposed on a database. These integrity constraints ensure compatibility between parts of the schema. All constraints are expressible in the same language. A database can be considered a structure in realization of the database language. The states of a created conceptual schema are transformed into an explicit mapping, the database schema. This describes how real-world entities are modeled in the database.
"A database schema specifies, based on the database administrator's knowledge of possible applications, the facts that can enter the database, or those of interest to the possible end-users." The notion of a database schema plays the same role as the notion of theory in predicate calculus. A model of this "theory" closely corresponds to a database, which can be seen at any instant of time as a mathematical object. Thus a schema can contain formulas representing integrity constraints specifically for an application and the constraints specifically for a type of database, all expressed in the same database language. In a relational database, the schema defines the tables, fields, relationships, views, indexes, packages, procedures, functions, queues, triggers, types, sequences, materialized views, synonyms, database links, directories, XML schemas, and other elements.
A database generally stores its schema in a data dictionary. Although a schema is defined in text database language, the term is often used to refer to a graphical depiction of the database structure. In other words, schema is the structure of the database that defines the objec |
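The ideas above can be sketched with Python's built-in SQLite driver: the schema declares integrity constraints, the RDBMS rejects facts that violate them, and the schema itself can be read back from the data dictionary. Table and column names are invented for illustration.

```python
import sqlite3

# An in-memory database with a tiny two-table schema.
conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # enable FK enforcement in SQLite
conn.execute("""
    CREATE TABLE department (
        id   INTEGER PRIMARY KEY,
        name TEXT NOT NULL UNIQUE
    )""")
conn.execute("""
    CREATE TABLE employee (
        id      INTEGER PRIMARY KEY,
        name    TEXT NOT NULL,
        dept_id INTEGER NOT NULL REFERENCES department(id)
    )""")

conn.execute("INSERT INTO department (id, name) VALUES (1, 'Research')")
conn.execute("INSERT INTO employee (id, name, dept_id) VALUES (1, 'Ada', 1)")

# The schema's foreign-key constraint rejects a fact that violates it:
# an employee in a department that does not exist.
try:
    conn.execute("INSERT INTO employee (id, name, dept_id) VALUES (2, 'Bob', 99)")
    violated = False
except sqlite3.IntegrityError:
    violated = True

# The schema is stored in the data dictionary (sqlite_master in SQLite).
tables = [r[0] for r in conn.execute(
    "SELECT name FROM sqlite_master WHERE type='table' ORDER BY name")]
```

Here the integrity constraints (primary keys, NOT NULL, the foreign key) are exactly the "formulas imposed on the database" that the schema comprises.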
https://en.wikipedia.org/wiki/Improper%20integral | In mathematical analysis, an improper integral is an extension of the notion of a definite integral to cases that violate the usual assumptions for that kind of integral. In the context of Riemann integrals (or, equivalently, Darboux integrals), this typically involves unboundedness, either of the set over which the integral is taken or of the integrand (the function being integrated), or both. It may also involve bounded but not closed sets or bounded but not continuous functions. While an improper integral is typically written symbolically just like a standard definite integral, it actually represents a limit of a definite integral or a sum of such limits; thus improper integrals are said to converge or diverge. If a regular definite integral (which may retronymically be called a proper integral) is worked out as if it is improper, the same answer will result.
In the simplest case of a real-valued function of a single variable integrated in the sense of Riemann (or Darboux) over a single interval, improper integrals may be in any of the following forms:
∫_a^∞ f(x) dx
∫_{−∞}^b f(x) dx
∫_{−∞}^∞ f(x) dx
∫_a^b f(x) dx, where f is undefined or discontinuous somewhere on [a, b]
The first three forms are improper because the integrals are taken over an unbounded interval. (They may be improper for other reasons, as well, as explained below.) Such an integral is sometimes described as being of the "first" type or kind if the integrand otherwise satisfies the assumptions of integration. Integrals in the fourth form that are improper because f has a vertical asymptote somewhere on the interval may be described as being of the "second" type or kind. Integrals that combine aspects of both types are sometimes described as being of the "third" type or kind.
In each case above, the improper integral must be rewritten using one or more limits, depending on what is causing the integral to be improper. For example, in case 1, if f is continuous on the entire interval [a, ∞), then
∫_a^∞ f(x) dx = lim_{b→∞} ∫_a^b f(x) dx
The limit on the right is taken to be the definition of the inte |
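The limit definition can be checked numerically. The sketch below (midpoint rule; all names are illustrative) approximates the proper integral of x⁻² over [1, b] for growing b; analytically each equals 1 − 1/b, so the values approach the improper integral's value of 1.

```python
def definite_integral(f, a, b, n=100_000):
    """Midpoint-rule approximation of a proper definite integral."""
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

def f(x):
    return 1.0 / (x * x)

# The improper integral over [1, inf) is the limit of proper integrals
# over [1, b] as b grows without bound.
approximations = [definite_integral(f, 1.0, b) for b in (10.0, 100.0, 1000.0)]
```

Convergence here is what makes the improper integral well defined; for a divergent case such as 1/x, the same sequence would grow without bound instead.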
https://en.wikipedia.org/wiki/Family%20Radio%20Service | The Family Radio Service (FRS) is an improved walkie-talkie radio system authorized in the United States since 1996. This personal radio service uses channelized frequencies around 462 and 467 MHz in the ultra high frequency (UHF) band. It does not suffer the interference effects found on citizens' band (CB) at 27 MHz, or the 49 MHz band also used by cordless telephones, toys, and baby monitors. FRS uses frequency modulation (FM) instead of amplitude modulation (AM). Since the UHF band has different radio propagation characteristics, short-range use of FRS may be more predictable than the more powerful license-free radios operating in the HF CB band.
Initially proposed by RadioShack in 1994 for use by families, FRS has also seen significant adoption by business interests, as an unlicensed, low-cost alternative to the business band. New rules issued by the FCC in May 2017 clarify and simplify the overlap between FRS and General Mobile Radio Service (GMRS) radio services.
Worldwide, a number of similar personal radio services exist; these share the characteristics of low power operation in the UHF (or upper VHF) band using FM, and simplified or no end-user licenses. Exact frequency allocations differ, so equipment legal to operate in one country may cause unacceptable interference in another. Radios approved for FRS are not legal to operate anywhere in Europe.
Technical information
FRS radios use narrow-band frequency modulation (NBFM) with a maximum deviation of 2.5 kilohertz. The channels are spaced at 12.5 kilohertz intervals.
All 22 channels are shared with GMRS radios. Initially, FRS radios were limited to 500 milliwatts on all channels. Since May 18, 2017, the limit is 2 watts on channels 1–7 and 15–22, while the interstitial channels 8–14 remain limited to 0.5 watts.
FRS radios frequently have provisions for using sub-audible tone squelch (CTCSS and DCS) codes, filtering out unwanted chatter from other users on the same frequency. Although these codes are sometimes called "privacy codes" |
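Assuming the interstitial channels 8–14 keep the original 0.5-watt limit, the post-2017 power rules can be encoded as a small lookup (the function name is illustrative):

```python
def frs_power_limit_watts(channel):
    """Maximum transmitter power for an FRS channel under the FCC's
    post-May-2017 rules: 2 W on channels 1-7 and 15-22, and 0.5 W on
    the interstitial channels 8-14."""
    if not 1 <= channel <= 22:
        raise ValueError("FRS channels are numbered 1 through 22")
    return 2.0 if channel <= 7 or channel >= 15 else 0.5
```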
https://en.wikipedia.org/wiki/List%20of%20mathematical%20logic%20topics | This is a list of mathematical logic topics.
For traditional syllogistic logic, see the list of topics in logic. See also the list of computability and complexity topics for more theory of algorithms.
Working foundations
Peano axioms
Giuseppe Peano
Mathematical induction
Structural induction
Recursive definition
Naive set theory
Element (mathematics)
Ur-element
Singleton (mathematics)
Simple theorems in the algebra of sets
Algebra of sets
Power set
Empty set
Non-empty set
Empty function
Universe (mathematics)
Axiomatization
Axiomatic system
Axiom schema
Axiomatic method
Formal system
Mathematical proof
Direct proof
Reductio ad absurdum
Proof by exhaustion
Constructive proof
Nonconstructive proof
Tautology
Consistency proof
Arithmetization of analysis
Foundations of mathematics
Formal language
Principia Mathematica
Hilbert's program
Impredicative
Definable real number
Algebraic logic
Boolean algebra (logic)
Dialectica space
Categorical logic
Model theory
Finite model theory
Descriptive complexity theory
Model checking
Trakhtenbrot's theorem
Computable model theory
Tarski's exponential function problem
Undecidable problem
Institutional model theory
Institution (computer science)
Non-standard analysis
Non-standard calculus
Hyperinteger
Hyperreal number
Transfer principle
Overspill
Elementary Calculus: An Infinitesimal Approach
Criticism of non-standard analysis
Standard part function
Set theory
Forcing (mathematics)
Boolean-valued model
Kripke semantics
General frame
Predicate logic
First-order logic
Infinitary logic
Many-sorted logic
Higher-order logic
Lindström quantifier
Second-order logic
Soundness theorem
Gödel's completeness theorem
Original proof of Gödel's completeness theorem
Compactness theorem
Löwenheim–Skolem theorem
Skolem's paradox
Gödel's incompleteness theorems
Structure (mathematical logic)
Interpretation (logic)
Substructure (mathematics)
Elementary substructure
Skolem hull
Non-standard model
Atomic model (mathematical logic)
Prime model
Saturate |
https://en.wikipedia.org/wiki/Betti%20number | In algebraic topology, the Betti numbers are used to distinguish topological spaces based on the connectivity of n-dimensional simplicial complexes. For the most reasonable finite-dimensional spaces (such as compact manifolds, finite simplicial complexes or CW complexes), the sequence of Betti numbers is 0 from some point onward (Betti numbers vanish above the dimension of a space), and they are all finite.
The nth Betti number represents the rank of the nth homology group, denoted Hn, which tells us the maximum number of cuts that can be made before separating a surface into two pieces or 0-cycles, 1-cycles, etc. For example, if Hn(X) ≅ 0 then bn(X) = 0; if Hn(X) ≅ ℤ then bn(X) = 1; if Hn(X) ≅ ℤ ⊕ ℤ then bn(X) = 2; if Hn(X) ≅ ℤ ⊕ ℤ ⊕ ℤ then bn(X) = 3; etc. Note that only the ranks of infinite groups are considered, so for example if Hn(X) ≅ ℤᵏ ⊕ ℤ/2ℤ, where ℤ/2ℤ is the finite cyclic group of order 2, then bn(X) = k. These finite components of the homology groups are their torsion subgroups, and they are denoted by torsion coefficients.
The term "Betti numbers" was coined by Henri Poincaré after Enrico Betti. The modern formulation is due to Emmy Noether. Betti numbers are used today in fields such as simplicial homology, computer science and digital images.
Geometric interpretation
Informally, the kth Betti number refers to the number of k-dimensional holes on a topological surface. A "k-dimensional hole" is a k-dimensional cycle that is not a boundary of a (k+1)-dimensional object.
The first few Betti numbers have the following definitions for 0-dimensional, 1-dimensional, and 2-dimensional simplicial complexes:
b0 is the number of connected components;
b1 is the number of one-dimensional or "circular" holes;
b2 is the number of two-dimensional "voids" or "cavities".
Thus, for example, a torus has one connected surface component so b0 = 1, two "circular" holes (one equatorial and one meridional) so b1 = 2, and a single cavity enclosed within the surface so b2 = 1.
Another interpretation of bk is the maximum number of k-dimensional curves that can be removed whil |
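For a graph regarded as a 1-dimensional simplicial complex, the first two Betti numbers are easy to compute: b0 counts connected components and b1 = |E| − |V| + b0 counts independent cycles (all higher Betti numbers vanish). A small sketch using union-find; names are illustrative.

```python
def betti_numbers_of_graph(num_vertices, edges):
    """Betti numbers of a graph seen as a 1-dimensional complex:
    b0 = number of connected components,
    b1 = |E| - |V| + b0 (number of independent cycles)."""
    parent = list(range(num_vertices))

    def find(x):
        # Union-find with path halving.
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    for u, v in edges:
        parent[find(u)] = find(v)

    b0 = sum(1 for v in range(num_vertices) if find(v) == v)
    b1 = len(edges) - num_vertices + b0
    return b0, b1

# A triangle (3 vertices, 3 edges): one component, one circular hole.
triangle = betti_numbers_of_graph(3, [(0, 1), (1, 2), (2, 0)])
```

This matches the informal reading above: the triangle is connected (b0 = 1) and encloses one 1-dimensional hole (b1 = 1).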
https://en.wikipedia.org/wiki/Multiple%20buffering | In computer science, multiple buffering is the use of more than one buffer to hold a block of data, so that a "reader" will see a complete (though perhaps old) version of the data, rather than a partially updated version of the data being created by a "writer". It is very commonly used for computer display images. It is also used to avoid the need to use dual-ported RAM (DPRAM) when the readers and writers are different devices.
Description
An easy way to explain how multiple buffering works is to take a real-world example. It is a nice sunny day and you have decided to get the paddling pool out, only you cannot find your garden hose. You'll have to fill the pool with buckets. So you fill one bucket (or buffer) from the tap, turn the tap off, walk over to the pool, pour the water in, walk back to the tap to repeat the exercise. This is analogous to single buffering. The tap has to be turned off while you "process" the bucket of water.
Now consider how you would do it if you had two buckets. You would fill the first bucket and then swap the second in under the running tap. You then have the length of time it takes for the second bucket to fill in order to empty the first into the paddling pool. When you return you can simply swap the buckets so that the first is now filling again, during which time you can empty the second into the pool. This can be repeated until the pool is full. It is easy to see that this technique will fill the pool far faster as there is much less time spent waiting, doing nothing, while buckets fill. This is analogous to double buffering. The tap can be on all the time and does not have to wait while the processing is done.
If you employed another person to carry a bucket to the pool while one is being filled and another emptied, then this would be analogous to triple buffering. If this step took long enough you could employ even more buckets, so that the tap is continuously running filling buckets.
In computer science the situation of |
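The bucket analogy maps onto code roughly as follows: a writer fills a back buffer while readers only ever see a complete front buffer, and a swap publishes each finished frame. This is a simplified sketch under those assumptions, not a production implementation.

```python
import threading

class DoubleBuffer:
    """Writer fills the back buffer; readers see only the complete
    front buffer. swap happens inside write_frame, so a reader never
    observes a partially updated frame."""

    def __init__(self, size):
        self._front = [0] * size
        self._back = [0] * size
        self._lock = threading.Lock()

    def write_frame(self, values):
        # Partial updates touch only the back buffer...
        for i, v in enumerate(values):
            self._back[i] = v
        # ...and become visible all at once when the buffers swap.
        with self._lock:
            self._front, self._back = self._back, self._front

    def read_frame(self):
        # Return a snapshot of the last completely written frame.
        with self._lock:
            return list(self._front)

buf = DoubleBuffer(4)
buf.write_frame([1, 2, 3, 4])
snapshot = buf.read_frame()
```

Triple buffering would add a third buffer so the writer never waits for a swap, just as the extra bucket carrier keeps the tap running continuously.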
https://en.wikipedia.org/wiki/Cuckoo%20clock | A cuckoo clock is a type of clock, typically pendulum driven, that strikes the hours with a sound like a common cuckoo call and has an automated cuckoo bird that moves with each note. Some move their wings and open and close their beaks while leaning forwards, whereas others have only the bird's body leaning forward. The mechanism to produce the cuckoo call has been in use since the middle of the 18th century and has remained almost without variation.
It is unknown who invented the cuckoo clock and where the first one was made. It is thought that much of its development and evolution was made in the Black Forest area in southwestern Germany (in the modern state of Baden-Württemberg), the region where the cuckoo clock was popularized and from where it was exported to the rest of the world, becoming world-famous from the mid-1850s on. Today, the cuckoo clock is one of the favourite souvenirs of travellers in Germany, Switzerland and Austria. It has become a cultural icon of Germany.
Characteristics
The design of a cuckoo clock is now conventional. Many are made in the "traditional style", which are made to hang on a wall. The classical or traditional type includes two subgroups; the carved ones, whose wooden cases are decorated with leaves, animals, etc., and a second one with cases in the shape of a chalet. They have an automaton of a bird that appears through a small trap door when the clock strikes. The cuckoo bird is activated by the clock movement as the clock strikes by means of an arm that is triggered on the hour and half hour.
There are two kinds of movements: one-day (30-hour) and eight-day clockworks. Some have musical devices, and play a tune on a Swiss music box after striking the hours and half-hours. Usually the melody sounds only at full hours in eight-day clocks and both at full and half hours in the one-day timepieces. Musical cuckoo clocks frequently have other automata which move when the music box plays. Today's cuckoo clocks are almost always |
https://en.wikipedia.org/wiki/List%20of%20U.S.%20state%20fossils | Most American states have made a state fossil designation, in many cases during the 1980s. It is common to designate one species in which fossilization has occurred, rather than a single specimen, or a category of fossils not limited to a single species.
Some states that lack an explicit state fossil have nevertheless singled out a fossil for formal designation as a state dinosaur, rock, gem or stone.
Table of state fossils
States lacking a state fossil
Arkansas
Hawaii
Minnesota
The giant beaver was proposed in 2022.
Iowa
The crinoid was proposed in 2018.
New Hampshire
The American mastodon (Mammut americanum) was considered in 2015.
New Jersey
Rhode Island
Texas
The state dinosaur of Texas is Sauroposeidon proteles.
See also
List of U.S. state dinosaurs
List of U.S. state minerals, rocks, and gemstones
Lists of U.S. state insignia
References
External links
List of U.S. state fossils, from National Park Service
State
Fossils
United States
Fossils |
https://en.wikipedia.org/wiki/Image%20analysis | Image analysis or imagery analysis is the extraction of meaningful information from images; mainly from digital images by means of digital image processing techniques. Image analysis tasks can be as simple as reading bar coded tags or as sophisticated as identifying a person from their face.
Computers are indispensable for the analysis of large amounts of data, for tasks that require complex computation, or for the extraction of quantitative information. On the other hand, the human visual cortex is an excellent image analysis apparatus, especially for extracting higher-level information, and for many applications — including medicine, security, and remote sensing — human analysts still cannot be replaced by computers. For this reason, many important image analysis tools such as edge detectors and neural networks are inspired by human visual perception models.
Digital
Digital Image Analysis or Computer Image Analysis is when a computer or electrical device automatically studies an image to obtain useful information from it. Note that the device is often a computer but may also be an electrical circuit, a digital camera or a mobile phone.
It involves the fields of computer or machine vision, and medical imaging, and makes heavy use of pattern recognition, digital geometry, and signal processing. This field of computer science developed in the 1950s at academic institutions such as the MIT A.I. Lab, originally as a branch of artificial intelligence and robotics.
It is the quantitative or qualitative characterization of two-dimensional (2D) or three-dimensional (3D) digital images. For example, 2D images are analyzed in computer vision, and 3D images in medical imaging. The field was established in the 1950s–1970s, for example with pioneering contributions by Azriel Rosenfeld, Herbert Freeman, Jack E. Bresenham, or King-Sun Fu.
Techniques
There are many different techniques used in automatically analysing images. Each technique may be useful for a small |
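As a toy instance of automatic image analysis, the sketch below marks edge pixels wherever the local intensity gradient exceeds a threshold, using plain Python lists as a grayscale image (no imaging library; all names are illustrative):

```python
def edge_map(image, threshold):
    """Mark pixels whose horizontal or vertical intensity difference
    exceeds a threshold: a bare-bones edge detector on a 2D grid of
    grayscale values."""
    h, w = len(image), len(image[0])
    edges = [[0] * w for _ in range(h)]
    for y in range(h - 1):
        for x in range(w - 1):
            gx = image[y][x + 1] - image[y][x]  # horizontal gradient
            gy = image[y + 1][x] - image[y][x]  # vertical gradient
            if gx * gx + gy * gy > threshold * threshold:
                edges[y][x] = 1
    return edges

# A dark region meeting a bright region: edges appear at the boundary.
img = [[0, 0, 9, 9],
       [0, 0, 9, 9],
       [0, 0, 9, 9]]
edges = edge_map(img, threshold=5)
```

Real edge detectors (Sobel, Canny) refine the same idea with smoothing and better gradient estimates, but the extraction-of-information step is the same.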
https://en.wikipedia.org/wiki/Reusability | In computer science and software engineering, reusability is the use of existing assets in some form within the software product development process; these assets are products and by-products of the software development life cycle and include code, software components, test suites, designs and documentation. The opposite concept of reusability is leverage, which modifies existing assets as needed to meet specific system requirements. Because reuse implies the creation of a separately maintained version of the assets, it is preferred over leverage.
Subroutines or functions are the simplest form of reuse. A chunk of code is regularly organized using modules or namespaces into layers. Proponents claim that objects and software components offer a more advanced form of reusability, although it has been tough to objectively measure and define levels or scores of reusability.
The ability to reuse relies in an essential way on the ability to build larger things from smaller parts, and being able to identify commonality among those parts. Reusability is often a required characteristic of platform software. Reusability brings several aspects to software development that do not need to be considered when reusability is not required.
Reusability implies some explicit management of build, packaging, distribution, installation, configuration, deployment, maintenance and upgrade issues. If these issues are not considered, software may appear to be reusable from a design point of view, but will not be reused in practice.
Software reusability more specifically refers to design features of a software element (or collection of software elements) that enhance its suitability for reuse.
Many reuse design principles were developed at the WISR workshops.
Candidate design features for software reuse include:
Adaptable
Brief: small size
Consistency
Correctness
Extensibility
Fast
Flexible
Generic
Localization of volatile (changeable) design assumptions (David Parnas)
Modularity
Orthogonality
Parameterization
Simple: |
https://en.wikipedia.org/wiki/Chown | The command chown, an abbreviation of change owner, is used on Unix and Unix-like operating systems to change the owner of file system files and directories. Unprivileged (regular) users who wish to change the group membership of a file that they own may use chgrp.
The ownership of any file in the system may only be altered by a super-user. A user cannot give away ownership of a file, even when the user owns it. Similarly, only a member of a group can change a file's group ID to that group.
The command is available for Microsoft Windows as part of the UnxUtils collection of native Win32 ports of common GNU Unix-like utilities. The command has also been ported to the IBM i operating system.
See also
chgrp
chmod
takeown
References
External links
chown manual page
The chown Command by The Linux Information Project (LINFO)
Operating system security
Standard Unix programs
Unix SUS2008 utilities
IBM i Qshell commands |
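Python exposes the same operation as os.chown. Since only a superuser may actually transfer ownership, this sketch performs a no-op chown that re-applies the file's current owner, which the file's owner is permitted to do; passing -1 leaves the corresponding ID unchanged.

```python
import os
import tempfile

# Create a file we own, then chown it to its current owner (a no-op).
fd, path = tempfile.mkstemp()
os.close(fd)
try:
    st = os.stat(path)
    # os.chown(path, uid, gid); -1 means "leave this ID unchanged".
    os.chown(path, st.st_uid, -1)
    owner_unchanged = os.stat(path).st_uid == st.st_uid
finally:
    os.remove(path)
```

Actually giving the file to another user (a different uid) would raise PermissionError for an unprivileged process, matching the rule described above.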
https://en.wikipedia.org/wiki/Axiom%20of%20countability | In mathematics, an axiom of countability is a property of certain mathematical objects that asserts the existence of a countable set with certain properties. Without such an axiom, such a set might not provably exist.
Important examples
Important countability axioms for topological spaces include:
sequential space: a set is open if every sequence convergent to a point in the set is eventually in the set
first-countable space: every point has a countable neighbourhood basis (local base)
second-countable space: the topology has a countable base
separable space: there exists a countable dense subset
Lindelöf space: every open cover has a countable subcover
σ-compact space: there exists a countable cover by compact spaces
Relationships with each other
These axioms are related to each other in the following ways:
Every first-countable space is sequential.
Every second-countable space is first countable, separable, and Lindelöf.
Every σ-compact space is Lindelöf.
Every metric space is first countable.
For metric spaces, second-countability, separability, and the Lindelöf property are all equivalent.
Related concepts
Other examples of mathematical objects obeying axioms of countability include sigma-finite measure spaces, and lattices of countable type.
References
General topology
Mathematical axioms |
https://en.wikipedia.org/wiki/Color%20science | Color science is the scientific study of color including lighting and optics; measurement of light and color; the physiology, psychophysics, and modeling of color vision; and color reproduction.
Organizations
International Commission on Illumination (CIE)
Illuminating Engineering Society (IES)
Inter-Society Color Council (ISCC)
Society for Imaging Science and Technology (IS&T)
International Colour Association (AIC)
Optica, formerly the Optical Society of America (OSA)
The Colour Group
Society of Dyers and Colourists (SDC)
American Association of Textile Chemists and Colorists (AATCC)
Association for Research in Vision and Ophthalmology (ARVO)
ACM SIGGRAPH
Vision Sciences Society (VSS)
Council for Optical Radiation Measurements (CORM)
Journals
The preeminent scholarly journal publishing research papers in color science is Color Research and Application, started in 1975 by founding editor-in-chief Fred Billmeyer, along with Gunter Wyszecki, Michael Pointer and Rolf Kuehni, as a successor to the Journal of Colour (1964–1974). Previously most color science work had been split between journals with broader or partially overlapping focus such as the Journal of the Optical Society of America (JOSA), Photographic Science and Engineering (1957–1984), and the Journal of the Society of Dyers and Colourists (renamed Coloration Technology in 2001).
Other journals where color science papers are published include the Journal of Imaging Science & Technology, the Journal of Perceptual Imaging, the Journal of the International Colour Association (JAIC), the Journal of the Color Science Association of Japan, Applied Optics, and the Journal of Vision.
Conferences
Congress of the International Color Association
IS&T Color and Imaging Conference (CIC)
SIGGRAPH
International Symposium for Color Science and Art
References
Color
Image processing
Measurement
Psychophy |
https://en.wikipedia.org/wiki/Modeling%20language | A modeling language is any artificial language that can be used to express data, information, knowledge or systems in a structure that is defined by a consistent set of rules. The rules are used for interpretation of the meaning of components in the structure.
Overview
A modeling language can be graphical or textual.
Graphical modeling languages use a diagram technique with named symbols that represent concepts and lines that connect the symbols and represent relationships and various other graphical notation to represent constraints.
Textual modeling languages may use standardized keywords accompanied by parameters or natural language terms and phrases to make computer-interpretable expressions.
An example of a graphical modeling language and a corresponding textual modeling language is EXPRESS.
Not all modeling languages are executable, and for those that are, the use of them doesn't necessarily mean that programmers are no longer required. On the contrary, executable modeling languages are intended to amplify the productivity of skilled programmers, so that they can address more challenging problems, such as parallel computing and distributed systems.
A large number of modeling languages appear in the literature.
Type of modeling languages
Graphical types
Example of graphical modeling languages in the field of computer science, project management and systems engineering:
Behavior Trees are a formal, graphical modeling language used primarily in systems and software engineering. Commonly used to unambiguously represent the hundreds or even thousands of natural language requirements that are typically used to express the stakeholder needs for a large-scale software-integrated system.
Business Process Modeling Notation (BPMN, and the XML form BPML) is an example of a Process Modeling language.
C-K theory consists of a modeling language for design processes.
DRAKON is a general-purpose algorithmic modeling language for specifying |
https://en.wikipedia.org/wiki/IEEE%20802.1X | IEEE 802.1X is an IEEE Standard for port-based network access control (PNAC). It is part of the IEEE 802.1 group of networking protocols. It provides an authentication mechanism to devices wishing to attach to a LAN or WLAN.
IEEE 802.1X defines the encapsulation of the Extensible Authentication Protocol (EAP) over wired IEEE 802 networks and over 802.11 wireless networks, which is known as "EAP over LAN" or EAPOL. EAPOL was originally specified for IEEE 802.3 Ethernet, IEEE 802.5 Token Ring, and FDDI (ANSI X3T9.5/X3T12 and ISO 9314) in 802.1X-2001, but was extended to suit other IEEE 802 LAN technologies such as IEEE 802.11 wireless in 802.1X-2004. The EAPOL was also modified for use with IEEE 802.1AE ("MACsec") and IEEE 802.1AR (Secure Device Identity, DevID) in 802.1X-2010 to support service identification and optional point to point encryption over the internal LAN segment.
Overview
802.1X authentication involves three parties: a supplicant, an authenticator, and an authentication server. The supplicant is a client device (such as a laptop) that wishes to attach to the LAN/WLAN. The term 'supplicant' is also used interchangeably to refer to the software running on the client that provides credentials to the authenticator. The authenticator is a network device that provides a data link between the client and the network and can allow or block network traffic between the two, such as an Ethernet switch or wireless access point; and the authentication server is typically a trusted server that can receive and respond to requests for network access, and can tell the authenticator if the connection is to be allowed, and various settings that should apply to that client's connection or setting. Authentication servers typically run software supporting the RADIUS and EAP protocols. In some cases, the authentication server software may be running on the authenticator hardware.
The authenticator acts like a security guard to a protected network. The supplicant (i.e., c |
https://en.wikipedia.org/wiki/Regular%20Language%20description%20for%20XML | REgular LAnguage description for XML (RELAX) is a specification for describing XML-based languages.
A description written in RELAX is called a RELAX grammar.
RELAX Core has been approved as an ISO/IEC Technical Report 22250-1 in 2002 (ISO/IEC TR 22250-1:2002). It was developed by ISO/IEC JTC 1/SC 34 (ISO/IEC Joint Technical Committee 1, Subcommittee 34 - Document description and processing languages).
RELAX was designed by Murata Makoto.
In 2001, an XML schema language RELAX NG was created by unifying RELAX Core and James Clark's TREX. It was published as ISO/IEC 19757-2 in 2003.
See also
RELAX NG
Document Schema Definition Languages
References
External links
RELAX home page
ISO/IEC TR 22250-1:2002 - Information technology -- Document description and processing languages -- Regular Language Description for XML (RELAX) -- Part 1: RELAX Core
Computer-related introductions in 2000
Data modeling languages
ISO/IEC standards
XML-based standards
de:RELAX |
https://en.wikipedia.org/wiki/List%20of%20number%20theory%20topics | This is a list of number theory topics. See also:
List of recreational number theory topics
Topics in cryptography
Divisibility
Composite number
Highly composite number
Even and odd numbers
Parity
Divisor, aliquot part
Greatest common divisor
Least common multiple
Euclidean algorithm
Coprime
Euclid's lemma
Bézout's identity, Bézout's lemma
Extended Euclidean algorithm
Table of divisors
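Several of the divisibility topics above (the Euclidean algorithm, Bézout's identity, the extended Euclidean algorithm) are closely related, which a short sketch makes concrete; the example values are illustrative:

```python
def extended_gcd(a: int, b: int):
    # Returns (g, x, y) with a*x + b*y == g == gcd(a, b)  (Bezout's identity).
    # The recursion is the Euclidean algorithm, augmented to carry the
    # Bezout coefficients back up the call chain.
    if b == 0:
        return a, 1, 0
    g, x, y = extended_gcd(b, a % b)
    return g, y, x - (a // b) * y

g, x, y = extended_gcd(240, 46)  # gcd(240, 46) == 2
```

The coefficients x and y are exactly what the "Bézout's identity" entry refers to, and they are what make modular inverses computable in the modular-arithmetic section below.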
Prime number, prime power
Bonse's inequality
Prime factor
Table of prime factors
Formula for primes
Factorization
RSA number
Fundamental theorem of arithmetic
Square-free
Square-free integer
Square-free polynomial
Square number
Power of two
Integer-valued polynomial
Fractions
Rational number
Unit fraction
Irreducible fraction = in lowest terms
Dyadic fraction
Recurring decimal
Cyclic number
Farey sequence
Ford circle
Stern–Brocot tree
Dedekind sum
Egyptian fraction
Modular arithmetic
Montgomery reduction
Modular exponentiation
Linear congruence theorem
Method of successive substitution
Chinese remainder theorem
Fermat's little theorem
Proofs of Fermat's little theorem
Fermat quotient
Euler's totient function
Noncototient
Nontotient
Euler's theorem
Wilson's theorem
Primitive root modulo n
Multiplicative order
Discrete logarithm
Quadratic residue
Euler's criterion
Legendre symbol
Gauss's lemma (number theory)
Congruence of squares
Luhn formula
Mod n cryptanalysis
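Two of the modular-arithmetic topics above, modular exponentiation and Fermat's little theorem, can be illustrated together. This square-and-multiply sketch mirrors what Python's built-in pow(base, exp, mod) already does:

```python
def mod_pow(base: int, exp: int, mod: int) -> int:
    # Square-and-multiply modular exponentiation: O(log exp) multiplications
    result = 1
    base %= mod
    while exp > 0:
        if exp & 1:          # current low bit set: multiply it in
            result = result * base % mod
        base = base * base % mod
        exp >>= 1
    return result

# Fermat's little theorem: a^(p-1) == 1 (mod p) for prime p with gcd(a, p) = 1,
# e.g. 7^12 mod 13 == 1
```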
Arithmetic functions
Multiplicative function
Additive function
Dirichlet convolution
Erdős–Kac theorem
Möbius function
Möbius inversion formula
Divisor function
Liouville function
Partition function (number theory)
Integer partition
Bell numbers
Landau's function
Pentagonal number theorem
Bell series
Lambert series
Analytic number theory: additive problems
Twin prime
Brun's constant
Cousin prime
Prime triplet
Prime quadruplet
Sexy prime
Sophie Germain prime
Cunningham chain
Goldbach's conjecture
Goldbach's weak conjecture
Second Hardy–Littlewood conjecture
Hardy–Littlewood circle method
Schinzel's hypothesis H
Batema |
https://en.wikipedia.org/wiki/SuperCollider | SuperCollider is an environment and programming language originally released in 1996 by James McCartney for real-time audio synthesis and algorithmic composition.
Since then it has been evolving into a system used and further developed by both scientists and artists working with sound. It is a dynamic programming language providing a framework for acoustic research, algorithmic music, interactive programming and live coding.
Originally released under the terms of the GPL-2.0-or-later in 2002, and from version 3.4 under GPL-3.0-or-later, SuperCollider is free and open-source software.
Architecture
Starting with version 3, the SuperCollider environment has been split into two components: a server, scsynth; and a client, sclang. These components communicate using OSC (Open Sound Control).
The SC language combines the object-oriented structure of Smalltalk and features from functional programming languages with a C-family syntax.
The SC Server application supports simple C and C++ plugin APIs, making it easy to write efficient sound algorithms (unit generators), which can then be combined into graphs of calculations. Because all external control in the server happens via OSC, it is possible to use it with other languages or applications.
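Because scsynth is controlled entirely via OSC, any language that can emit OSC packets can drive it. The following sketch encodes a bare OSC message in Python using only the standard library (the /s_new command creates a synth node on the server; the node ID and add-action values here are illustrative):

```python
import struct

def osc_pad(b: bytes) -> bytes:
    # OSC strings are null-terminated and padded to a multiple of 4 bytes
    return b + b"\x00" * (4 - len(b) % 4)

def osc_message(address: str, *args) -> bytes:
    # Encode an OSC message: padded address, padded type-tag string, big-endian args
    typetags = ","
    body = b""
    for a in args:
        if isinstance(a, int):
            typetags += "i"
            body += struct.pack(">i", a)
        elif isinstance(a, float):
            typetags += "f"
            body += struct.pack(">f", a)
        elif isinstance(a, str):
            typetags += "s"
            body += osc_pad(a.encode())
        else:
            raise TypeError(f"unsupported OSC argument: {a!r}")
    return osc_pad(address.encode()) + osc_pad(typetags.encode()) + body

# /s_new synthDefName nodeID addAction targetID
msg = osc_message("/s_new", "default", 1000, 0, 0)
```

Sending msg over UDP to scsynth's listening port would instantiate the "default" synth; in practice a library such as python-osc handles this encoding.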
The SuperCollider synthesis server (scsynth)
SuperCollider's sound generation is bundled into an optimised command-line executable (named scsynth). In most cases it is controlled from within the SuperCollider programming language, but it can be used independently. The audio server has the following features:
Open Sound Control access
Simple ANSI C and C++11 plugin APIs
Supports any number of input and output channels, including massively multichannel setups
Gives access to an ordered tree structure of synthesis nodes which define the order of execution
Bus system which allows dynamically restructuring the signal flow
Buffers for writing and reading
Calculation at different rates depending on the needs: audio rate, contro |
https://en.wikipedia.org/wiki/Dimension%20theorem%20for%20vector%20spaces | In mathematics, the dimension theorem for vector spaces states that all bases of a vector space have equally many elements. This number of elements may be finite or infinite (in the latter case, it is a cardinal number), and defines the dimension of the vector space.
Formally, the dimension theorem for vector spaces states that, given a vector space V, any two bases of V have the same cardinality.
As a basis is a generating set that is linearly independent, the theorem is a consequence of the following theorem, which is also useful: in a vector space V, if G is a generating set and L is a linearly independent set, then the cardinality of L is not larger than the cardinality of G.
In particular, if V is finitely generated, then all its bases are finite and have the same number of elements.
While the proof of the existence of a basis for any vector space in the general case requires Zorn's lemma and is in fact equivalent to the axiom of choice, the uniqueness of the cardinality of the basis requires only the ultrafilter lemma, which is strictly weaker (the proof given below, however, assumes trichotomy, i.e., that all cardinal numbers are comparable, a statement which is also equivalent to the axiom of choice). The theorem can be generalized to arbitrary R-modules for rings R having invariant basis number.
In the finitely generated case the proof uses only elementary arguments of algebra, and does not require the axiom of choice nor its weaker variants.
Proof
Let V be a vector space, let L be a linearly independent set of elements of V, and let G be a generating set. One has to prove that the cardinality of L is not larger than that of G.
If G is finite, this results from the Steinitz exchange lemma. (Indeed, the Steinitz exchange lemma implies that every finite subset of L has cardinality not larger than that of G, hence L is finite with cardinality not larger than that of G.) If G is finite, a proof based on matrix theory is also possible.
Assume that G is infinite. If L is finite, there is nothing to prove. Thus, we may assume that L is also infinite. Let us suppose that the cardinality of L is larger than that of G. We have to prove that this leads to a contradiction.
By Zorn's lemma, every linearly i |
https://en.wikipedia.org/wiki/Audio%20equipment | Audio equipment refers to devices that reproduce, record, or process sound. This includes microphones, radio receivers, AV receivers, CD players, tape recorders, amplifiers, mixing consoles, effects units, headphones, and speakers.
Audio equipment is widely used in many different settings, such as concerts, bars, meeting rooms and the home, wherever there is a need to reproduce, record or amplify sound.
Electronic circuits considered a part of audio electronics may also be designed to achieve certain signal processing operations, in order to make particular alterations to the signal while it is in the electrical form.
Audio signals can be created synthetically through the generation of electric signals from electronic devices.
Audio electronics were traditionally designed with analog electric circuit techniques until advances in digital technology made digital designs practical. Moreover, digital signals can be manipulated by computer software in much the same way as by audio electronic devices, owing to their shared digital representation. Both analog and digital design formats are still used today, and the choice between them largely depends on the application.
See also
Sound recording and reproduction
Sound system (disambiguation)
References
Further reading
Sontheimer, R. (1998). Designing audio circuits. Netherlands: Elektor International Media.
Audio electronics
Consumer electronics |
https://en.wikipedia.org/wiki/RELAX%20NG | In computing, RELAX NG (REgular LAnguage for XML Next Generation) is a schema language for XML—a RELAX NG schema specifies a pattern for the structure and content of an XML document. A RELAX NG schema is itself an XML document but RELAX NG also offers a popular compact, non-XML syntax. Compared to other XML schema languages RELAX NG is considered relatively simple.
It was defined by a committee specification of the OASIS RELAX NG technical committee in 2001 and 2002, based on Murata Makoto's RELAX and James Clark's TREX, and also by part two of the international standard ISO/IEC 19757: Document Schema Definition Languages (DSDL). ISO/IEC 19757-2 was developed by ISO/IEC JTC 1/SC 34 and published in its first version in 2003.
Schema examples
Suppose we want to define an extremely simple XML markup scheme for a book: a book is defined as a sequence of one or more pages; each page contains text only. A sample XML document instance might be:
<book>
<page>This is page one.</page>
<page>This is page two.</page>
</book>
XML syntax
A RELAX NG schema can be written in a nested structure by defining a root element that contains further element definitions, which may themselves contain embedded definitions. A schema for our book in this style, using the full XML syntax, would be written:
<element name="book" xmlns="http://relaxng.org/ns/structure/1.0">
<oneOrMore>
<element name="page">
<text/>
</element>
</oneOrMore>
</element>
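For comparison, the compact (non-XML) syntax mentioned in the introduction can express the same book grammar much more tersely; a sketch of the equivalent compact-syntax schema:

```rnc
element book {
  element page { text }+
}
```

Converters such as James Clark's Trang translate between the XML and compact syntaxes.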
Nested structure becomes unwieldy with many sublevels and cannot define recursive elements, so most complex RELAX NG schemas use references to named pattern definitions located separately in the schema. Here, a "flattened schema" defines precisely the same book markup as the previous example:
<grammar xmlns="http://relaxng.org/ns/structure/1.0">
<start>
<element name="book">
<oneOrMore>
<ref name="page"/>
</oneOrMore>
</element>
</start>
<define name="page">
|
https://en.wikipedia.org/wiki/Valuation%20%28algebra%29 | In algebra (in particular in algebraic geometry or algebraic number theory), a valuation is a function on a field that provides a measure of the size or multiplicity of elements of the field. It generalizes to commutative algebra the notion of size inherent in consideration of the degree of a pole or multiplicity of a zero in complex analysis, the degree of divisibility of a number by a prime number in number theory, and the geometrical concept of contact between two algebraic or analytic varieties in algebraic geometry. A field with a valuation on it is called a valued field.
Definition
One starts with the following objects:
a field K and its multiplicative group K×,
an abelian totally ordered group Γ (written additively).
The ordering and group law on Γ are extended to the set Γ ∪ {∞} by the rules
∞ ≥ α for all α ∈ Γ,
∞ + α = α + ∞ = ∞ for all α ∈ Γ.
Then a valuation of K is any map
v : K → Γ ∪ {∞}
which satisfies the following properties for all a, b in K:
v(a) = ∞ if and only if a = 0,
v(ab) = v(a) + v(b),
v(a + b) ≥ min(v(a), v(b)), with equality if v(a) ≠ v(b).
A valuation v is trivial if v(a) = 0 for all a in K×, otherwise it is non-trivial.
The second property asserts that any valuation is a group homomorphism. The third property is a version of the triangle inequality on metric spaces adapted to an arbitrary Γ (see Multiplicative notation below). For valuations used in geometric applications, the first property implies that any non-empty germ of an analytic variety near a point contains that point.
The valuation can be interpreted as the order of the leading-order term. The third property then corresponds to the order of a sum being the order of the larger term, unless the two terms have the same order, in which case cancellation may occur and the sum may have larger order.
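A standard concrete instance is the p-adic valuation on the integers, which satisfies all three properties above. A small sketch (using float("inf") as a stand-in for the formal element ∞ at 0):

```python
def v_p(n: int, p: int):
    # p-adic valuation: the exponent of the prime p in n; v_p(0) = infinity
    if n == 0:
        return float("inf")
    k = 0
    while n % p == 0:
        n //= p
        k += 1
    return k

# v_2(12) = 2 since 12 = 2^2 * 3; multiplicativity: v_p(ab) = v_p(a) + v_p(b)
```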
For many applications, Γ is an additive subgroup of the real numbers, in which case ∞ can be interpreted as +∞ in the extended real numbers; note that min(a, +∞) = a for any real number a, and thus +∞ is the identity element under the binary operation of minimum. The real numbers (extended by +∞) with the operations of minimum |
https://en.wikipedia.org/wiki/Document%20Schema%20Definition%20Languages | Document Schema Definition Languages (DSDL) is a framework within which multiple validation tasks of different types can be applied to an XML document in order to achieve more complete validation results than just the application of a single technology.
It is specified as a multi-part ISO/IEC Standard, ISO/IEC 19757. It was developed by ISO/IEC JTC 1/SC 34 (ISO/IEC Joint Technical Committee 1, Subcommittee 34 - Document description and processing languages).
DSDL defines a modular set of specifications for describing the document structures, data types, and data relationships in structured information resources.
Part 2: Regular-grammar-based validation – RELAX NG
Part 3: Rule-based validation – Schematron
Part 4: Namespace-based Validation Dispatching Language (NVDL)
Part 5: Extensible Datatypes
Part 7: Character Repertoire Description Language (CREPDL)
Part 8: Document Semantics Renaming Language (DSRL)
Part 9: Namespace and datatype declaration in Document Type Definitions (DTDs) (Datatype- and namespace-aware DTDs)
Part 11: Schema Association
See also
RELAX NG
Schematron
DTD
NVDL
W3C Schema
References
External links
Home page for DSDL (archived from the original on 2016-01-22)
ISO/IEC 19757-2:2003 - Information technology -- Document Schema Definition Language (DSDL) -- Part 2: Regular-grammar-based validation -- RELAX NG
Data modeling languages
ISO/IEC standards
XML
XML-based standards |
https://en.wikipedia.org/wiki/List%20of%20Brazilian%20National%20Forests | According to the Brazilian National System of Conservation Units, a national forest of Brazil is an area with forest cover of predominantly native species that has as its basic objective the multiple sustainable use of the forest resources and scientific research, with emphasis on methods of sustainable exploitation of native forests. There are 67 national forests in Brazil.
References
External links
Forests in Brazil
List
Brazilian National Forests
Brazil
Forests |
https://en.wikipedia.org/wiki/Removable%20media | In computing, removable media are data storage media designed to be readily inserted into and removed from a system. Most early removable media, such as floppy disks and optical discs, require a dedicated read/write device (i.e. a drive) to be installed in the computer, while others, such as USB flash drives, are plug-and-play, with all the hardware required to read them built into the device, so that only driver software needs to be installed in order to communicate with the device. Some removable media readers/drives are integrated into the computer case, while others are standalone devices that need to be separately installed or connected.
Examples of removable media that require a dedicated reader drive include:
Optical discs, e.g. Blu-rays (both standard and UHD versions), DVDs, CDs
Flash memory-based memory cards, e.g. CompactFlash, Secure Digital, Memory Stick
Magnetic storage media
Floppy and Zip disks (now obsolete)
Disk packs (now obsolete)
Magnetic tapes (now obsolete)
Paper data storage, e.g. punched cards, punched tapes (now obsolete)
Examples of removable media that are standalone plug-and-play devices that carry their own reader hardware include:
USB flash drives
Portable storage devices
Dedicated external solid state drives (SSD)
Enclosed mass storage drives, i.e. modified hard disk drives (HDD)/internal SSDs
Peripheral devices that have integrated data storage capability
Digital cameras
Mobile devices such as smartphones, tablets and handheld game consoles
Portable media players
Other external or dockable peripherals that have expandable removable media capabilities, usually via a USB port or memory card reader
USB hubs
Wired or wireless printers
Network routers, access points and switches
Using removable media can pose some computer security risks, including viruses, data theft and the introduction of malware.
History
The earliest form of removable media, punched cards and tapes, predates the electronic computer by cen |
https://en.wikipedia.org/wiki/Oberon%20%28operating%20system%29 | The Oberon System is a modular, single-user, single-process, multitasking operating system written in the programming language Oberon. It was originally developed in the late 1980s at ETH Zurich. The Oberon System has an unconventional visual text user interface (TUI) instead of a conventional command-line interface (CLI) or graphical user interface (GUI). This TUI was very innovative in its time and influenced the design of the Acme text editor for the Plan 9 from Bell Labs operating system.
The latest version of the Oberon System, Project Oberon 2013, is still maintained by Niklaus Wirth and several collaborators, but older ETH versions of the system have been orphaned. The system also evolved into the multi-process, symmetric multiprocessing (SMP) capable A2 (formerly Active Object System (AOS), then Bluebottle), with a zooming user interface (ZUI).
History
The Oberon operating system was originally developed as part of the NS32032-based Ceres workstation project. It was written almost entirely (and, in the 2013 version, entirely) in the Oberon programming language.
The basic system was designed and implemented by Niklaus Wirth and Jürg Gutknecht, and its design and implementation is fully documented in their book "Project Oberon". The user interface and programmer's reference are found in Martin Reiser's book "The Oberon System". The Oberon System was later extended and ported to other hardware platforms by a team at ETH Zurich, and received recognition in popular magazines.
Wirth and Gutknecht (although active computer science professors) refer to themselves as 'part-time programmers' in the book Project Oberon. In late 2013, a few months before his 80th birthday, Wirth published a second edition of Project Oberon. It details implementing the Oberon System using a reduced instruction set computer (RISC) CPU of his own design realized on a Xilinx field-programmable gate array (FPGA) board. It was presented at the symposium organized for his 80th |
https://en.wikipedia.org/wiki/Tiger%20team | A tiger team is a team of specialists assembled to work on a specific goal or to solve a particular problem.
Term
A 1964 paper entitled Program Management in Design and Development used the term tiger team and defined it as "a team of undomesticated and uninhibited technical specialists, selected for their experience, energy, and imagination, and assigned to track down relentlessly every possible source of failure in a spacecraft subsystem or simulation". The paper consists of anecdotes and answers to questions from a panel on improving issues in program management concerning testing and quality assurance in aerospace vehicle development and production. One of the authors was Walter C. Williams, an engineer at the Manned Spacecraft Center and part of the Edwards Air Force Base National Advisory Committee for Aeronautics. Williams suggests that tiger teams are an effective and useful method for advancing the reliability of systems and subsystems in the context of actual flight environments. Jane Goodall, Liam Hunt and Kate Herron, among others, have noted that tigers are not naturally cooperative animals and have suggested referring to "chimpanzee teams" because of the intense cooperation that occurs in chimpanzee social groups.
Examples
A tiger team was crucial to the Apollo 13 crewed lunar mission in 1970. During the mission, part of the Apollo 13 Service Module malfunctioned and exploded. A team of specialists was formed to address the resulting problems and bring the astronauts back to Earth safely, led by NASA Flight and Mission Operations Director Gene Kranz. Kranz and the members of his "White Team", later designated the "Tiger Team", received the Presidential Medal of Freedom for their efforts in the Apollo 13 mission.
In security work, a tiger team is a group that tests an organization's ability to protect its assets by attempting to defeat its physical or information security. In this context, the tiger team is often a permanent team as security is ty |
https://en.wikipedia.org/wiki/Robert%20Mills%20%28physicist%29 | Robert Laurence Mills (April 15, 1927 – October 27, 1999) was an American physicist, specializing in quantum field theory, the theory of alloys, and many-body theory. While sharing an office at Brookhaven National Laboratory, Chen-Ning Yang and Robert Mills formulated in 1954 a theory now known as the Yang–Mills theory – "the foundation for current understanding of how subatomic particles interact, a contribution which has restructured modern physics and mathematics."
Mathematically, Yang and Mills proposed a tensor equation for what are now called Yang–Mills fields (this equation reduces to Maxwell's equations as a special case; see gauge theory):
∂^μ F_{μν} + g [A^μ, F_{μν}] = J_ν.
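For context, the non-abelian field strength entering the Yang–Mills equations can be written, in modern textbook notation (not necessarily the notation of the 1954 paper), as:

```latex
F_{\mu\nu} = \partial_\mu A_\nu - \partial_\nu A_\mu + g\,[A_\mu, A_\nu]
```

Setting the commutator term to zero recovers the electromagnetic field tensor, which is how the equations reduce to Maxwell's as a special case.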
Biography
Mills was born in Englewood, New Jersey, the son of Dorothy C. and Frederick C. Mills. He graduated from George School in Pennsylvania in early 1944. He studied at Columbia College from 1944 to 1948, while on leave from the Coast Guard. Mills demonstrated his mathematical ability by becoming a Putnam Fellow in 1948, and by receiving first-class honors in the Tripos. Mills, then still a novice theoretical physicist, met Yang and assisted him in polishing Yang's hypothesis on non-abelian gauge fields, which later became the Yang–Mills theory, now a cornerstone of modern physics.
The mathematical ability he displayed early on was mastered in his eventual career as a full-time theoretical physicist. He earned a master's degree from Cambridge, and a PhD in Physics under Norman Kroll, from Columbia University in 1955. After a year at the Institute for Advanced Study in Princeton, New Jersey, Mills became professor of physics at Ohio State University in 1956. He remained at Ohio State University until his retirement in 1995.
Mills and Yang shared the 1980 Rumford Premium Prize from the American Academy of Arts and Sciences for their "development of a generalized gauge invariant field theory" in 1954.
Personal life
Mills was married to Elise Ackley in 1948. Together they had sons Edward and Jonathan, |
https://en.wikipedia.org/wiki/Jonathan%20Zenneck | Jonathan Adolf Wilhelm Zenneck (15 April 1871 – 8 April 1959) was a German physicist and electrical engineer who contributed to research on radio circuit performance and to the scientific and educational literature of the pioneer radio art. Zenneck improved the Braun cathode ray tube by adding a second deflection structure at right angles to the first, which allowed two-dimensional viewing of a waveform. This two-dimensional display is fundamental to the oscilloscope.
Early years
Zenneck was born in Ruppertshofen, Württemberg.
In 1885, Zenneck entered the Evangelical-Theological Seminary in Maulbronn. In 1887, while in a Blaubeuren seminary, Zenneck learned Latin, Greek, French, and Hebrew. In 1889, Zenneck enrolled in the University of Tübingen. At the Tübingen Seminary, he studied mathematics and natural sciences. In 1894, Zenneck took the state examination in mathematics and natural sciences and the examination for his doctor's degree. His dissertation, supervised by Theodor Eimer, was on grass snake embryos.
In 1894, Zenneck conducted zoological research (Natural History Museum, London). Between 1894 and 1895, he served in the military.
Middle years
In 1895, Zenneck left zoology and turned to the new field of radio science. He became assistant to Ferdinand Braun and lecturer at the "Physikalisches Institut" in Strasbourg, Alsace. Nikola Tesla's lectures introduced him to the wireless sciences. In 1899, Zenneck started propagation studies of wireless telegraphy, first over land, but then became more interested in the larger ranges that were reached over sea. In 1900 he started ship-to-coast experiments in the North Sea near Cuxhaven, Germany. In 1902 he conducted tests of directional antennas. In 1905, Zenneck left Strasbourg when he was appointed assistant professor at the Danzig Technische Hochschule, and in 1906 he became professor of experimental physics at the Braunschweig Technische Hochschule. Also in 1906, Zenneck wrote "Elect |
https://en.wikipedia.org/wiki/List%20of%20algebraic%20topology%20topics | This is a list of algebraic topology topics.
Homology (mathematics)
Simplex
Simplicial complex
Polytope
Triangulation
Barycentric subdivision
Simplicial approximation theorem
Abstract simplicial complex
Simplicial set
Simplicial category
Chain (algebraic topology)
Betti number
Euler characteristic
Genus
Riemann–Hurwitz formula
Singular homology
Cellular homology
Relative homology
Mayer–Vietoris sequence
Excision theorem
Universal coefficient theorem
Cohomology
List of cohomology theories
Cocycle class
Cup product
Cohomology ring
De Rham cohomology
Čech cohomology
Alexander–Spanier cohomology
Intersection cohomology
Lusternik–Schnirelmann category
Poincaré duality
Fundamental class
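Among the homology topics above, the Euler characteristic is simple enough to compute directly from a simplicial complex; a small sketch that counts faces of each dimension, given the maximal simplices as vertex tuples:

```python
from itertools import combinations

def euler_characteristic(simplices):
    # chi = sum over dimensions k of (-1)^k * (number of k-simplices),
    # counting every face of every listed simplex exactly once
    faces = set()
    for s in simplices:
        for k in range(1, len(s) + 1):
            faces.update(combinations(sorted(s), k))
    counts = {}
    for f in faces:
        counts[len(f) - 1] = counts.get(len(f) - 1, 0) + 1
    return sum((-1) ** k * n for k, n in counts.items())

# boundary of a triangle (a circle): chi = 3 - 3 = 0
hollow_triangle = [(0, 1), (1, 2), (0, 2)]
# filled triangle (a disk): chi = 3 - 3 + 1 = 1
filled_triangle = [(0, 1, 2)]
```

The boundary of a tetrahedron (a sphere) gives chi = 4 - 6 + 4 = 2, matching the classical polyhedron formula V - E + F = 2.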
Applications
Jordan curve theorem
Brouwer fixed point theorem
Invariance of domain
Lefschetz fixed-point theorem
Hairy ball theorem
Degree of a continuous mapping
Borsuk–Ulam theorem
Ham sandwich theorem
Homology sphere
Homotopy theory
Homotopy
Path (topology)
Fundamental group
Homotopy group
Seifert–van Kampen theorem
Pointed space
Winding number
Simply connected
Universal cover
Monodromy
Homotopy lifting property
Mapping cylinder
Mapping cone (topology)
Wedge sum
Smash product
Adjunction space
Cohomotopy
Cohomotopy group
Brown's representability theorem
Eilenberg–MacLane space
Fibre bundle
Möbius strip
Line bundle
Canonical line bundle
Vector bundle
Associated bundle
Fibration
Hopf bundle
Classifying space
Cofibration
Homotopy groups of spheres
Plus construction
Whitehead theorem
Weak equivalence
Hurewicz theorem
H-space
Further developments
Künneth theorem
De Rham cohomology
Obstruction theory
Characteristic class
Chern class
Chern–Simons form
Pontryagin class
Pontryagin number
Stiefel–Whitney class
Poincaré conjecture
Cohomology operation
Steenrod algebra
Bott periodicity theorem
K-theory
Topological K-theory
Adams operation
Algebraic K-theory
Whitehead torsion
Twisted K-theory
Cobordism
Thom space
Suspension functor
Stable homotopy theory
Spectrum (homotopy theory)
Morava K-the |
https://en.wikipedia.org/wiki/Schematron | Schematron is a rule-based validation language for making assertions about the presence or absence of patterns in XML trees. It is a structural schema language expressed in XML using a small number of elements and XPath languages. In many implementations, the Schematron XML is processed into XSLT code for deployment anywhere that XSLT can be used.
Schematron is capable of expressing constraints in ways that other XML schema languages like XML Schema and DTD cannot. For example, it can require that the content of an element be controlled by one of its siblings, or that the root element, whichever element that is, have specific attributes. Schematron can also specify required relationships between multiple XML files. Constraints and content rules may be associated with "plain-English" (or any language) validation error messages, allowing translation of numeric Schematron error codes into meaningful user error messages.
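A minimal Schematron schema illustrating the kind of rule described above (the book/page element names are illustrative, echoing the RELAX NG examples elsewhere in this collection):

```xml
<schema xmlns="http://purl.oclc.org/dsdl/schematron">
  <pattern>
    <rule context="book">
      <assert test="count(page) >= 1">A book must contain at least one page.</assert>
    </rule>
  </pattern>
</schema>
```

The test attribute holds an XPath expression evaluated against each node matching the rule's context; a failed assert produces the human-readable message.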
The current ISO recommendation is Information technology, Document Schema Definition Languages (DSDL), Part 3: Rule-based validation, Schematron (ISO/IEC 19757-3:2020).
Uses
Constraints are specified in Schematron using an XPath-based language that can be deployed as XSLT code, making it practical for applications such as the following:
Adjunct to Structural Validation: By testing for co-occurrence constraints, non-regular constraints, and inter-document constraints, Schematron can extend the validations that can be expressed in languages such as DTDs, RELAX NG or XML Schema.
Lightweight Business Rules Engine: Schematron is not a comprehensive Rete rules engine, but it can be used to express rules about complex structures within an XML document.
XML Editor Syntax Highlighting Rules: Some XML editors use Schematron rules to conditionally highlight XML files for errors. Not all XML editors support Schematron.
Versions
Schematron was invented by Rick Jelliffe while at Academia Sinica Computing Centre, Taiwan. |
https://en.wikipedia.org/wiki/Fifth%20Generation%20Computer%20Systems | The Fifth Generation Computer Systems (FGCS) project was a 10-year initiative begun in 1982 by Japan's Ministry of International Trade and Industry (MITI) to create computers using massively parallel computing and logic programming. It aimed to create an "epoch-making computer" with supercomputer-like performance and to provide a platform for future developments in artificial intelligence. FGCS was ahead of its time, and its excessive ambitions led to commercial failure. However, on a theoretical level, the project spurred the development of concurrent logic programming.
The term "fifth generation" was intended to convey the system as being advanced: In the history of computing hardware, there were four "generations" of computers. Computers using vacuum tubes were called the first generation; transistors and diodes, the second; integrated circuits, the third; and those using microprocessors, the fourth. Whereas previous computer generations had focused on increasing the number of logic elements in a single CPU, the fifth generation, it was widely believed at the time, would instead turn to massive numbers of CPUs to gain performance.
Background
From the late 1960s until the early 1970s, there was much talk about "generations" of computer hardware, then usually organized into three generations.
First generation: Thermionic vacuum tubes. Mid-1940s. IBM pioneered the arrangement of vacuum tubes in pluggable modules. The IBM 650 was a first-generation computer.
Second generation: Transistors. 1956. The era of miniaturization begins. Transistors are much smaller than vacuum tubes, draw less power, and generate less heat. Discrete transistors are soldered to circuit boards, with interconnections accomplished by stencil-screened conductive patterns on the reverse side. The IBM 7090 was a second-generation computer.
Third generation: Integrated circuits (silicon chips containing multiple transistors). 1964. A pioneering example is the ACPX module used in the IBM 360/91, w |
https://en.wikipedia.org/wiki/List%20of%20polynomial%20topics | This is a list of polynomial topics, by Wikipedia page. See also trigonometric polynomial, list of algebraic geometry topics.
Terminology
Degree: The maximum of the degrees of the monomials (for a univariate polynomial, the largest exponent).
Factor: An expression being multiplied.
Linear factor: A factor of degree one.
Coefficient: An expression multiplying one of the monomials of the polynomial.
Root (or zero) of a polynomial: Given a polynomial p(x), the x values that satisfy p(x) = 0 are called roots (or zeroes) of the polynomial p.
Graphing
End behaviour –
Concavity –
Orientation –
Tangency point –
Inflection point – Point where concavity changes.
Basics
Polynomial
Coefficient
Monomial
Polynomial long division
Synthetic division
Polynomial factorization
Rational function
Partial fraction
Partial fraction decomposition over R
Vieta's formulas
Integer-valued polynomial
Algebraic equation
Factor theorem
Polynomial remainder theorem
Elementary abstract algebra
See also Theory of equations below.
Polynomial ring
Greatest common divisor of two polynomials
Symmetric function
Homogeneous polynomial
Polynomial SOS (sum of squares)
Theory of equations
Polynomial family
Quadratic function
Cubic function
Quartic function
Quintic function
Sextic function
Septic function
Octic function
Completing the square
Abel–Ruffini theorem
Bring radical
Binomial theorem
Blossom (functional)
Root of a function
nth root (radical)
Surd
Square root
Methods of computing square roots
Cube root
Root of unity
Constructible number
Complex conjugate root theorem
Algebraic element
Horner scheme
Rational root theorem
Gauss's lemma (polynomial)
Irreducible polynomial
Eisenstein's criterion
Primitive polynomial
Fundamental theorem of algebra
Hurwitz polynomial
Polynomial transformation
Tschirnhaus transformation
Galois theory
Discriminant of a polynomial
Resultant
Elimination theory
Gröbner basis
Regular chain
Triangular decomposition
Sturm's theorem
Descartes' rule of signs
Carlitz–Wan conjecture
Po |
https://en.wikipedia.org/wiki/Arg%20max | In mathematics, the arguments of the maxima (abbreviated arg max or argmax) are the points, or elements, of the domain of some function at which the function values are maximized. In contrast to global maxima, which refers to the largest outputs of a function, arg max refers to the inputs, or arguments, at which the function outputs are as large as possible.
Definition
Given an arbitrary set X, a totally ordered set Y, and a function f : X → Y, the arg max over some subset S of X is defined by
arg max_S f := {x ∈ S : f(s) ≤ f(x) for all s ∈ S}.
If S = X or S is clear from the context, then S is often left out, as in arg max f. In other words, arg max is the set of points x for which f(x) attains the function's largest value (if it exists). It may be the empty set, a singleton, or contain multiple elements.
In the fields of convex analysis and variational analysis, a slightly different definition is used in the special case where Y = [−∞, ∞] are the extended real numbers. In this case, if f is identically equal to −∞ on S then arg max_S f := ∅ (that is, arg max_S f = ∅), and otherwise arg max_S f is defined as above, where in this case it can also be written as:
arg max_S f = {x ∈ S : f(x) = sup_S f},
where it is emphasized that this equality involving sup_S f holds only when f is not identically −∞ on S.
Arg min
The notion of arg min (or argmin), which stands for argument of the minimum, is defined analogously. For instance,
arg min_S f := {x ∈ S : f(s) ≥ f(x) for all s ∈ S}
are the points x for which f(x) attains its smallest value. It is the complementary operator of arg max.
In the special case where Y = [−∞, ∞] are the extended real numbers, if f is identically equal to +∞ on S then arg min_S f := ∅ (that is, arg min_S f = ∅), and otherwise arg min_S f is defined as above; moreover, in this case (of f not identically equal to +∞) it also satisfies:
arg min_S f = {x ∈ S : f(x) = inf_S f}.
Examples and properties
For example, if f(x) is 1 − |x|, then f attains its maximum value of 1 only at the point x = 0. Thus arg max_x (1 − |x|) = {0}.
The arg max operator is different from the max operator. The max operator, when given the same function, returns the maximum value of the function instead of the point or points that cause that function to reach that value; in other words, max_x f(x)
is the element in the set {f(x) : x ∈ arg max_x f(x)}.
Like max may be the empty set (in which case the maximum is undefined) or a singleton, but unlike may not contain multiple element |
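A brute-force sketch over a finite domain (the helper name `arg_max` is ours, not a standard library function) makes the contrast between max and arg max concrete:

```python
def arg_max(domain, f):
    """All points of the domain at which f attains its largest value."""
    best = max(f(x) for x in domain)
    return {x for x in domain if f(x) == best}

domain = range(-3, 4)
f = lambda x: 1 - abs(x)
assert max(f(x) for x in domain) == 1     # max returns the value ...
assert arg_max(domain, f) == {0}          # ... arg max returns the input(s)
assert arg_max(domain, lambda x: x * x) == {-3, 3}   # can contain several points
```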
https://en.wikipedia.org/wiki/Virasoro%20algebra | In mathematics, the Virasoro algebra (named after the physicist Miguel Ángel Virasoro) is a complex Lie algebra and the unique central extension of the Witt algebra. It is widely used in two-dimensional conformal field theory and in string theory.
Definition
The Virasoro algebra is spanned by generators L_n for n ∈ ℤ and the central charge c.
These generators satisfy [c, L_n] = 0 and [L_m, L_n] = (m − n) L_{m+n} + (c/12)(m³ − m) δ_{m+n,0}.
The factor of 1/12 is merely a matter of convention. For a derivation of the algebra as the unique central extension of the Witt algebra, see derivation of the Virasoro algebra.
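As an illustrative check (not part of the article), the Virasoro bracket [L_m, L_n] = (m − n)L_{m+n} + (c/12)(m³ − m)δ_{m+n,0} can be verified to satisfy the Jacobi identity with exact rational arithmetic; the encoding of basis elements as dictionary keys is an arbitrary choice of ours:

```python
from fractions import Fraction

def bracket(m, n):
    """[L_m, L_n] = (m - n) L_{m+n} + (c/12)(m^3 - m) delta_{m+n,0},
    returned as a dict mapping basis labels to rational coefficients."""
    out = {('L', m + n): Fraction(m - n)}
    if m + n == 0:
        out['c'] = Fraction(m**3 - m, 12)
    return out

def ad(m, elem):
    """[L_m, elem] for a linear combination elem; c is central, so it drops out."""
    out = {}
    for key, coeff in elem.items():
        if key == 'c':
            continue
        for k, v in bracket(m, key[1]).items():
            out[k] = out.get(k, Fraction(0)) + coeff * v
    return out

def jacobi(a, b, c):
    """Check [L_a,[L_b,L_c]] + [L_b,[L_c,L_a]] + [L_c,[L_a,L_b]] = 0."""
    total = {}
    for m, inner in ((a, bracket(b, c)), (b, bracket(c, a)), (c, bracket(a, b))):
        for k, v in ad(m, inner).items():
            total[k] = total.get(k, Fraction(0)) + v
    return all(v == 0 for v in total.values())

assert all(jacobi(a, b, c)
           for a in range(-3, 4) for b in range(-3, 4) for c in range(-3, 4))
```

The central term enters the Jacobi sum only through brackets with opposite indices, which is exactly the 2-cocycle condition the (m³ − m)/12 term satisfies.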
The Virasoro algebra has a presentation in terms of two generators (e.g. L_3 and L_{−2}) and six relations.
Representation theory
Highest weight representations
A highest weight representation of the Virasoro algebra is a representation generated by a primary state: a vector v such that L_n v = 0 for n ≥ 1 and L_0 v = h v,
where the number h is called the conformal dimension or conformal weight of v.
A highest weight representation is spanned by eigenstates of L_0. The eigenvalues take the form h + N, where the integer N ≥ 0 is called the level of the corresponding eigenstate.
More precisely, a highest weight representation is spanned by L_0-eigenstates of the type L_{−n_1} L_{−n_2} ⋯ L_{−n_k} v with 0 < n_1 ≤ ⋯ ≤ n_k and k ≥ 0, whose levels are N = n_1 + ⋯ + n_k. Any state whose level is not zero is called a descendant state of v.
For any pair of complex numbers h and c, the Verma module V(c, h) is
the largest possible highest weight representation. (The same letter c is used for both the central element of the Virasoro algebra and its eigenvalue in a representation.)
The states L_{−n_1} ⋯ L_{−n_k} v with 0 < n_1 ≤ ⋯ ≤ n_k and k ≥ 0 form a basis of the Verma module. The Verma module is indecomposable, and for generic values of c and h it is also irreducible. When it is reducible, there exist other highest weight representations with these values of c and h, called degenerate representations, which are cosets of the Verma module. In particular, the unique irreducible highest weight representation with these values of c and h is the quotient of the Verma module by its maximal submodule.
A Verma module is irreducibl |
https://en.wikipedia.org/wiki/Meta-system | Meta-systems have several definitions. In general, they link the concepts "system" and "meta-". A "meta-system" is about other systems, such as describing, generalizing, modelling, or analyzing the other system(s).
Control theory |
https://en.wikipedia.org/wiki/Spinor%20bundle | In differential geometry, given a spin structure on an n-dimensional orientable Riemannian manifold (M, g), one defines the spinor bundle to be the complex vector bundle S = P ×_κ Δ_n associated to the corresponding principal bundle P of spin frames over M and the spin representation κ of its structure group Spin(n) on the space of spinors Δ_n.
A section of the spinor bundle is called a spinor field.
Formal definition
Let (P, F_P) be a spin structure on a Riemannian manifold (M, g), that is, an equivariant lift of the oriented orthonormal frame bundle F_{SO}(M) → M with respect to the double covering ρ : Spin(n) → SO(n) of the special orthogonal group by the spin group.
The spinor bundle S is defined to be the complex vector bundle
S = P ×_κ Δ_n
associated to the spin structure P via the spin representation κ : Spin(n) → U(Δ_n), where U(W) denotes the group of unitary operators acting on a Hilbert space W. It is worth noting that the spin representation κ is a faithful and unitary representation of the group Spin(n).
See also
Clifford bundle
Clifford module bundle
Orthonormal frame bundle
Spin geometry
Spinor
Spinor representation
Notes
Further reading
|
Algebraic topology
Riemannian geometry
Structures on manifolds |
https://en.wikipedia.org/wiki/AppleLink | AppleLink was the name of both Apple Computer's online service for its dealers, third-party developers, and users, and the client software used to access it. Prior to the commercialization of the Internet, AppleLink was a popular service for Mac and Apple IIGS users. The service was offered from about 1986 to 1994 to various groups, before being superseded by their short-lived eWorld and finally today's multiple Apple websites.
Early years
The original AppleLink, which went online in 1985, was a service available only to Apple employees and dealers, and shortly thereafter to Apple University Consortium members. Apple's consumer 800 number touted this, promoting the dealer as the place to turn for help because of their access to AppleLink. In the late 1980s the service was also opened up to software developers, who could use it both as an end-user support system and as a conduit to Apple development for questions and suggestions.
AppleLink used client software written in Pascal under contract to Apple by Pete Burnight/Central Coast Software. The program extended the desktop metaphor of the Macintosh Finder to encompass the areas on the remote server site. These were displayed as folders and files just as local folders and files were. In addition, there was a set of public bulletin boards, and the ability to use email via the service—although initially only between AppleLink users. File transfer for drivers and system software was another important role, and for this Apple created the AppleLink Package format to combine and compress the two forks of a Macintosh file into one for storage and sending. Apple also developed their Communications Control Language (CCL) for AppleLink, a language still used in a very similar form for today's Macintosh modem scripts.
The "back end" of the AppleLink system was hosted on General Electric's Information Services (GEIS) (division) Mark III time-sharing mainframes and worldwide communications network. AppleLink |
https://en.wikipedia.org/wiki/Digital%20video%20recorder | A digital video recorder (DVR) is an electronic device that records video in a digital format to a disk drive, USB flash drive, SD memory card, SSD or other local or networked mass storage device. The term includes set-top boxes with direct to disk recording, portable media players and TV gateways with recording capability, and digital camcorders. Personal computers are often connected to video capture devices and used as DVRs; in such cases the application software used to record video is an integral part of the DVR. Many DVRs are classified as consumer electronic devices; such devices may alternatively be referred to as personal video recorders (PVRs), particularly in Canada. Similar small devices with built-in (~5 inch diagonal) displays and SSD support may be used for professional film or video production, as these recorders often do not have the limitations that built-in recorders in cameras have, offering wider codec support, the removal of recording time limitations and higher bitrates.
History
Hard-disk-based digital video recorders
The first working DVR prototype was developed in 1998 at the Stanford University Computer Science department. The DVR design was a chapter of Edward Y. Chang's PhD dissertation, supervised by Professors Hector Garcia-Molina and Jennifer Widom. Two design papers were published, at the 2017 VLDB conference
and the 1999 ICDE conference. The prototype was developed in 1998 in Pat Hanrahan's CS488 class, Experiments in Digital Television, and was demoed to industrial partners including Sony, Intel, and Apple.
Consumer digital video recorders ReplayTV and TiVo were launched at the 1999 Consumer Electronics Show in Las Vegas, Nevada. Microsoft also demonstrated a unit with DVR capability, but this did not become available until the end of 1999 for full DVR features in Dish Network's DISHplayer receivers. TiVo shipped their first units on March 31, 1999. ReplayTV won the "Best of Show" award in the video category with Netscape co-fou |
https://en.wikipedia.org/wiki/Van%20der%20Pauw%20method | The van der Pauw method is a technique commonly used to measure the resistivity and the Hall coefficient of a sample. Its power lies in its ability to accurately measure the properties of a sample of any arbitrary shape, as long as the sample is approximately two-dimensional (i.e. it is much thinner than it is wide), solid (no holes), and the electrodes are placed on its perimeter. The van der Pauw method employs a four-point probe placed around the perimeter of the sample, in contrast to the linear four-point probe: this allows the van der Pauw method to provide an average resistivity of the sample, whereas a linear array provides the resistivity in the sensing direction. This difference becomes important for anisotropic materials, which can be properly measured using the Montgomery method, an extension of the van der Pauw method (see, for instance, the references).
From the measurements made, the following properties of the material can be calculated:
The resistivity of the material
The doping type (i.e. whether it is a P-type or N-type material)
The sheet carrier density of the majority carrier (the number of majority carriers per unit area). From this the charge density and doping level can be found
The mobility of the majority carrier
The method was first propounded by Leo J. van der Pauw in 1958.
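The measured resistances determine the sheet resistance R_s through the standard van der Pauw equation exp(−πR_vertical/R_s) + exp(−πR_horizontal/R_s) = 1, which has no closed-form solution in general and must be solved numerically. A minimal sketch by bisection (the function name and bracketing interval are our choices):

```python
import math

def sheet_resistance(r_vert, r_horiz, tol=1e-12):
    """Solve exp(-pi*r_vert/Rs) + exp(-pi*r_horiz/Rs) = 1 for Rs by bisection.
    r_vert and r_horiz are the two measured four-point resistances (ohms).
    The left side is monotonically increasing in Rs, so bisection converges."""
    def f(rs):
        return (math.exp(-math.pi * r_vert / rs)
                + math.exp(-math.pi * r_horiz / rs) - 1.0)
    lo = 1e-6 * max(r_vert, r_horiz)   # f(lo) ~ -1
    hi = 1e6 * max(r_vert, r_horiz)    # f(hi) ~ +1
    while hi - lo > tol * hi:
        mid = 0.5 * (lo + hi)
        if f(mid) < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Symmetric sample: the equation reduces to Rs = pi * R / ln 2.
assert abs(sheet_resistance(1.0, 1.0) - math.pi / math.log(2)) < 1e-6
```

For a sample of thickness t, the resistivity is then ρ = R_s · t.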
Conditions
There are five conditions that must be satisfied to use this technique:
1. The sample must have a flat shape of uniform thickness
2. The sample must not have any isolated holes
3. The sample must be homogeneous and isotropic
4. All four contacts must be located at the edges of the sample
5. The area of contact of any individual contact should be at least an order of magnitude smaller than the area of the entire sample.
The second condition can be weakened. The van der Pauw technique can also be applied to samples with one hole.
Sample preparation
In order to use the van der Pauw method, the sample thickness must be much less than the width and length |
https://en.wikipedia.org/wiki/List%20of%20geometric%20topology%20topics | This is a list of geometric topology topics.
Low-dimensional topology
Knot theory
Knot (mathematics)
Link (knot theory)
Wild knots
Examples of knots
Unknot
Trefoil knot
Figure-eight knot (mathematics)
Borromean rings
Types of knots
Torus knot
Prime knot
Alternating knot
Hyperbolic link
Knot invariants
Crossing number
Linking number
Skein relation
Knot polynomials
Alexander polynomial
Jones polynomial
Knot group
Writhe
Quandle
Seifert surface
Braids
Braid theory
Braid group
Kirby calculus
Surfaces
Genus (mathematics)
Examples
Positive Euler characteristic
2-disk
Sphere
Real projective plane
Zero Euler characteristic
Annulus
Möbius strip
Torus
Klein bottle
Negative Euler characteristic
The boundary of the pretzel is a genus three surface
Embedded/Immersed in Euclidean space
Cross-cap
Boy's surface
Roman surface
Steiner surface
Alexander horned sphere
Klein bottle
Mapping class group
Dehn twist
Nielsen–Thurston classification
Three-manifolds
Moise's Theorem (see also Hauptvermutung)
Poincaré conjecture
Thurston elliptization conjecture
Thurston's geometrization conjecture
Hyperbolic 3-manifolds
Spherical 3-manifolds
Euclidean 3-manifolds, Bieberbach Theorem, Flat manifolds, Crystallographic groups
Seifert fiber space
Heegaard splitting
Waldhausen conjecture
Compression body
Handlebody
Incompressible surface
Dehn's lemma
Loop theorem (aka the Disk theorem)
Sphere theorem
Haken manifold
JSJ decomposition
Branched surface
Lamination
Examples
3-sphere
Torus bundles
Surface bundles over the circle
Graph manifolds
Knot complements
Whitehead manifold
Invariants
Fundamental group
Heegaard genus
tri-genus
Analytic torsion
Manifolds in general
Orientable manifold
Connected sum
Jordan-Schönflies theorem
Signature (topology)
Handle decomposition
Handlebody
h-cobordism theorem
s-cobordism theorem
Manifold decomposition
Hilbert-Smith conjecture
Mapping class group
Orbifolds
Examples
Exotic sphere
Homology sphere
Lens space
I-bundle
See also
topology glossary
List of topo |
https://en.wikipedia.org/wiki/Lippmann%20plate | Gabriel Lippmann conceived a two-step method to record and reproduce colours, variously known as direct photochromes, interference photochromes, Lippmann photochromes, Photography in natural colours by direct exposure in the camera or the Lippmann process of colour photography. Lippmann won the Nobel Prize in Physics for this work in 1908.
A Lippmann plate is a clear glass plate (having no anti-halation backing), coated with an almost transparent (very low silver halide content) emulsion of extremely fine grains, typically 0.01 to 0.04 micrometres in diameter.
Consequently, Lippmann plates have an extremely high resolving power exceeding 400 lines/mm.
Method
In Lippmann's method, a glass plate is coated with an ultra-fine-grain light-sensitive film, prepared by the albumen process and containing potassium bromide, then dried, sensitized in the silver bath, washed, irrigated with cyanine solution, and dried again. The back of the film is then brought into optical contact with a reflective surface. This is done by mounting the plate in a specialized holder with pure mercury behind the film. When it is exposed in the camera through the glass side of the plate, the light rays which strike the transparent light-sensitive film are reflected back on themselves and, by interference, create standing waves. The standing waves cause exposure of the emulsion in diffraction patterns. The developed and fixed diffraction patterns constitute a Bragg condition in which diffuse white light is scattered in a specular fashion and undergoes constructive interference in accordance with Bragg's law. The result is an image with colours very similar to those of the original, obtained using a black-and-white photographic process.
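A back-of-the-envelope check of this mechanism (refractive index and wavelength are assumed values, not from the article): the standing waves produce fringes spaced λ/(2n) apart in the emulsion, and a stack with that spacing Bragg-reflects precisely the recording wavelength at normal incidence, which is why the plate reproduces the original colour.

```python
# Assumed values: n is the emulsion's refractive index, lam the recording wavelength.
n = 1.5
lam = 550e-9                   # green light, vacuum wavelength in metres
d = lam / (2 * n)              # spacing of the developed fringes (standing-wave period)
lam_reflected = 2 * n * d      # Bragg's law, first order, normal incidence
assert abs(lam_reflected - lam) < 1e-15   # the plate re-reflects the recording colour
```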
For this method Lippmann won the Nobel Prize in Physics in 1908.
The colour image can only be viewed in the reflection of a diffuse light source from the plate, making the field of view limited, and it cannot be copied. The technique was very insensitive with the emulsions of the time and |
https://en.wikipedia.org/wiki/List%20of%20order%20theory%20topics | Order theory is a branch of mathematics that studies various kinds of objects (often binary relations) that capture the intuitive notion of ordering, providing a framework for saying when one thing is "less than" or "precedes" another.
An alphabetical list of many notions of order theory can be found in the order theory glossary. See also inequality, extreme value and mathematical optimization.
Overview
Partially ordered set
Preorder
Totally ordered set
Total preorder
Chain
Trichotomy
Extended real number line
Antichain
Strict order
Hasse diagram
Directed acyclic graph
Duality (order theory)
Product order
Distinguished elements of partial orders
Greatest element (maximum, top, unit), Least element (minimum, bottom, zero)
Maximal element, minimal element
Upper bound
Least upper bound (supremum, join)
Greatest lower bound (infimum, meet)
Limit superior and limit inferior
Irreducible element
Prime element
Compact element
Subsets of partial orders
Cofinal and coinitial set, sometimes also called dense
Meet-dense set and join-dense set
Linked set (upwards and downwards)
Directed set (upwards and downwards)
centered and σ-centered set
Net (mathematics)
Upper set and lower set
Ideal and filter
Ultrafilter
Special types of partial orders
Completeness (order theory)
Dense order
Distributivity (order theory)
modular lattice
distributive lattice
completely distributive lattice
Ascending chain condition
Infinite descending chain
Countable chain condition, often abbreviated as ccc
Knaster's condition, sometimes denoted property (K)
Well-orders
Well-founded relation
Ordinal number
Well-quasi-ordering
Completeness properties
Semilattice
Lattice
(Directed) complete partial order, (d)cpo
Bounded complete
Complete lattice
Knaster–Tarski theorem
Infinite divisibility
Orders with further algebraic operations
Heyting algebra
Relatively complemented lattice
Complete Heyting algebra
Pointless topology
MV-algebra
Ockham algebras:
Stone algebra
De Morgan algebra
Kleene alg |
https://en.wikipedia.org/wiki/Vandermonde%20matrix | In linear algebra, a Vandermonde matrix, named after Alexandre-Théophile Vandermonde, is a matrix with the terms of a geometric progression in each row: an m × n matrix
with entries V_{i,j} = x_i^j, the jth power of the number x_i, for all zero-based indices i and j. Most authors define the Vandermonde matrix as the transpose of the above matrix.
The determinant of a square Vandermonde matrix (when m = n) is called a Vandermonde determinant or Vandermonde polynomial. Its value is:
det(V) = ∏_{0 ≤ i < j ≤ n−1} (x_j − x_i).
This is non-zero if and only if all x_i are distinct (no two are equal), making the Vandermonde matrix invertible.
Applications
The polynomial interpolation problem is to find a polynomial p(x) = a_0 + a_1 x + ⋯ + a_{n−1} x^{n−1} which satisfies p(x_i) = y_i for given data points (x_0, y_0), …, (x_{n−1}, y_{n−1}). This problem can be reformulated in terms of linear algebra by means of the Vandermonde matrix, as follows. V computes the values of p at the points x_0, …, x_{n−1} via a matrix multiplication V a = y, where a = (a_0, …, a_{n−1}) is the vector of coefficients and y = (y_0, …, y_{n−1}) = (p(x_0), …, p(x_{n−1})) is the vector of values (both written as column vectors).
If the x_i are distinct, then V is a square matrix with non-zero determinant, i.e. an invertible matrix. Thus, given V and y, one can find the required polynomial by solving for its coefficients a in the equation V a = y. That is, the map from coefficients to values of polynomials is a bijective linear mapping with matrix V, and the interpolation problem has a unique solution. This result is called the unisolvence theorem, and is a special case of the Chinese remainder theorem for polynomials.
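A short sketch of solving V a = y with NumPy (the data points are made up; `np.vander` with `increasing=True` builds the matrix with entries x_i to the jth power):

```python
import numpy as np

# Hypothetical data: values of p(t) = 1 + t + t^3 at four distinct points.
x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([1.0, 3.0, 11.0, 31.0])

V = np.vander(x, increasing=True)   # V[i, j] = x[i]**j (zero-based powers)
a = np.linalg.solve(V, y)           # coefficient vector a solving V a = y
assert np.allclose(a, [1.0, 1.0, 0.0, 1.0])   # recovers 1 + t + 0*t^2 + t^3
```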
In statistics, the equation means that the Vandermonde matrix is the design matrix of polynomial regression.
In numerical analysis, solving the equation naïvely by Gaussian elimination results in an algorithm with time complexity O(n3). Exploiting the structure of the Vandermonde matrix, one can use Newton's divided differences method (or the Lagrange interpolation formula) to solve the equation in O(n2) time, which also gives the UL factorization of . The resulting algorithm produces extremely accurate solutions, even if is ill-conditioned |
https://en.wikipedia.org/wiki/Eigenface | An eigenface () is the name given to a set of eigenvectors when used in the computer vision problem of human face recognition. The approach of using eigenfaces for recognition was developed by Sirovich and Kirby and used by Matthew Turk and Alex Pentland in face classification. The eigenvectors are derived from the covariance matrix of the probability distribution over the high-dimensional vector space of face images. The eigenfaces themselves form a basis set of all images used to construct the covariance matrix. This produces dimension reduction by allowing the smaller set of basis images to represent the original training images. Classification can be achieved by comparing how faces are represented by the basis set.
History
The eigenface approach began with a search for a low-dimensional representation of face images. Sirovich and Kirby showed that principal component analysis could be used on a collection of face images to form a set of basis features. These basis images, known as eigenpictures, could be linearly combined to reconstruct images in the original training set. If the training set consists of M images, principal component analysis could form a basis set of N images, where N < M. The reconstruction error is reduced by increasing the number of eigenpictures; however, the number needed is always chosen less than M. For example, if you need to generate N eigenfaces for a training set of M face images, you can say that each face image can be made up of "proportions" of all the N "features" or eigenfaces: Face image1 = (23% of E1) + (2% of E2) + (51% of E3) + ... + (1% of EN).
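A minimal sketch of the eigenpicture construction with NumPy, using random arrays in place of real face images; the choices M = 20, N = 10 and the 64×64 image size are arbitrary assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
faces = rng.random((20, 64 * 64))     # M = 20 hypothetical flattened 64x64 images
mean = faces.mean(axis=0)
centered = faces - mean

# Principal axes of the image set: the rows of Vt are orthonormal basis images.
U, S, Vt = np.linalg.svd(centered, full_matrices=False)
eigenfaces = Vt[:10]                  # keep N = 10 < M basis images

weights = centered @ eigenfaces.T     # each face as "proportions" of eigenfaces
recon = mean + weights @ eigenfaces   # low-dimensional reconstruction
assert np.linalg.norm(faces - recon) < np.linalg.norm(faces - mean)
```

Keeping more rows of Vt lowers the reconstruction error, matching the trade-off described above.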
In 1991 M. Turk and A. Pentland expanded these results and presented the eigenface method of face recognition. In addition to designing a system for automated face recognition using eigenfaces, they showed a way of calculating the eigenvectors of a covariance matrix such that computers of the time could perform eigen-decomposition on a large number of face images. Face |
https://en.wikipedia.org/wiki/Proof%20that%20e%20is%20irrational | The number e was introduced by Jacob Bernoulli in 1683. More than half a century later, Euler, who had been a student of Jacob's younger brother Johann, proved that e is irrational; that is, that it cannot be expressed as the quotient of two integers.
Euler's proof
Euler wrote the first proof of the fact that e is irrational in 1737 (but the text was only published seven years later). He computed the representation of e as a simple continued fraction, which is
e = [2; 1, 2, 1, 1, 4, 1, 1, 6, 1, 1, 8, …].
Since this continued fraction is infinite and every rational number has a terminating continued fraction, e is irrational. A short proof of the previous equality is known. Since the simple continued fraction of e is not periodic, this also proves that e is not a root of a quadratic polynomial with rational coefficients; in particular, e^2 is irrational.
Fourier's proof
The most well-known proof is Joseph Fourier's proof by contradiction, which is based upon the equality
e = ∑_{n=0}^{∞} 1/n!.
Initially e is assumed to be a rational number of the form a/b. The idea is to then analyze the scaled-up difference (here denoted x) between the series representation of e and its strictly smaller partial sum, which approximates the limiting value e. By choosing the scale factor to be the factorial of b, the fraction a/b and the partial sum are turned into integers, hence x must be a positive integer. However, the fast convergence of the series representation implies that x is still strictly smaller than 1. From this contradiction we deduce that e is irrational.
Now for the details. If e is a rational number, there exist positive integers a and b such that e = a/b. Define the number
x = b! ( e − ∑_{n=0}^{b} 1/n! ).
Use the assumption that e = a/b to obtain
x = a (b − 1)! − ∑_{n=0}^{b} b!/n!.
The first term is an integer, and every fraction in the sum is actually an integer because n ≤ b for each term, so n! divides b!. Therefore, under the assumption that e is rational, x is an integer.
We now prove that 0 < x < 1. First, to prove that x is strictly positive, we insert the above series representation of e into the definition of x and obtain
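The bound 0 < x < 1 can be checked with exact rational arithmetic: substituting the series into the definition gives x = Σ_{n>b} b!/n!, which is strictly between 0 and 1/b. A sketch (the truncation at 60 terms only makes the computed value smaller, so the upper bound still holds):

```python
from fractions import Fraction

def x_tail(b, terms=60):
    """Truncation of x = b!*(e - sum_{n<=b} 1/n!) = sum_{n>b} b!/n!,
    computed term by term as exact rationals."""
    total, prod = Fraction(0), Fraction(1)
    for n in range(b + 1, b + 1 + terms):
        prod /= n              # prod == b!/n!
        total += prod
    return total

for b in range(1, 10):
    t = x_tail(b)
    assert 0 < t < Fraction(1, b) <= 1   # x is strictly between 0 and 1
```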
https://en.wikipedia.org/wiki/Projective%20representation | In the field of representation theory in mathematics, a projective representation of a group G on a vector space V over a field F is a group homomorphism from G to the projective linear group
where GL(V) is the general linear group of invertible linear transformations of V over F, and F∗ is the normal subgroup consisting of nonzero scalar multiples of the identity transformation (see Scalar transformation).
In more concrete terms, a projective representation of G is a collection of operators ρ(g) ∈ GL(V), g ∈ G, satisfying the homomorphism property up to a constant:
ρ(g) ρ(h) = c(g, h) ρ(gh),
for some constant c(g, h) ∈ F∗. Equivalently, a projective representation of G is a collection of operators ρ̃(g) ∈ PGL(V), g ∈ G, such that ρ̃(g) ρ̃(h) = ρ̃(gh). Note that, in this notation, ρ̃(g) is a set of linear operators related by multiplication with some nonzero scalar.
If it is possible to choose a particular representative ρ(g) ∈ GL(V) in each family of operators ρ̃(g) in such a way that the homomorphism property is satisfied on the nose, rather than just up to a constant, then we say that ρ̃ can be "de-projectivized", or that ρ̃ can be "lifted to an ordinary representation". More concretely, we thus say that ρ̃ can be de-projectivized if there are ρ(g) ∈ ρ̃(g) for each g ∈ G such that ρ(g) ρ(h) = ρ(gh). This possibility is discussed further below.
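A standard concrete example, shown here numerically with NumPy, is the projective representation of the Klein four-group Z2 × Z2 by Pauli matrices: the homomorphism property holds only up to a sign, and it is a standard fact (not checked here) that no choice of scalar rescalings removes the signs.

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)   # Pauli sigma_x
Z = np.array([[1, 0], [0, -1]], dtype=complex)  # Pauli sigma_z
rho = {(0, 0): I2, (1, 0): X, (0, 1): Z, (1, 1): X @ Z}

# rho(a) rho(b) = c(a, b) rho(a + b) with c(a, b) = +/-1: only the
# up-to-constant homomorphism property is verified below.
for a in rho:
    for b in rho:
        ab = ((a[0] + b[0]) % 2, (a[1] + b[1]) % 2)
        P, Q = rho[a] @ rho[b], rho[ab]
        c = (P @ Q.conj().T).trace() / 2     # the scalar with P = c Q
        assert np.allclose(P, c * Q) and np.isclose(abs(c), 1.0)
```

Since X and Z anticommute, c((1,0),(0,1)) = +1 but c((0,1),(1,0)) = −1, so the cocycle c is genuinely nontrivial.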
Linear representations and projective representations
One way in which a projective representation can arise is by taking a linear group representation of G on V and applying the quotient map
GL(V) → PGL(V),
which is the quotient by the subgroup F∗ of scalar transformations (diagonal matrices with all diagonal entries equal). The interest for algebra is in the process in the other direction: given a projective representation, try to 'lift' it to an ordinary linear representation. A general projective representation ρ : G → PGL(V) cannot be lifted to a linear representation G → GL(V), and the obstruction to this lifting can be understood via group cohomology, as described below.
However, one can lift a projective representation of G to a linear representation of a different group H, which will be a central extension of G.
https://en.wikipedia.org/wiki/IBM%20Personal%20Computer%20AT | The IBM Personal Computer AT (model 5170, abbreviated as IBM AT or PC/AT) was released in 1984 as the fourth model in the IBM Personal Computer line, following the IBM PC/XT and its IBM Portable PC variant. It was designed around the Intel 80286 microprocessor.
Name
IBM did not specify an expanded form of "AT" on the machine, press releases, brochures or documentation, but some sources expand the term as "Advanced Technology", including at least one internal IBM document.
History
IBM's 1984 introduction of the AT was seen as an unusual move for the company, which typically waited for competitors to release new products before producing its own models. At $4,000–6,000, it was only slightly more expensive than considerably slower IBM models. The announcement surprised rival executives, who admitted that matching IBM's prices would be difficult. No major competitor showed a comparable computer at COMDEX Las Vegas that year.
Features
The AT is IBM PC compatible, with the most significant difference being a move to the 80286 processor from the 8088 processor of prior models. Like the IBM PC, the AT supported an optional math co-processor chip, the Intel 80287, for faster execution of floating point operations.
In addition, it introduces the AT bus, later known as the ISA bus, a 16-bit bus with backwards compatibility with 8-bit PC-compatible expansion cards. The bus also offered fifteen IRQs and seven DMA channels, expanded from eight IRQs and four DMA channels on the PC, achieved by adding a second 8259A IRQ controller and a second 8237A DMA controller. Some IRQ and DMA channels are used by the motherboard and not exposed on the expansion bus. Both the second IRQ and DMA controllers are cascaded through the first, each consuming one channel of the primary chip. In addition to these chips, an Intel 82284 clock driver/ready interface and an Intel 82288 bus controller support the microprocessor.
The 24-bit address bus of the 286 expands RAM capacity to 16 MB.
PC DOS 3.0 was included with support for |
https://en.wikipedia.org/wiki/Diagonal%20lemma | In mathematical logic, the diagonal lemma (also known as diagonalization lemma, self-reference lemma or fixed point theorem) establishes the existence of self-referential sentences in certain formal theories of the natural numbers—specifically those theories that are strong enough to represent all computable functions. The sentences whose existence is secured by the diagonal lemma can then, in turn, be used to prove fundamental limitative results such as Gödel's incompleteness theorems and Tarski's undefinability theorem.
Background
Let ℕ be the set of natural numbers. A first-order theory T in the language of arithmetic represents the computable function f : ℕ → ℕ if there exists a "graph" formula G_f(x, y) in the language of T such that for each n ∈ ℕ, T proves (∀y)[G_f(n°, y) ↔ y = f(n)°].
Here n° is the numeral corresponding to the natural number n, which is defined to be the n-th successor of the presumed first numeral 0 in T.
The diagonal lemma also requires a systematic way of assigning to every formula θ a natural number #θ (also written as #(θ)) called its Gödel number. Formulas can then be represented within T by the numerals corresponding to their Gödel numbers. For example, θ is represented by (#θ)°.
The diagonal lemma applies to theories capable of representing all primitive recursive functions. Such theories include first-order Peano arithmetic and the weaker Robinson arithmetic, and even to a much weaker theory known as R. A common statement of the lemma (as given below) makes the stronger assumption that the theory can represent all computable functions, but all the theories mentioned have that capacity, as well.
Statement of the lemma
Intuitively, ψ is a self-referential sentence: ψ says that ψ itself has the property F. The sentence ψ can also be viewed as a fixed point of the operation assigning to each formula θ the sentence F((#θ)°). The sentence ψ constructed in the proof is not literally the same as F((#ψ)°), but is provably equivalent to it in the theory T.
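A toy model of the fixed-point construction (strings stand in for Gödel numbers, and the names are ours) shows how diagonalization produces self-reference:

```python
def diag(formula: str) -> str:
    """Toy 'diagonal' operation: substitute the quotation of the formula
    for its own free variable x (strings stand in for Goedel numbering)."""
    return formula.replace("x", repr(formula))

phi = "P(diag(x))"   # "the diagonalization of x has property P"
psi = diag(phi)      # psi == "P(diag('P(diag(x))'))"
# The term quoted inside psi diagonalizes back to psi itself, so psi
# asserts property P of (a term denoting) psi: a fixed point, as in the lemma.
assert psi == 'P(diag(' + repr(phi) + '))'
assert diag(phi) == psi
```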
Proof
Let f be the function defined by:
f(#θ) = #(θ((#θ)°)) for each formula θ with only one free variable in theory T
https://en.wikipedia.org/wiki/Tabernanthe%20iboga | Tabernanthe iboga (iboga) is an evergreen rainforest shrub native to Central Africa. A member of the Apocynaceae family indigenous to Gabon, the Democratic Republic of Congo, and the Republic of Congo, it is cultivated across Central Africa for its medicinal and other effects.
In African traditional medicine and rituals, the yellowish root or bark is used to produce hallucinations and near-death experiences, with some fatalities occurring. In high doses, ibogaine is considered to be toxic, and has caused serious comorbidities when used with opioids or prescription drugs. The United States Drug Enforcement Administration (DEA) lists ibogaine as a Schedule I controlled substance under the Controlled Substances Act.
Description
T. iboga is native to tropical forests, preferring moist soil in partial shade. It bears dark green, narrow leaves and clusters of tubular flowers on an erect and branching stem, with yellow-orange fruits resembling chili pepper.
Normally growing to a height of 2 m, T. iboga may eventually grow into a small tree up to 10 m tall, given the right conditions. The flowers are yellowish-white or pink and followed by a fruit, orange at maturity, that may be either globose or fusiform. Its yellow-fleshed roots contain a number of indole alkaloids, most notably ibogaine, which is found in the highest concentration in the bark of the roots. The root material, bitter in taste, causes a degree of anaesthesia in the mouth as well as systemic numbness of the skin.
Taxonomy
Publication of binomial
Tabernanthe iboga was described by Henri Ernest Baillon and published in Bulletin Mensuel de la Société Linnéenne de Paris 1: 783 in the year 1889.
Etymology
The genus name Tabernanthe is a compound of the Latin taberna, "tavern"/"hut"/"(market) stall" and Greek ἄνθος (anthos), "flower" – giving a literal meaning of "tavern flower". On the other hand, it may equally well have been intended (by way of a type of botanical shorthand) to mean "having a flower resembling that of plants |
https://en.wikipedia.org/wiki/Antichain | In mathematics, in the area of order theory, an antichain is a subset of a partially ordered set such that any two distinct elements in the subset are incomparable.
The size of the largest antichain in a partially ordered set is known as its width. By Dilworth's theorem, this also equals the minimum number of chains (totally ordered subsets) into which the set can be partitioned. Dually, the height of the partially ordered set (the length of its longest chain) equals by Mirsky's theorem the minimum number of antichains into which the set can be partitioned.
The family of all antichains in a finite partially ordered set can be given join and meet operations, making them into a distributive lattice. For the partially ordered system of all subsets of a finite set, ordered by set inclusion, the antichains are called Sperner families
and their lattice is a free distributive lattice, with a Dedekind number of elements. More generally, counting the number of antichains of a finite partially ordered set is #P-complete.
Definitions
Let (S, ≤) be a partially ordered set. Two elements a and b of a partially ordered set are called comparable if a ≤ b or b ≤ a. If two elements are not comparable, they are called incomparable; that is, a and b are incomparable if neither a ≤ b nor b ≤ a.
A chain in S is a subset C ⊆ S in which each pair of elements is comparable; that is, C is totally ordered. An antichain in S is a subset A of S in which each pair of different elements is incomparable; that is, there is no order relation between any two different elements in A.
(However, some authors use the term "antichain" to mean strong antichain, a subset such that there is no element of the poset smaller than two distinct elements of the antichain.)
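As a concrete sketch of these definitions (illustrative code, not from the article; the divisibility order on small integers serves as the poset):

```python
from itertools import combinations

def is_antichain(subset, leq):
    """True if no two distinct elements of `subset` are comparable under `leq`."""
    return all(not leq(a, b) and not leq(b, a) for a, b in combinations(subset, 2))

def width(elements, leq):
    """Cardinality of a largest antichain, by brute force (fine for tiny posets)."""
    elements = list(elements)
    for r in range(len(elements), 0, -1):
        if any(is_antichain(c, leq) for c in combinations(elements, r)):
            return r
    return 0

divides = lambda a, b: b % a == 0
print(is_antichain((4, 5, 6), divides))   # True: none divides another
print(width(range(1, 7), divides))        # 3, e.g. the antichain {4, 5, 6}
```

By Dilworth's theorem, the width computed here also equals the minimum number of chains into which the poset can be partitioned.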
Height and width
A maximal antichain is an antichain that is not a proper subset of any other antichain. A maximum antichain is an antichain that has cardinality at least as large as every other antichain. The width of a partially ordered set is the cardinality of a maximum antichain. Any anti |
https://en.wikipedia.org/wiki/Lindemann%E2%80%93Weierstrass%20theorem | In transcendental number theory, the Lindemann–Weierstrass theorem is a result that is very useful in establishing the transcendence of numbers. It states the following:
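The displayed statement is missing from the extracted text; the standard formulation (symbols supplied here, not quoted from the source) is:

```latex
\textbf{Theorem (Lindemann--Weierstrass).} If $\alpha_1, \ldots, \alpha_n$ are algebraic
numbers that are linearly independent over the rational numbers $\mathbb{Q}$, then
$e^{\alpha_1}, \ldots, e^{\alpha_n}$ are algebraically independent over $\mathbb{Q}$.
```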
In other words, the extension field ℚ(e^α₁, …, e^αₙ) has transcendence degree n over ℚ.
An equivalent formulation is the following: This equivalence transforms a linear relation over the algebraic numbers into an algebraic relation over ℚ by using the fact that a symmetric polynomial whose arguments are all conjugates of one another gives a rational number.
The theorem is named for Ferdinand von Lindemann and Karl Weierstrass. Lindemann proved in 1882 that e^α is transcendental for every non-zero algebraic number α, thereby establishing that π is transcendental (see below). Weierstrass proved the above more general statement in 1885.
The theorem, along with the Gelfond–Schneider theorem, is extended by Baker's theorem, and all of these would be further generalized by Schanuel's conjecture.
Naming convention
The theorem is also known variously as the Hermite–Lindemann theorem and the Hermite–Lindemann–Weierstrass theorem. Charles Hermite first proved the simpler theorem where the exponents are required to be rational integers and linear independence is only assured over the rational integers, a result sometimes referred to as Hermite's theorem. Although that appears to be a special case of the above theorem, the general result can be reduced to this simpler case. Lindemann was the first to allow algebraic numbers into Hermite's work in 1882. Shortly afterwards Weierstrass obtained the full result, and further simplifications have been made by several mathematicians, most notably by David Hilbert and Paul Gordan.
Transcendence of e and π
The transcendence of e and π are direct corollaries of this theorem.
Suppose α is a non-zero algebraic number; then {α} is a linearly independent set over the rationals, and therefore by the first formulation of the theorem {e^α} is an algebraically independent set; or in other words e^α is |
https://en.wikipedia.org/wiki/Linearly%20ordered%20group | In mathematics, specifically abstract algebra, a linearly ordered or totally ordered group is a group G equipped with a total order "≤" that is translation-invariant. This may have different meanings. We say that (G, ≤) is a:
left-ordered group if ≤ is left-invariant, that is a ≤ b implies ca ≤ cb for all a, b, c in G,
right-ordered group if ≤ is right-invariant, that is a ≤ b implies ac ≤ bc for all a, b, c in G,
bi-ordered group if ≤ is bi-invariant, that is it is both left- and right-invariant.
A group G is said to be left-orderable (or right-orderable, or bi-orderable) if there exists a left- (or right-, or bi-) invariant order on G. A simple necessary condition for a group to be left-orderable is to have no elements of finite order; however this is not a sufficient condition. It is equivalent for a group to be left- or right-orderable; however there exist left-orderable groups which are not bi-orderable.
Further definitions
In this section ≤ is a left-invariant order on a group G with identity element e. All that is said applies to right-invariant orders with the obvious modifications. Note that ≤ being left-invariant is equivalent to the order ≤′ defined by g ≤′ h if and only if h⁻¹ ≤ g⁻¹ being right-invariant. In particular a group being left-orderable is the same as it being right-orderable.
In analogy with ordinary numbers we call an element g of an ordered group positive if e < g. The set of positive elements in an ordered group is called the positive cone; it is often denoted G₊; the slightly different notation G⁺ is used for the positive cone together with the identity element.
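A concrete sketch (illustrative, not from the article): under addition, ℤ² with the lexicographic order is an ordered group; the code below checks, on a finite sample, that its positive cone is closed under the group operation and that every element is exactly one of positive, an inverse of a positive, or the identity:

```python
from itertools import product

def positive(g):
    """Membership in the positive cone of the lexicographic order on Z^2."""
    x, y = g
    return x > 0 or (x == 0 and y > 0)

def leq(g, h):
    """g <= h iff g == h or (-g) + h lies in the positive cone (left-invariant)."""
    return g == h or positive((h[0] - g[0], h[1] - g[1]))

sample = list(product(range(-3, 4), repeat=2))
# Closure: the sum of two positive elements is positive.
assert all(positive((a[0] + b[0], a[1] + b[1]))
           for a in sample for b in sample if positive(a) and positive(b))
# Trichotomy: exactly one of "g positive", "-g positive", "g is the identity".
assert all(positive(g) + positive((-g[0], -g[1])) + (g == (0, 0)) == 1
           for g in sample)
print(leq((0, 1), (1, -5)))  # True: the first coordinate decides
```

Since ℤ² is abelian, the left-invariant order here is automatically bi-invariant.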
The positive cone characterises the order ≤; indeed, by left-invariance we see that g ≤ h if and only if g⁻¹h ∈ G⁺. In fact a left-ordered group can be defined as a group G together with a subset P satisfying the two conditions that:
for g, h ∈ P we have also gh ∈ P;
let P⁻¹ = {g⁻¹ : g ∈ P}; then G is the disjoint union of P, P⁻¹ and {e}.
The order ≤ associated with P is defined by g ≤ h if and only if g⁻¹h ∈ P ∪ {e}; the first condition amounts to left-invariance and the second to the |
https://en.wikipedia.org/wiki/Gordon%20Bell | Chester Gordon Bell (born August 19, 1934) is an American electrical engineer and manager. An early employee of Digital Equipment Corporation (DEC) 1960–1966, Bell designed several of their PDP machines and later became Vice President of Engineering 1972–1983, overseeing the development of the VAX computer systems. Bell's later career includes entrepreneur, investor, founding Assistant Director of NSF's Computing and Information Science and Engineering Directorate 1986–1987, and researcher emeritus at Microsoft Research, 1995–2015.
Early life and education
Gordon Bell was born in Kirksville, Missouri. He grew up helping with the family business, Bell Electric, repairing appliances and wiring homes.
Bell received a BS (1956), and MS (1957) in electrical engineering from MIT. He then went to the New South Wales University of Technology (now UNSW) in Australia on a Fulbright Scholarship, where he taught classes on computer design, programmed one of the first computers to arrive in Australia (called UTECOM, an English Electric DEUCE), and published his first academic paper. Returning to the US, he worked in the MIT Speech Computation Laboratory under Professor Ken Stevens, where he wrote the first analysis by synthesis program.
Career
Digital Equipment Corporation
The DEC founders Ken Olsen and Harlan Anderson recruited him for their new company in 1960, where he designed the I/O subsystem of the PDP-1, including the first UART. Bell was the architect of the PDP-4 and PDP-6. Other architectural contributions were to the PDP-5 and to the PDP-11 Unibus and General Registers architecture.
After DEC, Bell went to Carnegie Mellon University in 1966 to teach computer science, but returned to DEC in 1972 as vice-president of engineering, where he was in charge of the VAX, DEC's most successful computer.
Entrepreneur and policy advisor
Bell retired from DEC in 1983 after a heart attack, but soon after founded Encore Computer, one of the first shared memory, multiple-microproc |
https://en.wikipedia.org/wiki/List%20of%20web%20directories | A Web directory is a listing of Websites organized in a hierarchy or interconnected list of categories.
The following is a list of notable Web directory services.
General
DOAJ.org – Directory of Open Access Journals
DMOZ (also known as Open Directory Project) – was at one point the largest directory of the Web. Its open content was mirrored at many sites. Offline since March 2017. Continued since August 2018 as Curlie.org.
Jasmine Directory – lists websites by topic and by region, specializing in business websites.
Sources – general subject web portal for journalists, freelance writers, editors, authors and researchers; in addition to a search engine it includes a subject-based directory.
Starting Point Directory – Founded in 1995, relaunched in 2006, charges a fee.
World Wide Web Virtual Library (VLIB) – oldest directory of the Web.
Business directories
Business.com – Integrated directory of knowledge resources and companies, that charges a fee for listing review and operates as a pay per click search engine.
Yell – is a digital marketing and online directory business in the United Kingdom
Niche
Library and Archival Exhibitions on the Web – international database of online exhibitions which is a service of the Smithsonian Institution Libraries.
ProgrammableWeb – resource on APIs that provides a directory of APIs.
Virtual Library museums pages – directory of museum websites around the world.
Regional
2345.com – Chinese web directory founded in 2005. The website is the second most used web directory in China.
Alleba – Filipino search engine website, with directory.
Dalilmasr – Egyptian online directory
Timway – web portal and directory primarily serving Hong Kong.
Defunct directories
AboutUs.org – directory from 2005 to 2013.
Anime Web Turnpike – was a web directory founded in Augu |
https://en.wikipedia.org/wiki/Third-party%20software%20component | In computer programming, a third-party software component is a reusable software component developed to be either freely distributed or sold by an entity other than the original vendor of the development platform. The third-party software component market thrives because many programmers believe that component-oriented development improves the efficiency and the quality of developing custom applications. Common third-party software includes macros, bots, and software/scripts to be run as add-ons for popular developing software. In the case of operating systems such as Windows XP, Vista or Seven, there are applications installed by default, such as Windows Media Player or Internet Explorer.
See also
Middleware
Enterprise Java Beans
VCL / CLX
KParts (KDE)
Video-game third-party developers
Third-party source
References
Component-based software engineering
Computer programming |
https://en.wikipedia.org/wiki/Group%20ring | In algebra, a group ring is a free module and at the same time a ring, constructed in a natural way from any given ring and any given group. As a free module, its ring of scalars is the given ring, and its basis is the set of elements of the given group. As a ring, its addition law is that of the free module and its multiplication extends "by linearity" the given group law on the basis. Less formally, a group ring is a generalization of a given group, by attaching to each element of the group a "weighting factor" from a given ring.
If the ring is commutative then the group ring is also referred to as a group algebra, for it is indeed an algebra over the given ring. A group algebra over a field has a further structure of a Hopf algebra; in this case, it is thus called a group Hopf algebra.
The apparatus of group rings is especially useful in the theory of group representations.
Definition
Let G be a group, written multiplicatively, and let R be a ring. The group ring of G over R, which we will denote by R[G], or simply RG, is the set of mappings f : G → R of finite support (f(g) is nonzero for only finitely many elements g), where the module scalar product αf of a scalar α in R and a mapping f is defined as the mapping g ↦ α·f(g), and the module group sum of two mappings f and g is defined as the mapping x ↦ f(x) + g(x). To turn the additive group R[G] into a ring, we define the product of f and g to be the mapping
(f·g)(x) = Σ_{uv = x} f(u)g(v).
The summation is legitimate because and are of finite support, and the ring axioms are readily verified.
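A minimal sketch of this product (illustrative code; mappings of finite support are modelled as dicts from group elements to coefficients, and the group operation is passed in):

```python
def gr_mult(f, g, op):
    """Group-ring product: the coefficient of x is the sum of f(u)*g(v) over u*v = x."""
    out = {}
    for u, a in f.items():
        for v, b in g.items():
            x = op(u, v)
            out[x] = out.get(x, 0) + a * b
    return {k: c for k, c in out.items() if c != 0}

# Z[C3]: elements of the cyclic group C3 encoded as exponents 0, 1, 2 of a generator a.
c3 = lambda u, v: (u + v) % 3
f = {0: 1, 1: 2}   # 1 + 2a
g = {1: 3, 2: 1}   # 3a + a^2
print(gr_mult(f, g, c3))  # {1: 3, 2: 7, 0: 2}, i.e. 2 + 3a + 7a^2
```

Because both operands have finite support, the double loop is a finite sum, mirroring why the formal summation in the definition is legitimate.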
Some variations in the notation and terminology are in use. In particular, the mappings such as f are sometimes written as what are called "formal linear combinations of elements of G, with coefficients in R":
Σ_{g ∈ G} f(g) g,
or simply
Σ f(g) g.
Note that if the ring R is in fact a field K, then the module structure of the group ring KG is in fact a vector space over K.
Examples
1. Let G = C3, the cyclic group of order 3, with generator a and identity element 1G. An element r of C[G] can be written as
r = z0·1G + z1·a + z2·a²
where z0, z1 and z2 are in C, the complex |
https://en.wikipedia.org/wiki/Diagonal | In geometry, a diagonal is a line segment joining two vertices of a polygon or polyhedron, when those vertices are not on the same edge. Informally, any sloping line is called diagonal. The word diagonal derives from the ancient Greek διαγώνιος diagonios, "from angle to angle" (from διά- dia-, "through", "across" and γωνία gonia, "angle", related to gony "knee"); it was used by both Strabo and Euclid to refer to a line connecting two vertices of a rhombus or cuboid, and later adopted into Latin as diagonus ("slanting line").
In matrix algebra, the diagonal of a square matrix consists of the entries on the line from the top left corner to the bottom right corner.
There are also many other non-mathematical uses.
Non-mathematical uses
In engineering, a diagonal brace is a beam used to brace a rectangular structure (such as scaffolding) to withstand strong forces pushing into it; although called a diagonal, due to practical considerations diagonal braces are often not connected to the corners of the rectangle.
Diagonal pliers are wire-cutting pliers whose cutting edges intersect the joint rivet at an angle or "on a diagonal", hence the name.
A diagonal lashing is a type of lashing used to bind spars or poles together applied so that the lashings cross over the poles at an angle.
In association football, the diagonal system of control is the method referees and assistant referees use to position themselves in one of the four quadrants of the pitch.
Polygons
As applied to a polygon, a diagonal is a line segment joining any two non-consecutive vertices. Therefore, a quadrilateral has two diagonals, joining opposite pairs of vertices. For any convex polygon, all the diagonals are inside the polygon, but for re-entrant polygons, some diagonals are outside of the polygon.
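The vertex count in the next paragraph reduces to a closed formula; a small illustrative helper (assuming a simple n-gon):

```python
def num_diagonals(n):
    """Diagonals of an n-gon: each of the n vertices connects to n - 3 non-adjacent
    vertices; each diagonal is counted from both ends, so divide by 2."""
    if n < 3:
        raise ValueError("a polygon needs at least 3 sides")
    return n * (n - 3) // 2

print([num_diagonals(n) for n in range(3, 9)])  # [0, 2, 5, 9, 14, 20]
```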
Any n-sided polygon (n ≥ 3), convex or concave, has n(n − 3)/2 total diagonals, as each vertex has diagonals to all other vertices except itself and the two adjacent vertices, or n |
https://en.wikipedia.org/wiki/Environment%20variable | An environment variable is a user-definable value that can affect the way running processes will behave on a computer. Environment variables are part of the environment in which a process runs. For example, a running process can query the value of the TEMP environment variable to discover a suitable location to store temporary files, or the HOME or USERPROFILE variable to find the directory structure owned by the user running the process.
They were introduced in their modern form in 1979 with Version 7 Unix, so are included in all Unix operating system flavors and variants from that point onward including Linux and macOS. From PC DOS 2.0 in 1982, all succeeding Microsoft operating systems, including Microsoft Windows, and OS/2 also have included them as a feature, although with somewhat different syntax, usage and standard variable names.
Design
In all Unix and Unix-like systems, as well as on Windows, each process has its own separate set of environment variables. By default, when a process is created, it inherits a duplicate run-time environment of its parent process, except for explicit changes made by the parent when it creates the child. At the API level, these changes must be done between running fork and exec. Alternatively, from command shells such as bash, a user can change environment variables for a particular command invocation by indirectly invoking it via env or using the ENVIRONMENT_VARIABLE=VALUE <command> notation. A running program can access the values of environment variables for configuration purposes.
Shell scripts and batch files use environment variables to communicate data and preferences to child processes. They can also be used to store temporary values for reference later in a shell script. However, in Unix, non-exported variables are preferred for this as they don't leak outside the process.
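A sketch of this inheritance (Python used here for portability; the variable name GREETING is hypothetical): the parent passes a modified copy of its environment to a single child invocation without changing its own environment:

```python
import os
import subprocess
import sys

# Copy the parent's environment and modify only the copy for one child process.
child_env = os.environ.copy()
child_env["GREETING"] = "hello"  # hypothetical variable, set only for the child

out = subprocess.run(
    [sys.executable, "-c", "import os; print(os.environ['GREETING'])"],
    env=child_env, capture_output=True, text=True,
).stdout.strip()

print(out)                       # hello
print("GREETING" in os.environ)  # False: the parent is unaffected
```

This mirrors the `env VAR=VALUE command` shell idiom: the change is visible to the child and its descendants, never to the parent.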
In Unix, an environment variable that is changed in a script or compiled program will only affect that process and possibly child processes. Th |
https://en.wikipedia.org/wiki/Outline%20of%20ants | The following outline is provided as an overview of and topical guide to ants:
Ants – social insects with geniculate (elbowed) antennae and a distinctive node-like structure that forms a slender waist. Ants are of the family Formicidae and evolved from wasp-like ancestors in the mid-Cretaceous period between 110 and 130 million years ago, diversifying after the rise of flowering plants. More than 12,500 out of an estimated total of 22,000 species have been classified.
Essence of ants
Ant colony
Myrmecology – scientific study of ants
Biological classification
Kingdom: Animalia
Phylum: Arthropoda
Class: Insecta
Order: Hymenoptera
Suborder: Apocrita
Superfamily: Vespoidea
Family: Formicidae (family authority: Latreille, 1809)
Kinds of ants
Ant
List of ant genera
List of ants of Great Britain
Subfamilies
Extant subfamilies
Agroecomyrmecinae
Amblyoponinae
Aneuretinae
Dolichoderinae
Dorylinae
Ectatomminae
Formicinae
Heteroponerinae
Leptanillinae
Martialinae
Myrmeciinae
Myrmicinae
Paraponerinae
Ponerinae
Proceratiinae
Pseudomyrmecinae
Fossil subfamilies
†Armaniinae (sometimes treated as the family Armaniidae within the superfamily Formicoidea)
†Brownimeciinae
†Formiciinae
†Sphecomyrminae
General myrmecology concepts
Myrmecologists
Murray S. Blum (1929–2015)
Barry Bolton
Horace Donisthorpe (1870–1951)
Auguste Forel (1848–1931)
William Gould (1715–1799)
Bert Hölldobler (born 1936)
Thomas C. Jerdon (1811–1872)
Sir John Lubbock (1st Lord and Baron Avebury) (1834–1913)
Derek Wragge Morley (1920–1969)
Frederick Smith (1805–1879)
John Obadiah Westwood (1805–1893)
William Morton Wheeler (1865–1937)
E.O. Wilson (1929–2021)
External links
Ants
Myrmecology |
https://en.wikipedia.org/wiki/Spherical%20circle | In spherical geometry, a spherical circle (often shortened to circle) is the locus of points on a sphere at constant spherical distance (the spherical radius) from a given point on the sphere (the pole or spherical center). It is a curve of constant geodesic curvature relative to the sphere, analogous to a line or circle in the Euclidean plane; the curves analogous to straight lines are called great circles, and the curves analogous to planar circles are called small circles or lesser circles.
Fundamental concepts
Intrinsic characterization
A spherical circle with zero geodesic curvature is called a great circle, and is a geodesic analogous to a straight line in the plane. A great circle separates the sphere into two equal hemispheres, each with the great circle as its boundary. If a great circle passes through a point on the sphere, it also passes through the antipodal point (the unique furthest other point on the sphere). For any pair of distinct non-antipodal points, a unique great circle passes through both. Any two points on a great circle separate it into two arcs analogous to line segments in the plane; the shorter is called the minor arc and is the shortest path between the points, and the longer is called the major arc.
A circle with non-zero geodesic curvature is called a small circle, and is analogous to a circle in the plane. A small circle separates the sphere into two spherical disks or spherical caps, each with the circle as its boundary. For any triple of distinct non-antipodal points a unique small circle passes through all three. Any two points on the small circle separate it into two arcs, analogous to circular arcs in the plane.
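A small numeric sketch (illustrative; points are represented as unit vectors in ℝ³, a representation the article does not itself assume): all points of a latitude circle lie at the same spherical distance from its pole:

```python
from math import acos, cos, sin, pi

def spherical_distance(p, q):
    """Geodesic (great-circle) distance between unit vectors p and q."""
    dot = sum(a * b for a, b in zip(p, q))
    return acos(max(-1.0, min(1.0, dot)))  # clamp against rounding error

r = pi / 3                      # spherical radius about the north pole (0, 0, 1)
circle = [(sin(r) * cos(t), sin(r) * sin(t), cos(r)) for t in (0.0, 1.0, 2.5, 4.0)]
assert all(abs(spherical_distance(p, (0, 0, 1)) - r) < 1e-12 for p in circle)
```

A great circle is the case r = π/2: its points are equidistant from both poles.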
Every circle has two antipodal poles (or centers) intrinsic to the sphere. A great circle is equidistant to its poles, while a small circle is closer to one pole than the other. Concentric circles are sometimes called parallels, because they each have constant distance to each other, and in particular to their conce |
https://en.wikipedia.org/wiki/Download | In computer networks, download means to receive data from a remote system, typically a server such as a web server, an FTP server, an email server, or other similar systems. This contrasts with uploading, where data is sent to a remote server.
A download is a file offered for downloading or that has been downloaded, or the process of receiving such a file.
Definition
Downloading generally transfers entire files for local storage and later use, as contrasted with streaming, where the data is used nearly immediately, while the transmission is still in progress, and which may not be stored long-term. Websites that offer streaming media or media displayed in-browser, such as YouTube, increasingly place restrictions on the ability of users to save these materials to their computers after they have been received.
Downloading is not the same as data transfer; moving or copying data between two storage devices would be data transfer, but receiving data from the Internet or BBS is downloading.
Copyright
Downloading media files involves the use of linking and framing Internet material, and relates to copyright law. Streaming and downloading can involve making copies of works that infringe on copyrights or other rights, and organizations running such websites may become vicariously liable for copyright infringement by causing others to do so.
Open hosting servers allow people to upload files to a central server, which incurs bandwidth and hard disk space costs due to the traffic and storage generated by each download. Anonymous and open hosting servers make it difficult to hold hosts accountable. Taking legal action against the technologies behind unauthoriz |
https://en.wikipedia.org/wiki/Uniform%20norm | In mathematical analysis, the uniform norm (or ) assigns to real- or complex-valued bounded functions defined on a set the non-negative number
This norm is also called the supremum norm, the Chebyshev norm, the infinity norm, or, when the supremum is in fact the maximum, the max norm. The name "uniform norm" derives from the fact that a sequence of functions fₙ converges to f under the metric derived from the uniform norm if and only if fₙ converges to f uniformly.
If f is a continuous function on a closed and bounded interval, or more generally a compact set, then it is bounded and the supremum in the above definition is attained by the Weierstrass extreme value theorem, so we can replace the supremum by the maximum. In this case, the norm is also called the maximum norm.
In particular, if x is some vector such that x = (x₁, x₂, …, xₙ) in finite dimensional coordinate space, it takes the form:
‖x‖∞ = max(|x₁|, …, |xₙ|).
Metric and topology
The metric generated by this norm is called the Chebyshev metric, after Pafnuty Chebyshev, who was first to systematically study it.
If we allow unbounded functions, this formula does not yield a norm or metric in a strict sense, although the obtained so-called extended metric still allows one to define a topology on the function space in question.
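In the finite-dimensional case the norm and its metric are simple to compute; a tiny illustrative sketch:

```python
def uniform_norm(values):
    """Sup (here: max) of the absolute values of finitely many entries."""
    return max(abs(v) for v in values)

def chebyshev_distance(x, y):
    """Metric induced by the uniform norm: largest coordinate-wise difference."""
    return uniform_norm(a - b for a, b in zip(x, y))

print(uniform_norm([3, -7, 2]))            # 7
print(chebyshev_distance([1, 5], [4, 3]))  # 3
```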
The binary function
d(f, g) = ‖f − g‖∞ = sup_x |f(x) − g(x)|
is then a metric on the space of all bounded functions (and, obviously, any of its subsets) on a particular domain. A sequence {fₙ} converges uniformly to a function f if and only if
lim_{n→∞} ‖fₙ − f‖∞ = 0.
We can define closed sets and closures of sets with respect to this metric topology; closed sets in the uniform norm are sometimes called uniformly closed and closures uniform closures. The uniform closure of a set of functions A is the space of all functions that can be approximated by a sequence of uniformly-converging functions in A. For instance, one restatement of the Stone–Weierstrass theorem is that the set of all continuous functions on [a, b] is the uniform closure of the set of polynomials on [a, b].
For complex continuous functions over a compact space, this turns it into a C*-algebra.
Properties
The set of v |
https://en.wikipedia.org/wiki/System%20requirements | To be used efficiently, all computer software needs certain hardware components or other software resources to be present on a computer. These prerequisites are known as (computer) system requirements and are often used as a guideline as opposed to an absolute rule. Most software defines two sets of system requirements: minimum and recommended. With increasing demand for higher processing power and resources in newer versions of software, system requirements tend to increase over time. Industry analysts suggest that this trend plays a bigger part in driving upgrades to existing computer systems than technological advancements. A second meaning of the term system requirements, is a generalisation of this first definition, giving the requirements to be met in the design of a system or sub-system.
Recommended system requirements
Often manufacturers of games will provide the consumer with a set of requirements that are different from those that are needed to run a software. These requirements are usually called the recommended requirements. These requirements are almost always of a significantly higher level than the minimum requirements, and represent the ideal situation in which to run the software. Generally speaking, this is a better guideline than minimum system requirements in order to have a fully usable and enjoyable experience with that software.
Hardware requirements
The most common set of requirements defined by any operating system or software application is the physical computer resources, also known as hardware. A hardware requirements list is often accompanied by a hardware compatibility list (HCL), especially in the case of operating systems. An HCL lists tested, compatible, and sometimes incompatible hardware devices for a particular operating system or application. The following sub-sections discuss the various aspects of hardware requirements.
Architecture
All computer operating systems are designed for a particular computer architecture. Most s |
https://en.wikipedia.org/wiki/SPIM | SPIM is a MIPS processor simulator, designed to run assembly language code for this architecture. The program simulates R2000 and R3000 processors, and was written by James R. Larus while a professor at the University of Wisconsin–Madison. The MIPS machine language is often taught in college-level assembly courses, especially those using the textbook Computer Organization and Design: The Hardware/Software Interface by David A. Patterson and John L. Hennessy ().
The name of the simulator is a reversal of the letters "MIPS".
SPIM simulators are available for Windows (PCSpim), Mac OS X and Unix/Linux-based (xspim) operating systems. As of release 8.0 in January 2010, the simulator is licensed under the standard BSD license.
In January 2011, a major release, version 9.0, introduced QtSpim, which has a new user interface built on the cross-platform Qt UI framework and runs on Windows, Linux, and macOS. From this version, the project has also been moved to SourceForge for better maintenance. Precompiled versions of QtSpim for Linux (32-bit), Windows, and Mac OS X, as well as PCSpim for Windows, are provided.
The SPIM operating system
The SPIM simulator comes with a rudimentary operating system, which allows the programmer to use commonly needed functions in a comfortable way. Such functions are invoked by the syscall instruction. Then the OS acts depending on the values of specific registers.
The SPIM OS expects a label named main as a handover point from the OS preamble.
SPIM Alternatives/Competitors
MARS (MIPS Assembler and Runtime Simulator) is a Java-based IDE for the MIPS Assembly Programming Language and an alternative to SPIM.
Its initial release was in 2005 and is under active development.
Imperas is a suite of embedded software development tools for MIPS architecture which uses Just-in-time compilation emulation and simulation technology.
The simulator was initially released in 2008 and is under active development.
There are over 30 open source models of the MIPS 32 bit a |
https://en.wikipedia.org/wiki/Server%20Message%20Block | Server Message Block (SMB) is a communication protocol mainly used by Microsoft Windows equipped computers normally used to share files, printers, serial ports, and miscellaneous communications between nodes on a network. SMB implementation consists of two vaguely named Windows services: "Server" (ID: LanmanServer) and "Workstation" (ID: LanmanWorkstation). It uses NTLM or Kerberos protocols for user authentication. It also provides an authenticated inter-process communication (IPC) mechanism.
SMB was originally developed in 1983 by Barry A. Feigenbaum at IBM and intended to provide shared access to files and printers across nodes on a network of systems running IBM's OS/2. In 1987, Microsoft and 3Com implemented SMB in LAN Manager for OS/2, at which time SMB used the NetBIOS service atop the NetBIOS Frames protocol as its underlying transport. Later, Microsoft implemented SMB in Windows NT 3.1 and has been updating it ever since, adapting it to work with newer underlying transports: TCP/IP and NetBT. SMB over QUIC was introduced in Windows Server 2022.
In 1996, Microsoft published a version of SMB 1.0 with minor modifications under the Common Internet File System (CIFS) moniker. CIFS was compatible with even the earliest incarnation of SMB, including LAN Manager's. It supports symbolic links, hard links, and larger file sizes, but none of the features of SMB 2.0 and later. Microsoft's proposal, however, remained an Internet Draft and never achieved standard status. Microsoft has since discontinued use of the CIFS moniker but continues developing SMB and making subsequent specifications publicly available. Samba is a free software reimplementation of the SMB protocol and the Microsoft extensions to it.
Features
Server Message Block (SMB) enables file sharing, printer sharing, network browsing, and inter-process communication (through named pipes) over a computer network. SMB serves as the basis for Microsoft's Distributed File System implementation.
SMB relies |
https://en.wikipedia.org/wiki/Farey%20sequence | In mathematics, the Farey sequence of order n is the sequence of completely reduced fractions, either between 0 and 1, or without this restriction, which when in lowest terms have denominators less than or equal to n, arranged in order of increasing size.
With the restricted definition, each Farey sequence starts with the value 0, denoted by the fraction 0/1, and ends with the value 1, denoted by the fraction 1/1 (although some authors omit these terms).
A Farey sequence is sometimes called a Farey series, which is not strictly correct, because the terms are not summed.
Examples
The Farey sequences of orders 1 to 8 are:
F1 = {0/1, 1/1}
F2 = {0/1, 1/2, 1/1}
F3 = {0/1, 1/3, 1/2, 2/3, 1/1}
F4 = {0/1, 1/4, 1/3, 1/2, 2/3, 3/4, 1/1}
F5 = {0/1, 1/5, 1/4, 1/3, 2/5, 1/2, 3/5, 2/3, 3/4, 4/5, 1/1}
F6 = {0/1, 1/6, 1/5, 1/4, 1/3, 2/5, 1/2, 3/5, 2/3, 3/4, 4/5, 5/6, 1/1}
F7 = {0/1, 1/7, 1/6, 1/5, 1/4, 2/7, 1/3, 2/5, 3/7, 1/2, 4/7, 3/5, 2/3, 5/7, 3/4, 4/5, 5/6, 6/7, 1/1}
F8 = {0/1, 1/8, 1/7, 1/6, 1/5, 1/4, 2/7, 1/3, 3/8, 2/5, 3/7, 1/2, 4/7, 3/5, 5/8, 2/3, 5/7, 3/4, 4/5, 5/6, 6/7, 7/8, 1/1}
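These sequences can be regenerated with a short program. The sketch below simply collects every fraction a/b with denominator b ≤ n; Python's Fraction type reduces to lowest terms, so the set automatically removes duplicates such as 2/4 = 1/2.

```python
from fractions import Fraction

def farey(n):
    """Return the Farey sequence of order n as an ordered list of Fractions."""
    # Fraction(a, b) normalizes to lowest terms; the set deduplicates
    return sorted({Fraction(a, b) for b in range(1, n + 1) for a in range(b + 1)})

print(", ".join(str(f) for f in farey(5)))
# 0, 1/5, 1/4, 1/3, 2/5, 1/2, 3/5, 2/3, 3/4, 4/5, 1
```

This brute-force construction is O(n²); the neighbour recurrence described in the history section gives each next term in constant time.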
Farey sunburst
Plotting the numerators versus the denominators of a Farey sequence gives a shape like the one to the right, shown for F6.
Reflecting this shape around the diagonal and main axes generates the Farey sunburst, shown below. The Farey sunburst of order n connects the visible integer grid points from the origin in the square of side 2n, centered at the origin. Using Pick's theorem, the area of the sunburst is 4(|Fn|−1), where |Fn| is the number of fractions in Fn.
History
The history of 'Farey series' is very curious — Hardy & Wright (1979)
... once again the man whose name was given to a mathematical relation was not the original discoverer so far as the records go. — Beiler (1964)
Farey sequences are named after the British geologist John Farey, Sr., whose letter about these sequences was published in the Philosophical Magazine in 1816. Farey conjectured, without offering proof, that each new term in a Farey sequence expansion is the mediant of its neighbours. Farey's letter was read by Cauchy, who provided a proof in his Exercices de mathématique, and attributed this result to Farey. In fact, another mathematicia |
https://en.wikipedia.org/wiki/List%20of%20graph%20theory%20topics | This is a list of graph theory topics, by Wikipedia page.
See glossary of graph theory terms for basic terminology
Examples and types of graphs
Graph coloring
Paths and cycles
Trees
Terminology
Node
Child node
Parent node
Leaf node
Root node
Root (graph theory)
Operations
Tree structure
Tree data structure
Cayley's formula
Kőnig's lemma
Tree (set theory) (need not be a tree in the graph-theory sense, because there may not be a unique path between two vertices)
Tree (descriptive set theory)
Euler tour technique
Graph limits
Graphon
Graphs in logic
Conceptual graph
Entitative graph
Existential graph
Laws of Form
Logical graph
Mazes and labyrinths
Labyrinth
Maze
Maze generation algorithm
Algorithms
Ant colony algorithm
Breadth-first search
Depth-first search
Depth-limited search
FKT algorithm
Flood fill
Graph exploration algorithm
Matching (graph theory)
Max flow min cut theorem
Maximum-cardinality search
Shortest path
Dijkstra's algorithm
Bellman–Ford algorithm
A* algorithm
Floyd–Warshall algorithm
Topological sorting
Pre-topological order
Other topics
Networks, network theory
See list of network theory topics
Hypergraphs
Helly family
Intersection (Line) Graphs of hypergraphs
Graph theory
https://en.wikipedia.org/wiki/Outline%20of%20combinatorics | Combinatorics is a branch of mathematics concerning the study of finite or countable discrete structures.
Essence of combinatorics
Matroid
Greedoid
Ramsey theory
Van der Waerden's theorem
Hales–Jewett theorem
Umbral calculus, binomial type polynomial sequences
Combinatorial species
Branches of combinatorics
Algebraic combinatorics
Analytic combinatorics
Arithmetic combinatorics
Combinatorics on words
Combinatorial design theory
Enumerative combinatorics
Extremal combinatorics
Geometric combinatorics
Graph theory
Infinitary combinatorics
Matroid theory
Order theory
Partition theory
Probabilistic combinatorics
Topological combinatorics
Multi-disciplinary fields that include combinatorics
Coding theory
Combinatorial optimization
Combinatorics and dynamical systems
Combinatorics and physics
Discrete geometry
Finite geometry
Phylogenetics
History of combinatorics
History of combinatorics
General combinatorial principles and methods
Combinatorial principles
Trial and error, brute-force search, bogosort, British Museum algorithm
Pigeonhole principle
Method of distinguished element
Mathematical induction
Recurrence relation, telescoping series
Generating functions as an application of formal power series
Cyclic sieving
Schrödinger method
Exponential generating function
Stanley's reciprocity theorem
Binomial coefficients and their properties
Combinatorial proof
Double counting (proof technique)
Bijective proof
Inclusion–exclusion principle
Möbius inversion formula
Parity, even and odd permutations
Combinatorial Nullstellensatz
Incidence algebra
Greedy algorithm
Divide and conquer algorithm
Akra–Bazzi method
Dynamic programming
Branch and bound
Birthday attack, birthday paradox
Floyd's cycle-finding algorithm
Reduction to linear algebra
Sparsity
Weight function
Minimax algorithm
Alpha–beta pruning
Probabilistic method
Sieve methods
Analytic combinatorics
Symbolic combinatorics
Combinatorial |
https://en.wikipedia.org/wiki/Trichome | Trichomes (; ) are fine outgrowths or appendages on plants, algae, lichens, and certain protists. They are of diverse structure and function. Examples are hairs, glandular hairs, scales, and papillae. A covering of any kind of hair on a plant is an indumentum, and the surface bearing them is said to be pubescent.
Algal trichomes
Certain, usually filamentous, algae have the terminal cell produced into an elongate hair-like structure called a trichome. The same term is applied to such structures in some cyanobacteria, such as Spirulina and Oscillatoria. The trichomes of cyanobacteria may be unsheathed, as in Oscillatoria, or sheathed, as in Calothrix. These structures play an important role in preventing soil erosion, particularly in cold desert climates. The filamentous sheaths form a persistent sticky network that helps maintain soil structure.
Plant trichomes
Plant trichomes have many different features that vary between both species of plants and organs of an individual plant. These features affect the subcategories that trichomes are placed into. Some defining features include the following:
Unicellular or multicellular
Straight (upright with little to no branching), spiral (corkscrew-shaped) or hooked (curved apex)
Presence of cytoplasm
Glandular (secretory) vs. eglandular
Tortuous, simple (unbranched and unicellular), peltate (scale-like), stellate (star-shaped)
Adaxial vs. abaxial, referring to whether trichomes are present, respectively, on the upper surface (adaxial) or lower surface (abaxial) of a leaf or other lateral organ.
In the model organism Cistus salviifolius, more trichomes are present on the adaxial surface, because that surface suffers more ultraviolet (UV) solar irradiance stress than the abaxial surface.
Trichomes can protect the plant from a large range of detriments, such as UV light, insects, transpiration, and freeze intolerance.
Aerial surface hairs
Trichomes on plants are epidermal outgrowths of various kinds |
https://en.wikipedia.org/wiki/Programming%20Language%20for%20Business | Programming Language for Business or PL/B is a business-oriented programming language originally called DATABUS and designed by Datapoint in 1972 as an alternative to COBOL because Datapoint's 8-bit computers could not fit COBOL into their limited memory, and because COBOL did not at the time have facilities to deal with Datapoint's built-in keyboard and screen.
A version of DATABUS became an ANSI standard, and the name PL/B came about when Datapoint chose not to release its trademark on the DATABUS name.
Functionality
Much like Java and .NET, PL/B programs are compiled into an intermediate byte-code, which is then interpreted by a runtime library. Because of this, many PL/B programs can run on DOS, Unix, Linux, and Windows operating systems. The PL/B development environments are influenced by Java and Visual Basic, and offer many of the same features found in those languages. PL/B (Databus) is actively used all over the world, and has several forums on the Internet dedicated to supporting software developers.
Since its inception, PL/B has been enhanced and adapted to keep it modernized and able to access various data sources. It has a database capability built-in with ISAM and Associative Hashed Indexes, as well as ODBC, SQL, Oracle, sequential, random access, XML and JSON files.
All the constructs of modern programming languages have been incrementally added to the language. PL/B also has the ability to access external routines through COM, DLL's and .NET assemblies. Full access to the .NET framework is built into many versions.
Several implementations of the language are capable of running as an Application Server like Citrix, and connecting to remote databases through a data manager.
Source code example
IF (DF_EDIT[ITEM] = "PHYS")
STATESAVE MYSTATE
IF (C_F07B != 2)
DISPLAY *SETSWALL 1:1:1:80:
*BGCOLOR=2,*COLOR=15:
*P49:1," 7-Find "
ELSE
|
https://en.wikipedia.org/wiki/Layer%202%20Tunneling%20Protocol | In computer networking, Layer 2 Tunneling Protocol (L2TP) is a tunneling protocol used to support virtual private networks (VPNs) or as part of the delivery of services by ISPs. It uses encryption ('hiding') only for its own control messages (using an optional pre-shared secret), and does not provide any encryption or confidentiality of content by itself. Rather, it provides a tunnel for Layer 2 (which may be encrypted), and the tunnel itself may be passed over a Layer 3 encryption protocol such as IPsec.
History
Published in August 1999 as proposed standard RFC 2661, L2TP has its origins primarily in two older tunneling protocols for point-to-point communication: Cisco's Layer 2 Forwarding Protocol (L2F) and Microsoft's Point-to-Point Tunneling Protocol (PPTP). A new version of this protocol, L2TPv3, appeared as proposed standard RFC 3931 in 2005. L2TPv3 provides additional security features, improved encapsulation, and the ability to carry data links other than simply Point-to-Point Protocol (PPP) over an IP network (for example: Frame Relay, Ethernet, ATM, etc.).
Description
The entire L2TP packet, including payload and L2TP header, is sent within a User Datagram Protocol (UDP) datagram. A virtue of transmission over UDP (rather than TCP) is that it avoids the "TCP meltdown problem". It is common to carry PPP sessions within an L2TP tunnel. L2TP does not provide confidentiality or strong authentication by itself. IPsec is often used to secure L2TP packets by providing confidentiality, authentication and integrity. The combination of these two protocols is generally known as L2TP/IPsec (discussed below).
The two endpoints of an L2TP tunnel are called the L2TP access concentrator (LAC) and the L2TP network server (LNS). The LNS waits for new tunnels. Once a tunnel is established, the network traffic between the peers is bidirectional. To be useful for networking, higher-level protocols are then run through the L2TP tunnel. To facilitate this, an L2TP session is |
https://en.wikipedia.org/wiki/Autofahrer-Rundfunk-Informationssystem | Autofahrer-Rundfunk-Informationssystem (ARI, German for: Automotive-Driver's-Broadcasting-Information) was a system for indicating the presence of traffic information in FM broadcasts used by the German ARD network of FM radio stations from 1974. Developed jointly by IRT and Blaupunkt, it indicated the presence of traffic announcements through manipulation of the 57kHz subcarrier of the station's FM signal.
ARI was rendered obsolete by the more modern Radio Data System and the ARD stopped broadcasting ARI signals on March 1, 2005.
Functionality description
SK signal
The SK signal is the 57 kHz subcarrier itself, transmitted by an ARI-compliant FM station for this functionality. This frequency, like the RDS subcarrier frequency, was chosen because it is the third harmonic of the 19 kHz pilot tone used in the FM-stereo transmission standard (3 × 19 kHz = 57 kHz).
An ARI-equipped radio would illuminate an indicator lamp to show that this function was in force. Most such radios would use this function further to help users search for ARI broadcasts. In the Radio Data System environment, the TP signal is equivalent to this basic function.
The basic method implemented on an analog receiver would be a switch usually marked SDK or VF. Radios that used the "classic" mechanical push-button preset system would have one of these buttons set aside as the VF switch. If this switch was on, the radio would mute unless it was tuned into a station that transmitted this signal.
If the radio was a digitally-tuned receiver, this switch usually engaged an "ARI-seek" mode which had the radio seek for any ARI station if it was out of range of the currently-tuned ARI station.
DK signal
This function, which is superseded by the RDS TA function, was tied in with the broadcasting studio and would be triggered whenever the traffic-announcement jingle was played. A 125 Hz tone would be modulated on the 5 |
https://en.wikipedia.org/wiki/List%20of%20dynamical%20systems%20and%20differential%20equations%20topics | This is a list of dynamical system and differential equation topics, by Wikipedia page. See also list of partial differential equation topics, list of equations.
Dynamical systems, in general
Deterministic system (mathematics)
Linear system
Partial differential equation
Dynamical systems and chaos theory
Chaos theory
Chaos argument
Butterfly effect
0-1 test for chaos
Bifurcation diagram
Feigenbaum constant
Sharkovskii's theorem
Attractor
Strange nonchaotic attractor
Stability theory
Mechanical equilibrium
Astable
Monostable
Bistability
Metastability
Feedback
Negative feedback
Positive feedback
Homeostasis
Damping ratio
Dissipative system
Spontaneous symmetry breaking
Turbulence
Perturbation theory
Control theory
Non-linear control
Adaptive control
Hierarchical control
Intelligent control
Optimal control
Dynamic programming
Robust control
Stochastic control
System dynamics, system analysis
Takens' theorem
Exponential dichotomy
Liénard's theorem
Krylov–Bogolyubov theorem
Krylov-Bogoliubov averaging method
Abstract dynamical systems
Measure-preserving dynamical system
Ergodic theory
Mixing (mathematics)
Almost periodic function
Symbolic dynamics
Time scale calculus
Arithmetic dynamics
Sequential dynamical system
Graph dynamical system
Topological dynamical system
Dynamical systems, examples
List of chaotic maps
Logistic map
Lorenz attractor
Lorenz-96
Iterated function system
Tetration
Ackermann function
Horseshoe map
Hénon map
Arnold's cat map
Population dynamics
Complex dynamics
Fatou set
Julia set
Mandelbrot set
Difference equations
Recurrence relation
Matrix difference equation
Rational difference equation
Ordinary differential equations: general
Examples of differential equations
Autonomous system (mathematics)
Picard–Lindelöf theorem
Peano existence theorem
Carathéodory existence theorem
Numerical ordinary differential equations
Bendixson–Dulac theorem
Gradient conjecture
Recurrence plot
Limit cycle
Initial value problem
Clairaut's equation
Singular sol |
https://en.wikipedia.org/wiki/Laplace%20transform%20applied%20to%20differential%20equations | In mathematics, the Laplace transform is a powerful integral transform used to switch a function from the time domain to the s-domain. The Laplace transform can be used in some cases to solve linear differential equations with given initial conditions.
First consider the following property of the Laplace transform:

L{f′(t)} = s·L{f(t)} − f(0)

One can prove by induction that

L{f^(n)(t)} = s^n·L{f(t)} − Σ_{i=1..n} s^(n−i)·f^(i−1)(0)

Now we consider the following differential equation:

Σ_{i=0..n} a_i f^(i)(t) = φ(t)

with given initial conditions

f^(i)(0) = c_i  for 0 ≤ i < n.

Using the linearity of the Laplace transform it is equivalent to rewrite the equation as

Σ_{i=0..n} a_i L{f^(i)(t)} = L{φ(t)}

obtaining

Σ_{i=0..n} a_i ( s^i·L{f(t)} − Σ_{j=1..i} s^(i−j)·f^(j−1)(0) ) = L{φ(t)}

Solving the equation for L{f(t)} and substituting each f^(i)(0) with c_i one obtains

L{f(t)} = ( L{φ(t)} + Σ_{i=0..n} a_i Σ_{j=1..i} s^(i−j)·c_(j−1) ) / ( Σ_{i=0..n} a_i s^i )

The solution for f(t) is obtained by applying the inverse Laplace transform to L{f(t)}.

Note that if the initial conditions are all zero, i.e.

c_i = 0  for every i,

then the formula simplifies to

f(t) = L⁻¹{ L{φ(t)} / Σ_{i=0..n} a_i s^i }
An example
We want to solve

f″(t) + 4 f(t) = sin(2t)

with initial conditions f(0) = 0 and f′(0) = 0.

We note that

L{sin(2t)} = 2/(s² + 4)

and we get

L{f″(t)} = s²·L{f(t)} − s·f(0) − f′(0) = s²·L{f(t)}

The equation is then equivalent to

s²·L{f(t)} + 4·L{f(t)} = 2/(s² + 4)

We deduce

L{f(t)} = 2/(s² + 4)²

Now we apply the Laplace inverse transform to get

f(t) = (1/8)·sin(2t) − (t/4)·cos(2t)
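The standard worked example here (assuming, as in this article, the equation f″ + 4f = sin(2t) with zero initial conditions) has the solution f(t) = sin(2t)/8 − (t/4)·cos(2t); it can be checked numerically with a central-difference second derivative:

```python
import math

def f(t):
    # candidate solution of f'' + 4 f = sin(2t), with f(0) = f'(0) = 0
    return math.sin(2 * t) / 8 - t * math.cos(2 * t) / 4

def second_derivative(g, t, h=1e-5):
    # central finite-difference approximation of g''(t)
    return (g(t + h) - 2 * g(t) + g(t - h)) / h ** 2

assert abs(f(0)) < 1e-12                       # initial condition f(0) = 0
for t in (0.3, 1.0, 2.5):
    residual = second_derivative(f, t) + 4 * f(t) - math.sin(2 * t)
    assert abs(residual) < 1e-4                # the ODE is satisfied
print("solution verified")
```

The tolerance 1e-4 absorbs the floating-point cancellation inherent in the finite-difference step.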
Bibliography
A. D. Polyanin, Handbook of Linear Partial Differential Equations for Engineers and Scientists, Chapman & Hall/CRC Press, Boca Raton, 2002.
Integral transforms
Differential equations
Differential calculus
Ordinary differential equations |
https://en.wikipedia.org/wiki/Software%20protection%20dongle | A software protection dongle (commonly known as a dongle or key) is an electronic copy protection and content protection device. When connected to a computer or other electronics, they unlock software functionality or decode content. The hardware key is programmed with a product key or other cryptographic protection mechanism and functions via an electrical connector to an external bus of the computer or appliance.
In software protection, dongles are two-interface security tokens with transient data flow, using pull communication to read security data from the dongle. In the absence of these dongles, certain software may run only in a restricted mode, or not at all. In addition to software protection, dongles can enable functions in electronic devices, such as receiving and processing encoded video streams on television sets.
Etymology
The Merriam-Webster dictionary states that the "First known use of dongle" was in 1981 and that the etymology was "perhaps alteration of dangle."
Dongles rapidly evolved into active devices that contained a serial transceiver (UART) and even a microprocessor to handle transactions with the host. Later versions adopted the USB interface, which became the preferred choice over the serial or parallel interface.
A 1992 advertisement for Rainbow Technologies claimed the word dongle was derived from the name "Don Gall". Though untrue, this has given rise to an urban myth.
Usage
Efforts to introduce dongle copy-protection in the mainstream software market have met stiff resistance from users. Such copy-protection is more typically used with very expensive packages and vertical market software such as CAD/CAM software, cellphone flasher/JTAG debugger software, MICROS Systems hospitality and special retail software, digital audio workstation applications, and some translation memory packages.
In cases such as prepress and printing software, the dongle is encoded with a specific, per-user license key, which enables particular feature |
https://en.wikipedia.org/wiki/Transparency%20and%20translucency | In the field of optics, transparency (also called pellucidity or diaphaneity) is the physical property of allowing light to pass through the material without appreciable scattering of light. On a macroscopic scale (one in which the dimensions are much larger than the wavelengths of the photons in question), the photons can be said to follow Snell's law. Translucency (also called translucence or translucidity) allows light to pass through, but does not necessarily (again, on the macroscopic scale) follow Snell's law; the photons can be scattered at either of the two interfaces, or internally, where there is a change in index of refraction. In other words, a translucent material is made up of components with different indices of refraction. A transparent material is made up of components with a uniform index of refraction. Transparent materials appear clear, with the overall appearance of one color, or any combination leading up to a brilliant spectrum of every color. The opposite property of translucency is opacity. Other categories of visual appearance, related to the perception of regular or diffuse reflection and transmission of light, have been organized under the concept of cesia in an order system with three variables, including transparency, translucency and opacity among the involved aspects.
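Snell's law, mentioned above, relates the angles on each side of an interface by n₁ sin θ₁ = n₂ sin θ₂. A small sketch (function name and the n ≈ 1.33 value for water are illustrative):

```python
import math

def refraction_angle(n1, n2, incidence_deg):
    """Angle of refraction from Snell's law, n1*sin(t1) = n2*sin(t2).
    Returns None past the critical angle (total internal reflection)."""
    s = n1 * math.sin(math.radians(incidence_deg)) / n2
    if abs(s) > 1:
        return None  # no transmitted ray
    return math.degrees(math.asin(s))

# light entering water (n ≈ 1.33) from air at 45° bends toward the normal
print(round(refraction_angle(1.0, 1.33, 45.0), 1))  # → 32.1
```

The None branch corresponds to the regime where a transparent interface nevertheless transmits no light, as in a glass-to-air interface beyond about 42°.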
When light encounters a material, it can interact with it in several different ways. These interactions depend on the wavelength of the light and the nature of the material. Photons interact with an object by some combination of reflection, absorption and transmission.
Some materials, such as plate glass and clean water, transmit much of the light that falls on them and reflect little of it; such materials are called optically transparent. Many liquids and aqueous solutions are highly transparent. Absence of structural defects (voids, cracks, etc.) and molecular structure of most liquids are mostly responsible for excellent optical transmission.
Materials which do not |
https://en.wikipedia.org/wiki/Transparency%20%28projection%29 | A transparency, also known variously as a viewfoil, foil, or viewgraph, is a thin sheet of transparent flexible material, typically polyester (historically cellulose acetate), onto which figures can be drawn. These are then placed on an overhead projector for display to an audience. Many companies and small organizations use a system of projectors and transparencies in meetings and other groupings of people, though this system is being largely replaced by video projectors and interactive whiteboards.
Printing
Transparencies can be printed on laser printers or copiers. Specialist transparencies are available for use with laser printers that are better able to handle the high temperatures present in the fuser unit. For inkjet printers, coated transparencies are available that can absorb and hold the liquid ink—although care must be taken to avoid excessive exposure to moisture, which can cause the transparency to become cloudy; they must also be loaded correctly into the printer as they are only usually coated on one side.
Uses
Uses for transparencies are as varied as the organizations that use them.
Certain classes, such as those associated with mathematics or history and geography use transparencies to illustrate a point or problem. Until the advent of LaTeX, math classes in particular used rolls of acetate to illustrate sufficiently long problems and to display mathematical symbols missing from common computer keyboards.
Aerospace companies, like Boeing and Beechcraft, used transparencies for years in management meetings in order to brief engineers and relevant personnel about new aircraft designs and changes to existing designs, as well as bring up illustrated problems.
Some churches and other religious organizations used them to show sermon outlines and illustrate certain topics such as Old Testament battles and Jewish artifacts during worship services, as well as outline business meetings.
Spatial light modulators (SLMs)
Many overhead projectors are us |
https://en.wikipedia.org/wiki/Transparency%20%28telecommunication%29 | In telecommunications, transparency can refer to:
The property of an entity that allows another entity to pass through it without altering either of the entities.
The property that allows a transmission system or channel to accept, at its input, unmodified user information, and deliver corresponding user information at its output, unchanged in form or information content. The user information may be changed internally within the transmission system, but it is restored to its original form prior to the output without the involvement of the user.
The quality of a data communications system or device that uses a bit-oriented link protocol that does not depend on the bit sequence structure used by the data source.
Some communication systems are not transparent.
Non-transparent communication systems have one or both of the following problems:
user data may be incorrectly interpreted as internal commands. For example, modems with a Time Independent Escape Sequence or 20th century Signaling System No. 5 and R2 signalling telephone systems, which occasionally incorrectly interpreted user data (from a "blue box") as commands.
output "user data" may not always be the same as input user data. For example, many early email systems were not 8-bit clean; they seemed to transfer typical short text messages properly, but converted "unusual" characters (the control characters, the "high ASCII" characters) in an irreversible way into some other "usual" character. Many of these systems also changed user data in other irreversible ways – such as inserting linefeeds to make sure each line is less than some maximum length, and inserting a ">" at the beginning of every line that begins with "From ". Until 8BITMIME, a variety of binary-to-text encoding techniques have been overlaid on top of such systems to restore transparency – to make sure that any possible file can be transferred so that the final output "user data" is actually identical to the original user data.
References
See |
https://en.wikipedia.org/wiki/Transparency%20%28human%E2%80%93computer%20interaction%29 | Any change in a computing system, such as a new feature or new component, is transparent if the system after the change adheres to its previous external interface as much as possible while changing its internal behaviour. The purpose is to shield from change all systems (or human users) on the other end of the interface. Confusingly, the term refers to the overall invisibility of the component; it does not refer to the visibility of the component's internals (as in a white box or open system). The term transparent is widely used in computing marketing as a substitute for the term invisible, since invisible has a bad connotation (usually something the user cannot see and has no control over) while transparent has a good connotation (usually associated with not hiding anything). The vast majority of the time, the term transparent is used in a misleading way to refer to the actual invisibility of a computing process, which is also described by the term opaque, especially with regard to data structures. Because of this misleading and counter-intuitive definition, modern computer literature tends to prefer "agnostic" over "transparent".
The term is used particularly often with regard to an abstraction layer that is invisible either from its upper or lower neighbouring layer.
The term was also used, around 1969, in IBM and Honeywell programming manuals, where it referred to a certain computer programming technique. Application code was transparent when it was clear of low-level detail (such as device-specific management) and contained only the logic for solving the main problem. This was achieved through encapsulation – putting the code into modules that hid internal details, making them invisible to the main application.
Examples
For example, the Network File System is transparent, because it introduces the access to files stored remotely on the network in a way uniform with previous local access to a file system, so the user might even not notice |
https://en.wikipedia.org/wiki/Virtual%20file%20system | A virtual file system (VFS) or virtual filesystem switch is an abstract layer on top of a more concrete file system. The purpose of a VFS is to allow client applications to access different types of concrete file systems in a uniform way. A VFS can, for example, be used to access local and network storage devices transparently without the client application noticing the difference. It can be used to bridge the differences in Windows, classic Mac OS/macOS and Unix filesystems, so that applications can access files on local file systems of those types without having to know what type of file system they are accessing.
A VFS specifies an interface (or a "contract") between the kernel and a concrete file system. Therefore, it is easy to add support for new file system types to the kernel simply by fulfilling the contract. The terms of the contract might change incompatibly from release to release, which would require that concrete file system support be recompiled, and possibly modified before recompilation, to allow it to work with a new release of the operating system; or the supplier of the operating system might make only backward-compatible changes to the contract, so that concrete file system support built for a given release of the operating system would work with future versions of the operating system.
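The kernel/file-system "contract" can be sketched as an abstract interface with interchangeable concrete implementations. The names below are illustrative only, not the actual Linux VFS API or any real kernel structure.

```python
from abc import ABC, abstractmethod

class FileSystem(ABC):
    """The contract every concrete file system fulfils (illustrative)."""
    @abstractmethod
    def read(self, path: str) -> bytes: ...
    @abstractmethod
    def write(self, path: str, data: bytes) -> None: ...

class MemFS(FileSystem):
    """A trivial in-memory 'concrete' file system plugged in under the interface."""
    def __init__(self):
        self._files = {}
    def read(self, path):
        return self._files[path]
    def write(self, path, data):
        self._files[path] = data

def copy_file(src_fs: FileSystem, src: str, dst_fs: FileSystem, dst: str):
    # client code sees only the uniform interface, never the concrete type
    dst_fs.write(dst, src_fs.read(src))

a, b = MemFS(), MemFS()
a.write("/etc/motd", b"hello")
copy_file(a, "/etc/motd", b, "/tmp/motd")
print(b.read("/tmp/motd"))  # → b'hello'
```

A network-backed implementation could replace MemFS without touching copy_file, which is exactly the transparency the VFS layer provides to applications.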
Implementations
One of the first virtual file system mechanisms on Unix-like systems was introduced by Sun Microsystems in SunOS 2.0 in 1985. It allowed Unix system calls to access local UFS file systems and remote NFS file systems transparently. For this reason, Unix vendors who licensed the NFS code from Sun often copied the design of Sun's VFS. Other file systems could be plugged into it also: there was an implementation of the MS-DOS FAT file system developed at Sun that plugged into the SunOS VFS, although it wasn't shipped as a product until SunOS 4.1. The SunOS implementation was the basis of the VFS mechanism in System V Release 4.
John Heidemann |
https://en.wikipedia.org/wiki/Stripboard | Stripboard is the generic name for a widely used type of electronics prototyping material for circuit boards characterized by a pre-formed regular (rectangular) grid of holes, with wide parallel strips of copper cladding running in one direction all the way across one side of an insulating bonded paper board. It is commonly also known by the name of the original product Veroboard, which is a trademark, in the UK, of British company Vero Technologies Ltd and Canadian company Pixel Print Ltd. It was originated and developed in the early 1960s by the Electronics Department of Vero Precision Engineering Ltd (VPE). It was introduced as a general-purpose material for use in constructing electronic circuits - differing from purpose-designed printed circuit boards (PCBs) in that a variety of electronic circuits may be constructed using a standard wiring board.
In using the board, breaks are made in the tracks, usually around holes, to divide the strips into multiple electrical nodes. With care, it is possible to break between holes to allow for components that have two pin rows only one position apart such as twin row headers for IDCs.
Stripboard is not designed for surface-mount components, though it is possible to mount many such components on the track side, particularly if tracks are cut/shaped with a knife or small cutting disc in a rotary tool.
The first single-size Veroboard product was the forerunner of the numerous types of prototype wiring board which, with worldwide use over five decades, have become known as stripboard.
The generic terms 'veroboard' and 'stripboard' are now taken to be synonymous.
History
By the mid-1950s, the printed circuit board (PCB) had become commonplace in electronics production.
In early 1959, the VPE Electronics Department was formed when managing director Geoffrey Verdon-Roe hired two former Saunders-Roe Ltd employees, Peter H Winter (aircraft design department) and Terry Fitzpatrick (electronics division).
After the fai |
https://en.wikipedia.org/wiki/Systolic%20array | In parallel computer architectures, a systolic array is a homogeneous network of tightly coupled data processing units (DPUs) called cells or nodes. Each node or DPU independently computes a partial result as a function of the data received from its upstream neighbours, stores the result within itself and passes it downstream. Systolic arrays were first used in Colossus, which was an early computer used to break German Lorenz ciphers during World War II. Due to the classified nature of Colossus, they were independently invented or rediscovered by H. T. Kung and Charles Leiserson who described arrays for many dense linear algebra computations (matrix product, solving systems of linear equations, LU decomposition, etc.) for banded matrices. Early applications include computing greatest common divisors of integers and polynomials. They are sometimes classified as multiple-instruction single-data (MISD) architectures under Flynn's taxonomy, but this classification is questionable because a strong argument can be made to distinguish systolic arrays from any of Flynn's four categories: SISD, SIMD, MISD, MIMD, as discussed later in this article.
The parallel input data flows through a network of hard-wired processor nodes, which combine, process, merge or sort the input data into a derived result. Because the wave-like propagation of data through a systolic array resembles the pulse of the human circulatory system, the name systolic was coined from medical terminology. The name is derived from systole as an analogy to the regular pumping of blood by the heart.
Applications
Systolic arrays are often hard-wired for specific operations, such as "multiply and accumulate", to perform massively parallel integration, convolution, correlation, matrix multiplication or data sorting tasks. They are also used for dynamic programming algorithms, used in DNA and protein sequence analysis.
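The multiply-accumulate data flow described above can be mimicked in software. The sketch below (a hypothetical `systolic_matvec` helper, not from any real library) simulates a one-dimensional systolic array computing a matrix–vector product: input samples stream cell-to-cell each clock tick, and each cell performs one multiply-accumulate into its stationary output.

```python
def systolic_matvec(A, x):
    """Simulate a linear systolic array computing y = A @ x.

    One cell per matrix row; cell i accumulates y[i]. The input
    vector x streams through the chain of cells, one cell per tick.
    """
    n = len(A)
    y = [0] * n
    pipe = [None] * n              # the x value currently held by each cell
    for t in range(len(x) + n):    # enough ticks to drain the pipeline
        # systolic shift: every cell passes its sample downstream
        pipe = [x[t] if t < len(x) else None] + pipe[:-1]
        # each cell performs one multiply-accumulate on the sample it now holds
        for i in range(n):
            if pipe[i] is not None:
                y[i] += A[i][t - i] * pipe[i]   # x[t - i] sits at cell i at tick t
    return y
```

Note that every cell does the same local operation on data passed by its neighbour; no cell ever sees the whole input, which is the defining property of the systolic style.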
Architecture
A systolic array typically consists of a large monolithic network of primitive com |
https://en.wikipedia.org/wiki/List%20of%20algebraic%20geometry%20topics | This is a list of algebraic geometry topics, by Wikipedia page.
Classical topics in projective geometry
Affine space
Projective space
Projective line, cross-ratio
Projective plane
Line at infinity
Complex projective plane
Complex projective space
Plane at infinity, hyperplane at infinity
Projective frame
Projective transformation
Fundamental theorem of projective geometry
Duality (projective geometry)
Real projective plane
Real projective space
Segre embedding of a product of projective spaces
Rational normal curve
Algebraic curves
Conics, Pascal's theorem, Brianchon's theorem
Twisted cubic
Elliptic curve, cubic curve
Elliptic function, Jacobi's elliptic functions, Weierstrass's elliptic functions
Elliptic integral
Complex multiplication
Weil pairing
Hyperelliptic curve
Klein quartic
Modular curve
Modular equation
Modular function
Modular group
Supersingular primes
Fermat curve
Bézout's theorem
Brill–Noether theory
Genus (mathematics)
Riemann surface
Riemann–Hurwitz formula
Riemann–Roch theorem
Abelian integral
Differential of the first kind
Jacobian variety
Generalized Jacobian
Moduli of algebraic curves
Hurwitz's theorem on automorphisms of a curve
Clifford's theorem on special divisors
Gonality of an algebraic curve
Weil reciprocity law
Algebraic geometry codes
Algebraic surfaces
Enriques–Kodaira classification
List of algebraic surfaces
Ruled surface
Cubic surface
Veronese surface
Del Pezzo surface
Rational surface
Enriques surface
K3 surface
Hodge index theorem
Elliptic surface
Surface of general type
Zariski surface
Algebraic geometry: classical approach
Algebraic variety
Hypersurface
Quadric (algebraic geometry)
Dimension of an algebraic variety
Hilbert's Nullstellensatz
Complete variety
Elimination theory
Gröbner basis
Projective variety
Quasiprojective variety
Canonical bundle
Complete intersection
Serre duality
Spaltenstein variety
Arithmetic genus, geometric genus, irregularity
Tangent space, Zariski tangent space
Function field of an algebraic variet |
https://en.wikipedia.org/wiki/Geographic%20Names%20Information%20System | The Geographic Names Information System (GNIS) is a database of name and location information about more than two million physical and cultural features throughout the United States and its territories, Antarctica, and the associated states of the Marshall Islands, Federated States of Micronesia, and Palau. It is a type of gazetteer. It was developed by the United States Geological Survey (USGS) in cooperation with the United States Board on Geographic Names (BGN) to promote the standardization of feature names.
Data were collected in two phases.
Although a third phase was considered, which would have handled name changes where local usages differed from maps, it was never begun.
The database is part of a system that includes topographic map names and bibliographic references. The names of books and historic maps that confirm the feature or place name are cited. Variant names, alternatives to official federal names for a feature, are also recorded. Each feature receives a permanent, unique feature record identifier, sometimes called the GNIS identifier. The database never removes an entry, "except in cases of obvious duplication."
Original purposes
The GNIS was originally designed for four major purposes: to eliminate duplication of effort at various other levels of government that were already compiling geographic data, to provide standardized datasets of geographic data for the government and others, to index all of the names found on official U.S. government federal and state maps, and to ensure uniform geographic names for the federal government.
Phase 1
Phase 1 lasted from 1978 to 1981, with a precursor pilot project run over the states of Kansas and Colorado in 1976, and produced 5 databases.
It excluded several classes of feature because they were better documented in non-USGS maps, including airports, the broadcasting masts for radio and television stations, civil divisions, regional and historic names, individual buildings, roads, and triangulation st |
https://en.wikipedia.org/wiki/Robertson%E2%80%93Seymour%20theorem | In graph theory, the Robertson–Seymour theorem (also called the graph minor theorem) states that the undirected graphs, partially ordered by the graph minor relationship, form a well-quasi-ordering. Equivalently, every family of graphs that is closed under minors can be defined by a finite set of forbidden minors, in the same way that Wagner's theorem characterizes the planar graphs as being the graphs that do not have the complete graph K5 or the complete bipartite graph K3,3 as minors.
The Robertson–Seymour theorem is named after mathematicians Neil Robertson and Paul D. Seymour, who proved it in a series of twenty papers spanning over 500 pages from 1983 to 2004. Before its proof, the statement of the theorem was known as Wagner's conjecture after the German mathematician Klaus Wagner, although Wagner said he never conjectured it.
A weaker result for trees is implied by Kruskal's tree theorem, which was conjectured in 1937 by Andrew Vázsonyi and proved in 1960 independently by Joseph Kruskal and S. Tarkowski.
Statement
A minor of an undirected graph G is any graph that may be obtained from G by a sequence of zero or more contractions of edges of G and deletions of edges and vertices of G. The minor relationship forms a partial order on the set of all distinct finite undirected graphs, as it obeys the three axioms of partial orders: it is reflexive (every graph is a minor of itself), transitive (a minor of a minor of G is itself a minor of G), and antisymmetric (if two graphs G and H are minors of each other, then they must be isomorphic). However, if graphs that are isomorphic may nonetheless be considered as distinct objects, then the minor ordering on graphs forms a preorder, a relation that is reflexive and transitive but not necessarily antisymmetric.
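The two edge operations in the definition can be sketched in a few lines of Python (a hypothetical `contract_edge` helper; edges are stored as sorted 2-tuples, and contracting merges v into u while dropping the self-loop and any duplicate edges):

```python
def contract_edge(edges, u, v):
    """Contract edge (u, v): merge v into u, discarding self-loops
    and parallel edges, as in the simple-graph minor definition."""
    out = set()
    for a, b in edges:
        a = u if a == v else a
        b = u if b == v else b
        if a != b:                      # drop the self-loop created by contraction
            out.add(frozenset((a, b)))  # frozenset collapses duplicates
    return {tuple(sorted(e)) for e in out}

# Contracting one edge of the 4-cycle 0-1-2-3-0 yields a triangle,
# so the triangle is a minor of C4 (deletions of edges/vertices are
# the other, even simpler, minor operations).
c4 = {(0, 1), (1, 2), (2, 3), (0, 3)}
triangle = contract_edge(c4, 0, 1)
```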
A preorder is said to form a well-quasi-ordering if it contains neither an infinite descending chain nor an infinite antichain. For instance, the usual ordering on the non-negative integers is a well-quasi-o |
https://en.wikipedia.org/wiki/Neighbor%20joining | In bioinformatics, neighbor joining is a bottom-up (agglomerative) clustering method for the creation of phylogenetic trees, created by Naruya Saitou and Masatoshi Nei in 1987. Usually based on DNA or protein sequence data, the algorithm requires knowledge of the distance between each pair of taxa (e.g., species or sequences) to create the phylogenetic tree.
The algorithm
Neighbor joining takes a distance matrix, which specifies the distance between each pair of taxa, as input.
The algorithm starts with a completely unresolved tree, whose topology corresponds to that of a star network, and iterates over the following steps, until the tree is completely resolved, and all branch lengths are known:
Based on the current distance matrix, calculate a matrix Q (defined below).
Find the pair of distinct taxa i and j (i.e. with i ≠ j) for which Q(i, j) is smallest. Make a new node that joins the taxa i and j, and connect the new node to the central node. For example, in part (B) of the figure at right, node u is created to join f and g.
Calculate the distance from each of the taxa in the pair to this new node.
Calculate the distance from each of the taxa outside of this pair to the new node.
Start the algorithm again, replacing the pair of joined neighbors with the new node and using the distances calculated in the previous step.
The Q-matrix
Based on a distance matrix relating the n taxa, calculate the n × n matrix Q as follows:

Q(i, j) = (n − 2) d(i, j) − Σ_k d(i, k) − Σ_k d(j, k)

where d(i, j) is the distance between taxa i and j, and each sum runs over all n taxa k.
Distance from the pair members to the new node
For each of the taxa in the pair being joined, use the following formula to calculate the distance to the new node:

δ(f, u) = (1/2) d(f, g) + 1/(2(n − 2)) [ Σ_k d(f, k) − Σ_k d(g, k) ]

and:

δ(g, u) = d(f, g) − δ(f, u)

Taxa f and g are the paired taxa and u is the newly created node. The branches joining f and u and g and u, and their lengths, δ(f, u) and δ(g, u), are part of the tree which is gradually being created; they neither affect nor are affected by later neighbor-joining steps.
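One iteration of the algorithm, combining the Q-criterion with the branch-length formulas, can be sketched as follows (a hypothetical `nj_step` helper operating on a symmetric distance matrix given as nested lists; a sketch, not a production implementation):

```python
def nj_step(d):
    """One neighbor-joining step: pick the pair minimizing Q and
    return that pair plus the branch lengths to the new node u."""
    n = len(d)
    row = [sum(r) for r in d]          # row sums: sum_k d(i, k)
    best, pair = None, None
    for i in range(n):
        for j in range(i + 1, n):
            # Q(i, j) = (n - 2) d(i, j) - sum_k d(i, k) - sum_k d(j, k)
            q = (n - 2) * d[i][j] - row[i] - row[j]
            if best is None or q < best:
                best, pair = q, (i, j)
    i, j = pair
    # branch lengths from the joined taxa to the new node u
    d_iu = 0.5 * d[i][j] + (row[i] - row[j]) / (2 * (n - 2))
    d_ju = d[i][j] - d_iu
    return pair, d_iu, d_ju
```

On the classic five-taxon example with d(a, b) = 5, the step joins a and b with branch lengths 2 and 3.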
Distance of the other taxa from the new node
For each taxon not considered in the previous |
https://en.wikipedia.org/wiki/Fractionating%20column | A fractionating column or fractional column is equipment used in the distillation of liquid mixtures to separate the mixture into its component parts, or fractions, based on their differences in volatility. Fractionating columns are used in small-scale laboratory distillations as well as large-scale industrial distillations.
Laboratory fractionating columns
A laboratory fractionating column is a piece of glassware used to separate vaporized mixtures of liquid compounds with close volatility. Most commonly used is either a Vigreux column or a straight column packed with glass beads or metal pieces such as Raschig rings. Fractionating columns help to separate the mixture by allowing the mixed vapors to cool, condense, and vaporize again in accordance with Raoult's law. With each condensation-vaporization cycle, the vapors are enriched in a certain component. A larger surface area allows more cycles, improving separation. This is the rationale for a Vigreux column or a packed fractionating column. Spinning band distillation achieves the same outcome by using a rotating band within the column to force the rising vapors and descending condensate into close contact, achieving equilibrium more quickly.
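The condensation–vaporization cycles can be modeled numerically under simplifying assumptions: an ideal binary mixture with constant relative volatility α, operated at total reflux, with each theoretical plate applying one vapor–liquid equilibrium step. The function names below are illustrative, not standard:

```python
def equilibrium_vapor(x, alpha):
    """Mole fraction of the light component in the vapor in equilibrium
    with liquid of composition x, for constant relative volatility alpha
    (the ideal-mixture form of Raoult's law)."""
    return alpha * x / (1 + (alpha - 1) * x)

def enrich(x0, alpha, plates):
    """Composition after a given number of theoretical plates at total
    reflux: each plate condenses and re-vaporizes the mixture once."""
    x = x0
    for _ in range(plates):
        x = equilibrium_vapor(x, alpha)
    return x
```

Each additional plate pushes the composition further toward the more volatile component, which is why taller or better-packed columns give sharper separations.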
In a typical fractional distillation, a liquid mixture is heated in the distilling flask, and the resulting vapor rises up the fractionating column (see Figure 1). The vapor condenses on glass spurs (known as theoretical trays or theoretical plates) inside the column, and returns to the distilling flask, refluxing the rising distillate vapor. The hottest tray is at the bottom of the column and the coolest tray is at the top. At steady-state conditions, the vapor and liquid on each tray reach an equilibrium. Only the most volatile of the vapors stays in gas form all the way to the top, where it may then proceed through a condenser, which cools the vapor until it condenses into a liquid distillate. The separation may be enhanced by the addition of more trays (to a practical |
https://en.wikipedia.org/wiki/Dominated%20convergence%20theorem | In measure theory, Lebesgue's dominated convergence theorem provides sufficient conditions under which almost everywhere convergence of a sequence of functions implies convergence in the L1 norm. Its power and utility are two of the primary theoretical advantages of Lebesgue integration over Riemann integration.
In addition to its frequent appearance in mathematical analysis and partial differential equations, it is widely used in probability theory, since it gives a sufficient condition for the convergence of expected values of random variables.
Statement
Lebesgue's dominated convergence theorem. Let (f_n) be a sequence of complex-valued measurable functions on a measure space (S, Σ, μ). Suppose that the sequence converges pointwise to a function f and is dominated by some integrable function g in the sense that

|f_n(x)| ≤ g(x)

for all numbers n in the index set of the sequence and all points x ∈ S.

Then f is integrable (in the Lebesgue sense) and

lim_{n→∞} ∫_S |f_n − f| dμ = 0,

which also implies

lim_{n→∞} ∫_S f_n dμ = ∫_S f dμ.
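A quick numerical illustration (my own example, not part of the theorem): the functions f_n(x) = x^n on [0, 1] converge pointwise to 0 almost everywhere and are dominated by the integrable function g ≡ 1, so their integrals, which equal 1/(n + 1), must converge to the integral of the zero function.

```python
def integral_fn(n, steps=100_000):
    """Midpoint-rule approximation of the integral of x**n over [0, 1],
    whose exact value is 1 / (n + 1)."""
    h = 1.0 / steps
    return sum(((k + 0.5) * h) ** n for k in range(steps)) * h

# As n grows, the integrals shrink toward 0, matching the theorem's
# conclusion for this dominated, pointwise-convergent sequence.
```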
Remark 1. The statement "g is integrable" means that the measurable function g is Lebesgue integrable; i.e., ∫_S |g| dμ < ∞.
Remark 2. The convergence of the sequence and domination by g can be relaxed to hold only μ-almost everywhere, provided the measure space (S, Σ, μ) is complete or f is chosen as a measurable function which agrees μ-almost everywhere with the μ-almost everywhere existing pointwise limit. (These precautions are necessary, because otherwise there might exist a non-measurable subset of a μ-null set N ∈ Σ, hence f might not be measurable.)
Remark 3. If μ(S) < ∞, the condition that there is a dominating integrable function g can be relaxed to uniform integrability of the sequence (f_n); see the Vitali convergence theorem.
Remark 4. While f is Lebesgue integrable, it is not in general Riemann integrable. For example, take f_n to be defined on [0, 1] so that it is 1/n at rational numbers and zero everywhere else (on the irrationals). The sequence (f_n) converges pointwise to 0, so f is identically zero, but each f_n is not Riemann integrable, since its image in every finite interval is {0, 1/n} and thus the upper and |
https://en.wikipedia.org/wiki/Automotive%20engineering | Automotive engineering, along with aerospace engineering and naval architecture, is a branch of vehicle engineering, incorporating elements of mechanical, electrical, electronic, software, and safety engineering as applied to the design, manufacture and operation of motorcycles, automobiles, and trucks and their respective engineering subsystems. It also includes modification of vehicles. The manufacturing domain, which deals with the creation and assembly of all automobile parts, is also included. The automotive engineering field is research-intensive and involves direct application of mathematical models and formulas. The study of automotive engineering is to design, develop, fabricate, and test vehicles or vehicle components from the concept stage to the production stage. Production, development, and manufacturing are the three major functions in this field.
Disciplines
Automobile engineering
Automobile engineering is a branch study of engineering which teaches manufacturing, designing, mechanical mechanisms as well as operations of automobiles.
It is an introduction to vehicle engineering which deals with motorcycles, cars, buses, trucks, etc. It includes branch study of mechanical, electronic, software and safety elements.
Some of the engineering attributes and disciplines that are of importance to the automotive engineer include:
Safety engineering: Safety engineering is the assessment of various crash scenarios and their impact on the vehicle occupants. These are tested against very stringent governmental regulations. Some of these requirements include: seat belt and air bag functionality testing, front- and side-impact testing, and tests of rollover resistance. Assessments are done with various methods and tools, including computer crash simulation (typically finite element analysis), crash-test dummy, and partial system sled and full vehicle crashes.
Fuel economy/emissions: Fuel economy is the measured fuel efficiency of the vehicle in miles per gallon |
https://en.wikipedia.org/wiki/Almost%20surely | In probability theory, an event is said to happen almost surely (sometimes abbreviated as a.s.) if it happens with probability 1 (or Lebesgue measure 1). In other words, the set of possible exceptions may be non-empty, but it has probability 0. The concept is analogous to the concept of "almost everywhere" in measure theory. In probability experiments on a finite sample space with a non-zero probability for each outcome, there is no difference between almost surely and surely (since having a probability of 1 entails including all the sample points); however, this distinction becomes important when the sample space is an infinite set, because an infinite set can have non-empty subsets of probability 0.
Some examples of the use of this concept include the strong and uniform versions of the law of large numbers, the continuity of the paths of Brownian motion, and the infinite monkey theorem. The terms almost certainly (a.c.) and almost always (a.a.) are also used. Almost never describes the opposite of almost surely: an event that happens with probability zero happens almost never.
Formal definition
Let (Ω, F, P) be a probability space. An event E ∈ F happens almost surely if P(E) = 1. Equivalently, E happens almost surely if the probability of E not occurring is zero: P(E^c) = 0. More generally, any event E (not necessarily in F) happens almost surely if E^c is contained in a null set: a subset N ∈ F such that P(N) = 0. The notion of almost sureness depends on the probability measure P. If it is necessary to emphasize this dependence, it is customary to say that the event E occurs P-almost surely, or almost surely (P).
Illustrative examples
In general, an event can happen "almost surely", even if the probability space in question includes outcomes which do not belong to the event—as the following examples illustrate.
Throwing a dart
Imagine throwing a dart at a unit square (a square with an area of 1) so that the dart always hits an exact point in the square, in such a way that each point in the square is equally lik |
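The dart experiment can be mimicked numerically (an illustrative sketch with a hypothetical `throw_darts` helper): landing exactly on the diagonal x = y is a probability-0 event, while landing in the left half of the square has probability 1/2.

```python
import random

def throw_darts(trials, seed=0):
    """Throw darts uniformly at the unit square. Count how many land
    exactly on the diagonal x = y (probability 0) and how many land
    in the left half x < 1/2 (probability 1/2)."""
    rng = random.Random(seed)
    diag = half = 0
    for _ in range(trials):
        x, y = rng.random(), rng.random()
        diag += (x == y)
        half += (x < 0.5)
    return diag, half
```

In a finite simulation the diagonal is simply never hit, while the left-half frequency hovers near 1/2, illustrating the difference between an event of probability 0 and an event that is merely unlikely.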
https://en.wikipedia.org/wiki/Self-assembly | Self-assembly is a process in which a disordered system of pre-existing components forms an organized structure or pattern as a consequence of specific, local interactions among the components themselves, without external direction. When the constitutive components are molecules, the process is termed molecular self-assembly.
Self-assembly can be classified as either static or dynamic. In static self-assembly, the ordered state forms as a system approaches equilibrium, reducing its free energy. However, in dynamic self-assembly, patterns of pre-existing components organized by specific local interactions are not commonly described as "self-assembled" by scientists in the associated disciplines. These structures are better described as "self-organized", although these terms are often used interchangeably.
In chemistry and materials science
Self-assembly in the classic sense can be defined as the spontaneous and reversible organization of molecular units into ordered structures by non-covalent interactions. The first property of a self-assembled system that this definition suggests is the spontaneity of the self-assembly process: the interactions responsible for the formation of the self-assembled system act on a strictly local level—in other words, the nanostructure builds itself.
Although self-assembly typically occurs between weakly-interacting species, this organization may be transferred into strongly-bound covalent systems. An example for this may be observed in the self-assembly of polyoxometalates. Evidence suggests that such molecules assemble via a dense-phase type mechanism whereby small oxometalate ions first assemble non-covalently in solution, followed by a condensation reaction that covalently binds the assembled units. This process can be aided by the introduction of templating agents to control the formed species. In such a way, highly organized covalent molecules may be formed in a specific manner.
Self-assembled nano-structure is an object th |
https://en.wikipedia.org/wiki/List%20of%20abstract%20algebra%20topics | Abstract algebra is the subject area of mathematics that studies algebraic structures, such as groups, rings, fields, modules, vector spaces, and algebras. The phrase abstract algebra was coined at the turn of the 20th century to distinguish this area from what was normally referred to as algebra, the study of the rules for manipulating formulae and algebraic expressions involving unknowns and real or complex numbers, often now called elementary algebra. The distinction is rarely made in more recent writings.
Basic language
Algebraic structures are defined primarily as sets with operations.
Algebraic structure
Subobjects: subgroup, subring, subalgebra, submodule etc.
Binary operation
Closure of an operation
Associative property
Distributive property
Commutative property
Unary operator
Additive inverse, multiplicative inverse, inverse element
Identity element
Cancellation property
Finitary operation
Arity
Structure preserving maps called homomorphisms are vital in the study of algebraic objects.
Homomorphisms
Kernels and cokernels
Image and coimage
Epimorphisms and monomorphisms
Isomorphisms
Isomorphism theorems
There are several basic ways to combine algebraic objects of the same type to produce a third object of the same type. These constructions are used throughout algebra.
Direct sum
Direct limit
Direct product
Inverse limit
Quotient objects: quotient group, quotient ring, quotient module etc.
Tensor product
Advanced concepts:
Category theory
Category of groups
Category of abelian groups
Category of rings
Category of modules (over a fixed ring)
Morita equivalence, Morita duality
Category of vector spaces
Homological algebra
Filtration (algebra)
Exact sequence
Functor
Zorn's lemma
Semigroups and monoids
Semigroup
Subsemigroup
Free semigroup
Green's relations
Inverse semigroup (or inversion semigroup)
Krohn–Rhodes theory
Semigroup algebra
Transformation semigroup
Monoid
Aperiodic monoid
Free monoid
Monoid (category theory)
Monoid factorisation
Syntacti |
https://en.wikipedia.org/wiki/Lyapunov%20function | In the theory of ordinary differential equations (ODEs), Lyapunov functions, named after Aleksandr Lyapunov, are scalar functions that may be used to prove the stability of an equilibrium of an ODE. Lyapunov functions (also called Lyapunov’s second method for stability) are important to stability theory of dynamical systems and control theory. A similar concept appears in the theory of general state space Markov chains, usually under the name Foster–Lyapunov functions.
For certain classes of ODEs, the existence of Lyapunov functions is a necessary and sufficient condition for stability. Although there is no general technique for constructing Lyapunov functions for ODEs, a systematic method for constructing them for autonomous ordinary differential equations in their most general form was given by Prof. Cem Civelek. In many specific cases the construction of Lyapunov functions is known. For instance, it was long held by many applied mathematicians that a Lyapunov function could not be constructed for a dissipative gyroscopic system; however, using the method expressed in the publication above, a Lyapunov function can be constructed even for such a system, as per the related article by C. Civelek and Ö. Cihanbegendi. In addition, quadratic functions suffice for systems with one state; the solution of a particular linear matrix inequality provides Lyapunov functions for linear systems; and conservation laws can often be used to construct Lyapunov functions for physical systems.
Definition
A Lyapunov function for an autonomous dynamical system

y′ = g(y), g : Rⁿ → Rⁿ,

with an equilibrium point at y = 0 is a scalar function V : Rⁿ → R that is continuous, has continuous first derivatives, is strictly positive for y ≠ 0, and for which the time derivative V̇ = ∇V · g is non-positive (these conditions are required on some region containing the origin). The (stronger) condition that −V̇ is strictly positive for y ≠ 0 is sometimes stated as −V̇ is locally positive definite, or V̇ is locall |
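As a concrete check, assuming SymPy is available, the candidate V(x) = x² for the one-state system x′ = −x³ satisfies the conditions of the definition: V is strictly positive away from the equilibrium, and its time derivative along trajectories is non-positive.

```python
import sympy as sp

x = sp.symbols('x', real=True)
g = -x**3               # dynamics: x' = -x**3, equilibrium at x = 0
V = x**2                # candidate Lyapunov function, positive for x != 0

# Time derivative along trajectories via the chain rule: Vdot = V'(x) * x'
Vdot = sp.diff(V, x) * g
# sp.simplify(Vdot) gives -2*x**4, which is <= 0 everywhere, so V
# certifies stability of the equilibrium at the origin.
```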
https://en.wikipedia.org/wiki/Outerplanar%20graph | In graph theory, an outerplanar graph is a graph that has a planar drawing for which all vertices belong to the outer face of the drawing.
Outerplanar graphs may be characterized (analogously to Wagner's theorem for planar graphs) by the two forbidden minors K4 and K2,3, or by their Colin de Verdière graph invariants.
They have Hamiltonian cycles if and only if they are biconnected, in which case the outer face forms the unique Hamiltonian cycle. Every outerplanar graph is 3-colorable, and has degeneracy and treewidth at most 2.
The outerplanar graphs are a subset of the planar graphs, the subgraphs of series–parallel graphs, and the circle graphs. The maximal outerplanar graphs, those to which no more edges can be added while preserving outerplanarity, are also chordal graphs and visibility graphs.
History
Outerplanar graphs were first studied and named by Chartrand and Harary, in connection with the problem of determining the planarity of graphs formed by using a perfect matching to connect two copies of a base graph (for instance, many of the generalized Petersen graphs are formed in this way from two copies of a cycle graph). As they showed, when the base graph is biconnected, a graph constructed in this way is planar if and only if its base graph is outerplanar and the matching forms a dihedral permutation of its outer cycle. Chartrand and Harary also proved an analogue of Kuratowski's theorem for outerplanar graphs, that a graph is outerplanar if and only if it does not contain a subdivision of one of the two graphs K4 or K2,3.
Definition and characterizations
An outerplanar graph is an undirected graph that can be drawn in the plane without crossings in such a way that all of the vertices belong to the unbounded face of the drawing. That is, no vertex is totally surrounded by edges. Alternatively, a graph G is outerplanar if the graph formed from G by adding a new vertex, with edges connecting it to all the other vertices, is a planar graph.
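The apex-vertex characterization in the previous paragraph translates directly into a test, assuming the NetworkX library is available (`is_outerplanar` is a hypothetical helper written here, not a NetworkX function):

```python
import networkx as nx

def is_outerplanar(G):
    """Test outerplanarity via the apex-vertex characterization:
    G is outerplanar iff G plus a new vertex adjacent to every
    vertex of G is planar."""
    H = G.copy()
    apex = object()               # a fresh node label guaranteed not in G
    H.add_edges_from((apex, v) for v in G.nodes)
    return nx.check_planarity(H)[0]
```

A cycle is outerplanar (adding the apex gives a planar wheel graph), while the forbidden minors K4 and K2,3 are not (adding the apex to K4 gives the non-planar K5).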
A maximal outerplanar graph is an oute |
https://en.wikipedia.org/wiki/Spirit%20level | A spirit level, bubble level, or simply a level, is an instrument designed to indicate whether a surface is horizontal (level) or vertical (plumb).
Two basic designs exist: tubular (or linear) and bull's eye (or circular).
Different types of spirit levels may be used by carpenters, stonemasons, bricklayers, other building trades workers, surveyors, millwrights and other metalworkers, and in some photographic or videographic work.
History
The history of the spirit level was discussed in brief in an 1887 article appearing in Scientific American. Melchisédech Thévenot, a French scientist, invented the instrument some time before February 2, 1661. This date can be established from Thevenot's correspondence with scientist Christiaan Huygens. Within a year of this date the inventor circulated details of his invention to others, including Robert Hooke in London and Vincenzo Viviani in Florence. It is occasionally argued that these "bubble levels" did not come into widespread use until the beginning of the 18th century, the earliest surviving examples being from that time, but Adrien Auzout had recommended that the Académie Royale des Sciences take "levels of the Thevenot type" on its expedition to Madagascar in 1666. It is very likely that these levels were in use in France and elsewhere long before the turn of the century.
The Fell All-Way precision level, one of the first successful American-made bull's eye levels for machine tool use, was invented by William B. Fell of Rockford, Illinois in 1939. The device was unique in that it could be placed on a machine bed and show tilt on the x-y axes simultaneously, eliminating the need to rotate the level 90 degrees. The level was so accurate it was restricted from export during World War II. The device set a new standard of .0005 inches per foot resolution (five ten-thousandths of an inch per foot, or five arc seconds of tilt). Production of the level stopped around 1970, and was restarted in the 1980s by Thomas Butler Technology, also of Ro |
https://en.wikipedia.org/wiki/Limit%20state%20design | Limit State Design (LSD), also known as Load And Resistance Factor Design (LRFD), refers to a design method used in structural engineering. A limit state is a condition of a structure beyond which it no longer fulfills the relevant design criteria. The condition may refer to a degree of loading or other actions on the structure, while the criteria refer to structural integrity, fitness for use, durability or other design requirements. A structure designed by LSD is proportioned to sustain all actions likely to occur during its design life, and to remain fit for use, with an appropriate level of reliability for each limit state. Building codes based on LSD implicitly define the appropriate levels of reliability by their prescriptions.
The method of limit state design, developed in the USSR and based on research led by Professor N.S. Streletski, was introduced in USSR building regulations in 1955.
Criteria
Limit state design requires the structure to satisfy two principal criteria: the ultimate limit state (ULS) and the serviceability limit state (SLS).
Any design process involves a number of assumptions. The loads to which a structure will be subjected must be estimated, sizes of members to check must be chosen and design criteria must be selected. All engineering design criteria have a common goal: that of ensuring a safe structure and ensuring the functionality of the structure.
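In LRFD form, the verification of a limit state reduces to comparing a factored demand against a factored capacity. The sketch below uses illustrative values only (1.2, 1.6 and 0.9 are commonly cited LRFD-style dead-load, live-load and resistance factors, but the actual values come from the governing building code):

```python
def uls_check(load_effects, load_factors, nominal_resistance, phi):
    """Illustrative ultimate-limit-state check in LRFD form:
    sum(gamma_i * Q_i) <= phi * R_n."""
    demand = sum(g * q for g, q in zip(load_factors, load_effects))
    capacity = phi * nominal_resistance
    return demand <= capacity

# Example: dead load effect 100, live load effect 50 (arbitrary units),
# factors 1.2 and 1.6, nominal resistance 250, resistance factor 0.9.
ok = uls_check([100.0, 50.0], [1.2, 1.6], 250.0, 0.9)
```

The load factors inflate the demand and the resistance factor deflates the capacity, which is how the code calibrates the implicit reliability level mentioned above.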
Ultimate limit state (ULS)
A clear distinction is made between the ultimate state (US) and the ultimate limit state (ULS). The US is a physical situation that involves either excessive deformations, leading to and approaching collapse, of the component under consideration or of the structure as a whole, as relevant, or deformations exceeding pre-agreed values. It involves, of course, considerable inelastic (plastic) behavior of the structural scheme and residual deformations. In contrast, the ULS is not a physical situation but rather an agreed computational condition that must be fulfilled, a |