source | text |
|---|---|
https://en.wikipedia.org/wiki/Genetic%20marker | A genetic marker is a gene or DNA sequence with a known location on a chromosome that can be used to identify individuals or species. It can be described as a variation (which may arise due to mutation or alteration in the genomic loci) that can be observed. A genetic marker may be a short DNA sequence, such as a sequence surrounding a single base-pair change (single nucleotide polymorphism, SNP), or a long one, like minisatellites.
Background
For many years, gene mapping was limited to identifying organisms by traditional phenotypic markers. These included genes that encoded easily observable characteristics such as blood types or seed shapes. The insufficient number of such characteristics in several organisms limited the mapping efforts that could be done. This prompted the development of genetic markers which could identify genetic characteristics that are not readily observable in organisms (such as protein variation).
Types
Some commonly used types of genetic markers are:
RFLP (or Restriction fragment length polymorphism)
SSLP (or Simple sequence length polymorphism)
AFLP (or Amplified fragment length polymorphism)
RAPD (or Random amplification of polymorphic DNA)
VNTR (or Variable number tandem repeat)
Microsatellite polymorphism, (or Simple sequence repeat)
SNP (or Single nucleotide polymorphism)
STR (or Short tandem repeat)
SFP (or Single feature polymorphism)
DArT (or Diversity Arrays Technology)
RAD markers (or Restriction site associated DNA markers)
(using Sequence-tagged sites)
Molecular genetic markers can be divided into two classes: a) biochemical markers, which detect variation at the gene product level, such as changes in proteins and amino acids, and b) molecular markers, which detect variation at the DNA level, such as nucleotide changes: deletion, duplication, inversion and/or insertion. Markers can exhibit two modes of inheritance, i.e. dominant/recessive or co-dominant. If the genetic pattern of homozygotes can be distin |
https://en.wikipedia.org/wiki/Response%20spectrum | A response spectrum is a plot of the peak or steady-state response (displacement, velocity or acceleration) of a series of oscillators of varying natural frequency, that are forced into motion by the same base vibration or shock. The resulting plot can then be used to pick off the response of any linear system, given its natural frequency of oscillation. One such use is in assessing the peak response of buildings to earthquakes. The science of strong ground motion may use some values from the ground response spectrum (calculated from recordings of surface ground motion from seismographs) for correlation with seismic damage.
If the input used in calculating a response spectrum is steady-state periodic, then the steady-state result is recorded. Damping must be present, or else the response will be infinite. For transient input (such as seismic ground motion), the peak response is reported. Some level of damping is generally assumed, but a value will be obtained even with no damping.
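For transient input, the calculation can be sketched directly: numerically integrate each damped single-degree-of-freedom oscillator over the same base-acceleration record and keep the peak. A minimal Python sketch (the semi-implicit Euler integrator, time step and 5% damping are illustrative assumptions, not a prescribed method):
```python
import numpy as np

def response_spectrum(accel, dt, freqs, damping=0.05):
    """Peak relative displacement of SDOF oscillators with natural
    frequencies `freqs`, all driven by the same base acceleration."""
    peaks = []
    for f in freqs:
        wn = 2.0 * np.pi * f
        x, v, peak = 0.0, 0.0, 0.0
        for ug in accel:
            # semi-implicit Euler step on  x'' + 2*z*wn*x' + wn^2*x = -ug
            a = -ug - 2.0 * damping * wn * v - wn ** 2 * x
            v += a * dt
            x += v * dt
            peak = max(peak, abs(x))
        peaks.append(peak)
    return np.array(peaks)

# e.g. spectrum of a synthetic decaying pulse, evaluated at 0.5-20 Hz
t = np.arange(0.0, 10.0, 0.005)
record = np.exp(-t) * np.sin(2 * np.pi * 2.0 * t)
spectrum = response_spectrum(record, 0.005, np.linspace(0.5, 20, 40))
```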
Response spectra can also be used in assessing the response of linear systems with multiple modes of oscillation (multi-degree of freedom systems), although they are only accurate for low levels of damping. Modal analysis is performed to identify the modes, and the response in that mode can be picked from the response spectrum. These peak responses are then combined to estimate a total response. A typical combination method is the square root of the sum of the squares (SRSS) if the modal frequencies are not close. The result is typically different from that which would be calculated directly from an input, since phase information is lost in the process of generating the response spectrum.
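A toy illustration of the SRSS combination (the modal peak values are made up):
```python
import numpy as np

modal_peaks = np.array([12.0, 5.5, 1.2])   # hypothetical peak response per mode
total = np.sqrt(np.sum(modal_peaks ** 2))  # SRSS estimate, here ~13.25
```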
The main limitation of response spectra is that they are only universally applicable for linear systems. Response spectra can be generated for non-linear systems, but are only applicable to systems with the same non-linearity, although attempts have been made to develop non-linear seismic design spec |
https://en.wikipedia.org/wiki/Pronucleus | A pronucleus (: pronuclei) denotes the nucleus found in either a sperm or egg cell during the process of fertilization. The sperm cell undergoes a transformation into a pronucleus after entering the egg cell but prior to the fusion of the genetic material of both the sperm and egg. In contrast, the egg cell possesses a pronucleus once it becomes haploid, not upon the arrival of the sperm cell. Haploid cells, such as sperm and egg cells in humans, carry half the number of chromosomes present in somatic cells, with 23 chromosomes compared to the 46 found in somatic cells. It is noteworthy that the male and female pronuclei do not physically merge, although their genetic material does. Instead, their membranes dissolve, eliminating any barriers between the male and female chromosomes, facilitating the combination of their chromosomes into a single diploid nucleus in the resulting embryo, which contains a complete set of 46 chromosomes.
The presence of two pronuclei serves as the initial indication of successful fertilization, often observed around 18 hours after insemination, or intracytoplasmic sperm injection (ICSI) during in vitro fertilization. At this stage, the zygote is termed a two-pronuclear zygote (2PN). Two-pronuclear zygotes transitioning through 1PN or 3PN states tend to yield poorer-quality embryos compared to those maintaining 2PN status throughout development, and this distinction may hold significance in the selection of embryos during in vitro fertilization (IVF) procedures.
History
The pronucleus was discovered in the 1870s, microscopically, using staining techniques combined with microscopes of improved magnification. The pronucleus was originally found during the first studies on meiosis. Edouard Van Beneden published a paper in 1875 in which he first mentions the pronucleus, based on studying the eggs of rabbits and bats. He stated that the two pronuclei form together in the center of the cell to form the embryonic nucleus. Van Beneden also found t |
https://en.wikipedia.org/wiki/Linde%E2%80%93Buzo%E2%80%93Gray%20algorithm | The Linde–Buzo–Gray algorithm (introduced by Yoseph Linde, Andrés Buzo and Robert M. Gray in 1980) is a vector quantization algorithm to derive a good codebook.
It is similar to the k-means method in data clustering.
The algorithm
At each iteration, each vector is split into two new vectors.
Figure legend:
A: initial state, the centroid of the training sequence;
B: initial estimation #1, a code book of size 2;
C: final estimation after LGA, the optimal code book with 2 vectors;
D: initial estimation #2, a code book of size 4;
E: final estimation after LGA, the optimal code book with 4 vectors.
The two final code vectors are then split into four, and the process is repeated until the desired number of code vectors is obtained.
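A compact sketch of this split-and-refine loop, assuming Euclidean distortion and a small perturbation factor eps; an illustrative reading of the algorithm rather than the authors' reference implementation:
```python
import numpy as np

def lbg(training, n_codes, eps=1e-2, n_iter=20):
    """Grow a codebook by repeated splitting (LBG) with Lloyd refinement.
    n_codes is assumed to be a power of two."""
    codebook = training.mean(axis=0, keepdims=True)      # start: global centroid
    while len(codebook) < n_codes:
        # split every code vector into a slightly perturbed pair
        codebook = np.vstack([codebook * (1 + eps), codebook * (1 - eps)])
        for _ in range(n_iter):                          # k-means style refinement
            dist = ((training[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
            nearest = dist.argmin(axis=1)                # nearest code per sample
            for i in range(len(codebook)):
                cell = training[nearest == i]
                if len(cell):
                    codebook[i] = cell.mean(axis=0)      # recompute centroid
    return codebook

codes = lbg(np.random.rand(500, 2), n_codes=4)           # 1 -> 2 -> 4 vectors
```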
References
The original paper describing the algorithm, as an extension to Lloyd's algorithm:
Cluster analysis algorithms
Machine learning algorithms
Artificial neural networks |
https://en.wikipedia.org/wiki/Air%20Force%20Two | Air Force Two is the air traffic control designated call sign held by any United States Air Force aircraft carrying the vice president of the United States, but not the president. The term is often associated with the Boeing C-32, a modified 757 which is most commonly used as the vice president's transport. Other 89th Airlift Wing aircraft, such as the Boeing C-40 Clipper, C-20B, C-37A, and C-37B, have also served in this role. The VC-25A, the aircraft most often used by the president as Air Force One, has also been used by the vice president as Air Force Two.
History
Richard Nixon was one of the first senior officials in American government to travel internationally via jet aircraft on official business, taking a Boeing VC-137A Stratoliner on his visit to the Soviet Union in July 1959 for the Kitchen Debates as Eisenhower's vice president.
Domestically, non-presidential VIP travel still relied on the prop-powered Convair VC-131 Samaritan aircraft until Nelson Rockefeller was named Gerald Ford's vice president in 1974. Rockefeller personally owned a Grumman Gulfstream II jet that he preferred to the much slower Convair; Rockefeller's Gulfstream II then used the "Executive Two" callsign while he was in office. This prompted the 89th Airlift Wing's acquisition of three McDonnell Douglas VC-9Cs in 1975, adding to its three VC-137 jets used for senior executive international travel.
Prior senior executive aircraft included the former presidential Douglas VC-54 Skymaster, Douglas VC-118A, and Lockheed C-121 Constellations, held in reserve as back-up aircraft for the newer aircraft designated for presidential travel.
Design
Aircraft operated by the 89th Airlift Wing and allocated for use by the vice president and senior executives authorized to travel under the Special Air Mission designation can be distinguished from the distinctive Raymond Loewy Air Force One livery by the lack of the steel-blue cheatline and cap over the cockpit.
Former presidential aircraft that has |
https://en.wikipedia.org/wiki/Archos | Archos (stylized as ARCHOS) is a French multinational electronics company that was established in 1988 by Henri Crohas. Archos manufactures tablets, smartphones, portable media players and portable data storage devices. The name is an anagram of Crohas' last name; in Greek, -αρχος is also a suffix used in nouns denoting a person with power. The company's slogan has been updated over time from "Think Smaller" to "On The Go" and, currently, "Entertainment your way".
Archos has developed a variety of products, including digital audio players, portable video players (PVP), digital video recorders, a personal digital assistant, netbooks, more recently tablet computers using Google Android and Microsoft Windows (tablet PCs), and smartphones (which are manufactured by ZTE under the "Archos" brand name).
Success and decline
By 2000, Archos had become an important player in the portable media player market, as demonstrated by its release that year of the first disk-based digital audio player (DAP), the Jukebox 6000. This product paved the way for high-capacity DAPs, which finally resulted in the wide adoption of digital audio and MP3 players. Archos' success during this period was attributed to its strategy of technological leadership: releasing successive iterations of a product line, each with better specifications and technology than its predecessors.
In the latter part of the 2000s, Archos began to lose ground to Apple, which had introduced its own portable devices such as the iPod. This development highlighted a trend in the technology industry whereby beating competitors to market and equipping products with the most advanced available technology do not always translate into success. The company started phasing out its portable media players in 2008 to focus more on its Android tablet range.
In 2013, the company entered the mobile phone market by launching a series of smartphone models for ins |
https://en.wikipedia.org/wiki/Phreatic | Phreatic is a term used in hydrology to refer to aquifers, in speleology to refer to cave passages, and in volcanology to refer to a type of volcanic eruption.
Hydrology
The term phreatic (the word originates from the Greek φρέαρ (phréar), meaning "well" or "spring") is used in hydrology and the earth sciences to refer to matters relating to ground water (an aquifer) below the water table. The term 'phreatic surface' indicates the location where the pore water pressure is at atmospheric pressure (i.e. the pressure head is zero). This surface normally coincides with the water table. The slope of the phreatic surface is assumed to indicate the direction of ground water movement in an unconfined aquifer.
The phreatic zone, below the phreatic surface, where rock and soil are saturated with water, is the counterpart of the vadose zone, or unsaturated zone, above it. Unconfined aquifers are also referred to as phreatic aquifers because their upper boundary is provided by the phreatic surface.
Speleology
In speleogenesis, a division of speleology, 'phreatic action' forms cave passages by dissolving the limestone in all directions, as opposed to 'vadose action', whereby a stream running in a cave passage erodes a trench in the floor. It occurs when the passage is full of water, and therefore normally only when it is below the water table, and only if the water is not saturated with calcium carbonate or calcium magnesium carbonate. A cave passage formed in this way is characteristically circular or oval in cross-section as limestone is dissolved on all surfaces.
Many cave passages are formed by a combination of phreatic followed by vadose action. Such passages form a keyhole cross section: a round-shaped section at the top and a rectangular trench at the bottom.
Volcanology
A phreatic eruption or steam-blast eruption occurs when magma heats ground or surface water.
Biology
Animals living within the phreatic zone of groundwater aquifers can be referred to as phreatobites. T |
https://en.wikipedia.org/wiki/FOIL%20method | In elementary algebra, FOIL is a mnemonic for the standard method of multiplying two binomials—hence the method may be referred to as the FOIL method. The word FOIL is an acronym for the four terms of the product:
First ("first" terms of each binomial are multiplied together)
Outer ("outside" terms are multiplied—that is, the first term of the first binomial and the second term of the second)
Inner ("inside" terms are multiplied—second term of the first binomial and first term of the second)
Last ("last" terms of each binomial are multiplied)
The general form is
(a + b)(c + d) = ac + ad + bc + bd.
Note that a is both a "first" term and an "outer" term; b is both a "last" and an "inner" term, and so forth. The order of the four terms in the sum is not important and need not match the order of the letters in the word FOIL.
History
The FOIL method is a special case of a more general method for multiplying algebraic expressions using the distributive law. The word FOIL was originally intended solely as a mnemonic for high-school students learning algebra. The term appears in William Betz's 1929 text Algebra for Today, where he states:
... first terms, outer terms, inner terms, last terms. (The rule stated above may also be remembered by the word FOIL, suggested by the first letters of the words first, outer, inner, last.)
William Betz was active in the movement to reform mathematics in the United States at that time, had written many texts on elementary mathematics topics and had "devoted his life to the improvement of mathematics education".
Many students and educators in the US now use the word "FOIL" as a verb meaning "to expand the product of two binomials".
Examples
The method is most commonly used to multiply linear binomials. For example,
(x + 3)(x + 5) = x·x + x·5 + 3·x + 3·5 = x^2 + 8x + 15.
If either binomial involves subtraction, the corresponding terms must be negated. For example,
(x − 2)(x + 4) = x·x + x·4 + (−2)·x + (−2)·4 = x^2 + 4x − 2x − 8 = x^2 + 2x − 8.
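The expansions above can be checked mechanically, for instance with sympy:
```python
from sympy import expand, symbols

x = symbols("x")
print(expand((x + 3) * (x + 5)))   # x**2 + 8*x + 15
print(expand((x - 2) * (x + 4)))   # x**2 + 2*x - 8
```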
The distributive law
The FOIL method is equivalent to a two-step process involving the distributive law:
(a + b)(c + d) = a(c + d) + b(c + d) = ac + ad + bc + bd.
In the first step, the (c + d) is distributed over the a |
https://en.wikipedia.org/wiki/Pulation%20square | In category theory, a branch of mathematics, a pulation square (also called a Doolittle diagram) is a diagram that is simultaneously a pullback square and a pushout square. It is a self-dual concept.
References
Adámek, Jiří; Herrlich, Horst; Strecker, George E. (1990). Abstract and Concrete Categories. Originally published by John Wiley & Sons; now available as a free online edition (PDF).
Herrlich, Horst, & Strecker, George E., Category Theory, Heldermann Verlag (2007).
Category theory |
https://en.wikipedia.org/wiki/Rhythms%20NetConnections | Rhythms NetConnections Inc. (former NASDAQ: RTHM) was in the business of providing broadband local-access communication services to large enterprises, telecommunications carriers and their internet service provider (ISP) affiliates, and other ISPs. The company's services included a range of high-speed, always-on connections that were designed to offer customers both cost and performance advantages when accessing the Internet or private networks. The company used multiple digital subscriber line (DSL) technologies to provide data transfer rates ranging from 128 kbit/s to 8.0 Mbit/s delivering data to the end user, and from 128 kbit/s to 1.5 Mbit/s receiving data from the end user. The company was delisted from NASDAQ in May 2001. On August 2, 2001, the company and all of its wholly owned United States subsidiaries voluntarily filed for reorganization under Chapter 11 of the United States Bankruptcy Code. Also in August 2001, the company sent 31-day service termination letters to all of its customers.
Legal Action
A protracted class action securities lawsuit against the officers and directors of Rhythms NetConnections was settled on April 3, 2009. Judge John K. Lane of the U.S. District Court for the District of Colorado gave his final approval to a $17.5 million settlement. Judge Lane also awarded the plaintiffs' attorneys, Milberg LLP, Stull Stull & Brody and The Shuman Law Firm, 30% of the settlement fund and an additional $2.6 million in expenses from the fund.
A class of shareholders, who purchased shares in Rhythms NetConnections between Jan. 6, 2000, and April 2, 2001, brought the case. The lawsuit alleged that the officers and directors “knowingly or recklessly” made false statements about the company's subscriber line count, growth and financial condition in an effort to inflate its stock price.
At one time, Enron owned 5.4 million shares of Rhythms NetConnections stock.
See also
Covad Communications
Dot-com bubble
NorthPoint Communications
References
S |
https://en.wikipedia.org/wiki/IEC%2061131-3 | IEC 61131-3 is the third part (of 10) of the international standard IEC 61131 for programmable logic controllers. It was first published in December 1993 by the IEC; the current (third) edition was published in February 2013.
Part 3 of IEC 61131 deals with basic software architecture and programming languages of the control program within PLC. It defines three graphical and two textual programming language standards:
Ladder diagram (LD), graphical
Function block diagram (FBD), graphical
Structured text (ST), textual
Instruction list (IL), textual (deprecated in 3rd edition of the standard)
Sequential function chart (SFC), graphical; has elements to organize programs for sequential and parallel control processing.
Data types
Elementary Data Type
Bit Strings – groups of on/off values
BOOL – 1 bit (0,1)
BYTE – 8 bit (1 byte)
WORD – 16 bit (2 byte)
DWORD – 32 bit (4 byte)
LWORD – 64 bit (8 byte)
INTEGER – whole numbers (assuming 8-bit bytes)
SINT – signed short integer (1 byte)
INT – signed integer (2 byte)
DINT – signed double integer (4 byte)
LINT – signed long integer (8 byte)
USINT – Unsigned short integer (1 byte)
UINT – Unsigned integer (2 byte)
UDINT – Unsigned double integer (4 byte)
ULINT – Unsigned long integer (8 byte)
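The ranges implied by these widths follow directly from the bit counts; a quick check (Python used here purely for illustration):
```python
# Ranges of the IEC 61131-3 integer types, assuming 8-bit bytes.
types = [("SINT", 8, True), ("INT", 16, True), ("DINT", 32, True), ("LINT", 64, True),
         ("USINT", 8, False), ("UINT", 16, False), ("UDINT", 32, False), ("ULINT", 64, False)]
for name, bits, signed in types:
    lo = -(2 ** (bits - 1)) if signed else 0
    hi = (2 ** (bits - 1) - 1) if signed else (2 ** bits - 1)
    print(f"{name}: {lo} .. {hi}")   # e.g. INT: -32768 .. 32767
```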
REAL – floating point IEC 60559 (same as IEEE 754-2008)
REAL – (4 byte)
LREAL – (8 byte)
Duration
TIME – (implementer specific). Literals in the form of T#5m90s15ms
LTIME – (8 byte). Literals extend to nanoseconds in the form of T#5m90s15ms542us15ns
Date
DATE – calendar date (implementer specific)
LDATE – calendar date (8 byte, nanoseconds since 1970-01-01, restricted to multiple of one day)
Time of day
TIME_OF_DAY / TOD – clock time (implementer specific)
LTIME_OF_DAY / LTOD – clock time (8 byte)
Date and time of Day
DATE_AND_TIME / DT – time and date (implementer specific)
LDATE_AND_TIME / LDT – time and date (8 byte, nanoseconds since 1970-01-01)
Character / Character string
CHAR – S |
https://en.wikipedia.org/wiki/Linear%20grammar | In computer science, a linear grammar is a context-free grammar that has at most one nonterminal in the right-hand side of each of its productions.
A linear language is a language generated by some linear grammar.
Example
An example of a linear grammar is G with N = {S}, Σ = {a, b}, P with start symbol S and rules
S → aSb
S → ε
It generates the language { a^n b^n : n ≥ 0 }.
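A small sketch of why: n applications of S → aSb followed by S → ε derive exactly a^n b^n:
```python
def derive(n):
    """Apply S -> aSb n times, then S -> epsilon: yields a^n b^n."""
    s = "S"
    for _ in range(n):
        s = s.replace("S", "aSb")
    return s.replace("S", "")

assert derive(3) == "aaabbb"
```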
Relationship with regular grammars
Two special types of linear grammars are the following:
the left-linear or left-regular grammars, in which all rules are of the form A → αw where α is either empty or a single nonterminal and w is a string of terminals;
the right-linear or right-regular grammars, in which all rules are of the form A → wα where w is a string of terminals and α is either empty or a single nonterminal.
Each of these can describe exactly the regular languages.
A regular grammar is a grammar that is left-linear or right-linear.
Observe that by inserting new nonterminals, any linear grammar can be replaced by an equivalent one where some of the rules are left-linear and some are right-linear. For instance, the rules of G above can be replaced with
S → aA
A → Sb
S → ε
However, the requirement that all rules be left-linear (or all rules be right-linear) leads to a strict decrease in the expressive power of linear grammars.
Expressive power
All regular languages are linear; conversely, an example of a linear, non-regular language is { a^n b^n : n ≥ 0 }, as explained above.
All linear languages are context-free; conversely, an example of a context-free, non-linear language is the Dyck language of well-balanced bracket pairs.
Hence, the regular languages are a proper subset of the linear languages, which in turn are a proper subset of the context-free languages.
While regular languages are deterministic, there exist linear languages that are nondeterministic. For example, the language of even-length palindromes on the alphabet of 0 and 1 has the linear grammar S → 0S0 | 1S1 | ε. An arbitrary string of this |
https://en.wikipedia.org/wiki/Invariant%20polynomial | In mathematics, an invariant polynomial is a polynomial P that is invariant under a group Γ acting on a vector space V. Therefore, P is a Γ-invariant polynomial if
P(γx) = P(x) for all γ ∈ Γ and x ∈ V.
Cases of particular importance are for Γ a finite group (in the theory of Molien series, in particular), a compact group, a Lie group or algebraic group. For a basis-independent definition of 'polynomial' nothing is lost by referring to the symmetric powers of the given linear representation of Γ.
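As a toy example (mine, not the article's): the polynomial x^2 + y^2 is invariant under the group generated by a 90° rotation of the plane, which can be checked symbolically:
```python
from sympy import symbols, expand

x, y = symbols("x y")
P = x**2 + y**2
# gamma: 90-degree rotation, (x, y) -> (y, -x); check P(gamma v) == P(v)
rotated = P.subs({x: y, y: -x}, simultaneous=True)
assert expand(rotated - P) == 0
```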
References
Commutative algebra
Invariant theory
Polynomials |
https://en.wikipedia.org/wiki/T-structure | In the branch of mathematics called homological algebra, a t-structure is a way to axiomatize the properties of an abelian subcategory of a derived category. A t-structure on a triangulated category or stable infinity category consists of two subcategories which abstract the idea of complexes whose cohomology vanishes in positive, respectively negative, degrees. There can be many distinct t-structures on the same category, and the interplay between these structures has implications for algebra and geometry. The notion of a t-structure arose in the work of Beilinson, Bernstein, Deligne, and Gabber on perverse sheaves.
Definition
Fix a triangulated category D with translation functor [1]. A t-structure on D is a pair (D^{≤0}, D^{≥0}) of full subcategories, each of which is stable under isomorphism, which satisfy the following three axioms.
If X is an object of D^{≤0} and Y is an object of D^{≥0}, then Hom(X, Y[−1]) = 0.
If X is an object of D^{≤0}, then X[1] is also an object of D^{≤0}. Similarly, if Y is an object of D^{≥0}, then Y[−1] is also an object of D^{≥0}.
If A is an object of D, then there exists a distinguished triangle X → A → Y → X[1] such that X is an object of D^{≤0} and Y is an object of D^{≥0}[−1].
It can be shown that the subcategories D^{≤0} and D^{≥0} are closed under extensions in D. In particular, they are stable under finite direct sums.
Suppose that (D^{≤0}, D^{≥0}) is a t-structure on D. In this case, for any integer n, we define D^{≤n} to be the full subcategory of D whose objects have the form X[−n], where X is an object of D^{≤0}. Similarly, D^{≥n} is the full subcategory of objects Y[−n], where Y is an object of D^{≥0}. More briefly, we define
D^{≤n} = D^{≤0}[−n] and D^{≥n} = D^{≥0}[−n].
With this notation, the axioms above may be rewritten as:
If X is an object of D^{≤0} and Y is an object of D^{≥1}, then Hom(X, Y) = 0.
D^{≤0} ⊆ D^{≤1} and D^{≥1} ⊆ D^{≥0}.
If A is an object of D, then there exists a distinguished triangle X → A → Y → X[1] such that X is an object of D^{≤0} and Y is an object of D^{≥1}.
The heart or core of the t-structure is the full subcategory consisting of objects contained in both D^{≤0} and D^{≥0}, that is,
D^♡ = D^{≤0} ∩ D^{≥0}.
The heart of a t-structure is an abelian category (whereas a triangulated category is additive b |
https://en.wikipedia.org/wiki/All-pairs%20testing | In computer science, all-pairs testing or pairwise testing is a combinatorial method of software testing that, for each pair of input parameters to a system (typically, a software algorithm), tests all possible discrete combinations of those parameters. Using carefully chosen test vectors, this can be done much faster than an exhaustive search of all combinations of all parameters, by "parallelizing" the tests of parameter pairs.
Computer scientists and mathematicians both work on algorithms to generate pairwise test suites. Numerous strategies exist to generate such test suites, as there is no efficient exact solution for every possible input and constraint scenario. An early researcher in this area created a short one-hour Combinatorial Testing course that covers the theory of combinatorial testing (of which pairwise testing is a special case) and shows learners how to use a free tool from NIST to generate their own combinatorial test suites quickly.
Rationale
The most common bugs in a program are generally triggered by either a single input parameter or an interaction between pairs of parameters. Bugs involving interactions between three or more parameters are both progressively less common and progressively more expensive to find; such testing has as its limit the testing of all possible inputs. Thus, a combinatorial technique for picking test cases like all-pairs testing is a useful cost-benefit compromise that enables a significant reduction in the number of test cases without drastically compromising functional coverage.
More rigorously, assume that a test case has N parameters given in a set {P_1, ..., P_N}. The range of each parameter P_i is given by R(P_i) = R_i; let us assume that |R_i| = n_i. The number of all possible test cases is then the product n_1 × n_2 × ... × n_N. Imagining that the code deals with the conditions taking only two parameters at a time might reduce the number of needed test cases.
To demonstrate, suppose there are parameters X, Y and Z. We can use a predicate of the form P(X, Y, Z) of order 3, which |
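A greedy sketch of the pairwise idea described above (illustrative only; practical tools such as NIST's generator use more refined algorithms):
```python
from itertools import combinations, product

def pairwise_cases(params):
    """Keep a candidate test case only if it covers a not-yet-seen value
    pair for some pair of parameters; stop when every pair is covered."""
    names = list(params)
    uncovered = {((a, va), (b, vb))
                 for a, b in combinations(names, 2)
                 for va in params[a] for vb in params[b]}
    cases = []
    for values in product(*params.values()):
        case = dict(zip(names, values))
        new = {((a, case[a]), (b, case[b]))
               for a, b in combinations(names, 2)} & uncovered
        if new:
            cases.append(case)
            uncovered -= new
        if not uncovered:
            break
    return cases

# 3 parameters x 2 values each: 8 exhaustive cases, but pairwise needs fewer
print(len(pairwise_cases({"X": [0, 1], "Y": [0, 1], "Z": [0, 1]})))
```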
https://en.wikipedia.org/wiki/Soribada | Soribada () was the first Korean peer-to-peer file-sharing service, launched in 2000 by Sean Yang. The name 'Soribada' means "Ocean of Sound" or "Receiving (downloading) Sound". It was closed in 2002 by court order but continued to be distributed with a stipulation that its users were responsible for any of the files downloaded.
On November 5, 2003, Soribada was relaunched as and in July 2004, the website was renewed as a P2P search portal with a paid MP3 service in December 2004.
It remains the most widely used P2P system in Korea. The most recent version of Soribada is Soribada 6, which is downloadable on their website.
In 2017, the site held its first Soribada Best K-Music Awards, 17 years after the site's launch.
Charges of 2002
Soribada was indicted on copyright infringement charges for the first time. The case was filed by the Korean Association of Phonographic Producers (KAPP), presently the Recording Industry Association of Korea (RIAK).
Soribada 2.0
Soribada 2.0 allowed users to swap files without having to establish a link to a centralized server. This mechanism was put in place to minimize the risk of legal prosecution. KAPP's response, however, was to sue individual Soribada 2.0 users instead of the developers. Yang Jung-hwan responded to KAPP's approach by saying, “In a situation where voluminous e-mail services handling over 100MB are being sustained, netizens will find other ways to share music files even with Soribada out of the market.”
Paid service
From December 2004 to June 2005, Soribada sold nearly 5 million songs through its servers. Searches returned both tracks for sale and free downloads, with the former appearing higher in the search results.
Service stopped: September 2005
Upon being sued again, this time by 30 record labels (led by YBM Seoul Records (now Kakao Entertainment) and JYP Entertainment) and some musicians, Soribada stopped its service in 2005. Yang Jung-hwan and his brother Il-hwan, the cre |
https://en.wikipedia.org/wiki/Barachois | A barachois is a term used in Atlantic Canada, Saint Pierre and Miquelon, Réunion and Mauritius to describe a coastal lagoon partially or totally separated from the ocean by a sand or shingle bar. Sometimes the bar is constructed of boulders, as is the case at Freshwater Bay near St. John’s, Newfoundland. Salt water may enter the barachois during high tide.
The bar often is formed as a result of sediment deposited in the delta region of a river or – as is the case in Miquelon – by a tombolo.
Name
The English term comes from the French language.
The term comes from a Basque word, barratxoa, meaning little bar. The popular derivation from the French barre à choir is without historical merit.
In Newfoundland English, the word has come to be pronounced as barshwa.
Examples
Dark Harbour, Grand Manan, New Brunswick (photo)
Barachois de Malbaie on the tip of the Gaspé Peninsula, fed by one of two Malbaie Rivers in Quebec and the Beattie, du Portage, and Murphy Rivers
Grand Barachois, Miquelon Island
Grand-Barachois, in Westmorland County, New Brunswick
Barachois Pond Provincial Park in western Newfoundland
Big Barasway and Little Barasway, communities on Newfoundland's Cape Shore
Prince Edward Island National Park has several examples
Percival Bay, off the Northumberland Strait, is also known as the Big Barachois
The coves in the lagoon of Diego Garcia in the Indian Ocean
Topsail Beach Provincial Park, Topsail
Former settlement of Freshwater, near St John's, Newfoundland.
Great Barachois, near Petit-de-Grat, Nova Scotia
References
Landforms
Bodies of water
Lagoons |
https://en.wikipedia.org/wiki/Phase-change%20material | A phase-change material (PCM) is a substance which releases/absorbs sufficient energy at phase transition to provide useful heat or cooling. Generally the transition will be from one of the first two fundamental states of matter (solid and liquid) to the other. The phase transition may also be between non-classical states of matter, such as the conformity of crystals, where the material goes from conforming to one crystalline structure to conforming to another, which may be a higher or lower energy state.
The energy released or absorbed by the phase transition from solid to liquid, or vice versa (the heat of fusion), is generally much higher than the sensible heat. Ice, for example, requires 333.55 J/g to melt, but the resulting water will then rise one degree further with the addition of just 4.18 J/g. Water/ice is therefore a very useful phase-change material, and has been used to store winter cold to cool buildings in summer since at least the time of the Achaemenid Empire.
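The scale of that difference is easy to verify with the figures just given:
```python
heat_of_fusion = 333.55   # J/g, melting ice at 0 degrees C (from the text)
c_water = 4.18            # J/(g*K), sensible heat of liquid water
print(heat_of_fusion / c_water)   # ~79.8: melting stores as much energy
                                  # as heating the melt water by ~80 K
```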
By melting and solidifying at the phase-change temperature (PCT), a PCM is capable of storing and releasing large amounts of energy compared to sensible heat storage. Heat is absorbed or released when the material changes from solid to liquid and vice versa or when the internal structure of the material changes; PCMs are accordingly referred to as latent heat storage (LHS) materials.
There are two principal classes of phase-change material: organic (carbon-containing) materials derived either from petroleum, from plants or from animals; and salt hydrates, which generally either use natural salts from the sea or from mineral deposits or are by-products of other processes. A third class is solid to solid phase change.
PCMs are used in many different commercial applications where energy storage and/or stable temperatures are required, including, among others, heating pads, cooling for telephone switching boxes, and clothing.
By far the biggest potential market is for building heating and cooling. In this ap |
https://en.wikipedia.org/wiki/Immunoprecipitation | Immunoprecipitation (IP) is the technique of precipitating a protein antigen out of solution using an antibody that specifically binds to that particular protein. This process can be used to isolate and concentrate a particular protein from a sample containing many thousands of different proteins. Immunoprecipitation requires that the antibody be coupled to a solid substrate at some point in the procedure.
Types
Individual protein immunoprecipitation (IP)
Involves using an antibody that is specific for a known protein to isolate that particular protein out of a solution containing many different proteins. These solutions will often be in the form of a crude lysate of a plant or animal tissue. Other sample types could be body fluids or other samples of biological origin.
Protein complex immunoprecipitation (Co-IP)
Immunoprecipitation of intact protein complexes (i.e. antigen along with any proteins or ligands that are bound to it) is known as co-immunoprecipitation (Co-IP). Co-IP works by selecting an antibody that targets a known protein that is believed to be a member of a larger complex of proteins. By targeting this known member with an antibody it may become possible to pull the entire protein complex out of solution and thereby identify unknown members of the complex.
This works when the proteins involved in the complex bind to each other tightly, making it possible to pull multiple members of the complex out of the solution by latching onto one member with an antibody. This concept of pulling protein complexes out of solution is sometimes referred to as a "pull-down". Co-IP is a powerful technique that is used regularly by molecular biologists to analyze protein–protein interactions.
A particular antibody often selects for a subpopulation of its target protein that has the epitope exposed, thus failing to identify any proteins in complexes that hide the epitope. This can be seen in that it is rarely possible to precipitate even half of a given pro |
https://en.wikipedia.org/wiki/Programming%20in%20the%20large%20and%20programming%20in%20the%20small | In software engineering, programming in the large and programming in the small refer to two different aspects of writing software, namely, designing a larger system as a composition of smaller parts, and creating those smaller parts by writing lines of code in a programming language, respectively.
The terms were coined by Frank DeRemer and Hans Kron in their 1975 paper "Programming-in-the-large versus programming-in-the-small", in which they argue that the two are essentially different activities, and that typical programming languages, and the practice of structured programming, provide good support for the latter, but not for the former.
This may be compared to the later Ousterhout's dichotomy, which distinguishes between system programming languages (for components) and scripting languages (for glue code, connecting components).
Description
Fred Brooks identifies that the way an individual program is created is different from how a programming systems product is created. The former likely does one relatively simple task well. It is probably coded by a single engineer, is complete in itself, and is ready to run on the system on which it was developed. The programming activity was probably fairly short-lived as simple tasks are quick and easy to complete. This is the endeavor that DeRemer and Kron describe as programming in the small.
Compare with the activities associated with a programming systems project, again as identified by Brooks. Such a project is typified by medium-sized or large industrial teams working on the project for many months to several years. The project is likely to be split up into several or hundreds of separate modules which individually are of a similar complexity to the individual programs described above. However, each module will define an interface to its surrounding modules.
Brooks describes how programming systems projects are typically run as formal projects that follow industry best practices and will comprise testing, do |
https://en.wikipedia.org/wiki/Information%20model | An information model in software engineering is a representation of concepts and the relationships, constraints, rules, and operations to specify data semantics for a chosen domain of discourse. Typically it specifies relations between kinds of things, but may also include relations with individual things. It can provide sharable, stable, and organized structure of information requirements or knowledge for the domain context.
Overview
The term information model in general is used for models of individual things, such as facilities, buildings, process plants, etc. In those cases, the concept is specialised to facility information model, building information model, plant information model, etc. Such an information model is an integration of a model of the facility with the data and documents about the facility.
Within the field of software engineering and data modeling, an information model is usually an abstract, formal representation of entity types that may include their properties, relationships and the operations that can be performed on them. The entity types in the model may be kinds of real-world objects, such as devices in a network, or occurrences, or they may themselves be abstract, such as for the entities used in a billing system. Typically, they are used to model a constrained domain that can be described by a closed set of entity types, properties, relationships and operations.
An information model provides formalism to the description of a problem domain without constraining how that description is mapped to an actual implementation in software. There may be many mappings of the information model. Such mappings are called data models, irrespective of whether they are object models (e.g. using UML), entity relationship models or XML schemas.
Information modeling languages
In 1976, an entity-relationship (ER) graphic notation was introduced by Peter Chen. He stressed that it was a "semantic" modelling technique and independent of any database mode |
https://en.wikipedia.org/wiki/List%20of%20Unix%20daemons | This is a list of Unix daemons that are found on various Unix-like operating systems. Unix daemons typically have a name ending with a d.
See also
List of Unix commands
References
Unix
Unix daemons |
https://en.wikipedia.org/wiki/128-bit%20computing | General home computing and gaming utility emerged at 8-bit (but not at 1-bit or 4-bit) word sizes, as 2^8 = 256 words become possible. Thus, early 8-bit CPUs (the Zilog Z80, 6502 and Intel 8088, introduced 1976–1981 in machines by Commodore, Tandy Corporation, Apple and IBM) inaugurated the era of personal computing. Many 16-bit CPUs already existed in the mid-1970s. Over the next 30 years, the shift to 16-bit, 32-bit and 64-bit computing allowed, respectively, 2^16 = 65,536, 2^32 = 4,294,967,296 and 2^64 = 18,446,744,073,709,551,616 unique words, each step offering a meaningful advantage until 64 bits was reached. Further advantages evaporate from 64-bit to 128-bit computing, as the number of possible values in a register increases from roughly 18 quintillion (about 1.8 × 10^19) to 340 undecillion (about 3.4 × 10^38), and so many unique values are never utilized. Thus, with a register that can store 2^128 values, no advantages over 64-bit computing accrue to either home computing or gaming. CPUs with a larger word size also require more circuitry, are physically larger, require more power and generate more heat. Thus, there are currently no mainstream general-purpose processors built to operate on 128-bit integers or addresses, although a number of processors do have specialized ways to operate on 128-bit chunks of data, and uses are given below.
Representation
A processor with 128-bit byte addressing could directly address up to 2^128 (over 3.40 × 10^38) bytes, which would greatly exceed the total data captured, created, or replicated on Earth as of 2018, which has been estimated to be around 33 zettabytes (over 2^74 bytes).
A 128-bit register can store 2^128 (over 3.40 × 10^38) different values. The range of integer values that can be stored in 128 bits depends on the integer representation used. With the two most common representations, the range is 0 through 340,282,366,920,938,463,463,374,607,431,768,211,455 (2^128 − 1) for representation as an (unsigned) binary number, and −170,141,183,460,469,231,731,6 |
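Python's arbitrary-precision integers reproduce these ranges exactly:
```python
print(2 ** 64)        # 18446744073709551616 unique 64-bit words
print(2 ** 128 - 1)   # 340282366920938463463374607431768211455, max unsigned
print(-(2 ** 127), 2 ** 127 - 1)   # two's-complement signed 128-bit range
```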
https://en.wikipedia.org/wiki/PL-4 | PL-4 or POS-PHY Level 4 was the name of the interface that the interface SPI-4.2 is based on. It was proposed by PMC-Sierra to the Optical Internetworking Forum. The name means Packet Over SONET Physical layer level 4. PL-4 was developed by PMC-Sierra in conjunction with the Saturn Development Group.
Context
There are two broad categories of chip-to-chip interfaces. The first, exemplified by PCI-Express and HyperTransport, supports reads and writes of memory addresses. The second broad category carries user packets over one or more channels and is exemplified by the IEEE 802.3 family of Media Independent Interfaces and the Optical Internetworking Forum family of System Packet Interfaces. Of these two, the System Packet Interface family is optimized to carry user packets from many channels. It is the most important packet-oriented, chip-to-chip interface family used between devices in Packet over SONET and Optical Transport Network systems, the principal protocols used to carry the internet between cities.
Applications
PL-4 was designed to be used in systems that support OC-192 SONET interfaces and is sometimes used in 10 Gigabit Ethernet based systems. A typical application of PL-4 (SPI-4.2) is to connect a framer device to a network processor. It has been widely adopted by the high speed networking marketplace.
Technical details
The interface consists of (per direction):
sixteen LVDS pairs for the data path
one LVDS pair for control
one LVDS pair for clock at half of the data rate
two FIFO status lines running at 1/8 of the data rate
one status clock
The clocking is source-synchronous and operates around 700 MHz. Implementations of SPI-4.2 (PL-4) have been produced which allow somewhat higher clock rates. This is important when overhead bytes are added to incoming packets.
Trivia
The name is an acronym of an acronym of an acronym as the P in PL stands for POS-PHY and the S in POS-PHY stands for SONET (Synch |
https://en.wikipedia.org/wiki/Abrizio | Abrizio was a fabless semiconductor company which made switching fabric chip sets (integrated circuits for computer network switches). Their chip set, the TT1, was used by several large system development companies as the core switch fabric in their high value communication systems.
Founding
Abrizio was founded in 1997, by Professor Nick McKeown as a spinout of the Tiny-tera project at Stanford University. It received US$6M of funding from Benchmark Capital and Sequoia Capital.
Product and technology
The product name TT1 referred to "Tiny Tera", meaning a small, highly integrated semiconductor implementation of a terabit-per-second-capacity switching fabric. The Stanford program had demonstrated a scalable packet switch with terabit-per-second performance in CMOS. Abrizio was the first to introduce a more optimized input-buffered, output-queued switch fabric, which addressed the memory-efficiency issue of similar technologies. Its technology made better use of memory, making the TT1 a less expensive product. Abrizio's key technology was a sophisticated implementation of a wavefront arbiter, which allowed the switch to make complex arbitration decisions very quickly.
Senior leadership
In 1998, Anders Swahn, who had been executive vice president of sales and marketing at Allied-Telesyn Inc., joined Abrizio as chief executive. The CTO was McKeown, who was taking a leave from his professorship at Stanford, and Zubair Hussein was the V.P. of engineering. Abrizio's corporate colors were purple and yellow.
Acquisition
Abrizio was acquired on August 24, 1999, by PMC-Sierra for 4,352,000 shares of PMC-Sierra stock, worth at that time $400M. After the acquisition, the former Abrizio development team completed the TTx switch chip set. In the wake of the bursting of the telecom bubble, PMC-Sierra laid off most of the former Abrizio team in 2001.
References
External links
Sequoia Capital
Companies established in 1997
Defunct networking comp |
https://en.wikipedia.org/wiki/Microwave%20power%20meter | A microwave power meter is an instrument which measures the electrical power at microwave frequencies typically in the range 100 MHz to 40 GHz.
Usually a microwave power meter will consist of a measuring head which contains the actual power sensing element, connected via a cable to the meter proper, which displays the power reading. The head may be referred to as a power sensor or mount. Different power sensors can be used for different frequencies or power levels. Historically the means of operation in most power sensor and meter combinations was that the sensor would convert the microwave power into an analogue voltage which would be read by the meter and converted into a power reading. Several modern power sensor heads contain electronics to create a digital output and can be plugged via USB into a PC which acts as the power meter.
Microwave power meters have a wide bandwidth—they are not frequency-selective. To measure the power of a specific frequency component in the presence of other signals at different frequencies a spectrum analyzer or measuring receiver is needed.
Sensor technologies
There are a variety of different technologies which have been used as the power sensing element. Each has advantages and disadvantages.
Thermal
Thermal sensors can generally be divided into two main categories: thermocouple power sensors and thermistor-based power sensors. Thermal sensors depend on the process of absorbing the RF and microwave signal energy and sensing the resulting heat rise. Therefore, they respond to the true average power of the signal, whether it is pulsed, CW, AM/FM or any complex modulation (Agilent 2008).
Thermocouple power sensors make up the majority of the thermal power sensors sold at present. They are generally reasonably linear and have a reasonably fast response time and dynamic range. The microwave power is absorbed in a load whose temperature rise is measured by the thermocouple. Thermocouple sensors often require a reference DC or microwave |
https://en.wikipedia.org/wiki/Wake%20Shield%20Facility | Wake Shield Facility (WSF) was a NASA experimental science platform that was placed in low Earth orbit by the Space Shuttle. It was a free-flying stainless steel disk.
The WSF was deployed using the Space Shuttle's Canadarm. The WSF then used nitrogen gas thrusters to position itself behind the Space Shuttle, which was at an orbital altitude within the thermosphere, where the atmosphere is exceedingly tenuous. The WSF's orbital speed was at least three to four times faster than the speed of the thermospheric gas molecules in the area, which resulted in a cone behind the WSF that was entirely free of gas molecules. The WSF thus created an ultrahigh vacuum in its wake. The resulting vacuum was used to study epitaxial film growth. The WSF operated at a distance from the Space Shuttle to avoid contamination from the Shuttle's rocket thrusters and water dumped overboard from the Shuttle's Waste Collection System (space toilet). After two days, the Space Shuttle would rendezvous with the WSF and again use its robotic arm to collect the WSF and store it in the Shuttle's payload bay for return to Earth.
The WSF was flown into space three times, aboard Shuttle flights STS-60 (WSF-1), STS-69 (WSF-2) and STS-80 (WSF-3). During STS-60, some hardware issues were experienced, and, as a result, the WSF-1 was only deployed at the end of the Shuttle's Canadarm. During the later missions, the WSF was deployed as a free-flying platform in the wake of the Shuttle.
These flights proved the vacuum wake concept and realized the space epitaxy concept by growing the first-ever crystalline semiconductor thin films in the vacuum of space. These included gallium arsenide (GaAs) and aluminum gallium arsenide (AlGaAs) depositions. These experiments have been used to develop better photocells and thin films. Among the potential resulting applications are artificial retinas made from tiny ceramic detectors.
Pre-flight calculations suggested that the pressure on the w |
https://en.wikipedia.org/wiki/Paul%20Vojta | Paul Alan Vojta (born September 30, 1957) is an American mathematician, known for his work in number theory on Diophantine geometry and Diophantine approximation.
Contributions
In formulating Vojta's conjecture, he pointed out the possible existence of parallels between the Nevanlinna theory of complex analysis, and diophantine analysis in the circle of ideas around the Mordell conjecture and abc conjecture. This suggested the importance of the integer solutions (affine space) aspect of diophantine equations.
Vojta wrote the .dvi-previewer xdvi.
Education and career
He was an undergraduate student at the University of Minnesota, where he became a Putnam Fellow in 1977, and a doctoral student at Harvard University (1983). He currently is a professor in the Department of Mathematics at the University of California, Berkeley.
Awards and honors
In 2012 he became a fellow of the American Mathematical Society.
Selected publications
Diophantine Approximations and Value Distribution Theory, Lecture Notes in Mathematics 1239, Springer Verlag, 1987,
References
External links
Vojta's home page
1957 births
Living people
Arithmetic geometers
Putnam Fellows
Institute for Advanced Study visiting scholars
University of Minnesota alumni
Harvard University alumni
University of California, Berkeley faculty
20th-century American mathematicians
Fellows of the American Mathematical Society
International Mathematical Olympiad participants
21st-century American mathematicians |
https://en.wikipedia.org/wiki/Coagulative%20necrosis | Coagulative necrosis is a type of accidental cell death typically caused by ischemia or infarction. In coagulative necrosis, the architectures of dead tissue are preserved for at least a couple of days. It is believed that the injury denatures structural proteins as well as lysosomal enzymes, thus blocking the proteolysis of the damaged cells. The lack of lysosomal enzymes allows it to maintain a "coagulated" morphology for some time. Like most types of necrosis, if enough viable cells are present around the affected area, regeneration will usually occur. Coagulative necrosis occurs in most bodily organs, excluding the brain. Different diseases are associated with coagulative necrosis, including acute tubular necrosis and acute myocardial infarction.
Coagulative necrosis can also be induced by high local temperature; it is a desired effect of treatments such as high intensity focused ultrasound applied to cancerous cells.
Causes
Coagulative necrosis is most commonly caused by conditions that do not involve severe trauma, toxins or an acute or chronic immune response. The lack of oxygen (hypoxia) causes cell death in a localized area which is perfused by blood vessels failing to deliver primarily oxygen, but also other important nutrients. It is important to note that while ischemia in most tissues of the body will cause coagulative necrosis, in the central nervous system ischemia causes liquefactive necrosis, as there is very little structural framework in neural tissue.
Pathology
Macroscopic
The macroscopic appearance of an area of coagulative necrosis is a pale segment of tissue, contrasting against surrounding well-vascularized tissue, that is dry on the cut surface. The tissue may later turn red due to the inflammatory response. The surrounding surviving cells can aid in regeneration of the affected tissue unless they are stable or permanent.
Microscopic
Microscopically, coagulative necrosis causes cells to appear to have the same outline, but no nuclei. The nucleu |
https://en.wikipedia.org/wiki/Liquefactive%20necrosis | Liquefactive necrosis (or colliquative necrosis) is a type of necrosis which results in a transformation of the tissue into a liquid viscous mass. Often it is associated with focal bacterial or fungal infections, and can also manifest as one of the symptoms of an internal chemical burn. In liquefactive necrosis, the affected cell is completely digested by hydrolytic enzymes, resulting in a soft, circumscribed lesion consisting of pus and the fluid remains of necrotic tissue. Dead leukocytes will remain as a creamy yellow pus. After the removal of cell debris by white blood cells, a fluid filled space is left. It is generally associated with abscess formation and is commonly found in the central nervous system.
In the brain
Due to excitotoxicity, hypoxic death of cells within the central nervous system can result in liquefactive necrosis. This is a process in which lysosomes turn tissues into pus as a result of lysosomal release of digestive enzymes. Loss of tissue architecture means that the tissue can be liquefied. This process is not associated with bacterial action or infection. Ultimately, in a living patient most necrotic cells and their contents disappear.
The affected area is soft with liquefied centre containing necrotic debris. Later, a cyst wall is formed.
Microscopically, the cystic space contains necrotic cell debris and macrophages filled with phagocytosed material. The cyst wall is formed by proliferating capillaries, inflammatory cells, and gliosis (proliferating glial cells) in the case of brain and proliferating fibroblasts in the case of abscess cavities.
Brain cells have a large amount of digestive enzymes (hydrolases). These enzymes cause the neural tissue to become soft and liquefy.
In the lung
Liquefactive necrosis can also occur in the lung, especially in the context of lung abscesses.
Infection
Liquefactive necrosis can also take place due to certain infections. Neutrophils, fighting off a bacteria, will release hydrolytic enzymes whi |
https://en.wikipedia.org/wiki/Microsoft%20NetMeeting | Microsoft NetMeeting is a discontinued VoIP and multi-point videoconferencing program offered by Microsoft. NetMeeting allows multiple clients to host and join a call that includes video and audio, text chat, application and desktop sharing, and file sharing. It was originally bundled with Internet Explorer 3 and then with Windows versions from Windows 95 to Windows Server 2003.
History
NetMeeting was released on May 29, 1996, with Internet Explorer 3 and later Internet Explorer 4. It incorporated technology acquired by Microsoft from UK software developer Data Connection Ltd and DataBeam Corporation (subsequently acquired by Lotus Software).
Before video service became common on free IM clients, such as Yahoo! Messenger and MSN Messenger, NetMeeting was a popular way to perform video conferences and chat over the Internet (with the help of public ILS servers, or "direct-dialing" to an IP address). The defunct TechTV channel even used NetMeeting as a means of getting viewers onto their call-in shows via webcam, although viewers had to call on their telephones, because broadband Internet connections were still rare.
Protocol architecture
NetMeeting is an implementation of the ITU T.120 and H.323 protocol stacks for videoconferencing, with Microsoft extensions. A call is set up, undertaken and torn down between NetMeeting clients using the H.225 protocol. Media streams are negotiated using H.245; audio is encoded using the G.711 and G.723.1 codecs at 5.3 to 64 kbit/s, while video is encoded using the H.263 and H.261 codecs. Application sharing is performed using the "Share 2.0" protocol, based on a pre-release version of T.128, with the protocol also being used to transport chat messages; whiteboard sharing uses ITU T.126, while file sharing is performed using FTP over T.127. Due to its use of standardised protocols, NetMeeting can interoperate with other H.323-implementing software, such as Ekiga.
Discontinuation
In Windows XP, the Start menu shortcut to NetMeeting was removed |
https://en.wikipedia.org/wiki/Fragile%20binary%20interface%20problem | The fragile binary interface problem or FBI is a shortcoming of certain object-oriented programming language compilers, in which internal changes to an underlying class library can cause descendant libraries or programs to cease working. It is an example of software brittleness.
This problem is more often called the fragile base class problem or FBC; however, that term has a wider sense.
Cause
The problem occurs due to a "shortcut" used with compilers for many common object-oriented (OO) languages, a design feature that was kept when OO languages were evolving from earlier non-OO structured programming languages such as C and Pascal.
In these languages there were no objects in the modern sense, but there was a similar construct known as a record (or "struct" in C) that held a variety of related information in one piece of memory. The parts within a particular record were accessed by keeping track of the starting location of the record and knowing the offset from that starting point to the part in question. For instance, a "person" record might have a first name, last name, and middle initial; to access the initial, the programmer writes thisPerson.middleInitial, which the compiler turns into something like a = location(thisPerson) + offset(middleInitial). Modern CPUs typically include instructions for this common sort of access.
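A minimal C sketch of this base-plus-offset access (the record layout and names are invented for illustration; offsetof from <stddef.h> exposes the compiler's field offsets):

```c
#include <stdio.h>
#include <stddef.h>

/* A "person" record as described above; the compiler lays the fields
 * out at fixed offsets from the start of the struct. */
struct person {
    char first[16];
    char last[16];
    char middle_initial;
};

int main(void) {
    struct person p = { "Ada", "Lovelace", 'B' };

    /* thisPerson.middleInitial compiles down to base-plus-offset access: */
    char *base = (char *)&p;
    char via_offset = *(base + offsetof(struct person, middle_initial));

    printf("direct: %c, via offset %zu: %c\n",
           p.middle_initial, offsetof(struct person, middle_initial),
           via_offset);
    return 0;
}
```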
When object-oriented language compilers were first being developed, much of the existing compiler technology was used, and objects were built on top of the record concept. In these languages the objects were referred to by their starting point, and their public data, known as "fields", were accessed through the known offset. In effect the only change was to add another field to the record, which is set to point to an immutable virtual method table for each class, such that the record describes both its data and methods (functions). When compiled, the offsets are used to access both the data and the code (via the virtual method table).
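The record-with-vtable layout can likewise be sketched in C with a struct of function pointers (a minimal illustration under invented names; real compilers add further machinery such as RTTI and multiple-inheritance adjustments):

```c
#include <stdio.h>

/* Per-class table of function pointers: the virtual method table. */
struct animal_vtable {
    void (*speak)(void *self);
};

/* The object: one hidden field pointing to its class's vtable,
 * followed by data fields at known offsets. */
struct animal {
    const struct animal_vtable *vtable;
    const char *name;
};

static void dog_speak(void *self) {
    printf("%s says woof\n", ((struct animal *)self)->name);
}

static const struct animal_vtable dog_vtable = { dog_speak };

int main(void) {
    struct animal rex = { &dog_vtable, "Rex" };
    /* A virtual call: fetch the table via the hidden field,
     * then call through the known slot offset. */
    rex.vtable->speak(&rex);
    return 0;
}
```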
Symp |
https://en.wikipedia.org/wiki/FUNET | FUNET is the Finnish University and Research Network, a backbone network providing Internet connections for Finnish universities and polytechnics as well as other research facilities. It is governed by the state-owned CSC – IT Center for Science Ltd. The FUNET project started in December 1983 and soon gained international connectivity via EARN with DECnet as the dominant protocol. FUNET was connected to the greater Internet through NORDUnet in 1988. The FUNET FTP service went online in 1990, hosting the first versions of Linux in 1991.
The main backbone connections have gradually been upgraded to optical fiber since 2008. The first 100 Gbit/s connections were put into production in 2015. FUNET is connected to other research networks through NORDUnet, and to other Finnish ISPs via three FICIX points.
See also
NORDUnet
GEANT
References
External links
Funet Network Services provided by CSC - IT Center for Science
Funet FTP archive
CSC — IT Center for Science Ltd.
Communications in Finland
Education in Finland
Internet in Finland
Internet mirror services
National research and education networks |
https://en.wikipedia.org/wiki/Distributed%20version%20control | In software development, distributed version control (also known as distributed revision control) is a form of version control in which the complete codebase, including its full history, is mirrored on every developer's computer. Compared to centralized version control, this enables automatic management of branching and merging, speeds up most operations (except pushing and pulling), improves the ability to work offline, and does not rely on a single location for backups. Git, the world's most popular version control system, is a distributed version control system.
In 2010, software development author Joel Spolsky described distributed version control systems as "possibly the biggest advance in software development technology in the [past] ten years".
Distributed vs. centralized
Distributed version control systems (DVCS) use a peer-to-peer approach to version control, as opposed to the client–server approach of centralized systems. Distributed revision control synchronizes repositories by transferring patches from peer to peer. There is no single central version of the codebase; instead, each user has a working copy and the full change history.
Advantages of DVCS (compared with centralized systems) include:
Allows users to work productively when not connected to a network.
Common operations (such as commits, viewing history, and reverting changes) are faster for DVCS, because there is no need to communicate with a central server. With DVCS, communication is necessary only when sharing changes among other peers.
Allows private work, so users can use their changes even for early drafts they do not want to publish.
Working copies effectively function as remote backups, which avoids relying on one physical machine as a single point of failure.
Allows various development models to be used, such as using development branches or a Commander/Lieutenant model.
Permits centralized control of the "release version" of the project.
On FOSS software projects it is much easi |
https://en.wikipedia.org/wiki/Inequity%20aversion | Inequity aversion (IA) is the preference for fairness and resistance to incidental inequalities. The social sciences that study inequity aversion include sociology, economics, psychology, anthropology, and ethology. Research on inequity aversion aims to explain behaviors that are driven not purely by self-interest but also by fairness considerations.
In some literature, the term inequality aversion is used in place of inequity aversion. Discourses in the social studies argue that "inequality" pertains to the gap in the distribution of resources, while "inequity" pertains to fundamental and institutional unfairness. Therefore, the choice between inequity and inequality aversion may depend on the specific context.
Human studies
Inequity aversion research on humans mostly occurs in the discipline of economics though it is also studied in sociology.
Research on inequity aversion began in 1978, when studies suggested that humans are sensitive both to inequities in their favor and to those against them, and that some people attempt overcompensation when they feel "guilty" or unhappy to have received an undeserved reward.
A more recent definition of inequity aversion (resistance to inequitable outcomes) was developed in 1999 by Fehr and Schmidt. They postulated that people make decisions so as to minimize inequity in outcomes. Specifically, consider a setting with individuals {1, 2, ..., n} who receive pecuniary outcomes $x_i$. Then the utility to person i would be given by
$U_i(x) = x_i - \frac{\alpha_i}{n-1}\sum_{j \neq i}\max(x_j - x_i,\,0) - \frac{\beta_i}{n-1}\sum_{j \neq i}\max(x_i - x_j,\,0),$
where α parametrizes the distaste of person i for disadvantageous inequality in the first nonstandard term, and β parametrizes the distaste of person i for advantageous inequality in the final term. The results suggested that a small fraction of selfish players may lead a fair-minded majority to act selfishly in some scenarios, while a minority of fair-minded players may likewise induce selfish players to cooperate in games with punishment. In addition, the inequity aversion |
https://en.wikipedia.org/wiki/Social%20construction%20of%20technology | Social construction of technology (SCOT) is a theory within the field of science and technology studies. Advocates of SCOT—that is, social constructivists—argue that technology does not determine human action, but that rather, human action shapes technology. They also argue that the ways a technology is used cannot be understood without understanding how that technology is embedded in its social context. SCOT is a response to technological determinism and is sometimes known as technological constructivism.
SCOT draws on work done in the constructivist school of the sociology of scientific knowledge, and its subtopics include actor-network theory (a branch of the sociology of science and technology) and historical analysis of sociotechnical systems, such as the work of historian Thomas P. Hughes. Its empirical methods are an adaptation of the Empirical Programme of Relativism (EPOR), which outlines a method of analysis to demonstrate the ways in which scientific findings are socially constructed (see strong program). Leading adherents of SCOT include Wiebe Bijker and Trevor Pinch.
SCOT holds that those who seek to understand the reasons for acceptance or rejection of a technology should look to the social world. It is not enough, according to SCOT, to explain a technology's success by saying that it is "the best"—researchers must look at how the criteria of being "the best" are defined and what groups and stakeholders participate in defining them. In particular, they must ask who defines the technical criteria by which success is measured, why those criteria are defined this way, and who is included or excluded. Pinch and Bijker argue that technological determinism is a myth that results when one looks backwards and believes that the path taken to the present was the only possible path.
SCOT is not only a theory, but also a methodology: it formalizes the steps and principles to follow when one wants to analyze the causes of technological failures or successes.
Legac |
https://en.wikipedia.org/wiki/Sierpi%C5%84ski%27s%20constant | Sierpiński's constant is a mathematical constant usually denoted as K. One way of defining it is as the following limit:
$K = \lim_{n \to \infty} \left( \sum_{k=1}^{n} \frac{r_2(k)}{k} - \pi \ln n \right),$
where $r_2(k)$ is the number of representations of $k$ as a sum of the form $a^2 + b^2$ for integers $a$ and $b$.
It can be given in closed form as
$K = \pi \ln\!\left(\frac{e^{2\gamma}}{2G^2}\right) \approx 2.5849817596,$
where $G$ is Gauss's constant and $\gamma$ is the Euler–Mascheroni constant.
Another way to define/understand Sierpiński's constant is the following. Let $r(n)$ denote the number of representations of $n$ as a sum of two squares; then the summatory function of $r(n)/n$ has the asymptotic expansion
$\sum_{n \le x} \frac{r(n)}{n} = K + \pi \ln x + O(x^{-1/2}),$
where $K$ is the Sierpiński constant. (A plot accompanying the original shows $\sum_{n \le x} r(n)/n - \pi \ln x$ converging to $K$, with the value of $K$ indicated as a solid horizontal line.)
See also
Wacław Sierpiński
External links
http://www.plouffe.fr/simon/constants/sierpinski.txt - Sierpiński's constant up to the 2000th decimal digit.
https://archive.lib.msu.edu/crcmath/math/math/s/s276.htm
Mathematical constants
References |
https://en.wikipedia.org/wiki/Programmable%20interrupt%20controller | In computing, a programmable interrupt controller (PIC) is an integrated circuit that helps a microprocessor (or CPU) handle interrupt requests (IRQ) coming from multiple different sources (like external I/O devices) which may occur simultaneously. It helps prioritize IRQs so that the CPU switches execution to the most appropriate interrupt handler (ISR) after the PIC assesses the IRQs' relative priorities. Common modes of interrupt priority include hard priorities, rotating priorities, and cascading priorities. PICs often allow mapping of inputs to outputs in a configurable way. On the PC architecture, PICs are typically embedded into a southbridge chip whose internal architecture is defined by the chipset vendor's standards.
Common features
PICs typically have a common set of registers: the interrupt request register (IRR), the in-service register (ISR), and the interrupt mask register (IMR). The IRR specifies which interrupts are pending acknowledgement, and is typically a symbolic register which cannot be directly accessed. The ISR register specifies which interrupts have been acknowledged, but are still waiting for an end of interrupt (EOI). The IMR specifies which interrupts are to be ignored and not acknowledged. A simple register schema such as this allows up to two distinct interrupt requests to be outstanding at one time: one waiting for acknowledgement, and one waiting for EOI.
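An illustrative model in C of this register schema and the acknowledge/EOI cycle (a sketch of the general idea, not of any particular chip; one bit per IRQ line, with the lowest-numbered line treated as highest priority):

```c
#include <stdio.h>
#include <stdint.h>

typedef struct {
    uint8_t irr; /* pending, not yet acknowledged */
    uint8_t isr; /* acknowledged, awaiting EOI    */
    uint8_t imr; /* masked lines are ignored      */
} pic_t;

static void pic_raise(pic_t *p, int line) { p->irr |= (uint8_t)(1u << line); }

/* Acknowledge the highest-priority unmasked pending line;
 * returns the line number, or -1 if nothing is ready. */
static int pic_ack(pic_t *p) {
    uint8_t ready = p->irr & (uint8_t)~p->imr;
    for (int line = 0; line < 8; line++) {
        if (ready & (1u << line)) {
            p->irr &= (uint8_t)~(1u << line); /* no longer pending */
            p->isr |= (uint8_t)(1u << line);  /* now in service    */
            return line;
        }
    }
    return -1;
}

static void pic_eoi(pic_t *p, int line) { p->isr &= (uint8_t)~(1u << line); }

int main(void) {
    pic_t pic = { 0, 0, 0x02 };  /* mask IRQ line 1 */
    pic_raise(&pic, 1);          /* masked: stays pending, never served */
    pic_raise(&pic, 3);
    int line = pic_ack(&pic);    /* -> 3 */
    printf("servicing IRQ %d\n", line);
    pic_eoi(&pic, line);         /* interrupt handler finished */
    return 0;
}
```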
There are a number of common priority schemas in PICs including hard priorities, specific priorities, and rotating priorities.
Interrupts may be either edge triggered or level triggered.
There are a number of common ways of acknowledging an interrupt has completed when an EOI is issued. These include specifying which interrupt completed, using an implied interrupt which has completed (usually the highest priority pending in the ISR), and treating interrupt acknowledgement as the EOI.
Well-known types
One of the best known PICs, the 8259A, was included in the x86 PC. In modern times, this is n |
https://en.wikipedia.org/wiki/Software%20brittleness | In computer programming and software engineering, software brittleness is the increased difficulty in fixing older software that may appear reliable, but actually fails badly when presented with unusual data or altered in a seemingly minor way. The phrase is derived from analogies to brittleness in metalworking.
Causes
When software is new, it is very malleable; it can be formed to be whatever is wanted by the implementers. But as the software in a given project grows larger and larger, and develops a larger base of users with long experience with the software, it becomes less and less malleable. Like a metal that has been work-hardened, the software becomes a legacy system, brittle and unable to be easily maintained without fracturing the entire system.
Brittleness in software can be caused by algorithms that do not work well for the full range of input data. A good example is an algorithm that allows a divide by zero to occur, or a curve-fitting equation that is used to extrapolate beyond the data that it was fitted to. Another cause of brittleness is the use of data structures that restrict values. This was commonly seen in the late 1990s as people realized that their software only had room for a two-digit year entry; this led to the sudden updating of tremendous quantities of brittle software before the year 2000. Another, more commonly encountered form of brittleness is in graphical user interfaces that make invalid assumptions. For example, a user may be running on a low-resolution display, and the software will open a window too large to fit the display. Another common problem arises when a user chooses a color scheme other than the default, causing text to be rendered in the same color as the background, or a font other than the default, which does not fit in the allotted space and cuts off instructions and labels.
Very often, an old code base is simply abandoned and a brand-new system (which is intended to be free of many of the burdens of the l |
https://en.wikipedia.org/wiki/Kilocalorie%20per%20mole | The kilocalorie per mole is a unit to measure an amount of energy per number of molecules, atoms, or other similar particles. It is defined as one kilocalorie of energy (1000 thermochemical gram calories) per one mole of substance. The unit symbol is written kcal/mol or kcal⋅mol−1. As typically measured, one kcal/mol represents a temperature increase of one degree Celsius in one liter of water (with a mass of 1 kg) resulting from the reaction of one mole of reagents.
In SI units, one kilocalorie per mole is equal to 4.184 kilojoules per mole (kJ/mol), which comes to approximately 6.9 × 10⁻²¹ joules per molecule, or about 0.043 eV per molecule. At room temperature (25 °C, 77 °F, or 298.15 K) it is approximately equal to 1.688 units in the kT term of Boltzmann's equation.
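A quick check of these conversions (a sketch using the thermochemical calorie, 1 cal = 4.184 J, the Avogadro constant $N_A \approx 6.022 \times 10^{23}\ \mathrm{mol^{-1}}$, and the Boltzmann constant $k \approx 1.381 \times 10^{-23}\ \mathrm{J/K}$):

$$\frac{4184\ \mathrm{J/mol}}{6.022 \times 10^{23}\ \mathrm{mol^{-1}}} \approx 6.95 \times 10^{-21}\ \mathrm{J},
\qquad
\frac{6.95 \times 10^{-21}\ \mathrm{J}}{1.602 \times 10^{-19}\ \mathrm{J/eV}} \approx 0.0434\ \mathrm{eV},$$

$$\frac{6.95 \times 10^{-21}\ \mathrm{J}}{(1.381 \times 10^{-23}\ \mathrm{J/K})(298.15\ \mathrm{K})} \approx \frac{6.95 \times 10^{-21}}{4.12 \times 10^{-21}} \approx 1.69,$$

in agreement with the figures quoted above.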
Even though it is not an SI unit, the kilocalorie per mole is still widely used in chemistry and biology for thermodynamical quantities such as thermodynamic free energy, heat of vaporization, heat of fusion and ionization energy. This is due to a variety of factors, including the ease with which it can be calculated based on the units of measure typically employed in quantifying a chemical reaction, especially in aqueous solution. In addition, for many important biological processes, thermodynamic changes are on a convenient order of magnitude when expressed in kcal/mol. For example, for the reaction of glucose with ATP to form glucose-6-phosphate and ADP, the free energy of reaction is −4.0 kcal/mol using the pH = 7 standard state.
References
Energy (physics)
Thermodynamics
Heat transfer
Units of chemical measurement |
https://en.wikipedia.org/wiki/Just%20in%20sequence | Just in sequence (JIS) is an inventory strategy that matches just in time (JIT) and complete fit in sequence with the variation of assembly line production. Components and parts arrive at the production line right on time, as scheduled, before they are assembled. Feedback from the manufacturing line is used to coordinate transport to and from the process area. When implemented successfully, JIS improves a company's return on assets (ROA) without loss in flexibility, quality or overall efficiency. JIS is mainly implemented in car manufacturing.
JIS is sometimes called In-Line Vehicle Sequencing (ILVS).
Just in sequence is just in time
Just in sequence (JIS) is just one specialised strategy to achieve just in time (JIT). The process concept of JIT sees buffers at the production line as waste in the form of bound capital. The aim is to eliminate buffers as much as possible, at the expense of stability when disturbances arise. Just in sequence is one of the most extreme applications of the concept, in which components arrive just in time and already sequenced for consumption.
Sequencing allows companies to eliminate supply buffers because the quantity of parts that must be held in component buffers is reduced to a minimum. Without sequencing according to the scheduled variety of production, all required component variants must be stocked in buffers. For flexible production lines, such as a modern automotive assembly line, this variety makes it possible to produce directly to customer orders. As soon as the next order arrives at the work center, the scheduler distributes the supply orders in line with the production sequence of the final production line.
Displacement of buffers upwards to suppliers
However, with JIS the buffer quantities are displaced upward in the material flow, to the component suppliers. It is a misinterpretation of JIS to assume that all buffers will be eliminated; rather, the cost of buffer inventory is reallocated to the producers of the supplies. Sequencing eliminates buffers in the final |
https://en.wikipedia.org/wiki/Calm%20technology | Calm technology or calm design is a type of information technology where the interaction between the technology and its user is designed to occur in the user's periphery rather than constantly at the center of attention. Information from the technology smoothly shifts to the user's attention when needed but otherwise stays calmly in the user's periphery. Mark Weiser and John Seely Brown describe calm technology as "that which informs but doesn't demand our focus or attention."
The use of calm technology is paired with ubiquitous computing as a way to minimize the perceptible invasiveness of computers in everyday life.
Principles
For a technology to be considered calm technology, there are three core principles it should adhere to:
The user's attention to the technology must reside mainly in the periphery. This means that either the technology can easily shift between the center of attention and the periphery or that much of the information conveyed by the technology is present in the periphery rather than the center.
The technology increases a user's use of his or her periphery. This creates a pleasant user experience by not overburdening the user with information.
The technology relays a sense of familiarity to the user and allows awareness of the user's surroundings in the past, present, and future.
History
The phrase "calm technology" was first published in the article "Designing Calm Technology", written by Mark Weiser and John Seely Brown in 1995. The concept had developed amongst researchers at the Xerox Palo Alto Research Center in addition to the concept of ubiquitous computing.
Weiser introduced the concept of calm technology by using the example of LiveWire, or "Dangling String". It is a string connected to a small electric motor mounted in the ceiling. The motor is connected to a nearby Ethernet cable. When a bit of information flows through that Ethernet cable, it causes a twitch of the motor. The more the information flows, the motor runs f |
https://en.wikipedia.org/wiki/NNDB | The Notable Names Database (NNDB) is an online database of biographical details of over 40,000 people. Soylent Communications, a sole proprietorship that also hosted the now-defunct Rotten.com, describes NNDB as an "intelligence aggregator" of noteworthy persons, highlighting their interpersonal connections. The Rotten.com domain was registered in 1996 by former Apple and Netscape software engineer Thomas E. Dell, who was also known by his internet alias, "Soylent".
Entries
Each entry has an executive summary with an assessment of the person's notability. It also lists the person's death, cause of death, and risk factors that may affect life span, such as obesity, cocaine addiction, or dwarfism. Businesspeople and government officials are listed with chronologies of their posts, positions, and board memberships. NNDB has articles on films with user-submitted reviews, discographies of selected music groups, and extensive bibliographies.
NNDB Mapper
The NNDB Mapper, a visual tool for exploring connections between people, was made available in May 2008. It required Adobe Flash 7.
See also
NameBase
References
External links
Internet properties established in 2002
Databases
Online databases
Online person databases |
https://en.wikipedia.org/wiki/Topology%20dissemination%20based%20on%20reverse-path%20forwarding | Topology broadcast based on reverse-path forwarding (TBRPF) is a link-state routing protocol for wireless mesh networks.
The obvious design for a wireless link-state protocol (such as the optimized link-state routing protocol) transmits large amounts of routing data, and this limits the utility of a link-state protocol when the network is made of moving nodes. The number and size of the routing transmissions make the protocol unusable for all but the smallest networks.
The conventional solution is to use a distance-vector routing protocol such as AODV, which transmits routing data only when a route is needed. However, distance-vector routing requires more time to establish a connection, and the routes are less optimized than those of a link-state router.
TBRPF transmits only the differences between the previous network state and the current network state. Therefore, routing messages are smaller, and can therefore be sent more frequently. This means that nodes' routing tables are more up-to-date.
TBRPF is covered by a US patent filed in December 2000 and assigned to SRI International (Patent ID 6845091, issued January 18, 2005).
Further reading
B. Bellur, and R.G. Ogier. 1999. "A Reliable, Efficient Topology Broadcast Protocol for Dynamic Networks," Proc. IEEE INFOCOMM ’99, pp. 178–186.
R.G. Ogier, M.G. Lewis, F.L. Templin, and B. Bellur. 2002. "Topology Broadcast based on Reverse Path Forwarding (TBRPF)," RFC 3684.
External links
RFC 3684: Topology Dissemination Based on Reverse-Path Forwarding (TBRPF)
Packethop Inc. website
Wireless networking
Ad hoc routing protocols
SRI International |
https://en.wikipedia.org/wiki/Dynamic%20Source%20Routing | Dynamic Source Routing (DSR) is a routing protocol for wireless mesh networks. It is similar to AODV in that it forms a route on-demand when a transmitting node requests one. However, it uses source routing instead of relying on the routing table at each intermediate device.
Background
Determining the source route requires accumulating the address of each device between the source and destination during route discovery. The accumulated path information is cached by nodes processing the route discovery packets. The learned paths are used to route packets. To accomplish source routing, the routed packets contain the address of each device the packet will traverse. This may result in high overhead for long paths or large addresses, like IPv6. To avoid using source routing, DSR optionally defines a flow id option that allows packets to be forwarded on a hop-by-hop basis.
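A hypothetical sketch in C of the idea (all field names are invented for illustration; the actual DSR option formats are specified in RFC 4728): the packet carries the full hop list, and each forwarder merely advances an index rather than consulting its own routing table:

```c
#include <stdint.h>
#include <stdio.h>

#define DSR_MAX_HOPS 16

/* Invented source-route header: the sender fills in every hop up
 * front; intermediate nodes only advance next_hop. The per-packet
 * overhead grows with path length and address size. */
struct dsr_source_route {
    uint8_t  hop_count;           /* number of addresses in route[] */
    uint8_t  next_hop;            /* index of the next hop to visit */
    uint32_t route[DSR_MAX_HOPS]; /* addresses from source to dest  */
};

/* Return the next address to forward to, or 0 when the packet has
 * reached the end of its recorded route. */
static uint32_t dsr_next_hop(struct dsr_source_route *sr) {
    if (sr->next_hop >= sr->hop_count)
        return 0;
    return sr->route[sr->next_hop++];
}

int main(void) {
    struct dsr_source_route sr =
        { 3, 0, { 0x0A000001, 0x0A000002, 0x0A000003 } };
    for (uint32_t hop; (hop = dsr_next_hop(&sr)) != 0; )
        printf("forward to 0x%08X\n", hop);
    return 0;
}
```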
This protocol is truly based on source routing whereby all the routing information is maintained (continually updated) at mobile nodes.
It has only two major phases, which are Route Discovery and Route Maintenance.
Route Reply would only be generated if the message has reached the intended destination node (route record which is initially contained in Route Request would be inserted into the Route Reply).
To return the Route Reply, the destination node must have a route to the source node. If the route is in the Destination Node's route cache, the route would be used. Otherwise, the node will reverse the route based on the route record in the Route Request message header (this requires that all links are symmetric).
In the event of a fatal transmission error, the Route Maintenance phase is initiated, whereby Route Error packets are generated at a node. The erroneous hop is removed from the node's route cache, and all routes containing the hop are truncated at that point. The Route Discovery phase is then initiated again to determine the most viable route.
For information on other similar protocols, see t |
https://en.wikipedia.org/wiki/WMYD | WMYD (channel 20) is an independent television station in Detroit, Michigan, United States. It is owned by the E. W. Scripps Company alongside ABC affiliate WXYZ-TV (channel 7). Both stations share studios at Broadcast House on 10 Mile Road in Southfield, while WMYD's transmitter is located on Eight Mile Road in Oak Park.
Founded in 1968 as WXON on channel 62 and relocated to channel 20 in 1972, the station was an independent focusing primarily on syndicated programs and classic reruns. It made an ill-fated foray into subscription television (STV) from 1979 to 1983, broadcasting a pay service under the ON TV brand that was dogged by a poor relationship with the station and signal piracy issues exacerbated by Detroit's proximity to Canada. After it folded, WXON continued as an independent station and emerged as the second-rated independent in its market, affiliating with The WB in 1995.
Granite Broadcasting purchased WXON in 1997 and renamed it WDWB. However, its high debt load motivated several attempts to sell the station, one of which fell apart after The WB merged with UPN to form The CW but did not include WDWB as an affiliate. The station then became WMYD, aligned with MyNetworkTV and airing its programming for 15 years. In 2014, Scripps purchased WMYD and added local newscasts from the WXYZ-TV newsroom. As Detroit's ATSC 3.0 (NextGen TV) station, WMYD is used in automotive-related tests of the transmission technology.
History
The channel 62 years
At the end of January 1965, Aben Johnson, majority owner of a chemical manufacturing company and with several real estate holdings, filed with the Federal Communications Commission (FCC) to build a television station on channel 44 in Pontiac, in Oakland County. After an overhaul of the FCC's UHF table of allocations, Johnson amended his application to specify channel 62 in Detroit. A construction permit for the station was issued on October 7, 1965, and assigned the call sign WXON that December. Johnson also held |
https://en.wikipedia.org/wiki/Handheld%20PC | A handheld personal computer (PC) is a pocket-sized computer typically built around a clamshell form factor and is significantly smaller than any standard laptop computer, but based on the same principles. It is sometimes referred to as a palmtop computer, not to be confused with Palmtop PC, which was a name used mainly by Hewlett-Packard.
Most handheld PCs use an operating system specifically designed for mobile use. Ultra-compact laptops capable of running common x86-compatible desktop operating systems are typically classified as subnotebooks.
The name Handheld PC was used by Microsoft from 1996 until the early 2000s to describe a category of small computers having keyboards and running the Windows CE operating system.
History
The first hand-held device compatible with desktop IBM personal computers of the time was the Atari Portfolio of 1989. Other early models were the Poqet PC of 1989 and the Hewlett-Packard HP 95LX of 1991, which ran the MS-DOS operating system. Other DOS-compatible hand-held computers also existed. After 2000, the handheld PC segment practically halted, replaced by other forms, although later communicators such as the Nokia E90 can be considered to be of the same class.
Today, most modern handheld PCs are designed around portable gaming, and due to the popularity of the Nintendo Switch and, subsequently, the Steam Deck, most modern handheld PC designs are influenced by the designs of both devices.
Microsoft's Handheld PC standard
The Handheld PC (with capital "H") or H/PC for short was the official name of a hardware design for personal digital assistant (PDA) devices running Windows CE. The intent of Windows CE was to provide an environment for applications compatible with the Microsoft Windows operating system, on processors better suited to low-power operation in a portable device. It provides the appointment calendar functions usual for any PDA.
Microsoft was wary of using the term "PDA" for the Handheld PC. Instead, Microsoft market |
https://en.wikipedia.org/wiki/Order%20One%20Network%20Protocol | The OrderOne MANET Routing Protocol is an algorithm for computers communicating by digital radio in a mesh network to find each other, and send messages to each other along a reasonably efficient path. It was designed for, and promoted as working with wireless mesh networks.
OON's designers say it can handle thousands of nodes, where most other protocols handle fewer than a hundred. OON uses hierarchical algorithms to minimize the total amount of transmissions needed for routing. Routing overhead is limited to between 1% and 5% of node-to-node bandwidth in any network and does not grow as the network size grows.
The basic idea is that a network organizes itself into a tree. Nodes meet at the root of the tree to establish an initial route. The route then moves away from the root by cutting corners, as ant-trails do. When there are no more corners to cut, a nearly optimum route exists. This route is continuously maintained.
Each process can be performed with localized minimal communication, and very small router tables. OORP requires about 200K of memory. A simulated network with 500 nodes transmitting at 200 bytes/second organized itself in about 20 seconds.
As of 2004, OORP was patented or had other significant intellectual property restrictions. See the link below.
Assumptions
Each computer, or "node" of the network has a unique name, at least one network link, and a computer with some capacity to hold a list of neighbors.
Organizing the tree
The network nodes form a hierarchy by having each node select a parent. The parent is a neighbor node that is the best next step to the greatest number of other nodes. This method creates a hierarchy around nodes that are more likely to be present, have more capacity, and are closer to the topological center of the network. The memory limitations of a small node are reflected in its small routing table, which automatically prevents it from being a preferred central node.
At the top, one or two nodes are un |
https://en.wikipedia.org/wiki/Comparative%20biology | Comparative biology uses natural variation and disparity to understand the patterns of life at all levels—from genes to communities—and the critical role of organisms in ecosystems. Comparative biology is a cross-lineage approach to understanding the phylogenetic history of individuals or higher taxa and the mechanisms and patterns that drive it. Comparative biology encompasses evolutionary biology, systematics, neontology, paleontology, ethology, anthropology, and biogeography, as well as historical approaches to developmental biology, genomics, physiology, ecology, and many other areas of the biological sciences. The comparative approach also has numerous applications in human health, genetics, biomedicine, and conservation biology. The biological relationships (phylogenies, pedigrees) are important for comparative analyses and are usually represented by a phylogenetic tree or cladogram to differentiate those features with single origins (homology) from those with multiple origins (homoplasy).
See also
Cladistics
Comparative Anatomy
Evolution
Evolutionary Biology
Systematics
Bioinformatics
Neontology
Paleontology
Phylogenetics
Genomics
Evolutionary biology
Comparisons |
https://en.wikipedia.org/wiki/Mathematical%20structure | In mathematics, a structure is a set endowed with some additional features on the set (e.g. an operation, relation, metric, or topology). Often, the additional features are attached or related to the set, so as to provide it with some additional meaning or significance.
A partial list of possible structures includes measures, algebraic structures (groups, fields, etc.), topologies, metric structures (geometries), orders, events, equivalence relations, differential structures, and categories.
Sometimes, a set is endowed with more than one feature simultaneously, which allows mathematicians to study the interaction between the different structures more richly. For example, an ordering imposes a rigid form, shape, or topology on the set, and if a set has both a topology feature and a group feature, such that these two features are related in a certain way, then the structure becomes a topological group.
Mappings between sets which preserve structures (i.e., structures in the domain are mapped to equivalent structures in the codomain) are of special interest in many fields of mathematics. Examples are homomorphisms, which preserve algebraic structures; homeomorphisms, which preserve topological structures; and diffeomorphisms, which preserve differential structures.
History
In 1939, the French group with the pseudonym Nicolas Bourbaki saw structures as the root of mathematics. They first mentioned them in their "Fascicule" of Theory of Sets and expanded it into Chapter IV of the 1957 edition. They identified three mother structures: algebraic, topological, and order.
Example: the real numbers
The set of real numbers has several standard structures:
An order: each number is either less than or greater than any other number.
Algebraic structure: there are operations of multiplication and addition that make it into a field.
A measure: intervals of the real line have a specific length, which can be extended to the Lebesgue measure on many of its subsets.
A metric: there is |
https://en.wikipedia.org/wiki/250%20%28number%29 | 250 (two hundred [and] fifty) is the natural number following 249 and preceding 251.
250 is also the sum of the squares of the divisors of 14: 1² + 2² + 7² + 14² = 250.
250 also has the property that its distinct prime factors, 2 and 5 (since 250 = 2 × 5³), are exactly its nonzero digits.
Integers from 251 to 259
251
252
253
254
255
256
257
258
259
References
Integers |
https://en.wikipedia.org/wiki/221%20%28number%29 | 221 (two hundred [and] twenty-one) is the natural number following 220 and preceding 222.
In mathematics
Its factorization as 13 × 17 makes 221 the product of two consecutive prime numbers, the sixth smallest such product (after 6, 15, 35, 77, and 143).
221 is a centered square number.
In other fields
In Texas hold 'em, the probability of being dealt pocket aces (the strongest possible outcome in the initial deal of two cards per player) is 1/221.
Sherlock Holmes's home address: 221B Baker Street.
References
Integers |
https://en.wikipedia.org/wiki/Sega%20Channel | The Sega Channel is a discontinued online game service developed by Sega for the Sega Genesis video game console, serving as a content delivery system. Launched on December 14, 1994, the Sega Channel was provided to the public by TCI and Time Warner Cable through cable television services by way of coaxial cable. It was a pay-to-play service, through which customers could access Genesis games online, play game demos, and get cheat codes. Lasting until July 31, 1998, the Sega Channel operated three years after the release of Sega's next-generation console, the Sega Saturn. Though criticized for its poorly timed launch and high subscription fee, the Sega Channel has been praised for its innovations in downloadable content and impact on online game services.
History
Released as the Mega Drive in Japan in 1988 and in Europe and other regions in 1990, and in North America in 1989, the Sega Genesis was Sega's entry into the 16-bit era of video game consoles.
In 1990, Sega started its first internet-based service for Genesis, Sega Meganet, in Japan. Operating through a cartridge and a peripheral called the Mega Modem, it allowed Mega Drive owners to play 17 games online. A North American version, the "Tele-Genesis", was announced but never released. Another phone-based system, the Mega Anser, turned the Japanese Mega Drive into an online banking terminal. Due to Meganet's low number of games, high price, and the Mega Drive's lack of success in Japan, the system was a commercial failure. By 1992, the Mega Modem peripheral could be found in bargain bins at a reduced price, and a remodeled version of the Mega Drive released in 1993 removed the EXT 9-pin port, preventing connections to the Meganet service.
In April 1993, Sega announced the Sega Channel service, which would use cable television services to deliver content. In the US, national testing began in June, and deployment began in December, with a complete US release in 1994. By June 1994, 21 cable compan |
https://en.wikipedia.org/wiki/Smart%20gun | A smart gun, also called a smart-gun, or smartgun, is a firearm that can detect its authorized user(s) or something that is normally only possessed by its authorized user(s). The term is also used in science fiction to refer to various types of semi-automatic firearms.
Smart guns have one or more systems that allow them to fire only when activated by an authorized user. Those systems typically employ RFID chips or other proximity tokens, fingerprint recognition, magnetic rings, or mechanical locks. They can thereby prevent accidental shootings, gun thefts, and criminal usage by persons not authorized to use the guns.
Related to smart guns are other smart firearms safety devices such as biometric or RFID activated accessories and safes.
Commercial availability
No smart gun has ever been sold on the commercial market in the United States. The Armatix iP1, a .22 caliber handgun with an active RFID watch used to unlock it, is the most mature smart gun developed. It was briefly planned to be offered at a few retailers before being quickly withdrawn due to pressure from gun-rights advocates concerned that it would trigger the New Jersey Childproof Handgun Law.
As of 2019, a number of startups and companies including Armatix, Biofire, LodeStar Firearms, and Swiss company SAAR are purportedly developing various smart handguns and rifles, but none have brought the technology to market.
Reception
Reception to the concept of smart gun technology has been mixed. There have been public calls to develop the technology, most notably from President Obama. Gun-rights groups including the National Rifle Association of America have expressed concerns that the technology could be mandated, and some firearms enthusiasts are concerned that the technology wouldn't be reliable enough to trust.
National Rifle Association
The NRA and its membership boycotted Smith & Wesson after it was revealed in 1999 that the company was developing a smart gun for the U.S. government.
More recen |
https://en.wikipedia.org/wiki/Josephus%20problem | In computer science and mathematics, the Josephus problem (or Josephus permutation) is a theoretical problem related to a certain counting-out game. Such games are used to pick out a person from a group, e.g. eeny, meeny, miny, moe.
In the particular counting-out game that gives rise to the Josephus problem, a number of people are standing in a circle waiting to be executed. Counting begins at a specified point in the circle and proceeds around the circle in a specified direction. After a specified number of people are skipped, the next person is executed. The procedure is repeated with the remaining people, starting with the next person, going in the same direction and skipping the same number of people, until only one person remains, and is freed.
The problem—given the number of people, starting point, direction, and number to be skipped—is to choose the position in the initial circle to avoid execution.
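A small C implementation of the counting-out process, using the standard Josephus recurrence J(1) = 0, J(n) = (J(n−1) + k) mod n for the survivor's zero-based position (a sketch; the article's general problem also allows arbitrary starting points and directions):

```c
#include <stdio.h>

/* Zero-based position of the survivor when every k-th person in a
 * circle of n people is eliminated, via the recurrence
 * J(1) = 0, J(n) = (J(n-1) + k) mod n. */
static int josephus(int n, int k) {
    int pos = 0;                  /* survivor's index in a circle of one */
    for (int i = 2; i <= n; i++)  /* grow the circle back up to n people */
        pos = (pos + k) % i;
    return pos;
}

int main(void) {
    /* 41 people counting by threes, as in the traditional telling:
       prints position 31 (1-based). */
    printf("survivor (1-based): %d\n", josephus(41, 3) + 1);
    return 0;
}
```

The recurrence works because after the first elimination a circle of n people reduces to a circle of n−1 with the count restarted, so the survivor's index merely shifts by k modulo the circle size; the whole computation is O(n) time and O(1) space.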
History
The problem is named after Flavius Josephus, a Jewish historian living in the 1st century. According to Josephus' firsthand account of the siege of Yodfat, he and his 40 soldiers were trapped in a cave by Roman soldiers. They chose suicide over capture, and settled on a serial method of committing suicide by drawing lots. Josephus states that by luck or possibly by the hand of God, he and another man remained until the end and surrendered to the Romans rather than killing themselves. This is the story given in Book 3, Chapter 8, part 7 of Josephus' The Jewish War (writing of himself in the third person):
The details of the mechanism used in this feat are rather vague. According to James Dowdy and Michael Mays, in 1612 Claude Gaspard Bachet de Méziriac suggested the specific mechanism of arranging the men in a circle and counting by threes to determine the order of elimination. This story has been often repeated and the specific details vary considerably from source to source. For instance, Israel Nathan Herstein and Irving Kaplansky (1974) have Joseph |
https://en.wikipedia.org/wiki/Sequence%20diagram | In software engineering, a sequence diagram or system sequence diagram (SSD) shows process interactions arranged in a time sequence. The diagram depicts the processes and objects involved and the sequence of messages exchanged as needed to carry out the functionality. Sequence diagrams are typically associated with use case realizations in the 4+1 architectural view model of the system under development. Sequence diagrams are sometimes called event diagrams or event scenarios.
For a particular scenario of a use case, the diagrams show the events that external actors generate, their order, and possible inter-system events. All systems are treated as black boxes; the diagram places emphasis on events that cross the system boundary from actors to systems. A system sequence diagram should be done for the main success scenario of the use case, and for frequent or complex alternative scenarios.
Key elements of sequence diagram
A sequence diagram shows, as parallel vertical lines (lifelines), different processes or objects that live simultaneously, and, as horizontal arrows, the messages exchanged between them, in the order in which they occur. This allows the specification of simple runtime scenarios in a graphical manner.
A system sequence diagram should specify and show the following:
External actors
Messages (methods) invoked by these actors
Return values (if any) associated with previous messages
Indication of any loops or iteration area
Reading a system sequence diagram
Professionals, in developing a project, often use system sequence diagrams to illustrate how certain tasks are done between users and the system. These tasks may include repetitive, simple, or complex tasks. The purpose is to illustrate the use case in a visual format. In order to construct a system sequence diagram, you need to be familiar with the unified modeling language (UML). These models show the logic behind the actors (people who affect the system) and the system in performing the task. R |
https://en.wikipedia.org/wiki/Reconciliation%20ecology | Reconciliation ecology is the branch of ecology which studies ways to encourage biodiversity in the human-dominated ecosystems of the anthropocene era. Michael Rosenzweig first articulated the concept in his book Win-Win Ecology, based on the theory that there is not enough area for all of earth's biodiversity to be saved within designated nature preserves. Therefore, humans should increase biodiversity in human-dominated landscapes. By managing for biodiversity in ways that do not decrease human utility of the system, it is a "win-win" situation for both human use and native biodiversity. The science is based in the ecological foundation of human land-use trends and species-area relationships. It has many benefits beyond protection of biodiversity, and there are numerous examples of it around the globe. Aspects of reconciliation ecology can already be found in management legislation, but there are challenges in both public acceptance and ecological success of reconciliation attempts.
Theoretical basis
Human land use trends
Traditional conservation is based on "reservation and restoration"; reservation meaning setting pristine lands aside for the sole purpose of maintaining biodiversity, and restoration meaning returning human impacted ecosystems to their natural state. However, reconciliation ecologists argue that there is too great a proportion of land already impacted by humans for these techniques to succeed.
While it is difficult to measure exactly how much land has been transformed by human use, estimates range from 39 to 50%. This includes agricultural land, pastureland, urban areas, and heavily harvested forest systems. An estimated 50% of arable land is already under cultivation. Land transformation has increased rapidly over the last fifty years, and is likely to continue to increase. Beyond direct transformation of land area, humans have impacted the global biogeochemical cycles, leading to human caused change in even the most remote areas. These inc |
https://en.wikipedia.org/wiki/New%20York%20State%20Agricultural%20Experiment%20Station | The New York State Agricultural Experiment Station (NYSAES) at Geneva, Ontario County, New York State, is an agricultural experiment station operated by the New York State College of Agriculture and Life Sciences at Cornell University. In August 2018, the station was rebranded as Cornell AgriTech, but its official name remains unchanged.
The Station is the sixth oldest institution of its kind in the country.
History
The New York State Agricultural Experiment Station was established by an Act of the New York State Legislature on June 26, 1880. More than 100 locations were considered, but a 125-acre parcel in Geneva was eventually chosen. In 1882, the State purchased the land, an Italianate villa, and all outbuildings from Nehemiah and Louisa Denton for $25,000. The villa was converted into the Station headquarters, now known as Parrott Hall. The new institution became operative on March 1, 1882. It would become known colloquially as the Geneva Experiment Station.
An 1883 Report of the Board of Control of the NYSAES to the New York State Assembly stated that there were immediate and dire threats to State agricultural output caused by insect pests, bovine diseases, drought, soil nutrient exhaustion, and outward labor migration, and that an organization dedicated to staving off these threats was needed.
Originally, farmers wanted the station to serve as a model farm. However, the first director, E. Lewis Sturtevant, immediately established the policy that the station was to conduct agricultural science research and to establish experimental plots, both of which would have little resemblance to commercial agriculture. Nevertheless, the primary mission of the Station has always been to serve those who produce and consume New York's agricultural products.
In its early days, Station scientists, who were few in number, concentrated their efforts on dairy, horticulture, and evaluation of varieties of vegetables and field crops. In 1887, the program was broadened to include |
https://en.wikipedia.org/wiki/Project%20Sherwood | Project Sherwood was the codename for a United States program in controlled nuclear fusion during the period it was classified. After 1958, when fusion research was declassified around the world, the project was reorganized as a separate division within the United States Atomic Energy Commission (AEC) and lost its codename.
Sherwood developed out of a number of ad hoc efforts dating back to about 1951. Primary among these was the stellarator program at Princeton University, itself code-named Project Matterhorn. Since then the weapons labs had clamored to join the club, Los Alamos with its z-pinch efforts, Livermore's magnetic mirror program, and later, Oak Ridge's fuel injector efforts. By 1953 the combined budgets were increasing into the million dollar range, demanding some sort of oversight at the AEC level.
The name "Sherwood" was suggested by Paul McDaniel, Deputy Director of the AEC. He noted that funding for the wartime Hood Building was being dropped and moved to the new program, so they were "robbing Hood to pay Friar Tuck", a reference to the British physicist and fusion researcher James L. Tuck. The connection to Robin Hood and Friar Tuck gave the project its name.
Lewis Strauss strongly supported keeping the program secret until pressure from the United Kingdom led to a declassification effort at the 2nd Atoms for Peace meeting in the fall of 1958. After this time a number of purely civilian organizations also formed to organize meetings on the topic, with the American Physical Society organizing meetings under their Division of Plasma Physics. These meetings have been carried on to this day and were renamed International Sherwood Fusion Theory Conference. The original Project Sherwood became simply the Controlled Thermonuclear Research program within the AEC and its follow-on organizations.
Designs and concepts
Research centered on three plasma confinement designs; the stellarator headed by Lyman Spitzer at the Princeton Plasma Physics Laboratory, the to |
https://en.wikipedia.org/wiki/LonTalk | LonTalk is a networking protocol originally developed by Echelon Corporation for networking devices over media such as twisted pair, powerlines, fiber optics, and RF. It is popular for the automation of various functions in industrial control, home automation, transportation, and building systems such as lighting and HVAC (as in intelligent buildings). The protocol has now been adopted as an open international control networking standard in the ISO/IEC 14908 family of standards. Published through ISO/IEC JTC 1/SC 6, this standard specifies a multi-purpose control network protocol stack optimized for smart grid, smart building, and smart city applications.
LonWorks
LonTalk is part of the technology platform called LonWorks.
Protocol
The protocol is defined by ISO/IEC 14908.1 and published by ISO/IEC JTC 1/SC 6. The LonTalk protocol has also been ratified by standards setting bodies in the following industries & regions:
ANSI Standard ANSI/CEA 709.1 - Control networking (US)
EN 14908 - Building controls (EU)
GB/Z 20177.1-2006 - Control networking and building controls (China)
IEEE 1473-L - Train controls (US)
SEMI E54 - Semiconductor manufacturing equipment sensors & actuators (US)
IFSF - International forecourt standard for EU petrol stations
OSGP - A widely used protocol for smart grid devices built on ISO/IEC 14908.1
The protocol is only available from the official distribution organizations of each regional standards body or in the form of microprocessors manufactured by companies that have ported the standard to their respective chip designs.
Security
An April 2015 cryptanalysis paper claims to have found serious security flaws in the OMA Digest algorithm of the Open Smart Grid Protocol, which itself is built on the same EN 14908 foundations as LonTalk. The authors speculate that "every other LonTalk-derived standard" is similarly vulnerable to the key-recovery attacks described.
See also
BACnet -- A building automation and control protoc |
https://en.wikipedia.org/wiki/Iron%20law%20of%20prohibition | The iron law of prohibition is a term coined by Richard Cowan in 1986 which posits that as law enforcement becomes more intense, the potency of prohibited substances increases. Cowan put it this way: "the harder the enforcement, the harder the drugs."
This law is an application of the Alchian–Allen effect; Libertarian judge Jim Gray calls the law the "cardinal rule of prohibition", and notes that it is a powerful argument for the legalization of drugs. It is based on the premise that when drugs or alcohol are prohibited, they will be produced in black markets in more concentrated and powerful forms, because these more potent forms offer better efficiency in the business model—they take up less space in storage, less weight in transportation, and they sell for more money. Economist Mark Thornton writes that the iron law of prohibition undermines the argument in favor of prohibition, because the higher-potency forms are less safe for the consumer.
Findings
Thornton published research showing that the potency of marijuana increased in response to higher enforcement budgets. He later expanded this research in his dissertation to include other illegal drugs and alcohol during Prohibition in the United States (1920–1933). The basic approach is based on the Alchian and Allen Theorem. This argument says that a fixed cost (e.g., a transportation fee) added to the price of two varieties of the same product (e.g., a high-quality and a low-quality red apple) results in greater sales of the more expensive variety. When applied to rum-running, drug smuggling, and blockade running, the more potent products become the sole focus of the suppliers. Thornton notes that the greatest added cost in illegal sales is the avoidance of detection. Thornton says that if drugs are legalized, then consumers will begin to wean themselves off the higher-potency forms, for instance with cocaine users buying coca leaves, and heroin users switching to opium.
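A toy calculation of the Alchian and Allen effect (the prices are invented for illustration): suppose a high-quality unit sells for 3 and a low-quality unit for 1, and a fixed cost of 1 per unit (transport, or the risk premium of smuggling) is added to both. The relative price of the high-quality unit falls:

$$\frac{3}{1} = 3 \qquad \longrightarrow \qquad \frac{3+1}{1+1} = 2.$$

The high-quality unit now costs only twice, rather than three times, as much as the low-quality one, so consumption shifts toward the higher-quality (here, higher-potency) variety.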
The popular shift from beer to wine to |
https://en.wikipedia.org/wiki/Remote%20pickup%20unit | A remote pickup unit or RPU is a radio system using special radio frequencies set aside for electronic news-gathering (ENG) and remote broadcasting. It can also be used for other types of point-to-point radio links.
An RPU is used to send program material from a remote location to the broadcast station or network. Usually these systems use specialized high audio fidelity radio equipment. One manufacturer, Marti, was best known for manufacturing remote pickup equipment, so much so that the name is often used to refer to a remote pickup unit regardless of the actual manufacturer.
Today, much remote broadcasting uses digital audio systems fed over ISDN telephone lines. This method is favored because of the reliability of telephone lines compared with a radio link back to the station. The radio RPU remains far more popular for ENG, however, because of its flexibility.
Footnotes
Broadcast engineering |
https://en.wikipedia.org/wiki/Linear%20bounded%20automaton | In computer science, a linear bounded automaton (plural linear bounded automata, abbreviated LBA) is a restricted form of Turing machine.
Operation
A linear bounded automaton is a Turing machine that satisfies the following three conditions:
Its input alphabet includes two special symbols, serving as left and right endmarkers.
Its transitions may not print other symbols over the endmarkers.
Its transitions may neither move to the left of the left endmarker nor to the right of the right endmarker.
In other words:
instead of having potentially infinite tape on which to compute, computation is restricted to the portion of the tape containing the input plus the two tape squares holding the endmarkers.
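One common way to formalize these three conditions (a sketch; notation varies by textbook) is as a tuple

$$M = (Q, \Sigma, \Gamma, \delta, q_0, \vdash, \dashv, F), \qquad \Sigma \cup \{\vdash, \dashv\} \subseteq \Gamma,$$

where the transition function $\delta : Q \times \Gamma \to Q \times \Gamma \times \{L, R\}$ is restricted so that, for every state $q$, $\delta(q, \vdash)$ has the form $(q', \vdash, R)$ and $\delta(q, \dashv)$ has the form $(q'', \dashv, L)$: the endmarkers are never overwritten, and the head never moves beyond them.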
An alternative, less restrictive definition is as follows:
Like a Turing machine, an LBA possesses a tape made up of cells that can contain symbols from a finite alphabet, a head that can read from or write to one cell on the tape at a time and can be moved, and a finite number of states.
An LBA differs from a Turing machine in that while the tape is initially considered to have unbounded length, only a finite contiguous portion of the tape, whose length is a linear function of the length of the initial input, can be accessed by the read/write head; hence the name linear bounded automaton.
This limitation makes an LBA a somewhat more accurate model of a real-world computer than a Turing machine, whose definition assumes unlimited tape.
The stronger and the weaker definitions lead to the same computational power for the respective automaton classes, by the same argument used to prove the linear speedup theorem.
LBA and context-sensitive languages
Linear bounded automata are acceptors for the class of context-sensitive languages. The only restriction placed on grammars for such languages is that no production maps a string to a shorter string. Thus no derivation of a string in a context-sensitive language can contain a sentential form longer than the string |
https://en.wikipedia.org/wiki/Random%20amplification%20of%20polymorphic%20DNA | Random amplified polymorphic DNA (RAPD), pronounced "rapid", is a type of polymerase chain reaction (PCR), but the segments of DNA that are amplified are random. The scientist performing RAPD creates several arbitrary, short primers (10–12 nucleotides), then proceeds with the PCR using a large template of genomic DNA, hoping that fragments will amplify. By resolving the resulting patterns, a semi-unique profile can be gleaned from a RAPD reaction.
No knowledge of the DNA sequence of the targeted genome is required, as the primers will bind somewhere in the sequence, but it is not certain exactly where. This makes the method popular for comparing the DNA of biological systems that have not had the attention of the scientific community, or in a system in which relatively few DNA sequences are compared (it is not suitable for forming a cDNA databank). Because it relies on a large, intact DNA template sequence, it has some limitations in the use of degraded DNA samples. Its resolving power is much lower than targeted, species-specific DNA comparison methods, such as short tandem repeats. In recent years, RAPD has been used to characterize, and trace, the phylogeny of diverse plant and animal species.
Introduction
RAPD markers are decamer (10 nucleotides long) DNA fragments produced by PCR amplification of random segments of genomic DNA with a single primer of arbitrary nucleotide sequence; they are able to differentiate between genetically distinct individuals, although not necessarily in a reproducible way.
It is used to analyze the genetic diversity of an individual by using random primers. Due to problems in experiment reproducibility, many scientific journals do not accept experiments merely based on RAPDs anymore.
RAPD requires only one primer for amplification.
How it works
After amplification with PCR, samples are loaded into a gel (either agarose or polyacrylamide) for gel electrophoresis. The differing sizes created through random amplification will sepa |
https://en.wikipedia.org/wiki/Cascode%20voltage%20switch%20logic | Cascode Voltage Switch Logic (CVSL) refers to a CMOS-type logic family which is designed for certain advantages. It requires mainly N-channel MOSFET transistors to implement the logic using true and complementary input signals, and also needs two P-channel transistors at the top to pull one of the outputs high. This logic family is also known as Differential Cascode Voltage Switch Logic (DCVS or DCVSL).
See also
Logic family
References
Weste and Harris, CMOS VLSI Design, Third Edition (international edition)
Logic families |
https://en.wikipedia.org/wiki/Skeleton%20%28category%20theory%29 | In mathematics, a skeleton of a category is a subcategory that, roughly speaking, does not contain any extraneous isomorphisms. In a certain sense, the skeleton of a category is the "smallest" equivalent category, which captures all "categorical properties" of the original. In fact, two categories are equivalent if and only if they have isomorphic skeletons. A category is called skeletal if isomorphic objects are necessarily identical.
Definition
A skeleton of a category C is an equivalent category D in which no two distinct objects are isomorphic. It is generally considered to be a subcategory. In detail, a skeleton of C is a category D such that:
D is a subcategory of C: every object of D is an object of C
for every pair of objects d1 and d2 of D, the morphisms in D are morphisms in C, i.e. Hom_D(d1, d2) ⊆ Hom_C(d1, d2),
and the identities and compositions in D are the restrictions of those in C.
The inclusion of D in C is full, meaning that for every pair of objects d1 and d2 of D we strengthen the above subset relation to an equality: Hom_D(d1, d2) = Hom_C(d1, d2).
The inclusion of D in C is essentially surjective: Every C-object is isomorphic to some D-object.
D is skeletal: No two distinct D-objects are isomorphic.
Existence and uniqueness
It is a basic fact that every small category has a skeleton; more generally, every accessible category has a skeleton. (This is equivalent to the axiom of choice.) Also, although a category may have many distinct skeletons, any two skeletons are isomorphic as categories, so up to isomorphism of categories, the skeleton of a category is unique.
The importance of skeletons comes from the fact that they are (up to isomorphism of categories), canonical representatives of the equivalence classes of categories under the equivalence relation of equivalence of categories. This follows from the fact that any skeleton of a category C is equivalent to C, and that two categories are equivalent if and only if they have isomorphic skeletons.
Examples
The category Set of all sets h |
https://en.wikipedia.org/wiki/Pseudorandom%20generator | In theoretical computer science and cryptography, a pseudorandom generator (PRG) for a class of statistical tests is a deterministic procedure that maps a random seed to a longer pseudorandom string such that no statistical test in the class can distinguish between the output of the generator and the uniform distribution. The random seed itself is typically a short binary string drawn from the uniform distribution.
Many different classes of statistical tests have been considered in the literature, among them the class of all Boolean circuits of a given size.
It is not known whether good pseudorandom generators for this class exist, but it is known that their existence is in a certain sense equivalent to (unproven) circuit lower bounds in computational complexity theory.
Hence the construction of pseudorandom generators for the class of Boolean circuits of a given size rests on currently unproven hardness assumptions.
Definition
Let 𝒜 = {A : {0,1}^n → {0,1}^*} be a class of functions.
These functions are the statistical tests that the pseudorandom generator will try to fool, and they are usually algorithms.
Sometimes the statistical tests are also called adversaries or distinguishers. The notation {0,1}^* in the codomain of the functions is the Kleene star.
A function G : {0,1}^ℓ → {0,1}^n with ℓ ≤ n is a pseudorandom generator against 𝒜 with bias ε if, for every A in 𝒜, the statistical distance between the distributions A(G(U_ℓ)) and A(U_n) is at most ε, where U_k is the uniform distribution on {0,1}^k.
The quantity ℓ is called the seed length and the quantity n − ℓ is called the stretch of the pseudorandom generator.
A pseudorandom generator against a family of adversaries 𝒜 = {𝒜_n} with bias ε(n) is a family of pseudorandom generators {G_n}, where G_n is a pseudorandom generator against 𝒜_n with bias ε(n) and seed length ℓ(n).
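As a toy illustration of this definition (the "generator", tests, and names below are invented for the sketch; for 0/1-valued tests the statistical distance is just the difference of acceptance probabilities), the bias of a small generator can be computed by brute-force enumeration:
from itertools import product

def G(seed):
    # Toy "generator": stretch l bits to l+1 by appending the parity bit.
    return seed + (sum(seed) % 2,)

def bias(test, l):
    # |Pr[test(G(U_l)) = 1] - Pr[test(U_{l+1}) = 1]|
    p_gen = sum(test(G(s)) for s in product((0, 1), repeat=l)) / 2**l
    p_uni = sum(test(u) for u in product((0, 1), repeat=l + 1)) / 2**(l + 1)
    return abs(p_gen - p_uni)

print(bias(lambda x: x[-1], 4))            # 0.0: the last bit alone looks uniform
print(bias(lambda x: sum(x) % 2 == 0, 4))  # 0.5: every output has even parity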
In most applications, the family represents some model of computation or some set of algorithms, and one is interested in designing a pseudorandom generator with small seed length and bias, and such that the output of the generator can be computed by the same s |
https://en.wikipedia.org/wiki/Malaise%20trap | A Malaise trap is a large, tent-like structure used for trapping, killing, and preserving flying insects, particularly Hymenoptera and Diptera. The trap is made of a material such as PET (polyester) netting and can be various colours. Insects fly into the tent wall and are funneled into a collecting vessel attached to its highest point. It was invented by René Malaise in 1934.
Structure
Many versions of the Malaise trap are used, but the basic structure consists of a tent with a large opening at the bottom for insects to fly into and a tall central wall that directs the flying insects upward to a cylinder containing a killing agent. The chemical varies according to purpose and access. Conventionally, cyanide was used inside the jar with an absorbent material.
However, due to restrictions, many people use ethanol. Ethanol damages some flying insects such as lepidopterans, but most people use the malaise trap primarily for hymenopterans and dipterans. In addition, the ethanol keeps the specimens preserved for a longer period of time. Other dry killing agents including no-pest strips (dichlorvos) and ethyl acetate need to be checked more regularly.
Design details
Cylinder
When choosing a Malaise trap design, the types of insects to catch must be considered. The opening to the cylinder is of key importance. Typically, the opening is around , and can vary according to the size of insect desired. If using a dry agent, a smaller hole results in a faster death, limiting the amount of damage a newly caught insect can inflict on older, fragile specimens. In ethanol, this is less of a concern. Larger holes potentially allow in more butterflies, moths, and dragonflies.
Location
Placement of the trap is very important. It should be positioned to maximize the number of flying insects that pass through the opening. This is determined by the natural features of the site. One should evaluate topography, vegetation, wind, and water. For example, if a wide corridor in a |
https://en.wikipedia.org/wiki/Maximum%20entropy%20probability%20distribution | In statistics and information theory, a maximum entropy probability distribution has entropy that is at least as great as that of all other members of a specified class of probability distributions. According to the principle of maximum entropy, if nothing is known about a distribution except that it belongs to a certain class (usually defined in terms of specified properties or measures), then the distribution with the largest entropy should be chosen as the least-informative default. The motivation is twofold: first, maximizing entropy minimizes the amount of prior information built into the distribution; second, many physical systems tend to move towards maximal entropy configurations over time.
Definition of entropy and differential entropy
If X is a discrete random variable with distribution given by
Pr(X = x_k) = p_k for k = 1, 2, …
then the entropy of X is defined as
H(X) = −∑_k p_k log p_k.
If X is a continuous random variable with probability density p(x), then the differential entropy of X is defined as
H(X) = −∫ p(x) log p(x) dx.
The quantity p(x) log p(x) is understood to be zero whenever p(x) = 0.
This is a special case of more general forms described in the articles Entropy (information theory), Principle of maximum entropy, and differential entropy. In connection with maximum entropy distributions, this is the only one needed, because maximizing H(X) will also maximize the more general forms.
The base of the logarithm is not important as long as the same one is used consistently: change of base merely results in a rescaling of the entropy. Information theorists may prefer to use base 2 in order to express the entropy in bits; mathematicians and physicists will often prefer the natural logarithm, resulting in a unit of nats for the entropy.
The choice of the measure is however crucial in determining the entropy and the resulting maximum entropy distribution, even though the usual recourse to the Lebesgue measure is often defended as "natural".
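For instance, a short numeric sketch (with made-up distributions) shows both the effect of the logarithm base and the fact that, on a fixed finite support, the uniform distribution has the largest entropy:
import math

def entropy(p, base=math.e):
    # Shannon entropy of a discrete distribution given as a list of probabilities.
    return -sum(x * math.log(x, base) for x in p if x > 0)

uniform = [0.25] * 4
skewed = [0.7, 0.1, 0.1, 0.1]
print(entropy(uniform, 2), entropy(skewed, 2))  # 2.0 bits vs about 1.357 bits
print(entropy(uniform))                         # about 1.386 nats, i.e. ln 4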
Distributions with measured constants
Many statistical distributions of applicable interest are those for which the |
https://en.wikipedia.org/wiki/ALGOL%2068C | ALGOL 68C is an imperative computer programming language, a dialect of ALGOL 68, that was developed by Stephen R. Bourne and Michael Guy to program the Cambridge Algebra System (CAMAL). The initial compiler was written in the Princeton Syntax Compiler (PSYCO, by Edgar T. Irons) that was implemented by J. H. Mathewman at Cambridge.
ALGOL 68C was later used for the CHAOS OS for the capability-based security CAP computer at University of Cambridge in 1971. Other early contributors were Andrew D. Birrell and Ian Walker.
Subsequent work was done on the compiler after Bourne left Cambridge University in 1975. Garbage collection was added, and the code base is still running on an emulated OS/MVT using Hercules.
The ALGOL 68C compiler generated output in ZCODE, a register-based intermediate language, which could then be either interpreted or compiled to a native executable. This ability to interpret or compile ZCODE encouraged the porting of ALGOL 68C to many different computing platforms. Aside from the CAP computer, the compiler was ported to systems including Conversational Monitor System (CMS), TOPS-10, and Zilog Z80.
Popular culture
A very early predecessor of this compiler was used by Guy and Bourne to write the first Game of Life programs on the PDP-7 with a DEC 340 display.
Various Liverpool Software Gazette issues detail the Z80 implementation. The compiler required about 120 KB of memory to run; hence the Z80's 64 KB memory is actually too small to run the compiler. So ALGOL 68C programs for the Z80 had to be cross-compiled from the larger CAP computer, or an IBM System/370 mainframe computer.
Algol 68C and Unix
Stephen Bourne subsequently reused ALGOL 68's if ~ then ~ else ~ fi, case ~ in ~ out ~ esac and for ~ while ~ do ~ od clauses in the common Unix Bourne shell, but with in's syntax changed, out removed, and od replaced with done (to avoid conflict with the od utility).
After Cambridge, Bourne spent nine years at Bell Labs with the Version 7 Unix (Se |
https://en.wikipedia.org/wiki/Digital%20distribution | Digital distribution, also referred to as content delivery, online distribution, or electronic software distribution, among others, is the delivery or distribution of digital media content such as audio, video, e-books, video games, and other software.
The term is generally used to describe distribution over an online delivery medium, such as the Internet, thus bypassing physical distribution methods, such as paper, optical discs, and VHS videocassettes. The term online distribution is typically applied to freestanding products; downloadable add-ons for other products are more commonly known as downloadable content. With the advancement of network bandwidth capabilities, online distribution became prominent in the 21st century, with platforms such as Amazon Video and Netflix's streaming service, which launched in 2007.
Content distributed online may be streamed or downloaded, and often consists of books, films and television programs, music, software, and video games. Streaming involves downloading and using content at a user's request, or "on-demand", rather than allowing a user to store it permanently. In contrast, fully downloading content to a hard drive or other forms of storage media may allow offline access in the future.
Specialist networks known as content delivery networks help distribute content over the Internet by ensuring both high availability and high performance. Alternative technologies for content delivery include peer-to-peer file sharing technologies. Alternatively, content delivery platforms create and syndicate content remotely, acting like hosted content management systems.
Unrelated to the above, the term "digital distribution" is also used in film distribution to describe the distribution of content through physical digital media, in opposition to distribution by analog media such as photographic film and magnetic tape (see: digital cinema).
Impact on traditional retail
The rise of online distribution has provided controversy for t |
https://en.wikipedia.org/wiki/Sun%20Ray | The Sun Ray was a stateless thin client computer (and associated software) aimed at corporate environments, originally introduced by Sun Microsystems in September 1999 and discontinued by Oracle Corporation in 2014. It featured a smart card reader and several models featured an integrated flat panel display.
The idea of a stateless desktop was a significant shift from, and the eventual successor to, Sun's earlier line of diskless Java-only desktops, the JavaStation.
Predecessor
The concept began in Sun Microsystems Laboratories in 1997 as a project codenamed NetWorkTerminal (NeWT). The client was designed to be small, low cost, low power, and silent. It was based on the Sun Microelectronics MicroSPARC IIep. Other processors initially considered for it included Intel's StrongARM, Philips Semiconductors' TriMedia, and National Semiconductor's Geode. The MicroSPARC IIep was selected because of its high level of integration, good performance, low cost, and availability.
NeWT included 8 MiB of EDO DRAM and 4 MiB of NOR flash. The graphics controller used was the ATI Rage 128 because of its low power, 2D rendering performance, and low cost. It also included an ATI video encoder for TV-out (removed in the Sun Ray 1), a Philips Semiconductor SAA7114 video decoder/scaler, Crystal Semiconductor audio CODEC, Sun Microelectronics Ethernet controller, PCI USB host interface with 4 port hub, and I²C smart card interface. The motherboard and daughtercard were housed in an off-the-shelf commercial small form-factor PC case with internal +12/+5VDC auto ranging power supply.
NeWT was designed to have feature parity with a modern business PC in every way possible. Instead of a commercial operating system, the client ran a real-time operating system called "exec", which was originally developed in Sun Labs as part of an Ethernet-based security camera project codenamed NetCam. Fewer than 60 NeWTs were ever built and very few survived; one is in the collection of the Computer Histo
https://en.wikipedia.org/wiki/Golden-section%20search | The golden-section search is a technique for finding an extremum (minimum or maximum) of a function inside a specified interval. For a strictly unimodal function with an extremum inside the interval, it will find that extremum, while for an interval containing multiple extrema (possibly including the interval boundaries), it will converge to one of them. If the only extremum on the interval is on a boundary of the interval, it will converge to that boundary point. The method operates by successively narrowing the range of values on the specified interval, which makes it relatively slow, but very robust. The technique derives its name from the fact that the algorithm maintains the function values for four points whose three interval widths are in the ratio φ:1:φ where φ is the golden ratio. These ratios are maintained for each iteration and are maximally efficient. Excepting boundary points, when searching for a minimum, the central point is always less than or equal to the outer points, assuring that a minimum is contained between the outer points. The converse is true when searching for a maximum. The algorithm is the limit of Fibonacci search (also described below) for many function evaluations. Fibonacci search and golden-section search were discovered by Kiefer (1953) (see also Avriel and Wilde (1966)).
Basic idea
The discussion here is posed in terms of searching for a minimum (searching for a maximum is similar) of a unimodal function. Unlike finding a zero, where two function evaluations with opposite sign are sufficient to bracket a root, when searching for a minimum, three values are necessary. The golden-section search is an efficient way to progressively reduce the interval locating the minimum. The key is to observe that regardless of how many points have been evaluated, the minimum lies within the interval defined by the two points adjacent to the point with the least value so far evaluated.
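A minimal sketch of the search for a minimum (function and tolerance are illustrative; this simple variant re-evaluates both interior points on each pass, whereas the classic scheme reuses one of them):
import math

def golden_section_search(f, a, b, tol=1e-8):
    # Keep four points a < c < d < b whose spacing follows the golden ratio.
    invphi = (math.sqrt(5) - 1) / 2   # 1/phi, about 0.618
    while abs(b - a) > tol:
        c = b - invphi * (b - a)
        d = a + invphi * (b - a)
        if f(c) < f(d):
            b = d                     # the minimum lies in [a, d]
        else:
            a = c                     # the minimum lies in [c, b]
    return (a + b) / 2

print(golden_section_search(lambda x: (x - 2) ** 2, 0, 5))  # about 2.0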
The diagram above illustrates a single step in the techniqu |
https://en.wikipedia.org/wiki/Pixel-art%20scaling%20algorithms | Pixel-art scaling algorithms are graphical filters that enhance hand-drawn 2D pixel art graphics. The re-scaling of pixel art is a specialist sub-field of image rescaling.
As pixel-art graphics are usually in very low resolutions, they rely on careful placing of individual pixels, often with a limited palette of colors. This results in graphics that rely on a high amount of stylized visual cues to define complex shapes with very little resolution, down to individual pixels and making image scaling of pixel art a particularly difficult problem.
A number of specialized algorithms have been developed to handle pixel-art graphics, as the traditional scaling algorithms do not take such perceptual cues into account.
Since a typical application of this technology is improving the appearance of fourth-generation and earlier video games on arcade and console emulators, many are designed to run in real time for sufficiently small input images at 60 frames per second. This places constraints on the type of programming techniques that can be used for this sort of real-time processing. Many work only on specific scale factors: 2× is the most common, with 3×, 4×, 5× and 6× also present.
Algorithms
SAA5050 'Diagonal Smoothing'
The Mullard SAA5050 Teletext character generator chip (1980) used a primitive pixel scaling algorithm to generate higher-resolution characters on screen from a lower-resolution representation from its internal ROM. Internally each character shape was defined on a 5×9 pixel grid, which was then interpolated by smoothing diagonals to give a 10×18 pixel character, with a characteristically angular shape, surrounded to the top and to the left by two pixels of blank space. The algorithm only works on monochrome source data, and assumes the source pixels will be logical true or false depending on whether they are 'on' or 'off'. Pixels 'outside the grid pattern' are assumed to be off.
The algorithm works as follows:
A B C --\ 1 2
D E F --/ 3 4
1 = B | (A & |
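As a loose illustration of the general idea (and only that; the boolean expressions of the real chip are not reproduced here), a diagonal-smoothing doubler for a monochrome bitmap can be sketched in Python: each source pixel becomes a 2×2 block, and an off corner sub-pixel is lit when its row and column neighbours are both on while the diagonal between them is off.
def smooth_double(grid):
    # Illustrative diagonal smoothing for a 1-bit bitmap (list of 0/1 rows);
    # NOT the exact SAA5050 boolean equations.
    h, w = len(grid), len(grid[0])
    def get(r, c):
        return grid[r][c] if 0 <= r < h and 0 <= c < w else 0  # off outside grid
    out = [[0] * (2 * w) for _ in range(2 * h)]
    for r in range(h):
        for c in range(w):
            e = get(r, c)
            # A corner of the 2x2 block lights up if the source pixel is on, or
            # if its row and column neighbours form a stair-step diagonal.
            out[2*r][2*c]     = 1 if e or (get(r-1, c) and get(r, c-1) and not get(r-1, c-1)) else 0
            out[2*r][2*c+1]   = 1 if e or (get(r-1, c) and get(r, c+1) and not get(r-1, c+1)) else 0
            out[2*r+1][2*c]   = 1 if e or (get(r+1, c) and get(r, c-1) and not get(r+1, c-1)) else 0
            out[2*r+1][2*c+1] = 1 if e or (get(r+1, c) and get(r, c+1) and not get(r+1, c+1)) else 0
    return out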
https://en.wikipedia.org/wiki/Frobenius%20algebra | In mathematics, especially in the fields of representation theory and module theory, a Frobenius algebra is a finite-dimensional unital associative algebra with a special kind of bilinear form which gives the algebras particularly nice duality theories. Frobenius algebras began to be studied in the 1930s by Richard Brauer and Cecil Nesbitt and were named after Georg Frobenius. Tadashi Nakayama discovered the beginnings of a rich duality theory , . Jean Dieudonné used this to characterize Frobenius algebras . Frobenius algebras were generalized to quasi-Frobenius rings, those Noetherian rings whose right regular representation is injective. In recent times, interest has been renewed in Frobenius algebras due to connections to topological quantum field theory.
Definition
A finite-dimensional, unital, associative algebra A defined over a field k is said to be a Frobenius algebra if A is equipped with a nondegenerate bilinear form σ : A × A → k that satisfies the following equation: σ(a·b, c) = σ(a, b·c). This bilinear form is called the Frobenius form of the algebra.
Equivalently, one may equip A with a linear functional λ : A → k such that the kernel of λ contains no nonzero left ideal of A.
A Frobenius algebra is called symmetric if σ is symmetric, or equivalently λ satisfies λ(a·b) = λ(b·a).
There is also a different, mostly unrelated notion of the symmetric algebra of a vector space.
Nakayama automorphism
For a Frobenius algebra A with σ as above, the automorphism ν of A such that σ(a, b) = σ(b, ν(a)) is the Nakayama automorphism associated to A and σ.
Examples
Any matrix algebra defined over a field k is a Frobenius algebra with Frobenius form σ(a,b)=tr(a·b) where tr denotes the trace.
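For instance, the defining identity for the trace form on 2×2 matrices can be checked numerically (a quick sketch with randomly chosen integer matrices; it holds because the matrix product is associative inside the trace):
import numpy as np

rng = np.random.default_rng(0)
a, b, c = (rng.integers(-3, 4, (2, 2)) for _ in range(3))

sigma = lambda x, y: np.trace(x @ y)       # Frobenius form on 2x2 matrices
print(sigma(a @ b, c) == sigma(a, b @ c))  # True: sigma(ab, c) = sigma(a, bc)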
Any finite-dimensional unital associative algebra A has a natural homomorphism to its own endomorphism ring End(A). A bilinear form can be defined on A in the sense of the previous example. If this bilinear form is nondegenerate, then it equips A with the structure of a Frobenius algebra.
Every group ring k[G] of a finite group G over a field k is a s |
https://en.wikipedia.org/wiki/Butterfat | Butterfat or milkfat is the fatty portion of milk. Milk and cream are often sold according to the amount of butterfat they contain.
Composition
Butterfat is mainly composed of triglycerides. Each triglyceride contains three fatty acids. Butterfat triglycerides contain the following amounts of fatty acids (by mass fraction):
Butterfat contains about 3% trans fat, which is slightly less than 0.5 grams per US tablespoon. Trans fats occur naturally in meat and milk from ruminants. The predominant kind of trans fat found in milk is vaccenic fatty acid. Trans fats may also be found in some industrially produced foods, such as shortenings obtained by hydrogenation of vegetable oils. In light of recognized scientific evidence, nutritional authorities consider all trans fats equally harmful for health and recommend that their consumption be reduced to trace amounts.
However, two Canadian studies have shown that vaccenic acid could be beneficial compared to vegetable shortenings containing trans fats, or a mixture of pork lard and soy fat, by lowering total LDL and triglyceride levels. A study by the US Department of Agriculture showed that vaccenic acid raises both HDL and LDL cholesterol, whereas industrial trans fats only raise LDL with no beneficial effect on HDL.
U.S. standards
In the U.S., there are federal standards for butterfat content of dairy products. Many other countries also have standards for minimum fat levels in dairy products. Commercial products generally contain the minimum legal amount of fat with any excess being removed to make cream, a valuable commodity.
Milks
Non-fat milk, also labeled "fat-free milk" or "skim milk", contains less than 0.5% fat
Low-fat milk is 1% fat
Reduced-fat milk is 2% fat
Whole milk contains at least 3.25% fat
Cheeses
Dry curd and nonfat cottage cheese contain less than 0.5% fat
Lowfat cottage cheese contains 0.5–2% fat
Cottage cheese contains at least 4% fat
Swiss cheese contains at least 43% fat relative t |
https://en.wikipedia.org/wiki/Coherent%20duality | In mathematics, coherent duality is any of a number of generalisations of Serre duality, applying to coherent sheaves, in algebraic geometry and complex manifold theory, as well as some aspects of commutative algebra that are part of the 'local' theory.
The historical roots of the theory lie in the idea of the adjoint linear system of a linear system of divisors in classical algebraic geometry. This was re-expressed, with the advent of sheaf theory, in a way that made an analogy with Poincaré duality more apparent. Then according to a general principle, Grothendieck's relative point of view, the theory of Jean-Pierre Serre was extended to a proper morphism; Serre duality was recovered as the case of the morphism of a non-singular projective variety (or complete variety) to a point. The resulting theory is now sometimes called Serre–Grothendieck–Verdier duality, and is a basic tool in algebraic geometry. A treatment of this theory, Residues and Duality (1966) by Robin Hartshorne, became a reference. One concrete spin-off was the Grothendieck residue.
To go beyond proper morphisms, as for the versions of Poincaré duality that are not for closed manifolds, requires some version of the compact support concept. This was addressed in SGA2 in terms of local cohomology, and Grothendieck local duality; and subsequently. The Greenlees–May duality, first formulated in 1976 by Ralf Strebel and in 1978 by Eben Matlis, is part of the continuing consideration of this area.
Adjoint functor point of view
While Serre duality uses a line bundle or invertible sheaf as a dualizing sheaf, the general theory (it turns out) cannot be quite so simple. (More precisely, it can, but at the cost of imposing the Gorenstein ring condition.) In a characteristic turn, Grothendieck reformulated general coherent duality as the existence of a right adjoint functor f^!, called the twisted or exceptional inverse image functor, to a higher direct image with compact support functor Rf_!.
Higher direct images a |
https://en.wikipedia.org/wiki/Finalizer | In computer science, a finalizer or finalize method is a special method that performs finalization, generally some form of cleanup. A finalizer is executed during object destruction, prior to the object being deallocated, and is complementary to an initializer, which is executed during object creation, following allocation. Finalizers are strongly discouraged by some, due to difficulty in proper use and the complexity they add, and alternatives are suggested instead, mainly the dispose pattern (see problems with finalizers).
The term finalizer is mostly used in object-oriented and functional programming languages that use garbage collection, of which the archetype is Smalltalk. This is contrasted with a destructor, which is a method called for finalization in languages with deterministic object lifetimes, archetypically C++. These are generally exclusive: a language will have either finalizers (if automatically garbage collected) or destructors (if manually memory managed), but in rare cases a language may have both, as in C++/CLI and D, and in case of reference counting (instead of tracing garbage collection), terminology varies. In technical use, finalizer may also be used to refer to destructors, as these also perform finalization, and some subtler distinctions are drawn – see terminology. The term final also indicates a class that cannot be inherited; this is unrelated.
Terminology
The terminology of finalizer and finalization versus destructor and destruction varies between authors and is sometimes unclear.
In common use, a destructor is a method called deterministically on object destruction, and the archetype is C++ destructors; while a finalizer is called non-deterministically by the garbage collector, and the archetype is Java finalize methods.
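A small Python illustration of the contrast (the class and message are invented for the example): weakref.finalize registers a finalizer that runs when the object is collected rather than at a syntactically fixed point.
import weakref

class Resource:
    pass

r = Resource()
# The finalizer fires when r is garbage-collected (or at interpreter
# exit), not at a deterministic point chosen in the source code.
weakref.finalize(r, print, "resource finalized")

del r  # CPython's reference counting happens to run it immediately;
       # under a tracing collector the timing would be unpredictable.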
For languages that implement garbage collection via reference counting, terminology varies, with some languages such as Objective-C and Perl using destructor, and other languages such as Python using finalizer (p |
https://en.wikipedia.org/wiki/Joseph%20Engelberger | Joseph Frederick Engelberger (July 26, 1925 – December 1, 2015) was an American physicist, engineer and entrepreneur. Licensing the original patent awarded to inventor George Devol, Engelberger developed the first industrial robot in the United States, the Unimate, in the 1950s. Later, he worked as entrepreneur and vocal advocate of robotic technology beyond the manufacturing plant in a variety of fields, including service industries, health care, and space exploration.
Biography
Early life and education
Joseph Frederick Engelberger was born on July 26, 1925, in Brooklyn, New York. He grew up in Connecticut during the Great Depression, but later returned to New York City for his college education.
Engelberger received his B.S. in physics in 1946, and M.S. in Electrical Engineering in 1949 from Columbia University. He worked as an engineer with Manning, Maxwell and Moore, where he met inventor George Devol at a Westport cocktail party in 1956, two years after Devol had designed and patented a rudimentary industrial robotic arm. However, Manning, Maxwell and Moore was sold and Engelberger's division was closed that year.
Unimation
Finding himself jobless but with a business partner and an idea, Engelberger co-founded Unimation with Devol, creating the world's first robotics company. In 1957, he also founded Consolidated Controls Corporation. As president of Unimation, Engelberger collaborated with Devol to engineer and produce an industrial robot under the brand name Unimate. The first Unimate robotic arm was installed at a General Motors Plant in Ewing Township, New Jersey, in 1961.
The introduction of robotics to the manufacturing process effectively transformed the automotive industry, with Chrysler and the Ford Motor Company soon following General Motors' lead and installing Unimates in their manufacturing facilities. The rapid adoption of the technology also provided Unimation with a working business model: after selling the first Unimate at a $35,000 loss, |
https://en.wikipedia.org/wiki/Target%20costing | Target costing is an approach to determine a product's life-cycle cost which should be sufficient to develop specified functionality and quality, while ensuring its desired profit. It involves setting a target cost by subtracting a desired profit margin from a competitive market price. A target cost is the maximum amount of cost that can be incurred on a product, however, the firm can still earn the required profit margin from that product at a particular selling price. Target costing decomposes the target cost from product level to component level. Through this decomposition, target costing spreads the competitive pressure faced by the company to product's designers and suppliers. Target costing consists of cost planning in the design phase of production as well as cost control throughout the resulting product life cycle. The cardinal rule of target costing is to never exceed the target cost. However, the focus of target costing is not to minimize costs, but to achieve a desired level of cost reduction determined by the target costing process.
Definition
Target costing is defined as "a disciplined process for determining and achieving a full-stream cost at which a proposed product with specified functionality, performance, and quality must be produced in order to generate the desired profitability at the product’s anticipated selling price over a specified period of time in the future." This definition encompasses the principal concepts: products should be based on an accurate assessment of the wants and needs of customers in different market segments, and cost targets should be what result after a sustainable profit margin is subtracted from what customers are willing to pay at the time of product introduction and afterwards.
The fundamental objective of target costing is to manage the business to be profitable in a highly competitive marketplace. In effect, target costing is a proactive cost planning, cost management, and cost reduction practice whereby costs |
https://en.wikipedia.org/wiki/Level%20of%20invention | Level of invention (or degree of inventiveness, or level of solution, or rank of solution, or rank of invention) is a relative degree of changes to the previous system (or solution) in the result of solution of inventive problem (one containing a contradiction). Term was defined and introduced by TRIZ author G. S. Altshuller.
After initially reviewing 200,000 patent abstracts, Altshuller selected 40,000 as representatives of high level inventive solutions. The remainder involved direct improvements easily recognized within the specialty of the system.
Altshuller separated the patents' different degrees of inventiveness into five levels:
Level 1 – Routine design problems solved by methods well known within the specialty. Usually no invention needed.
example: use of coal for writing
Level 2 – Minor improvements to an existing system using methods known within the industry.
example: graphite pencil (wrapped coal stick)
Level 3 – Fundamental improvement to an existing system using methods known outside the industry.
example: ink pen (ink instead of coal)
Level 4 – A new generation of a system that entails a new principle for performing the system's primary functions. Solutions are found more often in science than technology.
example: printer (another whole system for writing)
Level 5 – A rare scientific discovery or pioneering invention of an essentially new system.
example: electronic pen&paper (see Anoto)
These levels of invention are applied to solutions rather than problems requiring a system of solution.
The level of invention and the potential for innovation in any nation, geographical area, or economic activity is also used as a measurement in the concept of innovative capacity, originally introduced by Prof. Suarez-Villa in 1990.
See also
Inventive step and non-obviousness
Novelty (patent)
References
External links
Levels of Solutions (with examples) By Kalevi Rantanen
Introduction to Basic I-TRIZ Levels of Invention
Maturity Mapping of DVD Technology By S |
https://en.wikipedia.org/wiki/Differential%20algebra | In mathematics, differential algebra is, broadly speaking, the area of mathematics consisting in the study of differential equations and differential operators as algebraic objects in view of deriving properties of differential equations and operators without computing the solutions, similarly as polynomial algebras are used for the study of algebraic varieties, which are solution sets of systems of polynomial equations. Weyl algebras and Lie algebras may be considered as belonging to differential algebra.
More specifically, differential algebra refers to the theory introduced by Joseph Ritt in 1950, in which differential rings, differential fields, and differential algebras are rings, fields, and algebras equipped with finitely many derivations.
A natural example of a differential field is the field C(t) of rational functions in one variable over the complex numbers, where the derivation is differentiation with respect to t. More generally, every differential equation may be viewed as an element of a differential algebra over the differential field generated by the (known) functions appearing in the equation.
History
Joseph Ritt developed differential algebra because he viewed attempts to reduce systems of differential equations to various canonical forms as an unsatisfactory approach. However, the success of algebraic elimination methods and algebraic manifold theory motivated Ritt to consider a similar approach for differential equations. His efforts led to an initial paper, Manifolds Of Functions Defined By Systems Of Algebraic Differential Equations, and two books, Differential Equations From The Algebraic Standpoint and Differential Algebra. Ellis Kolchin, Ritt's student, advanced this field and published Differential Algebra And Algebraic Groups.
Differential rings
Definition
A derivation ∂ on a ring R is a function ∂ : R → R
such that
∂(r₁ + r₂) = ∂(r₁) + ∂(r₂)
and
∂(r₁r₂) = ∂(r₁)r₂ + r₁∂(r₂) (Leibniz product rule),
for every r₁ and r₂ in R.
A derivation is linear over the integers since these identities imply ∂(0) = ∂(1) = 0 and ∂(−r) = −∂(r).
A d |
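As a concrete sketch of these axioms (the coefficient-list encoding and helper names are invented here), take R to be polynomials in x with ∂ the formal derivative, and check the Leibniz rule mechanically:
# Polynomials as coefficient lists: [a0, a1, a2] stands for a0 + a1*x + a2*x^2.
def d(p):                        # the derivation: formal d/dx
    return [i * c for i, c in enumerate(p)][1:]

def mul(p, q):                   # polynomial product
    out = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

def add(p, q):                   # polynomial sum (pad to equal length)
    n = max(len(p), len(q))
    p, q = p + [0] * (n - len(p)), q + [0] * (n - len(q))
    return [a + b for a, b in zip(p, q)]

p, q = [1, 2, 3], [0, 1]         # 1 + 2x + 3x^2  and  x
print(d(mul(p, q)) == add(mul(d(p), q), mul(p, d(q))))  # True: Leibniz rule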
https://en.wikipedia.org/wiki/Grothendieck%27s%20relative%20point%20of%20view | Grothendieck's relative point of view is a heuristic applied in certain abstract mathematical situations, with a rough meaning of taking for consideration families of 'objects' explicitly depending on parameters, as the basic field of study, rather than a single such object. It is named after Alexander Grothendieck, who made extensive use of it in treating foundational aspects of algebraic geometry. Outside that field, it has been influential particularly on category theory and categorical logic.
In the usual formulation, the point of view treats, not objects X of a given category C, but morphisms
f: X → S
where S is a fixed object. This idea is made formal in the idea of the slice category of objects of C 'above' S. To move from one slice to another requires a base change; from a technical point of view base change becomes a major issue for the whole approach (see for example Beck–Chevalley conditions).
A base change 'along' a given morphism
g: T → S
is typically given by the fiber product, producing an object over T from one over S. The 'fiber' terminology is significant: the underlying heuristic is that X over S is a family of fibers, one for each 'point' of S; the fiber product is then the family on T whose fiber at each point of T is the fiber at that point's image in S. This set-theoretic language is too naïve to fit the required context, certainly, from algebraic geometry. It combines, though, with the use of the Yoneda lemma to replace the 'point' idea with that of treating an object, such as S, as 'as good as' the representable functor it sets up.
The Grothendieck–Riemann–Roch theorem from about 1956 is usually cited as the key moment for the introduction of this circle of ideas. The more classical types of Riemann–Roch theorem are recovered in the case where S is a single point (i.e. the final object in the working category C). Using other S is a way to have versions of theorems 'with parameters', i.e. allowing for continuous variation, for |
https://en.wikipedia.org/wiki/Thabit%20number | In number theory, a Thabit number, Thâbit ibn Qurra number, or 321 number is an integer of the form for a non-negative integer n.
The first few Thabit numbers are:
2, 5, 11, 23, 47, 95, 191, 383, 767, 1535, 3071, 6143, 12287, 24575, 49151, 98303, 196607, 393215, 786431, 1572863, ...
The 9th century mathematician, physician, astronomer and translator Thābit ibn Qurra is credited as the first to study these numbers and their relation to amicable numbers.
Properties
The binary representation of the Thabit number 3·2^n − 1 is n+2 digits long, consisting of "10" followed by n ones.
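Both the formula and the binary pattern are easy to check (a quick sketch):
for n in range(6):
    t = 3 * 2**n - 1
    print(n, t, bin(t)[2:])  # e.g. n=3 gives 23 -> '10111': "10" then three ones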
The first few Thabit numbers that are prime (Thabit primes or 321 primes):
2, 5, 11, 23, 47, 191, 383, 6143, 786431, 51539607551, 824633720831, ...
There are 67 known prime Thabit numbers. Their n values are:
0, 1, 2, 3, 4, 6, 7, 11, 18, 34, 38, 43, 55, 64, 76, 94, 103, 143, 206, 216, 306, 324, 391, 458, 470, 827, 1274, 3276, 4204, 5134, 7559, 12676, 14898, 18123, 18819, 25690, 26459, 41628, 51387, 71783, 80330, 85687, 88171, 97063, 123630, 155930, 164987, 234760, 414840, 584995, 702038, 727699, 992700, 1201046, 1232255, 2312734, 3136255, 4235414, 6090515, 11484018, 11731850, 11895718, 16819291, 17748034, 18196595, 18924988, 20928756, ...
The primes for 234760 ≤ n ≤ 3136255 were found by the distributed computing project 321 search.
In 2008, PrimeGrid took over the search for Thabit primes. It is still searching and has already found all currently known Thabit primes with n ≥ 4235414. It is also searching for primes of the form 3·2^n + 1; such primes are called Thabit primes of the second kind or 321 primes of the second kind.
The first few Thabit numbers of the second kind are:
4, 7, 13, 25, 49, 97, 193, 385, 769, 1537, 3073, 6145, 12289, 24577, 49153, 98305, 196609, 393217, 786433, 1572865, ...
The first few Thabit primes of the second kind are:
7, 13, 97, 193, 769, 12289, 786433, 3221225473, 206158430209, 6597069766657, 221360928884514619393, ...
Their n values are:
1, 2, |
https://en.wikipedia.org/wiki/Hamiltonian%20vector%20field | In mathematics and physics, a Hamiltonian vector field on a symplectic manifold is a vector field defined for any energy function or Hamiltonian. Named after the physicist and mathematician Sir William Rowan Hamilton, a Hamiltonian vector field is a geometric manifestation of Hamilton's equations in classical mechanics. The integral curves of a Hamiltonian vector field represent solutions to the equations of motion in the Hamiltonian form. The diffeomorphisms of a symplectic manifold arising from the flow of a Hamiltonian vector field are known as canonical transformations in physics and (Hamiltonian) symplectomorphisms in mathematics.
Hamiltonian vector fields can be defined more generally on an arbitrary Poisson manifold. The Lie bracket of two Hamiltonian vector fields corresponding to functions f and g on the manifold is itself a Hamiltonian vector field, with the Hamiltonian given by the
Poisson bracket of f and g.
Definition
Suppose that (M, ω) is a symplectic manifold. Since the symplectic form ω is nondegenerate, it sets up a fiberwise-linear isomorphism
ω♭ : TM → T*M
between the tangent bundle TM and the cotangent bundle T*M, with the inverse
ω♯ : T*M → TM.
Therefore, one-forms on a symplectic manifold may be identified with vector fields, and every differentiable function H : M → R determines a unique vector field X_H, called the Hamiltonian vector field with the Hamiltonian H, by defining for every vector field Y on M,
dH(Y) = ω(X_H, Y).
Note: Some authors define the Hamiltonian vector field with the opposite sign. One has to be mindful of varying conventions in physical and mathematical literature.
Examples
Suppose that M is a 2n-dimensional symplectic manifold. Then locally, one may choose canonical coordinates (q¹, …, qⁿ, p₁, …, pₙ) on M, in which the symplectic form is expressed as:
ω = ∑ᵢ dqⁱ ∧ dpᵢ,
where d denotes the exterior derivative and ∧ denotes the exterior product. Then the Hamiltonian vector field with Hamiltonian H takes the form:
X_H = (∂H/∂pᵢ, −∂H/∂qᵢ) = Ω dH,
where Ω is a 2n × 2n square matrix
Ω = [ 0 Iₙ ; −Iₙ 0 ],
and Iₙ is the n × n identity matrix.
The matrix Ω is frequently denoted with J.
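In these coordinates the components of X_H are just (∂H/∂p, −∂H/∂q); a symbolic sketch for one degree of freedom (using sympy, with the harmonic oscillator as an illustrative Hamiltonian):
import sympy as sp

q, p = sp.symbols('q p')
H = (p**2 + q**2) / 2                  # harmonic oscillator with m = k = 1
X_H = (sp.diff(H, p), -sp.diff(H, q))  # components (dq/dt, dp/dt)
print(X_H)                             # (p, -q): Hamilton's equations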
Suppose that M = R^(2n) is the 2n-dimens
https://en.wikipedia.org/wiki/Reproductive%20biology | Reproductive biology includes both sexual and asexual reproduction.
Reproductive biology includes a wide number of fields:
Reproductive systems
Endocrinology
Sexual development (Puberty)
Sexual maturity
Reproduction
Fertility
Human reproductive biology
Endocrinology
Human reproductive biology is primarily controlled through hormones, which send signals to the human reproductive structures to influence growth and maturation. These hormones are secreted by endocrine glands, and spread to different tissues in the human body. In humans, the pituitary gland synthesizes hormones used to control the activity of endocrine glands.
Reproductive systems
Internal and external organs are included in the reproductive system. There are two reproductive systems including the male and female, which contain different organs from one another. These systems work together in order to produce offspring.
Female reproductive system
The female reproductive system includes the structures involved in ovulation, fertilization, development of an embryo, and birth.
These structures include:
Ovaries
Oviducts
Uterus
Vagina
Mammary Glands
Estrogen is one of the sexual reproductive hormones that aid in the sexual reproductive system of the female.
Male reproductive system
The male reproductive system includes testes, rete testis, efferent ductules, epididymis, sex accessory glands, sex accessory ducts and external genitalia.
Testosterone, an androgen, although present in both males and females, is relatively more abundant in males. Testosterone serves as one of the major sexual reproductive hormones in the male reproductive system. However, the enzyme aromatase is present in testes and capable of synthesizing estrogens from androgens. Estrogens are present in high concentrations in luminal fluids of the male reproductive tract. Androgen and estrogen receptors are abundant in epithelial cells of the male reproductive tract.
Animal Reproductive Biology
Animal reproduction oc |
https://en.wikipedia.org/wiki/WLIW%20%28TV%29 | WLIW (channel 21) is a secondary PBS member television station licensed to Garden City, New York, United States, serving the New York City television market. It is owned by The WNET Group alongside the area's primary PBS member, Newark, New Jersey–licensed WNET (channel 13); two Class A stations which share spectrum with WNET, WNDT-CD (channel 14) and WMBQ-CD (channel 46); and WLIW-FM (88.3) in Southampton. Through an outsourcing agreement, The WNET Group also operates New Jersey's PBS state network NJ PBS and the website NJ Spotlight.
WLIW and WNET share studios at One Worldwide Plaza in Midtown Manhattan with an auxiliary street-level studio in the Lincoln Center complex on Manhattan's Upper West Side. WLIW's transmitter is located at One World Trade Center; the station also maintains a production studio at its former transmitter site in Plainview, New York. WLIW's multiplex is New York's high-power ATSC 3.0 (NextGen TV) television station and also broadcasts WMBQ-CD.
WLIW was established in 1969 as the first television station on Long Island. Originally operated on a tight budget, the station had no permanent studio facilities for nearly a decade. In the 1980s and 1990s, increasing cable television coverage led to the expansion of WLIW into a regional service that was the smaller competitor to WNET, the nation's largest public TV station, and the station increased its own programming efforts. However, some critics felt that this shift deemphasized the station's Long Island identity. In 2003, WLIW and WNET merged, completing an 18-month process. As part of the WNET Group, WLIW maintains a separate vice president and general manager, Diane Masciale, who is in charge of the entire group's locally oriented television production.
History
Early history
The Nassau County Board of Supervisors voted on February 14, 1968, to provide funding to set up an educational television station on Long Island, thereby also accessing matching funds from the New York state governme |
https://en.wikipedia.org/wiki/Inner%20loop | In computer programs, an important form of control flow is the loop which causes a block of code to be executed more than once. A common idiom is to have a loop nested inside another loop, with the contained loop being commonly referred to as the inner loop.
Background
Two main types of loop exist, and they can be nested within each other to any depth required. The two types are the for loop and the while loop. Both are slightly different but may be interchanged. Research has shown that the performance of a loop containing an inner loop differs from that of a loop without one. Indeed, even two loops with different types of inner loop, where one is a for loop and the other a while loop, perform differently.
It was observed that more computations are performed per unit time when an inner for loop is involved than otherwise. This implies that, given the same number of computations to perform, the version with an inner for loop will finish faster than the one without it. This is a machine- and platform-independent loop-optimization technique and was observed across several programming languages and compilers or interpreters tested. The case of a while loop as the inner loop performed badly, even slower than a loop without any inner loop in some cases. The two examples below, written in Python, present a while loop with an inner for loop and a while loop without any inner loop. Although both have the same terminating condition for their while loops, the first example will finish faster because of the inner for loop. The variable innermax is a fraction of the max_ticket_no variable in the first example.
def find_with_inner_loop(max_ticket_no, innermax, jackpot_no):
    # The outer while loop advances block by block; the inner for loop
    # scans the innermax tickets inside the current block.
    ticket_no = 0
    while ticket_no * innermax < max_ticket_no:
        for j in range(innermax):
            if ticket_no * innermax + j == jackpot_no:
                return
        ticket_no += 1

def find_without_inner_loop(max_ticket_no, jackpot_no):
    # A single while loop checking one ticket per iteration.
    ticket_no = 0
    while ticket_no < max_ticket_no:
        if ticket_no == jackpot_no:
            return
        ticket_no += 1
References
https://en.wikipedia.org/wiki/139%20%28number%29 | 139 (one hundred [and] thirty-nine) is the natural number following 138 and preceding 140.
In mathematics
139 is the 34th prime number. It is a twin prime with 137. Because 141 is a semiprime, 139 is a Chen prime. 139 is the smallest prime before a prime gap of length 10.
This number is the sum of five consecutive prime numbers (19 + 23 + 29 + 31 + 37).
It is the smallest factor of 64079, which is the smallest Lucas number with prime index that is not prime. It is also the smallest prime factor of one more than the product of the first nine terms of the Euclid–Mullin sequence, making it the tenth term.
139 is a happy number and a strictly non-palindromic number.
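These primality facts are quick to verify (a sketch using sympy):
from sympy import isprime, nextprime, prime

print(prime(34))                   # 139, the 34th prime
print(isprime(137), isprime(139))  # True True: a twin prime pair
print(nextprime(139) - 139)        # 10: the prime gap that follows
print(sum([19, 23, 29, 31, 37]))   # 139: five consecutive primes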
In the military
RUM-139 VL-ASROC is a United States Navy ASROC anti-submarine missile
was a United States Navy Admirable-class minesweeper during World War II
was a United States Navy Haskell-class attack transport during World War II
was a United States Navy destroyer during World War II
was a United States Navy transport ship during World War I and World War II
was a tanker loaned to the Soviet Union during World War II, then returned to the United States in 1944
was a United States Navy cargo ship during World War II
was a United States Navy Des Moines-class heavy cruiser following World War II
was a United States Navy Wickes-class destroyer during World War II
In transportation
British Rail Class 139 is the TOPS classification assigned to the lightweight railcars by West Midlands Trains on the Stourbridge Town Branch Line
Fiat M139 platform is the next-generation premium rear wheel drive automobile platform from Fiat
London Buses route 139 is a Transport for London contracted bus route in London
In other fields
139 is also:
The year AD 139 or 139 BC
139 AH is a year in the Islamic calendar that corresponds to 756 – 757 CE.
139 Juewa is a large and dark main belt asteroid discovered in 1874
The atomic number of untriennium, an unsynthesized chemical element
Gull Lake No. 139 is a rural municipality in Saska |
https://en.wikipedia.org/wiki/146%20%28number%29 | 146 (one hundred [and] forty-six) is the natural number following 145 and preceding 147.
In mathematics
146 is an octahedral number, the number of spheres that can be packed into a regular octahedron with six spheres along each edge. For an octahedron with seven spheres along each edge, the number of spheres on the surface of the octahedron is again 146. It is also possible to arrange 146 disks in the plane into an irregular octagon with six disks on each side, making 146 an octo number.
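Both octahedral facts follow from the standard octahedral-number formula Oct(n) = n(2n² + 1)/3, the surface shell for n spheres per edge being Oct(n) − Oct(n − 2); a quick check:
def octahedral(n):
    # n-th octahedral number: spheres in an octahedron with n per edge
    return n * (2 * n * n + 1) // 3

print(octahedral(6))                  # 146: full packing, six spheres per edge
print(octahedral(7) - octahedral(5))  # 146: surface shell, seven per edge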
There is no integer with exactly 146 coprimes less than it, so 146 is a nontotient. It is also never the difference between an integer and the total of coprimes below it, so it is a noncototient. And it is not the sum of proper divisors of any number, making it an untouchable number.
There are 146 connected partially ordered sets with four labeled elements.
See also
146 (disambiguation)
References
Integers |
https://en.wikipedia.org/wiki/Envenomation | Envenomation is the process by which venom is injected by the bite or sting of a venomous animal.
Many kinds of animals, including mammals (e.g., the northern short-tailed shrew, Blarina brevicauda), reptiles (e.g., the king cobra), spiders (e.g., black widows), insects (e.g., wasps), and fish (e.g., stone fish) employ venom for hunting and for self-defense.
In particular, snakebite envenoming is considered a neglected tropical disease resulting in >100,000 deaths and maiming >400,000 people per year.
Mechanisms
Some venoms are applied externally, especially to sensitive tissues such as the eyes, but most venoms are administered by piercing the skin of the victim. Venom in the saliva of the Gila monster and some other reptiles enters prey through bites of grooved teeth. More commonly animals have specialized organs such as hollow teeth (fangs) and tubular stingers that penetrate the prey's skin, whereupon muscles attached to the attacker's venom reservoir squirt venom deep within the victim's body tissue. For example, the fangs of venomous snakes are connected to a venom gland by means of a duct. Death may occur as a result of bites or stings. The rate of envenoming is described as the likelihood of venom successfully entering a system upon bite or sting.
Mechanisms of snake envenomation
Snakes administer venom to their target by piercing the target's skin with specialized organs known as fangs. Snakebites can be broken into four stages; strike launch, fang erection, fang penetration, and fang withdrawal. Snakes have a venom gland connected to a duct and subsequent fangs. The fangs have hollow tubes with grooved sides that allow venom to flow within them. During snake bites, the fangs penetrate the skin of the target and the fang sheath, a soft tissue organ surrounding the fangs, is retracted. The fang sheath retraction causes an increase in internal pressures. This pressure differential initiates venom flow in the venom delivery system. Larger snakes have been |
https://en.wikipedia.org/wiki/Gajim | Gajim is an instant messaging client for the XMPP protocol which uses the GTK toolkit. The name Gajim is a recursive acronym for Gajim's a jabber instant messenger. Gajim runs on Linux, BSD, macOS, and Microsoft Windows. Released under the GPL-3.0-only license, Gajim is free software. A 2009 round-up of similar software on Tom's Hardware found version 0.12.1 "the lightest and fastest jabber IM client".
Features
Gajim aims to be an easy to use and fully-featured XMPP client. Gajim uses GTK (PyGObject) as GUI library, which makes it cross-platform compatible. Some of its features:
Group chat support
Emojis, Avatars, File transfer
Systray icon, Spell checking
TLS, OpenPGP and end-to-end encryption support (OpenPGP not available under Windows until version 0.15),
Transport Registration support
Service Discovery including Nodes
Wikipedia, dictionary and search engine lookup
Multiple accounts support
D-Bus Capabilities
XML Console
Jingle voice and video support (using the "python-farstream" library, no support in Windows yet)
OMEMO encryption
HTTP file upload
Gajim is available in Basque, Bulgarian, Chinese, Croatian, Czech, English, Esperanto, French, German, Italian, Norwegian (Bokmål), Polish, Russian, Spanish, Slovak, Swedish, Ukrainian and others.
Third-party plugins
Gajim supports various third-party plugins (official list).
See also
Comparison of instant messaging clients
References
Reviews
Joe 'Zonker' Brockmeier Review: Gajim Jabber client on Linux.com, September 16, 2005
Mihai Marinof, Gajim Review. Free Jabber client for Linux. on Softpedia, 7 November 2006
External links
Official wiki
XMPP Software: Clients
Unofficial XMPP/Jabber clients and OS usage statistics (5) by Lucas Nussbaum
Free instant messaging clients
Free XMPP clients
Instant messaging clients that use GTK
Software that uses PyGTK
Windows instant messaging clients
Free software programmed in Python
Applications using D-Bus |
https://en.wikipedia.org/wiki/Natural%20density | In number theory, natural density, also referred to as asymptotic density or arithmetic density, is one method to measure how "large" a subset of the set of natural numbers is. It relies chiefly on the probability of encountering members of the desired subset when combing through the interval as grows large.
Intuitively, it is thought that there are more positive integers than perfect squares, since every perfect square is already positive, and many other positive integers exist besides. However, the set of positive integers is not in fact larger than the set of perfect squares: both sets are infinite and countable and can therefore be put in one-to-one correspondence. Nevertheless if one goes through the natural numbers, the squares become increasingly scarce. The notion of natural density makes this intuition precise for many, but not all, subsets of the naturals (see Schnirelmann density, which is similar to natural density but defined for all subsets of ).
If an integer is randomly selected from the interval $[1, n]$, then the probability that it belongs to $A$ is the ratio of the number of elements of $A$ in $[1, n]$ to the total number of elements in $[1, n]$. If this probability tends to some limit as $n$ tends to infinity, then this limit is referred to as the asymptotic density of $A$. This notion can be understood as a kind of probability of choosing a number from the set $A$. Indeed, the asymptotic density (as well as some other types of densities) is studied in probabilistic number theory.
Definition
A subset $A$ of positive integers has natural density $\alpha$ if the proportion of elements of $A$ among all natural numbers from 1 to $n$ converges to $\alpha$ as $n$ tends to infinity.
More explicitly, if one defines for any natural number $n$ the counting function $a(n)$ as the number of elements of $A$ less than or equal to $n$, then the natural density of $A$ being exactly $\alpha$ means that

$$\lim_{n \to \infty} \frac{a(n)}{n} = \alpha.$$
It follows from the definition that if a set $A$ has natural density $\alpha$ then $0 \le \alpha \le 1$.
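As a quick numeric illustration of the definition (not part of the article), the empirical ratio $a(n)/n$ can be computed directly for small sets: for the even numbers it approaches $1/2$, and for the perfect squares, where $a(n) = \lfloor\sqrt{n}\rfloor$, it approaches $0$. A minimal Python sketch, with hypothetical helper names:

```python
import math

def empirical_density(in_set, n):
    """a(n)/n, where a(n) counts the members of the set that are <= n."""
    return sum(1 for k in range(1, n + 1) if in_set(k)) / n

is_even = lambda k: k % 2 == 0
is_square = lambda k: math.isqrt(k) ** 2 == k

for n in (10**2, 10**4, 10**6):
    print(n, empirical_density(is_even, n), empirical_density(is_square, n))
# The even numbers tend to density 1/2; the perfect squares tend to density 0.
```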
Upper and lower asymptotic density
Let $A$ be a subset of the set of nat |
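For completeness, a standard statement of the definitions this section introduces (supplied here, since the row is cut off, rather than recovered from the source):

$$\overline{d}(A) = \limsup_{n \to \infty} \frac{a(n)}{n}, \qquad \underline{d}(A) = \liminf_{n \to \infty} \frac{a(n)}{n},$$

and the natural density $d(A)$ exists precisely when $\overline{d}(A) = \underline{d}(A)$, in which case all three coincide.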
https://en.wikipedia.org/wiki/Dysgenics | Dysgenics (also known as cacogenics) is the decrease in prevalence of traits deemed to be either socially desirable or well adapted to their environment due to selective pressure disfavoring the reproduction of those traits.
The adjective "dysgenic" is the antonym of "eugenic". In 1915 the term was used by David Starr Jordan to describe the supposed deleterious effects of modern warfare on group-level genetic fitness because of its tendency to kill physically healthy men while preserving the disabled at home. Similar concerns had been raised by early eugenicists and social Darwinists during the 19th century, and continued to play a role in scientific and public policy debates throughout the 20th century. More recent concerns about supposed dysgenic effects in human populations have been advanced by the controversial psychologist Richard Lynn, notably in his 1996 book Dysgenics: Genetic Deterioration in Modern Populations, which argued that a reduction in selection pressures and decreased infant mortality since the Industrial Revolution have resulted in an increased propagation of deleterious traits and genetic disorders.
Despite these concerns, genetic studies have shown no evidence for dysgenic effects in human populations.
In fiction
Cyril M. Kornbluth's 1951 short story "The Marching Morons" is an example of dysgenic fiction, describing a man who accidentally ends up in the distant future and discovers that dysgenics has resulted in mass stupidity. Mike Judge's 2006 film Idiocracy has the same premise, with the main character the subject of a military hibernation experiment that goes awry, taking him 500 years into the future. While in "The Marching Morons", civilization is kept afloat by a small group of dedicated geniuses, in Idiocracy, voluntary childlessness among high-IQ couples leaves only automated systems to fill that role.
See also
Devolution (biology)
Flynn effect
Heritability of IQ
List of congenital disorders
List of biological development diso |
https://en.wikipedia.org/wiki/RS-449 | The RS-449 specification, also known as EIA-449 or TIA-449, defines the functional and mechanical characteristics of the interface between data terminal equipment, typically a computer, and data communications equipment, typically a modem or terminal server. The full title of the standard is EIA-449 General Purpose 37-Position and 9-Position Interface for Data Terminal Equipment and Data Circuit-Terminating Equipment Employing Serial Binary Data Interchange.
449 was part of an effort to replace RS-232C, offering much higher performance and longer cable lengths while using the same DB-25 connectors. This was initially split into two closely related efforts, RS-422 and RS-423. As feature creep set in, the number of required pins began to grow beyond what a DB-25 could handle, and the RS-449 effort started to define a new connector.
449 emerged as an unwieldy system using a large DC-37 connector along with a separate DE-9 connector if the 422 protocol was used. The resulting cable mess was already dismissed as hopeless before the standard was even finalized. The effort was eventually abandoned in favor of RS-530, which used a single DB-25 connector.
Background
During the late 1970s, the EIA began developing two new serial data standards to replace RS-232. RS-232 had a number of issues that limited its performance and practicality. Among these were the relatively large voltages used for signalling, at least ±5 V for mark and space. To supply these, a ±12 V power supply was typically required, which made RS-232 somewhat difficult to implement in a market that was rapidly being dominated by +5/0 V transistor-transistor logic (TTL) circuitry and even lower-voltage CMOS implementations. These high voltages and the unbalanced signalling also resulted in relatively short cable lengths, nominally set to a maximum of 15 metres (50 feet), although in practice cables could be somewhat longer if run at slower speeds.
The need for the large voltages stemmed from differences in ground voltage between devices. RS-232 included both a pr |
https://en.wikipedia.org/wiki/Services%20computing | Services Computing is a cross-discipline covering the science and technology of bridging the gap between business services and IT services. The underlying technology suite includes Web services and service-oriented architecture (SOA), cloud computing, business consulting methodology and utilities, business process modeling, transformation and integration. The scope of Services Computing covers the whole life-cycle of service provision, including business componentization, services modeling, services creation, services realization, services annotation, services deployment, services discovery, services composition, services delivery, service-to-service collaboration, services monitoring, services optimization, and services management. The goal of Services Computing is to enable IT services and computing technology to perform business services more efficiently and effectively.
References
External links
Technical Committee on Services Computing, IEEE Computer Society (TCSVC)
IEEE Transactions on Services Computing (TSC)
IEEE World Congress on Services (SERVICES)
IEEE International Conference on Cloud Computing (CLOUD)
IEEE International Conference on Edge Computing (EDGE)
IEEE International Conference on Digital Health (ICDH)
IEEE International Conference on Web Services (ICWS)
IEEE International Conference on Services Computing (SCC)
IEEE International Conference on Smart Data Services (SMDS)
Computer programming
Service-oriented (business computing) |
https://en.wikipedia.org/wiki/LaCie | LaCie (French: "The Company") is an American-French computer hardware company specializing in external hard drives, RAID arrays, optical drives, flash drives, and computer monitors. The company markets several lines of hard drives with capacities of up to several terabytes, with a choice of interfaces (FireWire 400, FireWire 800, eSATA, USB 2.0, USB 3.0, Thunderbolt, and Ethernet). LaCie also has a series of mobile bus-powered hard drives.
LaCie's computer display product line is targeted specifically to graphics professionals, with an emphasis on color matching.
Company history
LaCie began life as two separate computer storage companies: LaCie, founded in 1987 in Tigard, Oregon (later Portland, Oregon), U.S., and électronique d2, founded in 1989 in Paris, France.
In 1995, électronique d2 acquired La Cie, and later adopted the name 'LaCie' for all of its operations. From their founding, both companies focused on IT storage solutions based on the SCSI interface standard for connecting external devices to computers. SCSI was adopted by Apple Computer as its main peripheral interface standard, and the market for both LaCie and d2 became closely, but not exclusively, associated with the Macintosh platform.
In Europe, the French company électronique d2 was founded in 1989 by Pierre Fournier and Philippe Spruch, working from their apartment in the 14th arrondissement of Paris. d2's main activity was assembling hard drives in external SCSI casings and selling them as peripheral devices.
By 1990, the company had outgrown its small beginnings and moved to new 900 square meter premises in rue Watt, also in Paris. By this stage, designing casings was no longer sufficient for d2 to maintain a competitive edge, and so the company began to develop its own products and invest in R&D. d2 began to open subsidiaries around Europe, the first in London in 1991, followed by offices in Brussels and Copenhagen. The company began to expand its bus |
https://en.wikipedia.org/wiki/Slide%20attack | The slide attack is a form of cryptanalysis designed to deal with the prevailing idea that even weak ciphers can become very strong by increasing the number of rounds, which can ward off a differential attack. The slide attack works in such a way as to make the number of rounds in a cipher irrelevant. Rather than looking at the data-randomizing aspects of the block cipher, the slide attack works by analyzing the key schedule and exploiting weaknesses in it to break the cipher. The most common such weakness is a key schedule that repeats in a cyclic manner.
The attack was first described by David Wagner and Alex Biryukov. Bruce Schneier first suggested the term slide attack to them, and they used it in their 1999 paper describing the attack.
The only requirement for a slide attack to work on a cipher is that it can be broken down into multiple rounds of an identical F function; this typically means that it has a cyclic key schedule. The F function must be vulnerable to a known-plaintext attack. The slide attack is closely related to the related-key attack.
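To make this concrete, below is a minimal sketch (not from the article) of a slide attack against a deliberately weak toy cipher: every round XORs in the same key and rotates, so any plaintext pair with P1 = F(P0, k) "slides" to a ciphertext pair with C1 = F(C0, k), no matter how many rounds are used. The 8-bit block size, round function, and helper names are all invented for illustration:

```python
import random

BITS = 8
MASK = (1 << BITS) - 1

def rotl(x, r):
    """Rotate an 8-bit value left by r bits."""
    return ((x << r) | (x >> (BITS - r))) & MASK

def rotr(x, r):
    """Rotate an 8-bit value right by r bits."""
    return ((x >> r) | (x << (BITS - r))) & MASK

def F(x, k):
    """One round: mix in the (repeated) key, then rotate."""
    return rotl(x ^ k, 3)

def encrypt(x, k, rounds=32):
    """The toy cipher: the same round applied many times."""
    for _ in range(rounds):
        x = F(x, k)
    return x

key = random.randrange(1 << BITS)
# Known-plaintext data; 8-bit blocks are small enough to enumerate fully.
pairs = [(p, encrypt(p, key)) for p in range(1 << BITS)]

def slide_attack(pairs):
    # A "slid pair" satisfies p1 == F(p0, k) and, because every round is
    # identical, also c1 == F(c0, k) -- regardless of the round count.
    for p0, c0 in pairs:
        for p1, c1 in pairs:
            k_guess = rotr(p1, 3) ^ p0       # solve p1 == F(p0, k) for k
            if rotr(c1, 3) ^ c0 == k_guess:  # does the ciphertext side agree?
                if encrypt(p0, k_guess) == c0:  # confirm the candidate key
                    return k_guess
    return None

print("recovered:", slide_attack(pairs), "actual:", key)
```

Because each slid-pair check solves for the key algebraically, the work is dominated by finding a slid pair in the first place; on real block sizes this is done with roughly 2^(n/2) known plaintexts via the birthday paradox rather than by full enumeration as above.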
The idea of the slide attack has roots in a paper published by Edna Grossman and Bryant Tuckerman in an IBM Technical Report in 1977. Grossman and Tuckerman demonstrated the attack on a weak block cipher named New Data Seal (NDS). The attack relied on the fact that the cipher has identical subkeys in each round, so the cipher had a cyclic key schedule with a cycle of only one key, which makes it an early version of the slide attack. A summary of the report, including a description of the NDS block cipher and the attack, is given in Cipher Systems (Beker & Piper, 1982).
The actual attack
First, some notation. In this section, assume the cipher takes $n$-bit blocks and has a key schedule using $K_1, \dots, K_m$ as keys of any length.
The slide attack works by breaking the cipher up into identical permutation functions, F. This F function may consist of more than one round of the cipher; it is defined by the key schedule. For example, i |
https://en.wikipedia.org/wiki/Charles%20Parsons%20%28philosopher%29 | Charles Dacre Parsons (born April 13, 1933) is an American philosopher best known for his work in the philosophy of mathematics and the study of the philosophy of Immanuel Kant. He is professor emeritus at Harvard University.
Life and career
Parsons is a son of the famous Harvard sociologist Talcott Parsons. He earned his Ph.D. in philosophy at Harvard University in 1961, under the direction of Burton Dreben and Willard Van Orman Quine. He taught for many years at Columbia University before moving to Harvard University in 1989. He retired in 2005 as the Edgar Pierce professor of philosophy, a position formerly held by Quine.
He is an elected Fellow of the American Academy of Arts and Sciences and the Norwegian Academy of Science and Letters.
Among his former doctoral students are Michael Levin, James Higginbotham, Peter Ludlow, Gila Sher, Øystein Linnebo, Richard Tieszen, and Mark van Atten.
In 2017, Parsons held the Gödel Lecture titled Gödel and the universe of sets.
Philosophical work
In addition to his work in logic and the philosophy of mathematics, Parsons was an editor, with Solomon Feferman and others, of the posthumous works of Kurt Gödel. He has also written on historical figures, especially Immanuel Kant, Gottlob Frege, Kurt Gödel, and Willard Van Orman Quine.
Works
Books
1983. Mathematics in Philosophy: Selected Essays. Ithaca, N.Y.: Cornell Univ. Press.
2008. Mathematical Thought and its Objects. Cambridge Univ. Press.
2012. From Kant to Husserl: Selected Essays. Cambridge, Massachusetts, and London: Harvard Univ. Press.
2014a. Philosophy of Mathematics in the Twentieth Century: Selected Essays. Cambridge, Massachusetts, and London: Harvard Univ. Press.
Selected articles
1987. "Developing Arithmetic in Set Theory without infinity: Some Historical Remarks". History and Philosophy of Logic, vol. 8, pp. 201–213.
1990a. "The Uniqueness of the Natural Numbers". Iyyun, vol. 39, pp. 13–44. ISSN 0021-3306.
1990b. "The Structuralist |
https://en.wikipedia.org/wiki/Diamond-like%20carbon | Diamond-like carbon (DLC) is a class of amorphous carbon material that displays some of the typical properties of diamond. DLC is usually applied as coatings to other materials that could benefit from such properties.
DLC exists in seven different forms. All seven contain significant amounts of sp3-hybridized carbon atoms. The reason that there are different types is that even diamond can be found in two crystalline polytypes. The more common one uses a cubic lattice, while the less common one, lonsdaleite, has a hexagonal lattice. By mixing these polytypes at the nanoscale, DLC coatings can be made that are at the same time amorphous, flexible, and yet purely sp3-bonded "diamond". The hardest, strongest, and slickest is tetrahedral amorphous carbon (ta-C). Ta-C can be considered the "pure" form of DLC, since it consists almost entirely of sp3-bonded carbon atoms. Fillers such as hydrogen, graphitic sp2 carbon, and metals are used in the other six forms to reduce production expenses or to impart other desirable properties.
The various forms of DLC can be applied to almost any material that is compatible with a vacuum environment.
History
In 2006, the market for outsourced DLC coatings was estimated at about €30,000,000 in the European Union.
In 2011, researchers at Stanford University announced that they had created a super-hard amorphous diamond under conditions of ultrahigh pressure. The material lacks the crystalline structure of diamond but has the light weight characteristic of carbon.
In 2021, Chinese researchers announced AM-III, a super-hard, fullerene-based form of amorphous carbon. It is also a semiconductor with a bandgap range of 1.5 to 2.2 eV. The material demonstrated a hardness of 113 GPa on the Vickers hardness test, compared with around 70 to 100 GPa for diamond. It was hard enough to scratch the surface of a diamond.
Distinction from natural and synthetic diamond
Naturally occurring diamond is almost always found in the crystalline form with a purely cubic orientati |