source | text |
|---|---|
https://en.wikipedia.org/wiki/Software%20law | Software law refers to the legal remedies available to protect software-based assets. Software may, under various circumstances and in various countries, be restricted by patent or copyright or both. Most commercial software is sold under some kind of software license agreement.
See also
Legal aspects of computing
Software copyright
Software patent
Software license
Software license agreement
Proprietary software
Free and open source software |
https://en.wikipedia.org/wiki/Timer | A timer is a type of clock used for measuring specific times.
Timers can be categorized into two main types.
The word "timer" is usually reserved for devices that count down from a specified time interval (countdown timers), while devices that do the opposite, measuring elapsed time by counting upwards from zero, are called stopwatches. A simple example of the first type is an hourglass. By working method, timers fall into two main groups: hardware and software timers.
Most timers give an indication that the set time interval has expired, such as by sounding a loud noise.
Time switches, timing mechanisms that activate a switch, are sometimes also called "timers."
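The countdown/stopwatch distinction above maps directly onto software timers. A minimal sketch in Python (the class names and the injectable clock are illustrative, not from any standard API):

```python
import time

class Stopwatch:
    """Counts upwards from zero, measuring elapsed time."""
    def __init__(self, clock=time.monotonic):
        self._clock = clock          # injectable clock, so behavior is testable
        self._start = clock()

    def elapsed(self):
        return self._clock() - self._start

class CountdownTimer:
    """Counts down from a specified interval and expires at zero."""
    def __init__(self, interval, clock=time.monotonic):
        self._clock = clock
        self._deadline = clock() + interval

    def remaining(self):
        return max(0.0, self._deadline - self._clock())

    def expired(self):
        return self.remaining() == 0.0
```

Using `time.monotonic` rather than wall-clock time avoids jumps when the system clock is adjusted, which matters for interval measurement.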
Hardware
Mechanical
Mechanical timers use clockwork to measure time. Manual timers are typically set by turning a dial to the desired time interval; turning the dial stores energy in a mainspring to run the mechanism. They function similarly to a mechanical alarm clock: the energy in the mainspring causes a balance wheel to rotate back and forth. Each swing of the wheel releases the gear train to move forward by a small fixed amount, causing the dial to move steadily backward until it reaches zero, when a lever arm strikes a bell. The mechanical kitchen timer was invented in 1926. A cheaper mechanism uses a fan fly that spins against air resistance; low-precision mechanical egg-timers are sometimes of this type.
The simplest and oldest type of mechanical timer is the hourglass, in which a fixed amount of sand drains through a narrow opening from one chamber to another to measure a time interval.
Electromechanical
Short-period bimetallic electromechanical timers use a thermal mechanism, with a metal finger made of strips of two metals with different rates of thermal expansion sandwiched together; steel and bronze are common. An electric current flowing through this finger heats the metals; one side expands less than the other, and an electrical contact on |
https://en.wikipedia.org/wiki/Mathcounts | Mathcounts, stylized as MATHCOUNTS, is a non-profit organization that provides grades 6-8 extracurricular mathematics programs in all U.S. states, plus the District of Columbia, Puerto Rico, Guam and U.S. Virgin Islands. Its mission is to provide engaging math programs for middle school students of all ability levels to build confidence and improve attitudes about math and problem solving.
Mathcounts also provides numerous math resources for schools and the general public.
Topics covered include geometry, counting, probability, number theory, and algebra.
History
Mathcounts was started in 1983 by the National Society of Professional Engineers, the National Council of Teachers of Mathematics, and CNA Insurance to increase middle school interest in mathematics. The first national-level competition was held in 1984. The Mathcounts Competition Series spread quickly in middle schools, and today it is the best-known middle school mathematics competition. In 2007 Mathcounts launched the National Math Club as a non-competitive alternative to the Competition Series. In 2011 Mathcounts launched the Math Video Challenge Program, which was discontinued in 2023.
2020 was the only year since 1984 in which a national competition was not held, due to the COVID-19 pandemic. The "MATHCOUNTS Week" event featuring problems from the 2020 State Competition was held on the Art of Problem Solving website as a replacement. The 2021 National Competition was held online.
Current sponsors include RTX Corporation, U.S. Department of Defense STEM, BAE Systems, Northrop Grumman, National Society of Professional Engineers, 3M, Texas Instruments, Art of Problem Solving, Bentley Systems, Carina Initiatives, National Council of Examiners for Engineering and Surveying, CNA Financial, Google, Brilliant, and Mouser Electronics.
Competition Series
The Competition Series is divided into four levels: school, chapter, state, and national. Students progress to each level in the competition based on |
https://en.wikipedia.org/wiki/System%20Management%20Bus | The System Management Bus (abbreviated to SMBus or SMB) is a single-ended simple two-wire bus for the purpose of lightweight communication. Most commonly it is found in chipsets of computer motherboards for communication with the power source for ON/OFF instructions. The exact functionality and hardware interfaces vary with vendors.
It is derived from I²C for communication with low-bandwidth devices on a motherboard, especially power-related chips such as a laptop's rechargeable battery subsystem (see Smart Battery System and ACPI). Other devices might include external master hosts, temperature sensors, fan or voltage sensors, lid switches, clock generators, and RGB lighting. PCI add-in cards may connect to an SMBus segment.
A device can provide manufacturer information, indicate its model/part number, save its state for a suspend event, report different types of errors, accept control parameters, return status over SMBus, and poll chipset registers. The SMBus is generally not user configurable or accessible. Although SMBus devices usually can't identify their functionality, a new PMBus coalition has extended SMBus to include conventions allowing that.
The SMBus was defined by Intel and Duracell in 1994. It carries clock, data, and instructions and is based on Philips' I²C serial bus protocol. Its clock frequency range is 10 kHz to 100 kHz. (PMBus extends this to 400 kHz.) Its voltage levels and timings are more strictly defined than those of I²C, but devices belonging to the two systems are often successfully mixed on the same bus.
SMBus is used as an interconnect in several platform management standards including: ASF, DASH, IPMI.
SMBus is used to access DRAM configuration information as part of serial presence detect. SMBus has grown into a wide variety of system enumeration use cases other than power management.
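One concrete protocol detail lends itself to a short example: from version 1.1 onward the SMBus specification defines an optional Packet Error Checking (PEC) byte, a CRC-8 over the message bytes using the polynomial x^8 + x^2 + x + 1 (0x07). A minimal sketch of that checksum (the function name is illustrative):

```python
def smbus_pec(data: bytes) -> int:
    """CRC-8 with polynomial 0x07, initial value 0x00, no bit reflection:
    the checksum used by SMBus Packet Error Checking (PEC)."""
    crc = 0
    for byte in data:
        crc ^= byte
        for _ in range(8):
            # shift left, XOR in the polynomial when the top bit falls out
            crc = ((crc << 1) ^ 0x07) & 0xFF if crc & 0x80 else (crc << 1) & 0xFF
    return crc
```

The PEC is computed over the entire transaction, including the slave address and read/write bits, so in practice the caller feeds those framing bytes into the CRC as well.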
SMBus/I²C Interoperability
While SMBus is derived from I²C, there are several major differences between the specifications of the two busses in the |
https://en.wikipedia.org/wiki/Online%20poker | Online poker is the game of poker played over the Internet. It has been partly responsible for a huge increase in the number of poker players worldwide. Christiansen Capital Advisors stated online poker revenues grew from $82.7 million in 2001 to $2.4 billion in 2005, while a survey carried out by DrKW and Global Betting and Gaming Consultants asserted online poker revenues in 2004 were at $1.4 billion. In a testimony before the United States Senate regarding Internet Gaming, Grant Eve, a Certified Public Accountant representing the US Accounting Firm Joseph Eve, Certified Public Accountants, estimated that one in every four dollars gambled is gambled online.
Traditional (or "brick and mortar", B&M, live, land-based) venues for playing poker, such as casinos and poker rooms, may be intimidating for novice players and are often located in geographically disparate locations. Also, brick and mortar casinos are reluctant to promote poker because it is difficult for them to profit from it. Though the rake, or time charge, of traditional casinos is often high, the opportunity costs of running a poker room are even higher. Brick and mortar casinos often make much more money by removing poker rooms and adding more slot machines. For example, figures from the Gaming Accounting Firm Joseph Eve estimate that poker accounts for 1% of brick and mortar casino revenues.
Online venues, by contrast, are dramatically cheaper because they have much smaller overhead costs. For example, adding another table does not take up valuable space like it would for a brick and mortar casino. Online poker rooms also allow the players to play for low stakes (as low as 1¢/2¢) and often offer poker freeroll tournaments (where there is no entry fee), attracting beginners and/or less wealthy clientele.
Online venues may be more vulnerable to certain types of fraud, especially collusion between players. However, they have collusion detection abilities that do not exist in brick and mortar casinos. F |
https://en.wikipedia.org/wiki/Ideal%20%28order%20theory%29 | In mathematical order theory, an ideal is a special subset of a partially ordered set (poset). Although this term historically was derived from the notion of a ring ideal of abstract algebra, it has subsequently been generalized to a different notion. Ideals are of great importance for many constructions in order and lattice theory.
Definitions
A subset I of a partially ordered set (P, ≤) is an ideal, if the following conditions hold:
I is non-empty,
for every x in I and y in P, y ≤ x implies that y is in I (I is a lower set),
for every x, y in I, there is some element z in I such that x ≤ z and y ≤ z (I is a directed set).
While this is the most general way to define an ideal for arbitrary posets, it was originally defined for lattices only. In this case, the following equivalent definition can be given:
a subset I of a lattice (P, ≤) is an ideal if and only if it is a lower set that is closed under finite joins (suprema); that is, it is nonempty and for all x, y in I, the element x ∨ y of P is also in I.
A weaker notion of order ideal is defined to be a subset of a poset that satisfies the above conditions 1 and 2. In other words, an order ideal is simply a lower set. Similarly, an ideal can also be defined as a "directed lower set".
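For a finite poset, the conditions above can be checked directly. A small sketch (the helper names and the divisibility example are illustrative):

```python
from itertools import combinations

def is_ideal(subset, poset, leq):
    """Check whether `subset` is an ideal of the finite poset `poset`
    under the order relation `leq`: non-empty, a lower set, and directed."""
    s = set(subset)
    if not s:
        return False
    # lower set: if x is in s and y <= x, then y must be in s
    if any(leq(y, x) and y not in s for x in s for y in poset):
        return False
    # directed: every pair in s has an upper bound inside s
    return all(any(leq(x, z) and leq(y, z) for z in s)
               for x, y in combinations(s, 2))
```

For example, taking the divisors of 60 ordered by divisibility, {1, 2, 3, 6} is an ideal (the principal ideal of 6), while {1, 2, 3} is a lower set but not directed, since 2 and 3 have no common multiple inside the subset.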
The dual notion of an ideal, i.e., the concept obtained by reversing all ≤ and exchanging ∨ with ∧, is a filter.
Frink ideals, pseudoideals and Doyle pseudoideals are different generalizations of the notion of a lattice ideal.
An ideal or filter is said to be proper if it is not equal to the whole set P.
The smallest ideal that contains a given element p is a principal ideal, and p is said to be a principal element of the ideal in this situation. The principal ideal ↓p for a principal element p is thus given by ↓p = {x ∈ P | x ≤ p}.
Terminology confusion
The above definitions of "ideal" and "order ideal" are the standard ones, but there is some confusion in terminology: terms such as "ideal", "order ideal", "Frink ideal", or "partial order ideal" are sometimes used with one another's definitions.
Prime ideals
An important |
https://en.wikipedia.org/wiki/Dynatron%20oscillator | In electronics, the dynatron oscillator, invented in 1918 by Albert Hull at General Electric, is an obsolete vacuum tube electronic oscillator circuit which uses a negative resistance characteristic in early tetrode vacuum tubes, caused by a process called secondary emission. It was the first negative resistance vacuum tube oscillator. The dynatron oscillator circuit was used to a limited extent as beat frequency oscillators (BFOs), and local oscillators in vacuum tube radio receivers as well as in scientific and test equipment from the 1920s to the 1940s but became obsolete around World War 2 due to the variability of secondary emission in tubes.
Negative transconductance oscillators, such as the transitron oscillator invented by Cleto Brunetti in 1939, are similar negative resistance vacuum tube oscillator circuits which are based on negative transconductance (a fall in current through one grid electrode caused by an increase in voltage on a second grid) in a pentode or other multigrid vacuum tube. These replaced the dynatron circuit and were employed in vacuum tube electronic equipment through the 1970s.
How they work
The dynatron and transitron oscillators differ from many oscillator circuits in that they do not use feedback to generate oscillations, but negative resistance. A tuned circuit (resonant circuit), consisting of an inductor and capacitor connected together, can store electric energy in the form of oscillating currents, "ringing" analogously to a tuning fork. If a tuned circuit could have zero electrical resistance, once oscillations were started it would function as an oscillator, producing a continuous sine wave. But because of the inevitable resistance inherent in actual circuits, without an external source of power the energy in the oscillating current is dissipated as heat in the resistance, and any oscillations decay to zero.
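The energy balance described above can be made concrete for a parallel RLC tank: the oscillation envelope decays as exp(−αt), where α is proportional to the tank's net conductance, and a shunt negative resistance subtracts conductance. A rough numerical sketch (the component values and function names are illustrative, not from the article):

```python
import math

def net_decay_rate(R_loss, R_neg, C):
    """Envelope decay rate alpha (1/s) of a parallel RLC tank with loss
    resistance R_loss, shunted by a negative resistance of magnitude R_neg.
    alpha > 0: oscillations die away; alpha <= 0: sustained or growing."""
    g_net = 1.0 / R_loss - 1.0 / R_neg   # net parallel conductance (S)
    return g_net / (2.0 * C)

def envelope(alpha, t):
    """Relative oscillation amplitude after time t seconds."""
    return math.exp(-alpha * t)
```

In this simplified model, oscillation is sustained once the magnitude of the negative resistance is no larger than the tank's equivalent parallel loss resistance; in a real dynatron the tube's bias sets that magnitude, which is why the variability of secondary emission made the circuit hard to keep on the edge of oscillation.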
In the dynatron and transitron circuits, a vacuum tube is biased so that one of its electrodes has negativ |
https://en.wikipedia.org/wiki/Cytokine%20release%20syndrome | In immunology, cytokine release syndrome (CRS) is a form of systemic inflammatory response syndrome (SIRS) that can be triggered by a variety of factors such as infections and certain drugs. It refers to cytokine storm syndromes (CSS) and occurs when large numbers of white blood cells are activated and release inflammatory cytokines, which in turn activate yet more white blood cells. CRS is also an adverse effect of some monoclonal antibody medications, as well as adoptive T-cell therapies. When occurring as a result of a medication, it is also known as an infusion reaction.
The term cytokine storm is often used interchangeably with CRS but, despite the fact that they have a similar clinical phenotype, their characteristics are different. When occurring as a result of a therapy, CRS symptoms may be delayed until days or weeks after treatment. Immediate-onset CRS is a cytokine storm, although severe cases of CRS have also been called cytokine storms.
Signs and symptoms
Symptoms include fever that tends to fluctuate, fatigue, loss of appetite, muscle and joint pain, nausea, vomiting, diarrhea, rashes, fast breathing, rapid heartbeat, low blood pressure, seizures, headache, confusion, delirium, hallucinations, tremor, and loss of coordination.
Lab tests and clinical monitoring show low blood oxygen, widened pulse pressure, increased cardiac output (early), potentially diminished cardiac output (late), high levels of nitrogen compounds in the blood, elevated D-dimer, elevated transaminases, factor I deficiency with excessive bleeding, and higher-than-normal levels of bilirubin.
Cause
CRS occurs when large numbers of white blood cells, including B cells, T cells, natural killer cells, macrophages, dendritic cells, and monocytes are activated and release inflammatory cytokines, which activate more white blood cells in a positive feedback loop of pathogenic inflammation. Immune cells are activated by stressed or infected cells through receptor-ligand interactions.
This can oc |
https://en.wikipedia.org/wiki/Codebook | A codebook is a type of document used for gathering and storing cryptography codes. Originally, codebooks were often literally books, but today "codebook" is a byword for the complete record of a series of codes, regardless of physical format.
Cryptography
In cryptography, a codebook is a document used for implementing a code. A codebook contains a lookup table for coding and decoding; each word or phrase has one or more strings which replace it. To decipher messages written in code, corresponding copies of the codebook must be available at either end. The distribution and physical security of codebooks presents a special difficulty in the use of codes compared to the secret information used in ciphers, the key, which is typically much shorter.
The United States National Security Agency documents sometimes use codebook to refer to block ciphers; compare their use of combiner-type algorithm to refer to stream ciphers.
Codebooks come in two forms, one-part or two-part:
In one-part codes, the plaintext words and phrases and the corresponding code words are in the same alphabetical order. They are organized similar to a standard dictionary. Such codes are half the size of two-part codes but are more vulnerable since an attacker who recovers some code word meanings can often infer the meaning of nearby code words. One-part codes may be used simply to shorten messages for transmission or have their security enhanced with superencryption methods, such as adding a secret number to numeric code words.
In two-part codes, one part is for converting plaintext to ciphertext, the other for the opposite purpose. They are usually organized similarly to a language translation dictionary, with plaintext words (in the first part) and ciphertext words (in the second part) presented like dictionary headwords.
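The one-part scheme and the superencryption idea above can be sketched with a toy numeric codebook (all words and code groups here are invented for illustration):

```python
# Toy one-part code: plaintext entries and numeric code groups both ascend,
# so a single sorted book serves for both coding and decoding. That shared
# ordering is exactly the weakness the text describes.
CODEBOOK = {"attack": 1012, "dawn": 2650, "retreat": 7443}
REVERSE = {group: word for word, group in CODEBOOK.items()}

def encode(message, additive=0):
    """Encode a message; a nonzero additive superenciphers the code groups
    by adding a secret number modulo 10000."""
    return [(CODEBOOK[w] + additive) % 10000 for w in message.split()]

def decode(groups, additive=0):
    """Strip the additive and look the groups up in the reverse book."""
    return " ".join(REVERSE[(g - additive) % 10000] for g in groups)
```

A two-part code would instead use two independently ordered books, one sorted by plaintext for encoding and one sorted by code group for decoding, at twice the size.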
The earliest known use of a codebook system was by Gabriele de Lavinde in 1379 working for the Antipope Clement VII.
Two-part codebooks go back at least as far as Antoine |
https://en.wikipedia.org/wiki/SabreTalk | SabreTalk is a discontinued dialect of PL/I for the S/360 IBM mainframes running the TPF platform. SabreTalk was developed jointly by American Airlines, Eastern Air Lines and IBM. SabreTalk is known as PL/TPF (Programming Language for TPF).
SabreTalk programs still run in the British Airways Flight Operations system (FICO) under ALCS, using a commercially available automatic converter to translate SabreTalk programs to C programs. Both the Reservations and Operations Support System (OSS) of Delta Air Lines were developed using both SabreTalk and IBM 360 Assembler. Although development is currently restricted to C++, the majority of Delta's programming platform remained in SabreTalk until the 2010s.
Because of the availability of translators from SabreTalk to C and discontinued support by the original developers, several companies are beginning the move away from SabreTalk to purely C-based programs.
Code Sample:
SAMPLE: PROCEDURE;
DECLARE ARRAY(10) DECIMAL(5) BASED(POINTUR);
DECLARE COUNTER BINARY(15) ALIGNED;
DECLARE TOTAL BINARY(31) ALIGNED;
START(POINTUR=#RG1); /* RECEIVE POINTER TO ARRAY IN REGISTER 1 */
TOTAL = 0;
LOOP:
DO COUNTER = 0 TO 10 BY 2;
TOTAL = TOTAL + ARRAY(COUNTER); /* TALLY EVEN NUMBERED ITEMS */
END LOOP;
IF TOTAL = 0 THEN /* VALUE OF TOTAL COMPUTED? */
ENTRC ERRO; /* N=CHECK VALIDITY IN PROG ERRO W/RETURN EXPECTED*/
BACKC(#RAC= TOTAL); /* BACK TO CALLING PROGRAM PASSING VALUE OF */
END SAMPLE; /* TOTAL IN REGISTER RAC. */
References
External links
Legacy systems
PL/I programming language family
IBM software |
https://en.wikipedia.org/wiki/SWIG | The Simplified Wrapper and Interface Generator (SWIG) is an open-source software tool used to connect computer programs or libraries written in C or C++ with scripting languages such as Lua, Perl, PHP, Python, R, Ruby, Tcl, and other languages like C#, Java, JavaScript, Go, D, OCaml, Octave, Scilab and Scheme. Output can also be in the form of XML.
Function
The aim is to allow the calling of native functions (that were written in C or C++) by other programming languages, passing complex data types to those functions, keeping memory from being inappropriately freed, inheriting object classes across languages, etc. The programmer writes an interface file containing a list of C/C++ functions to be made visible to an interpreter. SWIG will compile the interface file and generate code in regular C/C++ and the target programming language. SWIG will generate conversion code for functions with simple arguments; conversion code for complex types of arguments must be written by the programmer. The SWIG tool creates source code that provides the glue between C/C++ and the target language. Depending on the language, this glue comes in two forms:
a shared library that an extant interpreter can link to as some form of extension module, or
a shared library that can be linked to other programs compiled in the target language (for example, using Java Native Interface (JNI) in Java).
SWIG is not used for calling interpreted functions by native code; this must be done by the programmer manually.
Example
SWIG wraps simple C declarations by creating an interface that closely matches the way in which the declarations would be used in a C program. For example, consider the following interface file:
%module example
%inline %{
extern double sin(double x);
extern int strcmp(const char *, const char *);
extern int Foo;
%}
#define STATUS 50
#define VERSION "1.1"
In this file, there are two functions, sin() and strcmp(), a global variable Foo, and two constants, STATUS and VERSION. When SWIG creates an extension module |
https://en.wikipedia.org/wiki/Zombie%20%28computing%29 | In computing, a zombie is a computer connected to the Internet that has been compromised by a hacker via a computer virus, computer worm, or trojan horse program and can be used to perform malicious tasks under the remote direction of the hacker. Zombie computers often coordinate together in a botnet controlled by the hacker, and are used for activities such as spreading e-mail spam and launching distributed denial-of-service attacks (DDoS attacks) against web servers. Most victims are unaware that their computers have become zombies. The concept is similar to the zombie of Haitian Voodoo folklore, which refers to a corpse resurrected by a sorcerer via magic and enslaved to the sorcerer's commands, having no free will of its own. A coordinated DDoS attack by multiple botnet machines also resembles a "zombie horde attack", as depicted in fictional zombie films.
Advertising
Zombie computers have been used extensively to send e-mail spam; as of 2005, an estimated 50–80% of all spam worldwide was sent by zombie computers. This allows spammers to avoid detection and presumably reduces their bandwidth costs, since the owners of zombies pay for their own bandwidth. This spam also greatly increases the spread of Trojan horses, as Trojans are not self-replicating. They rely on the movement of e-mails or spam to grow, whereas worms can spread by other means. For similar reasons, zombies are also used to commit click fraud against sites displaying pay-per-click advertising. Others can host phishing or money mule recruiting websites.
Distributed denial-of-service attacks
Zombies can be used to conduct distributed denial-of-service (DDoS) attacks, a term which refers to the orchestrated flooding of target websites by large numbers of computers at once. The large number of Internet users making simultaneous requests of a website's server is intended to result in crashing and the prevention of legitimate users from accessing the site. A variant of this type of flooding is kn |
https://en.wikipedia.org/wiki/Foodborne%20illness | Foodborne illness (also foodborne disease and food poisoning) is any illness resulting from the contamination of food by pathogenic bacteria, viruses, or parasites, as well as prions (the agents of mad cow disease), and toxins such as aflatoxins in peanuts, poisonous mushrooms, and various species of beans that have not been boiled for at least 10 minutes.
Symptoms vary depending on the cause but often include vomiting, fever, and aches, and may include diarrhea. Bouts of vomiting can be repeated with an extended delay in between, because even if infected food was eliminated from the stomach in the first bout, microbes, like bacteria (if applicable), can pass through the stomach into the intestine and begin to multiply. Some types of microbes stay in the intestine.
For contaminants requiring an incubation period, symptoms may not manifest for hours to days, depending on the cause and on the quantity of consumption. Longer incubation periods tend to cause those affected to not associate the symptoms with the item consumed, so they may misattribute the symptoms to gastroenteritis, for example.
Causes
Foodborne illness usually arises from improper handling, preparation, or food storage. Good hygiene practices before, during, and after food preparation can reduce the chances of contracting an illness. There is a consensus in the public health community that regular hand-washing is one of the most effective defenses against the spread of foodborne illness. The action of monitoring food to ensure that it will not cause foodborne illness is known as food safety. Foodborne disease can also be caused by a large variety of toxins that affect the environment.
Furthermore, foodborne illness can be caused by a number of chemicals, such as pesticides, medicines, and natural toxic substances such as vomitoxin, poisonous mushrooms or reef fish.
Bacteria
Bacteria are a common cause of foodborne illness. The United Kingdom, in 2000, reported the individual bacteria involved as |
https://en.wikipedia.org/wiki/Manchester%20Baby | The Manchester Baby, also called the Small-Scale Experimental Machine (SSEM), was the first electronic stored-program computer. It was built at the University of Manchester by Frederic C. Williams, Tom Kilburn, and Geoff Tootill, and ran its first program on 21 June 1948.
The Baby was not intended to be a practical computing engine, but was instead designed as a testbed for the Williams tube, the first truly random-access memory. Described as "small and primitive" 50 years after its creation, it was the first working machine to contain all the elements essential to a modern electronic digital computer. As soon as the Baby had demonstrated the feasibility of its design, a project was initiated at the university to develop it into a full-scale operational machine, the Manchester Mark 1. The Mark 1 in turn quickly became the prototype for the Ferranti Mark 1, the world's first commercially available general-purpose computer.
The Baby had a 32-bit word length and a memory of 32 words (1 kibibit, 1,024 bits). As it was designed to be the simplest possible stored-program computer, the only arithmetic operations implemented in hardware were subtraction and negation; other arithmetic operations were implemented in software. The first of three programs written for the machine calculated the highest proper divisor of 2^18 (262,144), by testing every integer from 2^18 downwards. This algorithm would take a long time to execute, and so prove the computer's reliability, as division was implemented by repeated subtraction of the divisor. The program consisted of 17 instructions and ran for about 52 minutes before reaching the correct answer of 131,072 (2^17), after the Baby had performed about 3.5 million operations (for an effective CPU speed of about 1,100 instructions per second).
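The first program's method can be re-created in miniature. The sketch below uses a smaller power of two than the Baby's 2^18, since repeated subtraction in an interpreted language would otherwise be slow in its own right; the structure of the algorithm is as described above:

```python
def divides_by_repeated_subtraction(n, d):
    """The Baby had no divide instruction: test whether d divides n by
    subtracting d until the remainder falls below d."""
    r = n
    while r >= d:
        r -= d
    return r == 0

def highest_proper_divisor(n):
    """Test every candidate from n - 1 downwards, as the first program did;
    the first hit is the highest proper divisor."""
    for candidate in range(n - 1, 0, -1):
        if divides_by_repeated_subtraction(n, candidate):
            return candidate
```

Run with n = 2^18 this returns 131,072 exactly as the Baby did, after millions of subtractions; the test below uses 2^10 to keep the run short.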
Background
The first design for a program-controlled computer was Charles Babbage's Analytical Engine in the 1830s, with Ada Lovelace conceiving the idea of the first theoretical program to calculate Bernoulli numbers. A c |
https://en.wikipedia.org/wiki/Software%20Engineering%202004 | The Software Engineering 2004 (SE2004) —formerly known as Computing Curriculum Software Engineering (CCSE)— is a document that provides recommendations for undergraduate education in software engineering. SE2004 was initially developed by a steering committee between 2001 and 2004. Its development was sponsored by the Association for Computing Machinery and the IEEE Computer Society. Important components of SE2004 include the Software Engineering Education Knowledge, a list of topics that all graduates should know, as well as a set of guidelines for implementing curricula and a set of proposed courses.
External links
SE2004 Home Page
Software engineering papers
Computer science education |
https://en.wikipedia.org/wiki/Descartes%27%20theorem | In geometry, Descartes' theorem states that for every four kissing, or mutually tangent, circles, the radii of the circles satisfy a certain quadratic equation. By solving this equation, one can construct a fourth circle tangent to three given, mutually tangent circles. The theorem is named after René Descartes, who stated it in 1643.
Frederick Soddy's 1936 poem The Kiss Precise summarizes the theorem in terms of the bends (inverse radii) of the four circles:
Special cases of the theorem apply when one or two of the circles is replaced by a straight line (with zero bend) or when the bends are integers or square numbers. A version of the theorem using complex numbers allows the centers of the circles, and not just their radii, to be calculated. With an appropriate definition of curvature, the theorem also applies in spherical geometry and hyperbolic geometry. In higher dimensions, an analogous quadratic equation applies to systems of pairwise tangent spheres or hyperspheres.
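The quadratic equation referred to above is (k1 + k2 + k3 + k4)^2 = 2(k1^2 + k2^2 + k3^2 + k4^2), where the k_i are the bends (inverse radii, negative for an internally tangent enclosing circle). Solving for k4 gives k4 = k1 + k2 + k3 ± 2√(k1·k2 + k2·k3 + k3·k1), which a short sketch can evaluate (the function name is illustrative):

```python
import math

def fourth_bends(k1, k2, k3):
    """Given the bends of three mutually tangent circles, return the two
    bends k4 solving Descartes' quadratic
    (k1+k2+k3+k4)^2 = 2(k1^2+k2^2+k3^2+k4^2)."""
    s = k1 + k2 + k3
    root = 2.0 * math.sqrt(k1 * k2 + k2 * k3 + k3 * k1)  # non-negative for
    return s - root, s + root                            # a valid tangent triple
```

For instance, a unit circle enclosing two circles of radius 1/2 gives bends (−1, 2, 2), and both solutions are bend 3, i.e. the two radius-1/3 circles completing the classic integer configuration (−1, 2, 2, 3, 3).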
History
Geometrical problems involving tangent circles have been pondered for millennia. In ancient Greece of the third century BC, Apollonius of Perga devoted an entire book, Tangencies, to the topic. It has been lost, and is known largely through a description of its contents by Pappus of Alexandria and through fragmentary references to it in medieval Islamic mathematics. However, Greek geometry was largely focused on straightedge and compass construction. For instance, the problem of Apollonius, closely related to Descartes' theorem, asks for the construction of a circle tangent to three given circles, which need not themselves be tangent. Descartes' theorem, by contrast, is formulated using algebraic relations between numbers describing geometric forms. This is characteristic of analytic geometry, a field pioneered by René Descartes and Pierre de Fermat in the first half of the 17th century.
Descartes discussed the tangent circle problem briefly in 1643, in two letters to Princess Elisabeth o |
https://en.wikipedia.org/wiki/Anti-twister%20mechanism | The anti-twister or antitwister mechanism is a method of connecting a flexible link between two objects, one of which is rotating with respect to the other, in a way that prevents the link from becoming twisted. The link could be an electrical cable or a flexible conduit.
This mechanism is intended as an alternative to the usual method of supplying electric power to a rotating device, the use of slip rings. The slip rings are attached to one part of the machine, and a set of fine metal brushes are attached to the other part. The brushes are kept in sliding contact with the slip rings, providing an electrical path between the two parts while allowing the parts to rotate about each other.
However, this presents problems with smaller devices. Whereas with large devices minor fluctuations in the power provided through the brush mechanism are inconsequential, in the case of tiny electronic components, the brushing introduces unacceptable levels of noise in the stream of power supplied. Therefore, a smoother means of power delivery is needed.
A device designed and patented in 1971 by Dale A. Adams, and reported in The Amateur Scientist in December 1975, solves this problem with a rotating disk above a base, from which a cable extends up, over, and onto the top of the disk. As the disk rotates, the plane of this cable is rotated at exactly half the rate of the disk, so the cable experiences no net twisting.
What makes the device possible is the peculiar connectivity of the space of 3D rotations, as discovered by P. A. M. Dirac and illustrated in his Plate trick (also known as the string trick or belt trick). Its covering Spin(3) group can be represented by unit quaternions, also known as versors.
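The double cover mentioned above can be verified numerically with unit quaternions: a 360° rotation maps to −1 in Spin(3), not to the identity, while 720° returns to +1, which is the same topological fact that lets the cable's half-rate rotation untwist. A sketch (the helper names are illustrative):

```python
import math

def qmul(a, b):
    """Hamilton product of quaternions given as (w, x, y, z) tuples."""
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return (aw*bw - ax*bx - ay*by - az*bz,
            aw*bx + ax*bw + ay*bz - az*by,
            aw*by - ax*bz + ay*bw + az*bx,
            aw*bz + ax*by - ay*bx + az*bw)

def rotation_about_z(angle):
    """Unit quaternion (versor) for a rotation by `angle` about the z-axis.
    Note the half angle: Spin(3) double-covers the 3D rotation group."""
    return (math.cos(angle / 2), 0.0, 0.0, math.sin(angle / 2))
```

Because of the half-angle, one full turn of the disk yields the quaternion −1 and only a second full turn restores +1, mirroring Dirac's belt trick.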
See also
Quaternions and spatial rotation
Candle dance
References
External links
An anti-twist mechanism made with LEGO
Electrical generators
Mechanisms (engineering)
Spinors |
https://en.wikipedia.org/wiki/Topcoder | Topcoder (formerly TopCoder) is a crowdsourcing company with an open global community of designers, developers, data scientists, and competitive programmers. Topcoder pays community members for their work on the projects and sells community services to corporate, mid-size, and small-business clients. Topcoder also organizes the annual Topcoder Open tournament and a series of smaller regional events.
History
Topcoder was founded in 2001 by Jack Hughes, Chairman and Founder of the Tallan company. The name was spelt "TopCoder" until 2013. Topcoder ran regular competitive programming challenges, known as Single Round Matches or "SRMs", where each SRM was a timed 1.5-hour algorithm competition in which contestants competed against each other to solve the same set of problems. The contestants were students from different secondary schools or universities. Cash prizes ranging from $5,000 to $10,000 per match were secured from corporate sponsors and awarded to tournament winners to generate interest from the student community.
As the community of designers, developers, data scientists, and competitive programmers involved in Topcoder grew, the company started to offer software development services to 3rd party clients, contracting individual community members to work on specific tasks. Most of the revenue, though, still came from consulting services provided to clients by Topcoder employees. From 2006 onwards, Topcoder held design competitions, thus offering design services to their clients. In 2006 Topcoder also started to organize Marathon Matches (MM) – one week long algorithmic contests.
In an attempt to optimize expenses, Topcoder introduced new competition tracks in 2007-2008 and delegated more work from its employees to the community. By 2009, the size of Topcoder's staff had been reduced to 16 project managers servicing 35 clients, while the community did most of the actual work via crowdsourcing. Topcoder representatives claim that at this point thei |
https://en.wikipedia.org/wiki/Type%20species | In zoological nomenclature, a type species (species typica) is the species name with which the name of a genus or subgenus is considered to be permanently taxonomically associated, i.e., the species that contains the biological type specimen (or specimens). A similar concept is used for suprageneric groups and called a type genus.
In botanical nomenclature, these terms have no formal standing under the code of nomenclature, but are sometimes borrowed from zoological nomenclature. In botany, the type of a genus name is a specimen (or, rarely, an illustration) which is also the type of a species name. The species name with that type can also be referred to as the type of the genus name. Names of genus and family ranks, the various subdivisions of those ranks, and some higher-rank names based on genus names, have such types.
In bacteriology, a type species is assigned for each genus. Whether or not currently recognized as valid, every named genus or subgenus in zoology is theoretically associated with a type species. In practice, however, there is a backlog of untypified names defined in older publications when it was not required to specify a type.
Use in zoology
A type species is both a concept and a practical system that is used in the classification and nomenclature (naming) of animals. The "type species" represents the reference species and thus "definition" for a particular genus name. Whenever a taxon containing multiple species must be divided into more than one genus, the type species automatically assigns the name of the original taxon to one of the resulting new taxa, the one that includes the type species.
The term "type species" is regulated in zoological nomenclature by article 42.3 of the International Code of Zoological Nomenclature, which defines a type species as the name-bearing type of the name of a genus or subgenus (a "genus-group name"). In the Glossary, type species is defined as
The type species permanently attaches a formal name (the ge |
https://en.wikipedia.org/wiki/Quantum%20number | In quantum physics and chemistry, quantum numbers describe values of conserved quantities in the dynamics of a quantum system. Quantum numbers correspond to eigenvalues of operators that commute with the Hamiltonian—quantities that can be known with precision at the same time as the system's energy—and their corresponding eigenspaces. Together, the quantum numbers of a quantum system fully characterize a basis state of the system, and can in principle be measured together.
An important aspect of quantum mechanics is the quantization of many observable quantities of interest. In particular, this leads to quantum numbers that take values in discrete sets of integers or half-integers, although they could approach infinity in some cases. This distinguishes quantum mechanics from classical mechanics, where the values that characterize the system, such as mass, charge, or momentum, all range continuously. Quantum numbers often describe specifically the energy levels of electrons in atoms, but other possibilities include angular momentum, spin, etc. An important family is flavour quantum numbers – internal quantum numbers which determine the type of a particle and its interactions with other particles through the fundamental forces. Any quantum system can have one or more quantum numbers; it is thus difficult to list all possible quantum numbers.
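The link between quantum numbers and commuting operators can be illustrated with a toy numerical example (hypothetical matrices, not a physical model): two commuting Hermitian matrices share a common eigenbasis, so each basis state can be labeled simultaneously by one eigenvalue from each operator.

```python
# Toy illustration: two commuting Hermitian matrices, standing in for a
# Hamiltonian H and another observable A, admit a common eigenbasis, so
# each state carries one "quantum number" (eigenvalue) from each.
import numpy as np

H = np.diag([1.0, 1.0, 2.0])            # hypothetical "Hamiltonian"
A = np.array([[0.0, 1.0, 0.0],
              [1.0, 0.0, 0.0],
              [0.0, 0.0, 3.0]])         # a second observable; [H, A] = 0

assert np.allclose(H @ A, A @ H)        # the operators commute

# Diagonalizing A also resolves the degenerate E = 1 level of H:
evals, evecs = np.linalg.eigh(A)
for lam, v in zip(evals, evecs.T):
    print(f"A-eigenvalue {lam:+.0f}, energy {float(v @ H @ v):.0f}")
```

Here the two states with energy 1 are distinguished by their A-eigenvalues (−1 and +1), just as degenerate atomic levels are distinguished by additional quantum numbers.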
Quantum numbers needed for a given system
The tally of quantum numbers varies from system to system and has no universal answer. Hence these parameters must be found for each system to be analyzed. A quantized system requires at least one quantum number. The dynamics (i.e. time evolution) of any quantum system are described by a quantum operator in the form of a Hamiltonian, H. There is one quantum number of the system corresponding to the system's energy; i.e., one of the eigenvalues of the Hamiltonian. There is also one quantum number for each linearly independent operator that commutes with the Ham |
https://en.wikipedia.org/wiki/Copland%20%28operating%20system%29 | Copland is an operating system developed by Apple for Macintosh computers between 1994 and 1996 but never commercially released. It was intended to be released as System 8, and later, Mac OS 8. Planned as a modern successor to the aging System 7, Copland introduced protected memory, preemptive multitasking, and several new underlying operating system features, while retaining compatibility with existing Mac applications. Copland's tentatively planned successor, codenamed Gershwin, was intended to add more advanced features such as application-level multithreading.
Development officially began in March 1994. Over the next several years, previews of Copland garnered much press, introducing the Mac audience to operating system concepts such as object orientation, crash-proofing, and multitasking. In May 1996, Gil Amelio stated that Copland was the primary focus of the company, aiming for a late-year release. Internally, however, the development effort was beset with problems due to dysfunctional corporate personnel and project management. Development milestones and developer release dates were missed repeatedly.
Ellen Hancock was hired to get the project back on track, but quickly concluded it would never ship. In August 1996, it was announced that Copland was canceled and Apple would look outside the company for a new operating system. Among many choices, they selected NeXTSTEP and purchased NeXT in 1997 to obtain it. In the interim period, while NeXTSTEP was ported to the Mac, Apple released a much more legacy-oriented Mac OS 8 in 1997, followed by Mac OS 9 in 1999. Mac OS X became Apple's next-generation operating system with its release in 2001. All of these releases bear functional or cosmetic influence from Copland.
The Copland development effort has been described as an example of feature creep. In 2008, PC World included Copland on a list of the biggest project failures in information technology (IT) history.
Design
Mac OS legacy
The prehistory of Copland |
https://en.wikipedia.org/wiki/Companion%20matrix | In linear algebra, the Frobenius companion matrix of the monic polynomial p(x) = c_0 + c_1 x + ... + c_(n-1) x^(n-1) + x^n
is the square matrix C(p) with ones on the subdiagonal, the negated coefficients -c_0, -c_1, ..., -c_(n-1) down the last column, and zeros elsewhere.
Some authors use the transpose of this matrix, C(p)^T, which is more convenient for some purposes such as linear recurrence relations (see below).
C(p) is defined from the coefficients of p(x), while the characteristic polynomial as well as the minimal polynomial of C(p) are equal to p(x). In this sense, the matrix C(p) and the polynomial p(x) are "companions".
Similarity to companion matrix
Any matrix A with entries in a field F has characteristic polynomial p(x) = det(xI - A), which in turn has companion matrix C(p). These matrices are related as follows.
The following statements are equivalent:
A is similar over F to C(p), i.e. A can be conjugated to its companion matrix by matrices in GLn(F);
the characteristic polynomial of A coincides with its minimal polynomial, i.e. the minimal polynomial has degree n;
the linear mapping x·v = Av makes F^n a cyclic F[x]-module, having a basis of the form {v, Av, ..., A^(n-1)v}; or equivalently F^n ≅ F[x]/(p(x)) as F[x]-modules.
If the above hold, one says that A is non-derogatory.
Not every square matrix is similar to a companion matrix, but every square matrix is similar to a block diagonal matrix made of companion matrices. If we also demand that the polynomial of each diagonal block divides the next one, they are uniquely determined by A, and this gives the rational canonical form of A.
Diagonalizability
The roots of the characteristic polynomial p(x) are the eigenvalues of C(p). If there are n distinct eigenvalues λ_1, ..., λ_n, then C(p) is diagonalizable as C(p) = V^(-1) D V, where D is the diagonal matrix diag(λ_1, ..., λ_n) and V is the Vandermonde matrix corresponding to the λ's, with rows (1, λ_i, λ_i^2, ..., λ_i^(n-1)).
Indeed, an easy computation shows that the transpose C(p)^T has eigenvectors v_i = (1, λ_i, λ_i^2, ..., λ_i^(n-1)) with C(p)^T v_i = λ_i v_i, which follows from p(λ_i) = 0. Thus, its diagonalizing change of basis matrix is V^T, meaning C(p)^T = V^T D (V^T)^(-1), and taking the transpose of both sides gives C(p) = V^(-1) D V.
We can read the eigenvectors of C(p) with C(p) w_i = λ_i w_i from the equation C(p) = V^(-1) D V: they are the column vectors of the inverse Vandermonde matrix V^(-1). This matrix is known explicitly, giving the eigenvectors, with |
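A quick numerical check of the companion construction and its eigenvalues can be done with NumPy. The helper `companion` below is illustrative; conventions differ between authors (some place the coefficients in the last row rather than the last column).

```python
# Sketch: build the companion matrix of a monic polynomial and verify
# (a) its eigenvalues are the polynomial's roots, and
# (b) for distinct roots it is diagonalized via a Vandermonde matrix.
import numpy as np

def companion(coeffs):
    """Companion matrix of x^n + c_(n-1) x^(n-1) + ... + c_0,
    with coeffs = [c_0, ..., c_(n-1)]: ones on the subdiagonal,
    negated coefficients in the last column."""
    n = len(coeffs)
    C = np.zeros((n, n))
    C[1:, :-1] = np.eye(n - 1)
    C[:, -1] = -np.asarray(coeffs)
    return C

# p(x) = x^3 - 6x^2 + 11x - 6 = (x - 1)(x - 2)(x - 3)
C = companion([-6.0, 11.0, -6.0])
roots = np.sort(np.linalg.eigvals(C).real)
print(roots)                          # approximately [1. 2. 3.]

# Vandermonde diagonalization: C = V^(-1) D V, rows of V are (1, lam, lam^2)
V = np.vander(roots, increasing=True)
D = np.linalg.inv(V) @ np.diag(roots) @ V
print(np.allclose(C, D))              # True
```

The check confirms both directions of the "companion" relationship: the matrix recovers the polynomial's roots, and the polynomial's roots rebuild the matrix through the Vandermonde factorization.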
https://en.wikipedia.org/wiki/Radiance | In radiometry, radiance is the radiant flux emitted, reflected, transmitted or received by a given surface, per unit solid angle per unit projected area. Radiance is used to characterize diffuse emission and reflection of electromagnetic radiation, and to quantify emission of neutrinos and other particles. The SI unit of radiance is the watt per steradian per square metre (). It is a directional quantity: the radiance of a surface depends on the direction from which it is being observed.
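As a worked example of the defining ratio (radiant flux per unit solid angle per unit projected area), with all numbers invented for illustration:

```python
# Radiance as defined above: L = Phi / (Omega * A * cos(theta)),
# where the cos(theta) factor converts surface area to *projected* area.
# All values below are hypothetical, chosen only to exercise the formula.
import math

flux = 1.0e-3               # radiant flux in the beam, W
solid_angle = 1.0e-4        # steradians subtended by the receiver
area = 2.0e-6               # emitting surface area, m^2
theta = math.radians(60.0)  # angle between the surface normal and the beam

L = flux / (solid_angle * area * math.cos(theta))
print(f"L = {L:.3e} W sr^-1 m^-2")  # projected area is halved at 60 degrees
```

Note how the same surface viewed at a steeper angle (larger θ) has a smaller projected area and hence a larger radiance for the same flux, which is why radiance is a directional quantity.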
The related quantity spectral radiance is the radiance of a surface per unit frequency or wavelength, depending on whether the spectrum is taken as a function of frequency or of wavelength.
Historically, radiance was called "intensity" and spectral radiance was called "specific intensity". Many fields still use this nomenclature. It is especially dominant in heat transfer, astrophysics and astronomy. "Intensity" has many other meanings in physics, with the most common being power per unit area.
Description
Radiance is useful because it indicates how much of the power emitted, reflected, transmitted or received by a surface will be received by an optical system looking at that surface from a specified angle of view. In this case, the solid angle of interest is the solid angle subtended by the optical system's entrance pupil. Since the eye is an optical system, radiance and its cousin luminance are good indicators of how bright an object will appear. For this reason, radiance and luminance are both sometimes called "brightness". This usage is now discouraged (see the article Brightness for a discussion). The nonstandard usage of "brightness" for "radiance" persists in some fields, notably laser physics.
The radiance divided by the index of refraction squared is invariant in geometric optics. This means that for an ideal optical system in air, the radiance at the output is the same as the input radiance. This is sometimes called conservation of radiance. For real, passive, optical |
https://en.wikipedia.org/wiki/Calibration%20curve | In analytical chemistry, a calibration curve, also known as a standard curve, is a general method for determining the concentration of a substance in an unknown sample by comparing the unknown to a set of standard samples of known concentration. A calibration curve is one approach to the problem of instrument calibration; other standard approaches may mix the standard into the unknown, giving an internal standard. The calibration curve is a plot of how the instrumental response, the so-called analytical signal, changes with the concentration of the analyte (the substance to be measured).
General use
In more general use, a calibration curve is a curve or table for a measuring instrument which measures some parameter indirectly, giving values for the desired quantity as a function of values of sensor output. For example, a calibration curve can be made for a particular pressure transducer to determine applied pressure from transducer output (a voltage). Such a curve is typically used when an instrument uses a sensor whose calibration varies from one sample to another, or changes with time or use; if sensor output is consistent the instrument would be marked directly in terms of the measured unit.
Method
The operator prepares a series of standards across a range of concentrations near the expected concentration of analyte in the unknown. The concentrations of the standards must lie within the working range of the technique (instrumentation) they are using. Analyzing each of these standards using the chosen technique will produce a series of measurements. For most analyses a plot of instrument response vs. concentration will show a linear relationship. The operator can measure the response of the unknown and, using the calibration curve, can interpolate to find the concentration of analyte.
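The procedure above can be sketched numerically. The standards, signals, and unknown below are invented for illustration, assuming NumPy is available:

```python
# Sketch of a calibration curve: fit a straight line to standards of
# known concentration, then invert it to interpolate the unknown's
# concentration from its measured signal. All data are invented.
import numpy as np

conc = np.array([0.0, 2.0, 4.0, 6.0, 8.0])         # standard concentrations
signal = np.array([0.05, 1.02, 2.01, 2.98, 4.01])  # instrument responses

slope, intercept = np.polyfit(conc, signal, 1)     # linear least-squares fit

unknown_signal = 2.45                              # measured for the unknown
unknown_conc = (unknown_signal - intercept) / slope
print(f"estimated concentration: {unknown_conc:.2f}")
```

Inverting the fitted line is only valid inside the calibrated range; extrapolating beyond the highest standard is exactly what the working-range requirement warns against.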
The data - the concentrations of the analyte and the instrument response for each standard - can be fit to a straight line, using linear regression analysis. This yields a model |
https://en.wikipedia.org/wiki/Clonogenic%20assay | A clonogenic assay is a cell biology technique for studying the effectiveness of specific agents on the survival and proliferation of cells. It is frequently used in cancer research laboratories to determine the effect of drugs or radiation on proliferating tumor cells as well as for titration of Cell-killing Particles (CKPs) in virus stocks. It was first developed by T.T. Puck and Philip I. Marcus at the University of Colorado in 1955.
Although this technique can provide accurate results, the assay is time-consuming to set up and analyze and can only provide data on tumor cells that can grow in culture. The word "clonogenic" refers to the fact that these cells are clones of one another.
Procedure
The experiment involves three major steps:
The treatment is applied to a sample of cells.
The cells are "plated" in a tissue culture vessel and allowed to grow.
The colonies produced are fixed, stained, and counted.
At the conclusion of the experiment, the percentage of cells that survived the treatment is measured. A graphical representation of survival versus drug concentration or dose of ionizing radiation is called a cell survival curve.
For Cell-killing Particle assays, the surviving fraction of cells is used to approximate the Poisson distribution of virus particles amongst cells and therefore determine the number of CKPs encountered by each cell.
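That Poisson approximation can be made explicit: the survivors are the cells that received zero hits, so the surviving fraction S equals the Poisson zero term e^(-m), and the mean number of CKPs per cell is m = -ln(S). A small sketch with invented counts:

```python
# Poisson estimate of cell-killing particles (CKPs) per cell:
# if CKPs hit cells independently, the fraction of cells with zero
# hits -- the survivors -- is S = exp(-m), so m = -ln(S).
# The counts below are invented for illustration.
import math

cells_plated = 1000
colonies = 368               # surviving colonies counted after treatment

surviving_fraction = colonies / cells_plated
ckp_per_cell = -math.log(surviving_fraction)
print(f"mean CKPs per cell: {ckp_per_cell:.2f}")  # ~1.00, since 368/1000 ~ 1/e
```

In practice the surviving fraction must also be corrected by the plating efficiency of untreated control cells, a detail omitted from this sketch.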
Any type of cell could be used in an experiment, but since the goal of these experiments in oncological research is the discovery of more effective cancer treatments, human tumor cells are a typical choice. The cells either come from prepared "cell lines," which have been well-studied and whose general characteristics are known, or from a biopsy of a tumor in a patient. The cells are put in petri dishes or in plates which contain several circular "wells." Particular numbers of cells are plated depending on the experiment; for an experiment involving irradiation it is usual to plate larger numbers of cells with incr |
https://en.wikipedia.org/wiki/Kademlia | Kademlia is a distributed hash table for decentralized peer-to-peer computer networks designed by Petar Maymounkov and David Mazières in 2002. It specifies the structure of the network and the exchange of information through node lookups. Kademlia nodes communicate among themselves using UDP. A virtual or overlay network is formed by the participant nodes. Each node is identified by a number or node ID. The node ID serves not only as identification, but the Kademlia algorithm uses the node ID to locate values (usually file hashes or keywords).
In order to look up the value associated with a given key, the algorithm explores the network in several steps. Each step will find nodes that are closer to the key until the contacted node returns the value or no more closer nodes are found. This is very efficient: like many other distributed hash tables, Kademlia contacts only O(log n) nodes during the search out of a total of n nodes in the system.
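A toy model of this narrowing search is sketched below. It assumes one fact about the protocol not spelled out in the excerpt above: Kademlia measures closeness between IDs as their bitwise XOR interpreted as an integer. It also simplifies the lookup so that each queried node returns some contact strictly closer to the target key.

```python
# Toy Kademlia-style lookup (not the full protocol): hop to strictly
# closer contacts under the XOR metric until no closer node exists.
# With ~log2(n) bits of distance to burn down, only a small fraction
# of the network's nodes is ever contacted.
import random

random.seed(1)
ID_BITS = 16
nodes = sorted(random.sample(range(2**ID_BITS), 512))  # toy network IDs
target = random.randrange(2**ID_BITS)                  # key being looked up

current = random.choice(nodes)
queries = 0
while True:
    d = current ^ target                         # XOR distance to the key
    closer = [n for n in nodes if (n ^ target) < d]
    if not closer:
        break                                    # current is the closest node
    current = random.choice(closer)              # a queried node returns
    queries += 1                                 # some closer contact

print(f"closest node reached after {queries} queries of {len(nodes)} nodes")
```

Even this crude model typically finishes in roughly log2(512) ≈ 9 hops, far fewer than the 512 nodes a flooding search would touch.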
Further advantages are found particularly in the decentralized structure, which increases the resistance against a denial-of-service attack. Even if a whole set of nodes is flooded, this will have limited effect on network availability, since the network will recover itself by knitting the network around these "holes".
I2P's implementation of Kademlia is modified to mitigate Kademlia's vulnerabilities, such as Sybil attacks.
System details
Peer-to-peer networks are made of nodes, by design. The protocols that these nodes use to communicate, and locate information, have become more efficient over time. The first generation peer-to-peer file sharing networks, such as Napster, relied on a central database to co-ordinate lookups on the network. Second generation peer-to-peer networks, such as Gnutella, used flooding to locate files, searching every node on the network. Third generation peer-to-peer networks, such as BitTorrent, use distributed hash tables to look up files in the network. Distributed hash tables store resource locations throughout the network. |
https://en.wikipedia.org/wiki/Luoshu%20Square | The Luoshu (pinyin), Lo Shu (Wade-Giles), or Nine Halls Diagram is an ancient Chinese diagram named for the Luo River near Luoyang, Henan. The Luoshu appears in myths concerning the invention of writing by Cangjie and other culture heroes. It is a unique normal magic square of order three. It is usually paired with the River Map or Hetu (named in reference to the Yellow River) and used with the River Map in various contexts involving Chinese geomancy, numerology, philosophy, and early natural science.
Traditions
The Lo Shu is part of the legacy of ancient Chinese mathematical and divinatory (cf. the I Ching) traditions, and is an important emblem in Feng Shui, the art of geomancy concerned with the placement of objects in relation to the flow of qi, or "natural energy".
History
A Chinese legend concerning the pre-historic Emperor Yu tells of the Lo Shu, often in connection with the Yellow River Map (Hetu) and the eight trigrams. In ancient China there is a legend of a huge deluge: the people offered sacrifices to the god of one of the flooding rivers, the Luo river, to try to calm his anger. A magical turtle emerged from the water with the curiously unnatural Lo Shu pattern on its shell: circular dots representing the integers one through nine are arranged in a three-by-three grid.
Early records dated to 650 BCE are ambiguous, referring to a "river map", but clearly start to refer to a magic square by 80 CE, and explicitly give an example of one since 570 CE. Recent publications have provided support that the Lo Shu Magic Square was an important model for time and space. It served as a basis for city planning, and tomb and temple design. The magic square was incidentally used to designate spaces of political and religious importance.
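The arrangement described above can be checked in a few lines: the digits one through nine in the traditional layout form a normal magic square of order three, every row, column, and diagonal summing to 15.

```python
# The traditional Lo Shu layout: even numbers in the four corners,
# odd numbers forming a cross, 5 at the center.
lo_shu = [[4, 9, 2],
          [3, 5, 7],
          [8, 1, 6]]

rows = [sum(r) for r in lo_shu]
cols = [sum(r[i] for r in lo_shu) for i in range(3)]
diags = [sum(lo_shu[i][i] for i in range(3)),
         sum(lo_shu[i][2 - i] for i in range(3))]

print(rows, cols, diags)  # -> [15, 15, 15] [15, 15, 15] [15, 15]
```

Up to rotation and reflection this is the only way to arrange 1 through 9 in a 3×3 magic square, which is why the Lo Shu is called the unique normal magic square of order three.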
The layout
The odd and even numbers alternate in the periphery of the Lo Shu pattern; the four even numbers are at the four corners, and the five odd numbers (which outnumber the even numbers by one) form a cross in th |
https://en.wikipedia.org/wiki/Type%20%28biology%29 | In biology, a type is a particular specimen (or in some cases a group of specimens) of an organism to which the scientific name of that organism is formally associated. In other words, a type is an example that serves to anchor or centralize the defining features of that particular taxon. In older usage (pre-1900 in botany), a type was a taxon rather than a specimen.
A taxon is a scientifically named grouping of organisms with other like organisms, a set that includes some organisms and excludes others, based on a detailed published description (for example a species description) and on the provision of type material, which is usually available to scientists for examination in a major museum research collection, or similar institution.
Type specimen
According to a precise set of rules laid down in the International Code of Zoological Nomenclature (ICZN) and the International Code of Nomenclature for algae, fungi, and plants (ICN), the scientific name of every taxon is almost always based on one particular specimen, or in some cases specimens. Types are of great significance to biologists, especially to taxonomists. Types are usually physical specimens that are kept in a museum or herbarium research collection, but failing that, an image of an individual of that taxon has sometimes been designated as a type. Describing species and appointing type specimens is part of scientific nomenclature and alpha taxonomy.
When identifying material, a scientist attempts to apply a taxon name to a specimen or group of specimens based on their understanding of the relevant taxa, based on (at least) having read the type description(s), preferably also based on an examination of all the type material of all of the relevant taxa. If there is more than one named type that all appear to be the same taxon, then the oldest name takes precedence and is considered to be the correct name of the material in hand. If on the other hand, the taxon appears never to have been named at all, th |
https://en.wikipedia.org/wiki/Gadget | A gadget is a mechanical device or any ingenious article. Gadgets are sometimes referred to as gizmos.
History
The etymology of the word is disputed. The word first appears as reference to an 18th-century tool in glassmaking that was developed as a spring pontil. As stated in the glass dictionary published by the Corning Museum of Glass, a gadget is a "metal rod with a spring clip that grips the foot of a vessel and so avoids the use of a pontil". Gadgets were first used in the late 18th century. According to the Oxford English Dictionary, there is anecdotal evidence for the use of "gadget" as a placeholder name for a technical item whose precise name one can't remember since the 1850s, with Robert Brown's 1886 book Spunyarn and Spindrift, A sailor boy's log of a voyage out and home in a China tea-clipper containing the earliest known usage in print.
A widely circulated story holds that the word gadget was "invented" when Gaget, Gauthier & Cie, the company behind the repoussé construction of the Statue of Liberty (1886), made a small-scale version of the monument and named it after their firm; however this contradicts the evidence that the word was already used before in nautical circles, and the fact that it did not become popular, at least in the USA, until after World War I. Other sources cite a derivation from the French gâchette which has been applied to various pieces of a firing mechanism, or the French gagée, a small tool or accessory.
The October 1918 issue of Notes and Queries contains a multi-article entry on the word "gadget" (12 S. iv. 187). H. Tapley-Soper of The City Library, Exeter, writes:
A discussion arose at the Plymouth meeting of the Devonshire Association in 1916 when it was suggested that this word should be recorded in the list of local verbal provincialisms. Several members dissented from its inclusion on the ground that it is in common use throughout the country; and a naval officer who was present said that it has for years been a po |
https://en.wikipedia.org/wiki/Dwarfing | Dwarfing is a process in which a breed of animals or cultivar of plants is changed to become significantly smaller than standard members of their species. The effect can be induced through human intervention or non-human processes, and can include genetic, nutritional or hormonal means. Used most specifically, dwarfing includes pathogenic changes in the structure of an organism (for example, the bulldog, a genetically achondroplastic dog breed), in contrast to non-pathogenic proportional reduction in stature (such as the whippet, a small sighthound dog breed).
Animals
In animals, including humans, dwarfism has been described in several ways. Shortened stature can result from growth hormone deficiency, starvation, portal systemic shunts, renal disease, hypothyroidism, diabetes mellitus and other conditions. Any of these conditions can be established in a population through genetic engineering, selective breeding, or insular dwarfism, or some combination of the above.
Dwarfing can produce more practical breeds that can fit in small accommodations, or may appeal aesthetically, as well as other associated side effects. Smaller stature may be a deliberate goal of breeding programs, or it may be a side effect of other breeding goals.
Nonpurposeful dwarfing
In some husbandry conditions, humans created dwarf breeds, or allowed them to develop, without specifically selecting for smaller animals. It is likely that the Shetland sheep breed, Shetland collie breed of dogs, and various pony breeds of horses developed in this manner. In the case of the Shetland sheep and collies, it is likely that environmental conditions, such as a lack of abundant fodder, led to farmers selecting smaller animals who continued to reproduce on limited food over larger animals who did not reproduce well on limited diets. In this case, the emphasis was on selecting for survival and reproduction, not size.
Purposeful dwarfing
Humans have encouraged the deliberate development of dwarf breed |
https://en.wikipedia.org/wiki/Dyscalculia | Dyscalculia is a disability resulting in difficulty learning or comprehending arithmetic, such as difficulty in understanding numbers, learning how to manipulate numbers, performing mathematical calculations, and learning facts in mathematics. It is sometimes colloquially referred to as "math dyslexia", though this analogy is misleading as they are distinct syndromes.
Dyscalculia is associated with dysfunction in the region around the intraparietal sulcus and potentially also the frontal lobe. Dyscalculia does not reflect a general deficit in cognitive abilities or difficulties with time, measurement, and spatial reasoning. Estimates of the prevalence of dyscalculia range between 3 and 6% of the population. In 2015 it was established that 11% of children with dyscalculia also have ADHD. Dyscalculia has also been associated with Turner syndrome and people who have spina bifida.
Mathematical disabilities can occur as the result of some types of brain injury, in which case the term acalculia is used instead of dyscalculia, which is of innate, genetic or developmental origin.
Signs and symptoms
The earliest appearance of dyscalculia is typically a deficit in subitizing, the ability to know, from a brief glance and without counting, how many objects there are in a small group. Children as young as five can subitize six objects, especially while looking at the dots on the sides of dice. However, children with dyscalculia can subitize fewer objects and even when correct take longer to identify the number than their age-matched peers. Dyscalculia often looks different at different ages. It tends to become more apparent as children get older; however, symptoms can appear as early as preschool. Common symptoms of dyscalculia are having difficulty with mental math, trouble analyzing time and reading an analog clock, struggle with motor sequencing that involves numbers, and often counting on fingers when adding numbers.
Common symptoms
Dyscalculia is characterized by d |
https://en.wikipedia.org/wiki/Indian%20Script%20Code%20for%20Information%20Interchange | Indian Standard Code for Information Interchange (ISCII) is a coding scheme for representing various writing systems of India. It encodes the main Indic scripts and a Roman transliteration. The supported scripts are: Bengali–Assamese, Devanagari, Gujarati, Gurmukhi, Kannada, Malayalam, Oriya, Tamil, and Telugu. ISCII does not encode the writing systems of India that are based on Persian, but its writing system switching codes nonetheless provide for Kashmiri, Sindhi, Urdu, Persian, Pashto and Arabic. The Persian-based writing systems were subsequently encoded in the PASCII encoding.
ISCII has not been widely used outside certain government institutions, although a variant without the switching mechanism was used on classic Mac OS (Mac OS Devanagari), and it has now been rendered largely obsolete by Unicode. Unicode uses a separate block for each Indic writing system, and largely preserves the ISCII layout within each block.
Background
The Brahmi-derived writing systems have similar structure. So ISCII encodes letters with the same phonetic value at the same code point, overlaying the various scripts. For example, the ISCII codes 0xB3 0xDB represent [ki]. This will be rendered as കി in Malayalam, कि in Devanagari, as ਕਿ in Gurmukhi, and as கி in Tamil. The writing system can be selected in rich text by markup or in plain text by means of the code described below.
One motivation for the use of a single encoding is the idea that it will allow easy transliteration from one writing system to another. However, there are enough incompatibilities that this is not really a practical idea.
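The shared layout can still be sketched in code. Because Unicode largely preserves the ISCII layout inside each Indic block, a naive "transliteration" can simply shift code points from one block to another; as the text notes, real transliteration is not this simple, and not every offset corresponds to a valid character in every script.

```python
# Naive Indic transliteration by Unicode block offset, exploiting the
# ISCII-derived parallel layout of the Indic blocks. Illustrative only:
# many code points have no counterpart in other scripts.
DEVANAGARI, TAMIL, MALAYALAM = 0x0900, 0x0B80, 0x0D00  # block base points

def shift_block(text, src_base, dst_base):
    """Shift each code point from one Indic block to another (naive)."""
    return "".join(chr(ord(ch) - src_base + dst_base) for ch in text)

ki = "\u0915\u093F"  # Devanagari letter KA + vowel sign I, i.e. [ki]
print(shift_block(ki, DEVANAGARI, TAMIL))      # -> the Tamil [ki]
print(shift_block(ki, DEVANAGARI, MALAYALAM))  # -> the Malayalam [ki]
```

This reproduces the article's own example: the same letter-plus-vowel pair rendered as कि, கி, or കി depending only on which block the code points land in.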
ISCII is an 8-bit encoding. The lower 128 code points are plain ASCII, the upper 128 code points are ISCII-specific. In addition to the code points representing characters, ISCII makes use of a code point with mnemonic that indicates that the following byte contains one of two kinds of information. One set of values changes the writing system until the next writing system indicator or end-of- |
https://en.wikipedia.org/wiki/Backup | In information technology, a backup, or data backup is a copy of computer data taken and stored elsewhere so that it may be used to restore the original after a data loss event. The verb form, referring to the process of doing so, is "back up", whereas the noun and adjective form is "backup". Backups can be used to recover data after its loss from data deletion or corruption, or to recover data from an earlier time. Backups provide a simple form of disaster recovery; however not all backup systems are able to reconstitute a computer system or other complex configuration such as a computer cluster, active directory server, or database server.
A backup system contains at least one copy of all data considered worth saving. The data storage requirements can be large. An information repository model may be used to provide structure to this storage. There are different types of data storage devices used for copying backups of data that is already in secondary storage onto archive files. There are also different ways these devices can be arranged to provide geographic dispersion, data security, and portability.
Data is selected, extracted, and manipulated for storage. The process can include methods for dealing with live data, including open files, as well as compression, encryption, and de-duplication. Additional techniques apply to enterprise client-server backup. Backup schemes may include dry runs that validate the reliability of the data being backed up. There are limitations and human factors involved in any backup scheme.
Storage
A backup strategy requires an information repository, "a secondary storage space for data" that aggregates backups of data "sources". The repository could be as simple as a list of all backup media (DVDs, etc.) and the dates produced, or could include a computerized index, catalog, or relational database.
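The simplest repository the text describes, a list of backup media and the dates they were produced, might be sketched as follows; all media names, paths, and dates are invented.

```python
# Minimal "information repository": a catalog of backup media and the
# dates each backup of a source was produced. Purely illustrative.
from datetime import date

catalog = []  # the repository itself

def record_backup(medium, source, produced):
    """Add one backup run to the catalog."""
    catalog.append({"medium": medium, "source": source, "produced": produced})

record_backup("DVD-042", "/home/alice", date(2024, 3, 1))
record_backup("DVD-043", "/home/alice", date(2024, 4, 1))

# Restore planning: find the most recent backup of a given source.
latest = max((e for e in catalog if e["source"] == "/home/alice"),
             key=lambda e: e["produced"])
print(latest["medium"])  # -> DVD-043
```

A computerized index or relational database, as the text mentions, is essentially this structure with more metadata (checksums, file lists, retention tags) and durable storage.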
The backup data needs to be stored, requiring a backup rotation scheme, which is a system of backing up data to computer media that l |
https://en.wikipedia.org/wiki/Secretion | Secretion is the movement of material from one point to another, such as a secreted chemical substance from a cell or gland. In contrast, excretion is the removal of certain substances or waste products from a cell or organism. The classical mechanism of cell secretion is via secretory portals at the plasma membrane called porosomes. Porosomes are permanent cup-shaped lipoprotein structures embedded in the cell membrane, where secretory vesicles transiently dock and fuse to release intra-vesicular contents from the cell.
In bacteria, secretion means the transport or translocation of effector molecules, for example proteins, enzymes, or toxins (such as cholera toxin in the pathogenic bacterium Vibrio cholerae), from the interior (cytoplasm or cytosol) of a bacterial cell to its exterior. Secretion is an important mechanism by which bacteria adapt to, and survive in, their natural environment.
In eukaryotic cells
Mechanism
Eukaryotic cells, including human cells, have a highly evolved process of secretion. Proteins targeted for the outside are synthesized by ribosomes docked to the rough endoplasmic reticulum (ER). As they are synthesized, these proteins translocate into the ER lumen, where they are glycosylated and where molecular chaperones aid protein folding. Misfolded proteins are usually identified here and retrotranslocated by ER-associated degradation to the cytosol, where they are degraded by a proteasome. The vesicles containing the properly folded proteins then enter the Golgi apparatus.
In the Golgi apparatus, the glycosylation of the proteins is modified and further post-translational modifications, including cleavage and functionalization, may occur. The proteins are then moved into secretory vesicles which travel along the cytoskeleton to the edge of the cell. More modification can occur in the secretory vesicles (for example insulin is cleaved from proinsulin in the secretory vesicles).
Eventu |
https://en.wikipedia.org/wiki/Duality%20%28electricity%20and%20magnetism%29 | In physics, the electromagnetic dual concept is based on the idea that, in the static case, electromagnetism has two separate facets: electric fields and magnetic fields. Expressions in one of these will have a directly analogous, or dual, expression in the other. The reason for this can ultimately be traced to special relativity, where applying the Lorentz transformation to the electric field will transform it into a magnetic field. These are special cases of duality in mathematics.
The electric field (E) is the dual of the magnetic field (H).
The electric displacement field (D) is the dual of the magnetic flux density (B).
Faraday's law of induction is the dual of Ampère's circuital law.
Gauss's law for electric field is the dual of Gauss's law for magnetism.
The electric potential is the dual of the magnetic potential.
Permittivity is the dual of permeability.
Electrostriction is the dual of magnetostriction.
Piezoelectricity is the dual of piezomagnetism.
Ferroelectricity is the dual of ferromagnetism.
An electrostatic motor is the dual of a magnetic motor.
Electrets are the dual of permanent magnets.
The Faraday effect is the dual of the Kerr effect.
The Aharonov–Casher effect is the dual of the Aharonov–Bohm effect.
The hypothetical magnetic monopole is the dual of electric charge.
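The correspondences above can be made explicit in the source-free case: Maxwell's equations in free space (no charges or currents) are left unchanged by the duality substitution, written here in one common sign convention (signs vary by author):

```latex
\mathbf{E}\to\mathbf{H},\qquad
\mathbf{H}\to-\mathbf{E},\qquad
\mathbf{D}\to\mathbf{B},\qquad
\mathbf{B}\to-\mathbf{D},\qquad
\varepsilon\leftrightarrow\mu .
```

For example, Faraday's law $\nabla\times\mathbf{E}=-\partial\mathbf{B}/\partial t$ maps to $\nabla\times\mathbf{H}=\partial\mathbf{D}/\partial t$, the source-free Ampère law, consistent with the pairing of the two laws in the list above.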
See also
Maxwell's equations
Duality (electrical circuits)
List of dualities
Electromagnetism
Duality theories |
https://en.wikipedia.org/wiki/List%20of%20hybrid%20vehicles | This is a list of hybrid vehicles. A hybrid could theoretically have any two power sources, but hybrid vehicles have typically combined an internal combustion engine with a battery and electric motor(s).
This list includes both regular hybrid electric vehicles and plug-in hybrids, in chronological order of first production. Since the first hybrid cars were built in 1899, there have been a number of hybrid vehicles, but there was a marked increase in interest in, and development of, hybrid vehicles for personal transport in the late 1990s.
Automobiles
Overview by decade
Early designs: 1899–1917
1899: Carmaker Pieper of Belgium introduced a vehicle with an under-seat electric motor and a gasoline engine. It used the internal combustion engine to charge its batteries at cruise speed and used both motors to accelerate or climb a hill. Auto-Mixte, also of Belgium, built vehicles from 1906 to 1912 under the Pieper patents.
1900: Ferdinand Porsche, then a young engineer at Jacob Lohner & Co., creates the first gasoline–electric hybrid vehicles.
1901: Jacob Lohner & Co. produces the first Lohner–Porsche, a series of gasoline–electric hybrid vehicles based on employee Ferdinand Porsche's novel drivetrain. These vehicles had a driveline that was either gas or electric, but not both at the same time.
1905 or sooner: Fischer Motor Vehicle Co., Hoboken, NJ produces and sells a petrol–electric omnibus in the United States and in London, including battery storage.
1907: AL (French car)
1917: Woods Dual Power Car had a driveline similar to the current GMC/Chevrolet Silverado hybrid pickup truck.
Buses
Date unknown
Castrosua Tempus (in use in Barcelona, Granada, Lugo, Madrid, Santiago de Compostela, Sevilla)
Irisbus Hynobis (Castellón)
MAN Lion's City Hybrid:
Italy: Trento
Portugal: Lisboa and Oporto.
Spain: Barcelona, Cádiz, Madrid, Málaga, Murcia, San Sebastián, Sevilla and Valladolid
Mercedes Benz/Orion VII Hybrid
North American Bus Industries 60-BRT Hybrid
S |
https://en.wikipedia.org/wiki/Spectral%20graph%20theory | In mathematics, spectral graph theory is the study of the properties of a graph in relationship to the characteristic polynomial, eigenvalues, and eigenvectors of matrices associated with the graph, such as its adjacency matrix or Laplacian matrix.
The adjacency matrix of a simple undirected graph is a real symmetric matrix and is therefore orthogonally diagonalizable; its eigenvalues are real algebraic integers.
While the adjacency matrix depends on the vertex labeling, its spectrum is a graph invariant, although not a complete one.
Spectral graph theory is also concerned with graph parameters that are defined via multiplicities of eigenvalues of matrices associated to the graph, such as the Colin de Verdière number.
Cospectral graphs
Two graphs are called cospectral or isospectral if the adjacency matrices of the graphs are isospectral, that is, if the adjacency matrices have equal multisets of eigenvalues.
Cospectral graphs need not be isomorphic, but isomorphic graphs are always cospectral.
Graphs determined by their spectrum
A graph G is said to be determined by its spectrum if any other graph with the same spectrum as G is isomorphic to G.
Some first examples of families of graphs that are determined by their spectrum include:
The complete graphs.
The finite starlike trees.
Cospectral mates
A pair of graphs are said to be cospectral mates if they have the same spectrum, but are non-isomorphic.
The smallest pair of cospectral mates is {K1,4, C4 ∪ K1}, comprising the 5-vertex star and the graph union of the 4-vertex cycle and the single-vertex graph, as reported by Collatz and Sinogowitz in 1957.
The smallest pair of polyhedral cospectral mates are enneahedra with eight vertices each.
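The cospectrality of K1,4 and C4 ∪ K1 can be checked without any linear-algebra library: two n-vertex graphs are cospectral exactly when the traces of A, A², ..., Aⁿ agree (the traces are the power sums of the eigenvalues, which determine the characteristic polynomial via Newton's identities). A small pure-Python sketch:

```python
def adjacency_power_traces(adj):
    """Return [tr(A^1), ..., tr(A^n)] for an adjacency matrix
    given as a list of rows. Two n-vertex graphs are cospectral
    iff these power sums agree for k = 1..n."""
    n = len(adj)
    traces = []
    power = [row[:] for row in adj]          # power holds A^k
    for _ in range(n):
        traces.append(sum(power[i][i] for i in range(n)))
        power = [[sum(power[i][k] * adj[k][j] for k in range(n))
                  for j in range(n)] for i in range(n)]
    return traces

# K_{1,4}: star with center vertex 0 joined to vertices 1..4
star = [[0, 1, 1, 1, 1],
        [1, 0, 0, 0, 0],
        [1, 0, 0, 0, 0],
        [1, 0, 0, 0, 0],
        [1, 0, 0, 0, 0]]

# C4 ∪ K1: 4-cycle on vertices 0..3 plus isolated vertex 4
c4_k1 = [[0, 1, 0, 1, 0],
         [1, 0, 1, 0, 0],
         [0, 1, 0, 1, 0],
         [1, 0, 1, 0, 0],
         [0, 0, 0, 0, 0]]
```

Both graphs have spectrum {2, 0, 0, 0, −2}, so their trace sequences coincide even though the graphs are clearly non-isomorphic (one is connected, the other is not).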
Finding cospectral graphs
Almost all trees are cospectral, i.e., as the number of vertices grows, the fraction of trees for which there exists a cospectral tree goes to 1.
A pair of regular graphs are cospectral if and only if their complements are cospectral.
A pair |
https://en.wikipedia.org/wiki/List%20of%20multivariable%20calculus%20topics | This is a list of multivariable calculus topics. See also multivariable calculus, vector calculus, list of real analysis topics, list of calculus topics.
Closed and exact differential forms
Contact (mathematics)
Contour integral
Contour line
Critical point (mathematics)
Curl (mathematics)
Current (mathematics)
Curvature
Curvilinear coordinates
Del
Differential form
Differential operator
Directional derivative
Divergence
Divergence theorem
Double integral
Equipotential surface
Euler's theorem on homogeneous functions
Exterior derivative
Flux
Frenet–Serret formulas
Gauss's law
Gradient
Green's theorem
Green's identities
Harmonic function
Helmholtz decomposition
Hessian matrix
Hodge star operator
Inverse function theorem
Irrotational vector field
Isoperimetry
Jacobian matrix
Lagrange multiplier
Lamellar vector field
Laplacian
Laplacian vector field
Level set
Line integral
Matrix calculus
Mixed derivatives
Monkey saddle
Multiple integral
Newtonian potential
Parametric equation
Parametric surface
Partial derivative
Partial differential equation
Potential
Real coordinate space
Saddle point
Scalar field
Solenoidal vector field
Stokes' theorem
Submersion
Surface integral
Symmetry of second derivatives
Taylor's theorem
Total derivative
Vector field
Vector operator
Vector potential
https://en.wikipedia.org/wiki/X.Org%20Server | X.Org Server is the free and open-source implementation of the X Window System display server stewarded by the X.Org Foundation.
Implementations of the client-side X Window System protocol exist in the form of X11 libraries, which serve as helpful APIs for communicating with the X server. Two such major X libraries exist for X11. The first of these libraries was Xlib, the original C language X11 API, but another C language X library, XCB, was created later in 2001. Other smaller X libraries exist, both as interfaces for Xlib and XCB in other languages, and as smaller standalone X libraries.
The services with which the X.Org Foundation supports the X server include packaging the releases, certification (for a fee), evaluating improvements to the code, developing the web site, and handling the distribution of monetary donations. The releases are coded, documented, and packaged by global developers.
Software architecture
The X.Org Server implements the server side of the X Window System core protocol version 11 (X11) and extensions to it, e.g. RandR.
Version 1.16.0 integrated support for systemd-based launching and management, improving boot performance and reliability.
Device Independent X (DIX)
The Device Independent X (DIX) is the part of the X.Org Server that interacts with clients and implements software rendering. The main loop and the event delivery are part of the DIX.
An X server has a tremendous amount of functionality that must be implemented to support the X core protocol. This includes code tables, glyph rasterization and caching, XLFDs, and the core rendering API which draws graphics primitives.
Device Dependent X (DDX)
The Device Dependent X (DDX) is the part of the X server that interacts with the hardware. In the X.Org Server source code, each directory under "hw" corresponds to one DDX. Hardware comprises graphics cards as well as mice and keyboards. Each driver is hardware-specific and implemented as a separate loadable module.
2D |
https://en.wikipedia.org/wiki/Software%20engineering%20demographics | Software engineers form part of the workforce around the world. There are an estimated 26.9 million professional software engineers in the world as of 2022, up from 21 million in 2016.
By Country
United States
In 2022, there were an estimated 4.4 million professional software engineers in North America, up from around 3.87 million in 2016. With about 152 million people employed in the US workforce, software engineers account for roughly 3% of it.
Summary
Based on data from the U.S. Bureau of Labor Statistics from 2002, about 612,000 software engineers worked in the U.S., about one out of every 200 workers. There were 55% to 60% as many software engineers as all traditional engineers. This comparison holds whether one compares the number of practitioners, managers, educators, or technicians/programmers. Software engineering had 612,000 practitioners, 264,790 managers, 16,495 educators, and 457,320 programmers.
Software Engineers Vs. Traditional Engineers
The following two tables compare the number of software engineers (611,900 in 2002) versus the number of traditional engineers (1,157,020 in 2002).
There are another 1,500,000 people in system analysis, system administration, and computer support, many of whom might be called software engineers. Many systems analysts manage software development teams and analysis is an important software engineering role, so many of them might be considered software engineers in the near future. This means that the number of software engineers may actually be much higher.
The number of software engineers declined by 5 to 10 percent from 2000 to 2002.
Computer Managers Versus Construction and Engineering Managers
Computer and information system managers (264,790) manage software projects, as well as computer operations. Similarly, Construction and engineering managers (413,750) oversee engineering projects, manufacturing plants |
https://en.wikipedia.org/wiki/F%20connector | The F connector (also F-type connector) is a coaxial RF connector commonly used for "over the air" terrestrial television, cable television and universally for satellite television and cable modems, usually with RG-6/U cable or with RG-59/U cable.
The F connector was invented by Eric E. Winston in the early 1950s while working for Jerrold Electronics on their development of cable television. In the 1970s, it became commonplace on VHF, and later UHF, television antenna connections in the United States, as coaxial cables replaced twin-lead. It is now specified in IEC 61169-24:2019.
Description
The F connector is an inexpensive, gendered, threaded, compression connector for radio frequency signals. It has good 75 Ω impedance match for frequencies well over 1 GHz and has usable bandwidth up to several GHz.
Connectors mate using a 3/8-32 UNEF thread. The female connector has a socket for the center conductor and external threads. The male connector has a center pin, and a captive nut with internal threads.
The design allows for low-cost construction, where cables are terminated almost exclusively with male connectors. The coaxial cable center conductor forms the pin, and cable dielectric extends up to the mating face of the connector. Thus, the male connector consists of only a body, which is generally crimped onto or screwed over the cable shielding braid, and a captive nut, neither of which require tight tolerances. Push-on versions are also available.
Female connectors are typically used on bulkheads or as couplers, often being secured with the same threads as for the connectors. They can be manufactured as a single piece, with center sockets and dielectric, entirely at the factory where tolerances can easily be controlled.
This design is sensitive to the surface properties of the inner conductor (which must be solid wire, not stranded).
Weatherproofing
The F connector is not weatherproof. Neither the threads nor the joint between male connector body and capt |
https://en.wikipedia.org/wiki/Real-time%20clock | A real-time clock (RTC) is an electronic device (most often in the form of an integrated circuit) that measures the passage of time.
Although the term often refers to the devices in personal computers, servers and embedded systems, RTCs are present in almost any electronic device which needs to keep accurate time of day.
Terminology
The term real-time clock is used to avoid confusion with ordinary hardware clocks which are only signals that govern digital electronics, and do not count time in human units. RTC should not be confused with real-time computing, which shares its three-letter acronym but does not directly relate to time of day.
Purpose
Although keeping time can be done without an RTC, using one has benefits:
Low power consumption (important when running from alternate power)
Frees the main system for time-critical tasks
Sometimes more accurate than other methods
A GPS receiver can shorten its startup time by comparing the current time, according to its RTC, with the time at which it last had a valid signal. If it has been less than a few hours, then the previous ephemeris is still usable.
Some motherboards are made without real-time clocks; the clock is omitted either to save money or because the system has no need to keep time of day.
Power source
RTCs often have an alternate source of power, so they can continue to keep time while the primary source of power is off or unavailable. This alternate source of power is normally a lithium battery in older systems, but some newer systems use a supercapacitor, because they are rechargeable and can be soldered. The alternate power source can also supply power to battery backed RAM.
Timing
Most RTCs use a crystal oscillator, but some have the option of using the power line frequency. The crystal frequency is usually 32.768 kHz, the same frequency used in quartz clocks and watches. Being exactly 2¹⁵ cycles per second, it is a convenient rate to use with simple binary counter circuits. The low frequency saves power, while remain
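The convenience of 32.768 kHz can be illustrated with a toy 15-bit binary counter: because the crystal frequency is exactly 2¹⁵ Hz, the counter overflows exactly once per second. This is a model of the principle, not of any particular RTC chip:

```python
CRYSTAL_HZ = 32_768  # = 2**15, the standard watch-crystal frequency

def tick(counter, seconds):
    """Advance a 15-bit binary counter by one crystal cycle.

    The counter wraps after 2**15 cycles, so at 32.768 kHz the
    wrap-around (rollover to zero) happens exactly once per second.
    """
    counter = (counter + 1) & 0x7FFF   # keep only 15 bits
    if counter == 0:                   # rollover = one-second tick
        seconds += 1
    return counter, seconds
```

Feeding the counter two seconds' worth of crystal cycles advances the seconds count by exactly two, with no division or calibration logic needed.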
https://en.wikipedia.org/wiki/List%20of%20commutative%20algebra%20topics | Commutative algebra is the branch of abstract algebra that studies commutative rings, their ideals, and modules over such rings. Both algebraic geometry and algebraic number theory build on commutative algebra. Prominent examples of commutative rings include polynomial rings, rings of algebraic integers, including the ordinary integers , and p-adic integers.
Research fields
Combinatorial commutative algebra
Invariant theory
Active research areas
Serre's multiplicity conjectures
Homological conjectures
Basic notions
Commutative ring
Module (mathematics)
Ring ideal, maximal ideal, prime ideal
Ring homomorphism
Ring monomorphism
Ring epimorphism
Ring isomorphism
Zero divisor
Chinese remainder theorem
Classes of rings
Field (mathematics)
Algebraic number field
Polynomial ring
Integral domain
Boolean algebra (structure)
Principal ideal domain
Euclidean domain
Unique factorization domain
Dedekind domain
Nilpotent elements and reduced rings
Dual numbers
Tensor product of fields
Tensor product of R-algebras
Constructions with commutative rings
Quotient ring
Field of fractions
Product of rings
Annihilator (ring theory)
Integral closure
Localization and completion
Completion (ring theory)
Formal power series
Localization of a ring
Local ring
Regular local ring
Localization of a module
Valuation (mathematics)
Discrete valuation
Discrete valuation ring
I-adic topology
Weierstrass preparation theorem
Finiteness properties
Noetherian ring
Hilbert's basis theorem
Artinian ring
Ascending chain condition (ACC) and descending chain condition (DCC)
Ideal theory
Fractional ideal
Ideal class group
Radical of an ideal
Hilbert's Nullstellensatz
Homological properties
Flat module
Flat map
Flat map (ring theory)
Projective module
Injective module
Cohen-Macaulay ring
Gorenstein ring
Complete intersection ring
Koszul complex
Hilbert's syzygy theorem
Quillen–Suslin theorem
Dimension theory
Height (ring theory)
|
https://en.wikipedia.org/wiki/Saturated%20model | In mathematical logic, and particularly in its subfield model theory, a saturated model M is one that realizes as many complete types as may be "reasonably expected" given its size. For example, an ultrapower model of the hyperreals is ℵ₁-saturated, meaning that every descending nested sequence of internal sets has a nonempty intersection.
Definition
Let κ be a finite or infinite cardinal number and M a model in some first-order language. Then M is called κ-saturated if for all subsets A ⊆ M of cardinality less than κ, the model M realizes all complete types over A. The model M is called saturated if it is |M|-saturated where |M| denotes the cardinality of M. That is, it realizes all complete types over sets of parameters of size less than |M|. According to some authors, a model M is called countably saturated if it is ℵ₁-saturated; that is, it realizes all complete types over countable sets of parameters. According to others, it is countably saturated if it is countable and saturated.
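The definition can be written symbolically, where S(A) denotes the set of complete 1-types over the parameter set A:

```latex
M \text{ is } \kappa\text{-saturated}
\iff
\forall A \subseteq M \;\bigl(\, |A| < \kappa \implies
\text{every } p(x) \in S(A) \text{ is realized in } M \,\bigr).
```

Saturation is then the special case κ = |M|, the largest cardinal for which this condition can nontrivially hold.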
Motivation
The seemingly more intuitive notion—that all complete types of the language are realized—turns out to be too weak (and is appropriately named weak saturation, which is the same as 1-saturation). The difference lies in the fact that many structures contain elements that are not definable (for example, any transcendental element of R is, by definition of the word, not definable in the language of fields). However, they still form a part of the structure, so we need types to describe relationships with them. Thus we allow sets of parameters from the structure in our definition of types. This argument allows us to discuss specific features of the model that we may otherwise miss—for example, a bound on a specific increasing sequence cn can be expressed as realizing the type which uses countably many parameters. If the sequence is not definable, this fact about the structure cannot be described using the base language, so a weakly saturated structure may not bound t |
https://en.wikipedia.org/wiki/IBM%201440 | The IBM 1440 computer was announced by IBM October 11, 1962. This member of the IBM 1400 series was described many years later as "essentially a lower-cost version of the 1401", and programs for the 1440 could easily be adapted to run on the IBM 1401.
Despite what IBM described as "special features ... to meet immediate data processing requirements and ... to absorb increased demands," the 1440 did not quite attain the same commercial success as the 1401, and it was withdrawn on February 8, 1971.
Author Emerson Pugh wrote that the 1440 "did poorly in the marketplace because it was initially offered without the ability to attach magnetic tape units as well", that is, without offering both tape and disk.
System configuration
The IBM 1441 processing unit (CPU) contained arithmetic and logic circuits and up to 16,000 alphanumeric storage positions.
The console was either a Model 1 or, when an electric typewriter was added, a Model 2, of the IBM 1447 operator's console.
Peripherals
The following peripherals were available:
IBM 1442 card reader/punch
Model 1 read up to 300 cards a minute and punched up to 80 columns a second
Model 2 read up to 400 cards a minute and punched up to 160 columns a second
Model 4, a read-only unit, read up to 400 cards/minute.
An IBM 1440 could be configured with a choice of:
Model 4 (lowest cost)
Model 4, for reading, and a Model 1 or 2 as a second unit
IBM 1443 flying typebar printer
Basic rate of 150 lines a minute and up to 430 lines a minute, depending on typebar
Interchangeable typebars having character sets of 13, 39, 52, and 63 characters
IBM 1311 disk drive
Capacity for 2 million characters in each removable pack
With optional "Move Track Record" feature, capacity is increased to 2,980,000 characters in each pack
Each pack weighed less than 10 lb (5 kg).
Up to five 1311 drives
Tape drives
The IBM 7335 tape drive, available for use with the 1440, was introduced by IBM on October 10, 1963.
Software
IBM 1440 Autoco |
https://en.wikipedia.org/wiki/Phenology | Phenology is the study of periodic events in biological life cycles and how these are influenced by seasonal and interannual variations in climate, as well as habitat factors (such as elevation).
Examples include the date of emergence of leaves and flowers, the first flight of butterflies, the first appearance of migratory birds, the date of leaf colouring and fall in deciduous trees, the dates of egg-laying of birds and amphibia, or the timing of the developmental cycles of temperate-zone honey bee colonies. In the scientific literature on ecology, the term is used more generally to indicate the time frame for any seasonal biological phenomena, including the dates of last appearance (e.g., the seasonal phenology of a species may be from April through September).
Because many such phenomena are very sensitive to small variations in climate, especially to temperature, phenological records can be a useful proxy for temperature in historical climatology, especially in the study of climate change and global warming. For example, viticultural records of grape harvests in Europe have been used to reconstruct a record of summer growing season temperatures going back more than 500 years.
In addition to providing a longer historical baseline than instrumental measurements, phenological observations provide high temporal resolution of ongoing changes related to global warming.
Etymology
The word is derived from the Greek φαίνω (phainō), "to show, to bring to light, make to appear" + λόγος (logos), amongst others "study, discourse, reasoning" and indicates that phenology has been principally concerned with the dates of first occurrence of biological events in their annual cycle.
The term was first used by Charles François Antoine Morren, a professor of botany at the University of Liège (Belgium). Morren was a student of Adolphe Quetelet. Quetelet made plant phenological observations at the Royal Observatory of Belgium in Brussels. He is considered "one of 19th century t |
https://en.wikipedia.org/wiki/Bolivian%20hemorrhagic%20fever | Bolivian hemorrhagic fever (BHF), also known as black typhus or Ordog Fever, is a hemorrhagic fever and zoonotic infectious disease originating in Bolivia after infection by Machupo mammarenavirus.
BHF was first identified in 1963 as an ambisense RNA virus of the Arenaviridae family, by a research group led by Karl Johnson. The mortality rate is estimated at 5 to 30 percent. Due to its pathogenicity, Machupo virus requires Biosafety Level Four conditions, the highest level.
During the period between February and March 2007, some 20 suspected BHF cases (3 fatal) were reported to the Servicio Departamental de Salud (SEDES) in Beni Department, Bolivia. In February 2008, at least 200 suspected new cases (12 fatal) were reported to SEDES. In November 2011, a second case was confirmed near the departmental capital of Trinidad, and a serosurvey was conducted to determine the extent of Machupo virus infections in the department. A SEDES expert involved in the survey expressed his concerns about the expansion of the virus to other provinces outside the endemic regions of Mamoré and Iténez provinces.
Epidemiology
History
The disease was first encountered in 1962, in the Bolivian village of San Joaquín, hence the name "Bolivian" Hemorrhagic Fever. When initial investigations failed to find an arthropod carrier, other sources were sought before finally determining that the disease was carried by infected mice. Although mosquitoes were not the cause as originally suspected, the extermination of mosquitoes using DDT to prevent malaria proved to be indirectly responsible for the outbreak in that the accumulation of DDT in various animals along the food chain led to a shortage of cats in the village; subsequently, a mouse plague erupted in the village, leading to an epidemic.
Vectors
The vector is the large vesper mouse (Calomys callosus), a rodent indigenous to northern Bolivia. Infected animals are asymptomatic and shed the virus in excreta, thereby infecting humans. Evid |
https://en.wikipedia.org/wiki/Cinnamaldehyde | Cinnamaldehyde is an organic compound with the formula(C9H8O) C6H5CH=CHCHO. Occurring naturally as predominantly the trans (E) isomer, it gives cinnamon its flavor and odor. It is a phenylpropanoid that is naturally synthesized by the shikimate pathway. This pale yellow, viscous liquid occurs in the bark of cinnamon trees and other species of the genus Cinnamomum. The essential oil of cinnamon bark is about 90% cinnamaldehyde. Cinnamaldehyde decomposes to styrene because of oxidation as a result of bad storage or transport conditions. Styrene especially forms in high humidity and high temperatures. This is the reason why cinnamon contains small amounts of styrene.
Structure and synthesis
Cinnamaldehyde was isolated from cinnamon essential oil in 1834 by Jean-Baptiste Dumas and Eugène-Melchior Péligot and synthesized in the laboratory by the Italian chemist Luigi Chiozza in 1854.
The natural product is trans-cinnamaldehyde. The molecule consists of a benzene ring attached to an unsaturated aldehyde. As such, the molecule can be viewed as a derivative of acrolein. Its color is due to the π → π* transition: increased conjugation in comparison with acrolein shifts this band towards the visible.
Biosynthesis
Cinnamaldehyde occurs widely, and closely related compounds give rise to lignin. All such compounds are biosynthesized starting from phenylalanine, which undergoes conversion.
The biosynthesis of cinnamaldehyde begins with deamination of L-phenylalanine into cinnamic acid by the action of phenylalanine ammonia lyase (PAL). PAL catalyzes this reaction by a non-oxidative deamination. This deamination relies on the MIO prosthetic group of PAL. PAL gives rise to trans-cinnamic acid. In the second step, 4-coumarate–CoA ligase (4CL) converts cinnamic acid to cinnamoyl-CoA by an acid–thiol ligation. 4CL uses ATP to catalyze the formation of cinnamoyl-CoA. 4CL effects this reaction in two steps. 4CL forms a hydroxycinnamate–AMP anhydride, followed by a nucleophile atta |
https://en.wikipedia.org/wiki/Metal%20Gear%202%3A%20Solid%20Snake | Metal Gear 2: Solid Snake is a 1990 action-adventure stealth video game developed and published by Konami for the MSX2 computer platform. It serves as a direct sequel to the MSX2 version of the original Metal Gear, written and designed by series's creator Hideo Kojima, who conceived the game in response to Snake's Revenge, a separately-produced sequel that was being developed at the time for the NES specifically for the North American and European markets. The MSX2 version of Solid Snake was only released in Japan, although Kojima would later direct another sequel titled Metal Gear Solid, which was released worldwide for the PlayStation in 1998 to critical acclaim. This later led to Solid Snake being re-released alongside the original Metal Gear as additional content in the Subsistence version of Metal Gear Solid 3 for the PlayStation 2 in 2005. It was also included in the HD remastered ports of Metal Gear Solid 3 released for PlayStation 3, PlayStation Vita, and Xbox 360, and was given a stand-alone re-release in Japan as a downloadable game for mobile phones and the Wii Virtual Console.
Set in 1999, a few years after the events of the original game, Solid Snake must infiltrate a heavily defended territory known as Zanzibar Land to rescue a kidnapped scientist and destroy the revised "Metal Gear D". The game significantly evolved the stealth-based game system of its predecessor "in almost every way", introduced a complex storyline dealing with themes such as the nature of warfare and nuclear proliferation, and is considered "one of the best 8-bit games ever made."
Gameplay
Solid Snake builds upon the stealth-based gameplay system of its predecessor. As in the original Metal Gear, the player's objective is to infiltrate the enemy's stronghold, while avoiding detection from soldiers, cameras, infrared sensors and other surveillance devices. The biggest change was made to the enemies' abilities. Instead of remaining stationed in one screen like in the fi
https://en.wikipedia.org/wiki/Rod%20%28unit%29 | The rod, perch, or pole (sometimes also lug) is a surveyor's tool and unit of length of various historical definitions. In British imperial and US customary units it is defined as 16 1⁄2 feet, equal to exactly 1⁄320 of a mile, or 5 1⁄2 yards (a quarter of a surveyor's chain), and is exactly 5.0292 meters. The rod is useful as a unit of length because integer multiples of it can form one acre of square measure (area). The 'perfect acre' is a rectangular area of 43,560 square feet, bounded by sides 660 feet (a furlong) long and 66 feet (a chain) wide (220 yards by 22 yards) or, equivalently, 40 rods by 4 rods. An acre is therefore 160 square rods or 10 square chains.
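The acre arithmetic above can be checked directly. A quick sanity check using the standard imperial value of one rod (16.5 feet, consistent with the 40-rod by 4-rod acre described in the text):

```python
ROD_FT = 16.5                 # one rod in feet
CHAIN_FT = 4 * ROD_FT         # one chain = 4 rods = 66 ft
FURLONG_FT = 10 * CHAIN_FT    # one furlong = 10 chains = 660 ft

# the 'perfect acre': 660 ft (furlong) by 66 ft (chain)
acre_sq_ft = FURLONG_FT * CHAIN_FT

assert acre_sq_ft == 43_560               # 43,560 square feet
assert acre_sq_ft == 160 * ROD_FT ** 2    # 160 square rods
assert acre_sq_ft == 10 * CHAIN_FT ** 2   # 10 square chains
```

All three expressions agree, which is exactly why integer multiples of the rod tile an acre so conveniently.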
The name perch derives from the Ancient Roman unit, the pertica.
The measure also has a relationship with the military pike of about the same size. Both measures date from the sixteenth century, when the pike was still utilized in national armies. The tool has largely been supplanted by electronic tools such as surveyor lasers (lidar) and optical target devices for surveying lands. Surveyors' rods and chains are still used in rough terrain with heavy overgrowth where laser or other optical measurements are difficult or impossible. In dialectal English the term lug has also been used, although the Oxford English Dictionary states that this unit, while usually of 16 1⁄2 feet, may also be of 15, 18, 20, or 21 feet.
In the United States until 1 January 2023, the rod was often defined as 16.5 US survey feet, or approximately 5.029 210 058 m.
History
In England, the perch was officially discouraged in favour of the rod as early as the 15th century; however, local customs maintained its use. In the 13th century perches were recorded in several different lengths, and even as late as 1820 a House of Commons report notes a variety of lengths still in local use. In Ireland, a perch was standardized at 21 feet, making an Irish chain, furlong and mile proportionately longer by 27.27% than the "standard" English measure.
Until English King Henry V |
https://en.wikipedia.org/wiki/Captive%20portal | A captive portal is a web page accessed with a web browser that is displayed to newly connected users of a Wi-Fi or wired network before they are granted broader access to network resources. Captive portals are commonly used to present a landing or log-in page which may require authentication, payment, acceptance of an end-user license agreement, acceptable use policy, survey completion, or other valid credentials that both the host and user agree to adhere to. Captive portals are used for a broad range of mobile and pedestrian broadband services – including cable and commercially provided Wi-Fi and home hotspots. A captive portal can also be used to provide access to enterprise or residential wired networks, such as apartment houses, hotel rooms, and business centers.
The captive portal is presented to the client and is stored either at the gateway or on a web server hosting the web page. Depending on the feature set of the gateway, websites or TCP ports can be allow-listed so that the user would not have to interact with the captive portal in order to use them. The MAC address of attached clients can also be used to bypass the login process for specified devices.
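The gateway logic described above — allow-listed destinations and MAC-based bypass — can be sketched as a single decision function. All names and values below are illustrative assumptions, not a real gateway API:

```python
# Minimal sketch of a captive-portal gateway decision: traffic is redirected
# to the portal unless the destination is allow-listed or the client's MAC
# address has already authenticated. Hosts, ports, and MACs are made up.
ALLOWED_HOSTS = {"portal.example.net", "payments.example.net"}
ALLOWED_PORTS = {53}                      # e.g. DNS must work pre-login
AUTHENTICATED_MACS = {"aa:bb:cc:dd:ee:ff"}

def gateway_allows(client_mac: str, dest_host: str, dest_port: int) -> bool:
    if client_mac.lower() in AUTHENTICATED_MACS:
        return True                        # MAC-based bypass of the portal
    return dest_host in ALLOWED_HOSTS or dest_port in ALLOWED_PORTS

# An unauthenticated client reaches only the portal infrastructure:
assert gateway_allows("11:22:33:44:55:66", "portal.example.net", 443)
assert not gateway_allows("11:22:33:44:55:66", "example.org", 443)
# A device whose MAC was registered bypasses the portal entirely:
assert gateway_allows("AA:BB:CC:DD:EE:FF", "example.org", 443)
```

Real gateways make this decision at the packet-filter level rather than per-request, but the allow-list structure is the same.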
WISPr refers to this web browser-based authentication method as the Universal Access Method (UAM).
Uses
Captive portals are primarily used in open wireless networks where the users are shown a welcome message informing them of the conditions of access (allowed ports, liability, etc.). Administrators tend to do this so that their own users take responsibility for their actions and to avoid any legal responsibility. Whether this delegation of responsibility is legally valid is a matter of debate. Some networks may also require entering the user's cell phone number or identity information so that administrators can provide information to authorities in case there was illegal activity on the network.
Often captive portals are used for marketing and commercial communication purposes. Access to the Interne |
https://en.wikipedia.org/wiki/Distributed%20Sender%20Blackhole%20List | The Distributed Sender Blackhole List was a Domain Name System-based Blackhole List that listed IP addresses of insecure e-mail hosts. DSBL could be used by server administrators to tag or block e-mail messages coming from insecure servers, a frequent source of spam.
The DSBL published its lists as domain name system (DNS) zones that could be queried by anyone on the Internet.
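Querying such a DNS-published list follows the standard DNSBL convention: the client reverses the IPv4 octets, appends the list's zone, and resolves the resulting name with an ordinary DNS A lookup. A sketch of the name construction (the zone name is illustrative; DSBL itself is defunct):

```python
# How a DNSBL/RBL query name is formed: reverse the IPv4 octets and
# prepend them to the list's DNS zone.
def dnsbl_query_name(ip: str, zone: str = "list.dnsbl.example") -> str:
    octets = ip.split(".")
    if len(octets) != 4:
        raise ValueError("expected a dotted-quad IPv4 address")
    return ".".join(reversed(octets)) + "." + zone

# 192.0.2.99 would be checked by resolving this name; by convention,
# an answer in 127.0.0.0/8 means "listed", NXDOMAIN means "not listed".
assert dnsbl_query_name("192.0.2.99") == "99.2.0.192.list.dnsbl.example"
```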
DSBL is a dead RBL as of May 2008. Its administrators continued to run their authoritative nameservers for several months after their decommissioning announcement; as of March 9, 2009, even those servers are offline. At this point, using any *.dsbl.org lookups in an RBL check results in DNS failures and can even prevent an SMTP server from starting a conversation.
Blocking
It is not possible for DSBL to block or intercept mail. E-mail is sometimes blocked or bounced with a message referencing DSBL. These messages were not blocked by DSBL; they were blocked by the administrator of the receiving mail server, who chose to reject messages coming from a potentially-insecure IP address listed by DSBL. See DNSBL for a description of how mail transfer agents interact with these lists.
Methodology
DSBL lists IP addresses of hosts that are demonstrated to be insecure. DSBL defines an insecure host as one that allows e-mail to be sent from anyone to anyone else. Normal servers only send mail from their own users to anyone else. Insecure servers are commonly abused by spammers, although DSBL does not claim that the hosts have sent spam or have been abused by spammers; only that they could be.
DSBL builds its lists by receiving specially-formatted "listme" e-mails triggered by testers. DSBL itself does not test hosts for security vulnerabilities. The testers use software that causes insecure servers to send a message to an e-mail address monitored by DSBL. The message includes a time-sensitive cryptographically secure cookie to prevent servers from being listed by mistake. When a valid listme message i |
https://en.wikipedia.org/wiki/Authentication%20protocol | An authentication protocol is a type of computer communications protocol or cryptographic protocol specifically designed for the transfer of authentication data between two entities. It allows the receiving entity to authenticate the connecting entity (e.g. a client connecting to a server) as well as authenticate itself to the connecting entity (the server to the client) by declaring the type of information needed for authentication as well as its syntax. It is the most important layer of protection needed for secure communication within computer networks.
Purpose
With the increasing amount of sensitive information accessible over networks, the need to keep unauthorized persons from this data emerged. Stealing someone's identity is easy in the computing world, so special verification methods are needed to establish whether the person or computer requesting data really is who it claims to be. The task of the authentication protocol is to specify the exact series of steps needed to carry out the authentication. It has to comply with the main protocol principles:
A protocol has to involve two or more parties, and everyone involved must know the protocol in advance.
All the included parties have to follow the protocol.
A protocol has to be unambiguous - each step must be defined precisely.
A protocol must be complete - must include a specified action for every possible situation.
An illustration of password-based authentication using simple authentication protocol:
Alice (an entity wishing to be verified) and Bob (an entity verifying Alice's identity) are both aware of the protocol they agreed on using. Bob has Alice's password stored in a database for comparison.
Alice sends Bob her password in a packet complying with the protocol rules.
Bob checks the received password against the one stored in his database. Then he sends a packet saying "Authentication successful" or "Authentication failed" based on the result.
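The exchange above can be sketched in code. This is an illustrative toy mirroring the article's deliberately weak example (plaintext password comparison, made-up names), not a protocol anyone should deploy:

```python
# Toy model of the Alice/Bob password exchange described above.
# Real systems never store or transmit plaintext passwords; this only
# mirrors the article's simple example protocol.
STORED_PASSWORDS = {"alice": "s3cret"}   # Bob's database (assumed contents)

def bob_verify(packet: dict) -> str:
    """Bob checks the received password against his database."""
    expected = STORED_PASSWORDS.get(packet["user"])
    if expected is not None and packet["password"] == expected:
        return "Authentication successful"
    return "Authentication failed"

# Alice sends her credentials in a packet complying with the protocol rules:
assert bob_verify({"user": "alice", "password": "s3cret"}) == "Authentication successful"
assert bob_verify({"user": "alice", "password": "wrong"}) == "Authentication failed"
```

A real protocol would at minimum hash-and-salt the stored password and use a constant-time comparison; the point here is only the agreed sequence of steps.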
This is an exampl |
https://en.wikipedia.org/wiki/Bit-flipping%20attack | A bit-flipping attack is an attack on a cryptographic cipher in which the attacker can change the ciphertext in such a way as to result in a predictable change of the plaintext, although the attacker is not able to learn the plaintext itself. Note that this type of attack is not—directly—against the cipher itself (as cryptanalysis of it would be), but against a particular message or series of messages. In the extreme, this could become a Denial of service attack against all messages on a particular channel using that cipher.
The attack is especially dangerous when the attacker knows the format of the message. In such a situation, the attacker can turn it into a similar message but one in which some important information is altered. For example, a change in the destination address might alter the message route in a way that will force re-encryption with a weaker cipher, thus possibly making it easier for an attacker to decipher the message.
When applied to digital signatures, the attacker might be able to change a promissory note stating "I owe you $10.00" into one stating "I owe you $10,000".
Stream ciphers, such as RC4, are vulnerable to a bit-flipping attack, as are some block cipher modes of operation. See stream cipher attack. A keyed message authentication code, digital signature, or other authentication mechanism allows the recipient to detect if any bits were flipped in transit.
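The attack is easy to demonstrate against any XOR-based stream cipher: since ciphertext = plaintext XOR keystream, XORing a chosen difference into the ciphertext XORs the same difference into the decrypted plaintext. A minimal sketch (toy fixed keystream standing in for a real cipher; the message and offsets are assumptions for illustration):

```python
# Bit-flipping against an XOR stream cipher: flipping ciphertext bits flips
# exactly the corresponding plaintext bits, without knowledge of the key.
def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

keystream = bytes([0x5A] * 32)          # stand-in for a real keystream
plaintext = b"I owe you $10.00"
ciphertext = xor_bytes(plaintext, keystream[:len(plaintext)])

# The attacker knows the message format (and hence where the amount sits),
# but not the key: XOR in the difference between '1' and '9' at that offset.
pos = plaintext.index(b"1")             # offset known from the format
delta = bytearray(len(ciphertext))
delta[pos] = ord("1") ^ ord("9")
tampered = xor_bytes(ciphertext, bytes(delta))

# The legitimate recipient decrypts the tampered ciphertext:
assert xor_bytes(tampered, keystream[:len(tampered)]) == b"I owe you $90.00"
```

This is exactly why an authenticated mode (or a separate MAC) is needed: the recipient above has no way to tell the message was altered.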
References
External links
Wireless LAN Security White Paper
Cryptographic attacks |
https://en.wikipedia.org/wiki/List%20of%20mathematical%20topics%20in%20quantum%20theory | This is a list of mathematical topics in quantum theory, by Wikipedia page. See also list of functional analysis topics, list of Lie group topics, list of quantum-mechanical systems with analytical solutions.
Mathematical formulation of quantum mechanics
bra–ket notation
canonical commutation relation
complete set of commuting observables
Heisenberg picture
Hilbert space
Interaction picture
Measurement in quantum mechanics
quantum field theory
quantum logic
quantum operation
Schrödinger picture
semiclassical
statistical ensemble
wavefunction
wave–particle duality
Wightman axioms
WKB approximation
Schrödinger equation
quantum mechanics, matrix mechanics, Hamiltonian (quantum mechanics)
particle in a box
particle in a ring
particle in a spherically symmetric potential
quantum harmonic oscillator
hydrogen atom
ring wave guide
particle in a one-dimensional lattice (periodic potential)
Fock symmetry in theory of hydrogen
Symmetry
identical particles
angular momentum
angular momentum operator
rotational invariance
rotational symmetry
rotation operator
translational symmetry
Lorentz symmetry
Parity transformation
Noether's theorem
Noether charge
Spin (physics)
isospin
Pauli matrices
scale invariance
spontaneous symmetry breaking
supersymmetry breaking
Quantum states
quantum number
Pauli exclusion principle
quantum indeterminacy
uncertainty principle
wavefunction collapse
zero-point energy
bound state
coherent state
squeezed coherent state
density state
Fock state, Fock space
vacuum state
quasinormal mode
no-cloning theorem
quantum entanglement
Dirac equation
spinor, spinor group, spinor bundle
Dirac sea
Spin foam
Poincaré group
gamma matrices
Dirac adjoint
Wigner's classification
anyon
Interpretations of quantum mechanics
Copenhagen interpretation
locality principle
Bell's theorem
Bell test loopholes
CHSH inequality
hidden variable theory
path integral formulation, quantum action
Bohm interp |
https://en.wikipedia.org/wiki/Equation%20solving | In mathematics, to solve an equation is to find its solutions, which are the values (numbers, functions, sets, etc.) that fulfill the condition stated by the equation, consisting generally of two expressions related by an equals sign. When seeking a solution, one or more variables are designated as unknowns. A solution is an assignment of values to the unknown variables that makes the equality in the equation true. In other words, a solution is a value or a collection of values (one for each unknown) such that, when substituted for the unknowns, the equation becomes an equality.
A solution of an equation is often called a root of the equation, particularly but not only for polynomial equations. The set of all solutions of an equation is its solution set.
An equation may be solved either numerically or symbolically. Solving an equation numerically means that only numbers are admitted as solutions. Solving an equation symbolically means that expressions can be used for representing the solutions.
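The numeric/symbolic distinction can be made concrete: a symbolic solver returns an expression, while a numeric method only narrows down a number. A bisection sketch in plain Python (the equation x² − 2 = 0 is an illustrative choice, not one from the article):

```python
# Numerical solving: bisection narrows an interval [lo, hi] known to
# bracket a root of f, returning a number rather than an expression.
def bisect(f, lo: float, hi: float, tol: float = 1e-12) -> float:
    assert f(lo) * f(hi) <= 0, "interval must bracket a root"
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if f(lo) * f(mid) <= 0:   # root lies in the left half
            hi = mid
        else:                     # root lies in the right half
            lo = mid
    return (lo + hi) / 2

# x^2 - 2 = 0 has the symbolic solution x = sqrt(2); numerically:
root = bisect(lambda x: x * x - 2, 0.0, 2.0)
assert abs(root - 2 ** 0.5) < 1e-9
```

A symbolic system would instead return the expression √2 itself, valid independently of any numerical precision.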
For example, the equation x + y = 2x − 1 is solved for the unknown x by the expression x = y + 1, because substituting y + 1 for x in the equation results in (y + 1) + y = 2(y + 1) − 1, a true statement. It is also possible to take the variable y to be the unknown, and then the equation is solved by y = x − 1. Or x and y can both be treated as unknowns, and then there are many solutions to the equation; a symbolic solution is (x, y) = (a + 1, a), where the variable a may take any value. Instantiating a symbolic solution with specific numbers gives a numerical solution; for example, a = 0 gives (x, y) = (1, 0), and a = 1 gives (x, y) = (2, 1).
The distinction between known variables and unknown variables is generally made in the statement of the problem, by phrases such as "an equation in x and y", or "solve for x and y", which indicate the unknowns, here x and y.
However, it is common to reserve x, y, z, ... to denote the unknowns, and to use a, b, c, ... to denote the known variables, which are often called parameters. This is typically the case when considering polynomial equations, such as quad
https://en.wikipedia.org/wiki/Degree%20of%20truth | In classical logic, propositions are typically unambiguously considered as being true or false. For instance, the proposition one is both equal and not equal to itself is regarded as simply false, being contrary to the Law of Noncontradiction; while the proposition one is equal to one is regarded as simply true, by the Law of Identity. However, some mathematicians, computer scientists, and philosophers have been attracted to the idea that a proposition might be more or less true, rather than wholly true or wholly false. Consider My coffee is hot.
In mathematics, this idea can be developed in terms of fuzzy logic. In computer science, it has found application in artificial intelligence. In philosophy, the idea has proved particularly appealing in the case of vagueness. Degrees of truth is an important concept in law.
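In the fuzzy-logic development of this idea, a proposition's truth value is a real number in [0, 1], and the classical connectives are commonly generalized with min/max (the Gödel/Zadeh choice, one of several standard options). A sketch, with the example truth values as assumptions:

```python
# Degrees of truth in the fuzzy-logic sense: truth values in [0, 1],
# with the common min/max generalizations of AND, OR, NOT.
def f_and(a: float, b: float) -> float: return min(a, b)
def f_or(a: float, b: float) -> float:  return max(a, b)
def f_not(a: float) -> float:           return 1.0 - a

hot = 0.7      # "My coffee is hot" is mostly, but not wholly, true
strong = 0.4   # illustrative second proposition

assert f_and(hot, strong) == 0.4
assert f_or(hot, strong) == 0.7
assert abs(f_not(hot) - 0.3) < 1e-9
# Classical two-valued logic is recovered at the endpoints 0 and 1:
assert f_and(1.0, 0.0) == 0.0 and f_or(1.0, 0.0) == 1.0
```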
Degree of truth is an older concept than conditional probability. Rather than determining an objective probability, it expresses only a subjective assessment, and the two notions are easily confused, especially by novices in the field. To avoid this misconception, it can help to treat probability theory as the preferred paradigm for handling uncertainty.
In adjudicative processes, 'substantive truth' is distinct from 'formal legal truth' which comes in four degrees: hearsay, balance of probabilities, proven beyond reasonable doubt and absolute truth (knowledge reserved unto God).
See also
Language
Meaning (linguistics) — Semiotics
Technology
Artificial intelligence
Logic
Bivalence
Fuzzy logic
Fuzzy set
Half-truth
Multi-valued logic
Paradox of the heap
Truth
Truth value
Vagueness
Books
Vagueness and Degrees of Truth
Bibliography
References
Fuzzy logic
Logical truth |
https://en.wikipedia.org/wiki/HDMI | High-Definition Multimedia Interface (HDMI) is a proprietary audio/video interface for transmitting uncompressed video data and compressed or uncompressed digital audio data from an HDMI-compliant source device, such as a display controller, to a compatible computer monitor, video projector, digital television, or digital audio device. HDMI is a digital replacement for analog video standards.
HDMI implements the ANSI/CTA-861 standard, which defines video formats and waveforms, transport of compressed and uncompressed LPCM audio, auxiliary data, and implementations of the VESA EDID. CEA-861 signals carried by HDMI are electrically compatible with the CEA-861 signals used by the Digital Visual Interface (DVI). No signal conversion is necessary, nor is there a loss of video quality when a DVI-to-HDMI adapter is used. The Consumer Electronics Control (CEC) capability allows HDMI devices to control each other when necessary and allows the user to operate multiple devices with one handheld remote control device.
Several versions of HDMI have been developed and deployed since the initial release of the technology, occasionally introducing new connectors with smaller form factors, but all versions still use the same basic pinout and are compatible with all connector types and cables. Other than improved audio and video capacity, performance, resolution and color spaces, newer versions have optional advanced features such as 3D, Ethernet data connection, and CEC extensions.
Production of consumer HDMI products started in late 2003. In Europe, either DVI-HDCP or HDMI is included in the HD ready in-store labeling specification for TV sets for HDTV, formulated by EICTA with SES Astra in 2005. HDMI began to appear on consumer HDTVs in 2004 and on camcorders and digital still cameras in 2006. Nearly 10 billion HDMI devices have been sold.
History
The HDMI founders were Hitachi, Panasonic, Philips, Silicon Image, Sony, Thomson, and Toshiba. Digital Content Protection, LLC provi |
https://en.wikipedia.org/wiki/List%20of%20mathematical%20topics%20in%20classical%20mechanics | This is a list of mathematical topics in classical mechanics, by Wikipedia page. See also list of variational topics, correspondence principle.
Newtonian physics
Newton's laws of motion
Inertia
Kinematics, rigid body
Momentum, kinetic energy
Parallelogram of force
Circular motion
Rotational speed
Angular speed
Angular momentum
torque
angular acceleration
moment of inertia
parallel axes rule
perpendicular axes rule
stretch rule
centripetal force, centrifugal force, Reactive centrifugal force
Laplace–Runge–Lenz vector
Euler's disk
elastic potential energy
Mechanical equilibrium
D'Alembert's principle
Degrees of freedom (physics and chemistry)
Frame of reference
Inertial frame of reference
Galilean transformation
Principle of relativity
Conservation laws
Conservation of momentum
Conservation of linear momentum
Conservation of angular momentum
Conservation of energy
Potential energy
Conservative force
Conservation of mass
Law of universal gravitation
Projectile motion
Kepler's laws of planetary motion
Escape velocity
Potential well
Weightlessness
Lagrangian point
N-body problem
Kolmogorov–Arnold–Moser theorem
Virial theorem
Gravitational binding energy
Speed of gravity
Newtonian limit
Hill sphere
Roche lobe
Roche limit
Hamiltonian mechanics
Phase space
Symplectic manifold
Liouville's theorem (Hamiltonian)
Poisson bracket
Poisson algebra
Poisson manifold
Antibracket algebra
Hamiltonian constraint
Moment map
Contact geometry
Analysis of flows
Nambu mechanics
Lagrangian mechanics
Action (physics)
Lagrangian
Euler–Lagrange equations
Noether's theorem
Classical mechanics |
https://en.wikipedia.org/wiki/Kane%20quantum%20computer | The Kane quantum computer is a proposal for a scalable quantum computer proposed by Bruce Kane in 1998, who was then at the University of New South Wales. Often thought of as a hybrid between quantum dot and nuclear magnetic resonance (NMR) quantum computers, the Kane computer is based on an array of individual phosphorus donor atoms embedded in a pure silicon lattice. Both the nuclear spins of the donors and the spins of the donor electrons participate in the computation.
Unlike many quantum computation schemes, the Kane quantum computer is in principle scalable to an arbitrary number of qubits. This is possible because qubits may be individually addressed by electrical means.
Description
The original proposal calls for phosphorus donors to be placed in an array with a spacing of 20 nm, approximately 20 nm below the surface. An insulating oxide layer is grown on top of the silicon. Metal A gates are deposited on the oxide above each donor, and J gates between adjacent donors.
The phosphorus donors are isotopically pure ³¹P, which have a nuclear spin of 1/2. The silicon substrate is isotopically pure ²⁸Si which has nuclear spin 0. Using the nuclear spin of the P donors as a method to encode qubits has two major advantages. Firstly, the state has an extremely long decoherence time, perhaps on the order of 10¹⁸ seconds at millikelvin temperatures. Secondly, the qubits may be manipulated by applying an oscillating magnetic field, as in typical NMR proposals. By altering the voltage on the A gates, it should be possible to alter the Larmor frequency of individual donors. This allows them to be addressed individually, by bringing specific donors into resonance with the applied oscillating magnetic field.
Nuclear spins alone will not interact significantly with other nuclear spins 20 nm away. Nuclear spin is useful to perform single-qubit operations, but to make a quantum computer, two-qubit operations are also required. This is the role of electron spin in this desi |
https://en.wikipedia.org/wiki/Confusion%20and%20diffusion | In cryptography, confusion and diffusion are two properties of the operation of a secure cipher identified by Claude Shannon in his 1945 classified report A Mathematical Theory of Cryptography. These properties, when present, work together to thwart the application of statistics and other methods of cryptanalysis.
Confusion in a symmetric cipher is obscuring the local correlation between the input (plaintext) and output (ciphertext) by varying the application of the key to the data, while diffusion is hiding the plaintext statistics by spreading it over a larger area of ciphertext. Although ciphers can be confusion-only (substitution cipher, one-time pad) or diffusion-only (transposition cipher), any "reasonable" block cipher uses both confusion and diffusion. These concepts are also important in the design of cryptographic hash functions and pseudorandom number generators, where decorrelation of the generated values is the main feature; diffusion (and its avalanche effect) is also applicable to non-cryptographic hash functions.
Definition
Confusion
Confusion means that each binary digit (bit) of the ciphertext should depend on several parts of the key, obscuring the connections between the two.
The property of confusion hides the relationship between the ciphertext and the key.
This property makes it difficult to find the key from the ciphertext: if a single bit of the key is changed, most or all of the bits of the ciphertext will be affected.
Confusion increases the ambiguity of ciphertext and it is used by both block and stream ciphers.
In substitution–permutation networks, confusion is provided by substitution boxes.
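The combined effect of confusion and diffusion is directly observable in a cryptographic hash function, where (as noted above) these concepts also apply. Using SHA-256 as a stand-in, flipping a single input bit flips roughly half of the 256 output bits — the avalanche effect:

```python
import hashlib

# Avalanche behaviour in a cryptographic hash, a design consequence of
# confusion and diffusion: one flipped input bit changes roughly half
# of the 256 output bits of SHA-256.
def as_int(h: bytes) -> int:
    return int.from_bytes(h, "big")

msg = b"confusion and diffusion"
flipped = bytes([msg[0] ^ 0x01]) + msg[1:]    # flip exactly one input bit

diff = as_int(hashlib.sha256(msg).digest()) ^ as_int(hashlib.sha256(flipped).digest())
changed = bin(diff).count("1")

# For a well-diffusing function we expect on the order of 128 of 256 bits.
assert 80 < changed < 176
print(changed, "of 256 output bits changed")
```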
Diffusion
Diffusion means that if we change a single bit of the plaintext, then about half of the bits in the ciphertext should change, and similarly, if we change one bit of the ciphertext, then about half of the plaintext bits should change. This is equivalent to the expectation that encryption schemes exhibit an av |
https://en.wikipedia.org/wiki/Human%20body%20weight | Human body weight is a person's mass or weight.
Strictly speaking, body weight is the measurement of weight without items located on the person. In practice, however, body weight may be measured with clothes on, but without shoes or heavy accessories such as mobile phones and wallets, using manual or digital weighing scales. Excess or reduced body weight is regarded as an indicator of a person's health, with body volume measurement providing an extra dimension by calculating the distribution of body weight.
Average adult human weight varies by continent, being lowest in Asia and Africa and highest in North America, with men on average weighing more than women.
Estimation in children
There are a number of methods to estimate weight in children for circumstances (such as emergencies) when actual weight cannot be measured. Most involve a parent or health care provider guessing the child's weight through weight-estimation formulas. These formulas base their findings on the child's age and tape-based systems of weight estimation. Of the many formulas that have been used for estimating body weight, some include the Advanced Pediatric Life Support formula, the Leffler formula, and Theron formula. There are also several types of tape-based systems for estimating children's weight, with the most well-known being the Broselow tape. The Broselow tape is based on length with weight read from the appropriate color area. Newer systems, such as the PAWPER tape, make use of a simple two-step process to estimate weight: the length-based weight estimation is modified according to the child's body habitus to increase the accuracy of the final weight prediction.
The Leffler formula is used for children 0–10 years of age. In those less than a year old, it is
m = am/2 + 4
and for those 1–10 years old, it is
m = 2ay + 10
where m is the number of kilograms the child weighs and am and ay respectively are the number of months or years old the child is.
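With those symbols, the Leffler rule as it is commonly cited translates directly into code. The coefficients below are the commonly quoted ones and are an assumption of this sketch; it is illustrative only and not for clinical use:

```python
# Leffler weight estimate, as commonly cited:
#   m = am/2 + 4   (under 1 year, am in months)
#   m = 2*ay + 10  (ages 1-10, ay in years)
# Illustrative only; never a substitute for measured weight.
def leffler_weight_kg(age_months=None, age_years=None) -> float:
    if age_months is not None:
        return age_months / 2 + 4
    if age_years is not None and 1 <= age_years <= 10:
        return 2 * age_years + 10
    raise ValueError("formula applies to ages 0-10 only")

assert leffler_weight_kg(age_months=6) == 7.0    # 6-month-old: ~7 kg
assert leffler_weight_kg(age_years=5) == 20.0    # 5-year-old: ~20 kg
```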
The Theron formula is
m = e^(0.175571·ay + 2.197099)
where m and ay are as |
https://en.wikipedia.org/wiki/IBM%20hexadecimal%20floating-point | Hexadecimal floating point (now called HFP by IBM) is a format for encoding floating-point numbers first introduced on the IBM System/360 computers, and supported on subsequent machines based on that architecture, as well as machines which were intended to be application-compatible with System/360.
In comparison to IEEE 754 floating point, the HFP format has a longer significand and a shorter exponent. All HFP formats have 7 bits of exponent with a bias of 64. The normalized range of representable numbers is from 16^−65 to 16^63 (approx. 5.39761 × 10^−79 to 7.237005 × 10^75).
The number is represented as the following formula: (−1)^sign × 0.significand × 16^(exponent − 64).
Single-precision 32-bit
A single-precision HFP number (called "short" by IBM) is stored in a 32-bit word:
{|
|- style="text-align:center"
|style="width:20px"|1
|style="width:20px"|
|style="width:50px"|7
|style="width:20px"|
|style="width:20px"|
|style="width:210px"|24
|style="width:20px"|
|style="text-align:left"|(width in bits)
|- style="text-align:center"
|colspan="1" style="text-align:center;background-color:#FC9"|S
|colspan="3" style="text-align:center;background-color:#99F"|Exp
|colspan="3" style="text-align:center;background-color:#9F9"|Fraction
|colspan="1" style="text-align:center;background-color:#FFF"|
|- style="text-align:center"
|31
|30
|...
|24
|23
|...
|0
|align="left"|(bit index)*
|-
|colspan="8"| * IBM documentation numbers the bits from left to right, so that the most significant bit is designated as bit number 0.
|}
In this format the initial bit is not suppressed, and the
radix (hexadecimal) point is set to the left of the significand (fraction in IBM documentation and the figures).
Since the base is 16, the exponent in this form is about twice as large as the equivalent in IEEE 754; in order to have a similar exponent range in binary, 9 exponent bits would be required.
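The layout described above translates directly into a decoder. A sketch for the 32-bit "short" format (the second test value, 0xC276A000, is the bit pattern for −118.625 in this format):

```python
# Decode an IBM HFP "short" (32-bit) value: 1 sign bit, 7-bit exponent
# biased by 64, and a 24-bit fraction with the hexadecimal radix point
# to its left (i.e. value = (-1)^s * 0.F * 16^(e-64)).
def decode_hfp_short(word: int) -> float:
    sign = -1.0 if (word >> 31) & 1 else 1.0
    exponent = (word >> 24) & 0x7F
    fraction = (word & 0xFFFFFF) / float(1 << 24)   # 0.F in base 16
    return sign * fraction * 16.0 ** (exponent - 64)

# 0x41100000: exponent 65, fraction 1/16, value = (1/16) * 16^1 = 1.0
assert decode_hfp_short(0x41100000) == 1.0
# 0xC276A000: sign 1, exponent 66, fraction 0x76A000/2^24 = 0.46337890625,
# value = -(0.46337890625 * 256) = -118.625
assert decode_hfp_short(0xC276A000) == -118.625
```

Both test values are exactly representable in binary floating point, so the comparisons are exact.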
Example
Consider encoding the value −118.625 as an HFP single-precision floating-point value.
The value is ne |
https://en.wikipedia.org/wiki/Donatello%20%28Teenage%20Mutant%20Ninja%20Turtles%29 | Donatello, nicknamed Don or Donnie/Donny, is a superhero and one of the four main characters of the Teenage Mutant Ninja Turtles comics and all related media. He is the smartest and often gentlest of his brothers, wearing a purple mask over his eyes. He wields a bō staff, his primary signature weapon in all media.
He is the adoptive and mutated son of Master Splinter, and the brother of Leonardo, Raphael and Michelangelo. Like all of the brothers, he is named after a Renaissance artist; in this case, he is named after Italian sculptor Donatello. He is the favorite turtle of co-creator Peter Laird, who served as the basis of Donatello's personality.
Donatello is a gifted polymath, possessing a natural aptitude for science and technology, and often speaks in technobabble. He is the most logical and rational of his brothers, possessing a high sense of ingenuity and serving as the second-in-command of the team. He is responsible for all the technological innovations the team utilizes, such as intel gathering, vehicles, homemade bioweapons, and everything in between, and is adept at virtually every scientific and mathematical discipline. Donatello is not as rowdy and violent as his brothers, but he can get a little annoyed with them on occasion. However, as a natural introvert, he is far quieter and more reserved than his brothers, rarely losing his temper. Donatello is calm, sensible, friendly, and gentle, and does not get into many confrontations with his brothers. He does not share his brothers' love of combat, instead opting for alternative methods of problem-solving and interests that stimulate the inquiring mind. He is more interested in his work than in his ninjutsu, but he still attends ninja practice and works hard there as well as on his projects.
Comic books
Mirage Comics
In the comics, Donatello is depicted as the calmest turtle. While the comics' portrayal of the team has no official command structure, in the early stories he is depict |
https://en.wikipedia.org/wiki/Vanishing%20point | A vanishing point is a point on the image plane of a perspective rendering where the two-dimensional perspective projections of mutually parallel lines in three-dimensional space appear to converge. When the set of parallel lines is perpendicular to a picture plane, the construction is known as one-point perspective, and their vanishing point corresponds to the oculus, or "eye point", from which the image should be viewed for correct perspective geometry. Traditional linear drawings use objects with one to three sets of parallels, defining one to three vanishing points.
Italian humanist polymath and architect Leon Battista Alberti first introduced the concept in his treatise on perspective in art, De pictura, written in 1435.
Vector notation
The vanishing point may also be referred to as the "direction point", as lines having the same directional vector, say D, will have the same vanishing point. Mathematically, let q ≡ (x, y, f) be a point lying on the image plane, where f is the focal length (of the camera associated with the image), and let vq ≡ (x/h, y/h, f/h) be the unit vector associated with q, where h = √(x² + y² + f²). If we consider a straight line in space S with the unit vector ns ≡ (nx, ny, nz) and its vanishing point vs, the unit vector associated with vs is equal to ns, assuming both point towards the image plane.
When the image plane is parallel to two world-coordinate axes, lines parallel to the axis that is cut by this image plane will have images that meet at a single vanishing point. Lines parallel to the other two axes will not form vanishing points, as they are parallel to the image plane. This is one-point perspective. Similarly, when the image plane intersects two world-coordinate axes, lines parallel to those axes will meet at two vanishing points in the picture plane. This is called two-point perspective. In three-point perspective the image plane intersects the x, y, and z axes, and therefore lines parallel to these axes intersect, resulting in three different vanishing points.
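Under a pinhole model with focal length f, the image of a point moving along direction D = (dx, dy, dz) tends to (f·dx/dz, f·dy/dz) whenever dz ≠ 0, which is why all lines sharing D share one vanishing point, while lines with dz = 0 are parallel to the image plane and have none. A sketch (the symbols here are this sketch's own conventions, not the article's):

```python
# Vanishing point of a family of parallel 3D lines under a pinhole camera
# with focal length f: project P + t*D and let t -> infinity; the base
# point P drops out, leaving only the direction D = (dx, dy, dz).
def vanishing_point(d, f=1.0):
    dx, dy, dz = d
    if dz == 0:
        return None        # parallel to the image plane: no vanishing point
    return (f * dx / dz, f * dy / dz)

# All lines with this direction converge to the same image point,
# regardless of where each line starts:
assert vanishing_point((1.0, 2.0, 4.0)) == (0.25, 0.5)
# Lines running parallel to the picture plane never converge:
assert vanishing_point((1.0, 0.0, 0.0)) is None
# Lines along the optical axis vanish at the principal point:
assert vanishing_point((0.0, 0.0, 1.0)) == (0.0, 0.0)
```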
Theorem
The vanishing point theo |
https://en.wikipedia.org/wiki/Lorenzo%27s%20oil | Lorenzo's oil is a liquid solution of 4 parts glycerol trioleate and 1 part glycerol trierucate, which are the triacylglycerol forms of oleic acid and erucic acid. It is prepared from olive oil and rapeseed oil.
It is used in the investigational treatment of asymptomatic patients with adrenoleukodystrophy (ALD), a nervous system disorder.
The development of the oil was led by Augusto and Michaela Odone after their son Lorenzo was diagnosed with the disease in 1984, at the age of five. Lorenzo was predicted to die within a few years. His parents sought experimental treatment options, and the initial formulation of the oil was developed by retired British scientist Don Suddaby (formerly of Croda International). Suddaby and his colleague, Keith Coupland, received U.S. Patent No. 5,331,009 for the oil. The royalties received by Augusto were paid to The Myelin Project which he and Michaela founded to further research treatments for ALD and similar disorders. The Odones and their invention obtained widespread publicity in 1992 because of the film Lorenzo's Oil.
Research on the effectiveness of Lorenzo's Oil has seen mixed results, with possible benefit for asymptomatic ALD patients but of unpredictable or no benefit to those with symptoms, suggesting its possible role as a preventative measure in families identified as ALD dominant. Lorenzo Odone died on May 30, 2008, at the age of 30; he was bedridden with paralysis and died from aspiration pneumonia, likely caused by having inhaled food.
Treatment costs
Lorenzo's oil costs approximately $400 USD for a month's treatment.
Proposed mechanism of action
The mixture of fatty acids purportedly reduces the levels of very long chain fatty acids (VLCFAs), which are elevated in ALD. It does so by competitively inhibiting the enzyme that forms VLCFAs.
Effectiveness
Lorenzo's oil, in combination with a diet low in VLCFA, has been investigated for its possible effects on the progression of ALD. Clinical results have been mi |
https://en.wikipedia.org/wiki/I2P | The Invisible Internet Project (I2P) is an anonymous network layer (implemented as a mix network) that allows for censorship-resistant, peer-to-peer communication. Anonymous connections are achieved by encrypting the user's traffic (using end-to-end encryption) and sending it through a volunteer-run network of roughly 55,000 computers distributed around the world. Given the high number of possible paths the traffic can transit, it is unlikely that any third party can observe a full connection. The software that implements this layer is called an "I2P router", and a computer running I2P is called an "I2P node". I2P is free and open-source software, and is published under multiple licenses.
Technical design
I2P has been beta software since it started in 2003 as a fork of Freenet. The software's developers emphasize that bugs are likely to occur in the beta version and that peer review has been insufficient to date. However, they believe the code is now reasonably stable and well-developed, and more exposure can help the development of I2P.
The network is strictly message-based, like IP, but a library is available to allow reliable streaming communication on top of it (similar to Non-blocking IO-based TCP, although from version 0.6, a new Secure Semi-reliable UDP transport is used). All communication is end-to-end encrypted (in total, four layers of encryption are used when sending a message) through garlic routing, and even the end points ("destinations") are cryptographic identifiers (essentially a pair of public keys), so that neither senders nor recipients of messages need to reveal their IP address to the other side or to third-party observers.
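The layered ("garlic") idea — each relay can remove exactly one layer of encryption and learns only its neighbors — can be illustrated with a toy script. The XOR "cipher" and hash-derived keys below are purely illustrative stand-ins and have nothing to do with I2P's actual cryptography or message format:

```python
import hashlib

def layer_key(hop_name: str) -> bytes:
    # toy per-hop key derived from a name (illustration only)
    return hashlib.sha256(hop_name.encode()).digest()

def xor_bytes(data: bytes, key: bytes) -> bytes:
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def wrap(msg: bytes, hops):
    # the sender adds one layer per hop, innermost layer for the final hop
    for hop in reversed(hops):
        msg = xor_bytes(msg, layer_key(hop))
    return msg

def unwrap(msg: bytes, hops):
    # each hop, in path order, peels exactly one layer
    for hop in hops:
        msg = xor_bytes(msg, layer_key(hop))
    return msg

hops = ["hopA", "hopB", "hopC", "destination"]
ciphertext = wrap(b"hello", hops)
print(unwrap(ciphertext, hops))  # b'hello'
```

No single intermediate layer reveals the plaintext; only after every hop's layer is removed does the original message emerge.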
Although many developers had been a part of the Invisible IRC Project (IIP) and Freenet communities, significant differences exist between their designs and concepts. IIP was an anonymous centralized IRC server. Freenet is a censorship-resistant distributed data store. I2P is an anonymous peer-to-peer distributed communicatio |
https://en.wikipedia.org/wiki/Haier | Haier Group Corporation () is a Chinese multinational home appliances and consumer electronics company headquartered in Qingdao, Shandong. It designs, develops, manufactures and sells products including refrigerators, air conditioners, washing machines, dryers, microwave ovens, mobile phones, computers, and televisions. The home appliances business, namely Haier Smart Home, has seven global brands – Haier, Casarte, Leader, GE Appliances, Fisher & Paykel, Aqua and Candy.
According to data released by Euromonitor, Haier was the number one brand globally in major appliances for 10 consecutive years from 2009 to 2018. The Haier brand was also recognized by BrandZ in 2019 as the most valuable IoT ecosystem brand in the world with a brand value of $16.3 billion. In 2019, Haier Smart Home ranked 448 on Fortune's Global 500 list with a revenue of $27.7 billion.
Haier Group also has two listed subsidiaries with listings on three exchanges: Haier Smart Home (; ex-Qingdao Haier Co., Ltd.) and Haier Electronics Group Co., Ltd. (), plus a "D-share" listing of Haier Smart Home on the China Europe International Exchange in Frankfurt.
History
The origins of Haier date back long before the actual founding of the company. In the 1920s, a refrigerator factory was built in Qingdao to supply the Chinese market. After the 1949 establishment of the People's Republic of China, the factory was then taken over and turned into a state-owned enterprise.
By the 1980s, the factory had a debt of over CN¥1.4 million and suffered from dilapidated infrastructure, poor management, and a lack of quality controls, resulting from the planned economic system and relevant policies. Production had slowed, rarely surpassing 80 refrigerators a month, and the factory was close to bankruptcy. The Qingdao government hired a young assistant city-manager, Zhang Ruimin, who was responsible for a number of city-owned appliance companies. Zhang was appointed the managing director of the factory in 1984.
Founding
Haier had been founded |
https://en.wikipedia.org/wiki/Ken%20Olsen | Kenneth Harry "Ken" Olsen (February 20, 1926 – February 6, 2011) was an American engineer who co-founded Digital Equipment Corporation (DEC) in 1957 with colleague Harlan Anderson and his brother Stan Olsen.
Background
Kenneth Harry Olsen was born in Bridgeport, Connecticut and grew up in the neighboring town of Stratford, Connecticut. His father's parents came from Norway and his mother's parents from Sweden. Olsen began his career working summers in a machine shop. Fixing radios in his basement gave him the reputation of a neighborhood inventor.
After serving in the United States Navy between 1944 and 1946, Olsen attended the Massachusetts Institute of Technology, where he earned both a BS (1950) and an MS (1952) degree in electrical engineering.
Career
Pre-DEC
During his studies at MIT, the Office of Naval Research of the United States Department of the Navy recruited Olsen to help build a computerized flight simulator. Also while at MIT he directed the building of the first transistorized research computer. Olsen was an engineer who had been working at MIT Lincoln Laboratory on the TX-2 project.
Olsen's most important connection to Project Whirlwind was his work on the Memory Test Computer (MTC), described as "a special purpose computer built to test core memory for the Whirlwind." Unlike the 18-bit TX-0, which was "designed to be a predecessor for a larger 36 bit machine, the TX-2," Whirlwind and the MTC used 16 bits.
Digital Equipment Corporation
In 1957, Olsen and an MIT colleague, Harlan Anderson, decided to start their own firm. They approached American Research and Development Corporation, an early venture capital firm, which had been founded by Georges Doriot, and founded Digital Equipment Corporation (DEC) after receiving $70,000 for a 70% share. In the 1960s, Olsen received patents for a saturable switch, a diode transformer gate circuit, an improved version of magnetic-core memory, and the line printer buffer. (Note that MIT professor Jay W. Fo |
https://en.wikipedia.org/wiki/Leslie%20Orgel | Leslie Eleazer Orgel FRS (12 January 1927 – 27 October 2007) was a British chemist. He is known for his theories on the origin of life.
Biography
Leslie Orgel was born in London, England. He received his Bachelor of Arts degree in chemistry with first-class honours from the University of Oxford in 1948. In 1951 he was elected a Fellow of Magdalen College, Oxford and in 1953 was awarded his PhD in chemistry.
Orgel started his career as a theoretical inorganic chemist and continued his studies in this field at Oxford, the California Institute of Technology, and the University of Chicago.
Together with Sydney Brenner, Jack Dunitz, Dorothy Hodgkin, and Beryl M. Oughton he was one of the first people in April 1953 to see the model of the structure of DNA, constructed by Francis Crick and James Watson; at the time he and the other scientists were working at Oxford University's Chemistry Department. According to the late Dr. Beryl Oughton, later Rimmer, they all travelled together in two cars once Dorothy Hodgkin announced to them that they were off to Cambridge to see the model of the structure of DNA. All were impressed by the new DNA model, especially Brenner, who subsequently worked with Crick; Orgel himself also worked with Crick at the Salk Institute for Biological Studies.
In 1955 he joined the chemistry department at Cambridge University. There he did work in transition metal chemistry and ligand field theory, published several peer-reviewed journal articles, and wrote a textbook entitled Transition Metal Chemistry: Ligand Field Theory (1960). He developed the Orgel diagram showing the energies of electronic terms in transition metal complexes.
Orgel formulated his protein-translation error-catastrophe theory of aging in 1963 (prior to Manfred Eigen's use of the term for mutational error catastrophe); the theory has since been experimentally challenged.
In 1964, Orgel was appointed senior fellow and research professor at the Salk Institute for Biological |
https://en.wikipedia.org/wiki/Microbiologist | A microbiologist (from Greek ) is a scientist who studies microscopic life forms and processes. This includes study of the growth, interactions and characteristics of microscopic organisms such as bacteria, algae, fungi, and some types of parasites and their vectors. Most microbiologists work in offices and/or research facilities, both in private biotechnology companies and in academia. Most microbiologists specialize in a given topic within microbiology such as bacteriology, parasitology, virology, or immunology.
Duties
Microbiologists generally work in some way to increase scientific knowledge or to utilise that knowledge in a way that improves outcomes in medicine or some industry. For many microbiologists, this work includes planning and conducting experimental research projects in some kind of laboratory setting. Others may have a more administrative role, supervising scientists and evaluating their results. Microbiologists working in the medical field, such as clinical microbiologists, may see patients or patient samples and do various tests to detect disease-causing organisms.
For microbiologists working in academia, duties include performing research in an academic laboratory, writing grant proposals to fund research, as well as some amount of teaching and designing courses. Microbiologists in industry roles may have similar duties, except that research is performed in industrial labs in order to develop or improve commercial products and processes. Industry jobs may also include some degree of sales and marketing work, as well as regulatory compliance duties. Microbiologists working in government may have a variety of duties, including laboratory research, writing and advising, developing and reviewing regulatory processes, and overseeing grants offered to outside institutions. Some microbiologists work in the field of patent law, either with national patent offices or private law practices. Their duties include research and navigation of intellectual proper |
https://en.wikipedia.org/wiki/Quillen%E2%80%93Suslin%20theorem | The Quillen–Suslin theorem, also known as Serre's problem or Serre's conjecture, is a theorem in commutative algebra concerning the relationship between free modules and projective modules over polynomial rings. In the geometric setting it is a statement about the triviality of vector bundles on affine space.
The theorem states that every finitely generated projective module over a polynomial ring is free.
History
Background
Geometrically, finitely generated projective modules over the ring k[x_1, …, x_n] correspond to vector bundles over affine space A^n, where free modules correspond to trivial vector bundles. This correspondence (from modules to (algebraic) vector bundles) is given by the 'globalisation' or 'twiddlification' functor (see Hartshorne, II.5, p. 110). Affine space is topologically contractible, so it admits no non-trivial topological vector bundles. A simple argument using the exponential exact sequence and the d-bar Poincaré lemma shows that it also admits no non-trivial holomorphic vector bundles.
Jean-Pierre Serre, in his 1955 paper Faisceaux algébriques cohérents, remarked that the corresponding question was not known for algebraic vector bundles: "It is not known whether there exist projective A-modules of finite type which are not free." Here A is a polynomial ring over a field k, that is, A = k[x_1, …, x_n].
To Serre's dismay, this problem quickly became known as Serre's conjecture. (Serre wrote, "I objected as often as I could [to the name].") The statement does not immediately follow from the proofs given in the topological or holomorphic case. These cases only guarantee that there is a continuous or holomorphic trivialization, not an algebraic trivialization.
Serre made some progress towards a solution in 1957 when he proved that every finitely generated projective module over a polynomial ring over a field was stably free, meaning that after forming its direct sum with a finitely generated free module, it became free. The problem remained open until |
https://en.wikipedia.org/wiki/Capability-based%20security | Capability-based security is a concept in the design of secure computing systems, one of the existing security models. A capability (known in some systems as a key) is a communicable, unforgeable token of authority. It refers to a value that references an object along with an associated set of access rights. A user program on a capability-based operating system must use a capability to access an object. Capability-based security refers to the principle of designing user programs such that they directly share capabilities with each other according to the principle of least privilege, and to the operating system infrastructure necessary to make such transactions efficient and secure. Capability-based security is to be contrasted with an approach that uses traditional UNIX permissions and Access Control Lists.
Although most operating systems implement a facility which resembles capabilities, they typically do not provide enough support to allow for the exchange of capabilities among possibly mutually untrusting entities to be the primary means of granting and distributing access rights throughout the system. A capability-based system, in contrast, is designed with that goal in mind.
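One way to make such tokens unforgeable in user space is to protect them with a secret-keyed MAC, as in this sketch. This is purely illustrative: real capability operating systems more commonly use kernel-protected handles or hardware tagging, and every name below is invented:

```python
import os, hmac, hashlib

SECRET = os.urandom(32)   # key held by the trusted authority; user code cannot forge tokens

def mint(obj_id: str, rights: frozenset) -> tuple:
    """Issue a capability: (object reference, rights, MAC binding the two together)."""
    msg = (obj_id + "|" + ",".join(sorted(rights))).encode()
    return (obj_id, rights, hmac.new(SECRET, msg, hashlib.sha256).digest())

def invoke(cap: tuple, op: str) -> bool:
    """Authorize an operation using only the presented capability (no ambient authority)."""
    obj_id, rights, tag = cap
    msg = (obj_id + "|" + ",".join(sorted(rights))).encode()
    genuine = hmac.compare_digest(tag, hmac.new(SECRET, msg, hashlib.sha256).digest())
    return genuine and op in rights

cap = mint("file:/tmp/log", frozenset({"read"}))
print(invoke(cap, "read"))    # True
print(invoke(cap, "write"))   # False — the right is not in the capability
```

Note that tampering with the rights set invalidates the MAC, so a program cannot escalate its own capability.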
Introduction
Capabilities achieve their objective of improving system security by being used in place of forgeable references. A forgeable reference (for example, a path name) identifies an object, but does not specify which access rights are appropriate for that object and the user program which holds that reference. Consequently, any attempt to access the referenced object must be validated by the operating system, based on the ambient authority of the requesting program, typically via the use of an access-control list (ACL). Instead, in a system with capabilities, the mere fact that a user program possesses that capability entitles it to use the referenced object in accordance with the rights that are specified by that capability. In theory, a system with capabilities removes the need |
https://en.wikipedia.org/wiki/Reduced%20ring | In ring theory, a branch of mathematics, a ring is called a reduced ring if it has no non-zero nilpotent elements. Equivalently, a ring is reduced if it has no non-zero elements with square zero, that is, x2 = 0 implies x = 0. A commutative algebra over a commutative ring is called a reduced algebra if its underlying ring is reduced.
The nilpotent elements of a commutative ring R form an ideal of R, called the nilradical of R; therefore a commutative ring is reduced if and only if its nilradical is zero. Moreover, a commutative ring is reduced if and only if the only element contained in all prime ideals is zero.
A quotient ring R/I is reduced if and only if I is a radical ideal.
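For example, Z/nZ = Z/(n) is reduced precisely when (n) is a radical ideal, i.e. when n is squarefree; a brute-force check (a small illustrative script):

```python
def is_reduced_mod_n(n):
    """Z/nZ is reduced iff no nonzero x satisfies x**2 == 0 (mod n)."""
    return all(x * x % n != 0 for x in range(1, n))

print(is_reduced_mod_n(10))  # True:  10 = 2 * 5 is squarefree
print(is_reduced_mod_n(12))  # False: 6 * 6 = 36 ≡ 0 (mod 12)
```

Here it suffices to test squares, since by the definition above a ring is reduced as soon as it has no nonzero element of square zero.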
Let N be the nilradical of a commutative ring R. There is a natural functor R ↦ R/N from the category of commutative rings to the category of reduced rings, and it is left adjoint to the inclusion functor of the category of reduced rings into that of commutative rings. The adjunction bijection is induced by the universal property of quotient rings.
Let D be the set of all zero-divisors in a reduced ring R. Then D is the union of all minimal prime ideals.
Over a Noetherian ring R, we say a finitely generated module M has locally constant rank if its rank function p ↦ dim M ⊗ κ(p) is a locally constant (or equivalently continuous) function on Spec R. Then R is reduced if and only if every finitely generated module of locally constant rank is projective.
Examples and non-examples
Subrings, products, and localizations of reduced rings are again reduced rings.
The ring of integers Z is a reduced ring. Every field and every polynomial ring over a field (in arbitrarily many variables) is a reduced ring.
More generally, every integral domain is a reduced ring since a nilpotent element is a fortiori a zero-divisor. On the other hand, not every reduced ring is an integral domain. For example, the ring Z[x, y]/(xy) contains x + (xy) and y + (xy) as zero-divisors, but no non-zero nilpotent elements. As another example, the ring Z × Z contains (1, 0) and (0, 1) as zero-divisors, but contains no non-zero |
https://en.wikipedia.org/wiki/Duality%20%28projective%20geometry%29 | In geometry, a striking feature of projective planes is the symmetry of the roles played by points and lines in the definitions and theorems, and (plane) duality is the formalization of this concept. There are two approaches to the subject of duality, one through language () and the other a more functional approach through special mappings. These are completely equivalent and either treatment has as its starting point the axiomatic version of the geometries under consideration. In the functional approach there is a map between related geometries that is called a duality. Such a map can be constructed in many ways. The concept of plane duality readily extends to space duality and beyond that to duality in any finite-dimensional projective geometry.
Principle of duality
A projective plane may be defined axiomatically as an incidence structure, in terms of a set of points, a set of lines, and an incidence relation that determines which points lie on which lines. These sets can be used to define a plane dual structure.
Interchange the role of "points" and "lines" in C = (P, L, I)
to obtain the dual structure
C* = (L, P, I*),
where I* is the converse relation of I. C* is also a projective plane, called the dual plane of C.
If C and C* are isomorphic, then C is called self-dual. The projective planes PG(2, K) for any field (or, more generally, for every division ring (skewfield) K isomorphic to its dual) are self-dual. In particular, Desarguesian planes of finite order are always self-dual. However, there are non-Desarguesian planes which are not self-dual, such as the Hall planes, and some that are, such as the Hughes planes.
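The dual-plane construction can be verified computationally on the smallest projective plane, the Fano plane. This illustrative script checks that the dual structure again satisfies the two basic incidence axioms (two points lie on a unique line; two lines meet in a unique point):

```python
from itertools import combinations

# Fano plane: 7 points (0..6) and 7 lines, each line given as a set of points
points = range(7)
lines = [{0,1,2}, {0,3,4}, {0,5,6}, {1,3,5}, {1,4,6}, {2,3,6}, {2,4,5}]

def is_projective_plane(pts, lns):
    pts = list(pts)
    # every two points lie on exactly one common line
    on_line = all(sum(p in L and q in L for L in lns) == 1
                  for p, q in combinations(pts, 2))
    # every two lines meet in exactly one point
    meet = all(len(L & M) == 1 for L, M in combinations(lns, 2))
    return on_line and meet

# dual structure: each old point p becomes the "line" of all old lines through p
dual_lines = [set(i for i, L in enumerate(lines) if p in L) for p in points]

print(is_projective_plane(points, lines))         # True
print(is_projective_plane(range(7), dual_lines))  # True: the dual is again a projective plane
```

The Fano plane is in fact self-dual, consistent with the statement above about planes over fields.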
In a projective plane a statement involving points, lines and incidence between them that is obtained from another such statement by interchanging the words "point" and "line" and making whatever grammatical adjustments that are necessary, is called the plane dual statement of the first. The plane dual statement of "Two points are on a unique line" is "Two lines meet at a unique point" |
https://en.wikipedia.org/wiki/Totenkopf | Totenkopf (, i.e. skull, literally "dead person's head") is the German word for skull. The word is often used to denote a figurative, graphic or sculptural symbol, common in Western culture, consisting of the representation of a human skull, usually frontal, more rarely in profile, with or without the mandible. In some cases, other human skeletal parts may be added, often including two crossed long bones (femurs) depicted below or behind the skull. The human skull is an internationally used symbol for death, the defiance of death, danger, or the dead, as well as piracy or toxicity.
In English, the term Totenkopf is commonly associated with 19th- and 20th-century German military use, particularly in Nazi Germany.
Naval use
In early modern sea warfare, buccaneers used the Totenkopf as a pirate flag: a skull or other skeletal parts as a death threat and as a demand to hand over a ship. The symbol continues to be used by modern navies.
German military
Prussia
Use of the Totenkopf as a military emblem began under Frederick the Great, who formed a regiment of Hussar cavalry in the Prussian army commanded by Colonel von Ruesch, the Husaren-Regiment Nr. 5 (von Ruesch). It adopted a black uniform with a Totenkopf emblazoned on the front of its mirlitons and wore it on the field in the War of the Austrian Succession and in the Seven Years' War. The Totenkopf remained a part of the uniform when the regiment was reformed into Leib-Husaren Regiments Nr. 1 and Nr. 2 in 1808.
Brunswick
In 1809, during the War of the Fifth Coalition, Frederick William, Duke of Brunswick-Wolfenbüttel raised a force of volunteers to fight Napoleon Bonaparte, who had conquered the Duke's lands. The Brunswick corps was provided with black uniforms, giving rise to their nickname, the Black Brunswickers. Both hussar cavalry and infantry in the force wore a Totenkopf badge, either in mourning for the duke's father, Charles William Ferdinand, Duke of Brunswick-Wolfenbüttel, who had been killed at the Bat |
https://en.wikipedia.org/wiki/Dextrin | Dextrins are a group of low-molecular-weight carbohydrates produced by the hydrolysis of starch and glycogen. Dextrins are mixtures of polymers of D-glucose units linked by α-(1→4) or α-(1→6) glycosidic bonds.
Dextrins can be produced from starch using enzymes like amylases, as during digestion in the human body and during malting and mashing in beer brewing or by applying dry heat under acidic conditions (pyrolysis or roasting). This procedure was first discovered in 1811 by Edme-Jean Baptiste Bouillon-Lagrange. The latter process is used industrially, and also occurs on the surface of bread during the baking process, contributing to flavor, color and crispness. Dextrins produced by heat are also known as pyrodextrins. Starch hydrolyses during roasting under acidic conditions, and short-chained starch parts partially rebranch with α-(1,6) bonds to the degraded starch molecule. See also Maillard reaction.
Dextrins are white, yellow, or brown powders that are partially or fully water-soluble, yielding optically active solutions of low viscosity. Most of them can be detected with iodine solution, giving a red coloration; one distinguishes erythrodextrin (dextrin that colours red) and achrodextrin (giving no colour).
White and yellow dextrins from starch roasted with little or no acid are called British gum.
Uses
Yellow dextrins are used as water-soluble glues in remoistenable envelope adhesives and paper tubes, in the mining industry as additives in froth flotation, in the foundry industry as green strength additives in sand casting, as printing thickener for batik resist dyeing, and as binders in gouache paint and also in the leather industry.
White dextrins are used as:
a crispness enhancer for food processing, in food batters, coatings, and glazes, (INS number 1400)
a textile finishing and coating agent to increase weight and stiffness of textile fabrics
a thickening and binding agent in pharmaceuticals and paper coatings
a pyrotechnic binder and fuel; t |
https://en.wikipedia.org/wiki/Inoculation%20loop | An inoculation loop (also called a smear loop, inoculation wand or microstreaker) is a simple tool used mainly by microbiologists to pick up and transfer a small sample of microorganisms called inoculum from a microbial culture, e.g. for streaking on a culture plate. This process is called inoculation.
The tool consists of a thin handle with a loop about 5 mm wide or smaller at the end. It was originally made of twisted metal wire (such as platinum, tungsten or nichrome), but disposable molded plastic versions are now common.
See also
Cell spreader
Micropipette
References
Laboratory equipment
Microbiology equipment |
https://en.wikipedia.org/wiki/Don%20Coppersmith | Don Coppersmith (born 1950) is a cryptographer and mathematician. He was involved in the design of the Data Encryption Standard block cipher at IBM, particularly the design of the S-boxes, strengthening them against differential cryptanalysis.
He also improved the quantum Fourier transform discovered by Peter Shor (1994). He has also worked on algorithms for computing discrete logarithms, the cryptanalysis of RSA, methods for rapid matrix multiplication (see Coppersmith–Winograd algorithm) and IBM's MARS cipher. He is also a co-designer of the SEAL and Scream ciphers.
In 1972, Coppersmith obtained a bachelor's degree in mathematics at the Massachusetts Institute of Technology, and a Masters and Ph.D. in mathematics from Harvard University in 1975 and 1977 respectively. He was a Putnam Fellow each year from 1968–1971, becoming the first four-time Putnam Fellow in history. In 1998, he started Ponder This, an online monthly column on mathematical puzzles and problems. In October 2005, the column was taken over by James Shearer. Around that same time, he left IBM and began working at the IDA Center for Communications Research, Princeton.
In 2002, Coppersmith won the RSA Award for Excellence in Mathematics.
See also
Coppersmith's attack
Coppersmith method
References
External links
20th-century American mathematicians
21st-century American mathematicians
IBM employees
IBM Research computer scientists
Harvard Graduate School of Arts and Sciences alumni
Modern cryptographers
Putnam Fellows
1950s births
Living people
Massachusetts Institute of Technology School of Science alumni
International Association for Cryptologic Research fellows |
https://en.wikipedia.org/wiki/Compatibility%20layer | In software engineering, a compatibility layer is an interface that allows binaries for a legacy or foreign system to run on a host system. This translates system calls for the foreign system into native system calls for the host system. With some libraries for the foreign system, this will often be sufficient to run foreign binaries on the host system. A hardware compatibility layer consists of tools that allow hardware emulation.
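In miniature, such a layer is a dispatch table mapping foreign system-call numbers to native implementations. The sketch below is a toy: syscall number 64 and all names are invented for illustration and do not correspond to any real ABI:

```python
import sys

def native_write(fd, data):
    """Native implementation backing the foreign 'write' call."""
    stream = sys.stdout if fd == 1 else sys.stderr
    stream.write(data)
    return len(data)          # mimic write(2): report bytes written

# translation table: foreign syscall number -> (name, native handler)
FOREIGN_SYSCALLS = {
    64: ("write", native_write),
}

def emulate(number, *args):
    """Translate one foreign system call into a native one."""
    _name, handler = FOREIGN_SYSCALLS[number]
    return handler(*args)

print(emulate(64, 1, "hello from the guest\n"))  # writes the string, then prints 21
```

A real compatibility layer adds argument marshalling, errno translation, and path/namespace mapping on top of this basic dispatch.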
Software
Examples include:
Wine, which runs some Microsoft Windows binaries on Unix-like systems using a program loader and the Windows API implemented in DLLs
Windows's application compatibility layers to attempt to run poorly written applications or those written for earlier versions of the platform.
Lina, which runs some Linux binaries on Windows, Mac OS X and Unix-like systems with native look and feel.
KernelEX, which runs some Windows 2000/XP programs on Windows 98/Me.
Executor, which runs 68k-based "classic" Mac OS programs in Windows, Mac OS X and Linux.
Anbox, an Android compatibility layer for Linux.
Hybris, library that translates Bionic into glibc calls.
Darling, a translation layer that attempts to run Mac OS X and Darwin binaries on Linux.
Windows Subsystem for Linux v1, which runs Linux binaries on Windows via a compatibility layer that translates Linux system calls into native Windows system calls.
Cygwin, a POSIX-compatible environment that runs natively on Windows.
2ine, a project to run OS/2 applications on Linux
Rosetta 2, Apple's translation layer bundled with macOS Big Sur to allow x86-64 exclusive applications to run on ARM hardware.
ACL allows Android apps to natively execute on Tizen, webOS, or MeeGo phones.
Alien Dalvik allows Android apps to run on MeeGo and Maemo. Alien Dalvik 2.0 was also revealed for iOS on an iPad; however, unlike the MeeGo and Maemo versions, this version ran from the cloud.
touchHLE is a compatibility layer (referred to as a “high-level emulator”) for Windows and macOS made by Andrea |
https://en.wikipedia.org/wiki/Frattini%20subgroup | In mathematics, particularly in group theory, the Frattini subgroup Φ(G) of a group G is the intersection of all maximal subgroups of G. For the case that G has no maximal subgroups, for example the trivial group {e} or a Prüfer group, it is defined by Φ(G) = G. It is analogous to the Jacobson radical in the theory of rings, and intuitively can be thought of as the subgroup of "small elements" (see the "non-generator" characterization below). It is named after Giovanni Frattini, who defined the concept in a paper published in 1885.
Some facts
Φ(G) is equal to the set of all non-generators or non-generating elements of G. A non-generating element of G is an element that can always be removed from a generating set; that is, an element a of G such that whenever X is a generating set of G containing a, X \ {a} is also a generating set of G.
Φ(G) is always a characteristic subgroup of G; in particular, it is always a normal subgroup of G.
If G is finite, then Φ(G) is nilpotent.
If G is a finite p-group, then Φ(G) = G^p [G, G]. Thus the Frattini subgroup is the smallest (with respect to inclusion) normal subgroup N such that the quotient group G/N is an elementary abelian group, i.e., isomorphic to a direct sum of cyclic groups of order p. Moreover, if the quotient group G/Φ(G) (also called the Frattini quotient of G) has order p^k, then k is the smallest number of generators for G (that is, the smallest cardinality of a generating set for G). In particular a finite p-group is cyclic if and only if its Frattini quotient is cyclic (of order p). A finite p-group is elementary abelian if and only if its Frattini subgroup is the trivial group, {e}.
If G and H are finite, then Φ(G × H) = Φ(G) × Φ(H).
An example of a group with nontrivial Frattini subgroup is the cyclic group G of order p^2, where p is prime, generated by a, say; here, Φ(G) = ⟨a^p⟩.
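For cyclic groups, the definition can be checked by brute force, using the fact that the subgroups of Z/nZ are exactly the sets of multiples of each divisor of n (an illustrative sketch, not an efficient algorithm):

```python
def subgroups_of_cyclic(n):
    """Subgroups of Z/nZ: one per divisor d of n, namely {0, d, 2d, ...}."""
    return [frozenset(range(0, n, d)) for d in range(1, n + 1) if n % d == 0]

def frattini_cyclic(n):
    """Frattini subgroup of Z/nZ as the intersection of all maximal subgroups."""
    whole = frozenset(range(n))
    proper = [s for s in subgroups_of_cyclic(n) if s != whole]
    maximal = [s for s in proper if not any(s < t for t in proper)]
    phi = whole
    for m in maximal:
        phi &= m
    return sorted(phi)

print(frattini_cyclic(9))   # [0, 3, 6] — for p**2 = 9, the Frattini subgroup has order p
print(frattini_cyclic(8))   # [0, 2, 4, 6]
print(frattini_cyclic(6))   # [0] — trivial, since Z/6 ≅ Z/2 × Z/3
```

The n = 9 case is exactly the order-p² example above; n = 6 illustrates the product formula for finite groups.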
See also
Fitting subgroup
Frattini's argument
Socle
References
(See Chapter 10, especially Section 10.4.)
Group theory
Functional subgroups |
https://en.wikipedia.org/wiki/Poincar%C3%A9%20duality | In mathematics, the Poincaré duality theorem, named after Henri Poincaré, is a basic result on the structure of the homology and cohomology groups of manifolds. It states that if M is an n-dimensional oriented closed manifold (compact and without boundary), then the kth cohomology group of M is isomorphic to the (n − k)th homology group of M, for all integers k.
Poincaré duality holds for any coefficient ring, so long as one has taken an orientation with respect to that coefficient ring; in particular, since every manifold has a unique orientation mod 2, Poincaré duality holds mod 2 without any assumption of orientation.
History
A form of Poincaré duality was first stated, without proof, by Henri Poincaré in 1893. It was stated in terms of Betti numbers: the kth and (n − k)th Betti numbers of a closed (i.e., compact and without boundary) orientable n-manifold are equal. The cohomology concept was at that time about 40 years from being clarified. In his 1895 paper Analysis Situs, Poincaré tried to prove the theorem using topological intersection theory, which he had invented. Criticism of his work by Poul Heegaard led him to realize that his proof was seriously flawed. In the first two complements to Analysis Situs, Poincaré gave a new proof in terms of dual triangulations.
Poincaré duality did not take on its modern form until the advent of cohomology in the 1930s, when Eduard Čech and Hassler Whitney invented the cup and cap products and formulated Poincaré duality in these new terms.
Modern formulation
The modern statement of the Poincaré duality theorem is in terms of homology and cohomology: if M is a closed oriented n-manifold, then there is a canonically defined isomorphism H^k(M) → H_{n−k}(M) for any integer k. To define such an isomorphism, one chooses a fixed fundamental class [M] of M, which exists if M is oriented. Then the isomorphism is defined by mapping an element α to the cap product [M] ⌢ α.
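Written out (a standard formulation consistent with the description above, with the coefficient ring suppressed):

```latex
D_M \colon H^k(M) \xrightarrow{\;\cong\;} H_{n-k}(M), \qquad \alpha \mapsto [M] \frown \alpha .
```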
Homology and cohomology groups are defined to be zero for negative degrees, so Poinc |
https://en.wikipedia.org/wiki/RC%20time%20constant | The RC time constant, also called tau, the time constant (in seconds) of an RC circuit, is equal to the product of the circuit resistance (in ohms) and the circuit capacitance (in farads), i.e.
[seconds]
It is the time required to charge the capacitor, through the resistor, from an initial charge voltage of zero to approximately 63.2% of the value of an applied DC voltage, or to discharge the capacitor through the same resistor to approximately 36.8% of its initial charge voltage. These values are derived from the mathematical constant e, where and . The following formulae use it, assuming a constant voltage applied across the capacitor and resistor in series, to determine the voltage across the capacitor against time:
Charging toward the applied voltage V0 (initially zero voltage across the capacitor, constant V0 across resistor and capacitor together): V(t) = V0 (1 − e^(−t/RC))
Discharging toward zero from the initial voltage V0 (initially V0 across the capacitor, constant zero voltage across resistor and capacitor together): V(t) = V0 e^(−t/RC)
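A small numerical check of the 63.2% and 36.8% figures (the component values below are arbitrary examples):

```python
import math

def charge(v_s, t, tau):
    """Capacitor voltage while charging toward v_s: v_s * (1 - e^(-t/tau))."""
    return v_s * (1 - math.exp(-t / tau))

def discharge(v_0, t, tau):
    """Capacitor voltage while discharging from v_0: v_0 * e^(-t/tau)."""
    return v_0 * math.exp(-t / tau)

R, C = 10e3, 100e-9       # 10 kΩ and 100 nF -> tau = 1 ms
tau = R * C
print(charge(5.0, tau, tau) / 5.0)     # ≈ 0.632 after one time constant
print(discharge(5.0, tau, tau) / 5.0)  # ≈ 0.368 after one time constant
```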
Cutoff frequency
The time constant τ is related to the cutoff frequency fc, an alternative parameter of the RC circuit, by
τ = RC = 1 / (2π fc)
or, equivalently,
fc = 1 / (2π τ) = 1 / (2π RC)
where resistance in ohms and capacitance in farads yields the time constant in seconds or the cutoff frequency in Hz.
Short conditional equations using the value 10⁶/(2π) ≈ 159155:
fc in Hz = 159155 / τ in µs
τ in µs = 159155 / fc in Hz
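The relation between τ and fc, including the 159155 shortcut for microseconds and hertz, can be checked numerically (component values again illustrative):

```python
import math

def cutoff_hz(r_ohms, c_farads):
    """Cutoff frequency of an RC circuit: fc = 1 / (2*pi*R*C)."""
    return 1.0 / (2 * math.pi * r_ohms * c_farads)

def tau_us_from_fc(fc_hz):
    """Time constant in microseconds, via the 159155 shortcut."""
    return 159155.0 / fc_hz

fc = cutoff_hz(10e3, 100e-9)      # R = 10 kOhm, C = 100 nF (illustrative)
print(round(fc, 1))               # ~159.2 Hz
print(round(tau_us_from_fc(fc)))  # ~1000 us, i.e. tau = RC = 1 ms
```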
Other useful equations are:
rise time (20% to 80%): t_r ≈ 1.4τ (= τ ln 4)
rise time (10% to 90%): t_r ≈ 2.2τ (= τ ln 9)
In more complicated circuits consisting of more than one resistor and/or capacitor, the open-circuit time constant method provides a way of approximating the cutoff frequency by computing a sum of several RC time constants.
Delay
The signal delay of a wire or other circuit, measured as group delay or phase delay or the effective propagation delay of a digital transition, may be dominated by resistive-capacitive effects, depending on the distance and other parameters, or may alternatively be dominated by inductive, wave, and speed of light effects i |
https://en.wikipedia.org/wiki/Magnetic%20moment | In electromagnetism, the magnetic moment is the magnetic strength and orientation of a magnet or other object that produces a magnetic field, expressed as a vector. Examples of objects that have magnetic moments include loops of electric current (such as electromagnets), permanent magnets, elementary particles (such as electrons), composite particles (such as protons and neutrons), various molecules, and many astronomical objects (such as many planets, some moons, stars, etc).
More precisely, the term magnetic moment normally refers to a system's magnetic dipole moment, the component of the magnetic moment that can be represented by an equivalent magnetic dipole: a magnetic north and south pole separated by a very small distance. The magnetic dipole component is sufficient for small enough magnets or for large enough distances. Higher-order terms (such as the magnetic quadrupole moment) may be needed in addition to the dipole moment for extended objects.
The magnetic dipole moment of an object determines the magnitude of torque that the object experiences in a given magnetic field. Objects with larger magnetic moments experience larger torques when the same magnetic field is applied. The strength (and direction) of this torque depends not only on the magnitude of the magnetic moment but also on its orientation relative to the direction of the magnetic field. The magnetic moment may therefore be considered to be a vector. The direction of the magnetic moment points from the south to north pole of the magnet (inside the magnet).
The magnetic field of a magnetic dipole is proportional to its magnetic dipole moment. The dipole component of an object's magnetic field is symmetric about the direction of its magnetic dipole moment, and decreases as the inverse cube of the distance from the object.
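The torque relation described above, τ = m × B, reduces to a vector cross product; a minimal sketch with illustrative values (a 2 A·m² moment in a 0.5 T field):

```python
def cross(a, b):
    """Cross product of two 3-vectors given as tuples."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

# Torque on a dipole: tau = m x B (moment m in A·m^2, field B in tesla).
m = (2.0, 0.0, 0.0)  # moment along +x (illustrative)
B = (0.0, 0.5, 0.0)  # field along +y (illustrative)
print(cross(m, B))   # (0.0, 0.0, 1.0): 1 N·m about +z
```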
Definition, units, and measurement
Definition
The magnetic moment can be defined as a vector relating the aligning torque on the object from an externally applied magnetic |
https://en.wikipedia.org/wiki/Balloon%20Fight | Balloon Fight is an action video game developed by Nintendo and HAL Laboratory and published by Nintendo. The original arcade version was released for the Nintendo VS. System internationally as Vs. Balloon Fight, while its Nintendo Entertainment System counterpart was released in Japan in 1985 and internationally in 1986.
The gameplay is similar to the 1982 game Joust from Williams Electronics. The home Nintendo Entertainment System version was ported to the NEC PC-8801 in October 1985, the Sharp X1 in November 1985, the Game Boy Advance as Balloon Fight-e for the e-Reader in the United States on September 16, 2002, and as part of the Famicom Mini Series in Japan on May 21, 2004. It was later rereleased through Nintendo's Virtual Console and NES Classic Edition. It was released on Nintendo Switch Online in 2018.
Gameplay
The player controls an unnamed Balloon Fighter with two balloons attached to his helmet. Repeatedly pressing the A button or holding down the B button causes the Balloon Fighter to flap his arms and rise into the air. If a balloon is popped, the player's flotation is decreased, making it harder to rise. A life is lost if both balloons are popped by enemy Balloon Fighters, if the player falls in the water, gets eaten by the large piranha near the surface of the water, or is hit by lightning.
There are two modes of play: the 1-player/2-player game where the goal is to clear the screen of enemies, and Balloon Trip where the goal is to avoid obstacles in a side-scrolling stage. The original arcade game does not include Balloon Trip, but all the level layouts are completely different so as to take advantage of vertical scrolling in addition to some minor gameplay differences.
1-player/2-player game
Defeat all of the enemies on screen to clear the stage. This mode can be played alone or co-operatively with a second player. Each player starts with three extra lives. The 3DS Balloon Fight port comes with a Download Play option that allows you to play along with a |
https://en.wikipedia.org/wiki/Champernowne%20constant | In mathematics, the Champernowne constant is a transcendental real constant whose decimal expansion has important properties. It is named after economist and mathematician D. G. Champernowne, who published it as an undergraduate in 1933.
For base 10, the number is defined by concatenating representations of successive integers:
C10 = 0.12345678910111213141516… .
Champernowne constants can also be constructed in other bases, similarly, for example:
C2 = 0.11011100101110111… (base 2)
C3 = 0.12101112202122… (base 3).
The Champernowne word or Barbier word is the sequence of digits of C10 obtained by writing it in base 10 and juxtaposing the digits:
1 2 3 4 5 6 7 8 9 1 0 1 1 1 2 1 3 1 4 1 5 …
More generally, a Champernowne sequence (sometimes also called a Champernowne word) is any sequence of digits obtained by concatenating all finite digit-strings (in any given base) in some recursive order.
For instance, the binary Champernowne sequence in shortlex order is
0 1 00 01 10 11 000 001 010 011 100 101 110 111 0000 …
where spaces (otherwise to be ignored) have been inserted just to show the strings being concatenated.
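The construction by concatenation is straightforward to program; the helper below (names are ours) produces the Champernowne word in any base up to 10:

```python
def to_base(n, b):
    """Positive integer n written in base b (digits 0-9, so b <= 10)."""
    s = ""
    while n:
        s = str(n % b) + s
        n //= b
    return s

def champernowne_word(n_digits, base=10):
    """First n_digits of the Champernowne word: 1, 2, 3, ... concatenated."""
    out, k = "", 0
    while len(out) < n_digits:
        k += 1
        out += to_base(k, base)
    return out[:n_digits]

print(champernowne_word(20))     # '12345678910111213141'
print(champernowne_word(12, 2))  # '110111001011'
```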
Properties
A real number x is said to be normal if its digits in every base follow a uniform distribution: all digits being equally likely, all pairs of digits equally likely, all triplets of digits equally likely, etc. x is said to be normal in base b if its digits in base b follow a uniform distribution.
If we denote a digit string as [a0, a1, …], then, in base 10, we would expect strings [0], [1], [2], …, [9] to occur 1/10 of the time, strings [0,0], [0,1], …, [9,8], [9,9] to occur 1/100 of the time, and so on, in a normal number.
Champernowne proved that C10 is normal in base 10, while Nakai and Shiokawa proved a more general theorem, a corollary of which is that Cb is normal in base b for any base b. It is an open problem whether Cb is normal in bases b′ ≠ b.
Kurt Mahler showed that the constant is transcendental.
The irrationality measure of C10 is μ(C10) = 10, and more generally μ(Cb) = b for any base b ≥ 2.
The Champernowne word is a disjunctive sequence.
Series
The definition of the Champernowne constant immediately gives rise to an infinite series representation invol |
https://en.wikipedia.org/wiki/Stoneham%20number | In mathematics, the Stoneham numbers are a certain class of real numbers, named after mathematician Richard G. Stoneham (1920–1996). For coprime numbers b, c > 1, the Stoneham number αb,c is defined as
αb,c = Σ_{n≥1} 1 / (cⁿ · b^(cⁿ))
It was shown by Stoneham in 1973 that αb,c is b-normal whenever c is an odd prime and b is a primitive root of c2. In 2002, Bailey & Crandall showed that coprimality of b, c > 1 is sufficient for b-normality of αb,c.
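A partial-sum sketch of the definition, using exact rational arithmetic (the function name is ours); with b = 2 and c = 3 the terms shrink super-exponentially, so a few terms already pin down many digits:

```python
from fractions import Fraction

def stoneham_partial(b, c, terms):
    """Partial sum of alpha_{b,c} = sum over n >= 1 of 1 / (c**n * b**(c**n))."""
    return sum(Fraction(1, c**n * b**(c**n)) for n in range(1, terms + 1))

# alpha_{2,3}: three terms are already accurate to about 10 decimal places.
a23 = stoneham_partial(2, 3, 3)
print(float(a23))  # ~0.0418836808
```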
References
Eponymous numbers in mathematics
Number theory
Sets of real numbers |
https://en.wikipedia.org/wiki/TUM%20Asia | The German Institute of Science and Technology – TUM Asia is an institute for research and education located in Singapore, formed in 2002 as the Asian campus of the Technical University of Munich.
Background
GIST-TUM Asia was set up in Singapore in 2002, under the Singapore government's Global Schoolhouse Initiative. GIST-TUM Asia currently offers five Master of Science programmes: MSc in Industrial Chemistry, MSc in Integrated Circuit Design, MSc in Green Electronics, MSc in Aerospace Engineering, and MSc in Transport and Logistics. The first three programmes are run jointly with either Nanyang Technological University (NTU) or the National University of Singapore (NUS), while the latter two are pure TUM programmes.
GIST-TUM Asia then partnered with the Singapore Institute of Technology (SIT) to offer Bachelor of Science programmes in Electrical Engineering and Information Technology and in Chemical Engineering in 2010. Admitted students undergo a 2½-year programme that includes a two- to four-month overseas exchange at the Munich campus.
In 2020, TUM deepened its partnership with SIT and converted the Electrical Engineering and Chemical Engineering programmes into four-year Bachelor of Engineering with Honours degrees.
TUM CREATE, a research initiative by GIST-TUM Asia, was established in June 2010 to foster research programmes in which scientists and researchers from both Germany and Singapore work together for the advancement of science and technology. With the research agreement in the TUM CREATE Centre for Electromobility in Megacities sealed between GIST-TUM Asia and the National Research Foundation of Singapore (NRF), the research programme will focus on developing innovative systems that combine safety and reliability with functionality and energy efficiency in electric vehicles.
References
External links
Official Website
Research institutes in Singapore
International research institutes
Engineering universities and colleges
Technical University of Munich
Germany–Singapor |
https://en.wikipedia.org/wiki/Cut-through%20switching | In computer networking, cut-through switching, also called cut-through forwarding, is a method for packet switching systems wherein the switch starts forwarding a frame (or packet) before the whole frame has been received, normally as soon as the destination address and outgoing interface are determined. Compared to store and forward, this technique reduces latency through the switch and relies on the destination devices for error handling. Pure cut-through switching is only possible when the speed of the outgoing interface is at least equal to that of the incoming interface.
Adaptive switching dynamically selects between cut-through and store and forward behaviors based on current network conditions.
Cut-through switching is closely associated with wormhole switching.
Use in Ethernet
When cut-through switching is used in Ethernet, the switch is not able to verify the integrity of an incoming frame before forwarding it.
The technology was developed by Kalpana, the company that introduced the first Ethernet switch.
The primary advantage of cut-through Ethernet switches, compared to store-and-forward Ethernet switches, is lower latency.
Cut-through Ethernet switches can support an end-to-end network delay latency of about ten microseconds.
End-to-end application latencies below 3 microseconds require specialized hardware such as InfiniBand.
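The latency advantage comes from not waiting for the whole frame: a store-and-forward switch pays the full serialization delay of the frame before forwarding, while a cut-through switch can forward once the destination address is read. A rough sketch with illustrative numbers (10 Gbit/s link, 1500-byte frame, 14-byte Ethernet header):

```python
def serialization_delay_us(frame_bytes, link_bps):
    """Time to clock frame_bytes onto a link, in microseconds."""
    return frame_bytes * 8 / link_bps * 1e6

LINK = 10e9  # 10 Gbit/s (illustrative)

# Store-and-forward buffers the whole 1500-byte frame at each hop;
# cut-through can forward after reading the 14-byte Ethernet header.
print(serialization_delay_us(1500, LINK))  # ~1.2 us per hop
print(serialization_delay_us(14, LINK))    # ~0.0112 us per hop
```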
A cut-through switch will forward corrupted frames, whereas a store and forward switch will drop them. Fragment free is a variation on cut-through switching that partially addresses this problem by assuring that collision fragments are not forwarded. Fragment free will hold the frame until the first 64 bytes are read from the source to detect a collision before forwarding. This is only useful if there is a chance of a collision on the source port.
The theory here is that frames that are damaged by collisions are often shorter than the minimum valid Ethernet frame size of 64 bytes. With a fragment-free buffer the fir |
https://en.wikipedia.org/wiki/Microbiological%20culture | A microbiological culture, or microbial culture, is a method of multiplying microbial organisms by letting them reproduce in a predetermined culture medium under controlled laboratory conditions. Microbial cultures are foundational diagnostic methods in microbiology and widely used research tools in molecular biology.
The term culture can also refer to the microorganisms being grown.
Microbial cultures are used to determine the type of organism, its abundance in the sample being tested, or both. It is one of the primary diagnostic methods of microbiology and used as a tool to determine the cause of infectious disease by letting the agent multiply in a predetermined medium. For example, a throat culture is taken by scraping the lining of tissue in the back of the throat and blotting the sample into a medium to be able to screen for harmful microorganisms, such as Streptococcus pyogenes, the causative agent of strep throat. Furthermore, the term culture is more generally used informally to refer to "selectively growing" a specific kind of microorganism in the lab.
It is often essential to isolate a pure culture of microorganisms. A pure (or axenic) culture is a population of cells or multicellular organisms growing in the absence of other species or types. A pure culture may originate from a single cell or single organism, in which case the cells are genetic clones of one another. To solidify the culture medium, agar, a gelatinous substance derived from seaweed, is used. A cheap substitute for agar is guar gum, which can be used for the isolation and maintenance of thermophiles.
Bacterial culture
There are several types of bacterial culture methods that are selected based on the agent being cultured and the downstream use.
Broth cultures
One method of bacterial culture is liquid culture, in which the desired bacteria are suspended in a liquid nutrient medium, such as Luria broth, in an upright flask. This allows a scientist |
https://en.wikipedia.org/wiki/Lysozyme | Lysozyme (, muramidase, N-acetylmuramide glycanhydrolase; systematic name peptidoglycan N-acetylmuramoylhydrolase) is an antimicrobial enzyme produced by animals that forms part of the innate immune system. It is a glycoside hydrolase that catalyzes the following process:
Hydrolysis of (1→4)-β-linkages between N-acetylmuramic acid and N-acetyl-D-glucosamine residues in a peptidoglycan and between N-acetyl-D-glucosamine residues in chitodextrins
Peptidoglycan is the major component of gram-positive bacterial cell walls. This hydrolysis in turn compromises the integrity of bacterial cell walls, causing lysis of the bacteria.
Lysozyme is abundant in secretions including tears, saliva, human milk, and mucus. It is also present in cytoplasmic granules of the macrophages and the polymorphonuclear neutrophils (PMNs). Large amounts of lysozyme can be found in egg white. C-type lysozymes are closely related to α-lactalbumin in sequence and structure, making them part of the same glycoside hydrolase family 22. In humans, the C-type lysozyme enzyme is encoded by the LYZ gene.
Hen egg white lysozyme is thermally stable, with a melting point reaching up to 72 °C at pH 5.0. However, lysozyme in human milk loses activity very quickly at that temperature. Hen egg white lysozyme maintains its activity in a large range of pH (6–9). Its isoelectric point is 11.35. The isoelectric point of human milk lysozyme is 10.5–11.
Function and mechanism
The enzyme functions by hydrolyzing glycosidic bonds in peptidoglycans. The enzyme can also break glycosidic bonds in chitin, although not as effectively as true chitinases.
Lysozyme's active site binds the peptidoglycan molecule in the prominent cleft between its two domains. It attacks peptidoglycans (found in the cell walls of bacteria, especially Gram-positive bacteria), its natural substrate, between N-acetylmuramic acid (NAM) and the fourth carbon atom of N-acetylglucosamine (NAG).
Shorter saccharides like tetrasaccharide have also |
https://en.wikipedia.org/wiki/Coefficient%20of%20performance | The coefficient of performance or COP (sometimes CP or CoP) of a heat pump, refrigerator or air conditioning system is a ratio of useful heating or cooling provided to work (energy) required. Higher COPs equate to higher efficiency, lower energy (power) consumption and thus lower operating costs. The COP is used in thermodynamics.
The COP usually exceeds 1, especially in heat pumps, because instead of just converting work to heat (which, if 100% efficient, would be a COP of 1), it pumps additional heat from a heat source to where the heat is required. Most air conditioners have a COP of 2.3 to 3.5. Less work is required to move heat than for conversion into heat, and because of this, heat pumps, air conditioners and refrigeration systems can have a coefficient of performance greater than one.
For complete systems, COP calculations should include energy consumption of all power consuming auxiliaries. The COP is highly dependent on operating conditions, especially absolute temperature and relative temperature between sink and system, and is often graphed or averaged against expected conditions.
Performance of absorption refrigerator chillers is typically much lower, as they are not heat pumps relying on compression, but instead rely on chemical reactions driven by heat.
Equation
The equation is:
COP = Q / W
where
Q is the useful heat supplied or removed by the considered system (machine), and
W is the net work put into the considered system in one cycle.
The COP for heating and cooling are different because the heat reservoir of interest is different. When one is interested in how well a machine cools, the COP is the ratio of the heat taken up from the cold reservoir to input work. However, for heating, the COP is the ratio of the magnitude of the heat given off to the hot reservoir (which is the heat taken up from the cold reservoir plus the input work) to the input work:
COP_cooling = Q_C / W and COP_heating = |Q_H| / W = (Q_C + W) / W
where
Q_C is the heat removed from the cold reservoir and added to the system;
Q_H is the heat given off to |
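The energy balance described above implies COP_heating = COP_cooling + 1 for the same machine under the same conditions; a minimal numeric sketch (figures illustrative):

```python
def cop_cooling(q_cold, work):
    """Heat removed from the cold reservoir per unit work input."""
    return q_cold / work

def cop_heating(q_cold, work):
    """Heat delivered to the hot reservoir (= q_cold + work) per unit work."""
    return (q_cold + work) / work

q, w = 3.0, 1.0  # illustrative: 3 kWh of heat moved per 1 kWh of work
print(cop_cooling(q, w))  # 3.0
print(cop_heating(q, w))  # 4.0 = COP_cooling + 1
```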
https://en.wikipedia.org/wiki/Cohesion%20%28computer%20science%29 | In computer programming, cohesion refers to the degree to which the elements inside a module belong together. In one sense, it is a measure of the strength of relationship between the methods and data of a class and some unifying purpose or concept served by that class. In another sense, it is a measure of the strength of relationship between the class's methods and data themselves.
Cohesion is an ordinal type of measurement and is usually described as “high cohesion” or “low cohesion”. Modules with high cohesion tend to be preferable, because high cohesion is associated with several desirable traits of software including robustness, reliability, reusability, and understandability. In contrast, low cohesion is associated with undesirable traits such as being difficult to maintain, test, reuse, or even understand.
Cohesion is often contrasted with coupling. High cohesion often correlates with loose coupling, and vice versa. The software metrics of coupling and cohesion were invented by Larry Constantine in the late 1960s as part of Structured Design, based on characteristics of “good” programming practices that reduced maintenance and modification costs. Structured Design, cohesion and coupling were published in the article Stevens, Myers & Constantine (1974) and the book Yourdon & Constantine (1979); the latter two subsequently became standard terms in software engineering.
High cohesion
In object-oriented programming, if the methods that serve a class tend to be similar in many aspects, then the class is said to have high cohesion. In a highly cohesive system, code readability and reusability are increased, while complexity is kept manageable.
Cohesion is increased if:
The functionalities embedded in a class, accessed through its methods, have much in common.
Methods carry out a small number of related activities, by avoiding coarsely grained or unrelated sets of data.
Related methods are in the same source file or otherwise grouped together; for example, in s |
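As a schematic (and entirely hypothetical) illustration of the contrast, the first class below keeps related methods and data together in service of one purpose, while the second lumps unrelated responsibilities into a single class:

```python
# High cohesion: every method reads or updates the same data (the balance)
# toward one unifying purpose. All names here are invented for illustration.
class Account:
    def __init__(self, balance=0):
        self.balance = balance

    def deposit(self, amount):
        self.balance += amount

    def withdraw(self, amount):
        if amount > self.balance:
            raise ValueError("insufficient funds")
        self.balance -= amount

# Low cohesion: unrelated responsibilities sharing no data or purpose.
class MiscUtilities:
    def deposit_to(self, account, amount): ...
    def send_email(self, address, body): ...
    def parse_csv(self, path): ...

acct = Account()
acct.deposit(100)
acct.withdraw(30)
print(acct.balance)  # 70
```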
https://en.wikipedia.org/wiki/Copy%20constructor%20%28C%2B%2B%29 | In the C++ programming language, a copy constructor is a special constructor for creating a new object as a copy of an existing object. Copy constructors are the standard way of copying objects in C++, as opposed to cloning, and have C++-specific nuances.
The first argument of such a constructor is a reference to an object of the same type as is being constructed (const or non-const), which might be followed by parameters of any type (all having default values).
Normally the compiler automatically creates a copy constructor for each class (known as an implicit copy constructor) but for special cases the programmer creates the copy constructor, known as a user-defined copy constructor. In such cases, the compiler does not create one. Hence, there is always one copy constructor that is either defined by the user or by the system.
A user-defined copy constructor is generally needed when an object owns pointers or non-shareable references, such as to a file, in which case a destructor and an assignment operator should also be written (see Rule of three).
Definition
Copying of objects is achieved by the use of a copy constructor and an assignment operator. A copy constructor has as its first parameter a (possibly const or volatile) reference to its own class type. It can have more arguments, but the rest must have default values associated with them. The following would be valid copy constructors for class X:
X(const X& copy_from_me);
X(X& copy_from_me);
X(volatile X& copy_from_me);
X(const volatile X& copy_from_me);
X(X& copy_from_me, int = 0);
X(const X& copy_from_me, double = 1.0, int = 42);
...
The first one should be used unless there is a good reason to use one of the others. One of the differences between the first and the second is that temporaries can be copied with the first. For example:
X a = X(); // valid given X(const X& copy_from_me) but not valid given X(X& copy_from_me)
// because the second wants a non-const X&
/ |
https://en.wikipedia.org/wiki/Alexandrov%20topology | In topology, an Alexandrov topology is a topology in which the intersection of every family of open sets is open. It is an axiom of topology that the intersection of every finite family of open sets is open; in Alexandrov topologies the finite restriction is dropped.
A set together with an Alexandrov topology is known as an Alexandrov-discrete space or finitely generated space.
Alexandrov topologies are uniquely determined by their specialization preorders. Indeed, given any preorder ≤ on a set X, there is a unique Alexandrov topology on X for which the specialization preorder is ≤. The open sets are just the upper sets with respect to ≤. Thus, Alexandrov topologies on X are in one-to-one correspondence with preorders on X.
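The correspondence can be made concrete on a small finite example: given a preorder, the open sets of the associated Alexandrov topology are exactly the upper sets. A sketch (the helper name is ours) on the three-point chain 0 ≤ 1 ≤ 2:

```python
from itertools import combinations

def alexandrov_opens(points, le):
    """Open sets of the Alexandrov topology for preorder le: the upper sets."""
    subsets = [frozenset(c) for r in range(len(points) + 1)
               for c in combinations(points, r)]
    return [s for s in subsets
            if all(y in s for x in s for y in points if le(x, y))]

# Three-point chain 0 <= 1 <= 2 (illustrative finite example).
opens = alexandrov_opens([0, 1, 2], lambda a, b: a <= b)
print(sorted(sorted(s) for s in opens))  # [[], [0, 1, 2], [1, 2], [2]]

# Alexandrov property: arbitrary intersections of open sets stay open.
inter = frozenset.intersection(*[s for s in opens if s])
print(inter in opens)  # True
```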
Alexandrov-discrete spaces are also called finitely generated spaces since their topology is uniquely determined by the family of all finite subspaces. Alexandrov-discrete spaces can thus be viewed as a generalization of finite topological spaces.
Due to the fact that inverse images commute with arbitrary unions and intersections, the property of being an Alexandrov-discrete space is preserved under quotients.
Alexandrov-discrete spaces are named after the Russian topologist Pavel Alexandrov. They should not be confused with the more geometrical Alexandrov spaces introduced by the Russian mathematician Aleksandr Danilovich Aleksandrov.
Characterizations of Alexandrov topologies
Alexandrov topologies have numerous characterizations. Let X = <X, T> be a topological space. Then the following are equivalent:
Open and closed set characterizations:
Open set. An arbitrary intersection of open sets in X is open.
Closed set. An arbitrary union of closed sets in X is closed.
Neighbourhood characterizations:
Smallest neighbourhood. Every point of X has a smallest neighbourhood.
Neighbourhood filter. The neighbourhood filter of every point in X is closed under arbitrary intersections.
Interior and closure algebraic characterizations:
Interior |
https://en.wikipedia.org/wiki/Artinian%20module | In mathematics, specifically abstract algebra, an Artinian module is a module that satisfies the descending chain condition on its poset of submodules. They are for modules what Artinian rings are for rings, and a ring is Artinian if and only if it is an Artinian module over itself (with left or right multiplication). Both concepts are named for Emil Artin.
In the presence of the axiom of (dependent) choice, the descending chain condition becomes equivalent to the minimum condition, and so that may be used in the definition instead.
Like Noetherian modules, Artinian modules enjoy the following heredity property:
If M is an Artinian R-module, then so is any submodule and any quotient of M.
The converse also holds:
If M is any R-module and N any Artinian submodule such that M/N is Artinian, then M is Artinian.
As a consequence, any finitely-generated module over an Artinian ring is Artinian. Since an Artinian ring is also a Noetherian ring, and finitely-generated modules over a Noetherian ring are Noetherian, it is true that for an Artinian ring R, any finitely-generated R-module is both Noetherian and Artinian, and is said to be of finite length. It also follows that any finitely generated Artinian module is Noetherian even without the assumption of R being Artinian. However, if R is not Artinian and M is not finitely-generated, there are counterexamples.
Left and right Artinian rings, modules and bimodules
The ring R can be considered as a right module, where the action is the natural one given by the ring multiplication on the right. R is called right Artinian when this right module R is an Artinian module. The definition of "left Artinian ring" is done analogously. For noncommutative rings this distinction is necessary, because it is possible for a ring to be Artinian on one side but not the other.
The left-right adjectives are not normally necessary for modules, because the module M is usually given as a left or right R-module at the outset. However, it |
https://en.wikipedia.org/wiki/Specialization%20%28pre%29order | In the branch of mathematics known as topology, the specialization (or canonical) preorder is a natural preorder on the set of the points of a topological space. For most spaces that are considered in practice, namely for all those that satisfy the T0 separation axiom, this preorder is even a partial order (called the specialization order). On the other hand, for T1 spaces the order becomes trivial and is of little interest.
The specialization order is often considered in applications in computer science, where T0 spaces occur in denotational semantics. The specialization order is also important for identifying suitable topologies on partially ordered sets, as is done in order theory.
Definition and motivation
Consider any topological space X. The specialization preorder ≤ on X relates two points of X when one lies in the closure of the other. However, various authors disagree on which 'direction' the order should go. What is agreed is that if
x is contained in cl{y},
(where cl{y} denotes the closure of the singleton set {y}, i.e. the intersection of all closed sets containing {y}), we say that x is a specialization of y and that y is a generalization of x; this is commonly written y ⤳ x.
Unfortunately, the property "x is a specialization of y" is written as "x ≤ y" by some authors and as "y ≤ x" by others.
Both definitions have intuitive justifications: in the case of the former, we have
x ≤ y if and only if cl{x} ⊆ cl{y}.
However, in the case where our space X is the prime spectrum Spec R of a commutative ring R (which is the motivational situation in applications related to algebraic geometry), then under our second definition of the order, we have
y ≤ x if and only if y ⊆ x as prime ideals of the ring R.
For the sake of consistency, for the remainder of this article we will take the first definition, that "x is a specialization of y" be written as x ≤ y. We then see,
x ≤ y if and only if x is contained in all close |
https://en.wikipedia.org/wiki/Btrieve | Btrieve is a transactional database (navigational database) software product. It is based on Indexed Sequential Access Method (ISAM), which is a way of storing data for fast retrieval. There have been several versions of the product for DOS, Linux, older versions of Microsoft Windows, 32-bit IBM OS/2 and for Novell NetWare.
It was originally a record manager published by SoftCraft. Btrieve was written by Doug Woodward and Nancy Woodward and initial funding was provided in part by Doug's brother Loyd Woodward. Around the same time as the release of the first IBM PCs, Doug received 50% of the company as a wedding gift and later purchased the remainder from his brother. After gaining market share and popularity, it was acquired from Doug and Nancy Woodward by Novell in 1987, for integration into their NetWare operating system in addition to continuing with the DOS version. The product gained significant market share as a database embedded in mid-market applications in addition to being embedded in every copy of NetWare 2.x, 3.x and 4.x since it was available on every NetWare network. After some reorganization within Novell, it was decided in 1994 to spin off the product and technology to Doug and Nancy Woodward along with Ron Harris, to be developed by a new company known as Btrieve Technologies, Inc. (BTI).
Btrieve was modularized starting with version 6.15 and became one of two database front-ends that plugged into a standard software interface called the MicroKernel Database Engine. The Btrieve front-end supported the Btrieve API and the other front-end was called Scalable SQL, a relational database product based upon the MKDE that used its own variety of Structured Query Language, otherwise known as SQL. After these versions were released (Btrieve 6.15 and ScalableSQL v4) the company was renamed to Pervasive Software prior to their IPO. Shortly thereafter the Btrieve and ScalableSQL products were combined into the products sold as Pervasive.SQL or PSQL, and later |
https://en.wikipedia.org/wiki/Binomial%20%28polynomial%29 | In algebra, a binomial is a polynomial that is the sum of two terms, each of which is a monomial. It is the simplest kind of a sparse polynomial after the monomials.
Definition
A binomial is a polynomial which is the sum of two monomials. A binomial in a single indeterminate (also known as a univariate binomial) can be written in the form
axᵐ + bxⁿ,
where a and b are numbers, m and n are distinct non-negative integers, and x is a symbol which is called an indeterminate or, for historical reasons, a variable. In the context of Laurent polynomials, a Laurent binomial, often simply called a binomial, is similarly defined, but the exponents m and n may be negative.
More generally, a binomial may be written as:
a·x₁^{n₁} ⋯ x_i^{n_i} + b·x₁^{m₁} ⋯ x_i^{m_i}
Examples
Operations on simple binomials
The binomial x² − y² can be factored as the product of two other binomials:
x² − y² = (x + y)(x − y).
This is a special case of the more general formula:
x^(n+1) − y^(n+1) = (x − y)(xⁿ + xⁿ⁻¹y + ⋯ + xyⁿ⁻¹ + yⁿ).
When working over the complex numbers, this can also be extended to:
x² + y² = x² − (iy)² = (x + iy)(x − iy).
The product of a pair of linear binomials ax + b and cx + d is a trinomial:
(ax + b)(cx + d) = acx² + (ad + bc)x + bd.
A binomial raised to the nth power, represented as (x + y)ⁿ, can be expanded by means of the binomial theorem or, equivalently, using Pascal's triangle. For example, the square of the binomial x + y is equal to the sum of the squares of the two terms and twice the product of the terms, that is:
(x + y)² = x² + 2xy + y².
The numbers (1, 2, 1) appearing as multipliers for the terms in this expansion are the binomial coefficients two rows down from the top of Pascal's triangle. The expansion of the nth power uses the numbers n rows down from the top of the triangle.
An application of the above formula for the square of a binomial is the "(m, n)-formula" for generating Pythagorean triples:
For m < n, let a = n² − m², b = 2mn, and c = n² + m²; then a² + b² = c².
Binomials that are sums or differences of cubes can be factored into smaller-degree polynomials as follows:
x³ + y³ = (x + y)(x² − xy + y²)
x³ − y³ = (x − y)(x² + xy + y²).
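These identities are easy to spot-check numerically; a short sketch covering the (m, n) Pythagorean-triple formula and the cube factorizations (function name is ours):

```python
def pythagorean_triple(m, n):
    """For integers m < n, (n*n - m*m, 2*m*n, n*n + m*m) is a Pythagorean triple."""
    return n * n - m * m, 2 * m * n, n * n + m * m

a, b, c = pythagorean_triple(1, 2)
print((a, b, c), a * a + b * b == c * c)  # (3, 4, 5) True

# Sum and difference of cubes, spot-checked at x = 7, y = 3:
x, y = 7, 3
print(x**3 + y**3 == (x + y) * (x * x - x * y + y * y))  # True
print(x**3 - y**3 == (x - y) * (x * x + x * y + y * y))  # True
```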
See also
Completing the square
Binomial distribution
List of factorial and binomial topics (which contains a large number of related links)
Notes
References
Algebra
Factorial and binomial topics |
https://en.wikipedia.org/wiki/Krull%27s%20principal%20ideal%20theorem | In commutative algebra, Krull's principal ideal theorem, named after Wolfgang Krull (1899–1971), gives a bound on the height of a principal ideal in a commutative Noetherian ring. The theorem is sometimes referred to by its German name, Krulls Hauptidealsatz (from Haupt- ("principal") + Ideal + Satz ("theorem")).
Precisely, if R is a Noetherian ring and I is a principal, proper ideal of R, then each minimal prime ideal over I has height at most one.
This theorem can be generalized to ideals that are not principal, and the result is often called Krull's height theorem. This says that if R is a Noetherian ring and I is a proper ideal generated by n elements of R, then each minimal prime over I has height at most n. The converse is also true: if a prime ideal has height n, then it is a minimal prime ideal over an ideal generated by n elements.
The principal ideal theorem and the generalization, the height theorem, both follow from the fundamental theorem of dimension theory in commutative algebra (see also below for the direct proofs). Bourbaki's Commutative Algebra gives a direct proof. Kaplansky's Commutative Rings includes a proof due to David Rees.
Proofs
Proof of the principal ideal theorem
Let $A$ be a Noetherian ring, $x$ an element of it and $\mathfrak{p}$ a minimal prime over $x$. Replacing $A$ by the localization $A_{\mathfrak{p}}$, we can assume $A$ is local with the maximal ideal $\mathfrak{p}$. Let $\mathfrak{q} \subsetneq \mathfrak{p}$ be a strictly smaller prime ideal and let $\mathfrak{q}^{(n)} = \mathfrak{q}^n A_{\mathfrak{q}} \cap A$, which is a $\mathfrak{q}$-primary ideal called the $n$-th symbolic power of $\mathfrak{q}$. It forms a descending chain of ideals $A \supset \mathfrak{q}^{(1)} \supset \mathfrak{q}^{(2)} \supset \cdots$. Thus, there is the descending chain of ideals $\mathfrak{q}^{(n)} + (x)/(x)$ in the ring $\overline{A} = A/(x)$. Now, the radical $\sqrt{(x)}$ is the intersection of all minimal prime ideals containing $x$; $\mathfrak{p}$ is among them. But $\mathfrak{p}$ is the unique maximal ideal and thus $\sqrt{(x)} = \mathfrak{p}$. Since $(x)$ contains some power of its radical, it follows that $\overline{A}$ is an Artinian ring and thus the chain stabilizes, and so there is some $n$ such that $\mathfrak{q}^{(n)} + (x) = \mathfrak{q}^{(n+1)} + (x)$. It implies:
$\mathfrak{q}^{(n)} = \mathfrak{q}^{(n+1)} + \mathfrak{q}^{(n)} \cap (x)$,
from the fact that $\mathfrak{q}^{(n)}$ is $\mathfrak{q}$-primary (if $y$ is in $\mathfrak{q}^{(n)}$, then $y = z + ax$ with $z \in \mathfrak{q}^{(n+1)}$ and $a \in A$. Since $\mathfrak{p}$ is minimal over $x$, $x \notin \mathfrak{q}$ and so $ax \in \mathfrak{q}^{(n)}$ implies $a$ is in $\mathfrak{q}^{(n)}$.) Now, quotientin