| source | text |
|---|---|
https://en.wikipedia.org/wiki/Underwater%20Demolition%20Team | Underwater Demolition Teams (UDTs), or frogmen, were amphibious units created by the United States Navy during World War II with specialized non-tactical missions. They were predecessors of the navy's current SEAL teams.
Their primary WWII function began with reconnaissance and underwater demolition of natural or man-made obstacles obstructing amphibious landings. After the war they transitioned to scuba gear, which changed their capabilities and led them to be regarded as more elite and tactical during the Korean and Vietnam Wars. UDTs were pioneers in underwater demolition, closed-circuit diving, combat swimming, and midget submarine (dry and wet submersible) operations. They were later tasked with recovering space capsules and astronauts after splashdown in the Mercury, Gemini and Apollo space flight programs. Commando training was added, making them the forerunners of the United States Navy SEAL program that exists today.
In 1983, after additional SEAL training, the UDTs were re-designated as SEAL Teams or Swimmer Delivery Vehicle Teams (SDVTs). SDVTs have since been re-designated SEAL Delivery Vehicle Teams.
Early history
The United States Navy studied the problems encountered by the disastrous Allied amphibious landings during the Gallipoli Campaign of World War I. This contributed to the development and experimentation of new landing techniques in the mid-1930s. In August 1941, landing trials were performed and one hazardous operation led to Army Second Lieutenant Lloyd E. Peddicord being assigned the task of analyzing the need for a human intelligence (HUMINT) capability.
When the U.S. entered World War II, the Navy realized that in order to strike at the Axis powers the U.S. forces would need to perform a large number of amphibious attacks. The Navy decided that men would have to go in to reconnoiter the landing beaches, locate obstacles and defenses, as well as guide the landing forces ashore. In August 1942, Peddicord set up a recon school for his |
https://en.wikipedia.org/wiki/Frequency%20compensation | In electronics engineering, frequency compensation is a technique used in amplifiers, and especially in amplifiers employing negative feedback. It usually has two primary goals: To avoid the unintentional creation of positive feedback, which will cause the amplifier to oscillate, and to control overshoot and ringing in the amplifier's step response. It is also used extensively to improve the bandwidth of single pole systems.
Explanation
Most amplifiers use negative feedback to trade gain for other desirable properties, such as decreased distortion, improved noise reduction or increased invariance to variation of parameters such as temperature. Ideally, the phase characteristic of an amplifier's frequency response would be linear; however, device limitations make this goal physically unattainable. More particularly, capacitances within the amplifier's gain stages cause the output signal to lag behind the input signal by up to 90° for each pole they create. If the sum of these phase lags reaches 180°, the output signal will be the negative of the input signal. Feeding back any portion of this output signal to the inverting (negative) input when the gain of the amplifier is sufficient will cause the amplifier to oscillate. This is because the feedback signal will reinforce the input signal. That is, the feedback is then positive rather than negative.
Frequency compensation is implemented to avoid this result.
Another goal of frequency compensation is to control the step response of an amplifier circuit as shown in Figure 1. For example, if a step in voltage is input to a voltage amplifier, ideally a step in output voltage would occur. However, the output is not ideal because of the frequency response of the amplifier, and ringing occurs. Several figures of merit to describe the adequacy of step response are in common use. One is the rise time of the output, which ideally would be short. A second is the time for the output to lock into its final value, which ag |
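As a rough numerical illustration of the phase-lag argument above (not taken from the article; the three pole frequencies are arbitrary), the following sketch sums the phase contributions of three single-pole stages and finds where the total lag reaches 180°:
import numpy as np

# Each real pole at frequency p contributes a phase lag of arctan(f/p), up to 90 degrees.
# With three poles (illustrative values), the cumulative lag crosses 180 degrees at a
# finite frequency; feedback applied around such an amplifier can then become positive.
poles_hz = [1e4, 1e6, 5e6]
f = np.logspace(2, 8, 2000)
lag_deg = sum(np.degrees(np.arctan(f / p)) for p in poles_hz)
f_180 = f[np.argmax(lag_deg >= 180)]
print(f"total phase lag reaches 180 degrees near {f_180:.3g} Hz")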
https://en.wikipedia.org/wiki/Living%20Machine | A Living Machine is a form of ecological sewage treatment based on fixed-film ecology.
The Living Machine system was commercialized and is marketed by Living Machine Systems, L3C, a corporation based in Charlottesville, Va, USA.
Examples
Examples of Living Machines are mechanical composters for industrial kitchens, effective microorganisms as fertilizer for agricultural purposes, and Integrated Biotectural systems in landscaping and architecture like Earthships or the IBTS Greenhouse.
Components like tomato plants (for additional water purification) and fish (for food) have been part of the living, ecosystem-like designs. The theory does not limit the size of the system or the number of species. One design optimum is a natural ecosystem designed for a specific purpose, such as a sewage-treating wetland suited to the local environment. Another optimum is an economically viable system that returns a profit for the investor. The practice of permaculture is one example of a compromise between the two optimum design points.
The scale of Living Machine systems ranges from the individual building to community-scale public works. Some of the earliest Living Machines were used to treat domestic wastewater in small, ecologically-conscious villages, such as the Findhorn Community in Scotland. The latest-generation Tidal Flow Wetland Living Machines are being used in major urban office buildings, military bases, housing developments, resorts and institutional campuses.
Living Machine System Process
“Fixed film ecology” has superseded systems based on hydroponics or a fluid medium. In fixed film systems the wetland cells are filled with a solid aggregate medium having extensive surface area for beneficial biofilm (treatment bacteria) growth. Fixed film ecology allows denser and more diverse micro-ecosystems to form than does a liquid medium. These ecosystems go well beyond bacteria to include a variety of organisms up to and including macro-vegetation.
Tidal cycles (filli |
https://en.wikipedia.org/wiki/Hilbert%27s%20program | In mathematics, Hilbert's program, formulated by German mathematician David Hilbert in the early 1920s, was a proposed solution to the foundational crisis of mathematics, when early attempts to clarify the foundations of mathematics were found to suffer from paradoxes and inconsistencies. As a solution, Hilbert proposed to ground all existing theories in a finite, complete set of axioms, and to provide a proof that these axioms were consistent. Hilbert proposed that the consistency of more complicated systems, such as real analysis, could be proven in terms of simpler systems. Ultimately, the consistency of all of mathematics could be reduced to basic arithmetic.
Gödel's incompleteness theorems, published in 1931, showed that Hilbert's program was unattainable for key areas of mathematics. In his first theorem, Gödel showed that any consistent system with a computable set of axioms which is capable of expressing arithmetic can never be complete: it is possible to construct a statement that can be shown to be true, but that cannot be derived from the formal rules of the system. In his second theorem, he showed that such a system could not prove its own consistency, so it certainly cannot be used to prove the consistency of anything stronger. This refuted Hilbert's assumption that a finitistic system could be used to prove its own consistency, and hence the consistency of anything else.
Statement of Hilbert's program
The main goal of Hilbert's program was to provide secure foundations for all mathematics. In particular, this should include:
A formulation of all mathematics; in other words all mathematical statements should be written in a precise formal language, and manipulated according to well defined rules.
Completeness: a proof that all true mathematical statements can be proved in the formalism.
Consistency: a proof that no contradiction can be obtained in the formalism of mathematics. This consistency proof should preferably use only "fin |
https://en.wikipedia.org/wiki/Primordial%20soup | Primordial soup, also known as primordial goo, primordial ooze, prebiotic soup and prebiotic broth, is the hypothetical set of conditions present on the Earth around 3.7 to 4.0 billion years ago. It is an aspect of the heterotrophic theory (also known as the Oparin–Haldane hypothesis) concerning the origin of life, first proposed by Alexander Oparin in 1924, and J. B. S. Haldane in 1929.
As formulated by Oparin, in the primitive Earth's surface layers, carbon, hydrogen, water vapour, and ammonia reacted to form the first organic compounds. The concept of a primordial soup gained credence in 1953 when the "Miller–Urey experiment" used a highly reduced mixture of gases—methane, ammonia and hydrogen—to form basic organic monomers, such as amino acids.
Historical background
The notion that living beings originated from inanimate materials comes from the Ancient Greeks—the theory known as spontaneous generation. Aristotle in the 4th century BCE gave a proper explanation, writing:
Aristotle also states that it is not only that animals originate from other similar animals, but also that living things do arise and always have arisen from lifeless matter. His theory remained the dominant idea on the origin of life (outside that of a deity as a causal agent), in various forms, from the ancient philosophers to the Renaissance thinkers. With the birth of modern science, experimental refutations emerged. Italian physician Francesco Redi demonstrated in 1668 that maggots developed from rotten meat only in a jar that flies could enter, not in a closed-lid jar. He concluded: omne vivum ex vivo (All life comes from life).
The experiment of French chemist Louis Pasteur in 1859 is regarded as the death blow to spontaneous generation. He experimentally showed that organisms (microbes) cannot grow in sterilised water unless it is exposed to air. The experiment won him the Alhumbert Prize in 1862 from the French Academy of Sciences, and he concluded: "Never will the doctrine of |
https://en.wikipedia.org/wiki/Rhombic%20dodecahedron | In geometry, the rhombic dodecahedron is a convex polyhedron with 12 congruent rhombic faces. It has 24 edges, and 14 vertices of 2 types. It is a Catalan solid, and the dual polyhedron of the cuboctahedron.
Properties
The rhombic dodecahedron is a zonohedron. Its polyhedral dual is the cuboctahedron. The long face-diagonal length is exactly √2 times the short face-diagonal length; thus, the acute angles on each face measure arccos(1/3), or approximately 70.53°.
Being the dual of an Archimedean polyhedron, the rhombic dodecahedron is face-transitive, meaning the symmetry group of the solid acts transitively on its set of faces. In elementary terms, this means that for any two faces A and B, there is a rotation or reflection of the solid that leaves it occupying the same region of space while moving face A to face B.
The rhombic dodecahedron can be viewed as the convex hull of the union of the vertices of a cube and an octahedron. The 6 vertices where 4 rhombi meet correspond to the vertices of the octahedron, while the 8 vertices where 3 rhombi meet correspond to the vertices of the cube.
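A small numerical check of this construction (not part of the article; coordinates chosen as cube vertices (±1, ±1, ±1) and octahedron vertices (±2, 0, 0), (0, ±2, 0), (0, 0, ±2)) confirms the face-diagonal ratio and the acute face angle quoted above:
import math

# One face of the resulting solid lies in the plane x + y = 2 and has the four vertices below:
# two octahedron (4-valent) vertices and two cube (3-valent) vertices.
O1, C1, O2, C2 = (2, 0, 0), (1, 1, 1), (0, 2, 0), (1, 1, -1)

long_diag = math.dist(O1, O2)        # between the two 4-valent vertices
short_diag = math.dist(C1, C2)       # between the two 3-valent vertices
print(long_diag / short_diag)        # sqrt(2) ≈ 1.4142

# Acute angle at a 4-valent vertex: arccos(1/3) ≈ 70.53 degrees
u = tuple(a - b for a, b in zip(C1, O1))
v = tuple(a - b for a, b in zip(C2, O1))
cos_angle = sum(x * y for x, y in zip(u, v)) / (math.dist(C1, O1) * math.dist(C2, O1))
print(math.degrees(math.acos(cos_angle)))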
The rhombic dodecahedron is one of the nine edge-transitive convex polyhedra, the others being the five Platonic solids, the cuboctahedron, the icosidodecahedron, and the rhombic triacontahedron.
The rhombic dodecahedron can be used to tessellate three-dimensional space: it can be stacked to fill a space, much like hexagons fill a plane.
This polyhedron in a space-filling tessellation can be seen as the Voronoi tessellation of the face-centered cubic lattice. It is the Brillouin zone of body centered cubic (bcc) crystals. Some minerals such as garnet form a rhombic dodecahedral crystal habit. As Johannes Kepler noted in his 1611 book on snowflakes (Strena seu de Nive Sexangula), honey bees use the geometry of rhombic dodecahedra to form honeycombs from a tessellation of cells each of which is a hexagonal prism capped with half a rhombic dodecahedron. The rhombic dodecahedron also |
https://en.wikipedia.org/wiki/Symbolic%20execution | In computer science, symbolic execution (also symbolic evaluation or symbex) is a means of analyzing a program to determine what inputs cause each part of a program to execute. An interpreter follows the program, assuming symbolic values for inputs rather than obtaining actual inputs as normal execution of the program would. It thus arrives at expressions in terms of those symbols for expressions and variables in the program, and constraints in terms of those symbols for the possible outcomes of each conditional branch. Finally, the possible inputs that trigger a branch can be determined by solving the constraints.
The field of symbolic simulation applies the same concept to hardware. Symbolic computation applies the concept to the analysis of mathematical expressions.
Example
Consider the program below, which reads in a value and fails if the input is 6.
int f() {
  ...
  y = read();
  z = y * 2;
  if (z == 12) {
    fail();
  } else {
    printf("OK");
  }
}
During a normal execution ("concrete" execution), the program would read a concrete input value (e.g., 5) and assign it to y. Execution would then proceed with the multiplication and the conditional branch, which would evaluate to false and print OK.
During symbolic execution, the program reads a symbolic value (e.g., λ) and assigns it to y. The program would then proceed with the multiplication and assign λ * 2 to z. When reaching the if statement, it would evaluate λ * 2 == 12. At this point of the program, λ could take any value, and symbolic execution can therefore proceed along both branches, by "forking" two paths. Each path gets assigned a copy of the program state at the branch instruction as well as a path constraint. In this example, the path constraint is λ * 2 == 12 for the if branch and λ * 2 != 12 for the else branch. Both paths can be symbolically executed independently. When paths terminate (e.g., as a result of executing fail() or simply exiting), symbolic execution computes a concrete |
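As a minimal sketch of the forking step just described (not how any particular engine is implemented), one can hand the two path constraints to an off-the-shelf SMT solver such as Z3; the name lam below stands in for the symbolic value λ:
from z3 import Int, Solver, sat

lam = Int("lam")               # symbolic value assigned to y
z_expr = lam * 2               # symbolic expression assigned to z

for label, constraint in [("if branch", z_expr == 12), ("else branch", z_expr != 12)]:
    s = Solver()
    s.add(constraint)          # path constraint collected at the fork
    if s.check() == sat:
        print(label, "is feasible, e.g. lam =", s.model()[lam])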
https://en.wikipedia.org/wiki/Palatini%20variation | In general relativity and gravitation the Palatini variation is nowadays thought of as a variation of a Lagrangian with respect to the connection.
In fact, as is well known, the Einstein–Hilbert action for general relativity was first formulated purely in terms of the spacetime metric g_μν. In the Palatini variational method one takes as independent field variables not only the ten components g_μν but also the forty components of the affine connection Γ^λ_μν, assuming, a priori, no dependence of the Γ^λ_μν on the g_μν and their derivatives.
The reason the Palatini variation is considered important is that it means that the use of the Christoffel connection in general relativity does not have to be added as a separate assumption; the information is already in the Lagrangian. For theories of gravitation which have more complex Lagrangians than the Einstein–Hilbert Lagrangian of general relativity, the Palatini variation sometimes gives more complex connections and sometimes tensorial equations.
Attilio Palatini (1889–1949) was an Italian mathematician who received his doctorate from the University of Padova, where he studied under Levi-Civita and Ricci-Curbastro.
The history of the subject, and Palatini's connection with it, are not straightforward (see references). In fact, it seems that what the textbooks now call "Palatini formalism" was actually invented in 1925 by Einstein, and as the years passed, people tended to mix up the Palatini identity and the Palatini formalism.
See also
Palatini identity
Self-dual Palatini action
Tetradic Palatini action
References
[English translation by R. Hojman and C. Mukku in P. G. Bergmann and V. De Sabbata (eds.) Cosmology and Gravitation, Plenum Press, New York (1980)]
Lagrangian mechanics
General relativity |
https://en.wikipedia.org/wiki/Redirection%20%28computing%29 | In computing, redirection is a form of interprocess communication, and is a function common to most command-line interpreters, including the various Unix shells that can redirect standard streams to user-specified locations.
In Unix-like operating systems, programs do redirection with the dup2(2) system call, or its less-flexible but higher-level stdio analogues, freopen(3) and popen(3).
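A minimal sketch of descriptor-level redirection using dup2 (here through Python's os module; the output file name is illustrative), roughly what a shell does before launching a command whose output is redirected:
import os, sys

sys.stdout.flush()                        # don't lose anything already buffered
fd = os.open("out.txt", os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o644)
os.dup2(fd, sys.stdout.fileno())          # file descriptor 1 now refers to out.txt
os.close(fd)
print("this line is written to out.txt rather than the terminal")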
Redirecting standard input and standard output
Redirection is usually implemented by placing certain characters between commands.
Basic
Typically, the syntax of these characters is as follows, using < to redirect input, and > to redirect output. command > file1 executes command, placing the output in file1, as opposed to displaying it at the terminal, which is the usual destination for standard output. This will clobber any existing data in file1.
Using command < file1 executes command, with file1 as the source of input, as opposed to the keyboard, which is the usual source for standard input.
command < infile > outfile combines the two capabilities: command reads from infile and writes to outfile.
Variants
To append output to the end of the file, rather than clobbering it, the >> operator is used: command1 >> file1.
To read from a stream literal (an inline file, passed to the standard input), one can use a here document, using the << operator:
$ tr a-z A-Z << END_TEXT
> one two three
> uno dos tres
> END_TEXT
ONE TWO THREE
UNO DOS TRES
To read from a string, one can use a here string, using the <<< operator: tr a-z A-Z <<< "one two three", or:
$ NUMBERS="one two three"
$ tr a-z A-Z <<< "$NUMBERS"
ONE TWO THREE
Piping
Programs can be run together such that one program reads the output from another with no need for an explicit intermediate file. command1 | command2 executes command1, using its output as the input for command2 (commonly called piping, with the "|" character being known as the "pipe").
The two programs performing the commands may run in parallel with the only storage space being working buffers (Linux allows up to 64K for each buffer) plus whateve |
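A sketch of command1 | command2 from a program's point of view (using Python's subprocess module; the two commands are illustrative):
import subprocess

p1 = subprocess.Popen(["ls"], stdout=subprocess.PIPE)
p2 = subprocess.Popen(["wc", "-l"], stdin=p1.stdout, stdout=subprocess.PIPE)
p1.stdout.close()                         # allow p1 to receive SIGPIPE if p2 exits first
output, _ = p2.communicate()
print(output.decode().strip())            # number of entries listed by ls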
https://en.wikipedia.org/wiki/Sunrise%20problem | The sunrise problem can be expressed as follows: "What is the probability that the sun will rise tomorrow?" The sunrise problem illustrates the difficulty of using probability theory when evaluating the plausibility of statements or beliefs.
According to the Bayesian interpretation of probability, probability theory can be used to evaluate the plausibility of the statement, "The sun will rise tomorrow."
Laplace's approach
The sunrise problem was first introduced in the 18th century by Pierre-Simon Laplace, who treated it by means of his rule of succession. Let p be the long-run frequency of sunrises, i.e., the sun rises on 100 × p% of days. Prior to knowing of any sunrises, one is completely ignorant of the value of p. Laplace represented this prior ignorance by means of a uniform probability distribution on p.
For instance, the probability that p is between 20% and 50% is just 30%. This must not be interpreted to mean that in 30% of all cases, p is between 20% and 50%. Rather, it means that one's state of knowledge (or ignorance) justifies one in being 30% sure that the sun rises between 20% of the time and 50% of the time. Given the value of p, and no other information relevant to the question of whether the sun will rise tomorrow, the probability that the sun will rise tomorrow is p. But we are not "given the value of p". What we are given is the observed data: the sun has risen every day on record. Laplace inferred the number of days by saying that the universe was created about 6000 years ago, based on a young-earth creationist reading of the Bible.
To find the conditional probability distribution of p given the data, one uses Bayes' theorem, which some call the Bayes–Laplace rule. Having found the conditional probability distribution of p given the data, one may then calculate the conditional probability, given the data, that the sun will rise tomorrow. That conditional probability is given by the rule of succession. The plausibility that t |
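A small sketch of the resulting calculation (assuming, as Laplace did, a uniform prior and an unbroken record of sunrises; the 6000-year figure below is the biblical estimate mentioned above): the rule of succession gives (n + 1)/(n + 2) after n observed sunrises.
from fractions import Fraction

def prob_sunrise_tomorrow(successes, trials):
    # Laplace's rule of succession with a uniform prior on p
    return Fraction(successes + 1, trials + 2)

n = 6000 * 365                                # rough number of recorded sunrises
print(float(prob_sunrise_tomorrow(n, n)))     # ~0.9999995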
https://en.wikipedia.org/wiki/Code%20Complete | Code Complete is a software development book, written by Steve McConnell and published in 1993 by Microsoft Press, encouraging developers to continue past code-and-fix programming and the big design up front and waterfall models. It is also a compendium of software construction techniques, which include techniques from naming variables to deciding when to write a subroutine.
Summary
McConnell defines the main activities in construction as detailed design, construction planning, coding and debugging, unit testing, integration, and integration testing. Although he does not dismiss the value of other aspects of software development, such as requirements and documentation, McConnell emphasises construction for several reasons. In the book's view, construction is a large part of software development and its central activity, and focusing on it can significantly improve a programmer's productivity; in addition, the source code is seen as the definitive description of the software's behaviour, taking precedence over documentation when the two disagree. Lastly, the book contends that construction is the only activity that is guaranteed to be done.
The techniques of a good programmer are also given throughout the book. The whole part seven of the book is about software craftsmanship (layout, style, character, themes and self-documentation).
The other six parts of the book are: laying the foundation, creating high-quality code, variables, statements, code improvements and system considerations.
Reception
Code Complete has received outstanding reviews, being widely regarded as one of the leading must-reads for software developers. It won a Jolt Award in 1993.
There are also negative reviews about the length and style of the book, which runs to over 900 pages and goes into detail on many topics.
The first edition has been superseded by Code Complete 2. The first editions can be found us |
https://en.wikipedia.org/wiki/Carcinology | Carcinology is a branch of zoology that consists of the study of crustaceans, a group of arthropods that includes lobsters, crayfish, shrimp, krill, copepods, barnacles and crabs. Other names for carcinology are malacostracology, crustaceology, and crustalogy, and a person who studies crustaceans is a carcinologist or occasionally a malacostracologist, a crustaceologist, or a crustalogist.
The word carcinology derives from Greek , karkínos, "crab"; and , -logia.
Subfields
Carcinology is a subdivision of arthropodology, the study of arthropods which includes arachnids, insects, and myriapods. Carcinology branches off into taxonomically oriented disciplines such as:
astacology – the study of crayfish
cirripedology – the study of barnacles
copepodology – the study of copepods
Journals
Scientific journals devoted to the study of crustaceans include:
Crustaceana
Journal of Crustacean Biology
Nauplius
See also
Entomology
Publications in carcinology
List of carcinologists
References
Crustaceans
Subfields of zoology
Subfields of arthropodology |
https://en.wikipedia.org/wiki/Pilot%20ACE | The Pilot ACE (Automatic Computing Engine) was one of the first computers built in the United Kingdom. Built at the National Physical Laboratory (NPL) in the early 1950s, it was also one of the earliest general-purpose, stored-program computers – joining other UK designs like the Manchester Mark 1 and EDSAC of the same era. It was a preliminary version of the full ACE, which was designed by Alan Turing, who left NPL before the construction was completed.
History
Pilot ACE was built to a cut down version of Turing's full ACE design. After Turing left NPL (in part because he was disillusioned by the lack of progress on building the ACE), James H. Wilkinson took over the project. Donald Davies, Harry Huskey and Mike Woodger were involved with the design. The Pilot ACE ran its first program on 10 May 1950, and was demonstrated to the press in November 1950.
Although originally intended as a prototype, it became clear that the machine was a potentially useful resource, especially given the lack of other computing devices at the time. After some upgrades to make operational use practical, it went into service in late 1951 and saw considerable operational service over the next several years. One reason Pilot ACE was useful is that it was able to perform floating-point arithmetic necessary for scientific calculations. Wilkinson tells the story of how this came to be.
When first built, Pilot ACE did not have hardware for either multiplication or division, in contrast to other computers at that time. (Hardware multiplication was added later.) Pilot ACE started out using fixed-point multiplication and division implemented as software. It soon became apparent that fixed-point arithmetic was a bad idea because the numbers quickly went out of range. It only took a short time to write new software so that Pilot ACE could do floating-point arithmetic. After that, James Wilkinson became an expert and wrote a book on rounding errors in floating-point calculations, which eventuall |
https://en.wikipedia.org/wiki/LEO%20%28website%29 | LEO (meaning Link Everything Online) is an Internet-based electronic dictionary and translation dictionary initiated by the computer science department of the Technical University of Munich in Germany. After a spin-out, the dictionaries have been run since 3 April 2006 by the limited liability company Leo GmbH, formed by the members of the original Leo team, and are partially funded by commercial advertising on the website. Its dictionaries can be consulted free online from any web browser or through LEO's downloadable Lion user interface (GUI), which since version 3.0 (released 13 January 2009) has been free for private users and is no longer sold as shareware. Corporate users and research institutions are, however, required to purchase a license.
Dictionaries
The website hosts eight free German language based bilingual dictionaries and forums for additional language queries. The dictionaries are characterized by providing translations in forms of hyperlinks to further dictionary queries, thereby facilitating back translations. The dictionaries are partly added to and corrected by large vocabulary donations of individuals or companies, partly through suggestions and discussions on the LEO language forums.
For any of the eight foreign languages, there is at least one (in the cases of English and French, two) qualified employee in charge, whose mother tongue is German and who has studied the respective other language, or vice versa. These employees oversee the above-mentioned donations and suggestions before integrating them into the dictionary. Thus, an entry can never be made simply by a registered user. Registered users, on the other hand, can communicate in the eight different forums, where native German speakers and native speakers of the other languages collaborate, providing help with finding idiomatic equivalents for phrases or texts, etc.
English–German
The English-German dictionary run by Leo since 1995 contains around 800,000 entries and receive |
https://en.wikipedia.org/wiki/List%20of%20video%20game%20industry%20people | Below is a list of notable people who work or have worked in the video game industry.
The list is divided into different roles, but some people fit into more than one category. For example, Sid Meier is both a game designer and programmer. In these cases, the people appear in both sections.
Art and animation
Dennis Hwang: graphic designer working for Niantic
Edmund McMillen: artist whose art style is synonymous with Flash games
Jordan Mechner: introduced realistic movement to video games with Prince of Persia
Jim Sachs: created new standard for quality of art with the release of the Amiga game Defender of the Crown
Derek Yu: Indie video game artist, designer, and blogger. Known for working on Spelunky, Spelunky 2, Aquaria, Eternal Daughter, I'm O.K – A Murder Simulator, and DRL
Company officers
J Allard: Xbox Officer President
David Baszucki: founder and CEO of the Roblox Corporation
Marc Blank: co-founder of Infocom
Cliff Bleszinski: founder of Boss Key Productions
Doug Bowser: president of Nintendo of America (2019–present)
Arjan Brussee: co-founder of Guerrilla Games & Boss Key Productions
Jon Burton: founder of Traveller's Tales and its parent company TT Games
Nolan Bushnell: founder of Atari
David Cage: founder of Quantic Dream
Doug Carlston: co-founder of Brøderbund
Trevor Chan: founder and CEO of Enlight Software
Adrian Chmielarz: founder of Metropolis Software, People Can Fly, and The Astronauts
Raphaël Colantonio: founder of Arkane Studios
Josef Fares: founder of Hazelight Studios
Reggie Fils-Aimé: former president of Nintendo of America (2006–2019)
Greg Fischbach: CEO of Acclaim Entertainment before its bankruptcy
Jack Friedman: founder of Jakks Pacific, LJN, and THQ
Andy Gavin & Jason Rubin: founders of Naughty Dog
David Gordon: founder of Programma International
Yves Guillemot: co-founder and CEO of Ubisoft
Shuntaro Furukawa: President of Nintendo (2018–present)
Hal Halpin: President of ECA
John Hanke: founder and CEO |
https://en.wikipedia.org/wiki/Mmap | In computing, mmap(2) is a POSIX-compliant Unix system call that maps files or devices into memory. It is a method of memory-mapped file I/O. It implements demand paging because file contents are not immediately read from disk and initially use no physical RAM at all. The actual reads from disk are performed after a specific location is accessed, in a lazy manner. After the mapping is no longer needed, the pointers must be unmapped with munmap(2). Protection information—for example, marking mapped regions as executable—can be managed using mprotect(2), and special treatment can be enforced using madvise(2).
In Linux, macOS and the BSDs, mmap can create several types of mappings. Other operating systems may only support a subset of these; for example, shared mappings may not be practical in an operating system without a global VFS or I/O cache.
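A minimal, POSIX-flavoured sketch of a file-backed shared mapping using Python's mmap module (the file name and contents are illustrative):
import mmap, os

path = "example.dat"
with open(path, "wb") as f:
    f.write(b"hello mmap")                       # ensure the file exists and is non-empty

fd = os.open(path, os.O_RDWR)
size = os.fstat(fd).st_size
with mmap.mmap(fd, size, prot=mmap.PROT_READ | mmap.PROT_WRITE) as m:
    print(m[:5])                                 # this access faults the first page in
    m[0:5] = b"HELLO"                            # writes go through the shared mapping to the file
os.close(fd)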
History
The original design of memory-mapped files came from the TOPS-20 operating system. mmap and associated system calls were designed as part of the Berkeley Software Distribution (BSD) version of Unix. Their API was already described in the 4.2BSD System Manual, even though it was neither implemented in that release, nor in 4.3BSD. Sun Microsystems had implemented this very API, though, in their SunOS operating system. The BSD developers at University of California, Berkeley unsuccessfully requested Sun to donate its implementation; 4.3BSD-Reno was instead shipped with an implementation based on the virtual memory system of Mach.
File-backed and anonymous
File-backed mapping maps an area of the process's virtual memory to files; that is, reading those areas of memory causes the file to be read. It is the default mapping type.
Anonymous mapping maps an area of the process's virtual memory not backed by any file. The contents are initialized to zero. In this respect an anonymous mapping is similar to malloc, and is used in some malloc implementations for certain allocations, particularly large ones. Anonymous mappings |
https://en.wikipedia.org/wiki/Cotton%20swab | Cotton swabs (American English) or cotton buds (British English) are wads of cotton wrapped around a short rod made of wood, rolled paper, or plastic. They are most commonly used for ear cleaning, although this is not recommended by physicians. Other uses for cotton swabs include first aid, cosmetics application, cleaning, infant care and crafts. Some countries have banned the plastic-stemmed versions in favor of biodegradable alternatives over concerns about marine pollution.
History
The first mass-produced cotton swab was developed in 1923 by Polish-American Leo Gerstenzang after he watched his wife attach wads of cotton to toothpicks to clean their infant's ears. His product was originally named "Baby Gays" in recognition of them being intended for infants before being renamed "Q-tips Baby Gays", with the "Q" standing for "quality". The product eventually became known as "Q-tips", which went on to become the most widely sold brand name of cotton swabs in North America. The term "Q-tip" is often used as a genericized trademark for a cotton swab in the United States and Canada. The Q-tips brand is owned by Unilever and had over $200 million in US sales in 2014. "Johnson's buds" are made by Johnson & Johnson.
However, according to the United States Patent Case (C-10,415) Q-Tips, Inc. v. Johnson & Johnson, 108 F. Supp. 845 (D.N.J. 1952), it would appear that the first commercial producer of cotton tipped applicators was a Mrs. Hazel Tietjen Forbis, who manufactured them in her home. She also owned a patent on the article, numbered 1,652,108, dated December 6, 1927, and sold the product under the appellation Baby Nose-Gay. In 1925, Leo Gerstenzang Co., Inc. purchased an assignment of the product patent from Mrs. Forbis. On January 2, 1937, Q-Tips, Inc's president, Mr. Leo Gerstenzang, and his wife Mrs. Ziuta Gerstenzang formed a partnership and purchased from Mrs. Forbis "All merchandise, machinery and fixtures now contained in the premises 132 W. 36th Street and |
https://en.wikipedia.org/wiki/Expression%20%28mathematics%29 | In mathematics, an expression or mathematical expression is a finite combination of symbols that is well-formed according to rules that depend on the context. Mathematical symbols can designate numbers (constants), variables, operations, functions, brackets, punctuation, and grouping to help determine order of operations and other aspects of logical syntax.
Many authors distinguish an expression from a formula, the former denoting a mathematical object, and the latter denoting a statement about mathematical objects. For example, is an expression, while is a formula. However, in modern mathematics, and in particular in computer algebra, formulas are viewed as expressions that can be evaluated to true or false, depending on the values that are given to the variables occurring in the expressions. For example takes the value false if is given a value less than –1, and the value true otherwise.
Examples
The use of expressions ranges from the simple:
(linear polynomial)
(quadratic polynomial)
(rational fraction)
to the complex:
Syntax versus semantics
Syntax
An expression is a syntactic construct. It must be well-formed: the allowed operators must have the correct number of inputs in the correct places, the characters that make up these inputs must be valid, have a clear order of operations, etc. Strings of symbols that violate the rules of syntax are not well-formed and are not valid mathematical expressions.
For example, in the usual notation of arithmetic, the expression 1 + 2 × 3 is well-formed, but the following expression is not:
.
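The same distinction can be illustrated with a parser, which enforces syntax without assigning any meaning (a sketch using Python's ast module; the two strings are illustrative):
import ast

for src in ["1 + 2 * 3", "1 + * 3"]:
    try:
        ast.parse(src, mode="eval")
        print(repr(src), "is well-formed")
    except SyntaxError:
        print(repr(src), "is not well-formed")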
Semantics
Semantics is the study of meaning. Formal semantics is about attaching meaning to expressions.
In algebra, an expression may be used to designate a value, which might depend on values assigned to variables occurring in the expression. The determination of this value depends on the semantics attached to the symbols of the expression. The choice of semantics depends on the context of the expression. T |
https://en.wikipedia.org/wiki/Signal%20transmission | In telecommunications, transmission is the process of sending or propagating an analog or digital signal via a medium that is wired, wireless, or fiber-optic.
Transmission system technologies typically refer to physical layer protocol duties such as modulation, demodulation, line coding, equalization, error control, bit synchronization and multiplexing, but it may also involve higher-layer protocol duties, for example, digitizing an analog signal, and data compression.
Transmission of a digital message, or of a digitized analog signal, is known as data transmission.
Examples of transmission are the sending of signals with limited duration, for example, a block or packet of data, a phone call, or an email.
See also
Radio transmitter
References
Telecommunications engineering |
https://en.wikipedia.org/wiki/Key%20schedule | In cryptography, the so-called product ciphers are a certain kind of cipher, where the (de-)ciphering of data is typically done as an iteration of rounds. The setup for each round is generally the same, except for round-specific fixed values called a round constant, and round-specific data derived from the cipher key called a round key. A key schedule is an algorithm that calculates all the round keys from the key.
Some types of key schedules
Some ciphers have simple key schedules. For example, the block cipher TEA splits the 128-bit key into four 32-bit pieces and uses them repeatedly in successive rounds.
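A sketch of that simple key schedule (the byte order and sample key below are illustrative, not TEA's reference definition): the 128-bit key is split into four 32-bit round-key words that are reused in every round.
import struct

def tea_style_round_keys(key: bytes):
    assert len(key) == 16                   # 128-bit cipher key
    return struct.unpack(">4I", key)        # four 32-bit words k0..k3

k0, k1, k2, k3 = tea_style_round_keys(bytes(range(16)))
print([hex(k) for k in (k0, k1, k2, k3)])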
DES has a key schedule in which the 56-bit key is divided into two 28-bit halves; each half is thereafter treated separately. In successive rounds, both halves are rotated left by one or two bits (specified for each round), and then 48 round key bits are selected by Permuted Choice 2 (PC-2) – 24 bits from the left half and 24 from the right. The rotations have the effect that a different set of bits is used in each round key; each bit is used in approximately 14 out of the 16 round keys.
To avoid simple relationships between the cipher key and the round keys, in order to resist such forms of cryptanalysis as related-key attacks and slide attacks, many modern ciphers use more elaborate key schedules to generate an "expanded key" from which round keys are drawn. Some ciphers, such as Rijndael (AES) and Blowfish, use the same operations as those used in the data path of the cipher algorithm for their key expansion, sometimes initialized with some "nothing-up-my-sleeve numbers". Other ciphers, such as RC5, expand keys with functions that are somewhat or completely different from the encryption functions.
Notes
Knudsen and Mathiassen (2004) give some experimental evidence that indicate that the key schedule plays a part in providing strength against linear and differential cryptanalysis. For toy Feistel ciphers, it was observed that those with complex and well-d |
https://en.wikipedia.org/wiki/Duality%20%28mathematics%29 | In mathematics, a duality translates concepts, theorems or mathematical structures into other concepts, theorems or structures, in a one-to-one fashion, often (but not always) by means of an involution operation: if the dual of is , then the dual of is . Such involutions sometimes have fixed points, so that the dual of is itself. For example, Desargues' theorem is self-dual in this sense under the standard duality in projective geometry.
In mathematical contexts, duality has numerous meanings. It has been described as "a very pervasive and important concept in (modern) mathematics" and "an important general theme that has manifestations in almost every area of mathematics".
Many mathematical dualities between objects of two types correspond to pairings, bilinear functions from an object of one type and another object of the second type to some family of scalars. For instance, linear algebra duality corresponds in this way to bilinear maps from pairs of vector spaces to scalars, the duality between distributions and the associated test functions corresponds to the pairing in which one integrates a distribution against a test function, and Poincaré duality corresponds similarly to intersection number, viewed as a pairing between submanifolds of a given manifold.
From a category theory viewpoint, duality can also be seen as a functor, at least in the realm of vector spaces. This functor assigns to each space V its dual space V*, and the pullback construction assigns to each arrow f: V → W its dual f*: W* → V*.
Introductory examples
In the words of Michael Atiyah,
The following list of examples shows the common features of many dualities, but also indicates that the precise meaning of duality may vary from case to case.
Complement of a subset
A simple, perhaps the simplest, duality arises from considering subsets of a fixed set S. To any subset A ⊆ S, the complement Aᶜ consists of all those elements in S that are not contained in A. It is again a subset of S. Taking the complement has the foll |
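A concrete sketch of this duality on finite sets (S and the subsets below are arbitrary choices): applying the complement twice returns the original subset, and complementation reverses inclusion.
S = set(range(10))
A = {1, 2, 3}
B = {1, 2, 3, 7}

def complement(X):
    return S - X

assert complement(complement(A)) == A              # an involution
assert A <= B and complement(B) <= complement(A)   # order-reversing
print(sorted(complement(A)))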
https://en.wikipedia.org/wiki/Alexander%20Oparin | Alexander Ivanovich Oparin (; – April 21, 1980) was a Soviet biochemist notable for his theories about the origin of life, and for his book The Origin of Life. He also studied the biochemistry of material processing by plants and enzyme reactions in plant cells. He showed that many food production processes were based on biocatalysis and developed the foundations for industrial biochemistry in the USSR.
Life
Born in Uglich in 1894, Oparin graduated from the Moscow State University in 1917 and became a professor of biochemistry there in 1927. Many of his early papers were about plant enzymes and their role in metabolism. In 1924 he put forward a hypothesis suggesting that life on Earth developed through a gradual chemical evolution of carbon-based molecules in the Earth's primordial soup. In 1935, along with academician Alexey Bakh, he founded the Biochemistry Institute of the Soviet Academy of Sciences. In 1939, Oparin became a Corresponding Member of the Academy, and, in 1946, a full member. In the 1940s and 1950s he supported the theories of Trofim Lysenko and Olga Lepeshinskaya, who made claims about "the origin of cells from noncellular matter". "Taking the party line" helped advance his career. In 1970, he was elected President of the International Society for the Study of the Origins of Life. He died in Moscow on April 21, 1980, and was interred in Novodevichy Cemetery in Moscow.
Oparin became Hero of Socialist Labour in 1969, received the Lenin Prize in 1974 and was awarded the Lomonosov Gold Medal in 1979 "for outstanding achievements in biochemistry". He was also a five-time recipient of the Order of Lenin.
Theory of the origin of life
Although Oparin started out reviewing various panspermia theories, including those of Hermann von Helmholtz and William Thomson (Lord Kelvin), he was primarily interested in how life began. As early as 1922, he asserted that:
There is no fundamental difference between a living organism and lifeless matter. The complex co |
https://en.wikipedia.org/wiki/Eric%20Lander | Eric Steven Lander (born February 3, 1957) is an American mathematician and geneticist who is a professor of biology at the Massachusetts Institute of Technology (MIT), and a professor of systems biology at Harvard Medical School. Eric Lander is founding director emeritus of the Broad Institute of MIT and Harvard. He is a 1987 MacArthur Fellow and Rhodes Scholar.
Lander served as the 11th director of the Office of Science and Technology Policy and Science Advisor to the President in Joe Biden's presidential Cabinet. In response to allegations that he had engaged in bullying and abusive conduct, Lander apologized and resigned from the Biden Administration effective February 18, 2022.
Early life and education
Lander was born in Brooklyn, New York City, to Jewish parents, the son of Rhoda G. Lander, a social studies teacher, and Harold Lander, an attorney. He was captain of the math team at Stuyvesant High School, graduating in 1974 as valedictorian and an International Mathematical Olympiad Silver Medalist for the U.S. At age 17, he wrote a paper on quasiperfect numbers for which he won the Westinghouse Science Talent Search.
After graduating from Stuyvesant High School as valedictorian in 1974, Lander graduated from Princeton University in 1978 as valedictorian and with a Bachelor of Arts in Mathematics. He completed his senior thesis, "On the structure of projective modules", under John Coleman Moore's supervision. He then moved to the University of Oxford where he was a Rhodes Scholar and student of Wolfson College, Oxford. He was awarded a Doctor of Philosophy degree by the University of Oxford in 1980 with a thesis on algebraic coding theory and symmetric block designs supervised by Peter Cameron.
Career
During his career, Lander has worked on human genetic variation, human population history, genome evolution, non-coding RNAs, three-dimensional folding of the human genome and genome-wide association studies to discover the genes essential for biological pr |
https://en.wikipedia.org/wiki/Sinc%20function | In mathematics, physics and engineering, the sinc function, denoted by , has two forms, normalized and unnormalized.
In mathematics, the historical unnormalized sinc function is defined for x ≠ 0 by sinc(x) = sin(x)/x.
Alternatively, the unnormalized sinc function is often called the sampling function, indicated as Sa(x).
In digital signal processing and information theory, the normalized sinc function is commonly defined for x ≠ 0 by sinc(x) = sin(πx)/(πx).
In either case, the value at x = 0 is defined to be the limiting value sinc(0) := lim_{x→0} sin(ax)/(ax) = 1 for all real a ≠ 0 (the limit can be proven using the squeeze theorem).
The normalization causes the definite integral of the function over the real numbers to equal 1 (whereas the same integral of the unnormalized sinc function has a value of π). As a further useful property, the zeros of the normalized sinc function are the nonzero integer values of x.
The normalized sinc function is the Fourier transform of the rectangular function with no scaling. It is used in the concept of reconstructing a continuous bandlimited signal from uniformly spaced samples of that signal.
The only difference between the two definitions is in the scaling of the independent variable (the x axis) by a factor of π. In both cases, the value of the function at the removable singularity at zero is understood to be the limit value 1. The sinc function is then analytic everywhere and hence an entire function.
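The scaling relationship can be checked numerically (a sketch; note that NumPy's sinc is the normalized convention):
import numpy as np

x = np.array([0.5, 1.0, 2.0, 3.0])
normalized = np.sinc(x)             # sin(pi*x)/(pi*x): zero (up to rounding) at nonzero integers
unnormalized = np.sinc(x / np.pi)   # equals sin(x)/x, the unnormalized form
print(normalized)
print(unnormalized)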
The function has also been called the cardinal sine or sine cardinal function. The term sinc was introduced by Philip M. Woodward in his 1952 article "Information theory and inverse probability in telecommunication", in which he said that the function "occurs so often in Fourier analysis and its applications that it does seem to merit some notation of its own", and his 1953 book Probability and Information Theory, with Applications to Radar.
The function itself was first mathematically derived in this form by Lord Rayleigh in his expression (Rayleigh's Formula) for the zeroth-order spherical Bessel function of the first |
https://en.wikipedia.org/wiki/Per-unit%20system | In the power systems analysis field of electrical engineering, a per-unit system is the expression of system quantities as fractions of a defined base unit quantity. Calculations are simplified because quantities expressed as per-unit do not change when they are referred from one side of a transformer to the other. This can be a pronounced advantage in power system analysis where large numbers of transformers may be encountered. Moreover, similar types of apparatus will have the impedances lying within a narrow numerical range when expressed as a per-unit fraction of the equipment rating, even if the unit size varies widely. Conversion of per-unit quantities to volts, ohms, or amperes requires a knowledge of the base that the per-unit quantities were referenced to. The per-unit system is used in power flow, short circuit evaluation, motor starting studies etc.
The main idea of a per unit system is to absorb large differences in absolute values into base relationships. Thus, representations of elements in the system with per unit values become more uniform.
A per-unit system provides units for power, voltage, current, impedance, and admittance. With the exception of impedance and admittance, any two units are independent and can be selected as base values; power and voltage are typically chosen. All quantities are specified as multiples of selected base values. For example, the base power might be the rated power of a transformer, or perhaps an arbitrarily selected power which makes power quantities in the system more convenient. The base voltage might be the nominal voltage of a bus. Different types of quantities are labeled with the same symbol (pu); it should be clear whether the quantity is a voltage, current, or other unit of measurement.
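A minimal sketch of the bookkeeping (single-phase relationships; the base power, base voltage, and impedance value below are illustrative):
S_base = 100e6                      # VA, e.g. an apparatus rating chosen as base power
V_base = 138e3                      # V, nominal bus voltage chosen as base voltage

I_base = S_base / V_base            # A
Z_base = V_base ** 2 / S_base       # ohm
Y_base = 1 / Z_base                 # siemens

Z_actual = 47.6                     # ohm, an example impedance
print(f"Z = {Z_actual / Z_base:.3f} pu")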
Purpose
There are several reasons for using a per-unit system:
Similar apparatus (generators, transformers, lines) will have similar per-unit impedances and losses expressed on their own rating, regardless of their abs |
https://en.wikipedia.org/wiki/Dry%20well | A dry well or drywell is an underground structure that disposes of unwanted water, most commonly surface runoff and stormwater, in some cases greywater or water used in a groundwater heat pump.
It is a gravity-fed, vertical underground system that can capture surface water from impervious surfaces, then store and gradually infiltrate the water into the groundwater aquifer.
Such a structure is also called a dead well, absorbing well or negative well; a soakaway or soakage pit in the United Kingdom; or a soakwell or soak pit in Australia.
Design
Dry wells are excavated pits that may be filled with aggregate or air and are often lined with a perforated casing. The casings consist of perforated chambers made out of plastic or concrete and may be lined with geotextile. They provide high stormwater infiltration capacity while also having a relatively small footprint.
A dry well receives water from entry pipes at its top. It can be used as part of a broader stormwater drainage network or on smaller scales, such as collecting stormwater from building roofs. It is used in conjunction with pretreatment measures such as bioswales or sediment chambers to prevent groundwater contamination.
The depth of the dry well allows the water to penetrate soil layers with poor infiltration such as clays into more permeable layers of the vadose zone such as sand.
Simple dry wells consist of a pit filled with gravel, riprap, rubble, or other debris. Such pits resist collapse but do not have much storage capacity because their interior volume is mostly filled by stone. A more advanced dry well defines a large interior storage volume by a concrete or plastic chamber with perforated sides and bottom. These dry wells are usually buried completely so that they do not take up any land area. The dry wells for a parking lot's storm drains are usually buried below the same parking lot.
Related concepts
A sump in a basement can be built in dry well form, allowing the sump pump to cycle less freq |
https://en.wikipedia.org/wiki/French%20drain | A French drain (also called a weeping tile, trench drain, filter drain, blind drain, rubble drain, rock drain, drain tile, perimeter drain, land drain, French ditch, sub-surface drain, sub-soil drain, or agricultural drain) is a trench filled with gravel or rock, or both, with or without a perforated pipe that redirects surface water and groundwater away from an area. The perforated pipe is called a weeping tile (also called a drain tile or perimeter tile). When the pipe is draining, it "weeps", or exudes liquids. It was named during a time period when drainpipes were made from terracotta tiles.
French drains are primarily used to prevent ground and surface water from penetrating or damaging building foundations and as an alternative to open ditches or storm sewers for streets and highways. Alternatively, French drains may be used to distribute water, such as a septic drain field at the outlet of a typical septic tank sewage treatment system. French drains are also used behind retaining walls to relieve ground water pressure.
History and construction
The earliest forms of French drains were simple ditches that were pitched from a high area to a lower one and filled with gravel. These may have been invented in France but Henry Flagg French (1813–1885) of Concord, Massachusetts, a lawyer and Assistant U.S. Treasury Secretary described and popularized them in Farm Drainage (1859). French's own drains were made of sections of ordinary roofing tile that were laid with a gap in between the sections to admit water. Later, specialised drain tiles were designed with perforations. To prevent clogging, the size of the gravel varied from coarse in the center to fine on the outside and was selected contingent on the gradation of the surrounding soil. The sizes of particles were critical to prevent the surrounding soil from washing into the pores, i. e., voids between the particles of gravel and thereby clogging the drain. The later development of geotextiles greatly simplifi |
https://en.wikipedia.org/wiki/Point%20mutation | A point mutation is a genetic mutation where a single nucleotide base is changed, inserted or deleted from a DNA or RNA sequence of an organism's genome. Point mutations have a variety of effects on the downstream protein product—consequences that are moderately predictable based upon the specifics of the mutation. These consequences can range from no effect (e.g. synonymous mutations) to deleterious effects (e.g. frameshift mutations), with regard to protein production, composition, and function.
Causes
Point mutations usually take place during DNA replication. DNA replication occurs when one double-stranded DNA molecule creates two single strands of DNA, each of which is a template for the creation of the complementary strand. A single point mutation can change the whole DNA sequence. Changing one purine or pyrimidine may change the amino acid that the nucleotides code for.
Point mutations may arise from spontaneous mutations that occur during DNA replication. The rate of mutation may be increased by mutagens. Mutagens can be physical, such as radiation from UV rays, X-rays or extreme heat, or chemical (molecules that misplace base pairs or disrupt the helical shape of DNA). Mutagens associated with cancers are often studied to learn about cancer and its prevention.
There are multiple ways for point mutations to occur. First, ultraviolet (UV) light and higher-frequency light are capable of ionizing electrons, which in turn can affect DNA. Second, reactive oxygen molecules with free radicals, which are a byproduct of cellular metabolism, can be very harmful to DNA; these reactants can lead to both single-stranded and double-stranded DNA breaks. Third, bonds in DNA eventually degrade, which creates another problem in keeping the integrity of DNA to a high standard. Finally, there can also be replication errors that lead to substitution, insertion, or deletion mutations.
Categorization
Transition/transversion categorization
In 1959 Ernst Freese coined the terms |
https://en.wikipedia.org/wiki/Central%20Security%20Service | The Central Security Service (CSS) is a combat support agency of the United States Department of Defense which was established in 1972 to integrate the National Security Agency (NSA) and the Service Cryptologic Components (SCC) of the United States Armed Forces in the field of signals intelligence, cryptology, and information assurance at the tactical level. In 2002, the CSS had approximately 25,000 uniformed members. It is part of the United States Intelligence Community.
History
After World War II ended, the United States had two military organizations for the collection of signals intelligence (SIGINT): the Army Security Agency (ASA) and the Naval Communications Intelligence Organization (OP-20-G). The latter was deactivated and reorganized into the much smaller Communications Support Activities (CSA) in 1946, leaving ASA as the main SIGINT agency. Additionally, the United States Air Force established its own US Air Force Security Service (USAFSS) for the collection of communications intelligence in 1948.
On May 20, 1949, the Secretary of Defense created the Armed Forces Security Agency (AFSA), which became responsible for the direction and control of all US communications intelligence (COMINT) and communications security (COMSEC) activities. However, at the tactical level these tasks continued to be performed by the respective army, navy, and air force agencies, which were not willing to accept the authority of the newly created AFSA. In trying to get control over the military SIGINT elements, AFSA was replaced by the new and more powerful National Security Agency (NSA) on October 24, 1952.
Tactical military intelligence was traditionally collected by specialized soldiers, sailors, airmen, Marines, and coast guardsmen deployed around the world. For example, during the Vietnam War, each of the military services deployed its own cryptologic units, supported by the NSA, which set up a number of SIGINT Support Groups (SSGs) as merging points for signal intellige |
https://en.wikipedia.org/wiki/Dissection | Dissection (from Latin "to cut to pieces"; also called anatomization) is the dismembering of the body of a deceased animal or plant to study its anatomical structure. Autopsy is used in pathology and forensic medicine to determine the cause of death in humans. Less extensive dissection of plants and smaller animals preserved in a formaldehyde solution is typically carried out or demonstrated in biology and natural science classes in middle school and high school, while extensive dissections of cadavers of adults and children, both fresh and preserved are carried out by medical students in medical schools as a part of the teaching in subjects such as anatomy, pathology and forensic medicine. Consequently, dissection is typically conducted in a morgue or in an anatomy lab.
Dissection has been used for centuries to explore anatomy. Objections to the use of cadavers have led to the use of alternatives including virtual dissection of computer models.
In the field of surgery, the term "dissection" or "dissecting" refers more specifically to the practice of separating an anatomical structure (an organ, nerve or blood vessel) from its surrounding connective tissue in order to minimize unwanted damage during a surgical procedure.
Overview
Plant and animal bodies are dissected to analyze the structure and function of their components. Dissection is practised by students in courses of biology, botany, zoology, and veterinary science, and sometimes in arts studies. In medical schools, students dissect human cadavers to learn anatomy. The term "zoötomy" is sometimes used to describe the dissection of an animal.
Human dissection
A key principle in the dissection of human cadavers (sometimes called androtomy) is preventing the transmission of human disease to the dissector. Prevention of transmission includes the wearing of protective gear, ensuring the environment is clean, dissection technique, and pre-dissection testing of specimens for the presence of HIV and hepatitis viruses. Specimens are dissected |
https://en.wikipedia.org/wiki/Hostname | In computer networking, a hostname (archaically nodename) is a label that is assigned to a device connected to a computer network and that is used to identify the device in various forms of electronic communication, such as the World Wide Web. Hostnames may be simple names consisting of a single word or phrase, or they may be structured. Each hostname usually has at least one numeric network address associated with it for routing packets for performance and other reasons.
Internet hostnames may have appended the name of a Domain Name System (DNS) domain, separated from the host-specific label by a period ("dot"). In the latter form, a hostname is also called a domain name. If the domain name is completely specified, including a top-level domain of the Internet, then the hostname is said to be a fully qualified domain name (FQDN). Hostnames that include DNS domains are often stored in the Domain Name System together with the IP addresses of the host they represent for the purpose of mapping the hostname to an address, or the reverse process.
Internet hostnames
In the Internet, a hostname is a domain name assigned to a host computer. This is usually a combination of the host's local name with its parent domain's name. For example, en.wikipedia.org consists of a local hostname (en) and the domain name wikipedia.org. This kind of hostname is translated into an IP address via the local hosts file, or the Domain Name System (DNS) resolver. It is possible for a single host computer to have several hostnames; but generally the operating system of the host prefers to have one hostname that the host uses for itself.
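As a rough illustration of this lookup, the sketch below asks the operating system's resolver (which consults the local hosts file and/or DNS) for the addresses of a hostname; the specific hostname is simply the example used above:

```python
import socket

hostname = "en.wikipedia.org"  # example hostname from the text

# Resolve the hostname to its network addresses via the system resolver.
for family, _, _, _, sockaddr in socket.getaddrinfo(hostname, None):
    print(family.name, sockaddr[0])

# A host's own configured name can be queried as well.
print(socket.gethostname())
```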
Any domain name can also be a hostname, as long as the restrictions mentioned below are followed. So, for example, both en.wikipedia.org and wikipedia.org are hostnames because they both have IP addresses assigned to them. A hostname may be a domain name, if it is properly organized into the domain name system. A domain name may be a hostname if it has been a |
https://en.wikipedia.org/wiki/Localization%20of%20a%20category | In mathematics, localization of a category consists of adding to a category inverse morphisms for some collection of morphisms, constraining them to become isomorphisms. This is formally similar to the process of localization of a ring; it in general makes objects isomorphic that were not so before. In homotopy theory, for example, there are many examples of mappings that are invertible up to homotopy; and so large classes of homotopy equivalent spaces. Calculus of fractions is another name for working in a localized category.
Introduction and motivation
A category C consists of objects and morphisms between these objects. The morphisms reflect relations between the objects. In many situations, it is meaningful to replace C by another category C''' in which certain morphisms are forced to be isomorphisms. This process is called localization.
For example, in the category of R-modules (for some fixed commutative ring R) the multiplication by a fixed element r of R is typically (i.e., unless r is a unit) not an isomorphism:
The category that is most closely related to R-modules, but where this map is an isomorphism, turns out to be the category of S⁻¹R-modules. Here S⁻¹R is the localization of R with respect to the (multiplicatively closed) subset S consisting of all powers of r, that is, S = {1, r, r², r³, ...}.
The expression "most closely related" is formalized by two conditions: first, there is a functor
sending any R-module to its localization with respect to S. Moreover, given any category C and any functor
sending the multiplication map by r on any R-module (see above) to an isomorphism of C, there is a unique functor
such that .
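The universal property sketched in the preceding lines can be written out as follows; this is a hedged reconstruction using illustrative names (φ for the localization functor, F and G for the functors mentioned above), since the displayed formulas are not preserved in the text:

```latex
% Localization functor (the name \varphi is an assumption for illustration):
\varphi \colon R\text{-Mod} \longrightarrow S^{-1}R\text{-Mod}, \qquad M \longmapsto S^{-1}M .
% Universal property: for any functor  F \colon R\text{-Mod} \to C  sending each
% multiplication map  r \cdot (-) \colon M \to M  to an isomorphism of C, there is
% a unique functor  G \colon S^{-1}R\text{-Mod} \to C  with
G \circ \varphi = F .
```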
Localization of categories
The above example of localization of R-modules is abstracted in the following definition. In this shape, it applies in many more examples, some of which are sketched below.
Given a category C and some class W of morphisms in C, the localization C[W−1] is another category which is obtained by inverting all the morphisms in W. More formally, |
https://en.wikipedia.org/wiki/Web%20development | Web development is the work involved in developing a website for the Internet (World Wide Web) or an intranet (a private network). Web development can range from developing a simple single static page of plain text to complex web applications, electronic businesses, and social network services. A more comprehensive list of tasks to which Web development commonly refers, may include Web engineering, Web design, Web content development, client liaison, client-side/server-side scripting, Web server and network security configuration, and e-commerce development.
Among Web professionals, "Web development" usually refers to the main non-design aspects of building Web sites: writing markup and coding. Web development may use content management systems (CMS) to make content changes easier and available with basic technical skills.
For larger organizations and businesses, Web development teams can consist of hundreds of people (Web developers) and follow standard methods like Agile methodologies while developing Web sites. Smaller organizations may only require a single permanent or contracting developer, or secondary assignment to related job positions such as a graphic designer or information systems technician. Web development may be a collaborative effort between departments rather than the domain of a designated department. There are three kinds of Web developer specialization: front-end developer, back-end developer, and full-stack developer. Front-end developers are responsible for behavior and visuals that run in the user's browser, while back-end developers deal with the servers. Since the commercialization of the Web, the industry has boomed, and the Web has become one of the most widely used technologies ever.
See also
Outline of web design and web development
Web design
Web development tools
Web application development
Web developer
References |
https://en.wikipedia.org/wiki/Sobolev%20space | In mathematics, a Sobolev space is a vector space of functions equipped with a norm that is a combination of Lp-norms of the function together with its derivatives up to a given order. The derivatives are understood in a suitable weak sense to make the space complete, i.e. a Banach space. Intuitively, a Sobolev space is a space of functions possessing sufficiently many derivatives for some application domain, such as partial differential equations, and equipped with a norm that measures both the size and regularity of a function.
Sobolev spaces are named after the Russian mathematician Sergei Sobolev. Their importance comes from the fact that weak solutions of some important partial differential equations exist in appropriate Sobolev spaces, even when there are no strong solutions in spaces of continuous functions with the derivatives understood in the classical sense.
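As a hedged sketch of the kind of norm meant here (exact conventions vary by source), the Sobolev norm on W^{k,p} combines the L^p norms of a function and of its weak derivatives up to order k:

```latex
\|u\|_{W^{k,p}(\Omega)}
  \;=\; \Bigl( \sum_{|\alpha| \le k} \bigl\| D^{\alpha} u \bigr\|_{L^{p}(\Omega)}^{p} \Bigr)^{1/p},
  \qquad 1 \le p < \infty ,
% with the usual modification (a maximum of essential suprema over |\alpha| \le k)
% when p = \infty.
```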
Motivation
In this section and throughout the article, Ω is an open subset of ℝⁿ.
There are many criteria for smoothness of mathematical functions. The most basic criterion may be that of continuity. A stronger notion of smoothness is that of differentiability (because functions that are differentiable are also continuous) and a yet stronger notion of smoothness is that the derivative also be continuous (these functions are said to be of class C¹ — see Differentiability classes). Differentiable functions are important in many areas, and in particular for differential equations. In the twentieth century, however, it was observed that the space C¹ (or C², etc.) was not exactly the right space to study solutions of differential equations. The Sobolev spaces are the modern replacement for these spaces in which to look for solutions of partial differential equations.
Quantities or properties of the underlying model of the differential equation are usually expressed in terms of integral norms. A typical example is measuring the energy of a temperature or velocity distribution by an L²-norm. It is therefore importan |
https://en.wikipedia.org/wiki/Haploidisation | Haploidisation is the process of halving the chromosomal content of a cell, producing a haploid cell. Within the normal reproductive cycle, haploidisation is one of the major functional consequences of meiosis, the other being a process of chromosomal crossover that mingles the genetic content of the parental chromosomes. Usually, haploidisation creates a monoploid cell from a diploid progenitor, or it can involve halving of a polyploid cell, for example to make a diploid potato plant from a tetraploid lineage of potato plants.
If haploidisation is not followed by fertilisation, the result is a haploid lineage of cells. For example, experimental haploidisation may be used to recover a strain of haploid Dictyostelium from a diploid strain. It sometimes occurs naturally in plants when meiotically reduced cells (usually egg cells) develop by parthenogenesis.
Haploidisation was one of the procedures used by Japanese researchers to produce Kaguya, a mouse which had same-sex parents; two haploids were then combined to make the diploid mouse.
Haploidisation commitment is a checkpoint in meiosis which follows the successful completion of premeiotic DNA replication and recombination commitment.
See also
Polyploidy
Ploidy
References
Genetics |
https://en.wikipedia.org/wiki/KOI8-R | KOI8-R (RFC 1489) is an 8-bit character encoding, derived from the KOI-8 encoding by the programmer Andrei Chernov in 1993 and designed to cover Russian, which uses a Cyrillic alphabet. KOI8-R was based on Russian Morse code, which was created from a phonetic version of Latin Morse code. As a result, Russian Cyrillic letters are in pseudo-Roman order rather than the normal Cyrillic alphabetical order. Although this may seem unnatural, if the 8th bit is stripped, the text is partially readable in ASCII and may convert to syntactically correct KOI-7. For example, "Русский Текст" in KOI8-R becomes rUSSKIJ tEKST ("Russian Text").
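A quick way to see this property, sketched below (Python's built-in koi8_r codec is assumed), is to encode Cyrillic text as KOI8-R, clear the high bit of every byte, and read the result as ASCII:

```python
text = "Русский Текст"
koi8_bytes = text.encode("koi8_r")

# Clearing the 8th (high) bit of each byte leaves readable, case-swapped ASCII.
stripped = bytes(b & 0x7F for b in koi8_bytes)
print(stripped.decode("ascii"))   # -> rUSSKIJ tEKST
```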
KOI8 stands for Kod Obmena Informatsiey, 8 bit () which means "Code for Information Exchange, 8 bit". In Microsoft Windows, KOI8-R is assigned the code page number 20866. In IBM, KOI8-R is assigned code page 878. KOI8-R also happens to cover Bulgarian, but has not been used for that purpose since CP1251 was accepted. The use of these older code pages is being replaced with Unicode as a more common way to represent Cyrillic together with other languages.
Unicode is preferred to KOI-8 and its variants or other Cyrillic encodings in modern applications, especially on the Internet, making UTF-8 the dominant encoding for web pages. KOI8-R, the most popular variant, is used by less than 0.004% of websites, mainly Russian and Bulgarian ones, and even those sites mostly prefer other encodings. For further discussion of Unicode's complete coverage of 436 Cyrillic letters/code points, including for Old Cyrillic, and how single-byte character encodings, such as Windows-1251 and KOI8 variants, cannot provide this, see Cyrillic script in Unicode.
Character set
The following table shows the KOI8-R encoding. Each character is shown with its equivalent Unicode code point.
See also
KOI8-B, a derivation of KOI8-R with only the letter subset implemented
KOI8-U, another derivative encoding which adds Ukrainian characters
KOI character encodings
RELCOM
|
https://en.wikipedia.org/wiki/KOI8-U | KOI8-U (RFC 2319) is an 8-bit character encoding, designed to cover Ukrainian, which uses a Cyrillic alphabet. It is based on KOI8-R, which covers Russian and Bulgarian, but replaces eight box drawing characters with four Ukrainian letters Ґ, Є, І, and Ї in both upper case and lower case.
KOI8-RU is closely related, but adds Ў for Belarusian. In both, the letter allocations match those in KOI8-E, except for Ґ which is added to KOI8-F.
In Microsoft Windows, KOI8-U is assigned the code page number 21866. In IBM, KOI8-U is assigned code page/CCSID 1168.
KOI8 remains much more commonly used than ISO 8859-5, which never really caught on. Another common Cyrillic character encoding is Windows-1251. In the future, both may eventually give way to Unicode.
KOI8 stands for Kod Obmena Informatsiey, 8 bit () which means "Code for Information Exchange, 8 bit".
The KOI8 character sets have the property that the Russian Cyrillic letters are in pseudo-Roman order rather than the natural Cyrillic alphabetical order as in ISO 8859-5. Although this may seem unnatural, it has the useful property that if the eighth bit is stripped, the text can still be read (or at least deciphered) in case-reversed transliteration on an ordinary ASCII terminal. For instance, "Русский Текст" in KOI8-U becomes rUSSKIJ tEKST ("Russian Text") if the 8th bit is stripped.
Character set
The following table shows the KOI8-U encoding. Each character is shown with its equivalent Unicode code point.
Although RFC 2319 says that character 0x95 should be U+2219 (∙), it may also be U+2022 (•) to match the bullet character in Windows-1251.
Some references have a typo and incorrectly state that character 0xB4 is U+0403, rather than the correct U+0404. This typo is present in Appendix A of RFC 2319 (but the table in the main text of the RFC gives the correct mapping).
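A quick check of the correct mapping, sketched below on the assumption that Python's built-in koi8_u codec follows the corrected table in RFC 2319:

```python
# Byte 0xB4 should decode to U+0404 (Є, Ukrainian Ie), not U+0403.
ch = bytes([0xB4]).decode("koi8_u")
print(ch, hex(ord(ch)))   # expected output: Є 0x404
```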
See also
KOI character encodings
Ukrainian alphabet
References
Further reading
External links
https://web.archive.org/web/2005020623094 |
https://en.wikipedia.org/wiki/Ethanethiol | Ethanethiol, commonly known as ethyl mercaptan, is an organosulfur compound with the formula CH3CH2SH. It is a colorless liquid with a distinct odor. Abbreviated EtSH, it consists of an ethyl group (Et), CH3CH2, attached to a thiol group, SH. Its structure parallels that of ethanol, but with sulfur in place of oxygen. The odor of EtSH is infamous. Ethanethiol is more volatile than ethanol due to a diminished ability to engage in hydrogen bonding. Ethanethiol is toxic in high concentrations. It occurs naturally as a minor component of petroleum, and may be added to otherwise odorless gaseous products such as liquefied petroleum gas (LPG) to help warn of gas leaks. At these concentrations, ethanethiol is not harmful.
Preparation
Ethanethiol is prepared by the reaction of ethylene with hydrogen sulfide in the presence of various catalysts. It is also prepared commercially by the reaction of ethanol with hydrogen sulfide gas over an acidic solid catalyst, such as alumina.
Historic methods
Ethanethiol was originally reported by Zeise in 1834. Zeise treated calcium ethyl sulfate with a suspension of barium sulfide saturated with hydrogen sulfide. He is credited with naming the C2H5S- group as mercaptum.
Ethanethiol can also be prepared by a halide displacement reaction, where ethyl halide is treated with aqueous sodium bisulfide. This conversion was demonstrated as early as 1840 by Henri Victor Regnault.
Odor
Ethanethiol has a strongly disagreeable odor that humans can detect in minute concentrations. The threshold for human detection is as low as one part in 2.8 billion parts of air (0.36 parts per billion). Its odor resembles that of leeks, onions, durian or cooked cabbage.
Employees of the Union Oil Company of California reported first in 1938 that turkey vultures would gather at the site of any gas leak. After finding that this was caused by traces of ethanethiol in the gas it was decided to boost the amount of ethanethiol in the gas, to make detection of leaks |
https://en.wikipedia.org/wiki/AMule | aMule is a free peer-to-peer file sharing utility that works with the eDonkey network and the Kad network, offering similar features to eMule and adding others such as GeoIP (country flags). On August 18, 2003 it was forked from the xMule source code, which itself is a fork of the lMule project, which was the first attempt to bring the eMule client to Linux. These projects were discontinued and aMule is the resulting project, though aMule has less and less resemblance to the client that sired it.
aMule shares code with the eMule project. The credit and partials downloads of eMule can be used by aMule and vice versa, making program substitution simple.
aMule aims to be portable across multiple platforms, and does this with the help of the wxWidgets library. Currently supported systems include Linux, macOS, various BSD-derived systems, Windows, Irix and Solaris. Besides the stable releases, the project also offers SVN versions as unstable releases.
TCP and UDP ports
According to the aMule official FAQ, these are the default ports. Server ports 4661 TCP and 4665 UDP are only used by the eDonkey network; the Kad network therefore uses only 4662 TCP and 4672 UDP. The traffic direction is given from the client's perspective:
4661 TCP (outgoing): Port on which an eDonkey server listens for connection (port number may vary depending on eDonkey server used).
4662 TCP (outgoing and incoming): Client to client transfers.
4665 UDP (outgoing and incoming): Used for global eDonkey server searches and global source queries. This is always Client TCP port + 3.
4672 UDP (outgoing and incoming): Extended aMule protocol, Queue Rating, File Reask Ping
4711 TCP: WebServer listening port. Used if aMule is accessed through the web.
4712 TCP: internal Connection port. Used to communicate aMule with other applications such as aMule WebServer or aMuleCMD.
Most of these ports are customizable.
Monolithic and modular build
aMule can be compiled using -disable-monolithic parameter: this |
https://en.wikipedia.org/wiki/IBiquity | iBiquity Digital Corporation is a company formed by the merger of USA Digital Radio and Lucent Digital Radio. Based in Columbia, Maryland, with additional offices in Basking Ridge, New Jersey, and Auburn Hills, Michigan, iBiquity is a privately held intellectual properties company with investors in the technology, broadcasting, manufacturing, media, and financial industries.
About
IBOC can operate on both AM band and FM band broadcasts either in a digital-only mode, or in a "hybrid" digital+analog mode. The stations can split the digital bandwidth to carry multiple audio program streams (called HD2 or HD3 multicast channels) as well as show on-screen text data such as song title and artist, traffic and weather information. Nearly 2,000 stations in the US broadcast with this system. The technology is marketed under the trademark HD Radio. It is the only technology approved by the Federal Communications Commission for digital AM and FM broadcasting in the United States. Due in large part to its ability to deliver digital audio services while leveraging existing analog spectrum (by broadcasting digital information on the sidebands), commercial implementation of the technology is gaining momentum in several countries, including Canada, Mexico and the Philippines. Testing and demonstrations of the system are also underway in China, Colombia, Germany, Indonesia, Jamaica, New Zealand, Poland, Switzerland, Thailand, and Ukraine, among other countries. According to iBiquity Digital, holder of the HD Radio trademark, the "HD" in "HD Radio" does not stand for "High Definition" or "Hybrid Digital". It is simply part of their trademark, and does not have any meaning on its own. On September 2, 2015, iBiquity announced that DTS was purchasing the company for $172 million USD, bringing the HD Radio technology under the same banner as DTS' eponymous theater surround sound systems.
References
External links
HD Radio
Electronics companies of the United States
D |
https://en.wikipedia.org/wiki/PuTTY | PuTTY () is a free and open-source terminal emulator, serial console and network file transfer application. It supports several network protocols, including SCP, SSH, Telnet, rlogin, and raw socket connection. It can also connect to a serial port. The name "PuTTY" has no official meaning.
PuTTY was originally written for Microsoft Windows, but it has been ported to various other operating systems. Official ports are available for some Unix-like platforms, ports to other systems are works in progress, and unofficial ports have been contributed to platforms such as Symbian, Windows Mobile and Windows Phone.
PuTTY was written and is maintained primarily by Simon Tatham, a British programmer.
Features
PuTTY supports many variations on the secure remote terminal, and provides user control over the SSH encryption key and protocol version, alternate ciphers such as AES, 3DES, RC4, Blowfish and DES, and public-key authentication. PuTTY uses its own format of key files – PPK (protected by Message Authentication Code). PuTTY supports SSO through GSSAPI, including user-provided GSSAPI DLLs. It can also emulate control sequences from xterm, VT220, VT102 or ECMA-48 terminal emulation, and allows local, remote, or dynamic port forwarding with SSH (including X11 forwarding). The network communication layer supports IPv6, and the SSH protocol supports the zlib@openssh.com delayed compression scheme. It can also be used with local serial port connections.
PuTTY comes bundled with command-line SCP and SFTP clients, called "pscp" and "psftp" respectively, and plink, a command-line connection tool, used for non-interactive sessions.
PuTTY does not support session tabs directly, but many wrappers are available that do.
History
PuTTY development began late in 1998, and was a usable SSH-2 client by October 2000.
Components
PuTTY consists of several components:
PuTTY the Telnet, rlogin, and SSH client itself, which can also connect to a serial port
PSCP an SCP client, i.e. command-line secur |
https://en.wikipedia.org/wiki/Calculus%20of%20constructions | In mathematical logic and computer science, the calculus of constructions (CoC) is a type theory created by Thierry Coquand. It can serve as both a typed programming language and as constructive foundation for mathematics. For this second reason, the CoC and its variants have been the basis for Coq and other proof assistants.
Some of its variants include the calculus of inductive constructions (which adds inductive types), the calculus of (co)inductive constructions (which adds coinduction), and the predicative calculus of inductive constructions (which removes some impredicativity).
General traits
The CoC is a higher-order typed lambda calculus, initially developed by Thierry Coquand. It is well known for being at the top of Barendregt's lambda cube. It is possible within CoC to define functions from terms to terms, as well as terms to types, types to types, and types to terms.
The CoC is strongly normalizing, and hence consistent.
Usage
The CoC has been developed alongside the Coq proof assistant. As features were added (or possible liabilities removed) to the theory, they became available in Coq.
Variants of the CoC are used in other proof assistants, such as Matita and Lean.
The basics of the calculus of constructions
The calculus of constructions can be considered an extension of the Curry–Howard isomorphism. The Curry–Howard isomorphism associates a term in the simply typed lambda calculus with each natural-deduction proof in intuitionistic propositional logic. The calculus of constructions extends this isomorphism to proofs in the full intuitionistic predicate calculus, which includes proofs of quantified statements (which we will also call "propositions").
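As a small, hedged illustration of this reading (the particular term is chosen here for the example, not taken from the text), the polymorphic identity term inhabits, i.e. proves, the proposition that every proposition implies itself:

```latex
\lambda (A : \mathbf{P}).\ \lambda (x : A).\ x
  \;:\; \forall (A : \mathbf{P}).\ A \to A
% Read through Curry–Howard: the term on the left is a proof of the
% quantified proposition on the right.
```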
Terms
A term in the calculus of constructions is constructed using the following rules:
T is a term (also called type);
P is a term (also called prop, the type of all propositions);
Variables (x, y, ...) are terms;
If A and B are terms, then so is (A B);
If A and B are terms and x is a variable, then the follo |
https://en.wikipedia.org/wiki/IBM%20Monochrome%20Display%20Adapter | The Monochrome Display Adapter (MDA, also MDA card, Monochrome Display and Printer Adapter, MDPA) is IBM's standard video display card and computer display standard for the IBM PC introduced in 1981. The MDA does not have any pixel-addressable graphics modes, only a single monochrome text mode which can display 80 columns by 25 lines of high resolution text characters or symbols useful for drawing forms.
Hardware design
The original IBM MDA was an 8-bit ISA card with a Motorola 6845 display controller, 4 KB of RAM, a DE-9 output port intended for use with an IBM monochrome monitor, and a parallel port for attachment of a printer, avoiding the need to purchase a separate card.
Capabilities
The MDA was based on the IBM System/23 Datamaster's display system, and was intended to support business and word processing use with its sharp, high-resolution characters. Each character is rendered in a box of 9 × 14 pixels, of which 7 × 11 depicts the character itself and the other pixels provide space between character columns and lines. Some characters, such as the lowercase "m", are rendered eight pixels across.
The theoretical total screen display resolution of the MDA is 720 × 350 pixels, if the dimensions of all character cells are added up, but the MDA cannot address individual pixels to take full advantage of this resolution. Each character cell can be set to one of 256 bitmap characters stored in ROM on the card, and this character set cannot be altered from the built-in hardware code page 437. The only way to simulate "graphics" is through ASCII art, obtaining a low resolution 80 × 25 "pixels" screen, based on character positions.
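The arithmetic behind those figures can be sketched as below; the two-bytes-per-cell layout (one character byte plus one attribute byte) is assumed here as the usual description of the MDA text buffer, not something stated in the text above:

```python
columns, rows = 80, 25          # text cells
cell_w, cell_h = 9, 14          # pixels per character cell

# Theoretical pixel grid implied by the character cells.
print(columns * cell_w, "x", rows * cell_h)        # 720 x 350

# Text buffer size at two bytes per cell fits in the card's 4 KB of RAM.
print(columns * rows * 2, "bytes of", 4 * 1024)    # 4000 bytes of 4096
```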
Code page 437 has 256 characters (0-255), including the standard 95 printable ASCII characters (codes 32-126), while the 33 ASCII control codes (0-31 and 127) are replaced with printable graphic symbols. It also includes another 128 characters (128-255), such as the aforementioned characters for drawing forms. Some of these shapes appear in |
https://en.wikipedia.org/wiki/Good%20manufacturing%20practice | Current good manufacturing practices (cGMP) are those conforming to the guidelines recommended by relevant agencies. Those agencies control the authorization and licensing of the manufacture and sale of food and beverages, cosmetics, pharmaceutical products, dietary supplements, and medical devices. These guidelines provide minimum requirements that a manufacturer must meet to assure that their products are consistently high in quality, from batch to batch, for their intended use. The rules that govern each industry may differ significantly; however, the main purpose of GMP is always to prevent harm from occurring to the end user. Additional tenets include ensuring the end product is free from contamination, that it is consistent in its manufacture, that its manufacture has been well documented, that personnel are well trained, and that the product has been checked for quality more than just at the end phase. GMP is typically ensured through the effective use of a quality management system (QMS).
Good manufacturing practices, along with good agricultural practices, good laboratory practices and good clinical practices, are overseen by regulatory agencies in the United Kingdom, United States, Canada, Europe, China, India and other countries.
High-level details
Good manufacturing practice guidelines provide guidance for manufacturing, testing, and quality assurance in order to ensure that a manufactured product is safe for human consumption or use. Many countries have legislated that manufacturers follow GMP procedures and create their own GMP guidelines that correspond with their legislation.
All guidelines follow a few basic principles:
Manufacturing facilities must maintain a clean and hygienic manufacturing area.
Manufacturing facilities must maintain controlled environmental conditions in order to prevent cross-contamination from adulterants and allergens that may render the product unsafe for human consumption or use.
Manufacturing processes must be clea |
https://en.wikipedia.org/wiki/Replay%20attack | A replay attack (also known as a repeat attack or playback attack) is a form of network attack in which valid data transmission is maliciously or fraudulently repeated or delayed. This is carried out either by the originator or by an adversary who intercepts the data and re-transmits it, possibly as part of a spoofing attack by IP packet substitution. This is one of the lower-tier versions of a man-in-the-middle attack. Replay attacks are usually passive in nature.
Another way of describing such an attack is:
"an attack on a security protocol using a replay of messages from a different context into the intended (or original and expected) context, thereby fooling the honest participant(s) into thinking they have successfully completed the protocol run."
Example
Suppose Alice wants to prove her identity to Bob. Bob requests her password as proof of identity, which Alice dutifully provides (possibly after some transformation like hashing, or even salting, the password); meanwhile, Eve is eavesdropping on the conversation and keeps the password (or the hash). After the interchange is over, Eve (acting as Alice) connects to Bob; when asked for proof of identity, Eve sends Alice's password (or hash) read from the last session which Bob accepts, thus granting Eve access.
Prevention and countermeasures
Replay attacks can be prevented by tagging each encrypted component with a session ID and a component number. The two pieces of this scheme are independent of one another; because there is no interdependency, there are fewer vulnerabilities. This works because a unique, random session ID is created for each run of the program, so a previous run becomes difficult to replicate. An attacker is thus unable to perform the replay, because on a new run the session ID will have changed.
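A minimal sketch of this idea follows; the function names, message layout, and shared key are illustrative assumptions rather than a protocol from the text. Each run uses a fresh random session ID, messages carry a counter and an HMAC, and the receiver refuses any (session ID, counter) pair it has already seen, so a captured message cannot be replayed:

```python
import hashlib
import hmac
import secrets

KEY = secrets.token_bytes(32)   # shared secret, assumed to be established elsewhere

def make_message(session_id, counter, payload):
    body = session_id + counter.to_bytes(4, "big") + payload
    tag = hmac.new(KEY, body, hashlib.sha256).digest()
    return body + tag

def accept(message, seen):
    body, tag = message[:-32], message[-32:]
    if not hmac.compare_digest(tag, hmac.new(KEY, body, hashlib.sha256).digest()):
        return None                  # forged or corrupted message
    ident = body[:20]                # 16-byte session ID + 4-byte counter
    if ident in seen:
        return None                  # replay: this (session, counter) was already seen
    seen.add(ident)
    return body[20:]                 # the payload

seen = set()
msg = make_message(secrets.token_bytes(16), 1, b"transfer 10")
print(accept(msg, seen))             # b'transfer 10'  (accepted once)
print(accept(msg, seen))             # None            (replayed copy rejected)
```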
Session IDs, also known as session tokens, are one mechanism that can be used to help avoid replay attacks. The way of generating |
https://en.wikipedia.org/wiki/PTS-DOS | PTS-DOS (aka PTS/DOS) is a disk operating system, a DOS clone, developed in Russia by PhysTechSoft and Paragon Technology Systems.
History and versions
PhysTechSoft was formed in 1991 in Moscow, Russia by graduates and members of MIPT, informally known as PhysTech. At the end of 1993, PhysTechSoft released the first commercially available PTS-DOS as PTS-DOS v6.4. The version numbering followed MS-DOS version numbers, as Microsoft released MS-DOS 6.2 in November 1993.
In 1995, some programmers left PhysTechSoft and founded Paragon Technology Systems. They took source code with them and released their own version named PTS/DOS 6.51CD as well as S/DOS 1.0 ("Source DOS"), a stripped down open-source version. According to official PhysTechSoft announcements, these programmers violated both copyright laws and Russian military laws, as PTS-DOS was developed in close relationship with Russia's military and thus may be subject to military secrets law.
PhysTechSoft continued development on their own, released PTS-DOS v6.6 in the interim, and presented PTS-DOS v6.65 at the CeBIT exhibition in 1997. The next version from PhysTechSoft, formally PTS/DOS Extended Version 6.70, was labeled PTS-DOS 2000 and is still distributed as the last 16-bit PTS-DOS system.
Paragon continued their PTS-DOS line and released Paragon DOS Pro 2000 (also known and labeled in some places as PTS/DOS Pro 2000). According to Paragon, this was the last version and all development since then ceased. Moreover, this release contained bundled source code of older PTS-DOS v6.51.
Later, PhysTechSoft continued developing PTS-DOS and finally released PTS-DOS 32, formally known as PTS-DOS v7.0, which added support for the FAT32 file system.
PTS-DOS is certified by the Russian Ministry of Defense.
Commands
The following list of commands are supported by PTS-DOS 2000 Pro.
APPEND
ASK
ASSIGN
ATTR
BEEP
BREAK
CALL
CD
CHDIR
CHKDSK
CHOICE
CLS
COMMAND
COPY
CTTY
DATE
DEBUG
DEL
DIR
DI |
https://en.wikipedia.org/wiki/Knuth%E2%80%93Bendix%20completion%20algorithm | The Knuth–Bendix completion algorithm (named after Donald Knuth and Peter Bendix) is a semi-decision algorithm for transforming a set of equations (over terms) into a confluent term rewriting system. When the algorithm succeeds, it effectively solves the word problem for the specified algebra.
Buchberger's algorithm for computing Gröbner bases is a very similar algorithm. Although developed independently, it may also be seen as the instantiation of Knuth–Bendix algorithm in the theory of polynomial rings.
Introduction
For a set E of equations, its deductive closure (↔*E) is the set of all equations that can be derived by applying equations from E in any order.
Formally, E is considered a binary relation, (→E) is its rewrite closure, and (↔*E) is the equivalence closure of (→E).
For a set R of rewrite rules, its deductive closure (→*R ∘ ←*R) is the set of all equations that can be confirmed by applying rules from R left-to-right to both sides until they are literally equal.
Formally, R is again viewed as a binary relation, (→R) is its rewrite closure, (←R) is its converse, and (→*R ∘ ←*R) is the relation composition of their reflexive transitive closures (→*R and ←*R).
For example, if E consists of the group axioms, the derivation chain
demonstrates that a−1⋅(a⋅b) ↔*E b is a member of E's deductive closure.
If R is a "rewrite rule" version of E, the derivation chains
demonstrate that (a−1⋅a)⋅b →*R ∘ ←*R b is a member of R's deductive closure.
However, there is no way to similarly derive a−1⋅(a⋅b) →*R ∘ ←*R b, since a right-to-left application of the associativity rule is not allowed.
The Knuth–Bendix algorithm takes a set E of equations between terms, and a reduction ordering (>) on the set of all terms, and attempts to construct a confluent and terminating term rewriting system R that has the same deductive closure as E.
While proving consequences from E often requires human intuition, proving consequences from R does not.
For more details, see Confluence (abstract rewriting)#Motivating examples, which gives an example p |
https://en.wikipedia.org/wiki/Trastuzumab | Trastuzumab, sold under the brand name Herceptin among others, is a monoclonal antibody used to treat breast cancer and stomach cancer. It is specifically used for cancer that is HER2 receptor positive. It may be used by itself or together with other chemotherapy medication. Trastuzumab is given by slow injection into a vein and injection just under the skin.
Common side effects include fever, infection, cough, headache, trouble sleeping, and rash. Other severe side effects include heart failure, allergic reactions, and lung disease. Use during pregnancy may harm the baby. Trastuzumab works by binding to the HER2 receptor and slowing down cell replication.
Trastuzumab was approved for medical use in the United States in September 1998, and in the European Union in August 2000. It is on the World Health Organization's List of Essential Medicines. A biosimilar was approved in the European Union in November 2017, and in the United States in December 2018.
Medical uses
The safety and efficacy of trastuzumab-containing combination therapies (with chemotherapy, hormone blockers, or lapatinib) for the treatment of metastatic breast cancer have been evaluated in a pooled analysis of seven randomized trials. The overall hazard ratios (HR) for overall survival and progression-free survival were 0.82 and 0.61, respectively. It was difficult to accurately ascertain the true impact of trastuzumab on survival, as in three of the seven trials, over half of the patients in the control arm were allowed to cross over and receive trastuzumab after their cancer began to progress. Thus, this analysis likely underestimates the true survival benefit associated with trastuzumab treatment in this population. In these trials, trastuzumab also increased the risk of heart problems, including heart failure and left ventricular ejection fraction decline.
In early-stage HER2-positive breast cancer, trastuzumab-containing regimens improved overall survival (Hazard ratio (HR) = 0.66) and disease-free survival (HR = 0.60). Increased risk of heart failure (RR = |
https://en.wikipedia.org/wiki/Yousef%20Alavi | Yousef Alavi (March 19, 1928 – May 21, 2013) was an Iranian born American mathematician who specialized in combinatorics and graph theory. He received his PhD from Michigan State University in 1958. He was a professor of mathematics at Western Michigan University from 1958 until his retirement in 1996; he chaired the department from 1989 to 1992.
In 1987 he received the first Distinguished Service Award of the Michigan Section of the Mathematical Association of America due to his 30 years of service to the MAA; at that time, the Michigan House and Senate issued a special resolution honoring him.
References
20th-century American mathematicians
Combinatorialists
2013 deaths
Michigan State University alumni
Western Michigan University faculty
1928 births |
https://en.wikipedia.org/wiki/B%C3%A9la%20Bollob%C3%A1s | Béla Bollobás FRS (born 3 August 1943) is a Hungarian-born British mathematician who has worked in various areas of mathematics, including functional analysis, combinatorics, graph theory, and percolation. He was strongly influenced by Paul Erdős since the age of 14.
Early life and education
As a student, he took part in the first three International Mathematical Olympiads, winning two gold medals. Paul Erdős invited Bollobás to lunch after hearing about his victories, and they kept in touch afterward. Bollobás' first publication was a joint publication with Erdős on extremal problems in graph theory, written when he was in high school in 1962.
With Erdős's recommendation to Harold Davenport and a long struggle for permission from the Hungarian authorities, Bollobás was able to spend an undergraduate year in Cambridge, England. However, the authorities denied his request to return to Cambridge for doctoral study. A similar scholarship offer from Paris was also quashed. He wrote his first doctorate in discrete geometry under the supervision of László Fejes Tóth and Paul Erdős in Budapest University, 1967, after which he spent a year in Moscow with Israïl Moiseevich Gelfand. After spending a year at Christ Church, Oxford, where Michael Atiyah held the Savilian Chair of Geometry, he vowed never to return to Hungary due to his disillusion with the 1956 Soviet intervention. He then went to Trinity College, Cambridge, where in 1972 he received a second PhD in functional analysis, studying Banach algebras under the supervision of Frank Adams. Bollobás recalled, "By then, I said to myself, 'If I ever manage to leave Hungary, I won't return.'" In 1970, he was awarded a fellowship to the college.
His main area of research is combinatorics, particularly graph theory. His chief interests are in extremal graph theory and random graph theory. In 1996 he resigned his university post, but remained a Fellow of Trinity College, Cambridge.
Career
Bollobás has been a Fellow of Trin |
https://en.wikipedia.org/wiki/Andr%C3%A1s%20Gy%C3%A1rf%C3%A1s | András Gyárfás (born 1945) is a Hungarian mathematician who specializes in the study of graph theory. He is famous for two conjectures:
Together with Paul Erdős he conjectured what is now called the Erdős–Gyárfás conjecture which states that any graph with minimum degree 3 contains a simple cycle whose length is a power of two.
He and David Sumner independently formulated the Gyárfás–Sumner conjecture according to which, for every tree T, the T-free graphs are χ-bounded.
Gyárfás began working as a researcher for the Computer and Automation Research Institute of the Hungarian Academy of Sciences in 1968. He earned a candidate degree in 1980, and a doctorate (Dr. Math. Sci.) in 1992. He won the Géza Grünwald Commemorative Prize for young researchers of the János Bolyai Mathematical Society in 1978. He was co-author with Paul Erdős on 15 papers, and thus has Erdős number one.
References
External links
András Gyárfás at the Computer and Automation Research Institute, Hungarian Academy of Sciences
Google scholar profile
20th-century Hungarian mathematicians
21st-century Hungarian mathematicians
1945 births
Combinatorialists
Living people |
https://en.wikipedia.org/wiki/Adobe%20Atmosphere | Adobe Atmosphere (informally abbreviated Atmo) was a software platform for interacting with 3D computer graphics. 3D models created with the commercial program could be explored socially using a browser plugin available free of charge. Atmosphere was originally developed by Attitude Software as 3D Anarchy and was later bought by Adobe Systems. The product spent the majority of its lifetime in beta testing. Adobe released the last version of Atmosphere, version 1.0 build 216, in February 2004, then discontinued the software in December that year.
Features
Atmosphere focused on explorable "worlds" (later officially called "environments"), which were linked together by "portals", analogous to the World Wide Web's hyperlinks. These portals were represented as spinning squares of red, green, and blue that revolved around each other and floated above the ground. Portals were indicative of the Atmosphere team's desire to mirror the functionality of Web pages. Although the world itself was described in the .aer (or .atmo) file, images and sounds were kept separately, usually in the GIF, WAV or MP3 format. Objects in worlds were scriptable using a specialized dialect of JavaScript, allowing a more immersive environment, and worlds could be generated dynamically using PHP. Using JavaScript, a world author could link an object to a Web page, so that a user could, for example, launch a Web page by clicking on a billboard advertisement (Ctrl+Shift+Click in earlier versions). By version 1.0, Atmosphere also boasted support for using Macromedia Flash animations and Windows Media Video as textures.
Atmosphere-based worlds consisted mainly of parametric primitives, such as floors, walls, and cones. These primitives could be painted a solid color, given an image-based texture, or made "subtractive". Invisible, "subtractive" primitives could be used to cut "holes" in other primitives, to build more complex shapes. Many worlds also contained animated polygon meshes made possible by |
https://en.wikipedia.org/wiki/Multivariable%20calculus | Multivariable calculus (also known as multivariate calculus) is the extension of calculus in one variable to calculus with functions of several variables: the differentiation and integration of functions involving multiple variables (multivariate), rather than just one.
Multivariable calculus may be thought of as an elementary part of advanced calculus. For advanced calculus, see calculus on Euclidean space. The special case of calculus in three dimensional space is often called vector calculus.
Typical operations
Limits and continuity
A study of limits and continuity in multivariable calculus yields many counterintuitive results not demonstrated by single-variable functions. For example, there are scalar functions of two variables with points in their domain which give different limits when approached along different paths. E.g., the function.
approaches zero whenever the point is approached along lines through the origin (). However, when the origin is approached along a parabola , the function value has a limit of . Since taking different paths toward the same point yields different limit values, a general limit does not exist there.
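The specific function used for this example is not preserved in the text above, so the following is an assumed stand-in: one standard function with exactly this behavior, written out in LaTeX:

```latex
f(x,y) = \frac{x^{2} y}{x^{4} + y^{2}} , \qquad
f(x, kx) = \frac{k x}{x^{2} + k^{2}} \;\xrightarrow{\;x \to 0\;}\; 0 , \qquad
f(x, x^{2}) = \frac{x^{4}}{2 x^{4}} = \tfrac{1}{2} .
% Along every straight line through the origin the limit is 0, but along the
% parabola y = x^2 it is 1/2, so no general limit exists at the origin.
```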
Continuity in each argument not being sufficient for multivariate continuity can also be seen from the following example. In particular, for a real-valued function f with two real-valued parameters, f(x, y), continuity of f in x for fixed y and continuity of f in y for fixed x does not imply continuity of f.
Consider
It is easy to verify that this function is zero by definition on the boundary and outside of the quadrangle . Furthermore, the functions defined for constant and and by
and
are continuous. Specifically,
for all and .
However, the sequence (for natural ) converges to , rendering the function as discontinuous at . Approaching the origin not along parallels to the - and -axis reveals this discontinuity.
Continuity of function composition
If is continuous at and is a single variable function continuous at then the |
https://en.wikipedia.org/wiki/Deinterlacing | Deinterlacing is the process of converting interlaced video into a non-interlaced or progressive form. Interlaced video signals are commonly found in analog television, digital television (HDTV) when in the 1080i format, some DVD titles, and a smaller number of Blu-ray discs.
An interlaced video frame consists of two fields taken in sequence: the first containing all the odd lines of the image, and the second all the even lines. Analog television employed this technique because it allowed for less transmission bandwidth while keeping a high frame rate for smoother and more life-like motion. A non-interlaced (or progressive scan) signal that uses the same bandwidth only updates the display half as often and was found to create a perceived flicker or stutter. CRT-based displays were able to display interlaced video correctly due to their complete analog nature, blending in the alternating lines seamlessly. However, since the early 2000s, displays such as televisions and computer monitors have become almost entirely digital - in that the display is composed of discrete pixels - and on such displays the interlacing becomes noticeable and can appear as distracting visual defects, which the deinterlacing process should try to minimize.
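Two of the simplest approaches can be sketched as follows (the function names are illustrative: "weave" recombines the two fields into one frame, "bob" line-doubles a single field):

```python
import numpy as np

def weave(field_a, field_b):
    """Interleave two fields back into a full frame (weave deinterlacing)."""
    height, width = field_a.shape
    frame = np.empty((2 * height, width), dtype=field_a.dtype)
    frame[0::2] = field_a    # one set of alternating lines from the first field
    frame[1::2] = field_b    # the other set from the second field
    return frame

def bob(field):
    """Line-double a single field: a simple intra-field deinterlacing method."""
    return np.repeat(field, 2, axis=0)
```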
Deinterlacing is thus a necessary process and comes built-in to most modern DVD players, Blu-ray players, LCD/LED televisions, digital projectors, TV set-top boxes, professional broadcast equipment, and computer video players and editors - although each with varying levels of quality.
Deinterlacing has been researched for decades and employs complex processing algorithms; however, consistent results have been very hard to achieve.
Background
Both video and photographic film capture a series of frames (still images) in rapid succession; however, television systems read the captured image by serially scanning the image sensor by lines (rows). In analog television, each frame is divided into two consecutive fields, one containing all ev |
https://en.wikipedia.org/wiki/Heterogeneous%20Element%20Processor | The Heterogeneous Element Processor (HEP) was introduced by Denelcor, Inc. in 1982. The HEP's architect was Burton Smith. The machine was designed to solve fluid dynamics problems for the Ballistic Research Laboratory. A HEP system, as the name implies, was pieced together from many heterogeneous components -- processors, data memory modules, and I/O modules. The components were connected via a switched network.
A single processor, called a PEM, in a HEP system (up to sixteen PEMs could be connected) was rather unconventional; via a "program status word (PSW) queue," up to fifty processes could be maintained in hardware at once. The largest system ever delivered had 4 PEMs. The eight-stage instruction pipeline allowed instructions from eight different processes to proceed at once. In fact, only one instruction from a given process was allowed to be present in the pipeline at any point in time. Therefore, the full processor throughput of 10 MIPS could only be achieved when eight or more processes were active; no single process could achieve throughput greater than 1.25 MIPS. This type of multithreading processing classifies the HEP as a barrel processor. The hardware implementation of the HEP PEM was emitter-coupled logic.
Processes were classified as either user-level or supervisor-level. User-level processes could create supervisor-level processes, which were used to manage user-level processes and perform I/O. Processes of the same class were required to be grouped into one of seven user tasks and seven supervisor tasks.
Each processor, in addition to the PSW queue and instruction pipeline, contained instruction memory, 2,048 64-bit general purpose registers and 4,096 constant registers. Constant registers were differentiated by the fact that only supervisor processes could modify their contents. The processors themselves contained no data memory; instead, data memory modules could be separately attached to the switched network.
The HEP memory consis |
https://en.wikipedia.org/wiki/Berm | A berm is a level space, shelf, or raised barrier (usually made of compacted soil) separating areas in a vertical way, especially partway up a long slope. It can serve as a terrace road, track, path, a fortification line, a border/separation barrier for navigation, good drainage, industry, or other purposes.
Etymology
The word is one of Middle Dutch and came into usage in English via French.
Military use
History
In medieval military engineering, a berm (or berme) was a level space between a parapet or defensive wall and an adjacent steep-walled ditch or moat. It was intended to reduce soil pressure on the walls of the excavated part to prevent its collapse. It also meant that debris dislodged from fortifications would not fall into (and fill) a ditch or moat.
In the trench warfare of World War I, the name was applied to a similar feature at the lip of a trench, which served mainly as an elbow-rest for riflemen.
Modern usage
In modern military engineering, a berm is the earthen or sod wall or parapet, especially a low earthen wall adjacent to a ditch. The digging of the ditch (often by a bulldozer or military engineering vehicle) can provide the soil from which the berm is constructed. Walls constructed in this manner are an obstacle to vehicles, including most armoured fighting vehicles but are easily crossed by infantry. Because of the ease of construction, such walls can be made hundreds or thousands of kilometres long. A prominent example of such a berm is the Moroccan Western Sahara Wall.
Erosion control
Berms are also used to control erosion and sedimentation by reducing the rate of surface runoff. The berms either reduce the velocity of the water, or direct water to areas that are not susceptible to erosion, thereby reducing the adverse effects of running water on exposed topsoil. Following the 2010 Deepwater Horizon oil spill in the Gulf of Mexico, the construction of berms designed to prevent oil from reaching the fragile Louisiana wetlands (which w |
https://en.wikipedia.org/wiki/Steganalysis | Steganalysis is the study of detecting messages hidden using steganography; this is analogous to cryptanalysis applied to cryptography.
Overview
The goal of steganalysis is to identify suspected packages, determine whether or not they have a payload encoded into them, and, if possible, recover that payload.
Unlike cryptanalysis, in which intercepted data contains a message (though that message is encrypted), steganalysis generally starts with a pile of suspect data files, but little information about which of the files, if any, contain a payload. The steganalyst is usually something of a forensic statistician, and must start by reducing this set of data files (which is often quite large; in many cases, it may be the entire set of files on a computer) to the subset most likely to have been altered.
Basic techniques
The problem is generally handled with statistical analysis. A set of unmodified files of the same type, and ideally from the same source (for example, the same model of digital camera, or if possible, the same digital camera; digital audio from the CD that MP3 files have been "ripped" from; etc.) as the set being inspected, are analyzed for various statistics. Some of these are as simple as spectrum analysis, but since most image and audio files these days are compressed with lossy compression algorithms, such as JPEG and MP3, they also attempt to look for inconsistencies in the way this data has been compressed. For example, a common artifact in JPEG compression is "edge ringing", where high-frequency components (such as the high-contrast edges of black text on a white background) distort neighboring pixels. This distortion is predictable, and simple steganographic encoding algorithms will produce artifacts that are detectably unlikely.
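One classic statistic of this kind is the "pairs of values" chi-square test attributed to Westfeld and Pfitzmann; the sketch below is a simplified, hedged version (the thresholding and windowing of the full attack are omitted), based on the observation that naive LSB embedding tends to equalize the counts of each value pair (2k, 2k+1):

```python
import numpy as np

def chi_square_pov(samples):
    """Simplified 'pairs of values' chi-square statistic over 8-bit samples.

    `samples` is expected to be an array of non-negative integers (e.g. np.uint8
    pixel values). A suspiciously small statistic suggests the pair counts have
    been equalized, which is characteristic of naive LSB embedding."""
    counts = np.bincount(samples.ravel(), minlength=256).astype(float)
    even, odd = counts[0::2], counts[1::2]
    expected = (even + odd) / 2.0
    mask = expected > 0
    return float(np.sum((even[mask] - expected[mask]) ** 2 / expected[mask]))
```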
One case where detection of suspect files is straightforward is when the original, unmodified carrier is available for comparison. Comparing the package against the original file will yield the differences caused by |
https://en.wikipedia.org/wiki/Computer%20forensics | Computer forensics (also known as computer forensic science) is a branch of digital forensic science pertaining to evidence found in computers and digital storage media. The goal of computer forensics is to examine digital media in a forensically sound manner with the aim of identifying, preserving, recovering, analyzing and presenting facts and opinions about the digital information.
Although it is most often associated with the investigation of a wide variety of computer crime, computer forensics may also be used in civil proceedings. The discipline involves similar techniques and principles to data recovery, but with additional guidelines and practices designed to create a legal audit trail.
Evidence from computer forensics investigations is usually subjected to the same guidelines and practices of other digital evidence. It has been used in a number of high-profile cases and is accepted as reliable within U.S. and European court systems.
Overview
In the early 1980s, personal computers became more accessible to consumers, leading to their increased use in criminal activity (for example, to help commit fraud). At the same time, several new "computer crimes" were recognized (such as cracking). The discipline of computer forensics emerged during this time as a method to recover and investigate digital evidence for use in court. Since then, computer crime and computer-related crime has grown, with the FBI reporting a suspected 791,790 internet crimes alone in 2020, a 69% increase over the amount reported in 2019. Today, computer forensics is used to investigate a wide variety of crime, including child pornography, fraud, espionage, cyberstalking, murder, and rape. The discipline also features in civil proceedings as a form of information gathering (for example, Electronic discovery)
Forensic techniques and expert knowledge are used to explain the current state of a digital artifact, such as a computer system, storage medium (e.g., hard disk or CD-ROM), or an el |
https://en.wikipedia.org/wiki/Windows%20XP%20Professional%20x64%20Edition | Microsoft Windows XP Professional x64 Edition, released on April 25, 2005, is an edition of Windows XP for x86-64 personal computers. It is designed to use the expanded 64-bit memory address space provided by the x86-64 architecture.
The primary benefit of moving to 64-bit is the increase in the maximum allocatable random-access memory (RAM). 32-bit editions of Windows XP are limited to a total of 4 gigabytes. Although the theoretical memory limit of a 64-bit computer is about 16 exabytes (17.1 billion gigabytes), Windows XP x64 is limited to 128GB of physical memory and 16 terabytes of virtual memory.
Windows XP Professional x64 Edition uses the same kernel and code tree as Windows Server 2003 and is serviced by the same service packs. However, it includes client features of Windows XP such as System Restore, Windows Messenger, Fast User Switching, Welcome Screen, Security Center and games, which Windows Server 2003 does not have.
Windows XP Professional x64 Edition is not to be confused with Windows XP 64-Bit Edition as the latter was designed for Itanium architecture. During the initial development phases, Windows XP Professional x64 Edition was named Windows XP 64-Bit Edition for 64-Bit Extended Systems.
Advantages
Supports up to 128GB of RAM.
Supports up to two physical CPUs (in separate physical sockets) and up to 64 logical processors (i.e. cores or threads on a single CPU).
Uses the Windows Server 2003 kernel, which is newer than that of 32-bit Windows XP and has improvements to enhance scalability. Windows XP Professional x64 Edition also introduces Kernel Patch Protection (also known as PatchGuard), which can improve security by helping to prevent rootkits.
Supports GPT-partitioned disks for data volumes (but not bootable volumes) after SP1, which allows disks greater than 2TB to be used as a single GPT partition for storing data.
Allows faster encoding of audio or video, higher performance video gaming and faster 3D rendering in software optimize |
https://en.wikipedia.org/wiki/Distributivity%20%28order%20theory%29 | In the mathematical area of order theory, there are various notions of the common concept of distributivity, applied to the formation of suprema and infima. Most of these apply to partially ordered sets that are at least lattices, but the concept can in fact reasonably be generalized to semilattices as well.
Distributive lattices
Probably the most common type of distributivity is the one defined for lattices, where the formation of binary suprema and infima provides the total operations of join (∨) and meet (∧). Distributivity of these two operations is then expressed by requiring that the identity

x ∧ (y ∨ z) = (x ∧ y) ∨ (x ∧ z)

hold for all elements x, y, and z. This distributivity law defines the class of distributive lattices. Note that this requirement can be rephrased by saying that binary meets preserve binary joins. The above statement is known to be equivalent to its order dual

x ∨ (y ∧ z) = (x ∨ y) ∧ (x ∨ z)

such that one of these properties suffices to define distributivity for lattices. Typical examples of distributive lattices are totally ordered sets, Boolean algebras, and Heyting algebras. Every finite distributive lattice is isomorphic to a lattice of sets, ordered by inclusion (Birkhoff's representation theorem).
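As a concrete example, the divisors of 30 ordered by divisibility form a distributive lattice with meet = gcd and join = lcm; the small Python sketch below (illustrative, not from the article) verifies the identity exhaustively:

```python
from itertools import product
from math import gcd

def lcm(a, b):
    return a * b // gcd(a, b)

# Divisors of 30 under divisibility: meet = gcd, join = lcm.
divisors = [d for d in range(1, 31) if 30 % d == 0]

# Check x ∧ (y ∨ z) = (x ∧ y) ∨ (x ∧ z) on every triple.
assert all(
    gcd(x, lcm(y, z)) == lcm(gcd(x, y), gcd(x, z))
    for x, y, z in product(divisors, repeat=3)
)
print("the divisor lattice of 30 is distributive")
```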
Distributivity for semilattices
A semilattice is a partially ordered set with only one of the two lattice operations, either a meet- or a join-semilattice. Given that there is only one binary operation, distributivity obviously cannot be defined in the standard way. Nevertheless, because of the interaction of the single operation with the given order, the following definition of distributivity remains possible. A meet-semilattice is distributive, if for all a, b, and x:
If a ∧ b ≤ x then there exist a′ and b′ such that a ≤ a′, b ≤ b′ and x = a′ ∧ b′.
Distributive join-semilattices are defined dually: a join-semilattice is distributive, if for all a, b, and x:
If x ≤ a ∨ b then there exist a′ and b′ such that a′ ≤ a, b′ ≤ b and x = a′ ∨ b′.
In either case, a′ and b′ need not be unique.
These definit |
https://en.wikipedia.org/wiki/Key%20server%20%28cryptographic%29 | In computer security, a key server is a computer that receives and then serves existing cryptographic keys to users or other programs. The users' programs can be running on the same network as the key server or on another networked computer.
The keys distributed by the key server are almost always provided as part of cryptographically protected public key certificates, containing not only the key but also 'entity' information about the owner of the key. The certificate is usually in a standard format, such as the OpenPGP public key format, the X.509 certificate format, or the PKCS format. Further, the key is almost always a public key for use with an asymmetric key encryption algorithm.
History
Key servers play an important role in public key cryptography. In public key cryptography an individual is able to generate a key pair, where one of the keys is kept private while the other is distributed publicly. Knowledge of the public key does not compromise the security of public key cryptography. An individual holding the public key of a key pair can use that key to carry out cryptographic operations that allow secret communications with strong authentication of the holder of the matching private key. The need to have the public key of a key pair in order to start communication or verify signatures is a bootstrapping problem. Locating keys on the web or writing to the individual asking them to transmit their public keys can be time-consuming and insecure. Key servers act as central repositories to alleviate the need to individually transmit public keys and can act as the root of a chain of trust.
The first web-based PGP keyserver was written for a thesis by Marc Horowitz, while he was studying at MIT. Horowitz's keyserver was called the HKP Keyserver after a web-based OpenPGP HTTP Keyserver Protocol (HKP), used to allow people to interact with the keyserver. Users were able to upload, download, and search keys either through HKP on TCP port 11371, or through |
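The protocol is plain HTTP underneath, so a lookup can be sketched in a few lines of Python; the /pks/lookup endpoint, the op and search parameters, and port 11371 are the HKP conventions named above, while the host name below is a placeholder:

```python
import urllib.parse
import urllib.request

def fetch_key(search_term, host="keyserver.example.org", port=11371):
    """Fetch an ASCII-armored OpenPGP key block via HKP.

    The host is a placeholder; point it at a real keyserver.
    """
    query = urllib.parse.urlencode({"op": "get", "search": search_term})
    url = f"http://{host}:{port}/pks/lookup?{query}"
    with urllib.request.urlopen(url) as resp:
        return resp.read().decode()

# fetch_key("alice@example.org")
```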
https://en.wikipedia.org/wiki/Quanta%20Computer | Quanta Computer Incorporated () () is a Taiwan-based manufacturer of notebook computers and other electronic hardware. Its customers include Apple Inc., Dell, Hewlett-Packard Inc., Acer Inc., Alienware, Amazon.com, Cisco, Fujitsu, Gericom, Lenovo, LG, Maxdata, Microsoft, MPC, BlackBerry Ltd, Sharp Corporation, Siemens AG, Sony, Sun Microsystems, Toshiba, Valve, Verizon Wireless, and Vizio.
Quanta has extended its businesses into enterprise network systems, home entertainment, mobile communication, automotive electronics, and digital home markets. The company also designs, manufactures and markets GPS systems, including handheld GPS, in-car GPS, Bluetooth GPS and GPS with other positioning technologies.
Quanta Computer was announced as the original design manufacturer (ODM) for the XO-1 by the One Laptop per Child project on December 13, 2005, and took an order for one million laptops as of February 16, 2007. In October 2008, it was announced that Acer would phase out Quanta from the production chain, and instead outsource manufacturing of 15 million Aspire One netbooks to Compal Electronics.
In 2011, Quanta designed servers in conjunction with Facebook as part of the Open Compute Project.
It was estimated that Quanta had a 31% worldwide market share of notebook computers in the first quarter of 2008.
History
The firm was founded in 1988 by Barry Lam, a Shanghai-born businessman who grew up in Hong Kong and received his education in Taiwan, with a starting capital of less than $900,000. A first notebook prototype was completed in November 1988, with factory production beginning in 1990.
Throughout the 1990s, Quanta established contracts with Apple Computers and Gateway, among others, opening an after-sales office in California in 1991 and another in Augsburg, Germany in 1994. In 1996, Quanta signed a contract with Dell, which became Quanta's largest customer at the time.
In 2014, Quanta ranked 409th on Fortune's Global 500 list. 2016 is the strongest p |
https://en.wikipedia.org/wiki/Molniya%20orbit | A Molniya orbit (, "Lightning") is a type of satellite orbit designed to provide communications and remote sensing coverage over high latitudes. It is a highly elliptical orbit with an inclination of 63.4 degrees, an argument of perigee of 270 degrees, and an orbital period of approximately half a sidereal day. The name comes from the Molniya satellites, a series of Soviet/Russian civilian and military communications satellites which have used this type of orbit since the mid-1960s.
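As a rough consistency check on the stated period, Kepler's third law gives the semi-major axis implied by a half-sidereal-day orbit; the short Python sketch below uses Earth's standard gravitational parameter (the constant and the arithmetic are standard, the script itself is illustrative):

```python
from math import pi

MU_EARTH = 3.986004418e14       # m^3/s^2, Earth's gravitational parameter
T = 0.5 * 86164.0905            # half a sidereal day, in seconds

# Kepler's third law: T = 2*pi*sqrt(a^3/mu)  =>  a = (mu*(T/2pi)^2)^(1/3)
a = (MU_EARTH * (T / (2 * pi)) ** 2) ** (1 / 3)
print(f"semi-major axis ≈ {a / 1000:.0f} km")   # ≈ 26,560 km
```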
The Molniya orbit has a long dwell time over the hemisphere of interest, while moving very quickly over the other. In practice, this places it over either Russia or Canada for the majority of its orbit, providing a high angle of view to communications and monitoring satellites covering these high-latitude areas. Geostationary orbits, which necessarily lie over the equator, can only view these regions from a low angle, hampering performance. In practice, a satellite in a Molniya orbit serves the same purpose for high latitudes as a geostationary satellite does for equatorial regions, except that multiple satellites are required for continuous coverage.
Satellites placed in Molniya orbits have been used for television broadcasting, telecommunications, military communications, relaying, weather monitoring, early warning systems and some classified purposes.
History
The Molniya orbit was discovered by Soviet scientists in the 1960s as a high-latitude communications alternative to geostationary orbits, which require large launch energies to achieve a high perigee and to change inclination to orbit over the equator (especially when launched from Russian latitudes). As a result, OKB-1 sought a less energy-demanding orbit. Studies found that this could be achieved using a highly elliptical orbit with an apogee over Russian territory. The orbit's name refers to the "lightning" speed with which the satellite passes through the perigee.
The first use of the Molniya orbit was by the co |
https://en.wikipedia.org/wiki/PKCS | In cryptography, PKCS (Public Key Cryptography Standards) are a group of public key cryptography standards devised and published by RSA Security LLC, starting in the early 1990s. The company published the standards to promote the use of the cryptography techniques to which they had patents, such as the RSA algorithm, the Schnorr signature algorithm and several others. Though not industry standards (because the company retained control over them), some of the standards have begun to move into the "standards track" processes of relevant standards organizations in recent years, such as the IETF and the PKIX working group.
See also
Cryptographic Message Syntax
References
General
External links
About PKCS (appendix G from RFC 3447)
OASIS PKCS 11 TC (technical committee home page)
Cryptography standards
Public-key cryptography
Standards of the United States |
https://en.wikipedia.org/wiki/Rush%20Hour%20%28puzzle%29 | Rush Hour is a sliding block puzzle invented by Nob Yoshigahara in the 1970s. It was first sold in the United States in 1996. It is now being manufactured by ThinkFun (formerly Binary Arts).
ThinkFun now sells Rush Hour spin-offs Rush Hour Jr., Safari Rush Hour, Railroad Rush Hour, Rush Hour Brain Fitness and Rush Hour Shift, with puzzles by Scott Kim. The game sold more than 1 million units.
Game
The board is a 6×6 grid with grooves in the tiles to allow cars to slide, a card tray to hold the cards, a holder for the current active card, and an exit hole. The game comes with 16 vehicles (12 cars, 4 trucks), each colored differently, and 40 puzzle cards. Cars and trucks are both one square wide, but cars are two squares long and trucks are three squares long. Vehicles can only be moved along a straight line on the grid; rotation is forbidden. Puzzle cards, each with a level number that indicates the difficulty of the challenge, show the starting positions of the cars and trucks. Not all cars and trucks are used in all challenges.
Objective
The goal of the game is to get the red car out through the exit of the board by moving the other vehicles out of its way. The cars and trucks, set up before play according to a puzzle card, obstruct the red car's path and make the puzzle challenging.
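Because vehicles slide only along their own row or column, the puzzle is a natural fit for breadth-first search over board states. Below is a minimal Python sketch; the state encoding (vehicle id mapped to row, column, length, horizontal flag) and the goal test are assumptions made for this illustration, and real puzzle cards would need translating into it:

```python
from collections import deque

def solve(start):
    """Return the minimum number of moves to free the red car 'X'."""
    def cells_of(state):
        cells = set()
        for r, c, ln, hor in state.values():
            for i in range(ln):
                cells.add((r, c + i) if hor else (r + i, c))
        return cells

    def moves(state):
        occupied = cells_of(state)
        for vid, (r, c, ln, hor) in state.items():
            for d in (-1, 1):
                if hor:
                    new_cell = (r, c - 1) if d < 0 else (r, c + ln)
                    nr, nc = r, c + d
                else:
                    new_cell = (r - 1, c) if d < 0 else (r + ln, c)
                    nr, nc = r + d, c
                if all(0 <= x < 6 for x in new_cell) and new_cell not in occupied:
                    nxt = dict(state)
                    nxt[vid] = (nr, nc, ln, hor)
                    yield nxt

    frontier = deque([(start, 0)])
    seen = {tuple(sorted(start.items()))}
    while frontier:
        state, depth = frontier.popleft()
        r, c, ln, hor = state["X"]
        if hor and c + ln == 6:         # red car touches the exit edge
            return depth
        for nxt in moves(state):
            key = tuple(sorted(nxt.items()))
            if key not in seen:
                seen.add(key)
                frontier.append((nxt, depth + 1))
    return None                          # unsolvable position

# Example: red car on row 2, one vertical truck in the way.
# print(solve({"X": (2, 0, 2, True), "A": (0, 4, 3, False)}))  # 7 moves
```

Breadth-first search returns a minimum move count because every slide costs the same.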
Editions
The Regular Edition comes with forty puzzles split into four different difficulties, ranging from Beginner to Expert. The Deluxe Edition has a black playing board, card box in place of the Regular Edition's card tray, and sixty new puzzles with an extra difficulty: the Grand Master. The Ultimate Collector's Edition has a playing board that can hold vehicles not in play and can display the active card in a billboard-like display. The Ultimate Collectors Edition also includes 155 new puzzles (with some of them being from card set three) and a white limo. In 2011, the board was changed to black, like the Deluxe Edition.
An iOS version of the game was released in 2010.
Expans |
https://en.wikipedia.org/wiki/Flicker%20%28screen%29 | Flicker is a visible change in brightness between cycles displayed on video displays. It applies to the refresh interval on cathode ray tube (CRT) televisions and computer monitors, as well as plasma computer displays and televisions.
Occurrence
Flicker occurs on CRTs when they are driven at a low refresh rate, allowing the brightness to drop for time intervals sufficiently long to be noticed by a human eye – see persistence of vision and flicker fusion threshold. For most devices, the screen's phosphors quickly lose their excitation between sweeps of the electron gun, and the afterglow is unable to fill such gaps – see phosphor persistence. A refresh rate of 60 Hz on most screens will produce a visible "flickering" effect. Most people find that refresh rates of 70–90 Hz and above enable flicker-free viewing on CRTs. Use of refresh rates above 120 Hz is uncommon, as they provide little noticeable flicker reduction and limit available resolution.
Flatscreen plasma displays have a similar effect. The plasma pixels fade in brightness between refreshes.
In LCD screens, the LCD itself does not flicker; it preserves its opacity unchanged until updated for the next frame. However, in order to prevent accumulated damage, LCDs quickly alternate the voltage between positive and negative for each pixel, which is called 'polarity inversion'. Ideally, this would not be noticeable because every pixel has the same brightness whether a positive or a negative voltage is applied. In practice, there is a small difference, which means that every pixel flickers at about 30 Hz. Screens that use opposite polarity per line or per pixel can reduce this effect compared to driving the entire screen at the same polarity; the inversion scheme used by a screen can sometimes be detected with test patterns designed to maximize the effect.
More of a concern is the LCD backlight. Earlier LCDs used fluorescent lamps which flickered at 100–120 Hz; newer fluorescently backlit LCDs use an electronic ballast that flickers a |
https://en.wikipedia.org/wiki/Computability%20logic | Computability logic (CoL) is a research program and mathematical framework for redeveloping logic as a systematic formal theory of computability, as opposed to classical logic which is a formal theory of truth. It was introduced and so named by Giorgi Japaridze in 2003.
In classical logic, formulas represent true/false statements. In CoL, formulas represent computational problems. In classical logic, the validity of a formula depends only on its form, not on its meaning. In CoL, validity means being always computable. More generally, classical logic tells us when the truth of a given statement always follows from the truth of a given set of other statements. Similarly, CoL tells us when the computability of a given problem A always follows from the computability of other given problems B1,...,Bn. Moreover, it provides a uniform way to actually construct a solution (algorithm) for such an A from any known solutions of B1,...,Bn.
CoL formulates computational problems in their most general – interactive sense. CoL defines a computational problem as a game played by a machine against its environment. Such a problem is computable if there is a machine that wins the game against every possible behavior of the environment. Such a game-playing machine generalizes the Church-Turing thesis to the interactive level. The classical concept of truth turns out to be a special, zero-interactivity-degree case of computability. This makes classical logic a special fragment of CoL. Thus CoL is a conservative extension of classical logic. Computability logic is more expressive, constructive and computationally meaningful than classical logic. Besides classical logic, independence-friendly (IF) logic and certain proper extensions of linear logic and intuitionistic logic also turn out to be natural fragments of CoL. Hence meaningful concepts of "intuitionistic truth", "linear-logic truth" and "IF-logic truth" can be derived from the semantics of CoL.
CoL systematically answers the |
https://en.wikipedia.org/wiki/Controlled%20burn | A controlled or prescribed (Rx) burn, which can include hazard reduction burning, backfire, swailing or a burn-off, is a fire set intentionally for purposes of forest management, fire suppression, farming, prairie restoration or greenhouse gas abatement. A controlled burn may also refer to the intentional burning of slash and fuels through burn piles. Fire is a natural part of both forest and grassland ecology and controlled fire can be a tool for foresters.
Hazard reduction or controlled burning is conducted during the cooler months to reduce fuel buildup and decrease the likelihood of serious hotter fires. Controlled burning stimulates the germination of some desirable forest trees, and reveals soil mineral layers which increases seedling vitality, thus renewing the forest. Some cones, such as those of lodgepole pine, sequoia and many chaparral shrubs are pyriscent, meaning heat from fire opens cones to disperse seeds.
In industrialized countries, controlled burning is usually overseen by fire control authorities for regulations and permits.
History
There are two basic causes of wildfires. One is natural, mainly through lightning, and the other is human activity. Controlled burns have a long history in wildland management. Pre-agricultural societies used fire to regulate both plant and animal life. Fire history studies have documented periodic wildland fires ignited by indigenous peoples in North America and Australia. Native Americans frequently used fire to manage natural environments in a way that benefited humans and wildlife, starting low-intensity fires that released nutrients for plants, reduced competition, and consumed excess flammable material that otherwise would eventually fuel high-intensity, catastrophic fires.
Fires, both naturally caused and prescribed, were once part of natural landscapes in many areas. In the US, these practices ended in the early 20th century, when federal fire policies were enacted with the goal of suppressing all fires. S |
https://en.wikipedia.org/wiki/Steamroller | A steamroller (or steam roller) is a form of road roller – a type of heavy construction machinery used for leveling surfaces, such as roads or airfields – that is powered by a steam engine. The leveling/flattening action is achieved through a combination of the size and weight of the vehicle and the rolls: the smooth wheels and the large cylinder or drum fitted in place of treaded road wheels.
The majority of steam rollers are outwardly similar to traction engines as many traction engine manufacturers later produced rollers based on their existing designs, and the patents owned by certain roller manufacturers tended to influence the general arrangements used by others. The key difference between the two vehicles is that on a roller the main roll replaces the front wheels and axle that would be fitted to a traction engine, and the driving wheels are smooth-tired.
The word steamroller frequently refers to road rollers in general, regardless of the method of propulsion.
History
Before about 1850, the word steamroller meant a fixed machine for rolling and curving steel plates for boilers and ships.
From then on, it also meant a mobile device for flattening ground.
An early steamroller was patented by Louis Lemoine in France in 1859 and demonstrated sometime before February 1861. In Britain, a 30-ton steamroller was designed in 1863 by William Clark and partner W.F. Batho. Having failed to impress the British municipal road authorities it was transferred to Kolkata where it continued to work.
The company Aveling and Porter was the first to successfully sell the product commercially and subsequently became the largest manufacturer in Britain. In 1866 they produced a prototype with 3-foot-wide rolls fitted to the rear of a standard 12 nominal horsepower traction engine. This experimental machine was described by local papers as 'the world's first steamroller' and caused a public spectacle.
In 1867, the steam road roller was patented and the company began |
https://en.wikipedia.org/wiki/Autoradiograph | An autoradiograph is an image on an X-ray film or nuclear emulsion produced by the pattern of decay emissions (e.g., beta particles or gamma rays) from a distribution of a radioactive substance. Alternatively, the autoradiograph is also available as a digital image (digital autoradiography), due to the recent development of scintillation gas detectors or rare-earth phosphorimaging systems. The film or emulsion is apposed to the labeled tissue section to obtain the autoradiograph (also called an autoradiogram). The auto- prefix indicates that the radioactive substance is within the sample, as distinguished from the case of historadiography or microradiography, in which the sample is marked using an external source. Some autoradiographs can be examined microscopically for localization of silver grains (such as on the interiors or exteriors of cells or organelles) in which the process is termed micro-autoradiography. For example, micro-autoradiography was used to examine whether atrazine was being metabolized by the hornwort plant or by epiphytic microorganisms in the biofilm layer surrounding the plant.
Applications
In biology, this technique may be used to determine the tissue (or cell) localization of a radioactive substance, either introduced into a metabolic pathway, bound to a receptor or enzyme, or hybridized to a nucleic acid. Applications for autoradiography are broad, ranging from biomedical to environmental sciences to industry.
Receptor autoradiography
The use of radiolabeled ligands to determine the tissue distributions of receptors is termed either in vivo or in vitro receptor autoradiography if the ligand is administered into the circulation (with subsequent tissue removal and sectioning) or applied to the tissue sections, respectively. Once the receptor density is known, in vitro autoradiography can also be used to determine the anatomical distribution and affinity of a radiolabeled drug towards the receptor. For in vitro autoradiography, radioligan |
https://en.wikipedia.org/wiki/Class%20II%20gene | A class II gene is a type of gene that codes for a protein. Class II genes are transcribed by RNA polymerase II (RNAP II).
Class II genes have a promoter that may contain a TATA box.
Basal transcription of class II genes requires the formation of a preinitiation complex.
They are transcribed by RNA polymerase II, contain both introns and exons, and code for polypeptides.
Major histocompatibility complex (MHC) class II genes are important in the immune response.
Major histocompatibility complex (MHC) II is found on antigen-presenting cells (APCs) and functions to present exogenous proteins to CD4+ T cells. MHC II thus plays an important role in activating the immune system in response to extracellular pathogens via activation of CD4+ T cells. MHC class II molecules are differentially expressed across multiple cell-types. For example, MHC II molecules are constitutively expressed in thymic epithelial cells and antigen-presenting cells (APC's), whereas they undergo interferon-γ-mediated expression in other cell types. Central to the regulation of the complex gene-expression profile exhibited by MHC class II molecules is a single master regulatory factor known as the class II transactivator (CIITA). CIITA is a non-DNA-binding co-activator whose expression is tightly controlled by a regulatory region containing three independent promoters (pI, pIII and pIV).
References
Genes
Molecular biology |
https://en.wikipedia.org/wiki/Game%20semantics | Game semantics (, translated as dialogical logic) is an approach to formal semantics that grounds the concepts of truth or validity on game-theoretic concepts, such as the existence of a winning strategy for a player, somewhat resembling Socratic dialogues or medieval theory of Obligationes.
History
In the late 1950s Paul Lorenzen was the first to introduce a game semantics for logic, and it was further developed by Kuno Lorenz. At almost the same time as Lorenzen, Jaakko Hintikka developed a model-theoretical approach known in the literature as GTS (game-theoretical semantics). Since then, a number of different game semantics have been studied in logic.
Shahid Rahman (Lille) and collaborators developed dialogical logic into a general framework for the study of logical and philosophical issues related to logical pluralism. Beginning in 1994, this triggered a kind of renaissance with lasting consequences. This new philosophical impulse experienced a parallel renewal in the fields of theoretical computer science, computational linguistics, artificial intelligence, and the formal semantics of programming languages, for instance the work of Johan van Benthem and collaborators in Amsterdam who looked thoroughly at the interface between logic and games, and Hanno Nickau who addressed the full abstraction problem in programming languages by means of games. New results in linear logic by Jean-Yves Girard in the interfaces between mathematical game theory and logic on one hand and argumentation theory and logic on the other hand resulted in the work of many others, including S. Abramsky, J. van Benthem, A. Blass, D. Gabbay, M. Hyland, W. Hodges, R. Jagadeesan, G. Japaridze, E. Krabbe, L. Ong, H. Prakken, G. Sandu, D. Walton, and J. Woods, who placed game semantics at the center of a new concept in logic in which logic is understood as a dynamic instrument of inference. There has also been an alternative perspective on proof theory and meaning theory, advocating that Wittgenste |
https://en.wikipedia.org/wiki/Piphilology | Piphilology comprises the creation and use of mnemonic techniques to remember many digits of the mathematical constant π. The word is a play on the word "pi" itself and on the linguistic field of philology.
There are many ways to memorize π, including the use of piems (a portmanteau, formed by combining pi and poem), which are poems that represent π in a way such that the length of each word (in letters) represents a digit. Here is an example of a piem: "Now I need a drink, alcoholic of course, after the heavy lectures involving quantum mechanics." Notice how the first word has three letters, the second word has one, the third has four, the fourth has one, the fifth has five, and so on. In longer examples, 10-letter words are used to represent the digit zero, and this rule is extended to handle repeated digits in so-called Pilish writing. The short story "Cadaeic Cadenza" records the first 3,834 digits of π in this manner, and a 10,000-word novel, Not A Wake, has been written accordingly.
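The word-length rule is mechanical enough to check by program. A small Python sketch (the function is illustrative; the 10-letters-means-zero handling follows the basic Pilish rule described above):

```python
def piem_digits(piem):
    """Decode a piem: each word's letter count is a digit,
    with 10-letter words standing for the digit zero."""
    digits = ""
    for word in piem.replace(",", " ").replace(".", " ").split():
        n = sum(1 for ch in word if ch.isalpha())
        digits += str(n % 10)       # 10 letters -> 0
    return digits

piem = ("Now I need a drink, alcoholic of course, "
        "after the heavy lectures involving quantum mechanics")
print(piem_digits(piem))            # 314159265358979
```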
However, poems prove to be inefficient for large memorizations of π. Other methods include remembering patterns in the numbers (for instance, the year 1971 appears in the first fifty digits of π) and the method of loci (which has been used to memorize π to 67,890 digits).
History
Until the 20th century, the number of digits of pi which mathematicians had the stamina to calculate by hand remained in the hundreds, so that memorization of all known digits at the time was possible. In 1949 a computer was used to calculate π to 2,000 places, presenting one of the earliest opportunities for a more difficult challenge.
Later computers calculated pi to extraordinary numbers of digits (2.7 trillion as of August 2010), and people began memorizing more and more of the output. The world record for the number of digits memorized has exploded since the mid-1990s, and it stood at 100,000 as of October 2006. The previous record (83,431) was set by the same person (Akira Haraguchi) on July 2, 2005 |
https://en.wikipedia.org/wiki/Armstrong%20oscillator | The Armstrong oscillator (also known as the Meissner oscillator) is an electronic oscillator circuit which uses an inductor and capacitor to generate an oscillation. It is the earliest oscillator circuit, invented by US engineer Edwin Armstrong in 1912 and independently by Austrian engineer Alexander Meissner in 1913, and was used in the first vacuum tube radio transmitters. It is sometimes called a tickler oscillator because its distinguishing feature is that the feedback signal needed to produce oscillations is magnetically coupled into the tank inductor in the input circuit by a "tickler coil" (L2, right) in the output circuit. Assuming the coupling is weak but sufficient to sustain oscillation, the oscillation frequency f is determined primarily by the LC circuit (tank circuit L1 and C in the figure on the right) and is approximately given by

f ≈ 1 / (2π√(L1·C))
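Plugging illustrative component values into that relation (the values below are arbitrary, not from the article):

```python
from math import pi, sqrt

# Resonant frequency of the tank circuit: f = 1 / (2*pi*sqrt(L1*C)).
L1 = 100e-6    # 100 µH tank inductor (illustrative value)
C = 250e-12    # 250 pF tank capacitor (illustrative value)
f = 1 / (2 * pi * sqrt(L1 * C))
print(f"oscillation frequency ≈ {f / 1e6:.2f} MHz")   # ≈ 1.01 MHz
```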
This circuit was widely used in the regenerative radio receiver, popular until the 1940s. In that application, the input radio frequency signal from the antenna is magnetically coupled into the LC circuit by an additional winding, and the feedback is reduced with adjustable gain control in the feedback loop, so the circuit is just short of oscillation. The result is a narrow-band radio-frequency filter and amplifier. The non-linear characteristic of the transistor or tube also demodulated the RF signal to produce the audio signal.
The circuit diagram shown is a modern implementation, using a field-effect transistor as the amplifying element. Armstrong's original design used a triode vacuum tube.
In the Meissner variant, the LC resonant circuit is exchanged with the feedback coil, i.e., in the output path (vacuum tube plate, field-effect transistor drain, or bipolar transistor collector) of the amplifier (e.g., Grebennikov, Fig. 2.8). Many publications, however, embrace both variants with either name. English speakers call it the "Armstrong oscillator", whereas German speakers call it the "Meißner oscillator".
See als |
https://en.wikipedia.org/wiki/Retrotransposon | Retrotransposons (also called Class I transposable elements or transposons via RNA intermediates) are a type of genetic component that copy and paste themselves into different genomic locations (transposon) by converting RNA back into DNA through the reverse transcription process using an RNA transposition intermediate.
Through reverse transcription, retrotransposons amplify themselves quickly to become abundant in eukaryotic genomes such as maize (49–78%) and humans (42%). They are only present in eukaryotes but share features with retroviruses such as HIV, for example, discontinuous reverse transcriptase-mediated extrachromosomal recombination.
These retrotransposons are regulated by a family of short non-coding RNAs termed PIWI [P-element induced wimpy testis]-interacting RNAs (piRNAs). piRNAs are a recently discovered class of ncRNAs in the length range of roughly 24-32 nucleotides. Initially, piRNAs were described as repeat-associated siRNAs (rasiRNAs) because they originate from repetitive elements such as the transposable sequences of the genome; it was later identified that they act via PIWI proteins. In addition to their role in the suppression of genomic transposons, various other roles of piRNAs have recently been reported, such as regulation of the 3′ UTRs of protein-coding genes via RNAi, transgenerational epigenetic inheritance that conveys a memory of past transposon activity, and RNA-induced epigenetic silencing.
There are two main types of retrotransposons, long terminal repeats (LTRs) and non-long terminal repeats (non-LTRs). Retrotransposons are classified based on sequence and method of transposition. Most retrotransposons in the maize genome are LTR, whereas in humans they are mostly non-LTR. Retrotransposons (mostly of the LTR type) can be passed onto the next generation of a host species through the germline.
The other type of transposon is the DNA transposon. DNA transposons encode a transposase which, when translated, catalyses the exc |
https://en.wikipedia.org/wiki/GFA%20BASIC | GFA BASIC is a dialect of the BASIC programming language, by Frank Ostrowski. The name is derived from the company ("GFA Systemtechnik GmbH"), which distributed the software. In the mid-1980s to the 1990s it enjoyed popularity as an advanced BASIC dialect, but has been mostly superseded by several other programming languages. Official support ended in the early 2000s.
History
GFA BASIC was developed by Frank Ostrowski at "GFA Systemtechnik GmbH" (later "GFA Software"), a German company in Kiel and Düsseldorf, as a proprietary version of his free BASIC implementation, Turbo-Basic XL. GFA is an acronym for "Gesellschaft für Automatisierung" ("Company for Automation"), which gave the software its name. The first GFA BASIC version was released in 1986. In the mid and late 1980s it became very popular for the Atari ST home computer range, since the Atari ST BASIC shipped with those machines was more primitive. Later, ports for the Commodore Amiga, DOS and Windows were marketed. Version 2.0 was the most popular release of GFA BASIC, as it offered many more advanced features than the alternatives of the time. GFA BASIC 3.0 included further improvements, such as support for user-defined structures and other aggregate data types. The final released version was 3.6. Around 2002, GFA Software ceased all GFA BASIC activities, and shut down the mailing list and website in 2005. With official support and distribution gone, the user community took over support and established its own communication infrastructure.
Features and functionality
As of version 2.0, the most popular release, GFA BASIC was a very modern programming language for its time. Line numbers were not used and one line was equivalent to one command. To greatly simplify maintenance of long listings, the IDE later even allowed for code folding. It had a reasonable range of structured programming commands — procedures with local variables and parameter passing by value or reference, loop constructs, etc. Modular |
https://en.wikipedia.org/wiki/Issai%20Schur | Issai Schur (10 January 1875 – 10 January 1941) was a Russian mathematician who worked in Germany for most of his life. He studied at the University of Berlin. He obtained his doctorate in 1901, became lecturer in 1903 and, after a stay at the University of Bonn, professor in 1919.
As a student of Ferdinand Georg Frobenius, he worked on group representations (the subject with which he is most closely associated), but also in combinatorics and number theory and even theoretical physics. He is perhaps best known today for his result on the existence of the Schur decomposition and for his work on group representations (Schur's lemma).
Schur published under the name of both I. Schur, and J. Schur, the latter especially in Journal für die reine und angewandte Mathematik. This has led to some confusion.
Childhood
Issai Schur was born into a Jewish family, the son of the businessman Moses Schur and his wife Golde Schur (née Landau). He was born in Mogilev on the Dnieper River in what was then the Russian Empire. Schur used the name Schaia (Isaiah, as in the epitaph on his grave) rather than Issai until his mid-twenties. Schur's father may have been a wholesale merchant.
In 1888, at the age of 13, Schur went to Liepāja (Courland, now in Latvia), where his married sister and his brother lived, 640 km north-west of Mogilev. Courland was one of the three Baltic governorates of Tsarist Russia, and since the Middle Ages the Baltic Germans had formed the upper social class there. The local Jewish community spoke mostly German, not Yiddish.
Schur attended the German-speaking Nicolai Gymnasium in Libau from 1888 to 1894 and reached the top grade in his final examination, and received a gold medal. Here he became fluent in German.
Education
In October 1894, Schur attended the University of Berlin, with concentration in mathematics and physics. In 1901, he graduated summa cum laude under Frobenius and Lazarus Immanuel Fuchs with his dissertation On a class of matrices that can be assigne |
https://en.wikipedia.org/wiki/Shell%20account | A shell account is a user account on a remote server, traditionally running under the Unix operating system, which gives access to a shell via a command-line interface protocol such as telnet, SSH, or over a modem using a terminal emulator.
Shell accounts were first made accessible to interested members of the public by Internet service providers (such as Netcom (USA), Panix, The World and Digex), although in rare instances individuals had access to shell accounts through their employer or university. They were used for file storage, web space, email accounts, newsgroup access and software development. Before the late 1990s, shell accounts were often much less expensive than full net access through SLIP or PPP, which was required to access the then-new World Wide Web. Most personal computer operating systems also lacked TCP/IP stacks by default before the mid-1990s. Products such as The Internet Adapter were devised that could work as a proxy server, allowing users to run a web browser for the price of a shell account.
Shell providers often offer shell accounts at low cost or for free. These shell accounts generally provide users with access to various software and services, including compilers, IRC clients, background processes, FTP, text editors (such as nano) and email clients (such as pine). Some shell providers may also allow tunneling of traffic to bypass corporate firewalls.
See also
Bulletin board system
FreeBSD jail
Free-net
SDF Public Access Unix System, one of the oldest and largest non-profit public access UNIX systems on the Internet.
Slirp, a free software application similar to The Internet Adapter
SSH tunneling
The Big Electric Cat was a public access computer system in New York City in the late 1980s, known on Usenet as node dasys1.
The Internet Adapter, a graphical application front end for internet access using shell accounts allowing TCP/IP-based applications such as Netscape to run over the shell account.
The WELL, best known |
https://en.wikipedia.org/wiki/Sequential%20consistency | Sequential consistency is a consistency model used in the domain of concurrent computing (e.g. in distributed shared memory, distributed transactions, etc.).
It is the property that "... the result of any execution is the same as if the operations of all the processors were executed in some sequential order, and the operations of each individual processor appear in this sequence in the order specified by its program."
That is, the execution order of a program on the same processor (or thread) is the same as the program order, while the execution order of a program across different processors (or threads) is undefined. For example, suppose processor P1 issues the operations A1, B1, C1 in program order, while processor P2 issues A2, B2:
the execution order between A1, B1 and C1 is preserved, that is, A1 runs before B1, and B1 before C1. The same holds for A2 and B2. But, as the execution order between processors is undefined, B2 might run before or after C1 (B2 might physically run before C1, but the effect of B2 might be seen after that of C1, which is the same as B2 running after C1).
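A small illustrative Python script (not from the source) can enumerate exactly the executions allowed by sequential consistency for that example: all global orders that interleave the two program orders without reordering either one.

```python
from itertools import permutations

P1, P2 = ["A1", "B1", "C1"], ["A2", "B2"]

def program_order_kept(order, program):
    """True if the program's operations appear in order within 'order'."""
    positions = [order.index(op) for op in program]
    return positions == sorted(positions)

valid = [o for o in permutations(P1 + P2)
         if program_order_kept(o, P1) and program_order_kept(o, P2)]
print(len(valid), "sequentially consistent interleavings")   # C(5,2) = 10
```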
Conceptually, there is a single global memory and a "switch" that connects an arbitrary processor to memory at any time step. Each processor issues memory operations in program order and the switch provides the global serialization among all memory operations.
Sequential consistency is weaker than strict consistency, which requires a read from a location to return the value of the last write to that location; strict consistency demands that operations be seen in the order in which they were actually issued.
See also
Concurrent data structure
Linearizability
Serializability
References
Consistency models |
https://en.wikipedia.org/wiki/Release%20consistency | Release consistency is one of the synchronization-based consistency models used in concurrent programming (e.g. in distributed shared memory, distributed transactions etc.).
Introduction
In modern parallel computing systems, memory consistency must be maintained to avoid undesirable outcomes. Strict consistency models like sequential consistency are intuitive to program against, but can be quite restrictive in terms of performance, as they would disable the instruction-level parallelism that is widely applied in sequential programming. To achieve better performance, relaxed models have been explored, and release consistency is one aggressive relaxation.
Release consistency vs. sequential consistency
Hardware structure and program-level effort
Sequential consistency can be achieved simply by hardware implementation, while release consistency also relies on the observation that most parallel programs are properly synchronized. At the programming level, synchronization is applied to explicitly schedule a certain memory access in one thread to occur after another. When a synchronized variable is accessed, the hardware makes sure that all writes local to a processor have been propagated to all other processors and that all writes from other processors are seen and gathered. In the release consistency model, the actions of entering and leaving a critical section are classified as acquire and release, and in either case explicit code must appear in the program showing when these operations occur.
Conditions for sequential consistent result
In general, a distributed shared memory is release consistent if it obeys the following rules:
1. Before an access to a shared variable is performed, all previous acquires by this processor must have completed.
2. Before a release is performed, all previous reads and writes by this processor must have completed.
3. The acquire and release accesses must be processor consistent.
If the conditions above are met and the program is properly synchro |
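To see where the acquire and release operations sit in a program, here is a minimal Python sketch; Python's locks are far stronger than release consistency requires (and the runtime hides weak-memory effects), so this only illustrates the bracketing discipline, not a real relaxed-memory implementation:

```python
import threading

lock = threading.Lock()
shared = {"data": None, "ready": False}

def writer():
    with lock:                    # acquire
        shared["data"] = 42
        shared["ready"] = True
    # release happens when the block exits: all writes above
    # must be visible before the release completes (rule 2)

def reader():
    with lock:                    # acquire: a prior release is visible
        if shared["ready"]:
            print("read", shared["data"])

t1 = threading.Thread(target=writer)
t2 = threading.Thread(target=reader)
t1.start(); t1.join()
t2.start(); t2.join()
```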
https://en.wikipedia.org/wiki/XNU | XNU (X is Not Unix) is the computer operating system (OS) kernel developed at Apple Inc. since December 1996 for use in the Mac OS X (now macOS) operating system and released as free and open-source software as part of the Darwin OS, which in addition to macOS is also the basis for the Apple TV Software, iOS, iPadOS, watchOS, visionOS, and tvOS OSes. XNU is an abbreviation of X is Not Unix.
Originally developed by NeXT for the NeXTSTEP operating system, XNU was a hybrid kernel derived from version 2.5 of the Mach kernel developed at Carnegie Mellon University, which incorporated the bulk of the 4.3BSD kernel modified to run atop Mach primitives, along with an application programming interface (API) in Objective-C for writing drivers named Driver Kit.
After Apple acquired NeXT, the kernel was updated with code derived from OSFMK 7.3 from OSF, and the FreeBSD project, and the Driver Kit was replaced with new API on a restricted subset of C++ (based on Embedded C++) named I/O Kit.
Kernel design
XNU is a hybrid kernel, containing features of both monolithic kernels and microkernels, attempting to make the best use of both technologies, such as the message passing ability of microkernels enabling greater modularity and larger portions of the OS to benefit from memory protection, and retaining the speed of monolithic kernels for some critical tasks.
XNU runs on ARM64 and x86-64 processors, in both uniprocessor and symmetric multiprocessing (SMP) configurations. PowerPC support was removed as of the version in Mac OS X Snow Leopard. Support for IA-32 was removed as of the version in Mac OS X Lion; support for 32-bit ARM was removed in a later version.
Mach
The basis of the XNU kernel is a heavily modified (hybrid) Open Software Foundation Mach kernel (OSFMK) 7.3. OSFMK 7.3 is a microkernel that includes applicable code from the University of Utah Mach 4 kernel and from the many Mach 3.0 variants forked from the original Carnegie Mellon University Mach 3.0 microkernel.
O |
https://en.wikipedia.org/wiki/Semicircle | In mathematics (and more specifically geometry), a semicircle is a one-dimensional locus of points that forms half of a circle. It is a circular arc that measures 180° (equivalently, π radians, or a half-turn). It has only one line of symmetry (reflection symmetry).
In non-technical usage, the term "semicircle" is sometimes used to refer to either a closed curve that also includes the diameter segment from one end of the arc to the other or to the half-disk, which is a two-dimensional geometric region that further includes all the interior points.
By Thales' theorem, any triangle inscribed in a semicircle with a vertex at each of the endpoints of the semicircle and the third vertex elsewhere on the semicircle is a right triangle, with a right angle at the third vertex.
All lines intersecting the semicircle perpendicularly are concurrent at the center of the circle containing the given semicircle.
Uses
A semicircle can be used to construct the arithmetic and geometric means of two lengths using straight-edge and compass. For a semicircle with a diameter of a + b, the length of its radius is the arithmetic mean of a and b (since the radius is half of the diameter).
The geometric mean can be found by dividing the diameter into two segments of lengths a and b, and then connecting their common endpoint to the semicircle with a segment perpendicular to the diameter. The length of the resulting segment is the geometric mean. This can be proven by applying the Pythagorean theorem to three similar right triangles, each having as vertices the point where the perpendicular touches the semicircle and two of the three endpoints of the segments of lengths a and b.
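Written out, the similar-triangle step is a one-line computation (standard geometry, shown here in LaTeX for clarity):

```latex
% Let the perpendicular from the diameter meet the semicircle at P,
% with its foot dividing the diameter into segments of lengths a and b,
% and let h be the length of the perpendicular. The two small right
% triangles are similar, so
\frac{h}{a} = \frac{b}{h}
\qquad\Longrightarrow\qquad
h^{2} = ab
\qquad\Longrightarrow\qquad
h = \sqrt{ab}.
```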
The construction of the geometric mean can be used to transform any rectangle into a square of the same area, a problem called the quadrature of a rectangle. The side length of the square is the geometric mean of the side lengths of the rectangle. More generally, it is used as a lemma in a general method for tra |
https://en.wikipedia.org/wiki/GNU%20toolchain | The GNU toolchain is a broad collection of programming tools produced by the GNU Project. These tools form a toolchain (a suite of tools used in a serial manner) used for developing software applications and operating systems.
The GNU toolchain plays a vital role in development of Linux, some BSD systems, and software for embedded systems. Parts of the GNU toolchain are also directly used with or ported to other platforms such as Solaris, macOS, Microsoft Windows (via Cygwin and MinGW/MSYS), Sony PlayStation Portable (used by PSP modding scene) and Sony PlayStation 3.
Components
Projects included in the GNU toolchain are:
GNU make: an automation tool for compilation and build
GNU Compiler Collection (GCC): a suite of compilers for several programming languages
GNU C Library (glibc): core C library including headers, libraries, and dynamic loader
GNU Binutils: a suite of tools including linker, assembler and other tools
GNU Bison: a parser generator, often used with the Flex lexical analyser
GNU m4: an m4 macro processor
GNU Debugger (GDB): a code debugging tool
GNU Autotools (GNU Build System): Autoconf, Automake and Libtool
See also
GNU Classpath
GNU Core Utilities
CVS and Git
MinGW and Cygwin
Cross compiler
LLVM
References
External links
GCC, the GNU Compiler Collection
Building and Installing under Linux
Prebuilt Win32 GNU Toolchains for various embedded platforms
Programming tools
toolchain |
https://en.wikipedia.org/wiki/Opticks | Opticks: or, A Treatise of the Reflexions, Refractions, Inflexions and Colours of Light is a book by English natural philosopher Isaac Newton that was published in English in 1704 (a scholarly Latin translation appeared in 1706). The book analyzes the fundamental nature of light by means of the refraction of light with prisms and lenses, the diffraction of light by closely spaced sheets of glass, and the behaviour of color mixtures with spectral lights or pigment powders. Opticks was Newton's second major book on physical science and it is considered one of the three major works on optics during the Scientific Revolution (alongside Kepler's Astronomiae Pars Optica and Huygens' Traité de la Lumière). Newton's name did not appear on the title page of the first edition of Opticks.
Overview
The publication of Opticks represented a major contribution to science, different from but in some ways rivalling the Principia. Opticks is largely a record of experiments and the deductions made from them, covering a wide range of topics in what was later to be known as physical optics. That is, this work is not a geometric discussion of catoptrics or dioptrics, the traditional subjects of reflection of light by mirrors of different shapes and the exploration of how light is "bent" as it passes from one medium, such as air, into another, such as water or glass. Rather, the Opticks is a study of the nature of light and colour and the various phenomena of diffraction, which Newton called the "inflexion" of light.
In this book Newton sets forth in full his experiments, first reported to the Royal Society of London in 1672, on dispersion, or the separation of light into a spectrum of its component colours. He demonstrates how the appearance of color arises from selective absorption, reflection, or transmission of the various component parts of the incident light.
The major significance of Newton's work is that it overturned the dogma, attributed |
https://en.wikipedia.org/wiki/Power%20engineering | Power engineering, also called power systems engineering, is a subfield of electrical engineering that deals with the generation, transmission, distribution, and utilization of electric power, and the electrical apparatus connected to such systems. Although much of the field is concerned with the problems of three-phase AC power – the standard for large-scale power transmission and distribution across the modern world – a significant fraction of the field is concerned with the conversion between AC and DC power and the development of specialized power systems such as those used in aircraft or for electric railway networks. Power engineering draws the majority of its theoretical base from electrical engineering and mechanical engineering.
History
Pioneering years
Electricity became a subject of scientific interest in the late 17th century. Over the next two centuries a number of important discoveries were made including the incandescent light bulb and the voltaic pile. Probably the greatest discovery with respect to power engineering came from Michael Faraday who in 1831 discovered that a change in magnetic flux induces an electromotive force in a loop of wire—a principle known as electromagnetic induction that helps explain how generators and transformers work.
In 1881 two electricians built the world's first power station at Godalming in England. The station employed two waterwheels to produce an alternating current that was used to supply seven Siemens arc lamps at 250 volts and thirty-four incandescent lamps at 40 volts. However supply was intermittent and in 1882 Thomas Edison and his company, The Edison Electric Light Company, developed the first steam-powered electric power station on Pearl Street in New York City. The Pearl Street Station consisted of several generators and initially powered around 3,000 lamps for 59 customers. The power station used direct current and operated at a single voltage. Since the direct current power could not be easily transf |
https://en.wikipedia.org/wiki/Provability%20logic | Provability logic is a modal logic, in which the box (or "necessity") operator is interpreted as 'it is provable that'. The point is to capture the notion of a proof predicate of a reasonably rich formal theory, such as Peano arithmetic.
Examples
There are a number of provability logics, some of which are covered in the literature mentioned in the references below. The basic system is generally referred to as GL (for Gödel–Löb) or L or K4W (W stands for well-foundedness). It can be obtained by adding the modal version of Löb's theorem to the logic K (or K4).
Namely, the axioms of GL are all tautologies of classical propositional logic plus all formulas of one of the following forms:
Distribution axiom: □(p → q) → (□p → □q)
Löb's axiom: □(□p → p) → □p
And the rules of inference are:
Modus ponens: From p → q and p conclude q;
Necessitation: From p conclude □p.
History
The GL model was pioneered by Robert M. Solovay in 1976. Since then, until his death in 1996, the prime inspirer of the field was George Boolos. Significant contributions to the field have been made by Sergei N. Artemov, Lev Beklemishev, Giorgi Japaridze, Dick de Jongh, Franco Montagna, Giovanni Sambin, Vladimir Shavrukov, Albert Visser and others.
Generalizations
Interpretability logics and Japaridze's polymodal logic present natural extensions of provability logic.
See also
Hilbert–Bernays provability conditions
Interpretability logic
Kripke semantics
Japaridze's polymodal logic
Löb's theorem
References
George Boolos, The Logic of Provability. Cambridge University Press, 1993.
Giorgi Japaridze and Dick de Jongh, The logic of provability. In: Handbook of Proof Theory, S. Buss, ed. Elsevier, 1998, pp. 475–546.
Sergei N. Artemov and Lev Beklemishev, Provability logic. In: Handbook of Philosophical Logic, D. Gabbay and F. Guenthner, eds., vol. 13, 2nd ed., pp. 189–360. Springer, 2005.
Per Lindström, Provability logic—a short introduction. Theoria 62 (1996), pp. 19–61.
Craig Smoryński, Self-reference and modal logic. Springer, Berlin, 1985.
Robert |
https://en.wikipedia.org/wiki/Electric%20power%20conversion | In all fields of electrical engineering, power conversion is the process of converting electric energy from one form to another. A power converter is an electrical or electro-mechanical device for converting electrical energy. A power converter can convert alternating current (AC) into direct current (DC) and vice versa; change the voltage or frequency of the current or do some combination of these. The power converter can be as simple as a transformer or it can be a far more complex system, such as a resonant converter. The term can also refer to a class of electrical machinery that is used to convert one frequency of alternating current into another. Power conversion systems often incorporate redundancy and voltage regulation.
Power converters are classified based on the type of power conversion they perform. One way of classifying power conversion systems is according to whether the input and output are alternating current or direct current. Ultimately, the task of all power converters is to "process and control the flow of electrical energy by supplying voltages and currents in a form that is optimally suited for user loads".
DC power conversion
DC to DC
The following devices can convert DC to DC:
Linear regulator
Voltage regulator
Motor–generator
Rotary converter
Switched-mode power supply
DC to AC
The following devices can convert DC to AC:
Power inverter
Motor–generator
Rotary converter
Switched-mode power supply
Chopper (electronics)
AC power conversion
AC to DC
The following devices can convert AC to DC:
Rectifier
Mains power supply unit (PSU)
Motor–generator
Rotary converter
Switched-mode power supply
AC to AC
The following devices can convert AC to AC:
Transformer or autotransformer
Voltage converter
Voltage regulator
Cycloconverter
Variable-frequency transformer
Motor–generator
Rotary converter
Switched-mode power supply
Other systems
There are also devices and methods to convert between power systems designed for single and three-phase operation.
Th |
https://en.wikipedia.org/wiki/DC-to-DC%20converter | A DC-to-DC converter is an electronic circuit or electromechanical device that converts a source of direct current (DC) from one voltage level to another. It is a type of electric power converter. Power levels range from very low (small batteries) to very high (high-voltage power transmission).
History
Before the development of power semiconductors, one way to convert the voltage of a DC supply to a higher voltage, for low-power applications, was to convert it to AC by using a vibrator, followed by a step-up transformer and a rectifier. Where higher power was needed, a motor–generator unit was often used, in which an electric motor drove a generator that produced the desired voltage. (The motor and generator could be separate devices, or they could be combined into a single "dynamotor" unit with no external power shaft.) These relatively inefficient and expensive designs were used only when there was no alternative, such as to power a car radio (which then used thermionic valves (tubes) requiring much higher voltages than the 6 or 12 V available from a car battery). The introduction of power semiconductors and integrated circuits made such conversion economically viable using the techniques described below. For example, the DC power supply is first converted to high-frequency AC feeding a transformer, which is small, light, and cheap owing to the high frequency, and which changes the voltage; the transformer output is then rectified back to DC. Although by 1976 transistor car radio receivers did not require high voltages, some amateur radio operators continued to use vibrator supplies and dynamotors for mobile transceivers requiring high voltages, although transistorized power supplies were available.
While it was possible to derive a lower voltage from a higher one with a linear regulator or even a resistor, these methods dissipated the excess as heat; energy-efficient conversion became possible only with solid-state switch-mode circuits.
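A small worked example (with assumed supply and load values) makes the contrast concrete: a linear regulator dropping 12 V to 5 V at 1 A dissipates the full 7 V drop as heat, while an idealised switch-mode buck converter would run at a duty cycle of Vout/Vin with, in principle, no dissipation.

#include <cstdio>

int main() {
    // Assumed values for illustration only.
    const double v_in = 12.0;    // supply voltage, volts
    const double v_out = 5.0;    // desired output voltage, volts
    const double i_load = 1.0;   // load current, amps

    // Linear regulator: the full load current flows through the pass element,
    // which drops (v_in - v_out) and turns that power into heat.
    double p_load = v_out * i_load;                 // 5 W delivered to the load
    double p_heat = (v_in - v_out) * i_load;        // 7 W dissipated as heat
    double eff_linear = p_load / (p_load + p_heat); // about 41.7 %

    // Idealised buck (switch-mode) converter: v_out = D * v_in, losses neglected.
    double duty = v_out / v_in;                     // about 0.42

    std::printf("linear regulator efficiency: %.1f %%\n", 100.0 * eff_linear);
    std::printf("ideal buck converter duty cycle: %.2f\n", duty);
    return 0;
}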
Uses
DC-to-DC converters are used in portable electronic dev |
https://en.wikipedia.org/wiki/Resource%20Reservation%20Protocol | The Resource Reservation Protocol (RSVP) is a transport layer protocol designed to reserve resources across a network using the integrated services model. RSVP operates over IPv4 or IPv6 and provides receiver-initiated setup of resource reservations for multicast or unicast data flows. It does not transport application data but is similar to a control protocol, like Internet Control Message Protocol (ICMP) or Internet Group Management Protocol (IGMP). RSVP is described in RFC 2205.
RSVP can be used by hosts and routers to request or deliver specific levels of quality of service (QoS) for application data streams. RSVP defines how applications place reservations and how they can relinquish the reserved resources once no longer required. RSVP operations will generally result in resources being reserved in each node along a path. RSVP is not a routing protocol but was designed to interoperate with current and future routing protocols.
RSVP by itself is rarely deployed in telecommunications networks. In 2003, development effort was shifted from RSVP to RSVP-TE for teletraffic engineering. Next Steps in Signaling (NSIS) was a proposed replacement for RSVP.
Main attributes
RSVP requests resources for simplex flows: a traffic stream in only one direction from sender to one or more receivers.
RSVP is not a routing protocol but works with current and future routing protocols.
RSVP is receiver oriented in that the receiver of a data flow initiates and maintains the resource reservation for that flow.
RSVP maintains soft state (the reservation at each node needs a periodic refresh) of the hosts' and routers' resource reservations, hence supporting dynamic automatic adaptation to network changes; a small sketch of this soft-state behavior follows this list.
RSVP provides several reservation styles (a set of reservation options) and allows for future styles to be added in protocol revisions to fit varied applications.
RSVP transports and maintains traffic and policy control parameters that are opaque to RSVP.
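The soft-state idea referred to above can be sketched as a per-node reservation table. This is an illustration only, with assumed names and fields; it does not reflect RSVP's actual message formats, objects, or refresh timers.

#include <chrono>
#include <map>
#include <string>

using Clock = std::chrono::steady_clock;

struct Reservation {
    std::string sender;          // simplex flow: traffic runs from sender to receiver only
    std::string receiver;        // the receiver initiates and maintains the reservation
    double bandwidth_kbps;       // stand-in for the flowspec a real reservation would carry
    Clock::time_point expires;   // soft state: the entry disappears unless refreshed in time
};

class ReservationTable {
    std::map<std::string, Reservation> table_;   // keyed by an opaque flow identifier
    std::chrono::seconds lifetime_{90};          // assumed cleanup lifetime, not an RSVP constant

public:
    // Called whenever a new or refreshing reservation request reaches this node.
    void refresh(const std::string& flow_id, Reservation r) {
        r.expires = Clock::now() + lifetime_;
        table_[flow_id] = std::move(r);
    }

    // Periodic sweep: reservations that were not refreshed simply time out.
    void expire_stale() {
        const auto now = Clock::now();
        for (auto it = table_.begin(); it != table_.end(); ) {
            if (it->second.expires < now) it = table_.erase(it);
            else ++it;
        }
    }
};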
History and related standards
Th |
https://en.wikipedia.org/wiki/Maiden%20flight | The maiden flight, also known as first flight, of an aircraft is the first occasion on which it leaves the ground under its own power. The same term is also used for the first launch of rockets.
The maiden flight of a new aircraft type is always a historic occasion for the type and can be quite emotional for those involved. In the early days of aviation it could be dangerous, because the exact handling characteristics of the aircraft were generally unknown. The maiden flight of a new type is almost invariably flown by a highly experienced test pilot. Maiden flights are usually accompanied by a chase plane, to verify items like altitude, airspeed, and general airworthiness.
A maiden flight is only one stage in the development of an aircraft type. Unless the type is a pure research aircraft (such as the X-15), the aircraft must be tested extensively to ensure that it delivers the desired performance with an acceptable margin of safety. In the case of civilian aircraft, a new type must be certified by a governing agency (such as the Federal Aviation Administration in the United States) before it can enter operation.
Notable maiden flights (aircraft)
An incomplete list of maiden flights of notable aircraft types, organized by date, follows.
June, 1875 – Thomas Moy's Aerial Steamer, London, England (pilotless, tethered)
October 9, 1890 – Clément Ader – took off from Gretz-Armainvilliers, west of Paris, France.
August 14, 1901 – Gustave Whitehead from Leutershausen, Bavaria.
May 15, 1902 – Lyman Gilmore – took off from Grass Valley, California.
March 31, 1903 – Richard Pearse – took off from Waitohi Flat, Temuka, South Island, New Zealand.
December 17, 1903 – Wright brothers Wright Flyer – First successful piloted and controlled heavier-than-air powered aircraft; flights took place four miles south of Kitty Hawk, North Carolina.
March 18, 1906 – Traian Vuia, a Romanian inventor and engineer, who flew 11 meters in his self-named monoplane at Montesson near Pa |
https://en.wikipedia.org/wiki/Single-serving%20visitor%20pattern | In computer programming, the single-serving visitor pattern is a design pattern. Its intent is to optimise the implementation of a visitor that is allocated, used only once, and then deleted (which is the case of most visitors).
Applicability
The single-serving visitor pattern should be used when visitors do not need to remain in memory. This is often the case when visiting a hierarchy of objects (such as when the visitor pattern is used together with the composite pattern) to perform a single task on it, for example counting the number of cameras in a 3D scene.
The regular visitor pattern should be used when the visitor must remain in memory. This occurs when the visitor is configured with a number of parameters that must be kept in memory for a later use of the visitor (for example, for storing the rendering options of a 3D scene renderer).
However, if there should be only one instance of such a visitor in a whole program, it can be a good idea to implement it both as a single-serving visitor and as a singleton. Doing so ensures that the single-serving visitor can be called later with its parameters unchanged (in this particular case "single-serving visitor" is an abuse of language since the visitor can be used several times).
Usage examples
The single-serving visitor is called through static methods.
Without parameters:
    Element* elem;
    SingleServingVisitor::apply_to(elem);
With parameters:
    Element* elem;
    TYPE param1, param2;
    SingleServingVisitor::apply_to(elem, param1, param2);
Implementation as a singleton:
    Element* elem;
    TYPE param1, param2;
    SingleServingVisitor::set_param1(param1);
    SingleServingVisitor::set_param2(param2);
    SingleServingVisitor::apply_to(elem);
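As a rough illustration of the mechanics behind these calls, the following minimal C++ sketch (class and method names are assumed from the usage above, not taken from any particular implementation) shows how apply_to() can allocate the visitor, use it exactly once, and delete it before returning:

class Element;                                      // forward declaration of the visited type

class SingleServingVisitor {
    SingleServingVisitor() = default;               // construction only happens inside apply_to()
    void visit(Element& e) { /* perform the single task on e */ }
public:
    static void apply_to(Element* elem) {
        SingleServingVisitor* visitor = new SingleServingVisitor();  // allocated on demand
        visitor->visit(*elem);                       // the visitor's one and only use
        delete visitor;                              // destroyed immediately; nothing lingers
    }
};

In a real composite hierarchy the call would normally be dispatched through the element's accept() method, but the life cycle shown here (create, use once, delete, all inside one static call) is the defining trait of the pattern.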
Consequences
Pros
No "zombie" objects. With a single-serving visitor, it is ensured that visitors are allocated when needed and destroyed once useless.
A simpler interface than the visitor pattern. The visitor is created, used and freed by a single call to the apply_ |
https://en.wikipedia.org/wiki/Ignitron | An ignitron is a type of gas-filled tube used as a controlled rectifier and dating from the 1930s. It was invented by Joseph Slepian while employed by Westinghouse, which was the original manufacturer and owned trademark rights to the name "Ignitron". Ignitrons are closely related to mercury-arc valves but differ in the way the arc is ignited. They function similarly to thyratrons; a triggering pulse to the igniter electrode turns the device "on", allowing a high current to flow between the cathode and anode electrodes. After it is turned on, the current through the anode must be reduced to zero to restore the device to its nonconducting state. They are used to switch high currents in heavy industrial applications.
Construction and operation
An ignitron is usually a large steel container with a pool of mercury in the bottom that acts as a cathode during operation. A large graphite or refractory metal cylinder, held above the pool by an insulated electrical connection, serves as the anode. An igniting electrode (called the ignitor), made of a refractory semiconductor material such as silicon carbide, is briefly pulsed with a high current to create a puff of electrically conductive mercury plasma. The plasma rapidly bridges the space between the mercury pool and the anode, permitting heavy conduction between the main electrodes. At the surface of the mercury, heating by the resulting arc liberates large numbers of electrons which help to maintain the mercury arc. The mercury surface thus serves as the cathode, and current is normally only in one direction. Once ignited, an ignitron will continue to pass current until either the current is externally interrupted or the voltage applied between cathode and anode is reversed.
Applications
Ignitrons were long used as high-current rectifiers in major industrial and utility installations where thousands of amperes of AC must be converted to DC, such as aluminum smelters. Ignitrons were used to control the current in el |
https://en.wikipedia.org/wiki/Transfection | Transfection is the process of deliberately introducing naked or purified nucleic acids into eukaryotic cells. It may also refer to other methods and cell types, although other terms are often preferred: "transformation" is typically used to describe non-viral DNA transfer in bacteria and non-animal eukaryotic cells, including plant cells. In animal cells, transfection is the preferred term as transformation is also used to refer to progression to a cancerous state (carcinogenesis) in these cells. Transduction is often used to describe virus-mediated gene transfer into eukaryotic cells.
The word transfection is a portmanteau of trans- and infection. Genetic material (such as supercoiled plasmid DNA or siRNA constructs), may be transfected. Transfection of animal cells typically involves opening transient pores or "holes" in the cell membrane to allow the uptake of material. Transfection can be carried out using calcium phosphate (i.e. tricalcium phosphate), by electroporation, by cell squeezing, or by mixing a cationic lipid with the material to produce liposomes that fuse with the cell membrane and deposit their cargo inside.
Transfection can result in unexpected morphologies and abnormalities in target cells.
Terminology
The meaning of the term has evolved. The original meaning of transfection was "infection by transformation", i.e., introduction of genetic material, DNA or RNA, from a prokaryote-infecting virus or bacteriophage into cells, resulting in an infection. For work with bacterial and archaeal cells transfection retains its original meaning as a special case of transformation. Because the term transformation had another sense in animal cell biology (a genetic change allowing long-term propagation in culture, or acquisition of properties typical of cancer cells), the term transfection acquired, for animal cells, its present meaning of a change in cell properties caused by introduction of DNA.
Methods
There are various methods of introducing foreign |
https://en.wikipedia.org/wiki/Track%20%26%20Field%20%28video%20game%29 | Track & Field, also known as Hyper Olympic in Japan and Europe, is a 1983 Olympic-themed sports video game developed by Konami for arcades. The Japanese release sported an official license for the 1984 Summer Olympics. In Europe, the game was initially released under the Japanese title Hyper Olympic in 1983, before re-releasing under the US title Track & Field in early 1984.
Players compete in a series of events, most involving alternately pressing two buttons as quickly as possible to make the onscreen character run faster. It has a horizontal side-scrolling format, depicting one or two tracks at a time, a large scoreboard that displays world records and current runs, and a packed audience in the background.
The game was a worldwide commercial success in arcades, becoming one of the most successful arcade games of 1984. Konami and Centuri also held a 1984 Track & Field video game competition that drew more than a million players internationally, holding the record for the largest organized video game competition of all time. It was followed by sequels, including Hyper Sports, and similar Olympic video games from other companies. It led to a resurgence of arcade sports games and inspired Namco's side-scrolling platform game Pac-Land (1984).
Gameplay
In the original arcade game, the player uses two "run" buttons (or a trackball in later units that replaced buttons damaged from overuse) and one "action" button to control an athlete competing in the following six events:
100 meter dash – running by quickly alternating button presses;
Long jump – running by alternating button presses, then holding the jump button with the correct timing to set the angle (42 degrees is optimal);
Javelin throw – running by alternating button presses and then pressing the action button with the correct timing to set the angle (43 degrees is optimal);
110 meter hurdles – running by alternating button presses and using action button to time jumps;
Hammer throw – spinning initiated by pressing a run button once and then correctly t |
https://en.wikipedia.org/wiki/IBM%20TopView | TopView is the first object-oriented, multitasking, and windowing personal computer operating environment for PC DOS developed by IBM, announced in August 1984 and shipped in March 1985. TopView provided a text-mode (although it also ran in graphics mode) operating environment that allowed users to run more than one application at the same time on a PC. IBM demonstrated an early version of the product to key customers before making it generally available, around the time they shipped their new PC AT computer.
History
When Microsoft announced Windows 1.0 in November 1983, International Business Machines (IBM), Microsoft's important partner in popularizing MS-DOS for the IBM PC, notably did not announce support for the forthcoming window environment. IBM determined that the microcomputer market needed a multitasking environment. When it released TopView in 1985, the press speculated that the software was the start of IBM's plan to increase its control over the IBM PC (even though IBM published the specifications publicly) by creating a proprietary operating system for it, similar to what IBM had offered for years on its larger computers. TopView also allowed IBM to serve customers who were surprised that the new IBM AT did not come with an operating system able to use the hardware multitasking and protected mode features of the new 80286 CPU, as DOS and most applications were still running in 8086/8088 real mode.
Even given TopView's virtual memory management capabilities, hardware limitations still held the new environment back—a base AT with 256 KB of RAM only had room for 80 KB of application code and data in RAM once DOS and TopView had loaded up. 512-640 KB was recommended to load up two typical application programs of the time. This was the maximum the earlier IBM XT could have installed. Once loaded, TopView took back much of the memory consumed by DOS, but still not enough to satisfy industry critics. TopView ran in real mode on any x86 processor and coul |
https://en.wikipedia.org/wiki/Sensitivity%20analysis | Sensitivity analysis is the study of how the uncertainty in the output of a mathematical model or system (numerical or otherwise) can be divided and allocated to different sources of uncertainty in its inputs. A related practice is uncertainty analysis, which has a greater focus on uncertainty quantification and propagation of uncertainty; ideally, uncertainty and sensitivity analysis should be run in tandem.
Recalculating outcomes under alternative assumptions to determine the impact of an input variable is the core operation of sensitivity analysis (a minimal sketch of this one-at-a-time approach follows the list below), and it can be useful for a range of purposes, including:
Testing the robustness of the results of a model or system in the presence of uncertainty.
Increased understanding of the relationships between input and output variables in a system or model.
Uncertainty reduction, through the identification of model inputs that cause significant uncertainty in the output and should therefore be the focus of attention in order to increase robustness (perhaps by further research).
Searching for errors in the model (by encountering unexpected relationships between inputs and outputs).
Model simplification – fixing model inputs that have no effect on the output, or identifying and removing redundant parts of the model structure.
Enhancing communication from modelers to decision makers (e.g. by making recommendations more credible, understandable, compelling or persuasive).
Finding regions in the space of input factors for which the model output is either maximum or minimum or meets some optimum criterion (see optimization and Monte Carlo filtering).
When calibrating models with a large number of parameters, a preliminary sensitivity test can ease the calibration stage by focusing attention on the sensitive parameters. Not knowing the sensitivity of parameters can result in time being wasted on non-sensitive ones.
To seek to identify important connections between observations, model inputs, and predictions or forecasts, leading to the developmen |
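A minimal sketch of a one-at-a-time (OAT) sensitivity scan is shown below. The model, baseline values, and perturbation size are arbitrary assumptions for illustration; OAT is only one of several approaches to sensitivity analysis.

#include <cstddef>
#include <cstdio>
#include <vector>

// Placeholder model: any deterministic function of the inputs would do here.
double model(const std::vector<double>& x) {
    return x[0] * x[0] + 10.0 * x[1] + 0.1 * x[2];
}

int main() {
    const std::vector<double> baseline = {1.0, 2.0, 3.0};  // assumed nominal input values
    const double perturbation = 0.10;                      // perturb one input at a time by +10 %
    const double y0 = model(baseline);

    for (std::size_t i = 0; i < baseline.size(); ++i) {
        std::vector<double> x = baseline;
        x[i] *= 1.0 + perturbation;             // change only input i, keep the others at baseline
        const double delta = model(x) - y0;     // output change attributable to input i
        std::printf("input %zu: output change %+.4f\n", i, delta);
    }
    return 0;
}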
https://en.wikipedia.org/wiki/Grader | A grader, also commonly referred to as a road grader, motor grader, or simply blade, is a form of heavy equipment with a long blade used to create a flat surface during grading. Although the earliest models were towed behind horses, and later tractors, most modern graders are self-propelled and thus technically "motor graders".
Typical graders have three axles, with the steering wheels in front, followed by the grading blade or mouldboard, then a cab and engine atop tandem rear axles. Some graders also have front-wheel drives for improved performance. Some graders have optional rear attachments, such as a ripper, scarifier, or compactor. A blade forward of the front axle may also be added. For snowplowing and some dirt grading operations, a main blade extension can also be mounted.
Blade widths range from 2.50 to 7.30 m (8 to 24 ft), and engine power from 93 to 373 kW (125 to 500 hp). Certain graders can operate multiple attachments, or be designed for specialized tasks like underground mining.
Function
In civil engineering "rough grading" is performed by heavy equipment such as wheel tractor-scrapers and bulldozers. Graders are used to "finish grade", with the angle, tilt (or pitch), and height of their blade capable of being adjusted to a high level of precision.
Graders are commonly used in the construction and maintenance of dirt and gravel roads. In constructing paved roads, they prepare a wide flat base course for the final road surface. Graders are also used to set native soil or gravel foundation pads to finish grade before the construction of large buildings. Graders can produce canted surfaces for drainage or safety. They may be used to produce drainage ditches with shallow V-shaped cross-sections on either side of highways.
Steering is performed via a steering wheel, or a joystick capable of controlling both the angle and cant of the front wheels. Many models also allow frame articulation between the front and rear axles, which allows a small |
https://en.wikipedia.org/wiki/Text%20mode | Text mode is a computer display mode in which content is internally represented on a computer screen in terms of characters rather than individual pixels. Typically, the screen consists of a uniform rectangular grid of character cells, each of which contains one of the characters of a character set; this contrasts with graphics mode and other kinds of computer graphics modes.
Text mode applications communicate with the user by using command-line interfaces and text user interfaces. Many character sets used in text mode applications also contain a limited set of predefined semi-graphical characters usable for drawing boxes and other rudimentary graphics, which can be used to highlight the content or to simulate widget or control interface objects found in GUI programs. A typical example is the IBM code page 437 character set.
An important characteristic of text mode programs is that they assume monospaced fonts, where every character has the same width on screen, which allows them to easily maintain the vertical alignment when displaying semi-graphical characters. This mirrors early mechanical printers, which had a fixed pitch. This way, the output seen on the screen could be sent directly to the printer maintaining the same format.
Depending on the environment, the screen buffer can be directly addressable. Programs that display output on remote video terminals must issue special control sequences to manipulate the screen buffer. The most popular standards for such control sequences are ANSI and VT100.
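As a brief illustration (assuming an ANSI/VT100-compatible terminal), a program can manipulate such a remote display purely by writing control sequences to its output:

#include <cstdio>

int main() {
    std::printf("\x1b[2J");                             // ESC [ 2 J : erase the entire screen
    std::printf("\x1b[10;5H");                          // ESC [ row ; col H : move the cursor to row 10, column 5
    std::printf("\x1b[7mhighlighted\x1b[0m normal\n");  // ESC [ 7 m : reverse video; ESC [ 0 m : reset attributes
    return 0;
}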
Programs accessing the screen buffer through control sequences may lose synchronization with the actual display, so many text mode programs have a "redisplay everything" command, often associated with a key combination (commonly Ctrl+L).
History
Text mode video rendering came to prominence in the early 1970s, when video-oriented text terminals started to replace teleprinters in the interactive use of computers.
Benefits
The advantages of text modes as co |
https://en.wikipedia.org/wiki/List%20of%20Linux%20distributions | This page provides general information about notable Linux distributions in the form of a categorized list. Distributions are organized into sections by the major distribution or package management system they are based on.
Debian-based
Debian Linux is a distribution that emphasizes free software. It supports many hardware platforms. Debian and distributions based on it use the .deb package format and the dpkg package manager and its frontends (such as apt or synaptic).
Ubuntu-based
Ubuntu is a distribution based on Debian, designed to have regular releases, a consistent user experience and commercial support on both desktops and servers.
Current official derivatives
These Ubuntu variants, also known as Ubuntu flavours, simply install a set of packages different from the original Ubuntu, but since they draw additional packages and updates from the same repositories as Ubuntu, all of the same software is available for each of them.
Discontinued official derivatives
Unofficial derivatives
Unofficial variants and derivatives are not controlled or guided by Canonical Ltd. and generally have different goals in mind.
Knoppix-based
Knoppix itself is based on Debian. It is a live distribution, with automated hardware configuration and a wide choice of software, which is decompressed as it loads from the drive.
Other Debian-based
Pacman-based
Pacman is a package manager that is capable of resolving dependencies and automatically downloading and installing all necessary packages. It is primarily developed and used by Arch Linux and its derivatives.
Arch Linux-based
Arch Linux is an independently developed, x86-64 general-purpose Linux distribution that strives to provide the latest stable versions of most software by following a rolling-release model. The default installation is a minimal base system, configured by the user to only add what is purposely required.
Other Pacman-based
RPM-based
Red Hat Linux and SUSE Linux were the original major distributions |
https://en.wikipedia.org/wiki/Landau%E2%80%93Ramanujan%20constant | In mathematics and the field of number theory, the Landau–Ramanujan constant is the positive real number b that occurs in a theorem proved by Edmund Landau in 1908, stating that for large $x$, the number of positive integers below $x$ that are the sum of two square numbers behaves asymptotically as $b \, x / \sqrt{\ln x}$.
This constant b was rediscovered in 1913 by Srinivasa Ramanujan, in the first letter he wrote to G.H. Hardy.
Sums of two squares
By the sum of two squares theorem, the numbers that can be expressed as a sum of two squares of integers are the ones for which each prime number congruent to 3 mod 4 appears with an even exponent in their prime factorization. For instance, 45 = 9 + 36 is a sum of two squares; in its prime factorization, 3² × 5, the prime 3 appears with an even exponent, and the prime 5 is congruent to 1 mod 4, so its exponent can be odd.
Landau's theorem states that if $N(x)$ is the number of positive integers less than $x$ that are the sum of two squares, then
$$\lim_{x \to \infty} \frac{N(x)\sqrt{\ln x}}{x} = b,$$
where $b \approx 0.76422$ is the Landau–Ramanujan constant.
The Landau–Ramanujan constant can also be written as an infinite product over the primes congruent to 3 mod 4:
$$b = \frac{1}{\sqrt{2}} \prod_{p \equiv 3 \pmod 4} \left(1 - \frac{1}{p^{2}}\right)^{-1/2}.$$
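The truncated product converges quickly enough that a short program can approximate the constant. The sketch below uses a simple sieve with an arbitrary bound; it is a numerical check, not part of the article.

#include <cmath>
#include <cstdio>
#include <vector>

int main() {
    const int LIMIT = 2000000;                    // sieve bound: an arbitrary choice, larger means more accurate
    std::vector<bool> composite(LIMIT + 1, false);
    double b = 1.0 / std::sqrt(2.0);              // the leading 1/sqrt(2) factor of the product

    for (int p = 2; p <= LIMIT; ++p) {
        if (composite[p]) continue;               // p is prime at this point
        for (long long m = (long long)p * p; m <= LIMIT; m += p)
            composite[m] = true;                  // mark proper multiples as composite
        if (p % 4 == 3)                           // only primes congruent to 3 mod 4 enter the product
            b /= std::sqrt(1.0 - 1.0 / ((double)p * (double)p));
    }
    std::printf("b is approximately %.6f (known value about 0.764224)\n", b);
    return 0;
}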
History
This constant was stated by Landau in the limit form above; Ramanujan instead approximated the count $N(x)$ by an integral, with the same constant of proportionality, and with a slowly growing error term.