source | text |
|---|---|
https://en.wikipedia.org/wiki/Laser%20cooling | Laser cooling includes a number of techniques where atoms, molecules, and small mechanical systems are cooled with laser light. The directed energy of lasers is often associated with heating materials, e.g. laser cutting, so it can be counterintuitive that laser cooling often results in sample temperatures approaching absolute zero. Laser cooling relies on the change in momentum when an object, such as an atom, absorbs and re-emits a photon (a particle of light). For an ensemble of particles, their thermodynamic temperature is proportional to the variance in their velocity. That is, more homogeneous velocities among particles corresponds to a lower temperature. Laser cooling techniques combine atomic spectroscopy with the aforementioned mechanical effect of light to compress the velocity distribution of an ensemble of particles, thereby cooling the particles.
The 1997 Nobel Prize in Physics was awarded to Claude Cohen-Tannoudji, Steven Chu, and William Daniel Phillips "for development of methods to cool and trap atoms with laser light".
History
Radiation pressure
Radiation pressure is the force that electromagnetic radiation exerts on matter. In 1873 Maxwell published his treatise on electromagnetism in which he predicted radiation pressure. The force was experimentally demonstrated for the first time by Lebedev and reported at a conference in Paris in 1900, and later published in more detail in 1901. Following Lebedev's measurements Nichols and Hull also demonstrated the force of radiation pressure in 1901, with a refined measurement reported in 1903.
In 1933, Otto Frisch deflected an atomic beam of sodium atoms with light.
This was the first realization of radiation pressure acting on a resonant absorber.
Laser cooling proposals
The introduction of lasers into atomic manipulation experiments led to the first laser cooling proposals in the mid-1970s. Laser cooling was proposed separately in 1975 by two different research groups: Hänsch a |
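The proportionality between temperature and velocity spread stated above can be written out explicitly; the following one-line restatement is added for clarity and is not part of the source article. For one velocity component of particles of mass m, the equipartition theorem gives
\tfrac{1}{2} m \sigma_v^2 = \tfrac{1}{2} k_B T, \qquad\text{so}\qquad T = \frac{m \sigma_v^2}{k_B},
where \sigma_v^2 is the variance of that velocity component and k_B is the Boltzmann constant; narrowing the velocity distribution therefore lowers the temperature.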
https://en.wikipedia.org/wiki/Transmeta%20Crusoe | The Transmeta Crusoe was a family of x86-compatible microprocessors developed by Transmeta and introduced in 2000.
Instead of the instruction set architecture being implemented in hardware, or translated by specialized hardware, the Crusoe runs a software abstraction layer, or a virtual machine, known as the Code Morphing Software (CMS). The CMS translates machine code instructions received from programs into native instructions for the microprocessor. In this way, the Crusoe can emulate other instruction set architectures (ISAs). This is used to allow the microprocessors to emulate the Intel x86 instruction set.
Design
The Crusoe was notable for its method of achieving x86 compatibility. Instead of the instruction set architecture being implemented in hardware, or translated by specialized hardware, the Crusoe runs a software abstraction layer, or a virtual machine, known as the Code Morphing Software (CMS). The CMS translates machine code instructions received from programs into native instructions for the microprocessor. In this way, the Crusoe can emulate other instruction set architectures (ISAs). This is used to allow the microprocessors to emulate the Intel x86 instruction set. In theory, it is possible for the CMS to be modified to emulate other ISAs. Transmeta demonstrated Crusoe executing Java bytecode by translating the bytecodes into instructions in its native instruction set. The addition of an abstraction layer between the x86 instruction stream and the hardware means that the hardware architecture can change without breaking compatibility, just by modifying the CMS. For example, Transmeta Efficeon — a second-generation Transmeta design — has a 256-bit-wide VLIW core versus the 128-bit core of the Crusoe. Efficeon also supports SSE instructions.
The Crusoe is a VLIW microprocessor that executes bundles of instructions, termed molecules by Transmeta. Each molecule contains multiple instructions, termed atoms. The Code Morphing Software translates x |
https://en.wikipedia.org/wiki/Iterator | In computer programming, an iterator is an object that enables a programmer to traverse a container, particularly lists. Various types of iterators are often provided via a container's interface. Though the interface and semantics of a given iterator are fixed, iterators are often implemented in terms of the structures underlying a container implementation and are often tightly coupled to the container to enable the operational semantics of the iterator. An iterator performs traversal and also gives access to data elements in a container, but does not itself perform iteration (i.e., not without some significant liberty taken with that concept or with trivial use of the terminology).
An iterator is behaviorally similar to a database cursor. Iterators date to the CLU programming language in 1974.
Description
Internal Iterators
Internal iterators are higher-order functions (often taking anonymous functions, but not necessarily), such as map() and reduce(), that implement traversal across a container, applying the given function to every element in turn. An example is Python's map function:
digits = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
squared_digits = map(lambda x: x**2, digits)
# Iterating over this iterator would result in 0, 1, 4, 9, 16, ..., 81.
External iterators and the iterator pattern
An external iterator may be thought of as a type of pointer that has two primary operations: referencing one particular element in the object collection (called element access), and modifying itself so it points to the next element (called element traversal). There must also be a way to create an iterator so it points to some first element as well as some way to determine when the iterator has exhausted all of the elements in the container. Depending on the language and intended use, iterators may also provide additional operations or exhibit different behaviors.
The primary purpose of an iterator is to allow a user to process every element of a container while isola |
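To complement the internal-iterator example above, the following sketch illustrates the two external-iterator operations just described (element access and element traversal) using Python's built-in iterator protocol; the variable names are illustrative only.
digits = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
it = iter(digits)          # create an iterator positioned at the first element
total = 0
while True:
    try:
        value = next(it)   # element access plus traversal to the next element
    except StopIteration:  # raised once the iterator has exhausted the container
        break
    total += value
# total is now 45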
https://en.wikipedia.org/wiki/InterBase | InterBase is a relational database management system (RDBMS) currently developed and marketed by Embarcadero Technologies. InterBase is distinguished from other RDBMSs by its small footprint, close to zero administration requirements, and multi-generational architecture. InterBase runs on the Microsoft Windows, macOS, Linux, Solaris operating systems as well as iOS and Android.
Technology
InterBase is a SQL-92-compliant relational database and supports standard interfaces such as JDBC, ODBC, and ADO.NET.
Small footprint
A full InterBase server installation requires around 40 MB on disk. A minimum InterBase client install requires about 400 KB of disk space.
Embedded or server
InterBase can be run as an embedded database or regular server.
Data controller friendly inbuilt encryption
Since InterBase XE, InterBase has included 256-bit AES-strength encryption that offers full database, table or column data encryption. This helps data controllers conform with data protection laws governing at-rest data by separating encryption from access to the database, ensuring the database file is encrypted wherever it resides. The separation of the encryption also lets developers concentrate on developing the application rather than on what data is visible from a specific user login.
Multi-generational architecture
Concurrency control
To avoid blocking during updates, InterBase uses multiversion concurrency control instead of locks. Each transaction creates its own version of any record it updates; at the write step, a conflicting update fails rather than being blocked.
Rollbacks and recovery
InterBase also uses multi-generational records to implement rollbacks rather than transaction logs.
Drawbacks
Certain operations are more difficult to implement in a multi-generational architecture, and hence perform slowly relative to a more traditional implementation. One example is the SQL COUNT verb. Even when an index is available on the column or columns included in the COUNT, all records must be |
https://en.wikipedia.org/wiki/Glycerol | Glycerol (), also called glycerine or glycerin, is a simple triol compound. It is a colorless, odorless, viscous liquid that is sweet-tasting and non-toxic. The glycerol backbone is found in lipids known as glycerides. Because it has antimicrobial and antiviral properties, it is widely used in wound and burn treatments approved by the U.S. Food and Drug Administration. Conversely, it is also used as a bacterial culture medium. Its presence in blood can be used as an effective marker to measure liver disease. It is also widely used as a sweetener in the food industry and as a humectant in pharmaceutical formulations. Because of its three hydroxyl groups, glycerol is miscible with water and is hygroscopic in nature.
Structure
Although achiral, glycerol is prochiral with respect to reactions of one of the two primary alcohols. Thus, in substituted derivatives, the stereospecific numbering labels the molecule with a sn- prefix before the stem name of the molecule.
Production
Glycerol is generally obtained from plant and animal sources where it occurs in triglycerides, esters of glycerol with long-chain carboxylic acids. The hydrolysis, saponification, or transesterification of these triglycerides produces glycerol as well as the fatty acid derivative:
Triglycerides can be saponified with sodium hydroxide to give glycerol and fatty sodium salt or soap.
Typical plant sources include soybeans or palm. Animal-derived tallow is another source. Approximately 950,000 tons per year are produced in the United States and Europe; 350,000 tons of glycerol were produced per year in the U.S. alone from 2000 to 2004. The EU directive 2003/30/EC set a requirement that 5.75% of petroleum fuels were to be replaced with biofuel sources across all member states by 2010. It was projected in 2006 that by 2020, production would be six times more than demand, creating an excess of glycerol as a byproduct of biofuel production.
Glycerol from triglycerides is produced on a large scale, but |
https://en.wikipedia.org/wiki/Nroff | nroff (short for "new roff") is a text-formatting program on Unix and Unix-like operating systems. It produces output suitable for simple fixed-width printers and terminal windows. It is an integral part of the Unix help system, being used to format man pages for display.
nroff and the related troff were both developed from the original roff. While nroff was intended to produce output on terminals and line printers, troff was intended to produce output on typesetting systems. Both used the same underlying markup and a single source file could normally be used by nroff or troff without change.
History
nroff was written by Joe Ossanna for Version 2 Unix in assembly language, and was later ported to C.
It was a descendant of the RUNOFF program from CTSS, the first computerized text-formatting program, and is a predecessor of the Unix troff document processing system.
There is also a free software version of nroff in the groff package.
Variants
The Minix operating system, among others, uses a clone of nroff called cawf by Vic Abell, based on awf, the Amazingly Workable Formatter designed in awk by Henry Spencer. These are not full replacements for the nroff/troff suite of tools, but are sufficient for display and printing of basic documents and manual pages.
In addition, a simplified version of nroff is available in Ratfor source code form as an example in the book Software Tools by Brian Kernighan and P. J. Plauger.
See also
troff
groff
TeX
LaTeX
man page
Lout
References
External links
source code for Henry Spencer's AWF
troff/nroff quick reference
nroff source code in Illumos. Explanation by Bryan Cantrill
Troff
Assembly language software
Markup languages
Unix text processing utilities |
https://en.wikipedia.org/wiki/Joe%20Ossanna | Joseph Frank Ossanna, Jr. (December 10, 1928 – November 28, 1977) was an electrical engineer and computer programmer who worked as a member of the technical staff at the Bell Telephone Laboratories in Murray Hill, New Jersey. He became actively engaged in the software design of Multics (Multiplexed Information and Computing Service), a general-purpose operating system used at Bell.
Education and career
Ossanna received his Bachelor of Engineering (B.S.E.E.) from Wayne State University in 1952.
At Bell Telephone Labs, Ossanna was concerned with low-noise amplifier design, feedback amplifier design, satellite look-angle prediction, mobile radio fading theory, and statistical data processing. He was also concerned with the operation of the Murray Hill Computation Center and was actively engaged in the software design of Multics.
After learning how to program the PDP-7 computer, Ken Thompson, Dennis Ritchie, Joe Ossanna, and Rudd Canaday began to program the operating system that had been designed earlier by Thompson (Unics, later named Unix). Once they had written the file system, a set of basic utilities, and an assembler, the core of the Unix operating system was established. Doug McIlroy later wrote, "Ossanna, with the instincts of a motor pool sergeant, equipped our first lab and attracted the first outside users."
When the team got a Graphic Systems CAT phototypesetter for making camera-ready copy of professional articles for publication and patent applications, Ossanna wrote a version of nroff that would drive it. It was dubbed troff, for typesetter roff. So it was that in 1973 he authored the first version of troff for Unix entirely written in PDP-11 assembly language. However, two years later, Ossanna re-wrote the code in the C programming language. He had planned another rewrite which was supposed to improve its usability but this work was taken over by Brian Kernighan.
Ossanna was a member of the Association for Computing Machinery, Sigma Xi, and Tau Beta Pi.
He died |
https://en.wikipedia.org/wiki/JT%20Storage | JT Storage, Inc. (also known as JTS Corporation) was a maker of inexpensive IDE hard drives for personal computers based in San Jose, California. It was founded in 1994 by "Jugi" Tandon—the inventor of the double-sided floppy disk drive and founder of Tandon Corporation—and Tom Mitchell, a co-founder of Seagate and former president and Chief Operating Officer of both Seagate and Conner Peripherals.
The company later reverse-merged with Jack Tramiel's Atari Corporation in 1996, sold all Atari assets to Hasbro Interactive in 1998 and was finally declared bankrupt in 1999.
History
Early years and products
JTS initially focused on a new 3" form-factor drive for laptops. The 3" form factor allowed a larger drive capacity for laptops with the existing technology. Compaq was actively engaged in qualifying these drives and built several laptops with this form factor drive. Lack of a second source was a major obstacle for this new form factor to gain a foothold; JTS licensed the form factor to Western Digital to attempt to remedy this problem. Eventually, as 2.5" drives became cheaper to build, interest in the 3" form factor waned, and JTS and WD stopped the project in 1998.
JTS by then had become a source of cheap, medium-performance 3.5" drives with 5400 RPM spindles. The drives, produced in a factory in India (the factory was in the Madras Export Processing Zone in the suburbs of the Southern Indian city of Madras, now known as Chennai), were known for poor reliability. Failure rates were very high and quality control was inconsistent: good drives were very good, still running after 5 years, whereas bad drives almost always failed within a few weeks. Because of their low-tier reputation, JTS drives were rare in brand-name PCs and most frequently turned up in home-built and whitebox PCs. Product lines included Palladium and Champion internal IDE hard drives.
The basic design of their drives was done by Kalok for TEAC in the early 1990s. TEAC used the design as part o |
https://en.wikipedia.org/wiki/TYPSET%20and%20RUNOFF | TYPSET is an early document editor that was used with the 1964-released RUNOFF program, one of the earliest text formatting programs to see significant use.
Of the two earlier print/formatting programs, DITTO and TJ-2, only the latter had (and introduced) text justification; RUNOFF also added pagination.
The name RUNOFF, and similar names, led to other formatting program implementations. By 1982, Runoff had largely become associated with Digital Equipment Corporation and Unix computers. DEC used the terms VAX DSR and DSR to refer to VAX DIGITAL Standard Runoff.
History
CTSS
The original RUNOFF type-setting program for CTSS was written by Jerome H. Saltzer circa 1964. Bob Morris and Doug McIlroy translated that from MAD to BCPL. Morris and McIlroy then moved the BCPL version to Multics when the IBM 7094 on which CTSS ran was being shut down.
Multics
Documentation for the Multics version of RUNOFF described it as a program that "types out text segments in manuscript form."
Other versions and implementations
A later version of runoff for Multics was written in PL/I by Dennis Capps, in 1974. This runoff code was the ancestor of roff that was written for the fledgling Unix in assembly language by Ken Thompson.
Other versions of Runoff were developed for various computer systems, including Digital Equipment Corporation's PDP-11 minicomputer systems running RT-11, RSTS/E, or RSX; Digital's PDP-10; and VAX minicomputers running OpenVMS; as well as UNIVAC Series 90 mainframes using the EDT text editor under the VS/9 operating system. These different releases of Runoff typically had little in common except the convention of indicating a command to Runoff by beginning the line with a period.
The origin of IBM's SCRIPT (markup) software began in 1968 when "IBM contracted Stuart Madnick of MIT to write a simple document preparation ..." to run on CP/67. He modeled it on MIT's CTSS RUNOFF.
Background
RUNOFF was written in 1964 for the CTSS operating system by Jerome H. Saltzer in MAD and FAP.
It |
https://en.wikipedia.org/wiki/GNU%20Libtool | In computer programming, GNU Libtool is a software development tool, part of the GNU build system, consisting of a shell script created to address the software portability problem when compiling shared libraries from source code.
It hides the differences between computing platforms for the commands which compile shared libraries.
It provides a command-line interface that is identical across platforms and it executes the platform's native commands.
Rationale
Different operating systems handle shared libraries differently.
Some platforms do not use shared libraries at all.
It can be difficult to make a software program portable: the C compiler differs from system to system; certain library functions are missing on some systems; header files may have different names.
Libtool helps manage the creation of static and dynamic libraries on various Unix-like operating systems.
Libtool accomplishes this by abstracting the library-creation process, hiding differences between various systems (e.g. Linux systems vs. Solaris).
GNU Libtool is designed to simplify the process of compiling a computer program on a new system, by "encapsulating both the platform-specific dependencies, and the user interface, in a single script".
When porting a program to a new system, Libtool is designed so the porter need not read low-level documentation for the shared libraries to be built, rather just run a configure script (or equivalent).
Use
Libtool is used by Autoconf and Automake, two other portability tools in the GNU build system.
It can also be used directly.
Clones and derivatives
Since GNU Libtool was released, other free software projects have created drop-in replacements under different software licenses.
See also
GNU Compiler Collection
GNU build system
pkg-config
References
External links
Autobook homepage
Autotools Tutorial
Avoiding libtool minefields when cross-compiling
Autotools Mythbuster
Compiling tools
Libtool
Free computer libraries
Cross-platform |
https://en.wikipedia.org/wiki/News%20server | A news server is a collection of software used to handle Usenet articles. It may also refer to a computer itself which is primarily or solely used for handling Usenet. Access to Usenet is only available through news server providers.
Articles and posts
End users often use the term "posting" to refer to a single message or file posted to Usenet. For articles containing plain text, this is synonymous with an article. For binary content such as pictures and files, it is often necessary to split the content among multiple articles. Typically through the use of numbered Subject: headers, the multiple-article postings are automatically reassembled into a single unit by the newsreader. Most servers do not distinguish between single and multiple-part postings, dealing only at the level of the individual component articles.
Headers and overviews
Each news article contains a complete set of header lines, but in common use the term "headers" is also used when referring to the News Overview database. The overview is a list of the most frequently used headers, and additional information such as article sizes, typically retrieved by the client software using the NNTP command. Overviews make reading a newsgroup faster for both the client and server by eliminating the need to open each individual article to present them in list form.
If non-overview headers are required, such as for when using a kill file, it may still be necessary to use the slower method of reading all the complete article headers. Many clients are unable to do this, and limit filtering to what is available in the summaries.
News server attributes
Among the operators and users of commercial news servers, common concerns are the continually increasing storage and network capacity requirements and their effects on completion (the ability of a server to successfully receive all traffic), retention (the amount of time articles are made available to readers) and overall system performance. With the increasin |
https://en.wikipedia.org/wiki/InterNetNews | InterNetNews (INN) is a Usenet news server package, originally released by Rich Salz in 1991, and presented at the Summer 1992 USENIX conference in San Antonio, Texas. It was the first news server with integrated NNTP functionality.
While previous servers processed articles individually or in batches, innd is a single continuously running process that receives articles from the network, files them, and records what remote hosts should receive them. Readers can access articles directly from the disk in the same manner as B News and C News, but an included program, called nnrpd, also serves newsreaders that employ NNTP.
A later improvement was the Cyclical News Filesystem (CNFS), which sequentially stores articles in large on-disk buffers. This method, implemented by Scott Fritchie, greatly increased performance by eliminating the operating system overhead needed to deal with thousands of individual article files.
James Brister's innfeed program was also added to the package. Like innd, innfeed operates continuously to feed articles out to other servers, while the earlier innxmit processed them in batches. This combination allows articles to be received and redistributed with virtually no latency, and has substantially changed the nature of Usenet interaction by reducing the time for messages to be posted, read across the network and answered, from hours or days, to seconds or minutes. A similar earlier program, called nntplink, provided a comparable function, but it was produced independently.
INN is under active development. The package is maintained by volunteers, and development is hosted by the Internet Systems Consortium. The current maintainers of INN are Russ Allbery and the ISC.
Notes
References
External links
Rich Salz (1992). InterNetNews: Usenet transport for Internet sites.
Russ Allbery's INN site
ISC's home page for INN
INN source code
Usenet
Usenet servers
Software using the ISC license |
https://en.wikipedia.org/wiki/Radiosonde | A radiosonde is a battery-powered telemetry instrument carried into the atmosphere usually by a weather balloon that measures various atmospheric parameters and transmits them by radio to a ground receiver. Modern radiosondes measure or calculate the following variables: altitude, pressure, temperature, relative humidity, wind (both wind speed and wind direction), cosmic ray readings at high altitude and geographical position (latitude/longitude). Radiosondes measuring ozone concentration are known as ozonesondes.
Radiosondes may operate at a radio frequency of 403 MHz or 1680 MHz. A radiosonde whose position is tracked as it ascends to give wind speed and direction information is called a rawinsonde ("radar wind-sonde"). Most radiosondes have radar reflectors and are technically rawinsondes. A radiosonde that is dropped from an airplane and falls, rather than being carried by a balloon, is called a dropsonde. Radiosondes are an essential source of meteorological data, and hundreds are launched all over the world daily.
History
The first flights of aerological instruments were made in the second half of the 19th century with kites and meteographs, recording devices measuring pressure and temperature that were recovered after the experiment. This proved to be difficult because the kites were linked to the ground and were very difficult to manoeuvre in gusty conditions. Furthermore, the sounding was limited to low altitudes because of the link to the ground.
Gustave Hermite and Georges Besançon, from France, were the first in 1892 to use a balloon to fly the meteograph. In 1898, Léon Teisserenc de Bort organized at the Observatoire de Météorologie Dynamique de Trappes the first regular daily use of these balloons. Data from these launches showed that the temperature decreased with height up to a certain altitude, which varied with the season, and then stabilized above this altitude. De Bort's discovery of the tropopause and stratosphere was announced in 1902 |
https://en.wikipedia.org/wiki/L.%20E.%20J.%20Brouwer | Luitzen Egbertus Jan Brouwer (; ; 27 February 1881 – 2 December 1966), usually cited as L. E. J. Brouwer but known to his friends as Bertus, was a Dutch mathematician and philosopher who worked in topology, set theory, measure theory and complex analysis. Regarded as one of the greatest mathematicians of the 20th century, he is known as the founder of modern topology, particularly for establishing his fixed-point theorem and the topological invariance of dimension.
Brouwer also became a major figure in the philosophy of intuitionism, a constructivist school of mathematics which argues that math is a cognitive construct rather than a type of objective truth. This position led to the Brouwer–Hilbert controversy, in which Brouwer sparred with his formalist colleague David Hilbert. Brouwer's ideas were subsequently taken up by his student Arend Heyting and Hilbert's former student Hermann Weyl. In addition to his mathematical work, Brouwer also published the short philosophical tract Life, Art, and Mysticism (1905).
Biography
Brouwer was born to Dutch Protestant parents. Early in his career, Brouwer proved a number of theorems in the emerging field of topology. The most important were his fixed point theorem, the topological invariance of degree, and the topological invariance of dimension. Among mathematicians generally, the best known is the first one, usually referred to now as the Brouwer fixed point theorem. It is a corollary to the second, concerning the topological invariance of degree, which is the best known among algebraic topologists. The third theorem is perhaps the hardest.
Brouwer also proved the simplicial approximation theorem in the foundations of algebraic topology, which justifies the reduction to combinatorial terms, after sufficient subdivision of simplicial complexes, of the treatment of general continuous mappings. In 1912, at age 31, he was elected a member of the Royal Netherlands Academy of Arts and Sciences. He was an Invited Speaker of the |
https://en.wikipedia.org/wiki/OpenEXR | OpenEXR is a high-dynamic range, multi-channel raster file format, released as an open standard along with a set of software tools created by Industrial Light & Magic (ILM), under a free software license similar to the BSD license.
It is notable for supporting multiple channels of potentially different pixel sizes, including 32-bit unsigned integer, 32-bit and 16-bit floating point values, as well as various compression techniques which include lossless and lossy compression algorithms. It also has arbitrary channels and encodes multiple points of view such as left- and right-camera images.
Overview
A full technical introduction of the format is available on the OpenEXR website.
OpenEXR, or EXR for short, is a deep raster format developed by ILM and broadly used in the computer-graphics industry, both visual effects and animation.
OpenEXR's multi-resolution and arbitrary channel format makes it appealing for compositing, as it alleviates several painful elements of the process. Since it can store arbitrary channels—specular, diffuse, alpha, RGB, normals, and various other types—in one file, it takes away the need to store this information in separate files. The multi-channel concept also reduces the necessity to "bake" in the aforementioned data to the final image. If a compositor is not happy with the current level of specularity, they can adjust that specific channel.
OpenEXR's API makes tool development relatively easy for developers. Since there are almost never two identical production pipelines, custom tools always need to be developed to address problems (e.g. image-manipulation issues). OpenEXR's library allows quick and easy access to the image's attributes such as tiles and channels.
The OpenEXR library is developed in C++ and is available in source format as well as compiled format for Microsoft Windows, macOS and Linux. Python bindings for the library are also available for version 2.x.
History
OpenEXR was created by ILM in 1999 and released to t |
https://en.wikipedia.org/wiki/C%20News | C News is a news server package, written by Geoff Collyer, assisted by Henry Spencer, at the University of Toronto as a replacement for B News. It was presented at the Winter 1987 USENIX conference in Washington, D.C.
Functionally, the operation of C News is very much like that of B News. One major difference was that C News was written with portability in mind. It ran on many variants of Unix and even MS-DOS. The relaynews program that handled article filing and feeding was carefully optimized and designed to process articles in batches, while B News processed one article per program invocation. The authors claimed that relaynews could process articles 19 times as quickly as B News.
In 1992, Collyer gave C News a new index facility called NOV (or News Overview). This allowed newsreaders to rapidly retrieve header and threading information with relatively little load on the server. Virtually all news servers continue to use this method in the form of the NNTP XOVER command. Development of C News stopped about 1995, and the package was largely superseded by INN.
External links
Geoff Collyer and Henry Spencer (1987). News Need Not Be Slow.
Mark Linimon (1994). C News Frequently Asked Questions.
C News source code
Usenet
Usenet servers |
https://en.wikipedia.org/wiki/Network%20News%20Transfer%20Protocol | The Network News Transfer Protocol (NNTP) is an application protocol used for transporting Usenet news articles (netnews) between news servers, and for reading/posting articles by the end user client applications. Brian Kantor of the University of California, San Diego, and Phil Lapsley of the University of California, Berkeley, wrote RFC 977, the specification for the Network News Transfer Protocol, in March 1986. Other contributors included Stan O. Barber from the Baylor College of Medicine and Erik Fair of Apple Computer.
Usenet was originally designed based on the UUCP network, with most article transfers taking place over direct point-to-point telephone links between news servers, which were powerful time-sharing systems. Readers and posters logged into these computers reading the articles directly from the local disk.
As local area networks and Internet participation proliferated, it became desirable to allow newsreaders to be run on personal computers connected to local networks. The resulting protocol was NNTP, which resembled the Simple Mail Transfer Protocol (SMTP) but was tailored for exchanging newsgroup articles.
A newsreader, also known as a news client, is a software application that reads articles on Usenet, either directly from the news server's disks or via the NNTP.
The well-known TCP port 119 is reserved for NNTP. Well-known TCP port 433 (NNSP) may be used when doing a bulk transfer of articles from one server to another. When clients connect to a news server with Transport Layer Security (TLS), TCP port 563 is often used. This is sometimes referred to as NNTPS. Alternatively, a plain-text connection over port 119 may be changed to use TLS via the STARTTLS command.
In October 2006, the IETF released RFC 3977, which updates NNTP and codifies many of the additions made over the years since RFC 977. At the same time, the IETF also released RFC 4642, which specifies the use of Transport Layer Security (TLS) via NNTP over STARTTLS.
Network News Reader Protocol
During |
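As an illustration of the ports mentioned above, the following sketch opens a plain-text connection on the well-known NNTP port 119 and reads the server greeting; "news.example.com" is a placeholder host, not a real server.
import socket
# Connect to an NNTP server on the standard plain-text port (119).
with socket.create_connection(("news.example.com", 119), timeout=10) as sock:
    greeting = sock.recv(4096).decode("utf-8", errors="replace")
    print(greeting)            # typically a "200" (posting allowed) or "201" (no posting) reply
    sock.sendall(b"QUIT\r\n")  # NNTP commands are terminated by CRLF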
https://en.wikipedia.org/wiki/Riemann%20surface | In mathematics, particularly in complex analysis, a Riemann surface is a one-dimensional complex manifold.
Loosely speaking, this means that any Riemann surface is formed by gluing together open subsets of the complex plane C using holomorphic gluing maps.
Examples of Riemann surfaces include graphs of multivalued functions like √z or log(z), e.g. the subset of pairs (z, w) ∈ C² with w = log(z).
Every Riemann surface is a surface: a two-dimensional real manifold, but it contains more structure (specifically a complex structure). Conversely, a two-dimensional real manifold can be turned into a Riemann surface (usually in several inequivalent ways) if and only if it is orientable and metrizable. So the sphere and torus admit complex structures, but the Möbius strip, Klein bottle and real projective plane do not.
Every compact Riemann surface is a complex algebraic curve by Chow's theorem and the Riemann–Roch theorem.
Riemann surfaces were first studied by and are named after Bernhard Riemann.
Definitions
There are several equivalent definitions of a Riemann surface.
A Riemann surface X is a complex manifold of complex dimension one. This means that X is a connected Hausdorff space that is endowed with an atlas of charts to the open unit disk of the complex plane: for every point x ∈ X there is a neighbourhood of x that is homeomorphic to the open unit disk of the complex plane, and the transition maps between two overlapping charts are required to be holomorphic.
A Riemann surface is an oriented manifold of (real) dimension two – a two-sided surface – together with a conformal structure. Again, manifold means that locally at any point x of X, the space is homeomorphic to a subset of the real plane. The supplement "Riemann" signifies that X is endowed with an additional structure which allows angle measurement on the manifold, namely an equivalence class of so-called Riemannian metrics. Two such metrics are considered equivalent if the angles they measur |
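Restating the chart-compatibility condition from the first definition above in symbols (added for clarity; the chart names are illustrative): for two charts (U_i, \varphi_i) and (U_j, \varphi_j) with overlapping domains, the transition map
\varphi_j \circ \varphi_i^{-1} \colon \varphi_i(U_i \cap U_j) \to \varphi_j(U_i \cap U_j)
is required to be holomorphic.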
https://en.wikipedia.org/wiki/Roll-to-roll%20processing | In the field of electronic devices, roll-to-roll processing, also known as web processing, reel-to-reel processing or R2R, is the process of creating electronic devices on a roll of flexible plastic, metal foil, or flexible glass. In other fields predating this use, it can refer to any process of applying coating, printing, or performing other processes starting with a roll of a flexible material and re-reeling after the process to create an output roll. These processes, and others such as sheeting, can be grouped together under the general term converting. When the rolls of material have been coated, laminated or printed they can be subsequently slit to their finished size on a slitter rewinder.
In electronic devices
Large circuits made with thin-film transistors and other devices can be patterned onto these large substrates, which can be up to a few metres wide and long. Some of the devices can be patterned directly, much like an inkjet printer deposits ink. For most semiconductors, however, the devices must be patterned using photolithography techniques.
Roll-to-roll processing of large-area electronic devices reduces manufacturing cost. Most notable would be solar cells, which are still prohibitively expensive for most markets due to the high cost per unit area of traditional bulk (mono- or polycrystalline) silicon manufacturing. Other applications could arise which take advantage of the flexible nature of the substrates, such as electronics embedded into clothing, large-area flexible displays, and roll-up portable displays.
LED (Light Emitting Diode)
Inorganic LED - Flexible LED strips are commonly made in 25 m, 50 m, 100 m, or even longer lengths using a roll-to-roll process. Long neon-style LED tubes use such flexible strips, encapsulated in a diffusing PVC or silicone sleeve.
Organic LED (OLED) - OLED panels for foldable phone screens are adopting roll-to-roll processing technology.
Thin-film cells
A crucial issue for a roll-to-roll thin-film ce |
https://en.wikipedia.org/wiki/Spin%20network | In physics, a spin network is a type of diagram which can be used to represent states and interactions between particles and fields in quantum mechanics. From a mathematical perspective, the diagrams are a concise way to represent multilinear functions and functions between representations of matrix groups. The diagrammatic notation can thus greatly simplify calculations.
Roger Penrose described spin networks in 1971. Spin networks have since been applied to the theory of quantum gravity by Carlo Rovelli, Lee Smolin, Jorge Pullin, Rodolfo Gambini and others.
Spin networks can also be used to construct a particular functional on the space of connections which is invariant under local gauge transformations.
Definition
Penrose's definition
A spin network, as described in Penrose (1971), is a kind of diagram in which each line segment represents the world line of a "unit" (either an elementary particle or a compound system of particles). Three line segments join at each vertex. A vertex may be interpreted as an event in which either a single unit splits into two or two units collide and join into a single unit. Diagrams whose line segments are all joined at vertices are called closed spin networks. Time may be viewed as going in one direction, such as from the bottom to the top of the diagram, but for closed spin networks the direction of time is irrelevant to calculations.
Each line segment is labelled with an integer called a spin number. A unit with spin number n is called an n-unit and has angular momentum nħ/2, where ħ is the reduced Planck constant. For bosons, such as photons and gluons, n is an even number. For fermions, such as electrons and quarks, n is odd.
Given any closed spin network, a non-negative integer can be calculated which is called the norm of the spin network. Norms can be used to calculate the probabilities of various spin values. A network whose norm is zero has zero probability of occurrence. The rules for calculating norms and probabil |
https://en.wikipedia.org/wiki/UUCP | UUCP (Unix-to-Unix Copy) is a suite of computer programs and protocols allowing remote execution of commands and transfer of files, email and netnews between computers.
A command named uucp is one of the programs in the suite; it provides a user interface for requesting file copy operations. The UUCP suite also includes uux (user interface for remote command execution), uucico (the communication program that performs the file transfers), uustat (reports statistics on recent activity), uuxqt (execute commands sent from remote machines), and uuname (reports the UUCP name of the local system). Some versions of the suite include uuencode/uudecode (convert 8-bit binary files to 7-bit text format and vice versa).
Although UUCP was originally developed on Unix in the 1970s and 1980s, and is most closely associated with Unix-like systems, UUCP implementations exist for several non-Unix-like operating systems, including DOS, OS/2, OpenVMS (for VAX hardware only), AmigaOS, classic Mac OS, and even CP/M.
History
UUCP was originally written at AT&T Bell Laboratories by Mike Lesk. By 1978 it was in use on 82 UNIX machines inside the Bell system, primarily for software distribution. It was released in 1979 as part of Version 7 Unix.
The first UUCP emails from the U.S. arrived in the United Kingdom in 1979 and email between the UK, the Netherlands and Denmark started in 1980, becoming a regular service via EUnet in 1982.
The original UUCP was rewritten by AT&T researchers Peter Honeyman, David A. Nowitz, and Brian E. Redman around 1983. The rewrite is referred to as HDB or HoneyDanBer uucp, which was later enhanced, bug fixed, and repackaged as BNU UUCP ("Basic Network Utilities").
Each of these versions was distributed as proprietary software, which inspired Ian Lance Taylor to write a new free software version from scratch in 1991.
Taylor UUCP was released under the GNU General Public License. Taylor UUCP addressed security holes which allowed some of the original network worms to remotely execute unexpected she |
https://en.wikipedia.org/wiki/Multi-exposure%20HDR%20capture | In photography and videography, multi-exposure HDR capture is a technique that creates high dynamic range (HDR) images (or extended dynamic range images) by taking and combining multiple exposures of the same subject matter at different exposure levels. Combining multiple images in this way results in an image with a greater dynamic range than what would be possible by taking one single image. The technique can also be used to capture video by taking and combining multiple exposures for each frame of the video. The term "HDR" is used frequently to refer to the process of creating HDR images from multiple exposures. Many smartphones have an automated HDR feature that relies on computational imaging techniques to capture and combine multiple exposures.
A single image captured by a camera provides a finite range of luminosity inherent to the medium, whether it is a digital sensor or film. Outside this range, tonal information is lost and no features are visible; tones that exceed the range are "burned out" and appear pure white in the brighter areas, while tones that fall below the range are "crushed" and appear pure black in the darker areas. The ratio between the maximum and the minimum tonal values that can be captured in a single image is known as the dynamic range. In photography, dynamic range is measured in exposure value (EV) differences, also known as stops.
The human eye's response to light is non-linear: halving the light level does not halve the perceived brightness of a space; it makes it look only slightly dimmer. For most illumination levels, the response is approximately logarithmic. Human eyes adapt fairly rapidly to changes in light levels. HDR can thus produce images that look more like what a human sees when looking at the subject.
This technique can be applied to produce images that preserve local contrast for a natural rendering, or exaggerate local contrast for artistic effect. HDR is useful for recording many real-world scenes containing very |
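A toy sketch of the multi-exposure idea described above, not any particular camera pipeline: three bracketed exposures of the same scene, one EV apart, are scaled back by their exposure offsets and averaged into a single estimate with a wider usable range. Saturation weighting, alignment and tone mapping, which real pipelines need, are deliberately omitted.
import numpy as np
def merge_exposures(images, ev_offsets):
    """images: float arrays scaled to [0, 1]; ev_offsets: EV relative to the base exposure."""
    acc = np.zeros_like(images[0], dtype=np.float64)
    for img, ev in zip(images, ev_offsets):
        acc += img.astype(np.float64) / (2.0 ** ev)  # undo the exposure difference
    return acc / len(images)                         # simple average of the radiance estimates
# Three synthetic "exposures" of the same gradient scene, 1 EV apart.
base = np.linspace(0.0, 1.0, 16).reshape(4, 4)
exposures = [np.clip(base * (2.0 ** ev), 0.0, 1.0) for ev in (-1, 0, 1)]
hdr = merge_exposures(exposures, [-1, 0, 1])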
https://en.wikipedia.org/wiki/Liquefaction | In materials science, liquefaction is a process that generates a liquid from a solid or a gas or that generates a non-liquid phase which behaves in accordance with fluid dynamics.
It occurs both naturally and artificially. As an example of the latter, a "major commercial application of liquefaction is the liquefaction of air to allow separation of the constituents, such as oxygen, nitrogen, and the noble gases." Another is the conversion of solid coal into a liquid form usable as a substitute for liquid fuels.
Geology
In geology, soil liquefaction refers to the process by which water-saturated, unconsolidated sediments are transformed into a substance that acts like a liquid, often in an earthquake. Soil liquefaction was blamed for building collapses in the city of Palu, Indonesia in October 2018.
In a related phenomenon, liquefaction of bulk materials in cargo ships may cause a dangerous shift in the load.
Physics and chemistry
In physics and chemistry, the phase transitions from solid and gas to liquid (melting and condensation, respectively) may be referred to as liquefaction. The melting point (sometimes called liquefaction point) is the temperature and pressure at which a solid becomes a liquid. In commercial and industrial situations, the process of condensing a gas to liquid is sometimes referred to as liquefaction of gases.
Coal
Coal liquefaction is the production of liquid fuels from coal using a variety of industrial processes.
Dissolution
Liquefaction is also used in commercial and industrial settings to refer to mechanical dissolution of a solid by mixing, grinding or blending with a liquid.
Food preparation
In kitchen or laboratory settings, solids may be chopped into smaller parts sometimes in combination with a liquid, for example in food preparation or laboratory use. This may be done with a blender, or liquidiser in British English.
Irradiation
Liquefaction of silica and silicate glasses occurs on electron beam irradiation of nanos |
https://en.wikipedia.org/wiki/Chorology | Chorology (from Greek khōros, "place, space", and -logia) can mean
the study of the causal relations between geographical phenomena occurring within a particular region
the study of the spatial distribution of organisms (biogeography).
In geography, the term was first used by Strabo and was later popularized by Ferdinand von Richthofen. In the twentieth century, Richard Hartshorne took up the notion again.
See also
Chorography
Khôra
References
Biogeography |
https://en.wikipedia.org/wiki/Overfitting | In mathematical modeling, overfitting is "the production of an analysis that corresponds too closely or exactly to a particular set of data, and may therefore fail to fit to additional data or predict future observations reliably". An overfitted model is a mathematical model that contains more parameters than can be justified by the data. In a mathematical sense, these parameters represent the degree of a polynomial. The essence of overfitting is to have unknowingly extracted some of the residual variation (i.e., the noise) as if that variation represented underlying model structure.
Underfitting occurs when a mathematical model cannot adequately capture the underlying structure of the data. An under-fitted model is a model where some parameters or terms that would appear in a correctly specified model are missing. Under-fitting would occur, for example, when fitting a linear model to non-linear data. Such a model will tend to have poor predictive performance.
The possibility of over-fitting exists because the criterion used for selecting the model is not the same as the criterion used to judge the suitability of a model. For example, a model might be selected by maximizing its performance on some set of training data, and yet its suitability might be determined by its ability to perform well on unseen data; then over-fitting occurs when a model begins to "memorize" training data rather than "learning" to generalize from a trend.
As an extreme example, if the number of parameters is the same as or greater than the number of observations, then a model can perfectly predict the training data simply by memorizing the data in its entirety. (For an illustration, see Figure 2.) Such a model, though, will typically fail severely when making predictions.
The potential for overfitting depends not only on the number of parameters and data but also the conformability of the model structure with the data shape, and the magnitude of model error compared to the expected lev |
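The extreme case described above, a model with as many parameters as observations, is easy to check numerically. In this illustrative sketch (not from the source article), a degree-9 polynomial fitted to ten noisy samples of a straight line reproduces the training data almost exactly but predicts unseen points worse than the degree-1 fit.
import numpy as np
rng = np.random.default_rng(0)
x = np.linspace(0, 1, 10)
y = 2 * x + 1 + rng.normal(scale=0.1, size=x.size)         # ten noisy samples of a line
for degree in (1, 9):                                       # degree 9: ten coefficients for ten points
    coeffs = np.polyfit(x, y, degree)
    train_err = np.mean((np.polyval(coeffs, x) - y) ** 2)
    x_new = np.linspace(0, 1, 100)                          # unseen inputs
    test_err = np.mean((np.polyval(coeffs, x_new) - (2 * x_new + 1)) ** 2)
    print(degree, train_err, test_err)                      # expect near-zero train error but larger test error at degree 9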
https://en.wikipedia.org/wiki/Automation | Automation describes a wide range of technologies that reduce human intervention in processes, namely by predetermining decision criteria, subprocess relationships, and related actions, as well as embodying those predeterminations in machines. Automation has been achieved by various means including mechanical, hydraulic, pneumatic, electrical, electronic devices, and computers, usually in combination. Complicated systems, such as modern factories, airplanes, and ships, typically use combinations of all of these techniques. The benefits of automation include labor savings, reduced waste, savings in electricity costs, savings in material costs, and improvements to quality, accuracy, and precision.
Automation includes the use of various equipment and control systems such as machinery, processes in factories, boilers, and heat-treating ovens, switching on telephone networks, steering, and stabilization of ships, aircraft, and other applications and vehicles with reduced human intervention. Examples range from a household thermostat controlling a boiler to a large industrial control system with tens of thousands of input measurements and output control signals. Automation has also found a home in the banking industry. It can range from simple on-off control to multi-variable high-level algorithms in terms of control complexity.
In the simplest type of an automatic control loop, a controller compares a measured value of a process with a desired set value and processes the resulting error signal to change some input to the process, in such a way that the process stays at its set point despite disturbances. This closed-loop control is an application of negative feedback to a system. The mathematical basis of control theory was begun in the 18th century and advanced rapidly in the 20th. The term automation, inspired by the earlier word automatic (coming from automaton), was not widely used before 1947, when Ford established an automation department. It was during this time |
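As a toy illustration of the closed-loop idea in the last paragraph (an assumption-laden sketch, not any specific industrial controller), a proportional controller repeatedly compares the measured value with its set point and feeds the error back into the process.
set_point = 20.0                     # desired value
value = 15.0                         # measured process value
gain = 0.5                           # proportional controller gain
for _ in range(20):
    error = set_point - value        # compare measurement with the set value
    control = gain * error           # controller output
    value += 0.2 * control           # the process responds to the control input
# value has moved from 15.0 most of the way toward the 20.0 set point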
https://en.wikipedia.org/wiki/GNU%20Units | GNU Units is a cross-platform computer program for conversion of units of quantities. It has a database of measurement units, including esoteric and historical units. This for instance allows conversion of velocities specified in furlongs per fortnight, and pressures specified in tons per acre. Output units are checked for consistency with the input, allowing verification of conversion of complex expressions.
History
GNU Units was written by Adrian Mariano as an implementation of the units utility included with the Unix operating system. It was originally available under a permissive license. The GNU variant is distributed under the GPL although the FreeBSD project maintains a free fork of units from before the license change.
units (Unix utility)
The original units program has been a standard part of Unix since the early Bell Laboratories versions.
Source code for a version very similar to the original is available from the Heirloom Project.
The GNU implementation
GNU units includes several extensions to the original version, including
Exponents can be written with ^ or **.
Exponents can be larger than 9 if written with ^ or **.
Rational and decimal exponents are supported.
Sums of units can be converted.
Conversions can be made to sums of units, termed unit lists (e.g., from degrees to degrees, minutes, and seconds).
Units that measure reciprocal dimensions can be converted (e.g., S to megohm).
Parentheses for grouping are supported. This sometimes allows more natural expressions, such as in the example given in Complex units expressions.
Roots of units can be computed.
Nonlinear units conversions (e.g., °F to °C) are supported.
Functions such as sin, cos, ln, log, and log2 are included.
A script for updating the currency conversions is included; the script requires Python.
Units definitions, including nonlinear conversions and unit lists, are user extensible.
The plain text database definitions.units is a good reference in |
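The conversion named in the lead, furlongs per fortnight to metres per second, can be checked by hand; the short sketch below is plain Python arithmetic and is not part of GNU Units itself.
furlong_m = 660 * 0.3048          # one furlong is 660 feet, i.e. 201.168 m
fortnight_s = 14 * 24 * 3600      # one fortnight in seconds
print(furlong_m / fortnight_s)    # about 1.66e-4 m/s per furlong/fortnight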
https://en.wikipedia.org/wiki/Filter%20design | Filter design is the process of designing a signal processing filter that satisfies a set of requirements, some of which may be conflicting. The purpose is to find a realization of the filter that meets each of the requirements to a sufficient degree to make it useful.
The filter design process can be described as an optimization problem where each requirement contributes to an error function that should be minimized. Certain parts of the design process can be automated, but normally an experienced electrical engineer is needed to get a good result.
The design of digital filters is a deceptively complex topic. Although filters are easily understood and calculated, the practical challenges of their design and implementation are significant and are the subject of advanced research.
Typical design requirements
Typical requirements which are considered in the design process are:
The filter should have a specific frequency response
The filter should have a specific phase shift or group delay
The filter should have a specific impulse response
The filter should be causal
The filter should be stable
The filter should be localized (pulse or step inputs should result in finite time outputs)
The computational complexity of the filter should be low
The filter should be implemented in particular hardware or software
The frequency function
An important parameter is the required frequency response.
In particular, the steepness and complexity of the response curve is a deciding factor for the filter order and feasibility.
A first-order recursive filter will only have a single frequency-dependent component. This means that the slope of the frequency response is limited to 6 dB per octave. For many purposes, this is not sufficient. To achieve steeper slopes, higher-order filters are required.
In relation to the desired frequency function, there may also be an accompanying weighting function, which describes, for each frequency, how important it is that the resultin |
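The 6 dB-per-octave figure quoted above for a first-order recursive filter can be checked numerically. The sketch below is an added illustration, not part of the source article: it evaluates the magnitude response of the one-pole low-pass filter y[n] = a*x[n] + (1 - a)*y[n-1] at two frequencies an octave apart, both well above the cutoff.
import math
def one_pole_gain(a, freq, fs):
    """Magnitude response of y[n] = a*x[n] + (1-a)*y[n-1] at freq hertz, sample rate fs."""
    w = 2 * math.pi * freq / fs
    # H(z) = a / (1 - (1-a) z^-1), evaluated on the unit circle z = e^{jw}
    den = math.sqrt((1 - (1 - a) * math.cos(w)) ** 2 + ((1 - a) * math.sin(w)) ** 2)
    return a / den
fs, a = 48000.0, 0.01
g1 = one_pole_gain(a, 2000.0, fs)
g2 = one_pole_gain(a, 4000.0, fs)          # one octave higher
print(20 * math.log10(g2 / g1))            # close to -6 dB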
https://en.wikipedia.org/wiki/Analog%20sampled%20filter | An analog sampled filter an electronic filter that is a hybrid between an analog and a digital filter. The input is an analog signal, and usually stored in capacitors. The time domain is discrete, however. Distinct analog samples are shifted through an array of holding capacitors as in a bucket brigade. Analog adders and amplifiers do the arithmetic in the signal domain, just as in an analog computer.
Note that these filters are subject to aliasing phenomena just like a digital filter, and anti-aliasing filters will usually be required.
Companies such as Linear Technology and Maxim produce integrated circuits that implement this functionality. Filters up to the 8th order may be implemented using a single chip. Some are fully configurable; some are pre-configured, usually as low-pass filters.
Due to the high filter order that can be achieved in an easy and stable manner, single chip analog sampled filters are often used for implementing anti-aliasing filters for digital filters. The analog sampled filter will in its turn need yet another anti-aliasing filter, but this can often be implemented as a simple 1st order low-pass analog filter consisting of one series resistor and one capacitor to ground.
Linear filters
Electronic circuits |
https://en.wikipedia.org/wiki/Mathematical%20physics | Mathematical physics refers to the development of mathematical methods for application to problems in physics. The Journal of Mathematical Physics defines the field as "the application of mathematics to problems in physics and the development of mathematical methods suitable for such applications and for the formulation of physical theories". An alternative definition would also include those mathematics that are inspired by physics (also known as physical mathematics).
Scope
There are several distinct branches of mathematical physics, and these roughly correspond to particular historical periods.
Classical mechanics
The rigorous, abstract and advanced reformulation of Newtonian mechanics adopting the Lagrangian mechanics and the Hamiltonian mechanics even in the presence of constraints. Both formulations are embodied in analytical mechanics and lead to understanding the deep interplay of the notions of symmetry and conserved quantities during the dynamical evolution, as embodied within the most elementary formulation of Noether's theorem. These approaches and ideas have been extended to other areas of physics as statistical mechanics, continuum mechanics, classical field theory and quantum field theory. Moreover, they have provided several examples and ideas in differential geometry (e.g. several notions in symplectic geometry and vector bundle).
Partial differential equations
Following mathematics: the theory of partial differential equations, variational calculus, Fourier analysis, potential theory, and vector analysis are perhaps most closely associated with mathematical physics. These were developed intensively from the second half of the 18th century (by, for example, D'Alembert, Euler, and Lagrange) until the 1930s. Physical applications of these developments include hydrodynamics, celestial mechanics, continuum mechanics, elasticity theory, acoustics, thermodynamics, electricity, magnetism, and aerodynamics.
Quantum theory
The theory of |
https://en.wikipedia.org/wiki/Time%20domain | Time domain refers to the analysis of mathematical functions, physical signals or time series of economic or environmental data, with respect to time. In the time domain, the signal or function's value is known for all real numbers, for the case of continuous time, or at various separate instants in the case of discrete time. An oscilloscope is a tool commonly used to visualize real-world signals in the time domain. A time-domain graph shows how a signal changes with time, whereas a frequency-domain graph shows how much of the signal lies within each given frequency band over a range of frequencies.
Though most precisely referring to time in physics, the term time domain may occasionally informally refer to position in space when dealing with spatial frequencies, as a substitute for the more precise term spatial domain.
Origin of term
The use of the contrasting terms time domain and frequency domain developed in U.S. communication engineering in the late 1940s, with the terms appearing together without definition by 1950. When an analysis uses the second or one of its multiples as a unit of measurement, it is in the time domain. When the analysis instead uses reciprocal units such as the hertz, it is in the frequency domain.
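As an illustrative sketch (not part of the article), the same signal can be inspected on a time axis measured in seconds and on a frequency axis measured in hertz; the discrete Fourier transform moves between the two views:

```python
# Illustrative sketch: a 50 Hz sine sampled at 1000 Hz, viewed in the time
# domain (seconds) and in the frequency domain (hertz) via the DFT.
import numpy as np

fs = 1000                                # samples per second (assumed)
t = np.arange(0, 1, 1 / fs)              # time axis in seconds
x = np.sin(2 * np.pi * 50 * t)           # time-domain signal

X = np.fft.rfft(x)                       # frequency-domain representation
f = np.fft.rfftfreq(len(x), d=1 / fs)    # frequency axis in hertz
print(f[np.argmax(np.abs(X))])           # 50.0 -- the dominant frequency
```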
See also
Frequency domain
Fourier transform
Laplace transform
Blackman–Tukey transform
References
Time domain analysis |
https://en.wikipedia.org/wiki/Trinitron | Trinitron was Sony's brand name for its line of aperture-grille-based CRTs used in television sets and computer monitors, one of the first television systems to enter the market since the 1950s. Constant improvement in the basic technology and attention to overall quality allowed Sony to charge a premium for Trinitron devices into the 1990s.
Patent protection on the basic Trinitron design ran out in 1996, and it quickly faced a number of competitors at much lower prices.
The name Trinitron was derived from trinity, meaning the union of three, and tron from electron tube, after the way that the Trinitron combined the three separate electron guns of other CRT designs into one.
History
Color television
Color television had been demonstrated since the 1920s, starting with John Logie Baird's system. However, it was only in the late 1940s that it was perfected by both CBS and RCA. At the time, a number of systems were being proposed that used separate red, green and blue signals (RGB), broadcast in succession. Most systems broadcast entire frames in sequence, with a colored filter (or "gel") that rotated in front of an otherwise conventional black and white television tube. Because they broadcast separate signals for the different colors, all of these systems were incompatible with existing black and white sets. Another problem was that the mechanical filter made them flicker unless very high refresh rates were used. In spite of these problems, the United States Federal Communications Commission selected a sequential-frame 144 frame/s standard from CBS as their color broadcast standard in 1950.
RCA worked along different lines entirely, using the luminance-chrominance system. This system did not directly encode or transmit the RGB signals; instead it combined these colors into one overall brightness figure, the "luminance". Luminance closely matched the black and white signal of existing broadcasts, allowing it to be displayed on existing televisions. This was a major advantage ove |
https://en.wikipedia.org/wiki/9 | 9 (nine) is the natural number following 8 and preceding 10.
Evolution of the Hindu–Arabic digit
Circa 300 BCE, as part of the Brahmi numerals, various Indians wrote a digit 9 similar in shape to the modern closing question mark without the bottom dot. The Kshatrapa, Andhra and Gupta started curving the bottom vertical line, coming up with a 3-look-alike. The Nagari continued the bottom stroke to make a circle and enclose the 3-look-alike, in much the same way that the sign @ encircles a lowercase a. As time went on, the enclosing circle became bigger and its line continued beyond the circle downwards, as the 3-look-alike became smaller. Soon, all that was left of the 3-look-alike was a squiggle. The Arabs simply connected that squiggle to the downward stroke at the middle, and subsequent European change was purely cosmetic.
While the shape of the glyph for the digit 9 has an ascender in most modern typefaces, in typefaces with text figures the character usually has a descender.
The modern digit resembles an inverted 6. To disambiguate the two on objects and documents that can be inverted, they are often underlined. Another distinction from the 6 is that it is sometimes handwritten with two strokes and a straight stem, resembling a raised lower-case letter q. On a seven-segment display, the number 9 can be constructed either with a hook at the end of its stem or without one. Most LCD calculators use the former, but some VFD models use the latter.
Mathematics
Nine is the fourth composite number, and the first composite number that is odd. Nine is the third square number (3²), and the second non-unitary square prime of the form p², and the first that is odd, with all subsequent squares of this form odd as well. Nine has the even aliquot sum of 4, and with a composite number sequence of two (9, 4, 3, 1, 0) within the 3-aliquot tree. There are nine Heegner numbers, or square-free positive integers that yield an imaginary quadratic field whose ring |
https://en.wikipedia.org/wiki/Differentiated%20services | Differentiated services or DiffServ is a computer networking architecture that specifies a mechanism for classifying and managing network traffic and providing quality of service (QoS) on modern IP networks. DiffServ can, for example, be used to provide low-latency to critical network traffic such as voice or streaming media while providing best-effort service to non-critical services such as web traffic or file transfers.
DiffServ uses a 6-bit differentiated services code point (DSCP) in the 8-bit differentiated services field (DS field) in the IP header for packet classification purposes. The DS field replaces the outdated IPv4 TOS field.
Background
Modern data networks carry many different types of services, including voice, video, streaming music, web pages and email. Many of the proposed QoS mechanisms that allowed these services to co-exist were both complex and failed to scale to meet the demands of the public Internet. In December 1998, the IETF replaced the TOS and IP precedence fields in the IPv4 header with the DS field. In the IPv6 header the DS field is part of the Traffic Class field where it occupies the 6 most significant bits.
In the DS field, a range of eight values (class selectors) is used for backward compatibility with the former IPv4 IP precedence field. Today, DiffServ has largely supplanted TOS and other layer-3 QoS mechanisms, such as integrated services (IntServ), as the primary architecture routers use to provide QoS.
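A minimal sketch of how the DSCP sits inside the DS field (not from the article; the Expedited Forwarding code point 46 is used as an assumed example, and IP_TOS support varies by operating system):

```python
# Sketch: a DSCP value occupies the upper six bits of the 8-bit DS field,
# so it is shifted left by two when written into the legacy TOS byte.
# DSCP 46 (Expedited Forwarding) is an assumed example.
import socket

dscp = 46                    # Expedited Forwarding PHB
tos_byte = dscp << 2         # 0xB8: DSCP in bits 7..2, ECN bits left at zero

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, tos_byte)  # mark outgoing packets
```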
Traffic management mechanisms
DiffServ is a coarse-grained, class-based mechanism for traffic management. In contrast, IntServ is a fine-grained, flow-based mechanism. DiffServ relies on a mechanism to classify and mark packets as belonging to a specific class. DiffServ-aware routers implement per-hop behaviors (PHBs), which define the packet-forwarding properties associated with a class of traffic. Different PHBs may be defined to offer, for example, low-loss or low-latency service.
Rather than differentiating networ |
https://en.wikipedia.org/wiki/Probabilistic%20method | In mathematics, the probabilistic method is a nonconstructive method, primarily used in combinatorics and pioneered by Paul Erdős, for proving the existence of a prescribed kind of mathematical object. It works by showing that if one randomly chooses objects from a specified class, the probability that the result is of the prescribed kind is strictly greater than zero. Although the proof uses probability, the final conclusion is determined for certain, without any possible error.
This method has now been applied to other areas of mathematics such as number theory, linear algebra, and real analysis, as well as in computer science (e.g. randomized rounding), and information theory.
Introduction
If every object in a collection of objects fails to have a certain property, then the probability that a random object chosen from the collection has that property is zero.
Similarly, showing that the probability is (strictly) less than 1 can be used to prove the existence of an object that does not satisfy the prescribed properties.
Another way to use the probabilistic method is by calculating the expected value of some random variable. If it can be shown that the random variable can take on a value less than the expected value, this proves that the random variable can also take on some value greater than the expected value.
Alternatively, the probabilistic method can also be used to guarantee the existence of a desired element in a sample space with a value that is greater than or equal to the calculated expected value, since the non-existence of such element would imply every element in the sample space is less than the expected value, a contradiction.
Common tools used in the probabilistic method include Markov's inequality, the Chernoff bound, and the Lovász local lemma.
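A worked first-moment computation in the spirit described above (an illustrative instance, not taken from the article):

```latex
% Illustrative first-moment argument (not taken from the article).
% Color each edge of the complete graph $K_n$ red or blue independently and
% uniformly, and let $X$ be the number of monochromatic copies of $K_k$. Then
\[
  \mathbb{E}[X] \;=\; \binom{n}{k}\, 2^{\,1 - \binom{k}{2}} .
\]
% Whenever the parameters make $\mathbb{E}[X] < 1$, some coloring must attain
% $X = 0$: a two-coloring of $K_n$ with no monochromatic $K_k$ exists, even
% though the argument exhibits no such coloring explicitly.
```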
Two examples due to Erdős
Although others before him proved theorems via the probabilistic method (for example, Szele's 1943 result that there exist tournaments containing a large number of Hamilton |
https://en.wikipedia.org/wiki/Cayley%E2%80%93Hamilton%20theorem | In linear algebra, the Cayley–Hamilton theorem (named after the mathematicians Arthur Cayley and William Rowan Hamilton) states that every square matrix over a commutative ring (such as the real or complex numbers or the integers) satisfies its own characteristic equation.
If A is a given n × n matrix and I_n is the n × n identity matrix, then the characteristic polynomial of A is defined as p_A(λ) = det(λI_n − A), where det is the determinant operation and λ is a variable for a scalar element of the base ring. Since the entries of the matrix λI_n − A are (linear or constant) polynomials in λ, the determinant is also a degree-n monic polynomial in λ, p_A(λ) = λⁿ + c_{n−1}λⁿ⁻¹ + ⋯ + c₁λ + c₀. One can create an analogous polynomial p_A(A) in the matrix A instead of the scalar variable λ, defined as p_A(A) = Aⁿ + c_{n−1}Aⁿ⁻¹ + ⋯ + c₁A + c₀I_n. The Cayley–Hamilton theorem states that this polynomial expression is equal to the zero matrix, which is to say that p_A(A) = 0. The theorem allows Aⁿ to be expressed as a linear combination of the lower matrix powers of A. When the ring is a field, the Cayley–Hamilton theorem is equivalent to the statement that the minimal polynomial of a square matrix divides its characteristic polynomial.
A special case of the theorem was first proved by Hamilton in 1853 in terms of inverses of linear functions of quaternions. This corresponds to the special case of certain 4 × 4 real or 2 × 2 complex matrices. Cayley in 1858 stated the result for 3 × 3 and smaller matrices, but only published a proof for the 2 × 2 case. As for n × n matrices, Cayley stated “..., I have not thought it necessary to undertake the labor of a formal proof of the theorem in the general case of a matrix of any degree”. The general case was first proved by Ferdinand Frobenius in 1878.
Examples
1 × 1 matrices
For a 1 × 1 matrix A = (a), the characteristic polynomial is given by p(λ) = λ − a, and so p(A) = (a) − a(1) = (0) is trivial.
2 × 2 matrices
As a concrete example, let
Its characteristic polynomial is given by
The Cayley–Hamilton theorem claims that, if we define
then
We can verify by computation that indeed,
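Since the concrete matrix of the original example is not preserved in this excerpt, the following illustrative (assumed) instance shows the kind of verification meant:

```latex
% Assumed illustrative instance (the matrix of the original example is not
% preserved in this excerpt): take
\[
  A = \begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix},
  \qquad
  p(\lambda) = \det(\lambda I_2 - A) = \lambda^2 - 5\lambda - 2 .
\]
% Direct computation gives
\[
  A^2 - 5A - 2I_2
  = \begin{pmatrix} 7 & 10 \\ 15 & 22 \end{pmatrix}
  - \begin{pmatrix} 5 & 10 \\ 15 & 20 \end{pmatrix}
  - \begin{pmatrix} 2 & 0 \\ 0 & 2 \end{pmatrix}
  = \begin{pmatrix} 0 & 0 \\ 0 & 0 \end{pmatrix},
\]
% so $p(A)$ is the zero matrix, as the theorem asserts.
```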
For a generic 2 × 2 matrix with entries a, b, c, d (read row by row),
the characteristic polynomial is given by p(λ) = λ² − (a + d)λ + (ad − bc), so the Ca |
https://en.wikipedia.org/wiki/Wang%20Laboratories | Wang Laboratories was a US computer company founded in 1951 by An Wang and G. Y. Chu. The company was successively headquartered in Cambridge, Massachusetts (1954–1963), Tewksbury, Massachusetts (1963–1976), and finally in Lowell, Massachusetts (1976–1997). At its peak in the 1980s, Wang Laboratories had annual revenues of US$3 billion and employed over 33,000 people. It was one of the leading companies during the time of the Massachusetts Miracle.
The company was directed by An Wang, who was described as an "indispensable leader" and played a personal role in setting business and product strategy until his death in 1990. The company went through transitions between different product lines, beginning with typesetters, calculators, and word processors, then adding computers, copiers, and laser printers.
Wang Laboratories filed for bankruptcy protection in August 1992. After emerging from bankruptcy, the company changed its name to Wang Global. It was acquired by Getronics of the Netherlands in 1999, becoming Getronics North America, then was sold to KPN in 2007 and CompuCom in 2008.
Public stock listing
Wang went public on August 26, 1967, with the issuance of 240,000 shares at $12.50 per share on the American Stock Exchange. The stock closed the day above $40, valuing the company's equity at approximately $77 million, of which An Wang and his family owned about 63%.
An Wang took steps to ensure that the Wang family would retain control of the company even after going public. He created a second class of stock, class B, with higher dividends but only one-tenth the voting power of class C. The public mostly bought class B shares; the Wang family retained most of the class C shares. The letters B and C were used to ensure that brokerages would fill any Wang stock orders with class B shares unless class C was specifically requested. Wang stock had been listed on the New York Stock Exchange, but this maneuver was not quite acceptable under NYSE's rules, and Wang was |
https://en.wikipedia.org/wiki/Venona%20project | The Venona project was a United States counterintelligence program initiated during World War II by the United States Army's Signal Intelligence Service and later absorbed by the National Security Agency (NSA), that ran from February 1, 1943, until October 1, 1980. It was intended to decrypt messages transmitted by the intelligence agencies of the Soviet Union (e.g. the NKVD, the KGB, and the GRU). Initiated when the Soviet Union was an ally of the US, the program continued during the Cold War, when the Soviet Union was considered an enemy.
During the 37-year duration of the Venona project, the Signal Intelligence Service decrypted and translated approximately 3,000 messages. The signals intelligence yield included discovery of the Cambridge Five espionage ring in the United Kingdom and Soviet espionage of the Manhattan Project in the US (known as Project Enormous). Some of the espionage was undertaken to support the Soviet atomic bomb project. The Venona project remained secret for more than 15 years after it concluded. Some of the decoded Soviet messages were not declassified and published by the United States until 1995.
Background
During World War II and the early years of the Cold War, the Venona project was a source of information on Soviet intelligence-gathering directed at the Western military powers. Although unknown to the public, and even to Presidents Franklin D. Roosevelt and Harry S. Truman, these programs were of importance concerning crucial events of the early Cold War. These included the Julius and Ethel Rosenberg spying case (which was based on events during World War II) and the defections of Donald Maclean and Guy Burgess to the Soviet Union.
Most decipherable messages were transmitted and intercepted between 1942 and 1945, during World War II, when the Soviet Union was an ally of the US. Sometime in 1945, the existence of the Venona program was revealed to the Soviet Union by cryptologist-analyst Bill Weisband, an NKVD agent in the US Army's |
https://en.wikipedia.org/wiki/Transpose | In linear algebra, the transpose of a matrix is an operator which flips a matrix over its diagonal;
that is, it switches the row and column indices of the matrix A by producing another matrix, often denoted by A^T (among other notations).
The transpose of a matrix was introduced in 1858 by the British mathematician Arthur Cayley. In the case of a logical matrix representing a binary relation R, the transpose corresponds to the converse relation RT.
Transpose of a matrix
Definition
The transpose of a matrix A, denoted by A^T (among several other notations, such as A′ or ᵗA), may be constructed by any one of the following methods:
Reflect A over its main diagonal (which runs from top-left to bottom-right) to obtain A^T
Write the rows of A as the columns of A^T
Write the columns of A as the rows of A^T
Formally, the i-th row, j-th column element of A^T is the j-th row, i-th column element of A: (A^T)_ij = A_ji.
If A is an m × n matrix, then A^T is an n × m matrix.
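A minimal sketch (not from the article) of the entry-wise rule above, with a small assumed example matrix:

```python
# Sketch of the entry-wise rule: the (i, j) entry of the transpose is the
# (j, i) entry of the original matrix.
def transpose(matrix):
    """Transpose an m x n matrix given as a list of rows."""
    m, n = len(matrix), len(matrix[0])
    return [[matrix[i][j] for i in range(m)] for j in range(n)]

A = [[1, 2, 3],
     [4, 5, 6]]          # a 2 x 3 matrix
print(transpose(A))      # [[1, 4], [2, 5], [3, 6]] -- a 3 x 2 matrix
```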
In the case of square matrices, A^T may also denote the T-th power of the matrix A. To avoid possible confusion, many authors use left superscripts, that is, they denote the transpose as ᵗA. An advantage of this notation is that no parentheses are needed when exponents are involved: since (ᵗA)ⁿ = ᵗ(Aⁿ), the notation ᵗAⁿ is not ambiguous.
In this article, this confusion is avoided by never using the symbol T as a variable name.
Matrix definitions involving transposition
A square matrix whose transpose is equal to itself is called a symmetric matrix; that is, A is symmetric if A^T = A.
A square matrix whose transpose is equal to its negative is called a skew-symmetric matrix; that is, A is skew-symmetric if A^T = −A.
A square complex matrix whose transpose is equal to the matrix with every entry replaced by its complex conjugate (denoted here with an overline) is called a Hermitian matrix (equivalent to the matrix being equal to its conjugate transpose); that is, A is Hermitian if A^T = A̅.
A square complex matrix whose transpose is equal to the negation of its complex conjugate is called a skew-Hermitian matrix; that is, A is skew-Hermitian |
https://en.wikipedia.org/wiki/Harvard%20Mark%20I | The Harvard Mark I, or IBM Automatic Sequence Controlled Calculator (ASCC), was one of the earliest general-purpose electromechanical computers used in the war effort during the last part of World War II.
One of the first programs to run on the Mark I was initiated on 29 March 1944 by John von Neumann. At that time, von Neumann was working on the Manhattan Project, and needed to determine whether implosion was a viable choice to detonate the atomic bomb that would be used a year later. The Mark I also computed and printed mathematical tables, which had been the initial goal of British inventor Charles Babbage for his "analytical engine" in 1837.
The Mark I was disassembled in 1959; part of it was given to IBM, part went to the Smithsonian Institution, and part entered the Harvard Collection of Historical Scientific Instruments. For decades, Harvard's portion was on display in the lobby of the Aiken Computation Lab. About 1997, it was moved to the Harvard Science Center. In 2021, it was moved again, to the lobby of Harvard's new Science and Engineering Complex in Allston, Massachusetts.
Origins
The original concept was presented to IBM by Howard Aiken in November 1937. After a feasibility study by IBM engineers, the company chairman Thomas Watson Sr. personally approved the project and its funding in February 1939.
Howard Aiken had started to look for a company to design and build his calculator in early 1937. After two rejections, he was shown a demonstration set that Charles Babbage’s son had given to Harvard University 70 years earlier. This led him to study Babbage and to add references to the Analytical Engine to his proposal; the resulting machine "brought Babbage’s principles of the Analytical Engine almost to full realization, while adding important new features."
The ASCC was developed and built by IBM at their Endicott plant and shipped to Harvard in February 1944. It began computations for the US Navy Bureau of Ships in May and was officially prese |
https://en.wikipedia.org/wiki/Non-volatile%20random-access%20memory | Non-volatile random-access memory (NVRAM) is random-access memory that retains data without applied power. This is in contrast to dynamic random-access memory (DRAM) and static random-access memory (SRAM), which both maintain data only for as long as power is applied, or forms of sequential-access memory such as magnetic tape, which cannot be randomly accessed but which retains data indefinitely without electric power.
Read-only memory devices can be used to store system firmware in embedded systems such as an automotive ignition system control or home appliance. They are also used to hold the initial processor instructions required to bootstrap a computer system. Read-write memory can be used to store calibration constants, passwords, or setup information, and may be integrated into a microcontroller.
If the main memory of a computer system were non-volatile, it would greatly reduce the time required to start a system after a power interruption. Current types of semiconductor non-volatile memory have limitations in memory size, power consumption, or operating life that make them impractical for main memory. Development is under way on the use of non-volatile memory chips as a system's main memory, as persistent memory. A standard for persistent memory known as NVDIMM-P was published in 2021.
Early NVRAMs
Early computers used core and drum memory systems which were non-volatile as a byproduct of their construction. The most common form of memory through the 1960s was magnetic-core memory, which stored data in the polarity of small magnets. Since the magnets held their state even with the power removed, core memory was also non-volatile. Other memory types required constant power to retain data, such as vacuum tube or solid-state flip-flops, Williams tubes, and semiconductor memory (static or dynamic RAM).
Advances in semiconductor fabrication in the 1970s led to a new generation of solid state memories that magnetic-core memory could not match |
https://en.wikipedia.org/wiki/Killer%20heuristic | In competitive two-player games, the killer heuristic is a move-ordering method based on the observation that a strong move or small set of such moves in a particular position may be equally strong in similar positions at the same move (ply) in the game tree.
Retaining such moves obviates the effort of rediscovering them in sibling nodes.
This technique improves the efficiency of alpha–beta pruning, which in turn improves the efficiency of the minimax algorithm. Alpha–beta pruning works best when the best moves are considered first. This is because the best moves are the ones most likely to produce a cutoff, a condition where the game-playing program knows that the position it is considering could not possibly have resulted from best play by both sides and so need not be considered further. That is, the game-playing program will always make its best available move for each position. It only needs to consider the other player's possible responses to that best move, and can skip evaluation of responses to (worse) moves it will not make.
The killer heuristic attempts to produce a cutoff by assuming that a move that produced a cutoff in another branch of the game tree at the same depth is likely to produce a cutoff in the present position, that is to say that a move that was a very good move from a different (but possibly similar) position might also be a good move in the present position. By trying the killer move before other moves, a game-playing program can often produce an early cutoff, saving itself the effort of considering or even generating all legal moves from a position.
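A minimal sketch of the idea (not from the article): a killer-move table consulted during move ordering inside a plain alpha–beta search. The game interface functions legal_moves, apply and evaluate are hypothetical placeholders that the game-playing program would supply.

```python
# Sketch: a killer-move table consulted during move ordering in a plain
# alpha-beta search. legal_moves(), apply() and evaluate() are hypothetical
# placeholders for the game-playing program's own interface.

KILLER_SLOTS = 2      # keep two killer moves per depth, as is common practice
killers = {}          # depth -> most recent cutoff-producing moves

def remember_killer(depth, move):
    slot = killers.setdefault(depth, [])
    if move not in slot:
        slot.insert(0, move)
        del slot[KILLER_SLOTS:]          # newest killer replaces the oldest

def ordered_moves(position, depth, legal_moves):
    moves = list(legal_moves(position))
    # Try killer moves first: they caused cutoffs in sibling nodes at this depth.
    for killer in reversed(killers.get(depth, [])):
        if killer in moves:
            moves.remove(killer)
            moves.insert(0, killer)
    return moves

def alphabeta(position, depth, alpha, beta, maximizing, legal_moves, apply, evaluate):
    if depth == 0 or not legal_moves(position):
        return evaluate(position)
    best = float("-inf") if maximizing else float("inf")
    for move in ordered_moves(position, depth, legal_moves):
        value = alphabeta(apply(position, move), depth - 1, alpha, beta,
                          not maximizing, legal_moves, apply, evaluate)
        if maximizing:
            best = max(best, value)
            alpha = max(alpha, best)
        else:
            best = min(best, value)
            beta = min(beta, best)
        if alpha >= beta:                # cutoff: remember the move that caused it
            remember_killer(depth, move)
            break
    return best
```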
In practical implementation, game-playing programs frequently keep track of two killer moves for each depth of the game tree (greater than depth of 1) and see if either of these moves, if legal, produces a cutoff before the program generates and considers the rest of the possible moves. If a non-killer move produces a cutoff, it replaces one of the two killer moves at its depth. This idea can |
https://en.wikipedia.org/wiki/Epicenter | The epicenter (), epicentre, or epicentrum in seismology is the point on the Earth's surface directly above a hypocenter or focus, the point where an earthquake or an underground explosion originates.
Determination
The primary purpose of a seismometer is to locate the initiating points of earthquake epicenters. The secondary purpose, determining the 'size' or magnitude, can only be calculated after the precise location is known.
The earliest seismographs were designed to give a sense of the direction of the first motions from an earthquake. The Chinese frog seismograph would have dropped its ball in the general compass direction of the earthquake, assuming a strong positive pulse. We now know that first motions can be in almost any direction depending on the type of initiating rupture (focal mechanism).
The first refinement that allowed a more precise determination of the location was the use of a time scale. Instead of merely noting, or recording, the absolute motions of a pendulum, the displacements were plotted on a moving graph, driven by a clock mechanism. This was the first seismogram, which allowed precise timing of the first ground motion, and an accurate plot of subsequent motions.
From the first seismograms, as seen in the figure, it was noticed that the trace was divided into two major portions. The first seismic wave to arrive was the P-wave, followed closely by the S-wave. Knowing the relative 'velocities of propagation', it was a simple matter to calculate the distance of the earthquake.
One seismograph would give the distance, but that could be plotted as a circle, with an infinite number of possibilities. Two seismographs would give two intersecting circles, with two possible locations. Only with a third seismograph would there be a precise location.
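A minimal sketch (not from the article, with assumed average wave velocities) of the distance calculation implied by the S–P delay; each station's distance defines one of the circles described above:

```python
# Sketch with assumed average crustal velocities: the epicentral distance
# implied by the S-P arrival-time difference at one station.
vp = 8.0   # P-wave velocity, km/s (assumed)
vs = 4.5   # S-wave velocity, km/s (assumed)

def distance_km(sp_delay_s):
    """Solve d/vs - d/vp = delay for the distance d."""
    return sp_delay_s * vp * vs / (vp - vs)

print(distance_km(10.0))   # about 103 km for a 10-second S-P delay
```

Intersecting the circles obtained this way from three stations then pins down the epicenter, as noted above.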
Modern earthquake location still requires a minimum of three seismometers. Most likely, there are many, forming a seismic array. The emphasis is on precision since much can be learned about the fau |
https://en.wikipedia.org/wiki/GiFT | giFT Internet File Transfer (giFT) is a computer software daemon that allows several file sharing protocols to be used with a simple client having a graphical user interface (GUI). The client dynamically loads plugins implementing the protocols, as they are required.
General
Clients implementing frontends for the giFT daemon communicate with its process using a lightweight network protocol. This allows the networking protocol code to be completely abstracted from the user interface. The giFT daemon is written using relatively cross-platform C code, which means that it can be compiled for and executed on a wide variety of operating systems. There are several giFT GUI front-ends for Microsoft Windows, Apple Macintosh, and Unix-like operating systems.
The name giFT (giFT Internet File Transfer) is a so-called recursive acronym, which means that it refers to itself in the expression for which it stands.
One of the biggest drawbacks of the giFT engine is that it currently lacks Unicode support, which prevents sharing files with Unicode characters in their file names (such as "ø","ä", "å", "é" etc.). Also, giFT lacks many features needed to use the gnutella network effectively.
Available plugins
Available protocols are:
Stable
OpenFT, giFT's own file sharing protocol
gnutella (used by FrostWire, Shareaza)
Turtle F2F
Beta version
FastTrack (used by Kazaa). The giFT plugin is giFT-FastTrack.
Alpha version
OpenNap
eDonkey network
Soulseek
OpenFT protocol
giFT's sibling project is OpenFT, a peer-to-peer file-sharing network protocol that has a structure in which nodes are divided into 'search' nodes and 'index' supernodes in addition to common nodes. Since both projects are related very closely, when one says 'OpenFT', one can mean either one of two different things: the OpenFT protocol, or the implementation in the form of a plugin for giFT.
Although the name OpenFT stands for "Open FastTrack", the OpenFT protocol is an entirely new protocol design: only a few id |
https://en.wikipedia.org/wiki/Inductive%20bias | The inductive bias (also known as learning bias) of a learning algorithm is the set of assumptions that the learner uses to predict outputs of given inputs that it has not encountered.
Inductive bias is anything which makes the algorithm learn one pattern instead of another pattern (e.g. step functions in decision trees instead of continuous functions in a linear regression model).
In machine learning, one aims to construct algorithms that are able to learn to predict a certain target output. To achieve this, the learning algorithm is presented some training examples that demonstrate the intended relation of input and output values. Then the learner is supposed to approximate the correct output, even for examples that have not been shown during training. Without any additional assumptions, this problem cannot be solved since unseen situations might have an arbitrary output value. The kind of necessary assumptions about the nature of the target function are subsumed in the phrase inductive bias.
A classical example of an inductive bias is Occam's razor, assuming that the simplest consistent hypothesis about the target function is actually the best. Here consistent means that the hypothesis of the learner yields correct outputs for all of the examples that have been given to the algorithm.
Approaches to a more formal definition of inductive bias are based on mathematical logic. Here, the inductive bias is a logical formula that, together with the training data, logically entails the hypothesis generated by the learner. However, this strict formalism fails in many practical cases, where the inductive bias can only be given as a rough description (e.g. in the case of artificial neural networks), or not at all.
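A small illustrative sketch (not from the article): the same training points fitted under two different inductive biases extrapolate very differently, even though both fit the data:

```python
# Sketch: the same five points fitted under two different inductive biases.
# Both predictors agree well on the training data but extrapolate differently.
import numpy as np

x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([0.1, 1.1, 1.9, 3.2, 3.9])

slope, intercept = np.polyfit(x, y, 1)       # bias: a single straight line

def predict_linear(q):
    return slope * q + intercept

def predict_nearest(q):
    return y[np.argmin(np.abs(x - q))]       # bias: piecewise-constant (step-like)

print(predict_linear(10.0), predict_nearest(10.0))   # about 9.8 versus 3.9
```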
Types
The following is a list of common inductive biases in machine learning algorithms.
Maximum conditional independence: if the hypothesis can be cast in a Bayesian framework, try to maximize conditional independence. This is the bias used in the Naive Ba |
https://en.wikipedia.org/wiki/Center%20of%20mass | In physics, the center of mass of a distribution of mass in space (sometimes referred to as the barycenter or balance point) is the unique point at any given time where the weighted relative position of the distributed mass sums to zero. This is the point to which a force may be applied to cause a linear acceleration without an angular acceleration. Calculations in mechanics are often simplified when formulated with respect to the center of mass. It is a hypothetical point where the entire mass of an object may be assumed to be concentrated to visualise its motion. In other words, the center of mass is the particle equivalent of a given object for application of Newton's laws of motion.
In the case of a single rigid body, the center of mass is fixed in relation to the body, and if the body has uniform density, it will be located at the centroid. The center of mass may be located outside the physical body, as is sometimes the case for hollow or open-shaped objects, such as a horseshoe. In the case of a distribution of separate bodies, such as the planets of the Solar System, the center of mass may not correspond to the position of any individual member of the system.
The center of mass is a useful reference point for calculations in mechanics that involve masses distributed in space, such as the linear and angular momentum of planetary bodies and rigid body dynamics. In orbital mechanics, the equations of motion of planets are formulated as point masses located at the centers of mass (see Barycenter (astronomy) for details). The center of mass frame is an inertial frame in which the center of mass of a system is at rest with respect to the origin of the coordinate system.
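A minimal numeric sketch (not from the article, with assumed masses and positions) of the defining property, R = (Σ mᵢ rᵢ) / (Σ mᵢ):

```python
# Sketch with assumed masses and positions: the center of mass as the
# mass-weighted average position R = (sum_i m_i r_i) / (sum_i m_i).
import numpy as np

masses = np.array([2.0, 1.0, 1.0])        # kilograms (assumed)
positions = np.array([[0.0, 0.0],         # metres (assumed)
                      [4.0, 0.0],
                      [0.0, 4.0]])

R = (masses[:, None] * positions).sum(axis=0) / masses.sum()
print(R)   # [1. 1.]; the weighted relative positions about R sum to zero
```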
History
The concept of center of gravity or weight was studied extensively by the ancient Greek mathematician, physicist, and engineer Archimedes of Syracuse. He worked with simplified assumptions about gravity that amount to a uniform field, thus arriving at the mathematical properties of what |
https://en.wikipedia.org/wiki/Symplectic%20matrix | In mathematics, a symplectic matrix is a 2n × 2n matrix M with real entries that satisfies the condition M^T Ω M = Ω,
where M^T denotes the transpose of M and Ω is a fixed 2n × 2n nonsingular, skew-symmetric matrix. This definition can be extended to matrices with entries in other fields, such as the complex numbers, finite fields, p-adic numbers, and function fields.
Typically Ω is chosen to be the block matrix Ω = [ 0 I_n ; −I_n 0 ],
where I_n is the n × n identity matrix. The matrix Ω has determinant +1 and its inverse is Ω⁻¹ = −Ω.
Properties
Generators for symplectic matrices
Every symplectic matrix has determinant +1, and the 2n × 2n symplectic matrices with real entries form a subgroup of the general linear group GL(2n; R) under matrix multiplication, since being symplectic is a property stable under matrix multiplication. Topologically, this symplectic group is a connected noncompact real Lie group of real dimension n(2n + 1), and is denoted Sp(2n; R). The symplectic group can be defined as the set of linear transformations that preserve the symplectic form of a real symplectic vector space.
This symplectic group has a distinguished set of generators, which can be used to find all possible symplectic matrices. This includes the following sets
where is the set of symmetric matrices. Then, is generated by the set
of matrices. In other words, any symplectic matrix can be constructed by multiplying matrices in and together, along with some power of .
Inverse matrix
Every symplectic matrix is invertible, with the inverse matrix given by M⁻¹ = Ω⁻¹ M^T Ω.
Furthermore, the product of two symplectic matrices is, again, a symplectic matrix. This gives the set of all symplectic matrices the structure of a group. There exists a natural manifold structure on this group which makes it into a (real or complex) Lie group called the symplectic group.
Determinantal properties
It follows easily from the definition that the determinant of any symplectic matrix is ±1. Actually, it turns out that the determinant is always +1 for any field. One way to see this is through the use of the P |
https://en.wikipedia.org/wiki/Special%20unitary%20group | In mathematics, the special unitary group of degree n, denoted SU(n), is the Lie group of n × n unitary matrices with determinant 1.
The matrices of the more general unitary group may have complex determinants with absolute value 1, rather than real 1 in the special case.
The group operation is matrix multiplication. The special unitary group is a normal subgroup of the unitary group U(n), consisting of all n × n unitary matrices. As a compact classical group, U(n) is the group that preserves the standard inner product on Cⁿ. It is itself a subgroup of the general linear group, SU(n) ⊂ U(n) ⊂ GL(n, C).
The groups find wide application in the Standard Model of particle physics, especially in the electroweak interaction and in quantum chromodynamics.
The simplest case, SU(1), is the trivial group, having only a single element. The group SU(2) is isomorphic to the group of quaternions of norm 1, and is thus diffeomorphic to the 3-sphere. Since unit quaternions can be used to represent rotations in 3-dimensional space (up to sign), there is a surjective homomorphism from SU(2) to the rotation group SO(3) whose kernel is {+I, −I}. SU(2) is also identical to one of the symmetry groups of spinors, Spin(3), that enables a spinor presentation of rotations.
Properties
The special unitary group is a strictly real Lie group (vs. a more general complex Lie group). Its dimension as a real manifold is n² − 1. Topologically, it is compact and simply connected. Algebraically, it is a simple Lie group (meaning its Lie algebra is simple; see below).
The center of SU(n) is isomorphic to the cyclic group Z/nZ, and is composed of the diagonal matrices ζI for ζ an n-th root of unity and I the n × n identity matrix.
Its outer automorphism group for n ≥ 3 is Z/2Z, while the outer automorphism group of SU(2) is the trivial group.
A maximal torus of rank n − 1 is given by the set of diagonal matrices with determinant 1. The Weyl group of SU(n) is the symmetric group S_n, which is represented by signed permutation matrices (the signs being necessary to ensure that the determinant is 1).
The Lie algebra of SU(n), denoted |
https://en.wikipedia.org/wiki/Phar%20Lap | Phar Lap (4 October 1926 – 5 April 1932) was a champion Australian Thoroughbred racehorse. Achieving incredible success during his distinguished career, his initial underdog status gave people hope during the early years of the Great Depression. He won the Melbourne Cup, two Cox Plates, the Australian Derby, and 19 other weight-for-age races.
One of his greatest performances was winning the Agua Caliente Handicap in Mexico in track-record time in his final race. He won in a different country, after a bad start many lengths behind the leaders, with no training before the race, and he split his hoof during the race.
After a sudden and mysterious illness, Phar Lap died in 1932 in Atherton, California. At the time, he was the third-highest stakes-winner in the world. His mounted hide is displayed at the Melbourne Museum, his skeleton at the Museum of New Zealand, and his heart at the National Museum of Australia.
Name
The name Phar Lap derives from the common Zhuang and Thai word for lightning: ฟ้าแลบ, literally 'sky flash'.
Phar Lap was called "The Wonder Horse," "The Red Terror," and "Big Red" (the latter nickname was also given to two of the greatest United States racehorses, Man o' War and Secretariat). He was affectionately known as "Bobby" to his strapper Tommy Woodcock. He was also sometimes referred to as "Australia's Wonder Horse."
According to the Museum of Victoria, Aubrey Ping, a medical student at the University of Sydney, suggested "farlap" as the horse's name. Ping knew the word from his father, a Zhuang-speaking Chinese immigrant. Phar Lap's trainer Harry Telford liked the name, but changed the F to PH to create a seven letter word, which was split in two in keeping with the dominant naming pattern of Melbourne Cup winners.
Early life
A chestnut gelding, Phar Lap was foaled on 4 October 1926 in Seadown near Timaru in the South Island of New Zealand. He was sired by Night Raid from Entreaty |
https://en.wikipedia.org/wiki/Western%20blot | The western blot (sometimes called the protein immunoblot), or western blotting, is a widely used analytical technique in molecular biology and immunogenetics to detect specific proteins in a sample of tissue homogenate or extract. Besides detecting the proteins, this technique is also utilized to visualize, distinguish, and quantify the different proteins in a complicated protein combination.
The western blot technique uses three elements to accomplish its task of separating a specific protein from a complex mixture: separation by size, transfer of the proteins to a solid support, and marking the target protein using primary and secondary antibodies for visualization. A synthetic or animal-derived antibody (known as the primary antibody) is created that recognizes and binds to a specific target protein. The electrophoresis membrane is washed in a solution containing the primary antibody, before excess antibody is washed off. A secondary antibody is added which recognizes and binds to the primary antibody. The secondary antibody is visualized through various methods such as staining, immunofluorescence, and radioactivity, allowing indirect detection of the specific target protein.
Other related techniques include dot blot analysis, quantitative dot blot, immunohistochemistry and immunocytochemistry, where antibodies are used to detect proteins in tissues and cells by immunostaining, and enzyme-linked immunosorbent assay (ELISA).
The name western blot is a play on the Southern blot, a technique for DNA detection named after its inventor, English biologist Edwin Southern. Similarly, detection of RNA is termed as northern blot. The term "western blot" was given by W. Neal Burnette in 1981, although the method itself was independently invented in 1979 by Jaime Renart, Jakob Reiser, and George Stark at Stanford University, and by Harry Towbin, Theophil Staehelin, and Julian Gordon at the Friedrich Miescher Institute in Basel, Switzerland. The Towbin group also used secondary antibodies for detec |
https://en.wikipedia.org/wiki/Skew-symmetric%20matrix | In mathematics, particularly in linear algebra, a skew-symmetric (or antisymmetric or antimetric) matrix is a square matrix whose transpose equals its negative. That is, it satisfies the condition A^T = −A.
In terms of the entries of the matrix, if a_ij denotes the entry in the i-th row and j-th column, then the skew-symmetric condition is equivalent to a_ji = −a_ij.
Example
The matrix
is skew-symmetric because
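Since the matrix of the original example is not preserved in this excerpt, the following illustrative (assumed) instance shows the condition at work:

```latex
% Assumed illustrative instance (the matrix of the original example is not
% preserved in this excerpt):
\[
  A = \begin{pmatrix} 0 & 2 \\ -2 & 0 \end{pmatrix},
  \qquad
  A^{\mathsf{T}} = \begin{pmatrix} 0 & -2 \\ 2 & 0 \end{pmatrix} = -A ,
\]
% so $A$ satisfies the skew-symmetric condition $A^{\mathsf{T}} = -A$.
```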
Properties
Throughout, we assume that all matrix entries belong to a field whose characteristic is not equal to 2. That is, we assume that 1 + 1 ≠ 0, where 1 denotes the multiplicative identity and 0 the additive identity of the given field. If the characteristic of the field is 2, then a skew-symmetric matrix is the same thing as a symmetric matrix.
The sum of two skew-symmetric matrices is skew-symmetric.
A scalar multiple of a skew-symmetric matrix is skew-symmetric.
The elements on the diagonal of a skew-symmetric matrix are zero, and therefore its trace equals zero.
If A is a real skew-symmetric matrix and λ is a real eigenvalue, then λ = 0, i.e. the nonzero eigenvalues of a skew-symmetric matrix are non-real.
If A is a real skew-symmetric matrix, then I + A is invertible, where I is the identity matrix.
If A is a skew-symmetric matrix then A² is a symmetric negative semi-definite matrix.
Vector space structure
As a result of the first two properties above, the set of all skew-symmetric matrices of a fixed size forms a vector space. The space of n × n skew-symmetric matrices has dimension n(n − 1)/2.
Let Mat_n denote the space of n × n matrices. A skew-symmetric matrix is determined by n(n − 1)/2 scalars (the number of entries above the main diagonal); a symmetric matrix is determined by n(n + 1)/2 scalars (the number of entries on or above the main diagonal). Let Skew_n denote the space of n × n skew-symmetric matrices and Sym_n denote the space of n × n symmetric matrices. If A ∈ Mat_n then A = (A − A^T)/2 + (A + A^T)/2.
Notice that (A − A^T)/2 ∈ Skew_n and (A + A^T)/2 ∈ Sym_n. This is true for every square matrix A with entries from any field whose characteristic is different from 2. Then, since Mat_n = Skew_n + Sym_n and Skew_n ∩ Sym_n = {0}, Mat_n = Skew_n ⊕ Sym_n,
where ⊕ denotes the direct sum |
https://en.wikipedia.org/wiki/Diagonal%20matrix | In linear algebra, a diagonal matrix is a matrix in which the entries outside the main diagonal are all zero; the term usually refers to square matrices. Elements of the main diagonal can either be zero or nonzero. An example of a 2×2 diagonal matrix is , while an example of a 3×3 diagonal matrix is. An identity matrix of any size, or any multiple of it (a scalar matrix), is a diagonal matrix.
A diagonal matrix is sometimes called a scaling matrix, since matrix multiplication with it results in changing scale (size). Its determinant is the product of its diagonal values.
Definition
As stated above, a diagonal matrix is a matrix in which all off-diagonal entries are zero. That is, the matrix D = (d_i,j) with n columns and n rows is diagonal if d_i,j = 0 whenever i ≠ j.
However, the main diagonal entries are unrestricted.
The term diagonal matrix may sometimes refer to a rectangular diagonal matrix, which is an m-by-n matrix with all the entries not of the form d_i,i being zero. For example:
or
More often, however, diagonal matrix refers to square matrices, which can be specified explicitly as a square diagonal matrix. A square diagonal matrix is a symmetric matrix, so this can also be called a symmetric diagonal matrix.
The following matrix is a square diagonal matrix:
If the entries are real numbers or complex numbers, then it is a normal matrix as well.
In the remainder of this article we will consider only square diagonal matrices, and refer to them simply as "diagonal matrices".
Vector-to-matrix diag operator
A diagonal matrix D can be constructed from a vector a = (a_1, ..., a_n) using the diag operator: D = diag(a_1, ..., a_n).
This may be written more compactly as D = diag(a).
The same operator is also used to represent block diagonal matrices as A = diag(A_1, ..., A_n), where each argument A_i is a matrix.
The diag operator may be written as: diag(a) = (a 1^T) ∘ I,
where ∘ represents the Hadamard product and 1 is a constant vector with elements 1.
Matrix-to-vector diag operator
The inverse matrix-to-vector operator is sometimes denoted by the identically named diag(D), where the argument is now a matrix and the result is a vector of its diagonal entries.
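As an illustrative aside (not from the article), NumPy's diag function plays both roles, vector-to-matrix and matrix-to-vector:

```python
# Illustrative aside: NumPy's diag covers both directions described above.
import numpy as np

a = np.array([1, 4, 9])
D = np.diag(a)        # vector-to-matrix: 3 x 3 diagonal matrix
print(np.diag(D))     # matrix-to-vector: [1 4 9], the diagonal entries
```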
The following pr |
https://en.wikipedia.org/wiki/Microsoft%20Messenger%20service | Messenger (formerly MSN Messenger Service, .NET Messenger Service and Windows Live Messenger Service) was an instant messaging and presence system developed by Microsoft in 1999 for use with its MSN Messenger software. It was used by instant messaging clients including Windows 8, Windows Live Messenger, Microsoft Messenger for Mac, Outlook.com and Xbox Live. Third-party clients also connected to the service. It communicated using the Microsoft Notification Protocol, a proprietary instant messaging protocol. The service allowed anyone with a Microsoft account to sign in and communicate in real time with other people who were signed in as well.
On January 11, 2013, Microsoft announced that they were retiring the existing Messenger service globally (except for mainland China, where Messenger would continue to be available) and replacing it with Skype. In April 2013, Microsoft merged the service into the Skype network; existing users were able to sign into Skype with their existing accounts and access their contact lists. As part of the merger, Skype instant messaging functionality now runs on the backbone of the former Messenger service.
Background
Despite multiple name changes to the service and its client software over the years, the Messenger service is often referred to colloquially as "MSN", due to the history of MSN Messenger. The service itself was known as MSN Messenger Service from 1999 to 2001, at which time, Microsoft changed its name to .NET Messenger Service and began offering clients that no longer carried the "MSN" name, such as the Windows Messenger client included with Windows XP, which was originally intended to be a streamlined version of MSN Messenger, free of advertisements and integrated into Windows.
Nevertheless, the company continued to offer more upgrades to MSN Messenger until the end of 2005, when all previous versions of MSN Messenger and Windows Messenger were superseded by a new program, Windows Live Messenger, as part of Microsoft's la |
https://en.wikipedia.org/wiki/Abc%20conjecture | The abc conjecture (also known as the Oesterlé–Masser conjecture) is a conjecture in number theory that arose out of a discussion of Joseph Oesterlé and David Masser in 1985. It is stated in terms of three positive integers a, b and c (hence the name) that are relatively prime and satisfy a + b = c. The conjecture essentially states that the product of the distinct prime factors of abc is usually not much smaller than c. A number of famous conjectures and theorems in number theory would follow immediately from the abc conjecture or its versions. Mathematician Dorian Goldfeld described the abc conjecture as "The most important unsolved problem in Diophantine analysis".
The abc conjecture originated as the outcome of attempts by Oesterlé and Masser to understand the Szpiro conjecture about elliptic curves, which involves more geometric structures in its statement than the abc conjecture. The abc conjecture was shown to be equivalent to the modified Szpiro's conjecture.
Various attempts to prove the abc conjecture have been made, but none are currently accepted by the mainstream mathematical community, and, as of 2023, the conjecture is still regarded as unproven.
Formulations
Before stating the conjecture, the notion of the radical of an integer must be introduced: for a positive integer n, the radical of n, denoted rad(n), is the product of the distinct prime factors of n. For example,
If a, b, and c are coprime positive integers such that a + b = c, it turns out that "usually" c < rad(abc). The abc conjecture deals with the exceptions. Specifically, it states that:
An equivalent formulation is:
Equivalently (using the little o notation):
A fourth equivalent formulation of the conjecture involves the quality q(a, b, c) of the triple (a, b, c), which is defined as q(a, b, c) = log(c) / log(rad(abc)).
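A minimal computational sketch (not from the article): the radical and the quality q evaluated on the assumed example triple 1 + 8 = 9:

```python
# Sketch: the radical rad(n) and the quality q(a, b, c), evaluated on the
# assumed example triple 1 + 8 = 9.
from math import log

def rad(n):
    """Product of the distinct prime factors of n (simple trial division)."""
    result, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            result *= p
            while n % p == 0:
                n //= p
        p += 1
    if n > 1:
        result *= n
    return result

def quality(a, b, c):
    return log(c) / log(rad(a * b * c))

print(rad(72))             # 6, since 72 = 2^3 * 3^2
print(quality(1, 8, 9))    # about 1.226, i.e. c > rad(abc), an exceptional triple
```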
For example:
A typical triple (a, b, c) of coprime positive integers with a + b = c will have c < rad(abc), i.e. q(a, b, c) < 1. Triples with q > 1 such as in the second example are rather special, they consist of numbers divisible b |
https://en.wikipedia.org/wiki/SATA | SATA (Serial AT Attachment) is a computer bus interface that connects host bus adapters to mass storage devices such as hard disk drives, optical drives, and solid-state drives. Serial ATA succeeded the earlier Parallel ATA (PATA) standard to become the predominant interface for storage devices.
Serial ATA industry compatibility specifications originate from the Serial ATA International Organization (SATA-IO) and are then released by the INCITS Technical Committee T13, AT Attachment (INCITS T13).
History
SATA was announced in 2000 in order to provide several advantages over the earlier PATA interface such as reduced cable size and cost (seven conductors instead of 40 or 80), native hot swapping, faster data transfer through higher signaling rates, and more efficient transfer through an (optional) I/O queuing protocol. Revision 1.0 of the specification was released in January 2003.
Serial ATA industry compatibility specifications originate from the Serial ATA International Organization (SATA-IO). The SATA-IO group collaboratively creates, reviews, ratifies, and publishes the interoperability specifications, the test cases and plugfests. As with many other industry compatibility standards, the SATA content ownership is transferred to other industry bodies: primarily INCITS T13 and an INCITS T10 subcommittee (SCSI), a subgroup of T10 responsible for Serial Attached SCSI (SAS). The remainder of this article strives to use the SATA-IO terminology and specifications.
Before SATA's introduction in 2000, PATA was simply known as ATA. The "AT Attachment" (ATA) name originated after the 1984 release of the IBM Personal Computer AT, more commonly known as the IBM AT. The IBM AT's controller interface became a de facto industry interface for the inclusion of hard disks. "AT" was IBM's abbreviation for "Advanced Technology"; thus, many companies and organizations indicate SATA is an abbreviation of "Serial Advanced Technology Attachment". However, the ATA specifications s |
https://en.wikipedia.org/wiki/Shiga%20toxin | Shiga toxins are a family of related toxins with two major groups, Stx1 and Stx2, expressed by genes considered to be part of the genome of lambdoid prophages. The toxins are named after Kiyoshi Shiga, who first described the bacterial origin of dysentery caused by Shigella dysenteriae. Shiga-like toxin (SLT) is a historical term for similar or identical toxins produced by Escherichia coli. The most common sources for Shiga toxin are the bacteria S. dysenteriae and some serotypes of Escherichia coli (STEC), which includes serotypes O157:H7, and O104:H4.
Nomenclature
Microbiologists use many terms to describe Shiga toxin and differentiate more than one unique form. Many of these terms are used interchangeably.
Shiga toxin type 1 and type 2 (Stx-1 and 2) are the Shiga toxins produced by some E. coli strains. Stx-1 is identical to Stx of Shigella spp. or differs by only one amino acid. Stx-2 shares 56% sequence identity with Stx-1.
Cytotoxins – an archaic denotation for Stx – is used in a broad sense.
Verocytotoxins/verotoxins – a seldom-used term for Stx – is from the hypersensitivity of Vero cells to Stx.
The term Shiga-like toxins is another antiquated term which arose prior to the understanding that Shiga and Shiga-like toxins were identical.
History
The toxin is named after Kiyoshi Shiga, who discovered S. dysenteriae in 1897. In 1977, researchers in Ottawa, Ontario discovered the Shiga toxin normally produced by Shigella dysenteriae in a line of E. coli. The E. coli version of the toxin was named "verotoxin" because of its ability to kill Vero cells (African green monkey kidney cells) in culture. Shortly after, the verotoxin was referred to as Shiga-like toxin because of its similarities to Shiga toxin.
It has been suggested by some researchers that the gene coding for Shiga-like toxin comes from a toxin-converting lambdoid bacteriophage, such as H-19B or 933W, inserted into the bacteria's chromosome via transduction. Phylogenetic studies of the dive |
https://en.wikipedia.org/wiki/Traceability | Traceability is the capability to trace something. In some cases, it is interpreted as the ability to verify the history, location, or application of an item by means of documented recorded identification.
Other common definitions include the capability (and implementation) of keeping track of a given set or type of information to a given degree, or the ability to chronologically interrelate uniquely identifiable entities in a way that is verifiable.
Traceability is applicable to measurement, supply chain, software development, healthcare and security.
Measurement
The term measurement traceability or metrological traceability is used to refer to an unbroken chain of comparisons relating an instrument's measurements to a known standard. Calibration to a traceable standard can be used to determine an instrument's bias, precision, and accuracy. It may also be used to show a chain of custody - from current interpretation of evidence to the actual evidence in a legal context, or history of handling of any information.
In many countries, national standards for weights and measures are maintained by a National Metrology Institute (NMI), which provides the highest level of standards for the calibration/measurement traceability infrastructure in that country. Examples of government agencies include the National Physical Laboratory, UK (NPL), the National Institute of Standards and Technology (NIST) in the USA, the Physikalisch-Technische Bundesanstalt (PTB) in Germany, and the Istituto Nazionale di Ricerca Metrologica (INRiM) in Italy. As defined by NIST, "Traceability of measurement requires the establishment of an unbroken chain of comparisons to stated references each with a stated uncertainty."
A clock providing traceable time is traceable to a time standard such as Coordinated Universal Time or International Atomic Time. The Global Positioning System is a source of traceable time.
Supply chain
Within a product's supply chain, traceability may be both a regulatory and an eth |
https://en.wikipedia.org/wiki/Birefringence | Birefringence is the optical property of a material having a refractive index that depends on the polarization and propagation direction of light. These optically anisotropic materials are said to be birefringent (or birefractive). The birefringence is often quantified as the maximum difference between refractive indices exhibited by the material. Crystals with non-cubic crystal structures are often birefringent, as are plastics under mechanical stress.
Birefringence is responsible for the phenomenon of double refraction whereby a ray of light, when incident upon a birefringent material, is split by polarization into two rays taking slightly different paths. This effect was first described by Danish scientist Rasmus Bartholin in 1669, who observed it in calcite crystals which have one of the strongest birefringences. In the 19th century Augustin-Jean Fresnel described the phenomenon in terms of polarization, understanding light as a wave with field components in transverse polarization (perpendicular to the direction of the wave vector). Birefringence plays an important role in achieving phase-matching for a number of nonlinear optical processes.
Explanation
A mathematical description of wave propagation in a birefringent medium is presented below. Following is a qualitative explanation of the phenomenon.
Uniaxial materials
The simplest type of birefringence is described as uniaxial, meaning that there is a single direction governing the optical anisotropy whereas all directions perpendicular to it (or at a given angle to it) are optically equivalent. Thus rotating the material around this axis does not change its optical behaviour. This special direction is known as the optic axis of the material. Light propagating parallel to the optic axis (whose polarization is always perpendicular to the optic axis) is governed by a refractive index n_o (for "ordinary") regardless of its specific polarization. For rays with any other propagation direction, there is one linea |
https://en.wikipedia.org/wiki/Synchronization%20gear | A synchronization gear (also known as a gun synchronizer or interrupter gear) was a device enabling a single-engine tractor configuration aircraft to fire its forward-firing armament through the arc of its spinning propeller without bullets striking the blades. This allowed the aircraft, rather than the gun, to be aimed at the target.
There were many practical problems, mostly arising from the inherently imprecise nature of an automatic gun's firing, the great (and varying) velocity of the blades of a spinning propeller, and the very high speed at which any gear synchronizing the two had to operate. In practice, all known gears worked on the principle of actively triggering each shot, in the manner of a semi-automatic weapon.
Design and experimentation with gun synchronization had been underway in France and Germany in 1913–1914, following the ideas of August Euler, who seems to have been the first to suggest mounting a fixed armament firing in the direction of flight (in 1910). However, the first practical—if far from reliable—gear to enter operational service was that fitted to the Fokker Eindecker fighters, which entered squadron service with the German Air Service in mid-1915. The success of the Eindecker led to numerous gun synchronization devices, culminating in the reasonably reliable hydraulic British Constantinesco gear of 1917. By the end of the First World War, German engineers were well on the way to perfecting a gear using an electrical rather than a mechanical or hydraulic link between the engine and the gun, with the gun being triggered by a solenoid rather than by a mechanical "trigger motor".
From 1918 to the mid-1930s the standard armament for a fighter aircraft remained two synchronized rifle-calibre machine guns, firing forward through the arc of the propeller. During the late 1930s, however, the main role of the fighter was increasingly seen as the destruction of large, all-metal bombers, for which this armament was too light. Since
https://en.wikipedia.org/wiki/Modularity%20theorem | The modularity theorem (formerly called the Taniyama–Shimura conjecture, Taniyama-Weil conjecture or modularity conjecture for elliptic curves) states that elliptic curves over the field of rational numbers are related to modular forms. Andrew Wiles proved the modularity theorem for semistable elliptic curves, which was enough to imply Fermat's Last Theorem. Later, a series of papers by Wiles's former students Brian Conrad, Fred Diamond and Richard Taylor, culminating in a joint paper with Christophe Breuil, extended Wiles's techniques to prove the full modularity theorem in 2001.
Statement
The theorem states that any elliptic curve over Q can be obtained via a rational map with integer coefficients from the classical modular curve X_0(N) for some integer N; this is a curve with integer coefficients with an explicit definition. This mapping is called a modular parametrization of level N. If N is the smallest integer for which such a parametrization can be found (which by the modularity theorem itself is now known to be a number called the conductor), then the parametrization may be defined in terms of a mapping generated by a particular kind of modular form of weight two and level N, a normalized newform with integer q-expansion, followed if need be by an isogeny.
Related statements
The modularity theorem implies a closely related analytic statement:
To each elliptic curve E over the rational numbers Q we may attach a corresponding L-series. The L-series is a Dirichlet series, commonly written L(E, s) = Σ_{n≥1} a_n / n^s.
The generating function of the coefficients a_n is then f(q) = Σ_{n≥1} a_n q^n.
If we make the substitution q = exp(2πiτ),
we see that we have written the Fourier expansion of a function f(τ) of the complex variable τ, so the coefficients of the q-series are also thought of as the Fourier coefficients of f. The function obtained in this way is, remarkably, a cusp form of weight two and level N and is also an eigenform (an eigenvector of all Hecke operators); this is the Hasse–Weil conjecture, which follows from the modularity theorem.
Some modular |
https://en.wikipedia.org/wiki/Common%20logarithm | In mathematics, the common logarithm is the logarithm with base 10. It is also known as the decadic logarithm and as the decimal logarithm, named after its base, or Briggsian logarithm, after Henry Briggs, an English mathematician who pioneered its use, as well as standard logarithm. Historically, it was known as logarithmus decimalis or logarithmus decadis. It is indicated by log10(x), log(x), or sometimes Log(x) with a capital L (however, this notation is ambiguous, since it can also mean the complex natural logarithmic multi-valued function). On calculators, it is printed as "log", but mathematicians usually mean natural logarithm (logarithm with base e ≈ 2.71828) rather than common logarithm when they write "log". To mitigate this ambiguity, the ISO 80000 specification recommends that log10(x) should be written lg(x), and loge(x) should be ln(x).
Before the early 1970s, handheld electronic calculators were not available, and mechanical calculators capable of multiplication were bulky, expensive and not widely available. Instead, tables of base-10 logarithms were used in science, engineering and navigation—when calculations required greater accuracy than could be achieved with a slide rule. By turning multiplication and division to addition and subtraction, use of logarithms avoided laborious and error-prone paper-and-pencil multiplications and divisions. Because logarithms were so useful, tables of base-10 logarithms were given in appendices of many textbooks. Mathematical and navigation handbooks included tables of the logarithms of trigonometric functions as well. For the history of such tables, see log table.
Mantissa and characteristic
An important property of base-10 logarithms, which makes them so useful in calculations, is that the logarithms of numbers greater than 1 that differ by a factor of a power of 10 all have the same fractional part. The fractional part is known as the mantissa. Thus, log tables need only show the fractional part. Tables of common logarithms typically listed the mantissa, t
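As a quick numerical check of this property (an illustrative sketch, not from the source text), the snippet below splits base-10 logarithms into their integer part (the characteristic) and fractional part (the mantissa), showing that numbers differing only by powers of 10 share a mantissa:

```python
import math

def characteristic_and_mantissa(x):
    """Split log10(x) into its integer part (characteristic) and fractional part (mantissa)."""
    value = math.log10(x)
    characteristic = math.floor(value)
    return characteristic, value - characteristic

for x in (3.14159, 31.4159, 3141.59):
    c, m = characteristic_and_mantissa(x)
    print(f"log10({x:>10}) = {c} + {m:.6f}")
# The printed mantissa (0.497150) is the same in every case; only the characteristic changes.
```

This is exactly what made printed log tables compact: one column of mantissas served every decade of the number range.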
https://en.wikipedia.org/wiki/Online%20chat | Online chat may refer to any kind of communication over the Internet that offers a real-time transmission of text messages from sender to receiver. Chat messages are generally short in order to enable other participants to respond quickly. Thereby, a feeling similar to a spoken conversation is created, which distinguishes chatting from other text-based online communication forms such as Internet forums and email. Online chat may address point-to-point communications as well as multicast communications from one sender to many receivers and voice and video chat, or may be a feature of a web conferencing service.
Online chat in a less stringent definition may be primarily any direct text-based or video-based (webcams), one-on-one chat or one-to-many group chat (formally also known as synchronous conferencing), using tools such as instant messengers, Internet Relay Chat (IRC), talkers and possibly MUDs or other online games. The expression online chat comes from the word chat which means "informal conversation". Online chat includes web-based applications that allow communication – often directly addressed, but anonymous between users in a multi-user environment. Web conferencing is a more specific online service that is often sold as a service, hosted on a web server controlled by the vendor.
History
The first online chat system was called Talkomatic, created by Doug Brown and David R. Woolley in 1973 on the PLATO System at the University of Illinois. It offered several channels, each of which could accommodate up to five people, with messages appearing on all users' screens character-by-character as they were typed. Talkomatic was very popular among PLATO users into the mid-1980s. In 2014, Brown and Woolley released a web-based version of Talkomatic.
The first online system to use the actual command "chat" was created for The Source in 1979 by Tom Walker and Fritz Thane of Dialcom, Inc.
Other chat platforms flourished during the 1980s. Among the earliest with a |
https://en.wikipedia.org/wiki/Algebraic%20number%20theory | Algebraic number theory is a branch of number theory that uses the techniques of abstract algebra to study the integers, rational numbers, and their generalizations. Number-theoretic questions are expressed in terms of properties of algebraic objects such as algebraic number fields and their rings of integers, finite fields, and function fields. These properties, such as whether a ring admits unique factorization, the behavior of ideals, and the Galois groups of fields, can resolve questions of primary importance in number theory, like the existence of solutions to Diophantine equations.
History of algebraic number theory
Diophantus
The beginnings of algebraic number theory can be traced to Diophantine equations, named after the 3rd-century Alexandrian mathematician, Diophantus, who studied them and developed methods for the solution of some kinds of Diophantine equations. A typical Diophantine problem is to find two integers x and y such that their sum, and the sum of their squares, equal two given numbers A and B, respectively: x + y = A and x² + y² = B.
Diophantine equations have been studied for thousands of years. For example, the solutions to the quadratic Diophantine equation x² + y² = z² are given by the Pythagorean triples, originally solved by the Babylonians. Solutions to linear Diophantine equations, such as 26x + 65y = 13, may be found using the Euclidean algorithm (c. 5th century BC).
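As an illustration of the linear case just mentioned (a sketch, not from the source text), the extended Euclidean algorithm yields one particular solution of 26x + 65y = 13, which exists because gcd(26, 65) = 13 divides the right-hand side:

```python
def extended_gcd(a, b):
    """Return (g, x, y) with g = gcd(a, b) and a*x + b*y = g."""
    if b == 0:
        return a, 1, 0
    g, x, y = extended_gcd(b, a % b)
    return g, y, x - (a // b) * y

a, b, c = 26, 65, 13
g, x, y = extended_gcd(a, b)        # here g = 13 and 26*x + 65*y = 13
assert c % g == 0                   # a solution exists only if gcd(a, b) divides c
x, y = x * (c // g), y * (c // g)
print(f"{a}*({x}) + {b}*({y}) = {a * x + b * y}")   # 26*(-2) + 65*(1) = 13
```

All other integer solutions follow by adding integer multiples of (65/13, -26/13) = (5, -2) to (x, y).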
Diophantus' major work was the Arithmetica, of which only a portion has survived.
Fermat
Fermat's Last Theorem was first conjectured by Pierre de Fermat in 1637, famously in the margin of a copy of Arithmetica where he claimed he had a proof that was too large to fit in the margin. No successful proof was published until 1995 despite the efforts of countless mathematicians during the 358 intervening years. The unsolved problem stimulated the development of algebraic number theory in the 19th century and the proof of the modularity theorem in the 20th century.
Gauss
One of the founding wor |
https://en.wikipedia.org/wiki/Laplace%20operator | In mathematics, the Laplace operator or Laplacian is a differential operator given by the divergence of the gradient of a scalar function on Euclidean space. It is usually denoted by the symbols ∇·∇, ∇² (where ∇ is the nabla operator), or Δ. In a Cartesian coordinate system, the Laplacian is given by the sum of second partial derivatives of the function with respect to each independent variable. In other coordinate systems, such as cylindrical and spherical coordinates, the Laplacian also has a useful form. Informally, the Laplacian Δf(p) of a function f at a point p measures by how much the average value of f over small spheres or balls centered at p deviates from f(p).
The Laplace operator is named after the French mathematician Pierre-Simon de Laplace (1749–1827), who first applied the operator to the study of celestial mechanics: the Laplacian of the gravitational potential due to a given mass density distribution is a constant multiple of that density distribution. Solutions of Laplace's equation are called harmonic functions and represent the possible gravitational potentials in regions of vacuum.
The Laplacian occurs in many differential equations describing physical phenomena. Poisson's equation describes electric and gravitational potentials; the diffusion equation describes heat and fluid flow; the wave equation describes wave propagation; and the Schrödinger equation describes the wave function in quantum mechanics. In image processing and computer vision, the Laplacian operator has been used for various tasks, such as blob and edge detection. The Laplacian is the simplest elliptic operator and is at the core of Hodge theory as well as the results of de Rham cohomology.
Definition
The Laplace operator is a second-order differential operator in the n-dimensional Euclidean space, defined as the divergence (∇·) of the gradient (∇f). Thus if f is a twice-differentiable real-valued function, then the Laplacian of f is the real-valued function defined by:
Δf = ∇²f = ∇·∇f
where the latter notatio
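Stated explicitly in Cartesian coordinates (a standard restatement added for concreteness, not part of the excerpt above), the Laplacian is the sum of the unmixed second partial derivatives:

```latex
% Laplacian in n-dimensional Cartesian coordinates (standard form).
\[
  \Delta f \;=\; \sum_{i=1}^{n} \frac{\partial^{2} f}{\partial x_{i}^{2}},
  \qquad \text{e.g. in three dimensions} \qquad
  \Delta f = \frac{\partial^{2} f}{\partial x^{2}}
           + \frac{\partial^{2} f}{\partial y^{2}}
           + \frac{\partial^{2} f}{\partial z^{2}}.
\]
```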
https://en.wikipedia.org/wiki/Gravitational%20field | In physics, a gravitational field or gravitational acceleration field is a vector field used to explain the influences that a body extends into the space around itself. A gravitational field is used to explain gravitational phenomena, such as the gravitational force field exerted on another massive body. It has dimension of acceleration (L/T²) and it is measured in units of newtons per kilogram (N/kg) or, equivalently, in meters per second squared (m/s²).
In its original concept, gravity was a force between point masses. Following Isaac Newton, Pierre-Simon Laplace attempted to model gravity as some kind of radiation field or fluid, and since the 19th century, explanations for gravity in classical mechanics have usually been taught in terms of a field model, rather than a point attraction. It results from the spatial gradient of the gravitational potential field.
In general relativity, rather than two particles attracting each other, the particles distort spacetime via their mass, and this distortion is what is perceived and measured as a "force". In such a model one states that matter moves in certain ways in response to the curvature of spacetime, and that there is either no gravitational force, or that gravity is a fictitious force.
Gravity is distinguished from other forces by its obedience to the equivalence principle.
Classical mechanics
In classical mechanics, a gravitational field is a physical quantity. A gravitational field can be defined using Newton's law of universal gravitation. Determined in this way, the gravitational field around a single particle of mass M is a vector field consisting at every point of a vector pointing directly towards the particle. The magnitude of the field at every point is calculated by applying the universal law, and represents the force per unit mass on any object at that point in space. Because the force field is conservative, there is a scalar potential energy per unit mass, Φ, at each point in space associated with t
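For reference (a standard statement of the Newtonian result, added here as a supplement rather than taken from the excerpt), the field of a point mass M and its relation to the potential per unit mass Φ are:

```latex
% Newtonian gravitational field of a point mass M at distance r (standard result).
\[
  \mathbf{g}(\mathbf{r}) \;=\; -\,\frac{G M}{r^{2}}\,\hat{\mathbf{r}}
  \;=\; -\,\nabla \Phi,
  \qquad
  \Phi(r) \;=\; -\,\frac{G M}{r},
  \qquad
  \mathbf{F} \;=\; m\,\mathbf{g},
\]
% where G is the gravitational constant, \hat{\mathbf{r}} points from the mass
% toward the field point, and F is the force on a small test mass m.
```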
https://en.wikipedia.org/wiki/Covert%20channel | In computer security, a covert channel is a type of attack that creates a capability to transfer information objects between processes that are not supposed to be allowed to communicate by the computer security policy. The term, coined in 1973 by Butler Lampson, is defined as channels "not intended for information transfer at all, such as the service program's effect on system load," to distinguish it from legitimate channels that are subjected to access controls by COMPUSEC.
Characteristics
A covert channel is so called because it is hidden from the access control mechanisms of secure operating systems since it does not use the legitimate data transfer mechanisms of the computer system (typically, read and write), and therefore cannot be detected or controlled by the security mechanisms that underlie secure operating systems. Covert channels are exceedingly hard to install in real systems, and can often be detected by monitoring system performance. In addition, they suffer from a low signal-to-noise ratio and low data rates (typically, on the order of a few bits per second). They can also be removed manually with a high degree of assurance from secure systems by well established covert channel analysis strategies.
Covert channels are distinct from, and often confused with, legitimate channel exploitations that attack low-assurance pseudo-secure systems using schemes such as steganography or even less sophisticated schemes to disguise prohibited objects inside of legitimate information objects. The legitimate channel misuse by steganography is specifically not a form of covert channel.
Covert channels can tunnel through secure operating systems and require special measures to control. Covert channel analysis is the only proven way to control covert channels. By contrast, secure operating systems can easily prevent misuse of legitimate channels, so distinguishing both is important. Analysis of legitimate channels for hidden objects is often misrepresente |
https://en.wikipedia.org/wiki/Functional%20predicate | In formal logic and related branches of mathematics, a functional predicate, or function symbol, is a logical symbol that may be applied to an object term to produce another object term.
Functional predicates are also sometimes called mappings, but that term has additional meanings in mathematics.
In a model, a function symbol will be modelled by a function.
Specifically, the symbol F in a formal language is a functional symbol if, given any symbol X representing an object in the language, F(X) is again a symbol representing an object in that language.
In typed logic, F is a functional symbol with domain type T and codomain type U if, given any symbol X representing an object of type T, F(X) is a symbol representing an object of type U.
One can similarly define function symbols of more than one variable, analogous to functions of more than one variable; a function symbol in zero variables is simply a constant symbol.
Now consider a model of the formal language, with the types T and U modelled by sets [T] and [U] and each symbol X of type T modelled by an element [X] in [T].
Then F can be modelled by the set
[F] = { ([X], [F(X)]) : [X] ∈ [T] },
which is simply a function with domain [T] and codomain [U].
It is a requirement of a consistent model that [F(X)] = [F(Y)] whenever [X] = [Y].
Introducing new function symbols
In a treatment of predicate logic that allows one to introduce new predicate symbols, one will also want to be able to introduce new function symbols. Given the function symbols F and G, one can introduce a new function symbol F ∘ G, the composition of F and G, satisfying (F ∘ G)(X) = F(G(X)), for all X.
Of course, the right side of this equation doesn't make sense in typed logic unless the domain type of F matches the codomain type of G, so this is required for the composition to be defined.
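As a small illustration of that typing constraint (a sketch with invented names, not drawn from the source text), the helper below composes two unary "function symbols" only when the codomain type of G matches the domain type of F:

```python
from dataclasses import dataclass
from typing import Any, Callable

@dataclass
class FunctionSymbol:
    """A unary function symbol with explicit domain/codomain type names and a model."""
    name: str
    domain: str
    codomain: str
    interpretation: Callable[[Any], Any]   # the function that models the symbol

def compose(f: FunctionSymbol, g: FunctionSymbol) -> FunctionSymbol:
    """Build F∘G, defined only when G's codomain type matches F's domain type."""
    if g.codomain != f.domain:
        raise TypeError(f"cannot compose: {g.name} yields {g.codomain}, "
                        f"but {f.name} expects {f.domain}")
    return FunctionSymbol(
        name=f"{f.name}∘{g.name}",
        domain=g.domain,
        codomain=f.codomain,
        interpretation=lambda x: f.interpretation(g.interpretation(x)),
    )

# Hypothetical symbols: length : String -> Nat and succ : Nat -> Nat
length = FunctionSymbol("length", "String", "Nat", len)
succ = FunctionSymbol("succ", "Nat", "Nat", lambda n: n + 1)
h = compose(succ, length)                      # succ∘length : String -> Nat
print(h.name, h.interpretation("model"))       # succ∘length 6
```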
One also gets certain function symbols automatically.
In untyped logic, there is an identity predicate id that satisfies id(X) = X for all X.
In typed logic, given any type T, there is an identit |
https://en.wikipedia.org/wiki/SimEarth | SimEarth: The Living Planet is a life simulation game, the second designed by Will Wright, and published in 1990 by Maxis. In SimEarth, the player controls the development of a planet. English scientist James Lovelock served as an advisor and his Gaia hypothesis of planet evolution was incorporated into the game. Versions were made for the Macintosh, Atari ST, Amiga, IBM PC, Super Nintendo Entertainment System, Sega CD, and TurboGrafx-16. It was re-released for the Wii Virtual Console. In 1996, several of Maxis' simulation games were re-released under the Maxis Collector Series with greater compatibility with Windows 95 and differing box art, including the addition of Classics beneath the title. SimEarth was re-released in 1997 under the Classics label.
Gameplay
In SimEarth, the player can vary a planet's atmosphere, temperature, landmasses, etc., then place various forms of life on the planet and watch them evolve. In the “Random Planet” game setting, the game is a software toy, without any required goals. The big (and difficult) challenge is to evolve sentient life and an advanced civilization. The development stages of the planet can be restored and repeated, until the planet "dies" ten billion years after its creation, the estimated time when the Sun will become a red giant and kill off all of the planet's life.
There are also eight scenarios that do have goals, the first three (Aquarium, Cambrian Earth, and Modern-day Earth) involving managing the evolution and development of Earth in different stages, the next four (Mars, Venus, Ice Planet, and Dune) involving terraforming other planets to support life, and the final scenario (Earth 2XXX) involving rescuing life and civilization on a future Earth from self-replicating robots and nuclear warfare and giving the player the option of causing a great flood to help achieve this goal. In addition, there is another game mode besides Random Planet and Scenario mode, called Daisy World, where the only biome on the pl |
https://en.wikipedia.org/wiki/Light-on-dark%20color%20scheme | A light-on-dark color scheme, also called dark mode, dark theme, night mode, black mode, or lights-out (mode), is a color scheme that uses light-colored text, icons, and graphical user interface elements on a dark background. It is often discussed in terms of computer user interface design and web design. Many modern websites and operating systems offer the user an optional light-on-dark display mode.
Some users find dark mode displays more visually appealing, and claim that it can reduce eye strain. Displaying white at full brightness uses roughly six times as much power as pure black on a 2016 Google Pixel, which has an OLED display. However, conventional LED-backlit LCDs cannot benefit from reduced power consumption, because the backlight remains lit regardless of the pixel content. Most modern operating systems support an optional light-on-dark color scheme.
History
Predecessors of modern computer screens, such as cathode-ray oscillographs, oscilloscopes, etc., tended to plot graphs and introduce other content as glowing traces on a black background.
With the introduction of computer screens, originally user interfaces were formed on cathode-ray tubes (CRTs) like those used for oscillographs or oscilloscopes. The phosphor was normally a very dark color, and lit up brightly when the electron beam hit it, appearing to be white, green, blue, or amber on a black background, depending on phosphors applied on a monochrome screen. RGB screens continued to operate similarly, using all the beams set to "on" to form white.
With the advent of teletext, research was done into which primary and secondary light colors and combinations worked best for this new medium. Cyan or yellow on black was typically found to be optimal from a palette of black, red, green, yellow, blue, magenta, cyan and white.
The opposite color set, a dark-on-light color scheme, was originally introduced in WYSIWYG word processors to simulate ink on paper, and became the norm.
Microsoft introduced a dark theme in the Anniversary Update of Windows 10 in 2016. In 2018, |
https://en.wikipedia.org/wiki/Czochralski%20method | The Czochralski method, also Czochralski technique or Czochralski process, is a method of crystal growth used to obtain single crystals of semiconductors (e.g. silicon, germanium and gallium arsenide), metals (e.g. palladium, platinum, silver, gold), salts and synthetic gemstones. The method is named after Polish scientist Jan Czochralski, who invented the method in 1915 while investigating the crystallization rates of metals. He made this discovery by accident: instead of dipping his pen into his inkwell, he dipped it in molten tin, and drew a tin filament, which later proved to be a single crystal. The method is still used in over 90 percent of all electronics in the world that use semiconductors.
The most important application may be the growth of large cylindrical ingots, or boules, of single crystal silicon used in the electronics industry to make semiconductor devices like integrated circuits. Other semiconductors, such as gallium arsenide, can also be grown by this method, although lower defect densities in this case can be obtained using variants of the Bridgman–Stockbarger method.
The method is not limited to production of metal or metalloid crystals. For example, it is used to manufacture very high-purity crystals of salts, including material with controlled isotopic composition, for use in particle physics experiments, with tight controls (part per billion measurements) on confounding metal ions and water absorbed during manufacture.
Application
Monocrystalline silicon (mono-Si) grown by the Czochralski method is often referred to as monocrystalline Czochralski silicon (Cz-Si). It is the basic material in the production of integrated circuits used in computers, TVs, mobile phones and all types of electronic equipment and semiconductor devices. Monocrystalline silicon is also used in large quantities by the photovoltaic industry for the production of conventional mono-Si solar cells. The almost perfect crystal structure yields the highest light-to-ele |
https://en.wikipedia.org/wiki/Integrated%20services | In computer networking, integrated services or IntServ is an architecture that specifies the elements to guarantee quality of service (QoS) on networks. IntServ can for example be used to allow video and sound to reach the receiver without interruption.
IntServ specifies a fine-grained QoS system, which is often contrasted with DiffServ's coarse-grained control system.
Under IntServ, every router in the system implements IntServ, and every application that requires some kind of QoS guarantee has to make an individual reservation. Flow specs describe what the reservation is for, while RSVP is the underlying mechanism to signal it across the network.
Flow specs
There are two parts to a flow spec:
What does the traffic look like? Done in the Traffic SPECification part, also known as TSPEC.
What guarantees does it need? Done in the service Request SPECification part, also known as RSPEC.
TSPECs include token bucket algorithm parameters. The idea is that there is a token bucket which slowly fills up with tokens, arriving at a constant rate. Every packet which is sent requires a token, and if there are no tokens, then it cannot be sent. Thus, the rate at which tokens arrive dictates the average rate of traffic flow, while the depth of the bucket dictates how 'bursty' the traffic is allowed to be.
TSPECs typically just specify the token rate and the bucket depth. For example, a video with a refresh rate of 75 frames per second, with each frame taking 10 packets, might specify a token rate of 750 Hz, and a bucket depth of only 10. The bucket depth would be sufficient to accommodate the 'burst' associated with sending an entire frame all at once. On the other hand, a conversation would need a lower token rate, but a much higher bucket depth. This is because there are often pauses in conversations, so they can make do with fewer tokens by not sending the gaps between words and sentences. However, this means the bucket depth needs to be increased to compensate for the
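To make those TSPEC parameters concrete (an illustrative sketch, not an IntServ implementation), the token bucket below uses the figures from the video example above: a rate of 750 tokens per second and a depth of 10, so a 10-packet frame can go out as a single burst while the long-run rate stays capped:

```python
class TokenBucket:
    """Token bucket: tokens arrive at `rate` per second up to `depth`; one token per packet."""

    def __init__(self, rate, depth):
        self.rate = rate        # token arrival rate (tokens/second) -> average traffic rate
        self.depth = depth      # bucket depth -> maximum burst size
        self.tokens = depth     # start with a full bucket
        self.last = 0.0         # time of the previous update, in seconds

    def allow(self, now):
        """Return True if a packet may be sent at time `now`, consuming one token."""
        # Add the tokens accrued since the last call, capped at the bucket depth.
        self.tokens = min(self.depth, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(rate=750, depth=10)
# A 10-packet video frame arriving all at once at t = 0 is accepted as a burst...
print([bucket.allow(0.0) for _ in range(10)])    # ten True values
# ...but an 11th back-to-back packet has to wait roughly 1/750 s for a fresh token.
print(bucket.allow(0.0), bucket.allow(1 / 750))  # False True
```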
https://en.wikipedia.org/wiki/Relational%20algebra | In database theory, relational algebra is a theory that uses algebraic structures for modeling data, and defining queries on it with a well founded semantics. The theory was introduced by Edgar F. Codd.
The main application of relational algebra is to provide a theoretical foundation for relational databases, particularly query languages for such databases, chief among which is SQL. Relational databases store tabular data represented as relations. Queries over relational databases often likewise return tabular data represented as relations.
The main purpose of relational algebra is to define operators that transform one or more input relations to an output relation. Given that these operators accept relations as input and produce relations as output, they can be combined and used to express complex queries that transform multiple input relations (whose data are stored in the database) into a single output relation (the query results).
Unary operators accept a single relation as input. Examples include operators to filter certain attributes (columns) or tuples (rows) from an input relation. Binary operators accept two relations as input and combine them into a single output relation. For example, taking all tuples found in either relation (union), removing tuples from the first relation found in the second relation (difference), extending the tuples of the first relation with tuples in the second relation matching certain conditions, and so forth.
Other more advanced operators can also be included, where the inclusion or exclusion of certain operators gives rise to a family of algebras.
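As a minimal sketch of such operators over Python sets of tuples (illustrative only; the relation and attribute names are invented, and real systems work with named rather than positional attributes), selection and projection are unary while union and difference are binary:

```python
def select(relation, predicate):
    """Unary: keep only the tuples (rows) satisfying the predicate."""
    return {row for row in relation if predicate(row)}

def project(relation, attributes, keep):
    """Unary: keep only the named attributes (columns); duplicates collapse automatically."""
    idx = [attributes.index(a) for a in keep]
    return {tuple(row[i] for i in idx) for row in relation}

def union(r, s):
    """Binary: all tuples found in either relation (schemas must match)."""
    return r | s

def difference(r, s):
    """Binary: tuples of the first relation not found in the second."""
    return r - s

# Hypothetical relation Employee(name, dept, salary)
attributes = ["name", "dept", "salary"]
employee = {("Ada", "R&D", 90), ("Lin", "R&D", 80), ("Bo", "Sales", 70)}

rnd = select(employee, lambda row: row[1] == "R&D")
print(project(rnd, attributes, ["name"]))        # {('Ada',), ('Lin',)}
print(difference(employee, rnd))                 # {('Bo', 'Sales', 70)}
```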
Introduction
Relational algebra received little attention outside of pure mathematics until the publication of E.F. Codd's relational model of data in 1970. Codd proposed such an algebra as a basis for database query languages. (See section Implementations.)
Relational algebra operates on homogeneous sets of tuples
where we commonly interpret m to be the number of rows in a |
https://en.wikipedia.org/wiki/Tuple%20relational%20calculus | Tuple calculus is a calculus that was created and introduced by Edgar F. Codd as part of the relational model, in order to provide a declarative database-query language for data manipulation in this data model. It formed the inspiration for the database-query languages QUEL and SQL, of which the latter, although far less faithful to the original relational model and calculus, is now the de facto standard database-query language; a dialect of SQL is used by nearly every relational-database-management system. Michel Lacroix and Alain Pirotte proposed domain calculus, which is closer to first-order logic and together with Codd showed that both of these calculi (as well as relational algebra) are equivalent in expressive power. Subsequently, query languages for the relational model were called relationally complete if they could express at least all of these queries.
Definition of the calculus
Relational database
Since the calculus is a query language for relational databases we first have to define a relational database. The basic relational building block is the domain (somewhat similar, but not equal to, a data type). A tuple is a finite sequence of attributes, which are ordered pairs of domains and values. A relation is a set of (compatible) tuples. Although these relational concepts are mathematically defined, those definitions map loosely to traditional database concepts. A table is an accepted visual representation of a relation; a tuple is similar to the concept of a row.
We first assume the existence of a set C of column names, examples of which are "name", "author", "address", etcetera. We define headers as finite subsets of C. A relational database schema is defined as a tuple S = (D, R, h) where D is the domain of atomic values (see relational model for more on the notions of domain and atomic value), R is a finite set of relation names, and
h : R → 2^C
a function that associates a header with each relation name in R. (Note that this is a simplific |
https://en.wikipedia.org/wiki/Dynamic%20recompilation | In computer science, dynamic recompilation is a feature of some emulators and virtual machines, where the system may recompile some part of a program during execution. By compiling during execution, the system can tailor the generated code to reflect the program's run-time environment, and potentially produce more efficient code by exploiting information that is not available to a traditional static compiler.
Uses
Most dynamic recompilers are used to convert machine code between architectures at runtime. This is a task often needed in the emulation of legacy gaming platforms. In other cases, a system may employ dynamic recompilation as part of an adaptive optimization strategy to execute a portable program representation such as Java or .NET Common Language Runtime bytecodes. Full-speed debuggers also utilize dynamic recompilation to reduce the space overhead incurred in most deoptimization techniques, and other features such as dynamic thread migration.
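As a toy illustration of the idea (not any real emulator's design; the miniature stack "ISA" and helper names are invented for this sketch), the recompiler below translates a basic block into a Python closure the first time it is executed and caches the result, so later executions of the same block skip translation entirely:

```python
def translate_block(block):
    """'Recompile' one basic block of a toy stack ISA into a host-side callable."""
    ops = []
    for instr in block:                      # read in the source platform's 'machine code'
        if instr[0] == "push":
            ops.append(lambda st, v=instr[1]: st.append(v))
        elif instr[0] == "add":
            ops.append(lambda st: st.append(st.pop() + st.pop()))
        elif instr[0] == "mul":
            ops.append(lambda st: st.append(st.pop() * st.pop()))
        else:
            raise ValueError(f"unknown instruction {instr!r}")

    def compiled(stack):                     # the 'emitted' code for this block
        for op in ops:
            op(stack)
        return stack

    return compiled

translation_cache = {}                       # block address -> compiled closure

def run_block(address, program, stack):
    """Execute the block at `address`, translating it on first use only."""
    if address not in translation_cache:
        translation_cache[address] = translate_block(program[address])
    return translation_cache[address](stack)

program = {0x100: [("push", 6), ("push", 7), ("mul", None), ("push", 1), ("add", None)]}
print(run_block(0x100, program, []))         # [43]
print(run_block(0x100, program, []))         # [43], served from the translation cache
```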
Tasks
The main tasks a dynamic recompiler has to perform are:
Reading in machine code from the source platform
Emitting machine code for the target platform
A dynamic recompiler may also perform some auxiliary tasks:
Managing a cache of recompiled code
Updating of elapsed cycle counts on platforms with cycle count registers
Management of interrupt checking
Providing an interface to virtualized support hardware, for example a GPU
Optimizing higher-level code structures to run efficiently on the target hardware (see below)
Applications
Many Java virtual machines feature dynamic recompilation.
Apple's Rosetta for Mac OS X on x86 allows PowerPC code to be run on the x86 architecture.
Later versions of the Mac 68K emulator used in classic Mac OS to run 680x0 code on the PowerPC hardware.
Psyco, a specializing compiler for Python.
The HP Dynamo project, an example of a transparent binary dynamic optimizer.
DynamoRIO, an open-source successor to Dynamo that works with the ARM, x86-64 and IA-64 (Itanium) |
https://en.wikipedia.org/wiki/AES3 | AES3 is a standard for the exchange of digital audio signals between professional audio devices. An AES3 signal can carry two channels of pulse-code-modulated digital audio over several transmission media including balanced lines, unbalanced lines, and optical fiber.
AES3 was jointly developed by the Audio Engineering Society (AES) and the European Broadcasting Union (EBU) and so is also known as AES/EBU. The standard was first published in 1985 and was revised in 1992 and 2003. AES3 has been incorporated into the International Electrotechnical Commission's standard IEC 60958, and is available in a consumer-grade variant known as S/PDIF.
History and development
The development of standards for digital audio interconnect for both professional and domestic audio equipment, began in the late 1970s in a joint effort between the Audio Engineering Society and the European Broadcasting Union, and culminated in the publishing of AES3 in 1985. The AES3 standard has been revised in 1992 and 2003 and is published in AES and EBU versions. Early on, the standard was frequently known as AES/EBU.
Variants using different physical connections are specified in IEC 60958. These are essentially consumer versions of AES3 for use within the domestic high fidelity environment using connectors more commonly found in the consumer market. These variants are commonly known as S/PDIF.
Related standards and documents
IEC 60958
IEC 60958 (formerly IEC 958) is the International Electrotechnical Commission's standard on digital audio interfaces. It reproduces the AES3 professional digital audio interconnect standard and the consumer version of the same, S/PDIF.
The standard consists of several parts:
IEC 60958-1: General
IEC 60958-2: Software Information Delivery Mode
IEC 60958-3: Consumer applications
IEC 60958-4: Professional applications
IEC 60958-5: Consumer application enhancement
AES-2id
AES-2id is an AES information document published by the Audio Engineering Society for digita |
https://en.wikipedia.org/wiki/Ciphertext | In cryptography, ciphertext or cyphertext is the result of encryption performed on plaintext using an algorithm, called a cipher. Ciphertext is also known as encrypted or encoded information because it contains a form of the original plaintext that is unreadable by a human or computer without the proper cipher to decrypt it. This process prevents the loss of sensitive information via hacking. Decryption, the inverse of encryption, is the process of turning ciphertext into readable plaintext. Ciphertext is not to be confused with codetext because the latter is a result of a code, not a cipher.
Conceptual underpinnings
Let m be the plaintext message that Alice wants to secretly transmit to Bob and let E_k be the encryption cipher, where k is a cryptographic key. Alice must first transform the plaintext into ciphertext, c, in order to securely send the message to Bob, as follows: c = E_k(m).
In a symmetric-key system, Bob knows Alice's encryption key. Once the message is encrypted, Alice can safely transmit it to Bob (assuming no one else knows the key). In order to read Alice's message, Bob must decrypt the ciphertext using D_k, which is known as the decryption cipher: m = D_k(c).
Alternatively, in a non-symmetric key system, everyone, not just Alice and Bob, knows the encryption key; but the decryption key cannot be inferred from the encryption key. Only Bob knows the decryption key, and decryption proceeds as m = D(c).
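As a concrete toy example of the symmetric case (a deliberately simplified sketch, not a recommendation or a real cryptosystem), a one-time-pad-style XOR cipher makes the roles of E_k and D_k explicit; with XOR, the same keyed operation both encrypts and decrypts:

```python
import secrets

def xor_bytes(data: bytes, key: bytes) -> bytes:
    """One-time-pad-style XOR: the same operation serves as both E_k and D_k."""
    if len(key) < len(data):
        raise ValueError("key must be at least as long as the message")
    return bytes(d ^ k for d, k in zip(data, key))

plaintext = b"MEET AT DAWN"
key = secrets.token_bytes(len(plaintext))   # Alice and Bob must share this key secretly

ciphertext = xor_bytes(plaintext, key)      # c = E_k(m)
recovered = xor_bytes(ciphertext, key)      # m = D_k(c)

print(ciphertext.hex())                     # unreadable without the key
assert recovered == plaintext
```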
Types of ciphers
The history of cryptography began thousands of years ago. Cryptography uses a variety of different types of encryption. Earlier algorithms were performed by hand and are substantially different from modern algorithms, which are generally executed by a machine.
Historical ciphers
Historical pen and paper ciphers used in the past are sometimes known as classical ciphers. They include:
Substitution cipher: the units of plaintext are replaced with ciphertext (e.g., Caesar cipher and one-time pad)
Polyalphabetic substitution cipher: a substitution cipher using mu |
https://en.wikipedia.org/wiki/Passivation%20%28chemistry%29 | In physical chemistry and engineering, passivation is coating a material so that it becomes "passive", that is, less readily affected or corroded by the environment. Passivation involves creation of an outer layer of shield material that is applied as a microcoating, created by chemical reaction with the base material, or allowed to build by spontaneous oxidation in the air. As a technique, passivation is the use of a light coat of a protective material, such as metal oxide, to create a shield against corrosion. Passivation of silicon is used during fabrication of microelectronic devices. Undesired passivation of electrodes, called "fouling", increases the circuit resistance so it interferes with some electrochemical applications such as electrocoagulation for wastewater treatment, amperometric chemical sensing, and electrochemical synthesis.
When exposed to air, many metals naturally form a hard, relatively inert surface layer, usually an oxide (termed the "native oxide layer") or a nitride, that serves as a passivation layer. In the case of silver, the dark tarnish is a passivation layer of silver sulfide formed from reaction with environmental hydrogen sulfide. (In contrast, metals such as iron oxidize readily to form a rough porous coating of rust that adheres loosely and sloughs off readily, allowing further oxidation.) The passivation layer of oxide markedly slows further oxidation and corrosion in room-temperature air for aluminium, beryllium, chromium, zinc, titanium, and silicon (a metalloid). The inert surface layer formed by reaction with air has a thickness of about 1.5 nm for silicon, 1–10 nm for beryllium, and 1 nm initially for titanium, growing to 25 nm after several years. Similarly, for aluminium, it grows to about 5 nm after several years.
In the context of the semiconductor device fabrication, such as silicon MOSFET transistors and solar cells, surface passivation refers not only to reducing the chemical reactivity of the surface but also to e |
https://en.wikipedia.org/wiki/Electrical%20element | In electrical engineering, electrical elements are conceptual abstractions representing idealized electrical components, such as resistors, capacitors, and inductors, used in the analysis of electrical networks. All electrical networks can be analyzed as multiple electrical elements interconnected by wires. Where the elements roughly correspond to real components, the representation can be in the form of a schematic diagram or circuit diagram. This is called a lumped-element circuit model. In other cases, infinitesimal elements are used to model the network in a distributed-element model.
These ideal electrical elements represent actual, physical electrical or electronic components. Still, they do not exist physically and are assumed to have ideal properties. In contrast, actual electrical components have less than ideal properties, a degree of uncertainty in their values, and some degree of nonlinearity. To model the nonideal behavior of a real circuit component may require a combination of multiple ideal electrical elements to approximate its function. For example, an inductor circuit element is assumed to have inductance but no resistance or capacitance, while a real inductor, a coil of wire, has some resistance in addition to its inductance. This may be modeled by an ideal inductance element in series with a resistance.
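As a small numerical illustration of that modelling step (the component values are invented for the example), a real coil approximated as an ideal inductance in series with an ideal resistance has the impedance Z = R + jωL:

```python
import math

def series_rl_impedance(resistance_ohm, inductance_h, frequency_hz):
    """Impedance of an ideal resistance in series with an ideal inductance: Z = R + j*omega*L."""
    omega = 2 * math.pi * frequency_hz
    return complex(resistance_ohm, omega * inductance_h)

# Hypothetical real inductor modelled as 10 mH of ideal inductance plus 1.2 ohm of wire resistance
z = series_rl_impedance(resistance_ohm=1.2, inductance_h=10e-3, frequency_hz=1000)
print(f"|Z| = {abs(z):.1f} ohm, phase = {math.degrees(math.atan2(z.imag, z.real)):.1f} deg")
# An ideal inductor alone would give a purely imaginary impedance (a phase of exactly 90 degrees).
```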
Circuit analysis using electric elements is useful for understanding practical networks of electrical components. Analyzing how a network is affected by its individual elements makes it possible to estimate how a real network will behave.
Types
Circuit elements can be classified into different categories. One is how many terminals they have to connect them to other components:
One-port elements represent the simplest components, with only two terminals to connect to. Examples are resistances, capacitances, inductances, and diodes.
Multiport elements: these have more than two terminals. They connect to the external circuit through multiple pa
https://en.wikipedia.org/wiki/Boiler | A boiler is a closed vessel in which fluid (generally water) is heated. The fluid does not necessarily boil. The heated or vaporized fluid exits the boiler for use in various processes or heating applications, including water heating, central heating, boiler-based power generation, cooking, and sanitation.
Heat sources
In a fossil fuel power plant using a steam cycle for power generation, the primary heat source will be combustion of coal, oil, or natural gas. In some cases byproduct fuel such as the carbon monoxide rich offgasses of a coke battery can be burned to heat a boiler; biofuels such as bagasse, where economically available, can also be used. In a nuclear power plant, boilers called steam generators are heated by the heat produced by nuclear fission. Where a large volume of hot gas is available from some process, a heat recovery steam generator or recovery boiler can use the heat to produce steam, with little or no extra fuel consumed; such a configuration is common in a combined cycle power plant where a gas turbine and a steam boiler are used. In all cases the combustion product waste gases are separate from the working fluid of the steam cycle, making these systems examples of external combustion engines.
Materials
The pressure vessel of a boiler is usually made of steel (or alloy steel), or historically of wrought iron. Stainless steel, especially of the austenitic types, is not used in wetted parts of boilers due to corrosion and stress corrosion cracking. However, ferritic stainless steel is often used in superheater sections that will not be exposed to boiling water, and electrically-heated stainless steel shell boilers are allowed under the European "Pressure Equipment Directive" for production of steam for sterilizers and disinfectors.
In live steam models, copper or brass is often used because it is more easily fabricated in smaller size boilers. Historically, copper was often used for fireboxes (particularly for steam locomotives), because o |
https://en.wikipedia.org/wiki/Zero%20insertion%20force | Zero insertion force (ZIF) is a type of IC socket or electrical connector that requires very little (but not literally zero) force for insertion. With a ZIF socket, before the IC is inserted, a lever or slider on the side of the socket is moved, pushing all the sprung contacts apart so that the IC can be inserted with very little force - generally the weight of the IC itself is sufficient and no external downward force is required. The lever is then moved back, allowing the contacts to close and grip the pins of the IC. ZIF sockets are much more expensive than standard IC sockets and also tend to take up a larger board area due to the space taken up by the lever mechanism. Typically, they are only used when there is a good reason to do so.
Design
A normal integrated circuit (IC) socket requires the IC to be pushed into sprung contacts which then grip by friction. For an IC with hundreds of pins, the total insertion force can be very large (hundreds of newtons), leading to a danger of damage to the device or the circuit board. Also, even with relatively small pin counts, each pin extraction is fairly awkward and carries a significant risk of bending pins, particularly if the person performing the extraction hasn't had much practice or if the board is crowded. Low insertion force (LIF) sockets reduce the issues of insertion and extraction, but because their insertion force is lower than that of a conventional socket, they are likely to produce less reliable connections.
Large ZIF sockets are only commonly found mounted on PC motherboards, being used from about the mid 1990s forward. These CPU sockets are designed to support a particular range of CPUs, allowing computer retailers and consumers to assemble motherboard/CPU combinations based on individual budget and requirements. The rest of the electronics industry has largely abandoned sockets (of any kind) and instead moved to the use of surface mount components soldered directly to the board.
Smaller ZIF sockets are commonly |
https://en.wikipedia.org/wiki/Socket%207 | Socket 7 is a physical and electrical specification for an x86-style CPU socket on a personal computer motherboard. It was released in June 1995. The socket supersedes the earlier Socket 5, and accepts P5 Pentium microprocessors manufactured by Intel, as well as compatibles made by Cyrix/IBM, AMD, IDT and others. Socket 7 was the only socket that supported a wide range of CPUs from different manufacturers and a wide range of speeds.
Differences between Socket 5 and Socket 7 are that Socket 7 has an extra pin and is designed to provide dual split rail voltage, as opposed to Socket 5's single voltage. However, not all motherboard manufacturers supported the dual voltage on their boards initially. Socket 7 is backwards compatible; a Socket 5 CPU can be inserted and used on a Socket 7 motherboard.
Processors that used Socket 7 are the AMD K5 and K6, the Cyrix 6x86 and 6x86MX, the IDT WinChip, the Intel P5 Pentium (2.5–3.5 V, 75–200 MHz), the Pentium MMX (166–233 MHz), and the Rise Technology mP6.
Socket 7 typically uses a 321-pin (arranged as 19 by 19 pins) SPGA ZIF socket or the very rare 296-pin (arranged as 37 by 37 pins) SPGA LIF socket. The size is 1.95" x 1.95" (4.95 cm x 4.95 cm).
An extension of Socket 7, Super Socket 7, was developed by AMD for their K6-2 and K6-III processors to operate at a higher clock rate and use AGP.
Socket 7 and Socket 8 were replaced by Slot 1 and Slot 2 in 1999.
See also
List of Intel microprocessors
List of AMD microprocessors
References
Socket 007 |
https://en.wikipedia.org/wiki/Relational%20calculus | The relational calculus consists of two calculi, the tuple relational calculus and the domain relational calculus, which are part of the relational model for databases and provide a declarative way to specify database queries. The raison d'être of relational calculus is the formalization of query optimization, which is finding more efficient ways to execute the same query in a database.
The relational calculus is similar to the relational algebra, which is also part of the relational model: While the relational calculus is meant as a declarative language that prescribes no execution order on the subexpressions of a relational calculus expression, the relational algebra is meant as an imperative language: the sub-expressions of a relational algebraic expression are meant to be executed from left-to-right and inside-out following their nesting.
Per Codd's theorem, the relational algebra and the domain-independent relational calculus are logically equivalent.
Example
A relational algebra expression might prescribe the following steps to retrieve the phone numbers and names of book stores that supply Some Sample Book:
Join book stores and titles over the BookstoreID.
Restrict the result of that join to tuples for the book Some Sample Book.
Project the result of that restriction over StoreName and StorePhone.
A relational calculus expression would formulate this query in the following descriptive or declarative manner:
Get StoreName and StorePhone for book stores such that there exists a title BK with the same BookstoreID value and with a BookTitle value of Some Sample Book.
Mathematical properties
The relational algebra and the domain-independent relational calculus are logically equivalent: for any algebraic expression, there is an equivalent expression in the calculus, and vice versa. This result is known as Codd's theorem.
Purpose
The raison d'être of the relational calculus is the formalization of query optimization. Query optimization consists in |
https://en.wikipedia.org/wiki/Plasma%20display | A plasma display panel (PDP) is a type of flat panel display that uses small cells containing plasma: ionized gas that responds to electric fields. Plasma televisions were the first large (over 32 inches diagonal) flat panel displays to be released to the public.
Until about 2007, plasma displays were commonly used in large televisions. By 2013, they had lost nearly all market share due to competition from low-cost LCDs and more expensive but high-contrast OLED flat-panel displays. Manufacturing of plasma displays for the United States retail market ended in 2014, and manufacturing for the Chinese market ended in 2016. Plasma displays are obsolete, having been superseded in most if not all aspects by OLED displays.
General characteristics
Plasma displays are bright (1,000 lux or higher for the display module), have a wide color gamut, and can be produced in fairly large sizes—up to diagonally. They had a very low luminance "dark-room" black level compared with the lighter grey of the unilluminated parts of an LCD screen. (As plasma panels are locally lit and do not require a back light, blacks are blacker on plasma and grayer on LCD's.) LED-backlit LCD televisions have been developed to reduce this distinction. The display panel itself is about thick, generally allowing the device's total thickness (including electronics) to be less than . Power consumption varies greatly with picture content, with bright scenes drawing significantly more power than darker ones – this is also true for CRTs as well as modern LCDs where LED backlight brightness is adjusted dynamically. The plasma that illuminates the screen can reach a temperature of at least . Typical power consumption is 400 watts for a screen. Most screens are set to "vivid" mode by default in the factory (which maximizes the brightness and raises the contrast so the image on the screen looks good under the extremely bright lights that are common in big box stores), which draws at least twice the power (around |
https://en.wikipedia.org/wiki/Active-matrix%20liquid-crystal%20display | An active-matrix liquid-crystal display (AMLCD) is a type of flat-panel display used in high-resolution TVs, computer monitors, notebook computers, tablet computers and smartphones with an LCD screen, due to low weight, very good image quality, wide color gamut and fast response time.
The concept of active-matrix LCDs was proposed by Bernard J. Lechner at the RCA Laboratories in 1968. The first functional AMLCD with thin-film transistors was made by T. Peter Brody, Fang-Chen Luo and their team at Westinghouse Electric Corporation in 1972. However, it took years of additional research and development by others to launch successful products.
Introduction
The most common type of AMLCD contains, besides the polarizing sheets and cells of liquid crystal, a matrix of thin-film transistors to make a thin-film-transistor liquid-crystal display. These devices store the electrical state of each pixel on the display while all the other pixels are being updated. This method provides a much brighter, sharper display than a passive matrix of the same size. An important specification for these displays is their viewing-angle.
Thin-film transistors are usually used for constructing an active matrix so that the two terms are often interchanged, even though a thin-film transistor is just one component in an active matrix and some active-matrix designs have used other components such as diodes. Whereas a passive matrix display uses a simple conductive grid to apply a voltage to the liquid crystals in the target area, an active-matrix display uses a grid of transistors and capacitors with the ability to hold a charge for a limited period of time. Because of the switching action of transistors, only the desired pixel receives a charge, and the pixel acts as a capacitor to hold the charge until the next refresh cycle, improving image quality over a passive matrix. This is a special version of a sample-and-hold circuit.
See also
Organic light-emitting diode
Active-matrix organic |
https://en.wikipedia.org/wiki/Robot%20control | Robotic control is the system that contributes to the movement of robots. This involves the mechanical aspects and programmable systems that make it possible to control robots. Robots can be controlled by various means including manual, wireless, semi-autonomous (a mix of fully automatic and wireless control), and fully autonomous (using artificial intelligence).
Modern robots (2000-present)
Medical and surgical
In the medical field, robots are used to make precise movements that are difficult for humans. Robotic surgery involves the use of less-invasive surgical methods, which are “procedures performed through tiny incisions”. Robots use the da Vinci surgical system, which involves a robotic arm (which holds onto surgical instruments) and a camera. The surgeon sits at a console where he or she controls the robot remotely. The feed from the camera is projected on a monitor, allowing the surgeon to see the incisions. The system is built to mimic the movement of the surgeon’s hands and has the ability to filter slight hand tremors. But despite the visual feedback, there is no physical feedback. In other words, as the surgeon applies force on the console, the surgeon won’t be able to feel how much pressure he or she is applying to the tissue.
Military
The earliest robots used in the military date back to the 19th century, when automatic weapons were on the rise due to developments in mass production. The first automated weapons were used in World War I, including radio-controlled, unmanned aerial vehicles (UAVs). Since then, the technology of ground and aerial robotic weapons has continued to develop and has become part of modern warfare. During this transition, robots were semi-automatic, able to be controlled remotely by a human controller. Advancements made in sensors and processors led to advancements in the capabilities of military robots. Since the mid-20th century, the technology of artificial intelligence
https://en.wikipedia.org/wiki/Overclocking | In computing, overclocking is the practice of increasing the clock rate of a computer to exceed that certified by the manufacturer. Commonly, operating voltage is also increased to maintain a component's operational stability at accelerated speeds. Semiconductor devices operated at higher frequencies and voltages increase power consumption and heat. An overclocked device may be unreliable or fail completely if the additional heat load is not removed or power delivery components cannot meet increased power demands. Many device warranties state that overclocking or over-specification voids any warranty, but some manufacturers allow overclocking as long as it is done (relatively) safely.
Overview
The purpose of overclocking is to increase the operating speed of a given component. Normally, on modern systems, the target of overclocking is increasing the performance of a major chip or subsystem, such as the main processor or graphics controller, but other components, such as system memory (RAM) or system buses (generally on the motherboard), are commonly involved. The trade-offs are an increase in power consumption (heat), fan noise (cooling), and shortened lifespan for the targeted components. Most components are designed with a margin of safety to deal with operating conditions outside of a manufacturer's control; examples are ambient temperature and fluctuations in operating voltage. Overclocking techniques in general aim to trade this safety margin by setting the device to run in the higher end of the margin, with the understanding that temperature and voltage must be more strictly monitored and controlled by the user. Examples are that operating temperature would need to be more strictly controlled with increased cooling, as the part will be less tolerant of increased temperatures at the higher speeds. Also base operating voltage may be increased to compensate for unexpected voltage drops and to strengthen signalling and timing signals, as low-voltage excursions |
https://en.wikipedia.org/wiki/Geoid | The geoid () is the shape that the ocean surface would take under the influence of the gravity of Earth, including gravitational attraction and Earth's rotation, if other influences such as winds and tides were absent. This surface is extended through the continents (such as with very narrow hypothetical canals). According to Gauss, who first described it, it is the "mathematical figure of the Earth", a smooth but irregular surface whose shape results from the uneven distribution of mass within and on the surface of Earth. It can be known only through extensive gravitational measurements and calculations. Despite being an important concept for almost 200 years in the history of geodesy and geophysics, it has been defined to high precision only since advances in satellite geodesy in the late 20th century.
All points on a geoid surface have the same geopotential (the sum of gravitational potential energy and centrifugal potential energy). The force of gravity acts everywhere perpendicular to the geoid, meaning that plumb lines point perpendicular and bubble levels are parallel to the geoid.
Being an equigeopotential means the geoid corresponds to the free surface of water at rest (if only gravity and rotational acceleration were at work); this is also a sufficient condition for a ball to remain at rest instead of rolling over the geoid.
Earth's gravity acceleration (the vertical derivative of geopotential) is thus non-uniform over the geoid.
The geoid undulation or geoidal height is the height of the geoid relative to a given reference ellipsoid.
The geoid serves as a coordinate surface for various vertical coordinates, such as orthometric heights, geopotential heights, and dynamic heights (see Geodesy#Heights).
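As a practical illustration of how the geoidal height is used, the standard geodetic relation h ≈ H + N connects the ellipsoidal height h (e.g., from GNSS), the orthometric height H, and the undulation N; this relation is not spelled out in the excerpt above, and the numbers in the sketch are hypothetical (real undulations come from a geoid model such as EGM2008):

```python
# Relation between geoidal height (undulation) N, ellipsoidal height h,
# and orthometric height H:  h ≈ H + N, so H ≈ h - N.
# The values below are illustrative only.

def orthometric_height(ellipsoidal_height_m, geoid_undulation_m):
    """Approximate height above the geoid ('above sea level') from a GNSS height."""
    return ellipsoidal_height_m - geoid_undulation_m

h = 152.3   # metres above the reference ellipsoid (hypothetical GNSS reading)
N = 47.1    # metres of geoid undulation at this location (hypothetical)
print(f"Orthometric height: {orthometric_height(h, N):.1f} m")  # 105.2 m
```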
Description
The geoid surface is irregular, unlike the reference ellipsoid (which is a mathematical idealized representation of the physical Earth as an ellipsoid), but is considerably smoother than Earth's physical surface. Although the "ground" of the Ea |
https://en.wikipedia.org/wiki/Clownfish | Clownfish or anemonefish are fishes from the subfamily Amphiprioninae in the family Pomacentridae. Thirty species of clownfish are recognized: one in the genus Premnas and the rest in the genus Amphiprion. In the wild, they all form symbiotic mutualisms with sea anemones. Depending on the species, anemonefish are overall yellow, orange, or a reddish or blackish color, and many show white bars or patches. The largest can reach a length of , while the smallest barely achieve .
Distribution and habitat
Anemonefish are endemic to the warmer waters of the Indian Ocean, including the Red Sea, and Pacific Ocean, the Great Barrier Reef, Southeast Asia, Japan, and the Indo-Malaysian region. While most species have restricted distributions, others are widespread. Anemonefish typically live at the bottom of shallow seas in sheltered reefs or in shallow lagoons. No anemonefish are found in the Atlantic.
Diet
Anemonefish are omnivorous and can feed on undigested food from their host anemones, and the fecal matter from the anemonefish provides nutrients to the sea anemone. Anemonefish primarily feed on small zooplankton from the water column, such as copepods and tunicate larvae; a small portion of their diet comes from algae, with the exception of Amphiprion perideraion, which feeds primarily on algae.
Symbiosis and mutualism
Anemonefish and sea anemones have a symbiotic, mutualistic relationship, each providing many benefits to the other. The individual species are generally highly host specific. The sea anemone protects the anemonefish from predators, as well as providing food through the scraps left from the anemone's meals and occasional dead anemone tentacles, and functions as a safe nest site. In return, the anemonefish defends the anemone from its predators and parasites. The anemone also picks up nutrients from the anemonefish's excrement. The nitrogen excreted from anemonefish increases the number of algae incorporated into the tissue of their ho |
https://en.wikipedia.org/wiki/Internet%20Storm%20Center | The Internet Storm Center (ISC) is a program of the SANS Technology Institute, a branch of the SANS Institute which monitors the level of malicious activity on the Internet, particularly with regard to large-scale infrastructure events.
History
The ISC evolved from "Incidents.org", a site initially founded by the SANS Institute to assist in public-private sector cooperation during the Y2K cutover. In 2000, Incidents.org started to cooperate with DShield to create a Consensus Incidents Database (CID). It collected security information from cooperating sites and agencies for mass analysis.
On March 22, 2001, the SANS CID was responsible for the early detection of the "Lion" worm attacks on various facilities. The quick warning and counter-efforts organized by the CID were instrumental in controlling the damage done by this worm, which otherwise might have been considerably worse.
Later, DShield was integrated more closely into incidents.org as the SANS Institute started to sponsor DShield. The CID was renamed the "Internet Storm Center" in acknowledgement of the way it uses its distributed sensor network, much as a weather reporting center detects and tracks an atmospheric storm and provides warnings. Since that time the ISC has expanded its monitoring operations; its website cites a figure of over twenty million "intrusion detection log entries" per day. It continues to provide analyses and alerts of security threats to the Internet community.
During the last hours of 2005 and the first weeks of 2006, the Internet Storm Center spent its longest period up to that time at "yellow" on the Infocon, in response to the WMF vulnerability.
The most prominent feature of the ISC is a daily "Handler Diary", which is prepared by one of the 40 volunteer incident handlers and summarizes the events of the day. It is frequently the first public source for new attack trends and actively facilitates cooperation by soliciting more information to understand particular attacks bet |
https://en.wikipedia.org/wiki/Underwood%20Dudley | Underwood Dudley (born January 6, 1937) is an American mathematician and writer. His popular works include several books describing crank mathematics by pseudomathematicians who incorrectly believe they have squared the circle or done other impossible things.
Career
Dudley was born in New York City. He received bachelor's and master's degrees from the Carnegie Institute of Technology and a PhD from the University of Michigan. His academic career consisted of two years at Ohio State University followed by 37 at DePauw University, from which he retired in 2004. He edited the College Mathematics Journal and the Pi Mu Epsilon Journal, and was a Pólya Lecturer for the Mathematical Association of America (MAA) for two years. He is the discoverer of the Dudley triangle.
Publications
Dudley's popular books include Mathematical Cranks (MAA 1992, ), The Trisectors (MAA 1996, ), and Numerology: Or, What Pythagoras Wrought (MAA 1997, ). Dudley won the Trevor Evans Award for expository writing from the MAA in 1996.
Dudley has also written and edited straightforward mathematical works such as Readings for Calculus (MAA 1993, ) and Elementary Number Theory (W.H. Freeman 1978, ). In 2009, he authored A Guide to Elementary Number Theory (MAA, 2009, ), published in the Mathematical Association of America's Dolciani Mathematical Expositions series.
Lawsuit
In 1995, Dudley was one of several people sued by William Dilworth for defamation because Mathematical Cranks included an analysis of Dilworth's "A correction in set theory", an attempted refutation of Cantor's diagonal method. The suit was dismissed in 1996 due to failure to state a claim.
The dismissal was upheld on appeal in a decision written by jurist Richard Posner. From the decision: "A crank is a person inexplicably obsessed by an obviously unsound idea—a person with a bee in his bonnet. To call a person a crank is to say that because of some quirk of temperament he is wasting his time pursuing a line of thought that is |
https://en.wikipedia.org/wiki/Riemann%20sum | In mathematics, a Riemann sum is a certain kind of approximation of an integral by a finite sum. It is named after nineteenth century German mathematician Bernhard Riemann. One very common application is in numerical integration, i.e., approximating the area of functions or lines on a graph, where it is also known as the rectangle rule. It can also be applied for approximating the length of curves and other approximations.
The sum is calculated by partitioning the region into shapes (rectangles, trapezoids, parabolas, or cubics) that together form a region that is similar to the region being measured, then calculating the area for each of these shapes, and finally adding all of these small areas together. This approach can be used to find a numerical approximation for a definite integral even if the fundamental theorem of calculus does not make it easy to find a closed-form solution.
Because the region filled by the small shapes is usually not exactly the same shape as the region being measured, the Riemann sum will differ from the area being measured. This error can be reduced by dividing up the region more finely, using smaller and smaller shapes. As the shapes get smaller and smaller, the sum approaches the Riemann integral.
Definition
Let $f:[a,b] \to \mathbb{R}$ be a function defined on a closed interval $[a,b]$ of the real numbers, and let $P = (x_0, x_1, \ldots, x_n)$ be a partition of $[a,b]$, that is
$$a = x_0 < x_1 < x_2 < \cdots < x_n = b.$$
A Riemann sum $S$ of $f$ over $[a,b]$ with partition $P$ is defined as
$$S = \sum_{i=1}^{n} f(x_i^*)\, \Delta x_i,$$
where $\Delta x_i = x_i - x_{i-1}$ and $x_i^* \in [x_{i-1}, x_i]$.
One might produce different Riemann sums depending on which $x_i^*$'s are chosen. In the end this will not matter, if the function is Riemann integrable, when the difference or width of the summands $\Delta x_i$ approaches zero.
Types of Riemann sums
Specific choices of $x_i^*$ give different types of Riemann sums:
If $x_i^* = x_{i-1}$ for all $i$, the method is the left rule and gives a left Riemann sum.
If $x_i^* = x_i$ for all $i$, the method is the right rule and gives a right Riemann sum.
If $x_i^* = (x_{i-1} + x_i)/2$ for all $i$, the method is the midpoint rule and gives a middle Riemann sum.
If $f(x_i^*) = \sup f([x_{i-1}, x_i])$ (that is, the supremum of $f$ over $[x_{i-1}, x_i]$), the me |
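A minimal numerical sketch of the left, right, and midpoint rules described above, applied to f(x) = x² on [0, 1] (whose exact integral is 1/3); the function, interval, and number of subintervals are chosen purely for illustration:

```python
# Left, right, and midpoint Riemann sums for f(x) = x^2 on [0, 1].
# The exact integral is 1/3; all three sums approach it as n grows.

def riemann_sum(f, a, b, n, rule="midpoint"):
    dx = (b - a) / n
    total = 0.0
    for i in range(n):
        x_left = a + i * dx
        if rule == "left":
            x_star = x_left            # left endpoint of the subinterval
        elif rule == "right":
            x_star = x_left + dx       # right endpoint
        else:
            x_star = x_left + dx / 2   # midpoint
        total += f(x_star) * dx
    return total

f = lambda x: x * x
for rule in ("left", "right", "midpoint"):
    print(rule, riemann_sum(f, 0.0, 1.0, 1000, rule))
# left ≈ 0.33283, right ≈ 0.33383, midpoint ≈ 0.333333 (exact value is 1/3)
```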
https://en.wikipedia.org/wiki/Hyperion%20%28moon%29 | Hyperion, also known as Saturn VII, is a moon of Saturn discovered by William Cranch Bond, his son George Phillips Bond and William Lassell in 1848. It is distinguished by its irregular shape, its chaotic rotation, and its unexplained sponge-like appearance. It was the first non-round moon to be discovered.
Name
The moon is named after Hyperion, the Titan god of watchfulness and observation – the elder brother of Cronus, the Greek equivalent of the Roman god Saturn. It is also designated Saturn VII. The adjectival form of the name is Hyperionian.
Hyperion's discovery came shortly after John Herschel had suggested names for the seven previously known satellites of Saturn in his 1847 publication Results of Astronomical Observations made at the Cape of Good Hope. William Lassell, who saw Hyperion two days after William Bond, had already endorsed Herschel's naming scheme and suggested the name Hyperion in accordance with it. He also beat Bond to publication.
Physical characteristics
Shape
Hyperion is one of the largest bodies known to be highly irregularly shaped (non-ellipsoidal, i.e. not in hydrostatic equilibrium) in the Solar System. The only larger moon known to be irregular in shape is Neptune's moon Proteus. Hyperion has about 15% of the mass of Mimas, the least massive known ellipsoidal body. The largest crater on Hyperion is approximately in diameter and deep. A possible explanation for the irregular shape is that Hyperion is a fragment of a larger body that was broken up by a large impact in the distant past. A proto-Hyperion could have been in diameter (which ranges from a little below the size of Mimas to a little below the size of Tethys). Over about 1,000 years, ejecta from a presumed Hyperion breakup would have impacted Titan at low speeds, building up volatiles in the atmosphere of Titan.
Composition
Like most of Saturn's moons, Hyperion's low density indicates that it is composed largely of water ice with only a small amount of rock. It i |
https://en.wikipedia.org/wiki/Audio%20feedback | Audio feedback (also known as acoustic feedback, simply as feedback) is a positive feedback situation that may occur when an acoustic path exists between an audio input (for example, a microphone or guitar pickup) and an audio output (for example, a loudspeaker). In this example, a signal received by the microphone is amplified and passed out of the loudspeaker. The sound from the loudspeaker can then be received by the microphone again, amplified further, and then passed out through the loudspeaker again. The frequency of the resulting howl is determined by resonance frequencies in the microphone, amplifier, and loudspeaker, the acoustics of the room, the directional pick-up and emission patterns of the microphone and loudspeaker, and the distance between them. The principles of audio feedback were first discovered by Danish scientist Søren Absalon Larsen, hence it is also known as the Larsen effect.
Feedback is almost always considered undesirable when it occurs with a singer's or public speaker's microphone at an event using a sound reinforcement system or PA system. Audio engineers typically use directional microphones with cardioid pickup patterns and various electronic devices, such as equalizers and, since the 1990s, automatic feedback suppressors, to prevent feedback, which detracts from the audience's enjoyment of the event and may damage equipment or hearing.
Since the 1960s, electric guitar players in rock music bands using loud guitar amplifiers, speaker cabinets and distortion effects have intentionally created guitar feedback to create different sounds including long sustained tones that cannot be produced using standard playing techniques. The sound of guitar feedback is considered to be a desirable musical effect in heavy metal music, hardcore punk and grunge. Jimi Hendrix was an innovator in the intentional use of guitar feedback in his guitar solos to create unique musical sounds.
History and theory
The conditions for feedback follow the Barkhau |
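The truncated sentence above refers to the Barkhausen criterion: feedback builds up when the signal returning to the microphone after one trip around the amplifier–loudspeaker–room loop is at least as strong as the original. The toy sketch below (not from the article; the numbers are illustrative) shows how the level evolves with loop gain:

```python
# Rough illustration of why loop gain matters for acoustic feedback.
# A signal re-entering the microphone after each pass through the
# amplifier/speaker/room path is multiplied by the loop gain g.
# |g| > 1 -> the howl grows; |g| < 1 -> it decays. (Illustrative only.)

def level_after_passes(initial_level, loop_gain, passes):
    return initial_level * loop_gain ** passes

for g in (0.8, 1.0, 1.2):
    print(f"gain {g}: level after 20 passes = {level_after_passes(1.0, g, 20):.3f}")
# gain 0.8 -> 0.012 (decays), gain 1.0 -> 1.000 (sustains), gain 1.2 -> 38.338 (grows)
```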
https://en.wikipedia.org/wiki/Cylindrical%20coordinate%20system | A cylindrical coordinate system is a three-dimensional coordinate system that specifies point positions by the distance from a chosen reference axis (axis L in the image opposite), the direction from the axis relative to a chosen reference direction (axis A), and the distance from a chosen reference plane perpendicular to the axis (plane containing the purple section). The latter distance is given as a positive or negative number depending on which side of the reference plane faces the point.
The origin of the system is the point where all three coordinates can be given as zero. This is the intersection between the reference plane and the axis.
The axis is variously called the cylindrical or longitudinal axis, to differentiate it from the polar axis, which is the ray that lies in the reference plane, starting at the origin and pointing in the reference direction.
Other directions perpendicular to the longitudinal axis are called radial lines.
The distance from the axis may be called the radial distance or radius, while the angular coordinate is sometimes referred to as the angular position or as the azimuth. The radius and the azimuth are together called the polar coordinates, as they correspond to a two-dimensional polar coordinate system in the plane through the point, parallel to the reference plane. The third coordinate may be called the height or altitude (if the reference plane is considered horizontal), longitudinal position, or axial position.
Cylindrical coordinates are useful in connection with objects and phenomena that have some rotational symmetry about the longitudinal axis, such as water flow in a straight pipe with round cross-section, heat distribution in a metal cylinder, electromagnetic fields produced by an electric current in a long, straight wire, accretion disks in astronomy, and so on.
They are sometimes called "cylindrical polar coordinates" and "polar cylindrical coordinates", and are sometimes used to specify the position of stars in a |
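For reference, the standard conversion between cylindrical coordinates (ρ, φ, z) and Cartesian coordinates is x = ρ cos φ, y = ρ sin φ, z = z; a minimal sketch:

```python
# Conversion between cylindrical (rho, phi, z) and Cartesian (x, y, z) coordinates.
import math

def cylindrical_to_cartesian(rho, phi, z):
    return (rho * math.cos(phi), rho * math.sin(phi), z)

def cartesian_to_cylindrical(x, y, z):
    return (math.hypot(x, y), math.atan2(y, x), z)

point = cylindrical_to_cartesian(2.0, math.pi / 4, 3.0)
print(point)                              # (≈1.414, ≈1.414, 3.0)
print(cartesian_to_cylindrical(*point))   # (≈2.0, ≈0.785, 3.0) — round trip
```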
https://en.wikipedia.org/wiki/Physical%20geodesy | Physical geodesy is the study of the physical properties of Earth's gravity and its potential field (the geopotential), with a view to their application in geodesy.
Measurement procedure
Traditional geodetic instruments such as theodolites rely on the gravity field for orienting their vertical axis along the local plumb line or local vertical direction with the aid of a spirit level. After that, vertical angles (zenith angles or, alternatively, elevation angles) are obtained with respect to this local vertical, and horizontal angles in the plane of the local horizon, perpendicular to the vertical.
Levelling instruments again are used to obtain geopotential differences between points on the Earth's surface. These can then be expressed as "height" differences by conversion to metric units.
Units
Gravity is commonly measured in units of m·s⁻² (metres per second squared). This can equivalently be expressed as newtons per kilogram of attracted mass, since 1 N·kg⁻¹ = 1 m·s⁻².
Potential is expressed as gravity times distance, in units of m²·s⁻². Travelling one metre in the direction of a gravity vector of strength 1 m·s⁻² will increase your potential by 1 m²·s⁻². This unit is equivalent to joules per kilogram of attracted mass.
A more convenient unit is the GPU, or geopotential unit: it equals 10 m²·s⁻². This means that travelling one metre in the vertical direction, i.e., the direction of the 9.8 m·s⁻² ambient gravity, will change your potential by approximately 1 GPU. This in turn means that the difference in geopotential, in GPU, between a point and sea level can be used as a rough measure of its height "above sea level" in metres.
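A small worked example of the unit relations above, assuming the nominal surface gravity of 9.8 m·s⁻² (the geopotential difference used is hypothetical):

```python
# Geopotential difference -> approximate height above sea level.
# 1 GPU (geopotential unit) = 10 m^2·s^-2, and near the surface g ≈ 9.8 m·s^-2,
# so a 1 GPU difference corresponds to roughly 1 m of height.

G_SURFACE = 9.8  # m·s^-2, nominal near-surface gravity

def height_from_geopotential(delta_w_gpu):
    """Approximate metric height from a geopotential difference given in GPU."""
    delta_w_si = delta_w_gpu * 10.0   # GPU -> m^2·s^-2
    return delta_w_si / G_SURFACE     # metres

print(height_from_geopotential(250.0))  # ≈ 255 m for a hypothetical 250 GPU difference
```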
Gravity
Potential fields
Geoid
Due to the irregularity of the Earth's true gravity field, the equilibrium figure of sea water, or the geoid, will also be of irregular form. In some places, like west of Ireland, the geoid—mathematical mean sea level—sticks out as much as 10 |
https://en.wikipedia.org/wiki/Cache%20coherence | In computer architecture, cache coherence is the uniformity of shared resource data that ends up stored in multiple local caches. When clients in a system maintain caches of a common memory resource, problems may arise with incoherent data, which is particularly the case with CPUs in a multiprocessing system.
In the illustration on the right, consider that both clients have a cached copy of a particular memory block from a previous read. If the client on the bottom updates or changes that memory block, the client on the top could be left with an invalid copy of the memory without any notification of the change. Cache coherence is intended to manage such conflicts by maintaining a coherent view of the data values in multiple caches.
Overview
In a shared memory multiprocessor system with a separate cache memory for each processor, it is possible to have many copies of shared data: one copy in the main memory and one in the local cache of each processor that requested it. When one of the copies of data is changed, the other copies must reflect that change. Cache coherence is the discipline which ensures that the changes in the values of shared operands (data) are propagated throughout the system in a timely fashion.
The following are the requirements for cache coherence:
Write Propagation: changes to the data in any cache must be propagated to other copies (of that cache line) in the peer caches.
Transaction Serialization: reads/writes to a single memory location must be seen by all processors in the same order.
Theoretically, coherence can be performed at the load/store granularity. However, in practice it is generally performed at the granularity of cache blocks.
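As a purely illustrative toy model (real coherence is implemented in hardware, typically with MSI/MESI-style protocols), the sketch below shows write propagation via invalidation at block granularity; the class and variable names are hypothetical:

```python
# Minimal sketch of write-invalidate propagation at cache-block granularity.
# Hypothetical Python classes for illustration only; not how hardware is built.

class Bus:
    def __init__(self):
        self.caches = []

    def invalidate_others(self, writer, addr):
        for c in self.caches:
            if c is not writer:
                c.blocks.pop(addr, None)   # drop stale copies (write propagation)

class Cache:
    def __init__(self, name, bus):
        self.name, self.bus, self.blocks = name, bus, {}
        bus.caches.append(self)

    def read(self, addr, memory):
        if addr not in self.blocks:        # miss: fetch from memory
            self.blocks[addr] = memory[addr]
        return self.blocks[addr]

    def write(self, addr, value, memory):
        self.bus.invalidate_others(self, addr)
        self.blocks[addr] = value
        memory[addr] = value               # write-through, for simplicity

memory = {0x10: 5}
bus = Bus()
c0, c1 = Cache("cpu0", bus), Cache("cpu1", bus)
c0.read(0x10, memory); c1.read(0x10, memory)   # both cache the block
c0.write(0x10, 42, memory)                     # invalidates cpu1's copy
print(c1.read(0x10, memory))                   # 42, re-fetched after invalidation
```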
Definition
Coherence defines the behavior of reads and writes to a single address location.
The consistent handling of the same data held simultaneously in different cache memories is called cache coherence or, in some systems, global memory.
In a multiprocessor system, consider that more than one processor has cached a copy o |
https://en.wikipedia.org/wiki/Computational%20geometry | Computational geometry is a branch of computer science devoted to the study of algorithms which can be stated in terms of geometry. Some purely geometrical problems arise out of the study of computational geometric algorithms, and such problems are also considered to be part of computational geometry. While modern computational geometry is a recent development, it is one of the oldest fields of computing with a history stretching back to antiquity.
Computational complexity is central to computational geometry, with great practical significance if algorithms are used on very large datasets containing tens or hundreds of millions of points. For such sets, the difference between O(n2) and O(n log n) may be the difference between days and seconds of computation.
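A back-of-the-envelope illustration of that gap, assuming (hypothetically) a machine that performs 10⁹ basic operations per second:

```python
# O(n^2) vs O(n log n) on a large input, with purely illustrative constants.
import math

n = 100_000_000          # one hundred million points
ops_per_second = 1e9     # hypothetical machine speed

quadratic_seconds = n**2 / ops_per_second
nlogn_seconds = n * math.log2(n) / ops_per_second

print(f"O(n^2):     ~{quadratic_seconds / 86400:.0f} days")   # ~116 days
print(f"O(n log n): ~{nlogn_seconds:.1f} seconds")            # ~2.7 seconds
```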
The main impetus for the development of computational geometry as a discipline was progress in computer graphics and computer-aided design and manufacturing (CAD/CAM), but many problems in computational geometry are classical in nature, and may come from mathematical visualization.
Other important applications of computational geometry include robotics (motion planning and visibility problems), geographic information systems (GIS) (geometrical location and search, route planning), integrated circuit design (IC geometry design and verification), computer-aided engineering (CAE) (mesh generation), and computer vision (3D reconstruction).
The main branches of computational geometry are:
Combinatorial computational geometry, also called algorithmic geometry, which deals with geometric objects as discrete entities. A foundational book in the subject by Preparata and Shamos dates the first use of the term "computational geometry" in this sense to 1975.
Numerical computational geometry, also called machine geometry, computer-aided geometric design (CAGD), or geometric modeling, which deals primarily with representing real-world objects in forms suitable for computer computations in CAD/CAM systems. This branch ma |
https://en.wikipedia.org/wiki/Internet%20Archive | The Internet Archive is an American digital library founded on May 10, 1996, and chaired by free information advocate Brewster Kahle. It provides free access to collections of digitized materials including websites, software applications, music, audiovisual and print materials. The Archive also advocates for a free and open Internet. The Internet Archive holds more than 38 million print materials, 11.6 million pieces of audiovisual content, 2.6 million software programs, 15 million audio files, 4.7 million images, 251,000 concerts, and over 832 billion web pages in its Wayback Machine. Its mission is to provide "universal access to all knowledge."
The Internet Archive allows the public to upload and download digital material to its data cluster, but the bulk of its data is collected automatically by its web crawlers, which work to preserve as much of the public web as possible. Its web archive, the Wayback Machine, contains hundreds of billions of web captures. The Archive also oversees numerous book digitization projects, collectively one of the world's largest book digitization efforts.
History
Brewster Kahle founded the Archive in May 1996 around the same time that he began the for-profit web crawling company Alexa Internet. In October of that year, the Internet Archive had begun to archive and preserve the World Wide Web in large amounts, though it saved the earliest known page on May 10, 1996, at 2:42 PM. The archived content first became available to the general public in 2001, when it developed the Wayback Machine.
In late 1999, the Archive expanded its collections beyond the web archive, beginning with the Prelinger Archives. Now, the Internet Archive includes texts, audio, moving images, and software. It hosts a number of other projects: the NASA Images Archive, the contract crawling service Archive-It, and the wiki-editable library catalog and book information site Open Library. Soon after that, the Archive began working to provide specialized serv |
https://en.wikipedia.org/wiki/Diode%20bridge | A diode bridge is a bridge rectifier circuit of four diodes that is used in the process of converting alternating current (AC) from the input terminals to direct current (DC, i.e. fixed polarity) on the output terminals. Its function is to convert the negative voltage portions of the AC waveform to positive voltage, after which a low-pass filter can be used to smooth the result into DC.
When used in its most common application, for conversion of an alternating-current (AC) input into a direct-current (DC) output, it is known as a bridge rectifier. A bridge rectifier provides full-wave rectification from a two-wire AC input, resulting in lower cost and weight as compared to a rectifier with a three-wire input from a transformer with a center-tapped secondary winding.
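Idealized numerically, the bridge output is simply the absolute value of the AC input (diode forward drops are ignored); a minimal sketch, with illustrative values, showing that the average of the full-wave rectified waveform approaches 2A/π for a sine input of amplitude A:

```python
# Idealized full-wave rectification: the bridge output is v_out = |v_in|,
# ignoring diode forward drops. Amplitude, frequency, and sample rate are
# illustrative only.
import math

amplitude = 10.0        # volts, peak of the AC input
freq_hz = 50.0
sample_rate = 10_000
samples = [amplitude * math.sin(2 * math.pi * freq_hz * i / sample_rate)
           for i in range(sample_rate)]     # one second of AC input

rectified = [abs(v) for v in samples]       # what the diode bridge does
mean_dc = sum(rectified) / len(rectified)   # value a low-pass filter would settle toward

print(f"mean of rectified output ≈ {mean_dc:.2f} V "
      f"(theory: 2A/pi ≈ {2 * amplitude / math.pi:.2f} V)")
```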
Prior to the availability of integrated circuits, a bridge rectifier was constructed from separate diodes. Since about 1950, a single four-terminal component containing the four diodes connected in a bridge configuration has been available and is now available with various voltage and current ratings.
Diodes are also used in bridge topologies along with capacitors as voltage multipliers.
History
The diode bridge circuit was invented by Karol Pollak and patented in December 1895 in Great Britain and in January 1896 in Germany. In 1897, Leo Graetz independently invented and published a similar circuit. Today the circuit is sometimes referred to as a "Graetz circuit" or "Graetz bridge".
Current flow
According to the conventional model of current flow (originally established by Benjamin Franklin and still followed by most engineers today), current flows through electrical conductors from the positive to the negative pole (defined as positive flow). In actuality, free electrons in a conductor nearly always flow from the negative to the positive pole. In the vast majority of applications, however, the actual direction of current flow is irrelevant. Therefore, in the discussion below the conventional model |
https://en.wikipedia.org/wiki/Arkanoid | is a 1986 block breaker arcade game developed and published by Taito. In North America, it was published by Romstar. Controlling a paddle-like craft known as the Vaus, the player is tasked with clearing a formation of colorful blocks by deflecting a ball towards it without letting the ball leave the bottom edge of the playfield. Some blocks contain power-ups that have various effects, such as increasing the length of the Vaus, creating several additional balls, or equipping the Vaus with cannons. Other blocks may be indestructible or require multiple hits to break.
Created by Taito designers Akira Fujita and Hiroshi Tsujino, Arkanoid expanded on the concept established in Atari's Breakout, a successful game in its own right that was met with a large wave of similar clone games from other manufacturers. It was part of a contest within Taito, where two teams of designers had to complete a block breaker game and determine which one was superior to the other. The film Tron served as inspiration for the game's futuristic, neon aesthetic. Level designs were sketched on paper before being programmed and tested to make sure they were fun to play. The enemy and power-up designs were 3D models converted into sprite art.
Early location tests for Arkanoid surpassed Taito's initial expectations. It became a major commercial success in arcades, becoming the highest-grossing table arcade cabinet of 1987 in Japan and the year's highest-grossing conversion kit in the United States. The game was commended by critics for its gameplay, simplicity, addictive nature, and improvements over the original Breakout concept. The game revitalized the genre and set the groundwork for many games to follow. Arkanoid was ported to many home video game platforms, including the Commodore 64, Nintendo Entertainment System, ZX Spectrum, and (years later) mobile phones, and it spawned a long series of sequels and updates over the course of two decades.
Gameplay
Arkanoid is a block breaker video game |