https://en.wikipedia.org/wiki/LongRun
|
LongRun and LongRun2 are power management technologies introduced by Transmeta. LongRun was introduced with the Crusoe processor, while LongRun2 was introduced with the Efficeon processor. LongRun2 has since been licensed to Fujitsu, NEC, Sony, Toshiba, and NVIDIA.
LongRun automatically adjusted the processor's operating point, trading higher performance at higher power against lower power at lower performance. The goals of the automation could be adjusted. One control offered processor frequency levels and the ability to set a minimum and maximum "window", outside of which the automatic controls would not adjust the speed. A second control offered a target of either "economy" or "performance". Some versions offered a third control that adjusted the processor based on power rather than speed.
LongRun was based primarily on reducing the clock frequency and voltage supplied to the processor, a technique now commonly called dynamic voltage and frequency scaling (DVFS). A lower frequency reduces performance but also allows the voltage to be reduced, which can yield both power savings and improved efficiency. LongRun2 built further on this by incorporating process technology aimed at reducing variations in the manufacturing process and thereby improving yields.
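The window control described above can be sketched as a toy governor. The frequency levels, thresholds, and policy below are invented for illustration; Transmeta's actual algorithm was proprietary:

```python
# Toy model of a window-constrained DVFS governor, loosely inspired by
# LongRun's "window" control. Frequency levels and the selection policy
# are illustrative assumptions, not Transmeta's actual values.

LEVELS_MHZ = [300, 400, 533, 667, 800]  # hypothetical frequency steps

def pick_level(utilization, min_mhz, max_mhz):
    """Choose the lowest frequency level inside [min_mhz, max_mhz]
    whose capacity covers the observed utilization (0.0-1.0 of max)."""
    window = [f for f in LEVELS_MHZ if min_mhz <= f <= max_mhz]
    needed = utilization * max(LEVELS_MHZ)
    for f in window:          # window is sorted ascending
        if f >= needed:
            return f
    return window[-1]         # saturate at the top of the window

print(pick_level(0.30, 300, 800))  # light load -> lowest level, 300
print(pick_level(0.95, 300, 800))  # heavy load -> top of window, 800
print(pick_level(0.95, 300, 533))  # window cap limits the choice, 533
```

The point of the window is visible in the last call: even under heavy load, the governor never leaves the user-configured range.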
References
External links
Official description of LongRun2, recovered from the Internet Archive as of 2009.
Embedded microprocessors
Computer hardware tuning
Clock signal
|
https://en.wikipedia.org/wiki/Royal%20Institution%20of%20Naval%20Architects
|
The Royal Institution of Naval Architects (also known as RINA) is an international organisation representing naval architects. It is an elite international professional institution based in London. Its members are involved worldwide at all levels in the design, construction, repair and operation of ships, boats and marine structures. Members are elected by the council and are presented with the following titles (denoting the membership type and shown with post-nominal letters): Associate (AssocRINA); Associate Member (AMRINA); Member (MRINA); and Fellow (FRINA).
The Patron of the Institution was Queen Elizabeth II.
History
The Royal Institution of Naval Architects was founded in Britain in 1860 as The Institution of Naval Architects and incorporated by Royal Charter in 1910 and 1960 to "advance the art and science of ship design".
Founding members included John Scott Russell, Edward Reed, Rev Joseph Woolley, Nathaniel Barnaby, Frederick Kynaston Barnes and John Penn.
On 9 April 1919 Blanche Thornycroft, Rachel Mary Parsons, and Eily Keary became the first women admitted into the institution.
Present role
The Royal Institution of Naval Architects is an international organisation with its headquarters in the UK, representing naval architects in all the maritime nations of the world. It is a professional institution and learned society of international standing, whose members are involved worldwide at all levels in the design, construction, repair and maintenance of ships, boats and maritime structures. Through its international membership, publications and conferences, the Royal Institution of Naval Architects provides a link between industry, universities and maritime organisations worldwide.
Professional institution
As a professional institution, its aim is to set standards of professional competence and conduct, and assist its members to both achieve and maintain those standards. Membership of the Royal Institution of Naval Architects provides a professio
|
https://en.wikipedia.org/wiki/Cryogenic%20particle%20detector
|
Cryogenic particle detectors operate at very low temperature, typically only a few degrees above absolute zero. These sensors interact with an energetic elementary particle (such as a photon) and deliver a signal that can be related to the type of particle and the nature of the interaction. While many types of particle detectors might be operated with improved performance at cryogenic temperatures, this term generally refers to types that take advantage of special effects or properties occurring only at low temperature.
Introduction
The most commonly cited reason for operating any sensor at low temperature is the reduction in thermal noise, which is proportional to the square root of the absolute temperature. However, at very low temperature, certain material properties become very sensitive to energy deposited by particles in their passage through the sensor, and the gain from these changes may be even more than that from reduction in thermal noise. Two such commonly used properties are heat capacity and electrical resistivity, particularly superconductivity; other designs are based on superconducting tunnel junctions, quasiparticle trapping, rotons in superfluids, magnetic bolometers, and other principles.
Originally, astronomy pushed the development of cryogenic detectors for optical and infrared radiation. Later, particle physics and cosmology motivated cryogenic detector development for sensing known and predicted particles such as neutrinos, axions, and weakly interacting massive particles (WIMPs).
Types of cryogenic particle detectors
Calorimetric particle detection
A calorimeter is a device that measures the amount of heat deposited in a sample of material. A calorimeter differs from a bolometer in that a calorimeter measures energy, while a bolometer measures power.
Below the Debye temperature of a crystalline dielectric material (such as silicon), the heat capacity decreases as the cube of the absolute temperature. It becomes very small, so
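A back-of-envelope sketch of why this matters, using the Debye T³ law. The material numbers are illustrative (real detector absorbers are far smaller than 1 g, which makes the effect even larger):

```python
import math

# Debye-model molar heat capacity well below the Debye temperature:
#   C ≈ (12 π^4 / 5) R (T / T_D)^3
# Silicon's Debye temperature and a 1 g absorber are used as examples.

R = 8.314          # gas constant, J/(mol*K)
T_D = 645.0        # Debye temperature of silicon, K
MOLAR_MASS = 28.1  # g/mol for silicon

def molar_heat_capacity(T):
    return (12 * math.pi**4 / 5) * R * (T / T_D)**3  # J/(mol*K)

def temperature_rise(E_joule, mass_g, T):
    C = molar_heat_capacity(T) * mass_g / MOLAR_MASS  # J/K for the sample
    return E_joule / C

# Temperature rise from a 6 keV X-ray absorbed in 1 g of silicon:
E = 6e3 * 1.602e-19
for T in (4.0, 0.1):
    print(f"T = {T} K: dT = {temperature_rise(E, 1.0, T):.3e} K")
```

Because the heat capacity scales as T³, cooling from 4 K to 0.1 K increases the temperature signal from the same energy deposit by a factor of 40³ = 64,000.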
|
https://en.wikipedia.org/wiki/Waveguide%20%28radio%20frequency%29
|
In radio-frequency engineering and communications engineering, a waveguide is a hollow metal pipe used to carry radio waves. This type of waveguide is used as a transmission line mostly at microwave frequencies, for such purposes as connecting microwave transmitters and receivers to their antennas, in equipment such as microwave ovens, radar sets, satellite communication systems, and microwave radio links.
The electromagnetic waves in a (metal-pipe) waveguide may be imagined as travelling down the guide in a zig-zag path, being repeatedly reflected between opposite walls of the guide. For the particular case of rectangular waveguide, it is possible to base an exact analysis on this view. Propagation in a dielectric waveguide may be viewed in the same way, with the waves confined to the dielectric by total internal reflection at its surface. Some structures, such as non-radiative dielectric waveguides and the Goubau line, use both metal walls and dielectric surfaces to confine the wave.
Principle
Depending on the frequency, waveguides can be constructed from either conductive or dielectric materials. Generally, the lower the frequency to be passed, the larger the waveguide. For example, the natural waveguide formed by the Earth, whose dimensions are set by the conducting ionosphere and the ground together with the circumference at the median altitude between them, is resonant at 7.83 Hz; this is known as the Schumann resonance. On the other hand, waveguides used in extremely high frequency (EHF) communications can be less than a millimeter in width.
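The size–frequency relationship can be made quantitative with the cutoff frequency of the dominant mode: for a rectangular guide of broad-wall width a, the TE10 cutoff is f_c = c/(2a). A sketch (WR-90 is the standard X-band size; applying the same formula at 7.83 Hz is only a loose analogy for the Earth-scale cavity):

```python
# Cutoff frequency of the dominant TE10 mode in a rectangular waveguide:
#   f_c = c / (2a), where a is the broad-wall width.

c = 299_792_458.0  # speed of light, m/s

def te10_cutoff_hz(a_metres):
    return c / (2 * a_metres)

a_wr90 = 0.02286                      # WR-90 broad wall: 22.86 mm
print(te10_cutoff_hz(a_wr90) / 1e9)   # ~6.557 GHz, below the 8-12 GHz X band

# Inverting: the width needed for a given cutoff frequency.
def width_for_cutoff(f_c_hz):
    return c / (2 * f_c_hz)

print(width_for_cutoff(7.83) / 1e3)   # thousands of km: Earth-cavity scale
```

The second print makes the article's point concrete: a 7.83 Hz "guide" must have planetary dimensions, while millimetre-wide guides serve EHF.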
History
During the 1890s theorists did the first analyses of electromagnetic waves in ducts. Around 1893 J. J. Thomson derived the electromagnetic modes inside a cylindrical metal cavity. In 1897 Lord Rayleigh did a definitive analysis of waveguides; he solved the boundary value problem of electromagnetic waves propagating through both conducting tubes and dielectric rods of arbitrary shape. He showed that the waves could tr
|
https://en.wikipedia.org/wiki/Self-service
|
Self-service is the practice of serving oneself, usually when making purchases. Beyond automated teller machines (which are not limited to banks) and customer-operated supermarket checkout, this kind of labor-saving has been described as self-sourcing; related concepts include its subset, selfsourcing, and the related pair of end-user development and end-user computing.
It has been noted how paid labor has been replaced with unpaid labor, and how reduced professionalism and distraction from primary duties have reduced the value obtained from employees' time.
For decades, laws have been passed both facilitating and preventing self-pumping of gas and other self-service.
Overview
Self-service is the practice of serving oneself, usually when purchasing items. Common examples include many gas stations, where the customer pumps their own gas rather than having an attendant do it (full service is required by law in New Jersey, urban parts of Oregon, most of Mexico, and Richmond, British Columbia, but is the exception rather than the rule elsewhere). Automated teller machines (ATMs) in the banking world have also revolutionized how people withdraw and deposit funds. Other examples include most stores in the Western world, where the customer uses a shopping cart, placing the items they want to buy into the cart and then proceeding to the checkout counters or aisles, and buffet-style restaurants, where the customer serves their own plate of food from a large, central selection.
Patentable business method
In 1917, the US Patent Office awarded Clarence Saunders a patent for a "self-serving store." Saunders invited his customers to collect the goods they wanted to buy from the store and present them to a cashier, rather than having the store employee consult a list presented by the customer, and collect the goods. Saunders licensed the business method to independent grocery stores; these operated under the name "Piggly Wiggly."
Electronic commerce
Self-service is over the phone, web, and email to faci
|
https://en.wikipedia.org/wiki/REPROM
|
Reprogrammable memory (abbreviated as REPROM or RePROM) is a type of ROM; more precisely, a type of PROM electronic memory. The prefix re- refers to the ROM being reprogrammable.
There are two types of RePROM electronic memories:
EPROM
E²PROM or EEPROM
See also
Read-mostly memory (RMM)
Non-volatile memory
Computer memory
|
https://en.wikipedia.org/wiki/Giulio%20Ascoli
|
Giulio Ascoli (20 January 1843, Trieste – 12 July 1896, Milan) was a Jewish-Italian mathematician. He was a student of the Scuola Normale di Pisa, where he graduated in 1868.
In 1872 he became Professor of Algebra and Calculus at the Politecnico di Milano. From 1879 he was professor of mathematics at the Reale Istituto Tecnico Superiore, where a plaque commemorating him was affixed in 1901.
He was also a corresponding member of the Istituto Lombardo.
He made contributions to the theory of functions of a real variable and to Fourier series. For example, Ascoli introduced equicontinuity in 1884, a topic regarded as one of the fundamental concepts in the theory of real functions. In 1889, Italian mathematician Cesare Arzelà generalized Ascoli's Theorem into the Arzelà–Ascoli theorem, a practical sequential compactness criterion of functions.
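For reference, a standard modern statement of the result (as usually given in real-analysis texts, not Ascoli's original formulation):

```latex
\textbf{Theorem (Arzel\`a--Ascoli).}
Let $(f_n)$ be a sequence of real-valued continuous functions on $[a,b]$.
If the sequence is uniformly bounded, i.e.\
$\sup_n \sup_{x \in [a,b]} |f_n(x)| < \infty$,
and equicontinuous, i.e.\ for every $\varepsilon > 0$ there exists
$\delta > 0$ such that $|x - y| < \delta$ implies
$|f_n(x) - f_n(y)| < \varepsilon$ for all $n$,
then $(f_n)$ has a uniformly convergent subsequence.
```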
See also
Measure (mathematics)
Oscillation (mathematics)
Riemann Integral
Notes
Biographical references
References
. "Riemann's conditions for integrability and their influence on the birth of the concept of measure" (English translation of title) is an article on the history of measure theory, analyzing deeply and comprehensively every early contribution to the field, starting from Riemann's work and going to the works of Hermann Hankel, Gaston Darboux, Giulio Ascoli, Henry John Stephen Smith, Ulisse Dini, Vito Volterra, Paul David Gustav du Bois-Reymond and Carl Gustav Axel Harnack.
External links
Biography in Italian.
Ascoli, Julio in the Jewish Encyclopedia.
By Their Fruits Ye Shall Know Them: Some Remarks on the Interaction of General Topology with Other Areas of Mathematics by T. Koetsier, J. Van Mill, an article containing a history of Ascoli's work on the Arzelà-Ascoli theorem.
1843 births
1896 deaths
19th-century Italian mathematicians
Mathematical analysts
Academic staff of the Polytechnic University of Milan
19th-century Italian Jews
|
https://en.wikipedia.org/wiki/N%C3%B8rlund%E2%80%93Rice%20integral
|
In mathematics, the Nørlund–Rice integral, sometimes called Rice's method, relates the nth forward difference of a function to a line integral on the complex plane. It commonly appears in the theory of finite differences and has also been applied in computer science and graph theory to estimate binary tree lengths. It is named in honour of Niels Erik Nørlund and Stephen O. Rice. Nørlund's contribution was to define the integral; Rice's contribution was to demonstrate its utility by applying saddle-point techniques to its evaluation.
Definition
The nth forward difference of a function f(x) is given by

\Delta^n [f](x) = \sum_{k=0}^{n} \binom{n}{k} (-1)^{n-k} f(x+k)

where \binom{n}{k} is the binomial coefficient.
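The forward-difference sum can be checked numerically against n repeated first differences:

```python
from math import comb

def delta_n(f, x, n):
    """nth forward difference via the binomial sum:
       sum_k C(n,k) * (-1)^(n-k) * f(x+k)."""
    return sum(comb(n, k) * (-1) ** (n - k) * f(x + k) for k in range(n + 1))

def delta_n_iterated(f, x, n):
    """Same quantity, by applying the first difference n times."""
    vals = [f(x + k) for k in range(n + 1)]
    for _ in range(n):
        vals = [b - a for a, b in zip(vals, vals[1:])]
    return vals[0]

f = lambda t: t ** 3 - 2 * t + 1
for n in range(5):
    assert delta_n(f, 2, n) == delta_n_iterated(f, 2, n)
print(delta_n(f, 0, 3))  # third difference of a cubic is constant: 3! = 6
```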
The Nörlund–Rice integral is given by

\sum_{k=\alpha}^{n} \binom{n}{k} (-1)^k f(k) = \frac{(-1)^n\, n!}{2\pi i} \oint_{\gamma} \frac{f(z)}{z(z-1)(z-2)\cdots(z-n)}\, dz

where f is understood to be meromorphic, α is an integer with 0 ≤ α ≤ n, and the contour of integration is understood to circle the poles located at the integers α, ..., n, but encircles neither the integers 0, ..., α − 1 nor any of the poles of f. The integral may also be written as

\sum_{k=\alpha}^{n} \binom{n}{k} (-1)^k f(k) = -\frac{1}{2\pi i} \oint_{\gamma} B(n+1, -z)\, f(z)\, dz

where B(a,b) is the Euler beta function. If the function f(z) is polynomially bounded on the right hand side of the complex plane, then the contour may be extended to infinity on the right hand side, allowing the transform to be written as

\sum_{k=\alpha}^{n} \binom{n}{k} (-1)^k f(k) = -\frac{1}{2\pi i} \int_{c-i\infty}^{c+i\infty} B(n+1, -z)\, f(z)\, dz

where the constant c is to the left of α.
Poisson–Mellin–Newton cycle
The Poisson–Mellin–Newton cycle, noted by Flajolet et al. in 1985, is the observation that the resemblance of the Nørlund–Rice integral to the Mellin transform is not accidental, but is related by means of the binomial transform and the Newton series. In this cycle, let \{a_n\} be a sequence, and let g(t) be the corresponding Poisson generating function, that is, let

g(t) = e^{-t} \sum_{n=0}^{\infty} \frac{a_n}{n!}\, t^n

Taking its Mellin transform

f(s) = \int_0^{\infty} g(t)\, t^{s-1}\, dt
one can then regain the original sequence by means of the Nörlund–Rice integral:
where Γ is the gamma function.
Riesz mean
A closely related integral frequently occurs in the discussion of Riesz means. Very roughly, it can be said to be related to the Nörlund–Rice integral in the same way that Perron's formula is related to the Mellin transfo
|
https://en.wikipedia.org/wiki/Pursuit%E2%80%93evasion
|
Pursuit–evasion (variants of which are referred to as cops and robbers and graph searching) is a family of problems in mathematics and computer science in which one group attempts to track down members of another group in an environment. Early work on problems of this type modeled the environment geometrically. In 1976, Torrence Parsons introduced a formulation whereby movement is constrained by a graph. The geometric formulation is sometimes called continuous pursuit–evasion, and the graph formulation discrete pursuit–evasion (also called graph searching). Current research is typically limited to one of these two formulations.
Discrete formulation
In the discrete formulation of the pursuit–evasion problem, the environment is modeled as a graph.
Problem definition
There are innumerable possible variants of pursuit–evasion, though they tend to share many elements. A typical, basic example is as follows (cops and robber games): Pursuers and evaders occupy nodes of a graph. The two sides take alternate turns, which consist of each member either staying put or moving along an edge to an adjacent node. If a pursuer occupies the same node as an evader the evader is captured and removed from the graph. The question usually posed is how many pursuers are necessary to ensure the eventual capture of all the evaders. If one pursuer suffices, the graph is called a cop-win graph. In this case, a single evader can always be captured in time linear in the number n of nodes of the graph. Capturing r evaders with k pursuers can take on the order of r·n time as well, but the exact bounds for more than one pursuer are still unknown.
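The cop-win case has a clean structural test: by a result of Nowakowski and Winkler, a finite graph is cop-win exactly when it is "dismantlable", i.e. it can be reduced to a single vertex by repeatedly deleting a vertex whose closed neighbourhood is contained in that of a neighbour. A small sketch:

```python
def is_cop_win(adj):
    """adj: dict mapping each vertex to the set of its neighbours.
    Repeatedly remove a 'dominated' vertex v (one with N[v] contained in
    N[u] for some neighbour u); the graph is cop-win iff this process
    reduces it to a single vertex (dismantlability)."""
    adj = {v: set(ns) for v, ns in adj.items()}   # work on a copy
    while len(adj) > 1:
        for v in adj:
            Nv = adj[v] | {v}
            if any(Nv <= (adj[u] | {u}) for u in adj[v]):
                for u in adj[v]:
                    adj[u].discard(v)             # delete the corner v
                del adj[v]
                break
        else:
            return False                          # no dominated vertex left
    return True

path = {0: {1}, 1: {0, 2}, 2: {1}}                     # paths are cop-win
cycle4 = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}  # the 4-cycle is not
print(is_cop_win(path), is_cop_win(cycle4))            # True False
```

Greedy removal suffices here because deleting a dominated vertex of a dismantlable graph leaves a dismantlable graph.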
Often the movement rules are altered by changing the velocity of the evaders. This velocity is the maximum number of edges that an evader can move along in a single turn. In the example above, the evaders have a velocity of one. At the other extreme is the concept of infinite velocity, which allows an evader to move to any node in the graph so l
|
https://en.wikipedia.org/wiki/Vitosha%20Mountain%20TV%20Tower
|
Vitosha Mountain TV Tower, better known as Kopitoto (, "The Hoof") after the rock outcrop () it stands on, is a tall TV tower built of reinforced concrete on Vitosha Mountain near Sofia, Bulgaria. The footprint of the tower has the shape of a hexagon with three of the sides extended (i.e. almost triangular). From the tower there is a commanding view of Sofia, and the tower itself can be seen from everywhere in the city, making it a landmark of the skyline. It is the second-tallest television tower in Bulgaria.
See also
List of towers
List of tallest structures in Bulgaria
External links
Pictures and description in Bulgarian
Towers in Bulgaria
Buildings and structures in Sofia
|
https://en.wikipedia.org/wiki/Flooding%20%28computer%20networking%29
|
Flooding is a technique used in computer network routing algorithms in which every incoming packet is sent through every outgoing link except the one it arrived on.
Flooding is used in bridging and in systems such as Usenet and peer-to-peer file sharing and as part of some routing protocols, including OSPF, DVMRP, and those used in ad-hoc wireless networks (WANETs).
Types
There are generally two types of flooding available, uncontrolled flooding and controlled flooding.
In uncontrolled flooding each node unconditionally distributes packets to each of its neighbors. Without conditional logic to prevent indefinite recirculation of the same packet, broadcast storms are a hazard.
Controlled flooding has two algorithms to make it reliable: SNCF (Sequence Number Controlled Flooding) and RPF (reverse-path forwarding). In SNCF, each node attaches its own address and a sequence number to the packet, and every node keeps a memory of the address/sequence-number pairs it has already seen; if a node receives a packet that is already in its memory, it drops it immediately. In RPF, a node forwards a packet only if it arrived on the link that lies on the node's own path back to the sender; packets arriving on other links are discarded.
Algorithms
There are several variants of flooding algorithms. Most work roughly as follows:
Each node acts as both a transmitter and a receiver.
Each node tries to forward every message to every one of its neighbors except the source node.
This results in every message eventually being delivered to all reachable parts of the network.
Algorithms may need to be more complex than this, since, in some case, precautions have to be taken to avoid wasted duplicate deliveries and infinite loops, and to allow messages to eventually expire from the system.
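The steps above can be sketched in a short simulation that combines flooding with SNCF-style duplicate suppression (the topology and message format are invented for the example):

```python
from collections import deque

def flood(adj, source, payload):
    """Flood `payload` from `source` over an undirected graph given as an
    adjacency dict. Each node remembers message ids it has already seen
    (SNCF-style) and drops duplicates, so the flood terminates even
    though the graph contains cycles."""
    seen = {node: set() for node in adj}
    delivered = []                       # order in which nodes first receive
    queue = deque([(source, None, ("msg-0", payload))])  # (node, from, pkt)
    while queue:
        node, came_from, packet = queue.popleft()
        msg_id, _ = packet
        if msg_id in seen[node]:
            continue                     # duplicate: drop immediately
        seen[node].add(msg_id)
        delivered.append(node)
        for neigh in adj[node]:
            if neigh != came_from:       # never send back out the incoming link
                queue.append((neigh, node, packet))
    return delivered

ring_with_chord = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
print(sorted(flood(ring_with_chord, 0, "hello")))  # every node reached once
```

Without the `seen` check, the cycle in this topology would recirculate the packet forever, which is exactly the broadcast-storm hazard of uncontrolled flooding.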
Selective flooding
A variant of flooding called selective flooding partially addresses these issues by only sending packets to routers in the same direction. In selective flooding, the routers don't send every incoming packet on every line but only on those lines which are going approxim
|
https://en.wikipedia.org/wiki/Ars%20Combinatoria%20%28journal%29
|
Ars Combinatoria, a Canadian Journal of Combinatorics is an English language research journal in combinatorics, published by the Charles Babbage Research Centre, Winnipeg, Manitoba, Canada. From 1976 to 1988 it published two volumes per year, and subsequently it published as many as six volumes per year.
The journal is indexed in MathSciNet and Zentralblatt. As of 2019, SCImago Journal Rank listed it in the bottom quartile of miscellaneous mathematics journals.
On December 15, 2021, the editorial board of the journal resigned, asking that inquiries be directed to the publisher.
References
1976 establishments in Canada
Academic journals established in 1976
Academic journals published in Canada
English-language journals
Quarterly journals
Combinatorics journals
Mass media in Winnipeg
|
https://en.wikipedia.org/wiki/Asperity%20%28materials%20science%29
|
In materials science, asperity, defined as "unevenness of surface, roughness, ruggedness" (from the Latin asper, "rough"), has implications in, for example, physics and seismology. Smooth surfaces, even those polished to a mirror finish, are not truly smooth on a microscopic scale. They are rough, with sharp, rough or rugged projections, termed "asperities". Surface asperities exist across multiple scales, often in a self-affine or fractal geometry. The fractal dimension of these structures has been correlated with the contact mechanics exhibited at an interface in terms of friction and contact stiffness.
When two macroscopically smooth surfaces come into contact, initially they only touch at a few of these asperity points. These cover only a very small portion of the surface area. Friction and wear originate at these points, and thus understanding their behavior becomes important when studying materials in contact. When the surfaces are subjected to a compressive load, the asperities deform through elastic and plastic modes, increasing the contact area between the two surfaces until the contact area is sufficient to support the load.
The relationship between frictional interactions and asperity geometry is complex and poorly understood. It has been reported that an increased roughness may under certain circumstances result in weaker frictional interactions while smoother surfaces may in fact exhibit high levels of friction owing to high levels of true contact.
The Archard equation provides a simplified model of asperity deformation when materials in contact are subjected to a force. Due to the ubiquitous presence of deformable asperities in self-affine hierarchical structures, the true contact area at an interface exhibits a linear relationship with the applied normal load.
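The Archard model can be stated directly; a minimal sketch with illustrative numbers (not measured data):

```python
# Archard wear equation: worn volume V = K * F * s / H, where
#   K is a dimensionless wear coefficient,
#   F is the normal load (N),
#   s is the sliding distance (m),
#   H is the hardness of the softer surface (Pa).
# All values below are illustrative, not measurements.

def archard_wear_volume(K, load_N, sliding_m, hardness_Pa):
    return K * load_N * sliding_m / hardness_Pa  # cubic metres

V = archard_wear_volume(K=1e-4, load_N=50.0, sliding_m=1000.0,
                        hardness_Pa=1e9)
print(f"worn volume: {V * 1e9:.3f} mm^3")  # 5.000 mm^3
```

Note the linear dependence on load: doubling F doubles V, mirroring the linear load/true-contact-area relationship described above.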
See also
Surface roughness
Burnishing (metal)
References
External links
Materials science
Surfaces
Tribology
|
https://en.wikipedia.org/wiki/French%20mathematical%20seminars
|
French mathematical seminars have been an important type of institution combining research and exposition, active since the beginning of the twentieth century.
From 1909 to 1937, the Séminaire Hadamard gathered many participants (e.g. André Weil) around the presentation of international research papers and work in progress. The Séminaire Julia focussed on yearly themes and gave impetus to the Bourbaki movement. The Séminaire Nicolas Bourbaki is the most famous, but is atypical in a number of ways: it attempts to cover, if selectively, the whole of pure mathematics, and its talks are now, by convention, reports and surveys on research by someone not directly involved. More standard is a working group organised around a specialist area, with research talks given and written up "from the horse's mouth".
Historically speaking, the Séminaire Cartan of the late 1940s and early 1950s, around Henri Cartan, was one of the most influential. Publication in those days was by means of the duplicated exemplaire (limited distribution and not peer-reviewed). The seminar model was tested, almost to destruction, by the SGA series of Alexander Grothendieck.
Notable seminars
Séminaire Bourbaki, still current, general; Nicolas Bourbaki
Séminaire Brelot-Choquet-Deny (from 1957), potential theory; Marcel Brelot, Gustave Choquet, Jacques Deny
Séminaire Cartan, homological algebra, sheaf theory, several complex variables; Henri Cartan and his students
Séminaire Châtelet-Dubreil, Dubreil, Dubreil-Pisot, from 1951, abstract algebra
Séminaire Chevalley, algebraic geometry, late 1950s
Séminaire Delange-Pisot, then Delange-Pisot-Poitou, from 1959, number theory
Séminaire Ehresmann, differential geometry and category theory; Charles Ehresmann
Séminaire Grothendieck, from 1957, became Grothendieck's Séminaire de Géométrie Algébrique
Séminaire Janet, differential equations
Séminaire Kahane
Séminaire Lelong, several complex variables
Séminaire Schwartz, functional analysis; Laurent Schwar
|
https://en.wikipedia.org/wiki/Virt-manager
|
virt-manager is a desktop virtual machine monitor primarily developed by Red Hat.
Features
Virtual Machine Manager allows users to:
create, edit, start and stop VMs
view and control each VM's console
see performance and utilization statistics for each VM
view all running VMs and hosts, and their live performance or resource utilization statistics.
use KVM, Xen or QEMU virtual machines, running either locally or remotely.
use LXC containers
Support for FreeBSD's bhyve hypervisor has been included since 2014, though it remains disabled by default.
Distributions including Virtual Machine Manager
Virtual Machine Manager comes as the virt-manager package in:
Arch Linux
CentOS
Debian (since lenny)
Fedora (since version 6)
FreeBSD (via Ports collection)
Frugalware
Gentoo
Mandriva Linux (since release 2007.1)
MXLinux
NetBSD (via pkgsrc)
NixOS
OpenBSD (via Ports collection)
openSUSE (since release 10.3)
Red Hat Enterprise Linux (versions 5 through 7 only)
Scientific Linux
Trisquel
TrueOS
Ubuntu (version 8.04 and above)
Void Linux
See also
libvirt, the API used by Virtual Machine Manager to create and manage virtual machines
References
External links
Documentation
While the Virtual Machine Manager project itself lacks documentation, there are third parties providing relevant information, e.g.:
Red Hat Enterprise Linux virtualization 7 documentation (VMM is not used in RHEL 8 and later):
Getting Started with Virtual Machine Manager
Fedora documentation:
Getting started with virtualization
Ubuntu official documentation:
KVM/VirtManager
Libvirt documentation:
Documentation: index
Documentation: Storage pools
Documentation: Network management architecture
Wiki: Virtual networking
Free virtualization software
Red Hat software
Remote administration software
Software that uses PyGObject
Virtualization
Virtualization software for Linux
Virtualization software that uses GTK
|
https://en.wikipedia.org/wiki/Genomic%20convergence
|
Genomic convergence is a multifactor approach used in genetic research that combines different kinds of genetic data analysis to identify and prioritize susceptibility genes for a complex disease.
Early applications
In January 2003, Michael Hauser along with fellow researchers at the Duke Center for Human Genetics (CHG) coined the term “genomic convergence” to describe their endeavor to identify genes affecting the expression of Parkinson disease (PD). Their work successfully combined serial analysis of gene expression (SAGE) with genetic linkage analysis. The authors explain, “While both linkage and expression analyses are powerful on their own, the number of possible genes they present as candidates for PD or any complex disorder remains extremely large”. The convergence of the two methods allowed researchers to decrease the number of possible PD genes to consider for further study.
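At its core, the approach intersects and re-ranks candidate lists produced by independent analyses, so that only genes supported by multiple lines of evidence go forward. A toy sketch (gene names and scores are invented; real pipelines use SAGE expression data and linkage statistics):

```python
# Toy illustration of genomic convergence: intersect candidate genes
# from two independent analyses and rank by combined evidence.
# Gene symbols and scores are invented for the example.

linkage_peaks = {"GENE_A": 3.1, "GENE_B": 2.4, "GENE_C": 2.0}      # LOD-like scores
expression_hits = {"GENE_B": 0.002, "GENE_C": 0.04, "GENE_D": 0.01}  # p-values

def converge(linkage, expression):
    shared = set(linkage) & set(expression)
    # Rank shared genes: stronger linkage first, then smaller p-value.
    return sorted(shared, key=lambda g: (-linkage[g], expression[g]))

print(converge(linkage_peaks, expression_hits))  # ['GENE_B', 'GENE_C']
```

Here five candidates from two analyses reduce to two convergent ones, which is the narrowing effect the Duke group describes.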
Their success prompted further use of the genomic convergence method at the CHG, and in July 2003 Yi-Ju Li, et al. published a paper revealing that glutathione S-transferase omega-1 (GSTO1) modifies the age-at-onset (AAO) of Alzheimer disease (AD) and PD.
In May 2004, Dr. Margaret Pericak-Vance, currently the director of the John P. Hussman Institute for Human Genomics at the University of Miami Miller School of Medicine and then the director of the CHG, articulated the value of the genomic convergence method at a New York Academy of Sciences (NYAS) keynote address entitled "Novel Methods in Genetic Exploration of Neurodegenerative Disease." She stated, "No single method is going to get us where we need to be with these complex traits. It is going to take a combination of methods to dissect the underlying etiology of these disorders".
Recent and future applications
Genomic convergence has a countless number of creative applications that combine the strengths of different analyses and studies. Maher Noureddine et al., note in their 2005 paper, “One of the growing problems in the s
|
https://en.wikipedia.org/wiki/Evolution%40Home
|
evolution@home was a volunteer computing project for evolutionary biology, launched in 2001. The aim of evolution@home is to improve understanding of evolutionary processes. This is achieved by simulating individual-based models. The Simulator005 module of evolution@home was designed to better predict the behaviour of Muller's ratchet.
The project was operated semi-automatically: participants had to manually download tasks from the webpage and submit their results by email. yoyo@home used a BOINC wrapper to completely automate the project by automatically distributing tasks and collecting their results, making the BOINC version a complete volunteer computing project. yoyo@home has since declared its involvement in this project finished.
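The kind of individual-based model involved can be sketched in a few lines. This is a deliberately tiny toy in the spirit of, but far simpler than, the project's Simulator005; the model structure and parameters are invented for illustration:

```python
import random

def mullers_ratchet(pop_size=200, mut_rate=0.5, s=0.02,
                    generations=300, seed=1):
    """Tiny asexual Wright-Fisher model: each individual is represented
    by its count of deleterious mutations, with fitness (1-s)^k.
    Mutation is one-way (an offspring gains a mutation with probability
    mut_rate), so once the least-loaded class is lost by drift it can
    never be recovered -- a 'click' of Muller's ratchet."""
    rng = random.Random(seed)
    pop = [0] * pop_size
    min_load_over_time = []
    for _ in range(generations):
        weights = [(1 - s) ** k for k in pop]
        parents = rng.choices(pop, weights=weights, k=pop_size)
        pop = [k + (rng.random() < mut_rate) for k in parents]
        min_load_over_time.append(min(pop))
    return min_load_over_time

loads = mullers_ratchet()
print("least-loaded class over time:", loads[::60])
```

Because offspring can only gain mutations, the minimum load never decreases; watching it climb is watching the ratchet click.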
See also
Artificial life
Digital organism
Evolutionary computation
Folding@home
List of volunteer computing projects
References
Science in society
Free science software
Volunteer computing projects
Digital organisms
Bioinformatics
|
https://en.wikipedia.org/wiki/KFRE-TV
|
KFRE-TV (channel 59) is a television station licensed to Sanger, California, United States, serving the Fresno area as an affiliate of The CW. It is owned by Sinclair Broadcast Group alongside Visalia-licensed Fox affiliate KMPH-TV (channel 26). Both stations share studios on McKinley Avenue in eastern Fresno, while KFRE-TV's transmitter is located on Bear Mountain (near Meadow Lakes).
History
Early years
The station first signed on the air on July 17, 1985, as KMSG-TV; it originally operated as a religious independent station, mostly with shows such as The PTL Club, The 700 Club, Richard Roberts, Jimmy Swaggart, and others, as well as Home Shopping Network programming during the overnight hours. This station signed on just as KAIL (then on channel 53, now on channel 7) was evolving from religious to more of a general entertainment format. By 1987, the station evolved into a Spanish-language format during the afternoon and evening hours, and English-language religious programs for about eight hours a day each morning. The station's Spanish programming was sourced from NetSpan, the second Spanish-language television network to launch in the United States (after the Spanish International Network, now Univision); NetSpan was relaunched as Telemundo in 1987. By 1989, the station gradually dropped its inventory of English-language religious programs, and exclusively affiliated with Telemundo.
WB affiliation
In 2000, KNSO (channel 51, then an affiliate of The WB) signed a deal to become the Fresno market's new Telemundo affiliate; as a result, Pappas Telecasting terminated a local marketing agreement (LMA) between KNSO and Fox affiliate KMPH (channel 26). On January 1, 2001, the LMA with KMPH was transferred to KMSG, which also resulted in the WB affiliation moving to the station from KNSO (becoming the network's third affiliate in the market; The WB's original Fresno affiliate was Clovis-based KGMC (channel 43), which was with the network from its launch in 1995 until 1997); chan
|
https://en.wikipedia.org/wiki/List%20of%20computer%20size%20categories
|
This list of computer size categories attempts to list commonly used categories of computer by the physical size of the device and its chassis or case, in descending order of size. One generation's "supercomputer" is the next generation's "mainframe", and a "PDA" does not have the same set of functions as a "laptop", but the list still has value, as it provides a ranked categorization of devices. It also ranks some more obscure computer sizes. Sizes range from supercomputers and mainframes down to minicomputers and microcomputers.
Large computers
Supercomputer
Minisupercomputer
Mainframe computer
Midrange computer
Superminicomputer
Minicomputer
Microcomputers
Interactive kiosk
Arcade cabinet
Personal computer (PC)
Desktop computer—see computer form factor for some standardized sizes of desktop computers
Full-size
All-in-one
Compact
Home theater
Home computer
Mobile computers
Desktop replacement computer or desknote
Laptop computer
Subnotebook computer, also known as a Kneetop computer; clamshell varieties may also be known as minilaptop or ultraportable laptop computers
Tablet personal computer
Handheld computers, which include the classes:
Ultra-mobile personal computer, or UMPC
Personal digital assistant or enterprise digital assistant, which include:
HandheldPC or Palmtop computer
Pocket personal computer
Electronic organizer
E-reader
Pocket computer
Calculator, which includes the class:
Graphing calculator
Scientific calculator
Programmable calculator
Accounting / Financial Calculator
Handheld game console
Portable media player
Portable data terminal
Handheld
Smartphone, a class of mobile phone
Feature phone
Wearable computer
Single-board computer
Wireless sensor network components
Plug computer
Stick PC, a single-board computer in a small elongated casing resembling a stick
Microcontroller
Smartdust
Nanocomputer
Others
Rackmount computer
Blade server
Blade PC
Small form factor personal
|
https://en.wikipedia.org/wiki/Table%20of%20Newtonian%20series
|
In mathematics, a Newtonian series, named after Isaac Newton, is a sum over a sequence $a_n$ written in the form

$$f(s) = \sum_{n=0}^{\infty} (-1)^n \binom{s}{n} a_n,$$

where

$$\binom{s}{n} = \frac{(s)_n}{n!}$$

is the binomial coefficient and $(s)_n$ is the falling factorial. Newtonian series often appear in relations of the form seen in umbral calculus.
List
The generalized binomial theorem gives

$$(1+z)^{s} = \sum_{n=0}^{\infty} \binom{s}{n} z^n.$$

A proof for this identity can be obtained by showing that it satisfies the differential equation

$$(1+z)\,\frac{df}{dz} = s\,f(z), \qquad f(0) = 1.$$
The digamma function:

$$\psi(s+1) = -\gamma - \sum_{k=1}^{\infty} \frac{(-1)^k}{k} \binom{s}{k}.$$
The Stirling numbers of the second kind are given by the finite sum

$$S(n,k) = \frac{1}{k!} \sum_{j=0}^{k} (-1)^{k-j} \binom{k}{j} j^n.$$

This formula is a special case of the kth forward difference of the monomial x^n evaluated at x = 0:

$$\Delta^k x^n \,\Big|_{x=0} = \sum_{j=0}^{k} (-1)^{k-j} \binom{k}{j} j^n = k!\; S(n,k).$$
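The finite sum for the Stirling numbers of the second kind can be evaluated directly. Below is a minimal Python sketch (the helper name `stirling2` is my own, not from the article); the sum is always divisible by k!, so integer division is exact:

```python
from math import comb, factorial

def stirling2(n, k):
    """Stirling number of the second kind via the finite-sum formula,
    i.e. the k-th forward difference of x**n at x = 0, divided by k!."""
    return sum((-1) ** (k - j) * comb(k, j) * j ** n
               for j in range(k + 1)) // factorial(k)

# S(4, 2) = 7: the seven ways to partition a 4-element set into 2 blocks.
print(stirling2(4, 2))
```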
A related identity forms the basis of the Nörlund–Rice integral:

$$\sum_{k=0}^{n} (-1)^k \binom{n}{k} \frac{1}{x+k} = \frac{n!}{x(x+1)\cdots(x+n)} = B(x,\,n+1),$$

where $\Gamma(x)$ is the Gamma function and $B(x,y) = \Gamma(x)\Gamma(y)/\Gamma(x+y)$ is the Beta function.
The trigonometric functions have umbral identities:
and
The umbral nature of these identities is a bit more clear by writing them in terms of the falling factorial . The first few terms of the sin series are
which can be recognized as resembling the Taylor series for sin x, with (s)n standing in the place of xn.
In analytic number theory it is of interest to sum
where B are the Bernoulli numbers. Employing the generating function its Borel sum can be evaluated as
The general relation gives the Newton series
where $\zeta(s,x)$ is the Hurwitz zeta function and $B_k(x)$ the Bernoulli polynomial. The series does not converge; the identity holds formally.
Another identity is
which converges for . This follows from the general form of a Newton series for equidistant nodes (when it exists, i.e., when it is convergent):

$$f(x) = \sum_{k=0}^{\infty} \binom{\frac{x-a}{h}}{k} \sum_{j=0}^{k} (-1)^{k-j} \binom{k}{j} f(a+jh).$$
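The equidistant-node Newton series can be checked numerically. The Python sketch below (helper names are my own; nodes at 0, 1, 2, … with step h = 1) builds the forward differences and evaluates the series with a generalized binomial coefficient; for a polynomial the series terminates and is exact:

```python
def gbinom(x, k):
    """Generalized binomial coefficient C(x, k) for real x."""
    r = 1.0
    for i in range(k):
        r *= (x - i) / (i + 1)
    return r

def newton_series(samples, x):
    """Evaluate sum_k C(x, k) * (k-th forward difference of f at 0),
    given samples f(0), f(1), ..., f(len(samples)-1)."""
    diffs = list(samples)
    total = 0.0
    for k in range(len(samples)):
        total += gbinom(x, k) * diffs[0]              # Delta^k f(0)
        diffs = [b - a for a, b in zip(diffs, diffs[1:])]
    return total

samples = [x * x for x in range(4)]   # f(x) = x^2 at nodes 0..3
val = newton_series(samples, 2.5)     # exact for polynomials: 6.25
```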
See also
Binomial transform
List of factorial and binomial topics
Nörlund–Rice integral
Carlson's theorem
References
Philippe Flajolet and Robert Sedgewick, "Mellin transforms and asymptotics: Finite differences and Rice's integrals", Theoretical Computer Science 144 (1995) pp 101–124.
Finite differences
Factorial and binomial topics
Newton series
|
https://en.wikipedia.org/wiki/Lead%E2%80%93lag%20compensator
|
A lead–lag compensator is a component in a control system that improves an undesirable frequency response in a feedback and control system. It is a fundamental building block in classical control theory.
Applications
Lead–lag compensators influence disciplines as varied as robotics,
satellite control, automobile diagnostics, LCDs and laser frequency stabilisation. They are an important building block in analog control systems, and
can also be used in digital control.
Given the control plant, desired specifications can be achieved using compensators. P, I, PI, PD, and PID controllers are optimizing controllers used to improve system parameters (such as reducing steady-state error, reducing the resonant peak, and improving the system response by reducing the rise time). All of these operations can be performed by compensators as well, used in the cascade compensation technique.
Theory
Both lead compensators and lag compensators introduce a pole–zero pair into the open-loop transfer function. The transfer function can be written in the Laplace domain as

$$\frac{Y(s)}{X(s)} = \frac{s - z}{s - p},$$

where X is the input to the compensator, Y is the output, s is the complex Laplace transform variable, z is the zero frequency and p is the pole frequency. The pole and zero are both typically negative, or left of the origin in the complex plane. In a lead compensator, $|z| < |p|$,
while in a lag compensator $|z| > |p|$.
A lead–lag compensator consists of a lead compensator cascaded with a lag compensator. The overall transfer function can be written as

$$\frac{Y(s)}{X(s)} = \frac{(s - z_1)(s - z_2)}{(s - p_1)(s - p_2)}.$$

Typically $|p_1| > |z_1| > |z_2| > |p_2|$, where z1 and p1 are the zero and pole of the lead compensator and z2 and p2 are the zero and pole of the lag compensator. The lead compensator provides phase lead at high frequencies. This shifts the root locus to the left, which enhances the responsiveness and stability of the system. The lag compensator provides phase lag at low frequencies, which reduces
the steady state error.
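The phase lead contributed by a lead compensator can be illustrated numerically. The Python sketch below uses hypothetical pole/zero locations z = −1 and p = −10 (not from the article) and checks the classical result that the maximum phase lead occurs at the geometric mean of |z| and |p|, with sin φmax = (|p| − |z|)/(|p| + |z|):

```python
from math import atan2, degrees, asin, sqrt

# Lead compensator H(s) = (s - z)/(s - p), with |z| < |p| (hypothetical values).
z, p = -1.0, -10.0

def phase_deg(w):
    """Phase of H(jw) in degrees: arg(jw - z) - arg(jw - p)."""
    return degrees(atan2(w, -z) - atan2(w, -p))

# Maximum phase lead occurs at the geometric mean of |z| and |p|.
w_max = sqrt(abs(z) * abs(p))
phi_max = phase_deg(w_max)                 # about 54.9 degrees here
phi_formula = degrees(asin((abs(p) - abs(z)) / (abs(p) + abs(z))))
```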
The precise locations of the poles and zeros depend on both the desired characteristics of the closed
|
https://en.wikipedia.org/wiki/Smooth%20infinitesimal%20analysis
|
Smooth infinitesimal analysis is a modern reformulation of the calculus in terms of infinitesimals. Based on the ideas of F. W. Lawvere and employing the methods of category theory, it views all functions as being continuous and incapable of being expressed in terms of discrete entities. As a theory, it is a subset of synthetic differential geometry.
The nilsquare or nilpotent infinitesimals are numbers ε where ε² = 0 is true, but ε = 0 need not be true at the same time.
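A central axiom in this setting (the Kock–Lawvere axiom, sketched here in standard notation that is not in the excerpt) makes every function affine on the nilsquare infinitesimals $D = \{\varepsilon \in R : \varepsilon^2 = 0\}$: for every $g : D \to R$ there is a unique $b \in R$ such that

```latex
g(\varepsilon) = g(0) + b\,\varepsilon \qquad \text{for all } \varepsilon \in D.
```

In particular, the derivative of $f : R \to R$ at $x$ is defined as the unique $f'(x)$ satisfying $f(x+\varepsilon) = f(x) + f'(x)\,\varepsilon$ for all nilsquare $\varepsilon$, which is why every function in this theory is smooth.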
Overview
This approach departs from the classical logic used in conventional mathematics by denying the law of the excluded middle, e.g., NOT (a ≠ b) does not imply a = b. In particular, in a theory of smooth infinitesimal analysis one can prove for all infinitesimals ε, NOT (ε ≠ 0); yet it is provably false that all infinitesimals are equal to zero. One can see that the law of excluded middle cannot hold from the following basic theorem (again, understood in the context of a theory of smooth infinitesimal analysis):
Every function whose domain is R, the real numbers, is continuous and infinitely differentiable.
Despite this fact, one could attempt to define a discontinuous function f(x) by specifying that f(x) = 1 for x = 0, and f(x) = 0 for x ≠ 0. If the law of the excluded middle held, then this would be a fully defined, discontinuous function. However, there are plenty of x, namely the infinitesimals, such that neither x = 0 nor x ≠ 0 holds, so the function is not defined on the real numbers.
In typical models of smooth infinitesimal analysis, the infinitesimals are not invertible, and therefore the theory does not contain infinite numbers. However, there are also models that include invertible infinitesimals.
Other mathematical systems exist which include infinitesimals, including nonstandard analysis and the surreal numbers. Smooth infinitesimal analysis is like nonstandard analysis in that (1) it is meant to serve as a foundation for analysis, and (2) the infinitesimal quantities do no
|
https://en.wikipedia.org/wiki/Selected%20area%20diffraction
|
Selected area (electron) diffraction (abbreviated as SAD or SAED) is a crystallographic experimental technique typically performed using a transmission electron microscope (TEM). It is a specific case of electron diffraction used primarily in material science and solid state physics as one of the most common experimental techniques. Especially with appropriate analytical software, SAD patterns (SADP) can be used to determine crystal orientation, measure lattice constants or examine its defects.
Principle
In a transmission electron microscope, a thin crystalline sample is illuminated by a parallel beam of electrons accelerated to an energy of hundreds of kiloelectronvolts. At these energies samples are transparent to the electrons if the sample is thinned enough (typically less than 100 nm). Due to wave–particle duality, the high-energy electrons behave as matter waves with a wavelength of a few thousandths of a nanometer. The relativistic wavelength is given by

$$\lambda = \frac{h}{\sqrt{2 m_0 e U \left(1 + \dfrac{eU}{2 m_0 c^2}\right)}},$$

where $h$ is the Planck constant, $m_0$ is the electron rest mass, $e$ is the elementary charge, $c$ is the speed of light and $U$ is the electric potential accelerating the electrons (also called the acceleration voltage). For instance, an acceleration voltage of 200 kV results in a wavelength of 2.508 pm.
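The relativistic wavelength is easy to evaluate numerically. A self-contained Python sketch (constants hard-coded from the SI definitions; not part of the article):

```python
from math import sqrt

h = 6.62607015e-34      # Planck constant, J*s
m0 = 9.1093837015e-31   # electron rest mass, kg
e = 1.602176634e-19     # elementary charge, C
c = 299792458.0         # speed of light, m/s

def electron_wavelength(U):
    """Relativistic de Broglie wavelength (m) for acceleration voltage U (V)."""
    return h / sqrt(2 * m0 * e * U * (1 + e * U / (2 * m0 * c ** 2)))

lam = electron_wavelength(200e3)   # 200 kV, as in the text: about 2.508e-12 m
```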
Since the spacing between atoms in crystals is about a hundred times larger, the electrons are diffracted by the crystal lattice, which acts as a diffraction grating. Due to the diffraction, some of the electrons are scattered at particular angles (diffracted beams), while others pass through the sample without changing their direction (transmitted beams). In order to determine the diffraction angles, the electron beam normally incident on the atomic lattice can be seen as a planar wave, which is re-transmitted by each atom as a spherical wave. Due to constructive interference, the spherical waves form a number of diffracted beams under angles given, approximately, by the Bragg condition

$$2 d \sin\theta = n\lambda,$$

where the integer n is an
|
https://en.wikipedia.org/wiki/Order-5%20dodecahedral%20honeycomb
|
In hyperbolic geometry, the order-5 dodecahedral honeycomb is one of four compact regular space-filling tessellations (or honeycombs) in hyperbolic 3-space. With Schläfli symbol {5,3,5}, it has five dodecahedral cells around each edge, and each vertex is surrounded by twenty dodecahedra. Its vertex figure is an icosahedron.
Description
The dihedral angle of a Euclidean regular dodecahedron is ~116.6°, so no more than three of them can fit around an edge in Euclidean 3-space. In hyperbolic space, however, the dihedral angle is smaller than it is in Euclidean space, and depends on the size of the figure; the smallest possible dihedral angle is 60°, for an ideal hyperbolic regular dodecahedron with infinitely long edges. The dodecahedra in this dodecahedral honeycomb are sized so that all of their dihedral angles are exactly 72°.
Images
Related polytopes and honeycombs
There are four regular compact honeycombs in 3D hyperbolic space:
There is another honeycomb in hyperbolic 3-space called the order-4 dodecahedral honeycomb, {5,3,4}, which has only four dodecahedra per edge. These honeycombs are also related to the 120-cell which can be considered as a honeycomb in positively curved space (the surface of a 4-dimensional sphere), with three dodecahedra on each edge, {5,3,3}. Lastly the dodecahedral ditope, {5,3,2} exists on a 3-sphere, with 2 hemispherical cells.
There are nine uniform honeycombs in the [5,3,5] Coxeter group family, including this regular form. Also the bitruncated form, t1,2{5,3,5}, , of this honeycomb has all truncated icosahedron cells.
The Seifert–Weber space is a compact manifold that can be formed as a quotient space of the order-5 dodecahedral honeycomb.
This honeycomb is a part of a sequence of polychora and honeycombs with icosahedron vertex figures:
This honeycomb is a part of a sequence of regular polytopes and honeycombs with dodecahedral cells:
Rectified order-5 dodecahedral honeycomb
The rectified order-5 dodecahedral honeycomb, , has
|
https://en.wikipedia.org/wiki/Icosahedral%20honeycomb
|
In geometry, the icosahedral honeycomb is one of four compact, regular, space-filling tessellations (or honeycombs) in hyperbolic 3-space. With Schläfli symbol {3,5,3}, there are three icosahedra around each edge, and 12 icosahedra around each vertex, in a regular dodecahedral vertex figure.
Description
The dihedral angle of a regular icosahedron is around 138.2°, so it is impossible to fit three icosahedra around an edge in Euclidean 3-space. However, in hyperbolic space, properly scaled icosahedra can have dihedral angles of exactly 120 degrees, so three of those can fit around an edge.
Related regular honeycombs
There are four regular compact honeycombs in 3D hyperbolic space:
Related regular polytopes and honeycombs
It is a member of a sequence of regular polychora and honeycombs {3,p,3} with deltahedral cells:
It is also a member of a sequence of regular polychora and honeycombs {p,5,p}, with vertex figures composed of pentagons:
Uniform honeycombs
There are nine uniform honeycombs in the [3,5,3] Coxeter group family, including this regular form as well as the bitruncated form, t1,2{3,5,3}, , also called truncated dodecahedral honeycomb, each of whose cells are truncated dodecahedra.
Rectified icosahedral honeycomb
The rectified icosahedral honeycomb, t1{3,5,3}, , has alternating dodecahedron and icosidodecahedron cells, with a triangular prism vertex figure:
Perspective projections from center of Poincaré disk model
Related honeycomb
There are four rectified compact regular honeycombs:
Truncated icosahedral honeycomb
The truncated icosahedral honeycomb, t0,1{3,5,3}, , has alternating dodecahedron and truncated icosahedron cells, with a triangular pyramid vertex figure.
Related honeycombs
Bitruncated icosahedral honeycomb
The bitruncated icosahedral honeycomb, t1,2{3,5,3}, , has truncated dodecahedron cells with a tetragonal disphenoid vertex figure.
Related honeycombs
Cantellated icosahedral honeycomb
The cantellated icosahedral honeycomb, t0,2{3
|
https://en.wikipedia.org/wiki/Nodal%20admittance%20matrix
|
In power engineering, the nodal admittance matrix (or just admittance matrix), Y matrix, or Ybus is an N × N matrix describing a linear power system with N buses. It represents the nodal admittance of the buses in a power system. In realistic systems, which contain thousands of buses, the Y matrix is quite sparse: each bus in a real power system is usually connected to only a few other buses through transmission lines. The Y matrix is also one of the data requirements needed to formulate a power-flow study.
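As a sketch of how such a matrix is assembled (a minimal Python example with hypothetical 3-bus, per-unit line data; the function and variable names are my own): each diagonal entry sums the admittances incident to that bus, and each off-diagonal entry is the negative of the branch admittance between the two buses.

```python
def build_ybus(n_buses, lines):
    """lines: iterable of (from_bus, to_bus, series_admittance) tuples."""
    Y = [[0j] * n_buses for _ in range(n_buses)]
    for i, j, y in lines:
        Y[i][i] += y          # diagonal: sum of admittances at bus i
        Y[j][j] += y
        Y[i][j] -= y          # off-diagonal: negative branch admittance
        Y[j][i] -= y
    return Y

lines = [
    (0, 1, 1 / (0.02 + 0.06j)),   # hypothetical branch impedances, per-unit
    (0, 2, 1 / (0.08 + 0.24j)),
    (1, 2, 1 / (0.06 + 0.18j)),
]
ybus = build_ybus(3, lines)
```

With no shunt elements each row sums to zero, so this matrix is singular; practical power-flow data also include shunt admittances and a slack bus.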
Context
Electric power transmission needs optimization in order to determine the necessary real and reactive power flows in a system for a given set of loads, as well as the voltages and currents in the system. Power flow studies are used not only to analyze current power flow situations, but also to plan ahead for anticipated disturbances to the system, such as the loss of a transmission line to maintenance and repairs. The power flow study would determine whether or not the system could continue functioning properly without the transmission line. Only computer simulation allows the complex handling required in power flow analysis because in most realistic situations the system is very complex and extensive and would be impractical to solve by hand. The Y Matrix is a tool in that domain. It provides a method of systematically reducing a complex system to a matrix that can be solved by a computer program. The equations used to construct the Y matrix come from the application of Kirchhoff's current law and Kirchhoff's voltage law to a circuit with steady-state sinusoidal operation. These laws give us that the sum of currents entering a node in the circuit is zero, and the sum of voltages around a closed loop starting and ending at a node is also zero. These principles are applied to all the nodes in a power flow system and thereby determine the elements of the admittance matrix, which represents the admittance relationships between nodes, which then determin
|
https://en.wikipedia.org/wiki/SILO%20%28bootloader%29
|
The SPARC Improved bootLOader (SILO) is the bootloader used by the SPARC port of the Linux operating system; it can also be used for Solaris as a replacement for the standard Solaris boot loader.
SILO generally looks similar to the basic version of LILO, giving a "boot:" prompt, at which the user can press the Tab key to see the available images to boot. The configuration file format is reasonably similar to LILO's, as well as some of the command-line options. However, SILO differs significantly from LILO because it reads and parses the configuration file at boot time, so it is not necessary to re-run it after every change to the file or to the installed kernel images. SILO is able to access ext2, ext3, ext4, UFS, romfs and ISO 9660 file systems, enabling it to boot arbitrary kernels from them (more similar to GRUB).
SILO also has support for transparent decompression of gzipped vmlinux images, making the bzImage format unnecessary on SPARC Linux.
SILO is loaded from the SPARC PROM.
SILO is licensed under the terms of the GNU General Public License (GPL).
See also
bootman
LILO
elilo
Yaboot
NTLDR
BCD
References
External links
Gentoo wiki about SILO
Free boot loaders
SPARC microprocessor architecture
|
https://en.wikipedia.org/wiki/DMS-59
|
DMS-59 (Dual Monitor Solution, 59 pins) was generally used for computer video cards. It provides two Digital Visual Interface (DVI) or Video Graphics Array (VGA) outputs in a single connector. A Y-style breakout cable is needed for the transition from the DMS-59 output (digital + analogue) to DVI (digital) or VGA (analogue), and different types of adapter cables exist. The connector is four pins high and 15 pins wide, with a single pin missing from the bottom row, in a D-shaped shell, with thumbscrews. The adapter cable has since been listed as obsolete by its primary vendor, Molex.
The advantage of DMS-59 is its ability to support two high resolution displays, such as two DVI Single Link digital channels or two VGA analog channels, with a single DVI-size connector. The compact size lets a half-height card support two high resolution displays, and a full-height card (with two DMS-59 connectors) up to four high resolution displays.
The DMS-59 connector is used by e.g. AMD (AMD FireMV), Nvidia and Matrox for video cards sold in some Lenovo ThinkStation models, Viglen Genies and Omninos, Dell, HP and Compaq computers. DMS-59 connectors also appeared on Sun Computers. Some confusion has been caused by the fact that vendors label cards with DMS-59 as "supports DVI", but the cards have no DVI connectors built-in. Such cards, when equipped with only a VGA connector adapter cable, cannot be connected to a monitor with only a DVI-D input. A DMS-59 to DVI adapter cable needs to be used with such monitors.
The DMS-59 connector is derived from the LFH-60 Molex low-force helix connector, which could be found in some earlier graphics cards. These ports are similar to DMS-59, but have all 60 pins present, whereas DMS-59 has one pin (pin 58) blocked. A connector plug with all 60 pins (such as a Molex 88766-7610 DVI-I splitter) does not fit into a properly keyed DMS-59 socket.
A Dual-DVI breakout cable can be used in connection with two passive DVI-to-HDMI adapters to feed modern displa
|
https://en.wikipedia.org/wiki/Zonule%20of%20Zinn
|
The zonule of Zinn () (Zinn's membrane, ciliary zonule) (after Johann Gottfried Zinn) is a ring of fibrous strands forming a zonule (little band) that connects the ciliary body with the crystalline lens of the eye. These fibers are sometimes collectively referred to as the suspensory ligaments of the lens, as they act like suspensory ligaments.
Development
The non-pigmented ciliary epithelial cells of the eye synthesize portions of the zonules.
Anatomy
The zonule of Zinn is split into two layers: a thin layer, which lies near the hyaloid fossa, and a thicker layer, which is a collection of zonular fibers. Together, the fibers are known as the suspensory ligament of the lens. The zonules are about 1–2 μm in diameter.
The zonules attach to the lens capsule 2 mm anterior and 1 mm posterior to the equator, and arise from the ciliary epithelium of the pars plana region as well as from the valleys between the ciliary processes in the pars plicata.
When colour granules are displaced from the zonules of Zinn (by friction against the lens), the iris slowly fades. In some cases those colour granules clog the channels and lead to pigmentary glaucoma.
The zonules are primarily made of fibrillin, a connective tissue protein. Mutations in the fibrillin gene lead to the condition Marfan syndrome, and consequences include an increased risk of lens dislocation.
Clinical appearance
The zonules of Zinn are difficult to visualize using a slit lamp, but may be seen with exceptional dilation of the pupil, or if a coloboma of the iris or a subluxation of the lens is present. The number of zonules present in a person appears to decrease with age. The zonules insert around the outer margin of the lens (equator), both anteriorly and posteriorly.
Function
The zonule of Zinn secures the lens on the optical axis and transmits the forces of the ciliary muscle to the lens during accommodation.
|
https://en.wikipedia.org/wiki/Vadem%20Clio
|
The Vadem Clio is a handheld PC released by Vadem in 1999. Models of it used Windows CE H/PC Pro 3.0 (WinCE Core OS 2.11) as the operating system. Data Evolution Corporation currently owns the rights to the Clio.
Overview
The Clio is a convertible tablet computer released by Vadem. It runs Microsoft's Windows CE operating system and has a "SwingTop" pivoting arm. The 180-degree screen rotation allowed the unit to be used as a touch-screen tablet or as a more traditional notebook with a keyboard. The Clio could run for more than 12 hours on a single charge. Along with the Sony VAIO, it was one of the first full-sized portable computers measuring under an inch (2.2 cm) thick.
The platform was conceived of and created within Vadem by a skunkworks team that was led by Edmond Ku. Clio was first developed without the knowledge of Microsoft and after it was presented to Bill Gates and the CE team, it led to the definition of the Jupiter-class CE platform.
Handwriting software was from Vadem's ParaGraph group (acquired from SGI), the same team that provided handwriting recognition technology used in the Apple Newton.
Originally introduced in 1998, the Clio product line won numerous awards and accolades, such as Mobile Computing & Communications’ “Best Handheld Design, Keyboard Form Factor;” PC Week “Best of Comdex” finalist; Home Office Computing’s Silver Award; Mobility Award “Notebook Computing, PC Companion” winner; Industrial Designs Excellence Awards (IDEA)—Silver in Business and Industrial Equipment; and IDC’s “Best Design”. In addition, the Clio has been featured in hundreds of articles and has appeared on the covers of a number of magazines, including Pen Computing and Business Week.
Design
The swing arm and rotating screen concept was conceived by Edmond Ku, Vadem's engineering director. The physical design was the creation of frogdesign, Inc.'s industrial designers Sonia Schieffer and Josh Morenstein and mec
|
https://en.wikipedia.org/wiki/The%20Baseball%20Network
|
The Baseball Network was a short-lived American television broadcasting joint venture between ABC, NBC and Major League Baseball (MLB). Under the arrangement, beginning in the 1994 season, the league produced its own broadcasts in-house which were then brokered to air on ABC and NBC. The Baseball Network was the first television network in the United States to be owned by a professional sports league.
The package included coverage of games in prime time on selected nights throughout the regular season (under the branding Baseball Night in America), along with coverage of the postseason and the World Series. Unlike previous broadcasting arrangements with the league, there was no national "game of the week" during the regular season; these would be replaced by multiple weekly regional telecasts on certain nights of the week. Additionally, The Baseball Network had exclusive coverage windows; no other broadcaster could televise MLB games during the same night that The Baseball Network was televising games.
The arrangement did not last long; due to the effects of a players' strike on the remainder of the 1994 season, and poor reception from fans and critics over how the coverage was implemented, The Baseball Network was disbanded after the 1995 season. While NBC would maintain rights to certain games, the growing Fox network (having established its own sports division two years earlier in 1994) became the league's new national broadcast partner beginning in 1996.
Background
After the fallout from CBS's financial problems from their exclusive, four-year-long (lasting from 1990 to 1993), US$1.8 billion television contract with Major League Baseball (a contract that ultimately cost the network approximately $500 million), Major League Baseball decided to go into the business of producing the telecasts themselves and market these to advertisers on its own. In reaction to the failed trial with CBS, Major League Baseball was desperately grasping for every available dollar.
|
https://en.wikipedia.org/wiki/Chapman%E2%80%93Jouguet%20condition
|
The Chapman–Jouguet condition holds approximately in detonation waves in high explosives. It states that the detonation propagates at a velocity at which the reacting gases just reach sonic velocity (in the frame of the leading shock wave) as the reaction ceases.
David Chapman and Émile Jouguet originally (c. 1900) stated the condition for an infinitesimally thin detonation. A physical interpretation of the condition is usually based on the later modelling (c. 1943) by Yakov Borisovich Zel'dovich, John von Neumann, and Werner Döring (the so-called ZND detonation model).
In more detail (in the ZND model) in the frame of the leading shock of the detonation wave, gases enter at supersonic velocity and are compressed through the shock to a high-density, subsonic flow. This sudden change in pressure initiates the chemical (or sometimes, as in steam explosions, physical) energy release. The energy release re-accelerates the flow back to the local speed of sound. It can be shown fairly simply, from the one-dimensional gas equations for steady flow, that the reaction must cease at the sonic ("CJ") plane, or there would be discontinuously large pressure gradients at that point.
The sonic plane forms a so-called choke point that enables the lead shock, and reaction zone, to travel at a constant velocity, undisturbed by the expansion of gases in the rarefaction region beyond the CJ plane.
This simple one-dimensional model is quite successful in explaining detonations. However, observations of the structure of real chemical detonations show a complex three-dimensional structure, with parts of the wave traveling faster than average, and others slower. Indeed, such waves are quenched as their structure is destroyed. The Wood–Kirkwood detonation theory can correct for some of these limitations.
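In outline, for steady one-dimensional flow with mass flux $\dot m$ per unit area and heat release $q$ (a standard sketch, with notation assumed rather than taken from this article), the Rayleigh line and the Hugoniot curve in the pressure–specific-volume plane are

```latex
p - p_0 = -\dot{m}^2 (v - v_0), \qquad
h - h_0 = \tfrac{1}{2}\,(p - p_0)(v + v_0) + q.
```

The Chapman–Jouguet solution is the point at which the Rayleigh line is tangent to the equilibrium Hugoniot curve; there the flow leaves the reaction zone exactly at the local speed of sound.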
Mathematical description
The Rayleigh line equation and the Hugoniot curve equation obtained from the Rankine–Hugoniot relations for an ideal gas, with the assumption of con
|
https://en.wikipedia.org/wiki/Conductance%20quantum
|
The conductance quantum, denoted by the symbol $G_0$, is the quantized unit of electrical conductance. It is defined by the elementary charge e and the Planck constant h as:

$$G_0 = \frac{2e^2}{h} \approx 7.748 \times 10^{-5}\ \mathrm{S}.$$

It appears when measuring the conductance of a quantum point contact, and, more generally, is a key component of the Landauer formula, which relates the electrical conductance of a quantum conductor to its quantum properties. It is twice the reciprocal of the von Klitzing constant ($G_0 = 2/R_{\mathrm{K}}$).
Note that the conductance quantum does not mean that the conductance of any system must be an integer multiple of G0. Instead, it describes the conductance of two quantum channels (one channel for spin up and one channel for spin down) if the probability for transmitting an electron that enters the channel is unity, i.e. if transport through the channel is ballistic. If the transmission probability is less than unity, then the conductance of the channel is less than G0. The total conductance of a system is equal to the sum of the conductances of all the parallel quantum channels that make up the system.
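The numerical value of $G_0 = 2e^2/h$ follows directly from the exact SI values of e and h fixed in the 2019 redefinition. A small Python sketch (not part of the article):

```python
e = 1.602176634e-19   # elementary charge, C (exact since 2019)
h = 6.62607015e-34    # Planck constant, J*s (exact since 2019)

G0 = 2 * e ** 2 / h   # conductance quantum, about 7.748e-5 S
R0 = 1 / G0           # about 12.9 kilo-ohms per fully open two-spin channel
```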
Derivation
In a 1D wire, adiabatically connecting two reservoirs of potential $\mu_1$ and $\mu_2$:

The density of states is

$$\frac{dn}{dE} = \frac{2}{h v},$$

where the factor 2 comes from electron spin degeneracy, $h$ is the Planck constant, and $v$ is the electron velocity.

The voltage is:

$$V = \frac{\mu_1 - \mu_2}{e},$$

where $e$ is the electron charge.

The 1D current going across is the current density:

$$I = e v \frac{dn}{dE}\,(\mu_1 - \mu_2) = \frac{2e}{h}\,(\mu_1 - \mu_2) = \frac{2e^2}{h}\,V.$$

This results in a quantized conductance:

$$G_0 = \frac{I}{V} = \frac{2e^2}{h}.$$
Occurrence
Quantized conductance occurs in wires that are ballistic conductors, when the elastic mean free path is much larger than the length of the wire: . B. J. van Wees et al. first observed the effect in a point contact in 1988. Carbon nanotubes have quantized conductance independent of diameter. The quantum hall effect can be used to precisely measure the conductance quantum value. It also occurs in electrochemistry reactions and in association with the quantum capacitance defines the rate with which electrons are transferred between quant
|
https://en.wikipedia.org/wiki/Mouthfeel
|
Mouthfeel refers to the physical sensations in the mouth caused by food or drink, making it distinct from taste. It is a fundamental sensory attribute which, along with taste and smell, determines the overall flavor of a food item. Mouthfeel is also sometimes referred to as texture.
It is used in many areas related to the testing and evaluating of foodstuffs, such as wine-tasting and food rheology. It is evaluated from initial perception on the palate, to first bite, through chewing to swallowing and aftertaste. In wine-tasting, for example, mouthfeel is usually used with a modifier (big, sweet, tannic, chewy, etc.) to the general sensation of the wine in the mouth. Research indicates texture and mouthfeel can also influence satiety with the effect of viscosity most significant.
Mouthfeel is often related to a product's water activity—hard or crisp products having lower water activities and soft products having intermediate to high water activities.
Qualities perceived
Chewiness: The sensation of sustained, elastic resistance from food while it is chewed.
Cohesiveness: Degree to which the sample deforms before rupturing when biting with molars.
Crunchiness: The audible grinding of a food when it is chewed.
Density: Compactness of cross section of the sample after biting completely through with the molars.
Dryness: Degree to which the sample feels dry in the mouth.
Exquisiteness: Perceived quality of the item in question.
Fracturability: Force with which the sample crumbles, cracks or shatters. Fracturability encompasses crumbliness, crispiness, crunchiness and brittleness.
Graininess: Degree to which a sample contains small grainy particles.
Gumminess: Energy required to disintegrate a semi-solid food to a state ready for swallowing.
Hardness: Force required to deform the product to a given distance, i.e., force to compress between molars, bite through with incisors, compress between tongue and palate.
Heaviness: Weight of product perceived when fir
|
https://en.wikipedia.org/wiki/Amplitude%20modulation%20signalling%20system
|
The amplitude modulation signalling system (AMSS or the AM signalling system) is a digital system for adding low bit rate information to an analogue amplitude modulated broadcast signal in the same manner as the Radio Data System (RDS) for frequency modulated (FM) broadcast signals.
This system has been standardized in March 2006 by ETSI (TS 102 386) as an extension to the Digital Radio Mondiale (DRM) system.
Broadcasting
AMSS data are broadcast from the following transmitters:
LW
RTL France: 234 kHz
SW
BBC World Service: 15.575 MHz
Formerly it was also used by:
MW
Truckradio 531 kHz
BBC World Service: 648 kHz
Deutschlandradio Kultur: 990 kHz
External links
ETSI TS 102 386 V1.2.1 (2006-03) directly from ETSI Publications Download Area (account or free registration required)
Radio technology
Broadcast engineering
2006 introductions
2006 establishments
|
https://en.wikipedia.org/wiki/Hudson%20Soft%20HuC6270
|
HuC6270 is a video display controller (VDC) developed by Hudson Soft and manufactured for Hudson Soft by Seiko Epson. The VDC was used in the PC Engine game console produced by NEC Corporation, and the upgraded PC Engine SuperGrafx.
Technical specification
The HuC6270 generates a display signal composed of a background layer (with x/y scrolling) and sprites. It uses external VRAM via a 16-bit address bus. It can display up to 64 sprites on screen, with a maximum of 16 sprites per horizontal scan line.
Uses
The HuC6270 was used in the PC Engine and PC Engine SuperGrafx consoles. Additionally, the VDC was used in two arcade games. The arcade version of Bloody Wolf ran on a custom version of the PC Engine. The arcade hardware is missing the second 16-bit graphic chip, the HuC6260 video color encoder, that is in the PC Engine. This means the VDC directly accesses palette RAM and builds out the display signals/timing. A rare Capcom quiz-type arcade game also ran on a modified version of the SuperGrafx hardware, which used two VDCs.
References
Graphics chips
TurboGrafx-16
|
https://en.wikipedia.org/wiki/Paris%E2%80%93Harrington%20theorem
|
In mathematical logic, the Paris–Harrington theorem states that a certain combinatorial principle in Ramsey theory, namely the strengthened finite Ramsey theorem, which is expressible in Peano arithmetic, is not provable in this system. The combinatorial principle is however provable in slightly stronger systems.
This result has been described by some (such as the editor of the Handbook of Mathematical Logic in the references below) as the first "natural" example of a true statement about the integers that could be stated in the language of arithmetic, but not proved in Peano arithmetic; it was already known that such statements existed by Gödel's first incompleteness theorem.
Strengthened finite Ramsey theorem
The strengthened finite Ramsey theorem is a statement about colorings and natural numbers and states that:
For any positive integers n, k, m, such that m ≥ n, one can find N with the following property: if we color each of the n-element subsets of S = {1, 2, 3,..., N} with one of k colors, then we can find a subset Y of S with at least m elements, such that all n-element subsets of Y have the same color, and the number of elements of Y is at least the smallest element of Y.
Without the condition that the number of elements of Y is at least the smallest element of Y, this is a corollary of the finite Ramsey theorem, with N given by the corresponding Ramsey number.
Moreover, the strengthened finite Ramsey theorem can be deduced from the infinite Ramsey theorem in almost exactly the same way that the finite Ramsey theorem can be deduced from it, using a compactness argument (see the article on Ramsey's theorem for details). This proof can be carried out in second-order arithmetic.
The Paris–Harrington theorem states that the strengthened finite Ramsey theorem is not provable in Peano arithmetic.
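For very small parameters, the least N in the strengthened finite Ramsey theorem can be found by exhaustive search. The sketch below is a naive brute force (the function name `ph` is illustrative, not standard notation); it is hopeless beyond tiny n, k, m, since the number of colorings grows as k^C(N,n):

```python
from itertools import combinations, product

def has_large_homogeneous(coloring, N, n, m):
    """Is there Y within {1..N} with |Y| >= m, |Y| >= min(Y), and all
    n-subsets of Y the same color?"""
    for size in range(max(m, n), N + 1):
        for Y in combinations(range(1, N + 1), size):
            if Y[0] > size:          # violates |Y| >= min(Y); Y is sorted
                continue
            if len({coloring[s] for s in combinations(Y, n)}) == 1:
                return True
    return False

def ph(n, k, m):
    """Least N such that every k-coloring of the n-subsets of {1..N}
    admits a 'relatively large' monochromatic set. Naive brute force."""
    N = max(m, n)
    while True:
        subsets = list(combinations(range(1, N + 1), n))
        if all(has_large_homogeneous(dict(zip(subsets, c)), N, n, m)
               for c in product(range(k), repeat=len(subsets))):
            return N
        N += 1

print(ph(1, 2, 2))   # 3: N = 2 fails (color 1 and 2 differently), N = 3 works
```

The unprovability result is about the growth rate of this function: it eventually dominates every function provably total in Peano arithmetic.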
Paris–Harrington theorem
Roughly speaking, Jeff Paris and Leo Harrington (1977) showed that the strengthened finite Ramsey theorem is unprovable in Peano arithmetic by showing that in Peano
|
https://en.wikipedia.org/wiki/Simple%20%28abstract%20algebra%29
|
In mathematics, the term simple is used to describe an algebraic structure which in some sense cannot be divided by a smaller structure of the same type. Put another way, an algebraic structure is simple if the kernel of every homomorphism is either the whole structure or a single element. Some examples are:
A group is called a simple group if it does not contain a nontrivial proper normal subgroup.
A ring is called a simple ring if it does not contain a nontrivial two sided ideal.
A module is called a simple module if it does not contain a nontrivial submodule.
An algebra is called a simple algebra if it does not contain a nontrivial two sided ideal.
The general pattern is that the structure admits no non-trivial congruence relations.
The term is used differently in semigroup theory. A semigroup is said to be simple if it has no nontrivial
ideals, or equivalently, if Green's relation J is
the universal relation. Not every congruence on a semigroup is associated with an ideal, so a simple semigroup may
have nontrivial congruences. A semigroup with no nontrivial congruences is called congruence simple.
See also
semisimple
simple universal algebra
Abstract algebra
|
https://en.wikipedia.org/wiki/Duplicate%20code
|
In computer programming, duplicate code is a sequence of source code that occurs more than once, either within a program or across different programs owned or maintained by the same entity. Duplicate code is generally considered undesirable for a number of reasons. A minimum requirement is usually applied to the quantity of code that must appear in a sequence for it to be considered duplicate rather than coincidentally similar. Sequences of duplicate code are sometimes known as code clones or just clones; the automated process of finding duplications in source code is called clone detection.
Two code sequences may be duplicates of each other without being character-for-character identical, for example by being character-for-character identical only when white space characters and comments are ignored, or by being token-for-token identical, or token-for-token identical with occasional variation. Even code sequences that are only functionally identical may be considered duplicate code.
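The token-for-token notion of duplication mentioned above can be illustrated with Python's own tokenizer. This is only a minimal sketch (function names are illustrative); real clone detectors also handle identifier renaming and near-miss clones:

```python
import io
import tokenize

# Layout-only tokens to ignore when comparing token streams.
SKIP = {tokenize.COMMENT, tokenize.NL, tokenize.NEWLINE,
        tokenize.INDENT, tokenize.DEDENT}

def token_stream(source):
    """Token (type, text) pairs with comments and layout tokens dropped."""
    return [(tok.type, tok.string)
            for tok in tokenize.generate_tokens(io.StringIO(source).readline)
            if tok.type not in SKIP]

def are_token_clones(a, b):
    """Token-for-token identical, ignoring whitespace and comments."""
    return token_stream(a) == token_stream(b)

print(are_token_clones("x = 1 + 2  # total\n", "x=1+2\n"))   # True
print(are_token_clones("x = 1\n", "y = 1\n"))                # False
```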
Emergence
Some of the ways in which duplicate code may be created are:
copy and paste programming, which in academic settings may be done as part of plagiarism
scrounging, in which a section of code is copied "because it works". In most cases this operation involves slight modifications in the cloned code, such as renaming variables or inserting/deleting code. The language nearly always allows one to call one copy of the code from different places, so that it can serve multiple purposes, but instead the programmer creates another copy, perhaps because they
do not understand the language properly
do not have the time to do it properly, or
do not care about the increased active software rot.
It may also happen that functionality is required that is very similar to that in another part of a program, and a developer independently writes code that is very similar to what exists elsewhere. Studies suggest that such independently rewritten code is typically not syntactically similar.
|
https://en.wikipedia.org/wiki/Brauer%20algebra
|
In mathematics, a Brauer algebra is an associative algebra introduced by Richard Brauer in the context of the representation theory of the orthogonal group. It plays the same role that the symmetric group does for the representation theory of the general linear group in Schur–Weyl duality.
Structure
The Brauer algebra $\mathfrak{B}_n(\delta)$ is a $\mathbb{Z}[\delta]$-algebra depending on the choice of a positive integer $n$. Here $\delta$ is an indeterminate, but in practice $\delta$ is often specialised to the dimension of the fundamental representation of an orthogonal group $O(\delta)$. The Brauer algebra has the dimension $\dim \mathfrak{B}_n(\delta) = \frac{(2n)!}{2^n\, n!} = (2n-1)!! = (2n-1)(2n-3)\cdots 3\cdot 1.$
Diagrammatic definition
A basis of $\mathfrak{B}_n(\delta)$ consists of all pairings on a set of $2n$ elements $X_1, \ldots, X_n, Y_1, \ldots, Y_n$ (that is, all perfect matchings of the complete graph $K_{2n}$: any two of the $2n$ elements may be matched to each other, regardless of their symbols). The elements $X_i$ are usually written in a row, with the elements $Y_i$ beneath them.
The product of two basis elements $A$ and $B$ is obtained by concatenation: first identifying the endpoints in the bottom row of $A$ and the top row of $B$ (Figure AB in the diagram), then deleting the endpoints in the middle row and joining endpoints in the remaining two rows if they are joined, directly or by a path, in AB (Figure AB=nn in the diagram). Thereby all closed loops in the middle of AB are removed. The product of the basis elements is then defined to be the basis element corresponding to the new pairing multiplied by $\delta^r$, where $r$ is the number of deleted loops; in the example in the diagram, the product acquires one factor of $\delta$ per removed loop.
Generators and relations
$\mathfrak{B}_n(\delta)$ can also be defined as the $\mathbb{Z}[\delta]$-algebra with generators $s_1, \ldots, s_{n-1}, e_1, \ldots, e_{n-1}$ satisfying the following relations:
Relations of the symmetric group: $s_i^2 = 1$, $s_i s_j = s_j s_i$ whenever $|i - j| > 1$, $s_i s_{i+1} s_i = s_{i+1} s_i s_{i+1}$
Almost-idempotent relation: $e_i^2 = \delta e_i$
Commutation: $s_i e_j = e_j s_i$, $e_i e_j = e_j e_i$ whenever $|i - j| > 1$
Tangle relations: $e_i e_{i \pm 1} e_i = e_i$, $s_i s_{i \pm 1} e_i = e_{i \pm 1} e_i$, $e_i s_{i \pm 1} s_i = e_i e_{i \pm 1}$
Untwisting: $s_i e_i = e_i s_i = e_i$, $e_i s_{i \pm 1} e_i = e_i$
In this presentation $s_i$ represents the diagram in which $X_k$ is always connected to $Y_k$ directly beneath it except for $X_i$ and $X_{i+1}$, which are connected to $Y_{i+1}$ and $Y_i$ respectively. Similarly $e_i$ represents the diagram in which $X_k$ is always connected to $Y_k$ directly beneath it except for $X_i$ being connected to $X_{i+1}$ and $Y_i$ to $Y_{i+1}$.
|
https://en.wikipedia.org/wiki/Hereditary%20set
|
In set theory, a hereditary set (or pure set) is a set whose elements are all hereditary sets. That is, all elements of the set are themselves sets, as are all elements of the elements, and so on.
Examples
For example, it is vacuously true that the empty set is a hereditary set, and thus the set containing only the empty set is a hereditary set. Similarly, a set that contains two elements: the empty set and the set that contains only the empty set, is a hereditary set.
In formulations of set theory
In formulations of set theory that are intended to be interpreted in the von Neumann universe or to express the content of Zermelo–Fraenkel set theory, all sets are hereditary, because the only sort of object that is even a candidate to be an element of a set is another set. Thus the notion of hereditary set is interesting only in a context in which there may be urelements.
Assumptions
The inductive definition of hereditary sets presupposes that set membership is well-founded (i.e., the axiom of regularity), otherwise the recurrence may not have a unique solution. However, it can be restated non-inductively as follows: a set is hereditary if and only if its transitive closure contains only sets.
In this way the concept of hereditary sets can also be extended to non-well-founded set theories in which sets can be members of themselves. For example, a set that contains only itself is a hereditary set.
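The non-inductive characterization above can be sketched in Python, modelling pure sets as frozensets (which, unlike the non-well-founded case, cannot contain themselves, so the recursion always terminates):

```python
def is_hereditary(x):
    """x is hereditary iff it is a set and every member is hereditary,
    i.e. its transitive closure contains only sets."""
    return isinstance(x, frozenset) and all(is_hereditary(m) for m in x)

empty = frozenset()
print(is_hereditary(empty))                    # True (vacuously)
print(is_hereditary(frozenset({empty})))       # True: the set containing ∅
print(is_hereditary(frozenset({1, empty})))    # False: 1 is an urelement
```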
See also
Hereditarily countable set
Hereditarily finite set
Well-founded set
References
Set theory
|
https://en.wikipedia.org/wiki/Goodyear%20MPP
|
The Goodyear Massively Parallel Processor (MPP) was a
massively parallel processing supercomputer built by Goodyear Aerospace
for the NASA Goddard Space Flight Center. It was designed to deliver enormous computational power at lower cost than other existing supercomputer architectures, by using thousands of simple processing elements, rather than one or a few highly complex CPUs. Development of the MPP began circa 1979; it was delivered in May 1983, and was in general use from 1985 until 1991.
It was based on Goodyear's earlier STARAN array processor, a 4x256 1-bit processing element (PE) computer. The MPP was a 128x128 two-dimensional array of 1-bit-wide PEs. In actuality, 132x128 PEs were configured: a 4x128 block was added for fault tolerance, to substitute for up to 4 rows (or columns) of processors in the presence of problems. The PEs operated in a single instruction, multiple data (SIMD) fashion: each PE performed the same operation simultaneously, on different data elements, under the control of a microprogrammed control unit.
After the MPP was retired in 1991, it was donated to the Smithsonian Institution, and is now in the collection of the National Air and Space Museum's Steven F. Udvar-Hazy Center. It was succeeded at Goddard by the MasPar MP-1 and Cray T3D massively parallel computers.
Applications
The MPP was initially developed for high-speed analysis of satellite images. In early tests, it was able to extract and separate different land-use areas on Landsat imagery in 18 seconds, as compared with 7 hours on a DEC VAX-11/780.
Once the system was put into production use, NASA's Office of Space Science and Applications solicited proposals from scientists across the country to test and implement a wide range of computational algorithms on the MPP. 40 projects were accepted, to form the "MPP Working Group"; results of most of them were presented at the First Symposium on the Frontiers of Massively Parallel Computation, in 1986.
Some examples of app
|
https://en.wikipedia.org/wiki/Binomial%20transform
|
In combinatorics, the binomial transform is a sequence transformation (i.e., a transform of a sequence) that computes its forward differences. It is closely related to the Euler transform, which is the result of applying the binomial transform to the sequence associated with its ordinary generating function.
Definition
The binomial transform, T, of a sequence, {an}, is the sequence {sn} defined by $s_n = \sum_{k=0}^{n} (-1)^k \binom{n}{k} a_k.$
Formally, one may write $(Ta)_n = s_n$
for the transformation, where T is an infinite-dimensional operator with matrix elements $T_{nk} = (-1)^k \binom{n}{k}$.
The transform is an involution, that is, $TT = 1$
or, using index notation, $\sum_{k=0}^{\infty} T_{nk} T_{km} = \delta_{nm},$
where $\delta_{nm}$ is the Kronecker delta. The original series can be regained by $a_n = \sum_{k=0}^{n} (-1)^k \binom{n}{k} s_k.$
The binomial transform of a sequence is just the nth forward differences of the sequence, with odd differences carrying a negative sign, namely: $s_n = (-1)^n (\Delta^n a)_0,$
where Δ is the forward difference operator.
Some authors define the binomial transform with an extra sign, so that it is not self-inverse: $t_n = \sum_{k=0}^{n} (-1)^{n-k} \binom{n}{k} a_k,$
whose inverse is $a_n = \sum_{k=0}^{n} \binom{n}{k} t_k.$
In this case the former transform is called the inverse binomial transform, and the latter is just the binomial transform. This is standard usage, for example, in the On-Line Encyclopedia of Integer Sequences.
Example
Both versions of the binomial transform appear in difference tables. Consider the following difference table:
Each line is the difference of the previous line. (The n-th number in the m-th line is $a_{m,n} = 3^{n-2}\left(2^{m+1} n^2 + 2^m (1+6m)\,n + 2^{m-1}\, 9m^2\right)$, and the difference equation $a_{m+1,n} = a_{m,n+1} - a_{m,n}$ holds.)
The top line read from left to right is {an} = 0, 1, 10, 63, 324, 1485, ... The diagonal with the same starting point 0 is {tn} = 0, 1, 8, 36, 128, 400, ... {tn} is the noninvolutive binomial transform of {an}.
The top line read from right to left is {bn} = 1485, 324, 63, 10, 1, 0, ... The cross-diagonal with the same starting point 1485 is {sn} = 1485, 1161, 900, 692, 528, 400, ... {sn} is the involutive binomial transform of {bn}.
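Both transforms in this example can be checked numerically against the sequences given above; a short sketch using the two defining formulas (helper names are illustrative):

```python
from math import comb

def binomial_transform(a):
    """Involutive form: s_n = sum_k (-1)^k C(n,k) a_k."""
    return [sum((-1) ** k * comb(n, k) * a[k] for k in range(n + 1))
            for n in range(len(a))]

def forward_differences(a):
    """Noninvolutive form: t_n = (Δ^n a)_0 = sum_k (-1)^(n-k) C(n,k) a_k."""
    return [sum((-1) ** (n - k) * comb(n, k) * a[k] for k in range(n + 1))
            for n in range(len(a))]

a = [0, 1, 10, 63, 324, 1485]        # top line, left to right
b = [1485, 324, 63, 10, 1, 0]        # top line, right to left

print(forward_differences(a))        # [0, 1, 8, 36, 128, 400]
print(binomial_transform(b))         # [1485, 1161, 900, 692, 528, 400]
print(binomial_transform(binomial_transform(b)) == b)  # True: involution
```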
Ordinary generating function
The transform connects the generating functions asso
|
https://en.wikipedia.org/wiki/Lindstrand%20Balloons
|
Lindstrand Balloons was a manufacturer of hot air balloons and other aerostats. The company was started by Swedish-born pilot and aeronautical designer Per Lindstrand in Oswestry, England, as Colt Balloons (later Thunder & Colt Balloons, then Lindstrand Balloons) in 1978. Lindstrand Balloons was known for its leading-edge engineering, which included sophisticated testing and production facilities.
Lindstrand Balloons designed and built the hot air balloons flown by Per Lindstrand and Richard Branson on their record-breaking flights: first across the Atlantic Ocean in 1987, and then the Pacific Ocean in 1990. Subsequently, Lindstrand designed and built three Rozière balloons which Per Lindstrand, Branson and others used in their unsuccessful attempts to circumnavigate the Earth by balloon. Per Lindstrand played an instrumental role in making these flights possible, and was pilot for all of them.
Ownership structure
In the late-1990s, Cameron Holdings and its owner Don Cameron acquired two-thirds ownership of Lindstrand Balloons. Cameron bought the majority stake in Lindstrand Balloons from Rory McCarthy, a British industrialist associated with Richard Branson, who had invested in Lindstrand to support Branson's series of record-setting balloon flights. The remaining third of the company was owned by its founder Per Lindstrand, until 2003 when Per sold his remaining share to Cameron Holdings.
Despite Cameron's ownership, Lindstrand Balloons continued to operate as an independent company with separate management and its own distinct designs and products.
Lindstrand Technologies
Per Lindstrand independently operates a separate company, which designs and builds gas balloons, innovative buildings, specialized aerospace equipment (including an advanced parachute for the Beagle 2 Mars-lander) and cutting edge inflatable structures including aircraft hangars, plugs for fire-containment for road tunnels and flood prevention systems.
On 15 April 2015 it was reported tha
|
https://en.wikipedia.org/wiki/Describing%20function
|
In control systems theory, the describing function (DF) method, developed by Nikolay Mitrofanovich Krylov and Nikolay Bogoliubov in the 1930s and extended by Ralph Kochenburger, is an approximate procedure for analyzing certain nonlinear control problems. It is based on quasi-linearization, which is the approximation of the non-linear system under investigation by a linear time-invariant (LTI) transfer function that depends on the amplitude of the input waveform. By definition, a transfer function of a true LTI system cannot depend on the amplitude of the input function because an LTI system is linear. Thus, this dependence on amplitude generates a family of linear systems that are combined in an attempt to capture salient features of the non-linear system behavior. The describing function is one of the few widely applicable methods for designing nonlinear systems, and is very widely used as a standard mathematical tool for analyzing limit cycles in closed-loop controllers, such as industrial process controls, servomechanisms, and electronic oscillators.
The method
Consider feedback around a discontinuous (but piecewise continuous) nonlinearity (e.g., an amplifier with saturation, or an element with deadband effects) cascaded with a slow stable linear system. The continuous region in which the feedback is presented to the nonlinearity depends on the amplitude of the output of the linear system. As the linear system's output amplitude decays, the nonlinearity may move into a different continuous region. This switching from one continuous region to another can generate periodic oscillations. The describing function method attempts to predict characteristics of those oscillations (e.g., their fundamental frequency) by assuming that the slow system acts like a low-pass or bandpass filter that concentrates all energy around a single frequency. Even if the output waveform has several modes, the method can still provide intuition about properties like frequency and p
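As a sketch of the quasi-linearization idea, the describing function of a memoryless nonlinearity can be estimated numerically as the first-harmonic gain of its response to a sinusoid of amplitude A. The function names below are illustrative, and only the in-phase component is computed (sufficient for single-valued, odd nonlinearities such as an ideal relay):

```python
import math

def describing_function(nonlinearity, A, samples=10000):
    """Estimate N(A): the fundamental-harmonic gain of the response
    of a static nonlinearity to the input A*sin(t)."""
    b1 = 0.0
    for i in range(samples):
        t = 2 * math.pi * i / samples
        b1 += nonlinearity(A * math.sin(t)) * math.sin(t)
    b1 *= 2 / samples        # in-phase first-harmonic Fourier coefficient
    return b1 / A            # equivalent amplitude-dependent gain

def relay(x):
    """Ideal relay with output levels ±1 (M = 1)."""
    return 1.0 if x >= 0 else -1.0

# Classical result: for an ideal relay, N(A) = 4M / (pi * A).
print(describing_function(relay, 2.0))   # ≈ 0.6366 = 4 / (2*pi)
```

Note how the estimated gain falls as A grows, exactly the amplitude dependence that a true LTI transfer function cannot exhibit.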
|
https://en.wikipedia.org/wiki/Interrupt%20descriptor%20table
|
The interrupt descriptor table (IDT) is a data structure used by the x86 architecture to implement an interrupt vector table. The IDT is used by the processor to determine the correct response to interrupts and exceptions.
The details in the description below apply specifically to the x86 architecture. Other architectures have similar data structures, but may behave differently.
Use of the IDT is triggered by three types of events: hardware interrupts, software interrupts, and processor exceptions, which together are referred to as interrupts. The IDT consists of 256 interrupt vectors, the first 32 (0–31 or 0x00–0x1F) of which are used for processor exceptions.
Real mode
In real mode, the interrupt table is called IVT (interrupt vector table). Up to the 80286, the IVT always resided at the same location in memory, ranging from 0x0000 to 0x03ff, and consisted of 256 far pointers. Hardware interrupts may be mapped to any of the vectors by way of a programmable interrupt controller. On the 80286 and later, the size and locations of the IVT can be changed in the same way as it is done with the IDT (Interrupt descriptor table) in protected mode (i.e., via the LIDT (Load Interrupt Descriptor Table Register) instruction) though it does not change the format of it.
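The real-mode IVT layout just described (256 four-byte far pointers starting at linear address 0) can be sketched with Python's struct module; the helper names and the example handler address are illustrative:

```python
import struct

IVT_BASE = 0x0000          # default real-mode IVT base, 0x0000..0x03ff

def ivt_entry_address(vector):
    """Each of the 256 entries is a 4-byte far pointer, so entry i
    sits at linear address base + 4*i."""
    return IVT_BASE + vector * 4

def far_pointer(segment, offset):
    """Pack one IVT entry: 16-bit offset first, then 16-bit segment,
    both little-endian, giving the segment:offset handler address."""
    return struct.pack("<HH", offset, segment)

print(hex(ivt_entry_address(0x10)))       # 0x40 -- entry for INT 10h
print(far_pointer(0xF000, 0x1234).hex())  # 341200f0
```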
BIOS interrupts
The BIOS provides simple real-mode access to a subset of hardware facilities by registering interrupt handlers. They are invoked as software interrupts with the INT assembly instruction and the parameters are passed via registers. These interrupts are used for various tasks like detecting
the system memory layout, configuring VGA output and modes, and accessing the disk early in the boot process.
Protected and long mode
The IDT is an array of descriptors stored consecutively in memory and indexed by the vector number. It is not necessary to use all of the possible entries: it is sufficient to populate the table up to the highest interrupt vector used, and set the IDT length portion of the
|
https://en.wikipedia.org/wiki/V%28D%29J%20recombination
|
V(D)J recombination is the mechanism of somatic recombination that occurs only in developing lymphocytes during the early stages of T and B cell maturation. It results in the highly diverse repertoire of antibodies/immunoglobulins and T cell receptors (TCRs) found in B cells and T cells, respectively. The process is a defining feature of the adaptive immune system.
V(D)J recombination in mammals occurs in the primary lymphoid organs (bone marrow for B cells and thymus for T cells) and in a nearly random fashion rearranges variable (V), joining (J), and in some cases, diversity (D) gene segments. The process ultimately results in novel amino acid sequences in the antigen-binding regions of immunoglobulins and TCRs that allow for the recognition of antigens from nearly all pathogens including bacteria, viruses, parasites, and worms as well as "altered self cells" as seen in cancer. The recognition can also be allergic in nature (e.g. to pollen or other allergens) or may match host tissues and lead to autoimmunity.
In 1987, Susumu Tonegawa was awarded the Nobel Prize in Physiology or Medicine "for his discovery of the genetic principle for generation of antibody diversity".
Background
Human antibody molecules (including B cell receptors) are composed of heavy and light chains, each of which contains both constant (C) and variable (V) regions, genetically encoded on three loci:
The immunoglobulin heavy locus (IGH@) on chromosome 14, containing the gene segments for the immunoglobulin heavy chain.
The immunoglobulin kappa (κ) locus (IGK@) on chromosome 2, containing the gene segments for one type (κ) of immunoglobulin light chain.
The immunoglobulin lambda (λ) locus (IGL@) on chromosome 22, containing the gene segments for another type (λ) of immunoglobulin light chain.
Each heavy chain or light chain gene contains multiple copies of three different types of gene segments for the variable regions of the antibody proteins. For example, the human immunoglobulin heavy
|
https://en.wikipedia.org/wiki/ZX8301
|
The ZX8301 is an Uncommitted Logic Array (ULA) integrated circuit designed for the Sinclair QL microcomputer. Also known as the "Master Chip", it provides a Video Display Generator, the division of a 15 MHz crystal to provide the 7.5 MHz system clock, ZX8302 register address decoder, DRAM refresh and bus controller. The ZX8301 is IC22 on the QL motherboard.
The Sinclair Research business model had always been to work toward a maximum performance to price ratio (as was evidenced by the keyboard mechanisms in the QL and earlier Sinclair models). Unfortunately, this focus on price and performance often resulted in cost cutting in the design and build of Sinclair's machines. One such cost driven decision (failing to use a hardware buffer integrated circuit (IC) between the IC pins and the external RGB monitor connection) caused the ZX8301 to quickly develop a reputation for being fragile and easy to damage, particularly if the monitor plug was inserted or removed while the QL was powered up. Such action resulted in damage to the video circuitry and almost always required replacement of the ZX8301.
The ZX8301, when subsequently used in the International Computers Limited (ICL) One Per Desk featured hardware buffering, and the chip proved to be much more reliable in this configuration.
See also
Sinclair QL
One Per Desk
List of Sinclair QL clones
References
External links
http://www.worldofspectrum.org/qlfaq/Hardware
Gate arrays
Sinclair Research
|
https://en.wikipedia.org/wiki/Physical%20coefficient
|
A physical coefficient is a number that characterizes some physical property of a technical or scientific object.
Stoichiometric coefficient of a chemical compound
To find the coefficient of a chemical compound, you must balance the elements involved in it. For example, water:
H2O.
It just so happens that hydrogen (H) and oxygen (O) are both diatomic molecules, so we have H2 and O2. To form water, one of the O atoms breaks off from the O2 molecule and reacts with an H2 molecule to form H2O. But there is one oxygen atom left; it reacts with another H2 molecule. Since it took two of each molecule to balance the equation, we put the coefficient 2 in front of H2O:
2 H2O.
The total reaction is thus 2 H2 + O2 → 2 H2O.
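The balancing can be checked mechanically by tallying atoms on each side of the equation; a small sketch (names are illustrative):

```python
from collections import Counter

def scaled(molecule, coefficient):
    """Atom tally of `coefficient` copies of a molecule."""
    return Counter({atom: n * coefficient for atom, n in molecule.items()})

# Each molecule is a tally of its atoms.
H2, O2, H2O = Counter(H=2), Counter(O=2), Counter(H=2, O=1)

lhs = scaled(H2, 2) + scaled(O2, 1)   # 2 H2 + O2
rhs = scaled(H2O, 2)                  # 2 H2O
print(lhs == rhs)                     # True: 4 H and 2 O on each side
```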
Examples of physical coefficients
Coefficient of thermal expansion (thermodynamics) (units of K−1) - Relates the change in a material's dimensions to a change in temperature.
Partition coefficient (KD) (chemistry) - The ratio of concentrations of a compound in two phases of a mixture of two immiscible solvents at equilibrium.
Hall coefficient (electrical physics) - Relates a magnetic field applied to an element to the voltage created, the amount of current and the element thickness. It is a characteristic of the material from which the conductor is made.
Lift coefficient (CL or CZ) (aerodynamics) (dimensionless) - Relates the lift generated by an airfoil with the dynamic pressure of the fluid flow around the airfoil, and the planform area of the airfoil.
Ballistic coefficient (BC) (aerodynamics) (units of kg/m2) - A measure of a body's ability to overcome air resistance in flight. BC is a function of mass, diameter, and drag coefficient.
Transmission coefficient (quantum mechanics) (dimensionless) - Represents the probability flux of a transmitted wave relative to that of an incident wave. It is often used to describe the probability of a particle
|
https://en.wikipedia.org/wiki/Prefetcher
|
The Prefetcher is a component of Microsoft Windows which was introduced in Windows XP. It is a component of the Memory Manager that can speed up the Windows boot process and shorten the amount of time it takes to start up programs. It accomplishes this by caching files that are needed by an application to RAM as the application is launched, thus consolidating disk reads and reducing disk seeks. This feature was covered by US patent 6,633,968.
Since Windows Vista, the Prefetcher has been extended by SuperFetch and ReadyBoost. SuperFetch attempts to accelerate application launch times by monitoring and adapting to application usage patterns over periods of time, and caching the majority of the files and data needed by them into memory in advance so that they can be accessed very quickly when needed. ReadyBoost (when enabled) uses external memory like a USB flash drive to extend the system cache beyond the amount of RAM installed in the computer. ReadyBoost also has a component called ReadyBoot that replaces the Prefetcher for the boot process if the system has 700 MB or more of RAM.
Overview
When a Windows system boots, components of many files need to be read into memory and processed. Often different parts of the same file (e.g. Registry hives) are loaded at different times. As a result, a significant amount of time is spent 'jumping' from file to file and back again multiple times, even though a single access would be more efficient. The prefetcher works by watching what data is accessed during the boot process (including data read from the NTFS Master File Table), and recording a trace file of this activity. The boot fetcher will continue to watch for such activity until 30 seconds after the user's shell has started, or until 60 seconds after all services have finished initializing, or until 120 seconds after the system has booted, whichever elapses first.
Future boots can then use the information recorded in this trace file to load code and data in a more eff
|
https://en.wikipedia.org/wiki/Second%20partial%20derivative%20test
|
In mathematics, the second partial derivative test is a method in multivariable calculus used to determine if a critical point of a function is a local minimum, maximum or saddle point.
Functions of two variables
Suppose that $f(x, y)$ is a differentiable real function of two variables whose second partial derivatives exist and are continuous. The Hessian matrix of $f$ is the 2 × 2 matrix of partial derivatives of $f$:
$H(x, y) = \begin{pmatrix} f_{xx}(x, y) & f_{xy}(x, y) \\ f_{yx}(x, y) & f_{yy}(x, y) \end{pmatrix}.$
Define $D(x, y)$ to be the determinant
$D(x, y) = \det\big(H(x, y)\big) = f_{xx}(x, y)\, f_{yy}(x, y) - \left(f_{xy}(x, y)\right)^2$
of $H$. Finally, suppose that $(a, b)$ is a critical point of $f$, that is, that $f_x(a, b) = f_y(a, b) = 0$. Then the second partial derivative test asserts the following:
If $D(a, b) > 0$ and $f_{xx}(a, b) > 0$ then $(a, b)$ is a local minimum of $f$.
If $D(a, b) > 0$ and $f_{xx}(a, b) < 0$ then $(a, b)$ is a local maximum of $f$.
If $D(a, b) < 0$ then $(a, b)$ is a saddle point of $f$.
If $D(a, b) = 0$ then the point could be any of a minimum, maximum, or saddle point (that is, the test is inconclusive).
Sometimes other equivalent versions of the test are used. In cases 1 and 2, the requirement that $D$ is positive at $(a, b)$ implies that $f_{xx}$ and $f_{yy}$ have the same sign there. Therefore, the second condition, that $f_{xx}$ be greater (or less) than zero, could equivalently be that $f_{yy}$ be greater (or less) than zero at that point.
A condition implicit in the statement of the test is that if $f_{xx}(a, b) = 0$ or $f_{yy}(a, b) = 0$, it must be the case that $D(a, b) \leq 0$, and therefore only cases 3 or 4 are possible.
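A numerical sketch of the two-variable test, using finite-difference approximations of the second partials (the step size, tolerance, and function name are illustrative choices):

```python
def second_partial_test(f, x, y, h=1e-5):
    """Classify a critical point (x, y) of f via finite-difference
    second partials. Returns 'min', 'max', 'saddle' or 'inconclusive'."""
    fxx = (f(x + h, y) - 2 * f(x, y) + f(x - h, y)) / h ** 2
    fyy = (f(x, y + h) - 2 * f(x, y) + f(x, y - h)) / h ** 2
    fxy = (f(x + h, y + h) - f(x + h, y - h)
           - f(x - h, y + h) + f(x - h, y - h)) / (4 * h ** 2)
    D = fxx * fyy - fxy ** 2           # determinant of the Hessian
    if D > 1e-6:
        return "min" if fxx > 0 else "max"
    if D < -1e-6:
        return "saddle"
    return "inconclusive"

print(second_partial_test(lambda x, y: x ** 2 + y ** 2, 0, 0))   # min
print(second_partial_test(lambda x, y: x ** 2 - y ** 2, 0, 0))   # saddle
```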
Functions of many variables
For a function f of three or more variables, there is a generalization of the rule above. In this context, instead of examining the determinant of the Hessian matrix, one must look at the eigenvalues of the Hessian matrix at the critical point. The following test can be applied at any critical point a for which the Hessian matrix is invertible:
If the Hessian is positive definite (equivalently, has all eigenvalues positive) at a, then f attains a local minimum at a.
If the Hessian is negative definite (equivalently, has all eigenvalues negative) at a, then f attains a local maximum at a.
If the Hessian has both positive and negative eigenvalues then a is a saddle point for f (and in fact this is true even
|
https://en.wikipedia.org/wiki/SSLeay
|
SSLeay is an open-source SSL implementation. It was developed by Eric Andrew Young and Tim J. Hudson as an SSL 3.0 implementation using RC2 and RC4 encryption. The recommended pronunciation is to say each letter: s-s-l-e-a-y; the library was first developed by Eric A. Young ("eay"). SSLeay also included an implementation of DES from earlier work by Eric Young, which was believed to be the first open-source implementation of DES. Development of SSLeay unofficially mostly ended, and volunteers forked the project under the OpenSSL banner around December 1998, when Tim and Eric both commenced working for RSA Security in Australia.
SSLeay
SSLeay was developed by Eric A. Young, starting in 1995. Windows support was added by Tim J. Hudson. Patches to open source applications to support SSL using SSLeay were produced by Tim Hudson. Development by Young and Hudson ceased in 1998. The SSLeay library and codebase is licensed under its own SSLeay License, a form of free software license. The SSLeay License is a BSD-style open-source license, almost identical to a four-clause BSD license.
SSLeay supports X.509v3 certificates and PKCS#10 certificate requests. It supports SSL2, SSL3, and TLSv1.
The first secure FTP implementation was created under BSD using SSLeay by Tim Hudson.
The first open source Certifying Authority implementation was created with CGI scripts using SSLeay by Clifford Heath.
Forks
OpenSSL is a fork and successor project to SSLeay and has a similar interface to it. After Young and Hudson joined RSA Corporation, volunteers forked SSLeay and continued development as OpenSSL.
BSAFE SSL-C is a fork of SSLeay developed by Eric A. Young and Tim J. Hudson for RSA Corporation. It was released as part of BSAFE SSL-C.
References
External links
SSLeay Documentation Archive
SSLapps Notes
See also
GnuTLS
OpenSSL, a major fork of SSLeay
LibreSSL, a major fork of OpenSSL
wolfSSL
Cryptographic software
Transport Layer Security implementation
|
https://en.wikipedia.org/wiki/Polynomial%20transformation
|
In mathematics, a polynomial transformation consists of computing the polynomial whose roots are a given function of the roots of a polynomial. Polynomial transformations such as Tschirnhaus transformations are often used to simplify the solution of algebraic equations.
Simple examples
Translating the roots
Let
p(x) = a_n x^n + a_{n-1} x^{n-1} + ... + a_1 x + a_0
be a polynomial, and
z_1, ..., z_n
be its complex roots (not necessarily distinct).
For any constant c, the polynomial whose roots are
z_1 + c, ..., z_n + c
is
q(x) = p(x - c).
If the coefficients of p are integers and the constant c = u/v is a rational number, the coefficients of q may not be integers, but the polynomial v^n q(x) has integer coefficients and has the same roots as q.
A special case is when c = a_{n-1}/(n a_n): the resulting polynomial q does not have any term in x^{n-1}.
Reciprocals of the roots
Let
p(x) = a_n x^n + a_{n-1} x^{n-1} + ... + a_1 x + a_0
be a polynomial. The polynomial whose roots are the reciprocals of the roots of p is its reciprocal polynomial
x^n p(1/x) = a_0 x^n + a_1 x^{n-1} + ... + a_n.
Scaling the roots
Let
p(x) = a_n x^n + a_{n-1} x^{n-1} + ... + a_1 x + a_0
be a polynomial, and k be a non-zero constant. A polynomial whose roots are the products by k of the roots of p is
q(x) = k^n p(x/k) = a_n x^n + k a_{n-1} x^{n-1} + ... + k^n a_0.
The factor k^n appears here because, if k and the coefficients of p are integers or belong to some integral domain, the same is true for the coefficients of q.
In the special case where k = a_n, all coefficients of q are multiples of a_n, and q/a_n is a monic polynomial whose coefficients belong to any integral domain containing a_n and the coefficients of p. This polynomial transformation is often used to reduce questions on algebraic numbers to questions on algebraic integers.
Combining this with a translation of the roots allows one to reduce any question on the roots of a polynomial, such as root-finding, to a similar question on a simpler polynomial, which is monic and does not have a term of degree n - 1. For examples of this, see Cubic function § Reduction to a depressed cubic or Quartic function § Converting to a depressed quartic.
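As a numerical sanity check of the translation and scaling transformations above, here is a sketch using NumPy's polynomial module; the polynomial and the constants c and k are arbitrary example values, and composing p with x - c is done by evaluating p at the polynomial x - c:

```python
import numpy as np
from numpy.polynomial import Polynomial

# p(x) = x^2 - 3x + 2, roots 1 and 2 (coefficients in ascending order)
p = Polynomial([2, -3, 1])
n = p.degree()

# Translating the roots by c: the polynomial with roots z_i + c is q(x) = p(x - c)
c = 1
q = p(Polynomial([-c, 1]))             # composition: q(x) = p(x - 1)

# Scaling the roots by k: the polynomial with roots k*z_i is k^n * p(x/k)
k = 2
scaled = k**n * p(Polynomial([0, 1 / k]))

print(q.coef)       # coefficients of x^2 - 5x + 6, whose roots are 2 and 3
print(scaled.coef)  # coefficients of x^2 - 6x + 8, whose roots are 2 and 4
```

The `k**n` factor keeps the scaled polynomial's coefficients integral, matching the discussion above.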
Transformation by a rational function
All preceding examples are polynomial transformations by a rational function, also called Tschirnhaus transformations. Let
be
|
https://en.wikipedia.org/wiki/Transfigurations
|
"Transfigurations" is the 25th episode of the third season of the American science fiction television series Star Trek: The Next Generation, and the 73rd episode of the series overall.
Set in the 24th century, the series follows the adventures of the Starfleet crew of the Federation starship Enterprise-D. In this episode, the Enterprise rescues a critically injured amnesiac who is undergoing a mysterious transformation.
Plot
The Enterprise discovers a crashed escape pod in an unexplored star system. Investigating, they find there is one critically injured passenger in the pod, and the crew brings him aboard the ship. Dr. Crusher determines the survivor will live due to the stranger's own amazing recuperative powers. Crusher also notes that the survivor's cells are mutating in some way.
A couple of days later the stranger finally awakens, but has no memory of his life or identity. The crew decides to call him "John Doe". Some time passes and John has recovered physically, but still has amnesia. In addition, from time to time he suffers from severe pain which is somehow tied to his ongoing mutation. He also begins emitting strange, bright energy bursts. John soon learns that he is able to use this energy to heal injuries, as witnessed by Crusher when he aids an injured O'Brien in her Sickbay.
In the meantime, Geordi La Forge has determined the pod the Enterprise discovered was a kind of storage device. Geordi is also able to interpret a star chart and find the location of John's home planet. However, John's memory has begun to return, and he senses that he must not go back to his home planet yet. A day or so later, a vessel intercepts the Enterprise, and John declares he has to leave. He tries to steal a shuttle and an energy burst accidentally knocks Lieutenant Worf from a walkway, resulting in a fatal fall to the floor below due to a broken neck. John then uses his healing powers to revive Worf and heal his injuries. Prevented from escaping, John explains that h
|
https://en.wikipedia.org/wiki/Diffusion%20flame
|
In combustion, a diffusion flame is a flame in which the oxidizer and fuel are separated before burning. Contrary to its name, a diffusion flame involves both diffusion and convection processes. The name diffusion flame was first suggested by S.P. Burke and T.E.W. Schumann in 1928, to differentiate from premixed flame where fuel and oxidizer are premixed prior to burning. The diffusion flame is also referred to as nonpremixed flame. The burning rate is however still limited by the rate of diffusion. Diffusion flames tend to burn slower and to produce more soot than premixed flames because there may not be sufficient oxidizer for the reaction to go to completion, although there are some exceptions to the rule. The soot typically produced in a diffusion flame becomes incandescent from the heat of the flame and lends the flame its readily identifiable orange-yellow color. Diffusion flames tend to have a less-localized flame front than premixed flames.
The contexts for diffusion may vary somewhat. For instance, a candle uses the heat of the flame itself to vaporize its wax fuel and the oxidizer (oxygen) diffuses into the flame from the surrounding air, while a gaslight flame (or the safety flame of a Bunsen burner) uses fuel already in the form of a vapor.
Diffusion flames are often studied in counter flow (also called opposed jet) burners. Their interest is due to possible application in the flamelet model for turbulent combustion. Furthermore they provide a convenient way to examine strained flames and flames with holes. These are also known under the name of "edge flames", characterized by a local extinction on their axis because of the high strain rates in the vicinity of the stagnation point.
Diffusion flames have an entirely different appearance in a microgravity environment. There is no convection to carry the hot combustion products away from the fuel source, which results in a spherical flame front, such as in the candle seen here. This is a rare example of
|
https://en.wikipedia.org/wiki/Distribution%20uniformity
|
Distribution uniformity or DU in irrigation is a measure of how uniformly water is applied to the area being watered, normally expressed as a percentage, and not to be confused with efficiency. The distribution uniformity is often calculated when performing an irrigation audit. The DU should not be confused with the coefficient of uniformity (CU), which is often preferred for describing the performance of overhead pressurized systems.
The most common measure of DU is the low-quarter DU, expressed as DUlq: the average of the lowest quarter of samples divided by the average of all samples, expressed as a percentage. The higher the DUlq, the more uniform the coverage of the area measured. If all samples are equal, the DUlq is 1.0 (100%). There is no universal value of DUlq for satisfactory system performance; a value above 0.80 (80%) is considered above average.
Distribution uniformity may be helpful as a starting point for irrigation scheduling. For example, an irrigator might want to apply not less than one inch of water to the area being watered. If the DU were 75% (0.75), then the total amount to be applied would be the desired amount of water, divided by the DU. In this case, the required irrigation would be 1.33 inches of water, so that only a very small area received less than one inch. The lower the DU, the less uniform the distribution at the plane of data collection and the more water that may be needed to meet the minimum requirement.
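The DUlq calculation and the scheduling adjustment described above can be sketched in a few lines of Python; the catch-can depths below are hypothetical example data:

```python
def du_lq(samples):
    """Low-quarter distribution uniformity: the average of the lowest
    quarter of samples divided by the average of all samples."""
    s = sorted(samples)
    n = max(1, len(s) // 4)              # size of the lowest quarter
    low_quarter_avg = sum(s[:n]) / n
    overall_avg = sum(s) / len(s)
    return low_quarter_avg / overall_avg

# Hypothetical catch-can depths in inches from an irrigation audit
catches = [0.9, 1.0, 1.1, 1.2, 0.8, 1.0, 1.05, 0.95]
du = du_lq(catches)                      # 0.85 for this data

# Gross application needed so the lowest quarter still averages ~1 inch
target_inches = 1.0
required = target_inches / du            # about 1.18 inches here
```

Dividing the target depth by the DU, as in the 1.00 / 0.75 = 1.33 inch example in the text, gives the gross application that keeps under-watered areas near the minimum.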
Catchments are commonly used to determine sprinkler DU, but note that data collection most often occurs above grade and above the root zone where plant uptake normally occurs. Many factors may affect water distribution or redistribution between the catchment plane and the root zone: slope, plant canopy, thatch, mulch, infiltration rate, etc. Soil type and root horizon may nullify the need for high-DUlq sprinklers.
Low sprinkler DUlq does not guarantee inefficiency, nor does high DUlq guara
|
https://en.wikipedia.org/wiki/Erosion%20control
|
Erosion control is the practice of preventing or controlling wind or water erosion in agriculture, land development, coastal areas, river banks and construction. Effective erosion controls handle surface runoff and are important techniques in preventing water pollution, soil loss, wildlife habitat loss and human property loss.
Usage
Erosion controls are used in natural areas, agricultural settings or urban environments. In urban areas erosion controls are often part of stormwater runoff management programs required by local governments. The controls often involve the creation of a physical barrier, such as vegetation or rock, to absorb some of the energy of the wind or water that is causing the erosion. They also involve building and maintaining storm drains. On construction sites they are often implemented in conjunction with sediment controls such as sediment basins and silt fences.
Bank erosion is a natural process: without it, rivers would not meander and change course. However, land management patterns that change the hydrograph and/or vegetation cover can act to increase or decrease channel migration rates. In many places, whether or not the banks are unstable due to human activities, people try to keep a river in a single place. This can be done for environmental reclamation or to prevent a river from changing course into land that is being used by people. One way that this is done is by placing riprap or gabions along the bank.
Examples
Examples of erosion control methods include the following:
cellular confinement systems
crop rotation
conservation tillage
contour plowing
contour trenching
cover crops
fiber rolls (also called straw wattles)
gabions
hydroseeding
level spreaders
mulching
perennial crops
plasticulture
polyacrylamide (as a coagulant)
reforestation
riparian buffer
riprap
strip farming
sand fence
vegetated waterway (bioswale)
terracing
windbreaks
Mathematical modeling
Since the 1920s and 1930s scientists have been creating mathematical mode
|
https://en.wikipedia.org/wiki/PGPDisk
|
PGP Virtual Disk is a disk encryption system that allows one to create a virtual encrypted disk within a file.
Older versions for Windows NT were freeware (for example, bundled with PGP v6.0.2i; and with some of the CKT builds of PGP). These are still available for download, but no longer maintained. Today, PGP Virtual Disk is available as part of the PGP Desktop product family, running on Windows 2000/XP/Vista, and Mac OS X.
See also
Disk encryption software
Comparison of disk encryption software
United States v. Boucher – federal criminal case involving PGPDisk-protected data
Cryptographic software
Disk encryption
|
https://en.wikipedia.org/wiki/Endemic%20Bird%20Areas%20of%20the%20World
|
Endemic Bird Areas of the World: Priorities for Biodiversity Conservation represents an effort to document in detail the endemic biodiversity conservation importance of the world's Endemic Bird Areas.
The authors are Alison J. Stattersfield, Michael J. Crosby, Adrian J. Long, and David C. Wege, with a foreword by Queen Noor of Jordan. Endemic Bird Areas of the World: Priorities for Biodiversity Conservation contains 846 pages, and is a 1998 publication by Birdlife International, No. 7 in their Birdlife Conservation Series.
Six Introductory Sections
The book has six introductory sections:
"Biodiversity and Priority setting"
"Identifying Endemic Bird Areas"
"Global Analyses"
"The Prioritization of Endemic Bird Areas"
"The Conservation Relevance of Endemic Bird Areas"
"Endemic Bird Areas as Targets for Conservation Action"
Six Regional Introductions
These are then followed by six Regional Introductions, in which Endemic Bird Areas are grouped into six major regions:
North and Central America
South America
Africa, Europe, and the Middle East
Continental Asia
South-east Asian Islands, New Guinea and Australia
Pacific Islands
Endemic Bird Areas
The bulk of the book consists of accounts of each of the 218 Endemic Bird Areas. Each account contains the following information:
summary statistics about the EBA
A "General Characteristics" section
A section giving an overview of the restricted-range endemic bird species found in the EBA
A Threats and Conservation section describing the threats posed to the EBA's biodiversity interest, and any significant measures which are in place to counter these
An annotated list of the restricted-range endemics found in the EBA
Secondary Bird Areas
The book concludes with a short section giving brief details of 138 secondary areas, again grouped into the six regions.
Details
Endemic Bird Areas of the World: Priorities for Biodiversity Conservation follows on from work presented in the 1992 publication Putting biodiver
|
https://en.wikipedia.org/wiki/WNYZ-LD
|
WNYZ-LD is a low-power television station in New York City, owned by K Media. It broadcasts on VHF channel 6 and is commonly known as an "FM6 operation" because the audio portion of the signal lies at 87.75 MHz, receivable by analog FM radios tuned to that frequency. Throughout its existence, the station has operated more like a radio station than a television station. WNYZ-LD broadcasts video, usually silent films, repeated throughout the day to fulfill the Federal Communications Commission (FCC) requirement that video be broadcast on the licensed frequency. The station airs this programming without commercials, while viewers hear the audio of WWRU out of Jersey City, New Jersey.
History
As W33BS
The station originated in 1987 and first signed on in 1998 as W33BS in Darien, Connecticut, broadcasting on UHF channel 33.
As WNYZ-LP
The station was moved to VHF channel 6 in 2003 and the call sign was changed to WNYZ-LP. At that time the station was re-licensed to New York City. The station's original owner, Reverend Dr. Carrie L. Thomas, sold the station to the now-defunct Island Broadcasting Company after its transition to channel 6. The new owner dropped the religious format and began operating WNYZ as an FM radio station. Because the New York City FM dial is significantly crowded (the market had not added a station to the FM band since 1985), this rather unconventional work-around effectively extended the available FM band in the city. The audio programming broadcast over WNYZ was originally Russian pop music, and the station was branded as "Radio Everything".
Brief digital operation
In November 2008, Island Broadcasting installed an Axcera DT325B digital VHF transmitter with the Axciter/Bandwidth Enhancement Technology (BET) option, which permitted WNYZ-LP to simultaneously transmit a single 480i SD digital stream using virtual channel 1.1, along with the analog audio carrier on 87.75 MHz. This allowed the station to serve both its radio and television
|
https://en.wikipedia.org/wiki/SAGE%20KE
|
The Science of Aging Knowledge Environment (SAGE KE) was an online scientific resource provided by the American Association for the Advancement of Science (AAAS).
History and organization
The American Association for the Advancement of Science established a collaboration with Stanford University Libraries and The Center for Resource Economics/Island Press (Island Press) in 1996 to find means to utilize internet-based technologies to enhance access to scientific information and improve the effectiveness of information transfer. The collaborative coined the term Knowledge Environment (KE) to describe the collection of electronic networking tools they were seeking to develop.
SAGE KE was the third in a series of Knowledge Environments developed by Science and AAAS, after the Signal Transduction Knowledge Environment (STKE) and AIDScience. Funding for SAGE KE came from The Ellison Medical Foundation, founded and supported by Oracle Corporation CEO Larry Ellison.
SAGE KE published its final issue on 28 June 2006 due to lack of funding. The interactive content was discontinued during the summer of 2006, leaving the SAGE KE site as an archive by August 2006.
Activities
The focus of SAGE KE was to provide timely access to information about advances on basic mechanisms of aging and age-related diseases through the internet, to provide searchable databases of information on aging and to provide an active environment in which biogerontologists could share and debate their understandings.
Ouroboros
Ouroboros is a WordPress community weblog devoted to research in the biology of aging. It was established in July 2006 in reaction to the termination of the SAGE KE. The primary mission of the site is to provide timely commentary and review of recently published articles in the scholarly literature, either directly or indirectly related to aging. Articles on the site discuss a range of scientific topics, including Alzheimer's disease, bioinformatics, calorie restriction, regul
|
https://en.wikipedia.org/wiki/Electronic%20waste
|
Electronic waste or e-waste describes discarded electrical or electronic devices. It is also commonly known as waste electrical and electronic equipment (WEEE) or end-of-life (EOL) electronics. Used electronics which are destined for refurbishment, reuse, resale, salvage recycling through material recovery, or disposal are also considered e-waste. Informal processing of e-waste in developing countries can lead to adverse human health effects and environmental pollution. The growing consumption of electronic goods due to the Digital Revolution and innovations in science and technology, such as bitcoin, has led to a global e-waste problem and hazard. The rapid exponential increase of e-waste is due to frequent new model releases and unnecessary purchases of electrical and electronic equipment (EEE), short innovation cycles and low recycling rates, and a drop in the average life span of computers.
Electronic scrap components, such as CPUs, contain potentially harmful materials such as lead, cadmium, beryllium, or brominated flame retardants. Recycling and disposal of e-waste may involve significant risk to the health of workers and their communities.
Definition
E-waste or electronic waste is created when an electronic product is discarded after the end of its useful life. The rapid expansion of technology and the consumption driven society results in the creation of a very large amount of e-waste.
In the US, the United States Environmental Protection Agency (EPA) classifies e-waste into ten categories:
Large household appliances, including cooling and freezing appliances
Small household appliances
IT equipment, including monitors
Consumer electronics, including televisions
Lamps and luminaires
Toys
Tools
Medical devices
Monitoring and control instruments and
Automatic dispensers
These include used electronics which are destined for reuse, resale, salvage, recycling, or disposal as well as re-usables (working and repairable electronics) and secondary ra
|
https://en.wikipedia.org/wiki/Jefimenko%27s%20equations
|
In electromagnetism, Jefimenko's equations (named after Oleg D. Jefimenko) give the electric field and magnetic field due to a distribution of electric charges and electric current in space, taking into account the propagation delay (retarded time) of the fields due to the finite speed of light and relativistic effects. Therefore, they can be used for moving charges and currents. They are the particular solutions to Maxwell's equations for any arbitrary distribution of charges and currents.
Equations
Electric and magnetic fields
Jefimenko's equations give the electric field E and magnetic field B produced by an arbitrary charge or current distribution, of charge density ρ and current density J:

$$\mathbf{E}(\mathbf{r},t)=\frac{1}{4\pi\varepsilon_0}\int\left[\left(\frac{\rho(\mathbf{r}',t_r)}{|\mathbf{r}-\mathbf{r}'|^3}+\frac{1}{|\mathbf{r}-\mathbf{r}'|^2\,c}\frac{\partial\rho(\mathbf{r}',t_r)}{\partial t}\right)(\mathbf{r}-\mathbf{r}')-\frac{1}{|\mathbf{r}-\mathbf{r}'|\,c^2}\frac{\partial\mathbf{J}(\mathbf{r}',t_r)}{\partial t}\right]\mathrm{d}^3\mathbf{r}'$$

$$\mathbf{B}(\mathbf{r},t)=\frac{\mu_0}{4\pi}\int\left[\frac{\mathbf{J}(\mathbf{r}',t_r)}{|\mathbf{r}-\mathbf{r}'|^3}+\frac{1}{|\mathbf{r}-\mathbf{r}'|^2\,c}\frac{\partial\mathbf{J}(\mathbf{r}',t_r)}{\partial t}\right]\times(\mathbf{r}-\mathbf{r}')\,\mathrm{d}^3\mathbf{r}'$$

where r′ is a point in the charge distribution, r is a point in space, and

$$t_r=t-\frac{|\mathbf{r}-\mathbf{r}'|}{c}$$

is the retarded time. There are similar expressions for D and H.
These equations are the time-dependent generalization of Coulomb's law and the Biot–Savart law to electrodynamics, which were originally true only for electrostatic and magnetostatic fields, and steady currents.
Origin from retarded potentials
Jefimenko's equations can be found from the retarded potentials φ and A:

$$\varphi(\mathbf{r},t)=\frac{1}{4\pi\varepsilon_0}\int\frac{\rho(\mathbf{r}',t_r)}{|\mathbf{r}-\mathbf{r}'|}\,\mathrm{d}^3\mathbf{r}'\,,\qquad\mathbf{A}(\mathbf{r},t)=\frac{\mu_0}{4\pi}\int\frac{\mathbf{J}(\mathbf{r}',t_r)}{|\mathbf{r}-\mathbf{r}'|}\,\mathrm{d}^3\mathbf{r}'$$

which are the solutions to Maxwell's equations in the potential formulation, then substituting in the definitions of the electromagnetic potentials themselves:

$$\mathbf{E}=-\nabla\varphi-\frac{\partial\mathbf{A}}{\partial t}\,,\qquad\mathbf{B}=\nabla\times\mathbf{A}$$

and using the relation

$$c^2=\frac{1}{\varepsilon_0\mu_0}$$

replaces the potentials φ and A by the fields E and B.
Heaviside–Feynman formula
The Heaviside–Feynman formula, also known as the Jefimenko–Feynman formula, can be seen as the point-like electric charge version of Jefimenko's equations. Actually, it can be (non-trivially) deduced from them using Dirac delta functions, or using the Liénard–Wiechert potentials. It is mostly known from The Feynman Lectures on Physics, where it was used to introduce and describe the origin of electromagnetic radiation. The formula provides a natural generalization of Coulomb's law for cases where the source charge is moving:
Here, and are the electri
|
https://en.wikipedia.org/wiki/Brendan%20McKay%20%28mathematician%29
|
Brendan Damien McKay (born 26 October 1951 in Melbourne, Australia) is an Australian computer scientist and mathematician. He is currently an Emeritus Professor in the Research School of Computer Science at the Australian National University (ANU). He has published extensively in combinatorics.
McKay received a Ph.D. in mathematics from the University of Melbourne in 1980, and was appointed Assistant Professor of Computer Science at Vanderbilt University, Nashville in the same year (1980–1983). His thesis, Topics in Computational Graph Theory, was written under the direction of Derek Holton. He was awarded the Australian Mathematical Society Medal in 1990. He was elected a Fellow of the Australian Academy of Science in 1997, and appointed Professor of Computer Science at the ANU in 2000.
Mathematics
McKay is the author of at least 127 refereed articles.
One of McKay's main contributions has been a practical algorithm for the graph isomorphism problem and its software implementation NAUTY (No AUTomorphisms, Yes?). Further achievements include proving with Stanisław Radziszowski that the Ramsey number R(4,5) = 25, proving with Radziszowski that no 4-(12, 6, 6) combinatorial designs exist, determining with Gunnar Brinkmann the number of posets on 16 points, and determining with Ian Wanless the number of Latin squares of size 11. Together with Brinkmann, he also developed the plantri program for generating planar triangulations and planar cubic graphs.
The McKay–Miller–Širáň graphs, a class of highly-symmetric graphs with diameter two and many vertices relative to their degree, are named in part for McKay, who first wrote about them with Mirka Miller and Jozef Širáň in 1998.
Biblical cyphers
Outside of his specialty, McKay is best known for his collaborative work with a group of Israeli mathematicians such as Dror Bar-Natan and Gil Kalai, together with Maya Bar-Hillel, who rebutted a Bible code theory which maintained that the Hebrew text of the Bible enciphered
|
https://en.wikipedia.org/wiki/Tutte%20polynomial
|
The Tutte polynomial, also called the dichromate or the Tutte–Whitney polynomial, is a graph polynomial. It is a polynomial in two variables which plays an important role in graph theory. It is defined for every undirected graph and contains information about how the graph is connected. It is denoted by .
The importance of this polynomial stems from the information it contains about . Though originally studied in algebraic graph theory as a generalization of counting problems related to graph coloring and nowhere-zero flow, it contains several famous other specializations from other sciences such as the Jones polynomial from knot theory and the partition functions of the Potts model from statistical physics. It is also the source of several central computational problems in theoretical computer science.
The Tutte polynomial has several equivalent definitions. It is essentially equivalent to Whitney’s rank polynomial, Tutte’s own dichromatic polynomial and Fortuin–Kasteleyn’s random cluster model under simple transformations. It is essentially a generating function for the number of edge sets of a given size and connected components, with immediate generalizations to matroids. It is also the most general graph invariant that can be defined by a deletion–contraction recurrence. Several textbooks about graph theory and matroid theory devote entire chapters to it.
Definitions
Definition. For an undirected graph G = (V, E) one may define the Tutte polynomial as

$$T_G(x,y)=\sum_{A\subseteq E}(x-1)^{k(A)-k(E)}\,(y-1)^{k(A)+|A|-|V|},$$

where k(A) denotes the number of connected components of the graph (V, A). In this definition it is clear that T_G is well-defined and a polynomial in x and y.
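For small graphs, the defining sum over edge subsets can be evaluated directly. The sketch below is pure Python with a minimal union–find for counting components; it is illustrative only, since practical computation relies on the deletion–contraction recurrence or specialized software:

```python
from itertools import combinations

def tutte(vertices, edges, x, y):
    """Evaluate the Tutte polynomial T_G(x, y) by the subset expansion:
    sum over A subseteq E of (x-1)^(k(A)-k(E)) * (y-1)^(k(A)+|A|-|V|),
    where k(A) is the number of connected components of (V, A)."""
    def k(subset):
        # count components of (V, subset) with a small union-find
        parent = {v: v for v in vertices}
        def find(v):
            while parent[v] != v:
                parent[v] = parent[parent[v]]
                v = parent[v]
            return v
        for u, w in subset:
            parent[find(u)] = find(w)
        return len({find(v) for v in vertices})

    k_e = k(edges)
    total = 0
    for r in range(len(edges) + 1):
        for a in combinations(edges, r):
            ka = k(a)
            total += (x - 1) ** (ka - k_e) * (y - 1) ** (ka + len(a) - len(vertices))
    return total

# Triangle K3, whose Tutte polynomial is x^2 + x + y
triangle_v = [0, 1, 2]
triangle_e = [(0, 1), (1, 2), (0, 2)]
print(tutte(triangle_v, triangle_e, 1, 1))  # spanning trees of K3: 3
```

Classical evaluations give easy checks: T(1,1) counts spanning trees, T(2,1) counts forests, and T(2,2) = 2^|E|.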
The same definition can be given using slightly different notation by letting r(A) = |V| − k(A) denote the rank of the graph (V, A). Then the Whitney rank generating function is defined as

$$R_G(u,v)=\sum_{A\subseteq E}u^{\,r(E)-r(A)}\,v^{\,|A|-r(A)}.$$

The two functions are equivalent under a simple change of variables:

$$T_G(x,y)=R_G(x-1,\,y-1).$$

Tutte's dichromatic polynomial is the result of another simple transformation:

$$T_G(x,y)=(x-1)^{-k(E)}\,Q_G(x-1,\,y-1),\qquad Q_G(u,v)=\sum_{A\subseteq E}u^{\,k(A)}\,v^{\,|A|+k(A)-|V|}.$$
Tutte’s original definition of is equivalent but less easil
|
https://en.wikipedia.org/wiki/Zbus
|
The Z matrix, or bus impedance matrix, is an important tool in power system analysis. Though it is not frequently used in power flow studies, unlike the Ybus matrix, it is an important tool in other power system studies such as short circuit analysis or fault studies.
The Zbus matrix can be computed by matrix inversion of the Ybus matrix. Since the Ybus matrix is usually sparse, the explicit Zbus matrix would be dense and very memory intensive to handle directly.
Context
Electric power transmission networks are too complex to optimize by hand; only computer simulation allows the required analysis, and the Zbus matrix is an important tool in that toolbox.
Formulation
The Z matrix can be formed either by inverting the Ybus matrix or by using the Zbus building algorithm. The latter method is harder to implement but more practical and faster (in terms of computer run time and number of floating-point operations) for a relatively large system.
Formulation:

Zbus = Ybus^(-1)
Because the Zbus is the inverse of the Ybus, it is symmetrical like the Ybus. The diagonal elements of the Zbus are referred to as driving-point impedances of the buses and the off-diagonal elements are called transfer impedances.
One reason the Ybus is so much more popular in calculation is the matrix becomes sparse for large systems; that is, many elements go to zero as the admittance between two far away buses is very small. In the Zbus, however, the impedance between two far away buses becomes very large, so there are no zero elements, making computation much harder.
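As a sketch of the matrix-inversion approach (assuming NumPy; the three-bus per-unit admittance values are made up for illustration, and a shunt admittance to ground is included so that Ybus is nonsingular):

```python
import numpy as np

# Hypothetical 3-bus network: per-unit line admittances between bus pairs,
# plus one shunt admittance to ground (e.g. a generator) at bus 1.
y12, y13, y23 = 4 - 8j, 2 - 4j, 5 - 10j
y1g = 1 - 2j

# Ybus: diagonal entries are the sums of admittances connected to each bus;
# off-diagonal entries are the negated admittances between bus pairs.
Ybus = np.array([
    [y12 + y13 + y1g, -y12,       -y13      ],
    [-y12,             y12 + y23, -y23      ],
    [-y13,            -y23,        y13 + y23],
])

Zbus = np.linalg.inv(Ybus)   # dense bus impedance matrix

# Like Ybus, Zbus is symmetric; its diagonal holds the driving-point
# impedances and its off-diagonal entries the transfer impedances.
assert np.allclose(Zbus, Zbus.T)
assert not np.isclose(Zbus, 0).any()   # no zero entries, unlike sparse Ybus
```

The final assertion illustrates the point above: even for a small network, every entry of Zbus is nonzero, which is why explicit inversion scales poorly compared with the building algorithm.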
The operations to modify an existing Zbus are straightforward, and outlined in Table 1.
To create a Zbus matrix from scratch, we start by listing the equation for one branch, then add additional branches according to Table 1 until each bus is expressed in the matrix.
References
Electrical power control
|
https://en.wikipedia.org/wiki/Smart%20battery
|
A smart battery or a smart battery pack is a rechargeable battery pack with a built-in battery management system (BMS), usually designed for use in a portable computer such as a laptop. In addition to the usual positive and negative terminals, a smart battery has two or more terminals to connect to the BMS; typically the negative terminal is also used as BMS "ground". BMS interface examples are: SMBus, PMBus, EIA-232, EIA-485, and Local Interconnect Network.
Internally, a smart battery can measure voltage and current, and deduce charge level and SoH (State of Health) parameters, indicating the state of the cells. Externally, a smart battery can communicate with a smart battery charger and a "smart energy user" via the bus interface. A smart battery can demand that the charging stop, request charging, or demand that the smart energy user stop using power from this battery. There are standard specifications for smart batteries: Smart Battery System, MIPI BIF and many ad-hoc specifications.
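As a rough illustration of how a BMS deduces charge level from measured current, here is a minimal coulomb-counting sketch in Python; the pack capacity and current samples are hypothetical, and a real BMS also corrects for temperature, self-discharge, and cell aging:

```python
def coulomb_count(soc_init, capacity_ah, samples):
    """Track state of charge (SoC) by integrating measured current,
    as a BMS does. `samples` is a list of (current_amps, dt_hours)
    pairs; positive current means charging."""
    charge_ah = soc_init * capacity_ah
    for current, dt in samples:
        charge_ah += current * dt
        charge_ah = min(max(charge_ah, 0.0), capacity_ah)  # clamp to pack limits
    return charge_ah / capacity_ah

# Hypothetical 2.0 Ah pack at 50% SoC: charge at 1 A for 30 minutes,
# then discharge at 0.5 A for one hour, returning to 50%.
soc = coulomb_count(0.5, 2.0, [(1.0, 0.5), (-0.5, 1.0)])
```

The SoC figure computed this way is what the pack reports over its bus interface (e.g. the RelativeStateOfCharge value in SMBus-based smart battery designs).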
Charging
A smart battery charger is mainly a switch mode power supply (also known as high frequency charger) that has the ability to communicate with a smart battery pack's battery management system (BMS) in order to control and monitor the charging process. This communication may be by a standard bus such as CAN bus in automobiles or System Management Bus (SMBus) in computers. The charge process is controlled by the BMS and not by the charger, thus increasing security in the system. Not all chargers have this type of communication, which is commonly used for lithium batteries.
Besides the usual plus (positive) and minus (negative) terminals, a smart battery charger also has multiple terminals to connect to the smart battery pack's BMS. The Smart Battery System standard is commonly used to define this connection, which includes the data bus and the communications protocol between the charger and battery. There are other ad-hoc specifications also used.
Hardware
Smart battery c
|
https://en.wikipedia.org/wiki/Bottom%20type
|
In type theory, a theory within mathematical logic, the bottom type of a type system is the type that is a subtype of all other types.
Where such a type exists, it is often represented with the up tack (⊥) symbol.
When the bottom type is empty, a function whose return type is bottom cannot return any value, not even the lone value of a unit type. In such a language, the bottom type may therefore be known as the zero, void or never type. In the Curry–Howard correspondence, an empty type corresponds to falsity.
Computer science applications
In subtyping systems, the bottom type is a subtype of all types. It is dual to the top type, which spans all possible values in a system.
If a type system is sound, the bottom type is uninhabited and a term of bottom type represents a logical contradiction. In such systems, typically no distinction is drawn between the bottom type and the empty type, and the terms may be used interchangeably.
If the bottom type is inhabited, its terms typically correspond to error conditions such as undefined behavior, infinite recursion, or unrecoverable errors.
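For a concrete example, Python's typing.NoReturn (spelled typing.Never in newer versions) marks a function whose return type behaves like a bottom type:

```python
from typing import NoReturn

def fail(msg: str) -> NoReturn:
    """Return type is the bottom-like NoReturn: this function
    never returns normally, it always raises."""
    raise ValueError(msg)

def parse_sign(x: int) -> str:
    if x > 0:
        return "positive"
    if x < 0:
        return "negative"
    fail("zero has no sign")
    # a type checker knows fail() never returns, so this branch
    # needs no `return` statement to satisfy the declared `str` type
```

Because NoReturn is treated as a subtype of every type, a call to fail() is accepted wherever any value is expected, matching the raise-construct typing described by Pierce below.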
In Bounded Quantification with Bottom, Pierce says that "Bot" has many uses:
In a language with exceptions, a natural type for the raise construct is raise ∈ exception -> Bot, and similarly for other control structures. Intuitively, Bot here is the type of computations that do not return an answer.
Bot is useful in typing the "leaf nodes" of polymorphic data structures. For example, List(Bot) is a good type for nil.
Bot is a natural type for the "null pointer" value (a pointer which does not point to any object) of languages like Java: in Java, the null type is the universal subtype of reference types. null is the only value of the null type; and it can be cast to any reference type. However, the null type is not a bottom type as described above, it is not a subtype of int and other primitive types.
A type system including both Top and Bot seems to be a natural target for t
|
https://en.wikipedia.org/wiki/Top%20type
|
In mathematical logic and computer science, some type theories and type systems include a top type that is commonly denoted with top or the symbol ⊤. The top type is sometimes also called the universal type, or universal supertype, as all other types in the type system of interest are subtypes of it; in most cases it contains every possible object of the type system. It contrasts with the bottom type, or universal subtype, of which every other type is a supertype; the bottom type often contains no members at all.
Support in programming languages
Several typed programming languages provide explicit support for the top type.
In statically-typed languages, there are two different, often confused, concepts when discussing the top type.
A universal base class or other item at the top of a run time class hierarchy (often relevant in object-oriented programming) or type hierarchy; it is often possible to create objects with this (run time) type, or it could be found when one examines the type hierarchy programmatically, in languages that support it
A (compile time) static type in the code whose variables can be assigned any value (or a subset thereof, like any object pointer value), similar to dynamic typing
The first concept often implies the second, i.e., if a universal base class exists, then a variable that can point to an object of this class can also point to an object of any class. However, several languages have types in the second regard above (e.g., void * in C++, id in Objective-C, interface {} in Go), static types which variables can accept any object value, but which do not reflect real run time types that an object can have in the type system, so are not top types in the first regard.
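Python illustrates both senses at once: object is the universal base class at run time and also a static type whose variables accept any value. A small sketch:

```python
# `object` sits at the top of the class hierarchy (first sense), and a
# variable annotated `object` accepts any value (second sense).
values: list[object] = [42, "text", 3.14, [1, 2]]

# Every value is an instance of object...
assert all(isinstance(v, object) for v in values)

# ...but a variable of static type `object` supports only operations common
# to all types; recovering a specific type requires a runtime check.
def describe(v: object) -> str:
    if isinstance(v, int):
        return f"int: {v}"
    return f"other: {v!r}"

print(describe(42))      # int: 42
print(describe("text"))  # other: 'text'
```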
In dynamically-typed languages, the second concept does not exist (any value can be assigned to any variable anyway), so only the first (class hierarchy) is discussed. This article tries to stay with the first concept when discussing top types, but also mentio
|
https://en.wikipedia.org/wiki/Difference%20hierarchy
|
In set theory, a branch of mathematics, the difference hierarchy over a pointclass is a hierarchy of larger pointclasses
generated by taking differences of sets. If Γ is a pointclass, then the set of differences in Γ consists of the differences of two sets from Γ. In usual notation, this set is denoted by 2-Γ. The next level of the hierarchy is denoted by 3-Γ and consists of differences of three sets. This definition can be extended recursively into the transfinite to α-Γ for some ordinal α.
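In symbols, the first levels can be written as follows (a standard rendering; notation varies between authors):

```latex
2\text{-}\Gamma = \{\, A \setminus B \;:\; A, B \in \Gamma \,\}, \qquad
3\text{-}\Gamma = \{\, A \setminus (B \setminus C) \;:\; A, B, C \in \Gamma \,\}
```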
In the Borel hierarchy, Felix Hausdorff and Kazimierz Kuratowski proved that the countable levels of the difference hierarchy over Π^0_γ give exactly Δ^0_{γ+1}.
References
Descriptive set theory
Mathematical logic hierarchies
|
https://en.wikipedia.org/wiki/Abstract%20type
|
In programming languages, an abstract type (also known as an existential type) is a type in a nominative type system that cannot be instantiated directly; by contrast, a concrete type can be instantiated directly. Instantiation of an abstract type can occur only indirectly, via a concrete subtype.
An abstract type may provide no implementation, or an incomplete implementation. In some languages, abstract types with no implementation (rather than an incomplete implementation) are known as protocols, interfaces, signatures, or class types. In class-based object-oriented programming, abstract types are implemented as abstract classes (also known as abstract base classes), and concrete types as concrete classes. In generic programming, the analogous notion is a concept, which similarly specifies syntax and semantics, but does not require a subtype relationship: two unrelated types may satisfy the same concept.
Often, abstract types will have one or more implementations provided separately, for example, in the form of concrete subtypes that can be instantiated. In object-oriented programming, an abstract class may include abstract methods or abstract properties that are shared by its subclasses. Other names for language features that are (or may be) used to implement abstract types include traits, mixins, flavors, roles, or type classes.
Creation
Abstract classes can be created, signified, or simulated in several ways:
By use of an explicit abstract keyword in the class definition, as in Java, D, or C#.
By including, in the class definition, one or more abstract methods (called pure virtual functions in C++), which the class is declared to accept as part of its protocol, but for which no implementation is provided.
By inheriting from an abstract type, and not overriding all missing features necessary to complete the class definition. In other words, a child type that does not implement all abstract methods from its parent becomes abstract itself.
In many dynamically typed lan
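In Python, for instance, the second approach (abstract methods) is realized by the abc module; a minimal sketch:

```python
from abc import ABC, abstractmethod

class Shape(ABC):
    @abstractmethod
    def area(self) -> float:
        """Subclasses must provide an implementation."""

class Circle(Shape):  # a concrete subtype
    def __init__(self, r: float):
        self.r = r
    def area(self) -> float:
        return 3.14159 * self.r ** 2

try:
    Shape()  # abstract type: cannot be instantiated directly
except TypeError:
    print("Shape cannot be instantiated directly")

print(Circle(1.0).area())  # 3.14159
```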
|
https://en.wikipedia.org/wiki/Trait%20%28computer%20programming%29
|
In computer programming, a trait is a concept used in programming languages which represents a set of methods that can be used to extend the functionality of a class.
Rationale
In object-oriented programming, behavior is sometimes shared between classes which are not related to each other. For example, many unrelated classes may have methods to serialize objects to JSON. Historically, there have been several approaches to solve this without duplicating the code in every class needing the behavior. Such approaches include multiple inheritance and mixins, but these have drawbacks: the behavior of the code may unexpectedly change if the order in which the mixins are applied is altered, or if new methods are added to the parent classes or mixins.
Traits solve these problems by allowing classes to use the trait and get the desired behavior. If a class uses more than one trait, the order in which the traits are used does not matter. The methods provided by the traits have direct access to the data of the class.
Characteristics
Traits combine aspects of protocols (interfaces) and mixins. Like an interface, a trait defines one or more method signatures, of which implementing classes must provide implementations. Like a mixin, a trait provides additional behavior for the implementing class.
In case of a naming collision between methods provided by different traits, the programmer must explicitly disambiguate which one of those methods will be used in the class; thus manually solving the diamond problem of multiple inheritance. This is different from other composition methods in object-oriented programming, where conflicting names are automatically resolved by scoping rules.
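Python has no trait construct, but the explicit-disambiguation rule can be sketched with mixin-style classes, where the programmer resolves a name clash by hand (an illustrative sketch, not a real trait system):

```python
import json

class JsonSerializable:
    """Trait-like mixin: provides behavior, with direct access to the class's data."""
    def to_json(self) -> str:
        return json.dumps(self.__dict__)

class NameClashA:
    def label(self) -> str:
        return "A"

class NameClashB:
    def label(self) -> str:
        return "B"

class Point(JsonSerializable, NameClashA, NameClashB):
    def __init__(self, x, y):
        self.x, self.y = x, y
    # With traits, the clash between NameClashA.label and NameClashB.label
    # must be resolved explicitly rather than by inheritance order:
    label = NameClashA.label

p = Point(1, 2)
print(p.to_json())  # {"x": 1, "y": 2}
print(p.label())    # A
```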
Operations which can be performed with traits include:
symmetric sum: an operation that merges two disjoint traits to create a new trait
override (or asymmetric sum): an operation that forms a new trait by adding methods to an existing trait, possibly overriding some of its methods
alias: an oper
|
https://en.wikipedia.org/wiki/Ekiga
|
Ekiga (formerly called GnomeMeeting) is a VoIP and video conferencing application for GNOME and Microsoft Windows. It is distributed as free software under the terms of the GNU GPL-2.0-or-later. It was the default VoIP client in Ubuntu until October 2009, when it was replaced by Empathy. Ekiga supports both the SIP and H.323 (based on OPAL) protocols and is fully interoperable with any other SIP compliant application and with Microsoft NetMeeting. It supports many high-quality audio and video codecs.
Ekiga was initially written by Damien Sandras in order to graduate from the University of Louvain (UCLouvain). It is currently developed by a community-based team led by Sandras. The logo was designed based on his concept by Andreas Kwiatkowski.
Ekiga.net was also a free and private SIP registrar, which enabled its members to originate and terminate (receive) calls from and to each other directly over the Internet.
The service was discontinued at the end of 2018.
Features
Features of Ekiga include:
Integration
Ekiga is integrated with a number of different software packages and protocols, such as LDAP directory registration and browsing, support for Novell Evolution so that contacts are shared between both programs, and Zeroconf (Apple Bonjour) support. It auto-detects devices including USB, ALSA and legacy OSS sound cards, Video4Linux and FireWire cameras.
User interface
Ekiga offers a contact-list-based interface with presence support and custom messages. It allows monitoring of contacts and viewing call history, along with an address book, dialpad, and chat window. SIP URLs and H.323/callto support is built in, along with full-screen videoconferencing (accelerated using a graphics card).
Technical features
Call forwarding on busy, no answer, always (SIP and H.323)
Call transfer (SIP and H.323)
Call hold (SIP and H.323)
DTMF support (SIP and H.323)
Basic instant messaging (SIP)
Text chat (SIP and H.323)
Register with several regi
|
https://en.wikipedia.org/wiki/Minimum%20intelligent%20signal%20test
|
The minimum intelligent signal test, or MIST, is a variation of the Turing test proposed by Chris McKinstry in which only boolean (yes/no or true/false) answers may be given to questions. The purpose of such a test is to provide a quantitative statistical measure of humanness, which may subsequently be used to optimize the performance of artificial intelligence systems intended to imitate human responses.
McKinstry gathered approximately 80,000 propositions that could be answered yes or no, e.g.:
Is Earth a planet?
Was Abraham Lincoln once President of the United States?
Is the sun bigger than my foot?
Do people sometimes lie?
He called these propositions Mindpixels.
These questions test both specific knowledge of aspects of culture, and basic facts about the meaning of various words and concepts. It could therefore be compared with the SAT, intelligence testing and other controversial measures of mental ability. McKinstry's aim was not to distinguish between shades of intelligence but to identify whether a computer program could be considered intelligent at all.
According to McKinstry, a program able to do much better than chance on a large number of MIST questions would be judged to have some level of intelligence and understanding. For example, on a 20-question test, if a program were guessing the answers at random, it could be expected to score 10 correct on average. But the probability of a program scoring 20 out of 20 correct by guesswork is only one in 2^20, i.e. one in 1,048,576; so if a program were able to sustain this level of performance over several independent trials, with no prior access to the propositions, it should be considered intelligent.
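The arithmetic behind that argument is easy to check numerically; the binomial sum below generalizes the 20-out-of-20 case:

```python
from math import comb

def p_at_least(k: int, n: int = 20) -> float:
    """Probability of at least k correct out of n yes/no guesses made at random."""
    return sum(comb(n, i) for i in range(k, n + 1)) / 2 ** n

print(p_at_least(20))           # 9.5367431640625e-07 (one in 1,048,576)
print(round(p_at_least(10), 4)) # 0.5881 -- half or better is likely by chance
```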
Discussion
McKinstry criticized existing approaches to artificial intelligence such as chatterbots, saying that his questions could "kill" AI programs by quickly exposing their weaknesses. He contrasted his approach, a series of direct questions assessing an AI's capabilities, to the Turing test and
|
https://en.wikipedia.org/wiki/List%20of%20AMD%20graphics%20processing%20units
|
The following is a list that contains general information about GPUs and video cards by AMD, including those by ATI Technologies before 2006, based on official specifications in table-form.
Field explanations
The headers in the table listed below describe the following:
Model – The marketing name for the GPU assigned by AMD/ATI. Note that ATI trademarks have been replaced by AMD trademarks starting with the Radeon HD 6000 series for desktop and AMD FirePro series for professional graphics.
Codename – The internal engineering codename for the GPU.
Launch – Date of release for the GPU.
Architecture – The microarchitecture used by the GPU.
Fab – Fabrication process. Average feature size of components of the GPU.
Transistors – Number of transistors on the die.
Die size – Physical surface area of the die.
Core config – The layout of the graphics pipeline, in terms of functional units.
Core clock – The reference base and boost (if available) core clock frequency.
Fillrate
Pixel - The rate at which pixels can be rendered by the raster operators to a display. Measured in pixels/s.
Texture - The rate at which textures can be mapped by the texture mapping units onto a polygon mesh. Measured in texels/s.
Performance
Shader operations - How many operations the pixel shaders (or unified shaders in Direct3D 10 and newer GPUs) can perform. Measured in operations/s.
Vertex operations - The amount of geometry operations that can be processed on the vertex shaders in one second (only applies to Direct3D 9.0c and older GPUs). Measured in vertices/s.
Memory
Bus type – Type of memory bus utilized.
Bus width – Maximum bit width of the memory bus utilized.
Size – Size of the graphics memory.
Clock – The reference memory clock frequency.
Bandwidth – Maximum theoretical memory bandwidth based on bus type and width.
TDP (Thermal design power) – Maximum amount of heat generated by the GPU chip, measured in watts.
TBP (Typical board power) – Typical power drawn by the t
|
https://en.wikipedia.org/wiki/Coacervate
|
Coacervate is an aqueous phase rich in macromolecules such as synthetic polymers, proteins or nucleic acids. It forms through liquid-liquid phase separation (LLPS), leading to a dense phase in thermodynamic equilibrium with a dilute phase. The dispersed droplets of the dense phase are also called coacervates, micro-coacervates or coacervate droplets. These structures attract considerable interest because they form spontaneously from aqueous mixtures and provide stable compartmentalization without the need for a membrane.
The term coacervate was coined in 1929 by Dutch chemist Hendrik G. Bungenberg de Jong and Hugo R. Kruyt while studying lyophilic colloidal dispersions. The name is a reference to the clustering of colloidal particles, like bees in a swarm. The concept was later borrowed by Russian biologist Alexander I. Oparin to describe the proteinoid microspheres proposed to be primitive cells (protocells) on early Earth. Coacervate-like protocells are at the core of the Oparin-Haldane hypothesis.
A reawakening of coacervate research was seen in the 2000s, starting with the recognition in 2004 by scientists at the University of California, Santa Barbara (UCSB) that some marine invertebrates (such as the sandcastle worm) exploit complex coacervation to produce water-resistant biological adhesives. A few years later in 2009 the role of liquid-liquid phase separation was further recognized to be involved in the formation of certain membraneless organelles by the biophysicists Clifford Brangwynne and Tony Hyman. Liquid organelles share features with coacervate droplets and fueled the study of coacervates for biomimicry.
Thermodynamics
Coacervates are a type of lyophilic colloid; that is, the dense phase retains some of the original solvent – generally water – and does not collapse into solid aggregates, rather keeping a liquid property. Coacervates can be characterized as complex or simple based on the driving force for the LLPS: associative or segregative. Associative
|
https://en.wikipedia.org/wiki/FPD-Link
|
Flat Panel Display Link, more commonly referred to as FPD-Link, is the original high-speed digital video interface created in 1996 by National Semiconductor (now within Texas Instruments). It is a free and open standard for connecting the output from a graphics processing unit in a laptop, tablet computer, flat panel display, or LCD television to the display panel's timing controller.
Most laptops, tablet computers, flat-panel monitors, and TVs used the interface internally through 2010, when industry leaders AMD, Dell, Intel, Lenovo, LG, and Samsung together announced that they would be phasing out this interface by 2013 in favor of embedded DisplayPort (eDP).
FPD-Link and LVDS
FPD-Link was the first large-scale application of the low-voltage differential signaling (LVDS) standard. National Semiconductor immediately provided interoperability specifications for the FPD-Link technology in order to promote it as a free and open standard, and thus other IC suppliers were able to copy it. FlatLink by TI was the first interoperable version of FPD-Link.
By the end of the twentieth century, the major notebook computer manufacturers created the Standard Panels Working Group (SPWG) and made FPD-Link / FlatLink the standard for transferring graphics and video through the notebook's hinge.
Automotive and more applications
In automotive applications, FPD-Link is commonly used for navigation systems, in-car entertainment, and backup cameras, as well as other advanced driver-assistance systems.
The automotive environment is known to be one of the harshest for electronic equipment due to inherent extreme temperatures and electrical transients. In order to satisfy these stringent reliability requirements, the FPD-Link II and III chipsets meet or exceed the AEC-Q100 automotive reliability standard for integrated circuits, and the ISO 10605 standard for automotive ESD applications.
Another display interface based on FPD-Link is OpenLDI. It enables longer cable lengths becau
|
https://en.wikipedia.org/wiki/OpenLDI
|
OpenLDI (Open LVDS Display Interface) is a high-bandwidth digital-video interface standard for connecting graphics/video processors to flat panel LCD monitors. Even though its promoters originally designed it for the desktop computer-to-monitor application, the majority of applications today are industrial display connections. For example, displays in medical imaging, machine vision, and construction equipment use the OpenLDI chipsets.
OpenLDI is based on the FPD-Link specification, which was the de facto standard for transferring graphics and video data through notebook computer hinges since the late 1990s. Both OpenLDI and FPD-Link use low-voltage differential signaling (LVDS) as the physical layer signaling, and the three terms have mistakenly been used synonymously. (FPD-Link and OpenLDI are largely compatible, beyond the physical-layer; specifying the same serial data-streams).
The OpenLDI standard was promoted by National Semiconductor, Texas Instruments, Silicon Graphics (SGI) and others. OpenLDI wasn't used in many of the intended applications after losing the computer-to-monitor interconnect application to a competing standard, Digital Visual Interface (DVI).
The SGI 1600SW was the only monitor produced in significant quantities with an OpenLDI connection, though it had minor differences from the final published standards. The 1600SW used a 36-pin MDR36 male connector with a pinout that differs from that of the 36-pin centronics-style connector in the OpenLDI standard.
Sony produced some VAIO displays and laptops using the standard.
A few other displays were made by various manufacturers using the OpenLDI standard.
See also
VGA
References
External links
OpenLDI specification from National Semiconductor
1600SW MDR36 connector pinout
Digital display connectors
|
https://en.wikipedia.org/wiki/Radar%20altimeter
|
A radar altimeter (RA), also called a radio altimeter (RALT), electronic altimeter, reflection altimeter, or low-range radio altimeter (LRRA), measures altitude above the terrain presently beneath an aircraft or spacecraft by timing how long it takes a beam of radio waves to travel to ground, reflect, and return to the craft. This type of altimeter provides the distance between the antenna and the ground directly below it, in contrast to a barometric altimeter which provides the distance above a defined vertical datum, usually mean sea level.
Principle
As the name implies, radar (radio detection and ranging) is the underpinning principle of the system. The system transmits radio waves down to the ground and measures the time it takes them to be reflected back up to the aircraft. The altitude above the ground is calculated from the radio waves' travel time and the speed of light. Radar altimeters required a simple system for measuring the time-of-flight that could be displayed using conventional instruments, as opposed to a cathode ray tube normally used on early radar systems.
To do this, the transmitter sends a frequency modulated signal that changes in frequency over time, ramping up and down between two frequency limits, Fmin and Fmax over a given time, T. In the first units, this was accomplished using an LC tank with a tuning capacitor driven by a small electric motor. The output is then mixed with the radio frequency carrier signal and sent out the transmission antenna.
Since the signal takes some time to reach the ground and return, the frequency of the received signal is slightly delayed relative to the signal being sent out at that instant. The difference in these two frequencies can be extracted in a frequency mixer, and because the difference in the two signals is due to the delay reaching the ground and back, the resulting output frequency encodes the altitude. The output is typically on the order of hundreds of cycles per second, not megacycles, a
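Under the idealized linear-sweep model above, altitude follows from the beat frequency as h = c·f_beat·T / (2·(Fmax − Fmin)). A sketch with illustrative numbers (the 4.2–4.4 GHz band is typical of radio altimeters, but the sweep and beat values here are made up for the example):

```python
C = 299_792_458.0  # speed of light, m/s

def altitude_from_beat(f_beat_hz: float, f_min_hz: float, f_max_hz: float,
                       ramp_time_s: float) -> float:
    """Height from an FMCW beat frequency, assuming an ideal linear ramp
    from f_min to f_max over ramp_time."""
    sweep_rate = (f_max_hz - f_min_hz) / ramp_time_s  # Hz per second
    round_trip = f_beat_hz / sweep_rate               # round-trip delay, s
    return C * round_trip / 2                         # one-way distance, m

# Illustrative: 100 MHz sweep over 0.1 s; a 1 kHz beat then corresponds to:
h = altitude_from_beat(1000.0, 4.25e9, 4.35e9, 0.1)
print(round(h, 1))  # 149.9 (metres)
```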
|
https://en.wikipedia.org/wiki/Copeland%E2%80%93Erd%C5%91s%20constant
|
The Copeland–Erdős constant is the concatenation of "0." with the base 10 representations of the prime numbers in order. Its value, using the modern definition of prime, is approximately
0.235711131719232931374143… .
The constant is irrational; this can be proven with Dirichlet's theorem on arithmetic progressions or Bertrand's postulate (Hardy and Wright, p. 113) or Ramaré's theorem that every even integer is a sum of at most six primes. It also follows directly from its normality (see below).
By a similar argument, any constant created by concatenating "0." with all primes in an arithmetic progression dn + a, where a is coprime to d and to 10, will be irrational; for example, primes of the form 4n + 1 or 8n + 1. By Dirichlet's theorem, the arithmetic progression dn·10^m + a contains primes for all m, and those primes are also in dn + a, so the concatenated primes contain arbitrarily long sequences of the digit zero.
In base 10, the constant is a normal number, a fact proven by Arthur Herbert Copeland and Paul Erdős in 1946 (hence the name of the constant).
The constant is given by
where pn is the nth prime number.
Its continued fraction is [0; 4, 4, 8, 16, 18, 5, 1, …] ().
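The leading digits can be reproduced directly from the definition; a small self-contained sketch using trial division:

```python
def primes_below(limit: int):
    """Yield the primes below limit by trial division (fine for small limits)."""
    for n in range(2, limit):
        if all(n % d for d in range(2, int(n ** 0.5) + 1)):
            yield n

# Concatenate "0." with the base-10 representations of the primes in order.
constant = "0." + "".join(str(p) for p in primes_below(50))
print(constant)  # 0.23571113171923293137414347
```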
Related constants
Copeland and Erdős's proof that their constant is normal relies only on the fact that is strictly increasing and , where is the nth prime number. More generally, if is any strictly increasing sequence of natural numbers such that and is any natural number greater than or equal to 2, then the constant obtained by concatenating "0." with the base- representations of the 's is normal in base . For example, the sequence satisfies these conditions, so the constant 0.003712192634435363748597110122136… is normal in base 10, and 0.003101525354661104…7 is normal in base 7.
In any given base b the number
which can be written in base b as 0.0110101000101000101…b
where the nth digit is 1 if and only if n is prime, is irrational.
See also
Smarandache–Wellin numbers:
|
https://en.wikipedia.org/wiki/Ntoskrnl.exe
|
ntoskrnl.exe (short for Windows NT operating system kernel executable), also known as the kernel image, contains the kernel and executive layers of the Microsoft Windows NT kernel, and is responsible for hardware abstraction, process handling, and memory management. In addition to the kernel and executive mentioned earlier, it contains the cache manager, security reference monitor, memory manager, scheduler (Dispatcher), and blue screen of death (the prose and portions of the code).
Overview
x86 versions of ntoskrnl.exe depend on bootvid.dll, hal.dll and kdcom.dll (x64 variants of ntoskrnl.exe have these DLLs embedded into the kernel to increase performance). However, it is not a native application. In other words, it is not linked against ntdll.dll. Instead, ntoskrnl.exe contains a standard "start" entry point that calls the architecture-independent kernel initialization function. Because it requires a static copy of the C Runtime objects, the executable is usually about 10 MB in size.
In Windows XP and earlier, the Windows installation source ships four kernel image files to support uniprocessor systems, symmetric multiprocessor (SMP) systems, CPUs with PAE, and CPUs without PAE. Windows setup decides whether the system is uniprocessor or multiprocessor, then installs both the PAE and non-PAE variants of the kernel image for the decided kind. On a multiprocessor system, Setup installs ntkrnlmp.exe and ntkrpamp.exe but renames them to ntoskrnl.exe and ntkrnlpa.exe respectively.
Starting with Windows Vista, Microsoft began unifying the kernel images as multi-core CPUs took to the market and PAE became mandatory.
Routines in ntoskrnl use prefixes on their names to indicate in which component of ntoskrnl they are defined.
Since not all functions are exported by the kernel, function prefixes ending in i or p (such as Mi, Obp, Iop) mark internal routines that are not supposed to be accessed by the user. These functions contain the core code and implement important checks
|
https://en.wikipedia.org/wiki/Slip%20%28materials%20science%29
|
In materials science, slip is the large displacement of one part of a crystal relative to another part along crystallographic planes and directions. Slip occurs by the passage of dislocations on close-packed planes, which are planes containing the greatest number of atoms per area, and in close-packed directions (most atoms per length). Close-packed planes are known as slip or glide planes. A slip system describes the set of symmetrically identical slip planes and associated family of slip directions for which dislocation motion can easily occur and lead to plastic deformation. The magnitude and direction of slip are represented by the Burgers vector, b.
An external force makes parts of the crystal lattice glide along each other, changing the material's geometry. A critical resolved shear stress is required to initiate a slip.
Slip systems
Face centered cubic crystals
Slip in face centered cubic (fcc) crystals occurs along the close-packed plane. Specifically, the slip plane is of type {111} and the slip direction is of type <110>; a specific example is the (111) plane with a [-110] direction.
Given the permutations of the slip plane types and direction types, fcc crystals have 12 slip systems. In the fcc lattice, the norm of the Burgers vector, b, can be calculated using the following equation:
Where a is the lattice constant of the unit cell.
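For the fcc case, b = (a/2)<110>, so the norm works out to a/√2. A quick numerical check (aluminium's lattice constant, roughly 0.405 nm, is used purely as an illustrative value):

```python
from math import sqrt

def burgers_norm_fcc(a: float) -> float:
    """|b| for an fcc slip system: b = (a/2)<110>, so |b| = (a/2)*sqrt(1+1+0)."""
    h, k, l = 1, 1, 0
    return (a / 2) * sqrt(h * h + k * k + l * l)

a_al = 0.405  # nm, lattice constant of aluminium (illustrative value)
print(round(burgers_norm_fcc(a_al), 4))  # 0.2864 nm, i.e. a / sqrt(2)
```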
Body centered cubic crystals
Slip in body-centered cubic (bcc) crystals occurs along the plane of shortest Burgers vector as well; however, unlike fcc, there are no truly close-packed planes in the bcc crystal structure.
Thus, a slip system in bcc requires heat to activate.
Some bcc materials (e.g. α-Fe) can contain up to 48 slip systems.
There are six slip planes of type {110}, each with two <111> directions (12 systems). There are 24 {123} and 12 {112} planes each with one <111> direction (36 systems, for a total of 48). Although the number of possible slip systems is much higher in bcc cr
|
https://en.wikipedia.org/wiki/Taint%20checking
|
Taint checking is a feature in some computer programming languages, such as Perl, Ruby or Ballerina, designed to increase security by preventing malicious users from executing commands on a host computer. Taint checks highlight specific security risks primarily associated with web sites, which are attacked using techniques such as SQL injection or buffer overflow attacks.
Overview
The concept behind taint checking is that any variable that can be modified by an outside user (for example a variable set by a field in a web form) poses a potential security risk. If that variable is used in an expression that sets a second variable, that second variable is now also suspicious. The taint checking tool can then proceed variable by variable, forming a list of variables which are potentially influenced by outside input. If any of these variables is used to execute dangerous commands (such as direct commands to a SQL database or the host computer operating system), the taint checker warns that the program is using a potentially dangerous tainted variable. The computer programmer can then redesign the program to erect a safe wall around the dangerous input.
Taint checking may be viewed as a conservative approximation of the full verification of non-interference or the more general concept of secure information flow. Because information flow in a system cannot be verified by examining a single execution trace of that system, the results of taint analysis will necessarily reflect approximate information regarding the information flow characteristics of the system to which it is applied.
Example
The following dangerous Perl code opens a large SQL injection vulnerability by not checking the value of the $name variable:
#!/usr/bin/perl -T
use CGI;
use DBI;
my $cgi  = CGI->new;
my $name = $cgi->param("name"); # Get the name from the browser -- tainted input
...
$dbh->{TaintIn} = 1;            # Tell DBI to reject tainted data in SQL statements
$dbh->do("SELECT * FROM users WHERE name = '$name';"); # Interpolates the tainted value into an SQL query
If taint checking is turned on, Perl would refuse to run t
|
https://en.wikipedia.org/wiki/Pattern%20formation
|
The science of pattern formation deals with the visible, (statistically) orderly outcomes of self-organization and the common principles behind similar patterns in nature.
In developmental biology, pattern formation refers to the generation of complex organizations of cell fates in space and time. The role of genes in pattern formation is an aspect of morphogenesis, the creation of diverse anatomies from similar genes, now being explored in the science of evolutionary developmental biology or evo-devo. The mechanisms involved are well seen in the anterior-posterior patterning of embryos from the model organism Drosophila melanogaster (a fruit fly), one of the first organisms to have its morphogenesis studied, and in the eyespots of butterflies, whose development is a variant of the standard (fruit fly) mechanism.
Patterns in nature
Examples of pattern formation can be found in biology, physics, and chemistry, and can readily be simulated with computer graphics, as described in turn below.
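As a taste of such a simulation, here is a one-dimensional Gray–Scott reaction–diffusion sketch; the parameters are common demonstration values, not taken from any particular study:

```python
def gray_scott_1d(n=100, steps=1000, Du=0.16, Dv=0.08, F=0.035, k=0.060):
    """Minimal 1-D Gray-Scott reaction-diffusion model on a periodic domain,
    integrated with explicit Euler steps (dt = dx = 1)."""
    u = [1.0] * n
    v = [0.0] * n
    for i in range(n // 2 - 5, n // 2 + 5):  # a small perturbation seeds the pattern
        u[i], v[i] = 0.5, 0.25

    def lap(a, i):  # discrete Laplacian with periodic boundaries
        return a[(i - 1) % n] - 2 * a[i] + a[(i + 1) % n]

    for _ in range(steps):
        u_new = [u[i] + Du * lap(u, i) - u[i] * v[i] ** 2 + F * (1 - u[i])
                 for i in range(n)]
        v_new = [v[i] + Dv * lap(v, i) + u[i] * v[i] ** 2 - (F + k) * v[i]
                 for i in range(n)]
        u, v = u_new, v_new
    return u, v

u, v = gray_scott_1d()
# A spatially inhomogeneous final state (a "pattern") rather than a uniform one:
print(round(max(v) - min(v), 3))
```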
Biology
Biological patterns such as animal markings, the segmentation of animals, and phyllotaxis are formed in different ways.
In developmental biology, pattern formation describes the mechanism by which initially equivalent cells in a developing tissue in an embryo assume complex forms and functions. Embryogenesis, such as of the fruit fly Drosophila, involves coordinated control of cell fates. Pattern formation is genetically controlled, and often involves each cell in a field sensing and responding to its position along a morphogen gradient, followed by short distance cell-to-cell communication through cell signaling pathways to refine the initial pattern. In this context, a field of cells is the group of cells whose fates are affected by responding to the same set of positional information cues. This conceptual model was first described as the French flag model in the 1960s. More generally, the morphology of organisms is patterned by the mechanisms of evolutionary development
|
https://en.wikipedia.org/wiki/Anonymous%20post
|
An anonymous post is an entry on a textboard, anonymous bulletin board system, or other discussion forum such as an Internet forum, made without a screen name or, more commonly, using a non-identifiable pseudonym.
Some online forums such as Slashdot do not allow such posts, requiring users to be registered either under their real name or utilizing a pseudonym. Others like JuicyCampus, AutoAdmit, 2channel, and other Futaba-based imageboards (such as 4chan) thrive on anonymity. Users of 4chan, in particular, interact in an anonymous and ephemeral environment that facilitates rapid generation of new trends.
History of online anonymity
Online anonymity can be traced to Usenet newsgroups in the late 1990s where the notion of using invalid emails for posting to newsgroups was introduced. This was primarily used for discussion on newsgroups pertaining to certain sensitive topics. There was also the introduction of anonymous remailers which were capable of stripping away the sender's address from mail packets before sending them to the receiver. Online services which facilitated anonymous posting sprang up around mid-1992, originating with the cypherpunk group.
The precursor to Internet forums like 2channel and 4chan were textboards like Ayashii World and Amezou World that provided the ability for anonymous posts in Japan. These "large-scale anonymous textboards" were inspired by the Usenet culture and were primarily focused on technology, unlike their descendants.
Today, image boards receive tremendous Internet traffic from all parts of the world. In 2011, on 4chan's most popular board, /b/, there were roughly 35,000 threads and 400,000 posts created per day. At that time, that level of content was on par with YouTube. Such high traffic suggests a broad demand from Internet users for anonymous content sharing sites.
Levels of anonymity
Anonymity on the Internet can pertain to both the utilization of pseudonyms or requiring no authentication at all (also called "perfect anonymi
|
https://en.wikipedia.org/wiki/Private%20IP
|
PIP in telecommunications and data communications stands for Private Internet Protocol or Private IP. PIP refers to connectivity into a private extranet network which by its design emulates the functioning of the Internet. Specifically, the Internet uses a routing protocol called Border Gateway Protocol (BGP), as do most Multiprotocol Label Switching (MPLS) networks. With this design, there is an ambiguity in the route that a packet can take while traversing the network. Whereas the Internet is a public offering, MPLS PIP networks are private. This lends a known, often-used, and comfortable network design model for private implementation.
Private IP removes the need for antiquated Frame Relay networks, and even more antiquated point-to-point networks, with the service provider able to offer a private extranet to its customer at an affordable price point.
References
Network protocols
|
https://en.wikipedia.org/wiki/Autosave
|
Autosave is a saving function in many computer applications and video games which automatically saves the current changes or progress in the program or game, with the aim of preventing data loss should the user be prevented from saving manually by a crash, freeze or user error. Autosaving is typically done either at predetermined intervals or before, during, and after a complex editing task.
Application software
It has traditionally been seen as a feature to protect documents in an application or system failure (crash), and autosave backups are often purged whenever the user finishes their work. An alternative paradigm is to have all changes saved continuously (as with pen and paper) and all versions of a document available for review. This would remove the need for saving documents entirely. There are challenges to implementation at the file, application and operating system levels.
For example, in Microsoft Office, this option is called AutoRecover and, by default, saves the document every ten minutes in the temporary file directory. Restarting an Office program after crashing prompts the user to save the last recovered version. However, this does not protect users who mistakenly click "No" when asked to save their changes if Excel closes normally (except for Office 2013 and later). Autosave also syncs documents to OneDrive when editing normally.
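The interval-based approach described above can be sketched in a few lines. The following is an illustrative Python sketch, not any product's actual implementation; the `AutoSaver` class, its parameters, and the write-then-rename strategy are assumptions made for the example:

```python
import json
import os
import threading

class AutoSaver:
    """Minimal autosave sketch: periodically write state to a recovery file."""

    def __init__(self, get_state, path, interval=600.0):
        # interval=600 s mirrors the 10-minute default mentioned above
        self.get_state, self.path, self.interval = get_state, path, interval
        self._stop = threading.Event()
        self._thread = threading.Thread(target=self._run, daemon=True)

    def _run(self):
        while not self._stop.wait(self.interval):
            tmp = self.path + ".tmp"
            with open(tmp, "w") as f:
                json.dump(self.get_state(), f)
            # Write-then-rename, so a crash mid-save never corrupts the last backup
            os.replace(tmp, self.path)

    def start(self):
        self._thread.start()

    def stop(self):
        self._stop.set()
        self._thread.join()
```

A real implementation would additionally purge the recovery file on a clean exit, matching the behaviour described above.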
Mac OS X 10.7 Lion added an autosave feature that is available to some applications, and works in conjunction with Time Machine-like functionality to periodically save all versions of a document. This eliminates the need for any manual saving, as well as providing versioning support through the same system. A version is saved every five minutes, during any extended periods of idle time, or when the user uses "Save a version," which replaces the former "Save" menu item and takes its Command-S shortcut. Saves are made on snapshots of the document data and occur in a separate thread, so the user is never pa
|
https://en.wikipedia.org/wiki/IBM%20Advanced%20Peer-to-Peer%20Networking
|
IBM Advanced Peer-to-Peer Networking (APPN) is an extension to the Systems Network Architecture (SNA) "that allows large and small computers to communicate as peers across local and wide-area networks."
Goals and features
The goals of APPN were:
Provide effective routing for SNA traffic
Allow sessions to be established without the involvement of a central computer
Reduce the requirements to predict resource use
Provide prioritization within SNA traffic
Support both legacy and APPN traffic
To meet these goals it includes features such as these:
distributed network control
dynamic exchange of network topology information to foster ease of connection, reconfiguration, and route selection
dynamic definition of network resources
automated resource registration and directory lookup.
History
APPN was defined around 1986, and was meant to complement IBM's Systems Network Architecture. It was designed as a simplification, but it turned out to be significantly complex, in particular in migration situations.
APPN was originally meant to be a "DECNET killer", but DEC actually died before APPN was completed. APPN has been largely superseded by TCP/IP (Internet).
APPN evolved to include a more efficient data routing layer called High Performance Routing (HPR). HPR was made available across a range of enterprise networking products in the late 1990s, but today is typically used only within IBM's z/OS environments as a replacement for legacy SNA networks. It is still widely used within UDP tunnels; this technology is known as Enterprise Extender.
APPN should not be confused with the similarly named APPC (Advanced Program-to-Program Communication). APPN manages communication between machines, including routing, and operates at the transport and network layers. By contrast, APPC manages communication between programs, operating at the application and presentation layers.
APPN has nothing to do with peer-to-peer file sharing software such
|
https://en.wikipedia.org/wiki/Projection%20%28mathematics%29
|
In mathematics, a projection is an idempotent mapping of a set (or other mathematical structure) into a subset (or sub-structure). In this case, idempotent means that projecting twice is the same as projecting once. The restriction to a subspace of a projection is also called a projection, even if the idempotence property is lost.
An everyday example of a projection is the casting of shadows onto a plane (sheet of paper): the projection of a point is its shadow on the sheet of paper, and the projection (shadow) of a point on the sheet of paper is that point itself (idempotency). The shadow of a three-dimensional sphere is a closed disk. Originally, the notion of projection was introduced in Euclidean geometry to denote the projection of the three-dimensional Euclidean space onto a plane in it, like the shadow example. The two main projections of this kind are:
The projection from a point onto a plane or central projection: If C is a point, called the center of projection, then the projection of a point P different from C onto a plane that does not contain C is the intersection of the line CP with the plane. The points P such that the line CP is parallel to the plane do not have any image under the projection, but one often says that they project to a point at infinity of the plane (see Projective geometry for a formalization of this terminology). The projection of the point C itself is not defined.
The projection parallel to a direction D, onto a plane or parallel projection: The image of a point P is the intersection with the plane of the line parallel to D passing through P. This construction generalizes to any dimension.
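A parallel projection as just described is straightforward to compute. The sketch below is a hypothetical helper, with the plane z = 0 chosen for concreteness; it also illustrates the idempotence property mentioned earlier:

```python
def parallel_project(p, d):
    """Project the 3-D point p onto the plane z = 0 along direction d.

    d must not be parallel to the plane (its z-component must be non-zero).
    """
    x, y, z = p
    dx, dy, dz = d
    t = -z / dz  # parameter where the line p + t*d meets z = 0
    return (x + t * dx, y + t * dy, 0.0)

# Orthogonal special case: project straight down
assert parallel_project((1, 2, 3), (0, 0, 1)) == (1.0, 2.0, 0.0)
# Oblique direction
q = parallel_project((1, 2, 2), (1, 0, 2))
assert q == (0.0, 2.0, 0.0)
# Idempotence: projecting a second time changes nothing
assert parallel_project(q, (1, 0, 2)) == q
```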
The concept of projection in mathematics is a very old one, and most likely has its roots in the phenomenon of the shadows cast by real-world objects on the ground. This rudimentary idea was refined and abstracted, first in a geometric context and later in other branches of mathematics. Over time different versions of the con
|
https://en.wikipedia.org/wiki/Modified%20internal%20rate%20of%20return
|
The modified internal rate of return (MIRR) is a financial measure of an investment's attractiveness. It is used in capital budgeting to rank alternative investments of equal size. As the name implies, MIRR is a modification of the internal rate of return (IRR) and as such aims to resolve some problems with the IRR.
Problems associated with the IRR
While there are several problems with the IRR, MIRR resolves two of them.
Firstly, IRR is sometimes misapplied, under an assumption that interim positive cash flows are reinvested elsewhere in a different project at the same rate of return offered by the project that generated them. This is usually an unrealistic scenario and a more likely situation is that the funds will be reinvested at a rate closer to the firm's cost of capital. The IRR therefore often gives an unduly optimistic picture of the projects under study. Generally for comparing projects more fairly, the weighted average cost of capital should be used for reinvesting the interim cash flows.
Secondly, more than one IRR can be found for projects with alternating positive and negative cash flows, which leads to confusion and ambiguity. MIRR finds only one value.
Calculation
MIRR is calculated as follows:
MIRR = ( FV(positive cash flows, reinvestment rate) / −PV(negative cash flows, finance rate) )^(1/n) − 1,
where n is the number of equal periods at the end of which the cash flows occur (not the number of cash flows), PV is present value (at the beginning of the first period), FV is future value (at the end of the last period).
The formula adds up the negative cash flows after discounting them to time zero using the external cost of capital, adds up the positive cash flows including the proceeds of reinvestment at the external reinvestment rate to the final period, and then works out what rate of return would cause the magnitude of the discounted negative cash flows at time zero to be equivalent to the future value of the positive cash flows at the final time period.
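As a concrete illustration, the calculation just described can be carried out directly. The function below is a sketch following the definition above (not Excel's built-in `MIRR`, though it computes the same quantity); the example cash flows are hypothetical:

```python
def mirr(cash_flows, finance_rate, reinvest_rate):
    """Modified internal rate of return.

    cash_flows[t] is the cash flow at the end of period t (index 0 = time zero);
    n = len(cash_flows) - 1 is the number of periods.
    """
    n = len(cash_flows) - 1
    # Discount negative cash flows to time zero at the external finance rate
    pv_neg = sum(cf / (1 + finance_rate) ** t
                 for t, cf in enumerate(cash_flows) if cf < 0)
    # Compound positive cash flows to the final period at the reinvestment rate
    fv_pos = sum(cf * (1 + reinvest_rate) ** (n - t)
                 for t, cf in enumerate(cash_flows) if cf > 0)
    return (fv_pos / -pv_neg) ** (1.0 / n) - 1

# Example: invest 1000 now, receive 500 at the end of each of three years,
# with both external rates at 10%; the result is roughly 18.3%
rate = mirr([-1000, 500, 500, 500], finance_rate=0.10, reinvest_rate=0.10)
```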
Spreadsheet applications, such as Microsoft Excel, have inbuilt functions to calculate t
|
https://en.wikipedia.org/wiki/Multiple%20%28mathematics%29
|
In mathematics, a multiple is the product of any quantity and an integer. In other words, for the quantities a and b, it can be said that b is a multiple of a if b = na for some integer n, which is called the multiplier. If a is not zero, this is equivalent to saying that b/a is an integer.
When a and b are both integers, and b is a multiple of a, then a is called a divisor of b. One also says that a divides b. If a and b are not integers, mathematicians generally prefer to use integer multiple instead of multiple, for clarification. In fact, multiple is used for other kinds of product; for example, a polynomial p is a multiple of another polynomial q if there exists a third polynomial r such that p = qr.
Examples
14, 49, −21 and 0 are multiples of 7, whereas 3 and −6 are not. This is because there are integers that 7 may be multiplied by to reach the values of 14, 49, 0 and −21, while there are no such integers for 3 and −6. Each of the products listed below, and in particular, the products for 3 and −6, is the only way that the relevant number can be written as a product of 7 and another real number:
14 = 7 × 2;
49 = 7 × 7;
−21 = 7 × (−3);
0 = 7 × 0;
3 = 7 × (3/7), and 3/7 is not an integer;
−6 = 7 × (−6/7), and −6/7 is not an integer.
Properties
0 is a multiple of every number (0 = 0 ⋅ b).
The product of any integer n and any integer is a multiple of n. In particular, n, which is equal to n × 1, is a multiple of n (every integer is a multiple of itself), since 1 is an integer.
If a and b are multiples of x, then a + b and a − b are also multiples of x.
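The definition and properties above are easy to check mechanically; the following Python sketch (the helper name is chosen here for illustration) does so for the examples given earlier:

```python
def is_multiple(b, a):
    """Return True if b is a multiple of a, i.e. b = n*a for some integer n."""
    if a == 0:
        return b == 0  # 0 is the only multiple of 0
    return b % a == 0

# Examples from above
assert all(is_multiple(b, 7) for b in (14, 49, -21, 0))
assert not is_multiple(3, 7) and not is_multiple(-6, 7)
# Properties: every integer is a multiple of itself, and multiples are
# closed under addition and subtraction
assert is_multiple(7, 7)
assert is_multiple(14 + 49, 7) and is_multiple(14 - 49, 7)
```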
Submultiple
In some texts, "a is a submultiple of b" has the meaning of "a being a unit fraction of b" (a = b/n) or, equivalently, "b being an integer multiple n of a" (b = na). This terminology is also used with units of measurement (for example by the BIPM and NIST), where a unit submultiple is obtained by prefixing the main unit, defined as the quotient of the main unit by an integer, mostly a power of 10³. For example, a millimetre is the 1000-fold submultiple of a metre. As another example, one inch may be considered as a 12-fold su
|
https://en.wikipedia.org/wiki/Projection%20%28set%20theory%29
|
In set theory, a projection is one of two closely related types of functions or operations, namely:
A set-theoretic operation typified by the j-th projection map, written proj_j, which takes an element x = (x_1, …, x_j, …, x_k) of the Cartesian product X_1 × ⋯ × X_k to the value proj_j(x) = x_j
A function that sends an element x to its equivalence class under a specified equivalence relation E or, equivalently, a surjection from a set to another set. The function from elements to equivalence classes is a surjection, and every surjection corresponds to an equivalence relation under which two elements are equivalent when they have the same image. The result of the mapping is written as [x] when E is understood, or written as [x]_E when it is necessary to make E explicit.
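Both senses of projection can be illustrated concretely. In the sketch below (an illustrative example, not standard library terminology), the coordinate projection picks one component of a tuple, and the quotient projection maps an integer to its equivalence class modulo 3:

```python
def proj(j):
    """The j-th projection map (0-indexed): (x_0, ..., x_k) -> x_j."""
    return lambda x: x[j]

point = (3, "a", 7.5)  # an element of a Cartesian product
assert proj(1)(point) == "a"

def proj_mod3(x):
    """Projection onto equivalence classes of integers modulo 3."""
    return x % 3

# 7 and 10 have the same image, so they lie in the same equivalence class
assert proj_mod3(7) == proj_mod3(10) == 1
# The map is a surjection onto the three classes {0, 1, 2}
assert {proj_mod3(n) for n in range(10)} == {0, 1, 2}
```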
See also
References
Basic concepts in set theory
|
https://en.wikipedia.org/wiki/Pervasive%20Software
|
Pervasive Software was a company that developed software including database management systems and extract, transform and load tools. Pervasive Data Integrator and Pervasive Data Profiler are integration products, and the Pervasive PSQL relational database management system is its primary data storage product. These embeddable data management products deliver integration between corporate data, third-party applications and custom software.
Pervasive Software was headquartered in Austin, Texas, and sold its products with partners in other countries.
The company is involved in cloud computing through DataSolutions and its DataCloud offering along with its long-standing relationship with salesforce.com. It was acquired by Actian Corp. in April 2013.
History
Pervasive started in 1982 as SoftCraft, developing the database management system technology Btrieve. SoftCraft was acquired by Novell in 1987; in January 1994 the business spun out as Btrieve Technologies. The company name was changed to Pervasive Software in June 1996. Their initial public offering in 1997 raised $18.6 million.
Ron R. Harris was chief executive and founder Nancy R. Woodward was chairman of the board of directors (the other co-founder was her husband Douglas Woodward). Its shares were listed on the Nasdaq exchange under symbol PVSW.
Its database product was announced in 1999 as Pervasive.SQL version 7, and later renamed PSQL. PSQL implemented the atomicity, consistency, isolation, durability properties known as ACID using a relational database model.
In August 2003, Pervasive agreed to acquire Data Junction Corporation, makers of data and application integration tools renamed Pervasive Data Integrator, for about $51.7 million in cash and stock shares. Data Junction, founded in 1984, was a privately held company also headquartered in Austin. The merger closed in December 2003.
Pervasive also acquired business-to-business data interchange service Channelinx in August 2009. Based in Greenville, South Carolina
|
https://en.wikipedia.org/wiki/Optimal%20virulence
|
Optimal virulence is a concept relating to the ecology of hosts and parasites. One definition of virulence is the host's parasite-induced loss of fitness. The parasite's fitness is determined by its success in transmitting offspring to other hosts. For about 100 years, the consensus was that virulence decreased and parasitic relationships evolved toward symbiosis. This view was even called the "law of declining virulence", despite being a hypothesis rather than an established theory. It has been challenged since the 1980s and has been disproved.
A pathogen that is too restrained will lose out in competition to a more aggressive strain that diverts more host resources to its own reproduction. However, the host, being the parasite's resource and habitat in a way, suffers from this higher virulence. This might induce faster host death, and act against the parasite's fitness by reducing the probability of encountering another host (killing the host too fast to allow for transmission). Thus, there is a natural force providing pressure on the parasite to "self-limit" virulence.
The idea is, then, that there exists an equilibrium point of virulence, where parasite's fitness is highest. Any movement on the virulence axis, towards higher or lower virulence, will result in lower fitness for the parasite, and thus will be selected against.
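The equilibrium argument can be made concrete with a standard transmission–virulence trade-off model. The functional forms and parameter below are hypothetical choices for illustration, not taken from the article: the transmission benefit saturates as virulence v grows, while the host loss rate keeps rising, so the fitness proxy R0(v) peaks at an intermediate virulence rather than at either extreme:

```python
def r0(v, mu=0.2):
    """Toy parasite fitness: saturating transmission over total host loss rate."""
    beta = v / (1.0 + v)       # transmission benefit saturates with virulence
    return beta / (mu + v)     # mu = background host mortality

# Scan virulence levels: fitness is maximised at an intermediate value,
# not at zero virulence and not at maximal virulence
vs = [i / 100 for i in range(1, 500)]
best = max(vs, key=r0)
# For this model the optimum solves v^2 = mu, i.e. v* = sqrt(0.2) ≈ 0.447
assert 0.4 < best < 0.5
assert r0(best) > r0(0.01) and r0(best) > r0(4.99)
```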
Mode of transmission
Paul W. Ewald has explored the relationship between virulence and mode of transmission. He came to the conclusion that virulence tends to remain especially high in waterborne and vector-borne infections, such as cholera and dengue. Cholera is spread through sewage and dengue through mosquitos. In the case of respiratory infections, the pathogen depends on an ambulatory host to survive. It must spare the host long enough to find a new host. Water- or vector-borne transmission circumvents the need for a mobile host. Ewald is convinced that the crowding of field hospitals and trench warfare provided an easy route to transmission that evolved the
|
https://en.wikipedia.org/wiki/Debian%E2%80%93Mozilla%20trademark%20dispute
|
In 2006, a branding issue developed when Mike Connor, representing the Mozilla Corporation, requested that the Debian Project comply with Mozilla standards for use of the Thunderbird trademark when redistributing the Thunderbird software. At issue were modifications not approved by the Mozilla Foundation, when the name for the software remained the same.
The Debian Project subsequently rebranded the Mozilla Firefox program, and other software released by Mozilla, so that Debian could distribute modified software without being bound by the trademark requirements that the Mozilla Foundation had invoked. The new names established by Debian were Iceweasel for Mozilla Firefox, Icedove for Mozilla Thunderbird, and Iceape for SeaMonkey. These changes were implemented in the subsequent version of Debian (Etch). In July 2007, Iceowl, a rebranded version of Mozilla Sunbird, was added to the unstable branch of Debian.
In 2016, a number of Mozilla employees and Debian maintainers argued that the branding was no longer needed, and on 10 March 2016, Debian's unstable branch switched back to the Mozilla branding, with the stable branch planning to switch after Iceweasel's end of life.
The decade-long branding issues between the Debian Project and Mozilla Corporation ended in 2017 when all Mozilla applications in Debian were reverted to their original names.
Applications
Debian's Iceweasel, Icedove, Iceowl, and Iceape were based on Mozilla's Firefox, Thunderbird, Sunbird, and SeaMonkey, respectively. The rebranded products still used some Internet-based services from Mozilla, including the Mozilla plugin finder service, and Mozilla add-ons and their update notifications. There was also no change to how non-free components, such as Flash, were found or used.
Iceape
Iceape was a free software Internet suite based on SeaMonkey. It was developed by the Debian Mozilla Team in unison with the SeaMonkey Council's work on their release, but in accordance with Debian's policy of only
|
https://en.wikipedia.org/wiki/WWMB
|
WWMB, virtual channel 21 (UHF digital channel 26), is a Dabl-affiliated television station licensed to Florence, South Carolina, United States, serving the Pee Dee and Grand Strand regions of South Carolina. The station is owned by Howard Stirk Holdings; the Sinclair Broadcast Group, which owns dual ABC/CW affiliate WPDE-TV (channel 15, also licensed to Florence), operates WWMB under a shared services agreement (SSA). Both stations share studios on University Boulevard in Conway, while WWMB's transmitter is located on Pee Dee Church Road in Floydale, South Carolina.
History
WWMB went on the air September 21, 1994, as an independent station. It was owned by Atlantic Media Group, but operated by Diversified Communications, then-owner of WPDE, under an LMA. It joined UPN as a charter affiliate on January 16, 1995. By 1999, WWMB was airing Access Hollywood starring Myrtle Beach native Nancy O'Dell. Barrington Broadcasting bought WPDE in 2006. At the same time, Atlantic Media Group sold WWMB to SagamoreHill Broadcasting, which continued the LMA with WPDE.
On January 24, 2006, Time Warner and CBS Corporation announced that the two networks they owned, The WB and UPN, would cease operations, and that those companies would combine their resources to create The CW. Just hours after the announcement, WPDE released a notice on its website indicating WWMB would become an affiliate of the new network. This notice was a little premature, as over the next two months, many announcements of network affiliation changes including station deals with The CW were made.
The existence of a cable-only WB affiliate, "WFWB," which was carried by area cable systems as part of The WB 100+ national cable service, made a CW affiliation for WWMB seem less of a sure thing. Nevertheless, WWMB made public on April 10 it had joined The CW.
Even though WWMB aired CW programming on its individually-programmed main channel, it also operated a digital subchannel that carried the programming of The CW
|
https://en.wikipedia.org/wiki/Wet-bulb%20temperature
|
The wet-bulb temperature (WBT) is the temperature read by a thermometer covered in water-soaked (water at ambient temperature) cloth (a wet-bulb thermometer) over which air is passed. At 100% relative humidity, the wet-bulb temperature is equal to the air temperature (dry-bulb temperature); at lower humidity the wet-bulb temperature is lower than dry-bulb temperature because of evaporative cooling.
The wet-bulb temperature is defined as the temperature of a parcel of air cooled to saturation (100% relative humidity) by the evaporation of water into it, with the latent heat supplied by the parcel. A wet-bulb thermometer indicates a temperature close to the true (thermodynamic) wet-bulb temperature. The wet-bulb temperature is the lowest temperature that can be reached under current ambient conditions by the evaporation of water only.
Even heat-adapted people cannot carry out normal outdoor activities past a wet-bulb temperature of 32 °C (90 °F), equivalent to a heat index of 55 °C (131 °F). A reading of 35 °C (95 °F) – equivalent to a heat index of 71 °C (160 °F) – is considered the theoretical human survivability limit for up to six hours of exposure.
Intuition
If a thermometer is wrapped in a water-moistened cloth, it will behave differently. The drier and less humid the air is, the faster the water will evaporate. The faster water evaporates, the lower the thermometer's temperature will be relative to air temperature.
Water can evaporate only if the air around it can absorb more water. This is measured by comparing how much water is in the air to the maximum that could be in the air—the relative humidity. 0% means the air is completely dry, and 100% means the air contains all the water it can hold in the present circumstances and it cannot absorb any more water (from any source).
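For rough calculations, the wet-bulb temperature can be estimated from dry-bulb temperature and relative humidity using Stull's (2011) empirical fit, which is valid for roughly 5–99% relative humidity at standard sea-level pressure. The sketch below implements that published approximation; the coefficients are Stull's, while the function name is an assumption for this example:

```python
import math

def wet_bulb_stull(t_c, rh_pct):
    """Approximate wet-bulb temperature (°C) from dry-bulb temperature t_c (°C)
    and relative humidity rh_pct (percent), using Stull's 2011 empirical fit."""
    return (t_c * math.atan(0.151977 * math.sqrt(rh_pct + 8.313659))
            + math.atan(t_c + rh_pct)
            - math.atan(rh_pct - 1.676331)
            + 0.00391838 * rh_pct ** 1.5 * math.atan(0.023101 * rh_pct)
            - 4.686035)

# Dry air cools the wet bulb well below the dry-bulb reading...
assert abs(wet_bulb_stull(20.0, 50.0) - 13.7) < 0.3
# ...while near saturation the two temperatures nearly coincide
assert abs(wet_bulb_stull(20.0, 99.0) - 20.0) < 0.5
```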
This is part of the cause of apparent temperature in humans. The drier the air, the more moisture it can take up beyond what is already in it, and the easier it is for extra water to evaporate. The result is that sweat evaporates more
|
https://en.wikipedia.org/wiki/Newton%20fractal
|
The Newton fractal is a boundary set in the complex plane which is characterized by Newton's method applied to a fixed polynomial p(z) or transcendental function. It is the Julia set of the meromorphic function z ↦ z − p(z)/p′(z) which is given by Newton's method. When there are no attractive cycles (of order greater than 1), it divides the complex plane into regions R_k, each of which is associated with a root ζ_k of the polynomial, k = 1, …, deg(p). In this way the Newton fractal is similar to the Mandelbrot set, and like other fractals it exhibits an intricate appearance arising from a simple description. It is relevant to numerical analysis because it shows that (outside the region of quadratic convergence) the Newton method can be very sensitive to its choice of start point.
Almost all points of the complex plane are associated with one of the roots of a given polynomial in the following way: the point is used as starting value for Newton's iteration z_{n+1} := z_n − p(z_n)/p′(z_n), yielding a sequence of points z_1, z_2, …. If the sequence converges to the root ζ_k, then the starting point was an element of the region R_k. However, for every polynomial of degree at least 2 there are points for which the Newton iteration does not converge to any root: examples are the boundaries of the basins of attraction of the various roots. There are even polynomials for which open sets of starting points fail to converge to any root: a simple example is z³ − 2z + 2, where some points are attracted by the cycle 0, 1, 0, 1, … rather than by a root.
An open set for which the iterations converge towards a given root or cycle (that is not a fixed point) is a Fatou set for the iteration. The complementary set to the union of all these is the Julia set. The Fatou sets have common boundary, namely the Julia set. Therefore, each point of the Julia set is a point of accumulation for each of the Fatou sets. It is this property that causes the fractal structure of the Julia set (when the degree of the polynomial is larger than 2).
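A minimal basin computation for the classic example p(z) = z³ − 1 (a choice made here for illustration) shows how each starting point is assigned to a root; colouring a grid of starting points by the returned index produces the familiar three-lobed Newton fractal:

```python
import cmath

# The three roots of p(z) = z^3 - 1 are the cube roots of unity
ROOTS = [cmath.exp(2j * cmath.pi * k / 3) for k in range(3)]

def newton_basin(z, max_iter=60, tol=1e-9):
    """Index of the root Newton's method converges to from z, or -1."""
    for _ in range(max_iter):
        if abs(z) < tol:
            return -1  # derivative 3z^2 vanishes; the iteration is undefined
        z = z - (z**3 - 1) / (3 * z**2)  # z_{n+1} = z_n - p(z_n)/p'(z_n)
        for i, root in enumerate(ROOTS):
            if abs(z - root) < tol:
                return i
    return -1

# Real starting points (away from 0) all reach the real root 1 (index 0)
assert newton_basin(2.0) == 0 and newton_basin(-2.0) == 0
# A start on the imaginary axis reaches the root in the upper half-plane
assert newton_basin(1j) == 1
```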
To plot images of the fractal,
|
https://en.wikipedia.org/wiki/Lyndon%20word
|
In mathematics, in the areas of combinatorics and computer science, a Lyndon word is a nonempty string that is strictly smaller in lexicographic order than all of its rotations. Lyndon words are named after mathematician Roger Lyndon, who investigated them in 1954, calling them standard lexicographic sequences. Anatoly Shirshov introduced Lyndon words in 1953 calling them regular words. Lyndon words are a special case of Hall words; almost all properties of Lyndon words are shared by Hall words.
Definitions
Several equivalent definitions exist.
A k-ary Lyndon word of length n is an n-character string over an alphabet of size k which is the unique minimum element in the lexicographical ordering of the multiset of all its rotations. Being the unique smallest rotation implies that a Lyndon word differs from each of its non-trivial rotations, and is therefore aperiodic.
Alternately, a word w is a Lyndon word if and only if it is nonempty and lexicographically strictly smaller than any of its proper suffixes; that is, w < v for all nonempty words v such that w = uv and u is nonempty.
Another characterisation is the following: A Lyndon word has the property that it is nonempty and, whenever it is split into two nonempty substrings, the left substring is always lexicographically less than the right substring. That is, if w is a Lyndon word, and w = uv is any factorization into two substrings, with u and v understood to be non-empty, then u < v. This definition implies that a string w of length greater than 1 is a Lyndon word if and only if there exist Lyndon words u and v such that u < v and w = uv. Although there may be more than one choice of u and v with this property, there is a particular choice, called the standard factorization, in which v is as long as possible.
Enumeration
The Lyndon words over the two-symbol binary alphabet {0,1}, sorted by length and then lexicographically within each length class, form an infinite sequence that begins
0, 1, 01, 001, 011, 0001, 0011, 0111, 00001, 00011, 00101, 00111, 01011, 0
|
https://en.wikipedia.org/wiki/Zero%20suppression
|
Zero suppression is the removal of redundant zeroes from a number. This can be done to meet storage, page, or display space constraints, or for formatting reasons, such as making a letter more legible.
Examples
00049823 → 49823
7.678600000 → 7.6786
0032.3231000 → 32.3231
2.45000×10¹⁰ → 2.45×10¹⁰
0.0045×10¹⁰ → 4.5×10⁷
One must be careful; in physics and related disciplines, trailing zeros are used to indicate the precision of the number, as an error of ±1 in the last place is assumed. Examples:
4.5981 is 4.5981 ± 0.0001
4.59810 is 4.5981 ± 0.00001
4.598100 is 4.5981 ± 0.000001
Data compression
It is also a way to store a large array of numbers, where many of the entries are zero. By omitting the zeroes, and instead storing the indices along with the values of the non-zero items, less space may be used in total. It only makes sense if the extra space used for storing the indices (on average) is smaller than the space saved by not storing the zeroes. This is sometimes used in a sparse array.
Example:
Original array: 0, 1, 0, 0, 2, 5, 0, 0, 0, 4, 0, 0, 0, 0, 0
Pairs of index and data: {2,1}, {5,2}, {6,5}, {10,4}
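The index–value scheme above can be sketched directly. Note that the article's pairs use 1-based indices; the illustrative sketch below uses Python's 0-based indexing:

```python
def compress(arr):
    """Keep only (index, value) pairs for the non-zero entries (0-based)."""
    return [(i, v) for i, v in enumerate(arr) if v != 0]

def decompress(pairs, length):
    """Rebuild the full array, filling unmentioned positions with zero."""
    out = [0] * length
    for i, v in pairs:
        out[i] = v
    return out

original = [0, 1, 0, 0, 2, 5, 0, 0, 0, 4, 0, 0, 0, 0, 0]
pairs = compress(original)
assert pairs == [(1, 1), (4, 2), (5, 5), (9, 4)]
assert decompress(pairs, len(original)) == original
# Worthwhile only when the pairs take less space than the zeroes they replace
assert len(pairs) * 2 < len(original)
```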
See also
References
Information theory
0 (number)
|