| source | text |
|---|---|
https://en.wikipedia.org/wiki/Oscar%20Hertwig | Oscar Hertwig (21 April 1849 in Friedberg – 25 October 1922 in Berlin) was a German embryologist and zoologist known for his research in developmental biology and evolution. Hertwig is credited as the first man to observe sexual reproduction by looking at the cells of sea urchins under the microscope.
Biography
Hertwig was the elder brother of zoologist-professor Richard Hertwig (1850–1937). The Hertwig brothers were the most eminent students of Ernst Haeckel (and Carl Gegenbaur) from the University of Jena. They remained independent of Haeckel's philosophical speculations but built on his ideas to broaden their own concepts in zoology. Initially, between 1879 and 1883, they performed embryological studies, especially on the theory of the coelom (1881), the fluid-filled body cavity. These problems were based on the phylogenetic theorems of Haeckel, i.e. the biogenetic law (German: biogenetisches Grundgesetz), and the "gastraea theory".
Within 10 years, the two brothers moved apart to the north and south of Germany.
Oscar Hertwig later became a professor of anatomy in Berlin in 1888. Richard Hertwig had moved three years earlier, becoming a professor of zoology at Ludwig Maximilian University in Munich, where he served from 1885 to 1925: the last 40 years of a 50-year career as a professor at four universities.
Hertwig was a leader in the field of comparative and causal animal-developmental history. He also wrote a leading textbook. By studying sea urchins he proved that fertilization occurs due to the fusion of a sperm and egg cell. He recognized the role of the cell nucleus during inheritance and chromosome reduction during meiosis: in 1876, he published his findings that fertilization includes the penetration of a spermatozoon into an egg cell. Hermann Fol also observed this in the same year.
Hertwig's experiments with frog eggs revealed the 'long axis rule', or Hertwig rule. According to this rule, a cell divides along its long axis.
In 1885 Hertwig w |
https://en.wikipedia.org/wiki/Work%20at%20home%20parent | A work at home parent is someone who conducts remote work from home and integrates parenting into his or her working time and workspace. They are sometimes referred to as a WAHM (work at home mom) or a WAHD (work at home dad).
People work from home for a variety of reasons, including lower business expenses, personal health limitations, eliminating commuting, or to have a more flexible schedule. This flexibility can give workers more options when planning tasks, business and non-business, including parenting duties. While some remote workers opt for childcare outside the home, others integrate child rearing into their working time and workspace. The latter are considered work-at-home parents.
Many WAHPs start home businesses to care for their children while still creating income. The desire to care for one's own children, the incompatibility of a 9-to-5 work day with school hours or sick days, and the expense of childcare prompt many parents to change or leave their jobs in the workforce to be available to their children. Many WAHPs build a business schedule that can be integrated with their parenting duties.
Integrating business and parenting
An integration of parenting and business can take place in one or more of four key ways: combined uses of time, combined uses of space, normalizing children in business, and flexibility.
Combining uses of time involves some level of human multitasking, such as taking children on business errands, and the organized scheduling of business activities during the child's downtime, and vice versa. The WAHP combines uses of space by creating a home (or mobile) office that accommodates the child's presence.
Normalizing acknowledges the child's presence in the business environment. This can include letting key business partners know that parenting is a priority, establishing routines and rules for children in the office, and even having children help with business when appropriate.
Finally, the WAHP can utilize the inherent flexibi |
https://en.wikipedia.org/wiki/AdS%20black%20hole | In theoretical physics, an anti-de Sitter (AdS) black hole is a black hole solution of general relativity or its extensions which represents an isolated massive object, but with a negative cosmological constant. Such a solution asymptotically approaches anti-de Sitter space at spatial infinity, and is a generalization of the Kerr vacuum solution, which asymptotically approaches Minkowski spacetime at spatial infinity.
In 3+1 dimensions, the metric is given by
$$ds^2 = -\left(k^2 r^2 + 1 - \frac{C}{r}\right) dt^2 + \frac{dr^2}{k^2 r^2 + 1 - \frac{C}{r}} + r^2\, d\Omega^2,$$
where t is the time coordinate, r is the radial coordinate, Ω are the polar coordinates, C is a constant and k is the AdS curvature.
In general, in d+1 dimensions, the metric is given by
$$ds^2 = -\left(k^2 r^2 + 1 - \frac{C}{r^{d-2}}\right) dt^2 + \frac{dr^2}{k^2 r^2 + 1 - \frac{C}{r^{d-2}}} + r^2\, d\Omega_{d-1}^2.$$
According to the AdS/CFT correspondence, if gravity were quantized, an AdS black hole would be dual to a thermal state on the conformal boundary. In the context of, say, AdS/QCD, this would correspond to the deconfinement phase of the quark–gluon plasma.
See also
BTZ black hole
Black brane
Black holes
Exact solutions in general relativity |
https://en.wikipedia.org/wiki/Black%20brane | In general relativity, a black brane is a solution of the equations that generalizes a black hole solution but it is also extended—and translationally symmetric—in p additional spatial dimensions. That type of solution would be called a black p-brane.
In string theory, the term black brane describes a group of D1-branes that are surrounded by a horizon. With the notion of a horizon in mind as well as identifying points as zero-branes, a generalization of a black hole is a black p-brane. However, many physicists tend to define a black brane separate from a black hole, making the distinction that the singularity of a black brane is not a point like a black hole, but instead a higher dimensional object.
A BPS black brane is similar to a BPS black hole. They both have electric charges. Some BPS black branes have magnetic charges.
The metric for a black p-brane in an n-dimensional spacetime is:
$$ds^2 = \left(\eta_{ab} + \left(\frac{r_s}{r}\right)^{n-p-3} u_a u_b\right) d\sigma^a d\sigma^b + \frac{dr^2}{1 - \left(\frac{r_s}{r}\right)^{n-p-3}} + r^2\, d\Omega^2_{n-p-2}$$
where:
η is the (p + 1)-dimensional Minkowski metric with signature (−, +, +, +, ...),
σ are the coordinates for the worldsheet of the black p-brane,
u is its four-velocity,
r is the radial coordinate, $r_s$ is the horizon radius, and
Ω is the metric for a (n − p − 2)-sphere, surrounding the brane.
Curvatures
When $r_s = 0$, the metric reduces to flat spacetime, and its Ricci tensor $R_{\mu\nu}$ and Ricci scalar $R$ both vanish.
Black string
A black string is a higher-dimensional (D > 4) generalization of a black hole in which the event horizon is topologically equivalent to $S^2 \times S^1$ and spacetime is asymptotically $M^{d-1} \times S^1$.
Perturbations of black string solutions were found to be unstable for L (the length around S1) greater than some threshold L′. The full non-linear evolution of a black string beyond this threshold might result in a black string breaking up into separate black holes which would coalesce into a single black hole. This scenario seems unlikely because it was realized a black string could not pinch off in finite time, shrinking S2 to a point and then evolving to some Kaluza–Klein black hole |
https://en.wikipedia.org/wiki/Warped%20Passages | Warped Passages: Unraveling the Mysteries of the Universe's Hidden Dimensions is the debut non-fiction book by Lisa Randall, published in 2005, about particle physics in general and additional dimensions of space (cf. Kaluza–Klein theory) in particular. The book reached the top 50 on amazon.com, making it the first commercially successful book on theoretical physics by a female author. She herself characterizes the book as being about physics and the multi-dimensional universe. The book describes, at a non-technical level, the theoretical models Randall developed with the physicist Raman Sundrum, in which various aspects of particle physics (e.g. supersymmetry) are explained in a higher-dimensional braneworld scenario. These models have since generated thousands of citations.
Overview
She comments that her motivation for writing this book was her "thinking that there were people who wanted a more complete and balanced vision of the current state of physics." She has noticed there is a large audience that thinks physics is about the bizarre or exotic. She observes that when people develop an understanding of the science of particle physics and the experiments that produce the science, people get excited. "The upcoming experiments at the Large Hadron Collider (LHC) at CERN near Geneva will test many ideas, including some of the warped extra-dimensional theories I talk about." Another motivation was that she "gambled that there are people who really want to understand the physics and how the many ideas connect."
Background
Randall is currently a professor at Harvard University in Cambridge, Massachusetts, focusing on particle physics and cosmology. She stays current through her research into the nature of matter's most basic elements, and the forces that govern these most basic elements. Randall's experiences, which qualify her as an authority on the subject of the book, are her original "contributions in a wide variety of physics studies, including cosmological |
https://en.wikipedia.org/wiki/Alpher%E2%80%93Bethe%E2%80%93Gamow%20paper | In physical cosmology, the Alpher–Bethe–Gamow paper, or αβγ paper, was written by Ralph Alpher, then a physics PhD student, his advisor George Gamow, and Hans Bethe. The work, which would become the subject of Alpher's PhD dissertation, argued that the Big Bang would create hydrogen, helium and heavier elements in the correct proportions to explain their abundance in the early universe. While the original theory neglected a number of processes important to the formation of heavy elements, subsequent developments showed that Big Bang nucleosynthesis is consistent with the observed constraints on all primordial elements.
Formally titled "The Origin of Chemical Elements", it was published in the April 1948 issue of Physical Review.
Bethe's name
Gamow humorously decided to add the name of his friend—the eminent physicist Hans Bethe—to this paper in order to create the whimsical author list of Alpher, Bethe, Gamow, a play on the Greek letters α, β, and γ (alpha, beta, gamma). Bethe was listed in the article as "H. Bethe, Cornell University, Ithaca, New York". In his 1952 book The Creation of the Universe, Gamow explained Hans Bethe's association with the theory thus:
After this, Bethe did work on Big Bang nucleosynthesis.
Alpher, at the time only a graduate student, was generally dismayed by the inclusion of Bethe's name on this paper. He felt that the inclusion of another eminent physicist would overshadow his personal contribution to this work and prevent him from receiving proper recognition for such an important discovery. He expressed resentment over Gamow's whimsy as late as 1999.
Main shortcoming of the theory
The theory originally proposed that all atomic nuclei are produced by the successive capture of neutrons, one mass unit at a time. However, later study challenged the universality of the successive-capture theory. No element was found to have a stable isotope with an atomic mass of five or eight. Physicists soon noticed that these mass gaps would hind |
https://en.wikipedia.org/wiki/Quaternionic%20projective%20space | In mathematics, quaternionic projective space is an extension of the ideas of real projective space and complex projective space, to the case where coordinates lie in the ring of quaternions $\mathbb{H}$. Quaternionic projective space of dimension n is usually denoted by $\mathbb{HP}^n$
and is a closed manifold of (real) dimension 4n. It is a homogeneous space for a Lie group action, in more than one way. The quaternionic projective line is homeomorphic to the 4-sphere.
In coordinates
Its direct construction is as a special case of the projective space over a division algebra. The homogeneous coordinates of a point can be written
$$[q_0 : q_1 : \cdots : q_n]$$
where the $q_i$ are quaternions, not all zero. Two sets of coordinates represent the same point if they are 'proportional' by a left multiplication by a non-zero quaternion c; that is, we identify all the
$$[c q_0 : c q_1 : \cdots : c q_n].$$
In the language of group actions, $\mathbb{HP}^n$ is the orbit space of $\mathbb{H}^{n+1} \setminus \{0\}$ by the action of $\mathbb{H}^{\times}$, the multiplicative group of non-zero quaternions. By first projecting onto the unit sphere inside $\mathbb{H}^{n+1}$ one may also regard $\mathbb{HP}^n$ as the orbit space of $S^{4n+3}$ by the action of $\mathrm{Sp}(1)$, the group of unit quaternions. The sphere $S^{4n+3}$ then becomes a principal $\mathrm{Sp}(1)$-bundle over $\mathbb{HP}^n$:
$$\mathrm{Sp}(1) \to S^{4n+3} \to \mathbb{HP}^n.$$
This bundle is sometimes called a (generalized) Hopf fibration.
There is also a construction of $\mathbb{HP}^n$ by means of two-dimensional complex subspaces of $\mathbb{C}^{2n+2}$, meaning that $\mathbb{HP}^n$ lies inside a complex Grassmannian.
Topology
Homotopy theory
The space $\mathbb{HP}^{\infty}$, defined as the union of all finite $\mathbb{HP}^n$'s under inclusion, is the classifying space $BS^3$. The homotopy groups of $\mathbb{HP}^{\infty}$ are given by $\pi_i(\mathbb{HP}^{\infty}) = \pi_i(BS^3) \cong \pi_{i-1}(S^3)$. These groups are known to be very complex and in particular they are non-zero for infinitely many values of $i$. However, we do have that $\pi_i(\mathbb{HP}^{\infty}) \otimes \mathbb{Q}$ is $\mathbb{Q}$ for $i = 4$ and vanishes for all other $i$.
It follows that rationally, i.e. after localisation of a space, $\mathbb{HP}^{\infty}$ is an Eilenberg–MacLane space $K(\mathbb{Q},4)$. That is, $\mathbb{HP}^{\infty}_{\mathbb{Q}} \simeq K(\mathbb{Q},4)$ (cf. the example $K(\mathbb{Z},2)$). See rational homotopy theory.
In general, $\mathbb{HP}^n$ has a cell structure with one cell in each dimension which is a multiple of 4, up to $4n$. Accordingly, its cohomology ring is $\mathbb{Z}[v]/(v^{n+1})$, where $v$ is a 4-dimensional generator. This is analogous to |
https://en.wikipedia.org/wiki/Photobleaching | In optics, photobleaching (sometimes termed fading) is the photochemical alteration of a dye or a fluorophore molecule such that it is permanently unable to fluoresce. This is caused by cleaving of covalent bonds or non-specific reactions between the fluorophore and surrounding molecules. Such irreversible modifications in covalent bonds are caused by transition from a singlet state to the triplet state of the fluorophores. The number of excitation cycles to achieve full bleaching varies. In microscopy, photobleaching may complicate the observation of fluorescent molecules, since they will eventually be destroyed by the light exposure necessary to stimulate them into fluorescing. This is especially problematic in time-lapse microscopy.
However, photobleaching may also be used prior to applying the (primarily antibody-linked) fluorescent molecules, in an attempt to quench autofluorescence. This can help improve the signal-to-noise ratio.
Photobleaching may also be exploited to study the motion and/or diffusion of molecules, for example via FRAP (fluorescence recovery after photobleaching), in which movement of cellular components can be confirmed by observing a recovery of fluorescence at the site of photobleaching, or FLIP (fluorescence loss in photobleaching), in which multiple rounds of photobleaching are performed so that the spread of fluorescence loss can be observed in the cell.
Loss of activity caused by photobleaching can be controlled by reducing the intensity or time-span of light exposure, by increasing the concentration of fluorophores, by reducing the frequency and thus the photon energy of the input light, or by employing more robust fluorophores that are less prone to bleaching (e.g. cyanine dyes, Alexa Fluors, DyLight Fluors, Atto dyes, Janelia dyes and others). To a reasonable approximation, a given molecule will be destroyed after a constant exposure (intensity of emission × emission time × number of cycles) because, in a constant environment, each absorption-emission cycle has an equal probability of causing photobleach |
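This constant-exposure approximation can be made concrete with a toy model (not from the source): if each absorption-emission cycle independently destroys the fluorophore with the same small probability, the surviving fraction depends only on the product of intensity and time, however the two are traded off. A minimal Python sketch, with all numbers illustrative:

```python
import math

def surviving_fraction(p_per_cycle: float, n_cycles: float) -> float:
    """Fraction of fluorophores still able to fluoresce after n_cycles
    excitation cycles, assuming an equal, independent bleaching
    probability per absorption-emission cycle (toy model)."""
    return math.exp(-p_per_cycle * n_cycles)

P_BLEACH = 1e-6          # hypothetical bleaching probability per cycle
CYCLES_PER_UNIT = 1e4    # hypothetical cycles per unit (intensity * time)

# Trading intensity against exposure time leaves the product unchanged,
# so the predicted photobleaching is the same in all three cases.
for intensity, seconds in [(1.0, 100.0), (10.0, 10.0), (100.0, 1.0)]:
    cycles = intensity * seconds * CYCLES_PER_UNIT
    print(intensity, seconds, round(surviving_fraction(P_BLEACH, cycles), 3))
# Each line prints a surviving fraction of about 0.368 (= exp(-1)).
```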
https://en.wikipedia.org/wiki/Megatons%20to%20Megawatts%20Program | The Megatons to Megawatts Program, also called the United States-Russia Highly Enriched Uranium Purchase Agreement, was an agreement between Russia and the United States whereby Russia converted 500 metric tons of "excess" weapons-grade uranium (enough for 20,000 warheads) into 15,000 metric tons of low enriched uranium, which was purchased by the US for use in its commercial nuclear power plants. The official name of the program is the "Agreement between the Government of the Russian Federation and the Government of the United States of America Concerning the Disposition of Highly-Enriched Uranium Extracted from Nuclear Weapons", dated February 18, 1993. Under this Agreement, Russia agreed to supply the United States with low-enriched uranium (LEU) obtained from high-enriched uranium (HEU) found to be in excess of Russian defense purposes. The United States agreed to purchase the low-enriched uranium fuel.
The original proposal for this program was made by Thomas Neff, a physicist at MIT, in an October 24, 1991 Op-Ed in The New York Times. On August 28, 1992, in Moscow, U.S. and Russian negotiators initialed the 20-year agreement, and President George H. W. Bush announced it on August 31, 1992. In 1993, the agreement was signed under President Bill Clinton, and the commercial implementing contract was then signed by both parties. The program was successfully completed in December 2013.
The program was credited for being one of the most successful disarmament programs in history, but its low set price for nuclear fuel caused Western companies to not invest in uranium refining capacity, resulting by 2022 in Russia's government-owned Rosatom becoming the supplier of about 50% of the world's enriched uranium, and 25% of the nuclear fuel used in the US.
Terms of the program
Under this Agreement, the United States and Russia agreed to commercially implement a 20-year program to convert 500 metric tons of HEU (uranium-235 enriched to 90 percent) ta |
https://en.wikipedia.org/wiki/Compaq%20Portable%20II | The Compaq Portable II is the fourth product in the Compaq Portable series to be brought out by Compaq Computer Corporation. Released in 1986 at a price of US$3499, the Portable II much improved upon its predecessor, the Compaq Portable 286, which had been Compaq's version of the PC AT in the original Compaq Portable chassis. The Portable 286 came equipped with a 6/8-MHz 286 and a high-speed 20-MB hard drive, while the Portable II included an 8-MHz processor and was lighter and smaller than the previous Compaq Portables. There were four models of the Compaq Portable II. The basic Model 1 shipped with one 5.25" floppy drive and 256 KB of RAM. The Model 2 added a second 5.25" floppy drive and sold for $3599. The Model 3 shipped with a 10-MB hard disk in addition to one 5.25" floppy drive and 640 KB of RAM for $4799 at launch. The Model 4 upgraded the Model 3 with a 20-MB hard drive and sold for $4999. There may also have been a 4.1-MB hard drive included at one point. The Compaq Portable II was significantly lighter than its predecessors: the Model 1 weighed just 23.6 pounds, compared with the 30.5 pounds of the Compaq Portable 286. Compaq shipped the system with only a small demo disk; MS-DOS 3.1 had to be purchased separately.
There are at least two reported cases of improperly serviced computers exploding when the non-rechargeable lithium battery on the motherboard was connected to the power supply. There were no recorded injuries. The Compaq Portable II was succeeded by the Compaq Portable III in 1987.
Hardware
The Compaq Portable II had room for additional aftermarket upgrades. Compaq manufactured four memory expansion boards: 512 KB and 2048 KB ISA memory cards, and 512 KB and 1536 KB memory boards that attached to the back of the motherboard. With 640 KB installed on the motherboard and both the largest ISA card and the largest expansion board, the computer could be upgraded to a maximum of 4.2 MB of RAM. The motherboard also had space for an optional 80287 math coprocessor. There |
https://en.wikipedia.org/wiki/Jonathan%20Shewchuk | Jonathan Richard Shewchuk is a Professor in Computer Science at the University of California, Berkeley.
He obtained his B.S. in Physics and Computing Science from Simon Fraser University in 1990, and his M.S. and Ph.D. in Computer Science from Carnegie Mellon University, the latter in 1997.
He conducts research in scientific computing, computational geometry (especially mesh generation, numerical robustness, and surface reconstruction), numerical methods, and physically based animation.
He is also the author of Three Sins of Authors in Computer Science and Math.
In 2003 he was awarded the J. H. Wilkinson Prize for Numerical Software for writing the Triangle software package, which computes high-quality unstructured triangular meshes.
He appears in online course videos of the CS 61B: Data Structures class at the University of California, Berkeley. |
https://en.wikipedia.org/wiki/Mercurial | Mercurial is a distributed revision control tool for software developers. It is supported on Microsoft Windows and Unix-like systems, such as FreeBSD, macOS, and Linux.
Mercurial's major design goals include high performance and scalability, decentralization, fully distributed collaborative development, robust handling of both plain text and binary files, and advanced branching and merging capabilities, while remaining conceptually simple. It includes an integrated web-interface. Mercurial has also taken steps to ease the transition for users of other version control systems, particularly Subversion. Mercurial is primarily a command-line driven program, but graphical user interface extensions are available, e.g. TortoiseHg, and several IDEs offer support for version control with Mercurial. All of Mercurial's operations are invoked as arguments to its driver program hg (a reference to Hg – the chemical symbol of the element mercury).
Olivia Mackall originated Mercurial and served as its lead developer until late 2016. Mercurial is released as free software under the GPL-2.0-or-later license. It is mainly implemented using the Python programming language, but includes a binary diff implementation written in C.
History
Mackall first announced Mercurial on 19 April 2005. The impetus for this was the announcement earlier that month by Bitmover that they were withdrawing the free version of BitKeeper because of the development of SourcePuller.
BitKeeper had been used for the version control requirements of the Linux kernel project. Mackall decided to write a distributed version control system as a replacement for use with the Linux kernel. This project started a few days after the now well-known Git project was initiated by Linus Torvalds with similar aims.
The Linux kernel project decided to use Git rather than Mercurial, but Mercurial is now used by many other projects (see below).
In an answer on the Mercurial mailing list, Olivia Mackall explained how the name |
https://en.wikipedia.org/wiki/Subdural%20space | The subdural space (or subdural cavity) is a potential space that can be opened by the separation of the arachnoid mater from the dura mater as the result of trauma, pathologic process, or the absence of cerebrospinal fluid as seen in a cadaver. In the cadaver, due to the absence of cerebrospinal fluid in the subarachnoid space, the arachnoid mater falls away from the dura mater. It may also be the site of trauma, such as a subdural hematoma, causing abnormal separation of dura and arachnoid mater. Hence, the subdural space is referred to as "potential" or "artificial" space.
See also
Epidural space
Subarachnoid space
Meninges
Subdural hematoma |
https://en.wikipedia.org/wiki/ISO/IEC%2010967 | ISO/IEC 10967, Language independent arithmetic (LIA), is a series of
standards on computer arithmetic. It is compatible with ISO/IEC/IEEE 60559:2011,
better known as IEEE 754-2008, and many of the
specifications concern IEEE 754 special values
(though such values are not required by LIA itself, unless the parameter iec559 is true).
It was developed by the working group ISO/IEC JTC1/SC22/WG11, which was disbanded in 2011.
LIA consists of three parts:
Part 1: Integer and floating point arithmetic, second edition published 2012.
Part 2: Elementary numerical functions, first edition published 2001.
Part 3: Complex integer and floating point arithmetic and complex elementary numerical functions, first edition published 2006.
Parts
Part 1
Part 1 deals with the basic integer and floating point datatypes (for multiple radices, including 2 and 10),
but unlike IEEE 754-2008 not the representation of the values. Part 1 also
deals with basic arithmetic, including comparisons, on values of such
datatypes. The parameter iec559 is expected to be
true for most implementations of LIA-1.
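As an informal illustration (this is not one of the standard's suggested bindings), one can probe from Python whether the platform float looks like IEEE 754 binary64 and whether the IEC 60559 special values behave as an implementation with iec559 = true would require:

```python
import math
import sys

# Heuristic check that Python's float is IEEE 754 binary64, the datatype
# an LIA-1 implementation with iec559 = true would typically provide.
fi = sys.float_info
assert fi.mant_dig == 53 and fi.max_exp == 1024  # binary64 layout

# IEC 60559 special values, which LIA-1 only requires when iec559 is true:
inf = float("inf")
nan = float("nan")
assert inf > fi.max           # infinity exceeds every finite float
assert nan != nan             # a NaN compares unequal even to itself
assert math.isnan(inf - inf)  # inf - inf yields a NaN
print("float looks like an iec559-style binary64")
```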
Part 1 was revised to the second edition to become more in line with the specifications
in parts 2 and 3.
Part 2
Part 2 deals with some additional "basic" operations on integer and floating point
datatype values, but focuses primarily on specifying requirements on numerical
versions of elementary functions. Much of the specifications in LIA-2 are inspired
by the specifications in Ada for elementary functions.
Part 3
Part 3 generalizes parts 1 and 2 to deal with imaginary and complex
datatypes and arithmetic and elementary functions on such values.
Much of the specifications in LIA-3 are inspired by the specifications
for imaginary and complex datatypes and operations in
C, Ada and
Common Lisp.
Bindings
Each of the parts provides suggested bindings for a number of
programming languages. These are not part of the LIA standards,
just suggestions, and are not complete. Authors of a programming
l |
https://en.wikipedia.org/wiki/ICL%202900%20Series | The ICL 2900 Series was a range of mainframe computer systems announced by the British manufacturer ICL on 9 October 1974. The company had started development under the name "New Range" immediately on its formation in 1968. The range was not designed to be compatible with any previous machines produced by the company, nor for compatibility with any competitor's machines: rather, it was conceived as a synthetic option, combining the best ideas available from a variety of sources.
In marketing terms, the 2900 Series was superseded by Series 39 in the mid-1980s; however, Series 39 was essentially a new set of machines implementing the 2900 Series architecture, as were subsequent ICL machines branded "Trimetra".
Origins
When ICL was formed in 1968 as a result of the merger of
International Computers and Tabulators (ICT) with English Electric, Leo Marconi and Elliott Automation, the company
considered several options for its future product line. These included enhancements to either ICT's 1900 Series or the English Electric System 4,
and a development based on J. K. Iliffe's Basic Language Machine. The option finally selected was the so-called Synthetic Option: a new design conceptualized from scratch.
As the name implies, the design was influenced by many sources, including earlier ICL machines. The design of Burroughs mainframes was influential, although ICL rejected the concept of optimising the design for one high-level language. The Multics system provided other ideas, notably in the area of protection. However, the biggest single outside influence was probably the MU5 machine developed at Manchester University.
Architectural concepts
The virtual machine
The 2900 Series architecture uses the concept of a virtual machine as the set of resources available to a program. The concept of a virtual machine in the 2900 Series architecture differs from the term as used in other environments. Because each program runs in its own virtual machine, the concept may be like |
https://en.wikipedia.org/wiki/L-notation | L-notation is an asymptotic notation analogous to big-O notation, denoted as $L_n[\alpha, c]$ for a bound variable $n$ tending to infinity. Like big-O notation, it is usually used to roughly convey the rate of growth of a function, such as the computational complexity of a particular algorithm.
Definition
It is defined as
$$L_n[\alpha, c] = e^{(c + o(1))(\ln n)^{\alpha}(\ln \ln n)^{1-\alpha}},$$
where c is a positive constant and $\alpha$ is a constant with $0 \leq \alpha \leq 1$.
L-notation is used mostly in computational number theory, to express the complexity of algorithms for difficult number theory problems, e.g. sieves for integer factorization and methods for solving discrete logarithms. The benefit of this notation is that it simplifies the analysis of these algorithms. The factor $e^{c(\ln n)^{\alpha}(\ln \ln n)^{1-\alpha}}$ expresses the dominant term, and the $o(1)$ takes care of everything smaller.
When $\alpha$ is 0, then
$$L_n[0, c] = e^{(c + o(1))\ln \ln n} = (\ln n)^{c + o(1)}$$
is a polylogarithmic function (a polynomial function of ln n);
when $\alpha$ is 1, then
$$L_n[1, c] = e^{(c + o(1))\ln n} = n^{c + o(1)}$$
is a fully exponential function of ln n (and thereby polynomial in n).
If $\alpha$ is between 0 and 1, the function is subexponential in ln n (and superpolynomial); the numerical sketch below illustrates the three regimes.
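The following Python sketch evaluates the main term of $L_n[\alpha, c]$ (ignoring the $o(1)$) for an illustrative 512-bit $n$:

```python
import math

def L(n: float, alpha: float, c: float) -> float:
    """Main term of L-notation, exp(c * (ln n)^alpha * (ln ln n)^(1 - alpha)),
    with the o(1) in the exponent ignored."""
    ln_n = math.log(n)
    return math.exp(c * ln_n**alpha * math.log(ln_n)**(1 - alpha))

n = 2.0**512  # an illustrative 512-bit number
print(f"{L(n, 0.0, 2.0):.3e}")  # alpha = 0: polylogarithmic, (ln n)^2
print(f"{L(n, 0.5, 1.0):.3e}")  # alpha = 1/2: subexponential growth
print(f"{L(n, 1.0, 1.0):.3e}")  # alpha = 1: exponential in ln n, i.e. n itself
```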
Examples
Many general-purpose integer factorization algorithms have subexponential time complexities. The best is the general number field sieve, which has an expected running time of
$$L_n[1/3, c] = e^{(c + o(1))(\ln n)^{1/3}(\ln \ln n)^{2/3}}$$
for $c = (64/9)^{1/3} \approx 1.923$. The best such algorithm prior to the number field sieve was the quadratic sieve, which has running time
$$L_n[1/2, 1] = e^{(1 + o(1))(\ln n)^{1/2}(\ln \ln n)^{1/2}}.$$
For the elliptic curve discrete logarithm problem, the fastest general purpose algorithm is the baby-step giant-step algorithm, which has a running time on the order of the square-root of the group order n. In L-notation this would be
$$L_n[1, 1/2] = n^{1/2 + o(1)}.$$
The existence of the AKS primality test, which runs in polynomial time, means that the time complexity for primality testing is known to be at most
$$L_n[0, c] = (\ln n)^{c + o(1)},$$
where c has been proven to be at most 6.
History
L-notation has been defined in various forms throughout the literature. The first use of it came from Carl Pomerance in his paper "Analysis and comparison of some integer factoring algorithms". This form had only the $c$ parameter: the $\alpha$ in the formula was for |
https://en.wikipedia.org/wiki/China%20Compulsory%20Certificate | The China Compulsory Certificate mark, commonly known as a CCC Mark, is a compulsory safety mark for many products imported, sold or used in the Chinese market. It was implemented on May 1, 2002, and became fully effective on August 1, 2003.
It is the result of the integration of China's two previous compulsory inspection systems, namely "CCIB" (Safety Mark, introduced in 1989 and required for products in 47 product categories) and "CCEE" (also known as "Great Wall" Mark, for electrical commodities in 7 product categories), into a single procedure.
Applicable products
The CCC mark is required for both Chinese-manufactured and foreign-imported products; the certification process involves the Guobiao standards.
The mandatory products include, among others:
Electrical wires and cables
Circuit switches, electric devices for protection or connection
Low-voltage Electrical Apparatus
Low power motors
Electric tools
Welding machines
Household and similar electrical appliances
Audio and video apparatus (not including the audio apparatus for broadcasting service and automobiles)
Information technology equipment
Lighting apparatus (not including the lighting apparatus with the voltage lower than 36V)
Motor vehicles and safety accessories
Motor vehicle Tires
Safety Glasses
Agricultural Machinery
Telecommunication Terminal Products
Fire Fighting Equipment
Safety Protection Products
Wireless LAN products
Decoration Materials
Toys
Implementation rules
Apart from the GB standards, the implementation rules are the second important component that forms the basis of CCC certification. The implementation rules determine the process of CCC certification and list the products for which certification is mandatory. Because the rules are amended frequently, it is important to obtain the latest version of the implementation rules before starting the certification process.
In 2014, a comprehensive regulatory amendment of the implementation rules took place. The major chan |
https://en.wikipedia.org/wiki/Base36 | Base36 is a binary-to-text encoding scheme that represents binary data in an ASCII string format by translating it into a radix-36 representation. The choice of 36 is convenient in that the digits can be represented using the Arabic numerals 0–9 and the Latin letters A–Z (the ISO basic Latin alphabet).
Each base36 digit needs less than 6 bits of information to be represented.
Conversion
Signed 32- and 64-bit integers will only hold at most 6 or 13 base-36 digits, respectively (a value with that many base-36 digits can already overflow a 32- or 64-bit integer). For example, the 64-bit signed integer maximum value of "9223372036854775807" is "1Y2P0IJ32E8E7" in base-36.
Similarly, the 32-bit signed integer maximum value of "2147483647" is "ZIK0ZJ" in base-36.
Standard implementations
The C standard library since C89 supports base36 numbers via the strtol and strtoul functions.
In the Common Lisp standard (ANSI INCITS 226-1994), functions like parse-integer support a radix of 2 to 36.
Java SE supports conversion from/to String to different bases from 2 up to 36. For example, Integer.parseInt(String s, int radix) and Integer.toString(int i, int radix).
Just like Java, JavaScript also supports conversion from/to String to different bases from 2 up to 36.
PHP, like Java, supports conversion from/to String to different bases from 2 up to 36 using the base_convert function, available since PHP 4.
Go supports conversion of integers to strings in bases from 2 up to 36 using the built-in strconv.FormatInt() and strconv.FormatUint() functions, and conversion from strings encoded in bases from 2 up to 36 using the built-in strconv.ParseInt() and strconv.ParseUint() functions.
Python allows conversions of strings from base 2 to base 36.
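Alongside these built-ins, the conversion itself is short to write by hand; a minimal Python sketch (the helper names are illustrative, and decoding simply reuses the built-in int()):

```python
DIGITS = "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ"

def to_base36(n: int) -> str:
    """Encode a non-negative integer as a base-36 string."""
    if n == 0:
        return "0"
    digits = []
    while n:
        n, r = divmod(n, 36)
        digits.append(DIGITS[r])
    return "".join(reversed(digits))

def from_base36(s: str) -> int:
    """Decode a base-36 string; Python's built-in int() does the work."""
    return int(s, 36)

assert to_base36(2147483647) == "ZIK0ZJ"                     # 32-bit max
assert from_base36("1Y2P0IJ32E8E7") == 9223372036854775807   # 64-bit max
```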
See also
Uuencoding |
https://en.wikipedia.org/wiki/Sofrito | Sofrito (Spanish), sofregit (Catalan), soffritto (Italian), or refogado (Portuguese) is a basic preparation in Mediterranean, Latin American, Spanish, Italian and Portuguese cooking. It typically consists of aromatic ingredients cut into small pieces and sautéed or braised in cooking oil for a long period of time over a low heat.
In modern Spanish cuisine, sofrito consists of garlic, onion and peppers cooked in olive oil, and optionally tomatoes or carrots. This is known as refogado in Portuguese-speaking nations, where only garlic, onions, and olive oil are considered essential, tomato and bay laurel leaves being the other most common ingredients.
Mediterranean
The earliest mentioned recipe, from around the middle of the 14th century, was made with only onion and oil.
In Italian cuisine, a mixture of chopped onions, carrots and celery is called battuto, which, slowly cooked in olive oil, becomes soffritto. It may also contain garlic, shallot, or leek.
In Greek cuisine, sofrito refers to a dish found almost exclusively in Corfu. It is served less commonly in other regions of Greece and is often referred to as 'Corfu sofrito' outside of Corfu. It is made with veal or beef, slowly cooked with garlic, wine, herbs, sugar and wine vinegar to produce an umami-rich sauce with softened meat. It is usually served with rice and potatoes.
Latin America
In Cuban cuisine, sofrito is prepared in a similar fashion, but the main components are Spanish onions, garlic, and green or red bell peppers. Other pepper varieties are also often used instead of or in addition to bell peppers. It is a base for beans, stews, rices, and other dishes. Other secondary components include tomato sauce, dry white wine, cumin, bay leaf, and cilantro. Chorizo (a kind of spicy, cured sausage), tocino (salt pork) and ham are added for specific recipes, such as beans.
In Dominican cuisine, sofrito is also called sazón, and is a liquid mixture containing vinegar, water, and sometimes tomato juice. A sofrito or sazón is used for rice, stews, beans, and other dishes. A typical Dominican sofrito is made |
https://en.wikipedia.org/wiki/Reactive%20arthritis | Reactive arthritis, also known as Reiter's syndrome, is a form of inflammatory arthritis that develops in response to an infection in another part of the body (cross-reactivity). Coming into contact with bacteria and developing an infection can trigger the disease. By the time the patient presents with symptoms, often the "trigger" infection has been cured or is in remission in chronic cases, thus making determination of the initial cause difficult.
The arthritis often is coupled with other characteristic symptoms; this has been called Reiter's syndrome, Reiter's disease or Reiter's arthritis. The term "reactive arthritis" is increasingly used as a substitute for this designation because of Hans Reiter's Nazi Party affiliation and war crimes.
The manifestations of reactive arthritis include the following triad of symptoms: an inflammatory arthritis of large joints, inflammation of the eyes in the form of conjunctivitis or uveitis, and urethritis in men or cervicitis in women. Arthritis occurring alone following sexual exposure or enteric infection is also known as reactive arthritis. Patients can also present with mucocutaneous lesions, as well as psoriasis-like skin lesions such as circinate balanitis, and keratoderma blennorrhagicum. Enthesitis can involve the Achilles tendon resulting in heel pain. Not all affected persons have all the manifestations.
The clinical pattern of reactive arthritis commonly consists of an inflammation of fewer than five joints which often includes the knee or sacroiliac joint. The arthritis may be "additive" (more joints become inflamed in addition to the primarily affected one) or "migratory" (new joints become inflamed after the initially inflamed site has already improved).
Reactive arthritis is an RF-seronegative, HLA-B27-linked arthritis often precipitated by genitourinary or gastrointestinal infections. The most common triggers are intestinal infections (with Salmonella, Shigella or Campylobacter) and sexually transmitted infectio |
https://en.wikipedia.org/wiki/Type%201%20diabetes | Type 1 diabetes (T1D), formerly known as juvenile diabetes, is an autoimmune disease that originates when cells that make insulin (beta cells) are destroyed by the immune system. Insulin is a hormone required for the cells to use blood sugar for energy and it helps regulate glucose levels in the bloodstream. Before treatment this results in high blood sugar levels in the body. The common symptoms of this elevated blood sugar are frequent urination, increased thirst, increased hunger, weight loss, and other serious complications. Additional symptoms may include blurry vision, tiredness, and slow wound healing. Symptoms typically develop over a short period of time, often a matter of weeks if not months.
The cause of type 1 diabetes is unknown, but it is believed to involve a combination of genetic and environmental factors. The underlying mechanism involves an autoimmune destruction of the insulin-producing beta cells in the pancreas. Diabetes is diagnosed by testing the level of sugar or glycated hemoglobin (HbA1C) in the blood. Type 1 diabetes can typically be distinguished from type 2 by testing for the presence of autoantibodies and/or declining levels/absence of C-peptide.
There is no known way to prevent type 1 diabetes. Treatment with insulin is required for survival. Insulin therapy is usually given by injection just under the skin but can also be delivered by an insulin pump. A diabetic diet and exercise are important parts of management. If left untreated, diabetes can cause many complications. Complications of relatively rapid onset include diabetic ketoacidosis and nonketotic hyperosmolar coma. Long-term complications include heart disease, stroke, kidney failure, foot ulcers and damage to the eyes. Furthermore, since insulin lowers blood sugar levels, complications may arise from low blood sugar if more insulin is taken than necessary.
Type 1 diabetes makes up an estimated 5–10% of all diabetes cases. The number of people affected globally is unknown |
https://en.wikipedia.org/wiki/Yilmaz%20theory%20of%20gravitation | The Yilmaz theory of gravitation is an attempt by Huseyin Yilmaz (1924–2013; Turkish: Hüseyin Yılmaz) and his coworkers to formulate a classical field theory of gravitation which is similar to general relativity in weak-field conditions, but in which event horizons cannot appear.
Yilmaz's work has been criticized on the grounds that:
his proposed field equation is ill-defined
event horizons can occur in weak field situations according to the general theory of relativity, in the case of a supermassive black hole
the theory is consistent only with either a completely empty universe or a negative energy vacuum
It is well known that attempts to quantize general relativity along the same lines which lead from Maxwell's classical field theory of electromagnetism to quantum electrodynamics fail, and that it has proven very difficult to construct a theory of quantum gravity which goes over to general relativity in an appropriate limit. However Yilmaz has claimed that his theory is "compatible with quantum mechanics". He suggests that it might be an alternative to superstring theory.
In his theory, Yilmaz wishes to retain the left hand side of the Einstein field equation (namely the Einstein tensor, which is well-defined for any Lorentzian manifold, independent of general relativity) but to modify the right hand side, the stress–energy tensor, by adding a kind of gravitational contribution. According to Yilmaz's critics, this additional term is not well-defined, and cannot be made well defined.
No astronomers have tested his ideas, although some have tested competitors of general relativity; see tests of general relativity. |
https://en.wikipedia.org/wiki/Pair%20of%20pants%20%28mathematics%29 | In mathematics, a pair of pants is a surface which is homeomorphic to the three-holed sphere. The name comes from considering one of the removed disks as the waist and the two others as the cuffs of a pair of pants.
Pairs of pants are used as building blocks for compact surfaces in various theories. Two important applications are to hyperbolic geometry, where decompositions of closed surfaces into pairs of pants are used to construct the Fenchel-Nielsen coordinates on Teichmüller space, and in topological quantum field theory where they are the simplest non-trivial cobordisms between 1-dimensional manifolds.
Pants and pants decomposition
Pants as topological surfaces
A pair of pants is any surface that is homeomorphic to a sphere with three holes, which formally is the result of removing from the sphere three open disks with pairwise disjoint closures. Thus a pair of pants is a compact surface of genus zero with three boundary components.
The Euler characteristic of a pair of pants is equal to −1, and the only other surface with this property is the punctured torus (a torus minus an open disk).
Pants decompositions
The importance of the pairs of pants in the study of surfaces stems from the following property: define the complexity of a connected compact surface $S$ of genus $g$ with $b$ boundary components to be $\xi(S) = 3g - 3 + b$, and for a non-connected surface take the sum over all components. Then the only surfaces with negative Euler characteristic and complexity zero are disjoint unions of pairs of pants. Furthermore, for any surface $S$ and any simple closed curve $c$ on $S$ which is not homotopic to a boundary component, the compact surface obtained by cutting $S$ along $c$ has a complexity that is strictly less than that of $S$. In this sense, pairs of pants are the only "irreducible" surfaces among all surfaces of negative Euler characteristic.
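For example, under the complexity convention above, a closed surface of genus 2 has

$$\xi(S_2) = 3g - 3 + b = 3 \cdot 2 - 3 + 0 = 3,$$

so three disjoint simple closed curves cut it into $2g - 2 = 2$ pairs of pants, each contributing $-1$ to its Euler characteristic $\chi(S_2) = 2 - 2g = -2$.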
By a recursion argument, this implies that for any surface there is a system of simple closed curves which cut the surface into pairs of pants. This is ca |
https://en.wikipedia.org/wiki/Personification%20of%20Russia | The personification of Russia is traditionally feminine and most commonly maternal since medieval times.
Most common terms for national personification of Russia are:
Mother Russia (Матушка Россия, tr. Matushka Rossiya, "Mother Russia"; also Россия-матушка, tr. Rossiya-matushka, "Russia the Mother"; Мать-Россия, tr. Mat'-Rossiya; Матушка Русь, tr. Matushka Rus', "Mother Rus'"),
Homeland the Mother (Родина-мать, tr. Rodina-mat').
In the Russian language, the concept of motherland is rendered by two terms: "родина" (tr. rodina), literally "place of birth", and "отчизна" (tr. otchizna), literally "fatherland".
Harald Haarmann and Orlando Figes see the goddess Mokosh as a source of the "Mother Russia" concept.
Usage
During the Soviet period, the Bolsheviks extensively utilized the image of "Motherland", especially during World War II.
Statues
During the Soviet era, many statues depicting the Mother Motherland were built, most to commemorate the Great Patriotic War. These include:
The Motherland Calls (Родина-мать зовёт, tr. Rodina-mat' zovyot), a colossal statue in Volgograd, Russia, commemorating the Battle of Stalingrad
Mother Motherland (Батьківщина-Мати, tr. Bat'kivshchyna-Maty; Родина-мать, tr. Rodina-mat'), now called Mother Ukraine, is a monumental statue in Kyiv that is a part of the Museum of the History of Ukraine in World War II
Mother Motherland (Saint Petersburg), a statue at the Piskarevskoye Memorial Cemetery, St. Petersburg, Russia
Mother Russia (Kaliningrad), a monument in Kaliningrad, Russia
Mother Motherland Mourning over Her Perished Sons (Родина-мать, скорбящая о погибших сыновьях, tr. Rodina-mat', skorbyashchaya o pogibshikh synov'yakh), Minsk, Belarus, commemorating the dead in Afghanistan
Mother Motherland, a monument in Naberezhnye Chelny, Russia
Mother Motherland (Pavlovsk), a memorial complex, Pavlovsk, Voronezh Oblast, Russia
Motherland Monument (Matveev Kurgan)
See also
Defender of the Fatherland Day
Mat Zemlya
Russian Bear |
https://en.wikipedia.org/wiki/Mid-range | In statistics, the mid-range or mid-extreme is a measure of central tendency of a sample, defined as the arithmetic mean of the maximum and minimum values of the data set:
$$M = \frac{\max x_i + \min x_i}{2}.$$
The mid-range is closely related to the range, a measure of statistical dispersion defined as the difference between maximum and minimum values.
The two measures are complementary in the sense that if one knows the mid-range $M$ and the range $R$, one can find the sample maximum and minimum values: the maximum is $M + R/2$ and the minimum is $M - R/2$.
The mid-range is rarely used in practical statistical analysis, as it lacks efficiency as an estimator for most distributions of interest, because it ignores all intermediate points, and lacks robustness, as outliers change it significantly. Indeed, for many distributions it is one of the least efficient and least robust statistics. However, it finds some use in special cases: it is the maximally efficient estimator for the center of a uniform distribution, trimmed mid-ranges address robustness, and as an L-estimator, it is simple to understand and compute.
Robustness
The midrange is highly sensitive to outliers and ignores all but two data points. It is therefore a very non-robust statistic, having a breakdown point of 0, meaning that a single observation can change it arbitrarily. Further, it is highly influenced by outliers: increasing the sample maximum or decreasing the sample minimum by $x$ changes the mid-range by $x/2$, while it changes the sample mean, which also has a breakdown point of 0, by only $x/n$. It is thus of little use in practical statistics, unless outliers are already handled.
A trimmed midrange is known as a midsummary: the n% trimmed midrange is the average of the n% and (100−n)% percentiles, and is more robust, having a breakdown point of n%. In the middle of these is the midhinge, which is the 25% midsummary. The median can be interpreted as the fully trimmed (50%) mid-range; this accords with the convention that the median of an even number of points is the mean of the two middle points.
These trimmed mid |
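A small Python sketch of these estimators, assuming the percentile-averaging definitions above (the data values are illustrative):

```python
import statistics

def midrange(xs):
    """Arithmetic mean of the sample maximum and minimum."""
    return (max(xs) + min(xs)) / 2

def midsummary(xs, percent):
    """Average of the percent% and (100 - percent)% percentiles,
    i.e. the percent% trimmed mid-range."""
    qs = statistics.quantiles(xs, n=100, method="inclusive")
    return (qs[percent - 1] + qs[100 - percent - 1]) / 2

data = [1, 2, 3, 4, 5, 6, 7, 8, 9, 100]   # one large outlier
print(midrange(data))                      # 50.5: dragged out by the outlier
print(midsummary(data, 25))                # midhinge: stays near the bulk
print(statistics.median(data))             # 5.5: the fully trimmed mid-range
```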
https://en.wikipedia.org/wiki/Nucleoside%20analogue | Nucleoside analogues are structural analogues of a nucleoside, which normally contains a nucleobase and a sugar. Nucleotide analogues are analogues of a nucleotide, which normally has one to three phosphates linked to a nucleoside. Both types of compounds can deviate from what they mimic in a number of ways, as changes can be made to any of the constituent parts (nucleobase, sugar, phosphate). They are related to nucleic acid analogues.
Nucleoside and nucleotide analogues can be used in therapeutic drugs, including a range of antiviral products used to prevent viral replication in infected cells. The most commonly used is acyclovir.
Nucleotide and nucleoside analogues can also be found naturally. Examples include ddhCTP (3′-deoxy-3′,4′-didehydro-CTP), produced by the human antiviral protein viperin, and sinefungin (an S-adenosylmethionine analogue) produced by some Streptomyces.
Function
These agents can be used against hepatitis B virus, hepatitis C virus, herpes simplex, and HIV. Once they are phosphorylated, they work as antimetabolites by being similar enough to nucleotides to be incorporated into growing DNA strands; but they act as chain terminators and stop viral DNA polymerase. They are not specific to viral DNA and also affect mitochondrial DNA. Because of this they have side effects such as bone marrow suppression.
There is a large family of nucleoside analogue reverse transcriptase inhibitors, because DNA production by reverse transcriptase is very different from normal human DNA replication, so it is possible to design nucleoside analogues that are preferentially incorporated by the former. Some nucleoside analogues, however, can function both as NRTIs and polymerase inhibitors for other viruses (e.g., hepatitis B).
Less selective nucleoside analogues are used as chemotherapy agents to treat cancer, e.g. gemcitabine. Some, such as ticagrelor and cangrelor, are also used as antiplatelet drugs to prevent the formation of blood clots.
Resistance
Resistance can de |
https://en.wikipedia.org/wiki/Effective%20atomic%20number%20%28compounds%20and%20mixtures%29 | The atomic number of a material exhibits a strong and fundamental relationship with the nature of radiation interactions within that medium. There are numerous mathematical descriptions of different interaction processes that are dependent on the atomic number, . When dealing with composite media (i.e. a bulk material composed of more than one element), one therefore encounters the difficulty of defining . An effective atomic number in this context is equivalent to the atomic number but is used for compounds (e.g. water) and mixtures of different materials (such as tissue and bone). This is of most interest in terms of radiation interaction with composite materials. For bulk interaction properties, it can be useful to define an effective atomic number for a composite medium and, depending on the context, this may be done in different ways. Such methods include (i) a simple mass-weighted average, (ii) a power-law type method with some (very approximate) relationship to radiation interaction properties or (iii) methods involving calculation based on interaction cross sections. The latter is the most accurate approach (Taylor 2012), and the other more simplified approaches are often inaccurate even when used in a relative fashion for comparing materials.
In many textbooks and scientific publications, the following simplistic and often dubious sort of method is employed. One such proposed formula for the effective atomic number, $Z_\text{eff}$, is as follows:
$$Z_\text{eff} = \sqrt[2.94]{\sum_i f_i \times (Z_i)^{2.94}}$$
where
$f_i$ is the fraction of the total number of electrons associated with each element, and
$Z_i$ is the atomic number of each element.
An example is that of water (H2O), made up of two hydrogen atoms (Z = 1) and one oxygen atom (Z = 8); the total number of electrons is 1 + 1 + 8 = 10, so the fraction of electrons for the two hydrogens is 2/10 and for the one oxygen is 8/10. So the $Z_\text{eff}$ for water is:
$$Z_\text{eff} = \sqrt[2.94]{0.2 \times 1^{2.94} + 0.8 \times 8^{2.94}} = 7.42$$
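A small Python sketch of this electron-fraction power-law calculation (the function name is illustrative; the 2.94 exponent and the water numbers are those quoted above):

```python
def z_eff_power_law(composition, m=2.94):
    """Power-law effective atomic number.

    composition: (electron_fraction, atomic_number) pairs for each element.
    m: empirical exponent (2.94 in the formula above).
    """
    return sum(f * z**m for f, z in composition) ** (1.0 / m)

# Water, H2O: 10 electrons in total, 2 from hydrogen and 8 from oxygen.
water = [(0.2, 1), (0.8, 8)]
print(round(z_eff_power_law(water), 2))  # 7.42
```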
The effective atomic number is important for predicting how photons interact with a substance, as certain types of photon interactions |
https://en.wikipedia.org/wiki/Programming%20complexity | Programming complexity (or software complexity) is a term that includes software properties that affect internal interactions. Several commentators distinguish between the terms "complex" and "complicated". Complicated implies being difficult to understand, but ultimately knowable. Complex, by contrast, describes the interactions between entities. As the number of entities increases, the number of interactions between them increases exponentially, making it impossible to know and understand them all. Similarly, higher levels of complexity in software increase the risk of unintentionally interfering with interactions, thus increasing the risk of introducing defects when changing the software. In more extreme cases, it can make modifying the software virtually impossible.
The idea of linking software complexity to software maintainability has been explored extensively by Professor Manny Lehman, who developed his laws of software evolution. He and his co-author Les Belady explored numerous software metrics that could be used to measure the state of software, eventually concluding that the only practical solution is to use deterministic complexity models.
Measures
Several measures of software complexity have been proposed. Many of these, although yielding a good representation of complexity, do not lend themselves to easy measurement. Some of the more commonly used metrics are
McCabe's cyclomatic complexity metric
Halstead's software science metrics
Henry and Kafura introduced "Software Structure Metrics Based on Information Flow" in 1981, which measures complexity as a function of "fan-in" and "fan-out". They define fan-in of a procedure as the number of local flows into that procedure plus the number of data structures from which that procedure retrieves information. Fan-out is defined as the number of local flows out of that procedure plus the number of data structures that the procedure updates. Local flows relate to data passed to, and from procedures that cal |
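Henry and Kafura's score is commonly summarized as procedure length multiplied by the square of (fan-in × fan-out); a minimal Python sketch under that reading, with hypothetical procedure data:

```python
def henry_kafura(length: int, fan_in: int, fan_out: int) -> int:
    """Information-flow complexity of one procedure:
    length * (fan_in * fan_out) ** 2."""
    return length * (fan_in * fan_out) ** 2

# Hypothetical procedures: (name, length in lines, fan-in, fan-out).
procedures = [
    ("parse_input",  40, 2, 3),
    ("update_state", 25, 5, 4),
    ("report",       60, 1, 1),
]
for name, length, fi, fo in procedures:
    print(name, henry_kafura(length, fi, fo))
# update_state scores 25 * (5*4)**2 = 10000, far above report's 60:
# heavy information flow, not raw length, dominates the metric.
```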
https://en.wikipedia.org/wiki/List%20of%20Japanese%20typographic%20symbols | This article lists Japanese typographic symbols that are not included in kana or kanji groupings.
Repetition marks
Brackets and quotation marks
Phonetic marks
Punctuation marks
Other special marks
Organization-specific symbols
See also
Japanese map symbols
Japanese punctuation
Emoji, which originated in Japanese mobile phone culture |
https://en.wikipedia.org/wiki/Solid%20oxygen | Solid oxygen forms at normal atmospheric pressure at a temperature below 54.36 K (−218.79 °C, −361.82 °F). Solid oxygen O2, like liquid oxygen, is a clear substance with a light sky-blue color caused by absorption in the red part of the visible light spectrum.
Oxygen molecules have attracted attention because of the relationship between the molecular magnetization and crystal structures, electronic structures, and superconductivity. Oxygen is the only simple diatomic molecule (and one of the few molecules in general) to carry a magnetic moment. This makes solid oxygen particularly interesting, as it is considered a "spin-controlled" crystal that displays antiferromagnetic magnetic order in the low temperature phases. The magnetic properties of oxygen have been studied extensively. At very high pressures, solid oxygen changes from an insulating to a metallic state; and at very low temperatures, it even transforms to a superconducting state. Structural investigations of solid oxygen began in the 1920s and, at present, six distinct crystallographic phases are established unambiguously.
The molar volume of solid oxygen ranges from 21 cm3/mol in the α-phase to 23.5 cm3/mol in the γ-phase.
Phases
Six different phases of solid oxygen are known to exist:
α-phase: light blue; forms at 1 atm below 23.8 K; monoclinic crystal structure, space group C2/m (no. 12).
β-phase: faint blue to pink; forms at 1 atm below 43.8 K; rhombohedral crystal structure, space group R3̄m (no. 166). At room temperature and high pressure it begins transforming to tetraoxygen.
γ-phase: faint blue; forms at 1 atm below 54.36 K; cubic crystal structure, space group Pm3̄m (no. 221).
δ-phase: orange; forms at room temperature at a pressure of 9 GPa.
ε-phase: dark red to black; forms at room temperature at pressures greater than 10 GPa.
ζ-phase: metallic; forms at pressures greater than 96 GPa.
It has been known that oxygen can be solidified into a state called the β-phase at room temperature by applying pressure, and w |
https://en.wikipedia.org/wiki/Computational%20model | A computational model uses computer programs to simulate and study complex systems using an algorithmic or mechanistic approach and is widely used in a diverse range of fields spanning from physics, engineering, chemistry and biology to economics, psychology, cognitive science and computer science.
The system under study is often a complex nonlinear system for which simple, intuitive analytical solutions are not readily available. Rather than deriving a mathematical analytical solution to the problem, experimentation with the model is done by adjusting the parameters of the system in the computer, and studying the differences in the outcome of the experiments. Operation theories of the model can be derived/deduced from these computational experiments.
Examples of common computational models are weather forecasting models, earth simulator models, flight simulator models, molecular protein folding models, Computational Engineering Models (CEM), and neural network models.
See also
Computational Engineering
Computational cognition
Reversible computing
Agent-based model
Artificial neural network
Computational linguistics
Computational human modeling
Decision field theory
Dynamical systems model of cognition
Membrane computing
Ontology (information science)
Programming language theory
Microscale and macroscale models |
https://en.wikipedia.org/wiki/Windows%20Firewall | Windows Firewall (officially called Microsoft Defender Firewall in Windows 10 version 2004 and later) is a firewall component of Microsoft Windows. It was first included in Windows XP SP2 and Windows Server 2003 SP1. Before the release of Windows XP Service Pack 2, it was known as the "Internet Connection Firewall."
Overview
When Windows XP was originally shipped in October 2001, it included a limited firewall called "Internet Connection Firewall". It was disabled by default due to concerns with backward compatibility, and the configuration screens were buried away in network configuration screens that many users never looked at. As a result, it was rarely used. In mid-2003, the Blaster worm attacked a large number of Windows machines, taking advantage of flaws in the RPC Windows service. Several months later, the Sasser worm did something similar. The ongoing prevalence of these worms through 2004 resulted in unpatched machines being infected within a matter of minutes. Because of these incidents, as well as other criticisms that Microsoft was not being active in protecting customers from threats, Microsoft decided to significantly improve both the functionality and the interface of Windows XP's built-in firewall, rebrand it as Windows Firewall, and switch it on by default starting with Windows XP SP2.
One of three profiles is activated automatically for each network interface:
Public assumes that the network is shared with the World and is the most restrictive profile.
Private assumes that the network is isolated from the Internet and allows more inbound connections than public. A network is never assumed to be private unless designated as such by a local administrator.
Domain profile is the least restrictive. It allows more inbound connections to allow for file sharing etc. The domain profile is selected automatically when connected to a network with a domain trusted by the local computer.
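A hedged illustration of working with these profiles from a script (the netsh advfirewall interface shown here arrived in Windows versions after XP SP2, and the exact rule options should be checked against the local netsh documentation):

import subprocess

def netsh(*args):
    # Thin wrapper around the Windows netsh CLI; needs an elevated prompt.
    result = subprocess.run(["netsh", "advfirewall", *args],
                            capture_output=True, text=True)
    return result.stdout

# Show which profile (Domain/Private/Public) is active right now.
print(netsh("show", "currentprofile"))

# Allow inbound TCP 8080 only while the Private profile is active.
netsh("firewall", "add", "rule", "name=demo-private-8080", "dir=in",
      "action=allow", "protocol=TCP", "localport=8080", "profile=private")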
Security log capabilities are included, which can record IP addresses and o |
https://en.wikipedia.org/wiki/Gap%20theorem | See also Gap theorem (disambiguation) for other gap theorems in mathematics.
In computational complexity theory, the Gap Theorem, also known as the Borodin–Trakhtenbrot Gap Theorem, is a major theorem about the complexity of computable functions.
It essentially states that there are arbitrarily large computable gaps in the hierarchy of complexity classes. For any computable function that represents an increase in computational resources, one can find a resource bound such that the set of functions computable within the expanded resource bound is the same as the set computable within the original bound.
The theorem was proved independently by Boris Trakhtenbrot and Allan Borodin.
Although Trakhtenbrot's derivation preceded Borodin's by several years, it was not known nor recognized in the West until after Borodin's work was published.
Gap theorem
The general form of the theorem is as follows.
Suppose Φ is an abstract (Blum) complexity measure. For any total computable function g with g(x) ≥ x for every x, there is a total computable function t such that, with respect to Φ, the complexity classes with boundary functions t and g ∘ t are identical.
The theorem can be proved by using the Blum axioms without any reference to a concrete computational model, so it applies to time, space, or any other reasonable complexity measure.
For the special case of time complexity, this can be stated more simply as:
for any total computable function g such that g(x) ≥ x for all x, there exists a time bound T(n) such that DTIME(g(T(n))) = DTIME(T(n)). For example, with g(x) = 2^x there is a time bound T(n) such that DTIME(2^(T(n))) = DTIME(T(n)).
Because the bound may be very large (and often will be nonconstructible) the Gap Theorem does not imply anything interesting for complexity classes such as P or NP, and it does not contradict the time hierarchy theorem or space hierarchy theorem.
See also
Blum's speedup theorem |
https://en.wikipedia.org/wiki/Prostate%20biopsy | Prostate biopsy is a procedure in which small hollow needle-core samples are removed from a man's prostate gland to be examined for the presence of prostate cancer. It is typically performed when the result from a PSA blood test is high. It may also be considered advisable after a digital rectal exam (DRE) finds possible abnormality. PSA screening is controversial as PSA may become elevated due to non-cancerous conditions such as benign prostatic hyperplasia (BPH), by infection, or by manipulation of the prostate during surgery or catheterization. Additionally many prostate cancers detected by screening develop so slowly that they would not cause problems during a man's lifetime, making the complications due to treatment unnecessary.
The most frequent side effect of the procedure is blood in the urine (31%). Other side effects may include infection (0.9%) and death (0.2%).
Ultrasound-guided prostate biopsy
The procedure may be performed transrectally, through the urethra, or through the perineum. The most common approach is transrectal, and historically this was done with tactile finger guidance. The most common method of prostate biopsy was transrectal ultrasound (TRUS)-guided prostate biopsy.
Extended biopsy schemes take 12 to 14 cores from the prostate gland through a thin needle in a systematic fashion from different regions of the prostate.
A biopsy procedure with a higher rate of cancer detection is template prostate mapping (TPM), or transperineal template-guided mapping biopsy (TTMB), whereby typically 50 to 60 samples are taken through the outer skin between the rectum and scrotum via a template with holes every 5 mm, usually under a general or spinal anaesthetic, to thoroughly sample and map the entire prostate.
Antibiotics are usually prescribed to minimize the risk of infection. A healthcare provider may also prescribe an enema to be taken in the morning of the procedure. During the transrectal procedure, an ultrasound probe is |
https://en.wikipedia.org/wiki/Algebra%20i%20Logika | Algebra i Logika (English: Algebra and Logic) is a peer-reviewed Russian mathematical journal founded in 1962 by Anatoly Ivanovich Malcev, published by the Siberian Fund for Algebra and Logic at Novosibirsk State University. An English translation of the journal has been published by Springer-Verlag as Algebra and Logic since 1968. It publishes papers presented at the meetings of the "Algebra and Logic" seminar at Novosibirsk State University. The journal is edited by academician Yury Yershov.
The journal is reviewed cover-to-cover in Mathematical Reviews and Zentralblatt MATH.
Abstracting and indexing
Algebra i Logika is indexed and abstracted in the following databases:
According to the Journal Citation Reports, the journal had a 2020 impact factor of 0.753. |
https://en.wikipedia.org/wiki/Federa%C3%A7%C3%A3o%20das%20Sociedades%20de%20Biologia%20Experimental | The Federação das Sociedades de Biologia Experimental (Federation of Experimental Biology Societies, abbreviated FeSBE) is a Brazilian scientific association which runs a number of the mainstream specialized societies in experimental biology and medicine. It was founded in 1985 and currently has the following member societies:
Brazilian Society of Physiology (SBFis)
Brazilian Society of Biophysics (SBBf)
Brazilian Society of Biochemistry and Molecular Biology (SBBq)
Brazilian Society of Pharmacology and Experimental Therapeutics (SBFTE)
Brazilian Society of Immunology (SBI)
Brazilian Society of Neurosciences and Behavior (SBNeC)
Brazilian Society of Clinical Investigation (SBIC)
There are also 4 associate member societies:
Brazilian Society of Endocrinology and Metabology (SBEM)
Brazilian Society of Cell Biology (SBBC)
Brazilian Society of Nuclear Biosciences (SBBN)
Brazilian Research Association in Vision (BRAVO)
FeSBE holds an annual meeting with the societies of physiology, pharmacology, neurosciences, biophysics and clinical investigation every August, with the first three of these societies accounting for 83% of all attendees. The remaining societies (biochemistry and molecular biology, immunology, endocrinology, cell biology and nuclear biosciences) have their own separate meetings.
The annual FeSBE meeting has the same importance as its American counterpart, the Federation of American Societies for Experimental Biology (FASEB), although it is smaller in size. In 2005 the meeting had 3,768 participants (74% of whom were undergraduate or graduate students) and 2,940 posters were presented. About 90% of the participants came from states in the South and Southeast regions.
FeSBE is affiliated with the Brazilian Society for Advancement of Science. Its current president is Dr. Gerhard Malnic. Former presidents were Eduardo Krieger, Sérgio Ferreira, Dora Fix Ventura and Antonio Carlos Campos de Carvalho.
See also
Science and technology in Brazil
External links
FESBE Home Page
Sc |
https://en.wikipedia.org/wiki/Component-based%20software%20engineering | Component-based software engineering (CBSE), also called component-based development (CBD), is a style of software engineering that aims to build software out of loosely-coupled, modular components. It emphasizes the separation of concerns among different parts of a software system.
Definition and characteristics of components
An individual software component is a software package, a web service, a web resource, or a module that encapsulates a set of related functions or data.
Components communicate with each other via interfaces. Each component provides an interface (called a provided interface) through which other components can use it. When a component uses another component's interface, that interface is called a used interface.
In the UML illustrations in this article, provided interfaces are represented by lollipop-symbols, while used interfaces are represented by open socket symbols.
Components must be substitutable, meaning that a component must be replaceable by another one having the same interfaces without breaking the rest of the system.
Components should be reusable.
Component-based usability testing should be considered when software components directly interact with users.
Components should be:
fully documented
thoroughly tested
robust - with comprehensive input-validity checking
able to pass back appropriate error messages or return codes
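The notions of provided interfaces, used interfaces, and substitutability described above can be sketched in a few lines of Python (the component and interface names here are hypothetical, purely for illustration):

from abc import ABC, abstractmethod

class Logger(ABC):                      # a provided interface
    @abstractmethod
    def log(self, message: str) -> None: ...

class ConsoleLogger(Logger):            # one component providing it
    def log(self, message: str) -> None:
        print("[console]", message)

class FileLogger(Logger):               # a substitutable alternative
    def __init__(self, path: str) -> None:
        self.path = path
    def log(self, message: str) -> None:
        with open(self.path, "a") as f:
            f.write(message + "\n")

class OrderService:                     # a component with a used interface
    def __init__(self, logger: Logger) -> None:
        self.logger = logger            # depends only on the interface
    def place_order(self, item: str) -> None:
        self.logger.log("order placed: " + item)

# Either logger can be substituted without changing OrderService.
OrderService(ConsoleLogger()).place_order("widget")
OrderService(FileLogger("orders.log")).place_order("widget")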
History
The idea that software should be componentized - built from prefabricated components - first became prominent with Douglas McIlroy's address at the NATO conference on software engineering in Garmisch, Germany, 1968, titled Mass Produced Software Components. The conference set out to counter the so-called software crisis. McIlroy's subsequent inclusion of pipes and filters into the Unix operating system was the first implementation of an infrastructure for this idea.
Brad Cox of Stepstone largely defined the modern concept of a software component. He called them Software ICs and set out to crea |
https://en.wikipedia.org/wiki/Agrophysics | Agrophysics is a branch of science bordering on agronomy and physics, whose object of study is the agroecosystem: the biological objects, biotope and biocoenosis affected by human activity, studied and described using the methods of the physical sciences. Using the achievements of the exact sciences to solve major problems in agriculture, agrophysics involves the study of materials and processes occurring in the production and processing of agricultural crops, with particular emphasis on the condition of the environment and the quality of farming materials and food production.
Agrophysics is closely related to biophysics, but is restricted to the physics of the plants, animals, soil and atmosphere involved in agricultural activities and biodiversity. It differs from biophysics in the necessity of taking into account the specific features of biotope and biocoenosis, which involves knowledge of nutritional science and agroecology, agricultural technology, biotechnology, genetics, etc.
The needs of agriculture, concerning the study of complex local soil and plant-atmosphere systems grounded in past experience, lay at the root of the emergence of a new branch, agrophysics, which addresses these questions with the methods of experimental physics.
The scope of the branch, starting from soil science (soil physics) and originally limited to the study of relations within the soil environment, expanded over time to cover the properties of agricultural crops and produce, as foods and raw post-harvest materials, and the issues of quality, safety and labeling, considered distinct from the field of nutrition for application in food science.
Research centres focused on the development of the agrophysical sciences include the Institute of Agrophysics, Polish Academy of Sciences in Lublin, and the Agrophysical Research Institute, Russian Academy of Sciences in St. Petersburg.
See also
Agriculture science
Agroecology
Genomics
Metagenomics
Metabolomics
Physics (Aristotle)
Proteomi |
https://en.wikipedia.org/wiki/Lazarus%20Fuchs | Lazarus Immanuel Fuchs (5 May 1833 – 26 April 1902) was a Jewish-German mathematician who contributed important research in the field of linear differential equations. He was born in Moschin (Mosina) (located in Grand Duchy of Posen) and died in Berlin, Germany. He was buried in Schöneberg in the St. Matthew's Cemetery. His grave in section H is preserved and listed as a grave of honour of the State of Berlin.
He is the eponym of Fuchsian groups and functions, and the Picard–Fuchs equation.
A singular point a of a linear differential equation
y″ + p(x) y′ + q(x) y = 0
is called Fuchsian if p and q are meromorphic around the point a and have poles of order at most 1 and 2, respectively. According to a theorem of Fuchs, this condition is necessary and sufficient for the regularity of the singular point, that is, to ensure the existence of two linearly independent solutions of the form
y_j = (x − a)^(s_j) Σ_(k≥0) c_(j,k) (x − a)^k,  j = 1, 2,
where the exponents s_j can be determined from the indicial equation. In the case when s_1 − s_2 is an integer this formula has to be modified.
Another well-known result of Fuchs is the set of Fuchs conditions, the necessary and sufficient conditions for the non-linear differential equation of the form
F(dy/dz, y, z) = 0
to be free of movable singularities.
An interesting remark about him as a teacher, from the period of his work at Heidelberg University, pertains to his manner of lecturing: his knowledge of the mathematics he was assigned to teach was so deep that he typically did not prepare before giving a lecture; he would simply improvise on the spot, exposing the students to the train of thought of a mathematician of the finest degree.
Lazarus Fuchs was the father of Richard Fuchs, a German mathematician.
Selected works
Über Funktionen zweier Variabeln, welche durch Umkehrung der Integrale zweier gegebener Funktionen entstehen, Göttingen 1881.
Zur Theorie der linearen Differentialgleichungen, Berlin 1901.
Gesammelte Werke, Hrsg. von Richard Fuchs und Ludwig Schlesinger. 3 Bde. Berlin 1904–1909. |
https://en.wikipedia.org/wiki/Compression%20theorem | In computational complexity theory, the compression theorem is an important theorem about the complexity of computable functions.
The theorem states that there exists no largest complexity class, with computable boundary, which contains all computable functions.
Compression theorem
Given a Gödel numbering φ of the computable functions and a Blum complexity measure Φ, where the complexity class for a boundary function f is defined as
C(f) := {φ_i ∈ R^(1) | (∀^∞ x) Φ_i(x) ≤ f(x)}
(here R^(1) is the set of total computable functions of one variable and (∀^∞ x) means "for all but finitely many x"), there exists a total computable function f such that for all i
Dom(φ_i) = Dom(φ_(f(i))) and C(φ_i) ⊊ C(φ_(f(i))). |
https://en.wikipedia.org/wiki/White%20Christmas%20%28song%29 | "White Christmas" is an Irving Berlin song reminiscing about an old-fashioned Christmas setting. The song was written by Berlin for the 1942 musical film Holiday Inn. The composition won the Academy Award for Best Original Song at the 15th Academy Awards. Bing Crosby's record topped the Billboard chart for 11 weeks in 1942 and returned to the number one position again in December of 1943 and 1944. His version would return to the top 40 a dozen times in subsequent years.
Since its release, "White Christmas" has been covered by many artists, the version sung by Bing Crosby being the world's best-selling single (in terms of sales of physical media) with estimated sales in excess of 50 million physical copies worldwide. When the figures for other versions of the song are added to Crosby's, sales of the song exceed 100 million.
History
Origin
Accounts vary as to when and where Berlin wrote the song. One story is that he wrote it in 1940, in warm La Quinta, California, while staying at the La Quinta Hotel, a frequent Hollywood retreat also favored by writer-director-producer Frank Capra, although the Arizona Biltmore also claims the song was written there. He often stayed up all night writing. One day he told his secretary, "I want you to take down a song I wrote over the weekend. Not only is it the best song I ever wrote, it's the best song anybody ever wrote."
Bing Crosby versions
The first public performance of the song was by Bing Crosby, on his NBC radio show The Kraft Music Hall on Christmas Day, 1941, a few weeks after the attack on Pearl Harbor. Crosby subsequently recorded the song with the John Scott Trotter Orchestra and the Ken Darby Singers at Radio Recorders for Decca Records in 18 minutes on May 29, 1942, and it was released on July 30 as part of an album of six 78-rpm discs from the musical film Holiday Inn. At first, Crosby did not see anything special about the song. He just said "I don't think we have any problems with that one, Irving."
The song |
https://en.wikipedia.org/wiki/G.%20Mike%20Reed | George Michael ("Mike") Reed is an American computer scientist. He has contributed to theoretical computer science in general and CSP in particular.
Mike Reed has a doctorate in pure mathematics from Auburn University, United States, and a doctorate in computation from Oxford University, England. He has an interest in mathematical topology.
Reed was a Senior Research Associate at NASA Goddard Space Flight Center. From 1986 to 2005, he was at the Oxford University Computing Laboratory (now the Oxford University Department of Computer Science) in England where he was also a Fellow in Computation of St Edmund Hall, Oxford (1986–2005). In 2005, he became Director of UNU/IIST, Macau, part of the United Nations University. |
https://en.wikipedia.org/wiki/Marcel-Paul%20Sch%C3%BCtzenberger | Marcel-Paul "Marco" Schützenberger (24 October 1920 – 29 July 1996) was a French mathematician and Doctor of Medicine. He worked in the fields of formal language, combinatorics, and information theory. In addition to his formal results in mathematics, he was "deeply involved in [a] struggle against the votaries of [neo-]Darwinism", a stance which has resulted in some mixed reactions from his peers and from critics of his stance on evolution. Several notable theorems and objects in mathematics as well as computer science bear his name (for example the Schützenberger group or the Chomsky–Schützenberger hierarchy). Paul Schützenberger was his great-grandfather.
In the late 1940s, he was briefly married to the psychologist Anne Ancelin Schützenberger.
Contributions to medicine and biology
Schützenberger's first doctorate, in medicine, was awarded in 1948 from the Faculté de Médecine de Paris. His doctoral thesis, on the statistical study of biological sex at birth, was distinguished by the Baron Larrey Prize from the French Academy of Medicine.
Biologist Jacques Besson, a co-author with Schützenberger on a biological topic, while noting that Schützenberger is perhaps most remembered for work in pure mathematical fields, credits him as likely responsible for the introduction of statistical sequential analysis in French hospital practice.
Contributions to mathematics, computer science, and linguistics
Schützenberger's second doctorate was awarded in 1953 from Université Paris III. This work, developed from earlier results, is counted amongst the early influential French academic work in information theory. His later impact in both linguistics and combinatorics is reflected by two theorems in formal linguistics (the Chomsky–Schützenberger enumeration theorem and the Chomsky–Schützenberger representation theorem), and one in combinatorics (the Schützenberger theorem). With Alain Lascoux, Schützenberger is credited with the foundation of the notion of the plactic mo |
https://en.wikipedia.org/wiki/John%20Fitzgerald%20%28computer%20scientist%29 |
John S. Fitzgerald FBCS (born 1965) is a British computer scientist. He is a professor at Newcastle University. He was the head of the School of Computing before taking on the role of Dean of Strategic Projects in the university’s Faculty of Science, Agriculture and Engineering. His research interests are in the area of dependable computer systems and formal methods, with a background in the Vienna Development Method (VDM). He is a former Chair of Formal Methods Europe and a committee member of BCS-FACS.
Education
Fitzgerald was born in Belfast, Northern Ireland, and was educated at Bangor Grammar School and the Victoria University of Manchester. He holds the BSc in Computing and Information Systems and the PhD degrees from the Department of Computer Science at Manchester.
Selected books
Bicarregui, J.C., Fitzgerald, J.S. and Lindsay, P.A. et al., Proof in VDM: a Practitioner's Guide. Springer-Verlag Formal Approaches to Computing and Information Technology (FACIT), 1994.
Fitzgerald, J.S. and Larsen, P.G., Modelling Systems: Practical Tools and Techniques in Software Engineering. Cambridge University Press, 1998. (Japanese edition pub. Iwanami Shoten, 2003.)
Fitzgerald, J.S., Larsen, P.G., Mukherjee, P. et al., Validated Designs for Object-oriented Systems. Springer-Verlag, 2005.
See also
Colleagues at Newcastle University:
Cliff Jones
Brian Randell |
https://en.wikipedia.org/wiki/Marconi%20Prize | The Marconi Prize is an annual award recognizing achievements and advancements made in field of communications (radio, mobile, wireless, telecommunications, data communications, networks, and Internet). The prize is awarded by the Marconi Society, and it includes a $100,000 honorarium and a work of sculpture. Recipients of the prize are awarded at the Marconi Society's annual symposium and gala.
Occasionally, the Marconi Society Lifetime Achievement Award is bestowed on legendary late-career individuals, recognizing their transformative contributions to, and remarkable impact on, the field of communications and the development of the careers of students, colleagues and peers throughout their lifetimes. So far, the recipients include Claude E. Shannon (2000, died in 2001), William O. Baker (2003, died in 2005), Gordon E. Moore (2005), Amos E. Joel Jr. (2009, died in 2008), Robert W. Galvin (2011, died in 2011), and Thomas Kailath (2017).
Criteria
The Marconi Prize is awarded based on the candidate’s contributions in the following areas:
The significance of the impact of the nominee’s work on widely-used technology.
The scientific importance of the nominee’s work in setting the stage for, influencing, and advancing the field beyond the nominee’s own achievements.
The nominee’s contributions to innovation and entrepreneurship by introducing completely new ideas, methods, or technologies. These may include forming, leading, or advising organizations, mentoring students on moving ideas from research to implementation, or fostering new industries/enabling scale implementation.
The social and humanitarian impact of the nominee’s contributions to the design, development, and/or deployment of new communication technologies or communications public policies that promote social development and/or inclusiveness.
Marconi Fellow
The Marconi Prize winners are also named as Marconi Fellows. The foundation and the prize are named in honor of Guglielmo Marconi, a Nob |
https://en.wikipedia.org/wiki/Early%20completion | Early completion is a property of some classes of asynchronous circuit. It means that the output of a circuit may be available as soon as sufficient inputs have arrived to allow it to be determined. For example, if all of the inputs to a mux have arrived, and all are the same, but the select line has not yet arrived, the circuit can still produce an output. Since all the inputs are identical, the select line is irrelevant.
Example: an asynchronous ripple-carry adder
A ripple carry adder is a simple adder circuit, but slow because the carry signal has to propagate through each stage of the adder:
This diagram shows a 5-bit ripple carry adder in action. There is a five-stage long carry path, so every time two numbers are added with this adder, it needs to wait for the carry to propagate through all five stages.
By switching to dual-rail signalling for the carry bit, each stage can signal its carry out as soon as it is known. If both inputs to a stage are 1, then the carry out will be 1 no matter what the carry in is. If both inputs are 0, then the carry out will be zero. This early completion cuts down on the maximum length of the carry chain in most cases:
Two of the carry-out bits can be known as soon as input arrives, for the input shown in the picture. This means that the maximum carry chain length is three, not five. If it uses dual-rail signalling for inputs and outputs, it can indicate completion as soon as all the carry chains have completed.
On average, an n-bit asynchronous ripple carry adder will finish in O(log n) time. By extending this approach to carry look-ahead adders, it is possible to add in O(log log n) time.
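The effect of early completion on the carry chain can be modelled in a few lines (an illustrative sketch, not taken from the article): a stage whose two input bits are equal "kills" or "generates" its carry immediately, so only runs of unequal-input ("propagate") stages add to the effective delay.

def longest_carry_chain(a_bits, b_bits):
    # Longest run of propagate stages = effective delay with early completion.
    longest = run = 0
    for a, b in zip(a_bits, b_bits):
        if a != b:          # propagate: carry-out depends on carry-in
            run += 1
            longest = max(longest, run)
        else:               # kill (0,0) or generate (1,1): known at once
            run = 0
    return longest

# 5-bit example: only the stages where the input bits differ extend the chain.
print(longest_carry_chain([1, 0, 1, 1, 0], [1, 1, 0, 1, 1]))  # -> 2, not 5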
External links
"Self-timed carry-lookahead adders" by Fu-Chiung Cheng, Stephen H. Unger, Michael Theobald.
Digital electronics |
https://en.wikipedia.org/wiki/Clock%20gating | In computer architecture, clock gating is a popular power management technique used in many synchronous circuits for reducing dynamic power dissipation, by removing the clock signal when the circuit is not in use or ignores clock signal. Clock gating saves power by pruning the clock tree, at the cost of adding more logic to a circuit. Pruning the clock disables portions of the circuitry so that the flip-flops in them do not have to switch states. Switching states consumes power. When not being switched, the switching power consumption goes to zero, and only leakage currents are incurred.
Although asynchronous circuits by definition do not have a global "clock", the term perfect clock gating is used to illustrate how various clock gating techniques are simply approximations of the data-dependent behavior exhibited by asynchronous circuitry. As the granularity on which one gates the clock of a synchronous circuit approaches zero, the power consumption of that circuit approaches that of an asynchronous circuit: the circuit only generates logic transitions when it is actively computing.
Details
An alternative solution to clock gating is to use clock-enable (CE) logic on the synchronous data path, employing an input multiplexer, e.g. for D-type flip-flops, in C / Verilog notation: Dff = CE ? D : Q;
where Dff is the D-input of the D-type flip-flop, D is the module information input (without the CE input), and Q is the D-type flip-flop output. This type of clock gating is race-condition free and is preferred for FPGA designs and for clock gating of small circuits. For FPGAs, every D-type flip-flop has an additional CE input signal.
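A minimal behavioural model of this clock-enable multiplexer (illustrative only):

def dff_with_ce(q, d, ce):
    # One clock edge: capture d when ce is asserted, otherwise hold q.
    return d if ce else q

q = 0
for d, ce in [(1, 0), (1, 1), (0, 0), (0, 1)]:
    q = dff_with_ce(q, d, ce)
    print(q)    # -> 0, 1, 1, 0: the register only switches when ce = 1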
Clock gating works by taking the enable conditions attached to registers, and uses them to gate the clocks. A design must contain these enable conditions in order to use and benefit from clock gating. This clock gating process can also save significant die area as well as power, since it removes large numbers of muxes and replaces them with clock gating l |
https://en.wikipedia.org/wiki/Quasi-delay-insensitive%20circuit | In digital logic design, an asynchronous circuit is quasi delay-insensitive (QDI) when it operates correctly, independent of gate and wire delays, with the weakest exception necessary to be Turing-complete.
Overview
Pros
Robust to process variation, temperature fluctuation, circuit redesign, and FPGA remapping.
Natural event sequencing facilitates complex control circuitry.
Automatic clock gating and compute-dependent cycle time can save dynamic power and increase throughput by optimizing for average-case workload characteristics instead of worst-case.
Cons
Delay insensitive encodings generally require twice as many wires for the same data.
Communication protocols and encodings generally require twice as many devices for the same functionality.
Chips
QDI circuits have been used to manufacture a large number of research chips, a small selection of which follows.
Caltech's asynchronous microprocessor
Tokyo University's TITAC and TITAC-2 processors
Theory
The simplest QDI circuit is a ring oscillator implemented using a cycle of inverters. Each gate drives two events on its output node: either the pull-up network drives the node's voltage from GND to Vdd, or the pull-down network drives it from Vdd to GND. This gives the ring oscillator six events in total.
Multiple cycles may be connected using a multi-input gate. A c-element, which waits for its inputs to match before copying the value to its output, may be used to synchronize multiple cycles. If one cycle reaches the c-element before another, it is forced to wait. Synchronizing three or more of these cycles creates a pipeline allowing the cycles to trigger one after another.
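The synchronizing behaviour of the c-element can be modelled in a few lines (a behavioural sketch, not a gate-level description):

def c_element(a, b, prev):
    # Output copies the inputs when they agree, otherwise holds its state.
    return a if a == b else prev

out = 0
for a, b in [(1, 0), (1, 1), (0, 1), (0, 0)]:
    out = c_element(a, b, out)
    print(out)   # -> 0, 1, 1, 0: changes only after both inputs match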
If cycles are known to be mutually exclusive, then they may be connected using combinational logic (AND, OR). This allows the active cycle to continue regardless of the inactive cycles, and is generally used to implement delay insensitive encodings.
For larger systems, this is too much to manage. So, they are partitioned into processes. Each |
https://en.wikipedia.org/wiki/Larmor%20formula | In electrodynamics, the Larmor formula is used to calculate the total power radiated by a nonrelativistic point charge as it accelerates. It was first derived by J. J. Larmor in 1897, in the context of the wave theory of light.
When any charged particle (such as an electron, a proton, or an ion) accelerates, energy is radiated in the form of electromagnetic waves. For a particle whose velocity is small relative to the speed of light (i.e., nonrelativistic), the total power that the particle radiates (when considered as a point charge) can be calculated by the Larmor formula:
P = q²a²/(6πε₀c³)  (SI units)  =  2q²a²/(3c³)  (Gaussian units),
where a (or v̇) is the proper acceleration, q is the charge, and c is the speed of light. A relativistic generalization is given by the Liénard–Wiechert potentials.
In either unit system, the power radiated by a single electron can be expressed in terms of the classical electron radius r_e and electron mass m_e as:
P = 2 m_e r_e a²/(3c).
One implication is that an electron orbiting around a nucleus, as in the Bohr model, should lose energy, fall to the nucleus and the atom should collapse. This puzzle was not solved until quantum theory was introduced.
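As a quick numerical plausibility check of the formula above (an illustrative snippet using standard SI constants; the acceleration value is roughly that of a classical hydrogen ground-state orbit):

import math

eps0 = 8.8541878128e-12   # vacuum permittivity, F/m
c = 2.99792458e8          # speed of light, m/s
q = 1.602176634e-19       # elementary charge, C

def larmor_power(a):
    # Total radiated power (W) of a point charge q at acceleration a (m/s^2).
    return q**2 * a**2 / (6 * math.pi * eps0 * c**3)

print(larmor_power(9.0e22))   # ~4.6e-8 W: why the classical atom collapses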
Derivation
Derivation 1: Mathematical approach (using CGS units)
We first need to find the form of the electric and magnetic fields. The fields can be written (for a fuller derivation see Liénard–Wiechert potential)
E = q(n̂ − β)/(γ²(1 − β·n̂)³R²) + (q/c) n̂ × [(n̂ − β) × β̇]/((1 − β·n̂)³R)
and B = n̂ × E, where β is the charge's velocity divided by c, β̇ is the charge's acceleration divided by c, n̂ is a unit vector in the r − r₀ direction, R is the magnitude of r − r₀, r₀ is the charge's location, and γ = (1 − β²)^(−1/2). The terms on the right are evaluated at the retarded time t_ret = t − R/c.
The right-hand side is the sum of the electric fields associated with the velocity and the acceleration of the charged particle. The velocity field depends only upon β while the acceleration field depends on both β and β̇ and the angular relationship between the two. Since the velocity field is proportional to 1/R², it falls off very quickly with distance. On the other hand, the acceleration field is proportional to 1/R, which means |
https://en.wikipedia.org/wiki/Don%20Sannella | Donald T. Sannella FRSE is professor of computer science in the Laboratory for Foundations of Computer Science, at the School of Informatics, University of Edinburgh, Scotland.
Sannella graduated from Yale University, University of California, Berkeley and University of Edinburgh with degrees in computer science. His research interests include: algebraic specification and formal software development, correctness of modular systems, types and functional programming, resource certification for mobile code.
Sannella is founder of the European Joint Conferences on Theory and Practice of Software, a confederation of computer science conferences, held annually in Europe since 1998.
He is editor-in-chief of the journal Theoretical Computer Science,
and is co-founder and CEO of Contemplate Ltd. His father is Ted Sannella.
Honours and awards
In 2014 Sannella was elected a Fellow of the Royal Society of Edinburgh.
Personal life
Don Sannella loves to ski and is often found out on the slopes. |
https://en.wikipedia.org/wiki/%CE%98%20%28set%20theory%29 | In set theory, Θ (pronounced like the letter theta) is the least nonzero ordinal α such that there is no surjection from the reals onto α.
If the axiom of choice (AC) holds (or even if the reals can be wellordered), then Θ is simply (2^ℵ₀)⁺, the cardinal successor of the cardinality of the continuum. However, Θ is often studied in contexts where the axiom of choice fails, such as models of the axiom of determinacy.
Θ is also the supremum of the lengths of all prewellorderings of the reals.
Proof of existence
It may not be obvious that it can be proven, without using AC, that there even exists a nonzero ordinal onto which there is no surjection from the reals (if there is such an ordinal, then there must be a least one because the ordinals are wellordered). However, suppose there were no such ordinal. Then to every ordinal α we could associate the set of all prewellorderings of the reals having length α. This would give an injection from the class of all ordinals into the set of all sets of orderings on the reals (which can be seen to be a set via repeated application of the powerset axiom). Now the axiom of replacement shows that the class of all ordinals is in fact a set. But that is impossible, by the Burali-Forti paradox.
Cardinal numbers
Descriptive set theory
Determinacy |
https://en.wikipedia.org/wiki/AD%2B | In set theory, AD+ is an extension, proposed by W. Hugh Woodin, to the axiom of determinacy. The axiom, which is to be understood in the context of ZF plus DC (the axiom of dependent choice for real numbers), states two things:
Every set of reals is ∞-Borel.
For any ordinal λ less than Θ, any subset A of ω^ω, and any continuous function π : λ^ω → ω^ω, the preimage π⁻¹[A] is determined. (Here λ^ω is to be given the product topology, starting with the discrete topology on λ.)
The second clause by itself is referred to as ordinal determinacy.
See also
Axiom of projective determinacy
Axiom of real determinacy
Suslin's problem
Topological game |
https://en.wikipedia.org/wiki/Squirrel%20%28programming%20language%29 | Squirrel is a high level imperative, object-oriented programming language, designed to be a lightweight scripting language that fits in the size, memory bandwidth, and real-time requirements of applications like video games.
MirthKit, a simple toolkit for making and distributing open source, cross-platform 2D games, uses Squirrel for its platform. It is used extensively by Code::Blocks for scripting and was also used in Final Fantasy Crystal Chronicles: My Life as a King. It is also used in Left 4 Dead 2, Portal 2 and Thimbleweed Park for scripted events and in NewDark, an unofficial Thief 2: The Metal Age engine update, to facilitate additional, simplified means of scripting mission events, aside of the regular C scripting.
Language features
Dynamic typing
Delegation
Classes, inheritance
Higher order functions
Generators
Cooperative threads (coroutines)
Tail recursion
Exception handling
Automatic memory management (mainly reference counting with backup garbage collector)
Weak references
Both compiler and virtual machine fit together in about 7k lines of C++ code
Optional 16-bit character strings
Syntax
Squirrel uses a C-like syntax.
Factorial in Squirrel
function factorial(x)
{
    if (x <= 1) {
        return 1;
    }
    else {
        return x * factorial(x - 1);
    }
}
Generators
function not_a_random_number_generator(max) {
    local last = 42;
    local IM = 139968;
    local IA = 3877;
    local IC = 29573;
    for (;;) { // loops forever
        yield (max * (last = (last * IA + IC) % IM) / IM);
    }
}

local randtor = not_a_random_number_generator(100);

for (local i = 0; i < 10; i += 1)
    print(">" + resume randtor + "\n");
Classes and inheritance
class BaseVector {
    constructor(...)
    {
        if (vargv.len() >= 3) {
            x = vargv[0];
            y = vargv[1];
            z = vargv[2];
        }
    }

    x = 0;
    y = 0;
    z = 0;
}

class Vector3 extends BaseVector {
    function _add(other)
    {
        if (other instanceof ::Vector3)
            return ::Vector3(x + other.x, y + other.y, z + other.z);
        else
            throw "wro |
https://en.wikipedia.org/wiki/IEEE%20802.1D | IEEE 802.1D is the Ethernet MAC bridges standard which includes bridging, Spanning Tree Protocol and others. It is standardized by the IEEE 802.1 working group. It includes details specific to linking many of the other 802 projects including the widely deployed 802.3 (Ethernet), 802.11 (Wireless LAN) and 802.16 (WiMax) standards.
Bridges using virtual LANs (VLANs) have never been part of 802.1D, but were instead specified in a separate standard, IEEE 802.1Q, originally published in 1998.
By 2014, all the functionality defined by IEEE 802.1D has been incorporated into either IEEE 802.1Q-2014 (Bridges and Bridged Networks) or IEEE 802.1AC (MAC Service Definition). 802.1D is expected to be officially withdrawn in 2022.
Publishing history:
1990 — Original publication (802.1D-1990).
1993 — standard ISO/IEC 10038:1993.
1998 — Revised version (802.1D-1998, ISO/IEC 15802-3:1998), incorporating the extensions P802.1p, P802.12e, 802.1j-1996 and 802.6k-1992.
2004 — Revised version (802.1D-2004), incorporating the extensions 802.11c-1998, 802.1t-2001, 802.1w-2001, and removing the original Spanning Tree Protocol, instead incorporating the Rapid Spanning Tree Protocol (RSTP) from 802.1w-2001.
Amendments to 802.1D-2004:
2004 — Small amendment (802.17a-2004) to add in 802.17 bridging support.
2007 — Small amendment (802.16k-2007) to add in 802.16 bridging support.
2012 — Shortest Path Bridging (IEEE 802.1aq-2012, amendment to 802.1Q-2011).
See also
Spanning tree protocol
Multiple Spanning Tree Protocol
TRILL (TRansparent Interconnection of Lots of Links) |
https://en.wikipedia.org/wiki/Dreambox | Dreambox is a series of Linux-powered DVB satellite, terrestrial and cable digital television receivers (set-top boxes), produced by German multimedia vendor Dream Multimedia.
History and description
The Linux-based software used by Dreambox was originally developed for the DBox2 by the Tuxbox project. The DBox2 was a proprietary design distributed by KirchMedia for their pay TV services. The bankruptcy of KirchMedia flooded the market with unsold boxes available for Linux enthusiasts. The Dreambox shares the basic design of the DBox2, including the Ethernet port and the PowerPC processor.
Its firmware is officially user-upgradable, since it is a Linux-based computer, as opposed to third-party "patching" of alternate receivers. All units support Dream's own DreamCrypt conditional access (CA) system, with software-emulated CA Modules (CAMs) available for many alternate CA systems. The built-in Ethernet interface allows networked computers to access the recordings on the internal hard disks on some Dreambox models. It also enables the receiver to store digital copies of DVB MPEG transport streams on distributed file systems or broadcast the streams as IPTV to VideoLAN and XBMC Media Center clients. Unlike many PC based PVR systems that use free-to-air type of DVB receiver cards, the built-in conditional access allows receiving and storing encrypted content.
In 2007, Dream Multimedia also introduced a non-Linux-based Dreambox receiver, the DM100, its sole such model to date, still featuring an Ethernet port. It has a USB-B port for service instead of the RS-232 or mini-USB connectors found on other models. Unlike all other Dreamboxes, it features an STMicroelectronics CPU instead of a PowerPC or MIPS one.
Dreambox models
There are a number of different models of Dreambox available. The numbers are suffixed with -S for Satellite, -T for Terrestrial and -C for Cable:
(Table of Dreambox model specifications not preserved; footnote: on some models HDMI is available via a DVI-to-HDMI adapter.)
Remark: The new 7020hd v2 has a new Flash with anoth |
https://en.wikipedia.org/wiki/Ehrenfest%20paradox | The Ehrenfest paradox concerns the rotation of a "rigid" disc in the theory of relativity.
In its original 1909 formulation as presented by Paul Ehrenfest in relation to the concept of Born rigidity within special relativity, it discusses an ideally rigid cylinder that is made to rotate about its axis of symmetry. The radius R as seen in the laboratory frame is always perpendicular to its motion and should therefore be equal to its value R0 when stationary. However, the circumference (2πR) should appear Lorentz-contracted to a smaller value than at rest, by the usual factor γ. This leads to the contradiction that R = R0 and R < R0.
The paradox has been deepened further by Albert Einstein, who showed that since measuring rods aligned along the periphery and moving with it should appear contracted, more would fit around the circumference, which would thus measure greater than 2πR. This indicates that geometry is non-Euclidean for rotating observers, and was important for Einstein's development of general relativity.
Any rigid object made from real material that is rotating with a transverse velocity close to that material's speed of sound must exceed the point of rupture due to centrifugal force, because centrifugal pressure can not exceed the shear modulus of material:
c_s = √(G/ρ),
where c_s is the speed of sound, ρ is the density and G is the shear modulus. Therefore, when considering relativistic speeds, it is only a thought experiment. Neutron-degenerate matter may allow velocities close to the speed of light, since the speed of a neutron-star oscillation is relativistic (though these bodies cannot strictly be said to be "rigid").
Essence of the paradox
Imagine a disk of radius R rotating with constant angular velocity ω.
The reference frame is fixed to the stationary center of the disk. Then the magnitude of the relative velocity of any point on the circumference of the disk is ωR. So the circumference will undergo Lorentz contraction by a factor of √(1 − (ωR)²/c²).
However, since the radius is perpend |
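A quick numerical illustration of the contraction factor in the argument above (hypothetical values; the rim speed is chosen as 0.6c):

import math

c = 299_792_458.0        # speed of light, m/s
R = 1.0                  # disk radius, m
omega = 0.6 * c / R      # angular velocity giving a rim speed of 0.6c

factor = math.sqrt(1 - (omega * R / c) ** 2)
print(factor)                     # -> 0.8
print(2 * math.pi * R * factor)   # contracted circumference, < 2*pi*R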
https://en.wikipedia.org/wiki/Friedrich%20Miescher%20Laboratory%20of%20the%20Max%20Planck%20Society | The Friedrich Miescher Laboratory (FML) of the Max Planck Society is a biological research institute located on the Society's campus in Tübingen, Germany, and named after Friedrich Miescher. It was founded in 1969 to offer highly qualified junior scientists in biology an opportunity to establish independent research groups and pursue their own line of research within a five-year period.
There are currently four research groups studying evolutionary genetics, systems biology of development, and the biochemistry of meiotic recombination.
Profile
The Friedrich Miescher Laboratory (FML) of the Max Planck Society is a biological research institute located on the Society's campus in Tübingen, Germany, named after Friedrich Miescher. It was founded in 1969 to offer highly qualified junior scientists in the area of biology an opportunity to establish independent research groups and pursue their own line of research within a five-year period.
The FML was a bold experiment by the Max Planck Society, in response to the brain drain, to place more resources in the hands of junior scientists and make Germany a more attractive research destination.
Group Leaders
The group leaders are elected by a committee of scientists from diverse areas and institutions on the basis of a public tendering procedure.
Since 2005 the FML has been represented by a managing director in order to relieve the group leaders of administrative burdens and to allow them even more time to focus on their research.
There is no specification as to which kind of biological research should be conducted at the FML, and the focus of research changes with the appointment of each new group leader. While at the FML, they can use modern, well-equipped laboratories and work in teams tailored to their ideas. Each group leader is free to allocate their resources as they choose, and in addition there is a central budget for the FML, managed jointly by the group leaders. |
https://en.wikipedia.org/wiki/Gobbledok | The Gobbledok is a fictitious television character used to promote The Smith's Snackfood Company brand potato chips in Australia. A light brown alien from "Dok the Potato Planet", the Gobbledok was known for its multi-colored mohawk hairstyle, its obsession for eating Smith's potato chips, and its catchphrase "chippie, chippie, chippie!"
Initially conceived for a one-off advertisement by John Finkelson of Sydney's George Patterson Advertising agency, the Gobbledok was designed and created by special effects worker Warren Beaton, who later worked for Weta Workshop. The character's unexpected success led to numerous further appearances between 1987 and 1996. It continues to make occasional appearances in commercials and on packaging, and was used in an advertisement in 2021 to commemorate The Smith's Snackfood Company's 90th anniversary in Australia. The Gobbledok was voiced by Dave Gibson, who was also known for voice-work on the TV series Australia's Funniest Home Videos. Gibson reprised his role as the character's voice for the 2021 advertisement.
See also
List of Australian and New Zealand advertising characters |
https://en.wikipedia.org/wiki/Office%20of%20Technology%20Assessment | The Office of Technology Assessment (OTA) was an office of the United States Congress that operated from 1974 to 1995. OTA's purpose was to provide congressional members and committees with objective and authoritative analysis of the complex scientific and technical issues of the late 20th century, i.e. technology assessment. It was a leader in practicing and encouraging delivery of public services in innovative and inexpensive ways, including early involvement in the distribution of government documents through electronic publishing. Its model was widely copied around the world.
The OTA was authorized in 1972 and received its first funding in fiscal year 1974. It was defunded at the end of 1995, following the 1994 mid-term elections which led to Republican control of the Senate and the House. House Republican legislators characterized the OTA as wasteful and hostile to GOP interests.
Princeton University hosts The OTA Legacy site, which holds "the complete collection of OTA publications along with additional materials that illuminate the history and impact of the agency". On July 23, 2008 the Federation of American Scientists launched a similar archive that includes interviews and additional documents about OTA.
History
Congress established the Office of Technology Assessment with the Technology Assessment Act of 1972. It was governed by a twelve-member board, comprising six members of Congress from each party—half from the Senate and half from the House of Representatives. During its twenty-four-year life it produced about 750 studies on a wide range of topics, including acid rain, health care, global climate change, and polygraphs.
Closure
Criticism of the agency was fueled by Fat City, a 1980 book by Washington Times journalist Donald Lambro that was regarded favorably by the Reagan administration; it called OTA an "unnecessary agency" that duplicated government work done elsewhere. OTA was abolished (technically "de-funded") in the "Contract with America" |
https://en.wikipedia.org/wiki/Jim%20Woodcock | James Charles Paul Woodcock is a British computer scientist.
Woodcock gained his PhD from the University of Liverpool. Until 2001 he was Professor of Software Engineering at the Oxford University Computing Laboratory, where he was also a Fellow of Kellogg College. He then joined the University of Kent and is now based at the University of York, where, since October 2012, he has been head of the Department of Computer Science.
His research interests include: strong software engineering, Grand Challenge in dependable systems evolution, unifying theories of programming, formal specification, refinement, concurrency, state-rich systems, mobile and reconfigurable processes, nanotechnology, Grand Challenge in the railway domain. He has a background in formal methods, especially the Z notation and CSP.
Woodcock worked on applying the Z notation to the IBM CICS project, helping to gain a Queen's Award for Technological Achievement, and Mondex, helping to gain the highest ITSEC classification level.
Prof. Woodcock is editor-in-chief of the Formal Aspects of Computing journal.
Books
Jim Woodcock and Jim Davies, Using Z: Specification, Refinement, and Proof. Prentice-Hall International Series in Computer Science, 1996.
Jim Woodcock and Martin Loomes, Software Engineering Mathematics: Formal Methods Demystified. Kindle Edition, Taylor & Francis, 2007. |
https://en.wikipedia.org/wiki/Servomotor | A servomotor (or servo motor or simply servo (to be differentiated from servomechanism, which may also be called a servo)) is a rotary actuator or linear actuator that allows for precise control of angular or linear position, velocity, and acceleration in a mechanical system. It consists of a suitable motor coupled to a sensor for position feedback. It also requires a relatively sophisticated controller, often a dedicated module designed specifically for use with servomotors.
Servomotors are not a specific class of motor, although the term servomotor is often used to refer to a motor suitable for use in a closed-loop control system.
Servomotors are used in applications such as robotics, CNC machinery, and automated manufacturing.
Mechanism
A servomotor is a closed-loop servomechanism that uses position feedback to control its motion and final position. The input to its control is a signal (either analog or digital) representing the position commanded for the output shaft.
The motor is paired with some type of position encoder to provide position and speed feedback. In the simplest case, only the position is measured. The measured position of the output is compared to the command position, the external input to the controller. If the output position differs from that required, an error signal is generated which then causes the motor to rotate in either direction, as needed to bring the output shaft to the appropriate position. As the positions approach, the error signal reduces to zero, and the motor stops.
The very simplest servomotors use position-only sensing via a potentiometer and bang-bang control of their motor; the motor always rotates at full speed (or is stopped). This type of servomotor is not widely used in industrial motion control, but it forms the basis of the simple and cheap servos used for radio-controlled models.
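The bang-bang scheme described above can be sketched behaviourally (illustrative code, not any particular controller's implementation):

def bang_bang_step(position, command, full_speed=5.0, dead_band=1.0):
    # Run at full speed toward the command; stop inside the dead band.
    error = command - position
    if abs(error) <= dead_band:
        return position           # close enough: motor stopped
    return position + (full_speed if error > 0 else -full_speed)

pos = 0.0
for _ in range(10):
    pos = bang_bang_step(pos, command=20.0)
print(pos)   # -> 20.0: the shaft steps at full speed, then stops at the target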
More sophisticated servomotors make use of an absolute encoder (a type of rotary encoder) to calculate the shaft's position and inf |
https://en.wikipedia.org/wiki/Electrochemical%20gradient | An electrochemical gradient is a gradient of electrochemical potential, usually for an ion that can move across a membrane. The gradient consists of two parts:
The chemical gradient, or difference in solute concentration across a membrane.
The electrical gradient, or difference in charge across a membrane.
When there are unequal concentrations of an ion across a permeable membrane, the ion will move across the membrane from the area of higher concentration to the area of lower concentration through simple diffusion. Ions also carry an electric charge that forms an electric potential across a membrane. If there is an unequal distribution of charges across the membrane, then the difference in electric potential generates a force that drives ion diffusion until the charges are balanced on both sides of the membrane.
Electrochemical gradients are essential to the operation of batteries and other electrochemical cells, photosynthesis and cellular respiration, and certain other biological processes.
Overview
Electrochemical energy is one of the many interchangeable forms of potential energy through which energy may be conserved. It appears in electroanalytical chemistry and has industrial applications such as batteries and fuel cells. In biology, electrochemical gradients allow cells to control the direction ions move across membranes. In mitochondria and chloroplasts, proton gradients generate a chemiosmotic potential used to synthesize ATP, and the sodium-potassium gradient helps neural synapses quickly transmit information.
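The balance point between the chemical and electrical components can be quantified with the Nernst equation; the snippet below (an illustration added here, not from the article) computes the equilibrium potential for potassium at typical mammalian concentrations:

import math

R = 8.314       # gas constant, J/(mol*K)
F = 96485.0     # Faraday constant, C/mol

def nernst_potential(c_out, c_in, z, temp_k=310.0):
    # Membrane voltage (V) at which the two gradients exactly balance.
    return (R * temp_k) / (z * F) * math.log(c_out / c_in)

# K+ is ~5 mM outside and ~140 mM inside a typical mammalian cell.
print(nernst_potential(5.0, 140.0, z=1) * 1000)   # ~ -89 mV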
An electrochemical gradient has two components: a differential concentration of electric charge across a membrane and a differential concentration of chemical species across that same membrane. In the former effect, the concentrated charge attracts charges of the opposite sign; in the latter, the concentrated species tends to diffuse across the membrane to equalize concentrations. The combination of these two phenomena determines the t |
https://en.wikipedia.org/wiki/Lycoperdon%20perlatum | Lycoperdon perlatum, popularly known as the common puffball, warted puffball, gem-studded puffball or devil's snuff-box, is a species of puffball fungus in the family Agaricaceae. A widespread species with a cosmopolitan distribution, it is a medium-sized puffball with a round fruit body tapering to a wide stalk, and dimensions of wide by tall. It is off-white with a top covered in short spiny bumps or "jewels", which are easily rubbed off to leave a netlike pattern on the surface. When mature it becomes brown, and a hole in the top opens to release spores in a burst when the body is compressed by touch or falling raindrops.
The puffball grows in fields, gardens, and along roadsides, as well as in grassy clearings in woods. It is edible when young and the internal flesh is completely white, although care must be taken to avoid confusion with immature fruit bodies of poisonous Amanita species. L. perlatum can usually be distinguished from other similar puffballs by differences in surface texture. Several chemical compounds have been isolated and identified from the fruit bodies of L. perlatum, including sterol derivatives, volatile compounds that give the puffball its flavor and odor, and the unusual amino acid lycoperdic acid. Extracts of the puffball have antimicrobial and antifungal activities.
Taxonomy
The species was first described in the scientific literature in 1796 by mycologist Christiaan Hendrik Persoon. Synonyms include Lycoperdon gemmatum (as described by August Batsch in 1783); the variety Lycoperdon gemmatum var. perlatum (published by Elias Magnus Fries in 1829); Lycoperdon bonordenii (George Edward Massee, 1887); and Lycoperdon perlatum var. bonordenii (A.C. Perdeck, 1950).
L. perlatum is the type species of the genus Lycoperdon. Molecular analyses suggest a close phylogenetic relationship with L. marginatum.
The specific epithet perlatum is Latin for "widespread". It is commonly known as the common puffball, the gem-studded puffball (or gemmed |
https://en.wikipedia.org/wiki/Pellet%20mill | A pellet mill, also known as a pellet press, is a type of mill or machine press used to create pellets from powdered material. Pellet mills are unlike grinding mills, in that they combine small materials into a larger, homogeneous mass, rather than break large materials into smaller pieces.
Types
There are many types of pellet mills, which can be generally grouped into large-scale and small-scale types. According to production capacity, pellet mills can also be divided into flat die and ring die pellet mills.
Large-scale
There are two common types of large-scale pellet mills: flat die mills and ring die mills. Flat die mills use a flat die with slots. The powder is introduced to the top of the die, and as the die rotates a roller presses the powder through the holes in the die. A cutter on the other side of the die cuts the exposed pellet free. In the ring die mill, there are radial slots throughout the die. Powder is fed into the inside of the die and spreaders distribute it evenly. Two rollers then compress the powder through the die holes, and two cutters cut the pellets free from the outside of the die.
Large scale pellet mills are usually used to produce animal feed, wood pellets, and fuel pellets for use in a pellet stove.
Small-scale
Small-scale mills are usually variations of screw presses or hydraulic presses. The same basic process is used for both types. A die, also known as a mold, holds the uncompressed powder in a shaped pocket. The pocket shape defines the final pellet shape. A platen is attached to the end of the screw (in a screw press) or the ram (in a hydraulic press), which compresses the powder.
Some platens are heated to shorten pressing time and improve the overall structure of the pellet. They may also have water ports for quick cooling between uses.
Applications
One of the more common applications is to produce KBr (potassium bromide) pellets which are used in infrared spectroscopy applications |
https://en.wikipedia.org/wiki/Infineon%20TriCore | TriCore is a 32-bit microcontroller architecture from Infineon. It unites the elements of a RISC processor core, a microcontroller and a DSP in one chip package.
History and background
In 1999, Infineon launched the first generation of AUDO (Automotive unified processor), based on what the company describes as a 32-bit "unified RISC/MCU/DSP microcontroller core" called TriCore, which as of 2011 is on its fourth generation, called AUDO MAX (version 1.6).
TriCore is a heterogeneous, asymmetric dual core architecture with a peripheral control processor that enables user modes and core system protection.
Infineon's AUDO families target gasoline and diesel engine control units (ECUs), applications in hybrid and electric vehicles, as well as transmission, active and passive safety, and chassis applications. They also target industrial applications, e.g. optimized motor control applications and signal processing.
Different models offer different combinations of memories, peripheral sets, frequencies, temperatures and packaging. Infineon also offers software claimed to help manufacturers meet SIL/ASIL safety standards. All members of the AUDO family are binary-compatible and share the same development tools. An AUTOSAR library that enables existing code to be integrated is also available.
Safety
Infineon's portfolio includes microcontrollers with additional hardware features as well as SafeTcore safety software and a watchdog IC.
AUDO families cover safety applications including active suspension and driver assistant systems and also EPS and chassis domain control. Some features of the product portfolio are memory protection, redundant peripherals, MemCheck units with integrated CRCs, ECC on memories, integrated test and debug functionality and FlexRay. |
https://en.wikipedia.org/wiki/QWK%20%28file%20format%29 | QWK is a file-based offline mail reader format that was popular among bulletin board system (BBS) users, especially users of FidoNet and other networks that generated large volumes of mail. QWK was originally developed by Mark "Sparky" Herring in 1987 for systems running the popular PCBoard bulletin board system, but it was later adapted for other platforms. Herring died of a heart attack in 2020 after being swatted. During the height of bulletin board system popularity, several dozen offline mail readers supported the QWK format.
Description
Like other offline readers, QWK gathered up messages for a particular user using BBS-side QWK software, compressed them using an application such as PKZIP, and then transferred them to the user. This was usually accomplished via a "BBS door" program running on the BBS system. In the case of QWK, the messages were placed in a single large file that was then bundled with several control files and compressed into a single archive with the file extension .qwk, typically with the BBS's "id" name as the base filename, in the form BBSID.QWK. The file was normally sent to the user automatically using the self-starting feature of the ZModem protocol, although most QWK doors allowed a choice of other protocols.
Once the resulting file had been received by the user, the steps were reversed to extract the files from the archive and then open them in a client-side reader. Again, these individual steps were typically automated to a degree, meaning that the user simply had to invoke the door software on the BBS, wait for the download to complete, and then run the client; the various intermediary steps were automated. QWK originally did not include any functionality for uploading replies, but this was quickly addressed as QWK became more popular. QWK placed replies in a .rep file (again, typically with the BBS's "id" as the base name) that was exchanged automatically the next time the user called in.
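A rough client-side sketch in C (illustrative only: the MESSAGES.DAT file name and the fixed 128-byte record size follow common QWK convention but are assumptions here, not taken from the text above, and the parsing is left schematic):
/* Sketch: iterate over the fixed-size records of the single large
 * messages file inside an unpacked QWK archive. */
#include <stdio.h>
#define QWK_BLOCK 128  /* conventional QWK record size (assumed) */
int main(void)
{
    unsigned char block[QWK_BLOCK];
    FILE *f = fopen("MESSAGES.DAT", "rb");  /* conventional name (assumed) */
    if (!f)
        return 1;
    /* each message is one header block followed by however many
     * 128-byte text blocks the header declares */
    while (fread(block, 1, QWK_BLOCK, f) == QWK_BLOCK) {
        /* a real reader would decode the header fields here */
    }
    fclose(f);
    return 0;
}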
QWK clients varied widely in functionality, but all of them off |
https://en.wikipedia.org/wiki/Virtually%20fibered%20conjecture | In the mathematical subfield of 3-manifolds, the virtually fibered conjecture, formulated by American mathematician William Thurston, states that every closed, irreducible, atoroidal 3-manifold with infinite fundamental group has a finite cover which is a surface bundle over the circle.
A 3-manifold which has such a finite cover is said to virtually fiber. If M is a Seifert fiber space, then M virtually fibers if and only if the rational Euler number of the Seifert fibration or the (orbifold) Euler characteristic of the base space is zero.
The hypotheses of the conjecture are satisfied by hyperbolic 3-manifolds. In fact, given that the geometrization conjecture is now settled, the only case that remained to be proven for the virtually fibered conjecture was that of hyperbolic 3-manifolds.
The original interest in the virtually fibered conjecture (as well as its weaker cousins, such as the virtually Haken conjecture) stemmed from the fact that any of these conjectures, combined with Thurston's hyperbolization theorem, would imply the geometrization conjecture. However, in practice all known attacks on the "virtual" conjecture take geometrization as a hypothesis, and rely on the geometric and group-theoretic properties of hyperbolic 3-manifolds.
The virtually fibered conjecture was not actually conjectured by Thurston. Rather, he posed it as a question and has stated that it was intended as a challenge and not meant to indicate he believed it, although he wrote that "[t]his dubious-sounding question seems to have a definite chance for a positive answer".
The conjecture was finally settled in the affirmative in a series of papers from 2009 to 2012. In a posting on the arXiv on 25 Aug 2009, Daniel Wise implied (by referring to a then-unpublished longer manuscript) that he had proven the conjecture for the case where the 3-manifold is closed, hyperbolic, and Haken. This was followed by a survey article in Electronic Research Announcements in Mathematical Scienc
https://en.wikipedia.org/wiki/HNN%20extension | In mathematics, the HNN extension is an important construction of combinatorial group theory.
Introduced in a 1949 paper Embedding Theorems for Groups by Graham Higman, Bernhard Neumann, and Hanna Neumann, it embeds a given group G into another group G' , in such a way that two given isomorphic subgroups of G are conjugate (through a given isomorphism) in G' .
Construction
Let G be a group with presentation G = ⟨S | R⟩, and let α : H → K be an isomorphism between two subgroups H and K of G. Let t be a new symbol not in S, and define
G∗α = ⟨S, t | R, tht−1 = α(h), ∀h ∈ H⟩.
The group is called the HNN extension of G relative to α. The original group G is called the base group for the construction, while the subgroups H and K are the associated subgroups. The new generator t is called the stable letter.
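A standard illustration (an added example, not drawn from the article itself): the Baumslag–Solitar group BS(1,2) is the HNN extension of the infinite cyclic group G = ⟨a⟩ with associated subgroups H = ⟨a⟩ and K = ⟨a2⟩ and isomorphism α(a) = a2:
\[
\mathrm{BS}(1,2) \;=\; \langle a, t \mid t\,a\,t^{-1} = a^{2} \rangle ,
\]
with stable letter t.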
Key properties
Since the presentation for G∗α contains all the generators and relations from the presentation for G, there is a natural homomorphism, induced by the identification of generators, which takes G to G∗α. Higman, Neumann, and Neumann proved that this morphism is injective, that is, an embedding of G into G∗α. A consequence is that two isomorphic subgroups of a given group are always conjugate in some overgroup; the desire to show this was the original motivation for the construction.
Britton's Lemma
A key property of HNN-extensions is a normal form theorem known as Britton's Lemma. Let G∗α be as above and let w be the following product in G∗α:
w = g0 tε1 g1 tε2 ⋯ tεn gn,  with each gi ∈ G and each εi = ±1.
Then Britton's Lemma can be stated as follows:
Britton's Lemma. If w = 1 in G∗α then
either n = 0 and g0 = 1 in G
or n > 0 and for some i ∈ {1, ..., n−1} one of the following holds:
εi = 1, εi+1 = −1, gi ∈ H,
εi = −1, εi+1 = 1, gi ∈ K.
In contrapositive terms, Britton's Lemma takes the following form:
Britton's Lemma (alternate form). If w is such that
either n = 0 and g0 ≠ 1 in G,
or n > 0 and the product w does not contain substrings of the form tht−1, where h ∈ H, nor of the form t−1kt, where k ∈ K,
then w ≠ 1 in G∗α.
Consequences of Britton's Lemma
Most basic properties of HNN-extensions follow from Britton's Lemm |
https://en.wikipedia.org/wiki/Lacida | The Lacida, also called LCD, was a Polish rotor cipher machine. It was designed and produced before World War II by Poland's Cipher Bureau for prospective wartime use by Polish military higher commands.
History
The machine's name derived from the surname initials of Gwido Langer, Maksymilian Ciężki and Ludomir Danilewicz and/or his younger brother, Leonard Danilewicz. It was built in Warsaw, to the Cipher Bureau's specifications, by the AVA Radio Company.
In anticipation of war, before the September 1939 invasion of Poland, two LCDs were sent to France. From spring 1941, an LCD was used by the Polish Team Z at the Polish-, Spanish- and French-manned Cadix radio-intelligence and decryption center at Uzès, near France's Mediterranean coast.
Prior to the machine's production, it had never been subjected to rigorous decryption attempts. Now it was decided to remedy this oversight. In early July 1941, Polish cryptologists Marian Rejewski and Henryk Zygalski received LCD-enciphered messages that had earlier been transmitted to the staff of the Polish Commander-in-Chief, based in London. Breaking the first message, given to the two cryptologists on July 3, took them only a couple of hours. Further tests yielded similar results. Colonel Langer suspended the use of LCD at Cadix.
In 1974, Rejewski explained that the LCD had two serious flaws. It lacked a commutator ("plugboard"), which was one of the strong points of the German military Enigma machine. The LCD's other weakness involved the reflector and wiring. These shortcomings did not imply that the LCD, somewhat larger than the Enigma and more complicated (e.g., it had a switch for resetting to deciphering), was easy to solve. Indeed, the likelihood of its being broken by the German E-Dienst was judged slight. Theoretically it did exist, however.
See also
Biuro Szyfrów (Cipher Bureau) |
https://en.wikipedia.org/wiki/Tonicity | In chemical biology, tonicity is a measure of the effective osmotic pressure gradient, that is, the water potential of two solutions separated by a partially permeable cell membrane. Tonicity depends on the relative concentration of selectively membrane-impermeable solutes across a cell membrane, which determine the direction and extent of osmotic flux. It is commonly used when describing the swelling-versus-shrinking response of cells immersed in an external solution.
Unlike osmotic pressure, tonicity is influenced only by solutes that cannot cross the membrane, as only these exert an effective osmotic pressure. Solutes able to freely cross the membrane do not affect tonicity because they will always equilibrate with equal concentrations on both sides of the membrane without net solvent movement. It is also a factor affecting imbibition.
There are three classifications of tonicity that one solution can have relative to another: hypertonic, hypotonic, and isotonic. An example of a hypotonic solution is distilled water.
Hypertonic solution
A hypertonic solution has a greater concentration of non-permeating solutes than another solution. In biology, the tonicity of a solution usually refers to its solute concentration relative to that of another solution on the opposite side of a cell membrane; a solution outside of a cell is called hypertonic if it has a greater concentration of solutes than the cytosol inside the cell. When a cell is immersed in a hypertonic solution, osmotic pressure tends to force water to flow out of the cell in order to balance the concentrations of the solutes on either side of the cell membrane. The cytosol is conversely categorized as hypotonic, opposite of the outer solution.
When plant cells are in a hypertonic solution, the flexible cell membrane pulls away from the rigid cell wall, but remains joined to the cell wall at points called plasmodesmata. The cells often take on the appearance of a pincushion, and the plasmodesmata almost cease to function b |
https://en.wikipedia.org/wiki/Interrupt%20vector%20table | An interrupt vector table (IVT) is a data structure that associates a list of interrupt handlers with a list of interrupt requests in a table of interrupt vectors. Each entry of the interrupt vector table, called an interrupt vector, is the address of an interrupt handler (also known as an interrupt service routine, ISR). While the concept is common across processor architectures, IVTs may be implemented in architecture-specific fashions. For example, a dispatch table is one method of implementing an interrupt vector table.
On x86 processors in real mode, interrupts are assigned a number between 0 and 255. The interrupt vectors for each interrupt number are stored in the lower 1,024 bytes of main memory. For example, interrupt 0 is stored from 0000:0000 to 0000:0003, interrupt 1 from 0000:0004 to 0000:0007, and so on.
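A minimal sketch of that layout in C (illustrative only; the helper name is hypothetical, and only the address arithmetic is taken from the description above):
#include <stdio.h>
/* Physical byte address of the 4-byte vector for interrupt n
 * (real-mode x86: the table occupies addresses 0x000-0x3FF). */
static unsigned ivt_vector_address(unsigned n)
{
    return 4u * n;
}
int main(void)
{
    for (unsigned n = 0; n < 3; n++)
        printf("interrupt %u: bytes %u..%u\n",
               n, ivt_vector_address(n), ivt_vector_address(n) + 3);
    return 0;
}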
Background
Most processors have an interrupt vector table, including chips from Intel, AMD, Infineon, Microchip, Atmel, NXP, ARM, etc.
Interrupt handlers
Handling methods
An interrupt vector table is used in the three most popular methods of finding the starting address of the interrupt service routine:
"Predefined"
The "predefined" method loads the program counter (PC) directly with the address of some entry inside the interrupt vector table. The jump table itself contains executable code. While in principle an extremely short interrupt handler could be stored entirely inside the interrupt vector table, in practice the code at each entry is a single jump instruction that jumps to the full interrupt service routine (ISR) for that interrupt. The Intel 8080, Atmel AVR and all 8051 and Microchip microcontrollers use the predefined approach.
"Fetch"
The "fetch" method loads the PC indirectly, using the address of some entry inside the interrupt vector table to pull an address out of that table, and then loading the PC with that address. Each and every entry of the IVT is the address of an interrupt service routine. All Motorola/Freescale microcontrollers use the fetch method.
"Interrupt acknowledge"
For the "i |
https://en.wikipedia.org/wiki/In-system%20programming | In-system programming (ISP), also called in-circuit serial programming (ICSP), is the ability of some programmable logic devices, microcontrollers, chipsets and other embedded devices to be programmed while installed in a complete system, rather than requiring the chip to be programmed prior to installing it into the system. It also allows firmware updates to be delivered to the on-chip memory of microcontrollers and related processors without requiring specialist programming circuitry on the circuit board, and simplifies design work.
Overview
There is no standard for in-system programming protocols for programming microcontroller devices. Almost all manufacturers of microcontrollers support this feature, but all have implemented their own protocols, which often differ even for different devices from the same manufacturer. In general, modern protocols try to keep the number of pins used low, typically two. Some ISP interfaces manage with just a single pin, while others use up to four pins to implement a JTAG interface.
The primary advantage of in-system programming is that it allows manufacturers of electronic devices to integrate programming and testing into a single production phase, saving money, rather than requiring a separate programming stage prior to assembling the system. This may allow manufacturers to program the chips in their own system's production line instead of buying pre-programmed chips from a manufacturer or distributor, making it feasible to apply code or design changes in the middle of a production run. The other advantage is that production can always use the latest firmware, and new features as well as bug fixes can be implemented and put into production without the delay that occurs when using pre-programmed microcontrollers.
Microcontrollers are typically soldered directly to a printed circuit board and usually do not have the circuitry or space for a large external programming cable to another computer.
Typically, |
https://en.wikipedia.org/wiki/Open%20Content%20Alliance | The Open Content Alliance (OCA) was a consortium of organizations contributing to a permanent, publicly accessible archive of digitized texts. Its creation was announced in October 2005 by Yahoo!, the Internet Archive, the University of California, the University of Toronto and others. Scanning for the Open Content Alliance was administered by the Internet Archive, which also provided permanent storage and access through its website.
The OCA was, in part, a response to Google Book Search, which was announced in October 2004. OCA's approach to seeking permission from copyright holders differed significantly from that of Google Book Search. OCA digitized copyrighted works only after asking and receiving permission from the copyright holder ("opt-in"). By contrast, Google Book Search digitized copyrighted works unless explicitly told not to do so ("opt-out"), and contends that digitizing for the purposes of indexing is fair use.
Microsoft had a special relationship with the Open Content Alliance until May 2008. Microsoft joined the Open Content Alliance in October 2005 as part of its Live Book Search project. However, in May 2008 Microsoft announced it would be ending the Live Book Search project and no longer funding the scanning of books through the Internet Archive. Microsoft removed any contractual restrictions on the content they had scanned and they relinquished the scanning equipment to their digitization partners and libraries to continue digitization programs. Between about 2006 and 2008 Microsoft sponsored the scanning of over 750,000 books, 300,000 of which are now part of the Internet Archive's on-line collections.
Opposition to Google Book Settlement
Brewster Kahle, a founder of the Open Content Alliance, actively opposed the proposed Google Book Settlement until its defeat in March 2011.
Contributors
The following are contributors to the OCA:
Adobe Systems Incorporated
Boston Library Consortium
Boston Public Library
The Bancroft Library
The British |
https://en.wikipedia.org/wiki/International%20Code%20of%20Nomenclature%20of%20Prokaryotes | The International Code of Nomenclature of Prokaryotes (ICNP) formerly the International Code of Nomenclature of Bacteria (ICNB) or Bacteriological Code (BC) governs the scientific names for Bacteria and Archaea. It denotes the rules for naming taxa of bacteria, according to their relative rank. As such it is one of the nomenclature codes of biology.
Originally the International Code of Botanical Nomenclature dealt with bacteria, and this kept references to bacteria until these were eliminated at the 1975 International Botanical Congress. An early Code for the nomenclature of bacteria was approved at the 4th International Congress for Microbiology in 1947, but was later discarded.
The latest version to be printed in book form is the 1990 Revision, but the book does not represent the current rules. The 2008 Revision has been published in the International Journal of Systematic and Evolutionary Microbiology (IJSEM). Rules are maintained by the International Committee on Systematics of Prokaryotes (ICSP; formerly the International Committee on Systematic Bacteriology, ICSB).
The baseline for bacterial names is the Approved Lists with a starting point of 1980. New bacterial names are reviewed by the ICSP as being in conformity with the Rules of Nomenclature and published in the IJSEM.
Cyanobacteria
Since 1975, most bacteria were covered under the bacteriological code. However, cyanobacteria were still covered by the botanical code. Starting in 1999, cyanobacteria were covered by both the botanical and bacteriological codes. This situation has caused nomenclatural problems for the cyanobacteria. By 2020, there were three proposals for how to resolve the situation:
Exclude cyanobacteria from the bacteriological code.
Apply the bacteriological code to all cyanobacteria.
Treat valid publication under the botanical code as valid publication under the bacteriological code.
In 2021, the ICSP held a formal vote on the three proposals and the third option was chosen.
Type |
https://en.wikipedia.org/wiki/FOSS.IN | FOSS.IN, previously known as Linux Bangalore, was an annual free and open source software (FOSS) conference, held in Bangalore, India from 2001 to 2012. From 2001 to 2004, it was known as Linux Bangalore, before it took on a new name and wider focus. During its lifetime, it was one of the largest FOSS events in Asia, with participants from around the world. It focused on the technical and software side of FOSS, encouraging development and contribution to FOSS projects from India. The event was held every year in late November or early December.
History
Linux Bangalore
Linux Bangalore was India's premier Free and Open Source Software event, held annually in Bangalore. It featured talks, discussions, workshops, round-table meetings and demonstrations by Indian and international speakers, and covered a diverse spectrum of Linux and other FOSS technologies, including kernel programming, embedded systems, desktop environments, localization, databases, web applications, gaming, multimedia and community and user group development.
First held in 2001, the event saw the participation of thousands of delegates and replicated its success in 2002, 2003 and 2004. Linux Bangalore was a community-driven event, conceived, planned and built by the free and open source community of India, and facilitated by the Bangalore Linux User Group. The event was very popular among software developers, as reflected in the demographics of participants.
At the conclusion of LB/2004, it was announced that the name of the event would change and the scope would expand to cater to a wider range of topics. On August 12, 2005, it was announced that the name of the event would be changed to FOSS.IN.
FOSS.IN
While the Linux Bangalore conferences focused around Linux, FOSS.IN broadened the scope to all free and open source software technologies. It was founded by Atul Chitnis.
FOSS.IN/2005
FOSS.IN/2005 was held from November 29 to December 2, 2005, at the Bangalore Palac |
https://en.wikipedia.org/wiki/Openfiler | Openfiler is an operating system that provides file-based network-attached storage and block-based storage area network capabilities. It was created by Xinit Systems, and is based on the CentOS Linux distribution. It is free software licensed under the GNU GPLv2.
History
The Openfiler codebase was started at Xinit Systems in 2001. The company created a project and donated the codebase to it in October 2003.
The first public release of Openfiler was made in May 2004. The latest release was published in 2011.
Although there has been no formal announcement, there is no evidence that Openfiler has been actively developed since 2015. DistroWatch has listed Openfiler as discontinued. The official website states that paid support is still available.
Criticism
Though some users have run Openfiler for years with few problems, in a 2013 article on SpiceWorks website, the author recommended against using Openfiler, citing lack of features, lack of support and risk of data loss.
See also
TrueNAS, a FreeBSD based free and open-source NAS solution
Unraid
OpenMediaVault
XigmaNAS
NetApp filer, a commercial proprietary filer
NexentaStor – advanced enterprise-level NAS software solution (Debian/OpenSolaris-based)
NAS4Free – network-attached storage (NAS) server software
Gluster
Zentyal
List of NAS manufacturers
Comparison of iSCSI targets
File area network
Disk enclosure
OpenWrt |
https://en.wikipedia.org/wiki/Peaceful%20nuclear%20explosion | Peaceful nuclear explosions (PNEs) are nuclear explosions conducted for non-military purposes. Proposed uses include excavation for the building of canals and harbours, electrical generation, the use of nuclear explosions to drive spacecraft, and as a form of wide-area fracking. PNEs were an area of some research from the late 1950s into the 1980s, primarily in the United States and Soviet Union.
In the U.S., a series of tests were carried out under Project Plowshare. Some of the ideas considered included blasting a new Panama Canal, constructing the proposed Nicaragua Canal, the use of underground explosions to create electricity (Project PACER), and a variety of mining, geological, and radionuclide studies. The largest of the excavation tests was carried out in the Sedan nuclear test in 1962, which released large amounts of radioactive gas into the air. By the late 1960s, public opposition to Plowshare was increasing, and a 1970s study of the economics of the concepts suggested they had no practical use. Plowshare saw decreasing interest from the 1960s, and was officially cancelled in 1977.
The Soviet program started a few years after the U.S. efforts and explored many of the same concepts under their Nuclear Explosions for the National Economy program. The program was more extensive, eventually conducting 239 nuclear explosions. Some of these tests also released radioactivity, including a significant release of plutonium into the groundwater and the polluting of an area near the Volga River. A major part of the program in the 1970s and 80s was the use of very small bombs to produce shock waves as a seismic measuring tool, and as part of these experiments, two bombs were successfully used to seal blown-out oil wells. The program officially ended in 1988.
As part of ongoing arms control efforts, both programs came to be controlled by a variety of agreements. Most notable among these is the 1976 Treaty on Underground Nuclear Explosions for Peaceful Purposes (PNE |
https://en.wikipedia.org/wiki/Pallor%20mortis | Pallor mortis (Latin: pallor "paleness", mortis "of death"), the first stage of death, is an after-death paleness that occurs in those with light/white skin. An opto-electronic colour measurement device can be used to measure pallor mortis on bodies.
Timing and applicability
Pallor mortis occurs almost immediately, generally within 15–25 minutes, after death. Paleness develops so rapidly after death that it has little to no use in determining the time of death, aside from indicating that death occurred either less than or more than about 30 minutes earlier, which could help if the body were found very soon after death.
Cause
Pallor mortis results from the collapse of capillary circulation throughout the body. Gravity then causes the blood to sink down into the lower parts of the body, creating livor mortis.
Similar paleness in living persons
A living person can look deathly pale, with such paleness often likened to death in figurative speech and in fiction. This can happen when blood escapes from the surface of the skin, as in a state of deep shock. Heart failure (insufficientia cordis) can also make the face appear pale, and the person might then have blue lips. Skin can also become pale as a result of vasoconstriction as part of the body's homeostatic systems in cold conditions, or if the skin is deficient in vitamin D, as seen in people who spend most of the time indoors, away from sunlight.
https://en.wikipedia.org/wiki/Termination%20factor | In molecular biology, a termination factor is a protein that mediates the termination of RNA transcription by recognizing a transcription terminator and causing the release of the newly made mRNA. This is part of the process that regulates the transcription of RNA to preserve gene expression integrity; termination factors are present in both eukaryotes and prokaryotes, although the process in bacteria is more widely understood. The most extensively studied and detailed transcriptional termination factor is the Rho (ρ) protein of E. coli.
Prokaryotic
Prokaryotes use one type of RNA polymerase, transcribing mRNAs that code for more than one type of protein. Transcription, translation and mRNA degradation all happen simultaneously. Transcription termination is essential to define boundaries in transcriptional units, a function necessary to maintain the integrity of the strands and provide quality control. Termination in E. coli may be Rho dependent, utilizing Rho factor, or Rho independent, also known as intrinsic termination. Although most operons in DNA are Rho independent, Rho dependent termination is also essential to maintain correct transcription.
ρ factor
The Rho protein is an RNA translocase that recognizes a cytosine-rich region of the elongating mRNA, but the exact features of the recognized sequences and how the cleaving takes place remain unknown. Rho forms a ring-shaped hexamer and advances along the mRNA toward the RNA polymerase (5' to 3' with respect to the mRNA), hydrolyzing ATP as it moves. When Rho reaches the RNA polymerase complex, transcription is terminated by dissociation of the RNA polymerase from the DNA. The structure and activity of the Rho protein are similar to those of the F1 subunit of ATP synthase, supporting the theory that the two share an evolutionary link.
Rho factor is widely present in different bacterial sequences and is responsible for the genetic polarity in E. coli. It works as a sensor of translational status, inhibiting non-productive transcri |
https://en.wikipedia.org/wiki/Bit%20banging | In computer engineering and electrical engineering, bit banging is a "term of art" for any method of data transmission that employs software as a substitute for dedicated hardware to generate transmitted signals or process received signals. Software directly sets and samples the states of GPIOs (e.g., pins on a microcontroller), and is responsible for meeting all timing requirements and protocol sequencing of the signals. In contrast to bit banging, dedicated hardware (e.g., UART, SPI, I²C) satisfies these requirements and, if necessary, provides a data buffer to relax software timing requirements. Bit banging can be implemented at very low cost, and is commonly used in some embedded systems.
Bit banging allows a device to implement different protocols with minimal or no hardware changes. In some cases, bit banging is made feasible by newer, faster processors because more recent hardware operates much more quickly than hardware did when standard communications protocols were created.
C code example
The following C language code example transmits a byte of data on an SPI bus.
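// Note: output_high(), output_low(), delay(), and the pin names SD_CS,
// SD_DI and SD_CLK are platform-specific GPIO primitives assumed to be
// provided elsewhere (e.g., by a vendor I/O library); they are not
// defined in this example.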
// transmit byte serially, MSB first
void send_8bit_serial_data(unsigned char data)
{
int i;
// select device (active low)
output_low(SD_CS);
// send bits 7..0
for (i = 0; i < 8; i++)
{
// consider leftmost bit
// set line high if bit is 1, low if bit is 0
if (data & 0x80)
output_high(SD_DI);
else
output_low(SD_DI);
// pulse the clock state to indicate that bit value should be read
output_low(SD_CLK);
delay();
output_high(SD_CLK);
// shift byte left so next bit will be leftmost
data <<= 1;
}
// deselect device
output_high(SD_CS);
}
Considerations
The question of whether to deploy bit banging is a trade-off between load, performance and reliability on one hand, and the availability of a hardware alternative on the other. The software emulation process consumes more
https://en.wikipedia.org/wiki/List%20of%20megafauna%20discovered%20in%20modern%20times | The following is a list of megafauna discovered by science since the beginning of the 19th century (with their respective date of discovery). Some of these may have been known to native peoples or reported anecdotally but had not been generally acknowledged as confirmed by the scientific world, until conclusive evidence was obtained for formal studies. In other cases, certain animals were initially considered hoaxes – similar to the initial reception of mounted specimens of the duck-billed platypus (Ornithorhynchus anatinus) in late 18th-century Europe.
The definition of megafauna varies, but this list includes some of the more notable examples.
Megafauna believed extinct, but rediscovered
Burchell's zebra (Equus quagga burchellii), 2004
Megafauna previously unknown from the fossil record
Western grey kangaroo Notamacropus fuliginosus (1817)
Malayan tapir Tapirus indicus (1819)
Red kangaroo Osphranter rufus (1822)
Lowland anoa Bubalus depressicornis (1827)
Mountain tapir Tapirus pinchaque (1829)
Baird's tapir Tapirus bairdii (1865)
Bonobo Pan paniscus (1928)
Kouprey Bos sauveli (1937)
Saola Pseudoryx nghetinhensis (1993)
Megafauna initially believed to have been fictitious or hoaxes
Przewalski's horse Equus ferus przewalskii (1881 - current wild population descended from zoo breeding since 1945)
Okapi Okapia johnstoni (1901)
See also
Mammalia
List of mammals described in the 2000s |
https://en.wikipedia.org/wiki/Low-force%20helix | A low-force helix (LFH-60) is a 60-pin electrical connector (four rows of 15 pins) with signals for two digital and analog connectors. Each of the pins is twisted approximately 45 degrees between the tip and the plastic frame which holds the pins in place. Hence "helix" in the name.
The DMS-59 is a derivative of the LFH60.
The LFH connector is typically used with workstations, because it can connect a single computer graphics source to up to four different monitors. The standard interface is a 60-pin LFH connector with two breakout VGA or DVI cables. This system provides users with flexibility for a variety of display configurations, though it forsakes standard DVI or VGA connectors, rendering the LFH connector unusable without an adapter.
It is also used in the HDCI (High Definition Camera Interface) found on Polycom HDX video conferencing systems.
Using the LFH interface requires a graphics card with multi-monitor capabilities and an LFH port. NVIDIA and Matrox used to manufacture such cards.
Another application of LFH capabilities is manufactured by Matrox. The Matrox card outputs via two LFH cables to a single monitor, delivering 9.2 million pixels of resolution (3840 × 2400). This system provides large amounts of detailed information for professional applications such as aerospace and automotive visualization, computer aided design, desktop publishing, digital photography, life sciences, mapping, oil and gas exploration, plant design and management, satellite imaging, space exploration, and transportation and logistics.
The LFH connector is used in some Cisco routers and WAN Interface Cards.
See also
DMS-59
Molex
Very-high-density cable interconnect (VHDCI), a connector standardized as a SCSI interface, often used to carry four digital/analog monitor signals
https://en.wikipedia.org/wiki/Delta%20method | In statistics, the delta method is a result concerning the approximate probability distribution for a function of an asymptotically normal statistical estimator from knowledge of the limiting variance of that estimator.
History
The delta method was derived from propagation of error, and the idea behind was known in the early 20th century. Its statistical application can be traced as far back as 1928 by T. L. Kelley. A formal description of the method was presented by J. L. Doob in 1935. Robert Dorfman also described a version of it in 1938.
Univariate delta method
While the delta method generalizes easily to a multivariate setting, careful motivation of the technique is more easily demonstrated in univariate terms. Roughly, if there is a sequence of random variables Xn satisfying
√n(Xn − θ) →d N(0, σ2),
where θ and σ2 are finite valued constants and →d denotes convergence in distribution, then
√n(g(Xn) − g(θ)) →d N(0, σ2[g′(θ)]2)
for any function g satisfying the property that its first derivative, evaluated at θ, exists and is non-zero valued.
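As an added illustration (a standard textbook example, not taken from the text above): for the sample mean X̄n of i.i.d. variables with mean θ > 0 and finite variance σ2, taking g(x) = log x gives g′(θ) = 1/θ, so
\[
\sqrt{n}\,(\bar{X}_n - \theta) \xrightarrow{d} N(0,\sigma^2)
\quad\Longrightarrow\quad
\sqrt{n}\,(\log \bar{X}_n - \log \theta) \xrightarrow{d} N\!\left(0, \frac{\sigma^2}{\theta^2}\right).
\]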
Proof in the univariate case
Demonstration of this result is fairly straightforward under the assumption that g′ is continuous. To begin, we use the mean value theorem (i.e.: the first order approximation of a Taylor series using Taylor's theorem):
g(Xn) = g(θ) + g′(θ̃)(Xn − θ),
where θ̃ lies between Xn and θ.
Note that since Xn →P θ and θ̃ lies between Xn and θ, it must be that θ̃ →P θ, and since g′ is continuous, applying the continuous mapping theorem yields
g′(θ̃) →P g′(θ),
where →P denotes convergence in probability.
Rearranging the terms and multiplying by √n gives
√n(g(Xn) − g(θ)) = g′(θ̃) · √n(Xn − θ).
Since
√n(Xn − θ) →d N(0, σ2)
by assumption, it follows immediately from an appeal to Slutsky's theorem that
√n(g(Xn) − g(θ)) →d N(0, σ2[g′(θ)]2).
This concludes the proof.
Proof with an explicit order of approximation
Alternatively, one can add one more step at the end, to obtain the order of approximation:
√n(g(Xn) − g(θ)) = g′(θ)·√n(Xn − θ) + op(1).
This suggests that the error in the approximation converges to 0 in probability.
Multivariate delta method
By definition, a consistent estimator B converges in probability to its true value β, and often a central limit theorem can be applied to obtain asymptotic |
https://en.wikipedia.org/wiki/Cayley%E2%80%93Menger%20determinant | In linear algebra, geometry, and trigonometry, the Cayley–Menger determinant is a formula for the content, i.e. the higher-dimensional volume, of an (n − 1)-dimensional simplex in terms of the squares of all of the distances between pairs of its vertices. The determinant is named after Arthur Cayley and Karl Menger.
The pairwise distance polynomials between n points in a real Euclidean space are Euclidean invariants that are associated via the Cayley–Menger relations. These relations served multiple purposes, such as generalising Heron's formula, computing the content of an n-dimensional simplex, and ultimately determining whether any real symmetric matrix is a Euclidean distance matrix, in the field of distance geometry.
History
Karl Menger was a young geometry professor at the University of Vienna and Arthur Cayley was a British mathematician who specialized in algebraic geometry. Menger built on Cayley's algebraic work to propose a new axiomatization of metric spaces using the concepts of distance geometry and the relation of congruence, leading to what is now known as the Cayley–Menger determinant. This ended up generalising one of the first discoveries in distance geometry, Heron's formula, which computes the area of a triangle given its side lengths.
Definition
Let x0, x1, ..., xn be n + 1 points in k-dimensional Euclidean space, with k ≥ n. These points are the vertices of an n-dimensional simplex: a triangle when n = 2; a tetrahedron when n = 3, and so on. Let dij be the Euclidean distance between vertices xi and xj. The content, i.e. the n-dimensional volume of this simplex, denoted by vn, can be expressed as a function of determinants of certain matrices, as follows:
\[
v_n^2 \;=\; \frac{(-1)^{n+1}}{2^n\,(n!)^2}
\begin{vmatrix}
0 & d_{01}^2 & d_{02}^2 & \cdots & d_{0n}^2 & 1 \\
d_{01}^2 & 0 & d_{12}^2 & \cdots & d_{1n}^2 & 1 \\
d_{02}^2 & d_{12}^2 & 0 & \cdots & d_{2n}^2 & 1 \\
\vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\
d_{0n}^2 & d_{1n}^2 & d_{2n}^2 & \cdots & 0 & 1 \\
1 & 1 & 1 & \cdots & 1 & 0
\end{vmatrix}
\]
This is the Cayley–Menger determinant. For n = 2 it is a symmetric polynomial in the dij's and is thus invariant under permutation of these quantities. This fails for n > 2, but it is always invariant under permutation of the vertices.
Except for the final row and column of 1s, the matrix in the second form of this equation is a Euclidean distance matrix.
Special cases
2-Simplex
To |
https://en.wikipedia.org/wiki/Dynamic%20Logical%20Partitioning | Dynamic Logical Partitioning (DLPAR), is the capability of a logical partition (LPAR) to be reconfigured dynamically, without having to shut down the operating system that runs in the LPAR. DLPAR enables memory, CPU capacity, and I/O interfaces to be moved nondisruptively between LPARs within the same server.
DLPAR has been supported by the operating systems AIX and IBM i on almost all POWER4 and follow-on POWER systems since then. The Linux kernel for POWER also supported DLPAR, but its dynamic reconfiguration capabilities were limited to CPU capacity and PCI devices, not memory. In October 2009, seven years after the AIX announcement of DLPAR of memory, CPU and IO slots, Linux finally added the capability to DLPAR memory on POWER systems. The fundamentals of DLPAR are described in the IBM Systems Journal paper titled "Dynamic reconfiguration: Basic building blocks for autonomic computing on IBM pSeries Servers."
Later on, the POWER5 processor added enhanced DLPAR capabilities, including micro-partitioning: up to 10 LPARs can be configured per processor, with a single multiprocessor server supporting a maximum of 254 LPARs (and thus up to 254 independent operating system instances).
There are many interesting applications of DLPAR capabilities. Primarily, it is used to build agile infrastructures, or to automate hardware system resource allocation, planning, and provisioning. This in turn results in increased system utilization. For example, memory, processor or I/O slots can be added, removed or moved to another LPAR without rebooting the operating system or the application running in an LPAR. IBM DB2 is one such application (http://www.ibm.com/developerworks/eserver/articles/db2_dlpar.html); it is aware of DLPAR events and automatically tunes itself to changing LPAR resources.
The IBM Z mainframes and their operating systems, including Linux on IBM Z, support even more sophisticated forms of dynamic LPARs. Relevant LPAR-related features on those mainf |
https://en.wikipedia.org/wiki/Red%20flag%20%28idiom%29 | A red flag could either be a literal red flag used for signaling or, as a metaphor, a sign of some particular problem requiring attention.
Background
The term and the expression "to raise the red flag" come from various usages of real flags throughout history. A red flag is frequently flown by armed forces to warn the public of live fire exercises in progress, and is sometimes flown by ships carrying munitions (in this context it is actually the flag for the letter B in the international maritime signal flag alphabet, a red swallow-tailed flag). In many countries a red flag is flown to signify that an outdoor shooting range is in use. The United States Air Force refers to its largest annual exercise as Red Flag operation. Red flags are used for various signals in team sailing races (see Racing Rules of Sailing). A red flag warning is a signal of high wildfire danger, and a red flag on the beach warns of dangerous water conditions (double red flags indicate beach closure). Red flags of various designs indicate dangerous wind and wave conditions for mariners. In auto racing, a red flag indicates that a race has been stopped.
A signal of danger or a problem can be referred to as a red flag, a usage that originated in the 18th century. An infamous example of use of a red flag in warfare is Mexican General Antonio López de Santa Anna's use of the symbol to let his Texian opposition in the Alamo know that he intended to spare none of the defenders (on which he followed through). The term "red flag" is used, e.g., during screening of communications, and refers to specific words or phrases encountered that might indicate relevance to the case. For example, email spam filters make use of such "red flags".
See also
Red flag (racing)
Red Flags Rule |
https://en.wikipedia.org/wiki/RNA%20editing | RNA editing (also RNA modification) is a molecular process through which some cells can make discrete changes to specific nucleotide sequences within an RNA molecule after it has been generated by RNA polymerase. It occurs in all living organisms and is one of the most evolutionarily conserved properties of RNAs. RNA editing may include the insertion, deletion, and base substitution of nucleotides within the RNA molecule. RNA editing is relatively rare, with common forms of RNA processing (e.g. splicing, 5'-capping, and 3'-polyadenylation) not usually considered as editing. It can affect the activity, localization as well as stability of RNAs, and has been linked with human diseases.
RNA editing has been observed in some tRNA, rRNA, mRNA, or miRNA molecules of eukaryotes and their viruses, archaea, and prokaryotes. RNA editing occurs in the cell nucleus, as well as within mitochondria and plastids. In vertebrates, editing is rare and usually consists of a small number of changes to the sequence of the affected molecules. In other organisms, such as squids, extensive editing (pan-editing) can occur; in some cases the majority of nucleotides in an mRNA sequence may result from editing. More than 160 types of RNA modifications have been described so far.
RNA-editing processes show great molecular diversity, and some appear to be evolutionarily recent acquisitions that arose independently. The diversity of RNA editing phenomena includes nucleobase modifications such as cytidine (C) to uridine (U) and adenosine (A) to inosine (I) deaminations, as well as non-template nucleotide additions and insertions. RNA editing in mRNAs effectively alters the amino acid sequence of the encoded protein so that it differs from that predicted by the genomic DNA sequence.
Detection of RNA editing
Next generation sequencing
To identify diverse post-transcriptional modifications of RNA molecules and determine the transcriptome-wide landscape of RNA modifications by means of next gener |
https://en.wikipedia.org/wiki/Slim%20Devices | Slim Devices, Inc. was a consumer electronics company based in Mountain View, California, United States. Their main product was the Squeezebox network music player, which connects to a home Ethernet or Wi-Fi network and allows the owner to stream digital audio over the network to a stereo. The company, founded in 2000, was originally most notable for their support of open-source software, namely their SlimServer software, on which all their products at that time depended; it is still available as a free download and open to modification by any interested developer.
On 18 October 2006 Sean Adams, the CEO of Slim Devices, announced that the company was being fully acquired by Logitech.
Slim Devices was featured in the December 2006 issue of Fast Company magazine. The article focused on the company's business model and profiled the three key leaders: Sean Adams (CEO), Dean Blackketter (CTO), and Patrick Cosson (VP of Marketing). |
https://en.wikipedia.org/wiki/Herman%20Hedning | Herman Hedning (lit. Herman the Heathen, known as Marwin Meathead in English editions) is a Swedish humorous comic strip, drawn and written by Jonas Darnell.
History
Herman Hedning first appeared in 1988 in Fantomen, the Swedish edition of The Phantom. Since then Jonas Darnell has written over 700 strips, and a number of longer stories, typically 8-10 pages. About 680 have been translated into English. The first of 20 albums was released in 1990, and the first magazine in 1998. Since 1998 there have been 130 issues, eight per year, except for 1998 and 1999, with 2 and 6 issues respectively, and a few years where nine issues were published.
Herman Hedning is published in Sweden, Norway and Finland, and is a steady seller, outlasting many of its contemporaries.
Main characters
The comic has three main protagonists, Herman Hedning (Marwin Meathead), Gammelman (Oldhead) and Lilleman (Shorthead).
Marwin Meathead is fat, sadistic, greedy, lazy and self-important, and possesses all the bad qualities imaginable in a human being. He always wears his helmet and carries a wooden club with a large nail through it, made of the core wood of an Arch-oak. He is continuously plotting against his two acquaintances, Oldhead and Shorthead. Despite his sadism, brutal nature and professed lack of intelligence, he is shown to be quite intelligent when it comes to certain subjects, as well as sometimes extremely cunning when it comes to making money, often by taking advantage of the trusting nature of others.
Shorthead is a kind, naive, nature-loving soul who wants to be everybody's friend, and is often the target for Marwin's sadistic plans. At times, however, he can be just as malicious towards Marwin, switching between pitying him and despising him, and being willing to let him suffer from the consequences of his actions.
Oldhead, who has glasses and a beard, is grumpy and blasé, and is usually happy to simply sit and watch Marwin and Shorthead exerting violence against each other |
https://en.wikipedia.org/wiki/DataTAC | DataTAC is a wireless data network technology originally developed by Mobile Data International, which was later acquired by Motorola, who jointly developed it with IBM and deployed it in the United States as ARDIS (Advanced Radio Data Information Services). DataTAC was also marketed in the mid-1990s as MobileData by Telecom Australia, and is still used by Bell Mobility as a paging network in Canada. The first public open mobile data network using MDI DataTAC was launched in Hong Kong by Hutchison Mobile Data Limited (a subsidiary of Hutchison Telecom), where public end-to-end data services were provided for enterprises such as FedEx; consumer mobile information services called MobileQuotes, offering financial information, news, telebetting and stock data, were also provided.
DataTAC is an open standard for point to point wireless data communications, similar to Mobitex. Like Mobitex, it is mainly used in vertical market applications. One of the early DataTAC devices was the Newton Messaging Card, a two-way pager connected to a PC card using the DataTAC network. The original BlackBerry devices, the RIM 850 and 857 also used the DataTAC network.
In North America, DataTAC is typically deployed in the 800 MHz band. DataTAC was also deployed in the same band by Telecom Australia (now Telstra).
The DataTAC network runs at speeds up to 19.2 kbit/s, which is not sufficient to handle most of the wireless data applications available today. The network runs 25 kHz channels in the 800 MHz frequency bands. Due to the lower frequency bands that DataTAC uses, in-building coverage is typically better than with newer, higher frequency networks.
In the 1990s, a DataTAC network operators group called the Worldwide Wireless Data Networks Operators Group (WWDNOG) was put together by Motorola and chaired by Shahram Mehraban, Motorola's DataTAC system product manager.
https://en.wikipedia.org/wiki/GEOnet%20Names%20Server | The GEOnet Names Server (GNS), sometimes also referred to in official documentation as Geographic Names Data or geonames in domain and email addresses, is a service that provides access to the United States National Geospatial-Intelligence Agency's (NGA) and the US Board on Geographic Names's (BGN) database of geographic feature names and locations for locations outside the US. The database is the official repository for the US Federal Government on foreign place-name decisions approved by the BGN. Approximately 20,000 of the database's features are updated monthly. Names are not deleted from the database, "except in cases of obvious duplication". The database contains search aids such as spelling variations and non-Roman script spellings in addition to its primary information about location, administrative division, and quality. The accuracy of the database has been criticised.
Accuracy
A 2008 survey of South Korean toponyms on GNS found that roughly 1% of them were actually Japanese names that had never been in common usage, even during the period of Japanese colonial rule in Korea, and had come from a 1946 US military map that had apparently been compiled with Japanese assistance. In addition to the Japanese toponyms, the same study noted that "There are many spelling errors and simple mis-understanding of the place names with similar characters" amongst South Korean toponyms on GNS, as well as extraneous names of Chinese and English origin.
See also
Geographic Names Information System (GNIS), a similar database for locations within the United States |
https://en.wikipedia.org/wiki/Arbiter%20%28electronics%29 | Arbiters are electronic devices that allocate access to shared resources.
Bus arbiter
There are multiple ways to perform computer bus arbitration, with the most popular varieties being:
dynamic centralized parallel where one central arbiter is used for all masters as discussed in this article;
centralized serial (or "daisy chain") where, upon accessing the bus, the active master passes the opportunity to the next one. In essence, each connected master contains its own arbiter;
distributed arbitration by self-selection (distributed bus arbitration) where the access is self-granted based on the decision made locally by using information from other masters;
distributed arbitration by collision detection where each master tries to access the bus on its own, but detects conflicts and retries the failed operations.
A bus arbiter is a device used in a multi-master bus system to decide which bus master will be allowed to control the bus for each bus cycle.
The most common kind of bus arbiter is the memory arbiter in a system bus system.
A memory arbiter is a device used in a shared memory system to decide, for each memory cycle, which CPU will be allowed to access that shared memory.
Some atomic instructions depend on the arbiter to prevent other CPUs from reading memory "halfway through" atomic read-modify-write instructions.
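For instance (an illustrative C11 snippet, not from the article), an atomic increment is exactly such a read-modify-write; the hardware, arbiter included, must keep its steps indivisible:
#include <stdatomic.h>
atomic_int shared_counter;
void safe_increment(void)
{
    /* read the value, add 1, and write it back as one indivisible
     * operation; no other CPU may touch the location "halfway through" */
    atomic_fetch_add(&shared_counter, 1);
}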
A memory arbiter is typically integrated into the memory controller/DMA controller.
Some systems, such as conventional PCI, have a single centralized bus arbitration device that one can point to as "the" bus arbiter.
Other systems use decentralized bus arbitration, where all the devices cooperate to decide who goes next.
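The per-cycle decision of a centralized arbiter can be sketched as a round-robin scan over pending requests (a sketch of the policy only, with invented names; real arbiters implement this in hardware):
#include <stdint.h>
#define NUM_MASTERS 8
/* req:  bitmask of masters requesting the bus this cycle (bit i = master i)
 * last: the master granted in the previous cycle
 * returns the next master to grant, or -1 if no one is requesting */
int round_robin_arbitrate(uint8_t req, int last)
{
    for (int step = 1; step <= NUM_MASTERS; step++) {
        int candidate = (last + step) % NUM_MASTERS;
        if (req & (1u << candidate))
            return candidate;
    }
    return -1; /* bus stays idle */
}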
When every CPU connected to the memory arbiter has synchronized memory access cycles, the memory arbiter can be designed as a synchronous arbiter.
Otherwise the memory arbiter must be designed as an asynchronous arbiter.
Asynchronous arbiters
An important form of arbiter is used in asynchronous circuits to select th |
https://en.wikipedia.org/wiki/Immerman%E2%80%93Szelepcs%C3%A9nyi%20theorem | In computational complexity theory, the Immerman–Szelepcsényi theorem states that nondeterministic space complexity classes are closed under complementation. It was proven independently by Neil Immerman and Róbert Szelepcsényi in 1987, for which they shared the 1995 Gödel Prize. In its general form the theorem states that NSPACE(s(n)) = co-NSPACE(s(n)) for any function s(n) ≥ log n. The result is equivalently stated as NL = co-NL; although this is the special case when s(n) = log n, it implies the general theorem by a standard padding argument. The result solved the second LBA problem.
In other words, if a nondeterministic machine can solve a problem, another machine with the same resource bounds can solve its complement problem (with the yes and no answers reversed) in the same asymptotic amount of space. No similar result is known for the time complexity classes, and indeed it is conjectured that NP is not equal to co-NP.
The principle used to prove the theorem has become known as inductive counting. It has also been used to prove other theorems in computational complexity, including the closure of LOGCFL under complementation and the existence of error-free randomized logspace algorithms for USTCON.
Proof sketch
The theorem can be proven by showing how to translate any nondeterministic Turing machine M into another nondeterministic Turing machine that solves the complementary decision problem under the same (asymptotic) space complexity, plus a constant number of pointers and counters, which need only a logarithmic amount of space.
The idea is to simulate all the configurations of M on input w, and to check if any configuration is accepting. This can be done within the same space plus a constant number of pointers and counters to keep track of the configurations. If no configuration is accepting, the simulating Turing machine accepts the input w. This idea is elaborated below for logarithmic NSPACE class (NL), which generalizes to larger NSPACE classes via a |
https://en.wikipedia.org/wiki/Algorithm%20BSTW | The Algorithm BSTW is a data compression algorithm, named after its designers, Bentley, Sleator, Tarjan and Wei in 1986. BSTW is a dictionary-based algorithm that uses a move-to-front transform to keep recently seen dictionary entries at the front of the dictionary. Dictionary references are then encoded using any of a number of encoding methods, usually Elias delta coding or Elias gamma coding. |
https://en.wikipedia.org/wiki/EDN%20%28magazine%29 | EDN is an electronics industry website and formerly a magazine owned by AspenCore Media, an Arrow Electronics company. The editor-in-chief is Majeed Ahmad. EDN was published monthly until it announced, in April 2013, that the print edition would cease publication after the June 2013 issue.
History
The first issue of Electrical Design News, the original name, was published in May 1956 by Rogers Corporation of Englewood, Colorado. In January 1961, Cahners Publishing Company, Inc., of Boston, acquired Rogers Publishing Company. In February 1966, Cahners sold 40% of its company to International Publishing Company in London. In 1970, the Reed Group merged with International Publishing Corporation and changed its name to Reed International Limited.
Acquisition of EEE magazine
Cahners Publishing Company acquired Electronic Equipment Engineering, a monthly magazine, in March 1971 and discontinued it. In doing so, Cahners folded EEE's best features into EDN, and renamed the magazine EDN/EEE. At the time, George Harold Rostky (1926–2003) was editor-in-chief of EEE. Rostky joined EDN and eventually became editor-in-chief before leaving to join Electronic Engineering Times as editor-in-chief.
Taking EDN worldwide
Roy Forsberg later became editor-in-chief of EDN magazine. He was subsequently promoted to publisher, and Jon Titus, PhD, was named editor-in-chief. Forsberg and Titus established EDN Europe, EDN Asia and EDN China, creating one of the largest global circulations for a design engineering magazine. EDN's 25th anniversary issue was a 425-page folio.
Reed Limited acquires remaining interest in Cahners
In 1977, Reed acquired the remaining interest in Cahners, then known as Cahners Publications. In 1982, Reed International Limited changed its name to Reed International PLC. In 1992, Reed International merged with Elsevier NV, becoming Reed Elsevier PLC on January 1, 1993. Reed Business Media then dropped the Cahners Business Publishing name to rebrand itself as Reed Business Information.
https://en.wikipedia.org/wiki/Theodosius%20of%20Bithynia | Theodosius of Bithynia (2nd–1st century BC) was a Hellenistic astronomer and mathematician from Bithynia who wrote the Spherics, a treatise about spherical geometry, as well as several other books on mathematics and astronomy, of which two survive, On Habitations and On Days and Nights.
Life
Little is known about Theodosius' life. The Suda (a 10th-century Byzantine encyclopedia) mentions him writing a commentary on Archimedes' Method (late 3rd century BC), and Strabo's Geographica mentions the mathematician Hipparchus (c. 190 – c. 120 BC) and "Theodosius and his sons" as among the residents of Bithynia distinguished for their learning. Later, Vitruvius (1st century BC) mentioned a sundial invented by Theodosius. Thus Theodosius lived sometime after Archimedes and before Vitruvius, likely contemporaneously with or after Hipparchus, probably sometime between 200 and 50 BC.
Historically he was called Theodosius of Tripolis due to a confusing paragraph in the Suda which probably fused the entries about separate people named Theodosius, and was interpreted to mean that he came either from the Tripolis in Phoenicia or the one in Africa. Some sources claim he moved from Bithynia to Tripolis, or came from a hypothetical city called Tripolis in Bithynia.
Works
Theodosius' chief work, the Spherics, provided the mathematics for spherical astronomy. Euclid's Phenomena and Autolycus's On the Moving Sphere, both dating from two centuries prior, make use of theorems that are proven in the Spherics, so it has been speculated that both authors expected readers to be familiar with an earlier treatise on elementary spherical geometry, perhaps by Eudoxus of Cnidus (4th century BC), on which the Spherics itself may have been based. However, no mention of this hypothetical earlier work or its author survives today.
The Spherics is reasonably complete, and it remained the main reference on the subject at least until the time of Pappus of Alexandria (4th century AD). The work was translated into Arabic in the 10th century, and then into Latin.
https://en.wikipedia.org/wiki/Generation%E2%80%93recombination%20noise | Generation–recombination noise, or g–r noise, is a type of electrical signal noise caused by statistical fluctuation in the generation and recombination of electrons in semiconductor-based photon detectors.
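For reference, g–r noise is commonly modeled with a Lorentzian power spectral density; this standard textbook form is our addition and does not appear in the excerpt above.

```latex
% Lorentzian model for the g-r noise power spectral density of the carrier
% number N; tau is the carrier lifetime and \overline{\Delta N^2} is the
% variance of the carrier-number fluctuation.
S_N(f) = \frac{4\,\overline{\Delta N^2}\,\tau}{1 + (2\pi f \tau)^2}
```

At frequencies well below 1/(2πτ) the spectrum is flat; above that corner frequency it rolls off as 1/f².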
https://en.wikipedia.org/wiki/String%20theory%20landscape | In string theory, the string theory landscape (or landscape of vacua) is the collection of possible false vacua, together comprising a collective "landscape" of choices of parameters governing compactifications.
The term "landscape" comes from the notion of a fitness landscape in evolutionary biology. It was first applied to cosmology by Lee Smolin in his book The Life of the Cosmos (1997), and was first used in the context of string theory by Leonard Susskind.
Compactified Calabi–Yau manifolds
In string theory the number of flux vacua is commonly thought to be roughly 10^500, but could be 10^272,000 or higher. The large number of possibilities arises from choices of Calabi–Yau manifolds and choices of generalized magnetic fluxes over various homology cycles, found in F-theory.
If there is no structure in the space of vacua, the problem of finding a vacuum with a sufficiently small cosmological constant is NP-complete; it is a version of the subset sum problem, as illustrated below.
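A toy brute-force illustration of the subset-sum flavor of the search, with made-up numbers standing in for individual flux contributions to the vacuum energy; nothing here comes from the string-theory literature beyond the analogy itself.

```python
# Toy brute-force landscape scan (hypothetical numbers): pick a subset of
# individual flux contributions whose total vacuum energy lands in a very
# narrow window around zero, mirroring the SUBSET-SUM problem.
from itertools import combinations

contributions = [3.1, -2.7, 5.4, -1.9, -0.8, -4.6]  # hypothetical flux terms
target, tolerance = 0.0, 0.05

for r in range(1, len(contributions) + 1):
    for combo in combinations(contributions, r):
        if abs(sum(combo) - target) < tolerance:
            print("near-zero vacuum:", combo, "sum =", round(sum(combo), 3))
```

The scan is exponential in the number of contributions, which is exactly why a lack of structure in the space of vacua makes finding a realistic vacuum computationally hard.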
A possible mechanism of string theory vacuum stabilization, now known as the KKLT mechanism, was proposed in 2003 by Shamit Kachru, Renata Kallosh, Andrei Linde, and Sandip Trivedi.
Fine-tuning by the anthropic principle
Fine-tuning of constants like the cosmological constant or the Higgs boson mass is usually assumed to occur for precise physical reasons, as opposed to the constants taking their particular values at random. That is, these values should be uniquely determined by the underlying physical laws.
The number of theoretically allowed configurations has prompted suggestions that this is not the case, and that many different vacua are physically realized. The anthropic principle proposes that the fundamental constants may have the values they have because such values are necessary for life (and therefore for intelligent observers to measure the constants). The anthropic landscape thus refers to the collection of those portions of the landscape that are suitable for supporting intelligent life.
In order to implement this idea in a concrete physical theory, it is necessary to postulate a multiverse in which fundamental physical parameters can take different values.
https://en.wikipedia.org/wiki/Institute%20for%20Creative%20Technologies | The Institute for Creative Technologies (ICT) is a University Affiliated Research Center at the University of Southern California located in Playa Vista, California. ICT was established in 1999 with funding from the US Army.
Dr. Mike Andrews, chief scientist of the US Army, is described as the "founder of and inspiration behind" the ICT. He followed up on discussions between US Army leadership (four-star general Paul J. Kern) and Disney Imagineering president Bran Ferren on how to gain access to Hollywood entertainment industry expertise in high-technology areas such as computer-based Modeling & Simulation and Virtual Reality. The name was derived from Ferren's title at The Walt Disney Company.
It was created to combine the assets of a major research university with the creative resources of Hollywood and the game industry to advance the state-of-the-art in training and simulation. The institute's research has also led to applications for education, entertainment and rehabilitation, including virtual patients, virtual museum guides and visual effects technologies. Core areas include virtual humans, graphics, mixed-reality, learning sciences, games, storytelling and medical virtual reality.
Honors and awards
2010 - Scientific and Engineering Academy Award
2018 - Second Academy Award for Technical Achievement
2022 - Emmy (Technical Achievement)
Army affiliation
ICT is a DoD-sponsored University Affiliated Research Center (UARC) working in collaboration with the U.S. Army DEVCOM Soldier Center, and is one of the Army's four UARCs. UARCs are strategic United States Department of Defense (DoD) research centers, each associated with an American university. They are formally established by the Under Secretary of Defense for Research and Engineering (USD(R&E)) to ensure that essential engineering and technology capabilities of particular importance to the DoD are maintained and readily available.
https://en.wikipedia.org/wiki/Hydrophobic%20sand | Hydrophobic sand (or magic sand) is a toy made from sand coated with a hydrophobic compound. The hydrophobic coating causes the grains of sand to adhere to one another and form cylinders (minimizing surface area) when exposed to water, with a pocket of air forming around the sand. This pocket of air prevents the sand from getting wet. A variation, kinetic sand, has several of the same properties but acts like wet sand that will not dry out. Hydrophobic sand, whether the wet or dry type, will not mix with water.
History
The earliest reference to waterproof sand is in the 1915 book The Boy Mechanic Book 2, published by Popular Mechanics. The Boy Mechanic states that waterproof sand was invented by East Indian magicians. The sand was made by mixing heated sand with melted wax; the wax coating then repelled water whenever the sand was exposed to it.
Magic sand was originally developed to trap ocean oil spills near the shore. This was done by sprinkling magic sand on floating petroleum, which would then mix with the oil and make it heavy enough to sink. Due to the expense of production, however, it is no longer used for this purpose.
Hydrophobic sand has also been tested by utility companies in Arctic regions as a foundation for junction boxes, as it never freezes. It is also used as an aerating medium for potted plants.
Properties
Magic sand
The properties of Magic sand are achieved by exposing ordinary beach sand, which contains tiny particles of pure silica, to vapors of trimethylsilanol, (CH₃)₃SiOH, an organosilicon compound. Upon exposure, the trimethylsilanol bonds to the silica particles while forming water (see the schematic reaction below). The exteriors of the sand grains are thus coated with hydrophobic groups. When Magic sand is removed from water, it is completely dry and free-flowing.
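The bonding step described above is a condensation (silanization) reaction; the following schematic equation is our reconstruction of it, with "surf" as an informal label for a silanol group on the grain surface rather than standard notation.

```latex
% Schematic condensation of trimethylsilanol with a surface silanol group on
% a silica grain, releasing water and leaving methyl groups facing outward;
% "surf" is our informal label for the grain surface, not standard notation.
(CH_3)_3Si{-}OH \;+\; HO{-}Si(\text{surf}) \;\longrightarrow\;
(CH_3)_3Si{-}O{-}Si(\text{surf}) \;+\; H_2O
```

The outward-facing methyl groups are what make the coated grains hydrophobic.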
Aqua Sand, a brand-name toy product made using magic sand, can be found in blue, green, or red colors, but all appear silvery in water because of the layer of air that forms around the sand.