source | text
|---|---|
https://en.wikipedia.org/wiki/Sol%E2%80%93gel%20process | In materials science, the sol–gel process is a method for producing solid materials from small molecules. The method is used for the fabrication of metal oxides, especially the oxides of silicon (Si) and titanium (Ti). The process involves conversion of monomers into a colloidal solution (sol) that acts as the precursor for an integrated network (or gel) of either discrete particles or network polymers. Typical precursors are metal alkoxides. The sol–gel process is used to produce ceramic nanoparticles.
Stages
In this chemical procedure, a "sol" (a colloidal solution) is formed that then gradually evolves towards the formation of a gel-like diphasic system containing both a liquid phase and solid phase whose morphologies range from discrete particles to continuous polymer networks. In the case of the colloid, the volume fraction of particles (or particle density) may be so low that a significant amount of fluid may need to be removed initially for the gel-like properties to be recognized. This can be accomplished in any number of ways. The simplest method is to allow time for sedimentation to occur, and then pour off the remaining liquid. Centrifugation can also be used to accelerate the process of phase separation.
Removal of the remaining liquid (solvent) phase requires a drying process, which is typically accompanied by a significant amount of shrinkage and densification. The rate at which the solvent can be removed is ultimately determined by the distribution of porosity in the gel. The ultimate microstructure of the final component will clearly be strongly influenced by changes imposed upon the structural template during this phase of processing.
Afterwards, a thermal treatment, or firing process, is often necessary in order to favor further polycondensation and enhance mechanical properties and structural stability via final sintering, densification, and grain growth. One of the distinct advantages of using this methodology as opposed to the more traditional p |
https://en.wikipedia.org/wiki/Sequence%20space | In functional analysis and related areas of mathematics, a sequence space is a vector space whose elements are infinite sequences of real or complex numbers. Equivalently, it is a function space whose elements are functions from the natural numbers to the field K of real or complex numbers. The set of all such functions is naturally identified with the set of all possible infinite sequences with elements in K, and can be turned into a vector space under the operations of pointwise addition of functions and pointwise scalar multiplication. All sequence spaces are linear subspaces of this space. Sequence spaces are typically equipped with a norm, or at least the structure of a topological vector space.
The most important sequence spaces in analysis are the $\ell^p$ spaces, consisting of the $p$-power summable sequences, with the $p$-norm. These are special cases of $L^p$ spaces for the counting measure on the set of natural numbers. Other important classes of sequences like convergent sequences or null sequences form sequence spaces, respectively denoted $c$ and $c_0$, with the sup norm. Any sequence space can also be equipped with the topology of pointwise convergence, under which it becomes a special kind of Fréchet space called an FK-space.
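For concreteness, the $p$-norm mentioned above has the standard definition below; the display is supplied here for reference and is not quoted from the excerpt.

```latex
\|x\|_p = \left( \sum_{n=1}^{\infty} |x_n|^p \right)^{1/p} \quad (1 \le p < \infty),
\qquad
\|x\|_\infty = \sup_{n \in \mathbb{N}} |x_n|
```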
Definition
A sequence in a set $X$ is just an $X$-valued map $x : \mathbb{N} \to X$ whose value at $n$ is denoted by $x_n$ instead of the usual parentheses notation $x(n)$.
Space of all sequences
Let $\mathbb{K}$ denote the field of either real or complex numbers. The set $\mathbb{K}^{\mathbb{N}}$ of all sequences of elements of $\mathbb{K}$ is a vector space for componentwise addition
$(x_n)_{n \in \mathbb{N}} + (y_n)_{n \in \mathbb{N}} = (x_n + y_n)_{n \in \mathbb{N}}$
and componentwise scalar multiplication
$\alpha (x_n)_{n \in \mathbb{N}} = (\alpha x_n)_{n \in \mathbb{N}}.$
A sequence space is any linear subspace of $\mathbb{K}^{\mathbb{N}}$.
As a topological space, $\mathbb{K}^{\mathbb{N}}$ is naturally endowed with the product topology. Under this topology, $\mathbb{K}^{\mathbb{N}}$ is Fréchet, meaning that it is a complete, metrizable, locally convex topological vector space (TVS). However, this topology is rather pathological: there are no continuous norms on $\mathbb{K}^{\mathbb{N}}$ (and thus the product topology cannot be defined by any norm). Among Fréchet spaces, |
https://en.wikipedia.org/wiki/Table%20%28database%29 | A table is a collection of related data held in a table format within a database. It consists of columns and rows.
In relational databases, and flat file databases, a table is a set of data elements (values) using a model of vertical columns (identifiable by name) and horizontal rows, the cell being the unit where a row and column intersect. A table has a specified number of columns, but can have any number of rows. Each row is identified by one or more values appearing in a particular column subset. A specific choice of columns which uniquely identify rows is called the primary key.
"Table" is another term for "relation"; although there is the difference in that a table is usually a multiset (bag) of rows where a relation is a set and does not allow duplicates. Besides the actual data rows, tables generally have associated with them some metadata, such as constraints on the table or on the values within particular columns.
The data in a table does not have to be physically stored in the database. Views also function as relational tables, but their data are calculated at query time. External tables (in Informix or Oracle, for example) can also be thought of as views.
In many systems for computational statistics, such as R and Python's pandas, a data frame or data table is a data type supporting the table abstraction. Conceptually, it is a list of records or observations all containing the same fields or columns. The implementation consists of a list of arrays or vectors, each with a name.
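As a small illustration of the data-frame abstraction just described, here is a hedged Python/pandas sketch; the column names and values are invented for the example, not taken from the source.

```python
# A data frame: conceptually a list of records sharing the same fields,
# implemented as a list of named arrays (one per column).
import pandas as pd

df = pd.DataFrame({
    "id": [1, 2, 3],
    "name": ["Ada", "Grace", "Alan"],
    "score": [91.5, 88.0, 79.5],
})

# Column access returns the underlying named array (a Series).
print(df["name"])
# Row access by position illustrates the "record" view of the same data.
print(df.iloc[0])
```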
Tables versus relations
In terms of the relational model of databases, a table can be considered a convenient representation of a relation, but the two are not strictly equivalent. For instance, a SQL table can potentially contain duplicate rows, whereas a true relation cannot contain duplicate tuples. Similarly, representation as a table implies a particular ordering of the rows and columns, whereas a relation is explicitly unordered. However, the database |
https://en.wikipedia.org/wiki/Dream%20dictionary | A dream dictionary (also known as oneirocritic literature) is a tool made for interpreting images in a dream. Dream dictionaries tend to include specific images which are attached to specific interpretations. However, dream dictionaries are generally not considered scientifically viable by those within the psychology community.
History
Since the 19th century, the art of dream interpretation has been transferred to a scientific ground, making it a distinct part of psychology. However, the dream symbols of the "unscientific" days—the outcome of hearsay interpretations that differ around the world among different cultures—continued to mark the day of the average human being, who is most likely unfamiliar with the Freudian analysis of dreams.
The dream dictionary includes interpretations of dreams, giving each symbol in a dream a specific meaning. The argument over what dreams represent has changed greatly over time, and with it, so has the interpretation of dreams. Dream dictionaries have changed in content since they were first published. The Greeks and Romans saw dreams as having a religious meaning. This made them believe that their dreams were an insight into the future and held the key to the solutions of their problems. Aristotle's view on dreams was that they were merely a function of our physiological make-up. He did not believe dreams have a greater meaning, solely that they are the result of how we sleep. In the Middle Ages, dreams were seen as an interpretation of good or evil.
Although the dream dictionary is not recognized in the psychology world, Freud is said to have revolutionized the interpretation and study of dreams. Freud came to the conclusion that dreams were a form of wish fulfillment. Dream dictionaries were first based upon Freudian thoughts and ancient interpretations of dreams.
Some examples of dream interpretation are: dreaming you are on a beach means you are facing negativity in your life, or a lion may represent a need to control oth |
https://en.wikipedia.org/wiki/Physical%20change | Physical changes are changes affecting the form of a chemical substance, but not its chemical composition. Physical changes are used to separate mixtures into their component compounds, but can not usually be used to separate compounds into chemical elements or simpler compounds.
Physical changes occur when objects or substances undergo a change that does not change their chemical composition. This contrasts with the concept of chemical change in which the composition of a substance changes or one or more substances combine or break up to form new substances. In general a physical change is reversible using physical means. For example, salt dissolved in water can be recovered by allowing the water to evaporate.
A physical change involves a change in physical properties. Examples of physical properties include melting, transition to a gas, change of strength, change of durability, changes to crystal form, textural change, shape, size, color, volume and density.
An example of a physical change is the process of tempering steel to form a knife blade. A steel blank is repeatedly heated and hammered which changes the hardness of the steel, its flexibility and its ability to maintain a sharp edge.
Many physical changes also involve the rearrangement of atoms most noticeably in the formation of crystals. Many chemical changes are irreversible, and many physical changes are reversible, but reversibility is not a certain criterion for classification. Although chemical changes may be recognized by an indication such as odor, color change, or production of a gas, every one of these indicators can result from physical change.
Examples
Heating and cooling
Many elements and some compounds change from solids to liquids and from liquids to gases when heated and the reverse when cooled. Some substances such as iodine and carbon dioxide go directly from solid to gas in a process called sublimation.
Magnetism
Ferromagnetic materials can become magnetic. The process is reve |
https://en.wikipedia.org/wiki/Coiflet | Coiflets are discrete wavelets designed by Ingrid Daubechies, at the request of Ronald Coifman, to have scaling functions with vanishing moments. The wavelets are near symmetric, their wavelet functions have $N/3$ vanishing moments and scaling functions $N/3 - 1$, and they have been used in many applications using Calderón–Zygmund operators.
Theory
Some theorems about Coiflets:
Theorem 1
For a wavelet system, the following three equations are equivalent:
and similar equivalence holds between and
Theorem 2
For a wavelet system, the following six equations are equivalent:
and similar equivalence holds between and
Theorem 3
For a biorthogonal wavelet system, if either wavelet function possesses a degree L of vanishing moments, then the following two equations are equivalent:
for any such that
Coiflet coefficients
Both the scaling function (low-pass filter) and the wavelet function (high-pass filter) must be normalised by a factor $1/\sqrt{2}$. Below are the coefficients for the scaling functions for C6–30. The wavelet coefficients are derived by reversing the order of the scaling function coefficients and then reversing the sign of every second one (i.e. C6 wavelet = {−0.022140543057, 0.102859456942, 0.544281086116, −1.205718913884, 0.477859456942, 0.102859456942}).
Mathematically, this looks like $B_k = (-1)^k C_{N-1-k}$, where $k$ is the coefficient index, $B$ is a wavelet coefficient, and $C$ a scaling function coefficient. $N$ is the wavelet index, i.e. 6 for C6.
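The stated relation between the two filters is easy to check in code. The sketch below is a minimal Python illustration; the C6 scaling coefficients shown are inferred by inverting the C6 wavelet quoted above, so treat them as illustrative rather than authoritative.

```python
# Derive wavelet coefficients from scaling coefficients via
# B_k = (-1)**k * C[N-1-k], as stated in the text.

def wavelet_from_scaling(c):
    """Return B_k = (-1)**k * c[N-1-k] for a length-N scaling filter c."""
    n = len(c)
    return [(-1) ** k * c[n - 1 - k] for k in range(n)]

# C6 scaling filter, inferred by inverting the C6 wavelet quoted above.
c6_scaling = [-0.102859456942, 0.477859456942, 1.205718913884,
              0.544281086116, -0.102859456942, -0.022140543057]

print(wavelet_from_scaling(c6_scaling))
# Expected (from the text): [-0.022140543057, 0.102859456942, 0.544281086116,
#                            -1.205718913884, 0.477859456942, 0.102859456942]
```

If the third-party PyWavelets package is available, normalized versions of these filters can presumably be obtained via pywt.Wavelet('coif1').dec_lo (coif1 there corresponding to C6); this is an assumption about that library, not part of the source.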
Matlab function
F = coifwavf(W) returns the scaling filter associated with the Coiflet wavelet specified by the string W where W = "coifN". Possible values for N are 1, 2, 3, 4, or 5. |
https://en.wikipedia.org/wiki/Sturmian%20word | In mathematics, a Sturmian word (Sturmian sequence or billiard sequence), named after Jacques Charles François Sturm, is a certain kind of infinitely long sequence of characters. Such a sequence can be generated by considering a game of English billiards on a square table. The struck ball will successively hit the vertical and horizontal edges labelled 0 and 1 generating a sequence of letters. This sequence is a Sturmian word.
Definition
Sturmian sequences can be defined strictly in terms of their combinatoric properties or geometrically as cutting sequences for lines of irrational slope or codings for irrational rotations. They are traditionally taken to be infinite sequences on the alphabet of the two symbols 0 and 1.
Combinatorial definitions
Sequences of low complexity
For an infinite sequence of symbols w, let σ(n) be the complexity function of w; i.e., σ(n) = the number of distinct contiguous subwords (factors) in w of length n. Then w is Sturmian if σ(n) = n + 1 for all n.
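A short Python sketch can illustrate this complexity criterion. It uses the Fibonacci word, a standard example of a Sturmian word; the prefix length and names are choices of this sketch.

```python
# Count distinct length-n factors of a long prefix of the Fibonacci word
# and confirm sigma(n) == n + 1, as the Sturmian criterion requires.

def fibonacci_word(length):
    a, b = "0", "01"
    while len(b) < length:
        a, b = b, b + a        # S_n = S_{n-1} S_{n-2}
    return b[:length]

w = fibonacci_word(10_000)
for n in range(1, 8):
    factors = {w[i:i + n] for i in range(len(w) - n + 1)}
    print(n, len(factors))     # prints n, n + 1 for a Sturmian word
```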
Balanced sequences
A set X of binary strings is called balanced if the Hamming weight of elements of X takes at most two distinct values. That is, there are two values $k$ and $k'$ such that for every $s \in X$, $|s|_1 = k$ or $|s|_1 = k'$, where $|s|_1$ is the number of 1s in $s$.
Let $w$ be an infinite sequence of 0s and 1s and let $\mathcal{L}_n(w)$ denote the set of all length-$n$ subwords of $w$. The sequence $w$ is Sturmian if $\mathcal{L}_n(w)$ is balanced for all $n$ and $w$ is not eventually periodic.
Geometric definitions
Cutting sequence of irrational
Let $w$ be an infinite sequence of 0s and 1s. The sequence $w$ is Sturmian if, for some real $c$ and some irrational $\theta$, $w$ is realized as the cutting sequence of the line $y = \theta x + c$.
Difference of Beatty sequences
Let $w = (w_n)$ be an infinite sequence of 0s and 1s. The sequence $w$ is Sturmian if it is the difference of non-homogeneous Beatty sequences, that is, for some real $\rho$ and some irrational $\theta$,
$w_n = \lfloor n\theta + \rho \rfloor - \lfloor (n-1)\theta + \rho \rfloor$ for all $n$, or
$w_n = \lceil n\theta + \rho \rceil - \lceil (n-1)\theta + \rho \rceil$ for all $n$.
Coding of irrational rotation
For $\theta \in [0, 1)$, define $R_\theta : [0, 1) \to [0, 1)$ by $R_\theta(x) = x + \theta \bmod 1$. For $x \in [0, 1)$ define the $\theta$-coding of $x$ to be the sequence $(x_n)$ where $x_n = 1$ if $R_\theta^n(x) \in [0, \theta)$ and $x_n = 0$ otherwise.
Let w be an infinite se |
https://en.wikipedia.org/wiki/ZbMATH%20Open | zbMATH Open, formerly Zentralblatt MATH, is a major reviewing service providing reviews and abstracts for articles in pure and applied mathematics, produced by the Berlin office of FIZ Karlsruhe – Leibniz Institute for Information Infrastructure GmbH. Editors are the European Mathematical Society, FIZ Karlsruhe, and the Heidelberg Academy of Sciences. zbMATH is distributed by Springer Science+Business Media. It uses the Mathematics Subject Classification codes for organising reviews by topic.
History
Mathematicians Richard Courant, Otto Neugebauer, and Harald Bohr, together with the publisher Ferdinand Springer, took the initiative for a new mathematical reviewing journal. Harald Bohr worked in Copenhagen. Courant and Neugebauer were professors at the University of Göttingen. At that time, Göttingen was considered one of the central places for mathematical research, having appointed mathematicians like David Hilbert, Hermann Minkowski, Carl Runge, and Felix Klein, the great organiser of mathematics and physics in Göttingen. His dream of a building for an independent mathematical institute with a spacious and rich reference library was realised four years after his death. The credit for this achievement is particularly due to Richard Courant, who convinced the Rockefeller Foundation to donate a large amount of money for the construction.
The service was founded in 1931, by Otto Neugebauer as Zentralblatt für Mathematik und ihre Grenzgebiete. It contained the bibliographical data of all recently published mathematical articles and books, together with peer reviews done by mathematicians from all over the world. In the preface to the first volume, the intentions of Zentralblatt are formulated as follows:
Zentralblatt and the Jahrbuch über die Fortschritte der Mathematik had in essence the same agenda, but Zentralblatt published several issues per year. An issue was published as soon as sufficiently many reviews were available, in a frequency of three or four weeks.
In the l |
https://en.wikipedia.org/wiki/Fibonacci%20word | A Fibonacci word is a specific sequence of binary digits (or symbols from any two-letter alphabet). The Fibonacci word is formed by repeated concatenation in the same way that the Fibonacci numbers are formed by repeated addition.
It is a paradigmatic example of a Sturmian word and specifically, a morphic word.
The name "Fibonacci word" has also been used to refer to the members of a formal language L consisting of strings of zeros and ones with no two repeated ones. Any prefix of the specific Fibonacci word belongs to L, but so do many other strings. L has a Fibonacci number of members of each possible length.
Definition
Let be "0" and be "01". Now (the concatenation of the previous sequence and the one before that).
The infinite Fibonacci word is the limit , that is, the (unique) infinite sequence that contains each , for finite , as a prefix.
Enumerating items from the above definition produces:
0
01
010
01001
01001010
0100101001001
...
The first few elements of the infinite Fibonacci word are:
0, 1, 0, 0, 1, 0, 1, 0, 0, 1, 0, 0, 1, 0, 1, 0, 0, 1, 0, 1, 0, 0, 1, 0, 0, 1, 0, 1, 0, 0, 1, 0, 0, 1, 0, 1, 0, 0, 1, 0, 1, 0, 0, 1, 0, 0, 1, 0, 1, 0, 0, 1, 0, 1, 0, 0, 1, 0, 0, 1, 0, 1, 0, 0, 1, 0, 0, 1, 0, 1, 0, 0, 1, 0, 1, 0, 0, 1, 0, 0, 1, 0, 1, 0, 0, 1, 0, 0, 1, 0, 1, 0, 0, 1, 0, 1, 0, 0, 1, ...
Closed-form expression for individual digits
The $n$th digit of the word is $2 + \lfloor n\varphi \rfloor - \lfloor (n+1)\varphi \rfloor$, where $\varphi$ is the golden ratio and $\lfloor\,\cdot\,\rfloor$ is the floor function. As a consequence, the infinite Fibonacci word can be characterized by a cutting sequence of a line of slope $1/\varphi$ or $\varphi - 1$.
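A minimal Python check of this closed form against the concatenation definition; the indexing convention $n = 1, 2, \dots$ is the assumption made by this sketch.

```python
# Compare the closed-form digit 2 + floor(n*phi) - floor((n+1)*phi)
# with the word built by concatenation S_n = S_{n-1} S_{n-2}.
import math

phi = (1 + math.sqrt(5)) / 2

def digit(n):
    return 2 + math.floor(n * phi) - math.floor((n + 1) * phi)

s_prev, s = "0", "01"
for _ in range(10):                      # build a 233-character prefix
    s_prev, s = s, s + s_prev

closed_form = "".join(str(digit(n)) for n in range(1, len(s) + 1))
print(closed_form == s)                  # True if the definitions agree
```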
Substitution rules
Another way of going from $S_n$ to $S_{n+1}$ is to replace each symbol 0 in $S_n$ with the pair of consecutive symbols 0, 1 in $S_{n+1}$, and to replace each symbol 1 in $S_n$ with the single symbol 0 in $S_{n+1}$.
Alternatively, one can imagine directly generating the entire infinite Fibonacci word by the following process: start with a cursor poi |
https://en.wikipedia.org/wiki/Interconnection | In telecommunications, interconnection is the physical linking of a carrier's network with equipment or facilities not belonging to that network. The term may refer to a connection between a carrier's facilities and the equipment belonging to its customer, or to a connection between two or more carriers.
In United States regulatory law, interconnection is specifically defined (47 C.F.R. 51.5) as "the linking of two or more networks for the mutual exchange of traffic."
One of the primary tools used by regulators to introduce competition in telecommunications markets has been to impose interconnection requirements on dominant carriers.
History
United States
Under the Bell System monopoly (post Communications Act of 1934), the Bell System owned the phones and did not allow interconnection, either of separate phones (or other terminal equipment) or of other networks; a popular saying was "Ma Bell has you by the calls".
This began to change in the landmark case Hush-A-Phone v. United States [1956], which allowed some non-Bell owned equipment to be connected to the network, and was followed by a number of other cases, regulatory decisions, and legislation that led to the transformation of the American long distance telephone industry from a monopoly to a competitive business.
This further changed in FCC's Carterfone decision in 1968, which required the Bell System companies to permit interconnection by radio-telephone operators.
Today the standard electrical connector for interconnection in the US, and much of the world, is the registered jack family of standards, especially RJ11. This was introduced by the Bell System in the 1970s, following a 1976 FCC order. Since then, it has gained popularity worldwide, and is a de facto international standard.
Europe
Outside of the U.S., interconnection or "interconnect regimes" also take into account the associated commercial arrangements. As an example of the use of commercial arrangements, the focus by the EU has been on |
https://en.wikipedia.org/wiki/Hubbard%20model | The Hubbard model is an approximate model used to describe the transition between conducting and insulating systems. It is particularly useful in solid-state physics. The model is named for John Hubbard.
The Hubbard model states that each electron experiences competing forces: one pushes it to tunnel to neighboring atoms, while the other pushes it away from its neighbors. Its Hamiltonian thus has two terms: a kinetic term allowing for tunneling ("hopping") of particles between lattice sites and a potential term reflecting on-site interaction. The particles can either be fermions, as in Hubbard's original work, or bosons, in which case the model is referred to as the "Bose–Hubbard model".
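For reference, the single-band fermionic Hamiltonian described in this paragraph is conventionally written as below; the notation is standard but not reproduced from the excerpt itself.

```latex
H = -t \sum_{\langle i, j \rangle, \sigma} \left( c^{\dagger}_{i\sigma} c_{j\sigma} + c^{\dagger}_{j\sigma} c_{i\sigma} \right) + U \sum_{i} n_{i\uparrow} n_{i\downarrow}
```

Here $t$ is the hopping amplitude between neighbouring sites $\langle i, j \rangle$, $U$ is the on-site interaction strength, $c^{\dagger}_{i\sigma}$ and $c_{i\sigma}$ create and annihilate a fermion of spin $\sigma$ on site $i$, and $n_{i\sigma} = c^{\dagger}_{i\sigma} c_{i\sigma}$ is the number operator.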
The Hubbard model is a useful approximation for particles in a periodic potential at sufficiently low temperatures, where all the particles may be assumed to be in the lowest Bloch band, and long-range interactions between the particles can be ignored. If interactions between particles at different sites of the lattice are included, the model is often referred to as the "extended Hubbard model". In particular, the Hubbard term, most commonly denoted by U, is applied in first-principles simulations using density functional theory (DFT). The inclusion of the Hubbard term in DFT simulations is important, as it improves the prediction of electron localisation and thus prevents the incorrect prediction of metallic conduction in insulating systems.
The Hubbard model introduces short-range interactions between electrons to the tight-binding model, which only includes kinetic energy (a "hopping" term) and interactions with the atoms of the lattice (an "atomic" potential). When the interaction between electrons is strong, the behavior of the Hubbard model can be qualitatively different from a tight-binding model. For example, the Hubbard model correctly predicts the existence of Mott insulators: materials that are insulating due to the strong repulsion between electrons, even tho |
https://en.wikipedia.org/wiki/Blue%20field%20entoptic%20phenomenon | The blue field entoptic phenomenon is an entoptic phenomenon characterized by the appearance of tiny bright dots (nicknamed blue-sky sprites) moving quickly along undulating pathways in the visual field, especially when looking into bright blue light such as the sky. The dots are short-lived, visible for about one second or less, and traveling short distances along seemingly random, undulating paths. Some of them seem to follow the same path as other dots before them. The dots may appear elongated along the path, like tiny worms. The dots' rate of travel appears to vary in synchrony with the heartbeat: they briefly accelerate at each beat. The dots appear in the central field of view, within 15 degrees from the fixation point. The left and right eye see different, seemingly random, dot patterns; a person viewing through both eyes sees a combination of both left and right visual field disturbances. If you are seeing the phenomenon, and you lightly press inward on the sides of your eyeballs at the lateral canthus, the movement stops being fluid and the dots move only when your heart beats.
Most people are able to see this phenomenon in the sky, although it is relatively weak in most instances; many will not notice it until asked to pay attention. The dots are highly conspicuous against any monochromatic blue background of a wavelength of around 430 nm in place of the sky. The phenomenon is also known as Scheerer's phenomenon, after the German ophthalmologist Richard Scheerer, who first drew clinical attention to it in 1924.
Explanation
The dots are white blood cells moving in the capillaries in front of the retina of the eye.
Blue light (optimal wavelength: 430 nm) is absorbed by the red blood cells that fill the capillaries. The eye and brain "edit out" the shadow lines of the capillaries, partially by dark adaptation of the photoreceptors lying beneath the capillaries. The white blood cells, which are larger than red blood cells, but much rarer and do not absorb |
https://en.wikipedia.org/wiki/777%20%28number%29 | 777 (seven hundred [and] seventy-seven) is the natural number following 776 and preceding 778. The number 777 is significant in numerous religious and political contexts.
In mathematics
777 is an odd, composite, palindromic repdigit. It is also a sphenic number, with 3, 7, and 37 as its prime factors. Its largest prime factor is a concatenation of its smaller two; the only other number below 1000 with this property is 138.
777 is also:
An extravagant number, a lucky number, a polite number, and an amenable number.
A deficient number, since the sum of its divisors is less than 2 × 777; this and the divisor-average property below are verified in the sketch after this list.
A congruent number, as it is possible to make a right triangle with a rational number of sides whose area is 777.
An arithmetic number, since the average of its positive divisors is also an integer (152).
A repdigit in senary.
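A minimal Python check of two of the divisor claims above; the brute-force search is purely illustrative.

```python
# Verify that 777 is deficient (divisor sum below 2n) and arithmetic
# (divisor average is an integer, 152).
n = 777
divisors = [d for d in range(1, n + 1) if n % d == 0]

print(divisors)                       # [1, 3, 7, 21, 37, 111, 259, 777]
print(sum(divisors) < 2 * n)          # True: 777 is deficient
print(sum(divisors) / len(divisors))  # 152.0: 777 is an arithmetic number
```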
Religious significance
According to the Bible, Lamech, the father of Noah, lived for 777 years. Some of the known religious connections to 777 are noted in the sections below.
Judaism
The numbers 3 and 7 both are considered "perfect numbers" under Hebrew tradition.
Christianity
According to the American publication, the Orthodox Study Bible, 777 represents the threefold perfection of the Trinity.
Thelema
777 is also found in the title of the book 777 and Other Qabalistic Writings of Aleister Crowley, pertaining to the law of Thelema.
Political significance
Afrikaner Weerstandsbeweging
The Afrikaner Resistance Movement (Afrikaner Weerstandsbeweging, AWB), a Boer-nationalist, neo-Nazi, and white supremacist movement in South Africa, used the number 777 as part of their emblem.
The number refers to a triumph of "God's number" 7 over the Devil's number 666. On the AWB flag, the numbers are arranged in a triskelion shape, resembling the Nazi swastika.
Computing
In Unix's chmod (change-access-mode) command, the octal value 777 grants all file-access permissions to all user types on a file.
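A hedged illustration of this octal mode using Python's standard library; the file name is hypothetical.

```python
import os
import stat

# Create a scratch file so the chmod call below succeeds (name is arbitrary).
path = "example.txt"
open(path, "w").close()

os.chmod(path, 0o777)  # octal 777: read, write, execute for user, group, others

# The same value decomposed into its permission-bit constants:
assert stat.S_IRWXU | stat.S_IRWXG | stat.S_IRWXO == 0o777
```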
Commercial
Aviation
Boeing, the largest manufacturer of airliners in |
https://en.wikipedia.org/wiki/Wireless%20router | A wireless router or Wi-Fi router is a device that performs the functions of a router and also includes the functions of a wireless access point. It is used to provide access to the Internet or a private computer network. Depending on the manufacturer and model, it can function in a wired local area network, in a wireless-only LAN, or in a mixed wired and wireless network.
Features
Wireless routers typically feature one or more network interface controllers supporting Fast Ethernet or Gigabit Ethernet ports integrated into the main system on a chip (SoC) around which the router is built. An Ethernet switch, as described in IEEE 802.1Q, may interconnect multiple ports. Some routers implement link aggregation, through which two or more ports may be used together, improving throughput and redundancy.
All wireless routers feature one or more wireless network interface controllers. These are also integrated into the main SoC or may be separate chips on the printed circuit board. It can also be a distinct card connected over a MiniPCI or MiniPCIe interface. Some dual-band wireless routers operate the 2.4 GHz and 5 GHz bands simultaneously. Wireless controllers support a part of the IEEE 802.11 standard family, and many dual-band wireless routers have data transfer rates exceeding 300 Mbit/s (for the 2.4 GHz band) and 450 Mbit/s (for the 5 GHz band). Some wireless routers provide multiple streams, allowing multiples of data transfer rates (e.g. a three-stream wireless router allows transfers of up to 1.3 Gbit/s on the 5 GHz bands).
Some wireless routers have one or two USB ports. These can be used to connect a printer, or a desktop or mobile external hard disk drive, to be used as a shared resource on the network. A USB port may also be used for connecting a mobile broadband modem, aside from connecting the wireless router to an Ethernet with xDSL or cable modem. A mobile broadband USB adapter can be connected to the router to share the mobile broadband Internet connection through the wire |
https://en.wikipedia.org/wiki/Entoptic%20phenomenon | Entoptic phenomena () are visual effects whose source is within the human eye itself. (Occasionally, these are called entopic phenomena, which is probably a typographical mistake.)
In Helmholtz's words: "Under suitable conditions light falling on the eye may render visible certain objects within the eye itself. These perceptions are called entoptical."
Overview
Entoptic images have a physical basis in the image cast upon the retina. Hence, they are different from optical illusions, which are caused by the visual system and characterized by a visual percept that (loosely said) appears to differ from reality. Because entoptic images are caused by phenomena within the observer's own eye, they share one feature with optical illusions and hallucinations: the observer cannot share a direct and specific view of the phenomenon with others.
Helmholtz commented on entoptic phenomena which could be seen easily by some observers, but could not be seen at all by others. This variance is not surprising because the specific aspects of the eye that produce these images are unique to each individual. Because of the variation between individuals, and the inability for two observers to share a nearly identical stimulus, these phenomena are unlike most visual sensations. They are also unlike most optical illusions which are produced by viewing a common stimulus. Yet, there is enough commonality between the main entoptic phenomena that their physical origin is now well understood.
Examples
Some examples of entoptical effects include:
Floaters or muscae volitantes are slowly drifting blobs of varying size, shape, and transparency, which are particularly noticeable when viewing a bright, featureless background (such as the sky) or a point source of diffuse light very close to the eye. They are shadow images of objects floating in liquid between the retina and the gel inside the eye (vitreous humor). They are visible because they move; if they were pinned to retina by the vitreous o |
https://en.wikipedia.org/wiki/MapReduce | MapReduce is a programming model and an associated implementation for processing and generating big data sets with a parallel, distributed algorithm on a cluster.
A MapReduce program is composed of a map procedure, which performs filtering and sorting (such as sorting students by first name into queues, one queue for each name), and a reduce method, which performs a summary operation (such as counting the number of students in each queue, yielding name frequencies). The "MapReduce System" (also called "infrastructure" or "framework") orchestrates the processing by marshalling the distributed servers, running the various tasks in parallel, managing all communications and data transfers between the various parts of the system, and providing for redundancy and fault tolerance.
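A single-process Python sketch of the map/shuffle/reduce structure just described, using the classic word-count example; the names and data are invented, and a real MapReduce system would distribute these phases across many machines.

```python
# Word count expressed as map, shuffle (group by key), and reduce phases.
from collections import defaultdict

def map_phase(document):
    # Emit (key, value) pairs: one ("word", 1) per word.
    for word in document.split():
        yield word, 1

def shuffle(pairs):
    # Group values by key, as the framework would between map and reduce.
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(key, values):
    # Summary operation: total count per word.
    return key, sum(values)

docs = ["the quick brown fox", "the lazy dog", "the fox"]
pairs = [pair for doc in docs for pair in map_phase(doc)]
counts = dict(reduce_phase(k, v) for k, v in shuffle(pairs).items())
print(counts)  # e.g. {'the': 3, 'quick': 1, 'brown': 1, 'fox': 2, ...}
```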
The model is a specialization of the split-apply-combine strategy for data analysis.
It is inspired by the map and reduce functions commonly used in functional programming, although their purpose in the MapReduce framework is not the same as in their original forms. The key contributions of the MapReduce framework are not the actual map and reduce functions (which, for example, resemble the 1995 Message Passing Interface standard's reduce and scatter operations), but the scalability and fault-tolerance achieved for a variety of applications due to parallelization. As such, a single-threaded implementation of MapReduce is usually not faster than a traditional (non-MapReduce) implementation; any gains are usually only seen with multi-threaded implementations on multi-processor hardware. The use of this model is beneficial only when the optimized distributed shuffle operation (which reduces network communication cost) and fault tolerance features of the MapReduce framework come into play. Optimizing the communication cost is essential to a good MapReduce algorithm.
MapReduce libraries have been written in many programming languages, with different levels of optimization. A popular open-source imp |
https://en.wikipedia.org/wiki/AOpen | AOPEN (, stylized AOPEN) is a major electronics manufacturer from Taiwan that makes computers and parts for computers. AOPEN used to be the Open System Business Unit of Acer Computer Inc. which designed, manufactured and sold computer components.
It was incorporated in December 1996 as a subsidiary of Acer Group with an initial public offering (IPO) on the Taiwan stock exchange in August 2002. It is also the first subsidiary that established the entrepreneurship paradigm in the pan-Acer Group. At that time, AOPEN's major shareholder was the Wistron Group. In 2018 AOPEN became a partner of the pan-Acer Group again as the business-to-business branch of the computing industry.
They are perhaps most well known for their "Mobile on Desktop" (MoDT), which implements Intel's Pentium M platform on desktop motherboards. Because the Pentium 4 and other NetBurst CPUs proved less energy efficient than the Pentium M, in late 2004 and early 2005, many manufacturers introduced desktop motherboards for the mobile Pentium M, AOPEN being one of the first.
AOPEN currently specializes in ultra small form factor (uSFF) platform applications; digital signage; and product development and designs characterized by miniaturization, standardization and modularization.
Product position and strategies
Since 2005 AOPEN has been developing energy-saving products. According to different types of customers, applications and contexts, AOPEN splits its product platforms into two major categories: media player platform and Panel PC platform, both of which have Windows, Linux, ChromeOS and Android devices.
Digital Signage Platform
There are six major parts in AOPEN's digital signage platform applications: media player, management, deployment, display, extension and software. AOPEN manufactures the digital signage media players with operating system and pre-imaging. This also includes a remote management option.
See also
List of companies of Taiwan |
https://en.wikipedia.org/wiki/Cray%20Time%20Sharing%20System | The Cray Time Sharing System, also known in the Cray user community as CTSS, was developed as an operating system for the Cray-1 and Cray X-MP lines of supercomputers in 1978. CTSS was developed by the Los Alamos Scientific Laboratory (LASL, now LANL) in conjunction with the Lawrence Livermore Laboratory (LLL, now LLNL). CTSS was popular with Cray sites in the United States Department of Energy (DOE), but was also used by several other Cray sites, such as the San Diego Supercomputing Center.
Overview
The predecessor of CTSS was the Livermore Time Sharing System (LTSS) which ran on Control Data CDC 7600 line of supercomputers. The first compiler was known as LRLTRAN, for Lawrence Radiation Laboratory forTRAN, a FORTRAN 66 programming language but with dynamic memory and other features. The Cray version, including automatic vectorization, was known as CVC, pronounced "Civic" like the Honda car of the period, for Cray Vector Compiler.
Some controversy existed at LASL with the first attempt to develop an operating system for the Cray-1, named DEMOS, a message-passing, Unix-like operating system, by Forest Baskett. DEMOS had initial "teething" problems common to the performance of all early operating systems. This left a bad taste for Unix-like systems at the National Laboratories and with the hardware's manufacturer, Cray Research, Inc., which went on to develop its own batch-oriented operating system, COS (Cray Operating System), and its own vectorizing Fortran compiler named "CFT" (Cray ForTran), both written in the Cray Assembly Language (CAL).
CTSS had the misfortune of having certain constants and structures optimized to be Cray-1 architecture-dependent, and of lacking certain networking facilities (TCP/IP), which could not be remedied without extensive rework when larger-memory supercomputers like the Cray-2 and the Cray Y-MP came into use. CTSS took its final breaths running on Cray instruction-set-compatible hardware developed by Scientific Computer Systems (SCS-40 and SCS-30) and Supert |
https://en.wikipedia.org/wiki/KSAT-TV | KSAT-TV (channel 12) is a television station in San Antonio, Texas, United States, affiliated with ABC. Owned by Graham Media Group, the station maintains studios on North St. Mary's Street on the northern edge of downtown, and its transmitter is located off Route 181 in northwest Wilson County (northeast of Elmendorf).
History
KONO-TV
Channel 12 was the last commercial VHF allocation in San Antonio to be awarded. The first applicant for the allocation came in June 1952, from Bexar County Television Corporation, a subsidiary of Alamo Broadcasting Company, owners of radio station KABC. Bexar County Television planned to operate channel 12 as an ABC Television affiliate, owing to the radio station's affiliation with the ABC radio network. Shortly thereafter, Mission Broadcasting Company, owners of KONO radio (860 AM and 92.9 FM), also applied for a channel 12 license. By 1953, both Bexar County Television and Mission Broadcasting proper had dropped out of the running for channel 12. However, two new applicants filed applications: Sunshine Broadcasting Company, then-owners of KTSA radio, and Mission Telecasting Company. Mission was majority (50%) owned by Eugene J. Roth, principal owner of Mission Broadcasting Company, with the other half of the company split among seven individuals.
Sunshine would later withdraw its application, although another player would throw their hat into the ring in January 1954: the Walmac Corporation, owners of KMAC radio. In an attempt to avoid long, drawn-out hearings for a license, Walmac and Mission met in May 1954 to work out an agreement between the two parties. On March 12, 1956, the FCC heard final oral arguments between Walmac and Mission, with an FCC examiner having already favored Mission's application the previous year. In May 1956, the FCC granted a license to Mission and denied Walmac's bid. Mission officials proceeded to construct a new studio building and tower on North St. Mary's Street, adjacent to the studios for K |
https://en.wikipedia.org/wiki/Scale%20space | Scale-space theory is a framework for multi-scale signal representation developed by the computer vision, image processing and signal processing communities with complementary motivations from physics and biological vision. It is a formal theory for handling image structures at different scales, by representing an image as a one-parameter family of smoothed images, the scale-space representation, parametrized by the size of the smoothing kernel used for suppressing fine-scale structures. The parameter $t$ in this family is referred to as the scale parameter, with the interpretation that image structures of spatial size smaller than about $\sqrt{t}$ have largely been smoothed away in the scale-space level at scale $t$.
The main type of scale space is the linear (Gaussian) scale space, which has wide applicability as well as the attractive property of being possible to derive from a small set of scale-space axioms. The corresponding scale-space framework encompasses a theory for Gaussian derivative operators, which can be used as a basis for expressing a large class of visual operations for computerized systems that process visual information. This framework also allows visual operations to be made scale invariant, which is necessary for dealing with the size variations that may occur in image data, because real-world objects may be of different sizes and in addition the distance between the object and the camera may be unknown and may vary depending on the circumstances.
Definition
The notion of scale space applies to signals of arbitrary numbers of variables. The most common case in the literature applies to two-dimensional images, which is what is presented here. For a given image $f(x, y)$, its linear (Gaussian) scale-space representation is a family of derived signals $L(x, y; t)$ defined by the convolution of $f(x, y)$ with the two-dimensional Gaussian kernel
$g(x, y; t) = \frac{1}{2\pi t} e^{-(x^2 + y^2)/(2t)}$
such that
$L(\cdot, \cdot; t) = g(\cdot, \cdot; t) * f(\cdot, \cdot),$
where the semicolon in the argument of $L$ implies that the convolution is performed only over the variables $x, y$, while the scale paramete |
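A hedged sketch of this family in Python using SciPy's Gaussian filter, which takes the standard deviation $\sqrt{t}$ rather than the variance $t$; the test image here is synthetic.

```python
# Build scale-space levels L(x, y; t) by Gaussian smoothing with sigma = sqrt(t).
import numpy as np
from scipy.ndimage import gaussian_filter

f = np.random.rand(128, 128)             # stand-in for an input image f(x, y)

scales = [1.0, 4.0, 16.0]                 # scale parameters t
scale_space = {t: gaussian_filter(f, sigma=np.sqrt(t)) for t in scales}

# Structures of spatial size below roughly sqrt(t) are smoothed away as t grows.
for t, L in scale_space.items():
    print(t, float(L.std()))              # fine-scale variation shrinks with t
```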
https://en.wikipedia.org/wiki/List%20of%20antiviral%20drugs | Antiviral drugs are different from antibiotics. Flu antiviral drugs are different from antiviral drugs used to treat other infectious diseases such as COVID-19. Antiviral drugs prescribed to treat COVID-19 are not approved or authorized to treat flu. |
https://en.wikipedia.org/wiki/Muller%27s%20method | Muller's method is a root-finding algorithm, a numerical method for solving equations of the form f(x) = 0. It was first presented by David E. Muller in 1956.
Muller's method is based on the secant method, which constructs at every iteration a line through two points on the graph of f. Instead, Muller's method uses three points, constructs the parabola through these three points, and takes the intersection of the x-axis with the parabola to be the next approximation.
Recurrence relation
Muller's method is a recursive method which generates an approximation of the root ξ of f at each iteration. Starting with the three initial values x0, x−1 and x−2, the first iteration calculates the first approximation x1, the second iteration calculates the second approximation x2, the third iteration calculates the third approximation x3, etc. Hence the kth iteration generates approximation xk. Each iteration takes as input the last three generated approximations and the value of f at these approximations. Hence the kth iteration takes as input the values xk-1, xk-2 and xk-3 and the function values f(xk-1), f(xk-2) and f(xk-3). The approximation xk is calculated as follows.
A parabola $y_k(x)$ is constructed which goes through the three points $(x_{k-1}, f(x_{k-1}))$, $(x_{k-2}, f(x_{k-2}))$ and $(x_{k-3}, f(x_{k-3}))$. When written in the Newton form, $y_k(x)$ is
$y_k(x) = f(x_{k-1}) + f[x_{k-1}, x_{k-2}](x - x_{k-1}) + f[x_{k-1}, x_{k-2}, x_{k-3}](x - x_{k-1})(x - x_{k-2}),$
where $f[x_{k-1}, x_{k-2}]$ and $f[x_{k-1}, x_{k-2}, x_{k-3}]$ denote divided differences. This can be rewritten as
$y_k(x) = f(x_{k-1}) + w(x - x_{k-1}) + f[x_{k-1}, x_{k-2}, x_{k-3}](x - x_{k-1})^2,$
where
$w = f[x_{k-1}, x_{k-2}] + f[x_{k-1}, x_{k-3}] - f[x_{k-2}, x_{k-3}].$
The next iterate $x_k$ is now given as the solution closest to $x_{k-1}$ of the quadratic equation $y_k(x) = 0$. This yields the recurrence relation
$x_k = x_{k-1} - \frac{2 f(x_{k-1})}{w \pm \sqrt{w^2 - 4 f(x_{k-1}) f[x_{k-1}, x_{k-2}, x_{k-3}]}}.$
In this formula, the sign should be chosen such that the denominator is as large as possible in magnitude. We do not use the standard formula for solving quadratic equations because that may lead to loss of significance.
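A compact Python sketch of this recurrence, following the divided-difference formulation above; the test function, tolerances, and starting points are arbitrary choices of this sketch.

```python
# Muller's method: fit a parabola through the last three iterates and take
# the root of the parabola closest to the most recent iterate.
import cmath

def muller(f, x0, x1, x2, tol=1e-12, max_iter=50):
    for _ in range(max_iter):
        # Divided differences through the three most recent points.
        f01 = (f(x1) - f(x0)) / (x1 - x0)
        f12 = (f(x2) - f(x1)) / (x2 - x1)
        f012 = (f12 - f01) / (x2 - x0)
        w = f12 + f012 * (x2 - x1)
        disc = cmath.sqrt(w * w - 4 * f(x2) * f012)
        # Choose the sign making the denominator largest in magnitude.
        denom = max(w + disc, w - disc, key=abs)
        x_next = x2 - 2 * f(x2) / denom
        if abs(x_next - x2) < tol:
            return x_next
        x0, x1, x2 = x1, x2, x_next
    return x2

# x**3 - 1 has one real and two complex roots; iterates may go complex.
print(muller(lambda x: x**3 - 1, 0.5, 1.5, 2.0))
```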
Note that xk can be complex, even if the previous iterates were all real. This is in contrast with other root-finding algorithms like the secant method, Sidi's generalized secant method |
https://en.wikipedia.org/wiki/Antigen%20processing | Antigen processing, or the cytosolic pathway, is an immunological process that prepares antigens for presentation to special cells of the immune system called T lymphocytes. It is considered to be a stage of antigen presentation pathways. This process involves two distinct pathways for processing of antigens from an organism's own (self) proteins or intracellular pathogens (e.g. viruses), or from phagocytosed pathogens (e.g. bacteria); subsequent presentation of these antigens on class I or class II major histocompatibility complex (MHC) molecules is dependent on which pathway is used. Both MHC class I and II are required to bind antigens before they are stably expressed on a cell surface. MHC I antigen presentation typically (considering cross-presentation) involves the endogenous pathway of antigen processing, and MHC II antigen presentation involves the exogenous pathway of antigen processing. Cross-presentation involves parts of the exogenous and the endogenous pathways but ultimately involves the latter portion of the endogenous pathway (e.g. proteolysis of antigens for binding to MHC I molecules).
While the distinction between the two pathways is useful, there are instances where extracellular-derived peptides are presented in the context of MHC class I and cytosolic peptides are presented in the context of MHC class II (this often happens in dendritic cells).
The endogenous pathway
The endogenous pathway is used to present cellular peptide fragments on the cell surface on MHC class I molecules. If a virus had infected the cell, viral peptides would also be presented, allowing the immune system to recognize and kill the infected cell. Worn out proteins within the cell become ubiquitinated, marking them for proteasome degradation. Proteasomes break the protein up into peptides that include some around nine amino acids long (suitable for fitting within the peptide binding cleft of MHC class I molecules). Transporter associated with antigen processing |
https://en.wikipedia.org/wiki/Escalation%20of%20commitment | Escalation of commitment is a human behavior pattern in which an individual or group facing increasingly negative outcomes from a decision, action, or investment nevertheless continue the behavior instead of altering course. The actor maintains behaviors that are irrational, but align with previous decisions and actions.
Economists and behavioral scientists use a related term, sunk-cost fallacy, to describe the justification of increased investment of money or effort in a decision, based on the cumulative prior investment ("sunk cost") despite new evidence suggesting that the future cost of continuing the behavior outweighs the expected benefit.
In sociology, irrational escalation of commitment or commitment bias describe similar behaviors. The phenomenon and the sentiment underlying them are reflected in such proverbial images as "throwing good money after bad", or "In for a penny, in for a pound", or "It's never the wrong time to make the right decision", or "If you find yourself in a hole, stop digging."
Early use
Escalation of commitment was first described by Barry M. Staw in his 1976 paper, "Knee deep in the big muddy: A study of escalating commitment to a chosen course of action".
Researchers, inspired by the work of Staw, conducted studies that tested factors, situations and causes of escalation of commitment. The research introduced other analyses of situations and how people approach problems and make decisions. Some of the earliest work stemmed from events in which this phenomenon had an effect, and it helped explain the phenomenon.
Research and analysis
Over the past few decades, researchers have followed and analyzed many examples of the escalation of commitment to a situation. Escalation situations are explained by three elements. Firstly, a situation has a costly amount of resources, such as time, money and people, invested in the project. Next, past behavior leads up to an apex in time where the project has not met expectations or could be in a c |
https://en.wikipedia.org/wiki/Poppers | Popper is a slang term given broadly to recreational drugs of the chemical class called alkyl nitrites that are inhaled. They act on the body as vasodilators. The most widely sold products include the original isoamyl nitrite, isopentyl nitrite, and isopropyl nitrite. Isobutyl nitrite is also widely used but is banned in the European Union. In some countries, poppers are labeled or packaged as room deodorizers, leather polish, nail polish remover, or videotape head cleaner to evade anti-drug laws.
Popper use has a relaxation effect on involuntary smooth muscles, such as those in the throat and anus. It is used for practical purposes to facilitate anal sex by increasing blood flow and relaxing sphincter muscles. The drug is also used for recreational drug purposes, typically for the "high" or "rush" that the drug can create, and to enhance sexual pleasure in general.
In popular culture, poppers have been part of club culture from the mid-1970s disco scene and surged in popularity in the 1980s and 1990s rave scene.
History
19th-century discovery
The French chemist Antoine Jérôme Balard synthesized amyl nitrite in 1844. Sir Thomas Lauder Brunton, a Scottish physician born in the year of amyl nitrite's first synthesis, documented its clinical use to treat angina pectoris in 1867 when patients experiencing chest pains would experience complete relief after inhalation. Brunton was inspired by earlier work with the same agent, performed by Arthur Gamgee and Benjamin Ward Richardson. Brunton reasoned that the angina sufferer's pain and discomfort could be reduced by administering amyl nitrite—to dilate the coronary arteries of patients, thus improving blood flow to the heart muscle.
Amyl nitrites were originally enclosed in a glass mesh called "pearls". The usual administration of these pearls was done by crushing them between the fingers, followed by a popping sound. This administration process seems to be the origin of the slang term "poppers". It was then administere |
https://en.wikipedia.org/wiki/Soil%20conservation | Soil conservation is the prevention of loss of the topmost layer of the soil from erosion or prevention of reduced fertility caused by over usage, acidification, salinization or other chemical soil contamination.
Slash-and-burn and other unsustainable methods of subsistence farming are practiced in some lesser developed areas. A consequence of deforestation is typically large-scale erosion, loss of soil nutrients and sometimes total desertification. Techniques for improved soil conservation include crop rotation, cover crops, conservation tillage and planted windbreaks, which affect both erosion and fertility. When plants die, they decay and become part of the soil. Code 330 defines standard methods recommended by the U.S. Natural Resources Conservation Service. Farmers have practiced soil conservation for millennia. In Europe, policies such as the Common Agricultural Policy are targeting the application of best management practices, such as reduced tillage, winter cover crops, plant residues and grass margins, in order to better address soil conservation. Political and economic action is further required to solve the erosion problem. A simple governance hurdle concerns how we value the land, and this can be changed by cultural adaptation. Soil carbon is a carbon sink, playing a role in climate change mitigation.
Contour ploughing
Contour ploughing orients furrows following the contour lines of the farmed area. Furrows move left and right to maintain a constant altitude, which reduces runoff. Contour plowing was practiced by the ancient Phoenicians for slopes between two and ten percent. Contour plowing can increase crop yields from 10 to 50 percent, partially as a result of greater soil retention.
Terrace farming
Terracing is the practice of creating nearly level areas in a hillside area. The terraces form a series of steps each at a higher level than the previous. Terraces are protected from erosion by other soil barriers. Terraced farming is more common on small far |
https://en.wikipedia.org/wiki/WinSCP | WinSCP (Windows Secure Copy) is a free and open-source SSH File Transfer Protocol (SFTP), File Transfer Protocol (FTP), WebDAV, Amazon S3, and secure copy protocol (SCP) client for Microsoft Windows. Its main function is secure file transfer between a local computer and a remote server. Beyond this, WinSCP offers basic file manager and file synchronization functionality. For secure transfers, it uses the Secure Shell protocol (SSH) and supports the SCP protocol in addition to SFTP.
Development of WinSCP started around March 2000 and continues. Originally it was hosted by the University of Economics in Prague, where its author worked at the time. Since July 16, 2003, it has been licensed under the GNU GPL. It is hosted on SourceForge and GitHub.
WinSCP is based on the implementation of the SSH protocol from PuTTY and FTP protocol from FileZilla. It is also available as a plugin for Altap Salamander file manager, and there exists a third-party plugin for the FAR file manager.
Features
Graphical user interface
Translated into several languages
Integration with Windows (drag and drop, URL, shortcut icons)
All common operations with files
Support for SFTP and SCP protocols over SSH-1 and SSH-2, FTP protocol, WebDAV protocol and Amazon S3 protocol.
Batch file scripting, command-line interface, and .NET wrapper
Can act as a remote text editor, either downloading a file to edit or passing it on to a local application, then uploading it again when updated.
Directory synchronization in several semi or fully automatic ways
Support for SSH password, keyboard-interactive, public key, and Kerberos (GSS) authentication
Integrates with Pageant (PuTTY authentication agent) for full support of public key authentication with SSH
Choice of Windows File Explorer-like or Norton Commander-like interfaces
Optionally stores session information
Optionally import session information from PuTTY sessions in the registry
Able to upload files and retain associated original date/timestamps, unlike F |
https://en.wikipedia.org/wiki/Administration%20on%20Aging | The Administration on Aging (AoA) is an agency within the Administration for Community Living of the United States Department of Health and Human Services. AoA works to ensure that older Americans can stay independent in their communities, mostly by awarding grants to States, Native American tribal organizations, and local communities to support programs authorized by Congress in the Older Americans Act. AoA also awards discretionary grants to research organizations working on projects that support those goals. It conducts statistical activities in support of the research, analysis, and evaluation of programs to meet the needs of an aging population.
AoA's FY 2013 budget proposal includes a total of $1.9 billion, $819 million of which funds senior nutrition programs like Meals on Wheels. The agency also funds $539 million in grants to programs to help seniors stay in their homes through services (such as accomplishing essential activities of daily living, like getting to the doctor's office, buying groceries etc.) and through help given to caregivers. Some of these grants are for Cash & Counseling programs that provide Medicaid participants a monthly budget for home care and access to services that help them manage their finances.
AoA is headed by the Assistant Secretary for Aging. From July 2016 to August 2017, Edwin Walker served as Acting Assistant Secretary for Aging. The Assistant Secretary reports directly to the Secretary of Health and Human Services. Lance Allen Robertson was confirmed in August 2017, and served until January 20, 2021. On January 20, 2021, Alison Barkoff was sworn in as Principal Deputy Assistant Secretary, and was named as Acting Assistant Secretary. On March 9, 2022, President Biden nominated Rita Landgraf, the former Secretary of the Delaware Department of Health and Social Services, to serve as his first Assistant Secretary. Confirmation is pending.
See also
:Category:United States Assistant Secretaries for Aging
Pension Rights C |
https://en.wikipedia.org/wiki/Dephosphorylation | In biochemistry, dephosphorylation is the removal of a phosphate (PO₄³⁻) group from an organic compound by hydrolysis. It is a reversible post-translational modification. Dephosphorylation and its counterpart, phosphorylation, activate and deactivate enzymes by detaching or attaching phosphoric esters and anhydrides. A notable occurrence of dephosphorylation is the conversion of ATP to ADP and inorganic phosphate.
Dephosphorylation employs a type of hydrolytic enzyme, or hydrolase, which cleaves ester bonds. The prominent hydrolase subclass used in dephosphorylation is phosphatase, which removes phosphate groups by hydrolysing phosphoric acid monoesters into a phosphate ion and a molecule with a free hydroxyl (-OH) group.
The reversible phosphorylation-dephosphorylation reaction occurs in every physiological process, making proper function of protein phosphatases necessary for organism viability. Because protein dephosphorylation is a key process involved in cell signalling, protein phosphatases are implicated in conditions such as cardiac disease, diabetes, and Alzheimer's disease.
History
The discovery of dephosphorylation came from a series of experiments examining the enzyme phosphorylase isolated from rabbit skeletal muscle. In 1955, Edwin Krebs and Edmond Fischer used radiolabeled ATP to determine that phosphate is added to the serine residue of phosphorylase to convert it from its b to a form via phosphorylation. Subsequently, Krebs and Fischer showed that this phosphorylation is part of a kinase cascade. Finally, after purifying the phosphorylated form of the enzyme, phosphorylase a, from rabbit liver, ion exchange chromatography was used to identify phosphoprotein phosphatase I and II.
Since the discovery of these dephosphorylating proteins, the reversible nature of phosphorylation and dephosphorylation has been associated with a broad range of functional proteins, primarily enzymatic, but also including nonenzymatic proteins. Edwin Krebs and Edmond F |
https://en.wikipedia.org/wiki/Complex%20network | In the context of network theory, a complex network is a graph (network) with non-trivial topological features—features that do not occur in simple networks such as lattices or random graphs but often occur in networks representing real systems. The study of complex networks is a young and active area of scientific research (since 2000) inspired largely by empirical findings of real-world networks such as computer networks, biological networks, technological networks, brain networks, climate networks and social networks.
Definition
Most social, biological, and technological networks display substantial non-trivial topological features, with patterns of connection between their elements that are neither purely regular nor purely random. Such features include a heavy tail in the degree distribution, a high clustering coefficient, assortativity or disassortativity among vertices, community structure, and hierarchical structure. In the case of directed networks these features also include reciprocity, triad significance profile and other features. In contrast, many of the mathematical models of networks that have been studied in the past, such as lattices and random graphs, do not show these features. The most complex structures can be realized by networks with a medium number of interactions. This corresponds to the fact that the maximum information content (entropy) is obtained for medium probabilities.
Two well-known and much studied classes of complex networks are scale-free networks and small-world networks, whose discovery and definition are canonical case-studies in the field. Both are characterized by specific structural features—power-law degree distributions for the former and short path lengths and high clustering for the latter. However, as the study of complex networks has continued to grow in importance and popularity, many other aspects of network structures have attracted attention as well.
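For illustration (not part of the article), both model classes can be generated and their signature statistics measured with the networkx library; the sizes and parameters below are arbitrary toy choices:

```python
import networkx as nx

# Small-world model (Watts-Strogatz) and scale-free model (Barabasi-Albert);
# n, k, p and m here are arbitrary illustrative values.
ws = nx.connected_watts_strogatz_graph(n=1000, k=6, p=0.1)
ba = nx.barabasi_albert_graph(n=1000, m=3)

for name, g in [("small-world", ws), ("scale-free", ba)]:
    print(name,
          "avg clustering:", round(nx.average_clustering(g), 3),
          "avg path length:", round(nx.average_shortest_path_length(g), 2))
```

The small-world graph should show high clustering with short paths, while the scale-free graph's degree sequence exhibits the heavy tail described above.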
The field continues to develop at a brisk pace, and has broug |
https://en.wikipedia.org/wiki/Fraction | A fraction (from Latin fractus, "broken") represents a part of a whole or, more generally, any number of equal parts. When spoken in everyday English, a fraction describes how many parts of a certain size there are, for example, one-half, eight-fifths, three-quarters. A common, vulgar, or simple fraction consists of an integer numerator, displayed above a line (or before a slash), and a non-zero integer denominator, displayed below (or after) that line. If these integers are positive, then the numerator represents a number of equal parts, and the denominator indicates how many of those parts make up a unit or a whole. For example, in the fraction 3/4, the numerator 3 indicates that the fraction represents 3 equal parts, and the denominator 4 indicates that 4 parts make up a whole.
Other uses for fractions are to represent ratios and division. Thus the fraction 3/4 can also be used to represent the ratio 3:4 (the ratio of the part to the whole), and the division 3 ÷ 4 (three divided by four).
We can also write negative fractions, which represent the opposite of a positive fraction. For example, if 1/2 represents a half-dollar profit, then −1/2 represents a half-dollar loss. Because of the rules of division of signed numbers (which state in part that negative divided by positive is negative), −1/2, 1/−2 and −(1/2) all represent the same fraction, negative one-half. And because a negative divided by a negative produces a positive, −1/−2 represents positive one-half.
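These sign conventions are mirrored by Python's fractions module, which normalises a negative denominator into the numerator (a minimal illustration, not part of the original text):

```python
from fractions import Fraction

# A negative denominator is folded into the numerator, so all three
# spellings denote the same number, negative one-half.
print(Fraction(-1, 2) == Fraction(1, -2) == -Fraction(1, 2))  # True

# Negative divided by negative is positive.
print(Fraction(-1, -2))  # 1/2
```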
In mathematics the set of all numbers that can be expressed in the form a/b, where a and b are integers and b is not zero, is called the set of rational numbers and is represented by the symbol Q or ℚ, which stands for quotient. A number is a rational number precisely when it can be written in that form (i.e., as a common fraction). However, the word fraction can also be used to describe mathematical expressions that are not rational numbers. Examples of these usages include algebra |
https://en.wikipedia.org/wiki/Tau%20protein | The tau proteins (abbreviated from tubulin associated unit) are a group of six highly soluble protein isoforms produced by alternative splicing from the gene MAPT (microtubule-associated protein tau). They have roles primarily in maintaining the stability of microtubules in axons and are abundant in the neurons of the central nervous system (CNS), where the cerebral cortex has the highest abundance. They are less common elsewhere but are also expressed at very low levels in CNS astrocytes and oligodendrocytes.
Pathologies and dementias of the nervous system such as Alzheimer's disease and Parkinson's disease are associated with tau proteins that have become hyperphosphorylated insoluble aggregates called neurofibrillary tangles. The tau proteins were identified in 1975 as heat-stable proteins essential for microtubule assembly, and since then they have been characterized as intrinsically disordered proteins.
Function
Microtubule stabilization
Tau proteins are found more often in neurons than in non-neuronal cells in humans. One of tau's main functions is to modulate the stability of axonal microtubules. Other nervous system microtubule-associated proteins (MAPs) may perform similar functions, as suggested by tau knockout mice that did not show abnormalities in brain development – possibly because of compensation in tau deficiency by other MAPs.
Although tau is present in dendrites at low levels, where it is involved in postsynaptic scaffolding, it is active primarily in the distal portions of axons, where it provides microtubule stabilization but also flexibility as needed. Tau proteins interact with tubulin to stabilize microtubules and promote tubulin assembly into microtubules. Tau has two ways of controlling microtubule stability: isoforms and phosphorylation.
In addition to its microtubule-stabilizing function, Tau has also been found to recruit signaling proteins and to regulate microtubule-mediated axonal transport.
mRNA translation
Tau is a negative |
https://en.wikipedia.org/wiki/Numeral%20prefix | Numeral or number prefixes are prefixes derived from numerals or occasionally other numbers. In English and many other languages, they are used to coin numerous series of words. For example:
unicycle, bicycle, tricycle (1-cycle, 2-cycle, 3-cycle)
dyad, triad (2 parts, 3 parts)
biped, quadruped (2 legs, 4 legs)
September, October, November, December (month 7, month 8, month 9, month 10)
decimal, hexadecimal (base-10, base-16)
septuagenarian, octogenarian (70–79 years old, 80–89 years old)
centipede, millipede (around 100 legs, around 1000 legs)
In many European languages there are two principal systems, taken from Latin and Greek, each with several subsystems; in addition, Sanskrit occupies a marginal position. There is also an international set of metric prefixes, which are used in the metric system and which for the most part are either distorted from the forms below or not based on actual number words.
Table of number prefixes in English
In the following prefixes, a final vowel is normally dropped before a root that begins with a vowel, with the exceptions of bi-, which is bis- before a vowel, and of the other monosyllables, du-, di-, dvi-, tri-, which are invariable.
The cardinal series are derived from cardinal numbers, such as the English one, two, three. The multiple series are based on adverbial numbers like the English once, twice, thrice. The distributive series originally meant one each, two each or one by one, two by two, etc., though that meaning is now frequently lost. The ordinal series are based on ordinal numbers such as the English first, second, third (for numbers higher than 2, the ordinal forms are also used for fractions; only the fraction 1/2 has special forms).
For the hundreds, there are competing forms: those in -gent-, from the original Latin, and those in -cent-, derived from centi-, etc. plus the prefixes for 1–9.
Many of the items in the following tables are not in general use, but may rather be regarded as coinages by individ |
https://en.wikipedia.org/wiki/Barley%20malt%20syrup | Barley malt syrup is an unrefined sweetener, processed by extraction from sprouted, malted barley.
Barley malt syrup contains approximately 65 percent maltose, 30 percent complex carbohydrates, and 3 percent storage protein (prolamin glycoprotein). Malt syrup is dark brown, thick, sticky, and possesses a strong distinctive flavor described as "malty". It is about half as sweet as refined white sugar. Barley malt syrup is sometimes used in combination with other natural sweeteners to lend a malt flavor. Also called "barley malt extract" (or just malt syrup), barley malt syrup is made from malted barley, though there are instances of mislabeling where merchants use other grains or corn syrup in production.
Barley malt syrup is also sold in powdered form. Barley malt extract is used in the bread and baked good industry for browning and flavoring, and in cereal manufacture to add malt flavor. Adding barley malt syrup to yeast dough increases fermentation as a result of the enzymes in the malt, thus quickening the proofing process.
Barley malt syrup has a long history, and was one of the primary sweeteners (along with honey) in use in China in the years 1000 BCE – 1000 CE. Qimin Yaoshu, a classic 6th century Chinese text, contains notes on the extraction of malt syrup and maltose from common household grains. Barley malt syrup continues to be used in traditional Chinese sweets, such as Chinese cotton candy.
Sugar shortages and rationing in the US led to the first commercial malt syrup production in the 1920s.
Malt loaf is another product that makes use of barley malt syrup.
See also
Brewing
List of syrups
List of unrefined sweeteners
Malted milk |
https://en.wikipedia.org/wiki/Trickle%20charging | Trickle charging means charging a fully charged battery at a rate equal to its self-discharge rate, thus enabling the battery to remain at its fully charged level; this state occurs almost exclusively when the battery is not loaded, as trickle charging will not keep a battery charged if current is being drawn by a load. A battery under continuous float voltage charging is said to be float-charging.
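As a back-of-the-envelope illustration (the figures are hypothetical; real float-charge settings come from the manufacturer's datasheet), the trickle current implied by a given self-discharge rate can be estimated as follows:

```python
def trickle_current_ma(capacity_ah, self_discharge_pct_per_month):
    """Current (mA) needed to offset self-discharge, assuming a
    30-day month and a constant self-discharge rate."""
    lost_ah_per_hour = capacity_ah * (self_discharge_pct_per_month / 100) / (30 * 24)
    return lost_ah_per_hour * 1000

# Hypothetical 100 Ah lead-acid battery losing 3% of charge per month:
print(round(trickle_current_ma(100, 3), 1))  # ~4.2 mA
```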
For lead-acid batteries under no-load float charging (such as in SLI batteries), trickle charging happens naturally at the end-of-charge, when the lead-acid battery's internal resistance to the charging current increases enough to reduce additional charging current to a trickle, hence the name. In such cases, the trickle current replaces the energy expended by the lead-acid battery in splitting the water in the electrolyte into hydrogen and oxygen gases.
Other battery chemistries, such as lithium-ion battery technology, cannot be safely trickle charged. In that case, supervisory circuits (sometimes called battery management system) adjust electrical conditions during charging to match the requirements of the battery chemistry. For Li-ion batteries generally, and for some variants especially, failure to accommodate the limitations of the chemistry and electro-chemistry of a cell, with regard to trickle charging after reaching a fully charged state, can lead to overheating and, possibly to fire or explosion. |
https://en.wikipedia.org/wiki/Data%20validation | In computer science, data validation is the process of ensuring that data have undergone data cleansing and have data quality, that is, that they are both correct and useful. It uses routines, often called "validation rules", "validation constraints", or "check routines", that check for correctness, meaningfulness, and security of data that are input to the system. The rules may be implemented through the automated facilities of a data dictionary, or by the inclusion of explicit validation logic in application programs.
This is distinct from formal verification, which attempts to prove or disprove the correctness of algorithms for implementing a specification or property.
Overview
Data validation is intended to provide certain well-defined guarantees for fitness and consistency of data in an application or automated system. Data validation rules can be defined and designed using various methodologies, and be deployed in various contexts. Their implementation can use declarative data integrity rules, or procedure-based business rules.
The guarantees of data validation do not necessarily include accuracy, and it is possible for data entry errors such as misspellings to be accepted as valid. Other clerical and/or computer controls may be applied to reduce inaccuracy within a system.
Different kinds
In evaluating the basics of data validation, generalizations can be made regarding the different kinds of validation according to their scope, complexity, and purpose.
For example (several of these kinds are sketched in code after the list):
Data type validation;
Range and constraint validation;
Code and cross-reference validation;
Structured validation; and
Consistency validation
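A minimal sketch of such rules in Python; the field names, limits, and country list below are hypothetical, chosen only to illustrate the kinds of validation listed above:

```python
from datetime import date

VALID_COUNTRIES = {"US", "DE", "JP"}  # hypothetical cross-reference list

def validate_order(record):
    errors = []
    # Data-type check: quantity must be an integer.
    if not isinstance(record.get("quantity"), int):
        errors.append("quantity must be an integer")
    # Range/constraint check: quantity must fall within agreed limits.
    elif not 1 <= record["quantity"] <= 1000:
        errors.append("quantity out of range 1..1000")
    # Code/cross-reference check: country code must come from a known list.
    if record.get("country") not in VALID_COUNTRIES:
        errors.append("unknown country code")
    # Consistency check: an order cannot ship before it was placed.
    if record.get("shipped") and record["shipped"] < record["ordered"]:
        errors.append("shipped before ordered")
    return errors

print(validate_order({"quantity": 5, "country": "DE",
                      "ordered": date(2024, 1, 2),
                      "shipped": date(2024, 1, 5)}))  # [] -> valid
```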
Data-type check
Data type validation is customarily carried out on one or more simple data fields.
The simplest kind of data type validation verifies that the individual characters provided through user input are consistent with the expected characters of one or more known primitive data types as defined in |
https://en.wikipedia.org/wiki/Routh%E2%80%93Hurwitz%20stability%20criterion | In control system theory, the Routh–Hurwitz stability criterion is a mathematical test that is a necessary and sufficient condition for the stability of a linear time-invariant (LTI) dynamical system or control system. A stable system is one whose output signal is bounded; the position, velocity or energy do not increase to infinity as time goes on. The Routh test is an efficient recursive algorithm that English mathematician Edward John Routh proposed in 1876 to determine whether all the roots of the characteristic polynomial of a linear system have negative real parts. German mathematician Adolf Hurwitz independently proposed in 1895 to arrange the coefficients of the polynomial into a square matrix, called the Hurwitz matrix, and showed that the polynomial is stable if and only if the determinants of its leading principal submatrices are all positive. The two procedures are equivalent, with the Routh test providing a more efficient way to compute the Hurwitz determinants than computing them directly. A polynomial satisfying the Routh–Hurwitz criterion is called a Hurwitz polynomial.
The importance of the criterion is that the roots p of the characteristic equation of a linear system with negative real parts represent solutions e^(pt) of the system that are stable (bounded). Thus the criterion provides a way to determine if the equations of motion of a linear system have only stable solutions, without solving the system directly. For discrete systems, the corresponding stability test can be handled by the Schur–Cohn criterion, the Jury test and the Bistritz test. With the advent of computers, the criterion has become less widely used, as an alternative is to solve the polynomial numerically, obtaining approximations to the roots directly.
The Routh test can be derived through the use of the Euclidean algorithm and Sturm's theorem in evaluating Cauchy indices. Hurwitz derived his conditions differently.
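A minimal sketch of the Routh array construction in Python, assuming no zero ever appears in the first column (the classical special-case rules for zeros are omitted):

```python
from fractions import Fraction

def routh_stable(coeffs):
    """Build the Routh array for a real polynomial (coefficients given
    highest degree first) and report stability: all roots have negative
    real parts iff the first column has no sign change."""
    c = [Fraction(x) for x in coeffs]
    rows = [c[0::2], c[1::2]]  # first two rows: alternating coefficients
    while len(rows) < len(c):
        a, b = rows[-2], rows[-1]
        b = b + [Fraction(0)] * (len(a) - len(b))  # pad the shorter row
        rows.append([(b[0] * a[i + 1] - a[0] * b[i + 1]) / b[0]
                     for i in range(len(a) - 1)])
    first_col = [r[0] for r in rows]
    return all(x > 0 for x in first_col) or all(x < 0 for x in first_col)

print(routh_stable([1, 3, 3, 1]))  # (s + 1)^3, all roots at -1 -> True
print(routh_stable([1, 1, 2, 8]))  # cubic with a2*a1 < a3*a0 -> False
```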
Using Euclid's algorithm
The criterion is rela |
https://en.wikipedia.org/wiki/World%20Reference%20Base%20for%20Soil%20Resources | The World Reference Base for Soil Resources (WRB) is an international soil classification system for naming soils and creating legends for soil maps. The currently valid version is the fourth edition 2022. It is edited by a working group of the International Union of Soil Sciences (IUSS).
Background
History
Since the 19th century, several countries developed national soil classification systems. During the 20th century, the need for an international soil classification system became more and more obvious.
From 1971 to 1981, the Food and Agriculture Organization (FAO) and UNESCO published the Soil Map of the World (10 volumes, scale 1 : 5 M). The Legend for this map, published in 1974 under the leadership of Rudi Dudal, became the FAO soil classification. Many ideas from national soil classification systems were brought together in this worldwide-applicable system, among them the idea of diagnostic horizons as established in the '7th approximation to the USDA soil taxonomy' from 1960. The next step was the Revised Legend of the Soil Map of the World, published in 1988.
In 1982, the International Soil Science Society (ISSS; now: International Union of Soil Sciences, IUSS) established a working group named International Reference Base for Soil Classification (IRB). Chair of this working group was Ernst Schlichting. Its mandate was to develop an international soil classification system that should better consider soil-forming processes than the FAO soil classification. Drafts were presented in 1982 and 1990.
In 1992, the IRB working group decided to develop a new system named World Reference Base for Soil Resources (WRB) that should further develop the Revised Legend of the FAO soil classification and include some ideas of the more systematic IRB approach. Otto Spaargaren (International Soil Reference and Information Centre) and Freddy Nachtergaele (FAO) were nominated to prepare a draft. This draft was presented at the 15th World Congress of Soil Science in Acapu |
https://en.wikipedia.org/wiki/Hut%208 | Hut 8 was a section in the Government Code and Cypher School (GC&CS) at Bletchley Park (the British World War II codebreaking station, located in Buckinghamshire) tasked with solving German naval (Kriegsmarine) Enigma messages. The section was led initially by Alan Turing. He was succeeded in November 1942 by his deputy, Hugh Alexander. Patrick Mahon succeeded Alexander in September 1944.
Hut 8 was partnered with Hut 4, which handled the translation and intelligence analysis of the raw decrypts provided by Hut 8.
Located initially in one of the original single-story wooden huts, the name "Hut 8" was retained when Huts 3, 6 & 8 moved to a new brick building, Block D, in February 1943.
After 2005, the first Hut 8 was restored to its wartime condition, and it now houses the "HMS Petard Exhibition".
Operation
In 1940, a few breaks were made into the naval "Dolphin" code, but Luftwaffe messages were the first to be read in quantity. The German navy had much tighter procedures, and the capture of code books was needed before they could be broken. In February 1942, the German navy introduced "Triton", a version of Enigma with a fourth rotor for messages to and from Atlantic U-boats; these became unreadable for ten months during a crucial period (see Enigma in 1942).
Britain produced modified bombes, but it was the success of the US Navy bombe that was the main source of reading messages from this version of Enigma for the rest of the war. Messages were sent to and from across the Atlantic by enciphered teleprinter links.
Personnel
In addition to the cryptanalysts, around 130 women worked in Hut 8 and provided essential clerical support including punching holes into the Banbury sheets. Hut 8 relied on Wrens to run the bombes housed elsewhere at Bletchley.
Code breakers
Alan Turing
Conel Hugh O'Donel Alexander
Michael Arbuthnot Ashcroft
Joan Clarke
Joseph Gillis
Harry Golombek
I. J. Good
Peter Hilton, January 1942 to late 1942
Rosalind |
https://en.wikipedia.org/wiki/Dark-energy%20star | A dark-energy star is a hypothetical compact astrophysical object, which a minority of physicists think might constitute an alternative explanation for observations of astronomical black hole candidates.
The concept was proposed by physicist George Chapline. The theory states that infalling matter is converted into vacuum energy or dark energy, as the matter falls through the event horizon. The space within the event horizon would end up with a large value for the cosmological constant and have negative pressure to exert against gravity. There would be no information-destroying singularity.
Theory
In a 2000 paper, George Chapline Jr. and Robert B. Laughlin, with Evan Hohlfeld and David Santiago, modeled spacetime as a Bose–Einstein condensate.
In March 2005, physicist George Chapline claimed that quantum mechanics makes it a "near certainty" that black holes do not exist and are instead dark-energy stars. The dark-energy star is a different concept from that of a gravastar.
Dark-energy stars were first proposed because in quantum physics, absolute time is required; however, in general relativity, an object falling towards a black hole would, to an outside observer, seem to have time pass infinitely slowly at the event horizon. The object itself would feel as if time flowed normally.
In order to reconcile quantum mechanics with black holes, Chapline theorized that a phase transition in the phase of space occurs at the event horizon. He based his ideas on the physics of superfluids. As a column of superfluid grows taller, its density increases at some point, slowing the speed of sound so that it approaches zero. However, at that point, quantum physics makes sound waves dissipate their energy into the superfluid, so that the zero-sound-speed condition is never encountered.
In the dark-energy star hypothesis, infalling matter approaching the event horizon decays into successively lighter particles. Nearing the event horizon, environmental effects accelerate pro |
https://en.wikipedia.org/wiki/QCP | The QCP file format is used by many cellular telephone manufacturers to provide ring tones and record voice. It is based on RIFF, a generic format for storing chunks of data identified by tags. The QCP format does not specify how voice data in the file is encoded. Rather, it is a container format. QCP files typically contain QCELP- or EVRC-encoded audio.
Qualcomm, which originated the format, has removed an internal web page link from the page that formerly discussed QCP. "Out of an abundance of caution, due to the December 31st, 2007 injunction ordered against certain Qualcomm products, Qualcomm has temporarily removed certain web content until it can be reviewed and modified if necessary to ensure compliance with the injunction. It may be several more days or weeks before these pages are accessible again. Thank you for your patience."
QCP files have the same signature as RIFF files: A SOF (start of file) header of 52494646 ("RIFF"), and an EOF (end of file) of 0000.
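A minimal sketch of reading that RIFF preamble (the standard 12-byte layout: the "RIFF" magic, a little-endian chunk size, then a 4-byte form type; the file name below is hypothetical):

```python
import struct

def read_riff_preamble(path):
    """Return (chunk_size, form_type) from the 12-byte RIFF preamble."""
    with open(path, "rb") as f:
        magic, size, form = struct.unpack("<4sI4s", f.read(12))
    if magic != b"RIFF":
        raise ValueError("not a RIFF container")
    return size, form

# size, form = read_riff_preamble("ringtone.qcp")  # hypothetical QCP file
```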
Playing QCP files
Qualcomm previously offered downloads of the software and SDK for its PureVoice voice and audio enhancement products that could play and convert QCP files. |
https://en.wikipedia.org/wiki/Battery%20room | A battery room is a room that houses batteries for backup or uninterruptible power systems. The rooms are found in telecommunication central offices, and provide standby power for computing equipment in datacenters. Batteries provide direct current (DC) electricity, which may be used directly by some types of equipment, or which may be converted to alternating current (AC) by uninterruptible power supply (UPS) equipment. The batteries may provide power for minutes, hours or days, depending on each system's design, although they are most commonly activated during brief electric utility outages lasting only seconds.
Battery rooms were used to segregate the fumes and corrosive chemicals of wet cell batteries (often lead–acid) from the operating equipment, and for better control of temperature and ventilation. In 1890, the Western Union central telegraph office in New York City had 20,000 wet cells, mostly of the primary zinc-copper type.
Telecommunications
Telephone system central offices contain large battery systems to provide power for customer telephones, telephone switches, and related apparatus. Terrestrial microwave links, cellular telephone sites, fibre optic apparatus and satellite communications facilities also have standby battery systems, which may be large enough to occupy a separate room in the building. In normal operation power from the local commercial utility operates telecommunication equipment, and batteries provide power if the normal supply is interrupted. These can be sized for the expected full duration of an interruption, or may be required only to provide power while a standby generator set or other emergency power supply is started.
Batteries often used in battery rooms are the flooded lead-acid battery, the valve regulated lead-acid battery or the nickel–cadmium battery. Batteries are installed in groups. Several batteries are wired together in a series circuit forming a group providing DC electric power at 12, 24, 48 or 60 volts (or |
https://en.wikipedia.org/wiki/KIAH | KIAH (channel 39) is a television station in Houston, Texas, United States, serving as the local CW outlet. Owned and operated by network majority owner Nexstar Media Group, the station maintains studios adjacent to the Westpark Tollway on the southwest side of Houston, and its transmitter is located near Missouri City, in unincorporated Fort Bend County.
History
Origins
The station first signed on the air on January 6, 1967, as an independent station under the callsign KHTV (standing for "Houston Television"). Prior to its debut, the channel 39 allocation in Houston belonged to the now-defunct DuMont affiliate KNUZ-TV, which existed during the mid-1950s. Channel 39 was originally owned by the WKY Television System, a subsidiary of the Oklahoma Publishing Company, publishers of Oklahoma City's major daily newspaper, The Daily Oklahoman. After the company's namesake station, WKY-TV, was sold in 1976, the WKY Television System became Gaylord Broadcasting, named for the family that owned Oklahoma Publishing.
As Houston's first general entertainment independent station, KHTV ran a schedule of programs including children's shows, syndicated programs, movies, religious programs and some sporting events. One of its best known locally produced programs was Houston Wrestling, hosted by local promoter Paul Boesch, which aired on Saturday evenings (having been taped the night before at the weekly live shows in the Sam Houston Coliseum). From 1983 to 1985, the station was branded on-air as "KHTV 39 Gold". It was the leading independent station in Houston, even as competitors entered the market (including KVRL/KDOG (channel 26, now KRIV), when it launched in 1971). During this time, KHTV was distributed to cable providers as a regional superstation of sorts, with carriage on systems as far east as Baton Rouge, Louisiana.
As a WB affiliate
On November 2, 1993, the Warner Bros. Television division of Time Warner and the Tribune Company announced the formation of The WB Televis |
https://en.wikipedia.org/wiki/Rado%27s%20theorem%20%28Ramsey%20theory%29 | Rado's theorem is a theorem from the branch of mathematics known as Ramsey theory. It is named for the German mathematician Richard Rado. It was proved in his thesis, Studien zur Kombinatorik.
Statement
Let Ax = 0 be a system of linear equations, where A is a matrix with integer entries. This system is said to be r-regular if, for every r-coloring of the natural numbers 1, 2, 3, ..., the system has a monochromatic solution. A system is regular if it is r-regular for all r ≥ 1.
Rado's theorem states that a system is regular if and only if the matrix A satisfies the columns condition. Let ci denote the i-th column of A. The matrix A satisfies the columns condition provided that there exists a partition C1, C2, ..., Cn of the column indices such that, writing si for the sum of the columns cj with j in Ci, the following hold (a brute-force check of these conditions is sketched after this list):
s1 = 0
for all i ≥ 2, si can be written as a rational linear combination of the cj's in all the Ck with k < i. This means that si is in the linear subspace of Q^m spanned by the set of those cj's.
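Because the condition quantifies over ordered partitions of finitely many columns, it can be checked by brute force for small matrices; a sketch using sympy for exact rational rank computations:

```python
from itertools import product
from sympy import Matrix

def columns_condition(A):
    """Brute-force test of the columns condition for a small integer
    matrix A: try every ordered partition C1..Ck of the columns."""
    cols = [A.col(j) for j in range(A.cols)]
    zero = Matrix.zeros(A.rows, 1)
    m = len(cols)
    for k in range(1, m + 1):
        for labels in product(range(k), repeat=m):
            if set(labels) != set(range(k)):
                continue  # each block Ci must be non-empty
            sums = [sum((c for c, l in zip(cols, labels) if l == i), zero)
                    for i in range(k)]
            if sums[0] != zero:
                continue  # s1 must be the zero vector
            basis, ok = [c for c, l in zip(cols, labels) if l == 0], True
            for i in range(1, k):
                span = Matrix.hstack(*basis)
                # s_i lies in the rational span of earlier columns iff
                # appending it does not increase the rank
                if span.rank() != Matrix.hstack(span, sums[i]).rank():
                    ok = False
                    break
                basis += [c for c, l in zip(cols, labels) if l == i]
            if ok:
                return True
    return False

# Schur's equation x + y = z, i.e. the matrix (1 1 -1), is regular:
print(columns_condition(Matrix([[1, 1, -1]])))  # True
```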
Special cases
Folkman's theorem, the statement that there exist arbitrarily large sets of integers all of whose nonempty sums are monochromatic, may be seen as a special case of Rado's theorem concerning the regularity of the system of equations
where T ranges over each nonempty subset of the set
Other special cases of Rado's theorem are Schur's theorem and Van der Waerden's theorem. For proving the former, apply Rado's theorem to the matrix (1 1 −1), which encodes the equation x + y = z. For Van der Waerden's theorem, with m chosen to be the length of the monochromatic arithmetic progression, one can apply Rado's theorem to the system of equations stating that x1, ..., xm form an arithmetic progression.
Computability
Given a system of linear equations it is a priori unclear how to check computationally that it is regular. Fortunately, Rado's theorem provides a criterion which is testable in finite time. Instead of considering colourings (of infinitely many natural numbers), it must be checked that the given matrix satisfies the columns condition. Since the matrix consists only of finitely many columns, this property |
https://en.wikipedia.org/wiki/Endaural%20phenomena | Endaural phenomena are sounds that are heard without any external acoustic stimulation. Endaural means "in the ear". Phenomena include transient ringing in the ears (that sound like sine tones), white noise-like sounds, and subjective tinnitus. Endaural phenomena need to be distinguished from otoacoustic emissions, in which a person's ear emits sounds. The emitter typically cannot hear the sounds made by his or her ear. Endaural phenomena also need to be distinguished from auditory hallucinations, which are sometimes associated with psychosis.
See also
Bruit
Entoptic phenomenon |
https://en.wikipedia.org/wiki/UNIVAC%20418 | The UNIVAC 418 was a transistorized, 18-bit word magnetic-core memory machine made by Sperry Univac. The name came from its 4-microsecond memory cycle time and 18-bit word. The assembly languages for this class of computers were TRIM III and ART418.
Over the three different models, more than 392 systems were manufactured. It evolved from the Control Unit Tester (CUT), a device used in the factory to test peripherals for larger systems.
Architecture
The instruction word had three formats:
Format I - common Load, Store, and Arithmetic operations
f - Function code (6 bits)
u - Operand address (12 bits)
Format II - Constant arithmetic and Boolean functions
f - Function code (6 bits)
z - Operand address or value (12 bits)
Format III - Input/Output
f - Function code (6 bits)
m - Minor function code (6 bits)
k - Designator (6 bits) used for channel number, shift count, etc.
Numbers were represented in ones' complement, single and double precision. The TRIM assembly source code used octal numbers as opposed to the more common hexadecimal, because the 18-bit words are evenly divisible by 3, but not by 4.
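The divisibility argument is easy to see in code (an illustration, not from the source): each octal digit covers exactly three bits, so an 18-bit word prints as exactly six octal digits, whereas four-bit hexadecimal digits would not divide evenly:

```python
def octal_word(value, bits=18):
    """Render a word as octal; 18 bits / 3 bits per digit = 6 digits."""
    return format(value & ((1 << bits) - 1), "06o")

print(octal_word(0b101_010_111_000_110_001))  # '527061'
```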
The machine had the following addressable registers:
A - Register (Double precision Accumulator, 36 bits) composed of:
AU - Register (Upper Accumulator, 18 bits)
AL - Register (Lower Accumulator, 18 bits)
ICR - Register (Index Control Register, 3 bits), also designated the "B-register"
SR - Register ("Special Register", 4 bits), a paging register allowing direct access to memory banks other than the executing (P register) bank
P - Register (Program address, 15 bits)
All register values were displayed in real time on the front panel of the computer in binary, with the ability of the user to enter new values via push button (a function that was safe to perform only when the computer was not in run mode).
UNIVAC 418-I
The first UNIVAC 418-I was delivered in June 1963. It was available with 4,096 to 16,384 words of memory.
UNIVAC 1218 Military Computer
The 418-I was also av |
https://en.wikipedia.org/wiki/Habitat%20destruction | Habitat destruction (also termed habitat loss and habitat reduction) is the process by which a natural habitat becomes incapable of supporting its native species. The organisms that previously inhabited the site are displaced or dead, thereby reducing biodiversity and species abundance. Habitat destruction is the leading cause of biodiversity loss. Fragmentation and loss of habitat have become one of the most important topics of research in ecology as they are major threats to the survival of endangered species.
Activities such as harvesting natural resources, industrial production and urbanization are human contributions to habitat destruction. Pressure from agriculture is the principal human cause. Some others include mining, logging, trawling, and urban sprawl. Habitat destruction is currently considered the primary cause of species extinction worldwide. Environmental factors can contribute to habitat destruction more indirectly. Geological processes, climate change, introduction of invasive species, ecosystem nutrient depletion, water and noise pollution are some examples. Loss of habitat can be preceded by an initial habitat fragmentation.
Attempts to address habitat destruction are in international policy commitments embodied by Sustainable Development Goal 15 "Life on Land" and Sustainable Development Goal 14 "Life Below Water". However, the United Nations Environment Programme report on "Making Peace with Nature" released in 2021 found that most of these efforts had failed to meet their internationally agreed upon goals.
Impacts on organisms
When a habitat is destroyed, the carrying capacity for indigenous plants, animals, and other organisms is reduced so that populations decline, sometimes up to the level of extinction.
Habitat loss is perhaps the greatest threat to organisms and biodiversity. Temple (1986) found that 82% of endangered bird species were significantly threatened by habitat loss. Most amphibian species are also threatened by native habit |
https://en.wikipedia.org/wiki/Metric%20dimension%20%28graph%20theory%29 | In graph theory, the metric dimension of a graph G is the minimum cardinality of a subset S of vertices such that all other vertices are uniquely determined by their distances to the vertices in S. Finding the metric dimension of a graph is an NP-hard problem; the decision version, determining whether the metric dimension is less than a given value, is NP-complete.
Detailed definition
For an ordered subset W = {w1, w2, ..., wk} of vertices and a vertex v in a connected graph G, the representation of v with respect to W is the ordered k-tuple r(v|W) = (d(v, w1), d(v, w2), ..., d(v, wk)), where d(x,y) represents the distance between the vertices x and y. The set W is a resolving set (or locating set) for G if every two vertices of G have distinct representations. The metric dimension of G is the minimum cardinality of a resolving set for G. A resolving set containing a minimum number of vertices is called a basis (or reference set) for G. Resolving sets for graphs were introduced independently by Slater and by Harary and Melter, while the concept of a resolving set and that of metric dimension were defined much earlier in the more general context of metric spaces by Blumenthal in his monograph Theory and Applications of Distance Geometry. Graphs are special examples of metric spaces with their intrinsic path metric.
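A brute-force computation directly from this definition (exponential in the number of vertices, so suitable only for toy graphs; networkx supplies the distances):

```python
from itertools import combinations
import networkx as nx

def metric_dimension(G):
    """Smallest k such that some k-subset W resolves G, found by
    checking that all distance-vector representations are distinct."""
    dist = dict(nx.all_pairs_shortest_path_length(G))
    nodes = list(G)
    for k in range(1, len(nodes) + 1):
        for W in combinations(nodes, k):
            reps = {tuple(dist[v][w] for w in W) for v in nodes}
            if len(reps) == len(nodes):
                return k, W  # W is a resolving set of minimum size

print(metric_dimension(nx.path_graph(5)))   # paths have metric dimension 1
print(metric_dimension(nx.cycle_graph(5)))  # (2, ...) for a 5-cycle
```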
Trees
If a tree is a path, its metric dimension is one. Otherwise, let L denote the set of leaves, degree-one vertices in the tree. Let K be the set of vertices that have degree greater than two, and that are connected by paths of degree-two vertices to one or more leaves. Then the metric dimension is |L| − |K|. A basis of this cardinality may be formed by removing from L one of the leaves associated with each vertex in K. The same algorithm is valid for the line graph of the tree, and thus any tree and its line graph have the same metric dimension.
Properties
It has been proved that:
The metric dimension of a graph G is 1 if and only if G is a path.
The metric dimension of an n-vertex graph is n − 1 if and only if it is a complete graph.
The metri |
https://en.wikipedia.org/wiki/Chinese%20Software%20Developer%20Network | The "Chinese Software Developer Network" or "China Software Developer Network", (CSDN), operated by Bailian Midami Digital Technology Co., Ltd., is one of the biggest networks of software developers in China. CSDN provides Web forums, blog hosting, IT news, and other services. CSDN has about 10 million registered users and is the largest developer community in China.
Services offered
Web forums with a ranking system and similar topics
Blog hosting, with 69,484 bloggers as of April 7, 2005
Document Center, a selection of blog articles
IT News
IT job hunting and training services
Online bookmark service
Web Forum
The CSDN community website is where Chinese software programmers seek advice. A poster describes a problem, posts it in the forum with a price in CSDN points, and then waits for replies. On some popular boards, a poster will get a response in a few hours, if not minutes. Most replies are short but enough to point out the mistake and give possible solutions. Some posts include code and may grow to several pages. The majority of posts are written in Simplified Chinese, although Traditional Chinese and English posts are not uncommon.
Topics are mainly IT related and focused on programming, but political and life topics are also active. The forums were closed for two weeks in June 2004. This was likely for political reasons, because many political words, such as the names of political leaders and organizations, have been banned in posts since then. However, political discussions with intentional misspellings are still active.
Blog
The site hosts many IT blogs, but the large number of bloggers makes the server slow. In December 2005, Baidu rated CSDN as one of the top Chinese blog service providers.
Collaboration with Microsoft
CSDN started cooperation with Microsoft in 2002, and several Microsoft technical support staff have provided their support in CSDN forums since then. CSDN is also a major source of Chinese Microsoft Most Valuable Professional |
https://en.wikipedia.org/wiki/List%20of%20NP-complete%20problems | This is a list of some of the more commonly known problems that are NP-complete when expressed as decision problems. As there are hundreds of such problems known, this list is in no way comprehensive. Many problems of this type can be found in .
Graphs and hypergraphs
Graphs occur frequently in everyday applications. Examples include biological or social networks, which contain hundreds, thousands and even billions of nodes in some cases (e.g. Facebook or LinkedIn).
1-planarity
3-dimensional matching
Bandwidth problem
Bipartite dimension
Capacitated minimum spanning tree
Route inspection problem (also called Chinese postman problem) for mixed graphs (having both directed and undirected edges). The problem is solvable in polynomial time if the graph has all undirected or all directed edges. Variants include the rural postman problem.
Clique cover problem
Clique problem
Complete coloring, a.k.a. achromatic number
Cycle rank
Degree-constrained spanning tree
Domatic number
Dominating set, a.k.a. domination number
NP-complete special cases include the edge dominating set problem, i.e., the dominating set problem in line graphs. NP-complete variants include the connected dominating set problem and the maximum leaf spanning tree problem.
Feedback vertex set
Feedback arc set
Graph coloring
Graph homomorphism problem
Graph partition into subgraphs of specific types (triangles, isomorphic subgraphs, Hamiltonian subgraphs, forests, perfect matchings) is known to be NP-complete. Partition into cliques is the same problem as coloring the complement of the given graph. A related problem is to find a partition that is optimal in terms of the number of edges between parts.
Grundy number of a directed graph.
Hamiltonian completion
Hamiltonian path problem, directed and undirected.
Graph intersection number
Longest path problem
Maximum bipartite subgraph or (especially with weighted edges) maximum cut.
Maximum common subgraph isomorphism problem
Maximum independent set
Maximum Induced pat |
https://en.wikipedia.org/wiki/Ti%20plasmid | A tumour inducing (Ti) plasmid is a plasmid found in pathogenic species of Agrobacterium, including A. tumefaciens, A. rhizogenes, A. rubi and A. vitis.
Evolutionarily, the Ti plasmid is part of a family of plasmids carried by many species of Alphaproteobacteria. Members of this plasmid family are defined by the presence of a conserved DNA region known as the repABC gene cassette, which mediates the replication of the plasmid, the partitioning of the plasmid into daughter cells during cell division as well as the maintenance of the plasmid at low copy numbers in a cell. The Ti plasmids themselves are sorted into different categories based on the type of molecule, or opine, they allow the bacteria to break down as an energy source.
The presence of this Ti plasmid is essential for the bacteria to cause crown gall disease in plants. This is facilitated via certain crucial regions in the Ti plasmid, including the vir region, which encodes for virulence genes, and the transfer DNA (T-DNA) region, which is a section of the Ti plasmid that is transferred via conjugation into host plant cells after an injury site is sensed by the bacteria. These regions have features that allow the delivery of T-DNA into host plant cells, and can modify the host plant cell to cause the synthesis of molecules like plant hormones (e.g. auxins, cytokinins) and opines and the formation of crown gall tumours.
Because the T-DNA region of the Ti plasmid can be transferred from bacteria to plant cells, it represented an exciting avenue for the transfer of DNA between kingdoms and spurred large amounts of research on the Ti plasmid and its possible uses in bioengineering.
Nomenclature and classification
The Ti plasmid is a member of the RepABC plasmid family found in Alphaproteobacteria. These plasmids are often relatively large in size, ranging from 100kbp to 2Mbp. They are also often termed replicons, as their replication begins at a single site. Members of this f |
https://en.wikipedia.org/wiki/Datasaab%20D2 | D2 was a concept and prototype computer designed by Datasaab in Linköping, Sweden. It was built with discrete transistors and completed in 1960. Its purpose was to investigate the feasibility of building a computer for use in an aircraft to assist with navigation, ultimately leading to the design of the CK37 computer used in Saab 37 Viggen. This military side of the project was known as SANK, or Saabs Automatiska Navigations-Kalkylator (Saab's Automatic Navigational-Calculator), and D2 was the name for its civilian application.
The D2 weighed approximately 200 kg, and could be placed on a desktop. It used words of 20 bits corresponding to 6 decimal digits. The memory capacity was 6K words, corresponding to 15 kilobytes. Programs and data were stored in separate memories. It could perform 100,000 integer additions per second. Paper tape was used for input.
Experience from the D2 prototype was the foundation for Datasaab's continued development of both the civilian D21 computer and military aircraft models. The commercial D21, launched as early as 1962, used magnetic tape, 24-bit words, and a unified program and data memory. Otherwise it was close to the D2 prototype, though a working airborne computer required far more miniaturization.
The D2 is on exhibit at IT-ceum, the computer museum in Linköping, Sweden. |
https://en.wikipedia.org/wiki/Windows%20Presentation%20Foundation | Windows Presentation Foundation (WPF) is a free and open-source graphical subsystem (similar to WinForms) originally developed by Microsoft for rendering user interfaces in Windows-based applications. WPF, previously known as "Avalon", was initially released as part of .NET Framework 3.0 in 2006. WPF uses DirectX and attempts to provide a consistent programming model for building applications. It separates the user interface from business logic, and resembles similar XML-oriented object models, such as those implemented in XUL and SVG.
Overview
WPF employs XAML, an XML-based language, to define and link various interface elements. WPF applications can be deployed as standalone desktop programs or hosted as an embedded object in a website. WPF aims to unify a number of common user interface elements, such as 2D/3D rendering, fixed and adaptive documents, typography, vector graphics, runtime animation, and pre-rendered media. These elements can then be linked and manipulated based on various events, user interactions, and data bindings.
WPF runtime libraries are included with all versions of Microsoft Windows since Windows Vista and Windows Server 2008. Users of Windows XP SP2/SP3 and Windows Server 2003 can optionally install the necessary libraries.
Microsoft Silverlight provided functionality that is mostly a subset of WPF to provide embedded web controls comparable to Adobe Flash. 3D runtime rendering had been supported in Silverlight since Silverlight 5.
At the Microsoft Connect event on December 4, 2018, Microsoft announced the release of WPF as an open-source project on GitHub. It is released under the MIT License. Windows Presentation Foundation has become available for projects targeting the .NET software framework; however, the system is not cross-platform and is still available only on Windows.
Features
Direct3D
Graphics, including desktop items like windows, are rendered using Direct3D. This allows the display of more complex graphics and custom themes, a |
https://en.wikipedia.org/wiki/Pitch%20detection%20algorithm | A pitch detection algorithm (PDA) is an algorithm designed to estimate the pitch or fundamental frequency of a quasiperiodic or oscillating signal, usually a digital recording of speech or a musical note or tone. This can be done in the time domain, the frequency domain, or both.
PDAs are used in various contexts (e.g. phonetics, music information retrieval, speech coding, musical performance systems) and so there may be different demands placed upon the algorithm. There is as yet no single ideal PDA, so a variety of algorithms exist, most falling broadly into the classes given below.
A PDA typically estimates the period of a quasiperiodic signal, then inverts that value to give the frequency.
General approaches
One simple approach would be to measure the distance between zero crossing points of the signal (i.e. the zero-crossing rate). However, this does not work well with complicated waveforms which are composed of multiple sine waves with differing periods or noisy data. Nevertheless, there are cases in which zero-crossing can be a useful measure, e.g. in some speech applications where a single source is assumed. The algorithm's simplicity makes it "cheap" to implement.
More sophisticated approaches compare segments of the signal with other segments offset by a trial period to find a match. AMDF (average magnitude difference function), ASMDF (Average Squared Mean Difference Function), and other similar autocorrelation algorithms work this way. These algorithms can give quite accurate results for highly periodic signals. However, they have false detection problems (often "octave errors"), can sometimes cope badly with noisy signals (depending on the implementation), and - in their basic implementations - do not deal well with polyphonic sounds (which involve multiple musical notes of different pitches).
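A minimal sketch of both approaches on a synthetic tone (numpy only; parameter choices such as the 50-1000 Hz search band are arbitrary):

```python
import numpy as np

def zero_crossing_pitch(x, fs):
    """Crude estimate from zero crossings; assumes a clean, roughly
    sinusoidal signal with two crossings per period."""
    crossings = np.where(np.diff(np.signbit(x)))[0]
    if len(crossings) < 2:
        return None
    return fs / (2 * np.mean(np.diff(crossings)))

def autocorrelation_pitch(x, fs, fmin=50.0, fmax=1000.0):
    """Estimate from the autocorrelation peak within a plausible lag range."""
    x = x - np.mean(x)
    r = np.correlate(x, x, mode="full")[len(x) - 1:]
    lo, hi = int(fs / fmax), int(fs / fmin)
    return fs / (lo + np.argmax(r[lo:hi]))

fs = 8000
t = np.arange(2048) / fs
x = np.sin(2 * np.pi * 220 * t)      # 220 Hz test tone
print(zero_crossing_pitch(x, fs))    # ~220
print(autocorrelation_pitch(x, fs))  # ~220
```

On this clean tone both estimators agree; the differences described above only appear on noisy or polyphonic material.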
Current time-domain pitch detector algorithms tend to build upon the basic methods mentioned above, with additional refinements to bring the performance more |
https://en.wikipedia.org/wiki/Designer%20baby | A designer baby is a baby whose genetic makeup has been selected or altered, often to exclude a particular gene or to remove genes associated with disease. This process usually involves analysing a wide range of human embryos to identify genes associated with particular diseases and characteristics, and selecting embryos that have the desired genetic makeup; a process known as preimplantation genetic diagnosis. Screening for single genes is commonly practiced, and polygenic screening is offered by a few companies. Other methods by which a baby's genetic information can be altered involve directly editing the genome before birth, which is not routinely performed and only one instance of this is known to have occurred as of 2019, where Chinese twins Lulu and Nana were edited as embryos, causing widespread criticism.
Genetically altered embryos can be achieved by introducing the desired genetic material into the embryo itself, or into the sperm and/or egg cells of the parents; either by delivering the desired genes directly into the cell or using gene-editing technology. This process is known as germline engineering and performing this on embryos that will be brought to term is typically prohibited by law. Editing embryos in this manner means that the genetic changes can be carried down to future generations, and since the technology concerns editing the genes of an unborn baby, it is considered controversial and is subject to ethical debate. While some scientists condone the use of this technology to treat disease, concerns have been raised that this could be translated into using the technology for cosmetic purposes and enhancement of human traits.
Pre-implantation genetic diagnosis
Pre-implantation genetic diagnosis (PGD or PIGD) is a procedure in which embryos are screened prior to implantation. The technique is used alongside in vitro fertilisation (IVF) to obtain embryos for evaluation of the genome – alternatively, ovocytes can be screened prior to fertilisat |
https://en.wikipedia.org/wiki/Sanger%20sequencing | Sanger sequencing is a method of DNA sequencing that involves electrophoresis and is based on the random incorporation of chain-terminating dideoxynucleotides by DNA polymerase during in vitro DNA replication. After first being developed by Frederick Sanger and colleagues in 1977, it became the most widely used sequencing method for approximately 40 years. It was first commercialized by Applied Biosystems in 1986. More recently, higher volume Sanger sequencing has been replaced by next generation sequencing methods, especially for large-scale, automated genome analyses. However, the Sanger method remains in wide use for smaller-scale projects and for validation of deep sequencing results. It still has the advantage over short-read sequencing technologies (like Illumina) in that it can produce DNA sequence reads of > 500 nucleotides and maintains a very low error rate with accuracies around 99.99%. Sanger sequencing is still actively being used in efforts for public health initiatives such as sequencing the spike protein from SARS-CoV-2 as well as for the surveillance of norovirus outbreaks through the Centers for Disease Control and Prevention's (CDC) CaliciNet surveillance network.
Method
The classical chain-termination method requires a single-stranded DNA template, a DNA primer, a DNA polymerase, normal deoxynucleotide triphosphates (dNTPs), and modified di-deoxynucleotide triphosphates (ddNTPs), the latter of which terminate DNA strand elongation. These chain-terminating nucleotides lack a 3'-OH group required for the formation of a phosphodiester bond between two nucleotides, causing DNA polymerase to cease extension of DNA when a modified ddNTP is incorporated. The ddNTPs may be radioactively or fluorescently labelled for detection in automated sequencing machines.
The DNA sample is divided into four separate sequencing reactions, containing all four of the standard deoxynucleotides (dATP, dGTP, dCTP and dTTP) and the DNA polymerase. To each reaction is adde |
https://en.wikipedia.org/wiki/Solitaire%20%28cipher%29 | The Solitaire cryptographic algorithm was designed by Bruce Schneier at the request of Neal Stephenson for use in his novel Cryptonomicon, in which field agents use it to communicate securely without having to rely on electronics or having to carry incriminating tools. It was designed to be a manual cryptosystem calculated with an ordinary deck of playing cards. In Cryptonomicon, this algorithm was originally called Pontifex to hide the fact that it involved playing cards.
One of the motivations behind Solitaire's creation is that in totalitarian environments, a deck of cards is far more affordable (and less incriminating) than a personal computer with an array of cryptological utilities. However, as Schneier warns in the appendix of Cryptonomicon, just about everyone with an interest in cryptanalysis will now know about this algorithm, so carrying a deck of cards may also be considered incriminating. Furthermore, analysis has revealed flaws in the cipher such that it is now considered insecure.
Encryption and decryption
This algorithm uses a standard deck of cards with 52 suited cards and two jokers which are distinguishable from each other, called the A joker and the B joker. For simplicity's sake, only two suits will be used in this example, clubs and diamonds. Each card is assigned a numerical value: the clubs will be numbered from 1 to 13 (Ace through King) and the diamonds will be numbered 14 through 26 in the same manner. The jokers will be assigned the values of 27 and 28. Thus, the jack of clubs would have the value 11, and the two of diamonds would have the value 15. (In a full deck of cards, the suits are valued in bridge order: clubs, diamonds, hearts, spades, with the suited cards numbered 1 through 52, and the jokers numbered 53 and 54.)
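The numbering scheme for the full deck is mechanical enough to state in a few lines of code (an illustration of the value assignment only, not the cipher itself):

```python
SUITS = ["clubs", "diamonds", "hearts", "spades"]  # bridge order

def card_value(rank, suit):
    """Value of a suited card: Ace=1 .. King=13 within each suit,
    offset by 13 per suit in bridge order, giving 1..52.
    Jokers are 53 (A joker) and 54 (B joker)."""
    return SUITS.index(suit) * 13 + rank

print(card_value(11, "clubs"))    # jack of clubs -> 11
print(card_value(2, "diamonds"))  # two of diamonds -> 15
```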
To begin encryption or decryption, arrange the deck of cards face-up in an order previously agreed upon. The person decrypting a message must have a deck arranged in the same order as the deck used by the person wh |
https://en.wikipedia.org/wiki/Cryptobiosis | Cryptobiosis or anabiosis is a metabolic state in extremophilic organisms in response to adverse environmental conditions such as desiccation, freezing, and oxygen deficiency. In the cryptobiotic state, all measurable metabolic processes stop, preventing reproduction, development, and repair. When environmental conditions return to being hospitable, the organism will return to its metabolic state of life as it was prior to cryptobiosis.
Forms
Anhydrobiosis
Anhydrobiosis is the most studied form of cryptobiosis and occurs in situations of extreme desiccation. The term anhydrobiosis derives from the Greek for "life without water" and is most commonly used for the desiccation tolerance observed in certain invertebrate animals such as bdelloid rotifers, tardigrades, brine shrimp, nematodes, and at least one insect, a species of chironomid (Polypedilum vanderplanki). However, other life forms exhibit desiccation tolerance. These include the resurrection plant Craterostigma plantagineum, the majority of plant seeds, and many microorganisms such as bakers' yeast. Studies have shown that some anhydrobiotic organisms can survive for decades, even centuries, in the dry state.
Invertebrates undergoing anhydrobiosis often contract into a smaller shape and some proceed to form a sugar called trehalose. Desiccation tolerance in plants is associated with the production of another sugar, sucrose. These sugars are thought to protect the organism from desiccation damage. In some creatures, such as bdelloid rotifers, no trehalose has been found, which has led scientists to propose other mechanisms of anhydrobiosis, possibly involving intrinsically disordered proteins.
In 2011, Caenorhabditis elegans, a nematode that is also one of the best-studied model organisms, was shown to undergo anhydrobiosis in the dauer larva stage. Further research taking advantage of genetic and biochemical tools available for this organism revealed that in addition to trehalose biosynthesis, a set of |
https://en.wikipedia.org/wiki/Restriction%20digest | A restriction digest is a procedure used in molecular biology to prepare DNA for analysis or other processing. It is sometimes termed DNA fragmentation, though this term is used for other procedures as well. In a restriction digest, DNA molecules are cleaved at specific restriction sites of 4-12 nucleotides in length by use of restriction enzymes which recognize these sequences.
The resulting digested DNA is very often selectively amplified using the polymerase chain reaction (PCR), making it more suitable for analytical techniques such as agarose gel electrophoresis and chromatography. It is used in genetic fingerprinting, plasmid subcloning, and RFLP analysis.
Restriction site
A given restriction enzyme cuts DNA segments within a specific nucleotide sequence, at what is called a restriction site. These recognition sequences are typically four, six, eight, ten, or twelve nucleotides long and generally palindromic (i.e. the same nucleotide sequence in the 5' – 3' direction). Because there are only so many ways to arrange the four nucleotides that compose DNA (Adenine, Thymine, Guanine and Cytosine) into a four- to twelve-nucleotide sequence, recognition sequences tend to occur by chance in any long sequence. Restriction enzymes specific to hundreds of distinct sequences have been identified and synthesized for sale to laboratories, and as a result, several potential "restriction sites" appear in almost any gene or locus of interest on any chromosome. Furthermore, almost all artificial plasmids include a (often entirely synthetic) polylinker (also called "multiple cloning site") that contains dozens of restriction enzyme recognition sequences within a very short segment of DNA. This allows the insertion of almost any specific fragment of DNA into plasmid vectors, which can be efficiently "cloned" by insertion into replicating bacterial cells.
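A small sketch of locating such sites in a sequence string; EcoRI's six-base recognition sequence GAATTC serves as the worked example (a regex lookahead catches overlapping matches):

```python
import re

def find_sites(seq, site):
    """0-based start positions of a recognition site (overlaps included)."""
    return [m.start() for m in re.finditer(f"(?={site})", seq)]

def is_palindromic(site):
    """True if the site equals its own reverse complement."""
    return site == site.translate(str.maketrans("ACGT", "TGCA"))[::-1]

print(is_palindromic("GAATTC"))                  # True
print(find_sites("TTGAATTCGGAATTCA", "GAATTC"))  # [2, 9]
```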
After restriction digest, DNA can then be analysed using agarose gel electrophoresis. In gel electrophoresis, a sample of |
https://en.wikipedia.org/wiki/Fecal%20coliform | A fecal coliform (British: faecal coliform) is a facultatively anaerobic, rod-shaped, gram-negative, non-sporulating bacterium. Coliform bacteria generally originate in the intestines of warm-blooded animals. Fecal coliforms are capable of growth in the presence of bile salts or similar surface agents, are oxidase negative, and produce acid and gas from lactose within 48 hours at 44 ± 0.5°C. The term "thermotolerant coliform" is more correct and is gaining acceptance over "fecal coliform".
Coliform bacteria include genera that originate in feces (e.g. Escherichia) as well as genera not of fecal origin (e.g. Enterobacter, Klebsiella, Citrobacter). The assay is intended to be an indicator of fecal contamination; more specifically of E. coli which is an indicator microorganism for other pathogens that may be present in feces. Presence of fecal coliforms in water may not be directly harmful, and does not necessarily indicate the presence of feces.
Fecal bacteria as indicator of water quality
Background
In general, increased levels of fecal coliforms provide a warning of failure in water treatment, a break in the integrity of the distribution system, or possible contamination with pathogens. When levels are high there may be an elevated risk of waterborne gastroenteritis. Tests for the bacteria are cheap, reliable and rapid (1-day incubation).
Potential sources of bacteria in water
The presence of fecal coliform in aquatic environments may indicate that the water has been contaminated with the fecal material of humans or other animals. Fecal coliform bacteria can enter rivers through direct discharge of waste from mammals and birds, from agricultural and storm runoff, and from human sewage. However, their presence may also be the result of plant material, and pulp or paper mill effluent.
Human sewage
Failing home septic systems can allow coliforms in the effluent to flow into the water table, aquifers, drainage ditches and nearby surface waters. Sewage conn |
https://en.wikipedia.org/wiki/Bernhard%20Rensch | Bernhard Rensch (21 January 1900 – 4 April 1990) was a German evolutionary biologist and ornithologist who did field work in Indonesia and India. Starting his scientific career with pro-Lamarckian views, he shifted to selectionism and became one of the architects of the modern synthesis in evolutionary biology, which he popularised in Germany. Besides his work on how environmental factors influenced the evolution of geographically isolated populations and on evolution above the species level, which contributed to the modern synthesis, he also worked extensively in the area of animal behavior (ethology) and on philosophical aspects of biological science. His education and scientific work were interrupted by service in the German military during both World War I and World War II.
Biography
Rensch was born in Thale and as a young boy, he took an interest in observing the natural world and discovered a talent for drawing and painting. He served in the German army from 1917–1920 and began to observe natural phenomena while he was held prisoner in France. He returned to Germany and began his studies on feather structure under Valentin Haecker (1864–1927) who had himself studied under August Weismann. Until the 1930s Rensch held anti-Darwinian and Lamarckian views. Rensch also took an interest in the philosophy of science and was fascinated by Theodor Ziehen (1862–1950). Rensch also studied expressionist painting and in later life examined the biological roots of art. He received his Ph.D. from the University of Halle in 1922. He joined the zoological museum of the University of Berlin as an assistant in 1925. In 1927 he participated in a zoological expedition to the Sunda Islands. He studied the geographical distribution of subspecies of polytypic species and of complexes of closely related species with attention to how local environmental factors, especially climate, influenced their evolution. In 1929 he published the book Das Prinzip geographischer Rassenkreise und da |
https://en.wikipedia.org/wiki/Spatial%20network | A spatial network (sometimes also geometric graph) is a graph in which the vertices or edges are spatial elements associated with geometric objects, i.e., the nodes are located in a space equipped with a certain metric. The simplest mathematical realization of a spatial network is a lattice or a random geometric graph, where nodes are distributed uniformly at random over a two-dimensional plane and a pair of nodes are connected if the Euclidean distance between them is smaller than a given neighborhood radius. Transportation and mobility networks, the Internet, mobile phone networks, power grids, social and contact networks and biological neural networks are all examples where the underlying space is relevant and where the graph's topology alone does not contain all the information. Characterizing and understanding the structure, resilience and the evolution of spatial networks is crucial for many different fields ranging from urbanism to epidemiology.
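A minimal sketch of the random geometric graph just described, under illustrative assumptions (100 nodes in the unit square, neighborhood radius 0.15):

#include <stdio.h>
#include <stdlib.h>
#include <math.h>

#define N 100      /* number of nodes (assumption) */
#define R 0.15     /* neighborhood radius (assumption) */

int main(void) {
    double x[N], y[N];
    srand(42);
    /* place nodes uniformly at random over the unit square */
    for (int i = 0; i < N; i++) {
        x[i] = (double)rand() / RAND_MAX;
        y[i] = (double)rand() / RAND_MAX;
    }
    /* connect every pair whose Euclidean distance is below the radius */
    int edges = 0;
    for (int i = 0; i < N; i++)
        for (int j = i + 1; j < N; j++)
            if (hypot(x[i] - x[j], y[i] - y[j]) < R)
                edges++;   /* a real implementation would store the edge (i, j) */
    printf("random geometric graph: %d nodes, %d edges\n", N, edges);
    return 0;
}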
Examples
An urban spatial network can be constructed by abstracting intersections as nodes and streets as links, which is referred to as a transportation network.
One might think of the 'space map' as being the negative image of the standard map, with the open space cut out of the background buildings or walls.
Characterizing spatial networks
The following are some of the characteristics used to examine a spatial network:
Planar networks
In many applications, such as railways, roads, and other transportation networks, the network is assumed to be planar. Planar networks form an important class of spatial networks, but not all spatial networks are planar. Indeed, airline passenger networks are a non-planar example: many large airports in the world are connected through direct flights.
The way it is embedded in space
There are examples of networks that seem not to be "directly" embedded in space. Social networks, for instance, connect individuals through friendship relations. But in this cas |
https://en.wikipedia.org/wiki/Quantum%201/f%20noise | Quantum 1/f noise is an intrinsic and fundamental part of quantum mechanics. Fighter pilots, photographers, and scientists all appreciate the higher quality of images and signals resulting from the consideration of quantum 1/f noise. Engineers have battled unwanted 1/f noise since 1925, giving it poetic names (such as flicker noise, funkelrauschen, bruit de scintillation, etc.) due to its mysterious nature. The quantum 1/f noise theory was developed about 50 years later; it describes the nature of 1/f noise and allows it to be explained and calculated via straightforward engineering formulas. It allows for the low-noise optimization of materials, devices and systems in most high-technology applications of modern industry and science. The theory includes the conventional and coherent quantum 1/f effects (Q1/fE). Both effects are combined in a general engineering formula and are present in Q1/f noise, which is itself most of fundamental 1/f noise. The latter is defined as the result of the simultaneous presence of nonlinearity and a certain type of homogeneity in a system, and can be quantum or classical.
The conventional Q1/fE represents 1/f fluctuations caused by bremsstrahlung, decoherence and interference in the scattering of charged particles off one another, in tunneling or in any other process in solid state physics and in general.
Other noise data sets
It has also recently been claimed that 1/f noise has been seen in higher ordered self constructing functions, as well as complex systems, both biological, chemical, and physical.
The theory
The basic derivation of quantum 1/f was made by Peter Handel, a theoretical physicist at the University of Missouri–St. Louis, and published in Physical Review A, in August 1980.
Several hundred papers have been published by many authors on Handel's quantum theory on 1/f noise, which is a new aspect of quantum mechanics. They verified, applied, and further developed the quantum 1/f noise formulas. Aldert van der Ziel, the |
https://en.wikipedia.org/wiki/Transverse%20isotropy | A transversely isotropic material is one with physical properties that are symmetric about an axis that is normal to a plane of isotropy. This transverse plane has infinite planes of symmetry and thus, within this plane, the material properties are the same in all directions. Hence, such materials are also known as "polar anisotropic" materials. In geophysics, vertically transverse isotropy (VTI) is also known as radial anisotropy.
This type of material exhibits hexagonal symmetry (though technically this ceases to be true for tensors of rank 6 and higher), so the number of independent constants in the (fourth-rank) elasticity tensor is reduced to 5 (from a total of 21 independent constants in the case of a fully anisotropic solid). The (second-rank) tensors of electrical resistivity, permeability, etc. have two independent constants.
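In Voigt notation, with the 3-axis taken along the axis of symmetry, the transversely isotropic stiffness matrix takes the standard form below with five independent constants C11, C12, C13, C33 and C44 (a standard result added here for concreteness; it is not spelled out in the excerpt):

    [ C11   C12   C13    0     0     0          ]
    [ C12   C11   C13    0     0     0          ]
    [ C13   C13   C33    0     0     0          ]
    [  0     0     0    C44    0     0          ]
    [  0     0     0     0    C44    0          ]
    [  0     0     0     0     0   (C11-C12)/2  ]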
Example of transversely isotropic materials
An example of a transversely isotropic material is the so-called on-axis unidirectional fiber composite lamina where the fibers are circular in cross section. In a unidirectional composite, the plane normal to the fiber direction can be considered as the isotropic plane, at long wavelengths (low frequencies) of excitation. In the figure to the right, the fibers would be aligned with the axis, which is normal to the plane of isotropy.
In terms of effective properties, geological layers of rocks are often interpreted as being transversely isotropic. Calculating the effective elastic properties of such layers in petrology has been coined Backus upscaling, which is described below.
Material symmetry matrix
The material matrix K has a symmetry with respect to a given orthogonal transformation A if it does not change when subjected to that transformation.
For invariance of the material properties under such a transformation we require A·K·A^T = K.
Hence the condition for material symmetry is (using the definition of an orthogonal transformation, A^(-1) = A^T) K = A^T·K·A.
Orthogonal transformations can be represented in Carte |
https://en.wikipedia.org/wiki/Matt%20Nagle | Matthew Nagle (October 16, 1979 – July 24, 2007) was the first person to use a brain–computer interface to restore functionality lost due to paralysis. He was a C3 tetraplegic, paralyzed from the neck down after being stabbed.
Biography
Nagle attended Weymouth High School (Class of 1998). He was an exceptional athlete and a star football player. In 2001, he sustained a stabbing injury while leaving the town’s annual fireworks show near Wessagussett Beach on July 3. He was stabbed and his spinal cord severed when he stepped in to help a friend.
Nagle died on July 24, 2007, in Stoughton, Massachusetts, from sepsis.
BrainGate Clinical Trial
Nagle agreed to participate in a clinical trial involving the BrainGate Neural Interface System (developed by Cyberkinetics) out of a desire to again be healthy and lead a normal life, and in hopes that modern medical discoveries could help him. He also hoped that his participation in this Clinical Trial would help improve the lives of people who, like him, suffered injuries or diseases that cause severe motor disabilities.
The device was implanted on June 22, 2004, by neurosurgeon Gerhard Friehs. A 96-electrode "Utah Array" was placed on the surface of his brain over the region of motor cortex that controlled his dominant left hand and arm. A link connected it to the outside of his skull, where it could be connected to a computer. The computer was then trained to recognize Nagle's thought patterns and associate them with movements he was trying to achieve.
While he was implanted, Nagle could control a computer mouse cursor, using it to press on-screen buttons to control a TV, check e-mail, and do essentially anything that can be done by pressing buttons. He could also draw on the screen (although the cursor control was not precise) and send open and close commands to an external prosthetic hand. The results of the study were published in the journal Nature. Per Food and Drug Administration (FDA) regulations and the stud |
https://en.wikipedia.org/wiki/Ichthyornis | Ichthyornis (meaning "fish bird", after its fish-like vertebrae) is an extinct genus of toothy seabird-like ornithuran from the late Cretaceous period of North America. Its fossil remains are known from the chalks of Alberta, Alabama, Kansas (Greenhorn Limestone), New Mexico, Saskatchewan, and Texas, in strata that were laid down in the Western Interior Seaway during the Turonian through Campanian ages, about 95–83.5 million years ago. Ichthyornis is a common component of the Niobrara Formation fauna, and numerous specimens have been found.
Ichthyornis has been historically important in shedding light on bird evolution. It was the first known prehistoric bird relative preserved with teeth, and Charles Darwin noted its significance during the early years of the theory of evolution. Ichthyornis remains important today as it is one of the few Mesozoic era ornithurans known from more than a few specimens.
Description
It is thought that Ichthyornis was the Cretaceous ecological equivalent of modern seabirds such as gulls, petrels, and skimmers. An average specimen was about the size of a pigeon, though there is considerable size variation among known specimens, with some smaller and some much larger than the type specimen of I. dispar.
Ichthyornis is notable primarily for its combination of vertebrae which are concave both in front and back (similar to some fish, which is where it gets its name) and several more subtle features of its skeleton which set it apart from its close relatives. Ichthyornis is perhaps most well known for its teeth. The teeth were present only in the middle portion of the upper and lower jaws. The jaw tips had no teeth and were covered in a beak. The beak of Ichthyornis, like that of the hesperornithids, was compound and made up of several distinct plates, similar to the beak of an albatross, rather than a single sheet of keratin as in most modern birds. The |
https://en.wikipedia.org/wiki/CAST%20tool | CAST tools are software applications used in the process of software testing. The acronym stands for "Computer Aided Software Testing". Such tools are available from various vendors and there are different tools for different types of testing, as well as for test management. They are known to be cost-effective and time-saving because they reduce the incidence of human error and are thorough.
External links
Cast Tools: some examples.
Software testing tools |
https://en.wikipedia.org/wiki/Palmette | The palmette is a motif in decorative art which, in its most characteristic expression, resembles the fan-shaped leaves of a palm tree. It has a far-reaching history, originating in ancient Egypt with a subsequent development through the art of most of Eurasia, often in forms that bear relatively little resemblance to the original. In ancient Greek and Roman uses it is also known as the anthemion (from the Greek ανθέμιον, a flower). It is found in most artistic media, but especially as an architectural ornament, whether carved or painted, and painted on ceramics. It is very often a component of the design of a frieze or border. The complex evolution of the palmette was first traced by Alois Riegl in his Stilfragen of 1893. The half-palmette, bisected vertically, is also a very common motif, found in many mutated and vestigial forms, and especially important in the development of plant-based scroll ornament.
Description
The essence of the palmette is a symmetrical group of spreading "fronds" that spread out from a single base, normally widening as they go out, before ending at a rounded or fairly blunt pointed tip. There may be a central frond that is larger than the rest. The number of fronds is variable, but typically between five and about fifteen.
In the repeated border design commonly referred to as anthemion the palm fronds more closely resemble petals of the honeysuckle flower, as if designed to attract fertilizing insects. Some compare the shape to an open hamsa hand – explaining the commonality and derivation of the 'palm' of the hand.
In some forms of the motif the volutes or scrolls resemble a pair of eyes, like those on the harmika of the Tibetan or Nepalese stupa and the eyes and sun-disk at the crown of Egyptian stelae.
In some variants the features of a more fully developed face become discernible in the palmette itself, while in certain architectural uses, usually at the head of pilasters or herms, the fan of palm-fronds transforms into a male o |
https://en.wikipedia.org/wiki/IBM%20Lightweight%20Third-Party%20Authentication | Lightweight Third-Party Authentication (LTPA), is an authentication technology used in IBM WebSphere and Lotus Domino products. When accessing web servers that use the LTPA technology it is possible for a web user to re-use their login across physical servers.
A Lotus Domino server or an IBM WebSphere server that is configured to use the LTPA authentication will challenge the web user for a name and password. When the user has been authenticated, their browser will have received a session cookie - a cookie that is only available for one browsing session. This cookie contains the LTPA token.
If the user – after having received the LTPA token – accesses a server that is a member of the same authentication realm as the first server, and if the browsing session has not been terminated (the browser was not closed down), then the user is automatically authenticated and will not be challenged for a name and password. Such an environment is also called a single sign-on environment.
See also
Access control
List of single sign-on implementations |
https://en.wikipedia.org/wiki/Edward%20Charles%20Titchmarsh | Edward Charles "Ted" Titchmarsh (June 1, 1899 – January 18, 1963) was a leading British mathematician.
Education
Titchmarsh was educated at King Edward VII School (Sheffield) and Balliol College, Oxford, where he began his studies in October 1917.
Career
Titchmarsh was known for work in analytic number theory, Fourier analysis and other parts of mathematical analysis. He wrote several classic books in these areas; his book on the Riemann zeta-function was reissued in an edition edited by Roger Heath-Brown.
Titchmarsh was Savilian Professor of Geometry at the University of Oxford from 1932 to 1963. He was a Plenary Speaker at the ICM in 1954 in Amsterdam.
He was on the governing body of Abingdon School from 1935 to 1947.
Awards
Fellow of the Royal Society, 1931
De Morgan Medal, 1953
Sylvester Medal, 1955
Berwick Prize winner, 1956
Publications
The Zeta-Function of Riemann (1930);
Introduction to the Theory of Fourier Integrals (1937); 2nd edition (1948);
The Theory of Functions (1932); 2nd edition (1939);
Mathematics for the General Reader (1948);
The Theory of the Riemann Zeta-Function (1951); 2nd edition, revised by D. R. Heath-Brown (1986)
Eigenfunction Expansions Associated with Second-order Differential Equations. Part I (1946) 2nd. edition (1962);
Eigenfunction Expansions Associated with Second-order Differential Equations. Part II (1958); |
https://en.wikipedia.org/wiki/Equivalence%20partitioning | Equivalence partitioning or equivalence class partitioning (ECP) is a software testing technique that divides the input data of a software unit into partitions of equivalent data from which test cases can be derived. In principle, test cases are designed to cover each partition at least once. This technique tries to define test cases that uncover classes of errors, thereby reducing the total number of test cases that must be developed. An advantage of this approach is a reduction in the time required for testing software, owing to the smaller number of test cases.
Equivalence partitioning is typically applied to the inputs of a tested component, but may be applied to the outputs in rare cases. The equivalence partitions are usually derived from the requirements specification for input attributes that influence the processing of the test object.
The fundamental concept of ECP comes from equivalence class which in turn comes from equivalence relation.
A software system is in effect a computable function implemented as an algorithm in some implementation programming language.
Given an input test vector, some instructions of that algorithm get covered (see code coverage for details), others do not.
This gives an interesting relationship between input test vectors: two test vectors are equivalent if and only if their coverage footprints are exactly the same, that is, they cover the same instructions at the same step.
This relation evidently partitions the domain of the test vectors into multiple equivalence classes. This partitioning is called equivalence class partitioning of the test input.
If there are N equivalence classes, only N vectors are sufficient to fully cover the system.
The demonstration can be done using a function written in C:
#include <stdio.h>
int safe_add( int a, int b )
{
    int c = a + b;
    /* detect wrap-around of the two's-complement addition */
    if ( a > 0 && b > 0 && c <= 0 )
    {
        fprintf ( stderr, "Overflow (positive)!\n" );
    }
    if ( a < 0 && b < 0 && c >= 0 )
    {
        fprintf ( stderr, "Overflow (negative)!\n" );
    }
    return c;
}
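For 32-bit integer inputs this function induces three natural equivalence classes: pairs whose true sum overflows positively, pairs whose true sum overflows negatively, and pairs that add without overflow. One test vector from each class, for example (INT_MAX, 1), (INT_MIN, -1) and (1, 1), is then sufficient to cover all three behaviours. |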
https://en.wikipedia.org/wiki/Boundary-value%20analysis | Boundary-value analysis is a software testing technique in which tests are designed to include representatives of boundary values in a range. The idea comes from the boundaries between equivalence partitions. Given that we have a set of test vectors to test the system, a topology can be defined on that set. Those inputs which belong to the same equivalence class as defined by the equivalence partitioning theory would constitute the basis. Given that the basis sets are neighbors, there would exist a boundary between them. The test vectors on either side of the boundary are called boundary values. In practice this would require that the test vectors can be ordered, and that the individual parameters follow some kind of order (either partial order or total order).
Formal definition
Formally the boundary values can be defined as below:
Let the set of the test vectors be X1, …, Xn.
Let's assume that there is an ordering relation defined over them, denoted ≤.
Let C1, C2 be two equivalence classes.
Assume that test vector X1 ∈ C1 and test vector X2 ∈ C2.
If X1 ≤ X2 or X2 ≤ X1 then the classes C1, C2 are in the same neighborhood and the values X1, X2 are boundary values.
In plainer English, values on the minimum and maximum edges of an equivalence partition are tested. The values could be input or output ranges of a software component; they can also be ranges in the internal implementation. Since these boundaries are common locations for errors that result in software faults, they are frequently exercised in test cases.
Application
The expected input and output values to the software component should be extracted from the component specification. The values are then grouped into sets with identifiable boundaries. Each set, or partition, contains values that are expected to be processed by the component in the same way. Partitioning of test data ranges is explained in the equivalence partitioning test case design technique. It is important to consider both valid and invalid partitions when designing test cases.
The demonstration can be done using a function written in Java.
class Safe
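{
    // Hedged completion of the example truncated in the source: a Java
    // analogue of the C safe_add function shown for equivalence partitioning.
    static int safeAdd( int a, int b )
    {
        int c = a + b;   // int addition wraps around on overflow in Java
        if ( a > 0 && b > 0 && c <= 0 )
            System.err.println( "Overflow (positive)!" );
        if ( a < 0 && b < 0 && c >= 0 )
            System.err.println( "Overflow (negative)!" );
        // Boundary-value vectors probe the partition edges, e.g.
        // (Integer.MAX_VALUE, 1) just beyond the overflow boundary and
        // (Integer.MAX_VALUE - 1, 1) just inside it.
        return c;
    }
} |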
https://en.wikipedia.org/wiki/Indentation%20hardness | Indentation hardness tests are used in mechanical engineering to determine the hardness of a material to deformation. Several such tests exist, wherein the examined material is indented until an impression is formed; these tests can be performed on a macroscopic or microscopic scale.
When testing metals, indentation hardness correlates roughly linearly with tensile strength, but it is an imperfect correlation often limited to small ranges of strength and hardness for each indentation geometry. This relation permits economically important nondestructive testing of bulk metal deliveries with lightweight, even portable equipment, such as hand-held Rockwell hardness testers.
Material hardness
Different techniques are used to quantify material characteristics at smaller scales. Measuring mechanical properties for materials, for instance, of thin films, cannot be done using conventional uniaxial tensile testing. As a result, techniques testing material "hardness" by indenting a material with a very small impression have been developed to attempt to estimate these properties.
Hardness measurements quantify the resistance of a material to plastic deformation. Indentation hardness tests compose the majority of processes used to determine material hardness, and can be divided into three classes: macro, micro and nanoindentation tests. Microindentation tests typically have forces less than 2 N. Hardness, however, cannot be considered to be a fundamental material property. Classical hardness testing usually creates a number which can be used to provide a relative idea of material properties. As such, hardness can only offer a comparative idea of the material's resistance to plastic deformation, since different hardness techniques have different scales.
The equation-based definition of hardness is the pressure applied over the contact area between the indenter and the material being tested. As a result, hardness values are typically reported in units of pressure, although this
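A minimal sketch of that pressure definition, with illustrative load and contact-area values that are assumptions rather than figures from the article:

#include <stdio.h>

int main(void) {
    double load_N   = 9.81;    /* applied indenter load, N (about 1 kgf; assumed) */
    double area_mm2 = 0.005;   /* measured contact area of the impression, mm^2 (assumed) */
    double hardness_MPa = load_N / area_mm2;   /* N/mm^2 is numerically MPa */
    printf("indentation hardness: %.0f MPa\n", hardness_MPa);
    return 0;
} |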
https://en.wikipedia.org/wiki/Interference%20fit | An interference fit, also known as a pressed fit or friction fit, is a form of fastening between two tightfitting mating parts that produces a joint which is held together by friction after the parts are pushed together.
Depending on the amount of interference, parts may be joined using a tap from a hammer or forced together using a hydraulic press. Critical components that must not sustain damage during joining may also be cooled significantly below room temperature to shrink one of the components before fitting. This method allows the components to be joined without force and produces a shrink fit interference when the component returns to normal temperature. Interference fits are commonly used with aircraft fasteners to improve the fatigue life of a joint.
These fits, though applicable to shaft and hole assembly, are more often used for bearing-housing or bearing-shaft assembly. This is referred to as a 'press-in' mounting.
Tightness of fit
The tightness of fit is controlled by amount of interference; the allowance (planned difference from nominal size). Formulas exist to compute allowance that will result in various strengths of fit such as loose fit, light interference fit, and interference fit. The value of the allowance depends on which material is being used, how big the parts are, and what degree of tightness is desired. Such values have already been worked out in the past for many standard applications, and they are available to engineers in the form of tables, obviating the need for re-derivation.
As an example, a shaft made of 303 stainless steel will form a tight fit with allowance of . A slip fit can be formed when the bore diameter is wider than the rod, or if the rod is made 12–20 μm under the given bore diameter.
The allowance per inch of diameter usually ranges from 0.001 to 0.0025 in (0.1–0.25%), with 0.0015 in (0.15%) being a fair average. Ordinarily the allowance per inch decreases as the diameter increases; thus the total allowance for a diameter of
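A small sketch of that percent-of-diameter rule; the nominal diameter is an illustrative assumption:

#include <stdio.h>

int main(void) {
    double diameter_in = 2.0;               /* nominal shaft diameter, in (assumed) */
    double low  = 0.0010 * diameter_in;     /* 0.1% of diameter */
    double avg  = 0.0015 * diameter_in;     /* 0.15%, the quoted fair average */
    double high = 0.0025 * diameter_in;     /* 0.25% of diameter */
    printf("interference allowance: %.4f to %.4f in (average %.4f in)\n", low, high, avg);
    return 0;
} |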
https://en.wikipedia.org/wiki/Tao%20Ho | Ho Tao (17 July 1936 – 29 March 2019) was a Hong Kong architect born in Shanghai. He was the designer of the Bauhinia emblem, and also of the flag of the Hong Kong Special Administrative Region.
Background
Born in Shanghai in 1936 to Ping Yin Ho & Chin Hwa, Ho grew up with elder brother Chien and younger sister Diana. He graduated from Pui Ching Middle School and went on to receive a BA in art history with a minor in music and theology at Williams College in Massachusetts. Then, he studied for his MArch at Harvard's Graduate School of Design under the tutelage of Josep Lluís Sert, Sigfried Giedion, and Walter Gropius, the latter of whom hired Ho upon graduation as his personal assistant at The Architects Collaborative.
In 1968, four years after Ho returned to Hong Kong, he founded TaoHo Design Architects. Early projects include the Hong Kong International School and the Hong Kong Arts Centre, whose designs heralded the arrival of the Bauhaus to Hong Kong. Within his broad portfolio of works are the Hong Kong Pavilion at the 1986 World Expo in Vancouver, the renovation of Hong Kong's Governor House for Lord Chris Patten, the award-winning Wing Kwong Pentecostal Church, the first panda pavilion at Ocean Park and the revitalization of the Western Market in Sheung Wan. Tao also designed Hong Kong SAR's Bauhinia emblem, its flag and the ceremonial pen used at the handover signing ceremony in 1997.
An advocate of heritage conservation and cultural development, Ho conceived of an arts district for the city that is now the West Kowloon Cultural District. He enjoyed sharing his views on “TaoHo on Music” (樂論滔滔在何弢) for RTHK in the 1980s. In the 1990s, he was one of the four regular hosts of “Free As The Wind” (講東講西), discussing his views on cultural and social issues.
He served as president of the Hong Kong Institute of Architects from 1994 to 1998. As president, he criticised the proposed controversial design for the new Hong Kong Central Library. This was controversial |
https://en.wikipedia.org/wiki/Fuzzing | In programming and software development, fuzzing or fuzz testing is an automated software testing technique that involves providing invalid, unexpected, or random data as inputs to a computer program. The program is then monitored for exceptions such as crashes, failing built-in code assertions, or potential memory leaks. Typically, fuzzers are used to test programs that take structured inputs. This structure is specified, e.g., in a file format or protocol and distinguishes valid from invalid input. An effective fuzzer generates semi-valid inputs that are "valid enough" in that they are not directly rejected by the parser, but do create unexpected behaviors deeper in the program and are "invalid enough" to expose corner cases that have not been properly dealt with.
For the purpose of security, input that crosses a trust boundary is often the most useful. For example, it is more important to fuzz code that handles the upload of a file by any user than it is to fuzz the code that parses a configuration file that is accessible only to a privileged user.
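A minimal random-input harness in the spirit of the technique (all names hypothetical; a real fuzzer would drive an actual parser or program under test):

#include <stdio.h>
#include <stdlib.h>

/* hypothetical stand-in for the program or parser being fuzzed */
static int parse_input(const unsigned char *buf, size_t len) {
    return (len > 0 && buf[0] == 0xFF) ? -1 : 0;   /* toy failure condition */
}

int main(void) {
    unsigned char buf[256];
    srand(1);
    for (int trial = 0; trial < 1000; trial++) {
        /* generate a buffer of random length filled with random bytes */
        size_t len = (size_t)(rand() % (int)sizeof buf);
        for (size_t i = 0; i < len; i++)
            buf[i] = (unsigned char)(rand() % 256);
        if (parse_input(buf, len) != 0)
            fprintf(stderr, "trial %d: failure on random input\n", trial);
    }
    return 0;
}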
History
The term "fuzz" originates from a fall 1988 class project in the graduate Advanced Operating Systems class (CS736), taught by Prof. Barton Miller at the University of Wisconsin, whose results were subsequently published in 1990. To fuzz test a UNIX utility meant to automatically generate random input and command-line parameters for the utility. The project was designed to test the reliability of UNIX command line programs by executing a large number of random inputs in quick succession until they crashed. Miller's team was able to crash 25 to 33 percent of the utilities that they tested. They then debugged each of the crashes to determine the cause and categorized each detected failure. To allow other researchers to conduct similar experiments with other software, the source code of the tools, the test procedures, and the raw result data were made publicly available. This early fuzzing would now be called b |
https://en.wikipedia.org/wiki/Jaakko%20Hintikka | Kaarlo Jaakko Juhani Hintikka (12 January 1929 – 12 August 2015) was a Finnish philosopher and logician. Hintikka is regarded as the founder of formal epistemic logic and of game semantics for logic.
Life and career
Hintikka was born in Helsingin maalaiskunta (now Vantaa).
In 1953, he received his doctorate from the University of Helsinki for a thesis entitled Distributive Normal Forms in the Calculus of Predicates. He was a student of Georg Henrik von Wright.
Hintikka was a Junior Fellow at Harvard University (1956–1959), and held several professorial appointments at the University of Helsinki, the Academy of Finland, Stanford University, Florida State University and finally Boston University from 1990 until his death. The prolific author or co-author of over 30 books and over 300 scholarly articles, Hintikka contributed to mathematical logic, philosophical logic, the philosophy of mathematics, epistemology, language theory, and the philosophy of science. His works have appeared in over nine languages.
Hintikka edited the academic journal Synthese from 1962 to 2002, and was a consultant editor for more than ten journals. He was the first vice-president of the Fédération Internationale des Sociétés de Philosophie, the vice-president of the Institut International de Philosophie (1993–1996), as well as a member of the American Philosophical Association, the International Union of History and Philosophy of Science, Association for Symbolic Logic, and a member of the governing board of the Philosophy of Science Association. In 2005, he won the Rolf Schock Prize in logic and philosophy "for his pioneering contributions to the logical analysis of modal concepts, in particular the concepts of knowledge and belief". In 1985, he was president of the Florida Philosophical Association.
He was a member of the Norwegian Academy of Science and Letters. On May 26, 2000, Hintikka received an honorary doctorate from the Faculty of History and Philosophy at Uppsala Univ |
https://en.wikipedia.org/wiki/Aerodynamic%20heating | Aerodynamic heating is the heating of a solid body produced by its high-speed passage through air. In science and engineering, an understanding of aerodynamic heating is necessary for predicting the behaviour of meteoroids which enter the earth's atmosphere, to ensure spacecraft safely survive atmospheric reentry, and for the design of high-speed aircraft and missiles.
Aircraft
The effects of aerodynamic heating on the temperature of the skin, and subsequent heat transfer into the structure, the cabin, the equipment bays and the electrical, hydraulic and fuel systems, have to be incorporated in the design of supersonic and hypersonic aircraft and missiles.
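As a hedged illustration of why skin temperature becomes a design driver, the standard compressible-flow relation for stagnation temperature (not quoted in the text above) can be evaluated for a few Mach numbers:

#include <stdio.h>

int main(void) {
    double T = 216.7;      /* ambient temperature at high altitude, K (assumed) */
    double gamma = 1.4;    /* ratio of specific heats for air */
    /* T0 = T * (1 + (gamma - 1)/2 * M^2): air brought to rest against the skin */
    for (double M = 1.0; M <= 3.0; M += 1.0) {
        double T0 = T * (1.0 + 0.5 * (gamma - 1.0) * M * M);
        printf("Mach %.0f: stagnation temperature about %.0f K\n", M, T0);
    }
    return 0;
}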
One of the main concerns caused by aerodynamic heating arises in the design of the wing. For subsonic speeds, two main goals of wing design are minimizing weight and maximizing strength. Aerodynamic heating, which occurs at supersonic and hypersonic speeds, adds an additional consideration in wing structure analysis. An idealized wing structure is made up of spars, stringers, and skin segments. In a wing that normally experiences subsonic speeds, there must be a sufficient number of stringers to withstand the axial and bending stresses induced by the lift force acting on the wing. In addition, the distance between the stringers must be small enough that the skin panels do not buckle, and the panels must be thick enough to withstand the shear stress and shear flow present in the panels due to the lifting force on the wing. However, the weight of the wing must be made as small as possible, so the choice of material for the stringers and the skin is an important factor.
At supersonic speeds, aerodynamic heating adds another element to this structural analysis. At normal speeds, spars and stringers experience a load called Delta P, which is a function of the lift force, first and second moments of inertia, and length of the spar. When there are more spars and stringers, the Delta P in each member is reduced, and th |
https://en.wikipedia.org/wiki/Particle%20acceleration | In a compressible sound transmission medium (mainly air), air particles undergo an accelerated motion: the particle acceleration or sound acceleration, with the symbol a, in metres per second squared (m/s²). In acoustics or physics, acceleration (symbol: a) is defined as the rate of change (or time derivative) of velocity. It is thus a vector quantity with dimension length/time². In SI units, this is m/s².
To accelerate an object (air particle) is to change its velocity over a period. Acceleration is defined technically as "the rate of change of velocity of an object with respect to time" and is given by the equation
a = dv/dt
where
a is the acceleration vector,
v is the velocity vector expressed in m/s,
t is the time expressed in seconds.
This equation gives a the units of m/(s·s), or m/s² (read as "metres per second per second", or "metres per second squared").
An alternative equation is:
ā = Δv/Δt = (v1 − v0)/Δt
where
ā is the average acceleration (m/s²)
v0 is the initial velocity (m/s)
v1 is the final velocity (m/s)
Δt is the time interval (s)
Transverse acceleration (perpendicular to velocity) causes change in direction. If it is constant in magnitude and changing in direction with the velocity, we get a circular motion. For this centripetal acceleration we have a = v²/r, where r is the radius of the circular path.
One common unit of acceleration is g-force, one g being the acceleration caused by the gravity of Earth.
In classical mechanics, acceleration is related to force F and mass m (assumed to be constant) by way of Newton's second law: F = m·a, that is, a = F/m.
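A short numerical check of the relations above, with illustrative values:

#include <stdio.h>

int main(void) {
    double v0 = 0.0, v1 = 10.0, dt = 2.0;   /* m/s, m/s, s */
    printf("average acceleration: %.1f m/s^2\n", (v1 - v0) / dt);

    double v = 5.0, r = 2.0;                /* m/s, m */
    printf("centripetal acceleration: %.1f m/s^2\n", v * v / r);

    double F = 9.81, m = 1.0;               /* N, kg: one g on one kilogram */
    printf("a = F/m: %.2f m/s^2\n", F / m);
    return 0;
}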
Equations in terms of other measurements
The particle acceleration of the air particles a in m/s² of a plane sound wave is a = ξ·ω² = v·ω, where ξ is the particle displacement, v the particle velocity, and ω = 2πf the angular frequency.
See also
Sound
Sound particle
Particle displacement
Particle velocity
External links
Relationships of acoustic quantities associated with a plane progressive acoustic sound wave - pdf
Acoustics |
https://en.wikipedia.org/wiki/Tristan%20Louis | Tristan Louis (born February 28, 1971) is a French-born American author, entrepreneur and internet activist.
Early work
Louis was born in Digne-les-Bains, Alpes-de-Haute-Provence. In 1994 and 1995, as publisher of iWorld, part of the Mecklermedia group of Internet online media companies, he first became involved in online politics on Usenet, particularly the newsgroup alt.internet.media-coverage, during debate over the Communications Decency Act and activism against it. In a joint effort with the EFF and the Voters Telecommunications Watch, iWorld and Mecklermedia publicly endorsed a national day of protest; turning the background of web pages around the world to black. The protest received national news coverage and was a catalyst in the planning for a lawsuit (Reno v. American Civil Liberties Union) which went to the United States Supreme Court and reaffirmed First Amendment protection for Internet publishers.
After leaving iWorld, Louis contributed to many publications as a freelance writer, including a popular line of introductions to the internet, and helped co-found several start-ups, including Earthweb and Net Quotient, a consulting group. At Earthweb, Louis reprised his role of editor, hoping to reproduce the early success of iWorld and helping launch the company on the stock market.
From 1999 to early 2000, Louis joined the short-lived dot-com Boo.com; when the company failed, he wrote a detailed analysis of the challenges the company had faced; offering some context in terms of running large scale websites, which was circulated widely.
In January 2006, Louis participated in Microsoft Search Champs v4 in Seattle.
In 2011, Louis returned to startups, launching Keepskor, a branded app company which was acquired in 2014.
Since 2017, Louis serves as president and CEO of Casebook PBC, an organization focused on building a SaaS platform for social services.
Wall Street career
Throughout the 2000s, Louis worked in several roles on Wall Street, most notabl |
https://en.wikipedia.org/wiki/CRC-based%20framing | CRC-based framing is a kind of frame synchronization used in Asynchronous Transfer Mode (ATM) and other similar protocols.
The concept of CRC-based framing was developed by StrataCom, Inc. in order to improve the efficiency of a pre-standard Asynchronous Transfer Mode (ATM) link protocol. This technology was ultimately used in the principal link protocols of ATM itself and was one of the most significant developments of StrataCom. An advanced version of CRC-based framing was used in the ITU-T SG15 G.7041 Generic Framing Procedure (GFP), which itself is used in several packet link protocols.
Overview of CRC-based framing
The method of CRC-based framing re-uses the header cyclic redundancy check (CRC), which is present in ATM and other similar protocols, to provide framing on the link with no additional overhead. In ATM, this field is known as the Header Error Control/Check (HEC) field. It consists of the remainder of the division of the 32 bits of the header (taken as the coefficients of a polynomial over the field with two elements) by the polynomial x⁸ + x² + x + 1. The pattern 01010101 is XORed with the 8-bit remainder before being inserted in the last octet of the header.
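A compact sketch of that HEC computation: a bitwise CRC-8 over the four header octets using the generator x⁸ + x² + x + 1 (written 0x07 with the top bit implicit), followed by the XOR with 01010101 (0x55). The header bytes are illustrative:

#include <stdio.h>
#include <stdint.h>

/* CRC-8 of the 4 header octets, MSB first, generator x^8 + x^2 + x + 1 */
static uint8_t atm_hec(const uint8_t header[4]) {
    uint8_t crc = 0;
    for (int i = 0; i < 4; i++) {
        crc ^= header[i];
        for (int bit = 0; bit < 8; bit++)
            crc = (crc & 0x80) ? (uint8_t)((crc << 1) ^ 0x07) : (uint8_t)(crc << 1);
    }
    return crc ^ 0x55;   /* XOR with the pattern 01010101 */
}

int main(void) {
    uint8_t header[4] = {0x00, 0x00, 0x00, 0x01};   /* example VPI/VCI fields */
    printf("HEC = 0x%02X\n", atm_hec(header));
    return 0;
}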
Constantly checked as data is transmitted, this scheme is able to correct single-bit errors and detect many multiple-bit errors.
For a tutorial and an example of computing the CRC see mathematics of cyclic redundancy checks.
The header CRC/HEC is needed for another purpose within an ATM system: to improve the robustness of cell delivery. Using this same CRC/HEC field for the second purpose of link framing provided a significant improvement in link efficiency over other methods of framing, because no additional bits were required for this second purpose.
A receiver utilizing CRC-based framing bit-shifts along the received bit stream until it finds a bit position at which the header CRC checks correctly a number of times in succession. The receiver then declares that it has found the frame. A hysteresis function is appli |
https://en.wikipedia.org/wiki/Cathepsin | Cathepsins (Ancient Greek kata- "down" and hepsein "boil"; abbreviated CTS) are proteases (enzymes that degrade proteins) found in all animals as well as other organisms. There are approximately a dozen members of this family, which are distinguished by their structure, catalytic mechanism, and which proteins they cleave. Most of the members become activated at the low pH found in lysosomes. Thus, the activity of this family lies almost entirely within those organelles. There are, however, exceptions such as cathepsin K, which works extracellularly after secretion by osteoclasts in bone resorption. Cathepsins have a vital role in mammalian cellular turnover.
Classification
Cathepsin A (serine protease)
Cathepsin B (cysteine protease)
Cathepsin C (cysteine protease)
Cathepsin D (aspartyl protease)
Cathepsin E (aspartyl protease)
Cathepsin F (cysteine proteinase)
Cathepsin G (serine protease)
Cathepsin H (cysteine protease)
Cathepsin K (cysteine protease)
Cathepsin L1 (cysteine protease)
Cathepsin L2 (or V) (cysteine protease)
Cathepsin O (cysteine protease)
Cathepsin S (cysteine protease)
Cathepsin W (cysteine proteinase)
Cathepsin Z (or X) (cysteine protease)
Clinical significance
Cathepsins are involved in many physiological processes and have been implicated in a number of human diseases. The cysteine cathepsins have attracted significant research effort as drug targets.
Cancer: Cathepsin D is a mitogen and "it attenuates the anti-tumor immune response of decaying chemokines to inhibit the function of dendritic cells". Cathepsins B and L are involved in matrix degradation and cell invasion.
Stroke
Traumatic brain injury
Alzheimer's disease
Arthritis
Ebola: Cathepsin B and, to a lesser extent, cathepsin L have been found to be necessary for the virus to enter host cells.
COPD
Chronic periodontitis
Pancreatitis
Several ocular disorders: keratoconus, retinal detachment, age-related macular degeneration, and glaucoma.
Cathepsin A
Deficienc |
https://en.wikipedia.org/wiki/Ap%C3%A9ry%27s%20theorem | In mathematics, Apéry's theorem is a result in number theory that states that Apéry's constant ζ(3) is irrational. That is, the number
ζ(3) = 1/1³ + 1/2³ + 1/3³ + ⋯ = 1.2020569…
cannot be written as a fraction p/q where p and q are integers. The theorem is named after Roger Apéry.
The special values of the Riemann zeta function at the even integers 2n can be shown in terms of Bernoulli numbers to be irrational, while it remains open whether the function's values are in general rational or not at the odd integers 2n + 1 (though they are conjectured to be irrational).
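A quick numerical illustration of the constant in question, summing the series to a million terms:

#include <stdio.h>

int main(void) {
    double sum = 0.0;
    for (long n = 1; n <= 1000000; n++)
        sum += 1.0 / ((double)n * n * n);   /* partial sums of 1/n^3 */
    printf("zeta(3) is approximately %.10f\n", sum);   /* tends to 1.2020569032... */
    return 0;
}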
History
Leonhard Euler proved that if n is a positive integer then
1/1^(2n) + 1/2^(2n) + 1/3^(2n) + ⋯ = q_n π^(2n)
for some rational number q_n. Specifically, writing the infinite series on the left as ζ(2n), he showed
ζ(2n) = (−1)^(n+1) B_(2n) (2π)^(2n) / (2 (2n)!)
where the B_(2n) are the rational Bernoulli numbers. Once it was proved that π^(2n) is always irrational, this showed that ζ(2n) is irrational for all positive integers n.
No such representation in terms of π is known for the so-called zeta constants for odd arguments, the values ζ(2n + 1) for positive integers n. It has been conjectured that the ratios of these quantities
ζ(2n + 1) / π^(2n+1)
are transcendental for every integer n ≥ 1.
Because of this, no proof could be found to show that the zeta constants with odd arguments were irrational, even though they were (and still are) all believed to be transcendental. However, in June 1978, Roger Apéry gave a talk titled "Sur l'irrationalité de ζ(3)." During the course of the talk he outlined proofs that and were irrational, the latter using methods simplified from those used to tackle the former rather than relying on the expression in terms of π. Due to the wholly unexpected nature of the proof and Apéry's blasé and very sketchy approach to the subject, many of the mathematicians in the audience dismissed the proof as flawed. However Henri Cohen, Hendrik Lenstra, and Alfred van der Poorten suspected Apéry was on to something and set out to confirm his proof. Two months later they finished verification of Apéry's proof, and on August 18 Cohen delivered a lecture giving full details of th |
https://en.wikipedia.org/wiki/Bay%20Networks | Bay Networks, Inc., was a network hardware vendor formed through the merger of Santa Clara, California, based SynOptics Communications and Billerica, Massachusetts based Wellfleet Communications on July 6, 1994. SynOptics was an important early innovator of Ethernet products, having developed a pre-standard twisted pair 10Mbit/s Ethernet product and a modular Ethernet hub product that dominated the enterprise networking market. Wellfleet was an important competitor to Cisco Systems in the router market, ultimately commanding up to a 20% market share of the network router business worldwide. The combined company was renamed Bay Networks as a nod to the legacy that SynOptics was based in the San Francisco area and Wellfleet was based in the Boston area, two cities well known for their bays.
Acquisitions
Bay Networks expanded its product line both through internal development and acquisition, acquiring the following companies during the course of its existence:
Centillion Networks, Inc. (May, 1995) - Provided Asynchronous Transfer Mode switching and Token Ring technology.
Xylogics, Inc. (December, 1995) - Remote access technologies.
Performance Technology (March, 1996) - LAN-to-WAN access technology.
ARMON Networking, Ltd. (April, 1996) - RMON and RMON2 network management technology.
LANcity Corporation (October, 1996) - Cable modem technology.
Penril Datability Networks (November, 1996) - Dial-up modems and remote access products based on Digital Signal Processing technology.
NetICs, Inc. (December, 1996) - ASIC-based Fast Ethernet switching technology.
ISOTRO Network Management, Inc. (April, 1997) - DNS and DHCP technologies.
Rapid City Communications (June, 1997) - Gigabit Ethernet switching and routing technology.
New Oak Communications (January, 1998) - Provided VPN technology to Bay Networks product line.
Netsation Corp. (February, 1998) - Technology was used to augment Bay Networks Optivity network management system.
NetServe GmbH (July, 1998) - Vo |
https://en.wikipedia.org/wiki/SynOptics | SynOptics Communications was a Santa Clara, California-based early computer network equipment vendor from 1985 until 1994. SynOptics popularized the concept of the modular Ethernet hub and high-speed Ethernet networking over copper twisted-pair and fiber optic cables.
History
SynOptics Communications was founded in 1985 by Andrew K. Ludwick and Ronald V. Schmidt, both of whom worked at Xerox's Palo Alto Research Center (PARC).
The most significant product that SynOptics produced was LattisNet (originally named AstraNet) in 1987, which carried 10 Mbit/s Ethernet over ordinary unshielded twisted-pair wiring arranged in a star topology.
This meant that unshielded twisted-pair cabling already installed in office buildings could be re-utilized for computer networking instead of special coaxial cables.
The star network topology made the network much easier to manage and maintain. Together these two innovations directly led to the ubiquity of Ethernet networks.
Before the final standard version of what is known today as the 10BASE-T protocol, there were several different methods and standards for running Ethernet over twisted-pair cabling at various speeds, such as StarLAN. LattisNet was similar to the final 10BASE-T protocol except that it had slightly different voltage and signal characteristics. Synoptics updated their product line to the 10BASE-T specification once it was published.
Through the late 1980s and into the early 1990s, SynOptics produced a series of innovative products including early 10BASE-2 hubs, pre-standard (LattisNet), and 100BASE-TX products.
The company was the market leader in Ethernet LAN hubs over rivals 3Com and Cabletron.
Despite intense competition that drove down prices, Synoptics' annual revenue grew to a high of $700 million in 1993.
To move away from the rapidly commoditizing Layer 1/2 Ethernet equipment market and grow their market share in the increasingly lucrative and more profitable Layer 3 networking arena, SynOptics merged with Billerica, Massachusetts based Wellfleet Communications on July 6, 1994, in a US$2.7 billion de |
https://en.wikipedia.org/wiki/Perl%20Compatible%20Regular%20Expressions | Perl Compatible Regular Expressions (PCRE) is a library written in C, which implements a regular expression engine, inspired by the capabilities of the Perl programming language. Philip Hazel started writing PCRE in summer 1997. PCRE's syntax is much more powerful and flexible than either of the POSIX regular expression flavors (BRE, ERE) and than that of many other regular-expression libraries.
While PCRE originally aimed at feature-equivalence with Perl, the two implementations are not fully equivalent. During the PCRE 7.x and Perl 5.9.x phase, the two projects coordinated development, with features being ported between them in both directions.
In 2015, a fork of PCRE was released with a revised programming interface (API). The original software, now called PCRE1 (the 1.xx–8.xx series), has had bugs mended but sees no further development; it is considered obsolete, and the current 8.45 release is likely to be the last. The new PCRE2 code (the 10.xx series) has had a number of extensions and coding improvements and is where development takes place.
A number of prominent open-source programs, such as the Apache and Nginx HTTP servers, and the PHP and R scripting languages, incorporate the PCRE library; proprietary software can do likewise, as the library is BSD-licensed. As of Perl 5.10, PCRE is also available as a replacement for Perl's default regular-expression engine through the re::engine::PCRE module.
The library can be built on Unix, Windows, and several other environments. PCRE2 is distributed with a POSIX C wrapper, several test programs, and the utility program pcre2grep that is built in tandem with the library.
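A minimal PCRE2 usage sketch in C (8-bit code units): compile a pattern, run one match, and read the resulting offsets. Error handling is abbreviated:

#define PCRE2_CODE_UNIT_WIDTH 8
#include <pcre2.h>
#include <stdio.h>

int main(void) {
    int errnum;
    PCRE2_SIZE erroff;
    /* compile the pattern "[0-9]+" */
    pcre2_code *re = pcre2_compile((PCRE2_SPTR)"[0-9]+", PCRE2_ZERO_TERMINATED,
                                   0, &errnum, &erroff, NULL);
    if (re == NULL) return 1;
    pcre2_match_data *md = pcre2_match_data_create_from_pattern(re, NULL);
    int rc = pcre2_match(re, (PCRE2_SPTR)"PCRE2 release 10.45",
                         PCRE2_ZERO_TERMINATED, 0, 0, md, NULL);
    if (rc >= 0) {
        PCRE2_SIZE *ov = pcre2_get_ovector_pointer(md);
        printf("matched at offsets %zu to %zu\n", (size_t)ov[0], (size_t)ov[1]);
    }
    pcre2_match_data_free(md);
    pcre2_code_free(re);
    return 0;
}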
Features
Just-in-time compiler support
This optional feature is available if enabled when the PCRE2 library is built. Large performance benefits are possible when (for example) the calling program utilizes the feature with compatible patterns that are executed repeatedly. The just-in-time compiler support was written by Zoltan Herczeg and is |
https://en.wikipedia.org/wiki/WTX%20%28form%20factor%29 | WTX (for Workstation Technology Extended) was a motherboard form factor specification introduced by Intel at the IDF in September 1998, for use in high-end, multiprocessor, multiple-hard-disk servers and workstations. The specification had support from major OEMs (Compaq, Dell, Fujitsu, Gateway, Hewlett-Packard, IBM, Intergraph, NEC, Siemens Nixdorf, and UMAX) and motherboard manufacturers (Acer, Asus, Supermicro and Tyan) and was updated (to version 1.1) in February 1999. The specification has since been discontinued; www.wtx.org no longer hosts a website and has not been owned by Intel since at least 2004.
This form factor was geared specifically towards the needs of high-end systems, and included specifications for a WTX power supply unit (PSU) using two WTX-specific 24-pin and 22-pin Molex connectors.
The WTX specification was created to standardize a new motherboard and chassis form factor, fix the relative processor location, and allow for high volume airflow through a portion of the chassis where the processors are positioned. This allowed for standard form factor motherboards and chassis to be used to integrate processors with more demanding thermal management requirements.
Bigger than ATX, the maximum WTX motherboard size was 14 × 16.75 in (355.6 × 425.4 mm). This was intended to provide more room in order to accommodate higher numbers of integrated components.
WTX computer cases were backwards compatible with ATX motherboards (but not vice versa), and sometimes came equipped with ATX power supplies.
See also
eATX: a version of ATX with a form factor of 12 × 13 in (305 × 330 mm).
SWTX: Server Workstation Technology Extended
External links
Intel's 1998 WTX case definition
WTX Form Factor definition
WTX Power Supply connectors
IBM PC compatibles
Motherboard form factors |
https://en.wikipedia.org/wiki/Percent%20sign | The percent sign (sometimes per cent sign in British English) is the symbol used to indicate a percentage, a number or ratio as a fraction of 100. Related signs include the permille (per thousand) sign and the permyriad (per ten thousand) sign (also known as a basis point), which indicate that a number is divided by one thousand or ten thousand, respectively. Higher proportions use parts-per notation.
Correct style
Form and spacing
English style guides prescribe writing the percent sign following the number without any space between (e.g. 50%). However, the International System of Units and ISO 31-0 standard prescribe a space between the number and percent sign, in line with the general practice of using a non-breaking space between a numerical value and its corresponding unit of measurement.
Other languages have other rules for spacing in front of the percent sign:
In Czech and in Slovak, the percent sign is spaced with a non-breaking space if the number is used as a noun. In Czech, no space is inserted if the number is used as an adjective (e.g. “a 50% increase”), whereas Slovak uses a non-breaking space in this case as well.
In Finnish, the percent sign is always spaced, and a case suffix can be attached to it using the colon (e.g. 50 %:n kasvu 'an increase of 50%').
In French, the percent sign must be spaced with a non-breaking space.
According to the Real Academia Española, in Spanish the percent sign should now be spaced, despite the fact that this is not the established practice. Despite that, in North American Spanish (Mexico and the US), several style guides and institutions either recommend that the percent sign be written following the number without any space or do so in their own publications, in accordance with common usage in that region.
In Russian, the percent sign is rarely spaced, contrary to the guidelines of the GOST 8.417-2002 state standard.
In Chinese, the percent sign is almost never spaced, probably because Chinese does not use sp |
https://en.wikipedia.org/wiki/Etendue | Etendue or étendue (; ) is a property of light in an optical system, which characterizes how "spread out" the light is in area and angle. It corresponds to the beam parameter product (BPP) in Gaussian beam optics. Other names for etendue include acceptance, throughput, light grasp, light-gathering power, optical extent, and the AΩ product. Throughput and AΩ product are especially used in radiometry and radiative transfer where it is related to the view factor (or shape factor). It is a central concept in nonimaging optics.
From the source point of view, etendue is the product of the area of the source and the solid angle that the system's entrance pupil subtends as seen from the source. Equivalently, from the system point of view, the etendue equals the area of the entrance pupil times the solid angle the source subtends as seen from the pupil. These definitions must be applied for infinitesimally small "elements" of area and solid angle, which must then be summed over both the source and the diaphragm as shown below. Etendue may be considered to be a volume in phase space.
Etendue never decreases in any optical system where optical power is conserved. A perfect optical system produces an image with the same etendue as the source. The etendue is related to the Lagrange invariant and the optical invariant, which share the property of being constant in an ideal optical system. The radiance of an optical system is equal to the derivative of the radiant flux with respect to the etendue.
Definition
An infinitesimal surface element dS, with unit normal n̂, is immersed in a medium of refractive index n. The surface is crossed by (or emits) light confined to a solid angle dΩ, at an angle θ with the normal n̂. The area of dS projected in the direction of the light propagation is dS cos θ. The etendue of an infinitesimal bundle of light crossing dS is defined as

$$\mathrm{d}G = n^{2}\,\mathrm{d}S\cos\theta\,\mathrm{d}\Omega.$$
Etendue is the product of geometric extent and the squared refractive index of the medium through which the beam propagates. Because angles, solid angles, and refractive indices are dimensionless quantities, etendue is often expressed in units of area.
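As a numerical illustration of the defining relation above, the following Python sketch evaluates it for a small surface element (the function name and example values are illustrative):

```python
import math

def etendue(n: float, dS: float, theta: float, dOmega: float) -> float:
    """Etendue dG = n^2 * dS * cos(theta) * dOmega for an infinitesimal bundle.

    n      -- refractive index of the medium (dimensionless)
    dS     -- surface element area, m^2
    theta  -- angle between the beam and the surface normal, radians
    dOmega -- solid angle of the bundle, steradians
    """
    return n**2 * dS * math.cos(theta) * dOmega

# A 1 mm^2 source element radiating along its normal into 0.01 sr, in air (n ~ 1):
print(etendue(n=1.0, dS=1e-6, theta=0.0, dOmega=0.01))  # 1e-8 m^2*sr
```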
https://en.wikipedia.org/wiki/Sergei%20Chetverikov | Sergei Sergeevich Chetverikov (; 6 May 1880 – 2 July 1959) was a Russian biologist and one of the early contributors to the development of the field of genetics. His research showed how early genetic theories applied to natural populations, and has therefore contributed towards the modern synthesis of evolutionary theory.
Between the two World Wars, Soviet biological research managed to connect genetics with field research on natural populations. Chetverikov led a team at the Nikolai Koltsov Institute of Experimental Biology in Moscow, and in 1926 produced what should have been one of the landmark papers of the modern synthesis. However, published only in Russian, it was largely ignored in the English-speaking world (though J.B.S. Haldane possessed a translation).
Chetverikov influenced several Russian geneticists who later came to work in the West, such as Theodosius Dobzhansky and Nikolay Timofeev-Ressovsky, both of whom continued to work in a similar style. The significance of Chetverikov's work came to light much later, by which time the evolutionary synthesis was virtually complete.
He was arrested by the OGPU in 1929 and sent into exile in Yekaterinburg for five years. He later moved to Nizhny Novgorod and organized the Department of Genetics at Gorky University. He was dismissed from his post at the behest of Lysenko in 1948.
https://en.wikipedia.org/wiki/Harmonic%20%28mathematics%29 | In mathematics, a number of concepts employ the word harmonic. The similarity of this terminology to that of music is not accidental: the equations of motion of vibrating strings, drums and columns of air are given by formulas involving Laplacians; the solutions to which are given by eigenvalues corresponding to their modes of vibration. Thus, the term "harmonic" is applied when one is considering functions with sinusoidal variations, or solutions of Laplace's equation and related concepts.
Mathematical terms whose names include "harmonic" include:
Projective harmonic conjugate
Cross-ratio
Harmonic analysis
Harmonic conjugate
Harmonic form
Harmonic function
Harmonic mean
Harmonic mode
Harmonic number
Harmonic series
Alternating harmonic series
Harmonic tremor
Spherical harmonics
Mathematical terminology
Harmonic analysis |
https://en.wikipedia.org/wiki/Tamiya%20Corporation | Tamiya is a Japanese manufacturer of plastic model kits, radio-controlled cars, battery- and solar-powered educational models, sailboat models, acrylic and enamel model paints, and various modeling tools and supplies. The company was founded by Yoshio Tamiya in Shizuoka, Japan, in 1946.
The company has gained a reputation among hobbyists of producing models of outstanding quality and accurate scale detail. The company's philosophy is reflected directly in its motto: "First in quality around the world". Tamiya's metal molds are produced from plans with the concept of being "easy to understand and build, even for beginners". Even the box art is consistent with this throughout the company. Tamiya has been awarded the Modell des Jahres (Model of the Year) award, hosted by the German magazine ModellFan.
Tamiya's current products include toys and collectibles: scale plastic models of cars, aircraft, military vehicles, motorcycles and figurines, as well as radio-controlled cars, trucks, and tanks. Tamiya also produces materials and tools, including enamel paints, acrylic paints, airbrushes, aerosol paints, and marker pens.
History
Entrance of plastic models
The company was founded in 1946 as Tamiya Shoji & Co. (Tamiya Commerce Company in translation) by Yoshio Tamiya (15 May 1905 – 2 November 1988) in Oshika, Shizuoka City. It was a sawmill and lumber supply company. With the high availability of wood, the timber company's wood products division (founded in 1947) also produced wooden models of ships and airplanes, which later became the company's foundation. In 1953, the company stopped selling architectural lumber and focused solely on model making.
In the mid-1950s, wooden model sales declined as foreign-made plastic models began to be imported. This led the company to manufacture plastic models as well, starting in 1959. Their first model was the Japanese battleship Yamato. Tamiya's competitors already sold similar models for 350 yen, forcing the company to match that price.
https://en.wikipedia.org/wiki/Prolation | Prolation is a term used in the theory of the mensural notation of medieval and Renaissance music to describe its rhythmic structure on a small scale, as opposed to tempus, which described a larger scale. The term "prolation" is derived from the Latin prolatio ("enlargement"/"prolongation"), first used by Philippe de Vitry in describing Ars Nova, a musical style that came about in 14th-century France.
Prolation, together with tempus, corresponds roughly to the concept of time signature in modern music. Prolation describes whether a semibreve (whole note) is equal in length to two minims (half notes) (minor prolation or imperfect prolation; in Latin "prolatio minor") or, like a tuplet, three minims (major prolation or perfect prolation; in Latin "prolatio maior"). Tempus similarly describes the relationship between the breve and the semibreve. These may be compared to additive and divisive rhythm, the rhythmic divisions and groupings which define time signatures.
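To make the correspondence concrete, the four tempus/prolation combinations map onto approximate modern meters as in the following Python sketch; the mapping is the conventional textbook correspondence, and the data structure and names are illustrative:

```python
# Conventional modern equivalents of the four tempus/prolation combinations.
# Keys are (tempus, prolation); values are approximate modern time signatures.
MENSURATION = {
    ("perfect", "major"):   "9/8",  # breve = 3 semibreves, semibreve = 3 minims
    ("perfect", "minor"):   "3/4",  # breve = 3 semibreves, semibreve = 2 minims
    ("imperfect", "major"): "6/8",  # breve = 2 semibreves, semibreve = 3 minims
    ("imperfect", "minor"): "2/4",  # breve = 2 semibreves, semibreve = 2 minims
}

for (tempus, prolation), meter in MENSURATION.items():
    print(f"tempus {tempus}, prolatio {prolation} ~ {meter}")
```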
Early medieval music was often structured in subdivisions of three, while note values in modern music are most often subdivided into two parts, 4/4 being the most common time signature. Minor prolation has thus primarily survived in the modern time-signature system, while major prolation has been replaced by notation that modifies note values with dots or triplets. The history of written medieval music shows a gradual shift from major to minor prolation as the common practice.
The equivalent term in the Italian notation of the fourteenth century is "divisio", which covers both tempus and prolation. Italian divisiones, first described by Marchetto da Padova, can also allow four minims within a semibreve. For instance, octonaria and duodenaria place eight and twelve minims in a breve, respectively, divided into two or three "major" semibreves.
Musical notation
Medieval music theory |
https://en.wikipedia.org/wiki/Atmel%20AVR%20instruction%20set | The Atmel AVR instruction set is the machine language for the Atmel AVR, a modified Harvard architecture 8-bit RISC single chip microcontroller which was developed by Atmel in 1996. The AVR was one of the first microcontroller families to use on-chip flash memory for program storage.
Processor registers
There are 32 general-purpose 8-bit registers, R0–R31. All arithmetic and logic operations operate on those registers; only load and store instructions access RAM.
A limited number of instructions operate on 16-bit register pairs. The lower-numbered register of the pair holds the least significant bits and must be even-numbered. The last three register pairs are used as pointer registers for memory addressing. They are known as X (R27:R26), Y (R29:R28) and Z (R31:R30). Postincrement and predecrement addressing modes are supported on all three. Y and Z also support a six-bit positive displacement.
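As a small model of the pairing convention (plain Python, not AVR assembly; the names are illustrative), the X pointer combines R27 as the high byte with the even-numbered R26 as the low byte, and post-increment steps the 16-bit value:

```python
R = [0] * 32  # model of the 32 general-purpose registers R0..R31

def x_pointer() -> int:
    """X = R27:R26, with the even-numbered R26 holding the low byte."""
    return (R[27] << 8) | R[26]

R[26], R[27] = 0x34, 0x12
print(hex(x_pointer()))  # 0x1234

# Post-increment (as in LD Rd, X+): use X, then step it by one byte.
new_x = (x_pointer() + 1) & 0xFFFF
R[26], R[27] = new_x & 0xFF, new_x >> 8
print(hex(x_pointer()))  # 0x1235
```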
Instructions which allow an immediate value are limited to registers R16–R31 (8-bit operations) or to register pairs R25:R24–R31:R30 (16-bit operations ADIW and SBIW). Some variants of the MUL operation are limited to eight registers, R16 through R23.
Special purpose registers
In addition to these 32 general-purpose registers, the CPU has a few special-purpose registers:
PC: 16- or 22-bit program counter
SP: 8- or 16-bit stack pointer
SREG: 8-bit status register
RAMPX, RAMPY, RAMPZ, RAMPD and EIND: 8-bit segment registers that are prepended to 16-bit addresses in order to form 24-bit addresses; only available in parts with large address spaces.
Status register
The status register bits are as follows (a flag-computation sketch appears after the list):
C Carry flag. This is a borrow flag on subtracts. The INC and DEC instructions do not modify the carry flag, so they may be used to loop over multi-byte arithmetic operations.
Z Zero flag. Set to 1 when an arithmetic result is zero.
N Negative flag. Set to a copy of the most significant bit of an arithmetic result.
V Overflow flag. Set in case of two's complement overflow.
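The following Python model (illustrative, not Atmel reference code) shows how the C, Z, N and V flags described above can be derived from an 8-bit addition:

```python
def add8(a: int, b: int) -> tuple[int, dict]:
    """Model an 8-bit ADD and the resulting SREG flags C, Z, N, V."""
    total = a + b
    result = total & 0xFF
    flags = {
        "C": int(total > 0xFF),  # unsigned carry out of bit 7
        "Z": int(result == 0),   # result is zero
        "N": (result >> 7) & 1,  # copy of the most significant (sign) bit
    }
    # V: two's complement overflow -- operands share a sign that the result lacks.
    a7, b7, r7 = (a >> 7) & 1, (b >> 7) & 1, flags["N"]
    flags["V"] = int(a7 == b7 and r7 != a7)
    return result, flags

print(add8(0x7F, 0x01))  # result 0x80: C=0 Z=0 N=1 V=1 (signed overflow, no carry)
print(add8(0xFF, 0x01))  # result 0x00: C=1 Z=1 N=0 V=0 (carry out, zero result)
```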
|
https://en.wikipedia.org/wiki/Physical%20paradox | A physical paradox is an apparent contradiction in physical descriptions of the universe. While many physical paradoxes have accepted resolutions, others defy resolution and may indicate flaws in theory. In physics as in all of science, contradictions and paradoxes are generally assumed to be artifacts of error and incompleteness because reality is assumed to be completely consistent, although this is itself a philosophical assumption. When, as in fields such as quantum physics and relativity theory, existing assumptions about reality have been shown to break down, this has usually been dealt with by changing our understanding of reality to a new one which remains self-consistent in the presence of the new evidence.
Paradoxes relating to false assumptions
Certain physical paradoxes defy common sense predictions about physical situations. In some cases, this is the result of modern physics correctly describing the natural world in circumstances which are far outside of everyday experience. For example, special relativity has traditionally yielded two common paradoxes: the twin paradox and the ladder paradox. Both of these paradoxes involve thought experiments which defy traditional common sense assumptions about time and space. In particular, the effects of time dilation and length contraction are used in both of these paradoxes to create situations which seemingly contradict each other. It turns out that the fundamental postulate of special relativity that the speed of light is invariant in all frames of reference requires that concepts such as simultaneity and absolute time are not applicable when comparing radically different frames of reference.
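For a sense of the magnitudes behind the twin paradox, time dilation is governed by the Lorentz factor; the short Python sketch below (with illustrative values) computes it for a traveller at 0.8c:

```python
import math

def lorentz_gamma(v_fraction_of_c: float) -> float:
    """Lorentz factor gamma = 1 / sqrt(1 - v^2/c^2), with v as a fraction of c."""
    return 1.0 / math.sqrt(1.0 - v_fraction_of_c**2)

# A traveller moving at 0.8c ages more slowly by a factor gamma:
gamma = lorentz_gamma(0.8)
print(gamma)       # ~1.667
print(10 / gamma)  # ~6 years pass on board for every 10 years at home
```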
Another paradox associated with relativity is Supplee's paradox, which seems to describe two reference frames that are irreconcilable. In this case, the problem is assumed to be well-posed in special relativity, but because the effect is dependent on objects and fluids with mass, the effects of general relativity need to be taken into account.
https://en.wikipedia.org/wiki/Yang%20Hui | Yang Hui (, ca. 1238–1298), courtesy name Qianguang (), was a Chinese mathematician and writer during the Song dynasty. Originally, from Qiantang (modern Hangzhou, Zhejiang), Yang worked on magic squares, magic circles and the binomial theorem, and is best known for his contribution of presenting Yang Hui's Triangle. This triangle was the same as Pascal's Triangle, discovered by Yang's predecessor Jia Xian. Yang was also a contemporary to the other famous mathematician Qin Jiushao.
Written work
The earliest extant Chinese illustration of 'Pascal's triangle' is from Yang's book Xiangjie Jiuzhang Suanfa of 1261 AD, in which Yang acknowledged that his method of finding square roots and cubic roots using "Yang Hui's Triangle" was invented by the mathematician Jia Xian, who expounded it around 1100 AD, about 500 years before Pascal. Jia set out the method in his book (now lost) known as Rújī Shìsuǒ, or Piling-up Powers and Unlocking Coefficients, which is known through his contemporary, the mathematician Liu Ruxie. Jia described the method as 'li cheng shi suo' (the tabulation system for unlocking binomial coefficients). It appeared again in Zhu Shijie's book Jade Mirror of the Four Unknowns of 1303 AD.
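Since the rows of Yang Hui's triangle are the binomial coefficients Jia Xian tabulated, the first few rows can be generated with a short Python sketch (the function name is illustrative):

```python
def yanghui_triangle(rows: int) -> list[list[int]]:
    """Generate the first `rows` rows of Yang Hui's (Pascal's) triangle."""
    triangle = [[1]]
    for _ in range(rows - 1):
        prev = triangle[-1]
        # Each interior entry is the sum of the two entries above it.
        triangle.append([1] + [prev[i] + prev[i + 1] for i in range(len(prev) - 1)] + [1])
    return triangle

for row in yanghui_triangle(6):
    print(row)  # last row: [1, 5, 10, 10, 5, 1], the coefficients of (a+b)^5
```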
Around 1275 AD, Yang finally had two published mathematical books, known as the Xugu Zhaiqi Suanfa and the Suanfa Tongbian Benmo (summarily called Yang Hui suanfa). In the former book, Yang wrote of the arrangement of natural numbers around concentric and non-concentric circles, known as magic circles, and of vertical-horizontal diagrams of complex combinatorial arrangements known as magic squares, providing rules for their construction. In his writing, he harshly criticized the earlier works of Li Chunfeng and Liu Yi, who were both content with using methods without working out their theoretical origins or principles. Displaying a somewhat modern attitude and approach to mathematics, Yang once said:
The men of old changed the name of their methods from problem to problem, so that as no specific explanation was given, there is no way of telling their theoretical origin or basis.
https://en.wikipedia.org/wiki/Acorn%20Online%20Media%20Set%20Top%20Box | The Acorn Online Media Set Top Box was produced by the Online Media division of Acorn Computers Ltd for the Cambridge Cable and Online Media Video on Demand trial and launched early 1996. Part of this trial involved a home-shopping system in partnership with Parcelforce.
The hardware was trialled by NatWest bank, as exhibited at the 1995 Acorn World trade show.
Specification
STB1
The STB1 was a customised Risc PC based system, with a Wild Vision Movie Magic expansion card in a podule slot, and a network card based on Asynchronous Transfer Mode.
Memory: 4 MiB RAM
Processor: ARM 610 processor at 33 MHz; approx 28.7 MIPS
Operating system: RISC OS 3.50 held in 4 MiB ROM
STB20
The STB20 was a new PCB based around the ARM7500 System On Chip.
Memory:
Processor: ARM7500 processor
Operating system: RISC OS 3.61, a version specific to this STB, held in 4 MiB ROM.
STB22
By this time Online Media had been restructured back into Acorn Computers, so the STB22 is branded as 'Acorn'.
Memory:
Processor:
Operating system: a development of RISC OS held in 4 MiB ROM |
https://en.wikipedia.org/wiki/Acorn%20Network%20Computer | The Acorn Network Computer was a network computer (a type of thin client) designed and manufactured by Acorn Computers Ltd. It was the implementation of the Network Computer Reference Profile that Oracle Corporation commissioned Acorn to specify for network computers (for more detail on the history, see Acorn's Network Computer). Sophie Wilson of Acorn led the effort. It was launched in August 1996.
The NCOS operating system used in this first implementation was based on RISC OS and ran on ARM hardware. Manufacturing was handled under a contract with the Fujitsu subsidiary D2D.
In 1997, Acorn offered its designs at no cost to licensees of .
Hardware models
Original model
The NetStation was available in two versions, one with a modem for home use via a television, and a version with an Ethernet card for use in businesses and schools with VGA monitors and an on-site BSD Unix fileserver based on RiscBSD, an early ARM port of NetBSD. Both versions were upgradable, as the modem and Ethernet cards were replaceable "podules" (Acorn-format Eurocards). The home version was trialled in 1997/98 in conjunction with BT.
The and both used the and supported PAL, NTSC and SVGA displays. They had identical specifications. The used a StrongARM SA-110 200 MHz processor. The ARM7500-based DeskLite was launched in 1998.
StrongARM
Acorn continued to produce ARM-based designs, demonstrating its first StrongARM prototype in May 1996, and the 6 months later. This evolved into the CoNCord, launched in late 1997.
New markets
Further designs included the Set-top Box NC (), the , and the .
Later versions
The second generation Network Computer operating system was no longer based on RISC OS. NC Desktop, from Oracle subsidiary Network Computer Inc., instead combined NetBSD and the X Window System, featuring desktop windows whose contents were typically described using HTML, reminiscent of (but not entirely equivalent to) the use of Display PostScript in NeXTStep. The pr |
https://en.wikipedia.org/wiki/Magnetosome | Magnetosomes are membranous structures present in magnetotactic bacteria (MTB). They contain iron-rich magnetic particles that are enclosed within a lipid bilayer membrane. Each magnetosome can often contain 15 to 20 magnetite crystals that form a chain which acts like a compass needle to orient magnetotactic bacteria in geomagnetic fields, thereby simplifying their search for their preferred microaerophilic environments. Recent research has shown that magnetosomes are invaginations of the inner membrane and not freestanding vesicles. Magnetite-bearing magnetosomes have also been found in eukaryotic magnetotactic algae, with each cell containing several thousand crystals.
Overall, magnetosome crystals have high chemical purity, narrow size ranges, species-specific crystal morphologies and exhibit specific arrangements within the cell. These features indicate that the formation of magnetosomes is under precise biological control and is achieved through mediated biomineralization.
Magnetotactic bacteria usually mineralize either iron oxide magnetosomes, which contain crystals of magnetite (Fe3O4), or iron sulfide magnetosomes, which contain crystals of greigite (Fe3S4). Several other iron sulfide minerals, including mackinawite (tetragonal FeS) and a cubic FeS, have also been identified in iron sulfide magnetosomes; these are thought to be precursors of greigite. One type of magnetotactic bacterium present at the oxic-anoxic transition zone (OATZ) of the southern basin of the Pettaquamscutt River Estuary, Narragansett, Rhode Island, United States is known to produce both iron oxide and iron sulfide magnetosomes.
Purpose
Magnetotactic bacteria are widespread, motile, diverse prokaryotes that biomineralize a unique organelle called the magnetosome. A magnetosome consists of a nano-sized crystal of a magnetic iron mineral, which is enveloped by a lipid bilayer membrane. In the cells of most magnetotactic bacteria, magnetosomes are organized as well-ordered chains. The magnetosome chain causes the cell to orient like a compass needle in geomagnetic fields.
https://en.wikipedia.org/wiki/Duane%20syndrome | Duane syndrome is a congenital rare type of strabismus most commonly characterized by the inability of the eye to move outward. The syndrome was first described by ophthalmologists Jakob Stilling (1887) and Siegmund Türk (1896), and subsequently named after Alexander Duane, who discussed the disorder in more detail in 1905.
Other names for this condition include: Duane's retraction syndrome, eye retraction syndrome, retraction syndrome, congenital retraction syndrome and Stilling-Türk-Duane syndrome.
Presentation
The characteristic features of the syndrome are:
Limitation of abduction (outward movement) of the affected eye.
Less marked limitation of adduction (inward movement) of the same eye.
Retraction of the eyeball into the socket on adduction, with associated narrowing of the palpebral fissure (eye closing).
Widening of the palpebral fissure on attempted abduction. (N. B. Mein and Trimble point out that this is "probably of no significance" as the phenomenon also occurs in other conditions in which abduction is limited.)
Poor convergence.
A head turn to the side of the affected eye to compensate for the movement limitations of the eye(s) and to maintain binocular vision.
While usually isolated to the eye abnormalities, Duane syndrome can be associated with other problems, including cervical spine abnormalities (Klippel–Feil syndrome), Goldenhar syndrome, heterochromia, and congenital deafness.
Causes
Duane syndrome is most probably a miswiring of the eye muscles, causing some eye muscles to contract when they shouldn't and other eye muscles not to contract when they should. Alexandrakis and Saunders found that in most cases the abducens nucleus and nerve are absent or hypoplastic, and the lateral rectus muscle is innervated by a branch of the oculomotor nerve. This view is supported by the earlier work of Hotchkiss et al., who reported on the autopsy findings of two patients with Duane's syndrome. In both cases the sixth cranial nerve nucleus and nerve were absent.