https://en.wikipedia.org/wiki/Chua%27s%20circuit
|
Chua's circuit (also known as a Chua circuit) is a simple electronic circuit that exhibits classic chaotic behavior. This means roughly that it is a "nonperiodic oscillator"; it produces an oscillating waveform that, unlike an ordinary electronic oscillator, never "repeats". It was invented in 1983 by Leon O. Chua, who was a visitor at Waseda University in Japan at that time. The ease of construction of the circuit has made it a ubiquitous real-world example of a chaotic system, leading some to declare it "a paradigm for chaos".
Chaotic criteria
An autonomous circuit made from standard components (resistors, capacitors, inductors) must satisfy three criteria before it can display chaotic behaviour. It must contain:
one or more nonlinear elements,
one or more locally active resistors,
three or more energy-storage elements.
Chua's circuit is the simplest electronic circuit meeting these criteria. As shown in the top figure, the energy storage elements are two capacitors (labeled C1 and C2) and an inductor (labeled L; L1 in lower figure). A "locally active resistor" is a device that has negative resistance and is active (it can amplify), providing the power to generate the oscillating current. The locally active resistor and nonlinearity are combined in the device NR, which is called "Chua's diode". This device is not sold commercially but is implemented in various ways by active circuits. The circuit diagram shows one common implementation. The nonlinear resistor is implemented by two linear resistors and two diodes. At the far right is a negative impedance converter made from three linear resistors and an operational amplifier, which implements the locally active resistance (negative resistance).
Dynamics
Analyzing the circuit using Kirchhoff's circuit laws, the dynamics of Chua's circuit can be accurately modeled by means of a system of three nonlinear ordinary differential equations in the variables x(t), y(t), and z(t), which represent the voltages across the capacitors C1 and C2 and the electric current through the inductor L, respectively.
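The equations are standard and widely reproduced; the following minimal Python sketch (added for illustration, not part of the article) integrates the dimensionless Chua system with forward Euler, using the classic double-scroll parameter values as an assumption:

alpha, beta, m0, m1 = 15.6, 28.0, -1.143, -0.714   # conventional double-scroll values

def f(x):
    # Piecewise-linear characteristic of Chua's diode
    return m1 * x + 0.5 * (m0 - m1) * (abs(x + 1) - abs(x - 1))

def step(x, y, z, dt=1e-3):
    # One forward-Euler step of dx/dt = alpha*(y - x - f(x)), dy/dt = x - y + z, dz/dt = -beta*y
    return (x + dt * alpha * (y - x - f(x)),
            y + dt * (x - y + z),
            z + dt * (-beta * y))

x, y, z = 0.7, 0.0, 0.0
points = []
for _ in range(200_000):
    x, y, z = step(x, y, z)
    points.append((x, y, z))   # the trajectory traces the double scroll and never repeats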
|
https://en.wikipedia.org/wiki/NCR%205380
|
The NCR 5380 is an early SCSI controller chip developed by NCR Microelectronics. It was popular due to its simplicity and low cost. The 5380 was used in the Macintosh Plus and in numerous SCSI cards for personal computers, including the Amiga and Atari TT. The 5380 was second sourced by several chip makers, including AMD and Zilog. The 5380 was designed by engineers at the NCR plant then located in Wichita, Kansas, and initially fabricated by NCR Microelectronics in Colorado Springs, Colorado. It was the first single-chip implementation of the SCSI-1 protocol.
The NCR 5380 also made a significant appearance in Digital Equipment Corporation's VAX computers, where it was featured on various Q-Bus modules and as an integrated SCSI controller in numerous MicroVAX, VAXstation and VAXserver computers. Many UMAX SCSI optical scanners also contain the 53C80 chip interfaced to an Intel 8031-series microcontroller.
The single-chip SCSI controller NCR 53C400 used the 5380 core.
See also
NCR 53C9x
References
SCSI
Integrated circuits
NCR Corporation products
|
https://en.wikipedia.org/wiki/DataPlay
|
DataPlay is an optical disc system developed by DataPlay Inc. and released to the consumer market in 2002. Using very small (32 mm diameter) discs enclosed in a protective cartridge and storing 250 MB per side, DataPlay was intended primarily for portable music playback, although it could also store other types of data, using pre-recorded discs, user-recorded discs, and discs that combined pre-recorded information with a writable area. It also allowed multisession recording. It won the CES Best of Show award in 2001.
DataPlay also included an elaborate digital rights management system designed to allow consumers to "unlock" extra pre-recorded content on the disc at any time, through the internet, following the initial purchase. It was based on the Secure Digital Music Initiative's DRM system, which was one of the reasons for the format's attractiveness to the music industry. DataPlay also included a proprietary file system, the DataPlay File System (DFS), which natively supported DRM. By default it allowed up to three copies to other DataPlay discs, without allowing any copies to CDs.
The recorded music industry was initially generally supportive of DataPlay, and a small number of pre-recorded DataPlay discs were released, including the Britney Spears album Britney. Graphics on press releases show that Sting and Garth Brooks were also set to have DataPlay releases. In 2001 the first DIY DataPlay album was released by the experimental rave producer Backmasker. However, as a pre-recorded format, DataPlay was a failure, and the company closed due to a lack of funding. In 2003 a company called DPHI bought DataPlay's intellectual property and reintroduced it at CES 2004, swapping DataPlay's DFS file system for the FAT file system. Again, the discs were marketed as a cheaper alternative to memory cards, with a device being designed that would allow users to transfer data from an SD card to a cheaper and higher-capacity DataPlay disc. Each disc wo
|
https://en.wikipedia.org/wiki/Wireless%20Router%20Application%20Platform
|
The Wireless Router Application Platform (WRAP) is a single-board-computer format defined by the Swiss company PC Engines. It is designed specifically for wireless routers, firewalls, load balancers, VPNs, and other network appliances.
Basic specs
32-bit x86-compatible CPU with low energy consumption (AMD Geode SC1100 at 266 MHz)
supports MMX instructions
64-bit SDRAM memory controller (max. 89 MHz)
PCI bus controller
IDE interfaces
ACPI 1.0-compatible power management
tinyBIOS, developed by PC Engines
64 or 128 MB SDRAM
CompactFlash memory (holds the boot OS)
Monitoring: watchdog timer, LM77 thermal monitor
Power supply: 7–18 V external DC power or Power over Ethernet
LAN: National Semiconductor DP83816
I/O: mini-PCI slots, serial console port
Different boards
There are three different models of the WRAP:
The WRAP 1-1 has two Ethernet ports and two mini-PCI slots on a 16 × 16 cm board.
The WRAP 1-2 has three Ethernet ports and one mini-PCI slot on a 16 × 16 cm board.
The WRAP 2 has one Ethernet port and two mini-PCI slots on a 10 × 16 cm board.
Operating System
The WRAP is capable of running many different operating systems, including various Linux distributions, FreeBSD, NetBSD, OpenBSD, as well as proprietary OSes. The WRAP lacks a keyboard controller (for obvious reasons), so some OSes that rely on one for the boot process may have to be modified.
End Of Life (EOL)
PC Engines announced the end of life for the WRAP platform in 2007. The board was replaced by the ALIX.
External links
PC Engines information page on the WRAP
BowlFish
Routers (computing)
|
https://en.wikipedia.org/wiki/Kiesselbach%27s%20plexus
|
Kiesselbach's plexus is an anastomotic arterial network (plexus) of four or five arteries in the nose supplying the nasal septum. It lies in the anterior inferior part of the septum known as Little's area, Kiesselbach's area, or Kiesselbach's triangle. It is a common site for nosebleeds.
Structure
Kiesselbach's plexus is an anastomosis of four or five arteries:
the anterior ethmoidal artery, a branch of the ophthalmic artery.
the sphenopalatine artery, a terminal branch of the maxillary artery.
the greater palatine artery, a branch of the maxillary artery.
a septal branch of the superior labial artery, a branch of the facial artery.
a posterior ethmoidal artery, a branch of the ophthalmic artery. There is contention as to whether this is truly part of Kiesselbach's plexus; most sources state that it is not part of the plexus, but rather one of the blood supplies of the nasal septum itself.
It runs vertically downwards just behind the columella, and crosses the floor of the nose. It joins the venous plexus on the lateral nasal wall.
Function
Kiesselbach's plexus supplies blood to the nasal septum.
Clinical significance
Ninety percent of nosebleeds (epistaxis) occur in Kiesselbach's plexus. It is exposed to the drying effect of inhaled air. It can also be damaged by trauma from a finger nail (nose picking), as it is fragile. It is the usual site for nosebleeds in children and young adults. A physician may use a nasal speculum to see that an anterior nosebleed comes from Kiesselbach's plexus.
History
James Lawrence Little (1836–1885), an American surgeon, first described the area in detail in 1879. Little described the area as being "about half an inch ... from the lower edge of the middle of the column [septum]".
Kiesselbach's plexus is named after Wilhelm Kiesselbach (1839–1902), a German otolaryngologist who published a paper on the area in 1884. The area may be called Little's area, Kiesselbach's area, or Kiesselbach's triangle.
See also
Anatomical
|
https://en.wikipedia.org/wiki/Unit%20type
|
In the area of mathematical logic and computer science known as type theory, a unit type is a type that allows only one value (and thus can hold no information). The carrier (underlying set) associated with a unit type can be any singleton set. There is an isomorphism between any two such sets, so it is customary to talk about the unit type and ignore the details of its value. One may also regard the unit type as the type of 0-tuples, i.e. the product of no types.
The unit type is the terminal object in the category of types and typed functions. It should not be confused with the zero or bottom type, which allows no values and is the initial object in this category. Similarly, the Boolean is the type with two values.
The unit type is implemented in most functional programming languages. The void type that is used in some imperative programming languages serves some of its functions, but because its carrier set is empty, it has some limitations (as detailed below).
In programming languages
Several computer programming languages provide a unit type to specify the result type of a function with the sole purpose of causing a side effect, and the argument type of a function that does not require arguments.
In Haskell, Rust, and Elm, the unit type is called () and its only value is also (), reflecting the 0-tuple interpretation.
In ML descendants (including OCaml, Standard ML, and F#), the type is called unit but the value is written as ().
In Scala, the unit type is called Unit and its only value is written as ().
In Common Lisp, the type named null is a unit type which has one value, namely the symbol nil. This should not be confused with the nil type, which is the bottom type.
In Python, there is a type called NoneType which allows the single value of None.
In Swift, the unit type is called Void or () and its only value is also (), reflecting the 0-tuple interpretation.
In Java, the unit type is called Void and its only value is null.
In Go, the unit type is written struct{} and its only value is struct{}{}.
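As a small illustration (added here, not part of the article), Python's NoneType behaves exactly this way: a function called only for its side effect implicitly returns the type's single value.

def log(msg):
    # A side-effecting "procedure"; its result type is NoneType
    print(msg)

result = log("hello")
assert result is None          # None is the only value of NoneType
assert type(None)() is None    # constructing NoneType yields the same single value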
|
https://en.wikipedia.org/wiki/Ground%20substance
|
Ground substance is an amorphous gel-like substance in the extracellular space of animals that contains all components of the extracellular matrix (ECM) except for fibrous materials such as collagen and elastin. Ground substance is active in the development, movement, and proliferation of tissues, as well as their metabolism. Additionally, cells use it for support, water storage, binding, and a medium for intercellular exchange (especially between blood cells and other types of cells). Ground substance provides lubrication for collagen fibers.
The components of the ground substance vary depending on the tissue. Ground substance is primarily composed of water and large organic molecules, such as glycosaminoglycans (GAGs), proteoglycans, and glycoproteins. GAGs are polysaccharides that trap water, giving the ground substance a gel-like texture. Important GAGs found in ground substance include hyaluronic acid, heparan sulfate, dermatan sulfate, and chondroitin sulfate. With the exception of hyaluronic acid, GAGs are bound to proteins called proteoglycans. Glycoproteins are proteins that attach components of the ground substance to one another and to the surfaces of cells. Components of the ground substance are secreted by fibroblasts. Usually it is not visible on slides, because it is lost during staining in the preparation process.
Link proteins such as vinculin, spectrin and actomyosin stabilize the proteoglycans and organize elastic fibers in the ECM. Changes in the density of ground substance can allow collagen fibers to form aberrant cross-links. Loose connective tissue is characterized by few fibers and cells, and a relatively large amount of ground substance. Dense connective tissue has a smaller amount of ground substance compared to the fibrous material.
The meaning of the term has evolved over time.
See also
Milieu intérieur
References
External links
Biochemistry
Histology
|
https://en.wikipedia.org/wiki/Traffic%20mix
|
Traffic mix is a traffic model in telecommunication engineering and teletraffic theory.
Definitions
A traffic mix is a model of user behaviour. In telecommunications, user behaviour may be described by a number of models, ranging from simple to complex. For example, for plain old telephone service (POTS), a sequence of connection requests to an exchange can be modelled by fitting negative exponential distributions to the average time between requests and the average duration of a connection. This in turn can be used to work out the utilisation of the line for the purposes of network planning and dimensioning.
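A minimal sketch of that calculation (illustrative Python with assumed parameter values, not from the article): the offered traffic in erlangs is the call arrival rate multiplied by the mean holding time, which a simple simulation with exponential interarrival and holding times reproduces.

import random

mean_interarrival = 120.0     # seconds between call attempts (assumed value)
mean_holding = 180.0          # mean call duration in seconds (assumed value)

offered_erlangs = mean_holding / mean_interarrival   # analytic value: 1.5 E

# Simulate one day of call attempts and total up the requested holding time.
t, demanded = 0.0, 0.0
while t < 86_400:
    t += random.expovariate(1.0 / mean_interarrival)
    demanded += random.expovariate(1.0 / mean_holding)
print(offered_erlangs, demanded / 86_400)   # the two agree closely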
Objectives
Traffic mix has two goals:
Network links dimensioning
Network equipment dimensioning
Both these functions are extremely important to network operators. If insufficient capability is deployed at a node (for example, if a backbone router has 1 gigabit/sec of switching capacity and more than this is offered) then the risk of equipment failure increases, and customers experience poor service. However, if the network is overprovisioned the cost in equipment can be high. Most providers therefore seek to maximise the effect of their spending by maintaining an unused overhead capacity for growth, and expanding key nodes to relieve problem areas. Identification of these areas is accomplished by network dimensioning.
Traffic mix type
Telephony traffic mix
Call attempts per day
Call holding time
Mean holding time
Mobile telephony traffic mix
Call attempts
Call holding time
Mean holding time
Mean number of SMS sent
Mean number of SMS received
User mobility
Internet traffic mix
UL/DL acknowledged throughput (kbit/s)
Packet Data Channel allocation successes
User throughput
Session/Packet interarrival time/Latency
See also
A. K. Erlang
Call center
Engset calculation
Erlang distribution
Poisson distribution
External links
Competitive ISP - Scientific treatment of internet Traffic
Teletraffic
Telecommunications engineering
|
https://en.wikipedia.org/wiki/Type%20class
|
In computer science, a type class is a type system construct that supports ad hoc polymorphism. This is achieved by adding constraints to type variables in parametrically polymorphic types. Such a constraint typically involves a type class T and a type variable a, and means that a can only be instantiated to a type whose members support the overloaded operations associated with T.
Type classes were first implemented in the Haskell programming language after first being proposed by Philip Wadler and Stephen Blott as an extension to "eqtypes" in Standard ML, and were originally conceived as a way of implementing overloaded arithmetic and equality operators in a principled fashion.
In contrast with the "eqtypes" of Standard ML, overloading the equality operator through the use of type classes in Haskell does not require extensive modification of the compiler frontend or the underlying type system.
Overview
Type classes are defined by specifying a set of function or constant names, together with their respective types, that must exist for every type that belongs to the class. In Haskell, types can be parameterized; a type class Eq intended to contain types that admit equality would be declared in the following way:
class Eq a where
(==) :: a -> a -> Bool
(/=) :: a -> a -> Bool
where a stands for any type belonging to the class Eq, and the declaration gives the signatures of two functions (equality and inequality), each of which takes two arguments of type a and returns a Boolean.
The type variable a has kind * (also written Type in recent GHC releases), meaning that the kind of Eq is
Eq :: Type -> Constraint
The declaration may be read as stating a "type a belongs to type class Eq if there are functions named (==), and (/=), of the appropriate types, defined on it". A programmer could then define a function elem (which determines if an element is in a list) in the following way:
elem :: Eq a => a -> [a] -> Bool
elem y [] = False
elem y (x:xs) = (x == y) || elem y xs
|
https://en.wikipedia.org/wiki/Rigid%20frame
|
In structural engineering, a rigid frame is the load-resisting skeleton constructed with straight or curved members interconnected by mostly rigid connections, which resist movements induced at the joints of members. Its members can take bending moment, shear, and axial loads.
The two common assumptions as to the behavior of a building frame are (1) that its beams are free to rotate at their connections or (2) that its members are so connected that the angles they make with each other do not change under load. Frameworks with connections of intermediate stiffness will be intermediate between these two extremes. Frameworks with connections of intermediate stiffness are commonly called semirigid frames. The AISC specifications recognize three basic frame types: Rigid Frame, Simple Frame, and Partially Restrained Frame.
AISC standard
The AISC Steel Specification Commentary on Section B3 provides guidance for the classification of a connection in terms of its rigidity. The secant stiffness of the connection Ks is taken as an index property of connection stiffness. Specifically,
Ks = Ms/θs
where
Ms = moment at service loads, kip-in (N-mm)
θs = rotation at service loads, rads
The secant stiffness of the connection is compared to the rotational stiffness of the connected member as follows, in which L and EI are the length and bending rigidity, respectively, of the beam.
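As an illustration (a sketch, not from the article; the numeric thresholds below are the values commonly quoted from the AISC Commentary and are an assumption to be checked against the current specification), the classification reduces to comparing the nondimensional ratio KsL/EI against fixed limits:

def classify_connection(Ks, E, I, L):
    # Nondimensional connection stiffness relative to the beam stiffness EI/L.
    index = Ks * L / (E * I)
    if index >= 20:
        return "fully restrained (rigid frame)"
    if index <= 2:
        return "simple frame (pinned)"
    return "partially restrained (semi-rigid)"

# Assumed service-load values in consistent units (kip, in):
print(classify_connection(Ks=1.0e6, E=29_000, I=800, L=360))   # partially restrained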
Notes
References
Structural system
Construction
|
https://en.wikipedia.org/wiki/Allogamy
|
Allogamy or cross-fertilization is the fertilization of an ovum from one individual with the spermatozoa of another. By contrast, autogamy is the term used for self-fertilization. In humans, the fertilization event is an instance of allogamy. Self-fertilization occurs in hermaphroditic organisms where the two gametes fused in fertilization come from the same individual. This is common in plants (see Sexual reproduction in plants) and certain protozoans.
In plants, allogamy is used specifically to mean the use of pollen from one plant to fertilize the flower of another plant, and is usually synonymous with the terms "cross-fertilization" and "cross-pollination" (outcrossing). The latter term can be used more specifically to mean pollen exchange between different plant strains or even different plant species (where the term cross-hybridization can be used) rather than simply between different individuals.
Parasites having complex life cycles can pass through alternate stages of allogamous and autogamous reproduction, and the description of a hitherto unknown allogamous stage can be a significant finding with implications for human disease.
Avoidance of inbreeding depression
Allogamy ordinarily involves cross-fertilization between unrelated individuals leading to the masking of deleterious recessive alleles in progeny. By contrast, close inbreeding, including self-fertilization in plants and automictic parthenogenesis in hymenoptera, tends to lead to the harmful expression of deleterious recessive alleles (inbreeding depression).
In dioecious plants, the stigma may receive pollen from several different potential donors. As multiple pollen tubes from the different donors grow through the stigma to reach the ovary, the receiving maternal plant may carry out pollen selection favoring pollen from less related donor plants. Thus post-pollination selection may occur in order to promote allogamy and avoid inbreeding depression. Also, seeds may be aborted selectively dependin
|
https://en.wikipedia.org/wiki/Quasi-quotation
|
Quasi-quotation or Quine quotation is a linguistic device in formal languages that facilitates rigorous and terse formulation of general rules about linguistic expressions while properly observing the use–mention distinction. It was introduced by the philosopher and logician Willard Van Orman Quine in his book Mathematical Logic, originally published in 1940. Put simply, quasi-quotation enables one to introduce symbols that stand for a linguistic expression in a given instance and are used as that linguistic expression in a different instance.
For example, one can use quasi-quotation to illustrate an instance of substitutional quantification, like the following:
"Snow is white" is true if and only if snow is white.
Therefore, there is some sequence of symbols that makes the following sentence true when every instance of φ is replaced by that sequence of symbols: "φ" is true if and only if φ.
Quasi-quotation is used to indicate (usually in more complex formulas) that the φ and "φ" in this sentence are related things, that one is the iteration of the other in a metalanguage. Quine introduced quasiquotes because he wished to avoid the use of variables, and work only with closed sentences (expressions not containing any free variables). However, he still needed to be able to talk about sentences with arbitrary predicates in them, and thus, the quasiquotes provided the mechanism to make such statements. Quine had hoped that, by avoiding variables and schemata, he would minimize confusion for the readers, as well as staying closer to the language that mathematicians actually use.
Quasi-quotation is sometimes denoted using the symbols ⌜ and ⌝ (Unicode U+231C and U+231D), or double square brackets ⟦ ⟧ ("Oxford brackets"), instead of ordinary quotation marks.
How it works
Quasi-quotation is particularly useful for stating formation rules for formal languages. Suppose, for example, that one wants to define the well-formed formulas (wffs) of a new formal language, L,
|
https://en.wikipedia.org/wiki/Audacious%20%28software%29
|
Audacious is a free and open-source audio player software with a focus on low resource use, high audio quality, and support for a wide range of audio formats. It is designed primarily for use on POSIX-compatible Unix-like operating systems, with limited support for Microsoft Windows. Audacious was the default audio player in Ubuntu Studio in 2011–12, and was the default music player in Lubuntu until October 2018, when it was replaced with VLC.
History
Audacious began as a fork of Beep Media Player, which itself is a fork of XMMS. Ariadne "kaniini" Conill decided to fork Beep Media Player after the original development team announced that they were stopping development in order to create a next-generation version called BMPx. According to the Audacious home page, Conill and others "had [their] own ideas about how a player should be designed, which [they] wanted to try in a production environment."
Since version 2.1, Audacious includes both the Winamp-like interface known from previous versions and a new, GTK-based interface known as GTKUI, which resembles foobar2000 to some extent. GTKUI became the default interface in Audacious 2.4.
Change to C++ and Qt
Before version 3.0, Audacious used the GTK 2.x toolkit by default. Partial support for GTK3 was added in version 2.5, and Audacious 3.0 has full support for GTK3 and uses it by default. However, dissatisfied with the evolution of GTK3, the Audacious team chose to revert to GTK2 starting with the 3.6 release, with long-term plans of porting to Qt.
Since August 8, 2018, the official website has had HTTPS enabled site-wide, and GTK3 support was dropped completely.
As of version 4.0, Audacious uses Qt as its primary toolkit, but GTK 2.x support is still available.
As of version 4.3, Audacious has reinstated GTK3 support.
Features
Audacious contains built-in gapless playback.
Default codec support
MP3 using libmpg123
Advanced Audio Coding (AAC and AAC+)
Vorbis
FLAC
Wavpack
Shorten (SHN)
Musepack
TTA
|
https://en.wikipedia.org/wiki/OpenBSD%20security%20features
|
The OpenBSD operating system focuses on security and the development of security features. According to author Michael W. Lucas, OpenBSD "is widely regarded as the most secure operating system available anywhere, under any licensing terms."
API and build changes
Bugs and security flaws are often caused by programmer error. A common source of error is the misuse of the strcpy and strcat string functions in the C programming language. There are two common alternatives, strncpy and strncat, but they can be difficult to understand and easy to misuse, so OpenBSD developers Todd C. Miller and Theo de Raadt designed the strlcpy and strlcat functions. These functions are intended to make it harder for programmers to accidentally leave buffers unterminated or allow them to be overflowed. They have been adopted by the NetBSD and FreeBSD projects but not by the GNU C Library.
On OpenBSD, the linker has been changed to issue a warning when unsafe string manipulation functions, such as strcpy, strcat, or sprintf, are found. All occurrences of these functions in the OpenBSD source tree have been replaced. In addition, a static bounds checker is included in OpenBSD in an attempt to find other common programming mistakes at compile time. Other security-related APIs developed by the OpenBSD project include issetugid and arc4random.
Kernel randomization
In a June 2017 email, Theo de Raadt stated that a problem with stable systems was that they could be running for months at a time. Although there is considerable randomization within the kernel, some key addresses remain the same. The project in progress modifies the linker so that on every boot, the kernel is relinked, as well as all other randomizations. This differs from kernel ASLR; in the email he states that "As a result, every new kernel is unique. The relative offsets between functions and data are unique ... [The current] change is scaffolding to ensure you boot a newly-linked kernel upon every reboot ... so that a new
|
https://en.wikipedia.org/wiki/List%20of%20biology%20journals
|
This is a list of articles about scientific journals in biology and its various subfields.
General
Agriculture
Bulgarian Journal of Agricultural Science
EuroChoices
Journal of Animal Science
Journal of Dairy Science
Journal of Food Science
Poultry Science
Animal Feed Science and Technology
Journal of Animal Breeding and Genetics
Animal Production
Animals
animal
Animal Genetics
Agroecology and Sustainable Food Systems
Pertanika Journal of Tropical Agricultural Science
Anatomy
Microscopy Research and Technique
Biochemistry
Bioengineering
Biomedical Microdevices
Biotechnology and Bioprocess Engineering
Critical Reviews in Biotechnology
International Journal of Computational Biology and Drug Design
Bioinformatics
Biophysics
Annual Review of Biophysics and Biomolecular Structure
Biophysical Journal
FEBS Letters
Structure
Botany
Cell and Molecular
Ecology
Entomology
International Journal of Insect Science
Forestry
Genetics
Healthcare
Immunology
Annals of Allergy, Asthma & Immunology
Molecular Imaging and Biology
Nature Immunology
Nature Reviews Immunology
Malacology
Iberus
Microbiology and infectious disease
Advances in Microbial Physiology
African Journal of Infectious Diseases
Annual Review of Microbiology
Canadian Journal of Microbiology
Microbiology and Molecular Biology Reviews
Nature Reviews Microbiology
Mycology
Neuroscience
Nutrition
African Journal of Food, Agriculture, Nutrition and Development
Applied Physiology, Nutrition, and Metabolism
Ornithology
Pharmacy
Acta Facultatis Pharmaceuticae Universitatis Comenianae
Virology
Zoology
See also
List of scientific journals
External links
Biology
Journals
Biology journals
|
https://en.wikipedia.org/wiki/David%20Eisenbud
|
David Eisenbud (born 8 April 1947 in New York City) is an American mathematician. He is a professor of mathematics at the University of California, Berkeley, and former director of the Mathematical Sciences Research Institute (MSRI), now known as the Simons Laufer Mathematical Sciences Institute (SLMath). He served as Director of MSRI from 1997 to 2007, and then again from 2013 to 2022.
Biography
Eisenbud is the son of mathematical physicist Leonard Eisenbud, who was a student and collaborator of the renowned physicist Eugene Wigner. Eisenbud received his Ph.D. in 1970 from the University of Chicago, where he was a student of Saunders Mac Lane and, unofficially, James Christopher Robson. He then taught at Brandeis University from 1970 to 1997, during which time he had visiting positions at Harvard University, Institut des Hautes Études Scientifiques (IHÉS), University of Bonn, and Centre national de la recherche scientifique (CNRS). He joined the staff at MSRI in 1997, and took a position at Berkeley at the same time.
From 2003 to 2005 Eisenbud was President of the American Mathematical Society.
Eisenbud's mathematical interests include commutative and non-commutative algebra, algebraic geometry, topology, and computational methods in these fields. He has written over 150 papers and books with over 60 co-authors. Notable contributions include the theory of matrix factorizations for maximal Cohen–Macaulay modules over hypersurface rings, the Eisenbud–Goto conjecture on degrees of generators of syzygy modules, and the Buchsbaum–Eisenbud criterion for exactness of a complex. He also proposed the Eisenbud–Evans conjecture, which was later settled by the Indian mathematician Neithalath Mohan Kumar.
He has had 31 doctoral students, including Craig Huneke, Mircea Mustaţă, Irena Peeva, and Gregory G. Smith (winner of the Aisenstadt Prize in 2007).
Eisenbud's hobbies are juggling (he has written two papers on the mathematics of juggling) and music. He has appeared
|
https://en.wikipedia.org/wiki/Neighbour-sensing%20model
|
The Neighbour-Sensing mathematical model of hyphal growth is a set of interactive computer models that simulate the way fungi hyphae grow in three-dimensional space. The three-dimensional simulation is an experimental tool which can be used to study the morphogenesis of fungal hyphal networks.
The modelling process starts from the proposition that each hypha in the fungal mycelium generates a certain abstract field that (like known physical fields) decreases with increasing distance. Both scalar and vector fields are included in the models. The field(s) and gradient(s) inform the algorithm that calculates the likelihood of branching, the angle of branching, and the growth direction of each hyphal tip in the simulated mycelium. The growth vector is thus informed of its surroundings: effectively, the virtual hyphal tip senses the neighbouring mycelium, hence the name Neighbour-Sensing model.
Cross-walls in living hyphae are formed only at right angles to the long axis of the hypha. A daughter hyphal apex can only arise if a branch is initiated. So, for the fungi, hyphal branch formation is the equivalent of cell division in animals, plants and protists. The position of origin of a branch, and its direction and rate of growth are the main formative events in the development of fungal tissues and organs. Consequently, by simulating the mathematics of the control of hyphal growth and branching the Neighbour-Sensing model provides the user with a way of experimenting with features that may regulate hyphal growth patterns during morphogenesis to arrive at suggestions that could be tested with live fungi.
The model was proposed by Audrius Meškauskas and David Moore in 2004 and developed using the supercomputing facilities of the University of Manchester.
The key idea of this model is that all parts of the fungal mycelium have identical field generation systems, field sensing mechanisms and growth direction altering algorithms.
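The article does not give the field equations, so the following toy Python sketch is purely hypothetical: it only illustrates the general scheme in which every hyphal segment contributes a distance-decaying scalar field and a tip branches when its sensed field is low (uncrowded). The inverse-square decay law and the threshold are invented for the example.

import random

def dist2(p, q):
    return sum((a - b) ** 2 for a, b in zip(p, q))

def field_at(tip, segments):
    # Each hyphal segment generates a scalar field decaying with distance (toy 1/d^2 law).
    return sum(1.0 / (1e-9 + dist2(tip, seg)) for seg in segments)

def should_branch(tip, segments, threshold=5.0, base_rate=0.1):
    # A tip in a crowded neighbourhood (high field) suppresses branching.
    return field_at(tip, segments) < threshold and random.random() < base_rate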
|
https://en.wikipedia.org/wiki/James%20S.%20McDonnell%20Foundation
|
The James S. McDonnell Foundation was founded in 1950 by aerospace pioneer James S. McDonnell. It was established to "improve the quality of life," and does so by contributing to the generation of new knowledge through its support of research and scholarship. Originally called the McDonnell Foundation, the organization was renamed the James S. McDonnell Foundation in 1984 in honor of its founder. The foundation is based in St. Louis, Missouri.
The Foundation is a member of the Brain Tumor Funders' Collaborative, a partnership among eight private philanthropic and advocacy organizations designed to bridge the “translational gap” that prevents promising laboratory science from yielding new medical treatments. The fair market value of Foundation assets was around $609 million in 2007. Susan M. Fitzpatrick was named President beginning in 2015.
Grants
In 2004, the Foundation awarded approximately $15.5 million in grants. Since its inception, the McDonnell Foundation has awarded over $295 million in grants. Grants are awarded via Foundation-initiated, peer-reviewed proposal processes through the 21st Century Science Initiative. This initiative supports scientific, educational, and charitable causes at the local, national, and international level, for instance research related to cancer or climate change.
References
External links
James S. McDonnell Foundation. – official website
Brain Tumor Funders' Collaborative.
Biomedical research foundations
Organizations established in 1950
Medical and health foundations in the United States
Organizations based in St. Louis
|
https://en.wikipedia.org/wiki/S%20phase%20index
|
S-phase index (SPI) is a measure of cell growth and viability, especially the capacity of tumor cells to proliferate. It is defined as the number of BrdU-incorporating cells relative to the volume of DNA staining, determined from whole-mount confocal analyses.
Only cells in the S phase will incorporate BrdU into their DNA structure, which assists in determining length of the cell cycle.
References
Murphy, Terence D. "Drosophila skpA, a component of SCF ubiquitin ligases, regulates centrosome duplication independently of cyclin E accumulation", Journal of Cell Science 116, 2321-2332 (2003).
Cellular processes
|
https://en.wikipedia.org/wiki/Blagger%20%28video%20game%29
|
Blagger is a platform game created by Antony Crowther for the Commodore 64 and released by Alligata in 1983. A BBC Micro port was released the same year, followed by Acorn Electron, Amstrad CPC (through Amsoft) and MSX versions in 1984, Commodore 16 and Plus/4 versions in 1985, and an Amstrad PCW version in 1987. In some countries the game was released under the name Gangster.
A sequel, Son of Blagger, was released in 1984, with a third and final title, Blagger Goes to Hollywood, released in 1985. Another sequel, known as New Blagger but developed as Blagger 2 and intended as a direct continuation of the original, was produced in 1985 but never released.
Gameplay
The game is divided into a series of single-screen levels. The goal of the player on each screen is to manipulate Blagger, a burglar, to collect the scattered keys and then reach the safe. The keys must be collected and the safe opened in a limited amount of time. Blagger can walk left and right, and jump left, right and up. The jumping action is in a fixed pattern and cannot be altered once initiated. Gameplay involves learning the best order in which to collect the keys, and good timing of movement and jumping.
Not all platforms are permanent; some decay once Blagger has stepped on them. Other platforms serve to move Blagger in a particular direction. Blagger will die if he touches cacti, one of the moving enemy obstacles of the level, or if he falls more than a certain distance. The moving enemies vary from level to level, and include cars, aliens, mad hatters, and giant mouths. The movement of the enemies is in a fixed pattern, generally travelling from one point to another and back again.
The BBC and Electron versions feature floating "RG"s as hazards (R.G. being the initials of the programmer of those versions, R.S. Goodley).
Reception
References
External links
A remake of the original Blagger at Darn Kitty
Blagger at Plus/4 World
Complete video from the C64 version at Internet Archive
1983 video games
Amsoft games
Amstrad CPC games
Amstrad PCW games
BBC
|
https://en.wikipedia.org/wiki/Program%20temporary%20fix
|
In IBM terminology, a Program temporary fix or Product temporary fix (PTF), the expansion depending on the date, is a single bug fix, or a group of fixes, distributed in a form ready for customers to install.
A PTF normally follows an APAR (Authorized Program Analysis Report), and where an "APAR fix" was issued, the PTF "is a tested APAR" or set of APAR fixes. However, if an APAR is resolved as "Fixed If Next" or "Permanent Restriction" then there may be no PTF fixing it, only a subsequent release.
PTF installation
Initially, installations had to install service via a semi-manual process.
Over time, IBM started to provide service aids such as IMAPTFLE and utilities such as IEBEDIT to simplify the installation of batches of PTFs. For OS/360 and successors, this culminated in System Modification Program (SMP) and System Modification Program/Extended (SMP/E).
For VM, this culminated in Virtual Machine Serviceability Enhancements Staged (VM/SP SES) and VMSES/E.
For DOS/360 and successors, this culminated in Maintain System History Program (MSHP).
PTF usage
PTFs used to be distributed in a group on a so-called Program Update Tape (PUT) or Recommended Service Upgrade (RSU), approximately on a monthly basis. They can now be downloaded straight to the system through a direct connection to IBM support. In some instances IBM will release a "Cumulative PTF Pack", a large number of fixes which function best as a whole, and are sometimes codependent. When this happens, IBM issues compact discs containing the entire PTF pack, which can be loaded directly onto the system from its media drive.
One reason for the use of physical media is size, and related (default) size limits. "By default, the /home file system on VIOS (Virtual I/O Server) for System p is only 10GB in size." If the "Cumulative PTF Pack" is larger than the default, "If you try (to) FTP 17GB of ISO images you will run out of space."
In z/OS, the PTFs are processed using SMP/E (System Modification Program/Extended)
|
https://en.wikipedia.org/wiki/Aeronautical%20Message%20Handling%20System
|
Air Traffic Services Message Handling Services (AMHS) is a standard for aeronautical ground-ground communications (e.g. for the transmission of NOTAM, Flight Plans or Meteorological Data) based on X.400 profiles. It has been defined by the ICAO.
Levels of service
ICAO Doc 9880 Part II defines two fundamental levels of service within the ATSMHS:
the Basic ATSMHS, and
the Extended ATSMHS.
Additionally, ICAO Doc 9880 (Part II, section 3.4) outlines different subsets of the Extended ATSMHS. The Basic ATSMHS performs an operational role similar to the Aeronautical Fixed Telecommunication Network, with a few enhancements. The Extended ATSMHS provides enhanced features but includes the Basic level of service capability; in this way it is ensured that users with Extended Service capabilities can inter-operate, at a basic level, with users having Basic Service capabilities, and vice versa.
The ATSMHS is provided by a set of end systems, which collectively comprise the ATS Message Handling System. The systems co-operate to provide users (human or automated) with a data communication service. The AMHS network is composed of interconnected ATS Message Servers that perform message switching at the application layer (Layer 7 in the OSI model).
Direct users connect to ATS Message Servers by means of ATS Message User Agents. An ATS Message User Agent supporting the Extended level of service will use the Basic level of service to allow communication with users who only support the Basic ATSMHS.
Interoperability
In order to ensure unobstructed communication between the ANSPs, the European Air Navigation Planning Group (EANPG) of ICAO has defined 59 test cases in its EUR AMHS Manual (V5.0, 17/06/2010, Appendix D, AMHS Conformance Tests) and the ASIA/PAC AMHS Manual (Annex B, AMHS Conformance and Compatibility Test, V2.0, 22/09/08), which have to be performed prior to the establishment of bilateral links between the ANSPs. Those tests are conducted using a test engine (AMHS Conformance Test Too
|
https://en.wikipedia.org/wiki/Linear%20stability
|
In mathematics, in the theory of differential equations and dynamical systems, a particular stationary or quasistationary solution to a nonlinear system is called linearly unstable if the linearization of the equation at this solution has the form dr/dt = Ar, where r is the perturbation to the steady state and A is a linear operator whose spectrum contains eigenvalues with positive real part. If all the eigenvalues have negative real part, then the solution is called linearly stable. Other names for linear stability include exponential stability and stability in terms of first approximation. If there exists an eigenvalue with zero real part then the question about stability cannot be solved on the basis of the first approximation and we approach the so-called "centre and focus problem".
Examples
Ordinary differential equation
The differential equation
dx/dt = x(1 − x)
has two stationary (time-independent) solutions: x = 0 and x = 1.
The linearization at x = 0 has the form dr/dt = r. The linearized operator is A0 = 1, and the only eigenvalue is λ = 1. The solutions to this equation grow exponentially, so the stationary point x = 0 is linearly unstable.
To derive the linearization at x = 1, one writes x = 1 + r, where r is a small perturbation. The linearized equation is then dr/dt = −r; the linearized operator is A1 = −1, the only eigenvalue is λ = −1, hence this stationary point is linearly stable.
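A quick numerical check of this example (illustrative Python, not from the article): the linearized eigenvalues are f′(0) = 1 and f′(1) = −1, and integrating small perturbations confirms growth at one point and decay at the other.

def rhs(x):                     # dx/dt = x(1 - x)
    return x * (1.0 - x)

# f'(x) = 1 - 2x gives eigenvalue +1 at x = 0 (unstable) and -1 at x = 1 (stable).
for x0 in (1e-6, 1.0 + 1e-6):
    x, dt = x0, 1e-3
    for _ in range(5_000):      # forward Euler up to t = 5
        x += dt * rhs(x)
    print(x0, "->", x)          # the perturbation at x = 0 grows; the one at x = 1 decays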
Nonlinear Schrödinger Equation
The nonlinear Schrödinger equation
i ∂u/∂t = −∂²u/∂x² − |u|^(2k) u, where u(x, t) ∈ ℂ and k > 0,
has solitary wave solutions of the form φ(x) e^(−iωt), with ω < 0 and φ(x) decaying to zero as x → ±∞.
To derive the linearization at a solitary wave, one considers the solution in the form u(x, t) = (φ(x) + r(x, t)) e^(−iωt). The linearized equation on R = [Re r, Im r] is given by
dR/dt = A R,
where
A = [[0, L0], [−L1, 0]],
with
L0 = −∂²/∂x² − ω − φ^(2k)
and
L1 = −∂²/∂x² − ω − (2k + 1) φ^(2k)
the differential operators.
According to the Vakhitov–Kolokolov stability criterion, when k > 2, the spectrum of A has positive point eigenvalues, so that the linearized equation is linearly (exponentially) unstable; for k < 2, the spectrum of A is purely imaginary, so that the corresponding solitary waves are linearly stable.
It should be mentioned that linear stability does not automatically imply stability;
in particular,
|
https://en.wikipedia.org/wiki/Quasi-isomorphism
|
In homological algebra, a branch of mathematics, a quasi-isomorphism or quism is a morphism A → B of chain complexes (respectively, cochain complexes) such that the induced morphisms Hn(A) → Hn(B) of homology groups (respectively, of cohomology groups) are isomorphisms for all n.
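A standard example (classical, not taken from the excerpt): reduction modulo 2 gives a quasi-isomorphism from a two-term complex of free abelian groups onto ℤ/2 concentrated in degree 0,

\[
\begin{array}{ccccc}
0 \longrightarrow & \mathbb{Z} & \xrightarrow{\;\times 2\;} & \mathbb{Z} & \longrightarrow 0 \\
 & \downarrow & & \downarrow \bmod 2 & \\
0 \longrightarrow & 0 & \longrightarrow & \mathbb{Z}/2\mathbb{Z} & \longrightarrow 0
\end{array}
\]

Both complexes have H1 = 0 and H0 ≅ ℤ/2ℤ, and the induced maps on homology are isomorphisms, so the chain map is a quasi-isomorphism even though it is not an isomorphism of complexes.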
In the theory of model categories, quasi-isomorphisms are sometimes used as the class of weak equivalences when the objects of the category are chain or cochain complexes. This results in a homology-local theory, in the sense of Bousfield localization in homotopy theory.
See also
Derived category
References
Gelfand, Sergei I., Manin, Yuri I. Methods of Homological Algebra, 2nd ed. Springer, 2000.
Algebraic topology
Homological algebra
Equivalence (mathematics)
|
https://en.wikipedia.org/wiki/Vortex-induced%20vibration
|
In fluid dynamics, vortex-induced vibrations (VIV) are motions induced on bodies interacting with an external fluid flow, produced by, or producing, periodic irregularities in the flow.
A classic example is the VIV of an underwater cylinder. How this happens can be seen by putting a cylinder into the water (a swimming pool or even a bucket) and moving it through the water in a direction perpendicular to its axis. Since real fluids always present some viscosity, the flow around the cylinder will be slowed while in contact with its surface, forming a so-called boundary layer. At some point, however, that layer can separate from the body because of its excessive curvature. A vortex is then formed, changing the pressure distribution along the surface. When the vortex does not form symmetrically around the body (with respect to its midplane), different lift forces develop on each side of the body, thus leading to motion transverse to the flow. This motion changes the nature of the vortex formation in such a way as to lead to a limited motion amplitude (in contrast to what would be expected in a typical case of resonance). This process then repeats until the flow rate changes substantially.
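A rough back-of-envelope sketch (illustrative Python, not from the article): for a circular cylinder the vortex-shedding frequency follows the Strouhal relation f_s = St·U/D with St ≈ 0.2 over a wide range of Reynolds numbers, and large-amplitude VIV ("lock-in") is expected when f_s approaches the structure's natural frequency. All numeric values below are assumptions for the example.

def shedding_frequency(U, D, St=0.2):
    # Strouhal relation f_s = St * U / D; St ~ 0.2 for a circular cylinder
    # over a wide range of Reynolds numbers.
    return St * U / D

U, D = 1.0, 0.05      # flow speed (m/s) and cylinder diameter (m), assumed values
f_n = 4.2             # natural frequency of the structure (Hz), assumed value
f_s = shedding_frequency(U, D)            # 4.0 Hz here
print(f_s, abs(f_s - f_n) / f_n < 0.2)    # crude proximity check for possible lock-in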
VIV manifests itself on many different branches of engineering, from cables to heat exchanger tube arrays. It is also a major consideration in the design of ocean structures. Thus, study of VIV is a part of many disciplines, incorporating fluid mechanics, structural mechanics, vibrations, computational fluid dynamics (CFD), acoustics, statistics, and smart materials.
Motivation
They occur in many engineering situations, such as bridges, stacks, transmission lines, aircraft control surfaces, offshore structures, thermowells, engines, heat exchangers, marine cables, towed cables, drilling and production risers in petroleum production, mooring cables, moored structures, tethered structures, buoyancy and spar hulls, pipelines, cable-laying, members of jacketed struc
|
https://en.wikipedia.org/wiki/Nuclear%20magnetic%20resonance%20spectroscopy%20of%20proteins
|
Nuclear magnetic resonance spectroscopy of proteins (usually abbreviated protein NMR) is a field of structural biology in which NMR spectroscopy is used to obtain information about the structure and dynamics of proteins, and also nucleic acids, and their complexes. The field was pioneered by Richard R. Ernst and Kurt Wüthrich at the ETH, and by Ad Bax, Marius Clore, Angela Gronenborn at the NIH, and Gerhard Wagner at Harvard University, among others. Structure determination by NMR spectroscopy usually consists of several phases, each using a separate set of highly specialized techniques. The sample is prepared, measurements are made, interpretive approaches are applied, and a structure is calculated and validated.
NMR involves the quantum-mechanical properties of the central core ("nucleus") of the atom. These properties depend on the local molecular environment, and their measurement provides a map of how the atoms are linked chemically, how close they are in space, and how rapidly they move with respect to each other. These properties are fundamentally the same as those used in the more familiar magnetic resonance imaging (MRI), but the molecular applications use a somewhat different approach, appropriate to the change of scale from millimeters (of interest to radiologists) to nanometers (bonded atoms are typically a fraction of a nanometer apart), a factor of a million. This change of scale requires much higher sensitivity of detection and stability for long term measurement. In contrast to MRI, structural biology studies do not directly generate an image, but rely on complex computer calculations to generate three-dimensional molecular models.
Currently most samples are examined in a solution in water, but methods are being developed to also work with solid samples. Data collection relies on placing the sample inside a powerful magnet, sending radio frequency signals through the sample, and measuring the absorption of those signals. Depending on the environmen
|
https://en.wikipedia.org/wiki/H-bridge
|
An H-bridge is an electronic circuit that switches the polarity of a voltage applied to a load. These circuits are often used in robotics and other applications to allow DC motors to run forwards or backwards. The name is derived from its common schematic diagram representation, with four switching elements configured as the branches of a letter "H" and the load connected as the cross-bar.
Most DC-to-AC converters (power inverters), most AC/AC converters, the DC-to-DC push–pull converter, isolated DC-to-DC converters, most motor controllers, and many other kinds of power electronics use H-bridges.
In particular, a bipolar stepper motor is almost always driven by a motor controller containing two H bridges.
General
H-bridges are available as integrated circuits, or can be built from discrete components.
The term H-bridge is derived from the typical graphical representation of such a circuit. An H-bridge is built with four switches (solid-state or mechanical). When the switches S1 and S4 (according to the first figure) are closed (and S2 and S3 are open) a positive voltage is applied across the motor. By opening S1 and S4 switches and closing S2 and S3 switches, this voltage is reversed, allowing reverse operation of the motor.
Using the nomenclature above, the switches S1 and S2 should never be closed at the same time, as this would cause a short circuit on the input voltage source. The same applies to the switches S3 and S4. This condition is known as shoot-through.
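The safe and unsafe switch combinations can be summarized with a small truth-table check (an illustrative Python sketch, not from the article; S1/S2 and S3/S4 are the two legs, following the figure's naming):

def bridge_state(s1, s2, s3, s4):
    # Closing both switches of one leg shorts the supply: shoot-through.
    if (s1 and s2) or (s3 and s4):
        return "shoot-through: short circuit across the supply (never allowed)"
    if s1 and s4:
        return "forward: positive voltage applied across the load"
    if s2 and s3:
        return "reverse: negative voltage applied across the load"
    if (s1 and s3) or (s2 and s4):
        return "brake: load terminals shorted together"
    return "coast: load disconnected"

print(bridge_state(True, False, False, True))   # forward
print(bridge_state(True, True, False, False))   # shoot-through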
Common usage
An H-bridge is used to supply power to a two-terminal device. By proper arrangement of the switches, the polarity of the power applied to the device can be changed. Two examples are discussed below: a DC motor driver and the transformer of a switching regulator. Note that not all switching states are safe: the "short" cases (see the "DC motor driver" section below) are dangerous to the power source and to the switches.
DC motor driver
Changing the polarity of the power supply to DC moto
|
https://en.wikipedia.org/wiki/Solid-state%20electronics
|
Solid-state electronics are semiconductor electronics: electronic equipment that use semiconductor devices such as transistors, diodes and integrated circuits (ICs). The term is also used as an adjective for devices in which semiconductor electronics that have no moving parts replace devices with moving parts, such as the solid-state relay in which transistor switches are used in place of a moving-arm electromechanical relay, or the solid-state drive (SSD) a type of semiconductor memory used in computers to replace hard disk drives, which store data on a rotating disk.
History
The term "solid-state" became popular at the beginning of the semiconductor era in the 1960s to distinguish this new technology. A semiconductor device works by controlling an electric current consisting of electrons or holes moving within a solid crystalline piece of semiconducting material such as silicon, while the thermionic vacuum tubes it replaced worked by controlling a current of electrons or ions in a vacuum within a sealed tube.
Although the first solid-state electronic device was the cat's whisker detector, a crude semiconductor diode invented around 1904, solid-state electronics started with the invention of the transistor in 1947. Before that, all electronic equipment used vacuum tubes, because vacuum tubes were the only electronic components that could amplify, an essential capability in all electronics. The transistor, which was invented by John Bardeen and Walter Houser Brattain while working under William Shockley at Bell Laboratories in 1947, could also amplify, and replaced vacuum tubes. The first transistor hi-fi system was developed by engineers at GE and demonstrated at the University of Philadelphia in 1955. In terms of commercial production, the Fisher TR-1 was the first "all-transistor" preamplifier, which became available in mid-1956. In 1961, a company named Transis-tronics released a solid-state amplifier, the TEC S-15.
The replacement of bulky, fragile, energy-h
|
https://en.wikipedia.org/wiki/Screw%20axis
|
A screw axis (helical axis or twist axis) is a line that is simultaneously the axis of rotation and the line along which translation of a body occurs. Chasles' theorem shows that each Euclidean displacement in three-dimensional space has a screw axis, and the displacement can be decomposed into a rotation about and a slide along this screw axis.
Plücker coordinates are used to locate a screw axis in space, and consist of a pair of three-dimensional vectors. The first vector identifies the direction of the axis, and the second locates its position. The special case when the first vector is zero is interpreted as a pure translation in the direction of the second vector. A screw axis is associated with each pair of vectors in the algebra of screws, also known as screw theory.
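As a concrete sketch (illustrative numpy code, not from the article): given a displacement x → Rx + d, the axis direction is the rotation axis of R, the slide is the component of d along that axis, and a point on the axis solves (I − R)p = d⊥.

import numpy as np

def screw_axis(R, d):
    # Assumes R is a proper rotation with nonzero angle (a pure translation is
    # the special case described above and is not handled here).
    w, v = np.linalg.eig(R)
    s = np.real(v[:, np.argmin(np.abs(w - 1.0))])   # rotation axis: eigenvector for eigenvalue 1
    s /= np.linalg.norm(s)                          # direction determined up to sign
    h = s @ d                                       # slide along the axis
    d_perp = d - h * s
    # A point p on the axis satisfies (I - R) p = d_perp; the matrix is singular
    # along s, so take the least-squares (minimum-norm) solution.
    p = np.linalg.lstsq(np.eye(3) - R, d_perp, rcond=None)[0]
    return s, p, h

# Quarter-turn about z plus a translation:
R = np.array([[0., -1., 0.], [1., 0., 0.], [0., 0., 1.]])
print(screw_axis(R, np.array([1.0, 0.0, 2.0])))   # axis ~ z through (0.5, 0.5, 0), slide h = 2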
The spatial movement of a body can be represented by a continuous set of displacements. Because each of these displacements has a screw axis, the movement has an associated ruled surface known as a screw surface. This surface is not the same as the axode, which is traced by the instantaneous screw axes of the movement of a body. The instantaneous screw axis, or 'instantaneous helical axis' (IHA), is the axis of the helicoidal field generated by the velocities of every point in a moving body.
When a spatial displacement specializes to a planar displacement, the screw axis becomes the displacement pole, and the instantaneous screw axis becomes the velocity pole, or instantaneous center of rotation, also called an instant center. The term centro is also used for a velocity pole, and the locus of these points for a planar movement is called a centrode.
History
The proof that a spatial displacement can be decomposed into a rotation around, and translation along, a line in space is attributed to Michel Chasles in 1830. More recently, the work of Giulio Mozzi has been identified as presenting a similar result in 1763.
Screw axis symmetry
A screw displacement (also screw operation or rotary translation) i
|
https://en.wikipedia.org/wiki/Event%20%28UML%29
|
An event in the Unified Modeling Language (UML) is a notable occurrence at a particular point in time.
Events can, but do not necessarily, cause state transitions from one state to another in state machines represented by state machine diagrams.
A transition between states occurs only when any guard condition for that transition is satisfied.
References
Unified Modeling Language
Data modeling
Terms in science and technology
|
https://en.wikipedia.org/wiki/Memory%20organisation
|
There are several ways to organise memories with respect to the way they are connected to the cache:
one-word-wide memory organisation
wide memory organisation
interleaved memory organisation
independent memory organisation
One-Word-Wide
The memory is one word wide and connected to the cache via a one-word-wide bus.
Wide
The memory is more than one word wide (usually four words) and connected by an equally wide bus to the low-level cache (which is also wide). From the low-level cache, multiple one-word-wide buses go to a multiplexer (MUX), which selects the correct bus to connect to the high-level cache.
Interleaved
There are several memory banks, each one word wide, sharing a one-word-wide bus. Logic in the memory selects the correct bank when the cache accesses the memory.
Memory interleaving is a way to distribute individual addresses over memory modules. Its aim is to keep as many modules as possible busy as computations proceed. With memory interleaving, the low-order k bits of the memory address typically select the module, so that consecutive addresses fall in different banks.
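A small sketch of that address split (illustrative Python, not from the article), assuming 2^k word-addressed banks:

def interleave(addr, k=2):
    # The low-order k bits select the bank; the remaining bits are the
    # word offset within that bank.
    bank = addr & ((1 << k) - 1)
    offset = addr >> k
    return bank, offset

for addr in range(8):
    print(addr, interleave(addr))   # consecutive addresses rotate through the 4 banks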
Computer memory
See also
Cache hierarchy
Memory hierarchy
Memory geometry
|
https://en.wikipedia.org/wiki/Phylogenetic%20network
|
A phylogenetic network is any graph used to visualize evolutionary relationships (either abstractly or explicitly) between nucleotide sequences, genes, chromosomes, genomes, or species. They are employed when reticulation events such as hybridization, horizontal gene transfer, recombination, or gene duplication and loss are believed to be involved. They differ from phylogenetic trees by the explicit modeling of richly linked networks, by means of the addition of hybrid nodes (nodes with two parents) instead of only tree nodes (a hierarchy of nodes, each with only one parent). Phylogenetic trees are a subset of phylogenetic networks. Phylogenetic networks can be inferred and visualised with software such as SplitsTree, the R package phangorn, and, more recently, Dendroscope. A standard format for representing phylogenetic networks is a variant of Newick format, extended to support networks as well as trees.
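For illustration (a constructed example, not from the article), in the widely used extended Newick ("eNewick") convention a hybrid node is written once with a tag such as #H1 and referenced again from its second parent:

((A,(B)#H1),(#H1,C));

The node tagged #H1 has a parent in each half of the string, so its child B descends from a node with two parents, a reticulation that an ordinary Newick tree cannot express.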
Many kinds and subclasses of phylogenetic networks have been defined based on the biological phenomenon they represent or which data they are built from (hybridization networks, usually built from rooted trees, ancestral recombination graphs (ARGs) from binary sequences, median networks from a set of splits, optimal realizations and reticulograms from a distance matrix), or restrictions to get computationally tractable problems (galled trees, and their generalizations level-k phylogenetic networks, tree-child or tree-sibling phylogenetic networks).
Microevolution
Phylogenetic trees also have trouble depicting microevolutionary events, for example the geographical distribution of muskrat or fish populations of a given species among river networks, because there is no species boundary to prevent gene flow between populations. Therefore, a more general phylogenetic network better depicts these situations.
Rooted vs unrooted
Unrooted phylogenetic network
Let X be a set of taxa. An unrooted phylogenetic network N on X is any undirected graph whose leaves
|
https://en.wikipedia.org/wiki/Island%20gigantism
|
Island gigantism, or insular gigantism, is a biological phenomenon in which the size of an animal species isolated on an island increases dramatically in comparison to its mainland relatives. Island gigantism is one aspect of the more general "island effect" or "Foster's rule", which posits that when mainland animals colonize islands, small species tend to evolve larger bodies, and large species tend to evolve smaller bodies (insular dwarfism). This is itself one aspect of the more general phenomenon of island syndrome, which describes the differences in morphology, ecology, physiology and behaviour of insular species compared to their continental counterparts. Following the arrival of humans and associated introduced predators (dogs, cats, rats, pigs), many giant as well as other island endemics have become extinct (e.g. the dodo (Raphus cucullatus), a giant flightless pigeon whose closest relative is the Nicobar pigeon). A similar size increase, as well as increased woodiness, has been observed in some insular plants, such as the Mapou tree (Cyphostemma mappia) in Mauritius, which is also known as the "Mauritian baobab" although it is a member of the grape family (Vitaceae).
Possible causes
Large mammalian carnivores are often absent on islands because of insufficient range or difficulties in over-water dispersal. In their absence, the ecological niches for large predators may be occupied by birds, reptiles or smaller carnivorans, which can then grow to larger-than-normal size. For example, on prehistoric Gargano Island in the Miocene-Pliocene Mediterranean, on islands in the Caribbean like Cuba, and on Madagascar and New Zealand, some or all apex predators were birds like eagles, falcons and owls, including some of the largest known examples of these groups. However, birds and reptiles generally make less efficient large predators than advanced carnivorans.
Since small size usually makes it easier for herbivores to escape or hide from predators, the decreased predation pressure on islands can allow them to
|
https://en.wikipedia.org/wiki/Families%20of%20Structurally%20Similar%20Proteins%20database
|
Families of Structurally Similar Proteins or FSSP is a database of structurally superimposed proteins generated using the "Distance-matrix ALIgnment" (DALI) algorithm. The database currently contains an extended structural family for each of 330 representative protein chains. Each data set contains structural alignments of one search structure with all other structurally significantly similar proteins in the representative set (remote homologs, < 30% sequence identity), as well as all structures in the Protein Data Bank with 70–30% sequence identity relative to the search structure (medium homologs). Very close homologs (above 70% sequence identity) are excluded, as they rarely have marked structural differences. The alignments of remote homologs are the result of pairwise all-against-all structural comparisons in the set of 330 representative protein chains. All such comparisons are based purely on the 3D co-ordinates of the proteins and are derived by automatic (objective) structure comparison programs. The significance of structural similarity is estimated based on statistical criteria. The FSSP database is available electronically from the EMBL file server and by anonymous ftp (file transfer protocol). The database is helpful for the comparison of protein structures.
See also
CATH
SCOP
References
External links
FSSP Search page at EBI
Protein structure
Protein classification
Biological databases
Protein superfamilies
|
https://en.wikipedia.org/wiki/A/ROSE
|
A/ROSE (the Apple Real-time Operating System Environment) is a small embedded operating system that runs on Apple Computer's "Macintosh Coprocessor Platform", an expansion card for the Apple Macintosh.
The idea was to offer a single "overdesigned" hardware platform on which third party vendors could build practically any product, reducing the otherwise heavy workload of developing a NuBus-based expansion card. However, the MCP cards were fairly expensive, limiting the appeal of the concept. A/ROSE saw very little use, apparently limited solely to Apple's own networking cards for serial I/O, Ethernet, Token Ring and Twinax. GreenSpring Computers developed the RM1260, which is an IndustryPack (IP) carrier card with a 68000 CPU running A/ROSE and is intended for the data acquisition market.
History
A/ROSE and the MCP originally came about in August 1987 during the development of the Macintosh II. While working on various networking products for the new system, the developers realized that the existing classic Mac OS would make any "serious" card difficult to create, due to large latencies and the difficulty of writing complex device drivers. Their solution was to make an "intelligent" NuBus card that was essentially an entire computer on a card, containing its own Motorola 68000 processor, working space in RAM mirrored in the main system, and its own basic operating system. The first version of the system was ready for use in February 1988.
A/ROSE was internally called MR-DOS (Multitasking Realtime Distributed Operating System), but Microsoft (developer of MS-DOS) did not appreciate the name and put pressure on Apple to change its name. Eric M. Trehus, a QA engineer on the Token Ring card that ran A/ROSE reportedly said "A/ROSE by any other name is still MR-DOS."
A/ROSE is infamous for its esoteric purpose, which is generally not understood by Mac end users, as well as for causing many Mac emulators, such as Basilisk II, to produce a system error at boot time.
|
https://en.wikipedia.org/wiki/Coefficient%20of%20restitution
|
The coefficient of restitution (COR, also denoted by e) is the ratio of the final to initial relative speed between two objects after they collide. It normally ranges from 0 to 1, where 1 would be a perfectly elastic collision. A perfectly inelastic collision has a coefficient of 0, but a value of 0 does not necessarily mean the collision is perfectly inelastic. The coefficient is measured in the Leeb rebound hardness test, expressed as 1000 times the COR, but it is only a valid COR for the test, not a universal COR for the material being tested.
The value is almost always less than 1 due to initial translational kinetic energy being lost to rotational kinetic energy, plastic deformation, and heat. It can be more than 1 if there is an energy gain during the collision from a chemical reaction, a reduction in rotational energy, or another internal energy decrease that contributes to the post-collision velocity.
The mathematics were developed by Sir Isaac Newton in 1687. It is also known as Newton's experimental law.
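For a head-on collision, e reduces to separation speed divided by approach speed; a minimal numeric sketch in Python, with made-up velocities:

def coefficient_of_restitution(u1, u2, v1, v2):
    """u1, u2: velocities before impact; v1, v2: velocities after impact
    (all signed values along the line of impact)."""
    return (v2 - v1) / (u1 - u2)

# Ball 1 at +4 m/s hits ball 2 at -2 m/s; afterwards they move at -1 and +2 m/s.
e = coefficient_of_restitution(4.0, -2.0, -1.0, 2.0)
print(e)  # 0.5 -> a partially inelastic collision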
Further details
Line of impact – the line along which e is defined; in the absence of a tangential reaction force between the colliding surfaces, the force of impact is shared along this line between the bodies. During physical contact between the bodies during impact, it is the line along the common normal to the pair of surfaces in contact of the colliding bodies. Hence e is defined as a dimensionless one-dimensional parameter.
Range of values for e – treated as a constant
e is usually a positive, real number between 0 and 1:
e = 0: This is a perfectly inelastic collision.
0 < e < 1: This is a real-world inelastic collision, in which some kinetic energy is dissipated.
e = 1: This is a perfectly elastic collision, in which no kinetic energy is dissipated, and the objects rebound from one another with the same relative speed with which they approached.
e < 0: A COR less than zero would represent a collision in which the separation velocity of the objects has the same direction (sign) as the closing velocity, implyi
|
https://en.wikipedia.org/wiki/Machine%20olfaction
|
Machine olfaction is the automated simulation of the sense of smell. An emerging application in modern engineering, it involves the use of robots or other automated systems to analyze air-borne chemicals. Such an apparatus is often called an electronic nose or e-nose. The development of machine olfaction is complicated by the fact that e-nose devices to date have responded to a limited number of chemicals, whereas odors are produced by unique sets of (potentially numerous) odorant compounds. The technology, though still in the early stages of development, promises many applications, such as:
quality control in food processing, detection and diagnosis in medicine, detection of drugs, explosives and other dangerous or illegal substances, disaster response, and environmental monitoring.
One type of proposed machine olfaction technology is via gas sensor array instruments capable of detecting, identifying, and measuring volatile compounds. However, a critical element in the development of these instruments is pattern analysis, and the successful design of a pattern analysis system for machine olfaction requires a careful consideration of the various issues involved in processing multivariate data: signal-preprocessing, feature extraction, feature selection, classification, regression, clustering, and validation. Another challenge in current research on machine olfaction is the need to predict or estimate the sensor response to aroma mixtures. Some pattern recognition problems in machine olfaction such as odor classification and odor localization can be solved by using time series kernel methods.
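As a sketch of the pattern-analysis stages named above, the following Python snippet chains preprocessing and a simple classifier; it assumes scikit-learn and NumPy are available, and the sensor data are random stand-ins rather than real e-nose readings:

import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
# 60 sniffs x 8 sensors: two odor classes with slightly different responses.
X = np.vstack([rng.normal(0.0, 1.0, (30, 8)), rng.normal(1.0, 1.0, (30, 8))])
y = np.array([0] * 30 + [1] * 30)

# Signal preprocessing (scaling) followed by classification.
model = make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=3))
model.fit(X, y)
print(model.score(X, y))  # training accuracy of the toy classifier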
Detection
There are three basic detection techniques, using conductive-polymer odor sensors (polypyrrole), tin-oxide gas sensors, or quartz-crystal microbalance sensors. E-nose instruments generally comprise (1) an array of sensors of some type, (2) the electronics to interrogate those sensors and produce digital signals, and (3) data processing and user interface software.
The entire s
|
https://en.wikipedia.org/wiki/Rankine%20body
|
The Rankine body, discovered by Scottish physicist and engineer Macquorn Rankine, is a feature of naval architecture involving the flow of liquid around a body/surface.
In fluid mechanics, a fluid flow pattern formed by combining a uniform stream with a source and a sink of equal strengths, with the line joining the source and sink along the stream direction, conforms to the shape of a Rankine body.
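A minimal Python sketch of this superposition, with illustrative values for the stream speed U, the source/sink strength m and the half-spacing a; the dividing streamline psi = 0 traces the closed Rankine body (often called the Rankine oval):

import math

U, m, a = 1.0, 2.0, 1.0   # stream speed, source/sink strength, half-spacing

def stream_function(x, y):
    """psi for a uniform stream + source at (-a, 0) + sink at (+a, 0)."""
    theta1 = math.atan2(y, x + a)   # angle measured from the source
    theta2 = math.atan2(y, x - a)   # angle measured from the sink
    return U * y + (m / (2 * math.pi)) * (theta1 - theta2)

print(stream_function(3.0, 0.0))    # 0.0: the axis outside the body lies on psi = 0
# The body surface is the rest of the psi = 0 streamline; its half-height h
# at x = 0 solves stream_function(0, h) = 0, found here by simple bisection:
lo, hi = 1e-9, 5.0
for _ in range(60):
    mid = (lo + hi) / 2
    if stream_function(0.0, mid) < 0:
        lo = mid
    else:
        hi = mid
print(lo)                           # half-height of the Rankine body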
See also
Rankine half body
External links
Derivation of the Rankine body using potential flow.
Fluid dynamics
|
https://en.wikipedia.org/wiki/Rankine%27s%20method
|
Rankine's method or tangential angle method is an angular technique for laying out circular curves by a combination of chaining and angles at the circumference, fully exploiting the theodolite and making a substantial improvement in accuracy and productivity over existing methods. This method requires access to only one road/path of communication to lay out a curve. Points on the curve are calculated by their angular offset from the path of communication.
Rankine's method is named for its discoverer William John Macquorn Rankine at an early stage of his career. He had been working on railways in Ireland, on the construction of the Dublin and Drogheda line.
Background
This method ensures that any line drawn from the known tangent to the curve is a chord of the curve, by constraining the deflection angle of the line. Since the end points of each chord lie on the curve, the chords can be used to approximate the shape of the actual curve.
Procedure
Let AB be a tangent line/path of communication at the start of a curve; then successive points on the curve can be obtained by drawing a chord of length c_n from point A at an angle
δ_n = 90 c_n / (π R) degrees,
where δ_n is the deflection of the nth chord in degrees,
R is the radius of the circular curve, and
c_n is the arbitrary length of the chord.
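A short Python sketch of setting out such a curve, assuming the relation δ = 90c/(πR) degrees per chord as reconstructed above; the radius and chord length are made-up field values:

import math

R = 200.0          # radius of the circular curve, metres
c = 20.0           # chosen chord length, metres

delta = 90.0 * c / (math.pi * R)   # deflection per chord, in degrees

# Cumulative angle turned from the tangent AB for each successive point:
for n in range(1, 5):
    print(f"point {n}: total deflection {n * delta:.3f} deg")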
See also
Dublin and Drogheda Railway
References
Surveying
Scottish inventions
|
https://en.wikipedia.org/wiki/Pfam
|
Pfam is a database of protein families that includes their annotations and multiple sequence alignments generated using hidden Markov models. The most recent version, Pfam 35.0, was released in November 2021 and contains 19,632 families.
Uses
The general purpose of the Pfam database is to provide a complete and accurate classification of protein families and domains. Originally, the rationale behind creating the database was to have a semi-automated method of curating information on known protein families to improve the efficiency of annotating genomes. The Pfam classification of protein families has been widely adopted by biologists because of its wide coverage of proteins and sensible naming conventions.
It is used by experimental biologists researching specific proteins, by structural biologists to identify new targets for structure determination, by computational biologists to organise sequences and by evolutionary biologists tracing the origins of proteins. Early genome projects, such as human and fly used Pfam extensively for functional annotation of genomic data.
The Pfam website allows users to submit protein or DNA sequences to search for matches to families in the database. If DNA is submitted, a six-frame translation is performed, then each frame is searched. Rather than performing a typical BLAST search, Pfam uses profile hidden Markov models, which give greater weight to matches at conserved sites, allowing better remote homology detection, making them more suitable for annotating genomes of organisms with no well-annotated close relatives.
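A self-contained Python sketch of the six-frame translation step described above (three forward frames plus three frames of the reverse complement); the toy codon table covers only the sample sequence, whereas a real table has 64 entries:

# Tiny codon table, just enough for the sample sequence below.
CODON = {"ATG": "M", "GCC": "A", "AAA": "K", "TTT": "F", "CAT": "H",
         "GGC": "G", "TGA": "*"}

def revcomp(dna):
    """Reverse complement of a DNA string."""
    return dna[::-1].translate(str.maketrans("ACGT", "TGCA"))

def translate(dna):
    """Translate codon by codon; unknown codons become 'X'."""
    return "".join(CODON.get(dna[i:i + 3], "X") for i in range(0, len(dna) - 2, 3))

def six_frames(dna):
    frames = []
    for strand in (dna, revcomp(dna)):
        for offset in (0, 1, 2):
            frames.append(translate(strand[offset:]))
    return frames

for f in six_frames("ATGGCCAAATTT"):
    print(f)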
Pfam has also been used in the creation of other resources such as iPfam, which catalogs domain-domain interactions within and between proteins, based on information in structure databases and mapping of Pfam domains onto these structures.
Features
For each family in Pfam one can:
View a description of the family
Look at multiple alignments
View protein domain architectures
Examine species distribution
|
https://en.wikipedia.org/wiki/Cross-multiplication
|
In mathematics, specifically in elementary arithmetic and elementary algebra, given an equation between two fractions or rational expressions, one can cross-multiply to simplify the equation or determine the value of a variable.
The method is also occasionally known as the "cross your heart" method because lines resembling a heart outline can be drawn to remember which things to multiply together.
Given an equation like
a/b = c/d,
where b and d are not zero, one can cross-multiply to get
ad = bc.
In Euclidean geometry the same calculation can be achieved by considering the ratios as those of similar triangles.
Procedure
In practice, the method of cross-multiplying means that we multiply the numerator of each (or one) side by the denominator of the other side, effectively crossing the terms over: a multiplies d and c multiplies b, giving ad = bc.
The mathematical justification for the method is from the following longer mathematical procedure. If we start with the basic equation
a/b = c/d,
we can multiply the terms on each side by the same number, and the terms will remain equal. Therefore, if we multiply the fraction on each side by the product of the denominators of both sides, bd, we get
(a/b) · bd = (c/d) · bd.
We can reduce the fractions to lowest terms by noting that the two occurrences of b on the left-hand side cancel, as do the two occurrences of d on the right-hand side, leaving
ad = cb,
and we can divide both sides of the equation by any of the elements, in this case d, getting
a = cb/d.
Another justification of cross-multiplication is as follows. Starting with the given equation
a/b = c/d,
multiply by d/d = 1 on the left and by b/b = 1 on the right, getting
ad/bd = cb/db,
and so
ad/bd = cb/bd.
Cancel the common denominator bd = db, leaving
ad = cb.
Each step in these procedures is based on a single, fundamental property of equations. Cross-multiplication is a shortcut, an easily understandable procedure that can be taught to students.
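A quick numeric check of the shortcut in Python, solving a/b = c/x for x with arbitrary sample values:

from fractions import Fraction

a, b, c = Fraction(3), Fraction(4), Fraction(9)
x = b * c / a            # cross-multiplication: a*x = b*c, so x = b*c/a
print(x)                 # 12
print(a / b == c / x)    # True: the original equation holds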
Use
This is a common procedure in mathematics, used to reduce fractions or calculate a value for a given variable in a fraction. If we have an equation
|
https://en.wikipedia.org/wiki/Genome%20Valley
|
Genome Valley is an Indian high-technology business district spread across about 8 km² (3.1 sq mi) in Hyderabad, India. It is located across the suburbs of Turakapally, Shamirpet, Medchal, Uppal, Patancheru, Jeedimetla, Gachibowli and Keesara. Genome Valley has developed as a cluster for biomedical research, training and manufacturing. It is now in its Phase III, which is about 11 km from Phases I and II.
History
Genome Valley was an initiative of N Chandrababu Naidu, the then Chief Minister of Andhra Pradesh and was commissioned in 1999 as S. P. Biotech Park in a public-private partnership with Bharat Biotech International, and its founder Krishna Ella, alongside private infrastructure companies such as Shapoorji Pallonji Group and ICICI Bank.
Alexandria Knowledge Park SEZ
In 2009, the U.S.-based real estate investment trust Alexandria Real Estate Equities announced plans to invest in the bio-cluster, which led to the Alexandria Knowledge Park SEZ. The bio-cluster at Shamirpet holds a certification mark from the United States Patent and Trademark Office and the European Union.
IKP Knowledge Park
The IKP Knowledge Park, spread over 200 acres in Turakapally, is an initiative of ICICI Bank with five "innovation corridors", described as "a first of its kind knowledge-nurturing centre for Indian companies and a knowledge gateway for multinational companies". The first phase of Innovation Corridor I, comprising 10 laboratories of around 3,000 ft² (300 m²) each, is operational and fully occupied. The second phase of Innovation Corridor I, comprising 16 laboratory modules of 1,700 ft² (170 m²) each, is ready for operation.
MN Park
In 2016, Mission Neutral Park acquired specialized R&D assets in Genome Valley from the U.S.-based Alexandria REIT and rechristened them MN Park. It is a collaborative life sciences ecosystem in Genome Valley, Hyderabad, consisting of Grade A R&D facilities.
MN Park is spread over 400 acres including build-up facilitie
|
https://en.wikipedia.org/wiki/Software%20industry%20in%20Telangana
|
The Indian state of Telangana accounts for a significant share of software exports in India. While the majority of the industry is concentrated in Hyderabad, other cities are also becoming significant IT destinations in the state. Hyderabad houses the largest campuses of tech giants like Google, Facebook, Microsoft, Amazon, and Apple outside of the US. In Hyderabad, the core of the business is located in the Financial District, HITEC City, the Madhapur suburb, Kokapet SEZ (Neopolis) and Salarpuria Sattva Knowledge City. As of 2023, Hyderabad has 9,05,715 employees in the IT/ITES sector, working in more than 1,500 companies. The number of startups in Telangana increased from 400 in 2016 to 2,000 in 2022. Hyderabad added two companies to the unicorn startup list in the first two months of 2022.
IT exports from Hyderabad (Telangana) stood second in India at ₹2,41,275 crore (US$32 billion) in FY 2022–23, improving on the previous year. IT sector exports account for 50 per cent of total exports from the state. Telangana contributed 16.77 per cent of Indian IT sector employment as of FY 2023.
History
The first IT tower in Hyderabad was established under the name Intergraph in Begumpet in 1986. The foundation of the software industry in Hyderabad was laid by N. Janardhana Reddy in 1991. HITEC City, nicknamed Cyberabad, was set up with the collaboration of Larsen & Toubro. N. Chandrababu Naidu promoted the slogans "Bye Bye Bangalore" and "Hello Hyderabad" during his tenure, worked to bring in international companies like Microsoft, CA Technologies and Deloitte, and went on to create Vision 2020. He persuaded Bill Gates to set up a Microsoft development centre in Hyderabad; at the time it was Microsoft's only development centre outside the USA. N. Chandrababu Naidu also worked to bring biotechnology companies to Hyderabad, developing Genome Valley, a high-end technology park commissioned in 1999 as S. P. Biotech
|
https://en.wikipedia.org/wiki/Jean-Yves%20B%C3%A9ziau
|
Jean-Yves Beziau (born January 15, 1965, in Orléans, France) is a Swiss professor of logic at the University of Brazil, Rio de Janeiro, and a researcher of the Brazilian Research Council. He is a permanent member and former president of the Brazilian Academy of Philosophy. Before going to Brazil, he was a professor of the Swiss National Science Foundation at the University of Neuchâtel in Switzerland and a researcher at Stanford University working with Patrick Suppes.
Career
Béziau works in the field of logic—in particular, paraconsistent logic, the square of opposition and universal logic. He holds a Maîtrise in Philosophy from Pantheon-Sorbonne University, a DEA in Philosophy from Pantheon-Sorbonne University, a PhD in Philosophy from the University of São Paulo, a MSc and a PhD in Logic and Foundations of Computer Science from Paris Diderot University.
Béziau is the editor-in-chief of the journal Logica Universalis and of the South American Journal of Logic—an online, open-access journal—as well as of the Springer book series Studies in Universal Logic. He is also the editor of College Publications' book series Logic PhDs.
He has launched four major international series of events: UNILOG (World Congress and School on Universal Logic), SQUARE (World Congress on the Square of Opposition), WOCOLOR (World Congress on Logic and Religion), LIQ (Logic in Question).
Béziau created the World Logic Day (January 14).
Selected publications
"What is paraconsistent logic?" In D. Batens et al. (eds.), Frontiers of Paraconsistent Logic, Research Studies Press, Baldock, 2000, pp. 95–111.
Handbook of Paraconsistency (ed. with Walter Carnielli and Dov Gabbay). London: College Publication, 2007.
"Semantic computation of truth based on associations already learned" (with Patrick Suppes), Journal of Applied Logic, 2 (2004), pp. 457–467.
"From paraconsistent logic to universal logic", Sorites, 12 (2001), pp. 5–32.
Logica Universalis: Towards a General Theory of Logic (ed.)
|
https://en.wikipedia.org/wiki/Principle%20of%20minimum%20energy
|
The principle of minimum energy is essentially a restatement of the second law of thermodynamics. It states that for a closed system, with constant external parameters and entropy, the internal energy will decrease and approach a minimum value at equilibrium. External parameters generally means the volume, but may include other parameters which are specified externally, such as a constant magnetic field.
In contrast, for isolated systems (and fixed external parameters), the second law states that the entropy will increase to a maximum value at equilibrium. An isolated system has a fixed total energy and mass. A closed system, on the other hand, is a system which is connected to another, and cannot exchange matter (i.e. particles), but can transfer other forms of energy (e.g. heat), to or from the other system. If, rather than an isolated system, we have a closed system, in which the entropy rather than the energy remains constant, then it follows from the first and second laws of thermodynamics that the energy of that system will drop to a minimum value at equilibrium, transferring its energy to the other system. To restate:
The maximum entropy principle: For a closed system with fixed internal energy (i.e. an isolated system), the entropy is maximized at equilibrium.
The minimum energy principle: For a closed system with fixed entropy, the total energy is minimized at equilibrium.
Mathematical explanation
The total energy of the system is U(S, X1, X2, ...), where S is entropy and the Xi are the other extensive parameters of the system (e.g. volume, particle number, etc.). The entropy of the system may likewise be written as a function of the other extensive parameters as S(U, X1, X2, ...). Suppose that X is one of the Xi which varies as a system approaches equilibrium, and that it is the only such parameter which is varying. The principle of maximum entropy may then be stated as:
(∂S/∂X) = 0 and (∂²S/∂X²) < 0 at equilibrium, the derivatives being taken at constant internal energy U.
The first condition states that entropy is at an extremum, and the second condition states
|
https://en.wikipedia.org/wiki/Polyconic%20projection%20class
|
Polyconic can refer either to a class of map projections or to a specific projection known less ambiguously as the American polyconic projection. Polyconic as a class refers to those projections whose parallels are all non-concentric circular arcs, except for a straight equator, and the centers of these circles lie along a central axis. This description applies to projections in equatorial aspect.
Polyconic projections
Some of the projections that fall into the polyconic class are:
American polyconic projection—each parallel becomes a circular arc having true scale, the same scale as the central meridian
Latitudinally equal-differential polyconic projection
Rectangular polyconic projection
Van der Grinten projection—projects entire earth into one circle; all meridians and parallels are arcs of circles.
Nicolosi globular projection—typically used to project a hemisphere into a circle; all meridians and parallels are arcs of circles.
A series of polyconic projections, each in a circle, was also presented by Hans Maurer in 1922, who also presented an equal-area polyconic in 1935. Another series by Georgiy Aleksandrovich Ginzburg appeared starting in 1949.
Most polyconic projections, when used to map the entire sphere, produce an "apple-shaped" map of the world.
There are many "apple-shaped" projections, almost all of them obscure.
See also
List of map projections
References
External links
Table of examples and properties of all common projections, from radicalcartography.net
Map projections
|
https://en.wikipedia.org/wiki/HNCA%20experiment
|
HNCA is a 3D triple-resonance NMR experiment commonly used in the field of protein NMR. The name derives from the experiment's magnetization transfer pathway: The magnetization of the amide proton of an amino acid residue is transferred to the amide nitrogen, and then to the alpha carbons of both the starting residue and the previous residue in the protein's amino acid sequence. In contrast, the complementary HNCOCA experiment
transfers magnetization only to the alpha carbon of the previous residue. The HNCA experiment is used, often in tandem with HNCOCA, to assign alpha carbon resonance signals to specific residues in the protein. This experiment requires a purified sample of protein prepared with 13C and 15N isotopic labelling, at a concentration greater than 0.1 mM, and is thus generally only applied to recombinant proteins.
The spectrum produced by this experiment has 3 dimensions: a proton axis, a 15N axis and a 13C axis. For residue i, peaks will appear at {HN(i), N(i), Cα(i)} and {HN(i), N(i), Cα(i-1)}, while for the complementary HNCOCA experiment peaks appear only at {HN(i), N(i), Cα(i-1)}. Together, these two experiments reveal the alpha carbon chemical shift for each amino acid residue in a protein, and provide information linking adjacent residues in the protein's sequence.
References
Citations
General references
Protein NMR Spectroscopy : Principles and Practice (1995) John Cavanagh, Wayne J. Fairbrother, Arthur G. Palmer III, Nicholas J. Skelton, Academic Press
Protein methods
Biophysics
Protein structure
Nuclear magnetic resonance experiments
|
https://en.wikipedia.org/wiki/HNCOCA%20experiment
|
HNCOCA is a 3D triple-resonance NMR experiment commonly used in the field of protein NMR. The name derives from the experiment's magnetization transfer pathway: The magnetization of the amide proton of an amino acid residue is transferred to the amide nitrogen, and then to the alpha carbon of the previous residue in the protein's amino acid sequence. In contrast, the complementary HNCA experiment transfers magnetization to the alpha carbons of both the starting residue and the previous residue in the sequence. The HNCOCA experiment is used, often in tandem with HNCA, to assign alpha carbon resonance signals to specific residues in the protein. This experiment requires a purified sample of protein prepared with 13C and 15N isotopic labelling, at a concentration greater than 0.1 mM, and is thus generally only applied to recombinant proteins.
The spectrum produced by this experiment has 3 dimensions: A proton axis, a 15N axis and a 13C axis. For residue i peaks will appear at {HN(i), N(i), Cα(i-1)} only, while for the complementary HNCA experiment peaks appear at {HN(i), N(i), Cα(i-1)} and {HN(i), N(i), Cα (i)}. Together, these two experiments reveal the alpha carbon chemical shift for each amino acid residue in a protein, and provide information linking adjacent residues in the protein's sequence.
References
Citations
General references
Protein methods
Biophysics
Protein structure
Nuclear magnetic resonance experiments
|
https://en.wikipedia.org/wiki/Set-theoretic%20topology
|
In mathematics, set-theoretic topology is a subject that combines set theory and general topology. It focuses on topological questions that are independent of Zermelo–Fraenkel set theory with the axiom of choice (ZFC).
Objects studied in set-theoretic topology
Dowker spaces
In the mathematical field of general topology, a Dowker space is a topological space that is T4 but not countably paracompact.
Dowker conjectured that there were no Dowker spaces, and the conjecture was not resolved until M. E. Rudin constructed one in 1971. Rudin's counterexample is a very large space (of cardinality (ℵ_ω)^(ℵ_0)) and is generally not well-behaved. Zoltán Balogh gave the first ZFC construction of a small (cardinality continuum) example, which was more well-behaved than Rudin's. Using PCF theory, M. Kojman and S. Shelah constructed a subspace of Rudin's Dowker space of cardinality ℵ_(ω+1) that is also Dowker.
Normal Moore spaces
A famous problem is the normal Moore space question, a question in general topology that was the subject of intense research. The answer to the normal Moore space question was eventually proved to be independent of ZFC.
Cardinal functions
Cardinal functions are widely used in topology as a tool for describing various topological properties. Below are some examples. (Note: some authors, arguing that "there are no finite cardinal numbers in general topology", prefer to define the cardinal functions listed below so that they never take on finite cardinal numbers as values; this requires modifying some of the definitions given below, e.g. by adding "+ ℵ_0" to the right-hand side of the definitions, etc.)
Perhaps the simplest cardinal invariants of a topological space X are its cardinality and the cardinality of its topology, denoted respectively by |X| and o(X).
The weight w(X) of a topological space X is the smallest possible cardinality of a base for X. When w(X) ≤ ℵ_0, the space X is said to be second countable.
The π-weight of a space X is the smallest cardinality of a π-base for X. (A π-base is
|
https://en.wikipedia.org/wiki/True%20north
|
True north (also called geodetic north or geographic north) is the direction along Earth's surface towards the place where the imaginary rotational axis of the Earth intersects the surface of the Earth. That place is called the True North Pole. True south is the direction opposite to the true north. North per se is one of the cardinal directions, a system of naming orientations on the Earth. There are multiple ways of determining the North in different contexts.
It is important to distinguish true north from magnetic north, the direction a compass indicates towards the Magnetic North Pole, a less steady location close to the True North Pole that is determined by the Earth's magnetic field. Due to fundamental limitations in map projection, true north also differs from grid north, which is marked by the direction of the grid lines on a typical printed map. However, the longitude lines on a globe lead to the true poles, because the three-dimensional representation avoids those limitations.
The celestial pole is the location on the imaginary celestial sphere where an imaginary extension of the rotational axis of the Earth intersects the celestial sphere. Within a margin of error of 1°, the true north direction can be approximated by the position of the pole star Polaris which would currently appear to be very close to the intersection, tracing a tiny circle in the sky each sidereal day. Due to the axial precession of Earth, true north rotates in an arc with respect to the stars that takes approximately 25,000 years to complete. Around 2101–2103, Polaris will make its closest approach to the celestial north pole (extrapolated from recent Earth precession). The visible star nearest the north celestial pole 5,000 years ago was Thuban.
On maps published by the United States Geological Survey (USGS) and the United States Armed Forces, true north is marked with a line terminating in a five-pointed star. The east and west edges of the USGS topographic quadrangle maps of the United States ar
|
https://en.wikipedia.org/wiki/Zweikanalton
|
Zweikanalton ("two-channel sound") or A2 Stereo, is an analog television sound transmission system used in Germany, Austria, Australia, Switzerland, Netherlands and some other countries that use or used PAL-B or PAL-G. TV3 Malaysia formerly used Zweikanalton on its UHF analogue transmission frequency (Channel 29), while NICAM was instead used on its VHF analogue transmission frequency (Channel 12). South Korea also formerly utilised a modified version of Zweikanalton for its NTSC analogue television system until 31 December 2012. It relies on two separate FM carriers.
This offers a relatively high separation between the channels (compared to a subcarrier-based multiplexing system) and can thus be used for bilingual broadcasts as well as stereo. Unlike the competing NICAM standard, Zweikanalton is an analog system.
How it works
A second FM sound carrier, containing the right channel for stereo, is transmitted at a frequency 242 kHz higher than the existing mono FM sound carrier, and channel mixing is used in the receiver to derive the left channel.
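A toy Python illustration of this receiver-side mixing, following this article's convention that the first carrier carries the sum (L+R) and the second carries R; real receivers work with a scaled sum, and the sample amplitudes are made up:

left_src, right_src = 0.8, 0.3       # sample audio amplitudes

carrier1 = left_src + right_src      # mono-compatible sum, (L+R)
carrier2 = right_src                 # right channel, R

right = carrier2
left = carrier1 - carrier2           # mixing recovers L = (L+R) - R
print(left, right)                   # 0.8 0.3 -> matches the sources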
The second sound carrier also contains a 54.6875 kHz pilot tone to indicate whether the transmission is stereo or bilingual.
The pilot tone is 50% amplitude-modulated with 117.5 Hz for stereo or 274.1 Hz for bilingual.
a. The second sound carrier frequency of D/K systems varies from country to country, and sometimes manufacturers divide them into DK1/DK2/DK3 systems.
b. The video bandwidth is reduced.
Zweikanalton can be adapted to any existing analogue television system, and modern PAL or SECAM television receivers generally include a sound detector IC that can decode both Zweikanalton and NICAM.
Zweikanalton can carry either a completely separate audio program, or can be used for stereo sound transmission. In the latter case, the first FM carrier carries (L+R) for compatibility, while the second carrier carries R (not L-R.) After combining the two channels, this method improves the signal-to-noise ratio by reducing the correlated n
|
https://en.wikipedia.org/wiki/SMC%20protein
|
SMC complexes represent a large family of ATPases that participate in many aspects of higher-order chromosome organization and dynamics. SMC stands for Structural Maintenance of Chromosomes.
Classification
Eukaryotic SMCs
Eukaryotes have at least six SMC proteins in individual organisms, and they form three distinct heterodimers with specialized functions:
A pair of SMC1 and SMC3 constitutes the core subunits of the cohesin complexes involved in sister chromatid cohesion. SMC1 and SMC3 also have functions in the repair of DNA double-strand breaks in the process of homologous recombination.
Likewise, a pair of SMC2 and SMC4 acts as the core of the condensin complexes implicated in chromosome condensation. SMC2 and SMC4 have a function in DNA repair as well. Condensin I plays a role in single-strand break repair but not in double-strand break repair. The opposite is true for condensin II, which plays a role in homologous recombination.
A dimer composed of SMC5 and SMC6 functions as part of a yet-to-be-named complex implicated in DNA repair and checkpoint responses.
Each complex contains a distinct set of non-SMC regulatory subunits. Some organisms have variants of SMC proteins. For instance, mammals have a meiosis-specific variant of SMC1, known as SMC1β. The nematode Caenorhabditis elegans has an SMC4-variant that has a specialized role in dosage compensation.
The following table shows the SMC proteins names for several model organisms and vertebrates:
Prokaryotic SMCs
SMC proteins are conserved from bacteria to humans. Most bacteria have a single SMC protein in individual species that forms a homodimer. SMC proteins have recently been shown to aid the segregation of the daughter cells' DNA at the origin of replication, guaranteeing proper segregation. In a subclass of Gram-negative bacteria, including Escherichia coli, a distantly related protein known as MukB plays an equivalent role.
Molecular structure
Primary structure
SMC proteins are 1,000–1,500 amino acids long. They h
|
https://en.wikipedia.org/wiki/First%20flush
|
First flush is the initial surface runoff of a rainstorm. During this phase, water pollution entering storm drains in areas with high proportions of impervious surfaces is typically more concentrated compared to the remainder of the storm. Consequently, these high concentrations of urban runoff result in high levels of pollutants discharged from storm sewers to surface waters.
First flush effect
The term "first flush effect" refers to rapid changes in water quality (pollutant concentration or load) that occur after early season rains. Soil and vegetation particles wash into streams; sediments and other accumulated organic particles on the river bed are re-suspended, and dissolved substances from soil and shallow groundwater can be flushed into streams. Recent research has shown that this effect has not been observed in relatively pervious areas.
The term is often also used to address the first flood after a dry period, which is supposed to contain higher concentrations than a subsequent one. This is referred to as "first flush flood." There are various definitions of the first flush phenomenon.
First foul flush
Storm water runoff in a combined sewer produces a first foul flush with a suspension of accumulated sanitary solids from the sewer in addition to pollutants from surface runoff. Inflow may produce a foul flush effect in sanitary sewers if flows peak during wet weather. As flow rates increase above average, a relatively small percentage of the total flow contains a disproportionately large percentage of the total pollutant mass associated with overall flow volume through the peak flow event. Sewer solids deposition during low flow periods and subsequent resuspension during peak flow events is the major pollutant source for the first-flush combined-sewer overflow (CSO) phenomenon.
Sanitary sewage solids can either go through the system or settle out in laminar flow portions of the sewer to be available for washout during peak flows. The wetted perimeter of
|
https://en.wikipedia.org/wiki/Index%20of%20biomedical%20engineering%20articles
|
Articles related specifically to biomedical engineering include:
A
Artificial heart —
Artificial heart valve —
Artificial intelligence —
Artificial limb —
Artificial pacemaker —
Automated external defibrillator —
B
Bachelor of Science in Biomedical Engineering—
Bedsores—
Biochemistry —
Biochemistry topics list —
Bioelectrochemistry—
Bioelectronics—
Bioimpedance —
Bio-implants —
Bioinformatics —
Biology —
Biology topics list —
Biomechanics —
Biomedical engineering —
Biomedical imaging —
Biomedical Imaging Resource —
Bionics —
Biotechnology —
Biotelemetry —
Biothermia —
BMES —
Brain–computer interface —
Brain implant
C
Cell engineering —
Chemistry —
Chemistry topics list —
Clinical engineering —
Cochlear implant —
Corrective lens —
Crutch —
D
Dental implant —
Dialysis machines —
Diaphragmatic pacemaker —
E
Engineering —
F
Functional electrical stimulation
G
Genetic engineering —
Genetic engineering topics —
Genetics —
H
Health care —
Heart-lung machine —
Heart rate monitor —
I
Implant —
Implantable cardioverter-defibrillator —
Infusion pump —
Instrumentation for medical devices —
J
K
L
Laser applications in medicine —
M
Magnetic resonance imaging —
Maxillo-facial prosthetics —
Medical equipment —
Medical imaging —
Medical research —
Medication —
Medicine —
Microfluidics —
Molecular biology —
Molecular biology topics —
N
Nanoengineering —
Nano-scaffold —
Nanotechnology —
Neural engineering —
Neurally controlled animat —
Neuroengineering —
Neuroprosthetics —
Neurostimulator —
Neurotechnology —
O
Ocular prosthetics —
Optical imaging —
Optical spectroscopy —
Orthosis —
P
Pharmacology —
Physiological system modelling —
Positron emission tomography —
Prosthesis —
Polysomnograph —
Q
R
Radiological imaging —
Radiation therapy —
Reliability engineering —
Remote physiological monitoring —
Replacement joint —
Retinal implant —
S
Safety engineering —
Stem cell —
T
Tissue engineering —
Tissue viability —
U
V
W
X
X-ray —
Z
Biomedical engineering
Biomedical
|
https://en.wikipedia.org/wiki/Index%20of%20chemical%20engineering%20articles
|
This is an alphabetical list of articles pertaining specifically to chemical engineering.
A
Absorption --
Adsorption --
Analytical chemistry --
B
Bioaccumulate --
Biochemical engineering --
Biochemistry --
Biochemistry topics list --
Bioinformatics --
Biology --
Bioprocess Engineering --
Biomolecular engineering --
Bioinformatics --
Biomedical engineering --
Bioseparation --
Biotechnology --
Bioreactor --
Biotite --
C
Catalysis --
Catalytic cracking --
Catalytic reforming --
Catalytic reaction engineering --
Ceramics --
Certified Chartered Chemical Engineers --
Chartered Chemical Engineers --
Chemical engineering --
Chemical kinetics --
Chemical reaction --
Chemical synthesis --
Chemical vapor deposition (CVD) --
Chemical solution deposition --
Chemistry --
Chromatographic separation --
Circulating fluidized bed --
Combustion --
Computational fluid dynamics (CFD) --
Conservation of energy --
Conservation of mass --
Conservation of momentum --
Crystallization processes --
D
Deal-Grove model --
Dehumidification --
Dehydrogenation --
Depressurization --
Desorption --
Desulfonation --
Desulfurization --
Diffusion --
Distillation --
Drag coefficient --
Drying --
E
Electrochemical engineering --
Electrodialysis --
Electrokinetic phenomena --
Electrodeposition --
Electrolysis --
Electrolytic reduction --
Electroplating --
Electrostatic precipitation --
Electrowinning --
Emulsion --
Energy --
Engineering --
Engineering economics --
Enzymatic reaction --
F
Filtration --
Fluid dynamics --
Flow battery --
Fuel cell --
Fuel technology --
G
Gasification --
H
Heat transfer --
History of chemical engineering --
Hydrometallurgy --
I
Immobilization --
Inorganic chemistry --
Ion exchange --
J
K
Kinetics (physics) --
L
Laboratory --
Leaching --
M
Mass balance --
Mass transfer --
Materials science --
Medicinal chemistry --
Microelectronics --
Microfluidics --
Microreaction technology --
Mineral processing --
Mixing --
Momentum transfer --
|
https://en.wikipedia.org/wiki/Hypostatic%20abstraction
|
Hypostatic abstraction in mathematical logic, also known as hypostasis or subjectal abstraction, is a formal operation that transforms a predicate into a relation; for example "Honey is sweet" is transformed into "Honey has sweetness". The relation is created between the original subject and a new term that represents the property expressed by the original predicate.
Description
Technical definition
Hypostasis changes a propositional formula of the form X is Y to another one of the form X has the property of being Y or X has Y-ness. The logical functioning of the second object Y-ness consists solely in the truth-values of those propositions that have the corresponding abstract property Y as the predicate. The object of thought introduced in this way may be called a hypostatic object and in some senses an abstract object and a formal object.
The above definition is adapted from the one given by Charles Sanders Peirce. As Peirce describes it, the main point about the formal operation of hypostatic abstraction, insofar as it operates on formal linguistic expressions, is that it converts a predicative adjective or predicate into an extra subject, thus increasing by one the number of "subject" slots—called the arity or adicity—of the main predicate.
Application
The grammatical trace of this hypostatic transformation is a process that extracts the adjective "sweet" from the predicate "is sweet", replacing it by a new, increased-arity predicate "possesses", and as a by-product of the reaction, as it were, precipitating out the substantive "sweetness" as a second subject of the new predicate.
The abstraction of hypostasis takes the concrete physical sense of "taste" found in "honey is sweet" and ascribes to it the formal metaphysical characteristics in "honey has sweetness". This is the fallacy of reification.
See also
References
Sources
Abstraction
Mathematical analysis
Mathematical logic
Mathematical relations
Concepts in metaphysics
Charles Sanders
|
https://en.wikipedia.org/wiki/UAProf
|
The UAProf (User Agent Profile) specification is concerned with capturing capability and preference information for wireless devices. This information can be used by content providers to produce content in an appropriate format for the specific device.
UAProf is related to the Composite Capability/Preference Profiles Specification created by the World Wide Web Consortium. UAProf is based on RDF.
UAProf files typically have the file extensions rdf or xml, and are usually served with mimetype application/xml. They are an XML-based file format. The RDF format means that the document schema is extensible.
A UAProf file describes the capabilities of a mobile handset, including Vendor, Model, Screensize, Multimedia Capabilities, Character Set support, and more. Recent UAProfiles have also begun to include data conforming to MMS, PSS5 and PSS6 schemas, which includes much more detailed data about video, multimedia, streaming and MMS capabilities.
A mobile handset sends a header within an http request, containing the URL to its UAProf. The http header is usually X-WAP-Profile:, but sometimes may look more like 19-Profile:, WAP-Profile: or a number of other similar headers.
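A hedged Python sketch of extracting the advertised UAProf URL from a request's headers, checking the common header spellings mentioned above; the example request and its URL are made up:

PROFILE_HEADERS = ["x-wap-profile", "wap-profile", "19-profile"]

def uaprof_url(headers):
    """Return the advertised UAProf URL, or None if no profile header is present."""
    lowered = {k.lower(): v for k, v in headers.items()}
    for name in PROFILE_HEADERS:
        if name in lowered:
            # The value is conventionally a quoted URL.
            return lowered[name].strip().strip('"')
    return None

request_headers = {"X-WAP-Profile": '"http://example.com/uaprof/phone123.xml"'}
print(uaprof_url(request_headers))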
UAProf production for a device is voluntary: for GSM devices, the UAProf is normally produced by the vendor of the device (e.g. Nokia, Samsung, LG) whereas for CDMA / BREW devices it's more common for the UAProf to be produced by the telecommunications company.
A content delivery system (such as a WAP site) can use UAProf to adapt content for display, or to decide what items to offer for download. However, drawbacks to relying solely on UAProf include:
Not all devices have UAProfs (including many new Windows Mobile devices, iDen handsets, or legacy handsets)
Not all advertised UAProfs are available (about 20% of links supplied by handsets are dead or unavailable, according to figures from UAProfile.com)
UAProf can contain schema or data errors which can cause parsing to fail
Retrieving
|
https://en.wikipedia.org/wiki/Tonelli%E2%80%93Shanks%20algorithm
|
The Tonelli–Shanks algorithm (referred to by Shanks as the RESSOL algorithm) is used in modular arithmetic to solve for r in a congruence of the form r2 ≡ n (mod p), where p is a prime: that is, to find a square root of n modulo p.
Tonelli–Shanks cannot be used for composite moduli: finding square roots modulo composite numbers is a computational problem equivalent to integer factorization.
An equivalent, but slightly more redundant version of this algorithm was developed by
Alberto Tonelli
in 1891. The version discussed here was developed independently by Daniel Shanks in 1973, who explained:
My tardiness in learning of these historical references was because I had lent Volume 1 of Dickson's History to a friend and it was never returned.
According to Dickson, Tonelli's algorithm can take square roots of x modulo prime powers pλ apart from primes.
Core ideas
Given a non-zero n and an odd prime p, Euler's criterion tells us that n has a square root (i.e., n is a quadratic residue) if and only if:
n^((p−1)/2) ≡ 1 (mod p).
In contrast, if a number z has no square root (is a non-residue), Euler's criterion tells us that:
z^((p−1)/2) ≡ −1 (mod p).
It is not hard to find such a z, because half of the integers between 1 and p−1 have this property. So we assume that we have access to such a non-residue.
By (normally) dividing by 2 repeatedly, we can write p−1 as Q·2^S, where Q is odd. Note that if we try
R ≡ n^((Q+1)/2) (mod p),
then R² ≡ n^(Q+1) = n·n^Q (mod p). If n^Q ≡ 1 (mod p), then R is a square root of n. Otherwise, for M = S, we have R and t ≡ n^Q (mod p) satisfying:
R² ≡ n·t (mod p); and
t is a 2^(M−1)-th root of 1 (because t^(2^(M−1)) = t^(2^(S−1)) = n^(Q·2^(S−1)) = n^((p−1)/2) ≡ 1 (mod p)).
If, given a choice of R and t for a particular M satisfying the above (where R is not a square root of n), we can easily calculate another R and t for M−1 such that the above relations hold, then we can repeat this until t becomes a 2^0-th root of 1, i.e., t = 1. At that point R is a square root of n.
We can check whether t is a 2^(M−2)-th root of 1 by squaring it M−2 times and checking whether it is 1. If it is, then we do not need to do anything, as the same choice of R and t works. But if it is not, t^(2^(M−2)) must
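Putting the pieces together, here is a compact Python implementation of the full algorithm; it is a standard formulation rather than Shanks's exact presentation, and it assumes p is an odd prime and n a quadratic residue mod p:

def tonelli_shanks(n, p):
    """Return r with r*r % p == n % p, for odd prime p and quadratic residue n."""
    assert pow(n, (p - 1) // 2, p) == 1, "n is not a quadratic residue mod p"
    # Write p - 1 = Q * 2**S with Q odd.
    q, s = p - 1, 0
    while q % 2 == 0:
        q //= 2
        s += 1
    if s == 1:                      # p ≡ 3 (mod 4): direct formula
        return pow(n, (p + 1) // 4, p)
    # Find a quadratic non-residue z by trial (half of all candidates qualify).
    z = 2
    while pow(z, (p - 1) // 2, p) != p - 1:
        z += 1
    m, c = s, pow(z, q, p)
    t, r = pow(n, q, p), pow(n, (q + 1) // 2, p)
    while t != 1:
        # Find the least i, 0 < i < m, with t**(2**i) == 1 (mod p).
        i, t2 = 0, t
        while t2 != 1:
            t2 = t2 * t2 % p
            i += 1
        # Update R and t for the smaller M, as described above.
        b = pow(c, 1 << (m - i - 1), p)
        m, c = i, b * b % p
        t, r = t * c % p, r * b % p
    return r

print(tonelli_shanks(5, 41))   # 28, since 28*28 = 784 ≡ 5 (mod 41)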
|
https://en.wikipedia.org/wiki/Millennium%20Mathematics%20Project
|
The Millennium Mathematics Project (MMP) was set up within the University of Cambridge in England as a joint project between the Faculties of Mathematics and Education in 1999. The MMP aims to support maths education for pupils of all abilities from ages 5 to 19 and promote the development of mathematical skills and understanding, particularly through enrichment and extension activities beyond the school curriculum, and to enhance the mathematical understanding of the general public. The project was directed by John Barrow from 1999 until September 2020.
Programmes
The MMP includes a range of complementary programmes:
The NRICH website publishes free mathematics education enrichment material for ages 5 to 19. NRICH material focuses on problem-solving, building core mathematical reasoning and strategic thinking skills. In the academic year 2004/5 the website attracted over 1.7 million site visits (more than 49 million hits).
Plus Magazine is a free online maths magazine for age 15+ and the general public. In 2004/5, Plus attracted over 1.3 million website visits (more than 31 million hits). The website won the Webby award in 2001 for the best Science site on the Internet.
The Motivate video-conferencing project links university mathematicians and scientists to primary and secondary schools in areas of the UK from Jersey and Belfast to Glasgow and inner-city London, with international links to Pakistan, South Africa, India and Singapore.
The project has also developed a Hands On Maths Roadshow presenting creative methods of exploring mathematics, and in 2004 took on the running of Simon Singh's Enigma schools workshops, exploring maths through cryptography and codebreaking. Both are taken to primary and secondary schools and public venues such as shopping centres across the UK and Ireland. James Grime is the Enigma Project Officer and gives talks in schools and to the general public about the history and mathematics of code breaking - including the demonstration of
|
https://en.wikipedia.org/wiki/Continuous%20symmetry
|
In mathematics, continuous symmetry is an intuitive idea corresponding to the concept of viewing some symmetries as motions, as opposed to discrete symmetry, e.g. reflection symmetry, which is invariant under a kind of flip from one state to another. However, a discrete symmetry can always be reinterpreted as a subset of some higher-dimensional continuous symmetry; e.g. reflection of a two-dimensional object in three-dimensional space can be achieved by continuously rotating that object 180 degrees across a non-parallel plane.
Formalization
The notion of continuous symmetry has largely and successfully been formalised in the mathematical notions of topological group, Lie group and group action. For most practical purposes continuous symmetry is modelled by a group action of a topological group that preserves some structure. Particularly, let f : X → Y be a function, and G a group that acts on X; then a subgroup H ⊆ G is a symmetry of f if f(h · x) = f(x) for all h ∈ H and all x ∈ X.
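A numeric Python illustration of this definition: the rotation group acting on the plane is a symmetry of f(x, y) = x² + y², since f is unchanged along every rotation in the one-parameter family:

import math

def f(x, y):
    return x * x + y * y

def rotate(theta, x, y):
    """Action of the rotation by angle theta on the point (x, y)."""
    return (x * math.cos(theta) - y * math.sin(theta),
            x * math.sin(theta) + y * math.cos(theta))

x, y = 3.0, 4.0
for theta in (0.1, 1.0, 2.5):
    print(abs(f(*rotate(theta, x, y)) - f(x, y)) < 1e-9)  # True each time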
One-parameter subgroups
The simplest motions follow a one-parameter subgroup of a Lie group, such as the Euclidean group of three-dimensional space. For example translation parallel to the x-axis by u units, as u varies, is a one-parameter group of motions. Rotation around the z-axis is also a one-parameter group.
Noether's theorem
Continuous symmetry has a basic role in Noether's theorem in theoretical physics, in the derivation of conservation laws from symmetry principles, specifically for continuous symmetries. The search for continuous symmetries only intensified with the further developments of quantum field theory.
See also
Goldstone's theorem
Infinitesimal transformation
Noether's theorem
Sophus Lie
Motion (geometry)
Circular symmetry
References
Symmetry
Lie groups
Group actions (mathematics)
|
https://en.wikipedia.org/wiki/Application%20directory
|
An application directory is a grouping of software code, help files and resources that together comprise a complete software package but are presented to the user as a single object.
They are currently used in RISC OS and the ROX Desktop, and also form the basis of the Zero Install application distribution system. Similar technology includes VMware ThinApp, and the NEXTSTEP/GNUstep/Mac OS X concept of application bundles. Their heritage lies in the system for automatically launching software stored on floppy disk on Acorn's earlier 8-bit micros such as the BBC Micro (the !BOOT file).
Bundling various files in this manner allows tools for manipulating applications to be replaced by tools for manipulating the file system. Applications can often be "installed" simply by dragging them from a distribution medium to a hard disk, and "uninstalled" by deleting the application directory.
Fixed contents
In order to support user interaction with application directories, several files have special status.
Application binaries
Launching an application directory causes the included file AppRun (ROX Desktop) or !Run (RISC OS) to be launched. On RISC OS this is generally an Obey file (a RISC OS command script) which allocates memory and loads OS extension modules and shared libraries before executing the application binary, usually called !RunImage. Under the ROX Desktop, it is not uncommon for it to be a shell script that will launch the correct system binary if available or compile a suitable binary from source otherwise.
Help files and icons
Both RISC OS and the ROX Desktop allow the user to view help files associated with an application directory without launching the application. RISC OS relies on a file in the directory named !Help which is launched as if the user double-clicked on it when help is requested (and can be any format the system understands, but plain text and !Draw formats are common), while the ROX Desktop opens the application's Help subdirectory.
Simila
|
https://en.wikipedia.org/wiki/Feingold%20diet
|
The Feingold diet is an elimination diet initially devised by Benjamin Feingold following research in the 1970s that appeared to link food additives with hyperactivity; by eliminating these additives and various foods the diet was supposed to alleviate the condition.
Popular in its day, the diet has since been referred to as an "outmoded treatment"; there is no good evidence that it is effective, and it is difficult for people to follow.
Technique
The diet was originally based on the elimination of salicylate, artificial food coloring, and artificial flavors; later on in the 1970s, the preservatives BHA, BHT, and (somewhat later) TBHQ were eliminated.
Besides foods with the eliminated additives, aspirin- or additive-containing drugs and toiletries were to be avoided. Even today, parents are advised to limit their purchases of mouthwash, toothpaste, cough drops, perfume, and various other nonfood products to those published in the Feingold Association's annual Foodlist and Shopping Guide. Some versions of the diet prohibit only artificial food coloring and additives. According to the Royal College of Psychiatrists the diet prohibited a number of foods that contain salicylic acid including apples, cucumbers and tomatoes.
Feingold stressed that the diet must be followed strictly and for an entire lifetime, and that whole families – not just the subject being "treated" – must observe the diet's rules.
Effectiveness
Although the diet had a certain popular appeal, a 1983 meta-analysis found research on it to be of poor quality, and that overall there was no good evidence that it was effective in fulfilling its claims.
In common with other elimination diets, the Feingold diet can be costly and boring, and thus difficult for people to maintain.
In general, there is no evidence to support broad claims that food coloring causes food intolerance and ADHD-like behavior in children. It is possible that certain food coloring may act as a trigger in those who are genet
|
https://en.wikipedia.org/wiki/System%20integration%20testing
|
System integration testing (SIT) involves the overall testing of a complete system of many subsystem components or elements. The system under test may be composed of hardware, or software, or hardware with embedded software, or hardware/software with human-in-the-loop testing.
SIT consists, initially, of the "process of assembling the constituent parts of a system in a logical, cost-effective way, comprehensively checking system execution (all nominal & exceptional paths), and including a full functional check-out." Following integration, system test is a process of "verifying that the system meets its requirements, and validating that the system performs in accordance with the customer or user expectations."
In technology product development, the beginning of system integration testing is often the first time that an entire system has been assembled such that it can be tested as a whole. In order to make system testing most productive, the many constituent assemblies and subsystems will have typically gone through a subsystem test and successfully verified that each subsystem meets its requirements at the subsystem interface level.
In the context of software systems and software engineering, system integration testing is a testing process that exercises a software system's coexistence with others. With multiple integrated systems, assuming that each have already passed system testing, SIT proceeds to test their required interactions. Following this, the deliverables are passed on to acceptance testing.
Software system integration testing
For software, SIT is part of the software testing life cycle in collaborative projects. Usually, a round of SIT precedes the user acceptance test (UAT) round. Software providers usually run a pre-SIT round of tests before consumers run their SIT test cases.
For example, if an integrator (company) is providing an enhancement to a customer's existing solution, then they integrate the new application layer and the new data
|
https://en.wikipedia.org/wiki/Nilmanifold
|
In mathematics, a nilmanifold is a differentiable manifold which has a transitive nilpotent group of diffeomorphisms acting on it. As such, a nilmanifold is an example of a homogeneous space and is diffeomorphic to the quotient space N/H, the quotient of a nilpotent Lie group N modulo a closed subgroup H. This notion was introduced by Anatoly Mal'cev in 1951.
In the Riemannian category, there is also a good notion of a nilmanifold. A Riemannian manifold is called a homogeneous nilmanifold if there exists a nilpotent group of isometries acting transitively on it. The requirement that the transitive nilpotent group acts by isometries leads to the following rigid characterization: every homogeneous nilmanifold is isometric to a nilpotent Lie group with left-invariant metric (see Wilson).
Nilmanifolds are important geometric objects and often arise as concrete examples with interesting properties; in Riemannian geometry these spaces always have mixed curvature, almost flat spaces arise as quotients of nilmanifolds, and compact nilmanifolds have been used to construct elementary examples of collapse of Riemannian metrics under the Ricci flow.
In addition to their role in geometry, nilmanifolds are increasingly being seen as having a role in arithmetic combinatorics (see Green–Tao) and ergodic theory (see, e.g., Host–Kra).
Compact nilmanifolds
A compact nilmanifold is a nilmanifold which is compact. One way to construct such spaces is to start with a simply connected nilpotent Lie group N and a discrete subgroup Γ. If the subgroup Γ acts cocompactly (via right multiplication) on N, then the quotient manifold N/Γ will be a compact nilmanifold. As Mal'cev has shown, every compact nilmanifold is obtained this way.
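The standard example is the Heisenberg manifold: take the real Heisenberg group and its integer lattice,
$N = H_3(\mathbb{R}) = \left\{ \begin{pmatrix} 1 & x & z \\ 0 & 1 & y \\ 0 & 0 & 1 \end{pmatrix} : x, y, z \in \mathbb{R} \right\}, \qquad \Gamma = H_3(\mathbb{Z}),$
then Γ acts cocompactly on N and the quotient N/Γ is a compact three-dimensional nilmanifold.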
Such a subgroup Γ as above is called a lattice in N. It is well known that a nilpotent Lie group admits a lattice if and only if its Lie algebra admits a basis with rational structure constants: this is Malcev's criterion. Not all nilpotent Lie groups admit l
|
https://en.wikipedia.org/wiki/NoScript
|
NoScript (or NoScript Security Suite) is a free and open-source extension for Firefox- and Chromium-based web browsers, written and maintained by Giorgio Maone, an Italian software developer and member of the Mozilla Security Group.
Features
Active content blocking
By default, NoScript blocks active (executable) web content, which can be wholly or partially unblocked by allowlisting a site or domain from the extension's toolbar menu or by clicking a placeholder icon.
In the default configuration, active content is globally denied, although the user may turn this around and use NoScript to block specific unwanted content. The allowlist may be permanent or temporary (until the browser closes or the user revokes permissions). Active content may consist of JavaScript, web fonts, media codecs, WebGL, and Flash. The add-on also offers specific countermeasures against security exploits.
Because many web browser attacks require active content that the browser normally runs without question, disabling such content by default and using it only to the degree that it is necessary reduces the chances of vulnerability exploitation. In addition, not loading this content saves significant bandwidth and defeats some forms of web tracking.
NoScript is useful for developers to see how well their site works with JavaScript turned off. It also can remove many irritating web elements, such as in-page pop-up messages and certain paywalls, which require JavaScript in order to function.
NoScript takes the form of a toolbar icon or status bar icon in Firefox. It displays on every website to denote whether NoScript has either blocked, allowed, or partially allowed scripts to run on the web page being viewed. Clicking or hovering (since version 2.0.3rc1) the mouse cursor on the NoScript icon gives the user the option to allow or forbid the script's processing.
NoScript's interface, whether accessed by right-clicking on the web page or the distinctive NoScript box at the bottom of the p
|
https://en.wikipedia.org/wiki/Implicit%20data%20structure
|
In computer science, an implicit data structure or space-efficient data structure is a data structure that stores very little information other than the main or required data: a data structure that requires low overhead. They are called "implicit" because the position of the elements carries meaning and relationship between elements; this is contrasted with the use of pointers to give an explicit relationship between elements. Definitions of "low overhead" vary, but it generally means constant overhead; in big O notation, O(1) overhead. A less restrictive definition is a succinct data structure, which allows greater overhead.
Definition
An implicit data structure is one with constant space overhead (above the information-theoretic lower bound).
Historically, an implicit data structure (and algorithms acting on one) was defined as one "in which structural information is implicit in the way data are stored, rather than explicit in pointers." The original definition was somewhat vague: most strictly, a single array with only the size retained (a single number of overhead), or more loosely, a data structure with constant overhead. This latter definition is today more standard, and the still-looser notion of a data structure with non-constant but small overhead is today known as a succinct data structure; it has also been called semi-implicit.
A fundamental distinction is between static data structures (read-only) and dynamic data structures (which can be modified). Simple implicit data structures, such as representing a sorted list as an array, may be very efficient as a static data structure, but inefficient as a dynamic data structure, due to modification operations (such as insertion in the case of a sorted list) being inefficient.
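To make the sorted-array example concrete, here is a minimal C sketch (illustrative code, not from the article): the element ordering is the only "structure", so search needs no pointers and no overhead beyond the length n.

/* Binary search over a plain sorted array: an implicit data structure.
 * No pointers are stored; the sorted order itself encodes the structure. */
#include <stdio.h>

static int binary_search(const int a[], int n, int key)
{
    int lo = 0, hi = n - 1;
    while (lo <= hi) {
        int mid = lo + (hi - lo) / 2;  /* avoids overflow of lo + hi */
        if (a[mid] == key) return mid;
        if (a[mid] < key)  lo = mid + 1;
        else               hi = mid - 1;
    }
    return -1;  /* not found */
}

int main(void)
{
    int a[] = {2, 3, 5, 7, 11, 13, 17};
    int n = (int)(sizeof a / sizeof a[0]);
    printf("index of 11: %d\n", binary_search(a, n, 11));  /* prints 4 */
    return 0;
}

Inserting into such an array costs O(n) element moves, which is exactly the static-versus-dynamic trade-off noted above.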
Examples
A trivial example of an implicit data structure is an array data structure, which is an implicit data structure for a list, and requires only the constant overhead of the length; unlike a lin
|
https://en.wikipedia.org/wiki/Edge%20device
|
In computer networking, an edge device is a device that provides an entry point into enterprise or service provider core networks. Examples include routers, routing switches, integrated access devices (IADs), multiplexers, and a variety of metropolitan area network (MAN) and wide area network (WAN) access devices. Edge devices also provide connections into carrier and service provider networks. An edge device that connects a local area network to a high speed switch or backbone (such as an ATM switch) may be called an edge concentrator.
Functions
In general, edge devices are normally routers that provide authenticated access (most commonly PPPoA and PPPoE) to faster, more efficient backbone and core networks. The trend is to make the edge device smart and the core device(s) "dumb and fast", so edge routers often include quality of service (QoS) and multi-service functions to manage different types of traffic. Consequently, core networks are often designed with switches that use routing protocols such as Open Shortest Path First (OSPF) or Multiprotocol Label Switching (MPLS) for reliability and scalability, allowing edge routers to have redundant links to the core network. Links between core networks are different—for example, Border Gateway Protocol (BGP) routers are often used for peering exchanges.
Translation
Edge devices may translate between one type of network protocol and another. For example, Ethernet or Token Ring types of local area networks (LANs) or xDSL equipment may use an Asynchronous Transfer Mode (ATM) backbone to other core networks. ATM networks send data in cells and use connection-oriented virtual circuits. An IP network is packet oriented; so if ATM is used as a core, packets must be encapsulated in cells and the destination address must be converted to a virtual circuit identifier. Some new types of optical fibre use a passive optical network subscriber loop such as GPON, with the edge device connecting to Ethernet for backhaul (telecommu
|
https://en.wikipedia.org/wiki/Vacuum%20evaporation
|
Vacuum evaporation is the process of causing the pressure in a liquid-filled container to be reduced below the vapor pressure of the liquid, causing the liquid to evaporate at a lower temperature than normal. Although the process can be applied to any type of liquid at any vapor pressure, it is generally used to describe the boiling of water by lowering the container's internal pressure below standard atmospheric pressure and causing the water to boil at room temperature.
The vacuum evaporation treatment process consists of reducing the interior pressure of the evaporation chamber below atmospheric pressure. This reduces the boiling point of the liquid to be evaporated, thereby reducing or eliminating the need for heat in both the boiling and condensation processes. There are other advantages, such as the ability to distill liquids with high boiling points and avoiding decomposition of substances that are heat sensitive.
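The pressure–temperature trade-off behind this can be estimated with the Clausius–Clapeyron relation (a general thermodynamic approximation, stated here for context):
$\ln\frac{P_2}{P_1} = -\frac{\Delta H_{\mathrm{vap}}}{R}\left(\frac{1}{T_2} - \frac{1}{T_1}\right),$
where $P_1$ and $P_2$ are the vapor pressures at boiling temperatures $T_1$ and $T_2$, $\Delta H_{\mathrm{vap}}$ is the enthalpy of vaporization, and $R$ is the gas constant. For water, reducing the chamber pressure to roughly 3 kPa brings the boiling point down to about room temperature.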
Application
Food
When the process is applied to food and the water is evaporated and removed, the food can be stored for long periods without spoiling. It is also used when boiling a substance at normal temperatures would chemically change the consistency of the product, such as egg whites coagulating when attempting to dehydrate the albumen into a powder.
This process was invented in 1866 by Henri Nestlé, of Nestlé chocolate fame, although the Shakers were already using a vacuum pan before that (see condensed milk).
This process is used industrially to make such food products as evaporated milk for milk chocolate and tomato paste for ketchup.
In the sugar industry vacuum evaporation is used in the crystallization of sucrose solutions. Traditionally this process was performed in batch mode, but nowadays continuous vacuum pans are available.
Wastewater treatment
Vacuum evaporators are used in a wide range of industrial sectors to treat industrial wastewater. It represents a clean, safe and very versatile technology with low management cost
|
https://en.wikipedia.org/wiki/Apache%20Directory
|
Apache Directory is an open source project of the Apache Software Foundation. The Apache Directory Server, originally written by Alex Karasulu, is an embeddable directory server entirely written in Java. It was certified LDAPv3-compatible by The Open Group in 2006. Besides LDAP, the server supports other protocols as well, and includes a Kerberos server.
There exist these subprojects:
Apache Directory Studio - an LDAP browser/editor for data, schema, LDIF, and DSML written in an Eclipse-based framework.
Apache SCIMple - an implementation of SCIM v2.0 specification.
Apache Fortress - a standards-based authorization system.
Apache Kerby - a Kerberos implementation written in Java.
Apache LDAP API - an SDK for directory access in Java.
Apache Mavibot - a Multi Version Concurrency Control (MVCC) BTree in Java.
See also
List of LDAP software
References
External links
Apache Directory Server
Apache Directory Studio
Apache Directory Mavibot
Apache Directory SCIMple
Apache Directory Fortress
Apache Directory Kerby
Apache Directory LDAP API
Directory
Directory services
|
https://en.wikipedia.org/wiki/List%20of%20Python%20software
|
The Python programming language is actively used by many people, both in industry and academia, for a wide variety of purposes.
Integrated Development Environments (IDEs) for Python
Atom, an open source cross-platform IDE with autocomplete, help and more Python features under package extensions.
Codelobster, a cross-platform IDE for various languages, including Python.
EasyEclipse, an open source IDE for Python and other languages.
Eclipse, with the PyDev plug-in. Eclipse supports many other languages as well.
Emacs, with the built-in python-mode.
Eric, an IDE for Python and Ruby
Geany, IDE for Python development and other languages.
IDLE, a simple IDE bundled with the default implementation of the language.
Jupyter Notebook, an IDE that supports markdown, Python, Julia, R and several other languages.
Komodo IDE, an IDE for Python, Perl, PHP and Ruby.
NetBeans, written in Java, runs anywhere a JVM is installed.
Ninja-IDE, free software written in Python and Qt; the name stands for "Ninja-IDE Is Not Just Another IDE".
PIDA, open source IDE written in Python capable of embedding other text editors, such as Vim.
PyCharm, an IDE for Python development, available in both a proprietary edition and an open-source Community edition.
PyScripter, Free and open-source software Python IDE for Microsoft Windows.
PythonAnywhere, an online IDE and Web hosting service.
Python Tools for Visual Studio, Free and open-source plug-in for Visual Studio.
Spyder, IDE for scientific programming.
Vim, with "lang#python" layer enabled.
Visual Studio Code, an Open Source IDE for various languages, including Python.
Wing IDE, a cross-platform proprietary IDE for Python, with some free versions/licenses.
Replit, an online IDE that supports multiple languages.
Unit testing frameworks
Python package managers and Python distributions
Anaconda, Python distribution with conda package manager
Enthought, Enthought Canopy Python with Python package manager
pip, package management system used to install and manage
|
https://en.wikipedia.org/wiki/RTEMS
|
Real-Time Executive for Multiprocessor Systems (RTEMS), formerly Real-Time Executive for Missile Systems, and then Real-Time Executive for Military Systems, is a real-time operating system (RTOS) designed for embedded systems. It is free and open-source software.
Development began in the late 1980s with early versions available via File Transfer Protocol (ftp) as early as 1993. OAR Corporation is currently managing the RTEMS project in cooperation with a steering committee which includes user representatives.
Design
RTEMS is designed for real-time, embedded systems and to support various open application programming interface (API) standards including Portable Operating System Interface (POSIX) and µITRON. The API now known as the Classic RTEMS API was originally based on the Real-Time Executive Interface Definition (RTEID) specification. RTEMS includes a port of the FreeBSD Internet protocol suite (TCP/IP stack) and support for various file systems including Network File System (NFS) and File Allocation Table (FAT).
RTEMS provides extensive multiprocessing and memory-management services, as well as a system database and many other facilities. It has extensive documentation.
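As a flavor of the Classic API, here is a minimal sketch of an application (task name, priority, and configuration values are illustrative; consult the RTEMS documentation for the authoritative API):

/* Minimal RTEMS Classic API sketch: the Init task creates and starts one
 * worker task, then deletes itself. Values are illustrative. */
#include <rtems.h>
#include <stdio.h>

rtems_task worker(rtems_task_argument arg)
{
    printf("worker running, arg = %lu\n", (unsigned long) arg);
    rtems_task_delete(RTEMS_SELF);
}

rtems_task Init(rtems_task_argument ignored)
{
    rtems_id tid;

    (void) ignored;
    if (rtems_task_create(
            rtems_build_name('W', 'R', 'K', '1'),  /* four-character name */
            10,                                    /* task priority */
            RTEMS_MINIMUM_STACK_SIZE,
            RTEMS_DEFAULT_MODES,
            RTEMS_DEFAULT_ATTRIBUTES,
            &tid) == RTEMS_SUCCESSFUL) {
        rtems_task_start(tid, worker, 42);
    }
    rtems_task_delete(RTEMS_SELF);
}

/* Application configuration consumed by <rtems/confdefs.h>. */
#define CONFIGURE_APPLICATION_NEEDS_CLOCK_DRIVER
#define CONFIGURE_APPLICATION_NEEDS_CONSOLE_DRIVER
#define CONFIGURE_MAXIMUM_TASKS 2
#define CONFIGURE_RTEMS_INIT_TASKS_TABLE
#define CONFIGURE_INIT
#include <rtems/confdefs.h>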
Architectures
RTEMS has been ported to various target processor architectures:
ARM
Atmel AVR
Blackfin
Freescale, now NXP ColdFire
Texas Instruments – C3x/C4x DSPs
Intel – x86 architecture members 80386, Pentium, and above
LatticeMico32
68k
MIPS
Nios II
OpenRISC
PowerPC
Renesas – H8/300, M32C, M32R, SuperH
RISC-V RV32, RV64 using QEMU
SPARC – ERC32, LEON, V9
Uses
RTEMS is used in many application domains. The Experimental Physics and Industrial Control System (EPICS) community includes multiple people who are active RTEMS submitters. RTEMS is also popular for space uses since it supports multiple microprocessors developed for use in space including SPARC ERC32 and LEON, MIPS Mongoose-V, ColdFire, and PowerPC architectures, which are available in space hardened models. RT
|
https://en.wikipedia.org/wiki/Radeon%20R100%20series
|
The Radeon R100 is the first generation of Radeon graphics chips from ATI Technologies. The line features 3D acceleration based upon Direct3D 7.0 and OpenGL 1.3, with all but the entry-level versions offloading host geometry calculations to a hardware transform and lighting (T&L) engine, a major improvement in features and performance compared to the preceding Rage design. The processors also include 2D GUI acceleration, video acceleration, and multiple display outputs. "R100" refers to the development codename of the initially released GPU of the generation. It is the basis for a variety of other succeeding products.
Development
Architecture
The first-generation Radeon GPU was launched in 2000, and was initially code-named Rage 6 (later R100), as the successor to ATI's aging Rage 128 Pro which was unable to compete with the GeForce 256. The card also had been described as Radeon 256 in the months leading up to its launch, possibly to draw comparisons with the competing Nvidia card, although the moniker was dropped with the launch of the final product.
The R100 was built on a 180 nm semiconductor manufacturing process. Like the GeForce, the Radeon R100 featured a hardware transform and lighting (T&L) engine to perform geometry calculations, freeing up the host computer's CPU. In 3D rendering the processor can write 2 pixels to the framebuffer and sample 3 texture maps per pixel per clock. This is commonly referred to as a 2×3 configuration, or a dual-pipeline design with 3 TMUs per pipe. As for Radeon's competitors, the GeForce 256 is 4×1, GeForce2 GTS is 4×2 and 3dfx Voodoo 5 5500 is a 2×1+2×1 SLI design. Unfortunately, the third texture unit did not get much use in games during the card's lifetime because software was not frequently performing more than dual texturing.
In terms of rendering, its "Pixel Tapestry" architecture allowed for Environment Mapped Bump Mapping (EMBM) and Dot Product (Dot3) Bump Mapping support, offering the most complete Bump Mapping su
|
https://en.wikipedia.org/wiki/Planetary%20surface%20construction
|
Planetary-surface construction is the construction of artificial habitats and other structures on planetary surfaces. Planetary surface construction can be divided into three phases or classes, coinciding with a phased schedule for habitation:
• Class I: Pre-integrated hard shell modules ready to use immediately upon delivery.
• Class II: Prefabricated kit-of-parts that is surface assembled after delivery.
• Class III: in-situ resource utilization (ISRU) derived structure with integrated Earth components.
Class I structures are prepared and tested on Earth, and are designed to be fully self-contained habitats that can be delivered to the surface of other planets. In an initial mission to put human explorers on Mars, a Class I habitat would provide the bare minimum habitable facilities when continued support from Earth is not possible.
The Class II structures call for a pre-manufactured kit-of-parts system that has flexible capacity for demountability and reuse. Class II structures can be used to expand the facilities established by the initial Class I habitat, and can allow for the assembly of additional structures either before the crew arrives, or after their occupancy of the pre-integrated habitat.
The purpose of Class III structures is to allow for the construction of additional facilities that would support a larger population, and to develop the capacity for the local production of building materials and structures without the need for resupply from Earth.
To facilitate the development of technology required to implement the three phases, Cohen and Kennedy (1997) stress the need to explore robust robotic system concepts that can be used to assist in the construction process, or perform the tasks autonomously. Among other things, they suggest a roadmap that stresses the need for adapting structural components for robotic assembly, and determining appropriate levels of modularity, assembly, and component packaging. The roadmap also sets the development of
|
https://en.wikipedia.org/wiki/Dan%20Shechtman
|
Dan Shechtman (born January 24, 1941) is the Philip Tobias Professor of Materials Science at the Technion – Israel Institute of Technology, an Associate of the US Department of Energy's Ames National Laboratory, and Professor of Materials Science at Iowa State University. On April 8, 1982, while on sabbatical at the U.S. National Bureau of Standards in Washington, D.C., Shechtman discovered the icosahedral phase, which opened the new field of quasiperiodic crystals.
He was awarded the 2011 Nobel Prize in Chemistry for the discovery of quasicrystals, making him one of six Israelis who have won the Nobel Prize in Chemistry.
Biography
Dan Shechtman was born in 1941 in Tel Aviv, in what was then Mandatory Palestine; the city became part of the new state of Israel in 1948. He grew up in Petah Tikva and Ramat Gan in a Jewish family. His grandparents had immigrated to Palestine during the Second Aliyah (1904–1914) and founded a printing house. As a child Shechtman was fascinated by Jules Verne's The Mysterious Island (1874), which he read many times. His childhood dream was to become an engineer like the main protagonist, Cyrus Smith.
Shechtman is married to Prof. Tzipora Shechtman, Head of the Department of Counseling and Human Development at Haifa University, and author of two books on psychotherapy. They have a son Yoav Shechtman (a postdoctoral researcher in the lab of W. E. Moerner) and three daughters: Tamar Finkelstein (an organizational psychologist at the Israeli police leadership center), Ella Shechtman-Cory (a PhD in clinical psychology), and Ruth Dougoud-Nevo (also a PhD in clinical psychology). He is an atheist.
Academic career
After receiving his Ph.D. in Materials Engineering from the Technion in 1972, where he also obtained his B.Sc. in Mechanical Engineering in 1966 and M.Sc. in Materials Engineering in 1968, Shechtman was an NRC fellow at the Aerospace Research Laboratories at Wright Patterson AFB, Ohio, where he studied for three years the micr
|
https://en.wikipedia.org/wiki/Ethnomycology
|
Ethnomycology is the study of the historical uses and sociological impact of fungi and can be considered a subfield of ethnobotany or ethnobiology. Although in theory the term includes fungi used for such purposes as tinder, medicine (medicinal mushrooms) and food (including yeast), it is often used in the context of the study of psychoactive mushrooms such as psilocybin mushrooms, the Amanita muscaria mushroom, and the ergot fungus.
American banker Robert Gordon Wasson pioneered interest in this field of study in the late 1950s, when he and his wife became the first Westerners on record allowed to participate in a mushroom velada, held by the Mazatec curandera María Sabina. The biologist Richard Evans Schultes is also considered an ethnomycological pioneer. Later researchers in the field include Terence McKenna, Albert Hofmann, Ralph Metzner, Carl Ruck, Blaise Daniel Staples, Giorgio Samorini, Keewaydinoquay Peschel, John Marco Allegro, Clark Heinrich, John W. Allen, Jonathan Ott, Paul Stamets, Casey Brown and Juan Camilo Rodríguez Martínez.
Besides mycological determination in the field, ethnomycology depends to a large extent on anthropology and philology. One of the major debates among ethnomycologists is Wasson's theory that the Soma mentioned in the Rigveda of the Indo-Aryans was the Amanita muscaria mushroom. Following his example, similar attempts have been made to identify psychoactive mushroom usage in many other (mostly) ancient cultures, with varying degrees of credibility. Another much-written-about topic is the content of the Kykeon, the sacrament used during the Eleusinian mysteries in ancient Greece between approximately 1500 BCE and 396 CE. Although not an ethnomycologist as such, the philologist John Allegro made an important contribution by suggesting, in a book so controversial that it destroyed his academic career, that Amanita muscaria was not only consumed as a sacrament but was the main focus of worship in the more esoteric sects of Sumeri
|
https://en.wikipedia.org/wiki/Vector-valued%20function
|
A vector-valued function, also referred to as a vector function, is a mathematical function of one or more variables whose range is a set of multidimensional vectors or infinite-dimensional vectors. The input of a vector-valued function could be a scalar or a vector (that is, the dimension of the domain could be 1 or greater than 1); the dimension of the function's domain has no relation to the dimension of its range.
Example: Helix
A common example of a vector-valued function is one that depends on a single real parameter t, often representing time, producing a vector v(t) as the result. In terms of the standard unit vectors i, j, k of Cartesian 3-space, these specific types of vector-valued functions are given by expressions such as
$\mathbf{r}(t) = f(t)\mathbf{i} + g(t)\mathbf{j} + h(t)\mathbf{k},$
where f(t), g(t) and h(t) are the coordinate functions of the parameter t, and the domain of this vector-valued function is the intersection of the domains of the functions f, g, and h. It can also be referred to in a different notation:
$\mathbf{r}(t) = \langle f(t), g(t), h(t) \rangle.$
The vector r(t) has its tail at the origin and its head at the coordinates evaluated by the function.
The vector shown in the graph to the right is the evaluation of the function near t = 19.5 (between 6π and 6.5π; i.e., somewhat more than 3 rotations). The helix is the path traced by the tip of the vector as t increases from zero through 8π.
In 2D, we can analogously speak about vector-valued functions as
$\mathbf{r}(t) = f(t)\mathbf{i} + g(t)\mathbf{j}$
or
$\mathbf{r}(t) = \langle f(t), g(t) \rangle.$
Linear case
In the linear case the function can be expressed in terms of matrices:
$y = Ax,$
where y is an n × 1 output vector, x is a k × 1 vector of inputs, and A is an n × k matrix of parameters. Closely related is the affine case (linear up to a translation) where the function takes the form
$y = Ax + b,$
where in addition b is an n × 1 vector of parameters.
The linear case arises often, for example in multiple regression, where for instance the n × 1 vector $\hat{y}$ of predicted values of a dependent variable is expressed linearly in terms of a k × 1 vector $\hat{\beta}$ (k < n) of estimated values of model parameters:
$\hat{y} = X\hat{\beta},$
in which
|
https://en.wikipedia.org/wiki/Executable-space%20protection
|
In computer security, executable-space protection marks memory regions as non-executable, such that an attempt to execute machine code in these regions will cause an exception. It makes use of hardware features such as the NX bit (no-execute bit), or in some cases software emulation of those features. However, technologies that emulate or supply an NX bit will usually impose a measurable overhead while using a hardware-supplied NX bit imposes no measurable overhead.
The Burroughs 5000 offered hardware support for executable-space protection on its introduction in 1961; that capability remained in its successors until at least 2006. In its implementation of tagged architecture, each word of memory had an associated, hidden tag bit designating it code or data. Thus user programs cannot write or even read a program word, and data words cannot be executed.
If an operating system can mark some or all writable regions of memory as non-executable, it may be able to prevent the stack and heap memory areas from being executable. This helps to prevent certain buffer overflow exploits from succeeding, particularly those that inject and execute code, such as the Sasser and Blaster worms. These attacks rely on some part of memory, usually the stack, being both writable and executable; if it is not, the attack fails.
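As a minimal sketch (hypothetical POSIX C, assuming an NX-capable CPU and operating system; not code from any particular OS listed below), memory can be mapped writable but not executable, and flipped to executable only deliberately, as a JIT compiler would:

/* Map a page read/write (no PROT_EXEC); executing from it would fault
 * on a system enforcing executable-space protection. */
#define _DEFAULT_SOURCE
#include <stdio.h>
#include <sys/mman.h>

int main(void)
{
    unsigned char *page = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                               MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (page == MAP_FAILED) { perror("mmap"); return 1; }

    page[0] = 0xC3;  /* x86 "ret" opcode, written as ordinary data */

    /* Calling into the page now, e.g. ((void (*)(void)) page)(), would
     * raise SIGSEGV where an NX bit (or its emulation) is enforced. */

    /* A JIT would instead remap the page read+execute before jumping: */
    if (mprotect(page, 4096, PROT_READ | PROT_EXEC) != 0) {
        perror("mprotect");
        return 1;
    }
    puts("page remapped read+execute");
    return 0;
}

Some hardened kernels additionally restrict such write-to-execute transitions altogether.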
OS implementations
Many operating systems implement or have an available executable space protection policy. Here is a list of such systems in alphabetical order, each with technologies ordered from newest to oldest.
For some technologies, there is a summary which gives the major features each technology supports. The summary is structured as below.
Hardware Supported Processors: (Comma separated list of CPU architectures)
Emulation: (No) or (Architecture Independent) or (Comma separated list of CPU architectures)
Other Supported: (None) or (Comma separated list of CPU architectures)
Standard Distribution: (No) or (Yes) or (Comma separated list of dist
|
https://en.wikipedia.org/wiki/Augmented%20matrix
|
In linear algebra, an augmented matrix is a matrix obtained by appending the columns of two given matrices, usually for the purpose of performing the same elementary row operations on each of the given matrices.
Given the matrices A and B, the augmented matrix (A|B) is formed by appending the columns of B to the columns of A.
This is useful when solving systems of linear equations.
For a given number of unknowns, the number of solutions to a system of linear equations depends only on the rank of the matrix representing the system and the rank of the corresponding augmented matrix. Specifically, according to the Rouché–Capelli theorem, any system of linear equations is inconsistent (has no solutions) if the rank of the augmented matrix is greater than the rank of the coefficient matrix; if, on the other hand, the ranks of these two matrices are equal, the system must have at least one solution. The solution is unique if and only if the rank equals the number of variables. Otherwise the general solution has k free parameters, where k is the difference between the number of variables and the rank; hence in such a case there are an infinitude of solutions.
An augmented matrix may also be used to find the inverse of a matrix by combining it with the identity matrix.
To find the inverse of a matrix
Let C be a square 2×2 matrix. To find the inverse of C we create (C|I), where I is the 2×2 identity matrix. We then reduce the part of (C|I) corresponding to C to the identity matrix using only elementary row operations on (C|I); the right part of the result is the inverse of the original matrix.
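For example (illustrative numbers, not the matrix used in the original article), take
$C = \begin{pmatrix} 2 & 1 \\ 1 & 1 \end{pmatrix}, \qquad (C|I) = \left(\begin{array}{cc|cc} 2 & 1 & 1 & 0 \\ 1 & 1 & 0 & 1 \end{array}\right) \longrightarrow \left(\begin{array}{cc|cc} 1 & 0 & 1 & -1 \\ 0 & 1 & -1 & 2 \end{array}\right),$
so that $C^{-1} = \begin{pmatrix} 1 & -1 \\ -1 & 2 \end{pmatrix}$; the row reduction swaps the two rows, subtracts twice the new first row from the second, negates the second row, and finally subtracts it from the first.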
Existence and number of solutions
Consider a system of equations in three unknowns, with coefficient matrix A and augmented matrix (A|b). Since both of these have the same rank, namely 2, there exists at least one solution; and since their rank is less than the number of unknowns, the latter being 3, there are an infinite number of solutions.
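As a concrete illustration with these ranks (hypothetical numbers):
$A = \begin{pmatrix} 1 & 1 & 1 \\ 1 & 2 & 3 \\ 2 & 3 & 4 \end{pmatrix}, \qquad (A|b) = \left(\begin{array}{ccc|c} 1 & 1 & 1 & 1 \\ 1 & 2 & 3 & 4 \\ 2 & 3 & 4 & 5 \end{array}\right).$
Here the third row is the sum of the first two, so both matrices have rank 2 while the system has 3 unknowns, leaving a one-parameter family of solutions.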
In contrast, consider a system in which the rank of the augmented matrix exceeds the rank of the coefficient matrix; by the Rouché–Capelli theorem above, such a system is inconsistent and has no solutions.
|
https://en.wikipedia.org/wiki/Cray%20C90
|
The Cray C90 series (initially named the Y-MP C90) was a vector processor supercomputer launched by Cray Research in 1991. The C90 was a development of the Cray Y-MP architecture. Compared to the Y-MP, the C90 processor had a dual vector pipeline and a faster 4.1 ns clock cycle (244 MHz), which together gave three times the performance of the Y-MP processor. The maximum number of processors in a system was also doubled from eight to 16. The C90 series used the same Model E IOS (Input/Output Subsystem) and UNICOS operating system as the earlier Y-MP Model E.
The C90 series included the C94, C98 and C916 models (configurations with a maximum of four, eight, and 16 processors respectively) and the C92A and C94A (air-cooled models). Maximum SRAM memory was between 1 and 8 GB, depending on model.
The D92, D92A, D94 and D98 (also known as the C92D, C92AD, C94D and C98D respectively) variants were equipped with slower, but higher-density DRAM memory, allowing increased maximum memory sizes of up to 16 GB, depending on the model.
The successor system was the Cray T90.
External links
Cray Research and Cray computers FAQ Part 5
References
Computer-related introductions in 1991
C90
Vector supercomputers
|
https://en.wikipedia.org/wiki/Centre%20for%20Computational%20Geography
|
The Centre for Computational Geography (CCG) is an inter-disciplinary research centre based at the University of Leeds. The CCG was founded in 1993 by Stan Openshaw and Phil Rees, and builds on over 40 years' experience in spatial analysis and modelling within the School of Geography. CCG research is concerned with the development and application of tools for the analysis, visualisation and modelling of geographical systems.
References
External links
Centre for Computational Geography Website
Computational science
Geography organizations
Research institutes in West Yorkshire
University of Leeds
|
https://en.wikipedia.org/wiki/Needle%20roller%20bearing
|
A needle roller bearing is a special type of roller bearing which uses long, thin cylindrical rollers resembling needles. Ordinary roller bearings' rollers are only slightly longer than their diameter, but needle bearings typically have rollers that are at least four times longer than their diameter. Like all bearings, they are used to reduce the friction of a rotating surface.
Compared to ball bearings and ordinary roller bearings, needle bearings have a greater surface area in contact with the races, so they can support a greater load. They are also thinner, so they require less clearance between the axle and the surrounding structure.
Needle bearings are heavily used in automobile components such as rocker arm pivots, pumps, compressors, and transmissions. The drive shaft of a rear-wheel drive vehicle typically has at least eight needle bearings (four in each U joint) and often more if it is particularly long, or operates on steep slopes.
See also
Race (bearing)
Tapered roller bearing
References
Rolling-element bearings
|
https://en.wikipedia.org/wiki/Whonamedit%3F
|
Whonamedit? is an online English-language dictionary of medical eponyms and the people associated with their identification. Though it is a dictionary, many eponyms and persons are presented in extensive articles with comprehensive bibliographies. The dictionary is hosted in Norway and maintained by medical historian Ole Daniel Enersen.
References
External links
Medical websites
Medical dictionaries
Eponyms
|
https://en.wikipedia.org/wiki/Send%20track
|
Send tracks (sometimes simply called Sends) are the software audio routing equivalent to the aux-sends found on multitrack sound mixing/sequencing consoles.
In audio recording, a given song is almost always made up of multiple tracks, with each instrument or sound on its own track (for example, one track could contain the drums, one the guitar, one a vocal, etc.). Further, each track can be separately adjusted in many ways, such as changing the volume, adding effects, and so on. This can be done with individual hardware components, commonly known as "outside the box," or via software applications known as DAWs (Digital Audio Workstations), commonly known as "inside the box."
Send tracks are tracks that aren't (normally) used to record sound on themselves, but to apply those adjustments to multiple, perhaps even all, tracks the same way. For example: if the drums are not on one track, but are instead spread out across multiple tracks (which is common), there is often the desire to treat them all the same in terms of volume, effects, etc. Instead of doing that for each track, you can set up a single send track to apply to all of them.
Advantages
Because one can treat numerous tracks uniformly with a single send track, they can save a lot of time and resources. They are also inherently more flexible than their hardware equivalent, since any number of send tracks can be created as needed. For more complicated effect chains, send tracks also allow their output to be routed to other send tracks, which can switch their routing to other send tracks in turn. The solutions offered by most multi-track software provide musicians with an easier (although arguably less hands-on) approach to controlling sends and their respective effects on the audio.
Audio engineering
|
https://en.wikipedia.org/wiki/Crystal%20detector
|
A crystal detector is an obsolete electronic component used in some early 20th century radio receivers that consists of a piece of crystalline mineral which rectifies the alternating current radio signal. It was employed as a detector (demodulator) to extract the audio modulation signal from the modulated carrier, to produce the sound in the earphones. It was the first type of semiconductor diode, and one of the first semiconductor electronic devices. The most common type was the so-called cat's whisker detector, which consisted of a piece of crystalline mineral, usually galena (lead sulfide), with a fine wire touching its surface.
The "asymmetric conduction" of electric current across electrical contacts between a crystal and a metal was discovered in 1874 by Karl Ferdinand Braun. Crystals were first used as radio wave detectors in 1894 by Jagadish Chandra Bose in his microwave experiments. Bose first patented a crystal detector in 1901. The crystal detector was developed into a practical radio component mainly by G. W. Pickard, who began research on detector materials in 1902 and found hundreds of substances that could be used in forming rectifying junctions. The physical principles by which they worked were not understood at the time they were used, but subsequent research into these primitive point contact semiconductor junctions in the 1930s and 1940s led to the development of modern semiconductor electronics.
The unamplified radio receivers that used crystal detectors are called crystal radios. The crystal radio was the first type of radio receiver that was used by the general public, and became the most widely used type of radio until the 1920s. It became obsolete with the development of vacuum tube receivers around 1920, but continued to be used until World War II and remains a common educational project today thanks to its simple design.
Operation
The contact between two dissimilar materials at the surface of the detector's semiconducting crysta
|
https://en.wikipedia.org/wiki/Automatic%20watch
|
An automatic watch, also known as a self-winding watch or simply an automatic, is a mechanical watch where the natural motion of the wearer provides energy to wind the mainspring, making manual winding unnecessary if worn enough. It is distinguished from a manual watch in that a manual watch must have its mainspring wound by hand at regular intervals.
Operation
In a mechanical watch the watch's gears are turned by a spiral spring called a mainspring. In a manual watch, energy is stored in the mainspring by turning a knob, the crown, on the side of the watch. Then the energy from the mainspring powers the watch movement until it runs down, requiring the spring to be wound again.
A self-winding watch movement has a mechanism which winds the mainspring using the natural motions of the wearer's body. The watch contains an oscillating weight that turns on a pivot. The normal movements of the watch in the user's pocket (for a pocketwatch) or on the user's arm (for a wristwatch) cause the rotor to pivot on its staff, which is attached to a ratcheted winding mechanism. The motion of the watch is thereby translated into circular motion of the weight which, through a series of reverser and reducing gears, eventually winds the mainspring. There are many different designs for modern self-winding mechanisms. Some designs allow winding of the watch to take place while the weight swings in only one direction while other, more advanced, mechanisms have two ratchets and wind the mainspring during both clockwise and anti-clockwise weight motions.
The fully wound mainspring in a typical watch can store enough energy reserve for roughly two days, allowing the watch to keep running through the night while stationary. In many cases automatic wristwatches can also be wound manually by turning the crown, so the watch can be kept running when not worn, and in case the wearer's wrist motions are not sufficient to keep it wound automatically.
Preventing overwinding
Self-winding mechanism
|
https://en.wikipedia.org/wiki/Non-negative%20matrix%20factorization
|
Non-negative matrix factorization (NMF or NNMF), also non-negative matrix approximation, is a group of algorithms in multivariate analysis and linear algebra where a matrix V is factorized into (usually) two matrices W and H, with the property that all three matrices have no negative elements. This non-negativity makes the resulting matrices easier to inspect. Also, in applications such as processing of audio spectrograms or muscular activity, non-negativity is inherent to the data being considered. Since the problem is not exactly solvable in general, it is commonly approximated numerically.
NMF finds applications in such fields as astronomy, computer vision, document clustering, missing data imputation, chemometrics, audio signal processing, recommender systems, and bioinformatics.
History
In chemometrics non-negative matrix factorization has a long history under the name "self modeling curve resolution".
In this framework the vectors in the right matrix are continuous curves rather than discrete vectors.
Also early work on non-negative matrix factorizations was performed by a Finnish group of researchers in the 1990s under the name positive matrix factorization.
It became more widely known as non-negative matrix factorization after Lee and Seung investigated the properties of the algorithm and published some simple and useful algorithms for two types of factorizations.
Background
Let matrix V be the product of the matrices W and H,
$V = WH.$
Matrix multiplication can be implemented as computing the column vectors of V as linear combinations of the column vectors in W using coefficients supplied by columns of H. That is, each column of V can be computed as follows:
$v_i = W h_i,$
where $v_i$ is the i-th column vector of the product matrix V and $h_i$ is the i-th column vector of the matrix H.
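As an illustration, here is a minimal C sketch of the Lee–Seung multiplicative updates, one standard NMF algorithm (dimensions, data, and iteration count are arbitrary, chosen only for the example):

/* NMF via Lee-Seung multiplicative updates: V (n x m) ~ W (n x r) H (r x m).
 * All matrices are stored row-major in flat arrays. */
#include <stdio.h>

#define N 4
#define M 5
#define R 2
#define EPS 1e-9  /* guards against division by zero */

/* C = A (n x k) times B (k x m) */
static void matmul(int n, int k, int m,
                   const double A[], const double B[], double C[])
{
    for (int i = 0; i < n; i++)
        for (int j = 0; j < m; j++) {
            double s = 0.0;
            for (int p = 0; p < k; p++) s += A[i*k + p] * B[p*m + j];
            C[i*m + j] = s;
        }
}

int main(void)
{
    double V[N*M] = {  /* non-negative data, illustrative values */
        1, 2, 3, 4, 5,
        2, 4, 6, 8, 10,
        1, 1, 2, 2, 3,
        3, 6, 9, 12, 15 };
    double W[N*R], H[R*M];
    /* Positive initial guesses for the factors. */
    for (int i = 0; i < N*R; i++) W[i] = 1.0 + 0.1 * i;
    for (int i = 0; i < R*M; i++) H[i] = 1.0 + 0.05 * i;

    for (int it = 0; it < 200; it++) {
        double WH[N*M], Wt[R*N], Ht[M*R], num[R*M], den[R*M];
        /* H <- H .* (W^T V) ./ (W^T W H) */
        for (int i = 0; i < N; i++)
            for (int j = 0; j < R; j++) Wt[j*N + i] = W[i*R + j];
        matmul(R, N, M, Wt, V, num);
        matmul(N, R, M, W, H, WH);
        matmul(R, N, M, Wt, WH, den);
        for (int i = 0; i < R*M; i++) H[i] *= num[i] / (den[i] + EPS);
        /* W <- W .* (V H^T) ./ (W H H^T), using the updated H */
        double num2[N*R], den2[N*R];
        for (int i = 0; i < R; i++)
            for (int j = 0; j < M; j++) Ht[j*R + i] = H[i*M + j];
        matmul(N, M, R, V, Ht, num2);
        matmul(N, R, M, W, H, WH);
        matmul(N, M, R, WH, Ht, den2);
        for (int i = 0; i < N*R; i++) W[i] *= num2[i] / (den2[i] + EPS);
    }

    /* Report the squared reconstruction error ||V - WH||^2. */
    double WH[N*M], err = 0.0;
    matmul(N, R, M, W, H, WH);
    for (int i = 0; i < N*M; i++) {
        double d = V[i] - WH[i];
        err += d * d;
    }
    printf("squared reconstruction error: %g\n", err);
    return 0;
}

Because the updates only ever multiply entries by non-negative ratios, W and H stay non-negative throughout, which is the defining constraint of NMF.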
When multiplying matrices, the dimensions of the factor matrices may be significantly lower than those of the product matrix and it is this property that forms the basis of NMF. NMF generates factors with significan
|
https://en.wikipedia.org/wiki/Unified%20Soil%20Classification%20System
|
The Unified Soil Classification System (USCS) is a soil classification system used in engineering and geology to describe the texture and grain size of a soil. The classification system can be applied to most unconsolidated materials, and is represented by a two-letter symbol. Each letter is described below (with the exception of Pt):
First letter: G (gravel), S (sand), M (silt), C (clay), O (organic)
Second letter: W (well-graded), P (poorly graded), H (high plasticity), L (low plasticity)
If the soil has 5–12% by weight of fines passing a #200 sieve (5% < P#200 < 12%), both grain size distribution and plasticity have a significant effect on the engineering properties of the soil, and dual notation may be used for the group symbol. For example, GW-GM corresponds to "well-graded gravel with silt."
If the soil has more than 15% by weight retained on a #4 sieve (R#4 > 15%), there is a significant amount of gravel, and the suffix "with gravel" may be added to the group name, but the group symbol does not change. For example, SP-SM could refer to "poorly graded SAND with silt" or "poorly graded SAND with silt and gravel."
Symbol chart
ASTM D-2487
See also
AASHTO Soil Classification System
AASHTO
ASTM International
References
Specific
Soil classification
|
https://en.wikipedia.org/wiki/Dot-com%20company
|
A dot-com company, or simply a dot-com (alternatively rendered dot.com, dot com, dotcom or .com), is a company that does most of its business on the Internet, usually through a website on the World Wide Web that uses the popular top-level domain ".com". As of 2021, .com is by far the most used TLD, with almost half of all registrations.
The suffix .com in a URL usually (but not always) refers to a commercial or for-profit entity, as opposed to a non-commercial entity or non-profit organization, which usually use .org. The name for the domain came from the word commercial, as that is the main intended use. Since the .com companies are web-based, often their products or services are delivered via web-based mechanisms, even when physical products are involved. On the other hand, some .com companies do not offer any physical products.
History
Origin of the .com domain (1985-1991)
The .com top-level domain (TLD) was one of the first seven created when the Domain Name System was first implemented on the Internet in 1985; the others were .mil, .gov, .edu, .net, .int, and .org. The United States Department of Defense originally controlled the domain, but control was later transferred to the National Science Foundation as it was mainly used for non-defense-related purposes.
Beginning of online commerce and rise in valuation (1992-1999)
With the creation of the World Wide Web in 1991, many companies began creating websites to sell their products. In 1994, the first secure online credit card transaction was made using the NetMarket platform. By 1995, over 40 million people were using the Internet. That same year, companies including Amazon.com and eBay were launched, paving the way for future e-commerce companies. At the time of Amazon's IPO in 1997, they were recording a 900% increase in revenue over the previous year. By 1998, with a valuation of over $14 billion, they were still not making a profit. The same phenomenon occurred with many other internet companies; venture capitalists were eage
|
https://en.wikipedia.org/wiki/Plasmid%20preparation
|
A plasmid preparation is a method of DNA extraction and purification for plasmid DNA. It is an important step in many molecular biology experiments and is essential for the successful use of plasmids in research and biotechnology. Many methods have been developed to purify plasmid DNA from bacteria. During the purification procedure, the plasmid DNA is often separated from contaminating proteins and genomic DNA.
These methods invariably involve three steps: growth of the bacterial culture, harvesting and lysis of the bacteria, and purification of the plasmid DNA. Purification of plasmids is central to molecular cloning. A purified plasmid can be used for many standard applications, such as sequencing and transfections into cells.
Growth of the bacterial culture
Plasmids are almost always purified from liquid bacteria cultures, usually E. coli, which have been transformed and isolated. Virtually all plasmid vectors in common use encode one or more antibiotic resistance genes as a selectable marker, for example a gene encoding ampicillin or kanamycin resistance, which allows bacteria that have been successfully transformed to multiply uninhibited. Bacteria that have not taken up the plasmid vector are assumed to lack the resistance gene, and thus only colonies representing successful transformations are expected to grow.
Bacteria are grown under favourable conditions.
Harvesting and lysis of the bacteria
There are several methods for cell lysis, including alkaline lysis, mechanical lysis, and enzymatic lysis.
Alkaline lysis
The most common method is alkaline lysis, which involves the use of a high concentration of a basic solution, such as sodium hydroxide, to lyse the bacterial cells. When bacteria are lysed under alkaline conditions (pH 12.0–12.5) both chromosomal DNA and protein are denatured; the plasmid DNA however, remains stable. Some scientists reduce the concentration of NaOH used to 0.1M in order to reduce the occurrence of ssDNA. After the addition o
|
https://en.wikipedia.org/wiki/Alligation
|
Alligation is an old and practical method of solving arithmetic problems related to mixtures of ingredients. There are two types of alligation: alligation medial, used to find the quantity of a mixture given the quantities of its ingredients, and alligation alternate, used to find the amount of each ingredient needed to make a mixture of a given quantity. Alligation medial is merely a matter of finding a weighted mean. Alligation alternate is more complicated and involves organizing the ingredients into high and low pairs which are then traded off. Alligation alternate provides answers when an algebraic solution (e.g., using simultaneous equations) is not possible (e.g., you have three variables but only two equations). Note that in this class of problem, there may be multiple feasible answers.
Two further variations on alligation occur: alligation partial and alligation total (see John King's Arithmetic Book, 1795, which includes worked examples). The technique is not used in schools, although it is still used in pharmacies for quick calculation of quantities.
Examples
Alligation medial
Suppose you make a cocktail drink combination out of 1/2 Coke, 1/4 Sprite, and 1/4 orange soda. The Coke has 120 grams of sugar per liter, the Sprite has 100 grams of sugar per liter, and the orange soda has 150 grams of sugar per liter. How much sugar does the drink have? This is an example of alligation medial because you want to find the amount of sugar in the mixture given the amounts of sugar in its ingredients. The solution is just to find the weighted average by composition:
(1/2)(120) + (1/4)(100) + (1/4)(150) = 60 + 25 + 37.5 = 122.5 grams per liter
Alligation alternate
Suppose you like 1% milk, but you have only 3% whole milk and ½% low fat milk. How much of each should you mix to make an 8-ounce cup of 1% milk? This is an example of alligation alternate because you want to find the amount of two ingredients to mix to form a mixture with a given amount of fat. Since there are only two ingredients, there is only one possible wa
|
https://en.wikipedia.org/wiki/M23%20software%20distribution%20system
|
m23 is a software distribution and management system for the Debian, Ubuntu, Kubuntu Linux, Xubuntu, Linux Mint, elementary OS, Fedora, CentOS and openSUSE distributions.
m23 can partition and format clients and install a Linux operating system and any number of software packages like office packages, graphic tools, server applications or games via the m23 system. The entire administration is done via a web browser and is possible from all computers having access to the m23 server. m23 has been developed predominantly by Hauke Goos-Habermann since the end of 2002.
m23 differentiates between servers and clients. An m23 server is used for software distribution and the management of the clients. Computers which are administered (e.g. software is installed) through the m23 server are the clients.
The client is booted over the network during the installation of the operating system. It is possible to start the client with a boot ROM on its network card, a boot disk or a boot CD. The client's hardware is detected and set up. The gathered hardware and partition information is sent to the m23 server. Afterwards, this information is shown in the m23 administration interface. Now the administrator has to choose how to partition and format the client. There are other settings, too, e.g. the distribution to be installed on the client.
The m23 clients can be installed as workstation with the graphical user interfaces KDE 5.x, GNOME 3.x, Xfce, Unity, LXDE and pure X11 or as a server without graphical subsystem. In most server setups, the server does not need a user interface because most of the server software runs in text mode.
m23 is released under the GNU GPL.
Features
Three steps to a complete client: installing a client via m23 is rather simple; only three steps are needed for a completely installed client.
Integration of existing clients into m23: Existing Debian-based systems can be assimilated into the m23 system easily and administered like a normal client (installed wi
|
https://en.wikipedia.org/wiki/DIMACS
|
The Center for Discrete Mathematics and Theoretical Computer Science (DIMACS) is a collaboration between Rutgers University, Princeton University, and the research firms AT&T, Bell Labs, Applied Communication Sciences, and NEC. It was founded in 1989 with money from the National Science Foundation. Its offices are located on the Rutgers campus, and 250 members from the six institutions form its permanent members.
DIMACS is devoted to both theoretical development and practical applications of discrete mathematics and theoretical computer science. It engages in a wide variety of evangelism including encouraging, inspiring, and facilitating researchers in these subject areas, and sponsoring conferences and workshops.
Fundamental research in discrete mathematics has applications in diverse fields including Cryptology, Engineering, Networking, and Management Decision Support.
Past directors have included Fred S. Roberts, Daniel Gorenstein, András Hajnal, and Rebecca N. Wright.
The DIMACS Challenges
DIMACS sponsors implementation challenges to determine practical algorithm performance on problems of interest. There have been twelve DIMACS challenges so far.
1990-1991: Network Flows and Matching
1992: NP-Hard Problems: Max Clique, Graph Coloring, and SAT
1993-1994: Parallel Algorithms for Combinatorial Problems
1994-1995: Computational Biology: Fragment Assembly and Genome Rearrangement
1995-1996: Priority Queues, Dictionaries, and Multidimensional Point Sets
1998: Near Neighbor Searches
2000: Semidefinite and Related Optimization Problems
2001: The Traveling Salesman Problem
2005: The Shortest Path Problem
2011-2012: Graph Partitioning and Graph Clustering
2013-2014: Steiner Tree Problems
2020-2021: Vehicle Routing Problems
References
External links
DIMACS Website
1989 establishments in New Jersey
Combinatorics
Discrete mathematics
Rutgers University
Mathematical institutes
|
https://en.wikipedia.org/wiki/Three%20hares
|
The three hares (or three rabbits) is a circular motif appearing in sacred sites from East Asia and the Middle East to the churches of Devon, England (as the "Tinners' Rabbits"), and historical synagogues in Europe. It is used as an architectural ornament, a religious symbol, and in other modern works of art or as a logo for adornment (including tattoos), jewelry, and a coat of arms on an escutcheon. It is viewed as a puzzle, a topology problem or a visual challenge, and has been rendered as sculpture, drawing, and painting.
The symbol features three hares or rabbits chasing each other in a circle. Like the triskelion, the triquetra, and their antecedents (e.g., the triple spiral), the symbol of the three hares has a threefold rotational symmetry. Each of the ears is shared by two hares, so that only three ears are shown. Although its meaning is apparently not explained in contemporary written sources from any of the medieval cultures where it is found, it is thought to have a range of symbolic or mystical associations with fertility and the lunar cycle. When used in Christian churches, it is presumed to be a symbol of the Trinity. Its origins and original significance are uncertain, as are the reasons why it appears in such diverse locations.
Origins in Buddhism and diffusion on the Silk Road
The earliest occurrences appear to be in cave temples in China, dated to the Sui dynasty (6th to 7th centuries). The iconography spread along the Silk Road, and was a symbol associated with Buddhism. In other contexts the metaphor has been given different meanings. For example, Guan Youhui, a retired researcher from the Dunhuang Academy, who spent 50 years studying the decorative patterns in the Mogao Caves, believes the three rabbits—"like many images in Chinese folk art that carry auspicious symbolism"—represent peace and tranquility (see also Aurel Stein). The hares have appeared in lotus motifs.
The three hares appear on 13th-century Mongol metalwork, and on a copper coin found in Iran, dated to 1281.
|
https://en.wikipedia.org/wiki/Microsoft%20Security%20Development%20Lifecycle
|
The Microsoft Security Development Lifecycle is a software development process proposed and used by Microsoft to reduce software maintenance costs and to increase the reliability of software with respect to security-related bugs. It is based on the classical spiral model.
Versions
See also
Trusted computing base
Further reading
External links
Software development process
Microsoft initiatives
Data security
Security
Crime prevention
National security
Cryptography
Information governance
|
https://en.wikipedia.org/wiki/Long%20double
|
In C and related programming languages, long double refers to a floating-point data type that is often more precise than double precision, though the language standard only requires it to be at least as precise as double. As with C's other floating-point types, it may not necessarily map to an IEEE format.
long double in C
History
The long double type was present in the original 1989 C standard, but support was improved by the 1999 revision of the C standard, or C99, which extended the standard library to include functions operating on long double such as sinl() and strtold().
Long double constants are floating-point constants suffixed with "L" or "l" (lower-case L), e.g., 0.3333333333333333333333333333333333L or 3.1415926535897932384626433832795029L for quadruple precision. Without a suffix, the evaluation depends on FLT_EVAL_METHOD.
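A minimal C sketch tying these pieces together (the digit counts printed are illustrative; the precision actually delivered depends on the platform's long double format, as discussed below):

#include <stdio.h>
#include <stdlib.h>
#include <math.h>

int main(void)
{
    /* The "L" suffix makes the literal a long double; without it,
       the constant would be parsed as a (possibly less precise) double. */
    long double third = 0.3333333333333333333333333333333333L;

    /* C99's long double math functions carry an "l" suffix... */
    long double s = sinl(third);

    /* ...and strings are converted with strtold(). */
    long double pi = strtold("3.1415926535897932384626433832795029", NULL);

    /* "%Lg" (or "%Lf", "%Le") is the printf conversion for long double. */
    printf("third       = %.21Lg\n", third);
    printf("sinl(third) = %.21Lg\n", s);
    printf("pi          = %.21Lg\n", pi);
    return 0;
}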
Implementations
On the x86 architecture, most C compilers implement long double as the 80-bit extended precision type supported by x86 hardware (generally stored as 12 or 16 bytes to maintain data structure alignment), as specified in the C99/C11 standards (Annex F, IEC 60559 floating-point arithmetic). An exception is Microsoft Visual C++ for x86, which makes long double a synonym for double. The Intel C++ compiler on Microsoft Windows supports extended precision, but requires the /Qlong-double switch for long double to correspond to the hardware's extended precision format.
Compilers may also use long double for the IEEE 754 quadruple-precision binary floating-point format (binary128). This is the case on HP-UX, Solaris/SPARC, MIPS with the 64-bit or n32 ABI, 64-bit ARM (AArch64) (on operating systems using the standard AAPCS calling conventions, such as Linux), and z/OS with FLOAT(IEEE). Most implementations are in software, but some processors have hardware support.
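Because the format varies this much between platforms, a program can inspect the <float.h> macros to discover what it was given; a small C sketch (the mapping in the comment follows the formats described in this section):

#include <stdio.h>
#include <float.h>

int main(void)
{
    /* LDBL_MANT_DIG distinguishes the common long double formats:
       53  -> same as IEEE double (e.g. Microsoft Visual C++ on x86),
       64  -> x87 80-bit extended precision,
       106 -> PowerPC double-double,
       113 -> IEEE 754 binary128 (quadruple precision). */
    printf("sizeof(long double) = %zu bytes\n", sizeof(long double));
    printf("LDBL_MANT_DIG       = %d\n", LDBL_MANT_DIG);
    printf("LDBL_DIG            = %d\n", LDBL_DIG);
    printf("LDBL_MAX_EXP        = %d\n", LDBL_MAX_EXP);
    return 0;
}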
On some PowerPC systems, long double is implemented as double-double arithmetic, where a long double value is regarded as the exact sum of two double-precision values.
|
https://en.wikipedia.org/wiki/Dave%20Thomas%20%28programmer%29
|
Dave Thomas (born 1956) is a computer programmer, author and editor. He has written about Ruby and together with Andy Hunt, he co-authored The Pragmatic Programmer and runs The Pragmatic Bookshelf publishing company. Thomas moved to the United States from England in 1994 and lives north of Dallas, Texas.
Thomas coined the phrases 'Code Kata' and 'DRY' (Don't Repeat Yourself), and was an original signatory and author of The Manifesto for Agile Software Development. He studied computer science at Imperial College London.
Works
The Pragmatic Programmer, Andrew Hunt and David Thomas, 1999, Addison Wesley.
Programming Ruby: A Pragmatic Programmer's Guide, David Thomas and Andrew Hunt, 2000, Addison Wesley.
Pragmatic Version Control Using CVS, David Thomas and Andrew Hunt, 2003, The Pragmatic Bookshelf.
Pragmatic Unit Testing in Java with JUnit, Andrew Hunt and David Thomas, 2003, The Pragmatic Bookshelf.
Pragmatic Unit Testing in C# with NUnit, Andrew Hunt and David Thomas, 2004, The Pragmatic Bookshelf.
Programming Ruby (2nd Edition), Dave Thomas, Chad Fowler, and Andrew Hunt, 2004, The Pragmatic Bookshelf.
Pragmatic Unit Testing in C# with NUnit, 2nd Edition, Andy Hunt and David Thomas with Matt Hargett, 2007, The Pragmatic Bookshelf.
Agile Web Development with Rails, Dave Thomas, David Heinemeier Hansson, Andreas Schwarz, Thomas Fuchs, Leon Breedt, and Mike Clark, 2005, Pragmatic Bookshelf.
Agile Web Development with Rails (2nd edition), Dave Thomas, with David Heinemeier Hansson, Mike Clark, Justin Gehtland, and James Duncan Davidson, 2006, Pragmatic Bookshelf.
Programming Elixir: Functional |> Concurrent |> Pragmatic |> Fun, Dave Thomas, foreword by José Valim (the creator of Elixir), edited by Lynn Beighley, 2014, Pragmatic Bookshelf.
References
External links
pragprog.com, website for the Pragmatic Programmers
Dave Thomas's Blog
CodeKata
Dave Thomas Interview: The Corruption of Agile; Ruby and Elixir; Katas and More, Dr. Dobb's, March 18
|
https://en.wikipedia.org/wiki/Disjunct%20distribution
|
In biology, a taxon with a disjunct distribution is one that has two or more groups that are related but considerably separated from each other geographically. The causes are varied and may reflect either the expansion or the contraction of a species' range.
Range fragmentation
Also called range fragmentation, disjunct distributions may be caused by changes in the environment, such as mountain building, continental drift, or rising sea levels. They may also result from an organism expanding its range into new areas, whether by rafting or by being transported by other animals; for example, plant seeds consumed by birds and mammals can be carried to new locations during migration and deposited there in fecal matter. Other causes include flooding; changes in wind, stream, and current flows; and the anthropogenic introduction of alien species, whether accidental or deliberate (as in agriculture and horticulture).
Habitat fragmentation
Disjunct distributions can also occur when suitable habitat becomes fragmented, producing fragmented populations; when the fragmentation becomes severe enough that movement between one patch of suitable habitat and the next is disrupted, isolated populations result. Extinctions can likewise cause disjunct distributions, especially in regions where only scattered areas are habitable by a species, for instance island chains, particular elevations along a mountain range, or areas along a coast or between bodies of water such as streams, lakes, and ponds.
Examples
There are many patterns of disjunct distribution at many scales: the Irano-Turanian disjunction, Europe-East Asia, Europe-South Africa (e.g., the genus Erica), the Mediterranean-Hoggar disjunction (the genus Olea), etc.
Lusitanian distribution
This kind of disjunct distribution of a species, such that it occurs in Iberia and in Ireland without any intermediate localities, is usually termed a Lusitanian distribution.
|
https://en.wikipedia.org/wiki/ASCII%20ribbon%20campaign
|
The ASCII ribbon campaign was an Internet phenomenon started in 1998 advocating that email be sent only in plain text, because of inefficiencies or dangers of using HTML email. Proponents placed ASCII art meant to look like an awareness ribbon in their signature blocks, along with a message or a link to an advocacy site:
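One widely circulated form (the exact wording and the site linked varied from signature to signature) was:

()  ascii ribbon campaign - against html e-mail
/\  www.asciiribbon.org   - against proprietary attachments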
History
Following the development of Microsoft Windows 95, standards adherents became annoyed that they were increasingly receiving email in HTML and other formats that were not plain text. The first known appearance of a ribbon in support of the campaign was in the signature of an email dated 17 June 1998 by Maurício Teixeira of Brazil. Two groups promoting the campaign, Asciiribbon.org and ARC.Pasp.DE, differed in their attitudes towards vCards.
See also
Simple Mail Transfer Protocol
MIME
References
External links
Awareness ribbon
Email
ASCII art
1990s in Internet culture
|
https://en.wikipedia.org/wiki/Greek%20Font%20Society
|
The Greek Font Society is a non-profit organization in Greece, founded in 1992, devoted to improving the standard of Greek digital typography.
It has issued four digital fonts, all with full polytonic support:
GFS Bodoni, a modernized version of Giambattista Bodoni's 1793 design.
GFS Didot, inspired by Firmin Didot's 1805 design.
GFS Neohellenic, cut by the Lanston Monotype Company.
GFS Porson, originally created by Richard Porson of Cambridge, at the end of the 18th century.
Other fonts include:
GFS Complutum
GFS Bodoni Classic
GFS Baskerville
GFS Gazis
GFS Didot Classic
GFS Porson
GFS Solomos
GFS Olga
GFS Neohellenic
GFS Artemisia
GFS Theokritos
GFS Elpis
GFS Göschen
The society has been quite prolific in the creation of new fonts. It sponsored an international symposium on the Greek alphabet and Greek typography in 1995. For the 2004 Summer Olympics in Athens, it designed and published an edition of the 14 Olympian Odes of Pindar using historic Greek typefaces. The majority of its fonts are licensed under the SIL Open Font License.
TeX versions of the following typefaces are also available: GFS Didot TeX, GFS Bodoni TeX, GFS NeoHellenic TeX, GFS Porson TeX and GFS Artemisia TeX.
A notable recent addition is the GFS Neohellenic Math OpenType font (George Matthiopoulos, Antonis Tsolomitis and others), which may be the only sans-serif math typeface currently (spring 2018) available (freely or otherwise) for use with XeTeX and LuaTeX as well as OpenType-compatible software such as LibreOffice.
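As an illustration, a minimal XeLaTeX document using two of the society's fonts might look as follows (a sketch only; it assumes the GFS Didot and GFS Neohellenic Math OpenType files are installed, and is compiled with xelatex):

\documentclass{article}
\usepackage{fontspec}
\usepackage{unicode-math}          % OpenType math support for XeTeX/LuaTeX
\setmainfont{GFS Didot}            % polytonic Greek text face
\setmathfont{GFS Neohellenic Math} % sans-serif mathematics
\begin{document}
Ἐν ἀρχῇ ἦν ὁ λόγος, and some mathematics:
\( \int_0^{\pi} \sin x \, dx = 2 \).
\end{document}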
References
External links
The Greek Font Society
|
https://en.wikipedia.org/wiki/Covariant%20classical%20field%20theory
|
In mathematical physics, covariant classical field theory represents classical fields by sections of fiber bundles, and their dynamics is phrased in the context of a finite-dimensional space of fields. Nowadays, it is well known that jet bundles and the variational bicomplex are the correct domain for such a description. The Hamiltonian variant of covariant classical field theory is the covariant Hamiltonian field theory where momenta correspond to derivatives of field variables with respect to all world coordinates. Non-autonomous mechanics is formulated as covariant classical field theory on fiber bundles over the time axis ℝ.
Examples
Many important examples of classical field theories which are of interest in quantum field theory are given below. In particular, these are the theories which make up the Standard Model of particle physics. These examples will be used in the discussion of the general mathematical formulation of classical field theory; sample Lagrangian densities for a few of them appear after the lists below.
Uncoupled theories
Scalar field theory
Klein–Gordon theory
Spinor theories
Dirac theory
Weyl theory
Majorana theory
Gauge theories
Maxwell theory
Yang–Mills theory. This is the only theory in the uncoupled theory list which contains interactions: Yang–Mills contains self-interactions.
Coupled theories
Yukawa coupling: coupling of scalar and spinor fields.
Scalar electrodynamics/chromodynamics: coupling of scalar and gauge fields.
Quantum electrodynamics/chromodynamics: coupling of spinor and gauge fields. Despite being named quantum theories, their Lagrangians can be regarded as those of classical field theories.
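For concreteness, the Lagrangian densities of three of the uncoupled theories above, in their standard textbook forms (units with ħ = c = 1; stated here for orientation rather than drawn from a particular reference):

\mathcal{L}_{\text{Klein–Gordon}} = \tfrac{1}{2}\,\partial_\mu \phi\,\partial^\mu \phi - \tfrac{1}{2} m^2 \phi^2
\mathcal{L}_{\text{Dirac}} = \bar{\psi}\,(i\gamma^\mu \partial_\mu - m)\,\psi
\mathcal{L}_{\text{Maxwell}} = -\tfrac{1}{4} F_{\mu\nu} F^{\mu\nu}, \qquad F_{\mu\nu} = \partial_\mu A_\nu - \partial_\nu A_\mu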
Requisite mathematical structures
In order to formulate a classical field theory, the following structures are needed:
Spacetime
A smooth manifold M.
This is variously known as the world manifold (for emphasizing the manifold without additional structures such as a metric), spacetime (when equipped with a Lorentzian metric), or the base manifold for a more geometrical viewpoint.
St
|