https://en.wikipedia.org/wiki/Bipartite%20dimension
|
In the mathematical fields of graph theory and combinatorial optimization, the bipartite dimension or biclique cover number of a graph G = (V, E) is the minimum number of bicliques (that is, complete bipartite subgraphs) needed to cover all edges in E. A collection of bicliques covering all edges in G is called a biclique edge cover, or sometimes a biclique cover. The bipartite dimension of G is often denoted d(G).
Example
An example of a biclique edge cover is shown in the accompanying diagrams.
Bipartite dimension formulas for some graphs
The bipartite dimension of the n-vertex complete graph K_n is ⌈log₂ n⌉.
The bipartite dimension of a 2n-vertex crown graph equals σ(n), where σ is the inverse function of the central binomial coefficient.
A closed-form expression is also known for the bipartite dimension of the lattice graph; it distinguishes cases according to the parity of the side lengths.
Exact values of the bipartite dimension have also been determined for other special graphs, such as paths and cycles.
Computing the bipartite dimension
Computing the bipartite dimension
The computational task of determining the bipartite dimension for a given graph G is an optimization problem. The decision problem for bipartite dimension can be phrased as:
INSTANCE: A graph G = (V, E) and a positive integer k.
QUESTION: Does G admit a biclique edge cover containing at most k bicliques?
This problem appears as problem GT18 in Garey and Johnson's classical book on NP-completeness, and it is a rather straightforward reformulation of another decision problem on families of finite sets, the set basis problem, which appears as problem SP7 in the same book.
Here, for a family F of subsets of a finite set U, a set basis for F is another family B of subsets of U such that every set in F can be described as the union of some basis elements from B. The set basis problem is given as follows:
INSTANCE: A finite set U, a family F of subsets of U, and a positive integer k.
QUESTION: Does there exist a set basis of size at most k for F?
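On very small graphs, the biclique cover decision problem above can be settled by exhaustive search. The sketch below is my own illustration (the function names are invented, and the search is exponential-time): it enumerates every biclique of G, then tries all k-element subsets. It confirms, for instance, that d(K_4) = 2, matching the ⌈log₂ n⌉ formula for complete graphs.

```python
from itertools import combinations

def biclique_edge_sets(V, E):
    """Edge sets of all complete bipartite subgraphs (A, B) of G = (V, E)."""
    present = {frozenset(e) for e in E}
    found = set()
    vs = sorted(V)
    for r in range(1, len(vs)):
        for A in combinations(vs, r):
            rest = [v for v in vs if v not in A]
            for s in range(1, len(rest) + 1):
                for B in combinations(rest, s):
                    cross = {frozenset((a, b)) for a in A for b in B}
                    if cross <= present:     # every cross edge must exist in G
                        found.add(frozenset(cross))
    return list(found)

def bipartite_dimension(V, E):
    """Minimum number of bicliques covering E (brute-force search)."""
    target = {frozenset(e) for e in E}
    candidates = biclique_edge_sets(V, E)
    for k in range(1, len(target) + 1):
        for cover in combinations(candidates, k):
            if set().union(*cover) == target:
                return k
    return 0

# d(K_4) = ceil(log2 4) = 2: one biclique cannot cover K_4 (it contains
# triangles), but two "bit-split" bicliques cover all six edges.
V = range(4)
E = list(combinations(V, 2))
print(bipartite_dimension(V, E))   # 2
```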
In its former formulation, the problem was proved to be
|
https://en.wikipedia.org/wiki/Edgeworth%20price%20cycle
|
An Edgeworth price cycle is a cyclical pattern in prices characterized by an initial jump, followed by a slower decline back towards the initial level. The term was introduced by Maskin and Tirole (1988) in a theoretical setting featuring two firms bidding sequentially, where the winner captures the full market.
Phases of a price cycle
A price cycle has the following phases:
War of attrition: When the price is at marginal cost, the firms are engaged in a war of attrition where each firm hopes that the competitor will raise her price first ("relent").
Jump: When one firm relents, the other firm will then in the next period undercut, which is when the market price jumps. This first period is the most valuable to be the low-price firm, which is what causes firms to want to stay in the war of attrition to force the competitor to jump first.
Undercutting: There then follows a sequence in which the firms take turns undercutting each other until the market arrives back at the war of attrition at the low price.
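The phases above can be sketched in a toy deterministic simulation. The numbers and the relent-immediately rule are my own simplifications, not part of Maskin and Tirole's model, where relenting at marginal cost is a mixed (probabilistic) strategy; prices are in integer cents to keep the arithmetic exact.

```python
# Toy Edgeworth cycle: two firms alternate moves, undercutting while it is
# profitable and jumping back to the top price when the margin is exhausted.
COST, TOP, STEP = 100, 200, 10   # marginal cost, post-jump price, undercut size (cents)

def respond(rival_price):
    """One firm's move given the rival's standing price."""
    if rival_price - STEP > COST:    # undercutting still leaves price above cost
        return rival_price - STEP
    return TOP                        # margin exhausted: relent and jump

path, p = [], TOP
for _ in range(22):                   # firms alternate, tracing the sawtooth
    p = respond(p)
    path.append(p)
print(path)
```

The printed sequence shows the characteristic sawtooth: a slow decline from 190 down to 110, then a jump back to 200, repeating.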
Discussion
It can be debated whether Edgeworth cycles should be thought of as tacit collusion, since the cycle arises as a Markov perfect equilibrium; Maskin and Tirole themselves write: "Thus our model can be viewed as a theory of tacit collusion." (p. 592).
Edgeworth cycles have been reported in gasoline markets in many countries. Because the cycles tend to occur frequently, weekly average prices found in government reports will generally mask the cycling. Wang (2012) emphasizes the role of price commitment in facilitating price cycles: without price commitment, the dynamic game becomes one of simultaneous move and here, the cycles are no longer a Markov Perfect equilibrium but rely on, e.g., supergame arguments.
Edgeworth cycles are distinguished from both sticky pricing and cost-based pricing. Sticky prices are typically found in markets with less aggressive price competition, so there are fewer or no cycles. Purely cost-based pricing occurs when retailers m
|
https://en.wikipedia.org/wiki/Affine%20differential%20geometry
|
Affine differential geometry is a type of differential geometry which studies invariants of volume-preserving affine transformations. The name affine differential geometry follows from Klein's Erlangen program. The basic difference between affine and Riemannian differential geometry is that affine differential geometry studies manifolds equipped with a volume form rather than a metric.
Preliminaries
Here we consider the simplest case, i.e. manifolds of codimension one. Let M be an n-dimensional manifold immersed in R^(n+1), and let ξ be a vector field on M transverse to M, i.e. such that T_pR^(n+1) = T_pM ⊕ Span(ξ_p) for all p ∈ M, where ⊕ denotes the direct sum and Span the linear span.
For a smooth manifold, say N, let Ψ(N) denote the module of smooth vector fields over N. Let D be the standard covariant derivative on R^(n+1), where D_XY is the directional derivative of Y in the direction X.
We can decompose D_XY into a component tangent to M and a transverse component, parallel to ξ. This gives the equation of Gauss: D_XY = ∇_XY + h(X,Y)ξ, where ∇ is the induced connexion on M and h is a bilinear form. Notice that ∇ and h depend upon the choice of transverse vector field ξ. We consider only those hypersurfaces for which h is non-degenerate. This is a property of the hypersurface M and does not depend upon the choice of transverse vector field ξ. If h is non-degenerate then we say that M is non-degenerate. In the case of curves in the plane, the non-degenerate curves are those without inflexions. In the case of surfaces in 3-space, the non-degenerate surfaces are those without parabolic points.
We may also consider the derivative of ξ in some tangent direction, say X. This quantity, D_Xξ, can be decomposed into a component tangent to M and a transverse component, parallel to ξ. This gives the Weingarten equation: D_Xξ = −S(X) + τ(X)ξ. The type-(1,1) tensor S is called the affine shape operator, and the differential one-form τ is called the transverse connexion form. Again, both S and τ depend upon the choice of transverse vector field ξ.
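A standard worked check of these decompositions (my own illustration, not from the article, with conventions D_XY = ∇_XY + h(X,Y)ξ and D_Xξ = −S(X) + τ(X)ξ) is the unit sphere with the position vector as transverse field:

```latex
% Unit sphere S^n \subset \mathbb{R}^{n+1} with transverse field \xi(p) = p.
% Since D_X \xi = X is tangent, the Weingarten decomposition
%   D_X \xi = -S(X) + \tau(X)\,\xi
% forces
S = -\mathrm{Id}, \qquad \tau = 0 .
% For the Gauss decomposition D_X Y = \nabla_X Y + h(X,Y)\,\xi, taking the
% component along \xi = p (and using \langle Y, p \rangle = 0 on the sphere) gives
h(X,Y) = \langle D_X Y, p \rangle = -\langle Y, D_X p \rangle = -\langle X, Y \rangle ,
% which is negative definite; in particular h is non-degenerate, so the
% sphere is a non-degenerate hypersurface in the sense above.
```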
The first induced volume form
Let ω be a volume form defined on R^(n+1). We can induce a volume form on
|
https://en.wikipedia.org/wiki/Significant%20Weather%20Observing%20Program
|
The Significant Weather Observing Program (SWOP) was created at the National Weather Service (NWS) Weather Forecast Office (WFO) in Central Illinois in order to provide forecasters with additional data during and after significant weather events.
See also
Community Collaborative Rain, Hail and Snow Network (CoCoRaHS)
Citizen Weather Observer Program (CWOP)
Skywarn
Cooperative Observer Program
|
https://en.wikipedia.org/wiki/Burt%20Totaro
|
Burt James Totaro, FRS (b. 1967), is an American mathematician, currently a professor at the University of California, Los Angeles, specializing in algebraic geometry and algebraic topology.
Education and early life
Totaro participated in the Study of Mathematically Precocious Youth while in grade school and enrolled at Princeton University at the age of thirteen, becoming the youngest freshman in its history. He scored a perfect 800 on the math portion and a 690 on the verbal portion of the SAT-I exam at the age of 12. He graduated in 1984 and went on to graduate school at the University of California, Berkeley, receiving his Ph.D. in 1989.
Career and research
Since 2009, he has been one of three managing editors of the journal Compositio Mathematica; he is also on the editorial boards of Forum of Mathematics, Pi and Sigma, the Journal of the American Mathematical Society, and the Bulletin of the American Mathematical Society. In 2012, he became a Professor in the UCLA Department of Mathematics.
Totaro's work is influenced by the Hodge conjecture, and is based on the connections and application of topology to algebraic geometry. His work has applications in a number of diverse areas of mathematics, from representation theory to Lie theory and group cohomology.
Selected works
Recognition
In 2000, he was elected Lowndean Professor of Astronomy and Geometry at the University of Cambridge. In the same year, he was awarded the Whitehead Prize by the London Mathematical Society.
In 2009, Totaro was elected Fellow of the Royal Society. He was included in the 2019 class of fellows of the American Mathematical Society "for contributions to algebraic geometry, Lie theory and cohomology and their connections and for service to the profession".
|
https://en.wikipedia.org/wiki/Perceived%20performance
|
Perceived performance, in computer engineering, refers to how quickly a software feature appears to perform its task. The concept applies mainly to user acceptance aspects.
The amount of time an application takes to start up, or a file to download, is not made faster by showing a startup screen (see Splash screen) or a file progress dialog box. However, these displays satisfy some human needs: the task appears faster to the user, and the display provides a visual cue that the system is handling the request.
In most cases, increasing real performance increases perceived performance, but when real performance cannot be increased due to physical limitations, techniques can be used to increase perceived performance at the cost of marginally decreasing real performance. For example, drawing and refreshing a progress bar while loading a file satisfies the user who is watching, but steals time from the process that is actually loading the file, but usually this is only a very small amount of time. All such techniques must exploit the inability of the user to accurately judge real performance, or they would be considered detrimental to performance.
Techniques for improving perceived performance may include more than just decreasing the delay between the user's request and visual feedback. Sometimes an increase in delay can be perceived as a performance improvement, such as when a variable controlled by the user is set to a running average of the user's input. This can give the impression of smoother motion, but the controlled variable always reaches the desired value a bit late. Since the averaging smooths out high-frequency jitter, a user attempting to hold the value constant may feel like they are succeeding more readily. This kind of compromise would be appropriate for control of a sniper rifle in a video game. Another example may be doing trivial computation ahead of time rather than after a user triggers an action, such as pre-sorting a large list of data before a user w
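The running-average idea can be sketched as follows; the smoothing factor and the sample values are arbitrary illustrative choices, not from the article.

```python
# Sketch of running-average input smoothing: the displayed value chases the
# raw input, trading a little lag for less visible jitter.
def make_smoother(alpha=0.2):
    state = {"value": None}
    def smooth(sample):
        if state["value"] is None:
            state["value"] = float(sample)   # first sample: no history yet
        else:
            state["value"] += alpha * (sample - state["value"])
        return state["value"]
    return smooth

smooth = make_smoother()
raw = [10, 12, 9, 11, 10, 30]    # jitter around 10, then a real move to 30
print([round(smooth(s), 2) for s in raw])
# Output: [10.0, 10.4, 10.12, 10.3, 10.24, 14.19]
# The jitter is damped, but the genuine move to 30 is reached "a bit late".
```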
|
https://en.wikipedia.org/wiki/Chirp%20spread%20spectrum
|
In digital communications, chirp spread spectrum (CSS) is a spread spectrum technique that uses wideband linear frequency modulated chirp pulses to encode information. A chirp is a sinusoidal signal whose frequency increases or decreases over time (often with a polynomial expression for the relationship between time and frequency).
Overview
As with other spread spectrum methods, chirp spread spectrum uses its entire allocated bandwidth to broadcast a signal, making it robust to channel noise. Further, because the chirps utilize a broad band of the spectrum, chirp spread spectrum is also resistant to multi-path fading even when operating at very low power. However, it is unlike direct-sequence spread spectrum (DSSS) or frequency-hopping spread spectrum (FHSS) in that it does not add any pseudo-random elements to the signal to help distinguish it from noise on the channel, instead relying on the linear nature of the chirp pulse. Additionally, chirp spread spectrum is resistant to the Doppler effect, which is typical in mobile radio applications.
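A minimal linear chirp generator, with illustrative parameter values of my own choosing (not taken from any standard), looks like this:

```python
import math

# Linear up-chirp: instantaneous frequency sweeps from f0 to f1 over
# duration T, i.e. f(t) = f0 + k*t with chirp rate k = (f1 - f0) / T,
# so the phase is 2*pi*(f0*t + k*t**2 / 2).
def chirp_sample(t, f0=100.0, f1=500.0, T=0.01):
    k = (f1 - f0) / T                        # chirp rate in Hz/s
    return math.sin(2 * math.pi * (f0 * t + 0.5 * k * t * t))

fs = 48_000                                   # sample rate in Hz
pulse = [chirp_sample(n / fs) for n in range(480)]  # 480 samples = 10 ms
```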
Uses
Chirp spread spectrum was originally designed to compete with ultra-wideband for precision ranging and low-rate wireless networks in the 2.45 GHz band. However, since the release of IEEE 802.15.4a (also known as IEEE 802.15.4a-2007), it is no longer actively being considered by the IEEE for standardization in the area of precision ranging.
Chirp spread spectrum is ideal for applications requiring low power usage and relatively low data rates (1 Mbit/s or less). In particular, IEEE 802.15.4a specifies CSS as a technique for use in low-rate wireless personal area networks (LR-WPAN). However, whereas the IEEE 802.15.4-2006 standard specifies that WPANs encompass an area of 10 m or less, IEEE 802.15.4a-2007 specifies CSS as a physical layer to be used when longer ranges and devices moving at high speeds are part of the network. Nanotron's CSS implementation was actually seen to work at a range of 570 meters b
|
https://en.wikipedia.org/wiki/Resource%20defense%20polygyny
|
In animal behavior, resource defense polygyny is a mating strategy where a male is able to support multiple female mates by competing with other males for access to a resource. In such a system, males are territorial. Because male movement is restricted, female-female competition for a male also results. Males capable of maintaining a larger territory are said to have greater resource holding power. It is one of the three major types of polygyny, the other two being female defense polygyny and leks.
Examples
Resource defense polygyny is a common strategy in insects. For example, damselflies in the family Calopterygidae typically display resource defense polygyny, in which territorial males guard riverine habitat that is sought after by females for egg deposition. Within a species there may be a territorial and a nonterritorial morph.
Many bird species also display resource defense polygyny. The yellow-headed blackbird is an example, where a male may have multiple females nesting in his territory.
See also
Polygyny threshold model
|
https://en.wikipedia.org/wiki/Askold%20Khovanskii
|
Askold Georgievich Khovanskii (born 3 June 1947, Moscow) is a Russian and Canadian mathematician, currently a professor of mathematics at the University of Toronto, Canada. His areas of research are algebraic geometry, commutative algebra, singularity theory, differential geometry and differential equations. His research is in the development of the theory of toric varieties and Newton polyhedra in algebraic geometry. He is also the inventor of the theory of fewnomials, and the Bernstein–Khovanskii–Kushnirenko theorem is named after him.
He obtained his Ph.D. from Steklov Mathematical Institute in Moscow under the supervision of Vladimir Arnold. In his Ph.D. thesis, he developed a topological version of Galois theory. He studies the theory of Newton–Okounkov bodies, or Okounkov bodies for short.
Among his graduate students are Olga Gel'fond, Feodor Borodich, H. Petrov-Tan'kin, Kiumars Kaveh, Farzali Izadi, Ivan Soprunov, Jenya Soprunova, Vladlen Timorin, Valentina Kirichenko, Sergey Chulkov, V. Kisunko, Mikhail Mazin, O. Ivrii, K. Matveev, Yuri Burda, and J. Yang.
In 2014, he received the Jeffery–Williams Prize of the Canadian Mathematical Society for outstanding contributions to mathematical research in Canada.
|
https://en.wikipedia.org/wiki/Exposed%20point
|
In mathematics, an exposed point of a convex set C is a point x ∈ C at which some continuous linear functional f attains its strict maximum over C. Such a functional f is then said to expose x. There can be many exposing functionals for x. The set of exposed points of C is usually denoted exp(C).
A stronger notion is that of a strongly exposed point of C, which is an exposed point x such that some exposing functional f of x attains its strong maximum over C at x; that is, for each sequence (x_n) in C we have the implication: if f(x_n) → f(x), then x_n → x. The set of all strongly exposed points of C is usually denoted str exp(C).
There are two weaker notions, that of an extreme point and that of a support point of C.
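As a concrete illustration (a standard example, not from the article):

```latex
% Example: the closed unit disk B = \{ y \in \mathbb{R}^2 : \|y\| \le 1 \}.
% Fix a boundary point x with \|x\| = 1 and take the linear functional
f(y) = \langle y, x \rangle .
% By the Cauchy--Schwarz inequality, f(y) \le \|y\|\,\|x\| \le 1 = f(x),
% with equality only when y = x; hence f exposes x, and
\exp(B) = \{ x \in \mathbb{R}^2 : \|x\| = 1 \} .
```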
|
https://en.wikipedia.org/wiki/CMAQ
|
CMAQ is an acronym for the Community Multiscale Air Quality Model, a sophisticated three-dimensional Eulerian grid chemical transport model developed by the US EPA for studying air pollution from local to hemispheric scales. EPA and state environmental agencies use CMAQ to develop and assess implementation actions needed to attain National Ambient Air Quality Standards (NAAQS) defined under the Clean Air Act. CMAQ simulates air pollutants of concern—including ozone, particulate matter (PM), and a variety of air toxics — to optimize air quality management. Deposition values from CMAQ are used to assess ecosystem impacts such as eutrophication and acidification from air pollutants. In addition, the National Weather Service uses CMAQ to produce twice-daily forecast guidance for ozone air quality across the U.S. CMAQ unites the modeling of meteorology, emissions, and chemistry to simulate the fate of air pollutants under varying atmospheric conditions. Other kinds of models—including crop management and hydrology models— can be linked with the CMAQ simulations, as needed, to simulate pollution more holistically across environmental media.
CMAQ is developed and maintained by scientists in EPA’s Office of Research and Development, and new versions of the software are made publicly available through regular public releases.
CMAQ may also refer to the Congestion Mitigation and Air Quality Improvement Program, a program of the United States Department of Transportation.
|
https://en.wikipedia.org/wiki/Deception%20IV%3A%20Blood%20Ties
|
Deception IV: Blood Ties, known in Japan as Kagerō: Darkside Princess, is a strategy game for the PlayStation Vita and PlayStation 3 by Tecmo Koei, and a sequel to Kagero II: Dark Illusion within the Deception series. The game was released in 2014 for Japan on 27 February, and the western localization of the game was released in North America on 25 March and Europe on 28 March.
An expanded version titled Deception IV: The Nightmare Princess, known in Japan as Kagero: Another Princess (影牢 ~もう1人のプリンセス~), was released in Japan on March 26, 2015 for PlayStation 4, PlayStation 3, and PlayStation Vita. The Nightmare Princess is the first Deception game to be released on the PlayStation 4; it was also made available in digital format on the PlayStation 3 and PlayStation Vita on the same date. It was released in North America on July 14 and in Europe on July 17.
Gameplay
The game is a revisit of Tecmo's 1996 PlayStation game Tecmo's Deception. As a game focused on strategy, the player aims to defeat enemies by luring them into a wide variety of traps. The aim of the game is to prevent the enemy from reaching the player, exclusively using traps. Players can choose to utilise a variety of different traps, including rolling boulders, electrocution, fire, spring boards, spiked walls, human cannons, falling bathtubs, banana peels, an iron maiden, and locomotives. Proper timing of traps is an important aspect of the gameplay, as the player is also able to fall victim to their own traps.
Players are able to set their trap combinations in one of three styles, namely Brutality, Magnificence, or Humiliation; each type used grants different rewards from the devil's three servants. Effective utilisation of the environment also allows the player to deal additional damage to foes. The PlayStation Vita version of the game allows players to select and activate traps using the touch screen.
The Nightmare Princess introduces the new character Velguirie, who is able to set traps as well as directly a
|
https://en.wikipedia.org/wiki/Experimental%20Lakes%20Area
|
IISD Experimental Lakes Area (IISD-ELA, known as ELA before 2014) is an internationally unique research station encompassing 58 formerly pristine freshwater lakes in Kenora District, Ontario, Canada. In response to the International Joint Commission (IJC)'s 1965 recommendations related to transboundary pollution, the federal and provincial governments set aside these lakes to study water pollution. During the 1970s and 1980s, David Schindler, at that time 'Canada's leading ecologist', conducted a series of innovative, landmark large-scale experiments at ELA on eutrophication that led to the banning of phosphates in detergents. In 2012, in an unexpected and controversial move that was widely condemned by the scientific community, the ELA was defunded by the Canadian federal government. The facility is now managed and operated by the International Institute for Sustainable Development (IISD) and has a mandate to investigate the aquatic effects of a wide variety of stresses on lakes and their catchments. IISD-ELA uses the whole-ecosystem approach, making long-term, whole-lake investigations of freshwater with a focus on eutrophication.
In an article published in AAAS's scientific journal Science, Eric Stokstad described ELA's "extreme science" as the manipulation of whole-lake ecosystems, with ELA researchers collecting long-term records for climatology, hydrology, and limnology that address key issues in water management. The site has influenced public policy on water management in Canada, the US, and around the world.
Minister of State for Science and Technology, Gary Goodyear, argued that "our government has been working hard to ensure that the Experimental Lakes Area facility is transferred to a non-governmental operator better suited to conducting the type of world-class research that can be undertaken at this facility" and that "[t]he federal government has been leading negotiations in order to secure an operator with an international track record." On April
|
https://en.wikipedia.org/wiki/Stressor
|
A stressor is a chemical or biological agent, environmental condition, external stimulus, or event seen as causing stress to an organism. Psychologically speaking, a stressor can be an event or environment that individuals might consider demanding, challenging, and/or threatening to individual safety.
Events or objects that may trigger a stress response may include:
environmental stressors (hypo- or hyperthermic temperatures, elevated sound levels, over-illumination, overcrowding)
daily "stress" events (e.g., traffic, lost keys, money, quality and quantity of physical activity)
life changes (e.g., divorce, bereavement)
workplace stressors (e.g., high job demand vs. low job control, repeated or sustained exertions, forceful exertions, extreme postures, office clutter)
chemical stressors (e.g., tobacco, alcohol, drugs)
social stressor (e.g., societal and family demands)
Stressors can cause physical, chemical and mental responses internally. Physical stressors produce mechanical stresses on skin, bones, ligaments, tendons, muscles and nerves that cause tissue deformation and (in extreme cases) tissue failure. Chemical stresses also produce biomechanical responses associated with metabolism and tissue repair. Physical stressors may produce pain and impair work performance. Chronic pain and impairment requiring medical attention may result from extreme physical stressors or if there is not sufficient recovery time between successive exposures. A recent study shows that physical office clutter could be an example of physical stressors in a workplace setting.
Stressors may also affect mental function and performance. One possible mechanism involves stimulation of the hypothalamus, which releases CRF (corticotropin-releasing factor) -> the pituitary gland releases ACTH (adrenocorticotropic hormone) -> the adrenal cortex secretes various stress hormones (e.g., cortisol) -> stress hormones (30 varieties) travel in the blood stream to relevant organs, e.g., glands, heart, intestines -> flight
|
https://en.wikipedia.org/wiki/IntelliTXT
|
IntelliTXT is a keyword advertising platform developed by Vibrant Media. Web page publishers insert a script into their pages which calls the IntelliTXT platform when a viewer views the page. This script then finds keywords on the page and double underlines them. When holding the mouse over the double underlined link, an advertisement associated with that word will pop up. Advertisers pay to have their particular words associated to their advertisements.
Customers
According to Vibrant Media, more than 4500 publishers use the IntelliTXT system. Nike, Sony and Microsoft are advertising on the platform, their ads reaching more than 100 million unique users in the US and 170 million internationally each month.
Competitors
Adbrite
|
https://en.wikipedia.org/wiki/Privatization%20%28computer%20programming%29
|
Privatization is a technique used in shared-memory programming to enable parallelism, by removing dependencies that occur across different threads in a parallel program. Dependencies between threads arise from two or more threads reading or writing a variable at the same time. Privatization gives each thread a private copy, so it can read and write it independently and thus, simultaneously.
Each parallel algorithm specifies whether a variable is shared or private. Many errors in implementation can arise if the variable is declared to be shared but the algorithm requires it to be private, or vice versa.
Traditionally, parallelizing compilers could apply privatization to scalar elements only.
To exploit parallelism that occurs across iterations within a parallel program (loop-level parallelism), the need grew for compilers that can also perform array variable privatization. Most of today's compilers can perform array privatization with more features and functions to enhance the performance of the parallel program in general. An example is the Polaris parallelizing compiler.
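A minimal sketch of the idea (Python threads are used here only to illustrate the pattern; the data and chunking are arbitrary): instead of every thread updating one shared accumulator, which creates a cross-thread dependency, each thread accumulates into its own private copy, and the private results are combined afterwards.

```python
from concurrent.futures import ThreadPoolExecutor

data = list(range(1_000))

def partial_sum(chunk):
    private_total = 0        # private to this thread: no sharing, no locks
    for x in chunk:
        private_total += x
    return private_total

chunks = [data[i::4] for i in range(4)]       # split work across 4 threads
with ThreadPoolExecutor(max_workers=4) as pool:
    total = sum(pool.map(partial_sum, chunks))  # combine the private copies
print(total)   # 499500
```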
Description
A shared-memory multiprocessor is a "computer system composed of multiple independent processors that execute different instruction streams". The shared memory programming model is the most widely used for parallel processor designs. This programming model starts by identifying possibilities for parallelism within a piece of code and then mapping these parallel tasks into threads.
The next step is to determine the scope of variables used in a parallel program, which is one of the key steps and main concerns within this model.
Variable scope
The next step in the model groups tasks together into bigger tasks, as there are typically more tasks than available processors. Typically, the number of execution threads that the tasks are assigned to, is chosen to be less than or equal to the number of processors, with each thread assigned to a unique processor.
Right after this step
|
https://en.wikipedia.org/wiki/Toric%20lens
|
A toric lens is a lens with different optical power and focal length in two orientations perpendicular to each other. One of the lens surfaces is shaped like a "cap" from a torus (see figure at right), and the other one is usually spherical. Such a lens behaves like a combination of a spherical lens and a cylindrical lens. Toric lenses are used primarily in eyeglasses, contact lenses and intraocular lenses to correct astigmatism.
Torus
A torus is the surface of revolution resulting when a circle with radius r rotates around an axis lying within the same plane as the circle, at a distance R from the circle's centre (see figure at right). If R > r, a ring torus is produced. If R = r, a horn torus is produced, where the opening is contracted into a single point. R < r results in a spindle torus, where only two "dips" remain from the opening; these dips become less deep as R approaches 0. When R = 0, the torus degenerates into a sphere with radius r.
Radius of curvature and optical power
The greatest radius of curvature of the toric lens surface, R, corresponds to the smallest refractive power, S, given by
S = (n − 1) / R,
where n is the index of refraction of the lens material.
The smallest radius of curvature, r, corresponds to the greatest refractive power, s, given by
s = (n − 1) / r.
Since R > r, S < s. The lens behaves approximately like a combination of a spherical lens with optical power s and a cylindrical lens whose power equals the difference between S and s. In ophthalmology and optometry, the magnitude of this difference, s − S, is called the cylinder power of the lens.
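Plugging illustrative numbers (assumed values, not from the article) into the relations S = (n − 1)/R and s = (n − 1)/r:

```python
# Worked numbers for the toric-surface powers; n, R, r are made-up
# example values for a hypothetical lens.
n = 1.5      # refractive index of the lens material
R = 0.20     # greatest radius of curvature, in metres
r = 0.10     # smallest radius of curvature, in metres

S = (n - 1) / R          # smallest refractive power, about 2.5 dioptres
s = (n - 1) / r          # greatest refractive power, about 5.0 dioptres
cylinder = s - S         # cylinder power, about 2.5 dioptres
print(S, s, cylinder)
```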
Note that both the greatest and the smallest curvature have a circular shape. Consequently, in contrast with a popular assumption, the toric lens is not an ellipsoid of revolution.
Light ray and its refractive power
Light rays within the (x,y)-plane of the torus (as defined in the figure above) are refracted according to the greatest radius of curvature, R, which means that the lens there has its smallest refractive power, S.
Light rays within a plane through the axis of revolution (the z axis) of the toru
|
https://en.wikipedia.org/wiki/Michael%20Butler%20%28computer%20scientist%29
|
Michael J. Butler is an Irish computer scientist. As of 2022, he is professor of computer science and Dean of the Faculty of Engineering and Physical Sciences at the University of Southampton, England.
Biography
Butler was born in Ireland. He received his bachelor's degree in computer science from Trinity College, Dublin in 1988. He then took an MSc (1989) and DPhil (1992) at the Programming Research Group of the University of Oxford, working in the area of communicating sequential processes. He then worked for Broadcom in Dublin and at Åbo Akademi University in Turku, Finland with Ralph-Johan Back on refinement calculus. He joined the University of Southampton in 1995 as a lecturer, rising to reader in 2000 and then professor in the same year. He led the Dependable Systems & Software Engineering group at the School of Electronics and Computer Science, University of Southampton (inactive as of 2022).
His main research is in the area of the B-Method (originated by J.-R. Abrial), especially tool support such as ProB (advanced model checking for B which allows for the simulation of Event-B machines in the Rodin/Eclipse platform), U2B (UML and B), csp2B (CSP and B), and the RODIN toolset for Event-B.
|
https://en.wikipedia.org/wiki/Gate%20equivalent
|
A gate equivalent (GE) is a unit of measure that specifies the manufacturing-technology-independent complexity of digital electronic circuits.
For today's CMOS technologies, the silicon area of a two-input drive-strength-one NAND gate usually constitutes the technology-dependent unit area commonly referred to as gate equivalent.
A specification in gate equivalents for a certain circuit reflects a complexity measure, from which a corresponding silicon area can be deduced for a dedicated manufacturing technology.
In digital circuit design, a dedicated standard cell library is employed for each manufacturing technology (e.g., CMOS). The standard cell library comprises many different logic gates, for example a NAND gate. For each logical type of logic gate, e.g., a two-input NAND, there usually exist different physical realizations in the standard cell library, for instance with different output drive strengths.
Basically, a two-input drive-strength-one NAND gate in CMOS technology consists of four transistors.
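The conversion between gate equivalents and silicon area is a single division; the areas below are made-up illustrative numbers, not taken from any real cell library.

```python
# Gate-equivalent arithmetic: GE count = total standard-cell area divided
# by the area of one two-input, drive-strength-one NAND in the same library.
nand2_area_um2 = 1.0           # area of one NAND2 cell (illustrative)
design_area_um2 = 12_500.0     # total standard-cell area of some design
gate_equivalents = design_area_um2 / nand2_area_um2
print(f"{gate_equivalents:.0f} GE")   # 12500 GE
```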
See also
Logic family
NMOS logic
MOSFET
Fanout
FO4
Boolean logic
|
https://en.wikipedia.org/wiki/Attack%20model
|
In cryptanalysis, attack models or attack types are a classification of cryptographic attacks specifying the kind of access a cryptanalyst has to a system under attack when attempting to "break" an encrypted message (also known as ciphertext) generated by the system. The greater the access the cryptanalyst has to the system, the more useful information they can get to utilize for breaking the cipher.
In cryptography, a sending party uses a cipher to encrypt (transform) a secret plaintext into a ciphertext, which is sent over an insecure communication channel to the receiving party. The receiving party uses an inverse cipher to decrypt the ciphertext and obtain the plaintext. Secret knowledge is required to apply the inverse cipher to the ciphertext. This secret knowledge is usually a short number or string called a key. In a cryptographic attack, a third-party cryptanalyst analyzes the ciphertext to try to "break" the cipher, that is, to read the plaintext and obtain the key so that future enciphered messages can be read. It is usually assumed that the encryption and decryption algorithms themselves are public knowledge and available to the cryptanalyst, as is the case for modern ciphers, which are published openly. This assumption is called Kerckhoffs's principle.
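As a toy illustration of such an attack, here is a ciphertext-only break of a Caesar shift that exploits knowledge of the plaintext language's letter distribution; the message and shift are my own example, and real ciphers are of course far harder to break.

```python
from collections import Counter

# The analyst sees only the ciphertext but knows the plaintext is English,
# so the most frequent ciphertext letter most likely stands for 'e'.
def caesar(text, shift):
    return "".join(chr((ord(c) - 97 + shift) % 26 + 97) for c in text)

plaintext  = "meetmeattheusualplacethiseveningplease"
ciphertext = caesar(plaintext, 7)              # attacker sees only this

top = Counter(ciphertext).most_common(1)[0][0]  # most frequent cipher letter
shift_guess = (ord(top) - ord("e")) % 26        # assume it encrypts 'e'
recovered = caesar(ciphertext, -shift_guess)
print(recovered == plaintext)                   # True
```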
Models
Some common attack models are:
Ciphertext-only attack (COA) - in this type of attack it is assumed that the cryptanalyst has access only to the ciphertext, and has no access to the plaintext. This type of attack is the most likely case encountered in real-life cryptanalysis, but is the weakest attack because of the cryptanalyst's lack of information. Modern ciphers are required to be very resistant to this type of attack. In fact, a successful cryptanalysis in the COA model usually requires that the cryptanalyst have some information on the plaintext, such as its distribution, the language in which the plaintexts are written, standard protocol data or framing which is part of the pla
|
https://en.wikipedia.org/wiki/Electrostatic%20deflection%20%28structural%20element%29
|
In molecular physics/nanotechnology, electrostatic deflection is the deformation of a beam-like structure/element bent by an electric field (Fig. 1). It can be due to interaction between electrostatic fields and net charge or electric polarization effects. The beam-like structure/element is generally
cantilevered (fixed at one of its ends). Among nanomaterials, carbon nanotubes (CNTs) are typical examples of structures that undergo electrostatic deflection.
Mechanisms of electric deflection due to electric polarization can be understood as follows:
As shown in Fig. 2, when a material is brought into an electric field (E), the field tends to shift the positive charge (in red) and the negative charge (in blue) in opposite directions. Thus, induced dipoles are created. Fig. 3 shows a beam-like structure/element in an electric field. The interaction between the molecular dipole moment and the electric field results in an induced torque (T). This torque then tends to align the beam with the direction of the field.
In the case of a cantilevered CNT (Fig. 1), it is bent toward the field direction. Meanwhile, the electrically induced torque and the stiffness of the CNT compete against each other. This deformation has been observed in experiments. This property is an important characteristic of CNTs for promising nanoelectromechanical systems applications, as well as for their fabrication, separation and electromanipulation. Recently, several nanoelectromechanical systems based on cantilevered CNTs have been reported, such as nanorelays, nanoswitches, nanotweezers and feedback devices, which are designed for memory, sensing or actuation uses. Furthermore, theoretical studies have been carried out to try to get a full understanding of the electric deflection of carbon nanotubes.
|
https://en.wikipedia.org/wiki/RINEX
|
In the field of geodesy, Receiver Independent Exchange Format (RINEX) is a data interchange format for raw satellite navigation system data. This allows the user to post-process the received data to produce a more accurate result — usually with other data unknown to the original receiver, such as better models of the atmospheric conditions at time of measurement.
The final output of a navigation receiver is usually its position, speed or other related physical quantities. However, the calculation of these quantities is based on a series of measurements from one or more satellite constellations. Although receivers calculate positions in real time, in many cases it is useful to store the intermediate measurements for later use. RINEX is the standard format that allows the storage and exchange of the measurements generated by a receiver, as well as their off-line processing by a multitude of applications, regardless of the manufacturer of either the receiver or the computer application.
The RINEX format is designed to evolve over time, adapting to new types of measurements and new satellite navigation systems. The first RINEX version was developed by W. Gurtner in 1989 and published by W. Gurtner and G. Mader in the CSTG GPS Bulletin of September/October 1990. RINEX version 2 has been available since 1993 and has been revised and amended several times. RINEX enables storage of measurements of pseudorange, carrier phase, Doppler and signal-to-noise ratio from GPS (including GPS modernization signals such as L5 and L2C), GLONASS, Galileo and BeiDou simultaneously, along with data from the EGNOS and WAAS satellite-based augmentation systems (SBAS) and QZSS. RINEX version 3.02 was submitted in April 2013 and contains new observation codes for the GPS and Galileo systems. The most recent version is RINEX 4.01 from July 2023.
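Reading a RINEX file header can be sketched as follows, assuming only the fixed-width layout of the format: each header line carries its record label in columns 61-80, and the header ends with "END OF HEADER". The sample lines below are illustrative, not a complete valid file.

```python
# Minimal sketch of parsing a RINEX observation-file header.

def parse_rinex_header(lines):
    header = {}
    for line in lines:
        label = line[60:80].strip()   # columns 61-80 hold the record label
        if label == "END OF HEADER":
            break
        # the first 60 columns hold the record's content
        header.setdefault(label, []).append(line[:60].rstrip())
    return header

sample = [
    "     3.02           OBSERVATION DATA    M".ljust(60) + "RINEX VERSION / TYPE",
    "EXAMPLE".ljust(60) + "MARKER NAME",
    "".ljust(60) + "END OF HEADER",
]
hdr = parse_rinex_header(sample)
print(hdr["RINEX VERSION / TYPE"])
```

Real processing software must also handle the epoch/observation records that follow the header; this sketch covers the header only.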
Although not part of the RINEX format, the Hatanaka compression scheme is commonly used to reduce the size of RINEX files, resulting in an ASCII-based CompactRINEX or CRINEX format. It u
|
https://en.wikipedia.org/wiki/Sofia%20University%20Museum%20of%20Paleontology%20and%20Historical%20Geology
|
The Sofia University "St. Kliment Ohridski" Museum of Paleontology and Historical Geology (SUMPHG) (), is a paleontology museum located in the main building of Sofia University “St. Kliment Ohridski", Sofia, Bulgaria.
History
The museum is within the main building of Sofia University, designed by Jean Bréasson, re-designed by Yordan Milanov, and later by Ljuben Konstantinov. Its collections are primarily intended for research and are thus not accessible to the public, although a limited number of fossils from the collection are on display. The SUMPHG is one of the primary repositories for fossils collected in Bulgaria. The original fossils, around which the current collection has grown, were those gathered by the first Bulgarian state geologist Georgi Zlatarski (1854 - 1909) and those purchased from Dr. A. Krantz. Later specimens collected by doctoral students and as part of the Bulgarian geological surveys were added.
Faculty
Many notable Bulgarian paleontologists have worked at SUMPHG, including Peter Bakalov, Vassil Tzankov, Ivan Nikolov, Natalia Dimitrova, Milka Entcheva, Emilia Kojumdjieva, Nonka Motekova, Stoycho Breskovski, Angel Pamouktchiev et al.
Public access
Admission to the museum is free for all visitors. The museum is open 10 am to 12 noon and 1 pm to 4 pm, Monday to Friday. It is closed on Saturdays and Sundays. SUMPHG is an important venue for widening interest in paleontology, evolutionary biology and Earth sciences.
The museum logo is based on the Deinotherium skeleton displayed by the entrance.
Gallery
Exhibits of geologic eras and periods
|
https://en.wikipedia.org/wiki/Applegate%20mechanism
|
The Applegate mechanism (Applegate's mechanism or Applegate effect) explains long-term orbital period variations seen in certain eclipsing binaries. As a main sequence star goes through an activity cycle, the outer layers of the star are subject to a magnetic torque changing the distribution of angular momentum, resulting in a change in the star's oblateness. The orbit of the stars in the binary pair is gravitationally coupled to their shape changes, so that the period shows modulations (typically on the order of ΔP/P ~ 10⁻⁵) on the same time scale as the activity cycles (typically on the order of decades).
Introduction
Careful timing of eclipsing binaries has shown that systems showing orbital period modulations on the order of ΔP/P ~ 10⁻⁵ over a period of decades are quite common. A striking example of such a system is Algol, for which the detailed observational record extends back over two centuries. Over this span of time, a graph of the time dependence of the difference between the observed times of eclipses versus the predicted times shows a feature (termed the "great inequality") with a full amplitude of 0.3 days and a recurrent time scale of centuries. Superimposed on this feature is a secondary modulation with a full amplitude of 0.06 days and a recurrent time scale of about 30 years. Orbital period modulations of similar amplitude are seen in other Algol binaries as well.
Although recurrent, these period modulations do not follow a strictly regular cycle. Irregular recurrence rules out attempts to explain these period modulations as being due to apsidal precession or the presence of distant, unseen companions. Apsidal precession explanations also have the problem that they require an eccentric orbit, but the systems in which these modulations are observed often show orbits of little eccentricity. Furthermore, third body explanations have the issue that in many cases, a third body massive enough to produce the observed modulation should not have managed t
|
https://en.wikipedia.org/wiki/Ascending%20chain%20condition
|
In mathematics, the ascending chain condition (ACC) and descending chain condition (DCC) are finiteness properties satisfied by some algebraic structures, most importantly ideals in certain commutative rings. These conditions played an important role in the development of the structure theory of commutative rings in the works of David Hilbert, Emmy Noether, and Emil Artin.
The conditions themselves can be stated in an abstract form, so that they make sense for any partially ordered set. This point of view is useful in abstract algebraic dimension theory due to Gabriel and Rentschler.
Definition
A partially ordered set (poset) P is said to satisfy the ascending chain condition (ACC) if no infinite strictly ascending sequence
a_1 < a_2 < a_3 < ⋯
of elements of P exists.
Equivalently, every weakly ascending sequence
a_1 ≤ a_2 ≤ a_3 ≤ ⋯
of elements of P eventually stabilizes, meaning that there exists a positive integer n such that
a_n = a_{n+1} = a_{n+2} = ⋯
Similarly, P is said to satisfy the descending chain condition (DCC) if there is no infinite strictly descending chain a_1 > a_2 > a_3 > ⋯ of elements of P. Equivalently, every weakly descending sequence
a_1 ≥ a_2 ≥ a_3 ≥ ⋯
of elements of P eventually stabilizes.
Comments
Assuming the axiom of dependent choice, the descending chain condition on (possibly infinite) poset P is equivalent to P being well-founded: every nonempty subset of P has a minimal element (also called the minimal condition or minimum condition). A totally ordered set that is well-founded is a well-ordered set.
Similarly, the ascending chain condition is equivalent to P being converse well-founded (again, assuming dependent choice): every nonempty subset of P has a maximal element (the maximal condition or maximum condition).
Every finite poset satisfies both the ascending and descending chain conditions, and thus is both well-founded and converse well-founded.
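The chain conditions for ideals of the ring of integers can be illustrated computationally: the ideal aℤ is contained in bℤ exactly when b divides a, so an ascending chain of ideals corresponds to a chain of successive divisors of the generators, which must stabilize. A minimal sketch:

```python
# Illustration of the ACC in the ring of integers Z: aZ ⊆ bZ iff b divides a
# (with 0Z = {0} contained in every ideal), so ascending chains of ideals
# are divisor chains of the generators.

def ideal_contained_in(a: int, b: int) -> bool:
    """Whether the ideal aZ is contained in the ideal bZ."""
    return a == 0 or (b != 0 and a % b == 0)

# The chain 48Z ⊆ 24Z ⊆ 12Z ⊆ 6Z ⊆ 6Z ⊆ ... is ascending and stabilizes:
chain = [48, 24, 12, 6, 6, 6]
assert all(ideal_contained_in(a, b) for a, b in zip(chain, chain[1:]))

# A strictly ascending chain of nonzero ideals strictly shrinks |generator|,
# so it cannot continue forever:
assert all(abs(b) < abs(a) for a, b in zip(chain[:3], chain[1:4]))
```

Since a positive integer has only finitely many divisors, every strictly ascending chain of ideals of ℤ is finite, which is exactly the ACC.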
Example
Consider the ring ℤ of integers. Each ideal of ℤ consists of all multiples of some number n. For example, the ideal
nℤ
consists of all multiples of n. Let
be the ideal consisting
|
https://en.wikipedia.org/wiki/Particle%20shower
|
In particle physics, a shower is a cascade of secondary particles produced as the result of a high-energy particle interacting with dense matter. The incoming particle interacts, producing multiple new particles with lesser energy; each of these then interacts in the same way, a process that continues until many thousands, millions, or even billions of low-energy particles are produced. These are then stopped in the matter and absorbed.
Types
There are two basic types of showers. Electromagnetic showers are produced by a particle that interacts primarily or exclusively via the electromagnetic force, usually a photon or electron. Hadronic showers are produced by hadrons (i.e. nucleons and other particles made of quarks), and proceed mostly via the strong nuclear force.
Electromagnetic showers
An electromagnetic shower begins when a high-energy electron, positron or photon enters a material. At high energies (above a few MeV), in which the photoelectric effect and Compton scattering are insignificant, photons interact with matter primarily via pair production — that is, they convert into an electron-positron pair, interacting with an atomic nucleus or electron in order to conserve momentum. High-energy electrons and positrons primarily emit photons, a process called bremsstrahlung. These two processes (pair production and bremsstrahlung) continue, leading to a cascade of particles of decreasing energy until photons fall below the pair production threshold, and energy losses of electrons other than bremsstrahlung start to dominate.
The characteristic amount of matter traversed for these related interactions is called the radiation length X₀. X₀ is both the mean distance over which a high-energy electron loses all but 1/e of its energy by bremsstrahlung and 7/9 of the mean free path for pair production by a high-energy photon. The length of the cascade scales with X₀; the "shower depth" X is approximately determined by the relation
X = X₀ ln(E₀/E_c) / ln 2,
where X₀ is the radiation length of t
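The depth relation can be sketched numerically in the spirit of the simple Heitler model, in which the particle count doubles after each splitting length until energies fall to the critical energy. The radiation length and critical energy below are illustrative round numbers (roughly lead-like), not precise material constants.

```python
import math

# Heitler-style estimate of electromagnetic shower depth:
# X = X0 * ln(E0/Ec) / ln 2, the depth at which multiplication stops.
# X0 and Ec are assumed illustrative values, not exact material data.

X0_CM = 0.56   # assumed radiation length, cm
EC_MEV = 7.4   # assumed critical energy, MeV

def shower_depth_cm(e0_mev: float,
                    x0_cm: float = X0_CM,
                    ec_mev: float = EC_MEV) -> float:
    """Approximate shower depth for an incoming particle of energy e0_mev."""
    return x0_cm * math.log(e0_mev / ec_mev) / math.log(2)

# A 10 GeV particle in this material showers over roughly:
print(f"{shower_depth_cm(10_000):.1f} cm")
```

Note the logarithmic scaling: increasing the incident energy by a factor of two adds only one more radiation-length-scale doubling to the depth.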
|
https://en.wikipedia.org/wiki/Calponin%20family%20repeat
|
In molecular biology, the calponin family repeat is a 26 amino acid protein domain. Calponin 1 (CNN1) contains three copies of this domain. This domain is also found in vertebrate smooth muscle protein (SM22 or transgelin), and a number of other proteins whose physiological role is not yet established, including Drosophila synchronous flight muscle protein SM20, Caenorhabditis elegans unc-87 protein, rat neuronal protein NP25, and an Onchocerca volvulus antigen.
|
https://en.wikipedia.org/wiki/Azor%20Betts
|
Azor Betts (September 13, 1740 – September 14, 1809) was an American Loyalist doctor who began his practice in the Province of New York before the American Revolutionary War. His staunch defense of smallpox inoculation and support of the Loyalist cause led to his arrest and eventual departure to Canada.
Life before the Revolution
Azor Betts was born on September 13, 1740, in Norwalk, Connecticut, the son of Nathan Betts and Mary Belden. He married Gloriana Purdy in 1765 in Rye, New York, and practiced medicine in New York City prior to the Revolutionary War.
The Revolution and smallpox
The events of 1776 that began open hostility between the Continental Army and the British Army in America were tempered by outbreaks of smallpox that had begun the previous year. General George Washington of the Continentals ordered on May 20, 1776, that no man in his army be inoculated with smallpox, on pain of serious punishment. Betts first administered smallpox inoculations to members of the Continental Army mere days after the order was given, and was placed under arrest by local authorities. Testimony during a hearing on the matter before the New York Committee of Safety on May 26, 1776, was given by both Doctor Foster representing the prosecution and Betts in his defense. Doctor Foster testified that:
In his defense, Betts told the Committee that:
As a reaction to the news that Betts had performed these inoculations in New York, Washington immediately drew up another order, this time spelling out the punishment for any soldier caught being inoculated with smallpox:
Jailed again for more smallpox inoculations, Betts became an open Loyalist, serving as both a Captain-Lieutenant in the Kings American Regiment and also as a surgeon for the Queen's Rangers. In May 1783, Betts left America for good, making his home in Kingston, New Brunswick.
Life in Canada
Soon after arriving in Kingston, Betts created isolation wards for those infected with smallpox. He continued this practice, and when the sma
|
https://en.wikipedia.org/wiki/Bi-specific%20T-cell%20engager
|
Bi-specific T-cell engagers (BiTEs) are a class of artificial bispecific monoclonal antibodies that are investigated for use as anti-cancer drugs. They direct a host's immune system, more specifically the T cells' cytotoxic activity, against cancer cells. BiTE is a registered trademark of Micromet AG (fully owned subsidiary of Amgen Inc).
BiTEs are fusion proteins consisting of two single-chain variable fragments (scFvs) of different antibodies, or amino acid sequences from four different genes, on a single peptide chain of about 55 kilodaltons. One of the scFvs binds to T cells via the CD3 receptor, and the other to a tumor cell via a tumor specific molecule.
Mechanism of action
Like other bispecific antibodies, and unlike ordinary monoclonal antibodies, BiTEs form a link between T cells and tumor cells. This causes T cells to exert cytotoxic activity on tumor cells by producing proteins like perforin and granzymes, independently of the presence of MHC I or co-stimulatory molecules. These proteins enter tumor cells and initiate the cell's apoptosis.
This action mimics physiological processes observed during T cell attacks against tumor cells.
BiTEs in clinical assessment or with clinical approvals
Several BiTEs are currently in preclinical and clinical trials to assess their therapeutic efficacy and safety.
Blinatumomab
Blinatumomab links T cells with CD19 receptors found on the surface of B cells. The Food and Drug Administration (US) and the European Medicines Agency approved this therapy for adults with Philadelphia chromosome-negative relapsed or refractory acute lymphoblastic leukemia.
Glofitamab
It is a bispecific CD20-directed CD3 T-cell engager. It was approved for medical use in Canada in March 2023, in the United States in June 2023, and in the European Union in July 2023.
Mosunetuzumab
Bispecifically binds CD20 and CD3 to engage T-cells. Mosunetuzumab was approved for medical use in the European Union in June 2022.
Solitomab
Solitomab lin
|
https://en.wikipedia.org/wiki/Fluence%20response
|
Both fluence rates and irradiance of light are important signals for plants and are detected by phytochrome. Exploiting different modes of photoreversibility in this molecule allows plants to respond to different levels of light. There are three main types of fluence-rate-governed responses, brought about by different levels of light.
Very low fluence responses
As the name would suggest, this type of response is triggered by very low levels of light and is thought to be mediated by phytochrome A. It can be initiated by fluences as low as 0.0001 μmol/m², up to about 0.05 μmol/m². Germination of Arabidopsis can be induced with very low levels of red light, as can oat seedlings. Such low levels of light are sufficient for inducing this response since they only convert 0.02% of the phytochrome to its active form. The backward reaction by far-red light is only 98% efficient, making the conversion non-photoreversible and allowing the response to proceed. VLFRs can also be induced by making up the required fluence with brief flashes of light. Since the response depends on the product of light level and time, this is known as the law of reciprocity.
Low fluence responses
These responses require at least 1 μmol/m² to be initiated and become saturated at about 1000 μmol/m². Unlike VLFRs, these responses are photoreversible. This was shown by exposing lettuce seed to a brief flash of red light, causing germination. It was then shown that if this red flash was followed by a flash of far-red light, germination was again inhibited. LFRs also follow the law of reciprocity. Other examples of LFRs include leaf de-etiolation and enhancement of the rate of chlorophyll production.
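The fluence ranges above, together with the law of reciprocity (total fluence = fluence rate × exposure time), can be sketched as a simple classifier. The thresholds are the approximate figures quoted in the text, in μmol/m².

```python
# Sketch of the VLFR/LFR fluence ranges quoted above. By the law of
# reciprocity, only the total fluence (rate x time) matters for these
# two response types, not how the light is delivered over time.

def total_fluence(rate_umol_m2_s: float, seconds: float) -> float:
    return rate_umol_m2_s * seconds

def response_type(fluence_umol_m2: float) -> str:
    if 0.0001 <= fluence_umol_m2 <= 0.05:
        return "VLFR"
    if 1.0 <= fluence_umol_m2 <= 1000.0:
        return "LFR"
    return "outside the VLFR/LFR ranges"

# Reciprocity: a dim source for a long time delivers the same fluence as a
# brighter flash for a short time, and so triggers the same response.
assert total_fluence(0.001, 10) == total_fluence(0.01, 1)
print(response_type(total_fluence(0.001, 10)))
```

High-irradiance responses fall outside this sketch, since they depend on the fluence rate itself rather than the total fluence.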
High-irradiance responses
HIRs require long exposure to relatively high light levels. The degree of response will depend on the level of light. They are characterised by the fact that they do not follow the law of reciprocity and depend on the rate of photons hitting the leaf surface, as opposed to the total light levels. This means that
|
https://en.wikipedia.org/wiki/%C5%A0id%C3%A1k%20correction%20for%20t-test
|
One application of Student's t-test is to test the location of one sequence of independent and identically distributed random variables. If we want to test the locations of multiple sequences of such variables, Šidák correction should be applied in order to calibrate the level of the Student's t-test. Moreover, if we want to test the locations of nearly infinitely many sequences of variables, then Šidák correction should be used, but with caution. More specifically, the validity of Šidák correction depends on how fast the number of sequences goes to infinity.
Introduction
Suppose we are interested in m different hypotheses, H_1, …, H_m, and would like to check if all of them are true. Now the hypothesis test scheme becomes
H_null: all of H_1, …, H_m are true;
H_alt: at least one of H_1, …, H_m is false.
Let α be the level of this test (the type-I error), that is, the probability that we falsely reject H_null when it is true.
We aim to design a test with a certain level α.
Suppose when testing each hypothesis H_i, the test statistic we use is t_i.
If these t_i's are independent, then a test for H_null can be developed by the following procedure, known as Šidák correction.
Step 1: test each of the m null hypotheses at level 1 − (1 − α)^(1/m).
Step 2: if any of these m null hypotheses is rejected, reject H_null.
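The two-step procedure can be sketched directly, assuming SciPy is available for the per-sequence one-sample t-tests. The data below are simulated purely for illustration.

```python
import numpy as np
from scipy import stats

# Sidak correction sketch: test each of m sequences at the reduced level
# 1 - (1 - alpha)**(1/m), so that under independence the overall type-I
# error of the combined procedure is alpha.

def sidak_level(alpha: float, m: int) -> float:
    return 1 - (1 - alpha) ** (1 / m)

def sidak_t_test(sequences, alpha: float = 0.05) -> bool:
    """Step 1: one-sample t-test per sequence at the Sidak level.
    Step 2: reject the global null if any individual test rejects."""
    level = sidak_level(alpha, len(sequences))
    pvalues = [stats.ttest_1samp(x, popmean=0.0).pvalue for x in sequences]
    return any(p < level for p in pvalues)

rng = np.random.default_rng(0)
null_data = [rng.normal(0.0, 1.0, size=100) for _ in range(20)]  # all locations zero
shifted = null_data[:-1] + [rng.normal(1.0, 1.0, size=100)]      # one location at 1
print(sidak_t_test(null_data), sidak_t_test(shifted))
```

Note that for m = 1 the per-test level reduces to α itself, and for large m it is slightly larger than the Bonferroni level α/m.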
Finite case
For finitely many t-tests,
suppose that for each i = 1, …, m we observe a sequence X_{i,1}, …, X_{i,n}, where for each i the X_{i,j} are independently and identically distributed, the m sequences are independent of each other but not necessarily identically distributed, and each X_{i,j} has finite fourth moment.
Our goal is to design a test for the global null hypothesis (that all m location hypotheses are true) with level α. This test can be based on the t-statistic of each sequence, that is,
t_i = √n X̄_i / S_i,
where X̄_i is the sample mean and S_i the sample standard deviation of the i-th sequence.
Using Šidák correction, we reject the global null hypothesis if any of the t-tests based on the t-statistics above rejects at level γ = 1 − (1 − α)^(1/m). More specifically, we reject when
max_{1 ≤ i ≤ m} |t_i| exceeds the upper γ/2 quantile of the t distribution with n − 1 degrees of freedom.
The test defined above has asymptotic level α, because under the global null hypothesis and independence the probability of rejection converges to 1 − (1 − γ)^m = α.
Infinite case
In some cases, the number of sequences, m, increases as the data size n of each sequence increases. In particular, suppose m → ∞ as n → ∞. If this is true, then we will need to t
|
https://en.wikipedia.org/wiki/Sony%20Vaio%20L%20series
|
The Sony Vaio L series is a range of Vaio all-in-one desktop computers sold by Sony since 2006.
Windows 7 models
Since the launch of Windows 7, the L series has been a touchscreen PC, featuring a 24" 1920x1080 LCD touchscreen. As of 2013, the L series used the Windows 8 operating system.
The Sony Vaio J series is similar to the L series, except that it features a 21.5" 1920x1080 LCD touchscreen.
Specifications
|
https://en.wikipedia.org/wiki/Richard%20Zach
|
Richard Zach is a Canadian logician, philosopher of mathematics, and historian of logic and analytic philosophy. He is currently Professor of Philosophy at the University of Calgary.
Research
Zach's research interests include the development of formal logic and historical figures (Hilbert, Gödel, and Carnap) associated with this development. In the philosophy of mathematics Zach has worked on Hilbert's program and the philosophical relevance of proof theory. In mathematical logic, he has made contributions to proof theory (epsilon calculus, proof complexity) and to modal and many-valued logic, especially Gödel logic.
Career
Zach received his undergraduate education at the Vienna University of Technology and his Ph.D. at the Group in Logic and the Methodology of Science at the University of California, Berkeley. His dissertation, Hilbert's Program: Historical, Philosophical, and Metamathematical Perspectives, was jointly supervised by Paolo Mancosu and Jack Silver.
He has taught at the University of Calgary since 2001, and holds the rank of Professor. He has held visiting appointments at the University of California, Irvine and McGill University. Zach is a founding editor of the Review of Symbolic Logic and the Journal for the Study of the History of Analytic Philosophy, and is also associate editor of Studia Logica and a subject editor for the Stanford Encyclopedia of Philosophy (History of Modern Logic). He serves on the editorial boards of the Bernays edition and the Carnap edition. He was elected to the Council of the Association for Symbolic Logic (ASL) in 2008, and he has served on the ASL Committee on Logic Education and the executive committee of the Kurt Gödel Society.
|
https://en.wikipedia.org/wiki/RetrOryza
|
RetrOryza is a database of long terminal repeat (LTR) retrotransposons for the rice genome.
See also
Long terminal repeat
Retrotransposon
Rice
|
https://en.wikipedia.org/wiki/Magnetic%202D%20materials
|
Magnetic 2D materials or magnetic van der Waals materials are two-dimensional materials that display ordered magnetic properties such as antiferromagnetism or ferromagnetism. After the discovery of graphene in 2004, the family of 2D materials grew rapidly. Many related materials were reported in the following years, with the notable exception of magnetic ones. Since 2016, however, there have been numerous reports of 2D magnetic materials that can be exfoliated with ease just like graphene.
The first few-layer van der Waals magnets were reported in 2017 (Cr₂Ge₂Te₆ and CrI₃). One reason for this seemingly late discovery is that thermal fluctuations tend to destroy magnetic order more easily in 2D magnets than in 3D bulk. It is also generally accepted in the community that low-dimensional materials have different magnetic properties compared to bulk. The prospect of measuring this transition from 3D to 2D magnetism has been the driving force behind much of the recent work on van der Waals magnets. The much-anticipated transition has since been observed in both antiferromagnets and ferromagnets: FePS₃, Cr₂Ge₂Te₆, CrI₃, NiPS₃, MnPS₃, Fe₃GeTe₂.
Although the field has been only around since 2016, it has become one of the most active fields in condensed matter physics and materials science and engineering. There have been several review articles written up to highlight its future and promise.
Overview
Magnetic van der Waals materials are a new addition to the growing list of 2D materials. The special feature of these new materials is that they exhibit a magnetic ground state, either antiferromagnetic or ferromagnetic, when they are thinned down to a few sheets or even one layer. Another, probably more important, feature of these materials is that they can be easily produced in few-layer or monolayer form using simple means such as Scotch tape, which is rather uncommon among other magnetic materials like oxide magnets.
Interest in these material
|
https://en.wikipedia.org/wiki/MUSHRA
|
MUSHRA stands for Multiple Stimuli with Hidden Reference and Anchor and is a methodology for conducting a codec listening test to evaluate the perceived quality of the output from lossy audio compression algorithms. It is defined by ITU-R recommendation BS.1534-3. The MUSHRA methodology is recommended for assessing "intermediate audio quality". For very small audio impairments, Recommendation ITU-R BS.1116-3 (ABC/HR) is recommended instead.
The main advantage over the mean opinion score (MOS) methodology (which serves a similar purpose) is that MUSHRA requires fewer participants to obtain statistically significant results. This is because all codecs are presented at the same time, on the same samples, so that a paired t-test or a repeated measures analysis of variance can be used for statistical analysis. Also, the 0–100 scale used by MUSHRA makes it possible to rate very small differences.
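The statistical advantage described above can be sketched with a paired t-test, assuming SciPy is available. Because every codec is rated on the same items by the same listeners, scores are paired; the MUSHRA scores below (0-100 scale) are invented for illustration.

```python
from scipy import stats

# Paired comparison of two codecs rated by the same listeners on the
# same items. Pairing removes listener/item variability, which is why
# fewer participants are needed than with unpaired designs.

codec_a = [78, 85, 69, 90, 74, 82, 88, 71]  # listener scores for codec A
codec_b = [70, 80, 65, 84, 70, 75, 81, 66]  # same listeners/items, codec B

result = stats.ttest_rel(codec_a, codec_b)
print(f"t = {result.statistic:.2f}, p = {result.pvalue:.4f}")
```

With more than two codecs, a repeated measures analysis of variance plays the same role across all conditions at once.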
In MUSHRA, the listener is presented with the reference (labeled as such), a certain number of test samples, a hidden version of the reference and one or more anchors. The recommendation specifies that a low-range and a mid-range anchor should be included in the test signals. These are typically a 7 kHz and a 3.5 kHz low-pass version of the reference. The purpose of the anchors is to calibrate the scale so that minor artifacts are not unduly penalized. This is particularly important when comparing or pooling results from different labs.
Listener behavior
Both MUSHRA and ITU-R BS.1116 tests call for trained expert listeners who know what typical artifacts sound like and where they are likely to occur. Expert listeners also have a better internalization of the rating scale, which leads to more repeatable results than with untrained listeners. Thus, with trained listeners, fewer listeners are needed to achieve statistically significant results.
It is assumed that preferences are similar for expert listeners and naive listeners and thus results of expert listeners are also predic
|
https://en.wikipedia.org/wiki/Toponome
|
The toponome is the spatial network code of proteins and other biomolecules in morphologically intact cells and tissues. It is mapped and decoded in situ by imaging cycler microscopy (ICM), which is able to co-map many thousands of supermolecules in one sample (a tissue section or cell sample, at high subcellular resolution). The term "toponome" is derived from the ancient Greek nouns "topos" (τόπος: "place, position") and "nomos" (νόμος: "law"), and the term "toponomics" refers to the study of the toponome. It was introduced by Walter Schubert in 2003. It addresses the fact that the network of biomolecules in cells and tissues follows topological rules enabling coordinated actions. For example, the cell surface toponome provides the spatial protein interaction code for the execution of a cell movement, a "code of conduct". This is intrinsically dependent on the specific spatial arrangement of similar and dissimilar compositions of supermolecules (compositional periodicity) with a specific spatial order along a cell surface membrane. This spatial order is periodically repeated when the cell tries to enter the exploratory state from the spherical state (spatial periodicity). This spatial toponome code is hierarchically organized with lead biomolecule(s), anti-colocated (absent) biomolecule(s) and wildcard molecules which are variably associated with the lead biomolecule(s). It has been shown that inhibition of lead molecule(s) in a surface membrane leads to disassembly of the corresponding biomolecular network and loss of function.
|
https://en.wikipedia.org/wiki/Cepheid%20variable
|
A Cepheid variable is a type of variable star that pulsates radially, varying in both diameter and temperature. It changes in brightness, with a well-defined stable period and amplitude.
Cepheids are important cosmic benchmarks for scaling galactic and extragalactic distances. A strong direct relationship exists between a Cepheid variable's luminosity and its pulsation period.
This characteristic of classical Cepheids was discovered in 1908 by Henrietta Swan Leavitt after studying thousands of variable stars in the Magellanic Clouds. The discovery allows one to know the true luminosity of a Cepheid by just observing its pulsation period. This tells one the distance to the star, by comparing its known luminosity to its observed brightness.
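The period-to-distance reasoning can be sketched numerically. The calibration coefficients below follow an often-quoted V-band form of the period-luminosity relation, M_V ≈ −2.43(log₁₀ P − 1) − 4.05 with P in days, used here as an illustrative assumption; extinction is ignored.

```python
import math

# Sketch of the Cepheid distance ladder: pulsation period -> absolute
# magnitude via an assumed period-luminosity calibration, then distance
# from the distance modulus m - M = 5 log10(d / 10 pc).

def absolute_magnitude(period_days: float) -> float:
    """Assumed illustrative V-band period-luminosity calibration."""
    return -2.43 * (math.log10(period_days) - 1.0) - 4.05

def distance_pc(apparent_mag: float, absolute_mag: float) -> float:
    return 10 ** ((apparent_mag - absolute_mag + 5.0) / 5.0)

# Delta Cephei: period ~5.37 d, mean apparent magnitude ~3.95 (extinction ignored)
M = absolute_magnitude(5.37)
d = distance_pc(3.95, M)
print(f"M_V = {M:.2f}, d = {d:.0f} pc")
```

The key point is the one-way chain: the observed period fixes the luminosity, and comparing that luminosity to the observed brightness fixes the distance.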
The term Cepheid originates from Delta Cephei in the constellation Cepheus, identified by John Goodricke in 1784. It was the first of its type to be identified.
The mechanics of stellar pulsation as a heat-engine was proposed in 1917 by Arthur Stanley Eddington (who wrote at length on the dynamics of Cepheids). It was not until 1953 that S. A. Zhevakin identified ionized helium as a likely valve for the engine.
History
On September 10, 1784, Edward Pigott detected the variability of Eta Aquilae, the first known representative of the class of classical Cepheid variables. The eponymous star for classical Cepheids, Delta Cephei, was discovered to be variable by John Goodricke a few months later. The number of similar variables grew to several dozen by the end of the 19th century, and they were referred to as a class as Cepheids. Most of the Cepheids were known from the distinctive light curve shapes with the rapid increase in brightness and a hump, but some with more symmetrical light curves were known as Geminids after the prototype ζ Geminorum.
A relationship between the period and luminosity for classical Cepheids was discovered in 1908 by Henrietta Swan Leavitt in an investigation of thousands of variable stars in the M
|
https://en.wikipedia.org/wiki/Penrose%20triangle
|
The Penrose triangle, also known as the Penrose tribar, the impossible tribar, or the impossible triangle, is a triangular impossible object, an optical illusion consisting of an object which can be depicted in a perspective drawing, but cannot exist as a solid object. It was first created by the Swedish artist Oscar Reutersvärd in 1934. Independently from Reutersvärd, the triangle was devised and popularized in the 1950s by psychiatrist Lionel Penrose and his son, prominent Nobel Prize-winning mathematician Sir Roger Penrose, who described it as "impossibility in its purest form". It is featured prominently in the works of artist M. C. Escher, whose earlier depictions of impossible objects partly inspired it.
Description
The tribar/triangle appears to be a solid object, made of three straight beams of square cross-section which meet pairwise at right angles at the vertices of the triangle they form. The beams may be broken, forming cubes or cuboids.
This combination of properties cannot be realized by any three-dimensional object in ordinary Euclidean space. Such an object can exist in certain Euclidean 3-manifolds. There also exist three-dimensional solid shapes each of which, when viewed from a certain angle, appears the same as the 2-dimensional depiction of the Penrose triangle on this page (such as – for example – the adjacent image depicting a sculpture in Perth, Australia). The term "Penrose Triangle" can refer to the 2-dimensional depiction or the impossible object itself.
If a line is traced around the Penrose triangle, a 4-loop Möbius strip is formed.
Depictions
M.C. Escher's lithograph Waterfall (1961) depicts a watercourse that flows in a zigzag along the long sides of two elongated Penrose triangles, so that it ends up two stories higher than it began. The resulting waterfall, forming the short sides of both triangles, drives a water wheel. Escher points out that in order to keep the wheel turning, some water must occasionally be added to com
|
https://en.wikipedia.org/wiki/Michael%20Dummett
|
Sir Michael Anthony Eardley Dummett (; 27 June 1925 – 27 December 2011) was an English academic described as "among the most significant British philosophers of the last century and a leading campaigner for racial tolerance and equality." He was, until 1992, Wykeham Professor of Logic at the University of Oxford. He wrote on the history of analytic philosophy, notably as an interpreter of Frege, and made original contributions particularly in the philosophies of mathematics, logic, language and metaphysics. He was known for his work on truth and meaning and their implications for debates between realism and anti-realism, a term he helped to popularize. He devised the Quota Borda system of proportional voting, based on the Borda count. In mathematical logic, he developed an intermediate logic, already studied by Kurt Gödel: the Gödel–Dummett logic.
Education and army service
Born 27 June 1925 at his parents' house, 56, York Terrace, Marylebone, London, Dummett was the son of George Herbert Dummett (1880 – 12 November 1969), later of Shepherd's Cottage, Curridge, Berkshire, a silk merchant and rayon dealer, and Mabel Iris (1893–1980), daughter of the civil servant and conservationist Sir Sainthill Eardley-Wilmot (himself grandson of the politician Sir John Eardley-Wilmot, 1st Baronet). He studied at Sandroyd School in Wiltshire, at Winchester College as a scholar, and at Christ Church, Oxford, which awarded him a major scholarship in 1943. He was called up for military service that year and served until 1947, first as a private in the Royal Artillery, then in the Intelligence Corps in India and Malaya. In 1950 he graduated with a first in Politics, Philosophy and Economics from Oxford and was elected a Prize Fellow of All Souls College, Oxford.
Academic career
In 1979, Dummett became Wykeham Professor of Logic at Oxford, a post he held until retiring in 1992. During his term as Wykeham Professor, he held a Fellowship at New College, Oxford. He has also held teaching
|
https://en.wikipedia.org/wiki/Wilfried%20Schmid
|
Wilfried Schmid (born May 28, 1943) is a German-American mathematician who works in Hodge theory, representation theory, and automorphic forms. After graduating as valedictorian of Princeton University's class of 1964, Schmid earned his Ph.D. at University of California, Berkeley in 1967 under the direction of Phillip Griffiths, and then taught at Berkeley and Columbia University, becoming a full professor at Columbia at age 27. In 1978, he moved to Harvard University, where he served as the Dwight Parker Robinson Professor of Mathematics until his retirement in 2019.
Schmid's early work concerns the construction of discrete series representations of semi-simple Lie groups. Notable accomplishments here include a proof of the Langlands conjecture on the discrete series, along with a later proof (joint with Michael Atiyah) constructing all such discrete series representations on spaces of harmonic spinors. Schmid, along with his student Henryk Hecht, proved Blattner's conjecture in 1975. In the 1970s, he described the singularities of the Griffiths period map by applying Lie-theoretic methods to problems in algebraic geometry.
Schmid has been very involved in K–12 mathematics education in his home state, and both nationally and internationally. His interest arose in 1999 after being disturbed by the experiences of his 2nd-grade daughter, Sabina, in her mathematics class. He was heavily involved in the drafting of the Massachusetts Mathematics Curriculum Framework in 2000. Later, he served on the National Mathematics Advisory Panel of the U.S. Department of Education. He has opposed new ways of teaching children that would neglect basic math skills.
In 2012, he became a fellow of the American Mathematical Society and in 2020 he was elected as a member of the U.S. National Academy of Sciences.
|
https://en.wikipedia.org/wiki/Eoxin%20D4
|
Eoxin D4, also known as 14,15-leukotriene D4, is an eoxin. Cells make eoxins by metabolizing arachidonic acid with a 15-lipoxygenase enzyme to form 15(S)-hydroperoxyeicosatetraenoic acid (i.e. 15(S)-HpETE). This product is then converted serially to eoxin A4 (i.e. EXA4), EXC4, EXD4, and EXE4 by LTC4 synthase, an unidentified gamma-glutamyltransferase, and an unidentified dipeptidase, respectively, in a pathway which appears similar if not identical to the pathway which forms leukotrienes, i.e. LTA4, LTC4, LTD4, and LTE4. This pathway is schematically shown as follows:
EXA4 is viewed as an intracellular-bound, short-lived intermediate which is rapidly metabolized to the downstream eoxins. The eoxins downstream of EXA4 are secreted from their parent cells and, it is proposed but not yet proven, serve to regulate allergic responses and the development of certain cancers (see Eoxins).
|
https://en.wikipedia.org/wiki/Ironic%20process%20theory
|
Ironic process theory (IPT) is a psychological phenomenon suggesting that when individuals intentionally try to avoid thinking a certain thought or feeling a certain emotion, a paradoxical effect is produced. The attempted avoidance not only fails in its object but in fact causes the thought or emotion to occur more frequently and more intensely. IPT is also known as "ironic rebound," or "the white bear problem."
The phenomenon was identified through thought suppression studies in experimental psychology. Social psychologist Daniel Wegner first studied ironic process theory in a laboratory setting in 1987. Ironic mental processes have been shown in a variety of situations, where they are usually created by or worsened by stress. In extreme cases, ironic mental processes result in intrusive thoughts about doing something immoral or out of character, which can be troubling to the individual. These findings have since guided clinical practice. For example, they show why it would be unproductive to try to suppress anxiety-producing or depressing thoughts.
Mechanism
Wegner claims that successful thought suppression requires two distinct mental processes that must be performed simultaneously. The first is the operating process, which occupies mental resources to will away the unwanted thought, object, or emotion that is persistent in the mind. It works continuously until the thought is cleared completely. The second is the monitoring process, which acts as a detector searching for unwanted thoughts. It then replaces them by shifting attention to other objects.
When individuals' attention is on another task, their mental resources become limited, making it difficult to conduct the operating process. However, the monitoring process is still running, making individuals aware of those unwanted thoughts. The shutdown of operating processes and the continuance of monitoring reduce their ability to suppress the thoughts, and the unwanted thoughts eventually beco
|
https://en.wikipedia.org/wiki/Serum%20amyloid%20A
|
Serum amyloid A (SAA) proteins are a family of apolipoproteins associated with high-density lipoprotein (HDL) in plasma. Different isoforms of SAA are expressed constitutively (constitutive SAAs) at different levels or in response to inflammatory stimuli (acute phase SAAs). These proteins are produced predominantly by the liver.
Acute-phase serum amyloid A proteins
Acute-phase serum amyloid A proteins (A-SAAs) are secreted during the acute phase of inflammation. These proteins have several roles, including the transport of cholesterol to the liver for secretion into the bile, the recruitment of immune cells to inflammatory sites, and the induction of enzymes that degrade extracellular matrix. A-SAAs are implicated in several chronic inflammatory diseases, such as amyloidosis, atherosclerosis, and rheumatoid arthritis. Three acute-phase SAA isoforms have been reported in mice, called SAA1, SAA2, and SAA3. During inflammation, SAA1 and SAA2 are expressed and induced principally in the liver, whereas SAA3 is induced in many distinct tissues. SAA1 and SAA2 genes are regulated in liver cells by the proinflammatory cytokines IL-1, IL-6, and TNF-α. Both SAA1 and SAA2 are induced up to a 1000-fold in mice under acute inflammatory conditions following exposure to bacterial lipopolysaccharide (LPS). Three A-SAA genes have also been identified in humans, although the third gene, SAA3, is believed to represent a pseudogene that does not generate messenger RNA or protein. Molecular weights of the human proteins are estimated at 11.7 kDa for SAA1 and 12.8 kDa for SAA4.
Serum amyloid A (SAA) is also an acute phase marker that responds rapidly. Similar to CRP, levels of acute-phase SAA increase within hours after inflammatory stimulus, and the magnitude of increase may be greater than that of CRP. Relatively trivial inflammatory stimuli can lead to SAA responses. It has been suggested that SAA levels correlate better with disease activity in early inflammatory joint disease t
|
https://en.wikipedia.org/wiki/Thunderstorm%20asthma
|
Thunderstorm asthma (also referred to in the media as thunder fever or a pollen bomb) is the triggering of an asthma attack by environmental conditions directly caused by a local thunderstorm. It has been proposed that during a thunderstorm, pollen grains can absorb moisture and then burst into much smaller fragments with these fragments being easily dispersed by wind. While larger pollen grains are usually filtered by hairs in the nose, the smaller pollen fragments are able to pass through and enter the lungs, triggering the asthma attack.
History
The phenomenon has been recognised for a significant period of time, with a study of an event in Birmingham, England, noting the correlation between thunderstorms and hospitalisations. That these were not isolated events, and were part of an ongoing pattern, is clearly documented in the review "Thunderstorm asthma, an overview of the evidence base". A significant impetus in the study of the phenomenon occurred after an event in 2016 in Melbourne, Australia. Since then there have been further reports of widespread thunderstorm asthma in Wagga Wagga, Australia; London, England; Naples, Italy; Atlanta, United States; and Ahvaz, Iran. The outbreak in Melbourne, in November 2016, that overwhelmed the ambulance system and some local hospitals, resulted in at least nine deaths. There was a similar incident in Kuwait in early December, 2016 with at least 5 deaths and many admissions to the ICU.
Statistics
Many of those affected during a thunderstorm asthma outbreak may have never experienced an asthma attack before.
It has been found that 95% of those affected by thunderstorm asthma had a history of hayfever, and 96% of those people had tested positive for grass pollen allergies, particularly rye grass. A rye grass pollen grain can hold up to 700 tiny starch granules, measuring 0.6 to 2.5 μm, small enough to reach the lower airways in the lung.
Prevention
Patients with a history of grass allergies should
|
https://en.wikipedia.org/wiki/Symmachia%20menetas
|
Symmachia menetas is a species in the butterfly family Riodinidae found in Brazil and Suriname. It was first described by Dru Drury in 1782.
Description
Upperside. Antennae black. Front of the head yellow. Thorax black, with two yellow streaks at the base of the wings. Abdomen dark brown. Half of the superior wings black, beginning at the shoulders, and running to the external edges, on which are seven cream-coloured spots variously shaped. The other half of these wings is scarlet, without any marks. Posterior wings entirely scarlet, edged with black.
Underside. Palpi cream coloured. Breast and abdomen light yellow. Legs black, but underneath pale yellow. Wings coloured as on the upperside. Margins of the wings entire. Wingspan 33 mm.
Subspecies
Symmachia menetas menetas (Brazil, Suriname)
Symmachia menetas eurina Schaus, 1902 (Brazil: Paraná, Santa Catarina)
Sources
Riodinidae of South America
Butterflies described in 1782
Descriptions from Illustrations of Exotic Entomology
|
https://en.wikipedia.org/wiki/Bilhete%20%C3%9Anico
|
Bilhete Único (Unified Ticket) is the name of the São Paulo transportation contactless smart card system for fare control.
Using Philips Mifare technology, the ticketing system is managed by SPTrans (São Paulo Transporte S/A), the city bus transportation authority, which is controlled by the municipal government. Tickets were first issued using the system on May 18, 2004, when Marta Suplicy was the mayor, allowing for up to four rides in two hours by paying a single fare on buses. From 2006 it has also been used in the local rapid transit system (São Paulo Metro) and on suburban railways operated by CPTM.
History
The original technical design (in about 1997) was based on Seoul's ticketing solution and provider. But the project was aborted, mostly due to software problems with the complex Vale-Transporte regulation.
Around 2001/2002 the project was restarted by SPTrans under the title Projeto de Bilhetagem Eletrônica. SPTrans took on the role of Solution Integrator and Sponsor, choosing to have at least two providers for every supply and not to depend on a sole provider, as most other cities do.
Providers
Completion of the project resulted in the Bilhete Único, which has at least 30 different solution and service providers directly involved in the project.
The solution was a major gain in solving the recharge problem: all cards are pre-paid, and recharge cannot be done on board. Other Brazilian cities had failed at creating and spreading a large recharge network. Due to "win-win" agreements with Electronic Benefits Cards networks and the National Lottery network, São Paulo had over 6000 recharge points around the city by 2010.
Other software and hardware solution providers are:
portals and back-office.
Microsoft: Windows desktops on all parts. Windows servers, Biztalk and MS-SQL on EDI from garages.
Oracle: Provides the central SQL database and data warehouse.
IBM: Provides RISC servers and AIX on central processors.
Fares and regulations
As of January 1st, 2020, regu
|
https://en.wikipedia.org/wiki/Pancake%20sorting
|
Pancake sorting is the mathematical problem of sorting a disordered stack of pancakes in order of size when a spatula can be inserted at any point in the stack and used to flip all pancakes above it. A pancake number is the maximum number of flips required for any stack of a given number of pancakes, each sorted with as few flips as possible. In this form, the problem was first discussed by American geometer Jacob E. Goodman. A variant of the problem is concerned with burnt pancakes, where each pancake has a burnt side and all pancakes must, in addition, end up with the burnt side on bottom.
All sorting methods require pairs of elements to be compared. For the traditional sorting problem, the usual problem studied is to minimize the number of comparisons required to sort a list. The number of actual operations, such as swapping two elements, is then irrelevant. For pancake sorting problems, in contrast, the aim is to minimize the number of operations, where the only allowed operations are reversals of the elements of some prefix of the sequence. Now, the number of comparisons is irrelevant.
The pancake problems
The original pancake problem
The minimum number of flips required to sort any stack of pancakes has been shown to lie between 15n/14 and 18n/11 (approximately 1.07n and 1.64n), but the exact value is not known.
The simplest pancake sorting algorithm performs at most 2n − 3 flips. In this algorithm, a kind of selection sort, we bring the largest pancake not yet sorted to the top with one flip; take it down to its final position with one more flip; and repeat this process for the remaining pancakes.
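As an illustrative sketch (not from the source; the function names are mine), the selection-sort-style algorithm just described can be implemented with prefix reversals as the only operation, counting flips to confirm the 2n − 3 bound:

```python
def pancake_sort(stack):
    """Sort ascending from top (index 0) to bottom, using only prefix
    reversals ("flips"); returns the sorted stack and the flip count."""
    stack = list(stack)
    flips = 0

    def flip(k):                       # reverse the top k pancakes
        nonlocal flips
        stack[:k] = stack[:k][::-1]
        flips += 1

    for size in range(len(stack), 1, -1):
        # position of the largest pancake among the top `size`
        i = max(range(size), key=lambda j: stack[j])
        if i == size - 1:
            continue                   # already in its final place
        if i != 0:
            flip(i + 1)                # first flip: bring it to the top
        flip(size)                     # second flip: send it to the bottom
    return stack, flips

stack, flips = pancake_sort([3, 1, 4, 1, 5, 9, 2, 6])
assert stack == [1, 1, 2, 3, 4, 5, 6, 9]
assert flips <= 2 * 8 - 3              # at most 2n - 3 flips for n pancakes
```

Each of the first n − 2 pancakes costs at most two flips and the last unsorted pair costs at most one, which gives the 2n − 3 bound.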
In 1979, Bill Gates and Christos Papadimitriou gave a lower bound of 17n/16 (approximately 1.06n) flips and an upper bound of (5n + 5)/3. The upper bound was improved, thirty years later, to 18n/11 by a team of researchers at the University of Texas at Dallas, led by Founders Professor Hal Sudborough.
In 2011, Laurent Bulteau, Guillaume Fertin, and Irena Rusu proved that the problem of finding the shortest sequence of flips for a given stack of pancakes is NP-hard, th
|
https://en.wikipedia.org/wiki/300%20%28number%29
|
300 (three hundred) is the natural number following 299 and preceding 301.
Mathematical properties
The number 300 is a triangular number and the sum of a pair of twin primes (149 + 151), as well as the sum of ten consecutive primes (13 + 17 + 19 + 23 + 29 + 31 + 37 + 41 + 43 + 47).
It is palindromic in 3 consecutive bases: 300₁₀ = 606₇ = 454₈ = 363₉, and also in base 13. Its factorization is 2² × 3 × 5². 300⁶⁴ + 1 is prime.
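The listed arithmetic properties are easy to check mechanically; a short self-contained Python verification (helper names are mine):

```python
def is_prime(n):
    return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

def digits(n, base):
    """Digits of n in the given base, most significant first."""
    out = []
    while n:
        out.append(n % base)
        n //= base
    return out[::-1]

n = 300
assert n == 24 * 25 // 2                                   # triangular: T(24)
assert 2 ** 2 * 3 * 5 ** 2 == n                            # factorization
assert is_prime(149) and is_prime(151) and 149 + 151 == n  # twin-prime sum
assert sum([13, 17, 19, 23, 29, 31, 37, 41, 43, 47]) == n  # ten consecutive primes
for base in (7, 8, 9, 13):                                 # palindromic bases
    d = digits(n, base)
    assert d == d[::-1]
```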
Integers from 301 to 399
300s
301
301 = 7 × 43. 301 is the sum of three consecutive primes (97 + 101 + 103), a happy number in base 10, and a lazy caterer number.
302
302 = 2 × 151. 302 is a nontotient, a happy number, and the number of partitions of 40 into prime parts.
303
303 = 3 × 101. 303 is a palindromic semiprime. The number of compositions of 10 which cannot be viewed as stacks is 303.
304
305
305 = 5 × 61. 305 is the convolution of the first 7 primes with themselves.
306
306 = 2 × 32 × 17. 306 is the sum of four consecutive primes (71 + 73 + 79 + 83), pronic number, and an untouchable number.
307
307 is a prime number, a Chen prime, and the number of one-sided octiamonds.
308
308 = 2² × 7 × 11. 308 is a nontotient, the totient sum of the first 31 integers, a heptagonal pyramidal number, and the sum of two consecutive primes (151 + 157).
309
309 = 3 × 103. 309 is a Blum integer and the number of primes ≤ 211.
310s
310
311
312
312 = 2³ × 3 × 13, an idoneal number.
313
314
314 = 2 × 157. 314 is a nontotient, smallest composite number in Somos-4 sequence.
315
315 = 3² × 5 × 7. 315 is a rencontres number and a highly composite odd number, having 12 divisors.
316
316 = 2² × 79. 316 is a centered triangular number and a centered heptagonal number.
317
317 is a prime number, Eisenstein prime with no imaginary part, Chen prime, and a strictly non-palindromic number.
317 is the exponent (and number of ones) in the fourth base-10 repunit prime.
318
319
319 = 11 × 29. 319 is the sum of three consecutive primes (103 + 107 + 109), Smith number, cannot be represented as the sum of
|
https://en.wikipedia.org/wiki/Animal%20model%20of%20schizophrenia
|
Research into the mental disorder of schizophrenia involves multiple animal models as tools, including in the preclinical stage of drug development.
Several models simulate schizophrenia defects. These fit into four basic categories: pharmacological models, developmental models, lesion models, and genetic models. Historically, pharmacological, or drug-induced models were the most widely used. These involve the manipulation of various neurotransmitter systems, including dopamine, glutamate, serotonin, and GABA. Lesion models, in which an area of an animal's brain is damaged, arose from theories that schizophrenia involves neurodegeneration, and that problems during neurodevelopment cause the disease. Traditionally, rodent models of schizophrenia mostly targeted symptoms analogous to the positive symptoms of schizophrenia, with some models also having symptoms similar to the negative symptoms. Recent developments in schizophrenia research, however, have targeted cognitive symptoms as some of the most debilitating and influential in patients' daily lives, and thus have become a larger target in animal models of schizophrenia. Animals used as models for schizophrenia include rats, mice, and primates.
Uses and limitations
The modelling of schizophrenia in animals can range from attempts to imitate the full extent of symptoms found in schizophrenia, to more specific modelling which investigate the efficacy of antipsychotic drugs. Each extreme has its limitations, with whole-syndrome modeling often failing due to the complexity and heterogeneous nature of schizophrenia, as well as difficulty translating human specific diagnostic criteria such as disorganized speech to animals. Antipsychotic-specific modelling faces similar issues, one of which is that it is not useful for discovering drugs with unique mechanisms of action, while traditional medications for schizophrenia have generalized effects (blocking of dopamine receptors) that make it difficult to attribute outcom
|
https://en.wikipedia.org/wiki/3rd%20meridian%20west
|
The meridian 3° west of Greenwich is a line of longitude that extends from the North Pole across the Arctic Ocean, the Atlantic Ocean, Europe, Africa, the Southern Ocean, and Antarctica to the South Pole.
The 3rd meridian west forms a great circle with the 177th meridian east.
From Pole to Pole
Starting at the North Pole and heading south to the South Pole, the 3rd meridian west passes through:
{| class="wikitable plainrowheaders"
! scope="col" width="125" | Co-ordinates
! scope="col" | Country, territory or sea
! scope="col" | Notes
|-
| style="background:#b0e0e6;" |
! scope="row" style="background:#b0e0e6;" | Arctic Ocean
| style="background:#b0e0e6;" |
|-
| style="background:#b0e0e6;" |
! scope="row" style="background:#b0e0e6;" | Atlantic Ocean
| style="background:#b0e0e6;" |
|-
|
! scope="row" |
| Scotland — island of Westray
|-
| style="background:#b0e0e6;" |
! scope="row" style="background:#b0e0e6;" | Atlantic Ocean
| style="background:#b0e0e6;" | Westray Firth
|-
|
! scope="row" |
| Scotland — islands of Rousay and Wyre
|-
| style="background:#b0e0e6;" |
! scope="row" style="background:#b0e0e6;" | North Sea
| style="background:#b0e0e6;" | Wide Firth
|-
|
! scope="row" |
| Scotland — island of Mainland (Orkney)
|-
| style="background:#b0e0e6;" |
! scope="row" style="background:#b0e0e6;" | Scapa Flow
| style="background:#b0e0e6;" |
|-
|
! scope="row" |
| Scotland — island of South Ronaldsay
|-
| style="background:#b0e0e6;" |
! scope="row" style="background:#b0e0e6;" | North Sea
| style="background:#b0e0e6;" |
|-
|
! scope="row" |
| Scotland — passing through Dundee (at )
|-
| style="background:#b0e0e6;" |
! scope="row" style="background:#b0e0e6;" | Firth of Forth
| style="background:#b0e0e6;" |
|-valign="top"
|
! scope="row" |
| Scotland — passing just east of Edinburgh (at ) England — from
|-
| style="background:#b0e0e6;" |
! scope="row" style="background:#b0e0e6;" | Irish Sea
| style="background:#b0e0e6;" | Morecambe Bay
|-valign="top"
|
https://en.wikipedia.org/wiki/Verhildersum
|
Verhildersum is a borg directly to the east of the town of Leens in the Dutch province of Groningen. It is now a museum.
Etymology
The name Verhildersum comes from Verhildert, where Ver means 'woman' or 'noble woman' and Hilder(t) is a proper name. When this woman lived is unknown. The ending -um in Verhildersum stands for 'house'.
History
The borg dates, as a heerd (a word for farm in Gronings), from the 14th century. It became a borg in the 17th century.
In 1398 a certain Aylko Ferhildema is mentioned, the same person as Aylko Onsta from Sauwerd. The surname Ferhildema could indicate that he (had) lived in Verhildersum. It is unknown whether, after Aylko Onsta, other members of the Onsta family lived in Verhildersum, but it is considered highly probable that the Onstas kept possession of the borg for some time. One clue to this is that the borg was destroyed in both 1400 and 1514 by the city-Groningers (inhabitants of the city Groningen), just like the Onstaborg. However, in between these two battles no mention of the borg is made in official records. In a document mention is made of the reconstruction of the borg after 1514 for the sum of 1200 gold pieces, excluding some exterior buildings.
After the death of the inhabitant Aepke Onsta in 1564, Ecke Claessen is mentioned as the inhabitant of the borg in 1576. Complaints by him are made with regard to troubles caused by billeted soldiers with their two wives and a child, who reside at the borg due to the Eighty Years' War. During the war Eylco Onsta flees to East Frisia, but despite this keeps calling himself hoofdeling of Wetsinge, Sauwerd and Verhildersum. However in reality, in 1587, twenty years after the death of Aepke Onsta, it is arranged in the property settlement that not he alone, but he and his sister Hidde Onsta together become owner of the borg Verhildersum. Verhildersum was then a heerd (farm) of 150 yokes (about 75 hectares) on which stood 'the old borg'.
Estate
Around the borg lies the Ve
|
https://en.wikipedia.org/wiki/Test%20driver%20%28software%29
|
In software testing, a test driver is a software component or application that initiates and controls the execution of a program under test, especially when such components are part of a larger system and cannot run in isolation. Drivers control applications across various stages of software testing, from unit and integration testing through to system integration testing and acceptance testing, especially when the target module is a component of a larger system that is not yet fully implemented or otherwise unavailable.
Definition
A test driver is a software component or tool developed to initiate and oversee the execution of a component under test, particularly when the component is part of a larger system and the system is yet to be fully implemented. Essentially, the test driver mimics the components of a system that interact with the component under test, feeding it the necessary input and controlling its execution. The primary goal of using a test driver is to verify the functionality of the isolated component in the absence of its intended complete environment.
Purpose
Test drivers are tailored to meet the unique requirements of different testing environments. With manual test drivers, testers can directly initiate actions, offering them direct control throughout the testing phase. In comparison, automated test drivers—typically in the form of tools or scripts—can carry out tests on their own. These are especially useful in situations that demand repetitive or extensive testing.
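As a minimal illustration (the component `parse_record` and the test cases are hypothetical, not from any real system), an automated test driver in Python might feed inputs to an isolated component, record crashes as well as mismatches, and report the failures:

```python
# Hypothetical component under test: normally fed by a larger ingestion
# pipeline that is not yet implemented (all names are illustrative).
def parse_record(line):
    name, sep, value = line.partition("=")
    if not sep or not name.strip() or not value.strip():
        raise ValueError(f"malformed record: {line!r}")
    return name.strip(), int(value)

# Automated test driver: supplies inputs, runs the component in isolation,
# and collects failures instead of relying on the missing pipeline.
def run_driver():
    cases = [
        ("retries = 3", ("retries", 3)),
        ("timeout=30", ("timeout", 30)),
    ]
    failures = []
    for line, expected in cases:
        try:
            got = parse_record(line)
        except Exception as exc:       # a driver also records crashes
            failures.append((line, repr(exc)))
            continue
        if got != expected:
            failures.append((line, got))
    return failures

assert run_driver() == []              # every case behaved as expected
```

In a manual setting a tester would play the role of `run_driver`, choosing inputs and inspecting outputs interactively.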
Comparison with test stubs
Test drivers and test stubs are both instrumental in software testing, but they serve distinct roles within a test harness.
Test drivers are typically active components that control or call the system under test without further inputs after they are initialised; stubs, on the other hand, are usually passive components that only receive data and respond to calls from the tested system when needed.
|
https://en.wikipedia.org/wiki/Slice%20preparation
|
The slice preparation or brain slice is a laboratory technique in electrophysiology that allows the study of neurons from various brain regions in isolation from the rest of the brain, in an ex-vivo condition. Brain tissue is initially sliced via a tissue slicer then immersed in artificial cerebrospinal fluid (aCSF) for stimulation and/or recording. The technique allows for greater experimental control, through elimination of the effects of the rest of the brain on the circuit of interest, careful control of the physiological conditions through perfusion of substrates through the incubation fluid, to precise manipulation of neurotransmitter activity through perfusion of agonists and antagonists. However, the increase in control comes with a decrease in the ease with which the results can be applied to the whole neural system.
Slice preparation techniques
Free-hand sectioning is a preparation technique in which a skilled operator uses a razor blade for slicing. The blade is wetted with an isotonic solution before cutting to avoid smudging the tissue. This method has several drawbacks, such as limits on sample size and difficulty in observing progress. Modern microtome devices, such as Compresstome microtomes, are now used to prepare slices, as these devices have fewer limitations.
Benefits
When investigating mammalian CNS activity, slice preparation has several advantages and disadvantages when compared to in vivo study.
Slice preparation is both faster and cheaper than in vivo preparation, and does not require anaesthesia beyond the initial sacrifice. The removal of the brain tissue from the body removes the mechanical effects of heartbeat and respiration, which allows for extended intracellular recording. The physiological conditions of the sample, such as oxygen and carbon dioxide levels, or pH of the extracellular fluid can be carefully adjusted and maintained. Slice work under a microscope also allows for careful placement of the recording electrode, whic
|
https://en.wikipedia.org/wiki/Sexual%20differentiation%20in%20humans
|
Sexual differentiation in humans is the process of development of sex differences in humans. It is defined as the development of phenotypic structures consequent to the action of hormones produced following gonadal determination. Sexual differentiation includes the development of different genitalia and internal genital tracts; body hair also plays a role in sex identification.
The development of sexual differences begins with the XY sex-determination system that is present in humans, and complex mechanisms are responsible for the development of the phenotypic differences between male and female humans from an undifferentiated zygote. Females typically have two X chromosomes, and males typically have a Y chromosome and an X chromosome. At an early stage in embryonic development, both sexes possess equivalent internal structures. These are the mesonephric ducts and paramesonephric ducts. The presence of the SRY gene on the Y chromosome causes the development of the testes in males, and the subsequent release of hormones which cause the paramesonephric ducts to regress. In females, the mesonephric ducts regress.
Divergent sexual development, known as intersex, can be a result of genetic and hormonal factors.
Sex determination
Most mammals, including humans, have an XY sex-determination system: the Y chromosome carries factors responsible for triggering male development. In the absence of a Y chromosome, the fetus will undergo female development. This is because of the presence of the sex-determining region of the Y chromosome, also known as the SRY gene. Thus, male mammals typically have an X and a Y chromosome (XY), while female mammals typically have two X chromosomes (XX).
Chromosomal sex is determined at the time of fertilization; a chromosome from the sperm cell, either X or Y, fuses with the X chromosome in the egg cell. Gonadal sex refers to the gonads, that is the testis or ovaries, depending on which genes are expressed. Phenotypic sex refers to the struct
|
https://en.wikipedia.org/wiki/Herbert%20Weston%20Edmunds
|
Herbert Weston Edmunds (1881 – 27 September 1954) was a British marine insurance underwriter and philatelist. Edmunds was president of the Royal Philatelic Society London 1950–53.
Edmunds was educated at Highgate School and the University of Cambridge. He was a member of Lloyd's of London for more than 30 years. He joined the Royal Philatelic Society in 1931 and specialised in the philately of Hanover, Tuscany, and Oldenburg.
|
https://en.wikipedia.org/wiki/Polyphase%20matrix
|
In signal processing, a polyphase matrix is a matrix whose elements are filter masks. It represents a filter bank as used in sub-band coders, also known as discrete wavelet transforms.
If two filters are given, then one level of the traditional wavelet transform maps an input signal to two output signals, each of half the length:
Note that the dot denotes polynomial multiplication, i.e. convolution, and the downward arrow denotes downsampling.
If the above formula is implemented directly, values are computed that are subsequently discarded by the downsampling. Their computation can be avoided by splitting the filters and the signal into even- and odd-indexed values before the wavelet transform:
The arrows denote left and right shifting, respectively. They have the same precedence as convolution, because they are in fact convolutions with a shifted discrete delta impulse.
The wavelet transform reformulated in terms of the split filters is:
This can be written as a matrix-vector multiplication:
This matrix is the polyphase matrix.
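To make the matrix concrete, consider the unnormalized Haar filter pair (an assumed example; the article does not fix particular filters), whose polyphase matrix is [[1, 1], [1, -1]]. A minimal Python sketch of the split-then-filter scheme:

```python
def haar_analysis(signal):
    """Split the signal into even/odd phases, then apply the (unnormalized)
    Haar polyphase matrix [[1, 1], [1, -1]] -- no discarded values are computed."""
    even, odd = signal[0::2], signal[1::2]
    low = [e + o for e, o in zip(even, odd)]    # sum (lowpass) channel
    high = [e - o for e, o in zip(even, odd)]   # difference (highpass) channel
    return low, high

def haar_synthesis(low, high):
    """Invert the polyphase matrix; its inverse is (1/2) * [[1, 1], [1, -1]]."""
    signal = []
    for l, h in zip(low, high):
        signal.append((l + h) / 2)  # even phase
        signal.append((l - h) / 2)  # odd phase
    return signal

x = [4.0, 2.0, 5.0, 5.0, 1.0, 3.0]
low, high = haar_analysis(x)
assert haar_synthesis(low, high) == x  # perfect reconstruction
```

The Haar polyphase matrix has the constant determinant -2, which is invertible, so the filter pair satisfies the perfect reconstruction property.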
A polyphase matrix can have any size; it need not be square. Thus the principle extends to arbitrary filter banks, multiwavelets, and wavelet transforms based on fractional refinements.
Properties
The representation of sub-band coding by the polyphase matrix is more than a notational simplification. It allows the adaptation of many results from matrix theory and module theory. The following properties are explained for a matrix, but they apply equally in higher dimensions.
Invertibility/perfect reconstruction
The case in which a polyphase matrix allows reconstruction of a processed signal from the filtered data is called the perfect reconstruction property. Mathematically, this is equivalent to invertibility. By the theorem on invertibility of a matrix over a ring, the polyphase matrix is invertible if and only if its determinant is a Kronecker delta, which is zero everywhere except for one
|
https://en.wikipedia.org/wiki/Phycoerythrocyanin
|
Phycoerythrocyanin is a phycobiliprotein, a magenta chromoprotein involved in the photosynthesis of some cyanobacteria. This chromoprotein consists of alpha and beta subunits, generally aggregated as a hexamer. Alpha-phycoerythrocyanin contains a phycoviolobilin, a violet bilin, covalently attached at Cys-84, and beta-phycoerythrocyanin contains two phycocyanobilins, a blue bilin, covalently attached at Cys-84 and Cys-155, respectively. Phycoerythrocyanin is similar to phycocyanin, an important component of the light-harvesting complex (phycobilisome) of cyanobacteria and red algae.
While only phycocyanobilin is covalently bound to phycocyanin, giving an absorption maximum around 620 nm, phycoerythrocyanin, containing both phycoviolobilin and phycocyanobilin, has an absorption maximum around 575 nm. As both phycoerythrocyanin and phycocyanin have phycocyanobilin acting as the terminal acceptor of energy transfer, they fluoresce around 635 nm, which is absorbed by allophycocyanins that have maximal absorption around 650 nm and maximal fluorescence around 670 nm. Finally, the light energy absorbed by phycoerythrocyanin is transferred to the photosynthetic reaction center.
|
https://en.wikipedia.org/wiki/Sky%20Q
|
Sky Q is a subscription-based television and entertainment service operated by the British satellite television provider Sky as part of its operations in Austria, Germany, Ireland, Italy and the UK. The name also refers to the Sky Q set-top box.
Sky Q launched in 2016, replacing the previous Sky+ and Sky+ HD services. Sky Q has been referred to as a "multimedia platform" that combines conventional television with on-demand and catch-up services, as well as third-party services. It includes a PVR set top box, a multiroom set top box, a dedicated broadband-connected "hub", and applications for mobile and desktop devices. As of April 2018, Sky Q was in 2.5 million homes in the UK, Ireland and Italy. In July 2018, Sky reported that there were 3.6 million Sky Q customers.
Launch
Sky Q was first announced by Sky UK in November 2015, and was released in the UK in February 2016.
Sky did not roll out Sky Q in Germany, Austria and Italy immediately, but released a modified version of the Sky Q set top box by Autumn 2016, named Sky+ Pro in Germany and Austria, and My Sky in Italy. Like Sky Q, the box is capable of UHD resolution and has a built-in Wi-Fi router, but it omits significant Sky Q features. Sky Italia later launched Sky Q in Italy in November 2017, and Sky Deutschland did so in Germany and Austria in May 2018. In contrast to the UK, Ireland and Italy (especially where Sky Italia launched Sky Q separately from My Sky), existing customers in Germany and Austria could receive Sky Q through a free software update on existing Sky+ Pro receivers.
Hardware
The Sky Q "Silver" set top box (called "Platinum" in Italy) has a 2 terabyte hard disk and 12 satellite tuners, allowing up to six live TV channels to be recorded while watching a seventh. The box is capable of receiving and displaying 4K resolution "ultra-high-definition" (UHD) broadcasts, which were started by Sky in the UK in August 2016.
The standard Sky Q box has 1 terabyte of storage and 8 tuners, sup
|
https://en.wikipedia.org/wiki/History%20of%20subatomic%20physics
|
The idea that matter consists of smaller particles and that there exists a limited number of sorts of primary, smallest particles in nature has existed in natural philosophy at least since the 6th century BC. Such ideas gained physical credibility beginning in the 19th century, but the concept of "elementary particle" underwent some changes in its meaning: notably, modern physics no longer deems elementary particles indestructible. Even elementary particles can decay or collide destructively; they can cease to exist, creating other particles as a result.
Increasingly small particles have been discovered and researched: they include molecules, which are constructed of atoms, that in turn consist of subatomic particles, namely atomic nuclei and electrons. Many more types of subatomic particles have been found. Most such particles (but not electrons) were eventually found to be composed of even smaller particles such as quarks. Particle physics studies these smallest particles and their behaviour under high energies, whereas nuclear physics studies atomic nuclei and their (immediate) constituents: protons and neutrons.
Early development
The idea that all matter is composed of elementary particles dates back at least to the 6th century BC. The Jains in ancient India were among the earliest to advocate the particulate nature of material objects, between the 9th and 5th centuries BCE. According to Jain leaders like Parshvanatha and Mahavira, the ajiva (the non-living part of the universe) consists of matter, or pudgala, of definite or indefinite shape, which is made up of tiny, uncountable, and invisible particles called permanu. A permanu occupies one space-point, and each permanu has a definite colour, smell, taste, and texture. Infinite varieties of permanu unite and form pudgala. The philosophical doctrine of atomism and the nature of elementary particles were also studied by ancient Greek philosophers such as Leucippus, Democritus, and Epicurus; ancient Indian philosophers such as Kanada, Dignāga, and Dha
|
https://en.wikipedia.org/wiki/List%20of%20fractals%20by%20Hausdorff%20dimension
|
According to Benoit Mandelbrot, "A fractal is by definition a set for which the Hausdorff-Besicovitch dimension strictly exceeds the topological dimension."
Presented here is a list of fractals, ordered by increasing Hausdorff dimension, to illustrate what it means for a fractal to have a low or a high dimension.
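For exactly self-similar fractals, the Hausdorff dimension coincides with the similarity dimension log N / log s, where the set consists of N copies of itself, each scaled down by a factor s. A small Python sketch (the named constructions are standard examples, not entries copied from the tables below):

```python
import math

def similarity_dimension(n_copies, scale_factor):
    """Similarity dimension d = log(N) / log(s) of a self-similar set
    built from N copies, each scaled down by the factor s."""
    return math.log(n_copies) / math.log(scale_factor)

# Cantor set: 2 copies scaled by 1/3 -> d ~ 0.631 (low dimension)
# Koch curve: 4 copies scaled by 1/3 -> d ~ 1.262
# Sierpinski triangle: 3 copies scaled by 1/2 -> d ~ 1.585 (higher dimension)
dims = {
    "Cantor set": similarity_dimension(2, 3),
    "Koch curve": similarity_dimension(4, 3),
    "Sierpinski triangle": similarity_dimension(3, 2),
}
```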
Deterministic fractals
Random and natural fractals
See also
Fractal dimension
Hausdorff dimension
Scale invariance
Notes and references
Further reading
External links
The fractals on Mathworld
Other fractals on Paul Bourke's website
Soler's Gallery
Fractals on mathcurve.com
1000fractales.free.fr - Project gathering fractals created with various software
Fractals unleashed
IFStile - software that computes the dimension of the boundary of self-affine tiles
Hausdorff Dimension
|
https://en.wikipedia.org/wiki/Cas%20Cremers
|
Casimier Joseph Franciscus "Cas" Cremers (born 1974) is a computer scientist and a faculty member at the CISPA Helmholtz Center for Information Security in Saarbruecken, Germany.
Career
Cremers received his PhD from Eindhoven University of Technology in 2006, under the supervision of Sjouke Mauw and Erik de Vink. Between 2006 and 2013, he worked at the Information Security Group at ETH Zurich, Switzerland, until joining the University of Oxford in 2013. He was made full professor of Information Security in 2015.
His research focuses on information security, in particular the formal analysis of security protocols. This work ranges from developing mathematical foundations for protocol analysis to the development of analysis tools, notably the Scyther and Tamarin tools. Recently his research expanded into directions such as protocol standardisation, including the improvement of the ISO/IEC 9798 standard, and applied cryptography, leading to the development of new security requirements and protocols. His joint work with Marko Horvat, Sam Scott, and Thyla van der Merwe led to a notable change to the TLS 1.3 specification.
In 2018, Cremers moved from the University of Oxford to the CISPA Helmholtz Center for Information Security in Saarbrücken.
Cremers previously worked in MSX computer game development, initially working for the Sigma Group before founding his own group Parallax; he is credited for work on nine different games, and many other demos, in a combination of roles including programmer, designer, composer, and writer. He was interviewed by blog "Distrito Entebras" on the history of his career working in MSX games development.
Publications
Cremers' publications cover security, cryptography, ISO standards, automated verification of security protocols, and formal methods.
His thesis was entitled "Scyther - Semantics and Verification of Security Protocols", and was supervised by Sjouke Mauw and Erik de Vink. Also published with Sjouke Mauw is their book Op
|
https://en.wikipedia.org/wiki/Fibular%20collateral%20ligament
|
The lateral collateral ligament (LCL, long external lateral ligament or fibular collateral ligament) is an extrinsic ligament of the knee located on the lateral side of the knee. Its superior attachment is at the lateral epicondyle of the femur (superoposterior to the popliteal groove); its inferior attachment is at the lateral aspect of the head of fibula (anterior to the apex). The LCL is not fused with the joint capsule. Inferiorly, the LCL splits the tendon of insertion of the biceps femoris muscle.
Structure
The LCL measures some 5 cm in length. It is rounded, and narrower than the medial collateral ligament. It extends obliquely inferoposteriorly from its superior attachment to its inferior attachment.
In contrast to the medial collateral ligament, it is fused with neither the capsular ligament nor the lateral meniscus. Because of this, the LCL is more flexible than its medial counterpart, and is therefore less susceptible to injury.
Relations
Immediately below its origin is the groove for the tendon of the popliteus.
The greater part of its lateral surface is covered by the tendon of the biceps femoris; the tendon, however, divides at its insertion into two parts, which are separated by the ligament.
Deep to the ligament are the tendon of the popliteus, and the inferior lateral genicular vessels and nerve.
Function
Both collateral ligaments are taut when the knee joint is in extension. With the knee in flexion, the radius of curvatures of the condyles is decreased and the origin and insertions of the ligaments are brought closer together which make them lax. The pair of ligaments thus stabilize the knee joint in the coronal plane. Therefore, damage and rupture of these ligaments can be diagnosed by examining the knee's stability in the mediolateral axis.
Clinical significance
Causes of injury
The LCL is usually injured as a result of varus force across the knee, which is a force pushing the knee from the medial (inn
|
https://en.wikipedia.org/wiki/Particle%20in%20a%20one-dimensional%20lattice
|
In quantum mechanics, the particle in a one-dimensional lattice is a problem that occurs in the model of a periodic crystal lattice. The potential is caused by ions in the periodic structure of the crystal creating an electromagnetic field so electrons are subject to a regular potential inside the lattice. It is a generalization of the free electron model, which assumes zero potential inside the lattice.
Problem definition
When talking about solid materials, the discussion is mainly around crystals – periodic lattices. Here we will discuss a 1D lattice of positive ions. Assuming the spacing between two ions is , the potential in the lattice will look something like this:
The mathematical representation of the potential is a periodic function with a period . According to Bloch's theorem, the wavefunction solution of the Schrödinger equation when the potential is periodic can be written as:
where is a periodic function which satisfies . It is the Bloch factor with Floquet exponent which gives rise to the band structure of the energy spectrum of the Schrödinger equation with a periodic potential like the Kronig–Penney potential or a cosine function as in the Mathieu equation.
When nearing the edges of the lattice, there are problems with the boundary condition. Therefore, we can represent the ion lattice as a ring following the Born–von Karman boundary conditions. If is the length of the lattice so that , then the number of ions in the lattice is so big that, when considering one ion, its surroundings are almost linear, and the wavefunction of the electron is unchanged. So now, instead of two boundary conditions, we get one circular boundary condition:
If is the number of ions in the lattice, then we have the relation: . Replacing in the boundary condition and applying Bloch's theorem will result in a quantization for :
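The band structure produced by a periodic potential can be seen numerically. The sketch below uses the Kronig–Penney dispersion relation in the delta-function limit, cos(ka) = cos(αa) + P sin(αa)/(αa), in dimensionless units (a = 1, 2m/ħ² = 1) with an assumed barrier strength P = 3; energies for which the right-hand side has magnitude at most 1 admit a real Bloch wavenumber k and therefore lie in an allowed band:

```python
import math

def kp_rhs(energy, p=3.0):
    """Right-hand side of cos(k) = cos(sqrt(E)) + P * sin(sqrt(E)) / sqrt(E)
    (delta-function Kronig-Penney model, dimensionless units, barrier strength P)."""
    alpha = math.sqrt(energy)
    return math.cos(alpha) + p * math.sin(alpha) / alpha

def allowed(energy, p=3.0):
    """An energy lies in an allowed band iff |RHS| <= 1, i.e. a real k exists."""
    return abs(kp_rhs(energy, p)) <= 1.0

# scanning energies reveals alternating allowed bands and forbidden gaps
bands = [allowed(e / 10.0) for e in range(1, 400)]
```

For example, with P = 3, low energies such as E = 0.5 are forbidden, E = 6.25 lies inside the first allowed band, and E = 12.25 falls in the first gap.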
Kronig–Penney model
The Kronig–Penney model (named after Ralph Kronig and William Penney) is a simple, idealized quantum-mechanical system tha
|
https://en.wikipedia.org/wiki/Zonule%20of%20Zinn
|
The zonule of Zinn () (Zinn's membrane, ciliary zonule) (after Johann Gottfried Zinn) is a ring of fibrous strands forming a zonule (little band) that connects the ciliary body with the crystalline lens of the eye. These fibers are sometimes collectively referred to as the suspensory ligaments of the lens, as they act like suspensory ligaments.
Development
The non-pigmented ciliary epithelial cells of the eye synthesize portions of the zonules.
Anatomy
The zonule of Zinn is split into two layers: a thin layer, which lies near the hyaloid fossa, and a thicker layer, which is a collection of zonular fibers. Together, the fibers are known as the suspensory ligament of the lens. The zonules are about 1–2 μm in diameter.
The zonules attach to the lens capsule 2 mm anterior and 1 mm posterior to the equator, and arise from the ciliary epithelium of the pars plana region as well as from the valleys between the ciliary processes in the pars plicata.
When colour granules are displaced from the zonules of Zinn (by friction against the lens), the iris slowly fades. In some cases these colour granules clog the drainage channels and lead to pigmentary glaucoma.
The zonules are primarily made of fibrillin, a connective tissue protein. Mutations in the fibrillin gene lead to the condition Marfan syndrome, and consequences include an increased risk of lens dislocation.
Clinical appearance
The zonules of Zinn are difficult to visualize using a slit lamp, but may be seen with exceptional dilation of the pupil, or if a coloboma of the iris or a subluxation of the lens is present. The number of zonules present in a person appears to decrease with age. The zonules insert around the outer margin of the lens (equator), both anteriorly and posteriorly.
Function
The zonule secures the lens along the optical axis and transfers forces from the ciliary muscle during accommodation. When colour granules are displaced from the zonules of Zinn, caused by friction against the lens, the iris can slowly fade. These
|
https://en.wikipedia.org/wiki/Australian%20Faunal%20Directory
|
The Australian Faunal Directory (AFD) is an online catalogue of taxonomic and biological information on all animal species known to occur within Australia. It is a program of the Department of Climate Change, Energy, the Environment and Water of the Government of Australia. As of 12 May 2021, the Australian Faunal Directory had collected information on 126,442 species and subspecies. It includes the data from the discontinued Zoological Catalogue of Australia and is regularly updated. Started in the 1980s, it set a goal to compile a "list of all Australian fauna including terrestrial vertebrates, ants and marine fauna" and create an "Australian biotaxonomic information system". This electronic key and educational package enables faster and more orderly identification of Australian centipede species.
|
https://en.wikipedia.org/wiki/Asynchronous%20connection-oriented%20logical%20transport
|
Introduction
ACL is an informal acronym which refers to the Bluetooth Asynchronous Connection-oriented Logical transport. ACL is used as a shorthand to refer to one of two types of logical transport defined in the Bluetooth Core Specification, either BR/EDR ACL or LE ACL. BR/EDR ACL is the ACL logical transport variant used with Bluetooth Basic Rate/Enhanced Data Rate (BR/EDR, also known as Bluetooth Classic) whilst LE ACL is the ACL logical transport variant used with Bluetooth Low Energy (LE).
The ACL transports are part of the Bluetooth data transport architecture.
Note that all Bluetooth terminology, protocols and procedures, including ACL, are defined in the Bluetooth Core Specification, which is published by the standards development organisation, the Bluetooth Special Interest Group (Bluetooth SIG).
The Bluetooth Data Transport Architecture
The architecture section of the Bluetooth Core Specification defines a number of concepts which collectively constitute the Bluetooth data transport architecture. Key amongst these concepts are the Physical Channel, Physical Link, Logical Link and Logical Transport. Certain combinations are intended for use in different application types which have particular requirements regarding issues such as topology, timing, reliability and radio channel use.
The LE ACL logical transport is used with either an LE-C logical link, which carries control data or an LE-U logical link which is for user data. It is based on an LE Active Physical Link and the LE Piconet Physical Channel. See Figure 1.
The BR/EDR ACL logical transport is used with either an ACL-C logical link for control data or an ACL-U logical link for user data and it is based on a BR/EDR Active Physical Link and either the BR/EDR Basic Piconet Physical Channel or the BR/EDR Adapted Piconet Physical Channel. See Figure 2.
Both ACL variants are designed to provide reliable, bi-directional, point to point communication.
LE ACL
Overview
A Blue
|
https://en.wikipedia.org/wiki/Greater%20sciatic%20foramen
|
The greater sciatic foramen is an opening (foramen) in the posterior human pelvis. It is formed by the sacrotuberous and sacrospinous ligaments. The piriformis muscle passes through the foramen and occupies most of its volume. The greater sciatic foramen is wider in women than in men.
Structure
It is bounded as follows:
anterolaterally by the greater sciatic notch of the ilium.
posteromedially by the sacrotuberous ligament.
inferiorly by the sacrospinous ligament and the ischial spine.
superiorly by the anterior sacroiliac ligament.
Function
The piriformis, which exits the pelvis through the foramen, occupies most of its volume.
The following structures also exit the pelvis through the greater sciatic foramen:
See also
Lesser sciatic foramen
|
https://en.wikipedia.org/wiki/Pot-in-pot%20refrigerator
|
A pot-in-pot refrigerator, clay pot cooler or zeer () is an evaporative cooling refrigeration device which does not use electricity. It uses a porous outer clay pot (lined with wet sand) containing an inner pot (which can be glazed to prevent penetration by the liquid) within which the food is placed. The evaporation of the outer liquid draws heat from the inner pot. The device can cool any substance, and requires only a flow of relatively dry air and a source of water.
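The cooling mechanism can be quantified with a back-of-the-envelope estimate: each kilogram of water evaporating from the wet sand removes roughly the latent heat of vaporization. The sketch below is an idealized upper bound (it assumes all latent heat is drawn from the inner contents, treated as water; the constants are approximate):

```python
L_V = 2.43e6        # J/kg, latent heat of vaporization of water near 30 C (approx.)
C_WATER = 4186.0    # J/(kg*K), specific heat of liquid water

def evaporative_cooling_estimate(evaporated_kg, contents_kg):
    """Idealized temperature drop (K) of the inner contents when a given mass of
    water evaporates; real pots lose much of the heat to the surrounding air."""
    heat_removed = evaporated_kg * L_V              # J drawn from the inner pot
    return heat_removed / (contents_kg * C_WATER)   # K

# evaporating 50 g of water from the sand around 10 kg of (water-like) contents
drop = evaporative_cooling_estimate(0.05, 10.0)     # about 2.9 K at most
```

In practice the achievable drop depends on the humidity and airflow, since evaporation stalls as the surrounding air approaches saturation.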
History
Many clay pots from around 3000 BC were discovered in the Indus Valley civilization and are considered to have been used for cooling as well as storing water. The pots are similar to the present-day ghara and matki used in India and Pakistan.
There is evidence that evaporative cooling may have been used in North Africa as early as the Old Kingdom of Egypt, circa 2500 BC. Frescoes show slaves fanning water jars, which would increase the air flow around the porous jars and aid evaporation, cooling the contents. These jars exist even today and are called zeer, hence the name of the pot cooler. Despite being developed in Northern Africa, the technology appears to have been forgotten since the advent of modern electrical refrigerators.
However, in the Indian subcontinent, ghara, matka and surahi, all of which are different types of clay water pots, are in everyday use to cool water. In Spain, botijos are popular. A botijo is a porous clay container used to keep and to cool water; they have been in use for centuries and are still in relatively widespread use. Botijos are favored most by the low Mediterranean climate; locally, the cooling effect is known as "botijo effect".
In the 1890s, gold miners in Australia developed the Coolgardie safe, based on the same principles.
In rural northern Nigeria in the 1990s, Mohamed Bah Abba developed the Pot-in-Pot Preservation Cooling System, consisting of a small clay pot placed inside a larger one, and the space between the two filled with moist sand. The i
|
https://en.wikipedia.org/wiki/European%20Bioinformatics%20Institute
|
The European Bioinformatics Institute (EMBL-EBI) is an intergovernmental organization (IGO) which, as part of the European Molecular Biology Laboratory (EMBL) family, focuses on research and services in bioinformatics. It is located on the Wellcome Genome Campus in Hinxton near Cambridge, and employs over 600 full-time equivalent (FTE) staff. Institute leaders such as Rolf Apweiler, Alex Bateman, Ewan Birney, and Guy Cochrane, an adviser on the National Genomics Data Center Scientific Advisory Board, serve as part of the international research network of the BIG Data Center at the Beijing Institute of Genomics.
Additionally, the EMBL-EBI hosts training programs that teach scientists the fundamentals of working with biological data and promote the wide range of bioinformatic tools available for their research, both EMBL-EBI-based and otherwise.
Bioinformatic services
One of the roles of the EMBL-EBI is to index and maintain biological data in a set of databases, including Ensembl (housing whole genome sequence data), UniProt (protein sequence and annotation database) and Protein Data Bank (protein and nucleic acid tertiary structure database). A variety of online services and tools is provided, such as Basic Local Alignment Search Tool (BLAST) or Clustal Omega sequence alignment tool, enabling further data analysis.
BLAST
BLAST is an algorithm for comparing the primary structure of biomacromolecules, most often the nucleotide sequences of DNA/RNA or the amino acid sequences of proteins stored in bioinformatic databases, with a query sequence. The algorithm scores the available sequences against the query using a scoring matrix such as BLOSUM62. The highest-scoring sequences represent the closest relatives of the query in terms of functional and evolutionary similarity.
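The scoring idea can be illustrated without BLAST's seeded heuristic: score a query against each database sequence at every ungapped offset and rank the hits. The sketch below uses a toy +4/-1 scheme as a crude stand-in for BLOSUM62, and the sequences are invented for illustration:

```python
def score_pair(a, b):
    # toy substitution scores: +4 for a match, -1 for a mismatch
    return 4 if a == b else -1

def best_ungapped_score(query, subject):
    """Best local ungapped alignment score over all relative offsets
    (Smith-Waterman-style clamping of runs at zero, but without gaps)."""
    best = 0
    for offset in range(-(len(query) - 1), len(subject)):
        score = run = 0
        for i, q in enumerate(query):
            j = offset + i
            if 0 <= j < len(subject):
                run = max(0, run + score_pair(q, subject[j]))
                score = max(score, run)
        best = max(best, score)
    return best

# hypothetical database sequences, ranked against a query
database = {"seq1": "MKTAYIAKQR", "seq2": "MKTAYLAKQR", "seq3": "GGGGGGGGGG"}
query = "TAYIAK"
hits = sorted(database, key=lambda name: best_ungapped_score(query, database[name]),
              reverse=True)   # the closest relatives of the query score highest
```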
The database search by BLAST requires input data to be in a correct format (e.g. FASTA, GenBank, PIR or EMBL format). Users may also designate the specific databases to be searched, s
|
https://en.wikipedia.org/wiki/Tennenbaum%27s%20theorem
|
Tennenbaum's theorem, named for Stanley Tennenbaum who presented the theorem in 1959, is a result in mathematical logic that states that no countable nonstandard model of first-order Peano arithmetic (PA) can be recursive (Kaye 1991:153ff).
Recursive structures for PA
A structure M in the language of PA is recursive if there are recursive functions ⊕ and ⊗ from ℕ × ℕ to ℕ, a recursive two-place relation <M on ℕ, and distinguished constants n₀, n₁ such that
(ℕ, ⊕, ⊗, <M, n₀, n₁) ≅ M,
where ≅ indicates isomorphism and ℕ is the set of (standard) natural numbers. Because the isomorphism must be a bijection, every recursive model is countable. There are many nonisomorphic countable nonstandard models of PA.
Statement of the theorem
Tennenbaum's theorem states that no countable nonstandard model of PA is recursive. Moreover, neither the addition nor the multiplication of such a model can be recursive.
Proof sketch
This sketch follows the argument presented by Kaye (1991). The first step in the proof is to show that, if M is any countable nonstandard model of PA, then the standard system of M (defined below) contains at least one nonrecursive set S. The second step is to show that, if either the addition or multiplication operation on M were recursive, then this set S would be recursive, which is a contradiction.
Through the methods used to code ordered tuples, each element x of M can be viewed as a code for a set S_x of elements of M. In particular, if we let p_i be the ith prime in M, then S_x = { i ∈ ℕ : M ⊨ p_i divides x }. Each set S_x will be bounded in M, but if x is nonstandard then the set S_x may contain infinitely many standard natural numbers. The standard system of the model is the collection { S_x : x ∈ M }. It can be shown that the standard system of any nonstandard model of PA contains a nonrecursive set, either by appealing to the incompleteness theorem or by directly considering a pair of recursively inseparable r.e. sets A and B (Kaye 1991:154). These are disjoint r.e. sets such that there is no recursive set C with A ⊆ C and B ∩ C = ∅.
For the latter construction, be
|
https://en.wikipedia.org/wiki/BSND
|
Bartter syndrome, infantile, with sensorineural deafness (Barttin), also known as BSND, is a human gene which is associated with Bartter syndrome.
This gene encodes an essential beta subunit for CLC chloride channels. These heteromeric channels localize to basolateral membranes of renal tubules and of potassium-secreting epithelia of the inner ear. Mutations in this gene have been associated with Bartter syndrome with sensorineural deafness.
|
https://en.wikipedia.org/wiki/Tricholoma%20muscarium
|
Tricholoma muscarium is a mushroom found in Japan.
Toxicity
Tricholoma muscarium contains ibotenic acid and tricholomic acid, yet is considered an edible mushroom in Japan.
|
https://en.wikipedia.org/wiki/Gamma%20probe
|
A gamma probe is a handheld device containing a scintillation counter, for intraoperative use following injection of a radionuclide, to locate sentinel lymph nodes by their radioactivity. It is used primarily for sentinel lymph node mapping and parathyroid surgery. Gamma probes are also used for RSL (radioactive seed localization), to locate small and non-palpable breast lesions.
History
The sentinel node market experienced high growth in the early and mid 1990s starting with melanoma sentinel node surgical search and breast cancer sentinel node staging; both are currently considered standards of care. The use of a radioactive tracer, rather than a coloured dye, was proposed in 1984.
Clinical use
To locate the draining lymph nodes or sentinel lymph node from a breast cancer tumour, a Technetium-99m based radiopharmaceutical is common. This may be a nanocolloid or sestamibi. Although imaging with a gamma camera may also take place, the idea of a small gamma probe is that it can be used during an operation to identify lymph nodes (or other sites of uptake) at much higher resolution. The probe may be collimated to further restrict the field of detection.
See also
Nuclear medicine
Molecular Imaging
|
https://en.wikipedia.org/wiki/Monte%20Carlo%20POMDP
|
In the class of Markov decision process algorithms, the Monte Carlo POMDP (MC-POMDP) is the particle-filter version of the partially observable Markov decision process (POMDP) algorithm. In MC-POMDP, particle filters are used to update and approximate the beliefs, and the algorithm is applicable to continuous-valued states, actions, and measurements.
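The belief update at the heart of MC-POMDP can be sketched as a bootstrap particle filter: propagate each particle through the transition model, weight it by the observation likelihood, and resample. All model functions below are illustrative assumptions, not part of any specific MC-POMDP implementation:

```python
import math
import random

def update_belief(particles, action, observation, transition, obs_likelihood, rng):
    """One particle-filter belief update: propagate, weight, resample."""
    proposed = [transition(s, action, rng) for s in particles]
    weights = [obs_likelihood(observation, s) for s in proposed]
    if sum(weights) == 0.0:
        return proposed  # degenerate case: no particle explains the observation
    return rng.choices(proposed, weights=weights, k=len(particles))

# illustrative 1-D continuous state: move by `action` with Gaussian process
# noise, and observe the state through Gaussian measurement noise
def transition(s, a, rng):
    return s + a + rng.gauss(0.0, 0.5)

def obs_likelihood(z, s, sigma=1.0):
    return math.exp(-0.5 * ((z - s) / sigma) ** 2)

rng = random.Random(0)
belief = [rng.uniform(-10.0, 10.0) for _ in range(500)]
for _ in range(5):
    belief = update_belief(belief, action=0.0, observation=5.0,
                           transition=transition, obs_likelihood=obs_likelihood,
                           rng=rng)
mean = sum(belief) / len(belief)  # the belief concentrates near the observed state
```

In a full MC-POMDP planner, this belief update is combined with Monte Carlo value estimation over sampled beliefs to choose actions.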
|
https://en.wikipedia.org/wiki/List%20of%20textbooks%20in%20thermodynamics%20and%20statistical%20mechanics
|
A list of notable textbooks in thermodynamics and statistical mechanics, arranged by category and date.
Only or mainly thermodynamics
Both thermodynamics and statistical mechanics
2e Kittel, Charles; and Kroemer, Herbert (1980) New York: W.H. Freeman
2e (1988) Chichester: Wiley , .
(1990) New York: Dover
Statistical mechanics
. 2e (1936) Cambridge: University Press; (1980) Cambridge University Press.
; (1979) New York: Dover
Vol. 5 of the Course of Theoretical Physics. 3e (1976) Translated by J.B. Sykes and M.J. Kearsley (1980) Oxford : Pergamon Press.
. 3e (1995) Oxford: Butterworth-Heinemann
. 2e (1987) New York: Wiley
. 2e (1988) Amsterdam: North-Holland . 2e (1991) Berlin: Springer Verlag ,
; (2005) New York: Dover
2e (2000) Sausalito, Calif.: University Science
2e (1998) Chichester: Wiley
Specialized topics
Kinetic theory
Vol. 10 of the Course of Theoretical Physics (3rd Ed). Translated by J.B. Sykes and R.N. Franklin (1981) London: Pergamon ,
Quantum statistical mechanics
Mathematics of statistical mechanics
Translated by G. Gamow (1949) New York: Dover
. Reissued (1974), (1989); (1999) Singapore: World Scientific
; (1984) Cambridge: University Press . 2e (2004) Cambridge: University Press
Miscellaneous
(available online here)
Historical
(1896, 1898) Translated by Stephen G. Brush (1964) Berkeley: University of California Press; (1995) New York: Dover
Translated by J. Kestin (1956) New York: Academic Press.
German Encyclopedia of Mathematical Sciences. Translated by Michael J. Moravcsik (1959) Ithaca: Cornell University Press; (1990) New York: Dover
See also
List of textbooks on classical mechanics and quantum mechanics
List of textbooks in electromagnetism
List of books on general relativity
Further reading
|
https://en.wikipedia.org/wiki/Wine%20sauce
|
Wine sauce is a culinary sauce prepared with wine as a primary ingredient, heated and mixed with stock, butter, herbs, spices, onions, garlic and other ingredients. Several types of wines may be used, including red wine, white wine and port wine. Some versions are prepared using a reduction. Several types of wine sauces exist, and it is used in many dishes, including those prepared with seafood, poultry and beef.
Wine sauces are associated with French cuisine.
Ingredients and preparation
Wine is a primary ingredient in wine sauce. Wine sauce may be prepared using various wines, such as red wines, white wines, Burgundy wines, and port wines, among others. Ingredients in addition to wine may include stock, mushrooms, butter or shrimp butter, tarragon vinegar, shallot, chervil, tarragon, spices, onion, garlic and others. Some wine sauces are prepared using a reduction, which may intensify their flavor or make the flavor sharper. Reduced wine may be used to prepare thicker wine sauces, while those lacking a reduction are generally thin. Some wine sauces are creamy, prepared with the addition of cream or milk.
Fish velouté is a French velouté sauce base from which several types of sauces can be prepared, including wine sauce. White wine sauce and champagne sauce are the most common sauces prepared from a fish velouté base.
Varieties
Several types of wine sauces exist using wine as a primary ingredient. Sauce poivrade is a wine sauce in French cuisine that is prepared with mirepoix thickened with flour and moistened with wine and a little vinegar, then heavily seasoned with pepper. Sauce bourguignonne is a French sauce with a base of red wine with onions or shallots. Bordelaise sauce is a classic French sauce prepared with red wine, meat glaze or demi-glace, butter, shallots and bone marrow. Sauce lyonnaise is a French sauce prepared with white wine, vinegar and onions, which may be served with meat.
Some sauces, such as Normande sauce, use wine as a flavorant rather
|
https://en.wikipedia.org/wiki/Pollutant-induced%20abnormal%20behaviour
|
Pollutant-induced abnormal behaviour refers to the abnormal behaviour induced by pollutants. Chemicals released into the natural environment by humans impact the behaviour of a wide variety of animals. The main culprits are endocrine-disrupting chemicals (EDCs), which mimic, block, or interfere with animal hormones. A new research field, integrative behavioural ecotoxicology, is emerging. However, chemical pollutants are not the only anthropogenic offenders. Noise and light pollution also induce abnormal behaviour.
This topic is of special concern for its conservation and human health implications and has been studied extensively by animal behaviourists, environmental toxicologists, and conservation scientists. Behaviours serve as potential indicators for ecological health. Behaviour can be more sensitive to EDCs than developmental and physiological traits, and it was the behaviour of eagles that first drew attention to the now well-known dangers of DDT. However, behaviour is generally difficult to measure and can be highly variable.
Behaviours which are critical for survival, such as reproductive and social behaviours, and cognitive abilities like learning can be affected directly or indirectly by chemical pollutants; many examples have been documented, and their chemical culprits have been identified. These same behaviours can also be altered by anthropogenic noise and light, although the mechanisms involved are relatively unknown.
EDCs known to alter behaviour
Atrazine - a common herbicide
Bisphenol A - component of some plastics
Carbaryl
Cypermethrin - a common insecticide
DDT
DEHP
Dioxins and dioxin-like compounds
Endosulfan
Fenarimol - a common fungicide
Fenitrothion
Kepone
Lead compounds
Mercury compounds
Methoxychlor
Nonylphenol
PCBs
Vinclozolin
17β-trenbolone
Determining the link between such pollutants and altered behaviours often requires both field studies and laboratory studies. Field studies are useful in determining whether behaviour
|
https://en.wikipedia.org/wiki/J%C3%A1nos%20Koml%C3%B3s%20%28mathematician%29
|
János Komlós (born 23 May 1942, in Budapest) is a Hungarian-American mathematician, working in probability theory and discrete mathematics. He has been a professor of mathematics at Rutgers University since 1988. He graduated from the Eötvös Loránd University, then became a fellow at the Mathematical Institute of the Hungarian Academy of Sciences. From 1984 to 1988 he worked at the University of California, San Diego.
Notable results
Komlós' theorem: He proved that every L1-bounded sequence of real functions contains a subsequence such that the arithmetic means of all its subsequences converge pointwise almost everywhere. In probabilistic terminology, the theorem is as follows. Let ξ1,ξ2,... be a sequence of random variables such that the sequence E[|ξ1|], E[|ξ2|],... is bounded. Then there exist a subsequence ξ'1, ξ'2,... and a random variable β such that for each further subsequence η1,η2,... of ξ'1, ξ'2,... we have (η1+...+ηn)/n → β a.s.
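In the special case of an i.i.d. integrable sequence, the theorem's conclusion is witnessed by the full sequence itself with β = E[ξ1], by the strong law of large numbers. A quick numerical illustration (the distribution and sample size here are arbitrary choices, not from Komlós' work):

```python
import random

# For an i.i.d. integrable sequence, Komlós' theorem holds with the
# full sequence as the subsequence and β = E[ξ1] (strong law of large numbers).
random.seed(0)
xs = [random.expovariate(1.0) for _ in range(200_000)]  # E[ξ] = 1, L1-bounded

cesaro = sum(xs) / len(xs)  # (ξ1 + ... + ξn) / n
print(f"Cesàro mean ≈ {cesaro:.3f}")  # ≈ 1
```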
With Miklós Ajtai and Endre Szemerédi he proved the ct2/log t upper bound for the Ramsey number R(3,t). The corresponding lower bound was established by Jeong Han Kim only in 1995, and this result earned him a Fulkerson Prize.
The same team of authors developed the optimal Ajtai–Komlós–Szemerédi sorting network.
Komlós and Szemerédi proved that if G is a random graph on n vertices with
edges, where c is a fixed real number, then the probability that G has a Hamiltonian circuit converges to
With Gábor Sárközy and Endre Szemerédi he proved the so-called blow-up lemma which claims that the regular pairs in Szemerédi's regularity lemma are similar to complete bipartite graphs when considering the embedding of graphs with bounded degrees.
Komlós worked on Heilbronn's problem; he, János Pintz and Szemerédi disproved Heilbronn's conjecture.
Komlós also wrote highly cited papers on sums of random variables, space-efficient representations of sparse sets, random matrices, the Szemerédi regularity lemma, and derandomization.
Degrees, awar
|
https://en.wikipedia.org/wiki/Hart%20circle
|
In geometry, the Hart circle is derived from three given circles that cross pairwise to form eight circular triangles. For any one of these eight triangles, and its three neighboring triangles, there exists a Hart circle, tangent to the inscribed circles of these four circular triangles. Thus, the three given circles have eight Hart circles associated with them. The Hart circles are named after their discoverer, Andrew Searle Hart. They can be seen as analogous to the nine-point circle of straight-sided triangles.
|
https://en.wikipedia.org/wiki/Guadeloupe%20amazon
|
The Guadeloupe amazon or Guadeloupe parrot (Amazona violacea) is a hypothetical extinct species of parrot that is thought to have been endemic to the Lesser Antillean island region of Guadeloupe. Mentioned and described by 17th- and 18th-century writers, it received a scientific name in 1789. It was moved to the genus Amazona in 1905, and is thought to have been related to, or possibly the same as, the extant imperial amazon. A tibiotarsus and an ulna bone from the island of Marie-Galante may belong to the Guadeloupe amazon. In 1905, a species of extinct violet macaw was also claimed to have lived on Guadeloupe, but in 2015, it was suggested to have been based on a description of the Guadeloupe amazon.
According to contemporary descriptions, the head, neck and underparts of the Guadeloupe amazon were mainly violet or slate, mixed with green and black; the back was brownish green; and the wings were green, yellow and red. It had iridescent feathers, and was able to raise a "ruff" of feathers around its neck. The bird fed on fruits and nuts, and the male and female took turns sitting on the nest. It was eaten by French settlers, who also destroyed its habitat. Rare by 1779, it appears to have become extinct by the end of the 18th century.
Taxonomy
The Guadeloupe amazon was first described in 1664 by the French botanist Jean-Baptiste Du Tertre, who also wrote about and illustrated the bird in 1667. The French clergyman Jean-Baptiste Labat described the bird in 1742, and it was mentioned in later natural history works by writers such as Mathurin Jacques Brisson, Comte de Buffon, and John Latham; the latter gave it the name "ruff-necked parrot". German naturalist Johann Friedrich Gmelin coined the scientific name Psittacus violaceus for the bird in his 1789 edition of Systema Naturae, based on the writings of Du Tertre, Brisson, and Buffon. The specific name violaceus means "violet".
In 1891, the Italian zoologist Tommaso Salvadori included Psittacus violaceus in a li
|
https://en.wikipedia.org/wiki/Hokey%20pokey%20%28ice%20cream%29
|
Hokey pokey is a flavour of ice cream in New Zealand, consisting of plain vanilla ice cream with small, solid lumps of honeycomb toffee. Hokey pokey is the New Zealand term for honeycomb toffee. The original recipe until around 1980 consisted of solid toffee, but in a marketing change Tip Top decided to use small balls of honeycomb toffee instead.
It is the second-most popular ice cream flavour behind vanilla in New Zealand, and is a frequently cited example of Kiwiana. It is also exported to Japan, Australia and the Pacific Islands.
Origins and etymology
The term hokey pokey has been used in reference to honeycomb toffee in New Zealand since the late 19th century. The origin of this term, in reference to honeycomb specifically, is not known with certainty, and it is not until the mid-20th century that hokey-pokey ice cream was created.
Coincidentally, "hokey pokey" was a slang term for ice cream in general in the 19th and early 20th centuries in several areas — including New York City and parts of Great Britain — specifically for the ice cream sold by street vendors, or "hokey pokey men". The vendors, said to be mostly of Italian descent, supposedly used a sales pitch or song involving the phrase "hokey pokey", for which several origins have been suggested. One such song in use in 1930s Liverpool was "Hokey pokey penny a lump, that's the stuff to make ye jump".
The term hokey pokey likely has multiple origins. One of these is the expression "hocus-pocus", which is possibly the source of the name hokey pokey in New Zealand. As a general name for ice cream outside New Zealand, it may be a corruption of one of several Italian phrases. According to "The Encyclopedia of Food" (published 1923, New York) hokey pokey (in the U.S.) is "a term applied to mixed colors and flavors of ice cream in cake form". The Encyclopedia says the term originated from the Italian phrase oh che poco - "oh how little". Alternative possible derivations include other similar-sounding Ita
|
https://en.wikipedia.org/wiki/Partial%20cleavage%20stimulation%20factor%20domain
|
The partial cleavage stimulation factor domain, or partial CstF domain, is a protein domain that occurs in proteins from apicomplexan parasites.
Currently (as of 2012), little is known about the function of this domain. However, it is homologous to the amino-terminal part of the cleavage stimulation factor, which is thought to be involved with mRNA maturation in mammals. Proteins with this domain have been detected in the malaria parasite Plasmodium falciparum nucleus.
|
https://en.wikipedia.org/wiki/Lusitanosaurus
|
Lusitanosaurus (meaning "Portuguese lizard") is a genus of large basal thyreophoran dinosaur from the Sinemurian stage of Early Jurassic of Portugal. It is the second example of the group from the Lower Jurassic of Europe and it is the oldest known dinosaur from the Iberian Peninsula. It is based on a large left maxilla with teeth that was lost in the fire at Museu Nacional de História Natural e da Ciência, Lisbon, in 1978.
Description
The fossil consists of a single partial left maxilla, an upper jaw bone, with seven teeth. The jaw measured 10.5 cm, implying an estimated skull length of 38.7 cm for the living animal. The teeth resemble those of Scelidosaurus, which it approaches closely in possessing prominent anterior and posterior basal points on each tooth. The maxilla, however, was clearly bigger, roughly twice the size of that of Scelidosaurus. Lapparent & Zbyszewski originally linked it with Scelidosaurus and assigned both to Stegosauria, while noting that the teeth present differed from those of Scelidosaurus. Ginsburg cited the specimen and noted its larger size compared to the holotype of Scelidosaurus. Lusitanosaurus was probably a semibipedal to quadrupedal herbivore, with dense armour on most parts of the body.
History of discovery
The genus was first described by Albert-Félix de Lapparent and Georges Zbyszewski in 1957. The type species is Lusitanosaurus liasicus. The generic name is derived from Lusitania, the ancient Latin name for the region. The specific name refers to the Lias. The holotype was part of the collection of the Museu de História Natural da Universidade de Lisboa. The exact location of the find and the date of collection are unknown, which makes a correct geological dating difficult, but it can be inferred from the matrix rock that it has been discovered near São Pedro de Moel, in strata from the Late Sinemurian (Early Jurassic). This would make it the oldest known dinosaur from Portugal.
Classification
It was originally
|
https://en.wikipedia.org/wiki/E%C3%B6tv%C3%B6s%20effect
|
The Eötvös effect is the change in measured Earth's gravity caused by the change in centrifugal acceleration resulting from eastbound or westbound velocity. When moving eastbound, the object's angular velocity is increased (in addition to Earth's rotation), and thus the centrifugal force also increases, causing a perceived reduction in gravitational force.
Discovery
In the early 1900s, a German team from the Geodetic Institute of Potsdam carried out gravity measurements on moving ships in the Atlantic, Indian, and Pacific oceans. While studying their results, the Hungarian nobleman and physicist Baron Roland von Eötvös (Loránd Eötvös) noticed that the readings were lower when the boat moved eastwards, higher when it moved westward. He identified this as primarily a consequence of Earth's rotation. In 1908, new measurements were made in the Black Sea on two ships, one moving eastward and one westward. The results substantiated Eötvös' claim.
Formulation
Geodesists use the following formula to correct for velocity relative to Earth during a gravimetric run:

a_r = 2Ωu cos(ϕ) + (u² + v²)/R

Here,
a_r is the relative acceleration,
Ω is the rotation rate of the Earth,
u is the velocity in longitudinal direction (east–west),
ϕ is the latitude where the measurements are taken,
v is the velocity in latitudinal direction (north–south),
R is the radius of the Earth.
The first term in the formula, 2Ωu cos(ϕ), corresponds to the Eötvös effect. The second term is a refinement that under normal circumstances is much smaller than the Eötvös effect.
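As a sketch, the correction can be computed directly from this formula (the ship speed and latitude below are illustrative values, not from the text):

```python
import math

OMEGA = 7.2921159e-5   # Earth's rotation rate, rad/s
R_EARTH = 6_371_000.0  # mean Earth radius, m

def eotvos_correction(u_east, v_north, lat_deg):
    """Eötvös correction (m/s^2): 2*Omega*u*cos(phi) + (u^2 + v^2)/R."""
    phi = math.radians(lat_deg)
    return 2 * OMEGA * u_east * math.cos(phi) + (u_east**2 + v_north**2) / R_EARTH

# Ship steaming due east at 10 knots (~5.14 m/s) at 45° N:
a_r = eotvos_correction(5.14, 0.0, 45.0)
print(f"correction ≈ {a_r / 1e-5:.1f} mGal")  # ≈ 53 mGal (1 mGal = 1e-5 m/s^2)
```

Reversing course (u negative) flips the sign of the dominant first term, matching the lower eastbound and higher westbound readings Eötvös observed.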
Physical explanation
The most common design for a gravimeter for field work is a spring-based design; a spring that suspends an internal weight. The suspending force provided by the spring counteracts the gravitational force. A well-manufactured spring has the property that the amount of force that the spring exerts is proportional to the extension of the spring from its equilibrium position (Hooke's law). The stronger the effective gravity at a particular location, the mor
|
https://en.wikipedia.org/wiki/Nodular%20parenchyma
|
Nodular parenchyma is a small mass of tissue within a gland or organ that carries out the specialized functions of the gland or organ.
External links
Nodular parenchyma entry in the public domain NCI Dictionary of Cancer Terms
|
https://en.wikipedia.org/wiki/Technetium%20%2899mTc%29%20votumumab
|
Technetium (99mTc) votumumab (trade name HumaSPECT) is a human monoclonal antibody labelled with the radionuclide technetium-99m. It was developed for the detection of colorectal tumors, but has never been marketed.
The target of votumumab is CTAA16.88, a complex of cytokeratin polypeptides in the molecular weight range of 35 to 43 kDa, which is expressed in colorectal tumors.
|
https://en.wikipedia.org/wiki/Quadratic%20field
|
In algebraic number theory, a quadratic field is an algebraic number field of degree two over , the rational numbers.
Every such quadratic field is some Q(√d), where d is a (uniquely defined) square-free integer different from 0 and 1. If d > 0, the corresponding quadratic field is called a real quadratic field, and, if d < 0, it is called an imaginary quadratic field or a complex quadratic field, corresponding to whether or not it is a subfield of the field of the real numbers.
Quadratic fields have been studied in great depth, initially as part of the theory of binary quadratic forms. There remain some unsolved problems. The class number problem is particularly important.
Ring of integers
Discriminant
For a nonzero square-free integer d, the discriminant of the quadratic field K = Q(√d) is d if d is congruent to 1 modulo 4, and otherwise 4d. For example, if d is −1, then K is the field of Gaussian rationals and the discriminant is −4. The reason for such a distinction is that the ring of integers of K is generated by (1 + √d)/2 in the first case and by √d in the second case.
The set of discriminants of quadratic fields is exactly the set of fundamental discriminants.
Prime factorization into ideals
Any prime number p gives rise to an ideal pO_K in the ring of integers O_K of a quadratic field K. In line with the general theory of splitting of prime ideals in Galois extensions, this may be:

p is inert: (p) is a prime ideal.
The quotient ring is the finite field with p² elements: O_K/pO_K = F_{p²}.
p splits: (p) is a product of two distinct prime ideals of O_K.
The quotient ring is the product O_K/pO_K = F_p × F_p.
p is ramified: (p) is the square of a prime ideal of O_K.
The quotient ring contains non-zero nilpotent elements.
The third case happens if and only if p divides the discriminant D. The first and second cases occur when the Kronecker symbol (D/p) equals −1 and +1, respectively. For example, if p is an odd prime not dividing D, then p splits if and only if D is congruent to a square modulo p. The first two cases are, in a certain sense, equally likely to occur as p runs through the primes.
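This case analysis can be sketched in a few lines for odd primes, using Euler's criterion to evaluate the symbol (`splitting_type` is a hypothetical helper name; the odd-prime restriction sidesteps the special behaviour at p = 2):

```python
def splitting_type(d, p):
    """Behaviour of an odd prime p in Q(sqrt(d)), for d square-free."""
    D = d if d % 4 == 1 else 4 * d        # field discriminant
    if D % p == 0:
        return "ramified"                  # p divides the discriminant
    # Euler's criterion: D^((p-1)/2) ≡ (D/p) mod p, for odd p not dividing D
    return "split" if pow(D % p, (p - 1) // 2, p) == 1 else "inert"

# Gaussian rationals, d = -1 (discriminant -4):
print(splitting_type(-1, 5))   # 5 = (2+i)(2-i): "split"
print(splitting_type(-1, 7))   # 7 remains prime in Z[i]: "inert"
print(splitting_type(-1, 2))   # 2 divides -4: "ramified"
```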
|
https://en.wikipedia.org/wiki/Mary%20Fenton
|
Mary Fenton alias Mehrbai (c. 1854 – c. 1896) was the first Gujarati, Parsi and Urdu theatre actress of European origin. Born to an Irish soldier in the British Indian Army, she fell in love and married Parsi actor-director Kavasji Palanji Khatau. He introduced her to acting and she had a successful stage career.
Early life
Mary Fenton was born in Landour near Mussoorie in India to Jannette and Mathew Fenton, an Irish retired soldier of the British Indian Army. She was baptized as Mary Jane Fenton, but there is no further information about her early life and education. Parsi theatre actor-director Kavasji Palanji Khatau was rehearsing for his play Inder Sabha when Fenton came to book the hall for her magic lantern show. She admired his acting, met him, fell in love and finally married him. Subsequently, she adopted the Parsi name Mehrbai. She already knew Hindi and Urdu, and in the 1870s Khatau gave her further training in singing and acting.
She created a sensation in the theatre due to her talent and relationship with Khatau. However, following a dispute between Khatau and the Empress Victoria Theatrical Company owner Jahangir Pestonjee Khambatta regarding Fenton's entry into theatre in 1878, Khatau left Bombay for Delhi and joined Alfred Theatre Company owned by Manek Master who also opposed Fenton. Consequently, Khatau started his own Alfred Company in 1881, where Fenton had a long and successful career.
Fenton and Khatau later separated. They had a son Jahangir Khatau.
Career
She was the first Anglo-Indian actress of the Parsi, Gujarati, and Urdu theatre. She became popular for her roles as the Parsi heroine. She acted in Nanabhai Ranina's Nazan Shirin (1881), Bamanji Kabra's Bholi Gul (Innocent Flower, 1882, based on Ellen Wood's English novel East Lynne), Agha Hasan Amanat's Urdu opera Inder Sabha, Khambatta's Khudadad (The Gift of God, 1898, based on Shakespeare's Pericles, Prince of Tyre), Gamde ni Gori (Village Nymph, 1890), Alauddin (1891), Tara Khurs
|
https://en.wikipedia.org/wiki/Elizaveta%20Karamihailova
|
Elisabeth Ivanova Kara-Michailova (), alternatively Elisabeth Karamichailova, was a Bulgarian physicist, born to a Bulgarian father and an English mother. She was among the handful of female nuclear physics pioneers at the beginning of the 20th century, established the first practical courses in particle physics in Bulgaria, and was the first woman to hold a professorial title in the country.
Early life
Elisabeth Karamichailova was born in 1897 in Vienna, to Ivan Mikhaylov and Mary Slade. Both her parents had studied at the University of Vienna: Ivan, born in Shumen, studied medicine, while Mary, a native of Minster Lovell in Oxfordshire, studied music. After her father graduated in 1907, the family remained in Vienna for two years before moving to Bulgaria in 1909, where they acquired a spacious house in central Sofia.
Karamichailova grew up in both an artistic and scientific environment. Her father turned the upper floor of his house into a Red Cross Hospital where he treated his patients without requiring payment. She enrolled in the Sofia Girls' College and graduated there in 1917, after which she departed to study at the University of Vienna.
Studies in radioactivity
In 1922 Karamichailova graduated as a PhD in Physics and Mathematics. She wrote her thesis, entitled "About Electric Figures on Different Materials, Especially On Crystals" under the direction of Karl Przibram. Karamichailova continued her work at the Institute for Radium Studies afterwards, becoming particularly interested in radioluminescence. She cooperated with Marietta Blau in the study of polonium, and later researched methods for neutron bombardment of thorium. Karamichailova simultaneously attended courses in electronic and radio engineering at the Vienna Polytechnic. In the autumn of 1923 she returned briefly to Bulgaria and worked as a "guest fellow" at the Physics Institute of Sofia University. Soon Karamichailova went back to Vienna and began her work on the transmutation of light ele
|
https://en.wikipedia.org/wiki/Network%20of%20practice
|
Network of practice (often abbreviated as NoP) is a concept originated by John Seely Brown and Paul Duguid. This concept, related to the work on communities of practice by Jean Lave and Etienne Wenger, refers to the overall set of various types of informal, emergent social networks that facilitate information exchange between individuals with practice-related goals. In other words, networks of practice range from communities of practice, where learning occurs, to electronic networks of practice (often referred to as virtual or electronic communities).
Basic concepts
To further define the concept, firstly the term network implies a set of individuals who are connected through social relationships, whether they be strong or weak. Terms such as community tend to denote a stronger form of relationship, but networks refer to all networks of social relationships, be they weak or strong. Second, the term practice represents the substrate that connects individuals in their networks. The principal ideas are that practice implies the actions of individuals and groups when conducting their work, e.g., the practice of software engineers, journalists, educators, etc., and that practice involves interaction among individuals.
What distinguishes a network of practice from other networks is that the primary reason for the emergence of relationships within a network of practice is that individuals interact through information exchange in order to perform their work, asking for and sharing knowledge with each other. A network of practice can be distinguished from other networks that emerge due to other factors, such as interests in common hobbies or discussing sports while taking the same bus to work, etc. Finally, practice need not necessarily be restricted to include those within one occupation or functional discipline. Rather it may include individuals from a variety of occupations; thus, the term, practice, is more appropriate than others such as occupation.
As indicated above
|
https://en.wikipedia.org/wiki/Transformation%20between%20distributions%20in%20time%E2%80%93frequency%20analysis
|
In the field of time–frequency analysis, several signal formulations are used to represent the signal in a joint time–frequency domain.
There are several methods and transforms called "time-frequency distributions" (TFDs), whose interconnections were organized by Leon Cohen.
The most useful and popular methods form a class referred to as "quadratic" or bilinear time–frequency distributions. A core member of this class is the Wigner–Ville distribution (WVD), as all other TFDs can be written as smoothed or convolved versions of the WVD. Another popular member of this class is the spectrogram, which is the squared magnitude of the short-time Fourier transform (STFT). The spectrogram has the advantage of being positive and easy to interpret, but also has disadvantages, such as being irreversible: once the spectrogram of a signal is computed, the original signal cannot be recovered from it. The theory and methodology for defining a TFD that verifies certain desirable properties is given in the "Theory of Quadratic TFDs".
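The spectrogram-as-squared-magnitude-of-the-STFT relationship can be sketched with a hand-rolled windowed FFT (the chirp signal, window, and hop size are illustrative choices, not prescribed by the text):

```python
import numpy as np

fs = 1000
t = np.arange(0, 2, 1 / fs)
x = np.cos(2 * np.pi * (50 * t + 50 * t**2))  # chirp, instantaneous freq = 50 + 100 t

# Spectrogram = |STFT|^2: window, FFT, square the magnitude — phase is discarded,
# which is why the original signal cannot be recovered from the spectrogram alone.
n, hop = 256, 128
win = np.hanning(n)
frames = [x[i:i + n] * win for i in range(0, len(x) - n, hop)]
S = np.abs(np.fft.rfft(frames, axis=1)) ** 2
freqs = np.fft.rfftfreq(n, 1 / fs)

mid = len(frames) // 2                        # frame centred near t ≈ 1 s
peak = freqs[np.argmax(S[mid])]
print(f"dominant frequency near t = 1 s: {peak:.0f} Hz")  # around 150 Hz
```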
The scope of this article is to illustrate some elements of the procedure to transform one distribution into another. The method used to transform a distribution is borrowed from the phase space formulation of quantum mechanics, even though the subject matter of this article is signal processing. Noting that a signal can be recovered from a particular distribution under certain conditions, given a certain TFD ρ1(t,f) representing the signal in a joint time–frequency domain, any other TFD ρ2(t,f) of the same signal can be obtained by simple smoothing or filtering; some of these relationships are shown below. A full treatment of the question is given in Cohen's book.
General class
If we use the variable , then, borrowing the notations used in the field of quantum mechanics, we can show that time–frequency representation, such as Wigner distribution function (WDF)
|
https://en.wikipedia.org/wiki/Plant%20%28control%20theory%29
|
A plant in control theory is the combination of process and actuator. A plant is often referred to with a transfer function
(commonly in the s-domain) which indicates the relation between an input signal and the output signal of a system without feedback, commonly determined by physical properties of the system. An example would be an actuator, whose transfer function relates its input signal to the resulting physical displacement. In a system with feedback, the plant still has the same transfer function, but a control unit and a feedback loop (with their respective transfer functions) are added to the system.
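As an illustrative sketch (the gain K, time constant τ, and step input are hypothetical values, not from the article), a first-order plant G(s) = K/(τs + 1) can be simulated by forward-Euler integration:

```python
def simulate_plant(K=2.0, tau=0.5, u=1.0, dt=0.001, t_end=3.0):
    """Step response of the first-order plant G(s) = K / (tau*s + 1)."""
    y, out = 0.0, []
    for _ in range(int(t_end / dt)):
        # In the time domain: tau * dy/dt + y = K * u  →  forward-Euler update
        y += dt * (K * u - y) / tau
        out.append(y)
    return out

y = simulate_plant()
print(f"output after 6 time constants ≈ {y[-1]:.3f}")  # approaches K*u = 2.0
```

In a feedback configuration, the control unit would compute u from the error signal at each step, while this plant update would stay unchanged.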
|
https://en.wikipedia.org/wiki/Chlamydia%20psittaci
|
{{Taxobox
| image = Chlamydophila psittaci FA stain.jpg
| image_caption = Direct fluorescent antibody stain of a mouse brain impression smear showing C. psittaci.
| domain = Bacteria
| phylum = Chlamydiota
| classis = Chlamydiia
| ordo = Chlamydiales
| familia = Chlamydiaceae
| genus = Chlamydia
| species = C. psittaci
| binomial = Chlamydia psittaci
| synonyms = Chlamydophila psittaci
}}
Chlamydia psittaci is a lethal intracellular bacterial species that may cause endemic avian chlamydiosis, epizootic outbreaks in mammals, and respiratory psittacosis in humans. Potential hosts include feral birds and domesticated poultry, as well as cattle, pigs, sheep, and horses. C. psittaci is transmitted by inhalation, contact, or ingestion among birds and to mammals. Psittacosis in birds and in humans often starts with flu-like symptoms and becomes a life-threatening pneumonia. Many strains remain quiescent in birds until activated by stress. Birds are excellent, highly mobile vectors for the distribution of chlamydia infection, because they feed on, and have access to, the detritus of infected animals of all sorts.

C. psittaci in birds is often systemic, and infections can be inapparent, severe, acute, or chronic with intermittent shedding. C. psittaci strains in birds infect mucosal epithelial cells and macrophages of the respiratory tract. Septicaemia eventually develops and the bacteria become localized in epithelial cells and macrophages of most organs, the conjunctiva, and the gastrointestinal tract. It can also be passed in the eggs. Stress will commonly trigger the onset of severe symptoms, resulting in rapid deterioration and death. C. psittaci strains are similar in virulence, grow readily in cell culture, have 16S rRNA genes that differ by <0.8%, and belong to eight known serotypes. All should be considered readily transmissible to humans.

C. psittaci serovar A is endemic among psittacine birds and has caused sporadic zoonotic disease in humans, other mammals, and tortoises
|
https://en.wikipedia.org/wiki/Basic%20Interoperable%20Scrambling%20System
|
Basic Interoperable Scrambling System, usually known as BISS, is a satellite signal scrambling system developed by the European Broadcasting Union and a consortium of hardware manufacturers.
Prior to its development, "ad hoc" or "occasional use" satellite news feeds were transmitted either using proprietary encryption methods (e.g. RAS, or PowerVu), or without any encryption. Unencrypted satellite feeds allowed anyone with the correct equipment to view the program material.
Proprietary encryption methods were determined by encoder manufacturers, and placed major compatibility limitations on the type of satellite receiver (IRD) that could be used for each feed. BISS was an attempt to create an "open platform" encryption system, which could be used across a range of manufacturers equipment.
There are mainly two different types of BISS encryption used:
BISS-1 transmissions are protected by a 12-digit hexadecimal "session key" that is agreed by the transmitting and receiving parties prior to transmission. The key is entered into both the encoder and decoder; it then forms part of the encryption of the digital TV signal, and any BISS-capable receiver with the correct key will decrypt the signal.
BISS-E (E for encrypted) is a variation in which the decoder has stored one secret BISS key, entered by, for example, a rights holder. This key is unknown to the user of the decoder. The user is then sent a 16-digit hexadecimal code, which is entered as a "session key". This session key is then combined mathematically with the stored secret key to calculate a BISS-1 key that can decrypt the signal.
Only a decoder with the correct secret BISS-key will be able to decrypt a BISS-E feed. This gives the rights holder control as to exactly which decoder can be used to decrypt/decode a specific feed. Any BISS-E encrypted feed will have a corresponding BISS-1 key that will unlock it.
BISS-E is amongst others used by EBU to protect UEFA Champions League, NBC in the United States for NBC O&O and Af
|
https://en.wikipedia.org/wiki/Derrick%20Baxby
|
Derrick Baxby (1940 – 24 March 2017) was a British microbiologist and authority on Orthopoxviruses. He was a senior lecturer in medical microbiology at the University of Liverpool.
He proposed that a presumed horsepox virus could be the long-sought ancestor of vaccinia. In 1977, he reported 12 cases of cowpox occurring in England between 1965 and 1976.
Selected publications
Jenner's Smallpox Vaccine: The Riddle of Vaccinia Virus and Its Origin. Heinemann Educational Books, London, 1981.
"Two hundred years of vaccination", Current Biology, Vol. 6 (1996), No. 7, pp. 769–772.
"The End of Smallpox", History Today, Vol. 49, No. 3 (March 1999).
"Edward Jenner's Inquiry; A Bicentenary Analysis", Vaccine, 1999 January 28;17(4):301-7.
|
https://en.wikipedia.org/wiki/Amazingports
|
AmazingPorts is a Linux-based software product customized for use as a firewall, captive portal and billing system (Hotspots). The project started in 2001.
Description
AmazingPorts is mainly deployed as an access control system in private and public networks. It can be deployed as a single hotspot controller in airports, hotels, private locations and hospitals. It was used, together with Intel, in Internet cafes in Europe by 2002. It was also used for a city-wide Wi-Fi project in 2004, and for Internet roaming in 2002.
AmazingPorts was created in 2001 with an initial vision of building free networks. Later the company refocused and provided its technology to network builders. The company implemented service-oriented provisioning in 2002 and was the first to implement 802.11a public hotspots in Europe. During 2009 and 2010 the administrative system was updated.
Features include:
Firewall
Service-oriented provisioning
NAT
Distributed and/or centralised routing
Fully customisable, language-sensitive captive portal, or any third-party web page
Dynamic Host Configuration Protocol (DHCP) server
Integrated PayPal payments supporting many currencies
Automatic currency updates from the European Central Bank
Role-based administration
Seamless roaming
Compliance with the Data Retention Directive 2006/24/EC
|
https://en.wikipedia.org/wiki/Mathemagician
|
A mathemagician is a mathematician who is also a magician. The term "mathemagic" is believed to have been introduced by Royal Vale Heath with his 1933 book "Mathemagic".
The name "mathemagician" was probably first applied to Martin Gardner, but has since been used to describe many mathematician/magicians, including Arthur T. Benjamin, Persi Diaconis, and Colm Mulcahy. Diaconis has suggested that the reason so many mathematicians are magicians is that "inventing a magic trick and inventing a theorem are very similar activities."
Mathemagician is a neologism, specifically a portmanteau, that combines mathematician and magician. A great number of self-working mentalism tricks rely on mathematical principles. Max Maven often utilizes this type of magic in his performance.
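As an illustration (not drawn from the article itself), one of the best-known self-working tricks built on arithmetic is the "1089 trick": take any three-digit number whose first and last digits differ, subtract its reversal, then add the reversal of the difference. A minimal Python sketch:

```python
def mathemagic_1089(n: int) -> int:
    """The classic self-working '1089 trick': for any three-digit number
    whose first and last digits differ, subtracting its reversal and then
    adding the reversal of that difference always yields 1089."""
    assert 100 <= n <= 999 and str(n)[0] != str(n)[-1]
    diff = abs(n - int(str(n)[::-1]))
    padded = f"{diff:03d}"  # treat e.g. 99 as 099 before reversing
    return diff + int(padded[::-1])

print(mathemagic_1089(532))  # 1089 (532 - 235 = 297; 297 + 792 = 1089)
```

The trick is self-working because the first subtraction always produces a multiple of 99, and any such multiple plus its zero-padded reversal equals 1089.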
The Mathemagician is the name of a character in the 1961 children's book The Phantom Tollbooth. He is the ruler of Digitopolis, the kingdom of mathematics.
Notable mathemagicians
Arthur T. Benjamin
Jin Akiyama
Persi Diaconis
Richard Feynman
Karl Fulves
Martin Gardner
Ronald Graham
Royal Vale Heath
Colm Mulcahy
Raymond Smullyan
W. W. Rouse Ball
Alex Elmsley
|
https://en.wikipedia.org/wiki/Insyde%20Software
|
Insyde Software is a company that specializes in UEFI system firmware and engineering support services, primarily for OEM and ODM computer and component device manufacturers. It is listed on the Gre Tai Market of Taiwan and headquartered in Taipei, with offices in Westborough, Massachusetts, and Portland, Oregon. The market capitalization of the company's common shares is currently around $115M.
Overview
The company's product portfolio includes InsydeH2O BIOS (Insyde Software's implementation of the Intel Platform Innovation Framework for UEFI/EFI); BlinkBoot, a UEFI-based boot loader for enabling Internet of Things devices; and Supervyse, a full-featured systems-management/BMC firmware providing out-of-band remote management for server computers.
Insyde Software was formed through the purchase of the BIOS assets of SystemSoft Corporation (NASDAQ: SYSF) in October 1998. Initially Insyde Software was a privately held company that included investments from Intel Pacific Inc., China Development Industrial Bank, Professional Computer Technology Limited (PCT), company management and selected employees. At that time, Insyde Software's management team consisted of Jeremy Wang, Chairman (also the Chairman of PCT); Jonathan Joseph, President (a co-founder of SystemSoft); Hansen Liou, the General Manager of Taiwan Operations and Asia-Pacific Sales; and Stephen Gentile, the Vice President of Marketing.
Shortly after the initial investment, the company was introduced by Intel to a new BIOS coding architecture called EFI (now UEFI) and the two companies began working together on it. In 2001, the two companies entered into a joint development agreement and Insyde’s first shipment of the technology occurred in October 2003 as InsydeH2O UEFI BIOS. Since that time, UEFI has become the mainstay of Insyde’s business.
On 23 January 2003, Insyde Software announced its initial public offering on the GreTai Securities Market (GTSM) based in Taipei, Taiwan.
|
https://en.wikipedia.org/wiki/Elimination%20%28pharmacology%29
|
In pharmacology, the elimination or excretion of a drug is understood to be any of a number of processes by which a drug is eliminated (that is, cleared and excreted) from an organism, either in an unaltered form (unbound molecules) or modified as a metabolite. The kidney is the main excretory organ, although others exist, such as the liver, the skin, the lungs, and glandular structures such as the salivary glands and the lacrimal glands. These organs and structures expel drugs from the body via specific routes, termed elimination pathways:
Urine
Tears
Perspiration
Saliva
Respiration
Milk
Faeces
Bile
Drugs are excreted by the kidney through glomerular filtration and active tubular secretion, following the same steps and mechanisms as the products of intermediate metabolism. Drugs that are filtered at the glomerulus are therefore also subject to passive tubular reabsorption. Glomerular filtration removes only those drugs or metabolites that are not bound to proteins present in blood plasma (the free fraction), while many other types of drugs (such as the organic acids) are actively secreted. In the proximal and distal convoluted tubules, non-ionised weak acids and weak bases are reabsorbed both actively and passively. Weak acids are excreted when the tubular fluid becomes alkaline, as ionisation reduces their passive reabsorption; the opposite occurs with weak bases. Poisoning treatments exploit this effect to increase elimination: alkalinizing the urine, combined with forced diuresis, promotes the excretion of a weak acid rather than its reabsorption, because the ionised acid cannot pass through the plasma membrane back into the bloodstream and is instead excreted with the urine. Acidifying the urine has the same effect for weakly basic drugs.
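The pH dependence of this ion trapping follows from the Henderson–Hasselbalch equation: for a weak acid, the un-ionised (reabsorbable) fraction is 1 / (1 + 10^(pH − pKa)). A minimal sketch of the effect (illustrative values only; the pKa of 3.5 approximates salicylic acid, and the function name is ours, not a standard library API):

```python
def unionised_fraction_weak_acid(pKa: float, pH: float) -> float:
    """Henderson-Hasselbalch: fraction of a weak acid left un-ionised,
    i.e. the lipid-soluble form that can be passively reabsorbed."""
    return 1.0 / (1.0 + 10 ** (pH - pKa))

# Illustrative pKa near that of salicylic acid (aspirin's active metabolite).
pKa = 3.5
acidic_urine = unionised_fraction_weak_acid(pKa, pH=5.5)
alkaline_urine = unionised_fraction_weak_acid(pKa, pH=8.0)

print(f"un-ionised fraction at pH 5.5: {acidic_urine:.5f}")
print(f"un-ionised fraction at pH 8.0: {alkaline_urine:.7f}")
print(f"fold reduction on alkalinisation: {acidic_urine / alkaline_urine:.0f}")
```

With these assumed numbers, raising urine pH from 5.5 to 8.0 cuts the reabsorbable fraction by a factor of a few hundred, which is why forced alkaline diuresis accelerates elimination of weak acids.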
On other occasions drugs combine with bile juices and enter the intestines. In the intestines the drug will join with the unabsorbed fraction of the administered dose and be eliminated with the faeces.
|