source | text |
|---|---|
https://en.wikipedia.org/wiki/Acsensorize | Acsensorize (v.t.), pronounced as ac-sensor-ize, is the act of adding a multitude of dissimilar sensors, generally of a variety of sensing modalities, to an existing system that may or may not already have sensors;
acsensorizing (pres. part.);
acsensorized (past part.);
acsensorization (n.) is the process of acsensorizing.
It was first used by researchers at General Motors Global Research and Development. The word was modeled on accessorize. Acsensorizing plays a significant role in big data research and machine learning.
See also
Sensorization |
https://en.wikipedia.org/wiki/Felid%20hybrids | A felid hybrid is any of a number of hybrids between various species of the cat family, Felidae. This article deals with hybrids between the species of the subfamily Felinae (feline hybrids).
For hybrids between two species of the genus Panthera (lions, tigers, jaguars, and leopards), see Panthera hybrid. There are no known hybrids between Neofelis (the clouded leopard) and other genera. By contrast, many genera of Felinae are interfertile with each other, though few hybridize under natural conditions, and not all combinations are likely to be viable (e.g. between the tiny rusty-spotted cat and the leopard-sized cougar).
All-wild feline hybridization
Caracal × serval hybrids
A caraval is a cross between a male caracal (Caracal caracal) and a female serval (Leptailurus serval), while a male serval's and female caracal's offspring are called servicals. The first servicals were bred accidentally when the two animals were housed together at the Los Angeles Zoo. The offspring were tawny with pale spots. If a female servical is crossed to a male caracal, the result is a car-servical; if she is crossed to a male serval, the result is a ser-servical.
Bobcat × lynx
The blynx or lynxcat is a hybrid of a bobcat (Lynx rufus) and some other species of genus Lynx. The appearance of the offspring depends on which lynx species is used, as the Eurasian lynx (Lynx lynx) is more heavily spotted than the Canada lynx (Lynx canadensis). These hybrids have been bred in captivity and also occur naturally where a lynx or bobcat cannot find a member of its own species for mating.
At least seven such hybrids have been reported in the United States, outside of captivity. In August 2003, two wild-occurring hybrids between wild Canadian lynx and bobcats were confirmed by DNA analysis in the Moosehead region of Maine. Three hybrids were identified in northeastern Minnesota. These were the first confirmed hybrids outside of captivity. Mitochondrial DNA studies showed them all to be t |
https://en.wikipedia.org/wiki/Dinosterane | Dinosterane is a steroidal alkane, also known as 4α,23,24-trimethylcholestane. It is used in geochemistry as a biomarker, interpreted as an indication of dinoflagellate presence due to the high occurrence of its precursor dinosterol in extant dinoflagellate species and its rarity in other taxa, although it has been shown to be produced by a single species of marine diatom as well.
History of use as a biomarker
A 1984 study established the dinoflagellate origin of dinosterane, based on the distribution of modern dinoflagellates and the abundance of dinosterane in sediments.
In 1993, dinosteranes were discovered in a section of the Bristol Trough that was dated to the Rhaetian Age. Due to the co-deposition of these dinosteranes with dinoflagellate cysts and comparison of microfossil abundance with hydrocarbon abundance, the dinosterane was associated with marine dinoflagellates. This was the first stratigraphic evidence for Mesozoic dinoflagellates.
In 1998, dinosteranes were found in high relative abundance in samples from the Lükati Formation, which were collected from the Kopli quarry in Estonia. This evidence was used to place the origin of dinoflagellates as early as the Early Cambrian, much earlier than the Bristol Trough studies had been able to.
Characterisation
Dinosterane's mass spectrum shows a highly increased abundance of the m/z = 98 ion compared to 24-ethyl-4α-methylcholestane, which is likely due to preferential cleavage of the C-22,23 bond. |
https://en.wikipedia.org/wiki/Morodo | Morodo is a software-based provider of the low-cost and free communication service called MO-Call, which includes desktop, mobile and web applications. It enables low-cost text messaging, low-cost international calls and instant messaging services, saving its users up to 90% on phone bills compared to the cost of traditional mobile and fixed operators. Morodo’s mobile applications feature direct calling, callback, VoIP and SMS. The desktop application features include VoIP, SMS and instant messaging. Morodo’s mobile applications are supported on 95% of all handsets on the market today, and a desktop application supports Windows XP, Vista and 7, Mac OS X and Linux. The mobile application works in more than 200 countries around the world.
Technology
Morodo’s mobile application, MO-Call, supports over 2000 handsets across iOS, BlackBerry, Android, Symbian, Java and Windows Mobile. The desktop version of MO-Call was developed in Qt and supports Windows, Mac and Linux.
To use MO-Call on smartphones, the application can be minimised to the background of the phone, where it will recognize international numbers and route the call onto Morodo’s network. Calls can be connected to Morodo’s network via a local access number, a callback or over an internet data channel such as Wi-Fi or 3G. SMS works by converting the text into data, which is then delivered via the internet connection through the Morodo network. The MO-Call app also supports VoIP calling, where calls are connected using Voice over Internet Protocol. |
https://en.wikipedia.org/wiki/Taxus%20%C3%97%20media | Taxus × media, sometimes known simply as Taxus media, is a conifer (more specifically, a yew) created by the hybridization of English yew Taxus baccata and Japanese yew Taxus cuspidata. This hybridization is thought to have been performed by the Massachusetts-based horticulturalist T.D. Hatfield in the early 1900s.
Taxonomy and common naming
Taxus × media is available in a large number of shrubby, often wide-spreading, cultivars under a variety of names.
Description
Like most yew species, T. × media prefers well-drained and well-watered soils, but has some degree of drought tolerance and in fact may die in conditions of excessive precipitation if the soil beneath the plant is not sufficiently well-drained. Taxus × media is among the smallest extant species in the genus Taxus and (depending upon cultivar) may not even grow to the size of what one would consider a typical tree. Immature shrubs are very small and achieve (over the time span of ten to twenty years) heights of at most and diameters of at most , depending on the cultivar. Furthermore, T. × media is known to grow rather slowly and is not injured by frequent pruning, making this hybrid very desirable as a hedge in low-maintenance landscaping and also a good candidate for bonsai.
Toxicity
Taxus × media also shares with its fellow yew trees a high level of taxine in its branches, needles, and seeds. Taxine is toxic to the mammalian heart.
Varieties (cultivars)
Taxus × media var. hicksii (also known by the common name Hicks's yew or alternately, Hicks yew) is a common cultivar of this hybrid, and is the tallest and thinnest variety of T. × media, limiting itself to a diameter, despite the fact it can achieve a height of close to .
Another commonly-planted cultivar of T. × media is the broader-spreading densiformis version, which can reach a diameter exceeding 10 feet; nonetheless, this cultivar does not grow much past in height.
Another cultivar of T. × media is the Kelseyi version (known as the Ke |
https://en.wikipedia.org/wiki/Calcium%20signaling%20in%20cell%20division | Calcium plays a crucial role in regulating the events of cellular division. Calcium acts both to modulate intracellular signaling as a secondary messenger and to facilitate structural changes as cells progress through division. Exquisite control of intracellular calcium dynamics is required, as calcium appears to play a role at multiple cell cycle checkpoints.
The major downstream calcium effectors are the calcium-binding calmodulin protein and downstream calmodulin-dependent protein kinases I / II. Evidence points to this signaling cascade as a major mediator of calcium signaling in cell division.
Meiosis
Historically, one of the best characterized roles of intracellular calcium is activation of the ovum after sperm fertilization. In deuterostome eggs (mammals, fish, amphibians, ascidians, sea urchins, etc.), successful sperm entry leads to a distinct rise in intracellular calcium ions (Ca2+), with mammals and ascidians displaying a series of intracellular calcium spikes required for completion of meiosis. Unfertilized vertebrate eggs arrest development after meiosis I. This developmental pause is attributed to the vaguely defined cytostatic factor (CSF). Current research suggests “CSF” is actually multiple pathways working together to halt division at metaphase of meiosis II. Upon sperm entry into the egg, Ca2+ is released from intracellular stores, leading to inhibition of the CSF-arrest mechanism. Calmodulin-dependent kinase II was shown to be the protein responsible for converting the Ca2+ influx signal into inhibition of CSF and activation of the cyclin degradation machinery to degrade cyclin B, resulting in progression through meiosis II.
In mammals, this rise in Ca2+ was shown to be driven by IP3 stimulation induced by PLCζ provided by the sperm. In general, PLC enzymes stimulate calcium release by internal stores through the breakdown of PIP2 into IP3 and DAG.
Mitosis
Signaling
Beyond the events of meiosis, changes in Ca2+ levels are observed |
https://en.wikipedia.org/wiki/Almost%20prime | In number theory, a natural number is called k-almost prime if it has k prime factors. More formally, a number n is k-almost prime if and only if Ω(n) = k, where Ω(n) is the total number of primes in the prime factorization of n (and can also be seen as the sum of all the primes' exponents): Ω(n) = a1 + a2 + ⋯ + am for n = p1^a1 · p2^a2 ⋯ pm^am.
A natural number is thus prime if and only if it is 1-almost prime, and semiprime if and only if it is 2-almost prime. The set of k-almost primes is usually denoted by Pk. The smallest k-almost prime is 2k. The first few k-almost primes are:
The number πk(n) of positive integers less than or equal to n with exactly k prime divisors (not necessarily distinct) is asymptotic to: πk(n) ~ (n / log n) · (log log n)^(k − 1) / (k − 1)!,
a result of Landau. See also the Hardy–Ramanujan theorem.
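To make the definitions concrete, here is a small self-contained Java sketch (the method names are illustrative, and trial division is chosen for clarity rather than speed) that computes Ω(n) and checks k-almost primality:

public final class AlmostPrime {
    /** Ω(n): the number of prime factors of n, counted with multiplicity. */
    static int bigOmega(long n) {
        int count = 0;
        for (long p = 2; p * p <= n; p++) {
            while (n % p == 0) { n /= p; count++; }   // strip out each prime factor
        }
        if (n > 1) count++;                           // any leftover factor is prime
        return count;
    }

    /** n is k-almost prime iff Ω(n) == k. */
    static boolean isKAlmostPrime(long n, int k) { return bigOmega(n) == k; }

    public static void main(String[] args) {
        System.out.println(bigOmega(8));   // 3: the smallest 3-almost prime is 2^3
        // Product property: Ω(a · b) = Ω(a) + Ω(b).
        System.out.println(bigOmega(6 * 35) == bigOmega(6) + bigOmega(35));  // true
    }
}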
Properties
The product of a j-almost prime and a k-almost prime is a (j + k)-almost prime.
A k-almost prime cannot have a j-almost prime as a factor for any j > k. |
https://en.wikipedia.org/wiki/Schematic | A schematic, or schematic diagram, is a designed representation of the elements of a system using abstract, graphic symbols rather than realistic pictures. A schematic usually omits all details that are not relevant to the key information the schematic is intended to convey, and may include oversimplified elements in order to make this essential meaning easier to grasp, as well as additional organization of the information.
For example, a subway map intended for passengers may represent a subway station with a dot. The dot is not intended to resemble the actual station at all but aims to give the viewer information without unnecessary visual clutter. A schematic diagram of a chemical process uses symbols in place of detailed representations of the vessels, piping, valves, pumps, and other equipment that compose the system, thus emphasizing the functions of the individual elements and the interconnections among them and suppresses their physical details. In an electronic circuit diagram, the layout of the symbols may not look anything like the circuit as it appears in the physical world: instead of representing the way the circuit looks, the schematic aims to capture, on a more general level, the way it works. This may be contrasted with a wiring diagram, which preserves the spatial relationships between each of its components.
Types
Schematics and other types of diagrams, e.g.,
A semi-schematic diagram combines some of the abstraction of a purely schematic diagram with other elements displayed as realistically as possible, for various reasons. It is a compromise between a purely abstract diagram (e.g. the schematic of the Washington Metro) and an exclusively realistic representation (e.g. the corresponding aerial view of Washington).
Electrical and electronic industry
In electrical and electronic industry, a schematic diagram is often used to describe the design of equipment. Schematic diagrams are often used for the maintenance and repair of electronic and ele |
https://en.wikipedia.org/wiki/Dejter%20graph | In the mathematical field of graph theory, the Dejter graph is a 6-regular graph with 112 vertices and 336 edges. It is obtained by deleting a copy of the Hamming code of length 7 from the binary 7-cube.
The Dejter graph, and by extension any graph obtained by deleting a Hamming code of length 2^r − 1 from a (2^r − 1)-cube, is a symmetric graph.
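The construction above is easy to verify by brute force. The following sketch (not from the article; it uses the standard syndrome description of the length-7 Hamming code, in which a word is a codeword exactly when the XOR of the 1-based positions of its set bits is zero) deletes the 16 codewords from the 7-cube and confirms the result has 112 vertices and 336 edges:

public final class DejterGraphCheck {
    public static void main(String[] args) {
        boolean[] isCodeword = new boolean[128];
        for (int v = 0; v < 128; v++) {
            int syndrome = 0;
            for (int i = 1; i <= 7; i++)
                if (((v >> (i - 1)) & 1) == 1) syndrome ^= i;
            isCodeword[v] = (syndrome == 0);      // the 16 words of the Hamming code
        }
        int vertices = 0, degreeSum = 0;
        for (int v = 0; v < 128; v++) {
            if (isCodeword[v]) continue;          // delete the Hamming code
            vertices++;
            for (int i = 0; i < 7; i++)           // 7-cube neighbors differ in one bit
                if (!isCodeword[v ^ (1 << i)]) degreeSum++;
        }
        System.out.println(vertices);             // 112
        System.out.println(degreeSum / 2);        // 336, so the graph is 6-regular
    }
}

Because the Hamming code is perfect with covering radius 1, each surviving vertex loses exactly one of its 7 cube neighbors, which is why the deletion leaves a 6-regular graph.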
In particular, the Dejter graph admits a factorization into two copies of the Ljubljana graph, the third-smallest semi-symmetric cubic graph. The Ljubljana graph has girth 10.
In fact, it has been proven that the Dejter graph can be 2-colored, say with the color set {red, blue}, so that both of the resulting edge-monochromatic red and blue vertex-spanning subgraphs are copies of the Ljubljana graph. These two copies contain exactly the 112 vertices of the Dejter graph and 168 edges each; both copies have girth 10, while the Dejter graph has girth 6 and the 7-cube girth 4. The Dejter graph appears to be the smallest symmetric graph having a connected self-complementary vertex-spanning semi-symmetric cubic subgraph.
Both the red and blue vertex-spanning Ljubljana subgraphs of the Dejter graph can be presented as covering graphs of the Heawood graph, namely as 8-covers of the Heawood graph. This is suggested in each of the two representations of the Ljubljana graph by alternately coloring the inverse images of successive vertices of the Heawood graph, say in black and white, according to the Heawood graph bipartition. Each such inverse image is formed by the 8 neighbors, along a fixed coordinate direction of the 7-cube, of the half of the Hamming code having a fixed weight, 0 or 1. By exchanging these weights via the permutation (0 1), one can pass from the adjacency offered by the red Ljubljana graph to the one offered by the blue Ljublj |
https://en.wikipedia.org/wiki/Klee%27s%20measure%20problem | In computational geometry, Klee's measure problem is the problem of determining how efficiently the measure of a union of (multidimensional) rectangular ranges can be computed. Here, a d-dimensional rectangular range is defined to be a Cartesian product of d intervals of real numbers, which is a subset of Rd.
The problem is named after Victor Klee, who gave an algorithm for computing the length of a union of intervals (the case d = 1) which was later shown to be optimally efficient in the sense of computational complexity theory. The computational complexity of computing the area of a union of 2-dimensional rectangular ranges is now also known, but the case d ≥ 3 remains an open problem.
History and algorithms
In 1977, Victor Klee considered the following problem: given a collection of n intervals in the real line, compute the length of their union. He then presented an algorithm to solve this problem with computational complexity (or "running time") O(n log n) — see Big O notation for the meaning of this statement. This algorithm, based on sorting the intervals, was later shown by Michael Fredman and Bruce Weide (1978) to be optimal.
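A sketch of the one-dimensional computation (assuming the usual sort-and-sweep formulation of interval-union length, not necessarily Klee's original presentation):

import java.util.Arrays;

public final class IntervalUnion {
    /** Length of the union of intervals [a_i, b_i], in O(n log n) time. */
    static double unionLength(double[][] intervals) {
        // One event per endpoint: {coordinate, +1 for left, -1 for right}.
        double[][] events = new double[2 * intervals.length][];
        for (int i = 0; i < intervals.length; i++) {
            events[2 * i]     = new double[]{intervals[i][0], +1};
            events[2 * i + 1] = new double[]{intervals[i][1], -1};
        }
        Arrays.sort(events, (p, q) -> Double.compare(p[0], q[0]));
        double length = 0, last = 0;
        int open = 0;                              // intervals covering the sweep point
        for (double[] e : events) {
            if (open > 0) length += e[0] - last;   // stretch since the last event is covered
            open += (int) e[1];
            last = e[0];
        }
        return length;
    }

    public static void main(String[] args) {
        // [0,2] and [1,3] merge into [0,3]; with [5,6] the union length is 4.
        System.out.println(unionLength(new double[][]{{0, 2}, {1, 3}, {5, 6}}));  // 4.0
    }
}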
Later in 1977, Jon Bentley considered a 2-dimensional analogue of this problem: given a collection of n rectangles, find the area of their union. He also obtained an O(n log n) algorithm, now known as Bentley's algorithm, based on reducing the problem to n 1-dimensional problems: this is done by sweeping a vertical line across the area. Using this method, the area of the union can be computed without explicitly constructing the union itself. Bentley's algorithm is now also known to be optimal (in the 2-dimensional case), and is used in computer graphics, among other areas.
These two problems are the 1- and 2-dimensional cases of a more general question: given a collection of n d-dimensional rectangular ranges, compute the measure of their union. This general problem is Klee's measure problem.
When generalized to the d-dimensional case, B |
https://en.wikipedia.org/wiki/F%C3%B6rster%20resonance%20energy%20transfer | Förster resonance energy transfer (FRET), fluorescence resonance energy transfer, resonance energy transfer (RET) or electronic energy transfer (EET) is a mechanism describing energy transfer between two light-sensitive molecules (chromophores). A donor chromophore, initially in its electronic excited state, may transfer energy to an acceptor chromophore through nonradiative dipole–dipole coupling. The efficiency of this energy transfer is inversely proportional to the sixth power of the distance between donor and acceptor, making FRET extremely sensitive to small changes in distance.
Measurements of FRET efficiency can be used to determine if two fluorophores are within a certain distance of each other. Such measurements are used as a research tool in fields including biology and chemistry.
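The distance dependence described above is conventionally written as the Förster relation E = 1/(1 + (r/R0)^6), where R0 is the Förster radius at which transfer is 50% efficient. A small sketch of this relation (the 5 nm radius below is an illustrative placeholder, not a value from this article):

public final class FretEfficiency {
    /** FRET efficiency for donor-acceptor distance r and Forster radius r0 (same units). */
    static double efficiency(double r, double r0) {
        return 1.0 / (1.0 + Math.pow(r / r0, 6));
    }

    public static void main(String[] args) {
        double r0 = 5.0;                            // assumed Forster radius, in nm
        for (double r = 2.5; r <= 10.0; r += 2.5)
            System.out.printf("r = %4.1f nm  E = %.3f%n", r, efficiency(r, r0));
        // At r = r0 the efficiency is exactly 0.5; doubling the distance to 10 nm
        // drops it to about 0.015, illustrating the sixth-power sensitivity.
    }
}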
FRET is analogous to near-field communication, in that the radius of interaction is much smaller than the wavelength of light emitted. In the near-field region, the excited chromophore emits a virtual photon that is instantly absorbed by a receiving chromophore. These virtual photons are undetectable, since their existence violates the conservation of energy and momentum, and hence FRET is known as a radiationless mechanism. Quantum electrodynamical calculations have been used to determine that radiationless (FRET) and radiative energy transfer are the short- and long-range asymptotes of a single unified mechanism.
Terminology
Förster resonance energy transfer is named after the German scientist Theodor Förster. When both chromophores are fluorescent, the term "fluorescence resonance energy transfer" is often used instead, although the energy is not actually transferred by fluorescence. In order to avoid an erroneous interpretation of the phenomenon that is always a nonradiative transfer of energy (even when occurring between two fluorescent chromophores), the name "Förster resonance energy transfer" is preferred to "fluorescence resonance energy transfer"; however, the latt |
https://en.wikipedia.org/wiki/Few-body%20systems | In mechanics, a few-body system consists of a small number of well-defined structures or point particles.
Quantum mechanics
In quantum mechanics, examples of few-body systems include light nuclear systems (that is, few-nucleon bound and scattering states), small molecules, light atoms (such as helium in an external electric field), atomic collisions, and quantum dots. A fundamental difficulty in describing few-body systems is that the Schrödinger equation and the classical equations of motion are not analytically solvable for more than two mutually interacting particles even when the underlying forces are precisely known. This is known as the few-body problem. For some three-body systems an exact solution can be obtained iteratively through the Faddeev equations. It can be shown that under certain conditions Faddeev equations should lead to the Efimov effect. Most three-body systems are amenable to extremely accurate numerical solutions that use large sets of basis functions and then variationally optimize the amplitudes of the basis functions. Particular cases are the Hydrogen molecular ion or the Helium atom. The latter has been solved very precisely using basis sets of Hylleraas or Frankowski-Pekeris functions (see references of the work of G.W.F. Drake and J.D. Morgan III in Helium atom section).
In many cases theory has to resort to approximations to treat few-body systems. These approximations have to be tested by detailed experimental data. Atomic collisions or precision laser spectroscopy are particularly suitable for such tests. The fundamental force underlying atomic systems, the electromagnetic force, is essentially understood. Therefore, any discrepancy found between experiment and theory can be directly related to the theoretical description of few-body effects, or to the existence of new fundamental forces (beyond-Standard-Model forces). In nuclear systems, in contrast, the underlying force is much less understood. Furthermore, in atomic col |
https://en.wikipedia.org/wiki/Petro-occipital%20fissure | This grooved surface of the foramen magnum is separated on either side from the petrous portion of the temporal bone by the petro-occipital fissure, which is occupied in the fresh state by a plate of cartilage; the fissure is continuous behind with the jugular foramen, and its margins are grooved for the inferior petrosal sinus. |
https://en.wikipedia.org/wiki/Wireless%20Toronto | Wireless Toronto is a volunteer not-for-profit community wireless network in Toronto. Wireless Toronto began in 2005 with the goal of setting up no-cost public wireless Internet access around the Greater Toronto Area and exploring ways to use Wi-Fi technology to strengthen local community and culture. At its peak, Wireless Toronto hotspots served over 1000 connections per day at 38 individual locations.
Wireless Toronto hotspots are created using Linksys WRT54G or Motorola WR850G wireless routers running OpenWrt and WifiDog.
Other free wireless services in the GTA
The Toronto Public Library (TPL) offers free public wireless access in all of its 99 branches.
The Markham Public Libraries (MPL) offers free public wireless access in the Angus Glen Library, the Markham Village Library, the Thornhill Community Library, and the Unionville Library
Viva offers free wireless access on its Rapid Transit Vehicles
TOwifi offers a free Wi-Fi hotspot map
The TTC offers free, ad-supported wireless service at many of its stations
See also
Municipal wireless network
List of wireless community networks by region |
https://en.wikipedia.org/wiki/Pseudomonas%20aeruginosa | Pseudomonas aeruginosa is a common encapsulated, Gram-negative, aerobic–facultatively anaerobic, rod-shaped bacterium that can cause disease in plants and animals, including humans. A species of considerable medical importance, P. aeruginosa is a multidrug resistant pathogen recognized for its ubiquity, its intrinsically advanced antibiotic resistance mechanisms, and its association with serious illnesses – hospital-acquired infections such as ventilator-associated pneumonia and various sepsis syndromes.
The organism is considered opportunistic insofar as serious infection often occurs during existing diseases or conditions – most notably cystic fibrosis and traumatic burns. It generally affects the immunocompromised but can also infect the immunocompetent, as in hot tub folliculitis. Treatment of P. aeruginosa infections can be difficult due to its natural resistance to antibiotics. When more advanced antibiotic drug regimens are needed, adverse effects may result.
It is citrate, catalase, and oxidase positive. It is found in soil, water, skin flora, and most human-made environments throughout the world. It thrives not only in normal atmospheres, but also in low-oxygen atmospheres, and thus has colonized many natural and artificial environments. It uses a wide range of organic material for food; in animals, its versatility enables the organism to infect damaged tissues or those with reduced immunity. The symptoms of such infections are generalized inflammation and sepsis. If such colonizations occur in critical body organs, such as the lungs, the urinary tract, and kidneys, the results can be fatal. Because it thrives on moist surfaces, this bacterium is also found on and in medical equipment, including catheters, causing cross-infections in hospitals and clinics. It is also able to decompose hydrocarbons and has been used to break down tarballs and oil from oil spills. P. aeruginosa is not extremely virulent in comparison with other major pathogenic bacterial species |
https://en.wikipedia.org/wiki/Kenneth%20Falconer%20%28mathematician%29 | Kenneth John Falconer FRSE (born 25 January 1952) is a British mathematician working in mathematical analysis and in particular on fractal geometry. He is Regius Professor of Mathematics in the School of Mathematics and Statistics at the University of St Andrews.
Research
Falconer is known for his work on the mathematics of fractals and in particular sets and measures arising from iterated function systems, especially self-similar and self-affine sets. Closely related is his research on Hausdorff and other fractal dimensions. He formulated Falconer's conjecture on the dimension of distance sets and conceived the notion of a digital sundial. In combinatorial geometry he established a lower bound of 5 for the chromatic number of the plane in the Lebesgue measurable case.
Education and career
Falconer was educated at Kingston Grammar School, Kingston upon Thames and Corpus Christi College, Cambridge. He graduated in 1974 and completed his PhD in 1979 under the supervision of Hallard Croft.
He was a research fellow at Corpus Christi College, Cambridge from 1977 to 1980 before moving to Bristol University. He was appointed Professor of Pure Mathematics at the University of St Andrews in 1993 and was head of the School of Mathematics and Statistics from 2001 to 2004. He served on the council of the London Mathematical Society from 2000 to 2009 including as publications secretary from 2006 to 2009.
Recognition
Falconer was elected a Fellow of the Royal Society of Edinburgh in 1998.
In 2020 he was awarded the Shephard Prize of the London Mathematical Society.
Personal life
Falconer was born 25 January 1952 at Bearsted Memorial Maternity Hospital outside Hampton Court Palace.
His recreational interests include long-distance walking and hill walking. He was chair of the Long Distance Walkers Association from 2000 to 2003 and editor of their journal Strider from 1987 to 1992 and 2007–12. In 2021 he was appointed a Vice President of the LDWA. He has twice climbed all th |
https://en.wikipedia.org/wiki/Melatonin%20as%20a%20medication%20and%20supplement | Melatonin is a dietary supplement and medication as well as a naturally occurring hormone. As a hormone, melatonin is released by the pineal gland and is involved in sleep–wake cycles. As a supplement, it is often used for the attempted short-term treatment of disrupted sleep patterns, such as from jet lag or shift work, and is typically taken orally. Evidence of its benefit for this use, however, is not strong. A 2017 review found that sleep onset occurred six minutes faster with use, but found no change in total time asleep.
Side effects from melatonin supplements are minimal at low doses for short durations (in the studies reported about equally for both melatonin and placebo). Side effects of melatonin are rare but may occur in 1 to 10 patients in 1,000. They may include somnolence (sleepiness), headaches, nausea, diarrhea, abnormal dreams, irritability, nervousness, restlessness, insomnia, anxiety, migraine, lethargy, psychomotor hyperactivity, dizziness, hypertension, abdominal pain, heartburn, mouth ulcers, dry mouth, hyperbilirubinaemia, dermatitis, night sweats, pruritus, rash, dry skin, pain in the extremities, symptoms of menopause, chest pain, glycosuria (sugar in the urine), proteinuria (protein in the urine), abnormal liver function tests, increased weight, tiredness, mood swings, aggression and feeling hungover. Its use is not recommended during pregnancy or breastfeeding or for those with liver disease.
Melatonin acts as an agonist of the melatonin MT1 and MT2 receptors, the biological targets of endogenous melatonin. It is thought to activate these receptors in the suprachiasmatic nucleus of the hypothalamus in the brain to regulate the circadian clock and sleep–wake cycles. Immediate-release melatonin has a short elimination half-life of about 20 to 50 minutes. Prolonged-release melatonin used as a medication has a half-life of 3.5 to 4 hours.
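To put those half-lives in perspective, a first-order elimination sketch (the 40-minute figure is just an assumed midpoint of the 20 to 50 minute range above, not a clinical parameter):

public final class HalfLifeDecay {
    /** Fraction of a dose remaining after t minutes, assuming first-order elimination. */
    static double fractionRemaining(double tMinutes, double halfLifeMinutes) {
        return Math.pow(0.5, tMinutes / halfLifeMinutes);
    }

    public static void main(String[] args) {
        // Immediate-release: roughly 4% of the dose remains after 3 hours.
        System.out.printf("%.3f%n", fractionRemaining(180, 40));   // 0.044
        // Prolonged-release (half-life ~4 h): about 59% remains after the same 3 hours.
        System.out.printf("%.3f%n", fractionRemaining(180, 240));  // 0.595
    }
}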
Melatonin was discovered in 1958. It is sold over the counter in Canada and the United States; in the Unite |
https://en.wikipedia.org/wiki/Heat%20map | A heat map (or heatmap) is a 2-dimensional data visualization technique that represents the magnitude of individual values within a dataset as a color. The variation in color may be by hue or intensity.
"Heat map" is a relatively new term, but the practice of shading matrices has existed for over a century.
History
Heat maps originated in 2D displays of the values in a data matrix. Larger values were represented by small dark gray or black squares (pixels) and smaller values by lighter squares. Toussaint Loua (1873) used a shading matrix to visualize social statistics across the districts of Paris. Sneath (1957) displayed the results of a cluster analysis by permuting the rows and the columns of a matrix to place similar values near each other according to the clustering. Jacques Bertin used a similar representation to display data that conformed to a Guttman scale. The idea for joining cluster trees to the rows and columns of the data matrix originated with Robert Ling in 1973. Ling used overstruck printer characters to represent different shades of gray, one character-width per pixel. Leland Wilkinson developed the first computer program in 1994 (SYSTAT) to produce cluster heat maps with high-resolution color graphics. The Eisen et al. display shown in the figure is a replication of the earlier SYSTAT design.
Software designer Cormac Kinney trademarked the term 'heat map' in 1991 to describe a 2D display depicting financial market information. The company that acquired Kinney's invention in 2003 unintentionally allowed the trademark to lapse.
Types
There are two main types of heat maps: spatial and grid.
A spatial heat map displays the magnitude of a spatial phenomenon as color, usually cast over a map. In the image labeled “Spatial Heat Map Example,” temperature is displayed by color range across a map of the world. Color ranges from blue (cold) to red (hot).
A grid heat map displays magnitude as color in a two-dimensional matrix, with each dimension rep |
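A minimal sketch of the grid idea (the linear blue-to-red ramp is one simple choice of color mapping, not a standard):

import java.awt.Color;

public final class GridHeatMap {
    /** Map a value in [min, max] linearly onto a blue (cold) to red (hot) ramp. */
    static Color heatColor(double v, double min, double max) {
        double t = Math.max(0, Math.min(1, (v - min) / (max - min)));
        return new Color((int) (255 * t), 0, (int) (255 * (1 - t)));
    }

    public static void main(String[] args) {
        double[][] data = {{1, 4}, {9, 2}};        // toy data matrix
        for (double[] row : data) {
            for (double v : row) {
                Color c = heatColor(v, 1, 9);
                System.out.printf("%.0f -> rgb(%d,%d,%d)  ", v, c.getRed(), c.getGreen(), c.getBlue());
            }
            System.out.println();
        }
    }
}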
https://en.wikipedia.org/wiki/Fei%E2%80%93Ranis%20model%20of%20economic%20growth | The Fei–Ranis model of economic growth is a dualism model in developmental economics or welfare economics that has been developed by John C. H. Fei and Gustav Ranis and can be understood as an extension of the Lewis model. It is also known as the Surplus Labor model. It recognizes the presence of a dual economy comprising both the modern and the primitive sector and takes the economic situation of unemployment and underemployment of resources into account, unlike many other growth models that consider underdeveloped countries to be homogenous in nature. According to this theory, the primitive sector consists of the existing agricultural sector in the economy, and the modern sector is the rapidly emerging but small industrial sector. Both the sectors co-exist in the economy, wherein lies the crux of the development problem. Development can be brought about only by a complete shift in the focal point of progress from the agricultural to the industrial economy, such that there is augmentation of industrial output. This is done by transfer of labor from the agricultural sector to the industrial one, showing that underdeveloped countries do not suffer from constraints of labor supply. At the same time, growth in the agricultural sector must not be negligible and its output should be sufficient to support the whole economy with food and raw materials. Like in the Harrod–Domar model, saving and investment become the driving forces when it comes to economic development of underdeveloped countries.
Basics of the model
One of the biggest drawbacks of the Lewis model was the undermining of the role of agriculture in boosting the growth of the industrial sector. In addition to that, he did not acknowledge that the increase in productivity of labor should take place prior to the labor shift between the two sectors. However, these two ideas were taken into account in the Fei–Ranis dual economy model of three growth stages. They further argue that the model lacks in the proper |
https://en.wikipedia.org/wiki/Ann%20McDermott | Ann E. McDermott is an American biophysicist who uses nuclear magnetic resonance to study the structure, function, and dynamics of proteins in native-like environments. She is currently the Esther Breslow Professor of Biological Chemistry and Chair of the Educational Policy and Planning Committee of the Arts and Sciences at Columbia University. She has also previously served as Columbia's Associate Vice President for Academic Advising and Science Initiatives in the Arts and Sciences. She is an elected member of both the American Academy of Arts and Sciences and the National Academy of Sciences.
Education
McDermott obtained her Bachelor of Science in Chemistry from Harvey Mudd College in Claremont, CA in 1981. In 1988, she obtained her doctoral degree at U.C. Berkeley in the Department of Chemistry with Kenneth Sauer and Melvin Klein.
Career
As a post-doctoral researcher she worked with Robert G Griffin at The Massachusetts Institute of Technology. She joined Columbia University in 1991.
McDermott is a member of the board of trustees for Harvey Mudd College. She is also a member of the Board of the New York Structural Biology Center.
Research interests
McDermott's research exploits Nuclear Magnetic Resonance to study the functions, structures, and dynamics of proteins including enzymes, viral proteins, membrane proteins and amyloid proteins. In particular, her group uses and develops solid state methodology including high-resolution magic angle spinning.
Awards and honors
McDermott has won several awards and fellowships throughout her career including the DuPont Young Investigator Award (1992), the Cottrell Scholars Award (1994), the Alfred P. Sloan Research Fellowship (1995), the American Chemical Society's Award in Pure Chemistry (1996), the Eastern Analytic Symposium Award for Achievement in Magnetic Resonance (2005), and the Royal Society of Chemistry's Bourke Award (2014). In 2000, she was inducted into the American Academy of Arts and Sciences. In 2 |
https://en.wikipedia.org/wiki/Impartial%20game | In combinatorial game theory, an impartial game is a game in which the allowable moves depend only on the position and not on which of the two players is currently moving, and where the payoffs are symmetric. In other words, the only difference between player 1 and player 2 is that player 1 goes first. The game is played until a terminal position is reached. A terminal position is one from which no moves are possible. Then one of the players is declared the winner and the other the loser. Furthermore, impartial games are played with perfect information and no chance moves, meaning all information about the game and operations for both players are visible to both players.
Impartial games include Nim, Sprouts, Kayles, Quarto, Cram, Chomp, Subtract a square, Notakto, and poset games. Go and chess are not impartial, as each player can only place or move pieces of their own color. Games such as poker, dice or dominos are not impartial games as they rely on chance.
Impartial games can be analyzed using the Sprague–Grundy theorem, stating that every impartial game under the normal play convention is equivalent to a nimber. The representation of this nimber can change from game to game, but every possible state of any variation of an impartial game board should be able to have some nimber value. For example, several nim heaps in the game nim can be calculated, then summed using nimber addition, to give a nimber value for the game.
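As a concrete illustration of the Nim example above (a sketch; the nim-sum rule is the standard Sprague–Grundy result for Nim, not something specific to this article):

public final class NimValue {
    /** Grundy value (nimber) of a Nim position: the XOR of its heap sizes. */
    static int nimSum(int[] heaps) {
        int g = 0;
        for (int h : heaps) g ^= h;   // nimber addition is bitwise XOR
        return g;
    }

    public static void main(String[] args) {
        // Under the normal play convention, the player to move loses iff the nim-sum is 0.
        System.out.println(nimSum(new int[]{1, 2, 3}));  // 0: a losing position to move from
        System.out.println(nimSum(new int[]{2, 3, 4}));  // 5: the mover has a winning reply
    }
}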
A game that is not impartial is called a partisan game, though some partisan games can still be evaluated using nimbers such as Domineering. Domineering would not be classified as an impartial game as players use differently acting pieces, one player with vertical dominoes, one with horizontal ones, thereby breaking the rule that each player must be able to act using the same operations.
Requirements
All impartial games must meet the following conditions:
Two players must alternate turns until a final state is reached.
A winner is chosen wh |
https://en.wikipedia.org/wiki/Aluminide | An aluminide is a compound of aluminium with one or more other elements. Since aluminium is near the nonmetals on the periodic table, it can bond with metals differently from other metals. The properties of an aluminide are between those of a metal alloy and those of an ionic compound.
Examples
Magnesium aluminide, MgAl
Titanium aluminide, TiAl
Iron aluminides, including Fe3Al and FeAl
Nickel aluminide, Ni3Al
See category for a list. |
https://en.wikipedia.org/wiki/Midblastula | In developmental biology, midblastula or midblastula transition (MBT) occurs during the blastula stage of embryonic development in non-mammals. During this stage, the embryo is referred to as a blastula. The series of changes to the blastula that characterize the midblastula transition include activation of zygotic gene transcription, slowing of the cell cycle, increased asynchrony in cell division, and an increase in cell motility.
Blastula Before MBT
Before the embryo undergoes the midblastula transition it is in a state of fast and constant replication of cells. The cell cycle is very short. The cells in the zygote are also replicating synchronously, always undergoing cell division at the same time. The zygote is not producing its own mRNA but rather it is using mRNAs that were produced in the mother and loaded into the oocyte in order to produce proteins necessary for zygotic growth. The zygotic DNA (genetic material) is not being used because it is repressed through a variety of mechanisms such as methylation. This repressed DNA is sometimes referred to as heterochromatin and is tightly packed together inside the cell because it is not being used for transcription.
Characteristics of the MBT
Activation of Zygotic Gene Transcription
At this stage, the zygote starts producing its own mRNAs that are made from its own DNA, and no longer uses the maternal mRNA. This can also be called the maternal to zygotic transition. The maternal mRNAs are then degraded. Since the cells are now transcribing their own DNA, this stage is where expression of paternal genes is first observed.
Cell Cycle Changes
When the zygote begins to produce its own mRNA, the cell cycle begins to slow down and the G1 and G2 phases are added to the cell cycle. The addition of these phases allows the cell to have more time to proofread the new genetic material it is making to |
https://en.wikipedia.org/wiki/Bland%27s%20rule | In mathematical optimization, Bland's rule (also known as Bland's algorithm, Bland's anti-cycling rule or Bland's pivot rule) is an algorithmic refinement of the simplex method for linear optimization.
With Bland's rule, the simplex algorithm solves feasible linear optimization problems without cycling.
The original simplex algorithm starts with an arbitrary basic feasible solution, and then changes the basis in order to decrease the minimization target and find an optimal solution. Usually, the target indeed decreases in every step, and thus after a bounded number of steps an optimal solution is found. However, there are examples of degenerate linear programs, on which the original simplex algorithm cycles forever. It gets stuck at a basic feasible solution (a corner of the feasible polytope) and changes bases in a cyclic way without decreasing the minimization target.
Such cycles are avoided by Bland's rule for choosing a column to enter and a column to leave the basis.
Bland's rule was developed by Robert G. Bland, now an Emeritus Professor of operations research at Cornell University, while he was a research fellow at the Center for Operations Research and Econometrics in Belgium.
Algorithm
One uses Bland's rule during an iteration of the simplex method to decide first what column (known as the entering variable) and then row (known as the leaving variable) in the tableau to pivot on. Assuming that the problem is to minimize the objective function, the algorithm is loosely defined as follows:
Choose the lowest-numbered (i.e., leftmost) nonbasic column with a negative (reduced) cost.
Now among the rows, choose the one with the lowest ratio between the (transformed) right-hand side and the coefficient in the pivot column, considering only rows where that coefficient is greater than zero. If the minimum ratio is shared by several rows, choose the row with the lowest-numbered column (variable) basic in it, as in the sketch below.
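A sketch of just this pivot-selection step (the tableau layout, with the objective row first and the right-hand side in the last column, is an assumed convention, not part of Bland's statement of the rule):

public final class BlandPivot {
    static final double EPS = 1e-9;

    /**
     * Bland's rule on a minimization tableau.
     * t[0][0..n-1] holds the reduced costs and t[i][n] the right-hand side of row i;
     * basis[i-1] is the index of the variable that is basic in row i.
     * Returns {enteringColumn, leavingRow}, or null when no reduced cost is negative (optimal).
     */
    static int[] choosePivot(double[][] t, int[] basis) {
        int m = t.length - 1, n = t[0].length - 1;
        int col = -1;
        for (int j = 0; j < n; j++)
            if (t[0][j] < -EPS) { col = j; break; }    // lowest-numbered negative reduced cost
        if (col < 0) return null;

        int row = -1;
        double best = Double.POSITIVE_INFINITY;
        for (int i = 1; i <= m; i++) {
            if (t[i][col] > EPS) {                     // only positive pivot coefficients
                double ratio = t[i][n] / t[i][col];
                // A strictly better ratio wins; ties go to the lowest-numbered basic variable.
                if (row < 0 || ratio < best - EPS
                        || (Math.abs(ratio - best) <= EPS && basis[i - 1] < basis[row - 1])) {
                    best = ratio;
                    row = i;
                }
            }
        }
        return new int[]{col, row};
    }
}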
It can be formally proved that, with Bland's selection rule, the simplex a |
https://en.wikipedia.org/wiki/MGI%20%28company%29 | MGI or MGI Tech is a Chinese biotechnology company, which provides a line of products and technologies that serves the genetic sequencing, genotyping and gene expression, and proteomics markets. Its headquarters are located in Shenzhen, China.
History
In 2016, MGI was founded as a subsidiary of BGI Group. It manufactures high-throughput genetic sequencing systems and other products for use in the life sciences and health care sectors. As of July 2022, the company's operation is divided into two primary business segments: genetic sequencers and laboratory automation systems.
Originally a subsidiary of BGI Group, MGI was spun out and listed on the Shanghai stock exchange in 2022.
In May 2020, MGI raised $1 billion in series B funding from IDG Capital and CPE China Fund. In December 2020, the company submitted its IPO application to the Shanghai Stock Exchange STAR Market. In September 2022, the company won approval for its IPO on the Shanghai Stock Exchange STAR Market’s high-tech board.
Subsidiaries
Complete Genomics
In March 2013, Complete Genomics was acquired by BGI Group. After the acquisition, Complete Genomics moved to San Jose, and in June 2018 became part of MGI. The acquisition was one of the outcomes of the $1.5 billion 'collaborative funds', i.e., a 10-year loan initially provided by China Development Bank to acquire all 128 of Illumina, Inc.'s newest and fastest next-generation sequencers, including HiSeq 2000.
Research & development
The company is known for manufacturing DNA sequencers based on low-cost DNA nanoball sequencing technology, which was refined further by the parent company BGI after the acquisition of Complete Genomics. The refinement included combinatorial probe anchor synthesis technology, which involves loading DNA nanoballs (DNBs) onto a patterned array chip using a fluidic system. Subsequently, a sequencing primer is added to the adaptor region of the DNBs in order to hybridize them.
Critical findings
Human exom |
https://en.wikipedia.org/wiki/United%20States%20v.%20Fricosu | United States v. Fricosu, 841 F.Supp.2d 1232 (D. Colo. 2012), is a federal criminal case in Colorado that addressed whether a person can be compelled to reveal his or her encryption passphrase or password, despite the U.S. Constitution's Fifth Amendment protection against self-incrimination. On January 23, 2012, Judge Robert E. Blackburn held that under the All Writs Act, Fricosu was required to produce an unencrypted hard drive.
Fricosu's attorney claimed it was possible she did not remember the password. A month later, Fricosu's ex-husband handed the police a list of potential passwords. One of the passwords worked, rendering the self-incrimination issue moot.
The Electronic Frontier Foundation filed an amicus brief in support of Fricosu.
Fricosu subsequently entered a plea agreement in 2013, meaning that the question of a defendant's right to resist mandatory decryption will not be addressed by a higher court until such time as a future case addressing the same issue arises.
See also
Key disclosure law
In re Boucher
United States v. Kirschner |
https://en.wikipedia.org/wiki/Numerical%20Methods%20for%20Partial%20Differential%20Equations | Numerical Methods for Partial Differential Equations is a bimonthly peer-reviewed scientific journal covering the development and analysis of new methods for the numerical solution of partial differential equations. It was established in 1985 and is published by John Wiley & Sons. The editors-in-chief are George F. Pinder (University of Vermont) and John R. Whiteman (Brunel University).
Abstracting and indexing
The journal is abstracted and indexed in:
According to the Journal Citation Reports, the journal has a 2020 impact factor of 3.009. |
https://en.wikipedia.org/wiki/Naturalistic%20decision-making | The naturalistic decision making (NDM) framework emerged as a means of studying how people make decisions and perform cognitively complex functions in demanding, real-world situations. These include situations marked by limited time, uncertainty, high stakes, team and organizational constraints, unstable conditions, and varying amounts of experience.
The NDM framework and its origins
The NDM movement originated at a conference in Dayton, Ohio in 1989, which resulted in a book by Gary Klein, Judith Orasanu, Roberta Calderwood, and Caroline Zsambok. The NDM framework focuses on cognitive functions such as decision making, sensemaking, situational awareness, and planning – which emerge in natural settings and take forms that are not easily replicated in the laboratory. For example, it is difficult to replicate high stakes, or to achieve extremely high levels of expertise, or to realistically incorporate team and organizational constraints. Therefore, NDM researchers rely on cognitive field research methods such as task analysis to observe and study skilled performers. From the perspective of scientific methodology, NDM studies usually address the initial stages of observing phenomena and developing descriptive accounts. In contrast, controlled laboratory studies emphasize the testing of hypotheses. NDM and controlled experimentation are thus complementary approaches. NDM provides the observations and models, and controlled experimentation provides the testing and formalization.
Recognition-primed decision-making model (RPD)
The present form of RPD has three main variations. In the first variation, the decision maker when faced with the problem at hand, responds with the course of action that was first generated. In the second variation, the decision maker tries to understand the course of events that led up to the current situation, using mental simulation. In the final variation, the decision maker evaluates each course of action generated and then chooses th |
https://en.wikipedia.org/wiki/Wallis%20product | In mathematics, the Wallis product for π/2, published in 1656 by John Wallis, states that π/2 = ∏_{n=1}^∞ (2n/(2n − 1)) · (2n/(2n + 1)) = (2/1 · 2/3) · (4/3 · 4/5) · (6/5 · 6/7) · ⋯
Proof using integration
Wallis derived this infinite product using interpolation, though his method is not regarded as rigorous. A modern derivation can be found by examining ∫₀^π sinⁿ x dx for even and odd values of n, and noting that for large n, increasing n by 1 results in a change that becomes ever smaller as n increases. Let
I(n) = ∫₀^π sinⁿ x dx.
(This is a form of Wallis' integrals.) Integrating by parts yields the recurrence relation
I(n) = ((n − 1)/n) · I(n − 2),
and direct computation gives the base values I(0) = π and I(1) = 2 for later use.
Now, we calculate I(2n) for even arguments by repeatedly applying the recurrence relation, eventually getting down to I(0), which we have calculated:
I(2n) = π · ∏_{k=1}^n (2k − 1)/(2k).
Repeating the process for odd arguments I(2n + 1),
I(2n + 1) = 2 · ∏_{k=1}^n (2k)/(2k + 1).
We make the following observation, based on the fact that sin^(2n+1) x ≤ sin^(2n) x ≤ sin^(2n−1) x for 0 ≤ x ≤ π:
I(2n + 1) ≤ I(2n) ≤ I(2n − 1).
Dividing by I(2n + 1):
1 ≤ I(2n)/I(2n + 1) ≤ I(2n − 1)/I(2n + 1) = (2n + 1)/(2n), where the last equality comes from our recurrence relation.
By the squeeze theorem, I(2n)/I(2n + 1) → 1 as n → ∞, and since I(2n)/I(2n + 1) = (π/2) · ∏_{k=1}^n ((2k − 1)(2k + 1))/((2k)(2k)), this yields the Wallis product: π/2 = ∏_{k=1}^∞ ((2k)(2k))/((2k − 1)(2k + 1)).
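A quick numerical check of the partial products (a sketch; the cutoff of one million terms is arbitrary):

public final class WallisProduct {
    public static void main(String[] args) {
        double p = 1.0;                       // partial product p_k
        for (int n = 1; n <= 1_000_000; n++) {
            double twoN = 2.0 * n;
            p *= (twoN / (twoN - 1)) * (twoN / (twoN + 1));
        }
        System.out.println(2 * p);            // ~3.14159..., slowly approaching pi
        System.out.println(Math.PI);          // 3.141592653589793
    }
}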
Proof using Laplace's method
See the main page on Gaussian integral.
Proof using Euler's infinite product for the sine function
While the proof above is typically featured in modern calculus textbooks, the Wallis product is, in retrospect, an easy corollary of the later Euler infinite product for the sine function.
Let x = π/2 in Euler's product sin x = x · ∏_{n=1}^∞ (1 − x²/(π²n²)). Since sin(π/2) = 1, this gives 1 = (π/2) · ∏_{n=1}^∞ (1 − 1/(4n²)), and therefore π/2 = ∏_{n=1}^∞ 4n²/(4n² − 1).
Relation to Stirling's approximation
Stirling's approximation for the factorial function n! asserts that
n! = √(2πn) · (n/e)ⁿ · (1 + O(1/n)).
Consider now the finite approximations to the Wallis product, obtained by taking the first k terms in the product
p_k = ∏_{n=1}^k ((2n)(2n))/((2n − 1)(2n + 1)),
where p_k can be written as
p_k = 2^(4k) (k!)⁴ / ((2k)!² (2k + 1)).
Substituting Stirling's approximation in this expression (both for k! and (2k)!) one can deduce (after a short calculation) that p_k converges to π/2 as k → ∞.
Derivative of the Riemann zeta function at zero
The Riemann zeta function and the Dirichlet eta function can be defined:
ζ(s) = ∑_{n=1}^∞ 1/n^s for Re(s) > 1, and η(s) = (1 − 2^(1−s)) ζ(s) = ∑_{n=1}^∞ (−1)^(n−1)/n^s for Re(s) > 0.
Applying an Euler transform to the latter series, the following is obtained:
See also
John Wallis, English mathematician who is given partial credit for the development of infinitesimal calculus and pi |
https://en.wikipedia.org/wiki/WRDC | WRDC (channel 28) is a television station licensed to Durham, North Carolina, United States, serving the Research Triangle area as an affiliate of MyNetworkTV. It is owned by Sinclair Broadcast Group alongside Raleigh-licensed CW affiliate WLFL (channel 22). Both stations share studios in the Highwoods Office Park, just outside downtown Raleigh, while WRDC's transmitter is located in Auburn, North Carolina.
Channel 28 is the third-oldest television station in the Triangle and was the market's NBC affiliate for its first 27 years of operation. It was perennially the third-rated station in the market and did not produce local newscasts for significant portions of its tenure with NBC, which contributed to the network moving to another station.
Prior use of channel 28 in Raleigh
Channel 28 in Raleigh was initially occupied by WNAO-TV, the first television station in the Raleigh–Durham market and North Carolina's first UHF station. Owned by the Sir Walter Television Company, WNAO-TV broadcast from July 12, 1953, to December 31, 1957, primarily as a CBS affiliate with secondary affiliations with other networks. The station was co-owned with WNAO radio (850 AM and 96.1 FM), which Sir Walter had bought from The News & Observer newspaper after obtaining the television construction permit. After the Raleigh–Durham market received two VHF television stations in 1954 and 1956 (WTVD, channel 11, and WRAL-TV, channel 5, respectively), WNAO-TV found the going increasingly difficult, as did many early UHF stations. The station signed off December 31, 1957, and its owner entered into a joint venture with another dark UHF outlet that was successful in obtaining channel 8 in High Point.
History
WRDU-TV/Triangle Telecasters
In 1966, a major overhaul of the UHF allocation table moved the market's channel 28 allotment from Raleigh to Durham. On November 18 of that year, Triangle Telecasters, Inc., a group led by law professor Robinson O. Everett, applied to the Federal Communicatio |
https://en.wikipedia.org/wiki/Twin%20pattern | In software engineering, the Twin pattern is a software design pattern that allows developers to model multiple inheritance in programming languages that do not support multiple inheritance. This pattern avoids many of the problems with multiple inheritance.
Definition
Instead of having a single class which is derived from two super-classes, have two separate sub-classes, each derived from one of the two super-classes. These two sub-classes are closely coupled, so both can be viewed as a Twin object having two ends.
Applicability
The twin pattern can be used:
to model multiple inheritance in a language in which multiple inheritance is not supported
to avoid some problems of multiple inheritance.
Structure
There are two or more parent classes to be inherited from, with a sub-class derived from each of the super-classes. The sub-classes are mutually linked via fields, and each sub-class may override the methods inherited from its super-class. New methods and fields are usually declared in one sub-class.
The following diagram shows the typical structure of multiple inheritance:
The following diagram shows the Twin pattern structure after replacing the previous multiple inheritance structure:
Collaborations
Each child class is responsible for the protocol inherited from its parent. It handles the messages from this protocol and forwards other messages to its partner class.
Clients of the twin pattern reference one of the twin objects directly and the other via its twin field.
Clients that rely on the protocols of parent classes communicate with objects of the respective child class.
Sample code
The following code is a sketched implementation of a computer game board with moving balls.
Class for the game board:
public class Gameboard extends Canvas {
public int width, height;
public GameItem firstItem;
…
}
Code sketch for GameItem class:
public abstract class GameItem {
Gameboard board;
int |
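The excerpt is cut off, but the pattern's intent can be sketched by completing the ball example: two twin subclasses, one inheriting from GameItem and one from Thread, each holding a field pointing at the other. The class names BallItem and BallThread and the bodies below are assumptions following the pattern's usual presentation, not the article's verbatim code, and they presume GameItem declares draw() and click() hooks:

// Twin 1: the game-item side; delegates threading to its twin.
public class BallItem extends GameItem {
    BallThread twin;                 // link to the partner object
    int x, y, dx = 1, dy = 1;        // hypothetical position and velocity

    public void draw() { /* render the ball at (x, y) on board */ }

    public void move() {             // bounce around inside the board
        x += dx; y += dy;
        if (x < 0 || x > board.width)  dx = -dx;
        if (y < 0 || y > board.height) dy = -dy;
    }

    public void click() {            // forward UI events to the thread twin
        if (twin.isAlive()) twin.interrupt(); else twin.start();
    }
}

// Twin 2: the thread side; delegates game behavior to its twin.
public class BallThread extends Thread {
    BallItem twin;                   // link back to the partner object

    public void run() {              // drive the animation via the item twin
        while (!isInterrupted()) {
            twin.move();
            twin.draw();
            try { Thread.sleep(20); } catch (InterruptedException e) { return; }
        }
    }
}

Together the pair behaves like a single object inheriting from both GameItem and Thread, which is exactly what single-inheritance Java forbids directly.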
https://en.wikipedia.org/wiki/The%20Western%20Australian%20Flora%3A%20A%20Descriptive%20Catalogue | The Western Australian Flora: A Descriptive Catalogue was published by the Wildflower Society of Western Australia, the Western Australian Herbarium,
CALM, and the Botanic Gardens and Parks Authority of Perth, Western Australia.
Publication
At the time of publication in 2000, the number of published vascular plant species recognised had reached 9,640, almost double the figure from the work of Beard in 1969.
The publication of the book was an important stage of the cataloguing of details of flora in Western Australia.
The introduction of the book by Alex R. Chapman is available as a PDF file, at the external link at FloraBase.
Floristic regions
The front inside cover has an important, distinctive map of Western Australian Biogeographic Regions and Botanical Provinces, after Thackway and Cresswell (1995), the authors of the IBRA system.
Following his 1980 summary of Diels, Burbidge and Gardner, Beard established the three phytogeographic provinces (Northern, Eremaean, and South West), and the map links the relationship of these provinces with the IBRA regions and subregions.
Publication details
Paczkowska, Grazyna and Alex R. Chapman (2000). The Western Australian Flora: A Descriptive Catalogue. Perth, W.A.: Wildflower Society of Western Australia, Western Australian Herbarium, and Western Australian Botanic Gardens & Parks Authority. (pbk.)
See also
How to Know Western Australian Wildflowers
Vegetation Survey of Western Australia |
https://en.wikipedia.org/wiki/Glossy%20display | A glossy display is an electronic display with a glossy surface. In certain light environments, glossy displays provide better color intensity and contrast ratios than matte displays. The primary disadvantage of these displays is their tendency to reflect any external light, often resulting in an undesirable glare.
Technology
Some LCDs use an antireflective coating, or a nanotextured glass surface, to reduce the amount of external light reflecting from the surface without affecting the light emanating from the screen, as an alternative to a matte display.
Disadvantages
Because of the reflective nature of the display, in most lighting conditions that include direct light sources facing the screen, glossy displays create reflections, which can be distracting to the user of the computer. This can be especially distracting to users working in an environment where the position of lights and windows are fixed, such as in an office, as these create unavoidable reflections on glossy displays.
Adverse health effects
Ergonomic studies show that prolonged work in the office environment with the presence of discomforting glares and disturbances from light reflections on the screen can cause mild to severe health effects, ranging from eye strain and headaches to photosensitive epileptic episodes. These effects are usually explained by the physiology of the human eye and the human visual system. The image of light sources reflected in the screen can cause the human visual system to focus on that image, which is usually at a much farther distance than the information shown on the screen. This competition between two images that can be focused is considered to be the primary source of such effects.
Advantages
In controlled environments, such as darkened rooms, or rooms where all light sources are diffused, glossy displays create more saturated colors, deeper blacks, brighter whites, and are sharper than matte displays. This is why supporters of glossy screens consider these typ |
https://en.wikipedia.org/wiki/Summer%20Rain%20%28Johnny%20Rivers%20song%29 | "Summer Rain" is a song written by James Hendricks and performed by Johnny Rivers on his 1968 LP Realization. Of his several dozen releases, it is considered his sixth greatest hit internationally.
It reached No. 14 on the U.S. Billboard Hot 100, No. 6 on the U.S. Cash Box Top 100, and No. 10 in Canada in early January 1968.
"Summer Rain" is about lifelong love during "the summer of love" of 1967. It was released in the late fall, as a reminiscence of the previous summer. The song references Sgt. Pepper's Lonely Hearts Club Band, the Beatles album which was released during the middle of that year.
Personnel
Lead vocals and acoustic guitar by Johnny Rivers
Strings and horns by Marty Paich
Drums by Hal Blaine
Bass by Joe Osborn
Uncredited: Electric Organ |
https://en.wikipedia.org/wiki/Viglen | Viglen Ltd provides IT products and services, including storage systems, servers, workstations and data/voice communications equipment and services.
History
The company was formed in 1975 by Vigen Boyadjian. During the 1980s, the company specialised in direct sales through multi-page advertisements in leading computer magazines, catering particularly, but not exclusively, to owners of Acorn computers.
Viglen was acquired by Alan Sugar (later Lord Sugar)'s company Amstrad in June 1994. It was listed as a public limited company in 1997, and Amstrad plc shares were split into Viglen and Betacom shares, Betacom being renamed to Amstrad PLC. Following the sale in July 2007 of Amstrad PLC to Rupert Murdoch's BSkyB, Viglen became Sugar's sole IT establishment.
Viglen was formerly run by CEO Bordan Tkachuk, a longtime associate of Lord Sugar who can be seen making special guest appearances on The Apprentice. From 1994 to 1998, the company sponsored Charlton Athletic F.C., a deal that ended when the club won promotion to the FA Premier League.
In December 2005, Viglen relocated from its London headquarters in Wembley to Colney Street near St Albans, into a building which also houses its fabrication plant. Viglen focused particularly on the education and public sectors, selling both desktop and server systems, and also had interests in other IT markets such as managed services, high-performance clusters, and network-attached storage.
In July 2009, Lord Sugar resigned as the chairman of Viglen (and most of his other companies), handing over the reins of the company to longtime associate, Claude Littner. In January 2014, Sugar sold his interest in Viglen to the Westcoast Group, which merged it with another of its subsidiaries, XMA.
The Apprentice
Under its former ownership by Lord Sugar, the Viglen headquarters doubled up as one of the filming locations for the BBC programme The Apprentice, with various scenes including the infamous "job interviews" being set there. The "walk of sh |
https://en.wikipedia.org/wiki/Spunk%20Library | The Spunk Library (also known as Spunk Press) was an anarchist Internet archive. The name "spunk" was chosen for the term's meaning in Swedish ("anything we want it to mean"), English ("courage or spirit"), and Australian ("an attractive person"), summarized by the website as "nondescript, energetic, courageous and attractive".
According to anarchist librarian Chuck Munson, the library was begun as Spunk Press in 1992. The founding contributors – Ian Heavens, Jack Jansen, Andrew Flood, Iain McKay and Practical Anarchy editors Munson and Mikael Cardell – originally met online via Jansen's Anarchy Discussion email list. The Library was run by an editorial collective during the 1990s. It was not intended to replace print publishing, but rather served as a shop window promoting anarchist book publishers, newspapers and journals.
By 1995, it was already the largest anarchist archive of published material catalogued on computer networks, though it faced a media assault accusing it of collaborating with terrorists such as the Red Army Faction, of providing instructions for bomb-making and of coordinating the “disruption of schools, looting of shops and attacks on multinational firms.” The Library remained largely inactive during the first decade of the 2000s, with the home page last updated in March 2002.
The Rough Guide to the Internet described the Library as being "organized neatly and with reassuring authority". Chris Atton, writing in Alternative Media (2002) hailed the site as an "advertisement for socially responsible anarchism with a significant intellectual pedigree", remarking that "[i]n a world where anarchism is still largely derided or maligned by the mass media, that is an important function" and drawing a comparison to Infoshop.org.
See also
Alternative Media Project
Libcom.org, an online space that acts in part as a library and archive, predominantly in English.
Anarchy Archives, an online research center on the history and theory o |
https://en.wikipedia.org/wiki/Leaf%20angle%20distribution | The leaf angle distribution (or LAD) of a plant canopy refers to the mathematical description of the angular orientation of the leaves in the vegetation. Specifically, if each leaf is conceptually represented by a small flat plate, its orientation can be described with the zenith and the azimuth angles of the surface normal to that plate. If the leaf has a complex structure and is not flat, it may be necessary to approximate the actual leaf by a set of small plates, in which case there may be a number of leaf normals and associated angles. The LAD describes the statistical distribution of these angles.
Examples of leaf angle distributions
Different plant canopies exhibit different LADs: For instance, grasses and willows have their leaves largely hanging vertically (such plants are said to have an erectophile LAD), while oaks tend to maintain their leaves more or less horizontally (these species are known as having a planophile LAD). In some tree species, leaves near the top of the canopy follow an erectophile LAD while those at the bottom of the canopy are more planophile. This may be interpreted as a strategy by that plant species to maximize exposure to light, an important constraint to growth and development. Yet other species (notably sunflower) are capable of reorienting their leaves throughout the day to optimize exposure to the Sun: this is known as heliotropism.
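The classical de Wit model distributions give concrete densities for these categories. Below is a minimal sketch: the distribution formulas are the standard de Wit forms over the leaf zenith angle, while the function names are our own.

```python
import numpy as np

# Classical de Wit leaf-angle densities g(theta) over the leaf zenith
# angle theta in [0, pi/2]; each integrates to 1 over that interval.
def planophile(theta):   # mostly horizontal leaves (e.g. oaks)
    return (2 / np.pi) * (1 + np.cos(2 * theta))

def erectophile(theta):  # mostly vertical leaves (e.g. grasses, willows)
    return (2 / np.pi) * (1 - np.cos(2 * theta))

def spherical(theta):    # leaf normals oriented as on a sphere
    return np.sin(theta)

theta = np.linspace(0.0, np.pi / 2, 100_001)
for g in (planophile, erectophile, spherical):
    integral = np.sum(g(theta)) * (theta[1] - theta[0])  # crude Riemann sum
    print(f"{g.__name__}: integral ~ {integral:.3f}")    # each ~ 1.000
```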
Importance of LAD
The LAD of a plant canopy has a significant impact on the reflectance, transmittance and absorption of solar light in the vegetation layer, and thus also on its growth and development. LAD can also serve as a quantitative index to monitor the state of the plants, as wilting usually results in more erectophile LADs. Models of radiation transfer need to take this distribution into account to predict, for instance, the albedo or the productivity of the canopy.
Measuring LAD
Accurately measuring the statistical properties of leaf angle distributions is not a trivial matter, especi |
https://en.wikipedia.org/wiki/27%20%28number%29 | 27 (twenty-seven; Roman numeral XXVII) is the natural number following 26 and preceding 28.
In mathematics
Twenty-seven is equal to the cube of three: 3³ = 27; it is also the tetration ²3 = 3³ (see tetration). It is divisible by the number of prime numbers below it (9).
In decimal, 27 is the first composite number not divisible by any of its digits. In base ten, it is also
It is also the first non-trivial decagonal number.
The aliquot sum of 27 is 13, the sixth prime number; its aliquot sequence (27, 13, 1, 0) contains only one composite number and is rooted in the 13-aliquot tree.
Whereas the composite index of 27 is 17 (the cousin prime to 13), 7 is the prime index of 17; a prime reciprocal magic square based on multiples of 1/7 has a magic constant of 27.
In the Collatz conjecture (i.e. the 3n + 1 problem), a starting value of 27 requires 3 × 37 = 111 steps to reach 1, more than any smaller number.
The next two larger numbers to require more steps are 54 and 55, where the fourteenth prime number (43) requires twenty-seven steps to reach 1.
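These step counts are easy to check directly; a minimal sketch in Python:

```python
def collatz_steps(n: int) -> int:
    """Number of steps for n to reach 1 under the 3n + 1 map."""
    steps = 0
    while n != 1:
        n = 3 * n + 1 if n % 2 else n // 2
        steps += 1
    return steps

print(collatz_steps(27))   # 111
print(collatz_steps(54))   # 112 -- the next record holders
print(collatz_steps(55))   # 112
```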
Including the null-motif, there are 27 distinct hypergraph motifs.
There are exactly twenty-seven straight lines on a smooth cubic surface, which give a basis of the 27-dimensional fundamental representation of the Lie algebra E6.
The exceptional Jordan algebra, the unique simple formally real Jordan algebra that is exceptional, consists of self-adjoint 3 × 3 matrices over the octonions and is 27-dimensional; its automorphism group is the 52-dimensional exceptional Lie group F4.
There are twenty-seven sporadic groups, if the Tits group (a non-strict group of Lie type) is included.
In Robin's theorem for the Riemann hypothesis, the inequality σ(n) < e^γ n log log n, where γ is the Euler–Mascheroni constant and σ(n) is the sum-of-divisors function, fails to hold for twenty-seven integers; the Riemann hypothesis is true if and only if this inequality holds for every larger n.
Base-specific
In base ten, if one cyclically rotates the digits of a three-digit number that is a multiple of 27, the new number is also a multiple of 27. For exa |
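This rotation property can be verified exhaustively over all three-digit multiples of 27; a minimal sketch:

```python
def rotations(n: int) -> list[int]:
    """Cyclic digit rotations of the three-digit representation of n."""
    s = f"{n:03d}"                      # e.g. 27 -> "027" -> 270 -> 702
    return [int(s[i:] + s[:i]) for i in range(3)]

assert all(r % 27 == 0
           for n in range(27, 1000, 27)
           for r in rotations(n))
print("verified for all three-digit multiples of 27")
```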
https://en.wikipedia.org/wiki/Heavy%20baryon%20chiral%20perturbation%20theory | Heavy baryon chiral perturbation theory (HBChPT) is an effective quantum field theory used to describe the interactions of pions and nucleons/baryons. It is an extension of chiral perturbation theory (ChPT), which describes only the low-energy interactions of pions; in a richer theory one would also like to describe the interactions of baryons with pions. A fully relativistic Lagrangian for nucleons is non-predictive because the baryon mass does not vanish in the chiral limit, so loop diagrams contribute at all chiral orders and spoil the power counting that organizes higher-order corrections.
Because the baryons are much heavier than the pions, HBChPT rests on a nonrelativistic description of the baryons relative to that of the pions. Higher-order terms in the HBChPT Lagrangian therefore come in at higher orders of 1/m_B, where m_B is the baryon mass.
Quantum chromodynamics |
https://en.wikipedia.org/wiki/Driver%20circuit | In electronics, a driver is a circuit or component used to control another circuit or component, such as a high-power transistor, liquid crystal display (LCD), stepper motors, SRAM memory, and numerous others.
They are usually used to regulate the current flowing through a circuit or to control other components or devices in the circuit. The term is often used, for example, for a specialized integrated circuit that controls high-power switches in switched-mode power converters. An amplifier can also be considered a driver for loudspeakers, as can a voltage regulator that keeps an attached component operating within a broad range of input voltages.
Typically the driver stage(s) of a circuit requires different characteristics to other circuit stages. For example, in a transistor power amplifier circuit, typically the driver circuit requires current gain, often the ability to discharge the following transistor bases rapidly, and low output impedance to avoid or minimize distortion.
In SRAM memory, driver circuits are used to rapidly discharge the necessary bit lines from a precharge level to the write margin or below.
See also
Hitachi HD44780 LCD controller |
https://en.wikipedia.org/wiki/Hearing%20loss%20with%20craniofacial%20syndromes | Hearing loss with craniofacial syndromes is a common occurrence. Many of these multianomaly disorders involve structural malformations of the outer or middle ear, making a significant hearing loss highly likely.
Treacher Collins syndrome
Individuals with Treacher Collins syndrome often have both cleft palate and hearing loss, in addition to other disabilities. Hearing loss is often secondary to absent, small or unusually formed ears (microtia) and commonly results from malformations of the middle ear. Researchers have found that most patients with Treacher Collins syndrome have symmetric external ear canal abnormalities and symmetrically dysmorphic or absent ossicles in the middle ear space. Inner ear structure is largely normal. Most patients show a moderate hearing impairment or greater, and the type of loss is generally a conductive hearing loss. Patients with Treacher Collins syndrome exhibit hearing losses similar to those of patients with malformed or missing ossicles (Pron et al., 1993).
Pierre Robin sequence
Persons with Pierre Robin sequence (PRS) are at greater risk for hearing impairment than persons with cleft lip and/or palate without PRS. One study found hearing loss in an average of 83% of individuals with PRS, compared to 60% of cleft individuals without PRS (Handzic et al., 1995). Similarly, PRS individuals typically exhibit conductive, bilateral hearing losses that are greater in degree than in cleft individuals without PRS. Middle ear effusion is generally apparent, with no middle ear or inner ear malformations. Accordingly, management by ear tubes (myringotomy tubes) is often effective and may restore normal levels of hearing (Handzic et al., 1995).
Stickler syndrome
The hearing loss most typical in patients with Stickler syndrome is a sensorineural hearing loss, indicating that the source of the deficit lies in the inner ear, the vestibulocochlear nerve or the processing centers of the brain. Szymko-Bennett et al. (2001) found that the overall hearing los |
https://en.wikipedia.org/wiki/Vehicle%20tracking%20system | A vehicle tracking system combines the use of automatic vehicle location in individual vehicles with software that collects these fleet data for a comprehensive picture of vehicle locations. Modern vehicle tracking systems commonly use GPS or GLONASS technology for locating the vehicle, but other types of automatic vehicle location technology can also be used. Vehicle information can be viewed on electronic maps via the Internet or specialized software. Urban public transit authorities are an increasingly common user of vehicle tracking systems, particularly in large cities.
Active versus passive tracking
Several types of vehicle tracking devices exist. Typically they are classified as "passive" and "active". "Passive" devices store GPS location, speed, heading and sometimes a trigger event such as key on/off or door open/closed. Once the vehicle returns to a predetermined point, the device is removed and the data downloaded to a computer for evaluation. Some passive systems support automatic download, transferring the stored data via a wireless link. "Active" devices collect the same information but usually transmit the data in near-real-time via cellular or satellite networks to a computer or data center for evaluation.
Many modern vehicle tracking devices combine both active and passive tracking abilities: when a cellular network is available and a tracking device is connected, it transmits data to a server; when a network is not available, the device stores data in internal memory and transmits the stored data to the server later, when the network becomes available again.
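A minimal sketch of this store-and-forward behaviour (hypothetical class and method names, not any vendor's API):

```python
from collections import deque

class Uplink:
    """Stub network link; a real device would wrap a cellular or satellite modem."""
    def __init__(self):
        self.online = False
        self.received = []
    def is_available(self) -> bool:
        return self.online
    def send(self, fix) -> None:
        self.received.append(fix)

class TrackerUnit:
    """Hybrid tracker: sends fixes in near-real-time when the network is up,
    otherwise buffers them in memory and forwards them later (store-and-forward)."""
    def __init__(self, uplink: Uplink):
        self.uplink = uplink
        self.buffer = deque()            # fixes recorded while offline
    def record_fix(self, fix) -> None:
        self.buffer.append(fix)          # fix = position, speed, heading, events
        self.flush()
    def flush(self) -> None:
        # Drain buffered fixes oldest-first so chronological order is preserved.
        while self.buffer and self.uplink.is_available():
            self.uplink.send(self.buffer.popleft())

link, unit = Uplink(), TrackerUnit(link)
unit.record_fix("fix-1")                 # offline: buffered
link.online = True
unit.record_fix("fix-2")                 # online: both fixes are forwarded
print(link.received)                     # ['fix-1', 'fix-2']
```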
Historically, vehicle tracking has been accomplished by installing a box into the vehicle, either self-powered with a battery or wired into the vehicle's power system. For detailed vehicle locating and tracking this is still the predominant method; however, many companies are increasingly interested in the emerging cell phone technologies that provide tracking of multiple entities, such as both a salesp |
https://en.wikipedia.org/wiki/ProSyst | ProSyst Software GmbH was founded in Cologne in 1997 as a company specializing in Java software and middleware. ProSyst's first commercial application was a Java EE application server. In 2000, the company sold this server technology and has since focused completely on OSGi solutions.
In 1999, ProSyst was among the first companies to join the OSGi Alliance and since then has made important contributions to the development of each release of OSGi specifications (Release 1–4). ProSyst is a member of the OSGi Alliance board of directors alongside IBM, Nokia, NTT, Siemens, Oracle Corporation, Samsung, Motorola and Telcordia. Additionally, members of ProSyst staff serve in several positions on the OSGi Alliance.
In recent years ProSyst set its focus exclusively on the development of OSGi-related software such as frameworks, bundles, remote management systems and OSGi tools for developers, including a full SDK available for download. ProSyst's OSGi applications are used in smart home devices, by mobile phone manufacturers, network equipment providers (in CPEs), white goods manufacturers, car manufacturers and in the eHealth market.
ProSyst employs more than 120 Java and OSGi experts and offers OSGi-related training, support (SLAs), technical consulting and development services.
As a member, ProSyst contributes to OSGi, Eclipse, Java Community Process, Nokia Forum Pro and the CVTA Connected Vehicle Trade Association.
ProSyst was acquired by Bosch in February 2015 and merged into Bosch Group's software and systems unit, Bosch Software Innovations GmbH.
Notable products
Commercial off-the-shelf products around OSGi mBS
Reduced-size Java client from 1999 |
https://en.wikipedia.org/wiki/GC%20skew | GC skew is the over- or under-abundance of the nucleotides guanine and cytosine in a particular region of DNA or RNA. GC skew is also a statistical method for measuring strand-specific guanine overrepresentation.
In equilibrium conditions (without mutational or selective pressure and with nucleotides randomly distributed within the genome) there is an equal frequency of the four DNA bases (adenine, guanine, thymine, and cytosine) on both single strands of a DNA molecule. However, in most bacteria (e.g. E. coli) and some archaea (e.g. Sulfolobus solfataricus), nucleotide compositions are asymmetric between the leading strand and the lagging strand: the leading strand contains more guanine (G) and thymine (T), whereas the lagging strand contains more adenine (A) and cytosine (C). This phenomenon is referred to as GC and AT skew and the corresponding statistics were defined as:
GC skew = (G − C)/(G + C)
AT skew = (A − T)/(A + T)
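In practice these statistics are computed over sliding windows along a sequence; a minimal sketch using the definitions above:

```python
def skew_profile(seq: str, window: int = 1000, step: int = 1000) -> list[float]:
    """GC skew = (G - C) / (G + C), computed per window along a sequence."""
    values = []
    for i in range(0, len(seq) - window + 1, step):
        w = seq[i:i + window].upper()
        g, c = w.count("G"), w.count("C")
        values.append((g - c) / (g + c) if g + c else 0.0)
    return values

# Toy sequence: G-rich first half, C-rich second half, so the skew changes
# sign mid-sequence (as it does near bacterial replication origins/termini).
toy = "ATGGGTGGAT" * 200 + "ATCCCTCCAT" * 200
print(skew_profile(toy, window=500, step=500))
```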
Asymmetric nucleotide composition
Erwin Chargaff's work in 1950 demonstrated that, in DNA, the bases guanine and cytosine were found in equal abundance, and the bases adenine and thymine were found in equal abundance. However, there was no equality between the amount of one pair versus the other. Chargaff's finding is referred to as Chargaff's rule or parity rule 1. Three years later, Watson and Crick used this fact during their derivation of the structure of DNA, their double helix model.
A natural result of parity rule 1, at the state of equilibrium, in which there is no mutation and/or selection biases in any of the two DNA strands, is that when there is an equal substitution rate, the complementary nucleotides on each strand have equal amounts of a given base and its complement. In other words, in each DNA strand the frequency of the occurrence of T is equal to A and the frequency of the occurrence of G is equal to C because the substitution rate is presumably equal. This phenomenon is referred to as parity rule 2. Hence, the secon |
https://en.wikipedia.org/wiki/Mercury%20telluride | Mercury telluride (HgTe) is a binary chemical compound of mercury and tellurium. It is a semi-metal related to the II-VI group of semiconductor materials. Alternative names are mercuric telluride and mercury(II) telluride.
HgTe occurs in nature as the mineral form coloradoite.
Physical properties
All properties are at standard temperature and pressure unless stated otherwise. The lattice parameter of the cubic crystalline form is about 0.646 nm. The bulk modulus is about 42.1 GPa. The thermal expansion coefficient is about 5.2×10⁻⁶/K. The static dielectric constant is 20.8 and the dynamic dielectric constant is 15.1. Thermal conductivity is low, at 2.7 W/(m·K). HgTe bonds are weak, leading to low hardness values; the hardness is about 2.7×10⁷ kg/m².
Doping
N-type doping can be achieved with elements such as boron, aluminium, gallium, or indium. Iodine and iron will also dope n-type. HgTe is naturally p-type due to mercury vacancies. P-type doping is also achieved by introducing zinc, copper, silver, or gold.
Topological insulation
Mercury telluride was the first topological insulator discovered, in 2007. Topological insulators cannot support an electric current in the bulk, but electronic states confined to the surface can serve as charge carriers.
Chemistry
HgTe bonds are weak. The enthalpy of formation, around −32 kJ/mol, is less than a third of the value for the related compound cadmium telluride. HgTe is easily etched by acids, such as hydrobromic acid.
Growth
Bulk growth is from a mercury and tellurium melt in the presence of a high mercury vapour pressure. HgTe can also be grown epitaxially, for example, by sputtering or by metalorganic vapour phase epitaxy.
Nanoparticles of mercury telluride can be obtained via cation exchange from cadmium telluride nanoplatelets.
See also
Cadmium telluride
Mercury selenide
Mercury cadmium telluride |
https://en.wikipedia.org/wiki/Biosocial%20criminology | Biosocial criminology is an interdisciplinary field that aims to explain crime and antisocial behavior by exploring biocultural factors. While contemporary criminology has been dominated by sociological theories, biosocial criminology also recognizes the potential contributions of fields such as behavioral genetics, neuropsychology, and evolutionary psychology.
Approaches
Environment
Environment has a significant effect on genetic expression. Disadvantaged environments enhance antisocial gene expression, suppress prosocial gene action and prevent the realization of genetic potential.
Behavioral genetics
One approach to studying the role of genetics for crime is to calculate the heritability coefficient, which describes the proportion of the variance that is due to actualized genetic effects for some trait in a given population in a specific environment at a specific time. According to Kevin Beaver and Anthony Walsh, the heritability coefficient for antisocial behavior is estimated to be between 0.40 and 0.58.
The methodology often used in biosocial criminology (that of twin studies) has been criticized for producing inflated heritability estimates, though biosocial criminologists maintain that these criticisms are baseless. Criminal justice researchers Brian Boutwell and J.C. Barnes argue that many sociological studies that do not control for genetic inheritance of risk factors have misleading or unreliable results.
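For illustration, the simplest twin-study estimator is Falconer's formula, which doubles the difference between monozygotic and dizygotic twin correlations. The correlations below are made-up values, chosen only so the estimate lands in the range quoted above:

```python
def falconer_h2(r_mz: float, r_dz: float) -> float:
    """Falconer's estimate of heritability: h^2 = 2 * (r_MZ - r_DZ)."""
    return 2 * (r_mz - r_dz)

# Hypothetical twin correlations for an antisocial-behavior measure.
print(falconer_h2(r_mz=0.60, r_dz=0.35))   # 0.50
```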
Neurophysiology
Another approach is to examine the relationship between neurophysiology and criminality. One example is that measured levels of neurotransmitters such as serotonin and dopamine have been associated with criminal behavior. Another is that neuroimaging studies give strong evidence that both brain structure and function are involved in criminal behaviors. The limbic system creates emotions such as anger and jealousy that ultimately may cause criminal behavior. The prefrontal cortex is involved in delaying gratification and impulse cont |
https://en.wikipedia.org/wiki/Hexazinone | Hexazinone is an organic compound that is used as a broad-spectrum herbicide. It is a colorless solid. It exhibits some solubility in water but is highly soluble in most organic solvents except alkanes. A member of the triazine class of herbicides, it is manufactured by DuPont and sold under the trade name Velpar.
It functions by inhibiting photosynthesis and thus is a nonselective herbicide. It is used to control grasses, broadleaf, and woody plants. In the United States approximately 33% is used on alfalfa, 31% in forestry, 29% in industrial areas, 4% on rangeland and pastures, and < 2% on sugarcane.
Hexazinone is a pervasive groundwater contaminant. Because it leaches readily, its use puts groundwater at high risk of contamination.
History
Hexazinone is widely used as a herbicide. It is a non-selective herbicide from the triazine family, used to control weeds in a broad range of settings, from sugarcane plantations, forestry field nurseries and pineapple plantations to highway and railway verges and industrial plant sites.
Hexazinone was first registered in 1975 for the overall control of weeds and later for uses in crops.
Structure and reactivity
Triazines like hexazinone bind to the D-1 quinone protein of the electron transport chain in photosystem II, inhibiting photosynthesis. The diverted electrons can thereby damage membranes and destroy cells.
Synthesis
Hexazinone can be synthesized by two different reaction processes. One process starts with a reaction of methyl chloroformate with cyanamide, forming hexazinone after a five-step pathway. A second synthesis starts with methylthiourea.
Degradation
The degradation of hexazinone has long been studied. It degrades approximately 10% in five weeks, when exposed to artificial sunlight in distilled water. However, degradation in natural waters can be three to seven times greater. Surprisingly, the pH and the |
https://en.wikipedia.org/wiki/Caristi%20fixed-point%20theorem | In mathematics, the Caristi fixed-point theorem (also known as the Caristi–Kirk fixed-point theorem) generalizes the Banach fixed-point theorem for maps of a complete metric space into itself. Caristi's fixed-point theorem modifies the ε-variational principle of Ekeland (1974, 1979). The conclusion of Caristi's theorem is equivalent to metric completeness, as proved by Weston (1977).
The original result is due to the mathematicians James Caristi and William Arthur Kirk.
The Caristi fixed-point theorem can be applied to derive other classical fixed-point results, and also to prove the existence of bounded solutions of a functional equation.
Statement of the theorem
Let (X, d) be a complete metric space, let T : X → X be a mapping, and let f : X → [0, +∞) be a lower semicontinuous function from X into the non-negative real numbers. Suppose that, for all points x in X,
d(x, T(x)) ≤ f(x) − f(T(x)).
Then T has a fixed point in X; that is, a point x₀ such that T(x₀) = x₀. The proof of this result utilizes Zorn's lemma to guarantee the existence of a minimal element which turns out to be a desired fixed point. |
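As a quick illustration of how the theorem recovers classical results, the Banach fixed-point theorem follows by a standard choice of f (a sketch, for a contraction T with constant 0 ≤ q < 1):

```latex
f(x) := \frac{d(x, T(x))}{1 - q}
\qquad\Longrightarrow\qquad
f(x) - f(T(x))
  = \frac{d(x, T(x)) - d(T(x), T^{2}(x))}{1 - q}
  \ge \frac{(1 - q)\, d(x, T(x))}{1 - q}
  = d(x, T(x)),
```

since d(T(x), T²(x)) ≤ q·d(x, T(x)); thus the Caristi condition holds (f is continuous, hence lower semicontinuous), and T has a fixed point.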
https://en.wikipedia.org/wiki/APE100 | APE100 was a family of SIMD supercomputers developed by the Istituto Nazionale di Fisica Nucleare (INFN, "National Institute for Nuclear Physics") in Italy between 1989 and 1994. The systems were developed to study the structure of elementary particles by means of lattice gauge theories, especially quantum chromodynamics.
APE ("ah-pei"), an acronym for Array Processor Experiment, was the collective name of several generations of massively parallel supercomputers since 1984, optimized for theoretical physics simulations. The APE machines were massively parallel 3D arrays of custom computing nodes with periodic boundary conditions.
APE100 was developed at INFN in Rome and Pisa under the direction of Nicola Cabibbo. Each node was capable of 50 MFLOPS, so the complete configuration with 2,048 nodes had a performance of about 100 gigaflops. In 1991, it became the most powerful supercomputer in the world.
A version of APE100 has been marketed by Alcatel Alenia Space under the name of Quadrics. After 1994 the project at INFN was continued with the new names APEmille and ApeNext. |
https://en.wikipedia.org/wiki/Computational%20number%20theory | In mathematics and computer science, computational number theory, also known as algorithmic number theory, is the study of
computational methods for investigating and solving problems in number theory and arithmetic geometry, including algorithms for primality testing and integer factorization, finding solutions to diophantine equations, and explicit methods in arithmetic geometry.
Computational number theory has applications to cryptography, including RSA, elliptic curve cryptography and post-quantum cryptography, and is used to investigate conjectures and open problems in number theory, including the Riemann hypothesis, the Birch and Swinnerton-Dyer conjecture, the ABC conjecture, the modularity conjecture, the Sato-Tate conjecture, and explicit aspects of the Langlands program.
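As a concrete instance of the primality-testing algorithms mentioned above, here is a minimal Miller–Rabin test; with this fixed set of witness bases it is known to be deterministic for all n below roughly 3.3 × 10²⁴:

```python
def is_prime(n: int) -> bool:
    """Miller-Rabin test, deterministic below ~3.3e24 with these bases."""
    if n < 2:
        return False
    bases = (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37)
    for p in bases:
        if n % p == 0:
            return n == p
    d, s = n - 1, 0                     # write n - 1 = d * 2**s with d odd
    while d % 2 == 0:
        d, s = d // 2, s + 1
    for a in bases:
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = x * x % n
            if x == n - 1:
                break
        else:
            return False                # a is a witness: n is composite
    return True

print(is_prime(2**61 - 1))              # True: a Mersenne prime
```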
Software packages
Magma computer algebra system
SageMath
Number Theory Library
PARI/GP
Fast Library for Number Theory
Further reading |
https://en.wikipedia.org/wiki/Chalaza | The chalaza (; from Greek "hailstone"; plural chalazas or chalazae, ) is a structure inside bird eggs and plant ovules. It attaches or suspends the yolk or nucellus within the larger structure.
In animals
In the eggs of most birds (but not of reptiles), the chalazae are two spiral bands of tissue that suspend the yolk in the center of the white (the albumen). The function of the chalazae is to hold the yolk in place. In baking, the chalazae are sometimes removed in order to ensure a uniform texture.
In plants
In plant ovules, the chalaza is located opposite the micropyle opening of the integuments. It is the tissue where the integuments and nucellus are joined. Nutrients from the plant travel through vascular tissue in the funiculus and outer integument through the chalaza into the nucellus. During the development of the embryo sac inside a flowering plant ovule, the three cells at the chalazal end become the antipodal cells.
Chalazogamy
In most flowering plants, the pollen tube enters the ovule through the micropyle opening in the integuments for fertilization (porogamy). In chalazogamous fertilization, the pollen tubes penetrate the ovule through the chalaza rather than the micropyle opening.
Chalazogamy was first discovered in monoecious plant species of the family Casuarinaceae by Melchior Treub, but has since then also been observed in others, for example in pistachio and walnut. |
https://en.wikipedia.org/wiki/F-15%20Strike%20Eagle%20%28video%20game%29 | F-15 Strike Eagle is a combat flight simulator based on the F-15 Strike Eagle aircraft, originally released for the Atari 8-bit family in 1984 by MicroProse and then ported to other systems. It is the first in the F-15 Strike Eagle series, followed by F-15 Strike Eagle II and F-15 Strike Eagle III. An arcade version was released simply as F-15 Strike Eagle in 1991; it uses higher-end hardware than was available in home systems, including the TMS34010 graphics-oriented CPU.
Gameplay
The game begins with the player selecting Libya (much like Operation El Dorado Canyon), the Persian Gulf, or Vietnam as a mission theater. Play then begins from the cockpit of an F-15 already in flight and equipped with a variety of missiles, bombs, drop tanks, flares and chaff. The player flies the plane in combat to bomb various targets including a primary and secondary target while also engaging in air-to-air combat with enemy fighters.
The game ends when either the player's plane crashes, is destroyed, or when the player returns to base.
Ports
The game was first released for the Atari 8-bit family, with ports appearing from 1985 to 1987 for the Apple II, Commodore 64, ZX Spectrum, MSX, and Amstrad CPC. It was also ported to the IBM PC as a self-booting disk, one of the first games MicroProse released for IBM compatibles. The initial IBM release came on a 5.25" floppy disk and supported only CGA graphics; a revised version in 1986 was offered on 3.5" disks and added limited EGA support (the ability to change color palettes if an EGA card was present).
Versions for the Game Boy, Game Gear, and NES were published in the early 1990s.
Reception
F-15 Strike Eagle was a commercial blockbuster. It sold 250,000 copies by March 1987, and surpassed 1 million units in 1989. It ultimately reached over 1.5 million sales overall, and was MicroProse's best-selling Commodore game as of late 1987. Computer Gaming World in 1984 called F-15 "an excellent simulation" |
https://en.wikipedia.org/wiki/X-ray%20microtomography | In radiography, X-ray microtomography uses X-rays to create cross-sections of a physical object that can be used to recreate a virtual model (3D model) without destroying the original object. It is similar to tomography and X-ray computed tomography. The prefix micro- (symbol: µ) is used to indicate that the pixel sizes of the cross-sections are in the micrometre range. These pixel sizes have also resulted in creation of its synonyms high-resolution X-ray tomography, micro-computed tomography (micro-CT or µCT), and similar terms. Sometimes the terms high-resolution computed tomography (HRCT) and micro-CT are differentiated, but in other cases the term high-resolution micro-CT is used. Virtually all tomography today is computed tomography.
Micro-CT has applications both in medical imaging and in industrial computed tomography. In general, there are two types of scanner setups. In one setup, the X-ray source and detector are typically stationary during the scan while the sample/animal rotates. The second setup, much more like a clinical CT scanner, is gantry based where the animal/specimen is stationary in space while the X-ray tube and detector rotate around. These scanners are typically used for small animals (in vivo scanners), biomedical samples, foods, microfossils, and other studies for which minute detail is desired.
The first X-ray microtomography system was conceived and built by Jim Elliott in the early 1980s. The first published X-ray microtomographic images were reconstructed slices of a small tropical snail, with pixel size about 50 micrometers.
Working principle
Imaging system
Fan beam reconstruction
The fan-beam system is based on a one-dimensional (1D) X-ray detector and an electronic X-ray source, creating 2D cross-sections of the object. This geometry is typically used in human computed tomography systems.
Cone beam reconstruction
The cone-beam system is based on a 2D X-ray detector (camera) and an electronic X-ray source, creating projection images that late |
https://en.wikipedia.org/wiki/Sonic%20philosophy | Sonic philosophy or the philosophy of sound is a philosophical theory that proposes thinking sonically instead of thinking about sound. It is applied in ontology or the investigation of being and the determination of what exists. The materialist sonic philosophy is also considered part of aesthetic philosophy and traces the effect of sound on philosophy and draws from the notion that sound is a flux, event, and effect.
Background
Scholars cite the role of the naturalistic philosophies of Friedrich Nietzsche and Arthur Schopenhauer in the development of sonic philosophy. Both maintained that music and sound directly figure the world as it is in itself and that they serve as the primary forces and movements behind all natural change, tension, creation, and destruction. The latter's notion of music considers it as a direct expression of the will. In his early unpublished writings, Nietzsche wrote that the concept of the philosopher involves his attempts "to let all sounds of the world reverberate in him and to place this comprehensive sound outside himself into concepts". These thinkers' positions underscored the philosophical importance of sound as they articulate the presentation of an ontology that unsettles the ordinary conception of things. This ontology allows the investigation of being and the determination of what things exist.
Modern sound philosophy also emerged out of philosophical aesthetics as scholars address the question of whether sounds and sounding artworks can be treated in the same way as other arts (e.g. visual arts) are approached.
Sonic philosophy is considered in opposition to the Kantian philosophy of humanism since it challenges the suggestion that the world is only "for-us" and mediated by discourse. A modern conceptualization articulated the philosophy as based on the idea that sound is a flux, event, and effect. It is also part of a contemporary project that rejects the essentialist and phenomenological approach to sonic theory.
Soni |
https://en.wikipedia.org/wiki/Parametric%20equation | In mathematics, a parametric equation defines a group of quantities as functions of one or more independent variables called parameters. Parametric equations are commonly used to express the coordinates of the points that make up a geometric object such as a curve or surface, called a parametric curve and parametric surface, respectively. In such cases, the equations are collectively called a parametric representation, or parametric system, or parameterization (alternatively spelled as parametrisation) of the object.
For example, the equations
x = cos t, y = sin t
form a parametric representation of the unit circle, where t is the parameter: a point (x, y) is on the unit circle if and only if there is a value of t such that these two equations generate that point. Sometimes the parametric equations for the individual scalar output variables are combined into a single parametric equation in vectors:
(x, y) = (cos t, sin t).
Parametric representations are generally nonunique (see the "Examples in two dimensions" section below), so the same quantities may be expressed by a number of different parameterizations.
In addition to curves and surfaces, parametric equations can describe manifolds and algebraic varieties of higher dimension, with the number of parameters being equal to the dimension of the manifold or variety, and the number of equations being equal to the dimension of the space in which the manifold or variety is considered (for curves the dimension is one and one parameter is used, for surfaces dimension two and two parameters, etc.).
Parametric equations are commonly used in kinematics, where the trajectory of an object is represented by equations depending on time as the parameter. Because of this application, a single parameter is often labeled t; however, parameters can represent other physical quantities (such as geometric variables) or can be selected arbitrarily for convenience. Parameterizations are non-unique; more than one set of parametric equations can specify the same curve.
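A short numerical illustration of the unit-circle example above, together with a second parameterization of the same curve (demonstrating non-uniqueness; the rational form below misses the single point (−1, 0)):

```python
import math

def circle_trig(t: float) -> tuple[float, float]:
    """Standard parameterization: (cos t, sin t), t in [0, 2*pi)."""
    return (math.cos(t), math.sin(t))

def circle_rational(u: float) -> tuple[float, float]:
    """Rational parameterization of the same circle, u ranging over the reals."""
    d = 1 + u * u
    return ((1 - u * u) / d, 2 * u / d)

for x, y in (circle_trig(1.0), circle_rational(0.5)):
    print(x * x + y * y)   # both print 1.0: the points lie on the unit circle
```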
Applications
|
https://en.wikipedia.org/wiki/Quality%20%26%20Quantity | Quality & Quantity is an interdisciplinary double-blind peer-reviewed academic journal dealing with methodological issues in the fields of economics, psychology and sociology, mathematics, and statistics. The journal is published by Springer Science+Business Media. |
https://en.wikipedia.org/wiki/Succinic%20acid%20fermentation | Microbial production of succinic acid can be performed with wild bacteria like Actinobacillus succinogenes, Mannheimia succiniciproducens and Anaerobiospirillum succiniciproducens, or with genetically modified Escherichia coli, Corynebacterium glutamicum and Saccharomyces cerevisiae. Understanding the central carbon metabolism of these organisms is crucial in determining the maximum obtainable yield of succinic acid on the carbon source employed as substrate.
Metabolic pathways
Neglecting the carbon utilised for biomass formation (known to be a small fraction of the total carbon utilised), basic biochemistry balances can be performed based on the established metabolic pathways of these organisms. Using glucose as substrate, the natural succinic acid producers are considered first. These organisms use the excretion of acetic acid (and sometimes formic acid) to balance the NADH requirement of succinic acid production. Two possible paths exist, as indicated in Figure 1 and Figure 2. The difference between the two pathways lies in the pyruvate oxidation step, where pyruvate formate lyase is employed in Figure 1 and pyruvate dehydrogenase in Figure 2. The additional NADH generated in Figure 2 results in 66% of the molar glucose flux ending up as succinic acid, compared to 50% in Figure 1. The overall yields can be expressed on a mass basis: the pathway in Figure 1 gives 0.66 gram succinic acid per gram of glucose consumed (g/g), while the pathway in Figure 2 gives 0.87 g/g.
The metabolic pathway can be genetically engineered in order to have succinic acid as the only excretion product. This can be achieved by using the oxidative section of the tricarboxylic acid cycle (TCA) under anaerobic conditions as illustrated in Figure 3. Alternatively the glyoxylate bypass can be utilised (Figure 4) to give the same result. For both these scenarios the mass based succinic acid yield is 1.12 g/g. This implies that the theoretical maximum yi |
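The quoted mass yields follow from the molar ratios and the molecular weights of glucose (about 180.16 g/mol) and succinic acid (about 118.09 g/mol); the molar ratios below are those implied by the yields stated in the text. A quick check:

```python
MW_GLUCOSE, MW_SUCCINATE = 180.16, 118.09

def mass_yield(mol_succinate_per_mol_glucose: float) -> float:
    """Grams of succinic acid per gram of glucose for a given molar ratio."""
    return mol_succinate_per_mol_glucose * MW_SUCCINATE / MW_GLUCOSE

print(mass_yield(1.0))      # ~0.66 g/g, pyruvate formate lyase route (Figure 1)
print(mass_yield(4 / 3))    # ~0.87 g/g, pyruvate dehydrogenase route (Figure 2)
print(mass_yield(12 / 7))   # ~1.12 g/g, theoretical maximum (Figures 3 and 4)
```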
https://en.wikipedia.org/wiki/List%20of%20cat%20body-type%20mutations | Cats, like all living organisms, occasionally have mutations that affect their body type. Sometimes, these mutations are striking enough that humans select for and perpetuate them. However, in relatively small or isolated feral cat populations the mutations can also spread without human intervention, for example on islands. Cat breeders exploit the naturally occurring mutations by selectively breeding them in a small gene pool, resulting in the creation of new cat breeds with unusual physical characteristics. The term designer cats is often used to refer to these cat breeds. This is not always in the best interests of the cat, as many of these mutations are harmful; some are even lethal in their homozygous form. To protect the animal’s welfare it is illegal in several countries or states to breed with parent cats that bear certain of these hypertype mutations.
This article gives a selection of cat body type mutant alleles and the associated mutations with a brief description.
Tail types
Jb Japanese Bobtail gene (autosomal dominant). Cats homozygous and heterozygous for this gene display shortened and kinked tails. Cats homozygous for the gene tend to have shorter, more kinked tails. This can be distinguished phenotypically from the Manx cat mutation by the presence of kinking in the tail, often forming what looks like a knot at the distal end of the tail. Unlike the Manx tailless gene, there are no associated skeletal disorders and the gene is not associated with lethality.
M Manx tailless gene (dominant with high penetrance). Cats with the homozygous genotype (MM) die before birth, and stillborn kittens show gross abnormalities of the central nervous system. Cats with the heterozygous genotype (Mm) show severely shortened tail length, ranging from taillessness to a partial, stumpy tail. Some Manx cats die before 12 months old and exhibit skeletal and organ defects. Because it was discovered in naturally occurring populations of cats, the Manx gene could con |
https://en.wikipedia.org/wiki/Fujitsu | is a Japanese multinational information and communications technology equipment and services corporation, established in 1935 and headquartered in Tokyo. It is the world's sixth-largest IT services provider by annual revenue, and the largest in Japan, in 2021. The hardware offerings from Fujitsu are mainly of personal and enterprise computing products, including x86, SPARC and mainframe compatible server products, although the corporation and its subsidiaries also offer a diversity of products and services in the areas of data storage, telecommunications, advanced microelectronics, and air conditioning. It has approximately 126,400 employees and its products and services are available in approximately 180 countries.
Fujitsu is listed on the Tokyo Stock Exchange and Nagoya Stock Exchange; its Tokyo listing is a constituent of the Nikkei 225 and TOPIX 100 indices.
History
1935 to 2000
Fujitsu was established on June 20, 1935, making it one of the oldest operating IT companies after IBM and before Hewlett-Packard. It was founded under the name Fuji Telecommunications Equipment Manufacturing as a spin-off of the Fuji Electric Company, itself a joint venture formed in 1923 between the Furukawa Electric Company and the German conglomerate Siemens. Despite its connections to the Furukawa zaibatsu, Fujitsu escaped the Allied occupation of Japan after the Second World War mostly unscathed.
In 1954, Fujitsu manufactured Japan's first computer, the FACOM 100 mainframe, and in 1961 launched its second-generation (transistorized) computers, the FACOM 222 mainframe. The 1968 FACOM230 "5" Series marked the beginning of its third-generation computers. Fujitsu offered mainframe computers from 1955 until at least 2002. Fujitsu's computer products have also included minicomputers, small business computers, servers and personal computers.
In 1955, Fujitsu founded Kawasaki Frontale as a company football club; Kawasaki Frontale has been a J. League football club since 1999. In 1967, the company's name was officially changed |
https://en.wikipedia.org/wiki/Sociedade%20Brasileira%20de%20Zoologia | The Sociedade Brasileira de Zoologia (), founded in 1978, is a scientific society devoted to Zoology. It publishes the journal Zoologia.
List of presidents
1978–1980: José Cândido de Melo Carvalho
1980–1982: José Willibaldo Thomé
1982–1988: Nelson Papavero
1988–1990: Renato Contin Marinoni
1990–1992: Adriano Lúcio Peracchi
1992–1996: Jayme de Loyola e Silva
1996–2004: Olaf H. H. Mielke
2004–2008: Mário Antonio Navarro da Silva
2008–2012: Rodney Ramiro Cavichioli
2012–2016: Rosana Moreira da Rocha
2016–2018: Luciane Marinoni |
https://en.wikipedia.org/wiki/CACNG2 | Calcium channel, voltage-dependent, gamma subunit 2, also known as CACNG2 or stargazin is a protein that in humans is encoded by the CACNG2 gene.
Function
L-type calcium channels are composed of five subunits. The protein encoded by this gene represents one of these subunits, gamma, and is one of several gamma subunit proteins. It is an integral membrane protein that is thought to stabilize the calcium channel in an inactive (closed) state. This protein is similar to the mouse stargazin protein, mutations in which have been associated with absence seizures, also known as petit-mal or spike-wave seizures. This gene is a member of the neuronal calcium channel gamma subunit gene subfamily of the PMP-22/EMP/MP20 family.
Stargazin is involved in the transport of AMPA receptors to the synaptic membrane and, via its extracellular domain, in the regulation of their receptor rate constants once they are there. As it is highly expressed throughout the cerebral cortex, it is likely to have an important role in learning within these areas, due to the importance of AMPA receptors in LTP.
Clinical significance
Disruptions of CACNG2 have been implicated in autism.
Interactions
CACNG2 has been shown to interact with GRIA4, DLG4, and MAGI2.
See also
Voltage-dependent calcium channel |
https://en.wikipedia.org/wiki/Calutron | A calutron is a mass spectrometer originally designed and used for separating the isotopes of uranium. It was developed by Ernest Lawrence during the Manhattan Project and was based on his earlier invention, the cyclotron. Its name was derived from California University Cyclotron, in tribute to Lawrence's institution, the University of California, where it was invented. Calutrons were used in the industrial-scale Y-12 uranium enrichment plant at the Clinton Engineer Works in Oak Ridge, Tennessee. The enriched uranium produced was used in the Little Boy atomic bomb that was detonated over Hiroshima on 6 August 1945.
The calutron is a type of sector mass spectrometer, an instrument in which a sample is ionized and then accelerated by electric fields and deflected by magnetic fields. The ions ultimately collide with a plate and produce a measurable electric current. Since the ions of the different isotopes have the same electric charge but different masses, the heavier isotopes are deflected less by the magnetic field, causing the beam of particles to separate into several beams by mass, striking the plate at different locations. The mass of the ions can be calculated according to the strength of the field and the charge of the ions. During World War II, calutrons were developed to use this principle to obtain substantial quantities of high-purity uranium-235, by taking advantage of the small mass difference between uranium isotopes.
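The separation can be estimated from the standard sector-spectrometer relations: an ion of mass m and charge q accelerated through a potential V enters the field B with qV = mv²/2 and turns with radius r = mv/(qB). The numbers below are illustrative only, not the historical operating parameters:

```python
import math

E = 1.602176634e-19   # elementary charge, C
U = 1.66053907e-27    # atomic mass unit, kg

def radius(m_kg: float, q: float, volts: float, b_tesla: float) -> float:
    """Orbit radius of an ion accelerated through `volts` into field `b_tesla`."""
    v = math.sqrt(2 * q * volts / m_kg)   # speed from qV = m v^2 / 2
    return m_kg * v / (q * b_tesla)       # r = m v / (q B)

# Illustrative values: singly charged ions, 35 kV acceleration, 0.34 T field.
r235 = radius(235.04 * U, E, 35e3, 0.34)
r238 = radius(238.05 * U, E, 35e3, 0.34)
print(r235, r238)            # orbit radii of roughly 1.2 m
print(2 * (r238 - r235))     # beam separation after a half-turn (~1.5 cm here)
```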
Electromagnetic separation for uranium enrichment was abandoned in the post-war period in favor of the more complicated, but more efficient, gaseous diffusion method. Although most of the calutrons of the Manhattan Project were dismantled at the end of the war, some remained in use to produce isotopically enriched samples of naturally occurring elements for military, scientific and medical purposes.
Origins
News of the discovery of nuclear fission by German chemists Otto Hahn and Fritz Strassmann in 1938, and its theoretical explanatio |
https://en.wikipedia.org/wiki/Magneto-electric%20spin-orbit | Magneto-electric spin-orbit (MESO) is a technology proposed by Intel for constructing scalable integrated circuits. It operates on a different principle than CMOS devices such as MOSFETs, yet is compatible with CMOS device manufacturing techniques and machinery.
MESO devices operate by coupling the magnetoelectric effect with spin-orbit coupling. Specifically, the magnetoelectric effect induces a change in magnetization within the device in response to an applied electric field, which is then read out by the spin-orbit coupling component, which converts it into an electric charge. This mechanism is analogous to how a CMOS device operates, with the source, gate and drain electrodes working together to form a logic gate.
As of 2020, the technology is under development by Intel and the University of California, Berkeley. The first experiment, conducted in 2020 at nanoGUNE, proved that spin-orbit coupling could be used for implementing MESO.
Performance
Before the introduction of MESO, Intel evaluated 17 different device architectures for beyond CMOS scaling which aims to circumvent scaling challenges present with CMOS devices such as MOSFETs used in integrated circuits. These architectures were made with production processes compatible with those used for CMOS devices since some CMOS devices are still necessary for interfacing with other circuits and for providing the clock signal for an integrated circuit, and for reusing existing production equipment: Tunneling FETs, graphene p-n junctions, ITFETs, BisFET, spinFETs, all spin logic, spin torque oscillators, domain wall logic, spin torque majority, spin torque triad, spin wave device, nano magnet logic, charge spin logic, piezo FETs, MITFETs, FeFETs and negative capacitance FETs were tested and it was found that none offered both improved performance characteristics and lower power consumption compared with CMOS. According to VentureBeat, simulations showed that, on a 32-bit ALU, MESO de |
https://en.wikipedia.org/wiki/Dissipative%20operator | In mathematics, a dissipative operator is a linear operator A defined on a linear subspace D(A) of a Banach space X, taking values in X, such that for all λ > 0 and all x ∈ D(A): ‖(λI − A)x‖ ≥ λ‖x‖.
A couple of equivalent definitions are given below. A dissipative operator is called maximally dissipative if it is dissipative and for all λ > 0 the operator λI − A is surjective, meaning that the range when applied to the domain D is the whole of the space X.
An operator that obeys a similar condition but with a plus sign instead of a minus sign (that is, the negation of a dissipative operator) is called an accretive operator.
The main importance of dissipative operators is their appearance in the Lumer–Phillips theorem which characterizes maximally dissipative operators as the generators of contraction semigroups.
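In the Hilbert space case (anticipating the characterization given below), A is dissipative exactly when Re⟨Ax, x⟩ ≤ 0 for all x in D(A); the defining inequality then follows in one line from the Cauchy–Schwarz inequality (a standard computation, sketched here):

```latex
\lambda \|x\|^{2}
  \;\le\; \lambda \|x\|^{2} - \operatorname{Re}\langle Ax,\, x\rangle
  \;=\; \operatorname{Re}\langle (\lambda I - A)x,\, x\rangle
  \;\le\; \|(\lambda I - A)x\|\,\|x\|,
```

and dividing by ‖x‖ gives ‖(λI − A)x‖ ≥ λ‖x‖.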
Properties
A dissipative operator has the following properties:
From the inequality given above, we see that for any x in the domain of A, if ‖x‖ ≠ 0 then ‖(λI − A)x‖ ≥ λ‖x‖ > 0, so the kernel of λI − A is just the zero vector and λI − A is therefore injective and has an inverse for all λ > 0. (If we have the strict inequality ‖(λI − A)x‖ > λ‖x‖ for all non-null x in the domain, then, by the triangle inequality, ‖Ax‖ > 0, which implies that A itself has an inverse.) We may then state that
‖(λI − A)⁻¹z‖ ≤ (1/λ)‖z‖ for all z in the range of λI − A. This is the same inequality as that given at the beginning of this article, with x = (λI − A)⁻¹z. (We could equally well write these inequalities with λ replaced by any positive κ.)
λI − A is surjective for some λ > 0 if and only if it is surjective for all λ > 0. (This is the aforementioned maximally dissipative case.) In that case one has (0, ∞) ⊂ ρ(A) (the resolvent set of A).
A is a closed operator if and only if the range of λI − A is closed for some (equivalently: for all) λ > 0.
Equivalent characterizations
Define the duality set of x ∈ X, a subset of the dual space X′ of X, by
J(x) = { x′ ∈ X′ : ‖x′‖² = ‖x‖² = ⟨x′, x⟩ }.
By the Hahn–Banach theorem this set is nonempty. In the Hilbert space case (using the canonical duality between a Hilbert space and its dual) i |
https://en.wikipedia.org/wiki/Cyclin-dependent%20kinase%207 | Cyclin-dependent kinase 7, or cell division protein kinase 7, is an enzyme that in humans is encoded by the CDK7 gene.
The protein encoded by this gene is a member of the cyclin-dependent protein kinase (CDK) family. CDK family members are highly similar to the gene products of Saccharomyces cerevisiae cdc28, and Schizosaccharomyces pombe cdc2, and are known to be important regulators of cell cycle progression.
This protein forms a trimeric complex with cyclin H and MAT1, which functions as a Cdk-activating kinase (CAK). It is an essential component of the transcription factor TFIIH, that is involved in transcription initiation and DNA repair. This protein is thought to serve as a direct link between the regulation of transcription and the cell cycle.
Clinical significance in cancer
Given that CDK7 is involved in two important regulatory roles, it is expected that CDK7 regulation may play a role in cancerous cells. Cells from breast cancer tumors were found to have elevated levels of CDK7 and cyclin H when compared to normal breast cells. It was also found that the higher levels were generally present in ER-positive breast cancer. Together, these findings indicate that CDK7-targeted therapy might make sense for some breast cancer patients. Further confirming these findings, recent research indicates that inhibition of CDK7 may be an effective therapy for HER2-positive breast cancers, even overcoming therapeutic resistance. THZ1 was tested on HER2-positive breast cancer cells and exhibited high potency for the cells regardless of their sensitivity to HER2 inhibitors. This finding was demonstrated in vivo, where inhibition of HER2 and CDK7 resulted in tumor regression in therapeutically resistant HER2+ xenograft models.
Inhibitors
The growth suppressor p53 has been shown to interact with cyclin H both in vitro and in vivo. Addition of wild-type p53 was found to heavily downregulate CAK activity, resulting in decreased phosphorylation of both CDK2 and the CTD by CDK7. Mutant |
https://en.wikipedia.org/wiki/Eukaryotic%20initiation%20factor | Eukaryotic initiation factors (eIFs) are proteins or protein complexes involved in the initiation phase of eukaryotic translation. These proteins help stabilize the formation of ribosomal preinitiation complexes around the start codon and are an important input for post-transcription gene regulation. Several initiation factors form a complex with the small 40S ribosomal subunit and Met-tRNAiMet called the 43S preinitiation complex (43S PIC). Additional factors of the eIF4F complex (eIF4A, E, and G) recruit the 43S PIC to the five-prime cap structure of the mRNA, from which the 43S particle scans 5'-->3' along the mRNA to reach an AUG start codon. Recognition of the start codon by the Met-tRNAiMet promotes gated phosphate and eIF1 release to form the 48S preinitiation complex (48S PIC), followed by large 60S ribosomal subunit recruitment to form the 80S ribosome. There exist many more eukaryotic initiation factors than prokaryotic initiation factors, reflecting the greater biological complexity of eukaryotic translation. There are at least twelve eukaryotic initiation factors, composed of many more polypeptides, and these are described below.
eIF1 and eIF1A
eIF1 and eIF1A both bind to the 40S ribosome subunit-mRNA complex. Together they induce an "open" conformation of the mRNA binding channel, which is crucial for scanning, tRNA delivery, and start codon recognition. In particular, eIF1 dissociation from the 40S subunit is considered to be a key step in start codon recognition.
eIF1 and eIF1A are small proteins (13 and 16 kDa, respectively in humans) and are both components of the 43S PIC. eIF1 binds near the ribosomal P-site, while eIF1A binds near the A-site, in a manner similar to the structurally and functionally related bacterial counterparts IF3 and IF1, respectively.
eIF2
eIF2 is the main protein complex responsible for delivering the initiator tRNA to the P-site of the preinitiation complex, as a ternary complex containing Met-tRNAiMet and GTP (the |
https://en.wikipedia.org/wiki/Bioproducts%20engineering | Bioproducts engineering or bioprocess engineering refers to the engineering of bioproducts from renewable bioresources: the design and development of processes and technologies for the sustainable manufacture of bioproducts (materials, chemicals and energy) from renewable biological resources.
Bioproducts engineers harness the molecular building blocks of renewable resources to design, develop and manufacture environmentally friendly industrial and consumer products. From biofuels, renewable energy, and bioplastics to paper products and "green" building materials such as bio-based composites, bioproducts engineers are developing sustainable solutions to meet the world's growing materials and energy demand. Bioproducts fall into two broad categories: conventional and emerging. Examples of conventional bio-based products include building materials, pulp and paper, and forest products; examples of emerging bioproducts include biofuels, bioenergy, starch-based and cellulose-based ethanol, bio-based adhesives, biochemicals, and biodegradable plastics. Bioproducts engineers design and develop these "green" products, along with energy-efficient, environmentally friendly manufacturing processes for them and effective end-use applications. By using renewable resources to design, develop, and manufacture the products we use every day, bioproducts engineers play a critical role in a sustainable 21st-century bioeconomy. The career outlook for bioproducts engineers is very bright, with employment opportunities in a broad range of industries, including pulp and paper, alternative energy, renewable plastics, and other fiber, forest products, building materials and chemical-based industries.
Commonly referred to as bioprocess engineerin |
https://en.wikipedia.org/wiki/Shakedown%20%28continuum%20mechanics%29 | In continuum mechanics, elastic shakedown behavior is one in which plastic deformation takes place during an initial running-in phase, after which, owing to residual stresses or strain hardening, the steady state is perfectly elastic.
Plastic shakedown behavior is one in which the steady state is a closed elastic-plastic loop, with no net accumulation of plastic deformation.
Ratcheting behavior is one in which the steady state is an open elastic-plastic loop, with the material accumulating a net strain during each cycle.
The shakedown concept can be applied to solid metallic materials under repeated cyclic loading or to granular materials under cyclic loading (such a case can occur in road pavements under traffic loading).
Ratcheting Check
A ratcheting check is not needed when there is only primary loading that meets static loading requirements.
It is needed for cyclic thermal loading combined with primary loading that has a mean component.
Shakedown of granular materials
If repeated loading on the granular material induces stress beyond the yield surface, three different cases may be observed. In case 1 the residual strain in the material increases almost without limit. This so-called "ratcheting" state is close to what is predicted by applying the simple Mohr–Coulomb criterion to cyclic loading. In responses like case 2, residual strain in the material grows to some extent, but at some stage the growth stops and further cyclic loading produces closed hysteresis loops of stress–strain. Finally, in case 3 the growth of residual strain practically ceases once sufficient loading cycles have been applied. Cases 2 and 3 are cases of plastic and elastic shakedown, respectively. |
https://en.wikipedia.org/wiki/5%2C6-Dihydroxycytosine | 5,6-Dihydroxycytosine (Isouramil) can be formed from treatment of DNA with osmium tetroxide. |
https://en.wikipedia.org/wiki/Alla%20Sheffer | Alla Sheffer is a Canadian researcher in computer graphics, geometric modeling, geometry processing, and mesh generation, particularly known for her research on mesh parameterization and angle-based flattening. She is currently a professor of computer science at the University of British Columbia.
Education and career
Sheffer was educated at the Hebrew University of Jerusalem, earning a bachelor's degree in mathematics and computer science in 1991, a master's degree in computer science in 1995, and a Ph.D. in computer science in 1999. Her dissertation, Geometric Modeling and Applied Computational Geometry, was supervised by Michel Bercovier.
After postdoctoral research at the University of Illinois at Urbana–Champaign, she became an assistant professor at the Technion – Israel Institute of Technology in 2001. She moved to the University of British Columbia in 2003, and became a full professor there in 2013.
Recognition
The Canadian Human–Computer Communications Society gave Sheffer their Achievement Award in 2018, "for her numerous highly impactful contributions to the field of computer graphics research".
In 2020, Sheffer was elected as a Fellow of the Royal Society of Canada and a member of the ACM SIGGRAPH Academy. In 2021, she was elected as a Fellow of IEEE. She was named a 2021 ACM Fellow "for contributions to geometry processing, mesh parameterization, and perception-driven shape analysis and modeling". |
https://en.wikipedia.org/wiki/Stem%20Cell%20and%20Regenerative%20Medicine%20Cluster | Stem Cell and Regenerative Medicine Cluster is a business cluster located in Vilnius, Lithuania. Founded in 2011, the cluster unites 11 SMEs and state enterprises, including research centers, hospitals, medical centers, and other related institutions. The cluster is currently managed by the Stem Cell Research Center.
Activities
The cluster engages in fundamental and applied scientific research in the fields of stem cells and regenerative medicine. In addition, members of the cluster carry out clinical research (including pre-clinical and clinical trials) in related fields.
Members of the cluster have the capacity to offer an integrated value chain approach, as facilities available include cGMP, in vivo and in vitro testing, and pre-clinical and clinical research.
Current members
Stem Cell Research Center (JSC Kamieniniu lasteliu tyrimu centras)
Vilnius University Hospital Santariskiu Klinikos (Vilniaus Universitetine Ligonine Santariskiu Klinikos)
State Research Institute Center For Innovative Medicine (Valstybinis mokslinių tyrimų institutas Inovatyvios medicinos centras)
Northway Medical Center (JSC Northway Medicinos Centras)
Northway Real Estate (JSC Northway Nekilnojamas Turtas)
Pasilaiciai General Practice (JSC Pasilaiciu seimos medicinos centras)
Lirema Ophthalmology (JSC Lirema)
Biotechnology center Biotechpharma (JSC Northway Biotech)
Kardivita Private Hospital (JSC Kardivita privatus medicinos centras)
Biotechnology park (JSC Biotechnologiju parkas)
Biosantara (JSC Biosantara) |
https://en.wikipedia.org/wiki/Higman%27s%20lemma | In mathematics, Higman's lemma states that the set of finite sequences over a finite alphabet, as partially ordered by the subsequence relation, is well-quasi-ordered. That is, if w_1, w_2, … is an infinite sequence of words over some fixed finite alphabet, then there exist indices i < j such that w_i can be obtained from w_j by deleting some (possibly none) symbols. More generally this remains true when the alphabet is not necessarily finite, but is itself well-quasi-ordered, and the subsequence relation allows the replacement of symbols by earlier symbols in the well-quasi-ordering of labels. This is a special case of the later Kruskal's tree theorem. It is named after Graham Higman, who published it in 1952.
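The subsequence ordering in the lemma is easy to make concrete. Below is a minimal Python sketch (the helper names are illustrative, not from any source): it checks the "can be obtained by deleting symbols" relation and scans a finite prefix of a word sequence for a pair with index i < j, which the lemma guarantees must eventually appear in any infinite sequence.

```python
from itertools import combinations

def is_subsequence(u: str, w: str) -> bool:
    """Return True if u can be obtained from w by deleting symbols."""
    it = iter(w)
    return all(ch in it for ch in u)  # `in` consumes the iterator left to right

def find_dominating_pair(words):
    """Find indices i < j with words[i] a subsequence of words[j], if any
    exist in this finite prefix; in an infinite sequence over a finite
    alphabet, Higman's lemma guarantees such a pair."""
    for i, j in combinations(range(len(words)), 2):
        if is_subsequence(words[i], words[j]):
            return i, j
    return None  # possible only because the prefix is finite

print(find_dominating_pair(["ab", "ba", "bb", "aab"]))  # (0, 3): "ab" embeds in "aab"
```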
Reverse-mathematical calibration
Higman's lemma has been reverse mathematically calibrated (in terms of subsystems of second-order arithmetic) as equivalent to ACA₀ over the base theory RCA₀. |
https://en.wikipedia.org/wiki/Immunological%20synapse | In immunology, an immunological synapse (or immune synapse) is the interface between an antigen-presenting cell or target cell and a lymphocyte such as a T/B cell or Natural Killer cell. The interface was originally named after the neuronal synapse, with which it shares the main structural pattern. An immunological synapse consists of molecules involved in T cell activation, which compose typical patterns—activation clusters. Immunological synapses are the subject of much ongoing research.
Structure and function
The immune synapse is also known as the supramolecular activation cluster or SMAC. This structure is composed of concentric rings each containing segregated clusters of proteins—often referred to as the bull’s-eye model of the immunological synapse:
c-SMAC (central-SMAC) composed of the θ isoform of protein kinase C, CD2, CD4, CD8, CD28, Lck, and Fyn.
p-SMAC (peripheral-SMAC) within which the lymphocyte function-associated antigen-1 (LFA-1) and the cytoskeletal protein talin are clustered.
d-SMAC (distal-SMAC) enriched in CD43 and CD45 molecules.
New investigations, however, have shown that a "bull’s eye" is not present in all immunological synapses. For example, different patterns appear in the synapse between a T-cell and a dendritic cell.
This complex as a whole is postulated to have several functions including but not limited to:
Regulation of lymphocyte activation
Transfer of peptide-MHC complexes from APCs to lymphocytes
Directing secretion of cytokines or lytic granules
Recent research has proposed a striking parallel between the immunological synapse and the primary cilium based mainly on similar actin rearrangement, orientation of the centrosome towards the structure and involvement of similar transport molecules (such as IFT20, Rab8, Rab11). This structural and functional homology is the topic of ongoing research.
Formation
The initial interaction occurs between LFA-1 present in the p-SMAC of a T-cell, and non-specific adhesion molecules ( |
https://en.wikipedia.org/wiki/Aspartame | Aspartame is an artificial non-saccharide sweetener 200 times sweeter than sucrose and is commonly used as a sugar substitute in foods and beverages. It is a methyl ester of the aspartic acid/phenylalanine dipeptide with brand names NutraSweet, Equal, and Canderel. Aspartame was approved by the US Food and Drug Administration (FDA) in 1974, and then again in 1981, after approval was revoked in 1980.
Aspartame is one of the most studied food additives in the human food supply. Reviews by over 100 governmental regulatory bodies found the ingredient safe for consumption at the normal acceptable daily intake (ADI) limit.
Uses
Aspartame is around 180 to 200 times sweeter than sucrose (table sugar). Due to this property, even though aspartame produces approximately 4 kcal (17 kJ) of energy per gram when metabolized, about the same as sucrose, the quantity of aspartame needed to produce a sweet taste is so small that its caloric contribution is negligible. The sweetness of aspartame lasts longer than that of sucrose, so it is often blended with other artificial sweeteners such as acesulfame potassium to produce an overall taste more like that of sugar.
Like many other peptides, aspartame may hydrolyze (break down) into its constituent amino acids under conditions of elevated temperature or high pH. This makes aspartame undesirable as a baking sweetener and prone to degradation in products hosting a high pH, as required for a long shelf life. The stability of aspartame under heating can be improved to some extent by encasing it in fats or in maltodextrin. The stability when dissolved in water depends markedly on pH. At room temperature, it is most stable at pH 4.3, where its half-life is nearly 300 days. At pH 7, however, its half-life is only a few days. Most soft-drinks have a pH between 3 and 5, where aspartame is reasonably stable. In products that may require a longer shelf life, such as syrups for fountain beverages, aspartame is sometimes blended with a more stable sweetener, such as saccha |
https://en.wikipedia.org/wiki/Hempel%27s%20dilemma | Hempel's dilemma is a question first asked (at least on record) by the philosopher Carl Hempel. It has relevance to naturalism and physicalism in philosophy, and to philosophy of mind.
The dilemma questions how the language of physics can be used to accurately describe existence, given that it relies on imperfect human linguistics, or as Hempel stated: "The thesis of physicalism would seem to require a language in which a true theory of all physical phenomena can be formulated. But it is quite unclear what is to be understood here by a physical phenomenon, especially in the context of a doctrine that has taken a decidedly linguistic turn."
Overview
Physicalism, in at least one rough sense, is the claim that the entire world may be described and explained using the laws of nature, in other words, that all phenomena are natural phenomena. This leaves open the question of what is 'natural' (in physicalism, 'natural' means procedural and causally coherent, i.e. all effects have particular causes regardless of human knowledge and interpretation, as in physics; it also means an ontological reality, not just a hypothesis or a calculational technique), but one common understanding of the claim is that everything in the world is ultimately explicable in the terms of physics. This is known as reductive physicalism. However, this type of physicalism in its turn leaves open the question of what we are to consider as the proper terms of physics. There seem to be two options here, and these options form the horns of Hempel's dilemma, because neither seems satisfactory.
On the one hand, we may define the physical as whatever is currently explained by our best physical theories, e.g., quantum mechanics, general relativity. Though many would find this definition unsatisfactory, some would accept that we have at least a general understanding of the physical based on these theories, and can use them to assess what is physical and what is not. And therein lies the rub, as a worked- |
https://en.wikipedia.org/wiki/Paul%20Scherrer%20Institute | The Paul Scherrer Institute (PSI) is a multi-disciplinary research institute for natural and engineering sciences in Switzerland. It is located in the Canton of Aargau in the municipalities Villigen and Würenlingen on either side of the River Aare, and covers an area over 35 hectares in size. Like ETH Zurich and EPFL, PSI belongs to the Swiss Federal Institutes of Technology Domain of the Swiss Confederation. The PSI employs around 2,100 people. It conducts basic and applied research in the fields of matter and materials, human health, and energy and the environment. About 37% of PSI's research activities focus on material sciences, 24% on life sciences, 19% on general energy, 11% on nuclear energy and safety, and 9% on particle physics.
PSI develops, builds and operates large and complex research facilities and makes them available to the national and international scientific communities. In 2017, for example, more than 2,500 researchers from 60 different countries came to PSI to take advantage of the concentration of large-scale research facilities in the same location, which is unique worldwide. About 1,900 experiments are conducted each year at the approximately 40 measuring stations in these facilities.
In recent years, the institute has been one of the largest recipients of money from the Swiss lottery fund.
History
The institute, named after the Swiss physicist Paul Scherrer, was created in 1988 when EIR (Eidgenössisches Institut für Reaktorforschung, Swiss Federal Institute for Reactor Research, founded in 1960) was merged with SIN (Schweizerisches Institut für Nuklearphysik, Swiss Institute for Nuclear Research, founded in 1968). The two institutes on opposite sides of the River Aare served as national centres for research: one focusing on nuclear energy and the other on nuclear and particle physics. Over the years, research at the centres expanded into other areas, and nuclear and reactor physics accounts for just 11 percent of the research work at PSI |
https://en.wikipedia.org/wiki/Modified%20Newtonian%20dynamics | Modified Newtonian dynamics (MOND) is a hypothesis that proposes a modification of Newton's law of universal gravitation to account for observed properties of galaxies. It is an alternative to the hypothesis of dark matter in terms of explaining why galaxies do not appear to obey the currently understood laws of physics.
Created in 1982 and first published in 1983 by Israeli physicist Mordehai Milgrom, the hypothesis' original motivation was to explain why the velocities of stars in galaxies were observed to be larger than expected based on Newtonian mechanics. Milgrom noted that this discrepancy could be resolved if the gravitational force experienced by a star in the outer regions of a galaxy was proportional to the square of its centripetal acceleration (as opposed to the centripetal acceleration itself, as in Newton's second law) or alternatively, if gravitational force came to vary inversely linearly with radius (as opposed to the inverse square of the radius, as in Newton's law of gravity). MOND departs from Newton's laws at extremely small accelerations that are characteristic of the outer regions of galaxies as well as the inter-galaxy forces within galaxy clusters, but which are far below anything encountered in the Solar System or on Earth.
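As a worked illustration of the modified force law, here is a short derivation sketch assuming the deep-MOND limit (accelerations a ≪ a₀, where the effective law becomes a²/a₀ = g_N, with g_N the Newtonian gravitational field); this is a standard textbook-style consequence, not a quotation from the article:

```latex
\frac{a^{2}}{a_{0}} = g_{N} = \frac{GM}{r^{2}}
\quad\Longrightarrow\quad
a = \frac{\sqrt{G M a_{0}}}{r}.
% For a circular orbit the centripetal acceleration is a = v^{2}/r, hence
\frac{v^{2}}{r} = \frac{\sqrt{G M a_{0}}}{r}
\quad\Longrightarrow\quad
v^{4} = G M a_{0},
% so the orbital velocity is independent of radius: a flat rotation curve.
```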
MOND is an example of a class of theories known as modified gravity, and is an alternative to the hypothesis that the dynamics of galaxies are determined by massive, invisible dark matter halos. Since Milgrom's original proposal, proponents of MOND have claimed to successfully predict a variety of galactic phenomena that they state are difficult to understand as consequences of dark matter.
Though MOND explains the anomalously great rotational velocities of galaxies at their perimeters, it does not fully explain the velocity dispersions of individual galaxies within galaxy clusters. MOND reduces the discrepancy between the velocity dispersions and clusters' observed missing baryonic mass from a factor of around 10 to |
https://en.wikipedia.org/wiki/Block%20cellular%20automaton | A block cellular automaton or partitioning cellular automaton is a special kind of cellular automaton in which the lattice of cells is divided into non-overlapping blocks (with different partitions at different time steps) and the transition rule is applied to a whole block at a time rather than a single cell. Block cellular automata are useful for simulations of physical quantities, because it is straightforward to choose transition rules that obey physical constraints such as reversibility and conservation laws.
Definition
A block cellular automaton consists of the following components:
A regular lattice of cells
A finite set of the states that each cell may be in
A partition of the cells into a uniform tessellation in which each tile of the partition has the same size and shape
A rule for shifting the partition after each time step
A transition rule, a function that takes as input an assignment of states for the cells in a single tile and produces as output another assignment of states for the same cells.
In each time step, the transition rule is applied simultaneously and synchronously to all of the tiles in the partition. Then, the partition is shifted and the same operation is repeated in the next time step, and so forth. In this way, as with any cellular automaton, the pattern of cell states changes over time to perform some nontrivial computation or simulation.
Neighborhoods
The simplest partitioning scheme is probably the Margolus neighborhood, named after Norman Margolus, who first studied block cellular automata using this neighborhood structure. In the Margolus neighborhood, the lattice is divided into 2-cell blocks (or 2 × 2 squares in two dimensions, or 2 × 2 × 2 cubes in three dimensions, etc.) which are shifted by one cell (along each dimension) on alternate timesteps.
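A minimal sketch of a Margolus-neighborhood update in Python (the function names and the toy rotation rule are illustrative assumptions; the shift of the partition on odd steps is implemented by rolling the array):

```python
import numpy as np

def margolus_step(grid: np.ndarray, rule, odd: bool) -> np.ndarray:
    """Apply `rule` (a function on 2x2 blocks) to every block of the
    partition; on odd steps the partition is shifted by one cell
    along each dimension."""
    g = np.roll(grid, (-1, -1), axis=(0, 1)) if odd else grid.copy()
    h, w = g.shape  # assumed even so the 2x2 blocks tile the grid
    for y in range(0, h, 2):
        for x in range(0, w, 2):
            g[y:y + 2, x:x + 2] = rule(g[y:y + 2, x:x + 2])
    return np.roll(g, (1, 1), axis=(0, 1)) if odd else g

def rotate(block):
    """Toy reversible rule: rotate each 2x2 block a quarter turn clockwise."""
    return np.rot90(block, -1)

grid = (np.random.default_rng(0).random((8, 8)) < 0.3).astype(int)
for step in range(4):
    grid = margolus_step(grid, rotate, odd=(step % 2 == 1))
```

Because the toy rule is a bijection on block states, every step is invertible, which illustrates why block rules make physical constraints such as reversibility easy to enforce.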
A closely related technique due to K. Morita and M. Harao consists in partitioning each cell into a finite number of parts, each part being devoted to some neighbor. The evolution proceeds by exch |
https://en.wikipedia.org/wiki/Tuberous%20sclerosis%20complex%20tumor%20suppressors | Tuberous sclerosis complex (TSC) tumor suppressors form the TSC1-TSC2 molecular complex. Under poor growth conditions the TSC1-TSC2 complex limits cell growth. A key promoter of cell growth, mTORC1, is inhibited by the tuberous sclerosis complex. Insulin activates mTORC1 and causes dissociation of TSC from the surface of lysosomes.
Resistance to ischemia-reperfusion injury by protein restriction is mediated by activation of the tuberous sclerosis complex. |
https://en.wikipedia.org/wiki/Approximation-preserving%20reduction | In computability theory and computational complexity theory, especially the study of approximation algorithms, an approximation-preserving reduction is an algorithm for transforming one optimization problem into another problem, such that the distance of solutions from optimal is preserved to some degree. Approximation-preserving reductions are a subset of more general reductions in complexity theory; the difference is that approximation-preserving reductions usually make statements on approximation problems or optimization problems, as opposed to decision problems.
Intuitively, problem A is reducible to problem B via an approximation-preserving reduction if, given an instance of problem A and a (possibly approximate) solver for problem B, one can convert the instance of problem A into an instance of problem B, apply the solver for problem B, and recover a solution for problem A that also has some guarantee of approximation.
Definition
Unlike reductions on decision problems, an approximation-preserving reduction must preserve more than the truth of the problem instances when reducing from one problem to another. It must also maintain some guarantee on the relationship between the cost of the solution to the cost of the optimum in both problems. To formalize:
Let A and B be optimization problems.
Let x be an instance of problem A, with optimal solution OPT(x). Let cost_A(x, y) denote the cost of a solution y to an instance x of problem A. This is also the metric used to determine which solution is considered optimal.
An approximation-preserving reduction is a pair of functions (f, g) (which often must be computable in polynomial time), such that:
f maps an instance x of A to an instance x' = f(x) of B.
g maps a solution y' of x' to a solution y = g(y') of x.
g preserves some guarantee of the solution's performance, or approximation ratio, defined as R_A(x, y) = max(cost_A(x, y) / OPT(x), OPT(x) / cost_A(x, y)); a schematic sketch of the (f, g) interface follows the list.
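As a schematic sketch of the (f, g) interface only (the function names mirror the definition above; no specific problems A and B are assumed):

```python
from typing import Callable, TypeVar

IA, IB, SA, SB = TypeVar("IA"), TypeVar("IB"), TypeVar("SA"), TypeVar("SB")

def reduce_and_solve(
    f: Callable[[IA], IB],         # maps an instance of A to an instance of B
    g: Callable[[IA, SB], SA],     # maps a B-solution back to an A-solution
    solver_b: Callable[[IB], SB],  # a (possibly approximate) solver for B
    x: IA,
) -> SA:
    """Solve an A-instance by translating it to B, solving there, and
    mapping the solution back; whatever approximation guarantee holds
    depends on the particular (f, g) pair, per the definition above."""
    return g(x, solver_b(f(x)))
```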
Types
There are many different types of approximation-preserving reductions, all of which have a different guarantee (the third point in the definition above). |
https://en.wikipedia.org/wiki/Epidemiology%20of%20syphilis | Syphilis is a bacterial infection transmitted by sexual contact and is believed to have infected 12 million people in 1999, with greater than 90% of cases in the developing world. It affects between 700,000 and 1.6 million pregnancies a year, resulting in spontaneous abortions, stillbirths, and congenital syphilis. In Sub-Saharan Africa syphilis contributes to approximately 20% of perinatal deaths.
In the developed world, syphilis infections were in decline until the 1980s and 1990s due to widespread use of antibiotics. Since the year 2000, rates of syphilis have been increasing in the US, UK, Australia, and Europe, primarily among men who have sex with men. This is attributed to unsafe sexual practices. A sexually transmitted disease (STD) surveillance study done by the Centers for Disease Control and Prevention in 2016 showed that men who have sex with men accounted for over half (52%) of the 27,814 cases reported that year. Nationally, the highest rates of primary and secondary syphilis in 2016 were observed among men aged 20–34 years, among men in the West, and among Black men.
Increased rates among heterosexuals have occurred in China and Russia since the 1990s. Syphilis increases the risk of HIV transmission by two to five times and co-infection is common (30–60% in a number of urban centers).
Untreated, it has a mortality rate of 8% to 58%, with a greater death rate in males. The higher mortality among males compared to females is not well understood, but is thought to be related to immunological differences between the sexes. The symptoms of syphilis became less severe over the 19th and 20th centuries, in part due to the widespread availability of effective treatment and in part due to the decreasing virulence of the spirochete. With early treatment, few complications result.
China
In China rates of syphilis have increased from the 1990s to the 2010s. This occurred after a successful campaign to reduce rates was carried out in the 1950s. Rates of diagnosis are hig |
https://en.wikipedia.org/wiki/Countergradient%20variation | Countergradient variation is a type of phenotypic plasticity that occurs when the phenotypic variation determined by a biological population's genetic components opposes the phenotypic variation caused by an environmental gradient. This can cause different populations of the same organism to display similar phenotypes regardless of their underlying genetics and differences in their environments.
To illustrate a common example known as countergradient growth rate, consider two populations. The two populations live in different environments that affect growth differently due to many ecological factors, such as the temperature and available food. One population is genetically predisposed to have an increased growth rate but inhabits an environment that reduces growth rate, such as a cool environment, and thereby limits the opportunities to take full advantage of any genetic predisposition. The second population is genetically predisposed to have a decreased growth rate but inhabits an environment that supports an increased growth rate, such as a warm environment, and allows members of the population to grow faster despite their genetic disadvantage. Since the genetic influence directly counteracts the environmental influence in each population, both populations will have a similar intermediate growth rate. Countergradient variation can reduce apparent variability by creating similar phenotypes, but it is still possible for the two populations to show phenotypic diversity if either the genetic gradient or the environmental gradient has a stronger influence.
Many examples of countergradient variation have been discovered through the use of transplant experiments. Countergradient variation of growth rate is one of the most common examples. Growth rate and body size have important ecological implications, such as how they impact an organism's survival, life history, and fecundity. Countergradient variation has been described in many ectothermic animals, since ectotherms |
https://en.wikipedia.org/wiki/Master%20of%20Wine | Master of Wine (MW) is a qualification (not an academic degree) issued by The Institute of Masters of Wine in the United Kingdom. The MW qualification is generally regarded in the wine industry as one of the highest standards of professional knowledge.
The Institute was founded in 1955, and the MW examinations were first arranged in 1953 by the Worshipful Company of Vintners and the Wine and Spirits Association.
Qualification
Prospective students must hold an advanced wine qualification, at least Diploma level from the Wine & Spirit Education Trust, or an appropriately high level sommelier certificate, such as Advanced Sommelier from the Court of Master Sommeliers, and have a minimum of three years' professional work experience in the global wine community. Their application must be supported by reference from a Master of Wine or another senior wine trade professional. The qualification process takes at least three years to complete, involves both theory and practical work, and culminates in writing a research paper of 6,000–10,000 words.
Membership
Until 1983, the examination was limited to United Kingdom wine importers, merchants and retailers. The first non-UK Master of Wine was awarded in 1988. As of March 2021, there are 416 MWs in the world, living in 31 countries. The MWs are spread across 5 continents, wherein the UK has 208 MWs, US has 45 MWs, Australia has 24 MWs and France has 16 MWs. There are 9 countries with 1 MW each on the list.
Today, members hold a range of occupations including winemakers, viticulturists, winemaking consultants, wine writers and journalists, wine educators, and wine service, restaurant and hotel management. In addition, many are involved in the purchasing, importing, distribution, sales and marketing of wine.
Notable Masters of Wine
Notable Masters of Wine include:
Tim Atkin
Gerard Basset
Nicolas Belfrage
Julian Brind
Michael Broadbent
Clive Coates
Mary Ewing-Mulligan
Doug Frost
Ned Goodwin
Anthony Hanson
Benj |
https://en.wikipedia.org/wiki/Windows%20Live%20OneCare | Windows Live OneCare (previously Windows OneCare Live, codenamed A1) was a computer security and performance enhancement service developed by Microsoft for Windows. A core technology of OneCare was the multi-platform RAV (Reliable Anti-virus), which Microsoft purchased from GeCAD Software Srl in 2003, but subsequently discontinued. The software was available as an annual paid subscription, which could be used on up to three computers.
On 18 November 2008, Microsoft announced that Windows Live OneCare would be discontinued on 30 June 2009 and that it would instead offer users a new free anti-malware suite, Microsoft Security Essentials, to be available before then. However, virus definitions and support for OneCare would continue until a subscription expired. In the end-of-life announcement, Microsoft noted that Windows Live OneCare would not be upgraded to work with Windows 7 and would also not work in Windows XP Mode.
History
Windows Live OneCare entered a beta state in the summer of 2005. The managed beta program was launched before the public beta, and was located on BetaPlace, Microsoft's former beta delivery system. On 31 May 2006, Windows Live OneCare made its official debut in retail stores in the United States.
The beta version of Windows Live OneCare 1.5 was released in early October 2006 by Microsoft. Version 1.5 was released to manufacturing on 3 January 2007 and was made available to the public on 30 January 2007. On 4 July 2007, beta testing started for version 2.0, and the final version was released on 16 November 2007.
Microsoft acquired Komoku on 20 March 2008 and merged its computer security software into Windows Live OneCare.
Windows Live OneCare 2.5 (build 2.5.2900.28) final was released on 3 July 2008. On the same day, Microsoft also released Windows Live OneCare for Server 2.5.
Features
Windows Live OneCare features integrated anti-virus, personal firewall, and backup utilities, and a tune-up utility with the integrated functionality o |
https://en.wikipedia.org/wiki/Paul%20Bernays | Paul Isaac Bernays (17 October 1888 – 18 September 1977) was a Swiss mathematician who made significant contributions to mathematical logic, axiomatic set theory, and the philosophy of mathematics. He was an assistant and close collaborator of David Hilbert.
Biography
Bernays was born into a distinguished German-Jewish family of scholars and businessmen. His great-grandfather, Isaac ben Jacob Bernays, served as chief rabbi of Hamburg from 1821 to 1849.
Bernays spent his childhood in Berlin, and attended the Köllner Gymnasium, 1895–1907. At the University of Berlin, he studied mathematics under Issai Schur, Edmund Landau, Ferdinand Georg Frobenius, and Friedrich Schottky; philosophy under Alois Riehl, Carl Stumpf and Ernst Cassirer; and physics under Max Planck. At the University of Göttingen, he studied mathematics under David Hilbert, Edmund Landau, Hermann Weyl, and Felix Klein; physics under Voigt and Max Born; and philosophy under Leonard Nelson.
In 1912, the University of Berlin awarded him a Ph.D. in mathematics for a thesis, supervised by Landau, on the analytic number theory of binary quadratic forms. That same year, the University of Zurich awarded him habilitation for a thesis on complex analysis and Picard's theorem. The examiner was Ernst Zermelo. Bernays was Privatdozent at the University of Zurich, 1912–17, where he came to know George Pólya. His collected communications with Kurt Gödel span many decades.
Starting in 1917, David Hilbert employed Bernays to assist him with his investigations of the foundation of arithmetic. Bernays also lectured on other areas of mathematics at the University of Göttingen. In 1918, that university awarded him a second habilitation for a thesis on the axiomatics of the propositional calculus of Principia Mathematica.
In 1922, Göttingen appointed Bernays extraordinary professor without tenure. His most successful student there was Gerhard Gentzen. After Nazi Germany enacted the Law for the Restoration of the Professi |
https://en.wikipedia.org/wiki/Glossary%20of%20module%20theory | Module theory is the branch of mathematics in which modules are studied. This is a glossary of some terms of the subject.
See also: Glossary of linear algebra, Glossary of ring theory, Glossary of representation theory.
A
B
C
D
E
F
G
H
I
J
K
L
M
N
P
Q
R
S
T
U
W
Z |
https://en.wikipedia.org/wiki/NEFM | Neurofilament medium polypeptide (NF-M) is a protein that in humans is encoded by the NEFM gene.
Function
Neurofilaments are type IV intermediate filament heteropolymers composed of light (NEFL), medium (this protein), and heavy (NEFH) chains. Neurofilaments comprise the axoskeleton and functionally maintain neuronal caliber. They may also play a role in intracellular transport to axons and dendrites. This gene encodes the medium neurofilament protein. This protein is commonly used as a biomarker of neuronal damage. |
https://en.wikipedia.org/wiki/Clarke%20Medal | The Clarke Medal is awarded by the Royal Society of New South Wales, the oldest learned society in Australia and the Southern Hemisphere, for distinguished work in the Natural sciences.
The medal is named in honour of the Reverend William Branwhite Clarke, one of the founders of the Society and was to be "awarded for meritorious contributions to Geology, Mineralogy and Natural History of Australasia, to be open to men of science, whether resident in Australasia or elsewhere".
It is now awarded annually for distinguished work in the Natural Sciences (geology, botany and zoology) done in the Australian Commonwealth and its territories. Each discipline is considered in rotation every three years.
Recipients
Source: Royal Society of New South Wales
1878: Richard Owen (Zoology)
1879: George Bentham (Botany)
1880: Thomas Huxley (Palaeontology)
1881: Frederick McCoy (Palaeontology)
1882: James Dwight Dana (Geology)
1883: Ferdinand von Mueller (Botany)
1884: Alfred Richard Cecil Selwyn (Geology)
1885: Joseph Dalton Hooker (Botany)
1886: Laurent-Guillaume de Koninck (Palaeontology)
1887: Sir James Hector (Geology)
1888: Julian Tenison Woods (Geology)
1889: Robert L. J. Ellery (Astronomy)
1890: George Bennett (Zoology)
1891: Frederick Hutton (Geology)
1892: William Turner Thiselton-Dyer (Botany)
1893: Ralph Tate (Botany and Geology)
1895: Joint Award: Robert Logan Jack (Geology) and Robert Etheridge, Jr. (Palaeontology)
1896: Augustus Gregory (Exploration)
1900: John Murray (Oceanography)
1901: Edward John Eyre (Exploration)
1902: Frederick Manson Bailey (Botany)
1903: Alfred William Howitt (Anthropology)
1907: Walter Howchin (Geology)
1909: Walter Roth (Anthropology)
1912: William Harper Twelvetrees (Geology)
1914: Arthur Smith Woodward (Palaeontology)
1915: William Aitcheson Haswell (Zoology)
1917: Edgeworth David (Geology)
1918: Leonard Rodway (Botany)
1920: Joseph Edmund Carne (Geology)
1921: Joseph James Fletcher (Biology)
1922: Richa |
https://en.wikipedia.org/wiki/Jackson%20structured%20programming | Jackson structured programming (JSP) is a method for structured programming developed by British software consultant Michael A. Jackson and described in his 1975 book Principles of Program Design. The technique of JSP is to analyze the data structures of the files that a program must read as input and produce as output, and then produce a program design based on those data structures, so that the program control structure handles those data structures in a natural and intuitive way.
JSP describes structures (of both data and programs) using three basic structures – sequence, iteration, and selection (or alternatives). These structures are diagrammed as (in effect) a visual representation of a regular expression.
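A minimal sketch of the idea in Python (assuming a classic batch example: the input is a sequence of keyed records grouped by key; all names are illustrative). Each component of the program structure mirrors a component of the data structure: sequence, iteration, and elementary parts correspond one-to-one.

```python
from itertools import groupby

# Data structure:   file  = { group }          (iteration of groups)
#                   group = key, { record }    (sequence: key, then records)
# JSP derives the program structure from the data structure.

def process_file(records):                     # file  -> iteration
    for key, group in groupby(records, key=lambda r: r[0]):
        process_group(key, group)              # group -> sequence

def process_group(key, group):
    total = 0
    for _, amount in group:                    # records -> iteration
        total += amount                        # record  -> elementary operation
    print(f"{key}: {total}")

process_file([("A", 1), ("A", 2), ("B", 5)])
# A: 3
# B: 5
```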
Introduction
Michael A. Jackson originally developed JSP in the 1970s. He documented the system in his 1975 book Principles of Program Design. In a 2001 conference talk, he provided a retrospective analysis of the original driving forces behind the method, and related it to subsequent software engineering developments. Jackson's aim was to make COBOL batch file processing programs easier to modify and maintain, but the method can be used to design programs for any programming language that has structured control constructs— sequence, iteration, and selection ("if/then/else").
Jackson Structured Programming was similar to Warnier/Orr structured programming although JSP considered both input and output data structures while the Warnier/Orr method focused almost exclusively on the structure of the output stream.
Motivation for the method
At the time that JSP was developed, most programs were batch COBOL programs that processed sequential files stored on tape. A typical program read through its input file as a sequence of records, so that all programs had the same structure— a single main loop that processed all of the records in the file, one at a time. Jackson asserted that this program structure was almost always wrong, and encouraged programmers to |
https://en.wikipedia.org/wiki/Pavel%20Korovkin | Pavel Petrovich Korovkin (the family name is also transliterated as Korowkin in German sources; 9 July 1913 – 11 August 1985) was a Soviet mathematician whose main fields of research were orthogonal polynomials, approximation theory and potential theory. In 1947 he proved a generalization of Egorov's theorem: from the early 1950s on, his research interests turned to functional analysis and he examined the stability of the exterior Dirichlet problem and the convergence and approximation properties of linear positive operators on spaces of continuous functions. Korovkin sets and Korovkin approximation are named after him.
Life and career
Korovkin was born to a poor peasant family. He lost his father early and grew up from 1914 to 1920 in an orphanage. In 1930 he graduated from high school in Leningrad. As the winner of a mathematics contest he had the right to enter Leningrad State University without entrance exams. After a year of working at a factory he entered the Faculty of Mathematics and Mechanics. His scientific advisor was V.I. Smirnov. Korovkin earned his doctorate in 1939 with a dissertation on orthogonal polynomials. He was then appointed to the Kalinin Pedagogical Institute.
At the beginning of the Great Patriotic War Korovkin voluntarily enlisted in the Red Army. He started as a cannon platoon chief and by the end of the war had been promoted to artillery regiment chief. He was awarded the Order of the Red Star.
In December 1945, he resumed his work at the Kalinin Pedagogical Institute, defended a thesis on the convergence of polynomial sequences in 1947, and was appointed professor in 1948. At the Moscow Automobile and Road Institute from 1958 to 1970 he headed the department of higher mathematics; he then became head of the Department of Mathematical Analysis at the Kaluga State Pedagogical Institute.
Selected publications
.
, translated in English as .
Notes |
https://en.wikipedia.org/wiki/Automata-based%20programming%20%28Shalyto%27s%20approach%29 | Automata-based programming is a programming technology. Its defining characteristic is the use of finite state machines to describe program behavior. The transition graphs of state machines are used in all stages of software development (specification, implementation, debugging and documentation). Automata-based programming technology was introduced by Anatoly Shalyto in 1991, and Switch-technology was developed to support it. Automata-based programming is considered to be a general-purpose program development methodology rather than just another finite state machine implementation.
Automata-based programming
The main idea of the suggested approach is to construct computer programs the same way automation of technological processes (and other kinds of processes) is done.
To this end, on the basis of data domain analysis, the sources of input events, the control system (the system of interacting finite state machines) and the control objects implementing output actions are singled out. The control objects can also generate another type of input actions, which are transmitted through feedback from the control objects back to the finite state machines; a minimal sketch of such a state machine appears below.
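A minimal sketch of this style in Python (the states, events, and "motor" control object are invented for illustration): the program's behavior is fully determined by an explicit current state plus the incoming event, and output actions on control objects are attached to transitions.

```python
from enum import Enum, auto

class State(Enum):
    IDLE = auto()
    RUNNING = auto()

def step(state: State, event: str) -> State:
    """One automaton step: the transition and its output action are
    selected purely by (current state, input event)."""
    if state is State.IDLE and event == "start":
        print("motor on")            # output action on a control object
        return State.RUNNING
    if state is State.RUNNING and event == "stop":
        print("motor off")
        return State.IDLE
    return state                     # events with no transition are ignored

state = State.IDLE
for event in ["start", "start", "stop"]:
    state = step(state, event)       # prints "motor on", then "motor off"
```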
Main features
In recent years great attention has been paid to the development of the technology of programming for embedded systems and real-time systems. These systems have special requirements for the quality of software. One of the best known approaches for this field of tasks is synchronous programming.
Simultaneously with the advance of synchronous programming in Europe, an approach to software development for crucial systems called automata-based programming or state-based programming was being created in Russia.
The term event is being used more and more widely in programming; recently it has become one of the most commonly used terms in software development. In contrast, the offered approach is based on the term state (State-Driven Architecture). After introduction of the term |
https://en.wikipedia.org/wiki/Aminobacterium | Aminobacterium is a Gram-negative genus of bacteria from the family of Synergistaceae.
Phylogeny
See also
List of bacterial orders
List of bacteria genera |
https://en.wikipedia.org/wiki/Partial%20geometry | An incidence structure C = (P, L, I) consists of points P, lines L, and flags I ⊆ P × L, where a point p is said to be incident with a line l if (p, l) ∈ I. It is a (finite) partial geometry if there are integers s, t, α ≥ 1 such that:
For any pair of distinct points p and q, there is at most one line incident with both of them.
Each line is incident with s + 1 points.
Each point is incident with t + 1 lines.
If a point p and a line l are not incident, there are exactly α pairs (q, m) ∈ I, such that p is incident with m and q is incident with l.
A partial geometry with these parameters is denoted by pg(s, t, α).
Properties
The number of points is given by (s + 1)(st + α)/α and the number of lines by (t + 1)(st + α)/α.
The point graph (also known as the collinearity graph) of a pg(s, t, α) is a strongly regular graph: srg((s + 1)(st + α)/α, s(t + 1), s − 1 + t(α − 1), α(t + 1)).
Partial geometries are dual structures: the dual of a pg(s, t, α) is simply a pg(t, s, α). (A computational check of these formulas follows this list.)
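The counting formulas above are easy to check mechanically. A small Python sketch (the function name is illustrative), assuming the parameter formulas as reconstructed here; for feasible parameter sets α divides st + α, so the integer divisions are exact:

```python
def pg_parameters(s: int, t: int, alpha: int):
    """Point/line counts and strongly regular point-graph parameters
    of a partial geometry pg(s, t, alpha)."""
    points = (s + 1) * (s * t + alpha) // alpha
    lines = (t + 1) * (s * t + alpha) // alpha
    srg = (points, s * (t + 1), s - 1 + t * (alpha - 1), alpha * (t + 1))
    return points, lines, srg

# The generalized quadrangle GQ(2, 2) is the special case alpha = 1:
print(pg_parameters(2, 2, 1))  # (15, 15, (15, 6, 1, 3))
```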
Special case
The generalized quadrangles are exactly those partial geometries pg(s, t, α) with α = 1.
The Steiner systems S(2, s + 1, st + s + 1) are precisely those partial geometries with α = s + 1.
Generalisations
A partial linear space of order (s, t) is called a semipartial geometry spg(s, t, α, μ) if there are integers α ≥ 1 and μ ≥ 1 such that:
If a point p and a line l are not incident, there are either 0 or exactly α pairs (q, m) ∈ I, such that p is incident with m and q is incident with l.
Every pair of non-collinear points has exactly μ common neighbours.
A semipartial geometry is a partial geometry if and only if μ = α(t + 1).
It can be easily shown that the collinearity graph of such a geometry is strongly regular with parameters
(v, s(t + 1), s − 1 + t(α − 1), μ), where the number of vertices v is determined by the standard identity k(k − λ − 1) = (v − k − 1)μ.
A nice example of such a geometry is obtained by taking the affine points of PG(3, q²) and only those lines that intersect the plane at infinity in a point of a fixed Baer subplane.
See also
Strongly regular graph
Maximal arc |
https://en.wikipedia.org/wiki/Heisenberg%27s%20microscope | Heisenberg's microscope is a thought experiment proposed by Werner Heisenberg that has served as the nucleus of some commonly held ideas about quantum mechanics. In particular, it provides an argument for the uncertainty principle on the basis of the principles of classical optics.
The concept was criticized by Heisenberg's mentor Niels Bohr, and theoretical and experimental developments have suggested that Heisenberg's intuitive explanation of his mathematical result might be misleading. While the act of measurement does lead to uncertainty, the loss of precision is less than that predicted by Heisenberg's argument when measured at the level of an individual state. The formal mathematical result remains valid, however, and the original intuitive argument has also been vindicated mathematically when the notion of disturbance is expanded to be independent of any specific state.
Heisenberg's argument
Heisenberg supposes that an electron is like a classical particle, moving in the x direction along a line below the microscope. Let the cone of light rays leaving the microscope lens and focusing on the electron make an angle ε with the electron. Let λ be the wavelength of the light rays. Then, according to the laws of classical optics, the microscope can only resolve the position of the electron up to an accuracy of Δx = λ / sin ε.
An observer perceives an image of the particle because the light rays strike the particle and bounce back through the microscope to the observer's eye. We know from experimental evidence that when a photon strikes an electron, the latter has a Compton recoil with momentum proportional to h/λ, where h is Planck's constant. However, the extent of the recoil "cannot be exactly known, since the direction of the scattered photon is undetermined within the bundle of rays entering the microscope." In particular, the electron's momentum in the x direction is only determined up to Δp_x ≈ (h/λ) sin ε.
Combining the relations for Δx and Δp_x, we thus have Δx · Δp_x ≈ (λ / sin ε) · (h/λ) sin ε = h, which is an approximate expression of Heisenberg's uncertainty principle. |
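A worked numerical check of the reconstructed relations, with illustrative values (green light, λ = 500 nm, aperture half-angle ε = 30°; the numbers are assumptions chosen for round arithmetic):

```latex
\Delta x \approx \frac{\lambda}{\sin\varepsilon}
  = \frac{5\times10^{-7}\,\mathrm{m}}{0.5}
  = 1\times10^{-6}\,\mathrm{m},
\qquad
\Delta p_x \approx \frac{h}{\lambda}\sin\varepsilon
  = \frac{6.6\times10^{-34}\,\mathrm{J\,s}}{5\times10^{-7}\,\mathrm{m}}\times 0.5
  \approx 6.6\times10^{-28}\,\mathrm{kg\,m/s},
% so that
\Delta x\,\Delta p_x \approx 6.6\times10^{-34}\,\mathrm{J\,s} \approx h .
```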
https://en.wikipedia.org/wiki/Food%20spoilage | Food spoilage is the process by which a food product becomes unsuitable for the consumer to ingest. It is caused by many outside factors, depending on the type of product as well as how the product is packaged and stored. Due to food spoilage, one-third of the world's food produced for human consumption is lost every year. Bacteria and various fungi are the cause of spoilage and can create serious consequences for consumers, but there are preventive measures that can be taken.
Bacteria
Some bacteria are responsible for the spoilage of food. When bacteria break down the food, acids and other waste products are generated in the process. While the bacteria themselves may or may not be harmful, the waste products may be unpleasant to taste or may even be harmful to one's health.
There are many species of pathogenic bacteria that target different categories of food. For example, Clostridium botulinum spoils food such as meat and poultry, while Bacillus cereus spoils almost all types of food. When food is stored or handled improperly, the organisms begin to multiply rapidly, releasing harmful toxins that can cause severe illness, even when the food is cooked safely.
Fungi
Fungi have been seen as a cause of food spoilage that produces only an undesirable appearance to food; however, there is significant evidence that various fungi can be a cause of death. Fungal spoilage proceeds by acidifying, fermenting, discoloring and disintegrating processes and can create fuzz, powder and slimes of many different colors, including black, white, red, brown and green.
Mold is a type of fungus, but the two terms are not reciprocal of each other; they have their own defining features and perform their own tasks. Very well known types of mold are Aspergillus and Penicillium, and, like regular fungi, create a fuzz, powder and slime of various colors.
Yeast is also a type of fungus that grows vegetatively via single cells that either bud or divide by way of |
https://en.wikipedia.org/wiki/Industrial%20enzymes | Industrial enzymes are enzymes that are commercially used in a variety of industries such as pharmaceuticals, chemical production, biofuels, food & beverage, and consumer products. Due to advancements in recent years, biocatalysis through isolated enzymes is considered more economical than use of whole cells. Enzymes may be used as a unit operation within a process to generate a desired product, or may be the product of interest. Industrial biological catalysis through enzymes has experienced rapid growth in recent years due to their ability to operate at mild conditions, and exceptional chiral and positional specificity, things that traditional chemical processes lack. Isolated enzymes are typically used in hydrolytic and isomerization reactions. Whole cells are typically used when a reaction requires a co-factor. Although co-factors may be generated in vitro, it is typically more cost-effective to use metabolically active cells.
Enzymes as a unit of operation
Immobilization
Despite their excellent catalytic capabilities, enzymes and their properties must be improved prior to industrial implementation in many cases. Some aspects of enzymes that must be improved prior to implementation are stability, activity, inhibition by reaction products, and selectivity towards non-natural substrates. This may be accomplished through immobilization of enzymes on a solid material, such as a porous support. Immobilization of enzymes greatly simplifies the recovery process, enhances process control, and reduces operational costs. Many immobilization techniques exist, such as adsorption, covalent binding, affinity, and entrapment. Ideal immobilization processes should not use highly toxic reagents in the immobilization technique to ensure stability of the enzymes. After immobilization is complete, the enzymes are introduced into a reaction vessel for biocatalysis.
Adsorption
Enzyme adsorption onto carriers functions based on chemical and physical phenomena such as van der Waa |
https://en.wikipedia.org/wiki/Black-box%20testing | Black-box testing is a method of software testing that examines the functionality of an application without peering into its internal structures or workings. This method of test can be applied virtually to every level of software testing: unit, integration, system and acceptance. It is sometimes referred to as specification-based testing.
Test procedures
Specific knowledge of the application's code and internal structure, and programming knowledge in general, is not required. The tester is aware of what the software is supposed to do but is not aware of how it does it. For instance, the tester is aware that a particular input returns a certain, invariable output but is not aware of how the software produces the output in the first place.
Test cases
Test cases are built around specifications and requirements, i.e., what the application is supposed to do. Test cases are generally derived from external descriptions of the software, including specifications, requirements and design parameters. Although the tests used are primarily functional in nature, non-functional tests may also be used. The test designer selects both valid and invalid inputs and determines the correct output, often with the help of a test oracle or a previous result that is known to be good, without any knowledge of the test object's internal structure.
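A minimal sketch of specification-based test cases in Python's unittest (the `shipping_fee` function and its fee schedule are invented purely to illustrate equivalence partitioning and boundary value analysis, two of the techniques listed below):

```python
import unittest

def shipping_fee(weight_kg: float) -> float:
    """Hypothetical unit under test: flat fee up to 10 kg, surcharge above.
    A black-box tester sees only this specification, not the code."""
    return 5.0 if weight_kg <= 10 else 5.0 + 2.0 * (weight_kg - 10)

class BlackBoxTests(unittest.TestCase):
    def test_equivalence_classes(self):           # one input per partition
        self.assertEqual(shipping_fee(4), 5.0)    # class: light parcels
        self.assertEqual(shipping_fee(15), 15.0)  # class: heavy parcels

    def test_boundary_values(self):               # probe the 10 kg boundary
        self.assertEqual(shipping_fee(10), 5.0)   # on the boundary
        self.assertEqual(shipping_fee(11), 7.0)   # just past it

if __name__ == "__main__":
    unittest.main()
```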
Test design techniques
Typical black-box test design techniques include:
Decision table testing
All-pairs testing
Equivalence partitioning
Boundary value analysis
Cause–effect graph
Error guessing
State transition testing
Use case testing
User story testing
Domain analysis
Syntax testing
Combining technique
Hacking
In penetration testing, black-box testing refers to a method where an ethical hacker has no knowledge of the system being attacked. The goal of a black-box penetration test is to simulate an external hacking or cyber warfare attack.
See also
ABX test
Acceptance testing
Blind experiment
Boundary testing
Fuzz testing
Gr |
https://en.wikipedia.org/wiki/KIR2DL1 | Killer cell immunoglobulin-like receptor 2DL1 is a protein that in humans is encoded by the KIR2DL1 gene.
Function
Killer-cell immunoglobulin-like receptors (KIRs) are transmembrane glycoproteins expressed by natural killer cells and subsets of T cells. The KIR genes are polymorphic and highly homologous and they are found in a cluster on chromosome 19q13.4 within the 1 Mb leukocyte receptor complex (LRC). The gene content of the KIR gene cluster varies among haplotypes, although several "framework" genes are found in all haplotypes (KIR3DL3, KIR3DP1, KIR2DL4, KIR3DL2). The KIR proteins are classified by the number of extracellular immunoglobulin domains (2D or 3D) and by whether they have a long (L) or short (S) cytoplasmic domain. KIR proteins with the long cytoplasmic domain transduce inhibitory signals upon ligand binding via an immune tyrosine-based inhibitory motif (ITIM), while KIR proteins with the short cytoplasmic domain lack the ITIM motif and instead associate with the TYRO protein tyrosine kinase binding protein to transduce activating signals. The ligands for several KIR proteins are subsets of HLA class I molecules; thus, KIR proteins are thought to play an important role in regulation of the immune response.
Interactions
KIR2DL1 has been shown to interact with HLA-C.
See also
Cluster of differentiation |
https://en.wikipedia.org/wiki/Maximum%20cut | For a graph, a maximum cut is a cut whose size is at least the size of any other cut. That is, it is a partition of the graph's vertices into two complementary sets S and T, such that the number of edges between S and T is as large as possible. Finding such a cut is known as the max-cut problem.
The problem can be stated simply as follows. One wants a subset S of the vertex set such that the number of edges between S and the complementary subset is as large as possible. Equivalently, one wants a bipartite subgraph of the graph with as many edges as possible.
There is a more general version of the problem called weighted max-cut, where each edge is associated with a real number, its weight, and the objective is to maximize the total weight of the edges between S and its complement rather than the number of the edges. The weighted max-cut problem allowing both positive and negative weights can be trivially transformed into a weighted minimum cut problem by flipping the sign in all weights.
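A minimal Python sketch of the classic single-flip local search (names are illustrative): any local optimum cuts at least half the edges, since each vertex then has at least as many cut as uncut incident edges, which matches the m/2 term of the bounds below.

```python
def local_search_cut(n, edges):
    """Greedily move vertices between the two sides while any single
    move strictly increases the cut; terminates because the cut value
    increases with every flip."""
    side = [0] * n
    improved = True
    while improved:
        improved = False
        for v in range(n):
            cut = sum(side[a] != side[b] for a, b in edges if v in (a, b))
            uncut = sum(side[a] == side[b] for a, b in edges if v in (a, b))
            if uncut > cut:      # flipping v strictly improves the cut
                side[v] ^= 1
                improved = True
    return side

edges = [(0, 1), (1, 2), (2, 0), (2, 3)]
side = local_search_cut(4, edges)
print(sum(side[a] != side[b] for a, b in edges))  # 3 cut edges >= m/2 = 2
```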
Lower bounds
Edwards obtained the following two lower bounds for Max-Cut on a graph G with n vertices and m edges (in (a) G is arbitrary, but in (b) it is connected):
(a) mc(G) ≥ m/2 + (√(8m + 1) − 1)/8,
(b) mc(G) ≥ m/2 + (n − 1)/4.
Bound (b) is often called the Edwards–Erdős bound as Erdős conjectured it. Edwards proved the Edwards–Erdős bound using the probabilistic method; Crowston et al. proved the bound using linear algebra and analysis of pseudo-boolean functions.
The proof of Crowston et al. allows us to extend the Edwards–Erdős bound to the Balanced Subgraph Problem (BSP) on signed graphs, i.e. graphs where each edge is assigned + or –. For a partition of the vertex set into subsets V₁ and V₂, an edge is balanced if either it is a +-edge and its endpoints are in the same subset, or it is a –-edge and its endpoints are in different subsets. BSP aims at finding a partition with the maximum number of balanced edges. The Edwards–Erdős bound gives a lower bound on this maximum number of balanced edges for every connected signed graph.
Bound (a) was improved for special classes of graphs: triangle-free graphs, graphs of given maximum degree |
https://en.wikipedia.org/wiki/International%20Center%20for%20Relativistic%20Astrophysics | ICRA, the International Center for Relativistic Astrophysics is an international research institute for relativistic astrophysics and related areas. Its members are seven Universities and four organizations. The center is located in Rome, Italy.
The International Center for Relativistic Astrophysics (ICRA) was founded in 1985 by Remo Ruffini (University of Rome "La Sapienza") together with Riccardo Giacconi (Nobel Prize for Physics 2002), Abdus Salam (Nobel Prize for Physics 1979), Paul Boyton (University of Washington), George Coyne (former director of the Vatican observatory), Francis Everitt (Stanford University), and Fang Li-Zhi (University of Science and Technology of China). It became a legal entity in 1991 with the Ministerial Decree 22/11/1991 from the Ministry of Education, Universities and Research. As Physics Today recounts: "In 1978 Fang was assigned to host Ruffini, a guest of the Chinese Academy of Sciences. They gave joint university lectures and developed a profound friendship. In 1981 in China they published a small book introducing relativistic astrophysics that became revered among astrophysics students. In 1982 Fang and Ruffini organized the first international conference on astrophysics in China—the third Marcel Grossmann Meeting—and thereafter remained organizers of the Grossmann meetings. Together with Abdus Salam, Riccardo Giacconi, George Coyne, and Francis Everitt, they founded the International Center for Relativistic Astrophysics (ICRA) in 1985." The International Center of Relativistic Astrophysics is located in the Department of Physics building at the main campus of the University of Rome "Sapienza".
In 2005 ICRA was among the founders of ICRANet, the International Center for Relativistic Astrophysics Network. The national activities of research and teaching in Italy remained operative at ICRA in Rome, while international activities and coordination are now based at ICRANet in Pescara.
Structure
President: Yu Wang
Former President: Remo Ruffini
I |