| source | text |
|---|---|
https://en.wikipedia.org/wiki/Sound%20quality | Sound quality is typically an assessment of the accuracy, fidelity, or intelligibility of audio output from an electronic device. Quality can be measured objectively, such as when tools are used to gauge the accuracy with which the device reproduces an original sound; or it can be measured subjectively, such as when human listeners respond to the sound or gauge its perceived similarity to another sound.
The sound quality of a reproduction or recording depends on a number of factors, including the equipment used to make it, processing and mastering done to the recording, the equipment used to reproduce it, as well as the listening environment used to reproduce it. In some cases, processing such as equalization, dynamic range compression or stereo processing may be applied to a recording to create audio that is significantly different from the original but may be perceived as more agreeable to a listener. In other cases, the goal may be to reproduce audio as closely as possible to the original.
When applied to specific electronic devices, such as loudspeakers, microphones, amplifiers, or headphones, sound quality usually refers to accuracy, with higher quality devices providing higher accuracy reproduction. When applied to processing steps such as mastering recordings, absolute accuracy may be secondary to artistic or aesthetic concerns. In still other situations, such as recording a live musical performance, audio quality may refer to proper placement of microphones around a room to optimally use room acoustics.
Digital audio
Digital audio is stored in many formats. The simplest form is uncompressed PCM, where audio is stored as a series of quantized audio samples spaced at regular intervals in time. As samples are placed closer together in time, higher frequencies can be reproduced. According to the sampling theorem, any bandwidth-limited signal (that does not contain a pure sinusoidal component) with bandwidth B can be perfectly described by more than 2B samples per second. |
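The sampling relationship can be made concrete with a few lines of code. The following C sketch (an illustration added here, with arbitrary example values for the sample rate and tone frequency) generates one second of quantized PCM samples of a sine tone, spaced at regular intervals as described above:

```c
#include <math.h>
#include <stdint.h>
#include <stdio.h>

/* Generate one second of 16-bit PCM samples of a pure tone and report
 * the Nyquist limit implied by the chosen sample rate. */
int main(void)
{
    const double pi = 3.14159265358979323846;
    const int sample_rate = 8000; /* samples per second (example value) */
    const double tone_hz = 440.0; /* must stay below sample_rate / 2   */

    printf("Nyquist limit: %d Hz\n", sample_rate / 2);
    for (int n = 0; n < sample_rate; n++) {
        /* Quantize the continuous sine at the instant t = n / sample_rate. */
        int16_t sample =
            (int16_t)(32767.0 * sin(2.0 * pi * tone_hz * n / sample_rate));
        if (n < 4)
            printf("sample[%d] = %d\n", n, sample);
    }
    return 0;
}
```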
https://en.wikipedia.org/wiki/Thermus%20antranikianii | Thermus antranikianii is a bacterium belonging to the Deinococcota phylum, known to be present in hazardous conditions. This species was identified in Iceland, together with Thermus igniterrae. |
https://en.wikipedia.org/wiki/Mac%20gaming | Mac gaming refers to the use of video games on Macintosh personal computers. In the 1990s, Apple computers did not attract the same level of video game development as Microsoft Windows computers due to the high popularity of Microsoft Windows and, for 3D gaming, Microsoft's DirectX technology. In recent years, the introduction of Mac OS X and support for Intel processors has eased porting of many games, including 3D games through use of OpenGL and more recently Apple's own Metal API. Virtualization technology and Boot Camp also permit the use of Windows and its games on Macintosh computers. Today, a growing number of popular games run natively on macOS, though as of early 2019, a majority still require the use of Microsoft Windows.
macOS Catalina (and later) eliminated support for 32-bit games, including those compatible with older versions of macOS.
Early game development on the Mac
Prior to the release of the Macintosh 128K, the first Macintosh computer, marketing executives at Apple feared that including a game in the finished operating system would aggravate the impression that the graphical user interface made the Mac toy-like. More critically, the limited amount of RAM in the original Macintosh meant that fitting a game into the operating system would be very difficult. Eventually, Andy Hertzfeld created a Desk Accessory called Puzzle that occupied only 600 bytes of memory. This was deemed small enough to be safely included in the operating system, and it shipped with the Mac when released in 1984. With Puzzle—the first computer game specifically for a mouse—the Macintosh became the first computer with a game in its ROM, and it would remain a part of the Mac OS for the next ten years, until being replaced in 1994 with Jigsaw, a jigsaw puzzle game included as part of System 7.5.
During the development of the Mac, a chess game similar to Archon based on Alice in Wonderland was shown to the development team. The game was written by Steve Capps for the Apple L |
https://en.wikipedia.org/wiki/Vault%20Boy | Vault Boy is the mascot of the Fallout media franchise. Created by staff at Interplay Entertainment, the original owners of the Fallout intellectual property (IP), Vault Boy was introduced in 1997's Fallout as an advertising character representing Vault-Tec, a fictional megacorporation that built a series of specialized fallout shelters throughout the United States prior to the nuclear holocaust that sets up the world state of the Fallout universe. Within the video game series, Vault Boy serves as a generic representation of the player character's statistical information within user interface (UI) menus, and is a recurring element in Vault-Tec products found throughout the fictional Fallout universe.
Vault Boy's design was developed by Leonard Boyarsky, who drew inspiration from 1950s films as well as the visual aesthetic of the economics-themed board game Monopoly. Vault Boy is a ubiquitous feature in promotional material and merchandising for the Fallout brand, and is regarded by critics to be one of the most recognizable elements of the franchise and the embodiment of its sardonic, retrofuturistic themes.
Concept and design
Vault Boy was unnamed in 1997's Fallout, although the game's instruction manual refers to the character as Vault-Man. He was created by Leonard Boyarsky, who originally thought of him as the "skill guy" when he developed the character's first piece of concept art. Vault Boy was partly based on Rich Uncle Pennybags' aesthetic from the Monopoly board game, and Boyarsky came up with the idea and design for the Vault Boy “cards”, which are intended to evoke the feel of Monopoly cards by showing the character engaged in a variety of activities in humorous ways. Vault Boy was drawn for Fallout by George Almond for the first few cards and by Tramell Ray Isaac, who finalized the look of the character. The character is inspired by films made during the 1950s, in particular the cartoon character Bert the Turtle from the 1952 civil defense animated live |
https://en.wikipedia.org/wiki/Canjica | Canjica is a white variety of corn typical of Brazilian cuisine. It is mostly used in a special kind of sweet popcorn and in a sweet dish also named "canjica", a popular Festa Junina dish.
See also
List of Brazilian dishes
List of Brazilian sweets and desserts |
https://en.wikipedia.org/wiki/Kolmogorov%E2%80%93Zurbenko%20filter | Within statistics, the Kolmogorov–Zurbenko (KZ) filter was first proposed by A. N. Kolmogorov and formally defined by Zurbenko. It is a series of iterations of a moving average filter of length m, where m is a positive, odd integer. The KZ filter belongs to the class of low-pass filters. The KZ filter has two parameters, the length m of the moving average window and the number of iterations k of the moving average itself. It also can be considered as a special window function designed to eliminate spectral leakage.
Background
A. N. Kolmogorov had the original idea for the KZ filter during a study of turbulence in the Pacific Ocean. Kolmogorov had just received the International Balzan Prize for his law of 5/3 in the energy spectra of turbulence. Surprisingly, the 5/3 law was not obeyed in the Pacific Ocean, causing great concern. The standard fast Fourier transform (FFT) was completely fooled by the noisy and non-stationary ocean environment. KZ filtration resolved the problem and enabled proof of Kolmogorov's law in that domain. Filter construction relied on the main concepts of the continuous Fourier transform and their discrete analogues. The algorithm of the KZ filter came from the definition of higher-order derivatives for discrete functions as higher-order differences. Believing that infinite smoothness in the Gaussian window was a beautiful but unrealistic approximation of a truly discrete world, Kolmogorov chose a finitely differentiable tapering window with finite support, and created this mathematical construction for the discrete case. The KZ filter is robust and nearly optimal. Because its operation is a simple moving average (MA), the KZ filter performs well in a missing-data environment, especially in multidimensional time series where the missing-data problem arises from spatial sparseness. Another nice feature of the KZ filter is that its two parameters have a clear interpretation, so that it can be easily adopted by specialists in different areas. A few softwa |
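Because the filter is just an iterated moving average, a minimal implementation is short. The C sketch below (an illustration, not a reference implementation) applies a centred moving average of odd length m, iterated k times; near the edges the window is truncated to the part that stays inside the series, one common convention for handling boundaries:

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* One pass of a centred moving average of odd length m over x[0..n-1].
 * Near the edges the window is truncated to the part inside the series. */
static void moving_average(const double *x, double *y, int n, int m)
{
    int half = m / 2;
    for (int i = 0; i < n; i++) {
        int lo = (i - half < 0) ? 0 : i - half;
        int hi = (i + half > n - 1) ? n - 1 : i + half;
        double sum = 0.0;
        for (int j = lo; j <= hi; j++)
            sum += x[j];
        y[i] = sum / (hi - lo + 1);
    }
}

/* KZ(m, k): iterate the moving average k times, in place on x. */
void kz_filter(double *x, int n, int m, int k)
{
    double *tmp = malloc(n * sizeof *tmp);
    if (tmp == NULL)
        return;
    for (int it = 0; it < k; it++) {
        moving_average(x, tmp, n, m);
        memcpy(x, tmp, n * sizeof *x);
    }
    free(tmp);
}

int main(void)
{
    double x[] = {1, 5, 2, 8, 3, 9, 4, 7, 2, 6};
    kz_filter(x, 10, 3, 2); /* window length m = 3, k = 2 iterations */
    for (int i = 0; i < 10; i++)
        printf("%.3f ", x[i]);
    printf("\n");
    return 0;
}
```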
https://en.wikipedia.org/wiki/Monocytopoiesis | Monocytopoiesis is the process which leads to the production of monocytes (and, subsequently, macrophages).
It can be induced by macrophage colony-stimulating factor.
It is a component of myelopoiesis. |
https://en.wikipedia.org/wiki/Sycophancy | In modern English, sycophant denotes an "insincere flatterer" and is used to refer to someone practising sycophancy (i.e., insincere flattery to gain advantage). The word has its origin in the legal system of Classical Athens. Most legal cases of the time were brought by private litigants as there was no police force and only a limited number of officially appointed public prosecutors. By the fifth century BC this practice had given rise to abuse by "sycophants": litigants who brought unjustified prosecutions. The word retains the same meaning ('slanderer') in Modern Greek, French, (where it also can mean 'informer') and Italian. In modern English, the meaning of the word has shifted to its present usage.
Etymology
The origin of the Ancient Greek word (συκοφάντης, sykophántēs) is a matter of debate, but it disparages the unjustified accuser who has in some way perverted the legal system.
The original etymology of the word (σῦκον sûkon 'fig', and φαίνω phaínō 'to show', i.e. "revealer of figs") has been the subject of extensive scholarly speculation and conjecture. Plutarch appears to be the first to have suggested that the source of the term was in laws forbidding the exportation of figs, and that those who leveled the accusation against another of illegally exporting figs were therefore called sycophants. Athenaeus provided a similar explanation. Blackstone's Commentaries repeats this story, but adds an additional take—that there were laws making it a capital offense to break into a garden and steal figs, and that the law was so odious that informers were given the name sycophants.
A different explanation of the origin of the term by Shadwell was that the sycophant refers to the manner in which figs are harvested, by shaking the tree and revealing the fruit hidden among the leaves. The sycophant, by making false accusations, makes the accused yield up their fruit. The Encyclopædia Britannica Eleventh Edition listed these and other explanations, including that the making of false accusations was an insult to th |
https://en.wikipedia.org/wiki/Fundamental%20polygon | In mathematics, a fundamental polygon can be defined for every compact Riemann surface of genus greater than 0. It encodes not only the topology of the surface through its fundamental group but also determines the Riemann surface up to conformal equivalence. By the uniformization theorem, every compact Riemann surface has a simply connected universal covering surface given by exactly one of the following:
the Riemann sphere,
the complex plane,
the unit disk D or equivalently the upper half-plane H.
In the first case of genus zero, the surface is conformally equivalent to the Riemann sphere.
In the second case of genus one, the surface is conformally equivalent to a torus C/Λ for some lattice Λ in C. The fundamental polygon of Λ, if assumed convex, may be taken to be either a period parallelogram or a centrally symmetric hexagon, a result first proved by Fedorov in 1891.
In the last case of genus g > 1, the Riemann surface is conformally equivalent to H/Γ where Γ is a Fuchsian group of Möbius transformations. A fundamental domain for Γ is given by a convex polygon for the hyperbolic metric on H. These can be defined by Dirichlet polygons and have an even number of sides. The structure of the fundamental group Γ can be read off from such a polygon. Using the theory of quasiconformal mappings and the Beltrami equation, it can be shown there is a canonical convex Dirichlet polygon with 4g sides, first defined by Fricke, which corresponds to the standard presentation of Γ as the group with 2g generators a1, b1, a2, b2, ..., ag, bg and the single relation [a1,b1][a2,b2] ⋅⋅⋅ [ag,bg] = 1, where [a,b] = a b a−1b−1.
Any Riemannian metric on an oriented closed 2-manifold M defines a complex structure on M, making M a compact Riemann surface. Through the use of fundamental polygons, it follows that two oriented closed 2-manifolds are classified by their genus, that is half the rank of the Abelian group Γ/[Γ,Γ], where Γ = π1(M). Moreover, it also follows from the theory of |
https://en.wikipedia.org/wiki/Transcritical%20bifurcation | In bifurcation theory, a field within mathematics, a transcritical bifurcation is a particular kind of local bifurcation, meaning that it is characterized by an equilibrium having an eigenvalue whose real part passes through zero.
A transcritical bifurcation is one in which a fixed point exists for all values of a parameter and is never destroyed. However, such a fixed point interchanges its stability with another fixed point as the parameter is varied. In other words, both before and after the bifurcation, there is one unstable and one stable fixed point. However, their stability is exchanged when they collide. So the unstable fixed point becomes stable and vice versa.
The normal form of a transcritical bifurcation is
dx/dt = rx − x².
This equation is similar to the logistic equation, but in this case we allow x and r to be positive or negative (while in the logistic equation x and r must be non-negative).
The two fixed points are at x = 0 and x = r. When the parameter r is negative, the fixed point at x = 0 is stable and the fixed point x = r is unstable. But for r > 0, the point at x = 0 is unstable and the point at x = r is stable. So the bifurcation occurs at r = 0.
A typical example (in real life) could be the consumer-producer problem where the consumption is proportional to the (quantity of) resource.
For example:
dx/dt = rx(1 − x/K) − px,
where
rx(1 − x/K) is the logistic equation of resource growth; and
px is the consumption, proportional to the resource x. |
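The exchange of stability can be checked numerically by integrating the normal form dx/dt = rx − x² on either side of r = 0. A C sketch (forward Euler, with arbitrary step size and initial condition chosen for illustration):

```c
#include <stdio.h>

/* Right-hand side of the transcritical normal form dx/dt = r*x - x*x. */
static double f(double r, double x) { return r * x - x * x; }

/* Integrate with forward Euler and return the state after many steps. */
static double settle(double r, double x0)
{
    double x = x0, dt = 0.01;
    for (int i = 0; i < 100000; i++)
        x += dt * f(r, x);
    return x;
}

int main(void)
{
    /* For r < 0 trajectories near 0 settle at x = 0; for r > 0 at x = r. */
    printf("r = -1: x -> %.4f (fixed point x = 0 stable)\n", settle(-1.0, 0.1));
    printf("r = +1: x -> %.4f (fixed point x = r stable)\n", settle(1.0, 0.1));
    return 0;
}
```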
https://en.wikipedia.org/wiki/Space%20Opera%20Miniatures | Space Opera Miniatures is a set of miniatures published by Fantasy Games Unlimited for Space Opera.
Contents
Space Opera Miniatures is a line of 15 mm scale miniatures, with each set containing 10 different figures intended to be used to represent characters for Space Opera.
Reception
William A. Barton reviewed Space Opera Miniatures in The Space Gamer No. 49. Barton commented that "Generally, the Space Opera Miniatures are well-cast and quite suitable for role-playing use, either with Space Opera or mixed (for variety) with figures from other lines for Traveller, Star Patrol, Universe, or any other SF RPG or miniature system."
Ian J. Knight reviewed Space Opera Figures for Imagine magazine, and stated that "The range is nicely detailed, especially about the aliens' faces, full of character, and chunky in the currently popular fashion. The casting is crisp and largely flash-free. It will stretch your painting abilities to do them justice, but the results will stand proudly alongside the best of the existing 15mm ranges." |
https://en.wikipedia.org/wiki/Multiplexed%20point-of-care%20testing | Multiplexed point-of-care testing (xPOCT) is a more complex form of point-of-care testing (POCT), or bedside testing. Point-of-care testing is designed to provide diagnostic tests at or near the time and place that the patient is admitted. POCT uses the concentrations of analytes to provide the user with information on the physiological state of the patient. An analyte is a substance, chemical or biological, that is being analyzed using a certain instrument. While point-of-care testing is the quantification of one analyte from one in vitro (e.g. blood, plasma or urine) sample, multiplexed point-of-care testing is the simultaneous on-site quantification of various analytes from a single sample.
Processing of one biological sample to yield multiple biomarker results allows for POCT testing to be done for patients who may have conditions that require the confirmation of multiple biomarkers and tests before diagnosis (e.g. many types of cancers). xPOCT has important emerging applications in resource-limited settings (e.g., in developing countries, in doctors' practices, or at home by non-experts), and has recently become more important for in vitro diagnostics.
Background
Historically, medical testing has been a tedious, long and expensive process in a clinical setting. It usually involves taking a large sample from the patient (e.g. urine, blood, saliva, tissue swab), and processing it in a separate laboratory, which takes hours or sometimes days to complete. In that time frame, the patient needs to be provided with care, which is difficult to do well without the desired information from the laboratory test. As far back as the 1950s, radioimmunoassays were first demonstrated for the sensitive detection of insulin and thyroxine levels in human plasma. In the 1990s, research that was being conducted in the microelectronics industry was applied to the design of immunoassays and since then the applications for immunoassays have expanded.
There has been a movement t |
https://en.wikipedia.org/wiki/The%20Human%20Zoo%20%28book%29 | The Human Zoo is a book written by the British zoologist Desmond Morris, published in 1969. It is a follow-up to his earlier book The Naked Ape; both books examine how the biological nature of the human species has shaped the character of the cultures of the contemporary world.
The Human Zoo examines the nature of civilised society, especially in the cities. Morris compares the human inhabitants of a city to the animal inhabitants of a zoo, which have their survival needs provided for, but at the cost of living in an unnatural environment. Humans in their cities, and animals in their zoos, both have food and shelter provided for them, and have considerable free time on their hands. But they have to live in an unnatural environment, and are both likely to have problems in developing healthy social relationships, both are liable to suffer from isolation and boredom, and both live in a limited amount of physical space. The book explains how the inhabitants of cities and zoos have invented ways to deal with these problems, and the consequences that follow when they fail at dealing with them.
From this point of view, Morris examines why civilised society is the way it is. He offers explanations of the best and the worst features of civilised society. He examines the magnificent achievements of civilised society, the sublime explorations that make up science and the humanities, as well as the horrible behaviours of this same society such as war, slavery and rape. This book, and Morris's earlier book The Naked Ape, are two of the early works in the field of sociobiology, which have both contributed much to contemporary understandings of society.
The Unabomber, Ted Kaczynski, was heavily influenced by The Human Zoo. Kaczynski’s concept of “surrogate activities” comes from Morris’s concept of “survival-substitute activities,” while Kaczynski's concept of “the power process” is based on Morris’s concept of “the Stimulus Struggle”, though he disagreed with Morris on the ex |
https://en.wikipedia.org/wiki/Luminous%20intensity | In photometry, luminous intensity is a measure of the wavelength-weighted power emitted by a light source in a particular direction per unit solid angle, based on the luminosity function, a standardized model of the sensitivity of the human eye. The SI unit of luminous intensity is the candela (cd), an SI base unit.
Measurement
Photometry deals with the measurement of visible light as perceived by human eyes. The human eye can only see light in the visible spectrum and has different sensitivities to light of different wavelengths within the spectrum. When adapted for bright conditions (photopic vision), the eye is most sensitive to yellow-green light at 555 nm. Light with the same radiant intensity at other wavelengths has a lower luminous intensity. The curve which represents the response of the human eye to light is a standard function established by the International Commission on Illumination (CIE, for Commission Internationale de l'Éclairage) and standardized in collaboration with the ISO.
Luminous intensity of artificial light sources is typically measured using a goniophotometer outfitted with a photometer or a spectroradiometer.
Relationship to other measures
Luminous intensity should not be confused with another photometric unit, luminous flux, which is the total perceived power emitted in all directions. Luminous intensity is the perceived power per unit solid angle. If a lamp has a 1 lumen bulb and the optics of the lamp are set up to focus the light evenly into a 1 steradian beam, then the beam would have a luminous intensity of 1 candela. If the optics were changed to concentrate the beam into 1/2 steradian then the source would have a luminous intensity of 2 candela. The resulting beam is narrower and brighter, though its luminous flux remains unchanged.
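The lamp example is plain arithmetic: luminous intensity equals luminous flux divided by solid angle, Iv = Φv / Ω. A minimal C illustration of the two cases above:

```c
#include <stdio.h>

/* Luminous intensity (candela) = luminous flux (lumen) / solid angle (steradian). */
static double candela(double flux_lm, double solid_angle_sr)
{
    return flux_lm / solid_angle_sr;
}

int main(void)
{
    printf("1 lm into 1 sr   -> %.1f cd\n", candela(1.0, 1.0)); /* 1.0 cd */
    printf("1 lm into 0.5 sr -> %.1f cd\n", candela(1.0, 0.5)); /* 2.0 cd */
    return 0;
}
```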
Luminous intensity is also not the same as the radiant intensity, the corresponding objective physical quantity used in the measurement science of radiometry.
Units
Like other |
https://en.wikipedia.org/wiki/UBJSON | Universal Binary JSON (UBJSON) is a computer data interchange format. It is a binary form directly imitating JSON, but requiring fewer bytes of data. It aims to achieve the generality of JSON, combined with being much easier to process than JSON.
Rationale and Objectives
UBJSON is a proposed successor to BSON, BJSON and others. UBJSON has the following goals:
Complete compatibility with the JSON specification – there is a 1:1 mapping between standard JSON and UBJSON.
Ease of implementation – only including data types that are widely supported in popular programming languages so that there are no problems with certain languages not being supported well.
Ease of use – it can be quickly understood and adopted.
Speed and efficiency – UBJSON uses data representations that are (roughly) 30% smaller than their compacted JSON counterparts and are optimized for fast parsing. Streamed serialisation is supported, meaning that the transfer of UBJSON over a network connection can start sending data before the final size of the data is known.
Data types and syntax
UBJSON data can be either a value or a container.
Value types
UBJSON uses a single binary tuple to represent all JSON value types:
type [length] [data]
Each element in the tuple is defined as:
type
The type is a 1-byte ASCII character used to indicate the type of the data following it. The ASCII characters were chosen to make manually walking and debugging data stored in the UBJSON format as easy as possible (e.g. making the data relatively readable in a hex editor). Types are available for the five JSON value types. There is also a no-op type used for stream keep-alive.
Null: Z
No-op: N - no operation, to be ignored by the receiving end
Boolean types: true (T) and false (F)
Numeric types: int8 (i), uint8 (U), int16 (I), int32 (l), int64 (L), float32 (d), float64 (D), and high-precision (H)
ASCII character: C
UTF-8 string: S
High-precision numbers are represented as an arbitrarily long, UTF-8 stri |
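The type [length] [data] layout can be illustrated with a toy encoder. The C sketch below is only an illustration built from the marker letters listed above, not a validated UBJSON implementation; it assumes UBJSON's big-endian byte order for multi-byte numbers and encodes a string length as a nested numeric value:

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Append an int8 value: marker 'i' followed by one byte. */
static size_t put_int8(uint8_t *out, int8_t v)
{
    out[0] = 'i';
    out[1] = (uint8_t)v;
    return 2;
}

/* Append an int32 value: marker 'l' followed by 4 big-endian bytes. */
static size_t put_int32(uint8_t *out, int32_t v)
{
    out[0] = 'l';
    for (int i = 0; i < 4; i++)
        out[1 + i] = (uint8_t)(v >> (8 * (3 - i)));
    return 5;
}

/* Append a short string: marker 'S', then its length as an int8 value,
 * then the UTF-8 bytes (no trailing NUL is stored). */
static size_t put_string(uint8_t *out, const char *s)
{
    size_t len = strlen(s); /* assumed to fit in an int8 for this sketch */
    size_t n = put_int8(out + 1, (int8_t)len);
    out[0] = 'S';
    memcpy(out + 1 + n, s, len);
    return 1 + n + len;
}

int main(void)
{
    uint8_t buf[64];
    size_t n = 0;
    n += put_int8(buf + n, 42);
    n += put_int32(buf + n, 100000);
    n += put_string(buf + n, "hi");
    for (size_t i = 0; i < n; i++)
        printf("%02x ", buf[i]);
    printf("\n"); /* 69 2a 6c 00 01 86 a0 53 69 02 68 69 */
    return 0;
}
```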
https://en.wikipedia.org/wiki/Biffen%20Lecture | The Biffen Lecture is a lectureship organised by the John Innes Centre, named after Rowland Biffen.
Lecturers
Source: John Innes Centre
2001 John Doebley
2002 Francesco Salamini
2003 Steven D. Tanksley
2004 Michael Freeling
2006 Dick Flavell
2008 Rob Martienssen – 'Propagating silent heterochromatin with RNA interference in plants and fission yeast'
2009 Susan McCouch, Department of Plant Breeding & Genetics, Cornell University – 'Gene flow and genetic isolation during crop evolution'
2010 Peter Langridge, University of Adelaide, Australia – 'Miserable but worth the trouble: Genomics, wheat and difficult environments'
2012 Sarah Hake, Plant Gene Expression Center, USDA-ARS – 'Patterning the maize leaf'
2014 Professor Pamela Ronald, Department of Plant Pathology & The Genome Center, University of California Davis – ‘Engineering crops for resistance to disease and tolerance of stress’
2015 Professor Lord May, Department of Zoology, University of Oxford – ‘Unanswered questions in ecology, and why they matter’
2016 Edward Buckler, US Department of Agriculture – ‘Breeding 4.0? Sorting through the adaptive and deleterious variants in maize and beyond’
See also
Bateson Lecture
Chatt Lecture
Darlington Lecture
Haldane Lecture
List of genetics awards |
https://en.wikipedia.org/wiki/Issues%20relating%20to%20biofuels | There are various social, economic, environmental and technical issues with biofuel production and use, which have been discussed in the popular media and scientific journals. These include: the effect of moderating oil prices, the "food vs fuel" debate, poverty reduction potential, carbon emissions levels, sustainable biofuel production, deforestation and soil erosion, loss of biodiversity, effect on water resources, the possible modifications necessary to run the engine on biofuel, as well as energy balance and efficiency. The International Resource Panel, which provides independent scientific assessments and expert advice on a variety of resource-related themes, assessed the issues relating to biofuel use in its first report Towards sustainable production and use of resources: Assessing Biofuels. In it, it outlined the wider and interrelated factors that need to be considered when deciding on the relative merits of pursuing one biofuel over another. It concluded that not all biofuels perform equally in terms of their effect on climate, energy security and ecosystems, and suggested that environmental and social effects need to be assessed throughout the entire life-cycle.
Social and economic effects
Oil price moderation
The International Energy Agency's World Energy Outlook 2006 concludes that rising oil demand, if left unchecked,
would accentuate the consuming countries' vulnerability to a severe supply disruption and resulting price shock. The report suggested that biofuels may one day offer a viable alternative, but also that "the implications of the use of biofuels for global security as well as for economic, environmental, and public health need to be further evaluated".
According to Francisco Blanch, a commodity strategist for Merrill Lynch, crude oil would be trading 15 per cent higher and gasoline would be as much as 25 per cent more expensive, if it were not for biofuels. Gordon Quaiattini, president of the Canadian Renewable Fuels Association, argued |
https://en.wikipedia.org/wiki/2p15-16.1%20microdeletion%20syndrome | 2p15-16.1 microdeletion is an extremely rare genetic disorder caused by a small deletion in the short arm of human chromosome 2. First described in two patients in 2007, by 2013 only 21 people had been reported as having the disorder in the medical literature.
Presentation
As of 2013, only 21 patients with a 2p15-16.1 microdeletion had been identified. The clinical similarities between the individuals resulted in the classification of a new genetic syndrome. The shared clinical features include moderate to severe intellectual disability and similar facial features including telecanthus, drooping eyelids, downslanting and short palpebral fissures, a prominent nasal bridge, high palate with long, smooth philtrum and an everted lower lip. Some of the patients also had feeding problems in infancy, microcephaly, optic nerve hypoplasia and hydronephrosis, wide-spaced nipples, short stature, cortical dysplasia, camptodactyly and pigeon toe.
Cause
Three of the patients reported had a consistent proximal breakpoint on chromosome 2, but varying distal breakpoints. The patients have 2p15–16.1 deletions of 5.7 megabases (Mb), 4.5 Mb, 3.9 Mb, 3.35 Mb, 3.3 Mb and 570 kilobases, respectively. In all 21 patients the deletions are de novo — neither parent possessed nor transmitted the mutation to the affected individual. One patient is a genetic mosaic, having some cells with the deletion and others without.
Affected genes
The largest deletion encompasses approximately 15 protein-coding genes, 6 pseudogenes and a number of other as yet uncharacterised candidates, including:
AHSA2, activator of heat shock 90kDa protein ATPase homolog
BCL11A, B-cell lymphoma/leukemia 11A
C2orf74, Uncharacterized protein C2orf74
FANCL, E3 ubiquitin-protein ligase FANCL
KIAA1841, Uncharacterized protein KIAA1841
PAPOLG, Poly(A) polymerase gamma
PEX13, Peroxisomal membrane protein Peroxin-13
PUS10, Pseudouridylate synthase 10
REL, C-Rel proto-oncogene protein
SNORA70B, small nucleolar RNA, H/A |
https://en.wikipedia.org/wiki/Minimum%20intelligent%20signal%20test | The minimum intelligent signal test, or MIST, is a variation of the Turing test proposed by Chris McKinstry in which only boolean (yes/no or true/false) answers may be given to questions. The purpose of such a test is to provide a quantitative statistical measure of humanness, which may subsequently be used to optimize the performance of artificial intelligence systems intended to imitate human responses.
McKinstry gathered approximately 80,000 propositions that could be answered yes or no, e.g.:
Is Earth a planet?
Was Abraham Lincoln once President of the United States?
Is the sun bigger than my foot?
Do people sometimes lie?
He called these propositions Mindpixels.
These questions test both specific knowledge of aspects of culture, and basic facts about the meaning of various words and concepts. It could therefore be compared with the SAT, intelligence testing and other controversial measures of mental ability. McKinstry's aim was not to distinguish between shades of intelligence but to identify whether a computer program could be considered intelligent at all.
According to McKinstry, a program able to do much better than chance on a large number of MIST questions would be judged to have some level of intelligence and understanding. For example, on a 20-question test, if a program were guessing the answers at random, it could be expected to score 10 correct on average. But the probability of a program scoring 20 out of 20 correct by guesswork is only one in 2^20, i.e. one in 1,048,576; so if a program were able to sustain this level of performance over several independent trials, with no prior access to the propositions, it should be considered intelligent.
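The chance-level figures follow from the binomial distribution: a random guesser is correct on each boolean question with probability 1/2, so the probability of answering all n questions correctly is (1/2)^n. A short C check (illustrative):

```c
#include <stdio.h>

int main(void)
{
    /* Probability that random guessing gets all n boolean questions right: (1/2)^n. */
    int n = 20;
    double p = 1.0;
    for (int i = 0; i < n; i++)
        p *= 0.5;
    printf("P(all %d correct by chance) = %g (one in %.0f)\n", n, p, 1.0 / p);
    /* The expected score by chance is n/2 = 10 correct out of 20. */
    return 0;
}
```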
Discussion
McKinstry criticized existing approaches to artificial intelligence such as chatterbots, saying that his questions could "kill" AI programs by quickly exposing their weaknesses. He contrasted his approach, a series of direct questions assessing an AI's capabilities, to the Turing test and |
https://en.wikipedia.org/wiki/DDC-I | DDC-I, Inc. is a privately held company providing software development of real-time operating systems, software development tools, and software services for safety-critical embedded applications, headquartered in Phoenix, Arizona. It was first created in 1985 as the Danish firm DDC International A/S (also known as DDC-I A/S), a commercial outgrowth of Dansk Datamatik Center, a Danish software research and development organization of the 1980s. The American subsidiary was created in 1986. For many years, the firm specialized in language compilers for the programming language Ada.
In 2003, the Danish office was closed and all operations moved to the Phoenix location.
Origins
The origins of DDC International A/S lay in Dansk Datamatik Center, a Danish software research and development organization that was formed in 1979 to demonstrate the value of using modern techniques, especially those involving formal methods, in software design and development. Among its several projects was the creation of a compiler system for the programming language Ada. Ada was a difficult language to implement and early compiler projects for it often proved disappointments. But the DDC compiler design was sound and it first passed the United States Department of Defense-sponsored Ada Compiler Validation Capability (ACVC) standardized suite of language and runtime tests on a VAX/VMS system in September 1984. As such, it was the first European Ada compiler to meet this standard.
Success of the Ada project led to a separate company being formed in 1985, called DDC International A/S, with the purpose of commercializing the Ada compiler system product. Like its originator, it was based in Lyngby, Denmark. Ole N. Oest was named the managing director of DDC International. In 1986, DDC-I, Inc. was founded as the American subsidiary company. Located in Phoenix, Arizona, it focused on sales, customer support, and engineering consulting activities in the United States.
Ada compiler
DDC |
https://en.wikipedia.org/wiki/Hedlundia%20thuringiaca | Hedlundia thuringiaca is a widely cultivated species of ornamental shrub. It is cultivated by grafting.
Description
It has purple-grey bark which is smooth, but begins cracking and flaking as it matures. The leaves are narrowly ovate to elliptic in shape. They are lobed, except at the very tip; the lobes become deeper towards the base, becoming toothed.
The leaves are glossy dark green above and grey and hairy underneath. In late spring, it flowers, with dense clusters of 5-petaled white flowers. After flowering, it produces a rounded, bright red berry.
Taxonomy
It is a diploid hybrid between Sorbus aucuparia and the diploid Aria edulis. It is rare in the wild, but occurs at scattered sites across much of Europe (in Austria, Czechoslovakia, France, Germany, Great Britain, Hungary, Ireland, Romania and Switzerland) and in Turkey.
It has been introduced into Belgium and Illinois, USA.
It was first published in Memoranda Soc. Fauna Fl. Fenn. 93: 34 in 2017.
GRIN (United States Department of Agriculture and the Agricultural Research Service) accepts it as ×Hedlundia thuringiaca. |
https://en.wikipedia.org/wiki/Pulse%20shaping | In electronics and telecommunications, pulse shaping is the process of changing the waveform of transmitted pulses to optimize the signal for its intended purpose or the communication channel. This is often done by limiting the bandwidth of the transmission and filtering the pulses to control intersymbol interference. Pulse shaping is particularly important in RF communication for fitting the signal within a certain frequency band and is typically applied after line coding and modulation.
Need for pulse shaping
Transmitting a signal at a high modulation rate through a band-limited channel can create intersymbol interference. The reason lies in Fourier correspondences (see Fourier transform): a band-limited signal corresponds to a signal of infinite duration in time, which causes neighbouring pulses to overlap. As the modulation rate increases, the signal's bandwidth increases. If the spectrum of the signal is a sharp rectangle, the time-domain waveform is a sinc shape. This happens when the bandwidth of the signal is larger than the channel bandwidth, leading to distortion. This distortion usually manifests itself as intersymbol interference (ISI). Theoretically, sinc-shaped pulses produce no ISI if neighbouring pulses are perfectly aligned, i.e. each is sampled at the zero crossings of the others. But this requires very good synchronization and precise, jitter-free sampling. As a practical tool to assess ISI, one uses the eye pattern, which visualizes typical effects of the channel and of synchronization/frequency instability.
The signal's spectrum is determined by the modulation scheme and data rate used by the transmitter, but can be modified with a pulse shaping filter. This pulse shaping will make the spectrum smooth, leading to a time limited signal again. Usually the transmitted symbols are represented as a time sequence of dirac delta pulses multiplied with the symbol. This is the formal transition from the digital to the analog domain. At this point, th |
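The zero-ISI property of sinc pulses is easy to verify numerically: a sinc pulse normalized to the symbol period T equals 1 at t = 0 and 0 at every other integer multiple of T, so ideally aligned neighbouring pulses do not disturb each other at the sampling instants. A C sketch (illustrative, with an arbitrary symbol period):

```c
#include <math.h>
#include <stdio.h>

/* Normalized sinc pulse: value 1 at t = 0, zero at all other multiples of T. */
static double sinc_pulse(double t, double T)
{
    if (t == 0.0)
        return 1.0;
    double x = 3.14159265358979323846 * t / T;
    return sin(x) / x;
}

int main(void)
{
    double T = 1.0; /* symbol period (arbitrary units) */
    /* Sample one pulse at the instants where neighbouring symbols are decided. */
    for (int k = -3; k <= 3; k++)
        printf("pulse(%2d * T) = %+.6f\n", k, sinc_pulse(k * T, T));
    return 0;
}
```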
https://en.wikipedia.org/wiki/Perceptual%20control%20theory | Perceptual control theory (PCT) is a model of behavior based on the properties of negative feedback control loops. A control loop maintains a sensed variable at or near a reference value by means of the effects of its outputs upon that variable, as mediated by physical properties of the environment. In engineering control theory, reference values are set by a user outside the system. An example is a thermostat. In a living organism, reference values for controlled perceptual variables are endogenously maintained. Biological homeostasis and reflexes are simple, low-level examples. The discovery of mathematical principles of control introduced a way to model a negative feedback loop closed through the environment (circular causation), which spawned perceptual control theory. It differs fundamentally from some models in behavioral and cognitive psychology that model stimuli as causes of behavior (linear causation). PCT research is published in experimental psychology, neuroscience, ethology, anthropology, linguistics, sociology, robotics, developmental psychology, organizational psychology and management, and a number of other fields. PCT has been applied to design and administration of educational systems, and has led to a psychotherapy called the method of levels.
Principles and differences from other theories
The perceptual control theory is deeply rooted in biological cybernetics, systems biology and control theory and the related concept of feedback loops. Unlike some models in behavioral and cognitive psychology it sets out from the concept of circular causality. It shares, therefore, its theoretical foundation with the concept of plant control, but it is distinct from it by emphasizing the control of the internal representation of the physical world.
The plant control theory focuses on neuro-computational processes of movement generation, once a decision for generating the movement has been taken. PCT spotlights the embeddedness of agents in their environment |
https://en.wikipedia.org/wiki/Prosolvable%20group | In mathematics, more precisely in algebra, a prosolvable group (less common: prosoluble group) is a group that is isomorphic to the inverse limit of an inverse system of solvable groups. Equivalently, a group is called prosolvable, if, viewed as a topological group, every open neighborhood of the identity contains a normal subgroup whose corresponding quotient group is a solvable group.
Examples
Let p be a prime, and denote the field of p-adic numbers, as usual, by Qp. Then the Galois group Gal(Q̄p/Qp), where Q̄p denotes the algebraic closure of Qp, is prosolvable. This follows from the fact that, for any finite Galois extension L of Qp, the Galois group Gal(L/Qp) can be written as semidirect product Gal(L/Qp) = (R ⋊ Q) ⋊ P, with P cyclic of order f for some f ∈ N, Q cyclic of order dividing p^f − 1, and R of p-power order. Therefore, Gal(L/Qp) is solvable.
See also
Galois theory |
https://en.wikipedia.org/wiki/StegoShare | StegoShare is a steganography tool that allows embedding of large files into multiple images. It may be used for anonymous file sharing.
Features
Supports various image formats (png, jpg, bmp, gif, tiff etc.)
Maximum supported hidden file size is 2 GB; up to 65,536 cover images per set
Average capacity is 40% (a 100 MB file can be embedded into about 250 MB of cover images)
128-bit encryption
Good output image quality (changes undetectable by the human eye)
Use in the file sharing networks
This software can be easily used for anonymous file sharing. An uploader downloads legal images from a public photo hosting site, and embeds the censored file into those images. The uploader then uploads the pictures to a public photo torrent tracker and puts links referencing the stego pictures, with the censored file's description, on a forum or blog. Downloaders, seeders, and public photo trackers, if caught distributing illegal files, are protected from legal prosecution, because they can always use plausible deniability, saying that they knew nothing about the illicit file in the images. This is impossible to prove otherwise, as the human eye cannot differentiate between an ordinary image and a picture with a hidden embedded file.
Vulnerabilities
The cover file manipulation algorithm used is based on fixed-location LSB insertion, making its output images detectable to most steganalysis software by a simple histogram characteristic function analysis.
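Fixed-location LSB insertion of the kind described above can be sketched in a few lines: each payload bit replaces the least significant bit of one cover byte (in a real image, one colour-channel sample). The toy C example below is an illustration of the general technique, not StegoShare's actual algorithm:

```c
#include <stdint.h>
#include <stdio.h>

/* Embed the bits of one payload byte into the LSBs of 8 cover bytes. */
static void lsb_embed(uint8_t *cover, uint8_t payload)
{
    for (int i = 0; i < 8; i++)
        cover[i] = (uint8_t)((cover[i] & 0xFE) | ((payload >> i) & 1));
}

/* Recover the payload byte from the LSBs of 8 cover bytes. */
static uint8_t lsb_extract(const uint8_t *cover)
{
    uint8_t payload = 0;
    for (int i = 0; i < 8; i++)
        payload |= (uint8_t)((cover[i] & 1) << i);
    return payload;
}

int main(void)
{
    uint8_t pixels[8] = {200, 13, 77, 54, 91, 120, 33, 250}; /* cover samples */
    lsb_embed(pixels, 'A');
    printf("recovered: %c\n", lsb_extract(pixels)); /* prints: recovered: A */
    return 0;
}
```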
See also
Steganography
Anonymous p2p
Plausible deniability |
https://en.wikipedia.org/wiki/List%20of%20gene%20families | This is a list of gene families or gene complexes, i.e. sets of genes which are related ancestrally and often serve similar biological functions. These gene families typically encode functionally related proteins, and sometimes the term gene families is a shorthand for the sets of proteins that the genes encode. They may or may not be physically adjacent on the same chromosome.
Regulatory protein gene families
14-3-3 protein family
Achaete-scute complex (neuroblast formation)
FOX proteins (forkhead box proteins)
Families containing homeobox domains
DLX gene family
Hox gene family
POU family
Krüppel-type zinc finger (ZNF)
MADS-box gene family
NOTCH2NL
P300-CBP coactivator family
SOX gene family
Immune system proteins
Immunoglobulin superfamily
Major histocompatibility complex (MHC)
Motor proteins
Dynein
Kinesin
Myosin
Signal transducing proteins
G-proteins
MAP Kinase
Olfactory receptor
Peroxiredoxin
Receptor tyrosine kinases
Transporters
ABC transporters
Antiporter
Aquaporins
Other families
See also
Protein family
Housekeeping gene
Biological classification |
https://en.wikipedia.org/wiki/Segregated%20Runge%E2%80%93Kutta%20methods | The Segregated Runge–Kutta (SRK) method is a family of IMplicit–EXplicit (IMEX) Runge–Kutta methods that were developed to approximate the solution of differential algebraic equations (DAE) of index 2.
The SRK method was motivated as a numerical method for the time integration of the incompressible Navier–Stokes equations with two salient properties. First, velocity and pressure computations are segregated. Second, the method keeps the same order of accuracy for both velocities and pressures. However, the SRK method can also be applied to any other DAE of index 2.
The Segregated Runge–Kutta method
Consider an index 2 DAE defined as follows:
y′(t) = f(y, z, t),
g(y) = 0.
In the previous equations, y is known as the differential variable, while z is known as the algebraic variable. The time derivative of the differential variable, y′, depends on y itself, on the algebraic variable z, and on the time t. The second equation can be seen as a constraint on the differential variable y.
Let us take the time derivative of the second equation. Assuming that the function g is linear and does not depend on time, and that the function f is linear with respect to z, we have that
g(y′) = g(f(y, z, t)) = 0.
A Runge–Kutta time integration scheme is defined as a multistage integration in which each stage is computed as a combination of the unknowns evaluated in other stages. Depending on the definition of the parameters, this combination can lead to an implicit scheme or an explicit scheme. Implicit and explicit schemes can be combined, leading to IMEX schemes.
Suppose that the function f can be split into two operators f_I and f_E such that
f(y, z, t) = f_I(y, z, t) + f_E(y, z, t),
where f_I and f_E are the terms to be treated implicitly and explicitly, respectively.
The SRK method is based on the use of IMEX Runge–Kutta schemes and can be defined by the following scheme:
Given a time step size Δt, at a time tn:
for each Runge–Kutta stage i, with i = 1, ..., s, solve:
1) the implicit–explicit stage equation for the differential variable yi;
2) the corresponding stage equation for the algebraic variable zi.
Then update the variables at tn+1 solving:
3) the update equation for the differential variable yn+1;
4) the update equation for the algebraic variable zn+1. |
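The excerpt above omits the concrete stage formulas, so as a hedged illustration of the underlying idea only, the C sketch below implements the simplest member of the IMEX family on which the SRK method builds: a first-order forward-backward Euler splitting for y′ = f_I(y) + f_E(y), with a stiff linear term treated implicitly. This is not the SRK scheme itself, and the coefficients are arbitrary example values:

```c
#include <stdio.h>

/* Generic first-order IMEX (forward-backward Euler) step for
 *   dy/dt = f_I(y) + f_E(y),
 * with a stiff linear implicit part f_I(y) = -a*y (solved exactly) and an
 * explicit non-stiff part f_E(y). */
static double f_E(double y) { return 1.0 - y * y; } /* example non-stiff term */

static double imex_euler_step(double y, double dt, double a)
{
    /* y_{n+1} = y_n + dt*f_E(y_n) + dt*f_I(y_{n+1})
     * => (1 + dt*a) * y_{n+1} = y_n + dt*f_E(y_n)  */
    return (y + dt * f_E(y)) / (1.0 + dt * a);
}

int main(void)
{
    double y = 0.0, dt = 0.1, a = 50.0; /* stiff coefficient treated implicitly */
    for (int n = 0; n < 100; n++)
        y = imex_euler_step(y, dt, a);
    printf("y(%.1f) ~= %.6f\n", 100 * dt, y); /* approaches the root of f_I + f_E = 0 */
    return 0;
}
```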
https://en.wikipedia.org/wiki/Acoustic%20radiation%20pressure | Acoustic radiation pressure is the apparent pressure difference between the average pressure at a surface moving with the displacement of the wave propagation (the Lagrangian pressure) and the pressure that would have existed in the fluid of the same mean density when at rest. Numerous authors make a distinction between the phenomena of Rayleigh radiation pressure and Langevin radiation pressure.
See also
Radiation pressure
Acoustic levitation
Acoustic radiation force |
https://en.wikipedia.org/wiki/Printk | printk is a C function from the Linux kernel interface that prints messages to the kernel log. It accepts a string parameter called the format string, which specifies a method for rendering an arbitrary number of varied data type parameter(s) into a string. The string is then printed to the kernel log.
It provides a printf-like abstraction and its parsing of the format string and arguments behave similarly to printf. It acts as a debugging tool for kernel programmers who need this function for logging messages from the kernel.
The printk function prototype is:
int printk(const char *fmt, ...);
The C standard library and its printf function are unavailable in kernel mode, hence the need for printk.
Differences from printf
The function printk is based on printf, but cannot always be used in the same way that printf is used.
Log levels
printk allows a caller to specify the type and importance of the message being sent. This specifier is called the log level.
The log level specifies the type of message being sent to the kernel message log. The log level is specified by prepending (using C's string literal concatenation) a string describing the log level to the start of the message to be produced. For example, a message could be produced at the KERN_INFO level using the following:
printk(KERN_INFO "Message: %s\n", arg);
The string specifying the log level consists of the ASCII start-of-header character followed by a digit describing the log level, or the character 'c' to indicate the message is a continuation of the previous message. The log levels, from highest to lowest priority, are: KERN_EMERG (0), KERN_ALERT (1), KERN_CRIT (2), KERN_ERR (3), KERN_WARNING (4), KERN_NOTICE (5), KERN_INFO (6) and KERN_DEBUG (7).
When a log level is not specified, the default log level is KERN_WARNING, unless a different default has been set in the kernel itself, such as with the loglevel= boot argument.
Log levels are defined in <linux/kern_levels.h>. Which log levels are printed is configured using the sysctl file /proc/sys/kernel/printk.
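A minimal kernel module using printk at several log levels might look like the following sketch (assuming the usual out-of-tree module build setup; modern kernels also provide pr_info() and related convenience macros that expand to printk with the corresponding level prefix):

```c
#include <linux/init.h>
#include <linux/kernel.h>
#include <linux/module.h>

static int __init hello_init(void)
{
    printk(KERN_INFO "hello: module loaded\n");
    printk(KERN_DEBUG "hello: debug detail, hidden at default console level\n");
    return 0;
}

static void __exit hello_exit(void)
{
    printk(KERN_WARNING "hello: module unloaded\n");
}

module_init(hello_init);
module_exit(hello_exit);
MODULE_LICENSE("GPL");
```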
Pointer formats
The %p format specifier (used for pri |
https://en.wikipedia.org/wiki/Comparison%20of%20encrypted%20external%20drives | This is a technical feature comparison of commercial encrypted external drives.
Background information
Ironkey was acquired by Kingston Technology in February 2016
IronClad is a technology, a secure type of "PC on a stick" (a flash drive with an operating system included), which runs on top of an Ironkey drive. It is also known as a "turnkey solution", in that it "plugs and plays".
Operating systems
Features
Bootable: Whether (with the appropriate OS installed on the drive and supporting BIOS on a computer) the drive can be used to boot a computer.
Encryption Type: Type of encryption used.
Certification: Whether FIPS 140-2 or similar validation has been passed.
Managed: Whether enterprise level management software for maintaining large numbers of devices is included.
Interface: List of USB, Firewire, eSATA, or other interfaces for connecting to a computer.
Max Capacity: Maximum size drive is available in.
Included Software: List of any included software, excluding any standard freeware or trialware obtainable by an end user.
Other Features: Other notable features that differentiate the device.
See also
Comparison of disk encryption software
Disk encryption software |
https://en.wikipedia.org/wiki/Sony%20HDR-HC1 | The Sony HDR-HC1, introduced in mid-2005 (MSRP US$1999), is the first consumer HDV camcorder to support 1080i.
The CMOS sensor has a resolution of 1920x1440 for digital still pictures and captures video at 1440x1080 interlaced, which is the resolution defined for HDV 1080i. The camera may also use the extra pixels for digital image stabilization.
The camcorder can also convert the captured HDV data to DV data for editing the video using non-linear editing systems which do not support HDV or for creating edits which are viewable on non-HDTV television sets.
The HVR-A1 is the prosumer version of the HDR-HC1. It has more manual controls and XLR ports.
Unique features
Expanded focus
Expanded focus lets the user magnify the image temporarily to obtain better manual focus. Expanded focus works in pause mode only; it is not possible to magnify the frame during recording.
A similar feature, named Focus Assist, appeared on the Canon HV20, which was released two years after the HDR-HC1. Focus Assist on Canon camcorders also works only when recording is paused.
Spot meter and spot focus
Spot meter and Spot focus are possible thanks to a touch-sensitive LCD screen, employed on most modern Sony consumer camcorders.
The user can touch the screen to specify a specific region of the image; the camcorder automatically adjusts focus or exposure according to distance to the object and to illumination of the selected spot.
Depending on the scene, changing focus with Spot Focus can cause focus "breathing" or "hunting", where the subject goes in and out of focus several times before the image stabilizes.
Shot transition
Shot transition allows for a smooth automatic scene transition. In particular, it makes rack focus easy.
Two sets of focus and zoom can be preset and stored in "Store-A" and "Store-B" memory slots. The settings can then be gradually applied from one to another within 4 seconds. The transition time is not adjustable.
Presently, the HDR-HC1 is the only consumer c |
https://en.wikipedia.org/wiki/Yaravirus | Yaravirus is an amoebic virus (a virus that reproduces in amoeba) tentatively placed in phylum Nucleocytoviricota, discovered in the waters of Lake Pampulha in Minas Gerais, Brazil, in 2020. The virus was found to be significantly smaller than any known amoebic virus, and is notable in that 90% of its genome appears to have no homology to previously sequenced amino acids in other organisms. The organism was named after the Brazilian mythological figure, Iara.
One author described the virus as one that "simply makes no sense", and as "an extreme example", noting that "of Yaravirus' 74 genes, 68 are unlike any ever found in any virus". With respect to efforts by scientists to develop a megataxonomy of viruses, Yaravirus was described as "lonely and unclassifiable". Another analysis describes the virus as "either highly reduced and divergent NCLDVs or, more probably, the first non-NCLDV isolated from Acanthamoeba species", also noting "an ATPase most similar to the mimivirus homologue" and a major capsid protein phylogeny that is "not compatible with that of the ATPase phylogeny", suggesting that the virus originated through a horizontal gene transfer.
Further reading
Brazilian scientists announced the discovery of a new amoebic "Yaravirus" in Lake Pampulha. (bioRxiv) (Science magazine) |
https://en.wikipedia.org/wiki/Software%20bloat | Software bloat is a process whereby successive versions of a computer program become perceptibly slower, use more memory, disk space or processing power, or have higher hardware requirements than the previous version, while making only dubious user-perceptible improvements or suffering from feature creep. The term is not applied consistently; it is often used as a pejorative by end users (bloatware) to describe undesired user interface changes even if those changes had little or no effect on the hardware requirements. In long-lived software, perceived bloat can occur from the software servicing a large, diverse marketplace with many differing requirements. Most end users will feel they only need some limited subset of the available functions, and will regard the others as unnecessary bloat, even if end users with different requirements require those functions.
Actual (measurable) bloat can occur due to de-emphasising algorithmic efficiency in favour of other concerns like developer productivity, or possibly through the introduction of new layers of abstraction like a virtual machine or other scripting engine for the purposes of convenience when developer constraints are reduced. The perception of improved developer productivity, in the case of practising development within virtual machine environments, comes from the developers no longer taking resource constraints and usage into consideration during design and development; this allows the product to be completed faster but it results in increases to the end user's hardware requirements to compensate.
The term "bloatware" is also used to describe unwanted pre-installed software or bundled programs.
Types of bloat
Program bloat
In computer programming, code bloat refers to the presence of program code (source code or machine code) that is perceived as unnecessarily long, slow, or otherwise wasteful of resources.
Causes
Software inefficiency
Software developers involved in the industry during the 1970s had sev |
https://en.wikipedia.org/wiki/NINCDS-ADRDA%20Alzheimer%27s%20Criteria | The NINCDS-ADRDA Alzheimer's Criteria were proposed in 1984 by the National Institute of Neurological and Communicative Disorders and Stroke and the Alzheimer's Disease and Related Disorders Association (now known as the Alzheimer's Association) and are among the most used in the diagnosis of Alzheimer's disease (AD). These criteria require that the presence of cognitive impairment and a suspected dementia syndrome be confirmed by neuropsychological testing for a clinical diagnosis of possible or probable AD; while they need histopathologic confirmation (microscopic examination of brain tissue) for the definitive diagnosis. They specify as well eight cognitive domains that may be impaired in AD. These criteria have shown good reliability and validity.
Criteria
Definite Alzheimer's disease: The patient meets the criteria for probable Alzheimer's disease and has histopathologic evidence of AD via autopsy or biopsy.
Probable Alzheimer's disease: Dementia has been established by clinical and neuropsychological examination. Cognitive impairments also have to be progressive and be present in two or more areas of cognition. The onset of the deficits is between the ages of 40 and 90 years, and finally there must be an absence of other diseases capable of producing a dementia syndrome.
Possible Alzheimer's disease: There is a dementia syndrome with an atypical onset, presentation or progression, and without a known etiology, but no co-morbid diseases capable of producing dementia are believed to be at its origin.
Unlikely Alzheimer's disease: The patient presents a dementia syndrome with a sudden onset, focal neurologic signs, or seizures or gait disturbance early in the course of the illness.
Cognitive domains
The NINCDS-ADRDA Alzheimer's Criteria specify eight cognitive domains that may be impaired in AD: memory, language, perceptual skills, attention, constructive abilities, orientation, problem solving and functional abilities.
Other criteria
Similar to t |
https://en.wikipedia.org/wiki/Freidlin%E2%80%93Wentzell%20theorem | In mathematics, the Freidlin–Wentzell theorem (due to Mark Freidlin and Alexander D. Wentzell) is a result in the large deviations theory of stochastic processes. Roughly speaking, the Freidlin–Wentzell theorem gives an estimate for the probability that a (scaled-down) sample path of an Itō diffusion will stray far from the mean path. This statement is made precise using rate functions. The Freidlin–Wentzell theorem generalizes Schilder's theorem for standard Brownian motion.
Statement
Let B be a standard Brownian motion on Rd starting at the origin, 0 ∈ Rd, and let Xε be an Rd-valued Itō diffusion solving an Itō stochastic differential equation of the form
dXtε = b(Xtε) dt + √ε dBt,
where the drift vector field b : Rd → Rd is uniformly Lipschitz continuous. Then, on the Banach space C0 = C0([0, T]; Rd) equipped with the supremum norm ||·||∞, the family of processes (Xε)ε>0 satisfies the large deviations principle with good rate function I : C0 → R ∪ {+∞} given by
I(ω) = (1/2) ∫0T |ω′(t) − b(ω(t))|2 dt
if ω lies in the Sobolev space H1([0, T]; Rd), and I(ω) = +∞ otherwise. In other words, for every open set G ⊆ C0 and every closed set F ⊆ C0,
limsupε→0 ε log P[Xε ∈ F] ≤ −inf{I(ω) : ω ∈ F}
and
liminfε→0 ε log P[Xε ∈ G] ≥ −inf{I(ω) : ω ∈ G}. |
https://en.wikipedia.org/wiki/Introduction%20to%20quantum%20mechanics | Quantum mechanics is the study of matter and its interactions with energy on the scale of atomic and subatomic particles. By contrast, classical physics explains matter and energy only on a scale familiar to human experience, including the behavior of astronomical bodies such as the moon. Classical physics is still used in much of modern science and technology. However, towards the end of the 19th century, scientists discovered phenomena in both the large (macro) and the small (micro) worlds that classical physics could not explain. The desire to resolve inconsistencies between observed phenomena and classical theory led to a revolution in physics, a shift in the original scientific paradigm: the development of quantum mechanics.
Many aspects of quantum mechanics are counterintuitive and can seem paradoxical because they describe behavior quite different from that seen at larger scales. In the words of quantum physicist Richard Feynman, quantum mechanics deals with "nature as She is—absurd".
One example of this is the uncertainty principle applied to particles, which implies that the more closely one pins down one measurement on a particle (such as the position of an electron), the less accurate another complementary measurement pertaining to the same particle (such as its speed) must become. The position and speed of a particle cannot both be measured with arbitrary precision, regardless of the quality of the measuring instruments.
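In its standard quantitative form for position and momentum, the principle states that the standard deviations σx and σp of position and momentum measurements satisfy
$$\sigma_x \, \sigma_p \geq \frac{\hbar}{2},$$
so improving the precision of one measurement forces a corresponding loss of precision in the other.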
Another example is entanglement. In certain circumstances, two particles with a shared history may become mutually 'entangled', in which case a measurement made on one particle (such as an electron that is measured to have spin up) will provide full information about the outcome of a later equivalent measurement on the other particle (that the other will be found to have spin down). This applies even though the particles may be so far apart that it is impossible for the result of the first measurement to have been transmitted to the s |
https://en.wikipedia.org/wiki/Acute-phase%20protein | Acute-phase proteins (APPs) are a class of proteins whose concentrations in blood plasma either increase (positive acute-phase proteins) or decrease (negative acute-phase proteins) in response to inflammation. This response is called the acute-phase reaction (also called acute-phase response). The acute-phase reaction characteristically involves fever and an acceleration in the production of peripheral leukocytes, in particular circulating neutrophils and their precursors. The terms acute-phase protein and acute-phase reactant (APR) are often used synonymously, although some APRs are (strictly speaking) polypeptides rather than proteins.
In response to injury, local inflammatory cells (neutrophil granulocytes and macrophages) secrete a number of cytokines into the bloodstream, most notable of which are the interleukins IL-1 and IL-6, and TNF-α. The liver responds by producing many acute-phase reactants. At the same time, the production of a number of other proteins is reduced; these proteins are, therefore, referred to as "negative" acute-phase reactants. Increased acute-phase proteins from the liver may also contribute to the promotion of sepsis.
Regulation of synthesis
TNF-α, IL-1β and IFN-γ are important for the expression of inflammatory mediators such as prostaglandins and leukotrienes, and they also cause the production of platelet-activating factor and IL-6. After stimulation with proinflammatory cytokines, Kupffer cells produce IL-6 in the liver and present it to the hepatocytes. IL-6 is the major mediator for the hepatocytic secretion of APPs. Synthesis of APP can also be regulated indirectly by cortisol. Cortisol can enhance expression of IL-6 receptors in liver cells and induce IL-6-mediated production of APPs.
Positive
Positive acute-phase proteins serve (as part of the innate immune system) different physiological functions within the immune system. Some act to destroy or inhibit growth of microbes, e.g., C-reactive protein, mannose-binding protein, complement factors, ferritin, cerulop |
https://en.wikipedia.org/wiki/Miniature%20wargaming | Miniature wargaming is a form of wargaming in which military units are represented by miniature physical models on a model battlefield. The use of physical models to represent military units is in contrast to other tabletop wargames that use abstract pieces such as counters or blocks, or computer wargames which use virtual models. The primary benefit of using models is aesthetics, though in certain wargames the size and shape of the models can have practical consequences on how the match plays out.
Miniature wargaming is typically a recreational form of wargaming because issues concerning scale can compromise realism too much for most serious military applications. A historical exception to this is naval wargaming before the advent of computers.
Overview
A miniature wargame is played with miniature models of soldiers, artillery, and vehicles on a model of a battlefield. The benefit of using models as opposed to abstract pieces is primarily an aesthetic one. Models offer a visually-pleasing way of identifying the units on the battlefield. In most miniature wargame systems, the model itself may be irrelevant as far as the rules are concerned; what really matters is the dimensions of the base that the model is mounted on. Distances between infantry units are measured from the base of the model. The exception to this trend may be models of vehicles such as tanks, which do not require a base to be stable and have naturally rectangular shapes; in such cases, the distances between units may be measured from the edge of the model itself. Some miniature wargames use the dimensions of the model to determine whether a target behind cover is within line-of-fire of an attacker.
Most miniature wargames are turn-based. Players take turns to move their model warriors across the model battlefield and declare attacks on the opponent. In most miniature wargames, the outcomes of fights between units are resolved through simple arithmetic, usually combined with dice rolls or playing |
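As an illustration of such arithmetic-plus-dice resolution, here is a minimal Java sketch; the stat names and the threshold rule are invented for the example and are not taken from any particular game system.
import java.util.Random;
public class CombatResolution {
    private static final Random RNG = new Random();

    // Invented example rule: an attack hits if (d6 roll + attacker skill)
    // exceeds (defender armor + a fixed defense bonus of 2).
    static boolean attackHits(int attackerSkill, int defenderArmor) {
        int roll = RNG.nextInt(6) + 1; // one six-sided die: 1..6
        return roll + attackerSkill > defenderArmor + 2;
    }

    public static void main(String[] args) {
        System.out.println(attackHits(3, 2) ? "hit" : "miss");
    }
}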
https://en.wikipedia.org/wiki/BCAR1 | Breast cancer anti-estrogen resistance protein 1 is a protein that in humans is encoded by the BCAR1 gene.
Gene
BCAR1 is localized on the long (q) arm of chromosome 16, on the negative strand, and consists of seven exons. Eight different gene isoforms have been identified that share the same sequence from the second exon onwards but are characterized by different starting sites. The longest isoform is called BCAR1-iso1 (RefSeq NM_001170714.1) and is 916 amino acids long; the other, shorter isoforms start with an alternative first exon.
Function
BCAR1 is a ubiquitously expressed adaptor molecule originally identified as the major substrate of v-Src and v-Crk. p130Cas/BCAR1 belongs to the Cas family of adaptor proteins and can act as a docking protein for several signalling partners. Due to its ability to associate with multiple signaling partners, p130Cas/BCAR1 contributes to the regulation of a variety of signaling pathways involved in cell adhesion, migration, invasion, apoptosis, and responses to hypoxia and mechanical forces. p130Cas/BCAR1 plays a role in cell transformation and cancer progression, and alterations of p130Cas/BCAR1 expression, with the resulting activation of selective signalling, are determinants for the occurrence of different types of human tumors.
Due to the capacity of p130Cas/BCAR1, as an adaptor protein, to interact with multiple partners and to be regulated by phosphorylation and dephosphorylation, its expression and phosphorylation can lead to a wide range of functional consequences. Among the regulators of p130Cas/BCAR1 tyrosine phosphorylation, receptor tyrosine kinases (RTKs) and integrins play a prominent role. RTK-dependent p130Cas/BCAR1 tyrosine phosphorylation and the subsequent binding of specific downstream signaling molecules modulate cell processes such as actin cytoskeleton remodeling, cell adhesion, proliferation, migration, invasion and survival. Integrin-mediated p130Cas/BCAR1 phosphorylation upon adhesion to extracellular matrix (E |
https://en.wikipedia.org/wiki/Push%E2%80%93pull%20agricultural%20pest%20management | Push–pull technology is an intercropping strategy for controlling agricultural pests by using repellent "push" plants and trap "pull" plants. For example, cereal crops like maize or sorghum are often infested by stem borers. Grasses planted around the perimeter of the crop attract and trap the pests, whereas other plants, like Desmodium, planted between the rows of maize, repel the pests and control the parasitic plant Striga. Push–pull technology was developed at the International Centre of Insect Physiology and Ecology (ICIPE) in Kenya in collaboration with Rothamsted Research, UK, and national partners. This technology has been taught to smallholder farmers through collaborations with universities, NGOs and national research organizations.
How push–pull works
Push–pull technology involves the use of behaviour-modifying stimuli to manipulate the distribution and abundance of stemborers and beneficial insects for the management of stemborer pests. It is based on an in-depth understanding of chemical ecology, agrobiodiversity, plant-plant and insect-plant interactions, and involves intercropping a cereal crop with a repellent intercrop such as Desmodium uncinatum (silverleaf) (push), with an attractive trap plant such as Napier grass (pull) planted as a border crop around this intercrop. Gravid stemborer females are repelled from the main crop and are simultaneously attracted to the trap crop.
The push
The "push" in the intercropping scheme is provided by the plants that emit volatile chemicals (kairomones) which repel stemborer moths and drive them away from the main crop (maize or sorghum). The most commonly used species of push plants are legumes of the genus Desmodium (e.g. silverleaf Desmodium, D. uncinatum, and greenleaf Desmodium, D. intortum). The Desmodium is planted in between the rows of maize or sorghum, where they emit volatile chemicals (such as (E)-β-ocimene and (E)-4,8-dimethyl-1,3,7-nonatriene) that repel the stemborer moths. These semiochemicals are al |
https://en.wikipedia.org/wiki/Kadowaki%E2%80%93Woods%20ratio | The Kadowaki–Woods ratio is the ratio A/γ2, where A is the coefficient of the quadratic term of the electrical resistivity and γ is the coefficient of the linear term of the specific heat. This ratio is found to be a constant for transition metals, and for heavy-fermion compounds, although at different values for the two groups.
In 1968 M. J. Rice pointed out that the coefficient A should vary predominantly as the square of the linear electronic specific heat coefficient γ; in particular he showed that the ratio A/γ2 is material independent for the pure 3d, 4d and 5d transition metals. Heavy-fermion compounds are characterized by very large values of A and γ. Kadowaki and Woods showed that A/γ2 is material-independent within the heavy-fermion compounds, and that it is about 25 times larger than in aforementioned transition metals.
According to the theory of electron-electron scattering, the ratio A/γ2 indeed contains several non-universal factors, including the square of the strength of the effective electron-electron interaction. Since in general the interactions differ in nature from one group of materials to another, the same values of A/γ2 are only expected within a particular group. In 2005 Hussey proposed a re-scaling of A/γ2 to account for unit cell volume, dimensionality, carrier density and multi-band effects. In 2009 Jacko, Fjaerestad, and Powell demonstrated that fdx(n)·A/γ2 has the same value in transition metals, heavy fermions, organics and oxides, with A varying over 10 orders of magnitude; here fdx(n) may be written in terms of the dimensionality of the system, the electron density and, in layered systems, the interlayer spacing or the interlayer hopping integral.
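For reference, the quantities entering the ratio have the standard low-temperature Fermi-liquid forms (conventional notation, not spelled out in the text above):
$$\rho(T) = \rho_0 + A T^2, \qquad c_{\mathrm{el}}(T) = \gamma T, \qquad \mathrm{KWR} = \frac{A}{\gamma^2}.$$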
See also
Wilson ratio |
https://en.wikipedia.org/wiki/Tetraview | A tetraview is an attempt to graph a complex function of a complex variable, by a method invented by Davide P. Cervone.
A graph of a real function of a real variable is the set of ordered pairs (x,y) such that y = f(x). This is the ordinary two-dimensional Cartesian graph studied in school algebra.
Every complex number has both a real part and an imaginary part, so one complex variable is two-dimensional and a pair of complex variables is four-dimensional. A tetraview is an attempt to give a picture of a four-dimensional object using a two-dimensional representation—either on a piece of paper or on a computer screen, showing a still picture consisting of five views, one in the center and one at each corner. This is roughly analogous to a picture of a three-dimensional object by giving a front view, a side view, and a view from above.
A picture of a three-dimensional object is a projection of that object from three dimensions into two dimensions. A tetraview is a set of five projections, first from four dimensions into three dimensions, and then from three dimensions into two dimensions.
A complex function w = f(z), where z = a + bi and w = c + di are complex numbers, has a graph in four-space (four dimensional space) R4 consisting of all points (a, b, c, d) such that c + di = f(a + bi).
To construct a tetraview, we begin with the four points (1,0,0,0), (0, 1, 0, 0), (0, 0, 1, 0), and (0, 0, 0, 1), which are vertices of a spherical tetrahedron on the unit three-sphere S3 in R4.
We project the four-dimensional graph onto the three-dimensional sphere along one of the four coordinate axes, and then give a two-dimensional picture of the resulting three-dimensional graph. This provides the four corner graph. The graph in the center is a similar picture "taken" from the point of view of the origin.
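A minimal sketch in Java of the sampling and first projection step, under simplifying assumptions: the example function f(z) = z² is arbitrary, and a plain orthogonal projection (dropping one coordinate) is used in place of the projection onto the three-sphere described above.
public class TetraviewSketch {
    // Sample the graph of f(z) = z^2: the point (a, b, c, d) with c + di = (a + bi)^2.
    static double[] graphPoint(double a, double b) {
        double c = a * a - b * b;   // real part of (a + bi)^2
        double d = 2 * a * b;       // imaginary part of (a + bi)^2
        return new double[] {a, b, c, d};
    }

    // Orthogonal projection R^4 -> R^3 along coordinate axis `axis` (0..3):
    // simply drop that coordinate.
    static double[] project(double[] p, int axis) {
        double[] q = new double[3];
        int k = 0;
        for (int i = 0; i < 4; i++) {
            if (i != axis) q[k++] = p[i];
        }
        return q;
    }

    public static void main(String[] args) {
        double[] p = graphPoint(1.0, 0.5);
        double[] q = project(p, 3); // view "down" the d-axis
        System.out.printf("(%f, %f, %f)%n", q[0], q[1], q[2]);
    }
}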
External links
http://www.math.union.edu/~dpvc/professional/art/tetra-exp.html
http://www.maa.org/cvm/1998/01/sbtd/article/tour/tetra-Z3/tetra-Z3.html
Functions an |
https://en.wikipedia.org/wiki/Split-quaternion | In abstract algebra, the split-quaternions or coquaternions form an algebraic structure introduced by James Cockle in 1849 under the latter name. They form an associative algebra of dimension four over the real numbers.
After the introduction in the 20th century of coordinate-free definitions of rings and algebras, it was proved that the algebra of split-quaternions is isomorphic to the ring of the 2×2 real matrices. So the study of split-quaternions can be reduced to the study of real matrices, and this may explain why there are few mentions of split-quaternions in the mathematical literature of the 20th and 21st centuries.
Definition
The split-quaternions are the linear combinations (with real coefficients) of four basis elements 1, i, j, k that satisfy the following product rules:
$$i^2 = -1,$$
$$j^2 = 1,$$
$$k^2 = 1,$$
$$ij = k.$$
By associativity, these relations imply
$$ji = -k,$$
$$jk = -i, \qquad kj = i, \qquad ki = j, \qquad ik = -j,$$
and also $ijk = 1$.
So, the split-quaternions form a real vector space of dimension four with {1, i, j, k} as a basis. They also form a noncommutative ring, by extending the above product rules by distributivity to all split-quaternions.
Let us consider the square matrices
$$M_1 = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}, \quad M_i = \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix}, \quad M_j = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}, \quad M_k = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}.$$
They satisfy the same multiplication table as the corresponding split-quaternions. As these matrices form a basis of the two-by-two matrices, the unique linear function that maps 1, i, j, k to M1, Mi, Mj, Mk (respectively) induces an algebra isomorphism from the split-quaternions to the two-by-two real matrices.
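A small Java sketch (illustrative only) that encodes the matrix images of 1, i, j, k given above and verifies the defining relations numerically:
public class SplitQuaternionCheck {
    // 2x2 real matrix product.
    static double[][] mul(double[][] x, double[][] y) {
        double[][] z = new double[2][2];
        for (int r = 0; r < 2; r++)
            for (int c = 0; c < 2; c++)
                z[r][c] = x[r][0] * y[0][c] + x[r][1] * y[1][c];
        return z;
    }

    static boolean eq(double[][] x, double[][] y) {
        for (int r = 0; r < 2; r++)
            for (int c = 0; c < 2; c++)
                if (x[r][c] != y[r][c]) return false;
        return true;
    }

    public static void main(String[] args) {
        double[][] one = {{1, 0}, {0, 1}};
        double[][] i = {{0, 1}, {-1, 0}};
        double[][] j = {{0, 1}, {1, 0}};
        double[][] k = {{1, 0}, {0, -1}};
        double[][] minusOne = {{-1, 0}, {0, -1}};

        // Defining relations: i^2 = -1, j^2 = k^2 = 1, ij = k.
        System.out.println(eq(mul(i, i), minusOne)); // true
        System.out.println(eq(mul(j, j), one));      // true
        System.out.println(eq(mul(k, k), one));      // true
        System.out.println(eq(mul(i, j), k));        // true
        // Derived relation showing noncommutativity: ji = -k.
        System.out.println(eq(mul(j, i), new double[][]{{-1, 0}, {0, 1}})); // true
    }
}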
The above multiplication rules imply that the eight elements {1, i, j, k, −1, −i, −j, −k} form a group under this multiplication, which is isomorphic to the dihedral group D4, the symmetry group of a square. In fact, if one considers a square whose vertices are the points (±1, 0) and (0, ±1), the matrix corresponding to i is the clockwise rotation by a quarter turn, that of j is the reflection around the first diagonal, and that of k is the reflection around the x-axis.
Properties
Like the quaternions introduced by Hamilton in 1843, they form a four-dimensional real associative algebra. But like the real algebra of 2×2 matrices – and unlike the real alge |
https://en.wikipedia.org/wiki/Souring | Souring is a food preparation technique that causes a physical and chemical change in food by exposing it to an acid. This acid can be added explicitly (e.g., vinegar, lemon juice, lime juice, etc.), or can be produced within the food itself by a microbe, such as Lactobacillus.
Souring is similar to pickling or fermentation, but souring typically occurs in minutes or hours, while pickling and fermentation can take much longer.
Examples
Dairy products produced by souring include:
Clabber,
Cheese,
Crème fraîche,
Cultured buttermilk,
Curd,
Filmjölk,
Kefir,
Paneer,
Smetana,
Soured milk,
Sour cream, and
Yogurt.
Grain products include:
Idli,
Sourdough, and
Sour mash.
Other foods produced by souring include:
Ceviche, Kinilaw, and
Key lime pie.
See also
Fermented milk products
Food preservation
Marination |
https://en.wikipedia.org/wiki/Software%20design | Software design is the process by which an agent creates a specification of a software artifact intended to accomplish goals, using a set of primitive components and subject to constraints. The term is sometimes used broadly to refer to "all the activity involved in conceptualizing, framing, implementing, commissioning, and ultimately modifying" the software, or more specifically "the activity following requirements specification and before programming, as ... [in] a stylized software engineering process."
Software design usually involves problem-solving and planning a software solution. This includes both low-level component and algorithm design and high-level architecture design.
Overview
Software design is the process of envisioning and defining software solutions to one or more sets of problems. One of the main components of software design is the software requirements analysis (SRA). SRA is a part of the software development process that lists specifications used in software engineering.
If the software is "semi-automated" or user centered, software design may involve user experience design yielding a storyboard to help determine those specifications. If the software is completely automated (meaning no user or user interface), a software design may be as simple as a flow chart or text describing a planned sequence of events. There are also semi-standard methods like Unified Modeling Language and Fundamental modeling concepts. In either case, some documentation of the plan is usually the product of the design. Furthermore, a software design may be platform-independent or platform-specific, depending upon the availability of the technology used for the design.
The main difference between software analysis and design is that the output of a software analysis consists of smaller problems to solve. Additionally, the analysis should not vary greatly when performed by different team members or groups. In contrast, the design focuses on capabilities, a |
https://en.wikipedia.org/wiki/Euler%E2%80%93Bernoulli%20beam%20theory | Euler–Bernoulli beam theory (also known as engineer's beam theory or classical beam theory) is a simplification of the linear theory of elasticity which provides a means of calculating the load-carrying and deflection characteristics of beams. It covers the case corresponding to small deflections of a beam that is subjected to lateral loads only. By ignoring the effects of shear deformation and rotatory inertia, it is thus a special case of Timoshenko–Ehrenfest beam theory. It was first enunciated circa 1750, but was not applied on a large scale until the development of the Eiffel Tower and the Ferris wheel in the late 19th century. Following these successful demonstrations, it quickly became a cornerstone of engineering and an enabler of the Second Industrial Revolution.
Additional mathematical models have been developed, such as plate theory, but the simplicity of beam theory makes it an important tool in the sciences, especially structural and mechanical engineering.
History
Prevailing consensus is that Galileo Galilei made the first attempts at developing a theory of beams, but recent studies argue that Leonardo da Vinci was the first to make the crucial observations. Da Vinci lacked Hooke's law and calculus to complete the theory, whereas Galileo was held back by an incorrect assumption he made.
The Bernoulli beam is named after Jacob Bernoulli, who made the significant discoveries. Leonhard Euler and Daniel Bernoulli were the first to put together a useful theory circa 1750.
Static beam equation
The Euler–Bernoulli equation describes the relationship between the beam's deflection and the applied load:
$$\frac{\mathrm{d}^2}{\mathrm{d}x^2} \left( EI \frac{\mathrm{d}^2 w}{\mathrm{d}x^2} \right) = q.$$
The curve w(x) describes the deflection of the beam in the z direction at some position x (recall that the beam is modeled as a one-dimensional object). q is a distributed load, in other words a force per unit length (analogous to pressure being a force per area); it may be a function of x, w, or other variables. E is the elastic modulus and I is the second moment |
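As a standard worked special case (not taken from this excerpt): for a uniform cantilever of length L with constant EI, clamped at x = 0 and free at x = L, under a constant distributed load q, integrating the equation four times with these boundary conditions gives the tip deflection
$$w(L) = \frac{q L^4}{8 E I}.$$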
https://en.wikipedia.org/wiki/Kringle%20domain | Kringle domains are autonomous protein domains that fold into large loops stabilized by 3 disulfide linkages. These are important in protein–protein interactions with blood coagulation factors. Their name refers to the Kringle, a Scandinavian pastry which they somewhat resemble.
Kringle domains have been found in plasminogen, hepatocyte growth factors, prothrombin, and apolipoprotein(a).
Kringles are found throughout the blood clotting and fibrinolytic proteins. Kringle domains are believed to play a role in binding mediators (e.g., membranes, other proteins or phospholipids), and in the regulation of proteolytic activity. Kringle domains are characterised by a triple loop, 3-disulfide bridge structure, whose conformation is defined by a number of hydrogen bonds and small pieces of anti-parallel beta-sheet. They are found in a varying number of copies in some plasma proteins including prothrombin and urokinase-type plasminogen activator, which are serine proteases belonging to MEROPS peptidase family S1A.
Human proteins containing this domain
ATF; F12; F2; HABP2; HGF; HGFAC; KREMEN1; KREMEN2;
LPA; LPAL2; MST1; PIK3IP1; PLAT; PLAU; PLG; PRSS12; ROR1; ROR2; |
https://en.wikipedia.org/wiki/Defective%20coloring | In graph theory, a mathematical discipline, coloring refers to an assignment of colours or labels to vertices, edges and faces of a graph. Defective coloring is a variant of proper vertex coloring. In a proper vertex coloring, the vertices are coloured such that no adjacent vertices have the same colour. In defective coloring, on the other hand, vertices are allowed to have neighbours of the same colour to a certain extent. (See the Glossary of graph theory.)
History
Defective coloring was introduced nearly simultaneously by Burr and Jacobson, Harary and Jones and Cowen, Cowen and Woodall. Surveys of this and related colorings are given by Marietjie Frick. Cowen, Cowen and Woodall focused on graphs embedded on surfaces and gave a complete characterization of all k and d such that every planar graph is (k, d)-colorable. Namely, there does not exist a d such that every planar graph is (1, d)- or (2, d)-colorable; there exist planar graphs which are not (3, 1)-colorable, but every planar graph is (3, 2)-colorable. Together with the (4, 0)-coloring implied by the four color theorem, this solves defective chromatic number for the plane. Poh and Goddard showed that any planar graph has a special (3,2)-coloring in which each color class is a linear forest, and this can be obtained from a more general result of Woodall.
For general surfaces, it was shown that for each genus g, there exists a k = k(g) such that every graph on the surface of genus g is (4, k)-colorable. This was improved to (3, k)-colorable by Dan Archdeacon.
For general graphs, a result of László Lovász from the 1960s, which has been rediscovered many times, provides an O(∆E)-time algorithm for defective coloring of graphs of maximum degree ∆.
Definitions and terminology
Defective coloring
A (k, d)-coloring of a graph G is a coloring of its vertices with k colours such that each vertex v has at most d neighbours having the same colour as the vertex v. We consider k to be a positive integer (it is inconsequential to co |
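A minimal Java sketch of a checker for this definition (the adjacency-list representation and all names are illustrative, not from the article):
import java.util.List;

public class DefectiveColoring {
    // Checks whether `color` is a (k, d)-coloring of the graph given by `adj`:
    // every vertex uses one of the k colours 0..k-1 and has at most d
    // neighbours sharing its own colour.
    static boolean isKDColoring(List<List<Integer>> adj, int[] color, int k, int d) {
        for (int v = 0; v < adj.size(); v++) {
            if (color[v] < 0 || color[v] >= k) return false;
            int defect = 0; // number of same-coloured neighbours of v
            for (int u : adj.get(v)) {
                if (color[u] == color[v]) defect++;
            }
            if (defect > d) return false;
        }
        return true;
    }
}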
https://en.wikipedia.org/wiki/Taplitumomab%20paptox | Taplitumomab paptox is a mouse monoclonal antibody. The antibody itself, taplitumomab, is linked to the protein PAP, an antiviral from Phytolacca americana, a species of pokeweed. This is reflected by the 'paptox' in the drug's name. |
https://en.wikipedia.org/wiki/Schools%20Interoperability%20Framework | The Schools Interoperability Framework, Systems Interoperability Framework (UK), or SIF, is a data-sharing open specification for academic institutions from kindergarten through workforce. This specification is being used primarily in the United States, Canada, the UK, Australia, and New Zealand; however, it is increasingly being implemented in India, and elsewhere.
The specification comprises two parts: an XML specification for modeling educational data which is specific to the educational locale (such as North America, Australia or the UK), and a service-oriented architecture (SOA) based on both direct and brokered RESTful-models for sharing that data between institutions, which is international and shared between the locales.
SIF is not a product, but an industry initiative that enables diverse applications to interact and share data. At one point, SIF was estimated to have been used in more than 48 US states and 6 countries, supporting five million students.
The specification was started and maintained by its specification body, the Schools Interoperability Framework Association, renamed the Access For Learning Community (A4L) in 2015.
History
Traditionally, the standalone applications used by public school districts have the limitation of data isolation; that is, it is difficult to access and share their data. This often results in redundant data entry, data integrity problems, and inefficient or incomplete reporting. In such cases, a student's information can appear in multiple places but may not be identical, for example, or decision makers may be working with incomplete or inaccurate information. Many district and site technology coordinators also experience an increase in technical support problems from maintaining numerous proprietary systems. SIF was created to solve these issues.
The Schools Interoperability Framework (SIF) began as an initiative chiefly championed by Microsoft to create "a blueprint for educational software interoperability and |
https://en.wikipedia.org/wiki/Lennox%20Lewis%20vs.%20Mike%20Tyson | Lennox Lewis vs. Mike Tyson, billed as Lewis–Tyson: Is On, was a heavyweight professional boxing match that took place on June 8, 2002, at the Pyramid Arena in Memphis, Tennessee. The defending unified WBC, IBF, IBO, and The Ring champion Lennox Lewis defeated former undisputed heavyweight champion Mike Tyson by knockout in the eighth round. Prior to the event, Lewis was awarded The Ring magazine heavyweight title, which had been vacant since the late 1980s and was last held by Tyson.
General information
The fight was originally scheduled for April 6, 2002 in Las Vegas. However, Nevada refused to grant Tyson a license after a press conference brawl between Lewis and Tyson (see below). Several other states refused Tyson a license before Memphis finally bid US$12 million in order to host the fight.
The referee for the fight was Eddie Cotton, officiating his 20th world title bout. Alfred Buqwana of South Africa, Anek Hongtongkam of Thailand and Bob Logist of Belgium were appointed as judges, as both the WBC and the Tennessee Athletic Commission wanted judges from different continents. Lewis weighed in at and Tyson at (the second highest of his career).
The fight was promoted by Main Events and was a pay-per-view shown as a joint collaboration between HBO and Showtime in the United States and on Sky Box Office in the United Kingdom. The joint promotion was a rarity as at the time HBO and Showtime were arch-rivals in American boxing broadcasting, though it would later be repeated in 2015 with the Floyd Mayweather Jr. vs. Manny Pacquiao match. HBO's Jim Lampley called the fight alongside Showtime's Bobby Czyz, and in addition each fighter was introduced by a ring announcer allied with a specific network: Lewis had HBO's Michael Buffer introduce him, while Jimmy Lennon Jr. of Showtime did the same for Tyson. It was the highest-grossing event in pay-per-view history, generating US$106.9 million from 1.95 million buys in the U.S., until it was surpassed by De La Hoya v |
https://en.wikipedia.org/wiki/Shimura%20variety | In number theory, a Shimura variety is a higher-dimensional analogue of a modular curve that arises as a quotient variety of a Hermitian symmetric space by a congruence subgroup of a reductive algebraic group defined over Q. Shimura varieties are not algebraic varieties but are families of algebraic varieties. Shimura curves are the one-dimensional Shimura varieties. Hilbert modular surfaces and Siegel modular varieties are among the best known classes of Shimura varieties.
Special instances of Shimura varieties were originally introduced by Goro Shimura in the course of his generalization of the complex multiplication theory. Shimura showed that while initially defined analytically, they are arithmetic objects, in the sense that they admit models defined over a number field, the reflex field of the Shimura variety. In the 1970s, Pierre Deligne created an axiomatic framework for the work of Shimura. In 1979, Robert Langlands remarked that Shimura varieties form a natural realm of examples for which equivalence between motivic and automorphic L-functions postulated in the Langlands program can be tested. Automorphic forms realized in the cohomology of a Shimura variety are more amenable to study than general automorphic forms; in particular, there is a construction attaching Galois representations to them.
Definition
Shimura datum
Let S = ResC/R Gm be the Weil restriction of the multiplicative group from complex numbers to real numbers. It is a real algebraic group, whose group of R-points, S(R), is C* and group of C-points is C*×C*. A Shimura datum is a pair (G, X) consisting of a (connected) reductive algebraic group G defined over the field Q of rational numbers and a G(R)-conjugacy class X of homomorphisms h: S → GR satisfying the following axioms:
For any h in X, only the weights (0,0), (1,−1), (−1,1) may occur in gC, i.e. the complexified Lie algebra of G decomposes into a direct sum
$$\mathfrak{k} \oplus \mathfrak{p}^{+} \oplus \mathfrak{p}^{-},$$
where for any z ∈ S, h(z) acts trivially on the first summand and via $z/\bar{z}$ (respectively, $\bar{z}/z$) on the second (respectively, third) summand. |
https://en.wikipedia.org/wiki/Thabit%20number | In number theory, a Thabit number, Thâbit ibn Qurra number, or 321 number is an integer of the form $3 \cdot 2^n - 1$ for a non-negative integer n.
The first few Thabit numbers are:
2, 5, 11, 23, 47, 95, 191, 383, 767, 1535, 3071, 6143, 12287, 24575, 49151, 98303, 196607, 393215, 786431, 1572863, ...
The 9th century mathematician, physician, astronomer and translator Thābit ibn Qurra is credited as the first to study these numbers and their relation to amicable numbers.
Properties
The binary representation of the Thabit number 3·2n−1 is n+2 digits long, consisting of "10" followed by n 1s.
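A short Java sketch (illustrative) that generates the first few Thabit numbers and prints their binary form, which exhibits the "10" followed by n ones pattern:
import java.math.BigInteger;

public class ThabitNumbers {
    public static void main(String[] args) {
        BigInteger three = BigInteger.valueOf(3);
        for (int n = 0; n < 10; n++) {
            // Thabit number 3 * 2^n - 1.
            BigInteger t = three.shiftLeft(n).subtract(BigInteger.ONE);
            // Binary form: "10" followed by n ones, i.e. n + 2 digits.
            System.out.println(n + ": " + t + " = " + t.toString(2)
                    + " (probable prime: " + t.isProbablePrime(30) + ")");
        }
    }
}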
The first few Thabit numbers that are prime (Thabit primes or 321 primes):
2, 5, 11, 23, 47, 191, 383, 6143, 786431, 51539607551, 824633720831, ...
There are 67 known prime Thabit numbers. Their n values are:
0, 1, 2, 3, 4, 6, 7, 11, 18, 34, 38, 43, 55, 64, 76, 94, 103, 143, 206, 216, 306, 324, 391, 458, 470, 827, 1274, 3276, 4204, 5134, 7559, 12676, 14898, 18123, 18819, 25690, 26459, 41628, 51387, 71783, 80330, 85687, 88171, 97063, 123630, 155930, 164987, 234760, 414840, 584995, 702038, 727699, 992700, 1201046, 1232255, 2312734, 3136255, 4235414, 6090515, 11484018, 11731850, 11895718, 16819291, 17748034, 18196595, 18924988, 20928756, ...
The primes for 234760 ≤ n ≤ 3136255 were found by the distributed computing project 321 search.
In 2008, PrimeGrid took over the search for Thabit primes. It is still searching and has already found all currently known Thabit primes with n ≥ 4235414. It is also searching for primes of the form 3·2n+1; such primes are called Thabit primes of the second kind or 321 primes of the second kind.
The first few Thabit numbers of the second kind are:
4, 7, 13, 25, 49, 97, 193, 385, 769, 1537, 3073, 6145, 12289, 24577, 49153, 98305, 196609, 393217, 786433, 1572865, ...
The first few Thabit primes of the second kind are:
7, 13, 97, 193, 769, 12289, 786433, 3221225473, 206158430209, 6597069766657, 221360928884514619393, ...
Their n values are:
1, 2, |
https://en.wikipedia.org/wiki/Human%20back | The human back, also called the dorsum (plural: dorsa), is the large posterior area of the human body, rising from the top of the buttocks to the back of the neck. It is the surface of the body opposite from the chest and the abdomen. The vertebral column runs the length of the back and creates a central area of recession. The breadth of the back is created by the shoulders at the top and the pelvis at the bottom.
Back pain is a common medical condition, generally benign in origin.
Structure
The central feature of the human back is the vertebral column, specifically the length from the top of the thoracic vertebrae to the bottom of the lumbar vertebrae, which houses the spinal cord in its spinal canal, and which generally has some curvature that gives shape to the back. The ribcage extends from the spine at the top of the back (with the top of the ribcage corresponding to the T1 vertebra), more than halfway down the length of the back, leaving an area with less protection between the bottom of the ribcage and the hips. The width of the back at the top is defined by the scapula, the broad, flat bones of the shoulders.
Muscles
The muscles of the back can be divided into three distinct groups: a superficial group, an intermediate group and a deep group.
Superficial group
The superficial group, also known as the appendicular group, is primarily associated with movement of the appendicular skeleton. It is composed of trapezius, latissimus dorsi, rhomboid major, rhomboid minor and levator scapulae. It is innervated by anterior rami of spinal nerves, reflecting its embryological origin outside the back.
Intermediate group
The intermediate group is also known as respiratory group as it may serve a respiratory function. It is composed of serratus posterior superior and serratus posterior inferior. Like the superficial group, it is innervated by anterior rami of spinal nerves.
Deep group
The deep group, also known as the intrinsic group due to its embryological origin in |
https://en.wikipedia.org/wiki/Rate%20of%20reinforcement | In behaviorism, the rate of reinforcement is the number of reinforcements per unit time, usually per minute. The symbol for this rate is usually Rf. Its first major exponent was B.F. Skinner (1939). It is used in the matching law.
Rf = # of reinforcements/unit of time = SR+/t
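A one-line worked example with invented numbers: 12 reinforcements delivered over 4 minutes give Rf = 12 reinforcements / 4 min = 3 reinforcements per minute.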
See also
Rate of response |
https://en.wikipedia.org/wiki/LocalTalk | LocalTalk is a particular implementation of the physical layer of the AppleTalk networking system from Apple Computer.
LocalTalk specifies a system of shielded twisted pair cabling, plugged into self-terminating transceivers, running at a rate of 230.4 kbit/s. CSMA/CA was implemented as a random multiple access method.
Networking was envisioned for the Macintosh during its planning, so the Mac was given expensive RS-422-capable serial ports, first on a nine-pin D-connector, then on a Mini-DIN-8 connector. The ports were driven by the Zilog SCC, which could serve either as a standard UART or handle the much more complicated HDLC protocol, a packet-oriented protocol that incorporated addressing, bit-stuffing, and packet checksumming in hardware. Coupled with the RS-422 electrical connections, this provided a reasonably high-speed data connection.
The 230.4 kbit/s bit rate is the highest in the series of standard serial bit rates (110, 150, 300, 600, 1200, 2400, 4800, 9600, 14400, 19200, 28800, 38400, 57600, 115200, 230400) derived from the 3.6864 MHz clock after the customary divide-by-16. This clock frequency, 3.6864 MHz, was chosen (in part) to support the common asynchronous baud rates up to 38.4 kbit/s using the SCC's internal baud-rate generator. When the SCC's internal PLL was used to lock to the clock embedded in the LocalTalk serial data stream (using its FM0 encoding method) a divide-by-16 setting on the PLL yielded the fastest rate available, namely 230.4 kbit/s.
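As a quick check of the arithmetic (one bit per clock period under the FM0 coding, as described above): 3,686,400 Hz ÷ 16 = 230,400, matching the 230.4 kbit/s rate.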
Originally released as "AppleTalk Personal Network", LocalTalk used shielded twisted-pair cable with 3-pin Mini-DIN connectors. Cables were daisy-chained from transceiver to transceiver. Each transceiver had two 3-pin Mini-DIN ports, and a "pigtail" cable to connect to the Mac's DE-9 serial connector. Later, when the Mac Plus introduced the 8-pin Mini-DIN serial connector, transceivers were updated as well.
A variation of LocalTalk called PhoneNET was introduced by Farallon Compu |
https://en.wikipedia.org/wiki/Clostridium%20perfringens%20beta%20toxin | Clostridium perfringens beta toxin is one of the four major lethal protein toxins produced by Clostridium perfringens Type B and Type C strains. It is a necrotizing agent and it induces hypertension by release of catecholamine. It has been shown to cause necrotic enteritis in mammals and induces necrotizing intestinal lesions in the rabbit ileal loop model. C. perfringens beta toxin is susceptible to breakdown by proteolytic enzymes, particularly trypsin. Beta toxin is therefore highly lethal to infant mammals, because trypsin inhibitors present in the colostrum protect the toxin from proteolytic degradation.
Structure and homology
Clostridium perfringens beta toxin shows significant genetic homology with several other toxins. C. perfringens beta toxin shows 28% homology with S. aureus alpha toxin and similar homology to S. aureus gamma-toxin and leukocidin. It appears in two forms. The smaller, with a molecular mass of 34 kDa, represents the monomeric gene product. The larger has a molecular mass of 118 kDa and may be an oligomer of smaller units. The first 27 amino acids may encode a signal that allows beta toxin to cross the cell membrane, further evidenced by the presence of beta toxin in extracellular fluid of C. perfringens cultures.
Function
Pore formation
Because C. perfringens beta toxin shares homology with S. aureus pore-forming alpha toxin, it was hypothesized that beta toxin acts in a similar way. Upon investigation, it was found that C. perfringens beta toxin forms cation-selective pores of 1.6–1.8 nm in cell membranes, resulting in swelling and lysis in HL60 cells. Treatment of these cells with beta toxin induces an efflux of K+ and influxes of Ca2+, Cl− and Na+. Heat-stable beta-toxin oligomers are shown to bind to cell membranes of human umbilical vein endothelial cells; endothelial cells are beta toxin's primary target upon introduction. Further work on beta toxin has been hampered by its ineffectiveness on many readily available cell lines.
Clinical significance
C. perfringens Type |
https://en.wikipedia.org/wiki/Bioinformatics%20discovery%20of%20non-coding%20RNAs | Non-coding RNAs have been discovered using both experimental and bioinformatic approaches. Bioinformatic approaches can be divided into three main categories. The first involves homology search, although these techniques are by definition unable to find new classes of ncRNAs. The second category includes algorithms designed to discover specific types of ncRNAs that have similar properties. Finally, some discovery methods are based on very general properties of RNA, and are thus able to discover entirely new kinds of ncRNAs.
Discovery by homology search
Homology search refers to the process of searching a sequence database for RNAs that are similar to already known RNA sequences. Any algorithm that is designed for homology search of nucleic acid sequences can be used, e.g., BLAST. However, such algorithms typically are not as sensitive or accurate as algorithms specifically designed for RNA.
Of particular importance for RNA is its conservation of a secondary structure, which can be modeled to achieve additional accuracy in searches. For example, covariance models can be viewed as an extension of a profile hidden Markov model that also reflects conserved secondary structure. Covariance models are implemented in the Infernal software package.
Discovery of specific types of ncRNAs
Some types of RNAs have shared properties that algorithms can exploit. For example, tRNAscan-SE is specialized to finding tRNAs. The heart of this program is a tRNA homology search based on covariance models, but other tRNA-specific search programs are used to accelerate searches.
The properties of snoRNAs have enabled the development of programs to detect new examples of snoRNAs, including those that might be only distantly related to previously known examples. Computer programs implementing such approaches include snoscan and snoReport.
Similarly, several algorithms have been developed to detect microRNAs. Examples include miRNAFold and miRNAminer.
Discovery by general pr |
https://en.wikipedia.org/wiki/Linked%20data%20structure | In computer science, a linked data structure is a data structure which consists of a set of data records (nodes) linked together and organized by references (links or pointers). The link between data can also be called a connector.
In linked data structures, the links are usually treated as special data types that can only be dereferenced or compared for equality. Linked data structures are thus contrasted with arrays and other data structures that require performing arithmetic operations on pointers. This distinction holds even when the nodes are actually implemented as elements of a single array, and the references are actually array indices: as long as no arithmetic is done on those indices, the data structure is essentially a linked one.
Linking can be done in two ways: using dynamic allocation and using array index linking.
Linked data structures include linked lists, search trees, expression trees, and many other widely used data structures. They are also key building blocks for many efficient algorithms, such as topological sort and set union-find.
Common types of linked data structures
Linked lists
A linked list is a collection of structures ordered not by their physical placement in memory but by logical links that are stored as part of the data in the structure itself. The nodes need not be stored in adjacent memory locations. Every structure has a data field and an address field; the address field contains the address of its successor.
Linked lists can be singly, doubly or multiply linked and can be either linear or circular.
Basic properties
Objects, called nodes, are linked in a linear sequence.
A reference to the first node of the list is always kept. This is called the 'head' or 'front'.
A linked list with three nodes, each containing two fields: an integer value and a link to the next node
Example in Java
This is an example of the node class used to store integers in a Java implementation of a linked list:
public class IntNode {
    public int value;     // the integer stored in this node
    public IntNode link;  // reference to the next node, or null at the end of the list

    public IntNode(int value, IntNode link) {
        this.value = value;
        this.link = link;
    }
} |
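A brief usage sketch (not from the article) building and traversing a three-node list with this class:
public class IntNodeDemo {
    public static void main(String[] args) {
        // Build the list 1 -> 2 -> 3 by linking nodes back to front.
        IntNode head = new IntNode(1, new IntNode(2, new IntNode(3, null)));

        // Traverse by following each node's link until null.
        for (IntNode n = head; n != null; n = n.link) {
            System.out.println(n.value);
        }
    }
}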
https://en.wikipedia.org/wiki/Small%20Veblen%20ordinal | In mathematics, the small Veblen ordinal is a certain large countable ordinal, named after Oswald Veblen. It is occasionally called the Ackermann ordinal, though the ordinal originally described under that name is somewhat smaller than the small Veblen ordinal.
There is no standard notation for ordinals beyond the Feferman–Schütte ordinal Γ0. Most systems of notation use special symbols, some of which are modifications of the Veblen functions to produce countable ordinals even for uncountable arguments, and some of which are "collapsing functions".
The small Veblen ordinal is the limit of ordinals that can be described using a version of the Veblen functions with finitely many arguments. It is the ordinal that measures the strength of Kruskal's theorem. It is also the ordinal type of a certain ordering of rooted trees. |
https://en.wikipedia.org/wiki/Paranormal%20operator | In mathematics, especially operator theory, a paranormal operator is a generalization of a normal operator. More precisely, a bounded linear operator T on a complex Hilbert space H is said to be paranormal if
$$\|T x\|^2 \leq \|T^2 x\|$$
for every unit vector x in H.
The class of paranormal operators was introduced by V. Istratescu in the 1960s, though the term "paranormal" is probably due to Furuta.
Every hyponormal operator (in particular, a subnormal operator, a quasinormal operator and a normal operator) is paranormal. If T is paranormal, then Tn is paranormal. On the other hand, Halmos gave an example of a hyponormal operator T such that T2 is not hyponormal. Consequently, not every paranormal operator is hyponormal.
A compact paranormal operator is normal. |
https://en.wikipedia.org/wiki/Pratique | Pratique is the license given to a ship to enter a port, that indicates to local authorities (on assurance from the captain) that it is free from contagious disease. The clearance granted is commonly referred to as free pratique. A ship can signal a request for pratique by flying a solid yellow square-shaped flag. This yellow flag is the Q flag in the set of international maritime signal flags.
In the event that free pratique is not granted, a vessel will be held in quarantine, according to the customs and health regulations prevailing at the port of entry, typically until a customs or biosecurity officer makes a satisfactory inspection.
Since flying the Q flag involves a request for boarding by Port State Control, it has also become an invitation to Customs to inspect a vessel for dutiable goods or contraband, as in the Rich Harvest case, where a yacht carrying a large quantity of alcohol flew the Q flag in order to seek exemption from having to pay duty during a temporary visit to port. The same vessel was also flying the Q flag when she was boarded in Cape Verde and found to be carrying more than one ton of cocaine. However, although the captain had thereby invited the authorities to make an inspection (being, according to his claim, ignorant of the fact that the boat was carrying contraband), he and the crew were nevertheless arrested for trafficking.
A question over who granted pratique arose with the Ruby Princess COVID-19 incident.
See also
Quarantine |
https://en.wikipedia.org/wiki/Security%20testing | Security testing is a process intended to detect flaws in the security mechanisms of an information system and as such help enable it to protect data and maintain functionality as intended. Due to the logical limitations of security testing, passing the security testing process is not an indication that no flaws exist or that the system adequately satisfies the security requirements.
Typical security requirements may include specific elements of confidentiality, integrity, authentication, availability, authorization and non-repudiation. Actual security requirements tested depend on the security requirements implemented by the system. Security testing as a term has a number of different meanings and can be completed in a number of different ways. As such, a Security Taxonomy helps us to understand these different approaches and meanings by providing a base level to work from.
Confidentiality
Confidentiality is a security measure which protects against the disclosure of information to parties other than the intended recipient; it is by no means the only way of ensuring the security of a system.
Integrity
Integrity of information refers to protecting information from being modified by unauthorized parties.
An integrity measure is intended to allow the receiver to determine that the information provided by a system is correct.
Integrity schemes often use some of the same underlying technologies as confidentiality schemes, but they usually involve adding information to a communication, to form the basis of an algorithmic check, rather than encoding all of the communication.
Integrity testing also checks whether the correct information is transferred from one application to another.
Authentication
This might involve confirming the identity of a person, tracing the origins of an artifact, ensuring that a product is what its packaging and labelling claims to be, or assuring that a computer program is a trusted one.
Authorization
Authorization is the process of determining that a requester is allowed to receive a service or perform an operation. |
https://en.wikipedia.org/wiki/Pest%20Management%20Regulatory%20Agency | The Pest Management Regulatory Agency (PMRA) is the Canadian government agency responsible for the regulation of pest control products in Canada under the federal authority of the Pest Control Products Act and Regulations. The agency is a branch that reports to Parliament through Health Canada. The PMRA is responsible for providing access to pest management tools while minimizing the risks to human health and the environment by “using modern evidence-based scientific approaches to pesticide regulation, in an open and transparent manner”. Their main activity areas include: new product evaluation, post market review and compliance and enforcement.
The PMRA works with provincial, territorial and federal departments in Canada to help refine and strengthen pesticide regulation across the country. Outside of Canada, the Agency works closely with international organizations such as the United States Environmental Protection Agency (EPA), the North American Free Trade Agreement Technical Working Group, the European Union, and the Organisation for Economic Co-operation and Development (OECD). They work to align the processes used to regulate pest control products and uphold the protection of health and the environment.
As of April 2017, the agency has approximately 400 employees. Over 75% of PMRA’s employees are scientists, with specializations in toxicology, environmental science, biology and chemistry. Other employees work in policy, regulations, communications and administration. The PMRA's headquarters is in the Sir Charles Tupper Building in Ottawa, ON.
History
In 1990, the Pesticides Registration Review Team was tasked with developing recommendations to improve the federal pesticide regulatory system. The Review Team consulted with Canadians across the country proposing a major reform to the federal pest management regulatory system. Proposed reforms from the Review Team were to establish a multi-stakeholder advisory council and the federal/provincial/territorial c |
https://en.wikipedia.org/wiki/Loopless%20algorithm | In computational combinatorics, a loopless algorithm or loopless imperative algorithm is an imperative algorithm that generates successive combinatorial objects, such as partitions, permutations, and combinations, spending constant time per object and linear time on the first object. The objects must be immediately available in simple form without requiring any additional steps.
A loopless functional algorithm is a functional algorithm that takes the form unfoldr step • prolog where step takes constant time and prolog takes linear time in the size of the input. The standard function unfoldr is a right-associative Bird unfold. |
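As a concrete illustration (a classic loopless algorithm, chosen here as an example rather than taken from the article): Ehrlich's focus-pointer method generates all n-bit binary reflected Gray code strings, doing a constant amount of work between successive strings; a Java sketch:
public class LooplessGrayCode {
    public static void main(String[] args) {
        int n = 3;
        int[] a = new int[n];      // current bit string, initially all zeros
        int[] f = new int[n + 1];  // focus pointers, initially f[i] = i
        for (int i = 0; i <= n; i++) f[i] = i;

        while (true) {
            visit(a);
            int j = f[0];          // index of the bit to flip: O(1) work
            f[0] = 0;
            if (j == n) break;     // all 2^n strings have been visited
            f[j] = f[j + 1];
            f[j + 1] = j + 1;
            a[j] ^= 1;             // flip exactly one bit (Gray code property)
        }
    }

    static void visit(int[] a) {
        StringBuilder sb = new StringBuilder();
        for (int i = a.length - 1; i >= 0; i--) sb.append(a[i]);
        System.out.println(sb);
    }
}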
https://en.wikipedia.org/wiki/Data%20aggregation | Data aggregation is the compiling of information from databases with intent to prepare combined datasets for data processing.
Description
The United States Geological Survey explains that, “when data are well documented, you know how and where to look for information and the results you return will be what you expect.” The source information for data aggregation may originate from public records and criminal databases. The information is packaged into aggregate reports and then sold to businesses, as well as to local, state, and government agencies. This information can also be useful for marketing purposes. In the United States, many data brokers' activities fall under the Fair Credit Reporting Act (FCRA) which regulates consumer reporting agencies. The agencies then gather and package personal information into consumer reports that are sold to creditors, employers, insurers, and other businesses.
Various reports of information are provided by database aggregators. Individuals may request their own consumer reports which contain basic biographical information such as name, date of birth, current address, and phone number. Employee background check reports, which contain highly detailed information such as past addresses and length of residence, professional licenses, and criminal history, may be requested by eligible and qualified third parties. Not only can this data be used in employee background checks, but it may also be used to make decisions about insurance coverage, pricing, and law enforcement. Privacy activists argue that database aggregators can provide erroneous information.
Role of the Internet
The potential of the Internet to consolidate and manipulate information has a new application in data aggregation, also known as screen scraping. The Internet gives users the opportunity to consolidate their usernames and passwords, or PINs. Such consolidation enables consumers to access a wide variety of PIN-protected websites containing personal information |
https://en.wikipedia.org/wiki/Universal%20graph | In mathematics, a universal graph is an infinite graph that contains every finite (or at-most-countable) graph as an induced subgraph. A universal graph of this type was first constructed by Richard Rado and is now called the Rado graph or random graph. More recent work
has focused on universal graphs for a graph family F: that is, an infinite graph belonging to F that contains all finite graphs in F. For instance, the Henson graphs are universal in this sense for the i-clique-free graphs.
A universal graph for a family F of graphs can also refer to a member of a sequence of finite graphs that contains all graphs in F; for instance, every finite tree is a subgraph of a sufficiently large hypercube graph,
so a hypercube can be said to be a universal graph for trees. However it is not the smallest such graph: it is known that there is a universal graph for n-vertex trees, with only n vertices and O(n log n) edges, and that this is optimal. A construction based on the planar separator theorem can be used to show that n-vertex planar graphs have universal graphs with O(n^{3/2}) edges, and that bounded-degree planar graphs have universal graphs with O(n) edges. It is also possible to construct universal graphs for planar graphs that have n^{1+o(1)} vertices.
Sumner's conjecture states that tournaments are universal for polytrees, in the sense that every tournament with 2n − 2 vertices contains every polytree with n vertices as a subgraph.
A family F of graphs has a universal graph of polynomial size, containing every n-vertex graph as an induced subgraph, if and only if it has an adjacency labelling scheme in which vertices may be labeled by O(log n)-bit bitstrings such that an algorithm can determine whether two vertices are adjacent by examining their labels. For, if a universal graph of this type exists, the vertices of any graph in F may be labeled by the identities of the corresponding vertices in the universal graph, and conversely if a labeling scheme exists then a universal graph may be constructed having a vertex |
https://en.wikipedia.org/wiki/Royal%20Air%20Force%20roundels | The air forces of the United Kingdom – the Royal Navy's Fleet Air Arm, the Army's Army Air Corps and the Royal Air Force use a roundel, a circular identification mark, painted on aircraft to identify them to other aircraft and ground forces. In one form or another, it has been used on British military aircraft from 1915 to the present.
Background
When the First World War started in 1914 it was the habit of ground troops to fire on all aircraft, friend or foe, so that the need for some form of identification mark became evident. At first the Union Flag was painted under the wings and on the sides of the fuselage. It soon became obvious that at a distance the St George's Cross of the Union Flag was likely to be confused with the Iron Cross that was already being used to identify German aircraft. After the use of a Union Flag inside a shield was tried it was decided to follow the lead of the French who used a tricolour cockade (a roundel of red and white with a blue centre). The British reversed the colours and it became the standard marking on Royal Flying Corps aircraft from 11 December 1914, although it was well into 1915 before the new marking was used with complete consistency.
The official order stated:
The Royal Naval Air Service specified in A.I.D. SK. No. A78 a five-foot red ring with a white centre and a thin white outline on the lower surfaces of the lower wings at mid span, from October 1914 until it was decided to standardise on the RFC roundel for all British military aircraft in June 1915.
With the same roundel being carried by RFC and RNAS aircraft, the use of the Union Jack was discontinued. The Royal Flying Corps and its successor the Royal Air Force have employed numerous versions of the roundel since then.
By 1917, a thin white outline was usually added to the roundel, to make the blue of the outer circle easier to distinguish from the dark PC.10 and PC.12 protective doping. On squadrons operating at night there was not the same need to make th |
https://en.wikipedia.org/wiki/Quasi-solid | Quasi-solid, Falsely-solid, or semisolid is the physical term for something whose state lies between a solid and a liquid. While similar to solids in some respects, such as having the ability to support their own weight and hold their shapes, a quasi-solid also shares some properties of liquids, such as conforming in shape to something applying pressure to it and the ability to flow under pressure. The words quasi-solid, semisolid, and semiliquid may be used interchangeably.
Quasi-solids and semisolids are sometimes described as amorphous because at the microscopic scale they have a disordered structure unlike the more common crystalline solids. They should not be confused with amorphous solids as they are not solids and exhibit properties such as flow which bulk solids do not.
Examples
Pharmaceutical and cosmetic creams, gels, and ointments, e.g. petroleum jelly, toothpaste, hand sanitizer
Foods, e.g. pudding, guacamole, salsa, mayonnaise, whipping cream, peanut butter, jelly, jam
See also
Plasticity (physics)
Viscosity
Premelting |
https://en.wikipedia.org/wiki/History%20of%20CAD%20software | Designers have used computers for calculations since their invention. Digital computers were used in power system analysis or optimization as early as proto-"Whirlwind" in 1949. Circuit design theory or power network methodology was algebraic, symbolic, and often vector-based.
1940s–1950s
Between the mid-1940s and 1950s, various developments were made in computer software. Some of these developments include servo-motors controlled by generated pulse (1949), a digital computer with built-in operations to automatically coordinate transforms to compute radar related vectors (1951), and the graphic mathematical process of forming a shape with a digital machine tool (1952).
In 1953, MIT researcher Douglas T. Ross saw the "interactive display equipment" being used by radar operators, believing it would be exactly what his SAGE-related data reduction group needed. Ross and the other researchers from the Massachusetts Institute of Technology Lincoln Laboratory were the sole users of the complex display systems installed for the pre-SAGE Cape Cod system. Ross claimed in an interview that they "used it for their own personal workstation." The designers of these early computers built utility programs to ensure programmers could debug software, using flowcharts on a display scope, with logical switches that could be opened and closed during the debugging session. They found that they could create electronic symbols and geometric figures to create simple circuit diagrams and flowcharts. These programs also enabled objects to be reproduced at will; it also was possible to change their orientation, linkage (flux, mechanical, lexical scoping), or scale. This presented numerous possibilities to them.
Ross coined the term computer-aided design (CAD) in 1959.
1960s
The invention of the 3D CAD/CAM is often attributed to French engineer Pierre Bézier (Arts et Métiers ParisTech, Renault). Between 1966 and 1968, after his mathematical work concerning surfaces, he developed UNISURF |
https://en.wikipedia.org/wiki/Zoospore | A zoospore is a motile asexual spore that uses a flagellum for locomotion. Also called swarm spores, these spores are created by some protists, bacteria, and fungi to propagate themselves.
Diversity
Flagella types
Zoospores may possess one or more distinct types of flagella - tinsel or "decorated", and whiplash, in various combinations.
Tinsellated (straminipilous) flagella have lateral filaments known as mastigonemes perpendicular to their main axis, which allow for more surface area and disturbance of the medium, giving them the property of a rudder; that is, they are used for steering.
Whiplash flagella are straight and power the zoospore through its medium. The "default" zoospore has only the propelling whiplash flagella.
Both tinsel and whiplash flagella beat in a sinusoidal wave pattern, but when both are present, the tinsel beats in the opposite direction of the whiplash, to give two axes of control of motility.
Morphological types
In eukaryotes, the four main types of zoospore are illustrated in Fig. 1 at right:
Posterior whiplash flagella are a characteristic of the Chytridiomycota, and a proposed uniting trait of the opisthokonts, a large clade of eukaryotes containing animals and fungi. Most of these have a single posterior flagellum (Fig. 1a), but the Neocallimastigales have up to 16 (Fig. 1b).
Anisokonts are biflagellated zoospores with two whiplash-type flagella of unequal length (Fig. 1c). These are found in some of the Myxomycota and Plasmodiophoromycota.
Zoospores with a single anterior flagellum (Fig. 1d) of the tinsel type are characteristic of Hyphochytriomycetes.
Heterokont zoospores are biflagellated (Fig. 1e, f), with both whiplash (smooth) and tinsel-type (bearing fine outgrowths called mastigonemes) flagella attached anteriorly or laterally. These zoospores are characteristic of the Oomycota and other heterokonts.
Zoosporangium
A zoosporangium is the asexual structure (sporangium) in which the zoospores develop in plants, fungi, or protists |
https://en.wikipedia.org/wiki/Microdialysis | Microdialysis is a minimally-invasive sampling technique that is used for continuous measurement of free, unbound analyte concentrations in the extracellular fluid of virtually any tissue. Analytes may include endogenous molecules (e.g. neurotransmitter, hormones, glucose, etc.) to assess their biochemical functions in the body, or exogenous compounds (e.g. pharmaceuticals) to determine their distribution within the body. The microdialysis technique requires the insertion of a small microdialysis catheter (also referred to as microdialysis probe) into the tissue of interest. The microdialysis probe is designed to mimic a blood capillary and consists of a shaft with a semipermeable hollow fiber membrane at its tip, which is connected to inlet and outlet tubing. The probe is continuously perfused with an aqueous solution (perfusate) that closely resembles the (ionic) composition of the surrounding tissue fluid at a low flow rate of approximately 0.1-5μL/min. Once inserted into the tissue or (body)fluid of interest, small solutes can cross the semipermeable membrane by passive diffusion. The direction of the analyte flow is determined by the respective concentration gradient and allows the usage of microdialysis probes as sampling as well as delivery tools. The solution leaving the probe (dialysate) is collected at certain time intervals for analysis.
History
The microdialysis principle was first employed in the early 1960s, when push-pull canulas and dialysis sacs were implanted into animal tissues, especially into rodent brains, to directly study the tissues' biochemistry. While these techniques had a number of experimental drawbacks, such as the number of samples per animal or no/limited time resolution, the invention of continuously perfused dialytrodes in 1972 helped to overcome some of these limitations. Further improvement of the dialytrode concept resulted in the invention of the "hollow fiber", a tubular semipermeable membrane with a diameter of ~200-300μm |
https://en.wikipedia.org/wiki/1seg | 1seg is a mobile terrestrial digital audio/video and data broadcasting service in Japan, Argentina, Brazil, Chile, Uruguay, Paraguay, Peru and the Philippines. Service began experimentally during 2005 and commercially on April 1, 2006. It is designed as a component of ISDB-T, the terrestrial digital broadcast system used in those countries, as each channel is divided into 13 segments, with a further segment separating it from the next channel; an HDTV broadcast signal occupies 12 segments, leaving the remaining (13th) segment for mobile receivers, hence the name, "1seg" or "One Seg".
Its use in Brazil was established in late 2007 (starting in just a few cities), with a slight difference from the Japanese counterpart: it is broadcast under a 30 frame/s transmission setting (Japanese broadcasts are under the 15 frame/s transmission setting).
Technical information
The ISDB-T system uses the UHF band at frequencies between 470 and 770 MHz (806 MHz in Brazil), giving a total bandwidth of 300 MHz. The bandwidth is divided into fifty channels, numbered 13 through 62. Each channel is 6 MHz wide, consisting of a 5.57 MHz wide signalling band and a 430 kHz guard band to limit cross-channel interference. Each of these channels is further divided into 13 segments, each with 428 kHz of bandwidth. 1seg uses a single one of these segments to carry the 1seg transport stream.
1seg, like ISDB-T, uses QPSK for modulation, with 2/3 forward error correction and a 1/4 guard ratio. The total data rate is 416 kbit/s.
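As a quick check of the channel arithmetic above, a minimal Python sketch (not from the article; the computed 436 kHz guard figure matches the quoted ~430 kHz after rounding):

```python
# ISDB-T channel arithmetic as described above.
channels = 50                                    # channels 13 through 62
channel_mhz = 6.0
total_band_mhz = channels * channel_mhz          # 300 MHz between 470 and 770 MHz
segments = 13
segment_khz = 428
signalling_khz = segments * segment_khz          # 5564 kHz, i.e. the ~5.57 MHz band
guard_khz = channel_mhz * 1000 - signalling_khz  # 436 kHz, the quoted ~430 kHz guard
one_seg_rate_kbps = 416                          # QPSK, 2/3 FEC, 1/4 guard ratio
print(total_band_mhz, signalling_khz, guard_khz, one_seg_rate_kbps)
```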
The television system uses an H.264/MPEG-4 AVC video stream and an HE-AAC audio stream multiplexed into an MPEG transport stream. The maximum video resolution is 320x240 pixels, with a video bitrate of between 220 and 320 kbit/s. Audio conforms to HE-AAC profile, with a bitrate of 48 to 64 kbit/s. Additional data (EPG, interactive services, etc.) is transmitted using BML and occupies the remaining 10 to 100 kbit/s of bandwidth.
Conditional access and copy control are implemented in 1seg broa |
https://en.wikipedia.org/wiki/Error%20threshold%20%28evolution%29 | In evolutionary biology and population genetics, the error threshold (or critical mutation rate) is a limit on the number of base pairs a self-replicating molecule may have before mutation will destroy the information in subsequent generations of the molecule. The error threshold is crucial to understanding "Eigen's paradox".
The error threshold is a concept in the origins of life (abiogenesis), in particular of very early life, before the advent of DNA. It is postulated that the first self-replicating molecules might have been small ribozyme-like RNA molecules. These molecules consist of strings of base pairs or "digits", and their order is a code that directs how the molecule interacts with its environment. All replication is subject to mutation error. During the replication process, each digit has a certain probability of being replaced by some other digit, which changes the way the molecule interacts with its environment, and may increase or decrease its fitness, or ability to reproduce, in that environment.
Fitness landscape
It was noted by Manfred Eigen in his 1971 paper (Eigen 1971) that this mutation process places a limit on the number of digits a molecule may have. If a molecule exceeds this critical size, the effect of the mutations becomes overwhelming and a runaway mutation process will destroy the information in subsequent generations of the molecule. The error threshold is also controlled by the "fitness landscape" for the molecules. The fitness landscape is characterized by the two concepts of height (=fitness) and distance (=number of mutations). Similar molecules are "close" to each other, and molecules that are fitter than others and more likely to reproduce, are "higher" in the landscape.
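To make the size limit concrete, here is a minimal Python sketch of the standard Eigen (1971) threshold, assuming the textbook form L < ln(σ)/(1 − q), where q is the per-digit copying fidelity and σ the selective superiority of the master sequence (this notation is an assumption, not quoted from the article):

```python
import math

def max_sequence_length(q, sigma):
    """Critical length beyond which mutation destroys the information."""
    return math.log(sigma) / (1.0 - q)

# With 99.9% per-digit fidelity and a 10-fold fitness advantage,
# sequences longer than ~2300 digits cannot be maintained.
print(max_sequence_length(q=0.999, sigma=10.0))  # ~2302.6
```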
If a particular sequence and its neighbors have a high fitness, they will form a quasispecies and will be able to support longer sequence lengths than a fit sequence with few fit neighbors, or a less fit neighborhood of sequences. Also, it was noted by Wi |
https://en.wikipedia.org/wiki/Arachidonic%20acid%205-hydroperoxide | Arachidonic acid 5-hydroperoxide (5-hydroperoxyeicosatetraenoic acid, 5-HPETE) is an intermediate in the metabolism of arachidonic acid by the ALOX5 enzyme in humans or Alox5 enzyme in other mammals. The intermediate is then further metabolized to: a) leukotriene A4 which is then metabolized to the chemotactic factor for leukocytes, leukotriene B4, or to contractors of lung airways, leukotriene C4, leukotriene D4, and leukotriene E4; b) the leukocyte chemotactic factors, 5-Hydroxyicosatetraenoic acid and 5-oxo-eicosatetraenoic acid; or c) the specialized pro-resolving mediators of inflammation, lipoxin A4 and lipoxin B4. |
https://en.wikipedia.org/wiki/Pugh%27s%20closing%20lemma | In mathematics, Pugh's closing lemma is a result that links periodic orbit solutions of differential equations to chaotic behaviour. It can be formally stated as follows:
Let f be a diffeomorphism of a compact smooth manifold M. Given a nonwandering point x of f, there exists a diffeomorphism g arbitrarily close to f in the C^1 topology of Diff^1(M) such that x is a periodic point of g.
Interpretation
Pugh's closing lemma means, for example, that any chaotic set in a bounded continuous dynamical system corresponds to a periodic orbit in a different but closely related dynamical system. As such, an open set of conditions on a bounded continuous dynamical system that rules out periodic behaviour also implies that the system cannot behave chaotically; this is the basis of some autonomous convergence theorems.
See also
Smale's problems |
https://en.wikipedia.org/wiki/Rate%20of%20return | In finance, return is a profit on an investment. It comprises any change in value of the investment, and/or cash flows (or securities, or other investments) which the investor receives from that investment over a specified time period, such as interest payments, coupons, cash dividends and stock dividends. It may be measured either in absolute terms (e.g., dollars) or as a percentage of the amount invested. The latter is also called the holding period return.
A loss instead of a profit is described as a negative return, assuming the amount invested is greater than zero.
To compare returns over time periods of different lengths on an equal basis, it is useful to convert each return into a return over a period of time of a standard length. The result of the conversion is called the rate of return.
Typically, the period of time is a year, in which case the rate of return is also called the annualized return, and the conversion process, described below, is called annualization.
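As an illustration (assuming compound growth; the function name is illustrative, not standard), a minimal Python sketch of annualization:

```python
def annualize(r, t_years):
    """Annual-equivalent rate for a holding period return r earned over t_years."""
    return (1.0 + r) ** (1.0 / t_years) - 1.0

# A 21% return earned over two years is equivalent to 10% per year,
# since 1.10 * 1.10 = 1.21.
print(annualize(0.21, 2.0))  # ~0.10
```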
The return on investment (ROI) is return per dollar invested. It is a measure of investment performance, as opposed to size (c.f. return on equity, return on assets, return on capital employed).
Calculation
The return, or the holding period return, can be calculated over a single period. The single period may last any length of time.
The overall period may, however, instead be divided into contiguous subperiods. This means that there is more than one time period, each sub-period beginning at the point in time where the previous one ended. In such a case, where there are multiple contiguous subperiods, the return or the holding period return over the overall period can be calculated by combining the returns within each of the subperiods.
Single-period
Return
The direct method to calculate the return R, or the holding period return, over a single period of any length of time is:
R = (V_f − V_i) / V_i
where:
V_f = final value, including dividends and interest
V_i = initial value
For example, if someone purchases 100 s |
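The article's worked example is truncated above, so here is a hypothetical one as a minimal Python sketch of the formula just defined:

```python
def holding_period_return(initial_value, final_value):
    """R = (V_f - V_i) / V_i, where V_f includes dividends and interest."""
    return (final_value - initial_value) / initial_value

# Hypothetical: 100 shares bought at $10.00, later worth $10.50 each,
# plus $0.20 per share in dividends received over the period.
r = holding_period_return(1000.0, 1050.0 + 20.0)
print(f"{r:.2%}")  # 7.00%
```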
https://en.wikipedia.org/wiki/Cell%20physiology | Cell physiology is the biological study of the activities that take place in a cell to keep it alive. The term physiology refers to normal functions in a living organism. Animal cells, plant cells and microorganism cells show similarities in their functions even though they vary in structure.
General characteristics
There are two types of cells: prokaryotes and eukaryotes.
Prokaryotes were the first of the two to develop and do not have a self-contained nucleus. Their mechanisms are simpler than those of the later-evolved eukaryotes, which contain a nucleus that envelops the cell's DNA and some organelles.
Prokaryotes
Prokaryotes have DNA located in an area called the nucleoid, which is not separated from other parts of the cell by a membrane. There are two domains of prokaryotes: bacteria and archaea. Prokaryotes have fewer organelles than eukaryotes. Both have plasma membranes and ribosomes (structures that synthesize proteins and float free in cytoplasm). Two unique characteristics of prokaryotes are fimbriae (finger-like projections on the surface of a cell) and flagella (threadlike structures that aid movement).
Eukaryotes
Eukaryotes have a nucleus where DNA is contained. They are usually larger than prokaryotes and contain many more organelles. The nucleus, the feature of a eukaryote that distinguishes it from a prokaryote, contains a nuclear envelope, nucleolus and chromatin. In cytoplasm, endoplasmic reticulum (ER) synthesizes membranes and performs other metabolic activities. There are two types, rough ER (containing ribosomes) and smooth ER (lacking ribosomes). The Golgi apparatus consists of multiple membranous sacs, responsible for manufacturing and shipping out materials such as proteins. Lysosomes are structures that use enzymes to break down substances through phagocytosis, a process that comprises endocytosis and exocytosis. In the mitochondria, metabolic processes such as cellular respiration occur. The cytoskeleton is made of fibers that support the str |
https://en.wikipedia.org/wiki/Garden%20delphiniums | Garden delphiniums are horticultural hybrids derived from some perennial species in the genus Delphinium. Breeding of garden delphiniums started from the 19th century in Western Europe. In the 20th century, the United States, Japan and New Zealand also contributed to delphinium breeding.
Cultivar Groups
There are mainly two cultivar groups of garden delphinium:
Elatum Group is the most popular group. The cultivars in the group are tetraploid hybrids derived mainly from Delphinium elatum. Other species like D. cardinale are also involved.
Belladonna Group, also known as Delphinium × belladonna, contains mostly hexaploid hybrids between D. grandiflorum and D. elatum or Elatum Group.
Some Delphinium cultivars belong to neither group, such as D. grandiflorum cultivars which don't involve interspecific hybridization and D. × ruysii 'Pink Sensation' which is a hybrid between Elatum Group and D. nudicaule.
Gallery
Elatum Group
Belladonna Group |
https://en.wikipedia.org/wiki/Universal%20Personal%20Telecommunications | Universal personal telecommunications (UPT) was a special segment of the international telephone number space which had been set aside for universal personal telephone numbers. This service had been allocated country code +87810 and was completed by a 10-digit subscriber number which provided 10 billion unique numbers. The International Telecommunication Union (ITU) introduced this concept in 2001, referring to it as "global number portability" (not to be confused with number portability).
The delegation of UPT was requested by VisionNG Chairman Herwart Wermescher and was confirmed by Richard Hill, Counsellor of ITU-TSB Study Group 2 (SG2), on May 21, 2002.
The UPT number allocation was withdrawn in 2022.
The UPT service
The UPT standards were developed to allow a UPT number to be associated with any device on any network, anywhere in the world. An individual should be able to enter an access code to make or receive calls on any device, and a UPT number can be provisioned as a global mobile telephone number.
UPT allowed ad hoc sharing of physical devices and was intended to be independent of geography or network provider. From the +87810 numbering space, operators could offer their customers next generation services – voice, data, email, SMS, web and location-based services – using a single "number for life" that transcended national boundaries and traditional ways of thinking about communications.
Initially, UPT number blocks were allocated to VoIP, but as technology advanced, UPT was increasingly seen as a numbering and addressing solution for the Digital Identity and Internet of Things marketplace.
In February 2016, the ITU approved the assignment of a Mobile Country Code and Network Code associated with the UPT Country Code. This allowed for the provision of 10 billion unique IMSIs for the deployment of Global Mobile Services.
UPT service profile
The UPT service profile was a record that contained all information related to a UPT user, which information is required to provide that use |
https://en.wikipedia.org/wiki/Viewdata | Viewdata is a Videotex implementation. It is a type of information retrieval service in which a subscriber can access a remote database via a common carrier channel, request data and receive requested data on a video display over a separate channel. Samuel Fedida, who had the idea for Viewdata in 1968, was credited as inventor of the system which was developed while working for the British Post Office which was the operator of the national telephone system. The first prototype became operational in 1974. The access, request and reception are usually via common carrier broadcast channels. This is in contrast with teletext.
Technology
Viewdata offered a display of 40×24 characters, based on ISO 646 (IRV IA5) – 7 bits with no accented characters.
Originally Viewdata was accessed with a special purpose terminal (or emulation software) and a modem running at ITU-T V.23 speed (1,200 bit/s down, 75 bit/s up). By 2004 it was normally accessed over TCP/IP using Viewdata client software on a personal computer running Microsoft Windows, or using a Web-based emulator.
Travel industry
As of 2015, Viewdata was still in use in the United Kingdom, mainly by the travel industry. Travel agents use it to look up the price and availability of package holidays and flights. Once they find what the customer is looking for they can place a booking.
There are a number of factors still holding up a move to a Web-based standard. Viewdata is regarded within the industry as low-cost and reliable, travel consultants have been trained to use Viewdata and would need training to book holidays on the Internet, and tour operators cannot agree on a Web-based standard.
Bulletin board systems
It was made in the late 1970s and early 1980s to make it easier for travel consultants to check availability and make bookings for holidays.
A number of Viewdata bulletin board systems existed in the 1980s, predominantly in the UK due to the proliferation of the BBC Micro, and a short-lived Viewdata Revival |
https://en.wikipedia.org/wiki/CYP10%20family | Cytochrome P450, family 10, also known as CYP10, is a cytochrome P450 family found in Lophotrochozoa. It belongs to the mitochondrial clan of CYPs, which are located in the inner mitochondrial membrane (IMM). The first gene identified in this family is CYP10A1 from Lymnaea stagnalis (the pond snail), which is highly expressed in the female gonadotropic hormone-producing dorsal bodies. |
https://en.wikipedia.org/wiki/Henry%20Langdon%20Childe | Henry Langdon Childe (1781–1874) was an English showman, known as a developer of the magic lantern and dissolving views, a precursor of the dissolve in cinematic technique. While the priority question on the technical innovations Childe used is still debated, he established the use of double and triple lanterns for special theatrical effects, to the extent that the equipment involved became generally available through suppliers to other professionals. By the 1840s the "dissolving view", rooted in Gothic horror, had become a staple of illustrated talks with restrained animations.
Early life
Childe was born in Poole, Dorset, the youngest of three children. He and his wife Elizabeth had one daughter, Maria. She is recorded in the 1851 census as an artist in glass, living in Lambeth with her parents.
Development of lantern technique
Paul de Philipsthal used a magic lantern in London in 1802 for a phantasmagoria; he used effects such as animation of images, and a lantern on rails so that images could be changed in size. Childe reportedly worked for Philipsthal. He demonstrated his own magic lantern at the Sanspareil Theatre, which was replaced, by 1806, by the Adelphi Theatre.
The magic lantern had not advanced much from the 17th century to the latter part of the 18th century. Childe used achromatic lenses and an improved oil-lamp; and moved to the limelight, then associated with Thomas Drummond. The limelight has also been attributed to Robert Hare, and Goldsworthy Gurney. In Childe's hands, it increased the scale and brightness of the projected images at public performances.
It was the combination of the double image and the improved lighting that made the lantern technique standard for a time; credit for this advance in projection, underpinning "dissolving views" in practice, has been given to John Benjamin Dancer. The innovations of Childe and the instrument-maker Edward Marmaduke Clarke (the "biscenascope") played a part in displacing the diorama as a fashionable |
https://en.wikipedia.org/wiki/ARCore | ARCore, also known as Google Play Services for AR, is a software development kit developed by Google that allows for augmented reality applications to be built.
ARCore uses three key technologies to integrate virtual content with the real world as seen through the camera of a smartphone or tablet:
Six degrees of freedom allows the phone to understand and track its position relative to the world.
Environmental understanding allows the phone to detect the size and location of flat horizontal surfaces like the ground or a coffee table.
Light estimation allows the phone to estimate the environment's current lighting conditions.
ARCore has been integrated into a multitude of devices. |
https://en.wikipedia.org/wiki/Codd%27s%20theorem | Codd's theorem states that relational algebra and the domain-independent relational calculus queries, two well-known foundational query languages for the relational model, are precisely equivalent in expressive power. That is, a database query can be formulated in one language if and only if it can be expressed in the other.
The theorem is named after Edgar F. Codd, the father of the relational model for database management.
The domain-independent relational calculus queries are precisely those relational calculus queries that are invariant under choosing domains of values beyond those appearing in the database itself. That is, queries that may return different results for different domains are excluded. An example of such a forbidden query is the query "select all tuples other than those occurring in relation R", where R is a relation in the database. Assuming different domains, i.e., sets of atomic data items from which tuples can be constructed, this query returns different results and thus is clearly not domain independent.
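A minimal Python sketch (illustrative, not from the article) of why such a query is excluded: its answer changes as the assumed domain grows:

```python
# Database relation R, containing 1-tuples.
R = {("a",), ("b",)}

def complement_query(domain):
    """'Select all tuples other than those occurring in R' over `domain`."""
    return {(x,) for x in domain} - R

# The same query over two different domains gives different answers,
# so it is not domain independent.
print(complement_query({"a", "b", "c"}))       # {('c',)}
print(complement_query({"a", "b", "c", "d"}))  # {('c',), ('d',)}
```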
Codd's Theorem is notable since it establishes the equivalence of two syntactically quite dissimilar languages: relational algebra is a variable-free language, while relational calculus is a logical language with variables and quantification.
Relational calculus is essentially equivalent to first-order logic, and indeed, Codd's Theorem had been known to logicians since the late 1940s.
Query languages that are equivalent in expressive power to relational algebra were called relationally complete by Codd. By Codd's Theorem, this includes relational calculus. Relational completeness clearly does not imply that any interesting database query can be expressed in relationally complete languages. Well-known examples of inexpressible queries include simple aggregations (counting tuples, or summing up values occurring in tuples, which are operations expressible in SQL but not in relational algebra) and computing the transitive closure of a graph |
https://en.wikipedia.org/wiki/P%20fimbriae | P fimbriae (also known as pyelonephritis-associated pili) or P pili or Pap are chaperone-usher type (specifically of the π family) fimbrial appendages found on the surface of many Escherichia coli bacteria. P fimbriae are considered to be one of the most important virulence factors in uropathogenic E. coli and play an important role in upper urinary tract infections. P fimbriae mediate adherence to host cells, a key event in the pathogenesis of urinary tract infections.
Structure and expression
P fimbriae are large, linear structures projecting from the surface of the bacterial cell. With lengths of 1–2 μm, the pili can be longer than the diameter of the bacterium itself. The main body of the fimbriae is composed of approx. 1000 copies of the major fimbrial subunit protein PapA, forming a helical rod. The short fimbrial tip is made of the subunits PapK, PapE, PapF and the tip adhesin PapG, which mediates the binding.
The fimbriae are assembled by a chaperone-usher system, and the proteins required for the assembly are expressed by the Pap operon, which is located on pathogenicity islands. The genes of the Pap operon encode five structural proteins (PapA, PapK, PapE, PapF and PapG), four proteins involved in transport and assembly (PapD, PapH, PapC, PapJ) and two proteins (PapB, PapI) regulating the operon's expression.
Role during infection
Adherence to host uroepithelial cells is a crucial step during the infection: it allows uropathogenic E. coli to colonize the urinary tract and prevents bacterial removal during micturition. The binding of the P fimbriae to epithelial cells is mediated by the tip adhesin PapG. Four different alleles of PapG have been described, which bind to different glycolipid structures on host cells. In humans, the variants PapGII and PapGIII in particular have been shown to be clinically relevant.
Variant PapGII binds preferentially to globoside (GbO4), found abundantly on human kidney epithelial cells. PapGII triggers a strong inflammatory response |
https://en.wikipedia.org/wiki/Carbon%20detonation | Carbon detonation or carbon deflagration is the violent reignition of thermonuclear fusion in a white dwarf star that was previously slowly cooling. It involves a runaway thermonuclear process which spreads through the white dwarf in a matter of seconds, producing a type Ia supernova which releases an immense amount of energy as the star is blown apart. The carbon detonation/deflagration process leads to a supernova by a different route than the better known type II (core-collapse) supernova (the type II is caused by the cataclysmic explosion of the outer layers of a massive star as its core implodes).
White dwarf density and mass increase
A white dwarf is the remnant of a small to medium-sized star (the Sun is an example of these). At the end of its life, the star has burned its hydrogen and helium fuel, and thermonuclear fusion processes cease. The star does not have enough mass either to burn much heavier elements, or to implode into a neutron star or type II supernova as a larger star can, from the force of its own gravity, so it gradually shrinks and becomes very dense as it cools, glowing white and then red, for a period many times longer than the present age of the Universe.
Occasionally, a white dwarf gains mass from another source – for example, a binary star companion that is close enough for the dwarf star to siphon sufficient amounts of matter onto itself; or from a collision with other stars, the siphoned matter having been expelled during the process of the companion's own late stage stellar evolution. If the white dwarf gains enough matter, its internal pressure and temperature will rise enough for carbon to begin fusing in its core. Carbon detonation generally occurs at the point when the accreted matter pushes the white dwarf's mass close to the Chandrasekhar limit of roughly 1.4 solar masses, the mass at which gravity can overcome the electron degeneracy pressure that prevents it from collapsing during its lifetime. This also happens when two whit |
https://en.wikipedia.org/wiki/Cuboid%20%28computer%20vision%29 | In computer vision, the term cuboid is used to describe a small spatiotemporal volume extracted for purposes of behavior recognition. The cuboid is regarded as a basic geometric primitive type and is used to depict three-dimensional objects within a three-dimensional representation of a flat, two-dimensional image.
Production
Cuboids can be produced from both two-dimensional and three-dimensional images.
One method used to produce cuboids utilizes scene understanding (SUN) primitive databases, which are collections of pictures that already contain cuboids. By sorting through SUN primitive databases with machine learning tools, computers observe the conditions in which cuboids are produced in images from SUN primitive databases and can learn to produce cuboids from other images.
RGB-D images, which are RGB images that also record the depth of each pixel, are occasionally used to produce cuboids: because the depth is already recorded, computers no longer need to determine the depth of an object, as they typically must.
Cuboid production is sensitive to changes in color and illumination, blockage, and background clutter. This means that it is difficult for computers to produce cuboids of objects that are multicolored, irregularly illuminated, or partially covered, or if there are many objects in the background. This is partially due to the fact that algorithms for producing cuboids are still relatively simple.
Usage
Cuboids are created for point cloud-based three-dimensional maps and can be utilized in various situations such as augmented reality, the automated control of cars, drones, and robots, and object detection.
Cuboids allow for software to identify a scene through geometric descriptions in an “object-agnostic” fashion.
Interest points, locations within images that are identified by a computer as essential to identifying the image, created from two-dimensional images can be used with cuboids for image matching, identifying a room or scene, and instance |
https://en.wikipedia.org/wiki/Solovay%20model | In the mathematical field of set theory, the Solovay model is a model constructed by Robert M. Solovay (1970) in which all of the axioms of Zermelo–Fraenkel set theory (ZF) hold, exclusive of the axiom of choice, but in which all sets of real numbers are Lebesgue measurable. The construction relies on the existence of an inaccessible cardinal.
In this way Solovay showed that in the proof of the existence of a non-measurable set from ZFC (Zermelo–Fraenkel set theory plus the axiom of choice), the axiom of choice is essential, at least granted that the existence of an inaccessible cardinal is consistent with ZFC.
Statement
ZF stands for Zermelo–Fraenkel set theory, and DC for the axiom of dependent choice.
Solovay's theorem is as follows.
Assuming the existence of an inaccessible cardinal, there is an inner model of ZF + DC of a suitable forcing extension V[G] such that every set of reals is Lebesgue measurable, has the perfect set property, and has the Baire property.
Construction
Solovay constructed his model in two steps, starting with a model M of ZFC containing an inaccessible cardinal κ.
The first step is to take a Levy collapse M[G] of M by adding a generic set G for the notion of forcing that collapses all cardinals less than κ to ω. Then M[G] is a model of ZFC with the property that every set of reals that is definable over a countable sequence of ordinals is Lebesgue measurable, and has the Baire and perfect set properties. (This includes all definable and projective sets of reals; however for reasons related to Tarski's undefinability theorem the notion of a definable set of reals cannot be defined in the language of set theory, while the notion of a set of reals definable over a countable sequence of ordinals can be.)
The second step is to construct Solovay's model N as the class of all sets in M[G] that are hereditarily definable over a countable sequence of ordinals. The model N is an inner model of M[G] satisfying ZF + DC such that every set of reals is Lebesgue measur |
https://en.wikipedia.org/wiki/Semimodule | In mathematics, a semimodule over a semiring R is like a module over a ring except that it is only a commutative monoid rather than an abelian group.
Definition
Formally, a left R-semimodule consists of an additively-written commutative monoid M and a map from R × M to M, written (r, m) ↦ rm, satisfying the following axioms:
r(m + m′) = rm + rm′
(r + r′)m = rm + r′m
(rr′)m = r(r′m)
1m = m
0m = 0_M = r0_M.
A right R-semimodule can be defined similarly. For modules over a ring, the last axiom follows from the others. This is not the case with semimodules.
Examples
If R is a ring, then any R-module is an R-semimodule. Conversely, it follows from the second, fourth, and last axioms that (−1)m is an additive inverse of m for all m ∈ M, so any semimodule over a ring is in fact a module.
Any semiring is a left and right semimodule over itself in the same way that a ring is a left and right module over itself. Every commutative monoid is uniquely an ℕ-semimodule in the same way that an abelian group is a ℤ-module. |
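A minimal Python sketch of the last point (names are illustrative): the ℕ-action on a commutative monoid is forced to be repeated addition. Here the monoid is (max, 0) on non-negative integers, which has no additive inverses and so is a semimodule but not a module:

```python
def scalar_mul(n, m, add, zero):
    """n * m defined as m + m + ... + m (n copies) in the monoid."""
    result = zero
    for _ in range(n):
        result = add(result, m)
    return result

add, zero = max, 0                  # the commutative monoid (max, 0)
print(scalar_mul(3, 5, add, zero))  # max(5, 5, 5) = 5
print(scalar_mul(0, 5, add, zero))  # 0, matching the axiom 0m = 0
```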
https://en.wikipedia.org/wiki/GL-ONC1 | GL-ONC1 (USAN: olvimulogene nanivacirepvec; abbreviated as Olvi-Vec) is an investigational therapeutic product consisting of the clinical grade formulation of the laboratory strain GLV-1h68, an oncolytic virus developed by Genelux Corporation. GL-ONC1 is currently under evaluation in Phase I/II human clinical trials in the United States and Europe.
GL-ONC1 (CAS Registry Number (CAS RN): 1473430-36-2) is a triple-modified and attenuated vaccinia virus (Lister strain) that causes regression and elimination of a wide range of solid tumors in preclinical mouse models. It was generated by insertion of three expression cassettes (encoding a Renilla luciferase-Aequorea green fluorescent protein fusion, beta-galactosidase, and beta-glucuronidase) into the F14.5L, J2R (encoding thymidine kinase) and A56R (encoding hemagglutinin) loci of the parental viral Lister strain genome, respectively. The oncolytic virus specifically infects and kills tumor cells, leading to oncolysis, immune activation and the triggering of anti-tumor immune responses.
Clinical trials
Regional (cavity) administration
One Phase I/II study of intraperitoneal administration of GL-ONC1 in patients with advanced peritoneal carcinomatosis has been completed. A Phase II study of intraperitoneal administration of GL-ONC1 (Olvi-Vec) in heavily pretreated patients with platinum-resistant/refractory ovarian cancer was also completed. Positive clinical data were reported at the IGCS 2020 and ESMO 2020 conferences. A registration trial of Olvi-Vec (a.k.a. GL-ONC1)-primed immunochemotherapy is being planned.
In a Phase I study, intra-pleural administration of GL-ONC1 is being evaluated in patients with malignant pleural effusion, which is caused by cancer from malignant pleural mesothelioma, non-small cell lung cancer (NSCLC), or breast cancer. In this trial GL-ONC1 infection of tumor cells was identified in 6 out of 8 patients with epithelioid malignant pleural mesothelioma.
Systemic (intravenous) administration
System |
https://en.wikipedia.org/wiki/Mobilewalla | MobileWalla is a Singapore-based web-based search portal for applications targeted at mobile devices. It was founded on 7 March 2011 by Dr. Anindya Datta, a professor with the School of Computing, National University of Singapore. The portal is the first ever deep search and discovery engine for finding apps and uses around 114 variables for its rating system.
During the 2016 US elections, the firm targeted evangelicals using cell-phone location data collected in real time over the 6 months preceding the election.
At the end of May 2020, the firm tracked George Floyd protesters located in New York, Los Angeles, Minneapolis and Atlanta, and published a report two weeks later showing demographic data: ethnicity, gender and age distribution. Datta proffered that the report was prepared to satisfy the curiosity of its employees, and not at the behest of any law enforcement or public agency. This nevertheless prompted various legislators to seek further clarification from the company about the data itself. |
https://en.wikipedia.org/wiki/Tier%202%20network | A Tier 2 network is an Internet service provider which engages in the practice of peering with other networks, but which also purchases IP transit to reach some portion of the Internet.
Tier 2 providers are the most common Internet service providers, as it is much easier to purchase transit from a Tier 1 network than to peer with them and attempt to become a Tier 1 carrier.
The term Tier 3 is sometimes also used to describe networks that solely purchase IP transit from other networks to reach the Internet.
List of large or important Tier 2 networks
See also
Peering point
Network access point |
https://en.wikipedia.org/wiki/Joseph%20Bertrand | Joseph Louis François Bertrand (; 11 March 1822 – 5 April 1900) was a French mathematician who worked in the fields of number theory, differential geometry, probability theory, economics and thermodynamics.
Biography
Joseph Bertrand was the son of physician Alexandre Jacques François Bertrand and the brother of archaeologist Alexandre Bertrand. His father died when Joseph was only nine years old, but that did not stand in the way of his learning: by that same age of nine he understood algebraic and elementary geometric concepts, and he could speak Latin fluently.
At eleven years old he attended the courses of the École Polytechnique as an auditor (open courses). Between the ages of eleven and seventeen, he obtained two bachelor's degrees, a licence and a PhD, with a thesis on the mathematical theory of electricity, and was admitted first in the 1839 entrance examination of the École Polytechnique. Bertrand was a professor at the École Polytechnique and Collège de France, and was a member of the Paris Academy of Sciences, where he was its permanent secretary for twenty-six years.
He conjectured, in 1845, that there is at least one prime between n and 2n − 2 for every n > 3. Chebyshev proved this conjecture, now called Bertrand's postulate, in 1850. He was also famous for a paradox in the field of probability, now known as Bertrand's Paradox. There is another paradox in game theory that is named after him, called the Bertrand Paradox. In 1849, he was the first to define real numbers using what is now called a Dedekind cut.
Bertrand translated into French Carl Friedrich Gauss's work on the theory of errors and the method of least squares.
In the field of economics, he reviewed the work on oligopoly theory, specifically the Cournot Competition Model (1838) of French mathematician Antoine Augustin Cournot. His Bertrand Competition Model (1883) argued that Cournot had reached a very misleading conclusion, and he reworked it using prices rather than quantities as t |
https://en.wikipedia.org/wiki/Fundamenta%20Informaticae | Fundamenta Informaticae is a peer-reviewed scientific journal covering computer science. The editor-in-chief is Damian Niwiński. It was established in 1977 by the Polish Mathematical Society as Series IV of the Annales Societatis Mathematicae Polonae, with its main focus on theoretical foundations of computer science. The journal is currently hosted on the Episciences.org platform of the Center for direct scientific communication, and published by IOS Press under the auspices of the European Association for Theoretical Computer Science. |
https://en.wikipedia.org/wiki/Elliptic%20hypergeometric%20series | In mathematics, an elliptic hypergeometric series is a series Σc_n such that the ratio c_n/c_{n−1} is an elliptic function of n, analogous to generalized hypergeometric series where the ratio is a rational function of n, and basic hypergeometric series where the ratio is a periodic function of the complex number n. They were introduced by Date, Jimbo, Kuniba, Miwa and Okado (1987) in their study of elliptic 6-j symbols.
Several surveys of elliptic hypergeometric series are available in the literature.
Definitions
The q-Pochhammer symbol is defined by
(a; q)_n = (1 − a)(1 − aq)⋯(1 − aq^{n−1}).
The modified Jacobi theta function with argument x and nome p is defined by
θ(x; p) = (x; p)_∞ (p/x; p)_∞.
The elliptic shifted factorial is defined by
(a; q, p)_n = θ(a; p) θ(aq; p) ⋯ θ(aq^{n−1}; p).
(A numerical sketch of these products follows the definitions below.)
The theta hypergeometric series r+1Er is defined by
r+1Er(a_1, …, a_{r+1}; b_1, …, b_r; q, p; z) = Σ_{n≥0} [(a_1; q, p)_n ⋯ (a_{r+1}; q, p)_n / ((q; q, p)_n (b_1; q, p)_n ⋯ (b_r; q, p)_n)] z^n.
The very well poised theta hypergeometric series r+1Vr is defined by
The bilateral theta hypergeometric series rGr is defined by
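A minimal Python sketch of the truncated products reconstructed above (the infinite products in θ(x; p) are cut off after a fixed number of factors; all names are mine):

```python
from math import prod

def qpochhammer(a, q, n):
    """(a; q)_n = (1 - a)(1 - a q) ... (1 - a q^(n-1))."""
    return prod(1 - a * q**k for k in range(n))

def theta(x, p, terms=50):
    """Modified Jacobi theta theta(x; p) = (x; p)_inf (p/x; p)_inf, truncated."""
    return qpochhammer(x, p, terms) * qpochhammer(p / x, p, terms)

def elliptic_factorial(a, q, p, n, terms=50):
    """Elliptic shifted factorial (a; q, p)_n = theta(a; p) ... theta(a q^(n-1); p)."""
    return prod(theta(a * q**k, p, terms) for k in range(n))

print(elliptic_factorial(0.3, 0.5, 0.1, 3))
```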
Definitions of additive elliptic hypergeometric series
The elliptic numbers are defined by
where the Jacobi theta function is defined by
The additive elliptic shifted factorials are defined by
The additive theta hypergeometric series r+1er is defined by
The additive very well poised theta hypergeometric series r+1vr is defined by
Further reading |
https://en.wikipedia.org/wiki/Redheffer%20star%20product | In mathematics, the Redheffer star product is a binary operation on linear operators that arises in connection to solving coupled systems of linear equations. It was introduced by Raymond Redheffer in 1959, and has subsequently been widely adopted in computational methods for scattering matrices. Given two scattering matrices from different linear scatterers, the Redheffer star product yields the combined scattering matrix produced when some or all of the output channels of one scatterer are connected to inputs of another scatterer.
Definition
Suppose A and B are the 2 × 2 block matrices
A = [ A11  A12 ; A21  A22 ]
and
B = [ B11  B12 ; B21  B22 ],
whose blocks have compatible shapes, so that the products A12 B21 and B21 A12 are defined.
The Redheffer star product is then defined by:
A ⋆ B = [ B11 (I − A12 B21)^{−1} A11 ,  B12 + B11 (I − A12 B21)^{−1} A12 B22 ;  A21 + A22 (I − B21 A12)^{−1} B21 A11 ,  A22 (I − B21 A12)^{−1} B22 ],
assuming that (I − A12 B21) and (I − B21 A12) are invertible, where I is an identity matrix conformable to A12 B21 or B21 A12, respectively.
This can be rewritten several ways making use of the so-called push-through identity
(I − A12 B21)^{−1} A12 = A12 (I − B21 A12)^{−1}.
Redheffer's definition extends beyond matrices to linear operators on a Hilbert space H. By definition, the blocks A_ij and B_ij are linear endomorphisms of H, making A and B linear endomorphisms of H ⊕ H, where ⊕ denotes the direct sum. However, the star product still makes sense as long as the transformations are compatible, which is possible when A ∈ L(H_1 ⊕ H_2) and B ∈ L(H_2 ⊕ H_3), so that A ⋆ B ∈ L(H_1 ⊕ H_3).
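A minimal NumPy sketch of the block formula reconstructed above, assuming square n × n blocks (a sketch of this convention, not a canonical implementation):

```python
import numpy as np

def redheffer_star(A, B, n):
    """Redheffer star product of 2n x 2n matrices with n x n blocks."""
    A11, A12, A21, A22 = A[:n, :n], A[:n, n:], A[n:, :n], A[n:, n:]
    B11, B12, B21, B22 = B[:n, :n], B[:n, n:], B[n:, :n], B[n:, n:]
    I = np.eye(n)
    X = np.linalg.inv(I - A12 @ B21)  # assumed invertible
    Y = np.linalg.inv(I - B21 @ A12)  # exists iff X exists (see Properties)
    top = np.hstack([B11 @ X @ A11, B12 + B11 @ X @ A12 @ B22])
    bot = np.hstack([A21 + A22 @ Y @ B21 @ A11, A22 @ Y @ B22])
    return np.vstack([top, bot])

# Sanity check: the 2n x 2n identity acts as the star identity.
rng = np.random.default_rng(0)
B = rng.normal(size=(4, 4))
assert np.allclose(redheffer_star(np.eye(4), B, 2), B)
assert np.allclose(redheffer_star(B, np.eye(4), 2), B)
```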
Properties
Existence
(I − A12 B21)^{−1} exists if and only if (I − B21 A12)^{−1} exists.
Thus when either exists, so does the Redheffer star product.
Identity
The star identity is the identity on H ⊕ H, or the block matrix [ I  0 ; 0  I ].
Associativity
The star product is associative, provided all of the relevant matrices are defined.
Thus (A ⋆ B) ⋆ C = A ⋆ (B ⋆ C).
Adjoint
Provided either side exists, the adjoint of a Redheffer star product is (A ⋆ B)* = B* ⋆ A*.
Inverse
If B is the left matrix inverse of A such that BA = I, A12 has a right inverse, and A ⋆ B exists, then A ⋆ B = I.
Similarly, if B is the left matrix inverse of A such that BA = I, B21 has a right inverse, and B ⋆ A exists, then B ⋆ A = I.
Also, if A ⋆ B = I and B21 has a left inverse, then BA = I.
The star inverse equals the matrix inverse, and both can be computed with block inversion as
A^{−1} = [ (A11 − A12 A22^{−1} A21)^{−1} ,  −(A11 − A12 A22^{−1} A21)^{−1} A12 A22^{−1} ;  −A22^{−1} A21 (A11 − A12 A22^{−1} A21)^{−1} ,  A22^{−1} + A22^{−1} A21 (A11 − A12 A22^{−1} A21)^{−1} A12 A22^{−1} ].
Derivation from a linear system
The star product arises from solving multiple linear systems of equa |
https://en.wikipedia.org/wiki/Vertically%20Generalized%20Production%20Model | The Vertically Generalized Production Model (VGPM) is a model commonly used to estimate primary production within the ocean. The VGPM was designed by Behrenfeld and Falkowski and was originally published in a 1997 article in Limnology and Oceanography. It is one of the most frequently used models for primary production estimation due to its ability to be applied to chlorophyll a data from satellites, and its relatively simple design. Chlorophyll a is a common measure of primary production, as it is a main component of photosynthesis.
Primary production is often measured using three variables: the biomass (amount by weight) of the phytoplankton, the availability of light, and the rate of carbon fixation. The VGPM is now one of the most popular models to use with satellite chlorophyll data because it is surface-light dependent and uses an estimated maximum rate of primary production per unit of chlorophyll within the water column, known as PBopt. It also considers environmental factors that often influence primary production, and it allows variables that are routinely collected by remote satellites to be used to derive primary production without physically sampling the water. This PBopt was found to be dependent on surface chlorophyll, and data for this can be collected using satellites. Satellites can only collect the parameters used to estimate primary production; they cannot calculate it themselves, which is why a model to do so is needed.
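As an illustration, a minimal Python sketch of a VGPM-style estimate, assuming the commonly cited Behrenfeld and Falkowski (1997) form; the variable names and sample values are illustrative, not the paper's notation:

```python
def vgpm_npp(chl, pb_opt, e0, z_eu, day_length_h):
    """Depth-integrated net primary production (mg C m^-2 day^-1).

    chl          -- surface chlorophyll a (mg m^-3), e.g. from satellite
    pb_opt       -- PBopt, max carbon fixation per unit chl (mg C (mg chl)^-1 h^-1)
    e0           -- daily surface irradiance (mol quanta m^-2 day^-1)
    z_eu         -- euphotic zone depth (m)
    day_length_h -- photoperiod (hours)
    """
    light_term = e0 / (e0 + 4.1)  # saturating light-dependence term
    return 0.66125 * pb_opt * light_term * chl * z_eu * day_length_h

print(vgpm_npp(chl=0.5, pb_opt=4.0, e0=40.0, z_eu=60.0, day_length_h=12.0))
```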
Because of this being a generalized model, it is intended to reflect most accurately the open ocean. Other localized areas, especially coastal regions, may need to incorporate additional factors to get the most accurate representation of primary production. The values produced using the VGPM are estimates and there will be some level of uncertainty with using this model. |
https://en.wikipedia.org/wiki/Atlases%20of%20the%20flora%20and%20fauna%20of%20Britain%20and%20Ireland | The biodiversity of Great Britain and Ireland is one of the most well-studied of any geographical area of its size in the world. This work has resulted in the publication of distribution atlases for many taxonomic groups. This page lists these publications.
A full atlas is generally regarded as a definitive work on distribution, whereas a provisional atlas is typically produced as an interim stage to show survey progress.
One of the bodies responsible for publishing a great number of distribution atlases is the Institute of Terrestrial Ecology. Each atlas presents distribution maps based on 10-km squares (hectads) for the species within its scope. Maps typically use different symbols to signify records from differing time periods: solid symbols for hectads that have recent records, and unfilled symbols for hectads for which only older records exist, according to a defined cut-off date.
The atlases are produced by the Biological Records Centre (BRC), which is run by the Institute of Terrestrial Ecology, part of the Centre for Ecology and Hydrology based at CEH Wallingford, Crowmarsh Gifford, Oxfordshire. The data used to produce the maps is gathered by volunteer biological recorders and collated by the BRC Recording Schemes.
The atlases fall into two groups:
Main Atlases are commercially published books, presenting the current state of knowledge for well-recorded groups. They typically include text information about the species, and other supporting material such as analyses of trends. They are usually produced only where a well-established recording scheme has been in operation for a significant period of time, and the scheme organisers believe that the data represent a comprehensive picture of the distribution of each species.
Provisional Atlases give recorders an indication of progress and illustrate early results. Some of the later ones are quite detailed and less "provisional" - for example the Hoverfly Atlas, which provides charts of flight-period as well |
https://en.wikipedia.org/wiki/VSide | vSide was an Internet-based 3D virtual world that was launched on May 15, 2006, and remained in its public beta phase; the game appears to have been discontinued as of 2021. Initially developed by American studio Doppelganger, Inc., a studio founded in 2004, the game was acquired in June 2009 by ExitReality, which became its owner and developer. Inside the game's universe, users could interact with each other through social networking, celebrity entertainment, virtual boutique shopping and self-expression. Membership was free.
ExitReality provides "next generation social entertainment" with vSide, where in-world activities and engagement focus on music, entertainment and fashion. vSide is designed to be an online social environment where teenagers can hang out in real-time with their friends in public and personal spaces. It is based both on professional and user-generated content or design. A major part of its concept is users hosting their own events for others to participate in. It has won the "2007 Top 100 Private Companies to Watch" award from the 2007 On Hollywood Conference and the "CNET Top Five Selection" from the Under The Radar Conference 2007.
In addition to encouraging teenagers to express themselves through personal spaces or "apartments", vSide also encourages users to express themselves through social gestures and connections, including who they hang out with, where they are "seen", where they base their personal space, and the activities that they engage in. vSide also offers an extensive character customization system with over five million different clothing combinations available for free. Avatars can be additionally personalized through the purchase of virtual apparel.
As with all communities, social status is important in the vSide community. Users can earn in-world status and climb a social ladder based on the number of friends they have, the number of new friends that they bring into the world and the activities they host |