| source | text |
|---|---|
https://en.wikipedia.org/wiki/Potamogeton%20%C3%97%20angustifolius | Potamogeton × angustifolius is a hybrid pondweed between Potamogeton gramineus and Potamogeton lucens, known as long-leaved pondweed. It is widespread in rivers and lakes except where the water is very soft.
Description
Potamogeton × angustifolius is a hybrid between shining pondweed Potamogeton lucens and various-leaved pondweed Potamogeton gramineus. It is a perennial, growing from robust rhizomes. The stems are variable: slender to robust, terete, and branching, usually reaching 1.2 m but rarely up to 2 m. The submerged leaves are reduced to phyllodes at the base of the stem, but elsewhere are broad and translucent, yellowish to dark green, sometimes with a pinkish tinge. The leaves measure 50–130 × 10–25 mm on the stems and main branches, but may be much smaller on the side branches; they have 4–5 (rarely 6) veins either side of the midrib and are usually sessile, but some clones may have a few petiolate leaves. Turions are absent.
Unlike P. lucens, Potamogeton × angustifolius sometimes produces floating leaves, which are opaque and typically 55–105 × 22–40 mm.
The stipules are persistent, open, green when fresh, drying to olive or brownish. The inflorescences are 20–50 mm long and have inconspicuous greenish flowers on robust peduncles 45–190 mm long. Fruits are not always produced; if present they are approximately 3 × 2 mm.
Identification of Potamogeton × angustifolius may require experience. It is larger and more robust than P. gramineus but more slender and graceful than P. lucens. There is a tendency for main stems to more closely resemble P. lucens, and side branches P. gramineus. There is however no single character to identify this hybrid, and accurate determination is likely to rely on a combination of characters.
Taxonomy
Potamogeton × angustifolius was first described by the Czech botanist Jan Svatopluk Presl in 1821. The species name means 'narrow-leaved'. Until recently the synonym P. × zizii was widely used and is likely to be encountered in t |
https://en.wikipedia.org/wiki/Arylsulfatase | Arylsulfatase (EC 3.1.6.1, sulfatase, nitrocatechol sulfatase, phenolsulfatase, phenylsulfatase, p-nitrophenyl sulfatase, arylsulfohydrolase, 4-methylumbelliferyl sulfatase, estrogen sulfatase) is a type of sulfatase enzyme with systematic name aryl-sulfate sulfohydrolase. This enzyme catalyses the following chemical reaction
an aryl sulfate + H2O ⇌ a phenol + sulfate
Types include:
Arylsulfatase A (also known as "cerebroside-sulfatase")
Arylsulfatase B (also known as "N-acetylgalactosamine-4-sulfatase")
Steroid sulfatase (formerly known as "arylsulfatase C")
ARSC2
ARSD
ARSF
ARSG
ARSH
ARSI
ARSJ
ARSK
ARSL (formerly known as "arylsulfatase E", "ARSE")
See also
Aryl |
https://en.wikipedia.org/wiki/Double%20vector%20bundle | In mathematics, a double vector bundle is the combination of two compatible vector bundle structures, which contains in particular the tangent bundle TE of a vector bundle E and the double tangent bundle TTM.
Definition and first consequences
A double vector bundle consists of (E; A, B; M) (see the sketch after this list), where
the side bundles A and B are vector bundles over the base M,
E is a vector bundle on both side bundles A and B,
the projection, the addition, the scalar multiplication and the zero map on E for both vector bundle structures are morphisms.
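In the notation above (my reconstruction; the article's original symbols were lost in extraction), the four bundle projections are conventionally drawn as a commutative square:

```latex
% Commutative square of a double vector bundle (standard picture; needs amsmath).
\[
\begin{array}{ccc}
E & \xrightarrow{\ \pi_B\ } & B \\
{\scriptstyle \pi_A}\big\downarrow & & \big\downarrow{\scriptstyle q_B} \\
A & \xrightarrow{\ q_A\ } & M
\end{array}
\]
```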
Double vector bundle morphism
A double vector bundle morphism (f_E, f_A, f_B, f_M) consists of maps f_E : E → E′, f_A : A → A′, f_B : B → B′ and f_M : M → M′ such that (f_E, f_B) is a bundle morphism from (E → B) to (E′ → B′), (f_E, f_A) is a bundle morphism from (E → A) to (E′ → A′), (f_A, f_M) is a bundle morphism from (A → M) to (A′ → M′) and (f_B, f_M) is a bundle morphism from (B → M) to (B′ → M′).
The flip of the double vector bundle (E; A, B; M) is the double vector bundle (E; B, A; M).
Examples
If E is a vector bundle over a differentiable manifold M, then (TE; E, TM; M) is a double vector bundle when considering its secondary vector bundle structure.
If M is a differentiable manifold, then its double tangent bundle (TTM; TM, TM; M) is a double vector bundle. |
https://en.wikipedia.org/wiki/Network%20General | Network General Corporation was an American technology company active between 1986 and 2007 and based in Silicon Valley. Founded in 1986 by Harry Saal and Len Shustek to develop and market network packet and protocol analyzers, the company's flagship product, the Sniffer, was the market leader in its field for many years. In 1997, Network General was acquired by McAfee Associates (MCAF) for $1.3 billion, and the two companies merged to form Network Associates. In 2004, Network Associates sold off most of the patents originally belonging to Network General to a group of investors including Saal, who founded a new Network General Corporation. In 2007, NetScout Systems acquired the new Network General for $205 million.
History
Network General Corporation was founded in May 1986 by Harry Saal and Len Shustek to develop and market network protocol analyzers. Saal, the company's primary founder, president, and CEO, had previously worked at IBM as a software engineer before founding Nestar Systems, his first startup dedicated to computer networking, in October 1978 with three others, including Shustek, Jim Hinds and Nick Fortis. Although successful at first, Nestar eventually floundered and was sold off in 1986. Deciding they wanted another go at a computer networking company, Saal and Shustek founded Network General in Menlo Park, California, in 1986.
In the year of the company's founding, Network General introduced the Sniffer. The inspiration behind the Sniffer was an internal test tool that had been developed within Nestar. Between the company's inception and the end of 1988, the Sniffer became Network General's flagship product, and the company sold $8.9 million worth of Sniffers and associated services, earning them $1.8 million in net profit. Financing was initially provided only by the founders until an investment of several million by TA Associates in late 1987. The company grew from having only two employees in 1986 to 15 employees in 1988. In February 1989, th |
https://en.wikipedia.org/wiki/The%20Design%20of%20Experiments | The Design of Experiments is a 1935 book by the English statistician Ronald Fisher about the design of experiments and is considered a foundational work in experimental design. Among other contributions, the book introduced the concept of the null hypothesis in the context of the lady tasting tea experiment. A chapter is devoted to the Latin square.
Chapters
Introduction
The principles of experimentation, illustrated by a psycho-physical experiment
A historical experiment on growth rate
An agricultural experiment in randomized blocks
The Latin square
The factorial design in experimentation
Confounding
Special cases of partial confounding
The increase of precision by concomitant measurements. Statistical Control
The generalization of null hypotheses. Fiducial probability
The measurement of amount of information in general
Quotations regarding the null hypothesis
Fisher introduced the null hypothesis by an example, the now famous Lady tasting tea experiment, as a casual wager. She claimed the ability to determine the means of tea preparation by taste. Fisher proposed an experiment and an analysis to test her claim. She was to be offered 8 cups of tea, 4 prepared by each method, for determination. He proposed the null hypothesis that she possessed no such ability, so she was just guessing. With this assumption, the number of correct guesses (the test statistic) formed a hypergeometric distribution. Fisher calculated that her chance of guessing all cups correctly was 1/70. He was provisionally willing to concede her ability (rejecting the null hypothesis) in this case only. Having an example, Fisher commented:
"...the null hypothesis is never proved or established, but is possibly disproved, in the course of experimentation. Every experiment may be said to exist only in order to give the facts a chance of disproving the null hypothesis."
"...the null hypothesis must be exact, that is free from vagueness and ambiguity, because it must supply the |
https://en.wikipedia.org/wiki/Nutritional%20genomics | Nutritional genomics, also known as nutrigenomics, is a science studying the relationship between the human genome, human nutrition and health. People in the field work toward developing an understanding of how the whole body responds to a food via systems biology, as well as single gene/single food compound relationships. The term, describing the relation between food and inherited genes, was first used in 2001.
Introduction
The term "nutritional genomics" is an umbrella term including several subcategories, such as nutrigenetics, nutrigenomics, and nutritional epigenetics. Each of these subcategories explain some aspect of how genes react to nutrients and express specific phenotypes, like disease risk. There are several applications for nutritional genomics, for example how much nutritional intervention and therapy can successfully be used for disease prevention and treatment.
Background and preventive health
Nutritional science originally emerged as a field that studied individuals lacking certain nutrients and the subsequent effects, such as the disease scurvy, which results from a lack of vitamin C. As other diseases closely related to diet (but not deficiency), such as obesity, became more prevalent, nutritional science expanded to cover these topics as well. Nutritional research typically focuses on preventive measures, trying to identify what nutrients or foods will raise or lower risks of diseases and damage to the human body.
For example, Prader–Willi syndrome, a disease whose most distinguishing factor is insatiable appetite, has been specifically linked to an epigenetic pattern in which the paternal copy in the chromosomal region is erroneously deleted, and the maternal locus is inactivated by over-methylation. Yet, although certain disorders may be linked to certain single-nucleotide polymorphisms (SNPs) or other localized patterns, variation within a population may yield many more polymorphisms.
Mediterranean Diet
The Medi |
https://en.wikipedia.org/wiki/Euler%20angles | The Euler angles are three angles introduced by Leonhard Euler to describe the orientation of a rigid body with respect to a fixed coordinate system.
They can also represent the orientation of a mobile frame of reference in physics or the orientation of a general basis in 3-dimensional linear algebra.
Classic Euler angles usually take the inclination angle in such a way that zero degrees represent the vertical orientation. Alternative forms were later introduced by Peter Guthrie Tait and George H. Bryan intended for use in aeronautics and engineering in which zero degrees represent the horizontal position.
Chained rotations equivalence
Euler angles can be defined by elemental geometry or by composition of rotations. The geometrical definition demonstrates that three composed elemental rotations (rotations about the axes of a coordinate system) are always sufficient to reach any target frame.
The three elemental rotations may be extrinsic (rotations about the axes xyz of the original coordinate system, which is assumed to remain motionless), or intrinsic (rotations about the axes of the rotating coordinate system XYZ, solidary with the moving body, which changes its orientation with respect to the extrinsic frame after each elemental rotation).
In the sections below, an axis designation with a prime mark superscript (e.g., z″) denotes the new axis after an elemental rotation.
Euler angles are typically denoted as α, β, γ, or ψ, θ, φ. Different authors may use different sets of rotation axes to define Euler angles, or different names for the same angles. Therefore, any discussion employing Euler angles should always be preceded by their definition.
Without considering the possibility of using two different conventions for the definition of the rotation axes (intrinsic or extrinsic), there exist twelve possible sequences of rotation axes, divided into two groups:
Proper Euler angles
Tait–Bryan angles.
Tait–Bryan angles are also called Cardan angles; nautica |
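To make the extrinsic/intrinsic distinction concrete, here is a small sketch (my own illustration using the standard elemental rotation matrices, not code from the article). Composing rotations about the fixed axes x, y, z in that order gives the same matrix as the corresponding intrinsic sequence about the moving axes taken in reverse order:

```python
# Elemental rotations and an extrinsic x-y-z composition.
import numpy as np

def Rx(t):
    c, s = np.cos(t), np.sin(t)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def Ry(t):
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def Rz(t):
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

alpha, beta, gamma = 0.1, 0.2, 0.3      # example angles (radians)
R = Rz(gamma) @ Ry(beta) @ Rx(alpha)    # extrinsic rotations about fixed x, y, z
print(np.round(R, 4))
```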
https://en.wikipedia.org/wiki/Religion%20and%20health | Scholarly studies have investigated the effects of religion on health. The World Health Organization (WHO) discerns four dimensions of health, namely physical, social, mental, and spiritual health. Having a religious belief may have both positive and negative impacts on health and morbidity.
Religion and spirituality
Spirituality has been ascribed many different definitions in different contexts, but a general definition is: an individual's search for meaning and purpose in life. Spirituality is distinct from organized religion in that spirituality does not necessarily need a religious framework. That is, one does not necessarily need to follow certain rules, guidelines or practices to be spiritual, but an organized religion often has some combination of these in place. Some people who suffer from severe mental disorders may find comfort in religion. People who report themselves to be spiritual people may not observe any specific religious practices or traditions. It is important to define spirituality in an expanded format in order to determine the best way to research and study it.
Scientific research
More than 3000 empirical studies have examined relationships between religion and health, including more than 1200 in the 20th century, and more than 2000 additional studies between 2000 and 2009.
Various other reviews of the religion/spirituality and health literature have been published. These include two reviews from an NIH-organized expert panel that appeared in a 4-article special section of American Psychologist. Several chapters in edited academic books have also reviewed the empirical literature. The literature has also been reviewed extensively from the perspective of public health and its various subfields ranging from health policy and management to infectious diseases and vaccinology.
More than 30 meta-analyses and 100 systematic reviews have been published on relations between religious or spiritual factors and health outcomes.
Dimensions of hea |
https://en.wikipedia.org/wiki/Ipronidazole | Ipronidazole is an antiprotozoal drug of the nitroimidazole class used in veterinary medicine. It is used for the treatment of histomoniasis in turkeys and for swine dysentery. |
https://en.wikipedia.org/wiki/Quantities%2C%20Units%20and%20Symbols%20in%20Physical%20Chemistry | Quantities, Units and Symbols in Physical Chemistry, also known as the Green Book, is a compilation of terms and symbols widely used in the field of physical chemistry. It also includes a table of physical constants, tables listing the properties of elementary particles, chemical elements, and nuclides, and information about conversion factors that are commonly used in physical chemistry. The Green Book is published by the International Union of Pure and Applied Chemistry (IUPAC) and is based on published, citeable sources. Information in the Green Book is synthesized from recommendations made by IUPAC, the International Union of Pure and Applied Physics (IUPAP) and the International Organization for Standardization (ISO), including recommendations listed in the IUPAP Red Book Symbols, Units, Nomenclature and Fundamental Constants in Physics and in the ISO 31 standards.
History, list of editions, and translations to non-English languages
The third edition of the Green Book was first published by IUPAC in 2007. A second printing of the third edition was released in 2008; this printing made several minor revisions to the 2007 text. A third printing of the third edition was released in 2011. The text of the third printing is identical to that of the second printing.
A Japanese translation of the third edition of the Green Book was published in 2009. A French translation of the third edition was published in 2012. A Portuguese translation (Brazilian Portuguese and European Portuguese) of the third edition was published in 2018, with updated values of the physical constants and atomic weights; it is referred to as the "Livro Verde".
A concise four-page summary of the most important material in the Green Book was published in the July–August 2011 issue of Chemistry International, the IUPAC news magazine.
The second edition of the Green Book was first published in 1993. It was reprinted in 1995, 1996, and 1998.
|
https://en.wikipedia.org/wiki/Stochastic%20programming | In the field of mathematical optimization, stochastic programming is a framework for modeling optimization problems that involve uncertainty. A stochastic program is an optimization problem in which some or all problem parameters are uncertain, but follow known probability distributions. This framework contrasts with deterministic optimization, in which all problem parameters are assumed to be known exactly. The goal of stochastic programming is to find a decision which both optimizes some criteria chosen by the decision maker, and appropriately accounts for the uncertainty of the problem parameters. Because many real-world decisions involve uncertainty, stochastic programming has found applications in a broad range of areas ranging from finance to transportation to energy optimization.
Two-stage problems
The basic idea of two-stage stochastic programming is that (optimal) decisions should be based on data available at the time the decisions are made and cannot depend on future observations. The two-stage formulation is widely used in stochastic programming. The general formulation of a two-stage stochastic programming problem is given by:
min_{x ∈ X} { g(x) = f(x) + E_ξ[Q(x, ξ)] },
where Q(x, ξ) is the optimal value of the second-stage problem
min_{y} { q(y, ξ) : T(ξ)x + W(ξ)y = h(ξ) }.
The classical two-stage linear stochastic programming problems can be formulated as
min_{x ∈ R^n} { g(x) = c^T x + E_ξ[Q(x, ξ)] : Ax = b, x ≥ 0 },
where Q(x, ξ) is the optimal value of the second-stage problem
min_{y ∈ R^m} { q(ξ)^T y : T(ξ)x + W(ξ)y = h(ξ), y ≥ 0 }.
In such a formulation x ∈ R^n is the first-stage decision variable vector, y ∈ R^m is the second-stage decision variable vector, and ξ = (q, T, W, h) contains the data of the second-stage problem. In this formulation, at the first stage we have to make a "here-and-now" decision x before the realization of the uncertain data ξ, viewed as a random vector, is known. At the second stage, after a realization of ξ becomes available, we optimize our behavior by solving an appropriate optimization problem.
At the first stage we optimize (minimize in the above formulation) the cost of the first-stage decision plus the expected cost of the (optimal) second-stage decision. We |
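As a concrete toy instance of the two-stage idea (my own illustration, not from the article): a newsvendor-style problem over a finite set of demand scenarios, where the deterministic equivalent is minimized directly. All numbers are assumed for the example.

```python
# Two-stage stochastic program, toy deterministic equivalent.
# First stage: choose quantity x at unit cost c ("here-and-now").
# Second stage: demand d is revealed; the optimal recourse cost is
# Q(x, d) = p * max(d - x, 0), i.e. buy any shortfall at penalty p > c.
scenarios = [(30, 0.3), (50, 0.4), (80, 0.3)]  # (demand, probability), assumed
c, p = 1.0, 3.0                                # assumed costs

def expected_total_cost(x):
    # First-stage cost plus the expectation of the optimal second-stage cost.
    return c * x + sum(prob * p * max(d - x, 0) for d, prob in scenarios)

best_x = min(range(101), key=expected_total_cost)
print(best_x, expected_total_cost(best_x))     # 50 77.0
```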
https://en.wikipedia.org/wiki/Subclavian%20groove | On the medial part of the clavicle is a broad rough surface, the costal tuberosity (rhomboid impression), rather more than 2 cm in length, for the attachment of the costoclavicular ligament. The rest of this surface is occupied by a groove, which gives attachment to the Subclavius; the coracoclavicular fascia, which splits to enclose the muscle, is attached to the margins of the groove. Not infrequently this groove is subdivided longitudinally by a line which gives attachment to the intermuscular septum of the Subclavius. |
https://en.wikipedia.org/wiki/Pho%20regulon | The Phosphate (Pho) regulon is a regulatory mechanism used for the conservation and management of inorganic phosphate within the cell. It was first discovered in Escherichia coli as an operating system for the bacterial strain, and was later identified in other species. The Pho system is composed of various components including extracellular enzymes and transporters that are capable of phosphate assimilation in addition to extracting inorganic phosphate from organic sources. This is an essential process since phosphate plays an important role in cellular membranes, genetic expression, and metabolism within the cell. Under low nutrient availability, the Pho regulon helps the cell survive and thrive despite a depletion of phosphate within the environment. When this occurs, phosphate starvation-inducible (psi) genes activate other proteins that aid in the transport of inorganic phosphate.
Function
The Pho regulon is controlled by a two-component regulatory system composed of a histidine kinase sensor protein (PhoR) within the inner membrane and a transcriptional response regulator (PhoB) on the cytoplasmic side of the membrane. These proteins bind to upstream promoters in the Pho regulon in order to induce a general change in gene transcription. This occurs when the cell senses low concentrations of phosphate within its internal environment, causing the response regulator to be phosphorylated and inducing an overall increase in transcription of Pho regulon genes. This mechanism is ubiquitous among gram-positive and gram-negative bacteria, cyanobacteria, yeasts, and archaea.
Signal transduction pathway
Depletion of inorganic phosphate within the cell is required for activation of the Pho regulon in most prokaryotes. In the most commonly studied bacterium, E. coli, seven total proteins are used to detect intracellular levels of inorganic phosphate along with transducing that signal appropriately. Of the seven proteins, one is a metal binding protein (PhoU) and four are phosphate-specific trans |
https://en.wikipedia.org/wiki/Huffaker%27s%20mite%20experiment | In 1958, Carl B. Huffaker, an ecologist and agricultural entomologist at the University of California, Berkeley, did a series of experiments with predatory and herbivorous mite species to investigate predator–prey population dynamics. In these experiments, he created model universes with arrays of rubber balls and oranges (food for the herbivorous mites) on trays and then introduced the predator and prey mite species in various permutations. Specifically, Huffaker was seeking to understand how spatial heterogeneity and the varying dispersal ability of each species affected long-term population dynamics and survival. Contrary to previous experiments on this topic (especially those by Georgii Gause), he found that long-term coexistence was possible under select environmental conditions. He published his findings in the paper, "Experimental Studies on Predation: Dispersion Factors and Predator–Prey Oscillations".
Experimental design
The aim of Huffaker’s 1958 experiment was to “shed light upon the fundamental nature of predator–prey interaction” and to “establish an ecosystem in which a predatory and a prey species could continue living together so that the phenomena associated with their interactions could be studied in detail”. He used two mite species, the six-spotted mite Eotetranychus sexmaculatus as the prey species and Typhlodromus occidentalis as the predatory species. Oranges provided a background environment and a food source for the herbivorous mites. The amount of available food on each orange was controlled by sealing off portions of each orange using damp paper and paraffin wax. Huffaker introduced patchiness into the system by replacing oranges with rubber balls of a similar size. He referred to the resultant systems as "universes." Huffaker created a series of 12 universes in his experiment, trying different arrangements to reach a universe in which the predator population would not annihilate the prey population, and in which, instead, the two |
https://en.wikipedia.org/wiki/Paloozaville | Paloozaville is an animated/live-action series for children and their parents on the Video On Demand network, Mag Rack.
The Mag Rack original series was created exclusively for On Demand and stars John Lithgow as Paloozaville's absent-minded mayor. The show is based on Lithgow's best-selling children's books. Every episode begins with a boredom crisis that is subsequently solved by co-host Suza Palooza (Carmen De La Paz) and her team of kids. Every episode has a different theme centering on arts and crafts, music, history, dance, literature, and drama. The series strives to create educational children's entertainment that will allow parents to spend time with their children and learn at the same time.
External links
https://web.archive.org/web/20060822014311/http://www.magrack.com/paloozaville/ |
https://en.wikipedia.org/wiki/Adaptive%20comparative%20judgement | Adaptive comparative judgement is a technique borrowed from psychophysics which is able to generate reliable results for educational assessment – as such it is an alternative to traditional exam script marking. In the approach, judges are presented with pairs of student work and are then asked to choose which is better, one or the other. By means of an iterative and adaptive algorithm, a scaled distribution of student work can then be obtained without reference to criteria.
Introduction
Traditional exam script marking began in Cambridge in 1792 when, with undergraduate numbers rising, the importance of proper ranking of students was growing. So in 1792 the new Proctor of Examinations, William Farish, introduced marking, a process in which every examiner gives a numerical score to each response by every student, and the overall total mark puts the students in the final rank order. Francis Galton (1869) noted that, in an unidentified year about 1863, the Senior Wrangler scored 7,634 out of a maximum of 17,000, while the Second Wrangler scored 4,123. (The 'Wooden Spoon' scored only 237.)
Prior to 1792, a team of Cambridge examiners convened at 5pm on the last day of examining, reviewed the 19 papers each student had sat – and published their rank order at midnight. Marking solved the problems of numbers and prevented unfair personal bias, and its introduction was a step towards modern objective testing, the format it is best suited to. But the technology of testing that followed, with its major emphasis on reliability and the automatisation of marking, has been an uncomfortable partner for some areas of educational achievement: assessing writing or speaking, and other kinds of performance need something more qualitative and judgemental.
The technique of Adaptive Comparative Judgement is an alternative to marking. It returns to the pre-1792 idea of sorting papers according to their quality, but retains the guarantee of reliability and fairness. It is by far the most rel |
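One way to turn such pairwise judgements into a scaled distribution is to fit a Bradley–Terry-type model to the outcomes. The sketch below (my own illustration with made-up judgements, not the actual adaptive engine) estimates a latent quality score for each script by gradient ascent on the log-likelihood, then ranks the scripts:

```python
# Bradley-Terry scaling of scripts from pairwise "which is better?" data.
import math

scripts = ["A", "B", "C", "D"]
judgements = [("A", "B"), ("A", "C"), ("B", "C"),   # (winner, loser) pairs,
              ("A", "D"), ("B", "D"), ("C", "D")]   # hypothetical data

theta = {s: 0.0 for s in scripts}      # latent quality parameter per script
for _ in range(2000):                  # plain gradient ascent
    for w, l in judgements:
        p_win = 1 / (1 + math.exp(theta[l] - theta[w]))  # P(w beats l)
        step = 0.01 * (1 - p_win)      # gradient of the log-likelihood
        theta[w] += step
        theta[l] -= step

print(sorted(theta.items(), key=lambda kv: -kv[1]))  # scaled rank order
```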
https://en.wikipedia.org/wiki/Push%20of%20the%20past | The push of the past is a type of survivorship bias associated with evolutionary diversification when extinction is possible. Groups that survive a long time are likely to have “got off to a flying start”, and this statistical bias creates an illusion of a true slow-down of diversification rate through time.
Birth–Death modelling in evolutionary studies
The evolutionary processes of speciation and extinction can be modelled with a stochastic “birth–death model” (BDM), which is an important component in the study of macroevolution. A BDM assigns each species a certain probability of splitting (λ) or going extinct (μ) per interval of time. This gives rise to an exponential distribution, with the number of species in a particular clade N at any time t given by
N(t) = N₀e^((λ − μ)t),
although this expression only gives the expected value when N and t are large (see below).
In the special case of there being no extinction (μ = 0), this simplifies to the so-called "Yule process".
Lineage-through-time plots
A different type of plot of diversity through time, called a “lineage through time” (LTT) plot, retrospectively reconstructs the number of lineages that led to the living species of a group. This is equivalent to constructing a dated phylogeny and then counting how many branches are present at each time interval. As we know retrospectively that all such lineages survived until the present, it follows that no extinction is possible along them. It can be shown that the rate of production of new lineages through time is given by λ − μ.
Survivorship bias in diversification
Rather than considering the distribution of all possible stochastic outcomes for given values of λ and μ, it is also possible to consider what happens when certain conditions of survivorship are imposed on the possible outcomes.
Push of the past
If a BDM is forward-modelled, i.e. if the fate of an original single species is modelled through time, then a wide range of possible outcomes can occur, as the process is stochastic. With |
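A forward simulation makes the bias concrete. The sketch below (my own Gillespie-style illustration, with assumed rates λ = 1.0 and μ = 0.9) conditions on survival and shows surviving clades looking larger, on average, than the unconditional expectation e^((λ−μ)t):

```python
# Forward-simulated birth-death model, conditioned on survival.
import random

def simulate(birth=1.0, death=0.9, t_max=10.0):
    """Return clade size at t_max (0 if the clade went extinct)."""
    n, t = 1, 0.0
    while n > 0:
        t += random.expovariate(n * (birth + death))   # time to next event
        if t >= t_max:
            return n
        n += 1 if random.random() < birth / (birth + death) else -1
    return 0

runs = [simulate() for _ in range(10_000)]
survivors = [n for n in runs if n > 0]
print("P(survival):", len(survivors) / len(runs))
print("mean size given survival:", sum(survivors) / len(survivors))
print("unconditional mean:", sum(runs) / len(runs))    # ~ e^((1.0-0.9)*10)
```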
https://en.wikipedia.org/wiki/Model-based%20specification | Model-based specification is an approach to formal specification where the system specification is expressed as a system state model. This state model is constructed using well-understood mathematical entities such as sets and functions. System operations are specified by defining how they affect the state of the system model.
The most widely used notations for developing model-based specifications are VDM and Z (pronounced Zed, not Zee). These notations are based on typed set theory. Systems are therefore modelled using sets and relations between sets.
Another well-known approach to formal specification is algebraic specification.
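To give a flavour of model-based specification (an illustration loosely modelled on the classic Z "birthday book" example, not code from this article): the state is a set plus a partial function, an invariant relates them, and each operation is specified by a precondition and its effect on the state.

```python
# State model: sets and functions; operations change the state.
class BirthdayBook:
    def __init__(self):
        self.known: set[str] = set()        # set of names known to the system
        self.birthday: dict[str, str] = {}  # partial function: name -> date

    def invariant(self) -> bool:
        # State invariant: known is exactly the domain of the birthday map.
        return self.known == set(self.birthday)

    def add_birthday(self, name: str, date: str) -> None:
        assert name not in self.known       # precondition
        self.known.add(name)                # effect on the state
        self.birthday[name] = date

book = BirthdayBook()
book.add_birthday("Ada", "10 Dec")
assert book.invariant()
```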
See also
Model-based design
Model-based testing |
https://en.wikipedia.org/wiki/Sign-value%20notation | A sign-value notation represents numbers using a sequence of numerals which each represent a distinct quantity, regardless of their position in the sequence. Sign-value notations are typically additive, subtractive, or multiplicative depending on their conventions for grouping signs together to collectively represent numbers.
Although the absolute value of each sign is independent of its position, the value of the sequence as a whole may depend on the order of the signs, as with numeral systems which combine additive and subtractive notation, such as Roman numerals. There is no need for zero in sign-value notation.
Additive notation
Additive notation represents numbers by a series of numerals that added together equal the value of the number represented, much as tally marks are added together to represent a larger number. To represent multiples of the sign value, the same sign is simply repeated. In Roman numerals, for example, X means ten and L means fifty, so LXXX means eighty (50 + 10 + 10 + 10).
Although signs may be written in a conventional order the value of each sign does not depend on its place in the sequence, and changing the order does not affect the total value of the sequence in an additive system. Frequently used large numbers are often expressed using unique symbols to avoid excessive repetition. Aztec numerals, for example, use a tally of dots for numbers less than twenty alongside unique symbols for powers of twenty, including 400 and 8,000.
Subtractive notation
Subtractive notation represents numbers by a series of numerals in which signs representing smaller values are typically subtracted from those representing larger values to equal the value of the number represented. In Roman numerals, for example, I means one and X means ten, so IX means nine (10 − 1). The consistent use of the subtractive system with Roman numerals was not standardised until after the widespread adoption of the printing press in Europe.
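Both rules are compact enough to state as code. This sketch (my illustration) decodes a Roman numeral by adding each sign's fixed value, subtracting it instead when a smaller sign immediately precedes a larger one:

```python
# Decoding a sign-value (Roman) numeral with the subtractive rule.
VALUES = {"I": 1, "V": 5, "X": 10, "L": 50, "C": 100, "D": 500, "M": 1000}

def roman_to_int(s: str) -> int:
    total = 0
    for i, ch in enumerate(s):
        v = VALUES[ch]
        if i + 1 < len(s) and v < VALUES[s[i + 1]]:
            total -= v                 # subtractive pair, e.g. IX = 9
        else:
            total += v                 # plain additive sign
    return total

assert roman_to_int("LXXX") == 80      # additive: 50 + 10 + 10 + 10
assert roman_to_int("IX") == 9         # subtractive: 10 - 1
```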
History
Sign-value notation was the |
https://en.wikipedia.org/wiki/OGFr | Opioid growth factor receptor, also known as OGFr or the ζ-opioid receptor, is a protein which in humans is encoded by the OGFR gene. The protein encoded by this gene is a receptor for opioid growth factor (OGF), also known as [Met(5)]-enkephalin. The endogenous ligand is thus a known opioid peptide, and OGFr was originally discovered and named as a new opioid receptor zeta (ζ). However it was subsequently found that it shares little sequence similarity with the other opioid receptors, and has quite different function.
Function
The natural function of this receptor appears to be in regulation of tissue growth, and it has been shown to be important in embryonic development, wound repair, and certain forms of cancer.
OGF is a negative regulator of cell proliferation and tissue organization in a variety of processes. The encoded unbound receptor for OGF has been localized to the outer nuclear envelope, where it binds OGF and is translocated into the nucleus. The coding sequence of this gene contains a polymorphic region of 60 nt tandem imperfect repeat units. Several transcripts containing between zero and eight repeat units have been reported.
Mechanism of activation
The opioid growth factor receptor consists of a chain of 677 amino acids, which includes a nuclear localization sequence region. When OGF binds to the receptor, an OGF-OGFr complex is formed, which leads to an increase in the synthesis of the selective cyclin-dependent kinase (CDK) inhibitor proteins p16 and p21. Retinoblastoma protein becomes inactivated through phosphorylation by CDKs, which leads to the progression of the cell cycle from the G1 phase to the S phase. Because activation of the OGF receptor blocks the phosphorylation of retinoblastoma proteins, retardation of the G1 phase occurs, which prevents the cell from further dividing.
Therapeutic applications
Upregulation of OGFr and consequent stimulation of the OGF-OGFr system are important for the anti-proliferative effects of imid |
https://en.wikipedia.org/wiki/Diffusion%20model | In machine learning, diffusion models, also known as diffusion probabilistic models or score-based generative models, are a class of generative models. The goal of diffusion models is to learn a diffusion process that generates the probability distribution of a given dataset. It mainly consists of three major components: the forward process, the reverse process, and the sampling procedure. Three examples of generic diffusion modeling frameworks used in computer vision are denoising diffusion probabilistic models, noise conditioned score networks, and stochastic differential equations.
Diffusion models can be applied to a variety of tasks, including image denoising, inpainting, super-resolution, and image generation. For example, in image generation, a neural network is trained to denoise images with added Gaussian noise by learning to remove the noise. After the training is complete, it can then be used for image generation by supplying an image composed of random noise for the network to denoise.
Diffusion models have been applied to generate many kinds of real-world data, the most famous of which are text-conditional image generators like DALL-E and Stable Diffusion. More examples are in a later section in the article.
Denoising diffusion model
Non-equilibrium thermodynamics
Diffusion models were introduced in 2015 as a method to learn a model that can sample from a highly complex probability distribution. They used techniques from non-equilibrium thermodynamics, especially diffusion.
Consider, for example, how one might model the distribution of all naturally-occurring photos. Each image is a point in the space of all images, and the distribution of naturally-occurring photos is a "cloud" in space, which, by repeatedly adding noise to the images, diffuses out to the rest of the image space, until the cloud becomes all but indistinguishable from a Gaussian distribution N(0, I). A model that can approximately undo the diffusion can then be used to sample from the or |
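The forward (noising) half of this picture has a simple closed form. The sketch below (my illustration of a standard DDPM-style forward process, with an assumed linear beta schedule) samples x_t = √(ᾱ_t)·x_0 + √(1−ᾱ_t)·ε directly at any noise level t:

```python
# Forward diffusion: progressively Gaussian-noised versions of the data.
import numpy as np

T = 1000
betas = np.linspace(1e-4, 0.02, T)      # assumed linear noise schedule
alpha_bar = np.cumprod(1.0 - betas)     # cumulative signal fraction per step

def forward_diffuse(x0, t, rng):
    """Sample x_t given clean data x0 at integer step t (0-indexed)."""
    eps = rng.standard_normal(x0.shape)
    return np.sqrt(alpha_bar[t]) * x0 + np.sqrt(1 - alpha_bar[t]) * eps

rng = np.random.default_rng(0)
x0 = np.ones(4)                          # stand-in for an "image"
print(forward_diffuse(x0, 10, rng))      # still close to x0
print(forward_diffuse(x0, T - 1, rng))   # nearly pure Gaussian noise
```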
https://en.wikipedia.org/wiki/Water-use%20efficiency | Water-use efficiency (WUE) refers to the ratio of water used in plant metabolism to water lost by the plant through transpiration. Two types of water-use efficiency are referred to most frequently:
photosynthetic water-use efficiency (also called instantaneous water-use efficiency), which is defined as the ratio of the rate of carbon assimilation (photosynthesis) to the rate of transpiration, and
water-use efficiency of productivity (also called integrated water-use efficiency), which is typically defined as the ratio of biomass produced to the rate of transpiration.
Increases in water-use efficiency are commonly cited as a response mechanism of plants to moderate to severe soil water deficits and have been the focus of many programs that seek to increase crop tolerance to drought. However, there is some question as to the benefit of increased water-use efficiency of plants in agricultural systems, as the processes of increased yield production and decreased water loss due to transpiration (that is, the main driver of increases in water-use efficiency) are fundamentally opposed. If there existed a situation where water deficit induced lower transpirational rates without simultaneously decreasing photosynthetic rates and biomass production, then water-use efficiency would be both greatly improved and the desired trait in crop production. |
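Both definitions are simple ratios; the toy numbers below (all assumed, for illustration only) just make the two quantities explicit:

```python
# The two common water-use efficiency ratios, with hypothetical values.
assimilation = 12.0      # CO2 assimilation rate (e.g. umol m^-2 s^-1), assumed
transpiration = 3.0      # transpiration rate (e.g. mmol m^-2 s^-1), assumed
biomass = 150.0          # biomass produced over a season (g), assumed
water_transpired = 60.0  # water transpired over the same season (kg), assumed

photosynthetic_wue = assimilation / transpiration   # instantaneous WUE
productivity_wue = biomass / water_transpired       # integrated WUE
print(photosynthetic_wue, productivity_wue)
```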
https://en.wikipedia.org/wiki/Countably%20generated%20module | In mathematics, a module over a (not necessarily commutative) ring is countably generated if it is generated as a module by a countable subset. The importance of the notion comes from Kaplansky's theorem (Kaplansky 1958), which states that a projective module is a direct sum of countably generated modules.
More generally, a module over a possibly non-commutative ring is projective if and only if (i) it is flat, (ii) it is a direct sum of countably generated modules and (iii) it is a Mittag-Leffler module. (Bazzoni–Stovicek) |
https://en.wikipedia.org/wiki/Local%20Leo%20Cold%20Cloud | The Local Leo Cold Cloud is a relatively nearby cloud of interstellar gas. It ranges from 11.3 to 24.3 parsecs in distance. The cloud's neutral gas temperature is around 20 K, which is cold compared to the 1,000,000 K temperature of the Local Bubble in which it is embedded. The hydrogen atom density in this cloud is 3,000 atoms per cubic centimeter, which is dense for the interstellar medium. Thermal infrared radiation from dust in the cloud can be detected at 0.1 mm. |
https://en.wikipedia.org/wiki/Shall%20the%20Dust%20Praise%20Thee%3F | "Shall the Dust Praise Thee?" is a science fiction short story by American writer Damon Knight. It was first published in the anthology Dangerous Visions (1967). His agent had refused to handle it, suggesting that no one would buy it except perhaps the Atheist Journal in Moscow. The title comes from Psalm 30:9 in the Bible.
Summary
God arrives on Earth, ready to inflict the Day of Wrath on humankind, but finds that all life has already disappeared. The angels tell God that there has been a great war between England, Russia, China, and America which has wiped out all life on earth, and that the true end of days had already occurred through nuclear warfare. No living creatures, no water, no grass, nothing but dust and brittle stone remain on the world. All that remains of humanity is the phrase left by the last humans as a message to God, saying, "WE WERE HERE. WHERE WERE YOU?" |
https://en.wikipedia.org/wiki/Online%20tutoring | Online tutoring is the process of tutoring in an online, virtual, or networked environment, in which teachers and learners participate from separate physical locations. Aside from space, participants can also be separated by time.
Online tutoring is practiced using many different approaches for distinct sets of users. The distinctions are in content and user interface, as well as in tutoring styles and tutor-training methodologies. Definitions associated with online tutoring vary widely, reflecting the ongoing evolution of the technology, the refinement and variation in online learning methodology, and the interactions of the organizations that deliver online tutoring services with the institutions, individuals, and learners that employ the services. This Internet-based service is a form of micropublishing.
Concept and definitions
An institution, website or individual can offer online tutoring through an internal or external tutoring website or through a learning management system (LMS). Online environments applied in education could also involve the use of a virtual learning environment platform such as Moodle, Sakai, WebCT, and Blackboard. Some of these are paid systems, but some are free and open source, such as Google+ Hangouts. Online tutoring may be offered either via a link in an LMS, or directly through the tutoring service's platform, where a subscriber may be required to pay for tutoring time before the delivery of service. Many educational institutions and major textbook publishers sponsor a certain amount of tutoring without a direct charge to the learner.
Tutoring may take the form of a group of learners simultaneously logged in online, then receiving instruction from a single tutor, also known as many-to-one tutoring and live online tutoring. This is often known as e-moderation, defined as the facilitation of the achievement of goals of independent learning, learner autonomy, self-reflection, knowledge construction, collaborative or group-based lea |
https://en.wikipedia.org/wiki/Lion%20algorithm | The Lion algorithm (LA) is a bio-inspired (nature-inspired) optimization algorithm based on meta-heuristic principles. It was first introduced by B. R. Rajakumar in 2012 under the name Lion's Algorithm. It was further extended in 2014 to solve the system identification problem. This version, referred to as LA, has been applied by many researchers to their optimization problems.
Inspiration from lion’s social behaviour
Lions form a social system called a "pride", which consists of 1–3 pairs of lions. A pride of lions shares a common area known as a territory, in which the dominant lion is called the territorial lion. The territorial lion safeguards its territory from outside attackers, especially nomadic lions; this process is called territorial defense. It protects the cubs until they become sexually mature; the maturity period is about 2–4 years. The pride undergoes survival fights to protect its territory and the cubs from nomadic lions. If the pride is defeated, the dominating nomadic lion takes the role of territorial lion by killing or driving out the cubs of the pride. The lionesses of the pride then give birth to cubs through the new territorial lion. When the cubs of the pride mature and are considered to be stronger than the territorial lion, they take over the pride; this process is called territorial take-over. If territorial take-over happens, the old territorial lion, which is considered to be a laggard, is either driven out or leaves the pride. The stronger lions and lionesses form the new pride and give birth to their own cubs.
Terminology
In the LA, the terms associated with the lion's social system are mapped to the terminology of optimization problems. A few notable terms are given here.
Lion: A potential solution to be generated or determined as optimal (or) near-optimal solution of the problem. The lion can be a territorial lion and lioness, cubs and nomadic lions that represent the solution |
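As a very rough caricature of these dynamics (my own simplification for illustration, not Rajakumar's published algorithm), the sketch below keeps a "territorial" best solution that must defend its place against random "nomadic" challengers and against its own mutated "cubs":

```python
# Caricature of territorial defense and take-over as a search loop.
import random

def f(x):                                   # objective to minimize (assumed)
    return sum(v * v for v in x)

def random_lion(dim=5):
    return [random.uniform(-10, 10) for _ in range(dim)]

territorial = random_lion()
for _ in range(1000):
    nomad = random_lion()                   # territorial defense
    if f(nomad) < f(territorial):
        territorial = nomad
    cub = [v + random.gauss(0, 0.5) for v in territorial]   # breeding/mutation
    if f(cub) < f(territorial):             # territorial take-over
        territorial = cub

print(f(territorial))                       # near 0 for this toy objective
```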
https://en.wikipedia.org/wiki/Ethernet%20extender | An Ethernet extender (also network extender or LAN extender) is any device used to extend an Ethernet or network segment beyond its inherent distance limitation, which is approximately 100 m (330 ft) for most common forms of twisted pair Ethernet. These devices employ a variety of transmission technologies and physical media (wireless, copper wire, fiber-optic cable, coaxial cable).
The extender forwards traffic between LANs transparent to higher network-layer protocols over distances that far exceed the limitations of standard Ethernet.
Options
Extenders that use copper wire include 2- and 4-wire variants using unconditioned copper wiring to extend a LAN. Network extenders use various methods (line encodings), such as TC-PAM, 2B1Q or DMT, to transmit information. While transmitting over copper wire does not allow for the speeds that fiber-optic transmission does, it allows the use of existing voice-grade copper or CCTV coaxial cable wiring. Copper-based Ethernet extenders must be used on unconditioned wire (without load coils), such as unused twisted pairs and alarm circuits.
Connecting a private LAN between buildings or more distant locations is a challenge. Wi-Fi requires a clear line-of-sight, special antennas, and is subject to weather. If the buildings are within 100 m, a normal Ethernet cable segment can be used, with due consideration of potential grounding problems between the locations. Up to 200 m, it may be possible to set up an ordinary Ethernet bridge or router in the middle, if power and weather protection can be arranged.
Fiber optic connection is ideal, allowing connections of over a kilometre and high speeds with no electrical shock or surge issues, but it is technically specialized and expensive for both the end equipment interfaces and the cable. Damage to the cable requires special skills to repair, or total replacement.
Specialized equipment can inter-connect two LANs over a single twisted pair of wires, such as the Moxa IEX Series, Cisco LRE (Long Reach Etherne |
https://en.wikipedia.org/wiki/Graham%27s%20law | Graham's law of effusion (also called Graham's law of diffusion) was formulated by Scottish physical chemist Thomas Graham in 1848. Graham found experimentally that the rate of effusion of a gas is inversely proportional to the square root of the molar mass of its particles. This formula is stated as:
Rate1/Rate2 = √(M2/M1),
where:
Rate1 is the rate of effusion for the first gas (volume or number of moles per unit time).
Rate2 is the rate of effusion for the second gas.
M1 is the molar mass of gas 1.
M2 is the molar mass of gas 2.
Graham's law states that the rate of diffusion or of effusion of a gas is inversely proportional to the square root of its molecular weight. Thus, if the molecular weight of one gas is four times that of another, it would diffuse through a porous plug or escape through a small pinhole in a vessel at half the rate of the other (heavier gases diffuse more slowly). A complete theoretical explanation of Graham's law was provided years later by the kinetic theory of gases. Graham's law provides a basis for separating isotopes by diffusion—a method that came to play a crucial role in the development of the atomic bomb.
Graham's law is most accurate for molecular effusion which involves the movement of one gas at a time through a hole. It is only approximate for diffusion of one gas in another or in air, as these processes involve the movement of more than one gas.
In the same conditions of temperature and pressure, the molar mass is proportional to the mass density. Therefore, the rates of diffusion of different gases are inversely proportional to the square roots of their mass densities:
Rate1/Rate2 = √(ρ2/ρ1),
where:
ρ is the mass density.
Examples
First Example: Let gas 1 be H2 and gas 2 be O2. (This example is solving for the ratio between the rates of the two gases.)
Rate H2/Rate O2 = √(32/2) = √16 = 4
Therefore, hydrogen molecules effuse four times faster than those of oxygen.
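The same ratio is a one-line computation for any pair of gases (my illustration; approximate molar masses in g/mol):

```python
# Graham's law: rate1/rate2 = sqrt(M2/M1).
import math

def effusion_rate_ratio(m1, m2):
    return math.sqrt(m2 / m1)

print(effusion_rate_ratio(2.0, 32.0))  # H2 vs O2 -> 4.0
```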
Graham's Law can also be used to find the approximate molecular weight of a gas if one gas is a known species, and if there is a specific |
https://en.wikipedia.org/wiki/Tor%20Carding%20Forum | The Tor Carding Forum (TCF) was a Tor-based forum specializing in the trade of stolen credit card details, identity theft and currency counterfeiting. The site was founded by an individual known as 'Verto' who also founded the now defunct Evolution darknet market.
The site required $50 for registration.
A 2013 investigation into counterfeit banknotes in Pittsburgh led to a source of Ugandan fakes being identified as having been purchased via the Tor Carding Forum. By December 2014 Ryan Andrew Gustafson, a.k.a. "Jack Farrel" and "Willy Clock", a US citizen living in Uganda, was arrested by the U.S. Secret Service for the large-scale sale of counterfeit United States currency, which was being sold through the Tor Carding Forum as well as other crime forums.
In December 2014 the site closed following a hack, directing users to Evolution's forums.
In June 2015 a dark web researcher identified the clearnet IP address of a similar hidden service branded 'The Tor Carding Forum V2' which was subsequently shut down. |
https://en.wikipedia.org/wiki/Cellular%20and%20Molecular%20Life%20Sciences | Cellular and Molecular Life Sciences is a peer-reviewed scientific journal covering cellular and molecular life sciences. It was established in 1945 as Experientia, obtaining its current name in 1994. The Editors-in-chief are Roberto Bruzzone and Jean Leon Thomas. According to the Journal Citation Reports, the journal has a 2020 impact factor of 9.261. |
https://en.wikipedia.org/wiki/Simulated%20fluorescence%20process%20algorithm | The Simulated Fluorescence Process (SFP) is a computing algorithm used for scientific visualization of 3D data from, for example, fluorescence microscopes. By modeling a physical light/matter interaction process, an image can be computed which shows the data as it would have appeared in reality when viewed under these conditions.
Principle
The algorithm considers a virtual light source producing excitation light that illuminates the object. This casts shadows either on parts of the object itself or on other objects below it. The interaction between the excitation light and the object provokes the emission light, which also interacts with the object before it finally reaches the eye of the viewer.
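A heavily reduced sketch of that pipeline (my own 1D caricature, which for simplicity puts the light source and the viewer on the same side): excitation light attenuates as it penetrates the volume, each voxel emits in proportion to the excitation reaching it, and the emission is attenuated again on its way out:

```python
# 1D caricature of the simulated fluorescence process.
density = [0.0, 0.2, 0.8, 0.3, 0.0]         # hypothetical specimen profile

def sfp_brightness(k_exc=1.0, k_emi=0.5):
    excitation = 1.0                         # excitation light entering
    to_viewer = 1.0                          # transmission back to the viewer
    image = 0.0
    for d in density:                        # march through the volume
        image += to_viewer * excitation * d  # local emission reaching the eye
        excitation *= (1 - k_exc * d)        # absorption casts "shadows"
        to_viewer *= (1 - k_emi * d)         # emission absorbed on the way out
    return image

print(sfp_brightness())
```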
See also
Computer graphics lighting
Rendering (computer graphics) |
https://en.wikipedia.org/wiki/Passiflora%20%C3%97%20violacea | Passiflora × violacea, the violet passion flower, is a hybrid between two species of flowering plants, Passiflora racemosa × Passiflora caerulea, in the family Passifloraceae. The name Passiflora × violacea has yet to be resolved as a correct scientific name; nevertheless it is widely found in the horticultural literature.
It is an evergreen climber with five-lobed leaves, clinging spiral tendrils, large showy purple flowers with maroon and white filaments, and the prominent stigmas and anthers typical of the genus. While somewhat hardier than one of its parents, P. racemosa, it is considerably less hardy than the other, P. caerulea (which can be grown outside in warm or coastal areas). In most temperate zones P. × violacea is grown under glass, for instance in an unheated conservatory or greenhouse.
Passiflora × violacea may well be the very first Passiflora to have been hybridised, by the British nurseryman Thomas Milne in 1819. It was subsequently described by Joseph Sabine of the Royal Horticultural Society, then in 1824 by the French botanist Jean-Louis-Auguste Loiseleur-Deslongchamps in the Herbier Général de l'Amateur, giving it its current name.
This hybrid has in its turn given rise to several cultivars, notably ‘Victoria’. It has won the Royal Horticultural Society’s Award of Garden Merit. |
https://en.wikipedia.org/wiki/JEB%20decompiler | JEB is a disassembler and decompiler for Android applications and native machine code. It decompiles Dalvik bytecode to Java source code, and x86, ARM, MIPS and RISC-V machine code to C source code. The assembly and source outputs are interactive and can be refactored. Users can also write their own scripts and plugins to extend JEB's functionality.
Version 2.2 introduced Android debugging modules for Dalvik and native (Intel, ARM, MIPS) code. Users can "seamlessly debug Dalvik bytecode and native machine code, for all apps [...] including those that do not explicitly allow debugging".
Version 2.3 introduced native code decompilers. The first decompiler that shipped with JEB was a MIPS 32-bit interactive decompiler.
JEB 3 ships with additional decompilers, including Intel x86, Intel x86-64, WebAssembly (wasm), Ethereum (evm), Diem blockchain (diemvm).
JEB 4 was released in 2021. A RISC-V decompiler was added to JEB 4.5. A S7 PLC block decompiler was added to JEB 4.16.
JEB 5 was released in 2023.
History
JEB is the first Dalvik decompiler to provide interactive output, as reverse-engineers may examine cross-references, insert comments, or rename items, such as classes and methods. Whenever possible, the correspondence between the bytecode and the decompiled Java code is accessible to the user. Although JEB is branded as a decompiler, it also provides a full APK view (manifest, resources, certificates, etc.). An API allows users to customize or automate actions through scripts and plugins, in Python and Java.
The name may be a reference to the well-known security software IDA, as "JEB" = rot1("IDA").
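That relationship is easy to check (my illustration): shifting each letter one place forward in the alphabet maps "IDA" to "JEB".

```python
# rot1: Caesar shift of uppercase letters by one position.
def rot1(s: str) -> str:
    return "".join(chr((ord(c) - ord("A") + 1) % 26 + ord("A")) for c in s)

print(rot1("IDA"))  # JEB
```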
Decompilers
JEB ships with the following proprietary and open-source decompiler plugins:
Dalvik bytecode to Java
Java bytecode to Java
Intel x86/x86-64 machine code to C
ARM machine code to C
MIPS machine code to C
RISC-V machine code to C
S7 (MC7) bytecode to C
WebAssembly bytecode to C
EVM bytecode (compiled Ethereum smart contracts) to Solidity-li |
https://en.wikipedia.org/wiki/List%20of%20oncogenic%20bacteria | This is a list of bacteria that have been identified as promoting or causing:
Uncontrolled growth of tissue in the body
Cancer
Carcinomas
Tumors (including benign or slow growing)
Neoplasms
Sarcomas
Precancerous lesions
Coinfectious agent promoting the above growths
Species or genera
See also
Carcinogenic bacteria
Sexually transmitted disease
Infectious causes of cancer
Infections associated with diseases
List of infectious diseases
Timeline of peptic ulcer disease and Helicobacter pylori |
https://en.wikipedia.org/wiki/Medical%20education | Medical education is education related to the practice of being a medical practitioner, including the initial training to become a physician (i.e., medical school and internship) and additional training thereafter (e.g., residency, fellowship, and continuing medical education).
Medical education and training varies considerably across the world. Various teaching methodologies have been used in medical education, which is an active area of educational research.
Medical education is also the subject-didactic academic field of educating medical doctors at all levels, including entry-level, post-graduate, and continuing medical education. Specific requirements such as entrustable professional activities must be met before moving on to later stages of medical education.
Common techniques and evidence base
Medical education applies theories of pedagogy specifically in the context of medical education. Medical education has been a leader in the field of evidence-based education, through the development of evidence syntheses such as the Best Evidence Medical Education collection, formed in 1999, which aimed to "move from opinion-based education to evidence-based education". Common evidence-based techniques include the objective structured clinical examination (commonly known as the OSCE) to assess clinical skills, and reliable checklist-based assessments to determine the development of soft skills such as professionalism. However, ineffective instructional methods persist in medical education, such as the matching of teaching to learning styles and Edgar Dale's "Cone of Learning".
Entry-level education
Entry-level medical education programs are tertiary-level courses undertaken at a medical school. Depending on jurisdiction and university, these may be either undergraduate-entry (most of Europe, Asia, South America and Oceania), or graduate-entry programs (mainly Australia, Philippines and North America). Some jurisdictions and universities provide both u |
https://en.wikipedia.org/wiki/Colonel%20Sanders | Colonel Harland David Sanders (September 9, 1890 – December 16, 1980) was an American businessman and founder of the fast food chicken restaurant chain Kentucky Fried Chicken (also known as KFC). He later acted as the company's brand ambassador and symbol, and his name and image are still symbols of the company.
Sanders held a number of jobs in his early life, such as steam engine stoker, insurance salesman, and filling station operator. He began selling fried chicken from his roadside restaurant in North Corbin, Kentucky, during the Great Depression. During that time, Sanders developed his "secret recipe" and his patented method of cooking chicken in a pressure fryer. Sanders recognized the potential of the restaurant franchising concept, and the first KFC franchise opened in South Salt Lake, Utah, in 1952. When his original restaurant closed, he devoted himself full-time to franchising his fried chicken throughout the country.
The company's rapid expansion across the United States and overseas became overwhelming for Sanders. In 1964, then 73 years old, he sold the company to a group of investors led by John Y. Brown Jr. and Jack C. Massey for $2 million. However, he retained control of operations in Canada, and he became a salaried brand ambassador for Kentucky Fried Chicken. In his later years, he became highly critical of the food served by KFC restaurants, believing they had cut costs and allowed quality to deteriorate.
Life and career
1890–1906: early life
Harland David Sanders was born on September 9, 1890, in a four-room house located east of Henryville, Indiana. He was the oldest of three children born to Wilbur David and Margaret Ann (née Dunlevy) Sanders. His mother was of Irish and Dutch descent. The family attended the Advent Christian Church. His father was a mild and affectionate man who worked his farm until he broke his leg in a fall. He then worked as a butcher in Henryville for two years. Sanders's mother was a devout Christian |
https://en.wikipedia.org/wiki/Bertrand%20paradox%20%28economics%29 | In economics and commerce, the Bertrand paradox — named after its creator, Joseph Bertrand — describes a situation in which two players (firms) reach a state of Nash equilibrium where both firms charge a price equal to marginal cost ("MC"). The paradox is that in models such as Cournot competition, an increase in the number of firms is associated with a convergence of prices to marginal costs. In these alternative models of oligopoly, a small number of firms earn positive profits by charging prices above cost.
Suppose two firms, A and B, sell a homogeneous commodity, each with the same cost of production and distribution, so that customers choose the product solely on the basis of price. It follows that demand is infinitely price-elastic. Neither A nor B will set a higher price than the other because doing so would yield the entire market to their rival. If they set the same price, the companies will share both the market and profits.
On the other hand, if either firm were to lower its price, even a little, it would gain the whole market and substantially larger profits. Since both A and B know this, they will each try to undercut their competitor until the product is selling at zero economic profit. This is the pure-strategy Nash equilibrium. Recent work has shown that there may be an additional mixed-strategy Nash equilibrium with positive economic profits under the assumption that monopoly profits are infinite. For the case of finite monopoly profits, it has been shown that positive profits under price competition are impossible in mixed equilibria and even in the more general case of correlated equilibria.
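As an illustration of this undercutting logic, the following toy best-response iteration converges to the Bertrand outcome. This is a sketch only: the marginal cost, starting prices, and one-cent price grid are invented for the example, not taken from the literature.

```python
# Toy Bertrand duopoly: each firm best-responds by undercutting the
# rival by one price step until price hits marginal cost (a sketch,
# not a general equilibrium solver; all numbers are illustrative).
MC = 10.00    # common marginal cost, assumed
STEP = 0.01   # smallest price decrement, assumed

def best_response(rival_price: float) -> float:
    """Undercut the rival if that is profitable; otherwise price at cost."""
    if rival_price > MC + STEP:
        return round(rival_price - STEP, 2)  # capture the whole market
    return MC  # no profitable undercut exists

p_a, p_b = 50.00, 50.00  # arbitrary starting prices
while True:
    new_a = best_response(p_b)
    new_b = best_response(new_a)
    if (new_a, new_b) == (p_a, p_b):
        break
    p_a, p_b = new_a, new_b

print(p_a, p_b)  # both converge to 10.0 = MC, the Bertrand outcome
```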
The Bertrand paradox rarely appears in practice because real products are almost always differentiated in some way other than price (brand name, if nothing else); firms have limitations on their capacity to manufacture and distribute, and two firms rarely have identical costs.
Bertrand's result is paradoxical because if the number of firms goes from one to |
https://en.wikipedia.org/wiki/QTY%20Code | The QTY Code is a design method to transform membrane proteins that are intrinsically insoluble in water into variants with water solubility, while retaining their structure and function.
Similar structures of amino acids
The QTY Code is based on two key molecular structural facts: 1) all 20 natural amino acids are found in alpha-helices regardless of their chemical properties, although some amino acids have a higher propensity to form an alpha-helix; and, 2) several amino acids share striking structural similarities despite their very different chemical properties. These may be paired as: Glutamine (Q) vs Leucine (L); Threonine (T) vs Valine (V) and Isoleucine (I); and Tyrosine (Y) vs Phenylalanine (F). The QTY Code systematically replaces water-insoluble amino acids (L, V, I and F) with water-soluble amino acids (Q, T and Y) in transmembrane alpha-helices. Thus, its application to membrane proteins changes the water-insoluble form of membrane proteins into water-soluble variants. The QTY Code was specifically conceived to render G protein-coupled receptors (GPCRs) into a water-soluble form. Despite substantial transmembrane domain changes, the QTY variants of GPCRs maintain stable structure and ligand binding activities.
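The substitution rule itself is simple enough to state as code. A minimal sketch follows; the helper name and the helix fragment are invented for illustration, and real applications target only the transmembrane segments of a receptor.

```python
# QTY Code substitution sketch: replace the hydrophobic residues
# L, V, I, F with the structurally similar hydrophilic Q, T, T, Y.
# Intended only for residues inside transmembrane helices.
QTY_MAP = str.maketrans({"L": "Q", "V": "T", "I": "T", "F": "Y"})

def qty_variant(tm_helix: str) -> str:
    """Return the water-soluble QTY variant of a transmembrane helix."""
    return tm_helix.upper().translate(QTY_MAP)

# Hypothetical helix fragment, not taken from a real receptor:
print(qty_variant("GLVIFAVLLAIF"))  # -> GQTTYATQQATY
```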
Hydrogen bond interactions between water and the amino acids
The side chain of glutamine (Q) can form 4 hydrogen bonds with 4 water molecules: there are 2 hydrogen donors from the nitrogen and 2 hydrogen acceptors from the oxygen. The –OH group of threonine (T) and tyrosine (Y) can form 3 hydrogen bonds with 3 water molecules (2 H-acceptors and 1 H-donor). (Figure caption: green = carbon, red = oxygen, blue = nitrogen, gray = hydrogen, yellow disks = hydrogen bonds.)
Three types of alpha-helices with nearly identical molecular structure
There are 3 types of alpha-helices with nearly identical molecular structure, namely: a) 1.5 Å rise per amino acid, b) 100˚ turn per amino acid, c) 3.6 amino acids and 360˚ per helical turn, and d) 5.4 Å per helic
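These parameters are mutually consistent: 3.6 residues per turn × 1.5 Å rise = 5.4 Å pitch, and 3.6 × 100˚ = 360˚. A short sketch generating idealized backbone positions can verify this; the 2.3 Å helix radius is an assumed typical Cα value, not a figure from the text.

```python
import math

RISE_PER_RESIDUE = 1.5    # angstroms, from the text
TURN_PER_RESIDUE = 100.0  # degrees, from the text
RADIUS = 2.3              # angstroms, assumed typical C-alpha radius

def ideal_helix_coords(n_residues: int):
    """Cylindrical (x, y, z) positions of C-alpha atoms on an ideal helix."""
    coords = []
    for i in range(n_residues):
        theta = math.radians(i * TURN_PER_RESIDUE)
        coords.append((RADIUS * math.cos(theta),
                       RADIUS * math.sin(theta),
                       i * RISE_PER_RESIDUE))
    return coords

# Residues 3.6 apart in index are 360 degrees apart in angle, so the
# pitch is 3.6 * 1.5 = 5.4 angstroms, matching the stated value.
print(ideal_helix_coords(4)[-1])
```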
https://en.wikipedia.org/wiki/High-voltage%20interface%20relay | High-voltage interface relays (also known as interface relays, coupling relays, or insulating interfaces) are a special class of electrical relays designed to provide informational and electrical compatibility between functional components that are isolated from each other and do not allow a direct connection due to a high difference of potentials. A common design principle of these devices is a special galvanic isolation module between the input (control) and the output (switching) circuits of the relay. Interface relays are widely used in control and protection systems of high voltage (10-100 kV) electronic and electrophysical equipment and in high power installations.
Classification
Any electromagnetic relay has a certain level of isolation between the input and output circuits. However, in ordinary relays, this function is not prevalent and, hence, not considered in the existing system of relay classification. In interface relays, by contrast, the property of galvanic isolation (decoupling) between the input and output circuits is significantly bolstered, and the parameters of the galvanic isolation are of utmost importance from the standpoint of the functions performed by the relay. On the other hand, the parameters associated with switching capacity are secondary and can vary significantly among interface relays with the same level of galvanic decoupling.
In this respect, placing interface relays into the existing classes of ordinary relays is arguable. Rather, it seems more appropriate to treat them as a separate class of electrical relays and classify them according to characteristics of the galvanic decoupling unit:
by insulation voltage level:
low level (to 10 kV)
medium level (10 to 100 kV)
high level (above 100 kV)
by construction of galvanic isolation module:
opto-electronic
electromagnetic (transformer)
pneumatic
radio frequency
ultrasonic
electrohydraulic
with mechanical transmission
by operational (execution) speed:
super fast (up to 100 μsec)
fast (100 |
https://en.wikipedia.org/wiki/Myeloid-derived%20suppressor%20cell | Myeloid-derived suppressor cells (MDSC) are a heterogeneous group of immune cells from the myeloid lineage (a family of cells that originate from bone marrow stem cells).
MDSCs expand under pathologic conditions such as chronic infection and cancer, as a result of altered haematopoiesis. MDSCs differ from other myeloid cell types in that they have immunosuppressive activities, as opposed to immune-stimulatory properties. Similar to other myeloid cells, MDSCs interact with immune cell types such as T cells, dendritic cells, macrophages and natural killer cells to regulate their functions. Tumors with high levels of infiltration by MDSCs have been associated with poor patient outcome and resistance to therapies. MDSCs can also be detected in the blood. In patients with breast cancer, levels of MDSC in blood are about 10-fold higher than normal. The size of the myeloid suppressor compartment is considered to be an important factor in the success or failure of cancer immunotherapy, highlighting the importance of this cell type for human pathophysiology. A high level of MDSC infiltrate in the tumor microenvironment (TME) correlates with shorter survival times of patients with solid tumors and could mediate resistance to checkpoint inhibitor therapy. Studies are needed to determine whether MDSCs are a population of immature myeloid cells that have stopped differentiation or a distinct myeloid lineage.
Formation
MDSCs are formed from bone marrow precursors when myelopoiesis is disrupted, as occurs in several illnesses. In cancer patients, growing tumors produce cytokines and other substances that affect MDSC development. Tumor cell lines overexpressing colony-stimulating factors (G-CSF and GM-CSF) and IL6 promote development of MDSCs that have immune suppressive function in vivo. Other cytokines, including IL10, IL1, VEGF, and PGE2, have been associated with the formation and regulation of MDSCs. GM-CSF promotes synthesis of MDSCs from bone marrow, and the tr
https://en.wikipedia.org/wiki/Pneumobilia | Pneumobilia is the presence of gas in the biliary system. It is typically detected by ultrasound or a radiographic imaging exam, such as CT, or MRI. It is a common finding in patients that have recently undergone biliary surgery or endoscopic biliary procedure. While the presence of air within biliary system is not harmful, this finding may alternatively suggest a pathological process, such as a biliary-enteric anastomosis, an infection of the biliary system, an incompetent sphincter of Oddi, or spontaneous biliary-enteric fistula.
Causes
In a healthy individual with normal anatomy, there is no air within the biliary tree. When this finding is present, it may be secondary to:
Recent surgical or endoscopic biliary procedure (e.g. ERCP, biliary enteric anastomosis)
Incompetent sphincter of Oddi (e.g. passage of large gallstone, scarring related to chronic pancreatitis)
Spontaneous biliary enteric fistula (e.g. gallstone ileus)
Infection by gas-forming organisms (e.g. emphysematous cholangitis)
Congenital abnormalities
Other rare causes that have been reported include duodenal diverticulum, paraduodenal abscess, operative trauma, and carcinoma of the duodenum, stomach and bile duct. |
https://en.wikipedia.org/wiki/Jane%20Coffin%20Childs%20Memorial%20Fund%20for%20Medical%20Research | The Jane Coffin Childs Memorial Fund for Medical Research (the "JCC"), established in 1937, awards the "Jane Coffin Childs Postdoctoral Fellowship" for research in the medical and related sciences bearing on cancer.
History
The Fund was founded on June 11, 1937, by Starling Winston Childs and Alice S. Childs, in memory of Jane Coffin Childs. Its funds have been on the order of $3 million.
Description
Currently, the Foundation awards 20 to 30 fellowships per year. The fellowship is regarded as one of the most prestigious in the US, and successful postdoctoral candidates receive three years of support. The researchers and the research labs where the fellows conduct their projects have made major scientific contributions in areas such as the understanding of the human genome and the application of genetic approaches to understanding pathway regulation and stem cell activation. Nearly two dozen individuals associated with the Fund, as grantees, fellows, and advisers, have won Nobel Prizes in physiology, medicine, and chemistry.
Over the years, the Fund has attracted distinguished scientists for its Board of Scientific Advisers. As of 2020, 17 of the former Board members have earned the Nobel Prize.
Members of the Board of Scientific Advisers have included:
Ali Shilatifard
Elizabeth Blackburn
Peter Cresswell
Elaine Fuchs
Tony Hunter
Cynthia Kenyon
John Kuriyan
Susan McConnell
Thomas D. Pollard
Randy Schekman
Charles J. Sherr
Pamela A. Silver
Graham C. Walker
The Jane Coffin Childs Memorial Fund for Medical Research is dedicated to giving highly qualified scientists financial support to pursue research into the causes and origins of cancer. The goal of the Fund is to support the brightest individual scientists pursuing careers in cancer research while promoting the value and contribution of the individual, in keeping with the spirit in which the Fund was conceived.
Notabl |
https://en.wikipedia.org/wiki/Shprintzen%E2%80%93Goldberg%20syndrome | Shprintzen–Goldberg syndrome is a congenital multiple-anomaly syndrome whose key features are craniosynostosis, multiple abdominal hernias, cognitive impairment, and other skeletal malformations. Several reports have linked the syndrome to a mutation in the FBN1 gene, but these cases do not resemble those initially described in the medical literature in 1982 by Shprintzen and Goldberg, and Greally et al. in 1998 failed to find a causal link to FBN1. The cause of Shprintzen–Goldberg syndrome has since been identified as a mutation in the SKI gene, located on chromosome 1 at the p36 locus. The syndrome is rare, with fewer than 50 cases described in the medical literature to date.
Signs and Symptoms
People with Shprintzen–Goldberg syndrome can experience a range of symptoms that vary in severity. Due to craniosynostosis, people with SGS may have a long and narrow head, widely spaced, protruding eyes that may slant downwards, a high and narrow palate, a high and prominent forehead, a small lower jaw, and low-set, posteriorly rotated ears. Other skeletal abnormalities people with SGS may experience include joint hypermobility, clubfoot, scoliosis, camptodactyly, arachnodactyly, long limbs, and a chest that appears sunken or protruding. Other symptoms that may be experienced include brain abnormalities (e.g. hydrocephalus), developmental delays, intellectual disability, gastrointestinal problems (e.g. constipation, gastroparesis), abdominal or umbilical hernias, easily bruised skin, trouble breathing, and hypotonia. Cardiac issues occasionally seen in people with SGS include aortic aneurysm, aortic regurgitation, aortic root dilation, mitral valve regurgitation, and mitral valve prolapse.
See also
Craniosynostosis |
https://en.wikipedia.org/wiki/Kelp | Kelps are large brown algae or seaweeds that make up the order Laminariales. There are about 30 different genera. Despite its appearance, kelp is not a plant but a stramenopile, a group containing many protists.
Kelp grows in "underwater forests" (kelp forests) in shallow oceans, and is thought to have appeared in the Miocene, 5 to 23 million years ago. The organisms require nutrient-rich water with temperatures between 6 and 14 °C (43 and 57 °F). They are known for their high growth rate—the genera Macrocystis and Nereocystis can grow as fast as half a metre a day, ultimately reaching 30 to 80 metres (100 to 260 ft).
Through the 19th century, the word "kelp" was closely associated with seaweeds that could be burned to obtain soda ash (primarily sodium carbonate). The seaweeds used included species from both the orders Laminariales and Fucales. The word "kelp" was also used directly to refer to these processed ashes.
Description
In most kelp, the thallus (or body) consists of flat or leaf-like structures known as blades. Blades originate from elongated stem-like structures, the stipes. The holdfast, a root-like structure, anchors the kelp to the substrate of the ocean.
Gas-filled bladders (pneumatocysts) form at the base of blades of American species, such as Nereocystis luetkeana (Mertens) Postels & Ruprecht, to hold the kelp blades close to the surface.
Growth and reproduction
Growth occurs at the base of the meristem, where the blades and stipe meet. Growth may be limited by grazing. Sea urchins, for example, can reduce entire areas to urchin barrens. The kelp life cycle involves a diploid sporophyte and haploid gametophyte stage. The haploid phase begins when the mature organism releases many spores, which then germinate to become male or female gametophytes. Sexual reproduction then results in the beginning of the diploid sporophyte stage, which will develop into a mature individual.
The parenchymatous thalli are generally covered with a mucilage layer, rather than a cuticle.
Evolution of kelp structure
Under evoluti |
https://en.wikipedia.org/wiki/Combat%20information%20center | A combat information center (CIC) or action information centre (AIC) is a room in a warship or AWACS aircraft that functions as a tactical center and provides processed information for command and control of the near battlespace or area of operations. Within other military commands, rooms serving similar functions are known as command centers.
Regardless of the vessel or command locus, each CIC organizes and processes information into a form more convenient and usable by the commander in authority. Each CIC funnels communications and data received over multiple channels, which is then organized, evaluated, weighted and arranged to provide ordered timely information flow to the battle command staff under the control of the CIC officer and his deputies.
Overview
CICs are widely depicted in film and television treatments, frequently with large maps, numerous computer consoles and radar and sonar repeater displays or consoles, as well as the almost ubiquitous grease-pencil annotated polar plot on an edge-lighted transparent plotting board. At the time the CIC concept was born, the projected map-like polar display (PPI scope) with the ship at the center was making its way into radar displays, displacing the A-scope, which simply showed range as a time-delayed blip on the cathode-ray tube of an oscilloscope.
Such polar plots are used routinely in navigation and military action management to display time-stamped range and bearing information to the CIC decision makers. A single 'mark' (range and bearing datum) bears little actionable decision-making information by itself. A succession of such data tells much more, including whether the contact is closing or opening in range, an idea of its speed and direction (these are calculable, even from bearings-only data, given sufficient observations and knowledge of tactics), the relation to other contacts and their ranges and behaviors. Harvesting such data sets from the polar plots and computers (Common to sonar, ra |
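A hedged sketch of that harvesting step: two time-stamped range-and-bearing marks on the same contact already yield its relative course and speed, and whether it is closing. All values and conventions below are invented for illustration; bearings are taken in degrees clockwise from north.

```python
import math

def polar_to_xy(rng_yd: float, brg_deg: float):
    """Convert a range/bearing mark (bearing clockwise from north) to x, y."""
    theta = math.radians(brg_deg)
    return rng_yd * math.sin(theta), rng_yd * math.cos(theta)

def relative_track(mark1, mark2):
    """Relative course (deg) and speed from two (t_sec, range, bearing) marks."""
    (t1, r1, b1), (t2, r2, b2) = mark1, mark2
    x1, y1 = polar_to_xy(r1, b1)
    x2, y2 = polar_to_xy(r2, b2)
    dt = t2 - t1
    vx, vy = (x2 - x1) / dt, (y2 - y1) / dt
    course = math.degrees(math.atan2(vx, vy)) % 360.0
    speed = math.hypot(vx, vy)  # yards per second, relative to own ship
    return course, speed, r2 < r1  # True if the contact is closing

# Invented marks: contact at 10,000 yd / 045 deg, then 9,400 yd / 047 deg.
print(relative_track((0, 10000, 45), (60, 9400, 47)))
```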
https://en.wikipedia.org/wiki/Histochemistry%20and%20Cell%20Biology | Histochemistry and Cell Biology is a peer-reviewed scientific journal in the field of molecular histology and cell biology, publishing original articles dealing with the localization and identification of molecular components, metabolic activities, and cell biological aspects of cells and tissues. The journal covers the development, application, and evaluation of methods and probes that can be used in the entire area of histochemistry and cell biology. The journal is published by Springer Science+Business Media and the official journal of the Society for Histochemistry. Earlier names of the journal are Histochemie and Histochemistry. The editors-in-chief are Jürgen Roth (University of Zurich), Takehiko Koji (University of Nagasaki), Michael Schrader (University of Exeter) and Douglas J. Taatjes (University of Vermont). |
https://en.wikipedia.org/wiki/Gastrophrenic%20ligament | The postero-superior surface of the stomach is covered by peritoneum, except over a small area close to the cardiac orifice; this area is limited by the lines of attachment of the gastrophrenic ligament, and lies in apposition with the diaphragm, and frequently with the upper portion of the left suprarenal gland. |
https://en.wikipedia.org/wiki/Vadim%20G.%20Vizing | Vadim Georgievich Vizing (, ; 25 March 1937 – 23 August 2017) was a Soviet and Ukrainian mathematician known for his contributions to graph theory, and especially for Vizing's theorem stating that the edges of any simple graph with maximum degree Δ can be colored with at most Δ + 1 colors.
Biography
Vizing was born in Kiev on March 25, 1937. His mother was half-German, and because of this the Soviet authorities forced his family to move to Siberia in 1947. After completing his undergraduate studies in mathematics at Tomsk State University in 1959, he began his Ph.D. studies at the Steklov Institute of Mathematics in Moscow, on the subject of function approximation, but he left in 1962 without completing his degree. Instead, he returned to Novosibirsk, working from 1962 to 1968 at the Russian Academy of Sciences there and earning a Ph.D. in 1966. In Novosibirsk, he was a regular participant in A. A. Zykov's seminar in graph theory. After holding various additional positions, he moved to Odessa in 1974, where he taught mathematics for many years at the Academy for Food Technology (originally known as Одесский технологический институт пищевой промышленности им. М. В. Ломоносова, "Odessa Technological Institute of Food Industry named after Mikhail Lomonosov").
Research results
The result now known as Vizing's theorem, published in 1964, when Vizing was working in Novosibirsk, states that the edges of any graph with at most Δ edges per vertex can be colored using at most Δ + 1 colors. It is a continuation of the work of Claude Shannon, who showed that any multigraph can have its edges colored with at most (3/2)Δ colors (a tight bound, as a triangle with Δ/2 edges per side requires this many colors). Although Vizing's theorem is now standard material in many graph theory textbooks, Vizing had trouble publishing the result initially, and his paper on it appears in an obscure journal, Diskret. Analiz.
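For contrast, a naive greedy edge coloring is easy to write but is only guaranteed to use at most 2Δ − 1 colors; Vizing's constructive proof, which achieves Δ + 1, requires a more involved fan-recoloring argument. A minimal greedy sketch:

```python
# Naive greedy edge coloring (a sketch, NOT Vizing's algorithm: greedy
# only guarantees at most 2*max_degree - 1 colors, while Vizing's
# constructive proof achieves max_degree + 1).
from collections import defaultdict

def greedy_edge_coloring(edges):
    colors_at = defaultdict(set)  # vertex -> colors already used there
    coloring = {}
    for u, v in edges:
        c = 0
        while c in colors_at[u] or c in colors_at[v]:
            c += 1  # smallest color free at both endpoints
        coloring[(u, v)] = c
        colors_at[u].add(c)
        colors_at[v].add(c)
    return coloring

# A 4-cycle has maximum degree 2 and is edge-colorable with 2 colors:
print(greedy_edge_coloring([(0, 1), (1, 2), (2, 3), (3, 0)]))
```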
Vizing also made other contributions to graph theory and graph co |
https://en.wikipedia.org/wiki/Demiregular%20tiling | In geometry, the demiregular tilings are a set of Euclidean tessellations made from 2 or more regular polygon faces. Different authors have listed different sets of tilings. A more systematic approach, looking at symmetry orbits, is given by the 2-uniform tilings, of which there are 20. Some of the demiregular tilings are actually 3-uniform tilings.
20 2-uniform tilings
Grünbaum and Shephard enumerated the full list of 20 2-uniform tilings in Tilings and Patterns, 1987:
Ghyka's list (1946)
Ghyka lists 10 of them with 2 or 3 vertex types, calling them semiregular polymorph partitions.
Steinhaus's list (1969)
Steinhaus gives 5 examples of non-homogeneous tessellations of regular polygons beyond the 11 regular and semiregular ones. (All of them have 2 types of vertices, while one is 3-uniform.)
Critchlow's list (1970)
Critchlow identifies 14 demi-regular tessellations, with 7 being 2-uniform, and 7 being 3-uniform.
He codes letter names for the vertex types, with superscripts to distinguish face orders. He recognizes that A, B, C, D, F, and J cannot be part of continuous coverings of the whole plane.
https://en.wikipedia.org/wiki/Sodium%20hydride | Sodium hydride is the chemical compound with the empirical formula NaH. This alkali metal hydride is primarily used as a strong yet combustible base in organic synthesis. NaH is a saline (salt-like) hydride, composed of Na+ and H− ions, in contrast to molecular hydrides such as borane, methane, ammonia, and water. It is an ionic material that is insoluble in all solvents (other than molten Na), consistent with the fact that H− ions do not exist in solution. Because of the insolubility of NaH, all reactions involving NaH occur at the surface of the solid.
Basic properties and structure
NaH is produced by the direct reaction of hydrogen and liquid sodium. Pure NaH is colorless, although samples generally appear grey. NaH is around 40% denser than Na (0.968 g/cm3).
NaH, like LiH, KH, RbH, and CsH, adopts the NaCl crystal structure. In this motif, each Na+ ion is surrounded by six H− centers in an octahedral geometry. The ionic radii of H− (146 pm in NaH) and F− (133 pm) are comparable, as judged by the Na−H and Na−F distances.
"Inverse sodium hydride"
A very unusual situation occurs in a compound dubbed "inverse sodium hydride", which contains H+ and Na− ions. Na− is an alkalide, and this compound differs from ordinary sodium hydride in having a much higher energy content due to the net displacement of two electrons from hydrogen to sodium. A derivative of this "inverse sodium hydride" arises in the presence of the base [36]adamanzane. This molecule irreversibly encapsulates the H+ and shields it from interaction with the alkalide Na−. Theoretical work has suggested that even an unprotected protonated tertiary amine complexed with the sodium alkalide might be metastable under certain solvent conditions, though the barrier to reaction would be small and finding a suitable solvent might be difficult.
Applications in organic synthesis
As a strong base
NaH is a base of wide scope and utility in organic chemistry. As a superbase, it is capable of deprotonating a ra |
https://en.wikipedia.org/wiki/Integral%20Transforms%20and%20Special%20Functions | Integral Transforms and Special Functions is a monthly peer-reviewed scientific journal, specialised in topics of mathematical analysis, the theory of differential and integral equations, and approximation theory, but publishes also papers in other areas of mathematics. It is published by Taylor & Francis and the editor-in-chief is S.B. Yakubovich (University of Porto).
https://en.wikipedia.org/wiki/Topological%20vector%20lattice | In mathematics, specifically in functional analysis and order theory, a topological vector lattice is a Hausdorff topological vector space (TVS) that has a partial order making it into a vector lattice and that possesses a neighborhood base at the origin consisting of solid sets.
Ordered vector lattices have important applications in spectral theory.
Definition
If $X$ is a vector lattice then by the vector lattice operations we mean the following maps:
the three maps of $X$ to itself defined by $x \mapsto |x|$, $x \mapsto x^{+}$, $x \mapsto x^{-}$, and
the two maps from $X \times X$ into $X$ defined by $(x, y) \mapsto x \vee y$ and $(x, y) \mapsto x \wedge y$.
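These five operations are interdefinable through standard vector-lattice identities, stated here for reference; they are why continuity of the single map $x \mapsto |x|$, together with the TVS operations, already gives continuity of all the lattice operations:

```latex
% Standard identities relating the vector lattice operations:
x^{+} = x \vee 0, \qquad x^{-} = (-x) \vee 0, \qquad
x = x^{+} - x^{-}, \qquad |x| = x^{+} + x^{-},
\qquad
x \vee y = \tfrac{1}{2}\bigl(x + y + |x - y|\bigr), \qquad
x \wedge y = \tfrac{1}{2}\bigl(x + y - |x - y|\bigr).
```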
If $X$ is a TVS over the reals and a vector lattice, then $X$ is locally solid if and only if (1) its positive cone is a normal cone, and (2) the vector lattice operations are continuous.
If $X$ is a vector lattice and an ordered topological vector space that is a Fréchet space in which the positive cone is a normal cone, then the lattice operations are continuous.
If $X$ is a topological vector space (TVS) and an ordered vector space, then $X$ is called locally solid if $X$ possesses a neighborhood base at the origin consisting of solid sets.
A topological vector lattice is a Hausdorff TVS that has a partial order making it into a vector lattice and that is locally solid.
Properties
Every topological vector lattice has a closed positive cone and is thus an ordered topological vector space.
Let $\mathcal{B}$ denote the set of all bounded subsets of a topological vector lattice $X$ with positive cone $C$, and for any subset $S$, let $[S]_C$ be the $C$-saturated hull of $S$.
Then the topological vector lattice's positive cone $C$ is a strict $\mathcal{B}$-cone, where $C$ being a strict $\mathcal{B}$-cone means that $\{ [B]_C : B \in \mathcal{B} \}$ is a fundamental subfamily of $\mathcal{B}$ (that is, every $B \in \mathcal{B}$ is contained as a subset of some element of $\{ [B]_C : B \in \mathcal{B} \}$).
If a topological vector lattice $X$ is order complete, then every band is closed in $X$.
Examples
The Banach spaces $L^p(\mu)$ ($1 \leq p \leq \infty$) are Banach lattices under their canonical orderings.
These spaces are order complete for $1 \leq p < \infty$.
See also |
https://en.wikipedia.org/wiki/CcdA/CcdB%20Type%20II%20Toxin-antitoxin%20system | The CcdA/CcdB Type II Toxin-antitoxin system is one example of the bacterial toxin-antitoxin (TA) systems that encode two proteins, one a potent inhibitor of cell proliferation (toxin) and the other its specific antidote (antitoxin). These systems preferentially guarantee growth of plasmid-carrying daughter cells in a bacterial population by killing newborn bacteria that have not inherited a plasmid copy at cell division (post-segregational killing).
The ccd system (control of cell death) of the F plasmid encodes two proteins, the CcdB protein (101 amino acids; toxin) and the CcdA antidote (72 amino acids). The antidote prevents CcdB toxicity by forming a tight CcdA–CcdB complex.
Mechanism of action
The target of CcdB is the GyrA subunit of DNA gyrase, an essential type II topoisomerase in Escherichia coli. Gyrase alters DNA topology by effecting a transient double-strand break in the DNA backbone, passing the double helix through the gate and resealing the gap. The CcdB poison acts by trapping DNA gyrase in a cleaved complex, with the gyrase A subunit covalently bound to the cleaved DNA, causing DNA breakage and cell death in a way closely related to quinolone antibiotics.
In the absence of the antitoxin, the CcdB poison traps DNA-gyrase cleavable complexes, inducing breaks in the DNA and cell death.
Regulation of the ccd operon by the CcdA/CcdB complex depends on the ratio of the two molecules to each other in the complex: a (CcdA)2–(CcdB)2 complex binds the DNA of the operon, thus repressing transcription, but when CcdB is in excess of CcdA, de-repression occurs, whereas repression occurs when CcdA levels are greater than or equal to those of CcdB. In this model, by ensuring an antidote–toxin ratio greater than one, the mechanism might prevent the harmful effect of CcdB in plasmid-containing bacteria.
Comparison with parD
The ccd and parD systems are strikingly similar in terms of their structures and actions. The antitoxin protein of ea
https://en.wikipedia.org/wiki/Raptor%20persecution | In the United Kingdom, raptor persecution is a crime against wildlife. The offence includes poisoning, shooting, trapping, and nest destruction or disturbance of birds of prey.
International context
There is a long history of game bird shooting and hunting for sport, and the international trafficking of wildlife products, including raptors and raptor feathers, is a billion-dollar industry. Understanding and suppressing raptor persecution is complex, because the reasons behind it are shaped by local, cultural and historic conditions.
In some countries raptors are hunted for use in falconry. In China, people capture eagles and other raptors for falconry festivals that attract tourists. In Germany, buzzards and hawks are at risk, and the red kite is endangered. In the European Union, the EU Birds Directive (Council Directive 79/409/EEC 1979) regulates the hunting of all wild birds, stating that they must not be caught, killed or persecuted (with the exception of proper hunting). Campaigning groups in Germany have identified hunting and poaching hotspots on migratory routes across the continent. Persecution increased across Europe during the COVID-19 pandemic.
United Kingdom
Birds of prey are protected species in the United Kingdom, and criminal offences against them are covered by the Wildlife and Countryside Act 1981. But it is a crime that is difficult to monitor, due to the remoteness of many of the areas in which the birds live and to cultural and social pressures in certain sectors of the rural community which discourage reporting. Incidents of egg theft and illegal killing of birds of prey, including red kites, peregrine falcons and barn owls, increased in England during the lockdown periods of the COVID-19 pandemic, as the absence of the public emboldened gamekeepers and miscreants. In Wales, however, the number of offences decreased, as egg thefts fell dramatically there.
The Royal Society for the Protection of Birds (RSPB) began recording rapt |
https://en.wikipedia.org/wiki/Nuclear%20weapons%20debate | The nuclear weapons debate refers to the controversies surrounding the threat, use and stockpiling of nuclear weapons. Even before the first nuclear weapons had been developed, scientists involved with the Manhattan Project were divided over the use of the weapon. The only time nuclear weapons have been used in warfare was during the final stages of World War II when USAAF B-29 Superfortress bombers dropped atomic bombs on the Japanese cities of Hiroshima and Nagasaki in early August 1945. The role of the bombings in Japan's surrender and the U.S.'s ethical justification for them have been the subject of scholarly and popular debate for decades.
Nuclear disarmament refers both to the act of reducing or eliminating nuclear weapons and to the end state of a nuclear-free world. Proponents of disarmament typically condemn a priori the threat or use of nuclear weapons as immoral and argue that only total disarmament can eliminate the possibility of nuclear war. Critics of nuclear disarmament say that it would undermine deterrence and make conventional wars more likely, more destructive, or both. The debate becomes considerably more complex when considering various scenarios, for example total vs. partial or unilateral vs. multilateral disarmament.
Nuclear proliferation is a related concern, most commonly referring to the spread of nuclear weapons to additional countries, which increases the risks of nuclear war arising from regional conflicts. The diffusion of nuclear technologies, especially nuclear fuel cycle technologies for producing weapons-usable nuclear materials such as highly enriched uranium and plutonium, contributes to the risk of nuclear proliferation. These forms of proliferation are sometimes referred to as horizontal proliferation, to distinguish them from vertical proliferation, the expansion of the nuclear stockpiles of established nuclear powers.
History
Manhattan Project
Because the Manhattan Project was considered to be "top secret," there was no |
https://en.wikipedia.org/wiki/Pyrimidine%20metabolism | Pyrimidine biosynthesis occurs both in the body and through organic synthesis.
De novo biosynthesis of pyrimidine
De novo biosynthesis of a pyrimidine is catalyzed by three gene products: CAD, DHODH and UMPS. The first three enzymes of the process are all encoded by a single gene, CAD, which comprises carbamoyl phosphate synthetase II, aspartate carbamoyltransferase and dihydroorotase. Dihydroorotate dehydrogenase (DHODH), unlike CAD and UMPS, is a mono-functional enzyme and is localized in the mitochondria. UMPS is a bifunctional enzyme consisting of orotate phosphoribosyltransferase (OPRT) and orotidine monophosphate decarboxylase (OMPDC). Both CAD and UMPS are localized around the mitochondria, in the cytosol. In fungi, a similar protein exists but lacks the dihydroorotase function: another protein catalyzes the second step.
In other organisms (Bacteria, Archaea and the other Eukaryota), the first three steps are done by three different enzymes.
Pyrimidine catabolism
Pyrimidines are ultimately catabolized (degraded) to CO2, H2O, and urea. Cytosine can be broken down to uracil, which can be further broken down to N-carbamoyl-β-alanine, and then to beta-alanine, CO2, and ammonia by beta-ureidopropionase. Thymine is broken down into β-aminoisobutyrate which can be further broken down into intermediates eventually leading into the citric acid cycle.
β-aminoisobutyrate acts as a rough indicator of the rate of DNA turnover.
Regulations of pyrimidine nucleotide biosynthesis
Through negative feedback inhibition, the end-products UTP and UDP prevent the enzyme CAD from catalyzing the reaction in animals. Conversely, PRPP and ATP act as positive effectors that enhance the enzyme's activity.
Pharmacotherapy
Modulating pyrimidine metabolism pharmacologically has therapeutic uses and could be applied in cancer treatment.
Pyrimidine synthesis inhibitors are used in active moderate to severe rheumatoid arthritis and psoriatic arthritis, as well as in multiple scle |
https://en.wikipedia.org/wiki/Journal%20of%20Statistical%20Physics | The Journal of Statistical Physics is a biweekly publication containing both original and review papers, including book reviews. All areas of statistical physics as well as related fields concerned with collective phenomena in physical systems are covered.
The journal was established by Howard Reiss. Joel L. Lebowitz is the honorary editor.
In the period 1969-1979 the journal published about 65 articles per year, while in the 1980-2016 period it published approximately 220 articles per year. In total, as of 2017, more than 9,000 articles have appeared in the journal. According to Web of Science, as of July 2017 the 10 most cited articles to have appeared in the journal are:
Tsallis, C, Possible generalization of Boltzmann-Gibbs statistics, J. Stat. Phys., vol. 52(1-2), 479–487, (1988). Times Cited: 4,245
Feigenbaum, MJ, Quantitative universality for a class of non-linear transformations, J. Stat. Phys., vol. 19(1), 25–52, (1978). Times Cited: 2,230
Sauer, T; Yorke, JA; Casdagli, M, Embedology, J. Stat. Phys., vol. 65(3-4), 579–616, (1991). Times Cited: 1,319
Wertheim, MS, Fluids with highly directional attractive forces. 1. Statistical thermodynamics, J. Stat. Phys., vol. 35(1-2), 19–34, (1984). Times Cited: 1,232
Wertheim, MS, Fluids with highly directional attractive forces. 3. Multiple attraction sites, J. Stat. Phys., vol. 42(3-4), 459–476, (1986). Times Cited: 1,109
Feigenbaum, MJ, Universal metric properties of non-linear transformations, J. Stat. Phys., vol. 21(6), 669–706, (1979). Times Cited: 1,071
Wertheim, MS, Fluids with highly directional attractive forces. 2. Thermodynamic perturbation-theory and integral-equations, J. Stat. Phys., vol. 35(1-2), 35–47, (1984). Times Cited: 1,051
Wertheim, MS, Fluids with highly directional attractive forces. 4. equilibrium polymerization, J. Stat. Phys., vol. 42(3-4), 477–492, (1986). Times Cited: 984
Voorhees, PW, The theory of Ostwald Ripening, J. Stat. Phys., vol. 38(1-2), 231–252, (1985). Times Cited: 8 |
https://en.wikipedia.org/wiki/Primitive%20atrium | The primitive atrium is a stage in the embryonic development of the human heart. It grows rapidly and partially encircles the bulbus cordis; the groove against which the bulbus cordis lies is the first indication of a division into right and left atria.
The cavity of the primitive atrium becomes subdivided into right and left chambers by a septum, the septum primum, which grows downward into the cavity.
For a time the atria communicate with each other by an opening, the primary interatrial foramen, below the free margin of the septum.
This opening is closed by the union of the septum primum with the septum intermedium, and the communication between the atria is re-established through an opening which is developed in the upper part of the septum primum; this opening is known as the foramen ovale (ostium secundum of Born) and persists until birth.
A second septum, the septum secundum, semilunar in shape, grows downward from the upper wall of the atrium immediately to the right of the primary septum and foramen ovale.
Shortly after birth it fuses with the primary septum, and by this means the foramen ovale is closed, but sometimes the fusion is incomplete and the upper part of the foramen remains patent. The limbus fossæ ovalis denotes the free margin of the septum secundum.
Issuing from each lung is a pair of pulmonary veins; each pair unites to form a single vessel, and these in turn join in a common trunk which opens into the left atrium.
Subsequently, the common trunk and the two vessels forming it expand and form the vestibule or greater part of the atrium, the expansion reaching as far as the openings of the four vessels, so that in the adult all four veins open separately into the left atrium. |
https://en.wikipedia.org/wiki/List%20of%20types%20of%20numbers | Numbers can be classified according to how they are represented or according to the properties that they have.
Main types
Natural numbers (ℕ): The counting numbers {1, 2, 3, ...} are commonly called natural numbers; however, other definitions include 0, so that the non-negative integers {0, 1, 2, 3, ...} are also called natural numbers. Natural numbers including 0 are also sometimes called whole numbers.
Integers (ℤ): Positive and negative counting numbers, as well as zero: {..., −3, −2, −1, 0, 1, 2, 3, ...}.
Rational numbers (ℚ): Numbers that can be expressed as a ratio of an integer to a non-zero integer. All integers are rational, but there are rational numbers that are not integers, such as 1/2.
Real numbers (ℝ): Numbers that correspond to points along a line. They can be positive, negative, or zero. All rational numbers are real, but the converse is not true.
Irrational numbers: Real numbers that are not rational.
Imaginary numbers: Numbers that equal the product of a real number and the square root of −1. The number 0 is both real and purely imaginary.
Complex numbers (ℂ): Includes real numbers, imaginary numbers, and sums and differences of real and imaginary numbers.
Hypercomplex numbers include various number-system extensions: quaternions (ℍ), octonions (𝕆), and other less common variants.
p-adic numbers: Various number systems constructed using limits of rational numbers, according to notions of "limit" different from the one used to construct the real numbers.
Number representations
Decimal: The standard Hindu–Arabic numeral system using base ten.
Binary: The base-two numeral system used by computers, with digits 0 and 1.
Ternary: The base-three numeral system with 0, 1, and 2 as digits.
Quaternary: The base-four numeral system with 0, 1, 2, and 3 as digits.
Hexadecimal: Base 16, widely used by computer system designers and programmers, as it provides a more human-friendly representation of binary-coded values.
Octal: Base 8, occasionally used b |
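A short sketch showing the positional representations above with Python's built-in conversions; the to_base helper is written for the example and handles non-negative integers only.

```python
n = 2024  # an arbitrary example value

# Built-in conversions to the bases discussed above:
print(bin(n))   # binary      -> 0b11111101000
print(oct(n))   # octal       -> 0o3750
print(hex(n))   # hexadecimal -> 0x7e8

def to_base(n: int, base: int) -> str:
    """Represent a non-negative integer in an arbitrary base (2-36)."""
    digits = "0123456789abcdefghijklmnopqrstuvwxyz"
    out = ""
    while True:
        n, r = divmod(n, base)
        out = digits[r] + out
        if n == 0:
            return out

print(to_base(2024, 3))  # ternary -> 2202222
```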
https://en.wikipedia.org/wiki/Pickle%20meat | Pickle meat, also referred to as pickled pork, is a Louisiana cuisine specialty often served with red beans and rice.
See also
List of pickled foods
List of pork dishes |
https://en.wikipedia.org/wiki/Mushroom%20sauce | Mushroom sauce is a white or brown sauce prepared using mushrooms as its primary ingredient. It can be prepared in different styles using various ingredients, and is used to top a variety of foods.
Overview
In cooking, mushroom sauce is sauce with mushrooms as the primary ingredient. Often cream-based, it can be served with veal, chicken and poultry, pasta, and other foods such as vegetables. Some sources also suggest pairing mushroom sauce with fish.
It is made with mushrooms, butter, cream or olive oil, white wine (some variations may use a mellow red wine) and pepper with a wide variety of variations possible with additional ingredients such as shallot, garlic, lemon juice, flour (to thicken the sauce), chicken stock, saffron, basil, parsley, or other herbs. It is a variety of allemande sauce.
Mushroom sauce can also be prepared as a brown sauce. Canned mushrooms can be used to prepare the sauce.
For vegan dishes, the cream can be replaced with ground almonds mixed with water and evaporated until the needed consistency is reached.
History
Mushroom sauces have been cooked for hundreds of years. An 1864 cookbook includes two recipes, one a sauce tournée and the other a brown gravy.
United States President Dwight D. Eisenhower, a well-known steak lover, was reportedly quite fond of mushroom sauce.
Gallery
See also
List of mushroom dishes
List of sauces
Mushroom gravy
Mushroom ketchup |
https://en.wikipedia.org/wiki/Meat%20extract | Meat extract is highly concentrated meat stock, usually made from beef or chicken. It is used to add meat flavor in cooking, and to make broth for soups and other liquid-based foods.
Meat extract was invented by Baron Justus von Liebig, a German 19th-century organic chemist. Liebig specialised in chemistry and the classification of food and wrote a paper on how the nutritional value of meat is lost by boiling. Liebig's view was that meat juices, as well as the fibres, contained much important nutritional value and that these were lost by boiling or cooking in unenclosed vessels. Fuelled by a desire to help feed the undernourished, in 1840 he developed a concentrated beef extract, Extractum carnis Liebig, to provide a nutritious meat substitute for those unable to afford the real thing. However, it took 30 kg of meat to produce 1 kg of extract, making the extract too expensive.
Commercialization
Liebig's Extract of Meat Company
Liebig went on to co-found the Liebig's Extract of Meat Company (later Oxo) in London, whose factory, opened in 1865 in Fray Bentos, a port in Uruguay, took advantage of meat from cattle being raised for their hides — at one third the price of British meat. Before that, the company was Giebert et Compagnie (April 1863).
Bovril
In the 1870s, John Lawson Johnston invented 'Johnston's Fluid Beef', later renamed Bovril. Unlike Liebig's meat extract, Bovril also contained flavourings. It was manufactured in Argentina and Uruguay which could provide cheap cattle.
Effects
Liebig and Bovril were important contributors to the beef industry in South America.
Bonox
Created by Fred Walker and Company and on the market in 1919, Bonox is manufactured in Australia. When it was created, it was often offered as an alternative hot drink, it being common to offer "Coffee, tea or Bonox".
Today
Meat extracts have largely been supplanted by bouillon cubes and yeast extract. Some brands of meat extract, such as Oxo and Bovril, now contain yeast extrac |
https://en.wikipedia.org/wiki/Mucous%20sheaths%20of%20the%20tendons%20around%20the%20ankle | The mucous sheaths of the tendons around the ankle protect tendons in the ankle. All the tendons crossing the ankle-joint are enclosed for part of their length in mucous sheaths which have an almost uniform length of about 8 cm. each.
Front of the ankle
On the front of the ankle the sheath for the Tibialis anterior extends from the upper margin of the transverse crural ligament to the interval between the diverging limbs of the cruciate ligament; those for the Extensor digitorum longus and Extensor hallucis longus reach upward to just above the level of the tips of the malleoli, the former being the higher.
The sheath of the Extensor hallucis longus is prolonged on to the base of the first metatarsal bone, while that of the Extensor digitorum longus reaches only to the level of the base of the fifth metatarsal.
Medial side of the ankle
On the medial (closer to the center line of the body) side of the ankle the sheath for the Tibialis posterior extends highest up—to about 4 cm. above the tip of the malleolus—while below it stops just short of the tuberosity of the navicular.
The sheath for Flexor hallucis longus reaches up to the level of the tip of the malleolus, while that for the Flexor digitorum longus is slightly higher; the former is continued to the base of the first metatarsal, but the latter stops opposite the first cuneiform bone.
Lateral side of the ankle
On the lateral (outer) side of the ankle a sheath which is single for the greater part of its extent encloses the Peronæi longus and brevis.
It extends upward for about 4 cm. above the tip of the malleolus and downward and forward for about the same distance. |
https://en.wikipedia.org/wiki/ELM327 | The ELM327 is a programmed microcontroller produced for translating the on-board diagnostics (OBD) interface found in most modern cars. The ELM327 command protocol is one of the most popular PC-to-OBD interface standards and is also implemented by other vendors.
The original ELM327 was implemented on the PIC18F2480 microcontroller from Microchip Technology.
While in business, ELM Electronics also sold other variants of the product, with slightly different part numbers, which implemented only a subset of the OBD protocols.
In June 2020, ELM Electronics announced it was closing the business in June 2022.
Uses
The ELM327 abstracts the low-level protocol and presents a simple interface that can be called via a UART, typically by a hand-held diagnostic tool or a computer program connected by USB, RS-232, Bluetooth or Wi-Fi. New applications include smartphones.
There are a large number of programs available that connect to the ELM327.
The function of such software may include supplementary vehicle instrumentation, reporting and clearing of error codes.
ELM327 Functions:
Read diagnostic trouble codes, both generic and manufacturer-specific.
Clear some trouble codes and turn off the MIL ("Malfunction Indicator Light", more commonly known as the "Check Engine Light")
Display current sensor data
Engine RPM
Calculated Load Value
Coolant Temperature
Fuel System Status
Vehicle Speed
Short Term Fuel Trim
Long Term Fuel Trim
Intake Manifold Pressure
Timing Advance
Intake Air Temperature
Air Flow Rate
Absolute Throttle Position
Oxygen sensor voltages/associated short term fuel trims
Fuel Pressure
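A hedged sketch of how a program might read one of the sensor values above (engine RPM, standard OBD-II PID 0x0C) through an ELM327 over a serial port. The port name and baud rate are assumptions, pyserial is a third-party dependency, and real replies need more robust parsing.

```python
# Minimal sketch of querying engine RPM through an ELM327 (assumes the
# third-party pyserial package and an adapter on an invented port name).
import serial  # pip install pyserial

def elm_command(port: serial.Serial, cmd: str) -> str:
    """Send one command; the ELM327 ends each reply with a '>' prompt."""
    port.write((cmd + "\r").encode("ascii"))
    return port.read_until(b">").decode("ascii", errors="replace")

with serial.Serial("/dev/ttyUSB0", 38400, timeout=2) as port:  # assumed port/baud
    elm_command(port, "ATZ")           # reset the interface
    elm_command(port, "ATE0")          # command echo off
    print(elm_command(port, "010C"))   # mode 01, PID 0C = engine RPM

# A reply like "41 0C 1A F8" decodes as ((0x1A * 256) + 0xF8) / 4 = 1726 rpm,
# following the standard OBD-II scaling for PID 0C.
```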
Protocols supported
The protocols supported by ELM327 are:
SAE J1850 PWM (41.6 kbit/s)
SAE J1850 VPW (10.4 kbit/s)
ISO 9141-2 (5 baud init, 10.4 kbit/s)
ISO 14230-4 KWP (5 baud init, 10.4 kbit/s)
ISO 14230-4 KWP (fast init, 10.4 kbit/s)
ISO 15765-4 CAN (11 bit ID, 500 kbit/s)
ISO 15765-4 CAN (29 bit ID, 500 kbit/s)
ISO 15765-4 CAN (11 bit ID, 250 kbit/s)
ISO 15765-4 CAN (29 |
https://en.wikipedia.org/wiki/Guillemin%20effect | The Guillemin effect is one of the magnetomechanical effects. It is the tendency of a previously bent rod made of magnetostrictive material to straighten when subjected to a magnetic field applied in the direction of the rod's axis.
See also
Magnetomechanical effects
Magnetostriction
Magnetocrystalline anisotropy
Magnetic ordering |
https://en.wikipedia.org/wiki/The%20Unfinished%20Twentieth%20Century | In the 2001 book The Unfinished Twentieth Century, author Jonathan Schell suggests that an essential feature of the twentieth century was the development of humankind's capacity for self-destruction, with the rise in many forms of "policies of extermination". Schell goes on to suggest that the world now faces a clear choice between the abolition of all nuclear weapons, and full nuclearization, as the necessary technology and materials diffuse around the globe.
See also
List of books about nuclear issues
Nuclear disarmament |
https://en.wikipedia.org/wiki/List%20of%20foodborne%20illness%20outbreaks%20in%20the%20United%20States | In 1999, an estimated 5,000 deaths, 325,000 hospitalizations and 76 million illnesses were caused by foodborne illnesses within the US. The Centers for Disease Control and Prevention began tracking outbreaks starting in the 1970s. By 2012, the figures were roughly 130,000 hospitalizations and 3,000 deaths.
1850s
The Swill milk scandal led to the deaths of 8,000 babies in one year alone.
1919
35 people died in 1919 from botulism from improperly canned black olives produced in California.
1963
Two women died in 1963 from botulism from canned tuna fish from the Washington Packing Corporation.
1970s
1971
On July 2, the U.S. Food and Drug Administration (FDA) released a public warning after learning that a Westchester County, New York, man had died and his wife had become seriously ill from botulism after eating a portion of a can of Bon Vivant vichyssoise soup. 6,444 vichyssoise soup cans were recalled, including all Bon Vivant soups – more than a million cans in all. On July 7, the FDA ordered the shutdown of the company's Newark, New Jersey, plant. Out of 324 soup cans, five were found to be contaminated with botulinum toxin, all in the initial batch of vichyssoise that was recalled. The company filed for bankruptcy within a month of the start of the recall, and changed its business name to Moore & Co. The FDA resolved to destroy the company's stock of canned soup, but the company fought the proposed action in court until 1974.
1974
Salmonella in unpasteurized apple cider caused 200 illnesses in New Jersey.
1977
Botulism in peppers served at the Trini and Carmen restaurant in Pontiac, Michigan, caused the largest outbreak of botulism poisonings in the United States up to that time. The peppers were canned at home by a former employee. Fifty-nine people were sickened.
1978
Botulism in Clovis, New Mexico. 34 people who ate at a restaurant, Colonial Park Country Club, developed clinical botulism in the second-largest outbreak in United States his |
https://en.wikipedia.org/wiki/Computational%20lithography | Computational lithography (also known as computational scaling) is the set of mathematical and algorithmic approaches designed to improve the resolution attainable through photolithography. Computational lithography came to the forefront of photolithography technologies in 2008 when the semiconductor industry faced challenges associated with the transition to a 22 nanometer CMOS microfabrication process and has become instrumental in further shrinking the design nodes and topology of semiconductor transistor manufacturing.
History
Computational lithography means the use of computers to simulate printing of micro-lithography structures. Pioneering work was done from the early 1980s by Chris Mack at NSA, developing PROLITH, Rick Dill at IBM, and Andy Neureuther at the University of California, Berkeley. These tools were limited to lithography process optimization, as the algorithms were limited to a few square micrometres of resist. Commercial full-chip optical proximity correction (OPC), using model forms, was first implemented by TMA (now a subsidiary of Synopsys) and Numerical Technologies (also part of Synopsys) around 1997.
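As a toy illustration only (real lithography simulators model partially coherent optics and resist chemistry, not a Gaussian blur), printing can be caricatured as blurring the mask with a point-spread function and thresholding the resulting "aerial image":

```python
# Toy 'printing' simulation (illustrative only: real lithography simulators
# use rigorous optical and resist models, not a Gaussian blur).
import numpy as np
from scipy.signal import convolve2d  # assumed available

def gaussian_kernel(size: int, sigma: float) -> np.ndarray:
    """Normalized 2-D Gaussian point-spread function."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return k / k.sum()

def simulate_print(mask: np.ndarray, sigma: float = 2.0, threshold: float = 0.3):
    """Blur the binary mask with the assumed PSF, then threshold the
    'aerial image' to get the printed resist pattern."""
    aerial = convolve2d(mask, gaussian_kernel(9, sigma), mode="same")
    return aerial > threshold

# A narrow line on a toy mask: blurring shows why sub-resolution features
# distort, and why OPC pre-distorts the mask to compensate.
mask = np.zeros((32, 32))
mask[:, 15:17] = 1.0
print(simulate_print(mask).sum())  # area of the printed feature
```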
Since then the market and complexity have grown significantly. With the move to sub-wavelength lithography at the 180 nm and 130 nm nodes, RET techniques such as assist features and phase shift masks started to be used together with OPC. For the transition from 65 nm to 45 nm nodes, customers worried not only that design rules were insufficient to guarantee printing without yield-limiting hotspots, but also that tape-out might require thousands of CPUs or weeks of run time. This predicted exponential increase in computational complexity for mask synthesis on moving to the 45 nm process node spawned significant venture capital investment in design for manufacturing start-up companies.
A number of startup companies promoting their own disruptive solutions to this problem started to appear, with techniques ranging from custom hardware acceleration t
https://en.wikipedia.org/wiki/Soddy%20circles%20of%20a%20triangle | In geometry, the Soddy circles of a triangle are two circles associated with any triangle in the plane. Their centers are the Soddy centers of the triangle. They are all named for Frederick Soddy, who rediscovered Descartes' theorem on the radii of mutually tangent quadruples of circles.
Any triangle has three externally tangent circles centered at its vertices. Two more circles, its Soddy circles, are tangent to the three circles centered at the vertices; their centers are called Soddy centers. The line through the Soddy centers is the Soddy line of the triangle. These circles are related to many other notable features of the triangle. They can be generalized to additional triples of tangent circles centered at the vertices in which one circle surrounds the other two.
Construction
Let $A$, $B$, $C$ be the three vertices of a triangle, let $a$, $b$, $c$ be the lengths of the opposite sides, and let $s = \tfrac{1}{2}(a+b+c)$ be the semiperimeter. Then the three Soddy circles centered at $A$, $B$, $C$ have radii $s-a$, $s-b$, $s-c$, respectively.
By Descartes' theorem, two more circles, sometimes also called Soddy circles, are tangent to these three circles. The centers of these two tangent circles are the Soddy centers of the triangle.
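A short sketch computing the two Soddy curvatures from the side lengths via Descartes' theorem; the function name is invented, and a negative outer curvature means that circle encloses the other three.

```python
# Soddy circle curvatures for a triangle via Descartes' theorem
# (a sketch; the sign convention lets the outer tangent circle have
# negative curvature when it encloses the others).
import math

def soddy_curvatures(a: float, b: float, c: float):
    s = (a + b + c) / 2  # semiperimeter
    k1, k2, k3 = 1 / (s - a), 1 / (s - b), 1 / (s - c)  # vertex-circle curvatures
    root = 2 * math.sqrt(k1 * k2 + k2 * k3 + k3 * k1)
    inner = k1 + k2 + k3 + root   # inner Soddy circle (always positive)
    outer = k1 + k2 + k3 - root   # outer Soddy circle (may be negative)
    return inner, outer

# For the 3-4-5 right triangle: s = 6, vertex radii 3, 2, 1,
# giving curvatures 23/6 (inner) and -1/6 (outer, enclosing).
print(soddy_curvatures(3, 4, 5))
```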
Related features
Each of the three circles centered at the vertices crosses two sides of the triangle at right angles, at one of the three intouch points of the triangle, where its incircle is tangent to the side. The two circles tangent to these three circles are separated by the incircle, one interior to it and one exterior. The Soddy centers lie at the common intersections of three hyperbolas, each having two triangle vertices as foci and passing through the third vertex.
The inner Soddy center is an equal detour point: the polyline connecting any two triangle vertices through the inner Soddy point is longer than the line segment connecting those vertices directly, by an amount that does not depend on which two vertices are chosen. By Descartes' theorem, the inner Soddy circle's curvature is $(4R + r + 2s)/\Delta$, where $\Delta$ is the tri
https://en.wikipedia.org/wiki/Romanesco%20broccoli | Romanesco broccoli (also known as broccolo romanesco, romanesque cauliflower, or romanesco) is in fact a cultivar of the cauliflower (Brassica oleracea var. botrytis), not broccoli (Brassica oleracea var. italica). It is an edible flower bud of the species Brassica oleracea, which also includes regular broccoli and cauliflower. It is chartreuse in color, and has a form naturally approximating a fractal. Romanesco has a nutty flavor and a firmer texture than cauliflower and broccoli when cooked.
Description
Romanesco superficially resembles a cauliflower, but it is chartreuse in color, with the form of a natural fractal. Nutritionally, romanesco is rich in vitamin C, vitamin K, dietary fiber, and carotenoids.
Fractal structure
The inflorescence (the bud) is self-similar in character, with the branched meristems making up a logarithmic spiral, giving a form approximating a natural fractal; each bud is composed of a series of smaller buds, all arranged in yet another logarithmic spiral. This self-similar pattern continues at smaller levels. The pattern is only an approximate fractal since the pattern eventually terminates when the feature size becomes sufficiently small. The number of spirals on the head of Romanesco broccoli is a Fibonacci number.
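The spiral counting can be illustrated with Vogel's standard phyllotaxis model, which places bud n at angle n × 137.5˚ (the golden angle). Note that this sketch uses Vogel's Fermat-spiral radius r = c√n rather than a logarithmic spiral, but it reproduces the Fibonacci spiral counts; the scale constant is arbitrary.

```python
# Vogel's phyllotaxis model: bud n sits at angle n * golden angle and
# radius c * sqrt(n). A sketch of the spiral arrangement described in
# the text; the scale constant c is arbitrary.
import math

GOLDEN_ANGLE = math.radians(137.50776)

def bud_positions(n_buds: int, c: float = 1.0):
    """(x, y) positions of buds in a flat phyllotactic spiral."""
    pts = []
    for n in range(n_buds):
        r, theta = c * math.sqrt(n), n * GOLDEN_ANGLE
        pts.append((r * math.cos(theta), r * math.sin(theta)))
    return pts

# Counting the visible spiral arms in such a pattern yields consecutive
# Fibonacci numbers, as on a romanesco head.
print(bud_positions(5))
```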
The causes of its differences in appearance from the normal cauliflower and broccoli have been modeled as an extension of the pre-inflorescence stage of bud growth. A 2021 paper ascribed this phenomenon to perturbations of floral gene networks that cause the development of meristems into flowers to fail, the growth instead repeating itself in a self-similar way.
See also
Phyllotaxis |
https://en.wikipedia.org/wiki/Solid%20Snake | is a fictional character from the Metal Gear series created by Hideo Kojima and developed and published by Konami. He is depicted as a former Green Beret and highly skilled special operations soldier engaged in solo stealth and espionage missions who is often tasked with destroying models of the bipedal nuclear weapon-armed mecha known as Metal Gear. Controlled by the player, he must act alone, supported via radio by commanding officers and specialists. While his first appearances in the original Metal Gear games were references to Hollywood films, the Metal Gear Solid series has given a consistent design by artist Yoji Shinkawa alongside an established personality while also exploring his relationship with his mentor and father.
During the Metal Gear Solid games, the character has been voiced by voice actor Akio Ōtsuka in the Japanese version and by Canadian screenwriter and actor David Hayter in the English version. He also appears in Super Smash Bros. Brawl and Super Smash Bros. Ultimate. Considered to be one of the most iconic protagonists in video game history, Solid Snake has been acclaimed by critics, with his personality and both Ōtsuka's and Hayter's voice acting being noted as primary factors of the character's appeal.
Characteristics
In the early games, Solid Snake's visual appearances were references to popular actors. He was given his own consistent design in Metal Gear Solid. He also establishes Philanthropy, an anti-Metal Gear organization carrying the motto "To let the world be", with his friend Otacon. In Metal Gear Solid 4: Guns of the Patriots, he has access to OctoCamo, which allows him to change his appearance to match the surface he is leaning on, and FaceCamo, which can change his facial appearance to make him look like other characters, as well as his younger self.
Snake possesses an IQ of 180 and is fluent in six languages. Solid Snake has been on the battlefield for most of his life, and says that it is the only place he feels truly aliv |
https://en.wikipedia.org/wiki/Mail%C3%BCfterl | Mailüfterl is a nickname for the Austrian Binär dezimaler Volltransistor-Rechenautomat (binary-decimal fully transistorized computing automaton), an early transistorized computer. Other early transistorized computers included TRADIC, Harwell CADET and TX-0.
Mailüfterl was built from May 1956 to May 1958 at the Vienna University of Technology by Heinz Zemanek.
Heinz Zemanek had come to an agreement with Konrad Zuse under which Zuse's company, Zuse KG, would finance the work of Rudolf Bodo, who helped build the Mailüfterl; all circuit diagrams of the Z22 were supplied to Bodo and Zemanek, and after the Mailüfterl project Bodo was to work for Zuse KG to help build the transistorized Z23.
The first program, computation of the prime 5,073,548,261, was executed in May 1958. Completion of the software continued until 1961. The nickname was coined by Zemanek: Even if it cannot match the rapid calculation speed of American models called "Whirlwind" or "Typhoon", it will be enough for a "Wiener Mailüfterl" (Viennese May breeze).
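As a quick worked check (an illustrative reconstruction, assuming nothing about the original program), a few lines of Python confirm by trial division that 5,073,548,261 is prime:

```python
def is_prime(n):
    """Trial division up to sqrt(n); sufficient for a ~5e9 candidate."""
    if n < 2 or n % 2 == 0:
        return n == 2
    d = 3
    while d * d <= n:
        if n % d == 0:
            return False
        d += 2
    return True

print(is_prime(5_073_548_261))   # True: Mailüfterl's first computed result
```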
The computer has 3,000 transistors, 5,000 diodes, 1,000 assembly platelets, 100,000 solder joints, 15,000 resistors, 5,000 capacitors and about of wire. It is 4 meters (13') wide, 2.5 meters (8') high, and 50 centimeters (20") deep. The machine was comparable in calculating power to what were then considered small vacuum-tube computers. Calculations and representation of values worked using the BCD system.
Zemanek later said about his project that it was a "semi-illegal" undertaking of an assistant professor, which he and a group of students realized without official authorization, and hence without financial support, from the university. In 1954 he traveled to Philips in the Netherlands, where he asked for a donation in kind. Transistors, invented seven years before and just beginning to be available commercially, were very difficult to obtain in quantity at any price, but Zemanek received a commitment for 1,000 rather slow hearing-aid tr |
https://en.wikipedia.org/wiki/Proper%20right%20and%20proper%20left | Proper right and proper left are conceptual terms used to unambiguously convey relative direction when describing an image or other object. The "proper right" hand of a figure is the hand that would be regarded by that figure as its right hand. In a frontal representation, that appears on the left as the viewer sees it, creating the potential for ambiguity if the hand is just described as the "right hand".
The terms are mainly used in discussing images of humans, whether in art history, medical contexts such as x-ray images, or elsewhere, but they can be used in describing any object that has an unambiguous front and back (for example furniture) or, when describing things that move or change position, with reference to the original position. However, a more restricted use may be preferred, and the internal instructions for cataloguing objects in the "Inventory of American Sculpture" at the Smithsonian American Art Museum say that "The terms "proper right" and "proper left" should be used when describing figures only". In heraldry, right and left are always used in the meaning of proper right and proper left, as for the imaginary bearer of a coat of arms; to avoid confusion, the Latin terms dexter and sinister are often used.
The alternative is to use language that makes it clear that the viewer's perspective is being used. The swords in the illustrations might be described as: "to the left as the viewer sees it", "at the viewer's left", and so on. However, these formulations do not work for freestanding sculpture in the round, where the viewer might be at any position around the sculpture. A British 19th-century manual for military drill contrasts "proper left" with "present left" when discussing the orientation of formations performing intricate movements on a parade ground, "proper" meaning the orientation at the start of the drill.
The terms are analogous to the nautical port and starboard, where "port" is to a watercraft as "proper left" |
https://en.wikipedia.org/wiki/Bioisostere | In medicinal chemistry, bioisosteres are chemical substituents or groups with similar physical or chemical properties which produce broadly similar biological properties in the same chemical compound. In drug design, the purpose of exchanging one bioisostere for another is to enhance the desired biological or physical properties of a compound without making significant changes in chemical structure. The main use of this term and its techniques are related to pharmaceutical sciences. Bioisosterism is used to reduce toxicity, change bioavailability, or modify the activity of the lead compound, and may alter the metabolism of the lead.
Examples
Classical bioisosteres
Classical bioisosterism was originally formulated by James Moir and refined by Irving Langmuir as a response to the observation that different atoms with the same valence electron structure had similar biological properties.
For example, the replacement of a hydrogen atom with a fluorine atom at a site of metabolic oxidation in a drug candidate may prevent such metabolism from taking place. Because the fluorine atom is similar in size to the hydrogen atom the overall topology of the molecule is not significantly affected, leaving the desired biological activity unaffected. However, with a blocked pathway for metabolism, the drug candidate may have a longer half-life.
Procainamide, an amide, has a longer duration of action than procaine, an ester, because of the isosteric replacement of the ester oxygen with a nitrogen atom. Procainamide is a classical bioisostere because the valence electron structure of a disubstituted oxygen atom is the same as that of a trisubstituted nitrogen atom, as Langmuir showed.
Another example is seen in a series of anti-bacterial chalcones. By modifying certain substituents, the pharmacological activity of the chalcone and its toxicity are also modified.
Non-classical bioisosteres
Non-classical bioisosteres may differ in a multitude of ways from classical bioisosteres, but |
https://en.wikipedia.org/wiki/Stride%20%28software%29 | Stride was a cloud-based team business communication and collaboration tool, launched by Atlassian on 7 September 2017 to replace the cloud-based version of HipChat. Stride software was available to download onto computers running Windows, Mac or Linux, as well as Android, iOS smartphones, and tablets. Stride was bought by Atlassian's competitor Slack Technologies and was discontinued on February 15, 2019.
The features of Stride included chat rooms, one-on-one messaging, file sharing, 5 GB of file storage, group voice and video calling, built-in collaboration tools, and up to 25,000 messages of searchable history. Premium features included unlimited file storage, users, group chat rooms, file sharing and storage, apps, and history retention. The premium version, priced at $3/user/month, also included advanced meeting functionality like group screen sharing, remote desktop control, and dial-in/dial-out capabilities. Stride offered integrations with Atlassian's other products as well as other third-party applications listed in the Atlassian Marketplace, such as GitHub, Giphy, Stand-Bot and Google Calendar.
Stride offered additional features beyond messaging to improve efficiency and productivity. It aimed to reduce collaboration noise by introducing a "focus" mode, and eliminated the divisions between text chat, voice meetings, and videoconferencing by simplifying transitioning between these modes in the same channel.
On July 26, 2018, Atlassian announced that HipChat and Stride would be discontinued February 15, 2019, and that it had reached a deal to sell their intellectual property to Slack. Slack would pay an undisclosed amount over three years to assume the user bases of the services, and Atlassian would make a minority investment in Slack. The companies also announced a commitment to work on integration of Slack with Atlassian services.
See also
List of collaborative software |
https://en.wikipedia.org/wiki/Goncalo%20alves | Gonçalo alves is a hardwood (from the Portuguese name, Gonçalo Alves). It is sometimes referred to as tigerwood—a name that underscores the wood's often dramatic, contrasting color scheme, that some compare to rosewood.
While the sapwood is very light in color, the heartwood is a sombre brown, with dark streaks that give it a unique look. The wood's color deepens with exposure and age and even the plainer-looking wood has a natural luster.
Two species are usually listed as sources for gonçalo alves: Astronium fraxinifolium and Astronium graveolens, although other species in the genus may yield similar wood; the amount of striping that is present may vary. All trees grow in neotropical forests; Brazil is a major exporter of these woods. |
https://en.wikipedia.org/wiki/Biomineralising%20polychaete | Biomineralising polychaetes are polychaetes that produce minerals to harden or stiffen their own tissues (biomineralize).
The most important biomineralizing polychaetes are serpulids, sabellids and cirratulids. They secrete tubes of calcium carbonate. Serpulids have the most advanced biomineralization system among the annelids, and they possess very diverse tube ultrastructures. Serpulid tubes are composed of aragonite, calcite, or a mixture of both polymorphs. In addition to the tubes, some serpulid species secrete calcareous opercula. Some sabellids and cirratulids can also secrete aragonitic tubes; sabellid and cirratulid tubes have a spherulitic prismatic ultrastructure. There are thin organic sheets in serpulid tube mineral structures, which have evolved as an adaptation that strengthens the mechanical properties of the tubes. |
https://en.wikipedia.org/wiki/Hyperstability | In stability theory, hyperstability is a property of a system that requires the state vector to remain bounded if the inputs are restricted to belonging to a subset of the set of all possible inputs.
Definition: A system is hyperstable if there are two constants such that any state trajectory of the system satisfies the inequality: |
https://en.wikipedia.org/wiki/Infrasternal%20angle | The lower opening of the thorax is formed by the twelfth thoracic vertebra behind, by the eleventh and twelfth ribs at the sides, and in front by the cartilages of the tenth, ninth, eighth, and seventh ribs, which ascend on either side and form an angle, the infrasternal angle or subcostal angle, into the apex of which the xiphoid process projects.
Pregnancy causes the angle to increase from 68° to 103°. |
https://en.wikipedia.org/wiki/Berger%20code | In telecommunication, a Berger code is a unidirectional error detecting code, named after its inventor, J. M. Berger. Berger codes can detect all unidirectional errors. Unidirectional errors are errors that only flip ones into zeroes or only zeroes into ones, such as in asymmetric channels. The check bits of Berger codes are computed by counting all the zeroes in the information word, and expressing that number in natural binary. If the information word consists of n bits, then the Berger code needs k = ⌈log2(n + 1)⌉ check bits, giving a Berger code of length k + n. (In other words, the k check bits are enough to check up to n = 2^k − 1 information bits.)
Berger codes can detect any number of one-to-zero bit-flip errors, as long as no zero-to-one errors occurred in the same code word.
Similarly, Berger codes can detect any number of zero-to-one bit-flip errors, as long as no one-to-zero bit-flip errors occur in the same code word.
Berger codes cannot correct any error.
Like all unidirectional error detecting codes, Berger codes can also be used in delay-insensitive circuits.
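A minimal Python sketch of the encoding rule described above (berger_encode and berger_valid are hypothetical names, not from the source):

```python
def berger_encode(info):
    """Append the Berger check: the count of zeros in the information
    word, written in natural binary on k = ceil(log2(n + 1)) bits."""
    k = len(info).bit_length()               # equals ceil(log2(n + 1))
    return info + format(info.count('0'), '0{}b'.format(k))

def berger_valid(word, n):
    """A word passes if its check field still equals its zero count."""
    info, check = word[:n], word[n:]
    return int(check, 2) == info.count('0')

cw = berger_encode('1101000')                # 4 zeros -> check '100'
print(cw, berger_valid(cw, 7))               # 1101000100 True
# A 1 -> 0 flip in the data raises the zero count past the stored check:
print(berger_valid('0101000100', 7))         # False
```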
Unidirectional error detection
As stated above, Berger codes detect any number of unidirectional errors. For a given code word, if the only errors that have occurred are that some (or all) bits with value 1 have changed to value 0, then this transformation will be detected by the Berger code implementation. To understand why, consider that there are three such cases:
Some 1s bit in the information part of the code word have changed to 0s.
Some 1s bits in the check (or redundant) portion of the code word have changed to 0s.
Some 1s bits in both the information and check portions have changed to 0s.
For case 1, the number of 0-valued bits in the information section will, by definition of the error, increase. Therefore, our Berger check code will be lower than the actual 0-bit-count for the data, and so the check will fail.
For case 2, the number of 0-valued bits in the information section have stayed the same, but the v |
https://en.wikipedia.org/wiki/2014%20celebrity%20nude%20photo%20leak | On August 31, 2014, a collection of nearly five hundred private pictures of various celebrities, mostly women, many containing nudity, was posted on the imageboard 4chan and swiftly disseminated by other users on websites and social networks such as Imgur and Reddit. The leak has been popularly dubbed "The Fappening" and also "Celebgate". The images were initially believed to have been obtained via a breach of Apple's cloud services suite iCloud, or a security issue in the iCloud API which allowed attackers to make unlimited attempts at guessing victims' passwords. Apple claimed in a press release that access was gained via spear phishing attacks.
The incident was met with varied reactions from the media and fellow celebrities. Critics argued the leak was a major invasion of privacy for the photos' subjects, while some of the alleged subjects denied the images' authenticity. The leak also prompted increased concern from analysts surrounding the privacy and security of cloud computing services such as iCloud—with a particular emphasis on their use to store sensitive, private information.
Origin of the term
"The Fappening" is a jocular portmanteau coined by combining the words "fap", an internet slang term for masturbation, and the title of the 2008 film The Happening. Though the term is a vulgarism originating either with the imageboards where the pictures were initially posted or Reddit, mainstream media outlets soon adopted the term themselves, such as the BBC. The term has received criticism from journalists like Radhika Sanghani of The Daily Telegraph and Toyin Owoseje of the International Business Times, who said that the term not only trivialized the leak, but also, according to Sanghani, "[made] light of a very severe situation." Both articles used the term extensively to describe the event, including in their headlines.
"Celebgate" is a reference to the Watergate scandal.
Procurement and distribution
The images were obtained via the online storage offe |
https://en.wikipedia.org/wiki/Wronskian | In mathematics, the Wronskian (or Wrońskian) is a determinant introduced by the Polish mathematician Józef Hoene-Wroński. It is used in the study of differential equations, where it can sometimes show linear independence of a set of solutions.
Definition
The Wronskian of two differentiable functions f and g is W(f, g) = f g′ − g f′.
More generally, for n real- or complex-valued functions f1, …, fn, which are n − 1 times differentiable on an interval I, the Wronskian W(f1, …, fn) is a function on I defined by W(f1, …, fn)(x) = det[ f_j^(i−1)(x) ], where i, j run from 1 to n.
This is the determinant of the matrix constructed by placing the functions in the first row, the first derivatives of the functions in the second row, and so on through the (n − 1)-th derivative, thus forming a square matrix.
When the functions are solutions of a linear differential equation, the Wronskian can be found explicitly using Abel's identity, even if the functions are not known explicitly. (See below.)
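A short symbolic sketch using the sympy library (an illustrative choice, not prescribed by the source) builds the Wronskian matrix directly from the definition above:

```python
import sympy as sp

x = sp.symbols('x')

def wronskian(funcs, var):
    """Wronskian determinant: row i holds the i-th derivatives
    (i = 0..n-1) of each function, per the definition above."""
    n = len(funcs)
    matrix = sp.Matrix(n, n, lambda i, j: sp.diff(funcs[j], var, i))
    return sp.simplify(matrix.det())

# W(sin, cos) = sin*(-sin) - cos*cos = -1, nonzero everywhere,
# so sin and cos are linearly independent on any interval.
print(wronskian([sp.sin(x), sp.cos(x)], x))   # -> -1
```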
The Wronskian and linear independence
If the functions are linearly dependent, then so are the columns of the Wronskian (since differentiation is a linear operation), and the Wronskian vanishes. Thus, one may show that a set of differentiable functions is linearly independent on an interval by showing that their Wronskian does not vanish identically. It may, however, vanish at isolated points.
A common misconception is that a Wronskian that vanishes everywhere implies linear dependence, but Peano pointed out that the functions x² and x·|x| have continuous derivatives and their Wronskian vanishes everywhere, yet they are not linearly dependent in any neighborhood of 0. There are several extra conditions which combine with vanishing of the Wronskian in an interval to imply linear dependence.
Maxime Bôcher observed that if the functions are analytic, then the vanishing of the Wronskian in an interval implies that they are linearly dependent.
Bôcher also gave several other conditions for the vanishing of the Wronskian to imply linear dependence; for example, if the Wronskian of n functions is identically zero and the n Wronskians of n − 1 of them do n |
https://en.wikipedia.org/wiki/Signal%20reflection | In telecommunications, signal reflection occurs when a signal is transmitted along a transmission medium, such as a copper cable or an optical fiber. Some of the signal power may be reflected back to its origin rather than being carried all the way along the cable to the far end. This happens because imperfections in the cable cause impedance mismatches and non-linear changes in the cable characteristics. These abrupt changes in characteristics cause some of the transmitted signal to be reflected. In radio frequency (RF) practice this is often measured in a dimensionless ratio known as voltage standing wave ratio (VSWR) with a VSWR bridge. The ratio of energy bounced back depends on the impedance mismatch. Mathematically, it is defined using the reflection coefficient.
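A small Python sketch of these quantities (the 50-ohm line and 75-ohm load values are illustrative assumptions): the reflection coefficient follows from the impedance mismatch, and VSWR follows from its magnitude.

```python
def reflection_coefficient(z_load, z0=50.0):
    """Gamma = (Z_L - Z_0) / (Z_L + Z_0) for a line of impedance Z_0."""
    return (z_load - z0) / (z_load + z0)

def vswr(gamma):
    """Voltage standing wave ratio from |Gamma|."""
    m = abs(gamma)
    return (1 + m) / (1 - m)

# A 75-ohm load on a 50-ohm line: Gamma = 0.2, so |Gamma|^2 = 4% of the
# power is reflected, and the VSWR is 1.5.
g = reflection_coefficient(75.0)
print(g, vswr(g))   # 0.2 1.5
```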
Because the principles are the same, this concept is perhaps easiest to understand when considering an optical fiber. Imperfections in the glass create mirrors that reflect the light back along the fiber.
Impedance discontinuities cause attenuation, attenuation distortion, standing waves, ringing and other effects because a portion of a transmitted signal will be reflected back to the transmitting device rather than continuing to the receiver, much like an echo. This effect is compounded if multiple discontinuities cause additional portions of the remaining signal to be reflected back to the transmitter. This is a fundamental problem with the daisy chain method of connecting electronic components.
When a returning reflection strikes another discontinuity, some of the signal rebounds in the original signal direction, creating multiple echo effects. These forward echoes strike the receiver at different intervals making it difficult for the receiver to accurately detect data values on the signal. The effects can resemble those of jitter.
Because damage to the cable can cause reflections, an instrument called an electrical time-domain reflectometer (ETDR; for electrical cables) or an optical time- |
https://en.wikipedia.org/wiki/Spin%20ice | A spin ice is a magnetic substance that does not have a single minimal-energy state. It has magnetic moments (i.e. "spin") as elementary degrees of freedom which are subject to frustrated interactions. By their nature, these interactions prevent the moments from exhibiting a periodic pattern in their orientation down to a temperature much below the energy scale set by the said interactions. Spin ices show low-temperature properties, residual entropy in particular, closely related to those of common crystalline water ice. The most prominent compounds with such properties are dysprosium titanate (Dy2Ti2O7) and holmium titanate (Ho2Ti2O7). The orientation of the magnetic moments in spin ice resembles the positional organization of hydrogen atoms (more accurately, ionized hydrogen, or protons) in conventional water ice (see figure 1).
Experiments have found evidence for the existence of deconfined magnetic monopoles in these materials, with properties resembling those of the hypothetical magnetic monopoles postulated to exist in vacuum.
Technical description
In 1935, Linus Pauling noted that the hydrogen atoms in water ice would be expected to remain disordered even at absolute zero. That is, even upon cooling to zero temperature, water ice is expected to have residual entropy, i.e., intrinsic randomness. This is due to the fact that the hexagonal crystalline structure of common water ice contains oxygen atoms with four neighboring hydrogen atoms. In ice, for each oxygen atom, two of the neighboring hydrogen atoms are near (forming the traditional H2O molecule), and two are further away (being the hydrogen atoms of two neighboring water molecules). Pauling noted that the number of configurations conforming to this "two-near, two-far" ice rule grows exponentially with the system size, and, therefore, that the zero-temperature entropy of ice was expected to be extensive. Pauling's findings were confirmed by specific heat measurements, though pure crystals of water ice |
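A one-line worked version of Pauling's counting argument (a sketch; the 3/2-per-molecule configuration count is the standard result described above):

```python
import math

# Pauling's estimate: the ice rules leave ~(3/2)**N allowed proton
# configurations for N molecules, so the molar residual entropy is
# S = R * ln(3/2), close to the value from specific heat measurements.
R = 8.314  # gas constant, J/(mol K)
print(round(R * math.log(1.5), 2), "J/(mol K)")   # ~3.37
```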
https://en.wikipedia.org/wiki/MU%20puzzle | The MU puzzle is a puzzle stated by Douglas Hofstadter and found in Gödel, Escher, Bach involving a simple formal system called "MIU". Hofstadter's motivation is to contrast reasoning within a formal system (i.e., deriving theorems) against reasoning about the formal system itself. MIU is an example of a Post canonical system and can be reformulated as a string rewriting system.
The puzzle
Suppose there are the symbols M, I, and U which can be combined to produce strings of symbols. The MU puzzle asks one to start with the "axiomatic" string MI and transform it into the string MU using in each step one of the following transformation rules:
Nr. | Formal rule | Informal explanation | Example
1. | xI → xIU | Add a U to the end of any string ending in I | MI to MIU
2. | Mx → Mxx | Double the string after the M | MIU to MIUIU
3. | xIIIy → xUy | Replace any III with a U | MUIIIU to MUUU
4. | xUUy → xy | Remove any UU | MUUU to MU
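A minimal Python sketch of the four rules above (function names are illustrative, and the bounded search is an assumption for demonstration, not part of the puzzle statement):

```python
def successors(s):
    """All strings reachable from s by one application of the MIU rules."""
    out = set()
    if s.endswith('I'):                      # rule 1: xI -> xIU
        out.add(s + 'U')
    out.add(s[0] + s[1:] * 2)                # rule 2: Mx -> Mxx
    for i in range(len(s) - 2):              # rule 3: xIIIy -> xUy
        if s[i:i + 3] == 'III':
            out.add(s[:i] + 'U' + s[i + 3:])
    for i in range(len(s) - 1):              # rule 4: xUUy -> xy
        if s[i:i + 2] == 'UU':
            out.add(s[:i] + s[i + 2:])
    return out

# Bounded breadth-first search from "MI": "MU" never shows up, and the
# count of I's in every reachable string stays indivisible by 3.
seen = frontier = {'MI'}
for _ in range(6):
    frontier = {t for s in frontier for t in successors(s)} - seen
    seen = seen | frontier
assert 'MU' not in seen
assert all(t.count('I') % 3 != 0 for t in seen)
print(len(seen), 'strings reached, none of them MU')
```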
Solution
The puzzle cannot be solved: it is impossible to change the string MI into MU by repeatedly applying the given rules. In other words, MU is not a theorem of the MIU formal system. To prove this, one must step "outside" the formal system itself.
In order to prove assertions like this, it is often beneficial to look for an invariant; that is, some quantity or property that doesn't change while applying the rules.
In this case, one can look at the total number of I's in a string. Only the second and third rules change this number. In particular, rule two will double it while rule three will reduce it by 3. Now, the invariant property is that the number of I's is not divisible by 3:
In the beginning, the number of I's is 1, which is not divisible by 3.
Doubling a number that is not divisible by 3 does not make it divisible by 3.
Subtracting 3 from a number that is not divisible |
https://en.wikipedia.org/wiki/Security%20token | A security token is a peripheral device used to gain access to an electronically restricted resource. The token is used in addition to, or in place of, a password. It acts like an electronic key to access something. Examples of security tokens include wireless keycards used to open locked doors, or a banking token used as a digital authenticator for signing in to online banking, or signing a transaction such as a wire transfer.
Security tokens can be used to store information such as passwords, cryptographic keys used to generate digital signatures, or biometric data (such as fingerprints). Some designs incorporate tamper resistant packaging, while others may include small keypads to allow entry of a PIN or a simple button to start a generating routine with some display capability to show a generated key number. Connected tokens utilize a variety of interfaces including USB, near-field communication (NFC), radio-frequency identification (RFID), or Bluetooth. Some tokens have audio capabilities designed for those who are vision-impaired.
Password types
All tokens contain some secret information that is used to prove identity. There are four different ways in which this information can be used:
Static password token The device contains a password which is physically hidden (not visible to the possessor), but which is transmitted for each authentication. This type is vulnerable to replay attacks.
Synchronous dynamic password token A timer is used to rotate through various combinations produced by a cryptographic algorithm. The token and the authentication server must have synchronized clocks (a minimal sketch of this scheme appears after this list).
Asynchronous password token A one-time password is generated without the use of a clock, either from a one-time pad or cryptographic algorithm.
Challenge–response token Using public key cryptography, it is possible to prove possession of a private key without revealing that key. The authentication server encrypts a challenge (typically a random number, or at least data |
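As referenced in the list above, here is a minimal sketch of a synchronous dynamic password token in the style of the HOTP/TOTP standards (RFC 4226/6238); the shared secret and parameters are illustrative assumptions:

```python
import hashlib
import hmac
import struct
import time

def totp(secret, t=None, step=30, digits=6):
    """Time-based one-time password: HMAC over the current 30-second
    time-step counter, then RFC 4226 dynamic truncation to N digits."""
    counter = int((time.time() if t is None else t) // step)
    mac = hmac.new(secret, struct.pack('>Q', counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                  # dynamic truncation offset
    code = struct.unpack('>I', mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Token and server derive the same code from a shared secret and
# synchronized clocks, as the "synchronous dynamic" entry describes.
print(totp(b'shared-secret'))
```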
https://en.wikipedia.org/wiki/Biodemography%20of%20human%20longevity | Biodemography is a multidisciplinary approach, integrating biological knowledge (studies on human biology and animal models) with demographic research on human longevity and survival. Biodemographic studies are important for understanding the driving forces of the current longevity revolution (dramatic increase in human life expectancy), forecasting the future of human longevity, and identification of new strategies for further increase in healthy and productive life span.
Theory
Biodemographic studies have found a remarkable similarity in survival dynamics between humans and laboratory animals. Specifically, three general biodemographic laws of survival are found:
Gompertz–Makeham law of mortality
Compensation law of mortality
Late-life mortality deceleration (now disputed)
The Gompertz–Makeham law states that death rate is a sum of an age-independent component (Makeham term) and an age-dependent component (Gompertz function), which increases exponentially with age.
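A minimal numeric sketch of this law (the parameter values are illustrative assumptions, not fitted to any population):

```python
import math

def gompertz_makeham(age, A=1e-4, B=3e-5, gamma=0.085):
    """Hazard rate mu(x) = A + B*exp(gamma*x): the Makeham term A is
    age-independent; the Gompertz term grows exponentially with age."""
    return A + B * math.exp(gamma * age)

# With these assumed parameters the rate roughly doubles every
# ln(2)/gamma ~ 8 years once the Gompertz term dominates.
for age in (30, 50, 70, 90):
    print(age, round(gompertz_makeham(age), 5))
```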
The compensation law of mortality (late-life mortality convergence) states that the relative differences in death rates between different populations of the same biological species are decreasing with age, because the higher initial death rates are compensated by lower pace of their increase with age.
The disputed late-life mortality deceleration law states that death rates stop increasing exponentially at advanced ages and level off to the late-life mortality plateau. A consequence of this deceleration is that there would be no fixed upper limit to human longevity — no fixed number which separates possible and impossible values of lifespan. If true, this would challenge the common belief in the existence of a fixed maximal human life span.
Biodemographic studies have found that even genetically identical laboratory animals kept in constant environment have very different lengths of life, suggesting a crucial role of chance and early-life developmental noise in longevity determination. This leads |
https://en.wikipedia.org/wiki/Modulus%20of%20smoothness | In mathematics, moduli of smoothness are used to quantitatively measure smoothness of functions. Moduli of smoothness generalise modulus of continuity and are used in approximation theory and numerical analysis to estimate errors of approximation by polynomials and splines.
Moduli of smoothness
The modulus of smoothness of order n of a function f ∈ C[a, b] is the function ω_n : [0, ∞) → R defined by
ω_n(f, t) = sup { |Δ_h^n(f, x)| : 0 < h ≤ t, x, x + nh ∈ [a, b] } for 0 ≤ t ≤ (b − a)/n,
and
ω_n(f, t) = ω_n(f, (b − a)/n) for t > (b − a)/n,
where the finite difference (n-th order forward difference) is defined as
Δ_h^n(f, x) = Σ_{k=0}^{n} (−1)^{n−k} C(n, k) f(x + kh).
Properties
1. ω_n(f, 0) = 0.
2. ω_n(f, ·) is non-decreasing on [0, ∞).
3. ω_n(f, ·) is continuous on [0, ∞).
4. For m ∈ N and λ > 0 we have: ω_n(f, mt) ≤ m^n ω_n(f, t) and ω_n(f, λt) ≤ (λ + 1)^n ω_n(f, t).
5. ω_n(f, t) ≤ 2^(n−m) ω_m(f, t) for m ≤ n.
6. For r ∈ N, let W^r denote the space of continuous functions on [a, b] that have an (r − 1)-st absolutely continuous derivative on [a, b] and an essentially bounded r-th derivative.
If f ∈ W^r, then ω_{n+r}(f, t) ≤ t^r ω_n(f^{(r)}, t),
where f^{(r)} denotes the r-th derivative of f.
Applications
Moduli of smoothness can be used to prove estimates on the error of approximation. Due to property (6), moduli of smoothness provide more general estimates than the estimates in terms of derivatives.
For example, moduli of smoothness are used in the Whitney inequality to estimate the error of local polynomial approximation. Another application is given by the following more general version of the Jackson inequality:
For every natural number n, if f is a 2π-periodic continuous function, there exists a trigonometric polynomial T_n of degree at most n such that
|f(x) − T_n(x)| ≤ c(k) ω_k(f, 1/n) for all x,
where the constant c(k) depends on |
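A rough numerical sketch of the definition (grid resolution and the test function |x| are illustrative assumptions): it estimates ω_n(f, t) by sampling h and x, and shows that ω_2(|x|, t) decays like t rather than t².

```python
import numpy as np
from math import comb

def forward_diff(f, x, h, n):
    """n-th order forward difference: sum_k (-1)**(n-k) C(n,k) f(x + k h)."""
    return sum((-1) ** (n - k) * comb(n, k) * f(x + k * h) for k in range(n + 1))

def modulus_of_smoothness(f, n, t, a=-1.0, b=1.0, grid=2000):
    """Numerical omega_n(f, t): sup over 0 < h <= t and x with x+n*h in [a, b]."""
    best = 0.0
    for h in np.linspace(t / 50, t, 50):
        x = np.linspace(a, b - n * h, grid)
        best = max(best, float(np.max(np.abs(forward_diff(f, x, h, n)))))
    return best

# omega_2 of |x| is 2t (first-order decay): |x| has one "order" of smoothness.
for t in (0.4, 0.2, 0.1):
    print(t, modulus_of_smoothness(np.abs, 2, t))
```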
https://en.wikipedia.org/wiki/Implementation%20theory | Implementation theory is an area of research in game theory concerned with whether a class of mechanisms (or institutions) can be designed whose equilibrium outcomes implement a given set of normative goals or welfare criteria.
There are two general types of implementation problems: the economic problem of producing and allocating public and private goods, and the problem of choosing over a finite set of alternatives. In the case of producing and allocating public/private goods, solution concepts are focused on finding dominant strategies.
In his paper "Counterspeculation, Auctions, and Competitive Sealed Tenders", William Vickrey showed that if preferences are restricted to the case of quasi-linear utility functions then the mechanism dominant strategy is dominant-strategy implementable. "A social choice rule is dominant strategy incentive compatible, or strategy-proof, if the associated revelation mechanism has the property that honestly reporting the truth is always a dominant strategy for each agent." However, the payments to agents become large, sacrificing budget neutrality to incentive compatibility.
In a game where multiple agents are to report their preferences (or their type), it may be in the best interest of some agents to lie about their preferences. This may improve their payoff, but it may not be seen as a fair outcome to other agents.
Although largely theoretical, implementation theory may have profound implications on policy creation because some social choice rules may be impossible to implement under specific game conditions.
See also
Implementability (mechanism design) |
https://en.wikipedia.org/wiki/List%20of%20circle%20topics | This list of circle topics includes things related to the geometric shape, either abstractly, as in idealizations studied by geometers, or concretely in physical space. It does not include metaphors like "inner circle" or "circular reasoning" in which the word does not refer literally to the geometric shape.
Geometry and other areas of mathematics
Circle
Circle anatomy
Annulus (mathematics)
Area of a disk
Bipolar coordinates
Central angle
Circular sector
Circular segment
Circumference
Concentric
Concyclic
Degree (angle)
Diameter
Disk (mathematics)
Horn angle
Measurement of a Circle
List of topics related to π
Pole and polar
Power of a point
Radical axis
Radius
Radius of convergence
Radius of curvature
Sphere
Tangent lines to circles
Versor
Specific circles
Apollonian circles
Circles of Apollonius
Archimedean circle
Archimedes' circles – the twin circles doubtfully attributed to Archimedes
Archimedes' quadruplets
Circle of antisimilitude
Bankoff circle
Brocard circle
Carlyle circle
Circumscribed circle (circumcircle)
Midpoint-stretching polygon
Coaxal circles
Director circle
Fermat–Apollonius circle
Ford circle
Fuhrmann circle
Generalised circle
GEOS circle
Great circle
Great-circle distance
Circle of a sphere
Horocycle
Incircle and excircles of a triangle
Inscribed circle
Johnson circles
Magic circle (mathematics)
Malfatti circles
Nine-point circle
Orthocentroidal circle
Osculating circle
Riemannian circle
Schinzel circle
Schoch circles
Spieker circle
Tangent circles
Twin circles
Unit circle
Van Lamoen circle
Villarceau circles
Woo circles
Circle-derived entities
Apollonian gasket
Arbelos
Bicentric polygon
Bicentric quadrilateral
Coxeter's loxodromic sequence of tangent circles
Cyclic quadrilateral
Cycloid
Ex-tangential quadrilateral
Hawaiian earring
Inscribed angle
Inscribed angle theorem
Inversive distance
Inversive geometry
Irrational rotation
Lens (geometry)
Lune
Lune of |
https://en.wikipedia.org/wiki/Overpopulation | Overpopulation or overabundance is a phenomenon in which a species' population becomes larger than the carrying capacity of its environment. This may be caused by increased birth rates, lowered mortality rates, reduced predation or large scale migration, leading to an overabundant species and other animals in the ecosystem competing for food, space, and resources. The animals in an overpopulated area may then be forced to migrate to areas not typically inhabited, or die off without access to necessary resources.
Judgements regarding overpopulation always involve both facts and values. Animals often are judged overpopulated when their numbers cause impacts that people find dangerous, damaging, expensive, or otherwise harmful. Societies may be judged overpopulated when their human numbers cause impacts that degrade ecosystem services, decrease human health and well-being, or crowd other species out of existence.
Background
In ecology, overpopulation is a concept used primarily in wildlife management. Typically, an overpopulation causes the entire population of the species in question to become weaker, as no single individual is able to find enough food or shelter. Overpopulation is thus characterized by an increase in the diseases and parasite load upon the species in question, as the entire population is weaker. Other characteristics of overpopulation are lower fecundity, adverse effects on the environment (soil, vegetation or fauna) and lower average body weights. The worldwide increase of deer populations in particular, which usually show irruptive growth, is proving to be of ecological concern. Ironically, where ecologists were preoccupied with conserving or augmenting deer populations only a century ago, the focus has now shifted to the direct opposite, and ecologists are more concerned with limiting the populations of such animals.
Supplemental feeding of charismatic species or interesting game species is a major problem in causing overp |
https://en.wikipedia.org/wiki/Rodrigues%20parrot | The Rodrigues parrot or Leguat's parrot (Necropsittacus rodricanus) is an extinct species of parrot that was endemic to the Mascarene island of Rodrigues. The species is known from subfossil bones and from mentions in contemporary accounts. It is unclear to which other species it is most closely related, but it is classified as a member of the tribe Psittaculini, along with other Mascarene parrots. The Rodrigues parrot bore similarities to the broad-billed parrot of Mauritius, and may have been related. Two additional species have been assigned to its genus (N. francicus and N. borbonicus), based on descriptions of parrots from the other Mascarene islands, but their identities and validity have been debated.
The Rodrigues parrot was green, and had a proportionally large head and beak and a long tail. Its exact size is unknown, but it may have been around long. It was the largest parrot on Rodrigues, and it had the largest head of any Mascarene parrot. It may have looked similar to the great-billed parrot. By the time it was discovered, it frequented and nested on islets off southern Rodrigues, where introduced rats were absent, and fed on the seeds of the shrub Fernelia buxifolia. The species was last mentioned in 1761, and probably became extinct soon after, perhaps due to a combination of predation by introduced animals, deforestation, and hunting by humans.
Taxonomy
Birds thought to be the Rodrigues parrot were first mentioned by the French traveler François Leguat in his 1708 memoir, A New Voyage to the East Indies. Leguat was the leader of a group of nine French Huguenot refugees who colonised Rodrigues between 1691 and 1693 after they were marooned there. Subsequent accounts were written by the French sailor Julien Tafforet, who was marooned on the island in 1726, in his Relation de l'Île Rodrigue, and then by the French astronomer Alexandre Pingré, who travelled to Rodrigues to view the 1761 transit of Venus.
In 1867, the French zoologist Alphonse Milne- |
https://en.wikipedia.org/wiki/Pattern%20search%20%28optimization%29 | Pattern search (also known as direct search, derivative-free search, or black-box search) is a family of numerical optimization methods that does not require a gradient. As a result, it can be used on functions that are not continuous or differentiable. One such pattern search method is "convergence" (see below), which is based on the theory of positive bases. Optimization attempts to find the best match (the solution that has the lowest error value) in a multidimensional analysis space of possibilities.
History
The name "pattern search" was coined by Hooke and Jeeves. An early and simple variant is attributed to Fermi and Metropolis when they worked at the Los Alamos National Laboratory. It is described by Davidon, as follows:
Convergence
Convergence is a pattern search method proposed by Yu, who proved that it converges using the theory of positive bases. Later, Torczon, Lagarias and co-authors used positive-basis techniques to prove the convergence of another pattern-search method on specific classes of functions. Outside of such classes, pattern search is a heuristic that can provide useful approximate solutions for some problems but can fail on others; it is not guaranteed to converge to a solution, and pattern-search methods can converge to non-stationary points even on some relatively tame problems.
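A minimal sketch of a compass-style pattern search, one simple member of this family (step sizes, shrink factor, and the quadratic test function are illustrative assumptions):

```python
import numpy as np

def pattern_search(f, x0, step=1.0, shrink=0.5, tol=1e-6, max_iter=10_000):
    """Compass search: poll the 2n axis directions; move on improvement,
    otherwise shrink the step, until the step falls below tol."""
    x = np.asarray(x0, dtype=float)
    n, fx = len(x), f(np.asarray(x0, dtype=float))
    for _ in range(max_iter):
        if step < tol:
            break
        for d in np.vstack([np.eye(n), -np.eye(n)]):
            trial = x + step * d
            if (ft := f(trial)) < fx:
                x, fx = trial, ft
                break
        else:
            step *= shrink          # no polling direction improved
    return x, fx

# Minimize a smooth quadratic without using any gradient information.
print(pattern_search(lambda v: (v[0] - 1) ** 2 + (v[1] + 2) ** 2, [0.0, 0.0]))
```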
See also
Golden-section search conceptually resembles PS in its narrowing of the search range, only for single-dimensional search spaces.
Nelder–Mead method aka. the simplex method conceptually resembles PS in its narrowing of the search range for multi-dimensional search spaces but does so by maintaining n + 1 points for n-dimensional search spaces, whereas PS methods computes 2n + 1 points (the central point and 2 points in each dimension).
Luus–Jaakola samples from a uniform distribution surrounding the current position and uses a simple formula for exponentially decreasing the sampling range.
Random sear |
https://en.wikipedia.org/wiki/Conservation%20behavior | Conservation behavior is the interdisciplinary field about how animal behavior can assist in the conservation of biodiversity. It encompasses proximate and ultimate causes of behavior and incorporates disciplines including genetics, physiology, behavioral ecology, and evolution.
Introduction
Conservation behavior is aimed at applying an understanding of animal behavior to solve problems in the field of conservation biology. These are problems that may arise during conservation efforts such as captive breeding, species reintroduction, reserve connectivity, and wildlife management. By using patterns in animal behavior, biologists can be successful in these conservation efforts. This is done by understanding the proximate and ultimate causes of problems that arise. For example, understanding how proximate processes affect survival can help biologists train captive-reared animals to recognize predators post-release. Ultimate causes also have a clear benefit to conservation. For example, understanding the social relationships that lead to fitness can help biologists manage wildlife that exhibit infanticide. Conservation projects may have a better chance of being successful if biologists search for a deeper understanding of how animals make adaptive decisions.
While animal behavior and conservation biology are conceptually intertwined, the idea of using animal behavior in conservation management was only first used explicitly in 1974. Since then, conservation behavior has slowly gained prominence with a surge of publications in the field since the mid-1990s and the Animal Behavior Society even forming a committee in support of conservation behavior. A number of studies have shown that animal behavior can be an important consideration during conservation projects. More importantly, ignorance of animal behavior in conservation projects may lead to their failure. Recent calls for stronger integration of behavior and physiology to advance conservation science emphasiz |
https://en.wikipedia.org/wiki/Winding%20machine | A winding machine or winder is a machine for wrapping string, twine, cord, thread, yarn, rope, wire, ribbon, tape, etc. onto a spool, bobbin, reel, etc.
In textiles
Winders are used heavily in textile manufacturing, especially in preparation to weaving where the yarn is wound onto a bobbin and then used in a shuttle. Ball winders, such as the Scottish Liaghra, are another type of winder that wind the yarn up from skein form into balls. Ball winders are commonly used by knitters and occasionally spinners.
Mechanized winders
Winders have a center roll (a bobbin, spool, reel, belt-winding shell, etc.) on which the material is wound up. Often there are metal bars that travel through the center of the roll and are shaped according to their intended purpose. A circular bar facilitates greater speed, while a square bar provides a greater potential for torque. Edge sensors are used to sense how full the center roll is. They are mounted on adjustable slides to accommodate many different widths, as the width increases as the center roll is filled. The sensitivity of the sensor depends on the required speed of operation.
Types of winding machine
Winding machines are classified based on the materials they are winding, some major types are
Coil winding machine
Film winding machine
Rope winding machine
Paper winding machine
Foil winding machine
Roll slitting machines
Spool winding machine
Cop winding machine
On the basis of their working principle, winders are classified as follows:
Shaft or shaft-less winding machine
Cantilevered turret winding machine
Carriage style winding machine
Available features
Automatic splice initiation
The benefits of automatic splicing add up to significantly increased productivity, greater quality control and reduced waste. The feature consists of a tail grabber and an automatic, diameter-calculated splice-initiation technique. The precision shear wheel and anvil mechanism guarantee a clean cut and no overlap. The splicing technique is divided into two major categories |
https://en.wikipedia.org/wiki/Physics%20Essays | Physics Essays is a quarterly journal supposedly covering theoretical and experimental physics. It was established in 1988 and the editor-in-chief is Emilio Panarella.
The journal has a reputation for being a "free forum where extravagant views on physics (in particular, those involving parapsychology) are welcome". The journal has been accused of charging authors for publication without disclosing the fees up front.
In the 1990s, the journal was published by University of Toronto Press. Beginning in 2009, and for some period of time, the journal was affiliated with the American Institute of Physics, which managed subscriptions.
In 2003, the journal published a paper describing Randell Mills' hydrino theory, which is both at odds with quantum mechanics and widely rejected by physicists. In 2004, the journal published an author from Himachal Pradesh who claimed to prove that the usual mathematical expression of mass-energy equivalence was not valid in general, a claim he said was being ignored by the wider scientific community. In 2017, the journal published an article from an amateur physicist who claimed to redefine the elementary charge and eliminate the fine structure constant, directly in contradiction to mainstream physics.
Abstracting and indexing
The journal is indexed and abstracted in the following bibliographic databases:
Chemical Abstracts Service
EBSCO databases
Emerging Sources Citation Index
INSPIRE-HEP
The journal was indexed in Current Contents/Physical, Chemical, and Earth Sciences and the Science Citation Index Expanded until it was dropped in 2015. Its last impact factor, according to the 2014 Journal Citation Reports, was 0.245 for 2013. Scopus similarly dropped its coverage in 2017, at the time ranking 174 out of 205 in the category "General Physics and Astronomy". For most recent years, until it was de-listed by Scopus in 2017, it was ranked by SCImago Journal Rank as a fourth-quartile journal under the category "Physics and Astronomy |
https://en.wikipedia.org/wiki/Frequency%20changer | A frequency changer or frequency converter is an electronic or electromechanical device that converts alternating current (AC) of one frequency to alternating current of another frequency. The device may also change the voltage, but if it does, that is incidental to its principal purpose, since voltage conversion of alternating current is much easier to achieve than frequency conversion.
Traditionally, these devices were electromechanical machines called a motor-generator set. Devices with mercury-arc rectifiers or vacuum tubes were also in use. With the advent of solid state electronics, it has become possible to build completely electronic frequency changers. These devices usually consist of a rectifier stage (producing direct current) which is then inverted to produce AC of the desired frequency. The inverter may use thyristors, IGCTs or IGBTs. If voltage conversion is desired, a transformer will usually be included in either the AC input or output circuitry and this transformer may also provide galvanic isolation between the input and output AC circuits. A battery may also be added to the DC circuitry to improve the converter's ride-through of brief outages in the input power.
Frequency changers vary in power-handling capability from a few watts to megawatts.
Applications
Frequency changers are used for converting bulk AC power from one frequency to another, when two adjacent power grids operate at different utility frequency.
A variable-frequency drive (VFD) is a type of frequency changer used for speed control of AC motors such as those used for pumps and fans. The speed of a synchronous AC motor is dependent on the frequency of the AC power supply, so changing the frequency allows the motor speed to be changed. This allows fan or pump output to be varied to match process conditions, which can provide energy savings.
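A one-formula worked example of why frequency sets speed, using the standard synchronous-speed relation N = 120·f/P (the 4-pole machine is an illustrative assumption):

```python
def synchronous_speed_rpm(freq_hz, poles):
    """Synchronous speed in rpm: N = 120 * f / P."""
    return 120.0 * freq_hz / poles

# A 4-pole motor runs at 1500 rpm on 50 Hz and 1800 rpm on 60 Hz;
# a VFD varies the supply frequency to set any speed in between.
for f in (25, 50, 60):
    print(f, synchronous_speed_rpm(f, poles=4))
```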
A cycloconverter is also a type of frequency changer. Unlike a VFD, which is an indirect frequency changer since it uses an AC-DC stage and then a D |
https://en.wikipedia.org/wiki/NAP1L1 | Nucleosome assembly protein 1-like 1 is a protein that in humans is encoded by the NAP1L1 gene.
This gene encodes a member of the nucleosome assembly protein (NAP) family. This protein participates in DNA replication and may play a role in modulating chromatin formation and contribute to the regulation of cell proliferation. Alternative splicing of this gene results in several transcript variants; however, not all have been fully described. |
https://en.wikipedia.org/wiki/Grid%20%28spatial%20index%29 | In the context of a spatial index, a grid or mesh is a regular tessellation of a manifold or 2-D surface that divides it into a series of contiguous cells, which can then be assigned unique identifiers and used for spatial indexing purposes. A wide variety of such grids have been proposed or are currently in use, including grids based on "square" or "rectangular" cells, triangular grids or meshes, hexagonal grids, and grids based on diamond-shaped cells. A "global grid" is a kind of grid that covers the entire surface of the globe.
Types of grids
Square or rectangular grids are frequently used for purposes such as translating spatial information expressed in Cartesian coordinates (latitude and longitude) into and out of the grid system. Such grids may or may not be aligned with the grid lines of latitude and longitude; for example, Marsden Squares, World Meteorological Organization squares, c-squares and others are aligned, while the Universal Transverse Mercator coordinate system and various local grid based systems such as the British national grid reference system are not. In general, these grids fall into two classes, "equal angle" or "equal area". Grids that are "equal angle" have cell sizes that are constant in degrees of latitude and longitude but unequal in area (particularly with varying latitude). Grids that are "equal area" (statistical grids) have cell sizes that are constant in distance on the ground (e.g. 100 km, 10 km) but not in degrees of longitude.
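A minimal sketch of an equal-angle cell indexing scheme of the kind described above (the 1-degree cell size and the function name are illustrative assumptions):

```python
def equal_angle_cell(lat, lon, cell_deg=1.0):
    """Map a (lat, lon) pair to the row/column of an equal-angle grid cell,
    plus a single integer identifier usable as a spatial-index key."""
    row = int((lat + 90.0) // cell_deg)
    col = int((lon + 180.0) // cell_deg)
    cols = int(360.0 / cell_deg)
    return row, col, row * cols + col

# Nearby points land in the same 1-degree row, adjacent columns.
print(equal_angle_cell(51.5, -0.1))
print(equal_angle_cell(51.9, -1.5))
```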
A commonly used triangular grid is the "Quaternary Triangular Mesh" (QTM), which was developed by Geoffrey Dutton in the early 1980s. It eventually resulted in a thesis entitled "A Hierarchical Coordinate System for Geoprocessing and Cartography" that was published in 1999.
This grid was also employed as the basis of the rotatable globe that forms part of the Microsoft Encarta product.
Hexagonal grids may also be used. In general, triangular and hexagonal grids are constructed so |