| source | text |
|---|---|
https://en.wikipedia.org/wiki/Continuous%20q-Laguerre%20polynomials | In mathematics, the continuous q-Laguerre polynomials are a family of basic hypergeometric orthogonal polynomials in the basic Askey scheme.
Definition
The polynomials are given in terms of basic hypergeometric functions and the q-Pochhammer symbol by: |
https://en.wikipedia.org/wiki/Martinez%20beavers | The Martinez beavers are a family of North American beavers living in Alhambra Creek in downtown Martinez, California. Best known as the longtime home of famed 19th/20th-century naturalist John Muir, Martinez has become a national example of urban stream restoration utilizing beavers as ecosystem engineers.
In late 2006, a male and female beaver arrived in Alhambra Creek and produced four kits over the course of the summer. After the city of Martinez decided to exterminate the beavers, local conservationists formed an organization called Worth a Dam, and as a result of their activism the decision was overturned. Subsequently, wildlife populations have increased in diversity along the Alhambra Creek watershed, most likely due to the dams maintained by the beavers.
Alhambra Creek
In late 2006, Alhambra Creek, which runs through the city of Martinez, was adopted by two beavers. The beavers built a dam 30 feet wide and at one time 6 feet high, and chewed through half the willows and other creekside landscaping the city had planted as part of its $9.7 million 1999 flood-improvement project (after a flood in 1997).
In November 2007, the city declared that the risk of flooding from the dam necessitated removal of the beavers. Since the California Department of Fish and Game (DFG) does not allow relocation, extermination was the only solution. Residents voiced objections, prompting a beaver vigil and rally, as well as local media interest. Within three days of the announcement of the decision to exterminate the beavers, downtown Martinez was invaded by news cameras and curious spectators. Because of the public outcry, the city obtained an exception from DFG, who pledged to pay for their successful relocation. This 11th-hour decision relieved much of the tension, but residents continued to press the city to allow the beavers to stay. In a heavily-attended city council meeting, the city was alternately praised for gaining the DFG exception and chided for not resea |
https://en.wikipedia.org/wiki/Hypercomplex%20analysis | In mathematics, hypercomplex analysis is the extension of complex analysis to the hypercomplex numbers. The first instance is functions of a quaternion variable, where the argument is a quaternion (in this case, the sub-field of hypercomplex analysis is called quaternionic analysis). A second instance involves functions of a motor variable where arguments are split-complex numbers.
In mathematical physics, there are hypercomplex systems called Clifford algebras. The study of functions with arguments from a Clifford algebra is called Clifford analysis.
A matrix may be considered a hypercomplex number. For example, the study of functions of 2 × 2 real matrices shows that the topology of the space of hypercomplex numbers determines the function theory. Functions such as square root of a matrix, matrix exponential, and logarithm of a matrix are basic examples of hypercomplex analysis.
The function theory of diagonalizable matrices is particularly transparent since they have eigendecompositions. Suppose T = λ1E1 + λ2E2 + ... + λnEn, where the Ei are projections onto the eigenspaces. Then for any polynomial f, f(T) = f(λ1)E1 + f(λ2)E2 + ... + f(λn)En.
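As a sketch of this functional calculus, the matrix and projections below are an illustrative, hand-computed example (not from the source): T has eigenvalues 2 and 3, with spectral projections E1 = (T − 3I)/(2 − 3) and E2 = (T − 2I)/(3 − 2).

```python
def mat_add(A, B):
    # Entrywise sum of two 2x2 matrices.
    return [[A[i][j] + B[i][j] for j in range(2)] for i in range(2)]

def mat_scale(c, A):
    # Scalar multiple of a 2x2 matrix.
    return [[c * A[i][j] for j in range(2)] for i in range(2)]

def mat_mul(A, B):
    # Product of two 2x2 matrices.
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

# Hypothetical diagonalizable matrix with eigenvalues 2 and 3.
T = [[2, 1], [0, 3]]
# Spectral projections computed by hand; note E1 + E2 = I.
E1 = [[1, -1], [0, 0]]
E2 = [[0, 1], [0, 1]]

def f_of_T(f):
    # f(T) = f(2)*E1 + f(3)*E2 for any polynomial f.
    return mat_add(mat_scale(f(2), E1), mat_scale(f(3), E2))
```

For f(t) = t², the spectral formula reproduces the ordinary matrix square T·T, illustrating why the function theory of diagonalizable matrices reduces to scalar functions of the eigenvalues.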
The modern terminology for a "system of hypercomplex numbers" is an algebra over the real numbers, and the algebras used in applications are often Banach algebras since Cauchy sequences can be taken to be convergent. Then the function theory is enriched by sequences and series. In this context the extension of holomorphic functions of a complex variable is developed as the holomorphic functional calculus. Hypercomplex analysis on Banach algebras is called functional analysis.
See also
Giovanni Battista Rizza |
https://en.wikipedia.org/wiki/J.%20M.%20R.%20Parrondo | Juan Manuel Rodríguez Parrondo (born 9 January 1964) is a Spanish physicist. He is best known for inventing Parrondo's paradox and for his contributions to the thermodynamics of information.
Biography
Juan Parrondo received his bachelor's degree in 1987 and defended his Ph.D. at the Complutense University of Madrid in 1992. He started a permanent position at UCM in 1996. In the same year he devised the well-known Parrondo's paradox, according to which two losing games can combine into a winning one. Since then, the paradox has been widely applied in biology and finance. He has also conducted extensive research in information theory, mostly treating information as a thermodynamic quantity that, as a result of ergodicity breaking, changes the entropy of a system.
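A minimal simulation of the paradox, assuming the standard parameters of the Harmer–Abbott formulation (bias ε = 0.005; game B keyed to capital modulo 3): each game is losing on its own, yet randomly alternating between them produces a winning game.

```python
import random

EPS = 0.005  # small bias that makes each game losing on its own

def game_a(capital):
    # Game A: a coin with win probability just under 1/2.
    return capital + (1 if random.random() < 0.5 - EPS else -1)

def game_b(capital):
    # Game B: a bad coin when capital is a multiple of 3, a good coin otherwise.
    p = (0.1 - EPS) if capital % 3 == 0 else (0.75 - EPS)
    return capital + (1 if random.random() < p else -1)

def game_random_mix(capital):
    # Play A or B with equal probability each round.
    return game_a(capital) if random.random() < 0.5 else game_b(capital)

def simulate(step, rounds=200_000, seed=42):
    # Track capital over many rounds starting from zero.
    random.seed(seed)
    capital = 0
    for _ in range(rounds):
        capital = step(capital)
    return capital
```

Over enough rounds, `simulate(game_a)` and `simulate(game_b)` drift negative while `simulate(game_random_mix)` drifts positive, which is the paradox in miniature.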
Works by Juan M.R. Parrondo
"Noise-Induced Non-equilibrium Phase Transition" C. Van den Broeck, J. M. R. Parrondo and R. Toral, Physical Review Letters, vol. 73 p. 3395 (1994)
Notes
Further reading
"Game theory: Losing strategies can win by Parrondo's paradox" G. P. Harmer and D. Abbott, Nature vol. 402, p. 864 (1999)
External links
Home page of Juan M. R. Parrondo
1964 births
Living people
Scientists from Madrid
Spanish physicists
Probability theorists
Complutense University of Madrid alumni |
https://en.wikipedia.org/wiki/BRAT1 | BRCA1-associated ATM activator 1 is a protein in humans that is encoded by the BRAT1 gene.
Function
The protein encoded by this ubiquitously expressed gene interacts with the tumor-suppressing BRCA1 (breast cancer 1) protein and the ATM (ataxia telangiectasia mutated) protein. ATM is thought to be a master controller of the cell cycle checkpoint signalling pathways required for cellular responses to DNA damage, such as double-strand breaks induced by ionizing radiation, and it complexes with BRCA1 in the multi-protein complex BASC (BRCA1-associated genome surveillance complex). The protein encoded by this gene is thought to play a role in the DNA damage pathway regulated by BRCA1 and ATM. |
https://en.wikipedia.org/wiki/Kaniadakis%20Gaussian%20distribution | The Kaniadakis Gaussian distribution (also known as the κ-Gaussian distribution) is a probability distribution which arises as a generalization of the Gaussian distribution from the maximization of the Kaniadakis entropy under appropriate constraints. It is one example of a Kaniadakis κ-distribution. The κ-Gaussian distribution has been applied successfully to describe several complex systems in economics, geophysics, and astrophysics, among many other fields.
The κ-Gaussian distribution is a particular case of the κ-Generalized Gamma distribution.
Definitions
Probability density function
The general form of the centered Kaniadakis κ-Gaussian probability density function is:
where κ is the entropic index associated with the Kaniadakis entropy, β is the scale parameter, and
Z_κ is the normalization constant.
The standard Normal distribution is recovered in the limit κ → 0.
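The displayed formulas in this excerpt were lost in extraction; as a sketch, the following assumes the standard Kaniadakis κ-exponential, exp_κ(x) = (√(1 + κ²x²) + κx)^(1/κ), in terms of which the centered density is proportional to exp_κ(−βx²) (the normalization constant Z_κ is omitted here):

```python
import math

def kappa_exp(x, kappa):
    # Kaniadakis kappa-exponential: exp_k(x) = (sqrt(1 + k^2 x^2) + k x)^(1/k).
    # It reduces to the ordinary exponential in the limit k -> 0.
    if kappa == 0:
        return math.exp(x)
    return (math.sqrt(1.0 + kappa**2 * x**2) + kappa * x) ** (1.0 / kappa)

def kappa_gaussian_unnormalized(x, kappa, beta=1.0):
    # Unnormalized centered kappa-Gaussian density, proportional to exp_k(-beta x^2);
    # the normalization constant Z_k from the source is intentionally omitted.
    return kappa_exp(-beta * x * x, kappa)
```

Evaluating `kappa_exp` at small κ recovers the ordinary exponential numerically, which is the limit the text describes.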
Cumulative distribution function
The cumulative distribution function of the κ-Gaussian distribution is given in terms of the Kaniadakis κ-Error function, which is a generalization of the ordinary error function, recovered in the limit κ → 0.
Properties
Moments, mean and variance
The centered κ-Gaussian distribution has all odd-order moments equal to zero; in particular, the mean is zero.
The variance is finite for and is given by:
Kurtosis
The kurtosis of the centered κ-Gaussian distribution may be computed through:
which can be written as
Thus, the kurtosis of the centered κ-Gaussian distribution is given by:
or
κ-Error function
The Kaniadakis κ-Error function (or κ-Error function) is a one-parameter generalization of the ordinary error function defined as:
Although the error function cannot be expressed in terms of elementary functions, numerical approximations are commonly employed.
For a random variable X distributed according to a κ-Gaussian distribution with mean 0 and standard deviation σ, the κ-Error function gives the probability that X falls in the interval .
Applications
The κ-Gaussian distribution has been applied in severa |
https://en.wikipedia.org/wiki/30th%20meridian%20west | The meridian 30° west of Greenwich is a line of longitude that extends from the North Pole across the Arctic Ocean, Greenland, the Atlantic Ocean, the Southern Ocean, and Antarctica to the South Pole.
The 30th meridian west forms a great circle with the 150th meridian east, and it is the reference meridian for the time zone UTC-2.
Between the 45th and 61st parallels north, the meridian forms the boundary between the Gander and Shanwick Oceanic air traffic control areas, and for this reason is also known as the "Molson-Guinness Line".
From Pole to Pole
Starting at the North Pole and heading south to the South Pole, the 30th meridian west passes through:
{| class="wikitable plainrowheaders"
! scope="col" width="125" | Co-ordinates
! scope="col" | Country, territory or sea
! scope="col" | Notes
|-
| style="background:#b0e0e6;" |
! scope="row" style="background:#b0e0e6;" | Arctic Ocean
| style="background:#b0e0e6;" |
|-
|
! scope="row" |
| H.H. Benedict Range (Northern Peary Land)
|-
| style="background:#b0e0e6;" |
! scope="row" style="background:#b0e0e6;" | Frederick E. Hyde Fjord
| style="background:#b0e0e6;" |
|-
|
! scope="row" |
| Bronlund Fjord (Southern Peary Land)
|-
| style="background:#b0e0e6;" |
! scope="row" style="background:#b0e0e6;" | Independence Fjord
| style="background:#b0e0e6;" |
|-
|
! scope="row" |
| Mainland and Sokongen Island
|-
| style="background:#b0e0e6;" |
! scope="row" style="background:#b0e0e6;" | Atlantic Ocean
| style="background:#b0e0e6;" |
|-
| style="background:#b0e0e6;" |
! scope="row" style="background:#b0e0e6;" | Southern Ocean
| style="background:#b0e0e6;" |
|-valign="top"
|
! scope="row" | Antarctica
| Claimed by both Argentina (Argentine Antarctica) and the United Kingdom (British Antarctic Territory)
|-
|}
See also
29th meridian west
31st meridian west
Beyond Thirty |
https://en.wikipedia.org/wiki/Newest%20vertex%20bisection | Newest Vertex Bisection is an algorithmic method to locally refine triangulations. It is widely used in computational science, numerical simulation, and computer graphics. The advantage of newest vertex bisection is that it allows local refinement of triangulations without degenerating the shape of the triangles after repeated usage.
In newest vertex bisection, whenever a triangle is to be split into smaller triangles, it will be bisected by drawing a line from the newest vertex to the midpoint of the edge opposite to that vertex. That midpoint becomes the newest vertex of the two newer triangles. One can show that repeating this procedure for a given triangulation leads to triangles that belong to only a finite number of similarity classes.
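The bisection rule described above can be sketched in a few lines (2D vertices as coordinate tuples; by convention here, the newest vertex is stored last in each triangle):

```python
def bisect(tri):
    # tri = (a, b, c): 2D vertex tuples, with c the triangle's newest vertex.
    a, b, c = tri
    # Midpoint of the edge opposite the newest vertex becomes the new newest vertex.
    m = ((a[0] + b[0]) / 2.0, (a[1] + b[1]) / 2.0)
    # Each child keeps one endpoint of the split edge; m is listed last as newest.
    return (a, c, m), (c, b, m)

def area(tri):
    # Absolute triangle area via the shoelace formula.
    (ax, ay), (bx, by), (cx, cy) = tri
    return abs((bx - ax) * (cy - ay) - (cx - ax) * (by - ay)) / 2.0
```

Each bisection halves the area, and because the new vertex is always the midpoint of the edge opposite the previous newest vertex, repeated refinement cycles through a bounded set of triangle shapes rather than producing ever-thinner slivers.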
Generalizations of newest vertex bisection to dimension three and higher are known. Newest vertex bisection is used in local mesh refinement for adaptive finite element methods, where it is an alternative to red-green refinement and uniform mesh refinement. |
https://en.wikipedia.org/wiki/Vernon%20Coleman | Vernon Edward Coleman (born 1946) is an English conspiracy theorist, writer, novelist, anti-vivisectionist, anti-vaccination activist and AIDS denialist who writes on topics related to human health, politics and animal welfare. He was formerly a general practitioner (GP) and newspaper columnist.
Coleman's medical claims have been widely discredited and described as pseudoscientific conspiracy theories.
Early life
Coleman was born in 1946, the only child of an electrical engineer. He was raised in Walsall, Staffordshire, in the West Midlands of England, where he attended Queen Mary's Grammar School and a medical school in Birmingham.
Career
Coleman qualified as a physician in 1970 and worked as a GP. In 1981, the Department of Health and Social Security (DHSS) fined him for refusing to write the diagnoses on sick notes, which he considered a breach of patient confidentiality.
After publishing his first book, The Medicine Men, in 1976, which accused the National Health Service of being controlled by pharmaceutical companies, Coleman left the NHS.
Coleman has since written under multiple pen names; in the late 1970s, he published three novels about life as a GP under the name Edward Vernon.
In 1987 Coleman appeared on the Central Weekend Programme as a sceptic against jogging for fitness.
An anti-vivisectionist, Coleman provided a supplementary memorandum for the House of Lords on the topic of vivisection in 1993.
In 1994 Coleman was ordered to pay damages for threatening the scientist Colin Blakemore, who had been targeted by anti-vivisection activists after an animal rights group calling itself 'The Justice Department' sent a letter bomb to his home; another of the group's bombs exploded and injured three people. Blakemore was later granted a temporary injunction by a High Court judge after Coleman said he would publish a pamphlet with Blakemore's home address and telephone number to encourage the public to 'get in touch with you to discuss your work'. C |
https://en.wikipedia.org/wiki/Specific%20storage | In the field of hydrogeology, storage properties are physical properties that characterize the capacity of an aquifer to release groundwater. These properties are storativity (S), specific storage (Ss) and specific yield (Sy). According to Groundwater, by Freeze and Cherry (1979), the specific storage, Ss [m⁻¹], of a saturated aquifer is defined as the volume of water that a unit volume of the aquifer releases from storage under a unit decline in hydraulic head.
They are often determined using some combination of field tests (e.g., aquifer tests) and laboratory tests on aquifer material samples. Recently, these properties have been also determined using remote sensing data derived from Interferometric synthetic-aperture radar.
Storativity
Storativity or the storage coefficient is the volume of water released from storage per unit decline in hydraulic head in the aquifer, per unit area of the aquifer. Storativity is a dimensionless quantity, and is always greater than 0.
Vw is the volume of water released from storage ([L³]);
h is the hydraulic head ([L])
Ss is the specific storage
Sy is the specific yield
b is the thickness of aquifer
A is the area ([L²])
Confined
For a confined aquifer or aquitard, storativity is the vertically integrated specific storage value. Specific storage is the volume of water released from one unit volume of the aquifer under one unit decline in head. This is related to both the compressibility of the aquifer and the compressibility of the water itself. Assuming the aquifer or aquitard is homogeneous:
Unconfined
For an unconfined aquifer, storativity is approximately equal to the specific yield (Sy), since the release from specific storage (Ss b) is typically orders of magnitude smaller (Ss b ≪ Sy).
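The confined and unconfined relations above (S = Ss·b, and S = Sy + Ss·b ≈ Sy) can be sketched as follows; the numeric values in the test are illustrative only, not from the source:

```python
def storativity_confined(ss_per_m, b_m):
    # Confined aquifer: S = Ss * b (dimensionless), the vertically
    # integrated specific storage over the aquifer thickness b.
    return ss_per_m * b_m

def storativity_unconfined(sy, ss_per_m, b_m):
    # Unconfined aquifer: S = Sy + Ss * b, which is approximately Sy
    # because Ss * b is typically orders of magnitude smaller.
    return sy + ss_per_m * b_m
```

For example, a 50 m thick confined aquifer with Ss = 1e-5 per metre has S = 5e-4, while an unconfined aquifer with Sy = 0.2 has a storativity dominated by the specific yield.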
The specific storage is the amount of water that a portion of an aquifer releases from storage, per unit mass or volume of the aquifer, per unit change in hydraulic head, while remaining fully saturated.
Mass specific storage is the mass of water that an aquife |
https://en.wikipedia.org/wiki/Mycelium%20Running | Mycelium Running: How Mushrooms Can Help Save the World is the sixth book written by American mycologist Paul Stamets.
In Mycelium Running (Ten Speed Press 2005), Stamets explores the use and applications of fungi in bioremediation—a practice called mycoremediation. Stamets details methods of termite and ant control using nontoxic mycelia, and describes how certain fungi may be able to neutralize anthrax, nerve gas, and smallpox. He includes the following with regard to the mycelium:
Is this the largest organism in the world? This 2,400-acre (9.7 km2) site in eastern Oregon had a contiguous growth of mycelium before logging roads cut through it. Estimated at 1,665 football fields in size and 2,200 years old, this one fungus has killed the forest above it several times over, and in so doing has built deeper soil layers that allow the growth of ever-larger stands of trees. Mushroom-forming forest fungi are unique in that their mycelial mats can achieve such massive proportions.
See also
List of books about mushrooms |
https://en.wikipedia.org/wiki/Voltage-dependent%20anion%20channel | Voltage-dependent anion channels, or mitochondrial porins, are a class of porin ion channel located on the outer mitochondrial membrane. There is debate as to whether or not this channel is expressed in the cell surface membrane.
This major protein of the outer mitochondrial membrane of eukaryotes forms a voltage-dependent anion-selective channel (VDAC) that behaves as a general diffusion pore for small hydrophilic molecules. The channel adopts an open conformation at low or zero membrane potential and a closed conformation at potentials above 30–40 mV. VDAC facilitates the exchange of ions and molecules between mitochondria and cytosol and is regulated by the interactions with other proteins and small molecules.
Structure
This protein contains about 280 amino acids and forms a beta barrel which spans the mitochondrial outer membrane.
Since its discovery in 1976, extensive function and structure analysis of VDAC proteins has been conducted. A prominent feature of the pore emerged: when reconstituted into planar lipid bilayers, there is a voltage-dependent switch between an anion-selective high-conductance state with high metabolite flux and a cation-selective low-conductance state with limited passage of metabolites.
More than 30 years after its initial discovery, in 2008, three independent structural projects of VDAC-1 were completed. The first was solved by multi-dimensional NMR spectroscopy. The second applied a hybrid approach using crystallographic data. The third was for mouse VDAC-1 crystals determined by X-ray crystallographic techniques. The three projects of the 3D structures of VDAC-1 revealed many structural features. First, VDAC-1 represents a new structural class of outer membrane β-barrel proteins with an odd number of strands. Another aspect is that the negatively charged side chain of residue E73 is oriented towards the hydrophobic membrane environment. The 19-stranded 3D structure obtained under different experimental sources by thr |
https://en.wikipedia.org/wiki/Adenine%20nucleotide%20translocator | Adenine nucleotide translocator (ANT), also known as the ADP/ATP translocase (ANT), ADP/ATP carrier protein (AAC) or mitochondrial ADP/ATP carrier, exchanges free ATP with free ADP across the inner mitochondrial membrane. ANT is the most abundant protein in the inner mitochondrial membrane and belongs to mitochondrial carrier family.
Free ADP is transported from the cytoplasm to the mitochondrial matrix, while ATP produced from oxidative phosphorylation is transported from the mitochondrial matrix to the cytoplasm, thus providing the cells with its main energy currency. ADP/ATP translocases are exclusive to eukaryotes and are thought to have evolved during eukaryogenesis. Human cells express four ADP/ATP translocases: SLC25A4, SLC25A5, SLC25A6 and SLC25A31, which constitute more than 10% of the protein in the inner mitochondrial membrane. These proteins are classified under the mitochondrial carrier superfamily.
Types
In humans, there exist three paralogous ANT isoforms:
SLC25A4 – found primarily in heart and skeletal muscle
SLC25A5 – primarily expressed in fibroblasts
SLC25A6 – primarily expressed in the liver
Structure
ANT has long been thought to function as a homodimer, but this concept was challenged by the projection structure of the yeast Aac3p solved by electron crystallography, which showed that the protein was three-fold symmetric and monomeric, with the translocation pathway for the substrate through the centre. The atomic structure of the bovine ANT confirmed this notion, and provided the first structural fold of a mitochondrial carrier. Further work has demonstrated that ANT is a monomer in detergents and functions as a monomer in mitochondrial membranes.
ADP/ATP translocase 1 is the major AAC in human cells and the archetypal protein of this family. It has a mass of approximately 30 kDa, consisting of 297 residues. It forms six transmembrane α-helices that form a barrel that results in a deep cone-shaped depression accessible from the outside w |
https://en.wikipedia.org/wiki/Die-cast%20toy | A die-cast toy is a toy or a collectible model produced by using the die-casting method of putting molten lead, zinc alloy or plastic in a mold to produce a particular shape. Such toys are made of metal, with plastic, rubber, glass, or other machined metal parts. Wholly plastic toys are made by a similar process of injection molding, but the two methods are distinct because of the properties of the materials.
Process
The metal used in die-casting is either a lead alloy (used early on), or more commonly, Zamak (called Mazak in the UK), an alloy of zinc with small quantities of aluminium and copper. Lead or iron are impurities that must be carefully avoided in Zamak, as they give rise to a deterioration of the metal most commonly called zinc pest. The terms white metal or pot metal are also used when applied to alloys based more on lead or iron. The most common die-cast vehicles are scale models of automobiles, aircraft, military vehicles, construction equipment, and trains, although almost anything can be produced by this method, like Monopoly game pieces, furniture handles, or metal garden sprinklers.
Industry leaders
Die-cast (or diecast, or die cast) toys were first produced early in the 20th century by manufacturers such as Meccano (Dinky Toys) in the United Kingdom, Dowst Brothers (TootsieToys) in the United States and Fonderie de précision de Nanterre (Solido) in France. The first models on the market were basic, consisting of a small vehicle body with no interior. In the early days, as mentioned, it was common for impurities in the alloy to result in zinc pest, and the casting would distort, crack, or crumble. As a result, die-cast toys made before World War II are difficult to find in good condition. The later high-purity Zamak alloy avoided this problem.
Lesney began making die-cast toys in 1947. Their popular Matchbox 1-75 series was so named because there were always 75 different vehicles in the line, each packaged in a small box designed to look like |
https://en.wikipedia.org/wiki/Isolated%20singularity | In complex analysis, a branch of mathematics, an isolated singularity is one that has no other singularities close to it. In other words, a complex number z0 is an isolated singularity of a function f if there exists an open disk D centered at z0 such that f is holomorphic on D \ {z0}, that is, on the set obtained from D by taking z0 out.
Formally, and within the general scope of general topology, an isolated singularity of a holomorphic function is any isolated point of the boundary of its domain. In other words, if U is an open subset of the complex plane and f : U → ℂ is a holomorphic function, then any isolated point of the boundary of U is an isolated singularity of f.
Every singularity of a meromorphic function on an open subset is isolated, but isolation of singularities alone is not sufficient to guarantee a function is meromorphic. Many important tools of complex analysis such as Laurent series and the residue theorem require that all relevant singularities of the function be isolated.
There are three types of isolated singularities: removable singularities, poles and essential singularities.
Examples
The function has 0 as an isolated singularity.
The cosecant function has every integer as an isolated singularity.
Nonisolated singularities
Other than isolated singularities, complex functions of one variable may exhibit other singular behavior. Namely, two kinds of nonisolated singularities exist:
Cluster points, i.e. limit points of isolated singularities: if they are all poles, then although the function admits a Laurent series expansion about each of them, no such expansion is possible at their limit point.
Natural boundaries, i.e. any non-isolated set (e.g. a curve) around which functions cannot be analytically continued (or outside them if they are closed curves in the Riemann sphere).
Examples
The function is meromorphic on , with simple poles at , for every . Since , every punctured disk centered at has an infinite number of singularities within, so no Laurent expansion is available for around , which is in fact a cluster |
https://en.wikipedia.org/wiki/Angioblast | Angioblasts (or vasoformative cells) are embryonic cells from which the endothelium of blood vessels arises. They are derived from embryonic mesoderm. Blood vessels first make their appearance in several scattered vascular areas (blood islands) that are developed simultaneously between the endoderm and the mesoderm of the yolk-sac, i.e., outside the body of the embryo. Here a new type of cell, the angioblast, is differentiated from the mesoderm.
These cells, as they divide, form small, dense syncytial masses, which soon join with similar masses by means of fine processes to form plexuses. They form capillaries through vasculogenesis and angiogenesis.
Angioblasts are one of the two products formed from hemangioblasts (the other being multipotential hemopoietic stem cells).
See also
List of human cell types derived from the germ layers |
https://en.wikipedia.org/wiki/VCard | vCard, also known as VCF (Virtual Contact File), is a file format standard for electronic business cards. vCards can be attached to e-mail messages, sent via Multimedia Messaging Service (MMS) or instant messaging, shared over NFC or through a QR code, or published on the World Wide Web. They can contain name and address information, phone numbers, e-mail addresses, URLs, logos, photographs, and audio clips.
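As an illustration, a minimal vCard 3.0 file; the contact details below are hypothetical:

```
BEGIN:VCARD
VERSION:3.0
N:Doe;Jane;;;
FN:Jane Doe
TEL;TYPE=CELL:+1-555-0100
EMAIL:jane.doe@example.com
URL:https://example.com
END:VCARD
```

Each line is a property name (optionally with parameters such as `TYPE`), a colon, and a value; this text format is what the XML, JSON, and HTML variants discussed below re-encode.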
vCard is used as a data interchange format in smartphone contacts, personal digital assistants (PDAs), personal information managers (PIMs) and customer relationship management systems (CRMs). To accomplish these data interchange applications, other "vCard variants" have been used and proposed as "variant standards", each for its specific niche: XML representation, JSON representation, or web pages.
An unofficial vCard Plus format makes use of a URL to a customized landing page with all the basic information along with a profile photo, geographic location, and other fields. This can also be saved as a contact file on smartphones.
Overview
The standard Internet media type (MIME type) for a vCard has varied with each version of the specification.
vCard information is common in web pages: the "free text" content is human-readable but not machine-readable. As technologies evolved, this "free text" (HTML) was adapted to be machine-readable as well.
RDFa with the vCard Ontology can be used in HTML and various XML-family languages, e.g. SVG, MathML.
Related formats
jCard, "The JSON Format for vCard", is a standard proposal of 2014, RFC 7095. This proposal has not yet become a widely used standard. RFC 7095 does not use real JSON objects, but rather arrays of sequence-dependent tag-value pairs (like an XML file).
hCard is a microformat that allows a vCard to be embedded inside an HTML page. It makes use of CSS class names to identify each vCard property. Normal HTML markup and CSS styling can be used alongside the hCard class names without affecting the webpage's ability to be parsed by a h |
https://en.wikipedia.org/wiki/Human-transcriptome%20DataBase%20for%20Alternative%20Splicing | The Human-transcriptome DataBase for Alternative Splicing (H-DBAS) is a database of alternatively spliced human transcripts based on H-Invitational.
See also
Alternative splicing |
https://en.wikipedia.org/wiki/Banana%20passionfruit | Banana passionfruit (Passiflora supersect. Tacsonia), also known as taxo and curuba, is a group of around 64 Passiflora species found in South America. Most species in this section are found in high elevation cloud forest habitats. Flowers have a cylindrical hypanthium.
Species
Invasive species
P. tarminiana and P. tripartita thrive in the climate of New Zealand. They are invasive species there, since they can smother forest margins and forest regrowth, and it is illegal to sell, cultivate, or distribute the plants.
Banana passionfruit vines are now smothering more than of native forest on the islands of Hawaii and Kauai. Seeds are spread by feral pigs, birds and humans. The vine can also be found all across the highlands of New Guinea and Tasmania. |
https://en.wikipedia.org/wiki/Moss%20bioreactor | A moss bioreactor is a photobioreactor used for the cultivation and propagation of mosses. It is usually used in molecular farming for the production of recombinant protein using transgenic moss. In environmental science moss bioreactors are used to multiply peat mosses e.g. by the Mossclone consortium to monitor air pollution.
Moss is a very undemanding photoautotrophic organism that has been kept in vitro for research purposes since the beginning of the 20th century.
The first moss bioreactors for the model organism Physcomitrella patens were developed in the 1990s to comply with the safety standards regarding the handling of genetically modified organisms and to gain sufficient biomass for experimental purposes.
Functional principle
The moss bioreactor is used to cultivate moss in a suspension culture in agitated, and aerated liquid medium. The culture is kept under lighting with temperature and pH value held constant. The culture medium—often a minimal medium—contains all nutrients and minerals needed for growth of the moss.
To ensure a maximum growth rate, the moss is kept at the protonema stage by continuous mechanical disruption, e.g. by using rotating blades. Once the density of the culture has reached a certain threshold, the lack of nutrients and the increasing concentration of phytohormones in the medium triggers the differentiation of the protonema to the adult gametophyte. At this point the culture has to be diluted with fresh medium if it is intended for further use.
According to the intended yield, this basic principle can be adapted to various types and sizes of bioreactors. The cultivation chamber can, for example, consist of a column, a tube, or exchangeable plastic bags.
Production of biopharmaceuticals
Various biopharmaceuticals have already been produced using moss bioreactors. Ideally, the recombinant protein can be directly purified from the culture medium. One example for this production method is factor H: this molecule is part of the hum |
https://en.wikipedia.org/wiki/Spencer%20Wells | Spencer Wells (born April 6, 1969) is an American geneticist, anthropologist, author and entrepreneur. He co-hosts The Insight podcast with Razib Khan. Wells led The Genographic Project from 2005 to 2015, as an Explorer-in-Residence at the National Geographic Society, and is the founder and executive director of personal genomics nonprofit The Insitome Institute.
Biography
Youth and education
Wells was born in Marietta, Georgia and grew up in Lubbock, Texas. He attended both All Saints School and Lubbock High School, and received a National Merit Scholarship. He obtained a Bachelor of Science in biology from the University of Texas at Austin in 1988 and a Ph.D. in biology from Harvard University in 1994. He was a postdoctoral fellow at Stanford University between 1994 and 1998, and a research fellow at the University of Oxford from 1999 to 2000.
Career
Wells did his Ph.D. work under Richard Lewontin, and later did postdoctoral research with Luigi Luca Cavalli-Sforza and Sir Walter Bodmer. His work, which has helped to establish the critical role played by Central Asia in the peopling of the world, has been published in journals such as Science, American Journal of Human Genetics, and the Proceedings of the National Academy of Sciences.
Wells is renowned for his logistically complex sample-collecting expeditions in remote parts of the world. EurAsia98, which in 1998 took him and his team from London to the Altai Mountains on the Mongolian border, via an overland route through the Caucasus, Iran and the -stans of Central Asia, was sponsored by Land Rover. In 2005 he led a team of Genographic scientists on the first modern expedition to the Tibesti Mountains in northern Chad, and in 2006 he led a team to the Wakhan Corridor on the Tajik-Afghan border. His work has taken him to more than 100 countries.
He wrote the book The Journey of Man: A Genetic Odyssey (2002), which explains how genetic data has been used to trace human migrations over the past 50,000 ye |
https://en.wikipedia.org/wiki/Whitetopping | Whitetopping is the covering of an existing asphalt pavement with a layer of Portland cement concrete. Whitetopping is divided into types depending on the thickness of the concrete layer and whether the layer is bonded to the asphalt substrate. Unbonded whitetopping, also called conventional whitetopping, uses a concrete thickness of 20 cm (8 in) or more that is not bonded to the asphalt. Bonded whitetopping uses thicknesses of 5 to 15 cm (2-6 in) bonded to the asphalt pavement and is divided into two types, thin and ultrathin. The bond is made by texturing the asphalt. Thin whitetopping uses a bonded layer of concrete that is 10 to 15 cm (4-6 in) thick, while an ultrathin layer is 5 to 10 cm (2-4 in) thick. Ultrathin whitetopping is suitable for light-duty uses, such as roads with low traffic volume, parking lots and small airports. Fiber-reinforced concrete is used in some thin whitetopping overlays and almost all ultrathin whitetopping overlays.
Whitetopping is suitable for asphalt pavement with little deterioration, although repairs can be made to the asphalt if necessary. If the pavement is badly damaged, it should be completely removed and a new concrete pavement should be installed. The pavement should be relatively hard, as well. Deterioration of overlays is significantly increased on asphalt bases with high viscosity. If a grade or a distance between the pavement and a bridge needs to be preserved, the asphalt can be milled so that the height of the pavement does not change. However, whitetopping requires the asphalt layer to be at least 7.5 cm (3 in) thick. If necessary, a section of new concrete roadway can be placed under a bridge with gentle slopes on either side that meet up with the whitetopped portions of the road.
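The thickness bands above amount to a small classification scheme. A minimal sketch (the function name is ours and purely illustrative; the thresholds are those stated in the text, and the overlapping 10 cm boundary between thin and ultrathin is assigned to "thin" here):

```python
def classify_whitetopping(thickness_cm: float, bonded: bool) -> str:
    """Classify a whitetopping overlay from its concrete thickness.

    Unbonded ("conventional") whitetopping: >= 20 cm, not bonded.
    Bonded thin: 10-15 cm; bonded ultrathin: 5-10 cm.
    The exact 10 cm boundary is assigned to "thin" (text gives overlapping ranges).
    """
    if not bonded:
        if thickness_cm >= 20:
            return "conventional (unbonded)"
        raise ValueError("unbonded whitetopping uses 20 cm of concrete or more")
    if 10 <= thickness_cm <= 15:
        return "bonded thin"
    if 5 <= thickness_cm < 10:
        return "bonded ultrathin"
    raise ValueError("bonded whitetopping uses 5-15 cm of concrete")
```

For example, a 7 cm bonded overlay falls in the ultrathin band suitable for low-traffic roads and parking lots.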
See also
Cool pavement |
https://en.wikipedia.org/wiki/Breast%20atrophy | Breast atrophy is the normal or spontaneous atrophy or shrinkage of the breasts.
Breast atrophy commonly occurs in women during menopause when estrogen levels decrease. It can also be caused by hypoestrogenism and/or hyperandrogenism in women in general, such as in antiestrogen treatment for breast cancer, in polycystic ovary syndrome (PCOS), and in malnutrition such as that associated with eating disorders like anorexia nervosa or with chronic disease. It can also be an effect of weight loss.
In the treatment of gynecomastia in males and macromastia in women, and in hormone replacement therapy (HRT) for trans men, breast atrophy may be a desired effect.
Examples of treatment options for breast atrophy, depending on the situation/when appropriate, can include estrogens, antiandrogens, and proper nutrition or weight gain.
See also
Mammoplasia
Micromastia |
https://en.wikipedia.org/wiki/Different%20ideal | In algebraic number theory, the different ideal (sometimes simply the different) is defined to measure the (possible) lack of duality in the ring of integers of an algebraic number field K, with respect to the field trace. It then encodes the ramification data for prime ideals of the ring of integers. It was introduced by Richard Dedekind in 1882.
Definition
If OK is the ring of integers of K, and tr denotes the field trace from K to the rational number field Q, then x ↦ tr(x²)
is an integral quadratic form on OK. Its discriminant as a quadratic form need not be +1 (in fact this happens only for the case K = Q). Define the inverse different or codifferent or Dedekind's complementary module as the set I of x ∈ K such that tr(xy) is an integer for all y in OK; then I is a fractional ideal of K containing OK. By definition, the different ideal δK is the inverse fractional ideal I−1: it is an ideal of OK.
The ideal norm of δK is equal to the ideal of Z generated by the field discriminant DK of K.
The different of an element α of K with minimal polynomial f is defined to be δ(α) = f′(α) if α generates the field K (and zero otherwise): we may write
δ(α) = ∏ (α − α(i)),
where the α(i) run over all the roots of the characteristic polynomial of α other than α itself. The different ideal is generated by the differents of all integers α in OK. This is Dedekind's original definition.
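A standard worked example (ours, not from the text above) makes the definitions concrete for a quadratic field:

```latex
% Worked example: K = \mathbb{Q}(\sqrt{d}), d squarefree, d \equiv 2,3 \pmod 4,
% so that O_K = \mathbb{Z}[\sqrt{d}] and \alpha = \sqrt{d} generates K.
f(x) = x^2 - d, \qquad f'(x) = 2x,
% hence the different of \alpha and the different ideal are
\delta(\alpha) = f'(\sqrt{d}) = 2\sqrt{d}, \qquad \delta_K = (2\sqrt{d}),
% and the ideal norm recovers the field discriminant, as stated above:
N(\delta_K) = \bigl|N_{K/\mathbb{Q}}(2\sqrt{d})\bigr| = 4|d| = |D_K|.
```

This illustrates the general fact that the ideal norm of δK generates the ideal of Z given by the field discriminant.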
The different is also defined for a finite degree extension of local fields. It plays a basic role in Pontryagin duality for p-adic fields.
Relative different
The relative different δL/K is defined in a similar manner for an extension of number fields L/K. The relative norm of the relative different is then equal to the relative discriminant ΔL/K. In a tower of fields L/K/F the relative differents are related by δL/F = δL/K · δK/F.
The relative different equals the annihilator of the relative Kähler differential module Ω¹ of OL over OK.
The ideal class of the relative different δL / K is always a square in the class gr |
https://en.wikipedia.org/wiki/Glucosinolate | Glucosinolates are natural components of many pungent plants such as mustard, cabbage, and horseradish. The pungency of those plants is due to mustard oils produced from glucosinolates when the plant material is chewed, cut, or otherwise damaged. These natural chemicals most likely contribute to plant defence against pests and diseases, and impart a characteristic bitter flavor property to cruciferous vegetables.
Occurrence
Glucosinolates occur as secondary metabolites of almost all plants of the order Brassicales. This includes the economically important family Brassicaceae as well as Capparaceae and Caricaceae.
Outside of the Brassicales, the genera Drypetes and Putranjiva in the family Putranjivaceae are the only other known occurrence of glucosinolates.
Glucosinolates occur in various edible plants such as cabbage (white cabbage, Chinese cabbage, broccoli), Brussels sprouts, watercress, horseradish, capers, and radishes where the breakdown products often contribute a significant part of the distinctive taste. The glucosinolates are also found in seeds of these plants.
Chemistry
Glucosinolates constitute a natural class of organic compounds that contain sulfur and nitrogen and are derived from glucose and an amino acid.
They are water-soluble anions and belong to the glucosides. Every glucosinolate contains a central carbon atom, which is bound to the sulfur atom of the thioglucose group, and via a nitrogen atom to a sulfate group (making a sulfated aldoxime). In addition, the central carbon is bound to a side group; different glucosinolates have different side groups, and it is variation in the side group that is responsible for the variation in the biological activities of these plant compounds.
The essence of glucosinolate chemistry is their ability to convert into an isothiocyanate (a "mustard oil") upon hydrolysis of the thioglucoside bond by the enzyme myrosinase.
The semisystematic naming of glucosinolates consists of the chemical name of the group "R |
https://en.wikipedia.org/wiki/Imperative%20logic | Imperative logic is the field of logic concerned with imperatives. In contrast to declaratives, it is not clear whether imperatives denote propositions or more generally what role truth and falsity play in their semantics. Thus, there is almost no consensus on any aspect of imperative logic.
Jørgensen's dilemma
One of a logic's principal concerns is logical validity. It seems that arguments with imperatives can be valid. Consider:
P1. Take all the books off the table!
P2. Foundations of Arithmetic is on the table.
C1. Therefore, take Foundations of Arithmetic off the table!
However, an argument is valid if the conclusion follows from the premises. This means the premises give us reason to believe the conclusion, or, alternatively, that the truth of the premises determines the truth of the conclusion. Since imperatives are neither true nor false and since they are not proper objects of belief, none of the standard accounts of logical validity apply to arguments containing imperatives.
Here is the dilemma. Either arguments containing imperatives can be valid or not. On the one hand, if such arguments can be valid, we need a new or expanded account of logical validity and the concomitant details. Providing such an account has proved challenging. On the other hand, if such arguments cannot be valid (either because such arguments are all invalid or because validity is not a notion that applies to imperatives), then our logical intuitions regarding the above argument (and others similar to it) are mistaken. Since either answer seems problematic, this has come to be known as Jørgensen's dilemma, named after Jørgen Jørgensen (da).
While this problem was first noted in a footnote by Frege, it received a more developed formulation by Jørgensen.
Deontic logic takes the approach of adding a modal operator to an argument with imperatives such that a truth-value can be assigned to the proposition. For example, it may be hard to assign a truth-value to the argument "Take all the boo |
https://en.wikipedia.org/wiki/Grassland%20degradation | Grassland degradation, also called vegetation or steppe degradation, is a biotic disturbance in which grass struggles to grow or can no longer exist on a piece of land due to causes such as overgrazing, burrowing of small mammals, and climate change. Since the 1970s it has been observed to affect plains and plateaus of alpine meadows and grasslands, most notably in the Philippines and in the Tibetan and Inner Mongolian regions of China, where grassland is degraded each year. Across the globe an estimated 23% of the land is degraded. Depending on what is happening to a piece of land, it takes years and sometimes even decades for a grassland to become degraded. The process is slow and gradual, but so is restoring degraded grassland. Initially only patches of grass die and turn brown; if not addressed, however, the degradation can spread to many acres of land. As a result, the frequency of landslides and dust storms may increase. The degraded land's less fertile ground cannot yield crops, nor can animals graze in these fields. With a dramatic decrease in plant diversity in this ecosystem, more carbon and nitrogen may be released into the atmosphere. These results can have serious effects on humans, such as displacing herders from their communities; a decrease in the vegetables, fruit, and meat regularly acquired from these fields; and a catalyzing effect on global warming.
Causes
Overgrazing
Grassland degradation is thought to be principally attributable to overgrazing. This occurs when animals consume grass at a faster rate than it can grow back. Overgrazing has lately become more apparent, partly because of the increase in urbanization, which leaves less room for available farmland. With these smaller plots, farmers try to maximize their space and profits by densely packing their land with animals. Another point that comes with the high density of owned animals is that farmers need to be able to provide for t
https://en.wikipedia.org/wiki/Happy%20Tooth | The Happy Tooth is a registered trademark of Toothfriendly International. It stands for guaranteed toothfriendly quality.
The Happy Tooth mark distinguishes products that are not harmful for teeth. In order for products to carry the logo they have to be scientifically tested and proven not to be cariogenic or erosive. The test is based on the measurement of the pH of dental plaque and saliva and is carried out by three appointed independent university institutes.
The compliance of a product is tested by means of intra-oral pH telemetry. Applying a standardized method, the plaque pH is measured in at least four volunteers during and for 30 minutes after consumption of the product with an indwelling, interproximally placed, plaque-covered electrode. Products which do not lower plaque pH below 5.7 under the conditions of this test lack a cariogenic potential. The erosive potential is measured with a plaque-free electrode. The acid exposure of the teeth must not exceed 40 mmol H+ × min (Schachtele Ch.F. et al. (1986). Human plaque acidity models – Working Group Consensus Report. J. Dent. Res., 65 (Spec. Iss.): 1530–1531.).
Toothfriendly International grants the rights to use the trademark. |
https://en.wikipedia.org/wiki/Virial%20coefficient | Virial coefficients appear as coefficients in the virial expansion of the pressure of a many-particle system in powers of the density, providing systematic corrections to the ideal gas law. They are characteristic of the interaction potential between the particles and in general depend on the temperature. The second virial coefficient depends only on the pair interaction between the particles, the third depends on 2- and non-additive 3-body interactions, and so on.
Derivation
The first step in obtaining a closed expression for virial coefficients is a cluster expansion of the grand canonical partition function
Here p is the pressure, V is the volume of the vessel containing the particles, k_B is Boltzmann's constant, T is the absolute temperature, λ is the fugacity, with μ the chemical potential. The quantity Q_n is the canonical partition function of a subsystem of n particles:
Here H is the Hamiltonian (energy operator) of a subsystem of n particles. The Hamiltonian is a sum of the kinetic energies of the particles and the total n-particle potential energy (interaction energy). The latter includes pair interactions and possibly 3-body and higher-body interactions. The grand partition function can be expanded in a sum of contributions from one-body, two-body, etc. clusters. The virial expansion is obtained from this expansion by observing that ln Ξ equals pV/(k_B T). In this manner one derives
B_2 = V(1/2 − Q_2/Q_1²), and analogous expressions for the higher virial coefficients.
These are quantum-statistical expressions containing kinetic energies. Note that the one-particle partition function contains only a kinetic energy term. In the classical limit the kinetic energy operators commute with the potential operators and the kinetic energies in numerator and denominator cancel mutually. The trace (tr) becomes an integral over the configuration space. It follows that classical virial coefficients depend on the interactions between the particles only and are given as integrals over the particle coordinates.
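In the classical limit just described, the second virial coefficient reduces to the configuration-space integral B2(T) = −2π ∫ (e^(−u(r)/k_BT) − 1) r² dr for a spherically symmetric pair potential u(r). A minimal numerical sketch (function and variable names are ours; hard spheres are chosen because the exact answer B2 = (2/3)πσ³ is available as a check):

```python
import math

def b2_classical(u, kT, r_max=10.0, n=100_000):
    """Classical second virial coefficient
    B2(T) = -2*pi * integral_0^r_max (exp(-u(r)/kT) - 1) * r^2 dr,
    evaluated with the trapezoidal rule. u(r) is the pair potential."""
    dr = r_max / n
    acc = 0.0
    for i in range(n + 1):
        r = i * dr
        beta_u = u(r) / kT
        # Mayer f-function; an infinitely repulsive core gives f = -1
        f = math.exp(-beta_u) - 1.0 if beta_u < 700 else -1.0
        acc += (0.5 if i in (0, n) else 1.0) * f * r * r
    return -2.0 * math.pi * acc * dr

# Hard spheres of diameter sigma: exact result B2 = (2/3) * pi * sigma**3,
# independent of temperature, as expected for a purely repulsive core.
sigma = 1.0
u_hs = lambda r: math.inf if r < sigma else 0.0
```

Because the hard-sphere f-function is temperature independent, the numerical value should match (2/3)πσ³ ≈ 2.094 at any kT.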
The derivation of higher than virial coefficients bec |
https://en.wikipedia.org/wiki/Homometric%20structures | In chemistry and crystallography, crystal structures that have the same set of interatomic distances are called homometric structures. Homometric structures need not be congruent (that is, related by a rigid motion or reflection). Homometric crystal structures produce identical diffraction patterns; therefore, they cannot be distinguished by a diffraction experiment.
Recently, a Monte Carlo algorithm was proposed to calculate the number of homometric structures corresponding to any given set of interatomic distances.
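A classic one-dimensional analogue (Bloom's standard example, not taken from the text above) exhibits two point sets that share the same multiset of pairwise distances yet are not related by translation or reflection:

```python
from itertools import combinations

def distance_multiset(points):
    """All pairwise distances as a sorted list (the 1-D analogue of Patterson data)."""
    return sorted(abs(a - b) for a, b in combinations(points, 2))

def congruent_1d(p, q):
    """True if q is a translate of p or of p's mirror image."""
    p, q = sorted(p), sorted(q)
    shift = lambda s: [x - s[0] for x in s]        # normalize to start at 0
    mirror = [p[-1] - x for x in reversed(p)]      # reflection, renormalized
    return shift(q) in (shift(p), mirror)

# Bloom's homometric pair: identical distance multisets, non-congruent sets
A = [0, 1, 4, 10, 12, 17]
B = [0, 1, 8, 11, 13, 17]
```

Both sets produce the distance multiset {1, 2, ..., 13, 16, 17}, so a 1-D "diffraction experiment" could not tell them apart, mirroring the crystallographic situation described above.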
See also
Patterson function
Arthur Lindo Patterson |
https://en.wikipedia.org/wiki/Pharmacological%20cardiotoxicity | Pharmacological cardiotoxicity is cardiac damage caused by the action of drugs; it can occur both by affecting the performance of the cardiac muscle and by altering the ion channels/currents of the functional cardiac cells, named cardiomyocytes.
Two distinct cases in which it can occur involve anti-cancer drugs and antiarrhythmic drugs. From early observations, some of the first anti-cancer drugs implicated were those that go under the name of anthracyclines: it has emerged that such drugs cause a progressive form of heart failure leading to cardiac death. The mechanism of cell injury is thought to involve iron-dependent generation of reactive oxygen species, with a spreading of oxidative damage to the cardiomyocytes. With antiarrhythmic drugs, on the other hand, cardiotoxicity is associated with the risk of inducing potentially fatal arrhythmias due to an imbalance in the ion currents that flow in and out of the cell membrane of the cardiomyocytes.
Pharmacological action
The pharmacological action represents the mechanism by means of which a specific effect is obtained. Depending on the class and type of drug, the pharmacological action may differ.
In the case of electrophysiology, the drug acts directly at the level of the cells, affecting the opening/closing mechanism of the ionic channels, as happens with the anti-arrhythmic drugs. Due to the ionic permeability properties of the cardiac cell membrane, during the action potential the opening of the ion channels generates ion currents that flow in and out of the lipophilic cell membrane.
The action of anti-arrhythmic drugs is to modify such ion currents, acting on the structure of the ion channel and trying to restore the physiological opening/closing mechanism of the ion channels. It may happen that, instead of providing the desired benefit to the heart, a new drug negatively affects the ion currents, ending up excessively modifying the amount of ion currents flowing th |
https://en.wikipedia.org/wiki/Nail%20clipper | A nail clipper (also called nail clippers, a nail trimmer, a nail cutter or nipper type) is a hand tool used to trim fingernails, toenails and hangnails.
Design
Nail clippers are usually made of stainless steel but can also be made of plastic and aluminum. Two common varieties are the plier type and the compound lever type. Many nail clippers come with a miniature file fixed to them so that rough edges of nails can be manicured. Nail clippers occasionally come with a nail catcher. The nail clipper consists of a head that may be concave or convex. Specialized nail clippers with convex clipping ends are intended for trimming toenails, while those with concave clipping ends are for fingernails. The cutting head may be manufactured to be parallel or perpendicular to the principal axis of the cutter. Cutting heads that are parallel to the principal axis are made to address accessibility issues involved with cutting toenails.
History
Before the invention of the modern nail clipper, people would use small knives to trim or pare their nails. Descriptions of nail trimming in literature date as far back as the 8th century BC. The Book of Deuteronomy exhorts in 21:12 that a man, should he wish to take a captive as a wife, "shall bring her home to [his] house, and she shall shave her head and trim her nails". A reference is made in Horace's Epistles, written circa 20 BC, to "A close-shaven man, it's said, in an empty barber's booth, penknife in hand, quietly cleaning his nails."
The first United States patent for an improvement in a finger-nail clipper was filed in 1875 by Valentine Fogerty. Other subsequent patents for an improvement in finger-nail clippers are those in 1876 by William C. Edge, and in 1878 by John H. Hollman. Filings for finger-nail clippers include, in 1881, those of Eugene Heim and Celestin Matz, in 1885 by George H. Coates (for a finger-nail cutter), in 1886 by Hungarian Inventor David Gestetner and in 1905 by Chapel S. Carter with a later patent in |
https://en.wikipedia.org/wiki/Kait%20Diaz | Kait Diaz is a fictional character in the Gears of War franchise. She first appeared in Gears of War 4 as a supporting character, and is the main protagonist of Gears 5. Within the series, she is originally a member of the Outsiders, a loosely organized faction of rebel human settlers who live outside of the walled city states of the supranational and intergovernmental military collective known as the Coalition of Ordered Governments ("COG"), following the conclusion of humanity's war with the Locust Horde on the planet Sera. By the events of Gears 5, she has enlisted with the COG armed forces and has attained the rank of corporal. The character is voiced by Laura Bailey in each of her appearances. Outside of the video game series, Kait has appeared in the novels Gears of War: Ascendance and Gears of War: Bloodlines.
Kait Diaz is the first lead female character of the Gears of War video game franchise, which is predominantly known for its hypermasculine male characters. The Coalition, a subsidiary of Xbox Game Studios and the current developers responsible for the Gears of War franchise following its acquisition from Epic Games, intended for a strong female lead to replace Marcus Fenix and wrote the narrative arc of the fourth and fifth installments of the franchise around the character.
Kait has been received positively, and is considered to be one of the more popular characters in the Gears of War series. She has been praised for her nuanced characterization, strong backstory, and as a positive example of gender equality and representation in video games. Various merchandise for the character, as with other of the series' characters, has been released.
Creation and development
Kait Diaz is designed as a character with Latino roots; the Coalition's character designers mixed Latino-inspired physical traits with typically Caucasian features to give her an "interesting look". Her less regimented look is a reflection of her youthful urge to stand out among her peer |
https://en.wikipedia.org/wiki/Graticula | Graticula, formerly incorrectly named Craticula, is a genus of Palaeozoic coralline alga. They form the framework of reef rocks in the Silurian of Gotland, from the Högklint, Slite and Halla groups.
The Graticulaceae closely resemble the Cretaceous Solenoporaceae, and are only really differentiated by their stratigraphic position.
Graticula mineralized with calcite. |
https://en.wikipedia.org/wiki/Vaginal%20artery | The vaginal artery is an artery in females that supplies blood to the vagina and the base of the bladder.
Structure
The vaginal artery is usually a branch of the internal iliac artery. Some sources say that the vaginal artery can arise from the uterine artery, but the phrase vaginal branches of uterine artery is the term for blood supply to the vagina coming from the uterine artery.
The vaginal artery is frequently represented by two or three branches. These descend to the vagina, supplying its mucous membrane. They anastomose with branches from the uterine artery. It can send branches to the bulb of the vestibule, the fundus of the bladder, and the contiguous part of the rectum.
Function
The vaginal artery supplies oxygenated blood to the muscular wall of the vagina, along with the uterine artery and the internal pudendal artery. It also supplies the cervix, along with the uterine artery.
Other animals
In horses, the vaginal artery may haemorrhage after birth, which can cause death.
See also
Uterine artery |
https://en.wikipedia.org/wiki/Lucidchart | Lucidchart is a web-based diagramming application that allows users to visually collaborate on drawing, revising and sharing charts and diagrams, and improve processes, systems, and organizational structures. It is produced by Lucid Software Inc., based in Utah, United States and co-founded by Ben Dilts and Karl Sun. Lucidchart is used by companies such as Google, GE, NBC Universal, and Amazon.
History
In January 2011, Lucid Software Inc. was incorporated in Delaware through the conversion of Lucidchart, LLC, a Utah limited liability company formed in April 2009. In 2010, Lucid announced that it had integrated Lucidchart into the Google Apps Marketplace.
In 2011, Lucid raised $1 million in seed funding from 500 Startups, 2M Companies, K9 Ventures, and several angel investors.
On October 17, 2018, Lucid announced it had raised an additional $72 million from Meritech Capital, Spectrum Equity and ICONIQ Capital.
In 2020, Lucid launched a digital whiteboarding capability called Lucidspark.
In 2021, Lucid launched a digital cloud visualization capability called Lucidscale.
Features
Lucidchart is entirely browser-based, running on browsers that support HTML5. This means it does not require plugins or updates of third-party software such as Adobe Flash. The platform supports real-time collaboration, allowing all users to work simultaneously on projects and see each user's additions reflected in real time. All data is encrypted and stored in secure data centers.
Additional features include:
A drag-and-drop interface
Real-time co-authoring, shape-specific comments, and collaborative cursors
Data linking
Auto-visualization to generate org charts and ERDs
SQL import and export capabilities
Lucidchart also supports importing files from draw.io, Gliffy, OmniGraffle, and Microsoft Visio. The platform is integrated with Google Workspace and Drive, Microsoft Teams and other Office products, Atlassian’s Jira and Confluence, Salesforce, GitHub, Slack, and others. |
https://en.wikipedia.org/wiki/Narasimhan%E2%80%93Seshadri%20theorem | In mathematics, the Narasimhan–Seshadri theorem, proved by M. S. Narasimhan and C. S. Seshadri (1965), says that a holomorphic vector bundle over a Riemann surface is stable if and only if it comes from an irreducible projective unitary representation of the fundamental group.
The main case to understand is that of topologically trivial bundles, i.e. those of degree zero (and the other cases are a minor technical extension of this case). This case of the Narasimhan–Seshadri theorem says that a degree zero holomorphic vector bundle over a Riemann surface is stable if and only if it comes from an irreducible unitary representation of the fundamental group of the Riemann surface.
Donaldson (1983) gave another proof using differential geometry, and showed that the stable vector bundles have an essentially unique unitary connection of constant (scalar) curvature. In the degree zero case, Donaldson's version of the theorem says that a degree zero holomorphic vector bundle over a Riemann surface is stable if and only if it admits a flat unitary connection compatible with its holomorphic structure. Then the fundamental group representation appearing in the original statement is just the monodromy representation of this flat unitary connection.
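The degree-zero statements can be collected into one chain of equivalences (a standard schematic restatement; the notation X, E, ρ is chosen here, not taken from the text):

```latex
% X a compact Riemann surface, E a holomorphic vector bundle of rank n with deg E = 0:
E \ \text{stable}
\quad\Longleftrightarrow\quad
E \cong E_{\rho}\ \text{for some irreducible unitary}\ \rho\colon \pi_1(X) \to U(n)
\quad\Longleftrightarrow\quad
E \ \text{admits a compatible flat unitary connection,}
% where, per Donaldson's formulation, \rho is the monodromy representation
% of that flat unitary connection.
```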
See also
Nonabelian Hodge correspondence
Kobayashi–Hitchin correspondence
Stable vector bundle |
https://en.wikipedia.org/wiki/KOMDIV-32 | The KOMDIV-32 is a family of 32-bit microprocessors developed and manufactured by the Scientific Research Institute of System Development (NIISI) of the Russian Academy of Sciences. The manufacturing plant of NIISI is located in Dubna on the grounds of the Kurchatov Institute. The KOMDIV-32 processors are intended primarily for spacecraft applications and many of them are radiation hardened (rad-hard).
These microprocessors are compatible with MIPS R3000 and have an integrated MIPS R3010 compatible floating-point unit.
Overview
Details
1V812
0.5 µm CMOS process, 3-layer metal
108-pin ceramic Quad Flat Package (QFP)
1.5 million transistors, 8KB L1 instruction cache, 8KB L1 data cache, compatible with IDT 79R3081E
1890VM1T
0.5 µm CMOS process
1890VM2T
0.35 µm CMOS process
1990VM2T
0.35 µm silicon on insulator (SOI) CMOS process
108-pin ceramic Quad Flat Package (QFP)
working temperature from -60 to 125 °C
5890VM1Т
0.5 µm silicon on insulator (SOI) CMOS process
108-pin ceramic Quad Flat Package (QFP)
cache (8KB each for data and instructions)
working temperature from -60 to 125 °C
5890VE1Т
0.5 µm SOI CMOS process
240-pin ceramic QFP
radiation tolerance to not less than 200 kRad, working temperature from -60 to 125 °C
System-on-a-chip (SoC) including PCI master / slave, 16 GPIO, 3 UART, 3 32-bit timers
cache (8KB each for data and instructions)
second-sourced by MVC Nizhny Novgorod under the name 1904VE1T with a clock rate of 40 MHz
1900VM2T
development name Rezerv-32
0.35 µm SOI CMOS process
108-pin ceramic QFP
radiation tolerance to not less than 200 kRad, working temperature from -60 to 125 °C
triple modular redundancy on block level with self-healing
both registers and cache (4KB each for data and instructions) are implemented as dual interlocked storage cells (DICE)
1907VM014
0.25 µm SOI CMOS process; manufacturing to be moved to Mikron
256-pin ceramic QFP
production planned for 2016 (previously this device was planned to go into producti |
https://en.wikipedia.org/wiki/Low-density%20parity-check%20code | In information theory, a low-density parity-check (LDPC) code is a linear error correcting code, a method of transmitting a message over a noisy transmission channel. An LDPC code is constructed using a sparse Tanner graph (subclass of the bipartite graph). LDPC codes are capacity-approaching codes, which means that practical constructions exist that allow the noise threshold to be set very close to the theoretical maximum (the Shannon limit) for a symmetric memoryless channel. The noise threshold defines an upper bound for the channel noise, up to which the probability of lost information can be made as small as desired. Using iterative belief propagation techniques, LDPC codes can be decoded in time linear to their block length.
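To make the decoding idea concrete, here is a toy sketch of Gallager's bit-flipping decoder, a simpler cousin of the belief-propagation decoding mentioned above (everything here is illustrative: the parity-check matrix is the small, dense Hamming(7,4) matrix rather than a real large sparse LDPC matrix, and the function names are ours):

```python
# Toy bit-flipping decoder over a parity-check matrix H.
# H here is Hamming(7,4); real LDPC matrices are large and sparse.
H = [
    [1, 0, 1, 0, 1, 0, 1],
    [0, 1, 1, 0, 0, 1, 1],
    [0, 0, 0, 1, 1, 1, 1],
]

def syndrome(H, word):
    """One parity bit per check row; all zeros means a valid codeword."""
    return [sum(h * b for h, b in zip(row, word)) % 2 for row in H]

def bit_flip_decode(H, received, max_iters=20):
    word = list(received)
    for _ in range(max_iters):
        s = syndrome(H, word)
        if not any(s):
            break  # every parity check satisfied
        # flip the bit involved in the most unsatisfied checks
        fails = [sum(s[i] for i in range(len(H)) if H[i][j])
                 for j in range(len(word))]
        word[fails.index(max(fails))] ^= 1
    return word

codeword = [1, 1, 0, 1, 0, 0, 1]   # satisfies all three checks
received = [1, 1, 1, 1, 0, 0, 1]   # single bit error in position 2
```

Belief propagation replaces the hard flip decision with iterated probabilistic messages between bit and check nodes on the Tanner graph, but the structure of the iteration, checks scoring bits and bits updating in response, is the same.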
LDPC codes are finding increasing use in applications requiring reliable and highly efficient information transfer over bandwidth-constrained or return-channel-constrained links in the presence of corrupting noise. Implementation of LDPC codes has lagged behind that of other codes, notably turbo codes. The fundamental patent for turbo codes expired on August 29, 2013.
LDPC codes are also known as Gallager codes, in honor of Robert G. Gallager, who developed the LDPC concept in his doctoral dissertation at the Massachusetts Institute of Technology in 1960. LDPC codes have also been shown to have ideal combinatorial properties. In his dissertation, Gallager showed that LDPC codes achieve the Gilbert–Varshamov bound for linear codes over binary fields with high probability. In 2020 it was shown that Gallager's LDPC codes achieve list decoding capacity and also achieve the Gilbert–Varshamov bound for linear codes over general fields.
History
Impractical to implement when first developed by Gallager in 1963, LDPC codes were forgotten until his work was rediscovered in 1996. Turbo codes, another class of capacity-approaching codes discovered in 1993, became the coding scheme of choice in the late 1990s, used for applications such as the Deep |
https://en.wikipedia.org/wiki/Basilar%20part%20of%20occipital%20bone | The basilar part of the occipital bone (also basioccipital) extends forward and upward from the foramen magnum, and presents in front an area more or less quadrilateral in outline.
In the young skull this area is rough and uneven, and is joined to the body of the sphenoid by a plate of cartilage.
By the twenty-fifth year this cartilaginous plate is ossified, and the occipital and sphenoid form a continuous bone.
Surfaces
On its lower surface, about 1 cm in front of the foramen magnum, is the pharyngeal tubercle, which gives attachment to the fibrous raphe of the pharynx.
On either side of the middle line the longus capitis and rectus capitis anterior are inserted, and immediately in front of the foramen magnum the anterior atlantooccipital membrane is attached.
The upper surface, which constitutes the lower half of the clivus, presents a broad, shallow groove which inclines upward and forward from the foramen magnum; it supports the medulla oblongata, and near the margin of the foramen magnum gives attachment to the tectorial membrane.
On the lateral margins of this surface are faint grooves for the inferior petrosal sinuses.
Additional images |
https://en.wikipedia.org/wiki/Classification%20theorem | In mathematics, a classification theorem answers the classification problem "What are the objects of a given type, up to some equivalence?". It gives a non-redundant enumeration: each object is equivalent to exactly one class.
A few issues related to classification are the following.
The equivalence problem is "given two objects, determine if they are equivalent".
A complete set of invariants, together with which invariants are realizable, solves the classification problem, and is often a step in solving it.
A computable complete set of invariants (together with which invariants are realizable) solves both the classification problem and the equivalence problem.
A canonical form solves the classification problem, and is more data: it not only classifies every class, but provides a distinguished (canonical) element of each class.
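As a concrete illustration (a standard textbook example, not stated in the article itself), the classification of finitely generated abelian groups exhibits all three notions at once:

```latex
G \;\cong\; \mathbb{Z}^{r} \,\oplus\, \mathbb{Z}/n_1\mathbb{Z} \,\oplus\, \cdots \,\oplus\, \mathbb{Z}/n_k\mathbb{Z},
\qquad n_1 \mid n_2 \mid \cdots \mid n_k .
```

The rank $r$ together with the invariant factors $n_1, \ldots, n_k$ is a complete (and computable) set of invariants, so it solves both the classification problem and the equivalence problem, while the displayed direct sum is a canonical form: a distinguished representative of each isomorphism class.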
There exist many classification theorems in mathematics, as described below.
Geometry
Classification of Euclidean plane isometries
Classification theorems of surfaces
Classification of two-dimensional closed manifolds
Enriques–Kodaira classification of algebraic surfaces (complex dimension two, real dimension four)
Nielsen–Thurston classification which characterizes homeomorphisms of a compact surface
Thurston's eight model geometries, and the geometrization conjecture
Berger classification
Classification of Riemannian symmetric spaces
Classification of 3-dimensional lens spaces
Classification of manifolds
Algebra
Classification of finite simple groups
Classification of Abelian groups
Classification of finitely generated abelian groups
Classification of rank 3 permutation groups
Classification of 2-transitive permutation groups
Artin–Wedderburn theorem — a classification theorem for semisimple rings
Classification of Clifford algebras
Classification of low-dimensional real Lie algebras
Classification of simple Lie algebras and groups
Classification of simple complex Lie algebras
Classification of simple real Lie algebras
Classification of centerless simple Lie gro |
https://en.wikipedia.org/wiki/Production%20equipment%20control | Production equipment control involves production equipment that resides on the shop floor of a manufacturing company; its purpose is to produce goods of the desired quality when provided with production resources of the required quality. In modern production lines the production equipment is fully automated using industrial control methods and involves limited unskilled labour participation. Modern production equipment consists of mechatronic modules that are integrated according to a control architecture. The most widely known architectures are hierarchy, polyarchy, heterarchy and hybrid. The methods for achieving a technical effect are described by control algorithms, which may or may not utilize formal methods in their design.
Industrial equipment
Formal methods |
https://en.wikipedia.org/wiki/Logic%20block | In computing, a logic block or configurable logic block (CLB) is a fundamental building block of field-programmable gate array (FPGA) technology. Logic blocks can be configured by the engineer to provide reconfigurable logic gates.
Logic blocks are the most common FPGA architecture, and are usually laid out within a logic block array. Logic blocks require I/O pads (to interface with external signals), and routing channels (to interconnect logic blocks).
Programmable logic blocks were invented by David W. Page and LuVerne R. Peterson, and defined within their 1985 patents.
Applications
An application circuit must be mapped into an FPGA with adequate resources. While the number of logic blocks and I/Os required is easily determined from the design, the number of routing tracks needed may vary considerably even among designs with the same amount of logic.
For example, a crossbar switch requires much more routing than a systolic array with the same gate count. Since unused routing tracks increase the cost (and decrease the performance) of the part without providing any benefit, FPGA manufacturers try to provide just enough tracks so that most designs that will fit in terms of lookup tables (LUTs) and I/Os can be routed. This is determined by estimates such as those derived from Rent's rule or by experiments with existing designs.
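The kind of estimate derived from Rent's rule can be sketched in a few lines: the rule states $T = t\,G^p$ for the external terminals $T$ of a block of $G$ gates. The coefficient values below are illustrative assumptions, not vendor data:

```python
def rent_terminals(gates, t=4.0, p=0.6):
    """Rent's rule estimate T = t * G**p: expected number of external
    terminals for a sub-circuit of `gates` gates. Both t (terminals per
    gate) and p (the Rent exponent) are illustrative, design-dependent."""
    return t * gates ** p

# Sub-linear growth: quadrupling the logic roughly doubles terminal
# demand (factor 4**0.6, about 2.3), which is why routing-track budgets
# are estimated statistically rather than scaled linearly with LUT count.
estimates = {g: rent_terminals(g) for g in (100, 200, 400)}
```

A design with a higher Rent exponent (a crossbar-like interconnect pattern) would demand disproportionately more tracks than a systolic array of the same gate count, matching the observation above.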
FPGAs are also widely used for systems validation including pre-silicon validation, post-silicon validation, and firmware development. This allows chip companies to validate their design before the chip is produced in the factory, reducing the time-to-market.
Architecture
In general, a logic block consists of a few logic cells (each cell is called an adaptive logic module (ALM), a logic element (LE), slice, etc.). A typical cell consists of a 4-input LUT, a full adder (FA), and a D-type flip-flop (DFF), as shown to the right. The LUTs are in this figure split into two 3-input LUTs. In normal mode those are combined into a 4-input LUT th |
https://en.wikipedia.org/wiki/Mutalyzer | Mutalyzer is a web-based software tool which was primarily developed to check the description of sequence variants identified in a gene during genetic testing. Mutalyzer applies the rules of the standard human sequence variant nomenclature and can correct descriptions accordingly. Apart from the sequence variant description, Mutalyzer requires a DNA sequence record containing the transcript and protein feature annotation as a reference. Mutalyzer 2 accepts GenBank and Locus Reference Genomic (LRG) records. The annotation is also used to apply the correct codon translation tables and generate DNA and protein variant descriptions for any organism. The Mutalyzer server supports programmatic access via a SOAP Web service described in the Web Services Description Language (WSDL) and an HTTP/RPC+JSON web service.
Background
Genetic testing is generally performed in families with hereditary disease. Any sequence variant identified in a gene can be described in test reports using the position of the change and the nucleotide or amino acid involved. With this simple rule, a deletion of the nucleotide guanine (G) in a stretch of 4 G nucleotides might be described in 4 different ways, when each of the G positions is used. Although different descriptions do not affect the functional consequences of the change, they may obfuscate the fact that two persons share the same variant or the real frequency of a variant in the population. The standard human sequence variant nomenclature proposed by the Human Genome Variation Society was developed to solve this problem. Proper variant descriptions are expected to facilitate searches for more information about the functional consequences in the literature and in gene variant or locus-specific databases (LSDBs).
Mutalyzer is used by the Leiden Open Variation Database (LOVD), which stores sequence variant information for many human genes, to check variant descriptions before submission of new data. This helps data sharing, display and i |
https://en.wikipedia.org/wiki/Forecasting%20complexity | Forecasting complexity is a measure of complexity, put forward (under a different original name) by the physicist Peter Grassberger.
It was later renamed "statistical complexity" by James P. Crutchfield and Karl Young. |
https://en.wikipedia.org/wiki/Einstein%20field%20equations | In the general theory of relativity, the Einstein field equations (EFE; also known as Einstein's equations) relate the geometry of spacetime to the distribution of matter within it.
The equations were published by Albert Einstein in 1915 in the form of a tensor equation which related the local spacetime curvature (expressed by the Einstein tensor) with the local energy, momentum and stress within that spacetime (expressed by the stress–energy tensor).
Analogously to the way that electromagnetic fields are related to the distribution of charges and currents via Maxwell's equations, the EFE relate the spacetime geometry to the distribution of mass–energy, momentum and stress, that is, they determine the metric tensor of spacetime for a given arrangement of stress–energy–momentum in the spacetime. The relationship between the metric tensor and the Einstein tensor allows the EFE to be written as a set of nonlinear partial differential equations when used in this way. The solutions of the EFE are the components of the metric tensor. The inertial trajectories of particles and radiation (geodesics) in the resulting geometry are then calculated using the geodesic equation.
As well as implying local energy–momentum conservation, the EFE reduce to Newton's law of gravitation in the limit of a weak gravitational field and velocities that are much less than the speed of light.
Exact solutions for the EFE can only be found under simplifying assumptions such as symmetry. Special classes of exact solutions are most often studied since they model many gravitational phenomena, such as rotating black holes and the expanding universe. Further simplification is achieved in approximating the spacetime as having only small deviations from flat spacetime, leading to the linearized EFE. These equations are used to study phenomena such as gravitational waves.
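The weak-field step can be sketched as follows (standard textbook conventions with $c = 1$; a summary, not a full derivation):

```latex
g_{\mu\nu} = \eta_{\mu\nu} + h_{\mu\nu}, \qquad |h_{\mu\nu}| \ll 1,
\qquad \bar{h}_{\mu\nu} \equiv h_{\mu\nu} - \tfrac{1}{2}\,\eta_{\mu\nu}\,h .
```

Imposing the Lorenz gauge $\partial^\mu \bar{h}_{\mu\nu} = 0$, the EFE reduce to the wave equation $\Box \bar{h}_{\mu\nu} = -16\pi G\, T_{\mu\nu}$, whose vacuum solutions propagate at the speed of light: the gravitational waves mentioned above.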
Mathematical form
The Einstein field equations (EFE) may be written in the form:
$$G_{\mu\nu} + \Lambda g_{\mu\nu} = \kappa T_{\mu\nu}$$
where $G_{\mu\nu}$ is the Einstein tensor, $g_{\mu\nu}$ is the metric tensor, $T_{\mu\nu}$ is the stress–energy tensor |
https://en.wikipedia.org/wiki/Hardy%E2%80%93Ramanujan%20theorem | In mathematics, the Hardy–Ramanujan theorem, proved by Ramanujan and checked by Hardy, states that the normal order of the number ω(n) of distinct prime factors of a number n is log(log(n)).
Roughly speaking, this means that most numbers have about this number of distinct prime factors.
Precise statement
A more precise version states that for every real-valued function ψ(n) that tends to infinity as n tends to infinity
$$|\omega(n) - \log\log n| < \psi(n)\sqrt{\log\log n}$$
or more traditionally
$$|\omega(n) - \log\log n| < (\log\log n)^{\frac{1}{2}+\varepsilon}$$
for almost all (all but an infinitesimal proportion of) integers. That is, let g(x) be the number of positive integers n less than x for which the above inequality fails: then g(x)/x converges to zero as x goes to infinity.
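A quick empirical check is easy to run. One caveat (a known fact, not stated above): the mean of ω(n) exceeds log log n by an additive constant, the Mertens constant ≈ 0.26, so the agreement is approximate:

```python
from math import log

def omega(n):
    """Number of distinct prime factors of n, by trial division."""
    count, d = 0, 2
    while d * d <= n:
        if n % d == 0:
            count += 1
            while n % d == 0:
                n //= d
        d += 1
    return count + (1 if n > 1 else 0)

# Typical values of omega(n) cluster around log log n: for n up to
# 20000 the mean lies near log(log(20000)) ~ 2.29, shifted up by a
# small additive constant.
N = 20_000
mean = sum(omega(n) for n in range(2, N + 1)) / (N - 1)
```

A histogram of ω(n) over the same range would also show the narrow spread around log log n that the theorem formalizes.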
History
A simple proof to the result was given by Pál Turán, who used the Turán sieve to prove that
$$\sum_{n \le x} |\omega(n) - \log\log n|^2 \ll x \log\log x.$$
Generalizations
The same results are true of Ω(n), the number of prime factors of n counted with multiplicity.
This theorem is generalized by the Erdős–Kac theorem, which shows that ω(n) is essentially normally distributed. |
https://en.wikipedia.org/wiki/Google%20Duo | Google Duo is a deprecated proprietary voice over IP (VoIP) and videotelephony service developed by Google, available for Android, iOS and web browsers. It let users make and receive one-to-one and group audio and video calls with other Duo users in high definition, using end-to-end encryption by default. Duo could be used either with a phone number or a Google account, allowing users to call someone from their contact list.
Google Duo was announced at Google's developer conference on May 18, 2016, and began its worldwide release on August 16, 2016. Google announced in 2022 that the service would be merged into Google Meet, and it was shut down by the end of the year.
History
In December 2016, Google Duo replaced Hangouts within the suite of Google apps device manufacturers must install in order to gain access to the Google Play, with Hangouts instead becoming optional.
In August 2020, it was reported that Google was planning to eventually merge Google Duo with the business-oriented Google Meet. By December 2021 this objective had been dropped, and Duo continued to be available and updated. In June 2022, Google reversed course and announced that Duo and Meet would, in fact, be merged. The merger began in August, with the Duo mobile app renamed to Meet and the original Meet app renamed "Meet Original" and scheduled to be phased out. Google had said the Duo web app would be redirected to the Google Meet web app, but as of April 2023, video calling and meetings are still separate on the web at duo.google.com and meet.google.com.
Technologies
Google Duo was optimized for low-bandwidth mobile networks through WebRTC and used QUIC over UDP. Optimization was further achieved by degrading video quality based on monitored network quality. For packet loss concealment, Duo used machine-learning technology developed by Google DeepMind.
In February 2021, Google announced a new very low-bitrate codec for speech compression called "Lyra" that could operate at network speeds as low as 3 kbps that avoid |
https://en.wikipedia.org/wiki/Scarr%E2%80%93Rowe%20effect | In behavioral genetics, the Scarr–Rowe effect, also known as the Scarr–Rowe hypothesis, refers to the proposed moderating effect of low socioeconomic status on the heritability of children's IQ. According to this hypothesis, lower socioeconomic status and greater exposure to social disadvantage during childhood leads to a decrease in the heritability of IQ, as compared to children raised in more advantaged environments. It is considered an example of gene–environment interaction. This hypothesized effect was first proposed by Sandra Scarr, who found support for it in a 1971 study of twins in Philadelphia, and these results were replicated by David C. Rowe in 1999. Since then, similar results have been replicated numerous times, though not all replication studies have yielded positive results. A 2015 meta-analysis found that the effect was predominant in the United States while less evident in societies with robust child welfare systems.
Original research
In Sandra Scarr's original work, she outlines the methods behind the study of genetic and environmental variance, using data from the 1960 US Census tract to estimate socioeconomic status (SES), and the Iowa Tests of Basic Skills to study genetic and environmental variations across children from advantaged and disadvantaged populations. This study focused on the heritability of IQ by social class and race, though the social and racial groups were too varied to permit reliable generalizations between groups. Across both races, the total variance was generally found to be greater in the groups of higher socioeconomic status. It is suggested that, while genetic factors are not as significant in determining aptitude in lower SES groups of either race, there is a larger variance in phenotypes among higher SES children.
One of the major limitations of Scarr's original work is that the sample of twins was separated into same-sex and opposite-sex as opposed to monozygotic (MZ) and dizygotic (DZ), a method which |
https://en.wikipedia.org/wiki/Cylindric%20algebra | In mathematics, the notion of cylindric algebra, invented by Alfred Tarski, arises naturally in the algebraization of first-order logic with equality. This is comparable to the role Boolean algebras play for propositional logic. Cylindric algebras are Boolean algebras equipped with additional cylindrification operations that model quantification and equality. They differ from polyadic algebras in that the latter do not model equality.
Definition of a cylindric algebra
A cylindric algebra of dimension $\alpha$ (where $\alpha$ is any ordinal number) is an algebraic structure $(A, +, \cdot, -, 0, 1, c_\kappa, d_{\kappa\lambda})_{\kappa,\lambda<\alpha}$ such that $(A, +, \cdot, -, 0, 1)$ is a Boolean algebra, $c_\kappa$ a unary operator on $A$ for every $\kappa$ (called a cylindrification), and $d_{\kappa\lambda}$ a distinguished element of $A$ for every $\kappa$ and $\lambda$ (called a diagonal), such that the following hold:
(C1) $c_\kappa 0 = 0$
(C2) $x \le c_\kappa x$
(C3) $c_\kappa(x \cdot c_\kappa y) = c_\kappa x \cdot c_\kappa y$
(C4) $c_\kappa c_\lambda x = c_\lambda c_\kappa x$
(C5) $d_{\kappa\kappa} = 1$
(C6) If $\kappa \notin \{\lambda, \mu\}$, then $d_{\lambda\mu} = c_\kappa(d_{\lambda\kappa} \cdot d_{\kappa\mu})$
(C7) If $\kappa \neq \lambda$, then $c_\kappa(d_{\kappa\lambda} \cdot x) \cdot c_\kappa(d_{\kappa\lambda} \cdot -x) = 0$
Assuming a presentation of first-order logic without function symbols,
the operator $c_\kappa$ models existential quantification over variable $v_\kappa$ in formula $\varphi$ while the operator $d_{\kappa\lambda}$ models the equality of variables $v_\kappa$ and $v_\lambda$. Hence, reformulated using standard logical notations, the axioms read as
(C1) $\exists v_\kappa.\, \mathit{false} \iff \mathit{false}$
(C2) $\varphi \implies \exists v_\kappa.\, \varphi$
(C3) $\exists v_\kappa. (\varphi \wedge \exists v_\kappa. \psi) \iff (\exists v_\kappa. \varphi) \wedge (\exists v_\kappa. \psi)$
(C4) $\exists v_\kappa \exists v_\lambda. \varphi \iff \exists v_\lambda \exists v_\kappa. \varphi$
(C5) $(v_\kappa = v_\kappa) \iff \mathit{true}$
(C6) If $v_\kappa$ is a variable different from both $v_\lambda$ and $v_\mu$, then $(v_\lambda = v_\mu) \iff \exists v_\kappa. (v_\lambda = v_\kappa \wedge v_\kappa = v_\mu)$
(C7) If $v_\kappa$ and $v_\lambda$ are different variables, then $\exists v_\kappa. (v_\kappa = v_\lambda \wedge \varphi) \wedge \exists v_\kappa. (v_\kappa = v_\lambda \wedge \neg\varphi) \iff \mathit{false}$
Cylindric set algebras
A cylindric set algebra of dimension $\alpha$ is an algebraic structure $(A, \cup, \cap, -, \emptyset, X^\alpha, c_\kappa, d_{\kappa\lambda})_{\kappa,\lambda<\alpha}$ such that $A$ is a field of sets of sequences in $X^\alpha$, $c_\kappa S$ is given by $\{\tau \in X^\alpha : \exists \sigma \in S,\ \forall \lambda \neq \kappa,\ \tau(\lambda) = \sigma(\lambda)\}$, and $d_{\kappa\lambda}$ is given by $\{\tau \in X^\alpha : \tau(\kappa) = \tau(\lambda)\}$. It necessarily validates the axioms C1–C7 of a cylindric algebra, with $\cup$ instead of $+$, $\cap$ instead of $\cdot$, set complement for complement, empty set as 0, $X^\alpha$ as the unit, and $\subseteq$ instead of $\le$. The set X is called the base.
A representation of a cylindric algebra is an isomorphism from that algebra to a cylindric set algebra. Not every cylindric algebra has a representation as a cylindric set algebra. It is easier to connect the semantics of first-order predicate logic with cylindric set algebra. (For more details, see .)
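A finite instance makes the set-algebra reading concrete. The sketch below (dimension 2 over a two-element base; the function names are my own) checks a few of the axioms by brute force:

```python
from itertools import product

# Toy cylindric set algebra: dimension 2 over the base X = {0, 1}.
# Elements of the algebra are sets of points in X^2.
X, DIM = (0, 1), 2
UNIT = set(product(X, repeat=DIM))  # X^alpha, the unit element

def c(k, S):
    """Cylindrification c_k: all points agreeing with some point of S
    on every coordinate except possibly k."""
    return {p for p in UNIT
            if any(all(p[i] == q[i] for i in range(DIM) if i != k)
                   for q in S)}

def d(k, l):
    """Diagonal d_kl: points whose k-th and l-th coordinates coincide."""
    return {p for p in UNIT if p[k] == p[l]}

def subsets(U):
    """All subsets of the finite set U."""
    U = sorted(U)
    for bits in product((0, 1), repeat=len(U)):
        yield {u for u, b in zip(U, bits) if b}

# Brute-force check of a few axioms over all 16 elements of the algebra.
assert c(0, set()) == set()                    # (C1): c_k 0 = 0
assert d(0, 0) == UNIT                         # (C5): d_kk = 1
for S in subsets(UNIT):
    assert S <= c(0, S)                        # (C2): x <= c_k x
    assert c(0, c(1, S)) == c(1, c(0, S))      # (C4): cylindrifications commute
```

Reading the sets as solution sets of formulas with two free variables, c(0, S) is exactly "there exists a value of the first variable", which is the semantic bridge to first-order logic mentioned above.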
Generalizations
Cylindric algebras have been genera |
https://en.wikipedia.org/wiki/Larson%E2%80%93Miller%20relation | The Larson–Miller relation, also widely known as the Larson–Miller parameter and often abbreviated LMP, is a parametric relation used to extrapolate experimental data on creep and rupture life of engineering materials.
Background and usage
F.R. Larson and J. Miller proposed that creep rate could adequately be described by the Arrhenius type equation:
$$r = A\, e^{-\Delta H/(RT)}$$
where r is the creep process rate, A is a constant, R is the universal gas constant, T is the absolute temperature, and $\Delta H$ is the activation energy for the creep process. Taking the natural log of both sides:
$$\ln r = \ln A - \frac{\Delta H}{RT}$$
With some rearrangement:
$$\frac{\Delta H}{RT} = \ln A - \ln r$$
Using the fact that creep rate is inversely proportional to rupture time t, the equation can be written as:
$$\frac{1}{t} = A\, e^{-\Delta H/(RT)}$$
Taking the natural log:
$$-\ln t = \ln A - \frac{\Delta H}{RT}$$
After some rearrangement the relation finally becomes:
$$T(C + \ln t) = B$$, where $B = \Delta H / R$ and $C = \ln A$.
This equation is of the same form as the Larson–Miller relation:
$$\mathrm{LMP} = T(C + \log t)$$
where the quantity LMP is known as the Larson–Miller parameter. Using the assumption that activation energy is independent of applied stress, the equation can be used to relate the difference in rupture life to differences in temperature for a given stress. The material constant C is typically found to be in the range of 20 to 22 for metals when time is expressed in hours and temperature in degrees Rankine.
The Larson–Miller model is used for experimental tests so that results at certain temperatures and stresses can predict rupture lives of time spans that would be impractical to reproduce in the laboratory.
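The extrapolation just described can be sketched numerically. Kelvin is used here and C = 20 is assumed; the temperatures and times are illustrative, not measured data:

```python
import math

def lmp(T, t, C=20.0):
    """Larson-Miller parameter T*(C + log10(t)); T is absolute
    temperature, t rupture time in hours, C a material constant
    (typically ~20 for metals)."""
    return T * (C + math.log10(t))

def rupture_time(T, P, C=20.0):
    """Invert LMP = T*(C + log10 t) for the rupture time at T."""
    return 10 ** (P / T - C)

# A 100 h laboratory rupture test at 1255 K fixes the LMP at this
# stress; the same parameter then predicts a far longer rupture life
# at a lower service temperature - the point of the parametric method.
P = lmp(1255, 100)
t_service = rupture_time(1144, P)
```

Because the parameter trades temperature against log time, a short hot test stands in for a service life that would take years to reproduce directly.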
Expanding the equation as a Taylor series makes the relationship easier to understand. Only the first terms are kept.
Changing the time by a factor of 10 changes the logarithm by 1, so the LMP changes by an amount equal to the temperature.
To get an equal change in LMP by changing the temperature, the temperature needs to be raised or lowered by about 5% of its absolute value.
Typically a 5% increase in absolute temperature will increase the rate of creep by a factor of ten.
The equation was developed during the |
https://en.wikipedia.org/wiki/Private%20Internet%20Access | Private Internet Access (PIA) is a personal VPN service that allows users to connect to multiple locations. In 2018, former Mt. Gox CEO Mark Karpelès was named chief technology officer of PIA's parent company, London Trust Media. In November 2019, Private Internet Access was acquired by Kape Technologies.
Company
The CEO of Private Internet Access (and its parent company, London Trust Media, Inc.) is Ted Kim. The company was founded by Andrew Lee. Lee started the company PIA because he wanted a way to prevent Internet Relay Chat from revealing IP addresses.
On November 18, 2019, Private Internet Access announced that it would be merged into Kape Technologies, which operates three competing VPN services, CyberGhost, ExpressVPN and Zenmate. Some users objected to the acquisition, as Kape (under its former name, Crossrider) previously developed browser toolbars bundled with potentially unwanted programs.
History
Founded in 2010, Private Internet Access was formed by entrepreneur Andrew Lee under parent company London Trust Media. The company grew out of Lee's interest in taking privacy mainstream. Its stated goal was to create the "next VPN", and to that end it open-sourced its client software code for everyone to use. During the acquisition by Kape Technologies, PIA had to assure its users that privacy and security would remain the company's top priority as it continued to work under Kape's umbrella. However, due to Kape Technologies' history, a good amount of faith in PIA was lost.
After merging with Kape Technologies, Private Internet Access became one of many privacy software products offered by the corporation; along with Private Internet Access, Kape also offers CyberGhost, Intego, Webselenese and Restoro. Previously known as Crossrider, the UK-based Kape Technologies changed its name in 2018 as part of a rebranding intended to "escape a strong association to the past activities of the company", through which they had issues wit |
https://en.wikipedia.org/wiki/After%20Dark%20%28software%29 | After Dark is a series of computer screensaver software introduced by Berkeley Systems in 1989 for the Apple Macintosh, and in 1991 for Microsoft Windows.
Following the original, additional editions included More After Dark, Before Dark, and editions themed around licensed properties such as Star Trek, The Simpsons, Looney Tunes, Marvel, and Disney characters.
On top of the included animated screensavers, the program allowed for the development and use of third-party modules, many hundreds of which were created at the height of its popularity.
Flying Toasters
The most famous of the included screensaver modules is the iconic Flying Toasters, which featured 1940s-style chrome toasters sporting bird-like wings, flying across the screen with pieces of toast. Engineer Jack Eastman came up with the display after seeing a toaster in the kitchen during a late-night programming session and imagining the addition of wings. A slider in the Flying Toasters module enabled users to adjust the toast's darkness and an updated Flying Toasters Pro module added a choice of music—Richard Wagner's Ride of the Valkyries or a flying toaster anthem with optional karaoke lyrics. Yet another version called Flying Toasters! added bagels and pastries, baby toasters, and more elaborate toaster animation. The Flying Toasters were one of the key reasons that After Dark became popular, and Berkeley began to produce other merchandising products such as T-shirts with the Flying Toaster image and slogans such as "The 51st Flying Toaster Squadron: On a mission to save your screen!"
The toasters were the subject of two lawsuits, the first in 1993, Berkeley Systems vs Delrina Corporation, over a module of Delrina's Opus 'N Bill screensaver in which Opus the penguin shoots down the toasters. After a U.S. District judge ruled that Delrina's "Death Toasters" was infringing, Delrina later changed the wings of the toasters to propellers. The second case was brought in 1994 by 1960s rock group Jefferson |
https://en.wikipedia.org/wiki/Stanislav%20Smirnov | Stanislav Konstantinovich Smirnov (; born 3 September 1970) is a Russian mathematician currently working at the University of Geneva. He was awarded the Fields Medal in 2010. His research involves complex analysis, dynamical systems and probability theory.
Career
Smirnov's Ph.D. was conducted at Caltech under advisor Nikolai Makarov. In 1998 he was employed as part of the faculty at the Royal Institute of Technology in Stockholm, after which he took up his second position as a professor in the Analysis, Mathematical Physics and Probability group at the University of Geneva in 2003.
Research
Smirnov has worked on percolation theory, where he proved Cardy's formula for critical site percolation on the triangular lattice and thereby established conformal invariance; the general conjecture is thus proved in this special case. Smirnov's theorem has led to a fairly complete theory for percolation on the triangular lattice, and to its relationship to the Schramm–Loewner evolution introduced by Oded Schramm. He also established conformality for the two-dimensional critical Ising model.
Awards
Smirnov was awarded the Saint Petersburg Mathematical Society Prize (1997), the Clay Research Award (2001), the Salem Prize (joint with Oded Schramm, 2001), the Göran Gustafsson Prize (2001), the Rollo Davidson Prize (2002), and the Prize of the European Mathematical Society (2004). In 2010 Smirnov was awarded the Fields medal for his work on the mathematical foundations of statistical physics, particularly finite lattice models. His citation read "for the proof of conformal invariance of percolation and the planar Ising model in statistical physics".
Publications
Probability and Statistical Physics in St. Petersburg, American Math Society, (2015) |
https://en.wikipedia.org/wiki/K%20shortest%20path%20routing | The k shortest path routing problem is a generalization of the shortest path routing problem in a given network. It asks not only about a shortest path but also about next k−1 shortest paths (which may be longer than the shortest path). A variation of the problem is the loopless k shortest paths.
Finding k shortest paths is possible by extending Dijkstra's algorithm or the Bellman–Ford algorithm.
History
Since 1957, many papers have been published on the k shortest path routing problem. Most of the fundamental works were done between the 1960s and 2001. Since then, most of the research has been on the problem's applications and its variants. In 2010, Michael Günther et al. published a book on symbolic calculation of k-shortest paths and related measures with the stochastic process algebra tool CASPA.
Algorithm
The Dijkstra algorithm can be generalized to find the k shortest paths.
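One common generalization (a sketch under the assumption that loops are allowed and only path lengths are wanted) lets each node be settled up to k times instead of once:

```python
import heapq

def k_shortest_paths(graph, src, dst, k):
    """Lengths of the k shortest (possibly looping) walks from src to
    dst via a generalized Dijkstra: each node may be popped up to k
    times. graph maps node -> list of (neighbor, weight). A textbook
    sketch, not an optimized implementation."""
    counts = {u: 0 for u in graph}
    heap = [(0, src)]
    lengths = []
    while heap and len(lengths) < k:
        dist, u = heapq.heappop(heap)
        if counts[u] >= k:
            continue                      # node already settled k times
        counts[u] += 1
        if u == dst:
            lengths.append(dist)          # next-shortest arrival at dst
        for v, w in graph.get(u, []):
            if counts[v] < k:
                heapq.heappush(heap, (dist + w, v))
    return lengths

g = {"a": [("b", 1), ("c", 4)], "b": [("c", 1), ("d", 5)],
     "c": [("d", 1)], "d": []}
# The three shortest a->d walk lengths in g are 3, 5 and 6.
```

Recovering the paths themselves, or enforcing looplessness, requires the heavier machinery (Eppstein's and Yen's algorithms) discussed below.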
Variations
There are two main variations of the k shortest path routing problem. In one variation, paths are allowed to visit the same node more than once, thus creating loops. In another variation, paths are required to be simple and loopless. The loopy version is solvable using Eppstein's algorithm and the loopless variation is solvable by Yen's algorithm.
Loopy variant
In this variant, the problem is simplified by not requiring paths to be loopless. A solution was given by B. L. Fox in 1975, in which the k shortest paths are determined in $O(m + kn \log n)$ asymptotic time complexity (using big O notation). In 1998, David Eppstein reported an approach that maintains an asymptotic complexity of $O(m + n \log n + k)$ by computing an implicit representation of the paths, each of which can be output in O(n) extra time. In 2015, Akuba et al. devised an indexing method as a significantly faster alternative for Eppstein's algorithm, in which a data structure called an index is constructed from a graph and then top-k distances between arbitrary pairs of vertices can be rapidly obtained.
Loopless variant
In the loopless variant, th |
https://en.wikipedia.org/wiki/L%C3%A9o-Pariseau%20Prize | The Léo-Pariseau Prize is a Québécois prize which is awarded annually to a distinguished individual working in the field of biological or health sciences. The prize is awarded by the Association francophone pour le savoir (Acfas), and is named after Léo Pariseau, the first president of Acfas.
The award was inaugurated in 1944 and was the first Acfas prize. Prior to 1980 the prize was awarded to researchers in a large variety of disciplines, before being restricted to biological and health sciences. There are now ten annual prizes for researchers in different disciplines.
Winners
Source: Acfas – Prix de la Recherche Scientifique de l'Acfas – Prix Léo-Pariseau
1944 - Marie-Victorin Kirouac, botany, Université de Montréal
1945 - Paul-Antoine Giguère, chemistry, Université Laval
1946 - Marius Barbeau, ethnology, Université Laval
1947 - Jacques Rousseau, botany and ethnology, Université de Montréal
1948 - Léo Marion, chemistry, University of Ottawa
1949 - Jean Bruchési, history and political science, Université de Montréal
1950 - Louis-Charles Simard, pathology, Université de Montréal
1951 - Cyrias Ouellet, chemistry, Université Laval
1952 - Louis-Paul Dugal, physiology, Université de Montréal
1953 - Guy Frégault, history, Université de Montréal
1954 - Pierre Demers, physics, Université de Montréal
1955 - René Pomerleau, mycology, Université de Montréal
1956 - Marcel Rioux, anthropology, Université de Montréal
1957 - No prize awarded.
1958 - Roger Gaudry, chemistry, Université de Montréal
1959 - Lionel Daviault, entomology
1960 - Marcel Trudel, history, Université Laval
1961 - Raymond Lemieux, chemistry, University of Alberta
1962 - Charles-Philippe Leblond, histology, McGill University
1963 - Lionel Groulx, history, Université de Montréal
1964 - Larkin Kerwin, physics, Université Laval
1965 - Pierre Dansereau, ecology, Université du Québec à Montréal
1966 - Noël Mailloux, psychology, Université de Montréal
1967 - Albéric Boivin, physics, Université Laval
1968 - Lé |
https://en.wikipedia.org/wiki/List%20of%20countries%20by%20air%20pollution | The following list of countries by air pollution sorts the countries of the world according to their average measured concentration of particulate matter (PM2.5) in micrograms per cubic meter (µg/m³). The World Health Organization's recommended limit is 10 micrograms per cubic meter, although there are also various national guideline values, which are often much higher. Air pollution is among the biggest health problems of modern industrial society and is responsible for more than 10 percent of all deaths worldwide (nearly 4.5 million premature deaths in 2019), according to The Lancet. Air pollution can affect nearly every organ and system of the body, negatively affecting nature and humans alike. Air pollution is a particularly big problem in emerging and developing countries, where global environmental standards often cannot be met. The data in this list refers only to outdoor air quality and not indoor air quality, which caused an additional two million premature deaths in 2019.
List (2018−2022)
All data are valid for the year 2018-2022 and are taken from the IQAir 2022 World Air Quality Ranking
List (2021)
All data are valid for the year 2020 and are taken from the Air Quality Life Index (AQLI) of the University of Chicago. In addition to particulate matter pollution, the modeled potential loss of life expectancy of the population due to particulate matter pollution is given. |
https://en.wikipedia.org/wiki/Simon%20model | In applied probability theory, the Simon model is a class of stochastic models that results in a power-law distribution function. It was proposed by Herbert A. Simon to account for the wide range of empirical distributions following a power-law. It models the dynamics of a system of elements with associated counters (e.g., words and their frequencies in texts, or nodes in a network and their connectivity). In this model the dynamics of the system is based on constant growth via addition of new elements (new instances of words) as well as incrementing the counters (new occurrences of a word) at a rate proportional to their current values.
Description
To model this type of network growth as described above, Bornholdt and Ebel considered a network with $n$ nodes, each node with connectivity $k_i$, $i = 1, \ldots, n$. These nodes
form classes of $n_k$ nodes with identical connectivity $k$.
Repeat the following steps:
(i) With probability $\alpha$ add a new node and attach a link to it from an arbitrarily chosen node.
(ii) With probability $1 - \alpha$ add one link from an arbitrary node to a node of class $k$ chosen with a probability proportional to $k\, n_k$.
For this stochastic process, Simon found a stationary solution exhibiting power-law scaling, $P(k) \propto k^{-\gamma}$, with exponent $\gamma = 1 + \frac{1}{1 - \alpha}$
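A direct simulation of the process is easy to write. The sketch below is my own reading of the two rules, with α as the node-creation probability; long runs should develop a degree-distribution tail near the exponent γ = 1 + 1/(1 − α):

```python
import random

def simon_model(steps, alpha=0.05, seed=0):
    """Monte-Carlo sketch of the growth process described above: with
    probability alpha a new node arrives with one link from a random
    existing node; otherwise a new link lands on a node chosen with
    probability proportional to its degree. Returns the degree list."""
    rng = random.Random(seed)
    degrees = [1, 1]          # start from a single linked pair of nodes
    endpoints = [0, 1]        # each node listed once per link end, so
                              # random.choice is degree-proportional
    for _ in range(steps):
        if rng.random() < alpha:
            src = rng.randrange(len(degrees))   # arbitrary existing node
            degrees.append(1)                   # the new node
            degrees[src] += 1
            endpoints += [len(degrees) - 1, src]
        else:
            src = rng.randrange(len(degrees))   # arbitrary node
            dst = rng.choice(endpoints)         # preferential target
            degrees[src] += 1
            degrees[dst] += 1
            endpoints += [src, dst]
    return degrees

# For alpha = 0.05 the predicted stationary exponent is about 2.05.
deg = simon_model(20_000)
```

Each step adds exactly one link, so the total degree grows by 2 per step; fitting the tail of `deg` over several long runs is left as the obvious next experiment.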
Properties
(i) The Barabási–Albert (BA) model can be mapped to the subclass of Simon's model in which the probability of a node being
connected to another node with connectivity $k$ is simply proportional to $k$ (the same as the preferential attachment of the BA model). In other words, the Simon model describes a general class of stochastic processes that can result in a scale-free network, appropriate to capture Pareto and Zipf's laws.
(ii) The only free parameter of the model, $\alpha$, reflects the relative
growth of the number of nodes versus the number of links. In general $\alpha$ has small values; therefore, the scaling exponents can be predicted to be $\gamma \approx 2$. For instance, Bornholdt and Ebel studied the linking dynamics of the World Wide Web, and predicted the scaling exponent as $\gamma \approx 2.1$, which was consistent wit |
https://en.wikipedia.org/wiki/Window%20of%20the%20World | The Window of the World () is a theme park located in the western part of the city of Shenzhen in the People's Republic of China. It has about 130 reproductions of some of the most famous tourist attractions in the world squeezed into 48 hectares (118 acres). The 108 metre (354 ft) tall Eiffel Tower dominates the skyline and the sight of the Pyramids and the Taj Mahal all in proximity to each other are all part of the appeal of this theme park.
Transportation
The Window of the World Station on Line 1 and Line 2 of the Shenzhen Metro is located directly in front of the park. The Happy Line monorail has a stop near Window of the World.
A monorail and open cars run inside the park.
In media
In his autobiographical graphic novel Shenzhen, Guy Delisle visits the park with a Chinese acquaintance. The Park was a destination of The Amazing Race 28.
List of major attractions in the Window of the World
Europe region
The Matterhorn and the Alps between the Valais canton of Switzerland and the Aosta Valley region of Italy
The gloriette in the gardens at Schönbrunn Palace and the Johann Strauss monument at Stadtpark of Vienna, Austria
The Lion's Mound near Waterloo, Belgium
The Little Mermaid statue of Copenhagen, Denmark
The Eiffel Tower, Arc de Triomphe, Louvre Pyramid, Notre Dame cathedral, Grande Arche, Fountain of Warsaw, and Fontaine de l'Observatoire in Paris, Île-de-France
The Palace of Versailles near the town of Versailles, Île-de-France
Mont Saint-Michel in Normandy
The Pont du Gard aqueduct of Vers-Pont-du-Gard, Languedoc-Roussillon
The Cologne cathedral of Cologne, North Rhine-Westphalia
Neuschwanstein Castle of Hohenschwangau, Bavaria
The Acropolis of Athens, Attica
The Lion Gate at Mycenae of Mykines, Argolis
The Colosseum, St. Peter's Basilica, Palazzo Poli, Trajan's Column, and Spanish Steps of Rome, Lazio
Canals and St. Mark's Square of Venice, Veneto
The Leaning Tower and cathedral of Pisa, Tuscany
The Piazza della Signoria of Florence, Tuscany
The windmills and tulips |
https://en.wikipedia.org/wiki/Leslie%20Valiant | Leslie Gabriel Valiant (born 28 March 1949) is a British American computer scientist and computational theorist. He was born to a chemical engineer father and a translator mother. He is currently the T. Jefferson Coolidge Professor of Computer Science and Applied Mathematics at Harvard University. Valiant was awarded the Turing Award in 2010, having been described by the A.C.M. as a heroic figure in theoretical computer science and a role model for his courage and creativity in addressing some of the deepest unsolved problems in science; in particular for his "striking combination of depth and breadth".
Education
Valiant was educated at King's College, Cambridge, Imperial College London, and the University of Warwick where he received a PhD in computer science in 1974.
Research and career
Valiant is world-renowned for his work in Theoretical Computer Science. Among his many contributions to Complexity Theory, he introduced the notion of #P-completeness ("Sharp-P completeness") to explain why enumeration and reliability problems are intractable. He created the Probably Approximately Correct or PAC model of learning that introduced the field of Computational Learning Theory and became a theoretical basis for the development of Machine Learning. He also introduced the concept of Holographic Algorithms inspired by the Quantum Computation model. In computer systems, he is most well-known for introducing the Bulk Synchronous Parallel processing model. Analogous to the von Neumann model for a single computer architecture, BSP has been an influential model for parallel and distributed computing architectures. Recent examples are Google adopting it for computation at large scale via MapReduce, MillWheel, Pregel and Dataflow, and Facebook creating a graph analytics system capable of processing over 1 trillion edges. There have also been active open-source projects to add explicit BSP programming as well as other high-performance parallel programming models derived from B |
https://en.wikipedia.org/wiki/Runway%20bus | The Runway bus is a front-side bus developed by Hewlett-Packard for use by its PA-RISC microprocessor family. The Runway bus is a 64-bit wide, split transaction, time multiplexed address and data bus running at 120 MHz. This scheme was chosen by HP as they determined that a bus using separate address and data wires would have only delivered 20% more bandwidth for a 50% increase in pin count, which would have made microprocessors using the bus more expensive. The Runway bus was introduced with the release of the PA-7200 and was subsequently used by the PA-8000, PA-8200, PA-8500, PA-8600 and PA-8700 microprocessors. Early implementations of the bus used in the PA-7200, PA-8000 and PA-8200 had a theoretical bandwidth of 960 MB/s. Beginning with the PA-8500, the Runway bus was revised to transmit on both rising and falling edges of a 125 MHz clock signal, which increased its theoretical bandwidth to 2 GB/s. The Runway bus was succeeded with the introduction of the PA-8800, which used the Itanium 2 bus.
Bus features
64-bit multiplexed address/data
20 bus protocol signals
Supports cache coherency
Three frequency options (1.0, 0.75 and 0.67 of CPU clock — 0.50 apparently was later added)
Parity protection on address/data and control signals
Each attached device contains its own arbitrator logic
Split transactions, up to six transactions can be pending at once
Snooping cache coherency protocol
1-4 processors "glueless" multi-processing (no support chips needed)
768 MB/s sustainable throughput, peak 960 MB/s at 120 MHz
Runway+/Runway DDR: On PA-8500, PA-8600 and PA-8700, the bus operates in DDR (double data rate) mode,
resulting in a peak bandwidth of about 2.0 GB/s (Runway+ or Runway DDR) with 125 MHz
Most machines use the Runway bus to connect the CPUs directly to the IOMMU (Astro, U2/Uturn or Java) and memory.
However, the N class and L3000 servers use an interface chip called Dew to bridge the Runway bus to the Merced bus that connects to the IOMMU and mem |
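The quoted peak figures follow directly from the bus width and clock rate; a quick arithmetic check (the function name is my own, not HP terminology):

```python
# Peak bandwidth of a multiplexed bus: bytes per transfer x transfers per second.
def peak_bandwidth_mb_per_s(width_bits, clock_mhz, transfers_per_clock=1):
    return (width_bits // 8) * clock_mhz * transfers_per_clock

original_runway = peak_bandwidth_mb_per_s(64, 120)     # 8 bytes x 120 MHz = 960 MB/s
runway_ddr = peak_bandwidth_mb_per_s(64, 125, 2)       # both clock edges: 2000 MB/s, i.e. about 2 GB/s
```

The 768 MB/s sustained figure is lower than the 960 MB/s peak because some bus cycles carry addresses and protocol traffic rather than data.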
https://en.wikipedia.org/wiki/Experimental%20analysis%20of%20behavior | The experimental analysis of behavior is a science that studies the behavior of individuals across a variety of species. A key early scientist was B. F. Skinner who discovered operant behavior, reinforcers, secondary reinforcers, contingencies of reinforcement, stimulus control, shaping, intermittent schedules, discrimination, and generalization. A central method was the examination of functional relations between environment and behavior, as opposed to hypothetico-deductive learning theory that had grown up in the comparative psychology of the 1920–1950 period. Skinner's approach was characterized by observation of measurable behavior which could be predicted and controlled. It owed its early success to the effectiveness of Skinner's procedures of operant conditioning, both in the laboratory and in behavior therapy.
Basic learning processes in behavior analysis
Classical (or respondent) conditioning
In classical or respondent conditioning, a neutral stimulus (conditioned stimulus) is delivered just before a reflex-eliciting stimulus (unconditioned stimulus) such as food or pain. This is typically done by pairing the two stimuli, as in Pavlov's experiments with dogs, where a bell was followed by food delivery. After repeated pairings, the conditioned stimulus comes to elicit the response.
Operant conditioning
Operant conditioning (also "instrumental conditioning") is a learning process in which behavior is sensitive to, or controlled by, its consequences. Specifically, behavior followed by some consequences becomes more frequent (positive reinforcement), behavior followed by other consequences becomes less frequent (punishment), and behavior followed by the removal of yet other consequences becomes more frequent (negative reinforcement). For example, in a food-deprived subject, when lever-pressing is followed by food delivery, lever-pressing increases in frequency (positive reinforcement). Likewise, when stepping off a treadmill is followed by delivery of electric shock, st |
https://en.wikipedia.org/wiki/Burn%3A%20The%20Misunderstood%20Science%20of%20Metabolism | Burn: The Misunderstood Science of Metabolism is a 2022 book written by Herman Pontzer in which he discusses metabolism, human health and use of energy in the human body. The book examines research and proposes a constrained approach to total energy expenditure. |
https://en.wikipedia.org/wiki/GGL%20domain | GGL domain is domain found in the gamma subunit of the heterotrimeric G protein complex and in regulators of G protein signaling RGS proteins.
Human proteins containing this domain
GNG4; GNG10; GNG11
GNGT1
RGS6; RGS7; RGS9; RGS11
See also
Beta-gamma complex |
https://en.wikipedia.org/wiki/Inferior%20mesenteric%20plexus | The inferior mesenteric plexus is derived chiefly from the aortic plexus.
It surrounds the inferior mesenteric artery, and divides into a number of secondary plexuses, which are distributed to all the parts supplied by the artery, viz., the left colic and sigmoid plexuses, which supply the descending and sigmoid parts of the colon; and the superior hemorrhoidal plexus, which supplies the rectum and joins in the pelvis with branches from the pelvic plexuses.
Additional images
See also
Inferior mesenteric artery
Superior mesenteric plexus |
https://en.wikipedia.org/wiki/AN/PYQ-10 | The AN/PYQ-10 Simple Key Loader (SKL) is a ruggedized, portable, hand-held fill device, for securely receiving, storing, and transferring data between compatible cryptographic and communications equipment. The SKL was designed and built by Ralph Osterhout and then sold to Sierra Nevada Corporation, with software developed by Science Applications International Corporation (SAIC) under the auspices of the United States Army. It is intended to supplement and eventually replace the AN/CYZ-10 Data Transfer Device (DTD). The PYQ-10 provides all the functions currently resident in the CYZ-10 and incorporates new features that provide streamlined management of COMSEC key, Electronic Protection (EP) data, and Signal Operating Instructions (SOI). Cryptographic functions are performed by an embedded KOV-21 card developed by the National Security Agency (NSA). The AN/PYQ-10 supports both the DS-101 and DS-102 interfaces, as well as the KSD-64 Crypto Ignition Key. The SKL is backward-compatible with existing End Cryptographic Units (ECU) and forward-compatible with future security equipment and systems, including NSA's Key Management Infrastructure.
Between 2005 and 2007, the U.S. Army budget included funds for over 24,000 SKL units. The estimated price for FY07 was $1708 each. When released in May 2005, the price was $1695 each. This price includes the unit and the internal encryptor card. |
https://en.wikipedia.org/wiki/RGS4 | Regulator of G protein signaling 4 also known as RGP4 is a protein that in humans is encoded by the RGS4 gene. RGP4 regulates G protein signaling.
Function
Regulator of G protein signalling (RGS) family members are regulatory molecules that act as GTPase activating proteins (GAPs) for G alpha subunits of heterotrimeric G proteins. RGS proteins are able to deactivate G protein subunits of the Gi alpha, Go alpha and Gq alpha subtypes. They drive G proteins into their inactive GDP-bound forms. Regulator of G protein signaling 4 belongs to this family. All RGS proteins share a conserved 120-amino acid sequence termed the RGS domain which conveys GAP activity. Regulator of G protein signaling 4 protein is 37% identical to RGS1 and 97% identical to rat Rgs4. This protein negatively regulates signaling upstream or at the level of the heterotrimeric G protein and is localized in the cytoplasm.
Clinical significance
A number of studies associate the RGS4 gene with schizophrenia, while some fail to detect an association.
RGS4 is also of interest as one of the three main RGS proteins (along with RGS9 and RGS17) involved in terminating signalling by the mu opioid receptor, and may be important in the development of tolerance to opioid drugs.
Inhibitors
cyclic peptides
CCG-4986
Interactions
RGS4 has been shown to interact with:
COPB2,
ERBB3, and
GNAQ. |
https://en.wikipedia.org/wiki/Coastline%20paradox | The coastline paradox is the counterintuitive observation that the coastline of a landmass does not have a well-defined length. This results from the fractal curve–like properties of coastlines; i.e., the fact that a coastline typically has a fractal dimension. Although the "paradox of length" was previously noted by Hugo Steinhaus, the first systematic study of this phenomenon was by Lewis Fry Richardson, and it was expanded upon by Benoit Mandelbrot.
The measured length of the coastline depends on the method used to measure it and the degree of cartographic generalization. Since a landmass has features at all scales, from hundreds of kilometers in size to tiny fractions of a millimeter and below, there is no obvious size of the smallest feature that should be taken into consideration when measuring, and hence no single well-defined perimeter to the landmass. Various approximations exist when specific assumptions are made about minimum feature size.
The problem is fundamentally different from the measurement of other, simpler edges. It is possible, for example, to accurately measure the length of a straight, idealized metal bar by using a measurement device to determine that the length is less than a certain amount and greater than another amount—that is, to measure it within a certain degree of uncertainty. The more accurate the measurement device, the closer results will be to the true length of the edge. When measuring a coastline, however, the closer measurement does not result in an increase in accuracy—the measurement only increases in length; unlike with the metal bar, there is no way to obtain a maximum value for the length of the coastline.
In three-dimensional space, the coastline paradox is readily extended to the concept of fractal surfaces, whereby the area of a surface varies depending on the measurement resolution.
Mathematical aspects
The basic concept of length originates from Euclidean distance. In Euclidean geometry, a straight line repr |
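Richardson's empirical observation behind the paradox can be illustrated numerically: the measured length L(G) ≈ M·G^(1−D) grows without bound as the ruler length G shrinks. The dimension D = 1.25 is the value often quoted for the west coast of Britain; the prefactor M below is an arbitrary illustrative constant, not a measured one.

```python
# Richardson's empirical relation: L(G) = M * G**(1 - D),
# where G is the ruler length and D > 1 the fractal dimension of the coast.
def measured_length(ruler_km, D=1.25, M=1000.0):
    return M * ruler_km ** (1 - D)

lengths = [measured_length(g) for g in (100, 50, 10, 1)]
# Shrinking the ruler always increases the measured length:
assert all(a < b for a, b in zip(lengths, lengths[1:]))
```

Because 1 − D is negative for any D > 1, the length diverges as G → 0, which is exactly the statement that the coastline has no well-defined perimeter.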
https://en.wikipedia.org/wiki/SGI%20Onyx | SGI Onyx is a series of visualization systems designed and manufactured by SGI, introduced in 1993 and offered in two models, deskside and rackmount, codenamed Eveready and Terminator respectively. The Onyx's basic system architecture is based on the SGI Challenge servers, but with graphics hardware.
The Onyx was employed in early 1995 for development kits used to produce software for the Nintendo 64 and, because the technology was so new, the Onyx was noted as the major factor for the impressively high price of – for such kits.
The Onyx was succeeded by the Onyx2 in 1996 and was discontinued on March 31, 1999.
CPU
The deskside variant can accept one CPU board, and the rackmount variant can take up to six CPU boards. Both models were launched with the IP19 CPU board with one, two, or four MIPS R4400 CPUs, initially with 100 and 150 MHz options and later increased to 200 and 250 MHz. Later, the IP21 CPU board was introduced, with one or two R8000 microprocessors at 75 or 90 MHz; machines with this board were referred to as POWER Onyx. Finally, SGI introduced the IP25 board with one, two, or four R10000 CPUs at 195 MHz.
Graphics
The Onyx was launched with the RealityEngine2 or VTX graphics subsystems, and InfiniteReality was introduced in 1995.
RealityEngine2
The RealityEngine2 is the original high-end graphics subsystem for the Onyx and was found in two different versions: deskside and rack. The deskside model has one GE10 (Geometry Engine) board with 12 Intel i860XP processors, up to four RM4 or RM5 (Raster Manager) boards, and a DG2 (Display Generator) board. The rack model differs by supporting up to three RealityEngine2 pipes (display outputs) vs the single pipe of the deskside.
VTX
The VTX graphics subsystem is a cost reduced version of the RealityEngine2, using the same hardware but in a feature reduced configuration that can not be upgraded. It consists of one GE10 board (6 Intel i860XP processors vs 12 in RE2), a single RM4 or RM5 board, and a DG2 boar |
https://en.wikipedia.org/wiki/Oneirodes%20sanjeevani | Oneirodes sanjeevani is a species of ceratioid anglerfish known by a single specimen which was described in March 2017. The holotype was recovered by midwater trawling in the Indian Ocean at a depth between 380 and 600 meters. It is named in honor of Dr. V. N. Sanjeevan. |
https://en.wikipedia.org/wiki/Electron%20density | Electron density or electronic density is the measure of the probability of an electron being present at an infinitesimal element of space surrounding any given point. It is a scalar quantity depending upon three spatial variables and is typically denoted as either ρ(r) or n(r). The density is determined, through its definition, by the normalised N-electron wavefunction Ψ, which itself depends upon 4N variables (3N spatial and N spin coordinates). Conversely, the density determines the wave function up to a phase factor, providing the formal foundation of density functional theory.
According to quantum mechanics, due to the uncertainty principle on an atomic scale the exact location of an electron cannot be predicted, only the probability of its being at a given position; therefore electrons in atoms and molecules act as if they are "smeared out" in space. For one-electron systems, the electron density at any point is proportional to the square magnitude of the wavefunction.
Definition
The electronic density corresponding to a normalised N-electron wavefunction Ψ (with r and s denoting spatial and spin variables respectively) is defined as
where the operator corresponding to the density observable is
Computing ρ(r) as defined above, we can simplify the expression as follows.
In words: holding a single electron still in position we sum over all possible arrangements of the other electrons. The factor N arises since all electrons are indistinguishable, and hence all the integrals evaluate to the same value.
In Hartree–Fock and density functional theories, the wave function is typically represented as a single Slater determinant constructed from orbitals φ_k, with corresponding occupations n_k. In these situations, the density simplifies to
General properties
From its definition, the electron density is a non-negative function integrating to the total number of electrons. Further, for a system with kinetic energy T, the density satisfies the inequalities
For finite kinetic energies, |
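The one-electron statement above (density equals the squared magnitude of the wavefunction, non-negative and integrating to the electron count) can be checked numerically for the hydrogen 1s ground state. Atomic units are used; the grid parameters are arbitrary choices for this sketch.

```python
import math

# Hydrogen 1s orbital in atomic units: psi(r) = exp(-r) / sqrt(pi),
# so the one-electron density is rho(r) = |psi|^2 = exp(-2r) / pi.
def density(r):
    return math.exp(-2.0 * r) / math.pi

# Radial integration N = integral of rho(r) * 4*pi*r^2 dr over [0, r_max],
# approximated by a simple Riemann sum; should recover N = 1 electron.
dr, r_max = 1e-4, 30.0
n_steps = int(r_max / dr)
N = sum(density(i * dr) * 4.0 * math.pi * (i * dr) ** 2 * dr
        for i in range(n_steps + 1))
# N comes out very close to 1
```

The same check generalizes to the Slater-determinant form: summing n_k |φ_k(r)|² over occupied orbitals and integrating again yields the total electron number.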
https://en.wikipedia.org/wiki/Limiting%20amplitude%20principle | In mathematics, the limiting amplitude principle is a concept from operator theory and scattering theory used for choosing a particular solution to the Helmholtz equation. The choice is made by considering a particular time-dependent problem of the forced oscillations due to the action of a periodic force.
The principle was introduced by Andrey Nikolayevich Tikhonov and Alexander Andreevich Samarskii.
It is closely related to the limiting absorption principle (1905) and the Sommerfeld radiation condition (1912).
The terminology, both the limiting absorption principle and the limiting amplitude principle, was introduced by Aleksei Sveshnikov.
Formulation
To find which solution to the Helmholtz equation with nonzero right-hand side
with some fixed wave number k > 0, corresponds to the outgoing waves,
one considers the wave equation with the source term,
with zero initial data. A particular solution to the Helmholtz equation corresponding to outgoing waves is obtained as the limit
for large times.
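Written out in standard notation, the statement reads as follows. This is a reconstruction of the usual formulation rather than formulas taken from this source, and sign conventions for the time factor vary between authors.

```latex
% Helmholtz equation with a source term, for fixed k > 0:
\Delta v(x) + k^2 v(x) = -f(x)
% Associated wave equation with periodic forcing and zero initial data:
\frac{\partial^2 u}{\partial t^2} - \Delta u = f(x)\, e^{-ikt},
\qquad u\big|_{t=0} = \partial_t u\big|_{t=0} = 0
% The outgoing solution is recovered as the limiting amplitude:
v(x) = \lim_{t \to \infty} u(x, t)\, e^{ikt}
```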
See also
Limiting absorption principle
Sommerfeld radiation condition |
https://en.wikipedia.org/wiki/Neurotransmitter%20receptor | A neurotransmitter receptor (also known as a neuroreceptor) is a membrane receptor protein that is activated by a neurotransmitter. Chemicals on the outside of the cell, such as a neurotransmitter, can bump into the cell's membrane, in which there are receptors. If a neurotransmitter bumps into its corresponding receptor, they will bind and can trigger other events to occur inside the cell. Therefore, a membrane receptor is part of the molecular machinery that allows cells to communicate with one another. A neurotransmitter receptor is a class of receptors that specifically binds with neurotransmitters as opposed to other molecules.
In postsynaptic cells, neurotransmitter receptors receive signals that trigger an electrical signal, by regulating the activity of ion channels. The influx of ions through ion channels opened due to the binding of neurotransmitters to specific receptors can change the membrane potential of a neuron. This can result in a signal that runs along the axon (see action potential) and is passed along at a synapse to another neuron and possibly on to a neural network. On presynaptic cells, there are receptors known as autoreceptors that are specific to the neurotransmitters released by that cell, which provide feedback and mediate excessive neurotransmitter release from it.
There are two major types of neurotransmitter receptors: ionotropic and metabotropic. Ionotropic means that ions can pass through the receptor, whereas metabotropic means that a second messenger inside the cell relays the message (i.e. metabotropic receptors do not have channels). There are several kinds of metabotropic receptors, including G protein-coupled receptors. Ionotropic receptors are also called ligand-gated ion channels and they can be activated by neurotransmitters (ligands) like glutamate and GABA, which then allow specific ions through the membrane. Sodium ions (that are, for example, allowed passage by the glutamate receptor) excite the post-synaptic cell |
https://en.wikipedia.org/wiki/Ribosome%20biogenesis | Ribosome biogenesis is the process of making ribosomes. In prokaryotes, this process takes place in the cytoplasm with the transcription of many ribosome gene operons. In eukaryotes, it takes place both in the cytoplasm and in the nucleolus. It involves the coordinated function of over 200 proteins in the synthesis and processing of the three prokaryotic or four eukaryotic rRNAs, as well as assembly of those rRNAs with the ribosomal proteins. Most of the ribosomal proteins fall into various energy-consuming enzyme families including ATP-dependent RNA helicases, AAA-ATPases, GTPases, and kinases. About 60% of a cell's energy is spent on ribosome production and maintenance.
Ribosome biogenesis is a very tightly regulated process, and it is closely linked to other cellular activities like growth and division.
Some have speculated that in the origin of life, ribosome biogenesis predates cells, and that genes and cells evolved to enhance the reproductive capacity of ribosomes.
Ribosomes
Ribosomes are the macromolecular machines that are responsible for mRNA translation into proteins. The eukaryotic ribosome, also called the 80S ribosome, is made up of two subunits – the large 60S subunit (which contains the 25S [in plants] or 28S [in mammals], 5.8S, and 5S rRNA and 46 ribosomal proteins) and a small 40S subunit (which contains the 18S rRNA and 33 ribosomal proteins). The ribosomal proteins are encoded by ribosomal genes.
Prokaryotes
There are 52 genes that encode the ribosomal proteins, and they can be found in 20 operons within prokaryotic DNA. Regulation of ribosome synthesis hinges on the regulation of the rRNA itself.
First, a reduction in aminoacyl-tRNA will cause the prokaryotic cell to respond by lowering transcription and translation. This occurs through a series of steps, beginning with stringent factors binding to ribosomes and catalyzing the reaction: GTP + ATP → pppGpp + AMP
The γ-phosphate is then removed and ppGpp will bind to and inhibit RNA polym |
https://en.wikipedia.org/wiki/Supratrochlear%20lymph%20nodes | One or two supratrochlear lymph nodes are placed above the medial epicondyle of the humerus, medial to the basilic vein.
Their afferents drain the middle, ring, and little fingers, the medial portion of the hand, and the superficial area over the ulnar side of the forearm; these vessels are, however, in free communication with the other lymphatic vessels of the forearm.
Their efferents accompany the basilic vein and join the deeper vessels.
They are distinguished in Terminologia anatomica from the "epitrochlear" (or "cubital") lymph nodes, but the region is similar.
Clinical significance
The supratrochlear lymph nodes swell up when an infection is detected in the hand or forearm areas. They may be palpable.
Additional images
See also
Trochlea of humerus |
https://en.wikipedia.org/wiki/Complexity%20and%20Real%20Computation | Complexity and Real Computation is a book on the computational complexity theory of real computation. It studies algorithms whose inputs and outputs are real numbers, using the Blum–Shub–Smale machine as its model of computation. For instance, this theory is capable of addressing a question posed in 1991 by Roger Penrose in The Emperor's New Mind: "is the Mandelbrot set computable?"
The book was written by Lenore Blum, Felipe Cucker, Michael Shub and Stephen Smale, with a foreword by Richard M. Karp, and published by Springer-Verlag in 1998 (doi:10.1007/978-1-4612-0701-6).
Purpose
Stephen Vavasis observes that this book fills a significant gap in the literature: although theoretical computer scientists working on discrete algorithms had been studying models of computation and their implications for the complexity of algorithms since the 1970s, researchers in numerical algorithms had for the most part failed to define their model of computation, leaving their results on a shaky foundation. Beyond the goal of making this aspect of the topic more well-founded, the book also has the aims of presenting new results in the complexity theory of real-number computation, and of collecting previously-known results in this theory.
Topics
The introduction of the book reprints the paper "Complexity and real computation: a manifesto", previously published by the same authors. This manifesto explains why classical discrete models of computation such as the Turing machine are inadequate for the study of numerical problems in areas such as scientific computing and computational geometry, motivating the newer model studied in the book. Following this, the book is divided into three parts.
Part I of the book sets up models of computation over any ring, with unit cost per ring operation. It provides analogues of recursion theory and of the P versus NP problem in each case, and proves the existence of NP-complete problems analogously to the proof of the Cook–Levin theorem in the cl |
https://en.wikipedia.org/wiki/Data%20Privacy%20Day | Data Privacy Day (known in Europe as Data Protection Day) is an international event that occurs every year on 28 January. The purpose of Data Privacy Day is to raise awareness and promote privacy and data protection best practices. It is currently observed in the United States, Canada, Nigeria, Israel and 47 European countries.
Data Privacy Day's educational initiative originally focused on raising awareness among businesses as well as users about the importance of protecting the privacy of their personal information online, particularly in the context of social networking. The educational focus has expanded over the years to include families, consumers and businesses. In addition to its educational initiative, Data Privacy Day promotes events and activities that stimulate the development of technology tools that promote individual control over personally identifiable information; encourage compliance with privacy laws and regulations; and create dialogues among stakeholders interested in advancing data protection and privacy. The international celebration offers many opportunities for collaboration among governments, industry, academia, nonprofits, privacy professionals and educators.
The Convention for the Protection of Individuals with regard to Automatic Processing of Personal Data was opened for signature by the Council of Europe on 28 January 1981. This convention is currently in the process of being updated in order to reflect new legal challenges caused by technological development. The Convention on Cybercrime is also protecting the integrity of data systems and thus of privacy in cyberspace. Privacy including data protection is also protected by Article 8 of the European Convention on Human Rights.
The day was initiated by the Council of Europe to be first held in 2007 as the European Data Protection Day.
Two years later, on 26 January 2009, the United States House of Representatives passed House Resolution HR 31 by a vote of 402–0, declaring 28 January |
https://en.wikipedia.org/wiki/Hobbyist%20operating%20system | The development of a hobbyist operating system is one of the more involved and technical options for a computer hobbyist.
The definition of a hobby operating system can sometimes be vague. It can be defined from the developer's view, where the developers build it just for fun or learning; from the user's view, where the users treat it only as a toy; or as an operating system that does not have a very big user base.
Development can begin from existing resources like a kernel, an operating system, or a bootloader, or it can also be made completely from scratch. The development platform could be a bare hardware machine, which is the nature of an operating system, but it could also be developed and tested on a virtual machine.
Because the hobbyist must take greater ownership of adapting a complex system to an ever-changing technical landscape, much enthusiasm is common amongst the many different groups attracted to operating system development.
Development
Elements of operating system development include:
Kernel:
Bootstrapping
Memory management
Process management and scheduling
Device driver management
Program API
External programs
User interface
The C programming language is frequently used for hobby operating system programming, as well as assembly language, though other languages can be used as well.
The use of assembly language is common with small systems, especially those based on eight bit microprocessors such as the MOS Technology 6502 family or the Zilog Z80, or in systems with a lack of available resources because of its small output size and low-level efficiency.
User interface
Most hobby operating systems use a command-line interface or a simple text user interface due to ease of development. More advanced hobby operating systems may have a graphical user interface. For example, AtheOS was a hobby operating system with a graphical interface written entirely by one programmer.
Examples
Use of BIOS
This s |
https://en.wikipedia.org/wiki/Biotextile | Biotextiles are structures composed of textile fibers designed for use in specific biological environments where their performance depends on biocompatibility and biostability with cells and biological fluids. Biotextiles include implantable devices such as surgical sutures, hernia repair fabrics, arterial grafts, artificial skin and parts of artificial hearts.
They were first created 30 years ago by Dr. Martin W. King, a professor in North Carolina State University’s College of Textiles.
Medical textiles are a broader group which also includes bandages, wound dressings, hospital linen, preventive clothing, etc. Antiseptic biotextiles are textiles used to fight cutaneous bacterial proliferation. Zeolite and triclosan are at present the most widely used molecules. This property inhibits the development of odours and bacterial proliferation, for example in the diabetic foot.
New developments
In the new paradigm of tissue engineering, professionals are trying to develop new textiles around which the body can form new tissue, so that a device does not rely solely on synthetic foreign implanted material. Graduate student Jessica Gluck has demonstrated that viable and functioning liver cells can be grown on textile scaffolds.
See also
Technical textiles |
https://en.wikipedia.org/wiki/Amorphous%20carbon | Amorphous carbon is free, reactive carbon that has no crystalline structure. Amorphous carbon materials may be stabilized by terminating dangling-π bonds with hydrogen. As with other amorphous solids, some short-range order can be observed. Amorphous carbon is often abbreviated to aC for general amorphous carbon, aC:H or HAC for hydrogenated amorphous carbon, or to ta-C for tetrahedral amorphous carbon (also called diamond-like carbon).
In mineralogy
In mineralogy, amorphous carbon is the name used for coal, carbide-derived carbon, and other impure forms of carbon that are neither graphite nor diamond. In a crystallographic sense, however, the materials are not truly amorphous but rather polycrystalline materials of graphite or diamond within an amorphous carbon matrix. Commercial carbon also usually contains significant quantities of other elements, which may also form crystalline impurities.
In modern science
With the development of modern thin film deposition and growth techniques in the latter half of the 20th century, such as chemical vapour deposition, sputter deposition, and cathodic arc deposition, it became possible to fabricate truly amorphous carbon materials.
True amorphous carbon has localized π electrons (as opposed to the aromatic π bonds in graphite), and its bonds form with lengths and distances that are inconsistent with any other allotrope of carbon. It also contains a high concentration of dangling bonds; these cause deviations in interatomic spacing (as measured using diffraction) of more than 5% as well as noticeable variation in bond angle.
The properties of amorphous carbon films vary depending on the parameters used during deposition. The primary method for characterizing amorphous carbon is through the ratio of sp2 to sp3 hybridized bonds present in the material. Graphite consists purely of sp2 hybridized bonds, whereas diamond consists purely of sp3 hybridized bonds. Materials that are high in sp3 hybridized bonds are referred t |
https://en.wikipedia.org/wiki/Anti-phishing%20software | Anti-phishing software consists of computer programs that attempt to identify phishing content contained in websites, e-mail, or other forms used to access data (usually from the internet) and block the content, usually with a warning to the user (and often an option to view the content regardless). It is often integrated with web browsers and email clients as a toolbar that displays the real domain name for the website the viewer is visiting, in an attempt to prevent fraudulent websites from masquerading as other legitimate websites.
Most popular web browsers come with built-in anti-phishing and anti-malware protection services, but almost none of the alternative web browsers have such protections.
Password managers can also be used to help defend against phishing, as can some mutual authentication techniques.
Types of anti-phishing software
Email security
According to Gartner, "email security refers collectively to the prediction, prevention, detection and response framework used to provide attack protection and access protection for email." Email security spans gateways, email systems, user behavior, content security, and various supporting processes, services, and adjacent security architecture.
Security awareness computer-based training
According to Gartner, security awareness training includes one or more of the following capabilities: ready-to-use training and educational content, employee testing and knowledge checks, availability in multiple languages, phishing and other social engineering attack simulations, and platform and awareness analytics to help measure the efficacy of the awareness program.
Notable client-based anti-phishing programs
avast!
Avira Premium Security Suite
Earthlink ScamBlocker (discontinued)
eBay Toolbar
Egress Defend
ESET Smart Security
G Data Software G DATA Antivirus
GeoTrust TrustWatch
Google Safe Browsing (used in Mozilla Firefox, Google Chrome, Opera, Safari, and Vivaldi)
Kaspersky Internet |
https://en.wikipedia.org/wiki/Core%20War | Core War is a 1984 programming game created by D. G. Jones and A. K. Dewdney in which two or more battle programs (called "warriors") compete for control of a virtual computer. These battle programs are written in an abstract assembly language called Redcode. The standards for the language and the virtual machine were initially set by the International Core Wars Society (ICWS), but later standards were determined by community consensus.
Gameplay
At the beginning of a game, each battle program is loaded into memory at a random location, after which each program executes one instruction in turn. The goal of the game is to cause the processes of opposing programs to terminate (which happens if they execute an invalid instruction), leaving the victorious program in sole possession of the machine.
The earliest published version of Redcode defined only eight instructions. The ICWS-86 standard increased the number to 10 while the ICWS-88 standard increased it to 11. The currently used 1994 draft standard has 16 instructions. However, Redcode supports a number of different addressing modes and (starting from the 1994 draft standard) instruction modifiers which increase the actual number of operations possible to 7168. The Redcode standard leaves the underlying instruction representation undefined and provides no means for programs to access it. Arithmetic operations may be done on the two address fields contained in each instruction, but the only operations supported on the instruction codes themselves are copying and comparing for equality.
Constant instruction length and time: Each Redcode instruction occupies exactly one memory slot and takes exactly one cycle to execute. The rate at which a process executes instructions, however, depends on the number of other processes in the queue, as processing time is shared equally.
Circular memory: The memory is addressed in units of one instruction. The memory space (or core) is of finite size, but only relative addressing is u |
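The two execution rules above can be illustrated with a toy sketch (this is not a real MARS/Redcode interpreter; the core size and process model are simplified assumptions): every instruction occupies one slot of a finite circular core, every address wraps around, and each process in the queue executes exactly one instruction per round.

```python
CORE_SIZE = 8000  # size of the circular core (a common default, assumed here)

class Process:
    """One warrior process: just a program counter into the circular core."""
    def __init__(self, start):
        self.pc = start % CORE_SIZE  # addresses always wrap around

def step(processes):
    """One round: each live process executes exactly one instruction cycle,
    so processing time is shared equally between the process queues."""
    for p in processes:
        # Decoding/executing core[p.pc] is omitted; only the addressing rule
        # is modeled: advancing past the end wraps back to slot 0.
        p.pc = (p.pc + 1) % CORE_SIZE
```

Because addressing is purely relative and circular, a program behaves identically wherever it is loaded in the core.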
https://en.wikipedia.org/wiki/The%20Flight%20of%20Dragons%20%28book%29 | The Flight of Dragons is a 1979 speculative evolution book written by Peter Dickinson and illustrated by Wayne Anderson.
Thesis
According to Dickinson's hypothesis, the chief obstacle to admitting the past existence of dragons is the difficulty of powered flight by such a large organism. To resolve this, he introduces a dirigible-like structure in which hydrochloric acid would dissolve large amounts of rapidly growing bone, releasing massive amounts of hydrogen that, once aloft, would support the body above the ground. The dragon's wings are traced to "modifications of the ribcage" (an anatomical evolutionary path shared by the genus Draco), and the expulsion of fire from the throat, as a means of removal of excess gas. The absence of fossil evidence is traced again to the internal acids, which (in Dickinson's view) would dissolve the bones soon after death.
Dickinson states he got the idea for his "pseudo-scientific monograph" after looking at one of Ursula K. LeGuin's Earthsea books:
This one had a bulky body and rather stubby wings, which obviously would never get it airborne, let alone with the two people it was carrying on its back, and all its own weight of muscle and bone. Obviously any lift had to come from the body itself. Its very shape suggested some kind of gas-bag. I thought about it for the rest of the journey, and on and off for a couple of days after, and at the end of that time had managed to slot everything I knew about dragons – why they laired in caves, around which nothing would grow and where hoards of gold could be found, why they had a preferred diet of princesses, how and why they breathed fire, why they had only one vulnerable spot and their blood melted the blade of the sword that killed them, and so on – into a coherent theory that explained why these things were necessary accompaniments to the evolution of lighter-than-air flight.
Film
In 1982, Rankin/Bass Productions released a made-for-TV animated film The Flight of Dragons, aspec |
https://en.wikipedia.org/wiki/In%20vitro%20recombination | Recombinant DNA (rDNA), or molecular cloning, is the process by which a single gene, or segment of DNA, is isolated and amplified. Recombinant DNA is also known as in vitro recombination. A cloning vector is a DNA molecule that carries foreign DNA into a host cell, where it replicates, producing many copies of itself along with the foreign DNA. There are many types of cloning vectors, such as plasmids and phages. To carry out recombination between the vector and the foreign DNA, both are first cut by restriction digestion, the foreign DNA is then joined into the vector with the enzyme DNA ligase, and the recombinant DNA is finally introduced into bacterial cells by transformation.
Steps
Preparation of foreign DNA
The two major sources of foreign DNA for molecular cloning are genomic DNA (gDNA) and complementary (or copy) DNA (cDNA). cDNA molecules are DNA copies of mRNA molecules, produced in vitro by action of the enzyme reverse transcriptase. In order to obtain the cDNA for a specific gene, it is first necessary to construct a cDNA library.
Fragmenting gDNA
The DNA of interest needs to be fragmented to provide a relevant DNA segment of suitable size. Preparation of DNA fragments for cloning is achieved by means of PCR, but it may also be accomplished by restriction enzyme digestion and fractionation by gel electrophoresis.
Preparing cDNA libraries
To prepare a cDNA library, the first step is to isolate the total mRNA from the cell type of interest. Then, the enzyme reverse transcriptase is used to generate cDNAs. Reverse transcriptase is an RNA-dependent DNA polymerase. It depends on the presence of a primer, usually a poly-dT oligonucleotide, to prime DNA synthesis. DNA polymerase can use these single-stranded primers to initiate second strand DNA synthesis on the mRNA templates. After the single-stranded DNA molecules are converted into double-stranded DNA molecules by DNA polymerase, they are inserted into vectors and cloned. To do this, th |
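Conceptually, first-strand cDNA synthesis copies each mRNA into its complementary DNA strand. A toy sketch of the base-pairing rule (the real process is enzymatic, of course; this only illustrates the complementarity):

```python
# Base-pairing rule applied by reverse transcriptase: mRNA base -> DNA base.
RT_PAIR = {"A": "T", "U": "A", "G": "C", "C": "G"}

def first_strand_cdna(mrna: str) -> str:
    """Return the first-strand cDNA complementary to an mRNA sequence,
    written 5'->3' (hence the reversal, since the strands are antiparallel)."""
    return "".join(RT_PAIR[base] for base in reversed(mrna))

cdna = first_strand_cdna("AUGGCC")  # a toy 6-nucleotide message
```

For the toy message 5'-AUGGCC-3', the complementary cDNA written 5'->3' is GGCCAT.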
https://en.wikipedia.org/wiki/Unscented%20transform | The unscented transform (UT) is a mathematical function used to estimate the result of applying a given nonlinear transformation to a probability distribution that is characterized only in terms of a finite set of statistics. The most common use of the unscented transform is in the nonlinear projection of mean and covariance estimates in the context of nonlinear extensions of the Kalman filter. Its creator Jeffrey Uhlmann explained that "unscented" was an arbitrary name that he adopted to avoid it being referred to as the "Uhlmann filter", though others have indicated that "unscented" is a contrast to "scented", intended as a euphemism for "stinky".
Background
Many filtering and control methods represent estimates of the state of a system in the form of a mean vector and an associated error covariance matrix. As an example, the estimated 2-dimensional position of an object of interest might be represented by a mean position vector, [x̄, ȳ], with an uncertainty given in the form of a 2×2 covariance matrix giving the variance in x, the variance in y, and the cross covariance between the two. A covariance that is zero implies that there is no uncertainty or error and that the position of the object is exactly what is specified by the mean vector.
The mean and covariance representation only gives the first two moments of an underlying, but otherwise unknown, probability distribution. In the case of a moving object, the unknown probability distribution might represent the uncertainty of the object's position at a given time. The mean and covariance representation of uncertainty is mathematically convenient because any linear transformation A can be applied to a mean vector x̄ and covariance matrix P as ȳ = Ax̄ and P_y = APAᵀ. This linearity property does not hold for moments beyond the first raw moment (the mean) and the second central moment (the covariance), so it is not generally possible to determine the mean and covariance resulting from a nonlinear transformation because the result depends on al |
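The unscented transform sidesteps this by propagating a small set of deterministically chosen sigma points that match the prior mean and covariance. A minimal one-dimensional sketch (the kappa-weighted sigma-point scheme below is one common parameterization, assumed here rather than taken from this excerpt):

```python
import math

def unscented_transform_1d(mean, var, f, kappa=2.0):
    """Propagate a (mean, variance) pair through a nonlinear f using
    2n+1 = 3 sigma points (n = 1), with the common kappa weighting."""
    n = 1
    spread = math.sqrt((n + kappa) * var)
    points = [mean, mean + spread, mean - spread]   # match first two moments
    weights = [kappa / (n + kappa)] + [1.0 / (2.0 * (n + kappa))] * 2
    ys = [f(x) for x in points]
    y_mean = sum(w * y for w, y in zip(weights, ys))
    y_var = sum(w * (y - y_mean) ** 2 for w, y in zip(weights, ys))
    return y_mean, y_var

# For f(x) = x**2 the sigma points yield mean m**2 + P, which equals the
# exact E[x**2] for any distribution with mean m and variance P.
m, P = unscented_transform_1d(3.0, 0.5, lambda x: x * x)
```

For a linear f the transform reproduces the exact linear-propagation result, which is what makes it a drop-in replacement inside Kalman-style filters.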
https://en.wikipedia.org/wiki/Biotransducer | A biotransducer is the recognition-transduction component of a biosensor system. It consists of two intimately coupled parts: a bio-recognition layer and a physicochemical transducer, which, acting together, convert a biochemical signal to an electronic or optical signal. The bio-recognition layer typically contains an enzyme or another binding protein such as an antibody. However, oligonucleotide sequences, sub-cellular fragments such as organelles (e.g. mitochondria) and receptor-carrying fragments (e.g. cell wall), single whole cells, small numbers of cells on synthetic scaffolds, or thin slices of animal or plant tissues may also comprise the bio-recognition layer. It gives the biosensor selectivity and specificity. The physicochemical transducer is typically in intimate and controlled contact with the recognition layer. As a result of the presence and biochemical action of the analyte (target of interest), a physico-chemical change is produced within the biorecognition layer that is measured by the physicochemical transducer, producing a signal that is proportionate to the concentration of the analyte. The physicochemical transducer may be electrochemical, optical, electronic, gravimetric, pyroelectric or piezoelectric. Based on the type of biotransducer, biosensors can be classified as shown to the right.
Electrochemical biotransducers
Electrochemical biosensors contain a biorecognition element that selectively reacts with the target analyte and produces an electrical signal that is proportional to the analyte concentration. In general, there are several approaches that can be used to detect electrochemical changes during a biorecognition event and these can be classified as follows: amperometric, potentiometric, impedance, and conductometric.
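In practice, the "signal proportional to the analyte concentration" relationship is exploited through a linear calibration curve fitted to standards of known concentration. A minimal sketch with hypothetical current readings (the glucose values and currents below are invented for illustration):

```python
def fit_line(xs, ys):
    """Ordinary least-squares fit of y = intercept + slope * x (pure Python)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return my - slope * mx, slope

# Hypothetical calibration standards: glucose concentration (mM) vs current (uA).
conc = [0.0, 1.0, 2.0, 4.0, 8.0]
current = [0.02, 0.51, 1.01, 1.99, 4.02]
intercept, sensitivity = fit_line(conc, current)

# Invert the calibration to read an unknown sample from its measured current:
unknown_conc = (2.60 - intercept) / sensitivity
```

The fitted slope (the sensor's sensitivity) and intercept (background current) are then used to convert any measured current back into a concentration.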
Amperometric
Amperometric transducers detect change in current as a result of electrochemical oxidation or reduction. Typically, the bioreceptor molecule is immobilized on the working electrode (commonly gold, carbon, o |
https://en.wikipedia.org/wiki/Ballistic%20pendulum | A ballistic pendulum is a device for measuring a bullet's momentum, from which it is possible to calculate the velocity and kinetic energy. Ballistic pendulums have been largely rendered obsolete by modern chronographs, which allow direct measurement of the projectile velocity.
Although the ballistic pendulum is considered obsolete, it remained in use for a significant length of time and led to great advances in the science of ballistics. The ballistic pendulum is still found in physics classrooms today, because of its simplicity and usefulness in demonstrating properties of momentum and energy. Unlike other methods of measuring the speed of a bullet, the basic calculations for a ballistic pendulum do not require any measurement of time, but rely only on measures of mass and distance.
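Those mass-and-distance measurements feed a two-step calculation: momentum is conserved during the (perfectly inelastic) capture of the bullet, and energy is conserved during the swing, giving v = ((m + M)/m)·sqrt(2gh). A sketch under those idealizations (the example masses and rise height are hypothetical):

```python
import math

def bullet_speed(m_bullet, m_pendulum, rise_height, g=9.81):
    """Idealized ballistic-pendulum estimate: perfectly inelastic capture
    (momentum conserved at impact) followed by a lossless swing (energy
    conserved), giving v = ((m + M) / m) * sqrt(2 * g * h)."""
    swing_speed = math.sqrt(2.0 * g * rise_height)  # speed just after capture
    return ((m_bullet + m_pendulum) / m_bullet) * swing_speed

# e.g. a 10 g bullet caught by a 1.99 kg bob that rises 5 cm:
v = bullet_speed(0.010, 1.990, 0.05)
```

Note that real measurements must account for losses at impact and in the suspension, so this gives an upper-level textbook idealization rather than a field-accurate value.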
In addition to its primary uses of measuring the velocity of a projectile or the recoil of a gun, the ballistic pendulum can be used to measure any transfer of momentum. For example, a ballistic pendulum was used by physicist C. V. Boys to measure the elasticity of golf balls, and by physicist Peter Guthrie Tait to measure the effect that spin had on the distance a golf ball traveled.
History
The ballistic pendulum was invented in 1742 by English mathematician Benjamin Robins (1707–1751), and published in his book New Principles of Gunnery, which revolutionized the science of ballistics, as it provided the first way to accurately measure the velocity of a bullet.
Robins used the ballistic pendulum to measure projectile velocity in two ways. The first was to attach the gun to the pendulum, and measure the recoil. Since the momentum of the gun is equal to the momentum of the ejecta, and since the projectile was (in those experiments) the large majority of the mass of the ejecta, the velocity of the bullet could be approximated. The second, and more accurate method, was to directly measure the bullet momentum by firing it into the pendulum. Robins experimented with musket balls of arou |
https://en.wikipedia.org/wiki/Rotational%20energy | Rotational energy or angular kinetic energy is kinetic energy due to the rotation of an object and is part of its total kinetic energy. Looking at rotational energy separately around an object's axis of rotation, the following dependence on the object's moment of inertia is observed:
E_rotational = ½ I ω²
where ω is the angular velocity and I is the moment of inertia around the axis of rotation.
The mechanical work required for or applied during rotation is the torque times the rotation angle. The instantaneous power of an angularly accelerating body is the torque times the angular velocity. For free-floating (unattached) objects, the axis of rotation is commonly around its center of mass.
Note the close relationship between the result for rotational energy and the energy held by linear (or translational) motion, E_translational = ½ m v².
In the rotating system, the moment of inertia, I, takes the role of the mass, m, and the angular velocity, ω, takes the role of the linear velocity, v. The rotational energy of a rolling cylinder varies from one half of the translational energy (if it is massive) to the same as the translational energy (if it is hollow).
An example is the calculation of the rotational kinetic energy of the Earth. As the Earth has a sidereal rotation period of 23.93 hours, it has an angular velocity of 7.29×10^-5 rad/s. The Earth has a moment of inertia, I = 8.04×10^37 kg·m^2. Therefore, it has a rotational kinetic energy of 2.14×10^29 J.
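As a rough numerical check, E_rot = ½Iω² can be evaluated for the Earth (the moment of inertia below is an assumed, commonly cited textbook value, not taken from this excerpt):

```python
import math

period = 23.93 * 3600.0          # sidereal rotation period, seconds
omega = 2.0 * math.pi / period   # angular velocity, rad/s
I = 8.04e37                      # Earth's moment of inertia, kg*m^2 (assumed value)
E_rot = 0.5 * I * omega ** 2     # rotational kinetic energy, joules
print(f"omega = {omega:.3e} rad/s, E_rot = {E_rot:.3e} J")
```

The result is on the order of 10^29 joules, many orders of magnitude more than humanity's annual energy consumption.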
Part of the Earth's rotational energy can also be tapped using tidal power. Friction from the two global tidal bulges dissipates energy, infinitesimally slowing down Earth's angular velocity ω. Due to the conservation of angular momentum, this process transfers angular momentum to the Moon's orbital motion, increasing its distance from Earth and its orbital period (see tidal locking for a more detailed explanation of this process).
See also
Flywheel
List of energy storage projects
Rigid rotor
Rotational spectroscopy
Notes |
https://en.wikipedia.org/wiki/Proctodeum | A proctodeum is the back ectodermal part of an alimentary canal. It is created during embryogenesis by a folding of the outer body wall. It will form the lower part of the anal canal, below the pectinate line, which will be lined by stratified squamous non-keratinized (zona hemorrhagica) and stratified squamous keratinized (zona cutanea) epithelium. The junction between them is Hilton's white line.
External links
Embryology of digestive system |
https://en.wikipedia.org/wiki/Hoffman%E2%80%93Singleton%20graph | In the mathematical field of graph theory, the Hoffman–Singleton graph is a 7-regular undirected graph with 50 vertices and 175 edges. It is the unique strongly regular graph with parameters (50,7,0,1). It was constructed by Alan Hoffman and Robert Singleton while trying to classify all Moore graphs, and is the highest-order Moore graph known to exist. Since it is a Moore graph where each vertex has degree 7, and the girth is 5, it is a (7,5)-cage.
Construction
Here are some constructions of the Hoffman–Singleton graph.
Construction from pentagons and pentagrams
Take five pentagons Ph and five pentagrams Qi, so that vertex j of Ph is adjacent to vertices j − 1 and j + 1, and vertex j of Qi is adjacent to vertices j − 2 and j + 2. Then join vertex j of Ph to vertex h·i + j of Qi. (All indices are modulo 5.)
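This construction is small enough to verify directly. A sketch (assuming the standard convention that each pentagon joins vertex j to j ± 1 and each pentagram joins j to j ± 2, all mod 5) that builds the graph and counts vertices, edges, and degrees:

```python
from itertools import product

def hoffman_singleton():
    """Pentagon/pentagram construction: pentagon P_h joins vertex j to
    j +/- 1, pentagram Q_i joins j to j +/- 2, and vertex j of P_h is
    joined to vertex h*i + j of Q_i (all indices mod 5)."""
    adj = {v: set() for v in product("PQ", range(5), range(5))}
    def link(a, b):
        adj[a].add(b)
        adj[b].add(a)
    for h, j in product(range(5), repeat=2):
        link(("P", h, j), ("P", h, (j + 1) % 5))      # pentagon edges
        link(("Q", h, j), ("Q", h, (j + 2) % 5))      # pentagram edges
    for h, i, j in product(range(5), repeat=3):
        link(("P", h, j), ("Q", i, (h * i + j) % 5))  # joining edges
    return adj

adj = hoffman_singleton()
n_edges = sum(len(nbrs) for nbrs in adj.values()) // 2  # expect 50 vertices, 175 edges
```

Checking common-neighbor counts over all vertex pairs confirms the strongly regular parameters (50, 7, 0, 1): adjacent vertices share no neighbor, non-adjacent vertices share exactly one.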
Construction from PG(3,2)
Take a Fano plane on seven elements, such as {abc, ade, afg, bef, bdg, cdf, ceg} and apply all 2520 even permutations on the 7-set abcdefg. Canonicalize each such Fano plane (e.g. by reducing to lexicographic order) and discard duplicates. Exactly 15 Fano planes remain. Each 3-set (triplet) of the set abcdefg is present in exactly 3 Fano planes. The incidence between the 35 triplets and 15 Fano planes induces PG(3,2), with 15 points and 35 lines. To make the Hoffman-Singleton graph, create a graph vertex for each of the 15 Fano planes and 35 triplets. Connect each Fano plane to its 7 triplets, like a Levi graph, and also connect disjoint triplets to each other like the odd graph O(4).
A very similar construction from PG(3,2) is used to build the Higman–Sims graph, which has the Hoffman-Singleton graph as a subgraph.
Construction on a groupoid
Let be the set . Define a binary operation on such that for each and in ,
.
Then the Hoffman-Singleton graph has vertices , and there exists an edge between and whenever for some .
Algebraic properties
The automorphism group of the Hoffman–Singleton graph is a group of order 252,000 isomorphic to PΣU(3,5²), the semidirect product of the projective special unitary group PSU(3,5²) with the cyclic group of order 2 gener |
https://en.wikipedia.org/wiki/KCNK13 | Potassium channel, subfamily K, member 13, also known as KCNK13 is a human gene. The protein encoded by this gene, K2P13.1 is a potassium channel containing two pore-forming P domains.
See also
Tandem pore domain potassium channel |
https://en.wikipedia.org/wiki/Clioquinol | Clioquinol (iodochlorhydroxyquin) is an antifungal drug and antiprotozoal drug. It is neurotoxic in large doses. It is a member of a family of drugs called hydroxyquinolines which inhibit certain enzymes related to DNA replication. The drugs have been found to have activity against both viral and protozoal infections.
Antiprotozoal use
A 1964 report described the use of clioquinol in both the treatment and prevention of shigella infection and Entamoeba histolytica infection in institutionalized individuals at Sonoma State Hospital in California. The report indicates 4000 individuals were treated over a 4-year period with few side effects.
Several journal articles describing its use as an antiprotozoal include:
A 2005 reference to its use in treating a Dutch family for Entamoeba histolytica infection.
A 2004 reference to its use in the Netherlands in the treatment of Dientamoeba fragilis infection.
A 1979 reference to the use in Zaire in the treatment of Entamoeba histolytica infection.
Subacute myelo-optic neuropathy
Clioquinol's use as an antiprotozoal drug has been restricted or discontinued in some countries due to an event in Japan where over 10,000 people developed subacute myelo-optic neuropathy (SMON) between 1957 and 1970. The drug was used widely in many countries before and after the SMON event without similar reports. As yet, no explanation exists as to why it produced this reaction, and some researchers have questioned whether clioquinol was the causative agent in the disease, noting that the drug had been used for 20 years prior to the epidemic without incident, and that the SMON cases began to reduce in number prior to the discontinuation of the drug. Theories suggested have included improper dosing, the permitted use of the drug for extended periods of time, and dosing which did not consider the smaller average stature of Japanese; however a dose dependent relationship between SMON development and clioquinol use was nev |
https://en.wikipedia.org/wiki/National%20Initiative%20for%20Cybersecurity%20Careers%20and%20Studies | National Initiative for Cybersecurity Careers and Studies (NICCS) is an online training initiative and portal built as per the National Initiative for Cybersecurity Education framework. This is a federal cybersecurity training subcomponent, operated and maintained by Cybersecurity and Infrastructure Security Agency.
History
The initiative was launched by Janet Napolitano, then Secretary of Homeland Security, on February 21, 2013. The primary objective of the initiative is to develop and train the next generation of American cyber professionals by involving academia and the private sector.
Federal Virtual Training Environment
NICCS hosts Federal Virtual Training Environment, a completely free online cybersecurity training system for federal and state government employees. It contains more than 800 hours of training materials on ethical hacking and surveillance, risk management, and malware analysis.
See also
Cybersecurity and Infrastructure Security Agency
National Cyber Security Division
National Initiative for Cybersecurity Education |
https://en.wikipedia.org/wiki/MRC%20Cognition%20and%20Brain%20Sciences%20Unit | The Cognition and Brain Sciences Unit is a branch of the UK Medical Research Council, based in Cambridge, England. The CBSU is a centre for cognitive neuroscience, with a mission to improve human health by understanding and enhancing cognition and behaviour in health, disease and disorder. It is one of the largest and most long-lasting contributors to the development of psychological theory and practice.
The CBSU has its own magnetic resonance imaging (MRI, 3T) scanner on-site, as well as a 306-channel magnetoencephalography (MEG) system and a 128-channel electroencephalography (EEG) laboratory.
The CBSU has close links to clinical neuroscience research in the University of Cambridge Medical School. Over 140 scientists, students, and support staff work in research areas such as Memory, Attention, Emotion, Speech and Language, Development and Aging, Computational Modelling and Neuroscience Methods. With dedicated facilities available on site, the Unit has particular strengths in the application of neuroimaging techniques in the context of well-developed neuro-cognitive theory.
History
The unit was established in 1944 as the MRC Applied Psychology Unit. In June 2001, the History of Modern Biomedicine Research Group held a witness seminar to gather information on the unit's history.
On 1 July 2017, the CBU was merged with the University of Cambridge. Coming under the Clinical School, the unit is still funded by the British government through Research Councils UK but is managed and maintained by Cambridge University.
List of directors
Kenneth Craik, 1944–1945
Frederic Bartlett, 1945–1951
Norman Mackworth, 1951–1958
Donald Broadbent, 1958–1974
Alan Baddeley, 1974–1997
William Marslen-Wilson, 1997–2010
Susan Gathercole, 2011–2018
Matthew Lambon Ralph, 2018– |
https://en.wikipedia.org/wiki/Durotaxis | Durotaxis is a form of cell migration in which cells are guided by rigidity gradients, which arise from differential structural properties of the extracellular matrix (ECM). Most normal cells migrate up rigidity gradients (in the direction of greater stiffness).
History of durotaxis research
The process of durotaxis requires a cell to actively sense the environment, process the mechanical stimulus, and execute a response. Originally, this was believed to be an emergent metazoan property, as the phenomenon requires a complex sensory loop that is dependent on the communication of many different cells. However, as the wealth of relevant scientific literature grew in the late 1980s and throughout the 1990s, it became apparent that single cells possess the ability to do the same. The first observations of durotaxis in isolated cells were that mechanical stimuli could cause the initiation and elongation of axons in the sensory and brain neurons of chicks and induce motility in previously stationary fish epidermal keratocytes. ECM stiffness was also noted to influence cytoskeletal stiffness, fibronectin fibril assembly, the strength of integrin-cytoskeletal interactions, morphology and motility rate, all of which were known to influence cell migration.
With information from the previous observations, Lo and colleagues formulated the hypothesis that individual cells can detect substrate stiffness by a process of active tactile exploration in which cells exert contractile forces and measure the resulting deformation in the substrate. Supported by their own experiments, this team coined the term "durotaxis" in their paper in the Biophysical Journal in the year 2000. More recent research supports the previous observations and the principle of durotaxis, with continued evidence for cell migration up rigidity gradients and stiffness-dependent morphological changes.
Substrate rigidity
The rigidity of the ECM is significantly different across cell types; for example, it ranges fr |
https://en.wikipedia.org/wiki/Data%20anonymization | Data anonymization is a type of information sanitization whose intent is privacy protection. It is the process of removing personally identifiable information from data sets, so that the people whom the data describe remain anonymous.
Overview
Data anonymization has been defined as a "process by which personal data is altered in such a way that a data subject can no longer be identified directly or indirectly, either by the data controller alone or in collaboration with any other party." Data anonymization may enable the transfer of information across a boundary, such as between two departments within an agency or between two agencies, while reducing the risk of unintended disclosure, and in certain environments in a manner that enables evaluation and analytics post-anonymization.
In the context of medical data, anonymized data refers to data from which the patient cannot be identified by the recipient of the information. The name, address, and full postcode must be removed, together with any other information which, in conjunction with other data held by or disclosed to the recipient, could identify the patient.
There will always be a risk that anonymized data may not stay anonymous over time. Pairing the anonymized dataset with other data, clever techniques, and raw computing power are some of the ways previously anonymous data sets have become de-anonymized; the data subjects are no longer anonymous.
De-anonymization is the reverse process in which anonymous data is cross-referenced with other data sources to re-identify the anonymous data source.
Generalization and perturbation are the two popular anonymization approaches for relational data. The process of obscuring data with the ability to re-identify it later is also called pseudonymization and is one way companies can store data in a way that is HIPAA compliant.
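A common way to implement pseudonymization (as opposed to irreversible anonymization) is to replace direct identifiers with a keyed hash. A minimal sketch; the key and example identifier are hypothetical:

```python
import hashlib
import hmac

def pseudonymize(identifier: str, secret_key: bytes) -> str:
    """Replace a direct identifier with a keyed hash (HMAC-SHA-256).

    The mapping is repeatable (same input yields the same token), so records
    can still be linked for analysis, but reversing it requires the secret
    key; this is pseudonymization, not irreversible anonymization.
    """
    return hmac.new(secret_key, identifier.encode("utf-8"), hashlib.sha256).hexdigest()

token = pseudonymize("alice@example.com", b"keep-this-key-offline")
```

Using a keyed HMAC rather than a bare hash matters: without the key, an attacker who can guess likely identifiers (emails, national ID numbers) could rebuild the mapping by brute force.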
However, according to the Article 29 Data Protection Working Party, Directive 95/46/EC refers to anonymisation in Recital 26 "signifies that to anonymi |
https://en.wikipedia.org/wiki/API%20writer | An API writer is a technical writer who writes documents that describe an application programming interface (API). The primary audience includes programmers, developers, system architects, and system designers.
Overview
An API is a library consisting of interfaces, functions, classes, structures, enumerations, etc. for building a software application. It is used by developers to interact with and extend the software. An API for a given programming language or system may consist of system-defined and user-defined constructs. As the number and complexity of these constructs increases, it becomes very tedious for developers to remember all of the functions and parameters defined. Hence, API writers play a key role in building software applications.
Due to the technical subject matter, API writers must understand application source code enough to extract the information that API documents require. API writers often use tooling that extracts software documentation placed by programmers in the source code in a structured manner, preserving the relationships between the comments and the programming constructs they document.
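For example, documentation generators parse structured comments kept next to the code they describe. A minimal Python docstring sketch (the function, parameters, and field style are hypothetical illustrations, not a specific tool's required format):

```python
import inspect

def transfer(amount: float, currency: str = "USD") -> str:
    """Transfer ``amount`` in ``currency`` between accounts.

    :param amount: value to move; must be positive.
    :param currency: ISO 4217 code; defaults to "USD".
    :returns: a confirmation identifier.
    """
    return f"tx-{currency}-{amount}"

# A documentation generator extracts exactly this structured text,
# preserving its association with the function it documents:
doc = inspect.getdoc(transfer)
```

Because the comment lives beside the signature, the relationship between the prose and the programming construct survives refactoring, which is the point of source-embedded documentation.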
API writers must also understand the software product and document the new features or changes as part of the new software release. The schedule of software releases varies from organization to organization. API writers need to understand the software life cycle well and integrate themselves into the systems development life cycle (SDLC).
API writers in the United States generally follow The Chicago Manual of Style for grammar and punctuation.
Qualifications
API writers typically possess a mix of programming and language skills; many API writers have backgrounds in programming or technical writing.
Computer programming background (Knowledge of C, C++, Java, PHP, or other programming languages)
Knowledge of formatting standards like Doxygen, Javadoc, OpenAPI, or DITA
Knowledge of editors and tools, like FrameMaker
Excellent communication |
https://en.wikipedia.org/wiki/Flag%20of%20Wichita%2C%20Kansas | Wichita's official city flag was adopted in 1937. Designed by a local artist from South Wichita, Cecil McAlister, it represents freedom, happiness, contentment and home.
The blue sun in the center represents happiness and contentment. The Zia symbol for permanent "home" is stitched on the blue sun. The three red and white rays that alternate from the off-center blue sun represent the path of freedom to come and go as one pleases.
Selected from more than 100 entries that were submitted for a city flag design contest, it was officially adopted on Flag Day, June 14, 1937, by Mayor T. Walker Weaver. The first Wichita flag was produced by local seamstress Mary J. Harper. It flew for the first time on July 23, 1937, over City Hall.
History
The early 20th century was a difficult time for the young city of Wichita. It saw the ups and downs of the oil industry, the Great Depression, the boom of the aircraft industry, and the rapid growth and expansion of manufacturing jobs during World War II. The cycle of innovation and invention in Wichita generated numerous taglines, catchphrases, and monikers, from Cowtown to Doo Dah to its first claim to a world title, the Broomcorn Capital, but these left Wichita feeling outdated and out of touch with larger cities.
With the varied aircraft industry and a test center for aviation moving into town, Wichita was soon dubbed the "Air Capital of the World." However, after the Depression and the Dust Bowl swept through the city, city leaders determined that Wichita needed more than a slogan: something visual, a flag.
The Contest
In 1937, Wichita's city leaders came up with the idea of holding a contest for local Wichitans to design a flag. A panel of artists was assembled to judge the submitted designs, and a variety of prizes were offered. Over 100 submissions were received, and the panel of judges set to work determining the next identity associated with Wichita. Cecil McAlister was selected as the winner, and the grand prize of $40 w
https://en.wikipedia.org/wiki/Melzer%27s%20reagent | Melzer's reagent (also known as Melzer's iodine reagent, Melzer's solution or informally as Melzer's) is a chemical reagent used by mycologists to assist with the identification of fungi, and by phytopathologists for fungi that are plant pathogens.
Composition
Melzer's reagent is an aqueous solution of chloral hydrate, potassium iodide, and iodine. Depending on the formulation, it consists of approximately 2.50–3.75% potassium iodide and 0.75–1.25% iodine, with the remainder of the solution being 50% water and 50% chloral hydrate. Melzer's is toxic to humans if ingested due to the presence of iodine and chloral hydrate. Due to the legal status of chloral hydrate, Melzer's reagent is difficult to obtain in the United States.
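As a quick sanity check of these proportions, here is a sketch computing mass fractions for one commonly cited recipe (1.5 g iodine, 5 g potassium iodide, 100 g chloral hydrate, and 100 g water); the exact quantities are an assumption for illustration, not a lab protocol:

```python
# Mass fractions for one commonly cited Melzer's recipe; the exact
# quantities here are illustrative assumptions, not a lab protocol.
recipe_grams = {
    "iodine": 1.5,
    "potassium iodide": 5.0,
    "chloral hydrate": 100.0,
    "water": 100.0,
}
total = sum(recipe_grams.values())
fractions = {name: 100.0 * mass / total for name, mass in recipe_grams.items()}
for name, pct in sorted(fractions.items(), key=lambda item: -item[1]):
    print(f"{name:>17}: {pct:5.2f}%")
```

With these assumed numbers the iodine and potassium iodide fractions land near the quoted ranges, with chloral hydrate and water each making up just under half of the mixture.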
In response to difficulties obtaining chloral hydrate, scientists at Rutgers formulated Visikol (compatible with Lugol's iodine) as a replacement. In 2019, research showed that Visikol behaves differently from Melzer's reagent in several key situations, concluding that it should not be recommended as a viable substitute.
Melzer's reagent is part of a class of iodine/potassium iodide (IKI)-containing reagents used in biology; Lugol's iodine is another such formula.
Reactions
Melzer's is used by exposing fungal tissue or cells to the reagent, typically in a microscope slide preparation, and looking for any of three color reactions:
Amyloid or Melzer's-positive reaction, in which the material reacts blue to black.
Pseudoamyloid or dextrinoid reaction, in which the material reacts brown to reddish brown.
Inamyloid or Melzer's-negative, in which the tissues do not change color, or react faintly yellow-brown.
Within the amyloid reaction, two types can be distinguished:
Euamyloid reaction, in which the material turns blue without potassium hydroxide (KOH)-pretreatment.
Hemiamyloid reaction, in which the material turns red in Lugol's solution, but shows no reaction in Melzer's reagent; when KOH-pretreated it turns blue in both reagents (hemiamyloidity).
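The three basic color reactions above can be sketched as a toy lookup (purely illustrative; the color keywords and function name are assumptions, not part of any mycological tooling):

```python
# Toy mapping from observed color change to the reaction names used above.
# The specific color keywords chosen here are illustrative assumptions.
REACTIONS = {
    "amyloid": {"blue", "black"},                       # Melzer's-positive
    "dextrinoid": {"brown", "reddish brown"},           # pseudoamyloid
    "inamyloid": {"unchanged", "faint yellow-brown"},   # Melzer's-negative
}

def classify_reaction(observed_color: str) -> str:
    """Return the reaction name for an observed color, or 'unknown'."""
    for reaction, colors in REACTIONS.items():
        if observed_color in colors:
            return reaction
    return "unknown"

print(classify_reaction("blue"))       # amyloid
print(classify_reaction("brown"))      # dextrinoid
print(classify_reaction("unchanged"))  # inamyloid
```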
M |
https://en.wikipedia.org/wiki/Ramicolous%20lichen | A ramicolous lichen is one that lives on branches. |