https://en.wikipedia.org/wiki/Antagonism%20%28phytopathology%29
|
In phytopathology, antagonism refers to the action of any organism that suppresses or interferes with the normal growth and activity of a plant pathogen, such as a pathogenic bacterium or fungus.
These organisms can be used for pest control and are referred to as biological control agents. They may be predators, parasites, parasitoids, or pathogens that attack a harmful insect, weed, plant disease, or other organism in their vicinity. The inhibitory substance is highly specific in its action, affecting only a particular species. Many soil microorganisms are antagonistic: they secrete potent enzymes that destroy other cells by digesting their cell walls and degrading the cellular material, and the released protoplasmic material serves as a nutrient for the inhibiting organism. For example, Aspergillus has an antagonistic effect on Penicillium and Cladosporium, Trichoderma on actinomycetes, and Pseudomonas on Cladosporium. Such organisms may be of great practical importance, since they often produce antibiotics which modify the normal growth processes of other organisms.
Mechanism
Antibiosis (e.g., enzymes, toxins, antibiotics).
Direct parasitism (biotrophic or necrotrophic).
Competition (e.g., for nutrients).
Induced resistance (indirect).
|
https://en.wikipedia.org/wiki/Immunological%20Genome%20Project
|
The Immunological Genome Project (ImmGen) is a collaborative scientific research project that is currently building a gene-expression database for all characterized immune cells in the mouse. The overarching goal of the project is to computationally reconstruct the gene regulatory network in immune cells. All data generated as part of ImmGen are made freely and publicly available at the ImmGen portal.
The ImmGen project began in 2008 as a collaboration between several immunology and computational biology laboratories across the United States, and completed its second phase in 2017. Currently, raw data and specialized data browsers from the first and second phases are available at www.ImmGen.org.
Project
Background
A true understanding of cell differentiation in the immune system will require a general perspective on the transcriptional profile of each cell type of the adaptive and innate immune systems, and how these profiles evolve through cell differentiation or activation by immunogenic or tolerogenic ligands. The ImmGen project aims to establish the roadmap of these transcriptional states.
Gene-expression compendium
The first aim of ImmGen is to generate a compendium of whole-genome transcriptional profiles (initially by microarray, now mostly by RNA-sequencing) for nearly all characterized cell populations of the adaptive and innate immune systems in the mouse, at major stages of differentiation and activation. This effort is being carried out by a group of collaborating immunology research laboratories across the U.S. Each laboratory brings unique expertise in a particular cell lineage, and all employ standardized procedures for cell sorting. The compendium currently includes over 250 immunologically relevant cell types, from all lymphoid organs and other tissues that are monitored by immune cells.
Publications
A series of ImmGen reports was published as the compendium accumulated. Some lineage-specific reports
|
https://en.wikipedia.org/wiki/Acoustic%20holography
|
Acoustic holography is a method for estimating the sound field near a sound source by measuring acoustic parameters away from the source by means of an array of pressure and/or particle velocity transducers. The measuring techniques included in acoustic holography are becoming increasingly popular in various fields, most notably those of transportation, vehicle and aircraft design, and noise, vibration, and harshness (NVH). The general idea of acoustic holography has led to different versions such as near-field acoustic holography (NAH) and statistically optimal near-field acoustic holography (SONAH).
For audio rendering and production, Wave Field Synthesis and Higher Order Ambisonics are related technologies, respectively modelling the acoustic pressure field on a plane, or in a spherical volume.
|
https://en.wikipedia.org/wiki/Asystole
|
Asystole (New Latin, from Greek privative a "not, without" + systolē "contraction") is the absence of ventricular contractions in the context of a lethal heart arrhythmia (in contrast to an induced asystole on a cooled patient on a heart-lung machine and general anesthesia during surgery necessitating stopping the heart). Asystole is the most serious form of cardiac arrest and is usually irreversible. Also referred to as cardiac flatline, asystole is the state of total cessation of electrical activity from the heart, which means no tissue contraction from the heart muscle and therefore no blood flow to the rest of the body.
Asystole should not be confused with very brief pauses (under 3 seconds) in the heart's electrical activity that can occur in certain less severe abnormal rhythms. Asystole is also different from very fine ventricular fibrillation, though both have a poor prognosis, and untreated fine VF will lead to asystole. Faulty wiring, disconnection of electrodes and leads, and power disruptions should be ruled out before asystole is diagnosed.
Asystolic patients (as opposed to those with a "shockable rhythm" such as coarse or fine ventricular fibrillation, or unstable ventricular tachycardia that is not producing a pulse, which can potentially be treated with defibrillation) usually present with a very poor prognosis. Asystole is found initially in only about 28% of cardiac arrest cases in hospitalized patients, but only 15% of these survive, even with the benefit of an intensive care unit, with the rate being lower (6%) for those already prescribed drugs for high blood pressure.
Asystole is treated by cardiopulmonary resuscitation (CPR) combined with an intravenous vasopressor such as epinephrine (a.k.a. adrenaline). Sometimes an underlying reversible cause can be detected and treated (the so-called "Hs and Ts", an example of which is hypokalaemia). Several interventions previously recommended—such as defibrillation (known to be ineffective on asystole, but previously perf
|
https://en.wikipedia.org/wiki/LIN28B%20%28gene%29
|
Lin-28 homolog B is a protein that in humans is encoded by the LIN28B gene.
Function
The protein encoded by this gene belongs to the lin-28 family, which is characterized by the presence of a cold-shock domain and a pair of CCHC zinc finger domains. This gene is highly expressed in testis, fetal liver, placenta, and in primary human tumors and cancer cell lines. It is negatively regulated by microRNAs that target sites in the 3' UTR, and overexpression of this gene in primary tumors is linked to the repression of let-7 family of microRNAs and derepression of let-7 targets, which facilitates cellular transformation.
|
https://en.wikipedia.org/wiki/GRTensorII
|
GRTensorII is a Maple package designed for tensor computations, particularly in general relativity.
This package was developed at Queen's University in Kingston, Ontario by Peter Musgrave, Denis Pollney and Kayll Lake. While there are many packages which perform tensor computations (including a standard Maple package), GRTensorII is particularly well suited for carrying out routine computations of useful quantities when working with (or searching for) exact solutions in general relativity. Its principal advantages include
convenience of definition of new spacetimes and tensor expressions
efficient computation with frames
efficient computation of Ricci and Weyl spinor components and of Petrov classification
efficient computation of the Carminati-McLenaghan invariants and other curvature invariants
Currently, GRTensorII does have some drawbacks:
Maple is expensive
valuable subpackages for perturbation and junction computations have not been updated
no subpackage is yet publicly available in GRTensorII for executing the Cartan-Karlhede algorithm
sharing information with standard Maple packages can sometimes become awkward
|
https://en.wikipedia.org/wiki/Pono%20%28digital%20music%20service%29
|
Pono (, Hawaiian word for "proper") was a portable digital media player and music download service for high-resolution audio. It was developed by musician Neil Young and his company PonoMusic, which raised money for development and initial production through a crowd-funding campaign on Kickstarter. Production and shipments to backers started in October 2014, and shipments to the general public began in the first quarter of 2015.
Pono's stated goal was to present songs "as they first sound during studio recording sessions", using "high-resolution" 24-bit, 192 kHz audio instead of "the compressed audio inferiority that MP3s offer". The project received mixed reactions: some described Pono as a competitor to similar music services such as HDtracks, while others doubted its potential for success.
In April 2017 it was announced that Pono was discontinued, and alternative plans were later abandoned.
Background
Writing in his book Waging Heavy Peace, Young expressed concern about digital audio quality, criticizing in particular the quality offered by Apple's iTunes Store. "My goal is to try and rescue the art form that I've been practicing for the past 50 years," he said.
Founding
PonoMusic was founded in 2012 by Young, along with Silicon Valley entrepreneur John Hamm as the company's CEO. The name was derived from pono (), a Hawaiian word for "righteousness."
Pono reportedly had backing from, and had signed a full agreement, with Warner. In September 2012, Young appeared on the Late Show with David Letterman with a prototype of the player, and confirmed backing from Warner, as well as major record labels Sony, and Universal. Young claimed that Pono would provide "...the finest quality, highest-resolution digital music from ... major labels [as well as] prominent independent labels..." using the FLAC audio file format.
On March 12, 2014, the company, with the help of Alex Daly and her crowdfunding consultancy Vann Alexandra, launched a successful crowdfunding campaign on Kickstarter.
|
https://en.wikipedia.org/wiki/Fractional%20excretion%20of%20sodium
|
The fractional excretion of sodium (FENa) is the percentage of the sodium filtered by the kidney which is excreted in the urine. It is measured in terms of plasma and urine sodium, rather than by the interpretation of urinary sodium concentration alone, as urinary sodium concentrations can vary with water reabsorption. Therefore, the urinary and plasma concentrations of sodium must be compared to get an accurate picture of kidney clearance. In clinical use, the fractional excretion of sodium can be calculated as part of the evaluation of acute kidney failure in order to determine if hypovolemia or decreased effective circulating plasma volume is a contributor to the kidney failure.
Calculation
FENa is calculated in two parts—figuring out how much sodium is excreted in the urine, and then finding its ratio to the total amount of sodium that passed through (aka "filtered by") the kidney.
First, the actual amount of sodium excreted is calculated by multiplying the urine sodium concentration by the urinary flow rate. This is the numerator in the equation. The denominator is the total amount of sodium filtered by the kidneys. This is calculated by multiplying the plasma sodium concentration by the glomerular filtration rate calculated using creatinine filtration. This formula is represented mathematically as:
[(Sodiumurinary × Flow rateurinary) ÷ ((Sodiumplasma) × ((Creatinineurinary × Flow rateurinary) ÷ (Creatinineplasma)))] × 100
Sodium (mmol/L)
Creatinine (mg/dL)
The flow rates cancel out in the above equation, simplifying to the standard equation:
[(Sodiumurinary × Creatinineplasma) ÷ (Sodiumplasma × Creatinineurinary)] × 100
For ease of recall, one can just remember the fractional excretion of sodium is the clearance of sodium divided by the glomerular filtration rate (i.e. the "fraction" excreted).
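The simplified ratio above can be computed directly. The following is a minimal sketch (the function name, argument names, and example values are ours, chosen for illustration):

```python
def fena(urine_na, plasma_na, urine_cr, plasma_cr):
    """Fractional excretion of sodium, in percent.

    urine_na, plasma_na: sodium concentrations (mmol/L)
    urine_cr, plasma_cr: creatinine concentrations (same units as each other)
    """
    # The urinary flow rates cancel, leaving the ratio of sodium clearance
    # to creatinine clearance (the latter standing in for GFR).
    return (urine_na * plasma_cr) / (plasma_na * urine_cr) * 100

# Hypothetical values in the sodium-retaining range (FENa below 1%).
value = fena(urine_na=20, plasma_na=140, urine_cr=150, plasma_cr=1.5)
print(f"FENa = {value:.2f}%")  # 0.14%
```

Because only ratios appear, the creatinine units cancel as well, so mg/dL or µmol/L work equally provided urine and plasma use the same unit.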
Interpretation
FENa can be useful in the evaluation of acute kidney failure in the context of low urine output. Low fractional excretion indicates sodium retention by the kidney, suggesting pathophysiology extrinsic to the urinary system s
|
https://en.wikipedia.org/wiki/Imbert%E2%80%93Fedorov%20effect
|
The Imbert–Fiodaraŭ effect (named after Fiodar Ivanavič Fiodaraŭ (1911–1994) and Christian Imbert (1937–1998)) is an optical phenomenon in which a beam of circularly or elliptically polarized light undergoes a small sideways shift when refracted or totally internally reflected. The sideways shift is perpendicular to the plane containing the incident and reflected beams. This effect is the circular polarization analog of the Goos–Hänchen effect.
|
https://en.wikipedia.org/wiki/Underwater%20survey
|
An underwater survey is a survey performed in an underwater environment or conducted remotely on an underwater object or region. Survey can have several meanings. The word originates in Medieval Latin with meanings of looking over and detailed study of a subject. One meaning is the accurate measurement of a geographical region, usually with the intention of plotting the positions of features as a scale map of the region. This meaning is often used in scientific contexts, and also in civil engineering and mineral extraction. Another meaning, often used in a civil, structural, or marine engineering context, is the inspection of a structure or vessel to compare actual condition with the specified nominal condition, usually with the purpose of reporting on the actual condition and compliance with, or deviations from, the nominal condition, for quality control, damage assessment, valuation, insurance, maintenance, and similar purposes. In other contexts it can mean inspection of a region to establish presence and distribution of specified content, such as living organisms, either to establish a baseline, or to compare with a baseline.
These types of survey may be done in or of the underwater environment, in which case they may be referred to as underwater surveys, which may include bathymetric, hydrographic, and geological surveys, archaeological surveys, ecological surveys, and structural or vessel safety surveys. In some cases they can be done by remote sensing, using a variety of tools, and sometimes by direct human intervention, usually by a professional diver. Underwater surveys are an essential part of the planning, and often of quality control and monitoring, of underwater construction, dredging, mineral extraction, ecological monitoring, and archaeological investigations. They are often required as part of an ecological impact study.
Types
The types of underwater survey include, but are not necessarily restricted to, archeological, bathymetric and hydrographic
|
https://en.wikipedia.org/wiki/Double-charm%20tetraquark
|
The double-charm tetraquark (Tcc+) is a type of long-lived tetraquark that was discovered in 2021 in the LHCb experiment conducted at the Large Hadron Collider. It contains four quarks: two charm quarks, an anti-up quark, and an anti-down quark.
It has a theoretical computed mass of . The discovery showed an exceptionally strong peak, with 20-sigma significance.
It is hypothesized that studying the behavior of the double-charm tetraquark may play a part in explaining the behavior of the strong force. Following the discovery of the Tcc+, researchers now plan experiments to find its double-beauty counterpart Tbb. This tetraquark has been found to have a longer lifespan than most known exotic-matter particles.
|
https://en.wikipedia.org/wiki/Quantum%20gate%20teleportation
|
Quantum gate teleportation is a quantum circuit construction where a gate is applied to target qubits by first applying the gate to an entangled state and then teleporting the target qubits through that entangled state.
This separation of the physical application of the gate from the target qubit can be useful in cases where applying the gate directly to the target qubit may be more likely to destroy it than to apply the desired operation.
For example, the KLM protocol can be used to implement a Controlled NOT gate on a photonic quantum computer, but the process can be prone to errors that destroy the target qubits.
By using gate teleportation, the CNOT operation can be applied to a state that can be easily recreated if it is destroyed, allowing the KLM CNOT to be used in long-running quantum computations without risking the rest of the computation.
Additionally, gate teleportation is a key component of magic state distillation, a technique that can be used to overcome the limitations of the Eastin-Knill theorem.
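The mechanism described above can be made concrete with a small single-qubit simulation. This is an illustrative sketch only (the choice of G = Hadamard and all names are ours, not any specific experiment): the gate G is applied to half of a Bell pair ahead of time, and teleporting |psi> through that modified pair yields G|psi> after a correction G P G†, which remains a Pauli whenever G is a Clifford gate.

```python
import numpy as np

I = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)  # G = Hadamard

phi_plus = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)

# Bell-measurement outcome k on qubits (0,1) leaves qubit 2 holding
# G P_k |psi> (up to phase), where |beta_k> = (I tensor P_k)|Phi+>.
paulis = {"I": I, "X": X, "Z": Z, "XZ": X @ Z}
bells = {k: np.kron(I, P) @ phi_plus for k, P in paulis.items()}

def teleport_with_gate(psi, G):
    """Return qubit 2's corrected state for each Bell-measurement outcome."""
    resource = np.kron(I, G) @ phi_plus        # gate pre-applied to the pair
    state = np.kron(psi, resource)             # qubit 0 = input, 1-2 = pair
    outputs = []
    for k, b in bells.items():
        # Contract <beta_k| against qubits (0,1); qubit 2's state remains.
        out = np.tensordot(b.conj(), state.reshape(4, 2), axes=(0, 0))
        out /= np.linalg.norm(out)
        # Conjugated Pauli correction (still a Pauli for Clifford G).
        correction = G @ paulis[k] @ G.conj().T
        outputs.append(correction @ out)
    return outputs

psi = np.array([0.6, 0.8], dtype=complex)
target = H @ psi
for out in teleport_with_gate(psi, H):
    # Every measurement branch reproduces G|psi> up to a global phase.
    assert abs(abs(np.vdot(target, out)) - 1.0) < 1e-9
```

Note how the fragile operation touches only the reusable resource state; if preparing the resource fails, it can be remade without disturbing |psi>.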
Quantum gate teleportation has been demonstrated in various types of quantum computers, including linear optical,
superconducting quantum computing,
and trapped ion quantum computing.
|
https://en.wikipedia.org/wiki/Delay-locked%20loop
|
In electronics, a delay-locked loop (DLL) is a pseudo-digital circuit similar to a phase-locked loop (PLL), with the main difference being the absence of an internal voltage-controlled oscillator, replaced by a delay line.
A DLL can be used to change the phase of a clock signal (a signal with a periodic waveform), usually to enhance the clock rise-to-data output valid timing characteristics of integrated circuits (such as DRAM devices). DLLs can also be used for clock and data recovery (CDR). From the outside, a DLL can be seen as a negative delay gate placed in the clock path of a digital circuit.
The main component of a DLL is a delay chain composed of many delay gates connected output-to-input. The input of the chain (and thus of the DLL) is connected to the clock that is to be negatively delayed. A multiplexer is connected to each stage of the delay chain; a control circuit automatically updates the selector of this multiplexer to produce the negative delay effect. The output of the DLL is the resulting, negatively delayed clock signal.
Another way to view the difference between a DLL and a PLL is that a DLL uses a variable phase (=delay) block, whereas a PLL uses a variable frequency block.
A DLL compares the phase of its last output with the input clock to generate an error signal which is then integrated and fed back as the control to all of the delay elements.
The integration allows the error to go to zero while keeping the control signal, and thus the delays, where they need to be for phase lock. Since the control signal directly impacts the phase this is all that is required.
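The integrate-and-lock behavior just described can be sketched as a toy discrete-time simulation (illustrative only; all parameter values and names are ours): the loop integrates the phase error into a shared control value for every delay element until the chain spans exactly one clock period.

```python
clock_period = 10.0   # ns, reference clock period
n_stages = 4          # delay elements in the chain
control = 0.0         # integrator state: per-stage delay (ns)
gain = 0.2            # integrator gain

def phase_error(control):
    # Phase-detector model: how far the total chain delay is from one period.
    total_delay = n_stages * control
    return clock_period - total_delay

# The control loop: integrate the error into the common delay control.
for _ in range(100):
    control += gain * phase_error(control) / n_stages

# At lock the error is (numerically) zero and each stage delays T/N,
# while the integrator holds the control value in place.
print(f"per-stage delay at lock: {control:.3f} ns")  # 2.500 ns
```

A single integration suffices here because the control value sets phase directly, unlike the PLL case described next, where the oscillator adds a second integration.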
A PLL compares the phase of its oscillator with the incoming signal to generate an error signal which is then integrated to create a control signal for the voltage-controlled oscillator. The control signal impacts the oscillator's frequency, and phase is the integral of frequency, so a second integration is unavoidably performed by the oscillator itself.
In the Control Systems jargon, th
|
https://en.wikipedia.org/wiki/Logical%20truth
|
Logical truth is one of the most fundamental concepts in logic. Broadly speaking, a logical truth is a statement which is true regardless of the truth or falsity of its constituent propositions. In other words, a logical truth is a statement which is not only true, but one which is true under all interpretations of its logical components (other than its logical constants). Thus, logical truths such as "if p, then p" can be considered tautologies. Logical truths are thought to be the simplest case of statements which are analytically true (or in other words, true by definition). All of philosophical logic can be thought of as providing accounts of the nature of logical truth, as well as logical consequence.
Logical truths are generally considered to be necessarily true. This is to say that they are such that no situation could arise in which they could fail to be true. The view that logical statements are necessarily true is sometimes treated as equivalent to saying that logical truths are true in all possible worlds. However, the question of which statements are necessarily true remains the subject of continued debate.
Treating logical truths, analytic truths, and necessary truths as equivalent, logical truths can be contrasted with facts (which can also be called contingent claims or synthetic claims). Contingent truths are true in this world, but could have turned out otherwise (in other words, they are false in at least one possible world). Logically true propositions such as "If p and q, then p" and "All married people are married" are logical truths because they are true due to their internal structure and not because of any facts of the world (whereas "All married people are happy", even if it were true, could not be true solely in virtue of its logical structure).
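For propositional examples like those above, "true under all interpretations" can be checked mechanically by enumerating truth assignments. A small sketch (the helper `implies` is ours):

```python
from itertools import product

def implies(p, q):
    # Material conditional: "if p then q" is false only when p and not q.
    return (not p) or q

# "if p, then p" holds under every interpretation: a logical truth.
assert all(implies(p, p) for p in (True, False))

# "if p and q, then p" holds in all four interpretations of p and q.
assert all(implies(p and q, p) for p, q in product((True, False), repeat=2))

# A contingent claim is not: the bare proposition p fails when p is False.
assert not all(p for p in (True, False))
```

Such truth-table checks work only for truth-functional structure; "All married people are married" owes its logical truth to quantifier structure, which a truth table cannot capture.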
Rationalist philosophers have suggested that the existence of logical truths cannot be explained by empiricism, because they hold that it is impossible to account for our knowledge of logical tru
|
https://en.wikipedia.org/wiki/Alvarez%20hypothesis
|
The Alvarez hypothesis posits that the mass extinction of the non-avian dinosaurs and many other living things during the Cretaceous–Paleogene extinction event was caused by the impact of a large asteroid on the Earth. Prior to 2013, it was commonly cited as having happened about 65 million years ago, but Renne and colleagues (2013) gave an updated value of 66 million years. Evidence indicates that the asteroid fell in the Yucatán Peninsula, at Chicxulub, Mexico. The hypothesis is named after the father-and-son team of scientists Luis and Walter Alvarez, who first suggested it in 1980. Shortly afterwards, and independently, the same was suggested by Dutch paleontologist Jan Smit.
In March 2010, an international panel of scientists endorsed the asteroid hypothesis, specifically the Chicxulub impact, as the cause of the extinction. A team of 41 scientists reviewed 20 years of scientific literature and in so doing also ruled out other theories such as massive volcanism. They determined that a space rock in diameter hurtled into Earth at Chicxulub. For comparison, the Martian moon Phobos has a diameter of , and Mount Everest is just under . The collision would have released the same energy as , over a billion times the energy of the atomic bombs dropped on Hiroshima and Nagasaki.
A 2016 drilling project into the peak ring of the crater strongly supported the hypothesis, and confirmed various matters that had been unclear until that point. These included the fact that the peak ring comprised granite (a rock found deep within the Earth) rather than typical sea floor rock, which had been shocked, melted, and ejected to the surface in minutes, and evidence of colossal seawater movement directly afterwards from sand deposits. Crucially, the cores also showed a near complete absence of gypsum, a sulfate-containing rock, which would have been vaporized and dispersed as an aerosol into the atmosphere, confirming the presence of a probable link between the impact a
|
https://en.wikipedia.org/wiki/Heptagonal%20number
|
A heptagonal number is a figurate number that is constructed by combining heptagons with ascending size. The n-th heptagonal number is given by the formula
Hn = n(5n − 3) / 2.
The first few heptagonal numbers are:
0, 1, 7, 18, 34, 55, 81, 112, 148, 189, 235, 286, 342, 403, 469, 540, 616, 697, 783, 874, 970, 1071, 1177, 1288, 1404, 1525, 1651, 1782, …
Parity
The parity of heptagonal numbers follows the pattern odd-odd-even-even. Like square numbers, the digital root in base 10 of a heptagonal number can only be 1, 4, 7 or 9. Five times a heptagonal number, plus 1 equals a triangular number.
Additional properties
The heptagonal numbers have several notable formulas:
Sum of reciprocals
A formula for the sum of the reciprocals of the heptagonal numbers is given by:
with golden ratio .
Heptagonal roots
In analogy to the square root of x, one can calculate the heptagonal root of x, meaning the number of terms in the sequence up to and including x.
The heptagonal root of x is given by the formula
n = (√(40x + 9) + 3) / 10,
which is obtained by using the quadratic formula to solve x = n(5n − 3) / 2 for its unique positive root n.
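The formulas above are easy to check numerically. A short sketch (function names are ours):

```python
import math

def heptagonal(n):
    """n-th heptagonal number: n * (5n - 3) / 2."""
    return n * (5 * n - 3) // 2

def heptagonal_root(x):
    """Number of terms up to and including x (positive quadratic root)."""
    return (math.sqrt(40 * x + 9) + 3) / 10

# Matches the opening terms of the sequence listed above.
assert [heptagonal(n) for n in range(8)] == [0, 1, 7, 18, 34, 55, 81, 112]

# The root inverts the sequence: H(7) = 112, so the root of 112 is 7.
assert heptagonal_root(112) == 7.0

# The identity stated earlier: five times a heptagonal number, plus 1,
# is triangular (here 5*7 + 1 = 36 = T(8)).
assert 5 * heptagonal(2) + 1 == 36
```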
|
https://en.wikipedia.org/wiki/Underground%20World%20Home
|
Underground World Home was a home designed for the 1964 New York World's Fair by architect Jay Swayze. The home-exhibit was in Flushing Meadow Park in Queens and appeared to be a luxury bomb shelter which was marketed as secure and safe.
History
The 1964 World's Fair featured the home which was designed by architect Jay Swayze. Swayze was a promoter of underground living and he lived in his own underground bunker-house in Plainview, Texas, which was called the Atomitat.
Fairgoers could tour the home for the price of one dollar. It was a large underground bunker and it was unveiled in 1964 during the Cold War. It was also directly following the Cuban Missile Crisis when Americans were concerned about nuclear war. The company "Underground World Homes" was owned by Avon investor and millionaire Girard B. Henderson. He was convinced that the U.S. and the U.S.S.R. would escalate their conflict and it would lead to nuclear war. In addition to the underground home, there was also an exhibit sponsored by Henderson called: "Why Live Underground?" In the brochure for the Underground World Home, it was touted as a place of comfort, security and safety.
The exhibitors at the fair were required to dismantle their exhibits after the fair. The home was 3 feet below surface and it is thought that because of the cost involved with removal, the home was simply buried and lost to history. The architect Swayze wrote a book Underground Gardens & Homes: The Best of Two Worlds, Above and Below but he did not say in the book what happened to the home. One researcher searched New York Public Library World's Fair records and found paperwork which said that demolition of the home was completed on 15 March 1966.
Design
The home had ten rooms and . It featured air conditioning and backlit murals to create the illusion of outdoor lighting. The murals were hand painted by a Texas-based artist who was flown to New York. The architect (Swayze) cited solid research to convince fairgoers tha
|
https://en.wikipedia.org/wiki/Potassium%20acetate
|
Potassium acetate (also called potassium ethanoate), (CH3COOK) is the potassium salt of acetic acid. It is a hygroscopic solid at room temperature.
Preparation
It can be prepared by treating a potassium-containing base such as potassium hydroxide or potassium carbonate with acetic acid:
CH3COOH + KOH → CH3COOK + H2O
This sort of reaction is known as an acid-base neutralization reaction.
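The 1:1 mole ratio of this neutralization lends itself to a quick back-of-the-envelope calculation. A sketch using approximate molar masses (the function names are ours):

```python
# Approximate molar masses (g/mol).
M_ACETIC_ACID = 60.05   # CH3COOH
M_KOH = 56.11
M_KOAC = 98.14          # CH3COOK

def koh_needed(grams_acetic_acid):
    """Grams of KOH to neutralize a given mass of acetic acid (1:1 moles)."""
    return grams_acetic_acid / M_ACETIC_ACID * M_KOH

def acetate_yield(grams_acetic_acid):
    """Theoretical yield of potassium acetate, in grams."""
    return grams_acetic_acid / M_ACETIC_ACID * M_KOAC

# One mole of acid (60.05 g) takes one mole of KOH (56.11 g)
# and yields one mole of potassium acetate (98.14 g).
assert abs(koh_needed(60.05) - 56.11) < 1e-9
assert abs(acetate_yield(60.05) - 98.14) < 1e-9
```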
The sesquihydrate in water solution (CH3COOK·1½H2O) begins to form semihydrate at 41.3 °C.
Applications
Deicing
Potassium acetate (as a substitute for calcium chloride or magnesium chloride) can be used as a deicer to remove ice or prevent its formation. It offers the advantage of being less aggressive on soils and much less corrosive: for this reason, it is preferred for airport runways although it is more expensive.
Fire extinguishing
Potassium acetate is the extinguishing agent used in Class K fire extinguishers because of its ability to cool and form a crust over burning oils.
Food additive
Potassium acetate is used in processed foods as a preservative and acidity regulator. In the European Union, it is labeled by the E number E261; it is also approved for usage in the USA, Australia, and New Zealand. Potassium hydrogen diacetate (CAS #) with formula KH(OOCCH3)2 is a related food additive with the same E number as potassium acetate.
Medicine and biochemistry
In medicine, potassium acetate is used as part of electrolyte replacement protocols in the treatment of diabetic ketoacidosis because of its ability to break down to bicarbonate to help neutralize the acidotic state.
In molecular biology, potassium acetate is used to precipitate dodecyl sulfate (DS) and DS-bound proteins, and as the salt in the ethanol precipitation of DNA.
Potassium acetate is used in mixtures applied for tissue preservation, fixation, and mummification. Most museums today use a formaldehyde-based method recommended by Kaiserling in 1897 which contains potassium acetate. This process was used to soak Lenin's corpse.
Use
|
https://en.wikipedia.org/wiki/Taeuber%20Paradox
|
The Taeuber Paradox is a paradox in demography, which results from two seemingly contradictory expectations given a population-wide decrease in mortality, e.g. from curing or reducing the mortality of a disease in a population. The two expectations are:
Since the disease would have otherwise caused some deaths, there should be fewer deaths if the disease is cured than in the world where the disease is not cured
Since everyone dies eventually, there must in the long run be the same number of deaths, and the deaths will be redistributed among the remaining causes.
The paradox was named after Conrad Taeuber, a sociologist and demographer.
Resolution
The paradox is resolved by noting that the life expectancy of the population will increase when the disease is cured, leading to a temporary decrease in the overall death rate before deaths are reapportioned among other causes. Thus, curing a disease will not cause an overall decrease in population mortality, but can improve mortality in certain groups (e.g. at certain ages) within a population, or even across all groups (e.g. all ages) within the populations. Comparing two populations with the same overall mortality while one has lower mortality in each subgroup is an example of Simpson's paradox.
In the special case where the force of mortality is reduced by a constant fraction X, then the increase in life expectancy can be estimated as X * H * e, where e is the life expectancy before the reduction in mortality and H is estimated as (2 - e / a), where a is the stationary age of the population. As an example, if cancer is responsible for 10% of all deaths at all ages and were suddenly cured, in a population with an expected lifespan of 75 years and an average age of 50 (giving an estimated H of 1/2), we would estimate life expectancy to increase by only 3.75 years (5% of the original life expectancy, rather than the larger 10% increase you might expect intuitively).
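The worked example above can be reproduced in a few lines (a sketch; the variable names are ours):

```python
def life_expectancy_gain(x, e, a):
    """Estimated gain for a fractional mortality reduction x.

    e: life expectancy before the reduction
    a: stationary (average) age of the population
    """
    h = 2 - e / a          # the estimate of H given in the text
    return x * h * e       # gain is approximately X * H * e

# Curing a cause of 10% of deaths, with e = 75 years and a = 50 years:
gain = life_expectancy_gain(x=0.10, e=75, a=50)
print(f"estimated gain: {gain} years")  # 3.75, not the naive 7.5
```

The naive intuition (10% of 75 years, i.e. 7.5 years) overstates the gain because the deaths averted are redistributed among the remaining causes.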
As of 2005, H was estimated to be around 0.2 and
|
https://en.wikipedia.org/wiki/Euler%20sequence
|
In mathematics, the Euler sequence is a particular exact sequence of sheaves on n-dimensional projective space over a ring. It shows that the sheaf of relative differentials is stably isomorphic to an (n + 1)-fold sum of the dual of the Serre twisting sheaf.
The Euler sequence generalizes to that of a projective bundle as well as a Grassmann bundle (see the latter article for this generalization.)
Statement
Let P^n_A be the n-dimensional projective space over a commutative ring A, and let Ω = Ω^1_{P^n/A} be the sheaf of 1-differentials on this space. The Euler sequence is the following exact sequence of sheaves on P^n_A:
0 → Ω → O(−1)^(n+1) → O → 0.
The sequence can be constructed by defining a homomorphism S(−1)^(n+1) → S of graded modules, sending the i-th basis element to the coordinate x_i in degree 1, surjective in degrees ≥ 1, and checking that locally on the n + 1 standard charts, the kernel is isomorphic to the relative differential module.
Geometric interpretation
We assume that A is a field k.
The exact sequence above is dual to the sequence
0 → O → O(1)^(n+1) → T → 0,
where T is the tangent sheaf of P^n.
Let us explain the coordinate-free version of this sequence, on P(V) for an (n + 1)-dimensional vector space V over k:
0 → O → V ⊗ O(1) → T → 0.
This sequence is most easily understood by interpreting sections of the central term as 1-homogeneous vector fields on V. One such section, the Euler vector field, associates to each point of the variety the tangent vector . This vector field is radial in the sense that it vanishes uniformly on 0-homogeneous functions, that is, the functions that are invariant by homothetic rescaling, or "independent of the radial coordinate".
A function (defined on some open set) on gives rise by pull-back to a 0-homogeneous function on V (again partially defined). We obtain 1-homogeneous vector fields by multiplying the Euler vector field by such functions. This is the definition of the first map, and its injectivity is immediate.
The second map is related to the notion of derivation, equivalent to that of vector field. Recall that a vector field on an open set U of the projective space can be defined as a derivat
|
https://en.wikipedia.org/wiki/ZXDC
|
Zinc finger, X-linked, duplicated family member C (ZXDC) is a human CIITA-binding protein involved in the activation of major histocompatibility complex (MHC) class I and II. For binding to occur, ZXDC must form an oligomeric complex with another copy of itself or with ZXDA, a related protein. ZXDC is activated by sumoylation, a post-translational modification. ZXDC plays a role in controlling immunological responses, cancer formation and progression, and cell proliferation, differentiation, and survival.
History
While searching for novel transcription factors involved in the development of Xenopus laevis, Sakaue and colleagues discovered ZXDC (ZXD family zinc finger C); the discovery was reported in a research article published in 1998. They found a cDNA clone called "Xfin" that encodes a protein with two C2H2 zinc finger domains.
Later research established the interspecies conservation of the Xfin protein, and the human equivalent was given the name ZXDC. The protein was found to be broadly expressed in human tissues, and its expression was regulated by a number of variables, including hypoxia, estrogen, and cytokines.
Structure
ZXDC is a protein composed of 455 amino acids in humans, with a molecular weight of approximately 49.9 kDa.
Near its C-terminus, the protein has two C2H2-type zinc finger domains that are likely to be involved in DNA binding and transcriptional control. A flexible linker region that separates the zinc fingers may provide more adaptability in binding to various DNA sequences.
Other domains found in ZXDC include a proline-rich region close to the N-terminus that may be important in protein-protein interactions and a central region that is projected to be intrinsically disordered. ZXDC also contains numerous other features in addition to the zinc fingers. Although the exact purpose of these domains is not yet known, they might assist in mediating interactions with other proteins or controlli
|
https://en.wikipedia.org/wiki/Methuselah-like%20proteins
|
The Methuselah-like proteins are a family of G protein-coupled receptors found in insects that play a role in aging and reproduction. Antagonizing these receptors can extend the life span of the animal and make it more resistant to free radicals and starvation, but also reduce reproduction and increase cold sensitivity. The age dependent decline in olfaction and motor function is unaffected.
Methuselah-like proteins are related to G protein-coupled receptors of the secretin receptor family.
|
https://en.wikipedia.org/wiki/Monique%20Teillaud
|
Monique Teillaud is a French researcher in computational geometry at the French Institute for Research in Computer Science and Automation (INRIA) in Nancy, France. She moved to Nancy in 2014 from a different INRIA center in Sophia Antipolis, where she was one of the developers of CGAL, a software library of computational geometry algorithms.
Teillaud graduated from the École Normale Supérieure de Jeunes Filles in 1985. She then held a position at the École nationale supérieure d'informatique pour l'industrie et l'entreprise before moving to INRIA in 1989. She completed her Ph.D. in 1991 at Paris-Sud University under the supervision of Jean-Daniel Boissonnat.
She was the 2008 program chair of the Symposium on Computational Geometry.
She is also the author or editor of two books in computational geometry:
Towards Dynamic Randomized Algorithms in Computational Geometry (Lecture Notes in Computer Science 758, Springer, 1993)
Effective Computational Geometry for Curves and Surfaces (edited with Boissonnat, Springer, 2007)
|
https://en.wikipedia.org/wiki/Universal%20hashing
|
In mathematics and computing, universal hashing (in a randomized algorithm or data structure) refers to selecting a hash function at random from a family of hash functions with a certain mathematical property (see definition below). This guarantees a low number of collisions in expectation, even if the data is chosen by an adversary. Many universal families are known (for hashing integers, vectors, strings), and their evaluation is often very efficient. Universal hashing has numerous uses in computer science, for example in implementations of hash tables, randomized algorithms, and cryptography.
Introduction
Assume we want to map keys from some universe U into m bins (labelled {0, ..., m − 1}). The algorithm will have to handle some data set S of keys, which is not known in advance. Usually, the goal of hashing is to obtain a low number of collisions (keys from S that land in the same bin). A deterministic hash function cannot offer any guarantee in an adversarial setting if the universe is sufficiently large, since the adversary may choose S to be precisely the preimage of a bin. This means that all data keys land in the same bin, making hashing useless. Furthermore, a deterministic hash function does not allow for rehashing: sometimes the input data turns out to be bad for the hash function (e.g. there are too many collisions), so one would like to change the hash function.
The solution to these problems is to pick a function randomly from a family of hash functions. A family of functions H = {h : U → {0, ..., m − 1}} is called a universal family if, for every pair of distinct keys x ≠ y in U, Pr[h(x) = h(y)] ≤ 1/m, where the probability is taken over the random choice of h from H.
In other words, any two different keys of the universe collide with probability at most 1/m when the hash function is drawn uniformly at random from H. This is exactly the probability of collision we would expect if the hash function assigned truly random hash codes to every key.
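As an illustration, the classic multiply-mod-prime construction of Carter and Wegman gives a universal family. This is a sketch; the prime, bin count, and key values below are illustrative choices:

```python
import random

# Carter–Wegman family:  h_{a,b}(x) = ((a*x + b) mod p) mod m,
# with p a prime larger than any key, 1 <= a < p, 0 <= b < p.
P = 2_147_483_647   # Mersenne prime 2^31 - 1, assumed larger than all keys
M = 64              # number of bins

def random_hash(rng):
    """Draw one hash function uniformly at random from the family."""
    a = rng.randrange(1, P)
    b = rng.randrange(0, P)
    return lambda x: ((a * x + b) % P) % M

# Empirically check the universal property Pr[h(x) = h(y)] <= ~1/M
# for one fixed pair of distinct keys.
rng = random.Random(0)
x, y = 12345, 67890
trials = 20_000
collisions = 0
for _ in range(trials):
    h = random_hash(rng)
    if h(x) == h(y):
        collisions += 1
print(collisions / trials)  # close to 1/64 ≈ 0.0156
```

Because a and b are chosen after the adversary fixes the keys, no fixed data set can force many collisions.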
Sometimes, the definition is relaxed by a constant factor, only requiring collision probability O(1/m) rather than exactly 1/m. This concept was introduced by Carter and Wegman in 1977, and has found numerous applications in computer
|
https://en.wikipedia.org/wiki/Music%20stand
|
A music stand is a pedestal or elevated rack designed to hold sheets of music in position for reading. Most music stands for orchestral, chamber music or solo orchestra-family instruments (violin, oboe, trumpet, etc.) can be raised or lowered to accommodate seated or standing performers, or performers of different heights. Many types of keyboard instruments have a built-in or removable music rack or stand where sheet music can be placed. Music stands enable musicians to read sheet music or scores while playing an instrument or conducting, as the stand leaves the hands free. For choirs, singers typically hold their music in a folder, and singers performing solo recitals or opera performances typically memorize the lyrics and melodies. Some singers use stands, such as lounge singers and wedding vocalists who have a repertoire of hundreds of songs, which makes remembering all of the verses difficult.
There is evidence of music stands from China as early as 200 BC. They did not appear in Europe until much later, as most musicians played from memory or improvised. In the 16th century, playing music with a group in one's home became popular, and music was printed for amateurs' use. This music was typically laid down on a table or other flat surface in front of the instrumentalists.
Beginning in the 17th century, some amateur musicians used table-top music stands, which were the first kind of music stand in Europe.
A few are still used today.
Floor-standing music stands were not developed in the West until the 17th century. Such music stands were common by 1730, at least in France.
Types
There are various types of music stand for different purposes and intended users. Folding stands collapse, which makes them easily portable. Folding stands are typically used by amateur musicians to practice and at rehearsals and performances. Professional musicians are more likely to limit their use of folding stands to rehearsals held outside of normal performance venu
|
https://en.wikipedia.org/wiki/Catalan%20surface
|
In geometry, a Catalan surface, named after the Belgian mathematician Eugène Charles Catalan, is a ruled surface all of whose rulings are parallel to a fixed plane.
Equations
The vector equation of a Catalan surface is given by
r = s(u) + v L(u),
where s(u) is the directrix (a space curve) and L(u) is the unit vector of the ruling at parameter u. All the vectors L(u) are parallel to the same plane, called the directrix plane of the surface. This can be characterized by the condition that the mixed product [L(u), L′(u), L″(u)] = 0.
Writing s(u) = (x(u), y(u), z(u)) and L(u) = (l1(u), l2(u), l3(u)), the parametric equations of the Catalan surface are
X = x(u) + v l1(u),  Y = y(u) + v l2(u),  Z = z(u) + v l3(u).
Special cases
If all the rulings of a Catalan surface intersect a fixed line, then the surface is called a conoid.
Catalan proved that the helicoid and the plane were the only ruled minimal surfaces.
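As an illustrative numerical check, the helicoid can be written as a Catalan surface with directrix plane z = 0; the parametrization below is chosen for this sketch:

```python
from math import cos, sin

# The helicoid as a Catalan surface r(u, v) = s(u) + v * L(u), with
# directrix s(u) = (0, 0, u) and ruling direction L(u) = (cos u, sin u, 0).
def s(u): return (0.0, 0.0, u)
def L(u): return (cos(u), sin(u), 0.0)

def r(u, v):
    su, lu = s(u), L(u)
    return tuple(su[i] + v * lu[i] for i in range(3))

# Every ruling direction L(u) has zero z-component, so all rulings are
# parallel to the plane z = 0 — the directrix plane.
assert all(L(0.1 * k)[2] == 0.0 for k in range(100))

# Numerical check of the mixed product [L, L', L''] = 0 (central differences).
def deriv(f, u, h=1e-5):
    return tuple((f(u + h)[i] - f(u - h)[i]) / (2 * h) for i in range(3))

def det3(a, b, c):
    return (a[0] * (b[1] * c[2] - b[2] * c[1])
            - a[1] * (b[0] * c[2] - b[2] * c[0])
            + a[2] * (b[0] * c[1] - b[1] * c[0]))

u = 0.7
Lp = deriv(L, u)
Lpp = deriv(lambda t: deriv(L, t), u)
print(abs(det3(L(u), Lp, Lpp)))  # 0.0: the rulings stay parallel to one plane
```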
See also
Generalized helicoid
|
https://en.wikipedia.org/wiki/Polyelectrolyte
|
Polyelectrolytes are polymers whose repeating units bear an electrolyte group. Polycations and polyanions are polyelectrolytes. These groups dissociate in aqueous solutions (water), making the polymers charged. Polyelectrolyte properties are thus similar to both electrolytes (salts) and polymers (high molecular weight compounds) and are sometimes called polysalts. Like salts, their solutions are electrically conductive. Like polymers, their solutions are often viscous. Charged molecular chains, commonly present in soft matter systems, play a fundamental role in determining structure, stability and the interactions of various molecular assemblies. Theoretical approaches to describing their statistical properties differ profoundly from those of their electrically neutral counterparts, while technological and industrial fields exploit their unique properties. Many biological molecules are polyelectrolytes. For instance, polypeptides, glycosaminoglycans, and DNA are polyelectrolytes. Both natural and synthetic polyelectrolytes are used in a variety of industries.
Charge
Acids are classified as either weak or strong (and bases similarly may be either weak or strong). Similarly, polyelectrolytes can be divided into "weak" and "strong" types. A "strong" polyelectrolyte is one that dissociates completely in solution for most reasonable pH values. A "weak" polyelectrolyte, by contrast, has a dissociation constant (pKa or pKb) in the range of ~2 to ~10, meaning that it will be partially dissociated at intermediate pH. Thus, weak polyelectrolytes are not fully charged in solution, and moreover their fractional charge can be modified by changing the solution pH, counter-ion concentration, or ionic strength.
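As a sketch of the pH dependence described above, the ideal degree of dissociation of a weak acidic group follows the Henderson–Hasselbalch relation; this ignores genuinely polyelectrolyte effects such as charge regulation along the chain, and the pKa value is illustrative:

```python
def fraction_charged(ph, pka):
    """Ideal degree of dissociation (fractional charge) of a weak
    acidic group with the given pKa, via Henderson–Hasselbalch."""
    return 1.0 / (1.0 + 10 ** (pka - ph))

# A weak polyacid group with pKa ~ 4.5 (e.g. a carboxylic acid):
for ph in (2, 4.5, 7, 10):
    print(ph, round(fraction_charged(ph, 4.5), 3))
# At pH = pKa the group is half-dissociated; well above pKa it is
# essentially fully charged, well below pKa essentially uncharged.
```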
The physical properties of polyelectrolyte solutions are usually strongly affected by this degree of charging. Since the polyelectrolyte dissociation releases counter-ions, this necessarily affects the solution's ionic strength, and therefore the Debye length. This in tu
|
https://en.wikipedia.org/wiki/Plate%20count%20agar
|
Plate count agar (PCA), also called standard methods agar (SMA), is a microbiological growth medium commonly used to assess or to monitor "total" or viable bacterial growth of a sample. PCA is not a selective medium.
The total number of living aerobic bacteria can be determined using plate count agar (PCA), which provides a substrate for bacteria to grow on. The medium contains casein, which provides nitrogen, carbon, amino acids, vitamins and minerals to aid the growth of the organism. Yeast extract is the source of vitamins, particularly of the B-group. Glucose is the fermentable carbohydrate, and agar is the solidifying agent. This is a non-selective medium, and the bacteria are counted as colony-forming units per gram (CFU/g) in solid samples and per millilitre (CFU/mL) in liquid samples.
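The CFU bookkeeping mentioned above can be sketched as follows; the colony count, dilution, and plated volume are illustrative values:

```python
def cfu_per_ml(colonies, dilution, volume_ml):
    """Colony-forming units per mL of the original sample:
    colonies counted / (dilution factor x volume plated in mL)."""
    return colonies / (dilution * volume_ml)

# 150 colonies on a plate inoculated with 1 mL of a 10^-4 dilution:
print(round(cfu_per_ml(150, 1e-4, 1.0)))  # 1500000 CFU/mL in the original sample
```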
Pour Plate Technique
The pour plate technique is the typical technique used to prepare PCAs. Here, the inoculum is added to the molten agar before pouring the plate. The molten agar is cooled to about 45 degrees Celsius and is poured using a sterile method into a petri dish containing a specific diluted sample. From here, the plates are rotated to ensure the samples are uniformly mixing with the agar. Incubation of the plates is the next step and is carried out for about 3 days at 20 to 30 degrees Celsius.
Composition of Plate Count Agar
Ingredients (g/L):
- Enzymatic digest of casein/tryptone: 5.0
- Yeast extract: 2.5
- Glucose: 1.0
- Agar: 15.0
Benefits of using PCA:
- Easy to perform
- A larger sample volume can be plated than with the surface-spread method, allowing detection of lower microbial concentrations
- The agar surface does not have to be pre-dried
- The number of microbes/mL in a specimen can be determined
- Previously prepared plates are not needed
|
https://en.wikipedia.org/wiki/Chown
|
The command chown, an abbreviation of change owner, is used on Unix and Unix-like operating systems to change the owner of file system files and directories. Unprivileged (regular) users who wish to change the group membership of a file that they own may use chgrp.
The ownership of any file in the system may only be altered by a super-user. A user cannot give away ownership of a file, even when the user owns it. Similarly, only a member of a group can change a file's group ID to that group.
The command is available as a separate package for Microsoft Windows as part of the UnxUtils collection of native Win32 ports of common GNU Unix-like utilities. The command has also been ported to the IBM i operating system.
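Typical invocations look like the following sketch (file names are illustrative, and changing the owner to a different user generally requires superuser privileges):

```shell
touch demo.txt
chown "$(id -un)" demo.txt            # set the owner (here: ourselves)
chgrp "$(id -gn)" demo.txt            # a regular user switching among own groups
chown "$(id -un):$(id -gn)" demo.txt  # set owner and group in one call
ls -l demo.txt
rm demo.txt
```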
See also
chgrp
chmod
takeown
|
https://en.wikipedia.org/wiki/Horticultural%20flora
|
A horticultural flora, also known as a garden flora, is a plant identification aid structured in the same way as a native plants flora. It serves the same purpose: to facilitate plant identification; however, it only includes plants that are under cultivation as ornamental plants growing within the prescribed climate zone or region. Traditionally published in book form, often in several volumes, such floras are increasingly likely to be produced as websites or CD ROMs.
Scope and contents
Horticultural floras include both cultigens (plants deliberately altered in some way by human activity) and those wild plants brought directly into cultivation that do not have cultigen names. They might also include colour images and useful information specific to the zone or region including:
historical details about outstanding public and private cultivated plant collections
exceptional trees (age, history, rarity, size etc.)
prominent nurserymen and plant breeders
references to the taxonomic and other literature on the plant groups
easy "spotting" or "field" characters useful for quick identification
notes on ecology (especially the potential of plants to naturalise and become invasive)
horticultural history of introduction
conservation.
Uses
Written by a professional plant taxonomist or plantsperson, a horticultural flora assists clarification of scientific and common names, the identification of plant characteristics that occur in cultivated plants that are additional to those in wild counterparts, and descriptions of cultivars.
Although horticultural floras may include a range of food plants, their emphasis is generally on ornamental plants and so these floras are sometimes referred to as "garden floras". Increasingly they provide data for sustainable landscaping, such as:
drought tolerance and irrigation needs
food sources of native and migratory bird and butterfly species
companion planting
invasive species notation
local habitat restoration aspects.
Publis
|
https://en.wikipedia.org/wiki/Thales%27s%20theorem
|
In geometry, Thales's theorem states that if A, B, and C are distinct points on a circle where the line AC is a diameter, the angle ∠ABC is a right angle. Thales's theorem is a special case of the inscribed angle theorem and is mentioned and proved as part of the 31st proposition in the third book of Euclid's Elements. It is generally attributed to Thales of Miletus, but it is sometimes attributed to Pythagoras.
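A quick numerical illustration (coordinates chosen for this sketch): with A and C at the ends of a diameter, the angle at any other point B of the circle is right, i.e. the vectors BA and BC are perpendicular.

```python
from math import cos, sin, pi

R = 1.0
A, C = (-R, 0.0), (R, 0.0)           # AC is a diameter of the unit circle

for k in range(1, 12):
    t = k * pi / 12                  # avoid t = 0 and t = pi: B must differ from A, C
    B = (R * cos(t), R * sin(t))
    BA = (A[0] - B[0], A[1] - B[1])
    BC = (C[0] - B[0], C[1] - B[1])
    dot = BA[0] * BC[0] + BA[1] * BC[1]
    assert abs(dot) < 1e-12          # right angle at B
print("angle at B is 90 degrees for every sampled B")
```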
History
Babylonian mathematicians knew this for special cases before Greek mathematicians proved it.
Thales of Miletus (early 6th century BC) is traditionally credited with proving the theorem; however, even by the 5th century BC there was nothing extant of Thales' writing, and inventions and ideas were attributed to men of wisdom such as Thales and Pythagoras by later doxographers based on hearsay and speculation. Reference to Thales was made by Proclus (5th century AD), and by Diogenes Laërtius (3rd century AD) documenting Pamphila's (1st century AD) statement that Thales "was the first to inscribe in a circle a right-angle triangle".
Thales was claimed to have traveled to Egypt and Babylonia, where he is supposed to have learned about geometry and astronomy and thence brought their knowledge to the Greeks, along the way inventing the concept of geometric proof and proving various geometric theorems. However, there is no direct evidence for any of these claims, and they were most likely invented as speculative rationalizations. Modern scholars believe that Greek deductive geometry as found in Euclid's Elements was not developed until the 4th century BC, and any geometric knowledge Thales may have had would have been observational.
The theorem appears in Book II of Euclid's Elements () as proposition 31: "In a circle the angle in the semicircle is right, that in a greater segment less than a right angle, and that in a less segment greater than a right angle; further the angle of the greater segment is greater than a right angle, and the angle of the less segment i
|
https://en.wikipedia.org/wiki/Generalized%20uncertainty%20principle
|
The generalized uncertainty principle is a collective name for hypothetical generalizations of the uncertainty principle that take into account the influence of gravitational interactions on the maximum achievable accuracy of measuring physical quantities.
From the hypothesis of the generalized uncertainty principle it follows that there is an absolute minimum of the uncertainty of the position of any particle of the order of the Planck length. Therefore, it plays an important role in the theories of quantum gravity and string theory, which assume the existence of a minimum length scale.
The simplest version of the generalized uncertainty principle can be arrived at in a Heisenberg-microscope thought experiment for measuring the coordinates of a particle, by taking into account the gravitational interaction between the photon and the observed particle.
In a system of units in which $\hbar = c = 1$, it is given by an inequality relating the uncertainty of the coordinate $\Delta x$, the uncertainty of the momentum $\Delta p$, and the Newtonian gravitational constant $G$:
$$\Delta x \;\gtrsim\; \frac{1}{\Delta p} + G\,\Delta p .$$
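Taking the commonly quoted form of this inequality (in units with $\hbar = c = 1$) and minimizing its right-hand side over $\Delta p$ makes the minimal length explicit; this is a standard manipulation, sketched here:

```latex
\Delta x \;\gtrsim\; \frac{1}{\Delta p} + G\,\Delta p,
\qquad
\frac{d}{d(\Delta p)}\!\left(\frac{1}{\Delta p} + G\,\Delta p\right) = 0
\;\Rightarrow\; \Delta p = \frac{1}{\sqrt{G}},
\qquad
\Delta x_{\min} \;=\; 2\sqrt{G} \;=\; 2\,\ell_P ,
```

since the Planck length in these units is $\ell_P = \sqrt{G}$; this is the absolute minimum of position uncertainty, of the order of the Planck length, mentioned above.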
Other versions of the generalized uncertainty principle can be arrived at in a thought experiment to measure the area of the visible horizon of a black hole or in a thought experiment with microscopic black holes.
Observable consequences
If the quantum corrections introduced by the generalized uncertainty principle become significant at energies exceeding the energy of the electroweak interaction, then a consequence of its existence should be a change in the thermodynamic properties of compact stars with two different components: the radii of such compact stars should be smaller than those predicted by other theories.
See also
Uncertainty principle
|
https://en.wikipedia.org/wiki/Nuclear%20resonance%20fluorescence
|
Nuclear resonance fluorescence (NRF) is a nuclear process in which a nucleus absorbs and emits high-energy photons called gamma rays. NRF interactions typically take place above 1 MeV, and most NRF experiments target heavy nuclei such as uranium and thorium.
This process is used for scanning cargo for contraband. It is far more effective than using x-rays alone, because x-rays can only reveal the shape of the item in question. With nuclear resonance fluorescence it is possible to determine the molecular structure and thus distinguish between salt and cocaine without even opening the container. (from National Geographic Magazine, February 2018, article: They Are Watching Us, by Robert Draper)
Mode of interaction
NRF reactions are the result of nuclear absorption and subsequent emission of high-energy photons (gamma rays). As a gamma ray strikes the nucleus, the nucleus becomes excited (that is, the nuclear system as a quantum mechanical ensemble is put into a state with a higher energy). Much like electronic excitation, the nucleus will decay toward its ground state, releasing a high-energy photon at a number of possible, discrete energies. Thus, NRF can be quantified using spectroscopy. Nuclei can be identified by the distinct pattern of NRF emission peaks, although NRF analysis is much less straightforward than typical electronic emissions.
As the energy of incident photons increases, the average spacing between nuclear energy levels decreases. For sufficiently energetic nuclei (i.e. incident photons of over ~1 MeV), the mean spacing between energy levels may be lower than the mean width of each NRF resonance. At this point, determinations of peak spacing cannot be analytical, and must rely on specialized applications of the statistical methods of signal processing.
There is a related phenomenon at the level of electron orbitals. A photon, generally in a lower energy range, can be absorbed by displacing an orbital electron, and then a new photon having the same energy
|
https://en.wikipedia.org/wiki/Systems%20medicine
|
Systems medicine is an interdisciplinary field of study that looks at the systems of the human body as part of an integrated whole, incorporating biochemical, physiological, and environment interactions. Systems medicine draws on systems science and systems biology, and considers complex interactions within the human body in light of a patient's genomics, behavior and environment.
The earliest uses of the term systems medicine appeared in 1992, in an article on systems medicine and pharmacology by T. Kamada.
An important topic in systems medicine and systems biomedicine is the development of computational models that describe disease progression and the effect of therapeutic interventions.
More recent approaches include the redefinition of disease phenotypes based on common mechanisms rather than symptoms. These then provide therapeutic targets, including network pharmacology and drug repurposing. Since 2018, there has been a dedicated scientific journal, Systems Medicine.
Fundamental schools of systems medicine
Essentially, the issues dealt with by systems medicine can be addressed in two basic ways, molecular (MSM) and organismal systems medicine (OSM):
Molecular systems medicine (MSM)
This approach relies on omics technologies (genomics, proteomics, transcriptomics, phenomics, metabolomics etc.) and tries to understand physiological processes and the evolution of disease in a bottom-up strategy, i.e. by simulating, synthesising and integrating the description of molecular processes to deliver an explanation of an organ system or even the organism in its whole.
Organismal systems medicine (OSM)
This branch of systems medicine, going back to the traditions of Ludwig von Bertalanffy's systems theory and biological cybernetics is a top-down strategy that starts with the description of large, complex processing structures (i.e. neural networks, feedback loops and other motifs) and tries to find sufficient and necessary conditions for the corresponding functional o
|
https://en.wikipedia.org/wiki/Authenticated%20Key%20Exchange
|
Authenticated Key Exchange (AKE) or Authenticated Key Agreement is the exchange of a session key in a key exchange protocol which also authenticates the identities of the parties involved in the exchange.
Transport Layer Security (TLS), integral to securing HTTPS connections, is perhaps the most widely deployed AKE protocol.
|
https://en.wikipedia.org/wiki/Cray%20APP
|
The Cray APP (Attached Parallel Processor) was a parallel computer sold by Cray Research from 1992 onwards. It was based on the Intel i860 microprocessor and could be configured with up to 84 processors. The design was based on "computational nodes" of 12 processors interconnected by a shared bus, with multiple nodes connected to each other, memory and I/O nodes via an 8×8 crossbar switch.
The APP was marketed as a "matrix co-processor" system and required a SPARC-based host system to operate, such as the Cray S-MP. Connection to the host system was via VMEbus or HiPPI. A fully configured APP had a peak performance of 6.7 (single-precision) gigaflops.
The APP was originally designed by FPS Computing as the FPS MCP-784. FPS were acquired by Cray Research in 1991, becoming Cray Research Superservers Inc., and the MCP-784 was relaunched by Cray in 1992 as the APP.
|
https://en.wikipedia.org/wiki/Actor%20%28UML%29
|
An actor in the Unified Modeling Language (UML) "specifies a role played by a user or any other system that interacts with the subject."
"An Actor models a type of role played by an entity that interacts with the subject (e.g., by exchanging signals and data),
but which is external to the subject."
"Actors may represent roles played by human users, external hardware, or other subjects. Actors do not necessarily represent specific physical entities but merely particular facets (i.e., “roles”) of some entities that are relevant to the specification of its associated use cases. A single physical instance may play the role of several different actors and a given actor may be played by multiple different instances."
UML 2 does not permit associations between Actors. The use of generalization/specialization relationship between actors is useful in modeling overlapping behaviours between actors and does not violate this constraint since a generalization relation is not a type of association.
Actors interact with use cases.
|
https://en.wikipedia.org/wiki/Fremitus
|
Fremitus is a vibration transmitted through the body. In common medical usage, it usually refers to assessment of the lungs by either the vibration intensity felt on the chest wall (tactile fremitus) and/or heard by a stethoscope on the chest wall with certain spoken words (vocal fremitus), although there are several other types.
Types
Vocal fremitus
When a person speaks, the vocal cords create vibrations (vocal fremitus) in the tracheobronchial tree and through the lungs and chest wall, where they can be felt (tactile fremitus). This is usually assessed with the healthcare provider placing the flat of their palms on the chest wall and then asking a patient to repeat a phrase containing low-frequency vowels such as "blue balloons" or "toys for tots" (the original diphthong used was the German word neunundneunzig but the translation to the English 'ninety-nine' was a higher-frequency diphthong and thus not as effective in eliciting fremitus). An increase in tactile fremitus indicates denser or inflamed lung tissue, which can be caused by diseases such as pneumonia. A decrease suggests air or fluid in the pleural spaces or a decrease in lung tissue density, which can be caused by diseases such as chronic obstructive pulmonary disease or asthma.
Pleural fremitus
Pleural fremitus is a palpable vibration of the wall of the thorax caused by friction between the parietal and visceral pleura of the lungs. See pleural friction rub for the auditory analog of this sign.
Dental fremitus
Fremitus appears when teeth move. This can be assessed by feeling and looking at teeth when the mouth is opened and closed.
Periodontal fremitus
Periodontal fremitus occurs in either of the alveolar bones when an individual sustains trauma from occlusion. It is a result of teeth exhibiting at least slight mobility rubbing against the adjacent walls of their sockets, the volume of which has been expanded ever so slightly by inflammatory responses, bone resorption or both. As a test to
|
https://en.wikipedia.org/wiki/Crazy%20Ray
|
Wilford Jones (January 22, 1931 – March 17, 2007), better known as Crazy Ray, was the unofficial mascot of the Dallas Cowboys. By some accounts, he was also the team's original mascot, who attended almost every home game since the team's inception.
History
He started selling pennants at games in 1962 and quickly endeared himself to the Cowboys fans with his western outfits, magic tricks, trademark whistle, and galloping along with a hobby horse.
He was never officially employed by the Cowboys, but was given a Special Parking Pass and All-access for home games. He was also known as the "Whistling Vendor" at Dallas Tornado soccer games, Texas Rangers baseball games, and at the Dallas Black Hawks minor-league professional ice hockey team at State Fair Coliseum. He could be seen at the State Fair of Texas and various concerts entertaining the public.
Crazy Ray also had a special friendship with his rival Zema Williams (Chief Zee), the Washington Redskins' unofficial mascot. In some photographs, Crazy Ray and Chief Zee were seen pretending to fight with each other during games.
Ray died on March 17, 2007, from heart disease and diabetes, aged 76, in Dallas. He missed only three games in 46 seasons.
Honors
Crazy Ray has a place in the Visa Hall of Fans Exhibit at the Pro Football Hall of Fame as he was selected as the fan choice for the Dallas Cowboys.
See also
The Barrel Man
Chief Zee
Fireman Ed
Hogettes
License Plate Guy
|
https://en.wikipedia.org/wiki/E.%20B.%20Babcock
|
Ernest Brown Babcock (July 10, 1877 – December 8, 1954) was an American plant geneticist who pioneered the understanding of plant evolution in terms of genetics. He is particularly known for seeking to understand, by field investigations and extensive experiments, the entire polyploid apomictic genus Crepis, in which he recognized 196 species. He published more than 100 articles and books explaining plant genetics, including the seminal textbook (with Roy Elwood Clausen) Genetics in Relation to Agriculture. He instructed Marion Elizabeth Stilwell Cave.
|
https://en.wikipedia.org/wiki/C1orf122
|
C1orf122 (Chromosome 1 open reading frame 122) is a gene in the human genome that encodes the cytosolic protein ALAESM. ALAESM is present in all tissue cells and highly up-regulated in the brain, spinal cord, adrenal gland and kidney. This gene can be expressed at up to 2.5 times the level of the average gene in its highly expressed tissues. Although the function of C1orf122 is unknown, it is predicted to localize to the mitochondria.
Gene
C1orf122 is located on chromosome 1 at 1p34.3. The gene is 1,665 nucleotides long, covering 37,808,405 to 37,809,454. It contains three exons with boundaries between amino acids 12 and 13, and amino acids 79 and 80.
mRNA
C1orf122 has two isoforms. Variant one contains 1,329 nucleotides with three exons. Variant two contains 1,226 nucleotides with three exons. Variant two lacks an in-frame portion of the 5' coding region, resulting in a shorter N-terminus.
Protein
ALAESM has a molecular weight of approximately 11.0 kDa and an isoelectric point of 6.29. It is a cytosolic protein without a transmembrane domain.
Predicted post-translational modifications
There are a few predicted kinase phosphorylation sites in this protein. Position 7 is predicted to be phosphorylated by CK1, VRK, and VRK2. Position 10 is predicted to be phosphorylated by CRK1, VRK, PKC, PLK, and AGC. Position 82 has a possible phosphorylation by TKL and MLK. Position 94 is predicted to be phosphorylated by PKC, AGC, MAPK, NEK, CMGC and IKK.
ALAESM does have a few predicted reactive sites. It is predicted to be palmitoylated at position 10, allowing the covalent attachment of fatty acids. It is predicted to undergo glycation at positions 21 and 101 which attaches a sugar molecule to the amino acid. It is predicted to have a nuclear export signal strand from position 55-64 which signals the protein to leave the nucleus. It likely can be glycosylated at position 82 and 94 which attaches a carbohydrate to the amino acid. It is predicted to be phosphorylated by an unspecified actor
|
https://en.wikipedia.org/wiki/Licensed%20to%20Kill%3F
|
Licensed to Kill? The Nuclear Regulatory Commission and the Shoreham Power Plant, a 1998 book by Joan Aron, presents the first detailed case study of how an activist public and elected officials of New York state opposed the Shoreham Nuclear Power Plant on Long Island. The book explains that nuclear power faltered when "public concerns about health, safety, and the environment superseded other interests about national security or energy supplies".
Aron argues that the Shoreham closure resulted from the collapse of public trust in the Nuclear Regulatory Commission and the entire nuclear industry. For Aron, the unwillingness of the Long Island Lighting Company (LILCO) management to consider true public interest in the debate resulted in "the loss of the goodwill of its customers". Also, the willingness of LILCO to press on with plans for Shoreham despite changes in the economics of nuclear power and market demand "reflected a basic failure of foresight".
See also
Anti-nuclear movement in the United States
List of anti-nuclear protests in the United States
List of books about nuclear issues
Richard Kessel
|
https://en.wikipedia.org/wiki/Turn%20%28biochemistry%29
|
A turn is an element of secondary structure in proteins where the polypeptide chain reverses its overall direction.
Definition
According to one definition, a turn is a structural motif where the Cα atoms of two residues separated by a few (usually 1 to 5) peptide bonds are close (less than ). The proximity of the terminal Cα atoms often correlates with formation of an inter main chain hydrogen bond between the corresponding residues. Such hydrogen bonding is the basis for the original, perhaps better known, turn definition. In many cases, but not all, the hydrogen-bonding and Cα-distance definitions are equivalent.
Types of turns
Turns are classified according to the separation between the two end residues:
In an α-turn the end residues are separated by four peptide bonds (i → i ± 4).
In a β-turn (the most common form), by three bonds (i → i ± 3).
In a γ-turn, by two bonds (i → i ± 2).
In a δ-turn, by one bond (i → i ± 1), which is sterically unlikely.
In a π-turn, by five bonds (i → i ± 5).
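The separation classes above, together with the dihedral-angle convention for distinguishing classical from inverse γ-turns, can be captured in a short sketch (the function names and the sign test are illustrative, not from any standard toolkit):

```python
# Map the end-residue separation (in peptide bonds) to the turn class.
TURN_CLASSES = {1: "delta", 2: "gamma", 3: "beta", 4: "alpha", 5: "pi"}

def turn_class(separation):
    """Return the turn name for an i -> i +/- n end-residue separation."""
    return TURN_CLASSES.get(separation)

def gamma_form(phi, psi):
    """Distinguish the classical (roughly (75, -65)) from the inverse
    (roughly (-75, 65)) gamma-turn by the signs of the central residue's
    backbone dihedral angles."""
    if phi > 0 and psi < 0:
        return "classical"
    if phi < 0 and psi > 0:
        return "inverse"
    return "unassigned"
```

For example, `turn_class(3)` returns `"beta"`, the most common turn type.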
Turns are classified by their backbone dihedral angles (see Ramachandran plot). A turn can be converted into its inverse turn (in which the main chain atoms have opposite chirality) by changing the sign on its dihedral angles. (The inverse turn is not a true enantiomer since the Cα atom chirality is maintained.) Thus, the γ-turn has two forms, a classical form with (φ, ψ) dihedral angles of roughly (75°, −65°) and an inverse form with dihedral angles (−75°, 65°). At least eight forms of the beta turn occur, varying in whether a cis isomer of a peptide bond is involved and on the dihedral angles of the central two residues. The classical and inverse β-turns are distinguished with a prime, e.g., type I and type I′ beta turns. If an i → i + 3 hydrogen bond is taken as the criterion for turns, the four categories of Venkatachalam (I, II, II′, I′) suffice to describe all possible beta turns. All four occur frequently in proteins but I is most common, followed by II, I′ a
|
https://en.wikipedia.org/wiki/Viability%20theory
|
Viability theory is an area of mathematics that studies the evolution of dynamical systems under constraints on the system state. It was developed to formalize problems arising in the study of various natural and social phenomena, and has close ties to the theories of optimal control and set-valued analysis.
Motivation
Many systems, organizations, and networks arising in biology and the social sciences do not evolve in a deterministic way, nor even in a stochastic way. Rather they evolve with a Darwinian flavor, driven by random fluctuations but yet constrained to remain "viable" by their environment. Viability theory started in 1976 by translating mathematically the title of the book Chance and Necessity by Jacques Monod to the differential inclusion for chance and
for necessity. The differential inclusion is a type of “evolutionary engine” (called an evolutionary system) associating with any initial state x a subset of evolutions starting at x. The system is said to be deterministic if this set contains one and only one evolution, and contingent otherwise.
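As a toy numerical illustration (the velocity set, constraint interval, and step size below are invented for the example), a differential inclusion x' ∈ F(x) can be simulated by selecting, at each step, any available velocity that keeps the state inside the environment K:

```python
def viable_trajectory(x0, velocities, in_K, dt=0.1, steps=50):
    """Euler simulation of x' in a fixed velocity set, always selecting
    a velocity whose step keeps the state viable (inside K).
    Returns the trajectory, or None if no viable evolution exists."""
    traj = [x0]
    x = x0
    for _ in range(steps):
        for v in velocities:        # contingent: several evolutions possible
            if in_K(x + v * dt):    # necessity: the constraint is never violated
                x = x + v * dt
                break
        else:
            return None             # no viable choice: the state must leave K
        traj.append(x)
    return traj

# Environment K = [-1, 1]; velocity set F = {+1, 0, -1}.
traj = viable_trajectory(0.9, [1.0, 0.0, -1.0], lambda x: -1.0 <= x <= 1.0)
```

Even though the first-choice velocity always pushes the state upward, the constraint forces a different selection at the boundary, and every simulated evolution remains in K.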
Necessity is the requirement that at each instant the evolution is viable (remains) in the environment K described by viability constraints, a word encompassing polysemous concepts such as stability, confinement, homeostasis, adaptation, etc., expressing the idea that some variables must obey constraints (representing physical, social, biological and economic constraints, etc.) that can never be violated. So viability theory starts as the confrontation of evolutionary systems governing evolutions and viability constraints that such evolutions must obey. They share common features:
Systems designed by human brains, in the sense that agents, actors, decision-makers act on the evolutionary system, as in engineering (control theory and differential games)
Systems observed by human brains, more difficult to understand since there is no consensus on the actors piloting the variable, who, at least, may b
|
https://en.wikipedia.org/wiki/Disk%20encryption%20theory
|
Disk encryption is a special case of data at rest protection when the storage medium is a sector-addressable device (e.g., a hard disk). This article presents cryptographic aspects of the problem. For an overview, see disk encryption. For discussion of different software packages and hardware devices devoted to this problem, see disk encryption software and disk encryption hardware.
Problem definition
Disk encryption methods aim to provide three distinct properties:
The data on the disk should remain confidential.
Data retrieval and storage should both be fast operations, no matter where on the disk the data is stored.
The encryption method should not waste disk space (i.e., the amount of storage used for encrypted data should not be significantly larger than the size of plaintext).
The first property requires defining an adversary from whom the data is being kept confidential. The strongest adversaries studied in the field of disk encryption have these abilities:
they can read the raw contents of the disk at any time;
they can request the disk to encrypt and store arbitrary files of their choosing;
and they can modify unused sectors on the disk and then request their decryption.
A method provides good confidentiality if the only information such an adversary can determine over time is whether the data in a sector has or has not changed since the last time they looked.
The second property requires dividing the disk into several sectors, usually 512 bytes (4,096 bits) long, which are encrypted and decrypted independently of each other. In turn, if the data is to stay confidential, the encryption method must be tweakable; no two sectors should be processed in exactly the same way. Otherwise, the adversary could decrypt any sector of the disk by copying it to an unused sector of the disk and requesting its decryption.
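The effect of tweaking by sector number can be illustrated with a toy construction (this sketch is NOT a secure cipher — it is a keystream scheme of exactly the kind the stream-cipher caveat below warns about, and real designs use tweakable block-cipher modes such as XTS — but it shows why per-sector variation defeats the copy-to-another-sector attack):

```python
import hashlib

def _keystream(key, sector, length):
    """Toy keystream: SHA-256 of (key, sector number, counter) blocks.
    Illustrative only; the sector number acts as the tweak."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + sector.to_bytes(8, "big")
                              + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

def xor_sector(key, sector, data):
    """Encrypt or decrypt one sector (XOR is its own inverse)."""
    ks = _keystream(key, sector, len(data))
    return bytes(a ^ b for a, b in zip(data, ks))

key = b"example key"
pt = b"same plaintext in two sectors".ljust(512, b"\0")
ct0 = xor_sector(key, 0, pt)
ct1 = xor_sector(key, 1, pt)
```

Because the sector number enters the keystream, identical plaintext encrypts differently in different sectors, and ciphertext copied from sector 0 and decrypted as sector 1 yields garbage rather than the original plaintext.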
The third property is generally non-controversial. However, it indirectly prohibits the use of stream ciphers, since stream ciphers require, for the
|
https://en.wikipedia.org/wiki/Stable%20module%20category
|
In representation theory, the stable module category is a category in which projectives are "factored out."
Definition
Let R be a ring. For two modules M and N over R, define Hom(M, N) to be the set of R-linear maps from M to N modulo the relation that f ~ g if f − g factors through a projective module. The stable module category is defined by setting the objects to be the R-modules, and the morphisms to be these equivalence classes.
Given a module M, let P be a projective module with a surjection p : P → M. Then set Ω(M) to be the kernel of p. Suppose we are given a morphism f : M → N and a surjection q : Q → N where Q is projective. Then one can lift f to a map P → Q which maps Ω(M) into Ω(N). This gives a well-defined functor Ω from the stable module category to itself.
For certain rings, such as Frobenius algebras, Ω is an equivalence of categories. In this case, the inverse Ω−1 can be defined as follows. Given M, find an injective module I with an inclusion i : M → I. Then Ω−1(M) is defined to be the cokernel of i. A case of particular interest is when the ring R is a group algebra.
The functor Ω−1 can even be defined on the module category of a general ring (without factoring out projectives), as the cokernel of the injective envelope. It need not be true in this case that the functor Ω−1 is actually an inverse to Ω. One important property of the stable module category is that it allows defining the Ω functor for general rings. When R is perfect (or M is finitely generated and R is semiperfect), then Ω(M) can be defined as the kernel of the projective cover, giving a functor on the module category. However, in general projective covers need not exist, and so passing to the stable module category is necessary.
Connections with cohomology
Now we suppose that R = kG is a group algebra for some field k and some group G. One can show that there exist isomorphisms
for every positive integer n. The group cohomology of a representation M is given by where k has a trivial G-action, so in this way the stable module category gives a nat
|
https://en.wikipedia.org/wiki/Indicator%20diagram
|
An indicator diagram is a chart used to measure the thermal, or cylinder, performance of reciprocating steam and internal combustion engines and compressors. An indicator chart records the pressure in the cylinder versus the volume swept by the piston, throughout the two or four strokes of the piston which constitute the engine, or compressor, cycle. The indicator diagram is used to calculate the work done and the power produced in an engine cylinder or used in a compressor cylinder.
The indicator diagram was developed by James Watt and his employee John Southern to help understand how to improve the efficiency of steam engines. In 1796, Southern developed the simple, but critical, technique to generate the diagram by fixing a board so as to move with the piston, thereby tracing the "volume" axis, while a pencil, attached to a pressure gauge, moved at right angles to the piston, tracing "pressure".
The gauge enabled Watt to calculate the work done by the steam while ensuring that its pressure had dropped to zero by the end of the stroke, thereby ensuring that all useful energy had been extracted. The total work could be calculated from the area between the "volume" axis and the traced line. The latter fact had been realised by Davies Gilbert as early as 1792 and used by Jonathan Hornblower in litigation against Watt over patents on various designs. Daniel Bernoulli had also had the insight about how to calculate work.
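The "area between the traced line and the volume axis" can be computed numerically: for a closed P–V loop sampled as a polygon, summing trapezoids of P dV around the loop gives the indicated work. The rectangular cycle below is invented for illustration:

```python
def indicated_work(loop):
    """Indicated work W = closed integral of P dV for a polygonal P-V loop.
    `loop` is a list of (V, P) vertices traversed in cycle order."""
    w = 0.0
    n = len(loop)
    for i in range(n):
        v1, p1 = loop[i]
        v2, p2 = loop[(i + 1) % n]
        w += (v2 - v1) * (p1 + p2) / 2.0   # trapezoid rule on each edge
    return w

# Idealised rectangular cycle: expand at 2e5 Pa, contract at 1e5 Pa,
# between 1.0 and 2.0 litres (1e-3 to 2e-3 m^3).
cycle = [(1e-3, 2e5), (2e-3, 2e5), (2e-3, 1e5), (1e-3, 1e5)]
work = indicated_work(cycle)   # (2e5 - 1e5) * 1e-3 = 100 J
```

Traversing the loop in the opposite direction flips the sign, distinguishing work produced by an engine from work absorbed by a compressor.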
Watt used the diagram to make radical improvements to steam engine performance and long kept it a trade secret. Though it was made public in a letter to the Quarterly Journal of Science in 1822, it remained somewhat obscure; John Farey, Jr. only learned of it on seeing it used, probably by Watt's men, when he visited Russia in 1826.
In 1834, Émile Clapeyron used a diagram of pressure against volume to illustrate and elucidate the Carnot cycle, elevating it to a central position in the study of thermodynamics.
Later instruments for steam engine (i
|
https://en.wikipedia.org/wiki/Growing%20region
|
A growing region, also known as a farming region or agricultural region, refers to a geographic area characterised by specific climate factors, soil conditions and agricultural practices that are favourable for the cultivation and production of crops, plants, or livestock. Depending on the environmental characteristics, a growing region can be dominated by a single crop or crop combination. For example, the American Corn Belt, the Philippine coconut landscape and the Malayan rubber landscape are growing regions dominated by a particular crop. On the other hand, Queensland and New South Wales in Australia, characterised by high inherent soil fertility and high seasonal rainfall, have highly diverse crop production including wheat, barley, oilseeds, sorghum and maize.
Most crops are cultivated not in one place only, but in several distinct regions in diverse parts of the world. Cultivation in these areas may be enabled by a large-scale regional climate, or by a unique microclimate.
Growing regions, because of the need for climate consistency, are usually oriented along a general latitude, and in the United States these are often called "belts".
The growing region of a traditional staple crop often has a strong cultural cohesiveness.
Examples
The need for growing fodder has also historically limited livestock to certain agricultural regions.
In viticulture, American Viticultural Areas (AVAs) are a specialized geographic type of growing region; European wine appellations with Protected Geographical Status are another.
|
https://en.wikipedia.org/wiki/Prevalent%20and%20shy%20sets
|
In mathematics, the notions of prevalence and shyness are notions of "almost everywhere" and "measure zero" that are well-suited to the study of infinite-dimensional spaces and make use of the translation-invariant Lebesgue measure on finite-dimensional real spaces. The term "shy" was suggested by the American mathematician John Milnor.
Definitions
Prevalence and shyness
Let V be a real topological vector space and let S be a Borel-measurable subset of V. S is said to be prevalent if there exists a finite-dimensional subspace P of V, called the probe set, such that for all v in V we have v + p in S for λ_P-almost all p in P, where λ_P denotes the dim(P)-dimensional Lebesgue measure on P. Put another way, for every v in V, Lebesgue-almost every point of the hyperplane v + P lies in S.
A non-Borel subset of V is said to be prevalent if it contains a prevalent Borel subset.
A Borel subset of V is said to be shy if its complement is prevalent; a non-Borel subset of V is said to be shy if it is contained within a shy Borel subset.
An alternative, and slightly more general, definition is to define a set S to be shy if there exists a transverse measure for S (other than the trivial measure).
Local prevalence and shyness
A subset S of V is said to be locally shy if every point v in V has a neighbourhood whose intersection with S is a shy set. S is said to be locally prevalent if its complement is locally shy.
Theorems involving prevalence and shyness
If S is shy, then so is every subset of S and every translate of S.
Every shy Borel set admits a transverse measure that is finite and has compact support. Furthermore, this measure can be chosen so that its support has arbitrarily small diameter.
Any finite or countable union of shy sets is also shy. Analogously, any countable intersection of prevalent sets is prevalent.
Any shy set is also locally shy. If V is a separable space, then every locally shy subset of V is also shy.
A subset S of n-dimensional Euclidean space is shy if and only if it has Lebesgue measure zero.
Any prevalent subs
|
https://en.wikipedia.org/wiki/P3%20peptide
|
p3 peptide, also known as amyloid β-peptide (Aβ)17–40/42, is the peptide resulting from α- and γ-secretase cleavage of the amyloid precursor protein (APP). It is known to be the major constituent of the diffuse plaques observed in Alzheimer's disease (AD) brains and of the pre-amyloid plaques in people affected by Down syndrome. However, the p3 peptide's role in these diseases is not yet truly known.
Structure
There is little information related to the p3 peptide's composition and structure, and moreover most of it concerns its role in Alzheimer's disease. p3 can be found as a 24- or 26-residue peptide, depending on the site of γ-secretase cleavage. The 26-residue peptide presents the following sequence:
VFFAEDVGSNKGAIIGLMVGGVVIAT
In relation to the secondary structure of the p3 peptide, it is thought that after cleavage by the α- and γ-secretases and extraction from the membrane, it would convert quickly from the α-helix conformation it has as part of the APPsα sequence to a β-hairpin structure. This highly hydrophobic monomer would then rapidly evolve into fibrils with no soluble intermediate forms, the ones related to amyloid structure. The main reason why p3 does not aggregate into amyloidogenic forms while Aβ does is that the N-terminal domain Aβ1–16, which is present in Aβ's sequence but not in p3's, is known to protect the hydrophobic core of the oligomers from being dissolved by the aqueous medium. So p3 peptide oligomers would likely expose hydrophobic residues to water and would be less stable. As a consequence, p3 peptide structural determinants can assemble into fibrils, but no oligomeric forms have been identified. That is why the p3 peptide represents the benign form of amyloid.
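A quick sanity check on the sequence quoted above (the residue count, plus a crude hydrophobic fraction; the set of residues treated as hydrophobic is a common convention assumed here, not taken from the source):

```python
P3_26 = "VFFAEDVGSNKGAIIGLMVGGVVIAT"   # the 26-residue p3 variant quoted above

# Assumed hydrophobic set: Ala, Val, Ile, Leu, Met, Phe, Trp, Cys.
HYDROPHOBIC = set("AVILMFWC")

length = len(P3_26)
hydrophobic_fraction = sum(aa in HYDROPHOBIC for aa in P3_26) / length
```

More than half of the residues fall in the hydrophobic set, consistent with the description of p3 as a highly hydrophobic monomer.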
Properties
Energy plays a very important role in p3 peptides. While Aβ models have a strong negative energy, p3 oligomeric models have a positive one. Another characteristic that must be pointed out is that p3 peptides h
|
https://en.wikipedia.org/wiki/Complex%20manifold
|
In differential geometry and complex geometry, a complex manifold is a manifold with an atlas of charts to the open unit disc in , such that the transition maps are holomorphic.
The term complex manifold is variously used to mean a complex manifold in the sense above (which can be specified as an integrable complex manifold), and an almost complex manifold.
Implications of complex structure
Since holomorphic functions are much more rigid than smooth functions, the theories of smooth and complex manifolds have very different flavors: compact complex manifolds are much closer to algebraic varieties than to differentiable manifolds.
For example, the Whitney embedding theorem tells us that every smooth n-dimensional manifold can be embedded as a smooth submanifold of R2n, whereas it is "rare" for a complex manifold to have a holomorphic embedding into Cn. Consider for example any compact connected complex manifold M: any holomorphic function on it is constant by the maximum modulus principle. Now if we had a holomorphic embedding of M into Cn, then the coordinate functions of Cn would restrict to nonconstant holomorphic functions on M, contradicting compactness, except in the case that M is just a point. Complex manifolds that can be embedded in Cn are called Stein manifolds and form a very special class of manifolds including, for example, smooth complex affine algebraic varieties.
The classification of complex manifolds is much more subtle than that of differentiable manifolds. For example, while in dimensions other than four, a given topological manifold has at most finitely many smooth structures, a topological manifold supporting a complex structure can and often does support uncountably many complex structures. Riemann surfaces, two dimensional manifolds equipped with a complex structure, which are topologically classified by the genus, are an important example of this phenomenon. The set of complex structures on a given orientable surface, modulo biholomorphi
|
https://en.wikipedia.org/wiki/Ren%C3%A9%20Dagron
|
René Prudent Patrice Dagron (17 March 1817 – 13 June 1900) was a French photographer and inventor. He was born in Aillières-Beauvoir, Sarthe, France.
On 21 June 1859, Dagron was granted the first microfilm patent in history. Dagron is also considered the inventor of the miniature photographic jewels () known as Stanhopes because a modified Stanhope lens is used to view the microscopic picture attached to the lens. He is buried at Ivry Cemetery, Ivry-sur-Seine.
Early life
He grew up in rural France but left for Paris at an early age. In Paris he distinguished himself in the study of physics and chemistry. As a chemistry student, Dagron became interested in daguerreotypes when the process of producing them was announced on 19 August 1839. After graduation, Dagron established a photographic portrait studio in Paris. While in Paris, Dagron became familiar with the collodion wet plate and collodio-albumen dry plate processes, which he would later adapt to his microfilm techniques.
Stanhope Viewers
In 1857 John Benjamin Dancer's microfilms were exhibited in Paris for the first time and Dagron immediately saw their potential. He used the concept of microphotography to produce simple microfilm viewers which he would later manufacture and incorporate into novelties and souvenir products as well as other applications. Soon after that, Dagron encountered problems with imitators and people infringing on his patents.
On 21 June 1859, Dagron was granted the first microfilm patent in history and in the same year he introduced his photographic miniature Stanhope toys and jewels during the International Exhibition in Paris.
In 1862, Dagron exhibited his miniature Stanhope viewers during London's International Fair. In the London Fair he received an honourable mention and presented a set of microfilms to Queen Victoria. The same year Dagron published his book: "Cylindres photo-microscopiques montes et non-montes sur bijoux, brevetes en France et a l'etranger". (Translated as:
|
https://en.wikipedia.org/wiki/Jackiw%E2%80%93Teitelboim%20gravity
|
The R = T model, also known as Jackiw–Teitelboim gravity (named after Roman Jackiw and Claudio Teitelboim), is a theory of gravity with dilaton coupling in one spatial and one time dimension. It should not be confused with the CGHS model or Liouville gravity. The action is given by
The metric in this case is more amenable to analytical solutions than the general 3+1D case though a canonical reduction for the latter has recently been obtained. For example, in 1+1D, the metric for the case of two mutually interacting bodies can be solved exactly in terms of the Lambert W function, even with an additional electromagnetic field.
|
https://en.wikipedia.org/wiki/Optic%20stalk
|
The optic vesicles project toward the sides of the head, and the peripheral part of each expands to form a hollow bulb, while the proximal part remains narrow and constitutes the optic stalk.
Closure of the choroidal fissure in the optic stalk occurs during the seventh week of development. The former optic stalk is then called the optic nerve. In short, the optic stalks are the structures that precede the optic nerves embryologically.
|
https://en.wikipedia.org/wiki/Transposons%20as%20a%20genetic%20tool
|
Transposons are semi-parasitic DNA sequences which can replicate and spread through the host's genome. They can be harnessed as a genetic tool for analysis of gene and protein function. The use of transposons is well-developed in Drosophila (in which P elements are most commonly used) and in Thale cress (Arabidopsis thaliana) and bacteria such as Escherichia coli (E. coli).
Currently, transposons can be used in genetic research and recombinant genetic engineering for insertional mutagenesis. Insertional mutagenesis is when transposons function as vectors to help remove and integrate genetic sequences. Given their relatively simple design and inherent ability to move DNA sequences, transposons are well suited to transducing genetic material, making them ideal genetic tools.
Signature-Tagging Mutagenesis
Signature-tagging mutagenesis (also known as STM) is a technique focused on using transposable element insertion to determine the phenotype of a locus in an organism's genome. While genetic sequencing techniques can determine the genotype of a genome, they cannot determine the function or phenotypic expression of gene sequences. STM can bypass this issue by mutating a locus, causing it to form a new phenotype; by comparing the observed phenotypic expressions of the mutated and unaltered locus, one can deduce the phenotypic expression of the locus.
In STM, specially tagged transposons are inserted into an organism, such as a bacterium, and randomly integrated into the host genome. In theory, the modified mutant organism should express the altered gene, thus altering the phenotype. If a new phenotype is observed, the genome is sequenced and searched for tagged transposons. If the site of transposon integration is found, then the locus may be responsible for expressing the phenotypes.
There have been many studies conducted using transposon-based STM, most notably with the P elements in Drosophila. P elements are transposons originally described in Drosophila melanogaste
|
https://en.wikipedia.org/wiki/Formazine
|
Formazine (formazin) is a heterocyclic polymer produced by reaction of hexamethylenetetramine with hydrazine sulfate.
The hexamethylenetetramine tetrahedral cage-like structure, similar to adamantane, serves as a molecular building block to form a three-dimensional polymeric network.
Formazine is very poorly soluble in water, and when synthesized directly in aqueous solution by simply mixing its two highly soluble precursors, it forms small colloidal particles. These organic colloids are responsible for the scattering of light by formazine suspensions in all directions. Optical properties of colloidal suspensions depend on the suspended particles' size and size distribution. Because formazine is a stable synthetic material with uniform particle size, it is commonly used as a standard to calibrate turbidimeters and to control the reproducibility of their measurements. Formazine use was first proposed by Kingsbury et al. (1926) for the rapid standardization of turbidity measurements of albumin in urine. The unit is called the Formazin Turbidity Unit (FTU). A suspension of 1.25 mg/L hydrazine sulfate and 12.5 mg/L hexamethylenetetramine in water has a turbidity of one FTU.
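Taking the stated calibration point at face value (1.25 mg/L hydrazine sulfate with 12.5 mg/L hexamethylenetetramine gives 1 FTU) and assuming, as is standard for formazine calibration standards, that turbidity scales linearly with concentration, a small helper sketch:

```python
def ftu_from_hydrazine_sulfate(mg_per_l):
    """Turbidity in FTU, assuming linear scaling from the stated
    calibration point of 1.25 mg/L hydrazine sulfate == 1 FTU."""
    return mg_per_l / 1.25

def stock_volume_ml(c_stock_ftu, c_target_ftu, v_final_ml):
    """Volume of stock suspension needed for a target turbidity
    (simple C1*V1 = C2*V2 dilution)."""
    return c_target_ftu * v_final_ml / c_stock_ftu
```

For instance, preparing 100 mL of a 40 FTU working standard from a 4000 FTU stock requires 1.0 mL of stock, made up to volume with ultrapure water.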
In the United States environmental monitoring the turbidity standard unit is called Nephelometric Turbidity Units (NTU), while the international standard unit is called Formazin Nephelometric Unit (FNU). The most generally applicable unit is Formazin Turbidity Unit (FTU), although different measurement methods can give quite different values as reported in FTU.
Turbidity measurement
For turbidity measurement, a formazine suspension is prepared by mixing solutions of 10 g/L hydrazine sulfate and 100 g/L hexamethylenetetramine with ultrapure water. The resulting solution is left for 24 hours, at 25 °C ±3 °C, for the suspension to develop. This produces a suspension with a turbidity value of 4000 NTU/FAU/FTU/FNU. This is then diluted to a value to suit the instrument range. There is no straightforward
|
https://en.wikipedia.org/wiki/Holin
|
Holins are a diverse group of small proteins produced by dsDNA bacteriophages in order to trigger and control the degradation of the host's cell wall at the end of the lytic cycle. Holins form pores in the host's cell membrane, allowing lysins to reach and degrade peptidoglycan, a component of bacterial cell walls. Holins have been shown to regulate the timing of lysis with great precision. Over 50 unrelated gene families encode holins, making them the most diverse group of proteins with common function. Together with lysins, holins are being studied for their potential use as antibacterial agents.
While canonical holins act by forming large pores, pinholins such as the S protein of lambdoid phage 21 act by forming heptameric channels that depolarize the bacterial membrane. They are associated with SAR endolysins, which remain inactive in the periplasm prior to the depolarization of the membrane.
Viruses that infect eukaryotic cells may use similar channel-forming proteins called viroporins.
Classification
Structure
According to their structure there are three main classes of holins.
Class I holins
Class I holins have three transmembrane domains (TMDs) with the N-terminus in the periplasm and the C-terminus in the cytoplasm. They generally have over 95 residues. Examples of class I holins include the bacteriophage λ S protein (λ holin) and the Staphylococcus aureus phage P68 hol15 protein.
Class II holins
Class II holins have two TMDs, with both the N- and the C-terminus in the cytoplasm. Their number of residues usually falls between 65 and 95. Examples include the S protein from lambdoid phage 21 and the Hol3626 protein from Clostridium perfringens bacteriophage Ф3626.
Class III holins
Unlike class I and class II holins, which are composed of hydrophobic transmembrane helices, class III holins form a single highly hydrophilic TMD, with the N-terminus in the cytoplasm and the C-terminus in the periplasm. The first class III holin to be characterized was
|
https://en.wikipedia.org/wiki/BMX%20Simulator
|
BMX Simulator is a racing video game designed by Richard Darling and released by Codemasters in 1986 for the Commodore 64. It is part of a series of games that includes ATV Simulator, Grand Prix Simulator, Professional Ski Simulator, and a sequel: Professional BMX Simulator. BMX Simulator was ported to the Amiga, Atari 8-bit family, Atari ST, Amstrad CPC, MSX, ZX Spectrum, Commodore Plus/4 and Commodore 16.
Gameplay
BMX Simulator is an overhead race game similar to the arcade video game Super Sprint. The player must race against another player, or the computer, around a series of seven different bicycle motocross (BMX) tracks. There is also a time limit to be beaten. Only two cyclists can compete in each race. The race can be viewed in slow-motion instant replay afterward.
Reception
Sinclair User called it "a classy conversion from the Commodore original" and a "full price game in budget clothing".
ZX Computing said it was fun from start to finish, and rated it a Monster Hit.
Legacy
BMX Simulator was followed by a sequel in 1988, Professional BMX Simulator, by the Oliver Twins. It was later rereleased as BMX Simulator 2.
|
https://en.wikipedia.org/wiki/Particle%20beam%20cooling
|
Particle beam cooling is the process of improving the quality of particle beams produced by particle accelerators, by reducing the emittance. Techniques for particle beam cooling include:
Stochastic cooling
Electron cooling
Ionization cooling
Laser cooling
Radiation damping
Buffer-gas cooling within RF quadrupoles
|
https://en.wikipedia.org/wiki/Landrace
|
A landrace is a domesticated, locally adapted, often traditional variety of a species of animal or plant that has developed over time, through adaptation to its natural and cultural environment of agriculture and pastoralism, and due to isolation from other populations of the species. Landraces are distinct from cultivars and from standard breeds.
A significant proportion of farmers around the world grow landrace crops, and most plant landraces are associated with traditional agricultural systems. Landraces of many crops have probably been grown for millennia. Increasing reliance upon modern plant cultivars that are bred to be uniform has led to a reduction in biodiversity, because most of the genetic diversity of domesticated plant species lies in landraces and other traditionally used varieties. Some farmers using scientifically improved varieties also continue to raise landraces for agronomic reasons that include better adaptation to the local environment, lower fertilizer requirements, lower cost, and better disease resistance. Cultural and market preferences for landraces include culinary uses and product attributes such as texture, color, or ease of use.
Plant landraces have been the subject of more academic research than animal landraces, and the majority of academic literature about landraces is focused on botany in agriculture, not animal husbandry. Animal landraces are distinct from ancestral wild species of modern animal stock, and are also distinct from separate species or subspecies derived from the same ancestor as modern domestic stock. Not all landraces derive from wild or ancient animal stock; in some cases, notably dogs and horses, domestic animals have escaped in sufficient numbers in an area to breed feral populations that form new landraces through evolutionary pressure.
Characteristics
There are differences between authoritative sources on the specific criteria which describe landraces, although there is broad consensus about the existence and utility of the cla
|
https://en.wikipedia.org/wiki/Linear%20grammar
|
In computer science, a linear grammar is a context-free grammar that has at most one nonterminal in the right-hand side of each of its productions.
A linear language is a language generated by some linear grammar.
Example
An example of a linear grammar is G with N = {S}, Σ = {a, b}, P with start symbol S and rules
S → aSb
S → ε
It generates the language { a^n b^n : n ≥ 0 }.
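The derivations of G can be mechanised directly; a sketch for this specific grammar (the helper names are ours):

```python
def derive(n):
    """Derive a string of G by applying S -> aSb n times, then S -> epsilon."""
    sentential = "S"
    for _ in range(n):
        sentential = sentential.replace("S", "aSb")
    return sentential.replace("S", "")

def in_language(s):
    """Membership test for the generated language: a block of a's
    followed by an equally long block of b's."""
    n = len(s) // 2
    return len(s) % 2 == 0 and s == "a" * n + "b" * n
```

Every string produced by `derive` passes the membership test, and strings with unbalanced blocks are rejected.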
Relationship with regular grammars
Two special types of linear grammars are the following:
the left-linear or left-regular grammars, in which all rules are of the form A → αw where α is either empty or a single nonterminal and w is a string of terminals;
the right-linear or right-regular grammars, in which all rules are of the form A → wα where w is a string of terminals and α is either empty or a single nonterminal.
Each of these can describe exactly the regular languages.
A regular grammar is a grammar that is left-linear or right-linear.
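To make the correspondence with regular languages concrete, here is a small illustrative sketch (assuming rules are pre-normalized to the forms A → aB and A → ε) that runs a right-linear grammar directly as an NFA whose states are the nonterminals:

```python
def matches(rules: dict, start: str, s: str) -> bool:
    """Simulate a right-linear grammar as an NFA.

    rules maps each nonterminal to a list of productions, each written as
    (terminal, next_nonterminal) for A -> aB, or ("", None) for A -> ε.
    """
    states = {start}
    for ch in s:
        nxt = set()
        for st in states:
            for (t, n) in rules.get(st, []):
                if t == ch and n is not None:   # follow A -> aB on input a
                    nxt.add(n)
        states = nxt
    # accept if some reachable nonterminal has an ε-production (A -> ε)
    return any((t, n) == ("", None)
               for st in states for (t, n) in rules.get(st, []))
```

For example, the grammar S → aA | ε, A → bS generates (ab)*, and the simulation accepts exactly those strings.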
Observe that by inserting new nonterminals, any linear grammar can be replaced by an equivalent one where some of the rules are left-linear and some are right-linear. For instance, the rules of G above can be replaced with
S → aA
A → Sb
S → ε
However, the requirement that all rules be left-linear (or all rules be right-linear) leads to a strict decrease in the expressive power of linear grammars.
Expressive power
All regular languages are linear; conversely, an example of a linear, non-regular language is { aⁿbⁿ : n ≥ 0 }, as explained above.
All linear languages are context-free; conversely, an example of a context-free, non-linear language is the Dyck language of well-balanced bracket pairs.
Hence, the regular languages are a proper subset of the linear languages, which in turn are a proper subset of the context-free languages.
While regular languages are deterministic, there exist linear languages that are nondeterministic. For example, the language of even-length palindromes on the alphabet of 0 and 1 has the linear grammar S → 0S0 | 1S1 | ε. An arbitrary string of this
|
https://en.wikipedia.org/wiki/Ian%20Wilmut
|
Sir Ian Wilmut (7 July 1944 – 10 September 2023) was a British embryologist and chair of the Scottish Centre for Regenerative Medicine at the University of Edinburgh. He was best known as the leader of the research group that in 1996 first cloned a mammal from an adult somatic cell, a Finnish Dorset lamb named Dolly.
Wilmut was appointed OBE in 1999 for services to embryo development and knighted in the 2008 New Year Honours. He, Keith Campbell and Shinya Yamanaka jointly received the 2008 Shaw Prize for Medicine and Life Sciences for their work on cell differentiation in mammals.
Early life and education
Wilmut was born in Hampton Lucy, Warwickshire, England, on 7 July 1944. Wilmut's father, Leonard Wilmut, was a mathematics teacher who suffered from diabetes for fifty years, which eventually caused him to become blind. The younger Wilmut attended the Boys' High School in Scarborough, where his father taught. His early desire was to embark on a naval career, but he was unable to do so due to his colour blindness. As a schoolboy, Wilmut worked as a farm hand on weekends, which inspired him to study Agriculture at the University of Nottingham.
In 1966, Wilmut spent eight weeks working in the laboratory of Christopher Polge, who is credited with developing the technique of cryopreservation in 1949. The following year Wilmut joined Polge's laboratory to undertake a Doctor of Philosophy degree at the University of Cambridge, from where he graduated in 1971 with a thesis on semen cryopreservation. During this time he was a postgraduate student at Darwin College.
Career and research
After completing his PhD, he was involved in research focusing on gametes and embryogenesis, including working at the Roslin Institute.
Wilmut was the leader of the research group that in 1996 first cloned a mammal, a lamb named Dolly. She died of a respiratory disease in 2003. In 2008 Wilmut announced that he would abandon the technique of somatic cell nuclear transfer by which Dolly
|
https://en.wikipedia.org/wiki/Microgenomates
|
The Microgenomates are a proposed supergroup of bacterial candidate phyla in the Candidate Phyla Radiation.
Organisms from the Microgenomates group have never been cultured in a lab; rather they have only been detected in the environment through genetic sequencing.
The Microgenomates group was originally discovered from sequences retrieved from the Yellowstone National Park hot spring "Obsidian Pool" and named OP11.
The group was later split into the additional bacterial phyla Absconditabacteria (SR1) and Parcubacteria (OD1) and then into over 11 more bacterial phyla, including Curtisbacteria, Daviesbacteria, Levybacteria, Gottesmanbacteria, Woesebacteria, Amesbacteria, Shapirobacteria, Roizmanbacteria, Beckwithbacteria, Collierbacteria, and Pacebacteria.
|
https://en.wikipedia.org/wiki/List%20of%20books%20about%20mushrooms
|
This is a list of published books about mushrooms and mycology, including their history in relation to man, their identification, their usage as food and medicine, and their ecology.
Identification guides
These are larger works that may be hard to take on a hike but help with in depth identification after mushroom hunting.
Europe
These are identification guides relevant only to Europe.
North America
These are identification guides relevant only to North America. Below are sections detailing specific regions of North America, such as the Southeastern United States and the Pacific Northwest.
Alaska
Northeastern United States
These are identification guides relevant to the Northeastern United States.
Midwestern United States
These are identification guides relevant to the Midwestern United States.
Pacific Northwest
These are identification guides focused on mushrooms found in the Pacific Northwest.
Southwestern United States
These are identification guides relevant to the Southwestern United States.
Southeastern United States
These are guides relevant to the Southeastern United States.
Field guides
These are identification guides small enough to take with you while mushroom hunting or on a hike.
Smith, Alexander and Weber, Nancy (1980). The Mushroom Hunter's Field Guide. Ann Arbor, MI: University of Michigan Press.
Russell, Bill (2006). Field Guide to Wild Mushrooms of Pennsylvania and the Mid-Atlantic. University Park, PA: Pennsylvania State University Press. ISBN 0-271-02891-2.
Cultivation
These are books about growing mushrooms and fungiculture.
Fungal biology
These are books about mycology and fungal biology.
Ecology
These are books related to the intersection of fungi and ecology, such as mycoremediation.
Food
These are books that explore mushrooms and fungi from the perspective of food and food science, e.g. books that explore the chemical and nutritional compositions of edible mushrooms, or books of recipes specializing in using wild
|
https://en.wikipedia.org/wiki/Fray%20in%20Magical%20Adventure
|
Fray in Magical Adventure, also known as just Fray (フレイ) and Fray-Xak Epilogue (Gai-den), is a 1990 spin-off "gaiden" (side-story) game in the role-playing video game series Xak, developed and published by the Japanese software developer Micro Cabin. Even though it is directly connected to the more serious Xak storyline, Fray has a less serious tone and a light-hearted, comedic approach to telling its story. It was originally released for the MSX2 and was later ported to several different systems, among them MSX turbo R, PC-9801, PC Engine (as Fray CD), and Game Gear.
Gameplay
Fray is a simple action RPG. The game proceeds with the player's character, Fray, fighting through preset overhead-view maps, shooting opposing monsters, jumping over obstacles, and collecting power-ups and Gold, the game's currency, along the way. At the end of each stage the player fights a boss and then enters a town or safe haven, where new equipment and hit points can be purchased and progress can be saved. Fray advances in power through the items she can equip, such as different rods and shields. Battles take place in real time as Fray and the monsters move around an automatically scrolling vertical map. She has attack and defense ratings, and can switch between different projectile weapon styles as well as use special attacks and healing items.
Plot
Fray features a high fantasy setting where a great war was fought between the benevolent but weakening ancient gods and a demon race, which led to the collapse and eventual mortality of the gods. After this 'War of Sealing', the gods divided the world into three parts: Xak, the world of humans, Oceanity, the world of faeries, and Zekisis, the world of demons. The demon world of Zekisis was tightly sealed from the other two worlds to prevent reentry of the warmongering demon race. Some demons were left behind in Xak, however, and others managed to discover a separate means to enter Xak from Zekisis anyway. (This ancient h
|
https://en.wikipedia.org/wiki/Hirsch%20conjecture
|
In mathematical programming and polyhedral combinatorics, the Hirsch conjecture is the statement that the edge-vertex graph of an n-facet polytope in d-dimensional Euclidean space has diameter no more than n − d. That is, any two vertices of the polytope must be connected to each other by a path of length at most n − d. The conjecture was first put forth in a letter by Warren M. Hirsch to George B. Dantzig in 1957 and was motivated by the analysis of the simplex method in linear programming, as the diameter of a polytope provides a lower bound on the number of steps needed by the simplex method. The conjecture is now known to be false in general.
The Hirsch conjecture was proven for d < 4 and for various special cases, while the best known upper bounds on the diameter are only sub-exponential in n and d. After more than fifty years, a counter-example was announced in May 2010 by Francisco Santos Leal, from the University of Cantabria. The result was presented at the conference 100 Years in Seattle: the mathematics of Klee and Grünbaum and appeared in Annals of Mathematics. Specifically, the paper presented a 43-dimensional polytope of 86 facets with a diameter of more than 43. The counterexample has no direct consequences for the analysis of the simplex method, as it does not rule out the possibility of a larger but still linear or polynomial number of steps.
Various equivalent formulations of the problem had been given, such as the d-step conjecture, which states that the diameter of any 2d-facet polytope in d-dimensional Euclidean space is no more than d; Santos Leal's counterexample also disproves this conjecture.
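For intuition about when the Hirsch bound is tight, consider the d-dimensional cube: it has n = 2d facets, and its edge-vertex graph (the hypercube graph) has diameter exactly d = n − d. The sketch below verifies this by brute-force breadth-first search (illustrative only; the encoding of vertices as bit strings is the standard one):

```python
from collections import deque

def hypercube_diameter(d: int) -> int:
    """Graph diameter of the d-cube's edge-vertex graph via BFS.

    Vertices are d-bit integers; flipping one bit traverses one edge.
    """
    def eccentricity(src: int) -> int:
        dist = {src: 0}
        q = deque([src])
        while q:
            v = q.popleft()
            for i in range(d):
                w = v ^ (1 << i)          # neighbor across coordinate i
                if w not in dist:
                    dist[w] = dist[v] + 1
                    q.append(w)
        return max(dist.values())
    return max(eccentricity(v) for v in range(1 << d))
```

For each d, the computed diameter equals n − d with n = 2d, so the cube meets the Hirsch bound with equality.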
Statement of the conjecture
The graph of a convex polytope P is any graph whose vertices are in bijection with the vertices of P in such a way that any two vertices of the graph are joined by an edge if and only if the two corresponding vertices of P are joined by an edge of the polytope. The diameter of P, denoted δ(P), is the diameter of any one of its graphs. These d
|
https://en.wikipedia.org/wiki/Canned%20tomato
|
Canned tomatoes, or tinned tomatoes, are tomatoes, usually peeled, that are sealed into a can after having been processed by heat.
Variants
Canned tomatoes are available in several different forms. The traditional forms are whole peeled tomatoes, packed in juice or purée, and ground tomatoes, sometimes referred to as "kitchen-ready." Ground tomatoes are not to be confused with purée, which is similar but more cooked. Taste tests indicate that whole tomatoes packed in juice tend to be perceived as fresher-tasting than those packed in purée. Crushed tomatoes, commonly used for pasta sauces, are made by adding ground tomatoes to a heavy medium made from tomato paste. Diced tomatoes have become increasingly common for applications where a chunkier or more substantial product is needed. In recent years, the Petite Diced form (3/8" cut pieces) has become the fastest growing segment of canned tomatoes.
Usage
In areas and situations where in-season, perfectly ripe tomatoes are not available, canned tomatoes are often used as an alternative to prepare dishes such as tomato sauce or pizza. The top uses for canned tomatoes are Italian or pasta sauces, chili, soup, pizza, stew, casseroles, and Mexican cuisine.
Economic aspects
Industrially produced canned tomatoes are internationally a staple product and subject to regular market analysis as well as trade considerations.
Home preservation
Home canned tomatoes may be prepared in a number of ways. However, safety measures need to be taken since improperly canned tomatoes can cause botulism poisoning, whether produced industrially or at home.
Diced tomatoes
"Diced tomatoes" most literally refers to tomatoes that have been diced. In the United States retail environment, however, the term refers to canned chunks of plum tomatoes in tomato juice or tomato purée, sometimes seasoned with basil or garlic. This product is a relatively recent arrival in the processed tomato market, and has become quite popular since its introduction in the
|
https://en.wikipedia.org/wiki/Phyz
|
Phyz (Dax Phyz) is a public domain, 2.5D physics engine with built-in editor and DirectX graphics and sound. In contrast to most other real-time physics engines, it is vertex based and stochastic. Its integrator is based on a SIMD-enabled assembly version of the Mersenne Twister random number generator, instead of traditional LCP or iterative methods, allowing simulation of large numbers of micro objects with Brownian motion and macro effects such as object resonance and deformation.
Description
Purpose
Dax Phyz is used to model and simulate physical phenomena, to animate static graphics, and to create videos, GUI front-ends and games. There is no specified correlation between Phyz and reality.
Features
Deformable and breakable objects (soft body dynamics).
N-body particle simulation.
Rod, stick, pin, slot, rocket, charge, magnet, heat, actuator and custom constraints.
Turing complete, real-time logic components (Phyz Logics).
Explosives.
Collision and break sound effects.
Message-based application programming interface.
Real-time, constraint-aware editing.
Metaballics effects.
Bitmap import.
OpenMP 2.0 support.
Platform availability
Phyz requires Windows with DirectX 9.0c or later, a display adapter with hardware support for DirectX 9, a CPU with full SSE2 support, and 1 GB of free RAM.
The metaballics effects require a GPGPU-capable display adapter.
PhyzLizp
PhyzLizp, included with Phyz, is an external application based on the Lisp programming language (Lizp 4). It can be used to measure and control events in Phyz, and to create Phyz extensions such as graphical interfaces, network gateways, non-linear constraints or games.
Screenshots
Hammer scene (upper left; deformable objects): The hammer's centre of mass is displaced from its rotational axis, creating a torque which keeps the ruler from rotating.
Wedge scene (upper right; breakable objects): How to make an impression.
Yoda scene (lower left; bitmap import, metaballics): 3.446 vertices and
|
https://en.wikipedia.org/wiki/DBASS3/5
|
DBASS3 and DBASS5 in computational biology are databases of new exon boundaries induced by pathogenic mutations in human disease genes.
The database has been used in a large number of studies; Google Scholar has 87 entries for papers using DBASS3, and 80 for papers using DBASS5 including
Vallée MP, Di Sera TL, Nix DA, Paquette AM, Parsons MT, Bell R, Hoffman A, Hogervorst FB, Goldgar DE, Spurdle AB, Tavtigian SV. Adding in silico assessment of potential splice aberration to the integrated evaluation of BRCA gene unclassified variants. Human Mutation. 2016 Jul;37(7):627-39.
Dhir A, Buratti E. Alternative splicing: role of pseudoexons in human disease and potential therapeutic strategies. The FEBS Journal. 2010 Feb 1;277(4):841-55.
Vallée MP, Francy TC, Judkins MK, Babikyan D, Lesueur F, Gammon A, Goldgar DE, Couch FJ, Tavtigian SV. Classification of missense substitutions in the BRCA genes: A database dedicated to Ex‐UVs. Human mutation. 2012 Jan;33(1):22-8.
Churbanov A, Vořechovský I, Hicks C. A method of predicting changes in human gene splicing induced by genetic variants in context of cis-acting elements. BMC Bioinformatics. 2010 Dec;11(1):1-2.
Wang J, Zhang J, Li K, Zhao W, Cui Q. SpliceDisease database: linking RNA splicing and disease. Nucleic Acids Research. 2012 Jan 1;40(D1):D1055-9.
See also
Human diseases markers
|
https://en.wikipedia.org/wiki/Stock%20Exchange%20of%20Visions
|
The Stock Exchange of Visions is a project initiated in 2006 by Fabrica, Benetton's research center. It gathers visionaries from diverse nationalities and cultures, who hail from a wide range of specialties, to provide insight into their vision for the future.
Artists, sociologists, activists, scientists and others have answered a questionnaire designed to explore their idea of the future regarding our culture, environment, resources, economy and society. Stock Exchange of Visions aims to contribute to the awareness of our relationship with the planet while supplying positive and thoughtful answers regarding major global issues.
Stock Exchange of Visions consists of an interactive installation and website which allows the participant to access the growing content of the project and interact with it. The installation is a site-specific knowledge hub while the website provides global access to the visions of the future collected by the project.
Installation
The Stock Exchange of Visions installation creates an on-site, interactive knowledge experience. It features an interactive menu for accessing the visions of the future, which are projected onto a life-size video screen intended to create a sense of dialogue between the selected visionary and the installation participant.
The Stock Exchange of Visions installation is a traveling installation that has been presented at major cultural venues in Europe. It was first shown at the Centre Georges Pompidou in Paris (2006), with the second presentation at the Triennale of Milan (2007). The objective of this traveling installation is to let visitors have an interactive physical experience with the visions of the future, while the website provides constant global access to the content of the project.
Visionaries
Stock Exchange of Visions has collected the video interviews of the following Visionaries:
|
https://en.wikipedia.org/wiki/Heterogeneous%20System%20Architecture
|
Heterogeneous System Architecture (HSA) is a cross-vendor set of specifications that allow for the integration of central processing units and graphics processors on the same bus, with shared memory and tasks. The HSA is being developed by the HSA Foundation, which includes (among many others) AMD and ARM. The platform's stated aim is to reduce communication latency between CPUs, GPUs and other compute devices, and make these various devices more compatible from a programmer's perspective, relieving the programmer of the task of planning the moving of data between devices' disjoint memories (as must currently be done with OpenCL or CUDA).
CUDA and OpenCL, as well as most other fairly advanced programming frameworks, can use HSA to increase their execution performance. Heterogeneous computing is widely used in system-on-chip devices such as tablets, smartphones, other mobile devices, and video game consoles. HSA allows programs to use the graphics processor for floating-point calculations without separate memory or scheduling.
Rationale
The rationale behind HSA is to ease the burden on programmers when offloading calculations to the GPU. Originally driven solely by AMD and called the FSA, the idea was extended to encompass processing units other than GPUs, such as other manufacturers' DSPs, as well.
Modern GPUs are very well suited to perform single instruction, multiple data (SIMD) and single instruction, multiple threads (SIMT) workloads, while modern CPUs are still being optimized for branching.
Overview
Originally introduced by embedded systems such as the Cell Broadband Engine, sharing system memory directly between multiple system actors makes heterogeneous computing more mainstream. Heterogeneous computing itself refers to systems that contain multiple processing units: central processing units (CPUs), graphics processing units (GPUs), digital signal processors (DSPs), or any type of application-specific integrated circuits (ASICs). The system architecture allows
|
https://en.wikipedia.org/wiki/Hofstadter%27s%20butterfly
|
In condensed matter physics, Hofstadter's butterfly is a graph of the spectral properties of non-interacting two-dimensional electrons in a perpendicular magnetic field in a lattice. The fractal, self-similar nature of the spectrum was discovered in the 1976 Ph.D. work of Douglas Hofstadter and is one of the early examples of modern scientific data visualization. The name reflects the fact that, as Hofstadter wrote, "the large gaps [in the graph] form a very striking pattern somewhat resembling a butterfly."
The Hofstadter butterfly plays an important role in the theory of the integer quantum Hall effect and the theory of topological quantum numbers.
History
The first mathematical description of electrons on a 2D lattice, acted on by a perpendicular homogeneous magnetic field, was studied by Rudolf Peierls and his student P. G. Harper in the 1950s.
Hofstadter first described the structure in 1976 in an article on the energy levels of Bloch electrons in perpendicular magnetic fields. It gives a graphical representation of the spectrum of Harper's equation at different frequencies. One key aspect of the mathematical structure of this spectrum – the splitting of energy bands for a specific value of the magnetic field, along a single dimension (energy) – had been previously mentioned in passing by Soviet physicist Mark Azbel in 1964 (in a paper cited by Hofstadter), but Hofstadter greatly expanded upon that work by plotting all values of the magnetic field against all energy values, creating the two-dimensional plot that first revealed the spectrum's uniquely recursive geometric properties.
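The band structure underlying the butterfly can be probed numerically. For rational flux p/q, Chambers' relation reduces membership in the Harper spectrum to a trace condition on a product of 2×2 transfer matrices. The sketch below (illustrative, not optimized) evaluates the product at the phase θ = π/(2q), where the trace equals the Chambers polynomial, so an energy E belongs to the spectrum exactly when |tr M(E)| ≤ 4:

```python
import math

def chambers_trace(E: float, p: int, q: int) -> float:
    """Trace of the q-step Harper transfer-matrix product at flux p/q,
    evaluated at the phase pi/(2q) where cos(q*theta) vanishes."""
    theta = math.pi / (2 * q)
    m00, m01, m10, m11 = 1.0, 0.0, 0.0, 1.0   # 2x2 identity
    for n in range(1, q + 1):
        a = 2.0 * math.cos(2.0 * math.pi * n * p / q + theta)
        # left-multiply by the transfer matrix A_n = [[E - a, -1], [1, 0]]
        m00, m01, m10, m11 = (E - a) * m00 - m10, (E - a) * m01 - m11, m00, m01
    return m00 + m11

def in_spectrum(E: float, p: int, q: int) -> bool:
    """Chambers' criterion: E is in the Harper spectrum iff |tr M(E)| <= 4."""
    return abs(chambers_trace(E, p, q)) <= 4.0
```

For flux 1/2 the trace reduces to E² − 4, reproducing the known pair of touching bands |E| ≤ 2√2; scanning E against p/q over many rationals traces out the butterfly.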
Written while Hofstadter was at the University of Oregon, his paper was influential in directing further research. It predicted on theoretical grounds that the allowed energy level values of an electron in a two-dimensional square lattice, as a function of a magnetic field applied perpendicularly to the system, formed what is now known as a fractal set. That is, the distributio
|
https://en.wikipedia.org/wiki/Wireless%20security
|
Wireless security is the prevention of unauthorized access or damage to computers or data using wireless networks, which include Wi-Fi networks. The term may also refer to the protection of the wireless network itself from adversaries seeking to damage the confidentiality, integrity, or availability of the network. The most common type is Wi-Fi security, which includes Wired Equivalent Privacy (WEP) and Wi-Fi Protected Access (WPA). WEP is an old IEEE 802.11 standard from 1997. It is a notoriously weak security standard: the password it uses can often be cracked in a few minutes with a basic laptop computer and widely available software tools. WEP was superseded in 2003 by WPA, a quick alternative at the time to improve security over WEP. The current standard is WPA2; some older hardware cannot support WPA2 without a firmware upgrade or replacement. WPA2 encrypts the network with a 256-bit key; the longer key length improves security over WEP. Enterprises often enforce security using a certificate-based system to authenticate the connecting device, following the IEEE 802.1X standard.
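For WPA2-Personal, the 256-bit key is derived from the passphrase and the network's SSID using PBKDF2, as specified in IEEE 802.11i. A minimal sketch (the passphrase and SSID below are made up):

```python
import hashlib

def wpa2_pmk(passphrase: str, ssid: str) -> bytes:
    """Derive the WPA2-Personal Pairwise Master Key:
    PBKDF2-HMAC-SHA1 with the SSID as salt, 4096 iterations,
    256-bit (32-byte) output."""
    return hashlib.pbkdf2_hmac("sha1", passphrase.encode(),
                               ssid.encode(), 4096, 32)
```

Because the SSID salts the derivation, identical passphrases on different networks yield different keys, which blunts generic precomputed-dictionary attacks (though weak passphrases remain crackable per-network).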
In January 2018, the Wi-Fi Alliance announced WPA3 as a replacement to WPA2. Certification began in June 2018, and WPA3 support has been mandatory for devices which bear the "Wi-Fi CERTIFIED™" logo since July 2020.
Many laptop computers have wireless cards pre-installed. The ability to enter a network while mobile has great benefits. However, wireless networking is prone to some security issues. Hackers have found wireless networks relatively easy to break into, and even use wireless technology to hack into wired networks. As a result, it is very important that enterprises define effective wireless security policies that guard against unauthorized access to important resources. Wireless Intrusion Prevention Systems (WIPS) or Wireless Intrusion Detection Systems (WIDS) are commonly used to enforce wireless security policies.
The risks to users of wireless technology
|
https://en.wikipedia.org/wiki/Google%20I/O
|
Google I/O, or simply I/O, is an annual developer conference held by Google in Mountain View, California. The name "I/O" is taken from the number googol, with the "I" representing the "1" in googol and the "O" representing the first "0" in the number. The format of the event is similar to Google Developer Day.
History
Google I/O 2020 was cancelled due to the COVID-19 pandemic, and Google I/O 2021 took place online. Google I/O returned to an in-person format in 2022, the first in-person conference since the one held in 2019.
Evolution
2008
Major topics included:
Android
App Engine
Bionic
Maps API
OpenSocial
Web Toolkit
Speakers included Marissa Mayer, David Glazer, Steve Horowitz, Alex Martelli, Steve Souders, Dion Almaer, Mark Lucovsky, Guido van Rossum, Jeff Dean, Chris DiBona, Josh Bloch, Raffaello D'Andrea, Geoff Stearns.
2009
Major topics included:
AJAX APIs
Android
App Engine
Chrome
OpenSocial
Wave
Web Toolkit
Speakers included Aaron Boodman, Adam Feldman, Adam Schuck, Alex Moffat, Alon Levi, Andrew Bowers, Andrew Hatton, Anil Sabharwal, Arne Roomann-Kurrik, Ben Collins-Sussman, Jacob Lee, Jeff Fisher, Jeff Ragusa, Jeff Sharkey, Jeffrey Sambells, Jerome Mouton and Jesse Kocher.
Attendees were given an HTC Magic.
2010
Major topics included:
APIs
Android
App Engine
Chrome
Enterprise
Geo
OpenSocial
Social Web
TV
Wave
Speakers included Aaron Koblin, Adam Graff, Adam Nash, Adam Powell, Adam Schuck, Alan Green, Albert Cheng, Albert Wenger, Alex Russell, Alfred Fuller, Amit Agarwal, Amit Kulkarni, Amit Manjhi, Amit Weinstein, Andres Sandholm, Angus Logan, Arne Roomann-Kurrik, Bart Locanthi, Ben Appleton, Ben Chang, Ben Collins-Sussman.
Attendees were given an HTC Evo 4G at the event. Prior to the event, U.S. attendees received a Motorola Droid while non-U.S. attendees received a Nexus One.
2011
Major topics included:
Android
Google Play Music
Google Pla
|
https://en.wikipedia.org/wiki/Polynomial%20functor
|
In algebra, a polynomial functor is an endofunctor on the category of finite-dimensional vector spaces that depends polynomially on vector spaces. For example, the symmetric powers and the exterior powers are polynomial functors; these two are also Schur functors.
The notion appears in representation theory as well as category theory (the calculus of functors). In particular, the category of homogeneous polynomial functors of degree n is equivalent to the category of finite-dimensional representations of the symmetric group over a field of characteristic zero.
Definition
Let k be a field of characteristic zero and 𝒱 the category of finite-dimensional k-vector spaces and k-linear maps. Then an endofunctor F of 𝒱 is a polynomial functor if the following equivalent conditions hold:
For every pair of vector spaces X, Y in 𝒱, the map Hom(X, Y) → Hom(F(X), F(Y)), f ↦ F(f), is a polynomial mapping (i.e., a vector-valued polynomial in linear forms).
Given linear maps f₁, …, fᵣ : X → Y in 𝒱, the function (λ₁, …, λᵣ) ↦ F(λ₁f₁ + ⋯ + λᵣfᵣ) defined on kʳ is a polynomial function with coefficients in Hom(F(X), F(Y)).
A polynomial functor F is said to be homogeneous of degree n if for any linear maps f₁, …, fᵣ in 𝒱 with common domain and codomain, the vector-valued polynomial F(λ₁f₁ + ⋯ + λᵣfᵣ) is homogeneous of degree n in λ₁, …, λᵣ.
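As a worked example of homogeneity, the n-th tensor power functor sends a vector space to its n-fold tensor product and a linear map to its n-fold tensor power; scaling the input map then scales the output by the n-th power of the scalar:

```latex
T^{n}(V) = V^{\otimes n}, \qquad T^{n}(f) = f^{\otimes n},
\qquad
T^{n}(\lambda f) = (\lambda f)^{\otimes n}
                 = \lambda^{n}\, f^{\otimes n}
                 = \lambda^{n}\, T^{n}(f),
```

so the tensor power (and likewise its quotient, the symmetric power, and its subspace, the exterior power) is homogeneous of degree n.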
Variants
If “finite vector spaces” is replaced by “finite sets”, one gets the notion of combinatorial species (to be precise, those of polynomial nature).
|
https://en.wikipedia.org/wiki/Symrise
|
Symrise AG is a German chemicals company that is a major producer of flavours and fragrances with sales of €4.618 billion in 2022. Major competitors include Givaudan, Takasago International Corporation, International Flavors and Fragrances and Döhler. Symrise is a member of the European Flavour Association. In 2021, Symrise was ranked 4th by FoodTalks' Global Top 50 Food Flavours and Fragrances Companies list.
History
Symrise was founded in 2003 by the merger of Bayer subsidiary Haarmann & Reimer (H&R) and Dragoco, both based in Holzminden, Germany.
Haarmann & Reimer
Haarmann & Reimer (H&R) was founded in 1874 by the chemists Ferdinand Tiemann and Wilhelm Haarmann after they became the first to synthesize vanillin from coniferin. Holzminden was the site where vanillin was first produced industrially.
In 1917, H&R supported Leopold Ružička's unsuccessful three-year project to synthesize irone, a fragrance of violets.
In 1953, H&R was acquired by Bayer.
Dragoco
Dragoco was founded in 1919 by Carl-Wilhelm Gerberding and his cousin August Bellmer.
Horst-Otto Gerberding, majority holder and Chairman of the Executive Board at Dragoco, placed all of his shares into the new Symrise corporation, and the merger was completed on May 23, 2003.
History since 2003
In April 2005, Symrise acquired Flavours Direct, a UK-based manufacturer of compounded flavours and seasonings.
In January 2006, Symrise acquired Hamburg based Kaden Biochemicals GmbH, a producer of specialty botanical extracts.
In November 2006, Symrise announced plans to sell shares worth €650 million in an IPO. The firm also announced that its main shareholders, including EQT, would also sell shares worth an unspecified amount. The IPO would leave well above 50% of Symrise shares in free-float. Deutsche Bank and UBS conducted the listing on December 11, 2006. Symrise was listed on the Frankfurt Stock Exchange with the trading symbol SY1. With 81,030,358 shares issued at an issue price of €17.25 for a total volume of
|
https://en.wikipedia.org/wiki/Package%20principles
|
In computer programming, package principles are a way of organizing classes in larger systems to make them more organized and manageable. They aid in understanding which classes should go into which packages (package cohesion) and how these packages should relate with one another (package coupling). Package principles also includes software package metrics, which help to quantify the dependency structure, giving different and/or more precise insights into the overall structure of classes and packages.
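One of the software package metrics mentioned above is Robert C. Martin's instability metric, I = Ce / (Ce + Ca), which relates a package's outgoing (efferent, Ce) and incoming (afferent, Ca) dependencies. A minimal sketch over a toy dependency map (the package names are made up):

```python
def instability(deps: dict, pkg: str) -> float:
    """Martin's instability I = Ce / (Ce + Ca) for one package.

    deps maps each package name to the set of packages it depends on.
    """
    ce = len(deps.get(pkg, set()))                        # efferent: outgoing
    ca = sum(1 for ds in deps.values() if pkg in ds)      # afferent: incoming
    return ce / (ce + ca) if ce + ca else 0.0
```

A package that many others depend on but which depends on nothing (I = 0) is maximally stable and should change rarely; a leaf package that only depends on others (I = 1) is maximally unstable and cheap to change.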
See also
SOLID
Robert Cecil Martin
|
https://en.wikipedia.org/wiki/Multipactor%20effect
|
The multipactor effect is a phenomenon in radio-frequency (RF) amplifier vacuum tubes and waveguides, where, under certain conditions, secondary electron emission in resonance with an alternating electric field leads to exponential electron multiplication, possibly damaging and even destroying the RF device.
Description
The multipactor effect occurs when electrons accelerated by radio-frequency (RF) fields are self-sustained in a vacuum (or near vacuum) via an electron avalanche caused by secondary electron emission. The impact of an electron to a surface can, depending on its energy and angle, release one or more secondary electrons into the vacuum. These electrons can then be accelerated by the RF fields and impact with the same or another surface. Should the impact energies, number of electrons released, and timing of the impacts be such that a sustained multiplication of the number of electrons occurs, the phenomenon can grow exponentially and may lead to operational problems of the RF system such as damage of RF components or loss or distortion of the RF signal.
Mechanism
The mechanism of multipactor depends on the orientation of an RF electric field with respect to the surface. There are two types of multipactor: two-surface multipactor on metals and single-surface multipactor on dielectrics.
Two-surface multipactor on metals
This is a multipactor effect that occurs in the gap between metallic electrodes. Often, an RF electric field is normal to the surface. A resonance between electron flight time and RF field cycle is a mechanism for multipactor development.
The existence of multipactor is dependent on the following three conditions being met: The average number of electrons released is greater than or equal to one per incident electron (this is dependent on the secondary electron yield of the surface), and the time taken by the electron to travel from the surface from which it was released to the surface it impacts is an integer multiple of one half of
|
https://en.wikipedia.org/wiki/Holographic%20display
|
A holographic display is a type of 3D display that utilizes light diffraction to display a three-dimensional image to the viewer. Holographic displays are distinguished from other forms of 3D displays in that they do not require the viewer to wear any special glasses or use external equipment to be able to see the image, and do not cause the vergence-accommodation conflict.
Some commercially available 3D displays are advertised as being holographic, but are actually multiscopic.
Timeline
1947 - Hungarian scientist Dennis Gabor first came up with the concept of a hologram while trying to improve the resolution of electron microscopes. He derived the name for holography, with "holos" being the Greek word for "whole," and "gramma" which is the term for "message."
1960 - The world's first laser was developed by Russian scientists Nikolay Basov and Alexander Prokhorov, and American scientist Charles H. Townes. This was a major milestone for holography because laser technology serves as the basis of some modern day holographic displays.
1962 - Yuri Denisyuk invented the white-light reflection hologram which was the first hologram that could be viewed under the light given off by an ordinary incandescent light bulb.
1968 - White-light transmission holography was invented by Stephen Benton. This type of holography was unique because it was able to reproduce the entire spectrum of colors by separating the seven colors that create white light.
1972 - Lloyd Cross produced the first traditional hologram by using white-light transmission holography to recreate a moving 3-dimensional image.
1989 - MIT spatial imaging group pioneered electroholography, which uses magnetic waves and acoustic-optical sensors to portray moving pictures onto a display.
2005 - The University of Texas developed the laser plasma display, which is considered the first real 3D holographic display.
2011 - DARPA announces the Urban Photonic Sand Table (UPST) project, a dynamic digital holographi
|
https://en.wikipedia.org/wiki/Chromosomal%20inversion
|
An inversion is a chromosome rearrangement in which a segment of a chromosome becomes inverted within its original position. An inversion occurs when a chromosome undergoes two breaks within the chromosomal arm, and the segment between the two breaks inserts itself in the opposite direction in the same chromosome arm. The breakpoints of inversions often happen in regions of repetitive nucleotides, and the regions may be reused in other inversions. Chromosomal segments in inversions can be as small as 100 kilobases or as large as 100 megabases. The number of genes captured by an inversion can range from a handful of genes to hundreds of genes. Inversions can happen either through ectopic recombination, chromosomal breakage and repair, or non-homologous end joining.
Inversions are of two types: paracentric and pericentric. Paracentric inversions do not include the centromere, and both breakpoints occur in one arm of the chromosome. Pericentric inversions span the centromere, and there is a breakpoint in each arm.
Inversions usually do not cause any abnormalities in carriers, as long as the rearrangement is balanced, with no extra or missing DNA. However, in individuals which are heterozygous for an inversion, there is an increased production of abnormal chromatids (this occurs when crossing-over occurs within the span of the inversion). This leads to lowered fertility, due to production of unbalanced gametes. Inversions do not involve either loss or gain of genetic information; they simply rearrange the linear DNA sequence.
Detection
Cytogenetic techniques may be able to detect inversions, or inversions may be inferred from genetic analysis. Nevertheless, in most species, small inversions go undetected. More recently, comparative genomics has been used to detect chromosomal inversions, by mapping the genome. Population genomics may also be used to detect inversions, using areas of high linkage disequilibrium as indicators for possible inversion sites. Human fami
|
https://en.wikipedia.org/wiki/Auxiliary%20line
|
An auxiliary line (or helping line) is an extra line needed to complete a proof in plane geometry. Other common auxiliary constructs in elementary plane synthetic geometry are the helping circles.
As an example, a proof of the theorem on the sum of angles of a triangle can be done by adding a straight line parallel to one of the triangle sides (passing through the opposite vertex).
Although the adding of auxiliary constructs can often make a problem obvious, it's not at all obvious to discover the helpful construct among all the possibilities, and for this reason many prefer to use more systematic methods for the solution of geometric problems (such as the coordinate method, which requires much less ingenuity).
|
https://en.wikipedia.org/wiki/Orbital%20lamina%20of%20ethmoid%20bone
|
The orbital lamina of ethmoid bone (or lamina papyracea or orbital lamina) is a smooth, oblong, paper-thin bone plate which forms the lateral wall of the labyrinth of the ethmoid bone. It covers the middle and posterior ethmoidal cells, and forms a large part of the medial wall of the orbit.
It articulates above with the orbital plate of the frontal bone, below with the maxilla and the orbital process of palatine bone, in front with the lacrimal, and behind with the sphenoid.
Its name lamina papyracea is an appropriate description, as this part of the ethmoid bone is paper-thin and fractures easily. A fracture here could cause entrapment of the medial rectus muscle.
|
https://en.wikipedia.org/wiki/Acetogenesis
|
Acetogenesis is a process through which acetate is produced either by the reduction of CO2 or by the reduction of organic acids, rather than by the oxidative breakdown of carbohydrates or ethanol, as with acetic acid bacteria.
The different bacterial species that are capable of acetogenesis are collectively termed acetogens. Reduction of CO2 to acetate by anaerobic bacteria occurs via the Wood–Ljungdahl pathway and requires an electron source (e.g., H2, CO, formate, etc.). Some acetogens can synthesize acetate autotrophically from carbon dioxide and hydrogen gas. Reduction of organic acids to acetate by anaerobic bacteria occurs via fermentation.
Discovery
In 1932, organisms were discovered that could convert hydrogen gas and carbon dioxide into acetic acid. The first acetogenic bacterium species, Clostridium aceticum, was discovered in 1936 by Klaas Tammo Wieringa. A second species, Moorella thermoacetica, attracted wide interest because of its ability, reported in 1942, to convert glucose into three moles of acetic acid.
Biochemistry
The precursor to acetic acid is the thioester acetyl-CoA. The key aspects of the acetogenic pathway are several reactions that include the reduction of carbon dioxide to carbon monoxide and the attachment of the carbon monoxide to a methyl group. The first process is catalyzed by the enzyme carbon monoxide dehydrogenase. The coupling of the methyl group (provided by methylcobalamin) and the CO is catalyzed by acetyl-CoA synthase.
2 CO2 + 4 H2 → CH3COOH + 2 H2O
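As a quick sanity check (added here, not from the article), the overall reaction can be verified to be atom-balanced by counting each element on both sides:

```python
from collections import Counter

def atoms(molecule, coefficient):
    """Scale a molecule's per-element atom counts by its stoichiometric coefficient."""
    return Counter({el: n * coefficient for el, n in molecule.items()})

co2  = {"C": 1, "O": 2}
h2   = {"H": 2}
acoh = {"C": 2, "H": 4, "O": 2}   # CH3COOH (acetic acid)
h2o  = {"H": 2, "O": 1}

# 2 CO2 + 4 H2  ->  CH3COOH + 2 H2O
lhs = atoms(co2, 2) + atoms(h2, 4)
rhs = atoms(acoh, 1) + atoms(h2o, 2)
# Both sides carry 2 C, 8 H, and 4 O, so the equation balances.
```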
Applications
The unique metabolism of acetogens has significance in biotechnological uses. In carbohydrate fermentations, the decarboxylation reactions involved result in the loss of carbon into carbon dioxide. This loss is an issue with an increased requirement of minimization of CO2 emissions, as well as successful competition for fossil fuels with biofuel production being limited by monetary value. Acetogens can ferment glucose without any CO2 emissions and co
|
https://en.wikipedia.org/wiki/Sonic%20Pi
|
Sonic Pi is a live coding environment based on Ruby, originally designed to support both computing and music lessons in schools, developed by Sam Aaron in the University of Cambridge Computer Laboratory in collaboration with Raspberry Pi Foundation.
Uses
Thanks to its use of the SuperCollider synthesis engine and accurate timing model, it is also used for live coding and other forms of algorithmic music performance and production, including at algoraves. Its research and development has been supported by Nesta, via the Sonic PI: Live & Coding project.
See also
Pure Data
Algorithmic composition
List of MIDI editors and sequencers
List of music software
Further reading
|
https://en.wikipedia.org/wiki/Doubly%20periodic%20function
|
In mathematics, a doubly periodic function is a function defined on the complex plane and having two "periods", which are complex numbers u and v that are linearly independent as vectors over the field of real numbers. That u and v are periods of a function ƒ means that
for all values of the complex number z.
The doubly periodic function is thus a two-dimensional extension of the simpler singly periodic function, which repeats itself in a single dimension. Familiar examples of functions with a single period on the real number line include the trigonometric functions like cosine and sine. In the complex plane, the exponential function e^z is a singly periodic function, with period 2πi.
Examples
As an arbitrary mapping from pairs of reals (or complex numbers) to reals, a doubly periodic function can be constructed with little effort. For example, assume that the periods are 1 and i, so that the repeating lattice is the set of unit squares with vertices at the Gaussian integers. Values in the prototype square (i.e. x + iy where 0 ≤ x < 1 and 0 ≤ y < 1) can be assigned rather arbitrarily and then 'copied' to adjacent squares. This function will then be necessarily doubly periodic.
If the vectors 1 and i in this example are replaced by linearly independent vectors u and v, the prototype square becomes a prototype parallelogram that still tiles the plane. The "origin" of the lattice of parallelograms does not have to be the point 0: the lattice can start from any point. In other words, we can think of the plane and its associated functional values as remaining fixed, and mentally translate the lattice to gain insight into the function's characteristics.
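The construction described above — assign values on the prototype square and copy them to every translate — can be sketched in a few lines (an illustrative, non-analytic example added here; the function chosen on the square is arbitrary):

```python
import math

def lattice_reduce(z):
    """Map z to the equivalent point of the prototype unit square for the
    lattice with periods 1 and i: take the fractional parts of Re(z), Im(z)."""
    a = z.real - math.floor(z.real)
    b = z.imag - math.floor(z.imag)
    return complex(a, b)

def f(z):
    """An arbitrary doubly periodic function with periods 1 and i: any
    function of the reduced point automatically inherits both periods."""
    w = lattice_reduce(z)
    return math.sin(w.real) * math.cos(w.imag)  # arbitrary choice on the square

z = 0.3 + 0.7j
# f(z) == f(z + 1) == f(z + i) == f(z + m + n*i) for all integers m, n.
```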
Use of complex analysis
If a doubly periodic function is also a complex function that satisfies the Cauchy–Riemann equations and provides an analytic function away from some set of isolated poles – in other words, a meromorphic function – then a lot of information about such a function can be obtained by applying some
|
https://en.wikipedia.org/wiki/Aiyu%20jelly
|
Aiyu jelly, known in Amoy Hokkien as ogio, and as ice jelly in Singapore, is a jelly made from the gel from the seeds of the awkeotsang creeping fig found in Taiwan and in East Asian countries of the same climates and latitudes. The jelly is not commonly made or found outside of Taiwan, Malaysia, and Singapore, though it can be bought fresh in specialty stores in Japan and canned in Chinatowns. It is also used in Taiwanese cuisine.
In Cantonese, it is also known as man tau long (文頭郎). It is commonly served with a slice of lime.
Origin
According to oral history, the plant and the jelly were named after the daughter of a Taiwanese tea businessman in the 1800s. The gelling property of the seeds was discovered by the businessman as he drank from a creek in Chiayi. He found a clear yellowish jelly in the water he was drinking and was refreshed upon trying it. Looking above the creek he noticed fruits on hanging vines. The fruits contained seeds that exuded a sticky gel when rubbed.
Upon this discovery, he gathered some of the fruits and served them at home with honeyed lemon juice or sweetened beverages. Finding the jelly-containing beverage delicious and thirst-quenching, the enterprising businessman delegated the task of selling it to his 15-year-old daughter, Aiyu. The snack was very well received and became highly popular. So, the businessman eventually named the jelly and the vines after his daughter.
However, the Austronesian name igos, coming from Spanish higo, hints at a possible Austronesian origin for this food.
Harvesting
Fruits of the creeping fig plant resemble large fig fruits the size of small mangos. The figs grow from flowers pollinated by the fig wasp Wiebesia pumilae and are harvested from September through January just before the fruit ripens to a dark purple. The fruits are then halved and turned inside out to dry over the course of several days. The dry fruits can be sold as is, or dried aiyu seeds can then be pulled off the
|
https://en.wikipedia.org/wiki/Local%20parameter
|
In the geometry of complex algebraic curves, a local parameter for a curve C at a smooth point P is a meromorphic function on C that has a simple zero at P. This concept can be generalized to curves defined over fields other than ℂ (or to schemes), because the local ring at a smooth point P of an algebraic curve C (defined over an algebraically closed field) is always a discrete valuation ring. This valuation gives a way to count the order (at the point P) of rational functions (which are natural generalizations of meromorphic functions in the non-complex realm) having a zero or a pole at P.
Local parameters, as their name indicates, are used mainly to properly count multiplicities in a local way.
Introduction
If C is a complex algebraic curve, complex analysis can be used to count multiplicities of zeroes and poles of meromorphic functions defined on it. However, when discussing curves defined over fields other than ℂ, there is no access to the power of complex analysis, and a replacement must be found in order to define multiplicities of zeroes and poles of rational functions defined on such curves. In this last case, one says that the germ of a regular function f vanishes at P if f(P) = 0. This is in complete analogy with the complex case, in which the maximal ideal of the local ring at a point P consists of the germs of holomorphic functions vanishing at P.
The valuation function on is given by
This valuation can naturally be extended to K(C) (which is the field of rational functions of C) because it is the field of fractions of . Hence, the idea of having a simple zero at a point P is now complete: it will be a rational function such that its germ falls into , with d at most 1.
This has an algebraic resemblance with the concept of a uniformizing parameter (or just uniformizer) found in the context of discrete valuation rings in commutative algebra; a uniformizing parameter for the DVR (R, m) is just a generator of the maximal ideal m. The link comes from the fact that a local paramete
|
https://en.wikipedia.org/wiki/HOAP
|
The HOAP series robots are an advanced humanoid robot platform manufactured by Fujitsu Automation in Japan. HOAP is an abbreviation for "Humanoid for Open Architecture Platform".
The HOAP series should not be confused with the HRP series (also known as Promet).
History
In 2001, Fujitsu realized its first commercial humanoid robot named HOAP-1. The HOAP-2 was released in 2003 followed by the HOAP-3 in 2005.
Specifications of HOAP-2
HOAP-2 is high and weighs . Its system consists of the robot body, PC and power supplies, and the PC OS uses RT-Linux (open C/C++ language). Smooth movement was achieved because electric current control of the motors was possible (except for the neck and hands). The USB interface for the internal LAN facilitates easy modification or addition of new actuators and sensors.
The neck, waist and hands also have movement capability, contributing to smooth motion. The robot is easy to program and has a simple initial start-up using a sample program included with the purchase of the robot.
Capabilities of HOAP-2
HOAP-2 has been demonstrated with capabilities to successfully perform various tasks, including walking on flat terrain, performing sumo movements, cleaning a whiteboard, following a ball, and grasping thin objects, such as pens and brushes.
|
https://en.wikipedia.org/wiki/MCS%20algorithm
|
For mathematical optimization, Multilevel Coordinate Search (MCS) is an efficient algorithm for bound constrained global optimization using function values only.
To do so, the n-dimensional search space is represented by a set of non-intersecting hypercubes (boxes). The boxes are then iteratively split along an axis plane according to the value of the function at a representative point of the box (and its neighbours) and the box's size. These two splitting criteria combine to form a global search by splitting large boxes and a local search by splitting areas for which the function value is good.
Additionally, a local search combining a (multi-dimensional) quadratic interpolant of the function and line searches can be used to augment performance of the algorithm (MCS with local search); in this case the plain MCS is used to provide the starting (initial) points. The information provided by local searches (local minima of the objective function) is then fed back to the optimizer and affects the splitting criteria, resulting in reduced sample clustering around local minima, faster convergence and higher precision.
Simplified workflow
The MCS workflow is visualized in Figures 1 and 2. Each step of the algorithm can be split into four stages:
Identify a potential candidate for splitting (magenta, thick).
Identify the optimal splitting direction and the expected optimal position of the splitting point (green).
Evaluate the objective function at the splitting point or recover it from the already computed set; the latter applies if the current splitting point has already been reached when splitting a neighboring box.
Generate new boxes (magenta, thin) based on the values of the objective function at the splitting point.
At each step the green point with the temporary yellow halo is the unique base point of the box; each box has an associated value of the objective, namely its value at the box's base point.
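A drastically simplified one-dimensional caricature of this box-splitting loop can be sketched as follows. This is not the actual MCS algorithm — it omits MCS's real splitting rules, the neighbour information, and the local-search phase — but it shows how scoring a box by its base-point value minus a size bonus mixes local refinement (good values) with global exploration (large boxes):

```python
def toy_box_search(f, lo, hi, iterations=40):
    """Toy 1-D box-splitting global search: each box is an interval carrying
    the objective value at its base point (here, its midpoint).  Repeatedly
    split the box with the best score, where score = base-point value minus
    the box width, then return the base point of the best box found."""
    boxes = [(lo, hi, f((lo + hi) / 2))]
    for _ in range(iterations):
        # Lower score = more promising: good value (local) or large box (global).
        idx = min(range(len(boxes)),
                  key=lambda i: boxes[i][2] - (boxes[i][1] - boxes[i][0]))
        a, b, _ = boxes.pop(idx)
        m = (a + b) / 2
        # Split into two child boxes, each evaluated at its own midpoint.
        boxes.append((a, m, f((a + m) / 2)))
        boxes.append((m, b, f((m + b) / 2)))
    best = min(boxes, key=lambda box: box[2])
    return (best[0] + best[1]) / 2  # base point of the best box

# Minimize (x - 0.3)^2 on [0, 1]; the search concentrates boxes near 0.3.
x_best = toy_box_search(lambda x: (x - 0.3) ** 2, 0.0, 1.0)
```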
In order to determine if a box will be split two separat
|
https://en.wikipedia.org/wiki/Arlene%20Minkiewicz
|
Arlene F. Minkiewicz is the Chief Scientist at PRICE Systems, a company generally acknowledged as the earliest developer of parametric cost estimation software. She leads the cost research activity for the entire suite of cost estimating products that PRICE develops and maintains. Minkiewicz has over 25 years of experience designing and implementing cost models.
History
In 1997 Minkiewicz contributed to the science of software measurement by proposing predictive object points (POPs) as a means of measuring three characteristics of object-oriented software (combined data and functions, object communication, and reuse via software). Her research has been published in leading trade journals including Software Development; Crosstalk: The Journal of Defense Software Engineering; and the British Software Review. In 2002 Minkiewicz was named the Clyde Perry Parametrician of the Year by the International Society of Parametric Analysts. In 2004, her research into the cost implications of Commercial-Off-the-Shelf (COTS) software-based systems was recognized with best paper awards by the International Society of Parametric Analysts and the Society for Cost Estimating and Analysis (ISPA/SCEA). In 2012, ISPA/SCEA recognized Minkiewicz for excellence in the field of parametric cost estimation with the presentation of the Frank Freiman Award, the highest honor awarded by the society.
Education
Minkiewicz holds a bachelor's degree in electrical engineering from Lehigh University, (Bethlehem, Pa.) and a master's in computer science from Drexel University, (Philadelphia, Pa).
U.S. patent
On June 6, 2000, Minkiewicz (along with Bruce E. Fad) was awarded U.S. patent 6073107 for parametric software forecasting system and method, a parametric software estimating system that provides a class of user selectable size metrics.
Speaker and industry commentator
Minkiewicz speaks frequently on subjects pertaining to cost estimating and measurement at professional conferences
|
https://en.wikipedia.org/wiki/Computational%20phylogenetics
|
Computational phylogenetics, phylogeny inference, or phylogenetic inference focuses on computational and optimization algorithms, heuristics, and approaches involved in phylogenetic analyses. The goal is to find a phylogenetic tree representing optimal evolutionary ancestry between a set of genes, species, or taxa. Maximum likelihood, parsimony, Bayesian, and minimum evolution are typical optimality criteria used to assess how well a phylogenetic tree topology describes the sequence data. Nearest Neighbour Interchange (NNI), Subtree Prune and Regraft (SPR), and Tree Bisection and Reconnection (TBR), known as tree rearrangements, are deterministic algorithms used to search for an optimal or best phylogenetic tree. The space and the landscape of searching for the optimal phylogenetic tree is known as phylogeny search space.
The maximum likelihood (also likelihood) optimality criterion is the process of finding the tree topology, along with its branch lengths, that provides the highest probability of observing the sequence data, while the parsimony optimality criterion seeks the fewest state-evolutionary changes required for a phylogenetic tree to explain the sequence data.
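As a concrete illustration of the parsimony criterion (an example added here, not from the article), Fitch's small-parsimony algorithm computes the minimum number of state changes a fixed tree topology needs to explain the observed states at a single aligned site:

```python
def fitch_score(tree):
    """Fitch's small-parsimony algorithm for one character on a rooted binary
    tree.  A leaf is a state string such as "A"; an internal node is a pair
    (left, right).  Returns (state_set, minimum_number_of_changes)."""
    if isinstance(tree, str):          # leaf: observed state, zero changes
        return {tree}, 0
    left, right = tree
    ls, lc = fitch_score(left)
    rs, rc = fitch_score(right)
    inter = ls & rs
    if inter:                          # non-empty intersection: no new change
        return inter, lc + rc
    return ls | rs, lc + rc + 1        # disjoint sets: one substitution needed

# Toy tree ((A,B),(C,D)) with observed nucleotides at one site: A, A, G, A.
tree = (("A", "A"), ("G", "A"))
states, changes = fitch_score(tree)
# A single substitution (the G) suffices to explain the data on this tree.
```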
Traditional phylogenetics relies on morphological data obtained by measuring and quantifying the phenotypic properties of representative organisms, while the more recent field of molecular phylogenetics uses nucleotide sequences encoding genes or amino acid sequences encoding proteins as the basis for classification.
Many forms of molecular phylogenetics are closely related to and make extensive use of sequence alignment in constructing and refining phylogenetic trees, which are used to classify the evolutionary relationships between homologous genes represented in the genomes of divergent species. The phylogenetic trees constructed by computational methods are unlikely to perfectly reproduce the evolutionary tree that represents the historical relationships between the species being analyzed. The historic
|
https://en.wikipedia.org/wiki/Crp%20domain
|
In molecular biology, the CRP domain is a protein domain consisting of a helix-turn-helix (HTH) motif. It is found at the C-terminus of numerous bacterial transcription regulatory proteins. These proteins bind DNA via the CRP domain. These proteins are very diverse, but for convenience may be grouped into subfamilies on the basis of sequence similarity. This family groups together a range of proteins, including ANR, CRP, CLP, CysR, FixK, Flp, FNR, FnrN, HlyX and NtcA.
|
https://en.wikipedia.org/wiki/Stampacchia%20Medal
|
The Stampacchia Gold Medal is an international prize awarded every three years by the Italian Mathematical Union (Unione Matematica Italiana, UMI) together with the Ettore Majorana Foundation (Erice), in recognition of outstanding contributions to the field of Calculus of Variations and related applications. The medal, named after the Italian mathematician Guido Stampacchia, goes to a mathematician whose age does not exceed 35.
Prize Winners
2003 Tristan Rivière (ETH Zürich)
2006 Giuseppe Mingione (University of Parma)
2009 Camillo De Lellis (University of Zurich)
2012 Ovidiu Savin (Columbia University)
2015 Alessio Figalli (The University of Texas at Austin)
2018 Guido De Philippis (International School for Advanced Studies)
2021 Xavier Ros-Oton (ICREA and Universitat de Barcelona)
See also
List of mathematics awards
|
https://en.wikipedia.org/wiki/Bernstein%20polynomial
|
In the mathematical field of numerical analysis, a Bernstein polynomial is a polynomial expressed as a linear combination of Bernstein basis polynomials. The idea is named after mathematician Sergei Natanovich Bernstein.
Polynomials in Bernstein form were first used by Bernstein in a constructive proof for the Weierstrass approximation theorem. With the advent of computer graphics, Bernstein polynomials, restricted to the interval [0, 1], became important in the form of Bézier curves.
A numerically stable way to evaluate polynomials in Bernstein form is de Casteljau's algorithm.
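For illustration (a sketch added here; variable names are our own), de Casteljau's algorithm can be compared against direct evaluation of the Bernstein sum — both compute the same value, but the repeated convex combinations of de Casteljau's scheme are the numerically stable choice:

```python
import math

def de_casteljau(coeffs, x):
    """Evaluate a polynomial given by Bernstein coefficients beta_0..beta_n
    on [0, 1] by repeated linear interpolation (de Casteljau's algorithm)."""
    beta = list(coeffs)
    n = len(beta)
    for j in range(1, n):
        for k in range(n - j):
            beta[k] = beta[k] * (1 - x) + beta[k + 1] * x
    return beta[0]

def bernstein_sum(coeffs, x):
    """Direct evaluation: sum of beta_v * C(n, v) * x^v * (1 - x)^(n - v)."""
    n = len(coeffs) - 1
    return sum(b * math.comb(n, v) * x ** v * (1 - x) ** (n - v)
               for v, b in enumerate(coeffs))

coeffs = [0.0, 2.0, -1.0, 3.0]   # arbitrary Bernstein coefficients, degree 3
x = 0.35
```

At the endpoints the evaluation reduces to the first and last coefficients, reflecting the endpoint-interpolation property of the Bernstein basis.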
Definition
Bernstein basis polynomials
The n + 1 Bernstein basis polynomials of degree n are defined as
b_{ν,n}(x) = C(n, ν) x^ν (1 − x)^(n−ν),  ν = 0, …, n,
where C(n, ν) is a binomial coefficient.
So, for example, b_{2,5}(x) = C(5, 2) x² (1 − x)³ = 10 x² (1 − x)³.
The first few Bernstein basis polynomials for blending 1, 2, 3 or 4 values together are:
b_{0,0}(x) = 1
b_{0,1}(x) = 1 − x, b_{1,1}(x) = x
b_{0,2}(x) = (1 − x)², b_{1,2}(x) = 2x(1 − x), b_{2,2}(x) = x²
b_{0,3}(x) = (1 − x)³, b_{1,3}(x) = 3x(1 − x)², b_{2,3}(x) = 3x²(1 − x), b_{3,3}(x) = x³
The Bernstein basis polynomials of degree n form a basis for the vector space of polynomials of degree at most n with real coefficients.
Bernstein polynomials
A linear combination of Bernstein basis polynomials
B_n(x) = Σ_{ν=0}^{n} β_ν b_{ν,n}(x)
is called a Bernstein polynomial or polynomial in Bernstein form of degree n. The coefficients β_ν are called Bernstein coefficients or Bézier coefficients.
The first few Bernstein basis polynomials from above, expanded in monomial form, are:
b_{0,0}(x) = 1
b_{0,1}(x) = 1 − x, b_{1,1}(x) = x
b_{0,2}(x) = 1 − 2x + x², b_{1,2}(x) = 2x − 2x², b_{2,2}(x) = x²
Properties
The Bernstein basis polynomials have the following properties:
b_{ν,n}(x) = 0, if ν < 0 or ν > n,
b_{ν,n}(x) ≥ 0 for x in [0, 1],
b_{ν,n}(0) = δ_{ν,0} and b_{ν,n}(1) = δ_{ν,n}, where δ is the Kronecker delta function: δ_{i,j} = 1 if i = j, and 0 otherwise.
b_{ν,n}(x) has a root with multiplicity ν at the point x = 0 (note: if ν = 0, there is no root at 0).
b_{ν,n}(x) has a root with multiplicity n − ν at the point x = 1 (note: if ν = n, there is no root at 1).
The derivative can be written as a combination of two polynomials of lower degree:
b′_{ν,n}(x) = n (b_{ν−1,n−1}(x) − b_{ν,n−1}(x)).
The k-th derivative at 0:
The k-th derivative at 1:
The transformation of the Bernstein polynomial to monomials is and by the inverse binomial transformation, the reverse transformation is
The indefinite integral is given by
The definite integral is constant for a given n:
∫₀¹ b_{ν,n}(x) dx = 1/(n + 1) for all ν = 0, …, n.
If n ≠ 0, then b_{ν,n} has a unique local maximum on the interval [0, 1] at x = ν/n.
|
https://en.wikipedia.org/wiki/Burpee%20%28exercise%29
|
The burpee, a squat thrust with an additional stand between repetitions, is a full body exercise used in strength training. The movement itself is primarily an anaerobic exercise, but when done in succession over a longer period can be utilized as an aerobic exercise.
The basic movement as described by its namesake, physiologist Royal H. Burpee, is performed in four steps from a standing position and known as a "four-count burpee":
Move into a squat position with your hands on the ground.
Kick your feet back into an extended plank position, while keeping your arms extended.
Immediately return your feet into squat position.
Stand up from the squat position.
This exercise can be modified by stepping the feet back into the plank position instead of kicking them back.
Moves 2 and 3 constitute a squat thrust. Many variants of the basic burpee exist, and they often include a push-up and a jump.
Origin
The exercise was invented in 1939 by US physiologist Royal Huddleston Burpee Sr., who used it in the burpee test to assess fitness. Burpee earned a PhD in applied physiology from Teachers College, Columbia University in 1940 and created the "burpee" exercise as part of his PhD thesis as a quick and simple fitness test, which may be used as a measure of agility and coordination. The original burpee was a "four-count burpee" consisting of movements through four different positions, and in the fitness test, the burpee was performed four times, with five heart rate measurements taken before and after the four successive burpees to measure the efficiency of the heart at pumping blood and how quickly the heart rate returns to normal.
The exercise was popularized when the United States Armed Services made it one of the ways used to assess the fitness level of recruits when the US entered World War II. Although the original test was not designed to be performed at high volume, the Army used the burpee to test how many times it could be performed by a soldier in 20 seconds – eight burpees in 20 seconds is c
|
https://en.wikipedia.org/wiki/Archibald%20Burns%20%28photographer%29
|
Archibald Burns (1831–1880) was a Scottish photographer based in Edinburgh, Scotland, and active from 1858 to 1880. He documented the city through various publications and recorded the historic buildings in a section of the city that was cleared for improvements in the 1860s.
Life and career
Burns became active in photography as an amateur in the 1850s. He became a member of the Photographic Society of Scotland in 1858 and was one of the first members of the Edinburgh Photographic Society in 1861. He pursued primarily landscape and architectural photography and capitalized on new tourist markets for illustrated books and views in the latter part of the nineteenth century.
Burns's first professional photography studio was located at 22 Calton Stairs from 1861 to 1871, at which point he established his business at the Rock House until 1880. Through parts of 1870 and 1871, Burns shared the Rock House, which had earlier been the studio of the important Scottish calotypists Hill & Adamson, with the Glaswegian photographer Thomas Annan.
Burns died in 1880 and was buried at Warriston Cemetery. The contents of his studio – inventory, materials, and hardware – were put up for sale in May of that year.
Works
Archibald Burns promoted himself as a landscape photographer and sold individual prints, stereographs, cabinet cards, and magic lantern slides of views of Edinburgh and surrounding area.
Burns illustrated two books on Edinburgh published in 1868, three years before he took his series of photographs of closes and wynds for the Edinburgh Improvement Trust (January and February 1871). The text in Picturesque "Bits" from Old Edinburgh (1st ed. 1868) emphasizes the architectural history of Scotland and the importance of photography in preserving the knowledge of fading vernacular styles, and ends with a question regarding the future of Scottish architecture. The book is illustrated with 15 tipped-in albumen prints by Burns and 8 woodcuts by Charles Paton after drawings in Daniel
|
https://en.wikipedia.org/wiki/Vincent%20Wigglesworth
|
Sir Vincent Brian Wigglesworth CBE FRS (17 April 1899 – 11 February 1994) was a British entomologist who made significant contributions to the field of insect physiology. He established the field in a textbook which was updated in a number of editions.
In particular, he studied metamorphosis. His most significant contribution was the discovery that neurosecretory cells in the brain of the South American kissing bug, Rhodnius prolixus, secrete prothoracicotropic hormone (PTTH), which triggers the prothoracic gland to release ecdysone, the hormone that regulates the process of metamorphosis. This was the first experimental confirmation of the function of neurosecretory cells. He went on to discover another hormone, called the juvenile hormone, which prevented the development of adult characteristics in R. prolixus until the insect had reached the appropriate larval stage. Wigglesworth was able to distort the developmental phases of the insect by controlling levels of this hormone. From these observations, Wigglesworth was able to develop a coherent theory of how an insect's genome can selectively activate hormones which determine its development and morphology.
Personal life
Wigglesworth served in the Royal Field Artillery in France in World War I. He received his degree from the University of Cambridge and lectured at the London School of Hygiene and Tropical Medicine, the University of London, and finally at the University of Cambridge.
He was named Quick Professor of Biology at the University of Cambridge in 1952, appointed CBE in 1951, and knighted in 1964.
Wigglesworth was President of the Royal Entomological Society from 1963 to 1964 and the Association of Applied Biologists from 1966 to 1967. He was elected to the American Academy of Arts and Sciences in 1960, the United States National Academy of Sciences in 1971, and the American Philosophical Society in 1982.
He married Mable K Semple in St Albans in 1922. They had four children.
The bacterium Wiggleswor
|
https://en.wikipedia.org/wiki/Lobucavir
|
Lobucavir (previously known as BMS-180194, Cyclobut-G) is an antiviral drug that shows broad-spectrum activity against herpesviruses, hepatitis B and other hepadnaviruses, HIV/AIDS and cytomegalovirus. It initially demonstrated positive results in human clinical trials against hepatitis B with minimal adverse effects but was discontinued from further development following the discovery of increased risk of cancer associated with long-term use in mice. Although this carcinogenic risk is present in other antiviral drugs, such as zidovudine and ganciclovir that have been approved for clinical use, development was halted by Bristol-Myers Squibb, its manufacturer.
Medical use
Lobucavir has been shown to exhibit antiviral activity against herpesviruses, hepatitis B, HIV/AIDS, and human cytomegalovirus. It reached phase III clinical trials for hepatitis B and herpesvirus infections, phase II clinical trials for cytomegalovirus, and underwent a pilot study for the treatment of AIDS prior to discontinuation.
Adverse effects
In early clinical trials, Lobucavir was relatively well tolerated, and adverse effects did not lead to its discontinuation. Commonly reported effects included headache, fatigue, diarrhea, abdominal pain, and flu-like symptoms common to other nucleoside analogs. Subsequent studies, however, identified Lobucavir-induced carcinogenesis associated with long-term use in mice, which led to the drug's withdrawal from clinical trials in 1999.
Mechanism of action
Lobucavir is a guanine analog that interferes with viral DNA polymerase. It must be phosphorylated into its triphosphate form by intracellular enzymes within infected cells before it exhibits antiviral activity. In hepatitis B studies, Lobucavir has been found to inhibit viral primer synthesis, reverse transcription, and DNA-dependent DNA polymerization by acting as a non-obligate chain terminator of the viral polymerase. Unlike traditional chain terminators that lack a 3'-OH group to
|
https://en.wikipedia.org/wiki/Capybara%20%28software%29
|
Capybara is a web-based test automation software that simulates scenarios for user stories and automates web application testing for behavior-driven software development. It is written in the Ruby programming language.
Capybara can mimic actions of real users interacting with web-based applications. It can receive pages, parse the HTML and submit forms.
Background and motivation
During the software development process (especially in Agile and test-driven development environments), as the size of the test suite increases, it becomes difficult to manage tests that are complex and not modular.
By extending the human-readable behavior-driven development style of frameworks such as Cucumber and RSpec into the automation code itself, Capybara aims to develop simple web-based automated tests.
Anatomy of Capybara
Capybara is a Ruby library (also referred to as a gem) that is used with an underlying web-based driver. It consists of a user-friendly DSL (Domain Specific Language) which describes actions that are executed by the underlying web driver.
When the page is loaded using the DSL (and underlying web driver), Capybara will attempt to locate the relevant element in the DOM (Document Object Model) and execute an action on it, such as clicking a button or a link.
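As a sketch of this flow, a minimal feature spec written in Capybara's DSL might look like the following; the URL, field labels, and expected page content are hypothetical placeholders, not part of any real application:

```ruby
require 'capybara/rspec'

# Hypothetical feature spec: the path, labels, and expected
# content below are placeholders for illustration only.
RSpec.describe 'signing in', type: :feature do
  it 'greets the user after login' do
    visit '/login'                         # driver loads the page
    fill_in 'Email', with: 'user@example.com'
    fill_in 'Password', with: 'secret'
    click_button 'Sign in'                 # Capybara locates the button in the DOM
    expect(page).to have_content('Welcome back')
  end
end
```

Each DSL call (`visit`, `fill_in`, `click_button`) is delegated to the configured driver, so the same spec can run unchanged against RackTest or Selenium.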
Drivers
By default, Capybara uses the :rack_test driver which does not have any support for executing JavaScript. Drivers can be switched in Before and After blocks. Some of the web drivers supported by Capybara are mentioned below.
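For example, a test suite might keep the fast default driver for most specs and reserve a JavaScript-capable driver for the specs that need it; the settings below are a typical configuration sketch:

```ruby
require 'capybara'

# Fast, JavaScript-free driver used for ordinary specs.
Capybara.default_driver = :rack_test

# Driver used for specs flagged as requiring JavaScript.
Capybara.javascript_driver = :selenium
```

Keeping RackTest as the default and switching to Selenium only where JavaScript is exercised keeps the bulk of the suite fast while still covering browser-dependent behavior.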
RackTest
Written in Ruby, Capybara's default driver RackTest does not require a server to be started since it directly interacts with Rack interfaces. Consequently, it can only be used for Rack applications.
Selenium
Selenium-webdriver, which is widely used in web-based automation frameworks, is supported by Capybara. Unlike Capybara's default driver, it supports JavaScript, can access HTTP resources outside of the application, and can also be set up for testing in headless mode which is
|
https://en.wikipedia.org/wiki/Putty%20knife
|
A putty knife is a specialized tool used when glazing single-glazed windows, to work putty around the edges of each pane of glass. An experienced glazier will apply the putty by hand, and then smooth it with the knife. Modern insulated glazing may use other ways of securing the glass to the window frame.
A spackle knife (called a scraper in British English, also known as a spatula in American English) is also commonly called a "putty knife", and is used for scraping surfaces or spreading material such as plaster in various construction trades. Widths from 1" to 5" or 6" are commonly available. Wider-bladed knives up to about 12" are used for drywall (sheetrock) work. Larger blades are made, but generally lack the stability of the smaller blades and do not produce a perfectly flat surface.
Stiff-blade knives, typically 1 mm or .040" thick, are suitable for scraping. Flexible-blade knives, typically 0.5 mm or .020" thick, are suitable for spreading. Due to the conductive nature of metallic blades, they should be kept at a safe distance from electrical components.
Disposable knives, with handle and blade molded as a single piece of plastic, are suitable for occasional jobs such as spreading roof patching tar or mixing two-part adhesives, avoiding laborious cleanup which may involve hazardous solvents.
See also
Taping knife
|