| source | text |
|---|---|
https://en.wikipedia.org/wiki/Acid%20growth | Acid growth refers to the ability of plant cells and plant cell walls to elongate or expand quickly at low (acidic) pH. The cell wall needs to be modified in order to maintain the turgor pressure. This modification is controlled by plant hormones like auxin, which also controls the expression of some cell wall genes. This form of growth does not involve an increase in cell number. During acid growth, plant cells enlarge rapidly because the cell walls are made more extensible by expansin, a pH-dependent wall-loosening protein. Expansin loosens the network-like connections between cellulose microfibrils within the cell wall, which allows the cell volume to increase by turgor and osmosis. A typical sequence leading up to this would involve the introduction of a plant hormone (auxin, for example) that causes protons (H+ ions) to be pumped out of the cell into the cell wall. As a result, the cell wall solution becomes more acidic. Various scientists have suggested that the epidermis is a unique target of auxin, but this theory has been disproved over time. The acidification activates expansin, causing the wall to become more extensible and to undergo wall stress relaxation, which enables the cell to take up water and to expand. The acid growth theory has been very controversial in the past. |
https://en.wikipedia.org/wiki/Adapter%20molecule%20crk | Adapter molecule crk also known as proto-oncogene c-Crk is a protein that in humans is encoded by the CRK gene.
The CRK protein participates in the Reelin signaling cascade downstream of DAB1.
Function
Adapter molecule crk is a member of an adapter protein family that binds to several tyrosine-phosphorylated proteins. This protein has several SH2 and SH3 domains (src-homology domains) and is involved in several signaling pathways, recruiting cytoplasmic proteins in the vicinity of tyrosine kinase through SH2-phosphotyrosine interaction. The N-terminal SH2 domain of this protein functions as a positive regulator of transformation whereas the C-terminal SH3 domain functions as a negative regulator of transformation. Two alternative transcripts encoding different isoforms with distinct biological activity have been described.
Crk together with CrkL participates in the Reelin signaling cascade downstream of DAB1.
v-Crk, a transforming oncoprotein from avian sarcoma viruses, is a fusion of the viral "gag" protein with the SH2 and SH3 domains of cellular Crk. The name Crk comes from "CT10 Regulator of Kinase", where CT10 is the avian virus from which a protein was isolated that lacks kinase domains but is capable of stimulating tyrosine phosphorylation in cells.
Crk should not be confused with Src, which also has cellular (c-Src) and viral (v-Src) forms and is involved in some of the same signaling pathways but is a protein tyrosine-kinase.
Interactions
CRK (gene) has been shown to interact with:
BCAR1,
Cbl gene,
Dock180,
EPS15,
Epidermal growth factor receptor,
Grb2,
IRS4,
MAP4K1,
MAPK8,
NEDD9,
PDGFRA,
PDGFRB,
PTK2,
Paxillin,
RAPGEF1,
RICS,
SH3KBP1, and
SOS1.
See also
CrkL, "Crk-like" protein |
https://en.wikipedia.org/wiki/Tony%20Koester | J. Anthony Koester, more commonly known as Tony Koester, is a well-known member of the United States model railroading community. Along with his friend Allen McClelland and his Virginian & Ohio, Koester popularized the idea of proto-freelancing with his HO scale model railroad, the Allegheny Midland. At Purdue University in the early 1960s, he studied electrical engineering, communication, and art. While at Purdue, he was also a member and president of the Purdue Railroad Club. In 1966, with Glenn Pizer he co-founded the Nickel Plate Road Historical & Technical Society to preserve the memory of his favorite railroad.
In 1969, Koester and his wife and children relocated from Indiana to northeastern New Jersey so that he could take a position with Carstens Publications as editor of Railroad Model Craftsman. In 1973, the company relocated to Newton in northwestern New Jersey, and the Koesters built a new home that housed his last two model railroads. He had previously developed a close friendship with Jim Boyd, who joined Carstens in 1971 and in 1975 became the editor of Railfan & Railroad. It was Koester's exposure to the V&O and eastern mountain coal railroading in the Appalachians that led him to develop the concept of the Allegheny Midland. Blending Nickel Plate equipment and operation with some Chesapeake & Ohio (C&O) equipment, structures, and scenery, the Allegheny Midland became the Nickel Plate's plausible West Virginia coal-hauler. Regular updates in the pages of Railroad Model Craftsman made the Allegheny Midland known to modelers across America and beyond.
Koester left Carstens in 1981 and took a job with Bell Laboratories, editing their publications for two decades. In November 1985, he also began writing a monthly column called "Trains of Thought" in the pages of Model Railroader, published by Kalmbach. In 1995, he became the founding editor of the annual Model Railroad Planning. After 20 years editing telecommunication journals and the corporate science magazine a |
https://en.wikipedia.org/wiki/Oded%20Schramm | Oded Schramm (December 10, 1961 – September 1, 2008) was an Israeli-American mathematician known for the invention of the Schramm–Loewner evolution (SLE) and for working at the intersection of conformal field theory and probability theory.
Biography
Schramm was born in Jerusalem. His father, Michael Schramm, was a biochemistry professor at the Hebrew University of Jerusalem.
He attended Hebrew University, where he received his bachelor's degree in mathematics and computer science in 1986 and his master's degree in 1987, under the supervision of Gil Kalai. He then received his PhD from Princeton University in 1990 under the supervision of William Thurston.
After receiving his doctorate, he worked for two years at the University of California, San Diego, and then had a permanent position at the Weizmann Institute from 1992 to 1999. In 1999 he moved to the Theory Group at Microsoft Research in Redmond, Washington, where he remained for the rest of his life.
He and his wife had two children, Tselil and Pele. Tselil is an assistant professor of statistics at Stanford University.
On September 1, 2008, Schramm fell to his death while scrambling Guye Peak, north of Snoqualmie Pass in Washington.
Research
A constant theme in Schramm's research was the exploration of relations between discrete models and their continuous scaling limits, which for a number of models turn out to be conformally invariant.
Schramm's most significant contribution was the invention of Schramm–Loewner evolution, a tool which has paved the way for mathematical proofs of conjectured scaling limit relations on models from statistical mechanics such as self-avoiding random walk and percolation. This technique has had a profound impact on the field. It has been recognized by many awards to Schramm and others, including a Fields Medal to Wendelin Werner, who was one of Schramm's principal collaborators, along with Gregory Lawler. The New York Times wrote in his obituary:
Schramm's doctorate |
https://en.wikipedia.org/wiki/Chance%20seedling | A chance seedling is a plant that is the product of unintentional breeding.
Identifying the parent plants of a chance seedling may be difficult. It may be necessary to genetically analyse the seedling and surrounding plants to be sure. Plants that come from the artificial union of gametes from a maternal and paternal source are not chance seedlings.
A chance seedling may be a genetically unique individual with desirable characteristics that is then intentionally bred. The Kindred Spirit Hybrid Oak and the Granny Smith, Wolf River, Lady Alice, Red Delicious, Gravenstein, Braeburn, Samarbehisht Chausa, Calville Blanc d'hiver, Belle de Boskoop and Baldwin apples are examples of varieties that started with chance seedlings that were selected and assigned cultivar status owing to their desirable properties.
See also
Volunteer (botany) |
https://en.wikipedia.org/wiki/Reading%20Blaster%202000 | Reading Blaster 2000 is a remake of Reading Blaster: Invasion of the Word Snatchers created by Davidson & Associates in 1996. The game was later resold as Reading Blaster: Ages 6–9 in 1998 and, under Knowledge Adventure, as Reading Blaster for 3rd Grade in 2000. It is part of the Blaster Learning System series.
The game features a premise involving the characters depicted in previous Blaster products (Blasternaut, Galactic Commander and Spot) competing in The Challenge of the Reading Gladiators, a game show set in outer space. The user chooses which Blaster character to play and may either compete against a friend, playing another Blaster character, or the computer, in which case the player's opponent is Illitera, who was previously the main villain of the original Reading Blaster.
This is one of the few multiplayer video games in the Blaster series. It is also one of the few entries in the series to not have a plot involving foiling a villainous character's evil plan. Also in this game, Illitera becomes the only recurring villain in the Blaster games, although here she does nothing worse than bad-mouthing the show's hosts and eliciting boos from the unseen audience.
See also
Math Blaster Episode I: In Search of Spot
Math Blaster Episode II: Secret of the Lost City
External links
SuperKids review |
https://en.wikipedia.org/wiki/Reed%E2%80%93Muller%20expansion | In Boolean logic, a Reed–Muller expansion (or Davio expansion) is a decomposition of a Boolean function.
For a Boolean function $f(x_1, \ldots, x_n)$ we call
$$f_{x_i}(x) = f(x_1, \ldots, x_{i-1}, 1, x_{i+1}, \ldots, x_n)$$
$$f_{\bar{x}_i}(x) = f(x_1, \ldots, x_{i-1}, 0, x_{i+1}, \ldots, x_n)$$
the positive and negative cofactors of $f$ with respect to $x_i$, and
$$\frac{\partial f}{\partial x_i} = f_{x_i}(x) \oplus f_{\bar{x}_i}(x)$$
the boolean derivation of $f$ with respect to $x_i$, where $\oplus$ denotes the XOR operator.
Then we have for the Reed–Muller or positive Davio expansion:
$$f = f_{\bar{x}_i} \oplus x_i \frac{\partial f}{\partial x_i}.$$
Description
This equation is written in a way that it resembles a Taylor expansion of $f$ about $x_i = 0$. There is a similar decomposition corresponding to an expansion about $x_i = 1$ (negative Davio expansion):
$$f = f_{x_i} \oplus \bar{x}_i \frac{\partial f}{\partial x_i}.$$
Repeated application of the Reed–Muller expansion results in an XOR polynomial in $x_1, \ldots, x_n$:
$$f = a_1 \oplus a_2 x_1 \oplus a_3 x_2 \oplus a_4 x_1 x_2 \oplus \cdots \oplus a_{2^n} x_1 x_2 \cdots x_n.$$
This representation is unique and sometimes also called Reed–Muller expansion.
E.g. for $n = 2$ the result would be
$$f(x_1, x_2) = f_{\bar{x}_1 \bar{x}_2} \oplus \frac{\partial f_{\bar{x}_2}}{\partial x_1} x_1 \oplus \frac{\partial f_{\bar{x}_1}}{\partial x_2} x_2 \oplus \frac{\partial^2 f}{\partial x_1 \partial x_2} x_1 x_2$$
where
$$\frac{\partial^2 f}{\partial x_1 \partial x_2} = f_{\bar{x}_1 \bar{x}_2} \oplus f_{\bar{x}_1 x_2} \oplus f_{x_1 \bar{x}_2} \oplus f_{x_1 x_2}.$$
For $n = 3$ the result would be
$$f(x_1, x_2, x_3) = f_{\bar{x}_1 \bar{x}_2 \bar{x}_3} \oplus \frac{\partial f_{\bar{x}_2 \bar{x}_3}}{\partial x_1} x_1 \oplus \frac{\partial f_{\bar{x}_1 \bar{x}_3}}{\partial x_2} x_2 \oplus \frac{\partial f_{\bar{x}_1 \bar{x}_2}}{\partial x_3} x_3 \oplus \frac{\partial^2 f_{\bar{x}_3}}{\partial x_1 \partial x_2} x_1 x_2 \oplus \frac{\partial^2 f_{\bar{x}_2}}{\partial x_1 \partial x_3} x_1 x_3 \oplus \frac{\partial^2 f_{\bar{x}_1}}{\partial x_2 \partial x_3} x_2 x_3 \oplus \frac{\partial^3 f}{\partial x_1 \partial x_2 \partial x_3} x_1 x_2 x_3$$
where
$$\frac{\partial^3 f}{\partial x_1 \partial x_2 \partial x_3} = f_{\bar{x}_1 \bar{x}_2 \bar{x}_3} \oplus f_{\bar{x}_1 \bar{x}_2 x_3} \oplus f_{\bar{x}_1 x_2 \bar{x}_3} \oplus f_{\bar{x}_1 x_2 x_3} \oplus f_{x_1 \bar{x}_2 \bar{x}_3} \oplus f_{x_1 \bar{x}_2 x_3} \oplus f_{x_1 x_2 \bar{x}_3} \oplus f_{x_1 x_2 x_3}.$$
Geometric interpretation
This case can be given a cubical geometric interpretation (or a graph-theoretic interpretation) as follows: when moving along the edge from $000$ to $001$, XOR up the functions of the two end-vertices of the edge in order to obtain the coefficient of $x_3$. To move from $000$ to $011$ there are two shortest paths: one is a two-edge path passing through $001$ and the other one a two-edge path passing through $010$. These two paths encompass four vertices of a square, and XORing up the functions of these four vertices yields the coefficient of $x_2 x_3$. Finally, to move from $000$ to $111$ there are six shortest paths which are three-edge paths, and these six paths encompass all the vertices of the cube, therefore the coefficient of $x_1 x_2 x_3$ can be obtained by XORing up the functions of all eight of the vertices. (The other, unmentioned coefficients can be obtained by symmetry.)
Paths
The shortest paths all involve monotonic changes to the values of the variables, whereas non-shortest paths all involve non-monotonic changes of such variables; or, to put it another way, the shortest paths all have lengths equal to the Hamming distance between the starting and destination vertices. This means that it should be easy to generalize an alg |
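The expansion above is mechanical enough to compute directly. Below is a small illustrative sketch (ours, not from the article) that derives all Reed–Muller coefficients of a Boolean function from its truth table via the standard in-place XOR butterfly, which amounts to repeated positive Davio expansion; the function name and 0-based indexing convention are our own choices.

```python
# Illustrative sketch: computing the Reed-Muller coefficients of a Boolean
# function from its truth table via the XOR (Moebius) transform.

def reed_muller_coefficients(truth_table):
    """truth_table[i] is f evaluated at the bits of i (x1 = least significant)."""
    n = len(truth_table).bit_length() - 1
    assert len(truth_table) == 1 << n, "truth table length must be a power of two"
    a = list(truth_table)
    for i in range(n):                      # one pass per variable x_{i+1}
        for j in range(1 << n):
            if j & (1 << i):                # 'positive' half of each butterfly
                a[j] ^= a[j ^ (1 << i)]     # XOR in the negative cofactor
    return a                                # a[m] is the coefficient of the
                                            # monomial given by the bits of m

# Example: f(x1, x2) = x1 OR x2 has truth table [0, 1, 1, 1] and the unique
# XOR polynomial x1 XOR x2 XOR x1*x2, i.e. coefficients [0, 1, 1, 1].
assert reed_muller_coefficients([0, 1, 1, 1]) == [0, 1, 1, 1]
```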
https://en.wikipedia.org/wiki/Pr%C3%AAt%20%C3%A0%20Voter | Prêt à Voter is an E2E voting system devised by Peter Ryan of the University of Luxembourg. It aims to provide guarantees of accuracy of the count and ballot privacy that are independent of software, hardware etc. Assurance of accuracy flows from maximal transparency of the process, consistent with maintaining ballot privacy. In particular, Prêt à Voter enables voters to confirm that their vote is accurately included in the count whilst avoiding dangers of coercion or vote buying.
The key idea behind the Prêt à Voter approach is to encode the vote using a randomized candidate list. The randomisation of the candidate list on each ballot form ensures the secrecy of each vote. Incidentally, it also removes any bias towards the top candidate that can occur with a fixed ordering.
The value printed on the bottom of the receipt is the key to extraction of the vote. Buried cryptographically in this value is the information needed to reconstruct the candidate order and so extract the vote encoded on the receipt. This information is encrypted with secret keys shared across a number of tellers. Thus, only the set of tellers acting together are able to interpret the vote encoded on the receipt. No individual agent or machine involved in the election should ever be able to tie a particular voter to a particular decrypted vote.
After the election, voters (or perhaps proxies acting on their behalf) can visit the Web Bulletin Board (WBB) and confirm that their receipts appear correctly. Once this check is complete, the tellers take over and perform anonymising mixes and decryption of the receipts. All the intermediate stages of this process are posted to the WBB and are audited later.
There are various auditing mechanisms to ensure that all the steps, the creation of the ballot forms, the mixing and decryption and so on were all performed correctly, but these are carefully designed so as not to impinge on ballot privacy.
Example
Suppose that our voter is called Anne. At the polling station, |
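To make the flow above concrete, here is a deliberately simplified toy model. Real Prêt à Voter uses layered public-key encryption, threshold tellers, and verifiable mixes; this sketch merely XOR-shares the seed that determines the candidate order among hypothetical tellers, so that no single teller can link a receipt to a vote. The candidate names, teller count, and sharing scheme are illustrative assumptions, not the actual protocol.

```python
# Toy model of the randomized-candidate-list idea: the order is derived from
# a seed that is XOR-shared among the tellers, so only all tellers together
# can decode a receipt. Not real cryptography.
import random

CANDIDATES = ["Asterix", "Idefix", "Obelix", "Panoramix"]

def make_ballot(num_tellers=3):
    seed = random.getrandbits(32)
    order = list(CANDIDATES)
    random.Random(seed).shuffle(order)           # randomized candidate list
    shares = [random.getrandbits(32) for _ in range(num_tellers - 1)]
    last = seed
    for s in shares:
        last ^= s
    shares.append(last)                          # XOR of all shares == seed
    return order, shares                         # one share per teller

def decode_receipt(mark_index, shares):
    """All tellers together recover the order and hence the chosen candidate."""
    seed = 0
    for s in shares:
        seed ^= s
    order = list(CANDIDATES)
    random.Random(seed).shuffle(order)
    return order[mark_index]

order, shares = make_ballot()
choice = order.index("Obelix")                   # voter marks Obelix's row
assert decode_receipt(choice, shares) == "Obelix"
```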
https://en.wikipedia.org/wiki/Fluorescence%20loss%20in%20photobleaching | Fluorescence Loss in Photobleaching (FLIP) is a fluorescence microscopy technique used to examine movement of molecules inside cells and membranes. A cell membrane is typically labeled with a fluorescent dye to allow for observation. A specific area of this labeled section is then bleached several times using the beam of a confocal laser scanning microscope. After each imaging scan, bleaching occurs again. This cycle is repeated several times to ensure that all accessible fluorophores are bleached, since mobile unbleached fluorophores continually exchange into the bleached region as molecules move through the cell or membrane. The amount of fluorescence from that region is then measured over a period of time to determine the effect of the photobleaching on the cell as a whole.
Experimental Setup
Before photobleaching can occur, cells must be injected with a fluorescent protein, often a green fluorescent protein (GFP), which will allow the targeted proteins to fluoresce and therefore be followed throughout the process. Then, a region of interest must be defined. This initial region of interest usually contains the whole cell or several cells. In FLIP, photobleaching occurs just outside the region of interest; therefore a photobleaching region also needs to be defined. A third region, the region where measurement will take place, needs to be determined as well. A number of initial scans need to be made to determine fluorescence before photobleaching. These scans will serve as the control scans, to which the photobleached scans will be compared later on. Photobleaching can then occur. Between each bleach pulse, it is necessary to allow time for recovery of fluorescent material. It is also important to take several scans of the region of interest immediately after each bleach pulse for further study. The change in fluorescence at the region of interest can then be quantified in one of three ways. The most common is to choose the location, size and number of the regions of interest based on vi |
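As a rough illustration of how such measurements are commonly quantified, the sketch below normalizes a measurement-region intensity trace to an unbleached control region and to the pre-bleach baseline. This is one plausible analysis under stated assumptions, not a prescribed FLIP protocol; the array names and sample numbers are made up.

```python
# Minimal FLIP quantification sketch, assuming intensity traces sampled once
# per bleach cycle: correct for acquisition photobleaching with an unbleached
# control, then normalize to the pre-bleach value.
import numpy as np

def flip_curve(roi_intensity, control_intensity, n_prebleach=3):
    roi = np.asarray(roi_intensity, dtype=float)
    ctrl = np.asarray(control_intensity, dtype=float)
    corrected = roi / ctrl                       # correct for monitor bleaching
    baseline = corrected[:n_prebleach].mean()    # average of pre-bleach scans
    return corrected / baseline                  # 1.0 = no loss, 0.0 = total loss

roi = [100, 99, 101, 80, 64, 52, 43]             # made-up measurement ROI trace
ctrl = [100, 100, 100, 99, 99, 98, 98]           # made-up control-cell trace
print(np.round(flip_curve(roi, ctrl), 2))
```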
https://en.wikipedia.org/wiki/Thiamine%20triphosphate | Thiamine triphosphate (ThTP) is a biomolecule found in most organisms including bacteria, fungi, plants and animals. Chemically, it is the triphosphate derivative of the vitamin thiamine.
Function
It has been proposed that ThTP has a specific role in nerve excitability, but this has never been confirmed, and recent results suggest that ThTP probably plays a role in cell energy metabolism. Low or absent levels of thiamine triphosphate have been found in Leigh's disease.
In E. coli, ThTP is accumulated in the presence of glucose during amino acid starvation. On the other hand, suppression of the carbon source leads to the accumulation of adenosine thiamine triphosphate (AThTP).
Metabolism
It has been shown that in brain ThTP is synthesized in mitochondria by a chemiosmotic mechanism, perhaps similar to ATP synthase. In mammals, ThTP is hydrolyzed to thiamine pyrophosphate (ThDP) by a specific thiamine-triphosphatase. It can also be converted into ThDP by thiamine-diphosphate kinase.
History
Thiamine triphosphate (ThTP) was chemically synthesized in 1948, at a time when the only organic triphosphate known was ATP. The first claim of the existence of ThTP in living organisms was made in rat liver, followed by baker's yeast. Its presence was later confirmed in rat tissues and in plant germs, but not in seeds, where thiamine was essentially unphosphorylated. In all those studies, ThTP was separated from other thiamine derivatives using a paper chromatographic method, followed by oxidation into fluorescent thiochrome compounds with ferricyanide in alkaline solution. This method is at best semi-quantitative, and the development of liquid chromatographic methods suggested that ThTP represents far less than 10% of total thiamine in animal tissues. |
https://en.wikipedia.org/wiki/ESDS%20International | ESDS International was a Jisc/ESRC funded service which provided the UK academic community with free online access to the major databanks produced by international governmental organisations such as the World Bank, International Monetary Fund and the United Nations. The service also supported the use of these databanks in teaching and research through the provision of a helpdesk for user queries, comprehensive documentation and training.
ESDS International also provided access to a range of international survey datasets including the European Social Survey and Eurobarometer.
The service aimed to promote and facilitate increased and more effective use of international datasets in research, learning and teaching across a range of disciplines.
Databases hosted by ESDS International included the major statistical publications of:
International Monetary Fund
World Bank
International Energy Agency
OECD
United Nations
Eurostat
International Labour Organization
UK Office for National Statistics
In July 2012, the Economic and Social Research Council (ESRC) announced that all of ESDS would become part of the UK Data Service, which was established as of October 1, 2012 - see http://www.esrc.ac.uk/research/our-research/uk-data-service/. |
https://en.wikipedia.org/wiki/Precipitin | A precipitin is an antibody which can precipitate out of a solution upon antigen binding.
Precipitin reaction
The precipitin reaction provided the first quantitative assay for antibody. The precipitin reaction is based upon the interaction of antigen with antibody leading to the production of antigen-antibody complexes.
To produce a precipitin reaction, varying amounts of soluble antigen are added to a fixed amount of serum containing antibody. As the amount of antigen added increases, the reaction passes through three zones:
In the zone of antibody excess, each molecule of antigen is bound extensively by antibody, leaving few opportunities for cross-linking to other molecules of antigen. The average size of the antibody-antigen complexes is small; cross-linking between antigen molecules by antibody is rare.
In the zone of equivalence, the formation of precipitin complexes is optimal. Extensive lattices of antigen and antibody are formed by cross-linking.
At high concentrations of antigen, the average size of antibody-antigen complexes is once again small because few antibody molecules are available to cross-link antigen molecules together.
The small, soluble immune complexes formed in vivo in the zone of antigen excess can cause a variety of pathological syndromes.
Antibody can only precipitate antigenic substrates that are multivalent—that is, only antigens that have multiple antibody-binding sites, or epitopes. This allows for the formation of large antigen:antibody complexes. |
https://en.wikipedia.org/wiki/Derivative%20chromosome | A derivative chromosome (der) is a structurally rearranged chromosome generated either by a chromosome rearrangement involving two or more chromosomes or by multiple chromosome aberrations within a single chromosome (e.g. an inversion and a deletion of the same chromosome, or deletions in both arms of a single chromosome). The term always refers to the chromosome that has an intact centromere.
Derivative chromosomes are designated by the abbreviation der when used to describe a karyotype. The derivative chromosome must be specified in parentheses, followed by all aberrations involved in this derivative chromosome. The aberrations must be listed from pter to qter and not be separated by a comma.
For example, 46,XY,der(4)t(4;8)(p16;q22)t(4;9)(q31;q31) would refer to a derivative chromosome 4 which is the result of a translocation between the short arm of chromosome 4 at region 1, band 6 and the long arm of chromosome 8 at region 2, band 2, and a translocation between the long arm of chromosome 4 at region 3, band 1 and the long arm of chromosome 9 at region 3, band 1. As for the initial string "46,XY", it only signifies that these aberrations occur in an organism with this set of chromosomes, i.e. a male human. |
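For illustration only, here is a hypothetical helper that unpacks the der(...) notation used in the example above. It handles just the der(N)t(...)(...) pattern shown here; real ISCN parsing is considerably more involved.

```python
# Hypothetical parser for the derivative-chromosome pattern in the example.
import re

def parse_derivative(karyotype):
    der = re.search(r"der\((\w+)\)", karyotype).group(1)
    translocations = re.findall(r"t\(([^)]*)\)\(([^)]*)\)", karyotype)
    return der, [
        dict(zip(chroms.split(";"), bands.split(";")))   # chromosome -> breakpoint
        for chroms, bands in translocations
    ]

der, events = parse_derivative("46,XY,der(4)t(4;8)(p16;q22)t(4;9)(q31;q31)")
print(der)     # 4
print(events)  # [{'4': 'p16', '8': 'q22'}, {'4': 'q31', '9': 'q31'}]
```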
https://en.wikipedia.org/wiki/Morse%E2%80%93Palais%20lemma | In mathematics, the Morse–Palais lemma is a result in the calculus of variations and theory of Hilbert spaces. Roughly speaking, it states that a smooth enough function near a critical point can be expressed as a quadratic form after a suitable change of coordinates.
The Morse–Palais lemma was originally proved in the finite-dimensional case by the American mathematician Marston Morse, using the Gram–Schmidt orthogonalization process. This result plays a crucial role in Morse theory. The generalization to Hilbert spaces is due to Richard Palais and Stephen Smale.
Statement of the lemma
Let $(H, \langle \cdot, \cdot \rangle)$ be a real Hilbert space, and let $U$ be an open neighbourhood of the origin in $H$. Let $f : U \to \mathbb{R}$ be a $(k+2)$-times continuously differentiable function with $k \geq 1$, that is, $f \in C^{k+2}(U; \mathbb{R})$. Assume that $f(0) = 0$ and that $0$ is a non-degenerate critical point of $f$, that is, the second derivative $D^2 f(0)$ defines an isomorphism of $H$ with its continuous dual space $H^{*}$ by
$$H \ni x \mapsto D^{2} f(0)(x, \cdot) \in H^{*}.$$
Then there exists a subneighbourhood $V$ of $0$ in $U$, a diffeomorphism $\varphi : V \to V$ that is $C^{k}$ with $C^{k}$ inverse, and an invertible symmetric operator $A : H \to H$, such that
$$f(x) = \langle A \varphi(x), \varphi(x) \rangle \quad \text{for all } x \in V.$$
Corollary
Let $f \in C^{k+2}(U; \mathbb{R})$ be such that $0$ is a non-degenerate critical point. Then there exists a $C^{k}$-with-$C^{k}$-inverse diffeomorphism $\psi : V \to V$ and an orthogonal decomposition
$$H = G \oplus G^{\perp},$$
such that, if one writes
$$\psi(x) = y + z \quad \text{with } y \in G, \; z \in G^{\perp},$$
then
$$f(\psi(x)) = \langle y, y \rangle - \langle z, z \rangle \quad \text{for all } x \in V.$$ |
https://en.wikipedia.org/wiki/Neurotrophin-4 | Neurotrophin-4 (NT-4), also known as neurotrophin-5 (NT-5), is a protein that in humans is encoded by the NTF4 gene. It is a neurotrophic factor that signals predominantly through the TrkB receptor tyrosine kinase.
See also
Tropomyosin receptor kinase B § Agonists |
https://en.wikipedia.org/wiki/Misiurewicz%20point | In mathematics, a Misiurewicz point is a parameter value in the Mandelbrot set (the parameter space of complex quadratic maps) and also in real quadratic maps of the interval for which the critical point is strictly pre-periodic (i.e., it becomes periodic after finitely many iterations but is not periodic itself). By analogy, the term Misiurewicz point is also used for parameters in a multibrot set where the unique critical point is strictly pre-periodic (This term makes less sense for maps in greater generality that have more than one free critical point because some critical points might be periodic and others not). These points are named after the Polish-American mathematician Michał Misiurewicz, who was the first to study them.
Mathematical notation
A parameter $c$ is a Misiurewicz point $M_{k,n}$ if it satisfies the equations:
$$f_c^{(k)}(z_{cr}) = f_c^{(k+n)}(z_{cr})$$
and:
$$f_c^{(k-1)}(z_{cr}) \neq f_c^{(k+n-1)}(z_{cr})$$
so:
$$M_{k,n} = \{ c : f_c^{(k)}(z_{cr}) = f_c^{(k+n)}(z_{cr}) \}$$
where:
$z_{cr}$ is a critical point of $f_c$,
$k$ and $n$ are positive integers,
$f_c^{(k)}$ denotes the $k$-th iterate of $f_c$.
Name
The term "Misiurewicz point" is used ambiguously: Misiurewicz originally investigated maps in which all critical points were non-recurrent; that is, in which there exists a neighborhood for every critical point that is not visited by the orbit of this critical point. This meaning is firmly established in the context of the dynamics of iterated interval maps. Only in very special cases does a quadratic polynomial have a strictly periodic and unique critical point. In this restricted sense, the term is used in complex dynamics; a more appropriate one would be Misiurewicz–Thurston points (after William Thurston, who investigated post-critically finite rational maps).
Quadratic maps
A complex quadratic polynomial has only one critical point. By a suitable conjugation any quadratic polynomial can be transformed into a map of the form $P_c(z) = z^2 + c$, which has a single critical point at $z = 0$. The Misiurewicz points of this family of maps are roots of the equations:
$$P_c^{(k)}(0) = P_c^{(k+n)}(0),$$
subject to the condition that the critical point is not periodic, where:
k is the pre-period
n is the period |
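The defining equations can be checked numerically. The sketch below is our own brute-force search with hypothetical cutoffs, not a rigorous algebraic test; it finds the pre-period k and period n of the critical orbit of $P_c(z) = z^2 + c$ for two classical Misiurewicz parameters.

```python
# Numerical check that a parameter c makes the critical point z = 0 of
# P_c(z) = z^2 + c strictly pre-periodic. For parameters whose orbit escapes,
# this would overflow; max_iter and tol are illustrative cutoffs.

def preperiod_and_period(c, max_iter=50, tol=1e-9):
    orbit = [0j]
    for _ in range(max_iter):
        orbit.append(orbit[-1] ** 2 + c)
    for k in range(max_iter):               # candidate pre-period
        for n in range(1, max_iter - k):    # candidate period
            if abs(orbit[k] - orbit[k + n]) < tol:
                return k, n
    return None

print(preperiod_and_period(-2))     # (2, 1): 0 -> -2 -> 2 -> 2 -> ...
print(preperiod_and_period(1j))     # (2, 2): 0 -> i -> -1+i -> -i -> -1+i ...
```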
https://en.wikipedia.org/wiki/Slot%20%28computer%20architecture%29 | A slot comprises the operation issue and data path machinery surrounding a set of one or more execution units (also called functional units (FUs)) which share these resources. The term slot is common for this purpose in very long instruction word (VLIW) computers, where the relationship between an operation in an instruction and the pipeline that executes it is explicit. In dynamically scheduled machines, the concept is more commonly called an execute pipeline.
Modern conventional central processing units (CPUs) have several compute pipelines, for example: two arithmetic logic units (ALUs), one floating point unit (FPU), one SIMD unit such as MMX or Streaming SIMD Extensions (SSE), and one branch unit. Each of them can issue one instruction per basic instruction cycle but can have several instructions in process. These are what correspond to slots. The pipelines may have several FUs, such as an adder and a multiplier, but only one FU in a pipeline can be issued to in a given cycle. The FU population of a pipeline (slot) is a design option in a CPU. |
https://en.wikipedia.org/wiki/Ice%20Queen%20%28song%29 | "Ice Queen" is a song by Dutch symphonic metal band Within Temptation. It was released in June 2001 as the second single from their second studio album Mother Earth. The song was the band's commercial breakthrough, and it remains one of the band's most successful songs to date in Europe. It has been featured on the annual Dutch Top 2000 since 2011.
Along with the singles "Mother Earth," "Angels," and "Stand My Ground", "Ice Queen" has become one of the band's signature songs and is played as the closing song at almost every concert, except for some shows from The Unforgiving Tour onwards, where the band started to close with "Mother Earth" or "Stairway to the Skies".
Lyrics
Like many of Within Temptation's songs, the lyrics of "Ice Queen" take their inspiration from nature.
"It's a song about nature", said vocalist Sharon den Adel, in an interview with Dennis Weening on Westpop. "And how things go in nature". Guitarist Robert Westerholt further added that "it's about winter".
Video
There are two official videos for the song "Ice Queen". The first video, released only in the Netherlands even though it is generally known as the "German Version", starts with a girl checking a website for concert videos. She finds two links, one saying 'Within Temptation'. She clicks on that link, and the band's 2001 concert in Landgraaf is shown on her screen. While watching the concert on her computer, she clicks on a couple of links and information on the band appears on her screen.
The second video was made for international release. Sharon, the singer, is dancing against a background of a blue, starry sky. She is wearing a white dress and has white extensions in her hair. The other band members are also shown before various backgrounds made with green-screen effects, such as Robert, the rhythm guitarist, who is shown against a background of fire, or the drummer, who is shown against a background of a thunderstorm. After a while, all band members are shown together on a red planet |
https://en.wikipedia.org/wiki/Now%20You%20See%20Him%2C%20Now%20You%20Don%27t | Now You See Him, Now You Don't is a 1972 American science fiction comedy film starring Kurt Russell as a chemistry student who accidentally discovers the secret to invisibility. It is the second film in the Dexter Riley series.
Now You See Him, Now You Don't was the first Disney film to be shown on television in a two-hour time slot, in 1975. Previous television showings of Disney films had either shown them edited or split into two one-hour time slots.
Plot
At Medfield College, science buff Dexter Riley and his friends, including Richard Schuyler and Debbie Dawson, eavesdrop via a hidden walkie-talkie on a board meeting led by Dean Eugene Higgins, discussing the small college's continuing precarious finances. Later that afternoon, Professor Lufkin shows Higgins around the science laboratory where Dexter is working on an experiment with invisibility and another student, Druffle, explores the flight of bumblebees. That night, during a powerful thunderstorm, the laboratory is struck by lightning, resulting in the destruction of Dexter's work. The next day, as Dexter examines his burnt equipment with dismay, Higgins meets with A.J. Arno, a recently released prisoner who has also purchased Medfield's mortgage. When Dexter accidentally drops one half of his glasses into a container of his experimental formula, it appears as if the substance destroys them, but upon closer examination, Dexter realizes the frames are merely partially invisible. After several tests, Dexter places his fingers in the liquid and they disappear. Schuyler and Debbie arrive and are horrified to see Dexter with a partial hand, but Dexter insists Schuyler test the substance as well - admitting only afterward that he does not yet have an antidote and that it adheres firmly to all surfaces tested - but they just as quickly learn that it is water-soluble and rinses away cleanly.
Just then, Higgins brings Arno to visit the laboratory, stunning the students, as only two years earlier, Dexter was instrumental in ex |
https://en.wikipedia.org/wiki/Nitazoxanide | Nitazoxanide, sold under the brand name Alinia among others, is a broad-spectrum antiparasitic and broad-spectrum antiviral medication that is used in medicine for the treatment of various helminthic, protozoal, and viral infections. It is indicated for the treatment of infection by Cryptosporidium parvum and Giardia lamblia in immunocompetent individuals and has been repurposed for the treatment of influenza. Nitazoxanide has also been shown to have in vitro antiparasitic activity and clinical treatment efficacy for infections caused by other protozoa and helminths; evidence suggested that it possesses efficacy in treating a number of viral infections as well.
Chemically, nitazoxanide is the prototype member of the thiazolides, a class of drugs which are synthetic nitrothiazolyl-salicylamide derivatives with antiparasitic and antiviral activity. Tizoxanide, an active metabolite of nitazoxanide in humans, is also an antiparasitic drug of the thiazolide class.
Nitazoxanide tablets were approved as a generic medication in the United States in 2020.
Uses
Nitazoxanide is an effective first-line treatment for infection by Blastocystis species and is indicated for the treatment of infection by Cryptosporidium parvum or Giardia lamblia in immunocompetent adults and children. It is also an effective treatment option for infections caused by other protozoa and helminths (e.g., Entamoeba histolytica, Hymenolepis nana, Ascaris lumbricoides, and Cyclospora cayetanensis).
Chronic hepatitis B
Nitazoxanide alone has shown preliminary evidence of efficacy in the treatment of chronic hepatitis B over a one-year course of therapy. Nitazoxanide 500 mg twice daily resulted in a decrease in serum HBV DNA in all of 4 HBeAg-positive patients, with undetectable HBV DNA in 2 of 4 patients, loss of HBeAg in 3 patients, and loss of HBsAg in one patient. Seven of 8 HBeAg-negative patients treated with nitazoxanide 500 mg twice daily had undetectable HBV DNA and 2 had loss of HBsAg. Addit |
https://en.wikipedia.org/wiki/ProStores | ProStores was an e-commerce website hosting company owned by eBay. Formerly known as Kurant StoreSense, ProStores was acquired by eBay Inc. by the end of 2005, and its name was changed to ProStores by eBay.
ProStores' feature set included simple wizard-driven website creation, e-commerce capabilities, site design tools, and e-business management. Smaller merchants could also manage the entire process of posting and selling products on eBay using the ProStores interface. It also offered inventory management, supplier communication, and integration with QuickBooks and Dreamweaver.
eBay announced on July 1, 2014 that support for the platform would end February 1, 2015. |
https://en.wikipedia.org/wiki/IPHT%20Jena | The Leibniz Institute of Photonic Technology (IPHT — German: Institut für Photonische Technologien) is a non-university research facility in Jena, Thuringia, Germany. Focused on applications for various physical systems, the Institute's mandate is to find solutions to challenges in high technology systems. IPHT carries out research in the following areas: magnetics, quantum electronics, optics, microsystems, biophotonics and laser technology. The Institute works with both universities and companies.
The IPHT coordinates several EU projects funded by the European Commission:
Photonics4Life
S-Pulse
High-EF
Rod-Sol
Fiblys |
https://en.wikipedia.org/wiki/C%20to%20HDL | C to HDL tools convert C language or C-like computer code into a hardware description language (HDL) such as VHDL or Verilog. The converted code can then be synthesized and translated into a hardware device such as a field-programmable gate array. Compared to software, equivalent designs in hardware consume less power (yielding higher performance per watt) and execute faster with lower latency, more parallelism and higher throughput. However, system design and functional verification in a hardware description language can be tedious and time-consuming, so systems engineers often write critical modules in HDL and other modules in a high-level language and synthesize these into HDL through C to HDL or high-level synthesis tools.
C to RTL is another name for this methodology. RTL refers to the register transfer level representation of a program necessary to implement it in logic.
History
Early development on C to HDL was done by Ian Page, Charles Sweeney and colleagues at Oxford University in the 1990s, who developed the Handel-C language. They commercialized their research by forming Embedded Solutions Limited (ESL) in 1999, which was renamed Celoxica in September 2000. In 2008, the embedded systems department of Celoxica was sold to Catalytic for $3 million, which later merged to become Agility Computing. In January 2009, Mentor Graphics acquired Agility's C synthesis assets. Celoxica continues to trade, concentrating on hardware acceleration in the financial and other industries.
Applications
C to HDL techniques are most commonly applied to applications that have unacceptably high execution times on existing general-purpose supercomputer architectures. Examples include bioinformatics, computational fluid dynamics (CFD), financial processing, and oil and gas survey data analysis. Embedded applications requiring high performance or real-time data processing are also an area of use. System-on-chip (SoC) design may also take advantage of C to HDL technique |
https://en.wikipedia.org/wiki/Separation%20property%20%28finance%29 | A separation property is a crucial element of modern portfolio theory that allows a portfolio manager to separate the process of investing clients' assets into two distinct parts.
The first part is the determination of the "optimum risky portfolio". This portfolio is the same for all clients. In one version, it has the highest Sharpe ratio. See mutual fund separation theorem for a discussion of other possibilities. It is the construction of a universal portfolio that is kept separate from the individual needs of each client.
The second part is tailoring the use of that portfolio to the risk-aversion of each individual client. A given point on the risk-return spectrum is achieved by allocating the client's total investments partly to that universal portfolio and partly to the risk-free asset.
See also
Markowitz model § Choosing the best portfolio - an expansion of the above
Mutual fund separation theorem - relating to the construction of optimal portfolios
Fisher separation theorem - discussing an analogous result in corporate finance |
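As a schematic illustration of the two steps described above, consider the textbook mean-variance setting: the risky portfolio is fixed once for everyone, and each client's split between it and the risk-free asset follows from their risk aversion. The utility form and all numbers below are standard classroom assumptions, not values from this article.

```python
# Two-step allocation following the separation property: step one fixes a
# single risky portfolio for all clients (here simply given, with assumed
# return and volatility); step two mixes it with the risk-free asset per
# client by maximizing E[r_p] - 0.5 * A * sigma_p^2.

RISKY_RETURN = 0.08      # assumed expected return of the optimal risky portfolio
RISKY_VOL = 0.20         # its assumed standard deviation
RISK_FREE = 0.03         # assumed risk-free rate

def client_allocation(risk_aversion):
    """Fraction of wealth to put in the risky portfolio for one client."""
    y = (RISKY_RETURN - RISK_FREE) / (risk_aversion * RISKY_VOL ** 2)
    return max(0.0, min(y, 1.0))     # cap at 100% if borrowing is disallowed

for a in (2, 4, 8):                  # less to more risk-averse clients
    y = client_allocation(a)
    print(f"A={a}: {y:.0%} risky portfolio, {1 - y:.0%} risk-free asset")
```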
https://en.wikipedia.org/wiki/Oja%27s%20rule | Oja's learning rule, or simply Oja's rule, named after Finnish computer scientist Erkki Oja, is a model of how neurons in the brain or in artificial neural networks change connection strength, or learn, over time. It is a modification of the standard Hebb's Rule (see Hebbian learning) that, through multiplicative normalization, solves all stability problems and generates an algorithm for principal components analysis. This is a computational form of an effect which is believed to happen in biological neurons.
Theory
Oja's rule requires a number of simplifications to derive, but in its final form it is demonstrably stable, unlike Hebb's rule. It is a single-neuron special case of the Generalized Hebbian Algorithm. However, Oja's rule can also be generalized in other ways to varying degrees of stability and success.
Formula
Consider a simplified model of a neuron that returns a linear combination of its inputs $\mathbf{x}$ using presynaptic weights $\mathbf{w}$:
$$y(\mathbf{x}) = \sum_{j=1}^{m} x_j w_j$$
Oja's rule defines the change in presynaptic weights $\mathbf{w}$ given the output response $y$ of a neuron to its inputs $\mathbf{x}$ to be
$$\Delta \mathbf{w} = \mathbf{w}_{n+1} - \mathbf{w}_n = \eta\, y_n \left( \mathbf{x}_n - y_n \mathbf{w}_n \right)$$
where $\eta$ is the learning rate which can also change with time. Note that the bold symbols are vectors and $n$ defines a discrete time iteration. The rule can also be made for continuous iterations as
$$\frac{d\mathbf{w}}{dt} = \eta\, y(t) \left( \mathbf{x}(t) - y(t)\, \mathbf{w}(t) \right).$$
Derivation
The simplest learning rule known is Hebb's rule, which states in conceptual terms that neurons that fire together, wire together. In component form as a difference equation, it is written
$$\Delta \mathbf{w} = \eta\, y(\mathbf{x}_n)\, \mathbf{x}_n,$$
or in scalar form with implicit $n$-dependence,
$$\Delta w_i = \eta\, y(\mathbf{x})\, x_i,$$
where $y(\mathbf{x}_n)$ is again the output, this time explicitly dependent on its input vector $\mathbf{x}$.
Hebb's rule has synaptic weights approaching infinity with a positive learning rate. We can stop this by normalizing the weights so that each weight's magnitude is restricted between 0, corresponding to no weight, and 1, corresponding to being the only input neuron with any weight. We do this by normalizing the weight vector to be of length one:
$$w_i(n+1) = \frac{w_i + \eta\, y\, x_i}{\left( \sum_{j=1}^{m} \left( w_j + \eta\, y\, x_j \right)^p \right)^{1/p}}.$$
Note that in Oja's original paper, $p = 2$, correspondi |
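A minimal simulation of the discrete rule above shows the weight vector converging to the first principal component of the input distribution (up to sign). The data generation and learning-rate schedule are our own illustrative choices.

```python
# Minimal implementation of the discrete Oja update, demonstrating
# convergence of w to the top principal direction of the inputs.
import numpy as np

rng = np.random.default_rng(0)
# Inputs with most of their variance along one direction (mixing matrix is ours).
X = rng.normal(size=(5000, 2)) @ np.array([[2.0, 1.8], [0.3, 0.2]])

w = rng.normal(size=2)
for t, x in enumerate(X, start=1):
    eta = 1.0 / (200.0 + t)          # slowly decaying learning rate
    y = w @ x                        # linear neuron output
    w += eta * y * (x - y * w)       # Oja's rule

w_true = np.linalg.eigh(np.cov(X.T))[1][:, -1]   # top principal direction
print(np.round(w, 3), np.round(w_true, 3))       # equal up to sign
```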
https://en.wikipedia.org/wiki/Ocean%20color | Ocean color is the branch of ocean optics that specifically studies the color of the water and information that can be gained from looking at variations in color. The color of the ocean, while mainly blue, actually varies from blue to green or even yellow, brown or red in some cases. This field of study developed alongside water remote sensing, so it is focused mainly on how color is measured by instruments (like the sensors on satellites and airplanes).
Most of the ocean is blue in color, but in some places the ocean is blue-green, green, or even yellow to brown. Blue ocean color is a result of several factors. First, water preferentially absorbs red light, which means that blue light remains and is reflected back out of the water. Red light is most easily absorbed and thus does not reach great depths, usually penetrating to less than 50 meters (164 ft). Blue light, in comparison, can penetrate up to 200 meters (656 ft). Second, water molecules and very tiny particles in ocean water preferentially scatter blue light more than light of other colors. Blue light scattering by water and tiny particles happens even in the very clearest ocean water, and is similar to blue light scattering in the sky.
The main substances that affect the color of the ocean include dissolved organic matter, living phytoplankton with chlorophyll pigments, and non-living particles like marine snow and mineral sediments. Chlorophyll can be measured by satellite observations and serves as a proxy for ocean productivity (marine primary productivity) in surface waters. In long term composite satellite images, regions with high ocean productivity show up in yellow and green colors because they contain more (green) phytoplankton, whereas areas of low productivity show up in blue.
Overview
Ocean color depends on how light interacts with the materials in the water. When light enters water, it can either be absorbed (light gets used up, the water gets "darker"), scattered (light gets bounced around in diffe |
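The quoted penetration depths can be illustrated with simple exponential (Beer–Lambert) attenuation, $I(z) = I_0 e^{-Kz}$. The attenuation coefficients below are assumed round numbers chosen to be consistent with the depths quoted above, not measured values.

```python
# Rough illustration of wavelength-dependent light penetration in clear
# ocean water. K values are illustrative assumptions, not measurements.
import math

K = {"red (~650 nm)": 0.1, "green (~550 nm)": 0.05, "blue (~450 nm)": 0.022}

def depth_for_fraction(k, fraction=0.01):
    """Depth in meters at which intensity falls to the given fraction."""
    return -math.log(fraction) / k

for color, k in K.items():
    print(f"{color}: 1% light level at about {depth_for_fraction(k):.0f} m")
```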
https://en.wikipedia.org/wiki/Valley%20of%20stability | In nuclear physics, the valley of stability (also called the belt of stability, nuclear valley, energy valley, or beta stability valley) is a characterization of the stability of nuclides to radioactivity based on their binding energy. Nuclides are composed of protons and neutrons. The shape of the valley refers to the profile of binding energy as a function of the numbers of neutrons and protons, with the lowest part of the valley corresponding to the region of most stable nuclei. The line of stable nuclides down the center of the valley of stability is known as the line of beta stability. The sides of the valley correspond to increasing instability to beta decay (β− or β+). The decay of a nuclide becomes more energetically favorable the further it is from the line of beta stability. The boundaries of the valley correspond to the nuclear drip lines, where nuclides become so unstable they emit single protons or single neutrons. Regions of instability within the valley at high atomic number also include radioactive decay by alpha radiation or spontaneous fission. The shape of the valley is roughly an elongated paraboloid corresponding to the nuclide binding energies as a function of neutron and atomic numbers.
The nuclides within the valley of stability encompass the entire table of nuclides. The chart of those nuclides is also known as a Segrè chart, after the physicist Emilio Segrè. The Segrè chart may be considered a map of the nuclear valley. The region of proton and neutron combinations outside of the valley of stability is referred to as the sea of instability.
Scientists have long searched for long-lived heavy isotopes outside of the valley of stability, as hypothesized by Glenn T. Seaborg in the late 1960s. These relatively stable nuclides are expected to have particular configurations of "magic" atomic and neutron numbers, and to form a so-called island of stability.
Description
All atomic nuclei are composed of protons and neutrons bound together b |
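The bottom of the valley can be sketched quantitatively: minimizing the semi-empirical mass formula over proton number at fixed mass number gives the standard textbook approximation for the line of beta stability used below. The coefficients are the usual textbook values; the article itself quotes none.

```python
# Sketch of the line of beta stability from the semi-empirical mass formula:
# for each mass number A, the most beta-stable proton number Z is roughly
# Z = A / (1.98 + 0.0155 * A^(2/3)), using standard textbook coefficients.

def most_stable_z(a):
    """Approximate Z on the line of beta stability for mass number A."""
    return a / (1.98 + 0.0155 * a ** (2.0 / 3.0))

for a in (16, 56, 120, 208):
    print(f"A={a}: most stable Z ~ {most_stable_z(a):.1f}")
# A=16 -> ~8 (oxygen-16), A=56 -> ~26 (iron-56), A=208 -> ~82 (lead-208)
```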
https://en.wikipedia.org/wiki/Sheet%20mulching | In permaculture, sheet mulching is an agricultural no-dig gardening technique that attempts to mimic the natural soil-building process in forests. When deployed properly and in combination with other permaculture principles, it can generate healthy, productive, and low maintenance ecosystems.
Sheet mulching, also known as composting in place, mimics nature by breaking down organic material from the topmost layers down. The simplest form of sheet mulching consists of applying a bottom layer of decomposable material, such as cardboard or newspapers, to the ground to kill existing vegetation and suppress weeds. Then, a top layer of organic mulch is applied. More elaborate sheet mulching involves more layers. Sheet mulching is used to transform a variety of surfaces into a fertile soil that can be planted. Sheet mulching can be applied to a lawn, a dirt lot full of perennial weeds, an area with poor soil, or even pavement or a rooftop.
Technique
A model for sheet mulching consists of the following steps:
The area of interest is flattened by trimming down existing plant species such as grasses.
The soil is analyzed and its pH is adjusted (if needed).
The soil is moistened (if needed) to facilitate the activity of decomposers.
The soil is then covered with a thin layer of slowly decomposing material (known as the weed barrier), typically cardboard. This suppresses the weeds by blocking sunlight, adds nutrients to the soil as weed matter quickly decays beneath the barrier, and increases the mechanical stability of the growing medium.
A layer (around 10 cm thick) of weed-free, nutrient-rich soil is added, in an attempt to mimic the surface soil, or A horizon.
A layer (at most 15 cm thick) of weed-free, woody and leafy matter is added in an attempt to mimic the forest floor, or O horizon. Theoretically, the soil is now ready to receive the desirable plant seeds or transplants.
Variations and considerations
Often the barrier is applied a few months before |
https://en.wikipedia.org/wiki/Online%20disinhibition%20effect | The online disinhibition effect refers to the lack of restraint one feels when communicating online in comparison to communicating in-person. People tend to feel safer saying things online which they would not say in real life because they have the ability to remain completely anonymous and invisible when on particular websites, and as a result, free from potential consequences. Apart from anonymity, other factors such as asynchronous communication, empathy deficit, or individual personality and cultural factors also contribute to online disinhibition. The manifestations of such an effect could be in both positive and negative directions. Thus online disinhibition could be classified as either benign disinhibition or toxic disinhibition.
Classifications
Benign online disinhibition describes a situation in which people benefit from the absence of restraint in cyberspace. One example of benign online disinhibition is self-disclosure. With the help of Internet anonymity, people can share personal feelings or disclose themselves in ways they are reluctant to do in real life. For instance, young people feel relieved when revealing untold secrets or personally embarrassing details in online chats. Such self-disclosures enable people to establish intimate interpersonal relationships sooner and more strongly than in real-life face-to-face communication. The online disinhibition effect also provides chances for self-expression to people who are unwilling to communicate in the real world, such as people who are introverted, shy, or socially phobic, and individuals with a stutter or impaired hearing.
Another type of online disinhibition is called toxic disinhibition, which represents an increased tendency toward online flaming and inappropriate behaviors. These often involve hostile language, swearing, and even threats. This term describes the negative side effect of the loss of inhibition in cyberspace. The antisocial behaviors caused by |
https://en.wikipedia.org/wiki/Laurence%20Chisholm%20Young | Laurence Chisholm Young (14 July 1905 – 24 December 2000) was a British mathematician known for his contributions to measure theory, the calculus of variations, optimal control theory, and potential theory. He was the son of William Henry Young and Grace Chisholm Young, both prominent mathematicians. He moved to the US in 1949 but never sought American citizenship.
The concept of Young measure is named after him; he also introduced the concept of the generalized curve and a concept of generalized surface, which later evolved into the concept of varifold. The Young integral is also named after him and has now been generalised in the theory of rough paths.
Life and academic career
Laurence Chisholm Young was born in Göttingen, the fifth of the six children of William Henry Young and Grace Chisholm Young. He held positions of Professor at the University of Cape Town, South Africa, and at the University of Wisconsin-Madison. He was also a chess grandmaster.
See also
Bounded variation
Caccioppoli set
Measure theory
Varifold |
https://en.wikipedia.org/wiki/INK4 | INK4 is a family of cyclin-dependent kinase inhibitors (CKIs). The members of this family (p16INK4a, p15INK4b, p18INK4c, p19INK4d) are inhibitors of CDK4 (hence their name, INhibitors of CDK4), and of CDK6. The other family of CKIs, the CIP/KIP proteins, are capable of inhibiting all CDKs. Enforced expression of INK4 proteins can lead to G1 arrest by promoting redistribution of Cip/Kip proteins and blocking cyclin E-CDK2 activity. In cycling cells, there is a reassortment of Cip/Kip proteins between CDK4/6 and CDK2 as cells progress through G1. Their function, inhibiting CDK4/6, is to block progression of the cell cycle beyond the G1 restriction point. In addition, INK4 proteins play roles in cellular senescence, apoptosis and DNA repair.
INK4 proteins are tumor suppressors and loss-of-function mutations lead to carcinogenesis.
INK4 proteins are highly similar in terms of structure and function, with up to 85% amino acid similarity. They contain multiple ankyrin repeats.
Genes
The INK4a/ARF/INK4b locus encodes three genes (p15INK4b, ARF, and p16INK4a) in a 35-kilobase stretch of the human genome. P15INK4b has its own reading frame that is physically separated from p16INK4a and ARF. P16INK4a and ARF have different first exons that are spliced to the same second and third exons. While those second and third exons are shared by p16INK4a and ARF, the proteins are encoded in different reading frames, meaning that p16INK4a and ARF are not isoforms, nor do they share any amino acid homology.
Evolution
Polymorphisms of the p15INK4b/p16INK4a homolog were found to segregate with melanoma susceptibility in Xiphophorus, indicating that INK4 proteins have been involved in tumor suppression for over 350 million years. Furthermore, the older INK4-based system has been further bolstered by the more recent evolutionary addition of the ARF-based anti-cancer response.
Function
INK4 proteins are cell-cycle inhibitors. When they bind to CDK4 and CDK6, they induce an alloster |
https://en.wikipedia.org/wiki/Engels%20Maps | Engels Maps is a map company in the Ohio Valley with particular concentration on the Cincinnati-Dayton region. It also produces chamber of commerce maps.
Publications
It has three semi-annual publications that form its foundation:
Cincinnati Engels Guide
Dayton Engels Guide
Indianapolis Engels Guide
Their maps are also found in the Cincinnati Bell Yellow Pages and the Dayton WorkBook.
Corporate history
Engels Maps was founded by Judson Engels in 1994.
External links
Engels Maps
http://cincinnati.citysearch.com/profile/4343456/fort_thomas_ky/engels_maps_guide.html
Target Marketing
http://www.macraesbluebook.com/search/company.cfm?company=838024
http://engelsmaps.com |
https://en.wikipedia.org/wiki/Translational%20Genomics%20Research%20Institute | The Translational Genomics Research Institute (TGen) is a non-profit genomics research institute based in Phoenix, Arizona, United States.
History and activities
TGen was established in July 2002 by Jeffrey Trent in Phoenix, Arizona, with an initial investment of US$100 million from Arizona public and private-sector investors.
TGen conducts research on various human disorders, including Alzheimer's disease, autism, Parkinson's, diabetes, cancer, and other complex diseases. The institute focuses on translational genomics research, which searches for ways to apply results from the Human Genome Project to the development of improved diagnostics, prognostics, and therapies for these diseases.
TGen has contributed to the growth of scientific research and biotechnology in Arizona. The institute has been involved in collaborations and studies, such as the research on chronic traumatic encephalopathy (CTE) in former NFL players in partnership with Exosome Sciences.
TGen Administration
Jeffrey M. Trent, Ph.D., President & Scientific Director
Sunil Sharma, MD FACP, Deputy Director
Michael Bassoff, President, TGen Foundation
Daniel Von Hoff, M.D., F.A.C.P., Executive Vice President, Physician-in-Chief
Tess Burleson, MBA, CPA, Chief Operating Officer and President, TGen Accelerators
Chuck Coleson, Chief Financial Officer
Galen Perry, Vice President, Marketing and Communications
Brady Young, Vice President, Human Resources
Kendall Van Keuren-Jensen, co-Director |
https://en.wikipedia.org/wiki/Quadratics | Quadratics is a six-part Canadian instructional television series produced by TVOntario in 1993. The miniseries is part of the Concepts in Mathematics series. The program uses computer animation to demonstrate quadratic equations and their corresponding functions in the Cartesian coordinate system.
Synopsis
Each program involves two robots, Edie and Charon, who work on an assembly line in a high-tech factory. The robots discuss their desire to learn about quadratic equations, and they are subsequently provided with lessons that further their education. |
https://en.wikipedia.org/wiki/Corn%20ethanol | Corn ethanol is ethanol produced from corn biomass and is the main source of ethanol fuel in the United States, mandated to be blended with gasoline under the Renewable Fuel Standard. Corn ethanol is produced by ethanol fermentation and distillation. It is debatable whether the production and use of corn ethanol results in lower greenhouse gas emissions than gasoline. Approximately 45% of U.S. corn croplands are used for ethanol production.
Uses
Since 2001, corn ethanol production has increased more than sevenfold. Of the 9.50 billion bushels of corn produced in 2001, 0.71 billion bushels were used to produce corn ethanol. By comparison, of the 14.62 billion bushels of corn produced in 2018, 5.60 billion bushels were used to produce corn ethanol, as reported by the United States Department of Energy. Overall, 94% of ethanol in the United States is produced from corn.
Currently, corn ethanol is mainly used in blends with gasoline to create mixtures such as E10, E15, and E85. Ethanol is mixed into more than 98% of United States gasoline to reduce air pollution. Corn ethanol is used as an oxygenate when mixed with gasoline. E10 and E15 can be used in all engines without modification. However, blends like E85, with a much greater ethanol content, require significant modifications before an engine can run on the mixture without being damaged. Some vehicles that currently use E85 fuel, also called flex fuel, include the Ford Focus, Dodge Durango, and Toyota Tundra, among others.
The future use of corn ethanol as a main gasoline replacement is unknown. Corn ethanol has yet to be proven as cost-effective as gasoline, since it is much more expensive to produce. Corn ethanol has to go through an extensive milling process before it can be used as a fuel source. One major drawback of corn ethanol is its energy returned on energy invested (EROI), meaning the energy outputted in comparison to the energy requ |
https://en.wikipedia.org/wiki/Deborah%20and%20Franklin%20Haimo%20Awards%20for%20Distinguished%20College%20or%20University%20Teaching%20of%20Mathematics | The Deborah and Franklin Tepper Haimo Awards for Distinguished College or University Teaching of Mathematics are awards given by the Mathematical Association of America to recognize college or university teachers "who have been widely recognized as extraordinarily successful and whose teaching effectiveness has been shown to have had influence beyond their own institutions." The Haimo awards are the highest teaching honor bestowed by the MAA. The awards were established in 1993 by Deborah Tepper Haimo and named after Haimo and her husband, Franklin Haimo. After the first year of the award (when seven awards were given), up to three awards have been given every year.
Winners
The winners of the award have been:
1993: Joseph Gallian, Robert V. Hogg, Anne Lester Hudson, Frank Morgan, V. Frederick Rickey, Doris Schattschneider, and Philip D. Straffin Jr.
1994: Paul Halmos, Justin Jesse Price, and Alan Tucker
1995: Robert L. Devaney, Lisa Mantini, and David S. Moore
1996: Thomas Banchoff, Edward M. Landesman, and Herbert Wilf
1997: Carl C. Cowen, Carl Pomerance, and T. Christine Stevens
1998: Colin Adams, Rhonda Hatcher, and Rhonda Hughes
1999: Joel Brawley, Robert W. Case, and Joan Hutchinson
2000: Arthur T. Benjamin, Donald S. Passman, and Gary W. Towsley
2001: Edward Burger, Evelyn Silvia, and Leonard F. Klosinski
2002: Dennis DeTurck, Paul Sally, and Edward Spitznagel Jr.
2003: Judith Grabiner, Ranjan Roy, and Paul Zeitz
2004: Thomas Garrity, Andy Liu, and Olympia Nicodemi
2005: Gerald L. Alexanderson, Aparna Higgins, and Deborah Hughes Hallett
2006: Jacqueline Dewar, Keith Stroyan, and Judy L. Walker
2007: Jennifer Quinn, Michael Starbird, and Gilbert Strang
2008: Annalisa Crannell, Kenneth I. Gross, and James A. Morrow
2009: Michael Bardzell, David Pengelley, and Vali Siadat
2010: Curtis Bennett, Michael Dorff, and Allan J. Rossman
2011: Erica Flapan, Karen Rhea, and Zvezdelina Stankova
2012: Matthew DeLong, Susan Loepp, and Cynthia Wyels
2013: Matthias Beck, Margaret M. R |
https://en.wikipedia.org/wiki/Muller%20automaton | In automata theory, a Muller automaton is a type of an ω-automaton.
The acceptance condition separates a Muller automaton from other ω-automata.
The Muller automaton is defined using a Muller acceptance condition, i.e. the set of all states visited infinitely often must be an element of the acceptance set. Both deterministic and non-deterministic Muller automata recognize the ω-regular languages. They are named after David E. Muller, an American mathematician and computer scientist, who invented them in 1963.
Formal definition
Formally, a deterministic Muller-automaton is a tuple A = (Q,Σ,δ,q0,F) that consists of the following information:
Q is a finite set. The elements of Q are called the states of A.
Σ is a finite set called the alphabet of A.
δ: Q × Σ → Q is a function, called the transition function of A.
q0 is an element of Q, called the initial state.
F is a set of sets of states. Formally, F ⊆ P(Q) where P(Q) is powerset of Q. F defines the acceptance condition. A accepts exactly those runs in which the set of infinitely often occurring states is an element of F
In a non-deterministic Muller automaton, the transition function δ is replaced with a transition relation Δ that returns a set of states and the initial state q0 is replaced by a set of initial states Q0. Generally, 'Muller automaton' refers to a non-deterministic Muller automaton.
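To make the acceptance condition concrete, the following illustrative sketch (not from the article; all identifiers are hypothetical) checks Muller acceptance for a deterministic automaton on an ultimately periodic word u·vω, for which the set of states visited infinitely often can be computed exactly:

```python
# Illustrative sketch: checking the Muller acceptance condition of a
# deterministic automaton A = (Q, Sigma, delta, q0, F) on an
# ultimately periodic word u . v^omega. Names are hypothetical.
def muller_accepts(delta, q0, F, u, v):
    q = q0
    for a in u:                      # consume the finite prefix u
        q = delta[(q, a)]
    seen, trace = {}, []
    pos = 0                          # position within the cycle v
    while (q, pos) not in seen:      # walk until a (state, pos) pair repeats
        seen[(q, pos)] = len(trace)
        trace.append(q)
        q = delta[(q, v[pos])]
        pos = (pos + 1) % len(v)
    inf = frozenset(trace[seen[(q, pos)]:])   # states visited infinitely often
    return inf in {frozenset(S) for S in F}   # Muller condition: inf must be in F

# Example: over {a, b}, accept exactly the runs that visit both states forever.
delta = {("p", "a"): "p", ("p", "b"): "q", ("q", "a"): "p", ("q", "b"): "q"}
print(muller_accepts(delta, "p", [{"p", "q"}], u="", v="ab"))  # True
```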
For a more comprehensive formalisation, see ω-automaton.
Equivalence with other ω-automata
Muller automata are as expressive as parity automata, Rabin automata, Streett automata, and non-deterministic Büchi automata, among others, and strictly more expressive than deterministic Büchi automata. The equivalence of the above automata and non-deterministic Muller automata can be shown easily, since the acceptance conditions of these automata can be emulated using the acceptance condition of Muller automata and vice versa.
McNaughton's theorem demonstrates the equivalence of non-deterministic Büchi |
https://en.wikipedia.org/wiki/Uptake%20signal%20sequence | Uptake signal sequences (USS) are short DNA sequences preferentially taken up by competent bacteria of the family Pasteurellaceae (e.g., Haemophilus influenzae). Similar sequences, called DNA uptake sequences (DUS), are found in species of the family Neisseriaceae (including Neisseria meningitidis and Neisseria gonorrhoeae).
Neisseria meningitidis
Genetic transformation is the process by which a recipient bacterial cell takes up naked DNA from its environment and integrates this DNA into the recipient's genome by recombination. In N. meningitidis, DNA transformation requires the presence of short DUS (10-12 mers residing in coding and intergenic regions) of the donor DNA. Specific recognition of DUSs is mediated by a type IV pilin. Davidsen et al. reported that in N. meningitidis DUSs occur at a significantly higher density in genes involved in DNA repair and recombination (as well as in restriction-modification and replication) than in other annotated gene groups. These authors proposed that the over-representation of DUS in DNA repair and recombination genes may reflect the benefit of maintaining the integrity of the DNA repair and recombination machinery by preferentially taking up genome maintenance genes that could replace their damaged counterparts in the recipient cell's genome. Uptake of such genes could provide a mechanism for facilitating recovery from DNA damage after genotoxic stress. |
https://en.wikipedia.org/wiki/Turnsole | Turnsole, katasol, or folium was a dyestuff prepared from the annual plant Chrozophora tinctoria.
History
Turnsole became a mainstay of medieval manuscript illuminators starting with the development of the technique for extracting it in the thirteenth century, when it joined the vegetable-based woad and indigo in the illuminator's repertory.
Its use was mostly as a substitute for the more expensive Tyrian purple, the famous dye obtained from Murex molluscs. However, the queen of blue colorants was always the expensive lapis lazuli or its substitute azurite, ground to the finest powders. Turnsole was downgraded to a shading glaze and fell out of use in the illuminator's palette by the turn of the seventeenth century, with the easier availability of less fugitive mineral-derived blue pigments.
According to its method of preparation, turnsole produced a range of translucent colors from blue, through purple to red, depending on its reaction to the acidity or alkalinity of its environment, in a chemical reaction, not understood in the Middle Ages, that is most familiar in the litmus test.
Folium ("leaf"), was actually derived from the three-lobed fruit (illustration), not the leaves, and medieval recipes are explicit that the fruits must not be broken, or the seeds released, during production of the pigment. The fruits were collected in autumn (August, September).
In the early fifteenth century, Cennino Cennini, in his Libro dell' Arte, gives the recipes "XVIII: How you should tint paper turnsole color" and "LXXVI: To paint a purple or turnsole drapery in fresco" (though neither of these recipes uses or describes turnsole). Textiles soaked in the dye vat would be left in a close damp cellar in an atmosphere produced by pans of urine. It was not realized that the decomposition of urea in the urine was producing ammonia, but the technique is a reminder of how foul-smelling the dyer's art was.
It was sold impregnated into small pieces of linen and then extracted for use. The colour |
https://en.wikipedia.org/wiki/Darwin%E2%80%93Radau%20equation | In astrophysics, the Darwin–Radau equation (named after Rodolphe Radau and Charles Galton Darwin) gives an approximate relation between the moment of inertia factor of a planetary body and its rotational speed and shape. The moment of inertia factor is directly related to the largest principal moment of inertia, C. It is assumed that the rotating body is in hydrostatic equilibrium and is an ellipsoid of revolution. The Darwin–Radau equation states
$$\frac{C}{M R_e^2} = \lambda = \frac{2}{3}\left[1 - \frac{2}{5}\sqrt{1 + \eta}\,\right],$$
where M and Re represent the mass and mean equatorial radius of the body. Here λ is the d'Alembert parameter and the Radau parameter η is defined as
$$\eta = \frac{5q}{2\varepsilon} - 2,$$
where q is the geodynamical constant
$$q = \frac{\omega^2 R_e^3}{G M}$$
and ε is the geometrical flattening
$$\varepsilon = \frac{R_e - R_p}{R_e},$$
where Rp is the mean polar radius and Re is the mean equatorial radius.
For Earth, q ≈ 3.461×10^-3 and ε ≈ 3.353×10^-3, which yields λ ≈ 0.3313, a good approximation to the measured value of 0.3307. |
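As a numerical sanity check of the Darwin–Radau relation as reconstructed above, the Earth value can be recomputed from approximate published constants; the sketch below is illustrative, with hypothetical variable names:

```python
# Evaluate the Darwin-Radau relation for Earth (illustrative values).
import math

G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
M = 5.972e24           # Earth mass, kg
Re = 6.378137e6        # equatorial radius, m
Rp = 6.356752e6        # polar radius, m
omega = 7.2921e-5      # rotation rate, rad/s

q = omega**2 * Re**3 / (G * M)     # geodynamical constant, ~3.461e-3
eps = (Re - Rp) / Re               # geometrical flattening, ~3.353e-3
eta = 5 * q / (2 * eps) - 2        # Radau parameter
lam = (2 / 3) * (1 - (2 / 5) * math.sqrt(1 + eta))
print(lam)                          # ~0.331, close to the measured 0.3307
```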
https://en.wikipedia.org/wiki/Animal%20migration | Animal migration is the relatively long-distance movement of individual animals, usually on a seasonal basis. It is the most common form of migration in ecology. It is found in all major animal groups, including birds, mammals, fish, reptiles, amphibians, insects, and crustaceans. The cause of migration may be local climate, local availability of food, the season of the year or for mating.
To be counted as a true migration, and not just a local dispersal or irruption, the movement of the animals should be an annual or seasonal occurrence, or a major habitat change as part of their life. An annual event could include Northern Hemisphere birds migrating south for the winter, or wildebeest migrating annually for seasonal grazing. A major habitat change could include young Atlantic salmon or sea lamprey leaving the river of their birth when they have reached a few inches in size. Some traditional forms of human migration fit this pattern.
Migrations can be studied using traditional identification tags such as bird rings, or tracked directly with electronic tracking devices.
Before animal migration was understood, folklore explanations were formulated for the appearance and disappearance of some species, such as that barnacle geese grew from goose barnacles.
Overview
Concepts
Migration can take very different forms in different species, and has a variety of causes.
As such, there is no simple accepted definition of migration. One of the most commonly used definitions, proposed by the zoologist J. S. Kennedy, is:
Migration encompasses four related concepts: persistent straight movement; relocation of an individual on a greater scale (in both space and time) than its normal daily activities; seasonal to-and-fro movement of a population between two areas; and movement leading to the redistribution of individuals within a population. Migration can be either obligate, meaning individuals must migrate, or facultative, meaning individuals can "choose" to migrate or not. Wi |
https://en.wikipedia.org/wiki/LULI | LULI (Laboratoire pour l'Utilisation des Lasers Intenses) is a scientific research laboratory specialised in the study of plasmas generated by laser–matter interaction at high intensities, and in their applications. The main missions of LULI include: (i) research in plasma physics, (ii) development and operation of high-power, high-energy lasers and experimental facilities, and (iii) training of students in plasma physics, optics and laser physics.
Research in Plasma Physics
Focusing the extreme power of pulsed lasers (up to the petawatt level, 10^15 W) onto tiny spots, μm to mm in diameter, leads to ultrahigh intensities, today reaching 10^20 W/cm^2 or more. Targets irradiated at such intensities can reach temperatures of the order of a hundred million degrees and pressures of tens of megabars. Moreover, the electric and magnetic fields associated with the laser beam itself, or the fields produced in the plasma, are responsible for the acceleration of particles to relativistic energies and for the production of intense radiation from THz to X-rays and γ-rays.
The main subjects studied by LULI's scientists include laser inertial fusion and all its physical components (e.g. laser-plasma interaction), fundamental physics of hot and dense plasmas and its applications in astrophysics and geophysics. In the short-pulse picosecond regime, the main developments concern the fast-igniter scheme for inertial fusion, and the production of brief and intense sources of radiation and relativistic particles.
National and International Facility
LULI is the French national civilian facility dedicated to research using high-energy high-power lasers and their applications. French and foreign users have access to the two most energetic French academic laser chains: 100TW and LULI2000.
The main beam of the 100TW facility delivers 30 J in 300 fs at 1.06 μm. It is coupled with additional nanosecond and picosecond beams. Nano2000, the nanosecond version of LULI2000, consists of two laser beams |
https://en.wikipedia.org/wiki/Desensitization%20%28telecommunications%29 | In telecommunications, desensitization (also known as receiver blocking) is a form of electromagnetic interference where a radio receiver is unable to receive a weak radio signal that it might otherwise be able to receive when there is no interference. This is caused by a nearby transmitter with a strong signal on a close frequency, which overloads the receiver and makes it unable to fully receive the desired signal.
Typical receiver operation is such that the Minimum Detectable Signal (MDS) level is determined by the thermal noise of its electronic components. When a signal is received, additional spurious signals are produced within the receiver because it is not truly a linear device. When these spurious signals have a power level that is less than the thermal noise power level, then the receiver is operating normally. When these spurious signals have a power level that is higher than the thermal noise floor, then the receiver is desensitized. This is because the MDS has risen due to the level of the spurious signals. Spurious signals increase in level when the received signal strength increases.
When an interfering signal is present, it can contribute to the level of the spurious signals. Stronger interference generates stronger spurious signals. The interference may be at a different frequency than the signal of interest, but the spurious signals caused by that interference can show up at the same frequency as the signal of interest. It is these spurious signals that degrade the ability of the receiver by raising the MDS.
Consider the case of a repeater station, a station consisting of a transmitter and receiver, both operating at the same time, but on separate frequencies, and in some cases, separate antennas. Elevated MDS can be experienced in this case as well.
One way to correct this condition is adding a duplexer to the station. This is common in Land Mobile Radio services such as police, fire, and various commercial and amateur services.
See also
Receiver |
https://en.wikipedia.org/wiki/Desensitization%20%28medicine%29 | In medicine, desensitization is a method to reduce or eliminate an organism's negative reaction to a substance or stimulus.
In pharmacology, drug desensitization refers to two related concepts. First, desensitization may be equivalent to drug tolerance and refers to subjects' reactions (positive or negative) to a drug reducing following its repeated use. This is a macroscopic, organism-level effect and differs from the second meaning of desensitization, which refers to a biochemical effect where individual receptors become less responsive after repeated application of an agonist. This may be mediated by phosphorylation, for instance by beta adrenoceptor kinase at the beta adrenoceptor.
Application to allergies
For example, if a person with diabetes mellitus has a bad allergic reaction to taking a full dose of beef insulin, the person is given a very small amount of the insulin at first, so small that the person has no adverse reaction or very limited symptoms as a result. Over a period of time, larger doses are given until the person is taking the full dose. This is one way to help the body get used to the full dose, and to avoid having the allergic reaction to beef-origin insulin.
A temporary desensitization method involves the administration of small doses of an allergen to produce an IgE-mediated response in a setting where an individual can be resuscitated in the event of anaphylaxis; this approach, through uncharacterized mechanisms, eventually overrides the hypersensitive IgE response.
Desensitization approaches for food allergies are generally at the research stage. They include:
oral immunotherapy, which involves building up tolerance by eating a small amount of (usually baked) food;
sublingual immunotherapy, which involves placing a small drop of milk or egg white under the tongue;
epicutaneous immunotherapy, which delivers the allergenic food to the skin through a patch;
monoclonal anti-IgE antibodies, which non-specifically reduce the body's capacity to produce |
https://en.wikipedia.org/wiki/Desensitization%20%28psychology%29 | In psychology, desensitization is a treatment or process that diminishes emotional responsiveness to a negative, aversive, or positive stimulus after repeated exposure. Desensitization can also occur when an emotional response is repeatedly evoked while the action tendency associated with the emotion proves irrelevant or unnecessary. The process of desensitization was developed by psychologist Mary Cover Jones and is primarily used to assist individuals in unlearning phobias and anxieties. Joseph Wolpe (1958) developed a method of ranking anxiety-evoking stimuli in a hierarchical list, in order of intensity, which allows individuals to undergo adaptation. Although medication is available for individuals with anxiety, fear, or phobias, empirical evidence supports desensitization with high rates of cure, particularly in clients with depression or schizophrenia. Wolpe's "reciprocal inhibition" desensitization process is based on well-known psychology theories such as Hull's "drive-reduction" theory and Sherrington's concept of "reciprocal inhibition." Individuals are gradually exposed to anxiety triggers while using relaxation techniques to reduce anxiety. It is an effective treatment for anxiety disorders.
Steps
The hierarchical list is constructed between client and therapist, in an ordered series of steps from the least disturbing to the most alarming fears or phobias. For acrophobia, for example, the therapist and the patient create a list of escalating exposure scenarios, in which the patient progresses from using a low step ladder to standing on and climbing higher rungs. In a commonly used version of this treatment, the scenes are arranged in order of increasing arousal. Secondly, the client is taught techniques that produce deep relaxation. This is repeated until the hierarchy element no longer causes anxiety or fear, at which point the next scene is show |
https://en.wikipedia.org/wiki/Adenine%20nucleotide%20translocator | Adenine nucleotide translocator (ANT), also known as the ADP/ATP translocase (ANT), ADP/ATP carrier protein (AAC) or mitochondrial ADP/ATP carrier, exchanges free ATP with free ADP across the inner mitochondrial membrane. ANT is the most abundant protein in the inner mitochondrial membrane and belongs to mitochondrial carrier family.
Free ADP is transported from the cytoplasm to the mitochondrial matrix, while ATP produced from oxidative phosphorylation is transported from the mitochondrial matrix to the cytoplasm, thus providing the cells with its main energy currency. ADP/ATP translocases are exclusive to eukaryotes and are thought to have evolved during eukaryogenesis. Human cells express four ADP/ATP translocases: SLC25A4, SLC25A5, SLC25A6 and SLC25A31, which constitute more than 10% of the protein in the inner mitochondrial membrane. These proteins are classified under the mitochondrial carrier superfamily.
Types
In humans, there exist three paralogous ANT isoforms:
SLC25A4 – found primarily in heart and skeletal muscle
SLC25A5 – primarily expressed in fibroblasts
SLC25A6 – primarily expressed in liver
Structure
ANT has long been thought to function as a homodimer, but this concept was challenged by the projection structure of the yeast Aac3p solved by electron crystallography, which showed that the protein was three-fold symmetric and monomeric, with the translocation pathway for the substrate through the centre. The atomic structure of the bovine ANT confirmed this notion, and provided the first structural fold of a mitochondrial carrier. Further work has demonstrated that ANT is a monomer in detergents and functions as a monomer in mitochondrial membranes.
ADP/ATP translocase 1 is the major AAC in human cells and the archetypal protein of this family. It has a mass of approximately 30 kDa, consisting of 297 residues. It forms six transmembrane α-helices that form a barrel that results in a deep cone-shaped depression accessible from the outside w |
https://en.wikipedia.org/wiki/De%20Sitter%20effect | In astrophysics, the term de Sitter effect (named after the Dutch physicist Willem de Sitter) has been applied to two unrelated phenomena:
De Sitter double star experiment
De Sitter precession – also known as geodetic precession or the geodetic effect |
https://en.wikipedia.org/wiki/OCCAID | The Open Contributors Corporation for Advanced Internet Development (OCCAID) was a non-profit consortium that operated one of the largest IPv6 research networks in the world. It maintained both resale and facilities-based networks spanning 15,000 miles, with a presence in over 52 cities across 6 countries. The organisation no longer operates; what became of it is unclear, as very little information about it is available apart from its official website.
OCCAID facilitated collaboration between research communities and the carrier industry, serving as a testbed and proving ground for advanced Internet protocols. Most of its participants connected to the network using Ethernet connections in areas where OCCAID had last-mile network connections.
OCCAID's primary collaboration activities had involved IPv6 and multicast protocols.
External links
Official site |
https://en.wikipedia.org/wiki/Integrin-linked%20kinase | Integrin-linked kinase is an enzyme that in humans is encoded by the ILK gene involved with integrin-mediated signal transduction. Mutations in ILK are associated with cardiomyopathies. It is a 59kDa protein originally identified in a yeast-two hybrid screen with integrin β1 as the bait protein. Since its discovery, ILK has been associated with multiple cellular functions including cell migration, proliferation, and adhesion.
Integrin-linked kinases (ILKs) are a subfamily of Raf-like kinases (RAF). The structure of ILK comprises three features: five ankyrin repeats at the N-terminus, a phosphoinositide-binding motif, and a kinase catalytic domain. Integrins lack enzymatic activity and depend on adapters to signal to proteins. ILK is linked to the beta-1 and beta-3 integrin cytoplasmic domains and is one of the best-described of these adapters. Although first described as a serine/threonine kinase by Hannigan, important motifs of ILK kinases are still uncharacterized. ILK is thought to have a role in developmental regulation and tissue homeostasis; however, it was found that in flies, worms and mice ILK activity is not required to regulate these processes.
Animal ILKs have been linked to the PINCH–parvin complex, which controls muscle development. Mice lacking ILK are embryonic lethal due to a lack of organized muscle cell development. In mammals ILK lacks catalytic activity but supports scaffolding protein functions for focal adhesions. In plants, ILKs signal complexes to focal adhesion sites. Plant genomes contain multiple ILK genes, unlike animal genomes, which contain few. ILKs have been found to possess oncogenic properties. ILKs control the activity of serine/threonine phosphatases.
Principal features
Transduction of extracellular matrix signals through integrins influences intracellular and extracellular functions, and appears to require interaction of integrin cytoplasmic domains with cellular proteins. Integrin-linked kinase (ILK), interacts with the cyto |
https://en.wikipedia.org/wiki/Einstein%E2%80%93de%20Haas%20effect | The Einstein–de Haas effect is a physical phenomenon in which a change in the magnetic moment of a free body causes this body to rotate. The effect is a consequence of the conservation of angular momentum. It is strong enough to be observable in ferromagnetic materials. The experimental observation and accurate measurement of the effect demonstrated that the phenomenon of magnetization is caused by the alignment (polarization) of the angular momenta of the electrons in the material along the axis of magnetization. These measurements also allow the separation of the two contributions to the magnetization: that which is associated with the spin and with the orbital motion of the electrons.
The effect also demonstrated the close relation between the notions of angular momentum
in classical and in quantum physics.
The effect was predicted by O. W. Richardson in 1908. It is named after Albert Einstein and Wander Johannes de Haas, who published two papers in 1915 claiming the first experimental observation of the effect.
Description
The orbital motion of an electron (or any charged particle) around a certain axis produces a magnetic dipole with the magnetic moment $\mu = \frac{q}{2m} L,$ where $q$ and $m$ are the charge and the mass of the particle, while $L$ is the angular momentum of the motion (SI units are used). In contrast, the intrinsic magnetic moment of the electron is related to its intrinsic angular momentum (spin) $S$ approximately as $\mu_S \approx \frac{q}{m} S$ (see Landé g-factor and anomalous magnetic dipole moment).
If a number of electrons in a unit volume of the material have a total orbital angular momentum of $L$ with respect to a certain axis, their magnetic moments would produce a magnetization $M = \frac{q}{2m} L.$ For the spin contribution the relation would be $M = \frac{q}{m} S.$ A change in magnetization, $\Delta M,$ implies a proportional change, $\Delta L = \frac{2m}{q} \Delta M,$ in the angular momentum of the electrons involved. Provided that there is no external torque along the magnetization axis applied to the body in the process, the rest of the body (practically all its mass) should ac |
https://en.wikipedia.org/wiki/De%20Vaucouleurs%27s%20law | de Vaucouleurs's law, also known as the de Vaucouleurs profile or de Vaucouleurs model, describes how the surface brightness $I$ of an elliptical galaxy varies as a function of apparent distance $R$ from the center of the galaxy:
$$\ln I(R) = \ln I_0 - k R^{1/4}.$$
By defining Re as the radius of the isophote containing half of the total luminosity of the galaxy, the half-light radius, the de Vaucouleurs profile may be expressed as:
$$I(R) = I_e\, e^{-7.669\left[(R/R_e)^{1/4} - 1\right]}$$
or
$$\lg I(R) = \lg I_e - 3.3307\left[(R/R_e)^{1/4} - 1\right],$$
where Ie is the surface brightness at Re. This can be confirmed by noting that
$$\int_0^{R_e} I(R)\, 2\pi R \, dR = \tfrac{1}{2} L_{\mathrm{tot}}.$$
de Vaucouleurs model is a special case of Sersic's model, with a Sersic index of n=4. A number of (internal) density profiles that approximately reproduce de Vaucouleurs's law after projection onto the plane of the sky include Jaffe's model and Dehnen's model.
The model is named after Gérard de Vaucouleurs who first formulated it in 1948. Although an empirical model rather than a law of physics, it was so entrenched in astronomy during the 20th century that it was referred to as a "law". |
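Since the constant 7.669 is chosen precisely so that Re encloses half of the total light, that property can be checked numerically; a small illustrative sketch, assuming the profile as reconstructed above:

```python
# Numerical check (illustrative) that R_e encloses ~half the light of
# I(R) = I_e * exp(-7.669 * ((R/R_e)**0.25 - 1)), here with R_e = I_e = 1.
import math
from scipy.integrate import quad

def I(R, b=7.669):
    return math.exp(-b * (R ** 0.25 - 1.0))

lum = lambda R: 2.0 * math.pi * R * I(R)   # luminosity per unit radius
half, _ = quad(lum, 0.0, 1.0)              # light inside R_e
total, _ = quad(lum, 0.0, math.inf)        # total light
print(half / total)                         # ~0.5
```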
https://en.wikipedia.org/wiki/Gauss%E2%80%93Codazzi%20equations | In Riemannian geometry and pseudo-Riemannian geometry, the Gauss–Codazzi equations (also called the Gauss–Codazzi–Weingarten-Mainardi equations or Gauss–Peterson–Codazzi formulas) are fundamental formulas which link together the induced metric and second fundamental form of a submanifold of (or immersion into) a Riemannian or pseudo-Riemannian manifold.
The equations were originally discovered in the context of surfaces in three-dimensional Euclidean space. In this context, the first equation, often called the Gauss equation (after its discoverer Carl Friedrich Gauss), says that the Gauss curvature of the surface, at any given point, is dictated by the derivatives of the Gauss map at that point, as encoded by the second fundamental form. The second equation, called the Codazzi equation or Codazzi-Mainardi equation, states that the covariant derivative of the second fundamental form is fully symmetric. It is named for Gaspare Mainardi (1856) and Delfino Codazzi (1868–1869), who independently derived the result, although it was discovered earlier by Karl Mikhailovich Peterson.
Formal statement
Let $M$ be an $n$-dimensional embedded submanifold of a Riemannian manifold $P$ of dimension $n + p.$ There is a natural inclusion of the tangent bundle of M into that of P by the pushforward, and the cokernel is the normal bundle of M:
$$0 \to T_x M \to T_x P|_M \to N_x M := T_x P / T_x M \to 0.$$
The metric splits this short exact sequence, and so
$$TP|_M = TM \oplus NM.$$
Relative to this splitting, the Levi-Civita connection $\nabla'$ of P decomposes into tangential and normal components. For each $X \in TM$ and vector field Y on M,
$$\nabla'_X Y = \top\!\left(\nabla'_X Y\right) + \bot\!\left(\nabla'_X Y\right).$$
Let
$$\nabla_X Y = \top\!\left(\nabla'_X Y\right), \qquad \alpha(X, Y) = \bot\!\left(\nabla'_X Y\right).$$
The Gauss formula now asserts that $\nabla_X$ is the Levi-Civita connection for M, and $\alpha$ is a symmetric vector-valued form with values in the normal bundle. It is often referred to as the second fundamental form.
An immediate corollary is the Gauss equation for the curvature tensor. For $X, Y, Z, W \in TM,$
$$\langle R'(X, Y) Z, W \rangle = \langle R(X, Y) Z, W \rangle + \langle \alpha(X, Z), \alpha(Y, W) \rangle - \langle \alpha(Y, Z), \alpha(X, W) \rangle,$$
where $R'$ is the Riemann curvature tensor of P and R is that of M.
The Weingarten equation is an analog of the Gauss formula for a connection in the normal bundle. Let and a |
https://en.wikipedia.org/wiki/Delay%20equalization | In signal processing, delay equalization corresponds to adjusting the relative phases of different frequencies to achieve a constant group delay, typically by adding an all-pass filter in series with an uncompensated filter. Machine-learning techniques have also been applied to the design of such filters. |
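As a minimal illustration of why all-pass sections are the tool of choice here: a first-order digital all-pass has unit magnitude at every frequency, so cascading it changes only phase and group delay. A sketch using SciPy (illustrative, not a complete equalizer design):

```python
# First-order digital all-pass section: H(z) = (a0 + z^-1)/(1 + a0*z^-1).
# Its magnitude response is flat, so it only reshapes phase/group delay.
import numpy as np
from scipy.signal import freqz, group_delay

a0 = 0.5                  # all-pass coefficient (illustrative value)
b = [a0, 1.0]             # numerator: reversed denominator coefficients
a = [1.0, a0]             # denominator

w, h = freqz(b, a)
print(np.allclose(np.abs(h), 1.0))   # True: unit magnitude everywhere
w, gd = group_delay((b, a))          # frequency-dependent group delay
```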
https://en.wikipedia.org/wiki/Local%20insertion | In broadcasting, local insertion (known in the United Kingdom as an opt-out) is the act or capability of a broadcast television station, radio station or cable system to insert or replace part of a network feed with content unique to the local station or system. Most often this is a station identification (required by the broadcasting authority such as the U.S. Federal Communications Commission), but is also commonly used for television or radio advertisements, or a weather or traffic report. A digital on-screen graphic ("dog" or "bug"), commonly a translucent watermark, may also be keyed (superimposed) with a television station ID over the network feed using a character generator using genlock. In cases where individual broadcast stations carry programs separate from those shown on the main network, this is known as regional variation (in the United Kingdom) or an opt-out (in Canada and the United States).
Automated local insertion used to be triggered with in-band signaling, such as DTMF tones or sub-audible sounds (such as 25 Hz), but is now done with out-of-band signaling, such as analog signal subcarriers via communications satellite, or now more commonly via digital signals; broadcast automation equipment can then handle these automatically. In an emergency, such as severe weather, local insertion may also occur instantly through command from another network or other source (such as the Emergency Alert System or First Warning). In this case, the most urgent warning messages may interrupt without delay, while others may be worked into a normal break in programming within 15 minutes of their initial issuance.
Within individual programs
In the United States, insertion can easily be heard every evening on the nationally syndicated radio show Delilah, where the host does a pre-recorded station-specific voiceover played over a music bed from the network. When host Delilah Rene says "this is Delilah", her voice (often in a slightly different tone or mood than what |
https://en.wikipedia.org/wiki/Gene%20trapping | Gene trapping is a high-throughput approach that is used to introduce insertional mutations across an organism's genome.
Method
Trapping is performed with gene trap vectors whose principal element is a gene trapping cassette consisting of a promoterless reporter gene and/or selectable genetic marker, flanked by an upstream 3' splice site (splice acceptor; SA) and a downstream transcriptional termination sequence (polyadenylation sequence; polyA).
When inserted into an intron of an expressed gene, the gene trap cassette is transcribed from the endogenous promoter of that gene in the form of a fusion transcript in which the exon(s) upstream of the insertion site is spliced in frame to the reporter/selectable marker gene. Since transcription is terminated prematurely at the inserted polyadenylation site, the processed fusion transcript encodes a truncated and nonfunctional version of the cellular protein and the reporter/selectable marker. Thus, gene traps simultaneously inactivate and report the expression of the trapped gene at the insertion site, and provide a DNA tag (gene trap sequence tag, GTST) for the rapid identification of the disrupted gene.
Access
The International Gene Trap Consortium is centralizing the data and supplies modified cell lines. |
https://en.wikipedia.org/wiki/Davies%20attack | In cryptography, the Davies attack is a dedicated statistical cryptanalysis method for attacking the Data Encryption Standard (DES). The attack was originally created in 1987 by Donald Davies. In 1994, Eli Biham and Alex Biryukov made significant improvements to the technique. It is a known-plaintext attack based on the non-uniform distribution of the outputs of pairs of adjacent S-boxes. It works by collecting many known plaintext/ciphertext pairs and calculating the empirical distribution of certain characteristics. Bits of the key can be deduced given sufficiently many known plaintexts, leaving the remaining bits to be found through brute force. There are tradeoffs between the number of required plaintexts, the number of key bits found, and the probability of success; the attack can find 24 bits of the key with 2^52 known plaintexts and a 53% success rate.
The Davies attack can be adapted to other Feistel ciphers besides DES. In 1998, Pornin developed techniques for analyzing and maximizing a cipher's resistance to this kind of cryptanalysis. |
https://en.wikipedia.org/wiki/Logarithmic%20number%20system | A logarithmic number system (LNS) is an arithmetic system used for representing real numbers in computer and digital hardware, especially for digital signal processing.
Overview
A number, $X$, is represented in an LNS by two components: the logarithm ($x$) of its absolute value (as a binary word usually in two's complement), and its sign bit ($s$):
$$x = \log_b |X|, \qquad X = (-1)^s\, b^{x}.$$
An LNS can be considered as a floating-point number with the significand being always equal to 1 and a non-integer exponent. This formulation simplifies the operations of multiplication, division, powers and roots, since they are reduced to addition, subtraction, multiplication, and division, respectively.
On the other hand, the operations of addition and subtraction are more complicated and they are calculated by the formulae:
$$\log_b(|X| + |Y|) = x + s_b(y - x), \qquad \log_b\!\big(\,\big||X| - |Y|\big|\,\big) = x + d_b(y - x),$$
where the "sum" function is defined by $s_b(z) = \log_b(1 + b^z)$, and the "difference" function by $d_b(z) = \log_b(|1 - b^z|)$. These functions $s_b$ and $d_b$ are also known as Gaussian logarithms.
The simplification of multiplication, division, roots, and powers is counterbalanced by the cost of evaluating these functions for addition and subtraction. This added cost of evaluation may not be critical when using an LNS primarily for increasing the precision of floating-point math operations.
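A toy illustration of these rules (base b = 2, same-sign addition only; the helper names are hypothetical and the formulas follow the reconstruction above):

```python
# Toy LNS sketch: numbers stored as (sign, log2|X|); illustrative only.
import math

def to_lns(X):
    return (X < 0, math.log2(abs(X)))

def from_lns(s, x):
    return (-1.0 if s else 1.0) * 2.0 ** x

def lns_mul(a, b):                 # multiplication becomes log addition
    return (a[0] ^ b[0], a[1] + b[1])

def lns_add(a, b):                 # same-sign addition via the Gaussian
    assert a[0] == b[0]            # logarithm s(z) = log2(1 + 2**z)
    x, y = max(a[1], b[1]), min(a[1], b[1])
    return (a[0], x + math.log2(1.0 + 2.0 ** (y - x)))

print(from_lns(*lns_mul(to_lns(6.0), to_lns(7.0))))   # 42.0
print(from_lns(*lns_add(to_lns(3.0), to_lns(5.0))))   # 8.0
```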
History
Logarithmic number systems have been independently invented and published at least three times as an alternative to fixed-point and floating-point number systems.
Nicholas Kingsbury and Peter Rayner introduced "logarithmic arithmetic" for digital signal processing (DSP) in 1971.
A similar LNS named "signed logarithmic number system" (SLNS) was described in 1975 by Earl Swartzlander and Aristides Alexopoulos; rather than use two's complement notation for the logarithms, they offset them (scale the numbers being represented) to avoid negative logs.
Samuel Lee and Albert Edgar described a similar system, which they called the "Focus" number system, in 1977.
The mathematical foundations for addition and subtraction in an LNS trace back to Zecchini Leonelli a |
https://en.wikipedia.org/wiki/Mucigel | Mucigel is a slimy substance that covers the root cap of the roots of plants. It is a highly hydrated polysaccharide, most likely a pectin, which is secreted from the outermost (epidermal) cells of the rootcap. Mucigel is formed in the Golgi bodies of such cells, and is secreted through the process of exocytosis. The layer of microorganism-rich soil surrounding the mucigel is called the rhizosphere.
Mucigel serves several functions, including:
Protection of rootcap; prevents desiccation
Lubrication of rootcap; allows root to more efficiently penetrate the soil
Creation of symbiotic environment for nitrogen fixing bacteria (i.e. diazotrophs) and fungi (which help with water absorption)
Provision of a 'diffusion bridge' between the fine root system and soil particles, which allows for a more efficient uptake of water and mineral nutrients by roots in dry soils.
Mucigel is composed of mucilage, microbial exopolysaccharides and glomalin proteins.
See also
Meristem |
https://en.wikipedia.org/wiki/Deblocking%20filter | A deblocking filter is a video filter applied to decoded compressed video to improve visual quality and prediction performance by smoothing the sharp edges which can form between macroblocks when block coding techniques are used. The filter aims to improve the appearance of decoded pictures. It is a part of the specification for both the SMPTE VC-1 codec and the ITU H.264 (ISO MPEG-4 AVC) codec.
H.264 deblocking filter
In contrast with older MPEG-1/2/4 standards, the H.264 deblocking filter is not an optional additional feature in the decoder. It is a feature on both the decoding path and the encoding path, so that the in-loop effects of the filter are taken into account in the reference macroblocks used for prediction. When a stream is encoded, the filter strength can be selected, or the filter can be switched off entirely. Otherwise, the filter strength is determined by the coding modes of adjacent blocks, the quantization step size, and the steepness of the luminance gradient between blocks.
The filter operates on the edges of each 4×4 or 8×8 transform block in the luma and chroma planes of each picture. Each small block's edge is assigned a boundary strength based on whether it is also a macroblock boundary, the coding (intra/inter) of the blocks, whether references (in motion prediction and reference frame choice) differ, and whether it is a luma or chroma edge. Stronger levels of filtering are assigned by this scheme where there is likely to be more distortion. The filter can modify as many as three samples on either side of a given block edge (in the case where an edge is a luma edge that lies between different macroblocks and at least one of them is intra coded). In most cases it can modify one or two samples on either side of the edge (depending on the quantization step size, the tuning of the filter strength by the encoder, the result of an edge detection test, and other factors).
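The following simplified sketch mirrors the kind of boundary-strength assignment described above; it is illustrative rather than the normative H.264 derivation, and the field names and threshold values are assumptions:

```python
# Simplified, illustrative boundary-strength (bS) assignment for one
# block edge. NOT the normative H.264 procedure; names are assumptions.
def boundary_strength(p, q, on_macroblock_edge):
    if p["intra"] or q["intra"]:
        # strongest filtering for intra blocks, more so on a MB boundary
        return 4 if on_macroblock_edge else 3
    if p["has_residual"] or q["has_residual"]:
        return 2                          # coded residual on either side
    different_motion = (p["ref_frame"] != q["ref_frame"]
                        or abs(p["mv"][0] - q["mv"][0]) >= 4
                        or abs(p["mv"][1] - q["mv"][1]) >= 4)
    return 1 if different_motion else 0   # 0: the edge is not filtered
```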
H.263 Annex J deblocking filter
Although the concept of an "in loop" deblock |
https://en.wikipedia.org/wiki/S.%20R.%20Srinivasa%20Varadhan | Sathamangalam Ranga Iyengar Srinivasa Varadhan, (born 2 January 1940) is an Indian American mathematician. He is known for his fundamental contributions to probability theory and in particular for creating a unified theory of large deviations. He is regarded as one of the fundamental contributors to the theory of diffusion processes with an orientation towards the refinement and further development of Itô’s stochastic calculus. In the year 2007, he became the first Asian to win the Abel Prize.
Early life and education
Varadhan was born into a Hindu Tamil Brahmin Iyengar family in 1940 in Chennai (then Madras). In 1953, his family migrated to Kolkata. He grew up in Chennai and Kolkata. Varadhan received his undergraduate degree in 1959 and his postgraduate degree in 1960 from Presidency College, Chennai. He received his doctorate from the Indian Statistical Institute (ISI) in 1963 under C R Rao, who arranged for Andrey Kolmogorov to be present at Varadhan's thesis defence. He was one of the "famous four" (the others being R Ranga Rao, K R Parthasarathy, and Veeravalli S Varadarajan) at ISI during 1956–1963.
Career
Since 1963, he has worked at the Courant Institute of Mathematical Sciences at New York University, where he was at first a postdoctoral fellow (1963–66), strongly recommended by Monroe D Donsker. Here he met Daniel Stroock, who became a close colleague and co-author. In an article in the Notices of the American Mathematical Society, Stroock recalls these early years:
Varadhan is currently a professor at the Courant Institute. He is known for his work with Daniel W Stroock on diffusion processes, and for his work on large deviations with Monroe D Donsker. He has chaired the Mathematical Sciences jury for the Infosys Prize since 2009 and was the chief guest in 2020.
Awards and honours
Varadhan's awards and honours include the National Medal of Science (2010) from President Barack Obama, "the highest honour bestowed by the United States government on scientists, engineers and inventors |
https://en.wikipedia.org/wiki/Raether%20limit | The Raether limit is the physical limiting value of the multiplication factor (M) or gas gain in an ionization avalanche process (Townsend avalanche).
Even though, theoretically, it seems as if M can increase without limit (exponentially), physically it is limited to about M < 10^8, or αx < 20 (where α is the first Townsend coefficient and x is the length of the path of ionization, starting from the point of the primary ionization).
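The two bounds are mutually consistent under the standard exponential avalanche growth law; as a quick check (an added aside, not from the original text):
$$M = e^{\alpha x}, \qquad e^{20} \approx 4.9 \times 10^{8} \approx 10^{8}.$$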
Heinz Raether postulated that this was due to the effect of the space charge on the electric field.
The multiplication factor or gas gain is of fundamental importance for the operation of the proportional counter and Geiger counter ionising radiation detectors.
Sources
The Mechanism of the Electric Spark, by Leonard Benedict Loeb and John M. Meek. Stanford University Press, 1941
High Voltage Engineering, by M. S. Naidu and V. Kamaraju. Tata McGraw-Hill Education, 2009 |
https://en.wikipedia.org/wiki/Boltzmann%20brain | The Boltzmann brain thought experiment suggests that it might be more likely for a single brain to spontaneously form in a void, complete with a memory of having existed in our universe, rather than for the entire universe to come about in the manner cosmologists think it actually did. Physicists use the Boltzmann brain thought experiment as a reductio ad absurdum argument for evaluating competing scientific theories.
In contrast to brain in a vat thought experiments which are about perception and thought, Boltzmann brains are used in cosmology to test our assumptions about thermodynamics and the development of the universe. Over a sufficiently long time, random fluctuations could cause particles to spontaneously form literally any structure of any degree of complexity, including a functioning human brain. The scenario initially involved only a single brain with false memories, but physicist Sean Carroll pointed out that, in a fluctuating universe, the scenario works just as well with entire bodies, even entire galaxies.
The idea is named after the physicist Ludwig Boltzmann (1844–1906), who, in 1896, published a theory that tried to account for the fact that the universe is not as chaotic as the budding field of thermodynamics seemed to predict. He offered several explanations, one of them being that the universe, even after it had progressed to its most likely spread-out and featureless state of thermal equilibrium, would spontaneously fluctuate to a more ordered (or low-entropy) state such as the universe in which we find ourselves. Boltzmann brains were first proposed as a reductio ad absurdum response to this explanation by Boltzmann for the low-entropy state of our universe.
The Boltzmann brain gained new relevance around 2002, when some cosmologists started to become concerned that, in many theories about the universe, human brains are vastly more likely to arise from random fluctuations; this leads to the conclusion that, statistically, humans are likely |
https://en.wikipedia.org/wiki/Avoidance%20reaction | Avoidance reaction is a term used in the description of the movement of paramecium. This helps the cell avoid obstacles and causes other objects to bounce off of the cell's outer membrane. The paramecium does this by reversing the direction in which its cilia beat. This results in stopping, spinning or turning, after which point the paramecium resumes swimming forward. If multiple avoidance reactions follow one another, it is possible for a paramecium to swim backward, though not as smoothly as swimming forward.
Avoidance reaction occurs when the cell hits an obstruction, providing an anterior, mechanical stimulus:
- The cell will then reverse.
- It will then stop and rotate.
- Now facing a new direction, the cell will move off in that direction.
This process will continue until the cell is able to negotiate its way around the obstruction.
Movement of Paramecium cells is caused by control of calcium ions inside the cell and membrane potentials. The simplest explanation for the avoidance reaction is that membrane potential controls the influx of calcium ions, which regulates the beat frequency and angles of cilia on the surface of the cell. |
https://en.wikipedia.org/wiki/Sinus%20%28anatomy%29 | A sinus is a sac or cavity in any organ or tissue, or an abnormal cavity or passage caused by the destruction of tissue. In common usage, "sinus" usually refers to the paranasal sinuses, which are air cavities in the cranial bones, especially those near the nose and connecting to it. Most individuals have four paired cavities located in the cranial bone or skull.
Etymology
Sinus is Latin for "bay", "pocket", "curve", or "bosom". In anatomy, the term is used in various contexts.
The word "sinusitis" is used to indicate that one or more of the membrane linings found in the sinus cavities has become inflamed or infected. It is however distinct from a fistula, which is a tract connecting two epithelial surfaces. If left untreated, infections occurring in the sinus cavities can affect the chest and lungs.
Sinuses in the body
Paranasal sinuses
Maxillary
Ethmoid
Sphenoid
Frontal
Dural venous sinuses
Anterior midline
Cavernous
Superior petrosal
Inferior petrosal
Central sulcus
Inferior sagittal
Superior sagittal
Straight
Confluence of sinuses
Lateral
Transverse
Sigmoid
Inferior
Occipital
Arterial sinuses
Carotid sinus
Organ-specific spaces
Costodiaphragmatic recess (lung/diaphragm sinus, also known as phrenicocostal sinus)
Renal sinus (drains renal medulla)
Coronary sinus (subdivisions of the pericardium)
Lymphatic spaces
Subcapsular sinus (space between the lymph node and capsule)
Trabecular sinuses (space around the invaginations of the lymphatic capsule)
Medullary sinuses (space between the lymphatic cortex and efferent lymphatic drainage)
Paranasal sinuses
The four paired sinuses or air cavities can be referred to as:
Ethmoid sinus cavities which are located between the eyes.
Frontal sinus cavities which can be found above the eyes (more in the forehead region).
Maxillary sinus cavities are located on either side of the nostrils (cheekbone areas).
Sphenoid sinuses that are located behind the eyes and lie in the deeper recesses of th |
https://en.wikipedia.org/wiki/William%20Fulton%20%28mathematician%29 | William Edgar Fulton (born August 29, 1939) is an American mathematician, specializing in algebraic geometry.
Education and career
He received his undergraduate degree from Brown University in 1961 and his doctorate from Princeton University in 1966. His Ph.D. thesis, written under the supervision of Gerard Washnitzer, was on The fundamental group of an algebraic curve.
Fulton worked at Princeton and Brandeis University from 1965 until 1970, when he began teaching at Brown. In 1987 he moved to the University of Chicago. He is, as of 2011, a professor at the University of Michigan.
Fulton is known as the author or coauthor of a number of popular texts, including Algebraic Curves and Representation Theory.
Awards and honors
In 1996 he received the Steele Prize for mathematical exposition for his text Intersection Theory. Fulton has been a member of the United States National Academy of Sciences since 1997 and a fellow of the American Academy of Arts and Sciences since 1998, and he was elected a foreign member of the Royal Swedish Academy of Sciences in 2000. In 2010, he was awarded the Steele Prize for Lifetime Achievement. In 2012 he became a fellow of the American Mathematical Society.
Selected works
Algebraic Curves: An Introduction To Algebraic Geometry, with Richard Weiss. New York: Benjamin, 1969. Reprint ed.: Redwood City, CA, USA: Addison-Wesley, Advanced Book Classics, 1989. . Full text online.
See also
Fulton–Hansen connectedness theorem |
https://en.wikipedia.org/wiki/Glutaredoxin | Glutaredoxins (also known as Thioltransferase) are small redox enzymes of approximately one hundred amino-acid residues that use glutathione as a cofactor. In humans this oxidation repair enzyme is also known to participate in many cellular functions, including redox signaling and regulation of glucose metabolism. Glutaredoxins are oxidized by substrates, and reduced non-enzymatically by glutathione. In contrast to thioredoxins, which are reduced by thioredoxin reductase, no oxidoreductase exists that specifically reduces glutaredoxins. Instead, glutaredoxins are reduced by the oxidation of glutathione. Reduced glutathione is then regenerated by glutathione reductase. Together these components compose the glutathione system.
Like thioredoxin, which functions in a similar way, glutaredoxin possesses an active centre disulfide bond. It exists in either a reduced or an oxidized form where the two cysteine residues are linked in an intramolecular disulfide bond. Glutaredoxins function as electron carriers in the glutathione-dependent synthesis of deoxyribonucleotides by the enzyme ribonucleotide reductase. Moreover, GRX act in antioxidant defense by reducing dehydroascorbate, peroxiredoxins, and methionine sulfoxide reductase. Beside their function in antioxidant defense, bacterial and plant GRX were shown to bind iron-sulfur clusters and to deliver the cluster to enzymes on demand.
In viruses
Glutaredoxin has been sequenced in a variety of viruses. On the basis of extensive sequence similarity, it has been proposed that Vaccinia virus protein O2L is a glutaredoxin. Bacteriophage T4 thioredoxin also seems to be evolutionarily related; in position 5 of the pattern, T4 thioredoxin has Val instead of Pro.
In plants
Approximately 30 GRX isoforms are described in the model plant Arabidopsis thaliana and 48 in Oryza sativa L. According to their redox-active centre, they are subgrouped into six classes of the CSY[C/S]-, CGFS-, CC-type and 3 groups with additional domain of |
https://en.wikipedia.org/wiki/Bradley%20effect | The Bradley effect (less commonly the Wilder effect) is a theory concerning observed discrepancies between voter opinion polls and election outcomes in some United States government elections where a white candidate and a non-white candidate run against each other. The theory proposes that some white voters who intend to vote for the white candidate would nonetheless tell pollsters that they are undecided or likely to vote for the non-white candidate. It was named after Los Angeles mayor Tom Bradley, an African-American who lost the 1982 California gubernatorial election to California attorney general George Deukmejian, a white person, despite Bradley being ahead in voter polls going into the elections.
The Bradley effect posits that the inaccurate polls were skewed by the phenomenon of social desirability bias. Specifically, some voters give inaccurate polling responses for fear that, by stating their true preference, they will open themselves to criticism of racial motivation. Members of the public may feel under pressure to provide an answer that is deemed to be more publicly acceptable, or politically correct. The reluctance to give accurate polling answers has sometimes extended to post-election exit polls as well. The race of the pollster conducting the interview may factor into voters' answers.
Some analysts have dismissed the theory of the Bradley effect. Others have argued that it may have existed in past elections, but not in more recent ones, such as when the African-American Barack Obama was elected President of the United States in 2008 and 2012, both times against a white opponent. Others believe that it is a persistent phenomenon. Similar effects have been posited in other contexts, for example, the shy Tory factor and spiral of silence.
Origin
In 1982, Tom Bradley, the long-time mayor of Los Angeles, ran as the Democratic Party's candidate for Governor of California against Republican candidate George Deukmejian, who was white (of Armenian descen |
https://en.wikipedia.org/wiki/Geometric%20programming | A geometric program (GP) is an optimization problem of the form
$$\begin{aligned} \text{minimize} \quad & f_0(x) \\ \text{subject to} \quad & f_i(x) \le 1, \quad i = 1, \ldots, m, \\ & g_j(x) = 1, \quad j = 1, \ldots, p, \end{aligned}$$
where $f_0, \dots, f_m$ are posynomials and $g_1, \dots, g_p$ are monomials. In the context of geometric programming (unlike standard mathematics), a monomial is a function from $\mathbb{R}^n_{>0}$ to $\mathbb{R}$ defined as
$$f(x) = c\, x_1^{a_1} x_2^{a_2} \cdots x_n^{a_n},$$
where $c > 0$ and $a_i \in \mathbb{R}$. A posynomial is any sum of monomials.
Geometric programming is
closely related to convex optimization: any GP can be made convex by means of a change of variables. GPs have numerous applications, including component sizing in IC design, aircraft design, maximum likelihood estimation for logistic regression in statistics, and parameter tuning of positive linear systems in control theory.
Convex form
Geometric programs are not in general convex optimization problems, but they can be transformed to convex problems by a change of variables and a transformation of the objective and constraint functions. In particular, after performing the change of variables and taking the log of the objective and constraint functions, the functions , i.e., the posynomials, are transformed into log-sum-exp functions, which are convex, and the functions , i.e., the monomials, become affine. Hence, this transformation transforms every GP into an equivalent convex program. In fact, this log-log transformation can be used to convert a larger class of problems, known as log-log convex programming (LLCP), into an equivalent convex form.
Software
Several software packages exist to assist with formulating and solving geometric programs.
MOSEK is a commercial solver capable of solving geometric programs as well as other non-linear optimization problems.
CVXOPT is an open-source solver for convex optimization problems.
GPkit is a Python package for cleanly defining and manipulating geometric programming models. There are a number of example GP models written with this package here.
GGPLAB is a MATLAB toolbox for specifying and solving geometric programs (GPs) and generalized geometric programs (GGPs).
CVXPY is a Python-embedded modeling language for s |
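As an illustration of the log-log transformation in practice, CVXPY exposes it directly through its gp=True mode; a minimal sketch (the toy problem itself is invented for illustration, assuming the cvxpy package is installed):

```python
# Minimal geometric program solved with CVXPY's gp=True (DGP) mode.
import cvxpy as cp

x = cp.Variable(pos=True)
y = cp.Variable(pos=True)

# Posynomial objective with a monomial constraint:
# minimize x + y subject to x*y >= 4.
problem = cp.Problem(cp.Minimize(x + y), [x * y >= 4])
problem.solve(gp=True)   # applies the log-log change of variables

print(problem.value)     # ~4.0, attained at x = y = 2
```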
https://en.wikipedia.org/wiki/Posynomial | A posynomial, also known as a posinomial in some literature, is a function of the form
$$f(x_1, x_2, \dots, x_n) = \sum_{k=1}^{K} c_k\, x_1^{a_{1k}} x_2^{a_{2k}} \cdots x_n^{a_{nk}},$$
where all the coordinates $x_i$ and coefficients $c_k$ are positive real numbers, and the exponents $a_{ik}$ are real numbers. Posynomials are closed under addition, multiplication, and nonnegative scaling.
For example,
$$f(x_1, x_2, x_3) = 2.7\, x_1^{2}\, x_2^{-1/3}\, x_3^{0.87}$$
is a posynomial.
Posynomials are not the same as polynomials in several independent variables. A polynomial's exponents must be non-negative integers, but its independent variables and coefficients can be arbitrary real numbers; on the other hand, a posynomial's exponents can be arbitrary real numbers, but its independent variables and coefficients must be positive real numbers. This terminology was introduced by Richard J. Duffin, Elmor L. Peterson, and Clarence Zener in their seminal book on geometric programming.
Posynomials are a special case of signomials, the latter not having the restriction that the coefficients $c_k$ be positive. |
https://en.wikipedia.org/wiki/Hilbert%20projection%20theorem | In mathematics, the Hilbert projection theorem is a famous result of convex analysis that says that for every vector $x$ in a Hilbert space $H$ and every nonempty closed convex set $C \subseteq H,$ there exists a unique vector $m \in C$ for which $\|c - x\|$ is minimized over the vectors $c \in C$; that is, such that $\|m - x\| \le \|c - x\|$ for every $c \in C.$
Finite dimensional case
Some intuition for the theorem can be obtained by considering the first order condition of the optimization problem.
Consider a finite dimensional real Hilbert space $H$ with a subspace $C$ and a point $x.$ If $m$ is a minimum point (minimizer) of the function $N : C \to \mathbb{R}$ defined by $N(c) = \|c - x\|$ (which is the same as the minimum point of $c \mapsto \|c - x\|^2$), then the derivative must be zero at $m.$
In matrix derivative notation,
$$\partial \|x - c\|^2 = 2 \langle x - c, \partial c \rangle = 0 \quad \text{at } c = m.$$
Since $\partial c$ is a vector in $C$ that represents an arbitrary tangent direction, it follows that $x - m$ must be orthogonal to every vector in $C.$
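This orthogonality conclusion is easy to verify numerically when C is a subspace of $\mathbb{R}^n$; an illustrative NumPy sketch (all names hypothetical):

```python
# Illustrative check of the first-order condition: the residual of a
# least-squares projection onto a subspace is orthogonal to that subspace.
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(5, 2))          # columns span a subspace C of R^5
x = rng.normal(size=5)

coef, *_ = np.linalg.lstsq(A, x, rcond=None)
m = A @ coef                          # m = projection of x onto C
print(A.T @ (x - m))                  # ~[0, 0]: x - m is orthogonal to C
```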
Statement
Detailed elementary proof
Proof by reduction to a special case
It suffices to prove the theorem in the case of $x = 0$ because the general case follows from the statement below by replacing $C$ with $C - x.$
Consequences
:
If then
which implies
:
Let where is the underlying scalar field of and define
which is continuous and linear because this is true of each of its coordinates
The set is closed in because is closed in and is continuous.
The kernel of any linear map is a vector subspace of its domain, which is why is a vector subspace of
Proof that $C + C^\perp = H$:
Let $x \in H$.
The Hilbert projection theorem guarantees the existence of a unique $m \in C$ such that $\lVert x - m \rVert \le \lVert x - c \rVert$ for all $c \in C$ (or equivalently, for all $x - c \in x - C$).
Let $p := x - m$ so that $x = m + p$, and it remains to show that $p \in C^\perp$.
The inequality above can be rewritten as: $\lVert p \rVert \le \lVert z \rVert$ for all $z \in x - C$.
Because $m \in C$ and $C$ is a vector space, $m + C = C$ and $C = -C$, which implies that $x - C = x + C = p + C$.
The previous inequality thus becomes $\lVert p \rVert \le \lVert p + c \rVert$ for all $c \in C$,
or equivalently, $\lVert p \rVert^2 \le \lVert p + c \rVert^2$ for all $c \in C$.
But this last statement is true if and only if $\langle p, c \rangle = 0$ for every $c \in C$. Thus $p \in C^\perp$.
Properties
Expression as a global minimum
The statement and conclusion of the Hilbert projection theorem can be expressed in terms of global minima of the following functions. Their notation will also be used to simplify certain statements.
Given a non-emp |
https://en.wikipedia.org/wiki/HECToR | HECToR (High End Computing Terascale Resource) was a British academic national supercomputer service funded by EPSRC, Natural Environment Research Council (NERC) and BBSRC for the UK academic community. The HECToR service was run by partners including EPCC, Science and Technology Facilities Council (STFC) and Numerical Algorithms Group (NAG).
The supercomputer itself (in its final phase, a Cray XE6) was located at the University of Edinburgh in Scotland. The first phase came on line in October 2007, and, by the time it was decommissioned, it had been upgraded to the Phase 3 configuration, with a peak performance of over 800 teraflops. Its successor is called ARCHER.
Hardware
HECToR's hardware configuration has been progressively upgraded since the system was first commissioned.
Phase 1
HECToR's initial configuration, known as Phase 1, featured 60 Cray XT4 cabinets containing 1416 compute blades, giving a total of 11,328 2.8 GHz AMD Opteron processor cores, connected to 576 terabytes of RAID backing storage, later increased to 934 TB. The peak performance of the system was 59 teraflops.
In August 2008, 28 Cray X2 Black Widow vector compute nodes were added to the system. Each node had 4 vector processors, giving a total of 112 processors. Each processor was capable of 25.6 gigaflops, giving a peak performance of 2.87 teraflops. Each 4-processor node shared 32 gigabytes of memory.
Phase 2a
In the summer of 2009, the XT4 cabinets were upgraded with quad-core 2.3 GHz Opteron processors with 8 GB memory each. This doubled the number of processor cores to 22,656, and increased total system memory to 45.3 terabytes. Peak performance was increased to 208 teraflops.
Phase 2b
The Phase 2b upgrade, performed in 2010, involved installation of a new 20-cabinet Cray XT6 system featuring 12-core Opteron 6100 processors, giving a total of 44,544 cores and a peak performance of over 360 teraflops. At the same time the existing XT4 system was reduced to approximately half its origin |
https://en.wikipedia.org/wiki/Sticky%20mouse | Sticky mouse is a murine possessing a gene mutation in the enzyme alanyl-tRNA synthetase (AARS). The sticky mouse, with this particular mutation, presents a good model in which to investigate mechanisms of neuronal degeneration. Its most immediately obvious symptom is a sticky secretion on the mouse's fur (thus the name); however, it is accompanied by lack of muscle control, ataxia, alopecia, loss of Purkinje cells in the cerebellum, and eventually, death.
Sticky mouse is one of several animal mutants that are known to have problems in mRNA translation and are used in studies.
See also
Wasted mouse (wst) - EEF1A2 defect
Harlequin mouse
Reeler - RELN defect
Shaking rat Kawasaki - RELN defect |
https://en.wikipedia.org/wiki/Disodium%20phosphate | Disodium phosphate (DSP), or disodium hydrogen phosphate, or sodium phosphate dibasic, is the inorganic compound with the formula Na2HPO4. It is one of several sodium phosphates. The salt is known in anhydrous form as well as forms with 2, 7, 8, and 12 hydrates. All are water-soluble white powders, the anhydrous salt being hygroscopic.
The pH of disodium hydrogen phosphate water solution is between 8.0 and 11.0, meaning it is moderately basic:
HPO42− + H2O ⇌ H2PO4− + OH−
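A rough numerical check of this basicity uses the standard amphiprotic-species approximation pH ≈ (pKa2 + pKa3)/2; the pKa values below are common textbook figures, assumed here rather than taken from this article:

```python
# Approximate pH of a Na2HPO4 solution via the amphiprotic-species rule.
pKa2 = 7.21   # second dissociation of H3PO4: H2PO4- <-> HPO4^2-
pKa3 = 12.32  # third dissociation: HPO4^2- <-> PO4^3-

print((pKa2 + pKa3) / 2)  # ~9.8, inside the 8.0-11.0 range quoted above
```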
Production and reactions
It can be generated by neutralization of phosphoric acid with sodium hydroxide:
H3PO4 + 2 NaOH → Na2HPO4 + 2 H2O
Industrially, it is prepared in a two-step process by treating dicalcium phosphate with sodium bisulfate, which precipitates calcium sulfate:
CaHPO4 + NaHSO4 → NaH2PO4 + CaSO4
In the second step, the resulting solution of monosodium phosphate is partially neutralized:
NaH2PO4 + NaOH → Na2HPO4 + H2O
Uses
It is used in conjunction with trisodium phosphate in foods and water softening treatment. In foods, it is used to adjust pH. Its presence prevents coagulation in the preparation of condensed milk. Similarly, it is used as an anti-caking additive in powdered products. It is used in desserts and puddings, e.g. Cream of Wheat to quicken cook time, and Jell-O Instant Pudding for thickening. In water treatment, it retards calcium scale formation. It is also found in some detergents and cleaning agents.
Heating solid disodium phosphate gives the useful compound tetrasodium pyrophosphate:
2 Na2HPO4 → Na4P2O7 + H2O
Laxative
Monobasic and dibasic sodium phosphate are used as a saline laxative to treat constipation or to clean the bowel before a colonoscopy. |
https://en.wikipedia.org/wiki/ZoneAlarm%20Z100G | ZoneAlarm Secure Wireless Router Z100G is a discontinued Unified Threat Management security router for the home and SOHO market.
The Z100G was developed by SofaWare Technologies, a Check Point Company. The hardware is similar to SofaWare's Safe@Office and VPN-1 Edge lines, and the software differs only in what features the license allows the user to access and to what degree.
Features
ZoneAlarm Z100G provides networking and security related features, including:
Router with 4 Fast Ethernet LAN ports and one WAN port.
Wireless access point with 108 Mbit/s Super G and Extended Range (XR) technologies.
Stateful Inspection Firewall
Remote Access VPN for a single user at a time
Intrusion Prevention IPS
Gateway Antivirus
Web filtering
USB 2.0 Print Server
Security Reporting
Integrated ActiveX Remote Desktop client to connect to internal computers
Performance
Firewall Throughput - 70 Mbit/s
VPN Throughput - 5 Mbit/s (AES)
Concurrent Firewall Connections - 4,000
External links
ZoneAlarm Z100G Home Page
ZoneAlarm Z100G Technical Specifications
https://en.wikipedia.org/wiki/Digital%20speaker | Digital speakers or digital sound reconstruction (DSR) systems are a form of loudspeaker technology. Not to be confused with modern digital formats and processing, they are yet to be developed as a mature technology, having been experimented with extensively by Bell Labs as far back as the 1920s, but not realized as commercial products.
Principle of operation
The least significant bit drives a tiny speaker driver, of whatever physical design is chosen; a value of "1" causes this driver to be driven full amplitude, a value of "0" causes it to be off. This allows for high efficiency in the amplifier, which at any time is either passing zero current, or required to drop the output voltage by zero volts, therefore in a theoretical ideal amplifier dissipating no power as heat at any time. The next least significant bit drives a speaker of twice the area (most often, but not necessarily, a ring around the previous driver), again to either full amplitude, or off. The next least significant bit drives a speaker of twice this area, and so on.
Other approaches are possible. For example, instead of doubling the area of the next most significant diaphragm segment, it could simply be driven so it stroked twice as far. The digital principle of operation and attendant amplifier efficiency benefits would remain.
With the advent of smaller transducers made possible by manufacturing processes such as CMOS-MEMS, a more practical approach is to construct an array of speakers, known as a Digital Loudspeaker Array (DLA) or Digital Transducer Array (DTA). The least significant bit is represented by a single transducer, and the count doubles for each successively more significant bit. An n-bit speaker array consists of 2^n − 1 transducers, and the mth bit of such an array is represented by 2^(m−1) transducers. The entire array basically functions as a thermometer-coded DAC that can decode a PCM signal of the same number of bits as the array into a sound wave. Bit grouping or PWM encoding are potential ways to dec |
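To make the binary weighting concrete, the sketch below (function name and layout invented for illustration) maps an n-bit PCM sample onto bit-weighted transducer groups, with each group either fully driven or off:

```python
def driver_states(sample: int, n_bits: int) -> list[int]:
    """For each bit (LSB first), return how many transducers in that bit's
    group are driven: all 2**m of them, or none."""
    states = []
    for m in range(n_bits):
        group_size = 2 ** m        # bit m drives a group of 2**m transducers
        on = (sample >> m) & 1     # 1 = full amplitude, 0 = off
        states.append(group_size * on)
    return states

# A 4-bit array has 1 + 2 + 4 + 8 = 15 transducers; the sample 0b1011 (11)
# drives the groups of size 1, 2 and 8, i.e. 11 transducers in total.
print(driver_states(0b1011, 4))  # [1, 2, 0, 8]
```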
https://en.wikipedia.org/wiki/Network%20agility | Network Agility is an architectural discipline for computer networking. It can be defined as:
The ability of network software and hardware to automatically control and configure itself and other network assets across any number of devices on a network.
With regards to network hardware, network agility is used when referring to automatic hardware configuration and reconfiguration of network devices e.g. routers, switches, SNMP devices.
Network agility, as a software discipline, borrows from many fields, both technical and commercial.
On the technical side, network agility solutions leverage techniques from areas such as:
Service-oriented architecture (SOA)
Object-oriented design
Architectural patterns
Loosely coupled data streaming (e.g.: web services)
Iterative design
Artificial intelligence
Inductive scheduling
On-demand computing
Utility computing
Commercially, network agility is about solving real-world business problems using existing technology. It forms a three-way bridge between business processes, hardware resources, and software assets. In more detail, it takes, as input:
the business processes – i.e. what the network must achieve in real business terms;
the hardware that resides within the network; and
the set of software assets that run on this hardware.
Much of this input can be obtained through automatic discovery – finding the hardware, its types and locations, software, licenses etc. The business processes can be inferred to a certain degree, but it is these processes that business managers need to be able to control and organize.
Software resources discovered on the network can take a variety of forms – some assets may be licensed software products, others as blocks of software service code that can be accessed via some service enterprise portal, such as (but not necessarily) web services. These services may reside in-house, or they may be 'on-demand' via an on-line subscription service. Indeed, the primary motivation of network |
https://en.wikipedia.org/wiki/Flow%20to%20HDL | Flow to HDL tools and methods convert flow-based system design into a hardware description language (HDL) such as VHDL or Verilog. Typically this is a method of creating designs for field-programmable gate array, application-specific integrated circuit prototyping and digital signal processing (DSP) design. Flow-based system design is well-suited to field-programmable gate array design as it is easier to specify the innate parallelism of the architecture.
History
The use of flow-based design tools in engineering is a reasonably new trend. Unified Modeling Language is the most widely used example for software design. The use of flow-based design tools allows for more holistic system design and faster development. C to HDL tools and flows have a similar aim, but use C or C-like programming languages.
Applications
Most applications are ones which take too long with existing supercomputer architectures. These include bioinformatics, CFD, financial processing and oil and gas survey data analysis. Embedded applications that require high performance or real-time data processing are also an area of use. System-on-a-chip design can also be done using this flow.
Examples
Xilinx System Generator from Xilinx
StarBridge VIVA from the defunct Star Bridge Systems
Nimbus from defunct Exsedia
External links
an overview of flows by Daresbury Labs.
Xilinx's ESL initiative, some products listed and C to VHDL tools.
See also
Application Specific Integrated Circuit (ASIC)
C to HDL
Comparison of Free EDA software
Comparison of EDA Software
Complex programmable logic device (CPLD)
ELLA (programming language)
Electronic design automation (EDA)
Embedded C++
Field Programmable Gate Array (FPGA)
Hardware description language (HDL)
Handel-C
Icarus Verilog
Lustre (programming language)
MyHDL
Open source software
Register transfer notation
Register transfer level (RTL)
Ruby (hardware description language)
SpecC
SystemC
SystemVerilog
Systemverilog DPI
VHDL
VHDL-AMS
Verilog
Veri |
https://en.wikipedia.org/wiki/InterPlaNet | InterPlaNet (IPN), not to be confused with the InterPlanetary Network, is a computer networking protocol designed to operate at interplanetary distances, where traditional protocols such as the Internet Protocol break down. It is the base for the Interplanetary Internet. It has been under development by Vint Cerf and NASA since 1998; a permanent network link to Mars was planned for 2008, until the Mars Telecommunications Orbiter was canceled in 2005. The protocol was expected to be space-qualified and ready for use by around 2010.
IPN Protocol Stack
The distance between the planets and their constant motion impose long and variable delays on communications, so the traditional protocol stack does not function properly. Delay/Disruption Tolerant Networking (DTN) is implemented to address these constraints. DTN inserts a new set of protocols, called Bundling Protocols (BPs), into the traditional protocol stack. BP is a standard method of transmitting data using store-and-forward, where data are stored for a period of time at intermediate nodes along a network path, and forwarded to the next station when a link is available. The Licklider Transmission Protocol (LTP) is a BP and a transport protocol that functions in deep space. Consider, for example, a dataflow with an intermediate satellite between deep space and the Earth. The intermediate node has two transport protocols: LTP (for data transmission over deep-space communication links) and TCP (for transmission over Earth communication links). The intermediate node turns the received data from LTP packets into TCP packets using their underlying convergence layer protocols. The protocols in the lower layers might change to support the corresponding communication and network.
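A toy store-and-forward relay, sketched under simplifying assumptions (illustrative only; not the Bundle Protocol or an LTP implementation):

```python
from queue import Queue

class Node:
    """Minimal DTN-style node: store bundles until a contact window opens."""
    def __init__(self, name):
        self.name = name
        self.stored = Queue()  # bundles held locally while no link is available

    def receive(self, bundle):
        self.stored.put(bundle)  # "store": take custody of the bundle

    def forward_all(self, next_hop, link_up):
        # "forward": relay bundles only while the link is up
        while link_up() and not self.stored.empty():
            next_hop.receive(self.stored.get())

# A relay satellite buffers bundles from a deep-space probe and forwards
# them to a ground station once the Earth link becomes available.
relay, ground = Node("relay"), Node("ground")
for b in ["bundle-1", "bundle-2"]:
    relay.receive(b)
relay.forward_all(ground, link_up=lambda: True)
print(ground.stored.qsize())  # 2
```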
See also
Intergalactic Computer Network
Interplanetary Internet
Delay-tolerant networking |
https://en.wikipedia.org/wiki/Earth%20Impact%20Database | The Earth Impact Database is a database of confirmed impact structures or craters on Earth. It was initiated in 1955 by the Dominion Observatory, Ottawa, under the direction of Carlyle S. Beals. Since 2001, it has been maintained as a not-for-profit source of information at the Planetary and Space Science Centre at the University of New Brunswick, Canada.
The database lists 190 confirmed impact sites.
Other lists are wider in scope by including more than just confirmed sites, such as probable, possible, suspected and rejected or discredited impact sites on their lists. These are used for screening and tracking study of possible impact sites. Sites will appear first in these lists while under study and may be incorporated into UNB's Earth Impact Database after confirmation and collection of enough information about the site to satisfy the database's strict entry criteria.
A previous list was maintained by the Impact Field Studies Group at the University of Tennessee, Knoxville. The Catalogue of the Earth's Impact structures is maintained at the Siberian Center for Global Catastrophes.
See also
List of impact craters on Earth
List of possible impact structures on Earth |
https://en.wikipedia.org/wiki/Radial%20basis%20function%20network | In the field of mathematical modeling, a radial basis function network is an artificial neural network that uses radial basis functions as activation functions. The output of the network is a linear combination of radial basis functions of the inputs and neuron parameters. Radial basis function networks have many uses, including function approximation, time series prediction, classification, and system control. They were first formulated in a 1988 paper by Broomhead and Lowe, both researchers at the Royal Signals and Radar Establishment.
Network architecture
Radial basis function (RBF) networks typically have three layers: an input layer, a hidden layer with a non-linear RBF activation function and a linear output layer. The input can be modeled as a vector of real numbers $\mathbf{x} \in \mathbb{R}^n$. The output of the network is then a scalar function of the input vector, $\varphi : \mathbb{R}^n \to \mathbb{R}$, and is given by
$$\varphi(\mathbf{x}) = \sum_{i=1}^{N} a_i \rho\left(\lVert \mathbf{x} - \mathbf{c}_i \rVert\right)$$
where $N$ is the number of neurons in the hidden layer, $\mathbf{c}_i$ is the center vector for neuron $i$, and $a_i$ is the weight of neuron $i$ in the linear output neuron. Functions that depend only on the distance from a center vector are radially symmetric about that vector, hence the name radial basis function. In the basic form, all inputs are connected to each hidden neuron. The norm is typically taken to be the Euclidean distance (although the Mahalanobis distance appears to perform better with pattern recognition) and the radial basis function is commonly taken to be Gaussian
$$\rho\left(\lVert \mathbf{x} - \mathbf{c}_i \rVert\right) = \exp\left(-\beta_i \lVert \mathbf{x} - \mathbf{c}_i \rVert^2\right).$$
The Gaussian basis functions are local to the center vector in the sense that
$$\lim_{\lVert \mathbf{x} \rVert \to \infty} \rho\left(\lVert \mathbf{x} - \mathbf{c}_i \rVert\right) = 0,$$
i.e. changing parameters of one neuron has only a small effect for input values that are far away from the center of that neuron.
Given certain mild conditions on the shape of the activation function, RBF networks are universal approximators on a compact subset of $\mathbb{R}^n$. This means that an RBF network with enough hidden neurons can approximate any continuous function on a closed, bounded set with arbitrary precision.
The parameters $a_i$, $\mathbf{c}_i$, and $\beta_i$ are determined in a manner tha |
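A minimal forward pass matching the Gaussian formulation above (array shapes and values are illustrative):

```python
import numpy as np

def rbf_forward(x, centers, weights, beta):
    """Gaussian RBF network: sum_i a_i * exp(-beta_i * ||x - c_i||^2)."""
    dists_sq = np.sum((centers - x) ** 2, axis=1)  # squared distance to each center
    activations = np.exp(-beta * dists_sq)         # hidden-layer Gaussian activations
    return weights @ activations                   # linear output layer

# Toy usage: three hidden neurons in a 2-D input space.
centers = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
weights = np.array([1.0, -0.5, 0.25])
beta = np.array([1.0, 2.0, 0.5])
print(rbf_forward(np.array([0.5, 0.5]), centers, weights, beta))
```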
https://en.wikipedia.org/wiki/Google%20Workspace | Google Workspace is a collection of cloud computing, productivity and collaboration tools, software and products developed and marketed by Google. It consists of Gmail, Contacts, Calendar, Meet and Chat for communication; Currents for employee engagement; Drive for storage; and the Google Docs Editors suite for content creation. An Admin Panel is provided for managing users and services. Depending on the edition, Google Workspace may also include the digital interactive whiteboard Jamboard and an option to purchase add-ons such as the telephony service Voice. The education edition adds the learning platform Google Classroom and is today named Google Workspace for Education.
While most of these services are individually available at no cost to consumers who use their free Google (Gmail) accounts, Google Workspace adds enterprise features such as custom email addresses at a domain (e.g. @yourcompany.com), an option for unlimited Drive storage, additional administrative tools and advanced settings, as well as 24/7 phone and email support.
The suite was first launched in February 2006 as Gmail for Your Domain, before being expanded into Google Apps for Your Domain in the same year, later rebranded as G Suite in 2016, then rebranded again in 2020 as Google Workspace.
As of April 2020, G Suite had 6 million paying businesses, and 120 million G Suite for Education users.
History
From February 10, 2006, Google started testing a version of the service at San Jose City College, hosting Gmail accounts with SJCC domain addresses and admin tools for account management. On August 28, 2006, Google launched Google Apps for Your Domain, a set of apps for organizations. Available for free as a beta service, it included Gmail, Talk, Calendar, and the Page Creator, which was later replaced with Sites. Dave Girouard, then Google's vice president and general manager for enterprise, outlined its benefits for business customers: "Organizations can let Google be the experts in delivering high q |
https://en.wikipedia.org/wiki/Volunteer%20computing | Volunteer computing is a type of distributed computing in which people donate their computers' unused resources to a research-oriented project, sometimes in exchange for credit points. The fundamental idea behind it is that a modern desktop computer is sufficiently powerful to perform billions of operations a second, but for most users only 10–15% of its capacity is used. Common tasks such as word processing or web browsing leave the computer mostly idle.
The practice of volunteer computing, which dates back to the mid-1990s, can potentially make substantial processing power available to researchers at minimal cost. Typically, a program running on a volunteer's computer periodically contacts a research application to request jobs and report results. A middleware system usually serves as an intermediary.
History
The first volunteer computing project was the Great Internet Mersenne Prime Search, which started in January 1996. It was followed in 1997 by distributed.net. In 1997 and 1998, several academic research projects developed Java-based systems for volunteer computing; examples include Bayanihan, Popcorn, Superweb, and Charlotte.
The term volunteer computing was coined by Luis F. G. Sarmenta, the developer of Bayanihan. It is also appealing for global efforts on social responsibility, or corporate social responsibility, as reported in the Harvard Business Review.
In 1999, the SETI@home and Folding@home projects were launched. These projects received considerable media coverage, and each one attracted several hundred thousand volunteers.
Between 1998 and 2002, several companies were formed with business models involving volunteer computing. Examples include Popular Power, Porivo, Entropia, and United Devices.
In 2002, the Berkeley Open Infrastructure for Network Computing (BOINC) project was founded at University of California, Berkeley Space Sciences Laboratory, funded by the National Science Foundation. BOINC provides a complete middleware system |
https://en.wikipedia.org/wiki/Membrane%20fusion%20protein | Membrane fusion proteins (not to be confused with chimeric or fusion proteins) are proteins that cause fusion of biological membranes. Membrane fusion is critical for many biological processes, especially in eukaryotic development and viral entry. Fusion proteins can originate from genes encoded by infectious enveloped viruses, ancient retroviruses integrated into the host genome, or solely by the host genome. Post-transcriptional modifications made to the fusion proteins by the host, namely addition and modification of glycans and acetyl groups, can drastically affect fusogenicity (the ability to fuse).
Fusion in eukaryotes
Eukaryotic genomes contain several gene families, of host and viral origin, which encode products involved in driving membrane fusion. While adult somatic cells do not typically undergo membrane fusion under normal conditions, gametes and embryonic cells follow developmental pathways to non-spontaneously drive membrane fusion, such as in placental formation, syncytiotrophoblast formation, and neurodevelopment. Fusion pathways are also involved in the development of musculoskeletal and nervous system tissues. Vesicle fusion events involved in neurotransmitter trafficking also rely on the catalytic activity of fusion proteins.
SNARE family
The SNARE family include bona fide eukaryotic fusion proteins. They are only found in eukaryotes and their closest archaeal relatives like Heimdallarchaeota.
Retroviral
These proteins originate from the env gene of endogenous retroviruses. They are domesticated viral class I fusion proteins.
Syncytins are responsible for structures of the placenta.
Syncytin-1
Syncytin-2
ERV3 is not functional in humans
HAP2 family
HAP2 is a domesticated viral class II fusion protein found in diverse eukaryotes including Toxoplasma, vascular plants, and fruit flies. This protein is essential for gamete fusion in these organisms.
Pathogenic viral fusion
Enveloped viruses readily overcome the thermodynamic barrier |
https://en.wikipedia.org/wiki/Phred%20quality%20score | A Phred quality score is a measure of the quality of the identification of the nucleobases generated by automated DNA sequencing. It was originally developed for the computer program Phred to help in the automation of DNA sequencing in the Human Genome Project. Phred quality scores are assigned to each nucleotide base call in automated sequencer traces. The FASTQ format encodes phred scores as ASCII characters alongside the read sequences. Phred quality scores have become widely accepted to characterize the quality of DNA sequences, and can be used to compare the efficacy of different sequencing methods. Perhaps the most important use of Phred quality scores is the automatic determination of accurate, quality-based consensus sequences.
Definition
Phred quality scores $Q$ are logarithmically related to the base-calling error probabilities $P$ and defined as
$$Q = -10 \log_{10} P .$$
This relation can also be written as
$$P = 10^{-Q/10} .$$
For example, if Phred assigns a quality score of 30 to a base, the chances that this base is called incorrectly are 1 in 1000.
The Phred quality score is the negative ratio of the error probability to the reference level of $P = 1$, expressed in decibels (dB).
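Both conversions are one-liners; the sketch below (function names invented for illustration) also shows the Sanger/Illumina 1.8+ FASTQ convention, which stores Q + 33 as an ASCII character:

```python
import math

def phred_score(p_error: float) -> float:
    """Q = -10 * log10(P)."""
    return -10 * math.log10(p_error)

def fastq_char(q: int, offset: int = 33) -> str:
    """Encode a Phred score as an ASCII character (offset 33)."""
    return chr(q + offset)

print(phred_score(0.001))  # 30.0 -> a 1-in-1000 chance of a wrong call
print(fastq_char(30))      # '?'
```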
History
The idea of sequence quality scores can be traced back to the original description of the SCF file format by Staden's group in 1992. In 1995, Bonfield and Staden proposed a method to use base-specific quality scores to improve the accuracy of consensus sequences in DNA sequencing projects.
However, early attempts to develop base-specific quality scores had only limited success.
The first program to develop accurate and powerful base-specific quality scores was the program Phred. Phred was able to calculate highly accurate quality scores that were logarithmically linked to the error probabilities. Phred was quickly adopted by all the major genome sequencing centers as well as many other laboratories; the vast majority of the DNA sequences produced during the Human Genome Project were processed with Phred.
|
https://en.wikipedia.org/wiki/Michael%20K%C3%B6lling | Michael Kölling is a German computer scientist, currently working at King's College London, best known for the development of the BlueJ and Greenfoot educational development environments and as author of introductory programming textbooks. In 2013 he received the SIGCSE Award for Outstanding Contribution to Computer Science Education for the development of the BlueJ.
Education and early life
Kölling was born in Bremen, Germany. He earned a degree in informatics from the University of Bremen. In 1999, he was awarded a Ph.D. in computer science from the University of Sydney, for research on the design of an object-oriented programming environment and language supervised by John Rosenberg.
Career and research
From 1995 to 1997 he worked at the University of Sydney, followed by a position as a senior lecturer at Monash University and, from 2001, a post as an associate professor at the University of Southern Denmark. He worked at the School of Computing at the University of Kent, UK, until February 2017. He is now a professor of computer science at King's College London, where he also occupies the role of vice-dean for education.
Kölling is the lead designer of 'Blue', an object-oriented programming language and integrated environment, BlueJ, and Greenfoot. All are educational development environments aimed at teaching and learning programming. BlueJ and Greenfoot are widely used in many schools and universities.
Kölling co-wrote Objects First with Java with David J. Barnes, and wrote Introduction to Programming with Greenfoot.
At the Association for Computing Machinery (ACM) Special Interest Group on Computer Science Education (SIGCSE) 2010 conference, held in Milwaukee, Wisconsin, his work was referenced as one of the most influential tools in the history of computer science education; the cited paper described Kölling's work on the Blue programming language, which preceded BlueJ.
Microsoft patent issue
On 22 May 2005 Kölling entered the BlueJ website in respon |
https://en.wikipedia.org/wiki/Lionel%20Wartime%20Freight%20Train | The Lionel Wartime Freight Train, better known among collectors as the "paper train," was a toy train set sold by the Lionel Corporation in 1943.
Origins
During World War II, government-mandated restrictions on the use of various metals halted production of all metal toys in favor of the war effort. Lionel, seeking an alternative product to keep the brand name alive during the war, sought the assistance of Samuel Gold, a designer of various novelties including cereal and soft drink premiums. Gold made an agreement with Lionel and completed a design for an all-paper product train in March 1943. It was sold for a retail price of $1 for the 1943 Christmas season, but disappeared soon afterwards due to poor customer response. Lionel began manufacturing its conventional products again beginning in late 1945.
Features
The paper train came in a flat box containing several sheets of heavy cardstock measuring 11 x 15 inches, on which was printed the various pieces of the set. Once assembled it included a steam locomotive, tender, boxcar, gondola, and caboose; all decorated for the fictional Lionel Lines. There were three railway employees, a crossing signal, crossing gate, and enough ties and rails to create a circle of track measuring 16 feet, 4 inches in circumference. In total, there were over 250 paper parts, 21 wooden dowel axles, and 42 corresponding pasteboard wheels.
Although the set did well financially, it was difficult to assemble and keep intact. The train was designed with the parts pre-scored and tabbed for assembly without cutting or adhesive, but the tabs were prone to coming apart, and the train did not stay on the cardstock track reliably once assembled. As a result, the paper train overwhelmed many customers, and parents often simply gave up on assembly and threw it out.
Current Value / Reproduction
Today, original unassembled paper trains sell for around $300 in like-new condition, and up to $400 in perfect, mint condition. Greenberg Publishing Co |
https://en.wikipedia.org/wiki/Log%20management | Log management (LM) comprises an approach to dealing with large volumes of computer-generated log messages (also known as audit records, audit trails, event-logs, etc.).
Log management generally covers:
Log collection
Centralized log aggregation
Long-term log storage and retention
Log rotation
Log analysis (in real-time and in bulk after storage)
Log search and reporting.
Overview
The primary drivers for log management implementations are concerns about security, system and network operations (such as system or network administration) and regulatory compliance. Logs are generated by nearly every computing device, and can often be directed to different locations both on a local file system or remote system.
Effectively analyzing large volumes of diverse logs can pose many challenges, such as:
Volume: log data can reach hundreds of gigabytes of data per day for a large organization. Simply collecting, centralizing and storing data at this volume can be challenging.
Normalization: logs are produced in multiple formats. The process of normalization is designed to provide a common output for analysis from diverse sources (a minimal sketch follows this list).
Velocity: the speed at which logs are produced from devices can make collection and aggregation difficult.
Veracity: Log events may not be accurate. This is especially problematic for systems that perform detection, such as intrusion detection systems.
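As a sketch of the normalization challenge, the toy code below maps two invented log formats onto one common schema; production systems use far richer parsers:

```python
import re
from typing import Optional

# Illustrative source-specific patterns, each normalized to (ts, host, msg).
PATTERNS = [
    re.compile(r"^(?P<ts>\S+) (?P<host>\S+) sshd: (?P<msg>.*)$"),
    re.compile(r"^\[(?P<ts>[^\]]+)\] (?P<host>\S+) - (?P<msg>.*)$"),
]

def normalize(line: str) -> Optional[dict]:
    for pat in PATTERNS:
        m = pat.match(line)
        if m:
            return m.groupdict()  # common output regardless of source format
    return None                   # unrecognized format

print(normalize("2024-01-01T00:00:00Z web01 sshd: accepted publickey"))
```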
Users and potential users of log management may purchase complete commercial tools or build their own log-management and intelligence tools, assembling the functionality from various open-source components, or acquire (sub-)systems from commercial vendors. Log management is a complicated process and organizations often make mistakes while approaching it.
Logging can produce technical information usable for the maintenance of applications or websites. It can serve:
to define whether a reported bug is actually a bug
to help analyze, reproduce and solve bugs
to help test new features i |
https://en.wikipedia.org/wiki/Bug%20Wars | The Bug Wars were origami contests among members of the Origami Detectives (Tanteidan in Japanese) which started when one member made a bug, a horned beetle with outspread wings, from a single sheet of paper: this design provoked other members to design more complex origami in the shape of bugs, such as wasps and praying mantises.
The Bug Wars motivated computational origamists to build models and algorithms to add complexity in a more systematic manner. The majority of the origamists in the Origami Detectives did not use these novel computational tools in the creation of their own origami art. Since the Bug Wars, there have been a collection of books, instruction guides, academic papers, and origami art that have been inspired by the prolonged event. Each year, the Origami Tanteidan Convention in Japan hosts conventions that feature the work of some of the world's most renowned origami artists. Along with the convention, Tanteidan Convention Books are released each year with exclusive folding instructions from different designers. An Origami Tanteidan Magazine is released more frequently (6 times a year) and includes diagrams for 3 to 5 models, a crease pattern challenge, and other related articles in each issue. Recent content is published both in English and Japanese. |
https://en.wikipedia.org/wiki/Society%20for%20Applied%20Spectroscopy | The Society for Applied Spectroscopy (SAS) is an organization promoting research and education in the fields of spectroscopy, optics, and analytical chemistry. Founded in 1958, it is currently headquartered in Frederick, MD. In 2006 it had about 2,000 members worldwide.
SAS is perhaps best known for its technical conference with the Federation of Analytical Chemistry and Spectroscopy Societies and short courses on various aspects of spectroscopy and data analysis. The society publishes the scientific journal Applied Spectroscopy.
SAS is affiliated with American Institute of Physics (AIP), Coblentz, Council for Near Infrared Spectroscopy (CNIRS), Federation of Analytical Chemistry and Spectroscopy Societies (FACSS), The Instrumentation, Systems, and Automation Society (ISA), and Optical Society of America (OSA).
SAS provides a number of awards with honorariums to encourage and recognize outstanding achievements.
See also
Spectroscopy
American Institute of Physics (AIP)
The Instrumentation, Systems, and Automation Society (ISA)
Optical Society of America (OSA) |
https://en.wikipedia.org/wiki/Transuranic%20waste | Transuranic waste (TRU) is defined by U.S. regulations, independently of state or origin, as waste contaminated with alpha-emitting transuranic radionuclides possessing half-lives greater than 20 years and in concentrations greater than 100 nCi/g (3.7 MBq/kg).
Elements having atomic numbers greater than that of uranium are called transuranic. Elements within TRU are typically man-made and are known to contain americium-241 and several isotopes of plutonium. Because of the elements' longer half-lives, TRU is disposed of more cautiously than low level waste and intermediate level waste. In the U.S. it is a byproduct of weapons production, nuclear research and power production, and consists of protective gear, tools, residue, debris and other items contaminated with small amounts of radioactive elements (mainly plutonium).
Under U.S. law, TRU is further categorized into "contact-handled" (CH) and "remote-handled" (RH) on the basis of the radiation field measured on the waste container's surface. CH TRU has a surface dose rate not greater than 2 mSv per hour (200 mrem/h), whereas RH TRU has rates of 2 mSv/h or higher. CH TRU has neither the high radioactivity of high level waste, nor its high heat generation. In contrast, RH TRU can be highly radioactive, with surface dose rates up to 10 Sv/h (1000 rem/h).
The United States currently permanently disposes of TRU generated from defense nuclear activities at the Waste Isolation Pilot Plant, a deep geologic repository.
Other countries do not include this category, favoring variations of High, Medium/Intermediate, and Low Level waste. |
https://en.wikipedia.org/wiki/XMPP%20Standards%20Foundation | XMPP Standards Foundation (XSF) is the foundation in charge of the standardization of the protocol extensions of XMPP, the open standard of instant messaging and presence of the IETF.
History
The XSF was originally called the Jabber Software Foundation (JSF). The Jabber Software Foundation was originally established to provide an independent, non-profit, legal entity to support the development community around Jabber technologies (and later XMPP). Originally its main focus was on developing JOSL, the Jabber Open Source License (since deprecated), and an open standards process for documenting the protocols used in the Jabber/XMPP developer community. Its founders included Michael Bauer and Peter Saint-Andre.
Process
Members of the XSF vote on acceptance of new members, a technical Council, and a Board of Directors. However, membership is not required to publish, view, or comment on the standards that it promulgates. The unit of work at the XSF is the XMPP Extension Protocol (XEP); XEP-0001 specifies the process for XEPs to be accepted by the community. Most of the work of the XSF takes place on the XMPP Extension Discussion List and in the jdev and xsf chat rooms.
Organization
Board of directors
The Board of Directors of the XMPP Standards Foundation oversees the business affairs of the organization. As elected by the XSF membership, the Board of Directors for 2020-2021 consists of the following individuals:
Ralph Meijer (XSF Chair)
Dave Cridland
Severino Ferrer de la Peñita
Arc Riley
Matthew Wild
Council
The XMPP Council is the technical steering group that approves XMPP Extension Protocols, as governed by the XSF Bylaws and XEP-0001. The Council is elected by the members of the XMPP Standards Foundation each year in September. The XMPP Council (2020–2021) consists of the following individuals:
Kim Alvefur
Dave Cridland
Daniel Gultsch
Georg Lukas
Jonas Schäfer
Members
There are currently 66 elected members of the XSF.
Emeritus Members |
https://en.wikipedia.org/wiki/Opisthorchiasis | Opisthorchiasis is a parasitic disease caused by certain species of genus Opisthorchis (specifically, Opisthorchis viverrini and Opisthorchis felineus). Chronic infection may lead to cholangiocarcinoma, a cancer of the bile ducts.
Medical care and loss of wages caused by Opisthorchis viverrini in Laos and in Thailand costs about $120 million annually. In Asia, infection by Opisthorchis viverrini and other liver flukes affects the poorest people. Along with other foodborne trematode infections such as clonorchiasis, fascioliasis and paragonimiasis, opisthorchiasis is listed among the World Health Organization's list of neglected tropical diseases.
Signs and symptoms
Symptoms of opisthorchiasis are indistinguishable from those of clonorchiasis. About 80% of infected people have no symptoms, though they can have eosinophilia. Asymptomatic infection can occur when there are fewer than 1,000 eggs in one gram of feces. Infection is considered heavy when there are 10,000–30,000 eggs in one gram of feces. Symptoms of heavier infections may include diarrhea, epigastric and right upper quadrant pain, lack of appetite, fatigue, yellowing of the eyes and skin and mild fever.
These parasites are long-lived and cause heavy chronic infections that may lead to accumulation of fluid in the legs (edema) and in the peritoneal cavity (ascites), enlarged non-functional gallbladder and also ascending cholangitis, which can lead to periductal fibrosis, cholecystitis and cholelithiasis, obstructive jaundice, hepatomegaly and/or portal hypertension.
Chronic opisthorchiasis and cholangiocarcinoma
Both experimental and epidemiological evidence strongly implicates Opisthorchis viverrini infections in the etiology of a malignant cancer of the bile ducts (cholangiocarcinoma) in humans which has a very poor prognosis. Clonorchis sinensis and Opisthorchis viverrini are both categorized by the International Agency for Research on Cancer (IARC) as Group 1 carcinogens.
In humans, the onset of cholangioc |
https://en.wikipedia.org/wiki/Mosaic%20%28film%29 | Mosaic is a 2007 American animated superhero film about a new character created by Stan Lee. It features the voice of Anna Paquin as Maggie Nelson and with supporting roles done by Kirby Morrow, Cam Clarke, Garry Chalk, Ron Halder, and Nicole Oliver. It was released under the Stan Lee Presents banner, which is a series of direct-to-DVD animated films distributed by POW! Entertainment with Anchor Bay Entertainment. The story was by Stan Lee, with the script by former X-Men writer Scott Lobdell.
Mosaic was released on DVD on January 9, 2007, and had its television premiere on March 10, 2007, on Cartoon Network.
Plot
Aspiring young actress Maggie Nelson (Anna Paquin), who lives in New York City with her father, an Interpol agent, gains chameleon-like powers one night after she gets unknowingly caught between a severe electrical storm and a magic rune her father had brought home to study after it was found at the scene of a murder at a New York City museum. Her powers are from a secret and ancient race known as the Chameliel, who are able to hide in plain sight due to their shape shifting abilities, and she is told all about the Chameliel after meeting a young Chameliel named Mosaic (Kirby Morrow). The murder victim at the museum was a Chameliel who was killed by another Chameliel named Maniken, who is stealing some of the powerful Chameliel stones hidden around the world, planning to use them to gain the alchemical powers of his dead wife Facade and to rule the world. After Maniken kidnaps her father, Maggie becomes determined to help Mosaic fight Maniken.
The two go from New York City, to the catacombs of Rome, to a large radio dish at the north magnetic pole, trying to stop Maniken, as he plans to sacrifice Maggie's father as part of a ceremony to use the Chameliel stones to transfer to Maniken the powers of his wife from her body and rule the Earth like a god. As Maniken prepares to begin the ceremony on the radio dish, Maggie uses her shape-shifting abilities with her |
https://en.wikipedia.org/wiki/Aging%20in%20dogs | Aging in dogs varies from breed to breed, and affects the dog's health and physical ability. As with humans, advanced years often bring changes in a dog's ability to hear, see, and move about easily. Skin condition, appetite, and energy levels often degrade with geriatric age. Medical conditions such as cancer, kidney failure, arthritis, dementia, and joint conditions, and other signs of old age may appear.
The aging profile of dogs varies according to their adult size (often determined by their breed): smaller dogs often live over 15–16 years (sometimes longer than 20 years), medium and large size dogs typically 10 to 20 years, and some giant dog breeds such as mastiffs, often only 7 to 8 years. The latter reach maturity at a slightly older age than smaller breeds—giant breeds becoming adult around two years old compared to the norm of around 13–15 months for other breeds.
Aging profile
They can be summarized into three types:
Popular myth – It is popularly believed that one dog year equals seven human years. This is considered to be inaccurate because dogs often reproduce at age 1 while humans almost never reproduce at age 7.
One size fits all – A general rule of thumb is that the first year of a dog's life is equivalent to 15 human years, the second year to 9 human years, and each subsequent year to about 5 human years. So, a dog age 2 is equivalent to a human age 24, while a dog age 10 is equivalent to a human age 64 (as illustrated below). This is more accurate but still fails to allow for size/breed, which is a significant factor.
Size- or breed-specific calculators – These try to factor in the size or breed as well. These are the most accurate types. They typically work either by expected adult weight or by categorizing the dog as "small", "medium", or "large".
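A minimal calculator for the "one size fits all" rule above (the function name is illustrative):

```python
def dog_to_human_years(dog_years: int) -> int:
    """Rule of thumb: 15 for year one, 9 for year two, 5 per year after."""
    if dog_years <= 0:
        return 0
    if dog_years == 1:
        return 15
    return 24 + (dog_years - 2) * 5

print(dog_to_human_years(2))   # 24
print(dog_to_human_years(10))  # 64
```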
No single formula for dog-to-human age conversion is scientifically agreed on, although the various formulas agree within fairly close limits. Researchers suggest that dog age depends on DNA methylation which i |
https://en.wikipedia.org/wiki/Interferon%20type%20I | The type-I interferons (IFN) are cytokines which play essential roles in inflammation, immunoregulation, tumor cell recognition, and T-cell responses. In the human genome, a cluster of thirteen functional IFN genes is located at the 9p21.3 cytoband over approximately 400 kb including coding genes for IFNα (IFNA1, IFNA2, IFNA4, IFNA5, IFNA6, IFNA7, IFNA8, IFNA10, IFNA13, IFNA14, IFNA16, IFNA17 and IFNA21), IFNω (IFNW1), IFNɛ (IFNE), IFNк (IFNK) and IFNβ (IFNB1), plus 11 IFN pseudogenes.
Interferons bind to interferon receptors. All type I IFNs bind to a specific cell surface receptor complex known as the IFN-α receptor (IFNAR) that consists of IFNAR1 and IFNAR2 chains.
Type I IFNs are found in all mammals, and homologous (similar) molecules have been found in birds, reptiles, amphibians and fish species.
Sources and functions
IFN-α and IFN-β are secreted by many cell types including lymphocytes (NK cells, B-cells and T-cells), macrophages, fibroblasts, endothelial cells, osteoblasts and others. They stimulate both macrophages and NK cells to elicit an anti-viral response, involving IRF3/IRF7 antiviral pathways, and are also active against tumors. Plasmacytoid dendritic cells have been identified as being the most potent producers of type I IFNs in response to antigen, and have thus been coined natural IFN producing cells.
IFN-ω is released by leukocytes at the site of viral infection or tumors.
IFN-α acts as a pyrogenic factor by altering the activity of thermosensitive neurons in the hypothalamus thus causing fever. It does this by binding to opioid receptors and eliciting the release of prostaglandin-E2 (PGE2).
A similar mechanism is used by IFN-α to reduce pain; IFN-α interacts with the μ-opioid receptor to act as an analgesic.
In mice, IFN-β inhibits immune cell production of growth factors, thereby slowing tumor growth, and inhibits other cells from producing vessel-producing growth factors, thereby blocking tumor angiogenesis and hindering the tumour fr |
https://en.wikipedia.org/wiki/Interferon%20type%20III | The type III interferon group is a group of anti-viral cytokines, that consists of four IFN-λ (lambda) molecules called IFN-λ1, IFN-λ2, IFN-λ3 (also known as IL29, IL28A and IL28B respectively), and IFN-λ4. They were discovered in 2003. Their function is similar to that of type I interferons, but is less intense and serves mostly as a first-line defense against viruses in the epithelium.
Genomic location
Genes encoding this group of interferons are all located on the long arm of chromosome 19 in humans, specifically in the region between 19q13.12 and 19q13.13. The IFNL1 gene, encoding IL-29, is located downstream of IFNL2, which encodes IL-28A. IFNL3, encoding IL-28B, is located downstream of IFNL4.
In mice, the genes encoding for type III interferons are located on chromosome 7 and the family consists only of IFN-λ2 and IFN-λ3.
Structure
Interferons
All interferon groups belong to the class II cytokine family, whose members have a conserved structure comprising six α-helices. The proteins of the type III interferon group are highly homologous and show high amino acid sequence similarity to one another. The similarity between IFN-λ2 and IFN-λ3 is approximately 96%, and the similarity of IFN-λ1 to IFN-λ2/3 is around 81%. The lowest similarity is found between IFN-λ4 and IFN-λ3 - only around 30%. Unlike genes of the type I interferon group, which consist of only one exon, the genes of type III interferons consist of multiple exons.
Receptor
The receptors for these cytokines are also structurally conserved. They have two fibronectin type III domains in their extracellular region, and the interface of these two domains forms the cytokine binding site. The receptor complex for type III interferons consists of two subunits - IL10RB (also called IL10R2 or CRF2-4) and IFNLR1 (formerly called IL28RA, CRF2-12).
In contrast to the ubiquitous expression of receptors for type I interferons, IFNLR1 is largely restricted to tissues of epithelial origin. Despite high homology between type III interferons, the binding affinity to IFNLR1 diff |
https://en.wikipedia.org/wiki/Osteoblastoma | Osteoblastoma is an uncommon osteoid tissue-forming primary neoplasm of the bone.
It has clinical and histologic manifestations similar to those of osteoid osteoma; therefore, some consider the two tumors to be variants of the same disease, with osteoblastoma representing a giant osteoid osteoma. However, an aggressive type of osteoblastoma has been recognized, making the relationship less clear.
Although similar to osteoid osteoma, it is larger (between 2 and 6 cm).
Signs and symptoms
Patients with osteoblastoma usually present with pain of several months' duration. In contrast to the pain associated with osteoid osteoma, the pain of osteoblastoma usually is less intense, usually not worse at night, and not relieved readily with salicylates (aspirin and related compounds). If the lesion is superficial, the patient may have localized swelling and tenderness. Spinal lesions can cause painful scoliosis, although this is less common with osteoblastoma than with osteoid osteoma. In addition, lesions may mechanically interfere with the spinal cord or nerve roots, producing neurologic deficits. Pain and general weakness are common complaints.
Pathophysiology
The cause of osteoblastoma is unknown. Histologically, osteoblastoma are similar to osteoid osteomas, producing both osteoid and primitive woven bone amidst fibrovascular connective tissue, the difference being that osteoblastoma can grow larger than 2.0 cm in diameter while osteoid osteomas cannot. Although the tumor is usually considered benign, a controversial aggressive variant has been described in the literature, with histologic features similar to those of malignant tumors such as an osteosarcoma.
Diagnosis
When diagnosing osteoblastoma, the preliminary radiologic workup should consist of radiography of the site of the patient's pain. However, computed tomography (CT) is often necessary to support clinical and plain radiographic findings suggestive of osteoblastoma and to better define the margins o |
https://en.wikipedia.org/wiki/RegisterFly | RegisterFly was a New Jersey (U.S.) based internet hosting and domain name registrar that had their ICANN-accredited status terminated in March 2007.
History
RegisterFly formerly acted as a reseller of the services of eNom, but became an accredited registrar in its own right in 2006 through the acquisition of "Top Class Names" sold to the company by Directi with Bhavin Turakhia completing the asset transfer. By February 2007, the company was registrar for approximately 2,000,000 domain names held by about 900,000 customers. Notable clients of RegisterFly included the government of Thailand, the Easter Seals charity, and pop star Michael Jackson. In 2007, ICANN launched an investigation of RegisterFly amid allegations of fraud. No lawsuits were initially filed between the governing domain body and the company; however, there was during this time a lawsuit between the company's two owners, CEO John Naruszewicz and Kevin Medina. RegisterFly's website went offline for a time, causing serious concern amongst registrant customers of RegisterFly.
The incidents and lawsuit that then followed were the result of a feud between RegisterFly co-owners Kevin Medina and John Naruszewicz, who were at once both business and intimate partners. As problems at RegisterFly gained momentum, their ten-year-long business and romantic relationship abruptly came to an end. The lawsuit between the former partners alleged, among other matters, that Medina misappropriated corporate funds for personal use. However, the court ruled on March 8, 2007, in favor of Medina, stating that Naruszewicz had no ownership over RegisterFly. Medina resumed control over RegisterFly, but not before Naruszewicz published a public apology to customers on the company's web page.
On March 28, 2007, U.S. District Court Judge William Osteen unsealed a class action lawsuit filed by Attorney E. Clarke Dummit against RegisterFly, eNom, and ICANN. The lawsuit alleges that RegisterFly systematically defrauded custome |
https://en.wikipedia.org/wiki/Applied%20Physics%20A | Applied Physics A: Materials Science and Processing is a peer-reviewed scientific journal that is published monthly by Springer Science+Business Media. The editor-in-chief is Thomas Lippert (Paul Scherrer Institute). This publication is complemented by Applied Physics B (Lasers & Optics).
History
The journal Applied Physics was originally conceived and founded in 1972 by Helmut K.V. Lotsch at Springer-Verlag Berlin Heidelberg New York. Lotsch edited the journal up to volume 25 and thereafter split it into the two parts A26 (Solids and Surfaces) and B26 (Photophysics and Laser Chemistry). He continued his editorship up to volumes A61 and B61. Starting in 1995, the two journals were continued under separate editorships.
Aims and scope
Applied Physics A journal covers theoretical and experimental research in applied physics, including surfaces, thin films, the condensed phase of materials, nanostructured materials, application of nanotechnology, and techniques pertaining to advanced processing and characterization. Coverage also includes characterizing materials, evaluating materials, optical & electronic materials, production engineering, process engineering, interfaces (surfaces & thin films), corrosion, and finally coatings.
Publishing formats include articles pertaining to original research, reviews, and rapid communications. Invited papers are also included on a regular basis and collected in special issues.
Abstracting and indexing
This journal is abstracted and indexed in:
According to the Journal Citation Reports, the journal has a 2020 impact factor of 2.584. |
https://en.wikipedia.org/wiki/Applied%20Physics%20B | Applied Physics B: Lasers & Optics is a peer-reviewed scientific journal published by Springer Science+Business Media. The editor-in-chief is Jacob Mackenzie (University of Southampton). Topical coverage includes laser physics, optical & laser materials, linear optics, nonlinear optics, quantum optics, and photonic devices. Interest also includes laser spectroscopy pertaining to atoms, molecules, and clusters. The journal publishes original research articles, invited reviews, and rapid communications.
History
The journal Applied Physics was originally conceived and founded in 1972 by Helmut K.V. Lotsch at Springer-Verlag Berlin Heidelberg New York. Lotsch edited the journal up to volume 25 and thereafter split it into the two parts A26 (Solids and Surfaces) and B26 (Photophysics and Laser Chemistry). He continued his editorship up to volumes A61 and B61. Starting in 1995, the two journal parts were continued under separate editorships: Applied Physics B: Photophysics and Laser Chemistry, in existence from September 1981 (volume B26, no. 1) to December 1993 (volume B57, no. 6). It partly continues Applied Physics, in existence from January 1973 (volume 1, no. 1) to August 1981 (volume 25, no. 4).
Abstracting and indexing
The journal is abstracted and indexed in:
According to the Journal Citation Reports, the journal has a 2020 impact factor of 2.070. |