source | text |
|---|---|
https://en.wikipedia.org/wiki/Lattice%20protein | Lattice proteins are highly simplified models of protein-like heteropolymer chains on lattice conformational space which are used to investigate protein folding. Simplification in lattice proteins is twofold: each whole residue (amino acid) is modeled as a single "bead" or "point" of a finite set of types (usually only two), and each residue is restricted to be placed on vertices of a (usually cubic) lattice. To guarantee the connectivity of the protein chain, adjacent residues on the backbone must be placed on adjacent vertices of the lattice. Steric constraints are expressed by imposing that no more than one residue can be placed on the same lattice vertex.
Because proteins are such large molecules, there are severe computational limits on the simulated timescales of their behaviour when modeled in all-atom detail. The millisecond regime for all-atom simulations was not reached until 2010, and it is still not possible to fold all real proteins on a computer. Simplification significantly reduces the computational effort in handling the model, although even in this simplified scenario the protein folding problem is NP-complete.
Overview
Different versions of lattice proteins may adopt different types of lattice (typically square and triangular ones), in two or three dimensions, but it has been shown that generic lattices can be used and handled via a uniform approach.
Lattice proteins are made to resemble real proteins by introducing an energy function, a set of conditions which specify the interaction energy between beads occupying adjacent lattice sites. The energy function mimics the interactions between amino acids in real proteins, which include steric, hydrophobic and hydrogen bonding effects. The beads are divided into types, and the energy function specifies the interactions depending on the bead type, just as different types of amino acids interact differently. One of the most popular lattice models, the hydrophobic-polar model (HP model), features just two bead types: hydrophobic (H) and polar (P). |
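To make this concrete, here is a minimal sketch of ours (illustrative, not from the article): on a 2D square lattice, the standard HP energy assigns −1 to every pair of H beads that sit on adjacent vertices without being consecutive along the chain.

```python
def hp_energy(sequence, coords):
    """Energy of an HP-model conformation on a 2D square lattice.

    sequence: string of 'H'/'P' bead types, e.g. "HPPH".
    coords:   list of (x, y) lattice vertices, one per bead; consecutive
              beads are assumed to occupy adjacent vertices.
    Returns -1 per topological H-H contact (lattice neighbours that are
    not consecutive along the backbone).
    """
    # Steric constraint: no two beads may share a vertex.
    assert len(sequence) == len(coords) == len(set(coords))
    energy = 0
    for i in range(len(coords)):
        for j in range(i + 2, len(coords)):  # skip backbone neighbours
            (xi, yi), (xj, yj) = coords[i], coords[j]
            if abs(xi - xj) + abs(yi - yj) == 1:  # lattice contact
                if sequence[i] == 'H' and sequence[j] == 'H':
                    energy -= 1
    return energy

# A folded four-bead chain on a 2x2 square: the two H beads form one contact.
print(hp_energy("HPPH", [(0, 0), (1, 0), (1, 1), (0, 1)]))  # -1
```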
https://en.wikipedia.org/wiki/The%20Causes%20of%20Evolution | The Causes of Evolution is a 1932 book on evolution by J.B.S. Haldane, based on a series of January 1931 lectures entitled "A Re-examination of Darwinism". It was influential in the founding of population genetics and the modern synthesis.
Chapters
It contains the following chapters:
Introduction
Variation within a Species
The Genetical Analysis of Interspecific Differences
Natural Selection
What is Fitness?
Conclusion
The book also contains an extensive appendix containing the majority of Haldane's mathematical treatment of the subject.
See also
Evolutionary biology
External links
Description by Princeton U Press
Contemporary review by R.A. Fisher
Review of the 1990 Princeton University reprint |
https://en.wikipedia.org/wiki/Fr%C3%A9chet%20filter | In mathematics, the Fréchet filter, also called the cofinite filter, on a set $X$ is a certain collection of subsets of $X$ (that is, it is a particular subset of the power set of $X$).
A subset $A$ of $X$ belongs to the Fréchet filter if and only if the complement of $A$ in $X$ is finite.
Any such set $A$ is said to be cofinite in $X$, which is why it is alternatively called the cofinite filter on $X$.
The Fréchet filter is of interest in topology, where filters originated, and relates to order and lattice theory because a set's power set is a partially ordered set under set inclusion (more specifically, it forms a lattice).
The Fréchet filter is named after the French mathematician Maurice Fréchet (1878–1973), who worked in topology.
Definition
A subset $A$ of a set $X$ is said to be cofinite in $X$ if its complement in $X$ (that is, the set $X \setminus A$) is finite.
If the empty set is allowed to be in a filter, the Fréchet filter on $X$, denoted by $F$, is the set of all cofinite subsets of $X$.
That is: $F = \{ A \subseteq X : X \setminus A \text{ is finite} \}.$
If $X$ is an infinite set, then every cofinite subset of $X$ is necessarily not empty, so that in this case, it is not necessary to make the empty set assumption made before.
This makes $F$ a filter on the lattice $(\wp(X), \subseteq)$, the power set $\wp(X)$ of $X$ with set inclusion: given that $A^{\mathrm{c}}$ denotes the complement of a set $A$ in $X$, the following two conditions hold:
Intersection condition: If two sets are finitely complemented in $X$, then so is their intersection, since $(A \cap B)^{\mathrm{c}} = A^{\mathrm{c}} \cup B^{\mathrm{c}}$, and the union of two finite sets is finite.
Upper-set condition: If a set is finitely complemented in $X$, then so are its supersets in $X$, since the complement of a superset is a subset of the original (finite) complement.
Properties
If the base set $X$ is finite, then $F = \wp(X)$, since every subset of $X$, and in particular every complement, is then finite.
This case is sometimes excluded by definition or else called the improper filter on $X$. Allowing $X$ to be finite creates a single exception to the Fréchet filter's being free and non-principal, since a filter on a finite set cannot be free and a non-principal filter cannot contain any singletons as members.
If $X$ is infinite, then every member of $F$ is infinite, since it is simply $X$ minus finitely many of its members. |
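As an illustrative aside (ours, not from the article), cofinite subsets of an infinite set can be encoded by their finite complements, and the two filter conditions above then become one-line set operations:

```python
# Represent a cofinite subset A of an infinite set X by its finite
# complement comp(A) = X \ A. Under this encoding:

def complement_of_intersection(comp_a, comp_b):
    """comp(A ∩ B) = comp(A) ∪ comp(B): a finite union of finite sets,
    so the intersection of two cofinite sets is again cofinite."""
    return comp_a | comp_b

def complement_of_superset(comp_a, added_elements):
    """If B = A ∪ added_elements, then comp(B) ⊆ comp(A): a subset of a
    finite set is finite, so supersets of cofinite sets stay cofinite."""
    return comp_a - added_elements

A = frozenset({1, 2, 3})                   # encodes A = X \ {1, 2, 3}
B = frozenset({3, 4})                      # encodes B = X \ {3, 4}
print(complement_of_intersection(A, B))    # frozenset({1, 2, 3, 4})
print(complement_of_superset(A, {1}))      # frozenset({2, 3}): still cofinite
```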
https://en.wikipedia.org/wiki/Adaptation%20and%20Natural%20Selection | Adaptation and Natural Selection: A Critique of Some Current Evolutionary Thought is a 1966 book by the American evolutionary biologist George C. Williams. Williams, in what is now considered a classic by evolutionary biologists, outlines a gene-centered view of evolution, disputes notions of evolutionary progress, and criticizes contemporary models of group selection, including the theories of Alfred Emerson, A. H. Sturtevant, and, to a lesser extent, the work of V. C. Wynne-Edwards. The book takes its title from a lecture by George Gaylord Simpson in January 1947 at Princeton University. Aspects of the book were popularised by Richard Dawkins in his 1976 book The Selfish Gene.
The aim of the book is to "clarify certain issues in the study of adaptation and the underlying evolutionary processes." Though more technical than a popular science book, its target audience is not specialists but biologists in general and the more advanced students of the topic. It was mostly written in the summer of 1963 when Williams utilized the University of California, Berkeley's library.
Contents
Williams argues that adaptation is "a special and onerous concept that should not be used unnecessarily". He writes that something should not be assigned a function unless it is uncontroversially the result of design rather than chance. For instance he considers mutations to be errors only, not a process that has persisted to provide variation and evolutionary potential. If something is considered (after critical appraisal) to be an adaptation, then we should assume the unit of selection in the process was as simple as possible, provided it is compatible with the evidence. For example, selection between individuals should be preferred to group selection as an explanation if both seem plausible. Williams writes that the only way adaptations can come into existence or persist is by natural selection.
Dealing with the idea of evolutionary progress, Williams argues that for natural select |
https://en.wikipedia.org/wiki/ABO%20blood%20group%20system | The ABO blood group system is used to denote the presence of one, both, or neither of the A and B antigens on erythrocytes. For human blood transfusions, it is the most important of the 44 different blood type (or group) classification systems currently recognized by the International Society of Blood Transfusion (ISBT) as of December 2022. A mismatch (very rare in modern medicine) in this, or any other serotype, can cause a potentially fatal adverse reaction after a transfusion, or an unwanted immune response to an organ transplant. The associated anti-A and anti-B antibodies are usually IgM antibodies, produced in the first years of life by sensitization to environmental substances such as food, bacteria, and viruses.
The ABO blood types were discovered by Karl Landsteiner in 1901; he received the Nobel Prize in Physiology or Medicine in 1930 for this discovery. ABO blood types are also present in other primates such as apes and Old World monkeys.
History
Discovery
The ABO blood types were first discovered by an Austrian physician, Karl Landsteiner, working at the Pathological-Anatomical Institute of the University of Vienna (now Medical University of Vienna). In 1900, he found that red blood cells would clump together (agglutinate) when mixed in test tubes with sera from different persons, and that some human blood also agglutinated with animal blood. He wrote a two-sentence footnote:
This was the first evidence that blood variations exist in humans – it was believed that all humans have similar blood. The next year, in 1901, he made a definitive observation that blood serum of an individual would agglutinate with only those of certain individuals. Based on this he classified human blood into three groups, namely group A, group B, and group C. He defined that group A blood agglutinates with group B, but never with its own type. Similarly, group B blood agglutinates with group A. Group C blood is different in that it agglutinates with both A and B.
This was |
https://en.wikipedia.org/wiki/Land%C3%A9%20g-factor | In physics, the Landé g-factor is a particular example of a g-factor, namely for an electron with both spin and orbital angular momenta. It is named after Alfred Landé, who first described it in 1921.
In atomic physics, the Landé g-factor is a multiplicative term appearing in the expression for the energy levels of an atom in a weak magnetic field. The quantum states of electrons in atomic orbitals are normally degenerate in energy, with these degenerate states all sharing the same angular momentum. When the atom is placed in a weak magnetic field, however, the degeneracy is lifted.
Description
The factor comes about during the calculation of the first-order perturbation in the energy of an atom when a weak uniform magnetic field (that is, weak in comparison to the system's internal magnetic field) is applied to the system. Formally we can write the factor as
$$g_J = g_L \frac{J(J+1) + L(L+1) - S(S+1)}{2J(J+1)} + g_S \frac{J(J+1) - L(L+1) + S(S+1)}{2J(J+1)}.$$
The orbital g-factor $g_L$ is equal to 1, and under the approximation $g_S = 2$, the above expression simplifies to
$$g_J \approx \frac{3}{2} + \frac{S(S+1) - L(L+1)}{2J(J+1)}.$$
Here, J is the total electronic angular momentum, L is the orbital angular momentum, and S is the spin angular momentum. Because $S = 1/2$ for electrons, one often sees this formula written with 3/4 in place of $S(S+1)$. The quantities $g_L$ and $g_S$ are other g-factors of an electron. For an atom with $S = 0$, $g_J = g_L$, and for an atom with $L = 0$, $g_J = g_S$.
If we wish to know the g-factor for an atom with total atomic angular momentum $\vec{F} = \vec{I} + \vec{J}$ (nucleus + electrons), such that the total atomic angular momentum quantum number can take values of $F = |J - I|, |J - I| + 1, \ldots, J + I$, giving
$$g_F = g_J \frac{F(F+1) + J(J+1) - I(I+1)}{2F(F+1)} + g_I \frac{\mu_\mathrm{N}}{\mu_\mathrm{B}} \frac{F(F+1) - J(J+1) + I(I+1)}{2F(F+1)} \approx g_J \frac{F(F+1) + J(J+1) - I(I+1)}{2F(F+1)}.$$
Here $\mu_\mathrm{B}$ is the Bohr magneton and $\mu_\mathrm{N}$ is the nuclear magneton. This last approximation is justified because $\mu_\mathrm{N}$ is smaller than $\mu_\mathrm{B}$ by the ratio of the electron mass to the proton mass.
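A short numerical sketch (ours, not part of the article) of the simplified expression above, taking $g_L = 1$ and $g_S = 2$:

```python
def lande_g(J, L, S):
    """Lande g-factor in the approximation g_L = 1, g_S = 2."""
    return 1.5 + (S * (S + 1) - L * (L + 1)) / (2 * J * (J + 1))

# A single electron in a p_{3/2} state: L = 1, S = 1/2, J = 3/2.
print(lande_g(1.5, 1, 0.5))  # 1.333... = 4/3, the textbook value
```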
A derivation
The following working is a common derivation.
Both the orbital angular momentum and the spin angular momentum of an electron contribute to the magnetic moment. In particular, each of them alone contributes to the magnetic moment by the following form
$$\vec{\mu}_L = -g_L \frac{\mu_\mathrm{B}}{\hbar} \vec{L}, \qquad \vec{\mu}_S = -g_S \frac{\mu_\mathrm{B}}{\hbar} \vec{S},$$
where $g_L = 1$ and $g_S \approx 2$ are the orbital and spin g-factors and $\mu_\mathrm{B}$ is the Bohr magneton.
Note that the negative signs in the above expressions are because an electron carries negative charge. |
https://en.wikipedia.org/wiki/Heraldic%20badge | A heraldic badge, emblem, impresa, device, or personal device worn as a badge indicates allegiance to, or the property of, an individual, family or corporate body. Medieval forms are usually called a livery badge, and also a cognizance. They are para-heraldic, not necessarily using elements from the coat of arms of the person or family they represent, though many do, often taking the crest or supporters. Their use is more flexible than that of arms proper.
Badges worn on clothing were common in the late Middle Ages, particularly in England. They could be made of base metal, cloth or other materials and worn on the clothing of the followers of the person in question; grander forms would be worn by important persons, with the Dunstable Swan Jewel in enamelled gold a rare survivor. Livery collars were also given to important persons, often with the badge as a pendant. The badge would also be embroidered or appliqued on standards, horse trappings, livery uniforms, and other belongings. Many medieval badges survive in English pub names.
Medieval usage
Origins
Badges with "a distinctly heraldic character" in England date to about the reign (1327–1377) of King Edward III.
In the fourteenth, fifteenth, and sixteenth centuries, the followers, retainers, dependants, and partisans of famous and powerful personages and houses bore well-known badges – precisely because they were known and recognised. (In contrast, the coat of arms was used exclusively by the individual to whom it belonged.)
Badges occasionally imitated a charge in the bearer's coat of arms, or had a more or less direct reference to such a charge. More often, badges commemorated some remarkable exploit, illustrated a family or feudal alliance, or indicated some territorial rights or pretensions. Some badges are rebuses, making a pun or play-on-words of the owner's name. It was not uncommon for the same personage or family to use more than one badge; and, on the other hand, two or more badges were often bo |
https://en.wikipedia.org/wiki/Evolution%20in%20Mendelian%20Populations | "Evolution in Mendelian Populations" is a lengthy 1931 scientific paper on evolution by the American population geneticist Sewall Wright.
The paper was first published in Genetics volume 16, pages 97–159. In it, Wright outlines various concepts, including genetic drift, effective population size, and inbreeding.
A contemporary review of the paper was written by R.A. Fisher.
Overview
Students of evolution such as Lamarck, and those who postulated the inheritance of acquired characteristics (e.g. Theodor Eimer and Edward Drinker Cope), were concerned with heredity and sought a link between one generation and the next. Lamarck thought that bodily responses from one generation should be passed along to future generations, which Wright refers to as "direct evolution". Sewall Wright expresses that the birth of genetics stems from Mendelian inheritance principles and so "any theory of evolution" must also be based on Mendelian inheritance.
See also
Evolutionary biology |
https://en.wikipedia.org/wiki/Herman%20te%20Riele | Hermanus Johannes Joseph te Riele (born 5 January 1947) is a Dutch mathematician at CWI in Amsterdam with a specialization in computational number theory. He is known for verifying the Riemann hypothesis for the first 1.5 billion non-trivial zeros of the Riemann zeta function with Jan van de Lune and Dik Winter, for disproving the Mertens conjecture with Andrew Odlyzko, and for factoring large numbers of world record size. In 1987, he found a new upper bound for π(x) − Li(x).
In 1970, Te Riele received an engineer's degree in mathematical engineering from Delft University of Technology and, in 1976, a PhD degree in mathematics and physics from the University of Amsterdam. |
https://en.wikipedia.org/wiki/Neal%20Koblitz | Neal I. Koblitz (born December 24, 1948) is a Professor of Mathematics at the University of Washington. He is also an adjunct professor with the Centre for Applied Cryptographic Research at the University of Waterloo. He is the creator of hyperelliptic curve cryptography and the independent co-creator of elliptic curve cryptography.
Biography
Koblitz received his B.A. in mathematics from Harvard University in 1969. While at Harvard, he was a Putnam Fellow in 1968. He received his Ph.D. from Princeton University in 1974 under the direction of Nick Katz. From 1975 to 1979 he was an instructor at Harvard University. In 1979 he began working at the University of Washington.
Koblitz's 1981 article "Mathematics as Propaganda" criticized the misuse of mathematics in the social sciences and helped motivate Serge Lang's successful challenge to the nomination of political scientist Samuel P. Huntington to the National Academy of Sciences. In The Mathematical Intelligencer, Koblitz, Steven Weintraub, and Saunders Mac Lane later criticized the arguments of Herbert A. Simon, who had attempted to defend Huntington's work.
He co-invented elliptic-curve cryptography in 1985, independently of Victor S. Miller, and for this was awarded the Levchin Prize in 2021.
In 1985, with his wife Ann Hibner Koblitz, he also founded the Kovalevskaia Prize, to honor women scientists in developing countries. It was financed from the royalties of Ann Hibner Koblitz's 1983 biography of Sofia Kovalevskaia. Although the awardees have ranged over many fields of science, one of the 2011 winners was a Vietnamese mathematician, Lê Thị Thanh Nhàn. Koblitz is an atheist.
See also
List of University of Waterloo people
Gross–Koblitz formula
Selected publications |
https://en.wikipedia.org/wiki/Virtual%20private%20database | A virtual private database or VPD masks data in a larger database so that only a subset of the data appears to exist, without actually segregating data into different tables, schemas or databases. A typical application is constraining sites, departments, individuals, etc. to operate only on their own records, while at the same time allowing more privileged users and operations (e.g. reports, data warehousing, etc.) to access the whole table.
The term is typical of the Oracle DBMS, where the implementation is very general: tables can be associated with SQL functions, which return a predicate as a SQL expression. Whenever a query is executed, the relevant predicates for the involved tables are transparently collected and used to filter rows. SELECT, INSERT, UPDATE and DELETE can have different rules.
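The mechanism can be sketched generically: a policy function returns a predicate, and the engine appends it to queries against the associated table before execution. The following Python sketch is purely illustrative (hypothetical names; it does not use Oracle's actual API):

```python
policies = {
    # table -> policy function returning a predicate for the current user
    "orders": lambda user: f"department_id = {user['dept_id']}",
}

def rewrite(table, base_query, user):
    """Append the table's VPD-style predicate to a SELECT statement."""
    policy = policies.get(table)
    if policy is None:
        return base_query          # no policy: query runs unchanged
    predicate = policy(user)
    joiner = " AND " if " where " in base_query.lower() else " WHERE "
    return base_query + joiner + predicate

user = {"dept_id": 42}
print(rewrite("orders", "SELECT * FROM orders", user))
# SELECT * FROM orders WHERE department_id = 42
```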
External links
Using Virtual Private Database to Implement Application Security Policies
http://www.oracle-base.com/articles/8i/VirtualPrivateDatabases.php |
https://en.wikipedia.org/wiki/Orientation%20%28mental%29 | Orientation is a function of the mind involving awareness of three dimensions: time, place and person. Problems with orientation lead to disorientation, and can be due to various conditions. The level of orientation ranges from complete orientation down to an inability to coherently understand person, place, time, and situation.
Assessment
Assessment of a person's mental orientation is frequently designed to evaluate the need for focused diagnosis and treatment of conditions leading to altered mental status (AMS). A variety of basic prompts and tests are available to determine a person's level of orientation. These tests typically first assess the person's ability (within EMS) to perform basic functions of life (see: Airway, Breathing, Circulation); many assessments then gauge the level of amnesia, awareness of surroundings, concept of time and place, and response to verbal and sensory stimuli.
Causes of mental disorientation
Disorientation has a variety of causes, physiological and mental in nature. Physiological disorientation is frequently caused by an underlying or acute condition. Disease or injury that impairs the delivery of essential nutrients such as glucose, oxygen, fluids, or electrolytes can impair homeostasis, and therefore neurological function causing mental disorientation. Other causes are psycho-neurological in nature (see also Cognitive disorder) stemming from chemical imbalances in the brain, deterioration of the structure of the brain, or psychiatric states or illnesses that result in disorientation.
Mental orientation is frequently affected by shock, including physiological shock (see: circulatory shock) and mental shock (see: acute stress reaction, a psychological condition in response to acute stressful stimuli).
Areas within the precuneus, posterior cingulate cortex, inferior parietal lobe, medial prefrontal cortex, and lateral frontal and lateral temporal cortices are believed to be responsible for situational orientation.
See also
Mental confusion
Mental status |
https://en.wikipedia.org/wiki/Z1%20%28computer%29 | The Z1 was a motor-driven mechanical computer designed by Konrad Zuse from 1936 to 1937, which he built in his parents' home from 1936 to 1938. It was a binary electrically driven mechanical calculator with limited programmability, reading instructions from punched celluloid film.
The Z1 was the first freely programmable computer in the world that used Boolean logic and binary floating-point numbers; however, it was unreliable in operation. It was completed in 1938 and financed completely by private funds. This computer was destroyed in the bombardment of Berlin in December 1943, during World War II, together with all construction plans.
The Z1 was the first in a series of computers that Zuse designed. Its original name was "V1" for Versuchsmodell 1 (meaning Experimental Model 1). After WW2, it was renamed "Z1" to differentiate it from the flying bombs designed by Robert Lusser. The Z2 and Z3 were follow-ups based on many of the same ideas as the Z1.
Design
The Z1 contained almost all the parts of a modern computer, i.e. control unit, memory, micro sequences, floating-point logic, and input-output devices. The Z1 was freely programmable via punched tape and a punched tape reader. There was a clear separation between the punched tape reader, the control unit for supervising the whole machine and the execution of the instructions, the arithmetic unit, and the input and output devices.
The input tape unit read perforations in 35-millimeter film.
The Z1 was a 22-bit floating-point value adder and subtractor, with some control logic to make it capable of more complex operations such as multiplication (by repeated additions) and division (by repeated subtractions). The Z1's instruction set had eight instructions and it took between one and twenty-one cycles per instruction.
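In modern terms, that scheme can be sketched as follows (an illustration of the idea of multiplication by repeated addition and division by repeated subtraction, not of the Z1's actual micro-sequences):

```python
def multiply(a, b):
    """Multiplication by repeated addition (b a non-negative integer)."""
    result = 0
    for _ in range(b):
        result += a
    return result

def divide(a, b):
    """Integer division by repeated subtraction (a >= 0, b > 0)."""
    quotient = 0
    while a >= b:
        a -= b
        quotient += 1
    return quotient, a  # quotient and remainder

print(multiply(6, 7))  # 42
print(divide(43, 5))   # (8, 3)
```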
The Z1 had a 16-word floating point memory, where each word of memory could be read from – and written to – the control unit. The mechanical memory units were unique in their design and were |
https://en.wikipedia.org/wiki/Pain%20asymbolia | Pain asymbolia, also called pain dissociation, is a condition in which pain is experienced without unpleasantness. This usually results from injury to the brain, lobotomy, cingulotomy or morphine analgesia. Preexisting lesions of the insula may abolish the aversive quality of painful stimuli while preserving the location and intensity aspects. Typically, patients report that they have pain but are not bothered by it; they recognize the sensation of pain but are mostly or completely immune to suffering from it. The pathophysiology of this condition involves a damage-induced disconnect between the insular cortex and the limbic system, specifically the cingulate gyrus, whose normal response to pain perceived by the insular cortex is to couple it with a distressing emotional response, thereby signaling the individual of the stimulus's propensity to inflict actual harm. A disconnect is not the only causative factor, however, as damage to these cortical structures themselves also results in the same symptomatology.
See also
Physical pain
Psychological pain
Suffering
Congenital insensitivity to pain |
https://en.wikipedia.org/wiki/Abbreviated%20mental%20test%20score | The Abbreviated Mental Test score (AMTS) is a 10-point test for rapidly assessing elderly patients for the possibility of dementia. It was first used in 1972, and is now sometimes also used to assess for mental confusion (including delirium) and other cognitive impairments.
A 4-item version called the Abbreviated Mental Test - 4 (AMT4) has been developed and tested.
Questionnaire
The following questions are put to the patient. Each question correctly answered scores one point. A score of 7–8 or less suggests cognitive impairment at the time of testing, although further and more formal tests are necessary to confirm a diagnosis of dementia, delirium or other causes of cognitive impairment. Culturally-specific questions may vary based on region.
Abbreviated Mental Test - 4 (AMT4)
The AMT4 uses 4 items from the AMTS: (i) What is your age? (ii) What is your date of birth? (iii) What is the name of this place? (iv) What is the year? A cut-off score of 3/4 performs comparably to an AMTS cut-off score of 8/9. The AMT4 is part of the 4AT scale for delirium.
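As a trivial illustration (hypothetical item names, not a clinical tool), scoring the AMT4 against the 3/4 cut-off reduces to counting correct answers; we assume here the conventional reading that a score below 4 is the flag:

```python
def amt4_score(answers_correct):
    """answers_correct: dict mapping the four AMT4 items to True/False."""
    score = sum(answers_correct.values())
    impaired = score < 4  # 3/4 cut-off: below 4 suggests possible impairment
    return score, impaired

print(amt4_score({"age": True, "dob": True, "place": False, "year": True}))
# (3, True)
```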
See also
General Practitioner Assessment Of Cognition – a brief screening tool for cognitive impairment designed for primary care
GERRI
Mini-mental state examination |
https://en.wikipedia.org/wiki/Gairdner%20Foundation | The Gairdner Foundation is a non-profit organization devoted to the recognition of outstanding achievements in biomedical research worldwide. It was created in 1957 by James Arthur Gairdner to recognize and reward the achievements of medical researchers whose work contributes significantly to improving the quality of human life. Since the first awards were made in 1959, the Gairdner Awards have become Canada's most prestigious medical awards, recognizing and celebrating the research of the world’s best and brightest biomedical researchers.
Since 1959, more than 390 Canada Gairdner Awards have been given to scientists from 35 countries; of these recipients, 98 have subsequently gone on to win a Nobel Prize.
History
The Gairdner Foundation was created in 1957 by James Arthur Gairdner (1893–1971). Known as Big Jim to his grandchildren, he was, indeed, a larger-than-life figure. Described by his friends as a talented maverick and visionary, Gairdner was a colorful personality who lived large. He was, by turns, an athlete, a soldier, a stockbroker, a businessman, a philanthropist and a landscape painter. When he died, he left his private estate to the Town of Oakville as an art gallery, which still operates today.
While he had always had an interest in medicine, it was the onset of severe arthritis in his early 50s that led Gairdner to become involved with the newly created Canadian Arthritis and Rheumatism Society. In 1957 he donated $500,000 to establish a foundation to recognize major research contributions in the conquest of disease and human suffering. The Gairdner Foundation was thus born, which was to be his most lasting legacy.
Gairdner’s decision to create awards that recognize outstanding discoveries by the world’s top scientists was, and continues to be, an act of extraordinary vision. Much of his original instruction regarding the process of selection and awarding of the prizes remains in place today, contributing to the current stature of the Canada Gairdner Awards. |
https://en.wikipedia.org/wiki/Taenia%20solium | Taenia solium, the pork tapeworm, belongs to the cyclophyllid cestode family Taeniidae. It is found throughout the world and is most common in countries where pork is eaten. It is a tapeworm that uses humans as its definitive host and pigs as the intermediate or secondary hosts. It is transmitted to pigs through human feces that contain the parasite eggs and contaminate their fodder. Pigs ingest the eggs, which develop into larvae, then into oncospheres, and ultimately into infective tapeworm cysts, called cysticerci. Humans acquire the cysts through consumption of uncooked or under-cooked pork, and the cysts grow into adult worms in the small intestine.
There are two forms of human infection. One is "primary hosting", called taeniasis, and is due to eating under-cooked pork that contains the cysts and results in adult worms in the intestines. This form generally is without symptoms; the infected person does not know they have tapeworms. This form is easily treated with anthelmintic medications which eliminate the tapeworm. The other form, "secondary hosting", called cysticercosis, is due to eating food, or drinking water, contaminated with faeces from someone infected by the adult worms, thus ingesting the tapeworm eggs, instead of the cysts. The eggs go on to develop cysts primarily in the muscles, and usually with no symptoms. However some people have obvious symptoms, the most harmful and chronic form of which is when the cysts form in the brain. Treatment of this form is more difficult but possible.
The adult worm has a flat, ribbon-like body which is white and measures 2 to 3 metres (6 to 10 ft) long, or more. Its tiny attachment organ, the scolex, bears suckers and a rostellum that anchor it to the wall of the small intestine. The main body consists of a chain of segments known as proglottids. Each proglottid is little more than a self-sustaining, very lightly ingestive, self-contained reproductive unit, since tapeworms are hermaphroditic. |
https://en.wikipedia.org/wiki/Canada%20Gairdner%20International%20Award | The Canada Gairdner International Award is given annually by the Gairdner Foundation at a special dinner to five individuals for outstanding discoveries or contributions to medical science. Receipt of the Gairdner is traditionally considered a precursor to winning the Nobel Prize in Medicine; as of 2020, 98 Nobel Prizes have been awarded to prior Gairdner recipients.
Canada Gairdner International Awards are given annually in the amount of $100,000 (each) payable in Canadian funds and can be awarded to residents of any country in the world. A joint award may be given for the same discovery or contribution to medical science, but in that case each awardee receives a full prize.
Past winners
1959 Alfred Blalock, Harry M. Rose, William D.M. Paton, Eleanor Zaimis, Wilfred G. Bigelow
1960 Joshua Harold Burn, John H. Gibbon Jr., William F. Hamilton, John McMichael, Karl Meyer, Arnold Rice Rich
1961 Russell Brock, Alan C. Burton, Alexander B. Gutman, Jonas H. Kellgren, Ulf S. von Euler
1962 Francis H.C. Crick, Albert H. Coons, Clarence Crafoord, Henry G. Kunkel, Stanley J. Sarnoff
1963 Murray L. Barr, Jacques Genest, Irvine H. Page, Pierre Grabar, C. Walton Lillehei, Eric G.L. Bywaters
1964 Seymour Benzer, Deborah Doniach, Ivan M. Roitt, Gordon D.W. Murray, Keith R. Porter
1965 Jerome W. Conn, Robin R.A. Coombs, Charles Enrique Dent, Charles P. Leblond, Frederick Horace Smirk
1966 Rodney R. Porter, Geoffrey S. Dawes, Charles B. Huggins, Willem J. Kolff, Luis F. Leloir, Jacques Miller, Jan Waldenström
1967 Christian de Duve, Marshall W. Nirenberg, George E. Palade, Julius Axelrod, Sidney Udenfriend, D. Harold Copp, Iain Macintyre, Peter Joseph Moloney, J. Fraser Mustard
1968 Bruce Chown, James L. Gowans, George H. Hitchings, J. Edwin Seegmiller
1969 Frank J. Dixon, John P. Merrill, Belding H. Scribner, Robert B. Salter, Earl W. Sutherland, Ernest A. McCulloch, F. Mason Sones, James E. Till
1970 Vincent P. Dole, W. Richard S. Doll, Robert A. Good, Niels K. Jerne |
https://en.wikipedia.org/wiki/SNDCP | SNDCP, Sub Network Dependent Convergence Protocol, is part of layer 3 of a GPRS protocol specification. SNDCP interfaces to the Internet Protocol at the top, and to the GPRS-specific Logical Link Control (LLC) protocol at the bottom.
In the spirit of the GPRS specifications, there can be many implementations of SNDCP, supporting protocols such as X.25. However, in reality, IP (Internet Protocol) is such an overwhelming standard that X.25 has become irrelevant for modern applications, so all implementations of SNDCP for GPRS only support IP as the payload type.
The SNDCP layer is relevant to the protocol stack of the mobile station and that of the SGSN, and works when a PDP Context is established and the quality of service has been negotiated.
Services offered by SNDCP
The SNDCP layer primarily converts, encapsulates and segments external network formats (like Internet Protocol datagrams) into sub-network formats (called SN-PDUs). It also performs compression of NPDUs for efficient data transmission. It handles PDU transfer for multiple PDP contexts, and it ensures that NPDUs from each PDP context are transmitted to the LLC layer in sufficient time to maintain the QoS. SNDCP provides services to the higher layers which may include connectionless and connection-oriented mode, compression, multiplexing and segmentation. |
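As an illustration of the segmentation step only (ours; the header fields are invented and far simpler than the actual 3GPP encoding), one NPDU can be split into numbered SN-PDU-like segments sized for the LLC layer:

```python
def segment_npdu(npdu: bytes, max_payload: int):
    """Split one network PDU into SN-PDU-like segments for the LLC layer."""
    segments = []
    for seq, start in enumerate(range(0, len(npdu), max_payload)):
        chunk = npdu[start:start + max_payload]
        more = start + max_payload < len(npdu)   # more segments to follow?
        # Simplified header: (segment number, more-segments flag)
        segments.append({"seq": seq, "more": more, "payload": chunk})
    return segments

for s in segment_npdu(b"x" * 100, 40):
    print(s["seq"], s["more"], len(s["payload"]))
# 0 True 40 / 1 True 40 / 2 False 20
```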
https://en.wikipedia.org/wiki/Canada%20Gairdner%20Wightman%20Award | The Canada Gairdner Wightman Award is annually awarded by the Gairdner Foundation to a Canadian who has demonstrated outstanding leadership in the field of medicine and medical science.
Award winners
Source: Gairdner – Past Recipients
See also
Gairdner Foundation
Gairdner Foundation Global Health Award
Gairdner Foundation International Award
List of medicine awards
External links
Canada Gairdner Wightman Award
The Gairdner Foundation
Canadian science and technology awards
Medicine awards |
https://en.wikipedia.org/wiki/Combinatorial%20principles | In proving results in combinatorics several useful combinatorial rules or combinatorial principles are commonly recognized and used.
The rule of sum, rule of product, and inclusion–exclusion principle are often used for enumerative purposes. Bijective proofs are utilized to demonstrate that two sets have the same number of elements. The pigeonhole principle often ascertains the existence of something or is used to determine the minimum or maximum number of something in a discrete context.
Many combinatorial identities arise from double counting methods or the method of distinguished element. Generating functions and recurrence relations are powerful tools that can be used to manipulate sequences, and can describe if not resolve many combinatorial situations.
Rule of sum
The rule of sum is an intuitive principle stating that if there are a possible outcomes for an event (or ways to do something) and b possible outcomes for another event (or ways to do another thing), and the two events cannot both occur (or the two things can't both be done), then there are a + b total possible outcomes for the events (or total possible ways to do one of the things). More formally, the sum of the sizes of two disjoint sets is equal to the size of their union.
Rule of product
The rule of product is another intuitive principle stating that if there are a ways to do something and b ways to do another thing, then there are a · b ways to do both things.
Inclusion–exclusion principle
The inclusion–exclusion principle relates the size of the union of multiple sets, the size of each set, and the size of each possible intersection of the sets. The smallest example is when there are two sets: the number of elements in the union of A and B is equal to the sum of the number of elements in A and B, minus the number of elements in their intersection.
Generally, according to this principle, if A1, …, An are finite sets, then
$$\left| \bigcup_{i=1}^{n} A_i \right| = \sum_{k=1}^{n} (-1)^{k-1} \sum_{1 \le i_1 < \cdots < i_k \le n} \left| A_{i_1} \cap \cdots \cap A_{i_k} \right|.$$
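A brute-force check of the principle (a sketch of ours):

```python
from itertools import combinations

def inclusion_exclusion(sets):
    """Size of a union of finite sets via the inclusion-exclusion principle."""
    total = 0
    for k in range(1, len(sets) + 1):
        sign = (-1) ** (k - 1)                      # alternate +/- by depth
        for group in combinations(sets, k):
            total += sign * len(set.intersection(*map(set, group)))
    return total

A, B, C = {1, 2, 3}, {2, 3, 4}, {3, 4, 5}
print(inclusion_exclusion([A, B, C]))  # 5
print(len(A | B | C))                  # 5 (direct check)
```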
Rule of division
The rule of division states that there are n/d ways to do a task if it can be done using a procedure that can be carried out in n ways, and for every way w, exactly d of the n ways correspond to w. |
https://en.wikipedia.org/wiki/Development%20of%20Darwin%27s%20theory | Following the inception of Charles Darwin's theory of natural selection in 1838, the development of Darwin's theory to explain the "mystery of mysteries" of how new species originated was his "prime hobby" in the background to his main occupation of publishing the scientific results of the Beagle voyage. He was settling into married life, but suffered from bouts of illness and after his first child was born the family moved to rural Down House as a family home away from the pressures of London.
The publication in 1839 of his Journal and Remarks (now known as The Voyage of the Beagle) brought him success as an author, and in 1842 he published his first major scientific book, The Structure and Distribution of Coral Reefs, setting out his theory of the formation of coral atolls. He wrote out a sketch setting out his basic ideas on transmutation of species, which he expanded into an "essay" in 1844, and discussed his theory with friends as well as continuing with experiments and wide investigations. In the same year the anonymous Vestiges of the Natural History of Creation brought wide public interest in evolutionary ideas, but also showed the need for sound evidence to gain scientific acceptance of evolution.
In 1846 he completed his third geological book, and turned from supervising the publication of expert reports on the findings from the voyage to examining barnacle specimens himself. This grew into an eight-year study, making use of his theory to find hitherto unknown relationships between the many species of barnacle, and establishing his expertise as a biologist. His faith in Christianity dwindled and he stopped going to church. In 1851 his treasured daughter suffered a long illness and died. In 1854 he resumed his work on the species question which led on to the publication of Darwin's theory.
Background
Charles Darwin became a naturalist at a point in the history of evolutionary thought when theories of Transmutation were being developed to explain discrep |
https://en.wikipedia.org/wiki/Unimodality | In mathematics, unimodality means possessing a unique mode. More generally, unimodality means there is only a single highest value, somehow defined, of some mathematical object.
Unimodal probability distribution
In statistics, a unimodal probability distribution or unimodal distribution is a probability distribution which has a single peak. The term "mode" in this context refers to any peak of the distribution, not just to the strict definition of mode which is usual in statistics.
If there is a single mode, the distribution function is called "unimodal". If it has more modes it is "bimodal" (2), "trimodal" (3), etc., or in general, "multimodal". Figure 1 illustrates normal distributions, which are unimodal. Other examples of unimodal distributions include Cauchy distribution, Student's t-distribution, chi-squared distribution and exponential distribution. Among discrete distributions, the binomial distribution and Poisson distribution can be seen as unimodal, though for some parameters they can have two adjacent values with the same probability.
Figure 2 and Figure 3 illustrate bimodal distributions.
Other definitions
Other definitions of unimodality in distribution functions also exist.
In continuous distributions, unimodality can be defined through the behavior of the cumulative distribution function (cdf). If the cdf is convex for x < m and concave for x > m, then the distribution is unimodal, m being the mode. Note that under this definition the uniform distribution is unimodal, as well as any other distribution in which the maximum value is achieved over a range of values, e.g. the trapezoidal distribution. This definition also allows for a discontinuity at the mode; usually in a continuous distribution the probability of any single value is zero, while this definition allows for a non-zero probability, or an "atom of probability", at the mode.
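For a discrete sequence of probabilities, the single-peak notion can be checked directly: the values should rise (weakly) to a peak and then fall (weakly). A small sketch of ours:

```python
def is_unimodal(pmf):
    """True if the sequence weakly rises to its peak and then weakly falls."""
    peak = pmf.index(max(pmf))
    rising = all(pmf[i] <= pmf[i + 1] for i in range(peak))
    falling = all(pmf[i] >= pmf[i + 1] for i in range(peak, len(pmf) - 1))
    return rising and falling

print(is_unimodal([0.1, 0.2, 0.4, 0.2, 0.1]))  # True
print(is_unimodal([0.3, 0.1, 0.3, 0.1, 0.2]))  # False (two peaks)
```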
Criteria for unimodality can also be defined through the characteristic function of the distribution o |
https://en.wikipedia.org/wiki/Mel%27s%20Hole | Mel's Hole is, according to an urban legend, a "bottomless pit" near Ellensburg, Washington. Claims about it were first made on the radio show Coast to Coast AM by a guest calling himself Mel Waters. Later investigation revealed no such person was listed as residing in that area, and no credible evidence has been given that the hole ever existed.
Description
The legend of the bottomless hole started on February 21, 1997, when a man identifying himself as Mel Waters appeared as a guest on Coast to Coast AM with Art Bell. Waters claimed that he owned rural property west of Ellensburg in Kittitas County, Washington that contained a mysterious hole. According to Waters, the hole had an unknown depth. He claimed to have measured its depth using fishing line and a weight, although he still had not hit bottom by the time all of his line had been used. He also claimed that his neighbor's dead dog had been seen alive sometime after it was thrown into the hole. According to Waters, the hole's magical properties prompted US federal agents to seize the land and fund his relocation to Australia.
Waters made guest appearances on Bell's show in 1997 (February 21 and 24), 2000, and 2002. Rebroadcasts of those appearances have helped create what has been described as a "modern, rural myth". The exact location of the hole was unspecified, yet several people claimed to have seen it. One was Gerald R. Osborne, who used the ceremonial name Red Elk and described himself as an "intertribal medicine man...half-breed Native American / white"; he told reporters in 2012 that he had visited the hole many times since 1961 and claimed the US government maintained a top secret base there where "alien activity" occurs. But in 2002, Osborne was unable to find the hole on an expedition of 30 people he was leading.
Local news reporters who investigated the claims found no public records of anyone named Mel Waters ever residing in, or owning property in, Kittitas County. According to State Department of Na |
https://en.wikipedia.org/wiki/Cassini%20and%20Catalan%20identities | __notoc__
Cassini's identity (sometimes called Simson's identity) and Catalan's identity are mathematical identities for the Fibonacci numbers. Cassini's identity, a special case of Catalan's identity, states that for the nth Fibonacci number,
$$F_{n-1} F_{n+1} - F_n^2 = (-1)^n.$$
Note here $F_0$ is taken to be 0, and $F_1$ is taken to be 1.
Catalan's identity generalizes this:
$$F_n^2 - F_{n-r} F_{n+r} = (-1)^{n-r} F_r^2.$$
Vajda's identity generalizes this:
$$F_{n+i} F_{n+j} - F_n F_{n+i+j} = (-1)^n F_i F_j.$$
History
Cassini's formula was discovered in 1680 by Giovanni Domenico Cassini, then director of the Paris Observatory, and independently proven by Robert Simson (1753). However Johannes Kepler presumably knew the identity already in 1608.
Catalan's identity is named after Eugène Catalan (1814–1894). It can be found in one of his private research notes, entitled "Sur la série de Lamé" and dated October 1879. However, the identity did not appear in print until December 1886, as part of his collected works. This explains why some give 1879 and others 1886 as the date for Catalan's identity.
The Hungarian-British mathematician Steven Vajda (1901–95) published a book on Fibonacci numbers (Fibonacci and Lucas Numbers, and the Golden Section: Theory and Applications, 1989) which contains the identity carrying his name. However the identity was already published in 1960 by Dustan Everman as problem 1396 in The American Mathematical Monthly.
Proof of Cassini identity
Proof by matrix theory
A quick proof of Cassini's identity may be given by recognising the left side of the equation as a determinant of a 2×2 matrix of Fibonacci numbers. The result is almost immediate when the matrix is seen to be the $n$th power of a matrix with determinant −1:
$$F_{n-1} F_{n+1} - F_n^2 = \det \begin{pmatrix} F_{n+1} & F_n \\ F_n & F_{n-1} \end{pmatrix} = \det \begin{pmatrix} 1 & 1 \\ 1 & 0 \end{pmatrix}^{\!n} = \left( \det \begin{pmatrix} 1 & 1 \\ 1 & 0 \end{pmatrix} \right)^{\!n} = (-1)^n.$$
Proof by induction
Consider the induction statement:
$$F_{n-1} F_{n+1} - F_n^2 = (-1)^n.$$
The base case $n = 1$ is true: $F_0 F_2 - F_1^2 = 0 \cdot 1 - 1 = (-1)^1$.
Assume the statement is true for $n$. Then:
$$F_n F_{n+2} - F_{n+1}^2 = F_n (F_n + F_{n+1}) - F_{n+1}^2 = F_n^2 - F_{n+1}(F_{n+1} - F_n) = F_n^2 - F_{n+1} F_{n-1} = -(F_{n-1} F_{n+1} - F_n^2) = (-1)^{n+1},$$
so the statement is true for all integers $n > 0$.
Proof of Catalan identity
We use Binet's formula, that $F_n = \frac{\varphi^n - \psi^n}{\sqrt{5}}$, where $\varphi = \frac{1 + \sqrt{5}}{2}$ and $\psi = \frac{1 - \sqrt{5}}{2}$.
Hence, $\varphi + \psi = 1$ and $\varphi \psi = -1$.
So,
$$5 (F_n^2 - F_{n-r} F_{n+r}) = (\varphi^n - \psi^n)^2 - (\varphi^{n-r} - \psi^{n-r})(\varphi^{n+r} - \psi^{n+r}) = -2 (\varphi \psi)^n + (\varphi \psi)^{n-r} (\varphi^{2r} + \psi^{2r}).$$
Using $\varphi \psi = -1$,
$$= -2 (-1)^n + (-1)^{n-r} (\varphi^{2r} + \psi^{2r}),$$
and again as $\varphi \psi = -1$,
$$= (-1)^{n-r} \left( \varphi^{2r} - 2 (\varphi \psi)^r + \psi^{2r} \right).$$
The Lucas number $L_n$ is defined as $L_n = \varphi^n + \psi^n$, so this equals $(-1)^{n-r} \left( L_{2r} - 2(-1)^r \right)$.
Because $L_{2r} = 5 F_r^2 + 2 (-1)^r$, this is $(-1)^{n-r} \cdot 5 F_r^2$.
Cancelling the 5's gives the result. |
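Both identities are easy to sanity-check numerically; a brief sketch:

```python
def fib(n):
    a, b = 0, 1  # F_0 = 0, F_1 = 1
    for _ in range(n):
        a, b = b, a + b
    return a

# Cassini: F_{n-1} F_{n+1} - F_n^2 = (-1)^n
assert all(fib(n - 1) * fib(n + 1) - fib(n) ** 2 == (-1) ** n
           for n in range(1, 20))

# Catalan: F_n^2 - F_{n-r} F_{n+r} = (-1)^{n-r} F_r^2
assert all(fib(n) ** 2 - fib(n - r) * fib(n + r) == (-1) ** (n - r) * fib(r) ** 2
           for n in range(1, 15) for r in range(n + 1))
print("Cassini and Catalan identities verified for small n")
```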
https://en.wikipedia.org/wiki/White%20Light%20%28novel%29 | White Light is a work of science fiction by Rudy Rucker published in 1980 by Virgin Books in the UK and Ace Books in the US. It was written while Rucker was teaching mathematics at the University of Heidelberg from 1978 to 1980, at roughly the same time he was working on the non-fiction book Infinity and the Mind.
On one level, the book is an exploration of the mathematics of infinity through fiction, in much the same way the novel Flatland: A Romance of Many Dimensions explored the concept of multiple dimensions. More specifically, White Light uses an imaginary universe to elucidate the set theory concept of aleph numbers, which are more or less the idea that some infinities are bigger than others.
Plot summary
The book is the story of Felix Rayman, a down-and-out mathematics teacher at SUCAS (a state college in New York, a play on SUNY) with a troubled family life and dead-in-the-water career. In the fictional town of Bernho (Geneseo), he begins experimenting with lucid dreaming—aided by "fuzz weed" (marijuana)—hoping to gain insight into Cantor's continuum hypothesis.
During an out-of-body experience, Felix loses his physical body and nearly falls victim to the Devil, who hunts the Earth for souls like his to take to Hell; Felix calls upon Jesus, who saves him. Jesus asks Felix to do him a favor: to take a restless ghost named Kathy to a place called "Cimön", and bring her to God/Absolute Infinite, which can be found there.
Cimön is permeated with the notion of infinity in its various guises: just getting there involves grappling with infinity, as Cimön is an infinite distance away from Earth. Felix and Kathy get there in their astral bodies by doubling their speed in half the time so that they asymptotically approach infinite speed at four hours. Eventually, at the speed of light, they turn into the eponymous "white light" and merge with Cimön.
In this new world, Felix encounters famous scientists and mathematicians such as Albert Einstein and Georg Cantor |
https://en.wikipedia.org/wiki/Kleinian%20model | In mathematics, a Kleinian model is a model of a three-dimensional hyperbolic manifold N by the quotient space $\mathbb{H}^3 / \Gamma$, where $\Gamma$ is a discrete subgroup of PSL(2,C). Here, the subgroup $\Gamma$, a Kleinian group, is defined so that it is isomorphic to the fundamental group $\pi_1(N)$ of the manifold N. Many authors use the terms Kleinian group and Kleinian model interchangeably, letting one stand for the other. The concept is named after Felix Klein.
Many properties of Kleinian models are in direct analogy to those of Fuchsian models; however, overall, the theory is less well developed. A number of unsolved conjectures on Kleinian models are the analogs to theorems on Fuchsian models.
See also
Hyperbolic 3-manifold |
https://en.wikipedia.org/wiki/Hyperbolic%20manifold | In mathematics, a hyperbolic manifold is a space where every point looks locally like hyperbolic space of some dimension. They are especially studied in dimensions 2 and 3, where they are called hyperbolic surfaces and hyperbolic 3-manifolds, respectively. In these dimensions, they are important because most manifolds can be made into a hyperbolic manifold by a homeomorphism. This is a consequence of the uniformization theorem for surfaces and the geometrization theorem for 3-manifolds proved by Perelman.
Rigorous definition
A hyperbolic $n$-manifold is a complete Riemannian $n$-manifold of constant sectional curvature $-1$.
Every complete, connected, simply-connected manifold of constant negative curvature $-1$ is isometric to the real hyperbolic space $\mathbb{H}^n$. As a result, the universal cover of any closed manifold $M$ of constant negative curvature $-1$ is $\mathbb{H}^n$. Thus, every such $M$ can be written as $\mathbb{H}^n / \Gamma$, where $\Gamma$ is a torsion-free discrete group of isometries on $\mathbb{H}^n$; that is, $\Gamma$ is a discrete subgroup of $\mathrm{Isom}(\mathbb{H}^n)$. The manifold has finite volume if and only if $\Gamma$ is a lattice.
Its thick–thin decomposition has a thin part consisting of tubular neighborhoods of closed geodesics and ends which are the product of a Euclidean $(n-1)$-manifold and the closed half-ray. The manifold is of finite volume if and only if its thick part is compact.
Examples
The simplest example of a hyperbolic manifold is hyperbolic space, as each point in hyperbolic space has a neighborhood isometric to hyperbolic space.
A simple non-trivial example, however, is the once-punctured torus. This is an example of an $(\mathrm{Isom}(\mathbb{H}^2), \mathbb{H}^2)$-manifold. This can be formed by taking an ideal rectangle in $\mathbb{H}^2$ – that is, a rectangle where the vertices are on the boundary at infinity, and thus don't exist in the resulting manifold – and identifying opposite sides.
In a similar fashion, we can construct the thrice-punctured sphere, shown below, by gluing two ideal triangles together. This also shows how to draw curves on the surface – the black line in the diagram becomes |
https://en.wikipedia.org/wiki/Cusp%20neighborhood | In mathematics, a cusp neighborhood is defined as a set of points near a cusp singularity.
Cusp neighborhood for a Riemann surface
The cusp neighborhood for a hyperbolic Riemann surface can be defined in terms of its Fuchsian model.
Suppose that the Fuchsian group G contains a parabolic element g. For example, the element t ∈ SL(2,Z) where
$$t = \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix}$$
is a parabolic element. Note that all parabolic elements of SL(2,C) are conjugate to this element. That is, if g ∈ SL(2,Z) is parabolic, then $g = h t h^{-1}$ for some $h$ in SL(2,C).
The set
$$U = \{ z \in H : \operatorname{Im} z > 1 \},$$
where H is the upper half-plane, has
$$\gamma(U) \cap U = \varnothing \quad \text{for any } \gamma \in G \setminus \langle g \rangle,$$
where $\langle g \rangle$ is understood to mean the group generated by g. That is, γ acts properly discontinuously on U. Because of this, it can be seen that the projection of U onto H/G is thus
$$E = U / \langle g \rangle.$$
Here, E is called the neighborhood of the cusp corresponding to g.
Note that the hyperbolic area of E is exactly 1, when computed using the canonical Poincaré metric. This is most easily seen by example: consider the intersection of U defined above with the fundamental domain
$$\left\{ z \in H : |z| > 1,\ |\operatorname{Re} z| < \tfrac{1}{2} \right\}$$
of the modular group, as would be appropriate for the choice of t as the parabolic element. When integrated over the volume element
$$d\mu = \frac{dx\, dy}{y^{2}},$$
the result is trivially 1. Areas of all cusp neighborhoods are equal to this, by the invariance of the area under conjugation. |
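Spelling out that computation (our rendering of the standard integral over the region $|\operatorname{Re} z| < 1/2$, $\operatorname{Im} z > 1$):

```latex
\mathrm{area}(E)
  = \int_{-1/2}^{1/2} \int_{1}^{\infty} \frac{dy\, dx}{y^{2}}
  = \int_{-1/2}^{1/2} \left[ -\frac{1}{y} \right]_{1}^{\infty} dx
  = \int_{-1/2}^{1/2} 1 \, dx
  = 1.
```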
https://en.wikipedia.org/wiki/Behavioural%20sciences | The behavioural sciences explore the cognitive processes within organisms and the behavioural interactions between organisms in the natural world. It involves the systematic analysis and investigation of human and animal behaviour through naturalistic observation, controlled scientific experimentation and mathematical modeling. It attempts to reach legitimate, objective conclusions through rigorous formulations and observation. Examples of behavioural sciences include psychology, psychobiology, anthropology, sociology, economics, and cognitive science. Generally, behavioural science primarily seeks to generalise about human behaviour as it relates to society and its impact on society as a whole.
Categories
Behavioural sciences include two broad categories: neural (information sciences) and social (relational sciences).
Information processing sciences deal with the processing of stimuli from the social environment by cognitive entities, in order to engage in decision making, social judgment and social perception for individual functioning and the survival of the organism in a social environment. These include psychology, cognitive science, behaviour analysis, psychobiology, neural networks, social cognition, social psychology, semantic networks, ethology, and social neuroscience.
Relational sciences, on the other hand, deal with relationships, interaction, communication networks, associations and relational strategies or dynamics between organisms or cognitive entities in a social system. These include fields like sociological social psychology, social networks, dynamic network analysis, agent-based modelling, behaviour analysis, and microsimulation.
Applications
Insights from several pure disciplines across behavioural sciences are explored by various applied disciplines and practiced in the context of everyday life and business.
Consumer behaviour, for instance, is the study of the decision making process consumers make when purchasing goods or services. It stud |
https://en.wikipedia.org/wiki/Inversion%20of%20control | In software engineering, inversion of control (IoC) is a design pattern in which custom-written portions of a computer program receive the flow of control from a generic framework. The term "inversion" is historical: a software architecture with this design "inverts" control as compared to procedural programming. In procedural programming, a program's custom code calls reusable libraries to take care of generic tasks, but with inversion of control, it is the framework that calls the custom code.
Inversion of control has been widely used by application development frameworks since the rise of GUI environments and continues to be used both in GUI environments and in web server application frameworks. Inversion of control makes the framework extensible by the methods defined by the application programmer.
Event-driven programming is often implemented using IoC so that the custom code need only be concerned with the handling of events, while the event loop and dispatch of events/messages is handled by the framework or the runtime environment. In web server application frameworks, dispatch is usually called routing, and handlers may be called endpoints.
The phrase "inversion of control" has separately also come to be used in the community of Java programmers to refer specifically to the patterns of injecting objects' dependencies that occur with "IoC containers" in Java frameworks such as the Spring framework. In this different sense, "inversion of control" refers to granting the framework control over the implementations of dependencies that are used by application objects rather than to the original meaning of granting the framework control flow (control over the time of execution of application code e.g. callbacks).
Overview
As an example, with traditional programming, the main function of an application might make function calls into a menu library to display a list of available commands and query the user to select one. The library thus would return the chosen |
https://en.wikipedia.org/wiki/CREST%20syndrome | CREST syndrome, also known as the limited cutaneous form of systemic sclerosis (lcSSc), is a multisystem connective tissue disorder. The acronym "CREST" refers to the five main features: calcinosis, Raynaud's phenomenon, esophageal dysmotility, sclerodactyly, and telangiectasia.
CREST syndrome is associated with detectable antibodies against centromeres (a component of the cell nucleus), and usually spares the kidneys (a feature more common in the related condition systemic scleroderma). If the lungs are involved, it is usually in the form of pulmonary arterial hypertension.
Signs and symptoms
Calcinosis
CREST causes thickening and tightening of the skin with deposition of calcific nodules ("calcinosis").
Raynaud's phenomenon
Raynaud's phenomenon is frequently the first manifestation of CREST/lcSSc, preceding other symptoms by years. Stress and cold temperature induce an exaggerated vasoconstriction of the small arteries, arterioles, and thermoregulatory vessels of the skin of the digits. Clinically this manifests as a white-blue-red transition in skin color. Underlying this transition is pallor and cyanosis of the digits, followed by a reactive hyperemia as they rewarm. When extreme and frequent, this phenomenon can lead to digital ulcerations, gangrene, or amputation.
Ulceration can predispose to chronic infections of the involved site.
Esophageal dysmotility
Esophageal dysmotility presents as a sensation of food getting stuck (dysphagia) in the mid- or lower esophagus, atypical chest pain, or cough. People often state they must drink liquids to swallow solid food. This motility problem results from atrophy of the smooth muscle of the gastrointestinal tract wall. This change may occur with or without pathologic evidence of significant tissue fibrosis.
Sclerodactyly
Though it is the most easily recognizable manifestation, it is not prominent in all patients. Thickening generally only involves the skin of the fingers distal to the metacarpophalangeal joints in CREST. Early in the co |
https://en.wikipedia.org/wiki/Tempo%20and%20Mode%20in%20Evolution | Tempo and Mode in Evolution (1944) was George Gaylord Simpson's seminal contribution to the evolutionary synthesis, which integrated the facts of paleontology with those of genetics and natural selection.
Simpson argued that the microevolution of population genetics was sufficient in itself to explain the patterns of macroevolution observed by paleontology. Simpson also highlighted the distinction between tempo and mode. "Tempo" encompasses "evolutionary rates … their acceleration and deceleration, the conditions of exceptionally slow or rapid evolutions, and phenomena suggestive of inertia and momentum," while "mode" embraces "the study of the way, manner, or pattern of evolution, a study in which tempo is a basic factor, but which embraces considerably more than tempo."
Simpson's Tempo and Mode attempted to draw out several distinct generalizations:
Evolution's tempo can impart information about its mode.
Multiple tempos can be found in the fossil record: horotelic (medium tempo), bradytelic (slow tempo), and tachytelic (rapid tempo).
The facts of paleontology are consistent with the genetical theory of natural selection. Moreover, theories such as orthogenesis, Lamarckism, mutation pressures, and macromutations either are false or play little to no role.
Most evolution—"nine-tenths"—occurs by the steady phyletic transformation of whole lineages (anagenesis). This contrasts with Ernst Mayr's interpretation of speciation by splitting, particularly allopatric and peripatric speciation.
The lack of evidence for evolutionary transitions in the fossil record is best accounted for, first, by the poorness of the geological record, and, second, as a consequence of quantum evolution (which is responsible for "the origin of taxonomic units of relatively high rank, such as families, orders, and classes"). Quantum evolution built upon Sewall Wright's theory of random genetic drift.
Tempo and Mode earned Simpson the Daniel Giraud Elliot Medal from the National Academy |
https://en.wikipedia.org/wiki/KT%20%28energy%29 | kT (also written as kBT) is the product of the Boltzmann constant, k (or kB), and the temperature, T. This product is used in physics as a scale factor for energy values in molecular-scale systems (sometimes it is used as a unit of energy), as the rates and frequencies of many processes and phenomena depend not on their energy alone, but on the ratio of that energy and kT, that is, on $E/kT$ (see Arrhenius equation, Boltzmann factor). For a system in equilibrium in canonical ensemble, the probability of the system being in state with energy E is proportional to $e^{-E/kT}$.
More fundamentally, kT is the amount of heat required to increase the thermodynamic entropy of a system by k.
In physical chemistry, as kT often appears in the denominator of fractions (usually because of the Boltzmann distribution), sometimes β = 1/kT is used instead of kT, turning $e^{-E/kT}$ into $e^{-\beta E}$.
RT
RT is the product of the molar gas constant, R, and the temperature, T. This product is used in physics and chemistry as a scaling factor for energy values in macroscopic scale (sometimes it is used as a pseudo-unit of energy), as many processes and phenomena depend not on the energy alone, but on the ratio of energy and RT, i.e. E/RT. The SI units for RT are joules per mole (J/mol).
It differs from kT only by a factor of the Avogadro constant, NA. Its dimension is energy or $ML^2T^{-2}$, expressed in SI units as joules (J):
kT = RT/NA |
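As a quick numerical sanity check of these relations (an illustration, not part of the article, using the exact SI constants), a short Python snippet evaluates kT and RT at room temperature:

    k_B = 1.380649e-23     # Boltzmann constant, J/K (exact in the 2019 SI)
    N_A = 6.02214076e23    # Avogadro constant, 1/mol (exact)
    eV  = 1.602176634e-19  # joules per electronvolt (exact)
    T = 298.15             # room temperature, K

    kT = k_B * T           # ~4.12e-21 J
    RT = (k_B * N_A) * T   # ~2478.8 J/mol, since R = k_B * N_A
    print(kT / eV)         # ~0.0257 eV: the familiar "kT is about 1/40 eV"
    print(abs(kT - RT / N_A))  # ~0: confirms kT = RT/N_A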
https://en.wikipedia.org/wiki/Artificial%20immune%20system | In artificial intelligence, artificial immune systems (AIS) are a class of computationally intelligent, rule-based machine learning systems inspired by the principles and processes of the vertebrate immune system. The algorithms are typically modeled after the immune system's characteristics of learning and memory for use in problem-solving.
Definition
The field of artificial immune systems (AIS) is concerned with abstracting the structure and function of the immune system to computational systems, and investigating the application of these systems towards solving computational problems from mathematics, engineering, and information technology. AIS is a sub-field of biologically inspired computing and natural computation, with interests in machine learning, and belongs to the broader field of artificial intelligence.
AIS is distinct from computational immunology and theoretical biology that are concerned with simulating immunology using computational and mathematical models towards better understanding the immune system, although such models initiated the field of AIS and continue to provide a fertile ground for inspiration. Finally, the field of AIS is not concerned with the investigation of the immune system as a substrate for computation, unlike other fields such as DNA computing.
History
AIS emerged in the mid-1980s with articles authored by Farmer, Packard and Perelson (1986) and Bersini and Varela (1990) on immune networks. However, it was only in the mid-1990s that AIS became a field in its own right. Forrest et al. (on negative selection) and Kephart et al. published their first papers on AIS in 1994, and Dasgupta conducted extensive studies on Negative Selection Algorithms. Hunt and Cooke started the works on Immune Network models in 1995; Timmis and Neal continued this work and made some improvements. De Castro & Von Zuben's and Nicosia & Cutello's work (on clonal selection) became notable in 2002. The first book on Artificial Immune Systems was edit |
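To make one of the algorithm families named above concrete, here is a minimal negative-selection sketch in Python; the one-dimensional feature space, distance threshold, and matching rule are invented for illustration and are not drawn from any particular AIS publication:

    import random

    def matches(detector, sample, threshold=0.1):
        # A detector "matches" a sample if the two are close in feature space.
        # The threshold is an arbitrary choice for this toy example.
        return abs(detector - sample) < threshold

    def train_detectors(self_samples, n_detectors=50):
        # Negative selection: keep only random candidate detectors that do
        # NOT match any known "self" (normal) sample.
        detectors = []
        while len(detectors) < n_detectors:
            candidate = random.random()
            if not any(matches(candidate, s) for s in self_samples):
                detectors.append(candidate)
        return detectors

    normal = [0.40, 0.45, 0.50]        # observations of normal behaviour
    detectors = train_detectors(normal)
    # A sample flagged by any detector is classified as "non-self" (anomalous).
    print(any(matches(d, 0.95) for d in detectors))  # very likely True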
https://en.wikipedia.org/wiki/Tseng%20Labs%20ET4000 | The Tseng Labs ET4000 was a line of SVGA graphics controller chips during the early 1990s, commonly found in many 386/486 and compatible systems, with some models, notably the ET4000/W32 and later chips, offering graphics acceleration. Offering above average host interface throughput coupled with a moderate price, Tseng Labs' ET4000 chipset family were well regarded for their performance, and were integrated into many companies' lineups, notably with Hercules' Dynamite series, the Diamond Stealth 32 and several Speedstar cards, and on many generic boards.
Models
ET4000AX
The ET4000AX was a major advancement over Tseng Labs' earlier ET3000 SVGA chipset, featuring a new 16-bit host interface controller with deep FIFO buffering and caching capabilities, and an enhanced, variable-width memory interface with support for up to 1MB of memory with a ~16-bit VRAM or ~32-bit DRAM memory data bus width. The FIFO buffers and cache functions had the effect of greatly improving host interface throughput, and therefore offering substantially improved redraw performance compared to the ET3000 and most of its contemporaries. The interface controller also offered support for IBM's MCA bus, in addition to an 8 or 16-bit ISA bus. The ET4000AX could also support the emerging VESA Local Bus standard with some additional external logic, albeit with a 16-bit host bus width.
Neither the ET4000AX nor its succeeding family members offered an integrated RAMDAC, which hampered the line's cost/performance competitiveness later on.
ET4000/W32
Hardware acceleration via dedicated BitBLT hardware and a hardware cursor sprite was introduced in the ET4000/W32. The W32 offered improved local bus support along with further increased host interface performance, but by the time PCI Windows accelerators became commonplace, high host throughput was no longer a distinguishing feature. Nevertheless, as a mid-priced Windows accelerator, the W32 benchmarked favorably against competing mid-range S3 and |
https://en.wikipedia.org/wiki/Syslog | In computing, syslog is a standard for message logging. It allows separation of the software that generates messages, the system that stores them, and the software that reports and analyzes them. Each message is labeled with a facility code, indicating the type of system generating the message, and is assigned a severity level.
Computer system designers may use syslog for system management and security auditing as well as general informational, analysis, and debugging messages. A wide variety of devices, such as printers, routers, and message receivers across many platforms use the syslog standard. This permits the consolidation of logging data from different types of systems in a central repository. Implementations of syslog exist for many operating systems.
When operating over a network, syslog uses a client-server architecture where a syslog server listens for and logs messages coming from clients.
History
Syslog was developed in the 1980s by Eric Allman as part of the Sendmail project. It was readily adopted by other applications and has since become the standard logging solution on Unix-like systems. A variety of implementations also exist on other operating systems and it is commonly found in network devices, such as routers.
Syslog originally functioned as a de facto standard, without any authoritative published specification, and many implementations existed, some of which were incompatible. The Internet Engineering Task Force documented the status quo in RFC 3164 in August 2001. It was standardized by RFC 5424 in March 2009.
Various companies have attempted to claim patents for specific aspects of syslog implementations. This has had little effect on the use and standardization of the protocol.
Message components
The information provided by the originator of a syslog message includes the facility code and the severity level. The syslog software adds information to the information header before passing the entry to the syslog receiver. Such comp |
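To make the facility and severity mechanics concrete, here is a small Python sketch (my own illustration, assuming a syslog daemon is listening on the standard UDP port 514); the standard library's SysLogHandler encodes the priority value prepended to each message as facility × 8 + severity, following the RFCs above:

    import logging
    import logging.handlers

    # Assumption: a syslog daemon is reachable at localhost:514 over UDP;
    # adjust the address for your environment.
    handler = logging.handlers.SysLogHandler(
        address=("localhost", 514),
        facility=logging.handlers.SysLogHandler.LOG_USER,
    )
    logger = logging.getLogger("demo")
    logger.addHandler(handler)

    # Facility "user" is code 1 and severity "warning" is code 4,
    # so this message is sent with priority <12> (1 * 8 + 4).
    logger.warning("disk usage above 90 percent")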
https://en.wikipedia.org/wiki/Human%20genetics | Human genetics is the study of inheritance as it occurs in human beings. Human genetics encompasses a variety of overlapping fields including: classical genetics, cytogenetics, molecular genetics, biochemical genetics, genomics, population genetics, developmental genetics, clinical genetics, and genetic counseling.
Genes are the common factor underlying most inherited human traits. The study of human genetics can answer questions about human nature, can help us understand diseases and develop effective treatments, and can deepen our understanding of the genetics of human life. This article describes only basic features of human genetics; for the genetics of disorders please see: medical genetics.
Genetic differences and inheritance patterns
Inheritance of traits in humans is based upon Gregor Mendel's model of inheritance. Mendel deduced that inheritance depends upon discrete units of inheritance, called factors or genes.
Autosomal dominant inheritance
Autosomal traits are associated with a single gene on an autosome (non-sex chromosome)—they are called "dominant" because a single copy—inherited from either parent—is enough to cause this trait to appear. This often means that one of the parents must also have the same trait, unless it has arisen due to an unlikely new mutation. Examples of autosomal dominant traits and disorders are Huntington's disease and achondroplasia.
Autosomal recessive inheritance
Autosomal recessive inheritance is one pattern by which a trait, disease, or disorder can be passed down through families. For a recessive trait or disease to be displayed, two copies of the responsible gene must be present. The gene will be located on a non-sex chromosome. Because two copies of a gene are required for the trait to be displayed, many people can unknowingly be carriers of a disease. From an evolutionary perspective, a recessive disease or trait can remain hidden for several generations before displaying the phenotype. Examples of auto |
https://en.wikipedia.org/wiki/Quasiperiodic%20motion | In mathematics and theoretical physics, quasiperiodic motion is in rough terms the type of motion executed by a dynamical system containing a finite number (two or more) of incommensurable frequencies.
That is, if we imagine that the phase space is modelled by a torus T (that is, the variables are periodic like angles), the trajectory of the system is modelled by a curve on T that wraps around the torus without ever exactly coming back on itself.
A quasiperiodic function on the real line is the type of function (continuous, say) obtained by composing a function on T with a curve
R → T
which is linear (when lifted from T to its covering Euclidean space). It is therefore oscillating, with a finite number of underlying frequencies. (NB the sense in which theta functions and the Weierstrass zeta function in complex analysis are said to have quasi-periods with respect to a period lattice is something distinct from this.)
The theory of almost periodic functions is, roughly speaking, for the same situation but allowing T to be a torus with an infinite number of dimensions. |
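A standard concrete example (an illustration, not from the article): the function
$f(t) = \sin(2\pi t) + \sin(2\pi\sqrt{2}\,t)$
is quasiperiodic with the two incommensurable frequencies $1$ and $\sqrt{2}$. It arises by composing the function $F(\theta_1, \theta_2) = \sin(2\pi\theta_1) + \sin(2\pi\theta_2)$ on the torus $T^2$ with the linear curve $t \mapsto (t, \sqrt{2}\,t)$, whose image winds around $T^2$ without ever exactly closing up, so $f$ oscillates but never repeats itself exactly.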
https://en.wikipedia.org/wiki/Statistical%20genetics | Statistical genetics is a scientific field concerned with the development and application of statistical methods for drawing inferences from genetic data. The term is most commonly used in the context of human genetics. Research in statistical genetics generally involves developing theory or methodology to support research in one of three related areas:
population genetics - Study of evolutionary processes affecting genetic variation between organisms
genetic epidemiology - Studying effects of genes on diseases
quantitative genetics - Studying the effects of genes on 'normal' phenotypes
Statistical geneticists tend to collaborate closely with geneticists, molecular biologists, clinicians and bioinformaticians. Statistical genetics is a type of computational biology. |
https://en.wikipedia.org/wiki/Franck%E2%80%93Condon%20principle | The Franck–Condon principle (named for James Franck and Edward Condon) is a rule in spectroscopy and quantum chemistry that explains the intensity of vibronic transitions (the simultaneous changes in electronic and vibrational energy levels of a molecule due to the absorption or emission of a photon of the appropriate energy). The principle states that during an electronic transition, a change from one vibrational energy level to another will be more likely to happen if the two vibrational wave functions overlap more significantly.
Overview
The Franck–Condon principle has a well-established semiclassical interpretation based on the original contributions of James Franck. Electronic transitions are relatively instantaneous compared with the time scale of nuclear motions, therefore if the molecule is to move to a new vibrational level during the electronic transition, this new vibrational level must be instantaneously compatible with the nuclear positions and momenta of the vibrational level of the molecule in the originating electronic state. In the semiclassical picture of vibrations (oscillations) of a simple harmonic oscillator, the necessary conditions can occur at the turning points, where the momentum is zero.
In the quantum mechanical picture, the vibrational levels and vibrational wavefunctions are those of quantum harmonic oscillators, or of more complex approximations to the potential energy of molecules, such as the Morse potential. Figure 1 illustrates the Franck–Condon principle for vibronic transitions in a molecule with Morse-like potential energy functions in both the ground and excited electronic states. In the low temperature approximation, the molecule starts out in the v = 0 vibrational level of the ground electronic state and upon absorbing a photon of the necessary energy, makes a transition to the excited electronic state. The electron configuration of the new state may result in a shift of the equilibrium position of the nuclei constituting |
https://en.wikipedia.org/wiki/Method%20of%20distinguished%20element | In the mathematical field of enumerative combinatorics, identities are sometimes established by arguments that rely on singling out one "distinguished element" of a set.
Definition
Let $\mathcal{A}$ be a family of subsets of the set $X$ and let $x \in X$ be a distinguished element of $X$. Then suppose there is a predicate $P(A, x)$ that relates a subset $A \subseteq X$ to $x$. Denote by $\mathcal{A}_P$ the set of subsets $A$ from $\mathcal{A}$ for which $P(A, x)$ is true and by $\mathcal{A}_{\neg P}$ the set of subsets $A$ from $\mathcal{A}$ for which $P(A, x)$ is false. Then $\mathcal{A}_P$ and $\mathcal{A}_{\neg P}$ are disjoint sets, so by the method of summation, the cardinalities are additive:
$|\mathcal{A}| = |\mathcal{A}_P| + |\mathcal{A}_{\neg P}|$
Thus the distinguished element allows for a decomposition according to a predicate, a simple form of a divide and conquer algorithm. In combinatorics, this allows for the construction of recurrence relations. Examples are in the next section.
Examples
The binomial coefficient $\binom{n}{k}$ is the number of size-k subsets of a size-n set. A basic identity—one of whose consequences is that the binomial coefficients are precisely the numbers appearing in Pascal's triangle—states that:
$\binom{n+1}{k} = \binom{n}{k-1} + \binom{n}{k}$
Proof: In a size-(n + 1) set, choose one distinguished element. The set of all size-k subsets contains: (1) all size-k subsets that do contain the distinguished element, and (2) all size-k subsets that do not contain the distinguished element. If a size-k subset of a size-(n + 1) set does contain the distinguished element, then its other k − 1 elements are chosen from among the other n elements of our size-(n + 1) set. The number of ways to choose those is therefore $\binom{n}{k-1}$. If a size-k subset does not contain the distinguished element, then all of its k members are chosen from among the other n "non-distinguished" elements. The number of ways to choose those is therefore $\binom{n}{k}$.
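The identity translates directly into a recurrence; the following Python sketch (illustrative, not part of the article) computes binomial coefficients this way and checks the result against the standard library:

    from functools import lru_cache
    from math import comb

    @lru_cache(maxsize=None)
    def binom(n, k):
        # Distinguished-element recurrence: size-k subsets of a size-n set
        # either contain the distinguished element (choose k - 1 from the
        # other n - 1) or do not (choose k from the other n - 1).
        if k < 0 or k > n:
            return 0
        if k == 0 or k == n:
            return 1
        return binom(n - 1, k - 1) + binom(n - 1, k)

    assert binom(10, 4) == comb(10, 4) == 210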
The number of subsets of any size-n set is $2^n$.
Proof: We use mathematical induction. The basis for induction is the truth of this proposition in case n = 0. The empty set has 0 members and 1 subset, and $2^0 = 1$. The induction hypothesis is the proposition in case n; we use it to prove ca |
https://en.wikipedia.org/wiki/Togo%20%28dog%29 | Togo (1913 – December 5, 1929) was the lead sled dog of musher Leonhard Seppala and his dog sled team in the 1925 serum run to Nome across central and northern Alaska. Despite covering a far greater distance than any other lead dogs on the run, over some of the most dangerous parts of the trail, his role was left out of contemporary news of the event at the time, in favor of the lead dog for the last leg of the relay, Balto, whom Seppala also owned and had bred.
Deemed at first a mere troublemaker, before being identified as a natural leader and puppy prodigy by Seppala, Togo had already shown extreme feats of dedication and endurance as a puppy, and as an adult continued to show unusual feats of intelligence, saving the lives of his team and musher on more than one occasion. Sled dogs bred from his line have contributed to the 'Seppala Siberian' sleddog line, as well as the mainstream Siberian Husky gene pool.
Background
Togo was one of the offspring of former lead dog Suggen and the female Siberian import Dolly. Early pedigree records are inconsistent in his birth year, including those kept by his breeder Viktor Anderson and his owner, Seppala; most sources published list his birth year as 1913, but no other form of consensus exists on his exact time of birth. He was named Cugu [tso`go], which means puppy in the Northern Sami language, and later after the Japanese Admiral, Tōgō Heihachirō. Initially, he did not look like he had potential as a sled dog. He only grew to about 48 pounds (22 kg) in adulthood and had a black, brown, and gray coat that made him appear perpetually dirty.
Togo was ill as a young puppy and required intensive nursing from Seppala's wife. He was very bold and rowdy, thus seen as "difficult and mischievous", showing "all the signs of becoming a ... canine delinquent" according to one reporter. At first, this behaviour was interpreted as evidence that he had been spoiled by the individual attention given to him during his illness. As he did no |
https://en.wikipedia.org/wiki/Breathalyzer | A breathalyzer or breathalyser (a portmanteau of breath and analyzer/analyser) is a device for measuring breath alcohol content (BrAC). The name is a genericized trademark of the Breathalyzer brand name of instruments developed by inventor Robert Frank Borkenstein in the 1950s.
Origins
Research into the possibilities of using breath to test for alcohol in a person's body dates as far back as 1874, when Francis E. Anstie made the observation that small amounts of alcohol were excreted in breath.
In 1927, Emil Bogen produced a paper on breath analysis. He collected air in a football bladder and then tested this air for traces of alcohol, discovering that the alcohol content of 2 litres of expired air was a little greater than that of 1 cc of urine. Also in 1927, a Chicago chemist, William Duncan McNally, invented a breathalyzer in which the breath moving through chemicals in water would change color. One suggested use for his invention was for housewives to test whether their husbands had been drinking. In December 1927, in a case in Marlborough, England, Dr. Gorsky, a police surgeon, asked a suspect to inflate a football bladder with his breath. Since the 2 litres of the man's breath contained 1.5 mg of ethanol, Gorsky testified before the court that the defendant was "50% drunk". The use of drunkenness as the standard, as opposed to BAC, perhaps invalidated the analysis, as tolerance to alcohol varies. However, the story illustrates the general principles of breath analysis.
In 1931 the first practical roadside breath-testing device was the drunkometer developed by Rolla Neil Harger of the Indiana University School of Medicine. The drunkometer collected a motorist's breath sample directly into a balloon inside the machine. The breath sample was then pumped through an acidified potassium permanganate solution. If there was alcohol in the breath sample, the solution changed color. The greater the color change, the more alcohol there was present in the breath. Th |
https://en.wikipedia.org/wiki/G.%20Ledyard%20Stebbins | George Ledyard Stebbins Jr. (January 6, 1906 – January 19, 2000) was an American botanist and geneticist who is widely regarded as one of the leading evolutionary biologists of the 20th century. Stebbins received his Ph.D. in botany from Harvard University in 1931. He went on to the University of California, Berkeley, where his work with E. B. Babcock on the genetic evolution of plant species, and his association with a group of evolutionary biologists known as the Bay Area Biosystematists, led him to develop a comprehensive synthesis of plant evolution incorporating genetics.
His most important publication was Variation and Evolution in Plants, which combined genetics and Darwin's theory of natural selection to describe plant speciation. It is regarded as one of the main publications which formed the core of the modern synthesis and still provides the conceptual framework for research in plant evolutionary biology; according to Ernst Mayr, "Few later works dealing with the evolutionary systematics of plants have not been very deeply affected by Stebbins' work." He also researched and wrote widely on the role of hybridization and polyploidy in speciation and plant evolution; his work in this area has had a lasting influence on research in the field.
From 1960, Stebbins was instrumental in the establishment of the Department of Genetics at the University of California, Davis, and was active in numerous organizations involved in the promotion of evolution, and of science in general. He was elected to the National Academy of Sciences and the American Philosophical Society, was awarded the National Medal of Science, and was involved in the development of evolution-based science programs for California high schools, as well as the conservation of rare plants in that state.
Early life and education
Stebbins was born in Lawrence, New York, the youngest of three children. His parents were George Ledyard Stebbins, a wealthy real estate financier who developed Seal Harbor, |
https://en.wikipedia.org/wiki/Kripke%20structure%20%28model%20checking%29 | This article describes Kripke structures as used in model checking. For a more general description, see Kripke semantics.
A Kripke structure is a variation of the transition system, originally proposed by Saul Kripke, used in model checking to represent the behavior of a system.
It consists of a graph whose nodes represent the reachable states of the system and whose edges represent state transitions, together with a labelling function which maps each node to a set of properties that hold in the corresponding state. Temporal logics are traditionally interpreted in terms of Kripke structures.
Formal definition
Let $AP$ be a set of atomic propositions, i.e. boolean-valued expressions formed from variables, constants and predicate symbols. Clarke et al. define a Kripke structure over $AP$ as a 4-tuple $M = (S, I, R, L)$ consisting of
a finite set of states $S$.
a set of initial states $I \subseteq S$.
a transition relation $R \subseteq S \times S$ such that $R$ is left-total, i.e., such that $\forall s \in S\ \exists s' \in S : (s, s') \in R$.
a labeling (or interpretation) function $L : S \to 2^{AP}$.
Since $R$ is left-total, it is always possible to construct an infinite path through the Kripke structure. A deadlock state can be modeled by a single outgoing edge back to itself.
The labeling function $L$ defines for each state $s \in S$ the set $L(s)$ of all atomic propositions that are valid in $s$.
A path of the structure $M$ is a sequence of states $\rho = s_1, s_2, s_3, \ldots$ such that for each $i > 0$, $R(s_i, s_{i+1})$ holds.
The word on the path $\rho$ is the sequence of sets of the atomic propositions $w = L(s_1), L(s_2), L(s_3), \ldots$, which is an ω-word over the alphabet $2^{AP}$.
With this definition, a Kripke structure (say, having only one initial state $s_0$) may be identified with a Moore machine with a singleton input alphabet, and with the output function being its labeling function.
Example
Let the set of atomic propositions $AP = \{p, q\}$.
$p$ and $q$ can model arbitrary boolean properties of the system that the Kripke structure is
modelling.
The figure at right illustrates a Kripke structure $M = (S, I, R, L)$,
where
$S = \{s_1, s_2, s_3\}$.
$I = \{s_1\}$.
$R = \{(s_1, s_2), (s_2, s_1), (s_2, s_3), (s_3, s_3)\}$.
$L = \{(s_1, \{p, q\}), (s_2, \{q\}), (s_3, \{p\})\}$.
$M$ may produce a path $\rho = s_1, s_2, s_1, s_2, s_3, s_3, \ldots$ and $w = \{p, q\}, \{q\}, \{p, q\}, \{q\}, \{p\}, \{p\}, \ldots$ is the execution word over the path $\rho$.
$M$ can produce execution words belonging to the la |
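The example is small enough to encode directly; the following Python sketch (my own illustration, mirroring the structure above) follows the transition relation for a few steps and prints the resulting path prefix and its word:

    # The example Kripke structure above, as plain Python data.
    S = {"s1", "s2", "s3"}
    I = {"s1"}
    R = {("s1", "s2"), ("s2", "s1"), ("s2", "s3"), ("s3", "s3")}
    L = {"s1": {"p", "q"}, "s2": {"q"}, "s3": {"p"}}

    def path_prefix(start, choose, steps):
        # Since R is left-total, every state has at least one successor,
        # so a path prefix of any length can be generated.
        state, states = start, [start]
        for _ in range(steps):
            successors = sorted(t for (s, t) in R if s == state)
            state = choose(successors)
            states.append(state)
        return states

    # Deterministic choice: always take the first successor.
    rho = path_prefix("s1", lambda succ: succ[0], steps=5)
    print(rho)                  # ['s1', 's2', 's1', 's2', 's1', 's2']
    print([L[s] for s in rho])  # the word: {'p','q'}, {'q'}, {'p','q'}, ...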
https://en.wikipedia.org/wiki/Atmospheric%20wave | An atmospheric wave is a periodic disturbance in the fields of atmospheric variables (like surface pressure or geopotential height, temperature, or wind velocity) which may either propagate (traveling wave) or not (standing wave). Atmospheric waves range in spatial and temporal scale from large-scale planetary waves (Rossby waves) to minute sound waves. Atmospheric waves with periods which are harmonics of 1 solar day (e.g. 24 hours, 12 hours, 8 hours... etc.) are known as atmospheric tides.
Causes and effects
The mechanism for the forcing of the wave, for example, the generation of the initial or prolonged disturbance in the atmospheric variables, can vary. Generally, waves are either excited by heating or dynamic effects, for example the obstruction of the flow by mountain ranges like the Rocky Mountains in the U.S. or the Alps in Europe. Heating effects can be small-scale (like the generation of gravity waves by convection) or large-scale (the formation of Rossby waves by the temperature contrasts between continents and oceans in the Northern hemisphere winter).
Atmospheric waves transport momentum, which is fed back into the background flow as the wave dissipates. This wave forcing of the flow is particularly important in the stratosphere, where this momentum deposition by planetary-scale Rossby waves gives rise to sudden stratospheric warmings and the deposition by gravity waves gives rise to the quasi-biennial oscillation.
In the mathematical description of atmospheric waves, spherical harmonics are used. When considering a section of a wave along a latitude circle, this is equivalent to a sinusoidal shape. Spherical harmonics, representing individual Rossby-Haurwitz planetary wave modes, can have any orientation with respect to the axis of rotation of the planet. Remarkably, while the very existence of these planetary wave modes requires the rotation of the planet around its polar axis, the phase velocity of the individual wave modes does not depend on |
https://en.wikipedia.org/wiki/Java%20Cryptography%20Architecture | In computing, the Java Cryptography Architecture (JCA) is a framework for working with cryptography using the Java programming language. It forms part of the Java security API, and was first introduced in JDK 1.1 in the java.security package.
The JCA uses a "provider"-based architecture and contains a set of APIs for various purposes, such as encryption, key generation and management, secure random-number generation, certificate validation, etc. These APIs provide an easy way for developers to integrate security into application code.
See also
Java Cryptography Extension
Bouncy Castle (cryptography)
External links
Official JCA guides: JavaSE6, JavaSE7, JavaSE8, JavaSE9, JavaSE10, JavaSE11
Java platform
Cryptographic software |
https://en.wikipedia.org/wiki/Flag%20officer | A flag officer is a commissioned officer in a nation's armed forces senior enough to be entitled to fly a flag to mark the position from which the officer exercises command.
The term is used differently in different countries:
In many countries, a flag officer is a senior officer of the navy, specifically those who hold any of the admiral ranks; the term may or may not include the rank of commodore.
In some countries, such as the United States, India, and Bangladesh it may apply to all armed forces, not just the navy. This means generals can also be considered flag officers.
In most Arab armies, liwa (Arabic: لواء), which can be translated as flag officer, is a specific rank, equivalent to a major general. However, "ensign" is debatably a more exact translation of the word. In principle, a flag officer commands several units called "flags" (or "ensigns") (i.e. brigades).
General usage
The generic title of flag officer is used in many modern navies and coast guards to denote those who hold the rank of rear admiral or its equivalent and above, also called "flag ranks". In some navies, this also includes the rank of commodore. Flag officer corresponds to the generic terms general officer, used by land and some air forces to describe all grades of generals, and air officer, used by other air forces to describe all grades of air marshals and air commodores.
A flag officer sometimes has a junior officer, called a flag lieutenant or flag adjutant, attached as a personal adjutant or aide-de-camp.
Canada
In the Canadian Armed Forces, a flag officer (French: officier général, "general officer") is an admiral, vice admiral, rear admiral, or commodore, the naval equivalent of a general officer of the army or air force. It is a somewhat counterintuitive usage of the term, as only flag officers in command of commands or formations actually have their own flags (technically a commodore has only a broad pennant, not a flag), and army and air force generals in command of command |
https://en.wikipedia.org/wiki/Stressor | A stressor is a chemical or biological agent, environmental condition, external stimulus or an event seen as causing stress to an organism. Psychologically speaking, a stressor can be an event or environment that individuals might consider demanding, challenging, and/or threatening to individual safety.
Events or objects that may trigger a stress response may include:
environmental stressors (hypo- or hyperthermic temperatures, elevated sound levels, over-illumination, overcrowding)
daily "stress" events (e.g., traffic, lost keys, money, quality and quantity of physical activity)
life changes (e.g., divorce, bereavement)
workplace stressors (e.g., high job demand vs. low job control, repeated or sustained exertions, forceful exertions, extreme postures, office clutter)
chemical stressors (e.g., tobacco, alcohol, drugs)
social stressor (e.g., societal and family demands)
Stressors can cause physical, chemical and mental responses internally. Physical stressors produce mechanical stresses on skin, bones, ligaments, tendons, muscles and nerves that cause tissue deformation and (in extreme cases) tissue failure. Chemical stresses also produce biomechanical responses associated with metabolism and tissue repair. Physical stressors may produce pain and impair work performance. Chronic pain and impairment requiring medical attention may result from extreme physical stressors or if there is not sufficient recovery time between successive exposures. A recent study shows that physical office clutter could be an example of physical stressors in a workplace setting.
Stressors may also affect mental function and performance. One possible mechanism involves stimulation of the hypothalamus, CRF (corticotropin release factor) -> pituitary gland releases ACTH (adrenocorticotropic hormone) -> adrenal cortex secretes various stress hormones (e.g., cortisol) -> stress hormones (30 varieties) travel in the blood stream to relevant organs, e.g., glands, heart, intestines -> flight |
https://en.wikipedia.org/wiki/Emission%20theory%20%28vision%29 | Emission theory or extramission theory (variants: extromission) or extromissionism is the proposal that visual perception is accomplished by eye beams emitted by the eyes. This theory has been replaced by intromission theory (or intromissionism), which is that visual perception comes from something representative of the object (later established to be rays of light reflected from it) entering the eyes. Modern physics has confirmed that light is physically transmitted by photons from a light source, such as the sun, to visible objects, and finishing with the detector, such as a human eye or camera.
History
In the fifth century BC, Empedocles postulated that everything was composed of four elements: fire, air, earth, and water. He believed that Aphrodite made the human eye out of the four elements and that she lit the fire in the eye which shone out from the eye, making sight possible. If this were true, then one could see during the night just as well as during the day, so Empedocles postulated that there were two different types of emanations that interacted in some way: one that emanated from an object to the eye, and another that emanated from the eye to an object. He compared these outward-flowing emanations to the emission of light from a lantern.
Around 400 BC, emission theory was held by Plato.
Around 300 BC, Euclid wrote Optics and Catoptrics, in which he studied the properties of sight. Euclid postulated that the visual ray emitted from the eye travelled in straight lines, described the laws of reflection, and mathematically studied the appearance of objects by direct vision and by reflection.
Ptolemy (c. 2nd century) wrote Optics, a work marking the culmination of the ancient Greek optics, in which he developed theories of direct vision (optics proper), vision by reflection (catoptics), and, notably, vision by refraction (dioptrics).
Galen, also in the 2nd century, likewise endorsed the extramission theory (De Usu Partium Corporis Humani). His theo |
https://en.wikipedia.org/wiki/Dilation%20%28metric%20space%29 | In mathematics, a dilation is a function $f$ from a metric space $M$ into itself that satisfies the identity
$d(f(x), f(y)) = r\, d(x, y)$
for all points $x, y \in M$, where $d(x, y)$ is the distance from $x$ to $y$ and $r$ is some positive real number.
In Euclidean space, such a dilation is a similarity of the space. Dilations change the size but not the shape of an object or figure.
Every dilation of a Euclidean space that is not a congruence has a unique fixed point that is called the center of dilation. Some congruences have fixed points and others do not.
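As a concrete illustration (mine, not the article's): in $\mathbb{R}^n$, the map $f(x) = rx + b$ with $r > 0$ is a dilation, since $\|f(x) - f(y)\| = r\,\|x - y\|$. For $r \neq 1$ it is not a congruence, and solving $x = rx + b$ gives its unique fixed point, the center of dilation, $x^* = b/(1 - r)$; for $r = 1$ it is a translation, a congruence with no fixed point when $b \neq 0$.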
See also
Homothety
Dilation (operator theory) |
https://en.wikipedia.org/wiki/Thapsigargin | Thapsigargin is a non-competitive inhibitor of the sarco/endoplasmic reticulum Ca2+ ATPase (SERCA). Structurally, thapsigargin is classified as a guaianolide, and is extracted from a plant, Thapsia garganica. It is a tumor promoter in mammalian cells.
Thapsigargin raises cytosolic (intracellular) calcium concentration by blocking the ability of the cell to pump calcium into the sarcoplasmic and endoplasmic reticula. Store-depletion can secondarily activate plasma membrane calcium channels, allowing an influx of calcium into the cytosol. Depletion of ER calcium stores leads to ER stress and activation of the unfolded protein response. Non-resolved ER stress can cumulatively lead to cell death. Prolonged store depletion can protect against ferroptosis via remodeling of ER-synthesized phospholipids.
Thapsigargin treatment and the resulting ER calcium depletion inhibits autophagy independent of the UPR.
Thapsigargin is useful in experimentation examining the impacts of increasing cytosolic calcium concentrations and ER calcium depletion.
A study from the University of Nottingham showed promising results for its use against Covid-19 and other coronaviruses.
Biosynthesis
The complete biosynthesis of thapsigargin has yet to be elucidated. A proposed biosynthesis starts with farnesyl pyrophosphate. The first step is controlled by the enzyme germacrene B synthase. In the second step, the C(8) position is easily activated for an allylic oxidation due to the position of the double bond. The next step is the addition of the acyloxy moiety by a P450 acetyltransferase, which is a well-known reaction for the synthesis of the diterpene taxol. In the third step, the lactone ring is formed by a cytochrome P450 enzyme using NADP+. With the butyloxy group on the C(8), the formation will only generate the 6,12-lactone ring. The fourth step is an epoxidation that initiates the last step of the base guaianolide formation. In the fifth step, a P450 enzyme closes the 5 + 7 g |
https://en.wikipedia.org/wiki/Sindicato%20Nacional%20de%20Trabajadores%20de%20la%20Industria%20de%20Alimentos | The National Union of Food Industry Workers (, SINALTRAINAL) is a Colombian food industry trade union.
The group has repeatedly tried to form unions in Colombia for workers of Panamco, a Colombian Coca-Cola bottling company, and have documentation of many members or leaders being murdered, kidnapped, and tortured by right-wing paramilitary groups such as the AUC in order to prevent unionisation. They are a central focus of the ongoing Coca-Cola boycott movement prevalent across college campuses worldwide (see criticism of Coca-Cola).
See also
Sinaltrainal v. Coca-Cola
Notes
Amnesty International (AI) report 27 August 2003 - fear for safety of SINALTRAINAL vice-president Juan Carlos Galvis
AI report 23 September 2005 - fear for safety of SINALTRAINAL member José Onofre Esquivel Luna
External links
http://www.sinaltrainal.org/
Trade unions in Colombia
Food processing trade unions |
https://en.wikipedia.org/wiki/Chromatoidal%20bodies | Chromatoidal bodies are aggregations of ribosomes found in cysts of some amoebae including Entamoeba histolytica and Entamoeba coli. They exist in the cytoplasm and are dark staining. In the early cystic stages of E. histolytica, chromatoidal bodies arise from aggregation of ribosomes forming polycrystalline masses. As the cyst matures, the masses fragment into separate particles and the chromatoidal body disappears. It is thought that chromatoidal body formation is a manifestation of parasite-host adaptive conditions. Ribonucleoprotein is synthesized under favorable conditions, crystallized in the resistant cyst stage and dispersed in the newly excysted amoebae when the amoeba is able to establish itself in a new host. |
https://en.wikipedia.org/wiki/Koschevnikov%20gland | The Koschevnikov gland is a gland of the honeybee located near the sting shaft. The gland produces an alarm pheromone that is released when a bee stings. The pheromone contains more than 40 different compounds, including pentyl acetate, butyl acetate, 1-hexanol, n-butanol, 1-octanol, hexyl acetate, octyl acetate, and 2-nonanol. These components have a low molar mass and evaporate quickly. This collection of compounds is the least specific of all pheromones. The alarm pheromone is released when a honey bee stings another animal to attract other bees to attack, as well. The release of the alarm pheromone may entice more bees to sting at the same location. Smoking the bees can reduce the pheromone's efficacy. |
https://en.wikipedia.org/wiki/The%20Dan%20Patrick%20Show | The Dan Patrick Show is a syndicated radio and television sports talk show, hosted by former ESPN personality Dan Patrick. It is currently produced by Patrick and is syndicated to radio stations by Premiere Radio Networks, within and independently of their Fox Sports Radio package. The three-hour program debuted on October 1, 2007. It is broadcast weekdays live beginning at 9:00 a.m. Eastern. The current show is a successor to the original Dan Patrick Show, which aired from 1999 to 2007 on ESPN Radio weekdays at 1:00 p.m. Eastern/10:00 a.m. Pacific.
The show was televised on three networks: on DirecTV's Audience Network (formerly the 101 Network) since August 3, 2009; on three AT&T SportsNet affiliates since October 25, 2010; and on B/R Live as of March 1, 2019. It can also be heard on Sirius XM Radio channel 211, and is distributed as a podcast by PodcastOne.
On January 10, 2020, Patrick announced on his show that the relationship with AT&T Sports for the live video broadcast would end in its current form, shortly after Super Bowl LIV. AT&T's Audience Network, which had simulcast the program since 2009, was ceasing operations, and the show would also end streaming via B/R Live, following a short run that began in 2019. The final show under AT&T aired on February 28. On March 2, the live show began airing on The Dan Patrick Show YouTube channel with the radio show still being nationally syndicated via multiple platforms.
On August 10, 2020, it was announced that the show would move to Peacock on August 24, 2020. Highlights of the show continue to appear on the YouTube channel.
On July 19, 2023, Patrick announced that the show's run will end on December 24, 2027.
Guests
The show mainly features guests involved with American football and sometimes other sports, whether current or former athletes, coaches, commissioners or agents. Less often, guests who are not affiliated with sports will come on the show, although it is common for Patrick to ask at least one spor |
https://en.wikipedia.org/wiki/Ion%20beam | An ion beam is a type of charged particle beam consisting of ions. Ion beams have many uses in electronics manufacturing (principally ion implantation) and other industries. A variety of ion beam sources exists, some derived from the mercury vapor thrusters developed by NASA in the 1960s. The most common ion beams are of singly-charged ions.
Units
Ion current density is typically measured in mA/cm², and ion energy in eV. The use of eV is convenient for converting between voltage and energy, especially when dealing with singly-charged ion beams, as well as for converting between energy and temperature (1 eV ≈ 11,600 K).
Broad-beam ion sources
Most commercial applications use two popular types of ion source, gridded and gridless, which differ in current and power characteristics and the ability to control ion trajectories. In both cases electrons are needed to generate an ion beam. The most common electron emitters are hot filament and hollow cathode.
Gridded ion source
In a gridded ion source, a DC or RF discharge is used to generate ions, which are then accelerated and collimated using grids and apertures. Here, the DC discharge current or the RF discharge power is used to control the beam current.
The ion current density that can be accelerated using a gridded ion source is limited by the space charge effect, which is described by Child's law:
$j = \frac{4\varepsilon_0}{9} \sqrt{\frac{2q}{m}}\, \frac{V^{3/2}}{d^2}$
where $V$ is the voltage between the grids, $d$ is the distance between the grids, $q$ is the ion charge, and $m$ is the ion mass.
The grids are placed as closely as possible to increase the current density. The ions used have a significant impact on the maximum ion beam current, since $j \propto 1/\sqrt{m}$. Everything else being equal, the maximum ion beam current with krypton is only 69% the maximum ion current of an argon beam, and with xenon the ratio drops to 55%.
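The quoted ratios follow directly from the $1/\sqrt{m}$ scaling; a short Python check (illustrative, using standard atomic masses in unified atomic mass units):

    from math import sqrt

    masses = {"Ar": 39.95, "Kr": 83.80, "Xe": 131.29}  # atomic mass units

    # At fixed voltage and grid spacing, maximum current scales as 1/sqrt(m).
    for ion in ("Kr", "Xe"):
        ratio = sqrt(masses["Ar"] / masses[ion])
        print(ion, round(100 * ratio), "%")  # Kr -> 69 %, Xe -> 55 %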
Gridless ion sources
In a gridless ion source, ions are generated by a flow of electrons (no grids). The most common gridless ion source is the end-Hall ion source. Here, the discharge current |
https://en.wikipedia.org/wiki/Ion%20beam%20deposition | Ion beam deposition (IBD) is a process of applying materials to a target through the application of an ion beam.
An ion beam deposition apparatus typically consists of an ion source, ion optics, and the deposition target. Optionally a mass analyzer can be incorporated.
In the ion source, source materials in the form of a gas, an evaporated solid, or a solution (liquid) are ionized. For atomic IBD, electron ionization, field ionization (Penning ion source) or cathodic arc sources are employed. Cathodic arc sources are used particularly for carbon ion deposition. Molecular ion beam deposition employs electrospray ionization or MALDI sources.
The ions are then accelerated, focused or deflected using high voltages or magnetic fields. Optional deceleration at the substrate can be employed to define the deposition energy. This energy usually ranges from a few eV up to a few keV. At low energy molecular ion beams are deposited intact (ion soft landing), while at a high deposition energy molecular ions fragment and atomic ions can penetrate further into the material, a process known as ion implantation.
Ion optics (such as radio frequency quadrupoles) can be mass selective. In IBD they are used to select a single ion species, or a range of species, for deposition in order to avoid contamination. For organic materials in particular, this process is often monitored by a mass spectrometer.
The ion beam current, which is a quantitative measure of the deposited amount of material, can be monitored during the deposition process. Switching of the selected mass range can be used to define a stoichiometry.
See also
Cathodic arc deposition
Sputter deposition
Ion beam assisted deposition
Ion beam induced deposition
Electrospray ionization
MALDI
Thin film deposition |
https://en.wikipedia.org/wiki/Kid%20Spark%20Education | Kid Spark Education (previously known as Rokenbok Education and originally as The Rokenbok Toy Company) is a nonprofit organization that develops and produces affordable Mobile STEM Labs and curricula for schools and youth service organizations. The Rokenbok Toy Company was founded in 1995 by Paul Eichen in the United States to create an heirloom-quality toy system. The first Rokenbok toys debuted at the 1997 American International Toy Fair in New York City. In 2010 the company made a substantial push researching the effect of media, like Rokenbok, on developing minds. In 2015 the company transitioned into a 501(c)(3) and completed the development of its first four classroom-specific products, called Mobile STEM Labs. Since then, Kid Spark Education has placed Mobile STEM Labs in over 22 states across the country.
Kid Spark Education has focused on developing applied technology and engineering learning experiences for K-8 students. They also provide free or subsidized Kid Spark Education programs to schools and youth organizations serving underserved children. Much of Kid Spark's work is focused on designing professional development tools that allow teachers and youth services providers to become confident STEM mentors.
History
The Rokenbok Toy Company was founded in 1995 by Paul Eichen in the United States. In 2008 the company transitioned its sales to be nearly entirely online, due to the closure of many toy stores during the Great Recession. Videos were created for YouTube to act as marketing for the company, not only to demonstrate the products but also to show how they could be combined to create larger builds. In 2015 the company transitioned to a 501(c)(3) not-for-profit. Around 2017 all chutes and vehicles (with the exception of the Maker ROK-Bot) were discontinued.
Rokenbok Overview
Rokenbok was an educational toy that combined a modular construction system with interactive infrared controlled vehicles. The system was expandable and could be added to and modifie |
https://en.wikipedia.org/wiki/Mystery%20meat | Mystery meat is a disparaging term for meat products that have an unidentifiable source, typically ground or otherwise ultra-processed foods such as burger patties, chicken nuggets, Salisbury steaks, sausages and hot dogs. Most often the term is used in reference to food served in institutional cafeterias, such as prison food or a North American school lunch.
The term is also sometimes applied to meat products where the species from which the meat has come from is known, but the cuts of meat used are unknown. This is often the case where the cuts of meat used include offal and mechanically separated meat, or when non-meat substitutes such as textured vegetable protein are used to stretch the meat, where explicitly stating the type of meat used might diminish the perceived palatability of the product to some purchasers.
Common products
The most common mystery meat products sold in the United States include Spam and some sausages. Whether bologna/baloney counts as a "mystery meat" product is also disputed.
Use in marketing
In 2016, Nissin, a Japanese food company that produces Cup Noodles, began self-deprecatingly referring to one of its ingredients as Nazoniku (literally "mystery meat") as part of its official marketing campaign. Nazoniku, formally known as Daisuminchi (literally "minced meat dice"), is made from pork, soybeans and other ingredients.
See also
Pink slime
Mystery meat navigation
Chicken McNuggets |
https://en.wikipedia.org/wiki/Sharp%20NEC%20Display%20Solutions | Sharp NEC Display Solutions (Sharp/NEC; formerly NEC Display Solutions or NDS and NEC-Mitsubishi Electric Visual Systems or NEC-Mitsubishi or NM Visual) is a manufacturer of computer monitors and large-screen public-information displays, and has sold and marketed products under the NEC brand globally for more than twenty years. The company sells display products to the consumer, business, professional (e.g. financial, graphic design, CAD/CAM), digital signage and medical markets.
The company again became a joint venture of Sharp and NEC Corporation when NEC sold 66% to Sharp on March 25, 2020. Prior to that date, it was a wholly owned subsidiary of Japan-based NEC Corporation since March 31, 2005. Originally, the company was known as NEC-Mitsubishi, a 50/50 joint venture between NEC Corporation and Mitsubishi Electric that began in 2000, and sold display products under both the NEC and Mitsubishi brands. The company is no longer affiliated with Mitsubishi.
Brands
NEC MultiSync - line of LCD and CRT monitors and large format public displays designed for business applications, lifestyle and gaming.
NEC AccuSync - line of LCD and CRT monitors designed for home and office applications.
NEC SpectraView - line of LCD monitors designed for color sensitive graphics applications.
NEC SpectraView Reference - a line of LCD monitors designed for color critical professional applications
NEC MD Series - line of LCD monitors designed for medical diagnostic imaging applications.
NEC MULTEOS - line of LCD monitors designed for public demonstrations.
See also
Cromaclear
Diamondtron |
https://en.wikipedia.org/wiki/List%20of%20Usenet%20newsreaders | Usenet is a worldwide, distributed discussion system that uses the Network News Transfer Protocol (NNTP). Programs called newsreaders are used to read and post messages (called articles or posts, and collectively termed news) to one or more newsgroups. Users must have access to a news server to use a newsreader. This is a list of such newsreaders.
Types of clients
Text newsreader – designed primarily for reading/posting text posts; unable to download binary attachments
Traditional newsreader – a newsreader with text support that can also handle binary attachments, though less efficiently than more specialized clients
Binary grabber/plucker – designed specifically for easy and efficient downloading of multi-part binary post attachments; limited or nonexistent reading/posting ability. These generally offer multi-server and multi-connection support. Most now support NZBs, and several either support or plan to support automatic Par2 processing. Some additionally support video and audio streaming.
NZB downloader – binary grabber client without header support – cannot browse groups or read/post text messages; can only load 3rd-party NZBs to download binary post attachments. Some incorporate an interface for accessing selected NZB search websites.
Binary posting client – designed specifically and exclusively for posting multi-part binary files
Combination client – Jack-of-all-trades supporting text reading/posting, as well as multi-segment binary downloading and automatic Par2 processing
Web-based client – client designed for access through a web browser; does not require any additional software to access Usenet.
Active
Commercial software
BinTube
Forté Agent
NewsBin
NewsLeecher
Novell GroupWise
Postbox
Turnpike
Usenet Explorer
Freeware
GrabIt
Opera Mail
Xnews – MS Windows
Free/Open-source software
Claws Mail is a GTK+-based email and news client for Linux, BSD, Solaris, and Windows.
GNOME Evolution
Gnus is an email and news client, and feed |
https://en.wikipedia.org/wiki/The%20Aleph%20%28short%20story%29 | "The Aleph" (original Spanish title: "El Aleph") is a short story by the Argentine writer and poet Jorge Luis Borges. First published in September 1945, it was reprinted in the short story collection, The Aleph and Other Stories, in 1949, and revised by the author in 1974.
Plot summary
In Borges' story, the Aleph is a point in space that contains all other points. Anyone who gazes into it can see everything in the universe from every angle simultaneously, without distortion, overlapping, or confusion. The story traces the theme of infinity found in several of Borges' other works, such as "The Book of Sand". Borges has stated that the inspiration for this story came from H.G. Wells's short story "The Door in the Wall".
As in many of Borges' short stories, the protagonist is a fictionalized version of the author. At the beginning of the story, he is mourning the recent death of Beatriz Viterbo, a woman he loved, and he resolves to stop by the house of her family to pay his respects. Over time, he comes to know her first cousin, Carlos Argentino Daneri, a mediocre poet with a vastly exaggerated view of his own talent who has made it his lifelong quest to write an epic poem that describes every single location on the planet in excruciatingly fine detail.
Later in the story, a business attempts to tear down Daneri's house in the course of its expansion. Daneri becomes enraged, explaining to the narrator that he must keep the house in order to finish his poem, because the cellar contains an Aleph which he is using to write the poem. Though by now he believes Daneri to be insane, the narrator proposes to come to the house and see the Aleph for himself.
Left alone in the darkness of the cellar, the narrator begins to fear that Daneri is conspiring to kill him, and then he sees the Aleph for himself:
Though staggered by the experience of seeing the Aleph, the narrator pretends to have seen nothing in order to get revenge on Daneri, whom he dislikes, by giving Daneri |
https://en.wikipedia.org/wiki/Eugene%20Aserinsky | Eugene Aserinsky (May 6, 1921 – July 22, 1998), a pioneer in sleep research, was a graduate student at the University of Chicago in 1953 when he discovered REM sleep. He was the son of a dentist of Russian–Jewish descent.
He made the discovery after hours spent studying the eyelids of sleeping subjects. While the phenomenon initially held more interest for William Charles Dement, a fellow PhD student, both Aserinsky and their PhD adviser, Nathaniel Kleitman, went on to demonstrate that this "rapid-eye movement" was correlated with dreaming and a general increase in brain activity. Aserinsky and Kleitman pioneered procedures that have since been used with thousands of volunteers using the electroencephalograph. Because of these discoveries, Aserinsky and Kleitman are generally considered the founders of modern sleep research.
Eugene Aserinsky died on July 22, 1998, when his car hit a tree north of San Diego. An autopsy was inconclusive about the cause of the accident, but raised the possibility that it had resulted from him having fallen asleep at the wheel. He was 77 and lived in Escondido, California. |
https://en.wikipedia.org/wiki/Nuclear%20weapons%20in%20popular%20culture | Since their public debut in August 1945, nuclear weapons and their potential effects have been a recurring motif in popular culture, to the extent that the decades of the Cold War are often referred to as the "atomic age".
Images of nuclear weapons
The atomic bombings of Hiroshima and Nagasaki ushered in the "atomic age", and the bleak pictures of the bombed-out cities released shortly after the end of World War II became symbols of the power and destruction of the new weapons (it is worth noting that the first pictures released were only from distances, and did not contain any human bodies—such pictures would only be released in later years).
The first pictures released of a nuclear explosion—the blast from the Trinity test—focused on the fireball itself; later pictures would focus primarily on the mushroom cloud that followed. After the United States began a regular program of nuclear testing in the late 1940s, continuing through the 1950s (and matched by the Soviet Union), the mushroom cloud has served as a symbol of the weapons themselves.
Pictures of nuclear weapons themselves (the actual casings) were not made public until 1960, and even those were only mock-ups of the "Fat Man" and "Little Boy" weapons dropped on Japan—not the more powerful weapons developed more recently. Diagrams of the general principles of operation of thermonuclear weapons have been available in very general terms since at least 1969 in at least two encyclopedia articles, and open literature research into inertial confinement fusion has been at least richly suggestive of how the "secondary" and "interstage" components of thermonuclear weapons work.
In general, however, the design of nuclear weapons has been the most closely guarded secret until long after the secrets had been independently developed—or stolen—by all the major powers and a number of lesser ones. It is generally possible to trace US knowledge of foreign progress in nuclear weapons technology by reading the US Department of |
https://en.wikipedia.org/wiki/THEOS | THEOS, which translates from Greek as "God", is an operating system which started out as OASIS, a microcomputer operating system for small computers that use the Z80 processor. When the operating system was launched for the IBM Personal Computer/AT in 1982, the decision was taken to change the name from OASIS to THEOS, short for THE Operating System.
History
OASIS
The OASIS operating system was originally developed and distributed in 1977 by Phase One Systems of Oakland, California (President Howard Sidorsky). OASIS was developed for the Z80 processor and was the first multi-user operating system for 8-bit microprocessor based computers (Z-80 from Zilog). "OASIS" was a backronym for "Online Application System Interactive Software".
OASIS consisted of a multi-user operating system, a powerful Business BASIC interpreter, a C compiler and a powerful text editor. Timothy Williams developed OASIS while employed at Phase One. The market was asking for 16-bit systems, but no real 16-bit multi-user OS existed. Month after month Phase One announced OASIS-16, but it did not appear. One day Timothy Williams claimed that he owned OASIS and started a court case against Phase One, claiming several million U.S. dollars. Sidorsky had no choice but to file for Chapter 11 bankruptcy. The court case took two years, and the final ruling was that Timothy Williams was allowed to develop the 16-bit version of OASIS but was not allowed to use the OASIS name anymore.
David Shirley presented an alternative history at the Computer Information Centre, an OASIS distributor for the UK in the early 1980s. He said Timothy Williams developed the OASIS operating system and contracted with Phase One Systems to market and sell the product. Development of the 16-bit product was underway, but the product was prematurely announced by POS. This led to pressure to release OASIS early, when it was still not properly debugged or optimised. (OASIS 8-bit was quite well optimised by that point, with many parts |
https://en.wikipedia.org/wiki/Wess%E2%80%93Zumino%E2%80%93Witten%20model | In theoretical physics and mathematics, a Wess–Zumino–Witten (WZW) model, also called a Wess–Zumino–Novikov–Witten model, is a type of two-dimensional conformal field theory named after Julius Wess, Bruno Zumino, Sergei Novikov and Edward Witten. A WZW model is associated to a Lie group (or supergroup), and its symmetry algebra is the affine Lie algebra built from the corresponding Lie algebra (or Lie superalgebra). By extension, the name WZW model is sometimes used for any conformal field theory whose symmetry algebra is an affine Lie algebra.
Action
Definition
For a Riemann surface $\Sigma$, a Lie group $G$, and a (generally complex) number $k$, let us define the $G$-WZW model on $\Sigma$ at the level $k$. The model is a nonlinear sigma model whose action is a functional of a field $\gamma:\Sigma\to G$:
$$S_k(\gamma) = -\frac{k}{8\pi}\int_\Sigma d^2x\, \mathcal{K}\left(\gamma^{-1}\partial^\mu\gamma,\, \gamma^{-1}\partial_\mu\gamma\right) + 2\pi k\, S^{\mathrm{WZ}}(\gamma)$$
Here, $\Sigma$ is equipped with a flat Euclidean metric, $\partial_\mu$ is the partial derivative, and $\mathcal{K}$ is the Killing form on the Lie algebra $\mathfrak{g}$ of $G$. The Wess–Zumino term of the action is
$$S^{\mathrm{WZ}}(\gamma) = -\frac{1}{48\pi^2}\int_{B^3} d^3y\, \epsilon^{ijk}\, \mathcal{K}\left(\gamma^{-1}\partial_i\gamma,\, \left[\gamma^{-1}\partial_j\gamma,\, \gamma^{-1}\partial_k\gamma\right]\right)$$
Here $\epsilon^{ijk}$ is the completely antisymmetric tensor, and $[\cdot,\cdot]$ is the Lie bracket.
The Wess–Zumino term is an integral over a three-dimensional manifold $B^3$ whose boundary is $\partial B^3 = \Sigma$.
Topological properties of the Wess–Zumino term
For the Wess–Zumino term to make sense, we need the field $\gamma$ to have an extension to $B^3$. This requires the homotopy group $\pi_2(G)$ to be trivial, which is the case in particular for any compact Lie group $G$.
The extension of a given $\gamma:\Sigma\to G$ to $B^3$ is in general not unique.
For the WZW model to be well-defined, $e^{iS_k(\gamma)}$ should not depend on the choice of the extension.
The Wess–Zumino term is invariant under small deformations of $\gamma$, and only depends on its homotopy class.
Possible homotopy classes are controlled by the homotopy group $\pi_3(G)$.
For any compact, connected simple Lie group $G$, we have $\pi_3(G)=\mathbb{Z}$, and different extensions of $\gamma$ lead to values of $S^{\mathrm{WZ}}(\gamma)$ that differ by integers. Therefore, they lead to the same value of $e^{iS_k(\gamma)}$ provided the level obeys $k\in\mathbb{Z}$.
Integer values of the level also play an important role in the representation theory of the model's symmetry algebra, which is an affine |
https://en.wikipedia.org/wiki/Solmization | Solmization is a system of attributing a distinct syllable to each note of a musical scale. Various forms of solmization are in use and have been used throughout the world, but solfège is the most common convention in countries of Western culture.
Overview
The seven syllables normally used for this practice in English-speaking countries are: do, re, mi, fa, sol, la, and ti (with sharpened notes of di, ri, fi, si, li and flattened notes of te, le, se, me, ra).
The system for other Western countries is similar, though si is often used as the final syllable rather than ti.
Guido of Arezzo is thought likely to have originated the modern Western system of solmization by introducing the ut–re–mi–fa–so–la syllables, which derived from the initial syllables of each of the first six half-lines of the first stanza of the hymn Ut queant laxis. Giovanni Battista Doni is known for having changed the name of note "Ut" (C), renaming it "Do" (in the "Do Re Mi ..." sequence known as solfège).
An alternative explanation, first proposed by Franciszek Meninski in Thesaurus Linguarum Orientalium (1680) and later by J.-B. Laborde in Essai sur la Musique Ancienne et Moderne (1780), is that the syllables were derived from the Arabic solmization system درر مفصّلات Durar Mufaṣṣalāt ("Separated Pearls") (dāl, rā', mīm, fā', ṣād, lām, tā) during the Middle Ages, but there is not any documentary evidence for it.
Byzantine music uses syllables derived from the Greek alphabet to name notes: starting with A, the notes are pa (alpha), vu (beta, pronounced v in Modern Greek), ga (gamma), di (delta), ke (epsilon), zo (zeta), ni (eta).
In Scotland, the system known as Canntaireachd ("chanting") was used as a means of communicating bagpipe music verbally.
The Svara solmization of India has origins in Vedic texts like the Upanishads, which discuss a musical system of seven notes, realized ultimately in what is known as sargam. In Indian classical music, the notes in order are: sa, re, ga, ma, p |
https://en.wikipedia.org/wiki/Moisture%20sensitivity%20level | Moisture sensitivity level (MSL) is a rating that shows a device's susceptibility to damage due to absorbed moisture when subjected to reflow soldering as defined in J-STD-020.
It relates to the packaging and handling precautions for some semiconductors. The MSL is an electronic standard for the time period in which a moisture sensitive device can be exposed to ambient room conditions (30 °C/85%RH at Level 1; 30 °C/60%RH at all other levels).
Increasingly, semiconductors have been manufactured in smaller sizes. Components such as thin fine-pitch devices and ball grid arrays could be damaged during SMT reflow when moisture trapped inside the component expands.
The expansion of trapped moisture can result in internal separation (delamination) of the plastic from the die or lead-frame, wire bond damage, die damage, and internal cracks. Most of this damage is not visible on the component surface. In extreme cases, cracks will extend to the component surface. In the most severe cases, the component will bulge and pop. This is known as the "popcorn" effect. This occurs when part temperature rises rapidly to a high maximum during the soldering (assembly) process. This does not occur when part temperature rises slowly and to a low maximum during a baking (preheating) process.
Moisture sensitive devices are packaged in a moisture barrier antistatic bag with a desiccant and a moisture indicator card which is sealed.
Moisture sensitivity levels are specified in technical standard IPC/JEDEC Moisture/reflow Sensitivity Classification for Nonhermetic Surface-Mount Devices. The times indicate how long components can be outside of dry storage before they have to be baked to remove any absorbed moisture.
MSL 6 – Mandatory bake before use
MSL 5A – 24 hours
MSL 5 – 48 hours
MSL 4 – 72 hours
MSL 3 – 168 hours
MSL 2A – 4 weeks
MSL 2 – 1 year
MSL 1 – Unlimited floor life
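The floor-life table above maps directly to a small lookup. A minimal Python sketch (the dictionary and helper names here are ours, for illustration only, assuming cumulative exposure is tracked in hours):

```python
# Floor-life limits from the table above, in hours.
# None = unlimited floor life; 0 = mandatory bake before use.
FLOOR_LIFE_HOURS = {
    "1": None,           # unlimited
    "2": 365 * 24,       # 1 year
    "2A": 4 * 7 * 24,    # 4 weeks
    "3": 168,
    "4": 72,
    "5": 48,
    "5A": 24,
    "6": 0,              # mandatory bake before use
}

def must_bake(msl_level: str, hours_exposed: float) -> bool:
    """Return True if a part's cumulative exposure exceeds its floor life."""
    limit = FLOOR_LIFE_HOURS[msl_level]
    if limit is None:
        return False
    return hours_exposed > limit
```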
Practical
MSL-specified parts must be baked before assembly if their exposure has exceeded the r |
https://en.wikipedia.org/wiki/Addition%20principle | In combinatorics, the addition principle or rule of sum is a basic counting principle. Stated simply, it is the intuitive idea that if we have $A$ number of ways of doing something and $B$ number of ways of doing another thing and we can not do both at the same time, then there are $A + B$ ways to choose one of the actions. In mathematical terms, the addition principle states that, for disjoint sets $A$ and $B$, we have $|A \cup B| = |A| + |B|$.
The rule of sum is a fact about set theory.
The addition principle can be extended to several sets. If $A_1, A_2, \ldots, A_n$ are pairwise disjoint sets, then we have:
$$\left|\bigcup_{i=1}^{n} A_i\right| = \sum_{i=1}^{n} |A_i|$$
This statement can be proven from the addition principle by induction on $n$.
Simple example
A person has decided to shop at one store today, either in the north part of town or the south part of town. If they visit the north part of town, they will shop at either a mall, a furniture store, or a jewelry store (3 ways). If they visit the south part of town then they will shop at either a clothing store or a shoe store (2 ways).
Thus there are $3 + 2 = 5$ possible shops the person could end up shopping at today.
Inclusion–exclusion principle
The inclusion–exclusion principle (also known as the sieve principle) can be thought of as a generalization of the rule of sum in that it too enumerates the number of elements in the union of some sets (but does not require the sets to be disjoint). It states that if A1, ..., An are finite sets, then
$$\left|\bigcup_{i=1}^{n} A_i\right| = \sum_{i=1}^{n} |A_i| - \sum_{1 \le i < j \le n} |A_i \cap A_j| + \sum_{1 \le i < j < k \le n} |A_i \cap A_j \cap A_k| - \cdots + (-1)^{n-1} \left|A_1 \cap \cdots \cap A_n\right|$$
Subtraction principle
Similarly, for a given finite set $S$, and given another set $A$, if $A \subseteq S$, then $|S \setminus A| = |S| - |A|$. To prove this, notice that $|S| = |S \setminus A| + |A|$ by the addition principle.
Applications
The addition principle can be used to prove Pascal's rule combinatorially. To calculate $\binom{n+1}{k}$, one can view it as the number of ways to choose $k$ people from a room containing $n$ children and 1 teacher. Then there are $\binom{n}{k}$ ways to choose people without choosing the teacher, and $\binom{n}{k-1}$ ways to choose people that include the teacher. Thus $\binom{n+1}{k} = \binom{n}{k} + \binom{n}{k-1}$.
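The identity is easy to check numerically; a minimal Python verification using the standard library's binomial function:

```python
from math import comb

# Pascal's rule: choosing k people from n children plus 1 teacher splits
# into two disjoint cases (teacher excluded / teacher included), so the
# addition principle gives C(n+1, k) = C(n, k) + C(n, k-1).
for n in range(1, 10):
    for k in range(1, n + 1):
        assert comb(n + 1, k) == comb(n, k) + comb(n, k - 1)
```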
The addition principle can also be used to prove the multiplication principle. |
https://en.wikipedia.org/wiki/Rule%20of%20product | In combinatorics, the rule of product or multiplication principle is a basic counting principle (a.k.a. the fundamental principle of counting). Stated simply, it is the intuitive idea that if there are a ways of doing something and b ways of doing another thing, then there are a · b ways of performing both actions.
Examples
For example, if there are 3 ways to choose a member of the set {A, B, C} and 2 ways to choose a member of the set {X, Y}, then there are 3 × 2 = 6 ways to choose one member of each; in this example, the rule says: multiply 3 by 2, getting 6.
The sets {A, B, C} and {X, Y} in this example are disjoint sets, but that is not necessary. The number of ways to choose a member of {A, B, C}, and then to do so again, in effect choosing an ordered pair each of whose components are in {A, B, C}, is 3 × 3 = 9.
As another example, when you decide to order pizza, you must first choose the type of crust: thin or deep dish (2 choices). Next, you choose one topping: cheese, pepperoni, or sausage (3 choices).
Using the rule of product, you know that there are 2 × 3 = 6 possible combinations of ordering a pizza.
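One concrete way to see this is to enumerate the Cartesian product; a minimal Python sketch of the pizza example above:

```python
from itertools import product

crusts = ["thin", "deep dish"]                  # 2 choices
toppings = ["cheese", "pepperoni", "sausage"]   # 3 choices

# The Cartesian product lists every (crust, topping) combination;
# its size is 2 * 3 = 6, exactly as the rule of product predicts.
pizzas = list(product(crusts, toppings))
assert len(pizzas) == len(crusts) * len(toppings) == 6
```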
Applications
In set theory, this multiplication principle is often taken to be the definition of the product of cardinal numbers. We have
$$|S_1| \cdot |S_2| \cdots |S_n| = |S_1 \times S_2 \times \cdots \times S_n|$$
where $\times$ is the Cartesian product operator. These sets need not be finite, nor is it necessary to have only finitely many factors in the product.
An extension of the rule of product considers there are $n$ different types of objects, say sweets, to be associated with $k$ objects, say people. How many different ways can the people receive their sweets?
Each person may receive any of the $n$ sweets available, and there are $k$ people, so there are $n^k$ ways to do this.
Related concepts
The rule of sum is another basic counting principle. Stated simply, it is the idea that if we have a ways of doing something and b ways of doing another thing and we can not do both at the same time, then there are a + b ways to choose one of the actions.
See also
Combinatorial principles |
https://en.wikipedia.org/wiki/Essence%20of%20Decision | Essence of Decision: Explaining the Cuban Missile Crisis is a book by political scientist Graham T. Allison analyzing the 1962 Cuban Missile Crisis. Allison used the crisis as a case study for future studies into governmental decision-making. The book became the founding study of the John F. Kennedy School of Government, and in doing so revolutionized the field of international relations.
Allison originally published the book in 1971. In 1999, because of new materials available (including tape recordings of the U.S. government's proceedings), he rewrote the book with Philip Zelikow.
The title is based on a speech by John F. Kennedy, in which he said, "The essence of ultimate decision remains impenetrable to the observer - often, indeed, to the decider himself."
Thesis
When he first wrote the book, Allison contended that political science and the study of international relations were saturated with rational expectations theories inherited from the field of economics. Under such a view, the actions of states are analyzed by assuming that nations consider all options and act rationally to maximize their utility.
Allison attributes such viewpoints to the dominance of economists such as Milton Friedman, statesmen such as Robert McNamara and Henry Kissinger, disciplines such as game theory, and organizations such as the RAND Corporation. However, as he puts it:
It must be noted, however, that an imaginative analyst can construct an account of value-maximizing choice for any action or set of actions performed by a government.
Or, to put it bluntly, this approach (which Allison terms the "Rational Actor Model") violates the principle of falsifiability. Also, Allison notes that "rational" analysts must ignore a lot of facts in order to make their analysis fit their models.
In response, Allison constructed three different ways (or "lenses") through which analysts can examine events: the "Rational Actor" model, the "Organizational Behavior" model, and the "Governmental Po |
https://en.wikipedia.org/wiki/Cahiers%20de%20Topologie%20et%20G%C3%A9om%C3%A9trie%20Diff%C3%A9rentielle%20Cat%C3%A9goriques | The Cahiers de Topologie et Géométrie Différentielle Catégoriques (French: Notebooks of categorical topology and categorical differential geometry) is a French mathematical scientific journal established by Charles Ehresmann in 1957. It concentrates on category theory "and its applications, [e]specially in topology and differential geometry". Its older papers (two years or more after publication) are freely available on the internet through the French NUMDAM service.
It was originally published by the Institut Henri Poincaré under the name Cahiers de Topologie; after the first volume, Ehresmann changed the publisher to the Institut Henri Poincaré and later Dunod/Bordas. In the eighth volume he changed the name to Cahiers de Topologie et Géométrie Différentielle. After Ehresmann's death in 1979 the editorship passed to his wife Andrée Ehresmann; in 1984, at the suggestion of René Guitart, the name was changed again, to add "Catégoriques". |
https://en.wikipedia.org/wiki/Apple%20butter | Apple butter (Dutch: appelstroop) is a highly concentrated form of apple sauce produced by long, slow cooking of apples with apple juice or water to a point where the sugar in the apples caramelizes, turning the apple butter a deep brown. The concentration of sugar gives apple butter a much longer shelf life as a preserve than apple sauce.
Background
The roots of apple butter lie in Limburg (Belgium and the Netherlands) and Rhineland (Germany), conceived during the Middle Ages, when the first monasteries (with large orchards) appeared. The production of the butter was a perfect way to conserve part of the fruit production of the monasteries in that region, at a time when almost every village had its own apple-butter producers. The production of apple butter was also a popular way of using apples in colonial America, well into the 19th century.
The product contains no actual dairy butter; the term butter refers only to the butter-like thick, soft consistency, and apple butter's use as a spread for breads. Sometimes seasoned with cinnamon, clove, and other spices, apple butter is usually spread on bread, used as a side dish, an ingredient in baked goods, or as a condiment. Apple butter may also be used on sandwiches to add an interesting flavor, but is not as commonly used as in historical times.
Vinegar or lemon juice is sometimes mixed in while cooking to provide a small amount of tartness to the usually sweet apple butter. The Pennsylvania Dutch often include apple butter as part of their traditional 'seven sweets and seven sours' dinner table array.
In areas of the American South, the production of apple butter is a family event, due to the large amount of labor necessary to produce apple butter in large quantities. Traditionally, apple butter was prepared in large copper kettles outside. Large paddles were used to stir the apples, and family members would take turns stirring. In Appalachian cuisine, apple butter was the only type of fruit preserve normally |
https://en.wikipedia.org/wiki/Harris%20tweed | Harris Tweed is a tweed cloth that is handwoven by islanders at their homes in the Outer Hebrides of Scotland, finished in the Outer Hebrides, and made from pure virgin wool dyed and spun in the Outer Hebrides. This definition, quality standards and protection of the Harris Tweed name are enshrined in the Harris Tweed Act 1993.
Etymology
The original name of tweed fabric was "tweel", the Scots word for twill, as the fabric was woven in a twill weave rather than a plain (or tabby) weave. A number of theories exist as to how and why "tweel" became corrupted into "tweed"; in one, a London merchant in the 1830s, upon receiving a letter from a Hawick firm inquiring after "tweels", misinterpreted the spelling as a trade name taken from the River Tweed, which flows through the Scottish Borders. Subsequently, the goods were advertised as "tweed", the name used ever since.
History
For centuries, the islanders of Lewis and Harris, the Uists, Benbecula and Barra wove cloth known as clò-mòr - literally, "big cloth" in Scottish Gaelic - by hand. Originally woven by crofters, this cloth was woven for personal and practical uses and was ideal protection against the often cold climate of northern Scotland. The cloth was also used for trade or barter, eventually becoming a form of currency amongst islanders; it was not unusual for rents to be paid in blankets or lengths of clò-mòr.
By the end of the 18th century, the spinning of wool yarn from local raw materials had become a staple industry for crofters. Finished handmade cloth was exported to the Scottish mainland and traded, along with other commodities produced by the Islanders, such as goat and deer skins.
As the Industrial Revolution reached Scotland, mainland manufacturers developed mechanised weaving methods, with weavers in the Outer Hebrides retaining their traditional processes. The islanders of Lewis and Harris had long been known for the quality of their handwoven fabrics, but up to the middle of the nineteenth century, t |
https://en.wikipedia.org/wiki/Interagency%20GPS%20Executive%20Board | The Interagency GPS Executive Board (IGEB) was an agency of the United States federal government that sought to integrate the needs and desires of various governmental agencies into formal Global Positioning System planning. GPS was administered by the Department of Defense, but had grown to serve a wide variety of constituents. The majority of GPS uses are now non-military, so this board was fundamental in ensuring that the needs of non-military users were met.
In 2004, the IGEB was superseded by the National Executive Committee for Space-Based Positioning, Navigation and Timing (PNT), established by presidential order.
External links
From the IGEB Era at the PNT
Global Positioning System
Defunct agencies of the United States government |
https://en.wikipedia.org/wiki/Key%20signature%20%28cryptography%29 | In cryptography, a key signature is the result of a third-party applying a cryptographic signature to a representation of a cryptographic key. This is usually done as a form of assurance or verification: If "Alice" has signed "Bob's" key, it can serve as an assurance to another party, say "Eve", that the key actually belongs to Bob, and that Alice has personally checked and attested to this.
The representation of the key that is signed is usually shorter than the key itself, because most public-key signature schemes can only encrypt or sign short lengths of data. Some derivative of the key, such as a public key fingerprint obtained via a hash function, may therefore be signed instead.
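A minimal sketch of this attestation flow, using the third-party Python `cryptography` package; the choice of Ed25519 keys and a SHA-256 fingerprint here is illustrative, not mandated by any particular standard:

```python
import hashlib
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric import ed25519

# Alice's signing key, and Bob's public key that she will attest to.
alice_private = ed25519.Ed25519PrivateKey.generate()
bob_public = ed25519.Ed25519PrivateKey.generate().public_key()

# A fingerprint: a short hash standing in for Bob's full public key.
bob_raw = bob_public.public_bytes(
    encoding=serialization.Encoding.Raw,
    format=serialization.PublicFormat.Raw,
)
fingerprint = hashlib.sha256(bob_raw).digest()

# The key signature: Alice signs the fingerprint of Bob's key.
key_signature = alice_private.sign(fingerprint)

# Eve, who trusts Alice, verifies the attestation with Alice's public key;
# verify() raises an exception if the signature does not match.
alice_private.public_key().verify(key_signature, fingerprint)
```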
See also
Key (cryptography)
Public key certificate
Key management |
https://en.wikipedia.org/wiki/Lie%20theory | In mathematics, the mathematician Sophus Lie initiated lines of study involving integration of differential equations, transformation groups, and contact of spheres that have come to be called Lie theory. For instance, the latter subject is Lie sphere geometry. This article addresses his approach to transformation groups, an area of mathematics that was worked out further by Wilhelm Killing and Élie Cartan.
The foundation of Lie theory is the exponential map relating Lie algebras to Lie groups which is called the Lie group–Lie algebra correspondence. The subject is part of differential geometry since Lie groups are differentiable manifolds. Lie groups evolve out of the identity (1) and the tangent vectors to one-parameter subgroups generate the Lie algebra. The structure of a Lie group is implicit in its algebra, and the structure of the Lie algebra is expressed by root systems and root data.
Lie theory has been particularly useful in mathematical physics since it describes the standard transformation groups: the Galilean group, the Lorentz group, the Poincaré group and the conformal group of spacetime.
Elementary Lie theory
The one-parameter groups are the first instance of Lie theory. The compact case arises through Euler's formula in the complex plane. Other one-parameter groups occur in the split-complex number plane as the unit hyperbola
$$\{\exp(jt) = \cosh t + j \sinh t : t \in \mathbb{R}\}$$
and in the dual number plane as the line
$$\{\exp(\varepsilon t) = 1 + \varepsilon t : t \in \mathbb{R}\}.$$
In these cases the Lie algebra parameters have names: angle, hyperbolic angle, and slope. These species of angle are useful for providing polar decompositions which describe sub-algebras of 2 × 2 real matrices.
There is a classical 3-parameter Lie group and algebra pair: the quaternions of unit length which can be identified with the 3-sphere. Its Lie algebra is the subspace of quaternion vectors. Since the commutator ij − ji = 2k, the Lie bracket in this algebra is twice the cross product of ordinary vector analysis.
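The bracket relation can be verified directly with a hand-rolled Hamilton product; a minimal Python check (representation and helper name are ours):

```python
def qmul(p, q):
    """Hamilton product of quaternions represented as (w, x, y, z)."""
    w1, x1, y1, z1 = p
    w2, x2, y2, z2 = q
    return (
        w1 * w2 - x1 * x2 - y1 * y2 - z1 * z2,
        w1 * x2 + x1 * w2 + y1 * z2 - z1 * y2,
        w1 * y2 - x1 * z2 + y1 * w2 + z1 * x2,
        w1 * z2 + x1 * y2 - y1 * x2 + z1 * w2,
    )

i, j, k = (0, 1, 0, 0), (0, 0, 1, 0), (0, 0, 0, 1)

# The Lie bracket [i, j] = ij - ji equals 2k: twice the cross product
# of the corresponding 3-vectors, as stated above.
ij, ji = qmul(i, j), qmul(j, i)
bracket = tuple(a - b for a, b in zip(ij, ji))
assert bracket == (0, 0, 0, 2)  # i.e. 2k
```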
Another elementary 3-parameter example is given |
https://en.wikipedia.org/wiki/Cantor%E2%80%93Dedekind%20axiom | In mathematical logic, the Cantor–Dedekind axiom is the thesis that the real numbers are order-isomorphic to the linear continuum of geometry. In other words, the axiom states that there is a one-to-one correspondence between real numbers and points on a line.
This axiom became a theorem proved by Emil Artin in his book Geometric Algebra. More precisely, Euclidean spaces defined over the field of real numbers satisfy the axioms of Euclidean geometry, and, from the axioms of Euclidean geometry, one can construct a field that is isomorphic to the real numbers.
Analytic geometry was developed from the Cartesian coordinate system introduced by René Descartes. It implicitly assumed this axiom by blending the distinct concepts of real numbers and points on a line, sometimes referred to as the real number line. Artin's proof not only makes this blending explicit, but also shows that analytic geometry is strictly equivalent to traditional synthetic geometry, in the sense that exactly the same theorems can be proved in the two frameworks.
Another consequence is that Alfred Tarski's proof of the decidability of first-order theories of the real numbers could be seen as an algorithm to solve any first-order problem in Euclidean geometry. |
https://en.wikipedia.org/wiki/Photosensitivity | Photosensitivity is the amount to which an object reacts upon receiving photons, especially visible light. In medicine, the term is principally used for abnormal reactions of the skin, and two types are distinguished, photoallergy and phototoxicity. The photosensitive ganglion cells in the mammalian eye are a separate class of light-detecting cells from the photoreceptor cells that function in vision.
Skin reactions
Human medicine
Sensitivity of the skin to a light source can take various forms. People with particular skin types are more sensitive to sunburn. Particular medications make the skin more sensitive to sunlight; these include most of the tetracycline antibiotics, the heart drug amiodarone, and sulfonamides.
Some dietary supplements, such as St. John's Wort, include photosensitivity as a possible side effect.
Particular conditions lead to increased light sensitivity. Patients with systemic lupus erythematosus experience skin symptoms after sunlight exposure; some types of porphyria are aggravated by sunlight. A rare hereditary condition xeroderma pigmentosum (a defect in DNA repair) is thought to increase the risk of UV-light-exposure-related cancer by increasing photosensitivity.
Veterinary medicine
Photosensitivity occurs in multiple species including sheep, bovine, and horses. They are classified as primary if an ingested plant contains a photosensitive substance, like hypericin in St John's wort poisoning and ingestion of biserrula (Biserrula pelecinus) in sheep, or buckwheat plants (green or dried) in horses.
In hepatogenous photosensitization, the photosensitizing substance is phylloerythrin, a normal end-product of chlorophyll metabolism. It accumulates in the body because of liver damage, reacts with UV light on the skin, and leads to free radical formation. These free radicals damage the skin, leading to ulceration, necrosis, and sloughing. Non-pigmented skin is most commonly affected.
See also
Digital camera ISO
Bergaptene
Heliotropism
|
https://en.wikipedia.org/wiki/Mollifier | In mathematics, mollifiers (also known as approximations to the identity) are smooth functions with special properties, used for example in distribution theory to create sequences of smooth functions approximating nonsmooth (generalized) functions, via convolution. Intuitively, given a function which is rather irregular, by convolving it with a mollifier the function gets "mollified", that is, its sharp features are smoothed, while still remaining close to the original nonsmooth (generalized) function.
They are also known as Friedrichs mollifiers after Kurt Otto Friedrichs, who introduced them.
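A minimal numerical sketch of the idea (the bump width eps and the target function |x| are arbitrary illustrative choices): convolving a non-smooth function with the standard bump function smooths its corner while staying close to the original.

```python
import numpy as np

def mollifier(x, eps=1.0):
    """Standard bump function supported on [-eps, eps] (unnormalized)."""
    y = np.asarray(x, dtype=float) / eps
    out = np.zeros_like(y)
    inside = np.abs(y) < 1
    out[inside] = np.exp(-1.0 / (1.0 - y[inside] ** 2))
    return out

# Mollify f(x) = |x| by discrete convolution on a uniform grid.
h = 0.01
x = np.arange(-2.0, 2.0, h)
phi = mollifier(x, eps=0.3)
phi /= phi.sum() * h                       # normalize to (numerically) unit mass
smoothed = np.convolve(np.abs(x), phi, mode="same") * h
# `smoothed` approximates |x| away from 0 but has its corner rounded off.
```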
Historical notes
Mollifiers were introduced by Kurt Otto Friedrichs in his paper , which is considered a watershed in the modern theory of partial differential equations. The name of this mathematical object had a curious genesis, and Peter Lax tells the whole story in his commentary on that paper published in Friedrichs' "Selecta". According to him, at that time, the mathematician Donald Alexander Flanders was a colleague of Friedrichs: since he liked to consult colleagues about English usage, he asked Flanders for advice on how to name the smoothing operator he was using. Flanders was a puritan, nicknamed Moll by his friends after Moll Flanders in recognition of his moral qualities: he suggested calling the new mathematical concept a "mollifier" as a pun incorporating both Flanders' nickname and the verb 'to mollify', meaning 'to smooth over' in a figurative sense.
Previously, Sergei Sobolev had used mollifiers in his epoch-making 1938 paper, which contains the proof of the Sobolev embedding theorem; Friedrichs himself acknowledged Sobolev's work on mollifiers, stating: "These mollifiers were introduced by Sobolev and the author...".
It must be pointed out that the term "mollifier" has undergone linguistic drift since the time of these foundational works: Friedrichs defined as "mollifier" the integral operator whose kernel is one of the functions nowadays called moll |
https://en.wikipedia.org/wiki/Complement%20membrane%20attack%20complex | The membrane attack complex (MAC) or terminal complement complex (TCC) is a complex of proteins typically formed on the surface of pathogen cell membranes as a result of the activation of the host's complement system, and as such is an effector of the immune system. Antibody-mediated complement activation leads to MAC deposition on the surface of infected cells. Assembly of the MAC leads to pores that disrupt the cell membrane of target cells, leading to cell lysis and death.
The MAC is composed of the complement components C5b, C6, C7, C8 and several C9 molecules.
A number of proteins participate in the assembly of the MAC. Freshly activated C5b binds to C6 to form a C5b-6 complex, then to C7 forming the C5b-6-7 complex. The C5b-6-7 complex binds to C8, which is composed of three chains (alpha, beta, and gamma), thus forming the C5b-6-7-8 complex. C5b-6-7-8 subsequently binds to C9 and acts as a catalyst in the polymerization of C9.
Structure and function
MAC is composed of a complex of four complement proteins (C5b, C6, C7, and C8) that bind to the outer surface of the plasma membrane, and many copies of a fifth protein (C9) that hook up to one another, forming a ring in the membrane. C6-C9 all contain a common MACPF domain. This region is homologous to cholesterol-dependent cytolysins from Gram-positive bacteria.
The ring structure formed by C9 is a pore in the membrane that allows free diffusion of molecules in and out of the cell. If enough pores form, the cell is no longer able to survive.
If the pre-MAC complexes of C5b-7, C5b-8 or C5b-9 do not insert into a membrane, they can form inactive complexes with Protein S (sC5b-7, sC5b-8 and sC5b-9). These fluid phase complexes do not bind to cell membranes and are ultimately scavenged by clusterin and vitronectin, two regulators of complement.
Initiation: C5-C7
The membrane attack complex is initiated when the complement protein C5 convertase cleaves C5 into C5a and C5b. All three pathways of the compleme |
https://en.wikipedia.org/wiki/Habitat | In ecology, habitat refers to the array of resources, physical and biotic factors that are present in an area, such as to support the survival and reproduction of a particular species. A species habitat can be seen as the physical manifestation of its ecological niche. Thus "habitat" is a species-specific term, fundamentally different from concepts such as environment or vegetation assemblages, for which the term "habitat-type" is more appropriate.
The physical factors may include (for example): soil, moisture, range of temperature, and light intensity. Biotic factors include the availability of food and the presence or absence of predators. Every species has particular habitat requirements: habitat generalist species are able to thrive in a wide array of environmental conditions, while habitat specialist species require a very limited set of factors to survive. The habitat of a species is not necessarily found in a geographical area, it can be the interior of a stem, a rotten log, a rock or a clump of moss; a parasitic organism has as its habitat the body of its host, part of the host's body (such as the digestive tract), or a single cell within the host's body.
Habitat types are environmental categorizations of different environments based on the characteristics of a given geographical area, particularly vegetation and climate. Thus habitat types do not refer to a single species but to multiple species living in the same area. For example, terrestrial habitat types include forest, steppe, grassland, semi-arid or desert. Fresh-water habitat types include marshes, streams, rivers, lakes, and ponds; marine habitat types include salt marshes, the coast, the intertidal zone, estuaries, reefs, bays, the open sea, the sea bed, deep water and submarine vents.
Habitat types may change over time. Causes of change may include a violent event (such as the eruption of a volcano, an earthquake, a tsunami, a wildfire or a change in oceanic currents); or change may occur mo |
https://en.wikipedia.org/wiki/Proof%20procedure | In logic, and in particular proof theory, a proof procedure for a given logic is a systematic method for producing proofs in some proof calculus of (provable) statements.
Types of proof calculi used
There are several types of proof calculi. The most popular are natural deduction, sequent calculi (i.e., Gentzen-type systems), Hilbert systems, and semantic tableaux or trees. A given proof procedure will target a specific proof calculus, but can often be reformulated so as to produce proofs in other proof styles.
Completeness
A proof procedure for a logic is complete if it produces a proof for each provable statement. The theorems of logical systems are typically recursively enumerable, which implies the existence of a complete but usually extremely inefficient proof procedure; however, a proof procedure is only of interest if it is reasonably efficient.
Faced with an unprovable statement, a complete proof procedure may sometimes succeed in detecting and signalling its unprovability. In the general case, where provability is only a semidecidable property, this is not possible, and instead the procedure will diverge (not terminate).
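For propositional logic, where validity is decidable, a complete and terminating proof procedure is easy to sketch. The following brute-force truth-table check (function names are ours, for illustration) decides every statement rather than possibly diverging, in contrast to the general semidecidable case:

```python
from itertools import product

def implies(a: bool, b: bool) -> bool:
    return (not a) or b

def is_valid(formula, variables):
    """Decide propositional validity by exhausting all truth assignments.

    `formula` maps a dict of variable assignments to a bool. This
    procedure always terminates, so it can also signal unprovability
    (by returning False) instead of diverging."""
    return all(
        formula(dict(zip(variables, values)))
        for values in product([False, True], repeat=len(variables))
    )

# Peirce's law ((p -> q) -> p) -> p is classically valid/provable:
peirce = lambda v: implies(implies(implies(v["p"], v["q"]), v["p"]), v["p"])
assert is_valid(peirce, ["p", "q"])
```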
See also
Automated theorem proving
Proof complexity
Proof tableaux
Deductive system
Proof (truth) |
https://en.wikipedia.org/wiki/CPC%20Binary%20Barcode | CPC Binary Barcode is Canada Post's proprietary symbology used in its automated mail sortation operations. This barcode is used on regular-size pieces of mail, especially mail sent using Canada Post's Lettermail service. This barcode is printed on the lower-right-hand corner of each faced envelope, using a unique ultraviolet-fluorescent ink.
Symbology description
The applied barcode uses printed and non-printed bars spaced 3 mm apart, and consists of two fields. The rightmost field, which is 27 bars in width, encodes the destination postal code. The leftmost field is 9 bars in width and is applied right below the printed destination address. It is currently unclear what this field is used for.
In the postal code field, the rightmost bar is always printed, to allow the sortation equipment to properly lock onto the barcode and scan it. The leftmost bar, a parity field, is printed only when necessary to give the postal code field an odd number of printed bars. The remaining 25 bars represent the actual destination postal code. To eliminate any possibility of ambiguity during the scanning process, run-length restrictions are used within the postal code field. No more than five consecutive non-printed bars, or spaces, are permitted, and no more than six consecutive printed bars are allowed.
The actual representation of the postal code is split into four subfields of the barcode, each with their own separate encoding table. The first and last subfields, which share a common encoding table, are always eight bars in width, and encode the first two characters and the last two characters of the postal code respectively. The second subfield, which encodes the third character of the postal code, is always five bars in width, and the third subfield, which encodes the fourth character, is always four bars wide.
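The structural rules above (lock bar, odd parity, run-length limits) can be expressed compactly. A hedged Python sketch, noting that the actual subfield encoding tables are proprietary and not modeled here:

```python
def plausible_postal_field(bars: str) -> bool:
    """Check the structural constraints of the 27-bar postal code field.

    `bars` uses '1' for a printed bar and '0' for a non-printed bar,
    with the leftmost (parity) bar first and the rightmost (lock) bar
    last. Only the rules described above are validated; the encoding
    tables for the four subfields are not public."""
    if len(bars) != 27 or set(bars) - {"0", "1"}:
        return False
    if bars[-1] != "1":                  # rightmost bar is always printed
        return False
    if bars.count("1") % 2 == 0:         # parity bar forces an odd count
        return False
    longest_printed = max((len(r) for r in bars.split("0") if r), default=0)
    longest_space = max((len(r) for r in bars.split("1") if r), default=0)
    return longest_printed <= 6 and longest_space <= 5
```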
Generating barcodes
Disregarding the space, divide the postal code into four subfields (e.g. K1-A-0-B1).
Locate the contents of each subfield in the encoding t |
https://en.wikipedia.org/wiki/Kleptoplasty | Kleptoplasty or kleptoplastidy is a process in symbiotic relationships whereby plastids, notably chloroplasts from algae, are sequestered by the host. The word is derived from Kleptes (κλέπτης) which is Greek for thief. The alga is eaten normally and partially digested, leaving the plastid intact. The plastids are maintained within the host, temporarily continuing photosynthesis and benefiting the host.
Etymology
The word kleptoplasty is derived from Greek kleptes (κλέπτης), thief, and plastós (πλαστός), originally meaning formed or moulded, and used in biology to mean a plastid.
Process
Kleptoplasty is a process in symbiotic relationships whereby plastids, notably chloroplasts from algae, are sequestered by the host. The alga is eaten normally and partially digested, leaving the plastid intact. The plastids are maintained within the host, temporarily continuing photosynthesis and benefiting the host. The term was coined in 1990 to describe chloroplast symbiosis.
Taxonomic range
Kleptoplasty occurs in two major clades of eukaryotes, namely micro-organisms of the SAR Supergroup, and some marine animals.
SAR Supergroup
Foraminifera
Some species of the foraminiferan genera Bulimina, Elphidium, Haynesina, Nonion, Nonionella, Nonionellina, Reophax, and Stainforthia sequester diatom chloroplasts.
Alveolata
Dinoflagellates
The stability of transient plastids varies considerably across plastid-retaining species. In the dinoflagellates Gymnodinium spp. and Pfiesteria piscicida, kleptoplastids are photosynthetically active for only a few days, while kleptoplastids in Dinophysis spp. can be stable for 2 months. In other dinoflagellates, kleptoplasty has been hypothesized to represent either a mechanism permitting functional flexibility, or perhaps an early evolutionary stage in the permanent acquisition of chloroplasts.
Ciliates
Mesodinium rubrum is a ciliate that steals chloroplasts from the cryptomonad Geminigera cryophila. M. rubrum participates in additi |
https://en.wikipedia.org/wiki/Categorification | In mathematics, categorification is the process of replacing set-theoretic theorems with category-theoretic analogues. Categorification, when done successfully, replaces sets with categories, functions with functors, and equations with natural isomorphisms of functors satisfying additional properties. The term was coined by Louis Crane.
The reverse of categorification is the process of decategorification. Decategorification is a systematic process by which isomorphic objects in a category are identified as equal. Whereas decategorification is a straightforward process, categorification is usually much less straightforward. In the representation theory of Lie algebras, modules over specific algebras are the principal objects of study, and there are several frameworks for what a categorification of such a module should be, e.g., so called (weak) abelian categorifications.
Categorification and decategorification are not precise mathematical procedures, but rather a class of possible analogues. They are used much like words such as 'generalization', and not like 'sheafification'.
Examples
One form of categorification takes a structure described in terms of sets, and interprets the sets as isomorphism classes of objects in a category. For example, the set of natural numbers can be seen as the set of cardinalities of finite sets (and any two sets with the same cardinality are isomorphic). In this case, operations on the set of natural numbers, such as addition and multiplication, can be seen as carrying information about coproducts and products of the category of finite sets. Less abstractly, the idea here is that manipulating sets of actual objects, and taking coproducts (combining two sets in a union) or products (building arrays of things to keep track of large numbers of them) came first. Later, the concrete structure of sets was abstracted away – taken "only up to isomorphism", to produce the abstract theory of arithmetic. This is a "decategorific |
https://en.wikipedia.org/wiki/Directive%20on%20the%20legal%20protection%20of%20biotechnological%20inventions | Directive 98/44/EC of the European Parliament and of the Council of 6 July 1998 on the legal protection of biotechnological inventions
is a European Union directive in the field of patent law, made under the internal market
provisions of the Treaty of Rome. It was intended to harmonise the laws of Member States regarding the patentability
of biotechnological inventions, including plant varieties (as legally defined) and human genes.
Content
The Directive is divided into the following five chapters:
Patentability (Chapter I)
Scope of Protection (Chapter II)
Compulsory cross-licensing (Chapter III)
Deposit, access and re-deposit of biological material (Chapter IV)
Final Provisions (entering into force) (Chapter V)
Timeline
The original proposal was adopted by the European Commission in 1988. The procedure for its adoption was slowed down primarily by ethical issues regarding the patentability of living matter. The European Parliament eventually rejected the joint text from the final Conciliation meeting at third reading on 1 March 1995, so the first directive process did not yield a directive.
On 13 December 1995, the Commission adopted a new proposal that was nearly identical to the rejected version; it was amended again, but the Parliament put aside its ethical concerns about the patenting of human genes on 12 July 1998 in its second reading and adopted the Common Position of the Council, so the second legislative process produced the directive. The Parliament's draftsperson for this second procedure was Willi Rothley. The amendment receiving the most yes votes was Amendment 9 from the Greens, which got 221 yes votes against 294, with 17 abstentions, out of 532 members voting; 314 yes votes would have been required to reach the absolute majority needed to adopt it.
On 6 July 1998, a final version was adopted. Its code is 98/44/EC.
The Kingdom of the Netherlands brought Case C-377/98 before the European Court of Justice against the adoption of the directive |
https://en.wikipedia.org/wiki/Audio%20Home%20Recording%20Act | The Audio Home Recording Act of 1992 (AHRA) amended the United States copyright law by adding Chapter 10, "Digital Audio Recording Devices and Media". The act enabled the release of recordable digital formats such as Sony and Philips' Digital Audio Tape without fear of contributory infringement lawsuits.
The RIAA and music publishers, concerned that consumers' ability to make perfect digital copies of music would destroy the market for audio recordings, had threatened to sue companies and had lobbied Congress to pass legislation imposing mandatory copy protection technology and royalties on devices and media.
The AHRA establishes a number of important precedents in US copyright law that defined the debate between device makers and the content industry for the ensuing two decades. These include:
the first government technology mandate in the copyright law, requiring all digital audio recording devices sold, manufactured or imported in the US (excluding professional audio equipment) to include the Serial Copy Management System (SCMS).
the first anti-circumvention provisions in copyright law, later applied on a much broader scale by the Digital Millennium Copyright Act.
the first government-imposed royalties on devices and media, a portion of which is paid to the record industry directly.
The act also includes blanket protection from infringement actions for private, non-commercial analog audio copying, and for digital audio copies made with certain kinds of digital audio recording technology.
History and legislative background
By the late 1980s, several manufacturers were prepared to introduce read/write digital audio formats to the United States. These new formats were a significant improvement over the newly introduced read-only (at the time) digital format of the compact disc, allowing consumers to make perfect, multi-generation copies of digital audio recordings. Most prominent among these formats was Digital Audio Tape (DAT), followed in the early 1990s b |
https://en.wikipedia.org/wiki/Four%20Green%20Fields | Four Green Fields is a 1967 folk song by Irish musician Tommy Makem, described in The New York Times as a "hallowed Irish leave-us-alone-with-our-beauty ballad." Of Makem's many compositions, it has become the most familiar, and is part of the common repertoire of Irish folk musicians.
Content and meaning
The song is about Ireland (personified as an “old woman”) and its four provinces (represented by “green fields”), one of which remains occupied (“taken”) by the British (the “strangers”) despite the best efforts of the Irish people (her “sons”), who died trying to defend them. Its middle stanza is a description of the violence and deprivation experienced by the Irish, including the people in Northern Ireland. At the end of the song, one of her fields still shows the promise of new growth:
"But my sons have sons, as brave as were their fathers;
My fourth green field will bloom once again," said she.
The song is interpreted as an allegorical political statement regarding the constitutional status of Northern Ireland. The four fields are seen as the Provinces of Ireland with Ulster being the "field" that remained part of the United Kingdom after the Irish Free State separated. The old woman is seen as a traditional personification of Ireland herself (see Kathleen Ni Houlihan). The words spoken by the woman in Makem's song are taken directly from "Cathleen ni Houlihan", an early play by W. B. Yeats.
Background
The concept of four green fields representing the four provinces of Ireland had been used before, notably in the 1939 stained glass work My Four Green Fields by Evie Hone.
Makem frequently described the song as having been inspired by a drive through the "no man's land" adjoining Northern Ireland, where he saw an old woman tending livestock. She was oblivious to the political boundaries that loomed so large in the public's eye; the land was older than the argument, and she didn't care what was shown on the map.
Makem commonly sang t |
https://en.wikipedia.org/wiki/Comparison%20of%20instant%20messaging%20protocols | The following is a comparison of instant messaging protocols. It contains basic general information about the protocols.
Table of instant messaging protocols
See also
Comparison of cross-platform instant messaging clients
Comparison of Internet Relay Chat clients
Comparison of LAN messengers
Comparison of software and protocols for distributed social networking
LAN messenger
Secure instant messaging
Comparison of user features of messaging platforms |
https://en.wikipedia.org/wiki/The%20Literary%20Encyclopedia | The Literary Encyclopedia is an online reference work first published in October 2000. It was founded as an innovative project designed to bring the benefits of information technology to what at the time was still a largely conservative literary field. From its inception it was developed as a not-for-profit publication aimed to ensure that those who contribute to it are properly rewarded for the time and knowledge they invest - as such, its authors and editors are also shareholders in the Company.
The Literary Encyclopedia offers both freely available content and content and services for subscribers (individual and institutional, consisting mainly of higher education institutions and higher level secondary schools). Articles are solicited by invitation from specialist scholars, then refereed and approved by subject editors, which makes the LE both authoritative and reliable. It contains general profiles of literary writers, but also of major cultural, historical and scientific figures; articles on individual works of literature from all over the world (often containing succinct critical commentary and sections on critical reception); entries on hundreds of literary terms, concepts and movements, as well as extended essays on topics of historical and cultural importance.
The Literary Encyclopedia offers free access, upon request, to its entire database to all educational institutions in countries where the GDP is below the world average. It also offers a number of research grants to young and emerging scholars in its subscribing institutions, funded by royalties donated by the publication's contributors and editors.
The encyclopedia's founding editors were Robert Clark (University of East Anglia), Emory Elliott (University of California at Riverside) and Janet Todd (University of Cambridge), and its current editorial board numbers over 100 distinguished scholars from higher education institutions all over the world.
Written and owned by a global network of schol |
https://en.wikipedia.org/wiki/The%20Genetical%20Evolution%20of%20Social%20Behaviour | "The Genetical Evolution of Social Behaviour" is a 1964 scientific paper by the British evolutionary biologist W.D. Hamilton in which he mathematically lays out the basis for inclusive fitness.
Hamilton, then only a PhD student, completed his work in London. It was based on Haldane's idea, but Hamilton showed that it applied to all gene frequencies. Although initially obscure, it is now highly cited in biology books, and has gone on to reach such common currency that citations are now often unnecessary as it is assumed that the reader is so familiar with kin selection and inclusive fitness that he need not use the reference to obtain further information.
The paper's peer review process led to disharmony between one of the reviewers, John Maynard Smith and Hamilton. Hamilton thought that Maynard Smith had deliberately kept the paper, which has difficult mathematics, from publication so that Maynard Smith could claim credit for the concept of kin selection in his own paper. Indeed such was the time taken for peer review that Hamilton published a magazine essay in American Naturalist in 1963.
The American George R. Price found Hamilton's paper and, troubled by its implications for sociobiology, tried to disprove it, but ended up rederiving Hamilton's result through what became the Price equation.
The paper has been reprinted in books twice, firstly in George C. Williams's Group Selection, and secondly in the first volume of Hamilton's collected papers Narrow Roads of Gene Land. The latter includes a background essay by Hamilton.
Hamilton had previously written a short note explaining the background in 1988 when ISI recorded it as a citation classic.
See also
Group Selection (book by G. C. Williams which contains this paper) |
https://en.wikipedia.org/wiki/Cutting-plane%20method | In mathematical optimization, the cutting-plane method is any of a variety of optimization methods that iteratively refine a feasible set or objective function by means of linear inequalities, termed cuts. Such procedures are commonly used to find integer solutions to mixed integer linear programming (MILP) problems, as well as to solve general, not necessarily differentiable convex optimization problems. The use of cutting planes to solve MILP was introduced by Ralph E. Gomory.
Cutting plane methods for MILP work by solving a non-integer linear program, the linear relaxation of the given integer program. The theory of Linear Programming dictates that under mild assumptions (if the linear program has an optimal solution, and if the feasible region does not contain a line), one can always find an extreme point or a corner point that is optimal. The obtained optimum is tested for being an integer solution. If it is not, there is guaranteed to exist a linear inequality that separates the optimum from the convex hull of the true feasible set. Finding such an inequality is the separation problem, and such an inequality is a cut. A cut can be added to the relaxed linear program. Then, the current non-integer solution is no longer feasible to the relaxation. This process is repeated until an optimal integer solution is found.
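The loop structure is independent of how cuts are found. A minimal Python sketch using scipy's LP solver, where `separate` stands for a user-supplied separation oracle (the oracle itself, e.g. a Gomory cut generator, is not implemented here; all names are ours):

```python
from scipy.optimize import linprog

def cutting_plane(c, A_ub, b_ub, bounds, separate, max_iters=50):
    """Generic cutting-plane loop for: minimize c @ x s.t. A_ub @ x <= b_ub.

    `separate(x)` inspects the relaxation optimum x and returns a cut
    (a, b) with a @ x > b that is valid for all true feasible points,
    or None when no violated inequality exists."""
    A, b = [list(row) for row in A_ub], list(b_ub)
    res = None
    for _ in range(max_iters):
        res = linprog(c, A_ub=A, b_ub=b, bounds=bounds)  # solve relaxation
        cut = separate(res.x)
        if cut is None:               # relaxation optimum is acceptable
            return res
        a_new, b_new = cut
        A.append(list(a_new))         # tighten the relaxation and re-solve
        b.append(b_new)
    return res
```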
Cutting-plane methods for general convex continuous optimization and variants are known under various names: Kelley's method, Kelley–Cheney–Goldstein method, and bundle methods. They are popularly used for non-differentiable convex minimization, where a convex objective function and its subgradient can be evaluated efficiently but usual gradient methods for differentiable optimization can not be used. This situation is most typical for the concave maximization of Lagrangian dual functions. Another common situation is the application of the Dantzig–Wolfe decomposition to a structured optimization problem in which formulations with an exponentia |
https://en.wikipedia.org/wiki/Media%20Dispatch%20Protocol | The Media Dispatch Protocol (MDP) was developed by the Pro-MPEG Media Dispatch Group to provide an open standard for secure, automated, and tapeless delivery of audio, video and associated data files. Such files typically range from low-resolution content for the web to HDTV and high-resolution digital intermediate files for cinema production.
MDP is essentially a middleware protocol that decouples the technical details of how delivery occurs from the business logic that requires delivery. For example, a TV post-production company might have a contract to deliver a programme to a broadcaster. An MDP agent allows users to deal with company and programme names, rather than with filenames and network endpoints. It can also provide a delivery service as part of a service-oriented architecture.
MDP acts as a communication layer between business logic and low-level file transfer mechanisms, providing a way to securely communicate and negotiate transfer-specific metadata about file packages, delivery routing, deadlines, and security information, and to manage and coordinate file transfers in progress, whilst hooking all this information to project, company and job identifiers.
MDP works by implementing a 'dispatch transaction' layer by which means agents negotiate and agree the details of the individual file transfers required for the delivery, and control, monitor and report on the progress of the transfers. At the heart of the protocol is the 'Manifest' - an XML document that encapsulates the information about the transaction.
MDP is based on existing open technologies such as XML, HTTP and TLS. The protocol is specified in a layered way to allow the adoption of new technologies (e.g. Web Services protocols such as SOAP and WSDL) as required.
Since early 2005, multiple implementations based on draft versions of the Media Dispatch Protocol have been in use, both for technical testing, and, since April 2005, for real-world production work. The experience with |
https://en.wikipedia.org/wiki/Cytorrhysis | Cytorrhysis is the permanent and irreparable damage to the cell wall after the complete collapse of a plant cell due to the loss of internal positive pressure (hydraulic turgor pressure). Positive pressure within a plant cell is required to maintain the upright structure of the cell wall. Desiccation (relative water content of less than or equal to 10%) resulting in cellular collapse occurs when the ability of the plant cell to regulate turgor pressure is compromised by environmental stress. Water continues to diffuse out of the cell after the point of zero turgor pressure, where internal cellular pressure is equal to the external atmospheric pressure, has been reached, generating negative pressure within the cell. That negative pressure pulls the center of the cell inward until the cell wall can no longer withstand the strain. The inward pressure causes the majority of the collapse to occur in the central region of the cell, pushing the organelles within the remaining cytoplasm against the cell walls. Unlike in plasmolysis (a phenomenon that does not occur in nature), the plasma membrane maintains its connections with the cell wall both during and after cellular collapse.
Cytorrhysis of plant cells can be induced in laboratory settings if they are placed in a hypertonic solution where the size of the solutes in the solution inhibit flow through the pores in the cell wall matrix. Polyethylene glycol is an example of a solute with a high molecular weight that is used to induce cytorrhysis under experimental conditions. Environmental stressors which can lead to occurrences of cytorrhysis in a natural setting include intense drought, freezing temperatures, and pathogens such as the rice blast fungus (Magnaporthe grisea).
Mechanisms of avoidance
Desiccation tolerance refers to the ability of a cell to successfully rehydrate without irreparable damage to the cell wall following severe dehydration. Avoiding cellular damage due to metabolic, mechanical, and oxidative st |
https://en.wikipedia.org/wiki/Linear%20predictive%20analysis | Linear predictive analysis is a simple form of first-order extrapolation: if it has been changing at this rate then it will probably continue to change at approximately the same rate, at least in the short term. This is equivalent to fitting a tangent to the graph and extending the line.
One use of this is in linear predictive coding which can be used as a method of reducing the amount of data needed to approximately encode a series. Suppose it is desired to store or transmit a series of values representing voice. The value at each sampling point could be transmitted (if 256 values are possible then 8 bits of data for each point are required, if the precision of 65536 levels are desired then 16 bits per sample are required). If it is known that the value rarely changes more than +/- 15 values between successive samples (-15 to +15 is 31 steps, counting the zero) then we could encode the change in 5 bits. As long as the change is less than +/- 15 values in successive steps the value will exactly reproduce the desired sequence. When the rate of change exceeds +/-15 then the reconstructed values will temporarily differ from the desired value; provided fast changes that exceed the limit are rare it may be acceptable to use the approximation in order to attain the improved coding density.
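A minimal sketch of this 5-bit delta scheme (variable names are ours): differences are clamped to ±15, so reconstruction tracks the signal exactly while changes stay within range, and lags temporarily when they exceed it.

```python
def delta_encode(samples, limit=15):
    """Encode each sample as a clamped difference from the running value."""
    deltas, prev = [], 0
    for s in samples:
        d = max(-limit, min(limit, s - prev))
        deltas.append(d)            # each d fits in 5 bits (-15..+15)
        prev += d                   # mirror the decoder's reconstruction
    return deltas

def delta_decode(deltas):
    """Rebuild the sequence by accumulating the transmitted differences."""
    out, prev = [], 0
    for d in deltas:
        prev += d
        out.append(prev)
    return out

# A slow ramp is reproduced exactly; a jump of +40 is clamped to +15,
# so the reconstruction temporarily lags the true value.
assert delta_decode(delta_encode([5, 10, 15, 55])) == [5, 10, 15, 30]
```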
See also
Linear prediction |
https://en.wikipedia.org/wiki/Vaughan%20Pratt | Vaughan Pratt (born April 12, 1944) is a Professor Emeritus at Stanford University, who was an early pioneer in the field of computer science. Since 1969, Pratt has made several contributions to foundational areas such as search algorithms, sorting algorithms, and primality testing. More recently, his research has focused on formal modeling of concurrent systems and Chu spaces.
Career
Raised in Australia and educated at Knox Grammar School, where he was dux in 1961, Pratt attended Sydney University, where he completed his master's thesis in 1970 on what is now known as natural language processing. He then went to the United States, where he completed a Ph.D. thesis at Stanford University in only 20 months under the supervision of Donald Knuth. His thesis focused on analysis of the Shellsort sorting algorithm and sorting networks.
Pratt was an assistant professor at MIT (1972 to 1976) and then associate professor (1976 to 1982). In 1974, working in collaboration with Knuth and James H. Morris, Pratt completed and formalized work he had begun in 1970 as a graduate student at Berkeley; the coauthored result was the Knuth–Morris–Pratt pattern matching algorithm. In 1976, he developed the system of dynamic logic, a modal logic of structured behavior.
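For reference, a compact textbook formulation of the Knuth–Morris–Pratt search named above might look as follows; this is a standard modern presentation, not Pratt's original one:

```python
# Knuth–Morris–Pratt string matching: precompute a failure function so
# the scan over the text never re-examines matched characters.
def kmp_search(text, pattern):
    """Return the index of the first occurrence of pattern in text, or -1."""
    if not pattern:
        return 0
    # fail[i] = length of the longest proper prefix of pattern that is
    # also a suffix of pattern[:i+1].
    fail = [0] * len(pattern)
    k = 0
    for i in range(1, len(pattern)):
        while k > 0 and pattern[i] != pattern[k]:
            k = fail[k - 1]
        if pattern[i] == pattern[k]:
            k += 1
        fail[i] = k
    # Scan the text, falling back via the failure function on mismatch.
    k = 0
    for i, ch in enumerate(text):
        while k > 0 and ch != pattern[k]:
            k = fail[k - 1]
        if ch == pattern[k]:
            k += 1
        if k == len(pattern):
            return i - k + 1
    return -1

print(kmp_search("abacabab", "abab"))  # 4
```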
He went on sabbatical from MIT to Stanford (1980 to 1981), and was appointed a full professor at Stanford in 1981.
Pratt directed the SUN workstation project at Stanford from 1980 to 1982. He contributed in various ways to the founding and early operation of Sun Microsystems: he served as a consultant during its first year, then took a two-year leave of absence from Stanford to become its director of research, and finally returned to Stanford in 1985 while resuming his role as a consultant to Sun.
He also designed the Sun Microsystems logo, which features four interleaved copies of the word "sun"; it is an ambigram.
Pratt became professor emeritus at Stanford in 2000.
Major contributio |
https://en.wikipedia.org/wiki/Pickover%20stalk | Pickover stalks are certain kinds of detail found empirically in the Mandelbrot set, in the study of fractal geometry. They are named after the researcher Clifford Pickover, whose "epsilon cross" method was instrumental in their discovery. An "epsilon cross" is a cross-shaped orbit trap.
According to Vepstas (1997) "Pickover hit on the novel concept of looking to see how closely the orbits of interior points come to the x and y axes. In these pictures, the closer that the point approaches, the higher up the color scale, with red denoting the closest approach. The logarithm of the distance is taken to accentuate the details".
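A minimal sketch of that epsilon-cross idea, assuming the standard Mandelbrot iteration with escape radius 2; the function name `stalk_value` and the iteration budget are illustrative assumptions:

```python
import math

# Track how closely the orbit of c approaches the x and y axes (the
# "cross"), and report the log of the closest approach, as described
# above; a more negative value means a closer approach ("hotter" colour).
def stalk_value(c, max_iter=100):
    z = 0j
    min_dist = float("inf")
    for _ in range(max_iter):
        z = z * z + c
        # Distance of the current orbit point to the nearest axis.
        min_dist = min(min_dist, abs(z.real), abs(z.imag))
        if abs(z) > 2.0:
            break
    return math.log(min_dist) if min_dist > 0 else float("-inf")

print(stalk_value(complex(-0.75, 0.1)))
```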
Biomorphs
Biomorphs are biological-looking Pickover stalks. At the end of the 1980s, Pickover developed biological feedback organisms similar to Julia sets and the fractal Mandelbrot set. In summary, according to Pickover (1999), he "described an algorithm which could be used for the creation of diverse and complicated forms resembling invertebrate organisms. The shapes are complicated and difficult to predict before actually experimenting with the mappings. He hoped these techniques would encourage others to explore further and discover new forms, by accident, that are on the edge of science and art".
Pickover developed an algorithm (which uses neither random perturbations nor natural laws) to create very complicated forms resembling invertebrate organisms. The iteration, or recursion, of mathematical transformations is used to generate biological morphologies, which he called "biomorphs". Around the time he coined "biomorph" for these patterns, the evolutionary biologist Richard Dawkins used the same word to refer to his own set of biological shapes, arrived at by a very different procedure. More rigorously, Pickover's "biomorphs" encompass the class of organismic morphologies created by small changes to traditional convergence tests in the field of Julia set theory.
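A sketch of one commonly cited biomorph variant, assuming the iteration z → z³ + c with a component-wise bailout test; the power, threshold, and grid parameters are illustrative assumptions, not Pickover's exact settings:

```python
# Render an ASCII biomorph: iterate z -> z**3 + c over a grid of seed
# points and keep those whose final real or imaginary part stays small.
# This "either component small" test is the small change to a traditional
# convergence test that produces the organism-like shapes.
def biomorph(width=60, height=30, c=complex(0.5, 0.3), max_iter=20):
    rows = []
    for j in range(height):
        row = ""
        for i in range(width):
            z = complex(-2 + 4 * i / width, -2 + 4 * j / height)
            for _ in range(max_iter):
                z = z ** 3 + c
                if abs(z.real) > 10 or abs(z.imag) > 10:
                    break  # component-wise bailout
            row += "#" if (abs(z.real) < 10 or abs(z.imag) < 10) else " "
        rows.append(row)
    return "\n".join(rows)

print(biomorph())
```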
Pickover's biomorphs show a self-similarity at d |