source | text |
|---|---|
https://en.wikipedia.org/wiki/Wafer-scale%20integration | Wafer-scale integration (WSI) is a rarely used system of building very-large integrated circuit (commonly called a "chip") networks from an entire silicon wafer to produce a single "super-chip". Combining large size and reduced packaging, WSI was expected to lead to dramatically reduced costs for some systems, notably massively parallel supercomputers. The name is taken from the term very-large-scale integration, the state of the art when WSI was being developed.
Overview
In the normal integrated circuit manufacturing process, a single large cylindrical crystal (boule) of silicon is produced and then cut into disks known as wafers. The wafers are then cleaned and polished in preparation for the fabrication process. A photographic process is used to pattern the surface, defining where material should be deposited on the wafer and where it should not. The desired material is deposited and the photographic mask is removed for the next layer. The wafer is then repeatedly processed in this fashion, building up layer after layer of circuitry on the surface.
Multiple copies of these patterns are deposited in a grid fashion across the surface of the wafer. After all the possible locations are patterned, the wafer surface appears like a sheet of graph paper, with grid lines delineating the individual chips. Each of these grid locations is tested for manufacturing defects by automated equipment. Locations found to be defective are recorded and marked with a dot of paint (this process is referred to as "inking a die"; more modern wafer fabrication techniques no longer require physical markings to identify defective dies). The wafer is then sawed apart to cut out the individual chips. The defective chips are thrown away or recycled, while the working chips are placed into packaging and re-tested for any damage that might have occurred during the packaging process.
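In software terms, the test-and-mark bookkeeping described above might look something like the following toy sketch (the wafer map and the pass/fail test are stand-ins for real fab equipment, not actual tooling):

```python
import random

def test_dies(rows, cols, defect_rate=0.05, seed=1):
    """Test every grid location; True marks a good die, False an 'inked' one."""
    rng = random.Random(seed)
    return [[rng.random() > defect_rate for _ in range(cols)] for _ in range(rows)]

def singulate(wafer_map):
    """'Saw' the wafer apart, keeping only the coordinates of the good dies."""
    return [(r, c) for r, row in enumerate(wafer_map)
            for c, good in enumerate(row) if good]

wafer_map = test_dies(10, 10)
print(len(singulate(wafer_map)), "of 100 dies passed")
```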
Flaws on the surface of the wafers and problems during the layering/depositing process a |
https://en.wikipedia.org/wiki/Orungo%20virus | Orungo virus (ORUV) is an arbovirus of the genus Orbivirus, the subfamily Sedoreovirinae and the family Reoviridae. There are four known subtypes of Orungo virus designated Orungo-1 (ORUV-1), Orungo-2 (ORUV-2), Orungo-3 (ORUV-3), and Orungo-4 (ORUV-4). It was first isolated by the Uganda Virus Research Institute in Entebbe, Uganda by Oyewale Tomori and colleagues. Antibodies to the virus have been found in humans, monkeys, sheep, and cattle. |
https://en.wikipedia.org/wiki/Kadam%20virus | The Kadam virus (or KAD, strain MP6640) is a tick-borne Flavivirus.
Located
The virus was first isolated by the Uganda Virus Research Institute in Entebbe, Uganda, after samples were taken from cattle in Karamoja in 1967. The virus has usually been found only in Rhipicephalus and Amblyomma ticks around Kenya and Uganda, infecting cattle and humans.
Spread
In the early 1980s, Kadam virus was found to be spread in Saudi Arabia by Hyalomma ticks, after being found on a dead camel at Wadi Thamamah in Riyadh. |
https://en.wikipedia.org/wiki/USA%20Biolympiad | The USA Biolympiad (USABO), formerly called the USA Biology Olympiad before January 1, 2020, is a national competition sponsored by the Center for Excellence in Education to select the competitors for the International Biology Olympiad. Each year, twenty National Finalists gather at a nationally recognized institution for a two-week training camp. From the program's inception through 2009, the camp was held at George Mason University; from 2010 through 2015, the camp was held at Purdue University. It was then hosted at Marymount University for 2016 and 2017. As of 2018, it is being held at University of California, San Diego. At the end of the two weeks, four students are selected to represent the United States at the International Biology Olympiad.
History
The USA Biolympiad was first started in 2002, with nearly 10,000 students competing annually. Ever since the CEE (Center for Excellence in Education) started to administer the USABO exam, all four members of Team USA were awarded gold medals at the International Biology Olympiad in 2004, 2007, 2008, 2009, 2011, 2012, 2013, 2015, and 2017. The US National Team also accrued the most medals, winning the IBO outright in 2011, 2013, 2015, and 2017, partly due to the rigorous selection process students undergo to compete in the IBO for the US. The USABO exam was held online in 2020 and 2021 due to the COVID-19 pandemic.
Organization and examination structure
USABO finalists are selected in two rounds of tests. The open exam is a 50-minute multiple-choice exam open to all high school students, who register through their school or an authorized USABO center. This exam is normally administered during the first week of February, though the exact test time may vary from year to year. During this round, there is no penalty for wrong answers, but students may only select one of the four or five choices for each question.
The top 500 students on the open exam, or roughly 10% of all participating st |
https://en.wikipedia.org/wiki/Zaslavskii%20map | The Zaslavskii map is a discrete-time dynamical system introduced by George M. Zaslavsky. It is an example of a dynamical system that exhibits chaotic behavior. The Zaslavskii map takes a point $(x_n, y_n)$ in the plane and maps it to a new point:
$$x_{n+1} = \left[ x_n + \nu (1 + \mu y_n) + \epsilon \nu \mu \cos(2 \pi x_n) \right] \bmod 1$$
and
$$y_{n+1} = e^{-r} \left( y_n + \epsilon \cos(2 \pi x_n) \right),$$
where mod is the modulo operator with real arguments, and $\mu = (1 - e^{-r})/r$. The map depends on four constants ν, μ, ε and r. Russell (1980) gives a Hausdorff dimension of 1.39, but Grassberger (1983) questions this value based on difficulties measuring the correlation dimension.
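A minimal numerical sketch of iterating the map as written above, assuming μ is tied to r as μ = (1 − e^(−r))/r; the parameter values are illustrative:

```python
import math

def zaslavskii_step(x, y, nu=400.0 / 3.0, eps=0.3, r=3.0):
    """One iteration of the Zaslavskii map; mu is derived from r."""
    mu = (1.0 - math.exp(-r)) / r
    x_new = (x + nu * (1.0 + mu * y) + eps * nu * mu * math.cos(2.0 * math.pi * x)) % 1.0
    y_new = math.exp(-r) * (y + eps * math.cos(2.0 * math.pi * x))
    return x_new, y_new

x, y = 0.1, 0.1
for _ in range(10_000):   # iterate long enough to settle onto the attractor
    x, y = zaslavskii_step(x, y)
print(x, y)
```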
See also
List of chaotic maps |
https://en.wikipedia.org/wiki/Kaplan%E2%80%93Yorke%20map | The Kaplan–Yorke map is a discrete-time dynamical system. It is an example of a dynamical system that exhibits chaotic behavior. The Kaplan–Yorke map takes a point (xn, yn) in the plane and maps it to a new point given by
$$x_{n+1} = 2 x_n \bmod 1$$
$$y_{n+1} = \alpha y_n + \cos(4 \pi x_n),$$
where mod is the modulo operator with real arguments. The map depends on only the single constant α.
Calculation method
Due to roundoff error, successive applications of the modulo operator will yield zero after some ten or twenty iterations when implemented as a floating point operation on a computer. It is better to implement the following equivalent algorithm:
$$a_{n+1} = 2 a_n \bmod b$$
$$x_{n+1} = a_{n+1} / b$$
$$y_{n+1} = \alpha y_n + \cos(4 \pi x_n)$$
where $a_n$ and $b$ are computational integers. It is also best to choose $b$ to be a large prime number in order to get many different values of $x_n$.
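A sketch of the integer algorithm above; the choices of b (a Mersenne prime) and the seed a0 are illustrative:

```python
import math

def kaplan_yorke(n_steps, alpha=0.2, b=2**31 - 1, a0=123456789, y0=0.0):
    """Iterate the Kaplan-Yorke map via the integer recurrence a -> 2a mod b,
    so the doubling map never collapses to zero from floating-point roundoff."""
    a, y = a0, y0
    points = []
    for _ in range(n_steps):
        a = (2 * a) % b              # exact integer arithmetic
        x = a / b                    # x_n stays in [0, 1)
        y = alpha * y + math.cos(4.0 * math.pi * x)
        points.append((x, y))
    return points

print(kaplan_yorke(5)[-1])
```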
Another way to avoid having the modulo operator yield zero after a short number of iterations is
which will still eventually return zero, albeit after many more iterations. |
https://en.wikipedia.org/wiki/Cryogenic%20processor | A cryogenic processor is a unit designed to cool an object to ultra-low temperatures (usually around −300°F / −184°C) at a moderated rate in order to prevent thermal shock to the components being treated. The first commercial unit was developed by Ed Busch in the late 1960s. The development of programmable microprocessor controls allowed the machines to follow temperature profiles that increased the effectiveness of the process. Some manufacturers make cryogenic processors with home computers to define the temperature profile.
Before programmable controls were added to control cryogenic processors, the "treatment" process of an object was done manually by immersing the object in liquid nitrogen. This normally caused thermal shock to occur within an object, resulting in cracks to the structure. Modern cryogenic processors measure changes in temperature and adjust the input of liquid nitrogen accordingly to ensure that only fractional changes in temperature occur over a long period of time. Their temperature measurements and adjustments are condensed into "profiles" that are used to repeat the process in a certain way when treating for similarly grouped objects.
The general processing cycle for modern cryogenic processors occurs within a three-day time window, with 24 hours to reach the optimal minimum temperature for a product, 24 hours to hold at the minimum temperature, and 24 hours to return to room temperature. Depending on the product, some items will be heated in an oven to higher temperatures. Some processors are capable of providing both the negative and positive extreme temperatures, although separate units (a cryogenic processor and a dedicated oven) can sometimes produce better results depending upon the application.
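As a rough illustration, such a three-day cycle can be represented as a handful of (hour, temperature) setpoints that a controller interpolates between. The numbers below are hypothetical, not from any particular manufacturer:

```python
# Hypothetical (hour, °C) setpoints for the 72-hour cycle: 24 h ramp down,
# 24 h hold at the minimum temperature, 24 h return to room temperature.
PROFILE = [(0, 20.0), (24, -184.0), (48, -184.0), (72, 20.0)]

def setpoint(hour):
    """Linearly interpolate the controller's target temperature at a given time."""
    for (t0, temp0), (t1, temp1) in zip(PROFILE, PROFILE[1:]):
        if t0 <= hour <= t1:
            frac = (hour - t0) / (t1 - t0)
            return temp0 + frac * (temp1 - temp0)
    raise ValueError("hour outside the profile")

print(setpoint(12))   # halfway down the ramp: -82.0 °C
```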
The optimal minimum temperatures for objects, as well as the hold times involved, are determined utilizing different research methods and are backed by analysis of the product to determine the optimum procedure for a particular product. As |
https://en.wikipedia.org/wiki/American%20and%20British%20English%20spelling%20differences | Despite the various English dialects spoken from country to country and within different regions of the same country, there are only slight regional variations in English orthography, the two most notable variations being British and American spelling. Many of the differences between American and British/Commonwealth English date back to a time before spelling standards were developed. For instance, some spellings seen as "American" today were once commonly used in Britain, and some spellings seen as "British" were once commonly used in the United States.
A "British standard" began to emerge following the 1755 publication of Samuel Johnson's A Dictionary of the English Language, and an "American standard" started following the work of Noah Webster and, in particular, his An American Dictionary of the English Language, first published in 1828. Webster's efforts at spelling reform were somewhat effective in his native country, resulting in certain well-known patterns of spelling differences between the American and British varieties of English. However, English-language spelling reform has rarely been adopted otherwise. As a result, modern English orthography varies only minimally between countries and is far from phonemic in any country.
Historical origins
In the early 18th century, English spelling was inconsistent. These differences became noticeable after the publication of influential dictionaries. Today's British English spellings mostly follow Johnson's A Dictionary of the English Language (1755), while many American English spellings follow Webster's An American Dictionary of the English Language ("ADEL", "Webster's Dictionary", 1828).
Webster was a proponent of English spelling reform for reasons both philological and nationalistic. In A Companion to the American Revolution (2008), John Algeo notes: "it is often assumed that characteristically American spellings were invented by Noah Webster. He was very influential in popularizing certain spellings in A |
https://en.wikipedia.org/wiki/Contact%20force | A contact force is any force that occurs as a result of two objects making contact with each other. Contact forces are ubiquitous and are responsible for most visible interactions between macroscopic collections of matter. Pushing a car or kicking a ball are some of the everyday examples where contact forces are at work. In the first case the force is continuously applied to the car by a person, while in the second case the force is delivered in a short impulse.
Contact forces are often decomposed into orthogonal components, one perpendicular to the surface(s) in contact called the normal force, and one parallel to the surface(s) in contact, called the friction force.
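For example, given a total contact force vector and the surface normal, the two components can be recovered by projection; a small sketch with made-up vectors:

```python
import numpy as np

def decompose_contact_force(force, normal):
    """Split a contact force into components normal and parallel to the surface."""
    n_hat = normal / np.linalg.norm(normal)        # unit surface normal
    normal_component = np.dot(force, n_hat) * n_hat
    friction_component = force - normal_component  # what remains is parallel
    return normal_component, friction_component

F = np.array([3.0, -10.0, 0.0])   # hypothetical total contact force, newtons
n = np.array([0.0, 1.0, 0.0])     # surface normal pointing up
N, f = decompose_contact_force(F, n)
print(N, f)   # [0, -10, 0] (normal force) and [3, 0, 0] (friction)
```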
Not all forces are contact forces; for example, the weight of an object is the force between the object and the Earth, even though the two do not need to make contact. Gravitational forces, electrical forces and magnetic forces are body forces and can exist without contact occurring.
Origin of contact forces
The microscopic origin of contact forces is diverse. Normal force is directly a result of the Pauli exclusion principle and not a true force per se: Everyday objects do not actually touch each other; rather, contact forces are the result of the interactions of the electrons at or near the surfaces of the objects. The atoms in the two surfaces cannot penetrate one another without a large investment of energy because there is no low energy state for which the electron wavefunctions from the two surfaces overlap; thus no microscopic force is needed to prevent this penetration. On the more macroscopic level, such surfaces can be treated as a single object, and two bodies do not penetrate each other due to the stability of matter, which is again a consequence of the Pauli exclusion principle, but also of the fundamental forces of nature: Cracks in the bodies do not widen due to electromagnetic forces that create the chemical bonds between the atoms; the atoms themselves do not disintegrate because of the ele |
https://en.wikipedia.org/wiki/Odic%20force | Odic force (also called Od [õd], Odyle, Önd, Odes, Odylic, Odyllic, or Odems) is the name given in the mid-19th century to a hypothetical vital energy or life force by Baron Carl von Reichenbach. Von Reichenbach coined the name from that of the Germanic god Odin in 1845.
History
As von Reichenbach was investigating the manner in which the human nervous system could be affected by various substances, he conceived the existence of a new force allied to electricity, magnetism, and heat, a force which he thought was radiated by most substances, and to the influence of which different people are variously sensitive. He named this vitalist concept Odic force. Proponents say that Odic force permeates all plants, animals, and humans.
Believers in Odic force said that it was visible in total darkness as colored auras surrounding living things, crystals, and magnets, but that viewing it required hours first spent in total darkness, and only very sensitive people had the ability to see it. They also said that it resembles the Asian concepts prana and qi. However, they regarded the Odic force as not associated with breath (like India's prana and the qi of Chinese martial arts) but rather mainly with biological electromagnetic fields.
Von Reichenbach did not tie Odic force into other vitalist theories. Baron von Reichenbach expounded the concept of Odic force in detail in a book-length article, Researches on Magnetism, Electricity, Heat and Light in their Relations to Vital Forces, which appeared in a special issue of a respected scientific journal. He said that (1) the Odic force had a positive and negative flux, and a light and dark side; (2) individuals could forcefully "emanate" it, particularly from the hands, mouth, and forehead; and (3) the Odic force had many possible applications.
The Odic force was conjectured to explain the phenomenon of hypnotism. In Britain, impetus was given to this view of the subject following the translation of Reichenbach's Researches by |
https://en.wikipedia.org/wiki/Age%20of%20onset | The age of onset is the age at which an individual acquires, develops, or first experiences a condition or symptoms of a disease or disorder. For instance, the general age of onset for the spinal disease scoliosis is "10-15 years old," meaning that most people develop scoliosis when they are of an age between ten and fifteen years.
Diseases are often categorized by their ages of onset as congenital, infantile, juvenile, or adult. Missed or delayed diagnosis often occurs if a disease that is typically diagnosed in juveniles (such as asthma) is present in adults, and vice versa (such as arthritis). Depending on the disease, ages of onset may impact features such as phenotype, as is the case in Parkinson's and Huntington's diseases. For example, the phenotype for juvenile Huntington's disease clearly differs from adult-onset Huntington's disease and late-onset Parkinson's exhibits more severe motor and non-motor phenotypes.
Causes
Germ-line mutations are often at least in part the cause of disease onset at an earlier age. Though many germ-line mutations are deleterious, the genetic lens through which they may be viewed may provide insights to treatment, possibly through genetic counseling.
In some cases, the age of onset may be the result of mutation accumulation. If this is the case, it could be helpful to consider ages of onset as a product of the hypotheses depicted in theories of aging. Even some mental health disorders, whose ages of onset have been found to be harder to define than those of physical illnesses, may have a mutational component. The symptoms of standard mental disorders often start off non-specific. Pathological changes pertaining to disorders often become more detailed and less fickle before they can be defined in the American Psychiatric Association's DSM. The brain is a dynamic and complex system; it is constantly re-wiring itself, and a major concern is what happens to the brain in earlier life that mirrors what occurs later in its psycho-pathological sta |
https://en.wikipedia.org/wiki/Mason%20jar | A Mason jar, also known as a canning jar or fruit jar, is a glass jar used in home canning to preserve food. It was named after American tinsmith John Landis Mason, who patented it in 1858. The jar's mouth has a screw thread on its outer perimeter to accept a metal ring or "band". The band, when screwed down, presses a separate stamped steel disc-shaped lid against the jar's rim.
Mason lost his patent for the jars and numerous other companies started manufacturing similar jars. Over the years, the brand name Mason became the genericized trademark for that style of glass home canning jar, and the word "Mason" can be seen on many Ball and Kerr brand jars. The style of jar is occasionally referred to by common brand names such as Ball jar (in the eastern US) or Kerr jar (in the western US) even if the individual jar is not that brand.
In early 20th century America, Mason jars became useful to those who lived in areas with short growing seasons. The jars became an essential part of farming culture, while being used at fairs to display jams and pickles for judging and awards. This was a reflection of the labour that went into making the jams. The jams, pickles, and sauces would be given and exchanged as gifts during the holidays as a canned preserved good was of much value. The peak use of Mason jars came during World War II, when the U.S. government rationed food, encouraging the public to grow their own. As migration to cities occurred, along with the rise of refrigerators, the more efficient transport of goods made fruit and vegetables available year round, reducing the need for food preservation. Contemporary industrial preservation transitioned to the use of plastics like bakelite and nylon and billions of containers were produced instead.
In the early to mid-2010s a revival of the Mason jar occurred from a mix of the rise of thrifting and adoption by hipsters. Used as a novelty by major corporations like 7-Eleven to advertise new drinks, for greenwashing being b |
https://en.wikipedia.org/wiki/One-hot | In digital circuits and machine learning, a one-hot is a group of bits among which the legal combinations of values are only those with a single high (1) bit and all the others low (0). A similar implementation in which all bits are '1' except one '0' is sometimes called one-cold. In statistics, dummy variables represent a similar technique for representing categorical data.
Applications
Digital circuitry
One-hot encoding is often used for indicating the state of a state machine. When using binary, a decoder is needed to determine the state. A one-hot state machine, however, does not need a decoder as the state machine is in the nth state if, and only if, the nth bit is high.
A ring counter with 15 sequentially ordered states is an example of a state machine. A 'one-hot' implementation would have 15 flip flops chained in series with the Q output of each flip flop connected to the D input of the next and the D input of the first flip flop connected to the Q output of the 15th flip flop. The first flip flop in the chain represents the first state, the second represents the second state, and so on to the 15th flip flop, which represents the last state. Upon reset of the state machine all of the flip flops are reset to '0' except the first in the chain, which is set to '1'. The next clock edge arriving at the flip flops advances the one 'hot' bit to the second flip flop. The 'hot' bit advances in this way until the 15th state, after which the state machine returns to the first state.
An address decoder converts from binary to one-hot representation.
A priority encoder converts from one-hot representation to binary.
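A software sketch of these three ideas (the decoder direction, the encoder direction, and the advancing hot bit of the ring counter); illustrative only, since real designs would be written in an HDL:

```python
def binary_to_one_hot(state, n_states):
    """Address-decoder direction: binary state index -> word with one bit set."""
    if not 0 <= state < n_states:
        raise ValueError("state out of range")
    return 1 << state

def one_hot_to_binary(word):
    """Priority-encoder direction: one-hot word -> binary state index."""
    if word == 0 or word & (word - 1):
        raise ValueError("illegal state: not exactly one hot bit")  # easy to detect
    return word.bit_length() - 1

def ring_counter_step(word, n_states=15):
    """Advance the hot bit one position, wrapping the last state to the first."""
    word <<= 1
    return word if word < (1 << n_states) else 1

state = binary_to_one_hot(0, 15)    # reset: first flip-flop holds the '1'
state = ring_counter_step(state)    # hot bit moves to the second flip-flop
print(one_hot_to_binary(state))     # 1
```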
Comparison with other encoding methods
Advantages
Determining the state has a low and constant cost of accessing one flip-flop
Changing the state has the constant cost of accessing two flip-flops
Easy to design and modify
Easy to detect illegal states
Takes advantage of an FPGA's abundant flip-flops
Using a one-hot implementation typically allows a sta |
https://en.wikipedia.org/wiki/Coding%20by%20exception | Coding by exception is an accidental complexity in a software system in which the program handles specific errors that arise with unique exceptions. When an issue arises in a software system, an error is raised with a trace leading back to where it was caught and, if applicable, to where the problem originated. Exceptions can be used to handle the error while the program is running and avoid crashing the system. Exceptions should be generalized and cover the numerous errors that can arise. Using exceptions to handle specific errors in order to continue the program is called coding by exception. This anti-pattern can quickly degrade software in performance and maintainability. Executing code even after the exception is raised resembles the goto statement found in many software languages, which is also considered poor practice.
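A contrived sketch of the anti-pattern next to a plainer alternative; the record fields and exception names are hypothetical:

```python
class MissingNameError(Exception):
    pass

class MissingAddressError(Exception):
    pass

def load_user_bad(record):
    # Anti-pattern: one bespoke exception per recoverable condition, with
    # handlers upstream steering normal control flow like goto targets.
    try:
        name = record["name"]
    except KeyError:
        raise MissingNameError("no name field") from None
    try:
        address = record["address"]
    except KeyError:
        raise MissingAddressError("no address field") from None
    return name, address

def load_user_better(record):
    # Expected cases handled with ordinary control flow; exceptions are
    # reserved for genuinely exceptional failures.
    return record.get("name", "<unknown>"), record.get("address", "<unknown>")

print(load_user_better({"name": "Ada"}))
```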
See also
Accidental complexity
Creeping featurism
Test-driven development
Anti-patterns |
https://en.wikipedia.org/wiki/Bonnor%20beam | In general relativity, the Bonnor beam is an exact solution which models an infinitely long, straight beam of light. It is an explicit example of a pp-wave spacetime. It is named after William B. Bonnor who first described it.
The Bonnor beam is obtained by matching together two regions:
a uniform plane wave interior region, which is shaped like the world tube of a solid cylinder, and models the electromagnetic and gravitational fields inside the beam,
a vacuum exterior region, which models the gravitational field outside the beam.
On the "cylinder" where they meet, the two regions are required to obey matching conditions stating that the metric tensor and extrinsic curvature tensor must agree.
The interior part of the solution is defined by
This is a null dust solution and can be interpreted as incoherent electromagnetic radiation.
The exterior part of the solution is defined by
The Bonnor beam can be generalized to several parallel beams travelling in the same direction. Perhaps surprisingly, the beams do not curve toward one another. On the other hand, "anti-parallel" beams (travelling along parallel trajectories, but in opposite directions) do attract each other. This reflects a general phenomenon: two pp-waves with parallel wave vectors superimpose linearly, but pp-waves with nonparallel wave vectors (including antiparallel Bonnor beams) do not superimpose linearly, as we would expect from the nonlinear nature of the Einstein field equation. |
https://en.wikipedia.org/wiki/Aichelburg%E2%80%93Sexl%20ultraboost | In general relativity, the Aichelburg–Sexl ultraboost is an exact solution which models the spacetime of an observer moving towards or away from a spherically symmetric gravitating object at nearly the speed of light. It was introduced by Peter C. Aichelburg and Roman U. Sexl in 1971.
The original motivation behind the ultraboost was to consider the gravitational field of massless point particles within general relativity. It can be considered an approximation to the gravity well of a photon or other lightspeed particle, although it does not take into account quantum uncertainty in particle position or momentum.
The metric tensor can be written, in terms of Brinkmann coordinates, as
The ultraboost can be obtained as the limit of a metric, which is also an exact solution, at least if one admits impulsive curvatures.
For example, one can take a Gaussian pulse.
In these plus-polarized axisymmetric vacuum pp-waves, the curvature is concentrated along the axis of symmetry, falling off like , and also near . As , the wave profile turns into a Dirac delta and the ultraboost is recovered.
The ultraboost also helps to explain why fast-moving observers will not see moving stars and planet-like objects become black holes. |
https://en.wikipedia.org/wiki/Hypergeometric%20function | In mathematics, the Gaussian or ordinary hypergeometric function 2F1(a,b;c;z) is a special function represented by the hypergeometric series, that includes many other special functions as specific or limiting cases. It is a solution of a second-order linear ordinary differential equation (ODE). Every second-order linear ODE with three regular singular points can be transformed into this equation.
For systematic lists of some of the many thousands of published identities involving the hypergeometric function, see the reference works by and . There is no known system for organizing all of the identities; indeed, there is no known algorithm that can generate all identities; a number of different algorithms are known that generate different series of identities. The theory of the algorithmic discovery of identities remains an active research topic.
History
The term "hypergeometric series" was first used by John Wallis in his 1655 book Arithmetica Infinitorum.
Hypergeometric series were studied by Leonhard Euler, but the first full systematic treatment was given by Carl Friedrich Gauss (1813).
Studies in the nineteenth century included those of Ernst Kummer (1836), and the fundamental characterisation by Bernhard Riemann (1857) of the hypergeometric function by means of the differential equation it satisfies.
Riemann showed that the second-order differential equation for 2F1(z), examined in the complex plane, could be characterised (on the Riemann sphere) by its three regular singularities.
The cases where the solutions are algebraic functions were found by Hermann Schwarz (Schwarz's list).
The hypergeometric series
The hypergeometric function is defined for $|z| < 1$ by the power series
$${}_2F_1(a, b; c; z) = \sum_{n=0}^{\infty} \frac{(a)_n (b)_n}{(c)_n} \frac{z^n}{n!}.$$
It is undefined (or infinite) if $c$ equals a non-positive integer. Here $(q)_n$ is the (rising) Pochhammer symbol, which is defined by:
$$(q)_n = \begin{cases} 1 & n = 0 \\ q (q+1) \cdots (q+n-1) & n > 0 \end{cases}$$
The series terminates if either $a$ or $b$ is a nonpositive integer, in which case the function reduces to a polynomial:
$${}_2F_1(-m, b; c; z) = \sum_{n=0}^{m} (-1)^n \binom{m}{n} \frac{(b)_n}{(c)_n} z^n.$$
For complex arguments $z$ with $|z| \ge 1$ it can be analytically continued along any path in the complex plane that avoids the |
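A direct numerical sketch of the series definition above, valid only for |z| < 1 and truncated to finitely many terms:

```python
def hyp2f1(a, b, c, z, n_terms=200):
    """Evaluate 2F1(a, b; c; z) by direct summation of the defining series.

    Reliable only for |z| < 1; each term carries the rising Pochhammer
    factors (a)_n (b)_n / ((c)_n n!) built up from the previous term.
    """
    total, term = 0.0, 1.0            # the n = 0 term is 1
    for n in range(n_terms):
        total += term
        term *= (a + n) * (b + n) / ((c + n) * (n + 1)) * z
    return total

# Classical special case: 2F1(1, 1; 2; z) = -ln(1 - z) / z
print(hyp2f1(1, 1, 2, 0.5))   # ~1.3863, matching -ln(0.5) / 0.5
```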
https://en.wikipedia.org/wiki/Master%20of%20Animals | The Master of Animals, Lord of Animals, or Mistress of the Animals is a motif in ancient art showing a human between and grasping two confronted animals. The motif is very widespread in the art of the Ancient Near East and Egypt. The figure may be female or male, it may be a column or a symbol, the animals may be realistic or fantastical, and the human figure may have animal elements such as horns, an animal upper body, an animal lower body, legs, or cloven feet. Although what the motif represented to the cultures that created the works probably varies greatly, unless shown with specific divine attributes, when male the figure is typically described as a hero by interpreters.
The motif is so widespread and visually effective that many depictions probably were conceived as decoration with only a vague meaning attached to them. The Master of Animals is the "favorite motif of Achaemenian official seals", but the figures in these cases should be understood as the king.
The human figure may be standing, as found from the fourth millennium BC, or as kneeling on one knee found from the third millennium BC. They are usually shown looking frontally, but in Assyrian pieces typically they are shown from the side. Sometimes the animals are clearly alive, whether fairly passive and tamed, or still struggling, rampant, or attacking. In other pieces they may represent dead hunter's prey.
Other associated representations show a figure controlling or "taming" a single animal, usually to the right of the figure. But the many representations of heroes or kings killing an animal are distinguished from these.
Art
The earliest known depiction of such a motif appears on stamp seals of the Ubaid period in Mesopotamia. The motif appears on a terracotta stamp seal from Tell Telloh, ancient Girsu, at the end of the prehistoric Ubaid period of Mesopotamia.
The motif also was given the topmost location of the famous Gebel el-Arak Knife in the Louvre, an ivory and flint knife dating fr |
https://en.wikipedia.org/wiki/Digital%20room%20correction | Digital room correction (or DRC) is a process in the field of acoustics where digital filters designed to ameliorate unfavorable effects of a room's acoustics are applied to the input of a sound reproduction system. Modern room correction systems produce substantial improvements in the time domain and frequency domain response of the sound reproduction system.
History
The use of analog filters, such as equalizers, to normalize the frequency response of a playback system has a long history; however, analog filters are very limited in their ability to correct the distortion found in many rooms. Although digital implementations of the equalizers have been available for some time, digital room correction is usually used to refer to the construction of filters which attempt to invert the impulse response of the room and playback system, at least in part. Digital correction systems are able to use acausal filters, and are able to operate with optimal time resolution, optimal frequency resolution, or any desired compromise along the Gabor limit. Digital room correction is a fairly new area of study which has only recently been made possible by the computational power of modern CPUs and DSPs.
Operation
The configuration of a digital room correction system begins with measuring the impulse response of the room at a reference listening position, and sometimes at additional locations for each of the loudspeakers. Then, computer software is used to compute a FIR filter, which reverses the effects of the room and linear distortion in the loudspeakers. In low performance conditions, a few IIR peaking filters are used instead of FIR filters, which require convolution, a relatively computation-heavy operation. Finally, the calculated filter is loaded into a computer or other room correction device which applies the filter in real time. Because most room correction filters are acausal, there is some delay. Most DRC systems allow the operator to control the added delay through |
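A minimal sketch of the core computation, inverting a measured impulse response in the frequency domain with regularization; the impulse response here is synthetic, and real DRC systems add smoothing, windowing, and psychoacoustic constraints:

```python
import numpy as np

def inverse_fir(impulse_response, n_taps=1024, reg=1e-3):
    """Regularized frequency-domain inversion: W = conj(H) / (|H|^2 + reg)."""
    H = np.fft.rfft(impulse_response, n=n_taps)
    W = np.conj(H) / (np.abs(H) ** 2 + reg)   # reg keeps deep nulls from blowing up
    taps = np.fft.irfft(W, n=n_taps)
    return np.roll(taps, n_taps // 2)         # shift the acausal part into a delay

# Synthetic "room": direct sound plus one strong reflection.
h = np.zeros(256)
h[0], h[40] = 1.0, 0.5
w = inverse_fir(h)
corrected = np.convolve(h, w)   # approximately an impulse, delayed by n_taps // 2
```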
https://en.wikipedia.org/wiki/Pre-echo | In audio signal processing, pre-echo, sometimes called a forward echo, (not to be confused with reverse echo) is a digital audio compression artifact where a sound is heard before it occurs (hence the name). It is most noticeable in impulsive sounds from percussion instruments such as castanets or cymbals.
It occurs in transform-based audio compression algorithms – typically based on the modified discrete cosine transform (MDCT) – such as MP3, MPEG-4 AAC, and Vorbis, and is due to quantization noise being spread over the entire transform-window of the codec.
Cause
The psychoacoustic component of the effect is that one hears only the echo preceding the transient, not the one following – because this latter is drowned out by the transient. Formally, forward temporal masking is much stronger than backwards temporal masking, hence one hears a pre-echo, but no post-echo.
Mitigation
In an effort to avoid pre-echo artifacts, many sound processing systems use filters where all of the response occurs after the main impulse, rather than linear phase filters. Such filters necessarily introduce phase distortion and temporal smearing, but this additional distortion is less audible because of strong forward masking.
Avoiding pre-echo is a substantial design difficulty in transform domain lossy audio codecs such as MP3, MPEG-4 AAC, and Vorbis. It is also one of the problems encountered in digital room correction algorithms and frequency domain filters in general (denoising by spectral subtraction, equalization, and others). One way of reducing "breathing" for filters and compression techniques using piecewise Fourier-based transforms is picking a smaller transform window (short blocks in MP3), thus increasing the temporal resolution of the algorithm at the cost of reducing its frequency resolution.
To better reproduce transients and eliminate pre-echo, lossy audio compression software such as the open source Vorbis encoder (oggenc from vorbis-tools) offers options such as impulse noisetune and bit |
https://en.wikipedia.org/wiki/Monochromatic%20electromagnetic%20plane%20wave | In general relativity, the monochromatic electromagnetic plane wave spacetime is the analog of the monochromatic plane waves known from Maxwell's theory. The precise definition of the solution is quite complicated but very instructive.
Any exact solution of the Einstein field equation which models an electromagnetic field, must take into account all gravitational effects of the energy and mass of the electromagnetic field. Besides the electromagnetic field, if no matter and non-gravitational fields are present, the Einstein field equation and the Maxwell field equations must be solved simultaneously.
In Maxwell's field theory of electromagnetism, among the most important types of electromagnetic field are those representing electromagnetic microwave radiation. Of these, the most important examples are the electromagnetic plane waves, in which the radiation has planar wavefronts moving in a specific direction at the speed of light. Of these, the most basic are the monochromatic plane waves, in which only one frequency component is present. This is precisely the phenomenon that this solution models, but in terms of general relativity.
Definition of the solution
The metric tensor of the unique exact solution modeling a linearly polarized electromagnetic plane wave with amplitude and frequency can be written, in terms of Rosen coordinates, in the form
where is the first positive root of where . In this chart, are null coordinate vectors while are spacelike coordinate vectors.
Here, the Mathieu cosine is an even function which solves the Mathieu equation and also takes the value . Despite the name, this function is not periodic, and it cannot be written in terms of sinusoidal or even hypergeometric functions. (See Mathieu function for more about the Mathieu cosine function.)
In the expression for the metric, note that are null vector fields. Therefore, is a timelike vector field, while are spacelike vector fields.
To define the electromagnetic field |
https://en.wikipedia.org/wiki/John%20Matheson | John Ross Matheson, (November 14, 1917 – December 27, 2013) was a Canadian politician, lawyer, and judge who helped develop both the national flag of Canada and the Order of Canada.
Early life
John Matheson was born in Arundel, Quebec, the son of the Reverend Dr. A. Dawson Matheson and his wife Gertrude Matheson (née McCuaig). Matheson underwent training at the Royal Military College of Canada in 1936.
He graduated from Queen's University in 1940, winning the prestigious Tricolour Award in that year for distinguished achievement.
Military career
Matheson served as an officer with the 1st Field Regiment, Royal Canadian Horse Artillery, 1st Canadian Infantry Division in Italy during World War II. He was the only officer in this regiment to survive the war.
Matheson participated in the Battle of Ortona, where an air bursting German shell sent shrapnel into his head and caused damage similar to a stroke. He was left paralyzed from the neck down and unable to speak. He recovered after returning to Canada, but never regained the use of his right leg. His injuries caused him lifelong pain, and afterwards, he usually walked with the assistance of a cane.
Matheson held honorary militia appointments with the 30th Field Artillery Regiment, Royal Regiment of Canadian Artillery from 1972 to 1982. Afterwards, he retired with the rank of Colonel.
Family and legal career
After the war, Matheson met Edith Bickley, a radiologist's assistant, in St. Anne de Bellevue Hospital in Montreal, Quebec. He said they would never have met if she hadn’t been such a curious nurse. The couple married and eventually had six children. He received a Bachelor of Laws degree from Osgoode Hall Law School, a Master of Arts degree from Mount Allison University, and a Master of Laws degree from the University of Western Ontario. He was called to the Bar of Ontario in 1948 and was created a Queen's Counsel in 1967. He practiced law with the firm of Matheson, Henderson & Hart in Brockville, Ontario |
https://en.wikipedia.org/wiki/Elevator%20paradox%20%28physics%29 | The elevator paradox relates to a hydrometer placed on an "elevator" or vertical conveyor that, by moving to different elevations, changes the atmospheric pressure. In this classic demonstration, the floating hydrometer remains at an equilibrium position. Essentially, a hydrometer measures specific gravity of liquids independent of barometric pressure. This is because the change in air pressure is applied to the entire hydrometer flask. The submerged portion of the flask receives a transmitted force through the liquid, thus no portion of the apparatus receives a net force resulting from a change in air pressure.
This is a paradox if the buoyancy of the hydrometer is said to depend on the weight of the liquid that it displaces. At a higher barometric pressure, the liquid occupies a slightly smaller volume and, being thus more dense, might be considered to have a higher specific gravity. However, the hydrometer also displaces air, and the weight of the liquid and the air are affected equally by elevation.
Cartesian divers
A Cartesian diver, on the other hand, has an internal space that, unlike a hydrometer, is not rigid, and thus can change its displacement as increasing external air pressure compresses the air in the diver. If the diver, instead of being placed in the classic plastic bottle, were floated in a flask on an elevator, the diver would respond to a change in air pressure. Similarly, a non-rigid container like a toy balloon will be affected, as will the rib cage of a human SCUBA diver, and such systems will vary in buoyancy. A glass hydrometer is rigid under normal pressure, for all practical purposes.
The hydrometer in an accelerating frame of reference
The upward or downward acceleration of the elevator, as long as the net force is directed downward, will not change the equilibrium point of the hydrometer either. The force due to acceleration acts on the hydrometer exactly as it would on an equal mass of water or other liquid.
Experimental physics
Physic |
https://en.wikipedia.org/wiki/Vibration%20white%20finger | Vibration white finger (VWF), also known as hand-arm vibration syndrome (HAVS) or dead finger, is a secondary form of Raynaud's syndrome, an industrial injury triggered by continuous use of vibrating hand-held machinery. Use of the term vibration white finger has generally been superseded in professional usage by the broader concept of HAVS, although it is still used by the general public. The symptoms of vibration white finger are the vascular component of HAVS.
HAVS is a widespread recognized industrial disease affecting tens of thousands of workers. It is a disorder that affects the blood vessels, nerves, muscles, and joints of the hand, wrist, and arm. Its best known effect is vibration-induced white finger (VWF), a term introduced by the Industrial Injury Advisory Council in 1970. Injury can occur at frequencies between 5 and 2000 Hz but the greatest risk for fingers is between 50 and 300 Hz. The total risk exposure for hand and arm is calculated by the use of ISO 5349-1, which stipulates maximum damage between 8 and 16 Hz and a rapidly declining risk at higher frequencies. The ISO 5349-1 frequency risk assessment has been criticized as corresponding poorly to observational data; more recent research suggests that medium and high frequency vibrations also increase HAVS risk.
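A simplified sketch of the ISO 5349-1 exposure arithmetic, combining the frequency-weighted rms accelerations of three axes into a vibration total value and normalizing to an 8-hour day; the weighting filter itself is omitted and the input values are hypothetical:

```python
import math

def daily_exposure_a8(awx, awy, awz, exposure_hours):
    """A(8): vector sum of the frequency-weighted rms accelerations (m/s^2),
    scaled to an 8-hour energy-equivalent reference period."""
    ahv = math.sqrt(awx**2 + awy**2 + awz**2)   # vibration total value
    return ahv * math.sqrt(exposure_hours / 8.0)

print(daily_exposure_a8(2.0, 1.5, 1.0, 4.0))   # ~1.9 m/s^2
```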
Effects
Excessive exposure to hand-arm vibrations can result in various patterns of diseases informally known as HAVS or VWF. These can affect the nerves, joints, muscles, blood vessels or connective tissues of the hand and forearm:
Tingling, 'whiteness' or numbness in the fingers (blood vessels and nerves affected): This may not be noticeable at the end of a working day, and in mild cases may affect only the tips of the fingers. As the condition becomes more severe, the whole finger down to the knuckles may become white. Feeling may also be lost.
Fingers change colour (blood vessels affected): With continued exposure the person may experience periodic attacks in which the fingers change colour whe |
https://en.wikipedia.org/wiki/Glassy%20carbon | Glass-like carbon, often called glassy carbon or vitreous carbon, is a non-graphitizing, or nongraphitizable, carbon which combines glassy and ceramic properties with those of graphite. The most important properties are high thermal stability, high thermal conductivity, hardness (7 Mohs), low density, low electrical resistance, low friction, extreme resistance to chemical attack, and impermeability to gases and liquids. Glassy carbon is widely used as an electrode material in electrochemistry, for high-temperature crucibles, and as a component of some prosthetic devices. It can be fabricated in different shapes, sizes and sections.
The names glassy carbon and vitreous carbon have been registered as trademarks, and IUPAC does not recommend their use as technical terms.
A historical review of glassy carbon was published in 2021.
History
Glassy carbon was first observed in the laboratories of The Carborundum Company, Manchester, UK, in the mid-1950s by Bernard Redfern, a materials scientist and diamond technologist. He noticed that Sellotape he used to hold ceramic (rocket nozzle) samples in a furnace maintained a sort of structural identity after firing in an inert atmosphere. He searched for a polymer matrix to mirror a diamond structure and discovered a resole resin that would, with special preparation, set without a catalyst. Crucibles were produced with this phenolic resin, and distributed to organisations such as UKAEA Harwell.
Redfern left The Carborundum Co., which officially wrote off all interests in the glassy carbon invention. While working at the Plessey Company laboratory (in a disused church) in Towcester, UK, Redfern received a glassy carbon crucible for duplication from UKAEA. He identified it as one he had made from markings he had engraved into the uncured precursor prior to carbonisation—it is almost impossible to engrave the finished product. The Plessey Company set up a laboratory, first in a factory previously used to make briar pipes in Li |
https://en.wikipedia.org/wiki/Neuromelanin | Neuromelanin (NM) is a dark pigment found in the brain which is structurally related to melanin. It is a polymer of 5,6-dihydroxyindole monomers. Neuromelanin is found in large quantities in catecholaminergic cells of the substantia nigra pars compacta and locus coeruleus, giving a dark color to the structures.
Physical properties and structure
Neuromelanin gives specific brain sections, such as the substantia nigra or the locus coeruleus, distinct color. It is a type of melanin and similar to other forms of peripheral melanin. It is insoluble in organic compounds, and can be labeled by silver staining. It is called neuromelanin because of its function and the color change that appears in tissues containing it. It contains black/brown pigmented granules. Neuromelanin is found to accumulate during aging, noticeably after the first 2–3 years of life. It is believed to protect neurons in the substantia nigra from iron-induced oxidative stress. It is considered a true melanin due to its stable free radical structure and it avidly chelates metals.
Synthetic pathways
Neuromelanin is directly biosynthesized from L-DOPA, the precursor to dopamine, by tyrosine hydroxylase (TH) and aromatic amino acid decarboxylase (AADC). Alternatively, synaptic vesicles and endosomes accumulate cytosolic dopamine (via the vesicular monoamine transporter 2, VMAT2) and transport it to mitochondria, where it is metabolized by monoamine oxidase. Excess dopamine and DOPA molecules are oxidized by iron catalysis into dopaquinones and semiquinones, which are then phagocytosed and stored as neuromelanin.
Neuromelanin biosynthesis is driven by excess cytosolic catecholamines not accumulated by synaptic vesicles.
Function
Neuromelanin is found in higher concentrations in humans than in other primates. Neuromelanin concentration increases with age, suggesting a role in neuroprotection (neuromelanin can chelate metals and xenobiotics) or senescence.
Role in disease
Neuromelanin-containing neurons in the sub |
https://en.wikipedia.org/wiki/Television%20transmitter | A television transmitter is a transmitter that is used for terrestrial (over-the-air) television broadcasting. It is an electronic device that radiates radio waves that carry a video signal representing moving images, along with a synchronized audio channel, which is received by television receivers ('televisions' or 'TVs') belonging to a public audience, which display the image on a screen. A television transmitter, together with the broadcast studio which originates the content, is called a television station. Television transmitters must be licensed by governments, and are restricted to a certain frequency channel and power level. They transmit on frequency channels in the VHF and UHF bands. Since radio waves of these frequencies travel by line of sight, they are limited by the horizon to reception distances of 40–60 miles depending on the height of the transmitter station.
Television transmitters use one of two different technologies: analog, in which the picture and sound are transmitted by analog signals modulated onto the radio carrier wave, and digital in which the picture and sound are transmitted by digital signals. The original television technology, analog television, began to be replaced in a transition beginning in 2006 in many countries with digital television (DTV) systems. These transmit pictures in a new format called HDTV (high-definition television) which has higher resolution and a wider screen aspect ratio than analog. DTV makes more efficient use of scarce radio spectrum bandwidth, as several DTV channels can be transmitted in the same bandwidth as a single analog channel. In both analog and digital television, different countries use several incompatible modulation standards to add the video and audio signals to the radio carrier wave.
The principles of primarily analog systems are summarized here, as they are typically more complex than digital transmitters due to the multiplexing of VSB and FM modulation stages.
Types of transmitters
The |
https://en.wikipedia.org/wiki/Broadcast%20transmitter | A broadcast transmitter is an electronic device which radiates radio waves modulated with information content intended to be received by the general public. Examples are a radio broadcasting transmitter which transmits audio (sound) to broadcast radio receivers (radios) owned by the public, or a television transmitter, which transmits moving images (video) to television receivers (televisions). The term often includes the antenna which radiates the radio waves, and the building and facilities associated with the transmitter. A broadcasting station (radio station or television station) consists of a broadcast transmitter along with the production studio which originates the broadcasts. Broadcast transmitters must be licensed by governments, and are restricted to specific frequencies and power levels. Each transmitter is assigned a unique identifier consisting of a string of letters and numbers called a callsign, which must be used in all broadcasts.
Exciter
In broadcasting and telecommunication, the part which contains the oscillator, modulator, and sometimes audio processor, is called the "exciter". Most transmitters use the heterodyne principle, so they also have frequency conversion units. Confusingly, the high-power amplifier which the exciter then feeds into is often called the "transmitter" by broadcast engineers. The final output is given as transmitter power output (TPO), although this is not what most stations are rated by.
Effective radiated power (ERP) is used when calculating station coverage, even for most non-broadcast stations. It is the TPO, minus any attenuation or radiated loss in the line to the antenna, multiplied by the gain (magnification) which the antenna provides toward the horizon. This antenna gain is important, because achieving a desired signal strength without it would result in an enormous electric utility bill for the transmitter, and a prohibitively expensive transmitter. For most large stations in the VHF- and UHF-range, the t |
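A tiny sketch of that ERP arithmetic with the losses and gain expressed in decibels; the numbers are hypothetical, and real coverage work also accounts for antenna pattern and height:

```python
def effective_radiated_power(tpo_watts, line_loss_db, antenna_gain_db):
    """ERP: TPO reduced by feedline loss, multiplied by antenna gain."""
    net_gain_db = antenna_gain_db - line_loss_db
    return tpo_watts * 10 ** (net_gain_db / 10)

# e.g. a 10 kW transmitter, 1.5 dB of feedline loss, 10 dB of antenna gain:
print(effective_radiated_power(10_000, 1.5, 10.0))   # ~70.8 kW ERP
```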
https://en.wikipedia.org/wiki/European%20Bioinformatics%20Institute | The European Bioinformatics Institute (EMBL-EBI) is an intergovernmental organization (IGO) which, as part of the European Molecular Biology Laboratory (EMBL) family, focuses on research and services in bioinformatics. It is located on the Wellcome Genome Campus in Hinxton near Cambridge, and employs over 600 full-time equivalent (FTE) staff. Institute leaders such as Rolf Apweiler, Alex Bateman, Ewan Birney, and Guy Cochrane, an adviser on the National Genomics Data Center Scientific Advisory Board, serve as part of the international research network of the BIG Data Center at the Beijing Institute of Genomics.
Additionally, the EMBL-EBI hosts training programs that teach scientists the fundamentals of the work with biological data and promote the plethora of bioinformatic tools available for their research, both EMBL-EBI and non-EMBL-EBI-based.
Bioinformatic services
One of the roles of the EMBL-EBI is to index and maintain biological data in a set of databases, including Ensembl (housing whole genome sequence data), UniProt (protein sequence and annotation database) and Protein Data Bank (protein and nucleic acid tertiary structure database). A variety of online services and tools is provided, such as Basic Local Alignment Search Tool (BLAST) or Clustal Omega sequence alignment tool, enabling further data analysis.
BLAST
BLAST is an algorithm for the comparison of biomacromolecule primary structure, most often nucleotide sequence of DNA/RNA and amino acid sequence of proteins, stored in the bioinformatic databases, with the query sequence. The algorithm utilizes scoring of the available sequences against the query by a scoring matrix such as BLOSUM 62. The highest scoring sequences represent the closest relatives of the query, in terms of functional and evolutionary similarity.
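As a toy illustration of matrix-based scoring (not the actual BLAST heuristic, which uses seeded local alignment), here is ungapped scoring of database sequences against a query, with made-up BLOSUM-like values:

```python
# Toy excerpt of a substitution matrix (values BLOSUM-like, for illustration).
SUBST = {
    ("A", "A"): 4, ("A", "G"): 0, ("G", "G"): 6,
    ("A", "W"): -3, ("G", "W"): -2, ("W", "W"): 11,
}

def pair_score(x, y, default=-4):
    """Look up a symmetric substitution score, with a default mismatch penalty."""
    if (x, y) in SUBST:
        return SUBST[(x, y)]
    return SUBST.get((y, x), default)

def ungapped_score(query, candidate):
    """Sum per-position scores; higher totals suggest closer relatives."""
    return sum(pair_score(q, c) for q, c in zip(query, candidate))

query = "GAWG"
database = ["GAWA", "AAAA"]
print(max(database, key=lambda seq: ungapped_score(query, seq)))   # GAWA
```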
The database search by BLAST requires input data to be in a correct format (e.g. FASTA, GenBank, PIR or EMBL format). Users may also designate the specific databases to be searched, s |
https://en.wikipedia.org/wiki/Marine%20life | Marine life, sea life, or ocean life is the plants, animals, and other organisms that live in the salt water of seas or oceans, or the brackish water of coastal estuaries. At a fundamental level, marine life affects the nature of the planet. Marine organisms, mostly microorganisms, produce oxygen and sequester carbon. Marine life in part shapes and protects shorelines, and some marine organisms even help create new land (e.g. coral building reefs).
Most life forms evolved initially in marine habitats. By volume, oceans provide about 90% of the living space on the planet. The earliest vertebrates appeared in the form of fish, which live exclusively in water. Some of these evolved into amphibians, which spend portions of their lives in water and portions on land. One group of amphibians evolved into reptiles and mammals and a few subsets of each returned to the ocean as sea snakes, sea turtles, seals, manatees, and whales. Plant forms such as kelp and other algae grow in the water and are the basis for some underwater ecosystems. Plankton forms the general foundation of the ocean food chain, particularly phytoplankton which are key primary producers.
Marine invertebrates exhibit a wide range of modifications to survive in poorly oxygenated waters, including breathing tubes as in mollusc siphons. Fish have gills instead of lungs, although some species of fish, such as the lungfish, have both. Marine mammals (e.g. dolphins, whales, otters, and seals) need to surface periodically to breathe air.
More than 242,000 marine species have been documented, and perhaps two million marine species are yet to be documented. An average of 2,332 new species per year are being described. |
Marine species range in size from the microscopic like phytoplankton, which can be as small as 0.02 micrometres, to huge cetaceans like the blue whale – the largest known animal, reaching in length. Marine microorganisms, including protists and bacteria and their associated viruses, have been var |
https://en.wikipedia.org/wiki/Widlar%20current%20source | A Widlar current source is a modification of the basic two-transistor current mirror that incorporates an emitter degeneration resistor for only the output transistor, enabling the current source to generate low currents using only moderate resistor values.
The Widlar circuit may be used with bipolar transistors, MOS transistors, and even vacuum tubes. An example application is the 741 operational amplifier, and Widlar used the circuit as a part in many designs.
This circuit is named after its inventor, Bob Widlar, and was patented in 1967.
DC analysis
Figure 1 is an example Widlar current source using bipolar transistors, where the emitter resistor R2 is connected to the output transistor Q2, and has the effect of reducing the current in Q2 relative to Q1. The key to this circuit is that the voltage drop across the resistor R2 subtracts from the base-emitter voltage of transistor Q2, thereby turning this transistor off compared to transistor Q1. This observation is expressed by equating the base voltage expressions found on either side of the circuit in Figure 1 as:
$$V_B = V_{BE1} = V_{BE2} + (I_{C2} + I_{B2}) R_2 = V_{BE2} + \frac{\beta_2 + 1}{\beta_2} I_{C2} R_2,$$
where β2 is the beta-value of the output transistor, which is not the same as that of the input transistor, in part because the currents in the two transistors are very different. The variable IB2 is the base current of the output transistor, VBE refers to base-emitter voltage. This equation implies (using the Shockley diode equation):
$$\frac{\beta_2 + 1}{\beta_2} I_{C2} R_2 = V_T \ln\left(\frac{I_{C1}}{I_{C2}}\right) \qquad \text{(Eq. 1)}$$
where VT is the thermal voltage.
This equation makes the approximation that the currents are both much larger than the scale currents, IS1 and IS2; an approximation valid except for current levels near cut off. In the following, the scale currents are assumed to be identical; in practice, this needs to be specifically arranged.
Design procedure with specified currents
To design the mirror, the output current must be related to the two resistor values R1 and R2. A basic observation is that the output transistor is in active mode only so long as its collector |
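A quick numeric sketch of Eq. 1 used in the design direction, solving for R2 given target currents; the β correction is neglected (β2 → ∞) and the current values are hypothetical:

```python
import math

VT = 0.026  # thermal voltage at room temperature, ~26 mV

def widlar_r2(i_c1, i_c2):
    """R2 that sets output current i_c2 given reference current i_c1 (Eq. 1,
    neglecting base current). Currents in amperes, result in ohms."""
    return VT * math.log(i_c1 / i_c2) / i_c2

# e.g. a 1 mA reference mirrored down to a 10 uA output:
print(widlar_r2(1e-3, 10e-6))   # ~12 kOhm
```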
https://en.wikipedia.org/wiki/Craig%20interpolation | In mathematical logic, Craig's interpolation theorem is a result about the relationship between different logical theories. Roughly stated, the theorem says that if a formula φ implies a formula ψ, and the two have at least one atomic variable symbol in common, then there is a formula ρ, called an interpolant, such that every non-logical symbol in ρ occurs both in φ and ψ, φ implies ρ, and ρ implies ψ. The theorem was first proved for first-order logic by William Craig in 1957. Variants of the theorem hold for other logics, such as propositional logic. A stronger form of Craig's interpolation theorem for first-order logic was proved by Roger Lyndon in 1959; the overall result is sometimes called the Craig–Lyndon theorem.
Example
In propositional logic, let
$$\varphi = \lnot(P \land Q) \to (\lnot R \land Q)$$
$$\psi = (T \to P) \lor (T \to \lnot R).$$
Then $\varphi$ tautologically implies $\psi$. This can be verified by writing $\varphi$ in conjunctive normal form:
$$\varphi \equiv (P \lor \lnot R) \land Q.$$
Thus, if $\varphi$ holds, then $(P \lor \lnot R)$ holds. In turn, $(P \lor \lnot R)$ tautologically implies $\psi$. Because the two propositional variables occurring in $(P \lor \lnot R)$ occur in both $\varphi$ and $\psi$, this means that $(P \lor \lnot R)$ is an interpolant for the implication $\varphi \to \psi$.
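A brute-force truth-table check of both implications in this example; the formulas are encoded directly as Python predicates:

```python
from itertools import product

# a -> b is written as a <= b, which is implication on Python booleans.
phi = lambda P, Q, R, T: (not (P and Q)) <= ((not R) and Q)
rho = lambda P, Q, R, T: P or (not R)                      # the interpolant
psi = lambda P, Q, R, T: (T <= P) or (T <= (not R))

def implies(f, g):
    """Check f -> g over every assignment of the four variables."""
    return all((not f(*v)) or g(*v) for v in product([False, True], repeat=4))

print(implies(phi, rho), implies(rho, psi))   # True True
```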
Lyndon's interpolation theorem
Suppose that S and T are two first-order theories. As notation, let S ∪ T denote the smallest theory including both S and T; the signature of S ∪ T is the smallest one containing the signatures of S and T. Also let S ∩ T be the intersection of the languages of the two theories; the signature of S ∩ T is the intersection of the signatures of the two languages.
Lyndon's theorem says that if S ∪ T is unsatisfiable, then there is an interpolating sentence ρ in the language of S ∩ T that is true in all models of S and false in all models of T. Moreover, ρ has the stronger property that every relation symbol that has a positive occurrence in ρ has a positive occurrence in some formula of S and a negative occurrence in some formula of T, and every relation symbol with a negative occurrence in ρ has a negative occurrence in some formula of S and a positive occurrence in some formula of T.
Proof o |
https://en.wikipedia.org/wiki/Void%20type | The void type, in several programming languages derived from C and Algol68, is the return type of a function that returns normally, but does not provide a result value to its caller. Usually such functions are called for their side effects, such as performing some task or writing to their output parameters. The usage of the void type in such context is comparable to procedures in Pascal and syntactic constructs which define subroutines in Visual Basic. It is also similar to the unit type used in functional programming languages and type theory. See Unit type#In programming languages for a comparison.
C and C++ also support the pointer to void type (specified as void *), but this is an unrelated notion. Variables of this type are pointers to data of an unspecified type, so in this context (but not the others) void * acts roughly like a universal or top type. A program can convert a pointer to any type of data (except a function pointer) to a pointer to void and back to the original type without losing information, which makes these pointers useful for polymorphic functions. The C language standard does not guarantee that the different pointer types have the same size or alignment.
In C and C++
A function with void result type ends either by reaching the end of the function or by executing a return statement with no returned value. The void type may also replace the argument list of a function prototype to indicate that the function takes no arguments. Note that in all of these situations, void is not a type qualifier on any value. Despite the name, this is semantically similar to an implicit unit type, not a zero or bottom type (which is sometimes confusingly called the "void type"). Unlike a real unit type which is a singleton, the void type lacks a way to represent its value and the language does not provide any way to declare an object or represent a value with type void.
In the earliest versions of C, functions with no specific result defaulted to a return t |
https://en.wikipedia.org/wiki/Internet%20Locator%20Server | An Internet Locator Server (abbreviated ILS) is a server that acts as a directory for Microsoft NetMeeting clients. An ILS is not necessary within a local area network and some wide area networks in the Internet because one participant can type in the IP address of the other participant's host and call them directly. An ILS becomes necessary when one participant is trying to contact a host who has a private IP address internal to a local area network that is inaccessible to the outside world, or when the host is blocked by a firewall. An ILS is also useful when a participant has a different IP address during each session, e.g., assigned by the Dynamic Host Configuration Protocol.
There are two main approaches to using Internet Location Servers: use a public server on the Internet, or run and use a private server.
Private Internet Location Server
The machine running an Internet Location Server must have a public IP address.
If the network running an Internet Location Server has a firewall, it is usually necessary to run the server in the demilitarized zone of the network.
Microsoft Windows includes an Internet Location Server. It can be installed in the Control Panel using Add/Remove Windows Components, under "Networking Services" (Site Server ILS Services).
The Internet Location Server (ILS) included in Microsoft Windows 2000 offers service on port 1002, while the latest version of NetMeeting requests service from port 389. The choice of 1002 was to avoid conflict with Windows 2000's domain controllers, which use LDAP and Active Directory on port 389, as well as Microsoft Exchange Server 2000, which uses port 389. If the server is running neither Active Directory nor Microsoft Exchange Server, the Internet Location Server's port can be changed to 389 using the following command at a command prompt:
ILSCFG [servername] /port 389
Additional firewall issues
Internet Location Servers do not address two other issues with using NetMeeting behind a firewall. First, |
https://en.wikipedia.org/wiki/Research%20exemption | In patent law, the research exemption or safe harbor exemption is an exemption to the rights conferred by patents, which is especially relevant to drugs. According to this exemption, despite the patent rights, performing research and tests for preparing regulatory approval, for instance by the FDA in the United States, does not constitute infringement for a limited term before the end of patent term. This exemption allows generic manufacturers to prepare generic drugs in advance of the patent expiration.
In the United States, this exemption is also technically called § 271(e)(1) exemption or Hatch-Waxman exemption. In 2005, the U.S. Supreme Court considered the scope of the Hatch-Waxman exemption in Merck v. Integra. The Supreme Court held that the statute exempts from infringement all uses of compounds that are reasonably related to submission of information to the government under any law regulating the manufacture, use or distribution of drugs.
In Canada, this exemption is known as the Bolar provision or Roche-Bolar provision, named after the case Roche Products v. Bolar Pharmaceutical.
In the European Union, equivalent exemptions are allowed under the terms of EC Directives 2001/82/EC (as amended by Directive 2004/28/EC) and 2001/83/EC (as amended by Directives 2002/98/EC, 2003/63/EC, 2004/24/EC and 2004/27/EC).
Common law research exemption
The common law research exemption is an affirmative defense to infringement where the alleged infringer is using a patented invention for research purposes. The doctrine originated in Whittemore v. Cutter, 29 Fed. Cas. 1120 (C.C.D. Mass. 1813), an 1813 appellate decision by Justice Joseph Story. Story famously wrote that the intent of the legislature could not have been to punish someone who infringes "merely for [scientific] experiments, or for the purpose of ascertaining the sufficiency of the machine to produce its described effects." Subsequent decisions later distinguished between commercial and non-commer
https://en.wikipedia.org/wiki/Packet%20Storm | Packet Storm Security is an information security website offering current and historical computer security tools, exploits, and security advisories. It is operated by a group of security enthusiasts that publish new security information and offer tools for educational and testing purposes.
Overview
The site was originally created by Ken Williams who sold it in 1999 to Kroll O'Gara and just over a year later, it was given back to the security community. While at Kroll O'Gara, Packet Storm awarded Mixter $10,000 in a whitepaper contest dedicated to the mitigation of distributed denial of service attacks. Today, they offer a suite of consulting services and the site is referenced in hundreds of books.
In 2013, Packet Storm launched a bug bounty program to buy working exploits that would be given back to the community for their own testing purposes. Later that year, they worked with a security researcher to help expose a large scale shadow profile issue with the popular Internet site Facebook. After Facebook claimed that only 6 million people were affected, additional testing by Packet Storm exposed that the numbers were not accurately reported. |
https://en.wikipedia.org/wiki/Tf%E2%80%93idf | In information retrieval, tf–idf (also TF*IDF, TFIDF, TF–IDF, or Tf–idf), short for term frequency–inverse document frequency, is a measure of importance of a word to a document in a collection or corpus, adjusted for the fact that some words appear more frequently in general. It was often used as a weighting factor in searches of information retrieval, text mining, and user modeling. A survey conducted in 2015 showed that 83% of text-based recommender systems in digital libraries used tf–idf.
Variations of the tf–idf weighting scheme were often used by search engines as a central tool in scoring and ranking a document's relevance given a user query.
One of the simplest ranking functions is computed by summing the tf–idf for each query term; many more sophisticated ranking functions are variants of this simple model.
Motivations
Karen Spärck Jones (1972) conceived a statistical interpretation of term-specificity called Inverse Document Frequency (idf), which became a cornerstone of term weighting.
For example, consider how some words are distributed across Shakespeare's 37 plays: "Romeo", "Falstaff", and "salad" each appear in very few plays, so seeing these words, one could get a good idea as to which play it might be. In contrast, "good" and "sweet" appear in every play and are completely uninformative as to which play it is.
Definition
The tf–idf is the product of two statistics, term frequency and inverse document frequency. There are various ways for determining the exact values of both statistics.
A formula that aims to define the importance of a keyword or phrase within a document or a web page.
Term frequency
Term frequency, tf(t, d), is the relative frequency of term t within document d,

tf(t, d) = f(t, d) / Σ_{t′∈d} f(t′, d),

where f(t, d) is the raw count of a term in a document, i.e., the number of times that term t occurs in document d. Note the denominator is simply the total number of terms in document d (counting each occurrence of the same term separately). There are various other
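A minimal sketch of the computation (added for illustration; it uses the standard unsmoothed idf(t) = log(N / n_t), with n_t the number of documents containing t, which the truncated text above does not spell out):

import math
from collections import Counter

def tf(term, doc):
    return Counter(doc)[term] / len(doc)        # relative frequency in doc

def idf(term, corpus):
    n_t = sum(term in doc for doc in corpus)    # documents containing term
    return math.log(len(corpus) / n_t)

def tf_idf(term, doc, corpus):
    return tf(term, doc) * idf(term, corpus)

corpus = [["romeo", "loves", "juliet"],
          ["the", "king", "is", "good"],
          ["good", "night", "sweet", "prince"]]
print(tf_idf("romeo", corpus[0], corpus))  # high: "romeo" is rare in the corpus
print(tf_idf("good", corpus[1], corpus))   # lower: "good" is in two of three docs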
https://en.wikipedia.org/wiki/Joseph%20Reed%20%28politician%29 | Joseph Reed (August 27, 1741March 5, 1785) was a Founding Father of the United States and a lawyer, military officer, and statesman of the American Revolutionary Era who lived the majority of his life in Pennsylvania. He served as a delegate to the Continental Congress and, while in Congress, signed the Articles of Confederation. He also served as President of Pennsylvania's Supreme Executive Council, during the American Revolutionary War, a position analogous to the modern office of Governor.
Early life
Reed was born in Trenton in the Province of New Jersey in 1741, the son of Andrew Reed, a shopkeeper and merchant, and Theodosia Bowes. His grandfather, Joseph Reed (1650–1727), was born in Carrickfergus, County Antrim in Ulster and settled in West Jersey. His brother, Bowes Reed (1740–1794), would serve as a colonel in the Revolutionary War and as Secretary of State of New Jersey. The family moved to Philadelphia shortly after Reed's birth and, as a boy, Reed was enrolled at Philadelphia Academy (later to be known as the University of Pennsylvania). He received his bachelor's degree from the College of New Jersey (later known as Princeton University) in 1757 and, soon after, began his professional education under Richard Stockton. In the summer of 1763, Reed sailed for England, where, for two years, he continued his studies in law at Middle Temple in London. Shortly after his studies ended in 1768, Reed was elected to the American Philosophical Society as a member.
During the course of his studies, Reed became romantically attached to Esther de Berdt, the daughter of the agent for the Province of Massachusetts Bay, Dennis de Berdt. Though very fond of Reed, de Berdt was aware of Reed's intention to return to Philadelphia and initially refused consent for Esther to marry him. Reed returned to the Colonies with only a tenuous engagement to Esther, and with an understanding that he would return to settle permanently in Great Britain shortly after. Following the deat |
https://en.wikipedia.org/wiki/Stable%20polynomial | In the context of the characteristic polynomial of a differential equation or difference equation, a polynomial is said to be stable if either:
all its roots lie in the open left half-plane, or
all its roots lie in the open unit disk.
The first condition provides stability for continuous-time linear systems, and the second case relates to stability
of discrete-time linear systems. A polynomial with the first property is called at times a Hurwitz polynomial and with the second property a Schur polynomial. Stable polynomials arise in control theory and in mathematical theory
of differential and difference equations. A linear, time-invariant system (see LTI system theory) is said to be BIBO stable if every bounded input produces bounded output. A linear system is BIBO stable if its characteristic polynomial is stable. The denominator is required to be Hurwitz stable if the system is in continuous-time and Schur stable if it is in discrete-time. In practice, stability is determined by applying any one of several stability criteria.
Properties
The Routh–Hurwitz theorem provides an algorithm for determining if a given polynomial is Hurwitz stable, which is implemented in the Routh–Hurwitz and Liénard–Chipart tests.
To test if a given polynomial P (of degree d) is Schur stable, it suffices to apply this theorem to the transformed polynomial

Q(z) = (z − 1)^d P((z + 1)/(z − 1)),

obtained after the Möbius transformation z ↦ (z + 1)/(z − 1), which maps the left half-plane to the open unit disc: P is Schur stable if and only if Q is Hurwitz stable and P(1) ≠ 0. For higher degree polynomials the extra computation involved in this mapping can be avoided by testing the Schur stability by the Schur-Cohn test, the Jury test or the Bistritz test.
Necessary condition: a Hurwitz stable polynomial (with real coefficients) has coefficients of the same sign (either all positive or all negative).
Sufficient condition: a polynomial a_d·z^d + a_{d−1}·z^(d−1) + ⋯ + a_1·z + a_0 with (real) coefficients such that

a_d > a_{d−1} > ⋯ > a_1 > a_0 > 0

is Schur stable.
Product rule: Two polynomials f and g are stable (of th |
https://en.wikipedia.org/wiki/Substructure%20%28mathematics%29 | In mathematical logic, an (induced) substructure or (induced) subalgebra is a structure whose domain is a subset of that of a bigger structure, and whose functions and relations are restricted to the substructure's domain. Some examples of subalgebras are subgroups, submonoids, subrings, subfields, subalgebras of algebras over a field, or induced subgraphs. Shifting the point of view, the larger structure is called an extension or a superstructure of its substructure.
In model theory, the term "submodel" is often used as a synonym for substructure, especially when the context suggests a theory of which both structures are models.
In the presence of relations (i.e. for structures such as ordered groups or graphs, whose signature is not functional) it may make sense to relax the conditions on a subalgebra so that the relations on a weak substructure (or weak subalgebra) are at most those induced from the bigger structure. Subgraphs are an example where the distinction matters, and the term "subgraph" does indeed refer to weak substructures. Ordered groups, on the other hand, have the special property that every substructure of an ordered group which is itself an ordered group, is an induced substructure.
Definition
Given two structures A and B of the same signature σ, A is said to be a weak substructure of B, or a weak subalgebra of B, if
the domain of A is a subset of the domain of B,
f^A = f^B|A^n for every n-ary function symbol f in σ, and
R^A ⊆ R^B ∩ A^n for every n-ary relation symbol R in σ.
A is said to be a substructure of B, or a subalgebra of B, if A is a weak subalgebra of B and, moreover,
R^A = R^B ∩ A^n for every n-ary relation symbol R in σ.
If A is a substructure of B, then B is called a superstructure of A or, especially if A is an induced substructure, an extension of A.
Example
In the language consisting of the binary functions + and ×, binary relation <, and constants 0 and 1, the structure (Q, +, ×, <, 0, 1) is a substructure of (R, +, ×, |
https://en.wikipedia.org/wiki/PICMG | PICMG, or PCI Industrial Computer Manufacturers Group, is a consortium of over 140 companies. Founded in 1994, the group was originally formed to adapt PCI technology for use in high-performance telecommunications, military, and industrial computing applications, but its work has grown to include newer technologies. PICMG is distinct from the similarly named and adjacently-focused PCI Special Interest Group (PCI-SIG).
PICMG currently focuses on developing and implementing specifications and guidelines for open standards-based computer architectures built on a wide variety of interconnects.
Background
PICMG is a standards development organization in the embedded computing industry. Members work collaboratively to develop new specifications and enhancements to existing ones. Members benefit from participating in standards development, gaining early access to leading-edge technology, and forging relationships with thought leaders and suppliers in the industry.
The original PICMG mission was to provide extensions to the PCI standard developed by PCI-SIG for a range of applications. The organization's collaborations eventually expanded to include a variety of interconnect technologies for industrial computing and telecommunications. PICMG's specifications are used in a wide variety of industries including industrial automation, military, aerospace, telecommunications, medical, gaming, transportation, physics/research, test and measurement, energy, drone/robotics, and general embedded computing.
In 2011, PICMG completed its transfer of assets from the Communications Platforms Trade Association (CP-TA). Since 2006, CP-TA had been a collaboration of communications vendors, developing interoperability testing requirements, methodologies, and procedures based on open specifications from PICMG, The Linux Foundation, and the Service Availability Forum. PICMG has continued the educational and marketing outreach formerly conducted by members of the CP-TA marketing work group.
t |
https://en.wikipedia.org/wiki/Twelf | Twelf is an implementation of the logical framework LF developed by Frank Pfenning and Carsten Schürmann at Carnegie Mellon University. It is used for logic programming and for the formalization of programming language theory.
Introduction
At its simplest, a Twelf program (called a "signature") is a collection of declarations of type families (relations) and constants that inhabit those type families. For example, the following is the standard definition of the natural numbers, with z standing for zero and s the successor operator.
nat : type.
z : nat.
s : nat -> nat.
Here nat is a type, and z and s are constant terms. As a dependently typed system, types can be indexed by terms, which allows the definition of more interesting type families. Here is a definition of addition:
plus : nat -> nat -> nat -> type.
plus_zero : {M:nat} plus M z M.
plus_succ : {M:nat} {N:nat} {P:nat}
plus M (s N) (s P)
<- plus M N P.
The type family plus is read as a relation between three natural numbers M, N and P, such that M + N = P. We then give the constants that define the relation: the constant plus_zero indicates that M + z = M. The quantifier {M:nat} can be read as "for all M of type nat".
The constant plus_succ defines the case for when the second argument is the successor of some other number N (see pattern matching). The result is the successor of P, where P is the sum of M and N. This recursive call is made via the subgoal plus M N P, introduced with <-. The arrow <- can be understood operationally as Prolog's :-, or as logical implication ("if M + N = P, then M + (s N) = (s P)"), or most faithfully to the type theory, as the type of the constant plus_succ ("when given a term of type plus M N P, return a term of type plus M (s N) (s P)").
Twelf features type reconstruction and supports implicit parameters, so in practice, one usually does not need to explicitly write the {M:nat} quantifiers (etc.) above.
These simple examples do not display LF's higher-order features, nor any of its theorem checking capabilities. See the Twelf distribution for its included examples.
Uses
Twelf is used |
https://en.wikipedia.org/wiki/ICT%201301 | The ICT 1301 and its smaller derivative ICT 1300 were early business computers from International Computers and Tabulators. Typical of mid-sized machines of the era, they used core memory, drum storage and punched cards, but they were unusual in that they were based on decimal logic instead of binary.
Description
The 1301 was the main machine in the line. Its main memory came in increments of 400 words of 48 bits (12 decimal digits or 12 four-bit binary values, 0-15) plus two parity bits. The maximum size was 4,000 words. It was the first ICT machine to use core memory.
Backing store was magnetic drum and optionally one-inch-, half-inch- or quarter-inch-wide magnetic tape. Input was from 80-column punched cards and optionally 160-column punched cards and punched paper tape. Output was to 80-column punched cards, line printer, and optionally to punched paper tape.
The machine ran at a clock speed of 1 MHz and its arithmetic logic unit (ALU) operated on data in a serial-parallel fashion—the 48-bit words were processed sequentially four bits at a time. A simple addition took 21 clock cycles; hardware multiplication averaged 170 clock cycles per digit; and division was performed in software.
A typical 1301 requires 700 square feet (65 square metres) of floor space and weighs about . It consumes about 13 kVA of three-phase electric power. The electronics consist of over 4,000 printed circuit boards each with many germanium diodes (mainly OA5), germanium transistors (mainly Mullard GET872), resistors, capacitors, inductors, and a handful of thermionic valves and a few dozen relays operated when buttons were pressed. Integrated circuits were not available commercially at the time.
History
The 1301 was designed by an ICT and GEC joint subsidiary, Computer Developments Limited (CDL) at GEC's Coventry site formed in 1956. CDL was taken over by ICT, but the 1301 was built at the GEC site as ICT lacked the manufacturing capability at that time.
The computer was announce |
https://en.wikipedia.org/wiki/Planula | A planula is the free-swimming, flattened, ciliated, bilaterally symmetric larval form of various cnidarian species, and also occurs in some species of ctenophores. Some groups of nemerteans also produce larvae that are very similar to the planula, which are called planuliform larvae.
Development
The planula forms either from the fertilized egg of a medusa, as is the case in scyphozoans and some hydrozoans, or from a polyp, as in the case of anthozoans.
Depending on the species, the planula either metamorphoses directly into a free-swimming, miniature version of the mobile adult form, or navigates through the water until it reaches a hard substrate (many may prefer specific substrates) where it anchors and grows into a polyp. The miniature-adult types include many open-ocean scyphozoans. The attaching types include all anthozoans with a planula stage, many coastal scyphozoans, and some hydrozoans.
Feeding and locomotion
The planulae of the subphylum Medusozoa have no mouth, and no digestive tract, and are unable to feed themselves, while those of Anthozoa can feed.
Planula larvae swim with the aboral end (the end opposite the mouth) in front. |
https://en.wikipedia.org/wiki/Pre-algebra | Prealgebra is a common name for a course in middle school mathematics in the United States, usually taught in the 7th grade or 8th grade. The objective of it is to prepare students for the study of algebra. Usually, algebra is taught in the 8th and 9th grade.
As an intermediate stage after arithmetic, prealgebra helps students pass specific conceptual barriers. Students are introduced to the idea that an equals sign, rather than just being the answer to a question as in basic arithmetic, means that two sides are equivalent and can be manipulated together. They also learn how numbers, variables, and words can be used in the same ways.
Subjects
Subjects taught in a prealgebra course may include:
Review of natural number arithmetic
Types of numbers such as integers, fractions, decimals and negative numbers
Ratios and percents
Factorization of natural numbers
Properties of operations such as associativity and distributivity
Simple (integer) roots and powers
Rules of evaluation of expressions, such as operator precedence and use of parentheses
Basics of equations, including rules for invariant manipulation of equations
Understanding of variable manipulation
Manipulation and plotting in the standard 4-quadrant Cartesian coordinate plane
Powers in scientific notation (example: 340,000,000 in scientific notation is 3.4 × 10^8)
Identifying probability
Solving square roots
Pythagorean theorem
Prealgebra may include subjects from geometry, especially to further the understanding of algebra in applications to area and volume.
Prealgebra may also include subjects from statistics to identify probability and interpret data.
Proficiency in prealgebra is an indicator of college success. It can also be taught as a remedial course for college students.
See also
Precalculus
Mathematics education in the United States |
https://en.wikipedia.org/wiki/Six%20Degrees%3A%20The%20Science%20of%20a%20Connected%20Age | Six Degrees: The Science of a Connected Age (2004 in paperback, and 2003 in hardcover, ) is a popular science book by Duncan J. Watts covering the application of network theory to sociology.
The book covers Watts' own work on small-world networks, and continues on to cover scale-free networks, network searching, epidemics and network failures, social decision-making, thresholds in networks, and the presence or absence of innovation in large organizations.
In addition to covering the theoretical models and empirical case studies, the book also includes several stories about the character of the researchers who developed this science and their relationships with each other.
The case studies used include blackouts in the North American electricity distribution network, the relationships among members of corporate boards of directors, the distribution of wealth in societies, peer-to-peer file-sharing systems, computer viruses, economic bubbles, and the 1997 Aisin fire crisis at Toyota.
See also
Social network |
https://en.wikipedia.org/wiki/Scientific%20community%20metaphor | In computer science, the scientific community metaphor is a metaphor used to aid understanding scientific communities. The first publications on the scientific community metaphor in 1981 and 1982 involved the development of a programming language named Ether that invoked procedural plans to process goals and assertions concurrently by dynamically creating new rules during program execution. Ether also addressed issues of conflict and contradiction with multiple sources of knowledge and multiple viewpoints.
Development
The scientific community metaphor builds on the philosophy, history and sociology of science. It was originally developed building on work in the philosophy of science by Karl Popper and Imre Lakatos. In particular, it initially made use of Lakatos' work on proofs and refutations. Subsequently, development has been influenced by the work of Geof Bowker, Michel Callon, Paul Feyerabend, Elihu M. Gerson, Bruno Latour, John Law, Karl Popper, Susan Leigh Star, Anselm Strauss, and Lucy Suchman.
In particular Latour's Science in Action had great influence. In the book, Janus figures make paradoxical statements about scientific development. An important challenge for the scientific community metaphor is to reconcile these paradoxical statements.
Qualities of scientific research
Scientific research depends critically on monotonicity, concurrency, commutativity, and pluralism to propose, modify, support, and oppose scientific methods, practices, and theories.
Quoting from Carl Hewitt, scientific community metaphor systems have characteristics of monotonicity, concurrency, commutativity, pluralism, skepticism and provenance.
monotonicity: Once something is published it cannot be undone. Scientists publish their results so they are available to all. Published work is collected and indexed in libraries. Scientists who change their mind can publish later articles contradicting earlier ones.
concurrency: Scientists can work concurrently, overlapping in |
https://en.wikipedia.org/wiki/SIGMA%20%28verification%20service%29 | SIGMA is an electronic verification service offered by Nielsen Media Research and is generally used for commercials, infomercials, video news releases, public service announcements, satellite media tours, and electronic press kits.
It operates by encoding the SIGMA encoder ID, date of encoding, and time of encoding in lines 20 and 22 of the video signal, which is outside of the area displayed on a normal television screen (this is similar to how closed captioning is transmitted).
On a professional video monitor with underscan capability activated or a computer display of the entire video frame, the SIGMA data will look like small, moving white lines at the top of the frame. Nielsen provides overnight reports of airplay in all television markets in the country. |
https://en.wikipedia.org/wiki/List%20of%20set%20theory%20topics | This page is a list of articles related to set theory.
Articles on individual set theory topics
Lists related to set theory
Glossary of set theory
List of large cardinal properties
List of properties of sets of reals
List of set identities and relations
Set theorists
Societies and organizations
Association for Symbolic Logic
The Cabal
Topics
Set theory |
https://en.wikipedia.org/wiki/Biological%20soil%20crust | Biological soil crusts are communities of living organisms on the soil surface in arid and semi-arid ecosystems. They are found throughout the world with varying species composition and cover depending on topography, soil characteristics, climate, plant community, microhabitats, and disturbance regimes. Biological soil crusts perform important ecological roles including carbon fixation, nitrogen fixation and soil stabilization; they alter soil albedo and water relations and affect germination and nutrient levels in vascular plants. They can be damaged by fire, recreational activity, grazing and other disturbances and can require long time periods to recover composition and function. Biological soil crusts are also known as biocrusts or as cryptogamic, microbiotic, microphytic, or cryptobiotic soils.
Natural history
Biology and composition
Biological soil crusts are most often composed of fungi, lichens, cyanobacteria, bryophytes, and algae in varying proportions. These organisms live in intimate association in the uppermost few millimeters of the soil surface, and are the biological basis for the formation of soil crusts.
Cyanobacteria
Cyanobacteria are the main photosynthetic component of biological soil crusts, in addition to other photosynthetic taxa such as mosses, lichens, and green algae. The most common cyanobacteria found in soil crusts belong to large filamentous species such as those in the genus Microcoleus. These species form bundled filaments that are surrounded by a gelatinous sheath of polysaccharides. These filaments bind soil particles throughout the uppermost soil layers, forming a 3-D net-like structure that holds the soil together in a crust. Other common cyanobacteria species are those in the genus Nostoc, which can also form sheaths and sheets of filaments that stabilize the soil. Some Nostoc species are also able to fix atmospheric nitrogen gas into bio-available forms such as ammonia.
Bryophytes
Bryophytes in soil crusts include moss |
https://en.wikipedia.org/wiki/K-function | In mathematics, the K-function, typically denoted K(z), is a generalization of the hyperfactorial to complex numbers, similar to the generalization of the factorial to the gamma function.
Definition
Formally, the K-function is defined as

K(z) = (2π)^((1−z)/2) exp( z(z − 1)/2 + ∫_0^(z−1) ln Γ(t + 1) dt ).

It can also be given in closed form as

K(z) = exp( ζ′(−1, z) − ζ′(−1) ),

where ζ′(−1) denotes the derivative of the Riemann zeta function evaluated at −1, and ζ′(−1, z) denotes the derivative of the Hurwitz zeta function with respect to its first argument, evaluated at (−1, z).
Another expression using the polygamma function is
Or using the balanced generalization of the polygamma function:
where A is the Glaisher constant.
Similar to the Bohr-Mollerup Theorem for the gamma function, the log K-function is the unique (up to an additive constant) eventually 2-convex solution to the equation (Δ ln K)(z) = z ln z, where Δ is the forward difference operator.
Properties
It can be shown that for :
This can be shown by defining a function such that:
Differentiating this identity now with respect to yields:
Applying the logarithm rule we get
By the definition of the K-function we write
And so
Setting we have
Now one can deduce the identity above.
The K-function is closely related to the gamma function and the Barnes G-function; for natural numbers n, we have

K(n) = (Γ(n))^(n−1) / G(n).

More prosaically, one may write

K(n + 1) = 1^1 · 2^2 · 3^3 ⋯ n^n.
The first values are
1, 4, 108, 27648, 86400000, 4031078400000, 3319766398771200000, ... . |
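A quick numerical check of these integer values and of the relation to the Barnes G-function (an added illustration in Python):

from math import factorial, prod

def K(n):
    """K at positive integers: K(n) = 1^1 * 2^2 * ... * (n-1)^(n-1)."""
    return prod(k**k for k in range(1, n))

def barnes_G(n):
    """Barnes G at positive integers: G(n) = 0! * 1! * ... * (n-2)!."""
    return prod(factorial(k) for k in range(n - 1))

print([K(n) for n in range(2, 7)])   # [1, 4, 108, 27648, 86400000]
assert all(K(n) * barnes_G(n) == factorial(n - 1) ** (n - 1)
           for n in range(2, 8))     # K(n) = (Γ(n))^(n-1) / G(n)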
https://en.wikipedia.org/wiki/Barnes%20G-function | In mathematics, the Barnes G-function G(z) is a function that is an extension of superfactorials to the complex numbers. It is related to the gamma function, the K-function and the Glaisher–Kinkelin constant, and was named after mathematician Ernest William Barnes. It can be written in terms of the double gamma function.
Formally, the Barnes G-function is defined in the following Weierstrass product form:

G(1 + z) = (2π)^(z/2) exp( −(z + z²(1 + γ))/2 ) ∏_{k=1}^∞ (1 + z/k)^k exp( z²/(2k) − z ),

where γ is the Euler–Mascheroni constant, exp(x) = e^x is the exponential function, and ∏ denotes multiplication (capital pi notation).
The integral representation, which may be deduced from the relation to the double gamma function, is
As an entire function, G is of order two, and of infinite type. This can be deduced from the asymptotic expansion given below.
Functional equation and integer arguments
The Barnes G-function satisfies the functional equation

G(z + 1) = Γ(z) G(z),

with normalisation G(1) = 1. Note the similarity between the functional equation of the Barnes G-function and that of the Euler gamma function:

Γ(z + 1) = z Γ(z).
The functional equation implies that G takes the following values at integer arguments:

G(n) = 0 for n = 0, −1, −2, …, and G(n) = 0!·1!·2! ⋯ (n − 2)! for n = 1, 2, …

(in particular, G(1) = G(2) = 1), and thus

G(n) = (Γ(n))^(n−1) / K(n),

where Γ denotes the gamma function and K denotes the K-function. The functional equation uniquely defines the Barnes G-function if the convexity condition,

d³/dz³ log G(z) ≥ 0 for z > 0,

is added. Additionally, the Barnes G-function satisfies a duplication formula.
Characterisation
Similar to the Bohr-Mollerup theorem for the gamma function, for a constant , we have for
and for
as .
Value at 1/2
G(1/2) = 2^(1/24) · e^(1/8) · π^(−1/4) · A^(−3/2), where A is the Glaisher–Kinkelin constant.
Reflection formula
The difference equation for the G-function, in conjunction with the functional equation for the gamma function, can be used to obtain the following reflection formula for the Barnes G-function (originally proved by Hermann Kinkelin):
The logtangent integral on the right-hand side can be evaluated in terms of the Clausen function (of order 2), as is shown below:
The proof of this result hinges on the follow |
https://en.wikipedia.org/wiki/Avalanche%20%28P2P%29 | Avalanche is the name of a proposed peer-to-peer (P2P) network created by Pablo Rodriguez and Christos Gkantsidis at Microsoft, which claims to offer improved scalability and bandwidth efficiency compared to existing P2P systems.
The proposed system works in a similar way to BitTorrent, but aims to address some of its shortcomings. Like BitTorrent, Avalanche splits the file to be distributed into small blocks. However, rather than peers simply transmitting the blocks, they transmit random linear combinations of the blocks along with the random coefficients of this linear combination - a technique known as 'network coding'. This technique removes the need for each peer to have complex knowledge of block distribution across the network (an aspect of BitTorrent-like protocols which the paper claims does not scale very well).
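A toy sketch of the network-coding idea over GF(2) (an illustration added here, not Microsoft's implementation, which works over a larger finite field): a sender emits random XOR-combinations of the blocks together with the coefficient vector, and a receiver recovers the blocks by Gaussian elimination once enough independent combinations have arrived.

import random

def random_combination(blocks):
    """XOR together a random nonzero subset of blocks; return (coeffs, data)."""
    coeffs = [random.randint(0, 1) for _ in blocks]
    if not any(coeffs):
        coeffs[0] = 1
    data = bytes(len(blocks[0]))
    for c, b in zip(coeffs, blocks):
        if c:
            data = bytes(x ^ y for x, y in zip(data, b))
    return coeffs, data

def decode(packets, n):
    """Gaussian elimination over GF(2); returns the n blocks, or None if the
    received combinations do not yet have full rank."""
    rows = [(list(c), bytearray(d)) for c, d in packets]
    for col in range(n):
        pivot = next((i for i in range(col, len(rows)) if rows[i][0][col]), None)
        if pivot is None:
            return None
        rows[col], rows[pivot] = rows[pivot], rows[col]
        for i in range(len(rows)):
            if i != col and rows[i][0][col]:
                rows[i] = ([a ^ b for a, b in zip(rows[i][0], rows[col][0])],
                           bytearray(a ^ b for a, b in zip(rows[i][1], rows[col][1])))
    return [bytes(rows[i][1]) for i in range(n)]

blocks = [b"AAAA", b"BBBB", b"CCCC"]
packets, decoded = [], None
while decoded is None:                 # keep collecting combinations
    packets.append(random_combination(blocks))
    decoded = decode(packets, len(blocks))
print(decoded == blocks)               # True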
Bram Cohen, the creator of BitTorrent, criticized the proposed Avalanche system in a post to his blog. He mentions inaccuracies in the paper's analysis of the BitTorrent protocol (some of it being based on a 4-years-out-of-date version of the protocol which used an algorithm that "sucks") and describes the paper as "garbage." |
https://en.wikipedia.org/wiki/Pair%20by%20association | In relation to psychology, pair by association is the action of associating a stimulus with an arbitrary idea or object, eliciting a response, usually emotional. This is done by repeatedly pairing the stimulus with the arbitrary object.
For example, images of beautiful women in bathing suits elicit a sexual response in most men. Advertising agencies repeatedly pair products with attractive women in television commercials with the intention of eliciting an emotional or sexually aroused response in the consumer. This causes the consumer to be more likely to buy the product than when presented with a similar product without such an association.
Hippocampus
The hippocampal area, beyond its importance in episodic memory, is in part responsible for the creation and storage of associations in memory, especially item associations. Furthermore, as stated by Gilbert & Kesner, the associations that are created are those that might be “critical” in paired-associative learning. Through studies on rats, it has been found that lesions to the hippocampus lead to object-place associative learning impairments. As Gilbert & Kesner state, these findings that hippocampal damage impairs object-place association have been seen not only in rodents but also in non-human primates and humans. Associations learned before the damage to the hippocampal area were not impaired. Gilbert & Kesner have suggested in their work that this ability to still recall previously stored associations may be due to modified synapses in an “auto-associative network”.
Pair by Association Task
Paired association learning can be defined as a system of learning in which items (such as words, letters, numbers, symbols etc.) are matched so that presentation of one member of the pair will cue the recall of the other member. It is this learning which constitutes the basics in a paired-associate task. These tasks can be divided into the |
https://en.wikipedia.org/wiki/T-cell%20receptor | The T-cell receptor (TCR) is a protein complex found on the surface of T cells, or T lymphocytes, that is responsible for recognizing fragments of antigen as peptides bound to major histocompatibility complex (MHC) molecules. The binding between TCR and antigen peptides is of relatively low affinity and is degenerate: that is, many TCRs recognize the same antigen peptide and many antigen peptides are recognized by the same TCR.
The TCR is composed of two different protein chains (that is, it is a heterodimer). In humans, in 95% of T cells the TCR consists of an alpha (α) chain and a beta (β) chain (encoded by TRA and TRB, respectively), whereas in 5% of T cells the TCR consists of gamma and delta (γ/δ) chains (encoded by TRG and TRD, respectively). This ratio changes during ontogeny and in diseased states (such as leukemia). It also differs between species. Orthologues of the 4 loci have been mapped in various species. Each locus can produce a variety of polypeptides with constant and variable regions.
When the TCR engages with antigenic peptide and MHC (peptide/MHC), the T lymphocyte is activated through signal transduction, that is, a series of biochemical events mediated by associated enzymes, co-receptors, specialized adaptor molecules, and activated or released transcription factors. Based on the initial receptor triggering mechanism, the TCR belongs to the family of non-catalytic tyrosine-phosphorylated receptors (NTRs).
History
In 1982, Nobel laureate James P. Allison first discovered a clonally expressed T-cell surface epitope in murine T lymphoma. In 1983, Ellis Reinherz first defined the structure of the human T-cell receptor using anti-idiotypic monoclonal antibodies to T-cell clones, complemented by studies in the mouse by Pippa Marrack and John Kappler. Then, Tak Wah Mak and Mark M. Davis identified the cDNA clones encoding the human and mouse TCR respectively in 1984. These findings allowed the entity and structure of the elusive TCR, known before |
https://en.wikipedia.org/wiki/IBM%20SSEC | The IBM Selective Sequence Electronic Calculator (SSEC) was an electromechanical computer built by IBM. Its design was started in late 1944 and it operated from January 1948 to August 1952. It had many of the features of a stored-program computer, and was the first operational machine able to treat its instructions as data, but it was not fully electronic.
Although the SSEC proved useful for several high-profile applications, it soon became obsolete. As the last large electromechanical computer ever built, its greatest success was the publicity it provided for IBM.
History
During World War II, International Business Machines Corporation (IBM) funded and built an Automatic Sequence Controlled Calculator (ASCC) for Howard H. Aiken at Harvard University. The machine, formally dedicated in August 1944, was widely known as the Harvard Mark I. The President of IBM, Thomas J. Watson Sr., did not like Aiken's press release that gave no credit to IBM for its funding and engineering effort. Watson and Aiken decided to go their separate ways, and IBM began work on a project to build their own larger and more visible machine.
Astronomer Wallace John Eckert of Columbia University provided specifications for the new machine; the project budget of almost $1 million was an immense amount for the time.
Francis "Frank" E. Hamilton (1898–1972) supervised the construction of both the ASCC and the SSEC. Robert Rex Seeber Jr. was also hired away from the Harvard group, and became known as the chief architect of the new machine.
Modules were manufactured in IBM's facility at Endicott, New York, under Director of Engineering John McPherson after the basic design was ready in December 1945.
Construction
The February 1946 announcement of the fully electronic ENIAC energized the project.
The new machine, called the IBM Selective Sequence Electronic Calculator (SSEC), was ready to be installed by August 1947.
Watson called such machines calculators because computer then referred to humans e |
https://en.wikipedia.org/wiki/Backward%20induction | Backward induction is the process of reasoning backwards in time, from the end of a problem or situation, to determine a sequence of optimal actions. It proceeds by examining the last point at which a decision is to be made and then identifying what action would be most optimal at that moment. Using this information, one can then determine what to do at the second-to-last time of decision. This process continues backwards until one has determined the best action for every possible situation (i.e. for every possible information set) at every point in time. Backward induction was first used in 1875 by Arthur Cayley, who discovered the method while trying to solve the Secretary problem.
In the mathematical optimization method of dynamic programming, backward induction is one of the main methods for solving the Bellman equation. In game theory, backward induction is a method used to compute subgame perfect equilibria in sequential games. The only difference is that optimization involves just one decision maker, who chooses what to do at each point of time, whereas game theory analyzes how the decisions of several players interact. That is, by anticipating what the last player will do in each situation, it is possible to determine what the second-to-last player will do, and so on. In the related fields of automated planning and scheduling and automated theorem proving, the method is called backward search or backward chaining. In chess it is called retrograde analysis.
Backward induction has been used to solve games as long as the field of game theory has existed. John von Neumann and Oskar Morgenstern suggested solving zero-sum, two-person games by backward induction in their Theory of Games and Economic Behavior (1944), the book which established game theory as a field of study.
Decision making
Optimal-stopping problem
An unemployed person who will be able to work for ten more years t = 1,2,...,10 may be offered a 'good' job that pays $100, or a 'bad' job that pa |
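The computation this section describes can be sketched in a few lines (the parameters beyond those stated above are assumptions for illustration: the 'bad' job is taken to pay $44 per year, each offer type arrives with probability 1/2 each year, and an accepted job is kept for all remaining years):

GOOD, BAD, YEARS = 100, 44, 10   # hypothetical wages and horizon

def solve():
    value = [0.0] * (YEARS + 2)      # value[t]: expected earnings from year t on
    accept_bad = [False] * (YEARS + 1)
    for t in range(YEARS, 0, -1):    # reason backwards from the final year
        remaining = YEARS - t + 1
        take_good = GOOD * remaining           # always worth accepting
        take_bad = BAD * remaining
        wait = value[t + 1]                    # decline, try again next year
        accept_bad[t] = take_bad >= wait
        value[t] = 0.5 * take_good + 0.5 * max(take_bad, wait)
    return accept_bad

accept_bad = solve()
print([t for t in range(1, YEARS + 1) if accept_bad[t]])   # [9, 10]: accept a bad offer only near the end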
https://en.wikipedia.org/wiki/Few-body%20systems | In mechanics, a few-body system consists of a small number of well-defined structures or point particles.
Quantum mechanics
In quantum mechanics, examples of few-body systems include light nuclear systems (that is, few-nucleon bound and scattering states), small molecules, light atoms (such as helium in an external electric field), atomic collisions, and quantum dots. A fundamental difficulty in describing few-body systems is that the Schrödinger equation and the classical equations of motion are not analytically solvable for more than two mutually interacting particles even when the underlying forces are precisely known. This is known as the few-body problem. For some three-body systems an exact solution can be obtained iteratively through the Faddeev equations. It can be shown that under certain conditions Faddeev equations should lead to the Efimov effect. Most three-body systems are amenable to extremely accurate numerical solutions that use large sets of basis functions and then variationally optimize the amplitudes of the basis functions. Particular cases are the Hydrogen molecular ion or the Helium atom. The latter has been solved very precisely using basis sets of Hylleraas or Frankowski-Pekeris functions (see references of the work of G.W.F. Drake and J.D. Morgan III in Helium atom section).
In many cases theory has to resort to approximations to treat few-body systems. These approximations have to be tested by detailed experimental data. Atomic collisions or precision laser spectroscopy are particularly suitable for such tests. The fundamental force underlying atomic systems, the electromagnetic force, is essentially understood. Therefore, any discrepancy found between experiment and theory can be directly related to the theoretical description of few-body effects, or to the existence of new fundamental forces (beyond-Standard-Model forces). In nuclear systems, in contrast, the underlying force is much less understood. Furthermore, in atomic col |
https://en.wikipedia.org/wiki/Trigram%20tagger | In computational linguistics, a trigram tagger is a statistical method for automatically identifying words as being nouns, verbs, adjectives, adverbs, etc. based on second order Markov models that consider triples of consecutive tags. It is trained on a text corpus, estimating the probability of each tag from smoothed combinations of unigram, bigram and trigram statistics, and scoring a full tag sequence by the product of these per-position probabilities. In speech recognition, algorithms utilizing trigram-tagger score better than those algorithms utilizing IIMM tagger but less well than Net tagger.
The description of the trigram tagger is provided by Brants (2000). |
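A minimal sketch of the smoothed trigram estimate used by taggers such as Brants' TnT (the λ weights below are hypothetical; TnT chooses them by deleted interpolation on held-out data):

from collections import Counter

tags = "DT NN VB DT NN . DT JJ NN VB".split()   # toy training tag sequence

uni, bi, tri = Counter(), Counter(), Counter()
for i, t in enumerate(tags):
    uni[t] += 1
    if i >= 1:
        bi[(tags[i-1], t)] += 1
    if i >= 2:
        tri[(tags[i-2], tags[i-1], t)] += 1

def p(t1, t2, t3, lambdas=(0.1, 0.3, 0.6)):
    """P(t3 | t1, t2) as a weighted mix of unigram, bigram, trigram estimates."""
    l1, l2, l3 = lambdas
    p1 = uni[t3] / sum(uni.values())
    p2 = bi[(t2, t3)] / uni[t2] if uni[t2] else 0.0
    p3 = tri[(t1, t2, t3)] / bi[(t1, t2)] if bi[(t1, t2)] else 0.0
    return l1 * p1 + l2 * p2 + l3 * p3

print(p("DT", "NN", "VB"))   # smoothed probability of tag VB after DT NN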
https://en.wikipedia.org/wiki/Rima%20glottidis | The rima glottidis is the opening between the two true vocal cords anteriorly, and the two arytenoid cartilages posteriorly. It is part of the larynx.
Anatomy
The rima glottidis is the narrowest part of the larynx. It is longer (~23 mm) in males than in females (17–18 mm).
The rima glottidis is an aperture between the two true vocal cords anteriorly, and the bases and vocal process of the two arytenoid cartilages posteriorly. It is therefore described as subdivided into two parts: the larger anterior part between the vocal folds (intermembranous part, or glottis vocalis), and the smaller posterior part between arytenoid cartilages (intercartilaginous part, glottis respiratoria, intercartilaginous glottis, respiratory glottis, or interarytenoid space). It is limited posteriorly by an interarytenoid fold of mucous membrane.
Function
The rima glottidis is closed by the lateral cricoarytenoid muscles and the arytenoid muscle, and opened by the posterior cricoarytenoid muscles. All of these muscles receive innervation from the recurrent laryngeal nerve which is a branch of the vagus nerve (CN X).
The shape of rima glottidis is changed by movements of vocal cords and arytenoid cartilages during respiration and phonation.
Clinical significance
Any damage to the rima glottidis may result in a hoarse voice, aphonia or difficulty breathing. |
https://en.wikipedia.org/wiki/Kelvin%E2%80%93Voigt%20material | A Kelvin-Voigt material, also called a Voigt material, is the simplest model of a viscoelastic material showing typical rubbery properties. It is purely elastic on long timescales (slow deformation), but shows additional resistance to fast deformation. It is named after the British physicist and engineer Lord Kelvin and the German physicist Woldemar Voigt.
Definition
The Kelvin-Voigt model, also called the Voigt model, is represented by a purely viscous damper and purely elastic spring connected in parallel as shown in the picture.
If, instead, we connect these two elements in series we get a model of a Maxwell material.
Since the two components of the model are arranged in parallel, the strains in each component are identical:

ε_total = ε_D = ε_S,

where the subscript D indicates the stress-strain in the damper and the subscript S indicates the stress-strain in the spring. Similarly, the total stress will be the sum of the stress in each component:

σ_total = σ_D + σ_S.
From these equations we get that in a Kelvin-Voigt material, stress σ, strain ε and their rates of change with respect to time t are governed by equations of the form:

σ(t) = E ε(t) + η dε(t)/dt,

or, in dot notation:

σ = E ε + η ε̇,

where E is a modulus of elasticity and η is the viscosity. The equation can be applied either to the shear stress or normal stress of a material.
Effect of a sudden stress
If we suddenly apply some constant stress σ₀ to a Kelvin-Voigt material, then the deformation would approach the deformation of the pure elastic material, σ₀/E, with the difference decaying exponentially:

ε(t) = (σ₀/E)(1 − e^(−t/τ)),

where t is time and τ = η/E is the retardation time.

If we would free the material at time t₁, then the elastic element would retard the material back until the deformation becomes zero. The retardation obeys the following equation:

ε(t > t₁) = ε(t₁) e^(−(t − t₁)/τ).
The picture shows the dependence of the dimensionless deformation ε E/σ₀ on dimensionless time t/τ. In the picture the stress on the material is loaded at time t = 0, and released at the later dimensionless time t₁/τ.
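A short numerical check of the creep solution against direct integration of σ₀ = Eε + ηε̇ (an added illustration with hypothetical parameter values):

import math

E, eta, sigma0 = 2.0, 1.0, 1.0    # hypothetical modulus, viscosity and load
tau = eta / E                      # retardation time
dt, steps = 1e-4, 50000

eps = 0.0
for _ in range(steps):             # forward-Euler integration of the ODE
    eps += dt * (sigma0 - E * eps) / eta

t = dt * steps
analytic = (sigma0 / E) * (1 - math.exp(-t / tau))
print(abs(eps - analytic) < 1e-3)  # True: integration matches the formula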
Since all the deformation is reversible (though not suddenly) the |
https://en.wikipedia.org/wiki/Stomatitis | Stomatitis is inflammation of the mouth and lips. It refers to any inflammatory process affecting the mucous membranes of the mouth and lips, with or without oral ulceration.
In its widest meaning, stomatitis can have a multitude of different causes and appearances. Common causes include infections, nutritional deficiencies, allergic reactions, radiotherapy, and many others.
When the gums and the mouth in general are inflamed, the term gingivostomatitis is sometimes used, though this is also sometimes used as a synonym for herpetic gingivostomatitis.
The term is derived from the Greek stoma, meaning "mouth", and the suffix -itis, meaning "inflammation".
Causes
Nutritional deficiency
Malnutrition (improper dietary intake) or malabsorption (poor absorption of nutrients into the body) can lead to nutritional deficiency states, several of which can lead to stomatitis. For example, deficiencies of iron, vitamin B2 (riboflavin), vitamin B3 (niacin), vitamin B6 (pyridoxine), vitamin B9 (folic acid) or vitamin B12 (cobalamine) may all manifest as stomatitis. Iron is necessary for the upregulation of transcriptional elements for cell replication and repair. Lack of iron can cause genetic downregulation of these elements, leading to ineffective repair and regeneration of epithelial cells, especially in the mouth and lips. Many disorders which cause malabsorption can cause deficiencies, which in turn causes stomatitis. Examples include tropical sprue.
Aphthous stomatitis
Aphthous stomatitis (canker sores) is the recurrent appearance of mouth ulcers in otherwise healthy individuals. The cause is not completely understood, but it is thought that the condition represents a T cell mediated immune response which is triggered by a variety of factors. The individual ulcers (aphthae) recur periodically and heal completely, although in the more severe forms, new ulcers may appear in other parts of the mouth before the old ones have finished healing. Aphth |
https://en.wikipedia.org/wiki/Projective%20vector%20field | A projective vector field (projective) is a smooth vector field on a semi-Riemannian manifold M (e.g. spacetime) whose flow preserves the geodesic structure of M without necessarily preserving the affine parameter of any geodesic. More intuitively, the flow of the projective maps geodesics smoothly into geodesics without preserving the affine parameter.
Decomposition
In dealing with a vector field X on a semi-Riemannian manifold (e.g. in general relativity), it is often useful to decompose the covariant derivative into its symmetric and skew-symmetric parts:

X_a;b = (1/2) h_ab + F_ab,

where

h_ab = X_a;b + X_b;a (= (L_X g)_ab)

and

F_ab = (1/2)(X_a;b − X_b;a).

Note that X_a are the covariant components of X.
Equivalent conditions
Mathematically, the condition for a vector field X to be projective is equivalent to the existence of a one-form ψ satisfying

L_X Γ^a_bc = δ^a_b ψ_c + δ^a_c ψ_b,

which is equivalent to

h_ab;c = 2 g_ab ψ_c + g_ac ψ_b + g_bc ψ_a.

The set of all global projective vector fields over a connected or compact manifold forms a finite-dimensional Lie algebra denoted by P(M) (the projective algebra) and satisfies for connected manifolds the condition dim P(M) ≤ n(n + 2), where n = dim M. Here a projective vector field is uniquely determined by specifying the values of X, h and ψ (equivalently, X^a, X_a;b and ψ_a) at any point of M. (For non-connected manifolds you need to specify these 3 in one point per connected component.) Projectives also satisfy a number of further properties.
Subalgebras
Several important special cases of projective vector fields can occur and they form Lie subalgebras of . These subalgebras are useful, for example, in classifying spacetimes in general relativity.
Affine algebra
Affine vector fields (affines) satisfy L_X Γ^a_bc = 0 (equivalently, ψ = 0) and hence every affine is a projective. Affines preserve the geodesic structure of the semi-Riemannian manifold (read spacetime) whilst also preserving the affine parameter. The set of all affines on M forms a Lie subalgebra of P(M) denoted by A(M) (the affine algebra) and satisfies for connected M, dim A(M) ≤ n(n + 1). An affine vector is uniquely determined by specifying the values of the vector field and its first covariant derivative (eq
https://en.wikipedia.org/wiki/Induction%20puzzles | Induction puzzles are logic puzzles, which are examples of multi-agent reasoning, where the solution evolves along with the principle of induction.
A puzzle's scenario always involves multiple players with the same reasoning capability, who go through the same reasoning steps. According to the principle of induction, a solution to the simplest case makes the solution of the next, more complicated case obvious. Once the simplest case of the induction puzzle is solved, the whole puzzle is solved subsequently.
Typical tell-tale features of these puzzles include any puzzle in which each participant has a given piece of information (usually as common knowledge) about all other participants but not themselves. Also, usually, some kind of hint is given to suggest that the participants can trust each other's intelligence — they are capable of theory of mind (that "every participant knows modus ponens" is common knowledge). Also, the inaction of a participant is a non-verbal communication of that participant's lack of knowledge, which then becomes common knowledge to all participants who observed the inaction.
The muddy children puzzle is the most frequently appearing induction puzzle in the scientific literature on epistemic logic. It is a variant of the well-known wise men or cheating wives/husbands puzzles.
Hat puzzles are induction puzzle variations that date back to as early as 1961. In many variations, hat puzzles are described in the context of prisoners. In other cases, hat puzzles are described in the context of wise men.
Muddy Children Puzzle
Description
There is a set of attentive children. They think perfectly logically. The children consider it possible to have a muddy face. None of the children can determine the state of their own face themselves. But, every child knows the state of all other children's faces. A custodian tells the children that at least one of them has a muddy face. The children are each told that they should step forward if th |
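In the standard solution (which the truncated description above leads to), if k children are muddy, nobody steps forward in the first k − 1 rounds, and all k muddy children step forward simultaneously in round k. A small simulation of that reasoning (an added illustration):

def muddy_children(muddy):
    """Each child sees everyone's face but their own. A child who sees m muddy
    faces knows the true count is m or m + 1; each round of universal silence
    raises the commonly known lower bound on the count by one."""
    n = len(muddy)
    lower_bound = 1                    # custodian: at least one muddy face
    for round_no in range(1, n + 1):
        stepped = [i for i in range(n)
                   if sum(muddy[j] for j in range(n) if j != i) < lower_bound]
        if stepped:
            return round_no, stepped   # exactly the muddy children step forward
        lower_bound += 1               # silence: everyone learns count > round_no
    return None

print(muddy_children([True, False, True, True]))   # (3, [0, 2, 3])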
https://en.wikipedia.org/wiki/List%20of%20optical%20disc%20authoring%20software | This is a list of optical disc authoring software.
Open source
Multi-platform
cdrtools, a comprehensive command line-based set of tools for creating and burning CDs, DVDs and Blu-rays
cdrkit, a fork of cdrtools by the Debian project
cdrdao, open source software for authoring and ripping of CDs in Disk-At-Once mode
DVDStyler, a GUI-based DVD authoring tool
libburnia, a collection of command line-based tools and libraries for burning discs
Linux and Unix
Brasero, a GNOME disc burning utility
dvd+rw-tools, a package for DVD and Blu-ray writing on Unix and Unix-like systems
K3b, the KDE disc authoring program
Nautilus, the GNOME file manager (includes basic disc burning capabilities)
Serpentine, the GNOME audio CD burning utility
Xfburn, the Xfce disc burning program
X-CD-Roast
Windows
InfraRecorder (based on cdrkit and cdrtools)
DVD Flick (ImgBurn is included)
Freeware
Windows
CDBurnerXP
ImgBurn
Ashampoo Burning Studio
DeepBurner Free
DVD Decrypter
DVD Shrink
macOS
Disco
Commercial proprietary
macOS
Adobe Encore
DVD Studio Pro
MacTheRipper
Roxio Toast
Linux
Nero Linux
Windows
Adobe Encore
Alcohol 120%
Ashampoo Burning Studio
AVS Video Editor
Blindwrite
CDRWIN
CloneCD
CloneDVD
DeepBurner
DiscJuggler
Roxio Creator
MagicISO
Nero Burning ROM
Netblender
SEBAS
TMPGEnc Authoring Works 7
UltraISO
See also
Comparison of disc authoring software
Optical disc authoring software
Optical disc authoring |
https://en.wikipedia.org/wiki/Alfred%20Tauber | Alfred Tauber (5 November 1866 – 26 July 1942) was an Austrian Empire-born Austrian mathematician, known for his contribution to mathematical analysis and to the theory of functions of a complex variable: he is the eponym of an important class of theorems with applications ranging from mathematical and harmonic analysis to number theory. He was murdered in the Theresienstadt concentration camp.
Life and academic career
Born in Pressburg, Kingdom of Hungary, Austrian Empire (now Bratislava, Slovakia), he began studying mathematics at Vienna University in 1884, obtained his Ph.D. in 1889, and his habilitation in 1891.
Starting from 1892, he worked as chief mathematician at the Phönix insurance company until 1908, when he became an a.o. professor at the University of Vienna, though, already from 1901, he had been honorary professor at TU Vienna and director of its insurance mathematics chair. In 1933, he was awarded the Grand Decoration of Honour in Silver for Services to the Republic of Austria, and retired as emeritus extraordinary professor. However, he continued lecturing as a privatdozent until 1938, when he was forced to resign as a consequence of the "Anschluss". On 28–29 June 1942, he was deported with transport IV/2, č. 621 to Theresienstadt, where he was murdered on 26 July 1942.
Work
The bibliography appended to his obituary lists 35 publications, and a search of the "Jahrbuch über die Fortschritte der Mathematik" database likewise returns 35 mathematical works authored by him, spanning the period from 1891 to 1940. However, Hlawka cites two papers on actuarial mathematics which do not appear in these two bibliographical lists, and Binder's bibliography of Tauber's works (1984, pp. 163–166), while listing 71 entries including the ones in the obituary bibliography and the two cited by Hlawka, does not include one further short note, so the exact number of his works is not known. According to Hlawka, his scientific research can be divided into three areas: t |
https://en.wikipedia.org/wiki/Rupununi | The Rupununi is a region in the south-west of Guyana, bordering the Brazilian Amazon. The Rupununi river, also known by the local indigenous peoples as Raponani, flows through the Rupununi region. The name Rupununi originates from the word rapon in the Makushi language, in which it means the black-bellied whistling duck found along the river.
Geography
The Rupununi River is one of the main tributaries of the Essequibo River and is located in southern Guyana. The river originates in the Kanuku Mountains, which are located in the Upper Takutu-Essequibo region. The Rupununi River flows near the Guyana-Brazil border, and eventually leads into the Essequibo River. Throughout the flood season, the river shares a watershed with the Amazon. During the rainy season it is connected to the Takutu River by the flooded Pirara Creek, draining the vast swamps of the Parima or Amaku Lake. The region surrounding the Rupununi river is composed of mainly savannah, wetlands, forest, and low mountain ranges. The area of Region 9 is 57,750 square kilometers and has over 80 communities. Most people live within the Rupununi Savannah area, while the jungle covered areas are only populated near major rivers.
Geology
The geology of this area is divided into four main zones. Furthest south are areas of Rhyacian meta-sediments, meta-volcanics (Kwitaro Group) and associated granites, all intruded by Orosirian rocks of the Southern Guyana Granite Complex. The Kanuku Mountains consist of high-grade gneisses in a NE-SW belt. The Takutu Graben is a NE-SW fault-bounded basin initially filled by basaltic lava, then Mesozoic sediments, including the Takutu Formation. To the north of the Takutu Graben, almost flat-lying Statherian sandstones and conglomerates of the Roraima Group sediments overlie Iwokrama Formation felsic volcanics and associated Orosirian granites. Relict Hadean zircons (xenocrysts) in the Iwokrama Formation suggest that older crust must occur at depth.
Animal life
The areas both |
https://en.wikipedia.org/wiki/Homothetic%20vector%20field | In physics, a homothetic vector field (sometimes homothetic collineation or homothety) is a projective vector field which satisfies the condition
$$\mathcal{L}_X g_{ab} = 2c\, g_{ab},$$
where $c$ is a real constant. Homothetic vector fields find application in the study of singularities in general relativity. They can also be used to generate new solutions for the Einstein equations by similarity reduction.
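As a quick symbolic check (an illustrative sketch, not taken from the article), the dilation field $X = c\,(x\,\partial_x + y\,\partial_y)$ on the flat Euclidean plane satisfies the homothety condition with the same constant $c$:

```python
# Sketch: verify L_X g = 2c g for the dilation field on the flat plane, using
# the coordinate formula
#   (L_X g)_ab = X^k d_k g_ab + g_kb d_a X^k + g_ak d_b X^k.
import sympy as sp

x, y, c = sp.symbols('x y c')
coords = (x, y)
g = sp.eye(2)            # flat Euclidean metric
X = (c*x, c*y)           # dilation vector field

n = len(coords)
L = sp.zeros(n, n)
for a in range(n):
    for b in range(n):
        L[a, b] = sum(X[k]*sp.diff(g[a, b], coords[k])
                      + g[k, b]*sp.diff(X[k], coords[a])
                      + g[a, k]*sp.diff(X[k], coords[b]) for k in range(n))

print(L.applyfunc(sp.simplify))   # Matrix([[2*c, 0], [0, 2*c]]) = 2c * g
```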
See also
Affine vector field
Conformal Killing vector field
Curvature collineation
Killing vector field
Matter collineation
Spacetime symmetries |
https://en.wikipedia.org/wiki/Affine%20vector%20field | An affine vector field (sometimes affine collineation or affine) is a projective vector field preserving geodesics and preserving the affine parameter. Mathematically, this is expressed by the following condition:
$$(\mathcal{L}_X g)_{ab;c} = 0.$$
See also
Conformal vector field
Curvature collineation
Homothetic vector field
Killing vector field
Matter collineation
Spacetime symmetries
Mathematical methods in general relativity |
https://en.wikipedia.org/wiki/Duffing%20map | The Duffing map (also called the Holmes map) is a discrete-time dynamical system. It is an example of a dynamical system that exhibits chaotic behavior. The Duffing map takes a point (xn, yn) in the plane and maps it to a new point given by
$$x_{n+1} = y_n,$$
$$y_{n+1} = -b x_n + a y_n - y_n^3.$$
The map depends on the two constants a and b. These are usually set to a = 2.75 and b = 0.2 to produce chaotic behaviour. It is a discrete version of the Duffing equation. |
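The map is straightforward to iterate numerically (a minimal sketch using the parameter values just quoted; the initial condition is arbitrary):

```python
# Iterate the Duffing (Holmes) map:
#   x_{n+1} = y_n,  y_{n+1} = -b*x_n + a*y_n - y_n**3
# with the chaotic parameter values a = 2.75, b = 0.2.
def duffing_map(x, y, a=2.75, b=0.2):
    return y, -b*x + a*y - y**3

x, y = 0.1, 0.1            # arbitrary initial condition near the origin
orbit = []
for _ in range(500):
    x, y = duffing_map(x, y)
    orbit.append((x, y))

print(orbit[-3:])          # a few points on the chaotic attractor
```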
https://en.wikipedia.org/wiki/Higher-dimensional%20Einstein%20gravity | Higher-dimensional Einstein gravity is any of various physical theories that attempt to generalise to higher dimensions various results of the well established theory of standard (four-dimensional) Einstein gravity, that is, general relativity. This attempt at generalisation has been strongly influenced in recent decades by string theory.
At present, this work can probably be most fairly described as extended theoretical speculation. Currently, it has no direct observational or experimental support, in contrast to four-dimensional general relativity. However, this theoretical work has led to the possibility of proving the existence of extra dimensions. This is best demonstrated by the proof by Harvey Reall and Roberto Emparan that there is a 'black ring' solution in 5 dimensions. If such a 'black ring' could be produced in a particle accelerator such as the Large Hadron Collider, this would provide evidence that higher dimensions exist.
Exact solutions
The higher-dimensional generalization of the Kerr metric was discovered by Myers and Perry. Like the Kerr metric, the Myers-Perry metric has spherical horizon topology. The construction involves making a Kerr-Schild ansatz; by a similar method, the solution has been generalized to include a cosmological constant. The black ring is a solution of five-dimensional general relativity. It inherits its name from the fact that its event horizon is topologically S1 × S2. This is in contrast to other known black hole solutions in five dimensions which have horizon topology S3.
In 2014, Hari Kunduri and James Lucietti proved the existence of a black hole with lens space topology of the L(2, 1) type in five dimensions. This was extended to all L(p, 1) with positive integer p by Shinya Tomizawa and Masato Nozawa in 2016, and finally, in a preprint, to all L(p, q) and any dimension by Marcus Khuri and Jordan Rainone in 2022. A black lens does not necessarily need to rotate like a black ring, but all examples so far need a |
https://en.wikipedia.org/wiki/Love%20wave | In elastodynamics, Love waves, named after Augustus Edward Hough Love, are horizontally polarized surface waves. The Love wave is a result of the interference of many shear waves (S-waves) guided by an elastic layer, which is welded to an elastic half-space on one side while bordering a vacuum on the other side. In seismology, Love waves (also known as Q waves (Quer: German for lateral)) are surface seismic waves that cause horizontal shifting of the Earth during an earthquake. Augustus Edward Hough Love predicted the existence of Love waves mathematically in 1911. They form a distinct class, different from other types of seismic waves, such as P-waves and S-waves (both body waves), or Rayleigh waves (another type of surface wave). Love waves travel with a lower velocity than P- or S-waves, but faster than Rayleigh waves. These waves are observed only when there is a low-velocity layer overlying a high-velocity layer or sub-layers.
Description
The particle motion of a Love wave forms a horizontal line perpendicular to the direction of propagation (i.e., they are transverse waves). Moving deeper into the material, motion can decrease to a "node" and then alternately increase and decrease as one examines deeper layers of particles. The amplitude, or maximum particle motion, often decreases rapidly with depth.
Since Love waves travel on the Earth's surface, the strength (or amplitude) of the waves decreases exponentially with the depth of an earthquake. However, given their confinement to the surface, their amplitude decays only as $1/\sqrt{r}$, where $r$ represents the distance the wave has travelled from the earthquake. Surface waves therefore decay more slowly with distance than do body waves, which travel in three dimensions. Large earthquakes may generate Love waves that travel around the Earth several times before dissipating.
Since they decay so slowly, Love waves are the most destructive outside the immediate area of the focus or epicentre of an earthquake. They are what mo |
https://en.wikipedia.org/wiki/Chrysoviridae | Chrysoviridae is a family of double-stranded RNA viruses. Members of the family are called chrysoviruses.
Virology
The capsid is about 35-40 nm in diameter. The genome has four segments (tetrapartite). These segments are separately encapsulated.
Taxonomy
The following genera are recognized:
Alphachrysovirus
Betachrysovirus |
https://en.wikipedia.org/wiki/Nidovirales | Nidovirales is an order of enveloped, positive-strand RNA viruses which infect vertebrates and invertebrates. Host organisms include mammals, birds, reptiles, amphibians, fish, arthropods, molluscs, and helminths. The order includes the families Coronaviridae, Arteriviridae, Roniviridae, and Mesoniviridae.
Member viruses have a viral envelope and a positive-sense, single-stranded RNA genome which is capped and polyadenylated. Nidoviruses are named for the Latin nidus, meaning nest, as all viruses in this order produce a 3' co-terminal nested set of subgenomic mRNAs during infection.
Virology
Structure
Nidoviruses have a viral envelope and a positive-sense, single-stranded RNA genome which is capped and polyadenylated. The group expresses structural proteins separately from the nonstructural ones. The structural proteins are encoded at the 3' region of the genome and are expressed from a set of subgenomic mRNAs.
Member viruses encode one main proteinase and between one and three accessory proteinases which are mainly involved in expressing the replicase gene. These proteinases are also responsible for activating or inactivating specific proteins at the correct time in the virus life cycle, ensuring replication occurs at the right time.
Genome
Nidoviruses can be distinguished from other RNA viruses by a constellation of seven conserved domains—5'-TM2-3CLpro-TM3-RdRp-Zm-HEL1-NendoU-3'—with the first three being encoded in ORF1a and the remaining four in ORF1b. TM2 and TM3 are transmembrane domains; RdRp is the RNA-dependent RNA polymerase; Zm is a Zn-cluster binding domain fused with a helicase (HEL1); 3CLpro is a 3C-like protease; and NendoU is a uridylate-specific endonuclease. The 3CLpro has a catalytic His-Cys dyad, and is related to the SARS coronavirus main proteinase (Mpro).
Most, but not all, nidovirus subgenomic RNAs contain a 5′ leader sequence derived from the 5′ end of the genomic RNA. The ribosomal frameshift that generates ORF1b occurs at a UUU |
https://en.wikipedia.org/wiki/Fish%20reproduction | Fish reproductive organs include testes and ovaries. In most species, gonads are paired organs of similar size, which can be partially or totally fused. There may also be a range of secondary organs that increase reproductive fitness. The genital papilla is a small, fleshy tube behind the anus in some fishes, from which the sperm or eggs are released; the sex of a fish can often be determined by the shape of its papilla.
Anatomy
Testes
Most male fish have two testes of similar size. In the case of sharks, the testis on the right side is usually larger. The primitive jawless fish have only a single testis, located in the midline of the body, although even this forms from the fusion of paired structures in the embryo.
Under a tough membranous shell, the tunica albuginea, the testis of some teleost fish, contains very fine coiled tubes called seminiferous tubules. The tubules are lined with a layer of cells (germ cells) that from puberty into old age, develop into sperm cells (also known as spermatozoa or male gametes). The developing sperm travel through the seminiferous tubules to the rete testis located in the mediastinum testis, to the efferent ducts, and then to the epididymis where newly created sperm cells mature (see spermatogenesis). The sperm move into the vas deferens, and are eventually expelled through the urethra and out of the urethral orifice through muscular contractions.
However, most fish do not possess seminiferous tubules. Instead, the sperm are produced in spherical structures called sperm ampullae. These are seasonal structures, releasing their contents during the breeding season, and then being reabsorbed by the body. Before the next breeding season, new sperm ampullae begin to form and ripen. The ampullae are otherwise essentially identical to the seminiferous tubules in higher vertebrates, including the same range of cell types.
In terms of spermatogonia distribution, the structure of teleost testes is of two types: in the most common, spe |
https://en.wikipedia.org/wiki/Symplectic%20integrator | In mathematics, a symplectic integrator (SI) is a numerical integration scheme for Hamiltonian systems. Symplectic integrators form the subclass of geometric integrators which, by definition, are canonical transformations. They are widely used in nonlinear dynamics, molecular dynamics, discrete element methods, accelerator physics, plasma physics, quantum physics, and celestial mechanics.
Introduction
Symplectic integrators are designed for the numerical solution of Hamilton's equations, which read
$$\dot{p} = -\frac{\partial H}{\partial q}, \qquad \dot{q} = \frac{\partial H}{\partial p},$$
where $q$ denotes the position coordinates, $p$ the momentum coordinates, and $H$ is the Hamiltonian.
The set of position and momentum coordinates are called canonical coordinates.
(See Hamiltonian mechanics for more background.)
The time evolution of Hamilton's equations is a symplectomorphism, meaning that it conserves the symplectic 2-form $\mathrm{d}p \wedge \mathrm{d}q$. A numerical scheme is a symplectic integrator if it also conserves this 2-form.
Symplectic integrators also might possess, as a conserved quantity, a Hamiltonian which is slightly perturbed from the original one (only true for a small class of simple cases). By virtue of these advantages, the SI scheme has been widely applied to the calculations of long-term evolution of chaotic Hamiltonian systems ranging from the Kepler problem to the classical and semi-classical simulations in molecular dynamics.
Most of the usual numerical methods, like the primitive Euler scheme and the classical Runge–Kutta scheme, are not symplectic integrators.
Methods for constructing symplectic algorithms
Splitting methods for separable Hamiltonians
A widely used class of symplectic integrators is formed by the splitting methods.
Assume that the Hamiltonian is separable, meaning that it can be written in the form
$$H(p, q) = T(p) + V(q).$$
This happens frequently in Hamiltonian mechanics, with T being the kinetic energy and V the potential energy.
For notational simplicity, let us introduce the symbol $z = (q, p)$ to denote the canonical coordinates, including both the position and the momentum coordinates. |
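As a concrete illustration of splitting (a minimal sketch, assuming the separable form $H = T(p) + V(q)$ above; not code from the article), the first-order symplectic Euler scheme alternates an exact flow of $V$ (a momentum "kick") with an exact flow of $T$ (a position "drift"). Applied to the harmonic oscillator, its energy error stays bounded over long times, unlike the primitive explicit Euler scheme:

```python
# Minimal splitting method for a separable Hamiltonian H(p, q) = T(p) + V(q):
# the symplectic (semi-implicit) Euler scheme, applied to the harmonic
# oscillator T = p**2/2, V = q**2/2.
def symplectic_euler(q, p, dt, dV, dT):
    p = p - dt * dV(q)   # "kick": exact flow of V alone
    q = q + dt * dT(p)   # "drift": exact flow of T alone
    return q, p

dV = lambda q: q         # V'(q) for V = q**2/2
dT = lambda p: p         # T'(p) for T = p**2/2

q, p, dt = 1.0, 0.0, 0.05
for _ in range(10_000):  # 500 time units, i.e. many oscillation periods
    q, p = symplectic_euler(q, p, dt, dV, dT)

H = 0.5 * (p*p + q*q)    # stays within O(dt) of the exact value 0.5
print(f"energy after 10000 steps: {H:.6f} (exact: 0.5)")
```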
https://en.wikipedia.org/wiki/Thermodynamic%20beta | In statistical thermodynamics, thermodynamic beta, also known as coldness, is the reciprocal of the thermodynamic temperature of a system: $\beta = \frac{1}{k_B T}$ (where $T$ is the temperature and $k_B$ is the Boltzmann constant).
It was originally introduced in 1971 (as a "coldness function") by Ingo Müller, one of the proponents of the rational thermodynamics school of thought, based on earlier proposals for a "reciprocal temperature" function.
Thermodynamic beta has units reciprocal to those of energy (in SI units, reciprocal joules, $\mathrm{J}^{-1}$). In non-thermal units, it can also be measured in bytes per joule, or more conveniently, gigabytes per nanojoule; $1\ \mathrm{K}^{-1}$ is equivalent to about 13,062 gigabytes per nanojoule; at room temperature ($T = 300\ \mathrm{K}$), $\beta \approx 2.4\times 10^{20}\ \mathrm{J}^{-1} \approx 44\ \mathrm{GB/nJ}$. The conversion factor is $1\ \mathrm{GB/nJ} \approx 5.5\times 10^{18}\ \mathrm{J}^{-1}$.
Description
Thermodynamic beta is essentially the connection between the information-theoretic and statistical-mechanical interpretations of a physical system through its entropy and the thermodynamics associated with its energy. It expresses the response of entropy to an increase in energy. If a system is challenged with a small amount of energy, then β describes the amount by which the system will randomize.
Via the statistical definition of temperature as a function of entropy, the coldness function can be calculated in the microcanonical ensemble from the formula
$$\beta = \frac{1}{k_B} \left(\frac{\partial S}{\partial E}\right)_{V, N}$$
(i.e., the partial derivative of the entropy $S$ with respect to the energy $E$ at constant volume $V$ and particle number $N$).
Advantages
Though completely equivalent in conceptual content to temperature, $\beta$ is generally considered a more fundamental quantity than temperature owing to the phenomenon of negative temperature, in which $\beta$ is continuous as it crosses zero whereas $T$ has a singularity.
In addition, $\beta$ has the advantage of being easier to understand causally: if a small amount of heat is added to a system, $\beta$ is the increase in entropy divided by the increase in heat. Temperature is difficult to interpret in the same sense, as it is not possible to "add entropy" to a system excep |
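The figures quoted above can be reproduced directly (a small worked example; the unit conversion assumes decimal gigabytes and 1 byte = 8 ln 2 nats):

```python
# Evaluate beta = 1/(k_B * T) at room temperature and convert to the
# informational unit GB/nJ used in the article.
import math

k_B = 1.380649e-23             # J/K (exact, 2019 SI definition)
T = 300.0                      # K
beta = 1.0 / (k_B * T)         # in nats per joule, i.e. J^-1
print(f"beta = {beta:.3e} J^-1")            # ~2.41e20 J^-1

nats_per_GB = 1e9 * 8 * math.log(2)         # nats in one decimal gigabyte
beta_GB_per_nJ = beta / nats_per_GB * 1e-9  # convert J^-1 -> GB/nJ
print(f"beta = {beta_GB_per_nJ:.1f} GB/nJ") # ~44 GB/nJ at 300 K
```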
https://en.wikipedia.org/wiki/Layering | Layering is the process whereby the branch of a tree, or other plant, produces roots and is separated from the original plant, becoming a new, separate plant. Layering is utilized by horticulturists to propagate desirable plants.
Natural layering typically occurs when a branch touches the ground, whereupon it produces adventitious roots. At a later stage the connection with the parent plant is severed and a new plant is produced as a result.
The horticultural layering process typically involves wounding the target region to expose the inner stem and optionally applying rooting compounds.
In ground layering or simple layering, the stem is bent down and the target region is buried in the soil. This is done in plant nurseries in imitation of natural layering by many plants such as brambles which bow over and touch the tip on the ground, at which point it grows roots and, when separated, can continue as a separate plant. In either case, the rooting process may take from several weeks to a year.
There are two methods of air layering, which do not involve burying the stem. In ring air layering, the exposed wound is covered in a growth medium such as sphagnum moss, and wrapped in a material such as plastic. The roots grow into the medium and after a period of time, the stem is separated from the original plant. Tourniquet air layering has a similar method to ring air layering, except that instead of creating a wound, a wire is wrapped around the stem and the ends are twisted until it is very tight.
Layering is more complicated than taking cuttings, but has the advantage that the propagated portion continues to receive water and nutrients from the parent plant while it is forming roots. This is important for plants that form roots slowly, or for propagating large pieces. Layering is used quite frequently in the propagation of bonsai; it is also used as a technique for both creating new roots and improving existing roots.
Method and process
A low-growing stem is bent down t |
https://en.wikipedia.org/wiki/Click-through%20rate | Click-through rate (CTR) is the ratio of clicks on a specific link to the number of times a page, email, or advertisement is shown. It is commonly used to measure the success of an online advertising campaign for a particular website, as well as the effectiveness of email campaigns.
Click-through rates for ad campaigns vary tremendously. The first online display ad, shown for AT&T on the website HotWired in 1994, had a 44% click-through rate. With time, the overall rate of users' clicks on webpage banner ads has decreased.
Purpose
The purpose of click-through rates is to measure the ratio of clicks to impressions of an online ad or email marketing campaign. Generally, the higher the CTR, the more effective the marketing campaign has been at bringing people to a website. Most commercial websites are designed to elicit some sort of action, whether it be to buy a book, read a news article, watch a music video, or search for a flight. People rarely visit websites with the intention of viewing advertisements, in the same way that few people watch television to view the commercials.
While marketers want to know the reaction of the web visitor, with current technology it is nearly impossible to quantify the emotional reaction to the site and the effect of that site on the firm's brand. In contrast, it is easy to determine the click-through rate, which measures the proportion of visitors who clicked on an advertisement that redirected them to another page. Forms of interaction with advertisements other than clicking are possible but rare; "click-through rate" is the most commonly used term to describe the efficacy of an advert.
Construction
The click-through rate of an advertisement is the number of times a click is made on the ad, divided by the number of times the ad is "served", that is, shown (also called impressions), expressed as a percentage:
$$\text{CTR} = \frac{\text{Clicks}}{\text{Impressions}} \times 100\%.$$
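As a minimal worked example of the formula (with illustrative numbers):

```python
# The CTR formula above, as a one-line helper.
def click_through_rate(clicks: int, impressions: int) -> float:
    """Return CTR as a percentage."""
    return 100.0 * clicks / impressions

print(click_through_rate(clicks=42, impressions=10_000))  # 0.42 (%)
```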
Online advertising
Click-through rates for banner ads have decreased over time. When banner ads first started to ap |
https://en.wikipedia.org/wiki/Euler%E2%80%93Bernoulli%20beam%20theory | Euler–Bernoulli beam theory (also known as engineer's beam theory or classical beam theory) is a simplification of the linear theory of elasticity which provides a means of calculating the load-carrying and deflection characteristics of beams. It covers the case corresponding to small deflections of a beam that is subjected to lateral loads only. By ignoring the effects of shear deformation and rotatory inertia, it is thus a special case of Timoshenko–Ehrenfest beam theory. It was first enunciated circa 1750, but was not applied on a large scale until the development of the Eiffel Tower and the Ferris wheel in the late 19th century. Following these successful demonstrations, it quickly became a cornerstone of engineering and an enabler of the Second Industrial Revolution.
Additional mathematical models have been developed, such as plate theory, but the simplicity of beam theory makes it an important tool in the sciences, especially structural and mechanical engineering.
History
Prevailing consensus is that Galileo Galilei made the first attempts at developing a theory of beams, but recent studies argue that Leonardo da Vinci was the first to make the crucial observations. Da Vinci lacked Hooke's law and calculus to complete the theory, whereas Galileo was held back by an incorrect assumption he made.
The Bernoulli beam is named after Jacob Bernoulli, who made the significant discoveries. Leonhard Euler and Daniel Bernoulli were the first to put together a useful theory circa 1750.
Static beam equation
The Euler–Bernoulli equation describes the relationship between the beam's deflection and the applied load:
$$\frac{\mathrm{d}^2}{\mathrm{d}x^2}\left(EI \frac{\mathrm{d}^2 w}{\mathrm{d}x^2}\right) = q.$$
The curve $w(x)$ describes the deflection of the beam in the $z$ direction at some position $x$ (recall that the beam is modeled as a one-dimensional object). $q$ is a distributed load, in other words a force per unit length (analogous to pressure being a force per area); it may be a function of $x$, $w$, or other variables. $E$ is the elastic modulus and $I$ is the second moment |
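For a concrete instance (a sketch with assumed numbers, using the classical closed-form solution for a uniformly loaded cantilever, which solves the equation above with clamped-free boundary conditions):

```python
# Tip deflection of a cantilever of length L under a uniform load q, from the
# classical Euler-Bernoulli solution w(x) = q x^2 (6L^2 - 4Lx + x^2) / (24 E I).
E = 200e9        # Pa, elastic modulus (steel, assumed)
I = 8.0e-6       # m^4, second moment of area of the cross-section (assumed)
L = 3.0          # m, beam length
q = 5e3          # N/m, uniform distributed load

def w(x):
    return q * x**2 * (6*L**2 - 4*L*x + x**2) / (24 * E * I)

print(f"tip deflection w(L) = {w(L)*1e3:.2f} mm")   # equals q L^4 / (8 E I)
```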
https://en.wikipedia.org/wiki/Spam%20and%20Open%20Relay%20Blocking%20System | SORBS ("Spam and Open Relay Blocking System") is a list of e-mail servers suspected of sending or relaying spam (a DNS Blackhole List). It has been augmented with complementary lists that include various other classes of hosts, allowing for customized email rejection by its users.
History
The SORBS DNSbl project was created in November 2001. It was maintained as a private list until 6 January 2002 when the DNSbl was officially launched to the public. The list consisted of 78,000 proxy relays and rapidly grew to over 3,000,000 alleged compromised spam relays.
In November 2009 SORBS was acquired by GFI Software, to enhance their mail filtering solutions.
In July 2011 SORBS was re-sold to Proofpoint, Inc.
DUHL
SORBS adds IP ranges that belong to dialup modem pools, dynamically allocated wireless, and DSL connections as well as DHCP LAN ranges by using reverse DNS PTR records, WHOIS records, and sometimes by submission from the ISPs themselves. This is called the DUHL or Dynamic User and Host List. SORBS does not automatically rescan DUHL listed hosts for updated rDNS so to remove an IP address from the DUHL the user or ISP has to request a delisting or rescan. If other blocks are scanned in the region of listings and the scan includes listed netspace, SORBS automatically removes the netspace marked as static.
Matthew Sullivan of SORBS proposed in an Internet Draft that generic reverse DNS addresses include purposing tokens such as static or dynamic, abbreviations thereof, and more. That naming scheme would have allowed end users to classify IP addresses without the need to rely on third party lists, such as the SORBS DUHL. The Internet Draft has since expired. Generally it is considered more appropriate for ISPs to simply block outgoing traffic to port 25 if they wish to prevent users from sending email directly, rather than specifying it in the reverse DNS record for the IP.
SORBS' dynamic IP list originally came from Dynablock but has been developed independen |
https://en.wikipedia.org/wiki/Ice%20algae | Ice algae are any of the various types of algal communities found in annual and multi-year sea, and terrestrial lake ice or glacier ice.
On sea ice in the polar oceans, ice algae communities play an important role in primary production. The timing of blooms of the algae is especially important for supporting higher trophic levels at times of the year when light is low and ice cover still exists. Sea ice algal communities are mostly concentrated in the bottom layer of the ice, but can also occur in brine channels within the ice, in melt ponds, and on the surface.
Because terrestrial ice algae occur in freshwater systems, the species composition differs greatly from that of sea ice algae. In particular, terrestrial glacier ice algae communities are significant in that they change the color of glaciers and ice sheets, impacting the reflectivity of the ice itself.
Sea ice algae
Adapting to the sea ice environment
Microbial life in sea ice is extremely diverse, and includes abundant algae, bacteria and protozoa. Algae in particular dominate the sympagic environment, with estimates of more than 1000 unicellular eukaryotes found to associate with sea ice in the Arctic. Species composition and diversity vary based on location, ice type, and irradiance. In general, pennate diatoms such as Nitzschia frigida (in the Arctic) and Fragilariopsis cylindrus (in the Antarctic) are abundant. Melosira arctica, which forms up to meter-long filaments attached to the bottom of the ice, is also widespread in the Arctic and is an important food source for marine species.
While sea ice algae communities are found throughout the column of sea ice, abundance and community composition depends on the time of year. There are many microhabitats available to algae on and within sea ice, and different algal groups have different preferences. For example, in late winter/early spring, motile diatoms like N. frigida have been found to dominate the uppermost layers of the ice, as far as briny |
https://en.wikipedia.org/wiki/Robert%20Sapolsky | Robert Morris Sapolsky (born April 6, 1957) is an American neuroendocrinology researcher and author. He is a professor of biology, neurology, neurological sciences, and neurosurgery at Stanford University. In addition, he is a research associate at the National Museums of Kenya.
Early life and education
Sapolsky was born in Brooklyn, New York, to immigrants from the Soviet Union. His father, Thomas Sapolsky, was an architect who renovated the restaurants Lüchow's and Lundy's. Robert was raised an Orthodox Jew and spent his time reading about and imagining living with silverback gorillas. By age twelve, he was writing fan letters to primatologists. He attended John Dewey High School and by that time was reading textbooks on the subject and teaching himself Swahili.
Sapolsky describes himself as an atheist. He said in his acceptance speech for the Emperor Has No Clothes Award, "I was raised in an Orthodox household and I was raised devoutly religious up until around age thirteen or so. In my adolescent years one of the defining actions in my life was breaking away from all religious belief whatsoever."
In 1978, Sapolsky received his B.A., summa cum laude, in biological anthropology from Harvard University. He then went to Kenya to study the social behaviors of baboons in the wild. When the Uganda–Tanzania War broke out in the neighboring countries, Sapolsky decided to travel into Uganda to witness the war up close, later commenting, "I was twenty-one and wanted adventure. [...] I was behaving like a late-adolescent male primate." He went to Uganda's capital Kampala, and from there to the border with Zaire (now the Democratic Republic of the Congo), and then back to Kampala, witnessing some fighting, including the Ugandan capital's conquest by the Tanzanian army and its Ugandan rebel allies on April 10–11, 1979. Sapolsky then returned to New York and studied at Rockefeller University, where he received his Ph.D. in neuroendocrinology working in the lab of endocrinol |
https://en.wikipedia.org/wiki/Dihybrid%20cross | Dihybrid cross is a cross between two individuals with two observed traits that are controlled by two distinct genes. The idea of a dihybrid cross came from Gregor Mendel when he observed pea plants that were either yellow or green and either round or wrinkled. Crossing of two heterozygous individuals will result in predictable ratios for both genotype and phenotype in the offspring. The expected phenotypic ratio of crossing heterozygous parents would be 9:3:3:1. Deviations from these expected ratios may indicate that the two traits are linked or that one or both traits has a non-Mendelian mode of inheritance.
Mendelian History
Gregor Mendel was a Czech monk who bred pea plants in his monastery garden and compared the offspring to work out the inheritance of traits from 1856 to 1863. He first looked at individual traits, but then began to look at two distinct traits in the same plant. In his first experiment, he looked at the two distinct traits of pea color (yellow or green) and pea shape (round or wrinkled). He applied the same rules as for a monohybrid cross to create the dihybrid cross. From these experiments, he determined the phenotypic ratio (9:3:3:1) seen in a dihybrid cross between heterozygous parents.
Through these experiments, he was able to determine the basic law of independent assortment and law of dominance. The law of independent assortment states that traits controlled by different genes are going to be inherited independently of each other. Mendel was able to determine this law because in his crosses he obtained all four possible phenotypes. The law of dominance states that if one dominant allele is inherited then the dominant phenotype will be expressed.
Expected genotype and phenotype ratios
The phenotypic ratio of a cross between two heterozygotes is 9:3:3:1, where 9/16 of the individuals possess the dominant phenotype for both traits, 3/16 of the individuals possess the dominant phenotype for one trait, 3/16 of the individuals possess t |
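The 9:3:3:1 ratio can be checked by brute force (an illustrative sketch, not from the article): enumerate the 16 equally likely gamete pairings of a YyRr × YyRr cross and tally phenotypes.

```python
# Derive the 9:3:3:1 dihybrid ratio by enumerating the Punnett square for a
# YyRr x YyRr cross (Y/R dominant, y/r recessive).
from itertools import product
from collections import Counter

gametes = [y + r for y, r in product("Yy", "Rr")]   # YR, Yr, yR, yr

def phenotype(g1, g2):
    alleles = g1 + g2
    return ("yellow" if "Y" in alleles else "green",
            "round" if "R" in alleles else "wrinkled")

counts = Counter(phenotype(g1, g2) for g1, g2 in product(gametes, gametes))
print(counts)  # 9 yellow/round : 3 yellow/wrinkled : 3 green/round : 1 green/wrinkled
```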
https://en.wikipedia.org/wiki/Antimicrobial%20peptides | Antimicrobial peptides (AMPs), also called host defence peptides (HDPs) are part of the innate immune response found among all classes of life. Fundamental differences exist between prokaryotic and eukaryotic cells that may represent targets for antimicrobial peptides. These peptides are potent, broad spectrum antimicrobials which demonstrate potential as novel therapeutic agents. Antimicrobial peptides have been demonstrated to kill Gram negative and Gram positive bacteria, enveloped viruses, fungi and even transformed or cancerous cells. Unlike the majority of conventional antibiotics it appears that antimicrobial peptides frequently destabilize biological membranes, can form transmembrane channels, and may also have the ability to enhance immunity by functioning as immunomodulators.
Structure
Antimicrobial peptides are a unique and diverse group of molecules, which are divided into subgroups on the basis of their amino acid composition and structure. Antimicrobial peptides are generally between 12 and 50 amino acids. These peptides include two or more positively charged residues provided by arginine, lysine or, in acidic environments, histidine, and a large proportion (generally >50%) of hydrophobic residues. The secondary structures of these molecules follow 4 themes, including i) α-helical, ii) β-stranded due to the presence of 2 or more disulfide bonds, iii) β-hairpin or loop due to the presence of a single disulfide bond and/or cyclization of the peptide chain, and iv) extended. Many of these peptides are unstructured in free solution, and fold into their final configuration upon partitioning into biological membranes. The peptides contain hydrophilic amino acid residues aligned along one side and hydrophobic amino acid residues aligned along the opposite side of a helical molecule. This amphipathicity of the antimicrobial peptides allows them to partition into the membrane lipid bilayer. The ability to associate with membranes is a definitive feature of a |
https://en.wikipedia.org/wiki/Biopesticide | A biopesticide is a biological substance or organism that damages, kills, or repels organisms seen as pests. Biological pest management intervention involves predatory, parasitic, or chemical relationships.
They are obtained from organisms including plants, bacteria and other microbes, fungi, nematodes, etc. They are components of integrated pest management (IPM) programmes, and have received much practical attention as substitutes to synthetic chemical plant protection products (PPPs).
Definitions
The U.S. Environmental Protection Agency states that biopesticides "are certain types of pesticides derived from such natural materials as animals, plants, bacteria, and certain minerals, and currently, there are 299 registered biopesticide active ingredients and 1401 active biopesticide product registrations." The EPA also states that biopesticides "include naturally occurring substances that control pests (biochemical pesticides), microorganisms that control pests (microbial pesticides), and pesticidal substances produced by plants containing added genetic material (plant-incorporated protectants) or PIPs".
The European Environmental Agency defines a biopesticide as “a pesticide made from biological sources, that is from toxins which occur naturally. - naturally occurring biological agents used to kill pests by causing specific biological effects rather than by inducing chemical poisoning.” Furthermore, the EEA defines a biopesticide as a pesticide in which “the active ingredient is a virus, fungus, or bacteria, or a natural product derived from a plant source. A biopesticide's mechanism of action is based on specific biological effects and not on chemical poisons.”
Types
Biopesticides usually have no known function in photosynthesis, growth or other basic aspects of plant physiology. Many chemical compounds produced by plants protect them from pests; they are called antifeedants. These materials are biodegradable and renewable, which can be economical for practi |
https://en.wikipedia.org/wiki/Inverse%20photoemission%20spectroscopy | Inverse photoemission spectroscopy (IPES) is a surface science technique used to study the unoccupied electronic structure of surfaces, thin films, and adsorbates. A well-collimated beam of electrons of a well defined energy (< 20 eV) is directed at the sample. These electrons couple to high-lying unoccupied electronic states and decay to low-lying unoccupied states, with a subset of these transitions being radiative. The photons emitted in the decay process are detected and an energy spectrum, photon counts vs. incident electron energy, is generated. Due to the low energy of the incident electrons, their penetration depth is only a few atomic layers, making inverse photoemission a particularly surface sensitive technique. As inverse photoemission probes the electronic states above the Fermi level of the system, it is a complementary technique to photoemission spectroscopy.
Theory
The energy of the photons ($h\nu$, which includes Planck's constant $h$) emitted when electrons incident on a substance using an electron beam with a constant energy ($E_i$) relax to a lower-energy unoccupied state ($E_f$) is given by the conservation of energy as:
$$h\nu = E_i - E_f.$$
By measuring $h\nu$ and $E_i$, the unoccupied state ($E_f$) of the surface can be found.
Modes
Two modes can be used for this measurement. One is the isochromat mode, which scans the incident electron energy and keeps the detected photon energy constant. The other is the tunable photon energy mode, or spectrograph mode, which keeps the incident electron energy constant and measures the distribution of the detected photon energy. The latter can also measure the resonant inverse photoemission spectroscopy.
Isochromat mode
In isochromat mode, the incident electron energy is ramped and the emitted photons are detected at a fixed energy that is determined by the photon detector. Typically, an I2 gas filled Geiger-Müller tube with an entrance window of either SrF2 or CaF2 is used as the photon detector. The combination of window and filling gas determines the detecte |
https://en.wikipedia.org/wiki/BET%20theory | Brunauer–Emmett–Teller (BET) theory aims to explain the physical adsorption of gas molecules on a solid surface and serves as the basis for an important analysis technique for the measurement of the specific surface area of materials. The observations are very often referred to as physical adsorption or physisorption. In 1938, Stephen Brunauer, Paul Hugh Emmett, and Edward Teller presented their theory in the Journal of the American Chemical Society. BET theory applies to systems of multilayer adsorption that usually utilizes a probing gas (called the adsorbate) that does not react chemically with the adsorptive (the material upon which the gas attaches to) to quantify specific surface area. Nitrogen is the most commonly employed gaseous adsorbate for probing surface(s). For this reason, standard BET analysis is most often conducted at the boiling temperature of N2 (77 K). Other probing adsorbates are also utilized, albeit less often, allowing the measurement of surface area at different temperatures and measurement scales. These include argon, carbon dioxide, and water. Specific surface area is a scale-dependent property, with no single true value of specific surface area definable, and thus quantities of specific surface area determined through BET theory may depend on the adsorbate molecule utilized and its adsorption cross section.
Concept
The concept of the theory is an extension of the Langmuir theory, which is a theory for monolayer molecular adsorption, to multilayer adsorption with the following hypotheses:
gas molecules physically adsorb on a solid in layers infinitely;
gas molecules only interact with adjacent layers; and
the Langmuir theory can be applied to each layer.
the enthalpy of adsorption for the first layer is constant and greater than the second (and higher).
the enthalpy of adsorption for the second (and higher) layers is the same as the enthalpy of liquefaction.
The resulting BET equation is
$$\frac{1}{v\left[(p_0/p) - 1\right]} = \frac{c-1}{v_m c}\left(\frac{p}{p_0}\right) + \frac{1}{v_m c},$$
where $p$ and $p_0$ are the equilibrium and saturation pressures of the adsorbate, $v$ is the adsorbed gas quantity, $v_m$ is the monolayer adsorbed gas quantity, and $c$ is referred to as the BET C-constant. |
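A small numerical sketch (the parameter values for c and v_m are assumed for illustration) shows how the linearized form is used in practice: generate an isotherm from the BET equation, fit the linear transform, and recover v_m and c from the slope and intercept.

```python
# Round-trip check of the linearized BET equation with assumed parameters.
import numpy as np

c, v_m = 100.0, 2.5             # BET constant and monolayer capacity (assumed)
x = np.linspace(0.05, 0.30, 6)  # relative pressures p/p0 in the classic BET range

v = v_m * c * x / ((1 - x) * (1 + (c - 1) * x))   # BET isotherm

# Linearized form: 1/(v((p0/p)-1)) = (c-1)/(v_m c) * (p/p0) + 1/(v_m c)
yy = 1.0 / (v * (1.0/x - 1.0))
slope, intercept = np.polyfit(x, yy, 1)
print("v_m =", 1.0 / (slope + intercept))   # ~2.5
print("c   =", 1.0 + slope / intercept)     # ~100
```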
https://en.wikipedia.org/wiki/Multi-configurational%20self-consistent%20field | Multi-configurational self-consistent field (MCSCF) is a method in quantum chemistry used to generate qualitatively correct reference states of molecules in cases where Hartree–Fock and density functional theory are not adequate (e.g., for molecular ground states which are quasi-degenerate with low-lying excited states or in bond-breaking situations). It uses a linear combination of configuration state functions (CSF), or configuration determinants, to approximate the exact electronic wavefunction of an atom or molecule. In an MCSCF calculation, the set of coefficients of both the CSFs or determinants and the basis functions in the molecular orbitals are varied to obtain the total electronic wavefunction with the lowest possible energy. This method can be considered a combination between configuration interaction (where the molecular orbitals are not varied but the expansion of the wave function) and Hartree–Fock (where there is only one determinant, but the molecular orbitals are varied).
MCSCF wave functions are often used as reference states for multireference configuration interaction (MRCI) or multi-reference perturbation theories like complete active space perturbation theory (CASPT2). These methods can deal with extremely complex chemical situations and, if computing power permits, may be used to reliably calculate molecular ground and excited states if all other methods fail.
Introduction
For the simplest single bond, found in the H2 molecule, molecular orbitals can always be written in terms of two functions χiA and χiB (which are atomic orbitals with small corrections) located at the two nuclei A and B:
$$\varphi_i = N_i\,(\chi_{iA} \pm \chi_{iB}),$$
where $N_i$ is a normalization constant. The ground-state wavefunction for H2 at the equilibrium geometry is dominated by the configuration (φ1)2, which means that the molecular orbital φ1 is nearly doubly occupied. The Hartree–Fock (HF) model assumes that it is doubly occupied, which leads to a total wavefunction
$$\Phi_{\mathrm{HF}} = \varphi_1(\mathbf{r}_1)\,\varphi_1(\mathbf{r}_2)\,\Theta_{0,0},$$
where $\Theta_{0,0}$ is the singlet (S = 0) spin |
https://en.wikipedia.org/wiki/Post%E2%80%93Hartree%E2%80%93Fock | In computational chemistry, post–Hartree–Fock (post-HF) methods are the set of methods developed to improve on the Hartree–Fock (HF), or self-consistent field (SCF) method. They add electron correlation which is a more accurate way of including the repulsions between electrons than in the Hartree–Fock method where repulsions are only averaged.
Details
In general, the SCF procedure makes several assumptions about the nature of the multi-body Schrödinger equation and its set of solutions:
For molecules, the Born–Oppenheimer approximation is inherently assumed. The true wavefunction should also be a function of the coordinates of each of the nuclei.
Typically, relativistic effects are completely neglected. The momentum operator is assumed to be completely nonrelativistic.
The basis set is composed of a finite number of orthogonal functions. The true wavefunction is a linear combination of functions from a complete (infinite) basis set.
The energy eigenfunctions are assumed to be products of one-electron wavefunctions. The effects of electron correlation, beyond that of exchange energy resulting from the anti-symmetrization of the wavefunction, are completely neglected.
For the great majority of systems under study, in particular for excited states and processes such as molecular dissociation reactions, the fourth item is by far the most important. As a result, the term post–Hartree–Fock method is typically used for methods of approximating the electron correlation of a system.
Usually, post–Hartree–Fock methods give more accurate results than Hartree–Fock calculations, although the added accuracy comes with the price of added computational cost.
Post–Hartree–Fock methods
Configuration interaction (CI)
Coupled cluster (CC)
Multi-configuration time-dependent Hartree (MCTDH)
Møller–Plesset perturbation theory (MP2, MP3, MP4, etc.)
Quadratic configuration interaction (QCI)
Quantum chemistry composite methods (G2, G3, CBS, T1, etc.)
Related methods
Method |
https://en.wikipedia.org/wiki/Hyperchlorhydria | Hyperchlorhydria, sometimes called chlorhydria, sour stomach or acid stomach, refers to the state in the stomach where gastric acid levels are higher than the reference range. The combining forms of the name (chlor- + hydr-), referring to chlorine and hydrogen, are the same as those in the name of hydrochloric acid, which is the active constituent of gastric acid.
In humans, the normal pH is around 1 to 3, which varies throughout the day. The highest basal secretion levels are in the late evening (around 12 A.M. to 3 A.M.). Hyperchlorhydria is usually defined as having a pH less than 2. It has no negative consequences unless other conditions are also present such as gastroesophageal reflux disease (GERD).
One cause of hyperchlorhydria is increased gastrin production.
See also
Achlorhydria
Hypochlorhydria |
https://en.wikipedia.org/wiki/Cyanolichen | Cyanolichens are lichens that apart from the basic fungal component ("mycobiont"), contain cyanobacteria, otherwise known as blue-green algae, as the photosynthesizing component ("photobiont"). Overall, about a third of lichen photobionts are cyanobacteria and the other two thirds are green algae.
Some lichens contain both green algae and cyanobacteria apart from the fungal component, in which case they are called "tripartite". Normally the photobiont occupies an extensive layer covering much of the thallus, but in tripartite lichens, the cyanobacterium component may be enclosed in pustule-like outgrowths of the main thallus called cephalodia, which can take many forms. Apart from gaining energy through photosynthesis, the cyanobacteria which live in cephalodia may perform nitrogen fixation on behalf of the lichen community. These cyanobacteria are generally more rich in nitrogen-fixing cells called heterocysts than those which live in the main photobiont layer of lichens.
External links
Lichens of North America, by I. Brodo, S. Sharnoff and S.D. Sharnoff |
https://en.wikipedia.org/wiki/Tutu%20%28plant%29 | Tutu is a common name of Māori origin for plants in the genus Coriaria found in New Zealand.
Six New Zealand native species are known by the name:
Coriaria angustissima
Coriaria arborea
Coriaria lurida
Coriaria plumosa
Coriaria pteridoides
They are shrubs or trees; some are endemic to New Zealand. Most of the plant parts are poisonous, containing the neurotoxin tutin and its derivative hyenanchin. The widespread species Coriaria arborea is most often linked to cases of poisoning.
Honey containing tutin can be produced by honey bees feeding on honeydew produced by sap-sucking vine hopper insects (genus Scolypopa) feeding on tutu. The last recorded deaths from eating honey containing tutin were in the 1890s, although sporadic outbreaks of toxic honey poisoning continue to occur. Poisoning symptoms include delirium, vomiting, and coma. |
https://en.wikipedia.org/wiki/Flag%20of%20Indianapolis | The flag of Indianapolis has a dark blue field with a white five-pointed star pointing upwards in the center. Around the star is a circular field in red. Surrounding the red field is a white ring, from which extend four white stripes from top to bottom and from hoist to fly, thus creating four equal quadrants in the field. The stripes are about one-seventh the width of the flag, with the white ring the same width as the stripes. The diameter of the red circle is about two-ninths the width of the flag.
The current flag design was adopted by the City of Indianapolis on May 20, 1963. The flag was first raised from the City–County Building on November 7, 1963. It was designed by John Herron Art Institute student Roger E. Gohl.
History
First and second flag
The city's first municipal flag was designed by city council member William Johnson in 1911 and approved by a commission appointed by Mayor Samuel "Lew" Shank. The flag's unveiling was scheduled for July 4, 1911; however, it was reported that no one attended the ceremony as most residents were elsewhere greeting President William Howard Taft who was visiting Indianapolis for the Independence Day holiday.
A revised version of the first flag was designed by Harry B. Dynes and adopted by Common Council on June 21, 1915. The flag's design appears to draw inspiration from the American flag. The design divided the flag vertically into two sections. The first section (two-fifths of the flag's length) displays a dark blue field overlaid by a white ring with four white diagonal spokes radiating toward each of the section's four corners, representing the city's four diagonal avenues from Alexander Ralston's 1821 Plat of the Town of Indianapolis (Indiana, Kentucky, Massachusetts, and Virginia) meeting at Monument Circle. Eight white stars set in this section represent the city's four appointed boards (public works, public safety, health, and parks) and four elected officers (city clerk, controller, judge, and board of schoo |
https://en.wikipedia.org/wiki/Purple%20Rain%20%28song%29 | "Purple Rain" is a song by American musician Prince and his backing band the Revolution. It is the title track from the 1984 album of the same name, which in turn is the soundtrack album for the 1984 film of the same name starring Prince, and was released as the third single from the album. The song is a power ballad that combines rock, R&B, gospel, and orchestral music.
"Purple Rain" reached number two on the US Billboard Hot 100 and stayed there for two weeks, being kept off the top spot by "Wake Me Up Before You Go-Go" by Wham!. It reached the summit in Belgium and the Netherlands. It is certified gold by the Recording Industry Association of America (RIAA) and is considered to be one of Prince's signature songs. Following Prince's death in 2016, "Purple Rain" re-entered the Billboard Hot 100, where it reached number four. It also re-entered the UK Singles Chart at number six, placing two spaces higher than its original peak. In France, where it originally peaked at number 12, "Purple Rain" reached number one around a week after Prince's death.
"Purple Rain" is ranked at number 18 on Rolling Stone list of the 500 Greatest Songs of All Time and is included in the Rock and Roll Hall of Fame's 500 Songs that Shaped Rock and Roll. During the Super Bowl XLI halftime show in 2007, for which Prince was the featured performer, "Purple Rain" was the last song of his set; the event became especially notable when actual rain fell during the performance while the stage and stadium were lit up with purple lights, and the show continues to top lists of the best Super Bowl halftime shows of all time. Prince performed the song as the opening of a medley of his hits with Beyoncé at the 2004 Grammy Awards. It was also the final song he performed live, taking place at the end of his final performance in Atlanta on April 14, 2016, one week before he died.
Composition
Origins
"Purple Rain" was originally written as a country song and intended to be a collaboration with Stevie Nic |
https://en.wikipedia.org/wiki/Sticking%20probability | The sticking probability is the probability that molecules are trapped on surfaces and adsorb chemically. From Langmuir's adsorption isotherm, molecules cannot adsorb on surfaces when the adsorption sites are already occupied by other molecules, so the sticking probability can be expressed as follows:
$$s = s_0\,(1 - \theta),$$
where $s_0$ is the initial sticking probability and $\theta$ is the surface coverage fraction, ranging from 0 to 1.
Similarly, when molecules adsorb on surfaces dissociatively, the sticking probability is
$$s = s_0\,(1 - \theta)^2.$$
The square reflects the fact that dissociation of one molecule into two parts requires two adsorption sites. These equations are simple and easily understood, but they cannot explain experimental results.
In 1958, P. Kisliuk presented an equation for the sticking probability that can explain experimental results. In his theory, molecules are trapped in precursor states of physisorption before chemisorption. The molecules then encounter adsorption sites to which they can adsorb chemically, and behave as follows.
If these sites are not occupied, molecules do the following (with probability in parentheses):
adsorb on the surface chemically ()
desorb from the surface ()
move to the next precursor state ()
and if these sites are occupied, they
desorb from the surface ()
move to the next precursor state ()
Note that an occupied site is defined as one where there is a chemically bonded adsorbate, so by definition the probability of chemical adsorption there is zero. Then the sticking probability is, according to equation (6) of the reference,
$$s = s_0 \left(1 + K\,\frac{\theta}{1 - \theta}\right)^{-1},$$
where the constant $K$ is built from the transition probabilities above.
When $K = 1$, this equation is identical in result to Langmuir's adsorption isotherm.
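The two models can be compared numerically (a sketch with illustrative values of s0 and K, not taken from Kisliuk's paper):

```python
# Compare the Langmuir and Kisliuk sticking probabilities versus coverage.
s0 = 1.0

def s_langmuir(theta):
    return s0 * (1 - theta)

def s_kisliuk(theta, K=0.1):   # K < 1: the precursor state enhances sticking
    return s0 / (1 + K * theta / (1 - theta))

for theta in (0.0, 0.25, 0.5, 0.75):
    print(f"theta={theta:.2f}  Langmuir={s_langmuir(theta):.3f}  "
          f"Kisliuk={s_kisliuk(theta):.3f}")
# With K = 1 the Kisliuk expression reduces exactly to the Langmuir result.
```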
Notes |
https://en.wikipedia.org/wiki/Lithium%20tantalate | Lithium tantalate is the inorganic compound with the formula LiTaO3. It is a white, diamagnetic, water-insoluble solid. The compound has the perovskite structure. It has optical, piezoelectric, and pyroelectric properties that make it valuable for nonlinear optics, passive infrared sensors such as motion detectors, terahertz generation and detection, surface acoustic wave applications, and cell phones. Considerable information is available from commercial sources about this material.
Applications and research
Lithium tantalate is a standard detector element in infrared spectrophotometers.
Pyroelectric fusion has been demonstrated using a lithium tantalate crystal that produced a charge large enough to generate and accelerate a beam of deuterium nuclei into a deuterated target, resulting in the production of a small flux of helium-3 and neutrons through nuclear fusion without extreme heat or pressure.
The freezing of water into ice can be controlled by the charge applied to a surface of pyroelectric LiTaO3 crystals. |
https://en.wikipedia.org/wiki/Routh%E2%80%93Hurwitz%20theorem | In mathematics, the Routh–Hurwitz theorem gives a test to determine whether all roots of a given polynomial lie in the left half-plane. Polynomials with this property are called Hurwitz stable polynomials. The Routh–Hurwitz theorem is important in dynamical systems and control theory, because the characteristic polynomial of the differential equations of a stable linear system has roots limited to the left half plane (negative eigenvalues). Thus the theorem provides a mathematical test, the Routh-Hurwitz stability criterion, to determine whether a linear dynamical system is stable without solving the system. The Routh–Hurwitz theorem was proved in 1895, and it was named after Edward John Routh and Adolf Hurwitz.
Notations
Let f(z) be a polynomial (with complex coefficients) of degree n with no roots on the imaginary axis (i.e. the line z = ic where i is the imaginary unit and c is a real number). Let us define $P_0(y)$ (a polynomial of degree n) and $P_1(y)$ (a nonzero polynomial of degree strictly less than n) by $f(iy) = P_0(y) + i P_1(y)$, respectively the real and imaginary parts of f on the imaginary line.
Furthermore, let us denote by:
p the number of roots of f in the left half-plane (taking into account multiplicities);
q the number of roots of f in the right half-plane (taking into account multiplicities);
$\Delta\arg f(iy)$ the variation of the argument of f(iy) when y runs from −∞ to +∞;
w(x) is the number of sign variations of the generalized Sturm chain obtained from $P_0(y)$ and $P_1(y)$ by applying the Euclidean algorithm;
$I_{-\infty}^{+\infty} r$ is the Cauchy index of the rational function r over the real line.
Statement
With the notations introduced above, the Routh–Hurwitz theorem states that:

$p - q = \frac{1}{\pi}\Delta\arg f(iy) = w(+\infty) - w(-\infty)$,

and these quantities can also be computed as a Cauchy index $I_{-\infty}^{+\infty} r$ of a rational function r formed from $P_0(y)$ and $P_1(y)$, with the sign and the choice of $P_0/P_1$ versus $P_1/P_0$ depending on the parity of n.
From the first equality we can for instance conclude that when the variation of the argument of f(iy) is positive, then f(z) will have more roots to the left of the imaginary axis than to its right.
The equality p − q = w(+∞) − w(−∞) can be viewed as the complex counterpart of Sturm's theorem. Note the differences: in Sturm's theorem, the left member is p + q, and the w from the right member counts the sign variations of an ordinary Sturm chain, whereas here w refers to a generalized Sturm chain. |
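As a concrete illustration of the Routh–Hurwitz stability criterion mentioned above, the following minimal sketch builds the classical Routh array for a polynomial with real coefficients and counts sign changes in its first column (Python with NumPy is assumed; the zero-pivot special cases of the full algorithm are deliberately omitted):

```python
import numpy as np

def is_hurwitz_stable(coeffs):
    """Check whether all roots of a real polynomial lie in the open left
    half-plane by counting sign changes in the first column of the Routh
    array. `coeffs` are ordered highest degree first; no zero pivots assumed."""
    n = len(coeffs) - 1
    if n < 1:
        return True
    cols = (n // 2) + 1
    routh = np.zeros((n + 1, cols))
    routh[0, :len(coeffs[0::2])] = coeffs[0::2]  # even-indexed coefficients
    routh[1, :len(coeffs[1::2])] = coeffs[1::2]  # odd-indexed coefficients
    for i in range(2, n + 1):
        for j in range(cols - 1):
            # Standard Routh recurrence on the two rows above.
            routh[i, j] = (routh[i-1, 0] * routh[i-2, j+1]
                           - routh[i-2, 0] * routh[i-1, j+1]) / routh[i-1, 0]
    first_col = routh[:, 0]
    # Stable iff the first column has no sign change.
    return bool(np.all(first_col > 0) or np.all(first_col < 0))

# z^2 + 3z + 2 = (z + 1)(z + 2): both roots in the left half-plane.
print(is_hurwitz_stable([1, 3, 2]))   # True
# z^2 - z + 1: roots with positive real part.
print(is_hurwitz_stable([1, -1, 1]))  # False
```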
https://en.wikipedia.org/wiki/Pentation | In mathematics, pentation (or hyper-5) is the next hyperoperation (infinite sequence of arithmetic operations) after tetration and before hexation. It is defined as iterated (repeated) tetration (assuming right-associativity), just as tetration is iterated right-associative exponentiation. It is a binary operation defined with two numbers a and b, where a is tetrated to itself b − 1 times. For instance, using hyperoperation notation for pentation and tetration, $2[5]3$ means tetrating 2 to itself 2 times, or $2[4](2[4]2)$. This can then be reduced to $2[4]4 = 2^{2^{2^{2}}} = 65536$.
Etymology
The word "pentation" was coined by Reuben Goodstein in 1947 from the roots penta- (five) and iteration. It is part of his general naming scheme for hyperoperations.
Notation
There is little consensus on the notation for pentation; as such, there are many different ways to write the operation. However, some are more widely used than others, and some have clear advantages or disadvantages compared to others.
Pentation can be written as a hyperoperation as $a[5]b$. In this format, $a[3]b$ may be interpreted as the result of repeatedly applying the function $x \mapsto a[2]x$, for $b$ repetitions, starting from the number 1. Analogously, $a[4]b$, tetration, represents the value obtained by repeatedly applying the function $x \mapsto a[3]x$, for $b$ repetitions, starting from the number 1, and the pentation $a[5]b$ represents the value obtained by repeatedly applying the function $x \mapsto a[4]x$, for $b$ repetitions, starting from the number 1. This will be the notation used in the rest of the article.
In Knuth's up-arrow notation, $a[5]b$ is represented as $a \uparrow\uparrow\uparrow b$ or $a \uparrow^{3} b$. In this notation, $a \uparrow b$ represents the exponentiation function and $a \uparrow\uparrow b$ represents tetration. The operation can be easily adapted for hexation by adding another arrow.
In Conway chained arrow notation, $a[5]b = a \rightarrow b \rightarrow 3$.
Another proposed notation is ${}_{b}a$ (mirroring the ${}^{b}a$ notation for tetration), though this is not extensible to higher hyperoperations.
Examples
The values of the pentation function may also be obtained from the values in the fourth row of the table of values of a variant of the Ackermann function: if $A(n, m)$ is defined by the Ackermann recurrence $A(n, m) = A(n-1, A(n, m-1))$ with the initial conditions $A(1, m) = 2m$ and $A(n, 1) = 2$, then $2[5]m = A(4, m)$. |
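A direct recursive implementation makes the hyperoperation hierarchy concrete. The following is a minimal sketch (Python is assumed; closed forms are used for the first three operations so that small examples terminate quickly):

```python
def hyper(n, a, b):
    """Hyperoperation a[n]b: n=1 addition, n=2 multiplication,
    n=3 exponentiation, n=4 tetration, n=5 pentation."""
    if n == 1:
        return a + b
    if n == 2:
        return a * b
    if n == 3:
        return a ** b
    if b == 0:
        return 1  # a[n]0 = 1 for n >= 4
    # Right-associative iteration: a[n]b = a[n-1]( a[n](b-1) ).
    return hyper(n - 1, a, hyper(n, a, b - 1))

print(hyper(4, 2, 4))  # tetration: 2[4]4 = 65536
print(hyper(5, 2, 3))  # pentation: 2[5]3 = 2[4](2[4]2) = 65536
```

Even slightly larger arguments (such as $2[5]4$, a power tower of 65536 twos) produce numbers far too large to evaluate, which is why pentation values are usually described rather than computed.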
https://en.wikipedia.org/wiki/List%20of%20chaotic%20maps | In mathematics, a chaotic map is a map (namely, an evolution function) that exhibits some sort of chaotic behavior. Maps may be parameterized by a discrete-time or a continuous-time parameter. Discrete maps usually take the form of iterated functions. Chaotic maps often occur in the study of dynamical systems.
Chaotic maps often generate fractals. Although a fractal may be constructed by an iterative procedure, some fractals are studied in and of themselves, as sets rather than in terms of the map that generates them. This is often because there are several different iterative procedures to generate the same fractal.
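As a concrete example of a discrete-time map given by an iterated function, the following minimal sketch (in Python; the logistic map with parameter r = 4 is a standard illustrative choice, not drawn from this article's surviving text) shows the sensitive dependence on initial conditions that characterizes chaotic behavior:

```python
def logistic_map(x, r=4.0):
    """One step of the logistic map x -> r * x * (1 - x).
    For r = 4 the map is chaotic on the interval [0, 1]."""
    return r * x * (1.0 - x)

# Two nearby initial conditions diverge quickly: sensitive dependence
# on initial conditions, the hallmark of chaos.
x, y = 0.2, 0.2 + 1e-10
for step in range(50):
    x, y = logistic_map(x), logistic_map(y)
print(abs(x - y))  # order-1 separation despite a 1e-10 initial difference
```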
List of chaotic maps
List of fractals
Cantor set
de Rham curve
Gravity set, or Mitchell-Green gravity set
Julia set - derived from complex quadratic map
Koch snowflake - special case of de Rham curve
Lyapunov fractal
Mandelbrot set - derived from complex quadratic map
Menger sponge
Newton fractal
Nova fractal - derived from Newton fractal
Quaternionic fractal - three-dimensional complex quadratic map
Sierpinski carpet
Sierpinski triangle |