https://en.wikipedia.org/wiki/Particle-size%20distribution
In granulometry, the particle-size distribution (PSD) of a powder, granular material, or particles dispersed in fluid is a list of values or a mathematical function that defines the relative amount, typically by mass, of particles present according to size. Significant energy is usually required to disintegrate soil and similar materials into individual particles; the resulting PSD is then called a grain size distribution.

Significance

The PSD of a material can be important in understanding its physical and chemical properties. It affects the strength and load-bearing properties of rocks and soils. It affects the reactivity of solids participating in chemical reactions, and it must be tightly controlled in many industrial products, such as printer toner, cosmetics, and pharmaceuticals.

Significance in the collection of particulate matter

Particle size distribution can greatly affect the efficiency of any collection device.

Settling chambers will normally only collect very large particles, those that can be separated using sieve trays.

Centrifugal collectors will normally collect particles down to about 20 μm; higher-efficiency models can collect particles down to 10 μm.

Fabric filters are among the most efficient and cost-effective types of dust collector available and can achieve a collection efficiency of more than 99% for very fine particles.

Wet scrubbers are dust collectors that use a liquid. In these systems, the scrubbing liquid (usually water) comes into contact with a gas stream containing dust particles. The greater the contact between the gas and liquid streams, the higher the dust removal efficiency.

Electrostatic precipitators use electrostatic forces to separate dust particles from exhaust gases. They can be very efficient at collecting very fine particles.

Filter presses are used for filtering liquids by a cake filtration mechanism. The PSD plays an important part in cake formation, cake resistance, and cake characteristics; the filterability of the liquid is determined largely by the size of the particles.

Nomenclature

ρp: Actual particle density (g/cm³)
ρg: Gas or sample matrix density (g/cm³)
r²: Least-squares coefficient of determination. The closer this value is to 1.0, the better the data fit a hyperplane representing the relationship between the response variable and a set of covariates. A value of 1.0 indicates that all data fit perfectly within the hyperplane.
λ: Gas mean free path (cm)
D50: Mass-median diameter (MMD), the log-normal distribution mass median diameter. The MMD is considered the average particle diameter by mass.
σg: Geometric standard deviation, determined by the equation σg = D84.13/D50 = D50/D15.87. The value of σg determines the slope of the least-squares regression curve.
α: Relative standard deviation, or degree of polydispersity, given by α = σg/D50. For values less than 0.1, the particulate sample can be considered monodisperse.
Re(P): Particle Reynolds number. In contrast to the large values typical of the flow Reynolds number, the particle Reynolds number for fine particles in gaseous media is typically less than 0.1.
Ref: Flow Reynolds number.
Kn: Particle Knudsen number.

Types

PSD is usually defined by the method by which it is determined. The most easily understood method of determination is sieve analysis, where powder is separated on sieves of different sizes. Thus, the PSD is defined in terms of discrete size ranges: e.g.
"% of sample between 45 μm and 53 μm", when sieves of these sizes are used. The PSD is usually determined over a list of size ranges that covers nearly all the sizes present in the sample. Some methods of determination allow much narrower size ranges to be defined than can be obtained by use of sieves, and are applicable to particle sizes outside the range available in sieves. However, the idea of the notional "sieve", that "retains" particles above a certain size, and "passes" particles below that size, is universally used in presenting PSD data of all kinds. The PSD may be expressed as a "range" analysis, in which the amount in each size range is listed in order. It may also be presented in "cumulative" form, in which the total of all sizes "retained" or "passed" by a single notional "sieve" is given for a range of sizes. Range analysis is suitable when a particular ideal mid-range particle size is being sought, while cumulative analysis is used where the amount of "under-size" or "over-size" must be controlled. The way in which "size" is expressed is open to a wide range of interpretations. A simple treatment assumes the particles are spheres that will just pass through a square hole in a "sieve". In practice, particles are irregular – often extremely so, for example in the case of fibrous materials – and the way in which such particles are characterized during analysis is very dependent on the method of measurement used. Sampling Before a PSD can be determined, it is vital that a representative sample is obtained. In the case where the material to be analysed is flowing, the sample must be withdrawn from the stream in such a way that the sample has the same proportions of particle sizes as the stream. The best way to do this is to take many samples of the whole stream over a period, instead of taking a portion of the stream for the whole time.p. 6 In the case where the material is in a heap, scoop or thief sampling needs to be done, which is inaccurate: the sample should ideally have been taken while the powder was flowing towards the heap.p. 10 After sampling, the sample volume typically needs to be reduced. The material to be analysed must be carefully blended, and the sample withdrawn using techniques that avoid size segregation, for example using a rotary dividerp. 5. Particular attention must be paid to avoidance of loss of fines during manipulation of the sample. Measurement techniques Sieve analysis Sieve analysis is often used because of its simplicity, cheapness, and ease of interpretation. Methods may be simple shaking of the sample in sieves until the amount retained becomes more or less constant. Alternatively, the sample may be washed through with a non-reacting liquid (usually water) or blown through with an air current. Advantages: this technique is well-adapted for bulk materials. A large amount of materials can be readily loaded into sieve trays. Two common uses in the powder industry are wet-sieving of milled limestone and dry-sieving of milled coal. Disadvantages: many PSDs are concerned with particles too small for separation by sieving to be practical. A very fine sieve, such as 37 μm sieve, is exceedingly fragile, and it is very difficult to get material to pass through it. Another disadvantage is that the amount of energy used to sieve the sample is arbitrarily determined. Over-energetic sieving causes attrition of the particles and thus changes the PSD, while insufficient energy fails to break down loose agglomerates. 
Although manual sieving procedures can be ineffective, automated sieving technologies using image fragmentation analysis software are available. These technologies can size material by capturing and analyzing a photo of it.

Air elutriation analysis

Material may be separated by means of air elutriation, which employs an apparatus with a vertical tube through which fluid is passed at a controlled velocity. When the particles are introduced, often through a side tube, the smaller particles are carried over in the fluid stream while the large particles settle against the upward current. Starting at a low flow rate, small, less dense particles attain their terminal velocities and flow with the stream; they are collected in the overflow and hence separated from the feed. The flow rate can then be increased to separate progressively larger size ranges. Further size fractions may be collected if the overflow from the first tube is passed vertically upwards through a second tube of greater cross-section, and any number of such tubes can be arranged in series.

Advantages: a bulk sample is analyzed using centrifugal classification, and the technique is non-destructive. Each cut-point can be recovered for future size-specific chemical analyses. This technique has been used for decades in the air pollution control industry (the data being used for the design of control devices). It determines particle size as a function of settling velocity in an air stream (as opposed to water or some other liquid).

Disadvantages: a bulk sample (about ten grams) must be obtained, and it is a fairly time-consuming analytical technique. The actual test method has been withdrawn by ASME due to obsolescence, so instrument calibration materials are no longer available.

Photoanalysis

Materials can now be analysed through photoanalysis procedures. Unlike sieve analyses, which can be time-consuming and inaccurate, taking a photo of a sample of the material to be measured and using software to analyze the photo can yield rapid, accurate measurements. Another advantage is that the material can be analyzed without being handled; this is beneficial in the agricultural industry, where handling of food products can lead to contamination. Photoanalysis equipment and software are currently used in the mining, forestry, and agricultural industries worldwide.

Optical counting methods

PSDs can be measured microscopically by sizing against a graticule and counting, but for a statistically valid analysis, millions of particles must be measured. This is impossibly arduous when done manually, but automated analysis of electron micrographs is now commercially available. It is used to determine particle size within the range of 0.2 to 100 micrometres.

Electroresistance counting methods

An example of this is the Coulter counter, which measures the momentary changes in the conductivity of a liquid passing through an orifice that occur when individual non-conducting particles pass through. The particle count is obtained by counting pulses, and each pulse is proportional to the volume of the sensed particle.

Advantages: very small sample aliquots can be examined.

Disadvantages: the sample must be dispersed in a liquid medium, and some particles may (partially or fully) dissolve in the medium, altering the size distribution. The results are only related to the projected cross-sectional area that a particle displaces as it passes through the orifice.
This is a physical diameter, not directly related to mathematical descriptions of particle behaviour (e.g. terminal settling velocity).

Sedimentation techniques

These are based on studying the terminal velocity acquired by particles suspended in a viscous liquid. Sedimentation time is longest for the finest particles, so this technique is useful for sizes below 10 μm, but sub-micrometre particles cannot be reliably measured due to the effects of Brownian motion. Typical apparatus disperses the sample in liquid, then measures the density of the column at timed intervals. Other techniques determine the optical density of successive layers using visible light or X-rays.

Advantages: this technique determines particle size as a function of settling velocity.

Disadvantages: the sample must be dispersed in a liquid medium, and some particles may (partially or fully) dissolve in the medium, altering the size distribution, so the dispersion medium must be selected carefully. Density is highly dependent on the fluid temperature remaining constant. X-rays will not count carbon (organic) particles. Many of these instruments require a bulk sample (e.g. two to five grams).

Laser diffraction methods

These depend upon analysis of the "halo" of diffracted light produced when a laser beam passes through a dispersion of particles in air or in a liquid. The angle of diffraction increases as particle size decreases, so this method is particularly good for measuring sizes between 0.1 and 3,000 μm. Advances in data processing and automation have allowed this to become the dominant method used in industrial PSD determination. The technique is relatively fast and can be performed on very small samples. A particular advantage is that it can generate a continuous measurement for analyzing process streams. Laser diffraction measures particle size distributions by measuring the angular variation in the intensity of light scattered as a laser beam passes through a dispersed particulate sample. Large particles scatter light at small angles relative to the laser beam, and small particles scatter light at large angles. The angular scattering intensity data are then analyzed to calculate the size of the particles responsible for creating the scattering pattern, using the Mie theory or the Fraunhofer approximation of light scattering. The particle size is reported as a volume-equivalent sphere diameter.

"Laser obscuration time" (LOT) or "time of transition" (TOT)

A focused laser beam rotates at a constant frequency and interacts with particles within the sample medium. Each randomly scanned particle obscures the laser beam to its dedicated photodiode, which measures the time of obscuration. The time of obscuration relates directly to the particle's diameter via a simple calculation: the known beam rotation velocity is multiplied by the directly measured time of obscuration (D = V × t).

Acoustic spectroscopy or ultrasound attenuation spectroscopy

Instead of light, this method employs ultrasound to collect information on particles dispersed in fluid. Dispersed particles absorb and scatter ultrasound similarly to light. This has been known since Lord Rayleigh developed the first theory of ultrasound scattering and published "The Theory of Sound" in 1878. There were hundreds of papers studying ultrasound propagation through fluid particulates in the 20th century.
It turns out that, instead of measuring scattered energy versus angle as with light, it is a better choice in the case of ultrasound to measure transmitted energy versus frequency. The resulting ultrasound attenuation frequency spectra are the raw data for calculating the particle size distribution. They can be measured for any fluid system with no dilution or other sample preparation, which is a major advantage of this method. Calculation of the particle size distribution is based on theoretical models that are well verified for up to 50% by volume of dispersed particles on micrometre and nanometre scales. However, as concentration increases and particle sizes approach the nanoscale, conventional modelling must include shear-wave reconversion effects in order for the models to accurately reflect the real attenuation spectra.

Air pollution emissions measurements

Cascade impactors: particulate matter is withdrawn isokinetically from a source and segregated by size in a cascade impactor at the sampling point's exhaust conditions of temperature, pressure, etc. Cascade impactors use the principle of inertial separation to size-segregate particle samples from a particle-laden gas stream. The mass of each size fraction is determined gravimetrically. The California Air Resources Board Method 501 is currently the most widely accepted test method for particle size distribution emissions measurements.

Mathematical models

Probability distributions

The log-normal distribution is often used to approximate the particle size distribution of aerosols, aquatic particles, and pulverized material. The Weibull distribution, or Rosin–Rammler distribution, is useful for representing particle size distributions generated by grinding, milling, and crushing operations. The log-hyperbolic distribution was proposed by Bagnold and Barndorff-Nielsen to model the particle-size distribution of naturally occurring sediments; this model suffers from having non-unique solutions for a range of probability coefficients. The skew log-Laplace model was proposed by Fieller, Gilbertson and Olbricht as a simpler alternative to the log-hyperbolic distribution.

Rosin–Rammler distribution

The Weibull distribution, now named after Waloddi Weibull, was first identified by Fréchet (1927) and first applied by Rosin and Rammler (1933) to describe particle size distributions. It is still widely used in mineral processing to describe particle size distributions in comminution processes. Written as the mass fraction P of particles finer than size D, it takes the form

P(D) = 1 − exp(−ln(5) · (D/D80)^m) for D ≥ 0

where
D: particle size
D80: 80th percentile of the particle size distribution (so that P(D80) = 0.8)
m: parameter describing the spread of the distribution

The inverse distribution is given by

D = D80 · (−ln(1 − P)/ln(5))^(1/m)

where
P: mass fraction

Parameter estimation

The parameters of the Rosin–Rammler distribution can be determined by rearranging the distribution function to the form

ln(−ln(1 − P)) = m·ln(D) − m·ln(D80) + ln(ln(5))

Hence the slope of the line in a plot of ln(−ln(1 − P)) versus ln(D) yields the parameter m, and D80 is determined by substituting the intercept back into this equation.
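As an illustration of this estimation procedure, the following Python sketch fits m and D80 by ordinary least squares on the linearized form; the size and mass-fraction data are invented for illustration:

```python
import numpy as np

# Illustrative cumulative data: particle size D (um) and mass fraction passing P.
D = np.array([20.0, 40.0, 80.0, 160.0, 320.0])
P = np.array([0.05, 0.20, 0.52, 0.80, 0.97])

# Linearized Rosin-Rammler: ln(-ln(1-P)) = m*ln(D) - m*ln(D80) + ln(ln 5)
x = np.log(D)
y = np.log(-np.log(1.0 - P))
m, intercept = np.polyfit(x, y, 1)

# Solve the intercept for D80: intercept = ln(ln 5) - m*ln(D80)
D80 = np.exp((np.log(np.log(5.0)) - intercept) / m)
print(f"m = {m:.2f}, D80 = {D80:.0f} um")

# Sanity check: the fitted model returns P = 0.8 at D = D80 by construction.
P_check = 1.0 - np.exp(-np.log(5.0) * (D80 / D80) ** m)
print(f"P(D80) = {P_check:.2f}")
```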
"Recognizing overlapped particles during a crystallization process from in situ video images for measuring their size distributions.",In 10th SPIE International Conference on Quality Control by Artificial Vision (QCAV), Saint-Etienne, France,June 2011. O. Ahmad, J. Debayle, N. Gherras, B. Presles, G. Févotte, and J. C. Pinoli. "Quantification of overlapping polygonal-shaped particles based on a new segmentation method of in situ images during crystallization.",Journal of Electronic Imaging, 21(2), 021115, 2012. . . External links Free expert system for size analysis technique selection Matlab toolbox for integrating and calibrating particle-size data from multiple sources Aerosols Chemical mixtures Colloidal chemistry Particle technology
https://en.wikipedia.org/wiki/Genetically%20modified%20fish
Genetically modified fish (GM fish) are organisms from the taxonomic clade which includes the classes Agnatha (jawless fish), Chondrichthyes (cartilaginous fish) and Osteichthyes (bony fish) whose genetic material (DNA) has been altered using genetic engineering techniques. In most cases, the aim is to introduce a new trait to the fish which does not occur naturally in the species, i.e. transgenesis.

GM fish are used in scientific research and kept as pets. They are being developed as environmental pollutant sentinels and for use in aquaculture food production. In 2015, the AquAdvantage salmon was approved by the US Food and Drug Administration (FDA) for commercial production, sale and consumption, making it the first genetically modified animal to be approved for human consumption. Some GM fish that have been created have promoters driving an over-production of "all fish" growth hormone. This results in dramatic growth enhancement in several species, including salmonids, carps and tilapias.

Critics have objected to GM fish on several grounds, including ecological concerns, animal welfare concerns, and questions of whether using them as food is safe and whether GM fish are needed to help address the world's food needs.

History and process

The first transgenic fish were produced in China in 1985. As of 2013, approximately 50 species of fish had been subject to genetic modification, resulting in more than 400 fish/trait combinations. Most of the modifications have been conducted on food species, such as Atlantic salmon (Salmo salar), tilapia and common carp (Cyprinus carpio).

Generally, genetic modification entails manipulation of DNA. The process is known as cisgenesis when a gene is transferred between organisms that could be conventionally bred, or transgenesis when a gene from one species is added to a different species. Gene transfer into the genome of the desired organism, as for fish in this case, requires a vector such as a lentivirus, or mechanical/physical insertion of the altered genes into the nucleus of the host by means of a microsyringe or a gene gun.

Uses

Research

Transgenic fish are used in research covering five broad areas:

Enhancing the traits of commercially available fish
Use as bioreactors for the development of biomedically important proteins
Use as indicators of aquatic pollutants
Developing new non-mammalian animal models
Functional genomics studies

Most GM fish are used in basic research in genetics and development. Two species of fish, zebrafish (Danio rerio) and medaka (Japanese rice fish, Oryzias latipes), are most commonly modified because they have optically clear chorions (shells), develop rapidly, and have 1-cell embryos that are easy to see and micro-inject with transgenic DNA; zebrafish additionally have the capability of regenerating their organ tissues. They are also used in drug discovery. GM zebrafish are being explored for their potential to unlock the mysteries of human organ tissue disease and failure. For instance, zebrafish are used to understand heart tissue repair and regeneration in efforts to study and discover cures for cardiovascular diseases. Transgenic rainbow trout (Oncorhynchus mykiss) have been developed to study muscle development. The introduced transgene causes green fluorescence to appear in fast-twitch muscle fibres early in development, which persists throughout life. It has been suggested that such fish might be used as indicators of aquatic pollutants or other factors which influence development.
In intensive fish farming, the fish are kept at high stocking densities, which means they suffer from frequent transmission of contagious diseases, a problem being addressed by GM research. Grass carp (Ctenopharyngodon idella) have been modified with a transgene coding for human lactoferrin, which doubles their survival rate relative to control fish after exposure to Aeromonas bacteria and grass carp hemorrhage virus. Cecropin has been used in channel catfish to enhance their protection against several pathogenic bacteria by two to four times.

Recreation

Pets

GloFish is a patented technology which allows GM fish (tetra, barb, zebrafish) to express jellyfish and sea coral proteins, giving the fish bright red, green or orange fluorescent colors when viewed in ultraviolet light. Although the fish were originally created and patented for scientific research at the National University of Singapore, a Texas company, Yorktown Technologies, obtained the rights to market the fish as pets. They became the first genetically modified animal to become publicly available as a pet when introduced for commercial sale in 2003. They were quickly banned for sale in California; however, they are now on shelves once again in that state. As of 2013, GloFish are only sold in the US.

Other transgenic lines of pet fish include medaka which remain transparent throughout their lives, pink-bodied transgenic angelfish (Pterophyllum scalare), and lionhead fish expressing the Acropora coral (Acropora millepora) red fluorescent protein. The ocean pout type III antifreeze protein transgene has been successfully micro-injected into and expressed in goldfish; the transgenic goldfish showed higher cold tolerance than controls.

Food

One area of intensive research with GM fish has aimed to increase food production by modifying the expression of growth hormone (GH). The relative increases in growth differ between species, ranging from a doubling in weight to fish that are almost 100 times heavier than the wild type at a comparable age. This research has resulted in dramatic growth enhancement in several species, including salmon, trout and tilapia. Other sources indicate an 11-fold and 30-fold increase in growth of salmon and mud loach, respectively, compared to wild-type fish. Transgenic fish development has reached the stage where several species are ready to be marketed in different countries, for example GM tilapia in Cuba, GM carp in the People's Republic of China, and GM salmon in the US and Canada. In 2014, it was reported that applications for the approval of transgenic fish as food had been made in Canada, China, Cuba and the United States.

Over-production of GH from the pituitary gland increases growth rate mainly through an increase in food consumption by the fish, but also through a 10 to 15% increase in feed conversion efficiency. Another approach to increasing meat production in GM fish is "double muscling", which in rainbow trout produces a phenotype similar to that of Belgian Blue cattle. It is achieved by using transgenes expressing follistatin, which inhibits myostatin, leading to the development of two muscle layers.

AquAdvantage salmon

In November 2015, the US FDA approved the AquAdvantage salmon, created by AquaBounty, for commercial production, sale and consumption. It is the first genetically modified animal to be approved for human consumption.
The fish is essentially an Atlantic salmon with a single gene complex inserted: a growth hormone regulating gene from a Chinook salmon with a promoter sequence from an ocean pout. This permits the GM salmon to produce GH year round, rather than pausing for part of the year as wild-type Atlantic salmon do. Wild-type salmon take 24 to 30 months to reach market size (4–6 kg), whereas the GM salmon requires only 18 months. AquaBounty argues that its GM salmon can be grown nearer to end markets and with greater efficiency (they require 25% less feed to achieve market weight) than the Atlantic salmon currently reared in remote coastal fish farms, making it better for the environment, with recycled waste and lower transport costs.

To prevent the genetically modified fish from inadvertently breeding with wild salmon, all the fish raised for food are female and triploid, and 99% are reproductively sterile. The fish are raised in a facility in Panama with physical barriers and geographical containment, such as river and ocean temperatures too high to support salmon survival, to prevent escape. The FDA has determined that AquAdvantage would not have a significant effect on the environment in the United States. A fish farm is also being readied in Indiana, where the FDA has approved the importation of salmon eggs. As of August 2017, GM salmon was being sold in Canada; sales in the US began in May 2021.

Detecting aquatic pollution (potential)

Several research groups have been developing GM zebrafish to detect aquatic pollution. The laboratory that developed the GloFish originally intended them to change color in the presence of pollutants, as environmental sentinels. Teams at the University of Cincinnati and Tulane University have been developing GM fish for the same purpose. Several transgenic methods have been used to introduce target DNA into zebrafish for environmental monitoring, including micro-injection, electroporation, particle gun bombardment, liposome-mediated gene transfer, and sperm-mediated gene transfer. Micro-injection is the most commonly used method to produce transgenic zebrafish, as it gives the highest survival rate.

Regulation

The regulation of genetic engineering concerns the approaches taken by governments to assess and manage the risks associated with the development and release of genetically modified organisms. There are differences in the regulation of GMOs between countries, with some of the most marked differences occurring between the US and Europe. Regulation varies within a given country depending on the intended use of the products of the genetic engineering; for example, a fish not intended for food use is generally not reviewed by authorities responsible for food safety. The US FDA guidelines for evaluating transgenic animals define transgenic constructs as "drugs" regulated under the animal drug provisions of the Federal Food, Drug, and Cosmetic Act. This classification is important for several reasons: it places all GM food animal permits under the jurisdiction of the FDA's Center for Veterinary Medicine (CVM), it imposes limits on what information the FDA can release to the public, and it avoids a more open food safety review process. The US states of Washington and Maine have imposed permanent bans on the production of transgenic fish.
Controversy

Critics have objected to the use of genetic engineering per se on several grounds, including ethical concerns, ecological concerns (especially about gene flow), and economic concerns raised by the fact that GM techniques and GM organisms are subject to intellectual property law. GMOs are also involved in controversies over GM food with respect to whether using GM fish as food is safe, whether it would exacerbate or cause fish allergies, whether it should be labeled, and whether GM fish and crops are needed to address the world's food needs. These controversies have led to litigation, international trade disputes, protests, and restrictive regulation of commercial products in most countries. There is much doubt among the public about genetically modified animals in general; acceptance of GM fish by the general public is believed to be the lowest of all GM animals used for food and pharmaceuticals.

Ethical concerns

In fast-growing fish genetically modified for growth hormone, the mosaic founder fish vary greatly in their growth rate, reflecting the highly variable proportion and distribution of transgenic cells in their bodies. Fish with these high growth rates (and their progeny) sometimes develop a morphological abnormality similar to acromegaly in humans, exhibiting an enlarged head relative to the body and a bulging operculum. This becomes progressively worse as the fish ages, can interfere with feeding, and may ultimately cause death. According to a study commissioned by Compassion in World Farming, the abnormalities are probably a direct consequence of growth hormone over-expression and have been reported in GM coho salmon, rainbow trout, common carp, channel catfish and loach, but to a lesser extent in Nile tilapia. In GM coho salmon (Oncorhynchus kisutch) there are morphological changes and changed allometry that lead to reduced swimming abilities. They also exhibit abnormal behaviour, such as increased activity levels with respect to feed intake and swimming. Several other transgenic fish show decreased swimming ability, likely due to body shape and muscle structure.

Genetically modified triploid fish are more susceptible to temperature stress, have a higher incidence of deformities (e.g. abnormalities of the eye and lower jaw), and are less aggressive than diploids. Other welfare concerns for GM fish include increased stress under oxygen-deprived conditions caused by an increased need for oxygen. It has been shown that deaths due to low oxygen levels (hypoxia) in coho salmon are most pronounced in transgenics. It has been suggested that the increased sensitivity to hypoxia is caused by the insertion of the extra set of chromosomes requiring a larger nucleus, which leads to a larger cell overall and a reduction in the cell's surface-area-to-volume ratio.

Ecological concerns

Transgenic fish are usually developed in strains of near-wild origin. These have an excellent capacity for interbreeding among themselves or with wild relatives, and therefore a significant possibility of establishing themselves in nature should they escape biotic or abiotic containment measures. A wide range of concerns about the consequences of escaped genetically modified fish have been expressed. For polyploids, these include the degree of sterility, interference with spawning, and competition for resources without contributing to subsequent generations.
For transgenics, the concerns include the characteristics of the genotype, the function of the gene, the type of the gene, the potential for causing pleiotropic effects, the potential for interacting with the remainder of the genome, the stability of the construct, and the ability of the DNA construct to transpose within or between genomes. One study, using relevant life-history data from the Japanese medaka (Oryzias latipes), predicts that a transgene introduced into a natural population by a small number of transgenic fish will spread as a result of enhanced mating advantage, but that the reduced viability of offspring will cause eventual local extinction of both populations.

GM coho salmon show greater risk-taking behaviour and better use of limited food than wild-type fish. Transgenic coho salmon have enhanced feeding capacity and growth, which can result in a considerably larger body size (more than 7-fold) compared to non-transgenic salmon. When transgenic and non-transgenic salmon in the same enclosure compete for different levels of food, transgenic individuals consistently outgrow non-transgenic individuals. When food abundance is low, dominant individuals emerge, invariably transgenic, that show strong agonistic and cannibalistic behavior toward cohorts and dominate the acquisition of the limited food resources. When food availability is low, all groups containing transgenic salmon experience population crashes or complete extinctions, whereas groups containing only non-transgenic salmon have good (72%) survival rates. This has led to the suggestion that these GM fish would survive better than the wild type when conditions are very poor. Successful artificial transgenic hybridization between two species of loach (genus Misgurnus) has been reported, although these species are not known to hybridize naturally. GloFish were not considered an environmental threat because they are less fit than normal zebrafish, which are in any case unable to establish themselves in the wild in the US.

AquAdvantage salmon

The FDA has said the AquAdvantage salmon can be safely contained in land-based tanks with little risk of escape into the wild; however, Joe Perry, former chair of the GM panel of the European Food Safety Authority, has been quoted as saying, "There remain legitimate ecological concerns over the possible consequences if these GM salmon escape to the wild and reproduce, despite FDA assurances over containment and sterility, neither of which can be guaranteed". AquaBounty states that its GM salmon cannot interbreed with wild fish because they are triploid, which makes them sterile. The possibility of fertile triploids is one of the major shortfalls of triploidy as a means of bio-containment for transgenic fish. It is estimated that 1.1% of the eggs remain diploid, and therefore capable of breeding, despite the triploidy process; others have claimed the sterility process has a failure rate of 5%. With around a million fish in each of the 3,000 Atlantic sites, a single failure could result in the release of 1,100 to 5,000 genetically altered fish capable of reproducing. Large-scale trials using normal pressure, high pressure, or high pressure plus aged eggs for transgenic coho salmon give triploidy frequencies of only 99.8%, 97.6%, and 97.0%, respectively. AquaBounty also emphasizes that its GM salmon would not survive wild conditions, owing to the geographical locations where its research is conducted and where its farms are located.
The GH transgene can be transmitted by hybridization of GM AquAdvantage salmon with the closely related wild brown trout (Salmo trutta). Transgenic hybrids are viable and grow more rapidly than transgenic salmon and other wild-type crosses under conditions emulating a hatchery. In stream mesocosms designed to simulate natural conditions, transgenic hybrids express competitive dominance and suppress the growth of transgenic and non-transgenic salmon by 82% and 54%, respectively. Natural levels of hybridization between these two species can be as high as 41%. Researchers examining this possibility concluded: "Ultimately, we suggest that hybridization of transgenic fishes with closely related species represents potential ecological risks for wild populations and a possible route for introgression of a transgene, however low the likelihood, into a new species in nature."

An article in Slate in December 2012 by Jon Entine, director of the Genetic Literacy Project, criticized the Obama administration for preventing the publication of the environmental assessment (EA) of the AquAdvantage salmon, which was completed in April 2012 and which concluded that "the salmon is safe to eat and poses no serious environmental hazards." The Slate article said that publication of the report was stopped "after meetings with the White House, which was debating the political implications of approving the GM salmon, a move likely to infuriate a portion of its base". Within days of the article's publication, and less than two months after the election, the FDA released the draft EA and opened the comment period.
https://en.wikipedia.org/wiki/Disulfide%20bond%20formation%20protein%20B
Disulfide bond formation protein B (DsbB) is a protein component of the pathway that leads to disulfide bond formation in periplasmic proteins of Escherichia coli and other bacteria. In Bacillus subtilis it is known as BdbC. The DsbB protein oxidizes the periplasmic protein DsbA, which in turn oxidizes cysteines in other periplasmic proteins in order to make disulfide bonds. DsbB acts as a redox potential transducer across the cytoplasmic membrane. It is a membrane protein which spans the membrane four times, with both the N- and C-termini of the protein in the cytoplasm. Each of the periplasmic domains of the protein has two essential cysteines. The two cysteines in the first periplasmic domain are in a Cys-X-Y-Cys configuration that is characteristic of the active site of other proteins involved in disulfide bond formation, including DsbA and protein disulfide isomerase.

See also

Disulfide bond formation protein A
Disulfide bond formation protein C
https://en.wikipedia.org/wiki/Ray%20transfer%20matrix%20analysis
Ray transfer matrix analysis (also known as ABCD matrix analysis) is a mathematical form for performing ray tracing calculations in sufficiently simple problems which can be solved considering only paraxial rays. Each optical element (surface, interface, mirror, or beam travel) is described by a ray transfer matrix which operates on a vector describing an incoming light ray to calculate the outgoing ray. Multiplication of the successive matrices thus yields a concise ray transfer matrix describing the entire optical system. The same mathematics is also used in accelerator physics to track particles through the magnet installations of a particle accelerator; see electron optics.

This technique, as described below, is derived using the paraxial approximation, which requires that all ray directions (directions normal to the wavefronts) are at small angles θ relative to the optical axis of the system, such that the approximation sin θ ≈ θ remains valid. A small θ further implies that the transverse extent of the ray bundles (x and y) is small compared to the length of the optical system (thus "paraxial"). Since a decent imaging system where this is not the case for all rays must still focus the paraxial rays correctly, this matrix method will properly describe the positions of focal planes and magnifications; however, aberrations still need to be evaluated using full ray-tracing techniques.

Matrix definition

The ray tracing technique is based on two reference planes, called the input and output planes, each perpendicular to the optical axis of the system. At any point along the optical train an optical axis is defined corresponding to a central ray; that central ray is propagated to define the optical axis further along the optical train, which need not be in the same physical direction (such as when bent by a prism or mirror). The transverse directions x and y (below, we only consider the x direction) are then defined to be orthogonal to the local optical axis.

A light ray enters a component crossing its input plane at a distance x1 from the optical axis, traveling in a direction that makes an angle θ1 with the optical axis. After propagation to the output plane, that ray is found at a distance x2 from the optical axis and at an angle θ2 with respect to it. n1 and n2 are the indices of refraction of the media in the input and output planes, respectively.

The ABCD matrix representing a component or system relates the output ray to the input according to

( x2 )   ( A  B ) ( x1 )
( θ2 ) = ( C  D ) ( θ1 )

where the values of the four matrix elements are thus given by

A = x2/x1 (with θ1 = 0),  B = x2/θ1 (with x1 = 0),
C = θ2/x1 (with θ1 = 0),  D = θ2/θ1 (with x1 = 0).

This relates the ray vectors at the input and output planes by the ray transfer matrix (RTM) M, which represents the optical component or system present between the two reference planes. A thermodynamics argument based on blackbody radiation can be used to show that the determinant of an RTM is the ratio of the indices of refraction:

det(M) = AD − BC = n1/n2

As a result, if the input and output planes are located within the same medium, or within two different media which happen to have identical indices of refraction, then the determinant of M is simply equal to 1.

A different convention for the ray vectors can be employed: instead of θ, the second element of the ray vector can be taken as n sin θ ≈ nθ, which is proportional not to the ray angle per se but to the transverse component of the wave vector. This alters the ABCD matrices given in the table below where refraction at an interface is involved.
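As a concrete check of these conventions, here is a minimal Python sketch; it uses the standard paraxial matrix for refraction at a flat interface (the numerical values are arbitrary) and verifies the determinant identity det(M) = n1/n2:

```python
import numpy as np

def propagate(M, x, theta):
    """Apply a ray transfer matrix M to the ray vector (x, theta)."""
    x2, theta2 = M @ np.array([x, theta])
    return x2, theta2

# Refraction at a flat interface from n1 to n2 (paraxial): x is unchanged,
# and the angle scales as theta2 = (n1/n2) * theta1.
n1, n2 = 1.0, 1.5
M_interface = np.array([[1.0, 0.0],
                        [0.0, n1 / n2]])

x2, theta2 = propagate(M_interface, 1.0e-3, 0.01)  # 1 mm height, 10 mrad
print(x2, theta2)                                   # 1e-3, ~0.00667

# Determinant check: det(M) = n1/n2
assert np.isclose(np.linalg.det(M_interface), n1 / n2)
```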
The use of transfer matrices in this manner parallels the 2×2 matrices describing electronic two-port networks, particularly the various so-called ABCD matrices which can similarly be multiplied to solve for cascaded systems.

Some examples

Free space example

As one example, if there is free space between the two planes, the ray transfer matrix is given by

S = ( 1  d )
    ( 0  1 )

where d is the separation distance (measured along the optical axis) between the two reference planes. The ray transfer equation thus becomes

( x2 )     ( x1 )
( θ2 ) = S ( θ1 )

and this relates the parameters of the two rays as

x2 = x1 + d·θ1
θ2 = θ1

Thin lens example

Another simple example is that of a thin lens. Its RTM is given by

L = (  1    0 )
    ( −1/f  1 )

where f is the focal length of the lens. To describe combinations of optical components, ray transfer matrices may be multiplied together to obtain an overall RTM for the compound optical system. For the example of free space of length d followed by a lens of focal length f:

L·S = (  1    0 ) ( 1  d )   (  1      d     )
      ( −1/f  1 ) ( 0  1 ) = ( −1/f  1 − d/f )

Note that, since the multiplication of matrices is non-commutative, this is not the same RTM as that for a lens followed by free space:

S·L = ( 1  d ) (  1    0 )   ( 1 − d/f  d )
      ( 0  1 ) ( −1/f  1 ) = (  −1/f    1 )

Thus the matrices must be ordered appropriately, with the last matrix premultiplying the second last, and so on until the first matrix is premultiplied by the second. Other matrices can be constructed to represent interfaces with media of different refractive indices, reflection from mirrors, etc.

Eigenvalues

A ray transfer matrix can be regarded as a linear canonical transformation. According to the eigenvalues of the optical system, the system can be classified into several classes. Assume an ABCD matrix M, with det(M) = 1, relating the output ray to the input. Its eigenvalues λ satisfy the eigenequation

M v = λ v

and are found by calculating the determinant

det(M − λI) = λ² − (A + D)λ + 1 = 0

Let m = (A + D)/2; then the eigenvalues are λ± = m ± √(m² − 1). According to the values of these eigenvalues, there are several possible cases:

A pair of real eigenvalues r and 1/r, where r ≠ 1: this case represents a magnifier.
λ1 = λ2 = 1 (or, with an additional coordinate reverter, λ1 = λ2 = −1): this case represents the unity matrix. It occurs if, but not only if, the system is a unity operator, a section of free space, or a lens.
A pair of unimodular, complex-conjugate eigenvalues e^(iφ) and e^(−iφ): this case is similar to a separable fractional Fourier transform.

Matrices for simple optical components

In the small-angle convention used here, the most common elements are:

Propagation over distance d in a uniform medium:
( 1  d )
( 0  1 )

Refraction at a flat interface from index n1 to index n2:
( 1    0   )
( 0  n1/n2 )

Refraction at a curved interface of radius R (R > 0 when the centre of curvature lies beyond the interface):
(        1           0   )
( (n1 − n2)/(R·n2)  n1/n2 )

Thin lens of focal length f (f > 0 for a converging lens):
(  1    0 )
( −1/f  1 )

Reflection from a curved mirror of radius R (R > 0 for a concave mirror):
(  1    0 )
( −2/R  1 )

Relation between geometrical ray optics and wave optics

The theory of linear canonical transformations implies the relation between the ray transfer matrix (geometrical optics) and wave optics.

Common decomposition

There exist infinitely many ways to decompose a ray transfer matrix into a concatenation of multiple transfer matrices. For example, in the special case when B ≠ 0 (and det(M) = 1):

( A  B )   (    1     0 ) ( 1  B ) (    1     0 )
( C  D ) = ( (D−1)/B  1 ) ( 0  1 ) ( (A−1)/B  1 )

that is, a lens, a section of free space of length B, and another lens.
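Both the non-commutativity of the matrix product and the lens-space-lens decomposition above can be verified numerically; here is a short Python sketch with arbitrary values for d and f:

```python
import numpy as np

def free_space(d):
    return np.array([[1.0, d], [0.0, 1.0]])

def thin_lens(f):
    return np.array([[1.0, 0.0], [-1.0 / f, 1.0]])

d, f = 0.2, 0.5  # metres; arbitrary illustrative values
M1 = thin_lens(f) @ free_space(d)   # free space traversed first, lens second
M2 = free_space(d) @ thin_lens(f)   # lens first: a different system
print(np.allclose(M1, M2))          # False: matrix order matters

# Verify the lens-space-lens decomposition for a system with B != 0 and
# det = 1 (also A != 1 and D != 1, so both focal lengths below are finite).
M = free_space(0.3) @ thin_lens(f) @ free_space(d)
A, B, C, D = M.ravel()
decomp = thin_lens(-B / (D - 1.0)) @ free_space(B) @ thin_lens(-B / (A - 1.0))
print(np.allclose(decomp, M))       # True
```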
Resonator stability

RTM analysis is particularly useful when modeling the behaviour of light in optical resonators, such as those used in lasers. At its simplest, an optical resonator consists of two identical facing mirrors of 100% reflectivity and radius of curvature R, separated by some distance d. For the purposes of ray tracing, this is equivalent to a series of identical thin lenses of focal length f = R/2, each separated from the next by length d. This construction is known as a lens equivalent duct or lens equivalent waveguide. The RTM of each section of the waveguide is, as above,

M = L·S = (  1      d     )
          ( −1/f  1 − d/f )

RTM analysis can now be used to determine the stability of the waveguide (and, equivalently, the resonator): that is, under what conditions light traveling down the waveguide will be periodically refocused and stay within the waveguide. To do so, we can find all the "eigenrays" of the system: rays whose output vector, after each of the mentioned sections of the waveguide, equals the input vector multiplied by a real or complex factor λ. This gives

M ( x1 )     ( x1 )
  ( θ1 ) = λ ( θ1 )

which is an eigenvalue equation:

(M − λI) ( x1 )   ( 0 )
         ( θ1 ) = ( 0 )

where I is the 2×2 identity matrix. We proceed to calculate the eigenvalues of the transfer matrix by solving

det(M − λI) = 0

leading to the characteristic equation

λ² − tr(M)·λ + det(M) = 0

where tr(M) = A + D = 2 − d/f is the trace of the RTM and det(M) = AD − BC = 1 is its determinant. After the common substitution g = tr(M)/2 = 1 − d/(2f) we have

λ² − 2gλ + 1 = 0

where g is the stability parameter. The eigenvalues are the solutions of the characteristic equation; from the quadratic formula we find

λ± = g ± √(g² − 1)

Now, consider a ray after N passes through the system:

( xN )       ( x1 )
( θN ) = M^N ( θ1 )

If the waveguide is stable, no ray should stray arbitrarily far from the main axis; that is, xN must not grow without limit. Suppose g² > 1. Then both eigenvalues are real, and since λ+·λ− = 1, one of them has to be bigger than 1 in absolute value, which implies that the ray corresponding to this eigenvector does not converge. Therefore, in a stable waveguide, g² ≤ 1, and the eigenvalues can be represented by complex numbers:

λ± = g ± i·√(1 − g²) = e^(±iφ)

with the substitution g = cos φ. For g² < 1, let r+ and r− be the eigenvectors corresponding to the eigenvalues λ+ and λ− respectively; these span the whole vector space because they are linearly independent, the latter due to λ+ ≠ λ−. The input vector can therefore be written as

c+ r+ + c− r−

for some constants c+ and c−. After N waveguide sectors, the output reads

M^N (c+ r+ + c− r−) = λ+^N c+ r+ + λ−^N c− r− = e^(iNφ) c+ r+ + e^(−iNφ) c− r−

which represents a periodic function.

Gaussian beams

The same matrices can also be used to calculate the evolution of Gaussian beams propagating through optical components described by the same transfer matrices. For a Gaussian beam of wavelength λ, radius of curvature R (positive for diverging, negative for converging), beam spot size w and refractive index n, it is possible to define a complex beam parameter q by

1/q = 1/R − i·λ/(π·n·w²)

(R, w, and q are functions of position.) If the beam axis is in the z direction, with waist at z0 and Rayleigh range zR, this can be equivalently written as

q = (z − z0) + i·zR

This beam can be propagated through an optical system with a given ray transfer matrix by using the equation

( q2 )     ( A  B ) ( q1 )
( 1  ) = k ( C  D ) ( 1  )

where k is a normalization constant chosen to keep the second component of the ray vector equal to 1. Using matrix multiplication, this equation expands as

q2 = k·(A·q1 + B)
1 = k·(C·q1 + D)

Dividing the first equation by the second eliminates the normalization constant:

q2 = (A·q1 + B)/(C·q1 + D)

It is often convenient to express this last equation in reciprocal form:

1/q2 = (C + D/q1)/(A + B/q1)

Example: Free space

Consider a beam traveling a distance d through free space. The ray transfer matrix is

( 1  d )
( 0  1 )

and so

q2 = q1 + d

consistent with the expression above for ordinary Gaussian beam propagation, q = (z − z0) + i·zR. As the beam propagates, both its radius of curvature and its spot size change.

Example: Thin lens

Consider a beam traveling through a thin lens with focal length f. The ray transfer matrix is

(  1    0 )
( −1/f  1 )

and so

1/q2 = 1/q1 − 1/f

Only the real part of 1/q is affected: the wavefront curvature 1/R is reduced by the power of the lens 1/f, while the lateral beam size w remains unchanged upon exiting the thin lens.
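A short Python sketch ties these formulas together: it propagates a complex beam parameter with the bilinear ABCD rule and evaluates the resonator stability parameter g; the wavelength, waist, and resonator geometry are arbitrary illustrative choices:

```python
import numpy as np

def abcd_q(M, q):
    """Propagate a complex beam parameter with the bilinear ABCD rule."""
    (A, B), (C, D) = M
    return (A * q + B) / (C * q + D)

lam = 633e-9               # wavelength (m), e.g. a HeNe laser
w0 = 0.5e-3                # waist spot size (m)
zR = np.pi * w0**2 / lam   # Rayleigh range (n = 1 assumed throughout)
q = 1j * zR                # beam parameter at the waist: q = (z - z0) + i*zR

# Propagate through 1 m of free space, then a thin lens with f = 0.25 m.
free = np.array([[1.0, 1.0], [0.0, 1.0]])
lens = np.array([[1.0, 0.0], [-4.0, 1.0]])
q2 = abcd_q(lens @ free, q)
R = 1.0 / np.real(1.0 / q2)                      # wavefront curvature radius
w = np.sqrt(-lam / (np.pi * np.imag(1.0 / q2)))  # spot size from Im(1/q)
print(f"R = {R:.3f} m, w = {w * 1e3:.2f} mm")

# Resonator stability: two mirrors of curvature radius Rm separated by d
# act as a lens guide with f = Rm/2; stable iff |g| = |1 - d/(2f)| <= 1.
Rm, d = 1.0, 1.5
g = 1.0 - d / Rm
print("stable" if abs(g) <= 1.0 else "unstable")  # |g| = 0.5 -> stable
```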
Higher rank matrices

Methods using transfer matrices of higher dimensionality, that is 3×3, 4×4, and 6×6 matrices, are also used in optical analysis. In particular, 4×4 propagation matrices are used in the design and analysis of prism sequences for pulse compression in femtosecond lasers.

See also

Transfer-matrix method (optics)
Linear canonical transformation
https://en.wikipedia.org/wiki/Christian%20Ernst%20Stahl
Christian Ernst Stahl (21 June 1848 – 3 December 1919) was a Franco-German botanist from Schiltigheim, Alsace. He worked on the ecophysiology of plants and has been considered a pioneer of chemical ecology for his work examining the defences of plants against herbivores, although he considered snails and slugs, rather than insects, to be the dominant herbivores driving plant evolution.

Biography

Stahl was born in Schiltigheim to the timber merchant Christian Adolf and Magdalene née Rhein. He attended the local schools and then grammar school at Strasbourg. He studied botany at the University of Strasbourg with Pierre-Marie-Alexis Millardet (1838-1902). The Franco-Prussian War of 1870-71 prompted him to move to the University of Halle, where he studied under Anton de Bary (1831-1888). He earned his doctorate in 1874 with a thesis on lenticels. He later became an assistant to Julius von Sachs (1832-1897) at the University of Würzburg. He was appointed an associate professor at the University of Strasbourg in 1880 and, after just one year, attained the chair of botany at the University of Jena in 1881, succeeding Eduard Strasburger. There he also served as director of the botanical garden.

Botanical research

Stahl is remembered for his pioneering experiments in the field of ecophysiology, as well as research involving the developmental history of lichens. He was able to induce the synthesis of the lichen Endocarpon pusillum from spores and algal material, including the formation of apothecia, and thus made a strong experimental case for the hypothesis of Simon Schwendener (1829-1919) that lichens are twin fungal-algal organisms. Stahl also examined chemotaxis and movement in slime moulds, phototaxis in desmids, geotropism in plants, and the role of mycorrhiza among plant roots.

Stahl travelled to Algeria in 1887, and in 1889-90 he visited Ceylon and Java with Andreas Franz Wilhelm Schimper (1856–1901) and George Karsten (1863–1937). In 1894 he visited Mexico with Karsten.

Other contributions by Stahl included studies concerning the influence of light on plants (he described the anatomy of sun and shade leaves), the effects of moisture and dryness on the formation of leaves, and the role of stomata in xerophytes and mesophytes. He conducted important research on the symbiotic relationship between mycorrhizal fungi and tree roots, and also worked on plant defence against snail and slug herbivory and a plethora of other botanical and ecological questions. He has been considered a pioneer of chemical ecology for his speculation on the role of secondary metabolites.

Stahl's students included Hans Adolf Eduard Driesch (1867-1941), Hans Kniep (1881-1930), Julius Schaxel (1887–1943), Otto Stocker (1888–1966) and Heinrich Walter (1889–1989).

Selected scientific works

Entwickelung und Anatomie der Lenticellen (Development and anatomy of lenticels); Leipzig 1873.
Beiträge zur Entwickelungsgeschichte der Flechten (Contributions to the developmental history of lichens); Leipzig 1877.
Über den Einfluß von Richtung und Stärke der Beleuchtung auf einige Bewegungserscheinungen im Pflanzenreich (On the influence of the direction and intensity of illumination on some movement phenomena in the plant kingdom); Leipzig 1880.
Über sogenannte Kompaßpflanzen (On so-called compass plants); Jena 1883.
Über den Einfluß des sonnigen oder schattigen Standortes auf die Ausbildung der Laubblätter (On the influence of sunny or shady location on the formation of foliage leaves); Jena 1883.
Einfluß des Lichtes auf den Geotropismus einiger Pflanzenorgane (Influence of light on the geotropism of some plant organs); Berlin 1884.
Zur Biologie der Myxomyceten (On the biology of slime moulds); Leipzig 1884.
Pflanzen und Schnecken. Eine biologische Studie über die Schutzmittel der Pflanzen gegen Schneckenfraß (Plants and snails: a biological study of the means by which plants protect themselves against snail feeding); Jena 1888.
https://en.wikipedia.org/wiki/Topological%20fluid%20dynamics
Topological ideas are relevant to fluid dynamics (including magnetohydrodynamics) at the kinematic level, since any fluid flow involves continuous deformation of any transported scalar or vector field. Problems of stirring and mixing are particularly susceptible to topological techniques. Thus, for example, the Thurston–Nielsen classification has been fruitfully applied to the problem of stirring in two dimensions by any number of stirrers following a time-periodic 'stirring protocol' (Boyland, Aref & Stremler 2000). Other studies are concerned with flows having chaotic particle paths, and the associated exponential rates of mixing (Ottino 1989).

At the dynamic level, the fact that vortex lines are transported by any flow governed by the classical Euler equations implies conservation of any vortical structure within the flow. Such structures are characterised, at least in part, by the helicity of certain sub-regions of the flow field, a topological invariant of the equations. Helicity plays a central role in dynamo theory, the theory of the spontaneous generation of magnetic fields in stars and planets (Moffatt 1978, Parker 1979, Krause & Rädler 1980). It is known that, with few exceptions, any statistically homogeneous turbulent flow having nonzero mean helicity in a sufficiently large expanse of conducting fluid will generate a large-scale magnetic field through dynamo action. Such fields themselves exhibit magnetic helicity, reflecting their own topologically nontrivial structure.

Much interest attaches to the determination of states of minimum energy subject to prescribed topology. Many problems of fluid dynamics and magnetohydrodynamics fall within this category. Recent developments in topological fluid dynamics also include applications to magnetic braids in the solar corona, DNA knotting by topoisomerases, polymer entanglement in chemical physics, and chaotic behavior in dynamical systems. A mathematical introduction to the subject is given by Arnold & Khesin (1998), and recent survey articles and contributions may be found in Ricca (2009) and Moffatt, Bajer & Kimura (2013).

Topology is also crucial to the structure of neutral surfaces in a fluid (such as the ocean) where the equation of state depends nonlinearly on multiple components (e.g. salinity and heat). Fluid parcels remain neutrally buoyant as they move along neutral surfaces, despite variations in salinity or heat. On such surfaces, salinity and heat are functionally related, but this function is multivalued. The spatial regions within which this function becomes single-valued are those where there is at most one contour of salinity (or heat) per isovalue, which are precisely the regions associated with each edge of the Reeb graph of the salinity (or heat) on the surface (Stanley 2019).
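As a numerical footnote to the helicity discussion above, the following Python sketch (grid resolution and flow coefficients chosen arbitrarily) computes the helicity H = ∫ u·ω dV of the ABC flow, a Beltrami flow whose vorticity equals its velocity, so that H reduces to ∫ |u|² dV and is manifestly nonzero:

```python
import numpy as np

# ABC (Arnold-Beltrami-Childress) flow on a 2*pi-periodic box.
A, B, C = 1.0, 0.7, 0.4
n = 64
s = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
x, y, z = np.meshgrid(s, s, s, indexing="ij")

u = A * np.sin(z) + C * np.cos(y)
v = B * np.sin(x) + A * np.cos(z)
w = C * np.sin(y) + B * np.cos(x)

# Central differences on the periodic grid (np.roll handles periodicity).
h = s[1] - s[0]
def ddx(f, axis):
    return (np.roll(f, -1, axis) - np.roll(f, 1, axis)) / (2.0 * h)

wx = ddx(w, 1) - ddx(v, 2)   # dw/dy - dv/dz
wy = ddx(u, 2) - ddx(w, 0)   # du/dz - dw/dx
wz = ddx(v, 0) - ddx(u, 1)   # dv/dx - du/dy

# Helicity H = integral of u . omega dV; here omega = u (Beltrami), so the
# exact value is (A^2 + B^2 + C^2) * (2*pi)^3.
H = np.sum(u * wx + v * wy + w * wz) * h**3
expected = (A**2 + B**2 + C**2) * (2.0 * np.pi) ** 3
print(H, expected)   # agree to discretization error
```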
References Arnold, V.I. & Khesin, B.A. (1998) Topological Methods in Hydrodynamics. Applied Mathematical Sciences 125, Springer-Verlag. Boyland, P.L., Aref, H. & Stremler, M.A. (2000) Topological fluid mechanics of stirring. J. Fluid Mech. 403, pp. 277–304. Krause, F. & Rädler, K.-H. (1980) Mean-Field Magnetohydrodynamics and Dynamo Theory. Pergamon Press, Oxford. Moffatt, H.K. (1978) Magnetic Field Generation in Electrically Conducting Fluids. Cambridge Univ. Press. Moffatt, H.K., Bajer, K. & Kimura, Y. (Eds.) (2013) Topological Fluid Dynamics, Theory and Applications. Kluwer. Ottino, J. (1989) The Kinematics of Mixing: Stretching, Chaos and Transport. Cambridge Univ. Press. Parker, E.N. (1979) Cosmical Magnetic Fields: their Origin and their Activity. Oxford Univ. Press. Ricca, R.L. (Ed.) (2009) Lectures on Topological Fluid Mechanics. Springer-CIME Lecture Notes in Mathematics 1973. Springer-Verlag, Heidelberg, Germany. Stanley, G.J. (2019) Neutral surface topology. Ocean Modelling 138, pp. 88–106. Fluid dynamics Topological dynamics
Topological fluid dynamics
Chemistry,Mathematics,Engineering
874
48,813,982
https://en.wikipedia.org/wiki/Emergency%20override%20system
The Local Access Alert (also known as Local Access System or Emergency Override System) is a system designed to interrupt radio stations, television stations, cable television broadcast feeds or satellite signals to warn of impending dangers such as severe weather and other civil emergencies. With a gradual transition from analog cable to digital cable, the Local Access Alert has been phased out and largely replaced with the Emergency Alert System in the United States. History The first known Emergency Override Systems or Local Access Alerts were delivered during the boom of cable television in the 1960s, although these systems were not yet known by either of those two main names and went by a variety of names. In the late 1960s and early 1970s, Local Access Alerts began to spread all over the United States, although few cities and towns had cable television yet. As cable systems continued to grow, a Local Access Alert was usually added. Local Access Alerts gained the most popularity on local cable systems from the latter half of the 1970s until the end of the 1990s, when the Emergency Alert System took over the role of the Emergency Broadcast System on cable television on January 1, 1997. The Emergency Alert System began to build up on most cable television systems through the installation of then-new generators and encoders between 1997 and 1999. Some of the notable EAS generators at the time included Video Data Systems, Texscan, Gorman-Redlich, Idea/Onics, and Cable Envoy; encoders included SAGE, TFT, and Trilithic models. As the Emergency Alert System grew across cable, only some cable systems retained their Local Access Alert equipment into the first part of the 2010s. In the late 2000s and early 2010s, most remaining cable systems set up for Local Access Alerts used either Trilithic EASyPLUS or Video Data Systems as their modern screens instead of previously used older systems such as CommAlert (although older Local Access Alert systems were still used in a handful of areas at the time). By the end of the 2010s, Local Access Alerts had become nearly extinct on most cable systems in the United States, as all cable systems already had the Emergency Alert System and most local cable systems had by then become part of major providers such as Comcast, Time Warner Cable, or Suddenlink. As of the early 2020s, most Local Access Alerts are delivered as practice or demonstration warnings as part of the Emergency Alert System, but the remainder of the nearly extinct Local Access Alerts can still be seen on a minority of very small cable systems, typically in unincorporated or very small communities, that have not upgraded their equipment or become part of a major company. Purpose Police or emergency management let cable viewers in local and surrounding areas know of an impending emergency and instruct them to shelter or evacuate. Alerts are chiefly warnings for severe weather such as tornadoes, flash floods, earthquakes, winter storms, and hurricanes. Alerts may also pertain to Amber alerts, traffic closures, 911 outages, forest fires, dam failures, train derailments, and road conditions. Activation procedure The Local Access Alert is initiated by local law enforcement or emergency management staff, much like the antiquated Emergency Broadcast System, by dialing a number and entering a PIN through a telephone to take control of the cable system of an area in the path of danger.
Cable subscribers in that area have every television channel interrupted by audio and often an accompanying screen. The distinct attention signal played can be Morse code, a siren, DTMF tones, steady single (or dual) tones, or multiple hi-lo beeps. The screen shown can be black, white, colored depending on the warning, a slide, or static. More modern alerts use a black screen with the words "Local Access Alert" in all capital letters and a message stating that "a local authority has initiated a direct community access"; the text was generated using the Trilithic EASyPLUS character generator (the same one used for the Emergency Alert System). System tests Tests of the Local Access Alert occur once weekly at randomly selected times, as well as scheduled monthly tests and yearly tornado drills. These alerts resemble the format used for activation of the Emergency Broadcast System and the Emergency Alert System. Limitations A limitation of the Local Access Alert system is that operators have to dial out to end the transmission. Simply hanging up the phone connected to the system after an emergency broadcast does not work, and viewers may hear other phone noises – such as off-hook tones or dial tones – before cable programming resumes. The newer Emergency Alert System employs Specific Area Message Encoding technology to activate for potential disasters and deactivate to resume cable broadcasts, especially late at night when many public servants are not available to break in. References United States civil defense Disaster preparedness in the United States Cable television Warning systems
Emergency override system
Technology,Engineering
975
362,348
https://en.wikipedia.org/wiki/Terahertz%20radiation
Terahertz radiation – also known as submillimeter radiation, terahertz waves, tremendously high frequency (THF), T-rays, T-waves, T-light, T-lux or THz – consists of electromagnetic waves within the International Telecommunication Union-designated band of frequencies from 0.3 to 3 terahertz (THz), although the upper boundary is somewhat arbitrary and is considered by some sources as 30 THz. One terahertz is 10^12 Hz or 1,000 GHz. Wavelengths of radiation in the terahertz band correspondingly range from 1 mm to 0.1 mm = 100 μm. Because terahertz radiation begins at a wavelength of around 1 millimeter and proceeds into shorter wavelengths, it is sometimes known as the submillimeter band, and its radiation as submillimeter waves, especially in astronomy. This band of electromagnetic radiation lies within the transition region between microwave and far infrared, and can be regarded as either. Compared to lower radio frequencies, terahertz radiation is strongly absorbed by the gases of the atmosphere, and in air most of the energy is attenuated within a few meters, so it is not practical for long-distance terrestrial radio communication. It can penetrate thin layers of materials but is blocked by thicker objects. THz beams transmitted through materials can be used for material characterization, layer inspection, relief measurement, and as a lower-energy alternative to X-rays for producing high-resolution images of the interior of solid objects. Terahertz radiation occupies a middle ground where the ranges of microwaves and infrared light waves overlap, known as the "terahertz gap"; it is called a "gap" because the technology for its generation and manipulation is still in its infancy. The generation and modulation of electromagnetic waves in this frequency range ceases to be possible with the conventional electronic devices used to generate radio waves and microwaves, requiring the development of new devices and techniques. Description Terahertz radiation falls in between infrared radiation and microwave radiation in the electromagnetic spectrum, and it shares some properties with each of these. Terahertz radiation travels in a line of sight and is non-ionizing. Like microwaves, terahertz radiation can penetrate a wide variety of non-conducting materials: clothing, paper, cardboard, wood, masonry, plastic and ceramics. The penetration depth is typically less than that of microwave radiation. Like infrared, terahertz radiation has limited penetration through fog and clouds and cannot penetrate liquid water or metal. Terahertz radiation can penetrate some distance through body tissue like x-rays, but unlike them is non-ionizing, so it is of interest as a replacement for medical X-rays. Due to its longer wavelength, images made using terahertz waves have lower resolution than X-rays and need to be enhanced. The earth's atmosphere is a strong absorber of terahertz radiation, so the range of terahertz radiation in air is limited to tens of meters, making it unsuitable for long-distance communications. However, at distances of ~10 meters the band may still allow many useful applications in imaging and construction of high-bandwidth wireless networking systems, especially indoor systems. In addition, producing and detecting coherent terahertz radiation remains technically challenging, though inexpensive commercial sources now exist in the 0.3–1.0 THz range (the lower part of the spectrum), including gyrotrons, backward wave oscillators, and resonant-tunneling diodes.
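As a quick, hedged illustration of the band conversions quoted above (our own worked example, not from the article): with c = f·λ and E = h·f, the band edges map to wavelengths and photon energies as follows. The last line also checks the "kT at room temperature ↔ 6.2 THz" comparison that appears later in the terahertz-gap discussion; the 300 K figure is our assumption for room temperature.

from scipy.constants import c, h, e, k   # SI constants from SciPy (assumed available)

for f_thz in (0.3, 1.0, 3.0):
    f = f_thz * 1e12                      # frequency in Hz
    print(f"{f_thz} THz -> {c / f * 1e3:.2f} mm, {h * f / e * 1e3:.2f} meV")
# 0.3 THz -> 1.00 mm and 3 THz -> 0.10 mm, matching the band limits quoted above;
# photon energies of ~1-12 meV are far below ionization thresholds.

T = 300.0                                 # assumed room temperature, K
print(k * T / h / 1e12, "THz")            # ~6.2 THz: frequency whose photon energy equals kT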
Due to the small energy of THz photons, current THz devices require low temperatures during operation to suppress environmental noise. Tremendous effort has thus been put into THz research to improve the operating temperature, using different strategies such as optomechanical meta-devices. Sources Natural Terahertz radiation is emitted as part of the black-body radiation from anything with a temperature greater than about 2 kelvin. While this thermal emission is very weak, observations at these frequencies are important for characterizing cold 10–20 K cosmic dust in interstellar clouds in the Milky Way galaxy, and in distant starburst galaxies. Telescopes operating in this band include the James Clerk Maxwell Telescope, the Caltech Submillimeter Observatory and the Submillimeter Array at the Mauna Kea Observatory in Hawaii, the BLAST balloon-borne telescope, the Herschel Space Observatory, the Heinrich Hertz Submillimeter Telescope at the Mount Graham International Observatory in Arizona, and the recently built Atacama Large Millimeter Array. Due to Earth's atmospheric absorption spectrum, the opacity of the atmosphere to submillimeter radiation restricts these observatories to very high altitude sites, or to space. Artificial Viable sources of terahertz radiation include the gyrotron, the backward wave oscillator ("BWO"), the molecular gas far-infrared laser, Schottky-diode multipliers, varactor (varicap) multipliers, the quantum-cascade laser, the free-electron laser, synchrotron light sources, photomixing sources, single-cycle or pulsed sources used in terahertz time-domain spectroscopy such as photoconductive, surface field, photo-Dember and optical rectification emitters, and electronic oscillators based on resonant tunneling diodes, which have been shown to operate up to 1.98 THz. There have also been solid-state sources of millimeter and submillimeter waves for many years. AB Millimeter in Paris, for instance, produces a system that covers the entire range from 8 GHz to 1,000 GHz with solid-state sources and detectors. Nowadays, most time-domain work is done via ultrafast lasers. In mid-2007, scientists at the U.S. Department of Energy's Argonne National Laboratory, along with collaborators in Turkey and Japan, announced the creation of a compact device that could lead to portable, battery-operated terahertz radiation sources. The device uses high-temperature superconducting crystals, grown at the University of Tsukuba in Japan. These crystals comprise stacks of Josephson junctions, which exhibit a property known as the Josephson effect: when external voltage is applied, alternating current flows across the junctions at a frequency proportional to the voltage. This alternating current induces an electromagnetic field. A small voltage (around two millivolts per junction) can induce frequencies in the terahertz range. In 2008, engineers at Harvard University achieved room-temperature emission of several hundred nanowatts of coherent terahertz radiation using a semiconductor source. THz radiation was generated by nonlinear mixing of two modes in a mid-infrared quantum cascade laser. Previous sources had required cryogenic cooling, which greatly limited their use in everyday applications. In 2009, it was discovered that the act of unpeeling adhesive tape generates non-polarized terahertz radiation, with a narrow peak at 2 THz and a broader peak at 18 THz. The mechanism of its creation is tribocharging of the adhesive tape and subsequent discharge; this was hypothesized to involve bremsstrahlung with absorption or energy density focusing during dielectric breakdown of a gas.
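A hedged back-of-envelope check of the Josephson-junction figure above (our arithmetic, using standard constants): the AC Josephson relation f = 2eV/h converts the quoted ~2 mV per junction into an oscillation frequency.

h = 6.62607015e-34    # Planck constant, J s
e = 1.602176634e-19   # elementary charge, C

V = 2e-3              # ~2 millivolts per junction, as quoted above
f = 2 * e * V / h     # AC Josephson relation
print(f / 1e12, "THz")    # ~0.97 THz, i.e. squarely in the terahertz range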
In 2013, researchers at Georgia Institute of Technology's Broadband Wireless Networking Laboratory and the Polytechnic University of Catalonia developed a method to create a graphene antenna: an antenna that would be shaped into graphene strips from 10 to 100 nanometers wide and one micrometer long. Such an antenna could be used to emit radio waves in the terahertz frequency range. Terahertz gap In engineering, the terahertz gap is a frequency band in the THz region for which practical technologies for generating and detecting the radiation do not exist. It is defined as 0.1 to 10 THz (wavelengths of 3 mm to 30 μm), although the upper boundary is somewhat arbitrary and is considered by some sources as 30 THz (a wavelength of 10 μm). Currently, at frequencies within this range, useful power generation and receiver technologies are inefficient and unfeasible. Mass production of devices in this range and operation at room temperature (at which the energy kT is equal to the energy of a photon with a frequency of 6.2 THz) are mostly impractical. This leaves a gap between mature microwave technologies in the highest frequencies of the radio spectrum and the well-developed optical engineering of infrared detectors in their lowest frequencies. This radiation is mostly used in small-scale, specialized applications such as submillimetre astronomy. Research that attempts to resolve this issue has been conducted since the late 20th century. In 2024, German researchers published an experiment in which TDLAS at 4.75 THz was performed in "infrared quality" with an uncooled pyroelectric receiver, while the THz source was a cw DFB-QC laser operated at 43.3 K and at laser currents between 480 mA and 600 mA. Closure of the terahertz gap Most vacuum electronic devices that are used for microwave generation can be modified to operate at terahertz frequencies, including the magnetron, gyrotron, synchrotron, and free-electron laser. Similarly, microwave detectors such as the tunnel diode have been re-engineered to detect at terahertz and infrared frequencies as well. However, many of these devices are in prototype form, are not compact, or exist at university or government research labs, without the benefit of cost savings due to mass production. Research Molecular biology Terahertz radiation has frequencies comparable to the motion of biomolecular systems in the course of their function (a frequency of 1 THz is equivalent to a timescale of 1 picosecond; in particular, the range of hundreds of GHz up to low numbers of THz is comparable to biomolecular relaxation timescales of a few ps to a few ns). Modulation of biological and also neurological function is therefore possible using radiation in the range of hundreds of GHz up to a few THz at relatively low energies (without significant heating or ionisation), achieving either beneficial or harmful effects. Medical imaging Unlike X-rays, terahertz radiation is not ionizing radiation and its low photon energies in general do not damage living tissues and DNA. Some frequencies of terahertz radiation can penetrate several millimeters of tissue with low water content (e.g., fatty tissue) and reflect back. Terahertz radiation can also detect differences in water content and density of a tissue.
Such methods could allow effective detection of epithelial cancer with an imaging system that is safe, non-invasive, and painless. In response to the demand for COVID-19 screening, terahertz spectroscopy and imaging have been proposed as a rapid screening tool. The first images generated using terahertz radiation date from the 1960s; however, in 1995 images generated using terahertz time-domain spectroscopy generated a great deal of interest. Some frequencies of terahertz radiation can be used for 3D imaging of teeth and may be more accurate than conventional X-ray imaging in dentistry. Security Terahertz radiation can penetrate fabrics and plastics, so it can be used in surveillance, such as security screening, to uncover concealed weapons on a person, remotely. This is of particular interest because many materials of interest have unique spectral "fingerprints" in the terahertz range. This offers the possibility of combining spectral identification with imaging. In 2002, the European Space Agency (ESA) Star Tiger team, based at the Rutherford Appleton Laboratory (Oxfordshire, UK), produced the first passive terahertz image of a hand. By 2004, ThruVision Ltd, a spin-out from the Council for the Central Laboratory of the Research Councils (CCLRC) Rutherford Appleton Laboratory, had demonstrated the world's first compact THz camera for security screening applications. The prototype system successfully imaged guns and explosives concealed under clothing. Passive detection of terahertz signatures avoids the bodily privacy concerns of other detection methods by being targeted to a very specific range of materials and objects. In January 2013, the NYPD announced plans to experiment with the new technology to detect concealed weapons, prompting Miami blogger and privacy activist Jonathan Corbett to file a lawsuit against the department in Manhattan federal court that same month, challenging such use: "For thousands of years, humans have used clothing to protect their modesty and have quite reasonably held the expectation of privacy for anything inside of their clothing, since no human is able to see through them." He sought a court order to prohibit using the technology without reasonable suspicion or probable cause. By early 2017, the department said it had no intention of ever using the sensors given to them by the federal government. Scientific use and imaging In addition to its current use in submillimetre astronomy, terahertz radiation spectroscopy could provide new sources of information for chemistry and biochemistry. Recently developed methods of THz time-domain spectroscopy (THz TDS) and THz tomography have been shown to be able to image samples that are opaque in the visible and near-infrared regions of the spectrum. The utility of THz-TDS is limited when the sample is very thin, or has a low absorbance, since it is very difficult to distinguish changes in the THz pulse caused by the sample from those caused by long-term fluctuations in the driving laser source or experiment. However, THz-TDS produces radiation that is both coherent and spectrally broad, so such images can contain far more information than a conventional image formed with a single-frequency source. Submillimeter waves are used in physics to study materials in high magnetic fields, since at high fields (over about 11 tesla), the electron spin Larmor frequencies are in the submillimeter band.
Many high-magnetic-field laboratories perform these high-frequency EPR experiments, such as the National High Magnetic Field Laboratory (NHMFL) in Florida. Terahertz radiation could let art historians see murals hidden beneath coats of plaster or paint in centuries-old buildings, without harming the artwork. In addition, THz imaging has been done with lens antennas to capture a radio image of an object. Particle accelerators New types of particle accelerators that could achieve accelerating gradients of multiple giga-electronvolts per metre (GeV/m) are of utmost importance to reduce the size and cost of future generations of high-energy colliders, as well as to provide widespread availability of compact accelerator technology to smaller laboratories around the world. Gradients on the order of 100 MeV/m have been achieved by conventional techniques and are limited by RF-induced plasma breakdown. Beam-driven dielectric wakefield accelerators (DWAs) typically operate in the terahertz frequency range, which pushes the plasma breakdown threshold for surface electric fields into the multi-GV/m range. The DWA technique can accommodate a significant amount of charge per bunch and gives access to conventional fabrication techniques for the accelerating structures. To date, 0.3 GeV/m accelerating and 1.3 GeV/m decelerating gradients have been achieved using a dielectric-lined waveguide with sub-millimetre transverse aperture. An accelerating gradient larger than 1 GeV/m can potentially be produced by the Cherenkov Smith-Purcell radiative mechanism in a dielectric capillary with a variable inner radius. When an electron bunch propagates through the capillary, its self-field interacts with the dielectric material and produces wakefields that propagate inside the material at the Cherenkov angle. The wakefields are slowed down below the speed of light, as the relative dielectric permittivity of the material is larger than 1. The radiation is then reflected from the capillary's metallic boundary and diffracted back into the vacuum region, producing high accelerating fields on the capillary axis with a distinct frequency signature. In the presence of a periodic boundary, the Smith-Purcell radiation imposes frequency dispersion. A preliminary study with corrugated capillaries has shown some modification to the spectral content and amplitude of the generated wakefields, but the possibility of using the Smith-Purcell effect in DWAs is still under consideration. Communication The high atmospheric absorption of terahertz waves limits the range of communication using existing transmitters and antennas to tens of meters. However, the huge unallocated bandwidth available in the band (ten times the bandwidth of the millimeter wave band, 100 times that of the SHF microwave band) makes it very attractive for future data transmission and networking use. There are tremendous difficulties in extending the range of THz communication through the atmosphere, but the world telecommunications industry is funding much research into overcoming those limitations. One promising application area is the 6G cellphone and wireless standard, which will supersede the current 5G standard around 2030. For a given antenna aperture, the gain of directive antennas scales with the square of frequency, while for low-power transmitters the power efficiency is independent of bandwidth.
So the consumption factor theory of communication links indicates that, contrary to conventional engineering wisdom, for a fixed aperture it is more efficient in bits per second per watt to use higher frequencies in the millimeter wave and terahertz range. Small directive antennas a few centimeters in diameter can produce very narrow 'pencil' beams of THz radiation, and phased arrays of multiple antennas could concentrate virtually all the power output on the receiving antenna, allowing communication at longer distances. In May 2012, a team of researchers from the Tokyo Institute of Technology published in Electronics Letters that it had set a new record for wireless data transmission by using T-rays and proposed they be used as bandwidth for data transmission in the future. The team's proof-of-concept device used a resonant tunneling diode (RTD) negative resistance oscillator to produce waves in the terahertz band. With this RTD, the researchers sent a signal at 542 GHz, resulting in a data transfer rate of 3 gigabits per second. It doubled the record for data transmission rate set the previous November. The study suggested that Wi-Fi using the system would be limited to approximately , but could allow data transmission at up to 100 Gbit/s. In 2011, Japanese electronic parts maker Rohm and a research team at Osaka University produced a chip capable of transmitting 1.5 Gbit/s using terahertz radiation. Potential uses exist in high-altitude telecommunications, above altitudes where water vapor causes signal absorption: aircraft to satellite, or satellite to satellite. Amateur radio A number of administrations permit amateur radio experimentation within the 275–3,000 GHz range or at even higher frequencies on a national basis, under license conditions that are usually based on RR5.565 of the ITU Radio Regulations. Amateur radio operators utilizing submillimeter frequencies often attempt to set two-way communication distance records. In the United States, WA1ZMS and W4WWQ set a record of on 403 GHz using CW (Morse code) on 21 December 2004. In Australia, at 30 THz a distance of was achieved by stations VK3CV and VK3LN on 8 November 2020. Manufacturing Many possible uses of terahertz sensing and imaging are proposed in manufacturing, quality control, and process monitoring. These in general exploit the fact that plastics and cardboard are transparent to terahertz radiation, making it possible to inspect packaged goods. The first imaging system based on optoelectronic terahertz time-domain spectroscopy was developed in 1995 by researchers from AT&T Bell Laboratories and was used for producing a transmission image of a packaged electronic chip. This system used pulsed laser beams with durations in the range of picoseconds. Since then, commonly used commercial and research terahertz imaging systems have used pulsed lasers to generate terahertz images. The image can be developed based on either the attenuation or the phase delay of the transmitted terahertz pulse. Since the beam is scattered more at the edges, and different materials have different absorption coefficients, images based on attenuation indicate edges and different materials inside of objects. This approach is similar to X-ray transmission imaging, where images are developed based on attenuation of the transmitted beam. In the second approach, terahertz images are developed based on the time delay of the received pulse. In this approach, thicker parts of the objects are well recognized, as thicker parts cause more time delay of the pulse. The energy of the laser spot is distributed according to a Gaussian function. The geometry and behavior of a Gaussian beam in the Fraunhofer region imply that electromagnetic beams diverge more as their frequencies decrease, and thus the resolution decreases. This implies that terahertz imaging systems have higher resolution than scanning acoustic microscopy (SAM) but lower resolution than X-ray imaging systems.
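A rough sketch of the divergence scaling just described (our numbers; the beam-waist value is an assumption): the far-field half-angle of a Gaussian beam is θ ≈ λ/(π·w0), so the beam spreads more, and resolution falls, as frequency drops.

import math

c = 3.0e8             # speed of light, m/s
w0 = 5e-3             # assumed beam waist of 5 mm at the focusing optic

for f_thz in (0.3, 1.0, 3.0):
    lam = c / (f_thz * 1e12)              # wavelength, m
    theta = lam / (math.pi * w0)          # far-field half-angle divergence, radians
    print(f"{f_thz} THz: ~{math.degrees(theta):.2f} degrees")
# The 0.3 THz beam diverges ten times more than the 3 THz beam, illustrating the
# frequency-resolution trade-off described above.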
Although terahertz can be used for inspection of packaged objects, it suffers from low resolution for fine inspections. Comparing X-ray and terahertz images of an electronic chip shows that the resolution of the X-ray image is higher, but X-rays are ionizing and can have harmful effects on certain objects such as semiconductors and living tissues. To overcome the low resolution of terahertz systems, near-field terahertz imaging systems are under development. In near-field imaging the detector needs to be located very close to the surface of the plane, and thus imaging of thick packaged objects may not be feasible. In another attempt to increase the resolution, laser beams with frequencies higher than terahertz are used to excite the p-n junctions in semiconductor objects; the excited junctions generate terahertz radiation as a result, as long as their contacts are unbroken, and in this way damaged devices can be detected. In this approach, since the absorption increases exponentially with the frequency, inspection of thick packaged semiconductors may again not be feasible. Consequently, a trade-off between the achievable resolution and the penetration depth of the beam in the packaging material should be considered. THz gap research Ongoing investigation has resulted in improved emitters (sources) and detectors, and research in this area has intensified. However, drawbacks remain that include the substantial size of emitters, incompatible frequency ranges, and undesirable operating temperatures, as well as component, device, and detector requirements that are somewhere between solid-state electronics and photonic technologies. Free-electron lasers can generate a wide range of stimulated emission of electromagnetic radiation, from microwaves through terahertz radiation to X-rays. However, they are bulky, expensive and not suitable for applications that require critical timing (such as wireless communications). Other sources of terahertz radiation which are actively being researched include solid-state oscillators (through frequency multiplication), backward wave oscillators (BWOs), quantum cascade lasers, and gyrotrons. Safety The terahertz region is between the radio frequency region and the laser optical region. Both the IEEE C95.1–2005 RF safety standard and the ANSI Z136.1–2007 Laser safety standard have limits into the terahertz region, but both safety limits are based on extrapolation. It is expected that effects on biological tissues are thermal in nature and, therefore, predictable by conventional thermal models. Research is underway to collect data to populate this region of the spectrum and validate safety limits.
A theoretical study published in 2010 and conducted by Alexandrov et al. at the Center for Nonlinear Studies at Los Alamos National Laboratory in New Mexico created mathematical models predicting how terahertz radiation would interact with double-stranded DNA, showing that, even though the forces involved seem to be tiny, nonlinear resonances (although much less likely to form than less-powerful common resonances) could allow terahertz waves to "unzip double-stranded DNA, creating bubbles in the double strand that could significantly interfere with processes such as gene expression and DNA replication". Experimental verification of this simulation was not done. Swanson's 2010 theoretical treatment of the Alexandrov study concludes that the DNA bubbles do not occur under reasonable physical assumptions or if the effects of temperature are taken into account. A bibliographical study published in 2003 reported that T-ray intensity drops to less than 1% in the first 500 μm of skin, but stressed that "there is currently very little information about the optical properties of human tissue at terahertz frequencies". See also Far-infrared laser Full body scanner Heterojunction bipolar transistor High-electron-mobility transistor (HEMT) Picarin Terahertz time-domain spectroscopy Microwave analog signal processing References Further reading External links Electromagnetic spectrum Terahertz technology
Terahertz radiation
Physics
5,117
60,529,679
https://en.wikipedia.org/wiki/CIM%20Schema
CIM Schema is a computer specification, part of the Common Information Model standard, created by the Distributed Management Task Force. It is a conceptual schema made up of classes, attributes, relations between these classes, and inheritance, defined for the world of software and hardware. This set of objects and their relations is a conceptual framework for describing computer elements and organizing information about the managed environment. This schema is the basis of other DMTF standards such as WBEM, SMASH or SMI-S for storage management. Extensibility The CIM schema is object-based and extensible, allowing manufacturers to represent their equipment using the elements defined in the core classes of the CIM schema. For this, manufacturers provide software extensions called providers, which supplement existing classes by deriving them and adding new attributes. Examples of common core classes CIM_ComputerSystem: a computer host CIM_DataFile: a computer file CIM_Directory: a file directory CIM_DiskPartition: a disk partition CIM_FIFOPipeFile: a named pipe CIM_OperatingSystem: an operating system CIM_Process: a computer process CIM_SqlTable: a database table CIM_SqlTrigger: a database trigger References DMTF standards Open standards Computer standards
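As a hedged illustration of how the schema is consumed in practice: on Windows, WMI implements the CIM schema, and the third-party Python "wmi" package (pip install wmi) can enumerate instances of classes derived from the core classes listed above. Treat this as a sketch under those assumptions, not as normative CIM usage.

import wmi  # third-party package; Windows-only (assumed environment)

conn = wmi.WMI()  # connect to the local CIM/WMI repository
for os_inst in conn.Win32_OperatingSystem():      # derives from CIM_OperatingSystem
    print(os_inst.Caption, os_inst.Version)
for proc in conn.Win32_Process()[:5]:             # derives from CIM_Process
    print(proc.ProcessId, proc.Name)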
CIM Schema
Technology
260
37,520,883
https://en.wikipedia.org/wiki/Left%20and%20right%20%28algebra%29
In algebra, the terms left and right denote the order of a binary operation (usually, but not always, called "multiplication") in non-commutative algebraic structures. A binary operation ∗ is usually written in the infix form: s ∗ t. The argument s is placed on the left side, and the argument t is on the right side. Even if the symbol of the operation is omitted, the order of s and t does matter (unless ∗ is commutative). A two-sided property is fulfilled on both sides. A one-sided property is related to one (unspecified) of the two sides. Although the terms are similar, the left–right distinction in algebraic parlance is related neither to left and right limits in calculus, nor to left and right in geometry. Binary operation as an operator A binary operation ∗ may be considered as a family of unary operators through currying: R_t(s) = s ∗ t, depending on t as a parameter – this is the family of right operations. Similarly, L_s(t) = s ∗ t defines the family of left operations parametrized with s. If for some e, the left operation L_e is the identity operation, then e is called a left identity. Similarly, if R_e is the identity operation, then e is a right identity. In ring theory, a subring which is invariant under any left multiplication in a ring is called a left ideal. Similarly, a right-multiplication-invariant subring is a right ideal. Left and right modules Over non-commutative rings, the left–right distinction is applied to modules, namely to specify the side where a scalar (ring element) appears in the scalar multiplication. The distinction is not purely syntactical, because one gets two different associativity rules which link multiplication in a module with multiplication in a ring. A bimodule is simultaneously a left and right module, with two different scalar multiplication operations, obeying an associativity condition on them. Other examples Left eigenvectors Left and right group actions In category theory In category theory the usage of "left" and "right" has some algebraic resemblance, but refers to left and right sides of morphisms. See adjoint functors. See also Operator associativity External links Abstract algebra Mathematical terminology
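A minimal sketch of the currying viewpoint above, using 2×2 matrix multiplication as the non-commutative operation; the function names are ours, not standard notation.

import numpy as np

def L(s):                      # left operation L_s : t -> s * t
    return lambda t: s @ t

def R(t):                      # right operation R_t : s -> s * t
    return lambda s: s @ t

s = np.array([[1, 2], [3, 4]])
t = np.array([[0, 1], [1, 0]])
assert (L(s)(t) == s @ t).all() and (R(t)(s) == s @ t).all()
assert (L(s)(t) != L(t)(s)).any()     # non-commutative: s * t differs from t * s

e = np.eye(2, dtype=int)              # the identity matrix is a two-sided identity:
assert (L(e)(t) == t).all() and (R(e)(s) == s).all()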
Left and right (algebra)
Mathematics
453
14,646,684
https://en.wikipedia.org/wiki/Ammunition%20Design%20Group
Ammunition is a San Francisco, CA, design studio founded in 2007 by Robert Brunner. The current managing partners are Robert Brunner and Matt Rolandson. Ammunition was formed after Brunner parted ways with Pentagram (design firm). The company designs hardware, software, and graphic identities for many companies, including Adobe Systems, Beats by Dre, Polaroid Corporation, and Square Inc. Notable projects Barnes & Noble Nook Ammunition developed the industrial design, user interface and accessory system for the Barnes & Noble Nook e-readers. Smartisan T1 smartphone In May 2014, the company designed the Smartisan T1 and T2 smartphones for China-based Smartisan Technology Co. Ltd. The company won several awards for their design. Awards Ammunition has been recognized with numerous international design awards from the Industrial Designers Society of America, Red Dot, Core77 Design Awards, D&AD, and the Good Design Awards (Chicago). In 2014, Ammunition won a Good Design award in the "Smartphone & Accessory" category for the Smartisan T1 smartphone. In 2014, Ammunition announced that their work was recognised in the Product and Graphic categories of the Spark Awards and won 10 awards. In 2015, Ammunition won an iF Gold Award for the Smartisan T1 smartphone. In 2016, Ammunition won the Cooper Hewitt Product Design award for noteworthy projects including Beats By Dr Dre, the Lyft glowstache and the UNICEF Kid Power Band. In 2016, Ammunition won an iF Product Design Award for the Smartisan T2 smartphone. Ammunition won gold at the 2016 IDSA International Design Excellence Awards. Ammunition and Eargo won gold at the 2018 International Design Excellence Awards. In 2020, Ammunition x Gantri won an AD Cleverest Award for the Signal Floor Light. Ammunition won runner-up for the Consumer Technology Award in the Core77 Design Awards 2024. In 2021, Ammunition designed the all-new trophy for the 10th anniversary of the Innovation by Design awards. See also Pentagram Design Product design References External links Official site Wired news article on PC design Product design Industrial design firms Companies based in San Francisco
Ammunition Design Group
Engineering
427
1,780,029
https://en.wikipedia.org/wiki/Planet%20of%20Giants
Planet of Giants is the first serial of the second season in the British science fiction television series Doctor Who. Written by Louis Marks and directed by Mervyn Pinfield and Douglas Camfield, the serial was first broadcast on BBC1 in three weekly parts from 31 October to 14 November 1964. In the serial, the First Doctor (William Hartnell), his granddaughter Susan Foreman (Carole Ann Ford), and her teachers Ian Chesterton (William Russell) and Barbara Wright (Jacqueline Hill) are shrunk to the size of an inch after the Doctor's time machine the TARDIS arrives in contemporary England. The story's concept was first proposed as the first serial of the show's first season, but was rejected due to its technical complexity and lack of character development. When Marks was commissioned to write the script, he was inspired by Rachel Carson's 1962 environmental science book Silent Spring, the first major documentation on human impact on the environment. The story was originally written and filmed as a four-part serial, but later reduced to three parts; the third and fourth episodes were cut down to form a faster-paced climax. The serial premiered with 8.4 million viewers, maintaining audience figures throughout the three weeks. Retrospective response for the serial was mixed, with criticism directed at its story and characterisation despite praise for its ambition. It later received several print adaptations and home media releases. Plot Despite indications of a malfunction in the TARDIS, its fault locator shows nothing is wrong and that it is safe to go outside. The First Doctor (William Hartnell), Ian Chesterton (William Russell), Barbara Wright (Jacqueline Hill), and Susan Foreman (Carole Ann Ford) consequently explore the vicinity, finding the remains of a giant earthworm and an ant, which appear to have died instantaneously. The travellers realise they have returned to Earth but have shrunk to the height of an inch. Ian is investigating the interior of a discarded matchbox when it is picked up by a government scientist called Farrow (Frank Crawshaw), who is visiting a callous industrialist named Forester (Alan Tilvern) to tell him that his application for a new insecticide called DN6 has been rejected as it is far too deadly to all forms of insect life. News of this appraisal prompts Forester to fatally shoot Farrow. The Doctor, Barbara, and Susan hear the gunshot and head for the house to find Ian unhurt near Farrow's corpse. Forester's aide, Smithers (Reginald Barratt), arrives but does not report the murder for fear of undermining the DN6 project to which he has dedicated his life. Ian and Barbara hide inside Farrow's briefcase to avoid being stepped on by Forester and Smithers, and get separated from the Doctor and Susan after the briefcase is brought inside the house. The Doctor and Susan climb up a drain pipe to find them. Forester alters Farrow's report to give support to the DN6 licence application and, disguising his voice as Farrow's, makes a supportive phone call to the ministry to the same effect. This is overheard by the local telephone operator Hilda Rowse (Rosemary Johnson) and her policeman husband Bert (Fred Ferris), who suspect something is wrong. Within the house, Ian and Barbara encounter a giant fly, which is killed instantly when it contacts sample seeds that had been sprayed with DN6. Barbara, who had handled one of these seeds, begins to feel unwell.
The Doctor, realising the toxic nature of DN6 and the probable contamination of Barbara, proposes they alert someone by hoisting up the giant telephone receiver, but they cannot make themselves heard. At the telephone exchange, the engaged signal makes Hilda and Bert increasingly concerned. Bert heads off to the house to investigate. The Doctor and his companions decide to attract attention by starting a fire, succeeding in manoeuvring an aerosol can into the flames of the Bunsen burner gas outlet. This coincides with Smithers discovering the true virulence of DN6 and demanding Forester cease his licence application. In the lab, the makeshift bomb explodes in Forester's face as PC Rowse arrives. Back in the TARDIS, the Doctor succeeds in returning the craft and crew to normal size, a process which cures Barbara of her contamination by DN6. Production Conception and writing The concept of the Doctor and his companions shrinking in size was initially proposed as the first story of the show's first season, written by C. E. Webber and entitled The Giants. After some rewrites, the serial was rejected by show creator Sydney Newman in June 1963 due to its technical complexity and lack of character development. The concept of The Giants was given to writer Robert Gould in mid-1963 to develop as the four-part fourth serial of the first season, but it was dropped by January 1964 due to scripting difficulties. By February 1964, the serial was assigned to writer Louis Marks. The main narrative was inspired by Rachel Carson's 1962 environmental science book Silent Spring, the first major documentation on human impact on the environment. The fictional insecticide featured in the story, DN6, was inspired by incidents described by Carson regarding the impact of DDT on insects. Writer Mark Wilson wrote in 2017 that the story aired during a time when environmental awareness was beginning to develop among the British public. Story editor David Whitaker commissioned Marks to write the serial, then titled The Planet of Giants, in May 1964. Mervyn Pinfield was assigned to direct the serial. Filming The special effect inserts of a cat were filmed on 30 July 1964 using silent 35mm film, with sound added later during a studio recording. The show's regular cast—Hartnell, Russell, Hill, and Ford—filmed the sequences in which they appeared alongside giant props; the effect was achieved by recording the actors through glass and reflecting the object onto a half-silvered mirror. The footage was later deemed unsatisfactory, and the scenes were re-shot on 13 August. Rehearsals for the first episode took place on 17 August at the London Transport Assembly Rooms, across the road from the BBC Television Centre. Weekly recording for the serial began on 21 August at the Television Centre, Studio 4. Due to Pinfield's other commitments, the fourth and final episode was directed by Douglas Camfield, who had worked as a production assistant to Waris Hussein during the show's first season. The final episode was recorded on 11 September. Post-production Planet of Giants is the first Doctor Who serial to feature the work of incidental music composer Dudley Simpson, who first recorded on 14 August 1964. On 19 October 1964, head of serials Donald Wilson decided to reduce the four-part serial to three episodes, as it was felt to be an unsatisfactory opening to the show's second season; he preferred to open the season with the following serial, The Dalek Invasion of Earth, but its depiction of Susan's departure prevented the change.
The 24-minute third and fourth episodes were transferred to 35 mm film and edited together into a single 25-minute episode from 29 October to 2 November to form a faster-paced climax featuring the main characters. Camfield was credited for the final episode. Reception Broadcast and ratings Planet of Giants was considered a strong debut to the second season, receiving 8.4 million viewers for the first two episodes and 8.9 million for the third. An Audience Research Report on the first episode indicated that the show had gained 17% of the viewing audience. The Appreciation Index increased slightly over the three episodes, from 57 to 59. The BBC Film and Videotape Library did not select the serial for preservation, and the original tapes were wiped in the late 1960s. In 1977, 16mm film prints of the serial were discovered at BBC Enterprises. Critical response At the BBC Programme Review Board after the broadcast of the first episode in November 1964, the director-general Hugh Greene was unimpressed by the story's concept; following the second episode's broadcast, he noted his disappointment at the serial and eagerness for the Daleks' return. An Audience Research Report on the first episode noted that the response had been positive, with praise directed at the props and special effects. Retrospective reviews of the serial were mixed. In The Discontinuity Guide (1995), Paul Cornell, Martin Day, and Keith Topping described the serial as "a strange mix of ecological [science fiction] and 'cops and gangsters'", finding it "good fun, if a little unrepresentative of the series". In The Television Companion (1998), David J. Howe and Stephen James Walker found difficulty in understanding why the serial was considered so important by the production team, and found the plot to be "one of the weakest" in the series so far; they praised Hill's performance, and enjoyed Hartnell and Russell, though noted that Ford was "rather less impressive". In 2008, Patrick Mulkern of Radio Times wrote that the story had ambition and impressive set design, but felt that "the drama itself is less than enthralling"; Mulkern noted that Barbara "[came] across as uncharacteristically wet" and described Simpson's score as "annoyingly childish". In 2012, DVD Talk's John Sinnott felt that the serial was a "solid installment", but considered it strange that the main characters do not interact with the criminals. Dave Golder of SFX described the serial as "undeniably slow, talky and lacking in excitement", particularly criticising Barbara's characterisation. Christopher Bahn of The A.V. Club appreciated the ambition of the serial but felt that it "never quite gels together" and that the condensed final episodes hindered the overall story. Commercial releases A novelisation of Planet of Giants, written by Terrance Dicks, was published by Target Books in January 1990. It was the final First Doctor serial to be novelised. Dicks used the original rehearsal script for the first episode and a camera script for the scrapped final episode to restore the missing sequences. The serial was released on VHS by BBC Video in January 2002; it was the first commercially released story to receive the VidFIRE process. 2 Entertain released the serial on DVD on 20 August 2012, alongside audio commentaries, documentaries, and a recreation of the original third and fourth episodes; the recreation, based on the original scripts, used animation and newly recorded dialogue by Ford and Russell, with John Guilor and Katherine Hadoke as the Doctor and Barbara.
The serial was released on Blu-ray on 5 December 2022, alongside the rest of the second season as part of The Collection. References Bibliography External links 1964 British television episodes Doctor Who serials novelised by Terrance Dicks Doctor Who stories set on Earth Fiction about size change First Doctor serials Television episodes set in England Television episodes set in the 1960s
Planet of Giants
Physics,Mathematics
2,226
23,812,281
https://en.wikipedia.org/wiki/Dry%20gas%20seal
Dry gas seals are non-contacting, dry-running mechanical face seals that consist of a mating (rotating) ring and a primary (stationary) ring. When operating, lifting geometry in the rotating ring generates a fluid-dynamic lifting force causing the stationary ring to separate and create a gap between the two rings. Dry gas seals are mechanical seals, but run on a gas film rather than a liquid lubricant so that they do not contaminate the process. These seals are typically used in harsh working environments such as oil exploration, extraction and refining, petrochemical industries, gas transmission and chemical processing. Machined-in lift profiles on one side of the seal face direct gas inward toward an extremely flat portion of the face. The gas flowing across the face generates a pressure that maintains a minute gap between the faces, optimizing fluid film stiffness and providing the highest possible degree of protection against face contact. The seal's film stiffness compensates for varying operating conditions by adjusting gap and pressure to maintain stability. Design and use Grooves or machined ramps on the seal direct gas inward toward the non-grooved portion. The action of the gas flowing across the seal generates pressure that keeps a minute gap, therefore optimizing fluid film stiffness and providing protection against face contact. The use of these seals in centrifugal compressors has increased significantly in the last two decades because they eliminate contamination and do not use lubricating oil. Non-contacting dry gas seals are often used on compressors for pipelines, off-shore applications, oil refineries, petrochemical and gas processing plants. Types There are many dry gas seal configurations based on their application: Single seal Tandem seal - Broadly used in the petroleum industry Tandem seal with intermediate labyrinth Double opposed seal - Used when the processed gas is abrasive (like hydrogen) and in lower-pressure designs. All designs use buffering with "dry" gas, supplied through control and purification systems. All dry gas seals need additional protection from the process and the bearing lubrication sides of the seal. History The first dry gas seal for a compressor was patented by Kaydon Ring & Seal in 1951, when it was known as Koppers Corporation. Field applications of dry gas seal designs were completed in 1952. The original patent was for Kaydon's "Tapered Ramp" lift geometry, a constant-diameter / variable-depth dynamic lift design. From that first dry gas seal manufactured for a centrifugal compressor in 1951, Kaydon Ring & Seal has been instrumental in developing the dry gas seal into one of the most reliable and maintenance-free sealing solutions available today. John Crane Inc. was issued a patent for dry gas seals in 1968, with field applications beginning in 1975, though the technology is now widely available among seal manufacturers. The technology aimed to correct the problems of lubricated seal environments by eliminating friction, and it soon became a common replacement for other lubricated seals. The patented spiral-groove (constant-depth / variable-diameter) technology of the dry gas seal allows for easy lifting and separation of seal faces during operation. The lift geometry of a dry gas seal can also be unidirectional or bidirectional, depending on the specific design of the lifting geometry. See also Hydrogen turboexpander-generator References Seals (mechanical)
Dry gas seal
Physics
667
53,710,118
https://en.wikipedia.org/wiki/Kukoamines
Kukoamines are chemicals that are present in some plants including Lycium chinense, potatoes, and tomatoes. The most prevalent example is kukoamine A; others include kukoamine B, C, and D. Chemically, kukoamines are catechols and also dihydrocaffeic acid derivatives of polyamines. References Catechols
Kukoamines
Chemistry
79
38,559,592
https://en.wikipedia.org/wiki/HD%20143787
HD 143787 is a single star in the southern constellation of Scorpius. It is a fifth-magnitude star with an apparent visual magnitude of 4.973, and hence is visible to the unaided eye. The distance to HD 143787 can be estimated from its annual parallax shift of , yielding a value of 227 light years. It is moving closer to Earth with a heliocentric radial velocity of −37.9 km/s, and should come within in 1.2 million years. This is an evolved giant star with a stellar classification of K3 III. It is a red clump giant, which means it is on the horizontal branch and is generating energy through helium fusion at its core. At the age of 4.46 billion years, it has 1.25 times the mass of the Sun and is radiating 61.7 times the Sun's luminosity from its enlarged photosphere at an effective temperature of 4,370 K. References K-type giants Horizontal-branch stars Scorpius Durchmusterung objects 143787 078650 5969
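A hedged worked example of the parallax-distance relation used above: d [parsec] = 1 / p [arcsec]. The parallax value itself is elided in the text, so this sketch inverts the quoted distance instead; the constant and variable names are ours.

LY_PER_PC = 3.26156            # light years per parsec

d_pc = 227.0 / LY_PER_PC       # quoted distance of 227 light years, in parsecs
p_mas = 1.0 / d_pc * 1000.0    # implied annual parallax, in milliarcseconds
print(f"{p_mas:.1f} mas")      # ~14.4 mas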
HD 143787
Astronomy
230
46,593,315
https://en.wikipedia.org/wiki/Ollie%20Luba
Oleh "Ollie" R. Luba is an American systems engineer, aerospace engineer, and program manager who was worked on the early development of the GPS III (Global Positioning System, Block IIIA). He was born in Logan, Philadelphia. He currently works at Lockheed Martin, and has for more than 28 years. Education Ollie Luba was born in Philadelphia, Pennsylvania. Luba graduated from Central High School, the second-oldest continuously public high school in the United States and one of the top high schools in the city and state. For college, Luba attended the University of Pennsylvania, and got a BSEE in Electrical Engineering. Thereafter, he went to Drexel University for two years to receive a MSEE in Electrical/Systems Engineering. In 1997, Luba went on to receive his master's degree in Technology Management (EMTM) from the University of Pennsylvania. He completed it in 2001. Career In August 1986, Luba began working at Lockheed Martin, then GE Aerospace, an American global aerospace, defense, security and advanced technology company with worldwide interests. He started working as an Associative Systems Engineer. He currently is a Principal Systems Engineer, and has worked at Lockheed Martin for over 28 years. GPS III Luba started working on the GPS III project in 2002 with his team, including Larry Boyd, Art Gower, and Jeff Crum. In 2005, completing their work, they wrote a paper, titled GPS III System Operations Concepts, which outlined the creation of the GPS III, its uses in the Air Force, connectivity worldwide, and continuation of the GPS project. For over 3 years, he and his team "analyzed potential operational concepts for the Air Force. The completed tasks support the government’s objective of a “realizable and operationally feasible” US Strategic Command (USSTRATCOM) and Air Force Space Command (AFSPC) concept of operations." LM Wisdom Starting in 2013, Luba moved on to his new major project within Lockheed Martin. He began and runs the project LM WISDOM® ITI (Insider Threat Identification), the industry leader in detecting and mitigating insider threats. It is a cyber-security platform that analyzes threats for organizations. LM Wisdom collects and monitors information online, such as revolutions or political instability. Personal life Ollie Luba, born Oleh Rostyslav Luba, is ethnically Ukrainian and speaks both English and Ukrainian. Both of his parents were born in Ukraine. Luba is a member of Plast, the largest Scouting organization in Ukraine, and the Institute of Navigation (ION), a non-profit professional organization for the advancement of the art and science of positioning, navigation and timing. His interests include Biking, Skiing, and Golf. He resides in Valley Forge, Pennsylvania with his wife and two children. References External links GPS III System Operations Concepts Lockheed Martin (http://www.lockheedmartin.com) LM Wisdom (https://lmwisdom.com) 1964 births 21st-century American engineers People associated with the Global Positioning System University of Pennsylvania School of Engineering and Applied Science alumni Drexel University alumni Living people
Ollie Luba
Technology
645
28,366,919
https://en.wikipedia.org/wiki/Sea%20ice%20concentration
Sea ice concentration is a useful variable for climate scientists and nautical navigators. It is defined as the area of sea ice relative to the total at a given point in the ocean. This article will deal primarily with its determination from remote sensing measurements. Significance Sea ice concentration helps determine a number of other important climate variables. Since the albedo of ice is much higher than that of water, ice concentration will regulate insolation in the polar oceans. When combined with ice thickness, it determines several other important fluxes between the air and sea, such as salt and fresh-water fluxes between the polar oceans (see for instance bottom water) as well as heat transfer between the ocean and the atmosphere. Maps of sea ice concentration can be used to determine sea ice area and sea ice extent, both of which are important markers of climate change. Ice concentration charts are also used by navigators to determine potentially passable regions—see icebreaker. Methods In situ Measurements from ships and aircraft are based on simply calculating the relative area of ice versus water visible within the scene. This can be done using photographs or by eye. In situ measurements are used to validate remote sensing measurements. SAR and visible Both synthetic aperture radar and visible sensors (such as Landsat) are normally of high enough resolution that each pixel is simply classified as a distinct surface type, i.e. water versus ice. The concentration can then be determined by counting the number of ice pixels in a given area, which is useful for validating concentration estimates from lower-resolution instruments such as microwave radiometers. Since SAR images are normally monochrome and the backscatter of ice can vary quite considerably, classification is normally done based on texture using groups of pixels—see pattern recognition. Visible sensors have the disadvantage of being quite weather sensitive—images are obscured by clouds—while SAR sensors, especially in the higher resolution modes, have a limited coverage and must be pointed. This is why the tool of choice for determining ice concentration is often a passive microwave sensor. Microwave radiometry All warm bodies emit electromagnetic radiation: see thermal radiation. Since different objects will emit differently at different frequencies, we can often determine what type of object we are looking at based on its emitted radiation—see spectroscopy. This principle underlies all passive microwave sensors and most passive infrared sensors. Passive is used in the sense that the sensor only measures radiation that has been emitted by other objects but does not emit any of its own. (A SAR sensor, by contrast, is active.) SMMR and SSM/I radiometers were flown on the Nimbus program and DMSP series of satellites. Because clouds are translucent in the microwave regime, especially at lower frequencies, microwave radiometers are quite weather insensitive. Since most microwave radiometers operate along a polar orbit with a broad, sweeping scan, full ice maps of the polar regions, where the swaths are largely overlapping, can usually be obtained within one day. This frequency and reliability come at the cost of a poor resolution: the angular field of view of an antenna is directly proportional to the wavelength and inversely proportional to the effective aperture diameter. Thus a large deflector dish is needed to compensate for a low frequency.
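To make the resolution trade-off concrete, a hedged sketch (dish size, orbit altitude, and channel frequencies are assumptions loosely modelled on SSM/I, not values from the article): the surface footprint scales roughly like θ·h with θ ≈ λ/D.

c = 3.0e8                      # speed of light, m/s
D = 0.6                        # assumed antenna diameter, m
h = 800e3                      # assumed polar-orbit altitude, m

for f_ghz in (19.35, 37.0, 85.5):          # channels similar to SSM/I's
    lam = c / (f_ghz * 1e9)                # wavelength, m
    theta = lam / D                        # approximate angular field of view, radians
    print(f"{f_ghz} GHz: footprint ~{theta * h / 1e3:.0f} km")
# Footprints of tens of kilometres at the lower channels, versus metres for SAR,
# which is why passive-microwave ice maps are comparatively coarse.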
Most ice concentration algorithms based on microwave radiometry are predicated on the dual observation that: 1. different surface types have different, strongly clustered, microwave signatures and 2. the radiometric signature at the instrument head is a linear combination of those of the different surface types, with the weights taking on the values of the relative concentrations. If we form a vector space from each of the instrument channels in which all but one of the signatures of the different surface types are linearly independent, then it is straightforward to solve for the relative concentrations: T = T_0 + Σ_i C_i (T_i − T_0), where T is the radiometric signature at the instrument head (normally measured as a brightness temperature), T_0 is the signature of the nominal background surface type (normally water), T_i is the signature of the i-th surface type, and C_i are the relative concentrations. Every operational ice concentration algorithm is predicated on this principle or a slight variation. The NASA Team algorithm, for instance, works by taking the difference of two channels and dividing by their sum. This makes the retrieval slightly nonlinear, but with the advantage that the influence of temperature is mitigated. This is because brightness temperature varies roughly linearly with physical temperature when all other things are equal (see emissivity), and because the sea ice emissivity at different microwave channels is strongly correlated. As the equation suggests, concentrations of multiple ice types can potentially be detected, with the NASA Team algorithm distinguishing between first-year and multi-year ice. Accuracies of sea ice concentration derived from passive microwave sensors may be expected to be on the order of 5% (absolute). A number of factors act to reduce the accuracy of the retrievals, the most obvious being variations in the microwave signatures produced by a given surface type. For sea ice, the presence of snow, variations in salt and moisture content, the presence of melt ponds, and variations in surface temperature will all produce strong variations in the microwave signature of a given ice type. New and thin ice in particular will often have a microwave signature closer to that of open water. This is normally because of its high salt content, not because of radiation being transmitted from the water through the ice (see sea ice emissivity modelling). The presence of waves and surface roughness will change the signature over open water. Adverse weather conditions, clouds and humidity in particular, will also tend to reduce the accuracy of retrievals. See also Arctic sea ice decline References External links High-resolution sea ice concentration charts derived from AMSR-E 89 GHz channel The Arctic ice sheet True color satellite map with daily updates. Sea ice Remote sensing Radiometry
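As a concrete, deliberately simplified illustration of the linear mixing model reconstructed above, the sketch below retrieves ice concentrations from two hypothetical channels by least squares. The tie-point values are invented placeholders, not calibrated signatures from any operational algorithm such as NASA Team:

```python
import numpy as np

# Hypothetical brightness-temperature tie points (kelvin) for two channels.
# These numbers are illustrative placeholders, not operational tie points.
T_water = np.array([120.0, 160.0])   # open water signature
T_fy    = np.array([230.0, 250.0])   # first-year ice signature
T_my    = np.array([190.0, 210.0])   # multi-year ice signature

def retrieve_concentrations(T_obs):
    """Solve T_obs = T_water + C_fy*(T_fy - T_water) + C_my*(T_my - T_water)
    for the relative concentrations by least squares."""
    A = np.column_stack([T_fy - T_water, T_my - T_water])
    b = T_obs - T_water
    C, *_ = np.linalg.lstsq(A, b, rcond=None)
    return C  # [C_firstyear, C_multiyear]; the open-water fraction is 1 - C.sum()

# Example pixel: 60% first-year ice, 30% multi-year ice, 10% open water.
T_obs = T_water + 0.6 * (T_fy - T_water) + 0.3 * (T_my - T_water)
C = retrieve_concentrations(T_obs)
print(C, 1.0 - C.sum())  # ~[0.6, 0.3] and ~0.1
```

With more channels than surface types the same least-squares step still applies, which is one reason multichannel radiometers can separate first-year from multi-year ice.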
Sea ice concentration
Physics,Engineering
1,168
7,860,110
https://en.wikipedia.org/wiki/Sexual%20differentiation%20in%20humans
Sexual differentiation in humans is the process of development of sex differences in humans. It is defined as the development of phenotypic structures consequent to the action of hormones produced following gonadal determination. Sexual differentiation includes the development of different genitalia and internal genital tracts; body hair also plays a role in sex identification. The development of sexual differences begins with the XY sex-determination system that is present in humans, and complex mechanisms are responsible for the development of the phenotypic differences between male and female humans from an undifferentiated zygote. Females typically have two X chromosomes, and males typically have a Y chromosome and an X chromosome. At an early stage in embryonic development, both sexes possess equivalent internal structures. These are the mesonephric ducts and paramesonephric ducts. The presence of the SRY gene on the Y chromosome causes the development of the testes in males, and the subsequent release of hormones which cause the paramesonephric ducts to regress. In females, the mesonephric ducts regress. Disorders of sex development (DSD), conditions in which the genitals appear undeveloped, ambiguous, or typical of the opposite sex (sometimes known as intersex), can result from genetic and hormonal factors. Sex determination Most mammals, including humans, have an XY sex-determination system: the Y chromosome carries factors responsible for triggering male development. In the absence of a Y chromosome, the fetus will undergo female development. This is because of the presence of the sex-determining region of the Y chromosome, also known as the SRY gene. Thus, male mammals typically have an X and a Y chromosome (XY), while female mammals typically have two X chromosomes (XX). Chromosomal sex is determined at the time of fertilization; a chromosome from the sperm cell, either X or Y, fuses with the X chromosome in the egg cell. Gonadal sex refers to the gonads, that is the testicles or ovaries, depending on which genes are expressed. Phenotypic sex refers to the structures of the external and internal genitalia. Six weeks elapse after fertilization before the first signs of sex differentiation can be observed in human embryos. The embryo and subsequent early fetus appear to be sexually indifferent, looking like neither a male nor a female. Over the next several weeks, hormones are produced that cause undifferentiated tissue to transform into either male or female reproductive organs. This process is called sexual differentiation. The precursor of the internal female sex organs is called the Müllerian system. Reproductive system Differentiation between the sexes of the sex organs occurs throughout embryological, fetal and later life. In both males and females, the sex organs consist of two structures: the internal genitalia and the external genitalia. In males, the gonads are the testicles and in females, they are the ovaries. These are the organs that produce gametes (egg and sperm), the reproductive cells that will eventually meet to form the fertilized egg (zygote). As the zygote divides, it first becomes the embryo (which means 'growing within'), typically between zero and eight weeks, then from the eighth week until birth, it is considered the fetus (which means 'unborn offspring'). The internal genitalia are all the accessory glands and ducts that connect the gonads to the outside environment.
The external genitalia consist of all the external reproductive structures. The sex of an early embryo cannot be determined because the reproductive structures do not differentiate until the seventh week. Prior to this, the embryo is considered bipotential because it cannot be identified as male or female. Internal genital differentiation The internal genitalia consist of two accessory ducts: mesonephric ducts (male) and paramesonephric ducts (female). The mesonephric system is the precursor to the male genitalia and the paramesonephric to the female reproductive system. As development proceeds, one of the pairs of ducts develops while the other regresses. This depends on the presence or absence of the sex-determining region of the Y chromosome, also known as the SRY gene. In the presence of a functional SRY gene, the bipotential gonads develop into testes. Gonads are histologically distinguishable by 6–8 weeks of gestation. Subsequent development of one set and degeneration of the other depends on the presence or absence of two testicular hormones: testosterone and anti-Müllerian hormone (AMH). Disruption of typical development may result in the development of both, or neither, duct system, which may produce morphologically intersex individuals. Males: When transcribed and processed, the SRY gene produces the SRY protein, which binds to DNA and directs the development of the gonads into testes. Male development can only occur when the fetal testis secretes key hormones at a critical period in early gestation. The testes begin to secrete three hormones that influence the male internal and external genitalia: anti-Müllerian hormone (AMH), testosterone, and dihydrotestosterone (DHT). Anti-Müllerian hormone causes the paramesonephric ducts to regress. Testosterone converts the mesonephric ducts into male accessory structures, including the epididymides, vasa deferentia, and seminal vesicles. Testosterone also controls the descent of the testes from the abdomen. Many other genes found on other autosomes, including WT1, SOX9 and SF1, also play a role in gonadal development. Females: Without testosterone and AMH, the mesonephric ducts degenerate and disappear. The paramesonephric ducts develop into the uterus, fallopian tubes, and upper vagina (the lower vagina develops from the urogenital sinus). There remains a broad lack of information about the genetic control of female development, and much is still unknown about the female embryonic process. External genital differentiation By 7 weeks, a fetus has a genital tubercle, urogenital sinus, urogenital folds and labioscrotal swellings. In females, without excess androgens, these become the vulva (clitoris, vestibule, labia minora and labia majora respectively). Males become externally distinct between 8 and 12 weeks, as androgens enlarge the genital tubercle and cause the urogenital groove and sinus to fuse in the midline, producing an unambiguous penis with a phallic urethra, and the labioscrotal swellings become a thinned, rugate scrotum where the testicles are situated. Dihydrotestosterone differentiates the remaining male characteristics of the external genitalia. A sufficient amount of any androgen can cause external masculinization. The most potent is dihydrotestosterone (DHT), generated from testosterone in skin and genital tissue by the action of 5α-reductase. A male fetus may be incompletely masculinized if this enzyme is deficient.
In some diseases and circumstances, other androgens may be present in high enough concentrations to cause partial or (rarely) complete masculinization of the external genitalia of a genetically female fetus. The testes begin to secrete three hormones that influence the male internal and external genitalia: anti-Müllerian hormone, testosterone, and dihydrotestosterone. Anti-Müllerian hormone (AMH) causes the paramesonephric ducts to regress. Testosterone converts the mesonephric ducts into male accessory structures, such as the epididymis, vas deferens and seminal vesicle. Testosterone also controls the descent of the testes from the abdomen into the scrotum. Dihydrotestosterone (DHT) differentiates the remaining male characteristics of the external genitalia. Further sex differentiation of the external genitalia occurs at puberty, when androgen levels again become disparate. Male levels of testosterone directly induce growth of the penis, and indirectly (via DHT) the prostate. Alfred Jost observed that while testosterone was required for mesonephric duct development, the regression of the paramesonephric duct was due to another substance. This was later determined to be Müllerian inhibiting substance (MIS), a 140 kDa dimeric glycoprotein produced by Sertoli cells. MIS blocks the development of the paramesonephric ducts, promoting their regression. Secondary sexual characteristics Breast development Visible differentiation occurs at puberty, when estradiol and other hormones cause breasts to develop in typical females. Psychological and behavioral differentiation Human adults and children show many psychological and behavioral sex differences. Some (e.g. dress) are learned and cultural. Others are demonstrable across cultures and have both biological and learned determinants. For example, some studies claim girls are, on average, more verbally fluent than boys, but boys are, on average, better at spatial calculation. Some have observed that this may be due to two different patterns in parental communication with infants, noting that parents are more likely to talk to girls and more likely to engage in physical play with boys. Disorders of sex development The following are some of the conditions associated with atypical determination and differentiation processes: A zygote with only one X chromosome (XO) results in Turner syndrome and will develop with female characteristics. Congenital adrenal hyperplasia – inability of the adrenal glands to produce sufficient cortisol, leading to increased production of testosterone and resulting in severe masculinization of 46,XX females. The condition also occurs in XY males, though they suffer from the effects of low cortisol and salt-wasting, not virilization. Persistent Müllerian duct syndrome – a rare type of pseudohermaphroditism that occurs in 46,XY males, caused by either a mutation in the Müllerian inhibiting substance (MIS) gene, on 19p13, or its type II receptor, on 12q13. It results in a retention of Müllerian ducts (persistence of a rudimentary uterus and fallopian tubes in otherwise normally virilized males), unilateral or bilateral undescended testes, and sometimes infertility. XY differences of sex development – atypical androgen production or inadequate androgen response, which can cause incomplete masculinization in XY males. This varies from mild failure of masculinization with undescended testes to complete sex reversal and a female phenotype (androgen insensitivity syndrome). Swyer syndrome –
a form of complete gonadal dysgenesis, mostly due to mutations in the first step of sex determination, the SRY gene. A 5-alpha-reductase deficiency results in atypical development characterized by a female phenotype or an undervirilized male phenotype, with development of the epididymis, vas deferens, seminal vesicle, and ejaculatory duct, but also a pseudovagina. This occurs because testosterone is converted to the more potent DHT by 5-alpha-reductase; DHT is necessary to exert androgenic effects farther from the site of testosterone production, where the concentrations of testosterone are too low to have any potency. See also Neuroscience of sex differences Sex differences in humans Determination of sex References Further reading Epigenetics Human reproduction Physiology
Sexual differentiation in humans
Biology
2,462
30,552,634
https://en.wikipedia.org/wiki/MIMOS%20II
MIMOS II is a miniaturised Mössbauer spectrometer, developed by Dr. Göstar Klingelhöfer at the Johannes Gutenberg University in Mainz, Germany, that is used on the Mars Exploration Rovers Spirit and Opportunity for close-up investigation of the mineralogy of iron-bearing rocks and soils on the Martian surface. MIMOS II uses a cobalt-57 gamma-ray source of about 300 mCi at launch, which gave a 6–12 hour acquisition time for a standard Mössbauer spectrum during the primary mission on Mars, depending on the total Fe content and on which Fe-bearing phases were present. Cobalt-57 has a half-life of only 271.8 days (hence the extended measuring times on Mars after more than a decade). The MIMOS II sensor heads used on Mars are approximately 9 cm × 5 cm × 4 cm and weigh about 400 g. The MIMOS II system also includes a circuit board of about 100 g. References Mars Exploration Rover mission Spectrometers Spacecraft instruments Space science experiments
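The effect of the 271.8-day half-life on measuring time is easy to estimate if one assumes, as a rough first approximation, that the acquisition time scales inversely with the remaining source activity. A minimal sketch (the 6-hour figure is the article's lower bound at launch; the inverse-scaling assumption is ours, from counting statistics):

```python
import math

HALF_LIFE_DAYS = 271.8   # cobalt-57 half-life, from the article
A0_MCI = 300.0           # source activity at launch (mCi), from the article
T0_HOURS = 6.0           # shortest acquisition time early in the mission

def activity(days_since_launch):
    """Remaining activity (mCi) from the exponential decay law."""
    return A0_MCI * 2.0 ** (-days_since_launch / HALF_LIFE_DAYS)

def acquisition_time(days_since_launch):
    """Estimated spectrum acquisition time (hours), assuming it scales
    inversely with source activity."""
    return T0_HOURS * A0_MCI / activity(days_since_launch)

for years in (0, 1, 5, 10):
    d = years * 365.25
    print(f"{years:2d} yr: {activity(d):8.3f} mCi -> ~{acquisition_time(d):10.1f} h")
# After 10 years the activity has fallen by a factor of roughly 10^4, which is
# why spectra that once took hours eventually required very long integrations.
```

The actual integration times used on Mars also depend on background and on the spectral quality required, so this sketch only captures the dominant trend.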
MIMOS II
Physics,Chemistry
208
24,006,909
https://en.wikipedia.org/wiki/Interlace%20%28art%29
In the visual arts, interlace is a decorative element found in medieval art. In interlace, bands or portions of other motifs are looped, braided, and knotted in complex geometric patterns, often to fill a space. Interlacing is common in the Migration period art of Northern Europe, in the early medieval Insular art of Britain and Ireland, in the Norse art of the Early Middle Ages, and in Islamic art. Intricate braided and interlaced patterns, called plaits in British usage, first appeared in late Roman art in various parts of Europe, in mosaic floors and other media. Coptic manuscripts and textiles of 5th- and 6th-century Christian Egypt are decorated with broad-strand ribbon interlace ornament bearing a "striking resemblance" to the earliest types of knotwork found in the Insular art manuscripts of Ireland and the British Isles. History and application Northern Europe Interlace is a key feature of the "Style II" animal-style decoration of Migration Period art; it is found widely across Northern Europe and was carried by the Lombards into Northern Italy. Typically the long "ribbons" eventually terminate in an animal's head. By about 700 it had become less common in most of Europe, but it continued to develop in the British Isles and Scandinavia, where it is found on metalwork, woodcarving, runestones, high crosses, and illuminated manuscripts of the 7th to 12th centuries. Artist George Bain has characterised the early Insular knotwork found in the 7th-century Book of Durrow and the Durham Cathedral Gospel Book fragment as "broken and rejoined" braids. Whether Coptic braid patterns were transmitted directly to Hiberno-Scottish monasteries from the eastern Mediterranean or came via Lombardic Italy is uncertain. Art historian James Johnson Sweeney argued for direct communication between the scriptoria of Early Christian Ireland and the Coptic monasteries of Egypt. The new Insular style featured elongated beasts intertwined into symmetrical shapes, and can be dated to the mid-7th century based on the accepted dating of examples in the Sutton Hoo treasure. The most elaborate interlaced zoomorphics occur in Viking Age art of the Urnes style (arising before 1050), where tendrils of foliate designs intertwine with stylized animals. The full flowering of Northern European interlace occurred in the Insular art of the British Isles, where the animal-style ornament of Northern Europe blended with ribbon knotwork and Christian influences in such works as the Book of Kells and the Cross of Cong. Whole carpet pages were illuminated with abstract patterns, including much use of interlace, and stone high crosses combined interlace panels with figurative ones. Insular interlace was copied in continental Europe, closely in the Franco-Saxon school of the 8th to 11th centuries, and less so in other Carolingian schools of illumination, where the tendency was to foliate decorative forms. In Romanesque art these became typical, and the interlace was generally much less complex. Some animal forms are also found. Islamic art Geometric interlacing patterns are common in Islamic ornament. They can be considered a particular type of arabesque. Umayyad architectural elements such as floor mosaics, window grilles, carvings and wall paintings, and decorative metalwork of the 8th to 10th centuries are followed by the intricate interlacings common in later medieval Islamic art. Interlaced elaborations are also found in Kufic calligraphy.
Southern Europe Interlace and knotwork are often found in Byzantine art, continuing Roman usage, but they are not given great prominence. One notable example of widespread local usage of interlace is the three-ribbon interlace found in early medieval Croatia on stone carvings from the 9th to 11th centuries. Interlaces were widely used in the period of the Serbian Morava architectural school, from the 14th to the 15th century. They were used on and within churches and monasteries, as well as in religious literature. Interlaces are also an important ornament used in Brâncovenesc architecture, an architectural style that evolved in Romania during the administration of Prince Constantin Brâncoveanu in the late 17th and early 18th centuries. Later, in the late 19th century and the first half of the 20th, the style was reused in Romanian Revival architecture. Notes References External links Illustrated article by Peter Hubert on the origins of interlace sculpture. Medieval art Decorative knots Iconography Celtic art Ornaments Insular art Anglo-Saxon art Islamic art Visual motifs
Interlace (art)
Mathematics
900
12,792,329
https://en.wikipedia.org/wiki/Environmentalists%20for%20Nuclear
Environmentalists for Nuclear Energy (EFN), in French "Association des Écologistes Pour le Nucléaire" (AEPN), founded in 1996, is a pro-nuclear power non-profit organization that aims to provide information to the public on energy and the environment. It also promotes the benefits of nuclear energy for a cleaner world, and aims at uniting people in favor of clean nuclear energy. EFN is funded by the memberships and donations of its members. The website of the organization states that environmental opposition to nuclear energy is the "greatest misunderstanding and mistake of the century". History EFN was started by Bruno Comby in 1996 after the publication of his book Environmentalists For Nuclear Energy. EFN had over 10,000 members and supporters in 2013, with local correspondents and a network of affiliated organizations in more than 60 countries, informing the public on energy and the environment. Patrick Albert Moore and James Lovelock are supporters of the group. The annual assembly of EFN is held at its headquarters in Houilles, a suburb of Paris, France. The headquarters are in a positive-energy ecohouse, powered with solar energy (thermal and photovoltaic), wind energy, geothermal air-conditioning, a high-efficiency heat pump, double-flux ventilation, and just a small amount of low-carbon French nuclear energy. Conceptually designed by members of the organization, this house has an almost-nil carbon footprint (500 times less than a standard gas-heated house of the same size). See also Environmental impact of nuclear power References External links Home page of Environmentalists for Nuclear. EFN-AUSTRALIA (President: Richard McNeall) Environmental impact of nuclear power Nuclear organizations
Environmentalists for Nuclear
Technology,Engineering
360
54,374,854
https://en.wikipedia.org/wiki/OMS%20encoding
OMS (aka TeX math symbol) is a 7-bit TeX encoding developed by Donald E. Knuth. It encodes mathematical symbols, including those with variable sizes such as ∏ for capital pi (product) notation, brackets, braces and radicals. See also OML encoding OT1 encoding References Character sets TeX
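For context, the lines below sketch how a LaTeX format binds the OMS encoding to an actual font; they mirror declarations of the kind found in the LaTeX kernel's fontmath.ltx (a sketch under that assumption, not a complete setup):

```latex
% Bind the 'symbols' math group to the OMS-encoded Computer Modern
% Symbol font (cmsy), as the LaTeX kernel does:
\DeclareSymbolFont{symbols}{OMS}{cmsy}{m}{n}
% Individual math symbols are then mapped to 7-bit slots in that layout,
% e.g. the relation \sim at slot "18 of the OMS encoding:
\DeclareMathSymbol{\sim}{\mathrel}{symbols}{"18}
```

Because the encoding is only 7-bit (128 slots), every math symbol family that does not fit here spills into the other classic TeX math encodings, such as OML.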
OMS encoding
Mathematics
60
1,263,365
https://en.wikipedia.org/wiki/Acetylide
In chemistry, an acetylide is a compound that can be viewed as the result of replacing one or both hydrogen atoms of acetylene (ethyne) by metallic or other cations. Calcium carbide is an important industrial compound, which has long been used to produce acetylene for welding and illumination. It is also a major precursor to vinyl chloride. Other acetylides are reagents in organic synthesis. Nomenclature The term acetylide is used loosely. It applies to derivatives of acetylene of the type MC≡CR, where R = H or a side chain that is usually organic. The nomenclature can be ambiguous with regard to the distinction between compounds of the type MC2R and M2C2. When both hydrogens of acetylene are replaced by metals, the compound can also be called a carbide, e.g. calcium carbide, CaC2. When only one hydrogen atom is replaced, the anion may be called hydrogen acetylide or the prefix mono- may be attached to the metal, as in monosodium acetylide, NaC≡CH. An acetylide may be a salt (ionic compound) containing the anion HC≡C−, RC≡C−, or the dianion C2(2−), as in sodium acetylide (NaC≡CH) or cobalt acetylide (CoC2). Other acetylides have the metal bound to the carbon atom(s) by covalent bonds, and are therefore coordination or organometallic compounds. Ionic acetylides Alkali metal and alkaline earth metal acetylides of the general formula MC≡CM are salt-like Zintl phase compounds, containing C2(2−) ions. Evidence for this ionic character can be seen in the ready hydrolysis of these compounds to form acetylene and metal hydroxides, and in their solubility in liquid ammonia as solvated C2(2−) ions. The C2(2−) ion has a closed-shell ground state of 1Σ, making it isoelectronic with the neutral molecule N2, which may afford it some gas-phase stability. Organometallic acetylides Some acetylides, particularly those of transition metals, show evidence of covalent character, e.g. being neither dissolved nor decomposed by water, and undergoing radically different chemical reactions. That seems to be the case for silver acetylide and copper acetylide, for example. In the absence of additional ligands, metal acetylides adopt polymeric structures wherein the acetylide groups are bridging ligands. Preparation Of the type MC2R Acetylene and terminal alkynes are weak acids and can be deprotonated by a suitable metal reagent: RC≡CH + R″M ⇌ R″H + RC≡CM. Monopotassium and monosodium acetylide can be prepared by reacting acetylene with bases like sodium amide or with the elemental metals, often at room temperature and atmospheric pressure. Copper(I) acetylide can be prepared by passing acetylene through an aqueous solution of copper(I) chloride, from which it precipitates because of its low solubility. Similarly, silver acetylides can be obtained from silver nitrate. In organic synthesis, acetylides are usually prepared by treating acetylene and alkynes with organometallic or inorganic bases. Classically, liquid ammonia was used as the solvent for deprotonations, but ethers are now more commonly used. Lithium amide, LiHMDS, or organolithium reagents, such as butyllithium (n-BuLi), are frequently used to form lithium acetylides: RC≡CH + n-BuLi → RC≡CLi + butane. Of the type M2C2 Calcium carbide is prepared industrially by heating carbon with lime (calcium oxide) at approximately 2,000 °C. A similar process can be used to produce lithium carbide. Formation of dilithium acetylide, Li2C2, competes with that of the monolithium derivative, LiC2H. Reactions Ionic acetylides are typically decomposed by water with evolution of acetylene: CaC2 + 2 H2O → Ca(OH)2 + C2H2 and NaC≡CH + H2O → NaOH + C2H2. Acetylides of the type RC2M are widely used in alkynylations in organic chemistry. They are nucleophiles that add to a variety of electrophilic and unsaturated substrates.
A classic application is the Favorskii reaction. In a typical sequence, ethyl propiolate is deprotonated by n-butyllithium to give the corresponding lithium acetylide. This acetylide adds to the carbonyl center of cyclopentanone. Hydrolysis liberates the alkynyl alcohol. The dimerization of acetylene to vinylacetylene proceeds by insertion of acetylene into a copper(I) acetylide complex. Coupling reactions Acetylides are sometimes used as intermediates in coupling reactions. Examples include the Sonogashira coupling, Cadiot-Chodkiewicz coupling, Glaser coupling and Eglinton coupling. Hazards Some acetylides are notoriously explosive. Formation of acetylides poses a risk in the handling of gaseous acetylene in the presence of metals such as mercury, silver or copper, or of alloys with a high content of these metals (brass, bronze, silver solder). See also Ethynyl Ethynyl radical Diatomic carbon (neutral C2) Acetylenediol References Anions Functional groups
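As a worked example of the hydrolysis stoichiometry given in the Reactions section, the acetylene yield from calcium carbide can be estimated from molar masses (a back-of-the-envelope sketch; atomic masses rounded):

```python
# Hydrolysis: CaC2 + 2 H2O -> Ca(OH)2 + C2H2 (1:1 mole ratio of CaC2 to C2H2)
M_CaC2 = 40.08 + 2 * 12.011        # g/mol, approximate atomic masses
M_C2H2 = 2 * 12.011 + 2 * 1.008

def acetylene_from_carbide(grams_cac2):
    """Grams of acetylene evolved per given mass of pure CaC2."""
    moles = grams_cac2 / M_CaC2
    return moles * M_C2H2

print(acetylene_from_carbide(100.0))  # ~40.6 g of C2H2 per 100 g of CaC2
```

Technical-grade carbide contains impurities, so practical acetylene yields are somewhat lower than this ideal figure.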
Acetylide
Physics,Chemistry
1,078
912,988
https://en.wikipedia.org/wiki/Spokestoon
A spokestoon is an established cartoon character who is hired to endorse a product. When the United States entered World War II, well-known cartoon celebrities already highly placed in American popular culture, such as Donald Duck and Bugs Bunny, joined the war effort, donating their highly visible images for patriotic and informative cartoons. Bambi, loaned by Walt Disney during 1943 to the US Forest Service, was the precursor of the purposely created Smokey Bear. Spokestoons have also lent their celebrity status to individual events, such as Pogo for Earth Day in 1970, or The Smurfs for UNICEF in 2005. Since then, many high-profile cartoon characters have turned their skills to corporate product placement. Though fast food franchises have used gimmicks to tie in temporarily with current releases of animated features since the 1950s, a few cartoons have become more permanently associated with a product or service offered by corporate culture, similar to a mascot, and may be considered genuine spokestoons. Early recorded usages of the term "spokestoon" include a March 25, 1995, feature in the Portland, Maine Press Herald, noting "Buster Brown, the comic strip character who became the 'spokestoon' for the children's shoe line", and an October 1995 article about the Disney Corporation's use of characters from The Lion King to promote good nutrition in children. Some examples of spokestoons and the products they are identified with include: Dennis the Menace for Dairy Queen until 2002 Donald Duck for Donald Duck orange juice Fred Flintstone and Barney Rubble for Winston cigarettes, Post's Pebbles, and Flintstones vitamins Little Lulu for Kleenex Bugs Bunny for Tang, Kool-Aid, and Weetabix Gumby for Cheerios Peanuts characters for the Ford Falcon car, Dolly Madison snacks, and Metropolitan Life Insurance Mickey Mouse for Disney Mickey's Magix breakfast cereal The Pink Panther for Owens Corning fiberglass thermal insulation, and Sweet'n Low artificial sweetener The Road Runner for Charter Communications's Road Runner (now Spectrum) internet service and AutoNation Rocky and Bullwinkle characters for Family Fun Center, General Mills, and Taco Bell The Simpsons characters for Nestlé's Butterfinger candy bars and Procter & Gamble's Vizir laundry detergent Underdog characters for Family Fun Center Winnie the Pooh characters for Disney Hunny B's Honey-Graham breakfast cereal Yogi Bear characters for Yogi Bear Toastee Tarts Huey, Dewey and Louie for Nestlé's Trio See also Mascot References Animated characters Advertising Mascots
Spokestoon
Mathematics
524
4,924,578
https://en.wikipedia.org/wiki/Structure%20factor
In condensed matter physics and crystallography, the static structure factor (or structure factor for short) is a mathematical description of how a material scatters incident radiation. The structure factor is a critical tool in the interpretation of scattering patterns (interference patterns) obtained in X-ray, electron and neutron diffraction experiments. Confusingly, there are two different mathematical expressions in use, both called 'structure factor'. One is usually written S(q); it is more generally valid, and relates the observed diffracted intensity per atom to that produced by a single scattering unit. The other is usually written F or F_hkℓ and is only valid for systems with long-range positional order, i.e. crystals. This expression relates the amplitude and phase of the beam diffracted by the (hkℓ) planes of the crystal (h, k, ℓ are the Miller indices of the planes) to that produced by a single scattering unit at the vertices of the primitive unit cell. F_hkℓ is not a special case of S(q); S(q) gives the scattering intensity, but F_hkℓ gives the amplitude. It is the modulus squared |F_hkℓ|^2 that gives the scattering intensity. F_hkℓ is defined for a perfect crystal, and is used in crystallography, while S(q) is most useful for disordered systems. For partially ordered systems such as crystalline polymers there is obviously overlap, and experts will switch from one expression to the other as needed. The static structure factor is measured without resolving the energy of scattered photons/electrons/neutrons. Energy-resolved measurements yield the dynamic structure factor. Derivation of S(q) Consider the scattering of a beam of wavelength λ by an assembly of N particles or atoms stationary at positions R_j, j = 1, ..., N. Assume that the scattering is weak, so that the amplitude of the incident beam is constant throughout the sample volume (Born approximation), and absorption, refraction and multiple scattering can be neglected (kinematic diffraction). The direction of any scattered wave is defined by its scattering vector q = k_s − k_i, where k_s and k_i (with |k_s| = |k_i| = 2π/λ) are the scattered and incident beam wavevectors, and θ is the angle between them. For elastic scattering, |k_s| = |k_i| and q = |q| = (4π/λ) sin(θ/2), limiting the possible range of q (see Ewald sphere). The amplitude and phase of this scattered wave will be the vector sum of the scattered waves from all the atoms: ψ_s(q) = Σ_j f_j exp(−i q·R_j) (1). For an assembly of atoms, f_j is the atomic form factor of the j-th atom. The scattered intensity is obtained by multiplying this function by its complex conjugate: I(q) = ψ_s(q) ψ_s*(q) = Σ_j Σ_k f_j f_k exp(−i q·(R_j − R_k)) (2). The structure factor is defined as this intensity normalized by Σ_j f_j^2: S(q) = (1/Σ_j f_j^2) Σ_j Σ_k f_j f_k exp(−i q·(R_j − R_k)) (3). If all the atoms are identical, then Equation (2) becomes I(q) = f^2 Σ_j Σ_k exp(−i q·(R_j − R_k)) and so S(q) = (1/N) Σ_j Σ_k exp(−i q·(R_j − R_k)) (4). Another useful simplification is if the material is isotropic, like a powder or a simple liquid. In that case, the intensity depends only on q = |q| and the distances r_jk = |R_j − R_k|. In three dimensions, Equation (4) then simplifies to the Debye scattering equation: S(q) = (1/N) Σ_j Σ_k sin(q r_jk)/(q r_jk) (5), where the j = k terms are understood to equal 1. An alternative derivation gives good insight, but uses Fourier transforms and convolution. To be general, consider a scalar (real) quantity ρ(r) defined in a volume V; this may correspond, for instance, to a mass or charge distribution or to the refractive index of an inhomogeneous medium. If the scalar function is integrable, we can write its Fourier transform as ρ(q) = ∫_V ρ(r) exp(−i q·r) dr. In the Born approximation the amplitude of the scattered wave corresponding to the scattering vector q is proportional to the Fourier transform ρ(q). When the system under study is composed of a number N of identical constituents (atoms, molecules, colloidal particles, etc.)
each of which has a distribution of mass or charge ρ_0(r), then the total distribution can be considered the convolution of this function with a set of delta functions: ρ(r) = Σ_j ρ_0(r − R_j) = ρ_0(r) ∗ Σ_j δ(r − R_j), with the particle positions R_j as before. Using the property that the Fourier transform of a convolution product is simply the product of the Fourier transforms of the two factors, we have ρ(q) = ρ_0(q) × Σ_j exp(−i q·R_j), so that: I(q) ∝ |ρ_0(q)|^2 Σ_j Σ_k exp(−i q·(R_j − R_k)) (6). This is clearly the same as Equation (2) with all particles identical, except that here the form factor ρ_0(q) is shown explicitly as a function of q. In general, the particle positions are not fixed and the measurement takes place over a finite exposure time and with a macroscopic sample (much larger than the interparticle distance). The experimentally accessible intensity is thus an averaged one; we need not specify whether ⟨ ⟩ denotes a time or ensemble average. To take this into account we can rewrite Equation (4) as: S(q) = (1/N) ⟨ Σ_j Σ_k exp(−i q·(R_j − R_k)) ⟩ (7). Perfect crystals In a crystal, the constitutive particles are arranged periodically, with translational symmetry forming a lattice. The crystal structure can be described as a Bravais lattice with a group of atoms, called the basis, placed at every lattice point; that is, [crystal structure] = [lattice] ∗ [basis]. If the lattice is infinite and completely regular, the system is a perfect crystal. For such a system, only a set of specific values for q can give scattering, and the scattering amplitude for all other values is zero. This set of values forms a lattice, called the reciprocal lattice, which is the Fourier transform of the real-space crystal lattice. In principle the scattering factor S(q) can be used to determine the scattering from a perfect crystal; in the simple case when the basis is a single atom at the origin (and again neglecting all thermal motion, so that there is no need for averaging) all the atoms have identical environments. Equation (1) can be written as ψ_s(q) = f Σ_j exp(−i q·R_j), and S(q) = (1/N) |Σ_j exp(−i q·R_j)|^2. The structure factor is then simply the squared modulus of the Fourier transform of the lattice, and shows the directions in which scattering can have non-zero intensity. At these values of q the wave from every lattice point is in phase. The value of the structure factor is the same for all these reciprocal lattice points, and the intensity varies only due to changes in f with q. Units The units of the structure-factor amplitude depend on the incident radiation. For X-ray crystallography they are multiples of the unit of scattering by a single electron (2.82 × 10^−15 m); for neutron scattering by atomic nuclei the unit of scattering length of 10^−14 m is commonly used. The above discussion uses the wave vectors |k| = 2π/λ and q = k_s − k_i. However, crystallography often uses wave vectors of modulus 1/λ, so that the scattering vector has modulus 2 sin(θ/2)/λ. Therefore, when comparing equations from different sources, the factor 2π may appear and disappear, and care to maintain consistent quantities is required to get correct numerical results. Definition of F_hkℓ In crystallography, the basis and lattice are treated separately. For a perfect crystal the lattice gives the reciprocal lattice, which determines the positions (angles) of diffracted beams, and the basis gives the structure factor F_hkℓ which determines the amplitude and phase of the diffracted beams: F_hkℓ = Σ_j f_j exp(−2πi (h x_j + k y_j + ℓ z_j)) (8), where the sum is over all atoms in the unit cell, (x_j, y_j, z_j) are the positional coordinates of the j-th atom, and f_j is the scattering factor of the j-th atom. The coordinates have the directions and dimensions of the lattice vectors a_1, a_2, a_3. That is, (0,0,0) is at the lattice point, the origin of position in the unit cell; (1,0,0) is at the next lattice point along a_1 and (1/2, 1/2, 1/2) is at the body center of the unit cell.
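The sum in Equation (8) is straightforward to evaluate numerically. The sketch below does so for a BCC crystal described as simple cubic with a two-atom basis, reproducing the selection rule derived in the next section (the unit form factor is a placeholder; a real calculation would use tabulated, q-dependent atomic form factors):

```python
import numpy as np

def structure_factor(basis, h, k, l):
    """Evaluate F_hkl = sum_j f_j * exp(-2*pi*i*(h*x_j + k*y_j + l*z_j))
    for a basis given as a list of (f_j, (x, y, z)) in fractional coordinates."""
    return sum(f * np.exp(-2j * np.pi * (h * x + k * y + l * z))
               for f, (x, y, z) in basis)

# BCC described as simple cubic with a two-atom basis; f = 1 is a placeholder.
bcc = [(1.0, (0.0, 0.0, 0.0)), (1.0, (0.5, 0.5, 0.5))]

for hkl in [(1, 0, 0), (1, 1, 0), (1, 1, 1), (2, 0, 0)]:
    F = structure_factor(bcc, *hkl)
    print(hkl, abs(F))   # ~0 when h+k+l is odd, 2 when h+k+l is even
```

Swapping in the four-atom FCC basis or the diamond basis listed below reproduces those selection rules in the same way.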
The indices (hkℓ) define a reciprocal lattice point, q_hkℓ = h b_1 + k b_2 + ℓ b_3, which corresponds to the real-space plane defined by the Miller indices (see Bragg's law). F_hkℓ is the vector sum of waves from all atoms within the unit cell. An atom at any lattice point has the reference phase angle zero for all h, k, ℓ, since then h x_j + k y_j + ℓ z_j is always an integer. A wave scattered from an atom at (1/2, 0, 0) will be in phase if h is even, out of phase if h is odd. Again an alternative view using convolution can be helpful. Since [crystal structure] = [lattice] ∗ [basis], the Fourier transform gives FT[crystal structure] = FT[lattice] × FT[basis]; that is, scattering ∝ [reciprocal lattice] × [structure factor]. Examples of F_hkℓ in 3-D Body-centered cubic (BCC) For the body-centered cubic Bravais lattice (cI), we use the points (0, 0, 0) and (1/2, 1/2, 1/2), which leads us to F_hkℓ = f [1 + exp(−iπ(h + k + ℓ))] = f [1 + (−1)^(h+k+ℓ)], and hence F_hkℓ = 2f for h + k + ℓ even, and F_hkℓ = 0 for h + k + ℓ odd. Face-centered cubic (FCC) The FCC lattice is a Bravais lattice, and its Fourier transform is a body-centered cubic lattice. However to obtain F_hkℓ without this shortcut, consider an FCC crystal with one atom at each lattice point as a primitive or simple cubic lattice with a basis of 4 atoms, at the origin (0, 0, 0) and at the three adjacent face centers, (1/2, 1/2, 0), (0, 1/2, 1/2) and (1/2, 0, 1/2). Equation (8) becomes F_hkℓ = f [1 + exp(−iπ(h + k)) + exp(−iπ(k + ℓ)) + exp(−iπ(h + ℓ))], with the result F_hkℓ = 4f if h, k, ℓ are all even or all odd, and F_hkℓ = 0 if h, k, ℓ are of mixed parity. The most intense diffraction peak from a material that crystallizes in the FCC structure is typically the (111). Films of FCC materials like gold tend to grow in a (111) orientation with a triangular surface symmetry. A zero diffracted intensity for a group of diffracted beams (here, h, k, ℓ of mixed parity) is called a systematic absence. Diamond crystal structure The diamond cubic crystal structure occurs for example in diamond (carbon), tin, and most semiconductors. There are 8 atoms in the cubic unit cell. We can consider the structure as a simple cubic with a basis of 8 atoms, at positions (0,0,0), (1/2,1/2,0), (0,1/2,1/2), (1/2,0,1/2), (1/4,1/4,1/4), (3/4,3/4,1/4), (1/4,3/4,3/4) and (3/4,1/4,3/4). But comparing this to the FCC above, we see that it is simpler to describe the structure as FCC with a basis of two atoms at (0, 0, 0) and (1/4, 1/4, 1/4). For this basis, Equation (8) becomes: F_basis = f [1 + exp(−i(π/2)(h + k + ℓ))]. And then the structure factor for the diamond cubic structure is the product of this and the structure factor for FCC above, (only including the atomic form factor once) F_hkℓ = f [1 + exp(−iπ(h + k)) + exp(−iπ(k + ℓ)) + exp(−iπ(h + ℓ))] [1 + exp(−i(π/2)(h + k + ℓ))], with the result: If h, k, ℓ are of mixed parity (odd and even values combined) the first (FCC) term is zero, so |F_hkℓ|^2 = 0. If h, k, ℓ are all even or all odd then the first (FCC) term is 4; if h + k + ℓ is odd then |F_hkℓ|^2 = 32 f^2; if h + k + ℓ is even and exactly divisible by 4 (h + k + ℓ = 4n) then |F_hkℓ|^2 = 64 f^2; if h + k + ℓ is even but not exactly divisible by 4 (h + k + ℓ = 4n + 2) the second term is zero and |F_hkℓ|^2 = 0. These points are encapsulated by the following equations: |F_hkℓ|^2 = 64 f^2 for h, k, ℓ all even with h + k + ℓ = 4n; |F_hkℓ|^2 = 32 f^2 for h, k, ℓ all odd; |F_hkℓ|^2 = 0 otherwise (mixed parity, or all even with h + k + ℓ = 4n + 2), where n is an integer. Zincblende crystal structure The zincblende structure is similar to the diamond structure except that it is a compound of two distinct interpenetrating fcc lattices, rather than all the same element. Denoting the two elements in the compound by A and B, the resulting structure factor is F_hkℓ = [1 + exp(−iπ(h + k)) + exp(−iπ(k + ℓ)) + exp(−iπ(h + ℓ))] [f_A + f_B exp(−i(π/2)(h + k + ℓ))]. Cesium chloride Cesium chloride is a simple cubic crystal lattice with a basis of Cs at (0,0,0) and Cl at (1/2, 1/2, 1/2) (or the other way around, it makes no difference).
Equation (8) becomes F_hkℓ = f_Cs + f_Cl exp(−iπ(h + k + ℓ)) = f_Cs + f_Cl (−1)^(h+k+ℓ). We then arrive at the following result for the structure factor for scattering from a plane (hkℓ): F_hkℓ = f_Cs + f_Cl for h + k + ℓ even, and F_hkℓ = f_Cs − f_Cl for h + k + ℓ odd; and for the scattered intensity, |F_hkℓ|^2 = (f_Cs + f_Cl)^2 for h + k + ℓ even, and (f_Cs − f_Cl)^2 for h + k + ℓ odd. Hexagonal close-packed (HCP) In an HCP crystal such as graphite, the two coordinates include the origin (0, 0, 0) and the next plane up the c axis located at c/2, i.e. (1/3, 2/3, 1/2), and hence Equation (8) gives us F_hkℓ = f [1 + exp(2πi(h/3 + 2k/3 + ℓ/2))]. From this it is convenient to define the dummy variable p = h/3 + 2k/3 + ℓ/2, and from there consider the modulus squared: |F|^2 = f^2 (1 + exp(2πip))(1 + exp(−2πip)) = f^2 (2 + 2 cos(2πp)) = 4 f^2 cos^2(πp). This leads us to the following conditions for the structure factor: |F|^2 = 0 when ℓ is odd and h + 2k is divisible by 3; |F|^2 = 3f^2 when ℓ is odd and h + 2k is not divisible by 3; |F|^2 = 4f^2 when ℓ is even and h + 2k is divisible by 3; and |F|^2 = f^2 when ℓ is even and h + 2k is not divisible by 3. Perfect crystals in one and two dimensions The reciprocal lattice is easily constructed in one dimension: for particles on a line with a period a, the reciprocal lattice is an infinite array of points with spacing 2π/a. In two dimensions, there are only five Bravais lattices. The corresponding reciprocal lattices have the same symmetry as the direct lattice. 2-D lattices are excellent for demonstrating simple diffraction geometry on a flat screen, as below. Equations (1)–(7) for the structure factor apply with a scattering vector of limited dimensionality, and a crystallographic structure factor can be defined in 2-D as F_hk = Σ_j f_j exp(−2πi(h x_j + k y_j)). However, recall that real 2-D crystals such as graphene exist in 3-D. The reciprocal lattice of a 2-D hexagonal sheet that exists in 3-D space in the xy plane is a hexagonal array of lines parallel to the z or c axis that extend to ±∞ and intersect any plane of constant z in a hexagonal array of points. The Figure shows the construction of one vector of a 2-D reciprocal lattice and its relation to a scattering experiment. A parallel beam, with wave vector k_i, is incident on a square lattice of parameter a. The scattered wave is detected at a certain angle, which defines the wave vector of the outgoing beam, k_o (under the assumption of elastic scattering, |k_o| = |k_i|). One can equally define the scattering vector q = k_o − k_i and construct the harmonic pattern exp(i q·r). In the depicted example, the spacing of this pattern coincides with the distance between particle rows: q = 2π/a, so that contributions to the scattering from all particles are in phase (constructive interference). Thus, the total signal in direction q is strong, and q belongs to the reciprocal lattice. It is easily shown that this configuration fulfills Bragg's law. Imperfect crystals Technically a perfect crystal must be infinite, so a finite size is an imperfection. Real crystals always exhibit imperfections of their order besides their finite size, and these imperfections can have profound effects on the properties of the material. André Guinier proposed a widely employed distinction between imperfections that preserve the long-range order of the crystal, which he called disorder of the first kind, and those that destroy it, called disorder of the second kind. An example of the first is thermal vibration; an example of the second is some density of dislocations. The generally applicable structure factor S(q) can be used to include the effect of any imperfection. In crystallography, these effects are treated as separate from the structure factor F_hkℓ, so separate factors for size or thermal effects are introduced into the expressions for scattered intensity, leaving the perfect crystal structure factor unchanged. Therefore, a detailed description of these factors in crystallographic structure modeling and structure determination by diffraction is not appropriate in this article. Finite-size effects For a finite crystal, the sums in Equations (1)–(7) are now over a finite number of particles N. The effect is most easily demonstrated with a 1-D lattice of N points.
The sum of the phase factors is a geometric series and the structure factor becomes: S(q) = (1/N) |Σ_{j=0..N−1} exp(−iqja)|^2 = sin^2(Nqa/2)/(N sin^2(qa/2)) (9). This function is shown in the Figure for different values of N. When the scattering from every particle is in phase, which is when the scattering is at a reciprocal lattice point q = 2kπ/a, the sum of the amplitudes must be N and so the maxima in intensity are N. Taking the above expression for S(q) and estimating the limit q → 2kπ/a (using, for instance, L'Hôpital's rule) shows that S(q = 2kπ/a) = N, as seen in the Figure. At the midpoint between reciprocal lattice points, S(q = (2k + 1)π/a) = 1/N (by direct evaluation), and the peak width decreases like 1/N. In the large-N limit, the peaks become infinitely sharp Dirac delta functions, the reciprocal lattice of the perfect 1-D lattice. In crystallography, when F_hkℓ is used, N is large, and the formal size effect on diffraction is taken as sin^2(Nqa/2)/sin^2(qa/2), which is (apart from the normalization by N) the same as the expression for S(q) above near the reciprocal lattice points, q ≈ 2kπ/a. Using convolution, we can describe the finite real crystal structure as [lattice] ∗ [basis] × [rectangular function], where the rectangular function has a value 1 inside the crystal and 0 outside it. Then FT[crystal structure] = FT[lattice] × FT[basis] ∗ FT[rectangular function]; that is, scattering ∝ [reciprocal lattice] × [structure factor] ∗ [sinc function]. Thus the intensity, which is a delta function of position for the perfect crystal, becomes a sinc^2 function around every reciprocal lattice point, with a maximum proportional to N, a width proportional to 1/N, and an area independent of N. Disorder of the first kind This model for disorder in a crystal starts with the structure factor of a perfect crystal. In one dimension for simplicity and with N planes, we then start with the expression above for a perfect finite lattice, and then this disorder only changes S(q) by a multiplicative factor, to give S(q) = (1/N) [sin^2(Nqa/2)/sin^2(qa/2)] exp(−q^2⟨δx^2⟩), where the disorder is measured by the mean-square displacement ⟨δx^2⟩ of the positions x_j from their positions in a perfect one-dimensional lattice, x_j = ja + δx_j, where δx_j is a small (much less than a) random displacement. For disorder of the first kind, each random displacement δx_j is independent of the others and is defined with respect to a perfect lattice. Thus the displacements do not destroy the translational order of the crystal. This has the consequence that for infinite crystals (N → ∞) the structure factor still has delta-function Bragg peaks (the peak width still goes to zero as N → ∞) with this kind of disorder. However, it does reduce the amplitude of the peaks, and due to the factor of q^2 in the exponential factor, it reduces peaks at large q much more than peaks at small q. The structure is simply reduced by a q- and disorder-dependent term because all disorder of the first kind does is smear out the scattering planes, effectively reducing the form factor. In three dimensions the effect is the same, the structure is again reduced by a multiplicative factor, and this factor is often called the Debye–Waller factor. Note that the Debye–Waller factor is often ascribed to thermal motion, i.e., the δx_j are due to thermal motion, but any random displacements about a perfect lattice, not just thermal ones, will contribute to the Debye–Waller factor. Disorder of the second kind However, fluctuations that cause the correlations between pairs of atoms to decrease as their separation increases cause the Bragg peaks in the structure factor of a crystal to broaden. To see how this works, we consider a one-dimensional toy model: a stack of plates with mean spacing a. The derivation follows that in chapter 9 of Guinier's textbook. This model has been pioneered by, and applied to a number of materials by, Hosemann and collaborators over a number of years.
Guinier and Hosemann termed this disorder of the second kind, and Hosemann in particular referred to this imperfect crystalline ordering as paracrystalline ordering. Disorder of the first kind is the source of the Debye–Waller factor. To derive the model we start with the definition (in one dimension) of the structure factor: S(q) = (1/N) Σ_j Σ_k ⟨exp(−iq(x_j − x_k))⟩. To start with we will consider, for simplicity, an infinite crystal, i.e., N → ∞. We will consider a finite crystal with disorder of the second type below. For our infinite crystal, we want to consider pairs of lattice sites. For each plane of an infinite crystal, there are two neighbours m planes away, so the above double sum becomes a single sum over pairs of neighbours on either side of an atom, at positions −m and +m lattice spacings away, times N. So then S(q) = 1 + 2 Σ_{m=1..∞} ∫ p_m(Δx) cos(qΔx) dΔx, where p_m(Δx) is the probability density function for the separation Δx of a pair of planes m lattice spacings apart. For the separation of neighbouring planes we assume for simplicity that the fluctuations around the mean neighbour spacing of a are Gaussian, i.e., that p_1(Δx) = (1/√(2πσ^2)) exp(−(Δx − a)^2/(2σ^2)), and we also assume that the fluctuations between a plane and its neighbour, and between this neighbour and the next plane, are independent. Then p_2 is just the convolution of two p_1s, etc. As the convolution of two Gaussians is just another Gaussian, we have that p_m(Δx) = (1/√(2πmσ^2)) exp(−(Δx − ma)^2/(2mσ^2)). The sum in S(q) is then just a sum of Fourier transforms of Gaussians, and so S(q) = 1 + 2 Σ_{m=1..∞} r^m cos(mqa), for r = exp(−q^2σ^2/2). The sum is just the real part of the geometric sum Σ_{m=1..∞} [r exp(iqa)]^m, and so the structure factor of the infinite but disordered crystal is S(q) = (1 − r^2)/(1 − 2r cos(qa) + r^2). This has peaks at the maxima q_P = 2nπ/a, where cos(q_P a) = 1. These peaks have heights S(q_P) = (1 + r)/(1 − r) ≈ 4/(q_P^2 σ^2) = a^2/(π^2 n^2 σ^2), i.e., the height of successive peaks drops off as the square of the order of the peak (and so as q_P^2). Unlike finite-size effects that broaden peaks but do not decrease their height, disorder lowers peak heights. Note that here we are assuming that the disorder is relatively weak, so that we still have relatively well-defined peaks. This is the limit qσ ≪ 1, where r ≈ 1 − q^2σ^2/2. In this limit, near a peak we can approximate cos(qa) ≈ 1 − a^2(q − q_P)^2/2 and obtain S(q) ≈ S(q_P)/[1 + a^2(q − q_P)^2/ε^2], with ε = q_P^2σ^2/2, which is a Lorentzian or Cauchy function, of FWHM q_P^2σ^2/a, i.e., the FWHM increases as the square of the order of the peak, and so as the square of the wave vector q_P at the peak. Finally, the product of the peak height and the FWHM is constant and equals 4/a, in the qσ ≪ 1 limit. For the first few peaks where n is not large, this is just the qσ ≪ 1 limit. Finite crystals with disorder of the second kind For a one-dimensional crystal of size N: S(q) = 1 + (2/N) Σ_{m=1..N−1} (N − m) r^m cos(mqa), where the factor (N − m) comes from the fact that the sum is over nearest-neighbour pairs (m = 1), next-nearest neighbours (m = 2), ..., and for a crystal of N planes, there are N − 1 pairs of nearest neighbours, N − 2 pairs of next-nearest neighbours, etc. Liquids In contrast with crystals, liquids have no long-range order (in particular, there is no regular lattice), so the structure factor does not exhibit sharp peaks. They do however show a certain degree of short-range order, depending on their density and on the strength of the interaction between particles. Liquids are isotropic, so that, after the averaging operation in Equation (7), the structure factor only depends on the absolute magnitude of the scattering vector q = |q|.
For further evaluation, it is convenient to separate the diagonal terms j = k in the double sum, whose phase is identically zero and which therefore each contribute a unit constant: S(q) = 1 + (1/N) ⟨ Σ_{j≠k} exp(−i q·(R_j − R_k)) ⟩ (10). One can obtain an alternative expression for S(q) in terms of the radial distribution function g(r): S(q) = 1 + ρ ∫_V exp(−i q·r) g(r) dr (11). Ideal gas In the limiting case of no interaction, the system is an ideal gas and the structure factor is completely featureless: S(q) = 1, because there is no correlation between the positions R_j and R_k of different particles (they are independent random variables), so the off-diagonal terms in Equation (10) average to zero: ⟨exp(−i q·(R_j − R_k))⟩ = ⟨exp(−i q·R_j)⟩⟨exp(i q·R_k)⟩ = 0. High-q limit Even for interacting particles, at high scattering vector the structure factor goes to 1. This result follows from Equation (11), since S(q) − 1 is the Fourier transform of the "regular" function g(r) and thus goes to zero for high values of the argument q. This reasoning does not hold for a perfect crystal, where the distribution function exhibits infinitely sharp peaks. Low-q limit In the low-q limit, as the system is probed over large length scales, the structure factor contains thermodynamic information, being related to the isothermal compressibility χ_T of the liquid by the compressibility equation: lim_{q→0} S(q) = ρ k_B T χ_T. Hard-sphere liquids In the hard sphere model, the particles are described as impenetrable spheres with radius R; thus, their center-to-center distance r ≥ 2R and they experience no interaction beyond this distance. Their interaction potential can be written as: V(r) = ∞ for r < 2R, and V(r) = 0 for r ≥ 2R. This model has an analytical solution in the Percus–Yevick approximation. Although highly simplified, it provides a good description for systems ranging from liquid metals to colloidal suspensions. In an illustration, the structure factor for a hard-sphere fluid is shown in the Figure, for volume fractions from 1% to 40%. Polymers In polymer systems, the general definition (7) holds; the elementary constituents are now the monomers making up the chains. However, the structure factor being a measure of the correlation between particle positions, one can reasonably expect that this correlation will be different for monomers belonging to the same chain or to different chains. Let us assume that the volume V contains N identical molecules, each composed of n monomers, such that Nn = N_tot (n is also known as the degree of polymerization). We can rewrite Equation (7) as: S(q) = (1/(Nn)) ⟨ Σ_{α,β=1..N} Σ_{j,k=1..n} exp(−i q·(R_{αj} − R_{βk})) ⟩ = (1/(Nn)) ⟨ Σ_α Σ_{j,k} exp(−i q·(R_{αj} − R_{αk})) ⟩ + (1/(Nn)) ⟨ Σ_{α≠β} Σ_{j,k} exp(−i q·(R_{αj} − R_{βk})) ⟩ (12), where indices α, β label the different molecules and j, k the different monomers along each molecule. On the right-hand side we separated intramolecular (α = β) and intermolecular (α ≠ β) terms. Using the equivalence of the chains, Equation (12) can be simplified to S(q) = S_1(q) + (1/(Nn)) ⟨ Σ_{α≠β} Σ_{j,k} exp(−i q·(R_{αj} − R_{βk})) ⟩, where S_1(q) = (1/n) ⟨ Σ_{j,k=1..n} exp(−i q·(R_{1j} − R_{1k})) ⟩ is the single-chain structure factor. See also R-factor (crystallography) Patterson function Ornstein–Zernike equation Notes References Als-Nielsen, N. and McMorrow, D. (2011). Elements of Modern X-ray Physics (2nd edition). John Wiley & Sons. Guinier, A. (1963). X-ray Diffraction. In Crystals, Imperfect Crystals, and Amorphous Bodies. W. H. Freeman and Co. Chandler, D. (1987). Introduction to Modern Statistical Mechanics. Oxford University Press. Hansen, J. P. and McDonald, I. R. (2005). Theory of Simple Liquids (3rd edition). Academic Press. Teraoka, I. (2002). Polymer Solutions: An Introduction to Physical Properties. John Wiley & Sons. External links Structure Factor Tutorial located at the University of York. Definition of the structure factor by the IUCr Learning Crystallography, from the CSIC Crystallography
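As a numerical cross-check of the finite-size result, Equation (9), the sketch below compares the closed geometric-series form with the direct double-sum definition for a small 1-D lattice (the lattice spacing and N are arbitrary choices):

```python
import numpy as np

a, N = 1.0, 10                      # arbitrary lattice spacing and size
x = a * np.arange(N)                # perfect 1-D lattice positions

def S_direct(q):
    """S(q) = (1/N) |sum_j exp(-i q x_j)|^2, evaluated from the definition."""
    return np.abs(np.exp(-1j * q * x).sum()) ** 2 / N

def S_closed(q):
    """Geometric-series form: sin^2(N q a / 2) / (N sin^2(q a / 2))."""
    return np.sin(N * q * a / 2) ** 2 / (N * np.sin(q * a / 2) ** 2)

q = 2 * np.pi / a + 0.01            # just off the first reciprocal-lattice point
print(S_direct(q), S_closed(q))     # the two expressions agree
print(S_direct(2 * np.pi / a + 1e-9))  # ~N at the Bragg position, as derived
```

Increasing N in this script shows the peak height growing as N and the peak width shrinking as 1/N, the behaviour described in the finite-size section.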
Structure factor
Physics,Chemistry,Materials_science,Engineering
4,890
65,991,877
https://en.wikipedia.org/wiki/Development%20Assessment%20Panels
Development Assessment Panels are independent decision-making bodies with the power to determine high-value development applications in Western Australia. The panels contain five members: three industry professionals and two elected members of the local government. The purpose of the panels is to introduce more consistent decision-making into the determination of development applications and to refocus the attention of elected members in local governments on higher-level strategic planning and policy matters. The panels share characteristics of panels set up by other Australian states for similar reasons. Background Development Assessment Panels were introduced to Western Australia in 2011 by the Barnett government with support from the opposition Labor party. The removal of decision-making power from elected councillors was opposed by the Western Australian Local Government Association, the Local Government Planners Association and some members of the community. During 2022-23, DAPs determined 270 applications for development approval. Controversy Community opposition Most DAP approvals were uncontroversial; however, since 2011, several projects have received DAP approval over strong community objections and resulting media coverage. This included an approval for the Lumiere apartment development in South Perth, which was overturned following an appeal to the Supreme Court in 2016. Scrap the DAP Scrap the DAP was a community campaign during the 2017 Western Australian state election advocating the abolition of independent Development Assessment Panels. The campaign was supported by 21 of the 38 metropolitan local government councils and opposed by groups associated with the property and development industry. The movement was not endorsed by either major party, although community concerns were acknowledged. Following the election, many of the individuals associated with the movement have continued to advocate for the reform or abolition of the panels under various other slogans and community associations. No council has passed a motion condemning the DAP process since 2017. References Western Australia Urban planning Perth, Western Australia Political campaigns
Development Assessment Panels
Engineering
359
160,224
https://en.wikipedia.org/wiki/Interface%20description%20language
An interface description language or interface definition language (IDL) is a generic term for a language that lets a program or object written in one language communicate with another program written in an unknown language. IDLs are usually used to describe data types and interfaces in a language-independent way, for example, between those written in C++ and those written in Java. IDLs are commonly used in remote procedure call software. In these cases the machines at either end of the link may be using different operating systems and computer languages. IDLs offer a bridge between the two different systems. Software systems based on IDLs include Sun's ONC RPC, The Open Group's Distributed Computing Environment, IBM's System Object Model, the Object Management Group's CORBA (which implements OMG IDL, an IDL based on DCE/RPC) and Data Distribution Service, Mozilla's XPCOM, Microsoft's Microsoft RPC (which evolved into COM and DCOM), Facebook's Thrift and WSDL for Web services. Examples AIDL: Java-based, for Android; supports local and remote procedure calls, and can be accessed from native applications by calling through the Java Native Interface (JNI) Apache Thrift: from Apache, originally developed by Facebook Avro IDL: for the Apache Avro system ASN.1 Cap'n Proto: created by a former maintainer of Protocol Buffers, avoids some of that format's perceived shortcomings Concise Data Definition Language (CDDL, RFC 8610): a notation for CBOR and JSON data structures CortoScript: describes data and/or interfaces for systems that require semantic interoperability Etch: Cisco's cross-platform service description language Extensible Data Notation (EDN): Clojure data format, similar to JSON FlatBuffers: serialization format from Google supporting zero-copy deserialization Franca IDL: the open-source Franca interface definition language FIDL: interface description language for the Fuchsia operating system, designed for writing app components in C, C++, Dart, Go and Rust IDL specification language: the original Interface Description Language IPL: Imandra Protocol Language JSON Web-Service Protocol (JSON-WSP) Lightweight Imaging Device Interface Language Microsoft Interface Definition Language (MIDL): the Microsoft extension of OMG IDL to add support for the Component Object Model (COM) and Distributed Component Object Model (DCOM) OMG IDL: standardized by the Object Management Group, used in CORBA (for DCE/RPC services) and DDS (for data modeling), also selected by the W3C for exposing the DOM of XML, HTML, and CSS documents OpenAPI Specification: a standard for Web APIs, used by Swagger and other technologies Open Service Interface Definitions Protocol Buffers: Google's IDL RESTful Service Description Language (RSDL) Smithy: an AWS-invented protocol-agnostic interface definition language.
Examples
AIDL: Java-based, for Android; supports local and remote procedure calls, and can be accessed from native applications by calling through the Java Native Interface (JNI)
Apache Thrift: from Apache, originally developed by Facebook
Avro IDL: for the Apache Avro system
ASN.1
Cap'n Proto: created by a former maintainer of Protocol Buffers; avoids some of that format's perceived shortcomings
Concise Data Definition Language (CDDL, RFC 8610): a notation for CBOR and JSON data structures
CortoScript: describes data and/or interfaces for systems that require semantic interoperability
Etch: Cisco's cross-platform service description language
Extensible Data Notation (EDN): Clojure data format, similar to JSON
FlatBuffers: serialization format from Google supporting zero-copy deserialization
Franca IDL: the open-source Franca interface definition language
FIDL: interface description language for the Fuchsia operating system, designed for writing app components in C, C++, Dart, Go and Rust
IDL specification language: the original Interface Description Language
IPL: Imandra Protocol Language
JSON Web-Service Protocol (JSON-WSP)
Lightweight Imaging Device Interface Language
Microsoft Interface Definition Language (MIDL): the Microsoft extension of OMG IDL to add support for the Component Object Model (COM) and Distributed Component Object Model (DCOM)
OMG IDL: standardized by the Object Management Group, used in CORBA (for DCE/RPC services) and DDS (for data modeling), also selected by the W3C for exposing the DOM of XML, HTML, and CSS documents
OpenAPI Specification: a standard for Web APIs, used by Swagger and other technologies
Open Service Interface Definitions
Protocol Buffers: Google's IDL
RESTful Service Description Language (RSDL)
Smithy: an AWS-invented protocol-agnostic interface definition language
Slice: Specification Language for the Internet Communications Engine (Ice)
Universal Network Objects: OpenOffice.org's component model
Web Application Description Language (WADL)
Web IDL by WHATWG: can be used to describe interfaces that are intended to be implemented in web browsers
Web Services Description Language (WSDL)
XCB: X protocol description language for the X Window System
Cross Platform Interface Description Language (XPIDL): Mozilla's way to specify XPCOM interfaces
See also
Component-based software engineering
Interface-based programming
Java Interface Definition Language
List of computing and IT abbreviations
Universal Interface Language
User interface markup language
References
External links
Documenting Software Architecture: Documenting Interfaces (PDF)
OMG Specification of OMG IDL
OMG Tutorial on OMG IDL
Data modeling languages Remote procedure call Specification languages Domain-specific programming languages
Interface description language
Engineering
793
30,356,480
https://en.wikipedia.org/wiki/CaSNP
CaSNP is a database for storing data about copy number alterations from SNP arrays for different types of cancer. References External links https://web.archive.org/web/20110719204256/http://cistrome.dfci.harvard.edu/CaSNP/ Genetics databases Cancer research Genetics Microarrays
CaSNP
Chemistry,Materials_science,Biology
75
63,940,707
https://en.wikipedia.org/wiki/Avian%20sleep
In birds, sleep consists of "periods of eye closure interrupted by short periods of eye-opening." During the short periods of eye-opening, electroencephalographic (EEG) studies indicate that the birds are still sleeping; the voltage level in the brain is identical. Birds restore their arousal thresholds during sleep. During their short eye-open periods, sleeping birds can mobilize almost instantaneously when threatened by a predator. Avian species have been found to rely on flock size and perch height as precautions against predators. Between the brief eye-openings and group sleep, these precautions allow sleep to be both beneficial and safe. The amount of sleep necessary to function can vary by species. Pectoral sandpipers migrate from the Southern Hemisphere to the Arctic Circle, their mating ground, where they breed during daylight. Since the sandpipers are polygamous, they mate (or search for a mate) for the duration of daylight. Males do not require as much sleep during this time; some have been observed to give up 95 percent of their sleep time during the nineteen mating days. Most act similarly to sleep-deprived humans, which can put them in potentially life-threatening situations or slow their migration speed. Comparative anatomy of avian brain and nervous system The typical avian nervous system is similar to that of mammals. The central nervous system includes the brain and spinal cord, and a peripheral nervous system consists of nerves and sensory organs. Key attributes have evolved compared to other species, especially vision; avian visual capabilities are believed to be more advanced than those of any other group of vertebrates. In addition to larger eyes, birds have larger-than-average optic lobes. With a larger, more intricate optic lobe, some bird species can view the ultraviolet (UV) spectrum (beyond the visual range of the human eye). This UV visual capability facilitates hunting, as seen in nighthawks. A UV-sensitive cone opsin is typically responsible for the avian ability to see UV, but some species have circumvented this; owls can see UV light despite lacking these opsins, compensating with enzymes that allow heightened rod sensitivity. UV vision is found in several other animal groups, including cats and insects (where it appears to have evolved in response to predator-prey relationships). Trade-offs in anatomy and physiology are common, and this is seen in the olfactory lobes of most avian species. Possibly due to the larger-than-average optic lobes, avian olfactory lobes are relatively small; few bird species use smell to find food. Falcons and eagles do not tend to have larger cerebellums for flying. According to comparative neuroanatomy researcher Ludwig Edinger, avian brains consist mostly of basal ganglia (responsible for instinctive behavior, rather than behavioral plasticity). Scientists have challenged some of Edinger's findings, and called for the renaming of avian nervous-system organs to reflect their similarity to those of mammals. REM and slow-wave sleep Avian sleep shares two similarities with that of mammals: rapid eye movement (REM) and slow-wave sleep (SWS). REM sleep is believed to have an important effect on motor functions and memory storage. EEGs show low-amplitude, high-frequency waves during REM sleep; SWS tends toward higher-amplitude, lower-frequency waves, and is believed to be a form of deep sleep. During SWS, membrane potentials in the neurons of the neocortex oscillate slowly.
A number of avian species exhibit unihemispheric slow-wave sleep: the ability to rest one half of the brain in SWS while the other half appears to be awake. This type of sleep has also been seen in dolphins and whales. The organism is typically able to keep one eye open during this process, which allows added vigilance in high-predation environments. The evolution of this trait in birds and aquatic mammals is of interest to researchers because of the pressures involved. Unihemispheric SWS is thought to have evolved in aquatic mammals because they must return to the surface for oxygen; in birds it is believed to help avoid predation, demonstrating homoplasy in the two groups. Dove experiment In a study of how the Barbary dove's sleep patterns are affected by flock size, D. W. Lendrum set out to show that larger flocks reduce the need for individual vigilance, and that the apparent increase in predation risk in a smaller flock would harm the doves' sleep cycle. At the beginning of the study, the doves were caged alone or in flocks of two, three or six. They were then placed in one of two environments. In the calm environment, Lendrum walked alone past the cage between 10 am and noon; in the aggressive environment, Lendrum walked past the cage with a domesticated ferret at the same time of day. Lendrum discovered that the birds in the calm environment spent substantially more time with their eyes closed than those in the aggressive environment. Lendrum collected data on the doves' opened- and closed-eye sleep; flocking was associated with an increase in a bird's overall eye-closure time and a decrease in its amount of eye-opening. In the presence of a predator, Lendrum found that the doves exhibited higher levels of individual vigilance and an increase in open-eye sleep; this reduced the active-sleep component of their total sleep time. Perch height Predators are believed to play a large role in an organism's sleeping patterns. To adapt to predation, two common techniques have evolved: positioning oneself out of harm's way while sleeping, and sleeping more lightly (such as unihemispheric sleep). In birds, perch height is believed to play a significant role in sleep; lower perch height has been shown to reduce the number and length of REM sleep episodes in pigeons, and a higher perch increases REM sleep and decreases slow-wave sleep. Findings also suggest that the time pigeons spend awake increases when they nest on lower perches. Lower perch height correlates with a higher risk of predation; REM sleep would place the pigeon in more danger, since it is a less reactive form of sleep. Light pollution Light is one of the more common threats to sufficient sleep for birds living in anthropogenic environments, where it is known as "artificial light at night" (ALAN). ALAN eliminates darkness, a necessity for rest. Disrupting the birds' light and dark cycles can impact circadian rhythms, eventually harming sleep patterns. Biologist Thomas Raap conducted a study which suggested that exposure to ALAN affected the sleep behavior of Eurasian blue tits (Cyanistes caeruleus). In this study, birds exposed to ALAN woke up earlier, and functions such as seasonal timekeeping were disrupted. Because light usually indicates a day's passage to birds, exposure to light pollution disrupts their ability to measure the length of a day. Outside densely-populated areas, there is normally about a five-percent drop in sleep duration for blue-tit females during their nesting period.
The researchers found a 50-percent reduction in the females' sleep duration during this period in urban centers, and suggested that the effects of ALAN were responsible. References Sleep Birds Bird behavior
Avian sleep
Biology
1,485
32,271,526
https://en.wikipedia.org/wiki/Clp%20protease%20family
In molecular biology, the Clp protease family is a family of serine peptidases belonging to the MEROPS peptidase family S14 (ClpP endopeptidase family, clan SK). ClpP is an ATP-dependent protease that cleaves a number of proteins, such as casein and albumin. It exists as a heterodimer of an ATP-binding regulatory A subunit and a catalytic P subunit, both of which are required for effective levels of protease activity in the presence of ATP, although the P subunit alone does possess some catalytic activity. Proteases highly similar to ClpP have been found to be encoded in the genomes of bacteria and some viruses, in the mitochondria of metazoa, and in the chloroplasts of plants. A number of the proteins in this family are classified as non-peptidase homologues, as they have been found experimentally to be without peptidase activity or to lack amino acid residues that are believed to be essential for catalytic activity. Mutations in mitochondrial CLPP are associated with Perrault syndrome and cause a variety of molecular defects, from the loss of ATPase docking to the activation or inhibition of peptidase activity. See also Endopeptidase Clp ATP-dependent Clp protease proteolytic subunit References External links MEROPS family S14 Protein complexes Protein families
Clp protease family
Biology
295
32,237
https://en.wikipedia.org/wiki/UMTS
The Universal Mobile Telecommunications System (UMTS) is a 3G mobile cellular system for networks based on the GSM standard. Developed and maintained by the 3GPP (3rd Generation Partnership Project), UMTS is a component of the International Telecommunication Union IMT-2000 standard set and compares with the CDMA2000 standard set for networks based on the competing cdmaOne technology. UMTS uses wideband code-division multiple access (W-CDMA) radio access technology to offer greater spectral efficiency and bandwidth to mobile network operators. UMTS specifies a complete network system, which includes the radio access network (UMTS Terrestrial Radio Access Network, or UTRAN), the core network (Mobile Application Part, or MAP) and the authentication of users via SIM (subscriber identity module) cards. The technology described in UMTS is sometimes also referred to as Freedom of Mobile Multimedia Access (FOMA) or 3GSM. Unlike EDGE (IMT Single-Carrier, based on GSM) and CDMA2000 (IMT Multi-Carrier), UMTS requires new base stations and new frequency allocations. Features UMTS supports theoretical maximum data transfer rates of 42 Mbit/s when Evolved HSPA (HSPA+) is implemented in the network. Users in deployed networks can expect a transfer rate of up to 384 kbit/s for Release '99 (R99) handsets (the original UMTS release), and 7.2 Mbit/s for High-Speed Downlink Packet Access (HSDPA) handsets in the downlink connection. These speeds are significantly faster than the 9.6 kbit/s of a single GSM error-corrected circuit-switched data channel, multiple 9.6 kbit/s channels in High-Speed Circuit-Switched Data (HSCSD) and 14.4 kbit/s for cdmaOne channels. Since 2006, UMTS networks in many countries have been or are in the process of being upgraded with High-Speed Downlink Packet Access (HSDPA), sometimes known as 3.5G. Currently, HSDPA enables downlink transfer speeds of up to 21 Mbit/s. Work is also progressing on improving the uplink transfer speed with High-Speed Uplink Packet Access (HSUPA). The 3GPP LTE standard succeeds UMTS and initially provided 4G speeds of 100 Mbit/s down and 50 Mbit/s up, with scalability up to 3 Gbit/s, using a next-generation air interface technology based upon orthogonal frequency-division multiplexing. The first national consumer UMTS networks launched in 2002 with a heavy emphasis on telco-provided mobile applications such as mobile TV and video calling. The high data speeds of UMTS are now most often utilised for Internet access: experience in Japan and elsewhere has shown that user demand for video calls is not high, and telco-provided audio/video content has declined in popularity in favour of high-speed access to the World Wide Web, either directly on a handset or connected to a computer via Wi-Fi, Bluetooth or USB. Air interfaces UMTS combines three different terrestrial air interfaces, GSM's Mobile Application Part (MAP) core, and the GSM family of speech codecs. The air interfaces are called UMTS Terrestrial Radio Access (UTRA). All air interface options are part of ITU's IMT-2000. In the currently most popular variant for cellular mobile telephones, W-CDMA (IMT Direct Spread) is used. It is also called the "Uu interface", as it links User Equipment to the UMTS Terrestrial Radio Access Network. The terms W-CDMA, TD-CDMA and TD-SCDMA can be misleading: while they suggest covering just a channel access method (namely a variant of CDMA), they are actually the common names for the whole air interface standards.
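To put the peak rates quoted in the Features section above side by side, the short Python sketch below compares nominal transfer times for a file across the GSM CSD, R99, HSDPA and HSPA+ figures. These are theoretical link rates only; real-world throughput is lower, and the 5 MB file size is an arbitrary stand-in chosen for illustration.

bearers_kbit_s = {
    "GSM CSD (9.6 kbit/s)": 9.6,
    "UMTS R99 (384 kbit/s)": 384,
    "HSDPA (7.2 Mbit/s)": 7_200,
    "HSPA+ (42 Mbit/s)": 42_000,
}

FILE_BYTES = 5 * 1024 * 1024  # a hypothetical 5 MB download

for name, rate in bearers_kbit_s.items():
    seconds = FILE_BYTES * 8 / (rate * 1000)  # bits / (bits per second)
    print(f"{name}: {seconds:,.0f} s")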
W-CDMA (UTRA-FDD) W-CDMA (WCDMA; Wideband Code-Division Multiple Access), also known as UMTS-FDD, UTRA-FDD, or IMT-2000 CDMA Direct Spread, is an air interface standard found in 3G mobile telecommunications networks. It supports conventional cellular voice, text and MMS services, but can also carry data at high speeds, allowing mobile operators to deliver higher-bandwidth applications including streaming and broadband Internet access. W-CDMA uses the DS-CDMA channel access method with a pair of 5 MHz-wide channels. In contrast, the competing CDMA2000 system uses one or more available 1.25 MHz channels for each direction of communication. W-CDMA systems are widely criticized for their large spectrum usage, which delayed deployment in countries that acted relatively slowly in allocating new frequencies specifically for 3G services (such as the United States). The specific frequency bands originally defined by the UMTS standard are 1885–2025 MHz for the mobile-to-base (uplink) and 2110–2200 MHz for the base-to-mobile (downlink). In the US, 1710–1755 MHz and 2110–2155 MHz are used instead, as the 1900 MHz band was already in use. While UMTS2100 is the most widely deployed UMTS band, some countries' UMTS operators use the 850 MHz (900 MHz in Europe) and/or 1900 MHz bands (independently, meaning uplink and downlink are within the same band), notably in the US by AT&T Mobility, in New Zealand by Telecom New Zealand on the XT Mobile Network and in Australia by Telstra on the Next G network. Some carriers, such as T-Mobile, use band numbers to identify the UMTS frequencies. For example, Band I (2100 MHz), Band IV (1700/2100 MHz), and Band V (850 MHz). UMTS-FDD is an acronym for Universal Mobile Telecommunications System (UMTS) frequency-division duplexing (FDD), a 3GPP standardized version of UMTS networks that makes use of frequency-division duplexing for duplexing over a UMTS Terrestrial Radio Access (UTRA) air interface. W-CDMA is the basis of Japan's NTT DoCoMo's FOMA service and the most commonly used member of the Universal Mobile Telecommunications System (UMTS) family, and is sometimes used as a synonym for UMTS. It uses the DS-CDMA channel access method and the FDD duplexing method to achieve higher speeds and support more users compared to most previously used time-division multiple access (TDMA) and time-division duplex (TDD) schemes. While not an evolutionary upgrade on the air side, it uses the same core network as the 2G GSM networks deployed worldwide, allowing dual-mode mobile operation along with GSM/EDGE; a feature it shares with other members of the UMTS family. Development In the late 1990s, W-CDMA was developed by NTT DoCoMo as the air interface for their 3G network FOMA. Later, NTT DoCoMo submitted the specification to the International Telecommunication Union (ITU) as a candidate for the international 3G standard known as IMT-2000. The ITU eventually accepted W-CDMA as part of the IMT-2000 family of 3G standards, as an alternative to CDMA2000, EDGE, and the short-range DECT system. Later, W-CDMA was selected as an air interface for UMTS. As NTT DoCoMo did not wait for the finalisation of the 3G Release 99 specification, their network was initially incompatible with UMTS. This was later resolved by NTT DoCoMo updating their network.
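The "wideband" in W-CDMA refers to the way each user's relatively slow bit stream is spread across the whole 5 MHz carrier by a much faster chipping sequence (3.84 Mcps in UTRA-FDD). A rough figure of merit is the processing gain, the ratio of chip rate to user bit rate. The Python sketch below computes it for a few representative bearer rates; the figures are illustrative only and ignore channel-coding overhead.

import math

CHIP_RATE = 3.84e6  # UTRA-FDD chip rate, in chips per second

def processing_gain_db(bit_rate_bps):
    # Processing gain = chip rate / user bit rate, expressed in dB.
    return 10 * math.log10(CHIP_RATE / bit_rate_bps)

for label, rate in [("12.2 kbit/s AMR voice", 12_200),
                    ("64 kbit/s CS data", 64_000),
                    ("384 kbit/s R99 packet data", 384_000)]:
    print(f"{label}: {processing_gain_db(rate):.1f} dB")

The higher the gain, the more interference a link can tolerate, which is one reason a single 5 MHz W-CDMA carrier can mix low-rate voice users with high-rate data users.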
Code-division multiple access communication networks have been developed by a number of companies over the years, but development of cell-phone networks based on CDMA (prior to W-CDMA) was dominated by Qualcomm, the first company to succeed in developing a practical and cost-effective CDMA implementation for consumer cell phones; its early IS-95 air interface standard has evolved into the current CDMA2000 (IS-856/IS-2000) standard. Qualcomm created an experimental wideband CDMA system called CDMA2000 3x which unified the W-CDMA (3GPP) and CDMA2000 (3GPP2) network technologies into a single design for a worldwide standard air interface. Compatibility with CDMA2000 would have beneficially enabled roaming on existing networks beyond Japan, since Qualcomm CDMA2000 networks are widely deployed, especially in the Americas, with coverage in 58 countries. However, divergent requirements resulted in the W-CDMA standard being retained and deployed globally. W-CDMA has since become the dominant technology, with 457 commercial networks in 178 countries as of April 2012. Several CDMA2000 operators have even converted their networks to W-CDMA for international roaming compatibility and a smooth upgrade path to LTE. Despite incompatibility with existing air-interface standards, late introduction and the high upgrade cost of deploying an all-new transmitter technology, W-CDMA has become the dominant standard. Rationale for W-CDMA W-CDMA transmits on a pair of 5 MHz-wide radio channels, while CDMA2000 transmits on one or several pairs of 1.25 MHz radio channels. Though W-CDMA does use a direct-sequence CDMA transmission technique like CDMA2000, W-CDMA is not simply a wideband version of CDMA2000; it differs in many aspects from CDMA2000. From an engineering point of view, W-CDMA provides a different balance of trade-offs between cost, capacity, performance, and density; it also promises reduced cost for video-phone handsets. W-CDMA may also be better suited for deployment in the very dense cities of Europe and Asia. However, hurdles remain, and cross-licensing of patents between Qualcomm and W-CDMA vendors has not eliminated possible patent issues due to the features of W-CDMA which remain covered by Qualcomm patents. W-CDMA has been developed into a complete set of specifications: a detailed protocol that defines how a mobile phone communicates with the tower, how signals are modulated, and how datagrams are structured, with system interfaces specified to allow free competition on technology elements. Deployment The world's first commercial W-CDMA service, FOMA, was launched by NTT DoCoMo in Japan in 2001. Elsewhere, W-CDMA deployments are usually marketed under the UMTS brand. W-CDMA has also been adapted for use in satellite communications on the U.S. Mobile User Objective System, using geosynchronous satellites in place of cell towers. J-Phone Japan (once Vodafone and now SoftBank Mobile) soon followed by launching their own W-CDMA-based service, originally branded "Vodafone Global Standard" and claiming UMTS compatibility. The name of the service was changed to "Vodafone 3G" (now "SoftBank 3G") in December 2004. Beginning in 2003, Hutchison Whampoa gradually launched their upstart UMTS networks. Most countries have, since the ITU approved the 3G mobile service, either "auctioned" the radio frequencies to the company willing to pay the most, or conducted a "beauty contest", asking the various companies to present what they intend to commit to if awarded the licences.
This strategy has been criticised for aiming to drain the cash of operators to the brink of bankruptcy in order to honour their bids or proposals. Most licences carry a time constraint for the rollout of the service, under which a certain coverage must be achieved by a given date or the licence will be revoked. Vodafone launched several UMTS networks in Europe in February 2004. MobileOne of Singapore commercially launched its 3G (W-CDMA) services in February 2005. Launches followed in New Zealand in August 2005 and in Australia in October 2005. AT&T Mobility utilized a UMTS network, with HSPA+, from 2005 until its shutdown in February 2022. In Canada, Rogers launched HSDPA in the Toronto Golden Horseshoe district in March 2007 on W-CDMA at 850/1900 MHz, with plans to launch the service commercially in the top 25 cities by October 2007. TeliaSonera opened W-CDMA service in Finland on October 13, 2004, with speeds up to 384 kbit/s, available only in main cities and priced at approximately €2/MB. SK Telecom and KTF, the two largest mobile phone service providers in South Korea, each started offering W-CDMA service in December 2003. Due to poor coverage and a lack of choice in handsets, the W-CDMA service barely made a dent in the Korean market, which was dominated by CDMA2000. By October 2006, both companies were covering more than 90 cities, and SK Telecom announced that it would provide nationwide coverage for its W-CDMA network in order to offer SBSM (Single Band Single Mode) handsets by the first half of 2007. KT Freecel would thus cut funding for its CDMA2000 network development to the minimum. In Norway, Telenor introduced W-CDMA in major cities by the end of 2004, while their competitor, NetCom, followed suit a few months later. Both operators have 98% national coverage on EDGE, but Telenor also operated parallel WLAN roaming networks on GSM, with which the UMTS service competed; for this reason, Telenor dropped support for its WLAN service in Austria in 2006. Maxis Communications and Celcom, two mobile phone service providers in Malaysia, started offering W-CDMA services in 2005. In Sweden, Telia introduced W-CDMA in March 2004. UTRA-TDD UMTS-TDD, an acronym for Universal Mobile Telecommunications System (UMTS) time-division duplexing (TDD), is a 3GPP standardized version of UMTS networks that use UTRA-TDD. UTRA-TDD is a UTRA that uses time-division duplexing for duplexing. While a full implementation of UMTS, it is mainly used to provide Internet access in circumstances similar to those where WiMAX might be used. UMTS-TDD is not directly compatible with UMTS-FDD: a device designed to use one standard cannot, unless specifically designed to, work on the other, because of the difference in air interface technologies and frequencies used. It is more formally known as IMT-2000 CDMA-TDD or IMT-2000 Time-Division (IMT-TD). The two UMTS air interfaces (UTRAs) for UMTS-TDD are TD-CDMA and TD-SCDMA. Both air interfaces use a combination of two channel access methods, code-division multiple access (CDMA) and time-division multiple access (TDMA): the frequency band is divided into time slots (TDMA), which are further divided into channels using CDMA spreading codes. These air interfaces are classified as TDD, because time slots can be allocated to either uplink or downlink traffic. TD-CDMA (UTRA-TDD 3.84 Mcps High Chip Rate (HCR)) TD-CDMA, an acronym for Time-Division Code-Division Multiple Access, is a channel-access method based on using spread-spectrum multiple access (CDMA) across multiple time slots (TDMA).
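Because TDD carves the carrier into time slots rather than paired frequency bands, the uplink/downlink split is simply a slot-allocation choice. Using the frame structure described for UTRA-TDD HCR below (10 ms frames of fifteen slots), the Python sketch that follows shows how an asymmetric allocation translates into airtime shares; the 10:5 profile is a hypothetical example, not a standardized configuration.

FRAME_MS = 10.0        # UTRA-TDD frame duration
SLOTS_PER_FRAME = 15   # time slots per frame (1500 slots per second)

def tdd_airtime_split(downlink_slots):
    """Return (downlink, uplink) airtime fractions for a slot allocation."""
    if not 0 <= downlink_slots <= SLOTS_PER_FRAME:
        raise ValueError("slot count must be between 0 and 15")
    uplink_slots = SLOTS_PER_FRAME - downlink_slots
    return (downlink_slots / SLOTS_PER_FRAME,
            uplink_slots / SLOTS_PER_FRAME)

down, up = tdd_airtime_split(10)  # e.g. a downlink-heavy browsing profile
print(f"slot duration: {FRAME_MS / SLOTS_PER_FRAME:.3f} ms")
print(f"downlink {down:.0%} / uplink {up:.0%}")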
TD-CDMA is the channel access method for UTRA-TDD HCR, which is an acronym for UMTS Terrestrial Radio Access-Time Division Duplex High Chip Rate. UMTS-TDD's air interfaces that use the TD-CDMA channel access technique are standardized as UTRA-TDD HCR, which uses increments of 5 MHz of spectrum, each slice divided into 10 ms frames containing fifteen time slots (1500 per second). The time slots (TS) are allocated in fixed percentages for downlink and uplink. TD-CDMA is used to multiplex streams from or to multiple transceivers. Unlike W-CDMA, it does not need separate frequency bands for up- and downstream, allowing deployment in tight frequency bands. TD-CDMA is a part of IMT-2000, defined as IMT-TD Time-Division (IMT CDMA TDD), and is one of the three UMTS air interfaces (UTRAs), as standardized by the 3GPP in UTRA-TDD HCR. UTRA-TDD HCR is closely related to W-CDMA, and provides the same types of channels where possible. UMTS's HSDPA/HSUPA enhancements are also implemented under TD-CDMA. In the United States, the technology has been used for public safety and government use in New York City and a few other areas. In Japan, IPMobile planned to provide TD-CDMA service in 2006, but the launch was delayed, the technology was changed to TD-SCDMA, and the company went bankrupt before the service officially started. TD-SCDMA (UTRA-TDD 1.28 Mcps Low Chip Rate (LCR)) Time-Division Synchronous Code-Division Multiple Access (TD-SCDMA) or UTRA TDD 1.28 Mcps low chip rate (UTRA-TDD LCR) is an air interface found in UMTS mobile telecommunications networks in China as an alternative to W-CDMA. TD-SCDMA uses the TDMA channel access method combined with an adaptive synchronous CDMA component on 1.6 MHz slices of spectrum, allowing deployment in even tighter frequency bands than TD-CDMA. It is standardized by the 3GPP and also referred to as "UTRA-TDD LCR". However, the main incentive for development of this Chinese standard was avoiding or reducing the license fees that have to be paid to non-Chinese patent owners. Unlike the other air interfaces, TD-SCDMA was not part of UMTS from the beginning but was added in Release 4 of the specification. Like TD-CDMA, TD-SCDMA is known as IMT CDMA TDD within IMT-2000. The term "TD-SCDMA" is misleading: while it suggests covering only a channel access method, it is actually the common name for the whole air interface specification. TD-SCDMA / UMTS-TDD (LCR) networks are incompatible with W-CDMA / UMTS-FDD and TD-CDMA / UMTS-TDD (HCR) networks. Objectives TD-SCDMA was developed in the People's Republic of China by the Chinese Academy of Telecommunications Technology (CATT), Datang Telecom, and Siemens AG in an attempt to avoid dependence on Western technology. This is likely primarily for practical reasons, since other 3G formats require the payment of patent fees to a large number of Western patent holders. TD-SCDMA proponents also claim it is better suited for densely populated areas. Further, it is supposed to cover all usage scenarios, whereas W-CDMA is optimised for symmetric traffic and macro cells, while TD-CDMA is best used in low-mobility scenarios within micro or pico cells. TD-SCDMA is based on spread-spectrum technology, which makes it unlikely that it will be able to completely escape the payment of license fees to Western patent holders. The launch of a national TD-SCDMA network was initially projected for 2005, but the technology only reached large-scale commercial trials, with 60,000 users across eight cities, in 2008. On January 7, 2009, China granted a TD-SCDMA 3G licence to China Mobile.
On September 21, 2009, China Mobile officially announced that it had 1,327,000 TD-SCDMA subscribers as of the end of August 2009. TD-SCDMA is not commonly used outside of China. Technical highlights TD-SCDMA uses TDD, in contrast to the FDD scheme used by W-CDMA. By dynamically adjusting the number of timeslots used for downlink and uplink, the system can more easily accommodate asymmetric traffic with different data rate requirements on downlink and uplink than FDD schemes can. Since it does not require paired spectrum for downlink and uplink, spectrum allocation flexibility is also increased. Using the same carrier frequency for uplink and downlink also means that the channel condition is the same in both directions, and the base station can deduce the downlink channel information from uplink channel estimates, which is helpful for the application of beamforming techniques. TD-SCDMA also uses TDMA in addition to the CDMA used in W-CDMA. This reduces the number of users in each timeslot, which reduces the implementation complexity of multiuser detection and beamforming schemes, but the non-continuous transmission also reduces coverage (because of the higher peak power needed) and mobility (because of the lower power-control frequency), and complicates radio resource management algorithms. The "S" in TD-SCDMA stands for "synchronous", which means that uplink signals are synchronized at the base station receiver, achieved by continuous timing adjustments. This reduces the interference between users of the same timeslot using different codes by improving the orthogonality between the codes, therefore increasing system capacity, at the cost of some hardware complexity in achieving uplink synchronization. History On January 20, 2006, the Ministry of Information Industry of the People's Republic of China formally announced TD-SCDMA as the country's standard for 3G mobile telecommunications. On February 15, 2006, a timeline for deployment of the network in China was announced, stating that pre-commercial trials would take place after completion of a number of test networks in select cities. These trials ran from March to October 2006, but the results were apparently unsatisfactory. In early 2007, the Chinese government instructed the dominant cellular carrier, China Mobile, to build commercial trial networks in eight cities, and the two fixed-line carriers, China Telecom and China Netcom, to build one each in two other cities. Construction of these trial networks was scheduled to finish during the fourth quarter of 2007, but delays meant that construction was not complete until early 2008. The standard has been adopted by 3GPP since Rel-4, where it is known as the "UTRA TDD 1.28 Mcps Option". On March 28, 2008, China Mobile Group announced TD-SCDMA "commercial trials" for 60,000 test users in eight cities from April 1, 2008. Networks using the other 3G standards (W-CDMA and CDMA2000 EV-DO) had still not been launched in China, as these were delayed until TD-SCDMA was ready for commercial launch. In January 2009, the Ministry of Industry and Information Technology (MIIT) in China took the unusual step of assigning licences for three different third-generation mobile phone standards to three carriers, in a long-awaited step that was expected to prompt $41 billion in spending on new equipment. The Chinese-developed standard, TD-SCDMA, was assigned to China Mobile, the world's biggest phone carrier by subscribers. That appeared to be an effort to make sure the new system had the financial and technical backing to succeed.
Licences for two existing 3G standards, W-CDMA and CDMA2000 1xEV-DO, were assigned to China Unicom and China Telecom, respectively. Third-generation, or 3G, technology supports Web surfing, wireless video and other services, and the start of service was expected to spur new revenue growth. The technical split by MIIT has hampered the performance of China Mobile in the 3G market, with users and China Mobile engineers alike pointing to the lack of suitable handsets to use on the network. Deployment of base stations has also been slow, resulting in a lack of improvement of service for users. The network connection itself has consistently been slower than that of the other two carriers, leading to a sharp decline in market share. By 2011, China Mobile had already moved its focus onto TD-LTE. Gradual closures of TD-SCDMA stations started in 2016. Frequency bands and deployments The following is a list of mobile telecommunications networks using third-generation TD-SCDMA / UMTS-TDD (LCR) technology. Unlicensed UMTS-TDD In Europe, CEPT allocated the 2010–2020 MHz range for a variant of UMTS-TDD designed for unlicensed, self-provided use. Some telecom groups and jurisdictions have proposed withdrawing this service in favour of licensed UMTS-TDD, due to lack of demand and lack of development of a UMTS-TDD air interface technology suitable for deployment in this band. Comparison with UMTS-FDD Ordinary UMTS uses UTRA-FDD as an air interface and is known as UMTS-FDD. UMTS-FDD uses W-CDMA for multiple access and frequency-division duplexing for duplexing, meaning that the uplink and downlink transmit on different frequencies. UMTS is usually transmitted on frequencies assigned for 1G, 2G, or 3G mobile telephone service in the countries of operation. UMTS-TDD uses time-division duplexing, allowing the uplink and downlink to share the same spectrum. This allows the operator to more flexibly divide the usage of available spectrum according to traffic patterns. For ordinary phone service, the uplink and downlink would be expected to carry approximately equal amounts of data (because every phone call needs a voice transmission in either direction), but Internet-oriented traffic is more frequently one-way. For example, when browsing a website, the user will send short commands to the server, but the server will send whole files, generally larger than those commands, in response. UMTS-TDD tends to be allocated frequencies intended for mobile/wireless Internet services rather than used on existing cellular frequencies. This is, in part, because TDD duplexing is not normally allowed on cellular, PCS/PCN, and 3G frequencies. TDD technologies open up the usage of left-over unpaired spectrum. Europe-wide, several bands are provided either specifically for UMTS-TDD or for similar technologies: between 1900 MHz and 1920 MHz, and between 2010 MHz and 2025 MHz. In several countries the 2500–2690 MHz band (also known as MMDS in the USA) has been used for UMTS-TDD deployments. Additionally, spectrum around the 3.5 GHz range has been allocated in some countries, notably Britain, in a technology-neutral environment. In the Czech Republic, UMTS-TDD is also used in a frequency range around 872 MHz. Deployment UMTS-TDD has been deployed for public and/or private networks in at least nineteen countries around the world, with live systems in, amongst other countries, Australia, the Czech Republic, France, Germany, Japan, New Zealand, Botswana, South Africa, the UK, and the USA.
Deployments in the US thus far have been limited. The technology has been selected for a public-safety support network used by emergency responders in New York, but outside of some experimental systems, notably one from Nextel, the WiMAX standard appears to have gained greater traction as a general mobile Internet access system. Competing standards A variety of Internet-access systems exist which provide broadband-speed access to the Internet. These include WiMAX and HIPERMAN. UMTS-TDD has the advantages of being able to use an operator's existing UMTS/GSM infrastructure, should it have one, and of including UMTS modes optimized for circuit switching should the operator want, for example, to offer telephone service. UMTS-TDD's performance is also more consistent. However, UMTS-TDD deployers often have regulatory problems with taking advantage of some of the services UMTS compatibility provides. For example, the UMTS-TDD spectrum in the UK cannot be used to provide telephone service, though the regulator OFCOM is discussing the possibility of allowing it at some point in the future. Few operators considering UMTS-TDD have existing UMTS/GSM infrastructure. Additionally, the WiMAX and HIPERMAN systems provide significantly larger bandwidths when the mobile station is near the tower. As with most mobile Internet access systems, many users who might otherwise choose UMTS-TDD will find their needs covered by the ad hoc collection of unconnected Wi-Fi access points at many restaurants and transportation hubs, and/or by the Internet access already provided by their mobile phone operator. By comparison, UMTS-TDD (and systems like WiMAX) offers mobile, and more consistent, access than the former, and generally faster access than the latter. Radio access network UMTS also specifies the Universal Terrestrial Radio Access Network (UTRAN), which is composed of multiple base stations, possibly using different terrestrial air interface standards and frequency bands. UMTS and GSM/EDGE can share a Core Network (CN), making UTRAN an alternative radio access network to GERAN (GSM/EDGE RAN), and allowing (mostly) transparent switching between the RANs according to available coverage and service needs. Because of that, UMTS's and GSM/EDGE's radio access networks are sometimes collectively referred to as UTRAN/GERAN. UMTS networks are often combined with GSM/EDGE, the latter of which is also a part of IMT-2000. The UE (User Equipment) interface of the RAN (Radio Access Network) primarily consists of the RRC (Radio Resource Control), PDCP (Packet Data Convergence Protocol), RLC (Radio Link Control) and MAC (Media Access Control) protocols. The RRC protocol handles connection establishment, measurements, radio bearer services, security and handover decisions. The RLC protocol divides into three modes: Transparent Mode (TM), Unacknowledged Mode (UM) and Acknowledged Mode (AM). The functionality of the AM entity resembles TCP operation, whereas UM operation resembles UDP operation. In TM mode, data is sent to the lower layers without adding any header to the SDUs of the higher layers. MAC handles the scheduling of data on the air interface depending on parameters configured by the higher layer (RRC). The set of properties related to data transmission is called a Radio Bearer (RB). This set of properties decides the maximum allowed data in a TTI (Transmission Time Interval). An RB includes RLC information and RB mapping. RB mapping decides the mapping between RB<->logical channel<->transport channel.
Signaling messages are sent on Signaling Radio Bearers (SRBs) and data packets (either CS or PS) are sent on data RBs. RRC and NAS messages go on SRBs. Security includes two procedures: integrity and ciphering. Integrity validates the source of messages and also makes sure that no one (a third or unknown party) on the radio interface has modified the messages. Ciphering ensures that no one can listen to data on the air interface. Both integrity and ciphering are applied to SRBs, whereas only ciphering is applied to data RBs. Core network With Mobile Application Part, UMTS uses the same core network standard as GSM/EDGE. This allows a simple migration for existing GSM operators. However, the migration path to UMTS is still costly: while much of the core infrastructure is shared with GSM, the cost of obtaining new spectrum licenses and overlaying UMTS at existing towers is high. The CN can be connected to various backbone networks, such as the Internet or an Integrated Services Digital Network (ISDN) telephone network. UMTS (and GERAN) include the three lowest layers of the OSI model. The network layer (OSI layer 3) includes the Radio Resource Management protocol (RRM) that manages the bearer channels between the mobile terminals and the fixed network, including the handovers. Frequency bands and channel bandwidths UARFCN A UARFCN (abbreviation for UTRA Absolute Radio Frequency Channel Number, where UTRA stands for UMTS Terrestrial Radio Access) is used to identify a frequency in the UMTS frequency bands. Typically, the channel number is derived from the frequency in MHz through the formula Channel Number = Frequency × 5. However, this is only able to represent channels that are centered on a multiple of 200 kHz, which do not align with licensing in North America; 3GPP added several special values for the common North American channels.
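A minimal Python sketch of that rule follows. The general mapping and the 200 kHz raster come straight from the formula above; the Band I downlink frequency used in the example is just a convenient illustration, and the special North American channel numbers are deliberately left out.

def uarfcn(freq_mhz):
    """General UARFCN rule: channel number = frequency in MHz x 5."""
    channel = freq_mhz * 5
    # Only frequencies on the 200 kHz raster map to a plain channel number;
    # 3GPP defines additional special values for the North American bands.
    if abs(channel - round(channel)) > 1e-9:
        raise ValueError("frequency is not centered on a 200 kHz multiple")
    return round(channel)

print(uarfcn(2112.8))  # -> 10564, a Band I downlink channel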
Spectrum allocation Over 130 licenses have already been awarded to operators worldwide (as of December 2004), specifying W-CDMA radio access technology that builds on GSM. In Europe, the license process occurred at the tail end of the technology bubble, and the auction mechanisms for allocation set up in some countries resulted in some extremely high prices being paid for the original 2100 MHz licenses, notably in the UK and Germany. In Germany, bidders paid a total of €50.8 billion for six licenses, two of which were subsequently abandoned and written off by their purchasers (Mobilcom and the Sonera/Telefónica consortium). It has been suggested that these huge license fees have the character of a very large tax paid on future income expected many years down the road. In any event, the high prices paid put some European telecom operators close to bankruptcy (most notably KPN). In the following years, some operators wrote off some or all of the license costs. Between 2007 and 2009, all three Finnish carriers began to use 900 MHz UMTS in a shared arrangement with the surrounding 2G GSM base stations for rural area coverage, a trend expected to expand over Europe. The 2100 MHz band (downlink around 2100 MHz and uplink around 1900 MHz) allocated for UMTS in Europe and most of Asia is already used in North America. The 1900 MHz range is used for 2G (PCS) services, and the 2100 MHz range is used for satellite communications. Regulators have, however, freed up some of the 2100 MHz range for 3G services, together with a different range around 1700 MHz for the uplink. AT&T Wireless launched UMTS services in the United States by the end of 2004, strictly using the existing 1900 MHz spectrum allocated for 2G PCS services. Cingular acquired AT&T Wireless in 2004 and has since launched UMTS in select US cities. Cingular renamed itself AT&T Mobility and rolled out UMTS at 850 MHz in some cities to enhance its existing UMTS network at 1900 MHz, and now offers subscribers a number of dual-band UMTS 850/1900 phones. T-Mobile's rollout of UMTS in the US was originally focused on the 1700 MHz band. However, T-Mobile has been moving users from 1700 MHz to 1900 MHz (PCS) in order to reallocate the spectrum to 4G LTE services. In Canada, UMTS coverage is provided on the 850 MHz and 1900 MHz bands on the Rogers and Bell-Telus networks; Bell and Telus share the network. Recently, the new providers Wind Mobile, Mobilicity and Videotron have begun operations in the 1700 MHz band. In 2008, the Australian telco Telstra replaced its existing CDMA network with a national UMTS-based 3G network, branded as Next G, operating in the 850 MHz band. Telstra currently provides UMTS service on this network, and also on the 2100 MHz UMTS network, through co-ownership of the owning and administrating company 3GIS. This company is also co-owned by Hutchison 3G Australia, and this is the primary network used by their customers. Optus is currently rolling out a 3G network operating on the 2100 MHz band in cities and most large towns, and on the 900 MHz band in regional areas. Vodafone is also building a 3G network using the 900 MHz band. In India, BSNL started its 3G services in October 2009, beginning with the larger cities and then expanding to smaller cities. The 850 MHz and 900 MHz bands provide greater coverage compared to equivalent 1700/1900/2100 MHz networks, and are best suited to regional areas where greater distances separate base station and subscriber. Carriers in South America are now also rolling out 850 MHz networks. Interoperability and global roaming UMTS phones (and data cards) are highly portable: they have been designed to roam easily onto other UMTS networks (if the providers have roaming agreements in place). In addition, almost all UMTS phones are UMTS/GSM dual-mode devices, so if a UMTS phone travels outside of UMTS coverage during a call, the call may be transparently handed off to available GSM coverage. Roaming charges are usually significantly higher than regular usage charges. Most UMTS licensees consider ubiquitous, transparent global roaming an important issue. To enable a high degree of interoperability, UMTS phones usually support several different frequencies in addition to their GSM fallback. Different countries support different UMTS frequency bands: Europe initially used 2100 MHz, while most carriers in the USA use 850 MHz and 1900 MHz. T-Mobile has launched a network in the US operating at 1700 MHz (uplink) / 2100 MHz (downlink), and these bands have also been adopted elsewhere in the US and in Canada and Latin America. A UMTS phone and network must support a common frequency to work together. Because of the frequencies used, early models of UMTS phones designated for the United States will likely not be operable elsewhere, and vice versa. There are now 11 different frequency combinations used around the world, including frequencies formerly used solely for 2G services. UMTS phones can use a Universal Subscriber Identity Module, USIM (based on GSM's SIM card), and also work (including UMTS services) with GSM SIM cards.
This is a global standard of identification, and enables a network to identify and authenticate the (U)SIM in the phone. Roaming agreements between networks allow for calls to a customer to be redirected to them while roaming and determine the services (and prices) available to the user. In addition to user subscriber information and authentication information, the (U)SIM provides storage space for phone book contacts. Handsets can store their data in their own memory or on the (U)SIM card (which is usually more limited in its phone book contact information). A (U)SIM can be moved to another UMTS or GSM phone, and the phone will take on the user details of the (U)SIM, meaning it is the (U)SIM (not the phone) which determines the phone number of the phone and the billing for calls made from the phone. Japan was the first country to adopt 3G technologies, and since the Japanese operators had not used GSM previously, they had no need to build GSM compatibility into their handsets, and their 3G handsets were smaller than those available elsewhere. In 2002, NTT DoCoMo's FOMA 3G network was the first commercial UMTS network. Using a pre-release specification, it was initially incompatible with the UMTS standard at the radio level but used standard USIM cards, meaning USIM-card-based roaming was possible (transferring the USIM card into a UMTS or GSM phone when travelling). Both NTT DoCoMo and SoftBank Mobile (which launched 3G in December 2002) now use standard UMTS. Handsets and modems All of the major 2G phone manufacturers (that are still in business) are now manufacturers of 3G phones. The early 3G handsets and modems were specific to the frequencies required in their country, which meant they could only roam to other countries on the same 3G frequency (though they could fall back to the older GSM standard). Canada and the USA share common frequencies, as do most European countries. The article UMTS frequency bands is an overview of UMTS network frequencies around the world. Using a cellular router, PCMCIA or USB card, customers are able to access 3G broadband services, regardless of their choice of computer (such as a tablet PC or a PDA). Some software installs itself from the modem, so that in some cases almost no technical knowledge is required to get online in moments. Using a phone that supports 3G and Bluetooth 2.0, multiple Bluetooth-capable laptops can be connected to the Internet. Some smartphones can also act as a mobile WLAN access point. There are very few 3G phones or modems available supporting all 3G frequencies (UMTS 850/900/1700/1900/2100 MHz). In 2010, Nokia released a range of phones with pentaband 3G coverage, including the N8 and E7. Many other phones offer more than one band, which still enables extensive roaming. For example, Apple's iPhone 4 contains a quadband chipset operating on 850/900/1900/2100 MHz, allowing usage in the majority of countries where UMTS-FDD is deployed. Other competing standards The main competitor to UMTS is CDMA2000 (IMT-MC), which is developed by the 3GPP2. Unlike UMTS, CDMA2000 is an evolutionary upgrade to an existing 2G standard, cdmaOne, and is able to operate within the same frequency allocations. This and CDMA2000's narrower bandwidth requirements make it easier to deploy in existing spectra. In some, but not all, cases, existing GSM operators only have enough spectrum to implement either UMTS or GSM, not both. For example, in the US D, E, and F PCS spectrum blocks, the amount of spectrum available is 5 MHz in each direction.
A standard UMTS system would saturate that spectrum. Where CDMA2000 is deployed, it usually co-exists with UMTS. In many markets, however, the co-existence issue is of little relevance, as legislative hurdles exist to co-deploying two standards in the same licensed slice of spectrum. Another competitor to UMTS is EDGE (IMT-SC), which is an evolutionary upgrade to the 2G GSM system, leveraging existing GSM spectrum. It is also much easier, quicker, and considerably cheaper for wireless carriers to "bolt on" EDGE functionality by upgrading their existing GSM transmission hardware to support EDGE than to install almost all brand-new equipment to deliver UMTS. However, as EDGE is developed by 3GPP just as UMTS is, it is not a true competitor. Instead, it is used as a temporary solution preceding UMTS roll-out or as a complement for rural areas. This is facilitated by the fact that GSM/EDGE and UMTS specifications are jointly developed and rely on the same core network, allowing dual-mode operation including vertical handovers. China's TD-SCDMA standard is often seen as a competitor, too. TD-SCDMA has been added to UMTS's Release 4 as UTRA-TDD 1.28 Mcps Low Chip Rate (UTRA-TDD LCR). Unlike TD-CDMA (UTRA-TDD 3.84 Mcps High Chip Rate, UTRA-TDD HCR), which complements W-CDMA (UTRA-FDD), it is suitable for both micro and macro cells. However, the lack of vendor support is preventing it from being a real competitor. While DECT is technically capable of competing with UMTS and other cellular networks in densely populated urban areas, it has only been deployed for domestic cordless phones and private in-house networks. All of these competitors have been accepted by the ITU as part of the IMT-2000 family of 3G standards, along with UMTS-FDD. On the Internet access side, competing systems include WiMAX and Flash-OFDM. Migrating from GSM/GPRS to UMTS From a GSM/GPRS network, the following network elements can be reused: Home Location Register (HLR), Visitor Location Register (VLR), Equipment Identity Register (EIR), Mobile Switching Center (MSC), Gateway Mobile Switching Center (GMSC), Authentication Center (AUC), Serving GPRS Support Node (SGSN) and Gateway GPRS Support Node (GGSN). From a GSM/GPRS radio network, the following elements cannot be reused: Base transceiver station (BTS), Base station controller (BSC) and Packet Control Unit (PCU). They can remain in the network and be used in dual-network operation, where 2G and 3G networks co-exist while the network migration proceeds and new 3G terminals become available for use in the network. The UMTS network introduces new network elements that function as specified by 3GPP: Node B (base transceiver station), Radio Network Controller (RNC) and Media Gateway (MGW). The functionality of the MSC changes when moving to UMTS. In a GSM system, the MSC handles all the circuit-switched operations, such as connecting A- and B-subscribers through the network. In UMTS, the Media Gateway (MGW) takes care of data transfer in circuit-switched networks, and the MSC controls MGW operations. Problems and issues Some countries, including the United States, have allocated spectrum differently from the ITU recommendations, so that the standard bands most commonly used for UMTS (UMTS-2100) have not been available. In those countries, alternative bands are used, preventing the interoperability of existing UMTS-2100 equipment and requiring the design and manufacture of different equipment for use in these markets.
As is the case with GSM900 today, standard UMTS 2100 MHz equipment will not work in those markets. However, it appears as though UMTS is not suffering as much from handset band-compatibility issues as GSM did, as many UMTS handsets are multi-band in both UMTS and GSM modes. Penta-band (850, 900, 1700, 1900, and 2100 MHz), quad-band GSM (850, 900, 1800, and 1900 MHz) and tri-band UMTS (850, 1900, and 2100 MHz) handsets are becoming more commonplace. In its early days, UMTS had problems in many countries: overweight handsets with poor battery life were first to arrive on a market highly sensitive to weight and form factor. The Motorola A830, a debut handset on Hutchison's 3 network, weighed more than 200 grams and even featured a detachable camera to reduce handset weight. Another significant issue involved call reliability, related to problems with handover from UMTS to GSM. Customers found their connections being dropped as handovers were possible only in one direction (UMTS → GSM), with the handset only changing back to UMTS after hanging up. In most networks around the world this is no longer an issue. Compared to GSM, UMTS networks initially required a higher base station density. For fully fledged UMTS incorporating video-on-demand features, one base station needed to be set up every 1–1.5 km (0.62–0.93 mi). This was the case when only the 2100 MHz band was being used; however, with the growing use of lower-frequency bands (such as 850 and 900 MHz) this is no longer so, which has led to increasing rollout of the lower-band networks by operators since 2006. Even with current technologies and low-band UMTS, telephony and data over UMTS require more power than on comparable GSM networks. Apple Inc. cited UMTS power consumption as the reason that the first-generation iPhone only supported EDGE. Their release of the iPhone 3G quotes talk time on UMTS as half that available when the handset is set to use GSM. Other manufacturers indicate different battery lifetimes for UMTS mode compared to GSM mode as well. As battery and network technology improve, this issue is diminishing. Security issues As early as 2008, it was known that carrier networks can be used to surreptitiously gather user location information. In August 2014, the Washington Post reported on widespread marketing of surveillance systems using Signalling System No. 7 (SS7) protocols to locate callers anywhere in the world. In December 2014, news broke that SS7's own functions can be repurposed for surveillance because of its relaxed security, making it possible to listen to calls in real time, to record encrypted calls and texts for later decryption, or to defraud users and cellular carriers. Deutsche Telekom and Vodafone declared the same day that they had fixed gaps in their networks, but said that the problem is global and can only be fixed with a telecommunication-system-wide solution. Releases The evolution of UMTS progresses according to planned releases. Each release is designed to introduce new features and improve upon existing ones.
Release '99
Bearer services: 64 kbit/s circuit switched, 384 kbit/s packet switched
Location services
Call service: compatible with Global System for Mobile Communications (GSM), based on the Universal Subscriber Identity Module (USIM)
Voice quality features: Tandem Free Operation
Frequency: 2.1 GHz
Release 4
EDGE radio
Multimedia messaging
MExE (Mobile Execution Environment)
Improved location services
IP Multimedia Services (IMS)
TD-SCDMA (UTRA-TDD 1.28 Mcps low chip rate)
Release 5
IP Multimedia Subsystem (IMS)
IPv6, IP transport in UTRAN
Improvements in GERAN, MExE, etc.
HSDPA
Release 6
WLAN integration
Multimedia broadcast and multicast
Improvements in IMS
HSUPA
Fractional DPCH
Release 7
Enhanced L2
64 QAM, MIMO
Voice over HSPA
CPC (continuous packet connectivity)
FRLC (Flexible RLC)
Release 8
Dual-Cell HSDPA
Release 9
Dual-Cell HSUPA
See also
List of UMTS networks
Long Term Evolution, the 3GPP 4G successor to UMTS and CDMA2000
GAN/UMA: a standard for running GSM and UMTS over wireless LANs
Opportunity-Driven Multiple Access (ODMA): a UMTS TDD mode communications relaying protocol
HSDPA, HSUPA: updates to the W-CDMA air interface
PDCP
Subscriber Identity Module
UMTS-TDD: a variant of UMTS largely used to provide wireless Internet service
UMTS frequency bands
UMTS channels
W-CDMA: the primary air interface standard used by UMTS
TD-SCDMA
Other, non-UMTS, 3G and 4G standards
CDMA2000: evolved from cdmaOne (also known as IS-95 or "CDMA"), managed by the 3GPP2
FOMA
WiMAX
GSM
GPRS
EDGE
ETSI
Other information
Cellular frequencies
CDMA
Comparison of wireless data standards
DECT
Dynamic TDMA
Evolution-Data Optimized/CDMA2000
FOMA
GSM/EDGE
HSPA
PN sequences
Spectral efficiency comparison table
UMTS frequency bands
WiMAX
Telecommunications industry in China
Communications in China
Standardization in China
Mobile modem
Code-Division Multiple Access (CDMA)
Common pilot channel (CPICH), a simple synchronisation channel in WCDMA
Multiple-input multiple-output (MIMO), a major topic of multiple-antenna research
Wi-Fi: a local area wireless technology that is complementary to UMTS
List of device bandwidths
Operations and Maintenance Centre
Radio Network Controller
UMTS security
Huawei SingleRAN: a RAN technology allowing migration from GSM to UMTS or simultaneous use of both
References
Citations
Bibliography
Martin Sauter: Communication Systems for the Mobile Information Society, John Wiley, September 2006.
Ahonen and Barrett (editors), Services for UMTS (Wiley, 2002), the first book on services for 3G.
Holma and Toskala (editors), WCDMA for UMTS (Wiley, 2000), the first book dedicated to 3G technology.
Kreher and Ruedebusch, UMTS Signaling: UMTS Interfaces, Protocols, Message Flows and Procedures Analyzed and Explained (Wiley, 2007).
Laiho, Wacker and Novosad, Radio Network Planning and Optimization for UMTS (Wiley, 2002), the first book on radio network planning for 3G.
Muratore, Flavio. UMTS: Mobile Communications for the Future. John Wiley & Sons, Inc., 2000.
Documentation
3GPP specification series 25: Radio aspects of 3G, including UMTS
TS 25.201 Physical Layer General Description (describes basic differences between FDD and TDD)
TS 25.211 Physical channels and mapping of transport channels onto physical channels (FDD) TS 25.221 Physical channels and mapping of transport channels onto physical channels (TDD) TS 25.212 Multiplexing and channel coding (FDD) TS 25.222 Multiplexing and channel coding (TDD) TS 25.213 Spreading and modulation (FDD) TS 25.223 Spreading and modulation (TDD) TS 25.214 Physical layer procedures (FDD) TS 25.224 Physical layer procedures (TDD) TS 25.215 Physical layer Measurements (FDD) TS 25.225 Physical layer Measurements (TDD) External links 3gpp.org 3rd Generation Partnership Project Standard 3GPP Specifications Numbering Schemes Vocabulary for 3GPP Specifications, up to Release 8 UMTS LTE Link Budget Comparison UMTS FAQ on UMTS World Worldwide W-CDMA frequency allocations on UMTS World UMTS TDD Alliance The Global UMTS TDD Alliance 3GSM World Congress UMTS Provider Chart LTE Encyclopedia TD-SCDMA Forum TD-SCDMA Industry Alliance UMTS FAQ Telecommunications-related introductions in 2002 3GPP standards Bandplans Metropolitan area networks Mobile telecommunications Mobile telecommunications standards Network access UMTS Videotelephony Wireless networking
UMTS
Technology,Engineering
11,163
57,578,297
https://en.wikipedia.org/wiki/NGC%203312
NGC 3312 is a large and highly inclined spiral galaxy located about 194 million light-years away in the constellation Hydra. The galaxy was discovered by astronomer John Herschel on March 26, 1835, and was independently observed again by astronomer Guillaume Bigourdan on February 26, 1887. NGC 3312 was later listed and equated with IC 629 because the two objects share essentially the same celestial coordinates. NGC 3312 is the largest spiral galaxy in the Hydra Cluster and is also classified as a LINER galaxy. Physical characteristics NGC 3312 appears to be highly distorted, with sharp dust lanes. There are sharp filamentary extensions to the north and an internal ringlike structure in the galaxy. The interstellar matter in the galaxy also appears highly disturbed. These features led astronomer de Vaucouleurs to suggest that NGC 3312 was distorted by the giant elliptical galaxies NGC 3309 and NGC 3311, which are the dominant ellipticals in the Hydra Cluster. However, NGC 3309 and NGC 3311 are too distant, and their relative velocity differences too large, for either elliptical to cause the filamentary extensions observed in NGC 3312. It is more likely that NGC 3312 is interacting with the intracluster medium, causing ram-pressure stripping of the galaxy's interstellar medium. This may have produced the observed filamentary extensions, as suggested by the galaxy's location near the cluster core. As a result, this galaxy can be considered part of a rare class of galaxies known as jellyfish galaxies. Star formation Although NGC 3312's morphological structure resembles that of anemic galaxies, its mean surface-brightness profile hints that star formation may be quite active. The northwest filamentary extension from NGC 3312 has high surface brightness and the knotted texture characteristic of active star-forming regions in spiral arms. Also, the internal dust lane of NGC 3312 is ringed with bright condensations. Radio source NGC 3312 contains a strong unresolved radio source in its core with a flux density of 27 mJy, and radio emission in the disk with a flux density of 24 mJy, mostly confined to the spiral arms of the galaxy or regions of star formation. See also List of NGC objects (3001–4000) Messier 90 NGC 4921 References External links Hydra Cluster Hydra (constellation) Peculiar galaxies LINER galaxies Unbarred spiral galaxies 3312 31513 IC objects Astronomical objects discovered in 1835 Discoveries by John Herschel
NGC 3312
Astronomy
501
45,560,295
https://en.wikipedia.org/wiki/Hybridization%20assay
A hybridization assay comprises any form of quantifiable hybridization, i.e. the quantitative annealing of two complementary strands of nucleic acids, known as nucleic acid hybridization. Overview In the context of biochemistry and drug development, a hybridization assay is a type of Ligand Binding Assay (LBA) used to quantify nucleic acids in biological matrices. Hybridization assays can be in solution or on a solid support such as 96-well plates or labelled beads. Hybridization assays use labelled nucleic acid probes to identify related DNA or RNA molecules (i.e. with a significantly high degree of sequence similarity) within a complex mixture of unlabelled nucleic acid molecules. Antisense therapy, siRNA, and other oligonucleotide and nucleic acid based biotherapeutics can be quantified with hybridization assays. Signalling in hybridization methods can be performed using oligonucleotide probes modified in-synthesis with haptens and small molecule ligands, which act analogously to the capture and detection antibodies. As with traditional ELISA, conjugates to horseradish peroxidase (HRP) or alkaline phosphatase (AP) can be used as secondary antibodies. Sandwich hybridization assay In the sandwich hybridization ELISA assay format, the antigen ligand and antibodies in ELISA are replaced with a nucleic acid analyte and complementary oligonucleotide capture and detection probes. Generally, in the case of nucleic acid hybridization, monovalent salt concentration and temperature are controlled for hybridization and wash stringency, contrary to a traditional ELISA, where the salt concentration will usually be fixed for the binding and wash steps (i.e. PBS or TBS). Thus, the optimal salt concentration in hybridization assays varies depending on the length and base composition of the analyte, capture and detection probes. Competitive hybridization assay The competitive hybridization assay is similar to a traditional competitive immunoassay. Like other hybridization assays, it relies on complementarity, where the capture probe competes between the analyte and the tracer, a labelled oligonucleotide analogue of the analyte. Hybridization-ligation assay In the hybridization-ligation assay a template probe replaces the capture probe in the sandwich assay for immobilization to the solid support. The template probe is fully complementary to the oligonucleotide analyte and is intended to serve as a substrate for T4 DNA ligase-mediated ligation. The template probe also has an additional stretch complementary to a ligation probe, so that the ligation probe can be ligated onto the 3'-end of the analyte. Albeit generic, the ligation probe is similar to a detection probe in that it is labelled with, for example, digoxigenin for downstream signalling. A stringent, low/no-salt wash removes un-ligated products. The ligation of the analyte to the ligation probe makes the method specific for the 3'-end of the analyte, ligation by T4 DNA ligase being much less efficient over a bulge loop, which would happen for a 3' metabolite N-1 version of the analyte, for example. The specificity of the hybridization-ligation assay for ligation at the 3'-end is particularly relevant because the predominant nucleases in blood are 3' to 5' exonucleases. One limitation of the method is that it requires a free 3'-end hydroxyl, which may not be available when targeting moieties are attached to the 3'-end, for example. 
Further, more exotic nucleic acid chemistries used in oligonucleotide drugs may affect the activity of the ligase, which needs to be determined on a case-by-case basis. Dual ligation hybridization assay The dual ligation hybridization assay (DLA) extends the specificity of the hybridization-ligation assay to a method specific for the parent compound. Despite the hybridization-ligation assay's robustness, sensitivity and added specificity for the 3'-end of the oligonucleotide analyte, it is not specific for the 5'-end of the analyte. The DLA is intended to quantify the full-length, parent oligonucleotide compound only, with both intact 5' and 3' ends. DLA probes are ligated at the 5' and 3' ends of the analyte by the joint action of T4 DNA ligase and T4 polynucleotide kinase. The kinase phosphorylates the 5'-end of the analyte, and the ligase joins the capture probe to the analyte and the analyte to the detection probe. The capture and detection probes in the DLA can thus be termed ligation probes. As for the hybridization-ligation assay, the DLA is specific for the parent compound because the efficiency of ligation over a bulge loop is low, and thus the DLA detects the full-length analyte with both intact 5' and 3'-ends. The DLA can also be used for the determination of individual metabolites in biological matrices. The limitations of the hybridization-ligation assay also apply to the dual ligation assay, with the 5'-end in addition to the 3'-end requiring a free hydroxyl (or a phosphate group). Further, T4 DNA ligases are incompatible with ligation of RNA molecules as a donor (i.e. RNA at the 5' end of the ligation). Therefore, second-generation antisense compounds that comprise 2'-O-methyl RNA, 2'-O-methoxyethyl or locked nucleic acids may not be compatible with the dual ligation assay. Nuclease hybridization assay The nuclease hybridization assay, also called the S1 nuclease cutting assay, is a nuclease protection assay-based hybridization ELISA. The assay uses S1 nuclease, which degrades single-stranded DNA and RNA into oligo- or mononucleotides, leaving double-stranded DNA and RNA intact. In the nuclease hybridization assay, the oligonucleotide analyte is captured onto the solid support, such as a 96-well plate, via a fully complementary cutting probe. After enzymatic processing by S1 nuclease, the free cutting probe and the cutting probe hybridized to metabolites, i.e. shortmers of the analyte, are degraded, allowing signal to be generated only from the full-length cutting probe–analyte duplex. The assay is well tolerant of diverse chemistries, as exemplified by the development of a nuclease assay for morpholino oligonucleotides. This assay set-up can lack robustness and is not suitable for validation following the FDA's guidelines for bioanalytical method validation, as demonstrated by the absence of published methods that have been validated to the standards outlined by the FDA for bioanalytical methods. References Molecular biology Nucleic acids
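Because hybridization and wash stringency depend on probe length, base composition and monovalent salt concentration, rough melting-temperature (Tm) estimates are often used when designing capture and detection probes. The following sketch is illustrative only and is not taken from the assays described above; it uses the common Wallace rule for short oligos and a standard salt-adjusted approximation, both of which are rules of thumb, and the probe sequence is hypothetical.

```python
# Illustrative only: rule-of-thumb melting-temperature (Tm) estimates for
# hybridization probe design. Real assay development uses empirically
# optimized conditions; these formulas are coarse approximations.
import math

def wallace_tm(seq: str) -> float:
    """Wallace rule for short oligos (~14-20 nt): Tm = 2(A+T) + 4(G+C)."""
    s = seq.upper()
    return 2 * (s.count("A") + s.count("T")) + 4 * (s.count("G") + s.count("C"))

def salt_adjusted_tm(seq: str, na_molar: float = 0.05) -> float:
    """Salt-adjusted approximation for longer probes:
    Tm = 81.5 + 16.6*log10([Na+]) + 0.41*(%GC) - 600/length."""
    s = seq.upper()
    gc_percent = 100 * (s.count("G") + s.count("C")) / len(s)
    return 81.5 + 16.6 * math.log10(na_molar) + 0.41 * gc_percent - 600 / len(s)

probe = "ATGCGTACGTTAGCCTAGGA"  # hypothetical capture-probe sequence
print(f"Wallace Tm:       {wallace_tm(probe):.1f} C")
print(f"Salt-adjusted Tm: {salt_adjusted_tm(probe, na_molar=0.05):.1f} C")
```

Note that raising the monovalent salt concentration raises the estimated Tm, consistent with salt being one of the main levers for hybridization and wash stringency described above.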
Hybridization assay
Chemistry,Biology
1,506
69,274,247
https://en.wikipedia.org/wiki/Toyota%20R32V/R36V%20engine
The Toyota R32V and R36V engine family is a series of turbocharged, 3.2-liter and 3.6-liter, 90-degree, four-stroke, V-8 gasoline racing engines, designed, developed and produced by Toyota for sports car racing between 1988 and 1999. The engines were used in various Toyota sports prototype race cars. Applications Toyota 88C-V Toyota 89C-V Toyota 90C-V Toyota 91C-V Toyota 92C-V Toyota 93C-V Toyota 94C-V Toyota GT-One References Toyota engines Gasoline engines by model Engines by model Group C V8 engines
Toyota R32V/R36V engine
Technology
131
72,659,566
https://en.wikipedia.org/wiki/Zero-touch%20provisioning
Zero-touch provisioning (ZTP), or zero-touch enrollment, is the process of remotely provisioning large numbers of network devices such as switches, routers and mobile devices without having to manually program each one individually. The feature improves existing provisioning models, solutions and practices in the areas of wireless networks, (complex) network management and operations services, and cloud-based infrastructure services provisioning. ZTP saves configuration time while reducing errors. The process can also be used to update existing systems using scripts. Research has shown that ZTP systems allow for faster provisioning than manual provisioning. The global market for ZTP services was estimated to be $2.1 billion in 2021. In April 2019, the Internet Engineering Task Force published RFC 8572 Secure Zero Touch Provisioning (SZTP) as a Proposed Standard. The FIDO Alliance published FIDO Device Onboard version 1.0 in December 2020, and followed up with FIDO Device Onboard version 1.1 in April 2022. Several FDO "app notes" augment this specification. FIDO Device Onboard is also a ZTP-type protocol. Applications One application of the technology is to improve delivery of cloud computing services. The concept has been particularly influential for information technology when paired with mobile device management. Repetitive processes that can be automated and streamlined include configuring settings; collecting inventory details; deploying apps; managing licenses; and implementing security policy, including password management and wiping remote devices. System architecture A basic ZTP system requires a network device that supports ZTP, a server that supports Dynamic Host Configuration Protocol (DHCP) or Trivial File Transfer Protocol (TFTP), and a file server. When a ZTP-enabled device is powered on, the device's boot file sets up initial configuration parameters. The device then sends a request using DHCP or TFTP to get its configuration file from a central location. The file then runs and configures ports, IP addresses and other parameters for each location, as in the sketch below. Similar concepts A similar concept is the zero-touch network, which integrates zero-touch provisioning with automation, artificial intelligence and machine learning. Standards activity In December 2017, the European Telecommunications Standards Institute (ETSI) formed the Zero-touch network and Service Management group (ZSM) to accelerate development and standardization of the technology. In the summer of 2019, the group published a series of documents defining ZSM requirements, reference architecture and terminology. In April 2019, the Internet Engineering Task Force published RFC 8572 Secure Zero Touch Provisioning (SZTP) as a Proposed Standard. References External links ETSI ZSM standards What is ZTP (Zero Touch Provisioning)? Communications protocols Networks Cloud computing
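To make the boot-time flow above concrete, here is a minimal sketch of the client side of a ZTP-style bootstrap. It is illustrative only and is not taken from RFC 8572 or any vendor implementation; the provisioning URL, file name, and the idea of applying the file line by line are all assumptions for the example (real devices commonly fetch over TFTP or HTTP(S) using details handed out by DHCP).

```python
# Illustrative client-side sketch of a ZTP-style bootstrap (not RFC 8572 code).
# Assumes the DHCP server has already handed the device a provisioning URL
# (e.g., via options 66/67); real devices often use TFTP, HTTP(S), or both.
import urllib.request

PROVISIONING_URL = "http://192.0.2.10/configs/switch-0019.cfg"  # hypothetical

def fetch_config(url: str) -> list[str]:
    """Download the device's configuration file from the central file server."""
    with urllib.request.urlopen(url, timeout=10) as resp:
        return resp.read().decode().splitlines()

def apply_config(lines: list[str]) -> None:
    """Pretend to apply each configuration statement (ports, IPs, etc.)."""
    for line in lines:
        if line and not line.startswith("#"):
            print(f"applying: {line}")  # a real device would program hardware here

if __name__ == "__main__":
    apply_config(fetch_config(PROVISIONING_URL))
```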
Zero-touch provisioning
Technology
555
24,199,046
https://en.wikipedia.org/wiki/Lenvatinib
Lenvatinib, sold under the brand name Lenvima among others, is an anti-cancer medication for the treatment of certain kinds of thyroid cancer and other cancers. It was developed by Eisai Co. and acts as a multiple kinase inhibitor against the VEGFR1, VEGFR2 and VEGFR3 kinases. Medical uses Lenvatinib has been approved since 2015 for the treatment of differentiated thyroid cancer that is either locally recurrent or metastatic, progressive, and did not respond to treatment with radioactive iodine (radioiodine). In May 2016, the US Food and Drug Administration (FDA) approved it (in combination with everolimus) for the treatment of advanced renal cell carcinoma following one prior anti-angiogenic therapy. The drug is also approved in the US and in the European Union for hepatocellular carcinoma that cannot be removed surgically in patients who have not received cancer therapy by mouth or injection. Adverse effects Hypertension (high blood pressure) was the most common side effect in studies (73% of patients, versus 16% in the placebo group), followed by diarrhoea (67% vs. 17%) and fatigue (67% vs. 35%). Other common side effects included decreased appetite, hypotension (low blood pressure), thrombocytopenia (low blood platelet count), nausea, and muscle and bone pain. Interactions As lenvatinib moderately prolongs the QT interval, addition of other drugs with this property could increase the risk of a type of abnormal heart rhythm, namely torsades de pointes. No relevant interactions with enzyme inhibitors and inducers are expected. Pharmacology Mechanism of action Lenvatinib acts as a multiple kinase inhibitor. It inhibits the three main vascular endothelial growth factor receptors VEGFR1, 2 and 3, as well as fibroblast growth factor receptors (FGFR) 1, 2, 3 and 4, platelet-derived growth factor receptor (PDGFR) alpha, c-Kit, and the RET proto-oncogene. Some of these proteins play roles in oncogenic signalling pathways. VEGFR2 inhibition is thought to be the main reason for the most common side effect, hypertension. Pharmacokinetics Lenvatinib is absorbed quickly from the gut, reaching peak blood plasma concentrations after one to four hours (three to seven hours if taken with food). Bioavailability is estimated to be about 85%. The substance is almost completely (98–99%) bound to plasma proteins, mainly albumin. Lenvatinib is metabolized by the liver enzyme CYP3A4 to desmethyl-lenvatinib (M2). M2 and lenvatinib itself are oxidized by aldehyde oxidase (AO) to substances called M2' and M3', the main metabolites in the feces. Another metabolite, also mediated by a CYP enzyme, is the N-oxide M3. Non-enzymatic metabolization also occurs, resulting in a low potential for interactions with enzyme inhibitors and inducers. The terminal half-life is 28 hours, with about two thirds of a dose being excreted via the feces and one quarter via the urine. Chemistry Lenvatinib is used in the form of its mesylate salt. History A phase I clinical trial in cancer patients was performed in 2006. A phase III trial treating thyroid cancer patients started in March 2011. Lenvatinib was granted orphan drug status for the treatment of various types of thyroid cancer that do not respond to radioiodine in the US and Japan in 2012, and in Europe in 2013. In February 2015, the US FDA approved lenvatinib for treatment of progressive, radioiodine-refractory differentiated thyroid cancer. In May 2015, the European Medicines Agency (EMA) approved the drug for the same indication. 
In May 2016, the FDA approved it (in combination with everolimus) for the treatment of advanced renal cell carcinoma following one prior anti-angiogenic therapy. In August 2018, the FDA approved lenvatinib for the first-line treatment of people with unresectable hepatocellular carcinoma (HCC). References Protein kinase inhibitors Orphan drugs Cyclopropyl compounds Chloroarenes Ureas Carboxamides
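As a worked illustration of the pharmacokinetics section above: with a terminal half-life of about 28 hours, the fraction of lenvatinib remaining in plasma follows simple exponential decay. The sketch below is illustrative only, assumes single-compartment first-order elimination, and is in no way a dosing tool.

```python
# Illustrative only: exponential decay implied by a ~28 h terminal half-life.
# Assumes single-compartment, first-order elimination; not for clinical use.

HALF_LIFE_H = 28.0

def fraction_remaining(hours: float, half_life: float = HALF_LIFE_H) -> float:
    """Fraction of drug remaining after `hours`, from C(t) = C0 * 0.5**(t/t_half)."""
    return 0.5 ** (hours / half_life)

for t in (24, 48, 72, 168):
    print(f"after {t:3d} h: {fraction_remaining(t):.1%} remaining")
# After 168 h (one week), about 1.6% remains, i.e. most elimination
# occurs within a few days of a dose.
```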
Lenvatinib
Chemistry
925
42,369,113
https://en.wikipedia.org/wiki/Data%20in%20transit
Data in transit, also referred to as data in motion and data in flight, is data en route between a source and a destination, typically on a computer network. Data in transit can be separated into two categories: information that flows over a public or untrusted network, such as the Internet, and data that flows within the confines of a private network, such as a corporate or enterprise local area network (LAN). Data in transit complements the terms data in use and data at rest, which together define the three states of digital data. See also Bandwidth-delay product References Computer networks engineering
Data in transit
Technology,Engineering
121
77,096,431
https://en.wikipedia.org/wiki/Escherichia%20coli%20BL21%28DE3%29
Escherichia coli BL21(DE3) is a commonly used protein production strain of the E. coli bacterium. This strain combines several features that allow for high-level expression of heterologous proteins. It is derived from the B lineage of E. coli. Naming The genotype of this strain is designated with E. coli B F– ompT gal dcm lon hsdSB(rB–mB–) λ(DE3 [lacI lacUV5-T7p07 ind1 sam7 nin5]) [malB+]K-12(λS). Characteristics Decreased proteolysis The proteolysis of heterologously expressed proteins is reduced due to the functional deficiency of two major proteases, Lon and OmpT. Lon is usually present in the cytoplasm of the cell, but in all B strains its production is prevented by an insertion within the promoter sequence. OmpT is located in the outer membrane but is absent in B strains due to deletion. Expression induction While E. coli BL21(DE3) supports the expression of genes under the control of constitutive promoters, it is specifically engineered for IPTG induction of recombinant genes under the control of a T7 promoter. The realized induction strength depends on several factors, including the IPTG concentration and the timing of its supplementation. This function is enabled by the presence of a recombinant λ-prophage (DE3). DE3 carries a T7 RNA polymerase (RNAP) gene under the control of a lacUV5 promoter (lacUV5-T7 gene 1). T7-RNAP is highly specific to the T7 promoter and orthogonal to native E. coli promoters. Therefore, the T7-RNAP only transcribes (exogenously introduced) genes that are regulated by a T7 promoter. The lacUV5 promoter is derived from the E. coli wild-type lac promoter but exhibits an increased transcription strength due to two mutations that facilitate its interaction with a native E. coli RNAP σ-factor. In E. coli BL21(DE3) the expression of the T7-RNAP is suppressed by the constitutively expressed LacI repressor. LacI binds the lac operator, which is located downstream of the lacUV5 promoter, preventing the production of the T7-RNAP. However, upon supplementation of IPTG, the LacI repressor dissociates from the lac operator, allowing for the expression of T7-RNAP. Subsequently, T7-RNAP can initiate the transcription of a recombinant gene under T7 promoter control. Other DE3 modifications ensure stable integration of the prophage in the genome and prevent the prophage from entering the lytic cycle (ind1, sam7, and nin5). Facilitated cloning E. coli BL21(DE3) lacks a functional type I restriction-modification system, indicated by hsdS(rB− mB−). Specifically, both the restriction (hsdR) and modification (hsdM) domains are inactive. This enhances transformation efficiency since exogenously introduced unmethylated DNA remains untargeted by the restriction-modification system. The dcm gene is also rendered inactive, preventing the methylation of a cytosine on both strands within the recognition sequence 5'-CC(A/T)GG-3'. This facilitates further processing of purified DNA as Dcm methylation prevents cleavage by certain restriction enzymes. References Escherichia coli
Escherichia coli BL21(DE3)
Biology
756
32,172,099
https://en.wikipedia.org/wiki/BURP%20domain
In molecular biology, the BURP domain is a ~230-amino acid protein domain, which has been named for the four members of the group initially identified, BNM2, USP, RD22, and PG1beta. It is found in the C-terminal part of a number of plant cell wall proteins, which are defined not only by the BURP domain, but also by the overall similarity in their modular construction. The BURP domain proteins consists of either three or four modules: (i) an N-terminal hydrophobic domain - a presumptive transit peptide, joined to (ii) a short conserved segment or other short segment, (iii) an optional segment consisting of repeated units which is unique to each member, and (iv) the C-terminal BURP domain. Although the BURP domain proteins share primary structural features, their expression patterns and the conditions under which they are expressed differ. The presence of the conserved BURP domain in diverse plant proteins suggests an important role for this domain. It is possible that the BURP domain represents a general motif for localization of proteins within the cell wall matrix. The other structural domains associated with the BURP domain may specify other target sites for intermolecular interactions. Some proteins known to contain a BURP domain are listed below: Brassica protein BNM2, which is expressed during the induction of microspore embryogenesis. Field bean USPs, abundant non-storage seed proteins with unknown function. Soybean USP-like proteins ADR6 (or SALI5-4A), an auxin-repressible, aluminium-inducible protein and SALI3-2, a protein that is up-regulated by aluminium. Soybean seed coat BURP-domain protein 1 (SCB1). It might play a role in the differentiation of the seed coat parenchyma cells. Arabidopsis RD22 drought induced protein. Maize ZRP2, a protein of unknown function in cortex parenchyma. Tomato PG1beta, the beta-subunit of polygalacturonase isozyme 1 (PG1), which is expressed in ripening fruits. Cereal RAFTIN. It is essential specifically for the maturation phase of pollen development. References Protein families
BURP domain
Biology
465
43,677,246
https://en.wikipedia.org/wiki/Hyperlapse%20%28application%29
Hyperlapse was a mobile app created by Instagram that enabled users to produce hyperlapse and time-lapse videos. It was released on August 26, 2014. Overview The app enabled users to record up to 45 minutes of footage in a single take, which could subsequently be accelerated to create a hyperlapse cinematographic effect. Whereas time-lapses are normally produced by stitching together stills from traditional cameras, the app used an image stabilization algorithm that steadied the appearance of video by eliminating jitter. Unlike Instagram, the app offered no filters. Instead, the only post-production option available to users was the modification of playback speed, which could range from 1x to 40x normal playback speed. The app was only available on iOS devices, but Instagram suggested in August 2014 that an Android version would likely be made available in the near future. Fall Out Boy's music video for "Centuries" was filmed using the Hyperlapse app. Hyperlapse was removed from app stores by Instagram as of March 1, 2022. References External links Instagram Introduces Hyperlapse Instagram IOS software Video software
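Conceptually, the speed-up step of a time-lapse is just frame subsampling: an Nx playback speed keeps roughly every Nth stabilized frame. The sketch below is a generic illustration of that idea only; it is not Instagram's algorithm, whose pipeline also performs the stabilization step described above.

```python
# Illustrative only: the frame-selection step behind an N-x time-lapse.
# Hyperlapse's actual pipeline also stabilizes frames before subsampling.

def timelapse_indices(total_frames: int, speedup: int) -> list[int]:
    """Indices of the frames kept for a `speedup`-times faster clip."""
    if not 1 <= speedup <= 40:  # the app exposed 1x-40x
        raise ValueError("speedup must be between 1 and 40")
    return list(range(0, total_frames, speedup))

# A 30 s clip at 30 fps played back at 12x keeps 75 of 900 frames,
# yielding a 2.5 s hyperlapse.
kept = timelapse_indices(total_frames=900, speedup=12)
print(len(kept), "frames kept ->", len(kept) / 30, "seconds at 30 fps")
```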
Hyperlapse (application)
Technology
238
53,855,897
https://en.wikipedia.org/wiki/Hartmut%20Zohm
Hartmut Zohm (born 2 November 1962) is a German plasma physicist who is known for his work on the ASDEX Upgrade machine. He received the 2014 John Dawson Award and the 2016 Hannes Alfvén Prize for successfully demonstrating that neoclassical tearing modes in tokamaks can be stabilized by electron cyclotron resonance heating, which is an important design consideration for pushing the performance limit of ITER. Zohm is currently at the Max Planck Institute for Plasma Physics, and an Honorary Professor at the Ludwig Maximilian University of Munich. Early life and career Zohm received his doctorate in 1990 from Heidelberg University and the Max Planck Institute for Plasma Physics in Garching, Germany. His doctoral thesis, "Investigation of Magnetic Modes in the ASDEX Tokamak", received the Otto Hahn Medal in 1991. He was a post-graduate student at General Atomics in San Diego, California. In 1996, he habilitated in experimental physics at the University of Augsburg, and he was professor for electrical engineering and plasma research at the University of Stuttgart from 1996 to 1999. He has been a scientific member of the Max Planck Institute for Plasma Physics since 1999 and heads the Tokamak scenario research area. In 2003, he became an honorary professor at the Ludwig Maximilian University of Munich. With his department at ASDEX Upgrade (and JET), he researches plasma states (tokamak scenarios), energy dissipation, particle control (including the removal of helium ash), and the control of edge instabilities (edge-localized modes) for optimal operation of ITER and DEMO. Honors and awards Zohm is an elected fellow of the American Physical Society. In 1991 his doctoral thesis Investigation of Magnetic Modes in the ASDEX Tokamak received the Otto Hahn Medal. In 2014, he received the American Physical Society's John Dawson Award for Excellence in Plasma Physics Research for "the theoretical prediction and experimental demonstration of neoclassical tearing mode stabilization by localized electron cyclotron current drive". In 2016, he and Sergei Bulanov received the Hannes Alfvén Prize from the European Physical Society for "their experimental and theoretical contributions to the development of large-scale next-step devices in high-temperature plasma physics research". Books References Living people 1962 births 21st-century German physicists Fellows of the American Physical Society Heidelberg University alumni Plasma physicists Scientists from Freiburg im Breisgau Max Planck Institute directors Max Planck Society alumni
Hartmut Zohm
Physics
492
14,609,028
https://en.wikipedia.org/wiki/State%20Scientific%20Research%20Institute%20of%20Aviation%20Systems
State Scientific Research Institute of Aviation Systems, or GosNIIAS for short (), is a Russian aerospace research centre. It was founded by a decree of the Council of Ministers of the USSR on 26 February 1946 from a number of laboratories of the Flight Research Institute, for operations research and aviation weapons systems development; the new institute was named NII-2. In March 1994 the institute was renamed to its current name (GosNIIAS). Initially, the institute was located in the buildings of the former Sergievo-Elizabethan Asylum. GosNIIAS hosts six basic departments that train undergraduate and graduate students from three universities: Department of FUPM MIPT "Avionics. Control and Information Systems", organized in 1969; head of department: Academician E. A. Fedosov. Department of MAI "System design of air complexes", organized in 1969; head of department: Doctor of Technical Sciences V. A. Stefanov. Department of MAI "External design and efficiency of aviation complexes", organized in 1973; head of department: Doctor of Technical Sciences A. M. Zherebin. Department of MAI "Systems of automatic and intelligent control", organized in 1942; head of department: Academician of the Russian Academy of Sciences S. Y. Zheltov. Department of MIREA "Aviation and space information processing and control systems", organized in 2002; head of department: Corresponding Member of the Russian Academy of Sciences G. G. Sebryakov. Department of MIREA "Avionics", organized in 1988; head of department: Academician of the Russian Academy of Sciences E. A. Fedosov. Bibliography List of GosNIIAS publications in the Scientific electronic library elibrary.ru Notes References 1946 establishments in Russia Defence companies of the Soviet Union Companies based in Moscow Metal companies of the Soviet Union Buran program Research institutes in Russia Research institutes in the Soviet Union Aviation in the Soviet Union Aerospace research institutes Aviation research institutes Aerospace engineering organizations Research and development organizations Federal State Unitary Enterprises of Russia
State Scientific Research Institute of Aviation Systems
Engineering
411
48,530,679
https://en.wikipedia.org/wiki/Stuart%20Orkin
Stuart Holland Orkin is an American physician, stem cell biologist and researcher in pediatric hematology-oncology. He is the David G. Nathan Distinguished Professor of Pediatrics at Harvard Medical School. Orkin's research has focused on the genetic basis of blood disorders. He is a member of the National Academy of Sciences and the Institute of Medicine, and an Investigator of the Howard Hughes Medical Institute. Early life Orkin grew up in Manhattan, where his father was a urologist. He studied biology as an undergraduate (B.S., 1967) at the Massachusetts Institute of Technology and earned a medical degree from Harvard Medical School in 1972. He did postdoctoral research in molecular biology at the National Institutes of Health under geneticist Philip Leder. While Orkin was completing his training in hematology-oncology, his department chair, David G. Nathan, allowed him to establish his own research laboratory. Career Orkin is the David G. Nathan Distinguished Professor of Pediatrics at Harvard Medical School. He served as Chair of the Department of Pediatric Oncology at the Dana–Farber/Harvard Cancer Center from 2000–2016. He has been on the Harvard Medical School faculty since the late 1970s and has been a Howard Hughes Medical Institute investigator since 1986. In the 1970s and 1980s, Orkin conducted research that identified genetic mutations associated with a group of blood disorders known as the thalassemias. This work led to the first comprehensive description of molecular defects in an inherited disorder. Later (1986), he and his team cloned a gene causing chronic granulomatous disease, marking the first time that a disease-causing gene was cloned without the researchers already knowing the protein encoded by the gene. Today, his research lab examines transcriptional regulators of cell specification and differentiation. His laboratory cloned GATA1, the first hematopoietic transcription factor, in 1989. Starting in 2008, Orkin and his colleagues published a series of papers identifying the critical role for BCL11A in the developmental switch from fetal-type (HbF) to adult-type (HbA) hemoglobin. His group demonstrated that loss of BCL11A alone is sufficient to rescue the phenotype of sickle cell disease (SCD). In September 2015, Orkin published a study in the journal Nature showing a small section of DNA which could be responsive to gene therapy for sickle-cell disease. Translation of the basic findings on the role of BCL11A in HbF silencing to the clinic is ongoing, both with gene therapy and therapeutic gene editing. Honors and awards In 1987, Orkin received the E. Mead Johnson Award. Elected to the National Academy of Sciences in 1991, Orkin won the Jessie Stevenson Kovalenko Medal from that organization in 2013. He was elected to the Institute of Medicine in 1992. In 1993, he received the Warren Alpert Foundation Prize. The American Society of Hematology named Orkin one of its Legends in Hematology in 2008. The American Society of Human Genetics honored Orkin with the 2014 William Allan Award, which recognizes sustained and significant contributions to human genetics. In 2017, he was elected to membership in the American Philosophical Society, and in 2018 he received the George M. Kober Medal of the Association of American Physicians and the Mechthild Esser Nemmers Prize in Medical Science from Northwestern University. In 2020 he was awarded the King Faisal International Prize in Medicine and the Harrington Prize for Innovation in Medicine. 
In 2021, he received the Gruber Prize in Genetics. In 2022 he was a recipient of the Canada Gairdner International Award. Orkin was selected as the third recipient of the Elaine Redding Brinster Prize in Science or Medicine. In 2024 he was awarded the Shaw Prize. Orkin's name was included in the Time 2024 influential people in health list. Personal Orkin has been married for more than 50 years and has one daughter. References 1946 births Living people American hematologists Stem cell researchers Massachusetts Institute of Technology School of Science alumni Harvard Medical School alumni Harvard Medical School faculty Howard Hughes Medical Investigators Members of the National Academy of Medicine Fellows of the American Association for the Advancement of Science Members of the American Philosophical Society Members of the United States National Academy of Sciences
Stuart Orkin
Biology
866
28,734,406
https://en.wikipedia.org/wiki/C18H12O9
{{DISPLAYTITLE:C18H12O9}} The molecular formula C18H12O9 may refer to: Eckol Norstictic acid Variegatic acid Molecular formulas
C18H12O9
Physics,Chemistry
42
3,340,881
https://en.wikipedia.org/wiki/Environmental%20philosophy
Environmental philosophy is the branch of philosophy that is concerned with the natural environment and humans' place within it. It asks crucial questions about human environmental relations such as "What do we mean when we talk about nature?" "What is the value of the natural, that is non-human, environment to us, or in itself?" "How should we respond to environmental challenges such as environmental degradation, pollution and climate change?" "How can we best understand the relationship between the natural world and human technology and development?" and "What is our place in the natural world?" Environmental philosophy includes environmental ethics, environmental aesthetics, ecofeminism, environmental hermeneutics, and environmental theology. Some of the main areas of interest for environmental philosophers are: Defining environment and nature How to value the environment Moral status of animals and plants Endangered species Environmentalism and deep ecology Aesthetic value of nature Intrinsic value Wilderness Restoration of nature Consideration of future generations Ecophenomenology Contemporary issues Modern issues within environmental philosophy include but are not restricted to the concerns of environmental activism, questions raised by science and technology, environmental justice, and climate change. These include issues related to the depletion of finite resources and other harmful and permanent effects brought upon the environment by humans, as well as the ethical and practical problems raised by philosophies and practices of environmental conservation, restoration, and policy in general. Another question that has occupied modern environmental philosophers is "Do rivers have rights?" At the same time, environmental philosophy deals with the value human beings attach to different kinds of environmental experience, particularly how experiences in or close to non-human environments contrast with urban or industrialized experiences, and how this varies across cultures, with close attention paid to indigenous peoples. Modern history Environmental philosophy emerged as a branch of philosophy in the 1970s. Early environmental philosophers include Seyyed Hossein Nasr, Richard Routley, Arne Næss, and J. Baird Callicott. The movement was an attempt to connect with humanity's sense of alienation from nature, a theme that continues throughout history. This was very closely related to the development at the same time of ecofeminism, an intersecting discipline. Since then its areas of concern have expanded significantly. The field is today characterized by a notable diversity of stylistic, philosophical and cultural approaches to human environmental relationships, from personal and poetic reflections on environmental experience and arguments for panpsychism to Malthusian applications of game theory or the question of how to put an economic value on nature's services. A major debate that arose in the 1970s and 80s was whether nature has intrinsic value in itself independent of human values or whether its value is merely instrumental, with ecocentric or deep ecology approaches emerging on the one hand versus consequentialist or pragmatist anthropocentric approaches on the other. Another debate that arose at this time was over whether there really is such a thing as wilderness or not, or whether it is merely a cultural construct with colonialist implications, as suggested by William Cronon. 
Since then, readings of environmental history and discourse have become more critical and refined. In this ongoing debate, a diversity of dissenting voices have emerged from different cultures around the world questioning the dominance of Western assumptions, helping to transform the field into a global area of thought. In recent decades, there has been a significant challenge to deep ecology and the concepts of nature that underlie it, some arguing that there is not really such a thing as nature at all beyond some self-contradictory and even politically dubious constructions of an ideal other that ignore the real human-environmental interactions that shape our world and lives. This has been alternately dubbed the postmodern, constructivist, and most recently post-naturalistic turn in environmental philosophy. Environmental aesthetics, design and restoration have emerged as important intersecting disciplines that keep shifting the boundaries of environmental thought, as have the science of climate change and biodiversity and the ethical, political and epistemological questions they raise. Social ecology movement In 1982, Murray Bookchin described his philosophy of Social Ecology which provides a framework for understanding nature, our relationship with nature, and our relationships to each other. According to this philosophy, defining nature as "unspoiled wilderness" denies that humans are biological creatures created by natural evolution. It also takes issue with the attitude that "everything that exists is natural", as this provides us with no framework for judging a landfill as less natural than a forest. Instead, social ecology defines nature as a tendency in healthy ecosystems toward greater levels of diversity, complementarity, and freedom. Practices that are congruent with these principles are more natural than those that are not. Building from this foundation, Bookchin argues that "The ecological crisis is a social crisis": Practices which simplify biodiversity and dominate nature (monocropping, overfishing, clearcutting, etc.) are linked to societal tendencies to simplify and dominate humanity. Such societies create cultural institutions like poverty, racism, patriarchy, homophobia, and genocide from this same desire to simplify and dominate. In turn, Social Ecology suggests addressing the root causes of environmental degradation requires creating a society that promotes decentralization, interdependence, and direct democracy rather than profit extraction. Deep ecology movement In 1984, George Sessions and Arne Næss articulated the principles of the new Deep Ecology Movement. These basic principles are: The well-being and flourishing of human and non-human life have value. Richness and diversity of life forms contribute to the realization of these values and are also values in themselves. Humans have no right to reduce this richness and diversity except to satisfy vital needs. The flourishing of human life and cultures is compatible with a substantial decrease in the human population. Present human interference with the nonhuman world is excessive, and the situation is rapidly worsening. Policies must therefore be changed. These policies affect basic economic, technological, and ideological structures. The resulting state of affairs will be deeply different from the present. The ideological change is mainly that of appreciating life quality (dwelling in situations of inherent value), rather than adhering to an increasingly higher standard of living. 
There will be a profound awareness of the difference between big and great. Those who subscribe to the foregoing points have an obligation directly or indirectly to try to implement the necessary changes. Resacralization of nature See also Environmental Philosophy (journal) Environmental Values Environmental Ethics (journal) List of environmental philosophers Environmental hermeneutics References Notes Further reading Armstrong, Susan, Richard Botzler. Environmental Ethics: Divergence and Convergence, McGraw-Hill, Inc., New York, New York. Auer, Matthew, 2019. Environmental Aesthetics in the Age of Climate Change, Sustainability, 11 (18), 5001. Benson, John, 2000. Environmental Ethics: An Introduction with Readings, Psychology Press. Callicott, J. Baird, and Michael Nelson, 1998. The Great New Wilderness Debate, University of Georgia Press. Conesa-Sevilla, J., 2006. The Intrinsic Value of the Whole: Cognitive and Utilitarian Evaluative Processes as they Pertain to Ecocentric, Deep Ecological, and Ecopsychological "Valuing", The Trumpeter, 22 (2), 26-42. Derr, Patrick G., Edward McNamara, 2003. Case Studies in Environmental Ethics, Rowman & Littlefield Publishers. DesJardins, Joseph R. Environmental Ethics, Wadsworth Publishing Company, An International Thomson Publishing Company, Belmont, California. Devall, W. and G. Sessions. 1985. Deep Ecology: Living As if Nature Mattered, Salt Lake City: Gibbs M. Smith, Inc. Drengson, Inoue, 1995. "The Deep Ecology Movement", North Atlantic Books, Berkeley, California. Foltz, Bruce V., Robert Frodeman. 2004. Rethinking Nature, Indiana University Press, Bloomington, IN. Gade, Anna M. 2019. Muslim Environmentalisms: Religious and Social Foundations, Columbia University Press, New York. Keulartz, Jozef, 1999. The Struggle for Nature: A Critique of Environmental Philosophy, Routledge. LaFreniere, Gilbert F., 2007. The Decline of Nature: Environmental History and the Western Worldview, Academica Press, Bethesda, MD. Light, Andrew, and Eric Katz, 1996. Environmental Pragmatism, Psychology Press. Mannison, D., M. McRobbie, and R. Routley (eds), 1980. Environmental Philosophy, Australian National University. Matthews, Steve, 2002. A Hybrid Theory of Environmentalism, Essays in Philosophy, 3. https://core.ac.uk/download/pdf/48856927.pdf Næss, A. 1989. Ecology, Community and Lifestyle: Outline of an Ecosophy, Translated by D. Rothenberg. Cambridge: Cambridge University Press. Oelschlaeger, Max, 1993. The Idea of Wilderness: From Prehistory to the Age of Ecology, New Haven: Yale University Press. Pojman, Louis P., Paul Pojman. Environmental Ethics, Thomson-Wadsworth, United States. Sarvis, Will. Embracing Philanthropic Environmentalism: The Grand Responsibility of Stewardship (McFarland, 2019). Sherer, D. (ed.), Thomas Attig. 1983. Ethics and the Environment, Prentice-Hall, Inc., Englewood Cliffs, New Jersey. VanDeVeer, Donald, Christine Pierce. The Environmental Ethics and Policy Book, Wadsworth Publishing Company. Vogel, Steven, 1999. Environmental Philosophy After the End of Nature, Environmental Ethics 24 (1): 23–39. Weston, 1999. An Invitation to Environmental Philosophy, Oxford University Press, New York, New York. Zimmerman, Michael E., J. Baird Callicott, George Sessions, Karen J. Warren, John Clark. 1993. Environmental Philosophy: From Animal Rights to Radical Ecology, Prentice-Hall, Inc., Englewood Cliffs, New Jersey.
Environmental philosophy
Environmental_science
2,089
2,372,548
https://en.wikipedia.org/wiki/Polymer%20science
Polymer science or macromolecular science is a subfield of materials science concerned with polymers, primarily synthetic polymers such as plastics and elastomers. The field of polymer science includes researchers in multiple disciplines including chemistry, physics, and engineering. Subdisciplines This science comprises three main sub-disciplines: Polymer chemistry or macromolecular chemistry is concerned with the chemical synthesis and chemical properties of polymers. Polymer physics is concerned with the physical properties of polymer materials and engineering applications. Specifically, it seeks to present the mechanical, thermal, electronic and optical properties of polymers with respect to the underlying physics governing a polymer microstructure. Despite originating as an application of statistical physics to chain structures, polymer physics has now evolved into a discipline in its own right. Polymer characterization is concerned with the analysis of chemical structure, morphology, and the determination of physical properties in relation to compositional and structural parameters. History of polymer science The first modern example of polymer science is Henri Braconnot's work in the 1830s. Henri, along with Christian Schönbein and others, developed derivatives of the natural polymer cellulose, producing new, semi-synthetic materials, such as celluloid and cellulose acetate. The term "polymer" was coined in 1833 by Jöns Jakob Berzelius, though Berzelius did little that would be considered polymer science in the modern sense. In the 1840s, Friedrich Ludersdorf and Nathaniel Hayward independently discovered that adding sulfur to raw natural rubber (polyisoprene) helped prevent the material from becoming sticky. In 1844 Charles Goodyear received a U.S. patent for vulcanizing natural rubber with sulfur and heat. Thomas Hancock had received a patent for the same process in the UK the year before. This process strengthened natural rubber and prevented it from melting with heat without losing flexibility. This made practical products such as waterproofed articles possible. It also facilitated practical manufacture of such rubberized materials. Vulcanized rubber represents the first commercially successful product of polymer research. In 1884 Hilaire de Chardonnet started the first artificial fiber plant based on regenerated cellulose, or viscose rayon, as a substitute for silk, but it was very flammable. In 1907 Leo Baekeland invented the first synthetic plastic, a thermosetting phenol–formaldehyde resin called Bakelite. Despite significant advances in polymer synthesis, the molecular nature of polymers was not understood until the work of Hermann Staudinger in 1922. Prior to Staudinger's work, polymers were understood in terms of the association theory or aggregate theory, which originated with Thomas Graham in 1861. Graham proposed that cellulose and other polymers were colloids, aggregates of molecules having small molecular mass connected by an unknown intermolecular force. Hermann Staudinger was the first to propose that polymers consisted of long chains of atoms held together by covalent bonds. It took over a decade for Staudinger's work to gain wide acceptance in the scientific community, work for which he was awarded the Nobel Prize in 1953. The World War II era marked the emergence of a strong commercial polymer industry. 
The restricted supply of natural materials such as silk and rubber necessitated the increased production of synthetic substitutes, such as nylon and synthetic rubber. In the intervening years, the development of advanced polymers such as Kevlar and Teflon has continued to fuel a strong and growing polymer industry. The growth in industrial applications was mirrored by the establishment of strong academic programs and research institutes. In 1946, Herman Mark established the Polymer Research Institute at Brooklyn Polytechnic, the first research facility in the United States dedicated to polymer research. Mark is also recognized as a pioneer in establishing curricula and pedagogy for the field of polymer science. In 1950, the POLY division of the American Chemical Society was formed, and it has since grown to be the second-largest division in the association, with nearly 8,000 members. Fred W. Billmeyer, Jr., a professor of analytical chemistry, once said that "although the scarcity of education in polymer science is slowly diminishing but it is still evident in many areas. What is most unfortunate is that it appears to exist, not because of a lack of awareness but, rather, a lack of interest." Nobel prizes related to polymer science 2005 (Chemistry) Robert Grubbs, Richard Schrock, Yves Chauvin for olefin metathesis. 2002 (Chemistry) John Bennett Fenn, Koichi Tanaka, and Kurt Wüthrich for the development of methods for identification and structure analyses of biological macromolecules. 2000 (Chemistry) Alan G. MacDiarmid, Alan J. Heeger, and Hideki Shirakawa for work on conductive polymers, contributing to the advent of molecular electronics. 1991 (Physics) Pierre-Gilles de Gennes for developing a generalized theory of phase transitions with particular applications to describing ordering and phase transitions in polymers. 1974 (Chemistry) Paul J. Flory for contributions to theoretical polymer chemistry. 1963 (Chemistry) Giulio Natta and Karl Ziegler for contributions in polymer synthesis (Ziegler–Natta catalysis). 1953 (Chemistry) Hermann Staudinger for contributions to the understanding of macromolecular chemistry. References McLeish T.C.B. (2009) Polymer Physics. In: Meyers R. (eds) Encyclopedia of Complexity and Systems Science. Springer, New York, NY. External links List of scholarly journals pertaining to polymer science Soft matter Materials science Polymers
Polymer science
Physics,Chemistry,Materials_science,Engineering
1,129
1,491,913
https://en.wikipedia.org/wiki/Primosome
In molecular biology, a primosome is a protein complex responsible for creating RNA primers on single-stranded DNA during DNA replication. The primosome consists of seven proteins: DnaG primase, DnaB helicase, DnaC helicase assistant, DnaT, PriA, PriB, and PriC. At each replication fork, the primosome is utilized once on the leading strand of DNA and repeatedly, initiating each Okazaki fragment, on the lagging DNA strand. Initially the complex formed by PriA, PriB, and PriC binds to DNA. Then the DnaB–DnaC helicase complex attaches, along with DnaT. This structure is referred to as the pre-primosome. Finally, DnaG binds to the pre-primosome, forming a complete primosome. The primosome attaches 1–10 RNA nucleotides to the single-stranded DNA, creating a DNA–RNA hybrid. This sequence of RNA is used as a primer to initiate DNA polymerase III. The RNA bases are ultimately replaced with DNA bases by RNase H nuclease (eukaryotes) or DNA polymerase I nuclease (prokaryotes). DNA ligase then acts to join the two ends together. Assembly of the Escherichia coli primosome requires six proteins, PriA, PriB, PriC, DnaB, DnaC, and DnaT, acting at a primosome assembly site (pas) on SSB-coated single-stranded (ss) DNA. Assembly is initiated by interactions of PriA and PriB with ssDNA and the pas. PriC, DnaB, DnaC, and DnaT then act on the PriA–PriB–DNA complex to yield the primosome. Primosomes are nucleoprotein assemblies that activate DNA replication forks. Their primary role is to recruit the replicative helicase onto single-stranded DNA. The "replication restart" primosome, defined in Escherichia coli, is involved in the reactivation of arrested replication forks. Binding of the PriA protein to forked DNA triggers its assembly. PriA is conserved in bacteria, but its primosomal partners are not. In Bacillus subtilis, genetic analysis has revealed three primosomal proteins, DnaB, DnaD, and DnaI, that have no obvious homologues in E. coli. They are involved in primosome function both at arrested replication forks and at the chromosomal origin. Biochemical analysis of the DnaB and DnaD proteins reveals their role in primosome assembly. They are both multimeric and bind individually to DNA. Furthermore, DnaD stimulates DnaB binding activities. DnaD alone and the DnaD/DnaB pair interact specifically with PriA of B. subtilis on several DNA substrates. This suggests that the nucleoprotein assembly is sequential, in the PriA, DnaD, DnaB order. The preferred DNA substrate mimics an arrested DNA replication fork with an unreplicated lagging strand, structurally identical to a product of recombinational repair of a stalled replication fork. References Genetics
Primosome
Chemistry,Biology
665
33,287,802
https://en.wikipedia.org/wiki/Glycoside%20hydrolase%20family%2038
In molecular biology, glycoside hydrolase family 38 is a family of glycoside hydrolases. Glycoside hydrolases are a widespread group of enzymes that hydrolyse the glycosidic bond between two or more carbohydrates, or between a carbohydrate and a non-carbohydrate moiety. A classification system for glycoside hydrolases, based on sequence similarity, has led to the definition of more than 100 different families. This classification is available on the CAZy web site, and is also discussed at CAZypedia, an online encyclopedia of carbohydrate-active enzymes. Glycoside hydrolase family 38 (CAZY GH_38) comprises enzymes with only one known activity: alpha-mannosidase. Lysosomal alpha-mannosidase is necessary for the catabolism of N-linked carbohydrates released during glycoprotein turnover. The enzyme catalyzes the hydrolysis of terminal, non-reducing alpha-D-mannose residues in alpha-D-mannosides, and can cleave all known types of alpha-mannosidic linkages. Defects in the gene cause lysosomal alpha-mannosidosis (AM), a lysosomal storage disease characterised by the accumulation of unbranched oligosaccharide chains. A domain found in the central region adopts a structure consisting of three alpha-helices, in an immunoglobulin/albumin-binding domain-like fold. The domain is predominantly found in the enzyme alpha-mannosidase. References EC 3.2.1 Glycoside hydrolase families Protein families
Glycoside hydrolase family 38
Biology
365
53,079,374
https://en.wikipedia.org/wiki/Vibrio%20tubiashii
Vibrio tubiashii is a Gram-negative, rod-shaped (0.5–1.5 μm) marine bacterium that uses a single polar flagellum for motility. It has been implicated in several diseases of marine organisms. Discovery Vibrio tubiashii was originally isolated from juvenile and larval bivalve mollusks suffering from bacillary necrosis, now called vibriosis. It was originally discovered by Tubiash et al. in 1965, hence the name, but not properly described until Hada et al. in 1984. Since its discovery and identification, V. tubiashii has been implicated in shellfish vibriosis across the globe and, more recently, in coral diseases. Pathogenicity Like many Vibrio spp., V. tubiashii produces extracellular enzymes, specifically a zinc-metalloprotease and a cytolysin/hemolysin, that are nearly identical to those produced by other pathogenic Vibrio strains. That said, only the zinc-metalloprotease elicited disease symptoms in Crassostrea gigas consistent with vibriosis. In addition to shellfish disease, Vibrio-derived zinc-metalloprotease could be an integral virulence factor in diseases of scleractinian corals, as it was shown to cause photoinactivation of the coral endosymbiont Symbiodinium, leading to tissue color loss and eventual tissue death. The hemolytic activity of V. tubiashii cultures increases during early growth stages and progressively decreases throughout the stationary phase, while proteolytic activity shows a gradual increase starting in the early stationary phase, suggesting that pathogenesis in this organism requires higher cell density. References External links Type strain of Vibrio tubiashii at BacDive - the Bacterial Diversity Metadatabase Bacterial diseases Vibrionales Waterborne diseases Marine microorganisms
Vibrio tubiashii
Biology
403
7,792,586
https://en.wikipedia.org/wiki/WAP%20gateway
A WAP gateway sits between mobile devices using the Wireless Application Protocol (WAP) and the World Wide Web, passing pages from one to the other much like a proxy. This translates pages into a form suitable for the mobiles, for instance using the Wireless Markup Language (WML). This process is hidden from the phone, so it may access the page in the same way as a browser accesses HTML, using a URL (for example, http://example.com/foo.wml), provided the mobile phone operator has not specifically prevented this. WAP gateway software encodes and decodes requests and responses between the smartphone's microbrowser and the internet. It decodes the encoded WAP requests from the microbrowser and sends the HTTP requests to the internet or to a local application server. It also encodes the WML and HDML data returned from the web for transmission to the microbrowser in the handset. References Kannel: Open Source WAP and SMS Gateway What is WAP Gateway Wap-Gateway.com: Free WAP Gateway / Wap Proxy http://www.squid-cache.org/ Wireless Application Protocol
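The gateway's core job is that of a translating proxy: accept a request from the handset, fetch the corresponding resource over ordinary HTTP, and hand the payload back for encoding toward the microbrowser. The toy sketch below illustrates only the HTTP-facing half of that loop; real gateways additionally speak the WAP protocol stack (WSP/WTP) and compile WML into a compact binary form, none of which is modeled here, and the function name is purely illustrative.

```python
# Toy sketch of the HTTP side of a WAP gateway: fetch a page on behalf of a
# mobile client. Decoding WSP requests and binary-encoding the WML response
# for the handset are omitted.
from urllib.request import urlopen

def fetch_for_handset(url: str) -> bytes:  # hypothetical helper name
    """Perform the wired-internet HTTP request the gateway makes
    after decoding the microbrowser's WAP request."""
    with urlopen(url) as response:
        return response.read()  # WML document, to be encoded for the phone

wml = fetch_for_handset("http://example.com/foo.wml")  # URL from the article
```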
WAP gateway
Technology
247
23,943,269
https://en.wikipedia.org/wiki/Sort%20sol
Sort sol is a murmuration, a flocking behavior that occurs in the marshlands in southwestern Jutland, Denmark, in particular the marsh near Tønder and Ribe. Very large numbers of migratory starlings gather there in spring and autumn when they move between their winter grounds in southern Europe and their summer breeding grounds in Scandinavia and other countries near the Baltic Sea. Sort sol takes place in the hours just after sunset. The birds gather in large flocks and form huge formations in the sky just before they decide on a location to roost for the night. The movements of the formations have been likened to a kind of dance or ballet and the birds are so numerous that they seem to obliterate the sunset, hence the term "sort sol" (Danish for "black sun"). Sort sol in the marsh near Tønder can occasionally comprise a formation of up to one million birds. Usually flocks break up when the number of individuals exceeds about half a million birds, due to excessive internal disturbances in the flock. If a predator bird enters the flock, the starlings initiate a veritable bombardment of droppings and vomit to soil the feathers of the predator. In rare cases the sticky deposits may render the predator unable to stay airborne. See also Flock (birds) Swarm References External links 7 Incredible Natural Phenomena you've never seen - Denmark's Black Sun no.6 Birds Earth phenomena
Sort sol
Physics,Biology
293
309,304
https://en.wikipedia.org/wiki/Propellant%20mass%20fraction
In aerospace engineering, the propellant mass fraction is the portion of a vehicle's mass which does not reach the destination, usually used as a measure of the vehicle's performance. In other words, the propellant mass fraction is the ratio between the propellant mass and the initial mass of the vehicle. In a spacecraft, the destination is usually an orbit, while for aircraft it is their landing location. A higher mass fraction represents less weight in a design. Another related measure is the payload fraction, which is the fraction of initial weight that is payload. It can be applied to a vehicle, a stage of a vehicle or to a rocket propulsion system. Formulation The propellant mass fraction is given by $\zeta = \frac{m_p}{m_0} = \frac{m_0 - m_f}{m_0}$ where: $\zeta$ is the propellant mass fraction, $m_0$ is the initial mass of the vehicle, $m_p$ is the propellant mass, and $m_f$ is the final mass of the vehicle. Significance In rockets for a given target orbit, a rocket's mass fraction is the portion of the rocket's pre-launch mass (fully fueled) that does not reach orbit. The propellant mass fraction is the ratio of just the propellant to the entire mass of the vehicle at takeoff (propellant plus dry mass). In the cases of a single-stage-to-orbit (SSTO) vehicle or suborbital vehicle, the mass fraction equals the propellant mass fraction, which is simply the fuel mass divided by the mass of the full spaceship. Rockets employing staging, which are the only designs to have reached orbit, have a mass fraction higher than the propellant mass fraction because parts of the rocket itself are dropped off en route. Propellant mass fractions are typically around 0.8 to 0.9. In aircraft, mass fraction is related to range; an aircraft with a higher mass fraction can go farther. Aircraft mass fractions are typically around 0.5. When applied to a rocket as a whole, a low mass fraction is desirable, since it indicates a greater capability for the rocket to deliver payload to orbit for a given amount of fuel. Conversely, when applied to a single stage, where the propellant mass fraction calculation doesn't include the payload, a higher propellant mass fraction corresponds to a more efficient design, since there is less non-propellant mass. Without the benefit of staging, SSTO designs are typically designed for mass fractions around 0.9. Staging increases the payload fraction, which is one of the reasons SSTOs appear difficult to build. For example, the complete Space Shuttle system has: fueled weight at liftoff: 1,708,500 kg dry weight at liftoff: 342,100 kg Given these numbers, the propellant mass fraction is $\zeta = (1{,}708{,}500 - 342{,}100)/1{,}708{,}500 \approx 0.80$. The mass fraction plays an important role in the rocket equation: $\Delta v = -v_e \ln\frac{m_f}{m_0}$ where $\frac{m_f}{m_0}$ is the ratio of final mass to initial mass (i.e., one minus the mass fraction), $\Delta v$ is the change in the vehicle's velocity as a result of the fuel burn and $v_e$ is the effective exhaust velocity (see below). The term effective exhaust velocity is defined as $v_e = I_{sp} \, g_n$, where $I_{sp}$ is the fuel's specific impulse in seconds and $g_n$ is the standard acceleration of gravity (note that this is not the local acceleration of gravity). To make a powered landing from orbit on a celestial body without an atmosphere requires the same mass reduction as reaching orbit from its surface, if the speed at which the surface is reached is zero. See also Fuel fraction Mass ratio References Astrodynamics Mass Single-stage-to-orbit Rocket propulsion
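As a check on the formulation and the Space Shuttle figures above, the sketch below computes the propellant mass fraction and an ideal delta-v; the specific impulse value is an illustrative placeholder, not a figure from this article.

```python
import math

G_N = 9.80665  # standard acceleration of gravity, m/s^2

def propellant_mass_fraction(m0: float, mf: float) -> float:
    """zeta = (m0 - mf) / m0, per the definition above."""
    return (m0 - mf) / m0

def ideal_delta_v(m0: float, mf: float, isp_s: float) -> float:
    """Rocket equation: dv = v_e * ln(m0 / mf), with v_e = Isp * g_n."""
    return isp_s * G_N * math.log(m0 / mf)

m0 = 1_708_500.0  # fueled weight at liftoff, kg (from the article)
mf = 342_100.0    # dry weight at liftoff, kg (from the article)

print(propellant_mass_fraction(m0, mf))    # ~0.80
print(ideal_delta_v(m0, mf, isp_s=400.0))  # Isp = 400 s is an assumed value
```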
Propellant mass fraction
Physics,Mathematics,Engineering
712
62,771,625
https://en.wikipedia.org/wiki/Walter%20Burke%20Institute%20for%20Theoretical%20Physics
The Walter Burke Institute for Theoretical Physics is a research center at the California Institute of Technology focused on high-energy physics, condensed matter physics, astrophysics, general relativity, and cosmology. It was founded in 2014. History The Institute was founded in 2014 with grants from the Sherman Fairchild Foundation, the Gordon and Betty Moore Foundation, and funding from the California Institute of Technology itself. It had an initial endowment of over $70 million, a significant amount, particularly when placed in the context of Department of Energy funding for high-energy physics. It is named after Walter Burke, a trustee of Caltech and president of the Sherman Fairchild Foundation. Its inaugural director is Hirosi Ooguri, a string theorist. References See also Institute for Theoretical Physics (disambiguation) Center for Theoretical Physics (disambiguation) Theoretical physics institutes Physics research institutes California Institute of Technology
Walter Burke Institute for Theoretical Physics
Physics
181
407,350
https://en.wikipedia.org/wiki/98%20%28number%29
98 (ninety-eight) is the natural number following 97 and preceding 99. In mathematics 98 is: a Wedderburn–Etherington number a nontotient the number of non-isomorphic set-systems of weight 7 In other fields Ninety-eight appears in the police radio code 10-98, meaning "Assignment Completed" References Integers
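The nontotient property is easy to verify by brute force: since φ(n) ≥ √(n/2) for all n, any n with φ(n) = 98 would have to satisfy n ≤ 2·98². A minimal check along those lines:

```python
def phi(n: int) -> int:
    """Euler's totient via trial-division factorization."""
    result, x, p = n, n, 2
    while p * p <= x:
        if x % p == 0:
            while x % p == 0:
                x //= p
            result -= result // p
        p += 1
    if x > 1:
        result -= result // x
    return result

m = 98
# phi(n) >= sqrt(n/2), so phi(n) = m forces n <= 2*m*m
print(all(phi(n) != m for n in range(1, 2 * m * m + 1)))  # True: 98 is a nontotient
```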
98 (number)
Mathematics
71
14,407,895
https://en.wikipedia.org/wiki/National%20Health%20and%20Nutrition%20Examination%20Survey
The National Health and Nutrition Examination Survey (NHANES) is a survey research program conducted by the National Center for Health Statistics (NCHS) to assess the health and nutritional status of adults and children in the United States, and to track changes over time. The survey combines interviews, physical examinations and laboratory tests. The NHANES interview includes demographic, socioeconomic, dietary, and health-related questions. The examination component consists of medical, dental, and physiological measurements, as well as laboratory tests administered by medical personnel. The National Health Survey Act was passed in 1956. This act provided legislative authorization for a continuing survey to supply current statistical data on the amount, distribution, and effects of illness and disability in the United States. The first three national health examination surveys were conducted in the 1960s: 1960-62—National Health Examination Survey I (NHES I); 1963-65—National Health Examination Survey II (NHES II); and 1966-70—National Health Examination Survey III (NHES III). The first NHANES was conducted in 1971, and in 1999 the surveys became an annual event; the first report on the topic was published in 2001. NHANES findings are used to determine the prevalence of major diseases and risk factors for diseases. Information is used to assess nutritional status and its association with health promotion and disease prevention. NHANES findings are also the basis for national standards for such measurements as height, weight, and blood pressure. NHANES data are used in epidemiological studies and health sciences research (including biomarkers of aging), which help develop sound public health policy, direct and design health programs and services, expand health knowledge, and extend healthspan and lifespan. Follow-up studies using NHANES data were made possible by creating linked mortality files and files based on Medicare and Medicaid data. See also National Archive of Computerized Data on Aging References External links Official website page for NHANES 1999-2000 DSDR page for NHANES 2001-2002 DSDR page for NHANES 2003-2004 DSDR page for NHANES 2005-2006 DSDR page for NHANES 2007-2008 Validity of U.S. Nutritional Surveillance: National Health and Nutrition Examination Survey Caloric Energy Intake Data, 1971–2010 Centers for Disease Control and Prevention Gerontology Health surveys
National Health and Nutrition Examination Survey
Biology
470
2,634,856
https://en.wikipedia.org/wiki/Enzyme%20assay
Enzyme assays are laboratory methods for measuring enzymatic activity. They are vital for the study of enzyme kinetics and enzyme inhibition. Enzyme units The quantity or concentration of an enzyme can be expressed in molar amounts, as with any other chemical, or in terms of activity in enzyme units. Enzyme activity Enzyme activity is a measure of the quantity of active enzyme present and is thus dependent on various physical conditions, which should be specified. It is calculated using the following formula: $a = r \times V$, where $a$ is the enzyme activity (the moles of substrate converted per unit time), $r$ is the rate of the reaction and $V$ is the reaction volume. The SI unit is the katal, 1 katal = 1 mol s−1 (mole per second), but this is an excessively large unit. A more practical and commonly used value is enzyme unit (U) = 1 μmol min−1 (micromole per minute). 1 U corresponds to 16.67 nanokatals. Enzyme activity as given in katal generally refers to that of the assumed natural target substrate of the enzyme. Enzyme activity can also be given as that of certain standardized substrates, such as gelatin, then measured in gelatin digesting units (GDU), or milk proteins, then measured in milk clotting units (MCU). The units GDU and MCU are based on how fast one gram of the enzyme will digest gelatin or milk proteins, respectively. 1 GDU approximately equals 1.5 MCU. An increased amount of substrate will increase the rate of reaction with enzymes; however, once past a certain point, the rate of reaction will level out because the number of active sites available has stayed constant. Specific activity The specific activity of an enzyme is another common unit. This is the activity of an enzyme per milligram of total protein (expressed in μmol min−1 mg−1). Specific activity gives a measurement of enzyme purity in the mixture. It is the micromoles of product formed by an enzyme in a given amount of time (minutes) under given conditions per milligram of total protein. Specific activity is equal to the rate of reaction multiplied by the volume of reaction divided by the mass of total protein: $\text{specific activity} = \frac{r \times V}{m_{\text{protein}}}$. The SI unit is katal/kg, but a more practical unit is μmol/(mg·min). Specific activity is a measure of enzyme processivity (the capability of enzyme to be processed), at a specific (usually saturating) substrate concentration, and is usually constant for a pure enzyme. An active site titration process can be done for the elimination of errors arising from differences in cultivation batches and/or misfolded enzyme and similar issues. This is a measure of the amount of active enzyme, calculated by e.g. titrating the amount of active sites present by employing an irreversible inhibitor. The specific activity should then be expressed as μmol min−1 mg−1 active enzyme. If the molecular weight of the enzyme is known, the turnover number, or μmol product per second per μmol of active enzyme, can be calculated from the specific activity. The turnover number can be visualized as the number of times each enzyme molecule carries out its catalytic cycle per second. Related terminology The rate of a reaction is the concentration of substrate disappearing (or product produced) per unit time (mol L−1 s−1). The % purity is 100% × (specific activity of enzyme sample / specific activity of pure enzyme). The impure sample has lower specific activity because some of the mass is not actually enzyme. If the specific activity of 100% pure enzyme is known, then an impure sample will have a lower specific activity, allowing the purity to be calculated. 
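The unit relationships above (1 U = 1 μmol min−1, 1 U ≈ 16.67 nkat) and the definition of specific activity are straightforward to encode; the sketch below is an illustrative calculator, with all function names invented here.

```python
def activity_katal(rate_mol_per_L_s: float, volume_L: float) -> float:
    """Enzyme activity a = r * V; in katal (mol/s) when inputs are SI."""
    return rate_mol_per_L_s * volume_L

def katal_to_enzyme_units(a_katal: float) -> float:
    """1 U = 1 umol/min = (1e-6 / 60) mol/s, so 1 kat = 6e7 U."""
    return a_katal * 6.0e7

def specific_activity_u_per_mg(activity_u: float, protein_mg: float) -> float:
    """Specific activity in umol min^-1 mg^-1 of total protein."""
    return activity_u / protein_mg

a = activity_katal(2.0e-6, 0.001)             # 2 uM/s in a 1 mL reaction -> 2e-9 kat
print(katal_to_enzyme_units(a))               # 0.12 U
print(specific_activity_u_per_mg(0.12, 0.5))  # 0.24 U/mg for 0.5 mg total protein
```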
Types of assays All enzyme assays measure either the consumption of substrate or production of product over time. A large number of different methods of measuring the concentrations of substrates and products exist and many enzymes can be assayed in several different ways. Biochemists usually study enzyme-catalysed reactions using four types of experiments: Initial rate experiments. When an enzyme is mixed with a large excess of the substrate, the enzyme-substrate intermediate builds up in a fast initial transient. Then the reaction achieves steady-state kinetics, in which the enzyme-substrate intermediate remains approximately constant over time and the reaction rate changes relatively slowly. Rates are measured for a short period after the attainment of the quasi-steady state, typically by monitoring the accumulation of product with time. Because the measurements are carried out over a very short period and with a large excess of substrate, the amount of free substrate can be approximated by the initial amount of substrate. The initial rate experiment is the simplest to perform and analyze, being relatively free from complications such as back-reaction and enzyme degradation. It is therefore by far the most commonly used type of experiment in enzyme kinetics. Progress curve experiments. In these experiments, the kinetic parameters are determined from expressions for the species concentrations as a function of time. The concentration of the substrate or product is recorded in time after the initial fast transient and for a sufficiently long period to allow the reaction to approach equilibrium. Progress curve experiments were widely used in the early period of enzyme kinetics, but are less common now. Transient kinetics experiments. In these experiments, reaction behaviour is tracked during the initial fast transient as the intermediate reaches the steady-state kinetics period. These experiments are more difficult to perform than either of the above two classes because they require specialist techniques (such as flash photolysis of caged compounds) or rapid mixing (such as stopped-flow, quenched flow or continuous flow). Relaxation experiments. In these experiments, an equilibrium mixture of enzyme, substrate and product is perturbed, for instance by a temperature, pressure or pH jump, and the return to equilibrium is monitored. The analysis of these experiments requires consideration of the fully reversible reaction. Moreover, relaxation experiments are relatively insensitive to mechanistic details and are thus not typically used for mechanism identification, although they can be under appropriate conditions. Enzyme assays can be split into two groups according to their sampling method: continuous assays, where the assay gives a continuous reading of activity, and discontinuous assays, where samples are taken, the reaction stopped and then the concentration of substrates/products determined. Continuous assays Continuous assays are most convenient, with one assay giving the rate of reaction with no further work necessary. There are many different types of continuous assays. Spectrophotometric In spectrophotometric assays, you follow the course of the reaction by measuring a change in how much light the assay solution absorbs. If this light is in the visible region, you can actually see a change in the color of the assay; these are called colorimetric assays. 
The MTT assay, a redox assay using a tetrazolium dye as substrate, is an example of a colorimetric assay. UV light is often used, since the common coenzymes NADH and NADPH absorb UV light in their reduced forms, but do not in their oxidized forms. An oxidoreductase using NADH as a substrate could therefore be assayed by following the decrease in UV absorbance at a wavelength of 340 nm as it consumes the coenzyme. Direct versus coupled assays Even when the enzyme reaction does not result in a change in the absorbance of light, it can still be possible to use a spectrophotometric assay for the enzyme by using a coupled assay. Here, the product of one reaction is used as the substrate of another, easily detectable reaction. For example, the coupled assay for the enzyme hexokinase can be performed by coupling its production of glucose-6-phosphate to NADPH production, using glucose-6-phosphate dehydrogenase. Fluorometric Fluorescence is when a molecule emits light of one wavelength after absorbing light of a different wavelength. Fluorometric assays use a difference in the fluorescence of substrate from product to measure the enzyme reaction. These assays are in general much more sensitive than spectrophotometric assays, but can suffer from interference caused by impurities and the instability of many fluorescent compounds when exposed to light. An example of these assays is again the use of the nucleotide coenzymes NADH and NADPH. Here, the reduced forms are fluorescent and the oxidised forms non-fluorescent. Oxidation reactions can therefore be followed by a decrease in fluorescence and reduction reactions by an increase. Synthetic substrates that release a fluorescent dye in an enzyme-catalyzed reaction are also available, such as 4-methylumbelliferyl-β-D-galactoside for assaying β-galactosidase or 4-methylumbelliferyl-butyrate for assaying Candida rugosa lipase. Calorimetric Calorimetry is the measurement of the heat released or absorbed by chemical reactions. These assays are very general, since many reactions involve some change in heat and with use of a microcalorimeter, not much enzyme or substrate is required. These assays can be used to measure reactions that are impossible to assay in any other way. Chemiluminescent Chemiluminescence is the emission of light by a chemical reaction. Some enzyme reactions produce light and this can be measured to detect product formation. These types of assay can be extremely sensitive, since the light produced can be captured by photographic film over days or weeks, but can be hard to quantify, because not all the light released by a reaction will be detected. The detection of horseradish peroxidase by enhanced chemiluminescence (ECL) is a common method of detecting antibodies in western blotting. Another example is the enzyme luciferase, which is found in fireflies and naturally produces light from its substrate luciferin. Light scattering Static light scattering measures the product of weight-averaged molar mass and concentration of macromolecules in solution. Given a fixed total concentration of one or more species over the measurement time, the scattering signal is a direct measure of the weight-averaged molar mass of the solution, which will vary as complexes form or dissociate. Hence the measurement quantifies the stoichiometry of the complexes as well as kinetics. Light scattering assays of protein kinetics are a very general technique that does not require an enzyme. 
Microscale thermophoresis Microscale thermophoresis (MST) measures the size, charge and hydration entropy of molecules/substrates at equilibrium. The thermophoretic movement of a fluorescently labeled substrate changes significantly as it is modified by an enzyme. This enzymatic activity can be measured with high time resolution in real time. The material consumption of the all-optical MST method is very low: only 5 μl of sample volume and a 10 nM enzyme concentration are needed to measure the enzymatic rate constants for activity and inhibition. MST allows analysts to measure the modification of two different substrates at once (multiplexing) if both substrates are labeled with different fluorophores. Thus substrate competition experiments can be performed. Discontinuous assays Discontinuous assays are when samples are taken from an enzyme reaction at intervals and the amount of product production or substrate consumption is measured in these samples. Radiometric Radiometric assays measure the incorporation of radioactivity into substrates or its release from substrates. The radioactive isotopes most frequently used in these assays are 14C, 32P, 35S and 125I. Since radioactive isotopes can allow the specific labelling of a single atom of a substrate, these assays are both extremely sensitive and specific. They are frequently used in biochemistry and are often the only way of measuring a specific reaction in crude extracts (the complex mixtures of enzymes produced when you lyse cells). Radioactivity is usually measured in these procedures using a scintillation counter. Chromatographic Chromatographic assays measure product formation by separating the reaction mixture into its components by chromatography. This is usually done by high-performance liquid chromatography (HPLC), but can also use the simpler technique of thin layer chromatography. Although this approach can need a lot of material, its sensitivity can be increased by labelling the substrates/products with a radioactive or fluorescent tag. Assay sensitivity has also been increased by switching protocols to improved chromatographic instruments (e.g. ultra-high pressure liquid chromatography) that operate at pump pressures a few-fold higher than HPLC instruments (see High-performance liquid chromatography#Pump pressure). Factors affecting assays Several factors affect the assay outcome, and a recent review summarizes the various parameters that need to be monitored to keep an assay up and running. Salt Concentration Most enzymes cannot tolerate extremely high salt concentrations. The ions interfere with the weak ionic bonds of proteins. Typical enzymes are active in salt concentrations of 1–500 mM. As usual there are exceptions, such as the halophilic algae and halophilic bacteria. Effects of Temperature All enzymes work within a range of temperature specific to the organism. Increases in temperature generally lead to increases in reaction rates. There is a limit to the increase because higher temperatures lead to a sharp decrease in reaction rates. This is due to the denaturation (alteration) of protein structure resulting from the breakdown of the weak ionic and hydrogen bonding that stabilize the three-dimensional structure of the enzyme active site. The "optimum" temperature for human enzymes is usually between 35 and 40 °C. The average temperature for humans is 37 °C. Human enzymes start to denature quickly at temperatures above 40 °C. Enzymes from thermophilic archaea found in hot springs are stable up to 100 °C. 
However, the idea of an "optimum" rate of an enzyme reaction is misleading, as the rate observed at any temperature is the product of two rates, the reaction rate and the denaturation rate. If you were to use an assay measuring activity for one second, it would give high activity at high temperatures; however, if you were to use an assay measuring product formation over an hour, it would give low activity at these temperatures. Effects of pH Most enzymes are sensitive to pH and have specific ranges of activity. All have an optimum pH. The pH can stop enzyme activity by denaturing (altering) the three-dimensional shape of the enzyme by breaking ionic and hydrogen bonds. Most enzymes function between a pH of 6 and 8; however, pepsin in the stomach works best at a pH of 2 and trypsin at a pH of 8. Enzyme Saturation Increasing the substrate concentration increases the rate of reaction (enzyme activity). However, enzyme saturation limits reaction rates. An enzyme is saturated when the active sites of all the molecules are occupied most of the time. At the saturation point, the reaction will not speed up, no matter how much additional substrate is added. The graph of the reaction rate will plateau. Level of crowding Large amounts of macromolecules in a solution will alter the rates and equilibrium constants of enzyme reactions, through an effect called macromolecular crowding. List of enzyme assays MTT assay Fluorescein diacetate hydrolysis para-Nitrophenylphosphate See also Restriction enzyme DNase footprinting assay Enzyme kinetics References External links Protein methods assay Chemical pathology Clinical pathology Pathology
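The saturation behaviour described under "Enzyme Saturation" is conventionally modelled by the Michaelis–Menten equation, v = Vmax·[S]/(Km + [S]): the rate approaches the Vmax plateau as the substrate concentration rises far above Km. Michaelis–Menten kinetics is not discussed explicitly in this article, so the sketch below is a standard-textbook illustration rather than a detail of any particular assay above.

```python
def michaelis_menten_rate(s: float, vmax: float, km: float) -> float:
    """v = Vmax * [S] / (Km + [S]): hyperbolic saturation of an enzyme."""
    return vmax * s / (km + s)

# With [S] expressed in multiples of Km, the plateau at Vmax is evident:
for s in (0.1, 1.0, 10.0, 100.0, 1000.0):
    print(f"[S] = {s:>6} Km -> v/Vmax = {michaelis_menten_rate(s, 1.0, 1.0):.3f}")
```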
Enzyme assay
Chemistry,Biology
3,211
78,110,406
https://en.wikipedia.org/wiki/Ta3a
Ta3a (Delta-myrmicitoxin-Ta3a) is a vertebrate-selective neurotoxin found in the venom of the African ant species Tetramorium africanum. It is known to cause intense, long-lasting pain by targeting voltage-gated sodium channels in peripheral sensory neurons. Ta3a strongly reduces sodium channel inactivation, leading to heightened neuronal excitability. Chemistry Ta3a belongs to the aculeatoxin family of peptides, found in the venom of Hymenoptera. It is a 29-residue peptide, which is predicted to have an alpha-helical structure (amino acid sequence: LAPIFALLLLSGLFSLPALQHYIEKNYIN). Ta3a is similar to poneratoxin, a voltage-gated sodium channel toxin found in the ant species Paraponera clavata, as well as to other uncharacterised peptides from various other ant species. Target Ta3a targets voltage-gated sodium channels such as Nav1.6, Nav1.7 and Nav1.8, which are involved in peripheral pain signaling. The half-maximal effective concentration (EC50) of Ta3a for the human Nav1.7 channel is 30 ± 9 nM. Nav1.6 is similarly sensitive to Ta3a with an EC50 of 25 ± 2 nM, while Nav1.8 is less sensitive with an EC50 of 331 ± 58 nM. Mode of action Ant venom Nav toxins are distinct from other Nav modulators, but their effects more closely resemble those caused by small hydrophobic alkaloids. These peptides bind to the S2 voltage-sensing domain of Nav channels in their "activated" conformation, thereby maintaining channel activity. Ta3a exerts a significant regulatory effect on voltage-gated sodium channels, and its interaction with the Nav1.7 subtype is the one that has been studied in the most detail. Ta3a prolongs the duration that the channels remain active and increases the likelihood of the channels being open. Additionally, Ta3a shifts the activation of Nav1.7 to more negative (hyperpolarized) potentials, allowing Nav1.7 channels to remain active for extended periods even in the absence of strong depolarising stimuli. These prolonged, non-inactivating currents cause significant changes in the cell's membrane potential, due to the continuous sodium influx. Such prolonged sodium channel activation also permits sodium currents to persist at negative membrane potentials. Toxicity The hallmark of Ta3a toxicity is acute pain, which is the most immediate and prominent symptom of Ta3a exposure. This is due to the excessive activation of the Nav channels, which play a crucial role in pain transmission by enhancing the propagation of nerve signals, particularly pain-related signals. Treatment Since Ta3a primarily exerts its effects by overactivating the Nav channels, sodium channel blockers represent a potential therapeutic approach. For instance, tetrodotoxin (TTX), a sodium channel blocker, has been shown to effectively inhibit the persistent currents induced by Ta3a in experimental settings. However, no studies have yet been conducted on specific treatment methods for Ta3a poisoning. References Insect toxins Sodium channel openers Neurotoxins Ion channel toxins
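Given the EC50 values quoted above, a standard concentration-response (Hill) curve indicates how the subtype selectivity plays out at a given toxin concentration. The Hill slope of 1 is an assumption made here purely for illustration; the article does not report slope values.

```python
def fraction_responding(conc_nM: float, ec50_nM: float, hill: float = 1.0) -> float:
    """Hill equation: response = C^h / (C^h + EC50^h); hill = 1 is assumed."""
    return conc_nM**hill / (conc_nM**hill + ec50_nM**hill)

# EC50 values (nM) as reported in the article
for subtype, ec50 in (("Nav1.6", 25.0), ("Nav1.7", 30.0), ("Nav1.8", 331.0)):
    print(subtype, round(fraction_responding(100.0, ec50), 2))  # at 100 nM Ta3a
```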
Ta3a
Chemistry
701
93,831
https://en.wikipedia.org/wiki/Roundup%20%28herbicide%29
Roundup is a brand name of herbicide originally produced by Monsanto, which Bayer acquired in 2018. Prior to the late 2010s, its formulations were broad-spectrum glyphosate-based herbicides. As of 2009, sales of Roundup herbicides still represented about 10 percent of Monsanto's revenue despite competition from Chinese producers of other glyphosate-based herbicides. The overall Roundup line of products represented about half of Monsanto's yearly revenue in 2009. The product is marketed to consumers by Scotts Miracle-Gro Company. In the late 2010s, herbicides not containing glyphosate were also sold under the Roundup brand. Monsanto patented the herbicidal use of glyphosate and derivatives in 1971. Commercial sale and usage in significant quantities started in 1974. It retained exclusive rights to glyphosate in the US until its US patent expired in September 2000; in other countries the patent expired earlier. The Roundup trademark is registered with the US Patent and Trademark Office and still extant. However, glyphosate is no longer under patent, so similar products use it as an active ingredient. The main active ingredient of Roundup is the isopropylamine salt of glyphosate. Another ingredient of Roundup is the surfactant POEA (polyethoxylated tallow amine). Monsanto also produced seeds which grow into plants genetically engineered to be tolerant to glyphosate, which are known as Roundup Ready crops. The genes contained in these seeds are patented. Such crops allow farmers to use glyphosate as a post-emergence herbicide against most broadleaf and cereal weeds. The health impacts of the product as well as its effects on the environment have been at the center of substantial legal and scientific controversies. In June 2020, Bayer agreed to pay $9.6 billion to settle tens of thousands of claims, mostly alleging that glyphosate-based Roundup had caused cancer. Composition Glyphosate-based formulations may contain a number of adjuvants, the identities of which may be proprietary. Surfactants are used in herbicide formulations as wetting agents, to maximize coverage and aid penetration of the herbicide(s) through plant leaves. As agricultural spray adjuvants, surfactants may be pre-mixed into commercial formulations or they may be purchased separately and mixed on-site. Polyethoxylated tallow amine (POEA) is a surfactant used in the original Roundup formulation and was commonly used in 2015. Different versions of Roundup have included different percentages of POEA. A 1997 US government report said that Roundup is 15% POEA while Roundup Pro is 14.5%. Since POEA is more toxic to fish and amphibians than glyphosate alone, POEA is not allowed in aquatic formulations. Non-glyphosate formulations of Roundup are typically used for lawns that glyphosate would otherwise kill. Both types of products being sold under the Roundup brand name can be a source of confusion for consumers. Active ingredients for non-glyphosate formulations of Roundup can include MCPA, quinclorac, dicamba, sulfentrazone, penoxsulam, and 2,4-D. Acute toxicity The lethal dose of different glyphosate-based formulations varies, especially with respect to the surfactants used. Formulations intended for terrestrial use that include the surfactant polyethoxylated tallow amine (POEA) can be more toxic than other formulations for aquatic species. 
Due to the variety in available formulations, including five different glyphosate salts and different combinations of inert ingredients, it is difficult to determine how much surfactants contribute to the overall toxicity of each formulation. Independent scientific reviews and regulatory agencies have repeatedly concluded that glyphosate-based herbicides do not lead to a significant risk for human or environmental health when the product label is properly followed. Human The acute oral toxicity for mammals is low, but death has been reported after deliberate overdose of concentrated Roundup. The surfactants in glyphosate formulations can increase the relative acute toxicity of the formulation. Surfactants generally do not, however, cause synergistic effects (as opposed to additive effects) that increase the acute toxicity of glyphosate within a formulation. The surfactant POEA is not considered an acute toxicity hazard, and has an oral toxicity similar to that of vitamin A and lower than that of aspirin. Deliberate ingestion of Roundup ranging from 85 to 200 ml (of 41% solution) has resulted in death within hours of ingestion, although it has also been ingested in quantities as large as 500 ml with only mild or moderate symptoms. Consumption of over 85 ml of concentrated product is likely to cause serious symptoms in adults, including burns due to corrosive effects as well as kidney and liver damage. More severe cases lead to "respiratory distress, impaired consciousness, pulmonary edema, infiltration on chest X-ray, shock, arrhythmias, kidney failure requiring haemodialysis, metabolic acidosis, and hyperkalaemia" and death is often preceded by bradycardia and ventricular arrhythmias. Skin exposure can cause irritation, and photocontact dermatitis has been occasionally reported. Severe skin burns are very rare. In a 2017 risk assessment, the European Chemicals Agency (ECHA) wrote: "There is very limited information on skin irritation in humans. Where skin irritation has been reported, it is unclear whether it is related to glyphosate or co-formulants in glyphosate-containing herbicide formulations." The ECHA concluded that available human data was insufficient to support classification for skin corrosion or irritation. Inhalation is a minor route of exposure, but spray mist may cause oral or nasal discomfort, an unpleasant taste in the mouth, or tingling and irritation in the throat. Eye exposure may lead to mild conjunctivitis. Superficial corneal injury is possible if irrigation is delayed or inadequate. Aquatic Glyphosate formulations with POEA, such as Roundup, are not approved for aquatic use due to aquatic organism toxicity. Due to the presence of POEA, glyphosate formulations only allowed for terrestrial use are more toxic for amphibians and fish than glyphosate alone. Terrestrial glyphosate formulations that include the surfactants POEA and MON 0818 (75% POEA) may have negative impacts on various aquatic organisms like protozoa, mussels, crustaceans, frogs and fish. Aquatic organism exposure risk to terrestrial formulations with POEA is limited to drift or temporary water pockets. While laboratory studies can show effects of glyphosate formulations on aquatic organisms, similar observations rarely occur in the field when instructions on the herbicide label are followed. Studies in a variety of amphibians have shown the toxicity of products containing POEA to amphibian larvae. 
These effects include interference with gill morphology and mortality from either the loss of osmotic stability or asphyxiation. At sub-lethal concentrations, exposure to POEA or glyphosate/POEA formulations has been associated with delayed development, accelerated development, reduced size at metamorphosis, developmental malformations of the tail, mouth, eye and head, histological indications of intersex and symptoms of oxidative stress. Glyphosate-based formulations can cause oxidative stress in bullfrog tadpoles. The use of glyphosate-based pesticides is not considered the major cause of amphibian decline, the bulk of which occurred prior to widespread use of glyphosate or in pristine tropical areas with minimal glyphosate exposure. A 2000 review of the toxicological data on Roundup concluded that "for terrestrial uses of Roundup minimal acute and chronic risk was predicted for potentially exposed nontarget organisms". It also concluded that there were some risks to aquatic organisms exposed to Roundup in shallow water. Bees Roundup Ready‐To‐Use, Roundup No Glyphosate, and Roundup ProActive have all been found to cause significant mortality in bumblebees when sprayed directly on them. It has been hypothesized that this is due to surfactants in the formulations blocking the tracheal system of the bees. Carcinogenicity There is limited evidence that human cancer risk might increase as a result of occupational exposure to large amounts of glyphosate, such as agricultural work, but no good evidence of such a risk from home use, such as in domestic gardening. The consensus among national pesticide regulatory agencies and scientific organizations is that labeled uses of glyphosate have demonstrated no evidence of human carcinogenicity. Organizations such as the Joint FAO/WHO Meeting on Pesticide Residues and the European Commission, Canadian Pest Management Regulatory Agency, and the German Federal Institute for Risk Assessment have concluded that there is no evidence that glyphosate poses a carcinogenic or genotoxic risk to humans. The final assessment of the Australian Pesticides and Veterinary Medicines Authority in 2017 was that "glyphosate does not pose a carcinogenic risk to humans". The EPA has evaluated the carcinogenic potential of glyphosate multiple times since 1986. In 1986, glyphosate was initially classified as Group C: "Possible Human Carcinogen", but later recommended as Group D: "Not Classifiable as to Human Carcinogenicity" due to lack of statistical significance in previously examined rat tumor studies. In 1991, it was classified as Group E: "Evidence of Non-Carcinogenicity for Humans", and in 2015 and 2017, "Not Likely to be Carcinogenic to Humans". One international scientific organization, the International Agency for Research on Cancer (IARC), classified glyphosate in Group 2A, "probably carcinogenic to humans" in 2015. The variation in classification between this agency and others has been attributed to "use of different data sets" and "methodological differences in the evaluation of the available evidence". In 2017, California environmental regulators listed glyphosate as "known to the state to cause cancer." The state's Office of Environmental Health Hazard Assessment made the decision based in part on the report from the IARC. State Proposition 65 requires the state office to add substances the international agency deems carcinogenic in humans or laboratory animals to a state list of cancer-causing items. 
Legal In the ten months following Bayer's June 2018 acquisition of Monsanto, its stock lost 46% of its value because of investor apprehension concerning the 11,200 lawsuits filed against its subsidiary. As of 2023, around 165,000 claims had been made against Bayer, mostly alleging that Roundup had caused cancer. Bayer has settled tens of thousands of those claims and has agreed to pay billions in damages, but, as of 2023, more than 50,000 similar claims were still pending. In December 2023, Bayer won a case against a claim that Roundup had caused a man's cancer. In a statement they said the outcome was "consistent with the evidence in this case that Roundup does not cause cancer and is not responsible for the plaintiff's illness". At that time, Bayer had previously won 10 of 15 such cases. Most cases claiming injury from Roundup are based on a failure-to-warn theory of liability, meaning Monsanto is liable for a plaintiff's injury because it failed to warn the plaintiff that Roundup can cause cancer. The United States Court of Appeals for the Ninth Circuit in 2021, and the United States Court of Appeals for the Eleventh Circuit in early 2024, held that such state-law failure-to-warn claims were not preempted by the Federal Insecticide, Fungicide, and Rodenticide Act ("FIFRA"). In August 2024, however, the United States Court of Appeals for the Third Circuit held that FIFRA does preempt state-law failure-to-warn claims involving Roundup, expressly recognizing that its holding conflicts with that of the Ninth and Eleventh Circuits. This conflict among the Third, Ninth and Eleventh Circuits creates a heightened potential that the United States Supreme Court will review the Third Circuit's decision so that it can resolve the conflict among the Courts of Appeals. Cancer cases As of October 30, 2019, there were over 42,000 plaintiffs who said that glyphosate herbicides caused their cancer. After the IARC classified glyphosate as "probably carcinogenic to humans" in March 2015, many state and federal lawsuits were filed in the United States. Early on, over 300 of them were consolidated into a multidistrict litigation called In re: RoundUp Products Liability. On August 10, 2018, Dewayne Johnson, who has non-Hodgkin's lymphoma, was awarded $289 million in damages (later cut to $78 million on appeal, then reduced to $21 million after another appeal) after a jury in San Francisco found that Monsanto had failed to adequately warn consumers of cancer risks posed by the herbicide. Johnson had routinely used two different glyphosate formulations in his work as a groundskeeper, RoundUp and another Monsanto product called Ranger Pro. The jury's verdict addressed the question of whether Monsanto knowingly failed to warn consumers that RoundUp could be harmful, but not whether RoundUp causes cancer. Court documents from the case alleged the company's efforts to influence scientific research via ghostwriting. In January 2019, Costco decided to stop carrying Roundup or other glyphosate-based herbicides. The decision was reportedly influenced in part by the public court cases. In March 2019, a man was awarded $80 million (later cut to $26 million on appeal) in a lawsuit claiming Roundup was a substantial factor in his cancer. U.S. 
District Judge Vince Chhabria stated that a punitive award was appropriate because the evidence "easily supported a conclusion that Monsanto was more concerned with tamping down safety inquiries and manipulating public opinion than it was with ensuring its product is safe." Chhabria stated that there was evidence on both sides as to whether glyphosate causes cancer, and that the behavior of Monsanto showed "a lack of concern about the risk that its product might be carcinogenic." On May 13, 2019, a jury in California ordered Bayer to pay a couple $2 billion in damages (later cut to $87 million on appeal) after finding that the company had failed to adequately inform consumers of the possible carcinogenicity of Roundup. On December 19, 2019, it was announced that Timothy Litzenburg, the lawyer for the RoundUp Virginia plaintiffs, had been charged with extortion after offering to stop searching for more plaintiffs if he was paid a $200 million consulting fee by a manufacturer of glyphosate. Litzenburg and his partner Daniel Kincheloe pleaded guilty to the charges and were sentenced to two years and one year in prison, respectively. In June 2020, Bayer agreed to settle over a hundred thousand Roundup lawsuits, agreeing to pay $8.8 to $9.6 billion to settle those claims and $1.5 billion for any future claims. The settlement does not include three cases that have already gone to jury trials and are being appealed. However, the settlement was ultimately not allowed to cover future cases. False advertising In 1996, Monsanto was accused of false and misleading advertising of glyphosate products, prompting a lawsuit by the New York State attorney general. Monsanto had made claims that its spray-on glyphosate-based herbicides, including Roundup, were safer than table salt and "practically non-toxic" to mammals, birds, and fish, "environmentally friendly", and "biodegradable". Citing avoidance of costly litigation, Monsanto settled the case, admitting no wrongdoing, and agreeing to remove the offending advertising claims in New York State. Environmental and consumer rights campaigners brought a case in France in 2001 accusing Monsanto of presenting Roundup as "biodegradable" and claiming that it "left the soil clean" after use; glyphosate, Roundup's main ingredient, was classed by the European Union as "dangerous for the environment" and "toxic for aquatic organisms". In January 2007, Monsanto was convicted of false advertising and fined 15,000 euros. The result was confirmed in 2009. On 27 March 2020, Bayer settled claims in a proposed class action alleging that it falsely advertised that the active ingredient in Roundup Weed & Grass Killer only affects plants, with a $39.5 million deal that included changing the labels on its products. In June 2023, Bayer reached a $6.9 million settlement agreement with the New York attorney general, settling false advertising allegations concerning the safety of Roundup. Falsification of test results Some tests originally conducted on glyphosate by contractors were later found to have been fraudulent, along with tests conducted on other pesticides. Concerns were raised about toxicology tests conducted by Industrial Bio-Test Laboratories in the 1970s, and Craven Laboratories was found to have fraudulently analysed samples for residues of glyphosate in 1991. Monsanto has stated that the studies have since been repeated. 
Ban in France In January 2019, Roundup Pro 360 was banned in France following a Lyon court ruling that regulator ANSES had not given due weight to safety concerns when it approved the product in March 2017. The ban went into effect immediately. The court's decision cited research by the IARC, based in Lyon. Use with genetically modified crops Monsanto first developed Roundup in the 1970s. End-users initially used it in a similar way to paraquat and diquat – as a non-selective herbicide. Application of glyphosate-based herbicides to row crops resulted in problems with crop damage and kept them from being widely used for this purpose. In the United States, use of Roundup experienced rapid growth following the commercial introduction of a glyphosate-resistant soybean in 1996. "Roundup Ready" became Monsanto's trademark for its patented line of crop seeds that are resistant to Roundup. Between 1990 and 1996, sales of Roundup increased around 20% per year. The product came to be used in over 160 countries. Roundup is used most heavily on corn, soy, and cotton crops that have been genetically modified to withstand the chemical, but glyphosate has also been used to treat approximately 5 million acres in California for crops like almond, peach, cantaloupe, onion, cherry, sweet corn, and citrus, although the product is only applied directly to certain varieties of sweet corn. See also 2,4-Dichlorophenoxyacetic acid Environmental impact of pesticides Health effects of pesticides Integrated pest management Pesticide regulation in the United States Pesticides in the United States References Further reading Baccara, Mariagiovanna, et al. "Monsanto's Roundup", NYU Stern School of Business: August 2001, Revised July 14, 2003. Pease W S et al. (1993) "Preventing pesticide-related illness in California agriculture: Strategies and priorities". Environmental Health Policy Program Report. Berkeley, CA: University of California. School of Public Health. California Policy Seminar. Wang Y, Jaw C and Chen Y (1994) "Accumulation of 2,4-D and glyphosate in fish and water hyacinth". Water Air Soil Pollut. 74:397–403 External links EPA's Integrated Risk Information System entry for glyphosate—The main ingredient in Roundup EPA's ground & drinking water consumer factsheet for glyphosate; 1976 introductions Endocrine disruptors Herbicides
Roundup (herbicide)
Chemistry,Biology
4,166
6,185,898
https://en.wikipedia.org/wiki/Ridge%20detection
In image processing, ridge detection is the attempt, via software, to locate ridges in an image, defined as curves whose points are local maxima of the function, akin to geographical ridges. For a function of N variables, its ridges are a set of curves whose points are local maxima in N − 1 dimensions. In this respect, the notion of ridge points extends the concept of a local maximum. Correspondingly, the notion of valleys for a function can be defined by replacing the condition of a local maximum with the condition of a local minimum. The union of ridge sets and valley sets, together with a related set of points called the connector set, form a connected set of curves that partition, intersect, or meet at the critical points of the function. This union of sets together is called the function's relative critical set. Ridge sets, valley sets, and relative critical sets represent important geometric information intrinsic to a function. In a way, they provide a compact representation of important features of the function, but the extent to which they can be used to determine global features of the function is an open question. The primary motivation for the creation of ridge detection and valley detection procedures has come from image analysis and computer vision and is to capture the interior of elongated objects in the image domain. Ridge-related representations in terms of watersheds have been used for image segmentation. There have also been attempts to capture the shapes of objects by graph-based representations that reflect ridges, valleys and critical points in the image domain. Such representations may, however, be highly noise sensitive if computed at a single scale only. Because scale-space theoretic computations involve convolution with the Gaussian (smoothing) kernel, it has been hoped that use of multi-scale ridges, valleys and critical points in the context of scale space theory should allow for a more robust representation of objects (or shapes) in the image. In this respect, ridges and valleys can be seen as a complement to natural interest points or local extremal points. With appropriately defined concepts, ridges and valleys in the intensity landscape (or in some other representation derived from the intensity landscape) may form a scale invariant skeleton for organizing spatial constraints on local appearance, with a number of qualitative similarities to the way Blum's medial axis transform provides a shape skeleton for binary images. In typical applications, ridge and valley descriptors are often used for detecting roads in aerial images and for detecting blood vessels in retinal images or three-dimensional magnetic resonance images. Differential geometric definition of ridges and valleys at a fixed scale in a two-dimensional image Let $f(x, y)$ denote a two-dimensional function, and let $L$ be the scale-space representation of $f$ obtained by convolving $f$ with a Gaussian function $g(x, y; t)$. Furthermore, let $L_{pp}$ and $L_{qq}$ denote the eigenvalues of the Hessian matrix of the scale-space representation $L$, obtained with a coordinate transformation (a rotation) applied to local directional derivative operators $\partial_p$ and $\partial_q$, where p and q are coordinates of the rotated coordinate system. It can be shown that the mixed derivative $L_{pq}$ in the transformed coordinate system is zero if the rotation angle is chosen so that the $p$ and $q$ directions align with the eigendirections of the Hessian matrix. 
Then, a formal differential geometric definition of the ridges of $f$ at a fixed scale $t$ can be expressed as the set of points that satisfy $L_p = 0$, $L_{pp} \leq 0$ and $|L_{pp}| \geq |L_{qq}|$. Correspondingly, the valleys of $f$ at scale $t$ are the set of points $L_q = 0$, $L_{qq} \geq 0$ and $|L_{qq}| \geq |L_{pp}|$. In terms of a coordinate system $(u, v)$ with the $v$ direction parallel to the image gradient, where $L_u = 0$ and $L_v = \sqrt{L_x^2 + L_y^2}$, it can be shown that this ridge and valley definition can instead be equivalently written as $L_{uv} = 0$ and $L_{uu}^2 - L_{vv}^2 \geq 0$, where $L_{uv}$, $L_{uu}$ and $L_{vv}$ denote second-order directional derivatives in the $(u, v)$ system, and the sign of $L_{uu}$ determines the polarity: $L_{uu} < 0$ for ridges and $L_{uu} > 0$ for valleys. Computation of variable scale ridges from two-dimensional images A main problem with the fixed scale ridge definition presented above is that it can be very sensitive to the choice of the scale level. Experiments show that the scale parameter of the Gaussian pre-smoothing kernel must be carefully tuned to the width of the ridge structure in the image domain, in order for the ridge detector to produce a connected curve reflecting the underlying image structures. To handle this problem in the absence of prior information, the notion of scale-space ridges has been introduced, which treats the scale parameter as an inherent property of the ridge definition and allows the scale levels to vary along a scale-space ridge. Moreover, the concept of a scale-space ridge also allows the scale parameter to be automatically tuned to the width of the ridge structures in the image domain, in fact as a consequence of a well-stated definition. In the literature, a number of different approaches have been proposed based on this idea. Let $R$ denote a measure of ridge strength (to be specified below). Then, for a two-dimensional image, a scale-space ridge is the set of points that satisfy $L_p = 0$, $L_{pp} \leq 0$, $\partial_t R = 0$ and $\partial_{tt} R \leq 0$, where $t$ is the scale parameter in the scale-space representation. Similarly, a scale-space valley is the set of points that satisfy $L_q = 0$, $L_{qq} \geq 0$, $\partial_t R = 0$ and $\partial_{tt} R \leq 0$. An immediate consequence of this definition is that for a two-dimensional image the concept of scale-space ridges sweeps out a set of one-dimensional curves in the three-dimensional scale-space, where the scale parameter is allowed to vary along the scale-space ridge (or the scale-space valley). The ridge descriptor in the image domain will then be a projection of this three-dimensional curve into the two-dimensional image plane, where the attribute scale information at every ridge point can be used as a natural estimate of the width of the ridge structure in the image domain in a neighbourhood of that point. In the literature, various measures of ridge strength have been proposed. When Lindeberg (1996, 1998) coined the term scale-space ridge, he considered three measures of ridge strength: The main principal curvature $L_{pp,\gamma} = t^{\gamma} L_{pp}$, expressed in terms of $\gamma$-normalized derivatives with $\partial_{\xi} = t^{\gamma/2}\,\partial_x$. The square of the $\gamma$-normalized square eigenvalue difference $\left(L_{pp,\gamma}^2 - L_{qq,\gamma}^2\right)^2$. The square of the $\gamma$-normalized eigenvalue difference $\left(L_{pp,\gamma} - L_{qq,\gamma}\right)^2$. The notion of $\gamma$-normalized derivatives is essential here, since it allows the ridge and valley detector algorithms to be calibrated properly. By requiring that for a one-dimensional Gaussian ridge embedded in two (or three dimensions) the detection scale should be equal to the width of the ridge structure when measured in units of length (a requirement of a match between the size of the detection filter and the image structure it responds to), it follows that one should choose $\gamma = 3/4$. Out of these three measures of ridge strength, the first entity is a general purpose ridge strength measure with many applications such as blood vessel detection and road extraction. 
Nevertheless, the other measures have also been used in applications such as fingerprint enhancement, real-time hand tracking and gesture recognition, as well as for modelling local image statistics for detecting and tracking humans in images and video. There are also other closely related ridge definitions that make use of normalized derivatives with the implicit assumption of $\gamma = 1$; such approaches are developed in further detail in the associated literature. When detecting ridges with $\gamma = 1$, however, the detection scale will be twice as large as for $\gamma = 3/4$, resulting in more shape distortions and a lower ability to capture ridges and valleys with nearby interfering image structures in the image domain. History The notion of ridges and valleys in digital images was introduced by Haralick in 1983 and by Crowley concerning difference of Gaussians pyramids in 1984. The application of ridge descriptors to medical image analysis has been extensively studied by Pizer and his co-workers, resulting in their notion of M-reps. Ridge detection has also been furthered by Lindeberg with the introduction of $\gamma$-normalized derivatives and scale-space ridges defined from local maximization of the appropriately normalized main principal curvature of the Hessian matrix (or other measures of ridge strength) over space and over scale. These notions have later been developed with application to road extraction by Steger et al. and to blood vessel segmentation by Frangi et al., as well as to the detection of curvilinear and tubular structures by Sato et al. and Krissian et al. A review of several of the classical ridge definitions at a fixed scale, including relations between them, has been given by Koenderink and van Doorn. A review of vessel extraction techniques has been presented by Kirbas and Quek. Definition of ridges and valleys in N dimensions In its broadest sense, the notion of ridge generalizes the idea of a local maximum of a real-valued function. A point $x_0$ in the domain of a function $f$ is a local maximum of the function if there is a distance $\delta > 0$ with the property that if $x$ is within $\delta$ units of $x_0$, then $f(x) < f(x_0)$. It is well known that critical points, of which local maxima are just one type, are isolated points in a function's domain in all but the most unusual situations (i.e., the nongeneric cases). Consider relaxing the condition that $f(x) < f(x_0)$ for $x$ in an entire neighborhood of $x_0$ slightly, to require only that this hold on an $(n-1)$-dimensional subset. Presumably this relaxation allows the set of points which satisfy the criteria, which we will call the ridge, to have a single degree of freedom, at least in the generic case. This means that the set of ridge points will form a 1-dimensional locus, or a ridge curve. Notice that the above can be modified to generalize the idea to local minima and result in what one might call 1-dimensional valley curves. The following ridge definition follows the book by Eberly and can be seen as a generalization of some of the abovementioned ridge definitions. Let $U \subseteq \mathbb{R}^n$ be an open set, and let $f: U \to \mathbb{R}$ be smooth. Let $x_0 \in U$. Let $\nabla f(x_0)$ be the gradient of $f$ at $x_0$, and let $H(x_0)$ be the $n \times n$ Hessian matrix of $f$ at $x_0$. Let $\lambda_1 \leq \lambda_2 \leq \cdots \leq \lambda_n$ be the ordered eigenvalues of $H(x_0)$, and let $e_i$ be a unit eigenvector in the eigenspace for $\lambda_i$. (For this, one should assume that all the eigenvalues are distinct.) The point $x_0$ is a point on the 1-dimensional ridge of $f$ if the following conditions hold: $\lambda_{n-1} < 0$, and $\nabla f(x_0) \cdot e_i = 0$ for $i = 1, 2, \ldots, n-1$. This makes precise the concept that $f$ restricted to this particular $(n-1)$-dimensional subspace has a local maximum at $x_0$. 
This definition naturally generalizes to the k-dimensional ridge as follows: the point $x_0$ is a point on the $k$-dimensional ridge of $f$ if the following conditions hold: $\lambda_{n-k} < 0$, and $\nabla_{x_0} f \cdot e_i = 0$ for $i = 1, 2, \ldots, n-k$. In many ways, these definitions naturally generalize that of a local maximum of a function. Properties of maximal convexity ridges are put on a solid mathematical footing by Damon and Miller. Their properties in one-parameter families were established by Keller. Maximal scale ridge The following definition can be traced to Fritsch, who was interested in extracting geometric information about figures in two-dimensional greyscale images. Fritsch filtered his image with a "medialness" filter that gave him information analogous to "distance to the boundary" data in scale-space. Ridges of this image, once projected to the original image, were to be analogous to a shape skeleton (e.g., the Blum medial axis) of the original image. What follows is a definition for the maximal scale ridge of a function of three variables, one of which is a "scale" parameter. One thing that we want to be true in this definition is that if $(x, \sigma)$ is a point on this ridge, then the value of the function at the point is maximal in the scale dimension. Let $f(x, \sigma)$ be a smooth differentiable function on $U \subseteq \mathbb{R}^2 \times \mathbb{R}_{+}$. Then $(x, \sigma)$ is a point on the maximal scale ridge if and only if $\partial f / \partial \sigma = 0$ and $\partial^2 f / \partial \sigma^2 < 0$, and $\nabla f \cdot e_1 = 0$ and $e_1^{\mathrm{T}} H(f)\, e_1 < 0$, where $e_1$ is the unit eigenvector associated with the smallest eigenvalue of the spatial Hessian, consistent with the notation of the previous section. Relations between edge detection and ridge detection The purpose of ridge detection is usually to capture the major axis of symmetry of an elongated object, whereas the purpose of edge detection is usually to capture the boundary of the object. However, some literature on edge detection erroneously includes the notion of ridges into the concept of edges, which confuses the situation. In terms of definitions, there is a close connection between edge detectors and ridge detectors. With the formulation of non-maximum suppression as given by Canny, it holds that edges are defined as the points where the gradient magnitude assumes a local maximum in the gradient direction. Following a differential geometric way of expressing this definition, we can in the above-mentioned $(u, v)$ coordinate system state that the gradient magnitude of the scale-space representation, which is equal to the first-order directional derivative in the $v$-direction $L_v$, should have its first order directional derivative in the $v$-direction equal to zero, $\partial_v(L_v) = 0$, while the second-order directional derivative in the $v$-direction of $L_v$ should be negative, i.e., $\partial_{vv}(L_v) \le 0$. Written out as an explicit expression in terms of local partial derivatives $L_x$, $L_y$, ..., $L_{yyy}$, this edge definition can be expressed as the zero-crossing curves of the differential invariant $L_v^2 L_{vv} = L_x^2 L_{xx} + 2 L_x L_y L_{xy} + L_y^2 L_{yy} = 0$ that satisfy a sign-condition on the following differential invariant $L_v^3 L_{vvv} = L_x^3 L_{xxx} + 3 L_x^2 L_y L_{xxy} + 3 L_x L_y^2 L_{xyy} + L_y^3 L_{yyy} \le 0$ (see the article on edge detection for more information). Notably, the edges obtained in this way are the ridges of the gradient magnitude. See also Blob detection Computer vision Edge detection Feature detection (computer vision) Interest point detection Scale space References Feature detection (computer vision) Multivariable calculus Smooth functions Singularity theory
Ridge detection
Mathematics
2,576
55,362,449
https://en.wikipedia.org/wiki/1%2C5-Hexadiene
1,5-Hexadiene is the organic compound with the formula (CH₂)₂(CH=CH₂)₂. It is a colorless, volatile liquid. It is used as a crosslinking agent and precursor to a variety of other compounds. Synthesis 1,5-Hexadiene is produced commercially by the ethenolysis of 1,5-cyclooctadiene: (CH₂CH=CHCH₂)₂ + 2 CH₂=CH₂ → 2 CH₂=CHCH₂CH₂CH=CH₂ The catalyst is derived from Re₂O₇ on alumina. A laboratory-scale preparation involves reductive coupling of allyl chloride using magnesium: 2 ClCH₂CH=CH₂ + Mg → (CH₂)₂(CH=CH₂)₂ + MgCl₂ References Alkadienes Monomers
1,5-Hexadiene
Chemistry,Materials_science
159
10,393,385
https://en.wikipedia.org/wiki/Nitrenium%20ion
A nitrenium ion (also called: aminylium ion or imidonium ion (obsolete)) in organic chemistry is a reactive intermediate based on nitrogen with both an electron lone pair and a positive charge and with two substituents (R₂N⁺). Nitrenium ions are isoelectronic with carbenes, and can exist in either a singlet or a triplet state. The parent nitrenium ion, NH₂⁺, is a ground state triplet species with a gap of about 30 kcal/mol to the lowest energy singlet state. Conversely, most arylnitrenium ions are ground state singlets. Certain substituted arylnitrenium ions can be ground state triplets, however. Nitrenium ions can have microsecond or longer lifetimes in water. Aryl nitrenium ions are of biological interest because of their involvement in certain DNA damaging processes. They are generated upon in vivo oxidation of arylamines. The regiochemistry and energetics of the reaction of phenylnitrenium ion with guanine have been investigated using density functional theory computations. Nitrenium species have been exploited as intermediates in organic reactions. They are typically generated via heterolysis of N–X (X = N, O, halogen) bonds. For instance, they are formed upon treatment of chloramine derivatives with silver salts or by activation of aryl hydroxylamine derivatives or aryl azides with Brønsted or Lewis acids. The Bamberger rearrangement is an early example of a reaction that is now thought to proceed via an aryl nitrenium intermediate. They can also act as electrophiles in electrophilic aromatic substitution. See also The related neutral nitrenes R–N: References Reactive intermediates Nitrogen hydrides
Nitrenium ion
Chemistry
368
7,732,659
https://en.wikipedia.org/wiki/Nested%20quotation
A nested quotation is a quotation that is encapsulated inside another quotation, forming a hierarchy with multiple levels. When focusing on a certain quotation, one must interpret it within its scope. Nested quotation can be used in literature (as in nested narration), speech, and computer science (as in "meta"-statements that refer to other statements as strings). Nested quotation can be very confusing until evaluated carefully and until each quotation level is put into perspective. In literature In languages that allow for nested quotes and use quotation mark punctuation to indicate direct speech, hierarchical quotation sublevels are usually punctuated by alternating between primary quotation marks and secondary quotation marks. For a comprehensive analysis of the major quotation mark systems employed in major writing systems, see Quotation mark. In JavaScript programming Nested quotes often become an issue when using the eval keyword. The eval function converts and interprets a string as actual JavaScript code, and runs that code. If that string is specified as a literal, then the code must be written as a quote itself (and escaped accordingly). For example: eval("var a=3; alert();"); This code declares a variable a, which is assigned the value 3, and pops up a blank alert window to the user. Nested strings (level 2) Suppose we had to make a quote inside the quoted interpreted code. In JavaScript, you can only have one unescaped quote sublevel, which has to be the alternate of the top-level quote. If the 2nd-level quote symbol is the same as the first-level symbol, these quotes must be escaped. For example: alert("I don't need to escape here"); alert('Nor is it "required" here'); alert('But now I do or it won\'t work'); Nested strings (level 3 and beyond) Furthermore (unlike in the literature example), the third-level nested quote must be escaped in order not to conflict with either the first- or second-level quote delimiters. This is true regardless of alternating-symbol encapsulation. Every level after the third must be recursively escaped for all the levels of quotes in which it is contained. This includes the escape character itself, the backslash (“\”), which is escaped by itself (“\\”). For every sublevel in which a backslash is contained, it must be escaped for the level above it, and then all the backslashes used to escape that backslash, as well as the original backslash, must be escaped, and so on for every level that is ascended. This is to avoid ambiguity and confusion in escaping. Here are some examples that demonstrate some of the above principles: document.write("<html><head></head><body><p>Hello, this is the body of the document."); document.writeln("</p>"); document.write("<p>A newline in HTML code acts simply as whitespace, whereas a <br> starts a new line."); document.write("</p></body></html>\n"); eval('eval(\"eval(\\\"alert(\\\\\\\"Now I\\\\\\\\\\\\\\\'m confused!\\\\\\\")\\\")\")'); Note that the number of backslashes increases from 0 to 1 to 3 to 7 to 15, indicating a rule for successively nested symbols and meaning that the length of the escape sequences grows exponentially with quotation depth, as demonstrated in the sketch below. See also Leaning toothpick syndrome Embedded metalanguage Story within a story Play within a play References Computer programming Syntax English grammar
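The 0-1-3-7-15 pattern noted above can be verified mechanically. The following sketch is my own addition (not part of the original article) and is written in Python rather than JavaScript only because the backslash-doubling rule it demonstrates is language-independent; the counts are exactly those of the JavaScript example:

import re

def embed(s):
    # Wrap s in a double-quoted string literal, escaping backslashes and quotes.
    return '"' + s.replace('\\', '\\\\').replace('"', '\\"') + '"'

s = '"'  # the innermost quote character, at nesting level 0
for level in range(1, 5):
    s = embed(s)
    longest_run = max(len(m.group()) for m in re.finditer(r'\\+', s))
    print(level, longest_run)  # prints 1, 3, 7, 15: runs of 2**n - 1 backslashes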
Nested quotation
Technology,Engineering
843
59,135,145
https://en.wikipedia.org/wiki/Deltaco
Deltaco is a Swedish computer hardware company founded in Ludvika in 1991, incorporated under the name Swedeltaco AB as well as Dist IT. Originally it mainly imported Taiwanese cables for resale, but came to both produce and sell its own products across a variety of product lines. In 2011, its own products accounted for 40% of its revenue. In 2007 it launched across the Nordic countries as DELTACO, and in 2017 it released its own line of gaming peripherals, Deltaco Gaming. Deltaco has previously advocated against expensive cables, stating that "when it comes to digital signals, either the signal arrives, or it doesn't", and that there is no need to pay more for cables for audiophiles or video-enthusiasts. References Computer hardware companies Electronics companies of Sweden Swedish brands Companies based in Stockholm
Deltaco
Technology
171
10,035,514
https://en.wikipedia.org/wiki/Intimate%20partner%20violence
Intimate partner violence (IPV) is domestic violence by a current or former spouse or partner in an intimate relationship against the other spouse or partner. IPV can take a number of forms, including physical, verbal, emotional, economic and sexual abuse. The World Health Organization (WHO) defines IPV as "any behavior within an intimate relationship that causes physical, psychological or sexual harm to those in the relationship, including acts of physical aggression, sexual coercion, psychological abuse and controlling behaviors." IPV is sometimes referred to simply as battery, or as spouse or partner abuse. The most extreme form of IPV is termed intimate terrorism, coercive controlling violence, or simply coercive control. In such situations, one partner is systematically violent and controlling. This is generally perpetrated by men against women, and is the most likely of the types to require medical services and the use of a women's shelter. Resistance to intimate terrorism, a form of self-defense termed violent resistance, is usually conducted by women. Studies on domestic violence against men suggest that men are less likely to report domestic violence perpetrated by their female intimate partners. Conversely, men are more likely to commit acts of severe domestic battery, and women are more likely to suffer serious injury as a result. The most common but less injurious form of intimate partner violence is situational couple violence (also known as situational violence), which is conducted by men and women nearly equally, and is more likely to occur among younger couples, including adolescents (see teen dating violence) and those of college age. Background Intimate partner violence occurs between two people in an intimate relationship or former relationship. It may occur between heterosexual or homosexual couples and victims can be male or female. Couples may be dating, cohabiting or married and violence can occur in or outside of the home. Studies in the 1990s showed that both men and women could be abusers or victims of domestic violence. Women are more likely to act violently in retaliation or self-defense and tend to engage in less severe forms of violence than men, whereas men are more likely to commit long-term cycles of abuse than women. The World Health Organization (WHO) defines intimate partner violence as "any behavior within an intimate relationship that causes physical, psychological or sexual harm to those in the relationship". The WHO also adds controlling behaviors as a form of abuse. According to a study conducted in 2010, 30% of women globally aged 15 and older have experienced physical and/or sexual intimate partner violence. Global estimates by the WHO calculated that 1 in 3 women had experienced physical or sexual abuse from an intimate partner in their lifetime. The complications from intimate partner violence are profound. Intimate partner violence is associated with increased rates of substance abuse amongst the victims, including tobacco use. Those who are victims of intimate partner violence are also more likely to experience depression, PTSD, anxiety and suicidality. Women who experience intimate partner violence have a higher risk of unintended pregnancies and sexually transmitted infections, including HIV. This is thought to be due to forced or coerced sex and reproductive coercion (i.e., removing a condom during sex or blocking the woman's access to contraception).
Children whose parent experiences intimate partner violence are more likely to become victims of IPV themselves or to become perpetrators of violence later in life. Injuries that are frequently seen in victims of IPV include contusions, lacerations, fractures (especially of the head, neck and face), strangulation injuries (a strong predictor of future serious injury or death), concussions and traumatic brain injuries. Assessment Screening tools The U.S. Preventive Services Task Force (USPSTF) recommends screening women of reproductive age for intimate partner violence and providing information or referral to social services for those who screen positive. Some of the most studied IPV screening tools are the Hurt, Insult, Threaten, and Scream (HITS), the Woman Abuse Screening Tool/Woman Abuse Screening Tool-Short Form (WAST/WAST-SF), the Partner Violence Screen (PVS), and the Abuse Assessment Screen (AAS). The HITS is a four-item scale rated on a 5-point Likert scale from 1 (never) to 5 (frequently). This tool was initially developed and tested among family physicians and family practice offices, and since then has been evaluated in diverse outpatient settings. Internal reliability and concurrent validity are acceptable. Generally, the sensitivity of this measure has been found to be lower among men than among women. The WAST is an eight-item measure (there is a short form of the WAST that consists of the first two items only). It was originally developed for family physicians, but subsequently has been tested in the emergency department. It has been found to have good internal reliability and acceptable concurrent validity. The PVS is a three-item measure scored on a yes/no scale, with positive responses to any question denoting abuse. It was developed as a brief instrument for the emergency department. The AAS is a five-item measure scored on a yes/no scale, with positive responses to any question denoting abuse. It was created to detect abuse perpetrated against pregnant women. The screening tool has been tested predominantly with young, poor women. It has acceptable test–retest reliability. The Danger Assessment-5 screening tool can assess the risk of severe injury or homicide due to intimate partner violence. A "yes" response to two or more questions suggests a high risk of severe injury or death in women experiencing intimate partner violence. The five questions ask about an increasing frequency of abuse over the past year, the use of weapons during the abuse, whether the victim believes their partner is capable of killing them, the occurrence of choking during the abuse, and whether the abuser is violently and constantly jealous of the victim. Research instruments One instrument used in research on family violence is the Conflict Tactics Scale (CTS). Two versions have been developed from the original CTS: the CTS2 (an expanded and modified version of the original CTS) and the CTSPC (CTS Parent-Child). The CTS is one of the most widely criticized domestic violence measurement instruments due to its exclusion of context variables and motivational factors in understanding acts of violence. The National Institute of Justice cautions that the CTS may not be appropriate for IPV research "because it does not measure control, coercion, or the motives for conflict tactics." The Index of Spousal Abuse, popular in medical settings, is a 30-item self-report scale created from the CTS. Another assessment used in research to measure IPV is the Severity of Violence Against Women Scales (SVAWS).
This scale measures how often a woman experiences violent behaviors by her partner. Causes Attitudes Research based on the Ambivalent Sexism Theory found that individuals who endorse sexist attitudes show a higher acceptance of myths that justify intimate partner violence compared to those who do not. Both students and adults with a more traditional perception of gender roles are more likely to blame the victim for the abuse than those who hold more non-traditional conceptions. Researchers Rollero and Tartaglia found that two dimensions of ambivalent sexism are particularly predictive of acceptance of violence myths: hostility toward women and benevolence toward men. Both contribute to legitimizing partner violence, and this, in turn, leads to undervaluing the seriousness of the abuse. Various studies have linked beliefs in myths of romantic love to a greater probability of cyber-control perpetration toward the partner in youths aged 18 to 30, and to a higher degree of justifying intimate partner violence in adults. Myths of romantic love include beliefs in the power of love to cope with all kinds of difficulties, the need to have a romantic relationship to be happy, the belief in jealousy as a sign of love, the perception of love as suffering, and the existence of a soul mate who is our only one true love. Demographics A report from the National Institute of Justice noted that women who were more likely to experience intimate partner violence had some common demographic factors. Women who had children by age 21 were twice as likely to be victims of intimate partner violence as women who were not mothers at that age. Men who had children by age 21 were more than three times as likely to be abusers as men who were not fathers at that age. Many male abusers are also substance abusers. More than two-thirds of males who commit or attempt homicide against a partner used alcohol, drugs, or both during the incident; less than one-fourth of the victims did. The lower the household income, the higher the reported intimate partner violence rates. Intimate partner violence impairs a woman's capacity to find employment. A study of women who received AFDC benefits found that domestic violence was associated with a general pattern of reduced stability of employment. Finally, many victims had mental health problems. Almost half of the women reporting serious domestic violence also meet the criteria for major depression; 24 percent suffer from posttraumatic stress disorder, and 31 percent from anxiety. I³ Theory The I³ Theory (pronounced I-cubed) explains intimate partner violence as an interaction of three processes: instigation, impellance, and inhibition. According to the theory, these three processes determine the likelihood that a conflict will escalate into violence. Instigation refers to the initial provocation or triggering action by a partner, such as infidelity or rejection. The effect of these current events is then shaped by impellance and inhibition. Impelling factors increase the likelihood of violence. Examples of impelling factors include poor communication, alcohol or substance abuse, precarious manhood, impulsive and weak self-regulation, and abuse history. Inhibiting factors decrease the likelihood of violence by overriding the aggressive impulses. Examples of inhibiting factors include empathy, lack of stress, economic prosperity, self-control, and punishment for aggression.
Weak instigating triggers, weak impelling factors, and strong inhibiting factors lead to a low risk of intimate partner violence. The I³ Theory is useful when describing not only heterosexual male-to-female violence, but violence across other relationship types as well, such as male-to-male, female-to-male, and female-to-female violence. Types Michael P. Johnson argues for four major types of intimate partner violence (also known as "Johnson's typology"), which is supported by subsequent research and evaluation, as well as by independent researchers. Distinctions are made among the types of violence, motives of perpetrators, and the social and cultural context based upon patterns across numerous incidents and motives of the perpetrator. The United States Centers for Disease Control (CDC) also divides domestic violence into types. Intimate terrorism Intimate terrorism, or coercive controlling violence (CCV), occurs when one partner in a relationship, typically a man, uses coercive control and power over the other partner, using threats, intimidation, and isolation. CCV relies on severe psychological abuse for controlling purposes; when physical abuse occurs it too is severe. In such cases, "[o]ne partner, usually a man, controls virtually every aspect of the victim's, usually a woman's, life." Johnson reported in 2001 that 97% of the perpetrators of intimate terrorism were men. Intimate partner violence may involve sexual, sadistic control, economic, physical, emotional and psychological abuse. Intimate terrorism is more likely to escalate over time, not as likely to be mutual, and more likely to involve serious injury. The victims of one type of abuse are often the victims of other types of abuse. Severity tends to increase with multiple incidents, especially if the abuse comes in many forms. If the abuse is more severe, it is more likely to have chronic effects on victims because the long-term effects of abuse tend to be cumulative. Because this type of violence is most likely to be extreme, survivors of intimate terrorism are most likely to require medical services and the safety of shelters. Consequences of physical or sexual intimate terrorism include chronic pain, gastrointestinal and gynecological problems, depression, post-traumatic stress disorder, and death. Other mental health consequences are anxiety, substance abuse, and low self-esteem. Abusers are more likely to have witnessed abuse as children than those who engage in situational couple violence. Intimate terrorism batterers include two types: "generally-violent-antisocial" and "dysphoric-borderline". The first type includes people with general psychopathic and violent tendencies. The second type includes people who are emotionally dependent on the relationship. Violence by an individual against their intimate partner is often done as a way of controlling the partner, even if this kind of violence is not the most frequent. Violent resistance Violent resistance (VR), a form of self-defense, is violence perpetrated by victims against their partners who have exerted intimate terrorism against them. Within relationships of intimate terrorism and violent resistance, 96% of the violent resisters are women. VR can occur as an instinctive reaction in response to an initial attack or as a defense mechanism after prolonged instances of violence. This form of resistance can sometimes become fatal if the victim feels as though their only way out is to kill their partner.
Situational couple violence Situational couple violence, also called common couple violence, is not connected to general control behavior, but arises in a single argument where one or both partners physically lash out at the other. This is the most common form of intimate partner violence, particularly in the western world and among young couples, and involves women and men nearly equally. Among college students, Johnson found it to be perpetrated about 44% of the time by women and 56% of the time by men. Johnson states that situational couple violence involves a relationship dynamic "in which conflict occasionally gets 'out of hand,' leading usually to 'minor' forms of violence, and rarely escalating into serious or life-threatening forms of violence." In situational couple violence, acts of violence by men and women occur at fairly equal rates, with rare occurrences of injury, and are not committed in an attempt to control a partner. It is estimated that approximately 50% of couples experience situational couple violence in their relationships. Situational couple violence involves: Mode: Mildly aggressive behavior such as throwing objects, ranging to more aggressive behaviors such as pushing, slapping, biting, hitting, scratching, or hair pulling. Frequency: Less frequent than partner terrorism, occurring once in a while during an argument or disagreement. Severity: Milder than intimate terrorism, very rarely escalates to more severe abuse, generally does not include injuries that were serious or that caused one partner to be admitted to a hospital. Mutuality: Violence may be equally expressed by either partner in the relationship. Intent: Occurs out of anger or frustration rather than as a means of gaining control and power over the other partner. Reciprocal and non-reciprocal The CDC divides domestic violence into two types: reciprocal, in which both partners are violent, and non-reciprocal violence, in which one partner is violent. Of the four types, situational couple violence and mutual violent control are reciprocal, while intimate terrorism is non-reciprocal. Violent resistance on its own is non-reciprocal, but is reciprocal when in response to intimate terrorism. By gender In the 1970s and 1980s, studies using large, nationally representative samples resulted in findings indicating that women were as violent as men in intimate relationships. This information diverged significantly from shelter, hospital, and police data, initiating a long-standing debate, termed "the gender symmetry debate". One side of this debate argues that mainly men perpetrate IPV (the gender asymmetry perspective), whereas the other side maintains that men and women perpetrate IPV at about equal rates (gender symmetry perspective). However, research on gender symmetry acknowledges asymmetrical aspects of IPV, which show that men use more violent and often deadly means of IPV. Older conflict tactics scale (CTS) methodology was criticized for excluding two important facets in gender violence: conflict-motivated aggression and control-motivated aggression. For example, women commonly engage in IPV as a form of self-defense or retaliation. Research has shown that the nature of the abuse inflicted by women upon male partners is different from the abuse inflicted by men, in that it is generally not used as a form of control and does not cause the same levels of injury or fear of the abusive partner. Scholars state these cases should not be generalized and each couple's specificities must be assessed. 
A 2016 meta-analysis indicated that the only risk factors for the perpetration of intimate partner violence that differ by gender are witnessing intimate partner violence as a child, alcohol use, and male demand and female withdrawal communication patterns. The Centers for Disease Control and Prevention reports that in the United States, 41% of women and 26% of men experience intimate partner violence within their lifetime. Gender asymmetry While both women and men can be victims and perpetrators of IPV, the majority of such violence is inflicted upon women, who are also much more likely to suffer injuries as a result, in both heterosexual and same-sex relationships. Although men and women commit equivalent rates of unreported minor violence via situational altercation, more severe perpetration and domestic battery tend to be committed by men. This is based on newer CTS methodology as opposed to older versions that did not take into account the contexts in which violence takes place. A 2008 systematic review published in the journal Violence and Victims found that despite less serious altercation or violence being equal among both men and women, more serious and violent abuse was perpetrated by men. It was also found that women's use of physical violence was more likely motivated by self-defense or fear, whereas men's use of violence was motivated by control. A 2010 systematic review published in the journal Trauma, Violence, & Abuse found that the common motives for female-on-male IPV were anger, a need for attention, or a response to their partner's violence. A 2011 review published in the journal Aggression and Violent Behavior found differences in the methods of abuse employed by men and women, suggesting that men were more likely to "beat up, choke or strangle" their partners, whereas women were more likely to "throw something at their partner, slap, kick, bite, punch, or hit with an object". Researchers such as Michael S. Kimmel have criticized CTS methodology in assessing relations between gender and domestic violence. Kimmel argued that the CTS excluded two important facets in gender violence: conflict-motivated aggression and control-motivated aggression. The first facet is a form of family conflict (such as an argument) while the latter is using violence as a tool for control. Kimmel also argued that the CTS failed to assess for the severity of the injury, sexual assaults and abuse from ex-partners or spouses. Women generally suffer more severe and long-lasting forms of partner abuse than men, and men generally have more opportunities to leave an abusive partner than women do. Researchers have found different outcomes in men and women in response to such abuse. A 2012 review from the journal Psychology of Violence found that women suffered from over-proportionate numbers of injuries, fear, and posttraumatic stress as a result of partner violence. The review also found that 70% of female victims felt frightened as a result of violence perpetrated by their partners, whereas 85% of male victims expressed "no fear" in response to such violence. Lastly, IPV correlated with relationship satisfaction for women but it did not do so for men. According to government statistics from the US Department of Justice, male perpetrators constituted 96% of federal prosecutions for domestic violence. Another report by the US Department of Justice on non-fatal domestic violence from 2003 to 2012 found that 76% of domestic violence was committed against women and 24% was committed against men.
According to the United Nations Office on Drugs and Crime, the percentage of victims killed by their spouses or ex-spouses was 77.4% for women and 22.6% for men in 2008 in selected countries across Europe. Globally, men's perpetration of intimate partner violence against women often stems from conceptions of masculinity and patriarchy. Studies done in the United States, Nigeria, and Guatemala all support the idea of men reacting violently towards their partners when their masculinity is threatened by changing gender roles. Recent scholarship draws attention to the complexity of interactions between conceptions of masculinity and factors such as colonialism, racism, class and sexual orientation in shaping attitudes toward intimate partner violence around the world. Gender symmetry The theory that women perpetrate intimate partner violence (IPV) at roughly the same rate as men has been termed "gender symmetry." The earliest empirical evidence of gender symmetry was presented in the 1975 U.S. National Family Violence Survey carried out by Murray A. Straus and Richard J. Gelles on a nationally representative sample of 2,146 "intact families." The survey found 11.6% of men and 12% of women had experienced some kind of IPV in the last twelve months, while 4.6% of men and 3.8% of women had experienced "severe" IPV. These unexpected results led Suzanne K. Steinmetz to coin the controversial term "battered husband syndrome" in 1977. Ever since the publication of Straus and Gelles' findings, other researchers into domestic violence have disputed whether gender symmetry really exists. Sociologist Michael Flood writes, "there is no 'gender symmetry' in domestic violence; there are important differences between men's and women's typical patterns of victimization; and domestic violence represents only a small proportion of the violence to which men are subject". Other empirical studies since 1975 suggest gender symmetry in IPV. Such results may be due to a bi-directional or reciprocal pattern of abuse, with one study concluding that 70% of assaults involve mutual acts of violence. According to Ko Ling Chan in a literature review of IPV, studies generally support the theory of gender symmetry if "no contexts, motives, and consequences are considered". A 2008 systematic review found that while men and women perpetrate roughly equal levels of the less harmful types of domestic violence, termed "situational couple violence", men are much more likely than women to perpetrate "serious and very violent 'intimate terrorism'". This review also found that "women's physical violence is more likely than men's violence to be motivated by self-defense and fear, whereas men's physical violence is more likely than women's to be driven by control motives." A 2010 systematic review found that women's perpetration of IPV is often a form of violent resistance as a means of self-defense and/or retaliation against their violent male partners, and that it was often difficult to distinguish between self-defense and retaliation in such contexts. A 2013 review of evidence from five continents found that when partner abuse is defined broadly (emotional abuse, any kind of hitting, who hits first), it is relatively even. However, when the review examined who is physically harmed and how seriously, who expresses more fear, and who experiences subsequent psychological problems, it found that domestic violence primarily affects women.
A sample from Botswana showed higher levels of mental health consequences among females experiencing IPV, in contrast with a sample from Pakistan, in which males and females experiencing IPV showed similar levels of mental health consequences. Sexual violence Sexual violence by intimate partners varies by country, with an estimated 15 million adolescent girls surviving forced sex worldwide. In some countries forced sex, or marital rape, often occurs with other forms of domestic violence, particularly physical abuse. Treatment Individual treatment Due to the high prevalence and devastating consequences of IPV, approaches to decrease and prevent the recurrence of violence are of utmost importance. Initial police response and arrest are not always enough to protect victims from recurrence of abuse; thus, many states have mandated participation in batterer intervention programs (BIPs) for men who have been charged with assault against an intimate partner. Most of these BIPs are based on the Duluth model and incorporate some cognitive behavioral techniques. The Duluth model is one of the most common current interventions for IPV. It represents a psycho-educational approach that was developed by paraprofessionals from information gathered from interviewing battered women in shelters and using principles from feminist and sociological frameworks. One of the main components used in the Duluth model is the 'power and control wheel', which conceptualizes IPV as one form of abuse to maintain male privilege. Using the 'power and control wheel', the goal of treatment is to achieve behaviors that fall on the 'equality wheel' by re-educating men and by replacing maladaptive attitudes held by men. Cognitive behavioral therapy (CBT) techniques focus on modifying faulty or problematic cognitions, beliefs, and emotions to prevent future violent behavior and include skills training such as anger management, assertiveness, and relaxation techniques. Overall, the addition of Duluth and CBT approaches results in a 5% reduction in IPV. This low reduction rate might be explained, at least in part, by the high prevalence of bidirectional violence as well as client-treatment matching versus "one-size-fits-all" approaches. Achieving change through values-based behavior (ACTV) is a newly developed Acceptance and Commitment Therapy (ACT)-based program. Developed by domestic violence researcher Amie Zarling and colleagues at Iowa State University, ACTV aims to teach abusers "situational awareness", that is, to recognize and tolerate uncomfortable feelings, so that they can stop themselves from exploding into rage. Initial evidence for the ACTV program has shown high promise: using a sample of 3,474 men who were arrested for domestic assault and court-mandated to a BIP (either ACTV or Duluth/CBT), Zarling and colleagues showed that compared with Duluth/CBT participants, significantly fewer ACTV participants acquired any new charges, domestic assault charges, or violent charges. ACTV participants also acquired significantly fewer charges on average in the one year after treatment than Duluth/CBT participants. Psychological therapies for women probably reduce the resulting depression and anxiety; however, it is unclear if these approaches properly address recovery from complex trauma and the need for safety planning. Conjoint treatment Some estimates show that as many as 50% of couples who experience IPV engage in some form of reciprocal violence. Nevertheless, most services address offenders and survivors separately.
In addition, many couples who have experienced IPV decide to stay together. These couples may present to couples or family therapy. In fact, 37-58% of couples who seek regular outpatient treatment have experienced physical assault in the past year. In these cases, clinicians are faced with the decision as to whether they should accept or refuse to treat these couples. Although the use of conjoint treatment for IPV is controversial, as it may present a danger to victims and potentially escalate abuse, it may be useful to others, such as couples experiencing situational couple violence. Scholars and practitioners in the field call for tailoring of interventions to the various sub-types of violence and individuals served. Behavioral couples therapy (BCT) is a cognitive-behavioral approach, typically delivered to outpatients in 15-20 sessions over several months. Research suggests that BCT can be effective in reducing IPV when used to treat co-occurring addictions, which is important because IPV and substance abuse and misuse frequently co-occur. Domestic conflict containment program (DCCP) is a highly structured skills-based program whose goal is to teach couples conflict containment skills. Physical aggression couples treatment (PACT) is a modification of DCCP, which includes additional psychoeducational components designed to improve relationship quality, including such things as communication skills, fair fighting tactics, and dealing with gender differences, sex, and jealousy. The primary goal of domestic violence focused couples treatment (DVFCT) is to end violence, with the additional goal of helping couples improve the quality of their relationships. It is designed to be conducted over 18 weeks and can be delivered in either individual or multi-couple group format. Advocacy Advocacy interventions have also been shown to have some benefits under specific circumstances. Brief advocacy may provide short-term mental health benefits and reduce abuse, particularly in pregnant women. Prevention Home visitation programs for children from birth up to two years old, which include screening for parental IPV and referral or education if screening is positive, have been shown to prevent future risk of IPV. Universal harm reduction education for patients in reproductive and adolescent healthcare settings has been shown to decrease certain types of IPV. See also Domestic violence Honor killing Intimate partner violence and U.S. military populations Marital rape Sexual violence by intimate partners Info-graphic on intimate partner violence, sexual violence, and stalking from the US Centers for Disease Control and Prevention available on Wikimedia Commons Notes References Further reading A report commissioned by the Men's Advisory Network (MAN). External links Crimes against women Abuse Crime Sexual violence Violence against men Violence against women
Intimate partner violence
Biology
5,954
11,559,438
https://en.wikipedia.org/wiki/Colletotrichum%20coccodes
Colletotrichum coccodes is a plant pathogen which causes anthracnose on tomato and black dot disease of potato. The fungus survives on crop debris, and disease emergence is favored by warm temperatures and wet weather. Hosts and symptoms C. coccodes is known for infecting potato and tomato, and more generally is primarily a pathogen of solanaceous plants. Heilmann et al., 2006 characterizes genetic varieties and their associations with particular potato hosts. Buddie et al., 1999, finds strawberry is also a host. C. coccodes has a large host range beyond these, including some Cucurbitaceae, Fabaceae, and other Solanaceae. C. coccodes can cause lesions, twisted leaves, and a bleached color on onion. On tomato, sunken dark spots can be seen on the fruit. As the disease continues to develop, the spots begin to rot. The pathogen can infect both green and ripe fruit; spots are not evident on green fruit right away, but they develop over time. Symptoms are most common on the fruit, but they may also appear on the stem, leaves, and roots. In potato, C. coccodes is characterized by silvery lesions on the tuber surface which result in a deterioration in skin quality. In addition to causing tuber blemish symptoms, C. coccodes also causes symptoms on stems and foliage, which result in crop losses, and is implicated as a factor in the potato early dying disease complex. In the past the pathogen was not regarded as an issue, but it has become more prevalent. Disease cycle Colletotrichum coccodes can survive the winter as hard, melanized structures called sclerotia. The pathogen may also survive in debris as threadlike strands called hyphae. In late spring the lower leaves and fruit may become infected by germinating sclerotia and spores in the soil debris. Infections of the lower leaves of tomato plants are important sources of spores for secondary infections throughout the growing season. Senescent leaves with early blight infections and leaves with flea beetle injury are especially important spore sources because the fungus can colonize and produce new spores in these wounded areas. The growth of C. coccodes is most rapid at , although the fungus can cause infections over a wide range of temperatures between . Wet weather promotes disease development, and splashing water in the form of rain or irrigation favors the spread of the disease. The pathogen also produces acervuli, which are full of conidia that help to spread the infection. Management Plant crops on well-drained soils, use 3- or 4-year crop rotations with plants which are not hosts, and plant resistant varieties. Sanitation can also be important to reduce the spread of inoculum, and clean seed should be planted. Soil fumigants may also be used, although they may not be as economical as the other methods of control. Irrigation should be avoided when fruit begins to ripen to prevent the splashing of spores; it is also recommended to rotate with a nonsolanaceous crop every other year. References External links Colletotrichum coccodes Gilat Research Center Horticulture Talk: Battling Tomato Anthracnose coccodes Potato diseases Tomato diseases Fungi described in 1833 Taxa named by Karl Friedrich Wilhelm Wallroth Fungus species
Colletotrichum coccodes
Biology
688
23,199,319
https://en.wikipedia.org/wiki/Sector/Sphere
Sector/Sphere is an open source software suite for high-performance distributed data storage and processing. It can be broadly compared to Google's GFS and MapReduce technology. Sector is a distributed file system targeting data storage over a large number of commodity computers. Sphere is the programming architecture framework that supports in-storage parallel data processing for data stored in Sector. Sector/Sphere operates in a wide area network (WAN) setting. The system was created by Yunhong Gu (the author of UDP-based Data Transfer Protocol) in 2006 and was then maintained by a group of other developers. Architecture Sector/Sphere consists of four components. The security server maintains the system security policies such as user accounts and the IP access control list. One or more master servers control operations of the overall system in addition to responding to various user requests. The slave nodes store the data files and process them upon request. The clients are the users' computers from which system access and data processing requests are issued. Sector/Sphere is written in C++ and is claimed to achieve two to four times better performance than its competitor Hadoop, which is written in Java; this claim is supported by an Aster Data Systems benchmark and by winning the "bandwidth challenge" at the Supercomputing Conference in 2006, 2008, and 2009. Sector Sector is a user space file system which relies on the local/native file system of each node for storing uploaded files. Sector provides file system-level fault tolerance by replication, thus it does not require hardware fault tolerance such as RAID, which is usually very expensive. Sector does not split user files into blocks; instead, a user file is stored intact on the local file system of one or more slave nodes. This means that Sector has a file size limitation that is application specific. The advantages, however, are that the Sector file system is very simple, and it leads to better performance in Sphere parallel data processing due to reduced data transfer between nodes. It also allows uploaded data to be accessible from outside the Sector system. Sector provides many unique features compared to traditional file systems. Sector is topology aware. Users can define rules on how files are located and replicated in the system, according to network topology. For example, data from a certain user can be located on a specific cluster and will not be replicated to other racks. For another example, some files can have more replicas than others. Such rules can be applied at the per-file level. The topology awareness and the use of UDT as the data transfer protocol allow Sector to support high performance data IO across geographically distributed locations, while most file systems can only be deployed within a local area network. For this reason, Sector is often deployed as a content distribution network for very large datasets. Sector integrates data storage and processing into one system. Every storage node can also be used to process the data, thus it can support massive in-storage parallel data processing (see Sphere). Sector is application aware, meaning that it can provide data location information to applications and also allow applications to specify data location whenever necessary. As a simple example of the benefits of Sphere, Sector can return the results from such commands as "grep" and "md5sum" without reading the data out of the file system.
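The in-storage processing idea can be illustrated with a toy sketch. The code below is purely conceptual and my own invention; it is not the actual Sector/Sphere API (which is C++), and the node and file names are made up. The point is only that the function runs where the data lives and just the small per-node results travel back to the client:

from concurrent.futures import ThreadPoolExecutor

# Hypothetical cluster state: node name -> files stored locally on that node.
nodes = {
    "slave-1": {"logs/a.txt": "error: disk full\nok\n"},
    "slave-2": {"logs/b.txt": "ok\nerror: timeout\nerror: retry\n"},
}

def grep_count_udf(files, pattern):
    # Evaluated on the storage node, analogous to the in-storage "grep" above.
    return sum(line.count(pattern)
               for body in files.values()
               for line in body.splitlines())

with ThreadPoolExecutor() as pool:
    futures = {name: pool.submit(grep_count_udf, files, "error")
               for name, files in nodes.items()}
    results = {name: fut.result() for name, fut in futures.items()}

print(results)                 # {'slave-1': 1, 'slave-2': 2}
print(sum(results.values()))   # aggregate at the client: 3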
Moreover, it can compute the results of multiple files in parallel. The Sector client provides an API for application development which allows user applications to interact directly with Sector. The software also comes prepackaged with a set of command-line tools for accessing the file system. Finally, Sector supports the FUSE interface, presenting a mountable file system that is accessible via standard command-line tools. Sphere Sphere is a parallel data processing engine integrated in Sector and it can be used to process data stored in Sector in parallel. It can be broadly compared to MapReduce, but it uses generic user defined functions (UDFs) instead of the map and reduce functions. A UDF can be either a map function or a reduce function, or even others. Sphere can manipulate the locality of both input data and output data, thus it can effectively support multiple input datasets, combinative and iterative operations and even legacy application executables. Because Sector does not split user files, Sphere can simply wrap up many existing applications that accept files or directories as input, without rewriting them. Thus it can provide greater compatibility with legacy applications. See also Pentaho - Open source data integration (Kettle), analytics, reporting, visualization and predictive analytics directly from Hadoop nodes Nutch - An effort to build an open source search engine based on Lucene and Hadoop, also created by Doug Cutting Apache Accumulo - Secure Big Table HBase - Bigtable-model database Hypertable - HBase alternative MapReduce - Hadoop's fundamental data filtering algorithm Apache Mahout - Machine Learning algorithms implemented on Hadoop Apache Cassandra - A column-oriented database that supports access from Hadoop HPCC - LexisNexis Risk Solutions High Performance Computing Cluster Cloud computing Big data Data Intensive Computing Literature Yunhong Gu, Robert Grossman, Sector and Sphere: The Design and Implementation of a High Performance Data Cloud, Theme Issue of the Philosophical Transactions of the Royal Society A: Crossing Boundaries: Computational Science, E-Science and Global E-Infrastructure, 28 June 2009 vol. 367 no. 1897 2429–2445. References External links Sector/Sphere Project on SourceForge Free software programmed in C++ Free system software Distributed file systems Cloud infrastructure
Sector/Sphere
Technology
1,125
34,061,192
https://en.wikipedia.org/wiki/OSU-6162
OSU-6162 (PNU-96391) is a compound which acts as a partial agonist at both dopamine D2 receptors and 5-HT2A receptors. It acts as a dopamine stabilizer in a similar manner to the closely related drug pridopidine, and has antipsychotic, anti-addictive and anti-Parkinsonian effects in animal studies. Both enantiomers show similar activity but with different ratios of effects: the (S) enantiomer (–)-OSU-6162, which is more commonly used in research, has higher binding affinity for D2 but is a weaker partial agonist at 5-HT2A, while the (R) enantiomer (+)-OSU-6162 has higher efficacy at 5-HT2A but lower D2 affinity. See also Flumexadol LPH-5 PF-219,061 References Benzosulfones Dopamine agonists Experimental non-hallucinogens Experimental psychiatric drugs Neuropharmacology Non-hallucinogenic 5-HT2A receptor agonists Piperidines
OSU-6162
Chemistry
249
23,605,074
https://en.wikipedia.org/wiki/Inverse%20gas%20chromatography
Inverse gas chromatography is a physical characterization analytical technique that is used in the analysis of the surfaces of solids. Inverse gas chromatography or IGC is a highly sensitive and versatile gas phase technique developed over 40 years ago to study the surface and bulk properties of particulate and fibrous materials. In IGC the roles of the stationary (solid) and mobile (gas or vapor) phases are inverted from traditional analytical gas chromatography (GC); IGC is considered a materials characterization technique (of the solid) rather than an analytical technique (of a gas mixture). In GC, a standard column is used to separate and characterize a mixture of several gases or vapors. In IGC, a single standard gas or vapor (probe molecule) is injected into a column packed with the solid sample under investigation. During an IGC experiment a pulse or constant concentration of a known gas or vapor (probe molecule) is injected down the column at a fixed carrier gas flow rate. The retention time of the probe molecule is then measured by traditional GC detectors (i.e. flame ionization detector or thermal conductivity detector). Measuring how the retention time changes as a function of probe molecule chemistry, probe molecule size, probe molecule concentration, column temperature, or carrier gas flow rate can elucidate a wide range of physico-chemical properties of the solid under investigation. Several in-depth reviews of IGC have been published previously. IGC experiments are typically carried out at "infinite dilution", where only small amounts of probe molecule are injected. This region is also called the Henry's law region or the linear region of the sorption isotherm. At infinite dilution probe-probe interactions are assumed negligible and any retention is only due to probe-solid interactions. The resulting retention volume, $V_R^0$, is given by the following equation: $V_R^0 = \frac{j}{m} \cdot F \cdot (t_R - t_0) \cdot \frac{T}{273.15\ \mathrm{K}}$, where $j$ is the James–Martin pressure drop correction, $m$ is the sample mass, $F$ is the carrier gas flow rate at standard temperature and pressure, $t_R$ is the gross retention time for the injected probe, $t_0$ is the retention time for a non-interacting probe (i.e. the dead time), and $T$ is the absolute temperature. Surface energy determination The main application of IGC is to measure the surface energy of solids (fibers, particulates, and films). Surface energy is defined as the amount of energy required to create a unit area of a solid surface; it is analogous to the surface tension of a liquid. Also, the surface energy can be defined as the excess energy at the surface of a material compared to the bulk. The surface energy ($\gamma$) is directly related to the thermodynamic work of adhesion ($W_{adh}$) between two materials, as given by the geometric-mean relation $W_{adh} = 2\sqrt{\gamma_1 \gamma_2}$, where 1 and 2 represent the two components in the composite or blend. When determining if two materials will adhere it is common to compare the work of adhesion with the work of cohesion, $W_{coh} = 2\gamma$. If the work of adhesion is greater than the work of cohesion, then the two materials are thermodynamically favored to adhere. Surface energies are commonly measured by contact angle methods. However, these methods are ideally designed for flat, uniform surfaces. For contact angle measurements on powders, they are typically compressed or adhered to a substrate, which can effectively change the surface characteristics of the powder. Alternatively, the Washburn method can be used, but this has been shown to be affected by column packing, particle size, and pore geometry.
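As a quick numerical check of the retention-volume equation above, here is a minimal Python sketch (my own illustration; all numbers are invented and only meant to have plausible magnitudes):

def retention_volume(j, m, F, t_r, t_0, T):
    # V_R^0 = (j / m) * F * (t_R - t_0) * T / 273.15 K; with F in mL/min and
    # times in minutes this returns mL per gram of sample.
    return (j / m) * F * (t_r - t_0) * T / 273.15

v = retention_volume(j=0.98,    # James-Martin pressure drop correction
                     m=0.50,    # sample mass, g
                     F=10.0,    # carrier gas flow rate at STP, mL/min
                     t_r=4.2,   # gross retention time of the probe, min
                     t_0=0.6,   # dead time (non-interacting probe), min
                     T=303.15)  # column temperature, K
print(round(v, 1), "mL/g")      # 78.3 mL/g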
IGC is a gas phase technique, and thus is not subject to the above limitations of the liquid phase techniques. To measure the solid surface energy by IGC, a series of injections using different probe molecules is performed at defined column conditions. It is possible to ascertain both the dispersive component of the surface energy and acid-base properties via IGC. For the dispersive surface energy, the retention volumes for a series of n-alkane vapors (i.e. decane, nonane, octane, heptane, etc.) are measured. The Dorris and Gray or Schultz methods can then be used to calculate the dispersive surface energy. Retention volumes for polar probes (i.e. toluene, ethyl acetate, acetone, ethanol, acetonitrile, chloroform, dichloromethane, etc.) can then be used to determine the acid-base characteristics of the solid using either the Gutmann or the Good–van Oss theory. Other parameters accessible by IGC include: heats of sorption [1], adsorption isotherms, energetic heterogeneity profiles, diffusion coefficients, glass transition temperatures [1], Hildebrand and Hansen solubility parameters, and crosslink densities. Applications IGC experiments have applications over a wide range of industries. Both surface and bulk properties obtained from IGC can yield vital information for materials ranging from pharmaceuticals to carbon nanotubes. Although surface energy experiments are most common, there is a wide range of experimental parameters that can be controlled in IGC, thus allowing the determination of a variety of sample parameters. The below sections highlight how IGC experiments are utilized in several industries. Polymers and coatings IGC has been used extensively for the characterization of polymer films, beads, and powders. For instance, IGC was used to study surface properties and interactions amongst components in paint formulations. Also, IGC has been used to investigate the degree of crosslinking for ethylene propylene rubber using the Flory–Rehner equation [17]. Additionally, IGC is a sensitive technique for the detection and determination of first and second order phase transitions like melting and glass transition temperatures of polymers. Although other techniques like differential scanning calorimetry are capable of measuring these transition temperatures, IGC has the capability of measuring glass transition temperatures as a function of relative humidity. Pharmaceuticals The increasing sophistication of pharmaceutical materials has necessitated the use of more sensitive, thermodynamics-based techniques for materials characterization. For these reasons, IGC has seen increased use throughout the pharmaceutical industry. Applications include polymorph characterization, the effect of processing steps like milling, and drug-carrier interactions for dry powder formulations. In other studies, IGC was used to relate surface energy and acid-base values with triboelectric charging and to differentiate the crystalline and amorphous phases [23]. Fibers Surface energy values obtained by IGC have been used extensively on fibrous materials including textiles, natural fibers, glass fibers, and carbon fibers. Most of these and other related studies investigating the surface energy of fibers focus on the use of these fibers in composites. Ultimately, the changes in surface energy can be related to composite performance via the works of adhesion and cohesion discussed previously.
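For concreteness, the Schultz-type determination of the dispersive surface energy mentioned above amounts to a linear fit of $RT \ln V_R^0$ against $N_A\, a\, \sqrt{\gamma_L^d}$ for an n-alkane series, whose slope is $2\sqrt{\gamma_S^d}$. The sketch below is my own illustration: the probe cross-sectional areas and surface tensions are approximate textbook-style values, and the retention volumes are invented:

import numpy as np

R, N_A, T = 8.314, 6.022e23, 303.15  # J/(mol K), 1/mol, column temperature in K

# n-heptane, n-octane, n-nonane probes (approximate values):
a       = np.array([5.73e-19, 6.28e-19, 6.89e-19])  # cross-sectional areas, m^2
gamma_l = np.array([20.3e-3, 21.3e-3, 22.7e-3])     # dispersive surface tensions, J/m^2
V_R0    = np.array([1.1e-5, 2.9e-5, 8.0e-5])        # invented retention volumes, m^3

x = N_A * a * np.sqrt(gamma_l)
y = R * T * np.log(V_R0)
slope, intercept = np.polyfit(x, y, 1)
gamma_s_d = (slope / 2.0) ** 2                      # dispersive surface energy, J/m^2
print(round(gamma_s_d * 1e3, 1), "mJ/m^2")          # ~35 mJ/m^2 for these numbers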
Nanomaterials Similar to fibers, nanomaterials like carbon nanotubes, nanoclays, and nanosilicas are being used as composite reinforcement agents. Therefore, the surface energy and surface treatment of these materials have been actively studied by IGC. For instance, IGC has been used to study the surface activity of nanosilica, nanohematite, and nanogoethite. Further, IGC was used to characterize the surface of as-received and modified carbon nanotubes. Metakaolins IGC was used to characterize the adsorption surface properties of calcined kaolin (metakaolin) and the effect of grinding on this material. Other Other applications for IGC include paper-toner adhesion, wood composites, porous materials [3], and food materials. See also Adhesion Material characterization Sessile drop technique Surface energy Wetting Wetting transition References Gas chromatography
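Since several of the application sections above reduce adhesion questions to the works of adhesion and cohesion, the comparison is worth showing explicitly. A minimal sketch using the geometric-mean rule for Wadh given earlier (which of the two cohesion values to compare against depends on the failure mode being considered):

    import math

    def work_of_adhesion(gamma_1, gamma_2):
        """W_adh = 2 * sqrt(gamma_1 * gamma_2), geometric-mean rule (mJ/m^2)."""
        return 2.0 * math.sqrt(gamma_1 * gamma_2)

    def work_of_cohesion(gamma):
        """W_coh = 2 * gamma."""
        return 2.0 * gamma

    # Example: a fiber at 40 mJ/m^2 against a matrix at 30 mJ/m^2.
    w_adh = work_of_adhesion(40.0, 30.0)
    print(w_adh, work_of_cohesion(30.0), w_adh > work_of_cohesion(30.0))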
Inverse gas chromatography
Chemistry
1,606
49,979,265
https://en.wikipedia.org/wiki/Hol-Tox%20family
The putative holin-like toxin (Hol-Tox) family (TC# 1.E.42) consists of many small proteins, between 34 and 48 amino acyl residues (aas) long, each with a single transmembrane segment (TMS). Rajesh et al. (2011) first identified the gene, designated tmp1, which codes for a 34-aa peptide that acts as an antibacterial agent against Gram-positive bacteria. This peptide exhibits a single transmembrane domain (TMD) that is believed to play a role in facilitating the antibacterial activity. A representative list of proteins belonging to the Hol-Tox family can be found in the Transporter Classification Database. See also Holin Lysin Further reading References Protein families Membrane proteins Transmembrane proteins Transmembrane transporters Transport proteins Integral membrane proteins Holins
Hol-Tox family
Biology
188
716,083
https://en.wikipedia.org/wiki/Delivery%20point
In a postal system, a delivery point (sometimes DP) is a single mailbox or other place at which mail is delivered. It differs from a street address in that each address may have several delivery points, such as an apartment, office department, or other room. Such buildings (primarily residential) are often called multiple-dwelling units (MDUs) by the USPS. United States Postal Service usage In the US Postal System, a delivery point is a two-digit number between 00 and 99 assigned to every address. When combined with the ZIP + 4 code, the delivery point provides a unique identifier for every deliverable address served by the USPS. The delivery point digits are almost never printed on mail in human-readable form; instead they are encoded in the POSTNET delivery point barcode (DPBC) or as part of the newer Intelligent Mail Barcode (IMb). The DPBC makes automated mail sorting possible, including ordering the mail according to how the carrier delivers it (walk sequence). The two-digit delivery point number is combined with an additional check digit in the DPBC. This digit is used by barcode sorters (BCS) to check if the ZIP, ZIP+4, or delivery point ZIP codes contain an error. In a database, storing the ZIP+4 code in a 10-character field (with the hyphen) allows easy output in the address block, and storing the check digit in a 3-digit field (instead of calculating it) allows automatic checking of the validity of the ZIP+4 and delivery point fields in case one had been changed independently. In order to receive the appropriate barcode discount, the delivery point digits and the +4 extension must be verified using an up-to-date CASS- or Delivery Point Validation (DPV)-certified program. Since each city block or section of a rural route has a different +4 extension, and address numbers generally increase by 100 per block, the delivery point is typically the last two digits of the address. In the early days of DPBC, it was acceptable to determine the delivery point in this fashion, but since suite and other secondary designations are assigned unique delivery points which cannot be determined without the CASS/DPV database, this is no longer possible. The delivery point is usually redundant for post office boxes, since they are typically assigned their own ZIP+4 code, but must nonetheless be assigned a complete DPBC for full postal discounts. The full rules for identifying the delivery point for a given address are specified in the USPS CASS Technical Guide. United Kingdom In the United Kingdom the delivery point index is known as the Postcode Address File (PAF). It is owned and made available by Royal Mail. New Zealand Post New Zealand Post maintains an index of delivery points known as the National Postal Address Database (NPAD). Australia Post In Australia the PAF is maintained by Australia Post. See also Coding Accuracy Support System References Postal systems
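The DPBC check digit follows a simple rule: it is chosen so that all twelve encoded digits (five ZIP digits, four +4 digits, two delivery point digits, and the check digit itself) sum to a multiple of 10. A minimal sketch in Python (the function name is illustrative):

    def dpbc_check_digit(zip5, plus4, delivery_point):
        """Check digit for a POSTNET delivery point barcode.

        Chosen so that the 11 payload digits plus the check digit
        sum to a multiple of 10.
        """
        digits = zip5 + plus4 + delivery_point
        assert len(digits) == 11 and digits.isdigit()
        return (10 - sum(int(d) for d in digits) % 10) % 10

    # Example: ZIP+4 12345-6789 with delivery point 01 -> check digit 4.
    print(dpbc_check_digit("12345", "6789", "01"))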
Delivery point
Technology
610
38,670,017
https://en.wikipedia.org/wiki/Restroom%20Access%20Act
The Restroom Access Act, also known as Ally's Law, is legislation passed by several U.S. states that requires retail establishments that have toilet facilities for their employees to also allow customers to use the facilities if the customer has a medical condition requiring immediate access to a toilet, such as inflammatory bowel disease or Crohn's disease. Background The law is named for Ally Bain, a 14-year-old girl from Illinois who had a flare-up of her Crohn's disease while shopping at a large retail store and was subsequently denied use of the employee-only restroom, causing her to soil herself. Bain's mother vowed it would never happen to anyone else. The two met with Illinois State Representative Kathy Ryg, helped her draft a bill, and testified before a committee at the state capital. The bill was signed into law in August 2005, making Illinois the first U.S. state to do so. As of January 2024, at least 20 U.S. states had passed versions of the law. They include Arkansas, California, Colorado, Connecticut, Delaware, Illinois, Kentucky, Louisiana, Maryland, Massachusetts, Michigan, Minnesota, New Hampshire, New York, Ohio, Oregon, Tennessee, Texas, Wisconsin, and Washington. A Virginia bill, which levies fines of $100 for non-compliance, was adopted in the 2024 General Assembly session and went into effect on July 1, 2024. There is support for a federal version of the act, but some small-business people object to the public using their employee bathrooms. Applicability In general, each state requires that the customer present a document signed by a medical professional attesting that the customer uses an ostomy device or has Crohn's disease, ulcerative colitis, or another inflammatory bowel disease or medical condition requiring access to a toilet facility without delay. In at least two states, Oregon and Tennessee, the customer can present an identification card issued by a national organization advocating for the eligible medical condition. Some states also include pregnancy as a covered medical condition. Sample law The Restroom Access Act of Illinois states: Courtesy card In Australia, the association Crohn's & Colitis Australia (CCA) encourages businesses to support people with such medical conditions by recognizing the Can't Wait Card issued by the CCA. The CCA states: Crohn's & Colitis Australia (CCA) is inviting retailers, business owners and venue operators to show their support for people with the medical condition Crohn's and colitis, collectively known as inflammatory bowel disease (IBD), by displaying a window sticker recognising the Can't Wait Card in their store. Other countries, including the UK, have similar programs of voluntary participation by businesses; one such program in the UK is the Bladder & Bowel Community's Just Can't Wait Card. A card with no country-specific indications is available, explaining the possibility of legislation and the gravity of the card holder's disability and need for restroom access. See also Workers' right to access the toilet References Accessible building Gastrointestinal tract disorders Restrooms in the United States United States disability legislation United States state health legislation Restrooms in Washington (state)
Restroom Access Act
Engineering
661
519,483
https://en.wikipedia.org/wiki/TRON%20project
TRON (acronym for The Real-time Operating system Nucleus) is an open architecture real-time operating system kernel design. The project was started by Ken Sakamura of the University of Tokyo in 1984. The project's goal is to create an ideal computer architecture and network, to provide for all of society's needs. For different scenarios, the need for different OS kernels was identified. (See, for example, papers written in English in TRON Project 1988.) The Industrial TRON (ITRON) derivative was one of the world's most used operating systems in 2003, being present in billions of electronic devices such as mobile phones, appliances and even cars. Although mainly used by Japanese companies, it garnered interest worldwide. However, a dearth of quality English documentation was said to hinder its broader adoption. The situation has improved since TRON Forum took over support of the TRON Project in 2015. (See the specification page, which lists many English documents.) These activities centered on a non-profit organization called the TRON Association, which acted as the communication hub for the parties concerned with the development of the ITRON specification OS and its users in many fields, including home electronics and the smart house industry. In 2002, T-Engine Forum was formed to provide an open source RTOS implementation that supersedes the ITRON specification OS and additionally provides binary compatibility. The new RTOS was T-Kernel. The activities of the TRON Association to support the TRON Project were taken over by T-Engine Forum in 2010. In 2015, T-Engine Forum changed its name to TRON Forum. Today, the ITRON specification OS and the T-Kernel RTOS are supported by popular Secure Sockets Layer (SSL) and Transport Layer Security (TLS) libraries such as wolfSSL. History In 1984, the TRON project was officially launched. In 1985, NEC announced the first ITRON implementation based on the ITRON/86 specification. In 1986, the TRON Kyogikai (unincorporated TRON Association) was established, Hitachi announced its ITRON implementation based on the ITRON/68K specification, and the first TRON project symposium is held. In 1987, Fujitsu announced an ITRON implementation based on the ITRON/MMU specification. Mitsubishi Electric announced an ITRON implementation based on the ITRON/32 specification, and Hitachi introduced the Gmicro/200 32-bit microprocessor based on the TRON VLSI CPU specification. In 1988, BTRON computer prototypes were being tested in various schools across Japan as the planned standardized computer for education. The project was organized by both the Ministry of International Trade and Industry and the Ministry of Education. However, Scott Callon of Stanford University writes that the project ran into some issues, such as BTRON being incompatible with existing DOS-based PCs and software. At the time NEC controlled 80–90% of the education market with DOS infrastructure, so adopting BTRON would have meant getting rid of all existing infrastructure. The existing incompatible PC software had also been personally written by school personnel, who opposed BTRON for this incompatibility with their earlier projects. There was also no software yet for the brand new computer. The project was additionally at least a year behind schedule and, although better performance had been promised, did not outperform earlier systems, possibly because the OS had been made by a firm that had not written one before.
For these reasons, at the end of 1988 the Ministry of Education decided that it would not support the project unless BTRON was also made compatible with DOS. The Ministry of International Trade and Industry had hoped to avoid supporting NEC's domination of the PC market with DOS. BTRON integration with the NEC DOS architecture was difficult but possible with negotiation. In April 1989 the Office of the U.S. Trade Representative issued a preliminary report accusing BTRON of being a trade barrier, as it only functioned in Japan, and asked the Japanese government not to make it standard in schools. TRON was included along with rice, semiconductors, and telecommunications equipment in a list of items targeted by Super-301 (a complete stop of imports based on Section 301 of the Omnibus Trade and Competitiveness Act of 1988). It was removed from the list after the USTR inspection team visited the TRON Association in May. In June the Japanese government expressed their regret at U.S. intervention but accepted this request not to make it standard in schools, thus ending the BTRON project. Callon opines that the project had nevertheless run into such difficulties that the U.S. intervention allowed the government to save face from cancelling the project. According to a report from The Wall Street Journal, in 1989 US officials feared that TRON could undercut American dominance in computers, but in the end PC software and chips based on the TRON technology proved no match for Windows and Intel's processors as a global standard. In the 1980s Microsoft had at least once lobbied Washington about TRON until backing off, but Ken Sakamura himself believed Microsoft was not the impetus behind the Super-301 listing in 1989. Known for his off-the-cuff remarks, governor of Tokyo Shintaro Ishihara mentioned in a 2004 column concerning international trade policy that TRON was dropped because Carla Anderson Hills had threatened Ryutaro Hashimoto over it. On 10 November 2017, TRON Forum, headquartered in Tokyo, Japan, which had been maintaining the TRON Project since 2010, agreed with the Institute of Electrical and Electronics Engineers (IEEE), headquartered in the US, to share the copyright of the TRON μT-Kernel 2.0 specification, the most recent version of T-Kernel (the successor of the original ITRON), for free. This was to facilitate the creation of an IEEE RTOS standard based on the μT-Kernel specification. Stephen Dukes, then vice chair of the Standards Committee of the IEEE Consumer Electronics Society, said that the IEEE would "accelerate standards development and streamline global distribution" through the agreement. On September 11, 2018, "IEEE 2050-2018 - IEEE Standard for a Real-Time Operating System (RTOS) for Small-Scale Embedded Systems", a standard based on μT-Kernel 2.0, was officially approved as an IEEE standard. In May 2023, the IEEE recognized the RTOS proposed, created, and released by the TRON Project as an IEEE Milestone, titled "TRON Real-time Operating System Family, 1984." The certified Milestone plaque is installed on the campus of the University of Tokyo, where Ken Sakamura, the leader of the TRON Project, worked as a research assistant in 1984. Architecture TRON does not specify the source code for the kernel, but instead is a "set of interfaces and design guidelines" for creating the kernel. This allows different companies to create their own versions of TRON, based on the specifications, which can be suited for different microprocessors.
While the specification of TRON is publicly available, implementations can be proprietary at the discretion of the implementer. The TRON framework defines a complete architecture for the different computing units: ITRON (Industrial TRON): an architecture for real-time operating systems for embedded systems; this is the most popular use of the TRON architecture JTRON (Java TRON): a sub-project of ITRON to allow it to use the Java platform BTRON (Business TRON): for personal computers, workstations, PDAs, mainly as the human–machine interface in networks based on the TRON architecture CTRON (Central and Communications TRON): for mainframe computers, digital switching equipment MTRON (Macro TRON): for intercommunication between the different TRON components. STRON (Silicon TRON): hardware implementation of a real-time kernel. TRON (encoding), a way that TRON represents characters (as opposed to Unicode). Administration The TRON project was administered by the TRON Association for a long time. After it was integrated into T-Engine Forum in 2010, and T-Engine Forum changed its name to TRON Forum in 2015, TRON Forum has supported the TRON Project by acting as the communication hub for the parties involved. See also ITRON T-Kernel Micro T-Kernel References External links TRON Web TRON specifications in English B-Free in Japanese; Free BTRON OS project; archived Real-time operating systems Science and technology in Japan
TRON project
Technology
1,720
55,486,640
https://en.wikipedia.org/wiki/Methanosarcina%20sRNAs
sRNA162, sRNA154, and sRNA41 are small non-coding RNAs (sRNAs) identified, together with 248 other sRNA candidates, by RNA sequencing in the methanogenic archaeon Methanosarcina mazei Gö1. These sRNAs were further characterised. It was shown that sRNA162 can interact with both a cis- and a trans-encoded mRNA using two distinct domains. The sRNA overlaps the 5′UTR of the MM2442 mRNA and acts as a cis-encoded antisense RNA, and it also regulates MM2441 expression as a trans-encoded sRNA. It exhibits a regulatory role in the metabolic switch between methanol and trimethylamine as carbon and energy sources. sRNA154, exclusively expressed under nitrogen deficiency, has a central regulatory role in nitrogen metabolism, affecting nitrogenase and glutamine synthetase by masking the ribosome binding site or by positively affecting transcript stability. sRNA41, highly expressed during nitrogen sufficiency, is capable of binding several ribosome binding sites independently within a polycistronic mRNA. It was proposed to inhibit translation initiation of all ACDS (acetyl-CoA decarbonylase/synthase complex) genes in a nitrogen-dependent manner. See also Bacterial sRNA involved in nitrogen metabolism: NsiR4 Other archaeal sRNAs: Pyrobaculum asR3 small RNA Archaeal H/ACA sRNA References Non-coding RNA
Methanosarcina sRNAs
Chemistry
311
51,412,220
https://en.wikipedia.org/wiki/Mrp%20superfamily
The Na+ Transporting Mrp Superfamily is a superfamily of integral membrane transport proteins. It includes the TC families: 2.A.63 - The Monovalent Cation (K+ or Na+):Proton Antiporter-3 (CPA3) Family 3.D.1 - The H+ or Na+-translocating NADH Dehydrogenase (NDH) Family 3.D.9 - The H+-translocating F420H2 Dehydrogenase (F420H2DH) Family Mrp of Bacillus subtilis is a seven-subunit Na+/H+ antiporter complex (TC# 2.A.63.1.4). All subunits are homologous to the subunits in other members of this monovalent cation (K+ or Na+):proton antiporter-3 (CPA3) family as well as to subunits in the archaeal hydrogenases (TC#s 3.D.1.4.1 and 3.D.1.4.2), which share several subunits with NADH dehydrogenase subunits (3.D.1). The largest subunits of the Mrp complex (MrpA and MrpD) are homologous to subunits in NADH dehydrogenases (NDHs): ND2, ND4 and ND5 in the fungal NADH dehydrogenase complex and most other NDHs, as well as to subunits in the F420H2 dehydrogenase of Methanosarcina mazei (TC# 3.D.9.1.1). These homologous subunits may catalyze Na+/K+ and/or H+ transport. See also Transporter Classification Database References Protein superfamilies Membrane proteins Transmembrane proteins Transmembrane transporters Transport proteins Integral membrane proteins
Mrp superfamily
Biology
404
65,676,732
https://en.wikipedia.org/wiki/James%20Kirkcaldie
James Cullen Kirkcaldie (18 April 1875 – 16 August 1931) was a New Zealand cricketer. He played in one first-class match for Wellington in 1903/04. Kirkcaldie was an analytical chemist. References External links 1875 births 1931 deaths New Zealand cricketers Wellington cricketers Cricketers from the London Borough of Enfield People from Enfield, London Analytical chemists
James Kirkcaldie
Chemistry
75
50,566,748
https://en.wikipedia.org/wiki/Cammy%20Abernathy
Cammy R. Abernathy is a materials scientist who is the former dean of the University of Florida's Herbert Wertheim College of Engineering. Education Abernathy graduated from the Massachusetts Institute of Technology in 1980, followed by MS and PhD degrees from Stanford University in 1985. She received all three of her degrees in materials science and engineering. Career Abernathy began as a professor at the University of Florida in 1993. From 2004 to 2009, she worked as an associate dean for the college. In 2009, she was named the Dean of the Herbert Wertheim College of Engineering. In 2015, the college received $50 million from its namesake, Herbert Wertheim. It was the largest cash gift in UF history. She stepped down in December 2022, being replaced in the interim by Forrest Masters. She was among three finalists to be the new president of the University of Memphis. Ultimately, Bill Hardgrave was appointed. Her research includes work in thin-film electronic materials and devices. She is the author of over 500 journal publications, 430 conference papers, one co-authored book, 7 edited books, 8 book chapters, and 7 distinct patents. She currently serves as the William H. Wadsworth director of the Engineering Leadership Institute at UF. Recognition Abernathy was recognized as a Fellow of the American Physical Society in 2009 "for contributions to the development of compound semiconductor materials growth using molecular beam epitaxy". She is also a fellow of the American Vacuum Society. In 2016, the Association for Academic Women at the University of Florida honored Abernathy as its 2016 Woman of Distinction for her leadership and commitment to diversity and inclusion. References Living people University of Florida faculty MIT School of Engineering alumni Stanford University School of Engineering alumni Year of birth missing (living people) Place of birth missing (living people) American materials scientists Women materials scientists and engineers Fellows of the American Physical Society
Cammy Abernathy
Materials_science,Technology
386
294,408
https://en.wikipedia.org/wiki/Penrose%E2%80%93Hawking%20singularity%20theorems
The Penrose–Hawking singularity theorems (after Roger Penrose and Stephen Hawking) are a set of results in general relativity that attempt to answer the question of when gravitation produces singularities. The Penrose singularity theorem is a theorem in semi-Riemannian geometry and its general relativistic interpretation predicts a gravitational singularity in black hole formation. The Hawking singularity theorem is based on the Penrose theorem and it is interpreted as a gravitational singularity in the Big Bang situation. Penrose shared half of the Nobel Prize in Physics in 2020 "for the discovery that black hole formation is a robust prediction of the general theory of relativity". Singularity A singularity in solutions of the Einstein field equations is one of three things: Spacelike singularities: The singularity lies in the future or past of all events within a certain region. The Big Bang singularity and the typical singularity inside a non-rotating, uncharged Schwarzschild black hole are spacelike. Timelike singularities: These are singularities that can be avoided by an observer because they are not necessarily in the future of all events. An observer might be able to move around a timelike singularity. These are less common in known solutions of the Einstein field equations. Null singularities: These singularities occur on light-like or null surfaces. An example might be found in certain types of black hole interiors, such as the Cauchy horizon of a charged (Reissner–Nordström) or rotating (Kerr) black hole. A singularity can be either strong or weak: Weak singularities: A weak singularity is one where the tidal forces (which are responsible for the spaghettification in black holes) are not necessarily infinite. An observer falling into a weak singularity might not be torn apart before reaching the singularity, although the laws of physics would still break down there. The Cauchy horizon inside a charged or rotating black hole might be an example of a weak singularity. Strong singularities: A strong singularity is one where tidal forces become infinite. In a strong singularity, any object would be destroyed by infinite tidal forces as it approaches the singularity. The singularity at the center of a Schwarzschild black hole is an example of a strong singularity. Space-like singularities are a feature of non-rotating uncharged black holes as described by the Schwarzschild metric, while time-like singularities are those that occur in charged or rotating black hole exact solutions. Both of them have the property of geodesic incompleteness, in which either some light-path or some particle-path cannot be extended beyond a certain proper time or affine parameter (affine parameter being the null analog of proper time). The Penrose theorem guarantees that some sort of geodesic incompleteness occurs inside any black hole whenever matter satisfies reasonable energy conditions. The energy condition required for the black-hole singularity theorem is weak: it says that light rays are always focused together by gravity, never drawn apart, and this holds whenever the energy of matter is non-negative. Hawking's singularity theorem is for the whole universe, and works backwards in time: it guarantees that the (classical) Big Bang has infinite density. This theorem is more restricted and only holds when matter obeys a stronger energy condition, called the strong energy condition, in which the energy is larger than the pressure. 
All ordinary matter, with the exception of a vacuum expectation value of a scalar field, obeys this condition. During inflation, the universe violates the dominant energy condition, and it was initially argued (e.g. by Starobinsky) that inflationary cosmologies could avoid the initial big-bang singularity. However, it has since been shown that inflationary cosmologies are still past-incomplete, and thus require physics other than inflation to describe the past boundary of the inflating region of spacetime. It is still an open question whether (classical) general relativity predicts spacelike singularities in the interior of realistic charged or rotating black holes, or whether these are artefacts of high-symmetry solutions and turn into null or timelike singularities when perturbations are added. Interpretation and significance In general relativity, a singularity is a place that objects or light rays can reach in a finite time where the curvature becomes infinite, or spacetime stops being a manifold. Singularities can be found in all the black-hole spacetimes, the Schwarzschild metric, the Reissner–Nordström metric, the Kerr metric and the Kerr–Newman metric, and in all cosmological solutions that do not have a scalar field energy or a cosmological constant. One cannot predict what might come "out" of a big-bang singularity in our past, or what happens to an observer that falls "in" to a black-hole singularity in the future, so they require a modification of physical law. Before Penrose, it was conceivable that singularities only form in contrived situations. For example, in the collapse of a star to form a black hole, if the star is spinning and thus possesses some angular momentum, perhaps the centrifugal force partly counteracts gravity and keeps a singularity from forming. The singularity theorems prove that this cannot happen, and that a singularity will always form once an event horizon forms. In the collapsing star example, since all matter and energy is a source of gravitational attraction in general relativity, the additional angular momentum only pulls the star together more strongly as it contracts: the part outside the event horizon eventually settles down to a Kerr black hole (see No-hair theorem). The part inside the event horizon necessarily has a singularity somewhere. The proof is somewhat constructive: it shows that the singularity can be found by following light-rays from a surface just inside the horizon. But the proof does not say what type of singularity occurs: spacelike, timelike, null, orbifold, or a jump discontinuity in the metric. It only guarantees that if one follows the time-like geodesics into the future, it is impossible for the boundary of the region they form to be generated by the null geodesics from the surface. This means that the boundary must either come from nowhere or the whole future ends at some finite extension. An interesting "philosophical" feature of general relativity is revealed by the singularity theorems. Because general relativity predicts the inevitable occurrence of singularities, the theory is not complete without a specification for what happens to matter that hits the singularity. One can extend general relativity to a unified field theory, such as the Einstein–Maxwell–Dirac system, where no such singularities occur. Elements of the theorems There is a historically deep connection between the curvature of a manifold and its topology.
The Bonnet–Myers theorem states that a complete Riemannian manifold that has Ricci curvature everywhere greater than a certain positive constant must be compact. The condition of positive Ricci curvature is most conveniently stated in the following way: for every geodesic there is a nearby initially parallel geodesic that will bend toward it when extended, and the two will intersect at some finite length. When two nearby parallel geodesics intersect (see conjugate point), the extension of either one is no longer the shortest path between the endpoints. The reason is that two parallel geodesic paths necessarily collide after an extension of equal length, and if one path is followed to the intersection then the other, you are connecting the endpoints by a non-geodesic path of equal length. This means that for a geodesic to be a shortest length path, it must never intersect neighboring parallel geodesics. Starting with a small sphere and sending out parallel geodesics from the boundary, assuming that the manifold has a Ricci curvature bounded below by a positive constant, none of the geodesics are shortest paths after a while, since they all collide with a neighbor. This means that after a certain amount of extension, all potentially new points have been reached. If all points in a connected manifold are at a finite geodesic distance from a small sphere, the manifold must be compact. Roger Penrose argued analogously in relativity. If null geodesics, the paths of light rays, are followed into the future, points in the future of the region are generated. If a point is on the boundary of the future of the region, it can only be reached by going at the speed of light, no slower, so null geodesics include the entire boundary of the proper future of a region. When the null geodesics intersect, they are no longer on the boundary of the future, they are in the interior of the future. So, if all the null geodesics collide, there is no boundary to the future. In relativity, the Ricci curvature, which determines the collision properties of geodesics, is determined by the energy tensor, and its projection on light rays is equal to the null-projection of the energy–momentum tensor and is always non-negative. This implies that the volume of a congruence of parallel null geodesics once it starts decreasing, will reach zero in a finite time. Once the volume is zero, there is a collapse in some direction, so every geodesic intersects some neighbor. Penrose concluded that whenever there is a sphere where all the outgoing (and ingoing) light rays are initially converging, the boundary of the future of that region will end after a finite extension, because all the null geodesics will converge. This is significant, because the outgoing light rays for any sphere inside the horizon of a black hole solution are all converging, so the boundary of the future of this region is either compact or comes from nowhere. The future of the interior either ends after a finite extension, or has a boundary that is eventually generated by new light rays that cannot be traced back to the original sphere. Nature of a singularity The singularity theorems use the notion of geodesic incompleteness as a stand-in for the presence of infinite curvatures. Geodesic incompleteness is the notion that there are geodesics, paths of observers through spacetime, that can only be extended for a finite time as measured by an observer traveling along one. 
Presumably, at the end of the geodesic the observer has fallen into a singularity or encountered some other pathology at which the laws of general relativity break down. Assumptions of the theorems Typically a singularity theorem has three ingredients: An energy condition on the matter, A condition on the global structure of spacetime, Gravity is strong enough (somewhere) to trap a region. There are various possibilities for each ingredient, and each leads to different singularity theorems. Tools employed A key tool used in the formulation and proof of the singularity theorems is the Raychaudhuri equation, which describes the divergence θ of a congruence (family) of geodesics. The divergence of a congruence is defined as the derivative of the log of the determinant of the congruence volume. The Raychaudhuri equation is dθ/ds = −(1/n)θ² − σ_ab σ^ab − R_mn X^m X^n (with n = 2 for null and n = 3 for timelike congruences in four dimensions), where σ_ab is the shear tensor of the congruence and R_mn X^m X^n is also known as the Raychaudhuri scalar (see the congruence page for details). The key point is that R_mn X^m X^n will be non-negative provided that the Einstein field equations hold and either the null energy condition holds and the geodesic congruence is null, or the strong energy condition holds and the geodesic congruence is timelike. When these hold, the divergence becomes infinite at some finite value of the affine parameter. Thus all geodesics leaving a point will eventually reconverge after a finite time, provided the appropriate energy condition holds, a result also known as the focusing theorem. This is relevant for singularities thanks to the following argument: Suppose we have a spacetime that is globally hyperbolic, and two points p and q that can be connected by a timelike or null curve. Then there exists a geodesic of maximal length connecting p and q. Call this geodesic γ. The geodesic γ can be varied to a longer curve if another geodesic from p intersects γ at another point, called a conjugate point. From the focusing theorem, we know that all geodesics from p have conjugate points at finite values of the affine parameter. In particular, this is true for the geodesic of maximal length. But this is a contradiction: one can therefore conclude that the spacetime is geodesically incomplete. In general relativity, there are several versions of the Penrose–Hawking singularity theorem. Most versions state, roughly, that if there is a trapped null surface and the energy density is nonnegative, then there exist geodesics of finite length that cannot be extended. These theorems, strictly speaking, prove that there is at least one non-spacelike geodesic that is only finitely extendible into the past, but there are cases in which the conditions of these theorems obtain in such a way that all past-directed spacetime paths terminate at a singularity. Versions There are many versions; below is the null version: Assume The null energy condition holds. We have a noncompact connected Cauchy surface. We have a closed trapped null surface T. Then, we either have null geodesic incompleteness, or closed timelike curves. Sketch of proof: Proof by contradiction. The boundary of the future of T, denoted ∂J+(T), is generated by null geodesic segments originating from T with tangent vectors orthogonal to it. Being a trapped null surface, by the null Raychaudhuri equation, both families of null rays emanating from T will encounter caustics. (A caustic by itself is unproblematic. For instance, the boundary of the future of two spacelike separated points is the union of two future light cones with the interior parts of the intersection removed.
Caustics occur where the light cones intersect, but no singularity lies there.) The null geodesics generating ∂J+(T) have to terminate, however, i.e. reach their future endpoints at or before the caustics. Otherwise, we can take two null geodesic segments, changing at the caustic, and then deform them slightly to get a timelike curve connecting a point on the boundary to a point on T, a contradiction. But as T is compact, given a continuous affine parameterization of the geodesic generators, there exists a lower bound to the absolute value of the expansion parameter. So, we know caustics will develop for every generator before a uniform bound in the affine parameter has elapsed. As a result, ∂J+(T) has to be compact. Either we have closed timelike curves, or we can construct a congruence by timelike curves, and every single one of them has to intersect the noncompact Cauchy surface exactly once. Consider all such timelike curves passing through ∂J+(T) and look at their image on the Cauchy surface. Being a continuous map, the image also has to be compact. Being a timelike congruence, the timelike curves cannot intersect, and so, the map is injective. If the Cauchy surface were noncompact, then the image has a boundary. We are assuming spacetime comes in one connected piece. But ∂J+(T) is compact and boundaryless because the boundary of a boundary is empty. A continuous injective map cannot create a boundary, giving us our contradiction. Loopholes: If closed timelike curves exist, then timelike curves do not have to intersect the partial Cauchy surface. If the Cauchy surface were compact, i.e. space is compact, the null geodesic generators of the boundary can intersect everywhere because they can intersect on the other side of space. Other versions of the theorem involving the weak or strong energy condition also exist. Modified gravity In modified gravity, the Einstein field equations do not hold and so these singularities do not necessarily arise. For example, in Infinite Derivative Gravity, it is possible for R_mn X^m X^n to be negative even if the null energy condition holds. Notes References The classic reference. Also available as Kalvakota, Vaibhav R. (2021), "A discussion on Geometry and General Relativity". See also for a relevant chapter from The Large Scale Structure of Space Time. Witten, Edward (2020), "Light Rays, Singularities, and All That", Rev. Mod. Phys. 92, 45004 (2020), https://doi.org/10.1103/RevModPhys.92.045004. Also available as https://arxiv.org/abs/1901.03928. An excellent pedagogical review of causal properties of general relativity. General relativity Mathematical methods in general relativity Theorems in general relativity
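The focusing step invoked in the proof sketch can be made quantitative with a short, standard estimate; the following derivation is a textbook argument offered here as a sketch rather than something spelled out in the article text. Dropping the manifestly non-positive shear and curvature terms from the null Raychaudhuri equation (the null energy condition makes the curvature term non-positive) leaves a differential inequality, written here in LaTeX:

    \[
    \frac{d\theta}{d\lambda}
      = -\tfrac{1}{2}\theta^{2} - \sigma_{ab}\sigma^{ab} - R_{ab}k^{a}k^{b}
      \;\le\; -\tfrac{1}{2}\theta^{2}
    \quad\Longrightarrow\quad
    \frac{d}{d\lambda}\!\left(\frac{1}{\theta}\right) \ge \frac{1}{2}
    \quad\Longrightarrow\quad
    \frac{1}{\theta(\lambda)} \ge \frac{1}{\theta_{0}} + \frac{\lambda}{2}.
    \]

So if the rays are initially converging (θ0 < 0), the expansion θ diverges to −∞, i.e. a caustic forms, within affine parameter λ ≤ 2/|θ0|; this is exactly the uniform bound invoked above for the compactness of ∂J+(T).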
Penrose–Hawking singularity theorems
Physics,Mathematics
3,485
47,955,920
https://en.wikipedia.org/wiki/Rae-1%20family
The retinoic acid early inducible 1 (RAE-1) family of murine cell surface glycoproteins is composed of at least five members (RAE-1α–ε). Genes encoding these proteins are located on mouse chromosome 10. RAE-1 proteins are related to MHC class I; they are made up of an external α1α2 domain that is linked to the cell membrane by a GPI anchor. They function as stress-induced ligands for the NKG2D receptor, and their expression is low or absent on normal cells. However, they are constitutively expressed on some tumour cells, and they can be upregulated by retinoic acid. References Glycoproteins Immunology
Rae-1 family
Chemistry,Biology
153
25,115,911
https://en.wikipedia.org/wiki/Marchenko%E2%80%93Pastur%20distribution
In the mathematical theory of random matrices, the Marchenko–Pastur distribution, or Marchenko–Pastur law, describes the asymptotic behavior of singular values of large rectangular random matrices. The theorem is named after Soviet mathematicians Volodymyr Marchenko and Leonid Pastur, who proved this result in 1967. If X denotes an m × n random matrix whose entries are independent identically distributed random variables with mean 0 and variance σ², let Y_n = (1/n) X X^T and let λ_1, λ_2, ..., λ_m be the eigenvalues of Y_n (viewed as random variables). Finally, consider the random measure μ_m(A) = (1/m) #{λ_j ∈ A}, counting the number of eigenvalues in the subset A included in R. Theorem. Assume that m, n → ∞ so that the ratio m/n → λ ∈ (0, +∞). Then μ_m → μ (in weak* topology in distribution), where μ = (1 − 1/λ) δ_0 + ν if λ > 1 and μ = ν if 0 ≤ λ ≤ 1, and with dν(x) = (1/(2πσ²)) · (√((λ_+ − x)(x − λ_−))/(λx)) dx on [λ_−, λ_+], where λ_± = σ²(1 ± √λ)². The Marchenko–Pastur law also arises as the free Poisson law in free probability theory, having rate 1/λ and jump size λσ². Moments For each k ≥ 1, its k-th moment is ∫ x^k dν(x) = σ^{2k} Σ_{r=0}^{k−1} (λ^r/(r + 1)) C(k, r) C(k − 1, r). Some transforms of this law The Stieltjes transform is given by s(z) = (σ²(1 − λ) − z + √((z − σ²(λ + 1))² − 4λσ⁴))/(2λzσ²) for complex numbers z of positive imaginary part, where the complex square root is also taken to have positive imaginary part. It satisfies the quadratic equation λσ²z s(z)² + (z − σ²(1 − λ)) s(z) + 1 = 0. The Stieltjes transform can be repackaged in the form of the R-transform, which is given by R(z) = σ²/(1 − σ²λz). The S-transform is given by S(z) = 1/(σ²(1 + λz)). For the case of σ² = 1, these transforms simplify accordingly. For exact analysis of high dimensional regression in the proportional asymptotic regime, a convenient closed form of the Stieltjes transform is often used; certain functionals of a random variable satisfying the Marchenko–Pastur law show up in the limiting bias and variance, respectively, of ridge regression and other regularized linear regression problems. Application to correlation matrices For the special case of correlation matrices, we know that σ² = 1 and λ = m/n. This bounds the probability mass over the interval defined by λ_± = (1 ± √λ)². Since this distribution describes the spectrum of random matrices with mean 0, the eigenvalues of correlation matrices that fall inside of the aforementioned interval could be considered spurious or noise. For instance, obtaining a correlation matrix of 10 stock returns calculated over a period of 252 trading days would render λ_+ = (1 + √(10/252))² ≈ 1.43. Thus, out of 10 eigenvalues of said correlation matrix, only the values higher than 1.43 would be considered significantly different from random. See also Wigner semicircle distribution Tracy–Widom distribution References Probability distributions Random matrices
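The theorem is easy to check numerically. Below is a minimal Monte Carlo sketch in Python (plain NumPy; all names are illustrative) that samples one large matrix and compares the empirical eigenvalue range of (1/n) X X^T with the predicted support [λ_−, λ_+]:

    import numpy as np

    rng = np.random.default_rng(0)
    m, n, sigma = 500, 2000, 1.0
    ratio = m / n

    X = rng.normal(0.0, sigma, size=(m, n))
    eigs = np.linalg.eigvalsh(X @ X.T / n)

    lam_minus = sigma**2 * (1 - np.sqrt(ratio))**2
    lam_plus = sigma**2 * (1 + np.sqrt(ratio))**2
    print(f"predicted support: [{lam_minus:.3f}, {lam_plus:.3f}]")
    print(f"empirical range:   [{eigs.min():.3f}, {eigs.max():.3f}]")
    # Fraction of eigenvalues inside the predicted support (approaches 1).
    print(np.mean((eigs >= lam_minus) & (eigs <= lam_plus)))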
Marchenko–Pastur distribution
Physics,Mathematics
502
160,993
https://en.wikipedia.org/wiki/Generating%20function
In mathematics, a generating function is a representation of an infinite sequence of numbers as the coefficients of a formal power series. Generating functions are often expressed in closed form (rather than as a series), by some expression involving operations on the formal series. There are various types of generating functions, including ordinary generating functions, exponential generating functions, Lambert series, Bell series, and Dirichlet series. Every sequence in principle has a generating function of each type (except that Lambert and Dirichlet series require indices to start at 1 rather than 0), but the ease with which they can be handled may differ considerably. The particular generating function, if any, that is most useful in a given context will depend upon the nature of the sequence and the details of the problem being addressed. Generating functions are sometimes called generating series, in that a series of terms can be said to be the generator of its sequence of term coefficients. History Generating functions were first introduced by Abraham de Moivre in 1730, in order to solve the general linear recurrence problem. George Pólya writes in Mathematics and plausible reasoning: The name "generating function" is due to Laplace. Yet, without giving it a name, Euler used the device of generating functions long before Laplace [..]. He applied this mathematical tool to several problems in Combinatory Analysis and the Theory of Numbers. Definition Convergence Unlike an ordinary series, the formal power series Σ_{n≥0} a_n x^n is not required to converge: in fact, the generating function is not actually regarded as a function, and the "variable" x remains an indeterminate. One can generalize to formal power series in more than one indeterminate, to encode information about infinite multi-dimensional arrays of numbers. Thus generating functions are not functions in the formal sense of a mapping from a domain to a codomain. These expressions in terms of the indeterminate x may involve arithmetic operations, differentiation with respect to x and composition with (i.e., substitution into) other generating functions; since these operations are also defined for functions, the result looks like a function of x. Indeed, the closed form expression can often be interpreted as a function that can be evaluated at (sufficiently small) concrete values of x, and which has the formal series as its series expansion; this explains the designation "generating functions". However such interpretation is not required to be possible, because formal series are not required to give a convergent series when a nonzero numeric value is substituted for x. Limitations Not all expressions that are meaningful as functions of x are meaningful as expressions designating formal series; for example, negative and fractional powers of x are examples of functions that do not have a corresponding formal power series. Types Ordinary generating function (OGF) When the term generating function is used without qualification, it is usually taken to mean an ordinary generating function. The ordinary generating function of a sequence a_n is G(a_n; x) = Σ_{n≥0} a_n x^n. If a_n is the probability mass function of a discrete random variable, then its ordinary generating function is called a probability-generating function. Exponential generating function (EGF) The exponential generating function of a sequence a_n is EG(a_n; x) = Σ_{n≥0} a_n x^n/n!. Exponential generating functions are generally more convenient than ordinary generating functions for combinatorial enumeration problems that involve labelled objects.
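Computer algebra systems make these definitions concrete. The following is a minimal sympy sketch (the sequence a_n = 2^n and all variable names are chosen purely for illustration) that recovers a sequence from its OGF and from its EGF:

    import sympy as sp

    x = sp.symbols('x')
    N = 8

    ogf = 1 / (1 - 2 * x)    # OGF of a_n = 2^n
    egf = sp.exp(2 * x)      # EGF of the same sequence

    # Coefficient of x^n in the OGF gives a_n directly.
    ogf_poly = sp.series(ogf, x, 0, N).removeO()
    print([ogf_poly.coeff(x, n) for n in range(N)])   # [1, 2, 4, ..., 128]

    # For the EGF, multiply the coefficient of x^n by n! to recover a_n.
    egf_poly = sp.series(egf, x, 0, N).removeO()
    print([sp.factorial(n) * egf_poly.coeff(x, n) for n in range(N)])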
Another benefit of exponential generating functions is that they are useful in transferring linear recurrence relations to the realm of differential equations. For example, take the Fibonacci sequence f_n that satisfies the linear recurrence relation f_{n+2} = f_{n+1} + f_n. The corresponding exponential generating function has the form EF(x) = Σ_{n≥0} f_n x^n/n! and its derivatives can readily be shown to satisfy the differential equation EF''(x) = EF'(x) + EF(x) as a direct analogue with the recurrence relation above. In this view, the factorial term n! is merely a counter-term to normalise the derivative operator acting on x^n. Poisson generating function The Poisson generating function of a sequence a_n is PG(a_n; x) = Σ_{n≥0} a_n e^{−x} x^n/n!. Lambert series The Lambert series of a sequence a_n is LG(a_n; x) = Σ_{n≥1} a_n x^n/(1 − x^n). Note that in a Lambert series the index n starts at 1, not at 0, as the first term would otherwise be undefined. The Lambert series coefficients in the power series expansions, b_n := [x^n] LG(a_n; x) for integers n ≥ 1, are related by the divisor sum b_n = Σ_{d|n} a_d. The main article provides several more classical, or at least well-known examples related to special arithmetic functions in number theory. As an example of a Lambert series identity not given in the main article, one has, for |x| < 1, the special case identity for the generating function of the divisor function, d(n), given by Σ_{n≥1} x^n/(1 − x^n) = Σ_{n≥1} d(n) x^n. Bell series The Bell series of a sequence a_n is an expression in terms of both an indeterminate x and a prime p and is given by: f_p(x) = Σ_{n≥0} a_{p^n} x^n. Dirichlet series generating functions (DGFs) Formal Dirichlet series are often classified as generating functions, although they are not strictly formal power series. The Dirichlet series generating function of a sequence a_n is: DG(a_n; s) = Σ_{n≥1} a_n/n^s. The Dirichlet series generating function is especially useful when a_n is a multiplicative function, in which case it has an Euler product expression in terms of the function's Bell series: DG(a_n; s) = Π_p f_p(p^{−s}). If a_n = χ(n) is a Dirichlet character then its Dirichlet series generating function is called a Dirichlet L-series. We also have a relation between the pair of coefficients in the Lambert series expansions above and their DGFs. Namely, we can prove that [x^n] LG(a_n; x) = b_n if and only if DG(b_n; s) = DG(a_n; s) ζ(s), where ζ(s) is the Riemann zeta function. Polynomial sequence generating functions The idea of generating functions can be extended to sequences of other objects. Thus, for example, polynomial sequences of binomial type are generated by e^{xf(t)} = Σ_{n≥0} p_n(x) t^n/n!, where p_n(x) is a sequence of polynomials and f(t) is a function of a certain form. Sheffer sequences are generated in a similar way. See the main article generalized Appell polynomials for more information. Examples of polynomial sequences generated by more complex generating functions include: Appell polynomials Chebyshev polynomials Difference polynomials Generalized Appell polynomials q-difference polynomials Other generating functions Other sequences generated by more complex generating functions include: Double exponential generating functions. For example: Aitken's Array: Triangle of Numbers Hadamard products of generating functions and diagonal generating functions, and their corresponding integral transformations Convolution polynomials Knuth's article titled "Convolution Polynomials" defines a generalized class of convolution polynomial sequences by their special generating functions of the form F(z)^x = Σ_{n≥0} f_n(x) z^n for some analytic function F with a power series expansion such that F(0) = 1.
We say that a family of polynomials, f_0, f_1, f_2, ..., forms a convolution family if deg f_n ≤ n and if the following convolution condition holds for all x, y and for all n ≥ 0: f_n(x + y) = f_n(x)f_0(y) + f_{n−1}(x)f_1(y) + ⋯ + f_1(x)f_{n−1}(y) + f_0(x)f_n(y). We see that for non-identically zero convolution families, this definition is equivalent to requiring that the sequence have an ordinary generating function of the first form given above. A sequence of convolution polynomials defined in the notation above has the following properties: The sequence n! · f_n(x) is of binomial type. Special values of the sequence include f_n(1) = [z^n] F(z) and f_n(0) = δ_{n,0}, and For arbitrary (fixed) x and y, these polynomials satisfy convolution formulas of the form For a fixed non-zero parameter t, we have modified generating functions for these convolution polynomial sequences given by Σ_{n≥0} (x/(x + tn)) f_n(x + tn) z^n = F_t(z)^x, where F_t(z) is implicitly defined by a functional equation of the form F_t(z) = F(z F_t(z)^t). Moreover, we can use matrix methods (as in the reference) to prove that given two convolution polynomial sequences, f_n(x) and g_n(x), with respective corresponding generating functions, F(z)^x and G(z)^x, then for arbitrary t we have the identity Examples of convolution polynomial sequences include the binomial power series, B_t(z) = 1 + z B_t(z)^t, so-termed tree polynomials, the Bell numbers, B(n), the Laguerre polynomials, and the Stirling convolution polynomials. Ordinary generating functions Examples for simple sequences Polynomials are a special case of ordinary generating functions, corresponding to finite sequences, or equivalently sequences that vanish after a certain point. These are important in that many finite sequences can usefully be interpreted as generating functions, such as the Poincaré polynomial and others. A fundamental generating function is that of the constant sequence 1, 1, 1, 1, ..., whose ordinary generating function is the geometric series Σ_{n≥0} x^n = 1/(1 − x). The left-hand side is the Maclaurin series expansion of the right-hand side. Alternatively, the equality can be justified by multiplying the power series on the left by 1 − x, and checking that the result is the constant power series 1 (in other words, that all coefficients except the one of x^0 are equal to 0). Moreover, there can be no other power series with this property. The left-hand side therefore designates the multiplicative inverse of 1 − x in the ring of power series. Expressions for the ordinary generating function of other sequences are easily derived from this one. For instance, the substitution x → ax gives the generating function for the geometric sequence 1, a, a², a³, ... for any constant a: Σ_{n≥0} (ax)^n = 1/(1 − ax). (The equality also follows directly from the fact that the left-hand side is the Maclaurin series expansion of the right-hand side.)
In particular, taking a = 2 gives Σ_{n≥0} 2^n x^n = 1/(1 − 2x). One can also introduce regular gaps in the sequence by replacing x by some power of x, so for instance for the sequence 1, 0, 1, 0, 1, 0, ... (which skips over the odd powers x, x³, x⁵, ...) one gets the generating function Σ_{n≥0} x^{2n} = 1/(1 − x²). By squaring the initial generating function, or by finding the derivative of both sides with respect to x and making a change of running variable n → n + 1, one sees that the coefficients form the sequence 1, 2, 3, 4, 5, ..., so one has Σ_{n≥0} (n + 1) x^n = 1/(1 − x)², and the third power has as coefficients the triangular numbers 1, 3, 6, 10, ..., whose nth term is the binomial coefficient C(n + 2, 2), so that Σ_{n≥0} C(n + 2, 2) x^n = 1/(1 − x)³. More generally, for any non-negative integer k and non-zero real value a, it is true that Σ_{n≥0} a^n C(n + k, k) x^n = 1/(1 − ax)^{k+1}. Since 2 C(n + 2, 2) − 3 C(n + 1, 1) + C(n, 0) = n², one can find the ordinary generating function for the sequence 0, 1, 4, 9, 16, ... of square numbers by linear combination of binomial-coefficient generating sequences: G(n²; x) = 2/(1 − x)³ − 3/(1 − x)² + 1/(1 − x) = x(x + 1)/(1 − x)³. We may also expand alternately to generate this same sequence of squares as a sum of derivatives of the geometric series in the following form: G(n²; x) = x/(1 − x)² + 2x²/(1 − x)³. By induction, we can similarly show for positive integers m that n^m = Σ_{j=0}^m S(m, j) · n(n − 1)⋯(n − j + 1), where S(m, j) denote the Stirling numbers of the second kind and where the generating function Σ_{n≥0} n(n − 1)⋯(n − j + 1) x^n = j! x^j/(1 − x)^{j+1}, so that we can form the analogous generating functions over the integral mth powers generalizing the result in the square case above. In particular, since we can write n^m as such a sum of falling factorials, we can apply a well-known finite sum identity involving the Stirling numbers to obtain that Σ_{n≥0} n^m x^n = Σ_{j=0}^m S(m, j) j! x^j/(1 − x)^{j+1}. Rational functions The ordinary generating function of a sequence can be expressed as a rational function (the ratio of two finite-degree polynomials) if and only if the sequence is a linear recursive sequence with constant coefficients; this generalizes the examples above. Conversely, every sequence generated by a fraction of polynomials satisfies a linear recurrence with constant coefficients; these coefficients are identical to the coefficients of the fraction denominator polynomial (so they can be directly read off). This observation shows it is easy to solve for generating functions of sequences defined by a linear finite difference equation with constant coefficients, and hence for explicit closed-form formulas for the coefficients of these generating functions. The prototypical example here is to derive Binet's formula for the Fibonacci numbers via generating function techniques. We also notice that the class of rational generating functions precisely corresponds to the generating functions that enumerate quasi-polynomial sequences of the form f_n = p_1(n) ρ_1^n + ⋯ + p_ℓ(n) ρ_ℓ^n, where the reciprocal roots, ρ_i, are fixed scalars and where p_i(n) is a polynomial in n for all 1 ≤ i ≤ ℓ. In general, Hadamard products of rational functions produce rational generating functions. Similarly, if F(s, t) := Σ_{m,n≥0} f(m, n) s^m t^n is a bivariate rational generating function, then its corresponding diagonal generating function, diag(F) := Σ_{n≥0} f(n, n) z^n, is algebraic. For example, if we let F(s, t) := Σ_{i,j≥0} C(i + j, i) s^i t^j = 1/(1 − s − t), then this generating function's diagonal coefficient generating function is given by the well-known OGF formula diag(F) = Σ_{n≥0} C(2n, n) z^n = 1/√(1 − 4z). This result is computed in many ways, including Cauchy's integral formula or contour integration, taking complex residues, or by direct manipulations of formal power series in two variables. Operations on generating functions Multiplication yields convolution Multiplication of ordinary generating functions yields a discrete convolution (the Cauchy product) of the sequences. For example, the sequence of cumulative sums (compare to the slightly more general Euler–Maclaurin formula) of a sequence with ordinary generating function G(a_n; x) has the generating function G(a_n; x) · 1/(1 − x) because 1/(1 − x) is the ordinary generating function for the sequence (1, 1, ...).
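Convolution by multiplication of generating functions is a one-liner in code. A minimal sketch in plain Python (names illustrative): multiplying a series by 1/(1 − x), i.e. convolving its coefficients with the all-ones sequence, produces the cumulative sums, checked here on the squares from above.

    def convolve(a, b):
        """Cauchy product of two coefficient lists: c[n] = sum_k a[k] * b[n-k]."""
        n = min(len(a), len(b))
        return [sum(a[k] * b[i - k] for k in range(i + 1)) for i in range(n)]

    squares = [n * n for n in range(10)]   # coefficients of x(x+1)/(1-x)^3
    ones = [1] * 10                        # coefficients of 1/(1-x)
    print(convolve(squares, ones))         # [0, 1, 5, 14, 30, 55, ...]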
See also the section on convolutions in the applications section of this article below for further examples of problem solving with convolutions of generating functions and interpretations. Shifting sequence indices For integers m ≥ 1, we have the following two analogous identities for the modified generating functions enumerating the shifted sequence variants a_{n−m} and a_{n+m}, respectively: z^m G(z) = Σ_{n≥m} a_{n−m} z^n and (G(z) − a_0 − a_1 z − ⋯ − a_{m−1} z^{m−1})/z^m = Σ_{n≥0} a_{n+m} z^n. Differentiation and integration of generating functions We have the following respective power series expansions for the first derivative of a generating function and its integral: G′(z) = Σ_{n≥0} (n + 1) a_{n+1} z^n and ∫_0^z G(t) dt = Σ_{n≥1} (a_{n−1}/n) z^n. The differentiation–multiplication operation of the second identity can be repeated k times to multiply the sequence by n^k, but that requires alternating between differentiation and multiplication. If instead doing k differentiations in sequence, the effect is to multiply by the kth falling factorial: z^k G^{(k)}(z) = Σ_{n≥0} n(n − 1)⋯(n − k + 1) a_n z^n. Using the Stirling numbers of the second kind, that can be turned into another formula for multiplying by n^m as follows (see the main article on generating function transformations): Σ_{j=0}^m S(m, j) z^j G^{(j)}(z) = Σ_{n≥0} n^m a_n z^n. A negative-order reversal of this sequence powers formula corresponding to the operation of repeated integration is defined by the zeta series transformation and its generalizations defined as a derivative-based transformation of generating functions, or alternately termwise, by performing an integral transformation on the sequence generating function. Related operations of performing fractional integration on a sequence generating function are discussed here. Enumerating arithmetic progressions of sequences In this section we give formulas for generating functions enumerating the sequence a_{an+b} given an ordinary generating function G(z), where a ≥ 2, 0 ≤ b < a, and a and b are integers (see the main article on transformations). For a = 2, this is simply the familiar decomposition of a function into even and odd parts (i.e., even and odd powers): Σ_{n≥0} a_{2n} z^{2n} = (G(z) + G(−z))/2 and Σ_{n≥0} a_{2n+1} z^{2n+1} = (G(z) − G(−z))/2. More generally, suppose that a ≥ 3 and that ω_a = exp(2πi/a) denotes the ath primitive root of unity. Then, as an application of the discrete Fourier transform, we have the formula Σ_{n≥0} a_{an+b} z^{an+b} = (1/a) Σ_{m=0}^{a−1} ω_a^{−mb} G(ω_a^m z). For integers m ≥ 1, another useful formula providing somewhat reversed floored arithmetic progressions (effectively repeating each coefficient m times) is generated by the identity Σ_{n≥0} a_{⌊n/m⌋} z^n = ((1 − z^m)/(1 − z)) G(z^m). P-recursive sequences and holonomic generating functions Definitions A formal power series (or function) F(z) is said to be holonomic if it satisfies a linear differential equation of the form c_0(z) F^{(r)}(z) + c_1(z) F^{(r−1)}(z) + ⋯ + c_r(z) F(z) = 0, where the coefficients c_i(z) are in the field of rational functions. Equivalently, F(z) is holonomic if the vector space over the rational functions spanned by the set of all of its derivatives is finite dimensional. Since we can clear denominators if need be in the previous equation, we may assume that the functions, c_i(z), are polynomials in z. Thus we can see an equivalent condition that a generating function is holonomic if its coefficients satisfy a P-recurrence of the form p_s(n) f_{n+s} + p_{s−1}(n) f_{n+s−1} + ⋯ + p_0(n) f_n = 0, for all large enough n and where the p_i(n) are fixed finite-degree polynomials in n. In other words, the properties that a sequence be P-recursive and have a holonomic generating function are equivalent. Holonomic functions are closed under the Hadamard product operation on generating functions. Examples The functions e^z, log z, cos z, arcsin z, √(1 + z), the dilogarithm function Li_2(z), the generalized hypergeometric functions and the functions defined by the power series Σ_{n≥0} z^n/(n!)² and the non-convergent Σ_{n≥0} n! z^n are all holonomic. Examples of P-recursive sequences with holonomic generating functions include the Catalan numbers f_n := C(2n, n)/(n + 1) and the factorials f_n := n!, where sequences such as f_n := √n and f_n := log n are not P-recursive due to the nature of singularities in their corresponding generating functions.
Similarly, functions with infinitely many singularities such as tan z, sec z, and Γ(z) are not holonomic functions.

Software for working with P-recursive sequences and holonomic generating functions

Tools for processing and working with P-recursive sequences in Mathematica include the software packages provided for non-commercial use on the RISC Combinatorics Group algorithmic combinatorics software site. Despite being mostly closed-source, particularly powerful tools in this software suite are provided by the Guess package for guessing P-recurrences for arbitrary input sequences (useful for experimental mathematics and exploration) and the Sigma package, which is able to find P-recurrences for many sums and solve for closed-form solutions to P-recurrences involving generalized harmonic numbers. Other packages listed on this particular RISC site are targeted at working with holonomic generating functions specifically.

Relation to discrete-time Fourier transform

When the series converges absolutely,

G(a_n; e^{−iω}) = ∑_{n≥0} a_n e^{−iωn}

is the discrete-time Fourier transform of the sequence a_0, a_1, ….

Asymptotic growth of a sequence

In calculus, often the growth rate of the coefficients of a power series can be used to deduce a radius of convergence for the power series. The reverse can also hold; often the radius of convergence for a generating function can be used to deduce the asymptotic growth of the underlying sequence.

For instance, if an ordinary generating function G(a_n; x) that has a finite radius of convergence r can be written as

G(a_n; x) = (A(x) + B(x) (1 − x/r)^{−β}) / x^α,

where each of A(x) and B(x) is a function that is analytic to a radius of convergence greater than r (or is entire), and where B(r) ≠ 0, then

a_n ~ (B(r)/r^α) · (n^{β−1}/Γ(β)) · (1/r)^n ~ (B(r)/r^α) · C(n + β − 1, n) · (1/r)^n,

using the gamma function, a binomial coefficient, or a multiset coefficient. Note that the limit as n goes to infinity of the ratio of a_n to any of these expressions is guaranteed to be 1; not merely that a_n is proportional to them. Often this approach can be iterated to generate several terms in an asymptotic series for a_n. In particular,

G(a_n − (B(r)/r^α) C(n + β − 1, n) (1/r)^n; x) = G(a_n; x) − (B(r)/r^α) (1 − x/r)^{−β}.

The asymptotic growth of the coefficients of this generating function can then be sought via the finding of A, B, α, β, and r to describe the generating function, as above.

Similar asymptotic analysis is possible for exponential generating functions; with an exponential generating function, it is a_n/n! that grows according to these asymptotic formulae. Generally, if the generating function of one sequence minus the generating function of a second sequence has a radius of convergence that is larger than the radius of convergence of the individual generating functions, then the two sequences have the same asymptotic growth.

Asymptotic growth of the sequence of squares

As derived above, the ordinary generating function for the sequence of squares is

G(n^2; x) = x(x + 1)/(1 − x)^3.

With r = 1, α = −1, β = 3, A(x) = 0, and B(x) = x + 1, we can verify that the squares grow as expected, like the squares:

a_n ~ (B(r)/r^α) · (n^{β−1}/Γ(β)) · (1/r)^n = ((1 + 1)/1^{−1}) · (n^2/2!) · 1^n = n^2.

Asymptotic growth of the Catalan numbers

The ordinary generating function for the Catalan numbers is

G(C_n; x) = (1 − √(1 − 4x))/(2x).

With r = 1/4, α = 1, β = −1/2, A(x) = 1/2, and B(x) = −1/2, we can conclude that, for the Catalan numbers:

C_n ~ (B(r)/r^α) · (n^{β−1}/Γ(β)) · (1/r)^n = (−(1/2)/(1/4)) · (n^{−3/2}/Γ(−1/2)) · 4^n = 4^n/(n^{3/2} √π),

since Γ(−1/2) = −2√π.
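A quick numerical sanity check of that Catalan asymptotic (plain Python; the comparison is done in log space so that 4^n does not overflow a float):

```python
# Check that catalan(n) / (4^n / (n^(3/2) sqrt(pi))) -> 1 as n grows.
from math import comb, log, pi, exp

def catalan(n):
    return comb(2*n, n)//(n + 1)

for n in (10, 100, 1000, 10000):
    log_ratio = log(catalan(n)) - (n*log(4) - 1.5*log(n) - 0.5*log(pi))
    print(n, exp(log_ratio))   # ratio approaches 1 from below
```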
Bivariate and multivariate generating functions

The generating function in several variables can be generalized to arrays with multiple indices. These non-polynomial double sum examples are called multivariate generating functions, or super generating functions. For two variables, these are often called bivariate generating functions.

Bivariate case

The ordinary generating function of a two-dimensional array a_{m,n} (where n and m are natural numbers) is

F(x, y) = ∑_{m,n≥0} a_{m,n} x^m y^n.

For instance, since (1 + x)^n is the ordinary generating function for the binomial coefficients C(n, k) for a fixed n, one may ask for a bivariate generating function that generates the binomial coefficients C(n, k) for all k and n. To do this, consider (1 + x)^n itself as a sequence in n, and find the generating function in y that has these sequence values as coefficients. Since the generating function for a^n is 1/(1 − ay), the generating function for the binomial coefficients is:

∑_{n,k≥0} C(n, k) x^k y^n = 1/(1 − (1 + x)y) = 1/(1 − y − xy).

Other examples of such include the following two-variable generating functions for the binomial coefficients, the Stirling numbers, and the Eulerian numbers, where w and z denote the two variables:

e^{z + wz} = ∑_{m,n≥0} C(n, m) w^m z^n/n!
e^{w(e^z − 1)} = ∑_{m,n≥0} S(n, m) w^m z^n/n!
(1 − z)^{−w} = ∑_{m,n≥0} [n; m] w^m z^n/n!
(w − 1)/(w − e^{(w−1)z}) = ∑_{m,n≥0} A(n, m) w^m z^n/n!,

where S(n, m), [n; m], and A(n, m) denote the Stirling numbers of the second kind, the unsigned Stirling numbers of the first kind, and the Eulerian numbers, respectively.

Multivariate case

Multivariate generating functions arise in practice when calculating the number of contingency tables of non-negative integers with specified row and column totals. Suppose the table has r rows and c columns; the row sums are t_1, …, t_r and the column sums are s_1, …, s_c. Then, according to I. J. Good, the number of such tables is the coefficient of

x_1^{t_1} ⋯ x_r^{t_r} y_1^{s_1} ⋯ y_c^{s_c}

in

∏_{i=1}^{r} ∏_{j=1}^{c} 1/(1 − x_i y_j).
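The bivariate formula for the binomial coefficients derived above is easy to test coefficient-by-coefficient; a minimal sketch assuming SymPy is available:

```python
# Expand 1/(1 - y(1 + x)) in y and check that the coefficient of x^k y^n
# is C(n, k), as derived above.
import sympy as sp

x, y = sp.symbols('x y')
N = 6
poly = sp.series(1/(1 - y*(1 + x)), y, 0, N).removeO().expand()

for n in range(N):
    row = sp.expand(poly.coeff(y, n))
    assert all(row.coeff(x, k) == sp.binomial(n, k) for k in range(n + 1))
print("coefficients match C(n, k) for n <", N)
```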
Representation by continued fractions (Jacobi-type J-fractions)

Definitions

Expansions of (formal) Jacobi-type and Stieltjes-type continued fractions (J-fractions and S-fractions, respectively) whose hth rational convergents represent 2h-order accurate power series are another way to express the typically divergent ordinary generating functions for many special one- and two-variate sequences. The particular form of the Jacobi-type continued fractions (J-fractions) is expanded as in the following equation and has the next corresponding power series expansion with respect to z for some specific, application-dependent component sequences {ab_i} and {c_i}, where z ≠ 0 denotes the formal variable in the second power series expansion given below:

J^[∞](z) = 1/(1 − c_1 z − ab_2 z^2/(1 − c_2 z − ab_3 z^2/(1 − c_3 z − ab_4 z^2/(⋯))))
         = 1 + c_1 z + (ab_2 + c_1^2) z^2 + (2 ab_2 c_1 + c_1^3 + ab_2 c_2) z^3 + ⋯.

The coefficients of z^n, denoted in shorthand by j_n := [z^n] J^[∞](z), in the previous equations correspond to matrix solutions of a system of equations determined by the component sequences, together with an addition formula relating the matrix entries over all integer indices.

Properties of the hth convergent functions

For h ≥ 2 (though in practice when h ≥ 5), we can define the rational hth convergents to the infinite J-fraction, J^[∞](z), expanded by

Conv_h(z) := P_h(z)/Q_h(z) = j_0 + j_1 z + ⋯ + j_{2h−1} z^{2h−1} + ∑_{n≥2h} ĉ_{h,n} z^n,

component-wise through the sequences P_h(z) and Q_h(z), defined recursively by

P_h(z) = (1 − c_h z) P_{h−1}(z) − ab_h z^2 P_{h−2}(z)
Q_h(z) = (1 − c_h z) Q_{h−1}(z) − ab_h z^2 Q_{h−2}(z).

Moreover, the rationality of the convergent function Conv_h(z) for all h ≥ 2 implies additional finite difference equations and congruence properties satisfied by the sequence of j_n; in particular, whenever an integer M divides ab_2 ab_3 ⋯ ab_{h+1}, we have the congruence

j_n ≡ [z^n] Conv_h(z) (mod M),

for non-symbolic, determinate choices of the parameter sequences {ab_i} and {c_i} when h ≥ 2, that is, when these sequences do not implicitly depend on an auxiliary parameter such as q, x, or R.

Examples

Closed-form formulas for the component sequences have been found computationally (and subsequently proved correct in the cited references) in several special cases of the prescribed sequences j_n generated by the general expansions of the J-fractions defined in the first subsection, with 0 < |a|, |b|, |q| < 1 and the parameters R and x treated as indeterminates with respect to these expansions; the prescribed sequences enumerated by the expansions of these J-fractions are defined in terms of the q-Pochhammer symbol, the Pochhammer symbol, and the binomial coefficients. The radii of convergence of these series corresponding to the definition of the Jacobi-type J-fractions given above are in general different from those of the corresponding power series expansions defining the ordinary generating functions of these sequences.

Examples

Square numbers

Generating functions for the sequence of square numbers a_n = n^2 are:

Ordinary generating function: G(n^2; x) = x(x + 1)/(1 − x)^3
Exponential generating function: EG(n^2; x) = ∑_{n≥0} n^2 x^n/n! = x(x + 1) e^x
Dirichlet series: ∑_{n≥1} n^2/n^s = ζ(s − 2),

where ζ(s) is the Riemann zeta function.

Applications

Generating functions are used to:

Find a closed formula for a sequence given in a recurrence relation, for example, Fibonacci numbers.
Find recurrence relations for sequences — the form of a generating function may suggest a recurrence formula.
Find relationships between sequences — if the generating functions of two sequences have a similar form, then the sequences themselves may be related.
Explore the asymptotic behaviour of sequences.
Prove identities involving sequences.
Solve enumeration problems in combinatorics and encode their solutions. Rook polynomials are an example of an application in combinatorics.
Evaluate infinite sums.

Various techniques: Evaluating sums and tackling other problems with generating functions

Example 1: Formula for sums of harmonic numbers

Generating functions give us several methods to manipulate sums and to establish identities between sums.

The simplest case occurs when s_n = ∑_{k=0}^{n} a_k. We then know that S(z) = A(z)/(1 − z) for the corresponding ordinary generating functions.

For example, we can manipulate

s_n = ∑_{k=1}^{n} H_k,

where H_k = 1 + 1/2 + ⋯ + 1/k are the harmonic numbers. Let

H(z) = ∑_{n≥1} H_n z^n

be the ordinary generating function of the harmonic numbers. Then

H(z) = (1/(1 − z)) ∑_{n≥1} z^n/n = log(1/(1 − z))/(1 − z),

and thus

S(z) = ∑_{n≥1} s_n z^n = log(1/(1 − z))/(1 − z)^2.

Using 1/(1 − z)^2 = ∑_{m≥0} (m + 1) z^m, convolution with the numerator yields

s_n = ∑_{k=1}^{n} (1/k)(n + 1 − k) = (n + 1) H_n − n,

which can also be written as

s_n = (n + 1)(H_{n+1} − 1).
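A numerical check of the harmonic-number identity just derived, using exact rational arithmetic (plain Python):

```python
# Verify sum_{k=1}^{n} H_k = (n+1)(H_{n+1} - 1) exactly for small n.
from fractions import Fraction

def H(n):
    return sum(Fraction(1, k) for k in range(1, n + 1))

for n in range(1, 25):
    assert sum(H(k) for k in range(1, n + 1)) == (n + 1)*(H(n + 1) - 1)
print("identity verified for n = 1..24")
```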
Example 2: Modified binomial coefficient sums and the binomial transform

As another example of using generating functions to relate sequences and manipulate sums, for an arbitrary sequence ⟨f_n⟩ we define two sequences of sums — the second a binomial-coefficient-weighted modification of the first — for all n ≥ 0, and seek to express the second sums in terms of the first. We suggest an approach by generating functions. First, we use the binomial transform to write the generating function for the first sum. Since the generating function of the auxiliary weight sequence has a known closed form, we may write the generating function for the second sum defined above in terms of it; in particular, this modified sum generating function can be put in a standard parameterized form, and it then follows that we may express the second sums through the first sums by extracting coefficients.

Example 3: Generating functions for mutually recursive sequences

In this example, we reformulate a generating function example given in Section 7.3 of Concrete Mathematics (see also Section 7.1 of the same reference for pretty pictures of generating function series). In particular, suppose that we seek the total number of ways (denoted U_n) to tile a 3-by-n rectangle with unmarked 2-by-1 domino pieces. Let the auxiliary sequence, V_n, be defined as the number of ways to cover a 3-by-n rectangle-minus-corner section of the full rectangle. We seek to use these definitions to give a closed-form formula for U_n without breaking down this definition further to handle the cases of vertical versus horizontal dominoes. Notice that the ordinary generating functions for our two sequences correspond to the series

U(z) = 1 + 3z^2 + 11z^4 + 41z^6 + ⋯
V(z) = z + 4z^3 + 15z^5 + 56z^7 + ⋯.

If we consider the possible configurations that can be given starting from the left edge of the 3-by-n rectangle, we are able to express the following mutually dependent, or mutually recursive, recurrence relations for our two sequences when n ≥ 2, defined as above, where U_0 = 1, U_1 = 0, V_0 = 0, and V_1 = 1:

U_n = 2 V_{n−1} + U_{n−2}
V_n = U_{n−1} + V_{n−2}.

Since we have that for all integers m ≥ 0, the index-shifted generating functions satisfy

z^m G(z) = ∑_{n≥m} g_{n−m} z^n,

we can use the initial conditions specified above and the previous two recurrence relations to see that we have the next two equations relating the generating functions for these sequences given by

U(z) = 2z V(z) + z^2 U(z) + 1
V(z) = z U(z) + z^2 V(z),

which then implies by solving the system of equations (and this is the particular trick to our method here) that

U(z) = (1 − z^2)/(1 − 4z^2 + z^4)   and   V(z) = z/(1 − 4z^2 + z^4).

Thus, by performing algebraic simplifications to the sequence resulting from the partial fractions expansions of the generating function in the previous equation, we find that U_{2n+1} = 0 and that

U_{2n} = (2 + √3)^n/(3 − √3) + (2 − √3)^n/(3 + √3) = ⌈(2 + √3)^n/(3 − √3)⌉,

for all integers n ≥ 0. We also note that the same shifted generating function technique applied to the second-order recurrence for the Fibonacci numbers is the prototypical example of using generating functions to solve recurrence relations in one variable already covered, or at least hinted at, in the subsection on rational functions given above.

Convolution (Cauchy products)

A discrete convolution of the terms in two formal power series turns a product of generating functions into a generating function enumerating a convolved sum of the original sequence terms (see Cauchy product).

Consider that A(z) and B(z) are ordinary generating functions:
C(z) = A(z) B(z) ⟺ [z^n] C(z) = ∑_{k=0}^{n} a_k b_{n−k}.

Consider that Â(z) and B̂(z) are exponential generating functions:
Ĉ(z) = Â(z) B̂(z) ⟺ [z^n/n!] Ĉ(z) = ∑_{k=0}^{n} C(n, k) a_k b_{n−k}.

Consider the triply convolved sequence resulting from the product of three ordinary generating functions:
[z^n] F(z) G(z) H(z) = ∑_{j+k+l=n} f_j g_k h_l.

Consider the m-fold convolution of a sequence with itself for some positive integer m ≥ 1 (see the example below for an application):
[z^n] F(z)^m = ∑_{k_1+k_2+⋯+k_m=n} f_{k_1} f_{k_2} ⋯ f_{k_m}.

Multiplication of generating functions, or convolution of their underlying sequences, can correspond to a notion of independent events in certain counting and probability scenarios. For example, if we adopt the notational convention that the probability generating function, or pgf, of a random variable Z is denoted by G_Z(z), then we can show that for any two random variables

G_{X+Y}(z) = G_X(z) G_Y(z),

if X and Y are independent. Similarly, the number of ways to pay n ≥ 0 cents in coin denominations of values in the set {1, 5, 10, 25, 50} (i.e., in pennies, nickels, dimes, quarters, and half dollars, respectively) is generated by the product

1/((1 − z)(1 − z^5)(1 − z^{10})(1 − z^{25})(1 − z^{50})),

and moreover, if we allow the n cents to be paid in coins of any positive integer denomination, we arrive at the generating function for the number of such combinations of change being generated by the partition function generating function expanded by the infinite q-Pochhammer symbol product,

∏_{n≥1} 1/(1 − z^n).
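The coin-change product above translates directly into a convolution of coefficient arrays; a short sketch in plain Python:

```python
# Count the ways to pay n cents with denominations {1, 5, 10, 25, 50} by
# multiplying the truncated series 1/(1 - z^d) together, one denomination
# at a time (each inner loop is a convolution with a geometric series).
def ways_to_pay(n, denoms=(1, 5, 10, 25, 50)):
    coeffs = [1] + [0]*n            # the constant series 1
    for d in denoms:
        for i in range(d, n + 1):   # multiply by 1/(1 - z^d), truncated
            coeffs[i] += coeffs[i - d]
    return coeffs[n]

print(ways_to_pay(100))   # 292 ways to change a dollar
```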
Example: Generating function for the Catalan numbers

An example where convolutions of generating functions are useful allows us to solve for a specific closed-form function representing the ordinary generating function for the Catalan numbers, C_n. In particular, this sequence has the combinatorial interpretation as being the number of ways to insert parentheses into the product x_0 · x_1 · ⋯ · x_n so that the order of multiplication is completely specified. For example, C_2 = 2, which corresponds to the two expressions x_0 · (x_1 · x_2) and (x_0 · x_1) · x_2. It follows that the sequence satisfies a recurrence relation given by

C_{n+1} = ∑_{k=0}^{n} C_k C_{n−k}, with C_0 = 1,

and so has a corresponding convolved generating function, C(z), satisfying

C(z) = 1 + z · C(z)^2.

Since C(0) = 1, we then arrive at a formula for this generating function given by

C(z) = (1 − √(1 − 4z))/(2z) = ∑_{n≥0} (C(2n, n)/(n + 1)) z^n.

Note that the first equation implicitly defining C(z) above implies that

C(z) = 1/(1 − z · C(z)),

which then leads to another "simple" (of form) continued fraction expansion of this generating function.

Example: Spanning trees of fans and convolutions of convolutions

A fan of order n is defined to be a graph on the vertices {0, 1, …, n} with 2n − 1 edges connected according to the following rules: Vertex 0 is connected by a single edge to each of the other n vertices, and vertex k is connected by a single edge to the next vertex k + 1 for all 1 ≤ k < n. There is one spanning tree of the fan of order one, three of order two, eight of order three, and so on. A spanning tree is a subgraph of a graph which contains all of the original vertices and which contains enough edges to make this subgraph connected, but not so many edges that there is a cycle in the subgraph. We ask how many spanning trees f_n of a fan of order n are possible for each n ≥ 1.

As an observation, we may approach the question by counting the number of ways to join adjacent sets of vertices. For example, when n = 4, we have that

f_4 = 4 + 3·1 + 1·3 + 2·2 + 1·1·2 + 1·2·1 + 2·1·1 + 1·1·1·1 = 21,

which is a sum over the m-fold convolutions of the sequence g_n = n = ⟨0, 1, 2, 3, …⟩ for m := 1, 2, 3, 4. More generally, we may write a formula for this sequence as

f_n = ∑_{m>0} ∑_{k_1+⋯+k_m=n, k_j>0} k_1 k_2 ⋯ k_m,

from which we see that the ordinary generating function for this sequence is given by the next sum of convolutions as

F(z) = G(z) + G(z)^2 + G(z)^3 + ⋯ = G(z)/(1 − G(z)) = z/(1 − 3z + z^2), where G(z) := z/(1 − z)^2,

from which we are able to extract an exact formula for the sequence by taking the partial fraction expansion of the last generating function.

Implicit generating functions and the Lagrange inversion formula

One often encounters generating functions specified by a functional equation, instead of an explicit specification. For example, the generating function T(z) for the number of binary trees on n nodes (leaves included) satisfies

T(z) = z (1 + T(z)^2).

The Lagrange inversion theorem is a tool used to explicitly evaluate solutions to such equations: if T(z) = z φ(T(z)) for a power series φ with φ(0) ≠ 0, then

[z^n] T(z) = (1/n) [w^{n−1}] φ(w)^n.

Applying the above theorem to our functional equation yields (with φ(w) := 1 + w^2):

[z^n] T(z) = (1/n) [w^{n−1}] (1 + w^2)^n.

Via the binomial theorem expansion, for even n, the formula returns 0. This is expected, as one can prove that the number of leaves of a binary tree is one more than the number of its internal nodes, so the total should always be an odd number. For odd n = 2m + 1, however, we get

[z^n] T(z) = (1/(2m + 1)) C(2m + 1, m).

The expression becomes much neater if we let m be the number of internal nodes: now the expression becomes

C(2m, m)/(m + 1),

that is, the mth Catalan number.

Introducing a free parameter (snake oil method)

Sometimes the sum s_n is complicated, and it is not always easy to evaluate. The "Free Parameter" method is another method (called "snake oil" by H. Wilf) to evaluate these sums. Both methods discussed so far have n as a limit in the summation. When n does not appear explicitly in the summation, we may consider n as a "free" parameter and treat s_n as a coefficient of F(z) = ∑_n s_n z^n, change the order of the summations on n and k, and try to compute the inner sum.

For example, if we want to compute

s_n = ∑_{k≥0} C(n + k, m + 2k) C(2k, k) ((−1)^k/(k + 1)), for n, m ≥ 1,

we can treat n as a "free" parameter, and set

F(z) = ∑_{n≥0} z^n ∑_{k≥0} C(n + k, m + 2k) C(2k, k) ((−1)^k/(k + 1)).

Interchanging summation ("snake oil") gives

F(z) = ∑_{k≥0} C(2k, k) ((−1)^k/(k + 1)) ∑_{n≥0} C(n + k, m + 2k) z^n.

Now the inner sum is z^{m+k}/(1 − z)^{m+2k+1}. Thus

F(z) = (z^m/(1 − z)^{m+1}) ∑_{k≥0} (C(2k, k)/(k + 1)) ((−z)^k/(1 − z)^{2k}) = (z^m/(1 − z)^{m+1}) · C(−z/(1 − z)^2),

where C(x) denotes the Catalan-number generating function from the previous example. Since C(−z/(1 − z)^2) = 1 − z, we then obtain

F(z) = z^m/(1 − z)^m, so that s_n = C(n − 1, m − 1).

It is instructive to use the same method again for the sum, but this time take m as the free parameter instead of n. We thus set

G(w) = ∑_{m≥0} w^m ∑_{k≥0} C(n + k, m + 2k) C(2k, k) ((−1)^k/(k + 1)).

Interchanging summation ("snake oil") gives

G(w) = ∑_{k≥0} C(2k, k) ((−1)^k/(k + 1)) ∑_{m≥0} C(n + k, m + 2k) w^m.

Now the inner sum is (1 + w)^{n+k}/w^{2k}. Thus

G(w) = (1 + w)^n ∑_{k≥0} (C(2k, k)/(k + 1)) ((−(1 + w))^k/w^{2k}) = (1 + w)^n · C(−(1 + w)/w^2) = (1 + w)^n · w/(1 + w) = w(1 + w)^{n−1}.

Thus we obtain s_n = [w^m] w(1 + w)^{n−1} = C(n − 1, m − 1) for m ≥ 1, as before.
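Both snake-oil passes can be confirmed empirically; a short exact-arithmetic check in plain Python:

```python
# Verify sum_k C(n+k, m+2k) C(2k, k) (-1)^k/(k+1) = C(n-1, m-1) for small n, m.
from fractions import Fraction
from math import comb

def s(n, m):
    total = Fraction(0)
    for k in range(n + 1):          # terms vanish once m + 2k > n + k
        total += Fraction(comb(n + k, m + 2*k)*comb(2*k, k)*(-1)**k, k + 1)
    return total

assert all(s(n, m) == comb(n - 1, m - 1)
           for n in range(1, 12) for m in range(1, 12))
print("snake-oil evaluation verified for 1 <= n, m < 12")
```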
Generating functions prove congruences

We say that two generating functions (power series) are congruent modulo m, written A(z) ≡ B(z) (mod m), if their coefficients are congruent modulo m for all n ≥ 0, i.e., a_n ≡ b_n (mod m) for all relevant cases of the integers n (note that we need not assume that m is an integer here — it may very well be polynomial-valued in some indeterminate x, for example). If the "simpler" right-hand-side generating function, B(z), is a rational function of z, then the form of this sequence suggests that the sequence is eventually periodic modulo fixed particular cases of integer-valued m ≥ 2. For example, we can prove that the Euler numbers,

⟨E_n⟩ = ⟨1, 1, 5, 61, 1385, …⟩ ≡ ⟨1, 1, 2, 1, 2, 1, 2, …⟩ (mod 3),

satisfy the following congruence modulo 3:

∑_{n≥0} E_n z^n ≡ (1 − z^3)/((1 + z)(1 − z)^2) (mod 3).

One useful method of obtaining congruences for sequences enumerated by special generating functions modulo any integers (i.e., not only prime powers p^k) is given in the section on continued fraction representations of (even non-convergent) ordinary generating functions by J-fractions above; one particular result of this type, related to generating series expanded through a representation by continued fraction, can be found in Lando's Lectures on Generating Functions.

Generating functions also have other uses in proving congruences for their coefficients. We cite the next two specific examples deriving special-case congruences for the Stirling numbers of the first kind and for the partition function, which show the versatility of generating functions in tackling problems involving integer sequences.

The Stirling numbers modulo small integers

The main article on the Stirling numbers generated by the finite products

S_n(x) := ∑_{k=0}^{n} [n; k] x^k = x(x + 1)(x + 2) ⋯ (x + n − 1), n ≥ 1,

provides an overview of the congruences for these numbers derived strictly from properties of their generating function, as in Section 4.6 of Wilf's stock reference Generatingfunctionology. We repeat the basic argument and notice that when reduced modulo 2, these finite product generating functions each satisfy

S_n(x) ≡ x^{⌈n/2⌉} (x + 1)^{⌊n/2⌋} (mod 2),

which implies that the parity of these Stirling numbers matches that of the binomial coefficient

[n; k] ≡ C(⌊n/2⌋, k − ⌈n/2⌉) (mod 2),

and consequently shows that [n; k] is even whenever k < ⌈n/2⌉. Similarly, we can reduce the right-hand-side products defining the Stirling number generating functions modulo 3 to obtain slightly more complicated expressions for the residues of these Stirling numbers modulo 3.

Congruences for the partition function

In this example, we pull in some of the machinery of infinite products whose power series expansions generate the expansions of many special functions and enumerate partition functions. In particular, we recall that the partition function p(n) is generated by the reciprocal infinite q-Pochhammer symbol product (or z-Pochhammer product, as the case may be) given by

∑_{n≥0} p(n) z^n = 1/((1 − z)(1 − z^2)(1 − z^3) ⋯).

This partition function satisfies many known congruence properties, which notably include the following results, though there are still many open questions about the forms of related integer congruences for the function:

p(5m + 4) ≡ 0 (mod 5)
p(7m + 5) ≡ 0 (mod 7)
p(11m + 6) ≡ 0 (mod 11).

We show how to use generating functions and manipulations of congruences for formal power series to give a highly elementary proof of the first of these congruences listed above.

First, we observe that in the binomial coefficient generating function

1/(1 − z)^5 = ∑_{n≥0} C(n + 4, 4) z^n,

all of the coefficients are divisible by 5 except for those which correspond to the powers 1, z^5, z^{10}, …, and moreover in those cases the remainder of the coefficient is 1 modulo 5. Thus,

1/(1 − z)^5 ≡ 1/(1 − z^5) (mod 5),

or equivalently

(1 − z^5)/(1 − z)^5 ≡ 1 (mod 5).

It follows that

((1 − z^5)(1 − z^{10})(1 − z^{15}) ⋯)/((1 − z)(1 − z^2)(1 − z^3) ⋯)^5 ≡ 1 (mod 5).

Using the infinite product expansions of

z ∏_{n≥1} (1 − z^n)^4 = (z ∏_{n≥1} (1 − z^n)) · (∏_{n≥1} (1 − z^n)^3),

it can be shown that the coefficient of z^{5m+5} in z ∏_{n≥1} (1 − z^n)^4 is divisible by 5 for all m. Finally, since

∑_{n≥1} p(n − 1) z^n = z/((1 − z)(1 − z^2) ⋯) ≡ (z ∏_{n≥1} (1 − z^n)^4)/((1 − z^5)(1 − z^{10}) ⋯) (mod 5),

we may equate the coefficients of z^{5m+5} in the previous equations to prove our desired congruence result, namely that p(5m + 4) ≡ 0 (mod 5) for all m ≥ 0.
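The first congruence is easy to test empirically. A plain-Python sketch using Euler's pentagonal-number recurrence for p(n):

```python
# Compute p(0..N) via Euler's pentagonal-number recurrence and check that
# p(5m + 4) is divisible by 5.
def partitions(N):
    p = [1] + [0]*N
    for n in range(1, N + 1):
        total, k = 0, 1
        while k*(3*k - 1)//2 <= n:
            sign = 1 if k % 2 else -1
            total += sign*p[n - k*(3*k - 1)//2]
            if k*(3*k + 1)//2 <= n:
                total += sign*p[n - k*(3*k + 1)//2]
            k += 1
        p[n] = total
    return p

p = partitions(200)
assert all(p[5*m + 4] % 5 == 0 for m in range(40))
print("p(5m + 4) divisible by 5 for m < 40; e.g. p(4) =", p[4], "p(9) =", p[9])
```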
Transformations of generating functions

There are a number of transformations of generating functions that provide other applications (see the main article). A transformation of a sequence's ordinary generating function (OGF) provides a method of converting the generating function for one sequence into a generating function enumerating another. These transformations typically involve integral formulas involving a sequence OGF (see integral transformations) or weighted sums over the higher-order derivatives of these functions (see derivative transformations).

Generating function transformations can come into play when we seek to express a generating function for the sums

s_n := ∑_{m=0}^{n} C(n, m) f_m

in a form S(z) = g(z) F(ĝ(z)) involving the original sequence generating function. For sums of this binomial form, the generating function for the modified sum expressions is given by

S(z) = (1/(1 − z)) F(z/(1 − z))

(see also the binomial transform and the Stirling transform).

There are also integral formulas for converting between a sequence's OGF, F(z), and its exponential generating function, or EGF, F̂(z), and vice versa, given by

F(z) = ∫_0^∞ F̂(tz) e^{−t} dt
F̂(z) = (1/2π) ∫_{−π}^{π} F(z e^{−iθ}) e^{e^{iθ}} dθ,

provided that these integrals converge for appropriate values of z.

Tables of special generating functions

An initial listing of special mathematical series is found here. A number of useful and special sequence generating functions are found in Section 5.4 and 7.4 of Concrete Mathematics and in Section 2.5 of Wilf's Generatingfunctionology. Other special generating functions of note include the entries in the next table, which is by no means complete.

{| class="wikitable"
|-
! Formal power series !! Generating-function formula !! Notes
|-
| ∑_{n≥1} H_n z^n || (1/(1 − z)) log(1/(1 − z)) || H_n is a first-order harmonic number
|-
| ∑_{n≥0} B_n z^n/n! || z/(e^z − 1) || B_n is a Bernoulli number
|-
| ∑_{n≥0} F_{mn} z^n || F_m z/(1 − (F_{m−1} + F_{m+1})z + (−1)^m z^2) || F_n is a Fibonacci number and m ≥ 1 some integer
|-
| ∑_{n≥0} (x)_n z^n/n! || (1 − z)^{−x} || (x)_n denotes the rising factorial, or Pochhammer symbol
|-
| ∑_{n≥1} H_n^{(s)} z^n || Li_s(z)/(1 − z) || Li_s(z) is the polylogarithm function and H_n^{(s)} is a generalized harmonic number; the series converges for −1 < z < 1
|-
| ∑_{n≥0} S(n, k) z^n || z^k/((1 − z)(1 − 2z) ⋯ (1 − kz)) || S(n, k) is a Stirling number of the second kind; the individual terms in the partial fraction expansion of the right-hand side recover the finite-sum formula for S(n, k)
|}

See also

Moment-generating function
Probability-generating function
Generating function transformation
Stanley's reciprocity theorem
Integer partition
Combinatorial principles
Cyclic sieving
Z-transform
Umbral calculus
Coins in a fountain

Notes

References

Citations

Reprinted in

External links

"Introduction To Ordinary Generating Functions" by Mike Zabrocki, York University, Mathematics and Statistics
Generating Functions, Power Indices and Coin Change at cut-the-knot
"Generating Functions" by Ed Pegg Jr., Wolfram Demonstrations Project, 2007.

1730 introductions
Abraham de Moivre
Generating function
Mathematics
7,579
14,975,426
https://en.wikipedia.org/wiki/Pyranine
Pyranine is a hydrophilic, pH-sensitive fluorescent dye from the group of chemicals known as arylsulfonates. Pyranine is soluble in water and is used as a coloring agent, biological stain, optical detecting reagent, and pH indicator. Pyranine is also used in yellow highlighters to provide their characteristic fluorescence and bright yellow-green colour. It is also found in some types of soap.

Synthesis

Pyranine is synthesized from pyrenetetrasulfonic acid and a solution of sodium hydroxide in water under reflux. The trisodium salt crystallizes as yellow needles upon addition of an aqueous sodium chloride solution.

See also
Fluorescein
Fluorescence

References

External links
CTD's Pyranine page from the Comparative Toxicogenomics Database

Staining dyes
Fluorescent dyes
Sulfonates
Pyrenes
Hydroxyarenes
Organic sodium salts
Pyranine
Chemistry
188
77,019,650
https://en.wikipedia.org/wiki/IEEE%20Transactions%20on%20Power%20Electronics
IEEE Transactions on Power Electronics is a peer-reviewed scientific journal published monthly by the IEEE. Sponsored by the IEEE Power Electronics Society, the journal covers advances in device, circuit or system issues in power electronics. Its editor-in-chief is Yaow-Ming Chen (National Taiwan University). According to the Journal Citation Reports, the journal has a 2023 impact factor of 6.6. References External links Power Electronics, IEEE Transactions on Academic journals established in 1986 English-language journals Monthly journals Power electronics
IEEE Transactions on Power Electronics
Engineering
103
15,096,444
https://en.wikipedia.org/wiki/Paper%20model
Paper models, also called card models or papercraft, are models constructed mainly from sheets of heavy paper, paperboard, card stock, or foam.

Details

This may be considered a broad category that contains origami and card modeling. Origami is the process of making a paper model by folding a single piece of paper without using glue or cutting, while the variation kirigami does. Card modeling is making scale models from sheets of cardstock on which the parts were printed, usually in full color. These pieces would be cut out, folded, scored, and glued together. Papercraft is the art of combining these model types to build complex creations such as wearable suits of armor, life-size characters, and accurate weapon models. Sometimes the model pieces can be punched out. More frequently the printed parts must be cut out. Edges may be scored to aid folding. The parts are usually glued together with polyvinyl acetate glue ("white glue", "PVA"). In this kind of modeling, the sections are usually pre-painted, so there is no need to paint the model after completion. Some enthusiasts may enhance the model by painting and detailing. Due to the nature of the paper medium, the model may be sealed with varnish or filled with spray foam to last longer. Some enthusiasts also make durable, life-size props by building the papercraft, covering it with resin, and painting it. Some also print on photo paper and heat-laminate the parts, which keeps the printed colors from wearing and adds a more realistic finish to certain kinds of models (ships, cars, buses, trains, etc.). Paper crafts can also be used as references for making props from other materials.

History

The first paper models appeared in Europe in the 17th century, with the earliest commercial models appearing in French toy catalogs in 1800. Printed card became common in magazines in the early part of the 20th century. The popularity of card modeling boomed during World War II, when paper was one of the few items whose use and production were not heavily regulated. Micromodels, designed and published in England from 1941, were very popular, with 100 different models, including architecture, ships, and aircraft. But as plastic model kits became more commonly available, interest in paper models decreased.

Availability

The Robert Freidus Collection, held at the V&A Museum of Childhood, has over 14,000 card models, exclusively in the category of architectural paper models. Since paper model patterns can be easily printed and assembled, the Internet has become a popular means of exchanging them. Commercial corporations have recently begun using downloadable paper models for their marketing (examples are Yamaha and Canon). The availability of numerous models on the Internet at little or no cost, which can then be downloaded and printed on inexpensive inkjet printers, has caused the hobby's popularity again to increase worldwide. Home printing also allows models to be scaled up or down easily (for example, in order to make two models from different authors, in different scales, match each other in size), although the paper weight might need to be adjusted in the same ratio; a sketch of this arithmetic appears below. Inexpensive kits are available from dedicated publishers (mostly based in Eastern Europe; examples include Halinski, JSC Models, and Maly Modelarz), a portion of whose catalogs dates back to 1950. Experienced hobbyists often scratchbuild models, either by drawing parts by hand or by using software such as Adobe Illustrator and Inkscape.
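A minimal sketch of that rescaling arithmetic (the function name and the default 160 g/m² card stock are illustrative assumptions, not from the original text):

```python
# Convert a model published at scale 1:old to print at scale 1:new, and
# adjust paper grammage in the same ratio, as described above.
def rescale(old: float, new: float, paper_gsm: float = 160.0):
    """Return (linear print factor, suggested paper weight in g/m^2)."""
    factor = old/new              # e.g. 1:100 -> 1:87 means printing at ~115%
    return factor, paper_gsm*factor

factor, gsm = rescale(100, 87)
print(f"print at {factor:.0%}, use roughly {gsm:.0f} g/m^2 stock")
```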
An historical example of highly specialized software is Designer Castles for the BBC Micro and Acorn Archimedes platforms, which was developed as a tool for the creation of card model castles. CAD and CG software, such as Rhino 3D, 3DS Max, and Blender, and specialist software, like Pepakura Designer from Tama Software, Dunreeb Cutout, or Ultimate Papercraft 3D, may be employed to convert 3D computer models into two-dimensional printable templates for assembly.

3D models to paper

The use of 3D models greatly assists in the construction of paper models, with video game models being the most prevalent source. The video game or other source in question first has to be loaded into the computer. Various methods of extracting the model exist, including using a model viewer and exporting the model into a workable file type, or capturing the model from the emulation directly. The methods of capturing the model are often unique to the subject and the tools available. Because file formats, including proprietary ones, are not always readable, a model viewer and exporter may be unavailable outside of the developer; using other tools that capture rendered 3D models and textures is often the only way to obtain them. In this case, the designer may have to arrange the textures and the wireframe model in a 3D program, such as SketchUp, 3DS Max, Metasequoia, or Blender, before exporting it to a papercraft-creation program, such as Dunreeb Cutout or Pepakura Designer by Tama Software. From there the model is typically refined to give a proper layout and construction tabs, which affect the overall appearance and difficulty of constructing the model.

Subjects

Because people can create their own patterns, paper models are limited only by their designers' imaginations and ability to manipulate paper into forms. Vehicles of all forms, from cars and cargo trucks to space shuttles, are a frequent subject of paper models, some using photo-realistic textures from their real-life counterparts for extremely fine details. Architecture models range from very simple, crude forms to very detailed models with thousands of pieces to assemble. The most prevalent designs are from video games, due to their popularity and the ease of producing paper models from them. On the Web, enthusiasts can find hundreds of models from different designers across a wide range of subjects. The models include very difficult and ambitious paper projects, such as life-sized and complex creations. Architectural paper models are popular with model railway enthusiasts. Various models are used in tabletop gaming, primarily wargaming. Scale paper models allow for easy production of armies and buildings for use in gaming and can be scaled up or down readily or produced as desired. Whether they be three-dimensional models or two-dimensional icons, players are able to personalize and modify the models to bear unique unit designations and insignias for gaming.

See also
Net
Cardboard modeling
Paper Aeroplane
Origamic architecture
Superquick
Leo Monahan
Omocha-e

References

External links
Software for creating paper models
Pepakura Designer
Dunreeb Cutout
Ultimate Papercraft 3D
PaperMaker

Paper products
Scale modeling
Paper toys
Paper model
Physics
1,316
59,227,670
https://en.wikipedia.org/wiki/Darlene%20Lim
Darlene Sze Shien Lim is a NASA geobiologist and exobiologist who prepares astronauts for scientific exploration of the Moon, Deep Space and Mars. Her expertise involves Mars human analog missions, in which extreme landscapes like volcanoes and Arctic deserts serve as physical or operational substitutes for various planetary bodies. She has become a leading public figure for Mars exploration, having presented her missions publicly at academic institutions and public events around the world. She has also discussed her work for various media groups such as NPR, The New York Times, and The Washington Post. Background Lim is a first-generation Canadian; her parents emigrated from Singapore and she was born in Kingston, Ontario. She grew up "spending time in the Canadian Rockies, and watching Jacques Cousteau on TV." She studied biology at Queen's University at Kingston as an undergraduate, and graduated in 1994. She credited Professor John P. Smol for igniting her interest in limnology, the study of lakes and ponds. Lim subsequently completed a master's and doctoral degree in geology at the University of Toronto in 1999 and 2004 respectively. She was already working on NASA-sponsored projects such as the Haughton–Mars Project, which involved studying the Arctic craters as simulated Martian environments. She became a postdoctoral researcher with Christopher McKay at NASA Ames in 2004, and later became a NASA staff scientist and project leader. Public outreach In popular media Lim has completed dozens of radio interviews with the CBC throughout her PhD. From 2008 to 2009, Lim's work was part of POLAR-PALOOZA: Stories from a Changing Planet, a traveling exhibit sponsored by the NSF and NASA. She was the lead guest on the NASA Ames podcast in 2016. A profile of her Mars-simulation colony in Hawaii (BASALT) appeared in the Chicago Tribune. Lim appeared on the SAGANet/SpaceTV Ask an Astrobiologist! streaming program in 2017. She appeared twice on the Science Friday radio hour in 2018, contributing a 25-minute segment about undersea volcanic exploration tuned in to by over a million listeners. Lim participated in the Frontiers for Life in Space panel at the MIT Media Lab. She was a judge in the HP "Home Mars" VR competition in 2018. In 2019, she will present the opening lecture of the newly renamed Solar System Exploration Research Virtual Institute (SSERVI). Volunteer service Lim serves on the Ocean Exploration Advisory Board of NOAA. She served as a Scientist-in-Residence at the government of Canada's "Marsville" program from 2000 to 2002. From 2009 to 2015, she served as co-chair of the Mars Exploration Program Analysis Group Goal IV (Prepare for Human Exploration). Lim founded the Haven House Family Shelter STEM Explorer's Speaker series, which enabled NASA and academic researchers to conduct education and outreach programs with shelter-based children in the San Francisco Bay Area. Research and exploration Lim has taken a decidedly nontraditional path, choosing government labs and public-facing space research over a traditional academic career. As an exobiologist, Lim has explored extreme environments worldwide, from Hawaii and Florida to the Arctic and Antarctic. By studying extreme habitats on earth, researchers like Lim hope to gain insights into conditions that human explorers might face on Mars or other planets. 
The physical, mental, and operational demands involved in real field science and exploration, under extreme conditions, are comparable to those involved in space exploration missions, giving astronauts an opportunity to train as field scientists and develop and test team protocols and technology. In 2000, Lim was an inaugural crew member in the first-ever Mars simulated colony in the Arctic, the Flashline Mars Arctic Research Station (FMARS). She participated at the simulated base at Haughton impact crater on Devon Island, Nunavut in both 2000 and 2001. In 2004, Lim established the Pavilion Lake Research Project in British Columbia, Canada, to study chemical and biological characteristics of microbial geologic formations underwater. She extrapolates from limnological and paleolimnological investigations of bodies of water in the Canadian High Arctic to Holocene climate change and to potential paleolake regions on the surface of Mars. In a 2017 interview, newly minted astronaut Zena Cardman specifically credited Lim, who gave her an opportunity to work at Pavilion Lake, with sparking her interest in NASA exobiology projects. Lim is the Principal Investigator of SUBSEA, a biogeochemical analogue study for life on other planets. Lim has been a principal investigator in the 2018 Lōʻihi Seamount Expedition on the Exploration Vessel Nautilus, exploring an underwater volcano near the Big Island of Hawaii. The work is supported by the Ocean Exploration Trust, as part of an initiative of Robert Ballard to explore and map the deep ocean. This work examines science related to future robotic exploration of Europa and Enceladus, two Solar System moons with potentially habitable environments. SUBSEA also studies ocean exploration as an analog to future human spaceflight concepts such as Low Latency Telerobotics. In 2018, Lim and colleague Jennifer Heldmann scouted extreme environments in Iceland in anticipation of a new mission kick-off in 2019. The preparations are being made to support possible missions that could send people to the Moon and to Mars in 2030. Awards and honors 1989: SHAD Fellow 2003: Dimitris N. Chorafas Foundation Best Doctoral Thesis of the Year 2005: National Geographic Research and Exploration grantee (PLRP) 2013: WIRED Magazine "Smart List" 2014: NASA Ames Honor Award for Group Achievement, PLRP 2018: Keynote speaker, Women in Space conference 2019: Air & Space Award, Women of Discovery Awards, WINGS WorldQuest References 1972 births Living people Canadian women geologists NASA people Canadian limnologists Geobiologists Exoplanet search projects Environment and Climate Change Canada Canadian people of Singaporean descent Queen's University at Kingston alumni University of Toronto alumni Women limnologists Women planetary scientists Planetary scientists
Darlene Lim
Astronomy
1,195
9,066,328
https://en.wikipedia.org/wiki/Petroleum%20exploration%20in%20the%20Arctic
Exploration for petroleum in the Arctic is expensive and challenging both technically and logistically. In the offshore, sea ice can be a major factor. There have been many discoveries of oil and gas in the several Arctic basins that have seen extensive exploration over past decades, but distance from existing infrastructure has often deterred development. Development and production operations in the Arctic offshore as a result of exploration have been limited, with the exception of the Barents and Norwegian seas. In Alaska, exploration subsequent to the discovery of the Prudhoe Bay oilfield has focussed on the onshore and shallow coastal waters. Technological developments, such as Arctic-class tankers for liquefied natural gas, and climatic changes leading to reduced sea ice, may see a resurgence of interest in the offshore Arctic should high oil and gas prices be sustained and environmental concerns mitigated. Since the onset of the 2010s oil glut in 2014, however, prices have been depressed by the glut and, particularly in North America, by the widespread development of shale gas and oil; consequently, commercial interest in exploring many parts of the Arctic has declined.

Overview

There are 19 geological basins making up the Arctic region. Some of these basins have experienced oil and gas exploration, most notably the Alaska North Slope, where oil was first produced in 1968 from Prudhoe Bay. However, only half the basins – such as the Beaufort Sea and the West Barents Sea – have been explored. A 2008 United States Geological Survey assessment estimated that areas north of the Arctic Circle have 90 billion barrels of undiscovered, technically recoverable oil (and 44 billion barrels of natural gas liquids) in 25 geologically defined areas thought to have potential for petroleum. This represents 13% of the undiscovered oil in the world. Of the estimated totals, more than half of the undiscovered oil resources are estimated to occur in just three geologic provinces – Arctic Alaska, the Amerasian Basin, and the East Greenland Rift Basins. More than 70% of the mean undiscovered oil resources is estimated to occur in five provinces: Arctic Alaska, Amerasian Basin, East Greenland Rift Basins, East Barents Basins, and West Greenland–East Canada. It is further estimated that approximately 84% of the undiscovered oil and gas occurs offshore. The USGS did not consider economic factors such as the effects of permanent sea ice or oceanic water depth in its assessment of undiscovered oil and gas resources. This assessment is lower than a 2000 survey, which had included lands south of the Arctic Circle.

A recent study carried out by Wood Mackenzie on the Arctic potential comments that the likely remaining reserves will be 75% natural gas and 25% oil. It highlights four basins that are likely to be the focus of the petroleum industry in the upcoming years: the Kronprins Christian Basin, which is likely to have large reserves; the southwest Greenland basin, due to its proximity to markets; and the more oil-prone basins of Laptev and Baffin Bay.

Canada

Drilling in the Canadian Arctic peaked during the 1970s and 1980s, led by such companies as Panarctic Oils Ltd. in the Sverdrup Basin of the Arctic Islands, and by Imperial Oil and Dome Petroleum in the Beaufort Sea–Mackenzie Delta Basin. Drilling continued at declining rates until the early 2000s. In all, some 300,000 km of seismic lines were acquired and 1,500 wells drilled across this vast area.
Oil and natural gas were found in 73 discoveries, mostly in the two basins mentioned above, as well as further south in the Mackenzie Valley. Although certain discoveries proved large, the discovered resources were insufficient to justify development at the time. All the wells which were drilled were plugged and abandoned. Drilling in the Canadian Arctic turned out to be challenging and expensive, particularly in the offshore, where drilling required innovative technology. Short operating seasons complicated logistics for companies, who had to contend with the additional risk of variable ice conditions. Exploration has demonstrated that several sedimentary basins in the Canadian Arctic are rich in oil and gas. In particular, the Beaufort Sea–Mackenzie Delta Basin has a discovery record for both gas (onshore) and oil and gas (offshore), although the potential beneath the deeper waters of the Beaufort Sea remains unconfirmed by drilling. Discoveries in the Sverdrup Basin made between 1969 and 1971 are principally of gas. The several basins in the eastern Arctic offshore have seen little exploration activity.

Russia

In June 2007, a group of Russian geologists returned from a six-week voyage on the nuclear icebreaker 50 Let Pobedy, an expedition called Arktika 2007. They had travelled to the Lomonosov Ridge, an underwater shelf running between Russia's remote, inhospitable eastern Arctic Ocean and Ellesmere Island in Canada, where the ridge lies 400 m under the ocean surface. According to Russia's media, the geologists returned with the "sensational news" that the Lomonosov Ridge was linked to Russian Federation territory, boosting Russia's claim over the oil-and-gas-rich triangle. The territory contained 10bn tonnes of gas and oil deposits, the scientists said.

Greenland

In the years after 2000, sedimentary basins offshore Greenland were believed by some geologists to have high potential for large oil discoveries. In a comprehensive study of the potential of Arctic basins published in 2008, the U.S. Geological Survey estimated that the waters off north-eastern Greenland, in the Greenland Sea north and south of the Arctic Circle, could potentially contain 50 billion barrels of oil equivalent (7.9 × 10^9 m^3) (an estimate including both oil and gas). None of this potential has been realized. Prospecting took place under the auspices of NUNAOIL, a partnership between the Greenland Home Rule Government and the Danish state. Various oil companies secured licences and conducted exploration over the period 2002 to 2020. Much seismic exploration was carried out, and several wells were drilled offshore western Greenland, but no discoveries were announced. Drilling proved expensive and the geology more complex than expected, discouraging further investment. Greenland has offered 8 license blocks for tender along its west coast by Baffin Bay. Seven of those blocks were bid for by a combination of multinational oil companies and the national oil company NUNAOIL. Companies that have participated successfully in the previous license rounds and have formed a partnership for the licenses with NUNAOIL are DONG Energy, Chevron, ExxonMobil, Husky Energy, and Cairn Energy. The area available, known as the West Disko licensing round, is of interest because of its relative accessibility compared to other Arctic basins, as the area remains largely free of ice. Also, it has a number of promising geological leads and prospects from the Paleocene era.
In 2021, following the election of a new executive, the Greenland government announced it would cease petroleum licensing and disband the state oil company Nunaoil. Given this political development, combined with the high costs of drilling exploratory wells and the discouraging exploration results to date, it is unlikely that the Greenland offshore will see further exploration for the foreseeable future.

United States

Prudhoe Bay Oil Field on Alaska's North Slope is the largest oil field in North America. The field was discovered on March 12, 1968, by Atlantic Richfield Company (ARCO) and is operated by Hilcorp; partners are ExxonMobil and ConocoPhillips.

In September 2012, Shell delayed actual oil drilling in the Chukchi until the following summer due to heavier-than-normal ice and the Arctic Challenger, an oil-spill response vessel, not being ready on time. However, on September 23, Shell began drilling a "top-hole" over its Burger prospect in the Chukchi. And on October 3, Shell began drilling a top-hole over its Sivulliq prospect in the Beaufort Sea, after being notified by the Alaska Eskimo Whaling Commission that drilling could begin. In September 2012, Statoil, now Equinor, chose to delay its oil exploration plans at its Amundsen prospect in the Chukchi Sea, about 100 miles northwest of Wainwright, Alaska, by at least one year, to 2015 at the earliest. In 2012, Conoco planned to drill at its Devil's Paw prospect (part of a 2008 lease buy in the Chukchi Sea 120 miles west of Wainwright) in the summer of 2013. This project was later shelved in 2013 after concerns over rig type and federal regulations related to runaway well containment. On October 11, 2012, Deputy Secretary of the Interior David Hayes stated that support for the permitting process for Arctic offshore petroleum drilling would continue if President Obama stayed in office. Shell, however, announced in September 2015 that it was abandoning exploration "for the foreseeable future" in Alaska, after tests showed disappointing quantities of oil and gas in the area. On October 4, 2016, Caelus Energy Alaska announced that its discovery at Smith Bay could "provide 200,000 barrels per day of light, highly mobile oil".

Norway

Rosneft and Equinor (then Statoil) made an Arctic exploration deal in May 2012. It was the third deal Rosneft had signed in the space of a month, after Arctic exploration agreements with Italy's Eni and US giant ExxonMobil. Compared to other Arctic oil states, Norway is probably best equipped for oil spill preparedness in the Arctic.

Environmental concerns

Petroleum exploration and production operations in the Arctic have faced concerns from organizations and governments about the potential for detrimental environmental consequences. Firstly, in the event of a large oil spill, the effects on Arctic marine life (such as polar bears, walruses, and seals) could be calamitous. Secondly, pollution from ships and noise pollution from seismic exploration and drilling could negatively affect fragile Arctic ecosystems and may lead to declining populations. Such issues concern Indigenous populations who live in the Arctic and rely on such animals as food sources. In response to these concerns, the Arctic Council working group on the Arctic Monitoring and Assessment Programme (AMAP) undertook a comprehensive review, Oil and Gas Activities in the Arctic – Effects and Potential Effects.
In another initiative, Greenpeace, an independent global campaigning network, has launched the Save the Arctic project, arguing that the melting Arctic is under threat from oil drilling, industrial fishing, and conflict. Responses of governments to mounting concerns about the risk of petroleum operations in the Arctic offshore include regulatory changes and the moratorium on offshore leasing issued in 2016 for the Arctic marine waters of both the United States and Canada (and subsequently, in Canada, a prohibition of oil and gas operations). Consequently, no leasing or operations have been approved for the Canadian Arctic offshore since that year. In 2021, the Greenland government ended plans for future licensing for offshore exploration, citing high costs and climate change impacts. A summary of the status of offshore oil and gas activities and regulatory frameworks in the Arctic was published in 2021 by PAME, the Arctic Council's working group for the Protection of the Arctic Marine Environment. The Deepwater Horizon disaster in the Gulf of Mexico stimulated much concern about the consequences of a similar event in Arctic waters and has resulted in many developments in the regulation of operations and the management of oil and gas leasing by countries active in Arctic exploration. In 2021, the Arctic Environmental Responsibility Index (AERI) was published; it ranks 120 oil, gas, and mining companies involved in resource extraction north of the Arctic Circle in Alaska, Canada, Greenland, Finland, Norway, Russia, and Sweden. The index measures companies' environmental activities and shows that oil and gas companies are generally ranked higher than mining companies operating in the Arctic.

Geological basins in the Arctic

Alaska North Slope
Baffin Bay
Barents Sea West Barents Sea and East Barents Sea
Beaufort Sea (Mackenzie Delta–Beaufort Sea Basin)
Sverdrup Basin (Canadian Arctic Islands)
East Siberian Sea
Greenland (North Greenland)
Hope Basin
Kronprins Christian Basin
Laptev Sea
North Chukchi Sea
North Kara Sea
Pechora Sea
South Kara Sea

See also

Arctic cooperation and politics
Arctic Refuge drilling controversy
Natural resources of the Arctic
Pollution in the Arctic Ocean
Territorial claims in the Arctic

References

External links

Murray, A. 2006. Arctic offers chilly welcome. E&P, December, 2006
"Arctic Video"
"Assessment of Undiscovered Oil and Gas in the Arctic" Science 29 May 2009: Vol. 324 no. 5931 pp. 1175–1179

20th century in the Arctic
21st century in the Arctic
Environment of the Arctic
Industry in the Arctic
Petroleum geology
Oil exploration
Petroleum exploration in the Arctic
Chemistry
2,515
23,497,920
https://en.wikipedia.org/wiki/Arthur%20Humphreys
Arthur L. C. Humphreys (1917–2003) was a managing director of International Computers Limited and a long-time member of the British computer industry. He joined the British Tabulating Machine Company in 1940, and was involved in the negotiations with Powers-Samas that led to the formation of International Computers and Tabulators in 1958. In 1968, on the formation of ICL, he became its first Managing Director. When Geoff Cross became managing director in 1972, Humphreys was moved to the post of Deputy Chairman, where he remained until his retirement in 1983. References External links Oral history interview with Arthur L. C. Humphreys, Charles Babbage Institute, University of Minnesota. International Computers Limited people 1917 births 2003 deaths
Arthur Humphreys
Technology
150
37,914,845
https://en.wikipedia.org/wiki/Stuart%20Ballantine%20Medal
The Stuart Ballantine Medal was a science and engineering award presented by the Franklin Institute, of Philadelphia, Pennsylvania, USA. It was named after the US inventor Stuart Ballantine. Laureates 1947 - George Clark Southworth (Physics) 1948 - Ray Davis Kell (Engineering) 1949 - Sergei A. Schelkunoff (Physics) 1952 - John Bardeen (Physics) 1952 - Walter H. Brattain (Physics) 1953 - David G. C. Luck (Engineering) 1954 - Kenneth Alva Norton (Engineering) 1955 - Claude Elwood Shannon (Computer and Cognitive Science) 1956 - Kenneth Bullington (Physics) 1957 - Robert Morris Page (Engineering) 1957 - Leo Clifford Young (Engineering) 1958 - Harald Trap Friis (Engineering) 1959 - Albert Hoyt Taylor (Engineering) 1959 - Charles H. Townes (Physics) 1960 - Rudolf Kompfner (Engineering) 1960 - Harry Nyquist (Engineering) 1960 - John R. Pierce (Engineering) 1961 - Leo Esaki (Engineering) 1961 - Nicolaas Bloembergen (Physics) 1961 - H. E. Derrick Scovill (Physics) 1962 - Ali Javan (Physics) 1962 - Theodore H. Maiman (Physics) 1962 - Arthur L. Schawlow (Physics) 1962 - Charles H. Townes (Physics) 1963 - Arthur C. Clarke (Engineering) 1965 - Homer Walter Dudley (Engineering) 1965 - Alec Harley Reeves (Engineering) 1966 - Robert N. Noyce (Computer and Cognitive Science) 1966 - Jack S. Kilby (Engineering) 1967 - Jack N. James (Engineering) 1967 - Robert J. Parks (Engineering) 1968 - Chandra Kumar Naranbhai Patel (Physics) 1969 - Emmett N. Leith (Physics) 1971 - Zhores I. Alferov (Physics) 1972 - Daniel Earl Noble (Engineering) 1973 - Andrew H. Bobeck (Computer and Cognitive Science) 1973 - Willard S. Boyle (Computer and Cognitive Science) 1973 - George E. Smith (Computer and Cognitive Science) 1975 - Bernard C. De Loach, Jr. (Engineering) 1975 - Martin Mohamed Atalla (Physics) 1975 - Dawon Kahng (Physics) 1977 - Charles Kuen Kao (Engineering) 1977 - Stewart E. Miller (Engineering) 1979 - Marcian E. Hoff, Jr. (Computer and Cognitive Science) 1979 - Benjamin Abeles (Engineering) 1979 - George D. Cody (Engineering) 1981 - Amos E. Joel, Jr. (Engineering) 1983 - Adam Lender (Computer and Cognitive Science) 1986 - Linn F. Mollenauer (Engineering) 1989 - John M. J. Madey (Physics) 1992 - Rolf Landauer (Physics) 1993 - Leroy L. Chang (Physics) Sources The Franklin Institute. Winners. Ballantine Medal winners (bad link). The Franklin Institute. Laureates Search, Ballantine Award Science and technology awards Franklin Institute awards
Stuart Ballantine Medal
Technology
602
14,285,702
https://en.wikipedia.org/wiki/Biotin%20carboxylase
In enzymology, a biotin carboxylase () is an enzyme that catalyzes the chemical reaction

ATP + biotin-carboxyl-carrier protein + CO2 ⇌ ADP + phosphate + carboxybiotin-carboxyl-carrier protein

The three substrates of this enzyme are ATP, biotin-carboxyl-carrier protein (BCCP), and CO2, whereas its three products are ADP, phosphate, and carboxybiotin-carboxyl-carrier protein. The systematic name of this enzyme class is biotin-carboxyl-carrier-protein:carbon-dioxide ligase (ADP-forming). This enzyme is also called biotin carboxylase (component of acetyl-CoA carboxylase). This ATP-grasp enzyme participates in fatty acid biosynthesis by providing one of the catalytic functions of the acetyl-CoA carboxylase complex. As previously mentioned, after the carboxybiotin product is formed, the carboxyltransferase unit of the complex will transfer the activated carboxy group from BCCP to acetyl-CoA, forming a malonate analog known as malonyl-CoA. Malonyl-CoA serves as the primary carbon donor in fatty acid biosynthesis, in which each chain-elongating condensation is followed by a series of reduction and dehydration reactions.

Reaction pathway

Biotin carboxylase is a conserved enzyme present within biotin-dependent carboxylase complexes such as acetyl-CoA carboxylase. Within the relevant carboxylase complex, there is a biotin carboxyl-carrier protein that is covalently linked to biotin via a Lys residue. Both the biotin carboxylase activity and the BCCP within the carboxylase complex are highly conserved in this enzyme class. The main source of variation for carboxylases arises from the carboxyltransferase component, as the molecule to which the carboxyl group is transferred (from biotin) dictates the specificity needed to catalyze this transfer.

The structure of biotin carboxylase heavily influences the reaction pathway the enzyme catalyzes, so discussion of this reaction pathway must also touch on how the substrates and intermediates are stabilized within the active site. Bicarbonate (HCO3−) is held within the active site of biotin carboxylase by hydrogen bonding with biotin as well as by a bidentate ion-pair interaction of its negatively charged oxygens with the Arg292 iminium ion. It is hypothesized that the Glu296 residue acts as a base, deprotonating the bicarbonate molecule and thus facilitating nucleophilic attack of the carbonyl oxygen on the terminal phosphate group of ATP. This initial reaction of the pathway can happen because the ATP is also held tightly within the active-site pocket via non-covalent coordination of ATP with magnesium ions. After this nucleophilic attack, the resulting carboxyphosphate intermediate is degraded, via electron pushing, to CO2 and a PO4^3− ion; the phosphate then acts as a base and deprotonates the amide of the ureido ring within biotin. An enolate-like intermediate is formed, producing a negative charge on the oxygen, which is stabilized by the iminium ion of Arg338. The enolate then executes a nucleophilic attack on CO2 (which is held in place through H-bonding with the Glu296 residue), ultimately leading to the product of this enzymatic pathway: carboxybiotin. After this reaction occurs, the carboxyltransferase enzyme present within the complex acts upon the carboxybiotin to transfer the carboxyl group to the target acceptor molecule, i.e., acetyl-CoA, propionyl-CoA, etc.

Structural studies

Five structures have been solved for this class of enzymes and deposited in the Protein Data Bank.
The crystal structure has been determined for the biotin carboxylase (acetyl-CoA carboxylase) of Escherichia coli, but the eukaryotic enzyme is difficult to characterize because it is catalytically inactive in solution. E. coli biotin carboxylase is a homodimer, with each monomer made up of three domains: A, B, and C. The B domain of each monomer is believed to be essential to the function of this enzyme, as extreme flexibility of this domain is seen in the crystal structure. Upon binding of the ATP substrate, a conformational change occurs in which the B domain essentially closes over the active site. While this change is thought to bring ATP within close enough proximity for the reaction to occur, the active site remains solvent-exposed. Because of this anomaly in the crystal structure, it is believed that the attachment of biotin to BCCP aids the reaction pathway by essentially covering biotin within the active site; consistent with this, evidence shows free biotin is a much poorer substrate for this enzyme than biotin-BCCP. A conserved C-terminal domain within this enzyme contains most of the active site residues. Glu296 and Arg338 are highly conserved residues among this subclass of enzymes, and work to stabilize the reaction intermediates and keep them within the active site pocket until the carboxylation is complete. This enzyme is vital to life and has maintained its function across a variety of organisms. While the structure itself may diverge depending on the biotin carboxylase's role and the complex in which it is present, the enzyme still serves the same function. Fatty acid synthesis provides lipids essential to membranes and biochemical pathways, and the necessity of this enzyme's function is reflected in its highly conserved active site amino acid sequence. References Further reading EC 6.3.4 Enzymes of known structure Protein domains
Biotin carboxylase
Biology
1,265
310,959
https://en.wikipedia.org/wiki/Subcategory
In mathematics, specifically category theory, a subcategory of a category C is a category S whose objects are objects in C and whose morphisms are morphisms in C, with the same identities and composition of morphisms. Intuitively, a subcategory of C is a category obtained from C by "removing" some of its objects and arrows. Formal definition Let C be a category. A subcategory S of C is given by a subcollection of objects of C, denoted ob(S), and a subcollection of morphisms of C, denoted hom(S), such that for every X in ob(S), the identity morphism idX is in hom(S), for every morphism f : X → Y in hom(S), both the source X and the target Y are in ob(S), and for every pair of morphisms f and g in hom(S), the composite f ∘ g is in hom(S) whenever it is defined. These conditions ensure that S is a category in its own right: its collection of objects is ob(S), its collection of morphisms is hom(S), and its identities and composition are as in C. There is an obvious faithful functor I : S → C, called the inclusion functor, which takes objects and morphisms to themselves. Let S be a subcategory of a category C. We say that S is a full subcategory of C if for each pair of objects X and Y of S, homS(X, Y) = homC(X, Y); that is, a full subcategory is one that includes all morphisms in C between objects of S. For any collection of objects A in C, there is a unique full subcategory of C whose objects are those in A. Examples The category of finite sets forms a full subcategory of the category of sets. The category whose objects are sets and whose morphisms are bijections forms a non-full subcategory of the category of sets. The category of abelian groups forms a full subcategory of the category of groups. The category of rings (whose morphisms are unit-preserving ring homomorphisms) forms a non-full subcategory of the category of rngs. For a field K, the category of K-vector spaces forms a full subcategory of the category of (left or right) K-modules. Embeddings Given a subcategory S of C, the inclusion functor I : S → C is both a faithful functor and injective on objects. It is full if and only if S is a full subcategory. Some authors define an embedding to be a full and faithful functor. Such a functor is necessarily injective on objects up to isomorphism; for instance, the Yoneda embedding is an embedding in this sense. Other authors define an embedding to be a full and faithful functor that is injective on objects. Still other authors define a functor to be an embedding if it is faithful and injective on objects; equivalently, F is an embedding if it is injective on morphisms. A functor F is then called a full embedding if it is a full functor and an embedding. With the definitions of the previous paragraph, for any (full) embedding F : B → C the image of F is a (full) subcategory S of C, and F induces an isomorphism of categories between B and S. If F is not injective on objects then the image of F is equivalent to B. In some categories, one can also speak of morphisms of the category being embeddings. Types of subcategories A subcategory S of C is said to be isomorphism-closed or replete if every isomorphism k : X → Y in C such that Y is in S also belongs to S. An isomorphism-closed full subcategory is said to be strictly full. A subcategory of C is wide or lluf (a term first posed by Peter Freyd) if it contains all the objects of C. A wide subcategory is typically not full: the only wide full subcategory of a category is that category itself.
A Serre subcategory is a non-empty full subcategory S of an abelian category C such that for all short exact sequences 0 → M′ → M → M″ → 0 in C, M belongs to S if and only if both M′ and M″ do. This notion arises from Serre's C-theory. See also Reflective subcategory Exact category, a full subcategory closed under extensions. References Category theory Hierarchy
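The subcategory axioms in the formal definition above are concrete enough to check mechanically. Below is a minimal sketch, not part of the article, in which a finite category is represented as explicit data; all names (Cat, is_subcategory, the id_ naming convention, and the toy category) are invented for illustration.

```python
from itertools import product

class Cat:
    """A finite category given by explicit data.

    morphisms: dict mapping a morphism name to its (source, target) pair.
    compose:   dict mapping a composable pair (f, g) with src(f) == tgt(g)
               to the name of the composite f o g.
    """
    def __init__(self, objects, morphisms, compose):
        self.objects = set(objects)
        self.morphisms = dict(morphisms)
        self.compose = dict(compose)

def is_subcategory(sub_objs, sub_morphs, cat):
    # Convention (an assumption of this sketch): the identity of object X
    # is named "id_X".
    if any(f"id_{x}" not in sub_morphs for x in sub_objs):
        return False  # an identity morphism is missing
    for f in sub_morphs:
        src, tgt = cat.morphisms[f]
        if src not in sub_objs or tgt not in sub_objs:
            return False  # a morphism's source or target was removed
    for f, g in product(sub_morphs, repeat=2):
        if (f, g) in cat.compose and cat.compose[(f, g)] not in sub_morphs:
            return False  # not closed under composition
    return True

# Toy category: two objects A, B and one non-identity arrow u : A -> B.
C = Cat(
    objects={"A", "B"},
    morphisms={"id_A": ("A", "A"), "id_B": ("B", "B"), "u": ("A", "B")},
    compose={("id_A", "id_A"): "id_A", ("id_B", "id_B"): "id_B",
             ("u", "id_A"): "u", ("id_B", "u"): "u"},
)

print(is_subcategory({"A"}, {"id_A"}, C))               # True (full on {A})
print(is_subcategory({"A", "B"}, {"id_A", "id_B"}, C))  # True (wide, not full)
print(is_subcategory({"A", "B"}, {"id_A", "u"}, C))     # False (id_B missing)
```

The second check illustrates a wide subcategory in the sense above: all objects are kept, but the non-identity arrow u is dropped, so the subcategory is not full.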
Subcategory
Mathematics
1,004
25,902,958
https://en.wikipedia.org/wiki/Epicyclic%20frequency
In astrophysics, particularly the study of accretion disks, the epicyclic frequency is the frequency at which a radially displaced fluid parcel will oscillate. It can be referred to as a "Rayleigh discriminant". When considering an astrophysical disc with differential rotation Ω(R), the epicyclic frequency κ is given by κ^2 = (2Ω/R) d(R^2 Ω)/dR, where R is the radial co-ordinate. This quantity can be used to examine the 'boundaries' of an accretion disc: when κ^2 becomes negative, then small perturbations to the (assumed circular) orbit of a fluid parcel will become unstable, and the disc will develop an 'edge' at that point. For example, around a Schwarzschild black hole, the innermost stable circular orbit (ISCO) occurs at three times the event horizon, at R = 6GM/c^2. For a Keplerian disk, κ = Ω. Derivation An astrophysical disk can be modeled as a fluid with negligible mass compared to the central object (e.g. a star) and with negligible pressure. We can suppose an axial symmetry such that the gravitational potential does not depend on the azimuthal angle: Φ = Φ(r, z). Starting from the equations of movement in cylindrical coordinates: r̈ − r θ̇^2 = −∂Φ/∂r, d(r^2 θ̇)/dt = 0, z̈ = −∂Φ/∂z. The second line implies that the specific angular momentum L = r^2 θ̇ is conserved. We can then define an effective potential Φ_eff = Φ + L^2/(2r^2), and so: r̈ = −∂Φ_eff/∂r and z̈ = −∂Φ_eff/∂z. We can apply a small perturbation δr to the circular orbit of radius r0: r = r0 + δr. So, to first order, δr̈ = −(∂^2 Φ_eff/∂r^2) δr, and thus the parcel oscillates radially at the frequency κ defined by: κ^2 = ∂^2 Φ_eff/∂r^2 = ∂^2 Φ/∂r^2 + 3L^2/r0^4. In a circular orbit L = r0^2 Ω. Thus: κ^2 = ∂^2 Φ/∂r^2 + 3Ω^2. The frequency of a circular orbit satisfies Ω^2 = (1/r0) ∂Φ/∂r, which finally yields: κ^2 = r dΩ^2/dr + 4Ω^2. References Fluid dynamics Astrophysics Equations of astronomy
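As a quick sanity check of the formulas above, the following minimal sketch (an addition, assuming the SymPy library is available) verifies symbolically that the two quoted forms of κ^2 agree and that κ = Ω for a Keplerian rotation profile.

```python
import sympy as sp

R, G, M = sp.symbols("R G M", positive=True)

Omega = sp.sqrt(G * M / R**3)  # Keplerian angular velocity

# kappa^2 = r dOmega^2/dr + 4 Omega^2 (final form of the derivation above)
kappa_sq = R * sp.diff(Omega**2, R) + 4 * Omega**2

# kappa^2 = (2 Omega / R) d(R^2 Omega)/dR (headline form)
kappa_sq_alt = (2 * Omega / R) * sp.diff(R**2 * Omega, R)

print(sp.simplify(kappa_sq - kappa_sq_alt))  # 0: the two forms agree
print(sp.simplify(kappa_sq - Omega**2))      # 0: kappa = Omega, Keplerian case
```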
Epicyclic frequency
Physics,Chemistry,Astronomy,Engineering
304
39,206,557
https://en.wikipedia.org/wiki/ELIXIR
ELIXIR (the European life-sciences infrastructure for biological information) is an initiative that allows life science laboratories across Europe to share and store their research data as part of an organised network. Its goal is to bring together Europe's research organisations and data centres to help coordinate the collection, quality control and storage of large amounts of biological data produced by life science experiments. ELIXIR aims to ensure that biological data is integrated into a federated system easily accessible by the scientific community. Mission ELIXIR's mission is to build a sustainable European infrastructure for biological information, supporting life science research and its translation to medicine and the environment, the bio-industries and society. Biological experiments produce vast amounts of results that are stored as data using computer software. European countries have invested heavily in research that produces, analyses and stores biological information. However, the collection, storage, archiving and integration of these large amounts of data presents a problem that cannot be tackled by one country alone. ELIXIR represents the joining of independent bioscience facilities to create an integrated network that addresses the complex problem of biological data storage and management. By providing a sustainable and distributed structure for handling data and data retrieval tools, ELIXIR hopes to secure Europe-wide investment in bioinformatics, providing the stability to conduct research in all areas of life science, both in academia and industry. Organisation and structure ELIXIR is an inter-governmental organisation which brings together existing bioinformatics resources. It is coordinated by the ELIXIR Hub, based alongside the European Molecular Biology Laboratory's European Bioinformatics Institute (EMBL-EBI) on the Wellcome Trust Genome Campus in Hinxton, Cambridge. The members of the ELIXIR consortium are European countries, represented by their governments and ministries; the scientific community in each member country develops its national Node, which operates the services and resources that are part of ELIXIR. Each ELIXIR Node is itself a network of national life science organisations, coordinated by a lead institute. The European Molecular Biology Laboratory is an intergovernmental organisation, so it is the only Node that is not associated with a country. ELIXIR focuses efforts around five central areas of activity, referred to as Platforms. These cover Data, Tools, Compute, Interoperability and Training. Work in these areas is intended to improve access to open data resources and tools by improving connectivity, discoverability and access to computational power, as well as developing training for users and service providers to meet these aims. ELIXIR supports users addressing the Grand Challenges in life science research across diverse domains. ELIXIR supports a range of self-selected Communities, ranging from high-level topics such as ‘Human Data’ and ‘Plant Sciences’ to more specific and focused disciplines such as ‘Metabolomics’ and ‘Intrinsically Disordered Proteins’, as well as a community dedicated to the ‘Galaxy’ resource. These communities are in place to develop the bioinformatic and data standards, services and training required to help each community reach its scientific goals.
Participants As of September 2023 the following countries and EMBL-EBI have signed the ELIXIR Consortium Agreement (ECA) in order to become a member of ELIXIR: Belgium, Czech Republic, Denmark, Estonia, Finland, France, Germany, Greece, Hungary, Ireland, Israel, Italy, Luxembourg, the Netherlands, Norway, Portugal, Slovenia, Spain, Sweden, Switzerland, and the United Kingdom. Austria, Cyprus and Romania are Observer countries, working towards ratifying the ECA in the near future. Countries that have signed the ECA are allocated representation on the ELIXIR Board. The preparatory phase of ELIXIR was coordinated by Professor Dame Janet Thornton of EMBL-EBI. The Founding Director of ELIXIR, Dr Niklas Blomberg, took up his position in the new ELIXIR Hub in Cambridge in May 2013. He left ELIXIR in December 2023, and Prof Tim Hubbard was appointed the new Director as of March 2024. Funding By the end of 2012, ELIXIR had completed its five-year preparatory phase, funded by the EU's Seventh Framework Programme as part of the European Strategy Forum on Research Infrastructures (ESFRI) process. In 2015 ELIXIR was awarded €19 million of Horizon 2020 funding to run the EXCELERATE project; this was followed by the CONVERGE project in 2020. Both projects enabled ELIXIR to coordinate and extend national and international data resources. ELIXIR has also set up collaborations to apply for other large-scale funding for other EU projects, in which it is also involved in an organisational capacity, for example the CORBEL, FAIRplus, EOSC-Life, B1MG, GDI and BY-COVID projects. Each member state contributes towards the funding of the ELIXIR Hub in proportion to its GDP. Some countries have allocated new funds to contribute towards their ELIXIR Node. The services and activities of the ELIXIR Nodes will continue to be funded by national agencies. Collectively, ELIXIR members will apply for additional external funding. In 2024, ELIXIR commenced its fourth five-year scientific programme. References External links ESFRI Strategy Report on Research Infrastructures ELIXIR website Bioinformatics organizations European Union and science and technology Information technology organizations based in Europe South Cambridgeshire District Science and technology in Cambridgeshire Wellcome Trust
ELIXIR
Biology
1,107
77,405,829
https://en.wikipedia.org/wiki/CUMYL-3TMS-PRINACA
CUMYL-3TMS-PRINACA is an indazole-3-carboxamide-based synthetic cannabinoid receptor agonist that has been sold as a designer drug, first identified in Sweden in May 2023. Along with the related compound ADMB-3TMS-PRINACA, which was reported several months earlier, CUMYL-3TMS-PRINACA is one of the few psychoactive drugs ever reported that contain a silicon atom. Another example of a silicon-containing drug is sila-haloperidol. Legality CUMYL-3TMS-PRINACA is illegal in Germany and Italy and has been recommended for control in Sweden. See also CUMYL-CBMINACA CUMYL-FUBINACA CUMYL-NBMINACA CUMYL-PINACA CUMYL-THPINACA 1S-LSD Silandrone References Cannabinoids Designer drugs Amides Indazolecarboxamides Trimethylsilyl compounds
CUMYL-3TMS-PRINACA
Chemistry
205
40,910
https://en.wikipedia.org/wiki/Common%20carrier
A common carrier in common law countries (corresponding to a public carrier in some civil law systems, usually called simply a carrier) is a person or company that transports goods or people for any person or company and is responsible for any possible loss of the goods during transport. A common carrier offers its services to the general public under license or authority provided by a regulatory body, which has usually been granted "ministerial authority" by the legislation that created it. The regulatory body may create, interpret, and enforce its regulations upon the common carrier (subject to judicial review) with independence and finality as long as it acts within the bounds of the enabling legislation. A common carrier (also called a public carrier in British English) is distinguished from a contract carrier, which is a carrier that transports goods for only a certain number of clients and that can refuse to transport goods for anyone else, and from a private carrier. A common carrier holds itself out to provide service to the general public without discrimination (to meet the needs of the regulator's quasi-judicial role of impartiality toward the public's interest) for the "public convenience and necessity." A common carrier must further demonstrate to the regulator that it is "fit, willing, and able" to provide those services for which it is granted authority. Common carriers typically transport persons or goods according to defined and published routes, time schedules, and rate tables upon the approval of regulators. Public airlines, railroads, bus lines, taxicab companies, phone companies, internet service providers, cruise ships, motor carriers (i.e., canal operating companies, trucking companies), and other freight companies generally operate as common carriers. Under US law, an ocean freight forwarder cannot act as a common carrier. The term common carrier is a common law term and is seldom used in Continental Europe because it has no exact equivalent in civil-law systems. In Continental Europe, the functional equivalent of a common carrier is referred to as a public carrier or simply as a carrier. However, public carrier in Continental Europe is different from public carrier in British English in which it is a synonym for contract carrier. General Although common carriers generally transport people or goods, in the United States the term may also refer to telecommunications service providers and public utilities. In certain U.S. states, amusement parks that operate roller coasters and comparable rides have been found to be common carriers; a famous example is Disneyland. Regulatory bodies may also grant carriers the authority to operate under contract with their customers instead of under common carrier authority, rates, schedules and rules. These regulated carriers, known as contract carriers, must demonstrate that they are "fit, willing and able" to provide service, according to standards enforced by the regulator. However, contract carriers are specifically not required to demonstrate that they will operate for the "public convenience and necessity." A contract carrier may be authorized to provide service over either fixed routes and schedules, i.e., as regular route carrier or on an ad hoc basis as an irregular route carrier. It should be mentioned that the carrier refers only to the person (legal or physical) that enters into a contract of carriage with the shipper. The carrier does not necessarily have to own or even be in the possession of a means of transport. 
Unless otherwise agreed upon in the contract, the carrier may use whatever means of transport is approved in its operating authority, as long as it is the most favorable from the cargo interests' point of view. The carrier's duty is to get the goods to the agreed destination within the agreed time or within a reasonable time. The person that is physically transporting the goods on a means of transport is referred to as the "actual carrier". When a carrier subcontracts with another provider, such as an independent contractor or a third-party carrier, the common carrier is said to be providing "substituted service". The same person may hold both common carrier and contract carrier authority. In the case of a rail line in the US, the owner of the property is said to retain a "residual common carrier obligation", unless otherwise transferred (such as in the case of a commuter rail system, where the authority operating passenger trains may acquire the property but not this obligation from the former owner), and must operate the line if service is terminated. In contrast, private carriers are not licensed to offer a service to the public. Private carriers generally provide transport on an irregular or ad hoc basis for their owners. Carriers were very common in rural areas prior to motorised transport. Regular services by horse-drawn vehicles would ply to local towns, taking goods to market or bringing back purchases for the village. If space permitted, passengers could also travel. Cases have also established limitations to the common carrier designation. In a case concerning a hot air balloon, Grotheer v. Escape Adventures, Inc., the court affirmed that a hot air balloon was not a common carrier, holding that the key inquiry in determining whether or not a transporter can be classified as a common carrier is whether passengers expect the transportation to be safe because the operator is reasonably capable of controlling the risk of injury. Telecommunications In the United States, telecommunications carriers are regulated by the Federal Communications Commission under Title II of the Communications Act of 1934. The Telecommunications Act of 1996 made extensive revisions to the "Title II" provisions regarding common carriers and repealed the judicial 1982 AT&T consent decree (often referred to as the Modification of Final Judgment) that effectuated the breakup of AT&T's Bell System. Further, the Act gives telephone companies the option of providing video programming on a common carrier basis or as a conventional cable television operator. If it chooses the former, the telephone company will face less regulation but will also have to comply with FCC regulations requiring what the Act refers to as "open video systems". The Act generally bars, with certain exceptions including most rural areas, acquisitions by telephone companies of more than a 10 percent interest in cable operators (and vice versa) and joint ventures between telephone companies and cable systems serving the same areas. Internet Service Providers Using provisions of the Communications Act of 1934, the FCC classified Internet service providers as common carriers, effective June 12, 2015, for the purpose of enforcing net neutrality. Led by the Trump administration's appointed commissioner Ajit Pai, on December 14, 2017 the FCC reversed its rules on net neutrality, effectively revoking common carrier status as a requirement for Internet service providers. Following this, in 2018 the U.S.
Senate narrowly passed a non-binding resolution aiming to reverse the FCC's decision and restore FCC's net neutrality rules. On 25 April 2024, the FCC voted 3–2 to reinstate net neutrality in the United States by reclassifying the Internet under Title II. However, legal challenges filed by ISPs resulted in an appeals court order that stays the net neutrality rules until the court makes a final ruling, with the court opining that the ISPs are likely to prevail over the FCC on the merits. Pipelines In the United States, many oil, gas and CO2 pipelines are common carriers. The Federal Energy Regulatory Commission (FERC) regulates rates charged and other tariff terms imposed by interstate common carrier pipelines. Intrastate common carrier pipeline tariffs are often regulated by state agencies. The US and many states have delegated the power of eminent domain to common carrier gas pipelines. Legal implications Common carriers are subject to special laws and regulations that differ depending on the means of transport used, e.g. sea carriers are often governed by quite different rules from road carriers or railway carriers. In common law jurisdictions as well as under international law, a common carrier is absolutely liable for goods carried by it, with four exceptions: An act of nature An act of the public enemies Fault or fraud by the shipper An inherent defect in the goods A sea carrier may also, according to the Hague-Visby Rules, escape liability on other grounds than the above-mentioned, e.g. a sea carrier is not liable for damages to the goods if the damage is the result of a fire on board the ship or the result of a navigational error committed by the ship's master or other crewmember. Carriers typically incorporate further exceptions into a contract of carriage, often specifically claiming not to be a common carrier. An important legal requirement for common carrier as public provider is that it cannot discriminate, that is refuse the service unless there is some compelling reason. As of 2007, the status of Internet service providers as common carriers and their rights and responsibilities is widely debated (network neutrality). The term common carrier does not exist in continental Europe but is distinctive to common law systems, particularly law systems in the US. In Ludditt v Ginger Coote Airways the Privy Council (Lord Macmillan, Lord Wright, Lord Porter and Lord Simonds) held the liability of a public or common carrier of passengers is only to carry with due care. This is more limited than that of a common carrier of goods. The complete freedom of a carrier of passengers at common law to make such contracts as he thinks fit was not curtailed by the Railway and Canal Traffic Act 1854, and a specific contract that enlarges, diminishes or excludes his duty to take care (e.g., by a condition that the passenger travels "at his own risk against all casualties") cannot be pronounced to be unreasonable if the law authorises it. There was nothing in the provisions of the Canadian Transport Act 1938 section 25 that would invalidate a provision excluding liability. Grand Trunk Railway Co of Canada v Robinson [1915] A.C. 740 was followed and Peek v North Staffordshire Railway 11 E.R. 1109 was distinguished. 
See also References External links Cybertelecom Common Carrier FCC Wireline Competition Bureau, formerly the Common Carrier Bureau Communications Act of 1934 including definition of a Common Carrier, Title II from FCC.gov Traffic management Freight transport International law Legal terminology Net neutrality Rail transport operations Tort law
Common carrier
Engineering
2,017
4,487,600
https://en.wikipedia.org/wiki/Constraint%20inference
In constraint satisfaction, constraint inference is a relationship between constraints and their consequences. A set of constraints C entails a constraint c if every solution to C is also a solution to c. In other words, if v is a valuation of the variables in the scopes of the constraints in C and all constraints in C are satisfied by v, then v also satisfies the constraint c. Some operations on constraints produce a new constraint that is a consequence of them. Constraint composition operates on a pair of binary constraints ((x, y), R) and ((y, z), S) with a common variable y. The composition of these two constraints is the constraint on x and z that is satisfied by every evaluation of the two non-shared variables for which there exists a value of the shared variable y such that the evaluation of these three variables satisfies the two original constraints ((x, y), R) and ((y, z), S). Constraint projection restricts the effects of a constraint to some of its variables. Given a constraint, its projection to a subset of its variables is the constraint that is satisfied by an evaluation if this evaluation can be extended to the other variables in such a way that the original constraint is satisfied. Extended composition is similar in principle to composition, but allows for an arbitrary number of possibly non-binary constraints; the generated constraint is on an arbitrary subset of the variables of the original constraints. Given constraints C1, ..., Cm and a list A of some of their variables, the extended composition of them is the constraint on A where an evaluation of A satisfies this constraint if it can be extended to the other variables so that C1, ..., Cm are all satisfied. See also Constraint satisfaction problem References Constraint programming Inference
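These operations are easy to state executably. The following is a minimal sketch, not from the article; the function names and the toy domain are invented for illustration, with binary constraints represented extensionally as sets of allowed value tuples.

```python
def compose(r_xy, r_yz):
    """Composition of ((x, y), R) and ((y, z), S): the pairs (a, c) such that
    some shared value b has (a, b) in R and (b, c) in S."""
    return {(a, c) for (a, b) in r_xy for (b2, c) in r_yz if b == b2}

def project(rel, keep):
    """Projection of a constraint relation to the variable positions in `keep`."""
    return {tuple(t[i] for i in keep) for t in rel}

# x < y and y < z over the domain {1, 2, 3}:
lt = {(a, b) for a in (1, 2, 3) for b in (1, 2, 3) if a < b}

# Composition entails a constraint on (x, z): it is satisfied exactly when
# some intermediate value for y exists, so here it is {(1, 3)}.
print(compose(lt, lt))        # {(1, 3)}

# Projecting the ternary constraint x < y < z onto positions (x, z) agrees:
lt3 = {(a, b, c) for (a, b) in lt for (b2, c) in lt if b == b2}
print(project(lt3, (0, 2)))   # {(1, 3)}
```

Every tuple produced by compose or project is, by construction, a consequence of the original constraints in the sense of entailment defined above.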
Constraint inference
Mathematics
292
18,208,725
https://en.wikipedia.org/wiki/Biorisk
Biorisk generally refers to the risk associated with biological materials and/or infectious agents, also known as pathogens. The term has been used frequently for various purposes since the early 1990s. The term is used by regulators, security experts, laboratory personnel and industry alike, and is used by the World Health Organization (WHO). WHO/Europe also provides tools and training courses in biosafety and biosecurity. An international Laboratory Biorisk Management Standard, developed under the auspices of the European Committee for Standardization, defines biorisk as the combination of the probability of occurrence of harm and the severity of that harm where the source of harm is a biological agent or toxin. The source of harm may be an unintentional exposure, accidental release or loss, theft, misuse, diversion, unauthorized access or intentional unauthorized release. Biorisk reduction Biorisk reduction involves creating expertise in managing high-consequence pathogens, by providing training on safe handling and control of pathogens that pose significant health risks. See also Biocontainment, related to laboratory biosafety levels Biodefense Biodiversity Biohazard Biological warfare Biological Weapons Convention Biosecurity Bioterrorism Cyberbiosecurity Endangered species References Bioethics Health risk Biological hazards Biological contamination
Biorisk
Technology
260
46,509,757
https://en.wikipedia.org/wiki/Haleets
Haleets (also called Figurehead Rock) is a sandstone glacial erratic boulder with inscribed petroglyphs on Bainbridge Island, Washington. The Native American Suquamish Tribe claims the rock, on a public beach at Agate Point on the shore of Agate Passage, as part of their heritage. The exact date the petroglyphs were carved is unknown but is estimated to be around 1000 BCE to 400 or 500 CE, the latest date being when labrets (worn by one of the petroglyph figures) were no longer used by Coast Salish peoples. Haleets, also spelled as Halelos, Xalelos and Xalilc, is derived from the Lushootseed name of the rock, meaning "marked rock". It is also known in English as Figurehead Rock. Its purpose is unknown, but the Suquamish Museum curator and archivist Charlie Sigo has stated that it may have been a boundary marker. An amateur astronomer has proposed a theory that it has a calendrical function (see Archaeoastronomy). The rock is tall and long. It sits about offshore, and has been marked with chiseled and drilled Coast Survey features since 1856, and a bronze geodetic mark was placed on it in 1934. Some sources say that the rock is one of three prominent Salish Sea petroglyphs that were always on the shoreline, but tectonic activity around the Seattle Fault may have put Haleets in the intertidal zone. See also List of individual rocks Footnotes References Bibliography External links Petroglyphs in Washington (state) Glacial erratics of Washington (state) Bainbridge Island, Washington Coast Salish art Archaeoastronomy Individual rocks
Haleets
Astronomy
357
428,625
https://en.wikipedia.org/wiki/Frontend%20and%20backend
In software development, frontend refers to the presentation layer that users interact with, while backend involves the data management and processing behind the scenes. In the client–server model, the client is usually considered the frontend, handling user-facing tasks, and the server is the backend, managing data and logic. Some presentation tasks may also be performed by the server. Introduction In software architecture, there may be many layers between the hardware and end user. The front is an abstraction, simplifying the underlying component by providing a user-friendly interface, while the back usually handles data storage and business logic. Examples E-commerce Website: The frontend is the user interface (e.g., product pages, search bar), while the backend processes payments and updates inventory. Banking App: The frontend displays account balances, while the backend handles secure transactions and updates records. Social Media Platform: The frontend shows the news feed, while the backend stores posts and manages notifications. In telecommunication, the front can be considered a device or service, while the back is the infrastructure that supports provision of service. A rule of thumb is that the client-side (or "frontend") is any component manipulated by the user. The server-side (or "backend") code usually resides on the server, often far removed physically from the user. Software definitions In content management systems, the terms frontend and backend may refer to the end-user facing views of the CMS and the administrative views, respectively. In speech synthesis, the frontend refers to the part of the synthesis system that converts the input text into a symbolic phonetic representation, and the backend converts the symbolic phonetic representation into actual sounds. In compilers, the frontend translates a computer programming source code into an intermediate representation, and the backend works with the intermediate representation to produce code in a computer output language. The backend usually optimizes to produce code that runs faster. The frontend/backend distinction can separate the parser section that deals with source code and the backend that generates code and optimizes. Some designs, such as GCC, offer choices between multiple frontends (parsing different source languages) or backends (generating code for different target processors). Some graphical user interface (GUI) applications running in a desktop environment are implemented as a thin frontend for underlying command-line interface (CLI) programs, to save the user from learning the special terminology and memorizing the commands. Web development as an example Another way to understand the difference between the two is to understand the knowledge required of a frontend vs. a backend software developer. The list below focuses on web development as an example. Both Version control tools such as Git, Mercurial, or Subversion File transfer tools and protocols such as FTP or rsync Frontend focused Markup and web languages such as HTML, CSS, JavaScript, and ancillary libraries commonly used in those languages such as Sass or jQuery Asynchronous request handling and AJAX Single-page applications (with frameworks like React, Angular or Vue.js) Web performance (largest contentful paint, time to interactive, 60 FPS animations and interactions, memory usage, etc.) 
Responsive web design Cross-browser compatibility issues and workarounds End-to-end testing with a headless browser Build automation to transform and bundle JavaScript files, reduce image sizes and other processes using tools such as Webpack and Gulp.js Search engine optimization Accessibility concerns Basic usage of image editing tools such as GIMP or Photoshop User interface Backend focused Scripting languages like PHP, Python, Ruby, Perl, Node.js, or compiled languages like C#, Java or Go Data access layer Business logic Database administration Scalability High availability Security concerns, authentication and authorization Software architecture Data transformation Backup methods and software Note that both positions, despite possibly working on one product, require very distinct skill sets. API The frontend communicates with the backend through an API. In the case of web and mobile frontends, the API is often based on HTTP request/response. The API is sometimes designed using the "Backend for Frontend" (BFF) pattern, which serves responses tailored to ease processing on the frontend side (a minimal sketch appears below). Hardware definitions In network computing, frontend can refer to any hardware that optimizes or protects network traffic. It is called application front-end hardware because it is placed on the network's outward-facing frontend or boundary. Network traffic passes through the front-end hardware before entering the network. In processor design, frontend design would be the initial description of the behavior of a circuit in a hardware description language such as Verilog, while backend design would be the process of mapping that behavior to physical transistors on a die. See also References Software architecture Software engineering terminology
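A minimal sketch of the "Backend for Frontend" pattern referred to in the API section above; Flask is an assumed choice of framework, and the endpoint path and the two stand-in backend calls are hypothetical.

```python
from flask import Flask, jsonify

app = Flask(__name__)

def fetch_user(user_id):           # stand-in for a call to a user service
    return {"id": user_id, "name": "Ada"}

def fetch_notifications(user_id):  # stand-in for a notification service call
    return [{"text": "Welcome back"}]

@app.route("/api/home/<int:user_id>")
def home(user_id):
    # The BFF aggregates several backend calls into one response shaped the
    # way the frontend's home screen needs it, so the client makes a single
    # HTTP request instead of several.
    user = fetch_user(user_id)
    notes = fetch_notifications(user_id)
    return jsonify({
        "greeting": f"Hello, {user['name']}",
        "notificationCount": len(notes),
        "notifications": notes,
    })

if __name__ == "__main__":
    app.run(port=5000)  # the frontend would then GET /api/home/1
```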
Frontend and backend
Technology,Engineering
1,012
1,460,540
https://en.wikipedia.org/wiki/Sands%20Hotel%20and%20Casino
The Sands Hotel and Casino was a historic American hotel and casino on the Las Vegas Strip in Paradise, Nevada, United States, that operated from 1952 to 1996. Designed by architect Wayne McAllister, with a prominent high sign, the Sands was the seventh resort to open on the Strip. During its heyday, it hosted many famous entertainers of the day, most notably the Rat Pack and Jerry Lewis. The hotel was established in 1952 by Mack Kufferman, who bought the LaRue Restaurant which had opened a year earlier. The hotel was opened on December 15, 1952, as a casino and hotel with 200 rooms. The hotel rooms were divided into four two-story motel wings, each with fifty rooms, and named after famous race tracks. Crime bosses such as Doc Stacher and Meyer Lansky acquired shares in the hotel and attracted Frank Sinatra, who made his performing debut at Sands in October 1953. Sinatra later bought a share in the hotel himself. In 1960, the classic caper film Ocean's 11 was shot at the hotel, and it subsequently attained iconic status, with regular performances by Sinatra, Dean Martin, Jerry Lewis, Sammy Davis Jr., Red Skelton and others in the hotel's world-renowned Copa Room. In 1966, Sands opened a 500-room tower. In 1967, Sands became the first of several Las Vegas hotels to be purchased by Howard Hughes. Its final owners were Sheldon Adelson, Richard Katzeff, Ted Bernard, Irwin Chafetz, and Jordan Shapiro. After buying out his partners, Adelson shut it down to build a brand new resort. On November 26, 1996, the Sands was imploded and demolished, and The Venetian built in its place. History Early history The LaRue Restaurant was established in December 1950 by Billy Wilkerson. The following year, Mack Kufferman bought LaRue, with plans to build a hotel and casino. Kufferman failed to gain a gaming license, and his shares in the project were sold to Jake Freedman. Numerous sources state that organized crime figures Meyer Lansky and Doc Stacher; illegal bookmakers like Mike Shapiro, Ed Levinson, and Sid Wyman; as well as Hyman Abrams and Jack Entratter were involved in the financing of Sands and had shares in it. Lansky and his mob assumed ownership of the Flamingo Hotel after the murder of Bugsy Siegel in 1947, and Lansky and New York mobster Frank Costello also had business interests in the Thunderbird Hotel and El Cortez Club in Downtown Las Vegas. Construction began on Sands Hotel in early 1952, built to a design by Wayne McAllister. Trousdale Construction Company of Los Angeles was the general contractor. Initially the Nevada Tax Commission rejected Freedman's request for a gambling license due to his connections with known criminals. Freedman had initially intended naming the hotel "Holiday Inn" after the film of the same name starring Bing Crosby, but after noticing that his socks became so full of sand decided to name it Sands. The tag line would be "A Place in the Sun", named after a recently released film starring Montgomery Clift and Elizabeth Taylor, and quite suitable to the hot desert location of Las Vegas. The hotel was opened on December 15, 1952, as a casino with 200 rooms, and was established less than three months after the opening of another prominent landmark, Sahara Hotel and Casino. The opening was widely publicized, and the hotel was visited by some 12,000 people within a few hours. At the inauguration were 146 journalists and special guests such as Arlene Dahl, Fernando Lamas, Esther Williams, and Terry Moore. 
Every guest was given a Chamois bag with silver dollars, and Sands ended up losing $200,000 within the first eight hours. Danny Thomas, Jimmy McHugh and the Copa Girls, labelled "the most beautiful girls in the world", performed in the Copa Room on opening night, and Ray Sinatra and his Orchestra were the initial house band. Thomas was hired to perform for the first two weeks, but strained his voice on the second night and developed laryngitis, and was replaced with performers such as Jimmy Durante, Frankie Laine, Jane Powell, the Ritz Brothers, and Ray Anthony. Jack Entratter, who was formerly in charge of the New York nightclub, the Copacabana, became the hotel's manager. Entratter made many show business friends during his time at the nightclub; he was able to use these connections to sign performers for the Sands Copa Room. Entratter was also able to offer entertainers an additional incentive to perform at the Sands. Headlining stars received "points", or a percentage of ownership in the hotel and casino. Entratter's personally selected "Copa Girls" wore $12,000 worth of costumes on the hotel's opening night; this surpassed the salary of the Copa Room's star, Danny Thomas. In the early years, Freedman and his wife Carolyn were one of its attractions, wearing "matching white, leather outfits, replete with identical cowboy boots and hats". Freedman offered Carolyn's father Nathan a 5% stake in Sands but he declined the offer. The Rat Pack and racial policy Lansky and Costello brought the Sands to Frank Sinatra's attention, and he began staying at the hotel and gambling there during breaks from Hollywood, though some sources state that he was not a hardcore gambler. Sinatra earned a notoriety for "keeping his winnings and ignoring his gambling losses", but the mobsters running the hotel were not too concerned because Sinatra was great for business. He made his debut performing at the hotel on October 4, 1953, after an invitation by the manager Jack Entratter. Sinatra typically played at Sands three times a year, sometimes a two-week stint, which "brought in the big rollers, a lot of oil money from Texas". The big rollers left Vegas when Sinatra did, and other performers were reluctant to perform after him, feeling intimidated. Entratter replaced Freedman as the president of the Sands Hotel following his death from heart surgery on January 20, 1958. Freedman's last wife Sadie subsequently lived in a suite in the Belmont Park wing into the mid-1960s until her death. Sinatra, who had attempted to buy a share in the hotel soon after first visiting in 1953, but was denied by the Nevada Tax Commission, was now granted permission to buy a share in the hotel, due to his phenomenal impact upon business in Las Vegas. His share, variously described as from 2 to 9%, aided Freedman's wife in paying off her husband's gambling debts. In 1955, limited integration came to heavily segregated Las Vegas when the Sands first allowed Nat King Cole to stay at the hotel and perform. Sinatra noticed that he never saw Cole in the dining room, always eating his meals in solitude in his dressing room. When he asked his valet George to find out why, he learned that "Coloreds aren't allowed in the dining room at the Sands". Sinatra subsequently stated that if blacks were not permitted to eat their meals in the dining room with everybody else he would see to it that all of the waiters and waitresses were fired, and invited Cole to dine with him the following evening. 
Cole was allowed into the casino, as was another black performer, Harry Belafonte, who took a more aggressive approach by walking into the casino of his own accord and sitting at a blackjack table, which was not challenged by the bosses. Belafonte became the "first black man to play cards on the Las Vegas Strip." Sammy Davis Jr. was instrumental in bringing about a general change in policy. When the Will Mastin Trio began performing at Sands in 1958, Davis informed Entratter that his father and uncle must be allowed to stay at Sands while he was performing there. Entratter granted them permission but continued his objection to admitting other black guests. In 1961, an African-American couple entered the lobby of the hotel and were blocked by the security guard, witnessed by Sinatra and Davis. Sinatra told the guards that they were his guests and let them into the hotel. Sinatra subsequently swore profusely on the phone to Sands executive Carl Cohen about how ridiculous the situation was, and the following day, Davis approached Entratter and demanded that Sands begin employing blacks. Shortly afterwards the hotel changed its policy, hiring black waiters and busboys and permitting black guests entry into the casino. In the late 1950s, Senator John F. Kennedy was occasionally a guest of Sinatra at the Sands. Arguably the hotel's biggest claim to fame was a three-week period in 1960 during the filming of Ocean's 11, after which it attained iconic status. During that time, the movie's stars Sinatra, Dean Martin, Davis, Joey Bishop and Peter Lawford performed on stage together in the Copa Room. The performances were called the "Summit at the Sands" and this is considered to be the birth of the Rat Pack. Later history When Howard Hughes purchased the hotel in the mid-1960s for $14.6 million, the architect Martin Stern Jr. designed a 500-room circular tower, which opened in 1967. The tower was built by R. C. Johnson and Associates General Contractors. The hotel became a Las Vegas landmark. Hughes grew particularly annoyed every time the Rat Pack were in his hotel, due to a hatred of Frank Sinatra which stemmed from the fact that he had been in love with Ava Gardner in the 1950s and she had run off to marry Sinatra. The ill feeling was reciprocated by Sinatra. Hughes plotted to oust Sinatra from the Sands for good, and asked Robert Maheu to draw up a plan shortly after the new hotel opened in 1967. The hotel restricted what Sinatra could gamble in the casino to just $3,000 a night. Under previous management, Sinatra had no limits on the amount of credit extended to him by the Sands casino. His IOUs, chits or "markers" were torn up at the end of Sinatra's engagements because he was considered to be good for business—bringing the hotel more monetary value than the worth of his gambling losses. Hughes put a stop to this system, telling Jack Entratter to inform Sinatra of the new policy; Entratter did not do so because he was afraid. Fuming, Sinatra began what The Los Angeles Times describes as a "weekend-long tirade" against the "hotel's management, employees and security forces." The FBI report says the incident began when Mia Farrow lost $20,000 at the Sands casino. Sinatra bought $50,000 in chips and made an attempt to win the money back. He lost this sum within a short period of time. Sinatra then asked for credit, which was denied.
It culminated when Sinatra reportedly drove a golf cart through the window of the coffee shop where casino manager Carl Cohen was seated and began "screaming obscenities and anti-Semitic remarks" at Cohen. Sinatra reportedly punched Cohen, a heavily built man, who responded with a smack in the mouth, bloodying Sinatra's nose and knocking two of his teeth out. As a result, Sinatra never performed at the Sands again while Hughes owned it, and began performing at Caesars Palace. A number of the staff were not disappointed to see Sinatra leave the Sands. Numerous employees had been humiliated or intimidated over the years, including a busboy Sinatra tripped while he was carrying a tray with dishes. After Sinatra left, the mobsters pulled out of the Sands and gradually left Vegas in the 1970s. In the 1970s, it became associated with the likes of Wayne Newton and Liberace. At this time, some 30% of the performers at Sands were Italian Americans. Frank Gagliardi became the drummer for the house orchestra in 1964, starting a twelve-year tenure. In 1968, Hughes stated that he intended to expand Sands into a 4,000-room resort, but his plans did not materialize. In 1980, Hughes' company, Summa Corporation, sold the Sands to the Pratt Corporation. Jack, Edward and William Pratt said they would spend $40 million in renovating the Sands Hotel and expanding the rooms, casino and public area accommodations, but Summa subsequently bought the property back when the Pratts were unable to make a profit. MGM Grand, Inc. bought the hotel along with the neighboring Desert Inn in 1988 for a total of $167 million, and the property became known as the MGM Sands. The next year, MGM sold it for $110 million to Las Vegas Sands, a new company formed by the owners of The Interface Group, including Sheldon Adelson, Richard Katzeff, Ted Cutler, Irwin Chafetz and Jordan Shapiro. The same year, it was licensed by the Nevada Gaming Commission, and Adelson became a casino magnate. In the early 1990s, Adelson built the Sands Expo, a convention centre. In its final years, the Sands became a shadow of its former self—a throwback to the old days—and it ultimately could not compete with the newer and more exciting mega-resorts that were being built on the Strip. However, a 1990s travel guide stated that the hotel gardens and pool area still retained the ambiance of the classic Sands days. The decision was eventually made by its final owner, Sheldon Adelson, to shut it down and to build a brand new resort. The last dice in the casino were rolled by Bob Stupak just after 6:00 p.m. on June 30, 1996. At 2:06 a.m. on November 26, 1996, it was imploded and demolished, much to the dismay of longtime employees and sentimentalists. Footage of the demolition also appeared in the closing credits of The Cooler. The climactic plane crash in 1997's Con Air ended with the aircraft crashing into the soon-to-be-demolished Sands' lobby. On May 3, 1999, the new $1.5 billion megaresort The Venetian opened where the Sands had formerly been, a 35-story hotel with 3,036 rooms, covering an area of . It became the largest AAA Five-Diamond landmark in North America. Architecture Wayne McAllister design, 1952 Wayne McAllister designed the original $5.5 million Sands Hotel, an exotic-looking terracotta red-painted modern hotel with a prominent porte cochère at the front, surrounded by a zig-zag wall ornamented with tiled planters. The hotel is arguably most associated with its high sign, made iconic with photographs of the Rat Pack standing underneath it.
The name "Sands", written in elegant italics, featured a high letter "S", and the name was sprawled across an egg crate grill, cantilevered from a pillar. The sign was receptive to the light and shadow of the desert, and during night time it was lit up, glowing neon red. It was the tallest sign on the strip for a number of years. Beneath "Sands" was the tagline "A Place in the Sun", written in smaller capital letters. Below that was the billing of the names of the performers appearing at Sands, very often photographed displaying names such as Frank Sinatra, Dean Martin, Jerry Lewis, Sammy Davis Jr. and Red Skelton in the late 1950s and early 1960s. Author Alan Hess wrote that the "sleek Modernism of the Sands leaped past the Flamingo to set a higher standard of sophistication for Las Vegas. For the first time, the sign was an integral part of the architectural design." The porte-cochère of the hotel featured three great sharp-edged pillars jutting out in front of the glass-fronted building, angling down into the ground, which resembled fins. The two-story glass walled entry was bordered by a wall of imported Italian marble, and above the entrance area was a horizontal plane with copper lights suspended from the beams. Rather than being polished, the marble was unusual in that it was rough and grained. Natural and stained cork was used throughout the building. A.J. Leibling of The New Yorker described the hotel in 1953: "The main building of the Sands is a great rectangular hall, with the reception desk in one corner, slot machines along one long wall and a bar and cocktail lounge, complete with Latin trio, along the opposite wall. In the middle is a jumble of roulette and craps tables and 21 layouts." The casino, of substantial size, was accessed by three sets of terrazzo stairs and was lit by low-hanging chandeliers. The bar featured bas-reliefs with a Western theme, including cowboys, racing wagons and Joshua trees, designed by Allan Stewart of Claremont College, California. The Garden Room restaurant overlooked the hotel's pool and landscaped grounds. The guest rooms were located in a group of stand-alone, two-storey buildings. There were six 40-room, 168' by 59'6 blocks – named after the racetracks Arlington, Belmont, Santa Anita, Rockingham, Bay Meadows, and Hialeah – and one double-long block named Churchill Downs. Each block had a white Bermuda roof with a 3/12 pitch. The blocks were arranged in a Y formation. Rooms were designed and furnished by Barker Bros. of Los Angeles. Later in the 1950s, the additional bedroom blocks Hollywood Park and Garden State were built. The suites were luxuriously designed. Plush blue carpets and ivory-colored chairs with white ceilings were the norm in the early days. An electric tram service, often attended by pretty showgirls, took the guests to their rooms. In 1963, a new bedroom building, named the Aqueduct, was constructed. The Aqueduct was crescent shaped and was three storeys high, 275' long, and '70 wide. The architect of the new block was Julius Gabrielle, and furnishings were provided by the Albert Parvin Company of Los Angeles. Dick Wells, a Parvin employee, designed the suites. Martin Stern rebuild, 1964 In 1963, Martin Stern Jr. was hired to design a major overhaul of the hotel. Stern's plan amounted to an almost total rebuild of the main building and erased most of the defining features of McAllister's original design. To expand the main building, the Arlington and Belmont blocks were lifted and moved southwards. 
Stern's plans were completed by the fall of 1964. The rebuild included the addition of a convention hall, an entrance rotunda, new restaurants, and a 17-story tower. Stern's design employed a prominent arch motif in the tower and the hotel façade. Construction of the tower commenced in late 1965, and was completed in 1967. It existed until November 1996 when it was demolished. The steam room of the hotel was a place of relaxation and good jest. It became a great place for socializing among the stars after 5 pm, including the Rat Pack, and Jerry Lewis, Steve Lawrence and Don Rickles. On one occasion they were having problems with the TV in the massage room, which was blurry and out of focus. Sinatra yelled "Move back, move back", and the television was thrown into the pool. Manager Entratter permitted such activities, knowing that if he scolded Sinatra and asked him to pay damages, he would not perform at Sands again. Copa Room The Copa Room was the showroom of Sands, named after the famed Copacabana Club in New York City. It contained 385 seats, designed in a Brazilian carnival style. Some of the more famed singers like Frank Sinatra, Dean Martin, and Sammy Davis Jr. had to sign contracts to ensure that they would headline for a given number of weeks a year. Performers were extremely well paid for the period. It was common for some of them to be paid $25,000 per week, playing two shows a night, six days a week, and once on a Sunday for two to three weeks. The greatest names in the entertainment industry graced the stage of the Copa Room. Notable performers included Judy Garland, Lena Horne (one of the first black performers at the hotel, billed as "The Satin Doll"), Jimmy Durante, Dean Martin, Pat Cooper, Shirley MacLaine, Marlene Dietrich, Tallulah Bankhead, Shecky Greene, Martin and Lewis, Danny Thomas, Bobby Darin, Ethel Merman, Rich Little, Louis Armstrong, Jerry Lee Lewis, French singer Edith Piaf, Nat King Cole, Robert Merrill, Wayne Newton, Red Skelton, and "The Copa Girls". Hollywood celebrities such as Humphrey Bogart and Lauren Bacall, Elizabeth Taylor, Yul Brynner, Kirk Douglas, Lucille Ball and Rosalind Russell were often photographed enjoying the headline acts. A number of notable albums were recorded in the Copa Room. Among them are Dean Martin's Live At The Sands – An Evening of Music, Laughter and Hard Liquor, Frank Sinatra's Sinatra at the Sands, and Sammy Davis, Jr.'s The Sounds of '66 and That's All!. The Rat Pack: Live at the Sands, a CD released in 2001, features Martin, Sinatra and Davis in a live performance at the hotel recorded in September 1963. Live at the Sands is an album featuring Mary Wilson, formerly of The Supremes. Morrissey's B-side track, "At Amber" (1990), takes place at the Sands Hotel, and recounts its by-then aging and somewhat seedy atmosphere. Much of the musical success of the Copa Room is credited to the room's band leader and musical conductor Antonio Morelli. Morelli not only acted as the band leader and musical conductor for the Copa Room during the hotel's Rat Pack heyday in the 1950s and 1960s, but he also served in that role on hundreds of recorded albums by those same entertainers who graced the stage of the Copa. Often the festivities would carry over after hours to Morelli's home in Las Vegas, nicknamed "The Morelli House", which was eventually relocated and designated a historical landmark by the State of Nevada.
Silver Queen Lounge The Silver Queen Lounge was another performing venue at Sands, with nightly acts starting at 5 pm and running until 6 am. It was particularly popular with the emerging rock 'n' roll crowd. The Sands is where Freddie Bell and the Bell Boys performed the rock 'n' roll song "Hound Dog" in a show seen by Elvis Presley. After Presley saw that performance, he decided to record the song himself, and it became a hit for him. Roberta Linn and the Melodaires and Gene Vincent were also regular performers. See also Sands Macao Footnotes References Sources External links History of Sands Hotel, Classiclasvegas.squarespace.com 1967 Tower architect Martin Stern, Jr. at gaming.unlv.edu Video of 1996 tower implosion Casinos completed in 1952 Hotel buildings completed in 1952 Hotel buildings completed in 1967 Buildings and structures demolished in 1996 1996 disestablishments in Nevada Former skyscraper hotels Defunct casinos in the Las Vegas Valley Defunct hotels in the Las Vegas Valley Skyscraper hotels in Paradise, Nevada Demolished hotels in Clark County, Nevada Buildings and structures demolished by controlled implosion Las Vegas Strip Resorts in the Las Vegas Valley Hotels established in 1952 1952 establishments in Nevada Casino hotels Sheldon Adelson
Sands Hotel and Casino
Engineering
4,749
3,781,904
https://en.wikipedia.org/wiki/Core%20model
In set theory, the core model is a definable inner model of the universe of all sets. Even though set theorists refer to "the core model", it is not a uniquely identified mathematical object. Rather, it is a class of inner models that under the right set-theoretic assumptions have very special properties, most notably covering properties. Intuitively, the core model is "the largest canonical inner model there is" (here "canonical" is an undefined term) and is typically associated with a large cardinal notion. If Φ is a large cardinal notion, then the phrase "core model below Φ" refers to the definable inner model that exhibits the special properties under the assumption that there does not exist a cardinal satisfying Φ. The core model program seeks to analyze large cardinal axioms by determining the core models below them.

History
The first core model was Kurt Gödel's constructible universe L. Ronald Jensen proved the covering lemma for L in the 1970s under the assumption of the non-existence of zero sharp, establishing that L is the "core model below zero sharp". The work of Solovay isolated another core model L[U], for U an ultrafilter on a measurable cardinal (and its associated "sharp", zero dagger). Together with Tony Dodd, Jensen constructed the Dodd–Jensen core model ("the core model below a measurable cardinal") and proved the covering lemma for it and a generalized covering lemma for L[U]. Mitchell used coherent sequences of measures to develop core models containing multiple or higher-order measurable cardinals. Still later, the Steel core model used extenders and iteration trees to construct a core model below a Woodin cardinal.

Construction of core models
Core models are constructed by transfinite recursion from small fragments of the core model called mice. An important ingredient of the construction is the comparison lemma, which allows a wellordering of the relevant mice. At the level of strong cardinals and above, one constructs an intermediate countably certified core model Kc and then, if possible, extracts K from Kc.

Properties of core models
Kc (and hence K) is a fine-structural, countably iterable extender model below long extenders. (It is not currently known how to deal with long extenders, which witness that a cardinal is superstrong.) Here countable iterability means ω1+1 iterability for all countable elementary substructures of initial segments, and this suffices to develop the basic theory, including certain condensation properties. The theory of such models is canonical and well understood. They satisfy GCH, the diamond principle for all stationary subsets of regular cardinals, the square principle (except at subcompact cardinals), and other principles that hold in L.

Kc is maximal in several senses. It computes the successors of measurable and many singular cardinals correctly. It is also expected that, under an appropriate weakening of countable certifiability, Kc would correctly compute the successors of all weakly compact and singular strong limit cardinals. If V is closed under a mouse operator (an inner model operator), then so is Kc. Kc has no sharp: there is no natural non-trivial elementary embedding of Kc into itself. (However, unlike K, Kc may be elementarily self-embeddable.)

If, in addition, there are no Woodin cardinals in this model (except in certain specific cases, it is not known how the core model should be defined if Kc has Woodin cardinals), then the actual core model K can be extracted. K is also its own core model.
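As an illustration of the covering properties mentioned above, the following is a short LaTeX rendering of Jensen's covering lemma for L, the prototype from the History section. The statement itself is the standard one; the formula layout is only a sketch, and the abbreviation Ord for the class of ordinals is introduced here for compactness.

\documentclass{article}
\usepackage{amsmath,amssymb}
\begin{document}
% Jensen's covering lemma for L: if zero sharp does not exist, then every
% uncountable set of ordinals is covered by a constructible set of the
% same cardinality.
If $0^{\#}$ does not exist, then
\[
  \forall X \subseteq \mathrm{Ord}
  \Bigl( |X| \ge \aleph_{1}
  \;\longrightarrow\;
  \exists Y \in L \,\bigl( X \subseteq Y \wedge |Y| = |X| \bigr) \Bigr).
\]
\end{document}

The covering lemmas for the Dodd–Jensen core model and for L[U] have the same shape, with L replaced by the corresponding model (the L[U] version additionally has to allow for Prikry sequences).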
K is locally definable and generically absolute: for every generic extension V[G] of V and every cardinal κ > ω1 in V[G], K as constructed in H(κ) of V[G] equals K ∩ H(κ). (This would not be possible if K contained Woodin cardinals.) K is maximal, universal, and fully iterable. This implies that for every iterable extender model M (called a mouse), there are elementary embeddings of M and of an initial segment of K into a common model N, and if M is universal, the embedding is of K into M.

It is conjectured that if K exists and V is closed under a sharp operator M, then K is Σ11 correct allowing real numbers in K as parameters and M as a predicate. That amounts to Σ13 correctness (in the usual sense) if M is x→x#.

The core model can also be defined above a particular set of ordinals X: X belongs to K(X), but K(X) satisfies the usual properties of K above X. If there is no iterable inner model with ω Woodin cardinals, then K(X) exists for some X. The above discussion of K and Kc generalizes to K(X) and Kc(X).

Construction of core models
Conjecture: If there is no ω1+1 iterable model with long extenders (and hence no models with superstrong cardinals), then Kc exists. If Kc exists and, as constructed in every generic extension of V (equivalently, under some generic collapse Coll(ω, <κ) for a sufficiently large ordinal κ), satisfies "there are no Woodin cardinals", then the core model K exists.

Partial results for the conjecture are:
If there is no inner model with a Woodin cardinal, then K exists.
If (boldface) Σ1n determinacy (n finite) holds in every generic extension of V, but there is no iterable inner model with n Woodin cardinals, then K exists.
If there is a measurable cardinal κ, then either Kc below κ exists, or there is an ω1+1 iterable model with a measurable limit λ of both Woodin cardinals and cardinals strong up to λ.

If V has Woodin cardinals but no cardinals strong past a Woodin cardinal, then under appropriate circumstances (a candidate for) K can be constructed by building, below each Woodin cardinal κ (and below the class of all ordinals), the K that lies above the K constructed below the supremum of the Woodin cardinals below κ. The candidate core model is not fully iterable (iterability fails at Woodin cardinals) and is not generically absolute, but in other respects it behaves like K.

References
W. Hugh Woodin (June/July 2001). "The Continuum Hypothesis, Part I". Notices of the AMS.
William Mitchell. "Beginning Inner Model Theory" (Chapter 17 in Volume 3 of the "Handbook of Set Theory").
Matthew Foreman and Akihiro Kanamori (editors). "Handbook of Set Theory", Springer Verlag, 2010.
Ronald Jensen and John R. Steel. "K without the measurable". Journal of Symbolic Logic, Volume 78, Issue 3 (2013), 708–734.

Inner model theory
Large cardinals
Core model
Mathematics
1,456
8,821,670
https://en.wikipedia.org/wiki/Voglibose
Voglibose (INN and USAN, trade name Voglib, marketed by Mascot Health Series) is an alpha-glucosidase inhibitor used for lowering postprandial blood glucose levels in people with diabetes mellitus. Voglibose is a research product of Takeda Pharmaceutical Company, Japan's largest pharmaceutical company. It was discovered in 1981 and was first launched in Japan in 1994, under the trade name BASEN, to improve postprandial hyperglycemia in diabetes mellitus.

Postprandial hyperglycemia (PPHG) is primarily due to impaired first-phase insulin secretion. Alpha-glucosidase inhibitors delay glucose absorption in the intestine and thereby prevent a sudden surge of glucose after a meal. There are three major drugs in this class, acarbose, miglitol, and voglibose, of which voglibose is the newest.

Efficacy
A Cochrane systematic review assessed the effect of alpha-glucosidase inhibitors in people with impaired glucose tolerance, impaired fasting blood glucose, or elevated glycated hemoglobin A1c (HbA1c). It found no conclusive evidence that voglibose, compared to diet and exercise or placebo, reduced the incidence of type 2 diabetes mellitus, improved all-cause mortality, or reduced or increased the risk of cardiovascular mortality, serious or non-serious adverse events, non-fatal stroke, congestive heart failure, or non-fatal myocardial infarction.

References

Further reading

Alpha-glucosidase inhibitors
Amino sugars
Voglibose
Chemistry
336