Dataset columns: id (int64, 580 to 79M) · url (string, length 31 to 175) · text (string, length 9 to 245k) · source (string, length 1 to 109) · categories (string, 160 classes) · token_count (int64, 3 to 51.8k)
54,035,992
https://en.wikipedia.org/wiki/Operation%20Snowball%20%28test%29
Operation Snowball was a conventional explosive test to obtain information on nuclear weapon detonations, run by the Defence Research Board with participation from the United Kingdom and United States. A detonation of TNT was used to study the resulting phenomena. The test was held at the Suffield Experimental Station in Alberta and was the largest deliberate man-made explosion in Canada to that date. The test was also the first of its kind to use a stacked TNT block hemisphere of such magnitude, a method repeated in six subsequent tests such as Operation Sailor Hat and Prairie Flat. The test allowed researchers to verify predicted shock and blast properties and to determine their effects on a variety of military targets at varied distances from ground zero. Background Suffield Experimental Station is in an isolated part of the rolling prairie terrain of Alberta and at the time of the test was part of a test range. This unique landscape offers clear views, owing to its treelessness, and favourable weather conditions. Starting in fall 1963, teams from the involved countries began assembling at the test grounds for the joint operation, one of many in a series of non-nuclear blast tests under the Tripartite Technical Cooperation Program. Initial preparations and construction work began in the winter of 1963, which was unusually mild and proved helpful. Virtually the entire station was mobilized for the effort, and unusual equipment was designed and manufactured at the site's engineering and machine shops. The casting of the TNT was performed on site and began two years in advance of the test. Molten TNT was poured into special rectangular moulds and allowed to cool for 5 hours, producing smooth caramel-coloured rectangular blocks. A total of 30,678 blocks were made. Test Construction of the TNT hemisphere began six days before detonation. To protect the charge from dust, sunlight, and the risk of lightning, the construction was not done in the open; instead, a building was constructed at ground zero that was to be wheeled away when the charge was complete. Each block was carefully fitted into a pre-calculated position to approximate a perfect hemisphere. Columns were drilled into the ground at the blast site and then backfilled with a coloured soil mix to measure the expected horizontal and vertical displacement. Various military targets were also placed around the blast site. Local residents in Medicine Hat may have seen unusual cargo arriving on rail flatcars, including heavy Jupiter and Nike configuration rockets. Other targets included a minefield, chambers and tunnels, reinforced concrete arches, fibreglass shelters, full-scale troop models (both exposed and inside vehicles), and gas masks. 60% of the targets were buried. Smoke mortars were installed to provide a white trail by which the shock wave could be tracked high above ground. An RCAF Neptune aircraft flew immediately above the test area for aerial photography. The detonation turned the TNT into a mass of highly pressurized gas and produced a large fireball and mushroom-shaped cloud. The resulting crater partially filled with water within minutes. Unexpectedly, the crater also had a central uplift similar to that of lunar craters, which prompted much speculation among the scientists. Many target objects were moved and damaged by the overpressure blast, especially the heavy rockets, and two M113 armoured personnel carriers were flipped onto their sides. 
After the test, a full cross section of the crater was excavated to analyze the effects below ground.
Operation Snowball (test)
Engineering
700
13,299,905
https://en.wikipedia.org/wiki/Pyroglutamyl-histidyl-glycine
Pyroglutamyl-histidyl-glycine (pEHG) is an endogenous tripeptide that acts as a tissue-specific antimitotic and selectively inhibits the proliferation of colon epithelial cells. Early research indicated that pEHG had anorectic effects in mice and was possibly involved in the pathophysiology of anorexia nervosa. However, subsequent studies have found that pEHG lacks anorectic effects and does not alter food intake in mice.
Pyroglutamyl-histidyl-glycine
Chemistry,Biology
113
53,010,758
https://en.wikipedia.org/wiki/Short%20range%20order
In crystallography, short range order refers to the regular and predictable arrangement of atoms (i.e., a crystalline-lattice-like arrangement) over a short distance, usually within one or two atom spacings. This regularity, however, does not necessarily extend over larger distances. Examples of materials with short range order include amorphous materials such as wax, glass and liquids, as well as the collagen fibrils of the stroma in the cornea. Besides the ordering of atoms, short-range ordering of vacancies is also possible. Examples of systems with short-range ordering of oxygen vacancies include oxygen-deficient stoichiometries of certain superconductors, as well as perovskites and novel bismuth sillenites. See also: Order and disorder, Structure of liquids and glasses
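Short-range order of this kind is commonly quantified with a pair (radial) distribution function g(r), which shows sharp peaks at the first one or two neighbour spacings and flattens toward 1 at larger distances. The following is a minimal illustrative sketch (not from the article; the toy configuration and all parameters are invented) in Python:

```python
import numpy as np

# Toy "amorphous" configuration: points in a periodic 2-D box with a
# minimum separation enforced (random sequential adsorption).
rng = np.random.default_rng(0)
L, n, r_min = 20.0, 200, 1.0
pts = []
while len(pts) < n:
    p = rng.uniform(0, L, 2)
    if all(np.linalg.norm((p - q + L/2) % L - L/2) > r_min for q in pts):
        pts.append(p)
pts = np.array(pts)

# All pair distances under the minimum-image convention
d = pts[:, None, :] - pts[None, :, :]
d = (d + L/2) % L - L/2
r = np.sqrt((d**2).sum(-1))[np.triu_indices(n, k=1)]

# g(r): histogram of pair distances normalised by the ideal-gas expectation
edges = np.linspace(0.01, 5.0, 100)
hist, _ = np.histogram(r, bins=edges)
shell = np.pi * (edges[1:]**2 - edges[:-1]**2)   # 2-D shell areas
g = hist / (shell / L**2 * n * (n - 1) / 2)
# g(r) is near zero below r_min, peaks just above it (short-range order),
# and decays toward 1 at large r (no long-range order).
```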
Short range order
Physics
179
32,570,884
https://en.wikipedia.org/wiki/Audio%20analyzer
An audio analyzer is a test and measurement instrument used to objectively quantify the audio performance of electronic and electro-acoustical devices. Audio quality metrics cover a wide variety of parameters, including level, gain, noise, harmonic and intermodulation distortion, frequency response, relative phase of signals, interchannel crosstalk, and more. In addition, many manufacturers have requirements for behavior and connectivity of audio devices that require specific tests and confirmations. Audio analysis requires that the device under test receive a stimulus signal of known characteristics, with which the output signal (response) may be compared by the analyzer in order to determine differences expressed in the specific measurements. This signal may be generated or controlled by the analyzer itself or may come from another source (e.g., a recording) as long as characteristics relevant to the desired measurement are defined. As test and measurement equipment, audio analyzers are required to provide performance well beyond that of the typical devices under test (DUTs). High quality audio analyzers must demonstrate vanishingly low levels of noise, distortion and interference in order to be deemed worthwhile, and must do so consistently and reliably to be trusted by engineers and designers. For example, while a commercial CD player can achieve a total harmonic distortion plus noise (THD+N) ratio of approximately −98 dB at 1 kHz, a high quality audio analyzer may exhibit THD+N as low as −121 dB (the specified typical performance of the Audio Precision APx555). Audio analyzers are used in both development and production of products. A design engineer will find an analyzer very useful for understanding and refining product performance, while a production engineer will wish to perform tests that rapidly confirm that units meet specifications. Very often audio analyzers are optimized for one of these two cases. Current popular audio analyzer models include: APx585 and APx555 (from Audio Precision), dScope M1 and Series III (from Spectral Measurement, formerly Prism Sound), U8903A (from Agilent) and the UPP and UPV analyzers (from Rohde & Schwarz). History One of the earliest reliable signal sources used for audio testing was the first product made by Hewlett-Packard in 1939, the HP200A audio oscillator. The clever and inexpensive design of the HP200A allowed testers to generate very high quality, low distortion sine waves that could be used for testing. This was followed by the company's introduction of the HP320A and HP320B Distortion Analyzers in 1941. These early analyzers could only determine total harmonic distortion and noise combined, and worked by employing a steep notch filter to remove the fundamental frequency of the stimulus signal from the output of the DUT. The remaining signal was measured as an AC voltage, allowing manual calculation of combined noise and distortion down to approximately 0.1%. Subsequent products from HP, Wandel & Goltermann, Radford, Marconi, Sound Technology, and Amber continued to refine measurement capabilities from the 1950s through the 1970s, but the model of usage remained relatively constant; signal generators and analyzers were separate pieces of equipment, and testing involved careful tuning of each one by a person with high technical skills. This changed in 1980 with the introduction of the Tektronix AA501 Distortion Analyzer, which automated the processes of setting levels, frequency tuning and nulling. 
At the same time Hewlett-Packard introduced the popular HP8903B, which combined a high quality signal generator and analyzer in a single unit. By the mid-eighties, Tektronix had ceased production of audio test equipment, and in 1984 members of the team that had developed the AA501 started Audio Precision. The first Audio Precision product was the System One, which combined an integrated generator and analyzer with a connected PC to fully automate test procedures and provide far more computational power than the simple microprocessors used in other products at the time. The novel use of a PC allowed for a high degree of custom automation and enabled a radically different visual presentation of results. The combination of PC technology with audio analyzers was adopted by others, including Prism Sound (dScope), Rohde & Schwarz (UPL), and Stanford Research (SR1). As the power of available PCs increased, measurements migrated from being performed internally by audio analyzers to applications running on connected PCs performing FFT (Fast Fourier Transform) calculations, greatly increasing the flexibility and resolution of many results. In addition to analog, audio analyzers today are frequently capable of generating and measuring audio signals over several different types of digital I/O. For example, the Rohde & Schwarz UPP offers AES/EBU, S/PDIF, I²S and HDMI options; the Audio Precision APx500 Series analyzers support AES/EBU, S/PDIF, I²S, HDMI, PDM (Pulse Density Modulation), and Bluetooth radio, and are fully DSP based. Block Diagram and Operation A modern audio analyzer consists of: an audio generator that provides analog and digital stimulus to the DUT; audio input stages that receive the DUT's analog or digital response and convert it to signals appropriate for analysis; a signal analyzer that filters the response and calculates measurement results (commonly a connected or embedded PC in modern instruments); and a form of output to the user (display, report, etc.). In a closed-loop test, the analysis engine controls the audio generator while simultaneously measuring the output of the DUT. The signal analyzer can provide control to both the audio generator and the audio input stages, assuring that test conditions are met. This also permits precise time relationships between the stimulus and response of a DUT to be determined. In an open-loop test, the signal analyzer has no control over the audio source driving the DUT, and thus the user must take care to ensure that the source provides a signal of appropriate characteristics. Open loop tests are useful for measuring DUTs that have no direct signal input, such as a CD or MP3 player. Electro-acoustic Devices Electro-acoustic devices such as loudspeakers and microphones present special problems for analysis, as they must receive or transmit signals through air. In these cases, the DUT in the model described above must be replaced with the complete electro-mechanical system, e.g., a power amplifier to drive a loudspeaker, the loudspeaker itself, a measurement microphone and a microphone pre-amplifier. The actual device under test can be measured only when the other devices in this system are fully characterized, so that their contributions may be subtracted from the response. Many modern audio analyzers contain measurement sequences that automate this procedure, and the focus of recent developments has been on quasi-anechoic measurements. 
These techniques allow loudspeakers to be characterised in a non-ideal (noisy) environment, without the need for an anechoic chamber, which makes them ideally suited for use in high volume production line manufacturing. Most quasi-anechoic measurements are based around an impulse response created from a sine wave whose frequency is swept on a logarithmic scale, with a window function applied to remove any acoustic reflections. The log swept sine method increases signal-to-noise ratio and also allows measurement of individual distortion harmonics up to the Nyquist frequency, something which was previously impossible with older analysis techniques such as MLS (Maximum Length Sequence). Audio Generator An audio generator suitable for use in test and measurement must meet several criteria that apply to both analog and digital stimulus: the ability to generate different waveform types, including sine, square, multitone (a group of simultaneous sine waves), swept sine (moving continuously from one specified frequency to another), standard intermodulation waveforms (SMPTE, DIN, DFD, and DIM), and arbitrary waveforms; extremely low residual distortion and noise; sufficient range and extremely high accuracy of both amplitude and frequency; adjustable and accurate source impedance; balanced/unbalanced output options (analog); and AC and DC coupling. Additionally, the generator will allow for the definition of a precise frequency range and amplitude of the stimulus presented to the DUT. This is critical when aligning test conditions to the characteristics of the DUT. Signal Analyzer Prior to the introduction of integrated audio analyzers, audio generators and audio analyzers were separate pieces of equipment. In this article, signal analyzer refers to the element of a modern audio analyzer that implements the actual measurements. Whether realized in analog circuits, digital signal processing (DSP) or FFT, the analyzer engine must provide high precision implementations of an AC/DC voltmeter (peak and RMS); high pass, low pass and weighting filters; band pass and notch filters; and a frequency counter. As most modern instruments are digitally based, signal analysis is frequently performed using FFT-based calculations, allowing many results to be calculated in a single test pass. Results of these measurements are processed by the analyzer into readable data using a variety of standard units and formats, such as volts, dB, dBu, SPL, ohms, relative percentage, etc., depending upon the specific measurement being reported. Derived results are achieved by combining several primary results into a calculated result. Measurements and Results Audio analyzers are capable of measuring many types of parameters. The fundamental measurements are: Level and gain: Level describes the magnitude of a signal, and may be expressed in absolute or relative terms. Common absolute units are volts, watts, dBV and dBu, while relative measurements are expressed most commonly in dB. Level may also be conditioned as a peak measurement or an RMS measurement. Gain is the ratio of signal level at a DUT's output divided by the signal level at the input, usually expressed in dB. Frequency response: measures the output level of a DUT as a function of frequency. Level is expressed in the same units as above, typically dBV and dBu. Total Harmonic Distortion plus Noise (THD+N): Harmonic distortion products are multiples of the stimulus frequencies, while noise is energy that is mathematically unrelated to the input signal. 
Taken as a single result, THD+N can be considered all signal content in the DUT response that is not contained in the stimulus. Signal-to-Noise Ratio (SNR): the ratio of desired signal to unwanted noise coming from a DUT, expressed in dB. Crosstalk: the unwanted presence of a signal from one audio channel as it appears in other audio channels of a DUT. Since this is a ratio, it is expressed in dB. Phase: the relationship in time between two signals of identical frequency, expressed as a fraction of the period of the signal. This is usually expressed in degrees, with one complete cycle of a sinusoidal signal being 360 degrees. Intermodulation Distortion (IMD): distortion that is the result of non-linear mixing of two or more signals, typically two sine waves at different frequencies or the sum of a sine wave and a square wave. In addition to distortion products at harmonic multiples of the frequencies, products are also found at multiples of the sums and differences of the original frequencies. Time domain display: equivalent to an oscilloscope display of the signal, showing instantaneous amplitude as a function of time. See also: Audio system measurements, Distortion
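As a rough illustration of how an FFT-based analyzer might derive THD+N (a simplified sketch, not any vendor's algorithm; the simulated DUT output and the 20 Hz exclusion band around the fundamental are assumptions):

```python
import numpy as np

fs, f0, N = 48000, 1000.0, 48000          # sample rate, stimulus, 1 s capture
t = np.arange(N) / fs

# Simulated DUT response: 1 kHz fundamental, weak 2nd harmonic, and noise
x = (np.sin(2 * np.pi * f0 * t)
     + 1e-4 * np.sin(2 * np.pi * 2 * f0 * t)
     + 1e-5 * np.random.default_rng(0).standard_normal(N))

X = np.fft.rfft(x * np.hanning(N))        # windowed spectrum
freqs = np.fft.rfftfreq(N, 1 / fs)

fund = np.abs(freqs - f0) < 20            # bins belonging to the fundamental
total = np.sum(np.abs(X) ** 2)            # total signal power
residual = np.sum(np.abs(X[~fund]) ** 2)  # everything except the fundamental

print(f"THD+N ratio: {10 * np.log10(residual / total):.1f} dB")
```

Notch-style analog analyzers do the same thing with a physical filter: remove the fundamental, then measure whatever power remains.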
Audio analyzer
Engineering
2,418
454,896
https://en.wikipedia.org/wiki/Grignard%20reaction
The Grignard reaction is an organometallic chemical reaction in which, according to the classical definition, alkyl, allyl, vinyl, or aryl magnesium halides (Grignard reagents) are added to the carbonyl group of either an aldehyde or ketone under anhydrous conditions. This reaction is important for the formation of carbon–carbon bonds. History and definitions Grignard reactions and reagents were discovered by and are named after the French chemist François Auguste Victor Grignard (University of Nancy, France), who described them in 1900. He was awarded the 1912 Nobel Prize in Chemistry for this work. The reaction of an organic halide with magnesium is not a Grignard reaction, but provides a Grignard reagent. Classically, the Grignard reaction refers to the reaction between a ketone or aldehyde group and a Grignard reagent to form a secondary or tertiary alcohol, respectively. However, some chemists understand the definition to cover all reactions of Grignard reagents with any electrophiles, so there is some dispute about the modern definition of the Grignard reaction. In the Merck Index, published online by the Royal Society of Chemistry, the classical definition is acknowledged, followed by "A more modern interpretation extends the scope of the reaction to include the addition of Grignard reagents to a wide variety of electrophilic substrates." Some reactions involving Grignard reagents are thus not classically understood as Grignard reactions. Reaction mechanism Because carbon is more electronegative than magnesium, the carbon attached to magnesium acts as a nucleophile and attacks the electrophilic carbon atom in the polar bond of a carbonyl group. The addition of the Grignard reagent to the carbonyl group typically proceeds through a six-membered ring transition state. Based on the detection of radical coupling side products, an alternative single electron transfer (SET) mechanism that involves the initial formation of a ketyl radical intermediate has also been proposed. A recent computational study suggests that the operative mechanism (polar vs. radical) is substrate-dependent, with the reduction potential of the carbonyl compound serving as a key parameter. Conditions The Grignard reaction is conducted under anhydrous conditions. Otherwise, the reaction will fail because the Grignard reagent will act as a base rather than a nucleophile and pick up a labile proton rather than attacking the electrophilic site. This results in no formation of the desired product, as the R-group of the Grignard reagent becomes protonated while the MgX portion stabilizes the deprotonated species. To prevent this, Grignard reactions are performed in dried apparatus under an inert atmosphere, excluding water from the reaction flask so that the desired product can form. Additionally, if there are acidic protons in the starting material, one can protect the acidic site of the reactant by turning it into an ether or a silyl ether, eliminating the labile proton from the solution prior to the Grignard reaction. Variants Other variations of the Grignard reagent have been developed to improve the chemoselectivity of the Grignard reaction, including but not limited to Turbo-Grignards, organocerium reagents, and organocuprate (Gilman) reagents. 
Turbo-Grignards Turbo-Grignards are Grignard reagents modified with lithium chloride. Compared to conventional Grignard reagents, Turbo-Grignards are more chemoselective; esters, amides, and nitriles do not react with the Turbo-Grignard reagent. Heterometal-modified Grignard reagents The behavior of Grignard reagents can be usefully modified in the presence of other metals. Copper(I) salts give organocuprates that preferentially effect 1,4-addition. Cerium trichloride allows selective 1,2-additions to the same substrates. Nickel and palladium halides catalyze cross-coupling reactions. See also: Grignard reagent, Wittig reaction, Horner–Wadsworth–Emmons reaction, Barbier reaction, Bodroux–Chichibabin aldehyde synthesis, Fujimoto–Belleau reaction, Organolithium reagents, Sakurai reaction, Indium-mediated allylation, Alkynylation
Grignard reaction
Chemistry
991
2,295,496
https://en.wikipedia.org/wiki/Tungsten%28III%29%20oxide
Tungsten(III) oxide (W2O3) is a compound of tungsten and oxygen. It has been reported (2006) as being grown as a thin film by atomic layer deposition at temperatures between 140 and 240 °C using W2(N(CH3)2)6 as a precursor. It is not referred to in major textbooks. Some older literature refers to the compound W2O3, but as the atomic weight of tungsten was believed at the time to be 92 (i.e., approximately half the modern accepted value of 183.84), the compound actually being referred to was WO3. Reports about the compound date back to at least the 1970s, but only as thin films or surfaces – no bulk synthesis of the material is known. Usage Tungsten(III) oxide is used in various types of infrared absorbing coatings and foils.
Tungsten(III) oxide
Chemistry
194
52,466,165
https://en.wikipedia.org/wiki/Testosterone%20isovalerate
Testosterone isovalerate, also known as testosterone isopentanoate, testosterone 17β-isovalerate, and androst-4-en-17β-ol-3-one 17β-isovalerate, is a synthetic, injected anabolic-androgenic steroid (AAS) and an androgen ester – specifically, the C17β isovalerate (isopentanoate) ester of testosterone – which was never marketed. It is a prodrug of testosterone and, when administered via intramuscular injection, is associated with a long-lasting depot effect and extended duration of action. See also: Testosterone isobutyrate, Testosterone isocaproate, Testosterone valerate
Testosterone isovalerate
Chemistry
172
26,856,232
https://en.wikipedia.org/wiki/Thor%20washing%20machine
The Thor washing machine was the first electric clothes washer sold commercially in the United States. Produced by the Chicago-based Hurley Electric Laundry Equipment Company, the 1907 Thor is believed to be the first electrically powered washer ever manufactured, crediting Hurley as the inventor of the first automatic washing machine. Designed by Hurley engineer Alva J. Fisher, a patent for the new electric Thor was issued on August 9, 1910, three years after its initial invention. The idea of an automatic washing machine had been around for many years, but earlier efforts were crude mechanical designs that typically involved a manually operated crank or similar mechanism. In many ways, the patent of the new Thor washer sounds modern, even today. The patent states that a "perforated cylinder is rotatably mounted within the tub containing the wash water". A series of blades lifted the clothes as the cylinder rotated. After 8 rotations in one direction, the machine would reverse rotation to "prevent the cloths from wadding up into a compact mass". Drive belts attached to a Westinghouse motor connected to three wheels of different sizes, which moved the drum during operation. The design also included a clutch, which allowed the machine to switch direction, and an emergency stop rod. The new Thor washer was mass marketed throughout the United States beginning in 1908. Controversy There is a dispute over who first invented the automatic washer. A company called Nineteen Hundred Washing Machine Company of Binghamton, NY, claims to have produced the first electric washer in 1906, a year before Thor's release. Additionally, it has been stated in various articles on the Internet that a Ford Motor Company employee invented the electric washer in the late 19th or early 20th century. Since Ford was incorporated in 1903, the Ford story seems unlikely to be valid. Regardless, Thor remains one of the first (if not the first) companies to manufacture and sell an automatic washing machine on a large scale. Other Thor innovations Tilt-a-whirl agitator Thor invented the tilt-a-whirl system, in which the agitator, typically in the shape of a disk, tilted back and forth within the washer drum while simultaneously rotating. The early 1930s tilt-a-whirl design was the first agitator to move water in both a horizontal and vertical motion. The 1936 version of the Thor tilt-a-whirl incorporated sculpted hands embossed on the agitator. At the time, some Thor dealers painted the fingernails of the hands on demonstration machines. Automagic washer/dishwasher In the 1940s, Thor introduced the Automagic hybrid washer/dishwasher. The top-loading machine included both a removable clothes-washing drum and a dish-washing drum. The Automagic was widely marketed but disappeared from the marketplace soon after its introduction, as many consumers soured on the idea of washing dirty clothing and dishes in the same machine. Thor today The Thor trademark was acquired in 2008 by Los Angeles–based Appliances International, a supplier of washer dryer combos and stacking washers and dryers. Soon after the brand acquisition, the company introduced a new line of laundry appliances under the Thor brand.
Thor washing machine
Physics,Technology
680
1,730,437
https://en.wikipedia.org/wiki/Mathcad
Mathcad is computer software for the verification, validation, documentation and re-use of mathematical calculations in engineering and science, notably mechanical, chemical, electrical, and civil engineering. Released in 1986 on DOS, it introduced live editing (WYSIWYG) of typeset mathematical notation in an interactive notebook, combined with automatic computations. It was originally developed by Mathsoft, and since 2006 has been a product of Parametric Technology Corporation. History Mathcad was conceived and developed by Allen Razdow and Josh Bernoff at Mathsoft, founded by David Blohm and Razdow. It was released in 1986. It was the first system to support WYSIWYG editing and recalculation of mathematical calculations mixed with text. It was also the first to check the consistency of engineering units through the full calculation. Other equation-solving systems existed at the time, but did not provide a notebook interface: Software Arts' TK Solver was released in 1982, and Borland's Eureka: The Solver was released in 1987. Mathcad was acquired by Parametric Technology in April 2006. Mathcad was named "Best of '87" and "Best of '88" by PC Magazine's editors. Overview Mathcad's central interface is an interactive notebook in which equations and expressions are created and manipulated in the same graphical format in which they are presented (WYSIWYG). This approach was adopted by systems such as Mathematica, Maple, Macsyma, MATLAB, and Jupyter. Mathcad today includes some of the capabilities of a computer algebra system, but remains oriented towards ease of use and documentation of numerical engineering applications. Mathcad is part of a broader product development system developed by PTC, addressing analytical steps in systems engineering. It integrates with PTC's Creo Elements/Pro, Windchill, and Creo Elements/View. Its live feature-level integration with Creo Elements/Pro enables Mathcad analytical models to be directly used in driving CAD geometry, and its structural awareness within Windchill allows live calculations to be re-used and re-applied toward multiple design models. Summary of capabilities The Mathcad interface allows users to combine a variety of different elements (mathematics, descriptive text, and supporting imagery) into a worksheet, in which dependent calculations are dynamically recalculated as inputs change. This allows for simple manipulation of input variables, assumptions, and expressions. Mathcad's functionality includes: numerous numeric functions for statistics, data analysis, image processing, and signal processing; ubiquitous dimensionality checking and simplification; solution of systems of equations, such as ODEs and PDEs, using several methods; root finding for polynomials and other functions; symbolic manipulation of mathematical expressions; parametric 2D and 3D plotting and discrete data plotting; standard, readable mathematical expressions within embedded program constructs; vector and matrix operations, including eigenvalues and eigenvectors; curve fitting and regression analysis; statistical and design-of-experiments functions and plot types, and evaluation of probability distributions; import from and export to other applications and file types, such as Microsoft Excel and MathML; cross references to other Mathcad worksheets; and integration with engineering applications such as CAD, FEM, BIM, and simulation tools (e.g., AutoCAD, Ansys, Revit) to aid in product design. 
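The unit checking described above can be approximated outside Mathcad; the following is a rough Python analogue using the pint library (an illustration only, not Mathcad's implementation; the quantities are invented):

```python
import pint

ureg = pint.UnitRegistry()

# A worksheet-style calculation with units carried through automatically
force = 12.5 * ureg.kilonewton
area = 0.25 * ureg.meter ** 2
stress = (force / area).to(ureg.megapascal)
print(stress)  # 0.05 megapascal

# Dimensionally inconsistent expressions fail loudly instead of
# silently producing a meaningless number:
try:
    nonsense = force + area
except pint.DimensionalityError as err:
    print(err)
```

Mathcad performs this kind of checking on every expression in a worksheet, which is one reason it is used for engineering calculations that must be reviewed.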
Although Mathcad is mostly oriented to non-programmers, it is also used in more complex projects to visualize results of mathematical modeling by using distributed computing and coupling with programs written in more traditional languages such as C++. Current releases As of 2024, the latest release from PTC is Mathcad Prime 10.0.0.0. This release is a freemium variant: if the software is not activated after the 30-day Mathcad Prime trial, it is possible to continue using PTC Mathcad Express for an unlimited time as "PTC Mathcad Express Free-for-Life Engineering Calculations Software". This freemium pilot is a new marketing approach for PTC. Review and markup of engineering notes can now be done directly by team members without them all requiring a full Mathcad Prime license. The last release of the traditional (pre-"Prime") product line, Mathcad 15.0, came out in June 2010 and shares the same worksheet file structure as Mathcad 14.0. The last service release, Mathcad 15.0 M050, which added support for Windows 10, was released in 2017. Mathcad 15.0 is no longer actively developed but is in "sustained support". Computer operating system platforms Mathcad only runs on Microsoft Windows. Mathcad Prime 6.0 requires a 64-bit version of Windows 7, Windows 8.1 or Windows 10. Until 1998, Mathcad also supported Mac OS. Support Starting in 2011 (Mathcad 15.0), the first year of maintenance and support has been included in the purchase or upgrade price. See also: Comparison of computer algebra systems, Comparison of numerical-analysis software, TK Solver, PTC Creo, PTC Windchill, SMath Studio (freeware similar to Mathcad)
Mathcad
Mathematics
1,180
68,892,195
https://en.wikipedia.org/wiki/Photo%20response%20non-uniformity
Photo response non-uniformity, pixel response non-uniformity, or PRNU, is a form of fixed-pattern noise related to digital image sensors, as used in cameras and optical instruments. Both CCD and CMOS sensors are two-dimensional arrays of photosensitive cells, each broadly corresponding to an image pixel. Due to the non-uniformity of image sensors, each cell responds with a different voltage level when illuminated with a uniform light source, and this leads to luminance inaccuracy at the pixel level. High-end and metrology camera vendors tend to characterise this non-uniformity during instrument manufacture. The sensor is illuminated with a standardized light source and a two-dimensional table of correction factors is generated. This table is either carried in camera non-volatile memory and dynamically applied to the image on each capture, or ships with the camera for application by an external image-processing and correction pipeline. See also: Color balance, Color correction, Flat-field correction, Image sensor
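A minimal sketch of how such a correction table might be generated and applied (purely illustrative; the function names, frame counts, and noise levels are invented, and dark-frame/offset subtraction is omitted):

```python
import numpy as np

def build_correction_table(flat_frames: np.ndarray) -> np.ndarray:
    """Per-pixel gain factors from a stack of captures of a uniform
    light source, shaped (num_frames, height, width)."""
    flat = flat_frames.mean(axis=0)   # average out temporal noise
    return flat.mean() / flat         # gains that flatten the response

def apply_correction(raw: np.ndarray, table: np.ndarray) -> np.ndarray:
    """Apply the stored correction factors to a captured frame."""
    return raw * table

# Toy sensor whose pixels vary by about 2% in responsivity (the PRNU)
rng = np.random.default_rng(1)
prnu = 1 + 0.02 * rng.standard_normal((8, 8))
flats = prnu * 1000 + rng.normal(0, 5, size=(16, 8, 8))  # 16 flat fields

table = build_correction_table(flats)
raw = 500.0 * prnu                 # uniform scene as the sensor reports it
corrected = apply_correction(raw, table)
print(raw.std(), corrected.std())  # pixel spread shrinks after correction
```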
Photo response non-uniformity
Physics,Chemistry
210
381,750
https://en.wikipedia.org/wiki/Hilbert%27s%20sixteenth%20problem
Hilbert's 16th problem was posed by David Hilbert at the Paris conference of the International Congress of Mathematicians in 1900, as part of his list of 23 problems in mathematics. The original problem was posed as the Problem of the topology of algebraic curves and surfaces (Problem der Topologie algebraischer Kurven und Flächen). Actually the problem consists of two similar problems in different branches of mathematics: an investigation of the relative positions of the branches of real algebraic curves of degree n (and similarly for algebraic surfaces), and the determination of the upper bound for the number of limit cycles in two-dimensional polynomial vector fields of degree n together with an investigation of their relative positions. The first problem is still unsolved for n = 8; it is what is usually meant when talking about Hilbert's sixteenth problem in real algebraic geometry. The second problem also remains unsolved: no upper bound for the number of limit cycles is known for any n > 1, and this is what is usually meant by Hilbert's sixteenth problem in the field of dynamical systems. The Spanish Royal Society for Mathematics published an explanation of Hilbert's sixteenth problem. The first part of Hilbert's 16th problem In 1876, Harnack investigated algebraic curves in the real projective plane and found that curves of degree n could have no more than (n − 1)(n − 2)/2 + 1 separate connected components. Furthermore, he showed how to construct curves that attained that upper bound, and thus that it was the best possible bound. Curves with that number of components are called M-curves. Hilbert had investigated the M-curves of degree 6, and found that the 11 components were always grouped in a certain way. His challenge to the mathematical community was to completely investigate the possible configurations of the components of the M-curves. Furthermore, he requested a generalization of Harnack's curve theorem to algebraic surfaces and a similar investigation of surfaces with the maximum number of components. The second part of Hilbert's 16th problem Here we consider polynomial vector fields in the real plane, that is, systems of differential equations of the form dx/dt = P(x, y), dy/dt = Q(x, y), where both P and Q are real polynomials of degree n. These polynomial vector fields were studied by Poincaré, who had the idea of abandoning the search for exact solutions to the system and instead attempting to study the qualitative features of the collection of all possible solutions. Among many important discoveries, he found that the limit sets of such solutions need not be stationary points, but could be periodic solutions. Such solutions are called limit cycles. The second part of Hilbert's 16th problem is to determine an upper bound for the number of limit cycles in polynomial vector fields of degree n and, similarly to the first part, to investigate their relative positions. Results It was shown in 1991/1992 by Yulii Ilyashenko and Jean Écalle that every polynomial vector field in the plane has only finitely many limit cycles (a 1923 article by Henri Dulac claiming a proof of this statement had been shown to contain a gap in 1981). This statement is not obvious, since it is easy to construct smooth (C∞) vector fields in the plane with infinitely many concentric limit cycles. The question whether there exists a finite upper bound H(n) for the number of limit cycles of planar polynomial vector fields of degree n remains unsolved for any n > 1. 
(H(1) = 0, since linear vector fields do not have limit cycles.) Evgenii Landis and Ivan Petrovsky claimed a solution in the 1950s, but it was shown to be wrong in the early 1960s. Quadratic plane vector fields with four limit cycles are known, and numerical visualizations of four limit cycles in a quadratic plane vector field have been published. In general, the difficulties in estimating the number of limit cycles by numerical integration are due to nested limit cycles with very narrow regions of attraction, which are hidden attractors, and to semi-stable limit cycles. See also: Hilbert–Arnold problem, Hilbert's problems
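To make the notion of a limit cycle concrete, here is a small numerical sketch (an illustration added here, not part of the article): the van der Pol system is a polynomial vector field of degree 3, and trajectories started both inside and outside its unique limit cycle converge onto the same closed orbit.

```python
import numpy as np
from scipy.integrate import solve_ivp

def van_der_pol(t, state, mu=1.0):
    # dx/dt = y,  dy/dt = mu*(1 - x^2)*y - x   (P, Q of degree <= 3)
    x, y = state
    return [y, mu * (1 - x ** 2) * y - x]

for x0 in ([0.1, 0.0], [4.0, 0.0]):          # inside and outside the cycle
    sol = solve_ivp(van_der_pol, (0, 50), x0,
                    t_eval=np.linspace(0, 50, 5000))
    amplitude = np.abs(sol.y[0, 2500:]).max()  # after transients die out
    print(x0, "-> limit-cycle amplitude ~", round(amplitude, 2))  # ~2.0 both
```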
Hilbert's sixteenth problem
Physics,Mathematics
879
15,361,791
https://en.wikipedia.org/wiki/Oscilloscope
An oscilloscope (formerly known as an oscillograph; informally scope or O-scope) is a type of electronic test instrument that graphically displays varying voltages of one or more signals as a function of time. Its main purpose is capturing information on electrical signals for debugging, analysis, or characterization. The displayed waveform can then be analyzed for properties such as amplitude, frequency, rise time, time interval, distortion, and others. Originally, calculation of these values required manually measuring the waveform against the scales built into the screen of the instrument. Modern digital instruments may calculate and display these properties directly. Oscilloscopes are used in the sciences, engineering, the biomedical and automotive fields, and the telecommunications industry. General-purpose instruments are used for maintenance of electronic equipment and laboratory work. Special-purpose oscilloscopes may be used to analyze an automotive ignition system or to display the waveform of the heartbeat as an electrocardiogram, for instance. History Early high-speed visualisations of electrical voltages were made with an electro-mechanical oscillograph, invented by André Blondel in 1893. These gave valuable insights into high speed voltage changes, but had a frequency response of only a few kilohertz, and were superseded by the oscilloscope, which used a cathode-ray tube (CRT) as its display element. The Braun tube, forerunner of the CRT, was known in 1897, and in 1899 Jonathan Zenneck equipped it with beam-forming plates and a magnetic field for deflecting the trace, and this formed the basis of the CRT. Early CRTs had been applied experimentally to laboratory measurements as early as the 1920s, but suffered from poor stability of the vacuum and the cathode emitters. V. K. Zworykin described a permanently sealed, high-vacuum CRT with a thermionic emitter in 1931. This stable and reproducible component allowed General Radio to manufacture an oscilloscope that was usable outside a laboratory setting. After World War II, surplus electronic parts became the basis for the revival of Heathkit Corporation, and a $50 oscilloscope kit made from such parts was its first major market success. Features and uses An analog oscilloscope is typically divided into four sections: the display, vertical controls, horizontal controls and trigger controls. The display is usually a CRT with horizontal and vertical reference lines called the graticule. CRT displays also have controls for focus, intensity, and beam finder. The vertical section controls the amplitude of the displayed signal. This section has a volts-per-division (Volts/Div) selector knob, an AC/DC/Ground selector switch, and the vertical (primary) input for the instrument. Additionally, this section is typically equipped with the vertical beam position knob. The horizontal section controls the time base or sweep of the instrument. The primary control is the Seconds-per-Division (Sec/Div) selector switch. Also included is a horizontal input for plotting dual X-Y axis signals. The horizontal beam position knob is generally located in this section. The trigger section controls the start event of the sweep. The trigger can be set to automatically restart after each sweep or can be configured to respond to an internal or external event. The principal controls of this section are the source and coupling selector switches, and an external trigger input (EXT Input) and level adjustment. 
In addition to the basic instrument, most oscilloscopes are supplied with a probe. The probe connects to any input on the instrument and typically contains a resistor of nine times the oscilloscope's input impedance. This results in a 0.1 (10×) attenuation factor and helps to isolate the capacitive load presented by the probe cable from the signal being measured. Some probes have a switch allowing the operator to bypass the resistor when appropriate. Size and portability Most modern oscilloscopes are lightweight, portable instruments compact enough for a single person to carry. In addition to portable units, the market offers a number of miniature battery-powered instruments for field service applications. Laboratory grade oscilloscopes, especially older units that use vacuum tubes, are generally bench-top devices or are mounted on dedicated carts. Special-purpose oscilloscopes may be rack-mounted or permanently mounted into a custom instrument housing. Inputs The signal to be measured is fed to one of the input connectors, which is usually a coaxial connector such as a BNC or UHF type. Binding posts or banana plugs may be used for lower frequencies. If the signal source has its own coaxial connector, then a simple coaxial cable is used; otherwise, a specialized cable called a "scope probe", supplied with the oscilloscope, is used. In general, for routine use, an open wire test lead for connecting to the point being observed is not satisfactory, and a probe is generally necessary. General-purpose oscilloscopes usually present an input impedance of 1 megohm in parallel with a small but known capacitance such as 20 picofarads. This allows the use of standard oscilloscope probes. Scopes for use with very high frequencies may have 50 Ω inputs. These must be either connected directly to a 50 Ω signal source or used with Z0 or active probes. Less-frequently-used inputs include one (or two) for triggering the sweep, horizontal deflection for X-Y mode displays, and trace brightening/darkening, sometimes called z-axis inputs. Probes Open wire test leads (flying leads) are likely to pick up interference, so they are not suitable for low level signals. Furthermore, the leads have a high inductance, so they are not suitable for high frequencies. Using a shielded cable (i.e., coaxial cable) is better for low level signals. Coaxial cable also has lower inductance, but it has higher capacitance: a typical 50 ohm cable has about 90 pF per meter. Consequently, a one-meter direct (1×) coaxial probe loads a circuit with a capacitance of about 110 pF and a resistance of 1 megohm. To minimize loading, attenuator probes (e.g., 10× probes) are used. A typical probe uses a 9 megohm series resistor shunted by a low-value capacitor to make an RC compensated divider with the cable capacitance and scope input. The RC time constants are adjusted to match. For example, the 9 megohm series resistor is shunted by a 12.2 pF capacitor for a time constant of 110 microseconds. The cable capacitance of 90 pF in parallel with the scope input of 20 pF and 1 megohm (total capacitance 110 pF) also gives a time constant of 110 microseconds. In practice, there is an adjustment so the operator can precisely match the low frequency time constant (called compensating the probe). Matching the time constants makes the attenuation independent of frequency. 
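The arithmetic behind that compensation example can be checked directly (a quick illustrative calculation using the component values quoted above):

```python
# 10:1 probe compensation check, using the values from the text
R_probe = 9e6         # probe series resistor, ohms
C_probe = 12.2e-12    # compensation capacitor across it, farads

R_scope = 1e6         # scope input resistance, ohms
C_cable = 90e-12      # one metre of coaxial cable, farads
C_scope = 20e-12      # scope input capacitance, farads

tau_probe = R_probe * C_probe               # 1.098e-4 s (~110 us)
tau_scope = R_scope * (C_cable + C_scope)   # 1.100e-4 s (~110 us)
print(tau_probe, tau_scope)                 # matched within adjustment range

# DC attenuation of the resistive divider: 1 M / (9 M + 1 M) = 0.1
print(R_scope / (R_probe + R_scope))
```

When the two time constants match, the capacitive divider at high frequency gives the same 10:1 ratio as the resistive divider at DC.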
At low frequencies (where the resistance of R is much less than the reactance of C), the circuit looks like a resistive divider; at high frequencies (resistance much greater than reactance), the circuit looks like a capacitive divider. The result is a frequency-compensated probe for modest frequencies. It presents a load of about 10 megohms shunted by 12 pF. Such a probe is an improvement, but does not work well when the time scale shrinks to several cable transit times or less (transit time is typically 5 ns). In that time frame, the cable looks like its characteristic impedance, and reflections from the transmission line mismatch at the scope input and the probe cause ringing. The modern scope probe uses lossy low capacitance transmission lines and sophisticated frequency shaping networks to make the 10× probe perform well at several hundred megahertz. Consequently, there are other adjustments for completing the compensation. Probes with 10:1 attenuation are by far the most common; for large signals (and slightly less capacitive loading), 100:1 probes may be used. There are also probes that contain switches to select 10:1 or direct (1:1) ratios, but the latter setting has significant capacitance (tens of pF) at the probe tip, because the whole cable's capacitance is then directly connected. Most oscilloscopes provide for probe attenuation factors, displaying the effective sensitivity at the probe tip. Historically, some auto-sensing circuitry used indicator lamps behind translucent windows in the panel to illuminate different parts of the sensitivity scale. To do so, the probe connectors (modified BNCs) had an extra contact to define the probe's attenuation. (A certain value of resistor, connected to ground, "encodes" the attenuation.) Because probes wear out, and because the auto-sensing circuitry is not compatible between different oscilloscope makes, auto-sensing probe scaling is not foolproof. Likewise, manually setting the probe attenuation is prone to user error. Setting the probe scaling incorrectly is a common error, and throws the reading off by a factor of 10. Special high voltage probes form compensated attenuators with the oscilloscope input. These have a large probe body, and some require partly filling a canister surrounding the series resistor with volatile liquid fluorocarbon to displace air. The oscilloscope end has a box with several waveform-trimming adjustments. For safety, a barrier disc keeps the user's fingers away from the point being examined. Maximum voltage is in the low tens of kV. (Observing a high voltage ramp can create a staircase waveform with steps at different points on every repetition until the probe tip is in contact. Until then, a tiny arc charges the probe tip, and its capacitance holds the voltage (open circuit). As the voltage continues to climb, another tiny arc charges the tip further.) There are also current probes, with cores that surround the conductor carrying the current to be examined. One type has a hole for the conductor, and requires that the wire be passed through the hole for semi-permanent or permanent mounting. However, other types, used for temporary testing, have a two-part core that can be clamped around a wire. Inside the probe, a coil wound around the core provides a current into an appropriate load, and the voltage across that load is proportional to current. This type of probe only senses AC. A more sophisticated probe includes a magnetic flux sensor (Hall effect sensor) in the magnetic circuit. 
The probe connects to an amplifier, which feeds (low frequency) current into the coil to cancel the sensed field; the magnitude of the current provides the low-frequency part of the current waveform, right down to DC. The coil still picks up high frequencies. There is a combining network akin to a loudspeaker crossover. Front panel controls Focus control This control adjusts CRT focus to obtain the sharpest, most-detailed trace. In practice, focus must be adjusted slightly when observing very different signals, so it must be an external control. The control varies the voltage applied to a focusing anode within the CRT. Flat-panel displays do not need this control. Intensity control This adjusts trace brightness. Slow traces on CRT oscilloscopes need less, and fast ones, especially if not often repeated, require more brightness. On flat panels, however, trace brightness is essentially independent of sweep speed, because the internal signal processing effectively synthesizes the display from the digitized data. Astigmatism This control may instead be called "shape" or "spot shape". It adjusts the voltage on the last CRT anode (immediately next to the Y deflection plates). For a circular spot, the final anode must be at the same potential as both of the Y-plates (for a centred spot the Y-plate voltages must be the same). If the anode is made more positive, the spot becomes elliptical in the X-plane as the more negative Y-plates will repel the beam. If the anode is made more negative, the spot becomes elliptical in the Y-plane as the more positive Y-plates will attract the beam. This control may be absent from simpler oscilloscope designs or may even be an internal control. It is not necessary with flat panel displays. Beam finder Modern oscilloscopes have direct-coupled deflection amplifiers, which means the trace could be deflected off-screen. They also might have their beam blanked without the operator knowing it. To help in restoring a visible display, the beam finder circuit overrides any blanking and limits the beam deflection to the visible portion of the screen. Beam-finder circuits often distort the trace while activated. Graticule The graticule is a grid of lines that serve as reference marks for measuring the displayed trace. These markings, whether located directly on the screen or on a removable plastic filter, usually consist of a 1 cm grid with closer tick marks (often at 2 mm) on the centre vertical and horizontal axis. One expects to see ten major divisions across the screen; the number of vertical major divisions varies. Comparing the grid markings with the waveform permits one to measure both voltage (vertical axis) and time (horizontal axis). Frequency can also be determined by measuring the waveform period and calculating its reciprocal. On old and lower-cost CRT oscilloscopes the graticule is a sheet of plastic, often with light-diffusing markings and concealed lamps at the edge of the graticule. The lamps had a brightness control. Higher-cost instruments have the graticule marked on the inside face of the CRT, to eliminate parallax errors; better ones also had adjustable edge illumination with diffusing markings. (Diffusing markings appear bright.) Digital oscilloscopes, however, generate the graticule markings on the display in the same way as the trace. External graticules also protect the glass face of the CRT from accidental impact. 
Some CRT oscilloscopes with internal graticules have an unmarked tinted sheet-plastic light filter to enhance trace contrast; this also serves to protect the faceplate of the CRT. The accuracy and resolution of measurements using a graticule are relatively limited; better instruments sometimes have movable bright markers on the trace, which permit internal circuits to make more refined measurements. Both calibrated vertical sensitivity and calibrated horizontal time are set in steps. This leads, however, to some awkward interpretations of minor divisions. Digital oscilloscopes generate the graticule digitally. The scale, spacing, etc., of the graticule can therefore be varied, and the accuracy of readings may be improved. Timebase controls These select the horizontal speed of the CRT's spot as it creates the trace; this process is commonly referred to as the sweep. In all but the least-costly modern oscilloscopes, the sweep speed is selectable and calibrated in units of time per major graticule division. Quite a wide range of sweep speeds is generally provided, from seconds per division down to picoseconds per division in the fastest instruments. Usually, a continuously-variable control (often a knob in front of the calibrated selector knob) offers uncalibrated speeds, typically slower than calibrated. This control provides a range somewhat greater than the calibrated steps, making any speed between the steps available. Holdoff control Some higher-end analog oscilloscopes have a holdoff control. This sets a time after a trigger during which the sweep circuit cannot be triggered again. It helps provide a stable display of repetitive events in which some triggers would create confusing displays. It is usually set to minimum, because a longer time decreases the number of sweeps per second, resulting in a dimmer trace. See the holdoff discussion below for a more detailed description. Vertical sensitivity, coupling, and polarity controls To accommodate a wide range of input amplitudes, a switch selects calibrated sensitivity of the vertical deflection. Another control, often in front of the calibrated selector knob, offers a continuously variable sensitivity over a limited range from calibrated to less-sensitive settings. Often the observed signal is offset by a steady component, and only the changes are of interest. An input coupling switch in the "AC" position connects a capacitor in series with the input that blocks low-frequency signals and DC. However, when the signal has a fixed offset of interest, or changes slowly, the user will usually prefer "DC" coupling, which bypasses any such capacitor. Most oscilloscopes offer the DC input option. For convenience, to see where zero volts input currently shows on the screen, many oscilloscopes have a third switch position (usually labeled "GND" for ground) that disconnects the input and grounds it. Often, in this case, the user centers the trace with the vertical position control. Better oscilloscopes have a polarity selector. Normally, a positive input moves the trace upward; the polarity selector offers an "inverting" option, in which a positive-going signal deflects the trace downward. Vertical position control The vertical position control moves the whole displayed trace up and down. It is used to set the no-input trace exactly on the center line of the graticule, but also permits offsetting vertically by a limited amount. With direct coupling, adjustment of this control can compensate for a limited DC component of an input. 
Horizontal sensitivity control This control is found only on more elaborate oscilloscopes; it offers adjustable sensitivity for external horizontal inputs. It is only active when the instrument is in X-Y mode, i.e. when the internal horizontal sweep is turned off. Horizontal position control The horizontal position control moves the display sideways. It usually sets the left end of the trace at the left edge of the graticule, but it can displace the whole trace when desired. This control also moves the X-Y mode traces sideways in some instruments, and can compensate for a limited DC component as for vertical position. Dual-trace controls Each input channel usually has its own set of sensitivity, coupling, and position controls, though some four-trace oscilloscopes have only minimal controls for their third and fourth channels. Dual-trace oscilloscopes have a mode switch to select either channel alone, both channels, or (in some) an X-Y display, which uses the second channel for X deflection. When both channels are displayed, the type of channel switching can be selected on some oscilloscopes; on others, the type depends upon the timebase setting. If manually selectable, channel switching can be free-running (asynchronous) or between consecutive sweeps. Some Philips dual-trace analog oscilloscopes had a fast analog multiplier, and provided a display of the product of the input channels. Multiple-trace oscilloscopes have a switch for each channel to enable or disable display of the channel's trace. Delayed-sweep controls These include controls for the delayed-sweep timebase, which is calibrated, and often also variable. The slowest speed is several steps faster than the slowest main sweep speed, though the fastest is generally the same. A calibrated multiturn delay-time control offers wide-range, high-resolution delay settings; it spans the full duration of the main sweep, and its reading corresponds to graticule divisions (but with much finer precision). Its accuracy is also superior to that of the display. A switch selects display modes: main sweep only (with a brightened region showing when the delayed sweep is advancing), delayed sweep only, or (on some) a combination mode. Good CRT oscilloscopes include a delayed-sweep intensity control, to allow for the dimmer trace of a much faster delayed sweep that nevertheless occurs only once per main sweep. Such oscilloscopes also are likely to have a trace separation control for multiplexed display of both the main and delayed sweeps together. Sweep trigger controls A switch selects the trigger source. It can be an external input, one of the vertical channels of a dual- or multiple-trace oscilloscope, or the AC line (mains) frequency. Another switch enables or disables auto trigger mode, or selects single sweep, if provided in the oscilloscope. Either a spring-return switch position or a pushbutton arms single sweeps. A trigger level control varies the voltage required to generate a trigger, and the slope switch selects positive-going or negative-going polarity at the selected trigger level. Basic types of sweep Triggered sweep To display events with unchanging or slowly (visibly) changing waveforms, occurring at times that may not be evenly spaced, modern oscilloscopes have triggered sweeps. Compared to older, simpler oscilloscopes with continuously-running sweep oscillators, triggered-sweep oscilloscopes are markedly more versatile. A triggered sweep starts at a selected point on the signal, providing a stable display. 
Triggering in this way allows the display of periodic signals such as sine waves and square waves, as well as nonperiodic signals such as single pulses, or pulses that do not recur at a fixed rate. With triggered sweeps, the scope blanks the beam and starts to reset the sweep circuit each time the beam reaches the extreme right side of the screen. For a period of time, called holdoff (extendable by a front-panel control on some better oscilloscopes), the sweep circuit resets completely and ignores triggers. Once holdoff expires, the next trigger starts a sweep. The trigger event is usually the input waveform reaching some user-specified threshold voltage (trigger level) in the specified direction (going positive or going negative; this is the trigger polarity). In some cases, variable holdoff time can be useful to make the sweep ignore interfering triggers that occur before the events to be observed. In the case of repetitive but complex waveforms, variable holdoff can provide a stable display that could not otherwise be achieved. Holdoff Trigger holdoff defines a certain period following a trigger during which the sweep cannot be triggered again. This makes it easier to establish a stable view of a waveform with multiple edges, which would otherwise cause additional triggers. Example Consider a repeating waveform that has three rising edges per cycle, with the trigger level set so that every rising edge crosses it. If the scope were simply set to trigger on every rising edge, this waveform would cause three triggers per cycle, and for a fairly high-frequency signal the display would superimpose three horizontally shifted copies of the waveform. On an actual scope, every trigger would come from the same channel, so all of the superimposed traces would be the same color. It is desirable for the scope to trigger on only one edge per cycle, so it is necessary to set the holdoff to slightly less than the period of the waveform. This prevents triggering from occurring more than once per cycle, but still lets the scope trigger on the first edge of the next cycle. Automatic sweep mode Triggered sweeps can display a blank screen if there are no triggers. To avoid this, these sweeps include a timing circuit that generates free-running triggers so a trace is always visible. This is referred to as "auto sweep" or "automatic sweep" in the controls. Once triggers arrive, the timer stops providing pseudo-triggers. The user will usually disable automatic sweep when observing low repetition rates. Recurrent sweeps If the input signal is periodic, the sweep repetition rate can be adjusted to display a few cycles of the waveform. Early (tube) oscilloscopes and the lowest-cost oscilloscopes have sweep oscillators that run continuously and are uncalibrated. Such oscilloscopes are very simple, comparatively inexpensive, and were useful in radio servicing and some TV servicing. Measuring voltage or time is possible, but only with extra equipment, and is quite inconvenient; they are primarily qualitative instruments. They have a few (widely spaced) frequency ranges, and relatively wide-range continuous frequency control within a given range. In use, the sweep frequency is set to slightly lower than some submultiple of the input frequency, to display typically at least two cycles of the input signal (so all details are visible). A very simple control feeds an adjustable amount of the vertical signal (or possibly a related external signal) to the sweep oscillator.
The signal triggers beam blanking and a sweep retrace sooner than it would occur free-running, and the display becomes stable. Single sweeps Some oscilloscopes offer single sweeps. The user manually arms the sweep circuit (typically by a pushbutton or equivalent); "armed" means it is ready to respond to a trigger. Once the sweep completes, it resets, and does not sweep again until re-armed. This mode, combined with an oscilloscope camera, captures single-shot events. Types of trigger include: external trigger, a pulse from an external source connected to a dedicated input on the scope; edge trigger, an edge detector that generates a pulse when the input signal crosses a specified threshold voltage in a specified direction (these are the most common types of triggers; the level control sets the threshold voltage, and the slope control selects the direction; the same description also applies to the inputs of some digital logic circuits, though those inputs have a fixed threshold and polarity response); video trigger, also known as TV trigger, a circuit that extracts synchronizing pulses from video formats such as PAL and NTSC and triggers the timebase on every line, a specified line, every field, or every frame (this circuit is typically found in a waveform monitor, though some better oscilloscopes include the function); and delayed trigger, which waits a specified time after an edge trigger before starting the sweep. As described under delayed sweeps, a trigger delay circuit (typically that of the main sweep) extends this delay to a known and adjustable interval. In this way, the operator can examine a particular pulse in a long train of pulses. Some recent designs of oscilloscopes include more sophisticated triggering schemes; these are described toward the end of this article. Delayed sweeps More sophisticated analog oscilloscopes contain a second timebase for a delayed sweep. A delayed sweep provides a very detailed look at some small selected portion of the main timebase. The main timebase serves as a controllable delay, after which the delayed timebase starts. The delayed sweep can start when the delay expires, or can be triggered (only) after the delay expires. Ordinarily, the delayed timebase is set for a faster sweep, sometimes much faster, such as 1000:1. At extreme ratios, jitter in the delays on consecutive main sweeps degrades the display, but delayed-sweep triggers can overcome this. The display shows the vertical signal in one of several modes: the main timebase, the delayed timebase only, or a combination of the two. When the delayed sweep is active, the main sweep trace brightens while the delayed sweep is advancing. In one combination mode, provided only on some oscilloscopes, the trace changes from the main sweep to the delayed sweep once the delayed sweep starts, though less of the delayed fast sweep is visible for longer delays. Another combination mode multiplexes (alternates) the main and delayed sweeps so that both appear at once; a trace separation control displaces them. DSOs can display waveforms this way without offering a delayed timebase as such. Dual and multiple-trace oscilloscopes Oscilloscopes with two vertical inputs, referred to as dual-trace oscilloscopes, are extremely useful and commonplace. Using a single-beam CRT, they multiplex the inputs, usually switching between them fast enough to display two traces apparently at once.
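How a single-beam instrument shares its one trace between two channels can also be modelled loosely in software. This is an illustrative sketch only (the chop granularity and sample data are assumptions; a real scope performs the switching in analog hardware, blanking the beam during each transition):

```python
def alternate(ch1, ch2):
    """Alternate mode: draw all of channel 1 on one sweep,
    then all of channel 2 on the next sweep."""
    return [(1, v) for v in ch1] + [(2, v) for v in ch2]

def chopped(ch1, ch2, chop=2):
    """Chopped mode: switch channels every `chop` samples within a
    single sweep; the beam is blanked during each switch."""
    drawn = []
    for start in range(0, len(ch1), chop):
        drawn += [(1, v) for v in ch1[start:start + chop]]
        drawn += [(2, v) for v in ch2[start:start + chop]]
    return drawn

ch1, ch2 = [0, 1, 2, 3], [9, 8, 7, 6]
print(alternate(ch1, ch2))  # one full sweep per channel
print(chopped(ch1, ch2))    # interleaved segments within one sweep
```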
Less common are oscilloscopes with more traces; four inputs are common among these, but a few (Kikusui, for one) offered a display of the sweep trigger signal if desired. Some multi-trace oscilloscopes use the external trigger input as an optional vertical input, and some have third and fourth channels with only minimal controls. In all cases, the inputs, when independently displayed, are time-multiplexed, but dual-trace oscilloscopes often can add their inputs to display a real-time analog sum. (Inverting one channel while adding them together results in a display of the differences between them, provided neither channel is overloaded. This difference mode can provide a moderate-performance differential input.) Switching channels can be asynchronous, i.e. free-running, with respect to the sweep frequency, or it can be done after each horizontal sweep is complete. Asynchronous switching is usually designated "chopped", while sweep-synchronized switching is designated "alt[ernate]". A given channel is alternately connected and disconnected, leading to the term "chopped". Multi-trace oscilloscopes also switch channels in either chopped or alternate modes. In general, chopped mode is better for slower sweeps. It is possible for the internal chopping rate to be a multiple of the sweep repetition rate, creating blanks in the traces, but in practice this is rarely a problem; the gaps in one trace are overwritten by traces of the following sweep. A few oscilloscopes had a modulated chopping rate to avoid this occasional problem. Alternate mode, however, is better for faster sweeps. True dual-beam CRT oscilloscopes did exist, but were not common. One type (Cossor, U.K.) had a beam-splitter plate in its CRT, and single-ended deflection following the splitter. Others had two complete electron guns, requiring tight control of axial (rotational) mechanical alignment in manufacturing the CRT. Beam-splitter types had horizontal deflection common to both vertical channels, but dual-gun oscilloscopes could have separate timebases, or use one timebase for both channels. Multiple-gun CRTs (up to ten guns) were made in past decades. With ten guns, the envelope (bulb) was cylindrical throughout its length. (Also see "CRT Invention" in Oscilloscope history.) The vertical amplifier In an analog oscilloscope, the vertical amplifier acquires the signal[s] to be displayed and provides a signal large enough to deflect the CRT's beam. In better oscilloscopes, it delays the signal by a fraction of a microsecond. The maximum deflection is at least somewhat beyond the edges of the graticule, and more typically some distance off-screen. The amplifier has to have low distortion to display its input accurately (it must be linear), and it has to recover quickly from overloads. In addition, its time-domain response has to represent transients accurately, with minimal overshoot, rounding, and tilt of a flat pulse top. A vertical input goes to a frequency-compensated step attenuator that reduces large signals to prevent overload. The attenuator feeds one or more low-level stages, which in turn feed gain stages (and a delay-line driver if there is a delay). Subsequent gain stages lead to the final output stage, which develops a large signal swing (tens of volts, sometimes over 100 volts) for CRT electrostatic deflection. In dual- and multiple-trace oscilloscopes, an internal electronic switch selects the relatively low-level output of one channel's early-stage amplifier and sends it to the following stages of the vertical amplifier.
In free-running ("chopped") mode, the oscillator (which may be simply a different operating mode of the switch driver) blanks the beam before switching, and unblanks it only after the switching transients have settled. Part way through the amplifier is a feed to the sweep trigger circuits, for internal triggering from the signal. In a dual- or multi-trace oscilloscope, this feed comes from an individual channel's amplifier, the channel depending upon the setting of the trigger source selector. This feed precedes the delay (if there is one), which allows the sweep circuit to unblank the CRT and start the forward sweep, so the CRT can show the triggering event. High-quality analog delays add a modest cost to an oscilloscope, and are omitted in cost-sensitive oscilloscopes. The delay itself comes from a special cable with a pair of conductors wound around a flexible, magnetically soft core. The coiling provides distributed inductance, while a conductive layer close to the wires provides distributed capacitance. The combination is a wideband transmission line with considerable delay per unit length. Both ends of the delay cable require matched impedances to avoid reflections. X-Y mode Most modern oscilloscopes have several inputs for voltages, and thus can be used to plot one varying voltage versus another. This is especially useful for graphing I-V curves (current versus voltage characteristics) for components such as diodes, as well as Lissajous figures. Lissajous figures are an example of how an oscilloscope can be used to track phase differences between multiple input signals. This is very frequently used in broadcast engineering to plot the left and right stereophonic channels, to ensure that the stereo generator is calibrated properly. Historically, stable Lissajous figures were used to show that two sine waves had a relatively simple frequency relationship, a numerically small ratio. They also indicated the phase difference between two sine waves of the same frequency. The X-Y mode also lets the oscilloscope serve as a vector monitor to display images or user interfaces. Many early games, such as Tennis for Two, used an oscilloscope as an output device. Complete loss of signal in an X-Y CRT display means that the beam is stationary, striking a small spot. This risks burning the phosphor if the brightness is too high. Such damage was more common in older scopes, as the phosphors previously used burned more easily. Some dedicated X-Y displays reduce beam current greatly, or blank the display entirely, if there are no inputs present. Z input Some analog oscilloscopes feature a Z input, generally a terminal that connects directly to the CRT grid (usually via a coupling capacitor). This allows an external signal to either increase (if positive) or decrease (if negative) the brightness of the trace, even allowing it to be totally blanked. The voltage range from cut-off to a brightened display is of the order of 10–20 volts, depending on the CRT characteristics. An example of a practical application: if a pair of sine waves of known frequency is used to generate a circular Lissajous figure and a higher unknown frequency is applied to the Z input, the continuous circle becomes a circle of dots. The number of dots multiplied by the X-Y frequency gives the Z frequency. This technique only works if the Z frequency is an integer multiple of the X-Y frequency, and only if it is not so large that the dots become too numerous to count.
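The dot-counting arithmetic described above is easy to verify numerically. In this illustrative sketch (the two frequencies are assumed example values), the Z signal unblanks the beam during its positive half-cycles, and counting the resulting dots over one turn of the circle recovers the Z frequency:

```python
import math

f_xy = 1_000.0   # known X-Y frequency in Hz (assumed example value)
f_z = 7_000.0    # unknown Z-input frequency to be "measured"

# One full turn of the circular Lissajous figure (x = cos, y = sin).
# The Z signal unblanks the beam only during its positive half-cycles,
# turning the continuous circle into discrete dots.
dots = 0
prev_unblanked = False
for k in range(1000):
    t = k / (1000 * f_xy)                    # one period of the X-Y signal
    unblanked = math.sin(2 * math.pi * f_z * t) > 0
    if unblanked and not prev_unblanked:     # count each dot once
        dots += 1
    prev_unblanked = unblanked

print(f"dots counted: {dots}")               # 7
print(f"inferred Z frequency: {dots * f_xy:.0f} Hz")
```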
Bandwidth As with all practical instruments, oscilloscopes do not respond equally to all possible input frequencies. The range of sinusoid frequencies an oscilloscope can usefully display is referred to as its bandwidth. Bandwidth applies primarily to the Y-axis, though the X-axis sweeps must be fast enough to show the highest-frequency waveforms. The bandwidth is defined as the frequency at which the sensitivity is 0.707 of the sensitivity at DC or the lowest AC frequency (a drop of 3 dB). The oscilloscope's response drops off rapidly as the input frequency rises above that point. Within the stated bandwidth the response is not necessarily exactly uniform (or "flat"), but should always fall within a +0 to −3 dB range. One source says there is a noticeable effect on the accuracy of voltage measurements at only 20 percent of the stated bandwidth. Some oscilloscopes' specifications do include a narrower tolerance range within the stated bandwidth. Probes also have bandwidth limits and must be chosen and used to handle the frequencies of interest properly. To achieve the flattest response, most probes must be "compensated" (an adjustment performed using a test signal from the oscilloscope) to allow for the reactance of the probe's cable. Another related specification is rise time, the time taken between 10% and 90% of the maximum amplitude response at the leading edge of a pulse. It is related to the bandwidth approximately by: bandwidth in Hz × rise time in seconds = 0.35. For example, an oscilloscope with a rise time of 1 nanosecond would have a bandwidth of 350 MHz. In analog instruments, the bandwidth of the oscilloscope is limited by the vertical amplifiers and the CRT or other display subsystem. In digital instruments, the sampling rate of the analog-to-digital converter (ADC) is a factor, but the stated analog bandwidth (and therefore the overall bandwidth of the instrument) is usually less than the ADC's Nyquist frequency. This is due to limitations in the analog signal amplifier, deliberate design of the anti-aliasing filter that precedes the ADC, or both. For a digital oscilloscope, a rule of thumb is that the continuous sampling rate should be ten times the highest frequency one wishes to resolve; for example, a 20 megasample/second rate would be applicable for measuring signals up to about 2 MHz. This lets the anti-aliasing filter be designed with a 3 dB down point of 2 MHz and an effective cutoff at 10 MHz (the Nyquist frequency), avoiding the artifacts of a very steep ("brick-wall") filter. A sampling oscilloscope can display signals of considerably higher frequency than the sampling rate if the signals are exactly, or nearly, repetitive. It does this by taking one sample from each successive repetition of the input waveform, each sample being taken at a slightly increased time interval from the trigger event. The waveform is then displayed from these collected samples. This mechanism is referred to as "equivalent-time sampling". Some oscilloscopes can operate in either this mode or in the more traditional "real-time" mode at the operator's choice. Waveform interval and sampling interval For digital oscilloscopes, the waveform interval is defined as the time interval between adjacent points of a displayed waveform, while the sampling interval is defined as the time interval between adjacent gathered samples (= 1 / sampling frequency); the waveform interval is usually longer than the sampling interval.
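The bandwidth and sampling rules of thumb above reduce to a line of arithmetic each. A minimal sketch, using the same example values as the text:

```python
# Rise time / bandwidth rule of thumb: BW (Hz) x rise time (s) ~= 0.35
rise_time = 1e-9                # 1 ns, example value from the text
bandwidth = 0.35 / rise_time    # ~350 MHz

# Digital-scope rule of thumb: sample at ~10x the highest frequency of
# interest, leaving room for a gentle anti-aliasing filter.
f_max = 2e6                     # highest frequency to resolve, Hz
sample_rate = 10 * f_max        # 20 Msample/s
nyquist = sample_rate / 2       # 10 MHz

print(f"bandwidth ~ {bandwidth / 1e6:.0f} MHz")
print(f"sample rate {sample_rate / 1e6:.0f} Msa/s, Nyquist {nyquist / 1e6:.0f} MHz")
```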
The displayed waveform is, in other words, an aggregation of the gathered samples: each displayed point may, for example, be the average of the samples within its waveform interval. Other features Some oscilloscopes have cursors, lines that can be moved about the screen to measure the time interval between two points, or the difference between two voltages. A few older oscilloscopes simply brightened the trace at movable locations. Cursors are more accurate than visual estimates referring to graticule lines. Better-quality general-purpose oscilloscopes include a calibration signal for setting up the compensation of test probes; this is often a 1 kHz square-wave signal of a definite peak-to-peak voltage, available at a test terminal on the front panel. Some better oscilloscopes also have a squared-off loop for checking and adjusting current probes. Sometimes a user wants to see an event that happens only occasionally. To catch these events, some oscilloscopes, called storage scopes, preserve the most recent sweep on the screen. This was originally achieved with a special CRT, a storage tube, which retained the image of even a very brief event for a long time. Some digital oscilloscopes can sweep at speeds as slow as once per hour, emulating a strip chart recorder; the signal scrolls across the screen from right to left. Most oscilloscopes with this facility switch from a sweep to a strip-chart mode at about one sweep per ten seconds, because at slower rates the instrument would appear broken: it would be collecting data, but the moving dot could not be seen. All but the simplest models of current oscilloscopes use digital signal sampling: samples feed fast analog-to-digital converters, after which all signal processing (and storage) is digital. Many oscilloscopes accommodate plug-in modules for different purposes, e.g., high-sensitivity amplifiers of relatively narrow bandwidth, differential amplifiers, amplifiers with four or more channels, sampling plug-ins for repetitive signals of very high frequency, and special-purpose plug-ins, including audio/ultrasonic spectrum analyzers and stable-offset-voltage direct-coupled channels with relatively high gain. Examples of use One of the most frequent uses of scopes is troubleshooting malfunctioning electronic equipment. For example, where a voltmeter may show a totally unexpected voltage, a scope may reveal that the circuit is oscillating. In other cases the precise shape or timing of a pulse is important. In a piece of electronic equipment, for example, the connections between stages (e.g., electronic mixers, electronic oscillators, amplifiers) may be "probed" for the expected signal, using the scope as a simple signal tracer. If the expected signal is absent or incorrect, some preceding stage of the electronics is not operating correctly. Since most failures occur because of a single faulty component, each measurement can either confirm that a given stage of a complex piece of equipment works, or rule it out as the source of the fault. Once the faulty stage is found, further probing can usually tell a skilled technician exactly which component has failed, and once the component is replaced, the unit can be restored to service (or at least the next fault can be isolated). This sort of troubleshooting is typical of radio and TV receivers, as well as audio amplifiers, but can apply to quite different devices such as electronic motor drives. Another use is to check newly designed circuitry.
Often, a newly designed circuit misbehaves because of design errors, bad voltage levels, electrical noise, etc. Digital electronics usually operate from a clock, so a dual-trace scope showing both the clock signal and a test signal dependent upon the clock is useful. Storage scopes are helpful for "capturing" rare electronic events that cause defective operation. Oscilloscopes are often used during real-time software development to check, among other things, missed deadlines and worst-case latencies. Automotive use First appearing in the 1970s for ignition system analysis, automotive oscilloscopes are becoming an important workshop tool for testing sensors and output signals on electronic engine management systems, and on braking and stability systems. Some oscilloscopes can trigger on and decode serial bus messages, such as the CAN bus commonly used in automotive applications. Software Many oscilloscopes today provide one or more external interfaces to allow remote instrument control by external software. These interfaces (or buses) include GPIB, Ethernet, serial port, USB and Wi-Fi. Types and models The following section is a brief summary of the various types and models available. For a detailed discussion, refer to the other article. Cathode-ray oscilloscope (CRO) The earliest and simplest type of oscilloscope consisted of a CRT, a vertical amplifier, a timebase, a horizontal amplifier and a power supply. These are now called "analog" scopes to distinguish them from the "digital" scopes that became common in the 1990s and later. Analog scopes do not necessarily include a calibrated reference grid for size measurement of waves, and they may not display waves in the traditional sense of a line segment sweeping from left to right. Instead, they could be used for signal analysis by feeding a reference signal into one axis and the signal to measure into the other axis. For an oscillating reference and measurement signal, this results in a complex looping pattern referred to as a Lissajous figure. The shape of the curve can be interpreted to identify properties of the measurement signal in relation to the reference signal, and is useful across a wide range of oscillation frequencies. Dual-beam oscilloscope The dual-beam analog oscilloscope can display two signals simultaneously. A special dual-beam CRT generates and deflects two separate beams. Multi-trace analog oscilloscopes can simulate a dual-beam display with chop and alternate sweeps, but those features do not provide simultaneous displays. (Real-time digital oscilloscopes offer the same benefits as a dual-beam oscilloscope, but they do not require a dual-beam display.) The disadvantages of the dual-trace oscilloscope are that it cannot switch quickly between traces and cannot capture two fast transient events; a dual-beam oscilloscope avoids those problems. Analog storage oscilloscope Trace storage is an extra feature available on some analog scopes, which use direct-view storage CRTs. Storage allows a trace pattern that normally would decay in a fraction of a second to remain on the screen for several minutes or longer. An electrical circuit can then be deliberately activated to store and erase the trace on the screen. Digital oscilloscopes While analog devices use continually varying voltages, digital devices use numbers that correspond to samples of the voltage. In the case of digital oscilloscopes, an analog-to-digital converter (ADC) changes the measured voltages into digital information.
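A rough software picture of that digitization step may help. The sketch below is illustrative only (an 8-bit converter and a ±1 V full-scale range are assumptions; real ADC front-ends add sample-and-hold circuits, calibration, and dithering):

```python
import math

BITS = 8                     # assumed ADC resolution
V_MIN, V_MAX = -1.0, 1.0     # assumed full-scale input range, volts
LEVELS = 2 ** BITS

def adc_sample(v):
    """Quantize one voltage into an integer code, clipping at full scale."""
    v = min(max(v, V_MIN), V_MAX)
    return int((v - V_MIN) / (V_MAX - V_MIN) * (LEVELS - 1))

# Digitize one cycle of a 1 kHz sine wave at 20 kHz (20 samples).
f, fs = 1_000.0, 20_000.0
codes = [adc_sample(0.8 * math.sin(2 * math.pi * f * n / fs)) for n in range(20)]
print(codes)
```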
The digital storage oscilloscope, or DSO for short, is now the standard type of oscilloscope for the majority of industrial applications and, thanks to the low cost of entry-level instruments, for hobbyists as well. It replaces the electrostatic storage method of analog storage scopes with digital memory, which stores sample data as long as required without degradation and displays it without the brightness issues of storage-type CRTs. It also allows complex processing of the signal by high-speed digital signal processing circuits. A standard DSO is limited to capturing signals with a bandwidth of less than half the sampling rate of the ADC (called the Nyquist limit). There is a variation of the DSO called the digital sampling oscilloscope that can exceed this limit for certain types of signal, such as high-speed communications signals, where the waveform consists of repeating pulses. This type of DSO deliberately samples at a much lower frequency than the Nyquist limit and then uses signal processing to reconstruct a composite view of a typical pulse. Mixed-signal oscilloscopes A logic analyzer is similar to an oscilloscope, but for each input signal it provides only the logic level, without the shape of the analog waveform. A mixed-signal oscilloscope (MSO), by contrast, has two kinds of inputs: a small number of analog channels (typically two or four) and a larger number of logic channels (typically sixteen). It provides the ability to accurately time-correlate analog and logic signals, thus offering a distinct advantage over a separate oscilloscope and logic analyzer. Typically, logic channels may be grouped and displayed as a bus, with each bus value displayed at the bottom of the display in hexadecimal or binary. On most MSOs, the trigger can be set across both analog and logic channels. Mixed-domain oscilloscopes A mixed-domain oscilloscope (MDO) is an oscilloscope that comes with an additional RF input which is used solely for dedicated FFT-based spectrum analyzer functionality. Often, this RF input offers a higher bandwidth than the conventional analog input channels. This is in contrast to the FFT functionality of conventional digital oscilloscopes, which use the normal analog inputs. Some MDOs allow time-correlation of events in the time domain (like a specific serial data package) with events happening in the frequency domain (like RF transmissions). Handheld oscilloscopes Handheld oscilloscopes are useful for many test and field service applications. Today, a handheld oscilloscope is usually a digital storage oscilloscope with a liquid-crystal display. Many handheld and bench oscilloscopes have the ground reference voltage common to all input channels. If more than one measurement channel is used at the same time, all the input signals must have the same voltage reference, and the shared default reference is the "earth". If there is no differential preamplifier or external signal isolator, such a traditional desktop oscilloscope is not suitable for floating measurements. (Occasionally an oscilloscope user breaks the ground pin in the power supply cord of a bench-top oscilloscope in an attempt to isolate the signal common from the earth ground. This practice is unreliable since the entire stray capacitance of the instrument cabinet connects into the circuit. Breaking a safety ground connection is also a hazard, and instruction manuals strongly advise against it.)
Some models of oscilloscope have isolated inputs, where the signal reference level terminals are not connected together. Each input channel can be used to make a "floating" measurement with an independent signal reference level. Measurements can be made without tying one side of the oscilloscope input to the circuit signal common or ground reference; the isolation available varies by model and is specified by the manufacturer. PC-based oscilloscopes Some digital oscilloscopes rely on a PC platform for display and control of the instrument. This can take the form of a standalone oscilloscope with an internal PC platform (PC mainboard), or of an external oscilloscope that connects through USB or LAN to a separate PC or laptop. Related instruments A large number of instruments used in a variety of technical fields are really oscilloscopes with inputs, controls, calibration, display, etc., specialized and optimized for a particular application. Examples of such oscilloscope-based instruments include waveform monitors for analyzing video levels in television productions and medical devices such as vital function monitors and electrocardiogram and electroencephalogram instruments. In automobile repair, an ignition analyzer is used to show the spark waveforms for each cylinder. All of these are essentially oscilloscopes, performing the basic task of showing the changes in one or more input signals over time in an X-Y display. Other instruments convert the results of their measurements to a repetitive electrical signal, and incorporate an oscilloscope as a display element. Such complex measurement systems include spectrum analyzers, transistor analyzers, and time domain reflectometers (TDRs). Unlike an oscilloscope, these instruments automatically generate stimulus or sweep a measurement parameter. See also Eye pattern Phonodeik Tennis for Two, an oscilloscope game Time-domain reflectometry Vectorscope Waveform monitor References External links The Cathode Ray Tube site Virtual Oscilloscope Museum Electronic test equipment Measuring instruments Laboratory equipment Electronics work tools American inventions German inventions
Oscilloscope
Technology,Engineering
10,760
806,208
https://en.wikipedia.org/wiki/Maria%20de%20Lourdes%20Pintasilgo
Maria de Lourdes Ruivo da Silva de Matos Pintasilgo (18 January 1930 – 10 July 2004) was a Portuguese chemical engineer and politician. She was the first and to date only woman to serve as Prime Minister of Portugal, and the second woman to serve as prime minister in Western Europe, after Margaret Thatcher. Early life Maria de Lourdes Pintasilgo was born to a middle-class family in 1930. Her father, Jaime de Matos Pintasilgo (born Covilhã, Conceição, 9 December 1896 – died Lisbon, Socorro, 10 October 1959), was in the wool business, and her mother was Amélia do Carmo Ruivo da Silva, a native of Vendas Novas. Her parents married in Abrantes on 14 March 1929. Her father abandoned the family, a fact she tried hard to hide at school and one that led her to avoid the usual friendships. At the age of seven, she was sent to the Liceu Filipa de Lencastre, a secondary school in Lisbon. She distinguished herself in the Mocidade Portuguesa, a militaristic youth movement founded by the dictator Salazar. Later she joined Acção Católica (Catholic Action). During her years at the Instituto Superior Técnico, from which she earned a degree in industrial chemical engineering, she joined and eventually led the Catholic women's student movement. Early career After graduating from the University of Lisbon's Instituto Superior Técnico in 1953, at the age of 23, with an engineering degree in industrial chemistry, she went into a graduate scholarship program with the national Nuclear Energy Board. After completing the program, she began working for Companhia União Fabril ("CUF"), a large Portuguese conglomerate with interests in cement plants. By 1954, she held the position of chief engineer of the studies and projects division. From that position she quickly moved to the position of project director, in charge of the firm's documentation center and responsible for the company's technical journals. She held this position for seven years, until she left the company in 1960. Pintasilgo had strong ties to the Roman Catholic Church. From 1952 to 1956, at Lisbon's Catholic University of Portugal, she was president of the women's group. In 1956 she became the international president of Pax Romana, a movement of Catholic students. In 1961, Pintasilgo joined the Grail (Graal), an international Catholic laywomen's movement. Two years after joining the Grail she led an international group working to improve the movement as well as to establish it in Portugal. By 1965 she had become the Grail's international vice-president. She was also appointed by the Vatican to serve as women's liaison between the Roman Catholic Church and the World Council of Churches. After leaving Companhia União Fabril, she held a government post until 1969, running Portugal's program for development and social change. In 1970, she presided over government working groups involving women's affairs, and she was a member of the Portuguese delegation to the United Nations in 1971–72. In 1974 she was appointed secretary of state for social welfare in the first provisional government following the revolution, and by early 1975 she had risen to Minister of Social Affairs. In 1975, Pintasilgo became Portugal's first Ambassador to the United Nations Educational, Scientific and Cultural Organization (UNESCO). Tenure as Prime Minister and later career In 1979 she was called on by General António Ramalho Eanes, the president of Portugal, to become prime minister.
Pintasilgo was sworn in as Prime Minister of the Portuguese caretaker government on 1 August 1979, for a term of three months in office. During her time in office she pushed to modernize the outdated social welfare system. She left her mark by making social security universal and by improving health care, education, and labor legislation in Portugal. She contributed the piece "Daring to be different" to the 1984 anthology Sisterhood Is Global: The International Women's Movement Anthology, edited by Robin Morgan. In 1986, Pintasilgo became the first woman to run for president of Portugal. She ran as an independent and received 7.4% of the votes. The following year she was elected to the European Parliament as a member of the Socialist Party, a seat she held until 1989. From 1992, and for almost a decade, she chaired the Independent Commission for Population and Quality of Life (ICPQL). Hosted by the United Nations Educational, Scientific and Cultural Organization (UNESCO) in Paris, the international commission was established by a coalition of governments and global foundations in order to make recommendations to be presented to the UN system and the donor community. In her statement at the Cairo UN International Conference on Population and Development on 7 September 1994, Maria de Lourdes Pintasilgo explained, "The ultimate goal of Population and Development is to accord an improved quality of life to the people of the world. Not only to count people but to ensure that people count in Development". The commission's report was published in 1996 by Oxford University Press under the title Caring for the Future: Making the Next Decades Provide a Life Worth Living. Maria de Lourdes Pintasilgo died of cardiac arrest at her home in Lisbon on 10 July 2004, aged 74. She was buried in Prazeres Cemetery, in Lisbon. Electoral history Presidential election, 1986 |- ! rowspan="2" colspan="2" |Candidate ! colspan="2" |First round ! colspan="2" |Second round |- ! Votes ! align="center" style="width: 50px"|% ! Votes ! align="center" style="width: 50px"|% |- | style="background:;"| | align=left |Mário Soares || 1,443,683 || 25.4 || 3,010,756 || 51.2 |- | style="background:;"| | align=left |Diogo Freitas do Amaral || 2,629,597 || 46.3 || 2,872,064 || 48.8 |- | style="background:green;"| | align=left |Francisco Salgado Zenha || 1,185,867 || 20.9 |- | style="background:;"| | align=left |Maria de Lourdes Pintasilgo || 418,961 || 7.4 |- | colspan="2" align="left"| Blank/Invalid ballots | 64,626 || – || 54,280 || – |- style="background-color:#E9E9E9" | colspan="2" align="left"| Turnout | 5,742,734 || 75.39 || 5,937,100 || 77.99 |- | colspan="6" align=left|Source: Comissão Nacional de Eleições |} European Parliament election, 1987 |- ! colspan="2" | Party ! Candidate ! Votes ! align="center" style="width: 50px"|% !
align="center" style="width: 50px"|Seats |- | style="background:;"| | align="left"|PSD | align=left |Pedro Santana Lopes || 2,111,828 || 37.5 || 10 |- | style="background:;"| | align="left"|PS | align=left |Maria de Lourdes Pintasilgo || 1,267,672 || 22.5 || 6 |- | style="background:;"| | align="left"| CDS | align=left |Lucas Pires || 868,718 || 15.4 || 4 |- | style="background:;"| | align="left"| CDU | align=left |Ângelo Veloso || 648,700 || 11.5 || 3 |- | style="background:green;"| | align="left"| PRD | align=left |Medeiros Ferreira || 250,158 || 4.4 || 1 |- | style="background:;"| | align="left"| PPM | align=left |Miguel Esteves Cardoso || 155,990 || 2.8 || 0 |- | style="background:white;"| | colspan="2" align="left"| Other parties | 193,869 || 3.4 || 0 |- | colspan="3" align="left"| Blank/Invalid ballots | 142,715 || 2.5 || – |- style="background-color:#E9E9E9" | colspan="3" align="left"| Turnout | 5,639,650 || 72.42 || 24 |- | colspan="7" align=left|Source: Comissão Nacional de Eleições |} Legacy Maria de Lourdes Pintasilgo was a student at Instituto Superior Técnico (IST), one of the most prestigious Engineering faculties in Portugal. Since 2016, IST promotes the Maria de Lourdes Pintasilgo Award aiming to recognize and reward annually two women that graduated at IST, as a way to promote the gender balance policy at IST as well as recognize the crucial role that women have in all fields of Engineering. References Further reading Skard, Torild (2014) "Maria de Lourdes Pintasilgo" in Women of Power - Half a century of female presidents and prime ministers worldwide, Brtistol: Policy Press, . 1930 births 2004 deaths 20th-century Portuguese politicians 20th-century women prime ministers Catholic socialists Grand Crosses of the Order of Christ (Portugal) Grand Crosses of the Order of Liberty Grand Crosses of the Order of Prince Henry Instituto Superior Técnico alumni MEPs for Portugal 1987–1989 20th-century women MEPs for Portugal People from Abrantes Portuguese chemical engineers Portuguese Christian socialists Portuguese Roman Catholics Candidates for President of Portugal Prime ministers of Portugal Socialist Party (Portugal) MEPs Socialist Party (Portugal) politicians Women government ministers of Portugal Women prime ministers in Europe Female Christian socialists Women chemical engineers
Maria de Lourdes Pintasilgo
Chemistry
2,213
58,131,299
https://en.wikipedia.org/wiki/Ramon%20E.%20Moore
Ramon Edgar (Ray) Moore (1929–2015) was an American mathematician, known for his pioneering work in the field of interval arithmetic. Moore received an AB degree in physics from the University of California, Berkeley in 1950, and a PhD in mathematics from Stanford University in 1963. His early career included work on some of the earliest computers (including ENIAC). He was twice awarded the Humboldt Research Award for U.S. senior scientists, in 1975 and 1980. His best-known work is his first book, Interval Analysis, published in 1966. He wrote several more books and many journal articles and technical reports. R. E. Moore Prize The R. E. Moore Prize for Applications of Interval Analysis is an award in the interdisciplinary field of rigorous numerics. It is awarded biennially by the Computer Science Department at the University of Texas at El Paso, and judged by the editorial board of the journal Reliable Computing. The award was named in honor of Moore's contributions to interval analysis. See also List of mathematics awards References Further reading External links Faculty webpage R. E. Moore Prize 20th-century American mathematicians Mathematics awards Stanford University alumni University of California, Berkeley alumni University of Texas at El Paso faculty 1929 births 2015 deaths 21st-century American mathematicians
Ramon E. Moore
Technology
252
5,854,581
https://en.wikipedia.org/wiki/Bi-metallic%20coin
Bi-metallic coins are coins consisting of two (bi-) metals or alloys, generally arranged with an outer ring around a contrasting center. Common circulating examples include the €1, €2, United Kingdom £1 and £2, Canadian $2, South Africa R5, Egyptian £1, Turkish 1 lira and 50 kurus, Indian ₹10 and ₹20, Indonesian Rp1,000, Polish 2 and 5 zł, Czech 50 Kč, Hungarian 100 and 200 Ft, Bulgarian 1 and 2 lv., Hong Kong $10, Argentine $1 and $2, Brazilian R$1, Chilean $100 and $500, Colombian $500 and $1000, Peruvian S/2 and S/5, Albanian 100 Lekë, Thai 10 baht and all Mexican coins of $1 or higher denomination. For a more complete list, see List of bi-metallic coins. History Bi-metallic coins and medals have been issued for a long time. The Roman Empire issued special-occasion, large medallions with a center of bronze or copper and an outer ring of orichalcum, starting with the reign of Hadrian. Circulating bi-metallic coins are known from the 17th century: English farthings from 1684 through 1693 were made of tin with a central plug of copper. The silver-center cent pattern produced by the United States in 1792 is another example. In the 1830s and 1840s, British medalist Joseph Moore produced large numbers of bi-metallic "penny model" and less common "halfpenny model" tokens, as a proposal to replace the relatively large penny and halfpenny coins. Though not legal tender, Moore's tokens circulated widely and were accepted at face value by many merchants. Despite their popularity, the Royal Mint rejected the proposal, and did not reduce the size of the penny and halfpenny until decimalization. The first modern circulating bi-metallic coin was the Italian 500 lire, first issued in 1982. The first tri-metallic circulating coins were the 20-franc coins introduced in France and Monaco in 1992. These were similar to the corresponding bi-metallic 10-franc coins, but had two rings instead of one. Use As well as in circulating coinage, where they are generally restricted to high-denomination coins, bi-metallic compositions are often used in commemorative issues, often made of precious metals. For example, the only bi-metallic coin issued by the United States is the $10 Library of Congress commemorative, made of a gold ring around a platinum center. Bi-metallic designs are used primarily as a security measure against coin counterfeiting. Manufacturing The manufacturing process is similar to that of ordinary coins, except that two blanks (the inner and the outer) are struck at the same time, deforming the separate blanks sufficiently to hold them together. See also List of bi-metallic coins References External links Numismatics Bimetal
Bi-metallic coin
Chemistry,Materials_science
617
912,858
https://en.wikipedia.org/wiki/Jon%20Barwise
Kenneth Jon Barwise (June 29, 1942 – March 5, 2000) was an American mathematician, philosopher and logician who proposed some fundamental revisions to the way that logic is understood and used. Education and career He was born in Independence, Missouri, to Kenneth T. and Evelyn Barwise. A pupil of Solomon Feferman at Stanford University, Barwise started his research in infinitary logic. After positions as assistant professor at Yale University and the University of Wisconsin, during which time his interests turned to natural language, he returned to Stanford in 1983 to direct the Center for the Study of Language and Information (CSLI). He began teaching at Indiana University in 1990. He was elected a Fellow of the American Academy of Arts and Sciences in 1999. In his last year, Barwise was invited to give the 2000 Gödel Lecture; he died prior to the lecture. Philosophical and logical work Barwise contended that by being explicit about the context in which a proposition is made (the situation), many problems in the application of logic can be eliminated. He sought ... to understand meaning and inference within a general theory of information, one that takes us outside the realm of sentences and relations between sentences of any language, natural or formal. In particular, he claimed that such an approach resolved the liar paradox. He made use of Peter Aczel's non-well-founded set theory in understanding "vicious circles" of reasoning. Barwise, along with his former colleague at Stanford John Etchemendy, was the author of the popular logic textbook Language, Proof and Logic. Unlike the Handbook of Mathematical Logic, which was a survey of the state of the art of mathematical logic circa 1975 and of which he was the editor, this work targeted elementary logic. The text is notable for including computer-aided homework problems, some of which provide visual representations of logical problems. During his time at Stanford, he was also the first Director of the Symbolic Systems Program, an interdepartmental degree program focusing on the relationships between cognition, language, logic, and computation. The K. Jon Barwise Award for Distinguished Contributions to the Symbolic Systems Program has been given periodically since 2001. Selected publications Barwise, K. J. (1975) Admissible Sets and Structures: An Approach to Definability Theory Barwise, K. J. & Perry, John (1983) Situations and Attitudes. Cambridge: MIT Press. Barwise, K. J. & Etchemendy, J. (1987) The Liar: An Essay in Truth and Circularity Barwise, K. J. (1988) The Situation in Logic Barwise, K. J. & Moss, L. (1996) Vicious Circles: On the Mathematics of Non-Wellfounded Phenomena Barwise, K. J. & Seligman, J. (1997) Information Flow: The Logic of Distributed Systems Barwise, K. J. & Etchemendy, J. (2002) Language, Proof and Logic Barwise, K. J., editor (1977) Handbook of Mathematical Logic. xi+1165 pages Barwise, J. & Feferman, S., editors (1985) Model-Theoretic Logics. x+893 pages See also Barwise Prize Barwise compactness theorem Slingshot argument References External links In Memoriam: Kenneth Jon Barwise by Solomon Feferman, The Bulletin of Symbolic Logic vol. 6(4), Dec.
2000, pp. 505–508. 1942 births 2000 deaths 20th-century American mathematicians 20th-century American philosophers 20th-century American essayists American male essayists American male non-fiction writers American philosophy academics Deaths from colorectal cancer in the United States Fellows of the American Academy of Arts and Sciences Indiana University faculty Mathematical logicians Mathematicians from Missouri American philosophers of logic American philosophers of mathematics American philosophy writers Stanford University alumni Stanford University Department of Philosophy faculty University of Wisconsin–Madison faculty Writers from Independence, Missouri Yale University faculty 20th-century American male writers
Jon Barwise
Mathematics
814
2,984,115
https://en.wikipedia.org/wiki/Aminoglutethimide
Aminoglutethimide (AG), sold under the brand names Elipten, Cytadren, and Orimeten among others, is a medication which has been used in the treatment of seizures, Cushing's syndrome, breast cancer, and prostate cancer, among other indications. It has also been used by bodybuilders, athletes, and other men for muscle-building and performance- and physique-enhancing purposes. AG is taken by mouth three or four times per day. Side effects of AG include lethargy, somnolence, dizziness, headache, appetite loss, skin rash, hypertension, liver damage, and adrenal insufficiency, among others. AG is both an anticonvulsant and a steroidogenesis inhibitor. In terms of the latter property, it inhibits enzymes such as cholesterol side-chain cleavage enzyme (CYP11A1, P450scc) and aromatase (CYP19A1), thereby inhibiting the conversion of cholesterol into steroid hormones and blocking the production of androgens, estrogens, and glucocorticoids, among other endogenous steroids. As such, AG is an aromatase inhibitor and adrenal steroidogenesis inhibitor, including both an androgen synthesis inhibitor and a corticosteroid synthesis inhibitor. AG was introduced for medical use, as an anticonvulsant, in 1960. It was withdrawn in 1966 due to toxicity. Its steroidogenesis-inhibiting properties were discovered serendipitously, and it was subsequently repurposed for use in the treatment of Cushing's syndrome, breast cancer, and prostate cancer from 1969 onward. However, although used in the past, it has mostly been superseded by newer agents with better efficacy and lower toxicity such as ketoconazole, abiraterone acetate, and other aromatase inhibitors. It remains marketed only in a few countries. Medical uses AG is used as an anticonvulsant in the treatment of petit mal epilepsy and as a steroidogenesis inhibitor in the treatment of Cushing's syndrome, postmenopausal breast cancer, and prostate cancer. It is also used to treat secondary hyperaldosteronism, edema, adrenocortical carcinoma, and ectopic adrenocorticotropic hormone (ACTH) producing tumors. When used as a steroidogenesis inhibitor to treat breast cancer and prostate cancer, AG is given in combination with hydrocortisone, prednisone, or an equivalent corticosteroid to prevent adrenal insufficiency. AG is a second- or third-line choice in the treatment of hormone-sensitive metastatic breast cancer. While effective in the treatment of breast cancer in postmenopausal women, it is not effective in premenopausal women and is not an effective ovarian steroidogenesis inhibitor, probably because it is not a potent enough aromatase inhibitor. The medication is effective in the treatment of prostate cancer, but its effectiveness is low and inconsistent, likely due to its relatively weak steroidogenesis inhibition and poor pharmacokinetics. Nonetheless, AG was found to be not significantly different in effectiveness from surgical adrenalectomy in terms of prostate cancer tumor regression. In any case, AG is not recommended as a first-line therapy in prostate cancer, but only as a second-line therapy; it has only rarely been used in the treatment of prostate cancer. AG is used for adrenal steroidogenesis inhibition by mouth at a dosage of 250 mg three times per day (750 mg/day total) for the first 3 weeks of therapy, increased to 250 mg four times per day (1,000 mg/day total) thereafter. It can be used at a dosage of up to 500 mg four times per day (2,000 mg/day).
It is used as an aromatase inhibitor to inhibit peripheral estrogen production by mouth at a dosage of 125 mg twice per day (250 mg/day total), without significant suppression of adrenal steroidogenesis at this dosage. Maximal aromatase inhibition is said to occur at dosages of 250 to 500 mg per day. The side effects of AG are less frequent and severe at this dosage, and are further reduced when AG is combined with hydrocortisone, so AG is generally combined with a corticosteroid even at this lower dosage. AG should only be used under close medical supervision and with laboratory tests, including thyroid function, baseline hematological studies, serum glutamic-oxaloacetic transaminase, alkaline phosphatase, and bilirubin. Ketoconazole can achieve decreases in steroid hormone levels similar to those of AG, but is more effective in promoting tumor regression and is moderately less toxic in comparison. However, AG can still be a useful alternative in those who have failed, or are unable to tolerate, ketoconazole and other therapies. Available forms AG is provided most commonly in the form of 250 mg tablets. Non-medical uses AG is used by bodybuilders, athletes, and other men to lower circulating levels of cortisol in the body and thereby prevent muscle loss. Cortisol is catabolic to protein in muscle, and effective suppression of cortisol by AG at high doses can prevent muscle loss. It is usually used in combination with an anabolic steroid to avoid androgen deficiency. However, the usefulness of AG for such purposes has been questioned, with few users reportedly having positive comments about it, and the risks of AG are said to be high. In any case, AG is also used by bodybuilders and other men for its actions as an aromatase inhibitor in order to decrease estrogen levels. It is said to be useful for inhibiting the estrogenic side effects of certain anabolic steroids, such as gynecomastia, increased water retention, and fat gain. Contraindications AG should not be used in people with known hypersensitivity to AG. It should not be used in women who are pregnant or breastfeeding. Other potential contraindications include chicken pox, shingles (herpes zoster), infection, kidney disease, liver disease, and hypothyroidism. Side effects AG has many side effects and is a relatively toxic medication, although its side effects are usually described as relatively mild. The side effects of AG include lethargy, fatigue, weakness, malaise, drowsiness, somnolence, depression, apathy, sleep disturbances, stomach discomfort, nausea, vomiting, ataxia, joint aches and pains, fever, skin rash, hypotension or hypertension, high cholesterol levels, virilization, hypothyroidism, thyroid abnormalities, elevated liver enzymes, jaundice, hepatotoxicity, weight gain, leg cramps, personality changes, blood dyscrasias, and adrenal insufficiency (e.g., hyponatremia, hypoglycemia, others). Lethargy is the most common side effect and has been found to occur in 31 to 70% of people treated with AG; it is the most common reason for discontinuation of AG. Skin rash and hypotension have each been observed in about 15% of people. At least one side effect will occur in 45 to 85% of people. Severe toxicity is seen in 10% of people, including circulatory collapse thought to be due to adrenal insufficiency. Hematological and bone marrow toxicity, including marked depression of white blood cell count, platelets, or both, occurs rarely, with an incidence of about 0.9%.
It is usually seen within the first 7 weeks of treatment and resolves within 3 weeks following discontinuation. AG is discontinued in 5 to 10% of people due to intolerable side effects. The central nervous system side effects of AG are due to its nature as an anticonvulsant and its relation to glutethimide. Overdose In the event of overdose of AG, drowsiness, nausea, vomiting, hypotension, and respiratory depression may occur, and medical attention should be sought urgently. Treatment of AG overdose can include gastric lavage to decrease absorption and dialysis to enhance elimination. Interactions AG interacts with all corticosteroids. It enhances the metabolism of dexamethasone, so hydrocortisone should be used instead. If the person is taking warfarin, the dosage of warfarin may need to be increased. Alcohol potentiates the central nervous system side effects of AG. Dosages of theophylline, digitoxin, and medroxyprogesterone acetate may need to be increased. Pharmacology Pharmacodynamics AG is a potent and non-selective steroidogenesis inhibitor, acting as a reversible and competitive inhibitor of multiple steroidogenic enzymes, including: aromatase (CYP19A1) (600 nM), inhibiting the formation of the estrogens estradiol and estrone from testosterone and androstenedione, respectively; cholesterol side-chain cleavage enzyme (P450scc; CYP11A1) (~20,000 nM), inhibiting the conversion of cholesterol into pregnenolone and consequently decreasing the synthesis of all steroid hormones, including the progestogens, androgens, glucocorticoids, and mineralocorticoids, as well as neurosteroids; 21-hydroxylase (CYP21A2), preventing the conversion of progesterone and 17α-hydroxyprogesterone into 11-deoxycorticosterone and 11-deoxycortisol, respectively; 11β-hydroxylase (CYP11B1), preventing the conversion of 11-deoxycorticosterone and 11-deoxycortisol into corticosterone and cortisol, respectively; and aldosterone synthase (18-hydroxylase; CYP11B2), preventing the conversion of corticosterone into aldosterone. As such, AG is an estrogen synthesis inhibitor and adrenal steroidogenesis inhibitor, including both an androgen synthesis inhibitor and a corticosteroid synthesis inhibitor. For these reasons, AG has functional antiestrogenic, antiandrogenic, antiglucocorticoid, and antimineralocorticoid actions. In terms of its actions as an adrenal steroidogenesis inhibitor, it is described as a form of reversible "medical adrenalectomy" or "chemical adrenalectomy". While AG inhibits all of the enzymes listed above, inhibition of P450scc is primarily responsible for its inhibition of adrenal steroidogenesis. In terms of adrenal androgens, AG has been shown to significantly suppress dehydroepiandrosterone sulfate, androstenedione, testosterone, and dihydrotestosterone levels in men. Although it is most potent in inhibiting aromatase among the enzymes it targets, AG is nonetheless described as a relatively weak aromatase inhibitor, albeit a much more potent aromatase inhibitor than adrenal steroidogenesis inhibitor. AG can inhibit aromatase by 74 to 92% and decrease circulating estradiol levels by 58 to 76% in men and postmenopausal women. AG is not an effective ovarian steroidogenesis inhibitor in premenopausal women. However, interference with ovarian steroidogenesis by AG may in any case result in hyperandrogenism and virilization in premenopausal women. Pharmacokinetics With oral administration, the absorption of AG is rapid and complete. It is well-distributed throughout the body.
Pharmacokinetics With oral administration, the absorption of AG is rapid and complete, and it is well-distributed throughout the body. In terms of metabolism, a portion of AG is acetylated in the liver. The biological half-life of AG is 12.5 hours, and it is excreted in urine, with 34 to 54% of a dose excreted unchanged. Chemistry AG is a nonsteroidal compound, specifically a glutarimide, and is a derivative of glutethimide. It is also known by its chemical names 2-(4-aminophenyl)-2-ethylglutarimide and 3-(4-aminophenyl)-3-ethylpiperidine-2,6-dione. Aside from glutethimide, AG is structurally related to rogletimide (pyridoglutethimide) and thalidomide, as well as amphenone B, metyrapone, and mitotane. History AG was introduced for medical use, as an anticonvulsant, in 1960. In 1963, it was reported that AG had induced symptoms of Addison's disease (adrenal insufficiency) in a young girl. Following additional reports, it was determined that AG acts as a steroidogenesis inhibitor; the discovery of AG as a steroidogenesis inhibitor was thus serendipitous. The medication was withdrawn from the market in 1966 due to its adverse effects. The first report of AG in the treatment of breast cancer was published in 1969, and the first report of AG in the treatment of prostate cancer was published in 1974. The medication was one of the first adrenal steroidogenesis inhibitors, as well as the first aromatase inhibitor to be discovered and used clinically, and it led to the development of other aromatase inhibitors. Along with testolactone, it is described as a "first-generation" aromatase inhibitor. AG has largely been superseded by medications with better effectiveness and tolerability and reduced toxicity, such as ketoconazole, abiraterone acetate, and other aromatase inhibitors. Society and culture Generic names Aminoglutethimide is the generic name of the drug and its INN, USAN, and BAN, while aminoglutéthimide is its DCF and aminoglutetimide is its DCIT. It is also known by its developmental code names Ba 16038, Ciba 16038, and ND-1966. Brand names AG has been marketed under brand names including Elipten, Cytandren, and Orimeten, as well as under other brand names such as Aminoblastin, Rodazol, and Mamomit, among numerous others. Availability AG appears to remain marketed in only a few countries, including China, Egypt, and Lithuania. Previously, AG was available very widely throughout the world, in more than two dozen countries and under numerous brand names. Among other places, it was marketed in the United States, Canada, the United Kingdom, other European countries, Australia, New Zealand, South Africa, South America, Israel, Malaysia, and Hong Kong. References 11β-Hydroxylase inhibitors 21-Hydroxylase inhibitors Abandoned drugs Aldosterone synthase inhibitors Anticonvulsants Antiglucocorticoids Aromatase inhibitors Cholesterol side-chain cleavage enzyme inhibitors Glutarimides Hepatotoxins Hormonal antineoplastic drugs Withdrawn drugs 4-Aminophenyl compounds
Aminoglutethimide
Chemistry
3,215
1,201,381
https://en.wikipedia.org/wiki/Apothecaries%27%20system
The apothecaries' system, or apothecaries' weights and measures, is a historical system of mass and volume units that were used by physicians and apothecaries for medical prescriptions and also sometimes by scientists. The English version of the system is closely related to the English troy system of weights, the pound and grain being exactly the same in both. It divides a pound into 12 ounces, an ounce into 8 drachms, and a drachm into 3 scruples of 20 grains each. This exact form of the system was used in the United Kingdom; in some of its former colonies, it survived well into the 20th century. The apothecaries' system of measures is a similar system of volume units based on the fluid ounce. For a long time, medical recipes were written in Latin, often using special symbols to denote weights and measures. The use of different measure and weight systems depending on the purpose was an almost universal phenomenon in Europe between the decline of the Roman Empire and metrication. This was connected with international commerce, especially with the need to use the standards of the target market and to compensate for a common weighing practice that caused a difference between actual and nominal weight. In the 19th century, most European countries or cities still had at least a "commercial" or "civil" system (such as the English avoirdupois system) for general trading, and a second system (such as the troy system) for precious metals such as gold and silver. The system for precious metals was usually divided in a different way from the commercial system, often using special units such as the carat. More significantly, it was often based on different weight standards. The apothecaries' system often used the same ounces as the precious metals system, although even then the number of ounces in a pound could be different. The apothecaries' pound was divided into its own special units, which were inherited (via influential treatises of Greek physicians such as Dioscorides and Galen, 1st and 2nd century) from the general-purpose weight system of the Romans. Where the apothecaries' weights and the normal commercial weights were different, it was not always clear which of the two systems was used in trade between merchants and apothecaries, or by which system apothecaries weighed medicine when they actually sold it. In old merchants' handbooks, the former system is sometimes referred to as the pharmaceutical system and distinguished from the apothecaries' system. English-speaking countries Weight From pound down to the scruple, the units of the traditional English apothecaries' system were a subset of the units of the Roman weight system, although the troy pound and its subdivisions were slightly heavier than the Roman pound and its subdivisions. Similar systems were used all over Europe, but with considerable local variation described below under Variants. The traditional English apothecaries' system of weights is shown in the first table in this section: the pound, ounce and grain were identical to the troy pound, ounce and grain (also used to measure weights of precious metals; distinct from the avoirdupois pounds and ounces that were used for measurements of other goods.) The confusing variety of definitions and conversions for pounds and ounces is covered elsewhere in a table of pound definitions. 
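Because the English apothecaries' chain of units is purely multiplicative, it can be summarised in a few lines of code. The following sketch is an illustration added here, not part of the original text; it encodes only the subdivisions stated above (12 ounces to the pound, 8 drachms to the ounce, 3 scruples to the drachm, and 20 grains to the scruple).

```python
# The traditional English apothecaries' weight chain, as described above.
GRAINS_PER_SCRUPLE = 20
SCRUPLES_PER_DRACHM = 3
DRACHMS_PER_OUNCE = 8
OUNCES_PER_POUND = 12

GRAINS_PER_DRACHM = SCRUPLES_PER_DRACHM * GRAINS_PER_SCRUPLE   # 60
GRAINS_PER_OUNCE = DRACHMS_PER_OUNCE * GRAINS_PER_DRACHM       # 480
GRAINS_PER_POUND = OUNCES_PER_POUND * GRAINS_PER_OUNCE         # 5760

def to_grains(pounds=0, ounces=0, drachms=0, scruples=0, grains=0):
    """Convert a mixed apothecaries' quantity to grains."""
    return (pounds * GRAINS_PER_POUND + ounces * GRAINS_PER_OUNCE
            + drachms * GRAINS_PER_DRACHM + scruples * GRAINS_PER_SCRUPLE
            + grains)

# A drachm of 3 scruples is 60 grains; the apothecaries' pound of
# 12 ounces is the 5,760-grain troy pound.
assert to_grains(drachms=1) == 60
assert to_grains(pounds=1) == 5760
```

Since the grain itself is common to the troy and avoirdupois systems, conversions between the English weight systems all reduce to grain counts of this kind.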
To unify all weight systems used by apothecaries, the Irish pharmacopœia of 1850 introduced a new variant of the apothecaries' system which subdivided a new apothecaries' pound of 12 avoirdupois ounces instead of the troy pound. To allow effective use of the new system, new weight pieces were produced. Since an avoirdupois ounce corresponds to 28.35 g, the proposed system was very similar to that in use in Portugal and Spain, and in some locations in Italy. But it would have doubled the value of the avoirdupois drachm (an existing unit, by then only used for weighing silk), and it therefore conflicted with other non-standard variants that were based on that nearly obsolete unit. The Irish proposal was not widely adopted, but British legislation, in the form of the Medical Act 1858 (21 & 22 Vict. c. 90), was more radical: it prescribed the use of the avoirdupois system for the United Kingdom (then including Ireland), with none of the traditional subdivisions. This innovation was first used in the united British pharmacopœia of 1864, which recommended the adoption in pharmacy of the imperial avoirdupois pound of 7000 grains and ounce of 437.5 grains, and the discontinuation of the drachm and scruple. In practice, however, the old apothecaries' system based on divisions of the troy pound, or of the troy ounce of 480 grains, was still widely used until it was abolished by the Weights and Measures Act 1976 (c. 77), since when it may only be used to measure precious metals and stones. (The troy pound had already been declared illegal for most other uses by the Weights and Measures Act 1878 (41 & 42 Vict. c. 49), but this act allowed the sale of drugs by apothecaries' weight as an exception.) Apothecaries' units of any kind became obsolete in the UK with the mandatory use of metric drug measurements from January 1, 1971. In the United States, the apothecaries' system remained official until it was abolished in 1971 in favour of the metric system. Volume English-speaking countries also used a system of units of fluid measure, or in modern terminology volume units, based on the apothecaries' system. Originally, the terms and symbols used to describe the volume measurements of liquids were the same as, or similar to, those used to describe weight measurements of solids (for example, the pound by weight and the fluid pint were both referred to by the Latin libra and its symbol). A volume of liquid approximately equal to that of an apothecaries' ounce of water was called a fluid ounce and was divided into fluid drachms and sometimes also fluid scruples. The smallest unit of liquid volume was traditionally the drop (Latin gutta). Although nominally defined as a sixtieth of the fluid drachm, in practice the drop was not a standardized unit of volume but was measured by releasing literal drops of liquid from the lip of a bottle or vial. The 1809 London Pharmacopoeia introduced modified terminology for apothecaries' measurements of liquid volume. New terms and symbols introduced by the London College of Physicians (octarius, minim) were subsequently adopted by the Pharmacopoeias of the United States and Dublin. The pint was given the Latin name octarius and represented by the symbol O, thus distinguishing its Latin name and symbol from those of the pound. The terms for fluid ounce and fluid drachm were distinguished from the weight measures by the prefix fluid- (Latin fluiduncia, fluidrachma; English fluidounce and fluidrachm) or, in abbreviations, by the prefixed letter f (f℥, fʒ). 
(However, even in the 20th century the f of the symbols f℥ and fʒ was sometimes omitted by physicians in the United States, with fluid ounces and fluid drams represented instead simply by the symbols ℥ and ʒ.) The smallest unit of volume was standardized as precisely a sixtieth of a fluidrachm (1/61,440 of a wine gallon) and given the name minim. (Subsequently, the old term drop was sometimes viewed as an approximate equivalent of the new minim, based on their nominally equivalent definitions. In fact, the units were qualitatively different, because the traditional drop, by the nature of how it was measured, did not actually correspond to a single definite volume.) Along with the new name of minim, the London Pharmacopoeia of 1809 prescribed a new method of measuring the smallest unit of volume using a graduated glass tube. Before the introduction of imperial units in the U.K., all apothecaries' fluid measures were based on the unit of the wine gallon, which survived in the US under the name liquid gallon or wet gallon. The wine gallon was abolished in Britain in 1826, and this system was replaced by a new one based on the newly introduced imperial gallon. Since the imperial gallon is 20% more than the liquid gallon, the same is true for the imperial pint in relation to the liquid pint. Accordingly, in the new imperial system, the number of fluid ounces per pint was adjusted from 16 to 20 so that the fluid ounce was not changed too much by the reform. Even so, the modern U.K. fluid ounce is 4% less than the US fluid ounce, and the same is true for the smaller units. For some years both systems were used concurrently in the U.K. As a result, the imperial and U.S. systems differ in the size of the basic unit (the gallon or the pint, one gallon being equal to eight pints) and in the number of fluid ounces per pint. Apothecaries' systems for volumes were internationally much less common than those for weights. There were also commonly used but unofficial divisions of the apothecaries' system, consisting of: Glass-tumbler 8 fl. oz. Breakfast-cup about 8 fl. oz. Tea-cup 5 fl. oz. Wine-glass 2 fl. oz. Table-spoon ½ fl. oz. Dessert-spoon 2 fl. dr. (same as ¼ fl. oz.) Tea-spoon 1 fl. dr. (same as ⅛ fl. oz.) In the United States, similar measures once in use were: Tumblerful — ƒ℥ viii (8 fl oz/ 1 cup/ 240 mL) Teacupful — ƒ℥ iv (4 fl oz/ 1 gill/ 120 mL) Wineglassful — ƒ℥ ij (2 fl oz/ 60 mL) Tablespoonful — ƒ℥ ss (½ fl oz/ 3 tsp/ 1 Tbsp; 15 mL as once codified in the ninth edition of the United States Pharmacopeia (U.S.P. IX)) Dessertspoonful — ƒʒ ij (≡ ƒ℈ viij or 2⅔ ƒʒ/ 2 tsp; 10 mL as once codified in the U.S.P. IX) Teaspoonful — ƒʒ j (≡ ƒ℈ iv or 1⅓ ƒʒ/ 1 tsp; 5 mL as once codified in the U.S.P. IX) The cited book states, "In almost all cases the modern teacups, tablespoons, dessertspoons, and teaspoons, after careful test by the author, were found to average 25 percent greater capacity than the theoretical quantities given above, and thus the use of accurately graduated medicine glasses, which may be had now at a trifling cost, should be insisted upon." Apothecaries' measures eventually fell out of use in the U.K. and were officially abolished in 1971. In the U.S., they are still occasionally used, for example with prescribed medicine being sold in four-ounce (℥ iv) bottles. 
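The divergence between the two fluid systems described above can be checked numerically. The sketch below uses the modern metric definitions of the imperial gallon (4.54609 L) and the US gallon (231 cubic inches); these metric figures are standard reference values assumed here for illustration and are not given in the text itself.

```python
# Check of the imperial/US fluid-ounce divergence described above.
# The gallon definitions are standard modern values, supplied here as
# assumptions; the article itself gives only the percentages.
IMPERIAL_GALLON_ML = 4546.09            # 4.54609 L by definition
US_GALLON_ML = 231 * 2.54 ** 3          # 231 cubic inches (old wine gallon)

imperial_fl_oz = IMPERIAL_GALLON_ML / (8 * 20)  # 8 pints of 20 fl oz
us_fl_oz = US_GALLON_ML / (8 * 16)              # 8 pints of 16 fl oz

print(f"imperial fl oz: {imperial_fl_oz:.2f} mL")       # ~28.41 mL
print(f"US fl oz:       {us_fl_oz:.2f} mL")             # ~29.57 mL
print(f"difference:     {1 - imperial_fl_oz / us_fl_oz:.1%}")  # ~3.9%

# The minim: 60 minims per fluidrachm, 8 fluidrachms per fluid ounce,
# 16 fluid ounces per wine pint, 8 pints per gallon.
assert 60 * 8 * 16 * 8 == 61_440   # the 1/61,440-gallon figure above
```

The computed difference of roughly 3.9% matches the "4% less" figure quoted above, and the unit chain at the end reproduces the 1/61,440-gallon definition of the minim.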
Medical prescriptions Until around 1900, medical prescriptions and most European pharmacopoeias were written in Latin. The use of Latin ensured that prescriptions could be read by an international audience. There was a technical reason why 3 ʒ was written as ʒiij, and ½ ʒ as ʒss: writing iii as iij prevented tampering with, or misinterpretation of, a number after it was written, and the letters "ss" are an abbreviation of the Latin semis, meaning "half", which was sometimes written with a ß. In apothecaries' Latin, numbers were generally written in Roman numerals immediately following the symbol. Since only the units of the apothecaries' system were used in this way, this made it clear that the civil weight system was not meant. Variants Diversity of local standards The basic form of the apothecaries' system is essentially a subset of the Roman weight system. An apothecaries' pound normally consisted of 12 ounces. (In France this was changed to 16 ounces, and in Spain the customary unit was the marco, a mark of 8 ounces.) In the south of Europe and in France, the scruple was generally divided into 24 grains, so that one ounce consisted of 576 grains. Nevertheless, the subdivision of an ounce was somewhat more uniform than that of a pound, and a common feature of all variants is that 12 ounces are roughly 100 drachms (96–128 drachms) and that a grain is roughly the weight of a physical grain. It is most convenient to compare the various local weight standards by the metric weights of their ounces. The actual mass of an ounce varied by ±17% (5 g) around the typical value of 30 g. Only approximate values can be given for the most important standards; even the same nominal standard could vary slightly between one city and its neighbour. The range from 25 g to 31 g is filled with numerous variants, especially the Italian range up to 28 g, but there is a relatively large gap between the troy ounces of 31 g and the Habsburg ounce of 35 g, the latter being the product of an 18th-century weight reform. Even in Turkey a system of weights similar to the European apothecaries' system was used for the same purpose: for medical purposes the tcheky (approx. 320 g) was divided into 100 drachms, and the drachm into 16 carats or 64 grains. This is close to the classical Greek weight system, in which a mina (corresponding roughly to a Roman libra) was also divided into 100 drachms. With the beginning of metrication, some countries standardized their apothecaries' pound to an easily remembered multiple of the French gramme. In the Netherlands, for example, the Dutch troy pound of 369.1 g was standardized in 1820 to 375.000 g, to match a similar reform in France. The British troy pound retained its value of 373.202 g until 2000, when it was legally defined in metric terms as 373.2417216 g. (By this time its use was mainly confined to trading in precious metals.) Basic variants In the Romance-speaking part of Europe the scruple was divided into 24 grains, in the rest of Europe into 20 grains. Notable exceptions were Venice and Sicily, where the scruple was also divided into 20 grains. The Sicilian apothecaries' ounce was divided into 10 drachms; since the scruple was divided into only 20 grains, as in the northern countries, an ounce consisted of 600 grains. This was not too different from the situation in most of the other Mediterranean countries, where an ounce consisted of 576 grains. In France, at some stage the apothecaries' pound of 12 ounces was replaced by the larger civil pound of 16 ounces. The subdivisions of the apothecaries' ounce were the same as in the other Romance countries, however, and were different from the subdivisions of the otherwise identical civil ounce; the grain counts implied by these variants are multiplied out in the sketch below. 
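The sketch below, an added illustration rather than part of the original text, simply multiplies out the subdivision figures just given to obtain the grain count of an ounce in each variant.

```python
# Grains per ounce implied by the variant subdivisions described above.
variants = {
    # name: (drachms per ounce, scruples per drachm, grains per scruple)
    "northern Europe": (8, 3, 20),            # -> 480 grains per ounce
    "Romance (France, southern Europe)": (8, 3, 24),   # -> 576
    "Sicily": (10, 3, 20),                    # -> 600
}
for name, (drachms, scruples, grains) in variants.items():
    print(f"{name}: {drachms * scruples * grains} grains per ounce")
```

Sicily's 600-grain ounce thus follows directly from dividing the ounce into 10 drachms while retaining the northern 20-grain scruple.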
Origins Roman weight system The basic apothecaries' system consists of the units pound, ounce, and scruple from the classical Roman weight system, together with the originally Greek drachm and a new subdivision of the scruple into either 20 ("barley") or 24 ("wheat") grains. In some countries other units of the original system remained in use, as for example in Spain. In some cases the apothecaries' and civil weight systems had the same ounces ("an ounce is an ounce"), but the civil pound consisted of 16 ounces. Siliqua is Latin for the seed of the carob tree. Many attempts were made to reconstruct the exact mass of the Roman pound. One method for doing this consists in weighing old coins; another uses the fact that Roman weight units were derived from Roman units of length, similar to the way the kilogramme was originally derived from the metre, i.e. by weighing a known volume of water. Nowadays the Roman pound is often given as 327.45 g, but one should keep in mind that (apart from the other uncertainties that come with such a reconstruction) the Roman weight standard is unlikely to have remained constant to such precision over the centuries, and that the provinces often had somewhat inexact copies of the standard. The weight and subdivision of the pound in the Holy Roman Empire were reformed by Charlemagne, but in the Byzantine Empire it remained essentially the same. Since Byzantine coins circulated as far as Scandinavia, the old Roman standard continued to be influential through the Middle Ages. Weight system of Salerno The history of mediaeval medicine started roughly around the year 1000 with the school of medicine in Salerno, which combined elements of Latin, Greek, Arabic, and Jewish medicine. Galen and Dioscorides (who had used the Graeco-Roman weight system) were among the most important authorities, but so were Arabic physicians, whose works were systematically translated into Latin. According to a famous 13th-century text that exists in numerous variations and is often ascribed to Dino di Garbo, the system of weights used in Salerno was different from the systems used in Padua and Bologna. It was also different from the Roman weight system used by Galen and Dioscorides and from all modern apothecaries' systems: the ounce was divided into 9 drachms, rather than 8 drachms. Centuries later, the region around Salerno was the only exception to the rule that (except for skipping units that had regionally fallen out of use) the apothecaries' ounce was subdivided down to the scruple in exactly the same way as in the Roman system: it divided the ounce into 10 drachms. Romance countries While there will naturally have been some changes throughout the centuries, this section only tries to give a general overview of the situation that was recorded in detail in numerous 19th-century merchants' handbooks. Iberian Peninsula On the Iberian Peninsula, apothecaries' weights in the 19th century were relatively uniform, with 24 grains per scruple (576 grains per ounce), the standard in Romance countries. The weight of an apothecaries' pound was 345.1 g in Spain and 344.2 g in Portugal. As in Italy, some of the additional subdivisions of the Roman system were still in use there. It was standard to use the marco, defined as 8 ounces, instead of the pound. 
France In 18th-century France there was a national weight standard, the marc of 8 ounces. The civil pound of 16 ounces was equivalent to 2 marks, and it was also used as the apothecaries' pound. At 30.6 g, the ounce was considerably heavier than other apothecaries' ounces in Romance countries, but otherwise the French system was not remarkable. Its history and connections to the English and Flemish standards are discussed below under Weight standards named after Troyes. Italy Due in part to the political conditions in what would become a united Kingdom of Italy only in 1861, the variation of apothecaries' systems and standard weights in this region was enormous. (For background information, see History of Italy during foreign domination and the unification.) The pound (libbra) generally consisted of the standard twelve ounces, however. The civil weight systems were generally very similar to the apothecaries' system, and since the civil pound (or one of the civil pounds, where different systems were in use for light and heavy goods) generally had a suitable weight for an apothecaries' pound, it was often used for this purpose. Extreme cases were Rome and Genoa, where the same system was used for everything, including medicine. On the other hand, there were relatively large differences even between two cities in the same state: Bologna (in the Papal States) had an apothecaries' pound that was lighter than the local civil pound and 4% lighter than the pound used in Rome. The weight of an apothecaries' pound generally ranged between 300 g and 320 g, slightly less than that of a pound in the Roman Empire. An important exception to this rule is that the Kingdom of Lombardy–Venetia was under the rule of the Habsburg monarchy in 1814–1859 and therefore had the extremely large Habsburg apothecaries' pound of 420 g. (See below under Habsburg standard.) In the large city of Milan, for example, the apothecaries' system based on a pound of 326.8 g was officially replaced by the metric system as early as 1803, because Milan was part of the Napoleonic Italian Republic. Since the successor of this little state, the Napoleonic Kingdom of Italy, fell to Habsburg in 1814 (at a time when even in France the mesures usuelles had been introduced because the metric system was not accepted by the population), an apothecaries' system was officially introduced again, but now based on the Habsburg apothecaries' pound, which weighed almost 30% more. The apothecaries' pound in Venice had exactly the same subdivisions as those in the non-Romance countries, but its total weight of 301 g was at the bottom of the range. During the Habsburg reign of 1814–1859 an exception was made for Venice; as a result, the extreme weights of 301 g and 420 g coexisted within one state and in immediate proximity. The Venice standard was also used elsewhere, for example in Udine. In Dubrovnik (called "Ragusa" until 1909) its use partially continued for a long time in spite of the official Habsburg weight reform. The measure and weight systems for the large mainland part of the Kingdom of the Two Sicilies were unified in 1840. The area comprised the southern half of the Italian Peninsula and included Naples and Salerno. The subdivision of apothecaries' weight in the unified system was essentially the same as that for gold, silver, coins, and silk. It was the most eccentric variant in that the ounce was divided into 10 drachms, rather than the usual 8. The scruple, as in Venice but unlike in the rest of the Romance region, was divided into 20 grains. 
The existence of a unit equivalent to one and a half drachms is interesting because 6 of these units made 9 drachms; in the original Salerno weight system an ounce was divided into 9 drachms, and so such a unit would have been one sixth of an ounce. Troyes, Nuremberg, and Habsburg Weight standards named after Troyes As early as 1147, a unit of weight named after Troyes in Champagne (in the Middle Ages an important trading town) was in use. The national French standard until 1799 was based on a famous artifact called the pile de Charlemagne, which probably dates back to the second half of the 15th century. It is an elaborate set of nesting weight pieces with a total metric weight of 12.238 kg, now shown in the Musée des Arts et Métiers in Paris. The total nominal value of the set is 50 marcs, a mark being 8 ounces; the ounce therefore had a metric equivalent of 30.59 g. This poids de marc was used as a national French standard for trading, for gold, silver, and jewels, and for weighing medicine. It was also used in international communications between scientists. In the time before the French Revolution, the civil pound also played the role of the apothecaries' pound in the French apothecaries' system, which otherwise remained a standard system of the Romance (24 grains per scruple) type. In Bruges, Amsterdam, Antwerp and other Flemish cities, a "troy" unit was also in use as a standard for valuable materials and medicine. As in France, the way in which the Flemish troy ounce was subdivided depended on what was weighed. Unlike the French, the Flemish apothecaries divided the scruple into 20 grains. The Flemish troy pound became the standard for the gold and apothecaries' systems in the United Kingdom of the Netherlands; it was also used in this way in Lübeck. The Dutch troy mark consisted of 8 Flemish troy ounces, with each ounce of 20 engels and each engel divided into 32 assen. The Amsterdam pound of two marks, used in commerce, weighed 10,280 assen, while the Amsterdam troy pound weighed 10,240 assen, i.e. exactly two troy marks. In 1414, six years before the Treaty of Troyes, a statute of Henry V of England gave directions to the goldsmiths in terms of the troy pound. (In 1304 it had apparently not yet been introduced, since it did not appear in the statute of weights and measures.) There is evidence from the 15th century that the troy pound was used for weighing metals and spices. After the abolition of the Tower pound in 1527 by Henry VIII of England, the troy pound was the official basis for English coin weights. The British apothecaries' system was based on the troy pound until metrication, and it survived in the United States and Australia well into the 20th century. Since the modern (English, American and imperial) troy ounces are roughly 1.5% heavier than the late Paris ounce, the exact historical relations between the original Troyes weights and the French, Flemish and English troy pounds are unclear. It is known, however, that the numerical relation between the English and French troy ounces was exactly 64:63 in the 14th century. Nuremberg standard In the Middle Ages the Imperial Free City of Nuremberg, an important trading place in the south of Germany, produced large amounts of nesting weight pieces to various European standards. In the 1540s, the first pharmacopoeia in the modern sense was also printed there. In 1555, a weight standard for the apothecaries' pound of 12 ounces was set in Nuremberg. 
Under the name Nuremberg pharmaceutical weight (Nürnberger Medizinalgewicht) it became the standard for most of the north-east of Europe, though some cities kept local copies of the standard. As of 1800, all German states and cities except Lübeck (which used the Dutch troy standard) followed the Nuremberg standard. It was also the standard for Denmark, Norway, the Russian Empire, and most cantons of Switzerland. Poland and Sweden had their own variants of the standard, which differed from each other by 0.6%. In 1811, Bavaria legally defined the apothecaries' pound as 360.00 g (an ounce of 30.00 g). In 1815, Nuremberg lost its status as a free city and became part of Bavaria. From then on, the Nuremberg apothecaries' pound was no longer the official apothecaries' pound even in Nuremberg, although the difference was only 0.6%. In 1836 the Greek apothecaries' pound was officially defined by this standard, four years after Otto, the son of the king of Bavaria, became the first king of Greece. But only a few German states followed the example of Bavaria, and with a long delay. The apothecaries' pound of 360 g was also adopted in Lübeck, where it was official as of 1861. Austria and the states of the Habsburg monarchy had officially had a different standard since 1761, and Prussia, followed by its neighbours Anhalt, Lippe and Mecklenburg, diverged in the opposite direction with a reform in 1816; but in both cases apothecaries continued to use the Nuremberg standard unofficially, long after it had become illegal. In Russia the apothecaries' system survived well into the 20th century; the Soviet Union officially abolished it only in January 1927. Habsburg standard Empress Maria Theresia of Austria reformed the measures and weights of the Habsburg monarchy in 1761. The weight of an apothecaries' pound of 12 ounces was increased to a value that was later (after the kilogramme was defined) found to be 420.009 g. It was defined as three quarters of the unusually heavy Habsburg civil pound (itself defined in terms of the civil pound of Cologne) and corresponded to a record ounce weight of 35 g. Before the reform, the Nuremberg standard had been in effect in the north of the empire, and in Italy the local standards had been even lighter. It is not surprising that an increase of 17% or more met with some inertia. The 1770 edition of the pharmacopoeia still used the Nuremberg standard, indicating that even in the Austrian capital Vienna it took some time for the reform to become effective. By 1774 the pharmacopoeia used the new standard, and in 1783 all old apothecaries' weight pieces still in use were ordered to be destroyed. Venice was not part of these reforms and kept its standard of approximately 25 g per ounce. When Austria started producing scales and weight pieces to the new standard with an excellent quality/price ratio, these were occasionally used by German apothecaries as well. Metrication Early metrication At the time of the Industrial Revolution, the fact that each state had its own system of weights and measures became increasingly problematic. Serious work on a "scientific" system was started in France under Louis XVI and completed in 1799 (after the French Revolution) with its implementation. The French population, however, was initially unhappy with the new system, and in 1812 Napoleon Bonaparte reintroduced some of the old measurements, in a modified form defined with respect to the metric system. This compromise was finally abolished in 1837 and became illegal in 1840. 
Due to the large expansion of the First French Empire under Napoleon I, French metrication also affected what would become (parts of) France's neighbouring countries after the Congress of Vienna. The Netherlands were partially metricated while they were French, in the years 1810–1813. With full metrication, effective January 1821, the Netherlands reformed their apothecaries' weight: the new apothecaries' pound was 375.00 g. Apart from rounding issues concerning the subdivisions, this corresponded exactly to the French reform. (The reform was not followed in the north German city of Lübeck, which continued to use the Dutch troy standard.) In Belgium, apothecaries' weight was metricated effective 1856. From 1803 to 1815, all German regions west of the River Rhine were French, organised in the départements of Roer, Sarre, Rhin-et-Moselle, and Mont-Tonnerre. As a result of the Congress of Vienna these became part of various German states. A large part of the Palatinate fell to Bavaria, but since it already had the metric system it was excepted from the Bavarian reform of weights and measures. Prussia's path to metrication In Prussia, a reform in 1816 defined the Prussian civil pound in terms of the Prussian foot and distilled water. It also redefined the apothecaries' pound as 12 ounces, i.e. three quarters of the civil pound: 350.78 g. This reform was not popular with apothecaries, because it broke the uniformity of the apothecaries' pound in Germany at a time when a German national state was beginning to form, and it seems that many apothecaries did not follow this reduction of 2%. Another reform in 1856 increased the civil pound from 467.711 g to 500.000 g (the German civil pound defined by the Zollverein), as a first step towards metrication. As a consequence the official apothecaries' pound was now 375.000 g, i.e. it was increased by 7%, and it was now very close to the troy standards. §4 of the law that introduced this reform said: "Further, a pharmaceutical weight deviating from the civil weight does not take place." But this paragraph was suspended until further notice. The abolition of the apothecaries' system meant that doctors' prescriptions had to be written in terms of the current civil weight: grammes and kilogrammes. This was considered unfeasible by many, and the state received numerous protests and commissioned expert opinions. Nevertheless, by 1868, §4 of the earlier reform was finally put into force. Metrication in countries using the troy and avoirdupois systems Britain was initially involved in the development of the metric system, and the US was among the 17 initial signatories of the Metre Convention in 1875. Yet in spite of enthusiastic support for the new system by intellectuals such as Charles Dickens, these two countries were particularly slow to implement it. In the US, the metric system replaced the apothecaries' system in the United States Pharmacopeia of 1971. In the UK, metric drug measurements were required for dealings in drugs from January 1, 1971. See also List of abbreviations used in medical prescriptions List of Latin abbreviations References Citations Sources Further reading External links Apothecaries' symbols Online Apothecaries' Converter Pharmacopoeia pro Republica Augustana, 1613 Pharmacopoeia Collegii Regalis Londini, 1677 Pharmacopoea Collegii Londinensis Medicor, 1746 Obsolete units of measurement Units of mass History of pharmacy
Apothecaries' system
Physics,Mathematics
6,864
23,685,620
https://en.wikipedia.org/wiki/Flora%20of%20Morocco
Morocco provides a refuge for a rich and diverse flora with about 4,200 taxa, of which 22% (879 taxa) are endemic. Morocco comprises eight phytogeographic zones: the Mediterranean zone (central 0–500 m, middle 500–1,000 m and upper 1,100–1,500 m), the Cedar zone (1,000–2,000 m), the sub-Alpine zone (2,000–2,500 m), the Alpine zone (above 2,500 m), the semi-desert scrub zone, the reg, the sandy desert zone and the oases. Mediterranean or coastal zone Maquis and garrigue Mediterranean dry woodlands and steppe, Mediterranean woodlands and forests, lower northern slopes of the Rif and Tell Atlas. The climax vegetation of the Mediterranean coast is a well-developed maquis, commonly associated with Clematis, Smilax, Lonicera and Asparagus. Except in inaccessible or protected places, the vegetation has been heavily grazed by domestic animals, and this degraded maquis, called garrigue, is widespread. Poterium spinosum and various Salvia and Cistus are the dominant plants of the garrigue. A prominent feature of the coastal vegetation is the presence of a large exotic flora: Casuarina, Eucalyptus, Citrus, loquat and Opuntia ficus-indica are examples. Several species of steppe Acacia are common elements. The cultivated area, which is extensive, is wholly artificial, and imported plants dominate the landscape. The meadows, orchards and wetter places in the maquis support such plants as fennel. Characteristic plants are Pinus halepensis, Erica arborea, Arbutus unedo, Pistacia lentiscus, Myrtus communis, Clematis cirrhosa, Asparagus acutifolius, Phlomis viscosa, Scilla autumnalis and Scilla peruviana, Narcissus tazetta, Iris palaestina, Colchicum stevenii, Arisarum vulgare, Quercus coccifera, Quercus ilex, Ceratonia siliqua, Pistacia atlantica, Pistacia terebinthus, Crataegus azarolus, Amygdalus communis, Rhamnus alaternus, Cistus spp. (especially Cistus monspeliensis, Cistus laurifolius and Cistus salviifolius), Juniperus phoenicea, Phlomis spp. (Phlomis lychnitis), Helichrysum italicum, Salvia spp., Satureia spp., Poterium spp., Arabis spp., Reseda spp., Aristolochia pallida, A. boetica, A. longa paucinervis, A. fontanesi, A. rotunda, A. pistolochia, Lavandula stoechas, Jasminum fruticans, and Brassica spp. Central zone Mediterranean Acacia-Argania dry woodlands lie to the south of the Mediterranean zone. Further south, in the Atlas Mountains, Mediterranean conifer and mixed forests dominate. The Cedar forests A mere remnant of their former glory, the cedar forests are still impressive, covering large areas of the Middle Atlas. The dominant plant, Cedrus libani var. atlantica, is peculiar to this zone, along with Juniperus foetidissima and a multitude of low plants: Iberis odorata, I. ciliata, I. taurica, Centaurea spp., Prunus amygdalus, P. persica, P. insititia, P. longipes, Pyrus communis, Malus domestica, Crataegus oxyacantha, Sisymbrium spp., Lunaria biennis, Capparis spinosa, Raphanus raphanistrum, and Isatis tinctoria. There is a total lack of the oak-dominated maquis of the Mediterranean zone, and the lower limit of the Cedar zone is marked by the absence of Berberis cretica. 
Middle Atlas lakes (protected areas) include Lake Aguelmame Aziza, Lake Ouiouane, Lake Aguelmame Sidi Ali, Lake Daït Iffer, and Lake Bin El Ouidanne near Ouaouizerth, together with the Parc national d'Ifrane and the Parc national de Tazekka. Sub-alpine zone The disappearance of Cedrus atlantica and the presence of Onobrychis cornuta signal the beginning of the sub-alpine zone, characterised by the absence of trees, most notably the fir and the cedar; this is a montane habitat of some vigour. The dominant vegetation is pads of thorny Astragalus, Onobrychis (with O. cornuta as the most typical) and Acantholimon, interspersed with stands of Berberis cretica. Juniperus excelsa survives here and there. The sub-alpine zone lies partly in the Middle Atlas and partly in the High Atlas. Mediterranean High Atlas juniper steppe: cedar, juniper, pine, and oak forests cover approximately one-third of this eco-region. At high altitudes, junipers dominate the landscape; the key species is Juniperus thurifera. Even higher, the forests eventually give way to alpine meadows, pseudo-steppe vegetation, and finally scree slopes where purple cushion plants bloom. River valleys wind through the landscape, their rich, moist soil supporting willows, poplars, oaks, hawthorns, and a carpet of oleander. Alpine zone Alpine conditions are encountered above 2,500 m, and the special features of high mountains are enhanced by the dryness of the climate. Typically the zone begins with the disappearance of Berberis, Marrubium and Phlomis and the appearance of Vicia canescens in enormous quantity. The most important botanical characteristic is the presence of a hundred or so plants found nowhere else in Morocco, many of them endemic. The sub-alpine and alpine zones are both heavily overgrazed in many areas, and this has left a mark on the vegetation. The success of plants such as Vicia canescens and Erodium trichomanifolium is undoubtedly due to the fact that they are unpalatable to goats. The desert zones (semi-desert scrub, reg and sandy desert) The Little Atlas and Djbel (montane) Sahara. The Sahara desert is essentially a desert of herbs and small shrubs, with larger shrubs and trees where moisture levels are higher. The dwarf-shrub community in the north comprises shrubs of less than 1 metre in height (usually about 50 cm) as dominants. The bushes are often widely spaced, with a considerable amount of bare stony ground between the clumps, which gives the vegetation a very parched appearance in the summers. Typical plants are Ziziphus lotus, Ziziphus spina-christi, Tamarix spp., Acacia spp., Moringa aptera, Salvadora persica, Thymus spp., Artemisia herba-alba, Noaea mucronata, Helianthemum spp., Retama raetam, Periploca aphylla, Suaeda spp., Salsola spp., Atriplex spp., Ephedra alata, Haloxylon articulatum, Pistacia atlantica and Achillea santolina. In steppe areas where the scrub vegetation is hardly developed, desert grasses of a multiplicity of species are the climax vegetation. Ephemerals are common in the north, halophytes in the sandy areas. Succulent plants are uncommon. The sandy desert has virtually no vegetation. With rain, vegetation increases in wadis (oueds: valleys, gullies, or streambeds that remain dry except during the rainy season), in depressions, and wherever runoff water augments rainfall. 
The soils of the Sahara are formed of rock debris and desert detritus and are very weakly developed. The characteristic species of these true desert areas, which decrease as desert scrub becomes reg and then sandy desert, are: Faidherbia albida, A. raddiana, A. seyal, A. tortilis, Achillea santolina, Alyssum macrocalyx, Anabasis aretoides, A. articulata, Androcymbium punctatum, Aristida coerulescens, Aristida pungens, Artemisia herba-alba, A. monosperma, Astragalus tribuloides, Atriplex halimus, Balanites aegyptiaca, Calligonum comosum, Calotropis procera, Cenchrus ciliaris, Citrullus colocynthus, Danthonia forskalii, Ephedra alata, Euphorbia guyoniana, Deverra scoparia, D. chloranthus, Linaria aegyptica, Anarrhinum fruticosum, Haloxylon guyonianum, Maerua crassifolia, Nerium oleander, Olea europaea, Panicum turgidum, Phoenix dactylifera, Populus euphratica, Prosopis stephaniana, Rhus oxyacanthae, Roetboellia hirsuta, Salsola foetida, S. inermis, Salvadora persica, Stipa tortilis, Suaeda fruticosa, S. vermiculata, Tamarix articulata, Zilla spinosa, Zygophyllum coccineum, Z. decumbens, Z. dumosum and Capparis spinosa. In the depressions and inter-dunal areas grow bushes of Retama raetam, Ziziphus lotus, Genista saharae, Calligonum comosum, Acacia raddiana, Acacia seyal, Pistacia atlantica, Tamarix aphylla, Calligonum azel and Calligonum arich. In the depressions of the dayas and uidians there are endemic species such as Panicum turgidum, Pituranthos sp., Neurada procumbens, Anastatica hierochuntica and Astragalus gumbo. The hammadas often have endemics such as Pituranthos chloranthus, Helianthemum lippii, Gymnocarpos decander, Helianthemum kahiricum, Anabasis aretioides, Haloxylon scoparium and Arthrophytum schmittianum. Rivers and oases The larger rivers serve to spread the vegetation of the Mediterranean zone (q.v.) further south and allow the introduction of the plants of Africa to the north. Both rivers and oases support many anthropogenic species, resembling in extreme cases tropical botanic gardens. The zones were established by the International Phytogeographic Excursion of 1936. See also List of ecoregions in Morocco Geography of Morocco References Rankou, H., Culham, A., Jury, S. L. & Christenhusz, M. J. M. 2013. The endemic flora of Morocco. Phytotaxa 78: 1–69. Rübel, E. & Lüdi, W. (eds) (1936) Ergebnisse der Internationalen pflanzengeographischen Exkursion durch Marokko und Westalgerien 1936. Veröffentlichungen des Geobotanischen Institutes Rübel in Zürich; 14. Pils, G. 2022. Illustrated Flora of Morocco, 608 pp. Eigenverlag, Kostinbrod/Bulgarien, ISBN 978-3-200-08790-3. External links Links in English —Sahara Nature.com: Saharan flora — plant images —Western Desert Flora images — in Egypt, but many shared species. —Biodiversity of South-Western Morocco — photographs of flora and plant communities of Southwestern Morocco. 
—Herbarium.uk: Mediterranean Plants — identification & distribution —Unep.org: Northern Africa Links in French Parcs nationaux du Maroc Biodiversité du Maroc - Tarrier & Delacre Biodiversité du Parc Naturel d'Ifrane - Tarrier & Delacre Les Papillons diurnes du Parc Naturel d’Ifrane (Maroc) par Michel Tarrier & Jean Delacre Les oiseaux des cimes Les initiateurs de la Maison de l'Écologie et des Écosystèmes du Maroc à Ifrane Un éco-projet pour le Parc Naturel d'Ifrane par Michel Tarrier & Jean Delacre - Historique Alerte-Nature-Maroc Portail M.E.E.M coinitiateurs J.Delacre et M .Tarrier Problèmatique du Val d'Ifrane Histoire d'Ifrane Faune du Maroc G.E.R.E.S Terre Maroc Futura science Protection du singe Magot Entomologie Portail des berbères chleuhs imazighen du Maroc Ifrane Moyen Atlas *Surpâturage Les cédraies du Maroc en danger Groupe d'etudes et d'observation pour la sauvegarde des animaux sauvages et des écosystèmes Morocco Environment of Morocco
Flora of Morocco
Biology
2,853
33,813,990
https://en.wikipedia.org/wiki/Sylvester%20domain
In mathematics, a Sylvester domain, named after James Joseph Sylvester, is a ring in which Sylvester's law of nullity holds. This means that if A is an m × n matrix and B is an n × s matrix over R, then ρ(AB) ≥ ρ(A) + ρ(B) − n, where ρ is the inner rank of a matrix. The inner rank of an m × n matrix is the smallest integer r such that the matrix is a product of an m × r matrix and an r × n matrix. Sylvester showed that fields satisfy Sylvester's law of nullity and are, therefore, Sylvester domains.
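Over a field the inner rank of a matrix coincides with its ordinary rank, so the law of nullity reduces to the classical Sylvester rank inequality, which is easy to verify numerically. The following minimal sketch, added here as an illustration rather than taken from the source, checks the inequality for a pair of real matrices.

```python
import numpy as np

# Over a field, inner rank equals ordinary rank, so Sylvester's law of
# nullity becomes the classical inequality rank(AB) >= rank(A) + rank(B) - n.
A = np.array([[1, 0, 0],
              [0, 1, 0]])          # 2 x 3, rank 2
B = np.array([[1, 0],
              [0, 1],
              [1, 1]])             # 3 x 2, rank 2
n = A.shape[1]                     # the shared inner dimension, 3

r_a = np.linalg.matrix_rank(A)
r_b = np.linalg.matrix_rank(B)
r_ab = np.linalg.matrix_rank(A @ B)
assert r_ab >= r_a + r_b - n       # 2 >= 2 + 2 - 3
```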
References Ring theory
Sylvester domain
Mathematics
136
77,749,655
https://en.wikipedia.org/wiki/Theophylline/ephedrine
Theophylline ephedrine, or theophylline/ephedrine, sold under the brand name Franol among others, is a fixed-dose combination formulation of theophylline, an adenosine receptor antagonist, and ephedrine, a norepinephrine releasing agent and indirectly acting sympathomimetic agent, which has been used as a bronchodilator in the treatment of asthma and as a nasal decongestant. It was first studied and used to treat asthma in the 1930s or 1940s, and combinations of the two drugs subsequently became widely used, usually in a 5:1 ratio of theophylline to ephedrine. Later research found that the combination was no more effective for asthma than theophylline alone but produced more side effects. Combinations of theophylline, ephedrine, and phenobarbital (brand name Tedral among others) have also been widely used to treat asthma, and many such combinations have been marketed under numerous brand names. Theophylline has also been marketed in combination with other ephedrine-like sympathomimetics such as racephedrine and pseudoephedrine, and with other barbiturates such as amobarbital and butabarbital, among other drugs. A combination of theophylline, ephedrine, and hydroxyzine has been marketed under the brand name Marax among others as well. Combinations of theophylline, ephedrine, and a barbiturate were later phased out in favor of combinations of theophylline and ephedrine alone (e.g., brand name Franol). Fixed-dose combinations of theophylline and ephedrine were abandoned after the 1970s, as they did not allow for dose titration in asthma therapy owing to the toxicity of ephedrine. The effects of theophylline/ephedrine as a performance-enhancing drug in exercise and sports have been studied, and use of theophylline/ephedrine combinations has led to the disqualification of elite athletes, ephedrine being banned in competitive sports. See also Cafedrine Fenethylline Theodrenaline References Abandoned drugs Adenosine receptor antagonists Beta-Hydroxyamphetamines Bronchodilators Cardiac stimulants Combination drugs Drugs in sport Ergogenic aids Norepinephrine releasing agents Sympathomimetics Xanthines
Theophylline/ephedrine
Chemistry
512
1,726,070
https://en.wikipedia.org/wiki/Churn%20drill
The churn drill is a large drilling machine that bores large-diameter holes in the ground. In mining, churn drills were used to drill into the soft carbonate rocks of lead- and zinc-hosted regions to extract bulk samples of the ore. Churn drills are also called percussion drills, as they function by lifting and dropping a heavy chisel-like bit which breaks the rock as it falls. They are most effective in soft- to medium-density rock at relatively shallow depths (10–50 metres). History Churn drills were invented as early as 221 BC in Qin dynasty China and were capable of reaching depths of up to 1,500 m. Churn drills in ancient China were built of wood and were labor-intensive to operate, but they were able to bore through solid rock. The churn drill appeared in Europe during the 12th century. A churn drill using steam power, based on "the ancient Chinese method of lifting and dropping a rod tipped with a bit," was first built in 1835 by Isaac Singer in the United States, according to The History of Grinding. In America, churn drills were common in the Tri-State district during the lead and zinc mining era in Missouri, Oklahoma, and Kansas. An example of one of these machines, used in 1929–1930 at the Pine Point lead and zinc mine in the Northwest Territories, is held at the Northern Life Museum in Fort Smith, Northwest Territories, Canada. References External links Churn drill Tools Mining equipment Chinese inventions
Churn drill
Engineering
293
1,520,732
https://en.wikipedia.org/wiki/Simon%20Conway%20Morris
Simon Conway Morris (born 1951) is an English palaeontologist, evolutionary biologist, and astrobiologist known for his study of the fossils of the Burgess Shale and the Cambrian explosion. The results of these discoveries were celebrated in Stephen Jay Gould's 1989 book Wonderful Life. Conway Morris's own book on the subject, The Crucible of Creation (1998), however, is critical of Gould's presentation and interpretation. Conway Morris, a Christian, holds to theistic views of biological evolution. He has held the Chair of Evolutionary Palaeobiology in the Department of Earth Sciences, University of Cambridge since 1995. Biography Early years Conway Morris was born on 6 November 1951. A native of Carshalton, Surrey, he was brought up in London, England, and went on to study geology at Bristol University, achieving a First Class Honours degree. He then moved to Cambridge University and completed a PhD at St John's College under Harry Blackmore Whittington. He is professor of evolutionary palaeobiology in the Department of Earth Sciences at Cambridge. He is renowned for his insights into early evolution and his studies of palaeobiology. He gave the Royal Institution Christmas Lectures in 1996 on the subject of The History in our Bones. He was elected a Fellow of the Royal Society at age 39, and was awarded the Walcott Medal of the National Academy of Sciences in 1987 and the Lyell Medal of the Geological Society of London in 1998. Work Conway Morris is based in the Department of Earth Sciences at the University of Cambridge and is best known for his work on the Cambrian explosion, the Burgess Shale fossil fauna, and similar deposits in China and Greenland. In addition to working in these countries he has undertaken research in Australia, Canada, Mongolia and the United States. His studies of the Burgess Shale-type faunas, as well as of the early evolution of skeletons, have encompassed a wide variety of groups, ranging from ctenophores to the earliest vertebrates. His thinking on the significance of the Burgess Shale has evolved, and his current interest in evolutionary convergence and its wider significance – the topic of his 2007 Gifford Lectures – was in part spurred by Stephen Jay Gould's arguments for the importance of contingency in the history of life. In January 2017, his team announced the discovery of Saccorhytus and initially described it as an early member of the deuterostomes, the diverse group of animals that includes the vertebrates, but subsequent analysis reclassified this taxon as a member of the protostomes, probably within the ecdysozoans. Burgess Shale Conway Morris's views on the Burgess Shale are reported in numerous technical papers and more generally in The Crucible of Creation (Oxford University Press, 1998). In recent years he has been investigating the phenomenon of evolutionary convergence, the main thesis of which is put forward in Life's Solution: Inevitable Humans in a Lonely Universe (Cambridge University Press, 2003). He is now involved in a major project to investigate both the scientific ramifications of convergence and to establish a website (www.mapoflife.org) that aims to provide an easily accessible introduction to the thousands of known examples of convergence. This work is funded by the John Templeton Foundation. Evolution, science and religion Conway Morris is active in the public understanding of science and has broadcast extensively on radio and television, including the Royal Institution Christmas Lectures delivered in 1996. 
A Christian, he has participated in science and religion debates, arguing against intelligent design on the one hand and materialism on the other. In 2005 he gave the second Boyle Lecture. He has lectured at the Faraday Institute for Science and Religion on "Evolution and fine-tuning in Biology". He gave the University of Edinburgh Gifford Lectures for 2007 in a series titled "Darwin's Compass: How Evolution Discovers the Song of Creation". In these lectures Conway Morris explained why evolution is compatible with belief in the existence of a God. He is a critic of materialism and of reductionism: That satisfactory definitions of life elude us may be one hint that when materialists step forward and declare with a brisk slap of the hands that this is it, we should be deeply skeptical. Whether the "it" be that of Richard Dawkins' reductionist gene-centred worldpicture, the "universal acid" of Daniel Dennett's meaningless Darwinism, or David Sloan Wilson's faith in group selection (not least to explain the role of human religions), we certainly need to acknowledge each provides insights but as total explanations of what we see around us they are, to put it politely, somewhat incomplete. He is likewise critical of scientists who are militantly against religion: the scientist who boomingly – and they always boom – declares that those who believe in the Deity are unavoidably crazy, "cracked" as my dear father would have said, although I should add that I have every reason to believe he was – and now hope is – on the side of the angels. In March 2009 he was the opening speaker at the Biological Evolution: Facts and Theories conference held at the Pontifical Gregorian University in Rome, as well as chairing one of the sessions. The conference was sponsored by the Catholic Church. Conway Morris has contributed articles on evolution and Christian belief to several collections, including The Cambridge Companion to Science and Religion (2010) and The Blackwell Companion to Science and Christianity (2012). Appointments and accomplishments: 1969–1972, University of Bristol, First Class Honours in Geology (BSc); 1975, elected Fellow (Title A) of St John's College; 1976, PhD, University of Cambridge; 1976, Research Fellowship at St John's College, University of Cambridge; 1979, Lecturer in the Department of Earth Sciences, Open University; 1983, Lecturer in the Department of Earth Sciences, University of Cambridge; 1987–1988, awarded a one-year Science Research Fellowship by the Nuffield Foundation; 1990, elected Fellow of the Royal Society; 1991, appointed Reader in Evolutionary Palaeobiology; 1995, elected to an ad hominem Chair in Evolutionary Palaeobiology; 1997–2002, Natural Environment Research Council. Awards and honours The Walcott Medal (1987), the Charles Schuchert Award of the Paleontological Society (1989), the Charles Lyell Medal of the Geological Society of London (1998), and the Trotter Prize (2007). Bibliography The Early Evolution of Metazoa and the Significance of Problematic Taxa. (ed., with Alberto M. Simonetta) Cambridge University Press, 1991. "The Cambrian "Explosion" of Metazoans". in Origination of Organismal Form: Beyond the Gene in Developmental and Evolutionary Biology, 2003. The Deep Structure of Biology. (ed.) Templeton Foundation Press, 2008. Fitness of the Cosmos for Life: Biochemistry and Fine-Tuning. (ed., with John D. Barrow, Stephen J. Freeland, Charles L. Harper, Jr.) Cambridge University Press, 2008. 
Water and Life: The Unique Properties of H2O. (ed., with Ruth M. Lynden-Bell, John D. Barrow, John L. Finney, Charles Harper, Jr.) CRC Press, 2010. The Runes of Evolution: How the Universe became Self-Aware. Templeton Press, 2015. From Extraterrestrials to Animal Minds: Six Myths of Evolution. Templeton Press, 2022. See also Extraterrestrial (TV program), in which Conway Morris participates. References External links Simon Conway Morris webpage at the Earth Sciences department, University of Cambridge Simon Conway Morris resource page at ISCAST Simon Conway Morris extended film interview with transcripts for the 'Why Are We Here?' documentary series. 1951 births Living people People educated at King's College School, London Alumni of the University of Bristol Fellows of St John's College, Cambridge Academics of the Open University English palaeontologists Astrobiologists British evolutionary biologists Fellows of the Royal Society English Christians Charles Doolittle Walcott Medal winners Lyell Medal winners Theistic evolutionists Critics of New Atheism British critics of atheism Presidents of the Cambridge Philosophical Society Earth scientists at the University of Cambridge Alumni of St John's College, Cambridge
Simon Conway Morris
Biology
1,703
258,901
https://en.wikipedia.org/wiki/Mutilation
Mutilation or maiming is severe damage to the body that has a subsequent harmful effect on an individual's quality of life. In the modern era, the term has an overwhelmingly negative connotation, referring to alterations that render something inferior, dysfunctional, imperfect, or ugly. Terminology In 2019, Michael H. Stone, Gary Brucato, and Ann Burgess proposed formal criteria by which "mutilation" might be systematically distinguished from the act of "dismemberment", as these terms are commonly used interchangeably. They suggested that dismemberment involves "the entire removal, by any means, of a large section of the body of a living or dead person, specifically, the head (also termed decapitation), arms, hands, torso, pelvic area, legs, or feet". Mutilation, by contrast, involves "the removal or irreparable disfigurement, by any means, of some smaller portion of one of those larger sections of a living or dead person. The latter would include castration (removal of the testicles), evisceration (removal of the internal organs), and flaying (removal of the skin)." According to these parameters, removing a whole hand would constitute dismemberment, while removing or damaging a finger would be mutilation; decapitation of a full head would be dismemberment, while removing or damaging a part of the face would be mutilation; and removing a whole torso would be dismemberment, while removing or damaging a breast or the organs contained within the torso would be mutilation. Usage Some ethnic groups practice ritual mutilation, for example, burning, clitoridectomy, or flagellation, sometimes as part of a rite of passage. In some cases, the term may even apply to treatment of dead bodies, as in the case of scalping, when a person is mutilated after they have been killed by an enemy. Castration is also a form of mutilation. The traditional Chinese practice of foot binding is a form of mutilation. Another form of mutilation that has captured the imagination of Westerners is that practiced by the "long-neck" people, a sub-group of the Karen known as the Padaung, where women wear brass rings around their necks to artificially make them longer. A joint statement released by the United Nations and numerous other international bodies opposes female genital mutilation. Maiming Maiming, or mutilation which involves the loss of, or incapacity to use, a bodily member, is and has been practiced by many societies with various cultural and religious significance, and is also a customary form of physical punishment, especially applied on the principle of an eye for an eye. Historical examples are plentiful: Chinese general Sun Bin had his kneecaps removed after being framed for treason during the Warring States period, while Araucanian warrior Galvarino had his hands amputated as punishment while a prisoner during the Spanish conquest of Chile. Maiming has often been a criminal offense; the old law term for a special case of maiming of persons was mayhem, an Anglo-French variant form of the word. Maiming of animals by others than their owners is a particular form of the offense generally grouped as malicious damage. For the purpose of the law as to this offense animals are divided into cattle, which includes pigs and equids, and other animals which are either subjects of larceny at common law or are usually kept in confinement or for domestic purposes.
In Britain under the Malicious Damage Act 1861 the punishment for maiming of cattle was three to fourteen years' penal servitude; malicious injury to other animals was a misdemeanor punishable on summary conviction. For a second offense the penalty was imprisonment with hard labor for over twelve months. Today maiming of animals falls under the Cruelty to Animals Acts, while maiming by others is additionally treated as criminal damage. Mutilation as human punishment In times when even judicial physical punishment was still commonly allowed to cause not only intense pain and public humiliation during the administration but also to inflict permanent physical damage, or even deliberately intended to mark the criminal for life by cropping or branding, one of the common anatomical target areas not normally under permanent cover of clothing (so particularly merciless in the long term) was the ears. In England, for example, various pamphleteers attacking the religious views of the Anglican episcopacy under William Laud, the Archbishop of Canterbury, had their ears cut off for those writings: in 1630 Alexander Leighton and in 1637 still other Puritans, John Bastwick, Henry Burton, and William Prynne. In Scotland one of the Covenanters, James Gavin of Douglas, Lanarkshire, had his ears cut off for refusing to renounce his religious faith. In Japan, Gonsalo Garcia and his companions were similarly punished. Notably, in various jurisdictions of the Thirteen Colonies, even relatively minor crimes, such as hog stealing, were punishable by having one's ears nailed to the pillory and slit loose, or even cropped; a counterfeiter would additionally be branded (for that crime, considered lèse-majesté, the older mirror punishment was boiling in oil), an example of Western mutilation. Independence did not render American justice any less brutal. For example, in the Southwest Territory (what would become the state of Tennessee), an example of harsh 'frontier law' under the 1780 Cumberland Compact took place in 1793 when Judge John McNairy sentenced Nashville's first horse thief, John McKain Jr., to be fastened to a wooden stock for one hour, given 39 lashes, have his ears cut off, and have his cheeks branded with the letters "H" and "T". Nebahne Yohannes, an unsuccessful claimant to the Ethiopian imperial throne, had his ears and nose cut off, yet was then freed. This form of mutilation against unsuccessful claimants to thrones has been in use in Middle Eastern regions for thousands of years. To qualify as a king, formerly, one had to exemplify perfection. Obvious physical deformities such as missing noses, ears, or lips were thereby sufficient disqualifications. The victim in these cases was typically freed alive to act as an example to others, being no longer a threat. See also Blinding (punishment) Cattle mutilation Decapitation Dismemberment Identification of inmates in Nazi concentration camps Overview of discretionary invasive procedures on animals References Corporal punishments Violence
Mutilation
Biology
1,363
42,321,034
https://en.wikipedia.org/wiki/Diallyl%20trisulfide
Diallyl trisulfide (DATS), also known as Allitridin, is an organosulfur compound with the formula S(SCH2CH=CH2)2. It is one of several compounds produced by hydrolysis of allicin, including diallyl disulfide and diallyl tetrasulfide; DATS is one of the most potent. Biological applications DATS has been shown to selectively kill cancerous cells in the prostate and breast, leaving healthy cells unharmed. This effect is attributed to increased reactive oxygen species (ROS) within cancer cells, an increased number of cells arresting in the G2 phase of mitosis, and an increase in caspase-3 activity. These effects appear to contribute to the apoptosis of cancer cells and a decrease in cancer cell proliferation. DATS can be metabolized by glutathione in red blood cells to form hydrogen sulfide (H2S). This conversion occurs at a consistent rate over a prolonged period of time, rendering DATS a good source of H2S. H2S is a cardioprotective agent that has antioxidant, anti-inflammatory, and anti-apoptotic effects. A major topic of research is the impact of hydrogen sulfide on reducing myocardial ischemia-reperfusion injury. Reperfusion injury is a significant threat to myocardial function that arises with the reintroduction of blood flow to the heart following an ischemic episode. Reperfusion triggers an inflammatory response and often results in oxidative damage. H2S decreases injury through many different effects, such as a decrease in oxidative stress, maintenance of mitochondrial function, and increased eNOS (endothelial nitric oxide synthase) activation. eNOS is activated via phosphorylation by H2S through the activation of the PI3K/Akt pathway, which increases the formation and bioavailability of nitric oxide (NO). This negatively impacts mitochondrial function. Mitochondria are known to protect the heart from ischemia-reperfusion injury through the opening of the ATP-sensitive K+ channel, which causes vasodilation and improves hemodynamics. DATS is a promising treatment for cardiac arrhythmias through its ability to change the opening of the human ether-à-go-go-related gene (hERG) channel. hERG is the pore-forming subunit of potassium channels that create delayed rectifier potassium ion currents in many cells, including cardiac myocytes. The delayed rectifier potassium ion current is largely responsible for the repolarization of ventricular cardiac myocytes by permitting potassium efflux. DATS causes a decrease in the steady-state inactivation, alters deactivation, and impairs trafficking of the hERG channel from the endoplasmic reticulum to the plasma membrane of the cell. This decreases the number of functional potassium rectifier channels on the cell membrane and thus slows repolarization. However, hERG trafficking impairment has also been shown to cause arrhythmias due to the development of long QT syndrome and should be considered in drug development. References HERG blocker Organosulfur compounds
Diallyl trisulfide
Chemistry
689
699,489
https://en.wikipedia.org/wiki/Para-Azoxyanisole
para-Azoxyanisole (PAA) is an organic, aromatic compound. Its chemical formula is C14H14N2O3. In the solid state it appears as a white powder, but when heated it forms a liquid crystal. As one of the first known and most readily prepared liquid crystals, PAA has played an important role in the development of liquid crystal displays. Its liquid crystal range is from 118 °C to 136 °C: the solid-to-nematic transition is at 118 °C and the nematic-to-isotropic liquid transition at 136 °C. References Azo compounds Amine oxides Liquid crystals 4-Methoxyphenyl compounds
Para-Azoxyanisole
Chemistry
137
17,639,930
https://en.wikipedia.org/wiki/List%20of%20Angry%20Video%20Game%20Nerd%20episodes
Angry Video Game Nerd (abbreviated as AVGN) is an American web series of comedy-themed retrogaming reviews, created by and starring James Rolfe. The show revolves around reviews that involve acerbic rants about low-quality video games. From the beginning of season 2, new episodes aired first on GameTrailers.com; they are now aired first at Cinemassacre.com, with episodes later re-aired on Rolfe's own YouTube channel. Episodes are usually scheduled for release on the first or second Wednesday of each month; originally, Rolfe's early work schedule allowed for two episodes per month, but other work commitments changed this to its present arrangement. The only Angry Video Game Nerd episode that never officially made it to, or remained on, YouTube in its original form (although it is still available for viewing) was Atari Porn, which was removed after the site flagged it for inappropriate content per its community guidelines. On November 21, 2023, a heavily censored version was officially uploaded to the Cinemassacre YouTube channel, under the name Atari Pork. The two-part review of Teenage Mutant Ninja Turtles 3 was removed over copyright issues; however, an edited version was reuploaded in 2020 as a single video. Two other episodes were later removed for using movie clips from copyrighted films – Rocky and Super Mario Bros. 3 – but were later reuploaded to YouTube after being amended and changed to comply with the website's policies. Series overview Episodes Season 1 (2004–06) Season 2 (2007–08) Season 3 (2008–09) Season 4 (2009–10) Season 5 (2010–11) Season 6 (2011) Season 7 (2012–13) Starting with this season, episodes no longer aired on GameTrailers. Season 8 (2014) Episodes 5 to 16 were released as part of the "12 Days of Shitsmas" event in December 2014. Angry Video Game Nerd: The Movie (2014) At the end of the Spielberg Games review, it was implied that E.T. would be reviewed in The Angry Video Game Nerd: The Movie. Eventually, at TooManyGames 2011 and Magfest 2012, Rolfe confirmed that he would review E.T. in the film. E.T. programmer Howard Scott Warshaw also makes an appearance in the film. The film premiered July 21, 2014. Season 9 (2015) Season 10 (2016) Season 11 (2017) Season 12 (2018) Season 13 (2019) Season 14 (2020) Season 15 (2021) Season 16 (2022) Season 17 (2023) Season 18 (2024) Related videos Cinemassacre The following collection of videos features appearances by either James Rolfe, or his character The Nerd: ScrewAttack The following collection of videos features appearances by either James Rolfe, or his character The Nerd: Channel Awesome The following collection of videos features appearances by either James Rolfe, or his character The Nerd. This includes Channel Awesome's collection of videos from the special crossover series between Angry Video Game Nerd and Nostalgia Critic: Cinevore Studios/Mixed Nuts Productions The following collection of videos features appearances by James Rolfe's character, the Nerd: GameTrailers The following collection of videos features appearances by either James Rolfe, or his character The Nerd: Pat the NES Punk The following collection of videos features appearances by either James Rolfe, or his character The Nerd: Other The following collection of videos features appearances by either James Rolfe, or his character The Nerd: Clip Collection videos Bad Game Cover Art In 2015, from December 1 to 25, a series of mini episodes was released in the style of an advent calendar, in which the Nerd comments on poor examples of video game cover art.
The following lists these episodes: Shorts YouTube Shorts featuring the Angry Video Game Nerd. Home releases These DVDs and Blu-rays have not been sold in stores, with the exception of the ScrewAttack store. References External links AVGN Full Episode List at Cinemassacre Productions Cinemassacre Angry Video Game Nerd
List of Angry Video Game Nerd episodes
Technology
857
35,741,235
https://en.wikipedia.org/wiki/Incyte
Incyte Corporation is an American multinational pharmaceutical company with headquarters in Wilmington, Delaware, and Morges, Switzerland. The company was created in 2002 through the merger of Incyte Pharmaceuticals, founded in Palo Alto, California in 1991, and Incyte Genomics, Inc. of Delaware. The company currently operates manufacturing and R&D locations in North America, Europe, and Asia. Incyte Corporation currently develops and manufactures prescription biopharmaceutical medications in multiple therapeutic areas including oncology, inflammation, and autoimmunity. History In 2014, Incyte named Hervé Hoppenot president and CEO, and in 2015 he was appointed chairman of the Board of Directors. Hoppenot had previously served as the president of Novartis Oncology; he had been with Novartis since 2003. In September 2015, the company announced it had gained exclusive development and commercial rights pertaining to Jiangsu Hengrui Medicine Co., Ltd's anti-PD-1 monoclonal antibody, SHR-1210, in a deal worth more than $795 million. In January 2020, Incyte signed a collaboration and license agreement for the global development and commercialization of tafasitamab with MorphoSys. On March 3, 2020, the agreement received antitrust clearance and thus became effective. Pharmaceuticals Incyte Corporation currently has seven marketed and co-marketed pharmaceutical products, including Jakafi (ruxolitinib), Pemazyre (pemigatinib), Monjuvi (tafasitamab-cxix), Opzelura (ruxolitinib), Tabrecta (capmatinib), Olumiant (baricitinib), and Iclusig (ponatinib). In 2013, Novartis acquired Incyte's c-Met inhibitor capmatinib (INC280, INCB028060), which is marketed under the brand name Tabrecta. As of 2014, the company was developing baricitinib, an oral JAK1 and JAK2 inhibitor drug for rheumatoid arthritis, in partnership with Eli Lilly. It gained EU approval in February 2017. In April 2017, the US FDA issued a rejection, citing concerns about dosing and safety. In May 2018, baricitinib was approved in the United States for the treatment of rheumatoid arthritis under the brand name Olumiant. As of 2016, epacadostat, an indoleamine 2,3-dioxygenase (IDO1) inhibitor, was in development for various cancers and was in combination trials with Merck's pembrolizumab (Keytruda) and Bristol Myers Squibb's nivolumab (Opdivo). In May 2024, Incyte completed its acquisition of Escient Pharmaceuticals for $750 million. References External links American companies established in 1991 Companies based in New Castle County, Delaware Pharmaceutical companies established in 1991 Companies listed on the Nasdaq 1991 establishments in California Pharmaceutical companies of the United States Life sciences industry Health care companies based in Delaware 1993 initial public offerings
Incyte
Biology
642
24,113,078
https://en.wikipedia.org/wiki/Ombrophobe
Ombrophobe or ombrophobous/ombrophobic plant (from Greek ὄμβρος - ombros, "storm of rain" and φόβος - phobos, "fear") is a plant that cannot withstand much rain. Similar terms are xerophile and xerophyte. Ombrophile or ombrophilous/ombrophilic plant is a plant that thrives in abundant amounts of rain. The terms were introduced by the 19th-century botanist Julius Wiesner, who identified the two extreme kinds of plants, ombrophobes and ombrophiles. Xerophytes are usually ombrophobous. References Plant physiology
Ombrophobe
Biology
147
30,501,764
https://en.wikipedia.org/wiki/Auriscalpium%20vulgare
Auriscalpium vulgare, commonly known as the pinecone mushroom, the cone tooth, or the ear-pick fungus, is a species of fungus in the family Auriscalpiaceae of the order Russulales. It was first described in 1753 by Carl Linnaeus, who included it as a member of the tooth fungi genus Hydnum, but British mycologist Samuel Frederick Gray recognized its uniqueness and in 1821 transferred it to the genus Auriscalpium that he created to contain it. The fruit bodies (mushrooms) grow on conifer litter or on conifer cones that may be partially or completely buried in soil. The dark brown cap of the small, spoon-shaped mushroom is covered with fine brown hairs, and reaches a diameter of up to . On the underside of the cap is a crowded array of tiny tooth-shaped protrusions ("teeth") up to 3 mm long; they are initially whitish to purplish-pink before turning brown in age. The dark brown and hairy stem, up to long and 2 mm thick, attaches to one edge of the cap. The mushroom produces a white spore print composed of roughly spherical spores. High humidity is essential for optimum fruit body development, and growth is inhibited by either too much or too little light. Fruit bodies change their geotropic response three times during their development, which helps ensure that the teeth ultimately point downward for optimum spore release. The pure culture, cell division, and ultrastructure of A. vulgare's hyphae and mycelia have been studied and described in search of potentially useful characters for phylogenetic analysis. When grown in culture, the fungus can be induced to produce fruit bodies under suitable conditions. The fungus is widely distributed in Europe, Central America, North America, and temperate Asia. Although common, its small size and nondescript colors lead it to be easily overlooked in the pine woods where it grows. A. vulgare is not generally considered edible, owing to its tough texture. Taxonomy The species was first described in the scientific literature by Carl Linnaeus under the name Hydnum auriscalpium in his 1753 Species Plantarum. Linnaeus placed three other tooth fungi in the genus Hydnum: H. imbricatum, H. repandum, and H. tomentosum. In 1821, Samuel Frederick Gray considered H. auriscalpium to be sufficiently distinct from the other Hydnum species to warrant the creation of a new genus, Auriscalpium, to contain it. In the process, its name was changed to Auriscalpium vulgare. Otto Kuntze and Howard James Banker later independently sought to restore Linnaeus' species name, but the resulting combination (Auriscalpium auriscalpium) is a tautonym and disallowed under the rules for botanical nomenclature (ICBN 2005 rule 23.4), and these combinations are therefore no longer validly published. Other names given to the fungus and now considered synonyms include Hydnum fechtneri, named by Josef Velenovský in 1922, and later combinations based on this name. A. vulgare is the type species of Auriscalpium, a widely distributed genus of eight species. Despite vast differences in appearance and morphology, A. vulgare is related to such varied taxa as the gilled fungi of Lentinus, the poroid genus Albatrellus, the coral-like Clavicorona, and fellow tooth fungus Hericium. The relationship of all of these taxa—members of the family Auriscalpiaceae of the order Russulales—has been demonstrated through molecular phylogenetics. Auriscalpium vulgare is commonly known as the "pinecone mushroom", the "cone tooth", "pine cone tooth", or the "ear-pick fungus".
Gray called it the "common earpick-stool"; it was also referred to as the "fir-cone Hydnum" when it was still considered to be a member of that genus. The specific epithet vulgare means "common". The generic name Auriscalpium is Latin for "ear pick" and refers to a small, scoop-shaped instrument used to remove foreign matter from the ear. Description The fruit body of A. vulgare is fibrous when fresh and becomes stiff when dry. It is a small species rarely exceeding in height, with a cap usually smaller than an adult's fingernail, although it has been known to reach up to . It is semicircular or kidney-shaped, flat on the lower surface and rounded on the top. The surface is at first much like the stem: covered with bristles and dark chestnut brown. It becomes smooth with maturity and can darken to the point of being almost black. The cap margin is usually buff to light brown–roughly the same color as the spines and lighter in color than the center. It becomes rolled inward (revolute) and often wavy in maturity. The spines on the underside of the cap are a few millimeters long and cylindrical down to their sharp tips. White to light brown when young, they later become covered with a white spore mass and then turn an ashy gray. Occasionally, fruit bodies are produced that lack a cap entirely. Auriscalpium vulgare usually has a single stem, but occasionally several stems arise from a thick common base. It attaches to the side of the cap and is cylindrical or slightly flattened with a bulbous base, 2–8 cm tall and 1–3 mm wide. Its surface is covered with hairy fibers, and its mature color is a dark chestnut brown. The cap flesh is composed of two distinct layers: a thin, compact, black-brown and hairy upper layer, and a thick, soft, white to light brown lower layer that is made of thin, thread-like filaments arranged in a roughly parallel fashion. The stem is similarly divided, with a thin, dark cortical layer covered by hairs, which encircles inner ochre-colored flesh. A drop of potassium hydroxide applied to the surface of the mushroom will cause it to instantly stain black. The mushroom has no distinct taste or odor and is generally considered inedible because of its toughness and diminutive size. An 1887 textbook noted, however, that it was "commonly eaten in France and Italy". Microscopic characteristics Spore deposits are white. Viewed under a light microscope, the spores appear hyaline (translucent), covered with minute wart-like bumps, and are spherical or nearly so, with dimensions of 4.6–5.5 by 4–5 μm. They are amyloid (reacting to Melzer's reagent) and cyanophilous (staining in methyl blue). The basidia (spore-bearing cells of the hymenium) are four-spored with basal clamps, and measure 15–24 by 3–4 μm, and sterigmata (extensions of the basidia that bear the spores) are swollen at the base and roughly 3 μm long. The hyphal system is dimitic, comprising both generative (undifferentiated) and skeletal (structural) hyphae. The thin-walled generative hyphae are hyaline, and have clamp connections; the thick-walled skeletal hyphae are thicker overall and lack such connections. The cortex (the tougher outer layer of flesh) is made of parallel unbranched generative hyphae that are brown, thick-walled, clumped together, and frequently clamped. The internal flesh is made of interwoven generative and skeletal hyphae. Gloeoplerous hyphae (containing oily or granular contents) are also present, protruding into the hymenium as club-like or sharp-pointed gloeocystidia.
The hyphae of basidiomycetous fungi are partitioned by cross-walls called septa, and these septa have pores that permit the passage of cytoplasm or protoplasm between adjacent hyphal compartments. In an effort to determine ultrastructural characters useful for systematic and phylogenetic analyses of the Agaricomycotina, Gail Celio and colleagues used electron microscopy to examine both the structure of the septal pore, and nuclear division in A. vulgare. They determined that septa found in hyphae of the hymenium have bell-shaped pore "caps" with multiple perforations. Each cap extends along the length of the septum, along with a zone surrounding the pore that is free of organelles. Due to the scarcity of similar data from other Agaricomycotina species, it is unknown whether the extended septal pore cap margins of A. vulgare are phylogenetically informative. Regarding nuclear division, the process of metaphase I of meiosis is similar to the metaphase of mitosis. Spherical spindle pole bodies containing electron-opaque inclusions are set within gaps on opposite ends of the nuclear membrane. This membrane has occasional gaps but is largely continuous. Fragments of endoplasmic reticulum occur near the spindle pole bodies, but do not form a cap. Fruit body development Fruit body primordia first appear between the scales of the cones, and require 9 to 35 days to reach their final height. They consist of an inner core of thin-walled generative hyphae enclosed by an outer coat of skeletal hyphae. Immature fruit bodies are white and delicate, but gradually become brown as they mature. Because the cap is grown from the stem tip after it bends, cap development interrupts stem growth, and this shift to centrifugal growth (that is, growth outward from the stem) results in the typical kidney-shaped or semicircular cap. Although the fruit body takes at least 9 days to mature, spore production begins within 48–72 hours of the start of cap growth. Spines start out as minute protuberances on the part of the stem adjoining the undersurface of the cap. As the cap enlarges, these spines are spread horizontally, and more protuberances are formed, which elongate vertically downwards. When grown in favorable conditions of high water availability and humidity, the fruit body can proliferate by growing additional (secondary) fruit bodies on all parts of its upper and lower surfaces. These secondary growths typically number between four and seven; some may be aborted as the nutrients from the pine cone substrate are depleted, resulting in stems lacking caps. In one instance, a secondary proliferation (i.e., one growing from a primary proliferation) was noted that developed completely so as to produce viable spores. Humidity is a limiting factor for optimum fruit body development. Removal of incompletely mature laboratory-grown specimens from a relative humidity (R.H.) of over 98% to one of 65–75% causes the fruit bodies to brown and stop growing. When transferred to an even lower R.H. of about 50%, the stems quickly begin to collapse. Light also affects fruit body development: both continuous illumination and complete darkness inhibit growth. When a stem is developing, the fungus is negatively geotropic, so that if the axis of the stem is tilted by 90 degrees, it will return to a vertical position within 24 hours. The extending hyphae that form the cap are themselves diageotropic—they will grow at right angles to the direction of gravity.
Finally, the spines are positively geotropic, and will re-orient themselves to point downward if the mushroom orientation changes. Because the second (cap formation) and third (spine formation) geotropic responses overlap, there is a brief period where two different geotropic responses are operating simultaneously. These geotropic transitions help ensure that the final alignment results in optimum spore dispersal. Similar species Similar species include Strobilurus trullisatus, which also fruits on Douglas-fir cones. Baeospora myosura fruits on spruce cones, and Mycena purpureofusca on pine cones. Habitat and distribution Auriscalpium vulgare is a saprobic species. Its mushrooms grow solitary or clustered on fallen pine cones, especially those that are fully or partially buried. It typically favors Scots Pine (Pinus sylvestris), but has also been reported on spruce cones, and in California grows primarily on Douglas-fir cones. One author noted finding the mushroom on spruce needles on top of squirrel dens where cone bracts were present in the forest floor. In a study conducted in the Laojun Mountain region of Yunnan, China, A. vulgare was found to be one of the most dominant species collected from mixed forest at an altitude of . A study on the effect of slash-and-burn practices in northeast India showed that the fungus prefers to fruit on burned cones of the Khasi Pine, and that the number of fruit bodies on unburned cones increases with cone girth. The fungus is widely distributed in Europe, Central and North America, temperate Asia, and Turkey. In North America, its range extends from Canada to the Trans-Mexican Volcanic Belt south of Mexico City. The mushroom is common, appearing in the summer and autumn, although it is easily overlooked because of its small size and nondescript coloration. A. vulgare is the only representative of its genus in temperate areas of the Northern Hemisphere. Growth in culture Auriscalpium vulgare can be grown in pure culture on agar-containing plates supplemented with nutrients. The colonies that grow are white to pale cream, and cover the agar surface within six weeks from the initial inoculation. The mycelium is made of bent-over hyphae, without any aerial hyphae (hyphae that extend above the surface of the agar). Typically, two indistinct zones develop at about 6 mm and 15 mm from the initial inoculum spot, with each zone roughly 4 mm wide. The zones appear somewhat lighter in color because the hyphae are more closely packed and form crystalline substances that deposit into the agar. The mature mycelium consists of thin-walled, densely packed hyphae that are 1.5–3.2 μm in diameter. They are often gnarled or somewhat spiral (subhelicoid), and frequently branched at an angle of about 45°, with a clamp at the base of the branch. They contain amorphous granules that appear refractive when viewed under phase contrast microscopy, and their walls are often encrusted with tiny granules. Gloeocystidia (thin-walled cystidia with refractive, frequently granular contents) are common; they measure 50–85 by 6.5–8.5 μm, and are club-shaped (sometimes elongated), thin-walled, and often have one or two lobes with rounded tips. Containing foamy and pale yellow contents, they are a refractive yellow color under phase contrast. Initially they are erect but they soon fall under their own weight to lie on the agar surface. Crystalline deposits are abundant as small, randomly scattered plate-like or star-like crystals.
Fruiting begins about six weeks after the initial inoculation on the agar plate, but only when portions of fruit bodies (spines or stem sections) are used as the inoculum to initiate growth; the use of mycelium as the inoculum precludes subsequent fruiting. Mature fruit bodies grow very close to the initial site of inoculation—within 3 mm—and take about 60 days to mature after they first start to form. Edibility The mushroom is generally considered inedible because of its toughness and diminutive size. An 1887 textbook claims that it was "commonly eaten in France and Italy". References External links AFTOL Images and details of ultrastructural characters Russulales Inedible fungi Fungi described in 1753 Taxa named by Carl Linnaeus Fungi of Asia Fungi of Central America Fungi of Europe Fungi of North America Taxa named by Samuel Frederick Gray Fungus species
Auriscalpium vulgare
Biology
3,328
68,598,894
https://en.wikipedia.org/wiki/5-MeO-DBT
5-MeO-DBT (5-Methoxy-N,N-dibutyltryptamine, 5-MeO-BET) is a rare substituted tryptamine derivative which is thought to be a psychoactive substance. It was identified in a designer drug sample by a forensic laboratory in Slovenia in March 2021; only analytical studies have been conducted, and no pharmacological data are available. It is nevertheless controlled under drug analogue legislation in a number of jurisdictions. Legal status 5-MeO-DBT was made a Schedule I controlled substance at the state level in Alabama on September 13, 2024. See also Dibutyltryptamine 4-HO-DBT 4-HO-DSBT 5-MeO-DET 5-MeO-EPT 5-MeO-DPT References Tryptamines Methoxy compounds
5-MeO-DBT
Chemistry
178
6,196,644
https://en.wikipedia.org/wiki/Social%20design
Social design is the application of design methodologies in order to tackle complex human issues, placing the social issues as the priority. Historically social design has been mindful of the designer's role and responsibility in society, and of the use of design processes to bring about social change. Social design as a discipline has been practiced primarily in two different models, as either the application of the human-centered design methodology in the social sector or governmental sector, or sometimes is synonymously practiced by designers who venture into social entrepreneurship. Models Stanford model of design thinking Stanford University's Hasso Plattner Institute of Design (d.school) and IDEO began collaborative interdisciplinary research in 1991 in order to improve the design process, and from that, Stanford's model of design thinking as a process emerged. The Stanford model has been applied to social design, where the goal is to develop both human and social capital with new products and processes that can be profitable, a goal that the anti-capitalist magazine In These Times called "naïve, at best". Margolin's social model Victor Margolin and Sylvia Margolin wrote in 2002 about the "social model" as a design practice and research methodology, primarily focused on social services, but the ideas could be expanded into educational systems, healthcare systems, and civic technology design. The social model involves a focus on human needs by taking inspiration from core social work literature and has an ecological perspective (one less commonly seen in other modes of design). Margolin suggests a multifaceted approach to solving problems, first assessing the situation by answering a few core questions, followed by survey research and interviews, content analysis of archival data, and/or participant observation. IDEO model The design firm IDEO defines social design as a process that encourages community facilitation, including the sharing of conversation and ideas, beliefs and rituals. The process should be supportive and empowering for those involved and offer an innovative and feasible process. The designer(s) should not try to change people's behavior, and should draw on the differences in cultural traditions and cultural beliefs in order to frame the problems within society. Additionally, the wider influence of the design is important, including environmental awareness, since the environment affects everyone and is interconnected. The New Materialist Model This model seeks to break down any distinction between design and society. Boelen and Kaethler argue that all design is, for good or bad, essentially social because it is produced by, and exists in, the social realm. They observe, "A [new] materialist reading of social design on one hand complexifies the design process and on the other offers insight into meaningful forms of engagement." It employs central themes developed by thinkers such as Jane Bennett, Tim Ingold and Bruno Latour and as a result it produces design that rejects the logic of solutionism and tends towards research, personal reflection and story-telling—such as auto-ethnographic design. It is critiqued for being 'navel-gazing' and too closely resembling artistic practice and production. History Within the design world, social design is defined as a design process that contributes to improving human well-being and livelihood.
The ideas behind social design have been inspired by Victor Papanek's writings; he was one of the first to address issues of social design, in the 1960s. He was focused on creating change within the design field and no longer tolerating misdesign: any design that does not account for the needs of all people and disregards its own environmental consequences. To be a positive force in society, design and designers need to be socially and morally responsible; designers carry a serious responsibility for the consequences their designs have on society. These consequences include environmental impact, and designers can contribute to designing more considerate and ecological products by carefully selecting the materials they use. Papanek also remarks on designing for people's needs (rather than their wants) and on the responsibility designers have over the choices they make in design processes. Often design is detached from the real world and is focused on the commercial market, designing luxury or disposable items for just a few people on the basis of aesthetics. Papanek emphasizes that designers should have a keen eye for where the need is, and often that is found by looking at marginalized populations. Another author who contributes to the development of social design is Victor Margolin. He writes in the 2002 book The Politics of the Artificial: Essays on Design and Design Studies of the "designer's ability to envision and give form on material and immaterial products that can address human problems on broad scale and contribute to social well-being." This ideology is something that social design is built on. In this view social design is an activity that should not be framed with connotations of charity, aid donations, help, etc. It is not voluntary work, but should be seen as a professional contribution that plays a part in local economic development or livelihood. At the same time, social design also challenges the conventional market model of designing. While traditionally design has been approached as a profession that remains strictly answerable to market forces, social design envisages the possibility of a more distributive conception of surpluses, by ensuring that the benefits of services and systems reach a wider range of user groups who may often fall outside the market system. Margolin writes, "The primary purpose of design for the market is creating products for sale. Conversely, the foremost intent of social design is the satisfaction of human needs." Designer George Aye writes about the importance of acknowledging the role of power when designing for complex social sector issues, as one may do for social design projects. Depending on the project, designing for user engagement can be more important than designing for solutions, and it encourages the use of human-centered design methodologies. Engineer Chris Cox of Facebook used the term "social design" in 2010 and 2011: "[social design] defines the concept as improving how people build human-to-human, versus human-to-interface, connections online". Outside the design world, social design appears in a number of professional environments; many artists use the term social design or social practice to describe their work, though the work is exhibited within the contexts of the art world and has a different dialog when compared to design. Initiatives The Hasso Plattner Institute of Design at Stanford University has supported social design programs.
The Archeworks school, founded in 1994 and located in Chicago, was an early teacher of socially responsible design processes. The Curry Stone Design Prize, founded in 2008, is a prize focused on design innovation in the social sector. Measured Summit, Design+Health, a social design conference centered on the health care industry, was founded in New York City in 2017. The Center for Social Design at the Maryland Institute College of Art (MICA) was founded in 2011, and was one of the first graduate-level degree programs in social design in the United States. It is dedicated to demonstrating the value of design in addressing complex social problems and to preparing the next generation of creative change-makers. The World Design Research Initiative, aka Worldesign, at the University of Art and Design Helsinki. Worldesign aims to explore issues relevant to social, welfare, and responsible design and to generate theory, as well as applicable systems or models. Its members produce exhibitions, workshops, and publications, which work as tools for testing and evaluating different social design applications. The University of Applied Arts Vienna has a master's degree dedicated to the challenges within urban social systems and related issues. The programme is oriented towards graduates from diverse fields of study using transdisciplinary teams. Art in synergy with project-related scientific methods and knowledge is seen as a tool for urban innovation. The University of Technology Sydney introduced a Bachelor of Creative Intelligence & Innovation degree in 2014, which must be completed in combination with another undergraduate degree. With a strong focus on developing novel solutions for social issues, it enables students "to participate in a future-facing, world-first, transdisciplinary degree that takes multiple perspectives from diverse fields, integrating a range of industry experiences, real-world projects and self-initiated proposals – equipping students to address the complex challenges and untapped opportunities of our times." The School of Design at Ambedkar University, Delhi, India, offers an MDes in Social Design. The program commenced in 2013 and has been through many iterations. At its core the philosophy of the program is to make design more inclusive, at the level of creation and also at the level of users. In Spain, Diseño Social EN+ works to integrate socially concerned designers and NGOs, helping them improve the quality of their communications, whether through training or through connections between designers and organizations. It launched in 2011. In the Netherlands, Social Design Showdown has been an active community of social designers developing the social design field through practice since 2019. Through events and research initiatives they explore the impact, implementation, and collaboration involved in social design. The Design Academy Eindhoven offered one of the first European master's programmes in social design, initiated by Jan Boelen. It employs a New Materialist approach to social design. See also Business ethics Conceptual design Public interest design – design practice towards the greater good Service design – an ecological approach to designing a service. Social change – about changing social norms and behaviors Sociotechnical system – an approach to complex organizational work design that recognizes the interaction between people and technology in workplaces.
Social responsibility – an ethical theory Sustainable design – the philosophy of designing physical objects, the built environment, and services to comply with the principles of ecological sustainability. Universal design – the design of buildings, products or environments to make them accessible to all people, regardless of age, disability or other factors. References Further reading Boelen, Jan; Kaethler, Michael, eds. (2020). Social matter, social design: for good or bad, all design is social. Amsterdam: Valiz. ISBN 978-94-92095-84-8. Stocker, Karl (2017). Sozio-Design/Socio-Design: Relevante Projekte – Entworfen für die Gesellschaft/Relevant Projects – Designed for Society. Birkhäuser (ed. with FH JOANNEUM). External links Video: What is Social Design? by IDEO (2015) and by Victoria University of Wellington, posted by Design For Change on YouTube Podcast: An interview with John Emerson on design and social change (2013) from Internet Archive Articles Bruinsma, M. (1999). "Idealism: An Ideal Design is Not Yet". Casey, V. (2007). "The Designer's Dilemma." DesignersAccord.org. Emerson, J. (2009) "Mapping Power: Using design to get where we want to go" Emerson, J. (2008) "The Vision Thing: Seeing and creating change through design" Emerson, J. (2007) "The Conversation: When should designers make a political commitment?" Emerson, J. (2005) "Guns, Butter and Ballots: Citizens take charge by designing for better government" Emerson, J. (2004) "Taking it to the Streets: Graphic design for advocacy" Garland, K. (1964). "First Things First Manifesto." Hidalgo, M. (2014). "Armas de construcción Masiva: Manual de Diseño Social" Diseño Social EN+ (Spanish) Howard, A. (2001). "There is such a thing as society." EyeMagazine.com Howard, A. (2001). "Design Beyond Commodification." EyeMagazine.com Nini, P. (2004). "In Search of Ethics in Graphic Design." AIGA.org Poynor, R. (2007). "The Price of Juice." EyeMagazine.com Poynor, R. (2001). "The Time For Being Against." Typotheque.com Poynor, R. (2000). "First Things First 2000." Sagmeister, S. (2002). "How Good is Good?" Typotheque.com Various. (1883–2010). "100+ Years of Design Manifestos" Larosa, Antonio (2007). "Designers Against The iPodization Of Society" Design
Social design
Engineering
2,539
66,922,965
https://en.wikipedia.org/wiki/Les%20Espaces%20d%27Abraxas
Les Espaces d'Abraxas is a high-density housing complex in Noisy-le-Grand, approximately from Paris, France. The building was designed by architect Ricardo Bofill and his architecture practice Ricardo Bofill Taller de Arquitectura (RBTA) in 1978 on behalf of the French government, during a period of increased urbanisation across France after World War II. This rapid urbanisation led to overcrowding and insufficient housing in Paris. To offset this, the French government implemented a project to create five 'New Towns' on the outskirts of the city. Architect Ricardo Bofill's projects, including Les Espaces d'Abraxas, are rooted in his left-wing ideals. The building's post-modern design uses classical motifs and new building technologies to achieve a luxury aesthetic previously reserved for the upper classes. Despite receiving criticism, the building was an early success for Bofill, and brought him international success and praise. The building has been used as a backdrop in film and TV, including in Brazil (1985), The Hunger Games: Mockingjay – Part 2 (2015) and Arcadia (2023). Description The large complex of 591 apartments was designed in 1978 and completed in 1982. It rapidly acquired iconic status, amplified by its use as background sets in movies and music clips. It consists of three buildings: Le Palacio (the palace) is the largest, followed by Le Théâtre (the theatre) to its west, and the smaller L'Arc (the arch) between the other two. Le Palacio has 441 housing units, Le Théâtre has 130, and L'Arc has 20. In the decades following its creation, living conditions in the complex deteriorated to the extent that its demolition was debated in the mid-2010s. In 2018, the commune of Noisy-le-Grand announced that Bofill would oversee the renovation of Les Espaces d'Abraxas and of a number of nearby developments, including new construction. Name and location "Les Espaces d'Abraxas" literally translates to 'Abraxas's Spaces' and is a reference to the Greek Abraxas. This classical reference may be linked to the building's post-modern architecture style, which also has visual references to Ancient Greek and Roman architecture. The building is located within the Noisy-le-Grand region, a commune found in the eastern suburbs of Paris. The building is accessible via the Réseau Express Régional (RER), which travels directly from the centre of Paris (Gare de Lyon) to Noisy-le-Grand – Mont d'Est station. Noisy-le-Grand is located in Zone 4 of the RATP (Régie Autonome des Transports Parisiens). History and design Les Trente Glorieuses In the three decades following the Second World War, France experienced a dramatic economic boom, a period of "rapid urbanisation and industrial modernisation" which has since been dubbed Les Trente Glorieuses (The Glorious Thirty). During this period, large numbers relocated from rural towns into urban centres, specifically Paris. This led to unbalanced growth across France and poor housing and congestion throughout the capital. French housing policy in the 1950s–70s To manage this increased urbanisation, the French government began to implement various national programs to address housing shortages and move some of the population out of Paris. In 1965, under the government of General de Gaulle, five 'New Towns' were proposed as part of the Schéma Directeur de la Région Île-de-France (Île-de-France Region Master Plan), although not officially announced until 1971.
In 1976, the Seventh National Plan proposed a 'Priority Action Programme' through which five 'New Towns' would be built across France. The five towns built were Cergy-Pontoise, Evry, Marne-la-Vallée, Melun-Senart, and Saint-Quentin-en-Yvelines. These towns were the focus of increased infrastructure, with the aim of aiding "employment and service growth" in the suburbs and diversifying the population on the outskirts of Paris. Originally designed in 1978, two years into the Seventh National Plan, Les Espaces d'Abraxas is situated in the Noisy-le-Grand region, within the 'New Town' of Marne-la-Vallée. At the time of construction, Noisy-le-Grand was a part of the Ceinture rouge (Red Belt), falling within the then French Communist Party-led council of Seine-Saint-Denis. As of 2013, social housing estates like Les Espaces d'Abraxas account for 41 per cent of housing in Seine-Saint-Denis. History of Noisy-le-Grand Before becoming part of the 'New Town' of Marne-la-Vallée, Noisy-le-Grand experienced massive population growth in the inter-war period. The town dates back as far as "the invasion of Gaul by Julius Caesar (58–52 BC)", and remained a rural village until the 20th century. The population grew from 2,200 people in 1921 to over 10,000 in 1954. Following its incorporation into Marne-la-Vallée, the population grew from "26,765 in 1975 to 52,408 by the end of the 1980s". Unlike the other 'New Towns' which sought to concentrate residents into one area and create infrastructure around it, Marne-la-Vallée was "to be a series of small scale settlements based upon existing communes and around transport connections (road and rail)". The town's growth was impeded by the economic recession that hit France in the late 1970s; however, by the mid-1980s Noisy-le-Grand accounted for two-thirds of employment in Marne-la-Vallée. The 'New Towns' have been criticised as lacking 'identity', a problem which was furthered for Noisy-le-Grand with the construction of Disneyland Paris on the eastern edge of Marne-la-Vallée in 1992. Disneyland Paris, which was originally called 'Euro Disney', faced large amounts of backlash from the French public due to concerns it was an invasion of American aesthetics and culture, with theatre director Ariane Mnouchkine describing it as a "cultural Chernobyl". Ricardo Bofill Architect Ricardo Bofill was born in 1939 in Barcelona, Spain. Originally attending the Barcelona School of Architecture, he was expelled in 1957 due to his radical left-wing beliefs, which were contrary to the regime of then dictator Francisco Franco. After completing his education in Switzerland at the Haute École d'art de Design Genève in 1960, he spent nine months in the Spanish military service. After his service, he returned to Barcelona in 1963, where he founded his own architecture practice, the Ricardo Bofill Taller de Arquitectura, at the age of 23. The practice included not only architects, but filmmakers, philosophers, engineers, writers, and sociologists. In 1968, the firm proposed an architectural design entitled the 'City in Space' as a "kind of manifesto in reaction to the pressing demands of a society in constant transformation". Of the project, the RBTA wrote: "This project for the development of a large housing complex was conceived to form a multifunctional neighbourhood, inspired by a vision of social factors very much in keeping with its time.
The difficulty was to establish structures that were both complex and flexible, capable of quickly assimilating and even facilitating the changes of everyday reality". In 1969, the Ministry of Housing assigned land in Moratalaz, Madrid to the project, however the project was ultimately terminated due to "political, bureaucratic and economic circumstances". Aspects of this early design and its ideology remained prominent in Bofill's work, including ideas of "realisable utopia", "mega-structural character, systems of aggregation, agglomeration and mixture" and "revolutionary action". Bofill was arrested twice on political grounds, once while he was a student at the Barcelona School of Architecture, and again in 1964 after his return to Barcelona. He died due to complications from COVID-19 on 14 January 2022, at the age of 82. Construction and design The building was constructed with precast concrete panels made by mixing oxides with cement to create a polychromatic look. This specific type of prefabricated concrete was developed by the project's engineers; it was weather-resistant and sparked a new age of French concrete construction. The use of both these panels and of cranes allowed the construction to remain cost effective. Bofill credits the "brand new manufacturing system" of prefabricated concrete with the construction of the buildings. The RBTA website, on the construction of Les Espaces d'Abraxas: "The façades were built from prefabricated sections, cut according to their individual shapes and not in framed panels, so that the joints are invisible. These panels are stone, a mixture of sand, gray and white cement and oxides. The very light ochre and violet-blue shades obtained from these mixtures are extremely subtle. The aim of using this contemporary material, which harmonizes with the urban center, while remaining discrete, is to rediscover the qualities of stone and cultural references." The development consists of three separate structures (Le Palacio, Le Théâtre, L'Arc) surrounding a grass-lined plaza. The largest of the three is the 18-storey Le Palacio (The Palace) comprising three buildings arranged in a U-shape, and containing 441 individual 2–5-bedroom units. On the other side of the plaza sits Le Théâtre (The Theatre), the second largest building. Referencing the Roman amphitheatre, the structure is 10 storeys high and semi-circular in shape. Between the two lies the smallest of the three buildings, L'Arc (The Arch). In his 2014 interview with Le Monde, Bofill noted that Le Théâtre had a bigger budget for design and construction, and was "intended for a wealthier category of people". By his own admission, Bofill's design draws heavily from post-modern architectural concepts. The architect has said of the objectives of the building that he "wanted to make an emblematic monument in a very poorly made area". In the decades following World War II, architectural designs began to reuse the aesthetic languages of "past avant-gardes, notably Russian Constructivism" as a symbolic way to "indicate a rediscovered vigour and confidence" and nationalism. Post-modern architecture has been credited with "blending and distorting" recognisable visual principles to create the "uncanny".
Architects Farrell and Furman note the combination of grand proportions and classical Roman, Greek and Baroque shapes in Les Espaces d'Abraxas as being "a unique combination of the romantic sublime and totalitarian awe" and posit this as a possible reason for its popularity as a "backdrop for dystopian feature films". The outdoor plaza, situated in the centre of the development, was designed to mirror the Roman Forums. This communal space was key to Bofill, as he intended for Les Espaces d'Abraxas to "mix social categories" and create community spirit. His intention was to "invest this mass housing with a sense of narrative, to replace the bare functionalism of many urban apartment blocks". This follows the post-modern tradition which saw a move away from the "functionality of modernism" towards more neoclassical forms. The invention of new technologies, like prefabricated concrete, combined with classical motifs allowed a "pseudo luxurious extravagance" to be achieved that was traditionally reserved for the upper classes. There are tree-planted gardens atop the roofs of both Le Théâtre and L'Arc; however, they are not accessible to the inhabitants. Critical reception His work with the French authorities in the 'New Towns' brought Bofill "worldwide fame and consolidated his international practice". Critics, however, have compared his "supposedly ironic use of Classicism with the entirely unironic social housing of Soviet Realism". The post-modern design of the building's shape led to "awkward floor plans for the units inside". In a 2014 interview with Le Monde, Bofill claimed he has "not succeeded in changing the city", claiming that the "unique space suffered from the lack of community spirit specific to France". In 2006, the Noisy-le-Grand local government introduced plans to "demolish parts of the development". This proposal was met with "widespread resentment" from the community, and was subsequently scrapped. Of the proposed demolition, Bofill stated that "Demolishing them would be a lack of culture". In 2019, critic Owen Hatherley discussed Les Espaces d'Abraxas in his article The good, the bad and the ugly – neoclassical architecture in modern times for the international art magazine Apollo. Hatherley criticises the "sinister, domineering quality" of the development, which he attributes to the communist influence at the time, claiming that "here you cannot forget for a moment that you're in a massive modern housing estate". Arts and popular culture In 1985, an exhibition entitled "Architecture, Urbanism and History" was mounted by the Museum of Modern Art in New York City, which focused on the work of Bofill and Leon Krier. The exhibition, which included colour photographs of his buildings, including Les Espaces d'Abraxas, was sponsored by Gerald D. Hines Interests as part of a series of exhibitions to "focus on important younger architects". The exhibition ran from June until September 1985. The unusual and monumental appearance of Espaces d'Abraxas has made it a favorite background set for unreal, often dystopian narratives. It features prominently in Brazil (1985) and in The Hunger Games: Mockingjay – Part 2 (2015), where the characters attempt to flee as a substance similar to tar fills the bucket-like plaza. It also appears in À mort l'arbitre (Kill the Referee) (1984), F.B.I. Frog Butthead Investigators (2012), and the French TV mini-series (2016).
It also appears in music videos by Stéphanie of Monaco ("Ouragan", 1986), Leck ("Fais le L", 2012), ("Break the Silence", 2015), Marwa Loud ("Fallait pas", 2017), Médine ("Grand Paris", 2017), Adel Tawil ("", 2017), and Ufo361 ("Nur zur Info", 2020). The building and its inhabitants are featured in French photographer Laurent Kronental's ongoing photo series Souvenir d'un Futur. The photo series centres on the occupants of the various "Grands Ensembles" in Paris. Gallery See also Walden 7 Les Arcades du Lac Les Echelles du Baroque Antigone, Montpellier List of works by Ricardo Bofill Taller de Arquitectura References 1982 establishments in France Ricardo Bofill buildings Seine-Saint-Denis Postmodern architecture Buildings and structures in Île-de-France
Les Espaces d'Abraxas
Engineering
3,146
19,726,765
https://en.wikipedia.org/wiki/Tag%20%28programming%29
In programming, a tag is an argument to a subroutine that determines how other arguments passed with it are interpreted; it is used as a way to pass an indefinite number of tagged parameters to the subroutine. Notably, tags are used for a number of system calls in AmigaOS v2.0 and onwards. In AmigaOS In earlier versions of AmigaOS, if a system call required setting a large number of parameters, instead of passing them as function arguments, the function would require a pointer to a structure that holds the arguments (for example, intuition.library's OpenWindow() required struct NewWindow with 17 different parameters). Tags were introduced in AmigaOS 2.0 because they "make it possible to add new parameters to system functions without interfering with the original parameters. They also make specifying parameter lists much clearer and easier." A number of third-party software libraries for AmigaOS also use tags extensively. Example The code without tags is obscure (for example, 0, 1 define window colors), while the code with tags is self-documenting; a sketch contrasting the two styles is given at the end of this article. Fewer parameters have to be defined with tags than are in the structure, as OpenWindowTags() will fall back to default parameters. Implementation AmigaOS provides functions for tag handling in its utility.library. Amiga E in particular provides mechanisms for dynamic tag handling, allowing programs to manipulate structures where tags are not known until runtime. This is achieved through various dynamic data structures and associated functions. In general An advantage of tags is that they ease the work with default arguments, since the programmer does not have to specify them or their substitutes. From this follows another advantage: ease of achieving both forward and backward compatibility with external libraries. A program written for an older version of the library will work with a newer one, since the newer library will simply set all the parameters not provided by the program to their default values; and a program written for a newer version of the library will still work with the older version, since the older library will simply pay no attention to the newly introduced tags. A disadvantage of tags is that their processing is slower than simply reading data from a structure or the stack. Additionally, compile-time type checking is lost. See also Named parameter References External links utility.library autodoc Amiga ROM Kernel Reference Manual: Libraries - Tag index AmigaOS MorphOS
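To make the contrast described in the Example section concrete, the following C sketch shows the two calling styles, assuming the AmigaOS 2.0 intuition.library API; the coordinates, colors and title are illustrative values, not a reconstruction of the article's original listing.

#include <intuition/intuition.h>
#include <proto/intuition.h>

struct Window *win;

/* Old style (AmigaOS 1.x): every parameter is a positional field of
 * struct NewWindow, so the reader must know the field layout by heart.
 * The 0 and 1 here are the detail and block pens (window colors). */
struct NewWindow nw = {
    20, 20,                          /* LeftEdge, TopEdge            */
    300, 100,                        /* Width, Height                */
    0, 1,                            /* DetailPen, BlockPen (colors) */
    IDCMP_CLOSEWINDOW,               /* IDCMPFlags                   */
    WFLG_CLOSEGADGET | WFLG_DRAGBAR, /* Flags                        */
    NULL, NULL,                      /* FirstGadget, CheckMark       */
    (UBYTE *)"My Window",            /* Title                        */
    NULL, NULL,                      /* Screen, BitMap               */
    0, 0, 0, 0,                      /* Min/Max width and height     */
    WBENCHSCREEN                     /* Type                         */
};

void open_old(void) { win = OpenWindow(&nw); }

/* New style (AmigaOS 2.0+): self-documenting tag/value pairs terminated
 * by TAG_DONE; any attribute left out falls back to its default value. */
void open_tags(void)
{
    win = OpenWindowTags(NULL,
        WA_Left,        20,
        WA_Top,         20,
        WA_Width,       300,
        WA_Height,      100,
        WA_Title,       (ULONG)"My Window",
        WA_CloseGadget, TRUE,
        TAG_DONE);
}

On the receiving side, a function can walk such a tag list with utility.library calls such as GetTagData(), reading each known tag and substituting a default for any tag the caller omitted.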
Tag (programming)
Technology
475
25,447,068
https://en.wikipedia.org/wiki/SIBLING%20proteins
The family of non-collagenous proteins known as SIBLING proteins, standing for small integrin-binding ligand, N-linked glycoprotein, are components of the extracellular matrix of bone and dentin. Evidence shows that these proteins play key roles in the mineralization of these tissues. The following are categorized as SIBLING proteins: osteopontin (OPN) bone sialoprotein (BSP) dentin matrix protein 1 (DMP1) dentin sialophosphoprotein (DSPP) matrix extracellular phosphoglycoprotein (MEPE) The genes coding for members of the SIBLING protein family are similarly organized and are all located on human chromosome 4q21-23. References Glycoproteins Extracellular matrix proteins Protein families
SIBLING proteins
Chemistry,Biology
168
1,732,846
https://en.wikipedia.org/wiki/Brill%20tagger
The Brill tagger is an inductive method for part-of-speech tagging. It was invented by Eric Brill and described in his 1993 PhD thesis. It can be summarized as an "error-driven transformation-based tagger". It is: a form of supervised learning, which aims to minimize error; and a transformation-based process, in the sense that a tag is assigned to each word and then changed using a set of predefined rules. In the transformation process, if the word is known, the tagger first assigns the most frequent tag; if the word is unknown, it naively assigns the tag "noun" to it. High accuracy is eventually achieved by applying these rules iteratively and changing the incorrect tags. This approach ensures that valuable information such as the morphosyntactic construction of words is employed in an automatic tagging process. Algorithm The algorithm starts with initialization, which is the assignment of tags based on their probability for each word (for example, "dog" is more often a noun than a verb). Then "patches" are determined via rules that correct (probable) tagging errors made in the initialization phase: Initialization: Known words (in vocabulary): assigning the most frequent tag associated with a form of the word. Unknown words (not in vocabulary): assigning a tag by heuristic (naively, "noun"). Rules and processing The input text is first tokenized, or broken into words. Typically in natural language processing, contractions such as "'s", "n't", and the like are considered separate word tokens, as are punctuation marks. A dictionary and some morphological rules then provide an initial tag for each word token. For example, a simple lookup would reveal that "dog" may be a noun or a verb (the most frequent tag is simply chosen), while an unknown word will be assigned some tag(s) based on capitalization, various prefix or suffix strings, etc. (such morphological analyses, which Brill calls Lexical Rules, may vary between implementations). After all word tokens have (provisional) tags, contextual rules apply iteratively to correct the tags by examining small amounts of context. This is where the Brill method differs from other part-of-speech tagging methods such as those using Hidden Markov Models. Rules are reapplied repeatedly, until a threshold is reached or no more rules can apply. Brill rules are of the general form: tag1 → tag2 IF Condition, where the Condition tests the preceding and/or following word tokens, or their tags (the notation for such rules differs between implementations). For example, the rule written in Brill's notation as IN NN WDPREVTAG DT while would change the tag of a word from IN (preposition) to NN (common noun) if the preceding word's tag is DT (determiner) and the word itself is "while". This covers cases like "all the while" or "in a while", where "while" should be tagged as a noun rather than its more common use as a conjunction (many rules are more general). Rules should only operate if the tag being changed is also known to be permissible, for the word in question or in principle (for example, most adjectives in English can also be used as nouns). Rules of this kind can be implemented by simple finite-state machines. See Part of speech tagging for more general information, including descriptions of the Penn Treebank and other sets of tags. Typical Brill taggers use a few hundred rules, which may be developed by linguistic intuition or by machine learning on a pre-tagged corpus. Code Brill's code pages at Johns Hopkins University are no longer on the web.
An archived version of a mirror of the Brill tagger, in its latest version as it was available at Plymouth Tech, can be found on Archive.org. The software uses the MIT License. References External links Brill tagger trained for Dutch (online and offline version) Brill tagger trained for New Norwegian Brill tagger trained for Danish (online demo) Brill tagger trained for English (online demo) taggerXML Modernized version of Eric Brill's Part Of Speech tagger (source code of the Danish and English versions above) Natural language processing
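To illustrate the transformation step described above, here is a small, self-contained C sketch that applies one contextual rule of the WDPREVTAG kind to a provisionally tagged sentence. The data structures and function names are invented for the example (they are not Brill's original code); the rule itself is the "IN NN WDPREVTAG DT while" rule discussed in the article.

#include <stdio.h>
#include <string.h>

/* One token with its (provisional) part-of-speech tag. */
struct Token { const char *word; const char *tag; };

/* tag1 -> tag2 IF the previous token's tag is prev_tag AND the word
 * itself matches: the WDPREVTAG template from Brill's notation. */
static void apply_wdprevtag(struct Token *toks, int n,
                            const char *from, const char *to,
                            const char *prev_tag, const char *word)
{
    for (int i = 1; i < n; i++) {
        if (strcmp(toks[i].tag, from) == 0 &&
            strcmp(toks[i].word, word) == 0 &&
            strcmp(toks[i - 1].tag, prev_tag) == 0)
            toks[i].tag = to;   /* transform the provisional tag */
    }
}

int main(void)
{
    /* The initialization phase has already run: "while" carries its
     * most frequent tag, IN (preposition). */
    struct Token s[] = { {"all", "DT"}, {"the", "DT"}, {"while", "IN"} };
    int n = sizeof s / sizeof s[0];

    apply_wdprevtag(s, n, "IN", "NN", "DT", "while");

    for (int i = 0; i < n; i++)
        printf("%s/%s ", s[i].word, s[i].tag);
    printf("\n");               /* prints: all/DT the/DT while/NN */
    return 0;
}

A full tagger would loop over a few hundred such rules, reapplying them until no rule changes any tag, as the article describes.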
Brill tagger
Technology
881
12,212,927
https://en.wikipedia.org/wiki/Application%20portfolio%20management
IT Application Portfolio Management (APM) is a practice that has emerged in mid to large-size information technology (IT) organizations since the mid-1990s. Application Portfolio Management attempts to use the lessons of financial portfolio management to justify and measure the financial benefits of each application in comparison to the costs of the application's maintenance and operations. Evolution of the practice Likely the earliest mention of the Applications Portfolio was in Cyrus Gibson and Richard Nolan's HBR article "Managing the Four Stages of EDP Growth" in 1974. Gibson and Nolan posited that businesses' understanding and successful use of IT "grows" in predictable stages and a given business' progress through the stages can be measured by observing the Applications Portfolio, User Awareness, IT Management Practices, and IT Resources within the context of an analysis of overall IT spending. Nolan, Norton & Co. pioneered the use of these concepts in practice with studies at DuPont, Deere, Union Carbide, IBM and Merrill Lynch among others. In these "Stage Assessments" they measured the degree to which each application supported or "covered" each business function or process, spending on the application, functional qualities, and technical qualities. These measures provided a comprehensive view of the application of IT to the business, the strengths and weaknesses, and a road map to improvement. APM was widely adopted in the late 1980s and through the 1990s as organizations began to address the threat of application failure when the date changed to the year 2000 (a threat that became known as Year 2000 or Y2K). During this time, tens of thousands of IT organizations around the world developed a comprehensive list of their applications, with information about each application. In many organizations, the value of developing this list was challenged by business leaders concerned about the cost of addressing the Y2K risk. In some organizations, the notion of managing the portfolio was presented to the business people in charge of the Information Technology budget as a benefit of performing the work, above and beyond managing the risk of application failure. There are two main categories of application portfolio management solutions, generally referred to as 'Top Down' and 'Bottom Up' approaches. The first need in any organization is to understand what applications exist and their main characteristics (such as flexibility, maintainability, owner, etc.), typically referred to as the 'Inventory'. Another approach to APM is to gain a detailed understanding of the applications in the portfolio by parsing the application source code and its related components into a repository database (i.e. 'Bottom Up'). Application mining tools, now marketed as APM tools, support this approach. Hundreds of tools are available to support the 'Top Down' approach. This is not surprising, because the majority of the task is to collect the right information; the actual maintenance and storage of the information can be implemented relatively easily. For that reason, many organizations bypass using commercial tools and use Microsoft Excel to store inventory data. However, if the inventory becomes complex, Excel can become cumbersome to maintain. Automatically updating the data is not well supported by an Excel-based solution. Finally, such an Inventory solution is completely separate from the 'Bottom Up' understanding needs. 
Business case for APM According to Forrester Research, "For IT operating budgets, enterprises spend two-thirds or more on ongoing operations and maintenance." It is common to find organizations that have multiple systems that perform the same function. Many reasons may exist for this duplication, including the former prominence of departmental computing, the application silos of the 1970s and 1980s, the proliferation of corporate mergers and acquisitions, and abortive attempts to adopt new tools. Regardless of the cause of the duplication, each application is separately maintained and periodically upgraded, and the redundancy increases complexity and cost. With a large majority of expenses going to manage the existing IT applications, transparency of the current inventory of applications and resource consumption is a primary goal of Application Portfolio Management. This enables firms to: 1) identify and eliminate partially and wholly redundant applications, 2) quantify the condition of applications in terms of stability, quality, and maintainability, 3) quantify the business value/impact of applications and the relative importance of each application to the business, and 4) allocate resources according to the applications' condition and importance in the context of business priorities. Transparency also aids strategic planning efforts and defuses business/IT conflict, because when business leaders understand how applications support their key business functions, and the impact of outages and poor quality, conversations turn away from blaming IT for excessive costs and toward how best to spend precious resources to support corporate priorities. Portfolio Taking ideas from investment portfolio management, APM practitioners gather information about each application in use in a business or organization, including the cost to build and maintain the application, the business value produced, the quality of the application, and the expected lifespan. Using this information, the portfolio manager is able to provide detailed reports on the performance of the IT infrastructure in relation to the cost to own and the business value delivered. Definition of an application In application portfolio management, the definition of an application is a critical component. Many service providers help organizations create their own definition, due to the often contentious results that come from these definitions. Application software — An executable software component or tightly coupled set of executable software components (one or more), deployed together, that deliver some or all of a series of steps needed to create, update, manage, calculate or display information for a specific business purpose. In order to be counted, each component must not be a member of another application. Software component — An executable set of computer instructions contained in a single deployment container in such a way that it cannot be broken apart further. Examples include a Dynamic Link Library, an ASP web page, and a command line "EXE" application. A zip file may contain more than one software component because it is easy to break them down further (by unpacking the ZIP archive). Software application and software component are technical terms used to describe a specific instance of the class of application software for the purposes of IT portfolio management. See application software for a definition for non-practitioners of IT Management or Enterprise Architecture.
Software application portfolio management requires a fairly detailed and specific definition of an application in order to create a catalog of applications installed in an organization. The requirements of a definition for an application The definition of an application has the following needs in the context of application portfolio management: It must be simple for business team members to explain, understand, and apply. It must make sense to development, operations, and project management in the IT groups. It must be useful as an input to a complex function whose output is the overall cost of the portfolio. In other words, there are many factors that lead to the overall cost of an IT portfolio. The sheer number of applications is one of those factors. Therefore, the definition of an application must be useful in that calculation. It must be useful for the members of the Enterprise Architecture team who are attempting to judge a project with respect to their objectives for portfolio optimization and simplification. It must clearly define the boundaries of an application so that a person working on a measurable 'portfolio simplification' activity cannot simply redefine the boundaries of two existing applications in such a way as to call them a single application. Many organizations will readdress the definition of an application within the context of their IT portfolio management and governance practices. For that reason, this definition should be considered as a working start. Examples The definition of an application can be difficult to convey clearly. In an IT organization, there might be subtle differences in the definition among teams and even within one IT team. It helps to illustrate the definition by providing examples. The section below offers some examples of things that are applications, things that are not applications, and things that comprise two or more applications. Inclusions By this definition, the following are applications: A web service endpoint that presents three web services: InvoiceCreate, InvoiceSearch, and InvoiceDetailGet A service-oriented business application (SOBA) that presents a user interface for creating invoices, and that turns around and calls the InvoiceCreate service. (note that the service itself is a different application). A mobile application that is published to an enterprise application store and thus deployed to employee-owned or operated portable devices enabling authenticated access to data and services. A legacy system composed of a rich client, a server-based middle tier, and a database, all of which are tightly coupled. (e.g. changes in one are very likely to trigger changes in another). A website publishing system that pulls data from a database and publishes it to an HTML format as a sub-site on a public URL. A database that presents data to a Microsoft Excel workbook that queries the information for layout and calculations. This is interesting in that the database itself is an application unless the database is already included in another application (like a legacy system). An Excel spreadsheet that contains a coherent set of reusable macros that deliver business value. The spreadsheet itself constitutes a deployment container for the application (like a TAR or CAB file). A set of ASP or PHP web pages that work in conjunction with one another to deliver the experience and logic of a web application. It is entirely possible that a sub-site would qualify as a separate application under this definition if the coupling is loose. 
A web service endpoint established for machine-to-machine communication (not for human interaction), but which can be rationally understood to represent one or more useful steps in a business process. Exclusions The following are not applications: An HTML website. A database that contains data but is not part of any series of steps to deliver business value using that data. A web service that is structurally incapable of being part of a set of steps that provides value. For example, a web service that requires incoming data that breaks the shared schema. A standalone batch script that compares the contents of two databases by making calls to each and then sends e-mail to a monitoring alias if data anomalies are noticed. In this case, the batch script is very likely to be tightly coupled with at least one of the two databases, and therefore should be included in the application boundary that contains the database that it is most tightly coupled with. Composites The following are many applications: A composite SOA application composed of a set of reusable services and a user interface that leverages those services. There are at least two applications here (the user interface and one or more service components). Each service is not counted as an application. A legacy client-server app that writes to a database to store data and an Excel spreadsheet that uses macros to read data from the database to present a report. There are two apps in this example. The database clearly belongs to the legacy app because it was developed with it, delivered with it, and is tightly coupled to it. This is true even if the legacy system uses the same stored procedures as the Excel spreadsheet. Methods and measures for evaluating applications There are many popular financial measures, and even more metrics of different (non-financial or complex) types, that are used for evaluating applications or information systems. Return on investment (ROI) Return on Investment is one of the most popular performance measurement and evaluation metrics used in business analysis. ROI analysis (when applied correctly) is a powerful tool for evaluating existing information systems and making informed decisions on software acquisitions and other projects. However, ROI is a metric designed for a certain purpose – to evaluate profitability or financial efficiency. It cannot reliably substitute for many other financial metrics in providing an overall economic picture of the information solution. Attempts at using ROI as the sole or principal metric for decision making regarding information systems cannot be productive. It may be appropriate in a very limited number of cases/projects. ROI is a financial measure and does not provide information about the efficiency or effectiveness of the information systems. Economic value added (EVA) A measure of a company's financial performance based on the residual wealth calculated by deducting cost of capital from its operating profit (adjusted for taxes on a cash basis); also referred to as "economic profit". Formula: EVA = Net Operating Profit After Taxes (NOPAT) − (Capital × Cost of Capital); a worked example appears at the end of this article. Total cost of ownership (TCO) Total Cost of Ownership is a way to calculate what the application will cost over a defined period of time. In a TCO model, costs for hardware, software, and labor are captured and organized into the various application life cycle stages.
An in-depth TCO model helps management understand the true cost of the application, as it attempts to measure build, run/support, and indirect costs. Many large consulting firms have defined strategies for building a complete TCO model. Total economic impact (TEI) TEI was developed by Forrester Research Inc. Forrester claims TEI systematically looks at the potential effects of technology investments across four dimensions: cost — impact on IT; benefits — impact on business; flexibility — future options created by the investment; risk — uncertainty. Business value of IT (ITBV) The ITBV program was developed by Intel Corporation in 2002. The program uses a set of financial measurements of business value that are called Business Value Dials (Indicators). It is a multidimensional program, including a business component, and is relatively easy to implement. Applied information economics (AIE) AIE is a decision analysis method developed by Hubbard Decision Research. AIE claims to be "the first truly scientific and theoretically sound method" that builds on several methods from decision theory and risk analysis, including the use of Monte Carlo methods. AIE is not used often because of its complexity. References Information technology management
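As a worked illustration of the EVA formula quoted above, the following minimal C program computes economic profit from invented figures; all the numbers are purely illustrative.

#include <stdio.h>

/* Economic Value Added, per the formula above:
 * EVA = NOPAT - (Capital * Cost of Capital). */
int main(void)
{
    double nopat = 1200000.0;      /* net operating profit after taxes     */
    double capital = 8000000.0;    /* invested capital                     */
    double cost_of_capital = 0.10; /* 10% weighted average cost of capital */

    double eva = nopat - capital * cost_of_capital;

    /* 1200000 - 800000 = 400000: a positive EVA means economic profit. */
    printf("EVA = %.2f\n", eva);
    return 0;
}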
Application portfolio management
Technology
2,820
25,000,756
https://en.wikipedia.org/wiki/C7H15N
{{DISPLAYTITLE:C7H15N}} The molecular formula C7H15N (molar mass: 113.20 g/mol, exact mass: 113.1204 u) may refer to: Azocane Dimethylpiperidines 2,6-Dimethylpiperidine 3,5-Dimethylpiperidine Molecular formulas
C7H15N
Physics,Chemistry
80
5,437,698
https://en.wikipedia.org/wiki/James%20K.%20Coyne%20III
James Kitchenman Coyne III (born November 17, 1946) is an American businessman and former politician. From 1981 to 1983, he served one term as a Republican member of the U.S. House of Representatives from Pennsylvania. Biography Coyne was born in Farmville, Virginia, and raised in Abington, Pennsylvania, the son of James Kitchenman Coyne Jr. and Pearl Beatrice Black. He graduated from Yale University in 1968 and received an M.B.A. from Harvard Business School in 1970. He was a lecturer at the Wharton School at the University of Pennsylvania from 1974 to 1979 and was president of the George S. Coyne Chemical Corp., Inc., from 1971 to 1981. Coyne was the supervisor of Upper Makefield Township in 1980. Congress He was elected in 1980 as a Republican to the 97th Congress. He was an unsuccessful candidate for reelection in 1982. Later career After his term in Congress, he served from 1983 to 1985 as a special assistant to President Ronald Reagan and as director of the White House Office of Private Sector Initiatives, in 1985–1986 as chief executive officer of the American Consulting Engineers Council, and as president of the American Tort Reform Association from 1986 to 1988. In 1987, he founded Americans to Limit Congressional Terms. Coyne co-authored (with John Fund) "Cleaning House," which promoted state referendums to limit the terms of Members of Congress. In 1994 he was chosen president of the National Air Transportation Association, where he served until 2012. He married Helen Biddle Mercer on October 24, 1970. They have three children, Alexander Black Coyne (born 1977), Katherine Mercer Coyne (born 1980) and Michael Atkinson Coyne (born 1982). He is a great-great-grandson of Philadelphia manufacturer James Kitchenman. Sources External links 1946 births Living people Businesspeople from Pennsylvania Harvard Business School alumni People from Abington Township, Montgomery County, Pennsylvania Politicians from Bucks County, Pennsylvania People in the chemical industry Republican Party members of the United States House of Representatives from Pennsylvania Yale University alumni Members of Congress who became lobbyists 20th-century members of the United States House of Representatives
James K. Coyne III
Chemistry
435
20,208,066
https://en.wikipedia.org/wiki/Discovery%20and%20development%20of%20triptans
Triptans are a family of tryptamine-based drugs used as abortive medication in the treatment of migraines and cluster headaches. They are selective 5-hydroxytryptamine/serotonin1B/1D (5-HT1B/1D) agonists. Migraine is a complex disease which affects about 15% of the population and can be highly disabling. Triptans have advantages over ergotamine and dihydroergotamine, such as selective pharmacology, a well-established safety record and evidence-based prescribing instructions. Triptans are therefore often the preferred treatment in migraine. History The search for a new anti-migraine drug started at Glaxo in 1972. Studies in the 1960s showed that vasoconstriction from 5-HT, ergotamine and noradrenaline could reduce migraine attacks. Research also showed that the platelet 5-HT level is reduced during migraine. Because there are too many side-effects for 5-HT to be used as a drug, scientists started research on the receptors of 5-HT in order to discover and develop a more specific agonist for 5-HT receptors. Research on the 5-HT receptors and their effects led to the discovery of several types and subtypes of 5-HT receptor. AH24167 showed a vasodilating effect instead of vasoconstriction, due to its agonist effect on another type of 5-HT receptor later assigned the name 5-HT7. AH25086 was the second compound developed; it showed a vasoconstrictive effect but was not released as a drug due to low peroral bioavailability. Continued research led to the discovery of the first triptan drug, sumatriptan, which had both a vasoconstrictive effect and better oral bioavailability. Sumatriptan was first launched in the Netherlands in 1991 and became available in the United States during 1993. Mechanism Triptans are specific and selective agonists for the 5-HT1 receptors. Sumatriptan binds to 5-HT1D receptors; zolmitriptan, rizatriptan, naratriptan, almotriptan, and frovatriptan bind to 5-HT1B/1D receptors; and eletriptan binds to 5-HT1B/1D/1F receptors. Triptans are believed to exert their effects through vasoconstriction (leading to reduced carotid arterial circulation without affecting cerebral blood flow), peripheral neuronal inhibition, or inhibition of transmission through second-order neurons of the trigeminocervical complex. Receptors 5-HT receptors are all G-protein coupled receptors (GPCRs) except for 5-HT3, which is a ligand-gated ion channel. The receptors that have been found to be involved in migraine are the 5-HT1B, 5-HT1D and 5-HT1F receptors. 5-HT1B receptors are found in meningeal arteries; agonism of 5-HT1B causes vasoconstriction of cranial vessels. The 5-HT1D receptors are located primarily in the trigeminal nerve in the central nervous system (CNS). They are also found in vascular smooth muscle, mediating contraction. Agonism of 5-HT1D receptors subdues the release of inflammatory mediators. It has been shown that the 5-HT1B and 5-HT1D receptors in humans have very similar amino acid structures, from which similarities in binding properties can be expected. Design All triptans have an indole structure identical to that of the neurotransmitter 5-HT. The classic triptan structure contains a side chain on the indole ring and a basic nitrogen at a similar distance from the indole structure. The main structural difference between the triptans is the position of the sulfonamide and the side chain attached to it (see figure 1 and table 1). Rizatriptan and zolmitriptan have, instead of a sulfonamide, a triazole and a 2-oxazolidone respectively.
Another exception to the classic structure is seen in eletriptan, where the nitrogen-alkyl chain connected to the indole ring is replaced with a dimethyl-pyrrolidine, and in naratriptan, where the nitrogen-alkyl chain is replaced with a 1-methyl-piperidine ring. One of the frovatriptan side chains forms an additional ring with the indole, resulting in a carbazole ring system. Structures of the triptans The 5-HT1B/D pharmacophore The 5-HT1B and 5-HT1D receptors are considered very similar: they share amino acid homology, and their ligands display similar binding properties; thus they have a similar pharmacophore. The pharmacophore model for these receptors' ligands is qualitative and defines the relative positions of important groups. It is defined by the following five main features: an aromatic group (usually the indole), a protonated amine (a donor of a hydrogen bond), an acceptor of a hydrogen bond, an additional hydrogen bond site (both donor and acceptor) and a hydrophobic region located between the two hydrogen bond sites; see figure 2. The main binding points were concluded to be the protonated amine and the hydrogen bond site. It was observed that the double bond region in the indole was necessary for agonism in this series of compounds. Figure 3 shows how different drugs fit the pharmacophore, with C- and N-linked analogues of a 5-HT1D agonist. The marked sites on the figure are responsible for the affinity. The pharmacophore can be characterized as amphipathic, meaning that the structure has both hydrophobic and hydrophilic groups. Relevant structural features of triptans and binding to the receptor Triptan structures were designed from the structure of 5-HT to attain affinity for 5-HT receptors, hence the identical indole structure. The hydroxyl group (-OH) on the six-membered ring of the indole core and the alkyl-amine side chain at position C3 of 5-HT have been replaced with other groups, such as sulfonamides or azole-ring derivatives and different amine-alkyl side chains. An electronegative group can form a hydrogen bond with Thr in the pocket of the receptor. Sulfonamide derivatives attached to the six-membered ring of the indole structure have electronegative properties, as do the triazole and 2-oxazolidone on rizatriptan and zolmitriptan respectively. This can increase the binding ability of the compound and the efficacy, especially with the 5-HT1D receptor. A schematic drawing of the binding of sumatriptan to the 5-HT1D receptor can be seen in figure 4. One study showed that sumatriptan fits better in the binding site of the receptor when the side chain with the protonated nitrogen atom is folded back over the indole structure. This alignment contributes to the hydrogen bonding between the nitrogen in the sulfonamide and Ser138 in the binding site. It is also favorable to the formation of the hydrogen bond between the oxygen of the sulfonamide and Thr202. Other binding in the pocket of the binding site occurs between the nitrogen atom in the five-membered ring of the indole structure of the triptan and the amino acid Ser352. This energetically favorable position of the agonist makes additional binding of the ligand to other Ser residues in the binding site possible, along with additional anchoring between Phe in the pocket of the binding site and the indole of the agonist. The binding of Phe and the triptan is caused by π-stacking interactions of the indole and the amino acid, with an additional contribution to this interaction from the dispersive effect of the amino acid leucine (Leu; not shown in figure 4).
The amino acids Trp343 and Tyr346 both have electron-rich π-systems in their aromatic structures. With their position in the binding site they create a sort of aromatic cage around the protonated nitrogen atom of the side chain at position C3 of the triptans (this nitrogen atom is protonated under physiological conditions), and thereby stabilize the ionic bond the nitrogen atom has formed with a carboxylate on aspartic acid. Side chains of the surrounding amino acids can have an effect on the binding of the nitrogen atom; mainly, three Phe residues can affect the methyl groups bound to the nitrogen atom (not shown in figure 4). Eletriptan has a higher affinity for the receptor, which is probably a result of the bulky substituents of its structure. The amine is protonated at physiological pH, which promotes uptake. The uptake rate of the agonist differs depending on whether the amine in R2 is primary, secondary or tertiary, but the latter seems to give the best results. For the R1 substituent, electron-rich sulfonamide and amide groups have shown the best results in receptor binding and activity. It has been observed that there is a relationship between absorption and molecular size; hence, larger hydrophilic molecules tend to have poor absorption. A small R1 substituent is necessary to maintain the rapid oral bioavailability of triptans. By placing an electron-withdrawing group or a large group at position C2 of the indole structure, the 5-HT agonist is converted into an antagonist. This is thought to be because the indole ring is then unable to occupy the aromatic part of the binding site. Triptan drugs Properties of formulations Sumatriptan was the pioneer drug in this class. Second-generation triptans such as zolmitriptan, naratriptan, rizatriptan, almotriptan, eletriptan and frovatriptan soon became available. Different triptans are available in different formulations and in different strengths (see table 2). They have been formulated as subcutaneous injections, oral tablets, orally disintegrating tablets, nasal sprays and rectal suppositories. The delivery system of a triptan may play an important role in its onset of action. The selection of an anti-migraine drug for a patient depends on their symptoms. The first selective 5-HT1B/1D agonist, sumatriptan, was first synthesized as a subcutaneous injection, then as an oral tablet and more recently as a nasal spray; it is also available in some countries as a suppository. The subcutaneous injection is the fastest way to stop a rapidly progressing migraine attack. The sumatriptan nasal spray provides a faster onset of action than the tablets but produces a similar headache response at 2 hours. Some patients prefer the nasal spray as it works more rapidly than the tablets and does not have as many adverse effects as the subcutaneous injection. The nasal spray is nonetheless not suitable for all patients, because some experience a bad taste and a lack of consistency of response. Zolmitriptan was developed with the strategy of creating a more lipophilic compound, with faster absorption and a better ability to cross the blood–brain barrier than sumatriptan. It is available as tablets, orally disintegrating tablets and, in some countries, as a nasal spray. Rizatriptan is available as tablets and orally disintegrating tablets, but naratriptan, almotriptan, eletriptan and frovatriptan are, for now, only available as tablets. (Table 2 footnote a: the specific enzyme has not yet been reported.) The U.S.
Food and Drug Administration (FDA) approved a new drug on April 15, 2008, which is a combination of sumatriptan 85 mg and naproxen 500 mg (an NSAID). Triptans and NSAIDs work on distinct mechanisms involved in migraine and therefore may offer improved treatment when administered together. Pharmacokinetics Pharmacokinetic properties (see table 3) are important when new drugs are developed. Patients seek a rapid onset of action to relieve the headache. A relatively short tmax, good bioavailability and lipophilicity are pharmacokinetic properties that have been associated with rapid onset of action. It has been speculated that a good ability to cross the blood–brain barrier and a relatively long terminal elimination half-life may result in a lower incidence of headache recurrence. Sumatriptan and rizatriptan undergo first-pass hepatic metabolism, resulting in lower bioavailability. (Table 3 legend: t1/2 = elimination half-life; tmax = time to reach peak plasma drug concentration; ClR = renal clearance; LogD pH7.4 = measure of lipophilicity at pH 7.4, with increasing numbers indicating greater lipid solubility; VD = volume of distribution; M = male; F = female.) Future research Most triptans were developed and introduced in the 1990s. Further studies have not shown much promise regarding the development of new triptans with a better duration of action, efficacy or safety profile. Therefore, it is unlikely that further variations will be developed, and new anti-migraine drugs are likely to have another mechanism of action. References Triptans
Discovery and development of triptans
Chemistry,Biology
2,822
1,683,023
https://en.wikipedia.org/wiki/Birkhoff%27s%20theorem%20%28electromagnetism%29
In physics, in the context of electromagnetism, Birkhoff's theorem concerns spherically symmetric static solutions of Maxwell's field equations of electromagnetism. The theorem is due to George D. Birkhoff. It states that any spherically symmetric solution of the source-free Maxwell equations is necessarily static. Pappas (1984) gives two proofs of this theorem, using Maxwell's equations and Lie derivatives. It is a limiting case of Birkhoff's theorem (relativity), obtained by taking the flat metric without backreaction. Derivation from Maxwell's equations The source-free Maxwell's equations state that $\nabla \cdot \mathbf{E} = 0$, $\nabla \cdot \mathbf{B} = 0$, $\nabla \times \mathbf{E} = -\partial \mathbf{B}/\partial t$ and $\nabla \times \mathbf{B} = \mu_0 \varepsilon_0\, \partial \mathbf{E}/\partial t$. Since the fields are spherically symmetric, they depend only on the radial distance $r$ in spherical coordinates. The fields are purely radial, as non-radial components cannot be invariant under rotation, which would be necessary for symmetry. Therefore, we can rewrite the fields as $\mathbf{E} = E(r,t)\,\hat{\mathbf{r}}$ and $\mathbf{B} = B(r,t)\,\hat{\mathbf{r}}$. We find that the curls must be zero, since $\nabla \times \big(E(r,t)\,\hat{\mathbf{r}}\big) = \mathbf{0}$ and $\nabla \times \big(B(r,t)\,\hat{\mathbf{r}}\big) = \mathbf{0}$ (see the identity worked out at the end of this article). Moreover, we can substitute into the source-free Maxwell equations to find that $-\partial \mathbf{B}/\partial t = \mathbf{0}$ and $\mu_0 \varepsilon_0\, \partial \mathbf{E}/\partial t = \mathbf{0}$. Simply dividing by the constant coefficients, we find that both the magnetic and the electric field are static: $\partial \mathbf{E}/\partial t = \mathbf{0}$, $\partial \mathbf{B}/\partial t = \mathbf{0}$. Derivation using Lie derivatives Defining the 1-form $\epsilon$ and the 2-form $\beta$ in $\mathbb{R}^3$ as $\epsilon = E_x\,dx + E_y\,dy + E_z\,dz$ and $\beta = B_x\,dy \wedge dz + B_y\,dz \wedge dx + B_z\,dx \wedge dy$, and using the Hodge star operator, we can rewrite Maxwell's equations with these forms as $d\epsilon = -\partial_t \beta$, $d\beta = 0$, $d{\star}\epsilon = 0$ and $d{\star}\beta = \mu_0 \varepsilon_0\, \partial_t ({\star}\epsilon)$. The spherical symmetry condition requires that the Lie derivatives of $\epsilon$ and $\beta$ with respect to the vector field $X$ that represents their rotations are zero, $\mathcal{L}_X \epsilon = 0$ and $\mathcal{L}_X \beta = 0$, by the definition of the Lie derivative as the directional derivative along $X$. Therefore, $\mathbf{E}$ is equivalent to the radial vector $\mathbf{x}$ under rotation, and we can write $\mathbf{E} = f(r,t)\,\mathbf{x}$ for some function $f$. Because the product of the components of the vector with the coordinate differentials is just its length times $dr$, that is $x\,dx + y\,dy + z\,dz = r\,dr$, substituting back into our equation and rewriting gives $\epsilon = g(r,t)\,dr$ for a function $g$. Taking the exterior derivative of $\epsilon$, we find by definition that $d\epsilon = \partial_r g\; dr \wedge dr = 0$. And using our Maxwell equation that $d\epsilon = -\partial_t \beta$, we get $\partial_t \beta = 0$. Thus, we find that the magnetic field is static. Similarly, using the second rotational invariance equation, applied to the 1-form ${\star}\beta$, we can find that the electric field is static. Therefore, the solution must be static. References Electrodynamics Eponymous theorems of physics
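The step asserting that the curls vanish follows from a short identity. Writing $\hat{\mathbf{r}} = \nabla r$ and using the product rule for the curl:

$\nabla \times \big( f(r,t)\,\hat{\mathbf{r}} \big) = (\nabla f) \times \hat{\mathbf{r}} + f\,\big( \nabla \times \hat{\mathbf{r}} \big) = \frac{\partial f}{\partial r}\,\hat{\mathbf{r}} \times \hat{\mathbf{r}} + f\,\big( \nabla \times \nabla r \big) = \mathbf{0},$

since $\nabla f = (\partial f / \partial r)\,\hat{\mathbf{r}}$ for a function of $r$ alone, the cross product of a vector with itself vanishes, and the curl of a gradient is identically zero.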
Birkhoff's theorem (electromagnetism)
Physics,Materials_science,Mathematics
415
42,110,287
https://en.wikipedia.org/wiki/Concurrent%20testing
Research and literature on concurrency testing and concurrent testing typically focus on testing software and systems that use concurrent computing. The purpose is, as with most software testing, to understand the behaviour and performance of a software system that uses concurrent computing, particularly assessing the stability of a system or application during normal activity. Research and study of program concurrency started in the 1950s, with research and study of the testing of program concurrency appearing in the 1960s. Examples of problems that concurrency testing might expose are incorrect shared memory access and unexpected order sequences of message or thread execution. Resource contention resolution, scheduling, deadlock avoidance, priority inversion and race conditions are also highlighted. Selected history & approaches of testing concurrency Approaches to concurrency testing range from the limited unit-test level right up to the system-test level. Some approaches to research and application of testing program/software concurrency have been: Execute a test once. This was considered to be ineffective for testing concurrency in a non-deterministic system and was equivalent to the testing of a sequential non-concurrent program on a system. Execution of the same test sequence multiple times. Considered likely to find some issues in non-deterministic software execution. This later became called non-deterministic testing (a sketch of this approach appears at the end of this article). Deterministic testing. This is an approach to set the system into a particular state so that code can be executed in a known order. Reachability testing. An attempt to test synchronisation sequence combinations for a specified input (shared variable access not being corrupted, effectively testing race conditions on variables). The sequence is typically derived from non-deterministic test execution. Structural approaches / static analysis. Analysis of code structure and static analysis tools. An example was a heuristic approach. This led to code checker development, for example jlint. Research has also compared static analysis tools and code checkers for concurrency bugs. See also List of tools for static code analysis. Multi-user approach. This is an approach to testing program concurrency by looking at multiple user access, either serving different users or tasks simultaneously. Testing software and system concurrency should not be confused with stress testing, which is usually associated with loading a system beyond its defined limits. Testing of concurrent programs can exhibit problems when a system is performing within its defined limits. Most of the approaches above do not rely on overloading a system. Some literature states that testing of concurrency is a prerequisite to stress testing. Lessons learned from concurrency bug characteristics study A study in 2008 analysed the bug databases of a selection of open-source software. It was thought to be the first real-world study of concurrency bugs. 105 bugs were classified as concurrency bugs and analysed, split into 31 deadlock bugs and 74 non-deadlock bugs. The study had several findings, for potential follow-up and investigation: Approximately one-third of the concurrency bugs cause crashes or hanging programs. Most non-deadlock concurrency bugs are atomicity or order violations, i.e. focusing on atomicity (protected use of shared data) or ordering will potentially find most non-deadlock bugs. Most concurrency bugs involve one or two threads, i.e. heavy simultaneous usage is not the trigger for these bugs.
There is a suggestion that pairwise testing may be effective at catching these types of bugs. Over 20% (7/31) of the deadlock bugs occurred with a single thread. Most deadlock concurrency bugs (30/31) involved only one or two resources, an implication that pairwise testing from a resource-usage perspective could be applied to reveal deadlocks. See also Software testing Scalability testing Load testing Software performance testing Scenario analysis Simulation Stress test (hardware) System testing References General References Software testing
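As a minimal sketch of the non-deterministic testing approach listed above (executing the same test sequence multiple times and watching for run-to-run differences), the following C program using POSIX threads exercises an unprotected read-modify-write on a shared counter, an atomicity violation of the kind the 2008 study found to dominate non-deadlock bugs. The program and its iteration counts are illustrative, not taken from the article.

#include <pthread.h>
#include <stdio.h>

/* Two threads increment a shared counter without synchronization.
 * counter++ is a non-atomic load/increment/store, so concurrent
 * updates can be lost. */
static long counter;

static void *worker(void *arg)
{
    (void)arg;
    for (int i = 0; i < 100000; i++)
        counter++;              /* unprotected read-modify-write */
    return NULL;
}

int main(void)
{
    /* Non-deterministic testing: repeat the identical test and
     * compare the results across runs. */
    for (int run = 1; run <= 5; run++) {
        pthread_t a, b;
        counter = 0;
        pthread_create(&a, NULL, worker, NULL);
        pthread_create(&b, NULL, worker, NULL);
        pthread_join(a, NULL);
        pthread_join(b, NULL);
        /* A correct program would print 200000 every time; lost
         * updates show up as smaller, run-dependent totals. */
        printf("run %d: counter = %ld\n", run, counter);
    }
    return 0;
}

Compiled with cc -pthread and run without optimization, failing runs typically print totals below 200000 and rarely the same total twice, which is exactly the run-to-run variance that repeated execution is meant to expose.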
Concurrent testing
Engineering
735
18,119,915
https://en.wikipedia.org/wiki/S.%20A.%20Choudum
Sheshayya A. Choudum (born 1947) is a former professor and chair of the department of mathematics at IIT Madras. He has worked extensively on chromatic numbers, degree sequences, graph enumeration, and bivariegated graphs. Choudum hails from Manvi, Raichur district, Karnataka. He completed an M.Sc. in Mathematics from Karnataka University, Dharwar, and a Ph.D. in 1975 from IIT Madras. From there, he moved to the Department of Mathematics at the University of Mumbai. Prior to joining the Computer Science Department at IIT Madras, he was with the Department of Mathematics of Madurai Kamaraj University. While at Madurai, he visited the University of Reading to work with Crispin Nash-Williams. Choudum has guided 10 Ph.D. students in graph theory. Books A First Course in Graph Theory by S.A. Choudum, Macmillan, 1999 Graph Theory, by S.A. Choudum, NPTEL (IITM), India, 2011. References External links Dr. Choudum's webpage at IIT Madras 1947 births Kannada people Living people Graph theorists People from Raichur IIT Madras alumni Academic staff of IIT Madras Academic staff of Madurai Kamaraj University
S. A. Choudum
Mathematics
266
69,296,620
https://en.wikipedia.org/wiki/Strategic%20stability
Strategic stability is a concept in international relations indicating a lack of incentives for any party to initiate a nuclear first strike; the term is also used in the broader sense of a state of the international environment that helps to avoid a war. Strategic stability characterizes the degree of deterrence provided by mutual assured destruction and depends on the survivability of the strategic forces after a first strike. Definition The meaning of the term depends on the context. Edward Warner, a U.S. Secretary of Defense's representative at the New START talks, has observed that strategic stability can be defined at multiple levels, from the narrowest to the broadest: The narrowest sense, described in the rest of this article – making the first strike less tempting in the event of a crisis (also known as crisis stability) and the absence of incentives to build up the nuclear arsenals (avoiding arms race instability) – is used by the nuclear-weapon states, including the United States. Even in this narrowest sense there is no universally agreed-upon definition of strategic stability or of ways to quantify it, as the vulnerabilities are country-specific. A broader sense describes the condition of no armed conflict between the nuclear powers. In its broadest sense strategic stability defines an international situation, not necessarily global, which is not conducive to an outbreak of a war. Governments make confusing references to strategic stability, sometimes intentionally: the most consistent is the US government, which typically uses the term in the context of reducing incentives to strike first, although occasionally the term is still used in a broader sense; the Russian government uses all three meanings listed above interchangeably; the position of China ranges from refusing to acknowledge the applicability of the term to the PRC (since, in the Chinese view, stability requires at the minimum a nuclear balance) to statements that disarmament efforts shall have strategic stability and the indivisibility of security as their goals. Evolution Although the traditional view of the impact of strategic stability, "to make a first strike less plausible", was clearly articulated only in 1990 in a joint US-Soviet statement, the corresponding ideas date back to the early 1950s (the exact roots are hard to identify, as many authors were "circling around" the topic at the time). During the development of the concept (until the early 1960s) the adjective "strategic" was rarely used; most authors used the term "stability" instead, mostly in the sense of the modern crisis stability. The early thinking that evolved into the discussion of stability dates to as early as 1946, when the dueling views of Bernard Brodie and William L. Borden were expressed. Brodie considered the nuclear bombs to be an effective weapon when used against cities, while Borden argued that in the almost inevitable future nuclear war the prime target should be the nuclear forces of the enemy, as "attacking cities ... can so easily be carried out later", and the "assets of surprise and the initiative" should not be squandered on them. In combination, these views reflect both sides of the strategic stability framework: the problem of the vulnerability of the strategic forces to a surprise attack, and protecting the ability for nuclear retaliation as a solution.
US discourse in the early 1950s concentrated on the vulnerability of the Strategic Air Command (SAC) forces to a surprise Soviet attack, due to the concentration of its airplanes and atomic bombs at a few densely packed airfields. Proposed solutions involved both making the US nuclear forces more survivable and launching US nuclear bombers preemptively in the case of an imminent attack by the USSR, with the emphasis on preemption. In the words of Eisenhower, the US strategic force "once in the air could be recalled ... if it stayed on the ground it might never get off". A glimpse of the concept of mutual assured destruction (MAD) can be seen in the National Security Council Report 162/2 (NSC-162/2) of October 30, 1953: "a stage of atomic plenty and ample means of delivery [for both sides] ... could create a stalemate, with both sides reluctant to initiate general warfare". The Killian Report of the Science Advisory Committee predicted in 1955 the arrival, in the middle of the 1960s, of "Period IV", when "neither country can derive a winning advantage, because each country will possess enough multimegaton weapons and adequate means of delivering them." However, the authors of the report did not recognize that MAD can lead to strategic stability, declaring instead that Period IV would be "fraught with danger" due to instability, and recommended measures to delay its arrival. The report concentrated on unilateral American moves (like building a large quantity of intercontinental ballistic missiles) and completely ignored the possibility of negotiating arms control agreements or other security-building measures with the USSR. Eisenhower disliked the idea of building a lot of missiles and fighting a war in which "Russians can fire 1000 [missiles] at us and we can fire 1000 a day at them", and in 1955 proposed the "Open Skies" agreement that would enable each side to perform aerial reconnaissance over the territory of the other side, thus checking that no preparations for a surprise attack were underway. The "counterintuitive" idea that the Americans are safer when the Soviets know that the US is not getting ready for a first strike was an important step towards the development of the concept of strategic stability (although not yet labeled as such). The USSR rejected the proposal, arguing that the information gathered could actually facilitate a surprise attack. On the American side, the importance of arms control agreements was stressed in 1957 in the Gaither Report. The US Navy, while competing for government funds with the SAC and land-based missiles, advanced the thinking on crisis stability by introducing the concept of "finite deterrence": a small number of missiles on highly survivable submarines not only can provide the same deterrence as a much larger number of land-based missiles, but more time for making decisions becomes available in the moment of crisis, as the retaliation strike loses its launch-now-or-never quality. By 1958 the USSR had recognized the dangers of the fear of a first strike by an opponent. However, the US-USSR Surprise Attack Conference (Geneva, November 10 to December 18, 1958) was a failure due to divergent goals: the Americans were looking to identify technical solutions for preventing a first strike, while the Soviets tried to include broader issues, like reducing the presence of US forces in Germany.
During the preparations for the conference, one of the earliest official applications of the term "stability" in the nuclear strike context, defined as "freedom from the threat of surprise attack", appeared in an American document. The 1950s also witnessed the development of another aspect of strategic stability: focusing on interactions between the states and taking the potential thinking of the other side into consideration. The American experts then assumed that the Soviet approach was identical to the American one. This logical leap of treating an adversary as a "mirror image" of oneself was taken despite the official rhetoric about the aggressive aims of the USSR and the dangers of its ideology, and the validity of this assumption is impossible to verify due to the Russian archives on the subject being still closed (as of 2013). The foundations of strategic stability reached a wide audience through an article by Albert Wohlstetter in the Foreign Affairs magazine, "The Delicate Balance of Terror" (1958). Wohlstetter's ideas influenced Thomas Schelling, who in December 1958 published a RAND article, "Surprise Attack and Disarmament", where he argued that the important condition for deterrence "is not the 'balance' — the sheer equality or symmetry in the situation — ... it is the stability of the balance". Therefore, per Schelling, the "good" weapons are the ones targeting the opponent's society at large and useless against his strategic forces, while the "bad" ones are able to reduce the opponent's ability to strike back, thus providing a "premium on haste" and increasing the chance of a nuclear war. The concept of strategic stability became widespread among experts in the early 1960s, while the US government accepted it later, when "the stability of our deterrent" language appeared in the annual Draft Presidential Memorandum on Strategic Offensive and Defensive Forces in 1969. In the post-Cold War situation the concept of strategic stability, along with arms control, appeared to have lost its significance. With Russia no longer considered a peer competitor, constraints on American behavior did not look justified, especially when the new threats from regional adversaries and China were considered. Due to the new balance of forces, the survivability of forces after a first strike became much less of an issue for the USA than for Russia. Dall'Agnol and Cepik argued in 2021 that the U.S. withdrawal from the ABM Treaty in 2001 began the erosion of the institutional foundations of strategic stability in the 21st century. Brustlein suggests that in this environment the Europeans are the ones who should be worried about strategic stability. Crisis stability and arms race stability The traditional definition of crisis stability belongs to Schelling: the crisis is stable "if neither side has or perceives an incentive to use nuclear weapons first out of the fear that the other side is about to do so". Crisis instability is one pathway through which a political or conventional armed conflict can turn nuclear. James M. Acton defines arms race stability as "the absence of perceived or actual incentives to augment a nuclear force ... out of the fear that in a crisis an opponent would gain a meaningful advantage by using nuclear weapons first".
Acton goes further by arguing that crisis stability and arms race stability are two views of the same "concern that an adversary might use nuclear weapons first in a crisis", observed on different timescales, with the classic crisis stability corresponding to the shortest decision times and the arms race instability to the longest ones. Acton considers the following timescales and corresponding force posture adjustments: the decision for the first ("preemptive") strike needs to be taken in minutes to a few days. While never used, both the Americans and the Soviets clearly kept this option in their war planning in the past and rightfully expected the other side to harbor similar plans; raising the alert level of the nuclear forces (dispersing the planes and submarines, mating the nuclear warheads to their carriers) takes hours to days. Despite its inevitable escalatory nature and the increased probability of an accidental launch, this change in posture has been utilized a few times by the USSR (during the Cuban Missile Crisis and Able Archer 83), the USA (in August 1978, when two Soviet ballistic missile submarines (SSBNs) approached the East Coast of the United States), and China (during the Sino-Soviet border conflict); moving the weapons (for example, the deployment of Soviet missiles to Cuba in 1962) takes months to years; building up the arsenal (arms race instability) requires years and is frequently driven by survivability concerns. Critique The value of strategic stability was questioned from the very beginning. Brustlein points to two negative effects of achieving strategic stability: adversaries might actually be encouraged to initiate or expand low-level conflicts due to being certain that a nuclear escalation is unfeasible (cf. the stability–instability paradox); limiting military spending can be elusive, as deterrence requires credibility and the latter is impossible without the means to win a nuclear war, "arms control can only function when it is not needed". Notes References International relations theory Nuclear warfare
Strategic stability
Chemistry
2,363
186,758
https://en.wikipedia.org/wiki/Dispensationalism
Dispensationalism is a theological framework for interpreting the Bible which maintains that history is divided into multiple ages called "dispensations" in which God interacts with his chosen people in different ways. It is often distinguished from covenant theology. These are two competing frameworks of Biblical theology that attempt to explain overall continuity in the Bible. Coining of the term "dispensationalism" has been attributed to Philip Mauro, a critic of the system's teachings, in his 1928 book The Gospel of the Kingdom. Dispensationalists use a literal interpretation of the Bible and believe that divine revelation unfolds throughout the Bible. They believe that there is a distinction between Israel and the Church, and that Christians are not bound by Mosaic law. They maintain beliefs in premillennialism, Christian Zionism, and a rapture of the Church that will happen before the Second Coming of Christ, generally seen as happening before a period of tribulation. Dispensationalism was systematized and promoted by John Nelson Darby and the Plymouth Brethren in the mid-19th century. It began its spread in the United States during the late 19th century through the efforts of evangelists such as James Inglis, James Hall Brookes and Dwight L. Moody, the programs of the Niagara Bible Conference, and the establishment of Bible institutes. With the dawn of the 20th century, Cyrus Scofield introduced the Scofield Reference Bible, which crystalized dispensationalism in the United States. Dispensationalism has become popular within American evangelicalism. It is commonly found in nondenominational Bible churches, as well as Baptist, Pentecostal, and Charismatic groups. Protestant denominations that embrace covenant theology tend to reject dispensationalism. According to the system's critics, most theologians acknowledge that there is no specific sequence of end-times events defined in the Bible. The Scofield Bible has been called "the most dangerous heresy currently to be found within Christian circles". Overview Dispensationalism is a theological framework that views history as divided into distinct periods in which God interacts with mankind in specific ways. Scofield, in his Scofield Reference Bible, defined a dispensation as "a period of time during which man is tested in respect of obedience to some specific revelation of the will of God". Charles Ryrie took issue with Scofield's definition as too simple, stating that such a definition opened the system to attack from nondispensationalists. Ryrie separates the term age from dispensation, stating that the two terms are not synonymous in meaning while defining a dispensation as "a distinguishable economy in the outworking of God's purpose". He further suggests that the defining characteristics of a dispensation are the distinct governing relationship in which God interacts with mankind during that period, and the resulting responsibility placed upon mankind in that period. Evangelical Christians generally agree that there are distinct periods in God's plan for humanity. Dispensationalist theologians tend to hold "a particular view of the parallel-but-separate roles and destinies of Israel and the [Christian] church", with a "careful separation ... between what is addressed to Israel and what is addressed to the church. What is addressed to Israel is 'earthly' in character and is to be interpreted 'literally'." 
This view is distinct from covenant theology, which holds that rather than having separate plans, "God has one people, one people of God throughout redemptive history, called 'Israel' under the Old Testament, and called 'the church' under the New." Philip Mauro, a critic of the system's teachings in his 1928 book The Gospel of the Kingdom, is considered to be the first to coin the term "dispensationalism" to describe the theological framework that had made inroads into fundamentalism, calling it "a subtle form of modernism". Typical divisions The number of dispensations may vary from three to eight, but the typical seven-dispensation scheme is as follows: Innocence – Adam under probation prior to the Fall of Man. Ends with expulsion from the Garden of Eden in Genesis 3. Some refer to this period as the Adamic period or the dispensation of the Adamic covenant or Adamic law. Conscience – From the Fall to the Great Flood. Ends with the worldwide deluge. Human or Civil Government – After the Great Flood, humanity is made responsible for enacting the death penalty and is given the authority to govern. Ends with the dispersion at the Tower of Babel. Some use the term "Noahide law" in reference to this period of dispensation. Promise or Patriarchal Rule – From Abraham to Moses. Ends with the refusal to enter Canaan and the 40 years of unbelief in the wilderness. Some use the terms "Abrahamic law" or "Abrahamic covenant" in reference to this period of dispensation. Law – From Moses to the crucifixion of Jesus Christ. Ends with the scattering of Israel in AD 70. Some use the term "Mosaic law" in reference to this period of dispensation. Grace – From the cross to the rapture of the church, seen by some groups as being described in 1 Thessalonians and the Book of Revelation. The rapture is followed by the wrath of God, constituting the Great Tribulation. Some use the term "Age of Grace" or "the Church Age" for this dispensation. Millennial Kingdom – A literal 1,000-year reign of Christ on earth, centered in Jerusalem, ending with God's judgment on the final rebellion. Variants Classic dispensationalism Ultradispensationalism (Bullingerism) Hyperdispensationalism (Mid-Acts dispensationalism) Revised dispensationalism Progressive dispensationalism Theology Purpose of God in the world According to John Walvoord, God's purpose in the world is to manifest his glory. Charles Ryrie writes that dispensational soteriology focuses on man's salvation as the means God uses to glorify himself. Biblical literalism A key element of dispensationalism is its use of the historical-grammatical hermeneutic to obtain a consistent, literal interpretation of the text. In this method, scripture is to be interpreted according to the normal rules of human language in its entirety. This leads dispensationalists to take eschatological passages in the Bible literally. Charles Ryrie suggests that a non-literal hermeneutic is the reason amillennialists apply Old Testament promises made to Israel "spiritually" to the church, and covenant premillennialists see some prophecies as fulfilled and others as not. Progressive revelation Progressive revelation is the doctrine that each successive book of the Bible provides further revelation of God and his program. Theologian Charles Hodge wrote that the progressive character of divine revelation is gradually unfolded until the fullness of truth is revealed.
Charles Ryrie wrote that the Bible is not viewed as a textbook on theology, but rather as a continually unfolding revelation of God through successive ages where there are distinguishable stages in which God introduces mankind to new responsibilities. Covenant theology and dispensationalist theology disagree regarding the meaning of revelation. Covenant theology views the New Testament as the key to interpreting the Old Testament. For dispensationalists, the Old Testament is interpreted on its own and the New Testament contains new information which can build on the Old Testament but cannot change its meaning. Each stands alone, rather than the Old Testament being reread through the lens of the New Testament. Distinction between Israel and the Church Dispensationalists see a historic and demographic distinction between Israel and the Christian Church. For them, Israel is an ethnic nation consisting of Hebrews (Israelites), beginning with Abraham. The Church, on the other hand, consists of all saved individuals from the "birth of the Church" in the book of Acts until the time of the rapture. Classic dispensationalists refer to this period as a "parenthesis", a temporary interlude in the progress of Israel's prophesied history when God has paused his dealing with Israel and is dealing with his Church. There are differing views within dispensationalism as to when the church age began. Classic dispensationalism considers Pentecost in Acts 2 to be the beginning of the Church as distinct from Israel. Charles Finney wrote in 1839 that Pentecost was "the commencement of a new dispensation", emphasizing the role of the Holy Spirit as a distinction. Cyrus Scofield did not make Pentecost itself the turning point, but did emphasize its role in dividing the dispensations of "Law" and "Grace". In contrast, hyperdispensationalists suggest that the church started later in Acts ("Mid-Acts") with the ministry of Paul, identifying the start of the church as occurring between the salvation of Saul in Acts 9 and the Holy Spirit's commissioning of Paul in Acts 13. E. W. Bullinger and the ultradispensationalists taught that the church began in Acts 28. According to progressive dispensationalism, the distinction between Israel and the Church is not mutually exclusive, as there is a recognized overlap between the two. The overlap includes Jewish Christians like James, brother of Jesus, who integrated Jesus's teachings into the Jewish faith, and Christians of Jewish ethnicity who held varying opinions on compliance with Mosaic law, like Saint Peter and Paul the Apostle. Progressive dispensationalism "softens" the Church/Israel distinction by seeing some Old Testament promises as expanded by the New Testament to include the Church without replacing the promises to its original audience, Israel. The Law Dispensationalists believe that Christ abolished the Mosaic law, and thus it does not apply to the Christian. Instead, the Christian is under the Law of Christ, which embodies moral principles from God that are in both codes. In this view, although many commandments of the Old Testament are re-established in the New Testament, only the commandments explicitly affirmed there are to be kept; this excludes the ceremonial and civil aspects of the Mosaic law. Eschatology Dispensationalism teaches an eschatology that is explicitly premillennial, in that it affirms the return of Christ prior to a literal 1,000-year reign of Jesus Christ on earth as the fulfillment of Old Testament prophecies. 
This millennial kingdom will be theocratic in nature, and not mainly soteriological in the way George Eldon Ladd and others with a non-dispensational form of premillennialism have viewed it. It will be distinctly Jewish, with the throne of David restored. The majority of dispensationalists profess a pretribulation rapture. Mid-tribulation and post-tribulation rapture are minority views. Pre-tribulational rapture doctrine is what separates dispensationalism from other forms of premillennialism and other millennial views. Dispensational eschatology was popularized in Hal Lindsey's book, The Late Great Planet Earth (1970). In Lindsey's version, the unfolding of events includes the establishment of modern Israel in 1948, Jews regaining control of Jerusalem's sacred sites in the 1967 Arab-Israeli War, a rebuilding of the Temple which has yet to occur, an Antichrist who will come to power, Christians to be removed from the earth in a rapture of the Church, and seven years of tribulation (Daniel's seventieth week) culminating in a great battle of Armageddon in which Christ will triumph over evil and establish a literal 1,000 year reign of his kingdom on earth. Israel and the Church being distinct in this view, the rapture must remove the Church before remnant Israel can be gathered. History Proto-Dispensationalism Advocates of dispensationalism have sought to find similar views of dispensations in Church history, referencing theologians or groups such as Francisco Ribera, the Taborites, Joachim of Fiore, Denis the Carthusian and others. Joachim's theory of three stages of human history has been argued to have anticipated the later dispensationalist view of organizing history into different dispensations. Joachim's stages were divided into the "Age of the Father" which was under the Law, the "Age of the Son" which was a period of tribulation, and the "Age of the Spirit" which was a period of bliss on earth. Fra Dolcino (c. 1250 – 1307) taught Fiore's theory of the stages of history, and dispensationalists Mark Hitchcock and Thomas Ice have suggested that Dolcino's teaching was of a pretribulational rapture. The relevant teaching was that when Antichrist appears, Dolcino and his followers would be taken away and preserved from Antichrist, and that following the death of Antichrist, Dolcino and his followers would return to Earth to convert those then living to the true faith. However, the source is an anonymous 1316 Latin text titled The History of Brother Dolcino, so it is uncertain whether Dolcino actually taught it. William C. Watson has argued that multiple 17th century Puritan theologians anticipated dispensational views. In his book Dispensationalism Before Darby (2015), he argues that Ephraim Huit (1595–1644) and John Birchensa (in his book The History of Scripture published in 1660) taught that God has differing plans for Jews and Gentiles. Watson also argues that Nathaniel Holmes (1599–1678) taught a pretribulational rapture. Christian mystic and philosopher Pierre Poiret (1646–1719) is said by some to have been the first theologian to develop a dispensationalist system, writing a book titled The Divine Economy. Poiret taught that history should be organized into multiple dispensations in which God works with humans in different ways, including the millennium as a future dispensation. Poiret's eschatology includes a belief in two resurrections, the rise of the Antichrist, and the nation of Israel being regathered, restored and converted. 
Poiret divided history into seven dispensations: early childhood (ended in the Flood), childhood (ended in Moses's ministry), boyhood (ended in Malachi), youth (ended in Christ), manhood (most of the Church era), old age ("human decay", meaning the last hour of the Church), and the restoration of all things (the Millennium, including a literal earthly reign of Christ with Israel restored). Isaac Watts (1674–1748) presented a dispensational view in a forty-page essay titled "The Harmony of All the Religions Which God Ever Prescribed to Men and All His Dispensations Towards Them". Charles Ryrie states that Scofield's outline of dispensationalism, with the exception of the millennium, is exactly that of Watts, and not Darby. Edward Irving (1792–1834) in some ways anticipated dispensationalism. He used a literal approach to prophetic interpretation, he believed in a restoration of Israel as a nation, and he believed there would be a great apostasy and Christ would return to establish a literal earthly kingdom. But he also preached that Christ had a fallen nature, which led to him being defrocked by the Scots Presbyterians. Formalization by Darby Dispensationalism developed as a system from the teachings of John Nelson Darby (1800–1882), considered by many to be the father of dispensationalism. Darby strongly influenced the Plymouth Brethren of the 1830s in Ireland and England. The original concept came when Darby considered the implications of Isaiah 32 for Israel. He concluded that prophecy required a future fulfillment and realization of Israel's kingdom. He saw the New Testament church as a separate program not related to that kingdom. Thus arose a prophetic earthly kingdom program for Israel and a separate "mystery" heavenly program for the church. In order to not conflate the two programs, the prophetic program had to be put on hold to allow for the church to come into existence. Then the church would need to be raptured away before prophecy could resume its earthly program for Israel. In Darby's conception, dispensations relate exclusively to the divine government of the earth. The Mosaic dispensation continues as a divine administration over Earth up until the return of Christ, and the church, being a heavenly designated assembly, is not associated with any dispensations. Darby's Brethren ecclesiology failed to catch on in America, but his eschatological doctrine became widely popular, especially among Baptists and Old School Presbyterians. Expansion and growth James Inglis (1813–1872) introduced dispensationalism to North America through the monthly magazine Waymarks in the Wilderness, published intermittently between 1854 and 1872. In 1866, Inglis organized the Believers' Meeting for Bible Study, which introduced dispensationalist ideas to a small but influential circle of American evangelicals. They were disturbed by the growth of religious liberalism and saw premillennialism as an answer. Dispensationalism was introduced as a premillennial position, and it took over the evangelical movement in the course of several decades. The American church denominations rejected Darby's ecclesiology but accepted his eschatology. Many of these churches were Baptist and Old School Presbyterian; they retained Darby's Calvinistic soteriology. After Inglis's death, James H. Brookes (1830–1898), pastor of Walnut Street Presbyterian Church in St. Louis, Missouri, organized the Niagara Bible Conference (1876–1897) to continue dissemination of dispensationalist ideas. 
Brookes was well known within millenarian circles, both as a prominent speaker at the Believers' Meeting for Bible Study conferences and for having written articles published in Inglis's Waymarks in the Wilderness. Brethren theologian C. H. Mackintosh (1820–1896) had a profound influence on American evangelist Dwight L. Moody (1837–1899), who reached very large audiences with his powerful preaching in the latter half of the 19th century. Moody worked with Brookes and other dispensationalists, and encouraged the spread of dispensationalism. It was during this time that dispensational doctrine became widely accepted among American evangelicals. It also marked a shift in dispensational theology under evangelists like Moody, from Darby's Calvinism and doctrinal rigor to a non-Calvinist view of human freedom in personal salvation. Other prominent dispensationalists in this period include Reuben Archer Torrey (1856–1928), James M. Gray (1851–1925), William J. Erdman (1833–1923), A. C. Dixon (1854–1925), A. J. Gordon (1836–1895), and William Eugene Blackstone (1841–1935). These men were active evangelists who promoted a host of Bible conferences and other missionary and evangelistic efforts. They also gave the dispensationalist philosophy institutional permanence by assuming leadership of new independent Bible institutes, such as the Moody Bible Institute in 1886, the Bible Institute of Los Angeles (now Biola University) in 1908, and Philadelphia College of Bible (now Cairn University, formerly Philadelphia Biblical University) in 1913. The network of related institutes that soon developed became the nucleus for the spread of American dispensationalism. When the Bible Institute of the Chicago Evangelization Society (now Moody Bible Institute) formally opened in 1889, Torrey served as its first superintendent. Revivalist evangelicals such as Moody and Torrey did not believe the gift of tongues continued past the Apostolic age, but their emphasis on the baptism of the Holy Spirit merged well with holiness ideas. This encouraged the spread of dispensationalism within the Pentecostal movement. During this time, Anglican clergyman E. W. Bullinger (1837–1913) began teaching what became known as "ultradispensationalism" or "Bullingerism". Bullinger taught that the Church did not begin until Acts 28, that the Lord's Supper and water baptism were for Jewish believers, and that Paul's epistles were written to the Jews. Scofield and his influence A disciple of James Brookes, Cyrus Scofield (1843–1921) began attending the Niagara conferences and became an advocate of premillennialism, specifically pre-tribulationism. After several years of work, Scofield introduced dispensationalism to a wider audience in America through his Scofield Reference Bible. Published in 1909 by the Oxford University Press, the Scofield Reference Bible was the first Bible to display overtly dispensationalist notes on the same pages as the biblical text. Use of the Scofield Bible became popular among independent Evangelicals in the United States. Its premillennialism led to a pessimistic social view within evangelicalism, to "not polish the brass rails on the sinking social ship", so that evangelism came to be focused on saving the lost rather than expanding Christendom. The Scofield Reference Bible came out at the peak of Bullinger's influence. Scofield's Bible confronted some of the ultradispensationalists' (Bullingerites') positions, including their divisions of dispensational time. 
As the Scofield Bible became popular among dispensationalists, it marginalized the hyperdispensationalist position in the United States. Influenced by Scofield, evangelist and Bible teacher Lewis Sperry Chafer (1871–1952) and his brother Rollin Chafer founded the Evangelical Theological College in 1924. The school would eventually become Dallas Theological Seminary, the main dispensationalist institution in America. The Baptist Bible Seminary, now located in Clarks Summit, Pennsylvania, became another dispensationalist school. The Fundamentals In the 1910s, another publication took hold within American evangelicalism. Known as The Fundamentals, its twelve volumes were published in quarterly installments between 1910 and 1915 by the Testimony Publishing Company. Funded by Union Oil co-founder Lyman Stewart (1840–1923) and managed by an executive committee of dispensationalists that included Clarence Dixon and Reuben Torrey, The Fundamentals helped solidify dispensationalist views within American Christian fundamentalism and the evangelical movement. The Scopes trial in 1925 served to unify fundamentalists, but the movement began to decline soon after the trial. Scopes trial prosecutor and public face of the fundamentalist movement William Jennings Bryan died a week after the verdict, and journalist H. L. Mencken portrayed supporters of that anti-evolution verdict as uneducated and ignorant. The fundamentalist movement began to decentralize after it lost Bryan. Dispensationalism's fate was tied to that breakdown. In 1928, Philip Mauro, seeking to re-invigorate the fundamentalist movement, pointed a finger at dispensationalism and in the process coined the term. Singling it out as the source of division within the larger fundamentalist movement, he wrote that the dispensationalist view was more recent than Darwinism and that it eroded fundamental truths of scripture. In 1934, the Evangelical Theological College acquired the venerable theological journal Bibliotheca Sacra (first published in 1844). Lewis Chafer's first public declaration that he was a dispensationalist appeared in that journal's pages. In 1936, he published a 60-page response to criticism from Mauro and other fundamentalists, entitled "Dispensationalism". That same year, Chafer renamed his school Dallas Theological Seminary. The conflict between dispensationalists and covenantalists continued through the 1930s and 1940s, leading to permanent divisions that shaped the fundamentalist movement. Influence of Dallas Theological Seminary By the mid-20th century, evangelicals such as Charles Feinberg, J. Dwight Pentecost, Herman Hoyt, Charles Ryrie, and John Walvoord were promoting dispensationalism. All five of these men either studied or taught at Dallas Theological Seminary (DTS). Pentecost taught there for more than 60 years, and published an influential work on dispensational eschatology, Things to Come (1956). A decade later, Ryrie published Dispensationalism Today (1965), which has become the primary introduction to dispensational theology. Furthering the rift with covenant theology, Ryrie wrote in 1957 that dispensationalism is "the only valid system of Biblical interpretation". In 1959, Walvoord stated that no non-dispensationalists (including Catholics and mainline Protestants) offered any defense against modernism, and that they were all under the influence of hermeneutical and theological errors. Dallas Theological Seminary's influence grew as other schools and seminaries hired its graduates as faculty.
In 1970, DTS graduate Hal Lindsey published The Late Great Planet Earth, which launched dispensationalist eschatology into pop culture. His book sold 10 million copies and made "rapture" and "the tribulation" household words. Pop prophecy The commercial success of The Late Great Planet Earth triggered a flood of books that featured dispensationalism's rapture theology. Lindsey published Satan is Alive and Well on Planet Earth (1972), There's a New World Coming (1973), and The Liberation of Planet Earth (1974). Other books included The Beginning of the End (1972) by Tim LaHaye, and DTS graduate Thomas McCall's Satan in the Sanctuary (1973) and Raptured (1975). In 1972, Iowa filmmakers Russell Doughten and Donald W. Thompson released A Thief in the Night, a fictional film about the aftermath of the rapture that has been seen by an estimated 300 million people. Televangelist Jack Van Impe covered current events in light of Bible prophecy with a dispensational premillennialist spin. Emergence of the Christian Right The late 20th century marked a shift from the separatism practiced earlier in the century to more political engagement. This era saw emergence of the Christian Right, rooted in the dispensational theology that places Israel at the center of God's purpose in the world. In 1978, dispensationalist television evangelist Jerry Falwell began making trips to Israel that were sponsored by the Israeli government. He became the first major American political figure to insist that the U.S. must support Israel for the fate of the nation. Falwell listed Feinberg, Pentecost, Hoyt, and Walvoord as his most important influences. Jerry Falwell and Tim LaHaye founded the Moral Majority in 1979, with its objective to get people saved, baptized, and registered to vote. The Moral Majority also provided a platform for political activism. Influenced by dispensational premillennialism, the Moral Majority lobbied for pro-Israel U.S. foreign policy positions, including protection of the Jewish people in Israel and continuing U.S. aid to the state of Israel. Opposed to Jimmy Carter's affirmation of a Palestinian homeland, the Moral Majority endorsed Ronald Reagan for President in 1980. In Reagan, they found a candidate who shared their apocalypticism. Reagan had read Hal Lindsey's The Late Great Planet Earth, and it has been suggested that this eschatological view drove his Middle East policies. In an interview with televangelist Jim Bakker, Reagan said "[w]e may be the generation that sees Armageddon." Dispensational theology affected more than the Reagan administration's Middle East foreign policy. James G. Watt, a member of the Assemblies of God and Reagan's first Secretary of the Interior, told Congress that preservation of the environment was made irrelevant by the imminent return of Christ. In 1980, Hal Lindsey wrote a follow-up to his book The Late Great Planet Earth. Lindsey had not previously drawn a connection from a Christian's personal obligations to a responsibility for social change, but this changed with his new book, The 1980s: Countdown to Armageddon. He began encouraging his readers to elect moral leaders who would reflect that morality within government, an agenda closely aligned with Ronald Reagan's administration. Lifelong fundamentalist and dispensationalist Tim LaHaye also became a prominent figure in the Christian Right. He served as head of the Moral Majority for a time, and in the mid-eighties he created the American Coalition for Traditional Values. 
In 1987, he served as co-chairman of Republican Jack Kemp's presidential campaign, until it was reported that he had called Catholicism "a false religion". The megachurch movement The growth of suburbs through the 1960s led to the megachurch movement that began in the 1970s. DTS-trained pastors pioneered the movement, including Chuck Swindoll, Erwin Lutzer, David Jeremiah, Robert Jeffress, Tony Evans, and Andy Stanley. Other megachurches, such as John Hagee's Cornerstone Church in San Antonio, Texas, blended teachings of dispensationalism with the prosperity gospel and New Christian Right activism. Hagee's Christians United for Israel included six Pentecostal megachurch pastors and an executive from the Christian Broadcasting Network, including Jerry Falwell. This group became an example of how megachurch dispensationalism was able to find national influence in US politics and diplomacy. Despite success through growing megachurches, the movement revealed limits when leaders of two of the United States' largest megachurches, Bill Hybels and Rick Warren, disassociated from the theology of dispensationalism. The revival of reformed theology in the emergence of New Calvinism began in the 1980s. Led by pastors such as John Piper, Tim Keller, Mark Driscoll, Matt Chandler, and Albert Mohler, this spawned a megachurch movement of its own, whose leaders became outspoken critics of dispensationalism. Peak and decline By the 1990s, a younger generation of academics emerged as "progressive dispensationalists", opening a rift within the united front Ryrie had pushed for in Dispensationalism Today (1965). Leaders in this school of thought were Craig A. Blaising, Darrell Bock, Kenneth Barker, and Robert L. Saucy. Dispensationalism's influence within the New Christian Right grew stronger in the 1990s. Building on the success of Hal Lindsey's The Late Great Planet Earth, the 1995 novel Left Behind pushed pop prophecy to further commercial success. Conceived by Tim LaHaye and written by Jerry B. Jenkins, the book spawned a multimedia franchise of 16 books, plus multiple movies, video games, and other spinoff works. The series brought dispensational premillennialism and its "rapture culture" into plain view. As with Reagan in the 1980s, the New Christian Right helped elect another 'born again' president, George W. Bush. Like Reagan, Bush spoke in terms of prophecies being fulfilled in a way that had meaning to dispensationalists. He referred to Gog and Magog in the War on Terror, and said the confrontation was "willed by God, who wants to use this conflict to erase His people's enemies". Dispensational ideas were experiencing political and commercial success, but Hal Lindsey and Tim LaHaye, who had become the public standard-bearers of dispensationalism, were different from their academic predecessors John Walvoord, Dwight Pentecost, and Charles Ryrie. By the 2010s, support for dispensational theology had peaked in academia and was largely in decline within academic settings. A 2009 survey of Southern Baptist seminaries showed that the majority view was covenantal, and flagship Southern Baptist Theological Seminary had no dispensationalists on its faculty. Although dispensationalism had collapsed in academic areas, its cultural influence remained. Dispensationalist ideas have persisted in popular culture. A 2004 Newsweek poll indicated that 55 percent of Americans believe Christians will be taken up in the Rapture. 
By the turn of the 21st century, the term "dispensationalism" had become synonymous with "sectarian fundamentalism", and had come to be more of a political identity than a theological doctrine. Dispensationalism however remains strong within theological circles which espouse Free Grace theology. The majority of those associated with the Free Grace Alliance support dispensationalism and it is taught by the Grace Evangelical Society. Criticism The term "dispensationalism" originated with Philip Mauro. His critique of the system is found in his 1928 book The Gospel of the Kingdom, in which he wrote that "evangelical Christianity must purge itself of this leaven of dispensationalism". He used the term to group the new premillennialism, the idea of dispensational time, and the Israel–Church distinction into a single bundled idea. Protestant denominations and movements that embrace covenant theology tend to reject dispensationalism. For example, Presbyterian minister John Wick Bowman has called the Scofield Bible "the most dangerous heresy currently to be found within Christian circles". Dispensational theology ultimately led the Presbyterian Church of America (later the Orthodox Presbyterian Church) to split from Bible Presbyterian Synod, which taught dispensationalism. The Churches of Christ became divided in the 1930s as Robert Henry Boll (who taught a variant of dispensationalism) and Foy E. Wallace (representing the amillennial position) disputed severely over eschatology. Soteriology Some dispensational Christian Zionists, such as John Hagee, reject the need for Christians to pursue the conversion of the Jews. Presupposing a difference between law and grace leads to the idea that there are multiple forms of salvation. In what is known as the Lordship salvation controversy, there are criticisms of a lack of understanding what was necessary to be "born again". John MacArthur called the problem "easy-believism", in which the basis of salvation is that one merely needs to claim to follow Jesus. MacArthur identified Dallas Theological Seminary founder Lewis Sperry Chafer as the source of the free grace error. Defense of the dispensational position was led primarily by Charles Ryrie and Zane Hodges. Eschatology The pessimism of premillennial eschatology led dispensationalists to see social reform as wasted effort, so that they focused on converting the lost, with no effort toward the kingdom-building social reform of postmillennialism. Political influence In American Theocracy (2006), political commentator Kevin Phillips wrote that dispensationalist and other fundamentalist Christians, together with the oil lobby, provided political support for the invasion of Iraq in 2003. He wrote that most theologians acknowledge there is no specific sequence of end-times events in the Bible, and that such a belief is the result of a century of "amplified Darbyism". He quoted theologian Barbara Rossing that such hyper-literalism is a "dangerous and false view". See also Christian eschatology Millennial Day Theory Progressive revelation (Baháʼí) Supersessionism New Covenant theology (a synthesis of Covenant theology and Dispensationalism) Free grace theology History of Christianity References Further reading Berubee, Carol. A Case for Pauline Dispensationalism: Defining Paul's Gospel and Mission (Blue Dromos Books, 2017) Mangum, R. Todd, The Dispensational–Covenantal Rift (Wipf & Stock, 2007) Mangum, R. 
Todd and Mark Sweetnam, "The Scofield Bible: Its History and Impact on the Evangelical Church" (Colorado Springs: Paternoster, 2009) Poythress, Vern. Understanding Dispensationalists (P&R 2nd ed., 1993) Showers, Renald (1990). "There Really Is a Difference: A Comparison of Covenant and Dispensational Theology". Friends of Israel Gospel Ministry. Sweetnam, Mark. The Dispensations: God's Plan for the Ages (Scripture Teaching Library, 2013) Walvoord, John F. Prophecy In The New Millennium (Kregel, 2001) External links O'Hair, J. C. The Unsearchable Riches of Christ Christian eschatology Christian theological movements Christian terminology Christianity-related controversies Premillennialism Time in religion
Dispensationalism
Physics
7,692
1,276,320
https://en.wikipedia.org/wiki/Proper%20transfer%20function
In control theory, a proper transfer function is a transfer function in which the degree of the numerator does not exceed the degree of the denominator. A strictly proper transfer function is a transfer function where the degree of the numerator is less than the degree of the denominator. The difference between the degree of the denominator (number of poles) and the degree of the numerator (number of zeros) is the relative degree of the transfer function. Example The following transfer function $$\mathbf{G}(s) = \frac{\mathbf{N}(s)}{\mathbf{D}(s)} = \frac{s^{4} + n}{s^{4} + m}$$ is proper, because $\deg(\mathbf{N}(s)) = 4 \leq \deg(\mathbf{D}(s)) = 4$; it is biproper, because $\deg(\mathbf{N}(s)) = 4 = \deg(\mathbf{D}(s))$; but it is not strictly proper, because $\deg(\mathbf{N}(s)) = 4 \not< \deg(\mathbf{D}(s)) = 4$. The following transfer function is not proper (or strictly proper), $$\mathbf{G}(s) = \frac{s^{4} + n}{s^{3} + m},$$ because $\deg(\mathbf{N}(s)) = 4 > \deg(\mathbf{D}(s)) = 3$. An improper transfer function can be made proper by the method of polynomial long division. The following transfer function is strictly proper, $$\mathbf{G}(s) = \frac{s^{3} + n}{s^{4} + m},$$ because $\deg(\mathbf{N}(s)) = 3 < \deg(\mathbf{D}(s)) = 4$. Implications A proper transfer function will never grow unbounded as the frequency approaches infinity: $$|\mathbf{G}(\pm j\infty)| < \infty.$$ A strictly proper transfer function will approach zero as the frequency approaches infinity (which is true for all physical processes): $$\mathbf{G}(\pm j\infty) = 0.$$ Also, the integral of the real part of a strictly proper transfer function is zero. References Transfer functions - ECE 486: Control Systems Spring 2015, University of Illinois ELEC ENG 4CL4: Control System Design Notes for Lecture #9, 2004, Dr. Ian C. Bruce, McMaster University Control theory
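The classification above reduces to a comparison of polynomial degrees, so it is easy to automate. The following is a minimal sketch of that idea; the function name and the coefficient convention (highest power first, as numpy uses) are our own choices, not from the cited references.

```python
import numpy as np

def classify_transfer_function(num, den):
    """Classify G(s) = N(s)/D(s) from coefficient lists (highest power first).

    Returns the classification and the relative degree deg(D) - deg(N).
    """
    # Trim leading zero coefficients so the degrees are genuine.
    deg_n = len(np.trim_zeros(num, 'f')) - 1
    deg_d = len(np.trim_zeros(den, 'f')) - 1
    rel_deg = deg_d - deg_n
    if rel_deg < 0:
        kind = "not proper"
    elif rel_deg == 0:
        kind = "biproper (proper, but not strictly proper)"
    else:
        kind = "strictly proper"
    return kind, rel_deg

# G(s) = (s^4 + 2) / (s^4 + 5): biproper, relative degree 0
print(classify_transfer_function([1, 0, 0, 0, 2], [1, 0, 0, 0, 5]))
# G(s) = (s^3 + 2) / (s^4 + 5): strictly proper, relative degree 1
print(classify_transfer_function([1, 0, 0, 2], [1, 0, 0, 0, 5]))
```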
Proper transfer function
Mathematics
269
5,775,285
https://en.wikipedia.org/wiki/Minol%20%28explosive%29
Minol (pronounced mine-ol) is a military explosive developed by the Admiralty early in the Second World War to augment supplies of trinitrotoluene (TNT) and RDX, which were in short supply. The aluminium component in Minol significantly prolongs the explosive pulse, making it ideal for use in underwater naval weapons (e.g. naval mines, for which it was developed, as well as depth charges and torpedoes), where munitions with a longer explosive pulse are more destructive than those with high brisance. Minol cannot be used in weapons fired from gun barrels (e.g. artillery shells) because there is a risk of detonation when subjected to more than 250 g of acceleration. Initially, three Minol formulas were used. All percentages shown are by weight: Minol-1: 48% TNT, 42% ammonium nitrate (AN) and 10% powdered aluminium Minol-2: 40% TNT, 40% ammonium nitrate and 20% powdered aluminium Minol-3: 42% TNT, 38% ammonium nitrate and 20% powdered aluminium These three Minols suffered from expansion, spewing and gassing due to the reaction of fine aluminium powder with moisture and structural phase transitions in ammonium nitrate. To improve the stability of Minol and increase production, coarser aluminium powder was introduced. Later it was found that aluminium chips, such as filings, flakes and shavings, also gave good performance and improved stability. To solve the problem of dimensional instability, pure ammonium nitrate was replaced by a solid solution of 10% potassium nitrate in ammonium nitrate. Thus, a new formula was adopted: Minol-4: 40% TNT, 36% ammonium nitrate, 4% potassium nitrate and 20% powdered aluminium The addition of potassium nitrate minimized the expansion of Minol, making it more stable to temperature changes than TNT, but did not eliminate the expansion problem entirely. Minol-4 could still expand and develop cracks after prolonged thermal cycling. A new composition, with 20% potassium nitrate in solid solution, was developed. It did not expand or crack even when cycled for months, but it was not adopted for production and service. Since the 1950s, Minol has been superseded by more modern PBX compositions, due to their superior explosive yield and stability in storage; Minol is regarded as obsolete. Generally, any Minol-filled munitions encountered will be legacy munitions or unexploded ordnance dating from before the 1960s. See also Amatol Composition H6 Hexanite Torpex Tritonal References British inventions Explosives Trinitrotoluene
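Because the formulas are specified as percentages by weight, the component masses of a charge of any size follow by simple proportion. A minimal sketch (the composition table is taken from the text above; the charge mass in the example is an arbitrary illustrative value):

```python
# Minol compositions by weight fraction, as listed above.
MINOL = {
    "Minol-1": {"TNT": 0.48, "ammonium nitrate": 0.42, "aluminium": 0.10},
    "Minol-2": {"TNT": 0.40, "ammonium nitrate": 0.40, "aluminium": 0.20},
    "Minol-4": {"TNT": 0.40, "ammonium nitrate": 0.36,
                "potassium nitrate": 0.04, "aluminium": 0.20},
}

def component_masses(formula: str, charge_kg: float) -> dict:
    """Mass of each component (kg) for a charge of the given total mass."""
    return {name: frac * charge_kg for name, frac in MINOL[formula].items()}

# e.g. a hypothetical 100 kg Minol-2 charge: 40 kg TNT, 40 kg AN, 20 kg Al
print(component_masses("Minol-2", 100.0))
```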
Minol (explosive)
Chemistry
544
544,273
https://en.wikipedia.org/wiki/Spectral%20space
In mathematics, a spectral space is a topological space that is homeomorphic to the spectrum of a commutative ring. It is sometimes also called a coherent space because of the connection to coherent topoi. Definition Let X be a topological space and let K(X) be the set of all compact open subsets of X. Then X is said to be spectral if it satisfies all of the following conditions: X is compact and T0. K(X) is a basis of open subsets of X. K(X) is closed under finite intersections. X is sober, i.e., every nonempty irreducible closed subset of X has a (necessarily unique) generic point. Equivalent descriptions Let X be a topological space. Each of the following properties is equivalent to X being spectral: X is homeomorphic to a projective limit of finite T0-spaces. X is homeomorphic to the spectrum of a bounded distributive lattice L. In this case, L is isomorphic (as a bounded lattice) to the lattice K(X) (this is called the Stone representation of distributive lattices). X is homeomorphic to the spectrum of a commutative ring. X is the topological space determined by a Priestley space. X is a T0 space whose frame of open sets is coherent (and every coherent frame comes from a unique spectral space in this way). Properties Let X be a spectral space and let K(X) be as before. Then: K(X) is a bounded sublattice of subsets of X. Every closed subspace of X is spectral. An arbitrary intersection of compact and open subsets of X (hence of elements from K(X)) is again spectral. X is T0 by definition, but in general not T1. In fact a spectral space is T1 if and only if it is Hausdorff (or T2) if and only if it is a Boolean space if and only if K(X) is a Boolean algebra. X can be seen as a pairwise Stone space. Spectral maps A spectral map f: X → Y between spectral spaces X and Y is a continuous map such that the preimage of every open and compact subset of Y under f is again compact. The category of spectral spaces, which has spectral maps as morphisms, is dually equivalent to the category of bounded distributive lattices (together with homomorphisms of such lattices). In this anti-equivalence, a spectral space X corresponds to the lattice K(X). Citations References M. Hochster (1969). Prime ideal structure in commutative rings. Trans. Amer. Math. Soc., 142, 43–60. General topology Algebraic geometry Lattice theory
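For finite spaces the definition can be tested mechanically: a finite space is automatically compact and every open set is compact-open and closed under intersection, so the conditions reduce to T0 plus sobriety (consistent with the characterization of spectral spaces as projective limits of finite T0-spaces). The following checker is our own illustration, not taken from the references; a topology is encoded as the full set of open sets, given as frozensets.

```python
from itertools import combinations

def is_spectral_finite(points, opens):
    """Check whether a finite topological space is spectral.

    For a finite space this reduces to: T0 and sober.
    `opens` must be the full topology, as a set of frozensets.
    """
    # T0: distinct points are topologically distinguishable.
    for x, y in combinations(points, 2):
        if all((x in u) == (y in u) for u in opens):
            return False
    # Sober: every nonempty irreducible closed set has a generic point.
    closed = [frozenset(points - u) for u in opens]
    def closure(x):
        return frozenset.intersection(*[c for c in closed if x in c])
    for c in closed:
        if not c:
            continue
        # c is irreducible iff it is not a union of two proper closed subsets.
        irreducible = not any(
            c1 | c2 == c
            for c1 in closed for c2 in closed
            if c1 < c and c2 < c)
        if irreducible and not any(closure(x) == c for x in c):
            return False
    return True

# Sierpinski space {0, 1} with opens {}, {1}, {0, 1} is spectral
# (it is homeomorphic to the spectrum of a discrete valuation ring).
print(is_spectral_finite({0, 1},
                         {frozenset(), frozenset({1}), frozenset({0, 1})}))
```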
Spectral space
Mathematics
579
2,574,486
https://en.wikipedia.org/wiki/Fubini%E2%80%93Study%20metric
In mathematics, the Fubini–Study metric (IPA: /fubini-ʃtuːdi/) is a Kähler metric on a complex projective space CPn endowed with a Hermitian form. This metric was originally described in 1904 and 1905 by Guido Fubini and Eduard Study. A Hermitian form in (the vector space) Cn+1 defines a unitary subgroup U(n+1) in GL(n+1,C). A Fubini–Study metric is determined up to homothety (overall scaling) by invariance under such a U(n+1) action; thus it is homogeneous. Equipped with a Fubini–Study metric, CPn is a symmetric space. The particular normalization on the metric depends on the application. In Riemannian geometry, one uses a normalization so that the Fubini–Study metric simply relates to the standard metric on the (2n+1)-sphere. In algebraic geometry, one uses a normalization making CPn a Hodge manifold. Construction The Fubini–Study metric arises naturally in the quotient space construction of complex projective space. Specifically, one may define CPn to be the space consisting of all complex lines in Cn+1, i.e., the quotient of Cn+1\{0} by the equivalence relation relating all complex multiples of each point together. This agrees with the quotient by the diagonal group action of the multiplicative group C* = C \ {0}: $$\mathbf{CP}^n = \left(\mathbf{C}^{n+1} \setminus \{0\}\right) / \mathbf{C}^*.$$ This quotient realizes Cn+1\{0} as a complex line bundle over the base space CPn. (In fact this is the so-called tautological bundle over CPn.) A point of CPn is thus identified with an equivalence class of (n+1)-tuples [Z0,...,Zn] modulo nonzero complex rescaling; the Zi are called homogeneous coordinates of the point. Furthermore, one may realize this quotient mapping in two steps: since multiplication by a nonzero complex scalar z = R eiθ can be uniquely thought of as the composition of a dilation by the modulus R followed by a counterclockwise rotation about the origin by an angle θ, the quotient mapping Cn+1 → CPn splits into two pieces, $$\mathbf{C}^{n+1} \setminus \{0\} \xrightarrow{(a)} S^{2n+1} \xrightarrow{(b)} \mathbf{CP}^n,$$ where step (a) is a quotient by the dilation Z ~ RZ for R ∈ R+, the multiplicative group of positive real numbers, and step (b) is a quotient by the rotations Z ~ eiθZ. The result of the quotient in (a) is the real hypersphere S2n+1 defined by the equation |Z|2 = |Z0|2 + ... + |Zn|2 = 1. The quotient in (b) realizes CPn = S2n+1/S1, where S1 represents the group of rotations. This quotient is realized explicitly by the famous Hopf fibration S1 → S2n+1 → CPn, the fibers of which are among the great circles of S2n+1. As a metric quotient When a quotient is taken of a Riemannian manifold (or metric space in general), care must be taken to ensure that the quotient space is endowed with a metric that is well-defined. For instance, if a group G acts on a Riemannian manifold (X,g), then in order for the orbit space X/G to possess an induced metric, the metric g must be constant along G-orbits in the sense that for any element h ∈ G and pair of vector fields X, Y we must have g(Xh,Yh) = g(X,Y). The standard Hermitian metric on Cn+1 is given in the standard basis by $$ds^2 = dZ_0 \, d\bar{Z}_0 + \cdots + dZ_n \, d\bar{Z}_n,$$ whose realification is the standard Euclidean metric on R2n+2. This metric is not invariant under the diagonal action of C*, so we are unable to directly push it down to CPn in the quotient. However, this metric is invariant under the diagonal action of S1 = U(1), the group of rotations. Therefore, step (b) in the above construction is possible once step (a) is accomplished. The Fubini–Study metric is the metric induced on the quotient CPn = S2n+1/S1, where S2n+1 carries the so-called "round metric" endowed upon it by restriction of the standard Euclidean metric to the unit hypersphere.
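The two-step quotient can be made concrete numerically for n = 1. The sketch below is our own construction, not taken from the article's references: it normalizes a point of C²\{0} onto S³ (step (a)) and then applies the Hopf map (step (b)), whose image lies on S² ≅ CP¹; points differing by a phase e^{iθ} map to the same image.

```python
import numpy as np

def hopf(z):
    """Map z = (z0, z1) in C^2 \ {0} to a point on S^2 via S^3 -> CP^1.

    Step (a): scale out the dilation, landing on the unit 3-sphere.
    Step (b): the Hopf map, which is invariant under z -> e^{i t} z.
    """
    z = np.asarray(z, dtype=complex)
    z = z / np.linalg.norm(z)          # step (a): |z0|^2 + |z1|^2 = 1
    z0, z1 = z
    w = 2 * z0 * np.conj(z1)
    return np.array([w.real, w.imag, abs(z0) ** 2 - abs(z1) ** 2])

p = hopf([1 + 2j, 3 - 1j])
q = hopf(np.exp(0.7j) * np.array([1 + 2j, 3 - 1j]))  # same S^1 fiber
print(np.linalg.norm(p), np.allclose(p, q))          # 1.0 True
```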
In local affine coordinates Corresponding to a point in CPn with homogeneous coordinates $[Z_0 : \cdots : Z_n]$, there is a unique set of n coordinates $(z_1, \ldots, z_n)$ such that $$[Z_0 : \cdots : Z_n] = [1, z_1, \ldots, z_n],$$ provided $Z_0 \neq 0$; specifically, $z_j = Z_j / Z_0$. The $(z_1, \ldots, z_n)$ form an affine coordinate system for CPn in the coordinate patch $U_0 = \{ Z_0 \neq 0 \}$. One can develop an affine coordinate system in any of the coordinate patches $U_i = \{ Z_i \neq 0 \}$ by dividing instead by $Z_i$ in the obvious manner. The n+1 coordinate patches $U_i$ cover CPn, and it is possible to give the metric explicitly in terms of the affine coordinates $(z_1, \ldots, z_n)$ on $U_0$. The coordinate derivatives $\partial_i = \partial / \partial z_i$ define a frame of the holomorphic tangent bundle of CPn, in terms of which the Fubini–Study metric has Hermitian components $$h_{i\bar{j}} = \frac{\left(1 + |\mathbf{z}|^2\right) \delta_{i\bar{j}} - \bar{z}_i z_j}{\left(1 + |\mathbf{z}|^2\right)^2},$$ where |z|2 = |z1|2 + ... + |zn|2. That is, the Hermitian matrix of the Fubini–Study metric in this frame is $$\bigl(h_{i\bar{j}}\bigr) = \frac{\left(1 + |\mathbf{z}|^2\right) I - \bar{\mathbf{z}}\, \mathbf{z}^{\mathsf{T}}}{\left(1 + |\mathbf{z}|^2\right)^2}.$$ Note that each matrix element is unitary-invariant: the diagonal action $\mathbf{z} \mapsto e^{i\theta} \mathbf{z}$ will leave this matrix unchanged. Accordingly, the line element is given by $$ds^2 = \frac{\left(1 + z_i \bar{z}^i\right) dz_j \, d\bar{z}^j - \bar{z}^j z_i \, dz^i \, d\bar{z}_j}{\left(1 + z_i \bar{z}^i\right)^2}.$$ In this last expression, the summation convention is used to sum over Latin indices i,j that range from 1 to n. The metric can be derived from the following Kähler potential: $$K = \ln\left(1 + z_i \bar{z}^i\right)$$ as $$g_{i\bar{j}} = \partial_i \, \bar{\partial}_j K.$$ Using homogeneous coordinates An expression is also possible in the notation of homogeneous coordinates, commonly used to describe projective varieties of algebraic geometry: Z = [Z0:...:Zn]. Formally, subject to suitably interpreting the expressions involved, one has $$ds^2 = \frac{|\mathbf{Z}|^2 \, |d\mathbf{Z}|^2 - \left(\bar{\mathbf{Z}} \cdot d\mathbf{Z}\right)\left(\mathbf{Z} \cdot d\bar{\mathbf{Z}}\right)}{|\mathbf{Z}|^4} = \frac{2\, Z_{[\alpha}\, dZ_{\beta]}\, \bar{Z}^{[\alpha}\, d\bar{Z}^{\beta]}}{|\mathbf{Z}|^4}.$$ Here the summation convention is used to sum over Greek indices α, β ranging from 0 to n, and in the last equality the standard notation for the skew part of a tensor is used: $$Z_{[\alpha}\, W_{\beta]} = \tfrac{1}{2}\left(Z_\alpha W_\beta - Z_\beta W_\alpha\right).$$ Now, this expression for ds2 apparently defines a tensor on the total space of the tautological bundle Cn+1\{0}. It is to be understood properly as a tensor on CPn by pulling it back along a holomorphic section σ of the tautological bundle of CPn. It remains then to verify that the value of the pullback is independent of the choice of section: this can be done by a direct calculation. The Kähler form of this metric is $$\omega = \frac{i}{2} \, \partial \bar{\partial} \ln |\mathbf{Z}|^2,$$ where the $\partial$, $\bar{\partial}$ are the Dolbeault operators. The pullback of this is clearly independent of the choice of holomorphic section. The quantity log|Z|2 is the Kähler potential (sometimes called the Kähler scalar) of CPn. In bra-ket coordinate notation In quantum mechanics, the Fubini–Study metric is also known as the Bures metric. However, the Bures metric is typically defined in the notation of mixed states, whereas the exposition below is written in terms of a pure state. The real part of the metric is (a quarter of) the Fisher information metric. The Fubini–Study metric may be written using the bra–ket notation commonly used in quantum mechanics. To explicitly equate this notation to the homogeneous coordinates given above, let $$|\psi\rangle = \sum_{k=0}^{n} Z_k |e_k\rangle = [Z_0 : Z_1 : \ldots : Z_n],$$ where $\{|e_k\rangle\}$ is a set of orthonormal basis vectors for Hilbert space, the $Z_k$ are complex numbers, and $Z_\alpha = [Z_0 : Z_1 : \ldots : Z_n]$ is the standard notation for a point in the projective space CPn in homogeneous coordinates. Then, given two points $|\psi\rangle = [Z_\alpha]$ and $|\varphi\rangle = [W_\beta]$ in the space, the distance (length of a geodesic) between them is $$\gamma(\psi, \varphi) = \arccos \sqrt{\frac{\langle \psi \vert \varphi \rangle \, \langle \varphi \vert \psi \rangle}{\langle \psi \vert \psi \rangle \, \langle \varphi \vert \varphi \rangle}},$$ or, equivalently, in projective variety notation, $$\gamma(\psi, \varphi) = \arccos \sqrt{\frac{Z_\alpha \bar{W}^\alpha \, W_\beta \bar{Z}^\beta}{Z_\alpha \bar{Z}^\alpha \, W_\beta \bar{W}^\beta}}.$$ Here, $\bar{Z}^\alpha$ is the complex conjugate of $Z_\alpha$. The appearance of $\langle \psi \vert \psi \rangle$ and $\langle \varphi \vert \varphi \rangle$ in the denominator is a reminder that $|\psi\rangle$ and likewise $|\varphi\rangle$ were not normalized to unit length; thus the normalization is made explicit here. In Hilbert space, the metric can be interpreted as the angle between two vectors; thus it is occasionally called the quantum angle. The angle is real-valued, and runs from 0 to $\pi/2$.
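The bra-ket form of the distance is easy to evaluate numerically. The following sketch is our own, with randomly chosen vectors; it checks two properties stated above: the distance is invariant under rescaling either state by a nonzero complex number, and it never exceeds π/2.

```python
import numpy as np

def fubini_study_distance(psi, phi):
    """Quantum angle between the rays through psi and phi in C^{n+1}.

    gamma = arccos sqrt( <psi|phi><phi|psi> / (<psi|psi><phi|phi>) ).
    Neither vector needs to be normalized.
    """
    psi, phi = np.asarray(psi, complex), np.asarray(phi, complex)
    overlap = abs(np.vdot(psi, phi)) ** 2
    norms = np.vdot(psi, psi).real * np.vdot(phi, phi).real
    return np.arccos(np.sqrt(np.clip(overlap / norms, 0.0, 1.0)))

rng = np.random.default_rng(0)
psi = rng.normal(size=4) + 1j * rng.normal(size=4)
phi = rng.normal(size=4) + 1j * rng.normal(size=4)

d = fubini_study_distance(psi, phi)
d_scaled = fubini_study_distance((2 - 3j) * psi, 0.5j * phi)
print(d, np.isclose(d, d_scaled), d <= np.pi / 2 + 1e-12)
```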
The infinitesimal form of this metric may be quickly obtained by taking $|\psi'\rangle = |\psi\rangle + |\delta\psi\rangle$, or equivalently $Z'_\alpha = Z_\alpha + dZ_\alpha$, to obtain $$ds^2 = \frac{\langle \delta\psi \vert \delta\psi \rangle}{\langle \psi \vert \psi \rangle} - \frac{\langle \delta\psi \vert \psi \rangle \, \langle \psi \vert \delta\psi \rangle}{\langle \psi \vert \psi \rangle^2}.$$ In the context of quantum mechanics, CP1 is called the Bloch sphere; the Fubini–Study metric is the natural metric for the geometrization of quantum mechanics. Much of the peculiar behaviour of quantum mechanics, including quantum entanglement and the Berry phase effect, can be attributed to the peculiarities of the Fubini–Study metric. The n = 1 case When n = 1, there is a diffeomorphism $S^2 \cong \mathbf{CP}^1$ given by stereographic projection. This leads to the "special" Hopf fibration S1 → S3 → S2. When the Fubini–Study metric is written in coordinates on CP1, its restriction to the real tangent bundle yields an expression of the ordinary "round metric" of radius 1/2 (and Gaussian curvature 4) on S2. Namely, if z = x + iy is the standard affine coordinate chart on the Riemann sphere CP1 and x = r cos θ, y = r sin θ are polar coordinates on C, then a routine computation shows $$ds^2 = \frac{dx^2 + dy^2}{\left(1 + r^2\right)^2} = \frac{dr^2 + r^2 \, d\theta^2}{\left(1 + r^2\right)^2} = \frac{1}{4}\left(d\varphi^2 + \sin^2 \varphi \, d\theta^2\right) = \frac{1}{4} \, ds^2_{us},$$ where $ds^2_{us}$ is the round metric on the unit 2-sphere (a symbolic verification of this computation is sketched after this section). Here φ, θ are "mathematician's spherical coordinates" on S2 coming from the stereographic projection r tan(φ/2) = 1, tan θ = y/x. (Many physics references interchange the roles of φ and θ.) The Kähler form is $$K = \frac{dx \wedge dy}{\left(1 + r^2\right)^2}.$$ Choosing as vierbeins $e^1 = dx / (1 + r^2)$ and $e^2 = dy / (1 + r^2)$, the Kähler form simplifies to $$K = e^1 \wedge e^2.$$ Applying the Hodge star to the Kähler form, one obtains $${*K} = 1,$$ implying that K is harmonic. The n = 2 case The Fubini–Study metric on the complex projective plane CP2 has been proposed as a gravitational instanton, the gravitational analog of an instanton. The metric, the connection form and the curvature are readily computed, once suitable real 4D coordinates are established. Writing $(x, y, z, t)$ for real Cartesian coordinates, one then defines the polar coordinate one-forms $\sigma_1, \sigma_2, \sigma_3$ on the 4-sphere (the quaternionic projective line). The $\sigma_i$ are the standard left-invariant one-form coordinate frame on the Lie group SU(2); that is, they obey $d\sigma_i = \sigma_j \wedge \sigma_k$ (in a suitable normalization) for cyclic permutations of (i, j, k). The corresponding local affine coordinates are $z_1 = x + iy$ and $z_2 = z + it$, which then provide $r^2 = |z_1|^2 + |z_2|^2 = x^2 + y^2 + z^2 + t^2$. The line element, starting with the previously given expression, is given by $$ds^2 = \frac{dr^2}{\left(1 + r^2\right)^2} + \frac{r^2 \, \sigma_3^2}{\left(1 + r^2\right)^2} + \frac{r^2 \left(\sigma_1^2 + \sigma_2^2\right)}{1 + r^2}.$$ The vierbeins can be immediately read off from the last expression: $$e^0 = \frac{dr}{1 + r^2}, \qquad e^3 = \frac{r \, \sigma_3}{1 + r^2}, \qquad e^1 = \frac{r \, \sigma_1}{\sqrt{1 + r^2}}, \qquad e^2 = \frac{r \, \sigma_2}{\sqrt{1 + r^2}}.$$ That is, in the vierbein coordinate system, using roman-letter subscripts, the metric tensor is Euclidean: $$ds^2 = \delta_{ab} \, e^a \otimes e^b.$$ Given the vierbein, a spin connection can be computed; the Levi-Civita spin connection is the unique connection that is torsion-free and covariantly constant, namely, it is the one-form $\omega^a{}_b$ that satisfies the torsion-free condition $$de^a + \omega^a{}_b \wedge e^b = 0$$ and is covariantly constant, which, for spin connections, means that it is antisymmetric in the vierbein indices: $$\omega_{ab} = -\omega_{ba}.$$ The above is readily solved in closed form. The curvature 2-form is defined as $$R^a{}_b = d\omega^a{}_b + \omega^a{}_c \wedge \omega^c{}_b$$ and is constant. The Ricci tensor in vierbein indices is given by $$\operatorname{Ric}^a{}_b = R^{ac}{}_{bc},$$ where the curvature 2-form was expanded as a four-component tensor: $$R^a{}_b = \tfrac{1}{2} \, R^a{}_{bcd} \, e^c \wedge e^d.$$ The resulting Ricci tensor is constant, so that the resulting Einstein equation can be solved with a positive cosmological constant; in this normalization the Ricci tensor equals $6 \, g_{ab}$. The Weyl tensor for Fubini–Study metrics can in general be written in closed form in terms of the metric and the Kähler form; for the n = 2 case, the corresponding Weyl two-forms are self-dual.
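Returning to the n = 1 case above, the "routine computation" can be checked symbolically. The sketch below (our own) substitutes r = cot(φ/2), from the stated stereographic relation r tan(φ/2) = 1, into (dr² + r² dθ²)/(1 + r²)² and confirms that the result is a quarter of the round metric dφ² + sin²φ dθ²:

```python
import sympy as sp

phi = sp.symbols('phi', positive=True)

r = sp.cot(phi / 2)          # from the stereographic relation r*tan(phi/2) = 1
dr_dphi = sp.diff(r, phi)    # dr = (dr/dphi) dphi

# FS line element in polar form: ds^2 = (dr^2 + r^2 dtheta^2) / (1 + r^2)^2.
coeff_phi = sp.trigsimp(dr_dphi**2 / (1 + r**2)**2)    # coefficient of dphi^2
coeff_theta = sp.trigsimp(r**2 / (1 + r**2)**2)        # coefficient of dtheta^2

# Compare with a quarter of the round metric (dphi^2 + sin(phi)^2 dtheta^2).
print(sp.simplify(coeff_phi - sp.Rational(1, 4)))      # -> 0
print(sp.simplify(coeff_theta - sp.sin(phi)**2 / 4))   # -> 0
```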
Its sectional curvature is instead given by the equation $$K(\sigma) = 1 + 3 \, \langle J X, Y \rangle^2,$$ where $\{X, Y\}$ is an orthonormal basis of the 2-plane σ, the mapping J : TCPn → TCPn is the complex structure on CPn, and $\langle \cdot , \cdot \rangle$ is the Fubini–Study metric. A consequence of this formula is that the sectional curvature satisfies $1 \leq K(\sigma) \leq 4$ for all 2-planes σ. The maximum sectional curvature (4) is attained at a holomorphic 2-plane — one for which J(σ) ⊂ σ — while the minimum sectional curvature (1) is attained at a 2-plane for which J(σ) is orthogonal to σ. For this reason, the Fubini–Study metric is often said to have "constant holomorphic sectional curvature" equal to 4. This makes CPn a (non-strict) quarter pinched manifold; a celebrated theorem shows that a strictly quarter-pinched simply connected n-manifold must be homeomorphic to a sphere. The Fubini–Study metric is also an Einstein metric in that it is proportional to its own Ricci tensor: there exists a constant λ such that for all i, j we have $$\operatorname{Ric}_{ij} = \lambda \, g_{ij}.$$ This implies, among other things, that the Fubini–Study metric remains unchanged up to a scalar multiple under the Ricci flow. It also makes CPn indispensable to the theory of general relativity, where it serves as a nontrivial solution to the vacuum Einstein field equations with a cosmological constant. The cosmological constant for CPn is determined by the dimension of the space; for example, Λ = 6 in the n = 2 case discussed above. Product metric The common notions of separability apply for the Fubini–Study metric. More precisely, the metric is separable on the natural product of projective spaces, the Segre embedding. That is, if $|\psi\rangle$ is a separable state, so that it can be written as $|\psi\rangle = |\psi_A\rangle \otimes |\psi_B\rangle$, then the metric is the sum of the metrics on the subspaces: $$ds^2 = ds^2_A + ds^2_B,$$ where $ds^2_A$ and $ds^2_B$ are the metrics, respectively, on the subspaces A and B. Connection and curvature The fact that the metric can be derived from the Kähler potential means that the Christoffel symbols and the curvature tensors contain a lot of symmetries, and can be given a particularly simple form. The Christoffel symbols, in the local affine coordinates, are given by $$\Gamma^i_{jk} = -\frac{\delta^i_j \, \bar{z}_k + \delta^i_k \, \bar{z}_j}{1 + |\mathbf{z}|^2}.$$ The Riemann tensor is also particularly simple, being proportional to $g_{i\bar{j}} \, g_{k\bar{l}} + g_{i\bar{l}} \, g_{k\bar{j}}$. The Ricci tensor is $$\operatorname{Ric}_{i\bar{j}} = (n + 1) \, g_{i\bar{j}}.$$ See also Non-linear sigma model Kaluza–Klein theory Arakelov height References Projective geometry Complex manifolds Symplectic geometry Structures on manifolds Quantum mechanics
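At the center of the affine chart (z = 0) the metric components given earlier reduce to the identity and J acts as the standard complex structure on R^{2n}, so the sectional-curvature formula can be spot-checked numerically. A small sketch (our own) samples random orthonormal 2-planes and confirms 1 ≤ K ≤ 4, with K = 4 on a holomorphic plane:

```python
import numpy as np

n = 3                                # CP^3, real dimension 2n = 6
J = np.kron(np.array([[0.0, -1.0], [1.0, 0.0]]), np.eye(n))  # J @ J = -I

def sectional_curvature(X, Y):
    """K(sigma) = 1 + 3 <J X, Y>^2 for an orthonormal pair X, Y at z = 0."""
    return 1.0 + 3.0 * float(J @ X @ Y) ** 2

rng = np.random.default_rng(1)
for _ in range(5):
    A = rng.normal(size=(2 * n, 2))
    Q, _ = np.linalg.qr(A)           # orthonormal basis of a random 2-plane
    print(round(sectional_curvature(Q[:, 0], Q[:, 1]), 4))  # always in [1, 4]

X = np.zeros(2 * n)
X[0] = 1.0
print(sectional_curvature(X, J @ X))  # holomorphic plane: K = 4.0
```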
Fubini–Study metric
Physics
2,994
2,271,116
https://en.wikipedia.org/wiki/EFA%20%28mobile%20bridge%29
The EFA or Engin de Franchissement de l'Avant (forward crossing apparatus) is a field-deployable river crossing vehicle, used by combat engineers in the French Army. Unlike a bridge layer, which transports a bridge that is deployed off of the host vehicle, the EFA itself is a combined pontoon bridge and amphibious vehicle, enabling much more rapid redeployment of the mobile bridge structure and an additional use as a ferry (at the cost of being useless for returning damaged bridges to service). When needed, multiple EFAs can be combined in series to create a traditional pontoon bridge. It has been built since 1989 by Chaudronnerie et Forges d'Alsace (CEFA), located in Soultz-sous-Forêts in the Bas-Rhin. Characteristics A single EFA, in ferry configuration, has a length of 34.55 m and a loading surface of 96 m2, and is ready in less than five minutes for the transportation of up to 70 tons of cargo. In one hour it is able to make about 10 to 12 crossings over a gap of 100 m and 8 to 10 crossings over a gap of 200 m. Two EFAs coupled together at the ramp allow the carriage of up to 150 tons of cargo, and a floating bridge of four EFAs, for example, offers in less than 10 minutes a 100 m long crossing with an estimated flow of 200 vehicles an hour. The EFA is capable of astern propulsion, thus allowing a crossing without having to reorient the vehicle toward the opposite shore, which allows for more fluid ferry operations and rapid bridge assembly. The crew consists of four people: An equipment commander A driver A pilot A crewman Predecessor The EFA is the heir to the first self-propelled bridging ferry, invented in 1955 by the French military engineer and general Jean Gillois (born in Châteaubriant in 1909). Called the "Bac Amphibie" or "Gillois", it entered service with the French army in 1963. A version modified by EWK was successively adopted by the German, British and, to a limited extent, American militaries, and was used by Israel in the 1973 Yom Kippur War. At the time of its introduction it was able to carry vehicles up to a maximum weight of 25 tons, and while configured as a bridge it could support loads of about 50 tons. It took between 45 and 65 minutes to form a bridge 100 meters long. It allowed an armed force to avoid the heavy and bulky convoys of barges brought in by road, which are vulnerable to enemy attacks. Users : Contract for 10 units, worth more than 60 million euros, signed in 2006 for the EFA X1, motorized with a Friedrichshafen MTU engine of 760 hp. Delivery from September 2008 : 39 units built for the French army since 1989, in active service since 1993. As of December 31, 2013, 30 units were in service with an average age of 25 years. They are assigned to the following units: 3rd engineer regiment, 6th engineer regiment, 19th engineer regiment, School of Engineering, Champagne Training Park. The three EFA sections are theoretically equipped with four groups of two vehicles, i.e. eight EFAs per regiment. In practice, by 2014 it would seem that there were only four EFAs per regiment, the rest being distributed between the Engineering School, the Training Park and the industrial holder of the maintenance contract. See also Bailey bridge Pontoon bridge Armoured vehicle-launched bridge References External links Description page on the site of the French Ministry of Defence Military vehicles of France Portable bridges Military bridging equipment Military vehicles introduced in the 1980s
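The ferry figures quoted above imply a simple throughput calculation: crossings per hour times payload per crossing. A minimal sketch (the numbers are those from the text; the function itself is our own illustration):

```python
def ferry_throughput(payload_tons: float, crossings_per_hour: float) -> float:
    """Tons moved across the gap per hour by a single ferry."""
    return payload_tons * crossings_per_hour

# Single EFA over a 100 m gap: 70 t per crossing, 10-12 crossings per hour.
low, high = ferry_throughput(70, 10), ferry_throughput(70, 12)
print(f"{low:.0f}-{high:.0f} tons per hour")  # 700-840 tons per hour
```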
EFA (mobile bridge)
Engineering
739
1,533,196
https://en.wikipedia.org/wiki/Photoelasticity
In materials science, photoelasticity describes changes in the optical properties of a material under mechanical deformation. It is a property of all dielectric media and is often used to experimentally determine the stress distribution in a material. History The photoelastic phenomenon was first discovered by the Scottish physicist David Brewster, who immediately recognized it as stress-induced birefringence. That diagnosis was confirmed in a direct refraction experiment by Augustin-Jean Fresnel. Experimental frameworks were developed at the beginning of the twentieth century with the works of E.G. Coker and L.N.G. Filon of the University of London. Their book Treatise on Photoelasticity, published in 1930 by Cambridge Press, became a standard text on the subject. Between 1930 and 1940, many other books appeared on the subject, including books in Russian, German and French. Max M. Frocht published Photoelasticity, the classic two-volume work in the field. At the same time, much development occurred in the field – great improvements were achieved in technique, and the equipment was simplified. With refinements in the technology, photoelastic experiments were extended to determining three-dimensional states of stress. In parallel with developments in experimental technique, the first phenomenological description of photoelasticity was given in 1890 by Friedrich Pockels; however, this was shown to be inadequate almost a century later by Nelson and Lax, since Pockels's description considered only the effect of mechanical strain on the optical properties of the material. With the advent of the digital polariscope – made possible by light-emitting diodes – continuous monitoring of structures under load became possible. This led to the development of dynamic photoelasticity, which has contributed greatly to the study of complex phenomena such as fracture of materials. Applications Photoelasticity has been used for a variety of stress analyses and even for routine use in design, particularly before the advent of numerical methods such as finite elements or boundary elements. Digitization of polariscopy enables fast image acquisition and data processing, which enables industrial applications such as quality control of manufacturing processes for materials such as glass and polymers. Dentistry utilizes photoelasticity to analyze strain in denture materials. Photoelasticity can successfully be used to investigate the highly localized stress state within masonry or in the proximity of a rigid line inclusion (stiffener) embedded in an elastic medium. In the former case, the problem is nonlinear due to the contacts between bricks, while in the latter case the elastic solution is singular, so that numerical methods may fail to provide correct results. These can be obtained through photoelastic techniques. Dynamic photoelasticity integrated with high-speed photography is utilized to investigate fracture behavior in materials. Another important application of photoelasticity experiments is the study of the stress field around bi-material notches. Bi-material notches exist in many engineering applications, such as welded or adhesively bonded structures. For example, some elements of Gothic cathedrals previously thought decorative were first proved essential for structural support by photoelastic methods.
Formal definition For a linear dielectric material, the change in the inverse permittivity tensor $\Delta(\varepsilon^{-1})_{ij}$ with respect to the deformation (the gradient of the displacement $\mathbf{u}$) is described by $\Delta(\varepsilon^{-1})_{ij} = p_{ijkl}\,\frac{\partial u_k}{\partial x_l},$ where $p_{ijkl}$ is the fourth-rank photoelasticity tensor, $u_k$ is the linear displacement from equilibrium, and $\partial/\partial x_l$ denotes differentiation with respect to the Cartesian coordinate $x_l$. For isotropic materials, this definition simplifies to $\Delta(\varepsilon^{-1})_{ij} = p^{\mathrm{s}}_{ijkl}\, s_{kl},$ where $p^{\mathrm{s}}_{ijkl}$ is the symmetric part of the photoelastic tensor (the photoelastic strain tensor), and $s_{kl}$ is the linear strain. The antisymmetric part of $p_{ijkl}$ is known as the roto-optic tensor. From either definition, it is clear that deformations to the body may induce optical anisotropy, which can cause an otherwise optically isotropic material to exhibit birefringence. Although the symmetric photoelastic tensor is most commonly defined with respect to mechanical strain, it is also possible to express photoelasticity in terms of the mechanical stress. Experimental principles The experimental procedure relies on the property of birefringence, as exhibited by certain transparent materials. Birefringence is a phenomenon in which a ray of light passing through a given material experiences two refractive indices. The property of birefringence (or double refraction) is observed in many optical crystals. Upon the application of stresses, photoelastic materials exhibit the property of birefringence, and the magnitude of the refractive indices at each point in the material is directly related to the state of stresses at that point. Information such as maximum shear stress and its orientation are available by analyzing the birefringence with an instrument called a polariscope. When a ray of light passes through a photoelastic material, its electromagnetic wave components are resolved along the two principal stress directions and each component experiences a different refractive index due to the birefringence. The difference in the refractive indices leads to a relative phase retardation between the two components. Assuming a thin specimen made of isotropic materials, where two-dimensional photoelasticity is applicable, the magnitude of the relative retardation is given by the stress-optic law: $\Delta = \frac{2\pi t}{\lambda}\, C\, (\sigma_1 - \sigma_2),$ where Δ is the induced retardation, C is the stress-optic coefficient, t is the specimen thickness, λ is the vacuum wavelength, and σ1 and σ2 are the first and second principal stresses, respectively. The retardation changes the polarization of transmitted light. The polariscope combines the different polarization states of light waves before and after passing the specimen. Due to optical interference of the two waves, a fringe pattern is revealed. The fringe order $N = \frac{\Delta}{2\pi}$ depends on the relative retardation. By studying the fringe pattern one can determine the state of stress at various points in the material. For materials that do not show photoelastic behavior, it is still possible to study the stress distribution. The first step is to build a model, using photoelastic materials, which has geometry similar to the real structure under investigation. The loading is then applied in the same way to ensure that the stress distribution in the model is similar to the stress in the real structure. Isoclinics and isochromatics Isoclinics are the loci of the points in the specimen along which the principal stresses are in the same direction. Isochromatics are the loci of the points along which the difference in the first and second principal stress remains the same.
Thus they are the lines which join the points with equal maximum shear stress magnitude. Two-dimensional photoelasticity Photoelasticity can describe both three-dimensional and two-dimensional states of stress. However, examining photoelasticity in three-dimensional systems is more involved than in two-dimensional or plane-stress systems. So the present section deals with photoelasticity in a plane-stress system. This condition is achieved when the thickness of the prototype is much smaller than the dimensions in the plane. Thus one is only concerned with stresses acting parallel to the plane of the model, as the other stress components are zero. The experimental setup varies from experiment to experiment. The two basic kinds of setup used are the plane polariscope and the circular polariscope. The working principle of a two-dimensional experiment allows the measurement of retardation, which can be converted to the difference between the first and second principal stress and their orientation. To obtain the values of each individual stress component, a technique called stress separation is required. Several theoretical and experimental methods are utilized to provide additional information to solve individual stress components. Plane polariscope setup The setup consists of two linear polarizers and a light source. The light source can either emit monochromatic light or white light depending upon the experiment. First the light is passed through the first polarizer, which converts the light into plane polarized light. The apparatus is set up in such a way that this plane polarized light then passes through the stressed specimen. This light then follows, at each point of the specimen, the direction of principal stress at that point. The light is then made to pass through the analyzer and we finally get the fringe pattern. The fringe pattern in a plane polariscope setup consists of both the isochromatics and the isoclinics. The isoclinics change with the orientation of the polariscope while there is no change in the isochromatics. Circular polariscope setup In a circular polariscope setup two quarter-wave plates are added to the experimental setup of the plane polariscope. The first quarter-wave plate is placed in between the polarizer and the specimen and the second quarter-wave plate is placed between the specimen and the analyzer. The effect of adding the quarter-wave plate after the source-side polarizer is that we get circularly polarized light passing through the sample. The analyzer-side quarter-wave plate converts the circular polarization state back to linear before the light passes through the analyzer. The basic advantage of a circular polariscope over a plane polariscope is that in a circular polariscope setup we only get the isochromatics and not the isoclinics. This eliminates the problem of differentiating between the isoclinics and the isochromatics. See also Acousto-optic modulator Electrostriction Mechanochromism Photoelastic modulator Polarimetry References External links University of Cambridge Page on Photoelasticity. Laboratory for Physical Modeling of Structures and Photoelasticity (University of Trento, Italy) Build your own polariscope Materials science Mechanical engineering Mechanics Optics
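To make the stress-optic law quoted above concrete, here is a minimal sketch (Python) computing the retardation and fringe order for a hypothetical specimen; the material constant, thickness, wavelength, and stresses are assumed illustrative values, not from the article.

```python
import math

def fringe_order(C, t, lam, sigma1, sigma2):
    """Stress-optic law: Delta = (2*pi*t/lambda) * C * (sigma1 - sigma2);
    the fringe order is N = Delta / (2*pi)."""
    delta = (2 * math.pi * t / lam) * C * (sigma1 - sigma2)
    return delta / (2 * math.pi)

# Illustrative values (assumed, not from the article):
C = 4e-12      # stress-optic coefficient, 1/Pa (typical order for a polymer)
t = 5e-3       # specimen thickness, m
lam = 550e-9   # vacuum wavelength, m (green light)

N = fringe_order(C, t, lam, sigma1=20e6, sigma2=5e6)  # principal stresses in Pa
print(f"fringe order N = {N:.2f}")                    # about 0.55 for these values
```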
Photoelasticity
Physics,Chemistry,Materials_science,Engineering
1,967
40,491,703
https://en.wikipedia.org/wiki/SIMes
SIMes (or H2IMes) is an N-heterocyclic carbene. It is a white solid that dissolves in organic solvents. The compound is used as a ligand in organometallic chemistry. It is structurally related to the more common ligand IMes but has a saturated backbone (the S of SIMes indicates the saturated backbone), which makes it slightly more flexible; it is a component of the second-generation Grubbs catalyst (Grubbs II). It is prepared by alkylation of 2,4,6-trimethylaniline (mesitylamine) with dibromoethane, followed by ring closure and dehydrohalogenation. References Carbenes
SIMes
Chemistry
129
11,569,748
https://en.wikipedia.org/wiki/Colletotrichum%20glycines
Colletotrichum glycines is a species of fungus in the family Glomerellaceae. It is a plant pathogen, causing soybean and tomato anthracnose. It is the teleomorph form of Glomerella glycines. See also List of soybean diseases References Fungal plant pathogens and diseases Enigmatic Sordariomycetes taxa Soybean diseases Fungi described in 1920 Fungus species
Colletotrichum glycines
Biology
96
51,594,670
https://en.wikipedia.org/wiki/Rhopalosiphum%20padi%20virus
Rhopalosiphum padi virus (RhPV) is a member of the family Dicistroviridae, which includes cricket paralysis virus (CrPV), Plautia stali intestine virus and Drosophila C virus. Its 5′ UTR contains an internal ribosome entry site (IRES) element with cross-kingdom activity: it can function efficiently in mammalian, plant and insect translation systems. Testing of R. padi aphids collected from different sites in Sweden revealed the presence of RhPV in wild aphid populations for the first time in Europe. The virus could be detected in several life stages of R. padi, including sexual individuals and eggs, establishing an over-wintering route for the virus. References Cis-regulatory RNA elements Dicistroviridae
Rhopalosiphum padi virus
Biology
169
18,717,991
https://en.wikipedia.org/wiki/QS%20Aquilae
QS Aquilae is a triple or quadruple star system consisting of an eclipsing binary in a 2.5 day orbit around which a third star orbits in 77 years. There is some indication that there is a fourth component with a period of roughly 18 years. Located in the constellation Aquila, its visual magnitude varies from 5.93 to 6.06, making it barely visible to the naked eye. The star's variability was discovered photometrically by Paul Guthnick and Richard Prager in 1930. It was given its variable star designation in 1934. References Aquila (constellation) 185936 Spectroscopic binaries Algol variables Eclipsing binaries Aquilae, QS 7846 B-type main-sequence stars 096840 Durchmusterung objects
QS Aquilae
Astronomy
167
14,467,117
https://en.wikipedia.org/wiki/Michael%20Hinchey
Michael Gerard Hinchey (born 1969) is an Irish computer scientist and former Director of the Irish Software Engineering Research Centre (Lero), a multi-university research centre headquartered at the University of Limerick, Ireland. He now serves as Head of the Department of Computer Science & Information Systems at the University of Limerick. Mike Hinchey studied at the University of Limerick as an undergraduate (where he was the leading student in his graduating year), at Oxford University (Wolfson College) for his MSc, and at Cambridge University (St John's College) for his PhD. Hinchey has been a promulgator of formal methods throughout his career, especially CSP and the Z notation. He was Director of the NASA Software Engineering Laboratory at NASA Goddard Space Flight Center and is the founding editor-in-chief of the NASA journal Innovations in Systems and Software Engineering, launched in 2005. He has held many academic positions, both visiting and permanent, at a number of universities, including the University of Nebraska, Queen's University Belfast, the New Jersey Institute of Technology, Hiroshima University and the University of Skövde in Sweden, and was at Loyola College in Maryland (now Loyola University Maryland), United States, before his current post. Hinchey is a Member of Academia Europaea, a Fellow of the IET, a Fellow of the IMA, and a Senior Member of the IEEE. He is a Chartered Engineer, Chartered Professional Engineer, Chartered Mathematician and Chartered IT Professional. As of 2016, Hinchey has been serving as President of IFIP (International Federation for Information Processing). Selected publications Hinchey, M.G. and Bowen, J.P., editors, Applications of Formal Methods. Prentice Hall International Series in Computer Science, 1995. . Dean, C.N. and Hinchey, M.G., editors, Teaching and Learning Formal Methods, Academic Press, London, 1996. . Bowen, J.P. and Hinchey, M.G., editors, High-Integrity System Specification and Design. Springer-Verlag, London, FACIT series, 1999. . Hinchey, M.G. and Bowen, J.P., editors, Industrial-Strength Formal Methods in Practice. Springer-Verlag, London, FACIT series, 1999. . References External links Mike Hinchey web page Interview – The Irish Times 1969 births Living people Alumni of the University of Limerick Alumni of Wolfson College, Oxford Alumni of St John's College, Cambridge Irish computer scientists Formal methods people Fellows of the Institution of Engineering and Technology NASA people Loyola University Maryland faculty Academics of the University of Limerick Irish book editors Irish non-fiction writers Irish male non-fiction writers Senior members of the IEEE Computer science writers Academic journal editors Academic staff of the University of Skövde
Michael Hinchey
Engineering
567
35,998,312
https://en.wikipedia.org/wiki/Ion-to-photon%20detector
An ion-to-photon detector (IPD) is a component used for detecting ions in mass spectrometry. Operation In an ion-to-photon detector, a photomultiplier tube is coated with a layer of a scintillator compound, such as Rhodamine B or CsI. When the ions pass through the mass analyzer of the spectrometer, they strike the scintillator compound and cause the release of photons. These photons are then detected by the photomultiplier tube. A conversion dynode, such as a microchannel plate, can also be used between the ion beam and the scintillator to increase the signal. An MCP, when struck by an ion, will release electrons which then strike the scintillator. Applications The primary application for ion-to-photon detectors is acting as the detector in MALDI mass spectrometry. They could, in theory, be used for other types of mass spectrometry as well. Comparison to other detectors The conversion efficiency of ions to photons by an IPD is as good as, or better than, the conversion efficiency of ions to electrons by a multichannel plate detector. Ion-to-photon detectors may also detect ions with masses of up to around 20,000 Da, better than microchannel plates. However, the resolution of the mass spectrum from an IPD-equipped spectrometer is slightly lower. The noise in the spectrum, which may come from unfocused, slow-moving ions, is also slightly higher. References Mass spectrometry
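As a rough illustration of the detection chain described above (ion strike, scintillation photons, then photomultiplier electrons), the sketch below multiplies per-stage yields; every number is an invented placeholder, since the article quotes no specific efficiencies.

```python
# Toy signal-chain estimate for an ion-to-photon detector.
# Every per-stage yield below is an invented placeholder, purely illustrative.

photons_per_ion = 50          # scintillation photons produced per ion strike
collection_efficiency = 0.2   # fraction of photons reaching the photocathode
quantum_efficiency = 0.25     # photoelectrons per collected photon
pmt_gain = 1e6                # electron multiplication inside the photomultiplier

electrons_out = photons_per_ion * collection_efficiency * quantum_efficiency * pmt_gain
print(f"~{electrons_out:.2e} electrons at the anode per detected ion")
```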
Ion-to-photon detector
Physics,Chemistry
327
8,008,408
https://en.wikipedia.org/wiki/Pulse%20shaping
In electronics and telecommunications, pulse shaping is the process of changing the waveform of transmitted pulses to optimize the signal for its intended purpose or the communication channel. This is often done by limiting the bandwidth of the transmission and filtering the pulses to control intersymbol interference. Pulse shaping is particularly important in RF communication for fitting the signal within a certain frequency band and is typically applied after line coding and modulation. Need for pulse shaping Transmitting a signal at a high modulation rate through a band-limited channel can create intersymbol interference. The reason lies in Fourier correspondences (see Fourier transform): a band-limited signal corresponds to a signal of infinite duration in time, which causes neighbouring pulses to overlap. As the modulation rate increases, the signal's bandwidth increases. A sharply rectangular spectrum corresponds to a sinc shape in the time domain. If the bandwidth of the signal is larger than the channel bandwidth, the result is distortion, which usually manifests itself as intersymbol interference (ISI). Theoretically, sinc-shaped pulses produce no ISI if neighbouring pulses are perfectly aligned, i.e. each pulse is sampled at the zero crossings of all the others; but this requires very good synchronization and precise, stable sampling without jitter. As a practical tool to assess ISI, one uses the eye pattern, which visualizes typical effects of the channel and of synchronization/frequency instability. The signal's spectrum is determined by the modulation scheme and data rate used by the transmitter, but can be modified with a pulse shaping filter. This pulse shaping makes the spectrum smooth, leading again to a time-limited signal. Usually the transmitted symbols are represented as a time sequence of Dirac delta pulses multiplied by the symbol values. This is the formal transition from the digital to the analog domain, and at this point the bandwidth of the signal is unlimited. This theoretical signal is then filtered with the pulse shaping filter, producing the transmitted signal. If the pulse shaping filter were rectangular in the time domain (as it is usually drawn), the result would be an unlimited spectrum. In many baseband communication systems the pulse shaping filter is implicitly a boxcar filter. Its Fourier transform is of the form sin(x)/x, and has significant signal power at frequencies above the symbol rate. This is not a big problem when optical fibre or even twisted-pair cable is used as the communication channel. However, in RF communications this would waste bandwidth, and only tightly specified frequency bands are used for single transmissions. In other words, the channel for the signal is band-limited. Therefore, better filters have been developed, which attempt to minimise the bandwidth needed for a certain symbol rate. An example in other areas of electronics is the generation of pulses whose rise time needs to be short; one way to do this is to start with a slower-rising pulse and decrease the rise time, for example with a step recovery diode circuit. These descriptions provide a working knowledge that covers most effects, but they do not include causality, which leads to analytic functions/signals. A complete understanding requires the Hilbert transform, which induces a direction by convolution with the Cauchy kernel.
This couples the real and imaginary parts of the baseband description, thereby adding structure. It immediately implies that either the real or the imaginary part suffices to describe an analytic signal. Measuring both in a noisy setting provides a redundancy that can be used to better reconstruct the original signal. A physical realization is always causal, since an analytic signal carries the information. Pulse shaping filters Not every filter can be used as a pulse shaping filter. The filter itself must not introduce intersymbol interference; it needs to satisfy certain criteria. The Nyquist ISI criterion is a commonly used criterion for evaluation, because it relates the frequency spectrum of the transmitted signal to intersymbol interference. Examples of pulse shaping filters that are commonly found in communication systems are: Sinc-shaped filter Raised-cosine filter Gaussian filter Sender-side pulse shaping is often combined with a receiver-side matched filter to achieve optimum tolerance for noise in the system. In this case the pulse shaping is equally distributed between the sender and receiver filters. The filters' amplitude responses are thus pointwise square roots of the system filters. Other approaches that eliminate complex pulse shaping filters have been invented. In OFDM, the carriers are modulated so slowly that each carrier is virtually unaffected by the bandwidth limitation of the channel. Sinc filter It is also called a boxcar filter, as its frequency-domain equivalent is a rectangular shape. Theoretically the best pulse shaping filter would be the sinc filter, but it cannot be implemented precisely. It is a non-causal filter with relatively slowly decaying tails. It is also problematic from a synchronisation point of view, as any phase error results in steeply increasing intersymbol interference. Raised-cosine filter Raised-cosine is similar to sinc, with the tradeoff of smaller sidelobes for a slightly larger spectral width. Raised-cosine filters are practical to implement and they are in wide use. They have a configurable excess bandwidth, so communication systems can choose a trade-off between a simpler filter and spectral efficiency. Gaussian filter This gives an output pulse shaped like a Gaussian function. See also Nyquist ISI criterion Raised-cosine filter Matched filter Femtosecond pulse shaping Pulse (signal processing) References John G. Proakis, "Digital Communications, 3rd Edition" Chapter 9, McGraw-Hill Book Co., 1995. National Instruments Signal Generator Tutorial, Pulse Shaping to Improve Spectral Efficiency National Instruments Measurement Fundamentals Tutorial, Pulse-Shape Filtering in Communications Systems Root Raised Cosine Filters & Pulse Shaping in Communication Systems by Erkin Cubukcu (ntrs.nasa.gov). Telecommunication theory Signal processing
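The sketch below (Python/NumPy) ties together the two ideas above: it builds raised-cosine filter taps from the standard closed form and then verifies the Nyquist ISI criterion by recovering the symbols exactly at symbol-spaced sampling instants. The roll-off factor, filter span, and oversampling factor are assumed illustrative values.

```python
import numpy as np

def raised_cosine(t, T=1.0, beta=0.35):
    """Raised-cosine impulse response; beta is the roll-off factor."""
    t = np.asarray(t, dtype=float)
    num = np.sinc(t / T) * np.cos(np.pi * beta * t / T)
    den = 1.0 - (2.0 * beta * t / T) ** 2
    singular = np.isclose(den, 0.0)
    safe = np.where(singular, 1.0, den)
    # At t = +/- T/(2*beta) the closed form is 0/0; substitute the known limit.
    return np.where(singular, (np.pi / 4) * np.sinc(1.0 / (2.0 * beta)), num / safe)

sps, span = 8, 12                       # samples per symbol, filter span in symbols
tt = np.arange(-span * sps // 2, span * sps // 2 + 1) / sps
taps = raised_cosine(tt)

rng = np.random.default_rng(1)
symbols = rng.choice([-1.0, 1.0], size=64)
upsampled = np.zeros(len(symbols) * sps)
upsampled[::sps] = symbols              # impulse train at the symbol rate
shaped = np.convolve(upsampled, taps)   # band-limited transmit waveform

# Nyquist ISI criterion check: sampling at the symbol instants recovers the
# symbols, because the raised cosine is zero at all nonzero multiples of T.
delay = span * sps // 2                 # filter group delay in samples
recovered = shaped[delay : delay + len(symbols) * sps : sps]
print(np.allclose(recovered, symbols, atol=1e-3))   # -> True
```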
Pulse shaping
Technology,Engineering
1,231
3,303,981
https://en.wikipedia.org/wiki/Quantum%20gyroscope
A quantum gyroscope is a very sensitive device to measure angular rotation based on quantum mechanical principles. The first of these was built by Richard Packard and his colleagues at the University of California, Berkeley. The extreme sensitivity means that, in theory, a larger version could detect effects like minute changes in the rotational rate of the Earth. Principle In 1962, Cambridge University PhD student Brian Josephson hypothesized that an electric current could travel between two superconducting materials even when they were separated by a thin insulating layer. The term Josephson effect has come to refer generically to the different behaviors that occur in any two weakly connected macroscopic quantum systems—systems composed of molecules that all possess identical wavelike properties. Among other things, the Josephson effect means that when two superfluids (zero-friction fluids) are connected using a weak link and pressure is applied to the superfluid on one side of the weak link, the fluid will oscillate from one side of the weak link to the other. This phenomenon, known as quantum whistling, occurs when pressure is applied to push a superfluid through a very small hole, somewhat as sound is produced by blowing air through an ordinary whistle. A ring-shaped tube full of superfluid, blocked by a barrier containing a tiny hole, could in principle be used to detect pressure differences caused by changes in the rotational motion of the ring, in effect functioning as a sensitive gyroscope. Superfluid whistling was first demonstrated using helium-3, which has the disadvantage of being scarce and expensive, and of requiring an extremely low temperature (a few thousandths of a kelvin). Common helium-4, which remains superfluid below about 2.2 kelvin, is much more practical, but its quantum whistling is too weak to be heard through a single practical-sized hole. This problem was overcome by using barriers with thousands of holes, in effect a chorus of quantum whistles producing sound waves that reinforce one another by constructive interference. Equation The rotation-induced phase shift is $\Delta\phi = \frac{4\pi}{\kappa_3}\,\boldsymbol{\Omega}\cdot\mathbf{A},$ where $\boldsymbol{\Omega}$ is the rotation vector, $\mathbf{A}$ is the area vector, and $\kappa_3$ is the quantum of circulation of helium-3. References See also Polariton interferometer Ring laser gyroscope Gyroscope Vibrating structure gyroscope Inertial measurement unit Hemispherical resonator gyroscope Superconductivity Gyroscopes Quantum mechanics
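For a sense of scale, the sketch below (Python) evaluates the phase-shift formula above for Earth's rotation rate. Two assumptions to flag: the 1 cm² sensing-loop area is an invented illustrative value, and the formula itself is the standard Sagnac form reconstructed from the variables the article names.

```python
import math

h = 6.62607015e-34                 # Planck constant, J*s
m3 = 3.0160293 * 1.66053907e-27    # helium-3 atomic mass, kg
kappa3 = h / (2 * m3)              # quantum of circulation for paired helium-3, m^2/s

omega_earth = 7.2921e-5            # Earth's rotation rate, rad/s
area = 1e-4                        # assumed sensing-loop area of 1 cm^2, in m^2

# Reconstructed form: delta_phi = (4*pi / kappa3) * (Omega . A), with Omega || A
delta_phi = 4 * math.pi * omega_earth * area / kappa3
print(f"phase shift: {delta_phi:.2f} rad")   # roughly 1.4 rad: easily resolvable
```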
Quantum gyroscope
Physics,Materials_science,Engineering
479
18,255,021
https://en.wikipedia.org/wiki/Singularity%20%28DeSmedt%20novel%29
Singularity is a novel by Bill DeSmedt published by Per Aspera Press in 2004. It is DeSmedt's debut novel and explores the theory that the Tunguska event was caused by a micro black hole. Synopsis and publication Released in 2004, Singularity is both DeSmedt's and the publishing house's debut novel. It is the first novel in the Archon Sequence, a series built around the Tunguska event; DeSmedt's second and third novels in the series are Dualism (2014) and Triploidy (2022). On Barnes & Noble's science fiction and fantasy list, Singularity ranked fifth. On Mysterious Galaxy's bestsellers rankings, it ranked seventh. Plot summary The novel is based on the theory that the Tunguska event was caused by a micro black hole. Trying to locate weapons of mass destruction, Marianna Bonaventure, an American agent in the United States Department of Energy's CROM (Critical Resources Oversight Mandate), has to work together with the outstanding analyst Jonathan Knox. Reception The Seattle Times's Nisi Shawl wrote, "DeSmedt's clear descriptions of everything from the core of a typical star to the sinister device an assassin uses to mimic a wolf's bite make it easy to follow his swiftly swooping story line". Robert Folsom praised the book in The Kansas City Star, writing, "The dialogue would be another matter; it's very scientific. But DeSmedt has managed a neat trick: Conversations are lively even though they're peppered with accurate physicist's jargon. The thriller aspect of the book helps." The San Diego Union-Tribune's Jim Hopper called the novel "a stylish technothriller". The Fayetteville Observer said the novel was "a science fiction thriller [that] will appeal to readers who enjoy Michael Crichton". Danica McKellar praised the book in an interview with the New York Post, stating, "It's my favorite science fiction thriller. It's got everything - great characters, suspense, action, romance, and you just might learn something about black holes along the way." Referring to how Earth's gravity could have sucked in a black hole, John R. Alden wrote in The Plain Dealer, "Singularity takes this bizarre possibility, adds a cast of exotic characters, whips in a blitzkrieg plot and bakes it all into a hugely entertaining near-future thriller. James Bond would have loved to star in a story such as this." In a mixed review, Publishers Weekly said, "The sexual chemistry between Marianna and Jonathan adds spice. Exotic hardware, lifestyles of the rich and notorious, double- and triple-crosses and a slightly rushed and facile conclusion all make a respectable if not outstanding first effort." Awards The novel was awarded the "Gold Medal for Science Fiction" as part of Foreword Magazine's "Book of the Year Awards". It received the Independent Publisher Book Awards' "Ippy" prize for Best Fantasy/Science Fiction novel of 2004. About the author Bill DeSmedt is an American author and software engineer. References External links 2004 American novels 2004 science fiction novels American science fiction novels Black holes Debut novels
Singularity (DeSmedt novel)
Physics,Astronomy
671
24,701,090
https://en.wikipedia.org/wiki/P%C3%B3lya%E2%80%93Szeg%C5%91%20inequality
In mathematical analysis, the Pólya–Szegő inequality (or Szegő inequality) states that the Sobolev energy of a function in a Sobolev space does not increase under symmetric decreasing rearrangement. The inequality is named after the mathematicians George Pólya and Gábor Szegő. Mathematical setting and statement Given a Lebesgue measurable function $u : \mathbb{R}^n \to \mathbb{R}^{+}$, the symmetric decreasing rearrangement $u^{*} : \mathbb{R}^n \to \mathbb{R}^{+}$ is the unique function such that for every $t \in \mathbb{R}$ the sublevel set $(u^{*})^{-1}\big((t, +\infty)\big)$ is an open ball centred at the origin that has the same Lebesgue measure as $u^{-1}\big((t, +\infty)\big)$. Equivalently, $u^{*}$ is the unique radial and radially nonincreasing function whose strict sublevel sets are open and have the same measure as those of the function $u$. The Pólya–Szegő inequality states that if moreover $u \in W^{1,p}(\mathbb{R}^n)$, then $u^{*} \in W^{1,p}(\mathbb{R}^n)$ and $\int_{\mathbb{R}^n} \lvert \nabla u^{*} \rvert^{p}\,dx \le \int_{\mathbb{R}^n} \lvert \nabla u \rvert^{p}\,dx.$ Applications of the inequality The Pólya–Szegő inequality is used to prove the Rayleigh–Faber–Krahn inequality, which states that among all the domains of a given fixed volume, the ball has the smallest first eigenvalue for the Laplacian with Dirichlet boundary conditions. The proof goes by restating the problem as a minimization of the Rayleigh quotient. The isoperimetric inequality can be deduced from the Pólya–Szegő inequality with $p = 1$. The optimal constant in the Sobolev inequality can be obtained by combining the Pólya–Szegő inequality with some integral inequalities. Equality cases Since the Sobolev energy is invariant under translations, any translation of a radial function achieves equality in the Pólya–Szegő inequality. There are, however, other functions that can achieve equality, obtained for example by taking a radial nonincreasing function that achieves its maximum on a ball of positive radius and adding to this function another function which is radial with respect to a different point and whose support is contained in the maximum set of the first function. In order to avoid this obstruction, an additional condition is thus needed. It has been proved that if the function $u$ achieves equality in the Pólya–Szegő inequality, and if the set $\{x : \nabla u^{*}(x) = 0\} \cap (u^{*})^{-1}\big((0, \operatorname{ess\,sup} u)\big)$ is a null set for the Lebesgue measure, then the function $u$ is radial and radially nonincreasing with respect to some point. Generalizations The Pólya–Szegő inequality is still valid for symmetrizations on the sphere or the hyperbolic space. The inequality also holds for partial symmetrizations defined by foliating the space into planes (Steiner symmetrization) and into spheres (cap symmetrization). There are also Pólya−Szegő inequalities for rearrangements with respect to non-Euclidean norms and using the dual norm of the gradient. Proofs of the inequality Original proof by a cylindrical isoperimetric inequality The original proof by Pólya and Szegő for $p = 2$ was based on an isoperimetric inequality comparing sets with cylinders and an asymptotic expansion of the area of the graph of a function. The inequality is proved for a smooth function $u$ that vanishes outside a compact subset of the Euclidean space $\mathbb{R}^n$. For every $\varepsilon > 0$, they define the sets $E_\varepsilon = \{(x, t) \in \mathbb{R}^n \times \mathbb{R} : 0 < t < \varepsilon\, u(x)\}$ and $E^{*}_\varepsilon = \{(x, t) \in \mathbb{R}^n \times \mathbb{R} : 0 < t < \varepsilon\, u^{*}(x)\}.$ These sets are the sets of points lying between the domains of the functions $\varepsilon u$ and $\varepsilon u^{*}$ and their respective graphs. They then use the geometrical fact that, since the horizontal slices of both sets have the same measure and those of the second are balls, the area of the boundary of the cylindrically symmetric set $E^{*}_\varepsilon$ cannot exceed that of $E_\varepsilon$.
These areas can be computed by the area formula, yielding the inequality $\int_{\mathbb{R}^n} \sqrt{1 + \varepsilon^{2} \lvert \nabla u^{*} \rvert^{2}}\,dx + \mathcal{L}^{n}\big(\{u^{*} > 0\}\big) \le \int_{\mathbb{R}^n} \sqrt{1 + \varepsilon^{2} \lvert \nabla u \rvert^{2}}\,dx + \mathcal{L}^{n}\big(\{u > 0\}\big).$ Since the sets $\{u > 0\}$ and $\{u^{*} > 0\}$ have the same measure, this is equivalent to $\int_{\mathbb{R}^n} \sqrt{1 + \varepsilon^{2} \lvert \nabla u^{*} \rvert^{2}}\,dx \le \int_{\mathbb{R}^n} \sqrt{1 + \varepsilon^{2} \lvert \nabla u \rvert^{2}}\,dx.$ The conclusion then follows from the fact that $\lim_{\varepsilon \to 0} \int_{\mathbb{R}^n} \frac{\sqrt{1 + \varepsilon^{2} \lvert \nabla u \rvert^{2}} - 1}{\varepsilon^{2}}\,dx = \frac{1}{2} \int_{\mathbb{R}^n} \lvert \nabla u \rvert^{2}\,dx.$ Coarea formula and isoperimetric inequality The Pólya–Szegő inequality can be proved by combining the coarea formula, Hölder's inequality and the classical isoperimetric inequality. If the function $u$ is smooth enough, the coarea formula can be used to write $\int_{\mathbb{R}^n} \lvert \nabla u \rvert^{p}\,dx = \int_{0}^{+\infty} \int_{u^{-1}(\{t\})} \lvert \nabla u \rvert^{p-1}\,d\mathcal{H}^{n-1}\,dt,$ where $\mathcal{H}^{n-1}$ denotes the $(n-1)$-dimensional Hausdorff measure on the Euclidean space $\mathbb{R}^n$. For almost every $t$, we have by Hölder's inequality $\mathcal{H}^{n-1}\big(u^{-1}(\{t\})\big) \le \left( \int_{u^{-1}(\{t\})} \lvert \nabla u \rvert^{p-1}\,d\mathcal{H}^{n-1} \right)^{\!\frac{1}{p}} \left( \int_{u^{-1}(\{t\})} \frac{d\mathcal{H}^{n-1}}{\lvert \nabla u \rvert} \right)^{\!\frac{p-1}{p}}.$ Therefore, we have $\int_{u^{-1}(\{t\})} \lvert \nabla u \rvert^{p-1}\,d\mathcal{H}^{n-1} \ge \frac{\mathcal{H}^{n-1}\big(u^{-1}(\{t\})\big)^{p}}{\left( \int_{u^{-1}(\{t\})} \frac{d\mathcal{H}^{n-1}}{\lvert \nabla u \rvert} \right)^{p-1}}.$ Since the set $(u^{*})^{-1}\big((t, +\infty)\big)$ is a ball that has the same measure as the set $u^{-1}\big((t, +\infty)\big)$, by the classical isoperimetric inequality we have $\mathcal{H}^{n-1}\big((u^{*})^{-1}(\{t\})\big) \le \mathcal{H}^{n-1}\big(u^{-1}(\{t\})\big).$ Moreover, recalling that the sublevel sets of the functions $u$ and $u^{*}$ have the same measure, $\int_{(u^{*})^{-1}(\{t\})} \frac{d\mathcal{H}^{n-1}}{\lvert \nabla u^{*} \rvert} = \int_{u^{-1}(\{t\})} \frac{d\mathcal{H}^{n-1}}{\lvert \nabla u \rvert},$ and therefore, $\int_{u^{-1}(\{t\})} \lvert \nabla u \rvert^{p-1}\,d\mathcal{H}^{n-1} \ge \frac{\mathcal{H}^{n-1}\big((u^{*})^{-1}(\{t\})\big)^{p}}{\left( \int_{(u^{*})^{-1}(\{t\})} \frac{d\mathcal{H}^{n-1}}{\lvert \nabla u^{*} \rvert} \right)^{p-1}}.$ Since the function $u^{*}$ is radial, $\lvert \nabla u^{*} \rvert$ is constant on each level set, so the last quotient equals $\int_{(u^{*})^{-1}(\{t\})} \lvert \nabla u^{*} \rvert^{p-1}\,d\mathcal{H}^{n-1}$, and the conclusion follows by applying the coarea formula again. Rearrangement inequalities for convolution When $p = 2$, the Pólya–Szegő inequality can be proved by representing the Sobolev energy by the heat kernel. One begins by observing that $\int_{\mathbb{R}^n} \lvert \nabla u \rvert^{2}\,dx = \lim_{t \to 0} \frac{1}{t} \left( \int_{\mathbb{R}^n} \lvert u \rvert^{2}\,dx - \int_{\mathbb{R}^n} \int_{\mathbb{R}^n} K_t(x - y)\, u(x)\, u(y)\,dx\,dy \right),$ where for $t > 0$ the function $K_t$ is the heat kernel, defined for every $z \in \mathbb{R}^n$ by $K_t(z) = \frac{1}{(4\pi t)^{n/2}}\, e^{-\frac{\lvert z \rvert^{2}}{4t}}.$ Since for every $t > 0$ the function $K_t$ is radial and radially decreasing, we have by the Riesz rearrangement inequality $\int_{\mathbb{R}^n} \int_{\mathbb{R}^n} K_t(x - y)\, u(x)\, u(y)\,dx\,dy \le \int_{\mathbb{R}^n} \int_{\mathbb{R}^n} K_t(x - y)\, u^{*}(x)\, u^{*}(y)\,dx\,dy.$ Hence, since $\int \lvert u \rvert^{2}\,dx = \int \lvert u^{*} \rvert^{2}\,dx$, we deduce that $\int_{\mathbb{R}^n} \lvert \nabla u^{*} \rvert^{2}\,dx \le \int_{\mathbb{R}^n} \lvert \nabla u \rvert^{2}\,dx.$ References Sobolev spaces Geometric inequalities Rearrangement inequalities
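The p = 2 case is easy to probe numerically in one dimension. The sketch below (Python/NumPy) builds a discrete analogue of the symmetric decreasing rearrangement (largest sample in the middle, the rest placed alternately right and left) and checks that the discrete Dirichlet energy, with zero boundary values, does not increase. This is an informal illustration, not one of the article's proofs.

```python
import numpy as np

def symmetric_rearrangement(u):
    """Discrete symmetric decreasing rearrangement: largest value in the middle,
    remaining values placed alternately to the right and left in decreasing order."""
    vals = np.sort(u)[::-1]
    n = len(vals)
    out = np.empty(n)
    mid, idx, step = n // 2, n // 2, 1
    for k, v in enumerate(vals):
        out[idx] = v
        if k % 2 == 0:
            idx = mid + step
        else:
            idx = mid - step
            step += 1
    return out

def dirichlet_energy(u):
    """Discrete Sobolev (Dirichlet) energy with zero boundary values."""
    w = np.concatenate(([0.0], u, [0.0]))
    return float(np.sum(np.diff(w) ** 2))

rng = np.random.default_rng(0)
u = rng.random(101)   # a nonnegative sample function
print(dirichlet_energy(symmetric_rearrangement(u)) <= dirichlet_energy(u))  # True
```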
Pólya–Szegő inequality
Mathematics
1,004
23,138,298
https://en.wikipedia.org/wiki/Hodgkin%20cycle
In membrane biology, the Hodgkin cycle is a key component of membrane physiology that describes bioelectrical impulses, especially prevalent in neural and muscle tissues. It was identified by the British physiologist and biophysicist Sir Alan Lloyd Hodgkin. The Hodgkin cycle represents a positive feedback loop in which an initial membrane depolarization leads to uncontrolled deflection of the membrane potential towards VNa. The initial depolarization must reach or surpass a certain threshold in order to activate voltage-gated Na+ channels. The opening of Na+ channels allows Na+ inflow, which, in turn, further depolarizes the membrane. Additional depolarization activates additional Na+ channels. This cycle leads to a very rapid rise in Na+ conductance (gNa), which moves the membrane potential close to VNa. The cycle is broken when the membrane potential reaches the sodium equilibrium potential and potassium channels open to repolarize the membrane potential. This positive feedback loop means that the closer these voltage-gated Na+ channels are to each other, the lower the threshold of activation. Importance in cell physiology An understanding of membrane physiology is needed in order to understand how cells communicate with one another. Signalling between cells, such as neurons, revolves around changes in the electrical potentials across their membranes. In a healthy cell at rest, the intracellular region is usually negatively charged relative to the extracellular region. Depolarization refers to the loss of this charge difference, bringing the intracellular region to the same voltage as the extracellular region. The concentration of sodium ions is intimately related to the electrical potential across a membrane. Depolarization often occurs via the influx of sodium ions into the intracellular region. Given that sodium ions have a positive charge, the intracellular region becomes less negatively charged relative to the extracellular region. References Membrane biology
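A deliberately crude simulation (Python) of the feedback loop described above: once the potential crosses threshold, sodium conductance keeps recruiting, which depolarizes further and pulls the potential toward VNa. This is a caricature for illustration only, not the Hodgkin–Huxley equations, and every constant is invented.

```python
# Toy positive-feedback model of the Hodgkin cycle.
# All constants are invented illustrative values, not physiological fits.

E_NA, E_REST = 60.0, -70.0   # mV: sodium equilibrium and resting potentials
THRESHOLD = -55.0            # mV: depolarization needed to open Na+ channels
DT, TAU = 0.01, 0.5          # time step and channel-recruitment time, ms

v = -54.0                    # start just above threshold
g_na = 0.0                   # relative Na+ conductance, 0..1
for _ in range(800):         # simulate 8 ms
    if v > THRESHOLD:
        g_na += DT / TAU * (1.0 - g_na)        # depolarization opens more channels
    # Na+ inflow pulls v toward E_NA; a weak leak pulls it back toward rest
    dv = g_na * (E_NA - v) + 0.05 * (E_REST - v)
    v += DT * dv

print(f"V after 8 ms: {v:.1f} mV (pulled toward E_Na = {E_NA:.0f} mV)")
```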
Hodgkin cycle
Chemistry
396
13,280,188
https://en.wikipedia.org/wiki/20%2C000
20,000 (twenty thousand) is the natural number that comes after 19,999 and before 20,001. 20,000 is a round number and is also in the title of Jules Verne's 1870 novel Twenty Thousand Leagues Under the Seas. Selected numbers in the range 20001–29999 20001 to 20999 20002 = number of surface-points of a tetrahedron with edge-length 100 20067 = The smallest number with no entry in the Online Encyclopedia of Integer Sequences (OEIS) 20100 = sum of the first 200 natural numbers (hence a triangular number) 20160 = 23rd highly composite number; the smallest order belonging to two non-isomorphic simple groups: the alternating group A8 and the Chevalley group A2(4) 20161 = the largest integer that cannot be expressed as a sum of two abundant numbers 20230 = pentagonal pyramidal number 20412 = Leyland number: 93 + 39 20540 = square pyramidal number 20569 = tetranacci number 20593 = unique prime in base 12 20597 = k such that the sum of the squares of the first k primes is divisible by k. 20736 = 1442 = 124, 1000012, palindromic in base 15 (622615), also called a dozen great-gross in some duodecimal nomenclature. 20793 = little Schroeder number 20871 = The number of weeks in exactly 400 years in the Gregorian calendar 20903 = first prime of form 120k + 23 that is not a full reptend prime 21000 to 21999 21025 = 1452, palindromic in base 12 (1020112) 21147 = Bell number 21181 = the least of five remaining Seventeen or Bust numbers in the Sierpiński problem 21209 = number of reduced trees with 23 nodes 21637 = number of partitions of 37 21856 = octahedral number 21943 = Friedman prime 21952 = 283 21978 = reverses when multiplied by 4: 4 × 21978 = 87912 22000 to 22999 22050 = pentagonal pyramidal number 22140 = square pyramidal number 22222 = repdigit, Kaprekar number: 222222 = 493817284, 4938 + 17284 = 22222 22447 = cuban prime 22527 = Woodall number: 11 × 211 − 1 22621 = repunit prime in base 12 22699 = one of five remaining Seventeen or Bust numbers in the Sierpiński problem 23000 to 23999 23000 = number of primes . 
23401 = Leyland number: 65 + 56 23409 = 1532, sum of the cubes of the first 17 positive integers 23497 = cuban prime 23821 = square pyramidal number 23833 = Padovan prime 23969 = octahedral number 23976 = pentagonal pyramidal number 24000 to 24999 24000 = number of primitive polynomials of degree 20 over GF(2) 24211 = Zeisel number 24336 = 1562, palindromic in base 5: 12343215 24389 = 293 24571 = cuban prime 24631 = Wedderburn–Etherington prime 24649 = 1572, palindromic in base 12: 1232112 24737 = one of five remaining Seventeen or Bust numbers in the Sierpinski problem 24742 = number of signed trees with 10 nodes 25000 to 25999 25011 = the smallest composite number, ending in 1, 3, 7, or 9, that in base 10 remains composite after any insertion of a digit 25085 = Zeisel number 25117 = cuban prime 25200 = 224th triangular number, 24th highly composite number, smallest number with exactly 90 factors 25205 = largest number whose factorial is less than 10100000 25482 = number of 21-bead necklaces (turning over is allowed) where complements are equivalent 25585 = square pyramidal number 25724 = Fine number 25920 = smallest number with exactly 70 factors 26000 to 26999 26015 = number of partitions of 38 26214 = octahedral number 26227 = cuban prime 26272 = number of 20-bead binary necklaces with beads of 2 colors where the colors may be swapped but turning over is not allowed 26861 = smallest number for which there are more primes of the form 4k + 1 than of the form 4k + 3 up to the number, against Chebyshev's bias 26896 = 1642, palindromic in base 9: 408049 27000 to 27999 27000 = 303 27405 = heptagonal number, hexadecagonal number, 48-gonal number, 80-gonal number, smallest integer that is polygonal in exactly 10 ways. 27434 = square pyramidal number 27559 = Zeisel number 27594 = number of primitive polynomials of degree 19 over GF(2) 27648 = 11 × 22 × 33 × 44 27653 = Friedman prime 27720 = 25th highly composite number; smallest number divisible by the numbers from 1 to 12 (there is no smaller number divisible by the numbers from 1 to 11 since any number divisible by 3 and 4 must be divisible by 12) 27846 = harmonic divisor number 27889 = 1672 28000 to 28999 28158 = pentagonal pyramidal number 28374 = smallest integer to start a run of six consecutive integers with the same number of divisors 28393 = unique prime in base 13 28547 = Friedman prime 28559 = nice Friedman prime 28561 = 1692 = 134 = 1192 + 1202, number that is simultaneously a square number and a centered square number, palindromic in base 12: 1464112 28595 = octahedral number 28657 = Fibonacci prime, Markov prime 28900 = 1702, palindromic in base 13: 1020113 29000 to 29999 29241 = 1712, sum of the cubes of the first 18 positive integers 29341 = Carmichael number 29370 = square pyramidal number 29527 = Friedman prime 29531 = Friedman prime 29601 = number of planar partitions of 18 29791 = 313 Primes There are 983 prime numbers between 20000 and 30000. References 20000
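The closing claim about the number of primes is easy to verify with a short sieve (Python); this is an independent sanity check, not a citation.

```python
def sieve(n):
    """Return a boolean list where is_prime[i] says whether i is prime."""
    is_prime = [True] * (n + 1)
    is_prime[0] = is_prime[1] = False
    for p in range(2, int(n ** 0.5) + 1):
        if is_prime[p]:
            is_prime[p * p::p] = [False] * len(is_prime[p * p::p])
    return is_prime

is_prime = sieve(30000)
count = sum(is_prime[20001:30000])   # primes strictly between 20000 and 30000
print(count)                         # expected: 983
```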
20,000
Mathematics
1,343
16,878,758
https://en.wikipedia.org/wiki/Copolyester
A copolyester is a copolymer synthesized by modification of polyesters, which are combinations of diacids and diols. For example, by introducing other diacids, such as isophthalic acid (IPA), or other diols, such as cyclohexane dimethanol (CHDM), to the polyester polyethylene terephthalate (PET), the material becomes a copolyester due to its comonomer content. Copolyesters retain their strength, clarity, and other mechanical properties even when exposed to a variety of chemicals that typically affect other materials, such as polycarbonates. This, plus their versatility and flexibility, allows manufacturers to use them effectively in the design of both high-volume, low-cost parts and critical, more expensive component parts. Applications Copolyesters offer versatility to meet a wide variety of applications. Copolyester resins have proved to be effective in packaging applications, due to their toughness, versatility and chemical resistance. They are also frequently used in the manufacture and packaging of consumer goods and materials. Markets that rely on copolyesters include medical packaging, home appliances, consumer goods (pens, toys, sporting goods, etc.), and cosmetics, among others. Manufacturers The main global manufacturers and suppliers of copolyester resins are as follows (brand names in parentheses): Eastman Chemical Company – (Eastar, Provista, Tritan) Bostik Findley – (Vitel) Toyobo – (Vylon) Evonik – (Dynacoll S) SK Chemicals – (Skygreen, Ecozen, Skybon) Henkel – (Petaflex) Covestro (formerly Bayer MaterialScience AG) – (Vivak/PETg) Macroocean – (Marcoa, Marnex) EMS-CHEMIE – (Griltex) See also Polyester References Biomaterials Carboxylate esters Plastics Polyesters
Copolyester
Physics,Biology
432
37,134,263
https://en.wikipedia.org/wiki/Emergency%20disconnect%20package
An Emergency Disconnect Package (EDP) is a piece of equipment used in the drilling and work-over (servicing or modification) of deep-sea oil and gas wells by Mobile Offshore Drilling Units (MODUs) and Well Intervention Vessels (WIVs). The EDP is designed for use in an emergency, when the MODU or WIV needs to quickly disconnect and move away from the oil/gas well that it is drilling or working over. Examples of when this might be necessary include unexpected extreme weather that exceeds the MODU's or vessel's capability to maintain its position. Under normal operating conditions, the MODU/WIV (which is floating on the sea surface) is connected to the oil/gas well (which is drilled in the sea bed) by a vertical (or near-vertical) piece of steel pipe called a marine riser. Tools and fluids are moved within the marine riser as required to and from the well. At the bottom of the marine riser, the EDP and the other components that connect the riser to the well and allow the well to be shut in when required constitute a 'Lower Riser Package' (LRP). When required to do so, the EDP disconnects from the LRP and isolates the riser from the environment. Thus the EDP allows the MODU/WIV to safely and quickly disconnect from the subsea well and move away in an emergency. The EDP is designed to carry out its function while under load with a high disconnection angle. An EDP consists of a connector to the rest of the LRP, an isolation valve, an accumulator, a subsea control module and a connection point at the top for connection to the riser pipe. A production retainer valve shuts in the riser bore, and the annulus master valve shuts in the annulus. A crossover valve allows circulation of the riser after disconnection. References IADC Drilling Lexicon; API RP 17G, Recommended Practice for Completion/Workover Risers; API RP 96, Deepwater Well Design and Construction; GSO Engineering External links Oil wells
Emergency disconnect package
Chemistry
459
47,690,260
https://en.wikipedia.org/wiki/Giovanni%20Vignale
Giovanni Vignale is an Italian American physicist and Professor of Physics at the University of Missouri. Vignale is known for his work on density functional theory - a theoretical approach to the quantum many-body problem - and for several contributions to many-particle physics and spintronics. He is also the author of a monograph on the "Quantum Theory of the Electron Liquid" (with Gabriele F. Giuliani) and a book entitled "The Beautiful Invisible - Creativity, imagination, and theoretical physics". Life Vignale was born in Naples, Italy, in 1957 and studied physics at the Scuola Normale Superiore in Pisa, where he graduated in 1979. He completed his Ph.D. at Northwestern University in 1984, with a thesis on "Collective modes, effective interactions and superconductivity in the electron-hole liquid". He was a postdoctoral researcher at the Max Planck Institute for Solid State Research in Stuttgart, Germany, and at Oak Ridge National Laboratory in Oak Ridge, Tennessee, before joining the Department of Physics and Astronomy at the University of Missouri in 1988. He has been Curators' Professor of Physics at the University of Missouri since 2006 and a Fellow of the American Physical Society since 1997. Research contributions Vignale is known for his contributions to density functional theory. In 1987 he formulated, in collaboration with Mark Rasolt, the current density functional theory for electronic systems in the presence of a static magnetic field. In 1996 he developed, with Walter Kohn (Nobel Laureate in Chemistry, 1998), the time-dependent current density functional theory for electronic systems subjected to time-dependent electromagnetic fields. He is also known for his contributions to spintronics: in 2000, with Irene D'Amico, he introduced the concept of spin Coulomb drag (experimentally observed in 2005). In 2003 he proposed, with Michael E. Flatté of the University of Iowa, the theoretical concept for a unipolar spin diode and a unipolar spin transistor. Vignale is co-author (with Gabriele F. Giuliani) of a monograph on the quantum electron liquid, which is used by students and researchers for reference and self-study. In 2011 he published a non-technical book, "The Beautiful Invisible - Creativity, imagination, and theoretical physics", which presents theoretical physics as a form of art. In the introduction to this book he writes "A good scientific theory is like a symbolic tale, an allegory of reality. Its characters are abstractions that may not exist in reality; yet they give us a way of thinking more deeply about reality. Like a fine work of art, the theory creates its own world: it transforms reality into something else – an illusion perhaps, but an illusion that has more value than the literal fact." Literary work Giovanni Vignale is the author of several works of fiction and poetry. Some of his poems have been translated from English to Spanish by the renowned Cuban poet Juana Rosa Pita and are published in both languages in Time is Alive/El Tiempo Está Vivo. The dramatic quartet Odradek and Billy Bass Drink to the End of the World features four short plays patterned after the classic Japanese Noh drama form. About this book Juana Rosa Pita writes: "Suspended in space and time, his characters are pure abstractions, speaking devices, neither alive nor dead, in fact, not even human... Poetry, prose and drama conspire to make these plays an unforgettable reading experience". Prose Odradek and Billy Bass Drink to the End of the World. El Zunzun Viajero, 2018 Vite Scambiate.
Cultura Duemila Editrice, 1993 Poetry Time is Alive/El Tiempo Está Vivo. El Zunzun Viajero, 2019 References Northwestern University alumni University of Missouri physicists 21st-century American physicists Computational chemists 1957 births Living people Fellows of the American Physical Society
Giovanni Vignale
Chemistry
812
1,378,800
https://en.wikipedia.org/wiki/Efflorescence
In chemistry, efflorescence (which roughly means "the flowering" in French) is the migration of a salt to the surface of a porous material, where it forms a coating. The essential process involves the dissolving of an internally held salt in water or occasionally, in another solvent. The water, with the salt now held in solution, migrates to the surface, then evaporates, leaving a coating of the salt. In what has been described as "primary efflorescence", the water is the invader and the salt was already present internally, and a reverse process, where the salt is originally present externally and is then carried inside in solution, is referred to as "secondary efflorescence". Efflorescences can occur in natural and built environments. On porous construction materials it may present a cosmetic outer problem only (primary efflorescence causing staining), but can sometimes indicate internal structural weakness (migration/degradation of component materials). Efflorescence may clog the pores of porous materials, resulting in the destruction of those materials by internal water pressure, as seen in the spalling of brick. Examples A 5 molar concentration aqueous droplet of NaCl will spontaneously crystallize at 45% relative humidity (298 K) to form an NaCl cube by the mechanism of homogeneous nucleation. The original water is released to the gas phase. Gypsum (CaSO4.2H2O) is a hydrate solid that, in a sufficiently dry environment, will give up its water to the gas phase and form anhydrite (CaSO4). Copper(II) sulfate (bluestone) (CuSO4.5H2O) is a blue crystalline solid that when exposed to air, slowly loses water of crystallization from its surface to form a white layer of anhydrous copper(II) sulfate. Sodium carbonate decahydrate (Na2CO3.10H2O) will lose water when exposed to air. Masonry Primary efflorescence Primary efflorescence is named such, as it typically occurs during the initial cure of a cementitious product. It often occurs on masonry construction, particularly brick, as well as some firestop mortars, when water moving through a wall or other structure, or water being driven out as a result of the heat of hydration as cement stone is being formed, brings salts to the surface that are not commonly bound as part of the cement stone. As the water evaporates, it leaves the salt behind, which forms a white, fluffy deposit, that can normally be brushed off. The resulting white deposits are referred to as "efflorescence" in this instance. In this context efflorescence is sometimes referred to as "saltpetering." Since primary efflorescence brings out salts that are not ordinarily part of the cement stone, it is not a structural, but, rather, an aesthetic concern. For controlling primary efflorescence, formulations containing liquid fatty acid mixtures (e.g., oleic acid and linoleic acid) have commonly been used. The oily liquid admixture is introduced into the batch mix at an early stage by coating onto the sand particles prior to the introduction of any mix water, so that the oily admixture is distributed uniformly throughout the concrete batch mix. Secondary efflorescence Secondary efflorescence is named such as it does not occur as a result of the forming of the cement stone or its accompanying hydration products. Rather, it is usually due to the external influence of concrete poisons, such as chlorides. A very common example of where secondary efflorescence occurs is steel-reinforced concrete bridges as well as parking garages. 
Saline solutions are formed due to the presence of road salt in the winter. This saline solution is absorbed into the concrete, where it can begin to dissolve cement stone, which is of primary structural importance. Virtual stalactites can be formed in some cases as a result of dissolved cement stone, hanging off cracks in concrete structures. Where this process has taken hold, the structural integrity of a concrete element is at risk. This is a common traffic infrastructure and building maintenance concern. Secondary efflorescence is akin to osteoporosis of the concrete. For controlling secondary efflorescence, admixtures containing aqueous-based calcium stearate dispersion (CSD) are often added at a later stage of the batching process with the mix water. In a typical batching process, sand is first charged into the mixer, then the oil-based primary anti-efflorescence admixture is added with constant mixing to allow the oil to coat the sand. Then coarse aggregates, colorants, and cement are added, followed by water. If CSD is used, it is then introduced usually at this point during or after the addition of the mix water. CSD is an aqueous dispersion wherein fine solid particles of calcium stearate are suspended in the water uniformly. Commercially available CSD has an average particle size of about 1 to 10 micrometres. The uniform distribution of CSD in the mix may render the resulting concrete masonry unit water repellent, as CSD particles are well distributed in the pores of the unit to interfere with the capillary movement of water. Calthemite is also a secondary deposit derived from concrete, mortar or lime, which can be mistakenly assumed to be efflorescence. Calthemites are usually deposited as calcite which is the most stable polymorph of calcium carbonate (CaCO3). Protecting against efflorescence The only way to completely and permanently prevent (both primary and secondary) efflorescence in cementitious materials is by using special admixtures that chemically react with and bind the salt-based impurities in the concrete when hydrogen (H) is present. The chemical reaction in these special additives fuses the sodium chloride on a nanomolecular level, converting it into non-sodium chemicals and other harmless matter that will not leach out or migrate to the surface. In fact, the nanotechnology in these additives can be up to 100,000 times smaller than even the smallest cement particles, allowing their molecules to literally pass through cement minerals or sand particles and ultimately become part of the cement or sand with which they react. And since they require the presence of hydrogen they stop reacting as the concrete dries out and begin reacting again when the concrete is exposed to moisture. It is also possible to protect porous building materials such as brick, tiles, and concrete against efflorescence by treating the material with an impregnating, hydro-phobic sealer. This is a sealer that repels water and will penetrate deeply enough into the material to keep water and dissolved salts well away from the surface. However, in climates where freezing is a concern, such a sealer may lead to damage from freeze/thaw cycles. And while it will help to protect against efflorescence, it cannot permanently prevent the problem. Efflorescence can often be removed from concrete using phosphoric acid. After application the acid dilution is neutralised with mild diluted detergent, and then well rinsed with water. 
However, if the source of the water penetration is not addressed, efflorescence may reappear. Common rebar protective measures include the use of epoxy coating as well as the use of a slight electrical charge, both of which prevent rusting. One may also use stainless steel rebar. Certain cement types are less resistant to chlorides than others. The choice of cement, therefore, can have a large effect upon the concrete's reaction to chlorides. Today's water repellents help create a vapor-permeable barrier: liquid water, especially from wind-driven rain, will stay out of the brick and masonry, while water vapor from the interior of the building, or from the underside of pavers, can escape. This reduces the efflorescence, spalling and scaling that can occur when water is trapped inside the brick substrate and freezes during cold weather. Years ago, water repellents trapped moisture in the masonry wall, creating more problems than they solved; the trapped-moisture problem was much more severe in regions that experience all four seasons than in milder climates. See also Calthemite Hydrate Hygroscopy Our Lady of the Underpass References Chemical processes Concrete Masonry
Efflorescence
Chemistry,Engineering
1,785
23,117,441
https://en.wikipedia.org/wiki/Ungula
In solid geometry, an ungula is a region of a solid of revolution, cut off by a plane oblique to its base. A common instance is the spherical wedge. The term ungula refers to the hoof of a horse, an anatomical feature that defines a class of mammals called ungulates. The volume of an ungula of a cylinder was calculated by Grégoire de Saint Vincent. Two cylinders with equal radii and perpendicular axes intersect in four double ungulae. The bicylinder formed by the intersection had been measured by Archimedes in The Method of Mechanical Theorems, but the manuscript was lost until 1906. A historian of calculus described the role of the ungula in integral calculus: Grégoire himself was primarily concerned to illustrate by reference to the ungula that volumetric integration could be reduced, through the ductus in planum, to a consideration of geometric relations between the lines of plane figures. The ungula, however, proved a valuable source of inspiration for those who followed him, and who saw in it a means of representing and transforming integrals in many ingenious ways. Cylindrical ungula A cylindrical ungula of base radius r and height h has volume $V = \tfrac{2}{3} r^2 h$. Its total surface area is $A = \tfrac{1}{2}\pi r^2 + \tfrac{\pi}{2} r\sqrt{r^2+h^2} + 2rh$, the surface area of its curved sidewall is $A_s = 2rh$, and the surface area of its top (slanted roof) is $A_t = \tfrac{\pi}{2} r\sqrt{r^2+h^2}$. Proof Consider a cylinder $x^2 + y^2 = r^2$ bounded below by the plane $z = 0$ and above by the plane $z = ky$, where k is the slope of the slanted roof: $k = h/r$. Cutting up the volume into slices parallel to the y-axis, a differential slice at position x, shaped like a triangular prism, has volume $A(x)\,dx$, where $A(x) = \tfrac{1}{2}\sqrt{r^2-x^2}\cdot k\sqrt{r^2-x^2} = \tfrac{1}{2}k(r^2-x^2)$ is the area of a right triangle whose vertices are $(x,0,0)$, $(x,\sqrt{r^2-x^2},0)$, and $(x,\sqrt{r^2-x^2},k\sqrt{r^2-x^2})$, and whose base and height are thereby $\sqrt{r^2-x^2}$ and $k\sqrt{r^2-x^2}$, respectively. Then the volume of the whole cylindrical ungula is $V = \int_{-r}^{r}\tfrac{1}{2}k(r^2-x^2)\,dx = \tfrac{2}{3}kr^3$, which equals $\tfrac{2}{3}r^2h$ after substituting $rk = h$. A differential surface area of the curved side wall is $dA = k\sqrt{r^2-x^2}\,ds$, which area belongs to a nearly flat rectangle bounded by vertices $(x,\sqrt{r^2-x^2},0)$, $(x,\sqrt{r^2-x^2},k\sqrt{r^2-x^2})$, $(x+dx,\sqrt{r^2-(x+dx)^2},0)$, and $(x+dx,\sqrt{r^2-(x+dx)^2},k\sqrt{r^2-(x+dx)^2})$, and whose width and height are thereby $ds$ and (close enough to) $k\sqrt{r^2-x^2}$, respectively. Since $ds = \tfrac{r}{\sqrt{r^2-x^2}}\,dx$ along the circular rim, the integral yields $\int_{-r}^{r}kr\,dx$, so that the area of the wall is $2kr^2$, and substituting $rk = h$ yields $2rh$. The base of the cylindrical ungula has the surface area of half a circle of radius r: $\tfrac{1}{2}\pi r^2$, and the slanted top of the said ungula is a half-ellipse with semi-minor axis of length r and semi-major axis of length $r\sqrt{1+k^2}$, so that its area is $\tfrac{\pi}{2}r^2\sqrt{1+k^2}$ and substituting $rk = h$ yields $\tfrac{\pi}{2}r\sqrt{r^2+h^2}$. ∎ Note how the surface area of the side wall is related to the volume: such surface area being $2kr^2$, multiplying it by $dr$ gives the volume of a differential half-shell, whose integral is $\tfrac{2}{3}kr^3$, the volume. When the slope k equals 1 then such ungula is precisely one eighth of a bicylinder, whose volume is $\tfrac{16}{3}r^3$. One eighth of this is $\tfrac{2}{3}r^3$. Conical ungula A conical ungula of height h, base radius r, and upper flat surface slope k (if the semicircular base is at the bottom, on the plane z = 0) has volume where is the height of the cone from which the ungula has been cut out, and . The surface area of the curved sidewall is . As a consistency check, consider what happens when the height of the cone goes to infinity, so that the cone becomes a cylinder in the limit: so that , , and , which results agree with the cylindrical case. Proof Let a cone be described by $1 - \tfrac{\rho}{r} = \tfrac{z}{H}$ where r and H are constants and z and ρ are variables, with $0 \le z \le H$ and $0 \le \rho \le r$. Let the cone be cut by a plane $z = k\rho\sin\theta$. Substituting this z into the cone's equation, and solving for ρ yields $\rho_0 = \tfrac{rH}{H + kr\sin\theta}$ which for a given value of θ is the radial coordinate of the point common to both the plane and the cone that is farthest from the cone's axis along an angle θ from the x-axis.
The cylindrical height coordinate of this point is $z_0 = k\rho_0\sin\theta$. So along the direction of angle θ, a cross-section of the conical ungula looks like the triangle . Rotating this triangle by an angle $d\theta$ about the z-axis yields another triangle with , , substituted for , , and respectively, where and are functions of instead of . Since $d\theta$ is infinitesimal then and also vary infinitesimally from and , so for purposes of considering the volume of the differential trapezoidal pyramid, they may be considered equal. The differential trapezoidal pyramid has a trapezoidal base with a length at the base (of the cone) of , a length at the top of , and altitude , so the trapezoid has area . An altitude from the trapezoidal base to the point has length differentially close to . (This is an altitude of one of the side triangles of the trapezoidal pyramid.) The volume of the pyramid is one-third its base area times its altitudinal length, so the volume of the conical ungula is the integral of that: where Substituting the right hand side into the integral and doing some algebraic manipulation yields the formula for volume to be proven. For the sidewall: and the integral on the right-hand side simplifies to . ∎ As a consistency check, consider what happens when k goes to infinity; then the conical ungula should become a semi-cone, with half of the volume of a cone and half of the surface area of the curved wall of a cone. Surface area of top part When $k = H/r$, the "top part" (i.e., the flat face that is not semicircular like the base) has a parabolic shape and its surface area is . When $k < H/r$ then the top part has an elliptic shape (i.e., it is less than one-half of an ellipse) and its surface area is where , , , , and . When $k > H/r$ then the top part is a section of a hyperbola and its surface area is where , is as above, , , , , where the logarithm is natural, and . See also Spherical wedge Steinmetz solid References External links William Vogdes (1861) An Elementary Treatise on Mensuration and Practical Geometry via Google Books Geometric shapes Euclidean solid geometry
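The closed forms for the cylindrical case are easy to sanity-check numerically. A minimal Python sketch, assuming only the formulas derived above ($V = \tfrac{2}{3}r^2h$, sidewall $= 2rh$), verifying both by midpoint-rule integration:

```python
import math

def ungula_volume_numeric(r, h, n=100_000):
    """Midpoint-rule integral of A(x) = (1/2) k (r^2 - x^2) over [-r, r], k = h/r."""
    k = h / r
    dx = 2 * r / n
    xs = (-r + (i + 0.5) * dx for i in range(n))
    return sum(0.5 * k * (r * r - x * x) for x in xs) * dx

def ungula_wall_numeric(r, h, n=100_000):
    """Wall area: height k*r*sin(theta) along the rim, arc element ds = r dtheta."""
    k = h / r
    dt = math.pi / n
    ts = ((i + 0.5) * dt for i in range(n))
    return sum(k * r * math.sin(t) * r for t in ts) * dt

r, h = 3.0, 5.0
print(ungula_volume_numeric(r, h), (2 / 3) * r**2 * h)  # both ≈ 30.0
print(ungula_wall_numeric(r, h), 2 * r * h)             # both ≈ 30.0
```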
Ungula
Physics,Mathematics
1,273
36,392,695
https://en.wikipedia.org/wiki/USA-166
USA-166, also known as GPS IIR-8 and GPS SVN-56, is an American navigation satellite which forms part of the Global Positioning System. It was the eighth Block IIR GPS satellite to be launched, out of thirteen in the original configuration, and twenty-one overall. It was built by Lockheed Martin, using the AS-4000 satellite bus. USA-166 was launched at 18:06:00 UTC on 29 January 2003, atop a Delta II carrier rocket, flight number D295, flying in the 7925-9.5 configuration. The XSS-10 satellite was carried as a secondary payload on the same rocket, but was deployed from the second stage of the three-stage rocket. The launch took place from Space Launch Complex 17B at the Cape Canaveral Air Force Station, and placed USA-166 into a transfer orbit. The satellite raised itself into medium Earth orbit using a Star-37FM apogee motor. By 1 February 2003, USA-166 was in an orbit with a perigee of , an apogee of , a period of 720.7 minutes, and 55 degrees of inclination to the equator. It is used to broadcast the PRN 16 signal, and operates in slot 1 of plane B of the GPS constellation. The satellite has a mass of , and a design life of 10 years. As of 2012 it remains in service. References Spacecraft launched in 2003 GPS satellites USA satellites
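As a quick consistency check on the quoted 720.7-minute period, Kepler's third law recovers the size of the orbit. A short Python sketch; the gravitational parameter of Earth used below is general knowledge, not taken from the article:

```python
import math

MU_EARTH = 3.986004418e14  # m^3/s^2, standard gravitational parameter of Earth

def semi_major_axis(period_seconds):
    """Kepler's third law: a = (mu * T^2 / (4 pi^2))^(1/3)."""
    return (MU_EARTH * period_seconds**2 / (4 * math.pi**2)) ** (1 / 3)

T = 720.7 * 60  # the article's 720.7-minute period, in seconds
a = semi_major_axis(T)
print(f"semi-major axis ≈ {a / 1000:.0f} km")
# ≈ 26,600 km from Earth's centre, i.e. roughly 20,250 km altitude:
# the medium Earth orbit characteristic of GPS satellites.
```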
USA-166
Technology
295
50,344,789
https://en.wikipedia.org/wiki/Stemettes
Stemettes is a social enterprise which encourages girls and young women aged 5–25 to pursue careers in Science, Technology, Engineering, and Maths (STEM). Stemettes runs panel events, hackathons, the Student to Stemette mentoring programme supported by Deutsche Bank, Outbox Incubator and an app, OtotheB, an online platform for girls interested in STEM and entrepreneurship. History Stemettes was started in 2013 by British mathematics and computing child prodigy Anne-Marie Imafidon. In 2015, Jacquelyn Guderley became co-founder of Stemettes alongside Imafidon. Stemettes has partnered with organisations including Deutsche Bank, Salesforce, Accenture, Bank of America Merrill Lynch, BP and Microsoft. The organisation is regularly called upon by the UK Government and European Commission to consult on matters related to women in STEM. Outbox Incubator For six weeks from July–August 2015, Stemettes ran Outbox Incubator in London. This residential business incubator for girls with STEM start-ups became known as the "X-men house for girls". In February 2016, the Outbox Incubator spin-off app, OtotheB, was officially launched. The app is an online platform for girls interested in STEM and entrepreneurship. The app has been well received by some STEM figures such as academic and campaigner Sue Black, who described the app as a "…fantastic new resource for young women interested in technology". Awards and recognition Stemettes was named as European Digital Impact Organisation of the Year in October 2014 by the Digital Leadership Institute. References External links Outbox Incubator website Student to Stemette website OtotheB website Social enterprises Women in technology Computer science education Diversity in computing Women and science Organizations for women in science and technology Digital divide
Stemettes
Technology
364
25,185,313
https://en.wikipedia.org/wiki/Asher%20yatzar
Asher yatzar ( "Who has formed man") is a blessing in Judaism. It is recited after one engages in an act of excretion or urination, but is also included in many Jewish prayer books as a part of daily prayer prior to birkot hashachar. The purpose of this blessing is to thank God for good health. It expresses thanks for having the ability to excrete, for without it existence would be impossible. Though recited normally by observant Jews each time excretory functions are used, it is also recited during the Shacharit service due to its spiritual significance (to Jews, humans are made in God's image, so it is an expression of awe toward God's creations). Sources The obligation to recite a blessing upon leaving the bathroom could be traced to the following passage in Berachot (60b): Abaye objects to saying the above, and suggests one should recite something else prior to relieving oneself as well as recite a blessing similar to Asher Yatzar upon exiting the latrine. A dispute over what the conclusion (chasima) of the blessing should be is recorded: The Halakha follows Rav Papa. Process After completing urination or defecation and upon leaving the bathroom, the person washes their hands. According to Jewish etiquette, this should be done outside the bathroom, but if there is no source of water available outside the bathroom, it is permissible to wash one's hands inside the bathroom, then dry them outside; some are lenient in modern bathrooms to wash in the bathroom, as our bathrooms are much more clean than the outhouses of the olden days. No al netilat yadayim blessing is recited for the handwashing. Following the washing and drying of one's hands, the asher yatzar blessing is recited. Text English [Presented in Nusach Sfarad; see footnotes for other Nuschaot] "Blessed are You, Adonai, our God, King of the universe, Who formed man with wisdom and created within him many openings and . , or would be sealed, it would be impossible to survive . Blessed are You, Adonai, Who heals all flesh and acts wondrously." Hebrew [Presented in Nusach Sfarad; see footnotes for other Nuschaot] " People with medical issues There is no consensus as to whether or not (or how often) a person with medical issues should recite Asher Yatzar'': A person with incontinence should recite the blessing after urination, even if it is involuntary One who has no bowel or bladder control does not recite the blessing at all One who, as a result of medication feels an interrupted need to urinate, should recite the blessing a single time after they have emptied their bladder One who has a urinary catheter is considered to engage in a single act of urination lasting the entire day, so the catheter's wearer should recite the blessing once in the morning with the intent that it apply to all urination for the entire day. One who has diarrhea should recite the blessing after each instance of diarrhea One who has taken a laxative should not recite the blessing until the laxative has taken effect One whose mind is not completely settled due to illness is exempt See also List of Jewish prayers References External links Asher Yatzar prayer in both Hebrew and English from the Open Source Sefaria project Excretion Jewish prayer and ritual texts Jewish blessings Hebrew words and phrases in Jewish prayers and blessings Hebrew words and phrases in Jewish law
Asher yatzar
Biology
757
16,677,038
https://en.wikipedia.org/wiki/List%20of%20Slovenian%20astronomers
A list of notable astronomers from Slovenia: A Anton Ambschel (1749–1821) B Franc Breckerfeld (1681–1744) Silvo Breskvar (1902–1969) Č Andrej Čadež (born 1942) Lavo Čermelj (1889–1980) G Andreja Gomboc (born 1969) Pavel Grošelj (1883–1940) Gabriel Gruber (1740–1805) H Ferdinand Augustin Hallerstein (1703–1774) K Josip Križan (1841–1921) Pavel Kunaver (1889–1988) P Marijan Prosen (born 1937) S Uroš Seljak (born 1966) V Jurij Vega (1754–1802) References Astronomers Slovenia
List of Slovenian astronomers
Astronomy
157
21,572,219
https://en.wikipedia.org/wiki/Bosque%20el%20Nixticuil
The Bosque el Nixticuil (Nixticuil Forest) is an old-growth forest located northwest of the Metropolitan Zone of Guadalajara in the Mexican town of Zapopan. An urban forest, it is encroached upon by the metropolitan area's constant growth. It is mostly composed of oak, holm oak and pine. It is a remnant of a larger, now vanished, forest of more than 27,000 hectares. Its name comes from a local natural promontory called El Nixticuil. Features Extent The forest stretches over 1,860 hectares, of which 1,591 have been established as a protected area (NPA) under the category of municipal watershed protection area (Área Municipal de Protección Hidrológica). The areas covered by the forest protection decree cover Nixticuil, the Cerro del Diente and the community of San Esteban, which form part of the Río Blanco watershed. Fauna and flora The forest's wildlife includes coyote, fox, skunk, rabbit, opossum, various species of rodents and birds, reptiles, amphibians and insects. Various bird counts have been conducted in the forest, with one study by the University of Guadalajara reporting 107 distinct avian species (two rare, seven threatened and one under special protection). In addition to oaks, pines and holm oaks, there is also a great biodiversity of herbs and shrubs such as kidneywood, copal, mallow, mugwort, foxtail, and other trees such as willow, the amate fig, and the cat's claw (tepame) and needle bush (huisache) acacias. It is one of three known habitats of Styrax jaliscana, a white-flowering shrub in the Styrax family. Threats This forest is currently threatened for several reasons. Environmental organizations such as the Comité Salvabosque de El Nixticuil and the Comité Salvabosque Tigre II have decided to undertake actions to protect it. Protected status In 2005, neighbors and activists petitioned the city of Zapopan to grant the official status of "Protected Area" to 30 hectares of forest. This came after 2004's rainy season, when the ground gave way in the Nextipac community, affecting its inhabitants. At that point, the Zapopan city government intended to relocate those residents to a forested area adjacent to Tigre II, with construction scheduled for 2005. Those already living in Tigre II who wanted to avoid the destruction of the forest, and a significant number of Nextipac residents who refused their forced removal, united to oppose this project. Following this, the work was suspended and negotiations began. These negotiations continued until the city government decided without warning to retake and expand the territory designated for construction. Thereafter, protests renewed, and on May 18, 2005, the protesters managed to stop the work again, by which time more than 300 oak trees had been cut down. Together with the Zapopan municipal government's plan to construct public housing on 5 hectares of forest, various other public and private developments have been proposed within Nixticuil Forest. The development sponsored by the Villa de los Niños Association consists of the construction of a building complex, while the Autonomous University of Guadalajara (UAG) presented a project to construct a "University Science and Technology Research Park". By the end of 2006 the Zapopan government proposed the designation of Nixticuil Forest as a natural protected area. Nevertheless, the Jalisco State Legislature did not act until February 19, 2008 to approve the protected status of 1,591 hectares, which presupposed the freezing of the various municipal projects within Nixticuil Forest.
In June 2007, Comité Salvabosque Tigre II, a community organization, presented a complaint before the Federal Prosecutor for Environmental Protection (PROFEPA) to terminate development, alleging irregularities in the granting of permits. On March 18, this same group also filed a complaint with the Jalisco State Human Rights Commission (CEDHJ). The Comité Salvabosque decries, in addition to the private development plans, continuing pressure from real estate interests, as well as anomalies in the act creating the protected area which it alleges favor private interests, including the Leaño family, the UAG's landowners. Wildfires Fires are one of the greatest threats faced by this forest. According to environmental groups that protect the forest, some of these fires have been intentional, motivated by economic interests. These fires have had a serious impact on the flora and fauna and have accelerated the pace of urban growth in the municipality of Zapopan. Logging Although illegal, logging is over the long term the main cause of the disappearance of flora and fauna, and it has increased in severity with encroaching urbanization. References Old-growth forests Protected areas of Jalisco Forests of Mexico Geography of Jalisco Zapopan
Bosque el Nixticuil
Biology
1,007
25,838,521
https://en.wikipedia.org/wiki/National%20Aerospace%20University%20%E2%80%93%20Kharkiv%20Aviation%20Institute
National Aerospace University – "Kharkiv Aviation Institute", NAU "KhAI" (, ХАІ; KhAI) is a university in Kharkiv, Ukraine, which specializes in aviation and space engineering. The KhAI was founded in 1930. History The NAU "KhAI" was founded in 1930 on the basis of the aviation division of the Kharkiv Polytechnic Institute. In 1941–44 it was evacuated to Kazan. Its history is closely connected with the development of aircraft engineering and science in the Soviet Union. The university is known for creating Europe's first high-speed airplane with retractable landing gear, and for the turbojet engine design developed by KhAI teacher A. M. Liulka, who afterwards became an academician and the designer of many aircraft engines, including the engine of the Su-27. The KhAI is a unique higher educational institution in that airplanes developed by the Institute Design Bureau under the supervision of Professor I. G. Neman were serially produced at aircraft plants and flown on passenger airlines. From 1977 to 1984 the Designer General O. K. Antonov ran the department of airplane structure at the KhAI. In 1978 the KhAI was given the name of N. Ye. Zhukovskiy. In 1980 the institute was awarded the Order of Lenin. In 1998 the N. Ye. Zhukovskiy State Aerospace University "Kharkiv Aviation Institute" was founded on the basis of the KhAI, and in 2000 the university was granted the status of a national higher education institution and was renamed the National Aerospace University "Kharkiv Aviation Institute". Students The university has trained about 53,000 engineers. More than 80% of the specialists with higher education who work in the Ukrainian aerospace area are graduates of the NAU KhAI. At present about 7,000 students and 160 post-graduate students are trained at the university; 700 teachers and 2,000 employees work here. Among them there are 95 professors and doctors of science, and more than 400 associate professors and candidates of sciences. Among the teachers of the university there are 1 USSR State Lenin Prize winner, 3 USSR State Prize winners, 25 Ukraine State Prize winners, and 11 USSR Council of Ministers Prize winners. In 1992 the KhAI resumed the training of foreign students. Over 1,000 foreign citizens from 40 countries of Asia, Africa and America are trained annually at the university. Faculties Faculty of Aircraft Engineering Faculty of Aircraft Engines Faculty of Aircraft Control Systems Faculty of Rocket and Space Engineering Faculty of Aircraft Radio-electronic Systems Faculty of Economics and Management Faculty of Humanities Faculty for International Students Programmes Bachelor's degree – 4 years Specialist Degree – 1/1.5 years (depending on the specific training programme) after B.Sc. Master's degree – 1.5/2 years (depending on the specific training programme) after B.Sc. Candidate of Science (Ph.D.) – 3 years after M.Sc. or Specialist Degree Doctor of Science – 3 years after Ph.D. Languages The majority of NAU KhAI training programmes are provided in Ukrainian and Russian, but NAU KhAI also has a set of International Oriented Studies (IOS) Bachelor's and Master's of Science programmes in English (the full list can be found on the official website). For international students NAU KhAI offers a one-month training programme at the Preparatory Department in preparation for further study at the university in Russian.
Memberships and programs European and Global International Associations and Communities (International Association of Universities (IAU/UNESCO), European Group of Aeronautics and Space Universities (PEGASUS), The Magna Charta of the European Universities, International Association of Technical Universities from CIS Countries, Academic Association of CIS Countries Higher Education, European Aeronautic Science Network (EASN)) Bilateral cooperation (Technion Research and Development Foundation LTD (Israel)) EU Framework Programmes TEMPUS Programme Dual Degree Programmes with other universities Working and Training Abroad for KhAI Students and Staff Educational Programmes for Foreign Citizens Aircraft designs Kharkiv KhAI-1 Kharkiv KhAI-2 (1932) Kharkiv KhAI-2 (1934) Kharkiv KhAI-2 (1937) Kharkiv KhAI-4 Iskra Kharkiv KhAI-5 (R-10/PS-5) Kharkiv KhAI-6 Kharkiv KhAI-8 Kharkiv KhAI-11 Kharkiv KhAI-12 Start Kharkiv KhAI-13 Gymnast Kharkiv KhAI-14 Orlyonok Kharkiv KhAI-17 Kharkiv KhAI-18 Kharkiv KhAI-19 Kharkiv KhAI-20 Kharkiv KhAI-21 Kharkiv KhAI-22 Kharkiv KhAI-23 Kharkiv KhAI-24 Kharkiv KhAI-25 Kharkiv KhAI-25 Kharkiv KhAI-26 Kharkiv KhAI-27 Kharkovchanin Kharkiv KhAI-28 Kharkiv KhAI-29 Korshun Kharkiv KhAI-30 Professor Nyeman Kharkiv KhAI-31 Kharkiv KhAI-32 Kharkiv KhAI-33 Kharkiv KhAI-34 Kharkiv KhAI-35 Entuziast Kharkiv KhAI-36 Kharkiv KhAI-37 Mikhail Efimov Kharkiv KhAI-38 Kharkiv KhAI-39 Kharkiv KhAI-40 Kharkiv KhAI-41 Kharkiv KhAI-42 Kharkiv KhAI-43 Kharkiv KhAI-44 Kharkiv KhAI-45 Kharkiv KhAI-46 Kharkiv KhAI-47 Kharkiv KhAI-48 Kharkiv KhAI-49 Kharkiv KhAI-51 Kharkiv KhAI-52 Kharkiv KhAI-60 (60 let KhAI) Kharkiv KhAI-70 Kharkiv KhAI-80 Kharkiv KhAI-90 Kharkiv KhAI-92 Kharkiv KhAI-112 Kharkiv KhAI-150 References External links Official website for International students Research cooperation Aerospace Research and Education at NAU KhAI Technical universities and colleges in Ukraine Aerospace engineering organizations Universities and colleges established in 1930 Buildings and structures in Kharkiv Universities and colleges in Kharkiv 1930 establishments in Ukraine Aviation schools in Ukraine Kharkiv Polytechnic Institute Kyivskyi District (Kharkiv) National universities in Ukraine Universities and institutes established in the Soviet Union Aviation in the Soviet Union Institutions with the title of National in Ukraine
National Aerospace University – Kharkiv Aviation Institute
Engineering
1,405
24,508,935
https://en.wikipedia.org/wiki/Gymnopilus%20ludovicianus
Gymnopilus ludovicianus is a species of mushroom in the family Hymenogastraceae. See also List of Gymnopilus species External links Gymnopilus ludovicianus at Index Fungorum ludovicianus Fungi of North America Taxa named by William Alphonso Murrill Fungus species
Gymnopilus ludovicianus
Biology
72
38,351,608
https://en.wikipedia.org/wiki/Diiminopyridine
Diiminopyridines (DIP, also known a pyridine diimines, PDIs) are a class of diimine ligands. They featuring a pyridine nucleus with imine sidearms appended to the 2,6–positions. The three nitrogen centres bind metals in a tridentate fashion, forming pincer complexes. Diiminopyridines are notable as non-innocent ligand that can assume more than one oxidation state. Complexes of DIPs participate in a range of chemical reactions, including ethylene polymerization, hydrosilylation, and hydrogenation. Synthesis and properties of DIP ligands Many DIPs have been prepared. They are synthesized by Schiff base condensation of commercially available 2,6-diacetylpyridine or 2,6-diformylpyridine with two equivalents of substituted anilines. Using substituted anilines, complexes one can obtain DIPs with diverse steric environments. Commonly used bulky anilines are 2,4,6-trimethylaniline and 2,6-diisopropylaniline. Unsymmetric variations have been established by successive condensation of different anilines. The dicarbonyl portion of the backbone can be further modified, as with 2,6-dipyridecarboxaldehyde and 2,6-dibenzoylpyridine. Most commonly, variations in the DIP arise from changes in the anilines. Effect of steric bulk Depending on its steric bulk, DIP ligands form complexes of 2:1 and 1:1 ratios, M(DIP)Lx and M(DIP)2, respectively. The 2:1 complexes occur for unhindered DIP ligands. Although such complexes are coordinatively saturated, they have been studied for their electronic and structural properties. Formation of 2:1 complexation is suppressed with bulky DIP ligands. Complexes of the type M(DIP)Ln exhibit diverse reactivity. Fe and Co complexes The reduction of the Fe(II)(DIP)X2 with sodium amalgam under nitrogen yields a square-pyramidal bis(nitrogen) complex Fe(II)(DIP)(N2)2. This complex is a precursor to other derivatives by exchange of the dinitrogen ligands, e.g. with H2 and CO, to give the monohydrogen or dicarbonyl complexes. Arylazides give imido complexes. Fe(DIP)(N2)2 is a precursor to highly active catalysts for hydrosilylation and hydrogenation reactions. Dissociation of N2 from Fe(DIP)(N2)2 results in binding of the anilino arene in an η6-fashion. This binding mode may play a role in the catalytic hydrogenation cycle. The reactivity of cobalt- and iron-DIP complexes are similar. Cobalt DIP complexes with azide ligands have been shown to lose N2 to give reactive nitrido complexes that undergo C-H activation of benzylic sites of the aryl substituents. The resulting cyclometalated amide adopt a roughly planar geometry. Noninnocence of DIP complexes The highly conjugated ligand framework of bis(imino)pyridine stabilizes metals in unusual oxidation states. The ability of the neutral complex to accept up to three electrons leads to ambiguity about the oxidation states of the metal center. The complex Fe(DIP)(N2)2 complex is ostensibly a 18e complex, consisting of Fe(0) with five 2-electron ligands. Mössbauer spectroscopy indicates, however, that this complex is better described as a ferrous derivative of DIP2−. This assignment is corroborated by the high frequency of the νNN vibration in the infrared spectrum, which is more consistent with Fe(II). Thus, reduction of Fe(DIP)Br2 is ligand-centered, not Fe-centered. This non-innocent behavior allows iron-DIP complexes to participate in 2e redox reactions, which is a pattern more usually seen for complexes of platinum group metals. 
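The Schiff-base condensation route described above lends itself to a quick structural sketch. Below is a minimal Python example using RDKit; the SMILES strings are my own renderings of 2,6-diacetylpyridine and the parent N-phenyl condensation product, written for illustration rather than taken from the article:

```python
from rdkit import Chem  # pip install rdkit

# 2,6-diacetylpyridine: the commercial dicarbonyl starting material
diketone = Chem.MolFromSmiles("CC(=O)c1cccc(C(C)=O)n1")

# Condensation with two equivalents of aniline replaces both C=O with C=N-Ar,
# giving the parent N-phenyl diiminopyridine (assumed structure for the sketch)
dip = Chem.MolFromSmiles("CC(=Nc1ccccc1)c2cccc(n2)C(C)=Nc3ccccc3")

for name, mol in [("2,6-diacetylpyridine", diketone), ("N-phenyl DIP", dip)]:
    print(name, "-> parsed OK,", mol.GetNumAtoms(), "heavy atoms")

# The three nitrogen donors (pyridine N plus two imine N) that chelate a metal:
n_donors = [a.GetIdx() for a in dip.GetAtoms() if a.GetSymbol() == "N"]
print("nitrogen donor count:", len(n_donors))  # 3, consistent with a tridentate pincer
```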
Catalytic reactions of M-DIP complexes The catalytic properties of DIP complexes of Fe, Co, and Ni have attracted much attention. In principle, catalysts derived from "base metals" are preferred to noble-transition-metal catalysts due to lower environmental impact and cost effectiveness. Furthermore, owing to its modular synthesis, the DIP ligand is easily modified, allowing diversity in ligand screening. Complexes of the type M(DIP)Xn serve as precatalysts for ethylene polymerization. The precatalysts are activated by treatment with methylaluminoxane (MAO), which serves as a co-catalyst. Activities for 2,6-bis(imino)pyridine iron complexes are often comparable to or greater than those of group 4 metallocenes. The aryl substituents greatly affect the products. Small aryl substituents allow for highly selective production of oligomeric α-olefins, whereas bulky groups provide strictly linear, high-molecular-weight polyethylene. Silica-supported and homogeneous catalysts have been reported. Traditionally catalyzed by Pt and other precious metals, hydrosilylation is also catalyzed by Fe-DIP complexes. Reactions proceed under mild conditions, show anti-Markovnikov selectivity, and tolerate diverse functional groups. Depending on the steric properties of the ligand, Fe-DIP complexes catalyze hydrogenation of terminal olefins. Variations of DIP ligands In N-heterocyclic carbene variations of the diiminopyridine ligand, the pyridine or the imine substituents are replaced with an NHC group. The aryl-substituted bis(imino)NHC variants give tridentate ligands, while the pyridine-exchanged NHC forms exclusively bidentate complexes. This is presumably due to the additional strain from the five-membered ring of the central carbene. References Coordination complexes Tridentate ligands Pyridines Imines
Diiminopyridine
Chemistry
1,260
2,642,299
https://en.wikipedia.org/wiki/Freepost
Freepost is a postal service provided by various postal administrations, whereby a person sends mail without affixing postage, and the recipient pays the postage when collecting the mail. Freepost differs from self-addressed stamped envelopes, courtesy reply mail, and metered reply mail in that the recipient of the freepost pays only for those items that are actually received, rather than for all that are distributed. Freepost of preprinted cards issued by businesses is also different from postal stationery sold by postal administrations. Uses In one use of freepost, a business sends bulk mail to potential customers, the bulk mail including envelopes or postcards that potential customers can return to the business by freepost. In another use, magazines include subscription cards that potential subscribers can return by freepost. In another use, a seller can provide a merchandise return label bearing the appropriate freepost indicia (as described below) to a customer so that the customer can return the item to the seller by freepost upon issuance of a Return Merchandise Authorization. A non-commercial use would be to return lost items belonging to a business. The item will have printed on the back "if found please return by freepost to <address>". For example, a UK NHS worker's RFID access card can be returned by freepost if lost and found. By country Australia In Australia, freepost is called Reply Paid. Specially printed envelopes are used, with the permit holder's address and the words "Reply Paid" with an authorization number. The stamp is replaced by three vertical black stripes and a postal barcode. The permit holder pays the postage plus a fee to the postal authority. The customer may also write the Reply Paid envelope out by hand. The delivery address is printed in the top left-hand corner of the envelope, below a notice reading "No stamp required if posted in Australia", for example: Delivery Address: PO Box 1435, ALEXANDRIA NSW 1435. The Reply Paid number may be the same as the postcode; an important customer could have an RP number matching both the postcode and the delivery PO Box, to minimize errors even more: Red Cross, Reply Paid 1435, ALEXANDRIA NSW 1435. Canada To coordinate service with the United States, Canada Post uses the same terminology and the same standards as the USPS (as explained below), with the exception of the use of Canadian postal codes. United States In the United States, the United States Postal Service refers to freepost as business reply mail. A mailer wishing to receive mail by freepost must obtain a business reply permit and design the envelopes, postcards, or labels according to the standards specified by the USPS, including the use of an appropriate FIM B or C code. The address on the envelope, postcard, or label is the same as the address for regular mail, except that the ZIP+4 code is different. In some large cities, business reply mail has its own five-digit ZIP code or codes (e.g., 20077 and 20078 in Washington, D.C.). The envelope or postcard also includes space for the business reply permit number. United Kingdom In the United Kingdom, Royal Mail offers a variety of services. The most expensive service, Freepost Name, enables a customer to purchase a licence to a name which allows the public to send mail to the organisation free of charge using any envelope, with the Freepost Name handwritten or printed. Only the licensed name is required on the envelope, not the postal address.
In addition, Royal Mail offers a range of business reply services enabling a business to provide their customers with pre-printed envelopes in order to send mail to the business free of charge, including: Freepost Standard, Freepost Plus, Business Reply Standard, Business Reply Plus. New Zealand In New Zealand, freepost envelopes can be hand-addressed by the sender; this simply requires including the word "FREEPOST" and the recipient's name or permit number as part of the address. (This technique cannot be used to send mail to just anyone, who would then have to pay for it; the permit number and the recipient address must match for the recipient to be billed.) The Netherlands In the Netherlands, freepost is addressed exactly the same as normal mail. The recipient needs a special address: an answering number (antwoordnummer in Dutch). The sender can recognize these addresses because they include the word "antwoordnummer", and may choose whether or not to use a stamp. When no stamp is used, the recipient pays the costs plus a fee, but when the sender uses a stamp, no extra costs are incurred. Finland In Finland, there are two types of freepost. The first is used when a company has sent an order (for example, from an e-commerce store) and the customer wants to return it: the customer can use the same envelope the company sent and simply write PALAUTUS or ASIAKASPALAUTUS ('RETURN' or 'CUSTOMER RETURN'). The company must have a contract with the Finnish postal service Itella. The other option is to use the following form: Company name Tunnus: 12345 00003 VASTAUSLÄHETYS where tunnus means 'code' and vastauslähetys means 'return delivery'. The postal code is always the same for these deliveries. There is usually some text in the upper-right corner (where a stamp would otherwise be located) indicating that postage is paid. Companies can, however, design this text or image themselves to match the company logo instead of using the standard one. The company must also have a contract with Itella. The address may be handwritten by the customer. Germany Where the stamp would normally be affixed on the envelope, a box with the words "Porto zahlt Empfänger" ("postage paid by recipient") or "Bitte frankieren, falls Marke zur Hand" ("please place a stamp if one is at hand") is printed. The recipient then has the choice to accept or refuse the envelope upon delivery. The sentence "Porto zahlt Empfänger" can also be written on the envelope by the sender if they do not want to pay for the postage, though the recipient is not forced to accept and pay for the postage, so the system cannot be misused by senders. If neither the sender nor the recipient wants to pay for the postage, the envelope and its contents will be destroyed. More generally Other countries use freepost as well, although the envelope designs required by those countries' postal authorities differ widely from those described above. A freepost address may have a special freepost number for use along with, or instead of, the address for regular mail. International Business Reply Service (IBRS) International freepost also exists and is known variously as 'International Business Reply Service', 'International Business Reply Mail', 'International Business Response Service', 'IBRS' and, in French, Correspondance Commerciale-Réponse Internationale (CCRI).
Like USPS business reply mail, international business reply mail must conform to certain format requirements, including the prominent notice "REPONSE PAYEE" (French for "reply paid"), and a number indicating the account that will pay for the postage. International Business Reply Service is a convenient way for international customers to reply to the sender with pre-paid cards and envelopes, at no cost to them. See also Toll-free telephone number References External links Information on reply paid from Australia Post Information on business reply mail from the USPS Return and Reply Mail (Mailpiece Quality Control Program, chapter 9), a USPS document showing the design of business reply mail in the United States Postal systems Philatelic terminology
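The national formats described above are regular enough to template. A small Python sketch follows; the function names and sample data are invented for illustration, and the layouts simply follow the Australian and Finnish examples quoted earlier, not any official specification:

```python
def reply_paid_au(name, rp_number, locality):
    """Render an Australian Reply Paid address block (layout per the text above)."""
    return f"{name}\nReply Paid {rp_number}\n{locality}"

def vastauslahetys_fi(company, tunnus):
    """Render a Finnish freepost block; 00003 is the fixed postcode in the
    article's example of these return deliveries."""
    return f"{company}\nTunnus: {tunnus}\n00003 VASTAUSLÄHETYS"

print(reply_paid_au("Red Cross", 1435, "ALEXANDRIA NSW 1435"))
print()
print(vastauslahetys_fi("Company name", 12345))
```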
Freepost
Technology
1,601
4,301,727
https://en.wikipedia.org/wiki/Khan%20Research%20Laboratories
The Dr. A. Q. Khan Research Laboratories (shortened as KRL) is a federally funded research and development laboratory located in Kahuta, a short distance from Rawalpindi in Punjab, Pakistan. Established in 1976, the laboratory is best known for its central role in Pakistan's nuclear weapons program and its contributions to nuclear science. It was originally organized as a top-secret plant dedicated to uranium enrichment, in response to India's detonation of its first nuclear bomb in 1974, and the site was chosen for its remote yet relatively accessible location near Rawalpindi. In the 1970s, the site was the cornerstone of the first stage of Pakistan's atomic bomb program, and it serves as a center for conducting nuclear scientific research. It is globally known for its research on gas centrifuges to produce enriched uranium; in the past it competed with PINSTECH on a wide variety of weapon designs, but it has since focused on civilian missions, including national security, fusion science and supercomputing. History In the aftermath of India's first nuclear test, the Pakistan Atomic Energy Commission (PAEC) launched studies on isotope separation by the gaseous method, setting up the plant as Project-706 under Bashiruddin Mahmood, a nuclear engineer, in 1974. In 1976, the difficulties encountered in the preliminary studies under Mahmood on understanding the equation of state of uranium indicated the need for a laboratory dedicated solely to that purpose. Work on establishing the laboratory was initiated by the army's Engineer-in-Chief, who selected Brigadier Zahid Ali Akbar to conduct the topographic survey. Because the experiments were deemed too dangerous to conduct in a major city, Brig. Akbar felt the need to move the operations to an isolated and remote mountainous area and selected Kahuta, a short distance from Rawalpindi. On 31 July 1976, the laboratory was established as Engineering Research Laboratories (ERL) with Abdul Qadeer Khan as its principal investigator. Officers and personnel from the Corps of Electrical and Mechanical Engineering (EME) were central in supporting the operations of the lab, with Major-General Ali Nawab acting as its principal engineer in 1979. More broadly, the ERL was intended to spur innovation and provide competition in weapon design, with the second lab in Nilore running under the PAEC's contract. Under Abdul Qadeer Khan, the work on the equation of state of uranium began with Drs. G. D. Alam, Tasneem Shah, and Anwar Ali, who served as co-principal investigators at Kahuta. Initially, a large number of centrifuges were deployed, but they were scaled down to a few centrifuges after revised critical-mass calculations on the equation of state of uranium by Abdul Qadeer Khan and his co-investigators in the 1980s. Its official name was changed to Khan Research Laboratories (KRL) in 1981 by Presidential decree, which also granted its status as a national defense laboratory. Uranium research Globally, the KRL has a reputation for conducting research into the properties and behavior of uranium: how uranium is scaled from industrial to weapon grade, and how its equation of state changes under extreme pressure and temperature. For such investigations, principal investigators employed the Zippe method (local designation: Khan Centrifuge), with centrifuges running at about 100,000 rpm continuously for an average of 10 years. Uranium (92U) is a naturally occurring element that can be found at low levels within all rock, soil, and water.
Natural uranium (Unat) consists of three isotopes: uranium-238 (U238), with a natural abundance of 99.28%; uranium-235 (U235), the isotope of interest for energy applications, present at only 0.71%; and uranium-234, with a proportion of 0.0054%. Uranium-235 is fissile, but at 0.71% it cannot sustain the chain reaction required in a nuclear weapon environment, which calls for roughly 90% U235 and proceeds as an uncontrolled reaction on a nanosecond timescale. For this purpose, gas centrifuge methods using vacuum technology were established, but the method took several years to master, and it was not until 1985 that highly enriched uranium (HEU) was first made available. Computer simulations and experiments on uranium are conducted by KRL to understand its structural, electrical, material, and chemical properties, as well as those of uranium alloys, and to determine how these materials change over time under differing temperatures and pressures. Negative publicity The laboratory has attracted negative publicity from a number of events, mainly due to its past research affiliation with North Korea and China. In 1996, the Clinton administration accused China of approving the tender released by the KRL for the acquisition of specially made magnetic rings for the suspension bearings mounted at the top of rotating centrifuge cylinders. In 1999, a visit to the laboratory by Saudi dignitaries accompanied by Sharif administration personnel garnered further negative publicity in the Western media and raised fears of proliferation in the Middle East. In 2003, the Pakistani nuclear physicist Abdul Qadeer Khan was accused of (and later pardoned for) passing classified information on gas centrifuge designs, taken from the lab's computers, to Libya, North Korea, Iran, and China in the 1980s. Extended research The academic research programs and development opportunities at the KRL are supported by the physics departments of the Government College University in Lahore, Punjab, and the University of Karachi in Sindh. The KRL supports its physics program through funding and scholarships for physics and engineering students at the Government College University. Continuing efforts to make the laboratories more scientifically productive led the Ministry of Science and Technology (MoST) to grant three research and fellowship programmes with the Government College University, with the support of the Pakistan Science Foundation (PSF). Since 1980, the KRL has continued to develop research work on computational mathematics, supercomputing and advanced mathematics, with extended applications to the natural sciences. In 1999, the KRL established a research institute on computer science at Kahuta, which was later integrated into the University of Engineering and Technology in Taxila. Civilian research on biotechnology, biology and genetic engineering is supported by the KRL at the University of Karachi, with support from the Pakistan Science Foundation. The KRL organized a conference on computational biology in Islamabad to present an overview of the scope of the computational sciences. National security and science program From 1976 until 1978, the lab depended heavily on the Urenco Group's method of developing the gas centrifuge, which reportedly suffered from incomplete mechanical parts and from differential-equation problems involving rotational dynamics about a fixed axis.
Dependence on the Zippe-type design lessened when more effective and innovative methods were developed, culminating from the studies conducted under Drs. A. Q. Khan, G. D. Alam and T. S. Shah. In Pakistan's MoD laboratory system, the KRL is a senior laboratory that executes missions relating to national security. Krytron technology was also built at the KRL and then transferred to Heavy Industries Taxila, an army laboratory based in Taxila. Besides understanding the equation of state of uranium, the KRL also embarked on pioneering work in vacuum science and its extended application to plasma physics; its first paper on plasma physics was written in 1998. In 1983, the KRL acquired its very first computer numerical control (CNC) machine from China for the machining of the high-strength ultracentrifuges used to process uranium hexafluoride (UF6) gas, which the KRL reduced to uranium metal and machined into weapon pits. In 1987, the KRL began publishing a series of academic articles on numerics and computational methods for centrifuge design, including a 1987 article co-authored by Abdul Qadeer Khan on techniques for balancing sophisticated ultracentrifuge rotors. In the 1990s, mathematicians at the KRL built the nation's first high-performance computing machines and the supercomputer that was installed at the facility. Subcritical experiments on weapon-grade uranium began when a parallel Computational Fluid Dynamics (CFD) division was established, specializing in high-performance computations of shock waves in weapons effects, from the outer surface to the inner core, using difficult differential equations of state for the materials used in the bomb under high pressure. The KRL was a major participant in the MoD's Hatf program (lit. "Target"). The lab served as a chief designer of the warheads, control systems, and rocket engines of the Hatf and Ghauri weapon systems. Hatf-I – first tested in 1989. Ghauri-I (Hatf V) – first tested in 1999. Ghauri-II – has a range of 2,000–2,500 km. Since the 1980s, the KRL has been involved in numerous military equipment and conventional weaponry development projects. The resulting systems have been put into service by Pakistan's military and exported to other friendly nations. The following is a list of known equipment produced under these projects: Guided missiles: Anza series MANPADS. Baktar-Shikan man-portable anti-tank guided missile (ATGM). Modules for the BGM-71 TOW ATGM. Electrical and electronic equipment: Power conditioners for the above missile systems. Switched-mode power supplies for the following air defence systems: LAADS radar, Skyguard radar, Air Defense Automation System. Equipment for clearance of anti-personnel and anti-tank mines, including remote control mine exploders (RCME) and mine-sweeping line charges. Laser equipment: Laser range-finders, laser warning receivers, laser aiming devices, a laser-actuated targeting system for training tank gunners. Reactive armour kits for armoured vehicles and APFSDS anti-tank ammunition for main battle tanks. Digital goniometers. Electronic Voting Machine (EVM) – the KRL entered into competition with the National Institute of Electronics (NIE) and Indra Sistemas of Spain to produce and demonstrate the usage of electronic voting machines. Eventually, the Election Commission of Pakistan awarded the contract to KRL for the final design and production of the electronic voting machines in 2018.
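The enrichment figures quoted earlier in the article (0.71% U235 in natural feed, roughly 90% in HEU) can be tied together with the standard separative-work calculation used throughout the enrichment literature. A short Python sketch; the 0.25% tails assay is an assumed, typical value and is not stated in the article:

```python
import math

def value_fn(x):
    """Standard separative potential V(x) = (2x - 1) * ln(x / (1 - x))."""
    return (2 * x - 1) * math.log(x / (1 - x))

def swu_per_kg_product(xp, xf=0.0071, xw=0.0025):
    """Separative work (kg SWU) and feed needed per kg of product at assay xp.

    xf = 0.71% natural feed (figure from the article); xw = 0.25% tails is an
    assumed, commonly used value -- it does not appear in the article.
    """
    feed = (xp - xw) / (xf - xw)   # kg feed per kg product (mass balance)
    waste = feed - 1.0             # kg tails per kg product
    swu = value_fn(xp) + waste * value_fn(xw) - feed * value_fn(xf)
    return swu, feed

swu, feed = swu_per_kg_product(0.90)  # the ~90% HEU mentioned in the text
print(f"~{swu:.0f} kg SWU and ~{feed:.0f} kg natural uranium feed per kg HEU")
# prints roughly "~208 kg SWU and ~195 kg natural uranium feed per kg HEU"
```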
Corporate management Contract changes The KRL is owned by the federal Government of Pakistan, which sponsors the laboratory through the Ministry of Defence as part of continuing efforts to make the laboratory more efficient. In its early years, the Corps of Engineers served as its first prime contractor, from 1976 until 1977. From 1977 until 1981, the Corps of Electrical and Mechanical Engineering served on the MoD's contract, with Maj-Gen. Ali Nawab overseeing the lab's operations. Since then, the lab's corporate leadership has been entrusted to civilian leadership through contracts awarded by the MoD. At the behest of the laboratory director, in 1981 the tender was opened to the University of Karachi and the Government College University to oversee its operations. The KRL's research and university affiliation with the University of Karachi continues to this day. With the formation of the federal National Command Authority in 2001, that agency took over the lab's business operations, awarding the role of KRL's prime contractor to the Strategic Plans Division, which has managed the lab's operations since 2004. In 2010, the Strategic Plans Division won its first contract with the Malaysian Armed Forces, when it was reported that the KRL was to act as a contractor for weapons exports to Malaysia through the Malaysian businessman Shah Hakim Zain. Notes References External links Kahuta Research Laboratories Global Security Report Pakistan developed more powerful centrifuges, Nucleonics Week, 29 January 2007 Plasma physics facilities Guided missile manufacturers
Khan Research Laboratories
Physics,Engineering
2,431
37,881,289
https://en.wikipedia.org/wiki/Potassium%20pentasulfide
Potassium pentasulfide is the inorganic compound with the formula K2S5. It is a red-orange solid that dissolves in water. The salt decomposes rapidly in air. It is one of several polysulfide salts with the general formula M2Sn, where M = Li, Na, K and n = 2, 3, 4, 5, 6. The polysulfide salts of potassium and sodium are similar. Preparation and reactions The salt is prepared by the addition of elemental sulfur to potassium sulfide. An idealized equation is shown for potassium hydrosulfide: 4 KHS + S8 → 2 K2S5 + 2 H2S The structure consists of zigzag chains of S52− anions paired with K+ cations. Occurrence Various polysulfides Sn2− are components of liver of sulfur. Polysulfides, like sulfides, can induce stress corrosion cracking in carbon steel and stainless steel. References Potassium compounds Polysulfides Inorganic compounds Chalcogenides
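The reconstructed equation can be machine-checked by atom counting. A small Python sketch (the formula parser below is a simplified helper written for this example; it handles flat formulas like these but not parentheses):

```python
import re
from collections import Counter

def parse(formula):
    """Count atoms in a simple formula like 'K2S5' (no parentheses supported)."""
    counts = Counter()
    for elem, num in re.findall(r"([A-Z][a-z]?)(\d*)", formula):
        counts[elem] += int(num) if num else 1
    return counts

def side(terms):
    """Total atom counts for a reaction side given (coefficient, formula) pairs."""
    total = Counter()
    for coeff, formula in terms:
        for elem, n in parse(formula).items():
            total[elem] += coeff * n
    return total

lhs = side([(4, "KHS"), (1, "S8")])
rhs = side([(2, "K2S5"), (2, "H2S")])
assert lhs == rhs, (lhs, rhs)
print("balanced:", dict(lhs))  # {'K': 4, 'H': 4, 'S': 12}
```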
Potassium pentasulfide
Chemistry
183
9,277,886
https://en.wikipedia.org/wiki/Deuterated%20THF
Deuterated tetrahydrofuran (d8-THF) is a colourless, organic liquid at standard temperature and pressure. This heterocyclic compound has the chemical formula C4D8O, and is an isotopologue of tetrahydrofuran. Deuterated THF is used as a solvent in NMR spectroscopy, though its expense can often be prohibitive. References Deuterated solvents
Deuterated THF
Chemistry
93
68,402,408
https://en.wikipedia.org/wiki/Smart%20home%20hub
A smart home hub, sometimes also referred to as a "smart hub", "gateway", "bridge", "controller" or "coordinator", is a control center/centre for a smart home, and enables the components of a smart home to communicate and respond to each other via communication through a central point. The smart home hub can consist of a dedicated computer appliance, a software appliance, or software running on computer hardware, and it makes it possible to gather the configuration, automation and monitoring of a smart house by communicating with and controlling different smart devices, such as home appliances, sensors, relays or robots, many of which are commonly categorized under the Internet of things. A smart home can contain one, several, or even no smart home hubs. When using several smart home hubs it is sometimes possible to connect them to each other. Some smart home hubs support a wider selection of components, while others are more specialized for controlling products within certain product groups or using certain wireless technologies (e.g. Wi-Fi, Bluetooth, Z-Wave, and/or Zigbee). A smart speaker with a virtual assistant can often be used for speech input to a smart home hub. Open or closed source code Smart home hubs can have software with open source code or use proprietary software with closed source code, and independently of this the application programming interface can be public or closed. Some smart home hubs must run on proprietary hardware, while others (for example Home Assistant) can be installed on generic hardware (for example a laptop or a single-board computer running Linux). Examples of commercial smart home hubs Some examples of smart home hubs with closed source code are: Logitech Harmony Hub SmartThings Hub Google Nest Hub Amazon Echo Show and Amazon Echo Plus, which both integrate a Zigbee hub Apple HomePod Some examples of smart home hubs based on free and open-source software are: Home Assistant OpenHAB Some examples of smart home hubs with closed source code, but an open application programming interface, are: Homey Communication protocols Various communication protocols can be used between smart home hubs and smart house components. The protocols can be grouped into wired and wireless technologies. Wireless protocols Some examples of wireless protocols commonly used in smart home hubs are: 2.45 GHz (Wi-Fi, Bluetooth, Zigbee, Thread, Matter) Z-Wave (868 MHz) RF 868 (868 MHz, various protocols) RF 433 (433 MHz, various protocols) Infrared light (430 THz; 697 nm) Wired protocols There are several cabled bus systems, some of which are built directly into electrical panels. Some examples of wired protocols commonly used in smart home hubs are: DALI, an open standard for network-based lighting control in buildings, well suited for dimming. KNX, an older and well-established open standard for network-based control of lighting, sensors, HVAC, etc. in buildings. There is also a wireless extension of KNX called KNX-RF. DMX, a standard for control of stage lighting, smoke machines and more, but also used to a certain extent for home automation due to its widespread use in professional stage equipment and good availability on the market. X10, widespread in older home automation equipment in the USA, but only used to a small extent in new installations. LonWorks, an open standard for networking platforms used for control applications of lighting and HVAC.
MQTT, an open network protocol for machine-to-machine communication, used particularly for the transmission of telemetry data from Internet of things components. BACnet, an open protocol (ISO 16484-5) for information exchange between building automation systems, regardless of the particular building service they perform. It is designed for applications such as automation and control of heating, ventilating, and air-conditioning (HVAC), lighting control, access control, fire detection systems, and associated equipment. Modbus, an openly published and royalty-free data communications protocol, especially popular in industrial environments. Meter-Bus (M-Bus), an open standard for remote reading of consumption meters, e.g. water, gas or electricity meters. See also List of home automation software Smart speaker Matter References Home automation
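As a concrete illustration of hub-to-device messaging over one of the protocols listed above, here is a minimal Python sketch using the Eclipse paho-mqtt client. The broker address, topic name, and payload shape are invented for the example; a real deployment would use the hub's own broker and topic scheme:

```python
import json
import paho.mqtt.client as mqtt  # pip install paho-mqtt

BROKER = "192.168.1.10"  # hypothetical address of the hub's MQTT broker
TOPIC = "home/livingroom/temperature"  # hypothetical topic scheme

def on_connect(client, userdata, flags, rc):
    print("connected with result code", rc)
    client.subscribe(TOPIC)  # the hub side subscribes to device topics
    # a sensor would publish readings like this:
    client.publish(TOPIC, json.dumps({"celsius": 21.5}))

def on_message(client, userdata, msg):
    reading = json.loads(msg.payload)
    print(f"{msg.topic}: {reading['celsius']} °C")

client = mqtt.Client()  # paho-mqtt 1.x API; 2.x needs CallbackAPIVersion.VERSION1
client.on_connect = on_connect
client.on_message = on_message
client.connect(BROKER, 1883, 60)  # 1883 is the default unencrypted MQTT port
client.loop_forever()  # process network traffic and dispatch the callbacks
```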
Smart home hub
Technology
871
4,852,393
https://en.wikipedia.org/wiki/Pentavalent%20antimonial
Pentavalent antimonials (also abbreviated pentavalent Sb or SbV) are a group of compounds used for the treatment of leishmaniasis. They are also called pentavalent antimony compounds. Types The first pentavalent antimonial, urea stibamine, was synthesised by the Indian scientist Upendranath Brahmachari in 1922. Though it caused a dramatic decline in deaths due to leishmaniasis, it fell out of favour in the 1950s due to higher toxicity compared to sodium stibogluconate. The compounds currently available for clinical use are: sodium stibogluconate (Pentostam; manufactured by GlaxoSmithKline; available in United States [through the Centers for Disease Control only] and UK), which is administered by slow intravenous injection. meglumine antimoniate (Glucantim; manufactured by Aventis; available in Brazil, France and Italy), which is administered by intramuscular or intravenous injection. The pentavalent antimonials can only be given by injection: there are no oral preparations available. Alternatives In many countries, widespread resistance to antimony has meant that liposomal amphotericin or miltefosine are now used in preference. Side effects Cardiotoxicity, reversible kidney failure, pancreatitis, anemia, leukopenia, rash, headache, abdominal pain, nausea, vomiting, arthralgia, myalgia, thrombocytopenia, and transaminase elevation. References Antiprotozoal agents Antimony(V) compounds
Pentavalent antimonial
Biology
339
703,764
https://en.wikipedia.org/wiki/Nuclear%20bag%20fiber
A nuclear bag fiber is a type of intrafusal muscle fiber that lies in the center of a muscle spindle. Each has many nuclei concentrated in bag-like swellings, and they cause excitation of the primary sensory fibers. There are two kinds of bag fibers, distinguished by contraction speed and motor innervation. BAG2 fibers are the largest; they have no striations in the middle region and swell to enclose their nuclei, hence the name. BAG1 fibers are smaller than BAG2 fibers. Both bag types extend beyond the spindle capsule. These fibers sense the dynamic length of the muscle: they are sensitive to both length and velocity. See also Nuclear chain fiber List of distinct cell types in the adult human body References External links http://www.unmc.edu/Physiology/Mann/mann11.html Nervous system Muscular system
Nuclear bag fiber
Biology
166
76,389,533
https://en.wikipedia.org/wiki/Neodymium%28III%29%20iodate
Neodymium(III) iodate is an inorganic compound with the chemical formula Nd(IO3)3.

Preparation
Neodymium(III) iodate can be produced by the hydrothermal reaction of neodymium(III) nitrate or neodymium(III) oxide with iodic acid in water at 230 °C; for the oxide, the overall reaction is Nd2O3 + 6 HIO3 → 2 Nd(IO3)3 + 3 H2O.

Properties
Neodymium(III) iodate decomposes when heated. Its monohydrate is known, crystallizing in the monoclinic crystal system with space group P21, and its pyroelectric coefficient at room temperature is 2.2×10−5 C·m−2/K.

References
Neodymium compounds
Iodates
Monoclinic crystals
Neodymium(III) iodate
Chemistry
153
22,100,545
https://en.wikipedia.org/wiki/Tajul%20muluk
The tajul muluk (taken from the Arabic Taj al-Mulk) is a commonly used name for a system of geomancy, comprising metaphysical and geomantic principles considered when siting or designing buildings to improve and maintain well-being in Maritime Southeast Asia. It was traditionally practiced by shamans and architects from Indonesia and Malaysia. The term actually alludes to a book entitled Taj al-Mulk, which covered a number of other topics including herbal medicine, astrology and dream interpretation along with geomancy. While all these subjects may be categorised under the term tajul muluk, it usually refers to the otherwise unnamed set of rites and rules for constructing buildings in Acehnese and Malay culture.

Terminology
tiang seri / tiang ibu ("shining pillar" / "mother pillar"): The main pillar in traditional Malay buildings
depa (armspan): A unit of measurement drawn from the armspan of the matriarch of the house
rumah ibu ("mother house"): The main part of a house
baris Laksmana: A symbol drawn onto a beam to protect the house from evil. Named after the magic line drawn by Laksmana to protect Sita Dewi

Origins
The history of Malay geomancy has never been documented, but the system contains cultural symbolism of Indian origin, indicating that it has existed as far back as the Hindu-Buddhist period of Southeast Asian history. Some conjecture that it may have been influenced by Indian vastu sastra or Chinese feng shui, both of which have traditionally been practiced in the Malay Peninsula. The earliest account of the art comes from the book Taj al-Mulk (meaning "Royal Crown of Sovereignty" in Arabic) written for Acehnese royalty. The title was pronounced "Tajul Muluk" in Malay, so the information it contained was referred to as ilmu tajul muluk or just ilmu tajul, meaning "knowledge of Tajul Muluk". According to the British civil servant Walter William Skeat in his book Malay Magic, originally published in 1900, the rituals of tajul muluk were once commonplace.
Even in the making of roads through the forest it would appear that sacrificial ceremonies are not invariably neglected. On one occasion I came upon a party of Malays in the Labu jungle who were engaged in making a bridle-track for the Selangor Government. A small bamboo censer, on which incense had been burning, had been erected in the middle of the trace; and I was informed that the necessary rites (for exorcising the demons from the trace) had just been successfully concluded.
With the rise of the Islamization movement of Southeast Asia during the 1980s, animistic and Hindu-Buddhist aspects of Malay culture were condemned and banned. Today tajul muluk is considered a superstitious relic of the past, and books written on the subject are sometimes banned in Malaysia.

Rules and theories

Soil and location
The auspiciousness of a location is determined by the colour, taste and smell of the soil, as well as the formation of its surface. In general, the colours of the soil from best to worst are white, red, yellow, grey and black (the Malay language also categorizes brown as a shade of yellow). Soil which is greenish-yellow, fragrant and tart-tasting will ensure an abundance of gold and silver unto the third generation. If the soil is red and sour, the dweller will be loved by their family. White soil with a sweet smell and taste is said to bring wealth and happiness. Foul-smelling earth of the wrong colour and texture will bring sickness and poverty. Soil that is full of holes will cause the occupants to die poor. Aside from the soil, the lay of the land must also be taken into consideration.
The best site for a house, city or orchard is level. The best aspect of the land's surface is that which is low on the north and high in the south, thought to bring the occupants absolute peacefulness. Land which is low on the west and high in the east is also auspicious. The reverse of these directions (i.e. high in the north and low to the south, or high to the west and low to the east) will bring poverty and death. The worst land is hilly or full of holes. A house that inclines from the southwest causes one to lose their livelihood or source of income. A house that inclines from the south brings death to its owner. If the house inclines from the west, the occupant will waste away all their money. Siting ritual Once the ground has been cleared, the dukun or bomoh begins smoking the area with incense (kemenyan). He then measures one depa of bamboo and sticks it in the ground together with a container of water. Incense is burnt again as the dukun recites incantations. At dawn the next morning, the stick and water are checked. If the pail of water has spilled or the bamboo has shortened, the plot is bad luck. If the water has overflowed or the stick has lengthened, it is very auspicious. Once the site has been chosen, a hole is dug in the ground for the house's main pillar. The shaman places seven grains of rice into the hole and recites mantera before inserting the pillar. If any of the rice grains are missing the next day, the site has negative energy. It is important to note however that an area which is bad for one family may be good for another since the ritual is based on the matriarch's armspan. Another, perhaps older, method involves dreams. After clearing the area, the dukun lays four sticks in its centre and calls the name of the presiding local deities or spirits. Taking a handful of soil, he recites the following chant: Hai anak Menteri Guru Yang duduk empat penjuru alam Aku memohonkan tanah ini Jikalau baik, tunjukkan alamat baik Jikalau jahat, tunjukkan alamat jahat Ho, children of Mentri Guru Who dwell in the four corners of the world I crave this plot as a boon If it is good, show me a good omen If it is bad, show me a bad omen The soil is then wrapped in white cloth, fumigated with incense and placed under the occupant's pillow at night. Before sleeping, the last two lines of the aforementioned charm are repeated. If the occupant has a nightmare, the house cannot be built. If the dream is good, the four corners of the site are pegged with sticks. A dead branch is then taken and heaped with earth before being set on fire. When it has been reduced to ashes, it is swept up and covered. A similar incantation is then spoken: Hai, segala orang yang memegang tanah ini empat penjuru alam Kama aku hendak berbuat rumah Jikalau baik, tunjukkan alamat baik Jikalau jahat, tunjukkan alamat jahat Ho, holders of this land who dwell in the four corners of the world I wish to build a house on this site If it is good, show me a good omen If it is bad, show me a bad omen The next morning, the ashes are uncovered and God will show a sign of the plot's good and bad potential. This method is no longer practiced today because of its pre-Islamic origin. The name Menteri Guru, which translates literally as "minister teacher", may be an alternate form of Betara Guru (from the Sanskrit term Bhattara Guru meaning "teacher-lord"), an epithet for the Hindu god Siwa or Shiva. The four corners mentioned in the incantation is found in Malay, Indian and Chinese spiritual world concepts. 
In Hindu cosmology the surface of the earth is represented as a square in reference to the horizon's relationship with sunrise and sunset. Erecting the main pillar Traditional Malay buildings have at their centre a main pillar called the tiang seri where the spirit of the house (semangat rumah) is said to dwell. Sometimes it may decorated with the family kris wrapped in yellow cloth. The construction of any building begins by digging a hole for this central post, accompanied by the recitation of a charm. The best time of day for this is 7 a.m. The workers must ensure that their shadows do not fall on the hole or on the post itself, or illness will follow. Certain materials are then deposited into the hole such as brazilwood (kayu sepang), ebony (kayu arang), scrap metal, tin-ore, a copper coin, a broken hatchet-head, or a candle-nut (buah gorek). To appease the local earth-spirit or demon (jembalang tanah or puaka), the head, feet and blood of an animal are also deposited in the hole. Depending on the malignity of the earth-spirit, the animal may be either a fowl (ayam), a goat (kambing) or a buffalo (kerbau). For a small demon, an egg will suffice. Among the natives of ancient Borneo the victim of this sacrifice would have been human, and the Malay custom of killing an animal for the purpose arose from what was once human sacrifice. As recently as the beginning of the 20th century, the Malaysian government would bury human skulls under the foundation of any large structure. A number of methods are used to ascertain whether the hole is in a propitious location. In one example a white cup is filled with water, fumigated with incense, and left in the hole overnight. If the cup is still full the next day or has live insects inside, it is a good sign. If the insects are dead or the water has lessened, it is a bad omen. Alternatively, one could wait until everyone has left the area before picking up three clods of soil, holding them over incense, and reciting a certain mantera or mantra. The soil must be taken home without ever turning to look back. Upon arrival, the earth is placed under the occupant's pillow before sleeping. If they have a bad dream, one of the clods is thrown away. This process continues until it results in a good dream, when the clod of earth which induced the dream is placed in the hole and serves as the tiang seri pole's foundation. An example of a charm recited when erecting the tiang seri runs as follows: Hai Raja Guru, Maharaja Guru, daripada tajar menyenseng Engkaulah anak Betara Guru Hai hantu tanah, benah tanah Aku tahu asal kau jadi: Jembalang tanah Daripada kilat sabung-menyabung Undur kau dari sini ke laut yang dalam Aku tahu asal kau jadi: Ke rimba yang sunyi Daripada embun setitik Antara aku dengan engkau, aku tahu asal kau jadi Ho Raja Guru, Maharaja Guru Thou art sons of Bhattara Guru Ho, ghost of the earth, blight of the earth I know the origin from which you sprang: Demon of the earth From the flashing lightning Retire ye hence to the depths of the sea I know the origin from which you sprang: To the peace of the forest From a single drop of dew Betwixt you and me, I know the origin from which you came into being Length of the threshold As with the Chinese bagua and the guardians of Jambudvipa in Indian culture, Malay astrology and geomancy also uses eight points of divination. Known as the "eight beasts", each one corresponds to an animal possessing its own characteristics. The eight beasts regulate the length of a house's threshold. 
This is measured in a unit called depa, the length of the house-mistress's armspan. After measuring off a depa on a piece of string, the string is folded into three before one-third of its length is cut off. The remainder is folded into eight and reduced to one-seventh. The remaining seventh is checked against the threshold's length, and the number of times it is contained therein determines which animal it corresponds to. If the measurement is unlucky, the threshold would be cut shorter. The ominous significance of the eight beasts is often illustrated in rhyme. An example, recorded by Walter William Skeat, reads as follows: 1. Dragon "A dragon of bulk, a monster dragon Is this dragon that turns round month by month Wherever you go you will be safe from stumbling-blocks And all who meet you will be your friends" 2. Dairy cow "There is the smoke of a fire in the forest Where Che Ali is burning lime They were milking the young dairy-cow And in the midst of the milking it sprawled and dropped dead" 3. Lion "Lion of courage, lion of valour Is the lion gambolling at the end of the Point The luck of this house will be lasting Bringing prosperity from year to year" 4. Dog "The wild dog, the jackal Barks at the deer from night to night Whatever you do will be a stumbling-block In this house men will stab one another" 5. Cow "The big cow from the middle of the clearing Has gone to the deep forest to calve there Great luck will be your portion Never will you cease to be prosperous" 6. Donkey "The ass within the fort Carries grass from morn to eve Whatever you pray for will not be granted Though big your capital, the half will be lost" 7. Elephant "The king's big riding-elephant Has its tusks covered with amalgam Good luck is your portion No harm or blemish will you suffer" 8. Crow "A black crow soaring by night Has perched on the house of the great Magic Prince Great indeed is the calamity which has happened Within the house its master lies dead" House height The rules of a house's height are also determined by the matriarch's armspan. Ideally the main pillar should be a round number (e.g. 5 depa) but the actual measurement usually contains a fraction (e.g. five and two-tenths depa). Depending on the amount of extra length, there would be a different result. Malay village houses were built so that they could be disassembled and rebuilt when the situation demanded it such as war, floods or famine. Some have noted that these rules consider the number four unlucky, just as in Chinese superstition. Direction of the door The direction that the house's door faces is also said to play a part in determining the occupants' well-being. The worst direction for the door to face is south, which brings bad career luck. A door which faces the west is said to bring knowledge or encourages the coming of someone well-learned, especially in matters of religion. This is probably a reference to the fact that foreigners, particularly the Indian Muslims who introduced Islam in the region, sailed to the Malay Peninsula from the west. A door that faces east ensures many grandchildren and a life of tranquility. A north-facing door brings wealth. See also References Environmental design Culture of Malaysia Architecture in Malaysia
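Read as arithmetic, the string-folding recipe above reduces to taking a fixed fraction of the matriarch's depa as a measuring unit and counting how many times it fits the threshold. The Python sketch below is one possible reading: the unit is assumed to be one seventh of one eighth of two thirds of a depa, and the count is assumed to index the rhyme's eight beasts cyclically starting from the dragon; the sources excerpted here do not spell out either convention.

    def beast_for_threshold(threshold_cm, depa_cm):
        # One reading of the recipe: keep 2/3 of the depa, fold into
        # eight (1/8), then reduce to one seventh (1/7) of that.
        unit = depa_cm * (2.0 / 3.0) / 8.0 / 7.0
        count = int(threshold_cm // unit)          # times the unit fits
        beasts = ["dragon", "dairy cow", "lion", "dog",
                  "cow", "donkey", "elephant", "crow"]
        return beasts[(count - 1) % 8]             # assumed cyclic, 1-indexed

    # Example: a 155 cm armspan and a 90 cm threshold
    print(beast_for_threshold(90.0, 155.0))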
Tajul muluk
Engineering
3,211
106,421
https://en.wikipedia.org/wiki/Library%20%28computing%29
In computer science, a library is a collection of resources that is leveraged during software development to implement a computer program. Historically, a library consisted of subroutines (generally called functions today). The concept now includes other forms of executable code, including classes, and non-executable data, including images and text. It can also refer to a collection of source code. For example, a program could use a library to indirectly make system calls instead of making those system calls directly in the program.

Characteristics

General
A library can be used by multiple, independent consumers (programs and other libraries). This differs from resources defined in a program, which can usually only be used by that program. When a consumer uses a library resource, it gains the value of the library without having to implement it itself. Libraries encourage code reuse in a modular fashion. When writing code that uses a library, a programmer only needs to know high-level information, such as what items it contains and how to use them, not all of the internal details of the library. Libraries can use other libraries, resulting in a hierarchy of libraries in a program.

Executable
A library of executable code has a well-defined interface by which the functionality is invoked. For example, in C, a library function is invoked via C's normal function call capability. The linker generates code to call a function via the library mechanism if the function is available from a library instead of from the program itself. The functions of a library can be connected to the invoking program at different program lifecycle phases. If the code of the library is accessed during the build of the invoking program, then the library is called a static library. An alternative is to build the program executable to be separate from the library file. The library functions are connected after the executable is started, either at load-time or runtime. In this case, the library is called a dynamic library. Most compiled languages have a standard library, although programmers can also create their own custom libraries. Most modern software systems provide libraries that implement the majority of the system services. Such libraries have organized the services which a modern application requires. As such, most code used by modern applications is provided in these system libraries.

History
The idea of a computer library dates back to the first computers created by Charles Babbage. An 1888 paper on his Analytical Engine suggested that computer operations could be punched on separate cards from numerical input. If these operation punch cards were saved for reuse then "by degrees the engine would have a library of its own." In 1947 Goldstine and von Neumann speculated that it would be useful to create a "library" of subroutines for their work on the IAS machine, an early computer that was not yet operational at that time. They envisioned a physical library of magnetic wire recordings, with each wire storing reusable computer code. Inspired by von Neumann, Wilkes and his team constructed EDSAC. A filing cabinet of punched tape held the subroutine library for this computer. Programs for EDSAC consisted of a main program and a sequence of subroutines copied from the subroutine library. In 1951 the team published the first textbook on programming, The Preparation of Programs for an Electronic Digital Computer, which detailed the creation and the purpose of the library.
COBOL included "primitive capabilities for a library system" in 1959, but Jean Sammet described them as "inadequate library facilities" in retrospect. JOVIAL has a Communication Pool (COMPOOL), roughly a library of header files. Another major contributor to the modern library concept came in the form of the subprogram innovation of FORTRAN. FORTRAN subprograms can be compiled independently of each other, but the compiler lacked a linker. So prior to the introduction of modules in Fortran-90, type checking between FORTRAN subprograms was impossible. By the mid 1960s, copy and macro libraries for assemblers were common. Starting with the popularity of the IBM System/360, libraries containing other types of text elements, e.g., system parameters, also became common. In IBM's OS/360 and its successors this is called a partitioned data set. The first object-oriented programming language, Simula, developed in 1965, supported adding classes to libraries via its compiler. Linking Libraries are important in the program linking or binding process, which resolves references known as links or symbols to library modules. The linking process is usually automatically done by a linker or binder program that searches a set of libraries and other modules in a given order. Usually it is not considered an error if a link target can be found multiple times in a given set of libraries. Linking may be done when an executable file is created (static linking), or whenever the program is used at runtime (dynamic linking). The references being resolved may be addresses for jumps and other routine calls. They may be in the main program, or in one module depending upon another. They are resolved into fixed or relocatable addresses (from a common base) by allocating runtime memory for the memory segments of each module referenced. Some programming languages use a feature called smart linking whereby the linker is aware of or integrated with the compiler, such that the linker knows how external references are used, and code in a library that is never actually used, even though internally referenced, can be discarded from the compiled application. For example, a program that only uses integers for arithmetic, or does no arithmetic operations at all, can exclude floating-point library routines. This smart-linking feature can lead to smaller application file sizes and reduced memory usage. Relocation Some references in a program or library module are stored in a relative or symbolic form which cannot be resolved until all code and libraries are assigned final static addresses. Relocation is the process of adjusting these references, and is done either by the linker or the loader. In general, relocation cannot be done to individual libraries themselves because the addresses in memory may vary depending on the program using them and other libraries they are combined with. Position-independent code avoids references to absolute addresses and therefore does not require relocation. Static libraries When linking is performed during the creation of an executable or another object file, it is known as static linking or early binding. In this case, the linking is usually done by a linker, but may also be done by the compiler. A static library, also known as an archive, is one intended to be statically linked. Originally, only static libraries existed. Static linking must be performed when any modules are recompiled. All of the modules required by a program are sometimes statically linked and copied into the executable file. 
This process, and the resulting stand-alone file, is known as a static build of the program. A static build may not need any further relocation if virtual memory is used and no address space layout randomization is desired. Shared libraries A shared library or shared object is a file that is intended to be shared by executable files and further shared object files. Modules used by a program are loaded from individual shared objects into memory at load time or runtime, rather than being copied by a linker when it creates a single monolithic executable file for the program. Shared libraries can be statically linked during compile-time, meaning that references to the library modules are resolved and the modules are allocated memory when the executable file is created. But often linking of shared libraries is postponed until they are loaded. Object libraries Although originally pioneered in the 1960s, dynamic linking did not reach the most commonly-used operating systems until the late 1980s. It was generally available in some form in most operating systems by the early 1990s. During this same period, object-oriented programming (OOP) was becoming a significant part of the programming landscape. OOP with runtime binding requires additional information that traditional libraries do not supply. In addition to the names and entry points of the code located within, they also require a list of the objects they depend on. This is a side-effect of one of OOP's core concepts, inheritance, which means that parts of the complete definition of any method may be in different places. This is more than simply listing that one library requires the services of another: in a true OOP system, the libraries themselves may not be known at compile time, and vary from system to system. At the same time many developers worked on the idea of multi-tier programs, in which a "display" running on a desktop computer would use the services of a mainframe or minicomputer for data storage or processing. For instance, a program on a GUI-based computer would send messages to a minicomputer to return small samples of a huge dataset for display. Remote procedure calls (RPC) already handled these tasks, but there was no standard RPC system. Soon the majority of the minicomputer and mainframe vendors instigated projects to combine the two, producing an OOP library format that could be used anywhere. Such systems were known as object libraries, or distributed objects, if they supported remote access (not all did). Microsoft's COM is an example of such a system for local use. DCOM, a modified version of COM, supports remote access. For some time object libraries held the status of the "next big thing" in the programming world. There were a number of efforts to create systems that would run across platforms, and companies competed to try to get developers locked into their own system. Examples include IBM's System Object Model (SOM/DSOM), Sun Microsystems' Distributed Objects Everywhere (DOE), NeXT's Portable Distributed Objects (PDO), Digital's ObjectBroker, Microsoft's Component Object Model (COM/DCOM), and any number of CORBA-based systems. Class libraries Class libraries are the rough OOP equivalent of older types of code libraries. They contain classes, which describe characteristics and define actions (methods) that involve objects. Class libraries are used to create instances, or objects with their characteristics set to specific values. 
In some OOP languages, like Java, the distinction is clear, with the classes often contained in library files (like Java's JAR file format) and the instantiated objects residing only in memory (although potentially able to be made persistent in separate files). In others, like Smalltalk, the class libraries are merely the starting point for a system image that includes the entire state of the environment, classes and all instantiated objects. Today most class libraries are stored in a package repository (such as Maven Central for Java). Client code explicitly declares the dependencies on external libraries in build configuration files (such as a Maven POM in Java).

Remote libraries
Another library technique uses completely separate executables (often in some lightweight form) and calls them using a remote procedure call (RPC) over a network to another computer. This maximizes operating system re-use: the code needed to support the library is the same code being used to provide application support and security for every other program. Additionally, such systems do not require the library to exist on the same machine, but can forward the requests over the network. However, such an approach means that every library call requires a considerable amount of overhead. RPC calls are much more expensive than calling a shared library that has already been loaded on the same machine. This approach is commonly used in a distributed architecture that makes heavy use of such remote calls, notably client-server systems and application servers such as Enterprise JavaBeans.

Code generation libraries
Code generation libraries are high-level APIs that can generate or transform bytecode for Java. They are used by aspect-oriented programming, some data access frameworks, and for testing to generate dynamic proxy objects. They are also used to intercept field access.

File naming

Most modern Unix-like systems
The system stores libfoo.a and libfoo.so files in directories such as /lib, /usr/lib or /usr/local/lib. The filenames always start with lib, and end with a suffix of .a (archive, static library) or of .so (shared object, dynamically linked library). Some systems might have multiple names for a dynamically linked library. These names typically share the same prefix and have different suffixes indicating the version number. Most of the names are names for symbolic links to the latest version. For example, on some systems libfoo.so.2 would be the filename for the second major interface revision of the dynamically linked library libfoo. The .la files sometimes found in the library directories are libtool archives, not usable by the system as such.

macOS
The system inherits static library conventions from BSD, with the library stored in a .a file, and can use .so-style dynamically linked libraries (with the .dylib suffix instead). Most libraries in macOS, however, consist of "frameworks", placed inside special directories called "bundles" which wrap the library's required files and metadata. For example, a framework called MyFramework would be implemented in a bundle called MyFramework.framework, with MyFramework.framework/MyFramework being either the dynamically linked library file or being a symlink to the dynamically linked library file in MyFramework.framework/Versions/Current/MyFramework.

Microsoft Windows
Dynamic-link libraries usually have the suffix *.DLL, although other file name extensions may identify specific-purpose dynamically linked libraries, e.g. *.OCX for OLE libraries.
The interface revisions are either encoded in the file names, or abstracted away using COM-object interfaces. Depending on how they are compiled, *.LIB files can be either static libraries or representations of dynamically linkable libraries needed only during compilation, known as "import libraries". Unlike in the UNIX world, which uses different file extensions for the two, when linking against a .LIB file in Windows one must first know whether it is a regular static library or an import library. In the latter case, a .DLL file must be present at runtime.

See also
(VCL)
(CLX)
(used by the C++ Standard Library)

Notes

References

Further reading
Article Beginner's Guide to Linkers by David Drysdale
Article Faster C++ program startups by improving runtime linking efficiency by Léon Bottou and John Ryland
How to Create Program Libraries by Baris Simsek
BFD - the Binary File Descriptor Library
1st Library-Centric Software Design Workshop LCSD'05 at OOPSLA'05
2nd Library-Centric Software Design Workshop LCSD'06 at OOPSLA'06
How to create shared library by Ulrich Drepper (with much background info)
Anatomy of Linux dynamic libraries at IBM.com
Operating system technology
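The Unix naming conventions above can be seen in action from Python, whose ctypes module wraps the operating system's dynamic loader (dlopen on Linux). A minimal sketch, assuming a Linux system where the C math library is installed under the soname libm.so.6 (the name differs on other platforms, e.g. libSystem.dylib on macOS):

    import ctypes

    # Ask the dynamic loader for the shared math library by its soname.
    libm = ctypes.CDLL("libm.so.6")

    # Declare the C signature of cos(3): double cos(double).
    libm.cos.argtypes = [ctypes.c_double]
    libm.cos.restype = ctypes.c_double

    print(libm.cos(0.0))   # 1.0, computed by the dynamically linked libm

Here linking happens entirely at runtime: nothing about libm is recorded in the calling executable beforehand, which is the dynamic end of the linking spectrum described above.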
Library (computing)
Technology
3,071
36,382,662
https://en.wikipedia.org/wiki/USA-38
USA-38, also known as GPS II-2 and GPS SVN-13, was an American navigation satellite which formed part of the Global Positioning System. It was the second of nine Block II GPS satellites to be launched, which were the first operational GPS satellites.

Background
It was part of the 21-satellite Global Positioning System (GPS) Block II series that provides precise position data (accurate to within 16 m) to military and civilian users worldwide. Its signals could be received on devices as small as a telephone. The GPS II satellites, built by Rockwell International for the Air Force Space Systems Division, each had a 7.5-year design life. The Air Force intended to launch a GPS II every two to three months until the constellation of 21 operational satellites and three spares was aloft. The GPS Block II series joined seven operational Block 1 satellites.

Launch
USA-38 was launched at 22:19 UTC on 10 June 1989, atop a Delta II launch vehicle, flight number D185, flying in the 6925-9.5 configuration. The launch took place from Launch Complex 17A (LC-17A) at the Cape Canaveral Air Force Station (CCAFS), and placed USA-38 into a transfer orbit. The satellite raised itself into medium Earth orbit using a Star-37XFP apogee motor.

Mission
On 11 July 1989, USA-38 was in an orbit with a perigee of , an apogee of , a period of 717.92 minutes, and 54.5° of inclination to the equator. It operated in slot 3 of plane B of the GPS constellation. The satellite had a mass of , and generated 710 watts of power. It had a design life of 7.5 years, and was deactivated on 12 February 2004.

References
GPS satellites
USA satellites
Spacecraft launched in 1989
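The quoted 717.92-minute period is consistent with the GPS constellation's roughly half-sidereal-day orbits, which can be checked with Kepler's third law. The sketch below assumes a nominal GPS Block II semi-major axis of about 26,560 km; the article itself does not state the exact value for USA-38.

    import math

    MU_EARTH = 398_600.4418   # km^3/s^2, Earth's standard gravitational parameter
    a = 26_560.0              # km, nominal GPS semi-major axis (assumed)

    # Kepler's third law: T = 2 * pi * sqrt(a^3 / mu)
    period_min = 2.0 * math.pi * math.sqrt(a**3 / MU_EARTH) / 60.0
    print(period_min)         # about 718 minutes, matching the article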
USA-38
Technology
380
72,709,760
https://en.wikipedia.org/wiki/Caesium%20telluride
Caesium telluride or caesium telluridocaesium is an inorganic salt with the chemical formula Cs2Te. Caesium telluride is used to make photocathodes, and it is the photoemissive material used in many laser-driven radio-frequency (RF) electron guns, such as at the TESLA Test Facility (TTF).

References
Caesium compounds
Tellurides
Caesium telluride
Chemistry
90
3,368,981
https://en.wikipedia.org/wiki/Weak%20formulation
Weak formulations are important tools for the analysis of mathematical equations that permit the transfer of concepts of linear algebra to solve problems in other fields such as partial differential equations. In a weak formulation, equations or conditions are no longer required to hold absolutely (and this is not even well defined), and the equation instead has weak solutions only with respect to certain "test vectors" or "test functions". In a strong formulation, the solution space is constructed such that these equations or conditions are already fulfilled. The Lax–Milgram theorem, named after Peter Lax and Arthur Milgram who proved it in 1954, provides weak formulations for certain systems on Hilbert spaces.

General concept
Let $V$ be a Banach space, let $V'$ be the dual space of $V$, let $A\colon V \to V'$ be a linear map, and let $f \in V'$. A vector $u \in V$ is a solution of the equation $Au = f$ if and only if for all $v \in V$, $(Au)(v) = f(v)$. A particular choice of $v$ is called a test vector (in general) or a test function (if $V$ is a function space). To bring this into the generic form of a weak formulation, find $u \in V$ such that $a(u,v) = f(v)$ for all $v \in V$, by defining the bilinear form $a(u,v) := (Au)(v)$.

Example 1: linear system of equations
Now, let $V = \mathbb{R}^n$ and let $A\colon V \to V$ be a linear mapping. Then, the weak formulation of the equation $Au = f$ involves finding $u \in V$ such that for all $v \in V$ the following equation holds: $\langle Au, v \rangle = \langle f, v \rangle$, where $\langle \cdot, \cdot \rangle$ denotes an inner product. Since $A$ is a linear mapping, it is sufficient to test with basis vectors, and we get $\langle Au, e_i \rangle = \langle f, e_i \rangle$ for $i = 1, \ldots, n$. Actually, expanding $u = \sum_{j=1}^n u_j e_j$, we obtain the matrix form of the equation $\mathbf{A}\mathbf{u} = \mathbf{f}$, where $\mathbf{A}_{ij} = \langle A e_j, e_i \rangle$ and $\mathbf{f}_i = \langle f, e_i \rangle$. The bilinear form associated to this weak formulation is $a(u,v) = \mathbf{v}^\mathsf{T} \mathbf{A} \mathbf{u}$.

Example 2: Poisson's equation
To solve Poisson's equation $-\nabla^2 u = f$ on a domain $\Omega \subset \mathbb{R}^d$ with $u = 0$ on its boundary, and to specify the solution space $V$ later, one can use the $L^2$ scalar product $\langle u, v \rangle = \int_\Omega u v \, dx$ to derive the weak formulation. Then, testing with differentiable functions $v$ yields $-\int_\Omega (\nabla^2 u) v \, dx = \int_\Omega f v \, dx$. The left side of this equation can be made more symmetric by integration by parts using Green's identity and assuming that $v = 0$ on $\partial\Omega$: $\int_\Omega \nabla u \cdot \nabla v \, dx = \int_\Omega f v \, dx$. This is what is usually called the weak formulation of Poisson's equation. Functions in the solution space must be zero on the boundary and have square-integrable derivatives. The appropriate space to satisfy these requirements is the Sobolev space $H_0^1(\Omega)$ of functions with weak derivatives in $L^2(\Omega)$ and with zero boundary conditions, so $V = H_0^1(\Omega)$. The generic form is obtained by assigning $a(u,v) = \int_\Omega \nabla u \cdot \nabla v \, dx$ and $f(v) = \int_\Omega f v \, dx$.

The Lax–Milgram theorem
This is a formulation of the Lax–Milgram theorem which relies on properties of the symmetric part of the bilinear form. It is not the most general form. Let $V$ be a real Hilbert space and $a(\cdot,\cdot)$ a bilinear form on $V$ which is bounded: $|a(u,v)| \le C \|u\| \, \|v\|$, and coercive: $a(u,u) \ge c \|u\|^2$. Then, for any bounded $f \in V'$, there is a unique solution $u \in V$ to the equation $a(u,v) = f(v)$ for all $v \in V$, and it holds that $\|u\| \le \frac{1}{c} \|f\|_{V'}$.

Application to example 1
Here, application of the Lax–Milgram theorem is a stronger result than is needed.
Boundedness: all bilinear forms on $\mathbb{R}^n$ are bounded. In particular, we have $|a(u,v)| \le \|\mathbf{A}\| \, \|u\| \, \|v\|$.
Coercivity: this actually means that the real parts of the eigenvalues of $\mathbf{A}$ are not smaller than $c$. Since this implies in particular that no eigenvalue is zero, the system is solvable. Additionally, this yields the estimate $\|u\| \le \frac{1}{c} \|f\|$, where $c$ is the minimal real part of an eigenvalue of $\mathbf{A}$.

Application to example 2
Here, choose $V = H_0^1(\Omega)$ with the norm $\|v\|_V := \|\nabla v\|$, where the norm on the right is the $L^2$ norm on $\Omega$ (this provides a true norm on $V$ by the Poincaré inequality). But we see that $|a(u,u)| = \|\nabla u\|^2 = \|u\|_V^2$, and by the Cauchy–Schwarz inequality, $|a(u,v)| \le \|u\|_V \, \|v\|_V$. Therefore, for any $f \in V'$, there is a unique solution $u \in V$ of Poisson's equation, and we have the estimate $\|u\|_V \le \|f\|_{V'}$.

See also
Babuška–Lax–Milgram theorem
Lions–Lax–Milgram theorem

References

External links
MathWorld page on Lax–Milgram theorem
Partial differential equations
Numerical differential equations
Theorems in functional analysis
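To make the Poisson example concrete, the sketch below discretizes the weak form in one dimension with piecewise-linear finite elements on a uniform mesh: the stiffness matrix comes from $a(u,v) = \int u'v'\,dx$ and the load vector from a lumped approximation of $\int fv\,dx$. It is a minimal illustration of how a weak formulation is used in practice, not a general finite element code; with $f(x) = \pi^2 \sin(\pi x)$ the exact solution is $u(x) = \sin(\pi x)$.

    import numpy as np

    n = 50                      # interior nodes on (0, 1)
    h = 1.0 / (n + 1)
    x = np.linspace(h, 1.0 - h, n)

    # Stiffness matrix for hat functions: tridiagonal (2, -1) scaled by 1/h.
    A = (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h

    f = np.pi**2 * np.sin(np.pi * x)
    b = h * f                   # lumped approximation of the load integral

    u = np.linalg.solve(A, b)
    print(np.max(np.abs(u - np.sin(np.pi * x))))   # small discretization error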
Weak formulation
Mathematics
738
32,233,575
https://en.wikipedia.org/wiki/Asymmetric%20flow%20field%20flow%20fractionation
Asymmetrical flow field-flow fractionation (AF4) is the most versatile and most widely used sub-technique within the family of field-flow fractionation (FFF) methods. AF4 can be used in aqueous and organic solvents and is able to characterize nanoparticles, polymers and proteins. The theory for AF4 was conceived in 1986, and the technique was established in 1987 and first published by Wahlund and Giddings. AF4 is distinct from symmetrical flow FFF because it contains only one permeable wall, so the cross-flow is caused only by the carrier liquid, which constantly exits through the semi-permeable wall on the bottom of the channel.

Applications and detection methods
Asymmetrical flow field flow fractionation (AF4) is nowadays a common and state-of-the-art method for the fractionation and separation of macromolecules and particles in suspension. AF4 is an alternative to HPLC and SEC in cases where column chromatography is not suitable for the analyte. HPLC or SEC would be used for liquid separations of molecules up to 1000 kDa and nanoparticles up to 10 nm. As the size increases above 10 nm, AF4 becomes superior in terms of resolution and recovery. AF4's applications are flexible for many analytical conditions where a column-based method would be unable to properly separate the desired particles. For macromolecules and nanoparticles, AF4 is an alternative method especially when the stationary phase in columns interacts with the sample. AF4 is specifically powerful for inhomogeneous samples, where it can separate soluble macromolecules from particles or aggregates. AF4 and other FFF methods have been extensively used in environmental research on the impact of nanomaterials and to characterize the oxidation of condensed tannins. For high-molar-mass and branched polymers, AF4 has been shown to achieve good separation where SEC fails, and AF4 has been applied to polyolefins at temperatures above 150 °C. Detection methods are the same as for FFF in general: UV is the most popular concentration detector, but most AF4 systems include a multi-angle light scattering detector for direct measurement of size and molar mass.

Operational procedures
The AF4 experiment can be separated into three stages:
1. Sample injection
Samples are injected into the system using a known sample volume; this volume depends on the AF4 instrument being used in the experiment. Starting fractionation immediately after sample injection is not ideal, because the sample spreads out randomly from the injection site, so the initial velocities and positions of the particles are not all the same. This leads to band broadening and reduced efficiency. To correct this, sample focusing is used.
2. Sample focusing
A flow running opposite to the carrier solvent is used to focus all the particles in the sample into one specified area before fractionation begins. This corrects for the peak broadening that would occur if particles were dispersed between the injection port and the channel outlet before fractionation begins. Sample preparation is another task that can be achieved in the focusing step. Once all the particles are in the same area of the channel, fractionation can occur.
3. Fractionation
There are two components that make up the FFF system: firstly, the laminar flow that carries the sample through the separation chamber, and secondly, the separation field applied perpendicular to the channel, against the sample flow. As particles flow along the channel, the cross-flow separation field pushes them towards the bottom of the channel. As they approach the bottom, they undergo a counteracting diffusion back into the channel against the cross-flow. The extent to which the molecules can diffuse back into the channel is dictated by their natural Brownian motion, a size-dependent characteristic unique to each individual species. Smaller particles have a higher Brownian motion than larger ones and are able to diffuse higher into the channel against the cross-flow. The rate of laminar flow within the channel is not uniform: it travels in a parabolic pattern, with the speed of the flow increasing towards the center of the channel and decreasing towards the walls. Therefore, the rate at which particles are carried through depends on their position within the channel. Those with greater diffusion, located nearer the center of the channel, are transported with greater velocity. The larger particles in the shallow, slower-moving stream are transported with a lower flow velocity and elute later than smaller particles. This results in a gentle separation of particles based on size, with an elution order from smallest to largest.

References

External links
An overview of FFF
AF4 separation system
Asymmetric flow FFF
Analytical chemistry
Fractionation
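The size dependence described in the fractionation step can be put into rough numbers using standard FFF theory: the diffusion coefficient follows the Stokes–Einstein relation D = kT/(6πηr), the retention parameter is λ = D/(u_c w) for cross-flow velocity u_c and channel thickness w, and for well-retained samples the retention ratio is approximately R ≈ 6λ. The channel dimensions and cross-flow velocity in the sketch are illustrative assumptions, not parameters of any particular instrument.

    import math

    KB, T = 1.380649e-23, 298.15   # Boltzmann constant (J/K), temperature (K)
    ETA = 8.9e-4                   # Pa*s, viscosity of water
    W = 350e-6                     # m, assumed channel thickness
    U_C = 1.0e-5                   # m/s, assumed cross-flow velocity

    def retention_ratio(radius_m):
        d = KB * T / (6.0 * math.pi * ETA * radius_m)  # Stokes-Einstein
        lam = d / (U_C * W)
        return 6.0 * lam           # approximation valid for small lambda

    for r_nm in (5, 20, 100):
        print(r_nm, "nm ->", retention_ratio(r_nm * 1e-9))
    # Larger particles give smaller R, i.e. later elution ("normal mode").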
Asymmetric flow field flow fractionation
Chemistry
969
66,379,796
https://en.wikipedia.org/wiki/Ferraris%27%20motor
In 1885, Galileo Ferraris demonstrated an induction motor that also involved using two pairs of electromagnets to create a rotating magnetic field, though he did this independently of Baily. His motor more closely resembled modern ones in that the electromagnets surrounded a cylinder. More significantly, however, he proposed creating a true rotating magnetic field for it by supplying two sine wave alternating currents 90° apart. He gave his first public demonstration of the motor in 1888. History and description Professor Galileo Ferraris, of Turin, had already in 1885, arrived at the same fundamental ideas as those of Baily and of Deprez. But the result was more fruitful, inasmuch as he, without knowing of the work of either, united both sets of ideas. Like Baily he proposed to produce rotation of a copper conductor by means of eddy-currents induced in it by a progressively shifted magnetic field; and this progressively shifted magnetic field he proposed to generate as a true rotating field by combining at right angles to one another two alternate currents which differed by a quarter-period from one another. In 1885, Professor Ferraris constructed the motor depicted in plan in Fig 1, which was not, however, publicly shown till 1888. It was exhibited in 1893 at the World's Fair at Chicago. It consisted of two pairs of electromagnets A A and B B', having a common yoke made by winding iron wire around the exterior. Two alternate currents differing in phase were led into these two circuits, and the pivoted central body was observed to revolve. Ferraris's first publication was in March 1888, entitled Electrodynamic rotations produced by means of alternate currents. After expounding the geometric theory of the rotatory magnetic field, he suggested that a simple way of procuring the desired phase-currents would be to branch the circuit of an alternate current into two parts, into one of which should be inserted a resistance without self-induction, into the other a coil of much self-induction but of small resistance. The two windings of the motor should be respectively introduced into these two branches. The difference of phase thus produced would be sufficiently near to 90° to be effective. He expressed the opinion that in this way one might obtain all the effects that can be obtained by the rotation of a magnet. He then described the following experiments which were made in the autumn of 1885. Two flat coils, one of thick wire, the other of thinner wire, represented diagrammatically at A A and B B of Fig. 1, were set at right angles to one another. Into the first was brought a current from the primary of a Gaulard's transformer, and into the second the current from the secondary, with more or less non-inductive resistance. In the central space was suspended a small hollow closed cylinder of copper. If the current was turned on in one only of the two windings the cylinder remained immovable, but on turning on the second current it at once began to rotate. The sense of the rotation could be reversed by simply changing, with a reversing-switch, the connections of the second coil. The same results were found to follow when a cylinder of iron was substituted for that of copper. A laminated iron cylinder built up of insulated disks also turned. 
Then followed suggestions for constructing alternate current motors on this principle but of modified form; for, as Professor Ferraris remarked, it was evident that a motor thus made could not have any importance as a means of industrial transformation of power. He therefore designed a larger model, having as its rotating part a copper cylinder weighing 10 lbs, having a length of 18 cm, and a diameter of 8,9 cm, borne on a horizontal shaft 1 cm in diameter. It was surrounded by two sets of coils A A and B B at right angles to one another, as in the Fig. 2. It was, however, of but small power. Ferraris discussed the elementary theory of the apparatus, pointing out that the inductive action would be proportional to the slip, that is to say to the difference between the angular velocity of the magnetic field and that of the rotating cylinder, that the induced current in the rotating metal would also be proportional to this; and that the power of the motor is proportional jointly to the slip and to the velocity of the rotating part. Ferraris also suggested measuring instruments for alternate currents based on this principle. Lastly he succeeded in producing rotation in a mass of mercury placed in a vessel in the rotatory field. In 1894 Ferraris published another discussion of the theory of these motors. See also Arago's Rotations Timeline of the Electric Motor Rotating magnetic Field Induction Motor References External links Inventing The Induction Motor Electric motors Electromagnetism 19th-century inventions
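Ferraris's central observation, that two windings at right angles carrying currents a quarter-period apart produce a field of constant magnitude whose direction rotates uniformly, is easy to verify numerically. A short sketch, assuming unit field amplitude per coil and a 50 Hz supply chosen purely for illustration:

    import math

    B0 = 1.0
    omega = 2.0 * math.pi * 50.0         # 50 Hz supply (illustrative)

    for k in range(5):
        t = k * 0.002                    # a few sample instants
        bx = B0 * math.cos(omega * t)    # coil pair A A', reference phase
        by = B0 * math.sin(omega * t)    # coil pair B B', quarter period behind
        magnitude = math.hypot(bx, by)   # stays equal to B0
        angle = math.degrees(math.atan2(by, bx))  # advances steadily
        print(f"t={t:.3f} s  |B|={magnitude:.3f}  angle={angle:7.2f} deg")

The constant magnitude and steadily advancing angle are exactly the "true rotating field" that distinguishes Ferraris's arrangement from a merely pulsating one.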
Ferraris' motor
Physics,Technology,Engineering
977
30,001,045
https://en.wikipedia.org/wiki/Highest-weight%20category
In the mathematical field of representation theory, a highest-weight category is a $k$-linear category $\mathcal{C}$ (here $k$ is a field) that
is locally artinian
has enough injectives
satisfies $B \cap \left(\bigcup_\alpha A_\alpha\right) = \bigcup_\alpha \left(B \cap A_\alpha\right)$ for all subobjects $B$ and each family of subobjects $\{A_\alpha\}$ of each object $X$
and such that there is a locally finite poset $\Lambda$ (whose elements are called the weights of $\mathcal{C}$) that satisfies the following conditions:
The poset $\Lambda$ indexes an exhaustive set of non-isomorphic simple objects $\{S(\lambda)\}$ in $\mathcal{C}$.
$\Lambda$ also indexes a collection $\{A(\lambda)\}$ of objects of $\mathcal{C}$ such that there exist embeddings $S(\lambda) \to A(\lambda)$ such that all composition factors $S(\mu)$ of $A(\lambda)/S(\lambda)$ satisfy $\mu < \lambda$.
For all $\mu, \lambda$ in $\Lambda$, $\dim_k \operatorname{Hom}_{\mathcal{C}}(A(\lambda), A(\mu))$ is finite, and the multiplicity $[A(\lambda) : S(\mu)]$ is also finite.
Each $S(\lambda)$ has an injective envelope $I(\lambda)$ in $\mathcal{C}$ equipped with an increasing filtration $0 = F_0(\lambda) \subseteq F_1(\lambda) \subseteq \cdots \subseteq I(\lambda)$ such that
$F_1(\lambda) = A(\lambda)$
for $n > 1$, $F_n(\lambda)/F_{n-1}(\lambda) \cong A(\mu)$ for some $\mu = \lambda(n) > \lambda$
for each $\mu$ in $\Lambda$, $\lambda(n) = \mu$ for only finitely many $n$
$\bigcup_n F_n(\lambda) = I(\lambda)$.

Examples
The module category of the $k$-algebra of upper triangular matrices over $k$.
This concept is named after the category of highest-weight modules of Lie algebras.
A finite-dimensional $k$-algebra is quasi-hereditary iff its module category is a highest-weight category. In particular, all module categories over semisimple and hereditary algebras are highest-weight categories.
A cellular algebra over a field is quasi-hereditary (and hence its module category a highest-weight category) iff its Cartan determinant is 1.

Notes

References

See also
Category O
Representation theory
Highest-weight category
Mathematics
373
78,700,451
https://en.wikipedia.org/wiki/C/1850%20Q1%20%28Bond%29
Bond's Comet, formally known as C/1850 Q1, is a non-periodic comet that was observed through telescopes throughout late 1850. It was the only comet discovered independently by the American astronomer George Phillips Bond.

Discovery and observations
The comet was discovered by George Phillips Bond as a "faint, telescopic object" in the constellation Camelopardalis, about 10° north of the star α Per, on 29 August 1850. It gradually brightened during the first weeks of September 1850, allowing further observations of the comet to be conducted by various other observatories around the globe. On September 18, Richard Carrington noted that the comet was best seen with a Fraunhofer refractor. The comet reached perihelion on October 19; however, it was difficult to observe at this time due to its low position in the twilight skies. It was last observed on 14 November 1850. Initial orbital calculations in the 19th century gave the comet a weakly bound elliptical orbit with an orbital period of 46,000 years. Recalculations in 2003 suggested a hyperbolic trajectory instead, based on 103 observations of the comet.

References

External links
Non-periodic comets
Hyperbolic comets
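For a bound orbit, the 46,000-year period from the 19th-century solution corresponds, via Kepler's third law in heliocentric units (P² = a³ with P in years and a in AU), to a semi-major axis of roughly 1,300 AU. A one-line check, included only as an illustration and not as part of the cited orbit computations:

    # Kepler's third law in heliocentric units: a [AU] = P [yr] ** (2/3)
    print(46_000 ** (2.0 / 3.0))   # about 1284 AU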
C/1850 Q1 (Bond)
Astronomy
242
45,555,423
https://en.wikipedia.org/wiki/Active%20thermography
Active thermography is an advanced nondestructive testing procedure which uses a thermographic measurement of a tested material's thermal response after external excitation. This principle can also be used for non-contact infrared non-destructive testing (IRNDT) of materials. The IRNDT method is based on excitation of a tested material by an external source, which brings some energy into the material. Halogen lamps, flash lamps, an ultrasonic horn or other sources can be used as the excitation source for IRNDT. The excitation causes a thermal response in the tested material, which is measured by an infrared camera. It is possible to obtain information about surface and sub-surface defects or material inhomogeneities by using a suitable combination of excitation source, excitation procedure, infrared camera and evaluation method. Modern thermographic systems with high-speed and high-sensitivity IR cameras extend the possibilities of the inspection method, and their modularity allows their use in research and development applications as well as in modern industrial production lines. Thermographic nondestructive testing can be carried out on a wide range of materials. It can be regarded as a method of infrared defectoscopy that is capable of revealing material imperfections such as cracks, defects, voids, cavities and other inhomogeneities. The testing can be performed on individual components in a laboratory or directly on technological facilities that are in service.

Theory
Active thermography uses an external source to excite the measured object, that is, to introduce energy into it. The excitation sources can be classified by principle: absorption of optical radiation or microwaves, electromagnetic induction, transformation of elastic waves (e.g. ultrasound), convection (e.g. hot air), or transformation of plastic deformation (the thermoplastic effect during mechanical loading). Various excitation sources can be used for active thermography and nondestructive testing, for example laser heating, flash lamps, halogen lamps, electrical heating, an ultrasonic horn, eddy currents, or microwaves. The measured object can be heated by an external source directly, e.g. by halogen lamps or hot air. Material inhomogeneities or defects then cause a distortion of the temperature field, which is detected as temperature differences on the material surface. Another possibility is to use thermophysical processes in the material, where mechanical or electrical energy is transformed into thermal energy at defects and inhomogeneities. This creates local temperature sources, which cause temperature differences detected on the object surface by infrared techniques, as in the case of ultrasound excitation.

Methods
Many methods have been developed for evaluating active thermography measurements in nondestructive testing. The choice of evaluation method depends on the application, the excitation source used, and the excitation type (pulse, periodic, continuous). In the simplest case, the response is evident directly from a thermogram; in most cases, however, it is necessary to use advanced analysis techniques. The most common methods include the Lock-In, Pulse and Transient (step thermography) evaluation techniques, with continuous excitation used in some cases:
Lock-In thermography (periodic excitation method). A modulated periodic source is used for the excitation. The phase and amplitude shift of the measured signal are evaluated, and the analysis can be done by various techniques. Halogen lamps, LED lamps, ultrasound, a laser or an electric current are suitable excitation sources. It has the advantage that it can be used on large surfaces, and it puts a low thermal energy into the part being inspected. The disadvantages are a longer measurement time and the dependence of detection capabilities on the geometrical orientation of defects (except for an indirect excitation such as ultrasound). The Lock-In method is suitable for testing components with a low thermal diffusivity, and it has many modifications for various specific applications (such as Lock-In Ref, Lock-In Online, etc.).
Pulse thermography (pulse method). A very short pulse – usually in the units of milliseconds – is used to excite the object, and the cooling process is then analyzed. A flash lamp is typically used as the excitation source. The advantages of this method are the speed of the analysis and the possibility of estimating the depth of defects. The disadvantages are a limited depth of analysis, a limited inspection area (with regard to the usable power of excitation sources), and the dependence of detection capabilities on the geometrical orientation of defects.
Transient thermography (step thermography, thermal wave method). In principle, the excitation and evaluation are similar to pulse thermography; however, the pulse length is much longer. Less powerful excitation sources are required compared to pulse thermography. It is therefore possible to analyze larger areas, and the measurement time is shorter than in the case of Lock-In thermography. As in pulse thermography, the sensitivity of the method is limited by the geometrical orientation of defects. Halogen lamps are a suitable excitation source for this type of evaluation.
Continual excitation. The simplest method, usable only in special applications.
A high-speed cooled infrared camera with high sensitivity is commonly used for IRNDT applications. However, an uncooled bolometric infrared camera can be used for specific applications, which can significantly reduce the acquisition costs of the measurement system. IRNDT systems are usually modular, meaning that various excitation sources can be combined with various infrared cameras and various evaluation methods depending on the application, tested material, measuring time demands, size of the tested area, etc. The modularity allows universal usage of the system for various industrial, scientific and research applications.

Applications
The IRNDT (infrared nondestructive testing) method is suitable for the detection and inspection of cracks, defects, cavities, voids and inhomogeneities in a material. It is also possible to use the method for the inspection of welded joints of metal and plastic parts, inspection of solar cells and solar panels, determination of the internal structure of a material, etc. The main advantage of the IRNDT method is its availability for the inspection of various materials in a wide range of industrial and research applications. IRNDT measurement is fast, nondestructive and non-contact. The restrictive condition for the IRNDT method is the inspection depth combined with the dimension and orientation of the defect, crack or inhomogeneity in the material.

Inspection of laser welded plastic parts
Laser welding of plastics is a progressive technology for joining materials with different optical properties. Classical methods for testing welding performance and weld-joint quality – such as microscopic analysis of a metallographic cut or X-ray tomography – are not suitable for routine measurements. Pulse IRNDT analysis can be successfully used for weld inspection in many cases. The images show an example of the inspection of plastic parts with a defective weld and with a correct weld: the gaps in the defective weld and the correct uninterrupted weld line are both clearly visible in the results of the IRNDT flash-pulse analysis.

Inspection of laser welded joints
Laser beam welding is a modern fusion-welding technology. It currently finds wide usage not only in scientific research but is also establishing itself in a variety of industries. Among its most frequent users is the automotive industry, whose steady pace of innovation enables fast implementation of advanced technologies in production. Laser welding significantly enhances engineering designs and thus brings a number of new products which previously could not be made by conventional methods. Laser welding can produce quality welds of different types, on both extremely thin and thick blanks. Weldable materials include common carbon steels, stainless steels, aluminum and its alloys, copper, titanium and, not least, special materials and their combinations. Quality control is an integral part of weldment production. Unlike conventional non-destructive test methods, IRNDT is used not only after the laser welding process but also during it. This makes it possible to decide during manufacture whether the weldment complies with the established quality criteria.

Solar cells testing
Active thermography, particularly lock-in thermography, is widely employed for inspecting solar cells. While effective, lock-in thermography often requires physical contact with the solar cell for excitation. However, techniques that involve periodic excitation using light sources allow for non-contact testing of electrode-free cells. Common methods such as Illuminated Lock-In Thermography (ILIT) and Open Circuit Voltage Illuminated Lock-In Thermography (VOC-ILIT) are used to investigate defects or issues like ohmic shunts, cracks, open or short circuits, and degradation in photovoltaic materials. Pulsed thermography, another method under investigation, provides a non-contact alternative with significantly reduced inspection times; however, it usually offers lower detectability than the ILIT method.

References
Materials science
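The phase and amplitude evaluation used in lock-in thermography amounts to demodulating each pixel's temperature history at the excitation frequency. The Python sketch below shows that per-pixel correlation on a synthetic stack of frames standing in for camera data; the frame rate, modulation frequency and noise level are assumed values chosen for the example.

    import numpy as np

    FPS, F_LOCKIN, N = 100.0, 1.0, 1000    # frame rate, excitation freq, frames
    t = np.arange(N) / FPS                 # 10 whole modulation periods

    # Synthetic 8x8-pixel frame stack with a modulated component plus noise.
    rng = np.random.default_rng(0)
    true_amp, true_phase = 0.5, 0.8
    frames = (true_amp * np.sin(2 * np.pi * F_LOCKIN * t + true_phase)[:, None, None]
              + rng.normal(0.0, 0.05, (N, 8, 8)))

    # Correlate with reference sine and cosine at the lock-in frequency.
    ref_s = np.sin(2 * np.pi * F_LOCKIN * t)
    ref_c = np.cos(2 * np.pi * F_LOCKIN * t)
    S = np.tensordot(ref_s, frames, axes=(0, 0)) * 2.0 / N
    C = np.tensordot(ref_c, frames, axes=(0, 0)) * 2.0 / N

    amplitude = np.hypot(S, C)             # per-pixel amplitude image
    phase = np.arctan2(C, S)               # per-pixel phase image (radians)
    print(amplitude.mean(), phase.mean())  # close to 0.5 and 0.8

Averaging over many whole modulation periods is what gives the lock-in method its noise rejection, at the cost of the longer measurement time noted above.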
Active thermography
Physics,Materials_science,Engineering
1,870
347,838
https://en.wikipedia.org/wiki/Nuclear%20medicine
Nuclear medicine (nuclear radiology, nucleology) is a medical specialty involving the application of radioactive substances in the diagnosis and treatment of disease. Nuclear imaging is, in a sense, radiology done inside out, because it records radiation emitted from within the body rather than radiation transmitted through the body from external sources such as X-ray generators. In addition, nuclear medicine scans differ from radiology in that the emphasis is not on imaging anatomy but on function; for this reason, nuclear medicine is called a physiological imaging modality. Single photon emission computed tomography (SPECT) and positron emission tomography (PET) scans are the two most common imaging modalities in nuclear medicine.

Diagnostic medical imaging

Diagnostic

In nuclear medicine imaging, radiopharmaceuticals are taken internally, for example through inhalation, intravenously, or orally. External detectors (gamma cameras) then capture and form images from the radiation emitted by the radiopharmaceuticals. This process is unlike a diagnostic X-ray, where external radiation is passed through the body to form an image. There are several techniques of diagnostic nuclear medicine. 2D: scintigraphy ("scint") is the use of internal radionuclides to create two-dimensional images. 3D: SPECT is a 3D tomographic technique that uses gamma-camera data from many projections and can be reconstructed in different planes, while positron emission tomography (PET) uses coincidence detection to image functional processes.

Nuclear medicine tests differ from most other imaging modalities in that they primarily show the physiological function of the system being investigated, as opposed to traditional anatomical imaging such as CT or MRI. Nuclear medicine imaging studies are generally more organ-, tissue- or disease-specific (e.g. lung scan, heart scan, bone scan, brain scan, tumor, infection, Parkinson's disease, etc.) than those in conventional radiology imaging, which focus on a particular section of the body (e.g. chest X-ray, abdomen/pelvis CT scan, head CT scan, etc.). In addition, there are nuclear medicine studies that allow imaging of the whole body based on certain cellular receptors or functions. Examples are whole-body PET or PET/CT scans, gallium scans, indium white blood cell scans, MIBG scans and octreotide scans.

While the ability of nuclear medicine to image disease processes from differences in metabolism is unsurpassed, it is not unique. Certain techniques such as fMRI image tissues (particularly cerebral tissues) by blood flow, and thus show metabolism. Contrast-enhancement techniques in both CT and MRI also show regions of tissue that handle pharmaceuticals differently, for instance due to an inflammatory process.

Diagnostic tests in nuclear medicine exploit the way the body handles substances differently when disease or pathology is present. The radionuclide introduced into the body is often chemically bound to a complex that acts characteristically within the body; this is commonly known as a tracer. In the presence of disease, a tracer will often be distributed around the body and/or processed differently. For example, the ligand methylene diphosphonate (MDP) is preferentially taken up by bone. By chemically attaching technetium-99m to MDP, radioactivity can be transported and attached to bone via the hydroxyapatite for imaging. Any increased physiological function, such as that due to a fracture in the bone, will usually mean an increased concentration of the tracer.
This often results in the appearance of a "hot spot", a focal increase in radiotracer accumulation, or a general increase in accumulation throughout the physiological system. Some disease processes result in the exclusion of a tracer, producing a "cold spot". Many tracer complexes have been developed to image or treat many different organs, glands, and physiological processes.

Hybrid scanning techniques

In some centers, nuclear medicine scans can be superimposed, using software or hybrid cameras, on images from modalities such as CT or MRI to highlight the part of the body in which the radiopharmaceutical is concentrated. This practice is often referred to as image fusion or co-registration, for example SPECT/CT and PET/CT. The fusion imaging technique provides information about anatomy and function that would otherwise be unavailable or would require a more invasive procedure or surgery.

Practical concerns in nuclear imaging

Although the risks of low-level radiation exposure are not well understood, a cautious approach has been universally adopted: all human radiation exposure should be kept As Low As Reasonably Practicable (ALARP). (Originally this was known as As Low As Reasonably Achievable, ALARA, but modern drafting of the legislation places more emphasis on "reasonably" and less on "achievable".) Working under the ALARP principle, before a patient is exposed for a nuclear medicine examination, the benefit of the examination must be identified, taking into account the particular circumstances of the patient where appropriate. For instance, if a patient is unlikely to tolerate a sufficient amount of the procedure to achieve a diagnosis, it would be inappropriate to proceed with injecting the radioactive tracer.

When the benefit does justify the procedure, the radiation exposure (the amount of radiation given to the patient) should also be kept ALARP. This means that the images produced in nuclear medicine should never be better than required for confident diagnosis: giving larger radiation exposures can reduce the noise in an image and make it more photographically appealing, but if the clinical question can be answered without this level of detail, the extra exposure is inappropriate. As a result, the radiation dose from nuclear medicine imaging varies greatly depending on the type of study. The effective radiation dose can be lower than, comparable to, or far exceed the general day-to-day environmental annual background radiation dose, and likewise can be less than, in the range of, or higher than the radiation dose from an abdomen/pelvis CT scan.

Some nuclear medicine procedures require special patient preparation before the study to obtain the most accurate result. Pre-imaging preparations may include dietary preparation or the withholding of certain medications. Patients are encouraged to consult with the nuclear medicine department prior to a scan.

Analysis

The result of the nuclear medicine imaging process is a dataset comprising one or more images. In multi-image datasets the array of images may represent a time sequence (i.e. cine or movie), often called a "dynamic" dataset, a cardiac-gated time sequence, or a spatial sequence where the gamma-camera is moved relative to the patient.
SPECT (single photon emission computed tomography) is the process by which images acquired from a rotating gamma-camera are reconstructed to produce an image of a "slice" through the patient at a particular position. A collection of parallel slices forms a slice-stack, a three-dimensional representation of the distribution of the radionuclide in the patient. The nuclear medicine computer may require millions of lines of source code to provide quantitative analysis packages for each of the specific imaging techniques available in nuclear medicine. Time sequences can be further analysed using kinetic models such as multi-compartment models or a Patlak plot; a minimal numerical sketch of the Patlak analysis follows below.
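For a tracer with irreversible uptake, the Patlak analysis reduces to a linear regression: plotting the tissue-to-plasma concentration ratio against the time-integral of the plasma curve divided by the plasma concentration yields the influx constant K_i as the slope. The following sketch uses entirely synthetic curves and illustrative names; it is an outline of the idea, not a clinical implementation.

```python
import numpy as np

def patlak_slope(t, c_plasma, c_tissue, t_start=10.0):
    """Estimate the Patlak influx constant K_i by linear regression.

    t        : time points (min)
    c_plasma : plasma (input) tracer concentration at each time point
    c_tissue : tissue tracer concentration at each time point
    t_start  : fit only the late, linear portion of the plot (min)
    """
    # Cumulative integral of the plasma curve (trapezoidal rule).
    integral = np.concatenate(
        ([0.0], np.cumsum(np.diff(t) * (c_plasma[1:] + c_plasma[:-1]) / 2)))
    x = integral / c_plasma   # "Patlak time"
    y = c_tissue / c_plasma   # normalized tissue activity
    late = t >= t_start       # the plot becomes linear only at late times
    k_i, v_0 = np.polyfit(x[late], y[late], 1)
    return k_i, v_0

# Synthetic example: exponential plasma input, irreversible uptake.
t = np.linspace(0.1, 60, 200)                    # minutes
c_p = np.exp(-0.1 * t) + 0.1                     # toy plasma curve
cum = np.concatenate(([0.0], np.cumsum(np.diff(t) * (c_p[1:] + c_p[:-1]) / 2)))
c_t = 0.05 * cum + 0.3 * c_p                     # Patlak model: K_i*integral + V0*c_p
ki, v0 = patlak_slope(t, c_p, c_t)
print(f"estimated K_i = {ki:.3f} (true 0.05)")
```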
Interventional nuclear medicine

Radionuclide therapy can be used to treat conditions such as hyperthyroidism, thyroid cancer, skin cancer and blood disorders. In nuclear medicine therapy, the radiation treatment dose is administered internally (e.g. by the intravenous or oral route) or externally, directly over the area to be treated, in the form of a compound (e.g. in the case of skin cancer). The radiopharmaceuticals used in nuclear medicine therapy emit ionizing radiation that travels only a short distance, thereby minimizing unwanted side effects and damage to uninvolved organs or nearby structures. Most nuclear medicine therapies can be performed as outpatient procedures, since there are few side effects from the treatment and the radiation exposure to the general public can be kept within a safe limit. In some centers the nuclear medicine department may also use implanted capsules of isotopes (brachytherapy) to treat cancer.

History

The history of nuclear medicine contains contributions from scientists across different disciplines in physics, chemistry, engineering, and medicine. The multidisciplinary nature of nuclear medicine makes it difficult for medical historians to determine its birthdate, which can probably best be placed between the discovery of artificial radioactivity in 1934 and the production of radionuclides for medicine-related use by Oak Ridge National Laboratory in 1946. The origins of the underlying medical idea date back as far as the mid-1920s in Freiburg, Germany, when George de Hevesy experimented with radionuclides administered to rats, thus displaying the metabolic pathways of these substances and establishing the tracer principle. Possibly the genesis of the field took place in 1936, when John Lawrence, known as "the father of nuclear medicine", took a leave of absence from his faculty position at Yale Medical School to visit his brother Ernest Lawrence at his new radiation laboratory (now known as the Lawrence Berkeley National Laboratory) in Berkeley, California. Later on, John Lawrence made the first application of an artificial radionuclide in patients when he used phosphorus-32 to treat leukemia. Many historians consider the discovery of artificially produced radionuclides by Frédéric Joliot-Curie and Irène Joliot-Curie in 1934 the most significant milestone in nuclear medicine. In February 1934, they reported the first artificial production of radioactive material in the journal Nature, after discovering radioactivity in aluminum foil that had been irradiated with a polonium preparation. Their work built upon earlier discoveries by Wilhelm Konrad Roentgen (X-rays), Henri Becquerel (radioactive uranium salts), and Marie Curie (mother of Irène Joliot-Curie), who studied radioactive thorium and polonium and coined the term "radioactivity". Taro Takemi studied the application of nuclear physics to medicine in the 1930s. The history of nuclear medicine would not be complete without mentioning these early pioneers.

Nuclear medicine gained public recognition as a potential specialty on May 11, 1946, when an article in the Journal of the American Medical Association (JAMA) by Dr. Saul Hertz of Massachusetts General Hospital and Dr. Arthur Roberts of the Massachusetts Institute of Technology described the successful treatment of Graves' disease with radioactive iodine (RAI). Additionally, Sam Seidlin brought further development to the field, describing the successful treatment of a patient with thyroid cancer metastases using radioiodine (I-131). These articles are considered by many historians the most important ever published in nuclear medicine. Although the earliest use of I-131 was devoted to therapy of thyroid cancer, its use was later expanded to include imaging of the thyroid gland, quantification of thyroid function, and therapy for hyperthyroidism. Among the many radionuclides discovered for medical use, none has been as important as technetium-99m. It was first discovered in 1937 by C. Perrier and E. Segrè as an artificial element filling space number 43 in the periodic table. The development of a generator system to produce technetium-99m in the 1960s made it practical for medical use; today, technetium-99m is the most utilized radionuclide in nuclear medicine and is employed in a wide variety of imaging studies.

Widespread clinical use of nuclear medicine began in the early 1950s, as knowledge expanded about radionuclides, the detection of radioactivity, and the use of certain radionuclides to trace biochemical processes. Pioneering work by Benedict Cassen in developing the first rectilinear scanner and Hal O. Anger's scintillation camera (the Anger camera) broadened the young discipline into a full-fledged medical imaging specialty. By the early 1960s, in southern Scandinavia, Niels A. Lassen, David H. Ingvar, and Erik Skinhøj developed techniques that provided the first blood-flow maps of the brain, initially involving xenon-133 inhalation; an intra-arterial equivalent was developed soon after, enabling measurement of the local distribution of cerebral activity in patients with neuropsychiatric disorders such as schizophrenia. Later versions would have 254 scintillators, so that a two-dimensional image could be produced on a color monitor, allowing the construction of images reflecting brain activation from speaking, reading, visual or auditory perception, and voluntary movement. The technique was also used to investigate, for example, imagined sequential movements, mental calculation, and mental spatial navigation.

By the 1970s most organs of the body could be visualized using nuclear medicine procedures. In 1971, the American Medical Association officially recognized nuclear medicine as a medical specialty. In 1972, the American Board of Nuclear Medicine was established, and in 1974, the American Osteopathic Board of Nuclear Medicine followed, cementing nuclear medicine as a stand-alone medical specialty. In the 1980s, radiopharmaceuticals were designed for use in the diagnosis of heart disease. The development of single photon emission computed tomography (SPECT) around the same time led to three-dimensional reconstruction of the heart and the establishment of the field of nuclear cardiology. More recent developments in nuclear medicine include the invention of the first positron emission tomography (PET) scanner.
The concept of emission and transmission tomography, later developed into single photon emission computed tomography (SPECT), was introduced by David E. Kuhl and Roy Edwards in the late 1950s. Their work led to the design and construction of several tomographic instruments at the University of Pennsylvania, and tomographic imaging techniques were further developed at the Washington University School of Medicine. These innovations led to fusion imaging with SPECT and CT by Bruce Hasegawa of the University of California, San Francisco (UCSF), and to the first PET/CT prototype by D. W. Townsend of the University of Pittsburgh in 1998. PET and PET/CT imaging experienced slower growth in its early years owing to the cost of the modality and the requirement for an on-site or nearby cyclotron. However, an administrative decision to approve medical reimbursement of limited PET and PET/CT applications in oncology led to phenomenal growth and widespread acceptance, which was also facilitated by the establishment of 18F-labelled tracers for standard procedures, allowing work at sites not equipped with a cyclotron. PET/CT imaging is now an integral part of oncology for diagnosis, staging and treatment monitoring. A fully integrated MRI/PET scanner has been on the market since early 2011.

Sources of radionuclides

99mTc is normally supplied to hospitals through a radionuclide generator containing the parent radionuclide molybdenum-99. 99Mo is typically obtained as a fission product of 235U in nuclear reactors, although global supply shortages have led to the exploration of other production methods. About a third of the world's supply, and most of Europe's supply, of medical isotopes is produced at the Petten nuclear reactor in the Netherlands. Another third of the world's supply, and most of North America's supply, was produced at the Chalk River Laboratories in Chalk River, Ontario, Canada, until its permanent shutdown in 2018. The most commonly used radioisotope in PET, 18F, is not produced in a nuclear reactor but in a circular accelerator called a cyclotron. The cyclotron accelerates protons to bombard the stable heavy isotope of oxygen, 18O, which constitutes about 0.20% of ordinary oxygen (mostly oxygen-16), from which it is extracted. The 18F is then typically used to make FDG.

A typical nuclear medicine study involves administration of a radionuclide into the body by intravenous injection in liquid or aggregate form, ingestion combined with food, inhalation as a gas or aerosol, or, rarely, injection of a radionuclide that has undergone micro-encapsulation. Some studies require the labeling of a patient's own blood cells with a radionuclide (leukocyte scintigraphy and red blood cell scintigraphy). Most diagnostic radionuclides emit gamma rays either directly from their decay or indirectly through electron–positron annihilation, while the cell-damaging properties of beta particles are used in therapeutic applications. Refined radionuclides for use in nuclear medicine are derived from fission or neutron-activation processes in nuclear reactors (which produce radionuclides with longer half-lives), from cyclotrons (which produce radionuclides with shorter half-lives), or from natural decay processes in dedicated generators, i.e. molybdenum/technetium or strontium/rubidium. A sketch of the decay mathematics behind such a generator follows below.
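As an illustration of how a generator works, the 99mTc activity that builds up after a 99Mo generator is eluted can be sketched with the standard Bateman relation for a two-member decay chain. The half-lives and branching fraction below are approximate standard physical values supplied for the example, not figures from the text.

```python
import numpy as np

# Approximate physical constants (assumptions, not from the article text).
T_HALF_MO99 = 65.9    # hours, molybdenum-99
T_HALF_TC99M = 6.0    # hours, technetium-99m
BRANCHING = 0.88      # approx. fraction of 99Mo decays that populate 99mTc

def tc99m_activity(t_hours, a_mo_0=1.0):
    """Bateman equation: 99mTc activity vs time since the last elution.

    t_hours : time since the generator was eluted (hours)
    a_mo_0  : 99Mo activity at t = 0 (arbitrary units, e.g. GBq)
    """
    lam_p = np.log(2) / T_HALF_MO99    # parent decay constant
    lam_d = np.log(2) / T_HALF_TC99M   # daughter decay constant
    return (BRANCHING * a_mo_0 * lam_d / (lam_d - lam_p)
            * (np.exp(-lam_p * t_hours) - np.exp(-lam_d * t_hours)))

# The daughter activity peaks roughly 23 hours after elution,
# which is why generators are typically "milked" about once a day.
t = np.linspace(0, 48, 481)
a = tc99m_activity(t)
print(f"peak at ~{t[np.argmax(a)]:.0f} h, activity {a.max():.2f} (rel. units)")
```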
The most commonly used intravenous radionuclides are technetium-99m, iodine-123, iodine-131, thallium-201, gallium-67, fluorine-18 fluorodeoxyglucose, and indium-111 labeled leukocytes. The most commonly used gaseous/aerosol radionuclides are xenon-133, krypton-81m and (aerosolised) technetium-99m.

Policies and procedures

Radiation dose

A patient undergoing a nuclear medicine procedure receives a radiation dose. Under present international guidelines it is assumed that any radiation dose, however small, presents a risk. The radiation dose delivered to a patient in a nuclear medicine investigation, though unproven, is generally accepted to present a very small risk of inducing cancer. In this respect it is similar to the risk from X-ray investigations, except that the dose is delivered internally rather than from an external source such as an X-ray machine, and dosage amounts are typically significantly higher than those of X-rays.

The radiation dose from a nuclear medicine investigation is expressed as an effective dose with units of sieverts (usually given in millisieverts, mSv). The effective dose resulting from an investigation is influenced by the amount of radioactivity administered in megabecquerels (MBq), the physical properties of the radiopharmaceutical used, its distribution in the body and its rate of clearance from the body. Effective doses can range from 6 μSv (0.006 mSv) for a 3 MBq chromium-51 EDTA measurement of glomerular filtration rate to 11.2 mSv (11,200 μSv) for an 80 MBq thallium-201 myocardial imaging procedure. The common bone scan with 600 MBq of technetium-99m MDP has an effective dose of approximately 2.9 mSv (2,900 μSv). A worked version of this arithmetic is sketched below.

Formerly, the units of measurement were the curie (Ci), equal to 3.7 × 10¹⁰ Bq and roughly the activity of 1.0 gram of radium (Ra-226); the rad (radiation absorbed dose), now replaced by the gray; and the rem (röntgen equivalent man), now replaced by the sievert. The rad and rem are essentially equivalent for almost all nuclear medicine procedures; only alpha radiation produces a significantly higher rem or Sv value, due to its much higher relative biological effectiveness (RBE). Alpha emitters are nowadays rarely used in nuclear medicine, but were used extensively before the advent of reactor- and accelerator-produced radionuclides. The concepts involved in radiation exposure to humans are covered by the field of Health Physics; the development and practice of safe and effective nuclear medicine techniques is a key focus of Medical Physics.
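The effective-dose figures quoted above amount to multiplying the administered activity by a radiopharmaceutical-specific dose coefficient. The following minimal sketch back-calculates the coefficients from the examples in this section; they are illustrative only, since authoritative values come from reference publications such as those of the ICRP.

```python
# Effective dose = administered activity x dose coefficient.
# Coefficients below are back-calculated from the examples in the text
# (illustrative only; real values come from ICRP reference tables).
DOSE_COEFF_MSV_PER_MBQ = {
    "Cr-51 EDTA (GFR)":    0.006 / 3,   # 6 uSv from 3 MBq
    "Tc-99m MDP (bone)":   2.9 / 600,   # 2.9 mSv from 600 MBq
    "Tl-201 (myocardial)": 11.2 / 80,   # 11.2 mSv from 80 MBq
}

def effective_dose_msv(procedure: str, activity_mbq: float) -> float:
    """Estimated effective dose in millisieverts for an administered activity."""
    return DOSE_COEFF_MSV_PER_MBQ[procedure] * activity_mbq

# A 600 MBq bone scan reproduces the ~2.9 mSv quoted in the text.
print(f"{effective_dose_msv('Tc-99m MDP (bone)', 600):.1f} mSv")
```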
Regulatory frameworks and guidelines

Countries around the world maintain regulatory frameworks responsible for the management and use of radionuclides in medical settings. In the US, for example, the Nuclear Regulatory Commission (NRC) and the Food and Drug Administration (FDA) have guidelines in place for hospitals to follow. Where radioactive materials are not involved, as with X-rays, practices are not regulated by the NRC and are instead regulated by the individual states. International organizations such as the International Atomic Energy Agency (IAEA) regularly publish articles and guidelines on best practices in nuclear medicine, as well as reports on emerging technologies. Other factors considered in nuclear medicine include a patient's medical history and post-treatment management; groups such as the International Commission on Radiological Protection have published information on how to manage the release from hospital of patients with unsealed radionuclides.

See also Human subject research List of Nuclear Medicine Societies Nuclear medicine physician Nuclear pharmacy Nuclear technology Radiographer References Further reading External links Solving the Medical Isotope Crisis Hearing before the Subcommittee on Energy and Environment of the Committee on Energy and Commerce, House of Representatives, One Hundred Eleventh Congress, First Session, September 9, 2009 Radiology Medicinal radiochemistry
Nuclear medicine
Chemistry
4,364
7,214,368
https://en.wikipedia.org/wiki/Energy%20%26%20Environment
Energy & Environment is an academic journal "covering the direct and indirect environmental impacts of energy acquisition, transport, production and use". Under its editor-in-chief from 1998 to 2017, Sonja Boehmer-Christiansen, it was known for easygoing peer review and for publishing climate change denial papers. Yiu Fai Tsang became its editor-in-chief in May 2017.

Abstracting and indexing

The journal is abstracted and indexed in the Social Sciences Citation Index, Scopus, EBSCO databases, Current Contents/Social & Behavioral Sciences, and Compendex. According to the Journal Citation Reports, the journal had a 2021 impact factor of 2.945, ranking it 65th out of 125 journals in the category "Environmental Studies".

History

The journal was first published in 1989; David Everest (Department of the Environment, United Kingdom) was its founding editor. Following his death in 1998, Boehmer-Christiansen became the journal's editor. She and several members of the journal's editorial advisory board had previously been associated with the "Energy and Environment Groups" at the Science and Technology Policy Unit (University of Sussex), with John Surrey. Its publisher, Multi-Science, ceased trading on 31 December 2015 and the journal was transferred to SAGE. In May 2017, Yiu Fai Tsang became the journal's editor.

Climate change denial and criticism

The journal has been regarded as "a small journal that caters to climate change denialists" and has played an important role in attacks on climate science and scientists, for example Michael E. Mann. In 2011, a number of scientists, such as Gavin Schmidt, Roger A. Pielke Jr., Stephan Lewandowsky and Michael Ashley, criticised E&E for low standards of peer review and little impact. In addition, Ralph Keeling criticized a paper in the journal which claimed that CO2 levels were above 400 ppm in 1825, 1857 and 1942, writing in a letter to the editor, "Is it really the intent of E&E to provide a forum for laundering pseudo-science?" A 2005 article in Environmental Science & Technology stated that the journal is "obscure" and that "scientific claims made in Energy & Environment have little credibility among scientists." Boehmer-Christiansen acknowledged that the journal's "impact rating has remained too low for many ambitious young researchers to use it", but blamed this on "the negative attitudes of the Intergovernmental Panel on Climate Change (IPCC)/Climatic Research Unit people." According to Hans von Storch, the journal "tries to give people who do not have a platform a platform," which "is then attractive for skeptic papers. They know they can come through and that interested people make sure the paper enters the political realm." When asked about the publication in the spring of 2003 of a revised version of the paper at the center of the Soon and Baliunas controversy, Boehmer-Christiansen said, "I'm following my political agenda -- a bit, anyway. But isn't that the right of the editor?" The journal has also been accused of publishing papers that could not have passed any reasonable peer review process, such as one in 2011 claiming that the Sun is made of iron.

See also Environmental engineering science References External links Energy and fuel journals English-language journals Environmental social science journals Academic journals established in 1989 Climate change denial 8 times per year journals
Energy & Environment
Environmental_science
703
37,974,650
https://en.wikipedia.org/wiki/Stenella%20subsanguinea
Stenella subsanguinea is a species of anamorphic fungus. Description Belonging to the genus Stenella, this species is a Cercospora-like fungus with a superficial secondary mycelium, solitary conidiophores, conidiogenous cells with thickened and darkened conidiogenous loci, and catenate or single conidia with dark, slightly thickened hila. References Further reading External links subsanguinea Fungi described in 1993 Fungus species
Stenella subsanguinea
Biology
100
76,563,227
https://en.wikipedia.org/wiki/Tsong%20Yueh%20Chen
Tsong Yueh Chen () is an Australian academic at the Swinburne University of Technology, where he is a professor and researcher in program testing and debugging. He is ranked internationally as the most prolific author in metamorphic testing. Chen received his BSc and MPhil from The University of Hong Kong, his MSc and DIC from Imperial College London, and his PhD from The University of Melbourne under the supervision of Jean-Louis Lassez. He has an h-index of 62. In 2021, Chen et al. were selected as the Grand Champion of the Most Influential Paper Award by the Journal of Systems and Software for their 2010 paper. In January 2024, Chen received the ACM SIGSOFT Outstanding Research Award 2024 "for contributions to software testing through the invention and development of metamorphic testing"; this award is presented to individuals who have made significant and lasting research contributions to the theory or practice of software engineering. In recognition, Chen was invited to give a keynote speech at the International Conference on Software Engineering in April 2024, and was interviewed by ACM SIGSOFT Software Engineering Notes after his presentation. In December 2024, IEEE announced the election of Tsong Yueh Chen as an IEEE Fellow in the class of 2025 "for contributions to software testing through the invention of metamorphic testing and adaptive random testing"; only three software engineers were elected as IEEE Fellows that year. Selected publications References External links Swinburne University of Technology biography Year of birth missing (living people) Living people Computer scientists Software engineering researchers Software testing people Alumni of Imperial College London Academic staff of Swinburne University of Technology
Tsong Yueh Chen
Technology
342
63,360,436
https://en.wikipedia.org/wiki/Virtual%20mailbox
A virtual mailbox is a service that receives physical mail on behalf of the addressee and usually scans the outside of the mail; some providers also scan the inside contents. These scans may be photos, PDFs, or text-searchable PDFs. Reasons for using a virtual mailbox include accepting mail from couriers, accessing mail while traveling, and keeping a home address private.

Virtual mailbox and P.O. Boxes

Virtual mailboxes differ from P.O. boxes, which some delivery services will not deliver to, because they tend to offer a real street address and additional services. Services offered may include mail forwarding, scanning, check depositing, and recycling. Almost every popular virtual mailbox service provides customers with both a web version and a mobile application.

Legality

In the United States, virtual mailbox providers are classified as commercial mail receiving agencies (CMRA). Commercial mail receiving agencies are allowed to receive, access, and open third-party mail only after the addressee completes USPS Form 1583 and has it notarized. Once this form is complete, a virtual mailbox address can be used as the official business mailing address in most states.

See also Digital nomad Poste restante Mail forwarding References Postal infrastructure Postal systems
Virtual mailbox
Technology
286
42,979,524
https://en.wikipedia.org/wiki/Intertrial%20priming
In cognitive psychology, intertrial priming is an accumulation of the priming effect over multiple trials, where "priming" is the effect of exposure to one stimulus on subsequently presented stimuli. Intertrial priming occurs when a target feature (the characteristic that distinguishes targets from non-targets) is repeated from one trial to the next, and it typically results in speeded response times to the target. A target is the stimulus participants are required to search for. For example, intertrial priming occurs when the task is to respond to either a red or a green target, and the response time to a red target is faster if the preceding trial also had a red target.

Top-down and bottom-up attention

Visual attention is influenced by top-down and bottom-up attentional processes. Top-down attention is allocated based on an observer's current knowledge about the stimuli. Participants in an experiment might be instructed to search for, and respond to, a target object presented in a different colour than the other objects presented simultaneously; top-down knowledge of the dimension of the target (i.e. colour) can speed response times to target identification. Bottom-up attention is involuntarily and automatically directed towards salient features in the environment, such as a bright colour among dull colours. In experimental settings, the more a stimulus differs from the other stimuli in a visual display, the more salient it is. Bottom-up attention is typically not guided by observers' goals or knowledge, only by the physical properties of the stimuli. Many studies employ methods involving intertrial priming to assess the contribution of top-down versus bottom-up processes in guiding attention in visual search tasks.

Integrative framework

There are factors in visual search tasks that the top-down versus bottom-up dichotomy does not take into consideration: not all selection biases can be explained by physical saliency (bottom-up) or observer goals (top-down). Studies have found that stimuli that are equally salient but associated with reward can draw participants' attention even when this does not match their selection goals. An alternative framework has therefore been proposed in which past selection history, current goals and physical salience are integrated in a single model of attentional control.

Measuring the effects of intertrial priming

Intertrial priming is an important aspect to consider in designing an experiment, as it can influence the results if it is not controlled. Intertrial priming is often measured using a visual search task, in which participants search for, and respond to, a target amongst a group of non-target items. Performance is generally measured by recording participants' reaction times to identify the target and comparing these times across trials; a minimal sketch of this comparison is given below. Different trial designs and visual search tasks can be employed to measure intertrial priming.
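As a concrete illustration of the comparison just described, the following sketch (with invented trial data and illustrative variable names, not data from any actual study) computes mean reaction times separately for trials in which the target feature repeats from the previous trial and for trials in which it switches; a repetition benefit, i.e. faster mean RT on repeat trials, is the signature of intertrial priming.

```python
# Each trial: (target colour, reaction time in ms). Data are invented.
trials = [
    ("red", 512), ("red", 470), ("green", 560), ("green", 505),
    ("red", 548), ("red", 489), ("red", 476), ("green", 571),
]

repeat_rts, switch_rts = [], []
for prev, curr in zip(trials, trials[1:]):
    # Classify each trial by whether the target feature repeated.
    (repeat_rts if curr[0] == prev[0] else switch_rts).append(curr[1])

mean_repeat = sum(repeat_rts) / len(repeat_rts)
mean_switch = sum(switch_rts) / len(switch_rts)
# Priming effect: positive values mean repetitions were responded to faster.
print(f"repeat {mean_repeat:.0f} ms, switch {mean_switch:.0f} ms, "
      f"priming effect {mean_switch - mean_repeat:.0f} ms")
```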
Blocked and mixed trials

Studies often compare blocked and mixed visual search trials to measure intertrial priming. Blocked trials are multiple, successively presented visual search trials that include the same target; mixed trials are a randomised series of trials, each consisting of different targets. For example, a blocked condition may involve searching for a green circle in trial 1 and in multiple successive trials, whereas a mixed condition may involve searching for a green circle in trial 1 but a red circle in subsequent trials. Blocking trials can control for the effects of variability in targets. When a target with the same defining feature is repeated across trials (blocked conditions), participants' reaction times are faster than when the target changes across trials (mixed conditions). This repetition effect is also cumulative: as the number of target repetitions increases, up to a certain point, participants' reaction times become faster each time they are exposed to the same target. The disparity in intertrial priming between mixed and blocked trials results in faster reaction times in the blocked trials; this may be because participants in blocked trials respond to targets that differ in only one dimension from non-targets.

Cueing

A cue is a stimulus presented before a trial to inform the participant of an upcoming target feature. For example, a blue circle may be shown before a trial to signify that a blue circle will be the target in that trial. Target-relevant cues may be presented to participants to decrease their reaction times to the target in the display. These cues may be valid or invalid: valid cues correctly predict the target stimulus, invalid cues do not. For example, if the target in an upcoming trial is a blue circle, a blue circle presented as a cue would be valid, but a red circle would be invalid, as it does not correctly predict the blue-circle target. Reaction times to validly cued targets are typically faster than to invalidly cued targets, a phenomenon known as a cueing effect.

Cue validity effect

Even when a valid cue has a low probability of correctly predicting the target, there can still be a reliable cueing effect, with faster reaction times after valid than invalid cues. This suggests that the cueing effect is not affected by the predictive value of the cue and may not be due to top-down control: if top-down control were involved in response selection, invalid trials should produce faster responses, because participants are aware that the likelihood of being presented with a valid trial is very low.

Types of visual search

Pop-out search

Pop-out search tasks include a target that differs in one dimension from a group of homogeneous non-target items. A dimension is a categorical feature of a stimulus, such as its colour (e.g. a red target among green non-targets), its shape (a square target among circle non-targets) or its orientation (a vertically oriented target among horizontal non-targets). Response times in pop-out searches are generally faster when the colours of targets and distractors remain the same throughout trials, and slower when these colours are switched between trials.

Conjunctive search

A conjunctive search involves non-target stimuli that have more than one dimension in common with the target stimulus. For example, when the target in a conjunctive search is a green circle, the non-targets (distractors) could be red circles and green squares. The target shares one dimension with one set of non-targets (shape) and another dimension with the other set (colour).
If target and distractor features are the same over consecutive trials, response times are faster than when these features change.

Why it occurs (major theories)

Dimension-weighting account

The "dimension-weighting account" of visual selection states that there is a limit to the attentional weight that can be allocated to a particular dimension of an object at any one time. Dimensions that are important to an observer (such as the target dimension in a visual search) are allocated more attentional weight, resulting in faster detection times. If the target dimension is known in advance, this can increase the saliency signal of the target; if it is unknown, attentional weight has to be shifted to the target dimension. When the target dimension remains the same across trials, no shift of attentional weight is required, resulting in faster reaction times (intertrial facilitation).

Priming of pop-out hypothesis

The priming-of-pop-out hypothesis suggests that performance in a visual search task involving a pop-out target can be affected by the search history of specific target features in previous trials. If target and distractor features are repeated in subsequent trials, reaction times will be faster than if these features change across trials. The hypothesis proposes that the repetition of a target used in a preceding trial makes its pop-out features more salient to the observer and therefore increases the likelihood that the observer will attend to it.

Episodic retrieval hypothesis

The episodic retrieval model attributes the reduced response times in intertrial priming to the observer's retrieval of episodic memories relevant to the task. The hypothesis states that visual search is composed of three successive processing stages: searching for a target, deciding whether the chosen item is the target of interest, and selecting and responding to it. When a target presented in a previous trial is presented again in the current trial, processing is accelerated at the decision stage, because after identification the target is verified against the previous target stored in episodic memory.

Perceptual grouping of distractors account

Many theories focus on the repetition of target features as the dominant explanation for the repetition effects seen in intertrial priming. However, if target features are the same over consecutive trials but distractor features change, response times are not as fast as when both target and distractor features are kept constant. This suggests that intertrial priming may be driven mainly by distractor-feature repetition, with target-feature repetition contributing only slightly. Such distractor-based priming may be due to faster perceptual grouping of the distractors across trials, which allows the presence or absence of a target to be distinguished more quickly. However, the repetition of target-defining features cannot be excluded as a contributor to the priming effect found in conjunctive searches.

See also Priming Thin-slicing References Cognitive psychology Attention Perception
Intertrial priming
Biology
1,944