Dataset columns: id (int64, 580 to 79M); url (string, lengths 31 to 175); text (string, lengths 9 to 245k); source (string, lengths 1 to 109); categories (string, 160 classes); token_count (int64, 3 to 51.8k)
14,641,222
https://en.wikipedia.org/wiki/Longitudinal%20stability
In flight dynamics, longitudinal stability is the stability of an aircraft in the longitudinal, or pitching, plane. This characteristic is important in determining whether an aircraft pilot will be able to control the aircraft in the pitching plane without requiring excessive attention or excessive strength. The longitudinal stability of an aircraft, also called pitch stability, refers to the aircraft's stability in its plane of symmetry about the lateral axis (the axis along the wingspan). It is an important aspect of the handling qualities of the aircraft, and one of the main factors determining the ease with which the pilot is able to maintain level flight. Longitudinal static stability refers to the aircraft's initial tendency following a disturbance in pitch. Dynamic stability refers to whether oscillations tend to increase, decrease or stay constant. Static stability If an aircraft is longitudinally statically stable, a small increase in angle of attack will create a nose-down pitching moment on the aircraft, so that the angle of attack decreases. Similarly, a small decrease in angle of attack will create a nose-up pitching moment so that the angle of attack increases. This means the aircraft will self-correct longitudinal (pitch) disturbances without pilot input. If an aircraft is longitudinally statically unstable, a small increase in angle of attack will create a nose-up pitching moment on the aircraft, promoting a further increase in the angle of attack. If the aircraft has zero longitudinal static stability it is said to be statically neutral, and the position of its center of gravity is called the neutral point. The longitudinal static stability of an aircraft depends on the location of its center of gravity relative to the neutral point. As the center of gravity moves increasingly forward, the pitching moment arm is increased, increasing stability. The distance between the center of gravity and the neutral point is defined as "static margin". It is usually given as a percentage of the mean aerodynamic chord. If the center of gravity is forward of the neutral point, the static margin is positive. If the center of gravity is aft of the neutral point, the static margin is negative. The greater the static margin, the more stable the aircraft will be. Most conventional aircraft have positive longitudinal stability, provided the aircraft's center of gravity lies within the approved range. The operating handbook for every airplane specifies a range over which the center of gravity is permitted to move. If the center of gravity is too far aft, the aircraft will be unstable. If it is too far forward, the aircraft will be excessively stable, which makes the aircraft "stiff" in pitch and hard for the pilot to bring the nose up for landing. Required control forces will be greater. Some aircraft have low stability to reduce trim drag. This has the benefit of reducing fuel consumption. Some aerobatic and fighter aircraft may have low or even negative stability to provide high manoeuvrability. Low or negative stability is called relaxed stability. An aircraft with low or negative static stability will typically have fly-by-wire controls with computer augmentation to assist the pilot. Otherwise, an aircraft with negative longitudinal stability will be more difficult to fly. The pilot will need to devote more effort, making more frequent and larger inputs to the elevator control, in an attempt to maintain the desired pitch attitude. 
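The relationship between center-of-gravity position, neutral point, and static margin described above can be illustrated with a short sketch; the variable names and the 25%/35% MAC example positions are illustrative assumptions, not values from the article.

```python
def static_margin(cg_position, neutral_point, mean_aero_chord):
    """Static margin as a fraction of the mean aerodynamic chord (MAC).

    Positions are measured aft of the same datum; a positive result means
    the CG lies ahead of the neutral point (statically stable).
    """
    return (neutral_point - cg_position) / mean_aero_chord

# Illustrative numbers only: CG at 25% MAC, neutral point at 35% MAC.
mac = 2.0            # mean aerodynamic chord, m
cg = 0.25 * mac      # CG location aft of the leading edge, m
np_ = 0.35 * mac     # neutral point location, m

sm = static_margin(cg, np_, mac)
print(f"static margin = {sm:+.0%} MAC ->",
      "stable" if sm > 0 else "neutral" if sm == 0 else "unstable")
```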
For an aircraft to possess positive static stability, it is not necessary for its level to return to exactly what it was before the upset. It is sufficient that the speed and orientation do not continue to diverge but undergo at least a small change back towards the original speed and orientation. The deployment of flaps will increase longitudinal stability. Unlike motion about the other two axes, and in the other degrees of freedom of the aircraft (sideslip translation, rotation in roll, rotation in yaw), which are usually heavily coupled, motion in the longitudinal plane does not typically cause a roll or yaw. A larger horizontal stabilizer, and a greater moment arm of the horizontal stabilizer about the neutral point, will increase longitudinal stability. Tailless aircraft For a tailless aircraft, the neutral point coincides with the aerodynamic center, and so for such aircraft to have longitudinal static stability, the center of gravity must lie ahead of the aerodynamic center. For missiles with symmetric airfoils, the neutral point and the center of pressure are coincident and the term neutral point is not used. An unguided rocket must have a large positive static margin so the rocket shows minimum tendency to diverge from the direction of flight given to it at launch. In contrast, guided missiles usually have a negative static margin for increased maneuverability. Dynamic stability Longitudinal dynamic stability of a statically stable aircraft refers to whether the aircraft will continue to oscillate after a disturbance, or whether the oscillations are damped. A dynamically stable aircraft will experience oscillations reducing to nil. A dynamically neutral aircraft will continue to oscillate around its original level, and a dynamically unstable aircraft will experience increasing oscillations and displacement from its original level. Dynamic stability is caused by damping. If damping is too great, the aircraft will be less responsive and less manoeuvrable. Decreasing phugoid (long-period) oscillations can be achieved by building a smaller stabilizer on a longer tail, and by shifting the center of gravity to the rear. An aircraft that is not statically stable cannot be dynamically stable. Analysis Near the cruise condition most of the lift force is generated by the wings, with ideally only a small amount generated by the fuselage and tail. We may analyse the longitudinal static stability by considering the aircraft in equilibrium under wing lift, tail force, and weight. The moment equilibrium condition is called trim, and we are generally interested in the longitudinal stability of the aircraft about this trim condition. Equating forces in the vertical direction: W = L_w + L_t, where W is the weight, L_w is the wing lift and L_t is the tail force. For a thin airfoil at low angle of attack, the wing lift is proportional to the angle of attack: L_w = q S_w (∂C_L/∂α)(α + α_0), where S_w is the wing area, C_L is the (wing) lift coefficient and α is the angle of attack. The term α_0 is included to account for camber, which results in lift at zero angle of attack. Finally q is the dynamic pressure: q = ½ρv², where ρ is the air density and v is the speed. Trim The force from the tail-plane is proportional to its angle of attack, including the effects of any elevator deflection and any adjustment the pilot has made to trim-out any stick force. In addition, the tail is located in the flow field of the main wing, and consequently experiences downwash, reducing its angle of attack. 
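A minimal numeric sketch of the wing-lift and dynamic-pressure relations just given; the lift-curve slope, camber offset, wing area, and flight condition below are illustrative assumptions rather than values from the article.

```python
import math

rho = 1.225                   # air density at sea level, kg/m^3
v = 60.0                      # airspeed, m/s (assumed)
S_w = 16.0                    # wing area, m^2 (assumed)
a_w = 5.5                     # wing lift-curve slope dCL/dalpha, per radian (assumed)
alpha = math.radians(3.0)     # angle of attack
alpha_0 = math.radians(1.5)   # zero-lift offset due to camber (assumed)

q = 0.5 * rho * v**2                      # dynamic pressure, Pa
L_w = q * S_w * a_w * (alpha + alpha_0)   # wing lift, N

print(f"q = {q:.0f} Pa, wing lift = {L_w:.0f} N")
```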
In a statically stable aircraft of conventional (tail in rear) configuration, the tail-plane force may act upward or downward depending on the design and the flight conditions. In a typical canard aircraft both fore and aft planes are lifting surfaces. The fundamental requirement for static stability is that the aft surface must have greater authority (leverage) in restoring a disturbance than the forward surface has in exacerbating it. This leverage is a product of moment arm from the center of gravity and surface area. Correctly balanced in this way, the partial derivative of pitching moment with respect to changes in angle of attack will be negative: a momentary pitch up to a larger angle of attack makes the resultant pitching moment tend to pitch the aircraft back down. (Here, pitch is used loosely for the angle between the nose and the direction of the airflow, i.e. the angle of attack.) This is the "stability derivative" ∂M/∂α, described below. The tail force is, therefore: L_t = q S_t [ (∂C_l/∂α)(α − ε) + (∂C_l/∂η) η ], where S_t is the tail area, C_l is the tail force coefficient, η is the elevator deflection, and ε is the downwash angle. A canard aircraft may have its foreplane rigged at a high angle of incidence, which can be seen in a canard catapult glider from a toy store; the design puts the c.g. well forward, requiring nose-up lift. Violations of the basic principle are exploited in some high performance "relaxed static stability" combat aircraft to enhance agility; artificial stability is supplied by active electronic means. There are a few classical cases where this favorable response was not achieved, notably in T-tail configurations. A T-tail airplane has a higher horizontal tail that passes through the wake of the wing later (at a higher angle of attack) than a lower tail would, and at this point the wing has already stalled and has a much larger separated wake. Inside the separated wake, the tail sees little to no freestream and loses effectiveness. Elevator control power is also heavily reduced or even lost, and the pilot is unable to easily escape the stall. This phenomenon is known as 'deep stall'. Taking moments about the center of gravity, the net nose-up moment is: M = x_g L_w − (l_t − x_g) L_t, where x_g is the location of the center of gravity behind the aerodynamic center of the main wing and l_t is the tail moment arm. For trim, this moment must be zero. For a given maximum elevator deflection, there is a corresponding limit on center of gravity position at which the aircraft can be kept in equilibrium. When limited by control deflection this is known as a 'trim limit'. In principle trim limits could determine the permissible forwards and rearwards shift of the center of gravity, but usually it is only the forward CG limit that is determined by the available control; the aft limit is usually dictated by stability. In a missile context 'trim limit' more usually refers to the maximum angle of attack, and hence the lateral acceleration, which can be generated. Static stability The nature of stability may be examined by considering the increment in pitching moment with change in angle of attack at the trim condition. If this is nose up, the aircraft is longitudinally unstable; if nose down it is stable. Differentiating the moment equation with respect to α: ∂M/∂α = x_g ∂L_w/∂α − (l_t − x_g) ∂L_t/∂α. Note: ∂M/∂α is a stability derivative. 
It is convenient to treat total lift as acting at a distance h ahead of the centre of gravity, so that the moment equation may be written: M = h (L_w + L_t). Applying the increment in angle of attack: ∂M/∂α = h (∂L_w/∂α + ∂L_t/∂α). Equating the two expressions for moment increment: h = [x_g ∂L_w/∂α − (l_t − x_g) ∂L_t/∂α] / (∂L_w/∂α + ∂L_t/∂α). The total lift L is the sum of L_w and L_t, so the sum in the denominator can be simplified and written as the derivative ∂L/∂α of the total lift due to angle of attack, yielding: h/c = x_g/c − (l_t/c)(∂L_t/∂α)/(∂L/∂α), where c is the mean aerodynamic chord of the main wing. The term V = (l_t S_t)/(c S_w) is known as the tail volume ratio. Its coefficient, the ratio of the two lift derivatives (∂C_l/∂α)/(∂C_L/∂α), has values in the range of 0.50 to 0.65 for typical configurations. Hence the expression for h may be written more compactly, though somewhat approximately, as: h/c ≈ x_g/c − k V, with k between 0.50 and 0.65. The quantity h/c is known as the static margin. For stability it must be negative. (However, for consistency of language, the static margin is sometimes taken as −h/c, so that positive stability is associated with positive static margin.) See also Directional stability Flight dynamics Handling qualities Phugoid Yaw damper References Aerospace engineering Aircraft aerodynamics Flight control systems Aviation science
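A numerical sketch of the trim and static-margin relations reconstructed above; all geometry, areas, and the lift-derivative ratio below are illustrative assumptions rather than values from the article.

```python
# Illustrative sketch of the static-margin estimate h/c ~ x_g/c - k*V derived above.
S_w, S_t = 16.0, 3.2        # wing and tail areas, m^2 (assumed)
c = 1.6                     # mean aerodynamic chord, m (assumed)
l_t = 4.5                   # tail moment arm, m (assumed)
x_g = 0.10                  # CG distance aft of the wing aerodynamic centre, m (assumed)
k = 0.55                    # ratio of tail to wing lift derivatives (typical 0.50-0.65)

V = (l_t * S_t) / (c * S_w)         # tail volume ratio
h_over_c = x_g / c - k * V          # static margin (negative => stable in this convention)

print(f"tail volume ratio V = {V:.2f}")
print(f"static margin h/c = {h_over_c:+.2f} ->",
      "stable" if h_over_c < 0 else "unstable or neutral")
```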
Longitudinal stability
Engineering
2,227
55,547,754
https://en.wikipedia.org/wiki/Huawei%20Mate%2010
The Huawei Mate 10, Huawei Mate 10 Pro and Huawei Mate 10 Lite are Android smartphones designed and marketed by Huawei as part of the Huawei Mate series. There is also a Mate 10 Porsche Design, which has 256 GB of storage but is otherwise identical to the Mate 10 Pro. They were first released on 16 October 2017. Compared with its predecessor, the Mate 9, the Mate 10 Pro flagship has a faster processor with an integrated neural processing unit, a slightly larger OLED screen (6.0") with a taller 18:9 aspect ratio, significantly longer battery life and a glass back construction (but without wireless charging). Chinese and international models are available in dual SIM configuration. It comes with Android 8 and a newer version of Huawei's EMUI interface. All Mate 10 models are unlocked and GSM only. Huawei phones, including the Mate series, are not sold or financed through U.S. carriers due to pressure from U.S. intelligence agencies, though they are available from independent and online retailers. The Mate 10 series was supplanted by the Mate 20 series as the flagship in October 2018. Specifications The Mate 10 series is powered by Huawei's all-new AI-focused processor, the Kirin 970. The Kirin 970 is a 64-bit octa-core 2.36/1.8 GHz mobile ARM LTE SoC with a 12-core Mali G72 GPU and an onboard neural processing unit (NPU). On the Kirin 970, the NPU takes over tasks like scanning and translating words in pictures using Microsoft's Translator. The Mate 10 is backed by 4 GB of RAM coupled with 64 GB storage, while the Mate 10 Pro is backed by either 4 GB RAM and 64 GB storage, or 6 GB RAM and 128 GB storage. The most expensive model, the Porsche Design, gets 6 GB RAM and 256 GB storage. In China, the Huawei Mate 10 is also offered with 6 GB RAM and 128 GB storage, like the Pro model; elsewhere this configuration is available through online stores. Both phones feature a near-bezelless display, with the Mate 10 having an 81.61% screen-to-body ratio and the Mate 10 Pro 81.79%. The Mate 10 has a typical 16:9 aspect ratio, while the Mate 10 Pro has the new 18:9 ratio. A 5.9-inch LCD display panel with RGBW arrangement is used in the Mate 10, and a 6-inch OLED panel made by BOE in the Mate 10 Pro. Surprisingly, the Mate 10 has a higher resolution than the larger Mate 10 Pro (1440p versus 1080p). The phones' bodies are made of 3D glass with an aluminum frame. Both the front and back of each phone are covered with Gorilla Glass 5. Despite the glass back, wireless charging is not supported. Huawei once again partnered with Leica to engineer the dual-lens camera in the Mate 10 series. The rear camera is a dual-lens setup with a 12 MP RGB sensor with an f/1.6 aperture and a monochrome 20 MP sensor with the same f/1.6 aperture. Only the RGB camera has optical image stabilization. Due to this dual-lens camera setup, the camera is capable of creating bokeh shots, and the effect is adjustable even after taking the shot. The front camera is an 8 MP RGB camera. Both phones have a large 4,000 mAh battery with fast charging. Huawei claims that the phone can be charged from 1 to 20 percent in 10 minutes, and from 1 to 58 percent in 30 minutes. Ports on the Mate 10 include a 3.5 mm headphone jack, and a USB-C charging and data transfer port. However, the headphone jack is not present on the Mate 10 Pro, which instead has IP67 water and dust resistance, while the Mate 10 only has an IP53 rating. The Mate 10 Pro also has an IR blaster, so it can be used as a remote control for TVs and other devices. The phones also come in single SIM (U.S.) 
and dual SIM (China and Europe, at least) 4G/4G configurations. However, the dual SIM configurations are DSDS (Dual SIM/Dual Standby), meaning the user must specify in settings which SIM can be used to send/receive calls or browse the internet. The other SIM remains inactive. The Mate 10 features expandable storage via a micro-SD card supporting up to 256 GB (using the SIM 2 slot). The Pro models do not have expandable storage. The Mate 10 has a physical Home button/fingerprint scanner on the bottom of the front bezel; the Pro moves the fingerprint sensor to the back and eliminates the Home button. All the Mate 10 series phones are factory unlocked, but are GSM only; they do not support the CDMA used by U.S. carriers like Verizon, Sprint and U.S. Cellular (the TL00 and AL00 Chinese models support CDMA 800 MHz, but are locked to the China Telecom carrier). Because there are differences in the GSM bands supported by the models, not all models will work on all carriers/networks if used outside their designated distribution areas. Huawei Mate 10 Lite Huawei also released a "Lite" version of the Mate 10, also known as the Huawei Nova 2i in Malaysia, Huawei Maimang 6 in China, and Huawei Honor 9i in India. It has budget features and pricing, ranking below the midrange P and Honor series phones: a 2.36/1.7 GHz Kirin 659 with a Mali T830 MP2 GPU, with about half the performance of the Kirin 970 on the flagship models and roughly comparable to the older P8 model. There is only one memory/storage configuration, 4 GB/64 GB. The phone comes with a 5.9-inch IPS LCD display with a resolution of 1080x2160 and an 18:9 aspect ratio. It has a 3340 mAh battery, a 16-megapixel rear camera, and runs on the older Android 7.0/EMUI 5.1 software. It measures 156.20 x 75.00 x 7.50 mm (height x width x thickness) and weighs 164.00 grams. The Huawei Mate 10 Lite may come in single SIM or dual SIM (GSM and GSM) configuration. Software The phones run on Android 8 "Oreo", with Huawei's own custom skin, EMUI 8.0, on top of it. The previous EMUI version was 5.1. Huawei decided to jump to EMUI 8.0 to match the latest Android version number, which is 8.0 as well. It has some new features, notably the desktop mode called "Easy Projection", a custom desktop interface similar to Samsung DeX, which appears when connecting the phone to a display via a USB-C-to-HDMI cable. Models The phones have at least 6 model designations: A09, AL00, TL00, L09, L29, and LOAC. The prefix BLA- designates a Mate 10 Pro, ALP- designates a Mate 10 and RNE- designates a Mate 10 Lite. Each model may come in two configurations for the Mate 10 Pro, 4 GB/64 GB or 6 GB/128 GB. The Mate 10 Pro A09/L29 (Porsche) has a 6 GB/256 GB configuration. The BLA-LOAC model is an alternate name for the BLA-A09 model sold through special OEM arrangements at U.S. retailers like Best Buy. The A09/LOAC models are for U.S. distribution, the AL00 models are for China, and the L09 and L29 models are for Europe and international distribution. The TL00 model appears to be a limited distribution Himalayan model closely resembling the AL00 Chinese models. The L29 models are dual-SIM variants of the L09/A09 models; the AL00 models are also dual SIM. There are some differences between the A09/LOAC and Lx9 models in the GSM bands supported. 
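As a quick check on the display figures quoted above (a 5.9-inch 16:9 1440p panel on the Mate 10 versus a 6.0-inch 18:9 1080p panel on the Mate 10 Pro), the sketch below estimates pixel density; the exact pixel counts are inferred from the stated aspect ratios and resolutions and are assumptions, not specifications from the article.

```python
import math

def ppi(width_px, height_px, diagonal_in):
    """Pixels per inch for a rectangular panel of the given diagonal."""
    return math.hypot(width_px, height_px) / diagonal_in

# Inferred from "1440p, 16:9, 5.9-inch" and "1080p, 18:9, 6.0-inch".
mate10 = ppi(2560, 1440, 5.9)
mate10_pro = ppi(2160, 1080, 6.0)

print(f"Mate 10     ~{mate10:.0f} ppi")
print(f"Mate 10 Pro ~{mate10_pro:.0f} ppi")
```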
There are 6 models of the Mate 10 Lite: L01, L02, L03, L21, L22, and L23; L0x models are single SIM and L2x models are dual SIM; otherwise the Mate 10 Lite has a fixed configuration and the models differ only in geographic area of distribution and the local GSM frequencies supported there. The Mate 10 is not marketed or sold in the U.S., though it is available online from international sellers. Also, no dual SIM models are sold in the U.S. Markets and release dates The Kirin 970 SoC which powers the Mate 10 series was first unveiled at the 2017 Berlin IFA in early Sept. The Mate 10 series was initially released to select markets on Oct. 16, 2017, followed by the U.K. and Europe in late 2017, then the United States on Feb. 18, 2018. Unlocking (rooting) the Mate 10/Pro While it was originally intended to be an OEM/developer feature only, prior to May 24, 2018, Huawei freely supplied a bootloader unlock code to enable users/developers to root their devices. The URL to obtain the unlock code no longer exists. The company's announcement said: "Announcement: To provide better user experience and avoid issues caused by ROM flashing, the unlock code application service will be stopped for all products launched after 2018-5-24. For products released prior to this date, the service will be stopped 60 days after this announcement. Thank you for your understanding. We will continue to provide you with quality services. 2018-5-24 Huawei Device Co. Ltd." This was followed in June with an OTA named Patch01 that caused rooted phones to go into a bootloop until the original unrooted image was flashed. This image cannot be rooted. Performance The performance of the Mate 10 and Mate 10 Pro, with identical CPU, GPU and clock frequencies, is very close. Due to its lower screen resolution, the Mate 10 Pro has a slight advantage, especially in graphics-intensive applications. The performance of the Mate 10 Lite, with a CPU and GPU 2-3 generations older, falls far below that of the flagship models. Though the Mate 10's scores topped the list of Android phones at its Oct. 2017 release date, they fall well short of the A11 Bionic powered iPhone 8, 8+ and X, as well as the Snapdragon 845 powered devices released just a few months later. Special note: the official Geekbench testing site excludes the Mate 10 and Mate 20 models, other than the Lite models, from the published results because it claims these models run the benchmarks in a special benchmark mode which does not reflect real world performance. Predecessor phones like the Mate 9 and P10 are not affected; only products released from 2Q 2017 onward. The list of benchmarks affected is extensive: Geekbench, GFXbench, 3Dmark, Antutu, Quadrant, and others. These were not subtle differences, either, with results up to 47 percent higher than on privately renamed test variants that Huawei's software could not detect. In some cases, the Mate 10 performed worse on these private benchmarks than the Kirin 960/Mali G71 powered Mate 9. The company has since claimed that AI processes are responsible for allocating resources when heavy workloads are encountered. However, private benchmarks identical to the public ones except for the names of the apps, and the embedded strings containing the names, show radically different performance. The company has stated that it will make 'Performance mode' available to ordinary users. It is not clear that GPU turbo (see below) is the putative performance mode. GPU turbo In Aug. 
2018, Huawei introduced a 'GPU turbo' mode as a software-only upgrade, available at least for the Mate 10, Mate 10 Pro and Mate 10 Porsche Design models. GPU turbo mode, the company claims, eliminates GPU throttling by using AI paradigms to predict invariant portions of the screen to reduce rendering effort and power consumption. The company does not claim that GPU turbo mode increases the maximum frame rate(s). The drawbacks of this technology are that it is per device (SoC) and per game: the device must be trained or tuned for each game and the profile data stored on the device. So far, only two games have been optimized for GPU turbo: PUBG and Mobile Legends: Bang Bang. The company has also announced that GPU turbo will not be released to the United States market. Competitors The Mate 10 series came out in the same month as the Apple iPhone 8 and iPhone X and was positioned as a direct competitor of these, as well as of the Samsung Galaxy S8 series released in April 2017, though at slightly lower price points. Other major competitors in the same timeframe were the OnePlus 5 and Google's Pixel 2 series. Reception The Huawei Mate 10 series received mostly positive reviews, especially regarding the camera. DxOMark gave it an overall 97 points, similar to the more expensive iPhone X and only 1 point behind the Google Pixel 2. Most reviewers praised the long battery life, a result of the 4000 mAh battery, but criticized the heavily skinned EMUI software and the lack of certain features on both models, such as WLAN MIMO, Bluetooth 5.0, and inductive charging. The Huawei Mate 10 Pro's lack of QHD+ resolution, storage expansion, and a headphone jack also received some criticism from reviewers. However, it was praised in other respects, including a bright and color-accurate OLED display, LTE Cat. 18, Dual-VoLTE, Dual-SIM, aptX HD, good performance, very fast quick-charge technology, (theoretical) water and dust proofing (IP67), high build quality, precise location determination and excellent voice quality. See also Huawei P10 Comparison of smartphones List of Huawei phones References External links Huawei Mate 10 Huawei Mate 10 Pro Huawei Mate 10 Porsche Design Android (operating system) devices Mobile phones introduced in 2017 Mate 10 Mobile phones with multiple rear cameras Mobile phones with 4K video recording Discontinued flagship smartphones Mobile phones with infrared transmitter
Huawei Mate 10
Technology
2,999
15,586,403
https://en.wikipedia.org/wiki/Boreas%20%28journal%29
Boreas is a peer-reviewed academic journal that has been published on behalf of the Collegium Boreas since 1972. The journal covers all branches of Quaternary research, including biological and non-biological aspects of the Quaternary environment in both glaciated and non-glaciated areas. Formerly published by Taylor & Francis, Boreas has been published by Wiley-Blackwell since 1998. According to the Journal Citation Reports, the journal has a 2012 impact factor of 2.457. See also List of earth and atmospheric sciences journals Journal of Quaternary Science References External links Ecology journals Quaternary science journals Wiley-Blackwell academic journals
Boreas (journal)
Environmental_science
135
396,028
https://en.wikipedia.org/wiki/%C3%89tienne%20Geoffroy%20Saint-Hilaire
Étienne Geoffroy Saint-Hilaire (15 April 1772 – 19 June 1844) was a French naturalist who established the principle of "unity of composition". He was a colleague of Jean-Baptiste Lamarck and expanded and defended Lamarck's evolutionary theories. Geoffroy's scientific views had a transcendental flavor (unlike Lamarck's materialistic views) and were similar to those of German morphologists like Lorenz Oken. He believed in the underlying unity of organismal design, and the possibility of the transmutation of species in time, amassing evidence for his claims through research in comparative anatomy, paleontology, and embryology. He is considered a predecessor of the evo-devo evolutionary concept. Life and early career Geoffroy was born at Étampes (in present-day Essonne), and studied at the Collège de Navarre, in Paris, where he studied natural philosophy under M. J. Brisson. He then attended the lectures of Louis-Jean-Marie Daubenton at the Collège de France and Fourcroy at the Jardin des Plantes. In March 1793 Daubenton, through the interest of Bernardin de Saint-Pierre, procured him the office of sub-keeper and assistant demonstrator of the cabinet of natural history, made vacant by the resignation of Bernard Germain Étienne de la Ville, Comte de Lacépède. By a law passed in June 1793, Geoffroy was appointed one of the twelve professors of the newly constituted Muséum National d'Histoire Naturelle, being assigned the chair of zoology. In the same year he busied himself with the formation of a menagerie at that institution. In 1794, Geoffroy entered into correspondence with Georges Cuvier. Shortly after the appointment of Cuvier as assistant at the Muséum d'Histoire Naturelle, Geoffroy received him into his house. The two friends wrote together five memoirs on natural history, one of which, on the classification of mammals, puts forward the idea of the subordination of characters upon which Cuvier based his zoological system. It was in a paper entitled Histoire des Makis, ou singes de Madagascar, written in 1795, that Geoffroy first gave expression to his views on the unity of organic composition, the influence of which is perceptible in all his subsequent writings; nature, he observes, presents us with only one plan of construction, the same in principle, but varied in its accessory parts. In 1798, Geoffroy was chosen a member of Napoleon's great scientific expedition to Egypt as part of the natural history and physics section of the Institut d'Égypte; 151 scientists and artists participated in the expedition, including Dominique-Vivant Denon, Claude Louis Berthollet, and Jean Baptiste Joseph Fourier. On the capitulation of Alexandria in August 1801, he took part in resisting the claim made by the British general to the collections of the expedition, declaring that, were that demand persisted in, history would have to record that he also had burnt a library in Alexandria. Early in January 1802 Geoffroy returned to Paris. He was elected a member of the French Academy of Sciences in September 1807. In March of the following year Napoleon, who had already recognized his national services by the award of the cross of the Legion of Honor, selected him to visit the museums of Portugal, for the purpose of procuring collections from them, and in the face of considerable opposition from the British he eventually was successful in retaining them as a permanent possession for his country. 
Later career In 1809, the year after his return to France, Geoffroy was made professor of zoology at the faculty of sciences at Paris, and from that period he devoted himself more exclusively than before to anatomical study. In 1818 he published the first part of his celebrated Philosophie anatomique, the second volume of which, published in 1822, and subsequent memoirs account for the formation of monstrosities on the principle of arrest of development, and of the attraction of similar parts. Geoffroy's friend Robert Edmund Grant shared his views on unity of plan and corresponded with him while working on marine invertebrates in the late 1820s in Edinburgh (assisted in 1826 and 1827 by his student Charles Darwin) when Grant successfully identified the pancreas in molluscs. When, in 1830, Geoffroy proceeded to apply to the invertebrata his views as to the unity of animal composition, he found a vigorous opponent in Cuvier, his former friend. Geoffroy, a synthesiser, contended, in accordance with his theory of unity of plan in organic composition, that all animals are formed of the same elements, in the same number; and with the same connections: homologous parts, however they differ in form and size, must remain associated in the same invariable order. With Johann Wolfgang von Goethe he held that there is in nature a law of compensation or balancing of growth, so that if one organ take on an excess of development, it is at the expense of some other part; and he maintained that, since nature takes no sudden leaps, even organs which are superfluous in any given species, if they have played an important part in other species of the same family, are retained as rudiments, which testify to the permanence of the general plan of creation. It was his conviction that, owing to the conditions of life, the same forms had not been perpetuated since the origin of all things, although it was not his belief that existing species are becoming modified. Cuvier, who was an analytical observer of facts, admitted only the prevalence of laws of co-existence or harmony in animal organs, and maintained the absolute invariability of species, which he declared had been created with a regard to the circumstances in which they were placed, each organ contrived with a view to the function it had to fulfil, thus putting, in Geoffroy's considerations, the effect for the cause. In 1836 he coined the term phocomelia. In 1838 he was named an Officer of the Légion d'honneur. In July 1840, Geoffroy became blind, and some months later he had a paralytic attack. From that time his strength gradually failed him. He resigned his chair at the museum in 1841, and was succeeded by his son, Isidore Geoffroy Saint-Hilaire. He died in Paris on 19 June 1844 and is buried in Division 19 of the Cimetière du Père Lachaise. Geoffroy's theory Geoffroy was a deist, which is to say that he believed in a God, but also in a law-like universe, with no supernatural interference in the details of existence. This kind of opinion was common in the Enlightenment, and goes with a rejection of revelation and miracles, and does not interpret the Bible as the literal word of God. These views did not conflict with his naturalistic ideas about organic change. Geoffroy's theory was not a theory of common descent, but a working-out of existing potential in a given type. For him, the environment causes a direct induction of organic change. This opinion Ernst Mayr labels as 'Geoffroyism'. 
It is definitely not what Lamarck believed (for Lamarck, a change in habits is what changes the animal). The direct effect of environment on heritable traits is not believed today to be a central evolutionary force; even Lawrence knew by 1816 that the climate does not directly cause the major differences between human races. Geoffroy endorsed a theory of saltational evolution that "monstrosities could become the founding fathers (or mothers) of new species by instantaneous transition from one form to the next." In 1831 he speculated that birds could have arisen from reptiles by an epigenetic saltation. Geoffroy wrote that environmental pressures could produce sudden transformations to establish new species instantaneously. In 1864 Albert von Kölliker revived Geoffroy's theory that evolution proceeds by large steps, under the name of heterogenesis. Geoffroy noted that the organization of dorsal and ventral structures in arthropods is opposite that of mammals. The inversion hypothesis was met with criticism and was rejected; however, some modern molecular embryologists have since resurrected this idea. Taxa described See :Category:Taxa named by Étienne Geoffroy Saint-Hilaire Legacy The Geoffroy's cat (Leopardus geoffroyi) was named in his honour. Étienne Geoffroy Saint-Hilaire is commemorated in the scientific name of a species of South American turtle, Phrynops geoffroanus. His name is also honoured in that of a number of other species, including Geoffroy's spider monkey, Geoffroy's bat, and Geoffroy's tamarin. The catfish Corydoras geoffroy is named after him. Rue Geoffroy-Saint-Hilaire is a street in the 5ème arrondissement, Paris, near the Jardin des Plantes and the Muséum national d'histoire naturelle. In popular culture French author Honoré de Balzac dedicated his novel Le Père Goriot to Saint-Hilaire, "as a tribute of admiration for his labors and his genius." Works See also Cuvier–Geoffroy debate Citations General sources Further reading van den Biggelaar, J.A.M.; Edsinger-Gonzales, E.; Schram, F.R. (2002). "The improbability of dorso-ventral axis inversion during animal evolution, as presumed by Geoffroy Saint Hilaire". Contributions to Zoology 71(1/3). External links Étienne Geoffroy Saint Hilaire Collection, American Philosophical Society 1772 births 1844 deaths 18th-century French male writers 18th-century French writers 19th-century French male writers 19th-century French writers 19th-century French zoologists Burials at Père Lachaise Cemetery Commission des Sciences et des Arts members French male non-fiction writers French naturalists French taxonomists French zoologists Members of the French Academy of Sciences National Museum of Natural History (France) people People from Étampes Phocomelia Proto-evolutionary biologists University of Paris alumni Zookeepers
Étienne Geoffroy Saint-Hilaire
Biology
2,059
52,912,473
https://en.wikipedia.org/wiki/Multiverse%20%28set%20theory%29
In mathematical set theory, the multiverse view is that there are many models of set theory, but no "absolute", "canonical" or "true" model. The various models are all equally valid or true, though some may be more useful or attractive than others. The opposite view is the "universe" view of set theory in which all sets are contained in some single ultimate model. The collection of countable transitive models of ZFC (in some universe) is called the hyperverse and is very similar to the "multiverse". A typical difference between the universe and multiverse views is the attitude to the continuum hypothesis. In the universe view the continuum hypothesis is a meaningful question that is either true or false though we have not yet been able to decide which. In the multiverse view it is meaningless to ask whether the continuum hypothesis is true or false before selecting a model of set theory. Another difference is that the statement "For every transitive model of ZFC there is a larger model of ZFC in which it is countable" is true in some versions of the multiverse view of mathematics but is false in the universe view. References Set theory Philosophy of mathematics Foundations of mathematics
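The closing claim about transitive models can be stated more formally; the notation below is one standard way of rendering it, supplied for clarity rather than taken from the article.

```latex
% One formalization of "for every transitive model of ZFC there is a larger
% model of ZFC in which it is countable" (a multiverse-style reading):
\forall M \,\bigl( M \text{ transitive} \wedge M \models \mathrm{ZFC}
  \;\rightarrow\; \exists N \,( N \models \mathrm{ZFC} \wedge M \in N
  \wedge N \models \text{``}M \text{ is countable''}) \bigr)
```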
Multiverse (set theory)
Mathematics
244
9,476
https://en.wikipedia.org/wiki/Electron
The electron (e−, or β− in nuclear reactions) is a subatomic particle whose electric charge is negative one elementary charge. Electrons belong to the first generation of the lepton particle family, and are generally thought to be elementary particles because they have no known components or substructure. The electron's mass is approximately 1/1836 that of the proton. Quantum mechanical properties of the electron include an intrinsic angular momentum (spin) of a half-integer value, expressed in units of the reduced Planck constant, ħ. Being fermions, no two electrons can occupy the same quantum state, per the Pauli exclusion principle. Like all elementary particles, electrons exhibit properties of both particles and waves: they can collide with other particles and can be diffracted like light. The wave properties of electrons are easier to observe with experiments than those of other particles like neutrons and protons because electrons have a lower mass and hence a longer de Broglie wavelength for a given energy. Electrons play an essential role in numerous physical phenomena, such as electricity, magnetism, chemistry, and thermal conductivity; they also participate in gravitational, electromagnetic, and weak interactions. Since an electron has charge, it has a surrounding electric field; if that electron is moving relative to an observer, the observer will observe it to generate a magnetic field. Electromagnetic fields produced from other sources will affect the motion of an electron according to the Lorentz force law. Electrons radiate or absorb energy in the form of photons when they are accelerated. Laboratory instruments are capable of trapping individual electrons as well as electron plasma by the use of electromagnetic fields. Special telescopes can detect electron plasma in outer space. Electrons are involved in many applications, such as tribology or frictional charging, electrolysis, electrochemistry, battery technologies, electronics, welding, cathode-ray tubes, photoelectricity, photovoltaic solar panels, electron microscopes, radiation therapy, lasers, gaseous ionization detectors, and particle accelerators. Interactions involving electrons with other subatomic particles are of interest in fields such as chemistry and nuclear physics. The Coulomb force interaction between the positive protons within atomic nuclei and the negative electrons outside them allows the composition of the two known as atoms. Ionization or differences in the proportions of negative electrons versus positive nuclei changes the binding energy of an atomic system. The exchange or sharing of the electrons between two or more atoms is the main cause of chemical bonding. In 1838, British natural philosopher Richard Laming first hypothesized the concept of an indivisible quantity of electric charge to explain the chemical properties of atoms. Irish physicist George Johnstone Stoney named this charge "electron" in 1891, and J. J. Thomson and his team of British physicists identified it as a particle in 1897 during the cathode-ray tube experiment. Electrons participate in nuclear reactions, such as nucleosynthesis in stars, where they are known as beta particles. Electrons can be created through beta decay of radioactive isotopes and in high-energy collisions, for instance, when cosmic rays enter the atmosphere. The antiparticle of the electron is called the positron; it is identical to the electron, except that it carries electrical charge of the opposite sign. 
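The remark above that electrons have a longer de Broglie wavelength than heavier particles at the same energy can be checked with a short sketch; the 100 eV kinetic energy is an illustrative choice, and the non-relativistic formula λ = h/√(2mE) is used.

```python
import math

H = 6.626e-34            # Planck constant, J*s
EV = 1.602e-19           # joules per electronvolt
M_ELECTRON = 9.109e-31   # electron mass, kg
M_PROTON = 1.673e-27     # proton mass, kg

def de_broglie_wavelength(mass_kg, kinetic_energy_ev):
    """Non-relativistic de Broglie wavelength: lambda = h / sqrt(2 m E)."""
    p = math.sqrt(2 * mass_kg * kinetic_energy_ev * EV)  # momentum, kg*m/s
    return H / p

E = 100.0  # kinetic energy in eV (illustrative)
print(f"electron: {de_broglie_wavelength(M_ELECTRON, E)*1e9:.3f} nm")
print(f"proton:   {de_broglie_wavelength(M_PROTON, E)*1e9:.5f} nm")
```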
When an electron collides with a positron, both particles can be annihilated, producing gamma ray photons. History Discovery of effect of electric force The ancient Greeks noticed that amber attracted small objects when rubbed with fur. Along with lightning, this phenomenon is one of humanity's earliest recorded experiences with electricity. In his 1600 treatise De Magnete, the English scientist William Gilbert coined the Neo-Latin term electrica to refer to those substances with a property similar to that of amber, which attract small objects after being rubbed. Both electric and electricity are derived from the Latin electrum (also the root of the alloy of the same name), which came from the Greek word for amber, ēlektron (ἤλεκτρον). Discovery of two kinds of charges In the early 1700s, French chemist Charles François du Fay found that if a charged gold-leaf is repulsed by glass rubbed with silk, then the same charged gold-leaf is attracted by amber rubbed with wool. From this and other results of similar types of experiments, du Fay concluded that electricity consists of two electrical fluids, vitreous fluid from glass rubbed with silk and resinous fluid from amber rubbed with wool. These two fluids can neutralize each other when combined. American scientist Ebenezer Kinnersley later also independently reached the same conclusion. A decade later Benjamin Franklin proposed that electricity was not from different types of electrical fluid, but a single electrical fluid showing an excess (+) or deficit (−). He gave them the modern charge nomenclature of positive and negative respectively. Franklin thought of the charge carrier as being positive, but he did not correctly identify which situation was a surplus of the charge carrier, and which situation was a deficit. Between 1838 and 1851, British natural philosopher Richard Laming developed the idea that an atom is composed of a core of matter surrounded by subatomic particles that had unit electric charges. Beginning in 1846, German physicist Wilhelm Eduard Weber theorized that electricity was composed of positively and negatively charged fluids, and their interaction was governed by the inverse square law. After studying the phenomenon of electrolysis in 1874, Irish physicist George Johnstone Stoney suggested that there existed a "single definite quantity of electricity", the charge of a monovalent ion. He was able to estimate the value of this elementary charge e by means of Faraday's laws of electrolysis. However, Stoney believed these charges were permanently attached to atoms and could not be removed. In 1881, German physicist Hermann von Helmholtz argued that both positive and negative charges were divided into elementary parts, each of which "behaves like atoms of electricity". Stoney initially coined the term electrolion in 1881. Ten years later, he switched to electron to describe these elementary charges, writing in 1894: "... an estimate was made of the actual amount of this most remarkable fundamental unit of electricity, for which I have since ventured to suggest the name electron". A 1906 proposal to change to electrion failed because Hendrik Lorentz preferred to keep electron. The word electron is a combination of the words electric and ion. The suffix -on which is now used to designate other subatomic particles, such as a proton or neutron, is in turn derived from electron. 
Discovery of free electrons outside matter While studying electrical conductivity in rarefied gases in 1859, the German physicist Julius Plücker observed the radiation emitted from the cathode caused phosphorescent light to appear on the tube wall near the cathode; and the region of the phosphorescent light could be moved by application of a magnetic field. In 1869, Plücker's student Johann Wilhelm Hittorf found that a solid body placed in between the cathode and the phosphorescence would cast a shadow upon the phosphorescent region of the tube. Hittorf inferred that there are straight rays emitted from the cathode and that the phosphorescence was caused by the rays striking the tube walls. Furthermore, he also discovered that these rays are deflected by magnets just like lines of current. In 1876, the German physicist Eugen Goldstein showed that the rays were emitted perpendicular to the cathode surface, which distinguished between the rays that were emitted from the cathode and the incandescent light. Goldstein dubbed the rays cathode rays. Decades of experimental and theoretical research involving cathode rays were important in J. J. Thomson's eventual discovery of electrons. Goldstein also experimented with double cathodes and hypothesized that one ray may repulse another, although he didn't believe that any particles might be involved. During the 1870s, the English chemist and physicist Sir William Crookes developed the first cathode-ray tube to have a high vacuum inside. He then showed in 1874 that the cathode rays can turn a small paddle wheel when placed in their path. Therefore, he concluded that the rays carried momentum. Furthermore, by applying a magnetic field, he was able to deflect the rays, thereby demonstrating that the beam behaved as though it were negatively charged. In 1879, he proposed that these properties could be explained by regarding cathode rays as composed of negatively charged gaseous molecules in a fourth state of matter, in which the mean free path of the particles is so long that collisions may be ignored. In 1883, not yet well-known German physicist Heinrich Hertz tried to prove that cathode rays are electrically neutral and got what he interpreted as a confident absence of deflection in electrostatic, as opposed to magnetic, field. However, as J. J. Thomson explained in 1897, Hertz placed the deflecting electrodes in a highly-conductive area of the tube, resulting in a strong screening effect close to their surface. The German-born British physicist Arthur Schuster expanded upon Crookes's experiments by placing metal plates parallel to the cathode rays and applying an electric potential between the plates. The field deflected the rays toward the positively charged plate, providing further evidence that the rays carried negative charge. By measuring the amount of deflection for a given electric and magnetic field, in 1890 Schuster was able to estimate the charge-to-mass ratio of the ray components. However, this produced a value that was more than a thousand times greater than what was expected, so little credence was given to his calculations at the time. This is because it was assumed that the charge carriers were much heavier hydrogen or nitrogen atoms. Schuster's estimates would subsequently turn out to be largely correct. In 1892 Hendrik Lorentz suggested that the mass of these particles (electrons) could be a consequence of their electric charge. 
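The charge-to-mass estimates described above (Schuster's deflection measurements, later refined by Thomson) rest on the standard crossed-field relations; the sketch below illustrates them with assumed, not historical, field strengths and beam radius.

```python
# Crossed-field estimate of the charge-to-mass ratio of cathode-ray particles.
# All numbers below are illustrative assumptions, not historical data.
E = 2.0e4      # electric field between deflecting plates, V/m (assumed)
B = 1.0e-3     # magnetic field, T (assumed)
r = 0.114      # radius of curvature with the electric field switched off, m (assumed)

v = E / B                      # speed at which electric and magnetic forces balance
charge_to_mass = v / (B * r)   # from m*v^2/r = q*v*B  =>  q/m = v/(B*r)

print(f"beam speed v ~ {v:.2e} m/s")
print(f"q/m ~ {charge_to_mass:.2e} C/kg (accepted electron value ~1.76e11 C/kg)")
```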
While studying naturally fluorescing minerals in 1896, the French physicist Henri Becquerel discovered that they emitted radiation without any exposure to an external energy source. These radioactive materials became the subject of much interest among scientists, including the New Zealand physicist Ernest Rutherford, who discovered they emitted particles. He designated these particles alpha and beta, on the basis of their ability to penetrate matter. In 1900, Becquerel showed that the beta rays emitted by radium could be deflected by an electric field, and that their mass-to-charge ratio was the same as for cathode rays. This evidence strengthened the view that electrons existed as components of atoms. In 1897, the British physicist J. J. Thomson, with his colleagues John S. Townsend and H. A. Wilson, performed experiments indicating that cathode rays really were unique particles, rather than waves, atoms or molecules as was believed earlier. By 1899 he showed that their charge-to-mass ratio, e/m, was independent of cathode material. He further showed that the negatively charged particles produced by radioactive materials, by heated materials and by illuminated materials were universal. Thomson measured m/e for cathode ray "corpuscles", and made good estimates of the charge e, leading to a value for the mass m, finding a value 1400 times less massive than the least massive ion known: hydrogen. In the same year Emil Wiechert and Walter Kaufmann also calculated the e/m ratio but did not take the step of interpreting their results as showing a new particle, while J. J. Thomson would subsequently, in 1899, give estimates for the electron charge and mass as well. The name "electron" was adopted for these particles by the scientific community, mainly due to advocacy by G. F. FitzGerald, J. Larmor, and H. A. Lorentz. The term was originally coined by George Johnstone Stoney in 1891 as a tentative name for the basic unit of electrical charge (which had then yet to be discovered). The electron's charge was more carefully measured by the American physicists Robert Millikan and Harvey Fletcher in their oil-drop experiment of 1909, the results of which were published in 1911. This experiment used an electric field to prevent a charged droplet of oil from falling as a result of gravity. This device could measure the electric charge from as few as 1–150 ions with an error margin of less than 0.3%. Comparable experiments had been done earlier by Thomson's team, using clouds of charged water droplets generated by electrolysis, and in 1911 by Abram Ioffe, who independently obtained the same result as Millikan using charged microparticles of metals, then published his results in 1913. However, oil drops were more stable than water drops because of their slower evaporation rate, and thus more suited to precise experimentation over longer periods of time. Around the beginning of the twentieth century, it was found that under certain conditions a fast-moving charged particle caused a condensation of supersaturated water vapor along its path. In 1911, Charles Wilson used this principle to devise his cloud chamber so he could photograph the tracks of charged particles, such as fast-moving electrons. Atomic theory By 1914, experiments by physicists Ernest Rutherford, Henry Moseley, James Franck and Gustav Hertz had largely established the structure of an atom as a dense nucleus of positive charge surrounded by lower-mass electrons. 
In 1913, Danish physicist Niels Bohr postulated that electrons resided in quantized energy states, with their energies determined by the angular momentum of the electron's orbit about the nucleus. The electrons could move between those states, or orbits, by the emission or absorption of photons of specific frequencies. By means of these quantized orbits, he accurately explained the spectral lines of the hydrogen atom. However, Bohr's model failed to account for the relative intensities of the spectral lines and it was unsuccessful in explaining the spectra of more complex atoms. Chemical bonds between atoms were explained by Gilbert Newton Lewis, who in 1916 proposed that a covalent bond between two atoms is maintained by a pair of electrons shared between them. Later, in 1927, Walter Heitler and Fritz London gave the full explanation of the electron-pair formation and chemical bonding in terms of quantum mechanics. In 1919, the American chemist Irving Langmuir elaborated on the Lewis's static model of the atom and suggested that all electrons were distributed in successive "concentric (nearly) spherical shells, all of equal thickness". In turn, he divided the shells into a number of cells each of which contained one pair of electrons. With this model Langmuir was able to qualitatively explain the chemical properties of all elements in the periodic table, which were known to largely repeat themselves according to the periodic law. In 1924, Austrian physicist Wolfgang Pauli observed that the shell-like structure of the atom could be explained by a set of four parameters that defined every quantum energy state, as long as each state was occupied by no more than a single electron. This prohibition against more than one electron occupying the same quantum energy state became known as the Pauli exclusion principle. The physical mechanism to explain the fourth parameter, which had two distinct possible values, was provided by the Dutch physicists Samuel Goudsmit and George Uhlenbeck. In 1925, they suggested that an electron, in addition to the angular momentum of its orbit, possesses an intrinsic angular momentum and magnetic dipole moment. This is analogous to the rotation of the Earth on its axis as it orbits the Sun. The intrinsic angular momentum became known as spin, and explained the previously mysterious splitting of spectral lines observed with a high-resolution spectrograph; this phenomenon is known as fine structure splitting. Quantum mechanics In his 1924 dissertation (Research on Quantum Theory), French physicist Louis de Broglie hypothesized that all matter can be represented as a de Broglie wave in the manner of light. That is, under the appropriate conditions, electrons and other matter would show properties of either particles or waves. The corpuscular properties of a particle are demonstrated when it is shown to have a localized position in space along its trajectory at any given moment. The wave-like nature of light is displayed, for example, when a beam of light is passed through parallel slits thereby creating interference patterns. In 1927, George Paget Thomson and Alexander Reid discovered the interference effect was produced when a beam of electrons was passed through thin celluloid foils and later metal films, and by American physicists Clinton Davisson and Lester Germer by the reflection of electrons from a crystal of nickel. 
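The Bohr picture described above predicts hydrogen's line spectrum from quantized energy levels. A minimal sketch, using the standard E_n = −13.6 eV / n² formula rather than anything specific to this article, reproduces the red Balmer line:

```python
PLANCK_EV = 4.1357e-15   # Planck constant, eV*s
C = 2.998e8              # speed of light, m/s
RYDBERG_EV = 13.606      # hydrogen ground-state binding energy, eV

def level_energy(n):
    """Bohr-model energy of hydrogen level n, in eV."""
    return -RYDBERG_EV / n**2

def transition_wavelength_nm(n_upper, n_lower):
    """Wavelength of the photon emitted when the electron drops between levels."""
    delta_e = level_energy(n_upper) - level_energy(n_lower)  # positive, eV
    return PLANCK_EV * C / delta_e * 1e9

# The n=3 -> n=2 transition is the red Balmer line (H-alpha, ~656 nm).
print(f"H-alpha: {transition_wavelength_nm(3, 2):.0f} nm")
```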
Alexander Reid, who was Thomson's graduate student, performed the first experiments but he died soon after in a motorcycle accident and is rarely mentioned. De Broglie's prediction of a wave nature for electrons led Erwin Schrödinger to postulate a wave equation for electrons moving under the influence of the nucleus in the atom. In 1926, this equation, the Schrödinger equation, successfully described how electron waves propagated. Rather than yielding a solution that determined the location of an electron over time, this wave equation also could be used to predict the probability of finding an electron near a position, especially a position near where the electron was bound in space, for which the electron wave equations did not change in time. This approach led to a second formulation of quantum mechanics (the first by Heisenberg in 1925), and solutions of Schrödinger's equation, like Heisenberg's, provided derivations of the energy states of an electron in a hydrogen atom that were equivalent to those that had been derived first by Bohr in 1913, and that were known to reproduce the hydrogen spectrum. Once spin and the interaction between multiple electrons were describable, quantum mechanics made it possible to predict the configuration of electrons in atoms with atomic numbers greater than hydrogen. In 1928, building on Wolfgang Pauli's work, Paul Dirac produced a model of the electron – the Dirac equation, consistent with relativity theory, by applying relativistic and symmetry considerations to the hamiltonian formulation of the quantum mechanics of the electro-magnetic field. In order to resolve some problems within his relativistic equation, Dirac developed in 1930 a model of the vacuum as an infinite sea of particles with negative energy, later dubbed the Dirac sea. This led him to predict the existence of a positron, the antimatter counterpart of the electron. This particle was discovered in 1932 by Carl Anderson, who proposed calling standard electrons negatrons and using electron as a generic term to describe both the positively and negatively charged variants. In 1947, Willis Lamb, working in collaboration with graduate student Robert Retherford, found that certain quantum states of the hydrogen atom, which should have the same energy, were shifted in relation to each other; the difference came to be called the Lamb shift. About the same time, Polykarp Kusch, working with Henry M. Foley, discovered the magnetic moment of the electron is slightly larger than predicted by Dirac's theory. This small difference was later called anomalous magnetic dipole moment of the electron. This difference was later explained by the theory of quantum electrodynamics, developed by Sin-Itiro Tomonaga, Julian Schwinger and Richard Feynman in the late 1940s. Particle accelerators With the development of the particle accelerator during the first half of the twentieth century, physicists began to delve deeper into the properties of subatomic particles. The first successful attempt to accelerate electrons using electromagnetic induction was made in 1942 by Donald Kerst. His initial betatron reached energies of 2.3 MeV, while subsequent betatrons achieved 300 MeV. In 1947, synchrotron radiation was discovered with a 70 MeV electron synchrotron at General Electric. This radiation was caused by the acceleration of electrons through a magnetic field as they moved near the speed of light. With a beam energy of 1.5 GeV, the first high-energy particle collider was ADONE, which began operations in 1968. 
This device accelerated electrons and positrons in opposite directions, effectively doubling the energy of their collision when compared to striking a static target with an electron. The Large Electron–Positron Collider (LEP) at CERN, which was operational from 1989 to 2000, achieved collision energies of 209 GeV and made important measurements for the Standard Model of particle physics. Confinement of individual electrons Individual electrons can now be easily confined in ultra-small CMOS transistors operated at cryogenic temperature over a range of −269 °C (4 K) to about −258 °C (15 K). The electron wavefunction spreads in a semiconductor lattice and negligibly interacts with the valence band electrons, so it can be treated in the single particle formalism, by replacing its mass with the effective-mass tensor. Characteristics Classification In the Standard Model of particle physics, electrons belong to the group of subatomic particles called leptons, which are believed to be fundamental or elementary particles. Electrons have the lowest mass of any charged lepton (or electrically charged particle of any type) and belong to the first generation of fundamental particles. The second and third generations contain charged leptons, the muon and the tau, which are identical to the electron in charge, spin and interactions, but are more massive. Leptons differ from the other basic constituent of matter, the quarks, by their lack of strong interaction. All members of the lepton group are fermions because they all have half-odd integer spin; the electron has spin 1/2. Fundamental properties The invariant mass of an electron is approximately 9.109 × 10⁻³¹ kg, or 5.486 × 10⁻⁴ Da. Due to mass–energy equivalence, this corresponds to a rest energy of 0.511 MeV. The ratio between the mass of a proton and that of an electron is about 1836. Astronomical measurements show that the proton-to-electron mass ratio has held the same value, as is predicted by the Standard Model, for at least half the age of the universe. Electrons have an electric charge of −1.602 × 10⁻¹⁹ coulombs, which is used as a standard unit of charge for subatomic particles, and is also called the elementary charge. Within the limits of experimental accuracy, the electron charge is identical to the charge of a proton, but with the opposite sign. The electron is commonly symbolized by e−, and the positron is symbolized by e+. The electron has an intrinsic angular momentum or spin of ħ/2. This property is usually stated by referring to the electron as a spin-1/2 particle. For such particles the spin magnitude is (√3/2)ħ, while the result of the measurement of a projection of the spin on any axis can only be ±ħ/2. In addition to spin, the electron has an intrinsic magnetic moment along its spin axis. It is approximately equal to one Bohr magneton, a physical constant equal to about 9.274 × 10⁻²⁴ J/T. The orientation of the spin with respect to the momentum of the electron defines the property of elementary particles known as helicity. The electron has no known substructure. Nevertheless, in condensed matter physics, spin–charge separation can occur in some materials. In such cases, electrons 'split' into three independent particles, the spinon, the orbiton and the holon (or chargon). The electron can always be theoretically considered as a bound state of the three, with the spinon carrying the spin of the electron, the orbiton carrying the orbital degree of freedom and the chargon carrying the charge, but in certain conditions they can behave as independent quasiparticles. 
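As a rough illustration of the figures quoted above, the short sketch below evaluates the spin magnitude and the Bohr magneton from their defining expressions. The CODATA-style constant values are supplied here for illustration; they are not listed in the passage itself.

```python
import math

# Constants supplied for this sketch (not quoted in the article text)
hbar = 1.054571817e-34   # reduced Planck constant, J*s
e    = 1.602176634e-19   # elementary charge, C
m_e  = 9.1093837015e-31  # electron mass, kg

s = 0.5                                            # spin quantum number of the electron
spin_magnitude  = math.sqrt(s * (s + 1)) * hbar    # |S| = sqrt(s(s+1)) * hbar = (sqrt(3)/2) * hbar
spin_projection = s * hbar                         # measurable projection on any axis: +/- hbar/2
bohr_magneton   = e * hbar / (2 * m_e)             # mu_B = e*hbar / (2*m_e)

print(f"|S|  = {spin_magnitude:.3e} J*s")      # ~9.13e-35 J*s
print(f"S_z  = +/-{spin_projection:.3e} J*s")  # +/- hbar/2
print(f"mu_B = {bohr_magneton:.3e} J/T")       # ~9.274e-24 J/T
```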
The issue of the radius of the electron is a challenging problem of modern theoretical physics. The admission of the hypothesis of a finite radius of the electron is incompatible with the premises of the theory of relativity. On the other hand, a point-like electron (zero radius) generates serious mathematical difficulties due to the self-energy of the electron tending to infinity. Observation of a single electron in a Penning trap suggests the upper limit of the particle's radius to be 10⁻²² meters. The upper bound of the electron radius of 10⁻¹⁸ meters can be derived using the uncertainty relation in energy. There is also a physical constant called the "classical electron radius", with the much larger value of 2.8179 × 10⁻¹⁵ m, greater than the radius of the proton. However, the terminology comes from a simplistic calculation that ignores the effects of quantum mechanics; in reality, the so-called classical electron radius has little to do with the true fundamental structure of the electron. There are elementary particles that spontaneously decay into less massive particles. An example is the muon, with a mean lifetime of 2.2 × 10⁻⁶ seconds, which decays into an electron, a muon neutrino and an electron antineutrino. The electron, on the other hand, is thought to be stable on theoretical grounds: the electron is the least massive particle with non-zero electric charge, so its decay would violate charge conservation. The experimental lower bound for the electron's mean lifetime is 6.6 × 10²⁸ years, at a 90% confidence level. Quantum properties As with all particles, electrons can act as waves. This is called the wave–particle duality and can be demonstrated using the double-slit experiment. The wave-like nature of the electron allows it to pass through two parallel slits simultaneously, rather than just one slit as would be the case for a classical particle. In quantum mechanics, the wave-like property of one particle can be described mathematically as a complex-valued function, the wave function, commonly denoted by the Greek letter psi (ψ). When the absolute value of this function is squared, it gives the probability that a particle will be observed near a location—a probability density. Electrons are identical particles because they cannot be distinguished from each other by their intrinsic physical properties. In quantum mechanics, this means that a pair of interacting electrons must be able to swap positions without an observable change to the state of the system. The wave function of fermions, including electrons, is antisymmetric, meaning that it changes sign when two electrons are swapped; that is, ψ(r1, r2) = −ψ(r2, r1), where the variables r1 and r2 correspond to the first and second electrons, respectively. Since the absolute value is not changed by a sign swap, this corresponds to equal probabilities. Bosons, such as the photon, have symmetric wave functions instead. In the case of antisymmetry, solutions of the wave equation for interacting electrons result in a zero probability that each pair will occupy the same location or state. This is responsible for the Pauli exclusion principle, which precludes any two electrons from occupying the same quantum state. This principle explains many of the properties of electrons. For example, it causes groups of bound electrons to occupy different orbitals in an atom, rather than all overlapping each other in the same orbit. 
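The antisymmetry property described above can be illustrated with a toy two-electron wave function built as a 2×2 Slater determinant. The one-dimensional Gaussian "orbitals" below are invented purely for this sketch and have no physical significance.

```python
import math

# Two made-up one-dimensional single-particle orbitals, used purely for illustration.
def phi_a(x):
    return math.exp(-(x - 1.0) ** 2)

def phi_b(x):
    return math.exp(-(x + 1.0) ** 2)

# Antisymmetric two-electron wave function (an unnormalized 2x2 Slater determinant).
def psi(x1, x2):
    return phi_a(x1) * phi_b(x2) - phi_b(x1) * phi_a(x2)

x1, x2 = 0.3, -0.7
print(psi(x1, x2), psi(x2, x1))            # swapping the electrons flips the sign
print(psi(x1, x2) ** 2, psi(x2, x1) ** 2)  # |psi|^2, the probability density, is unchanged
print(psi(x1, x1))                         # exactly zero: both electrons cannot occupy the same position/state
```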
Virtual particles In a simplified picture, which often tends to give the wrong idea but may serve to illustrate some aspects, every photon spends some time as a combination of a virtual electron plus its antiparticle, the virtual positron, which rapidly annihilate each other shortly thereafter. The combination of the energy variation needed to create these particles, and the time during which they exist, fall under the threshold of detectability expressed by the Heisenberg uncertainty relation, ΔE · Δt ≥ ħ. In effect, the energy needed to create these virtual particles, ΔE, can be "borrowed" from the vacuum for a period of time, Δt, so that their product is no more than the reduced Planck constant, . Thus, for a virtual electron, Δt is at most . While an electron–positron virtual pair is in existence, the Coulomb force from the ambient electric field surrounding an electron causes a created positron to be attracted to the original electron, while a created electron experiences a repulsion. This causes what is called vacuum polarization. In effect, the vacuum behaves like a medium having a dielectric permittivity more than unity. Thus the effective charge of an electron is actually smaller than its true value, and the charge decreases with increasing distance from the electron. This polarization was confirmed experimentally in 1997 using the Japanese TRISTAN particle accelerator. Virtual particles cause a comparable shielding effect for the mass of the electron. The interaction with virtual particles also explains the small (about 0.1%) deviation of the intrinsic magnetic moment of the electron from the Bohr magneton (the anomalous magnetic moment). The extraordinarily precise agreement of this predicted difference with the experimentally determined value is viewed as one of the great achievements of quantum electrodynamics. The apparent paradox in classical physics of a point particle electron having intrinsic angular momentum and magnetic moment can be explained by the formation of virtual photons in the electric field generated by the electron. These photons can heuristically be thought of as causing the electron to shift about in a jittery fashion (known as zitterbewegung), which results in a net circular motion with precession. This motion produces both the spin and the magnetic moment of the electron. In atoms, this creation of virtual photons explains the Lamb shift observed in spectral lines. The Compton Wavelength shows that near elementary particles such as the electron, the uncertainty of the energy allows for the creation of virtual particles near the electron. This wavelength explains the "static" of virtual particles around elementary particles at a close distance. Interaction An electron generates an electric field that exerts an attractive force on a particle with a positive charge, such as the proton, and a repulsive force on a particle with a negative charge. The strength of this force in nonrelativistic approximation is determined by Coulomb's inverse square law. When an electron is in motion, it generates a magnetic field. The Ampère–Maxwell law relates the magnetic field to the mass motion of electrons (the current) with respect to an observer. This property of induction supplies the magnetic field that drives an electric motor. The electromagnetic field of an arbitrary moving charged particle is expressed by the Liénard–Wiechert potentials, which are valid even when the particle's speed is close to that of light (relativistic). 
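As a small numerical aside to the Coulomb-force discussion above, the sketch below evaluates the inverse-square law for an electron and a proton separated by one Bohr radius. The Bohr radius and constant values are assumptions added here for illustration; they are not quoted in the text.

```python
# Coulomb's inverse-square law for the electron-proton attraction at one Bohr radius.
k  = 8.9875517923e9     # Coulomb constant, N*m^2/C^2 (supplied for this sketch)
e  = 1.602176634e-19    # elementary charge, C
a0 = 5.29177210903e-11  # Bohr radius, m (supplied for this sketch)

F = k * e * e / a0**2   # attractive force magnitude between the two charges
print(f"F = {F:.2e} N") # ~8.2e-8 N
```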
When an electron is moving through a magnetic field, it is subject to the Lorentz force that acts perpendicularly to the plane defined by the magnetic field and the electron velocity. This centripetal force causes the electron to follow a helical trajectory through the field at a radius called the gyroradius. The acceleration from this curving motion induces the electron to radiate energy in the form of synchrotron radiation. The energy emission in turn causes a recoil of the electron, known as the Abraham–Lorentz–Dirac force, which creates a friction that slows the electron. This force is caused by a back-reaction of the electron's own field upon itself. Photons mediate electromagnetic interactions between particles in quantum electrodynamics. An isolated electron at a constant velocity cannot emit or absorb a real photon; doing so would violate conservation of energy and momentum. Instead, virtual photons can transfer momentum between two charged particles. This exchange of virtual photons, for example, generates the Coulomb force. Energy emission can occur when a moving electron is deflected by a charged particle, such as a proton. The deceleration of the electron results in the emission of Bremsstrahlung radiation. An inelastic collision between a photon (light) and a solitary (free) electron is called Compton scattering. This collision results in a transfer of momentum and energy between the particles, which modifies the wavelength of the photon by an amount called the Compton shift. The maximum magnitude of this wavelength shift is h/mec, which is known as the Compton wavelength. For an electron, it has a value of 2.43 × 10⁻¹² m. When the wavelength of the light is long (for instance, the wavelength of the visible light is 0.4–0.7 μm) the wavelength shift becomes negligible. Such interaction between the light and free electrons is called Thomson scattering or linear Thomson scattering. The relative strength of the electromagnetic interaction between two charged particles, such as an electron and a proton, is given by the fine-structure constant. This value is a dimensionless quantity formed by the ratio of two energies: the electrostatic energy of attraction (or repulsion) at a separation of one Compton wavelength, and the rest energy of the charge. It is given by α = e²/(4πε₀ħc), which is approximately equal to 7.297 × 10⁻³ (about 1/137). When electrons and positrons collide, they annihilate each other, giving rise to two or more gamma ray photons. If the electron and positron have negligible momentum, a positronium atom can form before annihilation results in two or three gamma ray photons totalling 1.022 MeV. On the other hand, a high-energy photon can transform into an electron and a positron by a process called pair production, but only in the presence of a nearby charged particle, such as a nucleus. In the theory of electroweak interaction, the left-handed component of the electron's wavefunction forms a weak isospin doublet with the electron neutrino. This means that during weak interactions, electron neutrinos behave like electrons. Either member of this doublet can undergo a charged current interaction by emitting or absorbing a W boson and be converted into the other member. Charge is conserved during this reaction because the W boson also carries a charge, canceling out any net change during the transmutation. Charged current interactions are responsible for the phenomenon of beta decay in a radioactive atom. 
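A short calculation, under the standard definitions used in the Compton-scattering and fine-structure passages above, reproduces the electron's Compton wavelength and the fine-structure constant. The constant values are supplied here for this sketch and are not quoted in the passage.

```python
import math

h    = 6.62607015e-34    # Planck constant, J*s
hbar = h / (2 * math.pi) # reduced Planck constant, J*s
m_e  = 9.1093837015e-31  # electron mass, kg
c    = 2.99792458e8      # speed of light, m/s
e    = 1.602176634e-19   # elementary charge, C
eps0 = 8.8541878128e-12  # vacuum permittivity, F/m

lambda_C = h / (m_e * c)                            # Compton wavelength of the electron
alpha    = e**2 / (4 * math.pi * eps0 * hbar * c)   # fine-structure constant

print(f"lambda_C = {lambda_C:.3e} m")                # ~2.43e-12 m
print(f"alpha    = {alpha:.6f} ~ 1/{1/alpha:.1f}")   # ~0.007297 ~ 1/137
```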
Both the electron and electron neutrino can undergo a neutral current interaction via a exchange, and this is responsible for neutrino–electron elastic scattering. Atoms and molecules An electron can be bound to the nucleus of an atom by the attractive Coulomb force. A system of one or more electrons bound to a nucleus is called an atom. If the number of electrons is different from the nucleus's electrical charge, such an atom is called an ion. The wave-like behavior of a bound electron is described by a function called an atomic orbital. Each orbital has its own set of quantum numbers such as energy, angular momentum and projection of angular momentum, and only a discrete set of these orbitals exist around the nucleus. According to the Pauli exclusion principle each orbital can be occupied by up to two electrons, which must differ in their spin quantum number. Electrons can transfer between different orbitals by the emission or absorption of photons with an energy that matches the difference in potential. Other methods of orbital transfer include collisions with particles, such as electrons, and the Auger effect. To escape the atom, the energy of the electron must be increased above its binding energy to the atom. This occurs, for example, with the photoelectric effect, where an incident photon exceeding the atom's ionization energy is absorbed by the electron. The orbital angular momentum of electrons is quantized. Because the electron is charged, it produces an orbital magnetic moment that is proportional to the angular momentum. The net magnetic moment of an atom is equal to the vector sum of orbital and spin magnetic moments of all electrons and the nucleus. The magnetic moment of the nucleus is negligible compared with that of the electrons. The magnetic moments of the electrons that occupy the same orbital, called paired electrons, cancel each other out. The chemical bond between atoms occurs as a result of electromagnetic interactions, as described by the laws of quantum mechanics. The strongest bonds are formed by the sharing or transfer of electrons between atoms, allowing the formation of molecules. Within a molecule, electrons move under the influence of several nuclei, and occupy molecular orbitals; much as they can occupy atomic orbitals in isolated atoms. A fundamental factor in these molecular structures is the existence of electron pairs. These are electrons with opposed spins, allowing them to occupy the same molecular orbital without violating the Pauli exclusion principle (much like in atoms). Different molecular orbitals have different spatial distribution of the electron density. For instance, in bonded pairs (i.e. in the pairs that actually bind atoms together) electrons can be found with the maximal probability in a relatively small volume between the nuclei. By contrast, in non-bonded pairs electrons are distributed in a large volume around nuclei.<ref ></ref> Conductivity If a body has more or fewer electrons than are required to balance the positive charge of the nuclei, then that object has a net electric charge. When there is an excess of electrons, the object is said to be negatively charged. When there are fewer electrons than the number of protons in nuclei, the object is said to be positively charged. When the number of electrons and the number of protons are equal, their charges cancel each other and the object is said to be electrically neutral. A macroscopic body can develop an electric charge through rubbing, by the triboelectric effect. 
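To make the statement about photon energies matching orbital energy differences concrete, the sketch below uses the textbook Bohr-model level formula for hydrogen, En = −13.6 eV/n², which is an assumption introduced here rather than a figure given in the text, to estimate the photon emitted in the n = 3 to n = 2 transition.

```python
# Photon emitted when a hydrogen electron drops from n = 3 to n = 2.
# The level formula E_n = -13.6 eV / n^2 is a standard textbook result,
# supplied here for illustration only.
h  = 6.62607015e-34    # Planck constant, J*s
c  = 2.99792458e8      # speed of light, m/s
eV = 1.602176634e-19   # joules per electronvolt

def level(n):
    return -13.6 / n**2          # energy of level n, in eV

delta_E = abs(level(3) - level(2))        # ~1.89 eV released as a photon
wavelength = h * c / (delta_E * eV)       # photon wavelength from E = h*c/lambda
print(f"dE = {delta_E:.2f} eV, lambda = {wavelength*1e9:.0f} nm")  # ~656 nm (the red H-alpha line)
```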
Independent electrons moving in vacuum are termed free electrons. Electrons in metals also behave as if they were free. In reality the particles that are commonly termed electrons in metals and other solids are quasi-electrons—quasiparticles, which have the same electrical charge, spin, and magnetic moment as real electrons but might have a different mass. When free electrons—both in vacuum and metals—move, they produce a net flow of charge called an electric current, which generates a magnetic field. Likewise a current can be created by a changing magnetic field. These interactions are described mathematically by Maxwell's equations. At a given temperature, each material has an electrical conductivity that determines the value of electric current when an electric potential is applied. Examples of good conductors include metals such as copper and gold, whereas glass and Teflon are poor conductors. In any dielectric material, the electrons remain bound to their respective atoms and the material behaves as an insulator. Most semiconductors have a variable level of conductivity that lies between the extremes of conduction and insulation. On the other hand, metals have an electronic band structure containing partially filled electronic bands. The presence of such bands allows electrons in metals to behave as if they were free or delocalized electrons. These electrons are not associated with specific atoms, so when an electric field is applied, they are free to move like a gas (called Fermi gas) through the material much like free electrons. Because of collisions between electrons and atoms, the drift velocity of electrons in a conductor is on the order of millimeters per second. However, the speed at which a change of current at one point in the material causes changes in currents in other parts of the material, the velocity of propagation, is typically about 75% of light speed. This occurs because electrical signals propagate as a wave, with the velocity dependent on the dielectric constant of the material. Metals make relatively good conductors of heat, primarily because the delocalized electrons are free to transport thermal energy between atoms. However, unlike electrical conductivity, the thermal conductivity of a metal is nearly independent of temperature. This is expressed mathematically by the Wiedemann–Franz law, which states that the ratio of thermal conductivity to the electrical conductivity is proportional to the temperature. The thermal disorder in the metallic lattice increases the electrical resistivity of the material, producing a temperature dependence for electric current. When cooled below a point called the critical temperature, materials can undergo a phase transition in which they lose all resistivity to electric current, in a process known as superconductivity. In BCS theory, pairs of electrons called Cooper pairs have their motion coupled to nearby matter via lattice vibrations called phonons, thereby avoiding the collisions with atoms that normally create electrical resistance. (Cooper pairs have a radius of roughly 100 nm, so they can overlap each other.) However, the mechanism by which higher temperature superconductors operate remains uncertain. Electrons inside conducting solids, which are quasi-particles themselves, when tightly confined at temperatures close to absolute zero, behave as though they had split into three other quasiparticles: spinons, orbitons and holons. 
The former carries spin and magnetic moment, the next carries its orbital location, while the latter carries electrical charge. Motion and energy According to Einstein's theory of special relativity, as an electron's speed approaches the speed of light, from an observer's point of view its relativistic mass increases, thereby making it more and more difficult to accelerate it from within the observer's frame of reference. The speed of an electron can approach, but never reach, the speed of light in vacuum, c. However, when relativistic electrons—that is, electrons moving at a speed close to c—are injected into a dielectric medium such as water, where the local speed of light is significantly less than c, the electrons temporarily travel faster than light in the medium. As they interact with the medium, they generate a faint light called Cherenkov radiation. The effects of special relativity are based on a quantity known as the Lorentz factor, defined as γ = 1/√(1 − v²/c²), where v is the speed of the particle. The kinetic energy Ke of an electron moving with velocity v is Ke = (γ − 1)mec², where me is the mass of the electron. For example, the Stanford linear accelerator can accelerate an electron to roughly 51 GeV. Since an electron behaves as a wave, at a given velocity it has a characteristic de Broglie wavelength. This is given by λe = h/p where h is the Planck constant and p is the momentum. For the 51 GeV electron above, the wavelength is about 2.4 × 10⁻¹⁷ m, small enough to explore structures well below the size of an atomic nucleus. Formation The Big Bang theory is the most widely accepted scientific theory to explain the early stages in the evolution of the Universe. For the first millisecond of the Big Bang, the temperatures were over 10 billion kelvins and photons had mean energies over a million electronvolts. These photons were sufficiently energetic that they could react with each other to form pairs of electrons and positrons. Likewise, positron–electron pairs annihilated each other and emitted energetic photons: e+ + e− ↔ γ + γ. An equilibrium between electrons, positrons and photons was maintained during this phase of the evolution of the Universe. After 15 seconds had passed, however, the temperature of the universe dropped below the threshold where electron-positron formation could occur. Most of the surviving electrons and positrons annihilated each other, releasing gamma radiation that briefly reheated the universe. For reasons that remain uncertain, during the annihilation process there was an excess in the number of particles over antiparticles. Hence, about one electron for every billion electron–positron pairs survived. This excess matched the excess of protons over antiprotons, in a condition known as baryon asymmetry, resulting in a net charge of zero for the universe. The surviving protons and neutrons began to participate in reactions with each other—in the process known as nucleosynthesis, forming isotopes of hydrogen and helium, with trace amounts of lithium. This process peaked after about five minutes. Any leftover neutrons underwent negative beta decay with a half-life of about a thousand seconds, releasing a proton and electron in the process, n → p + e− + ν̄e. For about the next 300,000–400,000 years, the excess electrons remained too energetic to bind with atomic nuclei. What followed is a period known as recombination, when neutral atoms were formed and the expanding universe became transparent to radiation. Roughly one million years after the Big Bang, the first generation of stars began to form. 
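The 51 GeV example in the Motion and energy passage above can be checked with a few lines of arithmetic. The constant values and the use of the relativistic energy-momentum relation are assumptions of this sketch, not statements from the text.

```python
import math

h  = 6.62607015e-34    # Planck constant, J*s
c  = 2.99792458e8      # speed of light, m/s
eV = 1.602176634e-19   # joules per electronvolt
m_e_c2 = 0.511e6 * eV  # electron rest energy (~0.511 MeV), in joules

E = 51e9 * eV                          # total energy of the 51 GeV example, in joules
gamma = E / m_e_c2                     # Lorentz factor
p = math.sqrt(E**2 - m_e_c2**2) / c    # momentum from E^2 = (pc)^2 + (m c^2)^2
lam = h / p                            # de Broglie wavelength lambda = h/p

print(f"gamma  ~ {gamma:.3e}")   # ~1.0e5
print(f"lambda ~ {lam:.2e} m")   # ~2.4e-17 m, far smaller than a nucleus (~1e-15 m)
```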
Within a star, stellar nucleosynthesis results in the production of positrons from the fusion of atomic nuclei. These antimatter particles immediately annihilate with electrons, releasing gamma rays. The net result is a steady reduction in the number of electrons, and a matching increase in the number of neutrons. However, the process of stellar evolution can result in the synthesis of radioactive isotopes. Selected isotopes can subsequently undergo negative beta decay, emitting an electron and antineutrino from the nucleus. An example is the cobalt-60 (60Co) isotope, which decays to form nickel-60 (60Ni). At the end of its lifetime, a star with more than about 20 solar masses can undergo gravitational collapse to form a black hole. According to classical physics, these massive stellar objects exert a gravitational attraction that is strong enough to prevent anything, even electromagnetic radiation, from escaping past the Schwarzschild radius. However, quantum mechanical effects are believed to potentially allow the emission of Hawking radiation at this distance. Electrons (and positrons) are thought to be created at the event horizon of these stellar remnants. When a pair of virtual particles (such as an electron and positron) is created in the vicinity of the event horizon, random spatial positioning might result in one of them appearing on the exterior; this process is called quantum tunnelling. The gravitational potential of the black hole can then supply the energy that transforms this virtual particle into a real particle, allowing it to radiate away into space. In exchange, the other member of the pair is given negative energy, which results in a net loss of mass–energy by the black hole. The rate of Hawking radiation increases with decreasing mass, eventually causing the black hole to evaporate away until, finally, it explodes. Cosmic rays are particles traveling through space with high energies. Energies as high as 3.0 × 10²⁰ eV have been recorded. When these particles collide with nucleons in the Earth's atmosphere, a shower of particles is generated, including pions. More than half of the cosmic radiation observed from the Earth's surface consists of muons. The particle called a muon is a lepton produced in the upper atmosphere by the decay of a pion: π− → μ− + ν̄μ. A muon, in turn, can decay to form an electron or positron: μ− → e− + ν̄e + νμ. Observation Remote observation of electrons requires detection of their radiated energy. For example, in high-energy environments such as the corona of a star, free electrons form a plasma that radiates energy due to Bremsstrahlung radiation. Electron gas can undergo plasma oscillation, which consists of waves caused by synchronized variations in electron density, and these produce energy emissions that can be detected by using radio telescopes. The frequency of a photon is proportional to its energy. As a bound electron transitions between different energy levels of an atom, it absorbs or emits photons at characteristic frequencies. For instance, when atoms are irradiated by a source with a broad spectrum, distinct dark lines appear in the spectrum of transmitted radiation in places where the corresponding frequency is absorbed by the atom's electrons. Each element or molecule displays a characteristic set of spectral lines, such as the hydrogen spectral series. When detected, spectroscopic measurements of the strength and width of these lines allow the composition and physical properties of a substance to be determined. 
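Since the frequency of a photon is proportional to its energy, a one-line conversion shows the scale involved for the annihilation gamma rays mentioned above. The 0.511 MeV per-photon figure is the electron rest energy quoted earlier in the article; the constant values are supplied here for illustration.

```python
# E = h*f for a photon: frequency and wavelength of a 0.511 MeV gamma ray,
# the energy carried per photon when an electron-positron pair annihilates at rest.
h  = 6.62607015e-34   # Planck constant, J*s
c  = 2.99792458e8     # speed of light, m/s
eV = 1.602176634e-19  # joules per electronvolt

E = 0.511e6 * eV          # photon energy in joules
f = E / h                 # frequency, ~1.24e20 Hz
lam = c / f               # wavelength, ~2.4e-12 m
print(f"f = {f:.2e} Hz, lambda = {lam:.2e} m")
```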
In laboratory conditions, the interactions of individual electrons can be observed by means of particle detectors, which allow measurement of specific properties such as energy, spin and charge. The development of the Paul trap and Penning trap allows charged particles to be contained within a small region for long durations. This enables precise measurements of the particle properties. For example, in one instance a Penning trap was used to contain a single electron for a period of 10 months. The magnetic moment of the electron was measured to a precision of eleven digits, which, in 1980, was a greater accuracy than for any other physical constant. The first video images of an electron's energy distribution were captured by a team at Lund University in Sweden, February 2008. The scientists used extremely short flashes of light, called attosecond pulses, which allowed an electron's motion to be observed for the first time. The distribution of the electrons in solid materials can be visualized by angle-resolved photoemission spectroscopy (ARPES). This technique employs the photoelectric effect to measure the reciprocal space—a mathematical representation of periodic structures that is used to infer the original structure. ARPES can be used to determine the direction, speed and scattering of electrons within the material. Plasma applications Particle beams Electron beams are used in welding. They allow energy densities up to across a narrow focus diameter of and usually require no filler material. This welding technique must be performed in a vacuum to prevent the electrons from interacting with the gas before reaching their target, and it can be used to join conductive materials that would otherwise be considered unsuitable for welding. Electron-beam lithography (EBL) is a method of etching semiconductors at resolutions smaller than a micrometer. This technique is limited by high costs, slow performance, the need to operate the beam in the vacuum and the tendency of the electrons to scatter in solids. The last problem limits the resolution to about 10 nm. For this reason, EBL is primarily used for the production of small numbers of specialized integrated circuits. Electron beam processing is used to irradiate materials in order to change their physical properties or sterilize medical and food products. Electron beams fluidise or quasi-melt glasses without significant increase of temperature on intensive irradiation: e.g. intensive electron radiation causes a many orders of magnitude decrease of viscosity and stepwise decrease of its activation energy. Linear particle accelerators generate electron beams for treatment of superficial tumors in radiation therapy. Electron therapy can treat such skin lesions as basal-cell carcinomas because an electron beam only penetrates to a limited depth before being absorbed, typically up to 5 cm for electron energies in the range 5–20 MeV. An electron beam can be used to supplement the treatment of areas that have been irradiated by X-rays. Particle accelerators use electric fields to propel electrons and their antiparticles to high energies. These particles emit synchrotron radiation as they pass through magnetic fields. The dependency of the intensity of this radiation upon spin polarizes the electron beam—a process known as the Sokolov–Ternov effect. Polarized electron beams can be useful for various experiments. Synchrotron radiation can also cool the electron beams to reduce the momentum spread of the particles. 
Electron and positron beams are collided upon the particles' accelerating to the required energies; particle detectors observe the resulting energy emissions, which particle physics studies. Imaging Low-energy electron diffraction (LEED) is a method of bombarding a crystalline material with a collimated beam of electrons and then observing the resulting diffraction patterns to determine the structure of the material. The required energy of the electrons is typically in the range 20–200 eV. The reflection high-energy electron diffraction (RHEED) technique uses the reflection of a beam of electrons fired at various low angles to characterize the surface of crystalline materials. The beam energy is typically in the range 8–20 keV and the angle of incidence is 1–4°. The electron microscope directs a focused beam of electrons at a specimen. Some electrons change their properties, such as movement direction, angle, and relative phase and energy as the beam interacts with the material. Microscopists can record these changes in the electron beam to produce atomically resolved images of the material. In blue light, conventional optical microscopes have a diffraction-limited resolution of about 200 nm. By comparison, electron microscopes are limited by the de Broglie wavelength of the electron. This wavelength, for example, is equal to 0.0037 nm for electrons accelerated across a 100,000-volt potential. The Transmission Electron Aberration-Corrected Microscope is capable of sub-0.05 nm resolution, which is more than enough to resolve individual atoms. This capability makes the electron microscope a useful laboratory instrument for high resolution imaging. However, electron microscopes are expensive instruments that are costly to maintain. Two main types of electron microscopes exist: transmission and scanning. Transmission electron microscopes function like overhead projectors, with a beam of electrons passing through a slice of material then being projected by lenses on a photographic slide or a charge-coupled device. Scanning electron microscopes rasteri a finely focused electron beam, as in a TV set, across the studied sample to produce the image. Magnifications range from 100× to 1,000,000× or higher for both microscope types. The scanning tunneling microscope uses quantum tunneling of electrons from a sharp metal tip into the studied material and can produce atomically resolved images of its surface. Other applications In the free-electron laser (FEL), a relativistic electron beam passes through a pair of undulators that contain arrays of dipole magnets whose fields point in alternating directions. The electrons emit synchrotron radiation that coherently interacts with the same electrons to strongly amplify the radiation field at the resonance frequency. FEL can emit a coherent high-brilliance electromagnetic radiation with a wide range of frequencies, from microwaves to soft X-rays. These devices are used in manufacturing, communication, and in medical applications, such as soft tissue surgery. Electrons are important in cathode-ray tubes, which have been extensively used as display devices in laboratory instruments, computer monitors and television sets. In a photomultiplier tube, every photon striking the photocathode initiates an avalanche of electrons that produces a detectable current pulse. Vacuum tubes use the flow of electrons to manipulate electrical signals, and they played a critical role in the development of electronics technology. 
However, they have been largely supplanted by solid-state devices such as the transistor. See also Notes References External links Leptons Elementary particles Quantum electrodynamics Spintronics Charge carriers 1897 in science
Electron
Physics,Chemistry,Materials_science
10,984
11,066,871
https://en.wikipedia.org/wiki/Font%20editor
A font editor is a class of application software specifically designed to create or modify font files. Font editors differ greatly depending on whether they are designed to edit bitmap fonts or outline fonts. Most modern font editors deal with outline fonts. Bitmap fonts use an older technology and are most commonly used in console applications. Bitmap font editors were usually very specialized, as each computing platform had its own font format. One subcategory of bitmap fonts is text mode fonts. List of font editors The following editors use outline vector graphics to create font files in common formats. Website FontStruct Free software Birdfont FontForge Inkscape Proprietary software FontLab (Mac, Windows) Fontographer (Mac, Windows) Ikarus Glyphs See also Typography Comparison of font editors References Editor Digital typography
Font editor
Technology
180
44,086,763
https://en.wikipedia.org/wiki/Merton%20Sandler
Merton Sandler (28 March 1926 – 24 August 2014) was a British professor of chemical pathology and a pioneer in biological psychiatry. Education and career Sandler grew up in an observant Jewish family in Salford. He studied at the Manchester Grammar School having won a scholarship, before studying medicine at the University of Manchester. Following his qualification in 1949, Sandler served two years of National Service in the Royal Army Medical Corps at Shoreham-by-Sea, attaining the rank of Captain. With his prior pathology training, he managed a small hospital laboratory during this period. In 1951 Sandler was appointed consultant chemical pathologist at Queen Charlotte’s Hospital. In 1959, he suggested a link between depression and monoamine deficiency in the brain, which led to the development of antidepressants. Sandler was Professor of Chemical Pathology at the University of London from 1973 to 1991, and Fellow Emeritus of the American College of Neuropsychopharmacology Private life Sandler married Lorna Grenby in 1961 and they had four children. He was an active Freemason initiated in 1954 in the In Arduis Fidelis Lodge (London), and two years later in the Holy Royal Arch. He belonged to several lodges and chapters, and held office in the United Grand Lodge of England. Awards Anna Monika Prize for research on biological aspects of depression (1973) Gold Medal British Migraine Association (1974) British Association for Psychopharmacology Lifetime Achievement Award (1999) CINP Pioneer Award for lifetime contribution to monoamine studies in human health and disease (2006) References External links 1926 births 2014 deaths People educated at Manchester Grammar School Alumni of the University of Manchester Chemical pathologists Academics of the University of London
Merton Sandler
Chemistry
347
40,317,384
https://en.wikipedia.org/wiki/Cyclotruncated%207-simplex%20honeycomb
In seven-dimensional Euclidean geometry, the cyclotruncated 7-simplex honeycomb is a space-filling tessellation (or honeycomb). The tessellation fills space by 7-simplex, truncated 7-simplex, bitruncated 7-simplex, and tritruncated 7-simplex facets. These facet types occur in proportions of 1:1:1:1 respectively in the whole honeycomb. Structure It can be constructed by eight sets of parallel hyperplanes that divide space. The hyperplane intersections generate cyclotruncated 6-simplex honeycomb divisions on each hyperplane. Related polytopes and honeycombs See also Regular and uniform honeycombs in 7-space: 7-cubic honeycomb 7-demicubic honeycomb 7-simplex honeycomb Omnitruncated 7-simplex honeycomb 331 honeycomb Notes References Norman Johnson Uniform Polytopes, Manuscript (1991) Kaleidoscopes: Selected Writings of H.S.M. Coxeter, edited by F. Arthur Sherk, Peter McMullen, Anthony C. Thompson, Asia Ivic Weiss, Wiley-Interscience Publication, 1995, (Paper 22) H.S.M. Coxeter, Regular and Semi Regular Polytopes I, [Math. Zeit. 46 (1940) 380-407, MR 2,10] (1.9 Uniform space-fillings) (Paper 24) H.S.M. Coxeter, Regular and Semi-Regular Polytopes III, [Math. Zeit. 200 (1988) 3-45] Honeycombs (geometry) 8-polytopes
Cyclotruncated 7-simplex honeycomb
Physics,Chemistry,Materials_science
348
55,763,499
https://en.wikipedia.org/wiki/Fire%20and%20carbon%20cycling%20in%20boreal%20forests
Terrestrial ecosystems found in the boreal (or taiga) regions of North America and Eurasia cover 17% of the Earth's land surface, and contain more than 30% of all carbon present in the terrestrial biome. In terms of carbon storage, the boreal region consists of three ecosystems: boreal forest, peatland, and tundra. Vast areas of the globe are contributing greatly to atmospheric carbon release due to increased temperature and fire hazard. High northern latitudes will experience the most significant increase in warming on the planet as a result of increased atmospheric greenhouse gases, placing the carbon sink in these areas in jeopardy. In addition to the release of carbon through the melting of permafrost, high intensity wildfires will become more common and thus contribute to the release of stored carbon. This means that the boreal forest and its fire regime are becoming an increasingly significant factor in determining the global carbon budget. Boreal forests are also important economic factors in Russia and Canada specifically, and the uncertainty of fire patterns in the future as a result of climate change is a major consideration in forest management plans. A decrease in allowed timber harvest could be a solution to long term uncertainty of fire cycles. Carbon cycling in boreal forests Although temperate and tropical forests in total cover twice as much land as boreal forest, boreal forest contains 20% more carbon than the other two combined. Boreal forests are susceptible to global warming because the ice/snow–albedo feedback is significantly influenced by surface temperature, so fire-induced changes in surface albedo and infrared emissivity are more significant than in the tropics. Boreal forest fires contribute greatly to greenhouse gas presence in the atmosphere. Large boreal fires produce enough energy to produce convective smoke columns that can break into the troposphere and occasionally penetrate across the tropopause. In addition, the cold temperatures in boreal regions result in low levels of water vapor. This low level of water vapor combined with low solar radiation results in very low photochemical production of the OH radical, which is a chemical that controls the atmospheric lifetime of most tropospheric gases. Therefore, the greenhouse gas emissions from boreal forest fires will have prolonged lifetimes over the forest. Fire regime The fire regimes of boreal forest in Canada and in Russia are distinct. In Russia, the climate is drier and the majority of fires are human-caused. This means that there are more frequent fires of lower intensity than in Canada and that most fire-related carbon output occurs in Russia. Forestry practices in Russia involve the use of heavy machinery and large-scale clear-cuts, leading to the alteration of fuel complexes. This practice is reportedly causing areas to degrade into grass steppes, rather than regenerating as new forest. This may result in the shortening of fire return intervals. Industrial practices in Russia also create additional fire hazards (severe damage in the Russian Federation affects about 9 million ha). Radioactive contamination on an area of about 7 million ha creates a fire hazard because fire can redistribute radionuclides. The majority of boreal forest fires in Canada are started by lightning. Consequently, there are fewer fires on average in Canada but a much higher frequency of high-intensity crown fires than in Russia, with a crown fire rate of 57% in Canada as opposed to 6% in Russia. 
Natural fire rotation across Canadian and Alaskan boreal forests is one to several centuries. Peatland and tundra Fire indirectly plays a role in the exchange of carbon between terrestrial surface and the atmosphere by regulating soil and moisture regimes, including plant succession, photosynthesis, and soil microbial processes. Soil in boreal regions is a significant global carbon sink; boreal forest soil holds 200 Gt of carbon while boreal peatlands hold 400 Gt of carbon. Northernmost permafrost regions contain 10,355 ± 150 Pg of soil organic carbon (SOC) in the top 0-3 m and 21% of this carbon is in the soil organic layer (SOL) pool found in the top 30 cm of the ground layer. The depth of the organic soil layer is one of the controls on permafrost, leading to a generalization of two domains in boreal forest: thick soil layer and thin soil layer. Thick organic soil insulates the subsoil from warmer summer temperatures and allows for permafrost to develop. Although permafrost keeps ground moist during winter, during summer months upper organic soil horizons will become desiccated. As average temperatures increase, Permafrost is melting at a faster rate and, correspondingly, the length of the fire season is increasing. When the fire-free interval (FFI) is decreased, the loss of the SOL may result in a domain change to a thin soil layer, leading to less carbon storage in the soil, greater fire vulnerability, and decreased permafrost. In black spruce forests, decreased FFI can ruin successional trajectories by opening the door for deciduous trees and shrubs to invade, which also further increases fire vulnerability. Data regarding carbon storage in the permafrost region as well as fire activity in boreal forests is sparse, which is a significant barrier in determining an accurate carbon budget. An expert assessment indicates that the permafrost region will become a net carbon source by 2100. A 5 - 10 degree C rise in forest floor temperature after a fire will significantly increase the rate of decomposition for years after the fire occurs, which temporarily turns the soil into a net carbon source (not sink) locally. Fire enhances the biogenic emissions of NO and N20 from soil. See also Permafrost carbon cycle Carbon cycle References Taiga and boreal forests Carbon cycle Fire
Fire and carbon cycling in boreal forests
Chemistry
1,160
11,598,476
https://en.wikipedia.org/wiki/Biebrich%20scarlet
Biebrich scarlet (C.I. 26905) is a molecule used in Lillie's trichrome. It is an anionic mono-azo dye, which is an important pigmenting agent in the textile and paper industries, used to color wool, silk, cotton, and paper. The dye was created in 1878 by the German chemist Rudolf Nietzki. He was employed by Kalle & Co. and completed his contributions in August 1880, when he claimed to be the inventor of Biebrich scarlet. The name Biebrich scarlet originated from Biebrich (Wiesbaden), the location where Kalle & Co. marketed the dye. Properties Biebrich scarlet has two alternative structures: the keto form, with the IUPAC name of 2-[(2Z)-2-(2-oxonaphthalen-1-ylidene)hydrazinyl]-5-[(4-sulfonatophenyl)diazenyl]benzene-1-sulfonate, and the enol form, with the IUPAC name of 2-[(2-hydroxynaphthalen-1-yl)diazenyl]-5-[(4-sulfophenyl)diazenyl]benzene-1-sulfonic acid. The dye has the molecular formula C22H16N4Na2O7S2, a molecular weight of 512.52 grams per mole, and an absorption maximum at 510 nm. Environmental impacts and applications Biebrich scarlet dyes are used to color hydrophobic materials like fats and oils. It is also one of the most commonly used dyes for plasma staining. The dye is not permitted as a food additive because of its carcinogenic properties. Biebrich scarlet can have harmful effects on living and non-living organisms in natural water. This dye is strongly pigmented, and its presence in water bodies, even at low quantities (10-50 mg/L), can be detected, reducing the transparency of the water ecosystem. It also hinders the entry of sunlight into the water, affecting both zooplankton and phytoplankton in the water ecosystem; therefore, the pollutant must be removed. Removal of the pollutant involves absorption, membrane filtration, precipitation, ozonation, fungal detachment, and electrochemical separation. Hydrogel absorbents have active sites to which the dye is held using electrostatic interactions. Photocatalysis allows for almost total degradation of Biebrich scarlet azo dye bonds in less than 10 hours. Degradation of Biebrich scarlet is also observed using lignin peroxidase enzyme from wood-rotting fungus in the presence of mediators like 2-chloro-1,4-dimethoxybenzene. With such a significant impact on the environment and surrounding resources, researchers are working to reduce the dye's presence in water bodies. Studies have shown techniques to remove the red dye Biebrich Scarlet (BS) from water using UV light and nanophotocatalysts like TiO₂, ZnO, CdS, and ZnS. Among these, ZnO performed the best in dye removal. To enhance the process, researchers adjusted factors such as catalyst concentration (0.25-1.25 g/L), solution pH (3-11), and dye concentration (5-100 mg/L). Precipitation was used to form the ZnO nanoparticles, which were then studied utilizing advanced technologies (XRD, FT-IR, TGA, SEM, and TEM) to confirm their characteristics. Experiments revealed that, under optimal conditions, these produced ZnO particles outperformed commercial ZnO powders in dye breakdown. Furthermore, the study found that the produced ZnO could be reused well, making it a suitable material for water treatment applications. See also Masson's trichrome stain References Staining Azo dyes Acid dyes
Biebrich scarlet
Chemistry,Biology
836
45,367,190
https://en.wikipedia.org/wiki/Penicillium%20corynephorum
Penicillium corynephorum is an anamorph species of the genus of Penicillium. See also List of Penicillium species References Further reading corynephorum Fungi described in 1985 Fungus species
Penicillium corynephorum
Biology
48
6,007,184
https://en.wikipedia.org/wiki/Pro-simplicial%20set
In mathematics, a pro-simplicial set is an inverse system of simplicial sets. A pro-simplicial set is called pro-finite if each term of the inverse system of simplicial sets has finite homotopy groups. Pro-simplicial sets show up in shape theory, in the study of localization and completion in homotopy theory, and in the study of homotopy properties of schemes (e.g. étale homotopy theory). References . . Simplicial sets
Pro-simplicial set
Mathematics
116
6,066,467
https://en.wikipedia.org/wiki/Itsy%20Pocket%20Computer
The Itsy Pocket Computer is a small, low-power, handheld device with a highly flexible interface. It was designed at Digital Equipment Corporation's Western Research Laboratory in Palo Alto to encourage novel user interface development—for example, it had accelerometers to detect movement and orientation as early as 1999. Hardware CPU: DEC StrongARM SA-1100 processor Memory: 16 MB of DRAM, 4 MB of flash memory Interfaces: I/O interfaces for audio input/output, IrDA, and an RS232 serial port Small 320 x 200 pixel LCD touchscreen for display and user input 10 general purpose push-buttons for additional user input purposes Power supply: Pair of standard AAA alkaline batteries References Related WRL Technical Notes The Itsy Pocket Computer Version 1.5: User's Manual (DEC WRL Technical Note WRL-TN-54) The Memory Daughter-Card Version 1.5: User's Manual (DEC WRL Technical Note WRL-TN-55) Power and Energy Characterization of the Itsy Pocket Computer (Version 1.5) (DEC WRL Technical Note WRL-TN-56) A Simple CMOS Camera for Itsy (DEC WRL Technical Note WRL-TN-58) Power Evaluation of Itsy Version 2.4 (DEC WRL Technical Note WRL-TN-59) Interpreting the Battery Lifetime of the Itsy Version 2.4 (DEC WRL Technical Note WRL-TN-59) The Itsy Pocket Computer, Joel F. Bartlett, Lawrence S. Brakmo, Keith I. Farkas, William R. Hamburgen, Timothy Mann, Marc A. Viredaz, Carl A. Waldspurger, Deborah A. Wallach, WRL Research Report 2000/6, Compaq Western Research Laboratory, 250 University Ave, Palo Alto, CA 94301. External links Itsy downloads at HP Labs DEC computers Personal digital assistants Prototypes
Itsy Pocket Computer
Technology
397
54,660,742
https://en.wikipedia.org/wiki/LRRC8B
Leucine-rich repeat-containing protein 8B is a protein that in humans is encoded by the LRRC8B gene. Research has shown that this protein, along with the other LRRC8 proteins LRRC8A, LRRC8C, LRRC8D, and LRRC8E, can serve as a subunit of the heteromeric volume-regulated anion channel (VRAC). VRACs are crucial to the regulation of cell size, transporting chloride ions and various organic osmolytes, such as taurine or glutamate, across the plasma membrane, and they have been linked to other functions as well. While LRRC8B is one of several proteins that can be part of VRAC, research has found that it is not as crucial to the activity of the channel as LRRC8A and LRRC8D. However, although LRRC8A and LRRC8D are necessary for VRAC function, studies have found that they are not sufficient for the full range of usual VRAC activity; the remaining LRRC8 proteins, including LRRC8B, contribute because the subunit composition affects the range of specificity of VRACs. In addition to its role in VRACs, the LRRC8 protein family is also associated with agammaglobulinemia-5. References Further reading Ion channels
LRRC8B
Chemistry
313
40,409,788
https://en.wikipedia.org/wiki/Convolutional%20neural%20network
A convolutional neural network (CNN) is a regularized type of feed-forward neural network that learns features by itself via filter (or kernel) optimization. This type of deep learning network has been applied to process and make predictions from many different types of data including text, images and audio. Convolution-based networks are the de-facto standard in deep learning-based approaches to computer vision and image processing, and have only recently been replaced—in some cases—by newer deep learning architectures such as the transformer. Vanishing gradients and exploding gradients, seen during backpropagation in earlier neural networks, are prevented by using regularized weights over fewer connections. For example, for each neuron in the fully-connected layer, 10,000 weights would be required for processing an image sized 100 × 100 pixels. However, applying cascaded convolution (or cross-correlation) kernels, only 25 neurons are required to process 5x5-sized tiles. Higher-layer features are extracted from wider context windows, compared to lower-layer features. Some applications of CNNs include: image and video recognition, recommender systems, image classification, image segmentation, medical image analysis, natural language processing, brain–computer interfaces, and financial time series. CNNs are also known as shift invariant or space invariant artificial neural networks, based on the shared-weight architecture of the convolution kernels or filters that slide along input features and provide translation-equivariant responses known as feature maps. Counter-intuitively, most convolutional neural networks are not invariant to translation, due to the downsampling operation they apply to the input. Feed-forward neural networks are usually fully connected networks, that is, each neuron in one layer is connected to all neurons in the next layer. The "full connectivity" of these networks makes them prone to overfitting data. Typical ways of regularization, or preventing overfitting, include: penalizing parameters during training (such as weight decay) or trimming connectivity (skipped connections, dropout, etc.) Robust datasets also increase the probability that CNNs will learn the generalized principles that characterize a given dataset rather than the biases of a poorly-populated set. Convolutional networks were inspired by biological processes in that the connectivity pattern between neurons resembles the organization of the animal visual cortex. Individual cortical neurons respond to stimuli only in a restricted region of the visual field known as the receptive field. The receptive fields of different neurons partially overlap such that they cover the entire visual field. CNNs use relatively little pre-processing compared to other image classification algorithms. This means that the network learns to optimize the filters (or kernels) through automated learning, whereas in traditional algorithms these filters are hand-engineered. This independence from prior knowledge and human intervention in feature extraction is a major advantage. Architecture A convolutional neural network consists of an input layer, hidden layers and an output layer. In a convolutional neural network, the hidden layers include one or more layers that perform convolutions. Typically this includes a layer that performs a dot product of the convolution kernel with the layer's input matrix. This product is usually the Frobenius inner product, and its activation function is commonly ReLU. 
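The parameter-count comparison in the passage above, and the kernel dot product followed by a ReLU activation, can be sketched in a few lines of NumPy. This is an illustrative toy, not an excerpt from any particular library.

```python
import numpy as np

# Parameter counting for the 100 x 100 pixel example discussed above:
# a fully connected neuron needs one weight per input pixel, whereas a
# convolutional neuron only needs the weights of its small, shared kernel.
fc_weights_per_neuron   = 100 * 100   # 10,000
conv_weights_per_kernel = 5 * 5       # 25 shared weights
print(fc_weights_per_neuron, conv_weights_per_kernel)

# Minimal "valid" cross-correlation (what deep-learning libraries call convolution)
# of a single-channel image with one 5x5 kernel, followed by ReLU.
def conv2d_relu(image, kernel):
    kh, kw = kernel.shape
    oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            # dot product (Frobenius inner product) of the kernel with one image patch
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return np.maximum(out, 0.0)   # ReLU activation

image  = np.random.rand(100, 100)
kernel = np.random.randn(5, 5)
feature_map = conv2d_relu(image, kernel)
print(feature_map.shape)   # (96, 96)
```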
As the convolution kernel slides along the input matrix for the layer, the convolution operation generates a feature map, which in turn contributes to the input of the next layer. This is followed by other layers such as pooling layers, fully connected layers, and normalization layers. Here it should be noted how close a convolutional neural network is to a matched filter. Convolutional layers In a CNN, the input is a tensor with shape: (number of inputs) × (input height) × (input width) × (input channels) After passing through a convolutional layer, the image becomes abstracted to a feature map, also called an activation map, with shape: (number of inputs) × (feature map height) × (feature map width) × (feature map channels). Convolutional layers convolve the input and pass its result to the next layer. This is similar to the response of a neuron in the visual cortex to a specific stimulus. Each convolutional neuron processes data only for its receptive field. Although fully connected feedforward neural networks can be used to learn features and classify data, this architecture is generally impractical for larger inputs (e.g., high-resolution images), which would require massive numbers of neurons because each pixel is a relevant input feature. A fully connected layer for an image of size 100 × 100 has 10,000 weights for each neuron in the second layer. Convolution reduces the number of free parameters, allowing the network to be deeper. For example, using a 5 × 5 tiling region, each with the same shared weights, requires only 25 neurons. Using regularized weights over fewer parameters avoids the vanishing gradients and exploding gradients problems seen during backpropagation in earlier neural networks. To speed processing, standard convolutional layers can be replaced by depthwise separable convolutional layers, which are based on a depthwise convolution followed by a pointwise convolution. The depthwise convolution is a spatial convolution applied independently over each channel of the input tensor, while the pointwise convolution is a standard convolution restricted to the use of kernels. Pooling layers Convolutional networks may include local and/or global pooling layers along with traditional convolutional layers. Pooling layers reduce the dimensions of data by combining the outputs of neuron clusters at one layer into a single neuron in the next layer. Local pooling combines small clusters, tiling sizes such as 2 × 2 are commonly used. Global pooling acts on all the neurons of the feature map. There are two common types of pooling in popular use: max and average. Max pooling uses the maximum value of each local cluster of neurons in the feature map, while average pooling takes the average value. Fully connected layers Fully connected layers connect every neuron in one layer to every neuron in another layer. It is the same as a traditional multilayer perceptron neural network (MLP). The flattened matrix goes through a fully connected layer to classify the images. Receptive field In neural networks, each neuron receives input from some number of locations in the previous layer. In a convolutional layer, each neuron receives input from only a restricted area of the previous layer called the neuron's receptive field. Typically the area is a square (e.g. 5 by 5 neurons). Whereas, in a fully connected layer, the receptive field is the entire previous layer. Thus, in each convolutional layer, each neuron takes input from a larger area in the input than previous layers. 
This growth occurs because the convolution is applied over and over, taking into account the value of a pixel as well as its surrounding pixels. When using dilated layers, the number of pixels in the receptive field remains constant, but the field is more sparsely populated as its dimensions grow when combining the effect of several layers. To manipulate the receptive field size as desired, there are some alternatives to the standard convolutional layer. For example, atrous or dilated convolution expands the receptive field size without increasing the number of parameters by interleaving visible and blind regions. Moreover, a single dilated convolutional layer can comprise filters with multiple dilation ratios, thus having a variable receptive field size. Weights Each neuron in a neural network computes an output value by applying a specific function to the input values received from the receptive field in the previous layer. The function that is applied to the input values is determined by a vector of weights and a bias (typically real numbers). Learning consists of iteratively adjusting these biases and weights. The vectors of weights and biases are called filters and represent particular features of the input (e.g., a particular shape). A distinguishing feature of CNNs is that many neurons can share the same filter. This reduces the memory footprint because a single bias and a single vector of weights are used across all receptive fields that share that filter, as opposed to each receptive field having its own bias and vector weighting. Deconvolutional A deconvolutional neural network is essentially the reverse of a CNN. It consists of deconvolutional layers and unpooling layers. A deconvolutional layer is the transpose of a convolutional layer. Specifically, a convolutional layer can be written as a multiplication with a matrix, and a deconvolutional layer is multiplication with the transpose of that matrix. An unpooling layer expands the layer. The max-unpooling layer is the simplest, as it simply copies each entry multiple times; for example, a 2-by-2 max-unpooling layer copies each entry of its input into a 2 × 2 block of the output. Deconvolution layers are used in image generators. By default, they create periodic checkerboard artifacts, which can be fixed by an upscale-then-convolve approach. History CNNs are often compared to the way the brain achieves vision processing in living organisms. Receptive fields in the visual cortex Work by Hubel and Wiesel in the 1950s and 1960s showed that cat visual cortices contain neurons that individually respond to small regions of the visual field. Provided the eyes are not moving, the region of visual space within which visual stimuli affect the firing of a single neuron is known as its receptive field. Neighboring cells have similar and overlapping receptive fields. Receptive field size and location varies systematically across the cortex to form a complete map of visual space. The cortex in each hemisphere represents the contralateral visual field. Their 1968 paper identified two basic visual cell types in the brain: simple cells, whose output is maximized by straight edges having particular orientations within their receptive field, and complex cells, which have larger receptive fields and whose output is insensitive to the exact position of the edges in the field. Hubel and Wiesel also proposed a cascading model of these two types of cells for use in pattern recognition tasks.
Neocognitron, origin of the CNN architecture Inspired by Hubel and Wiesel's work, Kunihiko Fukushima published in 1969 a deep CNN that used the ReLU activation function. Unlike most modern networks, this network used hand-designed kernels. The ReLU was not used in his later neocognitron, since all the weights there were nonnegative; lateral inhibition was used instead. The rectifier has become the most popular activation function for CNNs and deep neural networks in general. The "neocognitron" was introduced by Kunihiko Fukushima in 1979. The kernels were trained by unsupervised learning. It was inspired by the above-mentioned work of Hubel and Wiesel. The neocognitron introduced the two basic types of layers: "S-layer": a shared-weights receptive-field layer, later known as a convolutional layer, which contains units whose receptive fields cover a patch of the previous layer. A shared-weights receptive-field group (a "plane" in neocognitron terminology) is often called a filter, and a layer typically has several such filters. "C-layer": a downsampling layer that contains units whose receptive fields cover patches of previous convolutional layers. Such a unit typically computes a weighted average of the activations of the units in its patch, applies inhibition (divisive normalization) pooled from a somewhat larger patch and across different filters in a layer, and applies a saturating activation function. The patch weights are nonnegative and are not trainable in the original neocognitron. The downsampling and competitive inhibition help to classify features and objects in visual scenes even when the objects are shifted. In a variant of the neocognitron called the cresceptron, instead of using Fukushima's spatial averaging with inhibition and saturation, J. Weng et al. in 1993 introduced a method called max-pooling where a downsampling unit computes the maximum of the activations of the units in its patch. Max-pooling is often used in modern CNNs. Several supervised and unsupervised learning algorithms have been proposed over the decades to train the weights of a neocognitron. Today, however, the CNN architecture is usually trained through backpropagation. Convolution in time The term "convolution" first appears in neural networks in a paper by Toshiteru Homma, Les Atlas, and Robert Marks II at the first Conference on Neural Information Processing Systems in 1987. Their paper replaced multiplication with convolution in time, inherently providing shift invariance, motivated by and connecting more directly to the signal-processing concept of a filter, and demonstrated it on a speech recognition task. They also pointed out that as a data-trainable system, convolution is essentially equivalent to correlation since reversal of the weights does not affect the final learned function ("For convenience, we denote * as correlation instead of convolution. Note that convolving a(t) with b(t) is equivalent to correlating a(-t) with b(t)."). Modern CNN implementations typically do correlation and call it convolution, for convenience, as they did here. Time delay neural networks The time delay neural network (TDNN) was introduced in 1987 by Alex Waibel et al. for phoneme recognition and was one of the first convolutional networks, as it achieved shift-invariance. A TDNN is a 1-D convolutional neural net where the convolution is performed along the time axis of the data. It is the first CNN utilizing weight sharing in combination with training by gradient descent, using backpropagation.
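The equivalence between convolution and correlation quoted above can be checked numerically. The short sketch below (the toy signal and kernel are assumptions for illustration) performs a 1-D convolution along the time axis, in the spirit of a TDNN, and confirms that convolving with a kernel gives the same result as correlating with the time-reversed kernel.

```python
import numpy as np

t = np.array([0.0, 1.0, 2.0, 3.0, 2.0, 1.0, 0.0])   # toy 1-D signal over time
w = np.array([0.5, 0.3, 0.2])                        # toy asymmetric temporal kernel

conv = np.convolve(t, w, mode="valid")          # true convolution (kernel flipped)
corr = np.correlate(t, w[::-1], mode="valid")   # correlation with reversed kernel

print(np.allclose(conv, corr))   # True: the learned function is unaffected
```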
The TDNN thus also used a pyramidal structure as in the neocognitron, but it performed a global optimization of the weights instead of a local one. TDNNs are convolutional networks that share weights along the temporal dimension. They allow speech signals to be processed time-invariantly. In 1990 Hampshire and Waibel introduced a variant that performs a two-dimensional convolution. Since these TDNNs operated on spectrograms, the resulting phoneme recognition system was invariant to both time and frequency shifts, as with images processed by a neocognitron. TDNNs improved the performance of far-distance speech recognition. Image recognition with CNNs trained by gradient descent Denker et al. (1989) designed a 2-D CNN system to recognize hand-written ZIP Code numbers. However, the lack of an efficient training method to determine the kernel coefficients of the involved convolutions meant that all the coefficients had to be laboriously hand-designed. Following the advances in the training of 1-D CNNs by Waibel et al. (1987), Yann LeCun et al. (1989) used back-propagation to learn the convolution kernel coefficients directly from images of hand-written numbers. Learning was thus fully automatic, performed better than manual coefficient design, and was suited to a broader range of image recognition problems and image types. Wei Zhang et al. (1988) used back-propagation to train the convolution kernels of a CNN for alphabet recognition. The model was called shift-invariant pattern recognition neural network before the name CNN was coined later in the early 1990s. Wei Zhang et al. also applied the same CNN without the last fully connected layer for medical image object segmentation (1991) and breast cancer detection in mammograms (1994). This approach became a foundation of modern computer vision. Max pooling In 1990 Yamaguchi et al. introduced the concept of max pooling, a fixed filtering operation that calculates and propagates the maximum value of a given region. They did so by combining TDNNs with max pooling to realize a speaker-independent isolated word recognition system. In their system they used several TDNNs per word, one for each syllable. The results of each TDNN over the input signal were combined using max pooling and the outputs of the pooling layers were then passed on to networks performing the actual word classification. LeNet-5 LeNet-5, a pioneering 7-level convolutional network by LeCun et al. in 1995, classifies hand-written numbers on checks digitized in 32x32 pixel images. The ability to process higher-resolution images requires larger and more layers of convolutional neural networks, so this technique is constrained by the availability of computing resources. It was superior to other commercial courtesy-amount reading systems (as of 1995). The system was integrated in NCR's check reading systems, and fielded in several American banks since June 1996, reading millions of checks per day. Shift-invariant neural network A shift-invariant neural network was proposed by Wei Zhang et al. for image character recognition in 1988. It is a modified neocognitron, keeping only the convolutional interconnections between the image feature layers and the last fully connected layer. The model was trained with back-propagation. The training algorithm was further refined in 1991 to improve its generalization ability.
The model architecture was modified by removing the last fully connected layer and applied for medical image segmentation (1991) and automatic detection of breast cancer in mammograms (1994). A different convolution-based design was proposed in 1988 for application to decomposition of one-dimensional electromyography convolved signals via de-convolution. This design was modified in 1989 to other de-convolution-based designs. Topological deep learning Topological deep learning was first introduced in 2017. It integrates topological data analysis and convolutional neural networks for intricately complex data. Topological deep learning has become a new frontier in deep learning. GPU implementations Although CNNs were invented in the 1980s, their breakthrough in the 2000s required fast implementations on graphics processing units (GPUs). In 2004, it was shown by K. S. Oh and K. Jung that standard neural networks can be greatly accelerated on GPUs. Their implementation was 20 times faster than an equivalent implementation on CPU. In 2005, another paper also emphasised the value of GPGPU for machine learning. The first GPU implementation of a CNN was described in 2006 by K. Chellapilla et al. Their implementation was 4 times faster than an equivalent implementation on CPU. In the same period, GPUs were also used for unsupervised training of deep belief networks. In 2010, Dan Ciresan et al. at IDSIA trained deep feedforward networks on GPUs. In 2011, they extended this to CNNs, accelerating training by a factor of 60 compared to CPUs. In 2011, such a network won an image recognition contest, achieving superhuman performance for the first time. They then won more competitions and achieved state of the art on several benchmarks. Subsequently, AlexNet, a similar GPU-based CNN by Alex Krizhevsky et al., won the ImageNet Large Scale Visual Recognition Challenge 2012. It was an early catalytic event for the AI boom. Compared to the training of CNNs using GPUs, not much attention has been given to the CPU. Viebke et al. (2019) parallelized CNN training using the thread- and SIMD-level parallelism that is available on the Intel Xeon Phi. Distinguishing features In the past, traditional multilayer perceptron (MLP) models were used for image recognition. However, the full connectivity between nodes caused the curse of dimensionality, and was computationally intractable with higher-resolution images. A 1000×1000-pixel image with RGB color channels requires 3 million weights per fully connected neuron, which is too many to process efficiently at scale. For example, in CIFAR-10, images are only of size 32×32×3 (32 wide, 32 high, 3 color channels), so a single fully connected neuron in the first hidden layer of a regular neural network would have 32*32*3 = 3,072 weights. A 200×200 image, however, would lead to neurons that have 200*200*3 = 120,000 weights. Also, such network architecture does not take into account the spatial structure of data, treating input pixels which are far apart in the same way as pixels that are close together. This ignores locality of reference in data with a grid-topology (such as images), both computationally and semantically. Thus, full connectivity of neurons is wasteful for purposes such as image recognition that are dominated by spatially local input patterns. Convolutional neural networks are variants of multilayer perceptrons, designed to emulate the behavior of a visual cortex.
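The parameter counts quoted above can be reproduced with a small back-of-the-envelope sketch; the helper functions are illustrative assumptions, and the convolutional example assumes a single 5x5 kernel over 3 input channels.

```python
# Weights needed by a single fully connected neuron vs. a shared conv kernel.
def fc_weights_per_neuron(h, w, c):
    return h * w * c                      # one weight per input value

def conv_weights_per_filter(k, c_in):
    return k * k * c_in                   # shared across all spatial positions

print(fc_weights_per_neuron(32, 32, 3))      # 3,072   (CIFAR-10 image)
print(fc_weights_per_neuron(200, 200, 3))    # 120,000
print(fc_weights_per_neuron(1000, 1000, 3))  # 3,000,000
print(conv_weights_per_filter(5, 3))         # 75: one 5x5 kernel over 3 channels
```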
These models mitigate the challenges posed by the MLP architecture by exploiting the strong spatially local correlation present in natural images. As opposed to MLPs, CNNs have the following distinguishing features: 3D volumes of neurons. The layers of a CNN have neurons arranged in 3 dimensions: width, height and depth. Where each neuron inside a convolutional layer is connected to only a small region of the layer before it, called a receptive field. Distinct types of layers, both locally and completely connected, are stacked to form a CNN architecture. Local connectivity: following the concept of receptive fields, CNNs exploit spatial locality by enforcing a local connectivity pattern between neurons of adjacent layers. The architecture thus ensures that the learned "filters" produce the strongest response to a spatially local input pattern. Stacking many such layers leads to nonlinear filters that become increasingly global (i.e. responsive to a larger region of pixel space) so that the network first creates representations of small parts of the input, then from them assembles representations of larger areas. Shared weights: In CNNs, each filter is replicated across the entire visual field. These replicated units share the same parameterization (weight vector and bias) and form a feature map. This means that all the neurons in a given convolutional layer respond to the same feature within their specific response field. Replicating units in this way allows for the resulting activation map to be equivariant under shifts of the locations of input features in the visual field, i.e. they grant translational equivariance—given that the layer has a stride of one. Pooling: In a CNN's pooling layers, feature maps are divided into rectangular sub-regions, and the features in each rectangle are independently down-sampled to a single value, commonly by taking their average or maximum value. In addition to reducing the sizes of feature maps, the pooling operation grants a degree of local translational invariance to the features contained therein, allowing the CNN to be more robust to variations in their positions. Together, these properties allow CNNs to achieve better generalization on vision problems. Weight sharing dramatically reduces the number of free parameters learned, thus lowering the memory requirements for running the network and allowing the training of larger, more powerful networks. Building blocks A CNN architecture is formed by a stack of distinct layers that transform the input volume into an output volume (e.g. holding the class scores) through a differentiable function. A few distinct types of layers are commonly used. These are further discussed below. Convolutional layer The convolutional layer is the core building block of a CNN. The layer's parameters consist of a set of learnable filters (or kernels), which have a small receptive field, but extend through the full depth of the input volume. During the forward pass, each filter is convolved across the width and height of the input volume, computing the dot product between the filter entries and the input, producing a 2-dimensional activation map of that filter. As a result, the network learns filters that activate when it detects some specific type of feature at some spatial position in the input. Stacking the activation maps for all filters along the depth dimension forms the full output volume of the convolution layer. 
Every entry in the output volume can thus also be interpreted as an output of a neuron that looks at a small region in the input. Each entry in an activation map uses the same set of parameters that define the filter. Self-supervised learning has been adapted for use in convolutional layers by using sparse patches with a high-mask ratio and a global response normalization layer. Local connectivity When dealing with high-dimensional inputs such as images, it is impractical to connect neurons to all neurons in the previous volume because such a network architecture does not take the spatial structure of the data into account. Convolutional networks exploit spatially local correlation by enforcing a sparse local connectivity pattern between neurons of adjacent layers: each neuron is connected to only a small region of the input volume. The extent of this connectivity is a hyperparameter called the receptive field of the neuron. The connections are local in space (along width and height), but always extend along the entire depth of the input volume. Such an architecture ensures that the learned filters produce the strongest response to a spatially local input pattern. Spatial arrangement Three hyperparameters control the size of the output volume of the convolutional layer: the depth, stride, and padding size: The depth of the output volume controls the number of neurons in a layer that connect to the same region of the input volume. These neurons learn to activate for different features in the input. For example, if the first convolutional layer takes the raw image as input, then different neurons along the depth dimension may activate in the presence of various oriented edges, or blobs of color. Stride controls how depth columns around the width and height are allocated. If the stride is 1, then we move the filters one pixel at a time. This leads to heavily overlapping receptive fields between the columns, and to large output volumes. For any integer S > 0, a stride of S means that the filter is translated S units at a time per output. In practice, strides of 3 or more are rare. A greater stride means smaller overlap of receptive fields and smaller spatial dimensions of the output volume. Sometimes, it is convenient to pad the input with zeros (or other values, such as the average of the region) on the border of the input volume. The size of this padding is a third hyperparameter. Padding provides control of the output volume's spatial size. In particular, sometimes it is desirable to exactly preserve the spatial size of the input volume; this is commonly referred to as "same" padding. The spatial size of the output volume is a function of the input volume size W, the kernel field size K of the convolutional layer neurons, the stride S, and the amount of zero padding P on the border. The number of neurons that "fit" in a given volume is then (W - K + 2P)/S + 1. If this number is not an integer, then the strides are incorrect and the neurons cannot be tiled to fit across the input volume in a symmetric way. In general, setting the zero padding to P = (K - 1)/2 when the stride is S = 1 ensures that the input volume and output volume will have the same size spatially. However, it is not always completely necessary to use all of the neurons of the previous layer. For example, a neural network designer may decide to use just a portion of padding. Parameter sharing A parameter sharing scheme is used in convolutional layers to control the number of free parameters.
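A short sketch of the output-size relation above (the function name and example values are assumptions for illustration), including the "same"-padding rule P = (K - 1)/2 for stride 1:

```python
def conv_output_size(w, k, p, s):
    """Spatial output size for input width w, kernel size k, padding p, stride s.
    Valid only when (w - k + 2*p) is divisible by s."""
    span = w - k + 2 * p
    assert span % s == 0, "neurons cannot be tiled symmetrically"
    return span // s + 1

print(conv_output_size(w=227, k=11, p=0, s=4))  # 55: a 227-wide input, 11x11 kernel, stride 4
print(conv_output_size(w=32, k=3, p=1, s=1))    # 32: p = (k - 1)/2 preserves the input size
```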
The parameter sharing scheme relies on the assumption that if a patch feature is useful to compute at some spatial position, then it should also be useful to compute at other positions. Denoting a single 2-dimensional slice of depth as a depth slice, the neurons in each depth slice are constrained to use the same weights and bias. Since all neurons in a single depth slice share the same parameters, the forward pass in each depth slice of the convolutional layer can be computed as a convolution of the neuron's weights with the input volume. Therefore, it is common to refer to the sets of weights as a filter (or a kernel), which is convolved with the input. The result of this convolution is an activation map, and the set of activation maps for each different filter are stacked together along the depth dimension to produce the output volume. Parameter sharing contributes to the translation invariance of the CNN architecture. Sometimes, the parameter sharing assumption may not make sense. This is especially the case when the input images to a CNN have some specific centered structure, for which we expect completely different features to be learned at different spatial locations. One practical example is when the inputs are faces that have been centered in the image: we might expect different eye-specific or hair-specific features to be learned in different parts of the image. In that case it is common to relax the parameter sharing scheme, and instead simply call the layer a "locally connected layer". Pooling layer Another important concept of CNNs is pooling, which is used as a form of non-linear down-sampling. Pooling provides downsampling because it reduces the spatial dimensions (height and width) of the input feature maps while retaining the most important information. There are several non-linear functions to implement pooling, where max pooling and average pooling are the most common. Pooling aggregates information from small regions of the input, creating partitions of the input feature map, typically using a fixed-size window (such as 2x2) and applying a stride (often 2) to move the window across the input. Note that without using a stride greater than 1, pooling would not perform downsampling, as it would simply move the pooling window across the input one step at a time, without reducing the size of the feature map. In other words, the stride is what actually causes the downsampling by determining how much the pooling window moves over the input. Intuitively, the exact location of a feature is less important than its rough location relative to other features. This is the idea behind the use of pooling in convolutional neural networks. The pooling layer serves to progressively reduce the spatial size of the representation, to reduce the number of parameters, memory footprint and amount of computation in the network, and hence to also control overfitting. This is known as down-sampling. It is common to periodically insert a pooling layer between successive convolutional layers (each one typically followed by an activation function, such as a ReLU layer) in a CNN architecture. While pooling layers contribute to local translation invariance, they do not provide global translation invariance in a CNN, unless a form of global pooling is used. The pooling layer commonly operates independently on every depth, or slice, of the input and resizes it spatially.
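A minimal NumPy sketch of the two common pooling operations on a single depth slice (the window size, stride, and toy input are illustrative assumptions):

```python
import numpy as np

def pool2d(x, size=2, stride=2, op=np.max):
    """Apply max or average pooling to a single 2-D feature map."""
    h = (x.shape[0] - size) // stride + 1
    w = (x.shape[1] - size) // stride + 1
    out = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            window = x[i*stride:i*stride+size, j*stride:j*stride+size]
            out[i, j] = op(window)
    return out

fmap = np.arange(16, dtype=float).reshape(4, 4)   # toy 4x4 feature map
print(pool2d(fmap, op=np.max))    # 2x2 map of window maxima
print(pool2d(fmap, op=np.mean))   # 2x2 map of window averages
```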
A very common form of max pooling is a layer with filters of size 2×2, applied with a stride of 2, which subsamples every depth slice in the input by 2 along both width and height, discarding 75% of the activations. In this case, every max operation is over 4 numbers. The depth dimension remains unchanged (this is true for other forms of pooling as well). In addition to max pooling, pooling units can use other functions, such as average pooling or ℓ2-norm pooling. Average pooling was often used historically but has recently fallen out of favor compared to max pooling, which generally performs better in practice. Due to the effects of fast spatial reduction of the size of the representation, there is a recent trend towards using smaller filters or discarding pooling layers altogether. Channel max pooling A channel max pooling (CMP) layer conducts the max pooling (MP) operation along the channel dimension at the corresponding positions of consecutive feature maps in order to eliminate redundant information. The CMP concentrates the significant features within fewer channels, which is important for fine-grained image classification that needs more discriminating features. Another advantage of the CMP operation is that it reduces the number of feature-map channels before they connect to the first fully connected (FC) layer. Similar to the MP operation, we denote the input feature maps and output feature maps of a CMP layer as F ∈ R^(C×M×N) and C ∈ R^(c×M×N), respectively, where C and c are the channel numbers of the input and output feature maps, and M and N are the width and the height of the feature maps, respectively. Note that the CMP operation only changes the channel number of the feature maps. The width and the height of the feature maps are not changed, which is different from the MP operation. Reviews of pooling methods are available in the literature. ReLU layer ReLU is the abbreviation of rectified linear unit. It was proposed by Alston Householder in 1941, and used in CNNs by Kunihiko Fukushima in 1969. ReLU applies the non-saturating activation function f(x) = max(0, x). It effectively removes negative values from an activation map by setting them to zero. It introduces nonlinearity to the decision function and in the overall network without affecting the receptive fields of the convolution layers. In 2011, Xavier Glorot, Antoine Bordes and Yoshua Bengio found that ReLU enables better training of deeper networks, compared to widely used activation functions prior to 2011. Other functions can also be used to increase nonlinearity, for example the saturating hyperbolic tangent f(x) = tanh(x) or f(x) = |tanh(x)|, and the sigmoid function σ(x) = 1/(1 + e^(-x)). ReLU is often preferred to other functions because it trains the neural network several times faster without a significant penalty to generalization accuracy. Fully connected layer After several convolutional and max pooling layers, the final classification is done via fully connected layers. Neurons in a fully connected layer have connections to all activations in the previous layer, as seen in regular (non-convolutional) artificial neural networks. Their activations can thus be computed as an affine transformation, with matrix multiplication followed by a bias offset (vector addition of a learned or fixed bias term). Loss layer The "loss layer", or "loss function", specifies how training penalizes the deviation between the predicted output of the network and the true data labels (during supervised learning). Various loss functions can be used, depending on the specific task.
The Softmax loss function is used for predicting a single class of K mutually exclusive classes. Sigmoid cross-entropy loss is used for predicting K independent probability values in [0, 1]. Euclidean loss is used for regressing to real-valued labels in (-∞, ∞). Hyperparameters Hyperparameters are various settings that are used to control the learning process. CNNs use more hyperparameters than a standard multilayer perceptron (MLP). Kernel size The kernel size is the number of pixels processed together. It is typically expressed as the kernel's dimensions, e.g., 2x2 or 3x3. Padding Padding is the addition of (typically) 0-valued pixels on the borders of an image. This is done so that the border pixels are not undervalued (lost) from the output because they would ordinarily participate in only a single receptive field instance. The padding applied is typically one less than the corresponding kernel dimension. For example, a convolutional layer using 3x3 kernels would receive a 2-pixel pad, that is, 1 pixel on each side of the image. Stride The stride is the number of pixels that the analysis window moves on each iteration. A stride of 2 means that each kernel is offset by 2 pixels from its predecessor. Number of filters Since feature map size decreases with depth, layers near the input layer tend to have fewer filters while higher layers can have more. To equalize computation at each layer, the product of the number of feature maps and the number of pixel positions is kept roughly constant across layers. Preserving more information about the input would require keeping the total number of activations (number of feature maps times number of pixel positions) non-decreasing from one layer to the next. The number of feature maps directly controls the capacity and depends on the number of available examples and task complexity. Filter size Common filter sizes found in the literature vary greatly, and are usually chosen based on the data set. Typical filter sizes range from 1x1 to 7x7. As two famous examples, AlexNet used 3x3, 5x5, and 11x11. Inceptionv3 used 1x1, 3x3, and 5x5. The challenge is to find the right level of granularity so as to create abstractions at the proper scale, given a particular data set, and without overfitting. Pooling type and size Max pooling is typically used, often with a 2x2 dimension. This implies that the input is drastically downsampled, reducing processing cost. Greater pooling reduces the dimension of the signal, and may result in unacceptable information loss. Often, non-overlapping pooling windows perform best. Dilation Dilation involves ignoring pixels within a kernel. This reduces processing and memory requirements, potentially without significant signal loss. A dilation of 2 on a 3x3 kernel expands the kernel to 5x5, while still processing 9 (evenly spaced) pixels. Specifically, the processed pixels after the dilation are the cells (1,1), (1,3), (1,5), (3,1), (3,3), (3,5), (5,1), (5,3), (5,5), where (i,j) denotes the cell of the i-th row and j-th column in the expanded 5x5 kernel. Accordingly, a dilation of 3 expands the kernel to 7x7, and a dilation of 4 to 9x9. Translation equivariance and aliasing It is commonly assumed that CNNs are invariant to shifts of the input. Convolution or pooling layers within a CNN that do not have a stride greater than one are indeed equivariant to translations of the input.
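The stride-one equivariance just stated can be verified numerically. In the hedged sketch below (the input, kernel, and amount of shift are illustrative assumptions), shifting the input and then convolving gives the same feature map, shifted, as convolving first; the rows affected by the circular shift are cropped out of the comparison.

```python
import numpy as np

def correlate2d_valid(x, k):
    """Plain stride-1 'valid' cross-correlation of a 2-D input with a kernel."""
    kh, kw = k.shape
    out = np.zeros((x.shape[0] - kh + 1, x.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

x = np.random.rand(10, 10)
k = np.random.rand(3, 3)

# Shift the input down by two rows, convolve, then drop the wrapped-around border.
shifted_then_conv = correlate2d_valid(np.roll(x, 2, axis=0), k)[2:]
# Convolve first, then compare against the correspondingly shifted feature map.
conv_then_shifted = correlate2d_valid(x, k)[:-2]
print(np.allclose(shifted_then_conv, conv_then_shifted))   # True: equivariant
```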
However, layers with a stride greater than one ignore the Nyquist-Shannon sampling theorem and might lead to aliasing of the input signal. While, in principle, CNNs are capable of implementing anti-aliasing filters, it has been observed that this does not happen in practice, yielding models that are not equivariant to translations. Furthermore, if a CNN makes use of fully connected layers, translation equivariance does not imply translation invariance, as the fully connected layers are not invariant to shifts of the input. One solution for complete translation invariance is avoiding any down-sampling throughout the network and applying global average pooling at the last layer. Additionally, several other partial solutions have been proposed, such as anti-aliasing before downsampling operations, spatial transformer networks, data augmentation, subsampling combined with pooling, and capsule neural networks. Evaluation The accuracy of the final model is based on a sub-part of the dataset set apart at the start, often called a test set. Other times, methods such as k-fold cross-validation are applied. Other strategies include using conformal prediction. Regularization methods Regularization is a process of introducing additional information to solve an ill-posed problem or to prevent overfitting. CNNs use various types of regularization. Empirical Dropout Because a fully connected layer occupies most of the parameters, it is prone to overfitting. One method to reduce overfitting is dropout, introduced in 2014. At each training stage, individual nodes are either "dropped out" of the net (ignored) with probability 1 - p or kept with probability p, so that a reduced network is left; incoming and outgoing edges to a dropped-out node are also removed. Only the reduced network is trained on the data in that stage. The removed nodes are then reinserted into the network with their original weights. In the training stages, p is usually 0.5; for input nodes, it is typically much higher because information is directly lost when input nodes are ignored. At testing time after training has finished, we would ideally like to find a sample average of all possible dropped-out networks; unfortunately this is unfeasible for large numbers of nodes. However, we can find an approximation by using the full network with each node's output weighted by a factor of p, so the expected value of the output of any node is the same as in the training stages. This is the biggest contribution of the dropout method: although it effectively generates an exponential number of neural nets, and as such allows for model combination, at test time only a single network needs to be tested. By avoiding training all nodes on all training data, dropout decreases overfitting. The method also significantly improves training speed. This makes the model combination practical, even for deep neural networks. The technique seems to reduce node interactions, leading them to learn more robust features that better generalize to new data. DropConnect DropConnect is the generalization of dropout in which each connection, rather than each output unit, can be dropped with probability 1 - p. Each unit thus receives input from a random subset of units in the previous layer. DropConnect is similar to dropout as it introduces dynamic sparsity within the model, but differs in that the sparsity is on the weights, rather than the output vectors of a layer.
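A hedged sketch of the dropout scheme described above (the keep probability and layer size are illustrative assumptions): at training time nodes are kept with probability p, and at test time the full network is used with each output scaled by p.

```python
import numpy as np

rng = np.random.default_rng(0)

def dropout_forward(h, p, train=True):
    """h: activations of one layer; p: probability of keeping a node."""
    if train:
        mask = rng.random(h.shape) < p   # randomly drop nodes for this step
        return h * mask
    return h * p                          # test time: scale by the keep probability

h = rng.random(8)                         # toy activations of one layer
print(dropout_forward(h, p=0.5))              # training pass: a thinned network
print(dropout_forward(h, p=0.5, train=False)) # test pass: expected value matches
```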
With DropConnect, the fully connected layer thus becomes a sparsely connected layer in which the connections are chosen at random during the training stage. Stochastic pooling A major drawback to dropout is that it does not have the same benefits for convolutional layers, where the neurons are not fully connected. Even before dropout, in 2013 a technique called stochastic pooling was introduced, in which the conventional deterministic pooling operations are replaced with a stochastic procedure: the activation within each pooling region is picked randomly according to a multinomial distribution given by the activities within the pooling region. This approach is free of hyperparameters and can be combined with other regularization approaches, such as dropout and data augmentation. An alternate view of stochastic pooling is that it is equivalent to standard max pooling but with many copies of an input image, each having small local deformations. This is similar to explicit elastic deformations of the input images, which delivers excellent performance on the MNIST data set. Using stochastic pooling in a multilayer model gives an exponential number of deformations since the selections in higher layers are independent of those below. Artificial data Because the degree of model overfitting is determined by both its power and the amount of training it receives, providing a convolutional network with more training examples can reduce overfitting. Because there is often not enough available data to train, especially considering that some part should be spared for later testing, two approaches are to either generate new data from scratch (if possible) or perturb existing data to create new ones. The latter approach has been used since the mid-1990s. For example, input images can be cropped, rotated, or rescaled to create new examples with the same labels as the original training set. Explicit Early stopping One of the simplest methods to prevent overfitting of a network is to simply stop the training before overfitting has had a chance to occur. It comes with the disadvantage that the learning process is halted. Number of parameters Another simple way to prevent overfitting is to limit the number of parameters, typically by limiting the number of hidden units in each layer or limiting network depth. For convolutional networks, the filter size also affects the number of parameters. Limiting the number of parameters restricts the predictive power of the network directly, reducing the complexity of the function that it can perform on the data, and thus limits the amount of overfitting. This is equivalent to a "zero norm". Weight decay A simple form of added regularizer is weight decay, which simply adds an additional error, proportional to the sum of the absolute values of the weights (L1 norm) or the squared magnitude (L2 norm) of the weight vector, to the error at each node. The level of acceptable model complexity can be reduced by increasing the proportionality constant (the 'alpha' hyperparameter), thus increasing the penalty for large weight vectors. L2 regularization is the most common form of regularization. It can be implemented by penalizing the squared magnitude of all parameters directly in the objective. The L2 regularization has the intuitive interpretation of heavily penalizing peaky weight vectors and preferring diffuse weight vectors. Due to multiplicative interactions between weights and inputs, this has the useful property of encouraging the network to use all of its inputs a little rather than some of its inputs a lot.
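A minimal sketch of how a weight-decay penalty enters the objective (the loss value, the weight arrays, and the coefficient alpha are illustrative assumptions):

```python
import numpy as np

def penalized_loss(data_loss, weights, alpha=1e-4, norm="l2"):
    """Add an L1 or L2 weight-decay term to a task loss."""
    w = np.concatenate([p.ravel() for p in weights])
    penalty = np.sum(np.abs(w)) if norm == "l1" else np.sum(w ** 2)
    return data_loss + alpha * penalty

weights = [np.random.randn(25, 8), np.random.randn(8)]   # toy parameter tensors
print(penalized_loss(0.37, weights))                # L2 penalty (prefers diffuse weights)
print(penalized_loss(0.37, weights, norm="l1"))     # L1 penalty (encourages sparsity)
```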
L1 regularization is also common. It makes the weight vectors sparse during optimization. In other words, neurons with L1 regularization end up using only a sparse subset of their most important inputs and become nearly invariant to the noisy inputs. L1 and L2 regularization can be combined; this is called elastic net regularization. Max norm constraints Another form of regularization is to enforce an absolute upper bound on the magnitude of the weight vector for every neuron and use projected gradient descent to enforce the constraint. In practice, this corresponds to performing the parameter update as normal, and then enforcing the constraint by clamping the weight vector w of every neuron to satisfy ‖w‖₂ < c. Typical values of c are on the order of 3–4. Some papers report improvements when using this form of regularization. Hierarchical coordinate frames Pooling loses the precise spatial relationships between high-level parts (such as nose and mouth in a face image). These relationships are needed for identity recognition. Overlapping the pools so that each feature occurs in multiple pools helps retain the information. Translation alone cannot extrapolate the understanding of geometric relationships to a radically new viewpoint, such as a different orientation or scale. On the other hand, people are very good at extrapolating; after seeing a new shape once they can recognize it from a different viewpoint. An earlier common way to deal with this problem is to train the network on transformed data in different orientations, scales, lighting, etc. so that the network can cope with these variations. This is computationally intensive for large data-sets. The alternative is to use a hierarchy of coordinate frames and use a group of neurons to represent a conjunction of the shape of the feature and its pose relative to the retina. The pose relative to the retina is the relationship between the coordinate frame of the retina and the intrinsic features' coordinate frame. Thus, one way to represent something is to embed the coordinate frame within it. This allows large features to be recognized by using the consistency of the poses of their parts (e.g. nose and mouth poses make a consistent prediction of the pose of the whole face). This approach ensures that the higher-level entity (e.g. face) is present when the lower-level entities (e.g. nose and mouth) agree on their prediction of the pose. The vectors of neuronal activity that represent pose ("pose vectors") allow spatial transformations to be modeled as linear operations, which makes it easier for the network to learn the hierarchy of visual entities and generalize across viewpoints. This is similar to the way the human visual system imposes coordinate frames in order to represent shapes. Applications Image recognition CNNs are often used in image recognition systems. In 2012, an error rate of 0.23% on the MNIST database was reported. Another paper on using CNNs for image classification reported that the learning process was "surprisingly fast"; in the same paper, the best published results as of 2011 were achieved in the MNIST database and the NORB database. Subsequently, a similar CNN called AlexNet won the ImageNet Large Scale Visual Recognition Challenge 2012. When applied to facial recognition, CNNs achieved a large decrease in error rate. Another paper reported a 97.6% recognition rate on "5,600 still images of more than 10 subjects".
CNNs were used to assess video quality in an objective way after manual training; the resulting system had a very low root mean square error. The ImageNet Large Scale Visual Recognition Challenge is a benchmark in object classification and detection, with millions of images and hundreds of object classes. In the ILSVRC 2014, a large-scale visual recognition challenge, almost every highly ranked team used CNN as their basic framework. The winner GoogLeNet (the foundation of DeepDream) increased the mean average precision of object detection to 0.439329, and reduced classification error to 0.06656, the best result to date. Its network applied more than 30 layers. That performance of convolutional neural networks on the ImageNet tests was close to that of humans. The best algorithms still struggle with objects that are small or thin, such as a small ant on a stem of a flower or a person holding a quill in their hand. They also have trouble with images that have been distorted with filters, an increasingly common phenomenon with modern digital cameras. By contrast, those kinds of images rarely trouble humans. Humans, however, tend to have trouble with other issues. For example, they are not good at classifying objects into fine-grained categories such as the particular breed of dog or species of bird, whereas convolutional neural networks handle this. In 2015, a many-layered CNN demonstrated the ability to spot faces from a wide range of angles, including upside down, even when partially occluded, with competitive performance. The network was trained on a database of 200,000 images that included faces at various angles and orientations and a further 20 million images without faces. They used batches of 128 images over 50,000 iterations. Video analysis Compared to image data domains, there is relatively little work on applying CNNs to video classification. Video is more complex than images since it has another (temporal) dimension. However, some extensions of CNNs into the video domain have been explored. One approach is to treat space and time as equivalent dimensions of the input and perform convolutions in both time and space. Another way is to fuse the features of two convolutional neural networks, one for the spatial and one for the temporal stream. Long short-term memory (LSTM) recurrent units are typically incorporated after the CNN to account for inter-frame or inter-clip dependencies. Unsupervised learning schemes for training spatio-temporal features have been introduced, based on Convolutional Gated Restricted Boltzmann Machines and Independent Subspace Analysis. Its application can be seen in text-to-video model. Natural language processing CNNs have also been explored for natural language processing. CNN models are effective for various NLP problems and achieved excellent results in semantic parsing, search query retrieval, sentence modeling, classification, prediction and other traditional NLP tasks. Compared to traditional language processing methods such as recurrent neural networks, CNNs can represent different contextual realities of language that do not rely on a series-sequence assumption, while RNNs are better suitable when classical time series modeling is required. Anomaly detection A CNN with 1-D convolutions was used on time series in the frequency domain (spectral residual) by an unsupervised model to detect anomalies in the time domain. Drug discovery CNNs have been used in drug discovery. 
Predicting the interaction between molecules and biological proteins can identify potential treatments. In 2015, Atomwise introduced AtomNet, the first deep learning neural network for structure-based drug design. The system trains directly on 3-dimensional representations of chemical interactions. Similar to how image recognition networks learn to compose smaller, spatially proximate features into larger, complex structures, AtomNet discovers chemical features, such as aromaticity, sp3 carbons, and hydrogen bonding. Subsequently, AtomNet was used to predict novel candidate biomolecules for multiple disease targets, most notably treatments for the Ebola virus and multiple sclerosis. In 2016-2019, topologyNet and mathematical deep learning achieved first place in multiple categories of the D3R Grand Challenges, a worldwide annual competition series focused on computer-aided drug design. Checkers game CNNs have been used in the game of checkers. From 1999 to 2001, Fogel and Chellapilla published papers showing how a convolutional neural network could learn to play checker using co-evolution. The learning process did not use prior human professional games, but rather focused on a minimal set of information contained in the checkerboard: the location and type of pieces, and the difference in number of pieces between the two sides. Ultimately, the program (Blondie24) was tested on 165 games against players and ranked in the highest 0.4%. It also earned a win against the program Chinook at its "expert" level of play. Go CNNs have been used in computer Go. In December 2014, Clark and Storkey published a paper showing that a CNN trained by supervised learning from a database of human professional games could outperform GNU Go and win some games against Monte Carlo tree search Fuego 1.1 in a fraction of the time it took Fuego to play. Later it was announced that a large 12-layer convolutional neural network had correctly predicted the professional move in 55% of positions, equalling the accuracy of a 6 dan human player. When the trained convolutional network was used directly to play games of Go, without any search, it beat the traditional search program GNU Go in 97% of games, and matched the performance of the Monte Carlo tree search program Fuego simulating ten thousand playouts (about a million positions) per move. A couple of CNNs for choosing moves to try ("policy network") and evaluating positions ("value network") driving MCTS were used by AlphaGo, the first to beat the best human player at the time. Time series forecasting Recurrent neural networks are generally considered the best neural network architectures for time series forecasting (and sequence modeling in general), but recent studies show that convolutional networks can perform comparably or even better. Dilated convolutions might enable one-dimensional convolutional neural networks to effectively learn time series dependences. Convolutions can be implemented more efficiently than RNN-based solutions, and they do not suffer from vanishing (or exploding) gradients. Convolutional networks can provide an improved forecasting performance when there are multiple similar time series to learn from. CNNs can also be applied to further tasks in time series analysis (e.g., time series classification or quantile forecasting). 
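As a hedged illustration of the dilated one-dimensional convolutions mentioned above for time series (the series, kernel, and dilation rates are toy assumptions), each output uses input samples spaced d steps apart, so larger dilation rates widen the temporal receptive field without adding parameters.

```python
import numpy as np

def dilated_conv1d(x, w, d=1):
    """Dilated 1-D convolution (valid, stride 1) of series x with kernel w."""
    k = len(w)
    span = (k - 1) * d                    # how far along the series the kernel reaches
    out = np.zeros(len(x) - span)
    for t in range(len(out)):
        window = x[t : t + span + 1 : d]  # every d-th sample within the span
        out[t] = np.dot(window, w)
    return out

x = np.arange(16, dtype=float)    # toy univariate time series
w = np.array([0.2, 0.3, 0.5])     # 3-tap kernel
print(dilated_conv1d(x, w, d=1).shape)   # (14,)
print(dilated_conv1d(x, w, d=4).shape)   # (8,): same kernel, wider temporal reach
```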
Cultural heritage and 3D-datasets As archaeological findings such as clay tablets with cuneiform writing are increasingly acquired using 3D scanners, benchmark datasets are becoming available, including HeiCuBeDa providing almost 2000 normalized 2-D and 3-D datasets prepared with the GigaMesh Software Framework. So curvature-based measures are used in conjunction with geometric neural networks (GNNs), e.g. for period classification of those clay tablets being among the oldest documents of human history. Fine-tuning For many applications, training data is not very available. Convolutional neural networks usually require a large amount of training data in order to avoid overfitting. A common technique is to train the network on a larger data set from a related domain. Once the network parameters have converged an additional training step is performed using the in-domain data to fine-tune the network weights, this is known as transfer learning. Furthermore, this technique allows convolutional network architectures to successfully be applied to problems with tiny training sets. Human interpretable explanations End-to-end training and prediction are common practice in computer vision. However, human interpretable explanations are required for critical systems such as a self-driving cars. With recent advances in visual salience, spatial attention, and temporal attention, the most critical spatial regions/temporal instants could be visualized to justify the CNN predictions. Related architectures Deep Q-networks A deep Q-network (DQN) is a type of deep learning model that combines a deep neural network with Q-learning, a form of reinforcement learning. Unlike earlier reinforcement learning agents, DQNs that utilize CNNs can learn directly from high-dimensional sensory inputs via reinforcement learning. Preliminary results were presented in 2014, with an accompanying paper in February 2015. The research described an application to Atari 2600 gaming. Other deep reinforcement learning models preceded it. Deep belief networks Convolutional deep belief networks (CDBN) have structure very similar to convolutional neural networks and are trained similarly to deep belief networks. Therefore, they exploit the 2D structure of images, like CNNs do, and make use of pre-training like deep belief networks. They provide a generic structure that can be used in many image and signal processing tasks. Benchmark results on standard image datasets like CIFAR have been obtained using CDBNs. Neural abstraction pyramid The feed-forward architecture of convolutional neural networks was extended in the neural abstraction pyramid by lateral and feedback connections. The resulting recurrent convolutional network allows for the flexible incorporation of contextual information to iteratively resolve local ambiguities. In contrast to previous models, image-like outputs at the highest resolution were generated, e.g., for semantic segmentation, image reconstruction, and object localization tasks. Notable libraries Caffe: A library for convolutional neural networks. Created by the Berkeley Vision and Learning Center (BVLC). It supports both CPU and GPU. Developed in C++, and has Python and MATLAB wrappers. Deeplearning4j: Deep learning in Java and Scala on multi-GPU-enabled Spark. A general-purpose deep learning library for the JVM production stack running on a C++ scientific computing engine. Allows the creation of custom layers. Integrates with Hadoop and Kafka. 
Dlib: A toolkit for making real world machine learning and data analysis applications in C++. Microsoft Cognitive Toolkit: A deep learning toolkit written by Microsoft with several unique features enhancing scalability over multiple nodes. It supports full-fledged interfaces for training in C++ and Python and with additional support for model inference in C# and Java. TensorFlow: Apache 2.0-licensed Theano-like library with support for CPU, GPU, Google's proprietary tensor processing unit (TPU), and mobile devices. Theano: The reference deep-learning library for Python with an API largely compatible with the popular NumPy library. Allows user to write symbolic mathematical expressions, then automatically generates their derivatives, saving the user from having to code gradients or backpropagation. These symbolic expressions are automatically compiled to CUDA code for a fast, on-the-GPU implementation. Torch: A scientific computing framework with wide support for machine learning algorithms, written in C and Lua. See also Attention (machine learning) Convolution Deep learning Natural-language processing Neocognitron Scale-invariant feature transform Time delay neural network Vision processing unit Topological deep learning Notes References External links CS231n: Convolutional Neural Networks for Visual Recognition — Andrej Karpathy's Stanford computer science course on CNNs in computer vision vdumoulin/conv_arithmetic: A technical report on convolution arithmetic in the context of deep learning. Animations of convolutions. Neural network architectures Computer vision Computational neuroscience
Convolutional neural network
Engineering
12,456
41,989,305
https://en.wikipedia.org/wiki/In%20vivo%20bioreactor
The in vivo bioreactor is a tissue engineering paradigm that uses bioreactor methodology to grow neotissue in vivo that augments or replaces malfunctioning native tissue. Tissue engineering principles are used to construct a confined, artificial bioreactor space in vivo that hosts a tissue scaffold and key biomolecules necessary for neotissue growth. Said space often requires inoculation with pluripotent or specific stem cells to encourage initial growth, and access to a blood source. A blood source allows for recruitment of stem cells from the body alongside nutrient delivery for continual growth. This delivery of cells and nutrients to the bioreactor eventually results in the formation of a neotissue product. Overview Conceptually, the in vivo bioreactor was born from complications in a repair method of bone fracture, bone loss, necrosis, and tumor reconstruction known as bone grafting. Traditional bone grafting strategies require fresh, autologous bone harvested from the iliac crest; this harvest site is limited by the amount of bone that can safely be removed, as well as associated pain and morbidity. Other methods include cadaverous allografts and synthetic options (often made of hydroxyapatite) that have become available in recent years. In response to the question of limited bone sourcing, it has been posited that bone can be grown to fit a damaged region within the body through the application of tissue engineering principles. Tissue engineering is a biomedical engineering discipline that combines biology, chemistry, and engineering to design neotissue (newly formed tissue) on a scaffold. Tissue scaffolds are functionally analogous to the extracellular matrix found in native tissue, acting as a site upon which regenerative cellular components adsorb to encourage cellular growth. This cellular growth is then artificially stimulated by additive growth factors in the environment that encourage tissue formation. The scaffold is often seeded with stem cells and growth additives to encourage a smooth transition from cells to tissues, and more recently, organs. Traditionally, this method of tissue engineering is performed in vitro, where scaffold components and environmental manipulation recreate in vivo stimuli that direct growth. Environmental manipulation includes changes in physical stimulation, pH, potential gradients, cytokine gradients, and oxygen concentration. The overarching goal of in vitro tissue engineering is to create a functional tissue that is equivalent to native tissue in terms of composition, biomechanical properties, and physiological performance. However, in vitro tissue engineering suffers from a limited ability to mimic in vivo conditions, often leading to inadequate tissue substitutes. Therefore, in vivo tissue engineering has been suggested as a method to circumvent the tedium of environmental manipulation and use native in vivo stimuli to direct cell growth. To achieve in vivo tissue growth, an artificial bioreactor space must be established in which cells may grow. The in vivo bioreactor depends on harnessing the reparative qualities of the body to recruit stem cells into an implanted scaffold, and utilize vasculature to supply all necessary growth components. Design Cells Tissue engineering done in vivo is capable of recruiting local cellular populations into a bioreactor space. Indeed, a range of neotissue growth has been shown: bone, cartilage, fat, and muscle.
In theory, any tissue type could be grown in this manner if all necessary components (growth factors, environmental and physical cues) are provided. Recruitment of stem cells requires a complex process of mobilization from their niche, though research suggests that mature cells transplanted upon the bioreactor scaffold can improve stem cell recruitment. These cells secrete growth factors that promote repair and can be co-cultured with stem cells to improve tissue formation. Scaffolds Scaffold materials are designed to enhance tissue formation through control of the local and surrounding environments. Scaffolds are critical in regulating cellular growth and provide a volume in which vascularization and stem cell differentiation can occur. Scaffold geometry significantly affects tissue differentiation through physical growth cues. Predicting tissue formation computationally requires theories that link physical growth cues to cell differentiation. Current models rely on mechano-regulation theory, widely shaped by Prendergast et al. for predicting cell growth. Thus a quantitative analysis of the geometry and materials commonly used in tissue scaffolds is possible. Such materials include: Porous ceramic and demineralized bone matrix supports Coralline cylinders Biodegradable material such as poly(α-hydroxy esters) Decellularized tissue matrices Injectable biomaterials or hydrogels are typically composed of polysaccharides, proteins/peptide mimetics, or synthetic polymers such as (poly(ethylene glycol)). Peptide amphiphile (PA) systems are self-assembling and can form solid bioactive scaffolds after injection within the body. Inert systems have been proven to be adequate for tissue formation. Cartilage formation has occurred by injecting an inert agarose gel beneath the periosteum in a rabbit model in which vascularization was restricted. Other scaffold materials include fibrin and sponges made from collagen. Bioreactors Methods Initially, focusing on bone growth, subcutaneous pockets were used for bone prefabrication as a simple in vivo bioreactor model. The pocket is an artificially created space between varying levels of subcutaneous fascia. The location provides regenerative cues to the bioreactor implant but does not rely on pre-existing bone tissue as a substrate. Furthermore, these bioreactors may be wrapped with muscle tissue to encourage vascularization and bone growth. Another strategy is through the use of a periosteal flap wrapped around the bioreactor, or the scaffold itself, to create an in vivo bioreactor. This strategy utilizes the guided bone regeneration treatment scheme, and is a safe method for bone prefabrication. These 'flap' methods of packing the bioreactor within fascia or wrapping it in tissue are effective, though somewhat random due to the non-directed vascularization they incur. The axial vascular bundle (AVB) strategy requires that an artery and vein be inserted into the in vivo bioreactor space to transport growth factors and cells, and to remove waste. This ultimately results in extensive vascularization of the bioreactor space and a vast improvement in growth capability. This vascularization, though effective, is limited by the surface contact that it can achieve between the scaffold and the capillaries filling the bioreactor space. Thus, a combination of the flap and AVB techniques can maximize the growth rate and vascular contact of the bioreactor as suggested by Han and Dai, by inserting a vascular bundle into a scaffold wrapped in either musculature or periosteum.
If inadequate pre-existing vasculature is present in the growth site due to damage or disease, an arteriovenous loop (AVL) can be used. The AVL strategy requires a surgical connection to be made between an artery and a vein to form an arteriovenous fistula, which is then placed within an in vivo bioreactor space containing a scaffold. A capillary network will form from this loop and accelerate the vascularization of new tissue. Materials Materials used in the construction of an in vivo bioreactor space vary widely depending on the type of substrate, the type of tissue, and the mechanical demands of the tissue being grown. At its simplest, a bioreactor space is created between tissue layers through the use of hydrogel injections. Early models used an impermeable silicone shroud to encase a scaffold, though more recent studies have begun 3D printing custom bioreactor molds to further enhance the mechanical growth properties of the bioreactors. The choice of bioreactor chamber material generally requires that it be nontoxic and medical grade; examples include "silicon, polycarbonate, and acrylic polymer". Recently, both Teflon and titanium have been used in the growth of bone. One study utilized polymethyl methacrylate as a chamber material and 3D printed hollow rectangular blocks. Yet another study pushed the limits of the in vivo bioreactor by proving that the omentum is suitable as a bioreactor space and chamber. Specifically, highly vascularized and functional bladder tissue was grown within the omentum space. Examples An example of the implementation of the IVB approach was the engineering of autologous bone by injecting calcium alginate into a sub-periosteal location. The periosteum is a membrane that covers the long bones, jawbone, ribs and the skull. This membrane contains an endogenous population of pluripotent cells called periosteal cells, a type of mesenchymal stem cell (MSC), which reside in the cambium layer, i.e., the side facing the bone. A key step in the procedure is the elevation of the periosteum without damaging the cambium surface; to ensure this, a new technique called hydraulic elevation was developed. The sub-periosteal site was chosen because stimulation of the cambium layer using transforming growth factor–beta resulted in enhanced chondrogenesis, i.e., formation of cartilage. In development, the formation of bone can occur either via a cartilage template, initially formed by the MSCs, that is then ossified through a process called endochondral ossification, or directly from MSC differentiation to bone via a process termed intramembranous ossification. Upon exposure of the periosteal cells to calcium from the alginate gel, these cells become bone cells and start producing bone matrix through the intramembranous ossification process, recapitulating all steps of bone matrix deposition. The extension of the IVB paradigm to engineering autologous hyaline cartilage was also recently demonstrated. In this case, agarose is injected and this triggers local hypoxia, which then results in the differentiation of the periosteal MSCs into articular chondrocytes, i.e., cells similar to those found in joint cartilage. Since this process occurs in a relatively short period of less than two weeks and cartilage can remodel into bone, this approach might provide some advantages in the treatment of both cartilage and bone loss. The IVB concept has, however, yet to be realized in humans, and this is currently being undertaken. 
See also Biomedical engineering Tissue engineering Bioreactor Bone Grafting Guided Bone and Tissue Regeneration Further reading References Medical technology Regenerative biomedicine Tissue engineering
In vivo bioreactor
Chemistry,Engineering,Biology
2,212
18,095,472
https://en.wikipedia.org/wiki/HD%2082205
HD 82205 (HR 3770) is a solitary star in the southern constellation Antlia. It is faintly visible to the naked eye with an apparent magnitude of 5.48 and is estimated to be 810 light years distant based on parallax measurements. It is receding from the Sun with a heliocentric radial velocity of . HD 82205 has a general stellar classification of K3 III, indicating that it is a red giant. However, Houk and Cowley (1982) found a slightly warmer class of K2 III CNII, which also suggests a strong overabundance of cyano radicals in the stellar atmosphere. At present it has 4.46 times the mass of the Sun but has expanded to 38.9 times the Sun's girth. It shines with a luminosity of from its enlarged photosphere at an effective temperature of , giving it an orange hue. HD 82205 has a metallicity 123% that of the Sun and is believed to be a member of the thin disk population. Currently, it spins with a projected rotational velocity lower than . There is a 14th magnitude optical companion separated away along a position angle of . The object was first noticed by T. J. J. See in 1897. References External links Image HD 82205 Antlia 082205 Double stars K-type giants 3770 046578 CD-26 07117 Antliae, 3
HD 82205
Astronomy
289
53,242,630
https://en.wikipedia.org/wiki/Penny%20graph
In geometric graph theory, a penny graph is a contact graph of unit circles. It is formed from a collection of unit circles that do not cross each other, by creating a vertex for each circle and an edge for every pair of tangent circles. The circles can be represented physically by pennies, arranged without overlapping on a flat surface, with a vertex for each penny and an edge for each two pennies that touch. Penny graphs have also been called unit coin graphs, because they are the coin graphs formed from unit circles. If each vertex is represented by a point at the center of its circle, then two vertices will be adjacent if and only if their distance is the minimum distance among all pairs of vertices. Therefore, penny graphs have also been called minimum-distance graphs, smallest-distance graphs, or closest-pairs graphs. Similarly, in a mutual nearest neighbor graph that links pairs of points in the plane that are each other's nearest neighbors, each connected component is a penny graph, although edges in different components may have different lengths. Every penny graph is a unit disk graph and a matchstick graph. Like planar graphs more generally, they obey the four color theorem, but this theorem is easier to prove for penny graphs. Testing whether a graph is a penny graph, or finding its maximum independent set, is NP-hard; however, both upper and lower bounds are known for the size of the maximum independent set, higher than the bounds that are possible for arbitrary planar graphs. Properties Number of edges Every vertex in a penny graph has at most six neighboring vertices; here the number six is the kissing number for circles in the plane. However, the pennies on the boundary of the convex hull have fewer neighbors. Counting more precisely this reduction in neighbors for boundary pennies leads to a precise bound on the number of edges in any penny graph: a penny graph with vertices has at most edges. Some penny graphs, formed by arranging the pennies in a triangular grid, have exactly this number of edges. By arranging the pennies in a square grid, or in the form of certain squaregraphs, one can form triangle-free penny graphs whose number of edges is at least and in any triangle-free penny graph the number of edges is at most Swanepoel conjectured that the bound is tight. Proving this, or finding a better bound, remains open. Coloring Every penny graph contains a vertex with at most three neighbors. For instance, such a vertex can be found at one of the corners of the convex hull of the circle centers. Therefore, penny graphs have degeneracy at most three. Based on this, one can prove that their graph colorings require at most four colors, much more easily than the proof of the more general four-color theorem. However, despite their restricted structure, there exist penny graphs that do still require four colors. Analogously, the degeneracy of every triangle-free penny graph is at most two. Every such graph contains a vertex with at most two neighbors, even though it is not always possible to find this vertex on the convex hull. Based on this, one can prove that they require at most three colors, more easily than the proof of the more general Grötzsch's theorem that triangle-free planar graphs are 3-colorable. Independent sets A maximum independent set in a penny graph is a subset of the pennies, no two of which touch each other. Finding maximum independent sets is NP-hard for arbitrary graphs, and remains NP-hard on penny graphs. 
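As a concrete, if naive, illustration of the two notions just introduced, the following sketch builds the tangency graph for a handful of unit pennies from their center coordinates and then finds a maximum independent set by checking every subset. This is a hedged example: the class name, coordinates, and tolerance are invented for illustration rather than drawn from the literature, and the exhaustive search is only practical for very small inputs.

using System;

// Build a penny graph from unit-circle centers (adjacent = centers at distance 2,
// up to a tolerance) and find a maximum independent set by exhaustive search.
class PennyGraphSketch
{
    static void Main()
    {
        // Four pennies of radius 1 arranged in a path; consecutive centers touch.
        var centers = new (double X, double Y)[] { (0, 0), (2, 0), (4, 0), (6, 0) };
        int n = centers.Length;
        const double tolerance = 1e-9;

        // Adjacency: tangency of unit circles means the center distance equals 2.
        var adjacent = new bool[n, n];
        for (int i = 0; i < n; i++)
            for (int j = i + 1; j < n; j++)
            {
                double dx = centers[i].X - centers[j].X;
                double dy = centers[i].Y - centers[j].Y;
                double d = Math.Sqrt(dx * dx + dy * dy);
                adjacent[i, j] = adjacent[j, i] = Math.Abs(d - 2.0) < tolerance;
            }

        // Try every subset of pennies, keeping the largest one with no touching pair.
        int bestSize = 0;
        for (int mask = 0; mask < (1 << n); mask++)
        {
            bool independent = true;
            for (int i = 0; i < n && independent; i++)
                for (int j = i + 1; j < n && independent; j++)
                    if ((mask & (1 << i)) != 0 && (mask & (1 << j)) != 0 && adjacent[i, j])
                        independent = false;

            int size = 0;
            for (int bits = mask; bits != 0; bits >>= 1) size += bits & 1;
            if (independent && size > bestSize) bestSize = size;
        }

        Console.WriteLine($"Maximum independent set size: {bestSize}"); // 2 for this 4-penny path
    }
}

Such exhaustive search grows exponentially with the number of pennies, which is consistent with the hardness of the problem.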
It is an instance of the maximum disjoint set problem, in which one must find large subsets of non-overlapping regions of the plane. However, as with planar graphs more generally, Baker's technique provides a polynomial-time approximation scheme for this problem. In 1983, Paul Erdős asked for the largest number such that every -vertex penny graph has an independent set of at least vertices. That is, if we place pennies on a flat surface, there should be a subset of of the pennies that do not touch each other. By the four-color theorem, , and the improved bound was proven by Swanepoel. In the other direction, Pach and Tóth proved that . As of 2013, these remained the best bounds known for this problem. Computational complexity Constructing a penny graph from the locations of its circles can be performed as an instance of the closest pair of points problem, taking worst-case time or (with randomized time and with the use of the floor function) expected time . An alternative method with the same worst-case time is to construct the Delaunay triangulation or nearest neighbor graph of the circle centers (both of which contain the penny graph as a subgraph) and then test which edges correspond to circle tangencies. However, if a graph is given without geometric positions for its vertices, then testing whether it can be represented as a penny graph is NP-hard. It remains NP-hard even when the given graph is a tree. Similarly, testing whether a graph can be represented as a three-dimensional mutual nearest neighbor graph is also NP-hard. It is possible to perform some computational tasks on directed penny graphs, such as testing whether one vertex can reach another, in polynomial time and substantially less than linear space, given an input representing its circles in a form allowing basic computational tasks such as testing adjacency and finding intersections of the circles with axis-parallel lines. Related graph families Penny graphs are a special case of the coin graphs (graphs that can be represented by tangencies of non-crossing circles of arbitrary radii). Because the coin graphs are the same as the planar graphs, all penny graphs are planar. The penny graphs are also unit disk graphs (the intersection graphs of unit circles), unit distance graphs (graphs that can be drawn with all edges having equal lengths, allowing crossings), and matchstick graphs (graphs that can be drawn in the plane with equal-length straight edges and no edge crossings). References Geometric graphs Planar graphs Circle packing
Penny graph
Mathematics
1,237
2,356,196
https://en.wikipedia.org/wiki/C%20Sharp%20%28programming%20language%29
C# is a general-purpose high-level programming language supporting multiple paradigms. C# encompasses static typing, strong typing, lexically scoped, imperative, declarative, functional, generic, object-oriented (class-based), and component-oriented programming disciplines. The principal inventors of the C# programming language were Anders Hejlsberg, Scott Wiltamuth, and Peter Golde from Microsoft. It was first widely distributed in July 2000 and was later approved as an international standard by Ecma (ECMA-334) in 2002 and ISO/IEC (ISO/IEC 23270 and 20619) in 2003. Microsoft introduced C# along with .NET Framework and Visual Studio, both of which were closed-source. At the time, Microsoft had no open-source products. Four years later, in 2004, a free and open-source project called Mono began, providing a cross-platform compiler and runtime environment for the C# programming language. A decade later, Microsoft released Visual Studio Code (code editor), Roslyn (compiler), and the unified .NET platform (software framework), all of which support C# and are free, open-source, and cross-platform. Mono also joined Microsoft but was not merged into .NET. The most recent stable version of the language is C# 13.0, which was released in 2024 in .NET 9.0. Design goals The Ecma standard lists these design goals for C#: The language is intended to be a simple, modern, general-purpose, object-oriented programming language. The language, and implementations thereof, should provide support for software engineering principles such as strong type checking, array bounds checking, detection of attempts to use uninitialized variables, and automatic garbage collection. Software robustness, durability, and programmer productivity are important. The language is intended for use in developing software components suitable for deployment in distributed environments. Portability is very important for source code and programmers, especially those already familiar with C and C++. Support for internationalization is very important. C# is intended to be suitable for writing applications for both hosted and embedded systems, ranging from the very large that use sophisticated operating systems, down to the very small having dedicated functions. Although C# applications are intended to be economical with regard to memory and processing power requirements, the language was not intended to compete directly on performance and size with C or assembly language. History During the development of the .NET Framework, the class libraries were originally written using a managed code compiler system named Simple Managed C (SMC). In January 1999, Anders Hejlsberg formed a team to build a new language, at the time called Cool, which stood for "C-like Object Oriented Language". Microsoft had considered keeping the name "Cool" as the final name of the language, but chose not to do so for trademark reasons. By the time the .NET project was publicly announced at the July 2000 Professional Developers Conference, the language had been renamed C#, and the class libraries and ASP.NET runtime had been ported to C#. Hejlsberg is C#'s principal designer and lead architect at Microsoft, and was previously involved with the design of Turbo Pascal, Embarcadero Delphi (formerly CodeGear Delphi, Inprise Delphi and Borland Delphi), and Visual J++. In interviews and technical papers, he has stated that flaws in most major programming languages (e.g. 
C++, Java, Delphi, and Smalltalk) drove the fundamentals of the Common Language Runtime (CLR), which, in turn, drove the design of the C# language. James Gosling, who created the Java programming language in 1994, and Bill Joy, a co-founder of Sun Microsystems, the originator of Java, called C# an "imitation" of Java; Gosling further said that "[C# is] sort of Java with reliability, productivity and security deleted." In July 2000, Hejlsberg said that C# is "not a Java clone" and is "much closer to C++" in its design. Since the release of C# 2.0 in November 2005, the C# and Java languages have evolved on increasingly divergent trajectories, becoming two quite different languages. One of the first major departures came with the addition of generics to both languages, with vastly different implementations. C# makes use of reification to provide "first-class" generic objects that can be used like any other class, with code generation performed at class-load time. Furthermore, C# has added several major features to accommodate functional-style programming, culminating in the LINQ extensions released with C# 3.0 and its supporting framework of lambda expressions, extension methods, and anonymous types. These features enable C# programmers to use functional programming techniques, such as closures, when it is advantageous to their application. The LINQ extensions and the functional imports help developers reduce the amount of boilerplate code that is included in common tasks like querying a database, parsing an XML file, or searching through a data structure, shifting the emphasis onto the actual program logic to help improve readability and maintainability. C# used to have a mascot called Andy (named after Anders Hejlsberg). It was retired on January 29, 2004. C# was originally submitted to the ISO/IEC JTC 1 subcommittee SC 22 for review as ISO/IEC 23270:2003; that edition was later withdrawn and replaced by the approved ISO/IEC 23270:2006, which was in turn withdrawn and replaced by ISO/IEC 23270:2018. Name Microsoft first used the name C# in 1988 for a variant of the C language designed for incremental compilation. That project was not completed, and the name was later reused. The name "C sharp" was inspired by the musical notation whereby a sharp symbol indicates that the written note should be made a semitone higher in pitch. This is similar to the language name of C++, where "++" indicates that a variable should be incremented by 1 after being evaluated. The sharp symbol also resembles a ligature of four "+" symbols (in a two-by-two grid), further implying that the language is an increment of C++. Due to technical limits of display (standard fonts, browsers, etc.), and most keyboard layouts lacking a sharp symbol (♯), the number sign (#) was chosen to approximate the sharp symbol in the written name of the programming language. This convention is reflected in the ECMA-334 C# Language Specification. The "sharp" suffix has been used by a number of other .NET languages that are variants of existing languages, including J# (a .NET language also designed by Microsoft that is derived from Java 1.1), A# (from Ada), and the functional programming language F#. The original implementation of Eiffel for .NET was called Eiffel#, a name retired since the full Eiffel language is now supported. The suffix has also been used for libraries, such as Gtk# (a .NET wrapper for GTK and other GNOME libraries) and Cocoa# (a wrapper for Cocoa). 
Versions Syntax The core syntax of the C# language is similar to that of other C-style languages such as C, C++ and Java, particularly: Semicolons are used to denote the end of a statement. Curly brackets are used to group statements. Statements are commonly grouped into methods (functions), methods into classes, and classes into namespaces. Variables are assigned using an equals sign, but compared using two consecutive equals signs. Square brackets are used with arrays, both to declare them and to get a value at a given index in one of them. Distinguishing features Some notable features of C# that distinguish it from C, C++, and Java, where noted, are: Portability By design, C# is the programming language that most directly reflects the underlying Common Language Infrastructure (CLI). Most of its intrinsic types correspond to value-types implemented by the CLI framework. However, the language specification does not state the code generation requirements of the compiler: that is, it does not state that a C# compiler must target a Common Language Runtime, or generate Common Intermediate Language (CIL), or generate any other specific format. Some C# compilers can also generate machine code like traditional compilers of C++ or Fortran. Typing C# supports strongly typed implicit variable declarations with the keyword var, and implicitly typed arrays with the keyword new[] followed by a collection initializer. Its type system is split into two families: Value types, like the built-in numeric types and user-defined structs, which are automatically handed over as copies when used as parameters, and reference types, including arrays, instances of classes, and strings, which only hand over a pointer to the respective object. Due to their special handling of the equality operator and their immutability, strings will nevertheless behave as if they were values, for all practical purposes. You can even use them as case labels. Where necessary, value types will be boxed automatically. C# supports a strict Boolean data type, bool. Statements that take conditions, such as while and if, require an expression of a type that implements the true operator, such as the Boolean type. While C++ also has a Boolean type, it can be freely converted to and from integers, and expressions such as if (a) require only that a is convertible to bool, allowing a to be an int, or a pointer. C# disallows this "integer meaning true or false" approach, on the grounds that forcing programmers to use expressions that return exactly bool can prevent certain types of programming mistakes such as if (a = b) (use of assignment = instead of equality ==). C# is more type safe than C++. The only implicit conversions by default are those that are considered safe, such as widening of integers. This is enforced at compile-time, during JIT, and, in some cases, at runtime. No implicit conversions occur between Booleans and integers, nor between enumeration members and integers (except for literal 0, which can be implicitly converted to any enumerated type). Any user-defined conversion must be explicitly marked as explicit or implicit, unlike C++ copy constructors and conversion operators, which are both implicit by default. C# has explicit support for covariance and contravariance in generic types, unlike C++, which has some degree of support for covariance simply through the semantics of return types on virtual methods. Enumeration members are placed in their own scope. The C# language does not allow for global variables or functions. 
All methods and members must be declared within classes. Static members of public classes can substitute for global variables and functions. Local variables cannot shadow variables of the enclosing block, unlike C and C++. Metaprogramming Metaprogramming can be achieved in several ways: Reflection is supported through .NET APIs, which enable scenarios such as type metadata inspection and dynamic method invocation. Expression trees represent code as an abstract syntax tree, where each node is an expression that can be inspected or executed. This enables dynamic modification of executable code at runtime. Expression trees introduce some homoiconicity to the language. Attributes, in C# parlance, are metadata that can be attached to types, members, or entire assemblies, equivalent to annotations in Java. Attributes are accessible both to the compiler and to code through reflection, allowing them to adjust their behaviour. Many of the native attributes duplicate the functionality of GCC's and VisualC++'s platform-dependent preprocessor directives. System.Reflection.Emit namespace, which contains classes that emit metadata and CIL (types, assemblies, etc.) at runtime. The .NET Compiler Platform (Roslyn) provides API access to language compilation services, allowing for the compilation of C# code from within .NET applications. It exposes APIs for syntactic (lexical) analysis of code, semantic analysis, dynamic compilation to CIL, and code emission. Source generators, a feature of the Roslyn C# compiler, enable compile time metaprogramming. During the compilation process, developers can inspect the code being compiled with the compiler's API and pass additional generated C# source code to be compiled. Methods and functions A method in C# is a member of a class that can be invoked as a function (a sequence of instructions), rather than the mere value-holding capability of a field (i.e. class or instance variable). As in other syntactically similar languages, such as C++ and ANSI C, the signature of a method is a declaration comprising in order: any optional accessibility keywords (such as private), the explicit specification of its return type (such as int, or the keyword void if no value is returned), the name of the method, and finally, a parenthesized sequence of comma-separated parameter specifications, each consisting of a parameter's type, its formal name and optionally, a default value to be used whenever none is provided. Different from most other languages, call-by-reference parameters have to be marked both at the function definition and at the calling site, and you can choose between ref and out, the latter allowing handing over an uninitialized variable which will have a definite value on return. Additionally, you can specify a variable-sized argument list by applying the params keyword to the last parameter. Certain specific kinds of methods, such as those that simply get or set a field's value by returning or assigning it, do not require an explicitly stated full signature, but in the general case, the definition of a class includes the full signature declaration of its methods. Like C++, and unlike Java, C# programmers must use the scope modifier keyword virtual to allow methods to be overridden by subclasses. Unlike C++, you have to explicitly specify the keyword override when doing so. This is supposed to avoid confusion between overriding and newly overloading a function (i.e. hiding the former implementation). To do the latter, you have to specify the new keyword. 
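A minimal sketch of these rules follows; the class names below are invented purely for illustration and are not part of any framework.

using System;

// Animal.Speak is virtual, so Dog can override it; Cat merely hides it with "new".
class Animal
{
    public virtual string Speak() { return "..."; }
}

class Dog : Animal
{
    public override string Speak() { return "Woof"; }  // participates in virtual dispatch
}

class Cat : Animal
{
    public new string Speak() { return "Meow"; }       // hides Animal.Speak; no virtual dispatch
}

class OverrideDemo
{
    static void Main()
    {
        Animal a = new Dog();
        Animal b = new Cat();
        Console.WriteLine(a.Speak()); // "Woof": the override is reached through the base type
        Console.WriteLine(b.Speak()); // "...": the hiding method is not reached through the base type
    }
}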
You can use the keyword sealed to disallow further overrides for individual methods or whole classes. Extension methods in C# allow programmers to use static methods as if they were methods from a class's method table, allowing programmers to virtually add instance methods to a class that they feel should exist on that kind of object (and instances of the respective derived classes). The type dynamic allows for run-time method binding, allowing for JavaScript-like method calls and run-time object composition. C# has support for strongly-typed function pointers via the keyword delegate. Like the Qt framework's pseudo-C++ signal and slot, C# has semantics specifically surrounding publish-subscribe style events, though C# uses delegates to do so. C# offers Java-like synchronized method calls, via the attribute [MethodImpl(MethodImplOptions.Synchronized)], and has support for mutually-exclusive locks via the keyword lock. Property C# supports classes with properties. The properties can be simple accessor functions with a backing field, or implement arbitrary getter and setter functions. A property is read-only if there's no setter. Like with fields, there can be class and instance properties. The underlying methods can be virtual or abstract like any other method. Since C# 3.0 the syntactic sugar of auto-implemented properties is available, where the accessor (getter) and mutator (setter) encapsulate operations on a single field of a class. Namespace A C# namespace provides the same level of code isolation as a Java package or a C++ namespace, with very similar rules and features to a package. Namespaces can be imported with the "using" syntax. Memory access In C#, memory address pointers can only be used within blocks specifically marked as unsafe, and programs with unsafe code need appropriate permissions to run. Most object access is done through safe object references, which always either point to a "live" object or have the well-defined null value; it is impossible to obtain a reference to a "dead" object (one that has been garbage collected), or to a random block of memory. An unsafe pointer can point to an instance of an unmanaged value type that does not contain any references to objects subject to garbage collection, such as class instances, arrays or strings. Code that is not marked as unsafe can still store and manipulate pointers through the System.IntPtr type, but it cannot dereference them. Managed memory cannot be explicitly freed; instead, it is automatically garbage collected. Garbage collection addresses the problem of memory leaks by freeing the programmer of responsibility for releasing memory that is no longer needed in most cases. Code that retains references to objects longer than is required can still experience higher memory usage than necessary; however, once the final reference to an object is released, the memory is available for garbage collection. Exceptions A range of standard exceptions are available to programmers. Methods in standard libraries regularly throw system exceptions in some circumstances and the range of exceptions thrown is normally documented. Custom exception classes can be defined for classes allowing handling to be put in place for particular circumstances as needed. The syntax for handling exceptions is the following:
try
{
    // something
}
catch (Exception ex)
{
    // if error do this
}
finally
{
    // always executes, regardless of error occurrence
}
Depending on your plans, the "finally" part can be left out. 
If error handling is not required, the (Exception ex) parameter can be omitted as well. Also, there can be several "catch" parts handling different kinds of exceptions. Checked exceptions are not present in C# (in contrast to Java). This has been a conscious decision based on the issues of scalability and versionability. Polymorphism Unlike C++, C# does not support multiple inheritance, although a class can implement any number of "interfaces" (fully abstract classes). This was a design decision by the language's lead architect to avoid complications and to simplify architectural requirements throughout CLI. When implementing multiple interfaces that contain a method with the same name and taking parameters of the same type in the same order (i.e. the same signature), similarly to Java, C# allows both a single method to cover all interfaces and, if necessary, specific methods for each interface. C# also offers function overloading (a.k.a. ad-hoc polymorphism), i.e. methods with the same name, but distinguishable signatures. Unlike Java, C# additionally supports operator overloading. Since version 2.0, C# offers parametric polymorphism, i.e. classes with arbitrary or constrained type parameters, e.g. List<T>, a variable-sized array which can only contain elements of type T. There are certain kinds of constraints you can specify for the type parameters: has to be type X (or one derived from it), has to implement a certain interface, has to be a reference type, has to be a value type, has to implement a public parameterless constructor. Most of them can be combined, and you can specify any number of interfaces. Language Integrated Query (LINQ) C# has the ability to utilize LINQ through the .NET Framework. A developer can query a variety of data sources, provided the IEnumerable<T> interface is implemented on the object. This includes XML documents, an ADO.NET dataset, and SQL databases. Using LINQ in C# brings advantages like IntelliSense support, strong filtering capabilities, type safety with compile error checking ability, and consistency for querying data over a variety of sources. There are several different language structures that can be utilized with C# and LINQ: query expressions, lambda expressions, anonymous types, implicitly typed variables, extension methods, and object initializers. LINQ has two syntaxes: query syntax and method syntax. However, the compiler always converts the query syntax to method syntax at compile time. 
using System.Linq;

var numbers = new int[] { 5, 10, 8, 3, 6, 12 };

// Query syntax (SELECT num FROM numbers WHERE num % 2 = 0 ORDER BY num)
var numQuery1 =
    from num in numbers
    where num % 2 == 0
    orderby num
    select num;

// Method syntax
var numQuery2 = numbers
    .Where(num => num % 2 == 0)
    .OrderBy(n => n);

Functional programming Though primarily an imperative language, C# has added functional features over time, for example: Functions as first-class citizens – C# 1.0 delegates Higher-order functions – C# 1.0 together with delegates Anonymous functions – C# 2 anonymous delegates and C# 3 lambda expressions Closures – C# 2 together with anonymous delegates and C# 3 together with lambda expressions Type inference – C# 3 with implicitly typed local variables and C# 9 target-typed new expressions List comprehension – C# 3 LINQ Tuples – .NET Framework 4.0, but they became popular when C# 7.0 introduced a new tuple type with language support Nested functions – C# 7.0 Pattern matching – C# 7.0 Immutability – C# 7.2 readonly struct, C# 9 record types and init-only setters Type classes – C# 12 roles/extensions (in development) Common type system C# has a unified type system. This unified type system is called Common Type System (CTS). A unified type system implies that all types, including primitives such as integers, are subclasses of the class. For example, every type inherits a method. Categories of data types CTS separates data types into two categories: Reference types Value types Instances of value types neither have referential identity nor referential comparison semantics. Equality and inequality comparisons for value types compare the actual data values within the instances, unless the corresponding operators are overloaded. Value types are derived from , always have a default value, and can always be created and copied. Some other limitations on value types are that they cannot derive from each other (but can implement interfaces) and cannot have an explicit default (parameterless) constructor because they already have an implicit one which initializes all contained data to the type-dependent default value (0, null, or the like). Examples of value types are all primitive types, such as (a signed 32-bit integer), (a 32-bit IEEE floating-point number), (a 16-bit Unicode code unit), decimal (fixed-point numbers useful for handling currency amounts), and (identifies a specific point in time with nanosecond precision). Other examples are (enumerations) and (user-defined structures). In contrast, reference types have the notion of referential identity, meaning that each instance of a reference type is inherently distinct from every other instance, even if the data within both instances is the same. This is reflected in default equality and inequality comparisons for reference types, which test for referential rather than structural equality, unless the corresponding operators are overloaded (such as the case for ). Some operations are not always possible, such as creating an instance of a reference type, copying an existing instance, or performing a value comparison on two existing instances. Nevertheless, specific reference types can provide such services by exposing a public constructor or implementing a corresponding interface (such as or ). Examples of reference types are (the ultimate base class for all other C# classes), (a string of Unicode characters), and (a base class for all C# arrays). Both type categories are extensible with user-defined types. 
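As a brief illustration of this distinction (the struct and class below are invented for the example), default equality compares contained data for a value type but instance identity for a reference type:

using System;

struct PointValue { public int X; public int Y; }     // value type: compared by its data
class PointReference { public int X; public int Y; }  // reference type: compared by identity

class EqualityDemo
{
    static void Main()
    {
        var v1 = new PointValue { X = 1, Y = 2 };
        var v2 = new PointValue { X = 1, Y = 2 };
        Console.WriteLine(v1.Equals(v2));            // True: same field values

        var r1 = new PointReference { X = 1, Y = 2 };
        var r2 = new PointReference { X = 1, Y = 2 };
        Console.WriteLine(r1.Equals(r2));            // False: two distinct instances
        Console.WriteLine(ReferenceEquals(r1, r2));  // False: different references
    }
}

Overriding Equals, GetHashCode, or the == operator changes this default behavior, which is how types such as System.String obtain value-like comparisons.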
Boxing and unboxing Boxing is the operation of converting a value-type object into a value of a corresponding reference type. Boxing in C# is implicit. Unboxing is the operation of converting a value of a reference type (previously boxed) into a value of a value type. Unboxing in C# requires an explicit type cast. A boxed object of type T can only be unboxed to a T (or a nullable T). Example:
int foo = 42;         // Value type.
object bar = foo;     // foo is boxed to bar.
int foo2 = (int)bar;  // Unboxed back to value type.
Libraries The C# specification details a minimum set of types and class libraries that the compiler expects to have available. In practice, C# is most often used with some implementation of the Common Language Infrastructure (CLI), which is standardized as ECMA-335 Common Language Infrastructure (CLI). In addition to the standard CLI specifications, there are many commercial and community class libraries that build on top of the .NET framework libraries to provide additional functionality. C# can make calls to any library included in the List of .NET libraries and frameworks. Examples Hello World The following is a very simple C# program, a version of the classic "Hello world" example using the top-level statements feature introduced in C# 9: using System; Console.WriteLine("Hello, world!"); For code written as C# 8 or lower, the entry point logic of a program must be written in a Main method inside a type: using System; class Program { static void Main() { Console.WriteLine("Hello, world!"); } } This code will display this text in the console window: Hello, world! Each line has a purpose: using System; The above line imports all types in the System namespace. For example, the Console class used later in the source code is defined in the System namespace, meaning it can be used without supplying the full name of the type (which includes the namespace). // A version of the classic "Hello World" program This line is a comment; it describes and documents the code for the programmer(s). class Program Above is a class definition for the class. Everything that follows between the pair of braces describes that class. { ... } The curly brackets demarcate the boundaries of a code block. In this first instance, they are marking the start and end of the class. static void Main() This declares the class member method where the program begins execution. The .NET runtime calls the method. Unlike in Java, the method does not need the keyword, which tells the compiler that the method can be called from anywhere by any class. Writing is equivalent to writing . The static keyword makes the method accessible without an instance of . Each console application's entry point must be declared otherwise the program would require an instance of , but any instance would require a program. To avoid that irresolvable circular dependency, C# compilers processing console applications (like that above) report an error if there is no method. The keyword declares that has no return value. (Note, however, that short programs can be written using Top Level Statements introduced in C# 9, as mentioned earlier.) Console.WriteLine("Hello, world!"); This line writes the output. is a static class in the namespace. It provides an interface to the standard input, output, and error streams for console applications. The program calls the method , which displays on the console a line with the argument, the string . Generics With .NET 2.0 and C# 2.0, the community got more flexible collections than those in .NET 1.x. 
In the absence of generics, developers had to use collections such as ArrayList to store elements as objects of unspecified kind, which incurred performance overhead when boxing/unboxing/type-checking the contained items. Generics introduced a massive new feature in .NET that allowed developers to create type-safe data structures. This shift is particularly important in the context of converting legacy systems, where updating to generics can significantly enhance performance and maintainability by replacing outdated data structures with more efficient, type-safe alternatives. Example public class DataStore<T> { private T[] items = new T[10]; private int count = 0; public void Add(T item) { items[count++] = item; } public T Get(int index) { return items[index]; } } Standardization and licensing In August 2001, Microsoft, Hewlett-Packard and Intel co-sponsored the submission of specifications for C# as well as the Common Language Infrastructure (CLI) to the standards organization Ecma International. In December 2001, ECMA released ECMA-334 C# Language Specification. C# became an ISO/IEC standard in 2003 (ISO/IEC 23270:2003 - Information technology — Programming languages — C#). ECMA had previously adopted equivalent specifications as the 2nd edition of C#, in December 2002. In June 2005, ECMA approved edition 3 of the C# specification, and updated ECMA-334. Additions included partial classes, anonymous methods, nullable types, and generics (somewhat similar to C++ templates). In July 2005, ECMA submitted to ISO/IEC JTC 1/SC 22, via the latter's Fast-Track process, the standards and related TRs. This process usually takes 6–9 months. The C# language definition and the CLI are standardized under ISO/IEC and Ecma standards that provide reasonable and non-discriminatory licensing protection from patent claims. Microsoft initially agreed not to sue open-source developers for violating patents in non-profit projects for the part of the framework that is covered by the Open Specification Promise. Microsoft has also agreed not to enforce patents relating to Novell products against Novell's paying customers with the exception of a list of products that do not explicitly mention C#, .NET or Novell's implementation of .NET (The Mono Project). However, Novell maintained that Mono does not infringe any Microsoft patents. Microsoft also made a specific agreement not to enforce patent rights related to the Moonlight browser plugin, which depends on Mono, provided it is obtained through Novell. A decade later, Microsoft began developing free, open-source, and cross-platform tooling for C#, namely Visual Studio Code, .NET Core, and Roslyn. Mono joined Microsoft as a project of Xamarin, a Microsoft subsidiary. Implementations Microsoft has developed open-source reference C# compilers and tools. The first compiler, Roslyn, compiles into intermediate language (IL), and the second one, RyuJIT, is a JIT (just-in-time) compiler, which is dynamic and does on-the-fly optimization and compiles the IL into native code for the front-end of the CPU. RyuJIT is open source and written in C++. Roslyn is entirely written in managed code (C#), has been opened up and functionality surfaced as APIs. It is thus enabling developers to create refactoring and diagnostics tools. Two branches of official implementation are .NET Framework (closed-source, Windows-only) and .NET Core (open-source, cross-platform); they eventually converged into one open-source implementation: .NET 5.0. 
At .NET Framework 4.6, a new JIT compiler replaced the former. Other C# compilers (some of which include an implementation of the Common Language Infrastructure and .NET class libraries): Mono, a Microsoft-sponsored project provides an open-source C# compiler, a complete open-source implementation of the CLI (including the required framework libraries as they appear in the ECMA specification,) and a nearly complete implementation of the NET class libraries up to .NET Framework 3.5. The Elements tool chain from RemObjects includes RemObjects C#, which compiles C# code to .NET's Common Intermediate Language, Java bytecode, Cocoa, Android bytecode, WebAssembly, and native machine code for Windows, macOS, and Linux. The DotGNU project (now discontinued) also provided an open-source C# compiler, a nearly complete implementation of the Common Language Infrastructure including the required framework libraries as they appear in the ECMA specification, and subset of some of the remaining Microsoft proprietary .NET class libraries up to .NET 2.0 (those not documented or included in the ECMA specification, but included in Microsoft's standard .NET Framework distribution). The Unity game engine uses C# as its primary scripting language. The Godot game engine has implemented an optional C# module due to a donation of $24,000 from Microsoft. See also C# topics C# syntax Comparison of C# and Java Comparison of C# and Visual Basic .NET .NET standard libraries IDEs Visual Studio Visual Studio Code Rider LINQPad MonoDevelop Morfik SharpDevelop Turbo C# Microsoft Visual Studio Express Xamarin Studio Notes References Citations Sources Further reading External links C# Language Specification C# Programming Guide ISO C# Language Specification C# Compiler Platform ("Roslyn") source code 2000 software American inventions Programming languages High-level programming languages .NET programming languages Class-based programming languages Ecma standards Functional languages IEC standards ISO standards Microsoft programming languages Multi-paradigm programming languages Programming languages created in 2000 Programming languages with an ISO standard Statically typed programming languages Compiled programming languages Articles with example C Sharp code
C Sharp (programming language)
Technology
7,025
567,667
https://en.wikipedia.org/wiki/Lexicographic%20order
In mathematics, the lexicographic or lexicographical order (also known as lexical order, or dictionary order) is a generalization of the alphabetical order of the dictionaries to sequences of ordered symbols or, more generally, of elements of a totally ordered set. There are several variants and generalizations of the lexicographical ordering. One variant applies to sequences of different lengths by comparing the lengths of the sequences before considering their elements. Another variant, widely used in combinatorics, orders subsets of a given finite set by assigning a total order to the finite set, and converting subsets into increasing sequences, to which the lexicographical order is applied. A generalization defines an order on an n-ary Cartesian product of partially ordered sets; this order is a total order if and only if all factors of the Cartesian product are totally ordered. Definition The words in a lexicon (the set of words used in some language) have a conventional ordering, used in dictionaries and encyclopedias, that depends on the underlying ordering of the alphabet of symbols used to build the words. The lexicographical order is one way of formalizing word order given the order of the underlying symbols. The formal notion starts with a finite set , often called the alphabet, which is totally ordered. That is, for any two symbols and in that are not the same symbol, either or . The words of are the finite sequences of symbols from , including words of length 1 containing a single symbol, words of length 2 with 2 symbols, and so on, even including the empty sequence with no symbols at all. The lexicographical order on the set of all these finite words orders the words as follows: Given two different words of the same length, say and , the order of the two words depends on the alphabetic order of the symbols in the first place where the two words differ (counting from the beginning of the words): if and only if in the underlying order of the alphabet . If two words have different lengths, the usual lexicographical order pads the shorter one with "blanks" (a special symbol that is treated as smaller than every element of ) at the end until the words are the same length, and then the words are compared as in the previous case. However, in combinatorics, another convention is frequently used for the second case, whereby a shorter sequence is always smaller than a longer sequence. This variant of the lexicographical order is sometimes called . In lexicographical order, the word "Thomas" appears before "Thompson" because they first differ at the fifth letter ('a' and 'p'), and letter 'a' comes before the letter 'p' in the alphabet. Because it is the first difference, in this case the 5th letter is the "most significant difference" for alphabetical ordering. An important property of the lexicographical order is that for each , the set of words of length is well-ordered by the lexicographical order (provided the alphabet is finite); that is, every decreasing sequence of words of length is finite (or equivalently, every non-empty subset has a least element). It is not true that the set of all finite words is well-ordered; for example, the infinite set of words {b, ab, aab, aaab, ... } has no lexicographically earliest element. Numeral systems and dates The lexicographical order is used not only in dictionaries, but also commonly for numbers and dates. One of the drawbacks of the Roman numeral system is that it is not always immediately obvious which of two numbers is the smaller. 
On the other hand, with the positional notation of the Hindu–Arabic numeral system, comparing numbers is easy, because the natural order on natural numbers is the same as the variant shortlex of the lexicographic order. In fact, with positional notation, a natural number is represented by a sequence of numerical digits, and a natural number is larger than another one if either it has more digits (ignoring leading zeroes) or the number of digits is the same and the first (most significant) digit which differs is larger. For real numbers written in decimal notation, a slightly different variant of the lexicographical order is used: the parts on the left of the decimal point are compared as before; if they are equal, the parts at the right of the decimal point are compared with the lexicographical order. The padding 'blank' in this context is a trailing "0" digit. When negative numbers are also considered, one has to reverse the order for comparing negative numbers. This is not usually a problem for humans, but it may be for computers (testing the sign takes some time). This is one of the reasons for adopting two's complement representation for representing signed integers in computers. Another example of a non-dictionary use of lexicographical ordering appears in the ISO 8601 standard for dates, which expresses a date as YYYY-MM-DD. This formatting scheme has the advantage that the lexicographical order on sequences of characters that represent dates coincides with the chronological order: an earlier CE date is smaller in the lexicographical order than a later date up to year 9999. This date ordering makes computerized sorting of dates easier by avoiding the need for a separate sorting algorithm. Monoid of words The over an alphabet is the free monoid over . That is, the elements of the monoid are the finite sequences (words) of elements of (including the empty sequence, of length 0), and the operation (multiplication) is the concatenation of words. A word is a prefix (or 'truncation') of another word if there exists a word such that . By this definition, the empty word () is a prefix of every word, and every word is a prefix of itself (with ); care must be taken if these cases are to be excluded. With this terminology, the above definition of the lexicographical order becomes more concise: Given a partially or totally ordered set , and two words and over such that is non-empty, then one has under lexicographical order, if at least one of the following conditions is satisfied: is a prefix of there exists words , , (possibly empty) and elements and of such that Notice that, due to the prefix condition in this definition, where is the empty word. If is a total order on then so is the lexicographic order on the words of However, in general this is not a well-order, even if the alphabet is well-ordered. For instance, if , the language has no least element in the lexicographical order: . Since many applications require well orders, a variant of the lexicographical orders is often used. This well-order, sometimes called or , consists in considering first the lengths of the words (if , then ), and, if the lengths are equal, using the lexicographical order. If the order on is a well-order, the same is true for the shortlex order. Cartesian products The lexicographical order defines an order on an n-ary Cartesian product of ordered sets, which is a total order when all these sets are themselves totally ordered. 
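As a small, purely illustrative sketch (the helper below is ours, not part of any standard library), lexicographic comparison of pairs looks at the first coordinates and falls back to the second coordinates only on a tie:

using System;

class LexicographicDemo
{
    // Compare two pairs of integers lexicographically.
    static int CompareLex((int First, int Second) x, (int First, int Second) y)
    {
        int byFirst = x.First.CompareTo(y.First);
        return byFirst != 0 ? byFirst : x.Second.CompareTo(y.Second);  // tie broken by the second factor
    }

    static void Main()
    {
        Console.WriteLine(CompareLex((1, 9), (2, 0)) < 0);  // True: 1 < 2 decides; 9 versus 0 is ignored
        Console.WriteLine(CompareLex((1, 3), (1, 5)) < 0);  // True: tie on the first coordinate, then 3 < 5
    }
}

The same idea extends coordinate by coordinate to longer tuples, as made precise next.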
An element of a Cartesian product is a sequence whose th element belongs to for every As evaluating the lexicographical order of sequences compares only elements which have the same rank in the sequences, the lexicographical order extends to Cartesian products of ordered sets. Specifically, given two partially ordered sets and the is defined as The result is a partial order. If and are each totally ordered, then the result is a total order as well. The lexicographical order of two totally ordered sets is thus a linear extension of their product order. One can similarly define the lexicographic order on the Cartesian product of an infinite family of ordered sets, if the family is indexed by the natural numbers, or more generally by a well-ordered set. This generalized lexicographical order is a total order if each factor set is totally ordered. Unlike the finite case, an infinite product of well-orders is not necessarily well-ordered by the lexicographical order. For instance, the set of countably infinite binary sequences (by definition, the set of functions from natural numbers to also known as the Cantor space ) is not well-ordered; the subset of sequences that have precisely one (that is, ) does not have a least element under the lexicographical order induced by because is an infinite descending chain. Similarly, the infinite lexicographic product is not Noetherian either because is an infinite ascending chain. Functions over a well-ordered set The functions from a well-ordered set to a totally ordered set may be identified with sequences indexed by of elements of They can thus be ordered by the lexicographical order, and for two such functions and the lexicographical order is thus determined by their values for the smallest such that If is also well-ordered and is finite, then the resulting order is a well-order. As shown above, if is infinite this is not the case. Finite subsets In combinatorics, one often has to enumerate, and therefore to order, the finite subsets of a given set For this, one usually chooses an order on Then, sorting a subset of is equivalent to converting it into an increasing sequence. The lexicographic order on the resulting sequences thus induces an order on the subsets, which is also called the . In this context, one generally prefers to sort the subsets first by cardinality, such as in the shortlex order. Therefore, in the following, we will consider only orders on subsets of fixed cardinality. For example, using the natural order of the integers, the lexicographical ordering on the subsets of three elements of is . For ordering finite subsets of a given cardinality of the natural numbers, the order (see below) is often more convenient, because all initial segments are finite, and thus the colexicographical order defines an order isomorphism between the natural numbers and the set of sets of natural numbers. This is not the case for the lexicographical order, as, with the lexicographical order, we have, for example, for every Group orders of Zn Let be the free Abelian group of rank whose elements are sequences of integers and whose operation is addition. A group order on is a total order, which is compatible with addition, that is The lexicographical ordering is a group order on The lexicographical ordering may also be used to characterize all group orders on In fact, linear forms with real coefficients define a map from into which is injective if the forms are linearly independent (it may be also injective if the forms are dependent, see below). 
The lexicographic order on the image of this map induces a group order on Robbiano's theorem is that every group order may be obtained in this way. More precisely, given a group order on there exist an integer and linear forms with real coefficients, such that the induced map from into has the following properties; is injective; the resulting isomorphism from to the image of is an order isomorphism when the image is equipped with the lexicographical order on Colexicographic order The colexicographic or colex order is a variant of the lexicographical order that is obtained by reading finite sequences from the right to the left instead of reading them from the left to the right. More precisely, whereas the lexicographical order between two sequences is defined by if for the first where and differ, the colexicographical order is defined by if for the last where and differ In general, the difference between the colexicographical order and the lexicographical order is not very significant. However, when considering increasing sequences, typically for coding subsets, the two orders differ significantly. For example, for ordering the increasing sequences (or the sets) of two natural integers, the lexicographical order begins by , and the colexicographic order begins by . The main property of the colexicographical order for increasing sequences of a given length is that every initial segment is finite. In other words, the colexicographical order for increasing sequences of a given length induces an order isomorphism with the natural numbers, and allows enumerating these sequences. This is frequently used in combinatorics, for example in the proof of the Kruskal–Katona theorem. Monomials When considering polynomials, the order of the terms does not matter in general, as the addition is commutative. However, some algorithms, such as polynomial long division, require the terms to be in a specific order. Many of the main algorithms for multivariate polynomials are related with Gröbner bases, concept that requires the choice of a monomial order, that is a total order, which is compatible with the monoid structure of the monomials. Here "compatible" means that if the monoid operation is denoted multiplicatively. This compatibility implies that the product of a polynomial by a monomial does not change the order of the terms. For Gröbner bases, a further condition must be satisfied, namely that every non-constant monomial is greater than the monomial . However this condition is not needed for other related algorithms, such as the algorithms for the computation of the tangent cone. As Gröbner bases are defined for polynomials in a fixed number of variables, it is common to identify monomials (for example ) with their exponent vectors (here ). If is the number of variables, every monomial order is thus the restriction to of a monomial order of (see above for a classification). One of these admissible orders is the lexicographical order. It is, historically, the first to have been used for defining Gröbner bases, and is sometimes called for distinguishing it from other orders that are also related to a lexicographical order. Another one consists in comparing first the total degrees, and then resolving the conflicts by using the lexicographical order. This order is not widely used, as either the lexicographical order or the degree reverse lexicographical order have generally better properties. 
The degree reverse lexicographical order consists also in comparing first the total degrees, and, in case of equality of the total degrees, using the reverse of the colexicographical order. That is, given two exponent vectors a = [a_1, ..., a_n] and b = [b_1, ..., b_n], one has a < b if either a_1 + ... + a_n < b_1 + ... + b_n, or a_1 + ... + a_n = b_1 + ... + b_n and a_i > b_i for the last index i at which a_i and b_i differ. For this ordering, the monomials of degree one have the same order as the corresponding indeterminates (this would not be the case if the reverse lexicographical order were used). For comparing monomials in two variables of the same total degree, this order is the same as the lexicographic order. This is not the case with more variables. For example, for exponent vectors of monomials of degree two in three variables, one has for the degree reverse lexicographic order: [0, 0, 2] < [0, 1, 1] < [1, 0, 1] < [0, 2, 0] < [1, 1, 0] < [2, 0, 0]. For the lexicographical order, the same exponent vectors are ordered as [0, 0, 2] < [0, 1, 1] < [0, 2, 0] < [1, 0, 1] < [1, 1, 0] < [2, 0, 0]. A useful property of the degree reverse lexicographical order is that a homogeneous polynomial is a multiple of the least indeterminate if and only if its leading monomial (its greatest monomial) is a multiple of this least indeterminate. See also Collation Kleene–Brouwer order Lexicographic preferences - an application of lexicographic order in economics. Lexicographic optimization - an algorithmic problem of finding a lexicographically-maximal element. Lexicographic order topology on the unit square Lexicographic ordering in tensor abstract index notation Lexicographically minimal string rotation Leximin order Long line (topology) Lyndon word Pre-order - the name of the lexicographical order (of bits) in a binary tree traversal Star product, a different way of combining partial orders Shortlex order Orders on the Cartesian product of totally ordered sets References External links Order theory Lexicography
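The monomial orders described above can be checked mechanically on exponent vectors. The following sketch is an added illustration (the helper functions are hypothetical names, not part of any computer-algebra library); it encodes the degree reverse lexicographical rule as a sort key and reproduces the two degree-two orderings given in the text.

```python
def lex_key(a):
    # Lexicographic order: compare exponents from the left.
    return tuple(a)

def degrevlex_key(a):
    # Degree reverse lexicographic order: compare total degrees first;
    # ties are broken at the LAST differing position, where the larger
    # exponent loses.  Reversing and negating the vector turns this into
    # an ordinary tuple comparison.
    return (sum(a), tuple(-x for x in reversed(a)))

# Exponent vectors of the degree-two monomials in three variables.
deg2 = [(2, 0, 0), (1, 1, 0), (1, 0, 1), (0, 2, 0), (0, 1, 1), (0, 0, 2)]

print(sorted(deg2, key=lex_key))
# (0,0,2) (0,1,1) (0,2,0) (1,0,1) (1,1,0) (2,0,0)
print(sorted(deg2, key=degrevlex_key))
# (0,0,2) (0,1,1) (1,0,1) (0,2,0) (1,1,0) (2,0,0)
```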
Lexicographic order
Mathematics
3,342
914,901
https://en.wikipedia.org/wiki/Sard%27s%20theorem
In mathematics, Sard's theorem, also known as Sard's lemma or the Morse–Sard theorem, is a result in mathematical analysis that asserts that the set of critical values (that is, the image of the set of critical points) of a smooth function f from one Euclidean space or manifold to another is a null set, i.e., it has Lebesgue measure 0. This makes the set of critical values "small" in the sense of a generic property. The theorem is named for Anthony Morse and Arthur Sard. Statement More explicitly, let f : R^n → R^m be C^k (that is, k times continuously differentiable), where k ≥ max{n − m + 1, 1}. Let X ⊆ R^n denote the critical set of f, which is the set of points at which the Jacobian matrix of f has rank less than m. Then the image f(X) has Lebesgue measure 0 in R^m. Intuitively speaking, this means that although X may be large, its image f(X) must be small in the sense of Lebesgue measure: while f may have many critical points in the domain R^n, it must have few critical values in the image R^m. More generally, the result also holds for mappings between differentiable manifolds M and N of dimensions m and n, respectively. The critical set X of a C^k function f : N → M consists of those points at which the differential df has rank less than m as a linear transformation. If k ≥ max{n − m + 1, 1}, then Sard's theorem asserts that the image of X has measure zero as a subset of M. This formulation of the result follows from the version for Euclidean spaces by taking a countable set of coordinate patches. The conclusion of the theorem is a local statement, since a countable union of sets of measure zero is a set of measure zero, and the property of a subset of a coordinate patch having zero measure is invariant under diffeomorphism. Variants There are many variants of this lemma, which plays a basic role in singularity theory among other fields. The case m = 1 was proven by Anthony P. Morse in 1939, and the general case by Arthur Sard in 1942. A version for infinite-dimensional Banach manifolds was proven by Stephen Smale. The statement is quite powerful, and the proof involves analysis. In topology it is often quoted — as in the Brouwer fixed-point theorem and some applications in Morse theory — in order to prove the weaker corollary that “a non-constant smooth map has at least one regular value”. In 1965 Sard further generalized his theorem to state that if f : R^n → R^m is C^k for k ≥ max{n − r + 1, 1} and if A_r denotes the set of points x ∈ R^n such that the differential df_x has rank strictly less than r, then the r-dimensional Hausdorff measure of f(A_r) is zero. In particular the Hausdorff dimension of f(A_r) is at most r. Caveat: The Hausdorff dimension of f(A_r) can be arbitrarily close to r. See also Generic property References Further reading Lemmas in analysis Smooth functions Multivariable calculus Singularity theory Theorems in analysis Theorems in differential geometry Theorems in measure theory
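A minimal worked example (added here purely as an illustration, not drawn from the cited literature) shows the contrast the theorem draws between the size of the critical set and the size of the set of critical values.

```latex
% Two maps f, g : \mathbb{R} \to \mathbb{R}
\begin{aligned}
f(x) &= x^{2}: & X_f &= \{x : f'(x) = 0\} = \{0\}, & f(X_f) &= \{0\},\\
g(x) &= c \ \text{(constant)}: & X_g &= \mathbb{R}, & g(X_g) &= \{c\}.
\end{aligned}
```

In both cases the set of critical values is a single point, hence a Lebesgue-null set, even though for the constant map every point of the domain is critical.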
Sard's theorem
Mathematics
584
14,289,045
https://en.wikipedia.org/wiki/Sodium%20alum
Sodium aluminium sulfate is the inorganic compound with the chemical formula NaAl(SO4)2·12H2O (sometimes written Na2SO4·Al2(SO4)3·24H2O). Also known as soda alum, sodium alum, or SAS, this white solid is used in the manufacture of baking powder and as a food additive. Its official mineral name is alum-Na (IMA symbol: Aum-Na). Properties Like its potassium analog, sodium aluminum sulfate crystallizes as the dodecahydrate in the classical cubic alum structure. Sodium alum is very soluble in water, and is extremely difficult to purify. In the preparation of this salt, it is preferable to mix the component solutions in the cold, and to evaporate them at a temperature not exceeding 60 °C. 100 parts of water dissolve 110 parts of sodium alum at 0 °C, and 51 parts at 16 °C. Production and natural occurrence Sodium aluminum sulfate is produced by combining sodium sulfate and aluminium sulfate. An estimated 3000 ton/y (2003) are produced worldwide. The dodecahydrate is known in mineralogy as alum-(Na). Two other rare mineral forms are known: mendozite (undecahydrate) and tamarugite (hexahydrate). Uses In the US, some brands combine sodium aluminum sulfate with sodium bicarbonate and monocalcium phosphate in formulations of double acting baking powder. Kawahara et al. 1994 noted that aluminum is “a suspected risk factor in Alzheimer's disease” and that “aluminum directly influences the process of Alzheimer′s disease”. More recent research however disputes the alleged link between aluminum and Alzheimer's disease and The Alzheimer’s Society concluded that “No convincing relationship between aluminium and the development of Alzheimer's disease has been established.” Sodium alum is also used as an acidity regulator in food, with E number E521. Sodium alum is also a common mordant for the preparation of hematoxylin solutions for staining cell nuclei in histopathology. It is also used as a flocculant in water treatment and disinfection, but its relatively crude, caustic action makes it more suitable for industrial applications. References Works cited Aluminium compounds Sodium compounds Sulfates Double salts E-number additives
Sodium alum
Chemistry
492
27,839,280
https://en.wikipedia.org/wiki/General%20Relativity%20and%20Gravitation
General Relativity and Gravitation is a monthly peer-reviewed scientific journal. It was established in 1970, and is published by Springer Science+Business Media under the auspices of the International Society on General Relativity and Gravitation. The two editors-in-chief are Pablo Laguna and Mairi Sakellariadou; former editors include George Francis Rayner Ellis, Hermann Nicolai, Abhay Ashtekar, and Roy Maartens. The journal's field of interest is modern gravitational physics, encompassing all theoretical and experimental aspects of general relativity and gravitation. Aims and scope The aims of General Relativity and Gravitation include public outreach through teaching and public understanding, as well as disseminating the history of general relativity and gravitation. Another aim of the journal is to publish original research on numerous topics. Some of the topics of interest are observational or theoretical work in cosmology, general relativity, gravity, supergravity, quantum gravity, string theory (including extensions), relativity, and the related complex mathematics involved. Publishing formats include original research papers, short communications, commentaries, review articles, and book reviews. The journal also includes mathematical topics related to the journal's science topics, along with mathematical results and techniques. Abstracting and indexing General Relativity and Gravitation is abstracted and indexed in Academic OneFile, Academic Search, Astrophysics Data System, Compendex, ProQuest, Current Contents/Physical, Chemical and Earth Sciences, Digital Mathematics Registry, INIS Atomindex, Inspec, Mathematical Reviews, Science Citation Index, VINITI Database RAS, and Zentralblatt MATH. References External links Astrophysics journals Mathematical physics journals Physics journals Academic journals established in 1970 Monthly journals Springer Science+Business Media academic journals English-language journals
General Relativity and Gravitation
Physics
365
78,006,672
https://en.wikipedia.org/wiki/Dosage%20%28pharmacology%29
In pharmacology and medicine, dosage refers to the prescribed regimen for administering a medication or substance, encompassing the amount, frequency, and duration of use. It is distinct from dose, which denotes a single, specific quantity of a drug or substance given at one time. Dosage typically includes information on the number of doses, intervals between administrations, and the overall treatment period. For example, a dosage might be described as "200 mg twice daily for two weeks," where 200 mg represents the individual dose, twice daily indicates the frequency, and two weeks specifies the duration of treatment. References Medication pharmacology
Dosage (pharmacology)
Chemistry
129
61,759,328
https://en.wikipedia.org/wiki/Ascosphaera%20aggregata
Ascosphaera aggregata is a species of fungus. History and taxonomy Ascosphaera aggregata, discovered in 1975 by Jens-Peder Skou, is a fungus that is related to Ascosphaera apis. Habitat and ecology Ascosphaera aggregata is an obligate parasite that causes chalkbrood in bees; symptom manifestations differ depending on the age of the larva. It primarily infects alfalfa leafcutting bees, Megachile rotundata. Megachile rotundata infected with A. aggregata have been detected in the United States, Canada, and South America. Other bee species that A. aggregata has been seen to infect include the red mason bee (Osmia rufa), the patchwork leafcutter bee (Megachile centuncularis), Megachile pugnata and Megachile relativa. Growth, morphology and pathobiology Ascosphaera aggregata is an obligate parasite that can cause chalkbrood by the fifth instar. The majority of the life cycle and growth of A. aggregata occurs in M. rotundata larvae. Infection of bee larvae occurs only via ingestion of resting spores, and is not possible via spore inhalation or contact with the fungal vegetative form. Spores develop in the larva and cause it to swell, bursting the larval integument (giving the dead larvae a ragged appearance) and furthering the spread of the fungus. Buildup of larval cadavers traps the unaffected emerging bees, forcing them to chew through the cadavers and be covered in spores. Bees covered in spores then contaminate food provisions for other broods and spread the infection. Early vegetative growth utilizes gut lumen nutrients. A. aggregata grows through the midgut wall to the hemocoele (the trigger for this event is unknown; it is not due to lack of space or food), eventually replacing larval tissue. Resulting larvae are filled with a mycelial mat comprising two layers: a dense inner layer and a less dense outer layer. Sexual development Ascospore morphology consists of two layers: an inner chitinous and smooth layer, and an outer layer that is rough, spotted, and not composed of chitin or cellulose. Ascospore development in A. aggregata is unique and the resulting structure is referred to as a "spore cyst", or "ascocyst" or "synascus". Sexual development occurs on the outer mycelial mat in the subcuticular region, and is documented to proceed as follows: The vegetative hyphae tips swell and form a thallus. The middle of the thallus grows and forms a nutriocyte (previously referred to as an archicarp). The apical portion differentiates into the trichogyne cell. Compatible trichogynes fuse and initiate plasmogamy. Resulting dikaryotic fungal protoplasm then enters the nutriocyte and causes enlargement of the nutriocyte. Nutriocyte growth causes the integument to rupture and initiates development of a fragile spherical structure without a cell wall. Individual spores then pack together into a seemingly membrane-less spore ball. Multiple spore balls then join and form a spore cyst. Cell wall deposition changes spore colour from opaque white to grey to dull black. Physiology Ascosphaera aggregata has been found to be unable to break down chitin. Diagnostic considerations Although ascospore development is unique, it is very hard to identify A. aggregata because the spore balls and conidia tend to resemble other species. Recent investigations by James and Skinner (2005) have discovered that PCR of the ITS domain of ribosomal DNA with species-specific primer sets allows the detection of fungal DNA (working, even, in asymptomatic individuals).
The PCR technique can also be used on hair and honey samples to avoid the difficulty of culturing spores, as spores were previously shown to germinate well only in lipids. Storage of the fungus has also proven to be difficult, as it collapses after 1–2 months during normal culture passaging. However, Jensen et al. (2009) found that spores could be preserved via cryopreservation or freeze-drying, whereas hyphae unfortunately could not be preserved. Economic importance Megachile rotundata is the primary pollinator of commercially grown alfalfa seed, accounting for 46,000 metric tonnes of North American alfalfa seed (two-thirds of global production) in 2004. M. rotundata is also the second most valuable field crop pollinator, behind the honey bee, because of the value of alfalfa in animal feed and hay. A. aggregata has been killing this economic pollinator in the US since 1972 and has been reported to be able to kill greater than 50% of a population. Effective management of the fungus has yet to be discovered, as the current registered treatment in Canada (paraformaldehyde fumigation of spores) involves a carcinogen and other treatment options (heat and chloride treatments) are expensive and labour-intensive. References Onygenales Fungus species
Ascosphaera aggregata
Biology
1,093
199,772
https://en.wikipedia.org/wiki/Concept%20testing
Concept testing (to be distinguished from pre-test markets and test markets which may be used at a later stage of product development research) is the process of using surveys (and sometimes qualitative methods) to evaluate consumer acceptance of a new product idea prior to the introduction of a product to the market. It is important not to confuse concept testing with advertising testing, brand testing and packaging testing, as is sometimes done. Concept testing focuses on the basic product idea, without the embellishments and puffery inherent in advertising. Questionnaires It is important that the instruments (questionnaires) used to test the product are themselves of high quality. Otherwise, results from data gathered in surveys may be biased by measurement error. That makes the design of the testing procedure more complex. Empirical tests provide insight into the quality of the questionnaire. This can be done by: conducting cognitive interviewing. By asking a fraction of potential respondents about their interpretation of the questions and use of the questionnaire, a researcher can verify the viability of the questionnaire. carrying out a small pretest of the questionnaire, using a small subset of target respondents. Results can inform a researcher of errors such as missing questions, or logical and procedural errors. estimating the measurement quality of the questions. This can be done for instance using test-retest, quasi-simplex, or multitrait-multimethod models. predicting the measurement quality of the question. This can be done using the software Survey Quality Predictor (SQP). Concept testing The concept generation stage is an early part of concept testing in the new product development (NPD) process. The concept generation stage of concept testing can take on many forms. Sometimes concepts are generated incidentally, as the result of technological advances. At other times concept generation is deliberate: examples include brain-storming sessions, problem detection surveys and qualitative research. While qualitative research can provide insights into the range of reactions consumers may have, it cannot provide an indication of the likely success of the new concept; this is better left to quantitative concept-test surveys. In the early stages of concept testing, a large field of alternative concepts might exist, requiring concept-screening surveys. Concept-screening surveys provide a quick means to narrow the field of options; however they provide little depth of insight and cannot be compared to a normative database due to interactions between concepts. For greater insight and to reach decisions on whether or not to pursue further product development, monadic concept-testing surveys must be conducted. Presentation modes Frequently concept testing surveys are described as either monadic, sequential monadic, comparative, or proto-monadic. The terms mainly refer to how the concepts are displayed: Monadic. The concept is evaluated in isolation. Sequential monadic. Multiple concepts are evaluated in sequence (often randomized order). Comparative. Concepts are shown next to each other. Proto-monadic. Concepts are first shown in sequence, and then next to each other. "Monadic testing is the recommended method for most concept testing. Interaction effects and biases are avoided. Results from one test can be compared to results from previous monadic tests. A normative database can be constructed." However, each has its specific uses and it depends on the research objectives.
The decision as to which method to use is best left to experienced research professionals, as there are numerous implications in terms of how the results are interpreted. Evaluating concept-test scores Traditionally concept-test survey results are compared to 'norms databases'. These are databases of previous new-product concept tests. These must be 'monadic' concept tests, to prevent interaction effects. To be fair, it is important that these databases contain 'new' concept test results, not ratings of old products that consumers are already familiar with, since once consumers become familiar with a product the ratings often drop. Comparing new concept ratings to the ratings for an existing product already on the market would result in an invalid comparison, unless special precautions are taken by researchers to reduce or adjust for this effect quantitatively. Additionally, the concept is usually only compared to norms from the same product category, and the same country. Companies that specialize in this area tend to have developed their own unique systems, each with its own standards. Keeping to these standards consistently is important to preventing contamination of the results. Perhaps one of the most famous concept-test systems is the Nielsen Bases system, which comes in different versions. Other well-known products include Decision Analyst's 'Concept Check', Acupoll's 'Concept Optimizer', Ipsos Innoquest and GFK. Examples of smaller players include Skuuber and Acentric Express Test. Determining the importance of concept attributes as purchase drivers The simplest approach to determining attribute importance is to ask direct open-ended questions. Alternatively checklists or ratings of the importance of each product attribute may be used. However, various debates have existed over whether or not consumers could be trusted to directly indicate the level of importance of each product attribute. As a result, correlation analysis and various forms of multiple regression have often been used for identifying importance - as an alternative to direct questions. A complementary technique to concept testing is conjoint analysis (also referred to as discrete choice modelling). Various forms of conjoint analysis and discrete choice modelling exist. While academics stress the differences between the two, in practice there is often little difference. These techniques estimate the importance of product attributes indirectly, by creating alternative products according to an experimental design, and then using consumer responses to these alternatives (usually ratings of purchase likelihood or choices made between alternatives) to estimate importance. The results are often expressed in the form of a 'simulator' tool which allows clients to test alternative product configurations and pricing. Volumetric concept testing Volumetric concept testing falls somewhere between traditional concept testing and pre-test market models (simulated test market models are similar but emphasize greater realism) in terms of the level of complexity. The aim is to provide 'approximate' sales volume forecasts for the new concept prior to launch. They incorporate other variables beyond just input from the concept test survey itself, such as the distribution strategy. Examples of volumetric forecasting methodologies include 'Acupoll Foresight' and Decision Analyst's 'Conceptor'.
Some models (more properly referred to as 'pre-test market models' or 'simulated test markets') gather additional data from a follow-up product testing survey (especially in the case of consumer packaged goods, as repeat purchase rates need to be estimated). They may also include an advertising testing component that aims to assess advertising quality. Some, such as Decision Analyst, include discrete choice models / conjoint analysis. See also marketing research proof of concept References Aptitude Design Innovation economics Product testing Science and technology studies Market research
Concept testing
Technology,Engineering
1,377
34,775,279
https://en.wikipedia.org/wiki/Enriques%E2%80%93Babbage%20theorem
In algebraic geometry, the Enriques–Babbage theorem states that a canonical curve is either a set-theoretic intersection of quadrics, or trigonal, or a plane quintic. It was proved by Federigo Enriques and by Dennis Babbage. References Algebraic curves Theorems in algebraic geometry
Enriques–Babbage theorem
Mathematics
58
1,058,554
https://en.wikipedia.org/wiki/CK722
The CK722 was the first low-cost junction transistor available to the general public. It was a PNP germanium small-signal unit. Developed by Norman Krim, it was introduced by Raytheon in early 1953 for $7.60 each; the price was reduced to $3.50 in late 1954 and to $0.99 in 1956. Norm Krim selected Radio Shack to sell the CK721 and CK722 through their catalog. Krim had a long-standing personal and business relationship with Radio Shack. The CK722s were selected "fall out" from the Raytheon's premium-priced CK721 (which are fallouts from CK718 hearing-aid transistors). Raytheon actively encouraged hobbyists with design contests and advertisements. In the 1950s and 1960s, hundreds of hobbyist electronics projects based around the CK722 transistor were published in popular books and magazines. Raytheon also participated in expanding the role of the CK721/CK722 as a hobbyist electronics device by publishing "Transistor Applications" and "Transistor Applications Volume 2" during the mid-1950s. Construction The original CK722 were direct fallouts from CK718 hearing-aid transistors that did not meet specifications. These fallouts were later stamped with CK721 or CK722 numbers based on gain, noise and other dynamic characteristics. Early CK722s were plastic-encapsulated and had a black body. As Raytheon improved its production of hearing-aid transistors with the introduction of the smaller CK78x series, the body of the CK721/CK722s was changed to a metal case. Raytheon, however, kept the basic body size and used a unique method by taking the smaller CK78x rejects and inserting it into the larger body and sealing it. The first metal-cased CK721/CK722s were blue, and the later ones were silver. More details of this can be found in Jack Ward's website, Semiconductor Museum or the CK722 Museum, see external link reference below. Engineers associated with the CK722 Norman Krim – father of the transistor hobbyist market In the late 1930s, Norm Krim, then an engineer for Raytheon, was looking into subminiature tubes for use in consumer applications such as hearing aids and pocket radios. Krim's team developed the CK501X subminiature amplifier tube that could run on penlight A type batteries or small 22.5 V B-type batteries. Following World War II, Krim was interested in developing the first pocket vacuum tube radio. Raytheon approved, and a team headed by Krim designed a set of subminiature tubes specifically for radios (2E32, 2E36, 2E42 and 2G22). Raytheon’s acquisition of Belmont Radio proved prescient, and the result was the Belmont Boulevard in 1945. The radio did not sell well, and Raytheon took a loss. Despite this setback, Krim remained at the company and shifted his attention to the newly developed transistor. Carl David Todd – participant in the CK722 design contest Carl Todd, a hobbyist and later engineer in GE’s transistor division, placed 6th in Raytheon's CK722 design contest. His hobby work with this early transistor inspired him to pursue electrical engineering as a career. As an engineer, he helped develop the 2N107 transistor, GE's alternative to the CK722. 
See also Alfred Powell Morgan – an author of youth-oriented books on early electronics References External links A general summary of Norman Krim's achievements can be seen at this IEEE link In Memoriam- Norm Krim Jack Ward's Semiconductor Museum-The CK722 transistor website and museum Harry Goldstein's IEEE article on celebrating the transistor- webarchive backup: Free version Commercial transistors History of electronic engineering Bipolar transistors
CK722
Engineering
846
53,502,953
https://en.wikipedia.org/wiki/Skysite
Skysite, stylized as SKYSITE, is a document management platform designed for construction and facility owners and managers. The cloud-based drawing management and distribution software allows the management of documents related to construction projects. Skysite includes features like RFI, Punch List and photo management. Skysite’s track and report functionality is designed to prevent mistakes and delivery delays. The system can be accessed to generate reports for all documents that are shared with the team. The reports can also be deleted, downloaded, or marked up to increase employee accountability. Once constructed, facility managers can use Skysite to store, access, distribute, and manage the documents related to the operations of the building. As-builts, O&Ms, warranties and more can be organized and searched. Skysite’s real-time synchronization of documents allows access to current information whenever and wherever needed from tablets and mobile devices. All the building’s information can be archived and searched with custom attributes, along with document retention to mitigate risk. History Skysite was founded in 2015 with the goal of helping manage documents in active construction projects. Skysite offers users desktop sync and a mobile app that has been designed to add mobility to the industry. Product The software allows users to manage, view, collaborate, and distribute encrypted construction documents in real time on mobile device or desktop device. The app has been designed to store all critical information in the cloud, for local or offline access. It also has built-in mark-up tools to communicate issues. The offerings of the app includes improved hyperlinking of construction documents, image and more attachments to a Punch list item and document search. Users can upload documents and files with drag and drop, pictures can be pinned to construction drawings and RFIs can be answered. A sample project aids users in getting started. Facility management professionals can manage, sync, organize, search and share important information from computers and mobile devices. Skysite allows to sync all documents with mark-ups, and annotations, along with revision updates, so that the team can uses the right information to make decisions. The revamped application program interface (API) of Skysite is designed to fit to an existing information technology infrastructure and integrate with other project applications. Skysite’s API integrates with various productivity tools including Google Drive Box, Dropbox, OneDrive, and Egnyte. Skysite software can be accessed to reduce errors and inadequacies associated with paper-based document management systems during all the phases of construction. The system is designed to reduce information management costs, increase work efficiency, enable secure file access and sharing, and make collaboration better, easier, and faster. The Skysite mobile app is available on the iOS App Store and Google Play Store. References Construction software Document management systems
Skysite
Engineering
585
55,197,783
https://en.wikipedia.org/wiki/Cells%20%28journal%29
Cells is a monthly peer-reviewed open-access scientific journal that covers all aspects of cell and molecular biology, and biophysics. It was established in 2012 and is published by MDPI. The founding editor-in-chief is Alexander E. Kalyuzhny (University of Minnesota) who was joined by Cord Brakebusch (University of Copenhagen) in 2020. Abstracting and indexing The journal is abstracted and indexed in: Biological Abstracts BIOSIS Previews EBSCO databases Embase Index Medicus/MEDLINE/PubMed Science Citation Index Expanded Scopus According to the Journal Citation Reports, the journal has a 2021 impact factor of 7.666. References External links English-language journals MDPI academic journals Academic journals established in 2012 Monthly journals Molecular and cellular biology journals
Cells (journal)
Chemistry
161
32,098,871
https://en.wikipedia.org/wiki/Ternary%20fission
Ternary fission is a comparatively rare (0.2 to 0.4% of events) type of nuclear fission in which three charged products are produced rather than two. As in other nuclear fission processes, other uncharged particles such as multiple neutrons and gamma rays are produced in ternary fission. Ternary fission may happen during neutron-induced fission or in spontaneous fission (the type of radioactive decay). About 25% more ternary fission happens in spontaneous fission compared to the same fission system formed after thermal neutron capture, illustrating that these processes remain physically slightly different, even after the absorption of the neutron, possibly because of the extra energy present in the nuclear reaction system of thermal neutron-induced fission. Quaternary fission, at 1 per 10 million fissions, is also known (see below). Products The most common nuclear fission process is "binary fission." It produces two charged asymmetrical fission products with maximally probable charged product at 95±15 and 135±15 u atomic mass. However, in this conventional fission of large nuclei, the binary process happens merely because it is the most energetically probable. In anywhere from 2 to 4 fissions per 1000 in a nuclear reactor, the alternative ternary fission process produces three positively charged fragments (plus neutrons, which are not charged and not counted in this reckoning). The smallest of the charged products may range from so small a charge and mass as a single proton (Z=1), up to as large a fragment as the nucleus of argon (Z=18). Although particles as large as argon nuclei may be produced as the smaller (third) charged product in the usual ternary fission, the most common small fragments from ternary fission are helium-4 nuclei, which make up about 90% of the small fragment products. This high incidence is related to the stability (high binding energy) of the alpha particle, which makes more energy available to the reaction. The second-most common particles produced in ternary fission are Tritons (the nuclei of tritium), which make up 7% of the total small fragments, and the third-most are helium-6 nuclei (which decay in about 0.8 seconds to lithium-6). Protons and larger nuclei are in the small fraction (< 2%) which make up the remainder of the small charged products. The two larger charged particles from ternary fission, particularly when alphas are produced, are quite similar in size distribution to those produced in binary fission. Product energies The energy of the third much-smaller product usually ranges between 10 and 20 MeV. In keeping with their origin, alpha particles produced by ternary fission typically have mean energies of about ~ 16 MeV (energies this great are never seen in alpha decay). Since these typically have significantly more energy than the ~ 5 MeV alpha particles from alpha decay, they are accordingly called "long-range alphas" (referring to their longer range in air or other media). The other two larger fragments carry away, in their kinetic energies, the remainder of the fission kinetic energy (typically totalling ~ 170 MeV in heavy element fission) that does not appear as the 10 to 20 MeV kinetic energy carried away by the third smaller product. Thus, the larger fragments in ternary fission are each less energetic, by a typical 5 to 10 MeV, than they are seen to be in binary fission. 
Importance Although the ternary fission process is less common than the binary process, it still produces significant helium-4 and tritium gas buildup in the fuel rods of modern nuclear reactors. This phenomenon was initially detected in 1957, within the environs of the Savannah River National Laboratory. True ternary fission A very rare type of ternary fission process is sometimes called "true ternary fission." It produces three nearly equal-sized charged fragments (Z ~ 30) but only happens in about 1 in 100 million fission events. In this type of fission, the product nuclei split the fission energy in three nearly equal parts and have kinetic energies of ~ 60 MeV. True ternary fission has so far only been observed in nuclei bombarded by heavy, high energy ions. Quaternary fission Another rare fission process, occurring in about 1 in 10 million fissions, is Quaternary fission. It is analogous to ternary fission, save that four charged products are seen. Typically two of these are light particles, with the most common mode of Quaternary fission apparently being two large particles and two alpha particles (rather than one alpha, the most common mode of ternary fission). References Fission, nuclear Nuclear fission Concepts in physics Nuclear chemistry Radioactivity
Ternary fission
Physics,Chemistry
949
52,503,905
https://en.wikipedia.org/wiki/Kalai%20Prize
The Prize in Game Theory and Computer Science in Honour of Ehud Kalai is an award given by the Game Theory Society. The prize is awarded for outstanding articles at the interface of game theory and computer science. Following the eligibility rules of the Gödel Prize, preference is given to authors who are 45 years old or younger at the time of the award. It was established in 2008 by a donation from Yoav Shoham in honor of Ehud Kalai's contributions in bridging these two fields. Recipients See also List of economics awards List of prizes named after people John Bates Clark Medal References Economics awards Awards established in 2008 Computer science awards
Kalai Prize
Technology
132
2,363,262
https://en.wikipedia.org/wiki/IBM%20805%20Test%20Scoring%20Machine
The IBM 805 Test Scoring Machine was an educational machine sold by IBM beginning in 1937. The device scored answer sheets marked with special "mark sense" pencils. The machine was developed from a prototype created by Reynold Johnson, a school teacher who later became an IBM engineer. That machine and its descendants have been in use ever since. See also Benjamin D. Wood References "Bulletin of Information on the International Test Scoring Machine." (New York: Cooperative Test Service, 1936) IBM Archives web page on the 805 Test Scoring Machine 805 IBM educational computers
IBM 805 Test Scoring Machine
Technology
116
29,827,545
https://en.wikipedia.org/wiki/Tuck-in%20complex
In organometallic chemistry, a tuck-in complex usually refers to derivatives of Cp* ligands wherein a methyl group is deprotonated and the resulting methylene attaches to the metal. The C5–CH2–M angle is acute. The term "tucked in" was coined to describe derivatives of organotungsten complexes. Although most "tucked-in" complexes are derived from Cp* ligands, other pi-bonded rings undergo similar reactions. Scope and bonding The "tuck-in" process is related to ortho-metalation in the sense that it is an intramolecular cyclometalation. Tuck-in complexes derived from Cp* ligands are derivatives of tetramethylfulvene, sometimes abbreviated Me4Fv. A variety of complexes are known for Me4Fv and related ligands. In these complexes, the Fv can serve as a 4-electron or as a 6-electron ligand. Examples The original example proceeded via sequential loss of two equivalents of H2 from decamethyltungstocene dihydride, Cp*2WH2. The first dehydrogenation step affords a simple tuck-in complex: (C5Me5)2WH2 → (C5Me5)(η6-C5Me4CH2)WH + H2 The second dehydrogenation step affords a double tuck-in complex: (C5Me5)(η6-C5Me4CH2)WH → (C5Me5)(η7-C5Me3(CH2)2)W + H2 In organouranium chemistry, both tuck-in and tuck-over complexes are recognized, for example in the dihydrido diuranium complex [Cp*3(η7-C5Me3(CH2))U2H2]. In this complex the two methylene groups bind to different uranium centers. The tuck-over mode is binding of the Cp* methylene to a metal center elsewhere in the molecule rather than the one coordinated to that Cp* ligand. Reactions Tuck-in complexes retain nucleophilicity at the methylene carbon. They can be activated by Lewis acids to generate active catalysts for use in Ziegler–Natta catalysis. The Lewis acid attaches to the CH2 group, exposing a vacant site on the electrophilic Zr(IV) centre. References Organometallic chemistry
Tuck-in complex
Chemistry
521
22,782,409
https://en.wikipedia.org/wiki/Gary%20Chartrand
Gary Theodore Chartrand (born 1936) is an American-born mathematician who specializes in graph theory. He is known for his textbooks on introductory graph theory and for the concept of a highly irregular graph. Biography Gary Chartrand was born in 1936. He was raised in Sault Ste. Marie, Michigan and attended J. W. Sexton High School located in Lansing, Michigan. As an undergraduate student, he initially majored in chemical engineering, but switched to mathematics in his junior year, in which he also became a member of the honorary mathematics society Pi Mu Epsilon. He earned his B. S. from Michigan State University, where he majored in mathematics and minored in physical sciences and foreign languages. Michigan State University also awarded him a Master of Science and a PhD for his work in graph theory in 1964. Chartrand became the first doctoral student of Edward Nordhaus, and the first doctoral student at Michigan State University to research graph theory. His dissertation was Graphs and Their Associated Line-Graphs. Chartrand worked with Frank Harary at the University of Michigan, where he spent a year as a Research Associate, and the two have published numerous papers together (along with other authors). The topic of highly irregular graphs was introduced by Chartrand, Paul Erdős and Ortrud Oellermann. Other contributions that Chartrand has made involve dominating sets, distance in graphs, and graph coloring. During his career at Western Michigan University, he advised 22 doctoral students in their research on aspects of graph theory. Chartrand is currently a professor emeritus of mathematics at Western Michigan University. Books 1977: Graphs as Mathematical Models, Prindle, Weber & Schmidt, reprinted 1985 as Introductory Graph Theory . 1993: (with Ortrud R. Oellermann) Applied and Algorithmic Graph Theory, McGraw Hill . 2008: (with Ping Zhang) Chromatic Graph Theory, CRC Press . 2010: (with Linda Lesniak and Ping Zhang) Graphs & Digraphs, 5th edition, CRC Press . 2010: (with Ping Zhang) Discrete Mathematics, Waveland Press. 2012: (with Albert D. Polimeni & Ping Zhang) Mathematical Proofs: A Transition to Advanced Mathematics, 3rd edition, Pearson. 2012: (with Ping Zhang) A First Course in Graph Theory, Dover Publications. 2015: (with Arthur T. Benjamin and Ping Zhang) The Fascinating World of Graph Theory, Princeton University Press . 2019: (with Teresa W. Haynes, Michael A. Henning & Ping Zhang) From Domination to Coloring: Stephen Hedetniemi's Graph Theory and Beyond, SpringerBriefs in Mathematics. 2019: (with Cooroo Egan & Ping Zhang) How to Label a Graph, SpringerBriefs in Mathematics . 2021: (with Akbar Ali & Ping Zhang) Irregularity in Graphs, SpringerBriefs in Mathematics . References External links Chartrand's web page at Western Michigan University Living people 20th-century American mathematicians 21st-century American mathematicians Graph theorists Michigan State University alumni Western Michigan University faculty 1936 births
Gary Chartrand
Mathematics
616
32,226,257
https://en.wikipedia.org/wiki/Hidden%20character%20stone
Hidden Character Stone is a stone located in a scenic area in the town of Zhangbu, Pingtang County, Qiannan Buyei and Miao Autonomous Prefecture, Guizhou. The stone features several glyph-like patterns on its surface that have been tentatively identified as Simplified Chinese characters or Traditional Chinese characters, the meaning of which has been variously interpreted as "Communist Party of China" (中国共产党), or alternatively "Communist Party of China perish" (中国共产党亡). Area The Hidden Character Stone is one of the main attractions - along with a jade water basin (玉水金盆) - located at the Qiannan Pingtang National Geological Park (黔南平塘地质公园). The park has an area of about 201.6 square kilometers. The stone is situated within a narrow gap between two cliffs, just wide enough for two people to stand adjacent. History In June 2002, the Duyun international photography exposition (都匀国际摄影博览会) recommended an area in Zhangbu as a photo spot. The stone was discovered during the cleanup process following the exposition's conclusion. The site has been isolated and effectively untouched by humans for centuries. According to the Chinese official Xinhua News Agency the person who initially discovered the site was local party secretary Wang Guo-fu (王国富), who noticed the characters written on the stone as he was stacking poles in the cleft in the rock. Between December 5–8, 2003, a Chinese scientific inspection group of about 15 scientists is reported to have investigated the stone. Some of the more notable members include Li Ting-dong (李廷栋) from the Chinese Academy of Sciences, Liu Bao-jun (刘宝君) from the Chinese Academy of Sciences and Li Feng-lin (李凤麟) from the China University of Geosciences. The stone was analyzed and determined to be about 270 million years old, with a likely provenance in the Permian period. Liu Bao-jun expressed support for additional research into the stone and its history, and was interested in the natural formation of the "characters" thereon. Each character on the stone measures about one square shaku in size, which is equivalent to about 1 square foot (0.09 square meters). Description Five-character version The five-character interpretation posits that the characters on the stone can be translated as "Communist Party of China" (中国共产党). This is the rendition publicly accepted in the People's Republic of China. This reading has also been referred to as 救星石, literally "savior stone". When recounting the narrative of the stone's discovery, Chinese sources usually adhere to the five-character interpretation. Six-character version The six-character version suggests the characters on the stone said "Communist Party of China perish" (中国共产党亡). Often when pictures are shown with the stone having six characters, the description still refers to it by the five-character version. Traditional and Simplified Chinese The characters on the stone are a mix of Traditional Chinese characters and Simplified Chinese characters. The first and third characters (中, 共) are identical in both versions. The second character, "country" (國), and the fourth character, "produce" (產), are in the traditional form. The fifth character, "party" (党), is in the simplified form. The sixth character, "perish" (亡), is the same in both forms. Some have analyzed all the odd characters as Simplified, while the even characters are Traditional. On the stone: Traditional Chinese: 中國共產黨亡; Simplified Chinese: 中国共产党亡. Analysis The origin of the characters remains a subject of dispute.
There were some early speculations that the characters were put there by the People's Liberation Army, but according to the path of the Long March, they never went to Pingtang. The characters also read left to right, which was not practiced at the time. The inclusion of a Simplified character before the CPC did any simplifications also ruled them out. There were also some skeptics who suspected the village was creating a fraud to build their tourism industry at the time. Others think the Hidden Character Stone was made in the Cultural Revolution. Also, others, especially Christians, consider the message to be of divine origin, this idea being ridiculed by pro-Marxist groups. Cultural reference The Hidden Character Stone has been featured as a topic on a number of science-oriented television programmes such as CCTV's "Approaching Science" (走近科学) and the Hong Kong ATV series "China's Mystery Files" (中國神祕檔案). In both instances the programs referred to the five-character reading of the stone. References 2002 in China 2003 in China Archaeological artifacts of China Stones 2002 archaeological discoveries
Hidden character stone
Physics
965
25,948
https://en.wikipedia.org/wiki/Refraction
In physics, refraction is the redirection of a wave as it passes from one medium to another. The redirection can be caused by the wave's change in speed or by a change in the medium. Refraction of light is the most commonly observed phenomenon, but other waves such as sound waves and water waves also experience refraction. How much a wave is refracted is determined by the change in wave speed and the initial direction of wave propagation relative to the direction of change in speed. For light, refraction follows Snell's law, which states that, for a given pair of media, the ratio of the sines of the angle of incidence θ₁ and the angle of refraction θ₂ is equal to the ratio of phase velocities in the two media, or equivalently, to the inverse ratio of the refractive indices of the two media: sin θ₁ / sin θ₂ = v₁ / v₂ = n₂ / n₁. Optical prisms and lenses use refraction to redirect light, as does the human eye. The refractive index of materials varies with the wavelength of light, and thus the angle of the refraction also varies correspondingly. This is called dispersion and causes prisms and rainbows to divide white light into its constituent spectral colors. General explanation A correct explanation of refraction involves two separate parts, both a result of the wave nature of light. Light slows as it travels through a medium other than vacuum (such as air, glass or water). This is not because of scattering or absorption. Rather it is because, as an electromagnetic oscillation, light itself causes other electrically charged particles, such as electrons, to oscillate. The oscillating electrons emit their own electromagnetic waves which interact with the original light. The resulting "combined" wave has wave packets that pass an observer at a slower rate. The light has effectively been slowed. When light returns to a vacuum and there are no electrons nearby, this slowing effect ends and its speed returns to c. When light enters a slower medium at an angle, one side of the wavefront is slowed before the other. This asymmetrical slowing of the light causes it to change the angle of its travel. Once light is within the new medium with constant properties, it travels in a straight line again. Slowing of light As described above, the speed of light is slower in a medium other than vacuum. This slowing applies to any medium such as air, water, or glass, and is responsible for phenomena such as refraction. When light leaves the medium and returns to a vacuum, and ignoring any effects of gravity, its speed returns to the usual speed of light in vacuum, c. A correct explanation rests on light's nature as an electromagnetic wave. Because light is an oscillating electrical/magnetic wave, light traveling in a medium causes the electrically charged electrons of the material to also oscillate. (The material's protons also oscillate, but as they are around 2000 times more massive, their movement, and therefore their effect, is far smaller.) A moving electrical charge emits electromagnetic waves of its own. The electromagnetic waves emitted by the oscillating electrons interact with the electromagnetic waves that make up the original light, similar to water waves on a pond, a process known as constructive interference. When two waves interfere in this way, the resulting "combined" wave may have wave packets that pass an observer at a slower rate. The light has effectively been slowed. When the light leaves the material, this interaction with electrons no longer happens, and therefore the wave packet rate (and therefore its speed) returns to normal.
Bending of light Consider a wave going from one material to another where its speed is slower, as in the figure. If it reaches the interface between the materials at an angle, one side of the wave will reach the second material first, and therefore slow down earlier. With one side of the wave going slower, the whole wave will pivot towards that side. This is why a wave will bend away from the surface or toward the normal when going into a slower material. In the opposite case of a wave reaching a material where the speed is higher, one side of the wave will speed up and the wave will pivot away from that side. Another way of understanding the same thing is to consider the change in wavelength at the interface. When the wave goes from one material to another where the wave has a different speed v, the frequency f of the wave will stay the same, but the distance between wavefronts, or wavelength λ = v/f, will change. If the speed is decreased, such as in the figure to the right, the wavelength will also decrease. With an angle between the wave fronts and the interface, and a change in the distance between the wave fronts, the angle must change over the interface to keep the wave fronts intact. From these considerations the relationship between the angle of incidence θ₁, the angle of transmission θ₂ and the wave speeds v₁ and v₂ in the two materials can be derived. This is the law of refraction or Snell's law and can be written as sin θ₁ / sin θ₂ = v₁ / v₂. The phenomenon of refraction can in a more fundamental way be derived from the 2- or 3-dimensional wave equation. The boundary condition at the interface will then require the tangential component of the wave vector to be identical on the two sides of the interface. Since the magnitude of the wave vector depends on the wave speed, this requires a change in direction of the wave vector. The relevant wave speed in the discussion above is the phase velocity of the wave. This is typically close to the group velocity, which can be seen as the truer speed of a wave, but when they differ it is important to use the phase velocity in all calculations relating to refraction. A wave traveling perpendicular to a boundary, i.e. having its wavefronts parallel to the boundary, will not change direction even if the speed of the wave changes. Dispersion of light Refraction is also responsible for rainbows and for the splitting of white light into a rainbow-spectrum as it passes through a glass prism. Glass and water have higher refractive indexes than air. When a beam of white light passes from air into a material having an index of refraction that varies with frequency (and wavelength), a phenomenon known as dispersion occurs, in which different coloured components of the white light are refracted at different angles, i.e., they bend by different amounts at the interface, so that they become separated. The different colors correspond to different frequencies and different wavelengths. Law For light, the refractive index n of a material is more often used than the wave phase speed v in the material. They are directly related through the speed of light in vacuum, c, as n = c / v. In optics, therefore, the law of refraction is typically written as n₁ sin θ₁ = n₂ sin θ₂. On water Refraction occurs when light goes through a water surface since water has a refractive index of 1.33 and air has a refractive index of about 1. Looking at a straight object, such as a pencil in the figure here, which is placed at a slant, partially in the water, the object appears to bend at the water's surface. This is due to the bending of light rays as they move from the water to the air.
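The law in the form n₁ sin θ₁ = n₂ sin θ₂ is straightforward to evaluate numerically. The sketch below is an added illustration, using the approximate indices for air and water quoted in this article; it is not part of the original text.

```python
import math

def refraction_angle(theta1_deg, n1, n2):
    # Refraction angle in degrees from Snell's law n1*sin(t1) = n2*sin(t2),
    # or None when there is no transmitted ray (total internal reflection).
    s = n1 * math.sin(math.radians(theta1_deg)) / n2
    if abs(s) > 1.0:
        return None
    return math.degrees(math.asin(s))

# Air (n ~ 1.0) into water (n ~ 1.33): a ray at 30 degrees from the normal
# bends toward the normal, to roughly 22 degrees.
print(refraction_angle(30.0, 1.0, 1.33))

# Water into air beyond the critical angle (roughly 49 degrees for water
# to air): no refracted ray at all.
print(refraction_angle(60.0, 1.33, 1.0))
```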
Once the rays reach the eye, the eye traces them back as straight lines (lines of sight). The lines of sight (shown as dashed lines) intersect at a higher position than where the actual rays originated. This causes the pencil to appear higher and the water to appear shallower than it really is. The depth that the water appears to be when viewed from above is known as the apparent depth. This is an important consideration for spearfishing from the surface because it will make the target fish appear to be in a different place, and the fisher must aim lower to catch the fish. Conversely, an object above the water has a higher apparent height when viewed from below the water. The opposite correction must be made by an archer fish. For small angles of incidence (measured from the normal, when sin θ is approximately the same as tan θ), the ratio of apparent to real depth is the ratio of the refractive index of air to that of water. But, as the angle of incidence approaches 90°, the apparent depth approaches zero, albeit reflection increases, which limits observation at high angles of incidence. Conversely, the apparent height approaches infinity as the angle of incidence (from below) increases, but even earlier, as the angle of total internal reflection is approached, albeit the image also fades from view as this limit is approached. Atmospheric The refractive index of air depends on the air density and thus varies with air temperature and pressure. Since the pressure is lower at higher altitudes, the refractive index is also lower, causing light rays to refract towards the earth's surface when traveling long distances through the atmosphere. This shifts the apparent positions of stars slightly when they are close to the horizon and makes the sun visible before it geometrically rises above the horizon during a sunrise. Temperature variations in the air can also cause refraction of light. This can be seen as a heat haze when hot and cold air is mixed e.g. over a fire, in engine exhaust, or when opening a window on a cold day. This makes objects viewed through the mixed air appear to shimmer or move around randomly as the hot and cold air moves. This effect is also visible from normal variations in air temperature during a sunny day when using high magnification telephoto lenses and often limits the image quality in these cases. In a similar way, atmospheric turbulence gives rapidly varying distortions in the images of astronomical telescopes, limiting the resolution of terrestrial telescopes not using adaptive optics or other techniques for overcoming these atmospheric distortions. Air temperature variations close to the surface can give rise to other optical phenomena, such as mirages and Fata Morgana. Most commonly, air heated by a hot road on a sunny day deflects light approaching at a shallow angle towards a viewer. This makes the road appear reflecting, giving an illusion of water covering the road. Clinical significance In medicine, particularly optometry, ophthalmology and orthoptics, refraction (also known as refractometry) is a clinical test in which a phoropter may be used by the appropriate eye care professional to determine the eye's refractive error and the best corrective lenses to be prescribed. A series of test lenses in graded optical powers or focal lengths are presented to determine which provides the sharpest, clearest vision. Refractive surgery is a medical procedure to treat common vision disorders. Mechanical waves Water Water waves travel slower in shallower water.
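The small-angle apparent-depth relation described above amounts to a one-line calculation. This sketch is an added illustration using the same approximate indices (about 1 for air, 1.33 for water); it applies only to near-vertical viewing, since, as noted above, the simple ratio breaks down at large angles of incidence.

```python
def apparent_depth(real_depth_m, n_air=1.0, n_water=1.33):
    # Viewing nearly straight down: apparent depth ~ real depth * (n_air / n_water).
    return real_depth_m * n_air / n_water

# A fish 1.0 m below the surface appears to be only about 0.75 m deep,
# which is why a spear fisher must aim below the apparent position.
print(apparent_depth(1.0))
```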
This can be used to demonstrate refraction in ripple tanks and also explains why waves on a shoreline tend to strike the shore close to a perpendicular angle. As the waves travel from deep water into shallower water near the shore, they are refracted from their original direction of travel to an angle more normal to the shoreline. Sound In underwater acoustics, refraction is the bending or curving of a sound ray that results when the ray passes through a sound speed gradient from a region of one sound speed to a region of a different speed. The amount of ray bending is dependent on the amount of difference between sound speeds, that is, the variation in temperature, salinity, and pressure of the water. Similar acoustics effects are also found in the Earth's atmosphere. The phenomenon of refraction of sound in the atmosphere has been known for centuries. Beginning in the early 1970s, widespread analysis of this effect came into vogue through the designing of urban highways and noise barriers to address the meteorological effects of bending of sound rays in the lower atmosphere. Gallery See also Birefringence (double refraction) Geometrical optics Huygens–Fresnel principle List of indices of refraction Negative refraction Reflection Schlieren photography Seismic refraction Super refraction References External links Reflections and Refractions in Ray Tracing, a simple but thorough discussion of the mathematics behind refraction and reflection. Flash refraction simulation- includes source, Explains refraction and Snell's Law. Physical phenomena Geometrical optics Physical optics
Refraction
Physics
2,442
47,843,991
https://en.wikipedia.org/wiki/Inonotus%20rigidus
Inonotus rigidus is a species of fungus in the family Hymenochaetaceae. It is distinguished by its resupinate and rigid basidiocarps, its yellow pore surface, its ellipsoid, yellowish brown, thick-walled basidiospores as seen under the microscope, and its lack of both setal hyphae and hymenial setae. References Further reading Yu, Hai-You, Chang-Lin Zhao, and Yu-Cheng Dai. "Inonotus niveomarginatus and I. tenuissimus spp. nov. (Hymenochaetales), resupinate species from tropical China." Mycotaxon 124.1 (2013): 61–68. External links Fungal tree pathogens and diseases rigidus Fungi described in 2011 Fungus species
Inonotus rigidus
Biology
172
13,961,210
https://en.wikipedia.org/wiki/Stack%20resource%20policy
The Stack Resource Policy (SRP) is a resource allocation policy used in real-time computing for accessing shared resources when using earliest deadline first scheduling. It was defined by T. P. Baker. SRP is not the same as the Priority ceiling protocol, which is for fixed priority (FP) tasks. Function Each task i is assigned a preemption level π_i based upon the following formula, where D_i denotes the relative deadline of task i and π_i denotes the preemption level of task i: π_i = 1/D_i, so that tasks with shorter relative deadlines receive higher preemption levels. Each resource R has a current ceiling C_R that represents the maximum of the preemption levels of the tasks that may be blocked on R, when there are ν_R units of R available and μ_R(i) is the maximum number of units of R that task i may require at any one time. C_R is assigned as follows: C_R(ν_R) = max({0} ∪ {π_i : ν_R < μ_R(i)}). There is also a system ceiling Π, which is the maximum of all current ceilings of the resources. Any task i that wishes to preempt the system must first satisfy the following constraint: π_i > Π. This can be refined for operating system implementation (as in MarteOS) by removing the multi-unit resources and defining the stack resource policy as follows. All tasks are assigned a preemption level, in order to preserve the ordering of tasks in relation to each other when locking resources. The tasks with the lowest relative deadlines are assigned the highest preemption levels. Each shared resource has an associated ceiling level, which is the maximum preemption level of all the tasks that access this protected object. The system ceiling, at any instant in time, is the maximum active priority of all the tasks that are currently executing within the system. A task is only allowed to preempt the system when its absolute deadline is earlier than that of the currently executing task and its preemption level is higher than the current system ceiling. Relevancy The 2011 book Hard Real-Time Computing Systems: Predictable Scheduling Algorithms and Applications by Giorgio C. Buttazzo features a dedicated section reviewing SRP as introduced in Baker's 1991 work. References Real-time computing
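As a rough illustration of the refined, single-unit form of the policy described above, the following Python sketch encodes the preemption test (earlier absolute deadline under EDF and preemption level above the system ceiling). The 1/D preemption-level assignment, the task names and the helper functions are assumptions for illustration, not an implementation from Baker's paper or from MarteOS.

```python
from dataclasses import dataclass, field

@dataclass
class Task:
    name: str
    relative_deadline: float
    preemption_level: float = field(init=False)

    def __post_init__(self):
        # Shorter relative deadline -> higher preemption level (illustrative 1/D assignment).
        self.preemption_level = 1.0 / self.relative_deadline

def resource_ceiling(users):
    """Ceiling of a single-unit resource: highest preemption level among the tasks that use it."""
    return max(t.preemption_level for t in users)

def system_ceiling(locked_resource_ceilings):
    """System ceiling: maximum ceiling over all currently locked resources (0 if none are locked)."""
    return max(locked_resource_ceilings, default=0.0)

def may_preempt(candidate, candidate_abs_deadline, running_abs_deadline, locked_ceilings):
    """SRP test: earlier absolute deadline (EDF) and preemption level above the system ceiling."""
    return (candidate_abs_deadline < running_abs_deadline and
            candidate.preemption_level > system_ceiling(locked_ceilings))

# Illustrative usage: t2 currently holds a resource that t1 also uses.
t1 = Task("t1", relative_deadline=10.0)
t2 = Task("t2", relative_deadline=50.0)
r_ceiling = resource_ceiling([t1, t2])

print(may_preempt(t1, 12.0, 55.0, [r_ceiling]))  # False: blocked by the resource ceiling
print(may_preempt(t1, 12.0, 55.0, []))           # True once no resource is locked
```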
Stack resource policy
Technology
388
47,423,949
https://en.wikipedia.org/wiki/Suillus%20abietinus
Suillus abietinus is a species of edible mushroom in the genus Suillus. Found in Greece, it was described as new to science in 1970 by Maria Pantidou and Roy Watling from collections made in Vytina, Arkadia. References External links abietinus Fungi of Europe Fungi described in 1970 Fungus species
Suillus abietinus
Biology
69
3,566,788
https://en.wikipedia.org/wiki/Yips
In sports, the yips are a sudden and unexplained loss of ability to execute certain skills in experienced athletes. Symptoms of the yips include a loss of fine motor skills and psychological issues that impact the muscle memory and decision-making of athletes, leaving them unable to perform basic skills of their sport. The exact cause of the yips is still not fully understood. A yips episode may last a short time before the athlete regains their abilities, or it can require longer-term adjustments to technique before recovery occurs. The worst cases are those where the athlete does not recover at all, forcing the player to abandon the sport at the highest level. Causes include but may not be limited to performance anxiety and neurological conditions. A plethora of treatment options has been tested to ameliorate the yips, including clinical sport psychology therapy, motor imagery, pre-performance routines, medication, botulinum toxin, acupuncture, and emotional freedom techniques. However, their possible effectiveness is primarily based on personal experience rather than well-founded research evidence. Early intervention with a thorough treatment plan is imperative for recovery of athletes with the yips. Brain activity and the yips A 2021 study that used EEG recordings to measure brain activity found that athletes with the yips showed increased activity in the alpha band when initiating movements, especially when increasing force output to match a target. In this particular study, the treatment group showed increased brain activity in the alpha and beta bands after the movement compared to the control group, suggesting that heightened brain activity might indicate problems with inhibitory systems or increased focus on the body part involved in the task. Further research must be conducted with a larger sample size, more diverse populations, and more than two EEG electrodes in order to further establish the validity of this claim. In golf In golf, the yips is a movement disorder known to interfere with putting. The term yips is said to have been popularized by Tommy Armour, a golf champion and later golf teacher, to explain the difficulties that led him to abandon tournament play. In describing the yips, golfers have used terms such as twitches, staggers, jitters and jerks. The yips affects between a quarter and a half of all mature golfers. Researchers at the Mayo Clinic found that 33% to 48% of all serious golfers have experienced the yips. Golfers who have played for more than 25 years appear most prone to the condition. Although the exact cause of the yips has yet to be determined, one possibility is biochemical changes in the brain that accompany aging. Excessive use of the involved muscles and intense demands of coordination and concentration may exacerbate the problem. Giving up golf for a month sometimes helps. Focal dystonia has been mentioned as another possibility for the cause of the yips. Professional golfers seriously afflicted by the yips include Ernie Els, David Duval, Pádraig Harrington, Bernhard Langer, Ben Hogan, Harry Vardon, Sam Snead, Ian Baker-Finch and Keegan Bradley, who missed a four-foot putt in the final round of the 2013 HP Byron Nelson Championship due to the condition (although he may also have been suffering from strabismus). At the 2015 Waste Management Open, golf analyst Nick Faldo suggested that Tiger Woods could be suffering from the yips. Jay Yarow of Business Insider commented after the 2014 Open that Woods had both the putting yips and the driver yips.
Interventions seeking to treat the affliction have been few and far between. Some golfers have tried changing their putter or their grip or even switching hands. However, these strategies have provided only temporary relief. They are also known as "freezing", "the jerks", "the staggers", "the waggles", and "whisky fingers". In tennis In tennis, the yips most often affects the (second) serve, leading to multiple double faults. Several top players have been affected by the yips in recent years, most notably Alexander Zverev in 2019, and Aryna Sabalenka in the beginning of 2022. For example, Zverev served a record of 20 double faults in his 2019 Cincinnati Masters first round loss against Miomir Kecmanović, while Sabalenka served up 39 double faults in her two first round losses in the 2022 Adelaide 1 and Adelaide 2 tournaments. From 2005-2008, Guillermo Coria, a former world no.3, suffered from service yips. In cricket In cricket, the yips applies mostly to bowlers. The affliction seems to involve bowlers having trouble releasing the ball at the end of their action. An example of this was Keith Medlycott, who having reached the England squad was forced to abandon the sport. Another player, Gavin Hamilton, having played a Test as an all-rounder, largely abandoned his right-arm medium pace bowling, following the yips. He did not make another Test appearance, but has enjoyed a One Day International career for Scotland, predominantly as a specialist batsman. Collins Obuya was one of the stars of Kenya's 2003 World Cup—he gained a contract with Warwickshire on the back of it—but after injury he encountered difficulty with his bowling action, later going through a phase of appearing as a specialist batsman in international matches. Other players to have experienced similar problems include Ian Folley of Lancashire, and the West Indies test cricketer Roger Harper. England cricket team sports psychologist Mark Bawden suffered from the yips himself as a teenager. He completed a PhD on the topic and has published a paper on the yips in the Journal of Sports Sciences. In baseball In baseball, the yips usually manifests itself as a sudden inability to throw the baseball accurately. They are more apparent in pitchers and catchers, players who touch the ball the most in the game, though position players have also been subject to the malady. Pittsburgh Pirates pitcher Steve Blass is an example; from 1964 to 1972, he was a dominant pitcher and All-Star; however, beginning in 1973, he suddenly lost his command, issuing 84 walks in innings pitched. He retired in 1974 due to continued loss of his pitching ability. "Steve Blass disease" has been attributed to talented players—such as New York Yankees second baseman Chuck Knoblauch or Los Angeles Dodgers second baseman Steve Sax—who suddenly lost their ability to throw the ball accurately to the first baseman. Sax's problems began in his 3rd season in the majors, but he continued to play in the league and seemingly recovered by 1989, going on to finish his career in 1994. New York Mets catcher Mackey Sasser could not throw the ball back to the pitcher without tapping his mitt several times—San Francisco Giants outfielder Brett Butler once stole third base during a Sasser yip. Sasser's problem became worse after a 1990 collision at home plate with Jim Presley of the Atlanta Braves, leading to a decrease in Sasser's playing time, and his release from the Seattle Mariners in 1994. 
Mark Wohlers of the Atlanta Braves was called "the 1990s poster child for Steve Blass Syndrome." He recovered enough to return to pitching, but not to previous levels. Rick Ankiel lost his control as a pitcher during the 2000 National League Championship Series. After several years of deteriorating performance coupled with injuries, he subsequently returned in 2007 as a productive outfielder. Jon Lester is also said to have suffered the yips on his pickoff attempts to first base. He did not throw to first at all in 2014, and struggled to make accurate throws early in 2015. For the rest of his career, when required to field a hit ball, Lester would run most of the way to 1st base and underhand throw the ball and on longer throws would spike it into the turf to reduce the chances of throwing it past the bag. His team also attempted to compensate for the problem with their catchers throwing 'back picks' to first base as well as the regulation throws to second. Pittsburgh Pirates minor league pitching prospect Hayden Hurst was so badly affected by the yips that he left baseball and went to the University of South Carolina to play football instead. On April 26, 2018, he was drafted in the first round of the 2018 NFL draft, 25th overall, by the Baltimore Ravens as a tight end. ESPN featured a story about Luke Hagerty's comeback from the yips in 2019. He never played after being drafted #32 overall by the Chicago Cubs in the 2002 draft. In gymnastics In artistic gymnastics, a version of the yips affecting twisting form is known as the "twisties". They refer to a sudden loss of a gymnast's ability to maintain body control during aerial maneuvers. Some gymnasts reference a feeling of disorientation or unawareness of where the ground is. This loss of air awareness increases the chance of a serious or critical injury occurring if the gymnast forgets in the moment how to land the maneuver safely. During the 2020 Olympic qualifications, American gymnast Simone Biles flew out of bounds twice on the floor and failed to stick her landing on the vault. Despite this, she still qualified for the all-around final in first place. During the Olympic events, Biles was unable to complete her skills and popularized the term "twisties," causing her to withdraw from competition after the women's team all-around final. She attributed her loss of air awareness to a mental health condition. Biles returned to perform a downscaled routine in the balance beam final, winning the bronze medal. In 2024 she responded that critics of her 2020 withdrawal had become "silent" after her return and win of three gold medals in the 2024 Summer Olympics. American gymnasts Laurie Hernandez and Aleah Finnegan both stated that they have experienced a loss of air awareness during their career and spoke out in support of Biles during the games in 2021. Finnegan stated "I cannot imagine the fear of having it happen to you during competition. You have absolutely no control over your body and what it does." In trampoline gymnastics, the condition is typically referred to as "lost move syndrome". Olympic trampoline gymnast Bryony Page has discussed her personal experience with the condition while preparing to compete in the 2016 Olympics. In other areas The yips also affects players in other sports. Examples include Markelle Fultz and Chuck Hayes's respective free throw shots in basketball. In darts, the yips are known as dartitis, with five-time world champion Eric Bristow an example of a sufferer. 
In the National Football League (NFL), a normally reliable placekicker who starts struggling is also said to have the yips. Seven-time Pro Bowler Justin Tucker was described by fans and sportswriters as suffering from yips during his 2024 season, after a seemingly inexplicable series of misses leading to a career-low 73.9% field goal rate, despite finishing 12 seasons as among the most accurate kickers in the league. On 12/01/2024, Adam Breneman stated that Ohio State has the yips when they play Michigan. Stephen Hendry, seven times snooker World Champion, said after his loss to Mark Williams in the 2010 UK Championship that he had been suffering from the yips for ten years, and that the condition had affected his ability to cue through the ball, causing him great difficulty in regaining his old form. The yips also occur in areas outside of sports, such as with musicians and writers. See also Analysis paralysis Conversion disorder Choke (sports) Dartitis Target panic "The Centipede's Dilemma" References Cricket terminology Golf terminology Tennis terminology Motor skills Ailments of unknown cause
Yips
Biology
2,392
507,208
https://en.wikipedia.org/wiki/Double%20factorial
In mathematics, the double factorial of a number n, denoted by n!!, is the product of all the positive integers up to n that have the same parity (odd or even) as n. That is, n!! = n(n − 2)(n − 4)⋯. Restated, this says that for even n the double factorial is n!! = n(n − 2)(n − 4)⋯4 · 2, while for odd n it is n!! = n(n − 2)(n − 4)⋯3 · 1. For example, 9!! = 9 × 7 × 5 × 3 × 1 = 945. The zero double factorial 0!! = 1 as an empty product. The sequence of double factorials for even n = 0, 2, 4, 6, 8, … starts as 1, 2, 8, 48, 384, 3840, … The sequence of double factorials for odd n = 1, 3, 5, 7, 9, … starts as 1, 3, 15, 105, 945, 10395, … The term odd factorial is sometimes used for the double factorial of an odd number. The term semifactorial is also used by Knuth as a synonym of double factorial. History and usage The notation was discussed in a 1902 paper by the physicist Arthur Schuster, and the double factorial is stated to have been originally introduced in order to simplify the expression of certain trigonometric integrals that arise in the derivation of the Wallis product. Double factorials also arise in expressing the volume of a hyperball and the surface area of a hypersphere, and they have many applications in enumerative combinatorics. They occur in Student's t-distribution (1908), though Gosset did not use the double exclamation point notation. Relation to the factorial Because the double factorial only involves about half the factors of the ordinary factorial, its value is not substantially larger than the square root of the factorial n!, and it is much smaller than the iterated factorial (n!)!. The factorial of a positive n may be written as the product of two double factorials, n! = n!! (n − 1)!!, and therefore n!! = n!/(n − 1)!! = (n + 1)!/(n + 1)!!, where the denominator cancels the unwanted factors in the numerator. (The last form also applies when n = 0.) For an even non-negative integer n = 2k with k ≥ 0, the double factorial may be expressed as n!! = 2^k k!. For odd n = 2k − 1 with k ≥ 1, combining the two previous formulas yields n!! = (2k)!/(2^k k!) = (2k − 1)!/(2^(k−1) (k − 1)!). For an odd positive integer n = 2k − 1 with k ≥ 1, the double factorial may be expressed in terms of k-permutations of 2k or a falling factorial as n!! = (2k)(2k − 1)⋯(k + 1)/2^k. Applications in enumerative combinatorics Double factorials are motivated by the fact that they occur frequently in enumerative combinatorics and other settings. For instance, n!! for odd values of n counts Perfect matchings of the complete graph K_(n+1) for odd n. In such a graph, any single vertex v has n possible choices of vertex that it can be matched to, and once this choice is made the remaining problem is one of selecting a perfect matching in a complete graph with two fewer vertices. For instance, a complete graph with four vertices a, b, c, and d has three perfect matchings: ab and cd, ac and bd, and ad and bc. Perfect matchings may be described in several other equivalent ways, including involutions without fixed points on a set of n + 1 items (permutations in which each cycle is a pair) or chord diagrams (sets of chords of a set of n + 1 points evenly spaced on a circle such that each point is the endpoint of exactly one chord, also called Brauer diagrams). The numbers of matchings in complete graphs, without constraining the matchings to be perfect, are instead given by the telephone numbers, which may be expressed as a summation involving double factorials. Stirling permutations, permutations of the multiset of numbers 1, 1, 2, 2, …, k, k in which each pair of equal numbers is separated only by larger numbers, where k = (n + 1)/2. The two copies of k must be adjacent; removing them from the permutation leaves a permutation in which the maximum element is k − 1, with n positions into which the adjacent pair of values may be placed. From this recursive construction, a proof that the Stirling permutations are counted by the double factorials follows by induction.
Alternatively, instead of the restriction that values between a pair may be larger than it, one may also consider the permutations of this multiset in which the first copies of each pair appear in sorted order; such a permutation defines a matching on the positions of the permutation, so again the number of permutations may be counted by the double permutations. Heap-ordered trees, trees with nodes labeled , such that the root of the tree has label 0, each other node has a larger label than its parent, and such that the children of each node have a fixed ordering. An Euler tour of the tree (with doubled edges) gives a Stirling permutation, and every Stirling permutation represents a tree in this way. Unrooted binary trees with labeled leaves. Each such tree may be formed from a tree with one fewer leaf, by subdividing one of the tree edges and making the new vertex be the parent of a new leaf. Rooted binary trees with labeled leaves. This case is similar to the unrooted case, but the number of edges that can be subdivided is even, and in addition to subdividing an edge it is possible to add a node to a tree with one fewer leaf by adding a new root whose two children are the smaller tree and the new leaf. and list several additional objects with the same counting sequence, including "trapezoidal words" (numerals in a mixed radix system with increasing odd radixes), height-labeled Dyck paths, height-labeled ordered trees, "overhang paths", and certain vectors describing the lowest-numbered leaf descendant of each node in a rooted binary tree. For bijective proofs that some of these objects are equinumerous, see and . The even double factorials give the numbers of elements of the hyperoctahedral groups (signed permutations or symmetries of a hypercube) Asymptotics Stirling's approximation for the factorial can be used to derive an asymptotic equivalent for the double factorial. In particular, since one has as tends to infinity that Extensions Negative arguments The ordinary factorial, when extended to the gamma function, has a pole at each negative integer, preventing the factorial from being defined at these numbers. However, the double factorial of odd numbers may be extended to any negative odd integer argument by inverting its recurrence relation to give Using this inverted recurrence, (−1)!! = 1, (−3)!! = −1, and (−5)!! = ; negative odd numbers with greater magnitude have fractional double factorials. In particular, when is an odd number, this gives Complex arguments Disregarding the above definition of for even values of , the double factorial for odd integers can be extended to most real and complex numbers by noting that when is a positive odd integer then where is the gamma function. The final expression is defined for all complex numbers except the negative even integers and satisfies everywhere it is defined. As with the gamma function that extends the ordinary factorial function, this double factorial function is logarithmically convex in the sense of the Bohr–Mollerup theorem. Asymptotically, The generalized formula does not agree with the previous product formula for for non-negative even integer values of . Instead, this generalized formula implies the following alternative: with the value for 0!! in this case being . 
Using this generalized formula as the definition, the volume of an -dimensional hypersphere of radius can be expressed as Additional identities For integer values of , Using instead the extension of the double factorial of odd numbers to complex numbers, the formula is Double factorials can also be used to evaluate integrals of more complicated trigonometric polynomials. Double factorials of odd numbers are related to the gamma function by the identity: Some additional identities involving double factorials of odd numbers are: An approximation for the ratio of the double factorial of two consecutive integers is This approximation gets more accurate as increases, which can be seen as a result of the Wallis Integral. Generalizations Definitions In the same way that the double factorial generalizes the notion of the single factorial, the following definition of the integer-valued multiple factorial functions (multifactorials), or -factorial functions, extends the notion of the double factorial function for positive integers : Alternative extension of the multifactorial Alternatively, the multifactorial can be extended to most real and complex numbers by noting that when is one more than a positive multiple of the positive integer then This last expression is defined much more broadly than the original. In the same way that is not defined for negative integers, and is not defined for negative even integers, is not defined for negative multiples of . However, it is defined and satisfies for all other complex numbers . This definition is consistent with the earlier definition only for those integers satisfying . In addition to extending to most complex numbers , this definition has the feature of working for all positive real values of . Furthermore, when , this definition is mathematically equivalent to the function, described above. Also, when , this definition is mathematically equivalent to the alternative extension of the double factorial. Generalized Stirling numbers expanding the multifactorial functions A class of generalized Stirling numbers of the first kind is defined for by the following triangular recurrence relation: These generalized -factorial coefficients then generate the distinct symbolic polynomial products defining the multiple factorial, or -factorial functions, , as The distinct polynomial expansions in the previous equations actually define the -factorial products for multiple distinct cases of the least residues for . The generalized -factorial polynomials, where , which generalize the Stirling convolution polynomials from the single factorial case to the multifactorial cases, are defined by for . These polynomials have a particularly nice closed-form ordinary generating function given by Other combinatorial properties and expansions of these generalized -factorial triangles and polynomial sequences are considered in . Exact finite sums involving multiple factorial functions Suppose that and are integer-valued. Then we can expand the next single finite sums involving the multifactorial, or -factorial functions, , in terms of the Pochhammer symbol and the generalized, rational-valued binomial coefficients as and moreover, we similarly have double sum expansions of these functions given by The first two sums above are similar in form to a known non-round combinatorial identity for the double factorial function when given by . Similar identities can be obtained via context-free grammars. 
Additional finite sum expansions of congruences for the multifactorial functions, taken modulo any prescribed integer, are given in the cited literature. References Integer sequences Enumerative combinatorics Factorial and binomial topics fr:Analogues de la factorielle#Multifactorielles
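A small Python sketch (added for illustration; the function name and the checked ranges are arbitrary) that implements the definition of the double factorial given above and verifies the identities n! = n!! (n − 1)!! and (2k)!! = 2^k k! for small arguments:

```python
import math

def double_factorial(n: int) -> int:
    """n!! = n * (n-2) * (n-4) * ... down to 2 or 1; 0!! = 1 as an empty product."""
    result = 1
    while n > 1:
        result *= n
        n -= 2
    return result

assert double_factorial(9) == 9 * 7 * 5 * 3 * 1 == 945
assert [double_factorial(n) for n in range(0, 10, 2)] == [1, 2, 8, 48, 384]
assert [double_factorial(n) for n in range(1, 10, 2)] == [1, 3, 15, 105, 945]

# n! = n!! * (n-1)!!  and  (2k)!! = 2^k * k!
for n in range(1, 12):
    assert math.factorial(n) == double_factorial(n) * double_factorial(n - 1)
for k in range(0, 8):
    assert double_factorial(2 * k) == 2**k * math.factorial(k)
```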
Double factorial
Mathematics
2,149
11,935,111
https://en.wikipedia.org/wiki/STAT4
Signal transducer and activator of transcription 4 (STAT4) is a transcription factor belonging to the STAT protein family, composed of STAT1, STAT2, STAT3, STAT4, STAT5A, STAT5B and STAT6. STAT proteins are key activators of gene transcription which bind to DNA in response to cytokine gradients. STAT proteins are a common part of Janus kinase (JAK) signalling pathways activated by cytokines. STAT4 is required for the development of Th1 cells from naive CD4+ T cells and for IFN-γ production in response to IL-12. There are two known STAT4 transcripts, STAT4α and STAT4β, differing in the levels of interferon-gamma (IFN-γ) production they drive downstream. Structure The human as well as the murine STAT4 gene lies next to the STAT1 gene locus, suggesting that the genes arose by gene duplication. STAT proteins have six functional domains: 1. N-terminal interaction domain – crucial for dimerization of inactive STATs and nuclear translocation; 2. helical coiled-coil domain – association with regulatory factors; 3. central DNA-binding domain – binding to the enhancer region of IFN-γ activated sequence (GAS) family genes; 4. linker domain – assisting during the DNA binding process; 5. Src homology 2 (SH2) domain – critical for specific binding to the cytokine receptor after tyrosine phosphorylation; 6. C-terminal transactivation domain – triggering the transcriptional process. The length of the protein is 748 amino acids, and its molecular weight is 85,941 daltons. Expression Distribution of STAT4 is restricted to myeloid cells, the thymus and the testis. In resting human T cells it is expressed at very low levels, but its production is amplified by PHA stimulation. Cytokines activating STAT4 IL-12 The pro-inflammatory cytokine IL-12 is produced in heterodimeric form by B cells and antigen-presenting cells. Binding of IL-12 to IL-12R, which is composed of two different subunits (IL12Rβ1 and IL12Rβ2), leads to the interaction of IL12Rβ1 and IL12Rβ2 with JAK2 and TYK2, which is followed by phosphorylation of STAT4 at tyrosine 693. The pathway then induces IFN-γ production and Th1 differentiation. STAT4 is critical in promoting the antiviral response of natural killer (NK) cells by targeting the promoter regions of Runx1 and Runx3. IFNα and IFNβ Secreted by leukocytes and fibroblasts, respectively, IFNα and IFNβ together regulate antiviral immunity, cell proliferation and anti-tumor effects. In the viral-infection signalling pathway, IFNα or IFNβ binds to the IFN receptor (IFNAR), composed of IFNAR1 and IFNAR2, immediately followed by the phosphorylation of STAT1 and STAT4 and the activation of IFN target genes. During the initial phase of viral infection in NK cells, STAT1 activation is replaced by the activation of STAT4. IL-23 Monocytes, activated dendritic cells (DC) and macrophages stimulate the accumulation of IL-23 after exposure to molecules of Gram-positive/negative bacteria or viruses. The receptor for IL-23 contains IL12Rβ1 and IL23R subunits, which upon binding of IL-23 promote the phosphorylation of STAT4. The presence of IL12Rβ1 enables similar, although weaker, downstream activity compared to IL-12. During chronic inflammation, the IL-23/STAT4 signalling pathway is involved in the induction of differentiation and expansion of pro-inflammatory Th17 T helper cells. Additionally, other cytokines such as IL-2, IL-27, IL-35, IL-18 and IL-21 are known to activate STAT4.
Inhibitors of STAT4 signalling pathways In cells with progressively increasing expression of IL-12 and IL-6, SOCS production and activity suppress cytokine signalling and phosphorylation in JAK-STAT pathways in a negative feedback loop. Other suppressors of the pathways are: protein inhibitor of activated STAT (PIAS) (regulation of transcriptional activity in the nucleus, observed in the STAT4-DNA binding complex), protein tyrosine phosphatases (PTPs) (removal of phosphate groups from phosphorylated tyrosines in JAK/STAT pathway proteins), STAT-interacting LIM protein (SLIM) (a STAT ubiquitin E3 ligase blocking the phosphorylation of STAT4) and microRNAs (miRNAs) (degradation of STAT4 mRNA and its post-transcriptional regulation). Target genes STAT4 binds to hundreds of sites in the genome, among others to the promoters of genes for cytokines (IFN-γ, TNF), receptors (IL18R1, IL12Rβ2, IL18RAP), and signaling factors (MYD88). Disease STAT4 is involved in several autoimmune diseases and cancers in animal models and humans, notably in disease progression and pathology. STAT4 levels were significantly increased in patients with ulcerative colitis and in skin T cells of psoriatic patients. Moreover, STAT4 -/- mice developed less severe experimental autoimmune encephalomyelitis (EAE) than wild-type mice. Intronic single nucleotide polymorphisms (SNPs), mostly in the third intron of STAT4, have been shown to be associated with immune dysregulation and autoimmunity, including systemic lupus erythematosus (SLE) and rheumatoid arthritis as well as Sjögren's disease (SD), systemic sclerosis, psoriasis and type-1 diabetes. The high incidence of STAT4 genetic polymorphisms associated with susceptibility to autoimmune diseases is a reason to consider STAT4 a general autoimmune-disease susceptibility locus. References Further reading External links Gene expression Immune system Proteins Transcription factors Signal transduction
STAT4
Chemistry,Biology
1,276
60,139,975
https://en.wikipedia.org/wiki/Canadian%20Internet%20Handbook
The Canadian Internet Handbook was a series of non-fiction books written by Jim Carroll and Rick Broadhead. It was first published in March 1994 and was aimed at an audience new to computers, describing the basics of how to use the Internet. The books contained information on what the Internet is, how to get connected and how it works, as well as a directory of internet-based services. Reception Within six weeks of the initial publication on March 7, 1994, the Canadian Internet Handbook was the number-one best-selling book according to The Globe and Mail and the National Post. Reviews of the initial and later editions were mostly favourable, citing the expertise of the authors as well as the comprehensiveness of the books. Success continued throughout the 1990s, but the bursting of the dot-com bubble in 2001 eventually brought the series to an end. No further editions were released. References External links Interview with CBC Dot-com bubble Books about the Internet
Canadian Internet Handbook
Technology
183
9,833,385
https://en.wikipedia.org/wiki/Butt%20welding
Butt welding is when two pieces of metal are placed end-to-end without overlap and then welded along the joint (as opposed to a lap joint weld, where one piece of metal is laid on top of the other, or plug welding, where one piece of metal is inserted into the other). Importantly, in a butt joint, the surfaces of the workpieces being joined are on the same plane and the weld metal remains within the planes of the surfaces. Common uses Butt welding is a commonly used technique that can either be automated or done by hand on steel pieces. Butt welding can also be done with brazing for copper pieces. It is used to attach two pieces of metal together, such as pipe, framework in factories, and also flanges. A flange is an internal or external rib or rim provided to strengthen a piece of material. Factory fabrication demonstrates the cost-effectiveness of butt welding versus the more expensive overall processes of bending stock, reinforcing joints, and using fasteners where required. Butt welding is accomplished by heating the ends of the two pieces of metal and joining them. Maintaining penetration while welding is important; with thin pieces of metal this is possible without preparation, but with thick pieces edge preparation may have to be done before welding. Full penetration butt welds are those in which the weld extends through the full thickness of the parent (bigger, stronger) metal. In butt welding the strongest welds will have the fewest imperfections. To achieve this the heat input is controlled, which decreases the size of the weld. In commercial welding this also reduces cost, but in order to maintain the strength of the weld, double butt welds will be used. In butt welding there are two types used to achieve the specific welds, and there is also a variety of joints considered to be butt joints. Butt welding is best performed with MIG or TIG welding applications due to their natural ability to connect two pieces of metal together. The type of welding electrode used determines the properties of the weld, such as its strength and its resistance to corrosion. Electrodes conduct current through the metal being welded in order to join the two pieces. The metal determines the type of welding that is required. The electrodes are either heavily or lightly coated. Heavily coated electrodes are commonly used in structural welding because the resulting welds are much stronger and more corrosion resistant; lightly coated electrodes produce welds that are not as structurally sound. Butt welding is performed with the arc, TIG, or MIG welder held at a slight angle to the weld if the weld is lying flat, in order to achieve the least amount of porosity in the weld and to increase the weld's strength. Fillet welds make up about 80 percent of welded connections despite being weaker than butt welds. The reason they are used more often is that fillet welds offer more room for error, with much larger tolerances. Fillet welding is not a type of butt welding despite its similarities. Types of butt welding Flash Flash butt welding is done by machine and connects pieces of metal that are mismatched in size and shape. These differences in size can often cause breaks in the welding process. A high voltage current is applied to both components, producing the arcing known as flashing that joins them together.
Resistance In resistance butt welding, the two pieces of metal are joined by heat generated by electrical resistance at the joint while the metals are held together at a preset force. Resistance butt welding is used on joints of similar shape and size, and the weld is often performed in one movement, unlike flash welding. Types of butt joints There are many different types of butt welding joints, and they are all named after their particular shape. The butt joint, also known as a square groove weld, takes many different forms for connecting pieces of metal together, and all of them are capable of bearing loads. There are many different types of joints, such as lap joints, tee joints, butt joints, and also corner joints. In a lap joint the two pieces overlap and are welded together, whereas in a butt weld the pieces are placed end to end and joined that way. Butt welds are made through the thickness of the parent metal. There are many different kinds of butt welds, such as square, single V, double V, single bevel, double bevel, single U, double U, single J, and double J. Minimizing the distortions in a weld is important; however, doing so will reduce the chances of full penetration. In order to get full penetration, double welds such as the double V, double J, and double U may be used. Standards EN 1993-1-8, which covers the design of joints in steel structures, defines a set of provisions for welding structural steel. See also Fillet weld - a weld of approximately triangular cross section joining two surfaces at approximately right angles to each other Plug weld Flare groove weld Weld access hole Welding joint - a joining process that produces a coalescence of metals (or non metals) by heating them to the welding temperature, with or without the application of pressure, or by pressure alone, and with or without the use of filler metals References Welding
Butt welding
Engineering
1,081
25,887,632
https://en.wikipedia.org/wiki/Psilocybe%20rostrata
Psilocybe rostrata is a species of mushroom in the family Hymenogastraceae. See also List of Psilocybin mushrooms Psilocybin mushrooms Psilocybe References Entheogens Psychoactive fungi rostrata Psychedelic tryptamine carriers Fungi of North America Fungus species
Psilocybe rostrata
Biology
64
23,652,872
https://en.wikipedia.org/wiki/C8H8O2
The molecular formula C8H8O2 may refer to: Anisaldehyde (p-anisaldehyde) Benzodioxan 3,4-Dihydroxystyrene 3-Hydroxyacetophenone 2-Hydroxy-4-methylbenzaldehyde 4-Hydroxyphenylacetaldehyde 2-Methoxybenzaldehyde (o-anisaldehyde) Methyl benzoate Phenyl acetate Phenylacetic acid Piceol and other hydroxy acetophenones Toluic acids p-Toluic acid o-Toluic acid m-Toluic acid
C8H8O2
Chemistry
156
38,413,368
https://en.wikipedia.org/wiki/WELMEC
WELMEC is a body set up to promote European cooperation in the field of legal metrology. WELMEC members are drawn from the national authorities responsible for legal metrology in European Union (EU) and European Free Trade Association (EFTA) member states. WELMEC state their mission as being "to develop and maintain mutual acceptance among its members and to maintain effective cooperation to achieve a harmonised and consistent approach to the societies needs for legal metrology and for the benefit of all stakeholders including consumers and businesses." WELMEC was established in 1990, at a meeting in Bern, Switzerland, and was originally the acronym for the "Western European Legal Metrology Cooperation". WELMEC has 30 members and 7 associate members. Today, although the name is still WELMEC, as the European Union extended its membership outside Western Europe, so did WELMEC, the organisation's membership encompassing EU member states, EFTA members and aspiring EU members: one of the aims of WELMEC being the provision of assistance to aspiring EU members in aligning their legal metrology process with those of the EU. As of 2013, WELMEC's principal activities centered on the operation of the EU Nonautomatic Weighing Instruments Directive (NAWI – EU directive 2009/23/EC) and the implementation of the EU Measuring Instruments Directive (MID – EU directive (2004/22/EC). The organisation's working parties, which map onto various aspects of these two directives, are: WG 2 Directive Implementation (2009/23/EC) WG 5 Metrological supervision WG 6 Prepackages WG 7 Software WG 8 Measuring Instruments Directive WG 10 Measuring equipment for liquids other than water WG 11 Gas and Electricity Meters WG 13 Water and Thermal Energy Meters See also EURAMET, the European Association of National Metrology Institutes International Organization of Legal Metrology References External links Measurement Standards organizations
WELMEC
Physics,Mathematics
393
47,894,032
https://en.wikipedia.org/wiki/Blasting%20mat
A blasting mat is a mat usually made of sliced-up rubber tires bound together with ropes, cables or chains. They are used during rock blasting to contain the blast, prevent flying rocks and suppress dust. Use Blasting mats are used when explosives are detonated in places such as quarries or construction sites. The mats are placed over the blasting area to contain the blast, suppress noise and dust as well as prevent high velocity rock fragments called fly rock (or flyrock) from damaging structures, people or the environment in proximity to the blast site. The amount of fly rock can be reduced by proper drilling in the bedrock for the explosives, but in practice it is hard to avoid. Mats can be used singly or in layers depending on the size of the blast charge, the type of mat and the amount of protection needed. They can be used horizontally on the ground or vertically hanging from cranes or attached to structures. In the vertical capacity the mats are sometimes referred to as blasting curtains. When used in blasting tunnels the mats can be placed in patterns designed to let the mats stabilize each other and to direct the discharge from the explosion out of the tunnel. To prevent mats from being displaced by the explosion, they can be covered with layers of soil or anchored to the ground. Anchoring the mats is also essential when the blasting is done on an incline where the mats may slide down from the rock face. Blasting mats are often used in combination with blasting blankets as an additional layer over the mats. The blankets are larger than the mats designed to retain the fragments that have managed to pass through the mat. Blasting blankets are used for both horizontal and vertical blasting. Blasting blankets consist of a fine-mesh strong net or industrial felt from paper mills. Both mats and blankets are designed to let air and gasses from the explosion pass through the cover and retain fragments. Knowledge of the proper use of blasting mats is required in order to obtain a blaster's certificate issued by organizations such as the WorkSafeBC. Blasting mats made from used tires can serve a double purpose as road stabilizers, or road surface, in locations where roads leading to the blasting site are unstable or nonexistent, or in areas where the surface needs to be protected from heavy machinery. Materials A number of materials are used for making blasting mats and new materials with different properties are constantly being introduced into this field. The most common materials are strips of old tires held together by steel cables, mats woven from manila rope or wire cables, logs or conveyor belts. Layers of wire netting can also be used. Several methods of assembling a blasting mat are patented. Blasting mats made from rope woven on wires were first used during the construction of the IRT Third Avenue Line in New York City in the early 1900s. They were used to protect the surrounding buildings and were favored since they prevented fly rock but vented gasses. Mats made from recycled tires can be fashioned in a number of ways depending on how the tires are cut. Some examples are tread mats, sidewall mats and mats from non-flattened sections of tires. Manufacturing The manufacturing of blasting mats is integrated with the mining industry and many companies specialize in making mats, especially those made from recycled tires. 
Military use When charges are used to dig foxholes, an improvised blasting mat made from whole tires tied together with rope to reduce noise and fly rock, is recommended in the A Soldiers Handbook (United States). A tarp may also be used as a blasting blanket. Accidents Over the years, a number of incidents with fatal outcomes have been caused by fly rock. In most of these, blasting mats were not used or they were placed over the blasting face in an incorrect manner. Such an incident occurred in August 2015, in Cape Ray, Newfoundland and Labrador when a fly rock travelled about from the blast site and crashed through the kitchen ceiling of a nearby house. Although designed to prevent accidents, as blasting mats weigh between , they have also caused injuries when falling on workers on construction sites. Blasting mats must be thoroughly inspected before each use to ensure there are no blown out segments or broken cables, by a blaster. Blasting mats will deteriorate with each use to the point where they become ineffective for their intended purpose. Only trained experienced and adequately supervised crews should be used in the placement of these devices over a loaded shot. A common complaint is accidental breakage of bus wires, leg wires or pinching off the non electric tubes that may result in the misfire of the shot. References Explosives engineering Explosion protection Military engineering Mining engineering Mining equipment Mine safety Vehicle recycling Improvisation Recycling by product
Blasting mat
Chemistry,Engineering
924
5,381,695
https://en.wikipedia.org/wiki/Ethylammonium%20nitrate
Ethylammonium nitrate or ethylamine nitrate (EAN) is a salt with formula . It is an odorless and colorless to slightly yellowish liquid with a melting point of 12 °C. This compound was described by Paul Walden in 1914, and is believed to be the earliest reported example of a room-temperature ionic liquid. Synthesis and properties Ethylammonium nitrate can be produced by heating ethyl nitrate with an alcoholic solution of ammonia or by reacting ethylamine with concentrated nitric acid. It has a relatively low viscosity of 0.28 poise or 0.028 Pa·s at 25 °C and therefore a high electrical conductivity of about 20 mS·cm−1 at 25 °C. It boils at 240 °C and decomposes at about 250 °C. Its density at 20 °C is 1.261 g/cm3. The ethylammonium ion () has three easily detachable protons which are tetrahedrally arranged around the central nitrogen atom, whereas the configuration of the anion is planar. Despite the structural differences, EAN shares many properties with water, such as micelle formation, aggregation of hydrocarbons, negative enthalpy and entropy of dissolution of gases, etc. Similar to water, EAN can form three-dimensional hydrogen bonding networks. Applications Ethylammonium nitrate is used as an electrically conductive solvent in electrochemistry and as a protein crystallization agent. It has a positive effect on the refolding of denaturated lysozyme, with the refolding yield of about 90%. The refolding action was explained as follows: The ethyl group of ethylammonium nitrate interacts with the hydrophobic part of the protein and thereby protects it from intermolecular association, whereas the charged part of EAN stabilizes the electrostatic interactions. References Ammonium compounds Nitrates Ionic liquids Substances discovered in the 1910s
Ethylammonium nitrate
Chemistry
393
11,851,243
https://en.wikipedia.org/wiki/L%C3%A9o-Pariseau%20Prize
The Léo-Pariseau Prize is a Québécois prize which is awarded annually to a distinguished individual working in the field of biological or health sciences. The prize is awarded by the Association francophone pour le savoir (Acfas), and is named after Léo Pariseau, the first president of Acfas. The award was inaugurated in 1944 and was the first Acfas prize. Prior to 1980 the prize was awarded to researchers in a large variety of disciplines, before being restricted to biological and health sciences. There are now ten annual prizes for researchers in different disciplines. Winners Source: Acfas – Prix de la Recherche Scientifique de l'Acfas – Prix Léo-Pariseau 1944 - Marie-Victorin Kirouac, botany, Université de Montréal 1945 - Paul-Antoine Giguère, chemistry, Université Laval 1946 - Marius Barbeau, ethnology, Université Laval 1947 - Jacques Rousseau, botany and ethnology, Université de Montréal 1948 - Léo Marion, chemistry, University of Ottawa 1949 - Jean Bruchési, history and political science, Université de Montréal 1950 - Louis-Charles Simard, pathology, Université de Montréal 1951 - Cyrias Ouellet, chemistry, Université Laval 1952 - Louis-Paul Dugal, physiology, Université de Montréal 1953 - Guy Frégault, history, Université de Montréal 1954 - Pierre Demers. physics, Université de Montréal 1955 - René Pomerleau, mycology, Université de Montréal 1956 - Marcel Rioux, anthropology, Université de Montréal 1957 - No prize awarded. 1958 - Roger Gaudry, chemistry, Université de Montréal 1959 - Lionel Daviault, entomology 1960 - Marcel Trudel, history, Université Laval 1961 - Raymond Lemieux, chemistry, University of Alberta 1962 - Charles-Philippe Leblond, histology, McGill University 1963 - Lionel Groulx, history, Université de Montréal 1964 - Larkin Kerwin, physics, Université Laval 1965 - Pierre Dansereau, ecology, Université du Québec à Montréal 1966 - Noël Mailloux, psychology, Université de Montréal 1967 - Albéric Boivin, physics, Université Laval 1968 - Léonard-Francis Bélanger, histology, Université de Montréal 1969 - Fernand Dumont, sociology, Université Laval 1970 - Bernard Belleau, biochemistry, Bristol-Myers of Canada 1971 - Édouard Pagé, biology, Université de Montréal 1972 - Louis-Edmond Hamelin, geography, Université Laval 1973 - Camille Sandorfy, chemistry, Université de Montréal 1974 - Antoine D'Iorio, biochemistry, Université d'Ottawa 1975 - Pierre Angersphilosophy, Université de Montréal 1976 - Paul Marmet, physics, Université Laval 1977 - Jacques de Repentigny, microbiology and immunology, Université de Montréal 1978 - Vincent Lemieux, political science, Université Laval 1979 - Pierre Deslongchamps, chemistry, Université de Sherbrooke 1980 - André Barbeau, neurology, Institut de recherches cliniques de Montréal 1981 - Jean-G. Lafontaine, biology, Université Laval 1982 - J.-André Fortin, botany, Université Laval 1983 - Germain Brisson, nutrition, Université Laval 1984 - Wladimir A. Smirnoff, microbiology, Environment Canada 1985 - Louis Legendre, biology, Université Laval 1986 - Marc Cantin, medicine, Université de Montréal 1987 - Guy Lemieux, nephrology, Université de Montréal 1988 - Pierre Borgeat, physiology, Université Laval 1989 - Jules Hardy, neurosurgery, Université de Montréal 1990 - Jacques de Champlain, medicine, Université de Montréal 1991 - Jacques Leblanc, medicine, Université Laval 1992 - Paul Jolicoeur, molecular biology, Institut de recherches cliniques de Montréal 1993 - Albert J. 
Aguayo, neurology, McGill University 1994 - Emil Skamene, medicine, McGill University 1995 - André Parent, physiology, Université Laval 1996 - Domineco Regoli, pharmacology, Université de Sherbrooke 1997 - Rémi Quirion, neurosciences, McGill University 1998 - Serge Rossignol, neurosciences, Université de Montréal 1999 - Guy Armand Rouleau, neurology, McGill University 2000 - Rima Rozen, human genetics and pediatrics, McGill University 2001 - Nabil G. Seidah, biochemistry and molecular medicine, Institut de recherches cliniques de Montréal 2002 - Graham Bell. biology, McGill University 2003 - Mona Nemer, pharmacology, Université de Montréal 2004 - Jacques Montplaisir, sleep sciences, Université de Montréal 2005 - Laurent Descarries, pathology and cell biology, Université de Montréal 2006 - Michel Bouvier, biochemistry, Université de Montréal 2007 - André Veillette, immunology, Université de Montréal 2008 - Michael Kramer, pediatrics, Université McGill 2009 - Michel J. Tremblay, medical biology, Université Laval 2010 - René Roy, medicinal chemistry, Université du Québec à Montréal 2011 - Claude Perreault, immunology, Université de Montréal 2012 - Julien Doyon, neurosciences, Université de Montréal 2013 - Jean-Pierre Julien, neurodegeneration, Université Laval 2014 - Marc-André Sirard, animal reproduction, Université Laval 2015 - Guy Sauvageau, immunology and oncology, Université de Montréal 2016 - Gustavo Turecki, suicide and neurosciences, McGill University 2017 - Jacques Simard, genetics, Université Laval 2018 - Sylvain Moineau, microbiology, Université Laval 2019 - Sylvain Chemtob, neonatalogy and pharmacology, Université de Montréal See also List of biology awards List of medicine awards References Canadian science and technology awards Awards established in 1944 Medicine awards
Léo-Pariseau Prize
Technology
1,176
4,035,228
https://en.wikipedia.org/wiki/Dikka
A dikka or dakka (), also known in Turkish as a müezzin mahfili, is a raised platform or tribune in a mosque from which the Quran is recited and where the muezzin chants or repeats in response to the imam's prayers. It is also used by the muezzin to chant the second call to prayer (iqama), which indicates to worshippers that the prayer is about to begin. On special occasions or evenings, such as during the month of Ramadan, expert or professional Qur'an reciters also use the platform to chant parts of the Qur'an. It is also known as the mukabbariyah () in the Prophet's Mosque in Medina. This feature is not found in all mosques but is most often found in large mosques where it is difficult for worshippers far from the mihrab to hear the imam. Raised on columns, it can be a freestanding structure near the middle of the prayer hall or a balcony set against a pillar or a wall opposite the minbar. See also Dakkah References Architectural elements Islamic architectural elements Islamic architecture Mosque architecture Islamic terminology
Dikka
Technology,Engineering
235
64,449,141
https://en.wikipedia.org/wiki/Carbon%20dioxide%20angiography
Carbon dioxide angiography is a diagnostic radiographic technique in which a carbon dioxide (CO2) based contrast medium is used - unlike traditional angiography, where the contrast medium normally used is iodine based - to visualize and study the blood vessels. Since CO2 is a non-radio-opaque contrast medium, angiographic procedures need to be performed with digital subtraction angiography (DSA). History The use of carbon dioxide as a contrast agent goes back to the 1920s, when the gas was used to visualize retroperitoneal structures. In the 1950s and early 1960s, CO2 was injected intravenously to delineate the right atrium for the detection of pericardial effusion. This imaging technique developed from animal and clinical studies which demonstrated that CO2 was safe and well tolerated with venous injections. In the early 1970s, Dr. Hawkins and Dr. Cho started using and studying CO2 as a contrast agent for peripheral vascular imaging and intervention as well. With the advent of the digital subtraction angiography (DSA) technique in the 1980s, CO2 evolved into a safe and useful alternative contrast agent in both arteriography and venography. Because of its lack of renal toxicity and allergic potential, CO2 is a preferred contrast agent in patients with renal failure or iodinated contrast medium allergy, and particularly in patients who require large volumes of contrast medium for complex endovascular procedures. Technique CO2 angiography is intended only for peripheral procedures. For procedures in the arterial system, CO2 may be injected only below the diaphragm, while in the venous system it can also be injected above the diaphragm, provided that the cerebral vessels are excluded. With this restriction in mind, the practical approach follows that of iodinated contrast procedures. The contrast injection can be carried out, similarly, both with manual devices and with automatic injectors (Automated Carbon Dioxide Angiography, ACDA). Properties Being naturally present in the human body, CO2 is the only 100% biocompatible contrast agent, meaning it causes no adverse reactions such as allergy, nephrotoxicity, and hepatotoxicity. Carbon dioxide is a negative contrast medium with low radiopacity (whereas iodinated contrast media are defined as positive contrast media due to their high radiopacity). Contrast is caused by the difference in X-ray absorption coefficients between the tissue and the contrast agent. In vascular images produced using CO2, vessels look brighter than the surrounding tissues, because the gas absorbs less X-ray radiation than an iodine-based contrast medium, with which the vessels are displayed in black. CO2 does not mix with blood. At atmospheric pressure CO2 is in gaseous form and, when it leaves the catheter, it forms a train of bubbles that displaces blood, causing a transient ischemia that depends on the bloodstream (systolic pressure). When the frames are added together by DSA "stacking" software, the result is a composite diagnostic image. Carbon dioxide is highly soluble, allowing multiple injections with no maximum dose per procedure (although the literature indicates a limit of 100 mL per injection); in the case of multiple injections, however, an adequate time interval between them should be allowed so that the gas can be expelled from the body. Compared with oxygen, the most abundant gaseous substance in the body, CO2 is more than 20 times more soluble, which makes it possible to inject large quantities into the body.
High compressibility and explosive delivery: the more pressure is exerted on the gas, the more its density increases, with a corresponding decrease in gas volume. The effusion of the gas from the catheter orifice into a region of lower pressure, such as a blood vessel, leads to a sudden increase in the volume of the gas, the "explosive delivery" or "jet effect", which could place excessive stress on vessel walls. To avoid this, immediately prior to the injection of CO2, a flush is performed, injecting small amounts of CO2 to reduce gas compression and guarantee gas delivery at a steady flow rate. CO2 is 400 times less viscous than iodinated contrast medium, allowing its injection through devices with a very small inner lumen, such as microcatheters, or even alongside other devices inserted in the catheter, such as guidewires or balloons, as in atherectomy procedures. The low viscosity of CO2 makes it easy for the gas to pass through small vessels, visualizing tight stenoses, collaterals, small bleedings and endoleaks in AAA procedures. Expulsion: once dissolved in the plasma, CO2 is transported to the lungs and removed in a single pass by the alveoli, favoring the possibility of performing multiple injections without complications (in healthy patients, meaning no severe COPD or significant POF, especially in the presence of pulmonary embolism). Buoyancy is defined as the tendency of a body to float when submerged in a fluid. CO2 is lighter than blood and, therefore, floats above the bloodstream. The main advantage is the ease of filling the more superficial vessels of the body (in the transverse plane); conversely, the main disadvantage is the greater difficulty of filling the deeper ones. Side effects Pins and needles, a burning sensation, nausea and temporary discomfort are possible sensations during CO2 angiography, mainly because of the transient ischemia caused by the CO2 bubbles flowing in the bloodstream. CO2 is also neurotoxic, so brain injections should be avoided. The most feared complication for intravascular use is air embolism, which can result in stroke, myocardial infarction, paralysis, amputation, or death, although this risk across all patients is less than 1%. A large amount of CO2 trapped in the pulmonary artery or right side of the heart (only of concern during venography) obstructs venous return, resulting in bradycardia and hypotension. If this happens, the patient should be rotated into a left lateral decubitus position to attempt to separate the CO2 into a gas layer floating "on top of", and no longer interfering with, the flow of the liquid and solid components of blood (vapor lock). Therefore, having a delivery system that prevents room air from diffusing into it is a necessary safety measure for patients. References Carbon dioxide Radiography
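The compressibility behaviour described under Properties can be made concrete with the ideal gas approximation. Treating the delivered CO2 as an ideal gas expanding isothermally (an assumption; the real expansion is neither ideal nor strictly isothermal), Boyle's law gives

\[
P_1 V_1 = P_2 V_2 \quad\Rightarrow\quad V_2 = V_1\,\frac{P_1}{P_2},
\]

so a bolus compressed to roughly twice atmospheric pressure in the delivery system roughly doubles in volume as it enters a vessel near atmospheric pressure. This sudden expansion is the "jet effect" mentioned above and is the reason a small preliminary flush is used to reduce gas compression before the diagnostic injection.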
Carbon dioxide angiography
Chemistry
1,350
1,522,583
https://en.wikipedia.org/wiki/Alpha%20Ceti
Alpha Ceti (α Ceti, abbreviated Alpha Cet, α Cet), officially named Menkar, is the second-brightest star in the constellation of Cetus. It is a cool luminous red giant estimated to be about 250 light years away based on parallax. Nomenclature Alpha Ceti is the star's Bayer designation. It has the traditional name Menkar, deriving from the Arabic word منخر manħar "nostril" (of Cetus). In 2016, the International Astronomical Union organized a Working Group on Star Names (WGSN) to catalog and standardize proper names for stars. The WGSN's first bulletin of July 2016 included a table of the first two batches of names approved by the WGSN, which included Menkar for this star. This star, along with γ Cet (Kaffaljidhma), δ Cet, λ Cet (also Menkar), μ Cet, ξ1 Cet and ξ2 Cet, formed Al Kaff al Jidhmah, "the Part of a Hand". In Chinese, (), meaning Circular Celestial Granary, refers to an asterism consisting of α Ceti, κ1 Ceti, λ Ceti, μ Ceti, ξ1 Ceti, ξ2 Ceti, ν Ceti, γ Ceti, δ Ceti, 75 Ceti, 70 Ceti, 63 Ceti and 66 Ceti. Consequently, the Chinese name for α Ceti itself is (, .) Characteristics Despite having the Bayer designation α Ceti, at visual magnitude 2.54 this star is actually not the brightest star in the constellation Cetus. That honor goes instead to Beta Ceti at magnitude 2.04. Menkar is a red giant with a stellar classification of M1.5 IIIa. It has more than twice the mass of the Sun and, as a giant star, has expanded to about 100 times the Sun's radius. The large area of the photosphere means that it is emitting about 1,765 times as much energy as the Sun, even though its effective temperature is much lower than the Sun's. The relatively low temperature gives Menkar the red hue of an M-type star. Menkar has evolved from the main sequence after exhausting the hydrogen at its core. It has also exhausted its core helium, becoming an asymptotic giant branch star, and will probably become a highly unstable star like Mira before finally shedding its outer layers and forming a planetary nebula, leaving a relatively large white dwarf remnant. It has been observed to vary periodically in brightness, but only with an amplitude of about one hundredth of a magnitude. Namesakes Menkar (AK-123) was a United States Navy Crater-class cargo ship named after the star. References External links Assorted figures related to Alpha Ceti M-type giants Asymptotic-giant-branch stars Cetus Ceti, Alpha BD+03 0419 Ceti, 92 018884 014135 0911 Menkar
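The quoted energy output follows from the Stefan-Boltzmann relation between radius, temperature and luminosity. As a rough check only (the effective temperatures are assumed here to be about 3,700 K for Menkar and about 5,770 K for the Sun, since the exact values are not given above):

\[
\frac{L}{L_\odot} \;=\; \left(\frac{R}{R_\odot}\right)^{2}\left(\frac{T_{\mathrm{eff}}}{T_{\mathrm{eff},\odot}}\right)^{4}
\;\approx\; 100^{2}\left(\frac{3700}{5770}\right)^{4} \;\approx\; 1.7\times10^{3},
\]

which is consistent in order of magnitude with the roughly 1,765 solar luminosities quoted for the star.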
Alpha Ceti
Astronomy
633
36,291,080
https://en.wikipedia.org/wiki/Melodichthys%20hadrocephalus
Melodichthys hadrocephalus is a rare species of viviparous brotula found in the northeastern Atlantic Ocean off the coast of France. It is found at depths from . This is the only known species in its genus. It is known from a single specimen. References Bythitidae Monotypic fish genera Fish described in 1986
Melodichthys hadrocephalus
Biology
71
23,582,507
https://en.wikipedia.org/wiki/C16H10N2O2
The molecular formula C16H10N2O2 (molar mass: 262.27 g/mol, exact mass: 262.0742 u) may refer to: Indigo dye Indirubin Molecular formulas
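The quoted molar mass can be reproduced from standard atomic weights. The snippet below is only an illustrative sketch; the dictionary of rounded atomic weights and the helper name molar_mass are assumptions made for this example, not part of any particular library.

ATOMIC_WEIGHTS = {"C": 12.011, "H": 1.008, "N": 14.007, "O": 15.999}  # rounded standard values

def molar_mass(counts):
    # Sum each element's atomic weight multiplied by its count in the formula.
    return sum(ATOMIC_WEIGHTS[element] * n for element, n in counts.items())

print(round(molar_mass({"C": 16, "H": 10, "N": 2, "O": 2}), 2))  # 262.27 g/mol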
C16H10N2O2
Physics,Chemistry
63
31,304,687
https://en.wikipedia.org/wiki/Defense%20in%20insects
Insects have a wide variety of predators, including birds, reptiles, amphibians, mammals, carnivorous plants, and other arthropods. The great majority (80–99.99%) of individuals born do not survive to reproductive age, with perhaps 50% of this mortality rate attributed to predation. To cope with this ongoing battle to escape predation, insects have evolved a wide range of defense mechanisms. The only restraint on these adaptations is that their cost, in terms of time and energy, does not exceed the benefit that they provide to the organism. The further a feature tips the balance towards being beneficial, the more likely selection is to act upon the trait, passing it down to further generations. The opposite also holds true: defenses that are too costly have little chance of being passed down. Examples of defenses that have withstood the test of time include hiding, escape by flight or running, and firmly holding ground to fight, as well as producing chemicals and social structures that help prevent predation. One of the best-known modern examples of the role that evolution has played in insect defenses is the link between melanism and the peppered moth (Biston betularia). Peppered moth evolution has taken place in England over the past two centuries, with darker morphs becoming more prevalent than lighter morphs, reducing the risk of predation. However, its underlying mechanism is still debated. Hiding Walking sticks (order Phasmatodea), many katydid species (family Tettigoniidae), and moths (order Lepidoptera) are just a few of the insects that have evolved specialized cryptic morphology. This adaptation allows them to hide within their environment because of a resemblance to the general background or an inedible object. When an insect looks like an inedible or inconsequential object in the environment that is of no interest to a predator, such as leaves and twigs, it is said to display mimesis, a form of crypsis. Insects may also take on different types of camouflage, another type of crypsis. These include resembling a uniformly colored background as well as being light below and dark above, or countershaded. Additionally, camouflage is effective when it results in patterns or unique morphologies that disrupt outlines so as to better merge the individual into the background. Cost and benefit perspective Butterflies (order Lepidoptera) are a good example of the balancing act between the costs and benefits associated with defense. In order to take off, butterflies must first raise their thorax to a sufficiently high temperature. This energy is derived both internally through muscles and externally by absorbing solar radiation through the body or wings. When looked at in this light, cryptic coloration to escape from predators, markings to attract conspecifics or warn predators (aposematism), and the absence of color to absorb adequate solar radiation all play key roles in survival. Only when these three concerns are in balance does the butterfly maximize its fitness. Mimicry Mimicry is a form of defense in which a species resembles another that is recognized by natural enemies, giving it protection against predators. The resemblance among mimics does not denote common ancestry. Mimicry works if and only if predators are able to learn from eating distasteful species. It is a three-part system that involves a model species, a mimic of that species, and a predatory observer that acts as a selective agent.
If learning is to be successful, then all models, mimics, and predators must co-exist, a notion feasible within the context of geographic sympatry. Mimicry is divided into two main forms, Batesian mimicry and Müllerian mimicry. Batesian mimicry In Batesian mimicry, an aposematic inedible model has an edible mimic. Automimics are individuals that, due to environmental conditions, lack the distasteful or harmful chemicals of conspecifics, but are still indirectly protected through their visibly identical relatives. An example can be found in the plain tiger (Danaus chrysippus), a non-edible butterfly, which is mimicked by multiple species, the most similar being the female danaid eggfly (Hypolimnas misippus). Müllerian mimicry In Müllerian mimicry, a group of species benefit from each other's existence because they all are warningly colored in the same manner and are distasteful. The best examples of this phenomenon can be found within the butterfly genus Heliconius. Behavioral responses Behavioral responses to escape predation include burrowing into substrate and being active only through part of the day. Furthermore, insects may feign death, a response termed thanatosis. Beetles, particularly weevils, do this frequently. Bright colors may also be flashed underneath cryptic ones. A startle display occurs when prey takes advantage of these markings after being discovered by a predator. The striking color pattern, which often includes eyespots, is intended to evoke prompt enemy retreat. Better-formed eyespots seem to result in better deterrence. Mechanical defenses Insects have had millions of years to evolve mechanical defenses. Perhaps the most obvious is the cuticle. Although its main role lies in support and muscle attachment, when extensively hardened by the cross-linking of proteins and chitin, or sclerotized, the cuticle acts as a first line of defense. Additional physical defenses include modified mandibles, horns, and spines on the tibia and femur. When these spines take on a main predatory role, they are termed raptorial. Some insects uniquely create retreats that appear uninteresting or inedible to predators. This is the case in caddisfly larvae (order Trichoptera), which encase their abdomens in a mixture of materials such as leaves, twigs, and stones. Autotomy Autotomy, or the shedding of appendages, is also used to distract predators, giving the prey a chance to escape. This highly costly mechanism is regularly practiced by stick insects (order Phasmatodea), where the cost is accentuated by the possibility that legs can be lost 20% of the time during molting. Harvestmen (order Opiliones) also use autotomy as a first line of defense against predators. Chemical defenses Unlike pheromones, allomones harm the receiver to the benefit of the producer. This grouping encompasses the chemical arsenal that numerous insects employ. Insects with chemical weaponry usually make their presence known through aposematism. Aposematism is utilized by non-palatable species as a warning to predators that they represent a toxic danger. Additionally, these insects tend to be relatively large, long-lived, active, and frequently aggregate. Indeed, longer-lived insects are more likely to be chemically defended than short-lived ones, as longevity increases apparency. Throughout the arthropod and insect realm, however, chemical defenses are quite unevenly distributed. There is great variation in the presence or absence of chemical arms, from among orders and families to even within families.
Moreover, there is diversity among insects as to whether the defensive compounds are obtained intrinsically or extrinsically. Many compounds are derived from the main food sources on which insect larvae, and occasionally adults, feed, whereas other insects are able to synthesize their own toxins. In reflex bleeding, insects dispel their blood, hemolymph, or a mixture of exocrine secretions and blood as a defensive maneuver. As previously mentioned, the discharged blood may contain toxins produced within the insect itself or obtained externally from plants that the insect consumed. Reflexive bleeding occurs in specific parts of the body; for example, the beetle families Coccinellidae (ladybugs) and Meloidae bleed from the knee joints. Classification Gullan and Cranston have divided chemical defenses into two classes. Class I chemicals irritate, injure, poison, or drug individual predators. They can be further separated into immediate or delayed substances, depending on the amount of time it takes to feel their effects. Immediate substances are encountered topically when a predator handles the insect, while delayed chemicals, which are generally contained within the insect's tissues, induce vomiting and blistering. Class I chemicals include bufadienolides, cantharidin, cyanides, cardenolides, and alkaloids, all of which have greater effects on vertebrates than on other arthropods. The most frequently encountered defensive compounds in insects are alkaloids. Class II chemicals are essentially harmless. They stimulate scent and taste receptors so as to discourage feeding. They tend to have low molecular weight and are volatile and reactive, including acids, aldehydes, aromatic ketones, quinones, and terpenes. Furthermore, they may be aposematic, indicating through odors the presence of chemical defenses. The two different classes are not mutually exclusive, and insects may use combinations of the two. Pasteels, Grégoire, and Rowell-Rahier grouped chemical defenses into three types: compounds that are truly poisonous, those that restrict movement, and those that repel predators. True poisons, essentially Class I compounds, interfere with specific physiological processes or act at certain sites. Repellents are similar to those classified under Class II as they irritate the chemical sensitivity of predators. Impairment of movement and sense organs is achieved through sticky, slimy, or entangling secretions that act mechanically rather than chemically. This last grouping of chemicals has both Class I and Class II properties. Again, these three categories are not mutually exclusive, as some chemicals can have multiple effects. Examples Assassin bugs When startled, the assassin bug Platymeris rhadamanthus (family Reduviidae) is capable of spitting venom up to 30 cm at potential threats. The saliva of this insect contains at least six proteins, including large amounts of protease, hyaluronidase, and phospholipase, which are known to cause intense local pain, vasodilation, and edema. Cockroaches Many cockroach species (order Blattodea) have mucus-like adhesive secretions on their posterior. Although not as effective against vertebrates, these secretions foul the mouths of invertebrate predators, increasing the chances of the cockroach escaping. Termites The majority of termite soldiers secrete, from an apparatus called a fontanellar gun, a rubberlike and sticky chemical concoction that serves to entangle enemies, and this weapon is usually coupled with specialized mandibles.
In nasute species of termites (contained within the subfamily Nasutitermitinae), the mandibles have receded. This makes way for an elongated, syringe-like nasus capable of squirting liquid glue. When this substance is released from the frontal gland reservoir and dries, it becomes sticky and is capable of immobilizing attackers. It is highly effective against other arthropods, including spiders, ants, and centipedes. Among termite species in the Apicotermitinae that are soldierless or where soldiers are rare, mouth secretions are commonly replaced by abdominal dehiscence. These termites contract their abdominal muscles, resulting in the fracturing of the abdominal wall and the expulsion of gut contents. Because abdominal dehiscence is quite effective at killing ants, the noxious chemical substance released is likely contained within the termite itself. Ants Venom is the defense of choice for many ants (family Formicidae). It is injected from an ovipositor that has been evolutionarily modified into a stinging apparatus. These ants release a complex venom mixture that can include histamine. Within the subfamily Formicinae, the stinger has been lost and instead the poison gland forcibly ejects the fluid of choice, formic acid. Some carpenter ants (genus Camponotus) also have mandibular glands that extend throughout their bodies. When these are mechanically irritated, the ant commits suicide by exploding, spilling out a sticky, entangling substance. The subfamily Dolichoderinae, which also does not possess a stinger, has a different type of defense. The anal gland secretions of this group rapidly polymerize in air and serve to immobilize predators. Leaf beetles Leaf beetles produce a spectrum of chemicals for their protection from predators. In the case of the subtribe Chrysomelina (Chrysomelinae), all life stages are protected by the occurrence of isoxazolin-5-one derived glucosides that partially contain esters of 3-nitropropanoic acid (3-NPA, beta-nitropropionic acid). The latter compound is an irreversible inhibitor of succinate dehydrogenase. Hence, 3-NPA inhibits the tricarboxylic acid cycle. This inhibition leads to neurodegeneration with symptoms similar to those caused by Huntington's disease. Since leaf beetles produce high concentrations of 3-NPA esters, these compounds provide a powerful chemical defense against a wide range of different predators. The larvae of Chrysomelina leaf beetles have developed a second defensive strategy that is based on the excretion of droplets via pairs of defensive glands at the back of the insects. These droplets are immediately presented after mechanical disturbance and contain volatile compounds that derive from sequestered plant metabolites. Due to the specialization of leaf beetles to a certain host plant, the composition of the larval secretion is species-dependent. For instance, the red poplar leaf beetle (Chrysomela populi) consumes the leaves of poplar plants, which contain salicin. This compound is taken up by the insect and then further transformed biochemically into salicylaldehyde, a compound whose odor is very similar to that of benzaldehyde. The presence of salicin and salicylaldehyde can repel potential predators of leaf beetles. The hemolymph toxins originate from autogenous de novo biosynthesis by the Chrysomelina beetle. Essential amino acids, such as valine, serve as precursors for the production of the hemolymph toxins of Chrysomelina leaf beetles. The degradation of such essential amino acids provides propanoyl-CoA.
This compound is further transformed into propanoic acid and β-alanine. The amino group in β-alanine is then oxidized to yield either an oxime or the nitro-toxin 3-nitropropanoic acid (3-NPA). The oxime is cyclized to isoxazolin-5-one, which is transformed with α-UDP-glucose into the isoxazolin-5-one glucoside. In a final step, an ester is formed by transesterification of 3-nitropropanoyl-CoA to the 6´-position of isoxazolin-5-one glucoside. This biosynthetic route yields high millimolar concentrations of the secondary isoxazolin-5-one and 3-NPA derived metabolites. Free 3-NPA and glucosides that derive from 3-NPA and isoxazolin-5-one also occur in many genera of leguminous plants (Fabaceae). The larvae of leaf beetles from subfamilies such as Criocerinae and Galerucinae often employ fecal shields, masses of feces that they carry on their bodies to repel predators. More than just a physical barrier, the fecal shield contains excreted plant volatiles that can serve as potent predator deterrents. Wasps Ant attacks represent a large predatory pressure for many species of wasps, including Polistes versicolor. These wasps possess a gland located in the VI abdominal sternite (van de Vecht's gland) that is primarily responsible for making an ant-repellent substance. Tufts of hair near the edge of the VI abdominal sternite store and apply the ant repellent, which is spread through a rubbing behavior. Collective defenses in social insects Many chemically defended insect species take advantage of clustering over solitary confinement. Among some insect larvae in the orders Coleoptera and Hymenoptera, cycloalexy is adopted. Either the heads or ends of the abdomen, depending on where noxious compounds are secreted, make up the circumference of a circle. The remaining larvae lie inside this defensive ring, where the defenders repel predators through threatening attitudes, regurgitation, and biting. Termites (order Isoptera), like eusocial ants, wasps, and bees, rely on a caste system to protect their nests. The evolution of fortress defense is closely linked to the specialization of soldier mandibles. Soldiers can have biting-crushing, biting-cutting, cutting, symmetrical snapping, and asymmetrical snapping mandibles. These mandibles may be paired with frontal gland secretion, although snapping soldiers rarely utilize chemical defenses. Termites take advantage of their modified mandibles in phragmosis, which is the blocking of the nest with any part of the body; in the case of termites, nest entrances are blocked by the heads of soldiers. Some species of bee, mainly those of the genus Trigona, also exhibit such aggressive behavior. The Trigona fuscipennis species in particular makes use of attraction, landing, buzzing and angular flights as typical alarm behaviors. But biting is the prominent form of defense among T. fuscipennis bees and involves their strong, sharp five-toothed mandibles. T. fuscipennis bees have been discovered to engage in suicidal biting in order to defend the nest against predators. Humans standing in the vicinity of nests are almost always attacked and experience painful bites. The bees also crawl over the intruder into the ears, eyes, mouth, and other cavities. The Trigona workers give a painful and persistent bite, are difficult to remove, and usually die during the attack. Alarm pheromones warn members of a species of approaching danger. Because of their altruistic nature, they follow the rules of kin selection.
They can elicit both aggregational and dispersive responses in social insects depending on the alarm caller's location relative to the nest. Closer to the nest, the pheromone causes social insects to aggregate and may subsequently produce an attack against the threat. Polistes canadensis, a primitively eusocial wasp, will emit a chemical alarm substance at the approach of a predator, which will lower its nestmates' thresholds for attack and even attract more nestmates to the alarm. The colony is thus able to rise quickly with its sting chambers open to defend its nest against predators. In nonsocial insects, these compounds typically stimulate dispersal regardless of location. Chemical alarm systems are best developed in aphids and treehoppers (family Membracidae) among the nonsocial groups. Alarm pheromones take on a variety of compositions, ranging from terpenoids in aphids and termites to acetates, an alcohol, and a ketone in honey bees to formic acid and terpenoids in ants. Immunity Insects, like nearly every other organism, are subject to infectious diseases caused by viruses, bacteria, fungi, protozoa, and nematodes. These encounters can kill or weaken the insect. Insects protect themselves against these detrimental microorganisms in two ways. Firstly, the body-enveloping chitin cuticle, in conjunction with the tracheal system and the gut lining, serves as a major physical barrier to entry. Secondly, hemolymph itself plays a key role in repairing external wounds as well as destroying foreign organisms within the body cavity. Insects, along with having passive immunity, also show evidence of acquired immunity. Social insects additionally have a repertoire of behavioural and chemical "border-defences" and, in the case of the ant, groom venom or metapleural gland secretions over their cuticle. Role of phenotypic plasticity Phenotypic plasticity is the capacity of a single genotype to exhibit a range of phenotypes in response to variation in the environment. For example, in Nemoria arizonaria caterpillars, the cryptic pattern changes according to season and is triggered by dietary cues. In the spring, the first brood of caterpillars resembles oak catkins, or flowers. By the summer when the catkins have fallen, the caterpillars discreetly mimic oak twigs. No intermediate forms are present in this species, although other members of the genus Nemoria, such as N. darwiniata, do exhibit transitional forms. In social insects such as ants and termites, members of different castes develop different phenotypes. For example, workers are normally smaller with less pronounced mandibles than soldiers. This type of plasticity is determined more by cues, which tend to be non-harmful stimuli, than by the environment. Phenotypic plasticity is important because it allows an individual to adapt to a changing environment and can ultimately alter its evolutionary path. It not only plays an indirect role in defense, as individuals prepare themselves physically to avoid predation through camouflage or develop collective mechanical traits to protect a social hive, but also a direct one. For example, cues elicited from a predator, which may be visual, acoustic, chemical, or vibrational, may cause rapid responses that alter the prey's phenotype in real time. See also Insect ecology Antipredator adaptation Behavioral ecology References Exploding animals Insect ecology Mimicry
Defense in insects
Chemistry,Biology
4,454
2,426,547
https://en.wikipedia.org/wiki/Affective%20forecasting
Affective forecasting, also known as hedonic forecasting or the hedonic forecasting mechanism, is the prediction of one's affect (emotional state) in the future. As a process that influences preferences, decisions, and behavior, affective forecasting is studied by both psychologists and economists, with broad applications. History In The Theory of Moral Sentiments (1759), Adam Smith observed the personal challenges, and social benefits, of hedonic forecasting errors: [Consider t]he poor man's son, whom heaven in its anger has visited with ambition, when he begins to look around him, admires the condition of the rich …. and, in order to arrive at it, he devotes himself for ever to the pursuit of wealth and greatness…. Through the whole of his life he pursues the idea of a certain artificial and elegant repose which he may never arrive at, for which he sacrifices a real tranquillity that is at all times in his power, and which, if in the extremity of old age he should at last attain…, he will find to be in no respect preferable to that humble security and contentment which he had abandoned for it. It is then, in the last dregs of life, his body wasted with toil and diseases, his mind galled and ruffled by the memory of a thousand injuries and disappointments..., that he begins at last to find that wealth and greatness are mere trinkets of frivolous utility…. [Yet] it is well that nature imposes upon us in this manner. It is this deception which rouses and keeps in continual motion the industry of mankind. In the early 1990s, Kahneman and Snell began research on hedonic forecasts, examining its impact on decision making. The term "affective forecasting" was later coined by psychologists Timothy Wilson and Daniel Gilbert. Early research tended to focus solely on measuring emotional forecasts, while subsequent studies began to examine the accuracy of forecasts, revealing that people are surprisingly poor judges of their future emotional states. For example, in predicting how events like winning the lottery might affect their happiness, people are likely to overestimate future positive feelings, ignoring the numerous other factors that might contribute to their emotional state outside of the single lottery event. Some of the cognitive biases related to systematic errors in affective forecasts are focalism, hot-cold empathy gap, and impact bias. Applications While affective forecasting has traditionally drawn the most attention from economists and psychologists, their findings have in turn generated interest from a variety of other fields, including happiness research, law, and health care. Its effect on decision-making and well-being is of particular concern to policy-makers and analysts in these fields, although it also has applications in ethics. For example, one's tendency to underestimate one's ability to adapt to life-changing events has led to legal theorists questioning the assumptions behind tort damage compensation. Behavioral economists have incorporated discrepancies between forecasts and actual emotional outcomes into their models of different types of utility and welfare. This discrepancy also concerns healthcare analysts, in that many important health decisions depend upon patients' perceptions of their future quality of life. Overview Affective forecasting can be divided into four components: predictions about valence (i.e. positive or negative), the specific emotions experienced, their duration, and their intensity. 
While errors may occur in all four components, research overwhelmingly indicates that the two areas most prone to bias, usually in the form of overestimation, are duration and intensity. Immune neglect is a form of impact bias in response to negative events, in which people fail to predict how much their recovery will be hastened by their psychological immune system. The psychological immune system is a metaphor "for that system of defenses that helps you feel better when bad things happen", according to Gilbert. On average, people are fairly accurate about predicting which emotions they will feel in response to future events. However, some studies indicate that predicting specific emotions in response to more complex social events leads to greater inaccuracy. For example, one study found that while many women who imagine encountering gender harassment predict feelings of anger, in reality, a much higher proportion report feelings of fear. Other research suggests that accuracy in affective forecasting is greater for positive affect than negative affect, suggesting an overall tendency to overreact to perceived negative events. Gilbert and Wilson posit that this is a result of the psychological immune system. While affective forecasts take place in the present moment, researchers also investigate their future outcomes. That is, they analyze forecasting as a two-step process, encompassing a current prediction as well as a future event. Breaking down the present and future stages allows researchers to measure accuracy, as well as tease out how errors occur. Gilbert and Wilson, for example, categorize errors based on which component they affect and when they enter the forecasting process. In the present phase of affective forecasting, forecasters bring to mind a mental representation of the future event and predict how they will respond emotionally to it. The future phase includes the initial emotional response to the onset of the event, as well as subsequent emotional outcomes, for example, the fading of the initial feeling. When errors occur throughout the forecasting process, people are vulnerable to biases. These biases prevent people from accurately predicting their future emotions. Errors may arise due to extrinsic factors, such as framing effects, or intrinsic ones, such as cognitive biases or expectation effects. Because accuracy is often measured as the discrepancy between a forecaster's present prediction and the eventual outcome, researchers also study how time affects affective forecasting. For example, the tendency for people to represent distant events differently from close events is captured in construal level theory. The finding that people are generally inaccurate affective forecasters has been most obviously incorporated into conceptualizations of happiness and its successful pursuit, as well as decision making across disciplines. Findings in affective forecasts have stimulated philosophical and ethical debates, for example, on how to define welfare. On an applied level, findings have informed various approaches to healthcare policy, tort law, consumer decision making, and measuring utility (see below sections on economics, law, and health). Newer and conflicting evidence suggests that intensity bias in affective forecasting may not be as strong as previous research indicates. Five studies, including a meta-analysis, provide evidence that overestimation in affective forecasting is partly due to the methodology of past research.
Their results indicate that some participants misinterpreted specific questions in affective forecasting testing. For example, one study found that undergraduate students tended to overestimate experienced happiness levels when participants were asked how they were feeling in general with and without reference to the election, compared to when participants were asked how they were feeling specifically in reference to the election. Findings indicated that 75% to 81% of participants who were asked general questions misinterpreted them. After clarification of the tasks, participants were able to predict the intensity of their emotions more accurately. Major sources of errors Because the study of forecasting errors draws heavily on the literature on cognitive processes, many affective forecasting errors derive from and are often framed as cognitive biases, some of which are closely related or overlapping constructs (e.g. projection bias and empathy gap). Below is a list of commonly cited cognitive processes that contribute to forecasting errors. Major sources of error in emotion Impact bias One of the most common sources of error in affective forecasting across various populations and situations is impact bias, the tendency to overestimate the emotional impact of a future event, whether in terms of intensity or duration. The tendencies to overestimate intensity and duration are both robust and reliable errors found in affective forecasting. One study documenting impact bias examined college students participating in a housing lottery. These students predicted how happy or unhappy they would be one year after being assigned to either a desirable or an undesirable dormitory. These college students predicted that the lottery outcomes would lead to meaningful differences in their own level of happiness, but follow-up questionnaires revealed that students assigned to desirable or undesirable dormitories reported nearly the same levels of happiness. Thus, differences in forecasts overestimated the impact of the housing assignment on future happiness. Some studies specifically address "durability bias," the tendency to overestimate the length of time future emotional responses will last. Even if people accurately estimate the intensity of their future emotions, they may not be able to estimate their duration. Durability bias is generally stronger in reaction to negative events. This is important because people tend to work toward events they believe will cause lasting happiness, and according to durability bias, people might be working toward the wrong things. Similar to impact bias, durability bias causes a person to overemphasize where the root cause of their happiness lies. Impact bias is a broad term and covers a multitude of more specific errors. Proposed causes of impact bias include mechanisms like immune neglect, focalism, and misconstruals. The pervasiveness of impact bias in affective forecasts is of particular concern to healthcare specialists, in that it affects both patients' expectations of future medical events as well as patient-provider relationships. (See health.) Expectation effects Previously formed expectations can alter emotional responses to the event itself, motivating forecasters to confirm or debunk their initial forecasts. In this way, the self-fulfilling prophecy can lead to the perception that forecasters have made accurate predictions. Inaccurate forecasts can also become amplified by expectation effects.
For example, a forecaster who expects a movie to be enjoyable will, upon finding it dull, like it significantly less than a forecaster who had no expectations. Sense-making processes Major life events can have a huge impact on people's emotions for a very long time, but the intensity of that emotion tends to decrease with time, a phenomenon known as emotional evanescence. When making forecasts, forecasters often overlook this phenomenon. Psychologists have suggested that emotion does not decay over time predictably like radioactive isotopes but that the mediating factors are more complex. People have psychological processes that help dampen emotions. Psychologists have proposed that surprising, unexpected, or unlikely events cause more intense emotional reactions. Research suggests that people are unhappy with randomness and chaos and that they automatically think of ways to make sense of an event when it is surprising or unexpected. This sense-making helps individuals recover from negative events more quickly than they would have expected. It is related to immune neglect in that when these unwanted acts of randomness occur, people become upset and try to find meaning or ways to cope with the event. The way that people try to make sense of the situation can be considered a coping strategy made by the body. The idea differs from immune neglect in that sense-making is more of an in-the-moment process, whereas the coping described under immune neglect begins before the event even happens. One study documents how sense-making processes decrease emotional reactions. The study found that a small gift produced greater emotional reactions when it was not accompanied by a reason than when it was, arguably because the reason facilitated the sense-making process, dulling the emotional impact of the gift. Researchers have summarized that pleasant feelings are prolonged after a positive situation if people are uncertain about the situation. People fail to anticipate that they will make sense of events in a way that will diminish the intensity of the emotional reaction. This error is known as ordinization neglect. For example, an employee might believe, "I will be ecstatic for many years if my boss agrees to give me a raise," especially if the employee thinks the probability of a raise is low. Immediately after having the request approved, the employee may be thrilled, but with time the employee makes sense of the situation (e.g., "I am a very hard worker and my boss must have noticed this"), thus dampening the emotional reaction. Immune neglect Gilbert et al. originally coined the term immune neglect to describe a function of the psychological immune system, which is the set of processes that restore positive emotions after the experience of negative emotions. Immune neglect is people's unawareness of their tendency to adapt to and cope with negative events. Unconsciously, the body will identify a stressful event and try to cope with the event or try to avoid it. Bolger & Zuckerman found that coping strategies vary between individuals and are influenced by their personalities. They assumed that since people generally do not take their coping strategies into account when they predict future events, people with better coping strategies should have a bigger impact bias, or a greater difference between their predicted and actual outcome.
For example, asking someone who is afraid of clowns how going to a circus would feel may result in an overestimation of fear because the anticipation of such fear causes the body to begin coping with the negative event. Hoerger et al. examined this further by studying college students' emotions toward football games. They found that students who generally coped with their emotions instead of avoiding them would have a greater impact bias when predicting how they'd feel if their team lost the game. They found that those with better coping strategies recovered more quickly. Since the participants did not think about their coping strategies when making predictions, those who actually coped had a greater impact bias. Those who avoided their emotions felt very close to what they predicted they would. In other words, students who were able to deal with their emotions were able to recover from their feelings. The students were unaware that their bodies were actually coping with the stress, and this process made them feel better than not dealing with the stress would have. Hoerger ran another study on immune neglect after this, which studied both daters' and non-daters' forecasts about Valentine's Day, and how they would feel in the days that followed. Hoerger found that different coping strategies would cause people to have different emotions in the days following Valentine's Day, but participants' predicted emotions would all be similar. This shows that most people do not realize the impact that coping can have on their feelings following an emotional event. He also found that immune neglect created a bias not only for negative events but also for positive ones. This shows that people continually make inaccurate forecasts because they do not take into account their ability to cope with and overcome emotional events. Hoerger proposed that coping styles and cognitive processes are associated with actual emotional reactions to life events. A variant of immune neglect also proposed by Gilbert and Wilson is the region-beta paradox, where recovery from more intense suffering is faster than recovery from less intense experiences because of the engagement of coping systems. This complicates forecasting, leading to errors. Conversely, accurate affective forecasting can also promote the region-beta paradox. For example, Cameron and Payne conducted a series of studies in order to investigate the relationship between affective forecasting and the collapse of compassion phenomenon, which refers to the tendency for people's compassion to decrease as the number of people in need of help increases. Participants in their experiments read about either one child or a group of eight children from Darfur. These researchers found that people who are skilled at regulating their emotions tended to experience less compassion in response to stories about eight children from Darfur compared to stories about only one child. These participants appeared to collapse their compassion by correctly forecasting their future affective states and proactively avoiding the increased negative emotions resulting from the story. In order to further establish the causal role of proactive emotional regulation in this phenomenon, participants in another study read the same materials and were encouraged to either reduce or experience their emotions.
Participants instructed to reduce their emotions reported feeling less upset for eight children than for one, presumably because of the increased emotional burden and effort required for the former (an example of the region-beta paradox). These studies suggest that in some cases accurate affective forecasting can actually promote unwanted outcomes such as the collapse of compassion phenomenon by way of the region-beta paradox. Positive vs negative affect Research suggests that the accuracy of affective forecasting for positive and negative emotions is based on the distance in time of the forecast. Finkenauer, Gallucci, van Dijk, and Pollman discovered that people show greater forecasting accuracy for positive than negative affect when the event or trigger being forecast is more distant in time. Conversely, people exhibit greater affective forecasting accuracy for negative affect when the event/trigger is closer in time. The accuracy of an affective forecast is also related to how well a person predicts the intensity of his or her emotions. In regard to forecasting both positive and negative emotions, Levine, Kaplan, Lench, and Safer have recently shown that people can in fact predict the intensity of their feelings about events with a high degree of accuracy. This finding is contrary to much of the affective forecasting literature currently published, which the authors suggest is due to a procedural artifact in how these studies were conducted. Another important affective forecasting bias is fading affect bias, in which the emotions associated with unpleasant memories fade more quickly than the emotions associated with positive events. Major sources of error in cognition Focalism Focalism (or the "focusing illusion") occurs when people focus too much on certain details of an event, ignoring other factors. Research suggests that people have a tendency to exaggerate aspects of life when focusing their attention on them. A well-known example originates from a paper by Kahneman and Schkade, who coined the term "focusing illusion" in 1998. They found that although people tended to believe that someone from the Midwest would be more satisfied if they lived in California, results showed equal levels of life satisfaction in residents of both regions. In this case, concentrating on the easily observed difference in weather bore more weight in predicting satisfaction than other factors. There are many other factors that could have contributed to the desire to move to California, but the focal point of these judgments was the weather. Various studies have attempted to "defocus" participants, meaning that instead of focusing on that one factor, they tried to make the participants think of other factors or look at the situation through a different lens. There were mixed results, dependent upon the methods used. One successful study asked people to imagine how happy a winner of the lottery and a recently diagnosed HIV patient would be. The researchers were able to reduce the amount of focalism by exposing participants to detailed and mundane descriptions of each person's life: the more information the participants had about the lottery winner and the HIV patient, the less they focused on only a few factors. These participants subsequently estimated similar levels of happiness for the HIV patient and the lottery winner, whereas the control participants made unrealistically disparate predictions of happiness.
This could be due to the fact that the more information that is available, the less likely it is that one will be able to ignore contributory factors. Time discounting Time discounting (or time preference) is the tendency to weigh present events over future events. Immediate gratification is preferred to delayed gratification, especially over longer periods of time and with younger children or adolescents. For example, a child may prefer one piece of candy now to five pieces of candy in four months: the immediate reward requires no waiting at all, whereas the larger delayed reward works out to a vanishingly small amount of candy per unit of waiting time (roughly five pieces spread over about 10.5 million seconds), and the higher the rate of reward per unit of waiting time, the more attractive the option appears. This pattern is sometimes referred to as hyperbolic discounting or "present bias" because people's judgements are biased toward present events. Economists often cite time discounting as a source of mispredictions of future utility. Memory Affective forecasters often rely on memories of past events. When people report memories of past events they may leave out important details, change things that occurred, and even add things that have not happened. This suggests the mind constructs memories based not only on what actually happened but also on other factors, including the person's knowledge, experiences, and existing schemas. Using highly available but unrepresentative memories increases the impact bias. Baseball fans, for example, tend to use the best game they can remember as the basis for their affective forecast of the game they are about to see. Commuters are similarly likely to base their forecasts of how unpleasant it would feel to miss a train on their memory of the worst time they missed the train. Various studies indicate that retroactive assessments of past experiences are prone to various errors, such as duration neglect or decay bias. People tend to overemphasize the peaks and ends of their experiences when assessing them (peak/end bias), instead of analyzing the event as a whole. For example, in recalling painful experiences, people place greater emphasis on the most discomforting moments as well as the end of the event, as opposed to taking into account the overall duration. Retroactive reports often conflict with present-moment reports of events, further pointing to contradictions between the actual emotions experienced during an event and the memory of them. In addition to producing errors in forecasts about the future, this discrepancy has incited economists to redefine different types of utility and happiness (see the section on economics). Another problem that can arise with affective forecasting is that people tend to remember their past predictions inaccurately. Meyvis, Ratner, and Levav predicted that people forget how they predicted an experience would be beforehand and instead believe their predictions were the same as their actual emotions. Because of this, people do not realize that they made a mistake in their predictions, and will then continue to inaccurately forecast similar situations in the future. Meyvis et al. ran five studies to test whether or not this is true. They found in all of their studies that, when people were asked to recall their previous predictions, they instead reported how they currently felt about the situation. This shows that they do not remember how they thought they would feel, which makes it impossible for them to learn from the event for future experiences. Misconstruals When predicting future emotional states people must first construct a good representation of the event.
If people have a lot of experience with the event then they can easily picture the event. When people do not have much experience with the event they need to create a representation of what the event likely contains. For example, if people were asked how they would feel if they lost one hundred dollars in a bet, gamblers are more likely to easily construct an accurate representation of the event. "Construal level theory" theorizes that distant events are conceptualized more abstractly than immediate ones. Thus, psychologists suggest that a lack of concrete details prompts forecasters to rely on more general or idealized representations of events, which subsequently leads to simplistic and inaccurate predictions. For example, when asked to imagine what a 'good day' would be like for them in the near future, people often describe both positive and negative events. When asked to imagine what a 'good day' would be like for them in a year, however, people resort to more uniformly positive descriptions. Gilbert and Wilson call bringing to mind a flawed representation of a forecasted event the misconstrual problem. Framing effects, environmental context, and heuristics (such as schemas) can all affect how a forecaster conceptualizes a future event. For example, the way options are framed affects how they are represented: when asked to forecast future levels of happiness based on pictures of dorms they may be assigned to, college students use physical features of the actual buildings to predict their emotions. In this case, the framing of options highlighted visual aspects of future outcomes, which overshadowed more relevant factors to happiness, such as having a friendly roommate. Projection bias Overview Projection bias is the tendency to falsely project current preferences onto a future event. When people are trying to estimate their emotional state in the future they attempt to give an unbiased estimate. However, people's assessments are contaminated by their current emotional state. Thus, it may be difficult for them to predict their emotional state in the future, an occurrence known as mental contamination. For example, if a college student was currently in a negative mood because he just found out he failed a test, and if the college student forecasted how much he would enjoy a party two weeks later, his current negative mood may influence his forecast. In order to make an accurate forecast the student would need to be aware that his forecast is biased due to mental contamination, be motivated to correct the bias, and be able to correct the bias in the right direction and magnitude. Projection bias can arise from empathy gaps (or hot/cold empathy gaps), which occur when the present and future phases of affective forecasting are characterized by different states of physiological arousal, which the forecaster fails to take into account. For example, forecasters in a state of hunger are likely to overestimate how much they will want to eat later, overlooking the effect of their hunger on future preferences. As with projection bias, economists use the visceral motivations that produce empathy gaps to help explain impulsive or self-destructive behaviors, such as smoking. An important affective forecasting bias related to projection bias is personality neglect. Personality neglect refers to a person's tendency to overlook their personality when making decisions about their future emotions. 
In a study conducted by Quoidbach and Dunn, students' predictions of their feelings about future exam scores were used to measure affective forecasting errors related to personality. They found that college students who predicted their future emotions about their exam scores were unable to relate these emotions to their own dispositional happiness. To further investigate personality neglect, Quoidbach and Dunn studied happiness in relation to neuroticism. People predicted their future feelings about the outcome of the 2008 US presidential election between Barack Obama and John McCain. Neuroticism was correlated with impact bias, which is the overestimation of the length and intensity of emotions. People who rated themselves as higher in neuroticism overestimated their happiness in response to the election of their preferred candidate, suggesting that they failed to relate their dispositional happiness to their future emotional state. The term "projection bias" was first introduced in the 2003 paper "Projection Bias in Predicting Future Utility" by Loewenstein, O'Donoghue and Rabin. Market applications of projection bias The novelty of new products oftentimes overexcites consumers and results in the negative consumption externality of impulse buying. To counteract this, George Loewenstein recommends offering "cooling off" periods for consumers. During this period, they would have a few days to reflect on their purchase and appropriately develop a longer-term understanding of the utility they receive from it. This cooling-off period could also benefit the production side by diminishing the need for a salesperson to "hype" certain products. Transparency between consumers and producers would increase as "sellers will have an incentive to put buyers in a long-run average mood rather than an overenthusiastic state". By implementing Loewenstein's recommendation, firms that understand projection bias should minimize information asymmetry; this would diminish the negative consumer externality that comes from purchasing an undesirable good and relieve sellers of the extraneous costs required to exaggerate the utility of their product. Life-cycle consumption Projection bias influences the life cycle of consumption. The immediate utility obtained from consuming particular goods exceeds the utility of future consumption. Consequently, projection bias causes "a person to (plan to) consume too much early in life and too little late in life relative to what would be optimal". Graph 1 displays decreasing expenditures as a percentage of total income from ages 20 to 54. The subsequent period, in which income begins to decline, can be explained by retirement. According to Loewenstein's recommendation, a more optimal expenditure and income distribution is displayed in Graph 2. Here, income is left the same as in Graph 1, but expenditures are recalculated by taking the average percentage of expenditures in terms of income from ages 25 to 54 (77.7%) and multiplying this percentage by income to arrive at a theoretical expenditure. The calculation is only applied to this age group because of unpredictable income before 25 and after 54 due to school and retirement. Food waste When buying food, people often wrongly project what they will want to eat in the future, which results in food waste. Major sources of error in motivation Motivated reasoning Generally, affect is a potent source of motivation. People are more likely to pursue experiences and achievements that will bring them more pleasure than less pleasure.
In some cases, affective forecasting errors appear to be due to forecasters' strategic use of their forecasts as a means to motivate them to obtain or avoid the forecasted experience. Students, for example, might predict they would be devastated if they failed a test as a way to motivate them to study harder for it. The role of motivated reasoning in affective forecasting has been demonstrated in studies by Morewedge and Buechel (2013). Research participants were more likely to overestimate how happy they would be if they won a prize, or achieved a goal, if they made an affective forecast while they could still influence whether or not they achieved it than if they made an affective forecast after the outcome had been determined (while still in the dark about whether they knew if they won the prize or achieved the goal). In economics Economists share psychologists' interests in affective forecasting insomuch as it affects the closely related concepts of utility, decision making, and happiness. Utility Research in affective forecasting errors complicates conventional interpretations of utility maximization, which presuppose that to make rational decisions, people must be able to make accurate forecasts about future experiences or utility. Whereas economics formerly focused largely on utility in terms of a person's preferences (decision utility), the realization that forecasts are often inaccurate suggests that measuring preferences at a time of choice may be an incomplete concept of utility. Thus, economists such as Daniel Kahneman, have incorporated differences between affective forecasts and later outcomes into corresponding types of utility. Whereas a current forecast reflects expected or predicted utility, the actual outcome of the event reflects experienced utility. Predicted utility is the "weighted average of all possible outcomes under certain circumstances." Experienced utility refers to the perceptions of pleasure and pain associated with an outcome. Kahneman and Thaler provide an example of "the hungry shopper," in which case the shopper takes pleasure in the purchase of food due to their current state of hunger. The usefulness of such purchasing is based on their current experience and their anticipated pleasure in fulfilling their hunger. Decision making Affective forecasting is an important component of studying human decision making. Research in affective forecasts and economic decision making include investigations of durability bias in consumers and predictions of public transit satisfaction. In relevance to the durability bias in consumers, a study was conducted by Wood and Bettman, that showed that people make decisions regarding the consumption of goods based on the predicted pleasure, and the duration of that pleasure, that the goods will bring them. Overestimation of such pleasure, and its duration, increases the likelihood that the good will be consumed. Knowledge on such an effect can aid in the formation of marketing strategies of consumer goods. Studies regarding the predictions of public transit satisfaction reveal the same bias. However, with a negative impact on consumption, due to their lack of experience with public transportation, car users predict that they will receive less satisfaction with the use of public transportation than they actually experience. This can lead them to refrain from the use of such services, due to inaccurate forecasting. Broadly, the tendencies people have to make biased forecasts deviate from rational models of decision making. 
Rational models of decision making presume an absence of bias, in favor of making comparisons based on all relevant and available information. Affective forecasting may cause consumers to rely on the feelings associated with consumption rather than the utility of the good itself. One application of affective forecasting research is in economic policy. The knowledge that forecasts, and therefore, decisions, are affected by biases as well as other factors (such as framing effects), can be used to design policies that maximize the utility of people's choices. This approach is not without its critics, however, as it can also be seen to justify economic paternalism. Prospect theory describes how people make decisions. It differs from expected utility theory in that it takes into account the relativity of how people view utility and incorporates loss aversion, or the tendency to react more strongly to losses rather than gains. Some researchers suggest that loss aversion is in itself an affective forecasting error since people often overestimate the impact of future losses. Happiness and well-being Economic definitions of happiness are tied to concepts of welfare and utility, and researchers are often interested in how to increase levels of happiness in the population. The economy has a major influence on the aid that is provided through welfare programs because it provides funding for such programs. Many welfare programs are focused on providing assistance with the attainment of basic necessities such as food and shelter. This may be due to the fact that happiness and well-being are best derived from personal perceptions of one's ability to provide these necessities. This statement is supported by research that states after basic needs have been met, income has less of an impact on perceptions of happiness. Additionally, the availability of such welfare programs can enable those that are less fortunate to have additional discretionary income. Discretionary income can be dedicated to enjoyable experiences, such as family outings, and in turn, provides an additional dimension to their feelings and experience of happiness. Affective forecasting provides a unique challenge to answering the question regarding the best method for increasing levels of happiness, and economists are split between offering more choices to maximize happiness, versus offering experiences that contain more objective or experienced utility. Experienced utility refers to how useful an experience is in its contribution to feelings of happiness and well-being. Experienced utility can refer to both material purchases and experiential purchases. Studies show that experiential purchases, such as a bag of chips, result in forecasts of higher levels of happiness than material purchases, such as the purchase of a pen. This prediction of happiness as a result of a purchase experience exemplifies affective forecasting. It is possible that an increase in choices, or means, of achieving desired levels of happiness will be predictive of increased levels of happiness. For example, if one is happy with their ability to provide themselves with both a choice of necessities and a choice of enjoyable experiences they are more likely to predict that they will be happier than if they were forced to choose between one or the other. Also, when people are able to reference multiple experiences that contribute to their feelings of happiness, more opportunities for comparison will lead to a forecast of more happiness. 
Under these circumstances, both the number of choices and the quantity of experienced utility have the same effect on affective forecasting, which makes it difficult to choose a side of the debate on which method is most effective in maximizing happiness. Applying findings from affective forecasting research to happiness also raises methodological issues: should happiness measure the outcome of an experience or the satisfaction experienced as a result of the choice made based upon a forecast? For example, although professors may forecast that getting tenure would significantly increase their happiness, research suggests that in reality, happiness levels between professors who are or are not awarded tenure are insignificant. In this case happiness is measured in terms of the outcome of an experience. Affective forecasting conflicts such as this one have also influenced theories of hedonic adaptation, which compares happiness to a treadmill, in that it remains relatively stable despite forecasts. In law Similar to how some economists have drawn attention to how affective forecasting violates assumptions of rationality, legal theorists point out that inaccuracies in, and applications of, these forecasts have implications in law that have remained overlooked. The application of affective forecasting, and its related research, to legal theory reflects a wider effort to address how emotions affect the legal system. In addition to influencing legal discourse on emotions, and welfare, Jeremy Blumenthal cites additional implications of affective forecasting in tort damages, capital sentencing and sexual harassment. Tort damages Jury awards for tort damages are based on compensating victims for pain, suffering, and loss of quality of life. However, findings in affective forecasting errors have prompted some to suggest that juries are overcompensating victims since their forecasts overestimate the negative impact of damages on the victims' lives. Some scholars suggest implementing jury education to attenuate potentially inaccurate predictions, drawing upon research that investigates how to decrease inaccurate affective forecasts. Capital sentencing During the process of capital sentencing, juries are allowed to hear victim impact statements (VIS) from the victim's family. This demonstrates affective forecasting in that its purpose is to present how the victim's family has been impacted emotionally and, or, how they expect to be impacted in the future. These statements can cause juries to overestimate the emotional harm, causing harsh sentencing, or underestimate harm, resulting in inadequate sentencing. The time frame in which these statements are present also influences affective forecasting. By increasing the time gap between the crime itself and sentencing (the time at which victim impact statements are given), forecasts are more likely to be influenced by the error of immune neglect (See Immune neglect) Immune neglect is likely to lead to underestimation of future emotional harm, and therefore results in inadequate sentencing. As with tort damages, jury education is a proposed method for alleviating the negative effects of forecasting error. Sexual harassment In cases involving sexual harassment, judgements are more likely to blame the victim for their failure to react in a timely fashion or their failure to make use of services that were available to them in the event of sexual harassment. 
This is because prior to the actual experience of harassment, people tend to overestimate their affective reactions as well as their proactive reactions in response to sexual harassment. This exemplifies the focalism error (See Focalism) in which forecasters ignore alternative factors that may influence one's reaction, or failure to react. For example, in their study, Woodzicka and LaFrance studied women's predictions of how they would react to sexual harassment during an interview. Forecasters overestimated their affective reactions of anger, while underestimating the level of fear they would experience. They also overestimated their proactive reactions. In Study 1, participants reported that they would refuse to answer questions of a sexual nature and, or, report the question to the interviewer's supervisor. However, in Study 2, of those who had actually experienced sexual harassment during an interview, none of them displayed either proactive reaction. If juries are able to recognize such errors in forecasting, they may be able to adjust such errors. Additionally, if juries are educated on other factors that may influence the reactions of those who are victims of sexual harassment, such as intimidation, they are more likely to make more accurate forecasts, and less likely to blame victims for their own victimization. In health Affective forecasting has implications in health decision making and medical ethics and policy. Research in health-related affective forecasting suggests that nonpatients consistently underestimate the quality of life associated with chronic health conditions and disability. The so-called "disability paradox" states the discrepancy between self-reported levels of happiness amongst chronically ill people versus the predictions of their happiness levels by healthy people. The implications of this forecasting error in medical decision making can be severe, because judgments about future quality of life often inform health decisions. Inaccurate forecasts can lead patients, or more commonly their health care agent, to refuse life-saving treatment in cases when the treatment would involve a drastic change in lifestyle, for example, the amputation of a leg. A patient, or health care agent, who falls victim to focalism would fail to take into account all the aspects of life that would remain the same after losing a limb. Although Halpern and Arnold suggest interventions to foster awareness of forecasting errors and improve medical decision making amongst patients, the lack of direct research in the impact of biases in medical decisions provides a significant challenge. Research also indicates that affective forecasts about future quality of life are influenced by the forecaster's current state of health. Whereas healthy individuals associate future low health with low quality of life, less healthy individuals do not forecast necessarily low quality of life when imagining having poorer health. Thus, patient forecasts and preferences about their own quality of life may conflict with public notions. Because a primary goal of healthcare is maximizing quality of life, knowledge about patients' forecasts can potentially inform policy on how resources are allocated. Some doctors suggest that research findings in affective forecasting errors merit medical paternalism. 
Others argue that although biases exist and should support changes in doctor-patient communication, they do not unilaterally diminish decision-making capacity and should not be used to endorse paternalistic policies. This debate captures the tension between medicine's emphasis on protecting the autonomy of the patient and an approach that favors intervention in order to correct biases. Improving forecasts Individuals who recently have experienced an emotionally charged life event will display the impact bias. The individual predicts they will feel happier than they actually feel about the event. Another factor that influences overestimation is focalism which causes individuals to concentrate on the current event. Individuals often fail to realize that other events will also influence how they currently feel. Lam et al. (2005) found that the perspective that individuals take influences their susceptibility to biases when making predictions about their feelings. A perspective that overrides impact bias is mindfulness. Mindfulness is a skill that individuals can learn to help them prevent overestimating their feelings. Being mindful helps the individual understand that they may currently feel negative emotions, but the feelings are not permanent. The Five Factor Mindfulness Questionnaire (FFMQ) can be used to measure an individual's mindfulness. The five factors of mindfulness are observing, describing, acting with awareness, non-judging of inner experience, and non-reactivity to inner experience. The two most important factors for improving forecasts are observing and acting with awareness. The observing factor assesses how often an individual attends to their sensations, emotions, and outside environment. The ability to observe allows the individual to avoid focusing on one single event, and be aware that other experiences will influence their current emotions. Acting with awareness requires assessing how individuals tend to current activities with careful consideration and concentration. Emanuel, Updegraff, Kalmbach, and Ciesla (2010) stated that the ability to act with awareness reduces the impact bias because the individual is more aware that other events co-occur with the present event. Being able to observe the current event can help individuals focus on pursuing future events that provide long-term satisfaction and fulfillment. See also Happiness economics List of cognitive biases List of memory biases Prospect theory Welfare economics References Further reading On the projection bias External links Daniel Gilbert "Why are we happy?" (video lecture), TED.com, Retrieved 2009-08-29 Psychlopedia on Affective Forecasting Daniel Gilbert, video interview Affective forecasting on Psychology Today Psychological methodology Behavioral economics
Affective forecasting
Biology
8,943
23,853,763
https://en.wikipedia.org/wiki/Vacuum%20truck
A vacuum truck, vacuum tanker, vactor truck, vactor, vac-con truck, vac-con is a tank truck that has a pump and a tank. The pump is designed to pneumatically suck liquids, sludges, slurries, or the like from a location (often underground) into the tank of the truck. The objective is to enable transport of the liquid material via road to another location. Vacuum trucks transport the collected material to a treatment or disposal site, for example a sewage treatment plant. A common material to be transported is septage (or more broadly: fecal sludge) which is human excreta mixed with water, e.g. from septic tanks and pit latrines. They also transport sewage sludge, industrial liquids, or slurries from animal waste from livestock facilities with pens. Vacuum trucks can also be used to prepare a site for installation or to access underground utilities. These trucks may use compressed air or water to break up the ground safely, without risk of damage, before installation may begin. Vacuum trucks can be equipped with a high pressure pump if they are used to clean out sand from sewers. Other names used Other names used for vacuum trucks: vacuum tanker, "Sucker truck" or vac-trucks (in Australia) or "Sewer Sucker", "Hydro-vac", or "vac-trucks" (in Canada) or "Exhauster truck" (in Rwanda, Malawi & Kenya). Slang terms include: "honey truck", "honey sucker" (in India and South Africa), and "honeywagon", all (probably) derived from honey bucket. When a vacuum truck is used to transport fecal sludge then it can also be called "fecal sludge truck". Design and configurations Commercial vacuum trucks which collect fecal sludge usually have a volume of . However various smaller versions for specialized applications or low-resource settings can be found with tanks as small as . Pumps They generally use a low-volume sliding vane pump or a liquid ring pump to create a negative air pressure. The use of diaphragm mud pumps is less common, but with the advantage of a simpler design and usually lower overall costs. The disadvantage is that mechanical parts come into contact with the sludge, which is not the case for the more common vacuum pumps. The truck can be configured to be a direct belt drive, or a hydraulic drive system. There are two different ways to mount the pump: either directly on the truck with the vacuum drive powered by the truck motor, or on the trailer with an independent motor. The second option with the independent motor is more complicated and not commonly used. It has the advantage of potentially having the pump closer to the septic tank. It is also able to use the negative pressure suction side of the pump as well as the positive pressure side to pump sludge over longer distances or lift it higher into the tank. Suction hoses The suction hoses are typically 2" - 4" (or 50mm to 100mm) in diameter with 3" (or 75mm) being the norm. The possible length depends on various factors mainly related to the lift and other pressure losses. It is usually impossible to extend it beyond . An inherent suction limitation of all suction pumps is that they can only lift a liquid through utilizing atmospheric pressure. For pure water the theoretical maximum lift is approximately . However, due to the viscosity of fecal sludge it is possible to mix air into it either by sucking close from the surface or by adding air with a compressor through a separate hose. 
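As a rough illustration of the suction limit just described (a sketch only: the pressures, densities, and pump vacuum fraction below are assumed values, not figures from this article or any manufacturer), the maximum static lift follows from atmospheric pressure divided by the weight density of the pumped liquid:

```python
# Rough sketch of the static suction-lift limit, h = P_atm / (rho * g).
# All values here are illustrative assumptions, not manufacturer data.

P_ATM = 101_325.0   # Pa, standard atmospheric pressure (assumed sea level)
G = 9.81            # m/s^2

def max_suction_lift(density_kg_m3: float, vacuum_fraction: float = 1.0) -> float:
    """Theoretical lift height for a liquid of the given density when the
    pump can pull the stated fraction of a perfect vacuum."""
    return vacuum_fraction * P_ATM / (density_kg_m3 * G)

if __name__ == "__main__":
    water = 1000.0           # kg/m^3
    aerated_sludge = 800.0   # kg/m^3, assumed density of a sludge/air mixture
    print(f"Pure water, perfect vacuum : {max_suction_lift(water):.1f} m")
    print(f"Pure water, 85% vacuum     : {max_suction_lift(water, 0.85):.1f} m")
    print(f"Aerated sludge, 85% vacuum : {max_suction_lift(aerated_sludge, 0.85):.1f} m")
```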
Through this process, the overall density of the sludge/air mixture can be reduced below that of pure water and thus a higher lift () can be reached under optimal conditions. Other factors affecting the possible lift and total length of the suction hose are that single stage vacuum pumps only reach an 85-90% partial vacuum, and that small air leakages, pipe friction losses, and the viscosity of the liquid further reduce the possible lift. Emptying the tanker Normally a tanker is emptied by gravity. It is possible to pressurize the vacuum tank to "pressure out" the liquid quicker (or against a small difference in elevation). This procedure is detrimental for the equipment, hence is used in special situations. The regular discharge time for a tanker of is about 15 minutes (or 7–10 minutes to unload a tanker of ). The outlet is typically in diameter. The discharge time depends on the thickness of the sludge, the size of the outlet valve and hose, the amount of garbage in the fecal sludge, and the frequency of driver cleaning the dump screen. Uses Vacuum trucks are used by town and municipal governments, as well as commercial entities around the world. Human excreta Several types of non-centralized sanitation systems are served by vacuum trucks. They are used to empty septage from cesspits, septic tanks, pit latrines, and communal latrines, for street cleanup, for sewer clean out, and for individual septic systems. The trucks are used in the cleaning of sanitary sewer pumping stations. Vacuum trucks are used to empty portable toilets. In commercial aviation, vacuum trucks are used to collect waste from airplane toilets. Vacuum trucks discharge these wastes to the sewer network, to a wastewater treatment plant, or—usually illegally, for example in many developing countries— directly into the environment. The latter practice, called "institutionalised open defecation", is dangerous since it constitutes a public health and environmental hazard. Industrial liquids Vacuum trucks are used in the petroleum industry, for cleaning of storage tanks and spills. They are also an important part of drilling oil and natural gas wells, as they are located at the drilling site. Vacuum trucks are used to remove drilling mud, drilling cuttings, cement, spills, and for removal of brine water from production tanks. They dispose of this in sump pits, treatment plants or if within safe levels may be spread in farm fields. Others Vacuum trucks are also used for exposing underground utilities. Before installing many pieces of underground equipment, the ground must be excavated far enough down to create a solid foundation for the structure to be placed on. Underground utilities can include lamp poles, traffic lights, road signs, and even commercial grade trees for landscaping. To prepare the ground for installation it is jetted with water, and the vacuum truck sucks up the muddy product. This exposes the buried utility without the possibility of damage, as would be possible if a digging machine were used (i.e. tractor backhoe, tracked or wheeled excavator, ditch witches). Vacuum trucks can also be used for cleanup of contaminated soil. For some instances, air excavation may be used in place of hydro excavation. Air excavation, also known as soft dig, uses compressed air to break up the ground and then vacuums up the soil into the debris tank. Air excavation is often used for locating underground electrical cables and gas lines. 
See also Gong farmer Gully emptier Suction excavator References External links Factsheet on motorised emptying and transport Waste collection vehicles Waste management in India Industrial equipment Cleaning tools Trucks Sewerage infrastructure Sanitation
Vacuum truck
Chemistry,Engineering
1,478
35,065,185
https://en.wikipedia.org/wiki/Johansson%20Mikrokator
A Johansson Mikrokator (also called Abramson's movement) is a mechanical comparator used to obtain a large mechanical magnification of small differences in length relative to a standard. It works on the principle of a button spinning on a loop of string. A twisted thin metal strip holds a pointer, which shows the reading on a suitable scale. Since there is no friction involved in the transfer of movement from the strip to the pointer, the instrument is free from backlash. It was reportedly designed by Hugo Abramson in 1938. Construction A metallic strip is twisted and fixed between two ends. Any longitudinal movement (in either direction) will cause the central portion of the strip to rotate. One end of the strip is fixed to an adjustable cantilever and the other end is fixed to the spring elbow. The spring elbow, in turn, is connected to a plunger, which moves upwards or downwards. The spring elbow, which consists of flexible strips and a stiff diagonal, acts as a bell-crank lever and causes the twisted strip to change length whenever the plunger moves. This change in length results in a proportional twist of the metallic strip. The magnification can be varied by changing the length of the spring elbow. Operation The instrument is initially calibrated against the standard, and the zero is set to this value. Then the test specimens are placed on the measuring table and slid below the plunger of the instrument. Any difference in the measured dimension of a specimen will result in either the lowering or rising of the plunger. The lowering or rising of the plunger causes the bell-crank lever to move in a forward or backward direction and, in turn, twists or untwists the metallic strip. The centre line of the strip is perforated in order to prevent excessive stress. References External links Mechanical amplifiers
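As a hedged, back-of-the-envelope sketch of the magnification principle described above — the spring-elbow ratio, strip twist sensitivity, and pointer length used here are illustrative assumptions, not Johansson design figures — the overall magnification can be treated as the product of three stages:

```python
import math

# Illustrative mikrokator magnification sketch (assumed numbers): overall
# magnification = spring-elbow (bell-crank) ratio x strip twist per unit
# elongation x pointer length projected onto the scale.

ELBOW_RATIO = 0.5                     # strip elongation per unit plunger travel (assumed)
TWIST_SENSITIVITY_DEG_PER_UM = 2.0    # strip rotation per micrometre of elongation (assumed)
POINTER_LENGTH_MM = 30.0              # pointer length (assumed)

def scale_deflection_mm(plunger_travel_um: float) -> float:
    """Pointer-tip movement along the scale for a given plunger displacement."""
    elongation_um = ELBOW_RATIO * plunger_travel_um
    twist_rad = math.radians(TWIST_SENSITIVITY_DEG_PER_UM * elongation_um)
    # For small angles the tip moves approximately pointer_length * angle.
    return POINTER_LENGTH_MM * twist_rad

if __name__ == "__main__":
    travel_um = 1.0  # 1 micrometre difference from the standard
    deflection = scale_deflection_mm(travel_um)
    magnification = (deflection * 1000.0) / travel_um  # both in micrometres
    print(f"Scale deflection: {deflection:.3f} mm -> magnification ~ {magnification:.0f}x")
```

The point of the sketch is that the twisted strip converts a tiny elongation into a comparatively large rotation without any gear train, which is why the instrument is free from friction and backlash.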
Johansson Mikrokator
Technology
380
4,371,746
https://en.wikipedia.org/wiki/Pentane%20%28data%20page%29
This page provides supplementary chemical data on n-pentane. Material Safety Data Sheet The handling of this chemical may require notable safety precautions. It is highly recommended that you consult the Material Safety Data Sheet (MSDS) for this chemical from a reliable source such as eChemPortal, and follow its directions. Mallinckrodt Baker. Structure and properties Thermodynamic properties Vapor pressure of liquid Table data obtained from CRC Handbook of Chemistry and Physics 47th ed. Spectral data References Chemical data pages Chemical data pages cleanup
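The vapor pressure table referenced above is not reproduced here; as a hedged sketch, it can be approximated with the Antoine equation. The constants below are representative literature values assumed for this sketch and should be checked against the cited CRC Handbook table before use:

```python
# Sketch: estimating n-pentane vapor pressure with the Antoine equation,
# log10(P / mmHg) = A - B / (C + T / degC).
# The constants are representative literature values (an assumption of this
# sketch); verify them against the CRC Handbook table cited above.
A, B, C = 6.85221, 1064.63, 232.0   # assumed valid roughly from -50 to 58 degC

def vapor_pressure_mmhg(temp_c: float) -> float:
    """Antoine-equation estimate of n-pentane vapor pressure in mmHg."""
    return 10 ** (A - B / (C + temp_c))

if __name__ == "__main__":
    for t in (0.0, 20.0, 36.1):   # 36.1 degC is the normal boiling point
        print(f"{t:5.1f} degC : {vapor_pressure_mmhg(t):7.1f} mmHg")
```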
Pentane (data page)
Chemistry
111
154,171
https://en.wikipedia.org/wiki/List%20of%20heritage%20railways
This list of heritage railways includes heritage railways sorted by country, state, or region. A heritage railway is a preserved or tourist railroad which is run as a tourist attraction, is usually but not always run by volunteers, and often seeks to re-create railway scenes of the past. Europe Austria Ampflwanger Bahn (Timelkam — Ampflwang) Bockerlbahn Bürmoos (narrow gauge, original tracks removed) Bregenzerwaldbahn (narrow gauge, remaining section Bezau — Schwarzenberg) Erzbergbahn (section Vordernberg — Eisenerz) Feistritztalbahn (narrow gauge, Weiz — Birkfeld) Gurktalbahn (narrow gauge, remaining section Treibach-Althofen — Pöckstein-Zwischenwässern) Höllental Railway (Lower Austria) (narrow gauge, Payerbach-Reichenau — Hirschwang an der Rax) Landesbahn Feldbach — Bad Gleichenberg (some regular service remains) Lavamünder Bahn (Lavamünd — St. Paul im Lavanttal, track removed) Lokalbahn Ebelsberg — St. Florian (narrow gauge, some sections removed) Lokalbahn Korneuburg — Hohenau (some sections) Lokalbahn Retz — Drosendorf Lokalbahn Weizelsdorf — Ferlach Internationale Rheinregulierungsbahn (narrow gauge, remaining section: Austrian side of mouth of river Rhine into lake Constance — Lustenau depot — Kadelberg quarry via Swiss side of Rhine) Rosen Valley Railway (section Weizelsdorf — Rosenbach) Stainzer Flascherlzug (narrow gauge, Stainz — Preding) Steyr Valley Railway (narrow gauge, remaining section Steyr — Grünburg) Taurachbahn (narrow gauge, section Tamsweg — Mauterndorf im Lungau of the Murtalbahn) Thörlerbahn (narrow gauge, originally Kapfenberg — Turnau, track removed) Wachauer Bahn (section Krems — Emmersdorf an der Donau of the Donauuferbahn) Waldviertler Schmalspurbahnen (narrow gauge, Gmünd — Groß Gerungs and Gmünd — Litschau / — Heidenreichstein) Ybbs Valley Railway (narrow gauge, remaining section Göstling — Lunz am See — Kienberg-Gaming) Belgium Chemin de Fer à vapeur des Trois Vallées Chemin de Fer du Bocq Dendermonde-Puurs Steam Railway Stoomcentrum Maldegem ASVi museum Vennbahn Closed in 2001 Bosnia and Herzegovina Sarajevo-Višegrad Railway (section from Višegrad to Vardište) Czech Republic Lužná u Rakovníka - Kolešovice Railway Tovačovka (Kroměříž) Kojetín - Tovačov) Zubrnická museální železnice ((Ústí nad Labem Střekov) - Velké Březno - Zubrnice) Břeclav - Lednice Hvozdnický Expres (Opava Východ - Svobodné Heřmanice) Bruntál - Malá Morávka Sklářská lokálka Šenovka (Česká Kamenice - Kamenický Šenov) Kozí dráha (Děčín - Telnice) Denmark Source: H Heritage rail operator | N Narrow gauge railway | S Standard gauge railway DJK: Dansk Jernbane-Klub. 
Several heritage railways and operators are members of DJK Finland Jokioinen Museum Railway Kovjoki Museum Railway Porvoo Museum Railway France Chemin de Fer de la Baie de Somme Chemin de fer Touristique d'Anse Froissy Dompierre Light Railway Tarn Light Railway Germany Greece Diakofto–Kalavryta Railway Pelion railway Treno sto Rouf Railway Carriage Theater Hungary Children's Railway, Gyermekvasút Italy Bernina Railway, in the Rhaetian Railway between Italy and Switzerland; inscribed in the World Heritage List of UNESCO Valmorea railway Ceva–Ormea railway Sassari–Tempio-Palau railway Asciano–Monte Antico railway Novara–Varallo railway Latvia Gulbene-Alūksne railway Ventspils narrow-gauge railway Luxembourg Train 1900 Netherlands Corus Stoom IJmuiden Efteling Steam Train Company Museum Buurtspoorweg Steamtrain Hoorn Medemblik Stichting Stadskanaal Rail Stichting voorheen RTM Stoom Stichting Nederland Stoomtrein Goes - Borsele Stoomtrein Valkenburgse Meer Veluwse Stoomtrein Maatschappij Zuid-Limburgse Stoomtrein Maatschappij Norway Old Voss Line Krøderen Line Nesttun–Os Line Norwegian Railway Museum in Hamar Rjukan Line Setesdal Line Thamshavn Line Urskog–Høland Line Valdres Line Poland Bieszczady Forest Railway Narrow Gauge Railway Museum in Sochaczew Narrow Gauge Railway Museum in Wenecja Seaside Narrow Gauge Railway Wigry Forest Railway Portugal Barca d'Alva–La Fuente de San Esteban railway Corgo line Linha do Douro Sabor line Tâmega line Linha do Tua National Railway Museum (Portugal) Narrow-gauge railways in Portugal Monte Railway (Funchal, Madeira) Republic of Ireland Romania Mocăniţa from Vasser Valley, Maramureş (CFF Vişeu de Sus) Sibiu to Agnita narrow-gauge line in Hârtibaciu Valley San Marino Ferrovia Rimini–San Marino (In 2012, 800 meters of the track was reconstructed and opened to service at the San Marino terminal station with the original train as a tourist attraction.) 
Serbia Šargan Eight Slovakia Čierny Hron Railway The Historical Logging Switchback Railway in Vychylovka, Kysuce near Nová Bystrica (Historická lesná úvraťová železnica) Spain Basque Railway Museum (steam railway tours) Gijón Railway Museum Philip II Train, service between Madrid and El Escorial Railway Museum in Vilanova (close to Barcelona) Strawberry train, seasonal service between Madrid and Aranjuez Tramvia Blau, Barcelona Tren dels Llacs, seasonal service between Lleida and La Pobla de Segur Sweden Anten-Gräfsnäs Järnväg – narrow gauge, near Gothenburg Association of Narrow Gauge Railways Växjö-Västervik – narrow gauge (includes a section of mixed gauge track into Västervik) Böda Skogsjärnväg – narrow gauge, Öland Dal-Västra Värmlands Järnväg – standard gauge, Värmland Djurgården Line (tramway) – Stockholm Engelsberg-Norbergs Railway – standard gauge, Västmanland Gotlands Hesselby Jernväg – narrow gauge, Gotland Jädraås-Tallås Järnväg – narrow gauge, Gästrikland Ohsabanan – narrow gauge, Jönköping Risten–Lakvik Museum Railway – narrow gauge, Östergötland Skara – Lundsbrunns Järnvägar – narrow gauge, Västra Götaland County Skånska Järnvägar – standard gauge, Skåne Smalspårsjärnvägen Hultsfred-Västervik – narrow gauge, Småland Upsala-Lenna Jernväg – narrow gauge, Upsala County Östra Södermanlands Järnväg – narrow gauge, Södermanland Switzerland Blonay-Chamby Museum Railway Brienz Rothorn Bahn Dampfbahn-Verein Zürcher Oberland Etzwilen–Singen railway Furka Cogwheel Steam Railway Furka Oberalp Railway Pilatus Railway Rigi Railways Schynige Platte Railway Zürcher Museums-Bahn La Traction Sursee–Triengen Railway Schinznacher Baumschulbahn United Kingdom and Crown dependencies England Scotland Wales Northern Ireland Isle of Man Channel Islands Alderney Railway Pallot Heritage Steam Museum North America Canada United States Abilene and Smoky Valley Railroad Adirondack Railroad Agrirama Logging Train Arcade and Attica Railroad Azalea Sprinter Big South Fork Scenic Railway Black Hills Central Railroad Black River and Western Railroad Boone and Scenic Valley Railroad Branson Scenic Railway Bluegrass Railroad and Museum Blue Ridge Scenic Railway Belvidere and Delaware Railroad (AKA Delaware River Railroad) California Western Railroad (AKA, The Skunk Train) Cass Scenic Railroad State Park Conway Scenic Railroad Cumbres and Toltec Scenic Railroad Cuyahoga Valley Scenic Railroad Chehalis–Centralia Railroad Chelatchie Prairie Railroad Cripple Creek and Victor Narrow Gauge Railroad Durango and Silverton Narrow Gauge Railroad Durbin and Greenbrier Valley Railroad Dollywood Express Eureka Springs and North Arkansas Railway East Broad Top Railroad and Coal Company Everett Railroad Gold Coast Railroad Museum Georgia Coastal Railway Georgia State Railroad Museum Georgetown Loop Railroad Grand Canyon Railway Great Smoky Mountains Railroad Grapevine Vintage Railroad Heber Valley Railroad Hesston Steam Museum Hocking Valley Scenic Railway Huckleberry Railroad Illinois Railway Museum Indiana Transportation Museum Kirby Family Farm Train Kentucky Railway Museum Kettle Moraine Scenic Railroad Lumberjack Steam Train Little River Railroad (Michigan) Mount Rainier Railroad and Logging Museum Mount Washington Cog Railway Mid-Continent Railway Museum Monticello Railway Museum Midwest Central Railroad My Old Kentucky Dinner Train Nevada Northern Railway Museum New Hope Railroad North Shore Scenic Railroad Nickel Plate Express Niles Canyon Railway Oregon Coast Scenic Railroad Oil Creek and Titusville Railroad Reading 
Blue Mountain and Northern Railroad Rio Grande Scenic Railroad Roaring Camp Railtown 1897 State Historic Park SAM Shortline Excursion Train Silverwood Theme Park Steamtown National Historic Site Serengeti Express Southeastern Railway Museum Stone Mountain Scenic Railroad Strasburg Rail Road Sumpter Valley Railway South Central Florida Express, Inc. (AKA, Sugar Express) Tallulah Falls Railroad Museum Tennessee Valley Railroad Museum Tweetsie Railroad Texas State Railroad Three Rivers Rambler TECO Line Streetcar Tavares, Eustis & Gulf Railroad Virginia and Truckee Railroad Valley Railroad Company (AKA, The Essex Steam Train) Western Maryland Scenic Railroad White Pass and Yukon Route Wilmington and Western Railroad Wiscasset, Waterville and Farmington Railway Walt Disney World Railroad Wanamaker, Kempton and Southern Railroad Wildlife Express Train Whippany Railway Museum White Mountain Central Railroad Whitewater Valley Railroad Yosemite Mountain Sugar Pine Railroad Mexico Chihuahua al Pacífico (Copper Canyon) Ferrocarril Interoceanico Tequila Express Barbados St. Nicholas Abbey Heritage Railway St. Kitts St. Kitts Scenic Railway (Over historic tracks) South America Argentina Capilla del Señor Historic Train, in Buenos Aires Province Old Patagonian Express, Patagonia Train at the End of the World in Tierra del Fuego, Tierra del Fuego Tren a las Nubes, Salta Tren Histórico de Bariloche, Patagonia (British-built 1912, 4-6-0 steam locomotive to Perito Moreno glacier) Villa Elisa Historic Train in Entre Ríos Province Brazil Estrada de Ferro Central do Brasil Rede Mineira de Viação Corcovado Rack Railway Estrada de Ferro Oeste de Minas Estrada de Ferro Perus Pirapora Serra Verde Express Train of Pantanal Trem da Serra da Mantiqueira Trem das Águas Viação Férrea Campinas Jaguariúna Chile Colchaguac Wine Train (a Bayer Peacock 2-6-0) Tren de la Araucanía Temuco to Victoria (1953 Baldwin 4-8-2) Ecuador Tren Crucero Ecuador Colombia Tren Turistico De La Sabana, Bogota Asia Mainland China Jiayang Coal Railway Mengzi–Baoxiu Railway (heritage train operation on an otherwise disused section west of Jianshui) Tieling-Faku Railway Kunming-Hekou Railway Huanan Forest Railway Chaoyanggou-Qi Railway Taiwan Alishan Forest Railway Hong Kong Hong Kong Tramways India Calcutta Tramways Darjeeling Himalayan Railway Kalka Shimla Railway Matheran Hill Railway Nilgiri Mountain Railway Palace on Wheels Indonesia Ambarawa Railway Museum Cepu Forest Railway Mak Itam Steam Locomotive Sepur Kluthuk Jaladara Israel The Oak Railway (רכבת האלונים) in kibbutz Ein Shemer Japan Narita Yume Bokujo narrow gauge railway Sagano Scenic Railway Shuzenji Romney Railway Pakistan Khyber Railway Pakistan Railways Heritage Museum Africa South Africa Note that most of the heritage railway operators in South Africa have their own depots where locomotives and coaches are kept and serviced, but run on state-owned railways. Atlantic Rail – Now defunct. Formally ran day trips from Cape Town to Simonstown using steam locomotives and heritage coaching stock Friends of the Rail – day trips from Hermanstad (Pretoria) using steam locomotives and heritage coaching stock Outeniqua Choo Tjoe – A heritage railway that has not operated since August 2006. Patons Country Narrow Gauge Railway – a two-foot narrow-gauge heritage railway in KwaZulu-Natal, South Africa, from Ixopo to Umzimkhulu Reefsteamers – day trips from Johannesburg to Magaliesburg. 
Rovos Rail – up-market railtours The Sandstone Heritage Trust – private railway operating 2-foot gauge steam locomotives Umgeni Steam Railway – Kloof to Inchanga, near Durban Tunisia Lézard rouge Australia Australia New Zealand See also Heritage tourism List of Conservation topics List of tourist attractions worldwide List of United States railroads Mountain railway References External links Heritage railways in Spain International working steam locomotives National Preservation forum Indian Train Times Rail transport-related lists Railroad attractions Tourism-related lists
List of heritage railways
Engineering
2,868
12,488,904
https://en.wikipedia.org/wiki/Capua%20Leg
The Capua leg was an artificial leg, found in a grave in Capua, Italy, in about 1884. Dating from 300 BC, the leg is one of the earliest known prosthetic limbs. There was no sign of an artificial foot, which may have been made from a valuable metal. The limb was kept at the Royal College of Surgeons in London, but was destroyed in World War II during an air raid. A copy of the limb is held at the Science Museum, London, and another was made by 3D printing in 2021. Bibliography Von Brunn, Walther: Der Stelzfuß von Capua und die antiken Prothesen. In: Archiv für Geschichte der Medizin. Vol. 18, No. 4 (1 November 1926). Stuttgart: Steiner, 1926, pp. 351–360. Bliquez, Lawrence J.: Prosthetics in Classical Antiquity: Greek, Etruscan, and Roman Prosthetics. In: Haase, Wolfgang; Temporini, Hildegard (ed.): Aufstieg und Niedergang der römischen Welt. Teil II: Principat, Vol. 37.3. Berlin / New York: De Gruyter, 1996, pp. 2640–2676. References Prosthetics Archaeological artifacts
Capua Leg
Engineering,Biology
277
6,224,278
https://en.wikipedia.org/wiki/National%20Institute%20of%20Aeronautics%20and%20Space
The National Institute of Aeronautics and Space (Indonesian: Lembaga Penerbangan dan Antariksa Nasional, LAPAN) was the Indonesian government's space agency. It was established on 27 November 1963 by former Indonesian president Sukarno, after one year's existence of a previous, informal space agency organization. LAPAN was responsible for long-term civilian and military aerospace research. For over two decades, LAPAN managed satellites, including the domestically developed small scientific-technology satellite LAPAN-TUBsat and the Palapa series of telecommunication satellites, which were built by Hughes (now Boeing Satellite Systems) and launched from the US on Delta rockets, or from French Guiana using Ariane 4 and Ariane 5 rockets. LAPAN also developed sounding rockets and worked on developing small orbital space launchers. The LAPAN A1, in 2007, and LAPAN A2, in 2015, satellites were launched by India. With the enactment of Presidential Decree No. 33/2021 on 5 May 2021, LAPAN was due to be disbanded along with government research agencies such as the Agency of Assessment and Application of Technology (Indonesian: Badan Pengkajian dan Penerapan Teknologi, BPPT), the National Nuclear Energy Agency (Indonesian: Badan Tenaga Nuklir Nasional, BATAN), and the Indonesian Institute of Sciences (Indonesian: Lembaga Ilmu Pengetahuan Indonesia, LIPI). All of these agencies were fused into the newly formed National Research and Innovation Agency (Indonesian: Badan Riset dan Inovasi Nasional, BRIN). As of September 2021, the disbandment process was still in progress and expected to be finished by 1 January 2022. On 1 September 2021, LAPAN was finally dissolved as an independent agency and transformed into the space and aeronautics research organization of BRIN, signaling the beginning of the institutional integration of the former LAPAN into BRIN. History On 31 May 1962, Indonesia commenced aeronautics exploration when the Aeronautics Committee was established by the Indonesian prime minister, Djuanda, who was also the head of Indonesian Aeronautics. The secretary of Indonesian Aeronautics, RJ Salatun, was also involved in the establishment. On 22 September 1962, the Initial Scientific and Military Rocket Project (known in Indonesia as Proyek Roket Ilmiah dan Militer Awal or PRIMA) was formed by an affiliation of AURI (Indonesian Air Force) with ITB (Bandung Institute of Technology). The outcome of the project was the launching of two "Kartika I" ("star")–series rockets and their telemetry payloads. After two informal projects, the National Institute of Aeronautics and Space (LAPAN) was established in 1963 by Presidential Decree 236. Programs For more than 20 years, LAPAN did research on rocketry, remote sensing, satellites, and space sciences. Satellites Palapa A1 and A2 The first program was the launching of the Palapa A1 (launched 7 August 1976) and A2 (launched 3 October 1977) satellites. These satellites were almost identical to Canada's Anik and Western Union's Westars. The Indonesian satellites belonged to the government-owned company Perumtel, but they were made in the United States. LAPAN satellites The development of microsatellites has become an opportunity for LAPAN. The development of such satellites requires only a limited budget and facilities, compared to the development of large satellites. Meanwhile, the capability to develop micro-satellites prepares LAPAN to implement a future space program that will have measurable economic impact, and therefore contribute to the country's sustainable development effort. 
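The microsatellites described in the next subsections fly in low Earth orbits of roughly 635–650 km. As a back-of-the-envelope check (a sketch: the gravitational parameter, Earth radius, and the circular-orbit simplification are assumptions, not values taken from this article), the quoted orbital figures follow from the stated altitude and period:

```python
import math

# Back-of-the-envelope check of the orbit figures quoted for LAPAN-A1 below.
# Assumptions of this sketch (not from the article): spherical Earth,
# circular two-body orbit, and the constant values given here.
MU = 3.986e14            # m^3/s^2, Earth's gravitational parameter
R_EARTH = 6_378e3        # m, equatorial radius
SIDEREAL_DAY_S = 86_164.1

altitude_m = 635e3       # stated orbit altitude
period_s = 99.039 * 60   # stated orbital period

# Circular orbital velocity follows from the altitude alone.
v_circular = math.sqrt(MU / (R_EARTH + altitude_m))       # ~7.54 km/s

# Ground-track speed, angular rate, and per-orbit longitude shift follow
# from the period alone.
ground_track = 2 * math.pi * R_EARTH / period_s            # ~6.74 km/s
angular_rate = 360.0 / (period_s / 60.0)                   # ~3.64 deg/min
lon_shift = 360.0 * period_s / SIDEREAL_DAY_S              # ~24.8 deg per orbit

print(f"circular velocity  : {v_circular / 1e3:.3f} km/s")
print(f"ground-track speed : {ground_track / 1e3:.3f} km/s")
print(f"angular rate       : {angular_rate:.3f} deg/min")
print(f"longitude shift    : {lon_shift:.3f} deg per orbit")
```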
LAPAN-A1 The LAPAN-A1, or LAPAN-TUBsat, is designed to develop knowledge, skill, and experience with micro-satellite technology development, in cooperation with Technische Universität Berlin, Germany, where the satellite was manufactured. The Indonesian spacecraft is based on the German DLR-Tubsat, but includes a new star sensor and features a new 45 × 45 × 27 cm structure. The satellite payload is a commercial off-the-shelf video camera with a 1000 mm lens, resulting in a nadir resolution of 5 m and a nadir swath of 3.5 km from an altitude of 650 km. In addition, the satellite carries a video camera with a 50 mm lens, resulting in a 200 m resolution video image with a swath of 80 km at the nadir. The uplink and downlink for telemetry, tracking, and command (TTC) are done in the UHF band, and the downlink for video is done in analog S-band. On 10 January 2007, the satellite was successfully launched from Sriharikota, India, as an auxiliary payload with India's Cartosat-2, on the ISRO's Polar Satellite Launch Vehicle (PSLV) C7, to a Sun-synchronous orbit of 635 km, with an inclination of 97.60° and a period of 99.039 minutes. The longitude shift per orbit is about 24.828°, with a ground track velocity of 6.744 km/s, an angular velocity of 3.635 deg/min, and a circular velocity of 7.542 km/s. LAPAN-TUBsat performed technological experiments, earth observation, and attitude control experiments. LAPAN-A2 The mission of LAPAN-A2, or LAPAN-ORARI, is Earth observation using an RGB camera, maritime traffic monitoring using an automatic identification system (AIS)—which can identify the name and flag of a registered ship, its type, tonnage, current route, and departure and arrival ports—and amateur radio communication (text and voice; ORARI is the Indonesian Amateur Radio Organization). The satellite was launched, as a secondary payload of India's ASTROSAT mission, into a circular orbit of 650 km with an inclination of 8 degrees. The purpose of the project is to develop the capability to design, assemble, integrate, and test (AIT) micro-satellites. The satellite was successfully launched on 28 September 2015 using India's ISRO Polar Satellite Launch Vehicle (PSLV) and passes over Indonesia every 97 minutes, or 14 times a day. LAPAN-A3 LAPAN-A3, or LAPAN-IPB, performs experimental remote sensing. In addition, the satellite supports a global AIS mission and amateur radio communication. The satellite payload is a four-band push-broom multi-spectral imaging camera (Landsat bands: B, G, R, NIR), which gives a resolution of 18 m and coverage of 120 km from an altitude of 650 km. The satellite was launched in June 2016. International cooperation In 2008 Indonesia signed an agreement with the National Space Agency of Ukraine (NSAU) that would allow Indonesia access to rocket and satellite technologies. Spaceport development plan Biak Spaceport plan (2006) Since 2006, Indonesia and Russia have been discussing the possibility of launching a satellite from Biak island using air launch technology. LAPAN and the Russian Federal Space Agency (RKA) have worked on a government-to-government space cooperation agreement in order to enable such activities in Indonesia. The plan is for an Antonov An-124 aircraft to deliver a Polyot space launch vehicle to the new Indonesian spaceport on Biak island (West Papua province). 
This spaceport is well suited to commercial launches, as it sits almost exactly on the equator (the nearer the equator the greater the initial velocity imparted to the launched craft, making higher velocity or heavier payloads possible). In the spaceport, the launch vehicle will be fueled, and the satellites will be loaded on it. The Antonov An-124 would then fly to 10 km altitude above the ocean east of Biak island to jettison the launch vehicle. In 2012, discussions resumed. The main stumbling block is Russian concerns over compliance with the terms of the Missile Technology Control Regime; Russia is a co-signatory, Indonesia is not. In 2019, LAPAN officially confirmed plans for building the Biak spaceport, with first flights expected in 2024. Enggano Launchpad plan (2011) In 2011, LAPAN planned to build a satellite to be launchpad at Enggano Island, Bengkulu province, located in the westernmost part of Indonesia, on the Indian Ocean. There are three possible locations, two in Kioyo Natural Park and one in Gunung Nanua Bird Park. The most strategic site for this launchpad is inside Nanua Bird Park, a place called Tanjung Laboko, which is 20 meters above sea level and far from residential areas. The satellite launch pad itself sits on only one hectare of ground, but the safety zone covers 200 hectares. The cost to be disbursed is Rp.40 trillion (around $4.5 billion). The location can handle the assembly of the rockets and launch preparations for satellites of up to 3.8 tonnes. The Bengkulu Natural Resources Conservation Agency has expressed concerns about the plan, because both parks are habitats for a number of bird species native to Enggano Island. The Bengkulu provincial government refused to consider those concerns. Morotai Spaceport plan (2012) After studying the surrounding environment at three potential spaceport island sites (Enggano-Bengkulu, Morotai-North Maluku, and Biak-Papua), LAPAN (21/11) announced Morotai Island as a future spaceport site. Planning started in December 2012. The launch site's completion is expected for 2025. In 2013, LAPAN planned to launch an RX-550 experimental satellite launcher from a location in Morotai to be decided. This island was selected according to the following criteria: Morotai Island's location near the equator, which makes the launch more economical. The island's having seven runways, one of them 2,400 meters, easily extended to 3,000 meters. The ease of building on Morotai, which is not densely populated, and consequently little potential for social conflict with native inhabitants. Morotai Island's east side facing the Pacific Ocean directly, reducing downrange risks to other island populations. Field installations Ground stations Remote-sensing satellite ground station The Stasiun Bumi Satelit Penginderaan Jauh (Remote Sensing Satellite Earth Station) is located at Parepare, South Sulawesi; it has been in operation since 1993. Its main functions include receiving and recording data from earth observation satellites such as Landsat, SPOT, ERS-1, JERS-1, Terra/Aqua MODIS, and NPP. Weather satellite ground stations These ground stations are located at Pekayon, Jakarta, and Biak; since 1982 they have been receiving, recording, and processing data from NOAA, MetOp, and Himawari weather satellites 24 times a day. Rocket launch site LAPAN manages Stasiun Peluncuran Roket (Rocket Launching Station) located at Pameungpeuk Beach in the Garut Regency on West Java (). 
Starting in 1963, the facility was built through cooperation between Indonesia and Japan, as the station was designed by Hideo Itokawa, with the aim of supporting high atmospheric research using Kappa-8 rockets. This installation comprises a Motor Assembly building, a Launch Control Center, a Meteorological Sounding System building, a Rocket Motor Storage hangar, and a dormitory. Radar Koto Tabang Equator Atmospheric Radar The Radar Atmosfer Khatulistiwa Koto Tabang is a radar facility located at Koto Tabang, West Sumatra. It commenced operations in 2001. This facility is used for atmospheric dynamics research, especially areas concerning global climate change, such as El Niño and La Niña climate anomalies. Laboratory Remote Sensing Technology and Data Laboratory The Remote Sensing Technology and Data Laboratory is located at Pekayon, in Jakarta. Its functions include data acquisition systems development, satellite payload imager systems development, satellite ground station system development, preliminary satellite imagery image processing—such as making geometric, radiometric, and atmospheric corrections. Remote Sensing Applications Laboratory The Remote Sensing Applications Laboratory at Pekayon, Jakarta, works with remote sensing satellite data applications for Land Resource, Coastal-Marine Resources, Environment Monitoring, and Disaster Mitigation. Rocket Motor Laboratory The Laboratorium Motor Roket (Rocket Motor Laboratory) is located at Tarogong, West Java. It designs and produces rocket propulsion systems. Propellant Laboratory The Laboratorium Bahan Baku Propelan (Combustion Propellant Laboratory) researches propellants such as oxidizer Ammonium perchlorate and Hydroxyl-terminated polybutadiene. Satellite Technology Laboratory The Satellite Technology Laboratory is located at Bogor, West Java. Its functions include research, development, and engineering of the satellite payload, the satellite bus, and facilities of the ground segment. Aviation Technology Laboratory The Aviation Technology Laboratory is located at Rumpin, West Java. Its functions include research, development, and engineering of aerodynamics, flight mechanics technology, propulsion technology, avionics technology, and aerostructure. Observatories In 2020, Indonesia joined other nations in the hunt for habitable-zone exoplanets, after completion of new astronomical observatory center at Kupang Regency in East Nusa Tenggara province. Equatorial Atmosphere Observatory The Equatorial Atmosphere Observatory of LAPAN is located at Koto Tabang, West Sumatera. It researches: High-resolution observations of wind vectors that will make it possible to study the detailed structure of the equatorial atmosphere, which is related to the growth and decay of cumulus convection; From long-term continuous observations, relationships between atmospheric waves and global atmospheric circulation; By conducting observations from near the surface to the ionosphere, it will be possible to reveal dynamical couplings between the equatorial atmosphere and ionosphere; Based on these results, transports of atmospheric constituents such as ozone and greenhouse gases, and the variations of the Earth's atmosphere that lead to climatic change such as El-Nino and La-Nina. Solar Radiation Observatory The Stasiun Pengamat Radiasi Matahari (Solar Radiation Observation Station) observes ultraviolet radiation of the sun. Operations began in 1992. These facilities were developed by Eko Instrument, of Japan, and are located at Bandung and Pontianak. 
Aerospace Observatory For decades, Indonesian astronomy depended on the Bosscha Observatory in Lembang, West Java, which was built in 1928 by the Dutch and which, at that time, had one of the largest telescopes in the southern hemisphere. At present, the aerospace observatories of LAPAN are located at Pontianak-West Kalimantan, Pontianak-North Sulawesi, Kupang-East Nusa Tenggara, and Watukosek-East Java, and make observations relevant to climatology, meteorology, the sun, and Earth's magnetic field. National Observatory (Obnas) The new observatory construction project on Mount Timau in Kupang Regency, East Nusa Tenggara, which began functioning in 2020, is the biggest observatory in Southeast Asia. The observatory is built with the cooperation of the Bandung Institute of Technology (ITB) and Nusa Cendana University (UNdana). It is designated as the National Observatory (Obnas), and has a telescope. The area around Obnas is developed as a national park, with the aim of attracting tourists. The aims of the observatory are to develop Indonesian space science to a high degree and to economically strengthen the surrounding region, allowing for equitable distribution of inter-regional development, especially in Eastern Indonesia. Obnas is one of LAPAN's key strategic objectives, along with mastery of rocket technology, building a launch site, growing its National Remote Sensing Data Bank (BDPJN) and National Earth Monitoring System (SPBN), and overall technological development. Rockets LAPAN rockets are classified "RX" (Roket Eksperimental) followed by the diameter in millimeters. For example, the RX-100 has a diameter of 100 mm. LAPAN's current workhorse rocket propulsion system consists of four stages, namely three RX-420 stages and an RX-320 final stage. It is planned to use the RX-420 as a rocket booster for the planned Roket Pengorbit Satelit (RPS) (Orbital Satellite Rocket) to fly in 2014. In 2008, optimistic hopes were that this rocket, known as the Satellite Launch Vehicle (SLV), would first be launched in Indonesia by 2012, and, if extra funds became available thanks to the good economic situation of 2007–8, possibly as early as 2010. In fact, the LAPAN budget for 2008 and 2007 was Rp 200 billion (approximately US$20 million). Budgetary issues surrounding the international credit crises of 2008–2009 placed many Indonesian technical projects in jeopardy, most especially the complete development of the RX-420 and the associated micro-satellite program to world-class standards ahead of the project finalization schedule, and the opportunity to work together with international institutions. LAPAN hopes to be an educational partner with Indian aerospace institutions in satellite-related sciences. On November 11, 2010, a LAPAN spokesman said that the RX-550 rocket would undergo a static test in December 2010 and a flight test in 2012. The rocket would consist of four stages, and would be part of an RPS-01 rocket to put a satellite in orbit. Previously, the polar-orbiting LAPAN-TUBsat (LAPAN-A1) satellite had been successfully placed in orbit and was still functioning well. The aim is to have domestically made rockets and satellites. Beginning in 2005, LAPAN rejuvenated Indonesian expertise in rocket-based weapons systems, in cooperation with the Armed Forces of Indonesia (TNI). In April 2008, the TNI began a new missile research program, alongside LAPAN. 
Prior to this, eight projects were sponsored by the TNI in Malacca monitoring, using LAPAN-TUBsat, the theft of timber and alleged encroachment on Indonesian territorial waters in the 2009 escalation over Malaysia's claims to the huge gas fields off Ambalat-island. RX-100 The RX-100 serves to test rocket payload subsystems. It has a diameter of , a length of , and a mass of . It carries enough solid-composite propellant to last 2.5 seconds, which allows for a flight time of 70 seconds, at a maximum speed of Mach 1, at an altitude of , for a range of . The rocket carries a GPS, altimeter, gyroscope, 3-axis accelerometer, CPU, and battery. RX-150 / 120 The two-stage rocket booster RX-150-120 is supported by the Indonesian Army (TNI-AD) and PT Pindad. With a range of , it was successfully launched from a moving vehicle (Pindad Panser) on March 31, 2009. R-Han 122 The R-Han 122 rocket is a surface-to-surface missile with a range of up to at Mach 1.8. As of March 28, 2012, fifty R-Han 122s have been successfully launched. The rockets are the result of six years work by LAPAN. By 2014, at least 500 R-Han 122 rockets will be part of the army arsenal. RX-250 Between 1987 and 2005, LAPAN RX-250 rockets have been regularly launched. RX-320 LAPAN successfully launched two -diameter RX-320 rockets on 30 May and 2 July 2008 at Pameungpeuk, West Java. Space launchers RPS-420 (Pengorbitan-1) The RPS-420 (Pengorbitan-1) is a micro-satellite orbital launch vehicle, similar to Lambda from Japan, but with lighter, modern materials and modern avionics. It is launched unguided at a 70-degree angle of inclination with a four-stage solid rocket motor launcher. It has a diameter of , a length of , a lift-off mass of . It uses solid composite propellant, for a firing time of 13 seconds, yielding a thrust of 9.6 tons, for a flight duration of 205 seconds at a maximum velocity of Mach 4.5. Its range is at an altitude of . Its payload consists of diagnostics, GPS, altimeter, gyro, 3-axis accelerometer, CPU, and battery. The RX-420 was entirely built using local materials. LAPAN carried out a stationary test on the RX-420 on 23 December 2008 in Tarogong, West Java. The RX-420 had its first test flight at the launching station Cilauteureun, Pameungpeuk District, Garut regency, West Java. The LAPAN RX-420 is the test bed for an entirely indigenously developed satellite launch vehicle. The RX-420 is suitable for launching micro-satellites of or less and nano-satellites of or less in co-development with Technische Universität Berlin. The rocket launching plan was extended in 2010 by launching combined RX-420-420s, and in 2011 for combined RX-420-420 – 320, and SOB 420. RPS-420/520 (Pengorbitan-2) In the planning stage are the RX-420 with multiple customizable configuration boosters and the planned RX-520, which is predicted to be able to launch a greater than payload into orbit. This large rocket is intended to be fueled by high-pressure liquid hydrogen peroxide, and various hydrocarbons are under evaluation. The addition of RX-420 boosters to the RX-520 should increase lifting capacity to over , although if too expensive, the proven Russian Soyuz and Energiya rockets will likely be employed. The RX-520 consists of one RX-420 and two RX-420 as a stage-1 booster, one RX-420 as stage 2, one RX-420 as stage 3, and as a payload launcher, one RX-320 as stage 4. RX-550 In 2013, LAPAN launched an RX-550 experimental satellite launcher from a point in Morotai. 
LAPAN Library In June 2009, LAPAN put online its extensive library of over 8000 titles on aeronautics and astronautics. This is the largest dedicated aerospace library in ASEAN and it was hoped it would bring Indonesian and ASEAN talent into the LAPAN program, especially those disadvantaged by location. It was unclear how much content would be available freely to the public. Komurindo Komurindo or Kompetisi Muatan Roket Indonesia is the Indonesia Payload Rocket Competition. The competition was established by LAPAN, the education ministry, and some universities, to enhance rocket research by the universities. The third competition was held in late June 2011 in Pandansimo Beach of Bantul, Yogyakarta. Aircraft LAPAN XT-400 LSU-02 LSU-03 LAPAN Fighter Experiment (LFX) Logo End of LAPAN On 1 September 2021, LAPAN became the Space and Aeronautics Research Organization of the National Research and Innovation Agency (BRIN), signaling the beginning of the institutional integration of the former LAPAN into BRIN. See also List of government space agencies List of rocket launch sites Pratiwi Sudarmono References External links LAPAN remote Sensing https://web.archive.org/web/20090815080000/http://www.lapan.go.id/lombaRUM2009/index.php LAPAN Satellite Technology State Ministry of Research and Technology, Indonesia (RISTEK) Government agencies of Indonesia Space agencies Science and technology in Indonesia Space program of Indonesia Aerospace Defense companies of Indonesia 1963 establishments in Indonesia 2021 disestablishments in Indonesia
National Institute of Aeronautics and Space
Physics
4,889
1,084,420
https://en.wikipedia.org/wiki/Singulation
Singulation is a method by which an RFID reader identifies a tag with a specific serial number from a number of tags in its field. This is necessary because if multiple tags respond simultaneously to a query, they will jam each other. In a typical commercial application, such as scanning a bag of groceries, potentially hundreds of tags might be within range of the reader. When all the tags cooperate with the tag reader and follow the same anti-collision protocol, also called singulation protocol, then the tag reader can read data from each and every tag without interference from the other tags. Collision avoidance Generally, a collision occurs when two entities require the same resource; for example, two ships with crossing courses in a narrows. In wireless technology, a collision occurs when two transmitters transmit at the same time with the same modulation scheme on the same frequency. In RFID technology, various strategies have been developed to overcome this situation. Tree walking There are different methods of singulation, but the most common is tree walking, which involves asking all tags with a serial number that starts with, for instance, a 0 to respond. If more than one responds, the reader might ask for all tags with a serial number that starts with 01 to respond, and then 010. It keeps refining the qualifier until only one tag responds. Note that if the reader has some idea of what tags it wishes to interrogate, it can considerably optimise the search order. For example with some designs of tags, if a reader already suspects certain tags to be present then those tags can be instructed to remain silent, then tree walking can proceed without interference from these. This simple protocol leaks considerable information because anyone able to eavesdrop on the tag reader alone may be able to determine all but the last bit of a tag's serial number. Thus a tag can be (largely) identified so long as the reader's signal is receivable, which is usually possible at much greater distance than simply reading a tag directly. Because of privacy and security concerns related to this, the Auto-ID Labs have developed two more advanced singulation protocols, called Class 0 UHF and Class 1 UHF, which are intended to be resistant to these sorts of attacks. These protocols, which are based on tree-walking but include other elements, have a performance of up to 1000 tags per second. The tree walking protocol may be blocked or partially blocked by RSA Security's blocker tags. ALOHA The first offered singulation protocol is the ALOHA protocol, originally used decades ago in ALOHAnet and very similar to CSMA/CD used by Ethernet. These protocols are mainly used in HF tags. In ALOHA, tags detect when a collision has occurred, and attempt to resend after waiting a random interval. The performance of such collide-and-resend protocols is approximately doubled if transmissions are synchronised to particular time-slots, and in this application time-slots for the tags are readily provided for by the reader. ALOHA does not leak information like the tree-walking protocol, and is much less vulnerable to blocker tags, which would need to be active devices with much higher power handling capabilities in order to work. However when the reader field is densely populated, ALOHA may make much less efficient use of available bandwidth than optimised versions of tree-walking. In the worst case, an ALOHA protocol network can reach a state of congestion collapse. 
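The binary tree-walking procedure described above is easy to see in miniature. The sketch below is not any particular RFID standard's implementation; the 8-bit serial numbers and the query function standing in for the reader are hypothetical, and the point is only to show how the reader refines a prefix bit by bit until exactly one tag answers.

```python
# Minimal sketch of binary tree-walking singulation (hypothetical tags and reader).
def query(tags, prefix):
    """Simulate a reader broadcast: return the tags whose serial starts with prefix."""
    return [t for t in tags if t.startswith(prefix)]

def tree_walk(tags, prefix=""):
    """Refine the prefix one bit at a time until each branch holds a single tag."""
    responders = query(tags, prefix)
    if not responders:            # empty branch: nothing to singulate here
        return []
    if len(responders) == 1:      # exactly one reply: this tag is singulated
        return responders
    # More than one reply means a collision, so descend into the 0 and 1 branches.
    return tree_walk(tags, prefix + "0") + tree_walk(tags, prefix + "1")

tags_in_field = ["01011010", "01011100", "11100010", "00010111"]
print(tree_walk(tags_in_field))   # each serial is read without simultaneous replies
```

Note that every prefix the reader broadcasts during such a walk is exactly what an eavesdropper on the reader's much stronger signal would overhear, which is the privacy weakness discussed above.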
The Auto-ID consortium is attempting to standardise a version of an ALOHA protocol which it calls Class 0 HF. This has a performance of up to 200 tags per second. Slotted Aloha Slotted Aloha is another variety offering better properties than the initial concept. It is implemented in most of the modern bulk detection systems, especially in the clothing industry. Listen before talk This concept is known from polite conversation. It applies as well to wireless communication, also named listen before send. With RFID it is applied for concurrence of readers (CSMA) as well as with concurrence of tags. References Network protocols Radio-frequency identification Wireless locating
Singulation
Technology,Engineering
841
15,387,512
https://en.wikipedia.org/wiki/Fogger
A fogger is any device that creates a fog, typically containing an insecticide for killing insects and other arthropods. Foggers are often used by consumers as a low cost alternative to professional pest control services. The number of foggers needed for pest control depends on the size of the space to be treated, as stated for safety reasons on the instructions supplied with the devices. The fog may contain flammable gases, leading to a danger of explosion if a fogger is used in a building with a pilot light or other naked flame. Fogger composition Total release foggers (TRFs) (also called "bug bombs") are used to kill cockroaches, fleas, and flying insects by filling an area with insecticide. Most foggers contain pyrethroid, pyrethrin, or both as active ingredients. Pyrethroids are a class of synthetic insecticides that are chemically similar to natural pyrethrins and have low potential for systemic toxicity in mammals. Pyrethrins are insecticides derived from chrysanthemum flowers (pyrethrum). Piperonyl butoxide and n-octyl bicycloheptene dicarboximide often are added to pyrethrin products to inhibit insects' microsomal enzymes that detoxify pyrethrins. To distribute their insecticide, foggers also contain aerosol propellants. Hazards to humans During 2001-2006, a total of 466 fogger-related illnesses or injuries were identified in the United States by the SENSOR-Pesticides program. These illnesses or injuries often resulted from inability or failure to vacate before the fogger discharged, reentry into the treated space too soon after the fogger was discharged, excessive use of foggers for the space being treated, and failure to notify others nearby. Exposure symptoms Pyrethrins have little systemic toxicity in mammals, but they have been reported to induce contact dermatitis, conjunctivitis, and asthma. Signs and symptoms of pyrethroid toxicity include abnormal skin sensation (e.g., burning, itching, tingling, and numbness), dizziness, salivation, headache, fatigue, vomiting, diarrhea, seizure, irritability to sound and touch, and other central nervous system effects. See also Ultrasonic hydroponic fogger, a device used in agriculture Pyrethrin Toxicity References External links Bug Bomb Case Profile — National Pesticide Information Center Pesticide Illness & Injury Surveillance — National Institute for Occupational Safety and Health Pesticides
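Because the label coverage drives how many units a space needs, the sizing arithmetic is straightforward. The sketch below is only an illustration: the one-can-per-2,000-cubic-feet figure is an assumed rating, not a recommendation, and the coverage printed on an actual product's instructions is what governs both efficacy and the overuse hazard noted above.

```python
import math

# Rough sizing sketch for total release foggers.
# ASSUMPTION: one unit treats 2,000 cubic feet; real products state their own rating.
COVERAGE_CU_FT = 2000

def foggers_needed(length_ft, width_ft, height_ft, coverage=COVERAGE_CU_FT):
    """Return the whole number of units needed to cover the room's volume."""
    volume = length_ft * width_ft * height_ft
    # Reported injuries often involved using more units than the space required,
    # so the label rating should be treated as a ceiling, not a target.
    return math.ceil(volume / coverage)

# Example: a 25 ft x 40 ft space with 8 ft ceilings is 8,000 cu ft, so 4 units.
print(foggers_needed(25, 40, 8))
```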
Fogger
Biology,Environmental_science
530
552,328
https://en.wikipedia.org/wiki/Thief%20of%20Time
Thief of Time is a fantasy novel by British writer Terry Pratchett, the 26th book in his Discworld series. It was the last Discworld novel with a cover by Josh Kirby. Plot summary The Auditors hire young clockmaker Jeremy Clockson to build a perfect glass clock, without telling him that this will stop time and thereby eliminate human unpredictability from the universe. Death discovers their plans, but cannot act against them directly, so he instead sends his granddaughter Susan Sto Helit. Meanwhile, Lu-Tze of the History Monks leads gifted young apprentice Lobsang Ludd in a desperate mission. Characters Myria LeJean Death – the anthropomorphic personification of Death, or Grim Reaper, a recurring and popular character in the Discworld series. Jeremy Clockson – a master clockmaker tasked with creating the perfect clock, whose name is a pun on British broadcaster Jeremy Clarkson. Susan Sto Helit – Death's granddaughter. Lu-Tze – a powerful member of the History Monks masquerading as a humble sweeper. Lobsang Ludd – apprentice of Lu-Tze Reception Thief of Time was shortlisted for the 2002 Locus Award for Best Fantasy Novel. At The Guardian, Sam Jordison called it "as complicated, daft, hilarious and satisfying as vintage P. G. Wodehouse: part kung fu epic, part philosophical novel, part mind-bending experiment with chaos theory (and a piss-take of those three things)", and categorized it as a book to "give (readers) hope". At the SF Site, Steven H Silver observed that the book's parodying of action films is "masterful", and commended Pratchett for how "fresh" the humor was—while conceding that "reader(s) may not laugh out loud ... but there will be plenty of internal chuckling". At Infinity Plus, John Grant noted that it has "fewer moments of uproarious humour than" the majority of Pratchett's oeuvre, and that the "narrative fails to engender any sense of urgency in the places where it should", concluding that although "one could swiftly lay hands on a dozen genre-fantasy novels that are less worthwhile", it was not Pratchett's best work. Writing process During a 2011 interview, Pratchett discussed his process for writing, and mentioned a self-invented goddess of writers called Narrativia, whom he believed to be smiling upon him throughout his career. One example of Narrativia's intervention from Thief Of Time was the naming of a key character, Ronnie Soak, the forgotten fifth horseman of the apocalypse. Pratchett stated that he had picked the name at random, and was later "astonished when he noticed what it sounded like backwards. Suddenly, he knew of what this particular horseman would be a harbinger." In a direct quote Pratchett revealed his satisfaction with this coincidence, "I thought chaos – yes! Chaos, the oldest. Stuff just turns up like that." References External links Annotations for Thief of Time Quotes from Thief of Time Thief of Time at Worlds Without End 2001 British novels Discworld books 2001 fantasy novels Doubleday (publisher) books Fiction about time Apocalyptic novels Science fantasy novels Hive minds in fiction British comedy novels de:Scheibenwelt-Romane#Der Zeitdieb
Thief of Time
Physics
710
44,635,220
https://en.wikipedia.org/wiki/Dispersion%20polymerization
In polymer science, dispersion polymerization is a heterogeneous polymerization process carried out in the presence of a polymeric stabilizer in the reaction medium. Dispersion polymerization is a type of precipitation polymerization, meaning the solvent selected as the reaction medium is a good solvent for the monomer and the initiator, but is a non-solvent for the polymer. As the polymerization reaction proceeds, particles of polymer form, creating a non-homogeneous solution. In dispersion polymerization these particles are the locus of polymerization, with monomer being added to the particle throughout the reaction. In this sense, the mechanism for polymer formation and growth has features similar to that of emulsion polymerization. With typical precipitation polymerization, the continuous phase (the solvent solution) is the main locus of polymerization, which is the main difference between precipitation and dispersion. Polymerization mechanism At the onset of polymerization, polymers remain in solution until they reach a critical molecular weight (MW), at which point they precipitate. These initial polymer particles are unstable and coagulate with other particles until stabilized particles form. After this point in the polymerization, growth only occurs by addition of monomer to the stabilized particles. As the polymer particles grow, stabilizer (or dispersant) molecules attach covalently to the surface. These stabilizer molecules are generally graft or block copolymers, and can be preformed or can form in situ during the reaction. Typically, one side of the stabilizer copolymer has an affinity for the solvent while the other side has an affinity for the polymer particle being formed. These molecules play a crucial role in dispersion polymerization by forming a “hairy layer” around the particles that prevents particle coagulation. This controls size and colloidal stability of the particles in the reaction system. The driving force for the particle separation is steric hindrance between the outward-facing tails of the stabilizer layers. Dispersion polymerization can produce nearly monodisperse polymer particles of 0.1–15 micrometers (μm). This is important because it fills the gap between particle size generated by conventional emulsion polymerization (0.006–0.7 μm) in batch process and that of suspension polymerization (50–1000 μm). Applications Particles produced by dispersion polymerization are used in a wide variety of applications. Toners, instrument calibration standards, chromatography column packing materials, liquid crystal display spacers, and biomedical and biochemical analysis all use these micron-size monodisperse particles, particles which were hard to come by before the development of dispersion polymerization methods. The dispersions are also used as surface coatings. Unlike solution coatings, dispersion coatings have viscosities that are independent of polymer MW. The viscosities of dispersions are advantageously lower than those of solutions with practical polymer levels. This allows for easier application of the coating. One dispersion polymerization system being studied is the use of supercritical liquid carbon dioxide (scCO2) as a solvent. Because of its unique solvent properties, supercritical CO2 is an ideal medium for dispersion polymerization for many soluble-monomer with insoluble-polymer systems. For example, polymers can be separated by releasing the high pressure under which the scCO2 is held. This process is more efficient than typical drying processes. 
In addition, dispersion polymerization in scCO2 follows the principles of green chemistry: low solvent toxicity, low waste, efficient atom economy, and avoidance of purification steps. References Polymerization reactions
Dispersion polymerization
Chemistry,Materials_science
762
31,779,955
https://en.wikipedia.org/wiki/Pan-T%20antigens
Pan-T antigens are antigens found on all T cells. They include CD2, CD3, CD5 and CD7. References Antigen presenting cells
Pan-T antigens
Chemistry,Biology
34
18,576,196
https://en.wikipedia.org/wiki/Lariat%20chain
A lariat chain is a loop of chain that hangs from, and is spun by, a wheel. It is often used as a science exhibit or a toy. The original lariat chain was created in 1986 by Norman Tuck as an artist-in-residence project at the Exploratorium in San Francisco. It was developed from an earlier Tuck piece entitled Chain Reaction (1984), which was hand cranked and used a heavy chain attached by magnets to an iron flywheel. As in the lariat chain, Chain Reaction used a brush to disrupt the motion of the traveling chain. The speed of the chain is arranged to equal the speed of transverse waves in the chain, so that waves moving against the motion of the chain appear to stand still. See also Belt (mechanical) Foucault pendulum Launch loop has similar potential instabilities References External links Coilgun info: Lariat Chain Introduction Kinetic Chain sculpture built from a bicycle Instructables how-to Simulation of a Lariat chain Physics experiments
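The standing-wave behaviour described above follows from the speed of transverse waves on a flexible chain, v = sqrt(T/mu), where T is the tension and mu the mass per unit length; for an idealized loop spun in a circle, the tension needed to keep each link on its circular path is roughly T = mu*v^2, so the wave speed comes out equal to the chain speed and backward-travelling waves sit still in the lab frame. The numbers below are illustrative assumptions, not measurements of Tuck's sculpture.

```python
import math

# Transverse waves on a chain travel at v_wave = sqrt(T / mu),
# with T the tension and mu the mass per unit length.
# For a chain loop spun in a circle, the tension that keeps each link on its
# circular path is roughly T = mu * v**2, so v_wave works out equal to the
# chain speed v, and backward-moving waves appear to stand still.

mu = 0.30   # assumed linear density, kg/m (illustrative, not measured)
v = 6.0     # assumed chain speed, m/s

T = mu * v**2                 # tension set by the circular motion
v_wave = math.sqrt(T / mu)    # equals v
print(f"tension = {T:.2f} N, wave speed = {v_wave:.2f} m/s (chain speed = {v} m/s)")
```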
Lariat chain
Physics
216
37,168,115
https://en.wikipedia.org/wiki/Tools%20Design
Tools Design is a Danish design studio founded in 1989 by partners Claus Jensen and Henrik Holbæk. The studio works in the areas of consumer products and furniture and is among the most awarded design studios in Denmark. Tools Design has worked for Eva Solo, Georg Jensen, Scanglobe, Skybar, Bionaire, Coloplast, and others. References External links Design companies of Denmark Danish companies established in 1989 Design companies based in Copenhagen
Tools Design
Engineering
95
24,145,018
https://en.wikipedia.org/wiki/C20H30O3
{{DISPLAYTITLE:C20H30O3}} The molecular formula C20H30O3 may refer to: Coicenal A Coicenal D Dihydrotestosterone formate Eoxin A4 Epoxyeicosatetraenoic acid Galanolactone Leukotriene A4 9-Nor-9β-hydroxyhexahydrocannabinol Neotripterifordin 5-oxo-eicosatetraenoic acid Oxymesterone Steviol
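All of the compounds listed above share this molecular formula and therefore the same molar mass; a quick calculation from standard atomic weights gives roughly 318.5 g/mol.

```python
# Molar mass of C20H30O3 from standard atomic weights (g/mol).
atomic_weight = {"C": 12.011, "H": 1.008, "O": 15.999}
composition = {"C": 20, "H": 30, "O": 3}

molar_mass = sum(atomic_weight[el] * n for el, n in composition.items())
print(f"{molar_mass:.2f} g/mol")   # about 318.45 g/mol
```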
C20H30O3
Chemistry
116
1,265,697
https://en.wikipedia.org/wiki/Lester%20Frank%20Ward
Lester Frank Ward (June 18, 1841 – April 18, 1913) was an American botanist, paleontologist, and sociologist. The first president of the American Sociological Association, Ward was characterized by James Q. Dealey as a "great pioneer" in the development of American sociology, with contemporaries referring to him as "the Nestor of American sociologists". His 1883 work Dynamic Sociology was influential in establishing sociology as a distinct field in the United States. However, despite its initial impact, his work was quickly sidelined during the later institutionalization and development of American sociology. Biography Childhood: 1841–1858 Most, if not all, of what is known about Ward's early life comes from the biography Lester F. Ward: A Personal Sketch, written by Emily Palmer Cape in 1922. Lester Frank Ward was born in Joliet, Illinois. He was the youngest of 10 children born to Justus Ward and his wife Silence Rolph Ward. Justus Ward (d. 1858) was of New England colonial descent and worked on farms in addition to being an itinerant mechanic. Silence Ward was the daughter of a clergyman; she was educated and fond of literature. The family lived in poverty during Ward's early years. When Ward was one year old, the family moved closer to Chicago, to Cass, now known as Downers Grove, Illinois, about twenty-three miles from Lake Michigan. The family then moved to a homestead in nearby St. Charles, Illinois, where Ward's father built a saw mill business making railroad ties. As a child, Ward had to work on farms and in mills and factories to supplement the family income, leaving him little time for his education. Ward first attended a formal school at St. Charles, Kane County, Illinois, in 1850 when he was nine years old. He was known as Frank Ward to his classmates and friends and showed a great enthusiasm for books and learning, liberally supplementing his education with outside reading. Four years after Ward started attending school, his parents, along with Lester and an older brother, Erastus, traveled to Iowa in a covered wagon for a new life on the frontier. Starting college: 1858–1862 In 1858, Justus Ward unexpectedly died, and the boys returned to the old homestead the family still owned in St. Charles. Ward's estranged mother, who lived two miles away with Ward's sister, disapproved of the move, and wanted the boys to stay in Iowa to continue their father's work. The two brothers lived together for a short time in the old family homestead, which they dubbed "Bachelor's Hall," doing farm work to earn a living, and encouraged each other to pursue an education and abandon their father's life of physical labor. In late 1858, the two brothers moved to Pennsylvania at the invitation of Lester Frank's oldest brother Cyrenus (9 years Lester Frank's senior), who was starting a business making wagon wheel hubs and needed workers. The brothers saw this as an opportunity to move closer to civilization and to eventually attend college. The business failed, however, and Lester Frank, who still did not have the money to attend college, found a job teaching in a small country school; in the summer months he worked as a farm laborer. He finally saved enough money to attend college and enrolled in the Susquehanna Collegiate Institute in 1860. While he was at first self-conscious about his spotty formal education and self-teaching, he soon found that his knowledge compared favorably to his classmates', and he was rapidly promoted. 
Civil War service and further studies: 1862–1873 Ward was a "fervent opponent of slavery" and enlisted in the Union Army in August 1862 to fight in the Civil War. He suffered three gunshot wounds in the Battle of Chancellorsville and was discharged from service on November 18, 1864, due to physical disability. After the war, Ward moved to Washington, where he worked at the Treasury Department from 1865 until 1872. Ward attended Columbian College, now the George Washington University, graduating in 1869 with the degree of A.B. In 1871, after he received the degree of LL.B, he was admitted to the Bar of the Supreme Court of the District of Columbia. However, Ward never practiced law. In 1873, he completed his A.M. degree. Government work and research in Washington, DC Ward concentrated on his work as a researcher for the federal government. At that time almost all of the basic research in such fields as geography, paleontology, archaeology, and anthropology was concentrated in Washington, DC, and a job as a federal government scientist was a prestigious and influential position. From 1881 until 1888 Ward worked as an assistant geologist at the U.S. Geological Survey. In 1883 he was made Geologist of the U.S. Geological Survey. While he worked at the Geological Survey he became friends with John Wesley Powell, the second director of the US Geological Survey (1881–1894) and the director of the Bureau of Ethnology at the Smithsonian Institution. In 1892, he was named Paleontologist for the USGS, a position he held until 1906. According to Edward Rafferty, Ward was part of a group of "Washington intellectuals" who "wanted to place social science within the structure of government and public life itself". Ward believed that centering research activity in government would benefit democratic progress and evade the partisanship, corruption, and conflict of post-Civil War politics. Broadly, Ward's overarching project represented the "monumental exposition of the relation of the state to social progress". Working from the perspective that social research could be used to improve policy and the function of government, Ward was noted by his contemporaries for advancing "the most advanced views yet taken by an avowed sociologist in the advocacy of a comprehensive program of social reform through the medium of legislation". During this time, Ward was very productive in writing and circulating works on his interests concerning nature and society. Ward published his Guide to the flora of Washington and vicinity (1881), followed shortly afterwards by the first volume of Dynamic Sociology: Or applied social science as based upon statistical sociology and the less complex sciences (1883), alongside his Sketch of Paleobotany (1885), Synopsis of the Flora of the Laramie Group (1885), and Types of the Laramie Flora (1887). Gaining notability Reflecting his growing prominence as a scholar and acceptance in academic circles, Ward was elected to the American Philosophical Society in 1889. In 1900, he was elected president of the International Institute of Sociology in France. Ward was also a fellow of the American Association for the Advancement of Science and a member of the National Academy of Sciences. From 1891 to 1905, Ward continued to publish numerous texts on natural history and sociology, with the circulation of his work in both areas contributing to his growing notability. 
These works included sociological writings on Neo-Darwinism and Neo-Lamarckism (1891), The Psychic Factors of Civilization (1893), multiple articles in Contributions to Social Philosophy (1895–1897), the second volume of his Dynamic Sociology (1897), and his Outlines of Sociology (1898). The founding of the American Sociological Association: 1905 In 1905, American sociologists debated the creation of an independent professional association that would be distinct from the existing collectives for historians, economists, and political scientists. C. W. A. Veditz, a professor at George Washington University who admired Ward's work, sought Ward's opinion on the matter, with Ward arguing in favor of an organization that could mirror the International Institute of Sociology in Paris. At a meeting of approximately three hundred sociologists held during the American Economic Association gathering on December 27, 1905, Ward made a strong argument for the establishment of the American Sociological Association, and the assembled sociologists passed Ward's motion, forming a committee to establish the association's charter and founding officers. Ward became the first president of the American Sociological Association on December 28, 1905, after his colleagues Ross, Small, and Giddings motioned for him to receive the honor. Ward was chosen for the role out of a belief among the committee that "all sociologists are under a heavy debt of gratitude" to his work, and because of Ward's commitment to raising the discipline's profile and esteem in a society where sociology was "not merely discredited, but almost entirely unknown". Teaching at Brown and final years: 1906–1913 After he became the first president of the American Sociological Association, Ward's reputation and prominence as a sociologist in America were at their peak. In 1906, Ward became chair of sociology at Brown University. Previously, Ward had given "extended courses of lectures on sociology" at the University of Chicago and at Stanford University. Prior to taking up the position at Brown, Ward and his wife travelled to Europe, and Ward took part in various presentations and debates. Ward was popular at Brown as a teacher and colleague; a fellow professor, Samuel Mitchell, described him as "pre-eminent" among the "many able scholars and teachers" at Brown. One of Ward's students, Sara Algeo, wrote that "studying with Prof. Ward was like sitting at the feet of Aristotle, or Plato ... He was the wisest man I have ever known." In 1910, Ward taught in the University of Wisconsin-Madison's sociology department during its summer school. Ward delivered public lectures and seminars in the United Kingdom and across the United States. Towards the end of his life, Ward critiqued the eugenics movement as founded on a "distrust of nature" and "egotism", and instead argued that a program of social welfare (or 'euthenics') would be far more effective in curing social ills than what was proposed by eugenicists. Despite gaining recognition for his work and professional esteem, Ward felt increasingly isolated in this later stage of his career, as his focus on systematization was at odds with the work of other social scientists who were more focused on policy and legislation. During his later years, Ward remained a productive writer. In 1906 Ward published Applied Sociology: A Treatise on the Conscious Improvement of Society by Society, and in 1908 an article on Social Classes in the Light of Modern Sociological Theory followed in the American Journal of Sociology. 
Ward's final major work, Glimpses of the Cosmos, was published posthumously, with the help of Sarah Comstock and Sarah Simons, in six volumes beginning in 1913 and continuing until 1918. Death: 1913 After several weeks of sickness, Ward died on April 17, 1913, at his home on Rhode Island Avenue. Prominent social scientists including Émile Durkheim, Ferdinand Tönnies, Patrick Geddes, Thorstein Veblen, and Albion Small mourned his death. His colleagues at Brown University eulogized Ward as a "profound student, and an original investigator in the most abstruse problems which the human mind can grapple", describing him as "a genial associate" and "an inspiring teacher". In a eulogy in the Washington Herald, C. W. A. Veditz remarked that "his death marks the disappearance of a scientist who will unquestionably rank as one of the half-dozen greatest thinkers in his field that the world has produced". Ward was first buried at Glenwood Cemetery in Washington, but was later moved to Brookside Cemetery in Watertown, Jefferson County, New York, to be with his wife. The only surviving public memorial commemorating Ward is in the Pennsylvania village of Myersburg, where a state historical sign describes Ward as "the American Aristotle". Personal life Marriages While attending the Susquehanna Collegiate Institute, Ward met Elizabeth "Lizzie" Carolyn Vought and fell in love. They married on August 13, 1862. Shortly afterward, he enlisted in the Union Army and was sent to the Civil War front. After the war he successfully petitioned for work with the federal government in Washington, DC, where the couple moved. Lizzie assisted him in editing and contributing to a newsletter called The Iconoclast, dedicated to free thinking and critiquing organized religion. She gave birth to a son, but the child died when he was less than a year old. Lizzie died in 1872 at the age of thirty. Lester Frank Ward went on to marry Rosamond Asenath Simons (1840–1913) as his second wife in 1873. Personal character Reflecting after his death, James Q. Dealey, one of Ward's friends, wrote that Ward "had a deeply emotional nature" which was "suppressed by his close devotion to intellectual pursuits"; while he was "really fond of social life", he became "so absorbed in his work that to a quite large extent he lived a lonely life during his last years" and rarely socialized away from his university connections. Dealey described Ward as a committed teacher who "was seldom absent from his classes" and "was most systematic in the preparation of his lectures"; even towards the end of his life, when "he could barely put one foot before another and could hardly carry the weight of his books", Ward cherished teaching. Emily Palmer Cape wrote that Ward "always stressed the power of an education which teaches a knowledge of the materials and forces of nature, and their relation to our own lives." Cape noted that Ward "loved nature, and to be out of doors" and enjoyed giving "a long and beautiful description of the earth" whenever possible. Family Ward's immediate family were politically active and involved in various social causes. Lester Ward's older brother, Cyrenus Ward, was "heavily involved in the politics of labor unions and working-class reform" and in the middle of the 1860s he became a leading member of the socialist movement in New York City. 
Cyrenus Ward went on to join Karl Marx and Friedrich Engels in the International Workingmen's Association, to which he was elected a council member, before being arrested as a spy during the Franco-Prussian War. Lester Ward detailed Cyrenus' activities in The Iconoclast, and went on to secure jobs for him at the Geological Survey and the Bureau of Statistics via his network in Washington. Lester Ward's other brothers, Lorenzo and Justin, were both politically active in the cooperative movement and the prohibitionist movement respectively. Works and ideas Ward hoped to use his scientific literacy to contribute an American version of historical-materialist Sociology, opposing the then popular work of Herbert Spencer with critique inspired by Karl Marx. Working in the Enlightenment tradition, Ward associated his project with the advancement of democratic principles in the United States. As Ward explained in the Preface to Dynamic Sociology: Or Applied Social Science as Based Upon Statistical Sociology and the Less Complex Sciences, it was his belief that: "The real object of science is to benefit man. A science which fails to do this, however agreeable its study, is lifeless. Sociology, which of all sciences should benefit man most, is in danger of falling into the class of polite amusements, or dead sciences. It is the object of this work to point out a method by which the breath of life may be breathed into its nostrils." Political beliefs Ward approached society through the lens of producerism, or the celebration of productive workers, for example artisans, skilled laborers, merchants, and craftspeople, as opposed to nonproducers who simply accumulated capital and resources., Ward believed that government should provide society with understanding of socioeconomic conditions to ensure that the state progressed as a whole. Ward was critical of "privilege, monopoly, and the evils of financial capitalism", and supported abolitionism, temperance, and public education. Nature, evolution and conservation Ward had a lifelong interest in nature, beginning in childhood and extending throughout his time as a government clerk active in local biological societies, and as a formally trained paleobiologist. Ward engaged with Lamarckian ideas, or the theory that the natural environment shapes organisms. Ward wrote on the topic in Neo-Darwinism and Neo-Lamarckism, and was enthusiastic in his support of Darwin's findings and theories. Reflecting a popular trend at the time, Ward made connections between evolution, patterns in the natural world, and his perspectives on society. Ward wrote that "the process of evolution is organization", reflecting that in his opinion "the process is the same" across biological, chemical, physical, and social forms of organization. Ward believed that "the universal comprehension of nature" would lead to a situation where "every human could do his part", stressing that recognising this interconnectedness and interdependence "should inspire one to add to the whole" and to "contribute one's share ot life's great continuous flow." Ward understood human conflict and war as evolutionary forces responsible for progress. From Ward's perspective, conflict enabled the rise of Homo Sapiens over other creatures, and saw the expansion of what he considered to be more technologically advanced races and nations. Ward saw war as a natural evolutionary process that could be painful, slow, and ineffective. 
He argued that these characteristics of war should be recognized, but that war should be replaced with a more progressive system that minimized harm. He wrote: Alongside those of George Perkins Marsh, John Wesley Powell, and W J McGee, Ward's ideas concerning conservation and the management of natural resources helped to inform the conservation movement of the early 20th century. However, the extent of Ward's contributions to scientific understandings of nature has been debated, with John Burnham writing that "Ward's unbelievable egotism and his ostentatious display of technical terminology misled many writers into believing he was a "great" or "distinguished" natural scientist." Ward's desire to "prove his knowledge of all scientific subjects", and his "habit of creating difficult neologisms in his books", proved to be "particularly bothersome to many readers of his work". Welfare state and laissez faire Ward was a supporter of the concept of the welfare state. Ward argued that those critical of the development of a social safety net as 'paternalistic' were hypocritical, since they themselves received "relief from their own incompetency" in their private enterprise as capitalists and industrialists. Ward's ideas influenced a rising generation of progressive political leaders, such as Herbert Croly, and came to help shape early welfare policy in the United States. However, there are few demonstrable direct links between his writings and the actual programs of the founders of the welfare state and the New Deal. Reflecting his overarching engagement with discussions of evolution, Ward critiqued Herbert Spencer and Spencer's theories of laissez-faire and survival of the fittest, which were popular in socio-economic thought in the United States after the American Civil War. Ward positioned himself in opposition to Spencer and the American political scientist William Graham Sumner, an advocate for Spencer's ideas, who had promoted the principles of laissez-faire. The historian Henry Steele Commager argued that Ward "trained his heaviest guns" on "the superstitions that still held domain over the mind of his generation", of which "laissez-faire was the most stupefying". Women's equality Ward advocated for equal rights for women, at times drawing on metaphors and analogies from his interest in the study of the natural world to support his arguments. He gave a speech on the topic to the Fourteenth Dinner of the Six O’clock Club in Washington on April 26, 1888, at Willard’s Hotel. Ward was of the opinion that "there is no fixed rule by which Nature has intended that one sex should excel the other, any more than there is any fixed point beyond which either cannot develop." Ward summarized his position as "true science teaches that the elevation of woman is the only sure road to the evolution of man." Despite Ward's interest in the topic of equal rights for women, Clifford H. Scott summarised that "practically all the suffragists ignored" Ward. Legacy in American sociology As Robert Kessler summarized, "reputation came slowly and faded rapidly" for Ward: while his early work was "epoch-making" and his impact led Hofstadter to name him the "American Aristotle", by the middle of the 20th century Ward had "passed so completely from the contemporary scene" and he is now largely undiscussed in modern American sociology. 
Eric Royal Lybeck argues that the broadness of Ward's research was responsible for his work being "shunted from the centre of sociological discourse to the margins of posterity" While Ward's work was wide sweeping and attempted to synthesize insights from a broad spectrum of research themes and subjects, the institutionalization of sociology in the United States led to a hyperfocus on discrete and specialized problems which was at odds with the scale of his approach. Albion Small suggested that Ward remained too attached to the positivism of Auguste Comte and the evolutionism of Herbert Spencer at a time when other social scientists were moving towards other social models and methods of analysis. It was Small's assessment that Ward clung to a "pure science" approach in social research, and was more of a "museum investigator" interested in labeling, categorising, and developing schema. Cumulatively, this meant that while Ward was "highly regarded and influential" in the early history of sociology in the United States, his approach and contributions rapidly became redundant as the field changed. Even during his lifetime, C. W. A. Veditz suggested that due to translation and wide circulation, Ward's works may have been better known in Germany, France, Switzerland, Russia, and Japan than they were in the United States. Ward's diaries, writings, and photographs All but the first of his voluminous diaries were reportedly destroyed by Rosamond after his death. Ward's first journal, Young Ward's Diary: A Human and Eager Record of the Years Between 1860 and 1870..., remains under copyright. A collection of Ward's writings and photographs is maintained by the Special Collections Research Center of the George Washington University. The collection includes articles, diaries, correspondence, and a scrapbook. GWU's Special Collections Research Center is located in the Estelle and Melvin Gelman Library. Literature Coser, Lewis. A History of Sociological Analysis. New York : Basic Books. Dahms, Harry F. – 'Lester F. Ward' Finlay, Barbara. "Lester Frank Ward as a Sociologist Of Gender: A New Look at His Sociological Work." Gender & Society, Vol. 13, No. 2, 251–265 (1999) Gossett, Thomas F. (1963). Race: The History of an Idea in America. Harp, Gillis J. Positivist Republic, Ch. 5 "Lester F. Ward: Positivist Whig" Positivist Republic: Auguste Comte and the Reconstruction of American Liberalism, 1865–1920 Hofstadter, Richard. Social Darwinism in American Thought, Chapter 4, (original 1944, 1955. reprint Boston: Beacon Press, 1992). Social Darwinism in American Thought Largey, Gale. Lester Ward: A Global Sociologist Mers, Adelheid. Fusion Perlstadt, Harry. Applied Sociology as Translational Research: A One Hundred Fifty Year Voyage Rafferty, Edward C. Apostle of Human Progress. Lester Frank Ward and American Political Thought, 1841/1913. Apostle of Human Progress: Lester Frank Ward and American Political Thought, 1841–1913 Ravitch, Diane. Left Back: A Century of Failed School Reforms. Simon & Schuster. "Chapter one: The Educational Ladder" Left Back Ross, John R. Man over Nature: the origins of the conservation movement Ross, Dorthy. The Origins of American Social Science. Cambridge University Press The Origins of American Social Science Seidelman, Raymond and Harpham, Edward J. Disenchanted Realists: Political Science and the American Crisis, 1884–1984. p. 26 Disenchanted Realists: Political Science and the American Crisis Wood, Clement. 
The Sociology Of Lester F Ward The Sociology Of Lester F Ward Selected works 1880–1889 1890–1899 (reprinted 1906) (reprinted 1913) 1900–1909 Ward, Lester F. (1903) "Pure Sociology: A Treatise on the Origin and Spontaneous Development of Society." (2,625 KB – PDF) With the collaboration of William M. Fontaine, Arthur Bibbins, and G. R. Wieland With the collaboration of William M. Fontaine, Arthur Bibbins, and G. R. Wieland 1910–1919 References Further reading Primary sources Commager, Henry Steele, ed., Lester Frank Ward and the Welfare State (1967), major writings by Ward, and long introduction by Commager Stern, Bernhard J. ed. Young Ward's Diary: A Human and Eager Record of the Years Between 1860 and 1870 as They Were Lived in the Vicinity of the Little Town of Towanda, Pennsylvania; in the Field as a Rank and File Soldier in the Union Army; and Later in the Nation's Capital, by Lester Ward Who became the First Great Sociologist This Country Produced (1935) Secondary sources Bannister, Robert. Sociology and Scientism: The American Quest for Objectivity, 1880–1940 (1987), pp. 13–31. Burnham, John C. "Lester Frank Ward as Natural Scientist," American Quarterly 1954 6#3 pp. 259–265 in JSTOR Chugerman, Samuel. Lester F. Ward, the American Aristotle: A Summary and Interpretation of His Sociology (Duke University Press, 1939) Fine, Sidney. Laissez Faire and the General-Welfare State: A Study of Conflict in American Thought, 1865–1901 (1956), pp. 252–288 Muccigrosso, Robert, ed. Research Guide to American Historical Biography (1988) 3:1570–1574 Nelson, Alvin F. "Lester Ward's Conception of the Nature of Science," Journal of the History of Ideas (1972) 33#4 pp. 633–638 in JSTOR Piott, Steven L. American Reformers, 1870–1920: Progressives in Word and Deed (2006); examines 12 leading activists; see chapter 1 for Ward. Scott, Clifford H. Lester Frank Ward (1976) External links Primary sources Guide to the Lester Frank Ward Collection, 1860–1913, Brown University Library Collections Guide to the Lester Frank Ward Papers, 1883–1919, Special Collections Research Center, Estelle and Melvin Gelman Library, the George Washington University Secondary sources The Sunday Review; Towanda, Pennsylvania Short biography A Lester Ward web site Public Sociology website Mansfield University Sociology professor Gale Largey produced a 90 minute documentary on Lester Frank Ward that was featured at the 2005 Centennial of the American Sociological Association and is available upon request from the director. 1841 births 1913 deaths American sociologists Writers from Joliet, Illinois Lamarckism Presidents of the American Sociological Association American male feminists American feminists 19th-century American writers 20th-century American writers Brown University faculty Members of the American Philosophical Society
Lester Frank Ward
Biology
5,526
4,243,440
https://en.wikipedia.org/wiki/Russula%20virescens
Russula virescens is a basidiomycete mushroom of the genus Russula, and is commonly known as the green-cracking russula, the quilted green russula, or the green brittlegill. It can be recognized by its distinctive pale green cap that measures up to in diameter, the surface of which is covered with darker green angular patches. It has crowded white gills, and a firm, white stipe that is up to tall and thick. Considered to be one of the best edible mushrooms of the genus Russula, it is especially popular in Spain and China. With a taste that is described variously as mild, nutty, fruity, or sweet, it is cooked by grilling, frying, sautéeing, or eaten raw. Mushrooms are rich in carbohydrates and proteins, with a low fat content. The species was described as new to science in 1774 by Jacob Christian Schaeffer. Its distribution encompasses Asia, North Africa, Europe, and Central America. Its presence in North America has not been clarified, due to confusion with the similar species Russula parvovirescens and R. crustosa. R. virescens fruits singly or scattered on the ground in both deciduous and mixed forests, forming mycorrhizal associations with broadleaf trees such as oak, European beech, and aspen. In Asia, it associates with several species of tropical lowland rainforest trees of the family Dipterocarpaceae. R. virescens has a ribonuclease enzyme with a biochemistry unique among edible mushrooms. It also has biologically active polysaccharides, and a laccase enzyme that can break down several dyes used in the laboratory and in the textile industry. Taxonomy Russula virescens was first described by German polymath Jacob Christian Schaeffer in 1774 as Agaricus virescens. The species was subsequently transferred to the genus Russula by Elias Fries in 1836. According to the nomenclatural authority MycoBank, Russula furcata var. aeruginosa (published by Christian Hendrik Persoon in 1796) and Agaricus caseosus (published by Karl Friedrich Wilhelm Wallroth in 1883) are synonyms of Russula virescens. The variety albidocitrina, defined by Claude Casimir Gillet in 1876, is no longer considered to have independent taxonomic significance. According to Rolf Singer's 1986 classification of Russula, R. virescens is the type species of subsection Virescentinae in section Rigidae, a grouping of mushrooms characterized by a cap surface that breaks into patches of bran-like (furfuraceous) particles. In a molecular phylogenetic analysis of European Russula, R. virescens formed a clade with R. mustelina; these two species were sister to a clade containing R. amoenicolor and R. violeipes. The specific epithet virescens is Latin for "becoming green". The characteristic pattern of the cap surface has earned the species common names such as the green-cracking russula, the quilted green russula, and the green brittlegill. In the mid-Atlantic United States, it is also known locally as the moldy russula. Description Described by mushroom enthusiast Antonio Carluccio as "not exactly nice to look at", the cap is at first dome or barrel-shaped, becoming convex and flattened with age with a diameter of up to . The cap center is often depressed. The cuticle of the cap is green, most profoundly in the center, with patches of the same color dispersed radially around the center in an areolate pattern. The color of the cuticle is often of variable shade, ranging from gray to verdigris to grass-green. 
The extent of the patching of the cuticle is also variable, giving specimens with limited patches a resemblance to other green-capped species of Russula, such as R. aeruginea. The green patches of the cap lie on a white to pale green background. The cap, while frequently round, may also exhibit irregular lobes and cracks. The cap cuticle is thin, and can be readily peeled off the surface to a distance of about halfway towards the cap center. The gills are white to cream colored, and fairly crowded together; they are mostly free from attachment to the stipe. Gills are interconnected at their bases by veins. The stipe is cylindrical, white, and of variable height, up to tall and wide; it is roughly the same thickness at both the top and the base. The top portion of the stipe may be farinose—covered with a white, mealy powder. It may turn slightly brown with age, or when it is injured or bruised from handling. Like other mushrooms in the Russulales, the flesh is brittle, owing to the sphaerocyst cytoarchitecture—cylindrical cells that contrast with the typical fibrous, filamentous hyphae present in other orders of the basidiomycota. The spores of R. virescens are elliptical or ellipsoid with warts, translucent (hyaline), and produce a white, pale or pale yellow spore print; the spore dimensions are 6–9 by 5–7 μm. A partial reticulum (net-like pattern of ridges) interconnects the warts. The spore-bearing cells, the basidia, are club-shaped and have dimensions of 24–33 by 6–7.5 μm; they are colorless, and each hold from two to four spores. The pleurocystidia (cystidia on the gill face) are 40–85 by 6–8 μm and end abruptly in a sharp point. Similar species Russula parvovirescens, found in the eastern United States, can be distinguished from R. virescens by its smaller stature, with caps measuring wide and stipe up to long by thick. Compared to R. virescens, it tends to be more bluish-green, the patches on its cap are larger, and it has a lined cap margin. Microscopically, the terminal cells in the cap cuticle of R. parvovirescens are more swollen than those of R. virescens, which has tapered and elongated terminal cells. Another green-capped Russula is R. aeruginea, but this species may be distinguished from R. virescens by its smaller size and smooth cap. Other green russulas with a smooth cap include R. heterophylla and R. cyanoxantha var. peltereaui. Russula crustosa, like R. virescens, also has an areolate cap, but the cap becomes sticky (viscid) when moist, and its color is more variable, as it may be reddish, yellowish, or brown. Also, the spore print of R. crustosa is a darker yellow than R. virescens. R. redolens has a cap that is "drab-green to blue-green", but unlike R. virescens, is smooth. R. redolens has an unpleasant taste and smells of parsley. Edibility Russula virescens is an edible mushroom considered to be one of the best of the genus Russula, and is popular in Europe, particularly in Spain. In an 1875 work on the uses of fungi, English mycologist Mordecai Cubitt Cooke remarked about the mushroom that "the peasants about Milan are in the habit of putting [it] over wood embers to toast, and eating [it] afterwards with a little salt." The mushroom is often sold as a dried product in Asia; in China, it can be found in roadside markets, and used in traditional herbal medicines. Its smell is not distinctive, but its taste has been described as mild, nutty, fruity, or even sweet. Old specimens may smell of herrings. Drying the mushrooms enhances the nutty flavor. 
Mushrooms can be sautéed (the green color disappears with cooking), and young specimens that are prepared this way have a potato taste that pairs well with shallots. They are also fried or grilled, or used raw in salads. Young specimens are pale and can be hard to identify, but the characteristic pattern of older fruit bodies makes them hard to confuse with other species. When collecting R. virescens for consumption, caution is of vital importance to avoid confusion with the dangerously poisonous Amanita phalloides (better known as the death cap), a mushroom that can be most easily identified by its volva and ring. The nutritional components of R. virescens mushrooms have been characterized. Fresh mushrooms contain about 92.5% moisture. A sample of dried mushroom (100 g dw) has 365 kcal (1527 kilojoules). Carbohydrates make up the bulk of the fruit bodies, comprising 62% of the dry weight; 11.1% of the carbohydrates are sugars, the large majority of which (10.9%) is mannitol. The total lipid, or crude fat, content makes up 1.85% of the dry matter of the mushroom. The proportion of fatty acids (expressed as a percentage of total fatty acids) are 28.78% saturated, 41.51% monounsaturated, and 29.71% polyunsaturated. The most prevalent fatty acids include: palmitic acid, 17.3% of total fatty acids; stearic acid, 7.16%; oleic acid, 40.27%; and linoleic acid, 29.18%. Several bioactive compounds are present in the mushroom. One hundred grams (dry weight) contains 49.3 micrograms (μg) of tocopherols (20.0 μg alpha, 21.3 μg beta, and 8.0 μg gamma) and 0.19 milligrams (mg) of the carotenoid pigment lycopene. There are 4.46 g of organic acids per 100 g of dry mushrooms, including oxalic acid (0.78 g), malic acid (2.71 g), citric acid (0.55 g), and fumaric acid (0.23 g). Mushrooms have 22.6 mg/100 g dw of the phenolic compound 4-hydroxybenzoic acid, and 15.8 mg/100 g dw of cinnamic acid. Habitat and distribution Russula virescens can be found fruiting on soil in both deciduous forests and mixed forests, forming ectomycorrhizal symbiotic relationships with a variety of trees, including oaks (Quercus), European beech (Fagus sylvatica), and aspen (Populus tremula). Preliminary investigations suggest that the fungus also associates with at least ten species of Dipterocarpaceae, an important tree family prevalent in the tropical lowland forests of Southeast Asia. Fruit bodies may appear singly or in groups, reappear in the same spots year after year, and are not common. In Europe, fruiting occurs mainly during the months of summer to early autumn. A Mexican study of the seasonal occurrence of several common mushroom species in subtropical forests in Xalapa showed that the fruiting period of R. virescens occurred in April, before the onset of the rainy season. The distribution of R. virescens in North America is subject to debate, where a number of similar species such as R. parvovirescens and R. crustosa are also recognized. One author even suggests that R. virescens "is strictly a European species", citing Buyck and collaborators (2006), who say "the virescens-crustosa group is much more complex than suspected and embraces at least a dozen taxa in the eastern US". As in Europe, Russula virescens has a widespread distribution in Asia, having been recorded from India, Malaysia, Korea, the Philippines, Nepal, China, Thailand, and Vietnam. It is also found in North Africa and Central America. 
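The nutritional figures quoted above are given per 100 g of dried mushroom; since fresh fruit bodies are about 92.5% water, converting them to a fresh-weight basis is a one-line scaling. The sketch below only applies that moisture figure from the text; it is an illustration, not a dietary analysis.

```python
# Scale per-100-g-dry-weight values to a fresh-weight basis using the
# ~92.5% moisture content reported above for fresh R. virescens.
MOISTURE = 0.925
DRY_FRACTION = 1.0 - MOISTURE        # about 7.5 g of dry matter per 100 g fresh

def per_100g_fresh(value_per_100g_dry):
    """Convert a nutrient value quoted per 100 g dry weight to per 100 g fresh weight."""
    return value_per_100g_dry * DRY_FRACTION

print(per_100g_fresh(365))   # ~27.4 kcal per 100 g of fresh mushrooms
print(per_100g_fresh(62))    # ~4.7 g of carbohydrate per 100 g fresh
```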
Chemistry Russula virescens has a limited capacity to bioaccumulate the micronutrients iron, copper, and zinc from the soil. The concentration of these trace metals is slightly higher in the caps than in the stipes. A meal of fresh mushroom caps would supply 16% of the recommended daily allowance (RDA) of copper for an adult male or female (ages 19–50); 16% or 7.3% of the RDA of iron for an adult male or female, respectively; and 16–22% of the adult RDA of zinc. The mushroom is a poor bioaccumulator of the toxic heavy metals arsenic, cadmium, lead, mercury, and nickel. Biologically active mushroom polysaccharides have been a frequent research topic in recent decades due to their possible stimulatory effect on innate and cell-mediated immune responses, antitumor activities, and other activities. Immunostimulatory activity, antioxidant activity, cholesterol-lowering, and blood sugar-lowering effects have been detected in extracts of R. virescens fruit bodies, which are attributed to polysaccharides. A water-insoluble beta-glucan, RVS3-II, has been isolated from the fruit bodies. Sulfated derivatives of this compound have antitumor activities against sarcoma tumor cell lines. RVP, a water-soluble polysaccharide present in the mushroom, is made largely of galactomannan subunits and has antioxidant activity. Ribonucleases (or RNases) are enzymes that catalyze the hydrolysis of ribonucleic acid (RNA), and collectively they play a critical role in many biological processes. An RNase from R. virescens was shown to be biochemically unique amongst seven edible mushroom species in several ways: it has a co-specificity towards cleaving RNA at poly A and poly C, compared to the monospecific RNases of the others; it can be adsorbed on chromatography columns containing DEAE–cellulose as the adsorbent; it has a pH optimum of 4.5, lower than that of any of the other species; and it has a "distinctly different" N-terminal amino acid sequence. The mushroom contains a unique laccase enzyme that can break down several dyes used in the laboratory and in the textile industry, such as bromothymol blue, eriochrome black T, malachite green, and reactive brilliant blue. Laccases are being used increasingly in the textile industry as environmental biocatalysts for the treatment of dye wastewater. See also List of Russula species References External links virescens Edible fungi Fungi described in 1774 Fungi of Africa Fungi of Asia Fungi of Central America Fungi of Europe Fungi of North America Fungus species
Russula virescens
Biology
3,107
19,981,882
https://en.wikipedia.org/wiki/CA%2027-29
CA 27.29 is a tumor marker for breast cancer. It is a form of glycoprotein MUC1. References Tumor markers
CA 27-29
Chemistry,Biology
31
68,571,110
https://en.wikipedia.org/wiki/Psychoplastogen
Psychoplastogens are a group of small molecule drugs that produce rapid and sustained effects on neuronal structure and function, intended to manifest therapeutic benefit after a single administration. Several existing psychoplastogens have been identified and their therapeutic effects demonstrated; several are presently at various stages of development as medications including ketamine, MDMA, scopolamine, and the serotonergic psychedelics, including LSD, psilocin (the active metabolite of psilocybin), DMT, and 5-MeO-DMT. Compounds of this sort are being explored as therapeutics for a variety of brain disorders including depression, addiction, and PTSD. The ability to rapidly promote neuronal changes via mechanisms of neuroplasticity was recently discovered as the common therapeutic activity and mechanism of action. Etymology and nomenclature The term psychoplastogen comes from the Greek roots psych- (mind), plast- (molded), and -gen (producing) and covers a variety of chemotypes and receptor targets. It was coined by David E. Olson in collaboration with Valentina Popescu, both at the University of California, Davis. The term neuroplastogen is sometimes used as a synonym for psychoplastogen, especially when speaking to the biological substrate rather than the therapeutic. Chemistry Psychoplastogens come in a variety of chemotypes and chemical families, but, by definition, are small-molecule drugs. Ketamine has been described as "the prototypical psychoplastogen". Pharmacology Psychoplastogens exert their effects by promoting structural and functional neural plasticity through diverse targets including, but not limited to, 5-HT2A, NMDA, and muscarinic receptors. Some are biased agonists. While each compound may have a different receptor binding profile, signaling appears to converge at the tyrosine kinase B (TrkB) and mammalian target of rapamycin (mTOR) pathways. Convergence at TrkB and mTOR parallels that of traditional antidepressants with known efficacies, but with more rapid onset. Due to their rapid and sustained effects, psychoplastogens could potentially be dosed intermittently. In addition to the neuroplasticity effects, these compounds can have other epiphenomena including sedation, dissociation, and hallucinations. Psychedelics show complex effects on neuroplasticity and can both promote and inhibit neuroplasticity depending on the circumstances. Single doses of DMT, 5-MeO-DMT, psilocybin, and DOI have been found to produce robust and long-lasting increases in neuroplasticity in animals. Likewise, repeated doses of LSD for 7 days increased neuroplasticity. However, chronic intermittent administration of DMT for several weeks resulted in dendritic spine retraction, suggesting physiological homeostatic compensation in response to overstimulation. In addition, DOI has been found to decrease brain-derived neurotrophic factor (BDNF) levels in the hippocampus. The effects of psychedelics on neuroplasticity appear to be dependent on serotonin 5-HT2A receptor activation, as they are abolished in 5-HT2A receptor knockout mice. Non-hallucinogenic serotonin 5-HT2A receptor agonists, like tabernanthalog and lisuride, have also been found to increase neuroplasticity, and to a magnitude comparable to psychedelics. In terms of neurogenesis, DOI and LSD showed no impact on hippocampal neurogenesis, while psilocybin and 25I-NBOMe decreased hippocampal neurogenesis. 5-MeO-DMT, however, has been found to increase hippocampal neurogenesis, and this could be blocked by sigma σ1 receptor antagonists. 
Approved medical uses Several psychoplastogens have either been approved or are in development for the treatment of a variety of brain disorders associated with neuronal atrophy where neuroplasticity can elicit beneficial effects. Esketamine, sold under the brand name Spravato and produced by Janssen Pharmaceuticals, was approved by the FDA in March 2019 for the treatment of treatment-resistant depression (TRD) and suicidal ideation. As of 2022, it is the only psychoplastogen approved in the US for the treatment of a neuropsychiatric disorder. Esketamine is the S(+) enantiomer of ketamine and functions as an NMDA receptor antagonist. Clinical development Other psychoplastogens that are being investigated in the clinic include: MDMA-assisted psychotherapy is being investigated for treatment of PTSD. A recent placebo-controlled Phase 3 trial found that 67% of participants in the MDMA+therapy group no longer met the diagnostic criteria for PTSD, whereas 32% of those in the placebo+therapy group no longer met the diagnostic threshold. MDMA-assisted psychotherapy is also currently in Phase 2 trials for eating disorders, anxiety associated with life-threatening illness, and social anxiety in autistic adults. Psilocybin, a compound in psilocybin mushrooms that serves as a prodrug for psilocin, is currently being investigated in clinical trials of hallucinogen-assisted therapy for a variety of neuropsychiatric disorders. To date, studies have explored the utility of psilocybin in a variety of diseases, including TRD, smoking addiction, and anxiety and depression in people with cancer diagnoses. LSD is being tested in phase 2 trials for cluster headaches and anxiety. DMT is being studied for depression. 5-MeO-DMT is being studied for depression and eating disorders. Ibogaine and noribogaine are being studied for addiction. List of known psychoplastogens Substituted tryptamines: psilocin (including psilocybin and psilacetin), DMT, 5-MeO-DMT Ergolines: LSD, lisuride Substituted phenethylamines: DOI, MDMA and mescaline Dissociatives: ketamine (including esketamine, arketamine) Iboga-derivatives: ibogaine, noribogaine, tabernanthine and tabernanthalog AAZ-A-154 Scopolamine Rapastinel Tropoflavin (7,8-DHF) (including R7, R13) LY-341495 Isoflurane See also Ariadne (drug) Neuroplasticity Notes References Neuropharmacology
Psychoplastogen
Chemistry
1,402
9,624,797
https://en.wikipedia.org/wiki/Translational%20Genomics%20Research%20Institute
The Translational Genomics Research Institute (TGen) is a non-profit genomics research institute based in Phoenix, Arizona, United States. History and activities TGen was established in July 2002 by Jeffrey Trent in Phoenix, Arizona, with an initial investment of US$100 million from Arizona public- and private-sector investors. The field of translational genomics research searches for ways to apply results from the Human Genome Project to the development of improved diagnostics, prognostics, and therapies for cancer, neurological disorders, diabetes and other complex diseases. The mission of TGen is to make and translate genomic discoveries into advances in human health. TGen has contributed to the growth of scientific research and biotechnology in Arizona. The institute has been involved in collaborations and studies, such as the research on chronic traumatic encephalopathy (CTE) in former NFL players in partnership with Exosome Sciences. References Genetics or genomics research institutions Bioinformatics organizations Research institutes in Arizona Research institutes established in 2002 Organizations based in Arizona 2002 establishments in Arizona
Translational Genomics Research Institute
Biology
217
22,424
https://en.wikipedia.org/wiki/Recapitulation%20theory
The theory of recapitulation, also called the biogenetic law or embryological parallelism—often expressed using Ernst Haeckel's phrase "ontogeny recapitulates phylogeny"—is a historical hypothesis that the development of the embryo of an animal, from fertilization to gestation or hatching (ontogeny), goes through stages resembling or representing successive adult stages in the evolution of the animal's remote ancestors (phylogeny). It was formulated in the 1820s by Étienne Serres based on the work of Johann Friedrich Meckel, after whom it is also known as the Meckel–Serres law. Since embryos also evolve in different ways, the shortcomings of the theory had been recognized by the early 20th century, and it had been relegated to "biological mythology" by the mid-20th century. Analogies to recapitulation theory have been formulated in other fields, including cognitive development and music criticism. Embryology Meckel, Serres, Geoffroy The idea of recapitulation was first formulated in biology from the 1790s onwards by the German natural philosophers Johann Friedrich Meckel and Carl Friedrich Kielmeyer, and by Étienne Serres after which, Marcel Danesi states, it soon gained the status of a supposed biogenetic law. The embryological theory was formalised by Serres in 1824–1826, based on Meckel's work, in what became known as the "Meckel-Serres Law". This attempted to link comparative embryology with a "pattern of unification" in the organic world. It was supported by Étienne Geoffroy Saint-Hilaire, and became a prominent part of his ideas. It suggested that past transformations of life could have been through environmental causes working on the embryo, rather than on the adult as in Lamarckism. These naturalistic ideas led to disagreements with Georges Cuvier. The theory was widely supported in the Edinburgh and London schools of higher anatomy around 1830, notably by Robert Edmond Grant, but was opposed by Karl Ernst von Baer's ideas of divergence, and attacked by Richard Owen in the 1830s. Haeckel Ernst Haeckel (1834–1919) attempted to synthesize the ideas of Lamarckism and Goethe's Naturphilosophie with Charles Darwin's concepts. While often seen as rejecting Darwin's theory of branching evolution for a more linear Lamarckian view of progressive evolution, this is not accurate: Haeckel used the Lamarckian picture to describe the ontogenetic and phylogenetic history of individual species, but agreed with Darwin about the branching of all species from one, or a few, original ancestors. Since early in the twentieth century, Haeckel's "biogenetic law" has been refuted on many fronts. Haeckel formulated his theory as "Ontogeny recapitulates phylogeny". The notion later became simply known as the recapitulation theory. Ontogeny is the growth (size change) and development (structure change) of an individual organism; phylogeny is the evolutionary history of a species. Haeckel claimed that the development of advanced species passes through stages represented by adult organisms of more primitive species. Otherwise put, each successive stage in the development of an individual represents one of the adult forms that appeared in its evolutionary history. For example, Haeckel proposed that the pharyngeal grooves between the pharyngeal arches in the neck of the human embryo not only roughly resembled gill slits of fish, but directly represented an adult "fishlike" developmental stage, signifying a fishlike ancestor. 
Embryonic pharyngeal slits, which form in many animals when the thin branchial plates separating pharyngeal pouches and pharyngeal grooves perforate, open the pharynx to the outside. Pharyngeal arches appear in all tetrapod embryos: in mammals, the first pharyngeal arch develops into the lower jaw (Meckel's cartilage), the malleus and the stapes. Haeckel produced several embryo drawings that often overemphasized similarities between embryos of related species. Modern biology rejects the literal and universal form of Haeckel's theory, such as its possible application to behavioural ontogeny, i.e. the psychomotor development of young animals and human children. Contemporary criticism Haeckel's theory and drawings were criticised by his contemporary, the anatomist Wilhelm His Sr. (1831–1904), who had developed a rival "causal-mechanical theory" of human embryonic development. His's work specifically criticised Haeckel's methodology, arguing that the shapes of embryos were caused most immediately by mechanical pressures resulting from local differences in growth. These differences were, in turn, caused by "heredity". He compared the shapes of embryonic structures to those of rubber tubes that could be slit and bent, illustrating these comparisons with accurate drawings. Stephen Jay Gould noted in his 1977 book Ontogeny and Phylogeny that His's attack on Haeckel's recapitulation theory was far more fundamental than that of any empirical critic, as it effectively stated that Haeckel's "biogenetic law" was irrelevant. Darwin proposed that embryos resembled each other since they shared a common ancestor, which presumably had a similar embryo, but that development did not necessarily recapitulate phylogeny: he saw no reason to suppose that an embryo at any stage resembled an adult of any ancestor. Darwin supposed further that embryos were subject to less intense selection pressure than adults, and had therefore changed less. Modern status Modern evolutionary developmental biology (evo-devo) follows von Baer, rather than Darwin, in pointing to active evolution of embryonic development as a significant means of changing the morphology of adult bodies. Two of the key principles of evo-devo, namely that changes in the timing (heterochrony) and positioning (heterotopy) within the body of aspects of embryonic development would change the shape of a descendant's body compared to an ancestor's, were first formulated by Haeckel in the 1870s. These elements of his thinking about development have thus survived, whereas his theory of recapitulation has not. The Haeckelian form of recapitulation theory is considered defunct. Embryos do undergo a period or phylotypic stage where their morphology is strongly shaped by their phylogenetic position, rather than selective pressures, but that means only that they resemble other embryos at that stage, not ancestral adults as Haeckel had claimed. The modern view is summarised by the University of California Museum of Paleontology: Applications to other areas The idea that ontogeny recapitulates phylogeny has been applied to some other areas. Cognitive development English philosopher Herbert Spencer was one of the most energetic proponents of evolutionary ideas to explain many phenomena. In 1861, five years before Haeckel first published on the subject, Spencer proposed a possible basis for a cultural recapitulation theory of education with the following claim: G. Stanley Hall used Haeckel's theories as the basis for his theories of child development. 
His most influential work, "Adolescence: Its Psychology and Its Relations to Physiology, Anthropology, Sociology, Sex, Crime, Religion and Education" in 1904 suggested that each individual's life course recapitulated humanity's evolution from "savagery" to "civilization". Though he has influenced later childhood development theories, Hall's conception is now generally considered racist. Developmental psychologist Jean Piaget favored a weaker version of the formula, according to which ontogeny parallels phylogeny because the two are subject to similar external constraints. The Austrian pioneer of psychoanalysis, Sigmund Freud, also favored Haeckel's doctrine. He was trained as a biologist under the influence of recapitulation theory during its heyday, and retained a Lamarckian outlook with justification from the recapitulation theory. Freud also distinguished between physical and mental recapitulation, in which the differences would become an essential argument for his theory of neuroses. In the late 20th century, studies of symbolism and learning in the field of cultural anthropology suggested that "both biological evolution and the stages in the child's cognitive development follow much the same progression of evolutionary stages as that suggested in the archaeological record". Music criticism The musicologist Richard Taruskin in 2005 applied the phrase "ontogeny becomes phylogeny" to the process of creating and recasting music history, often to assert a perspective or argument. For example, the peculiar development of the works by modernist composer Arnold Schoenberg (here an "ontogeny") is generalized in many histories into a "phylogeny" – a historical development ("evolution") of Western music toward atonal styles of which Schoenberg is a representative. Such historiographies of the "collapse of traditional tonality" are faulted by music historians as asserting a rhetorical rather than historical point about tonality's "collapse". Taruskin also developed a variation of the motto into the pun "ontogeny recapitulates ontology" to refute the concept of "absolute music" advancing the socio-artistic theories of the musicologist Carl Dahlhaus. Ontology is the investigation of what exactly something is, and Taruskin asserts that an art object becomes that which society and succeeding generations made of it. For example, Johann Sebastian Bach's St. John Passion, composed in the 1720s, was appropriated by the Nazi regime in the 1930s for propaganda. Taruskin claims the historical development of the St John Passion (its ontogeny) as a work with an anti-Semitic message does, in fact, inform the work's identity (its ontology), even though that was an unlikely concern of the composer. Music or even an abstract visual artwork can not be truly autonomous ("absolute") because it is defined by its historical and social reception. See also Glottogony Stage theory Psychomotor patterning Notes References Sources Further reading Borchert. Catherine M. and Zihlman, Adrienne L. (1990) The ontogeny and phylogeny of symbolizing, in Foster and Botscharow (eds) The Life of Symbols Bates, E., with L. Benigni, I. Bretherton, L. Camaioni, & V. Volterra. (1979). The emergence of symbols: Cognition and communication in infancy. New York: Academic Press Gerhard Medicus (2017, chapter 8). 
Being Human – Bridging the Gap between the Sciences of Body and Mind, Berlin VWB External links Of Parts and Wholes: Self-similarity and Synecdoche in Science, Culture and Literature Biology theories Obsolete biology theories History of evolutionary biology Evolutionary developmental biology
Recapitulation theory
Biology
2,250
60,132,831
https://en.wikipedia.org/wiki/Qiddiya
Qiddiya is a planned entertainment and tourism megaproject in Riyadh, Saudi Arabia. Construction started at the beginning of 2019. It was planned to open in 2023, though as of 2024, major projects including Six Flags Qiddiya City, the Aquarabia waterpark, and the Formula One racetrack are incomplete (with the racetrack now being scheduled to open in 2027). It is part of the Saudi Vision 2030 program, which aims to diversify the Saudi economy. History The project, which was announced in April 2017, is part of a goal to increase local spending and diversify the Saudi economy under Saudi Vision 2030. The project is supported by the Public Investment Fund. According to the organizers, the number of annual visitors will reach 17 million by 2030, making it the "largest tourism destination worldwide". It is expected to create 325,000 jobs. The first phase of the project was planned to be completed by 2023. Upon the completion of this phase, 45 individual projects are to be completed. There is a partnership agreement between Qiddiya and the University of Central Florida to train young Saudis on hospitality, tourism and sports management. Proposed activities Qiddiya Speed Park An FIA Grade 1 motor racing circuit, Qiddiya Speed Park, is expected to hold a Formula One or MotoGP race, currently planned for 2027. Construction delays to the racing circuit have caused numerous issues. The inaugural Saudi Arabian Grand Prix was scheduled to be held at the course in December 2021, but because construction was unfinished, the event took place at the Jeddah Corniche Circuit. Because of the Formula One Group announcement in November 2020 that a Grand Prix would be held at the circuit in 2021, the Saudi government allegedly paid tens of millions of dollars to maintain hosting rights after completion of the project was delayed until 2027. The announcement led to criticism against Saudi Arabia from human rights organizations for attempting to sportswash its image. Six Flags Qiddiya Six Flags Qiddiya, which is currently under construction, is planned to be the largest theme park in the Middle East at , and is expected to have multiple attractions. Notable attractions will include Falcons Flight, the world's fastest, tallest, and longest roller coaster (which will partly go over the Qiddiya circuit), and the world's tallest drop tower ride. Dragon Ball theme park In March 2024, Qiddiya, in collaboration with Toei Animation, announced the construction of the "world's only ever Dragon Ball theme park". It is planned to include over 30 attractions, including a "70 meter-high Shenron" statue that contains a roller coaster. When the park was announced, it was met with negative reception from international Dragon Ball fans, who criticized Saudi Arabia's human rights record, treatment of women, and lack of recognition of LGBT rights. See also List of Saudi Vision 2030 Projects Saudi Vision 2030 Neom References External links Six Flags Qiddiya Megaprojects Planned communities in Saudi Arabia Tourist attractions in Saudi Arabia
Qiddiya
Engineering
618
74,936,187
https://en.wikipedia.org/wiki/Deuremidevir
Deuremidevir, also known as VV116, is a nucleoside analogue antiviral drug. It is administered as oral tablets, which contain the hydrobromide salt of the drug. The drug is a deuterated tri-isobutyrate of GS-441524, the active metabolite of remdesivir. It was first described in a November 2020 preprint by a team including members of the Wuhan Institute of Virology and Vigonvita. It completed a phase 3 trial in 2022. Results from a separate Phase 3 trial conducted in mainland China from October 2022 to January 2023 suggested that deuremidevir may shorten the duration of COVID-19 symptoms in non-hospitalized adults with mild-to-moderate disease compared to placebo. Junshi, which markets the drug, received conditional approval from China's National Medical Products Administration in January 2023. In November 2023, in response to viral mutations and changing characteristics of infection, the WHO adjusted its treatment guidelines. Among other changes, the use of deuremidevir was recommended against, except for clinical trials. Research In preclinical studies, a single high dose of the drug (at least 1.0 g/kg) was shown to be tolerated in rats and dogs. References Anti–RNA virus drugs Antiviral drugs Deuterated compounds Isobutyrate esters Nitriles Nucleosides COVID-19 drug development
Deuremidevir
Chemistry,Biology
307
2,099,359
https://en.wikipedia.org/wiki/Orbiting%20Astronomical%20Observatory
The Orbiting Astronomical Observatory (OAO) satellites were a series of four American space observatories launched by NASA between 1966 and 1972, managed by NASA Chief of Astronomy Nancy Grace Roman. These observatories, including the first successful space telescope, provided the first high-quality observations of many objects in ultraviolet light. Although two OAO missions were failures, the success of the other two increased awareness within the astronomical community of the benefits of space-based observations, and led to the instigation of the Hubble Space Telescope. OAO-1 The first Orbiting Astronomical Observatory was launched successfully on 8 April 1966, carrying instruments to detect ultraviolet, X-ray and gamma ray emission. Before the instruments could be activated, a power failure resulted in the termination of the mission after three days. The spacecraft was out of control, so that the solar panels could not be deployed to recharge the batteries that would supply power to the electrical and electronic equipment on board. OAO-2 Stargazer Orbiting Astronomical Observatory 2 (OAO-2, nicknamed Stargazer) was launched on 7 December 1968, and carried 11 ultraviolet telescopes. It observed successfully until January 1973, and contributed to many significant astronomical discoveries. Among these were the discovery that comets are surrounded by enormous haloes of hydrogen, several hundred thousand kilometres across, and observations of novae which found that their UV brightness often increased during the decline in their optical brightness. OAO-B OAO-B carried an ultraviolet telescope, and should have provided spectra of fainter objects than had previously been observable. The satellite was launched on 30 November 1970 with "the largest space telescope ever launched", but never made it into orbit. The payload fairing did not separate properly during ascent, and its excess weight prevented the Centaur stage from achieving orbital velocity. The Centaur and OAO reentered the atmosphere and broke up, destroying a $98,500,000 project. The disaster was later traced to a flaw in a $100 explosive bolt that failed to fire. OAO-3 (Copernicus) OAO-3 was launched on 21 August 1972, and proved to be the most successful of the OAO missions. It was a collaborative effort between NASA and the UK's Science Research Council (currently known as the Science and Engineering Research Council). After its launch, it was named Copernicus to mark the 500th anniversary of the birth of Nicolaus Copernicus in 1473. Copernicus operated until February 1981, and returned high resolution spectra of hundreds of stars along with extensive X-ray observations. Among the significant discoveries made by Copernicus were the discovery of several long-period pulsars such as X Persei that had rotation times of many minutes instead of the more typical second or less, and confirmation that most of the hydrogen in interstellar gas clouds existed in molecular form. Launches OAO-1: Atlas-Agena D from Launch Complex 12, Cape Canaveral, Florida OAO-2, OAO-B and OAO-3: Atlas SLV-3C from Launch Complex 36, Cape Canaveral, Florida In popular culture In the first season of the alternate history space drama show For All Mankind, a repair mission to an OAO-like satellite was the original objective of the Apollo 25 mission, with an artist's conception of OAO-1 appearing on the mission's emblem. 
See also Timeline of artificial satellites and space probes Hubble Space Telescope References Code A.D., Houck T.E., McNall J.F., Bless R.C., Lillie C.F. (1970), Ultraviolet Photometry from the Orbiting Astronomical Observatory. I. Instrumentation and Operation, Astrophysical Journal, v. 161, p.377 Rogerson J.B., Spitzer L., Drake J.F., Dressler K., Jenkins E.B., Morton D.C. (1973), Spectrophotometric Results from the Copernicus Satellite. I. Instrumentation and Performance, Astrophysical Journal, v. 181, p. L97 1966 in spaceflight 1968 in spaceflight 1970 in spaceflight 1972 in spaceflight Orbiting Astronomical Observatory
Orbiting Astronomical Observatory
Astronomy
865
92,310
https://en.wikipedia.org/wiki/Illuminated%20manuscript
An illuminated manuscript is a formally prepared document where the text is decorated with flourishes such as borders and miniature illustrations. Often used in the Roman Catholic Church for prayers and liturgical books such as psalters and courtly literature, the practice continued into secular texts from the 13th century onward and typically include proclamations, enrolled bills, laws, charters, inventories, and deeds. The earliest surviving illuminated manuscripts are a small number from late antiquity, and date from between 400 and 600. Examples include the Vergilius Romanus, Vergilius Vaticanus, and the Rossano Gospels. The majority of extant manuscripts are from the Middle Ages, although many survive from the Renaissance. While Islamic manuscripts can also be called illuminated and use essentially the same techniques, comparable Far Eastern and Mesoamerican works are described as painted. Most manuscripts, illuminated or not, were written on parchment until the 2nd century BCE, when a more refined material called vellum, made from stretched calf skin, was supposedly introduced by King Eumenes II of Pergamum. This gradually became the standard for luxury illuminated manuscripts, although modern scholars are often reluctant to distinguish between parchment and vellum, and the skins of various animals might be used. The pages were then normally bound into codices (singular: codex), that is the usual modern book format, although sometimes the older scroll format was used, for various reasons. A very few illuminated fragments also survive on papyrus. Books ranged in size from ones smaller than a modern paperback, such as the pocket gospel, to very large ones such as choirbooks for choirs to sing from, and "Atlantic" bibles, requiring more than one person to lift them. Paper manuscripts appeared during the Late Middle Ages. The untypically early 11th century Missal of Silos is from Spain, near to Muslim paper manufacturing centres in Al-Andalus. Textual manuscripts on paper become increasingly common, but the more expensive parchment was mostly used for illuminated manuscripts until the end of the period. Very early printed books left spaces for red text, known as rubrics, miniature illustrations and illuminated initials, all of which would have been added later by hand. Drawings in the margins (known as marginalia) would also allow scribes to add their own notes, diagrams, translations, and even comic flourishes. The introduction of printing rapidly led to the decline of illumination. Illuminated manuscripts continued to be produced in the early 16th century but in much smaller numbers, mostly for the very wealthy. They are among the most common items to survive from the Middle Ages; many thousands survive. They are also the best surviving specimens of medieval painting, and the best preserved. Indeed, for many areas and time periods, they are the only surviving examples of painting. History Latin Europe Art historians classify illuminated manuscripts into their historic periods and types, including (but not limited to) Late Antique, Insular, Carolingian, Ottonian, Romanesque, Gothic, and Renaissance manuscripts. There are a few examples from later periods. Books that are heavily and richly illuminated are sometimes known as "display books" in church contexts, or "luxury manuscripts", especially if secular works. In the first millennium, these were most likely to be Gospel Books, such as the Lindisfarne Gospels and the Book of Kells. 
The Book of Kells is the most widely recognized illuminated manuscript in the Anglosphere, and is famous for its insular designs. The Romanesque and Gothic periods saw the creation of many large illuminated complete bibles. The largest surviving example of these is the Codex Gigas in Sweden; it is so massive that it takes three librarians to lift it. Other illuminated liturgical books appeared during and after the Romanesque period. These included psalters, which usually contained all 150 canonical psalms, and small, personal devotional books made for lay people, known as books of hours, that would separate one's day into eight hours of devotion. These were often richly illuminated with miniatures, decorated initials and floral borders. They were costly and therefore only owned by wealthy patrons, often women. As the production of manuscripts shifted from monasteries to the public sector during the High Middle Ages, illuminated books began to reflect secular interests. These included short stories, legends of the saints, tales of chivalry, mythological stories, and even accounts of criminal, social or miraculous occurrences. Some of these were also freely used by storytellers and itinerant actors to support their plays. Among the most popular secular texts of the time were bestiaries. These books contained illuminated depictions of various animals, both real and fictional, and often focused on their religious symbolism and significance, as it was a widespread belief in post-classical Europe that animals, and all other organisms on Earth, were manifestations of God. These manuscripts served as both devotional guidance and entertainment for the working class of the Middle Ages. The Gothic period, which generally saw an increase in the production of illuminated books, also saw more secular works such as chronicles and works of literature illuminated. Wealthy people began to build up personal libraries; Philip the Bold, who probably had the largest personal library of his time in the mid-15th century, is estimated to have had about 600 illuminated manuscripts, whilst a number of his friends and relations had several dozen. Wealthy patrons, however, could have personal prayer books made especially for them, usually in the form of richly illuminated "books of hours", which set down prayers appropriate for various times in the liturgical day. One of the best known examples is the extravagant Très Riches Heures du Duc de Berry, made for a French prince. Up to the 12th century, most manuscripts were produced in monasteries in order to add to the library or after receiving a commission from a wealthy patron. Larger monasteries often contained a separate area, called a scriptorium, for the monks who specialized in the production of manuscripts. Within the walls of a scriptorium were individualized areas where a monk could sit and work on a manuscript without being disturbed by his fellow brethren. If no scriptorium was available, then "separate little rooms were assigned to book copying; they were situated in such a way that each scribe had to himself a window open to the cloister walk." By the 14th century, the cloisters of monks writing in the scriptorium had almost fully given way to commercial urban scriptoria, especially in Paris, Rome and the Netherlands. While the process of creating an illuminated manuscript did not change, the move from monasteries to commercial settings was a radical step. Demand for manuscripts grew to an extent that monastic libraries began to employ secular scribes and illuminators. 
These individuals often lived close to the monastery and, in some instances, dressed as monks whenever they entered the monastery, but were allowed to leave at the end of the day. Illuminators were often well known and acclaimed and many of their identities have survived. Greek Europe and the Islamic world The Byzantine world produced manuscripts in its own style, versions of which spread to other Orthodox and Eastern Christian areas. This distinct Byzantine style of illumination had a characteristic color palette along with different ways of preparing pigments and ink and a unique finish to the vellum writing surface, which was not as conducive to long-term preservation as the more textured Western style. With its traditions of literacy uninterrupted by the Middle Ages, the Muslim world, especially on the Iberian Peninsula, was instrumental in delivering ancient classic works to the growing intellectual circles and universities of Western Europe throughout the 12th century. Books were produced there in large numbers and on paper for the first time in Europe, and with them full treatises on the sciences, especially astrology and medicine, where illumination was required to have profuse and accurate representations with the text. The origins of the pictorial tradition of Arabic illustrated manuscripts are uncertain. The first known decorated manuscripts are some Qur'ans from the 9th century. They were not illustrated, but were "illuminated" with decorations of the frontispieces or headings. The tradition of illustrated manuscripts started with the Graeco-Arabic translation movement and the creation of scientific and technical treatises often based on Greek scientific knowledge, such as the Arabic versions of The Book of Fixed Stars (965 CE), De materia medica or Book of the Ten Treatises of the Eye. The translators were most often Arab Syriac Christians, such as Hunayn ibn Ishaq or Yahya ibn Adi, and their work is known to have been sponsored by local rulers, such as the Artuqids. An explosion of artistic production in Arabic manuscripts occurred in the 12th and especially the 13th century. Thus various Syriac manuscripts of the twelfth and thirteenth centuries, such as Syriac Gospels, Vatican Library, Syr. 559 or Syriac Gospels, British Library, Add. 7170, were derived from the Byzantine tradition, yet stylistically have a lot in common with Islamic illustrated manuscripts such as the Maqāmāt al-Ḥarīrī, pointing to a common pictorial tradition that existed since circa 1180 in Syria and Iraq, which was highly influenced by Byzantine art. Some of the illustrations of these manuscripts have been characterized as "illustration byzantine traitée à la manière arabe" ("Byzantine illustration treated in the Arab style"). The Persian miniature tradition mostly began in whole books, rather than single pages for muraqqas or albums, as later became more common. The Great Mongol Shahnameh, probably from the 1330s, is a very early manuscript of one of the most common works for grand illustrated books in Persian courts. Techniques Styles and techniques of manuscript illumination varied by region, and there were distinct differences in aspects like color palette, decoration style, and peak periods of output. Certain places like the Celtic regions specialized in more ornamental details in contrast to the Byzantine pictorial designs, and regions such as Flanders were more prolific in manuscript production much later than other places. 
Illumination was a complex and costly process, and was therefore usually reserved for special books such as altar bibles, or books for royalty. Heavily illuminated manuscripts are often called "luxury manuscripts" for this reason. In the early Middle Ages, most books were produced in monasteries, whether for their own use, for presentation, or for a commission. These monks would work as a collective group to sponsor the patronage of a manuscript, but that in turn shielded their identities somewhat from history: there are more surviving signatures on works from the scribes and fewer from the illuminators, but often there is simply the signature of the patron monastery. However, commercial scriptoria grew up in large cities, especially Paris, and in Italy and the Netherlands, and by the late 14th century there was a significant industry producing manuscripts, including agents who would take long-distance commissions, with details of the heraldry of the buyer and the saints of personal interest to him (for the calendar of a book of hours). By the end of the period, many of the painters were women, especially painting the elaborate borders, and perhaps especially in Paris. Text The type of script depended on local customs and tastes. In England, for example, Textura was widely used from the 12th to 16th centuries, while a cursive hand known as Anglicana emerged around 1260 for business documents. In the Frankish Empire, Carolingian minuscule emerged under the vast educational program of Charlemagne. The first step was to send the manuscript to a rubricator, "who added (in red or other colors) the titles, headlines, the initials of chapters and sections, the notes and so on; and then – if the book was to be illustrated – it was sent to the illuminator". These letters and notes would be applied using an ink-pot and either a sharpened quill feather or a reed pen. In the case of manuscripts that were sold commercially, the writing would "undoubtedly have been discussed initially between the patron and the scribe (or the scribe's agent), but by the time the written gatherings were sent off to the illuminator, there was no longer any scope for innovation." The sturdy Roman letters of the early Middle Ages gradually gave way to scripts such as Uncial and half-Uncial, especially in the British Isles, where distinctive scripts such as insular majuscule and insular minuscule developed. Stocky, richly textured blackletter was first seen around the 13th century and was particularly popular in the later Middle Ages. Prior to the days of such careful planning, "A typical black-letter page of these Gothic years would show a page in which the lettering was cramped and crowded into a format dominated by huge ornamented capitals that descended from uncial forms or by illustrations". To prevent such poorly made manuscripts and illuminations from occurring, a script was typically supplied first, "and blank spaces were left for the decoration. This presupposes very careful planning by the scribe even before he put pen to parchment." 
Engrossing: The process of illumination The following steps outline the detailed labor involved in creating the illuminations of one page of a manuscript: Silverpoint drawing of the design is executed Burnished gold dots are applied Application of modulating colors Continuation of previous three steps in addition to outlining marginal figures Penning of a rinceau appearing in the border of the page Finally, marginal figures are painted The illumination and decoration were normally planned at the inception of the work, and space reserved for them. However, the text was usually written before illumination began. In the Early Medieval period the text and illumination were often done by the same people, normally monks, but by the High Middle Ages the roles were typically separated, except for routine initials and flourishes, and by at least the 14th century there were secular workshops producing manuscripts, and by the beginning of the 15th century these were producing most of the best work, and were commissioned even by monasteries. When the text was complete, the illustrator set to work. Complex designs were planned out beforehand, probably on wax tablets, the sketch pad of the era. The design was then traced or drawn onto the vellum (possibly with the aid of pinpricks or other markings, as in the case of the Lindisfarne Gospels). Many incomplete manuscripts survive from most periods, giving us a good idea of working methods. At all times, most manuscripts did not have images in them. In the early Middle Ages, manuscripts tended to be either display books with very full illumination, or manuscripts for study with at most a few decorated initials and flourishes. By the Romanesque period many more manuscripts had decorated or historiated initials, and manuscripts essentially for study often contained some images, often not in color. This trend intensified in the Gothic period, when most manuscripts had at least decorative flourishes in places, and a much larger proportion had images of some sort. Display books of the Gothic period in particular had very elaborate decorated borders of foliate patterns, often with small drolleries. A Gothic page might contain several areas and types of decoration: a miniature in a frame, a historiated initial beginning a passage of text, and a border with drolleries. Often different artists worked on the different parts of the decoration. Paints While the use of gold is by far one of the most captivating features of illuminated manuscripts, the bold use of varying colors provided multiple layers of dimension to the illumination. From a religious perspective, "the diverse colors wherewith the book is illustrated, not unworthily represent the multiple grace of heavenly wisdom." There is evidence of illuminators planning out color choice in advance, which indicates purposeful choice and design in the finished product. There is also a great deal of nuance when it comes to the colors and painting of manuscripts. Illuminators would be trained in color combinations and stylistic distinctions by a form of apprenticeship, so the limited number of primary literary sources discussing colors and techniques may not be accurate to what the actual illuminators learned and followed. The medieval artist's palette was broad. Gilding On the strictest definition, a manuscript is not considered "illuminated" unless one or many illuminations contained metal, normally gold leaf or shell gold paint, or at least was brushed with gold specks. 
Gold leaf was from the 12th century usually polished, a process known as burnishing. The inclusion of gold alludes to many different possibilities for the text. If the text is of religious nature, lettering in gold is a sign of exalting the text. In the early centuries of Christianity, Gospel manuscripts were sometimes written entirely in gold. The gold ground style, with all or most of the background in gold, was taken from Byzantine mosaics and icons. Aside from adding rich decoration to the text, scribes during the time considered themselves to be praising God with their use of gold. Furthermore, gold was used if a patron who had commissioned a book to be written wished to display the vastness of their riches. Eventually, the addition of gold to manuscripts became so frequent "that its value as a barometer of status with the manuscript was degraded". During this time period the price of gold had become so cheap that its inclusion in an illuminated manuscript accounted for only a tenth of the cost of production. By adding richness and depth to the manuscript, the use of gold in illuminations created pieces of art that are still valued today. The application of gold leaf or dust to an illumination is a very detailed process that only the most skilled illuminators can undertake and successfully achieve. The first detail an illuminator considered when dealing with gold was whether to use gold leaf or specks of gold that could be applied with a brush. When working with gold leaf, the pieces would be hammered and thinned. The use of this type of leaf allowed for numerous areas of the text to be outlined in gold. There were several ways of applying gold to an illumination. One of the most popular included mixing the gold with stag's glue and then "pour it into water and dissolve it with your finger." Once the gold was soft and malleable in the water, it was ready to be applied to the page. Illuminators had to be very careful when applying gold leaf to the manuscript because gold leaf is able to "adhere to any pigment which had already been laid, ruining the design, and secondly the action of burnishing it is vigorous and runs the risk of smudging any painting already around it." Patrons At least in earlier periods, monasteries were the biggest manufacturers of illuminated manuscripts. They produced manuscripts for their own use; heavily illuminated ones tended to be reserved for liturgical use in the early period, while the monastery library held plainer texts. In the early period manuscripts were often commissioned by rulers for their own personal use or as diplomatic gifts, and many old manuscripts continued to be given in this way, even into the Early Modern period. Especially after the book of hours became popular, wealthy individuals commissioned works as a sign of status within the community, sometimes including donor portraits or heraldry: "In a scene from the New Testament, Christ would be shown larger than an apostle, who would be bigger than a mere bystander in the picture, while the humble donor of the painting or the artist himself might appear as a tiny figure in the corner." The calendar was also personalized, recording the feast days of local or family saints. By the end of the Middle Ages even many religious manuscripts were produced in secular commercial workshops, such as that of William de Brailes in 13th-century Oxford, for distribution through a network of agents, and blank spaces might be reserved for the appropriate heraldry to be added locally by the buyer. 
The growing genre of luxury illuminated manuscripts of secular works was very largely produced in commercial workshops, mostly in cities such as Paris, Ghent, Bruges and north Italy. Gallery See also Gothic book illustration Renaissance illumination References Sources Alexander, Jonathan A.G., Medieval Illuminators and their Methods of Work, 1992, Yale UP, Coleman, Joyce, Mark Cruse, and Kathryn A. Smith, eds. The Social Life of Illumination: Manuscripts, Images, and Communities in the Late Middle Ages (Series: Medieval Texts and Cultures in Northern Europe, vol. 21. Turnhout: Brepols Publishing, 2013). xxiv + 552 pp online review Calkins, Robert G. Illuminated Books of the Middle Ages. 1983, Cornell University Press, Camille, M. (1992). Image on the edge: the margins of medieval art. Harvard University Press. De Hamel, Christopher. A History of Illuminated Manuscript (Phaidon, 1986) Kren, T. & McKendrick, Scot (eds), Illuminating the Renaissance – The Triumph of Flemish Manuscript Painting in Europe, Getty Museum/Royal Academy of Arts, 2003, Liepe, Lena. Studies in Icelandic Fourteenth Century Book Painting, Reykholt: Snorrastofa, rit. vol. VI, 2009. Melo, M.J., Castro, R., Nabais, P. et al. The book on how to make all the colour paints for illuminating books: unravelling a Portuguese Hebrew illuminators' manual' ' Herit Sci 6, 44 (2018). https://doi.org/10.1186/s40494-018-0208-z Morgan, Nigel J., Stella Panayotova, and Martine Meuwese. Illuminated Manuscripts in Cambridge: A Catalogue of Western Book Illumination in the Fitzwilliam Museum and the Cambridge Colleges (London : Harvey Miller Publishers in conjunction with the Modern Humanities Association. 1999– ) Pächt, Otto, Book Illumination in the Middle Ages (trans fr German), 1986, Harvey Miller Publishers, London, Wieck, Roger. "Folia Fugitiva: The Pursuit of the Illuminated Manuscript Leaf". The Journal of the Walters Art Gallery, Vol. 54, 1996. External links Images Illuminated Manuscripts in the J. Paul Getty Museum – Los Angeles (archived 17 September 2006) Illuminated Manuscript Leaves. Digitized illuminated manuscripts from the University of Louisville Libraries. 15 pages of illuminated manuscripts from the Ball State University Digital Media Repository Digitized Illuminated Manuscripts – Complete sets of high-resolution archival images from the Walters Art Museum Collection of Armenian Illuminated Manuscripts – A full collection with high resolution images of Armenian Illuminated Manuscripts Resources UCLA Library Special Collections collection of Medieval and Renaissance manuscripts British Library, catalogue of illuminated manuscripts Collection of illuminated manuscripts from the Koninklijke Bibliotheek and Museum Meermanno-Westreenianum in The Hague. CORSAIR. Thousands of digital images from the Morgan Library's renowned collection of medieval and Renaissance manuscripts Manuscript Miniatures, a collection of illustrations from manuscripts made before 1450 A Collection of Indonesian Illuminated Manuscripts | Southeast Asia Digital Library Related articles The Missal of Thomas James Books by type Book arts Book design Book terminology Christian genres Gilding .Illuminated Textual scholarship Western art
Illuminated manuscript
Engineering
4,639
5,926
https://en.wikipedia.org/wiki/Computation
A computation is any type of arithmetic or non-arithmetic calculation that is well-defined. Common examples of computation are mathematical equation solving and the execution of computer algorithms. Mechanical or electronic devices (or, historically, people) that perform computations are known as computers. Computer science is an academic field that involves the study of computation. Introduction The notion that mathematical statements should be 'well-defined' had been argued by mathematicians since at least the 1600s, but agreement on a suitable definition proved elusive. A candidate definition was proposed independently by several mathematicians in the 1930s. The best-known variant was formalised by the mathematician Alan Turing, who defined a well-defined statement or calculation as any statement that could be expressed in terms of the initialisation parameters of a Turing machine. Other (mathematically equivalent) definitions include Alonzo Church's lambda-definability, Herbrand-Gödel-Kleene's general recursiveness and Emil Post's 1-definability. Today, any formal statement or calculation that exhibits this quality of well-definedness is termed computable, while the statement or calculation itself is referred to as a computation. Turing's definition apportioned "well-definedness" to a very large class of mathematical statements, including all well-formed algebraic statements, and all statements written in modern computer programming languages. Despite the widespread uptake of this definition, there are some mathematical concepts that have no well-defined characterisation under this definition. This includes the halting problem and the busy beaver game. It remains an open question as to whether there exists a more powerful definition of 'well-defined' that is able to capture both computable and 'non-computable' statements. Some examples of mathematical statements that are computable include: All statements characterised in modern programming languages, including C++, Python, and Java. All calculations carried by an electronic computer, calculator or abacus. All calculations carried out on an analytical engine. All calculations carried out on a Turing Machine. The majority of mathematical statements and calculations given in maths textbooks. Some examples of mathematical statements that are not computable include: Calculations or statements which are ill-defined, such that they cannot be unambiguously encoded into a Turing machine: ("Paul loves me twice as much as Joe"). Problem statements which do appear to be well-defined, but for which it can be proved that no Turing machine exists to solve them (such as the halting problem). The Physical process of computation Computation can be seen as a purely physical process occurring inside a closed physical system called a computer. Turing's 1937 proof, On Computable Numbers, with an Application to the Entscheidungsproblem, demonstrated that there is a formal equivalence between computable statements and particular physical systems, commonly called computers. Examples of such physical systems are: Turing machines, human mathematicians following strict rules, digital computers, mechanical computers, analog computers and others. Alternative accounts of computation The mapping account An alternative account of computation is found throughout the works of Hilary Putnam and others. Peter Godfrey-Smith has dubbed this the "simple mapping account." 
Gualtiero Piccinini's summary of this account states that a physical system can be said to perform a specific computation when there is a mapping between the state of that system and the computation such that the "microphysical states [of the system] mirror the state transitions between the computational states." The semantic account Philosophers such as Jerry Fodor have suggested various accounts of computation with the restriction that semantic content be a necessary condition for computation (that is, what differentiates an arbitrary physical system from a computing system is that the operands of the computation represent something). This notion attempts to prevent the logical abstraction of the mapping account of pancomputationalism, the idea that everything can be said to be computing everything. The mechanistic account Gualtiero Piccinini proposes an account of computation based on mechanical philosophy. It states that physical computing systems are types of mechanisms that, by design, perform physical computation, or the manipulation (by a functional mechanism) of a "medium-independent" vehicle according to a rule. "Medium-independence" requires that the property can be instantiated by multiple realizers and multiple mechanisms, and that the inputs and outputs of the mechanism also be multiply realizable. In short, medium-independence allows for the use of physical variables with properties other than voltage (as in typical digital computers); this is imperative in considering other types of computation, such as that which occurs in the brain or in a quantum computer. A rule, in this sense, provides a mapping among inputs, outputs, and internal states of the physical computing system. Mathematical models In the theory of computation, a diversity of mathematical models of computation has been developed. Typical mathematical models of computers are the following: State models including Turing machine, pushdown automaton, finite-state automaton, and PRAM Functional models including lambda calculus Logical models including logic programming Concurrent models including actor model and process calculi Giunti calls the models studied by computation theory computational systems, and he argues that all of them are mathematical dynamical systems with discrete time and discrete state space. He maintains that a computational system is a complex object which consists of three parts. First, a mathematical dynamical system with discrete time and discrete state space; second, a computational setup , which is made up of a theoretical part , and a real part ; third, an interpretation , which links the dynamical system with the setup . See also Computability theory Hypercomputation Computational problem Limits of computation Computationalism Notes References Theoretical computer science Computability theory
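As an illustrative aside only (not part of the article), the state models of computation listed above can be made concrete with a short program. The sketch below, in Python, simulates a single-tape Turing machine; the helper function, transition table, state names, and the binary-increment example are hypothetical choices made for the demonstration, and the bounded step count reflects the fact, noted earlier, that whether an arbitrary machine halts cannot in general be decided in advance.

```python
# Minimal sketch of a single-tape Turing machine simulator (illustrative only).
# The transition table, state names, and the increment example are hypothetical;
# they are not drawn from the article or from any particular formalisation.

def run_turing_machine(transitions, tape, state="start", blank="_", max_steps=10_000):
    """Simulate a Turing machine given as a dict:
    (state, read_symbol) -> (next_state, write_symbol, move), with move in {"L", "R"}.
    The machine halts when no transition applies; max_steps bounds the run,
    since whether an arbitrary machine halts is itself not computable."""
    cells = dict(enumerate(tape))      # sparse tape: position -> symbol
    head = 0
    for _ in range(max_steps):
        key = (state, cells.get(head, blank))
        if key not in transitions:     # no applicable rule: the machine halts
            break
        state, symbol, move = transitions[key]
        cells[head] = symbol
        head += 1 if move == "R" else -1
    result = "".join(cells[i] for i in sorted(cells)).strip(blank)
    return result, state

# Hypothetical example machine: increment a binary number written on the tape.
INCREMENT = {
    ("start", "0"): ("start", "0", "R"),   # scan right over the digits
    ("start", "1"): ("start", "1", "R"),
    ("start", "_"): ("carry", "_", "L"),   # reached the right end: begin adding 1
    ("carry", "1"): ("carry", "0", "L"),   # 1 plus carry -> 0, carry moves left
    ("carry", "0"): ("done",  "1", "L"),   # 0 plus carry -> 1, finished
    ("carry", "_"): ("done",  "1", "L"),   # overflow: write a new leading 1
}

print(run_turing_machine(INCREMENT, "1011"))   # ('1100', 'done'), i.e. 11 + 1 = 12
```

Run on the tape "1011" (eleven in binary), the machine halts with the tape reading "1100" (twelve): a finite, rule-governed transformation of discrete states of exactly the kind the definitions above count as a computation.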
Computation
Mathematics
1,171
38,164,153
https://en.wikipedia.org/wiki/Missouri%20v.%20McNeely
Missouri v. McNeely, 569 U.S. 141 (2013), was a case decided by the United States Supreme Court, on appeal from the Supreme Court of Missouri, regarding exceptions to the Fourth Amendment to the United States Constitution under exigent circumstances. The United States Supreme Court ruled that police must generally obtain a warrant before subjecting a drunken-driving suspect to a blood test, and that the natural metabolism of blood alcohol does not establish a per se exigency that would justify a blood draw without consent. Background At approximately 2:08 a.m. on October 3, 2010, Tyler McNeely was stopped after a highway patrol officer observed him exceed the posted speed limit and cross over the centerline. The officer reportedly noticed signs of intoxication from McNeely, including bloodshot eyes, slurred speech, and the smell of alcohol on his breath. McNeely failed field-sobriety tests administered by the officer. After McNeely refused to blow into a handheld breathalyzer and stated that he would also refuse a breathalyzer at the police station, the officer drove him directly to a medical center instead of the station. The officer did not seek a warrant to conduct the blood test, but asked McNeely for his consent, warning him that refusing a chemical test would result in his license being revoked for one year. McNeely continued to refuse, and at 2:35 a.m. the officer instructed a lab technician to draw a specimen of blood from McNeely. The results of the blood test showed a BAC of 0.154 percent, which was above the state's legal limit of 0.08 percent. McNeely was charged with driving while intoxicated and later moved to suppress the results of his blood test, arguing that it had been obtained unconstitutionally through an unreasonable search and seizure. Procedural history A trial judge sided with McNeely, ruling in his favor and suppressing the results of the blood test. The judge emphasized that conducting a blood test without a warrant constituted a breach of the suspect's Fourth Amendment protection against unreasonable searches and seizures. State prosecutors then argued that administering the test without a warrant was justified because blood alcohol metabolizes over time, so a delay in obtaining a warrant would amount to destruction of evidence, citing the exigent circumstances exception in the 1966 United States Supreme Court decision Schmerber v. California. On appeal, the state appeals court stated an intention to reverse, but transferred the case directly to the Missouri Supreme Court. The Missouri Supreme Court affirmed the trial court's decision that the officer had violated McNeely's Fourth Amendment rights. The United States Supreme Court granted a petition for writ of certiorari on 25 September 2012. Opinion of the Court A 5-4 Supreme Court affirmed the Missouri Supreme Court, agreeing that an involuntary blood draw is a "search" as that term is used in the Fourth Amendment. As such, a warrant is generally required. In its majority opinion, the Court reasoned that because McNeely's "case was unquestionably a routine DWI case" in which no factors other than the natural dissipation of blood alcohol suggested an emergency, the nonconsensual warrantless blood draw violated McNeely's Fourth Amendment right to be free from unreasonable searches of his person. 
However, the Court left open the possibility that the "exigent circumstances" exception to that general requirement might apply in some drunk-driving cases. See also Breithaupt v. Abram (1957) U.S. Supreme Court case in which the Court ruled that involuntary blood samples, taken by a skilled technician to determine intoxication, do not violate substantive due process under the Fourteenth Amendment Birchfield v. North Dakota (2016) U.S. Supreme Court case in which the Court ruled that a warrantless breath test, unlike a warrantless blood test, may be administered as a search incident to a lawful arrest for drunk driving Mitchell v. Wisconsin (2019) U.S. Supreme Court case in which the Court ruled that in the event a driver is unconscious and therefore cannot be given a breath test as an alternative to testing blood, exigent circumstances allow for the drawing of blood without a warrant. References External links 2012 in United States case law United States Fourth Amendment case law 2012 in Missouri Missouri state case law United States Supreme Court cases United States Supreme Court cases of the Roberts Court Alcohol law in the United States Blood tests
Missouri v. McNeely
Chemistry
916
21,514
https://en.wikipedia.org/wiki/Nanomedicine
Nanomedicine is the medical application of nanotechnology. Nanomedicine ranges from the medical applications of nanomaterials and biological devices, to nanoelectronic biosensors, and even possible future applications of molecular nanotechnology such as biological machines. Current problems for nanomedicine involve understanding the issues related to toxicity and environmental impact of nanoscale materials (materials whose structure is on the scale of nanometers, i.e. billionths of a meter). Functionalities can be added to nanomaterials by interfacing them with biological molecules or structures. The size of nanomaterials is similar to that of most biological molecules and structures; therefore, nanomaterials can be useful for both in vivo and in vitro biomedical research and applications. Thus far, the integration of nanomaterials with biology has led to the development of diagnostic devices, contrast agents, analytical tools, physical therapy applications, and drug delivery vehicles. Nanomedicine seeks to deliver a valuable set of research tools and clinically useful devices in the near future. The National Nanotechnology Initiative expects new commercial applications in the pharmaceutical industry that may include advanced drug delivery systems, new therapies, and in vivo imaging. Nanomedicine research is receiving funding from the US National Institutes of Health Common Fund program, supporting four nanomedicine development centers. The goal of funding this newer form of science is to further develop understanding of the biological, biochemical, and biophysical mechanisms of living tissues. More medical and drug companies today are becoming involved in nanomedical research and medications. These include Bristol-Myers Squibb, which focuses on drug delivery systems for immunology and fibrotic diseases; Moderna, known for its COVID-19 vaccine and its work on mRNA therapeutics; and Nanobiotix, a company that focuses on cancer and currently has a drug in testing that increases the effect of radiation on targeted cells. Other companies include Generation Bio, which specializes in genetic medicines and has developed the cell-targeted lipid nanoparticle, and Jazz Pharmaceuticals, which concentrates on cancer and neuroscience and developed Vyxeos, a drug that treats acute myeloid leukemia. Cytiva is a company that specializes in producing non-viral delivery systems for genomic medicines, including mRNA vaccines and other therapies utilizing nucleic acids, and Ratiopharm is known for manufacturing Pazenir, a drug for various cancers. Finally, Pacira specializes in pain management and is known for producing ZILRETTA for osteoarthritis knee pain, the first treatment without opioids. Nanomedicine sales reached $16 billion in 2015, with a minimum of $3.8 billion in nanotechnology R&D being invested every year. Global funding for emerging nanotechnology increased by 45% per year in recent years, with product sales exceeding $1 trillion in 2013. In 2023, the global market was valued at $189.55 billion and is predicted to exceed $500 billion in the next ten years. As the nanomedicine industry continues to grow, it is expected to have a significant impact on the economy. Drug delivery Nanotechnology has provided the possibility of delivering drugs to specific cells using nanoparticles. This use of drug delivery systems was first proposed in 1974 by Gregory Gregoriadis, who outlined liposomes as a drug delivery system for chemotherapy. 
The overall drug consumption and side-effects may be lowered significantly by depositing the active pharmaceutical agent in the diseased region only and in no higher dose than needed. Targeted drug delivery is intended to reduce the side effects of drugs, with concomitant decreases in consumption and treatment expenses. Additionally, targeted drug delivery reduces the side effects of crude or naturally occurring drugs by minimizing undesired exposure to healthy cells. Drug delivery focuses on maximizing bioavailability both at specific places in the body and over a period of time. This can potentially be achieved by molecular targeting by nanoengineered devices. A benefit of using the nanoscale for medical technologies is that smaller devices are less invasive and can possibly be implanted inside the body, plus biochemical reaction times are much shorter. These devices are faster and more sensitive than typical drug delivery. The efficacy of drug delivery through nanomedicine is largely based upon: a) efficient encapsulation of the drugs, b) successful delivery of the drug to the targeted region of the body, and c) successful release of the drug. Several nano-delivery drugs were on the market by 2019. Drug delivery systems, lipid- or polymer-based nanoparticles, can be designed to improve the pharmacokinetics and biodistribution of the drug. However, the pharmacokinetics and pharmacodynamics of nanomedicine are highly variable among different patients. When designed to avoid the body's defense mechanisms, nanoparticles have beneficial properties that can be used to improve drug delivery. Complex drug delivery mechanisms are being developed, including the ability to get drugs through cell membranes and into cell cytoplasm. Triggered response is one way for drug molecules to be used more efficiently. Drugs are placed in the body and only activate on encountering a particular signal. For example, a drug with poor solubility will be replaced by a drug delivery system where both hydrophilic and hydrophobic environments exist, improving the solubility. Drug delivery systems may also be able to prevent tissue damage through regulated drug release; reduce drug clearance rates; or lower the volume of distribution and reduce the effect on non-target tissue. However, the biodistribution of these nanoparticles is still imperfect due to the host's complex reactions to nano- and microsized materials and the difficulty in targeting specific organs in the body. Nevertheless, considerable work is still ongoing to optimize and better understand the potential and limitations of nanoparticulate systems. While research shows that targeting and distribution can be augmented by nanoparticles, the dangers of nanotoxicity become an important next step in further understanding their medical uses. The toxicity of nanoparticles varies, depending on size, shape, and material. These factors also affect the build-up and organ damage that may occur. Nanoparticles are made to be long-lasting, but this causes them to be trapped within organs, specifically the liver and spleen, as they cannot be broken down or excreted. This build-up of non-biodegradable material has been observed to cause organ damage and inflammation in mice. Delivering magnetic nanoparticles to a tumor using uneven stationary magnetic fields may lead to enhanced tumor growth. In order to avoid this, alternating electromagnetic fields should be used. 
Nanoparticles are under research for their potential to decrease antibiotic resistance or for various antimicrobial uses. Nanoparticles might also be used to circumvent multidrug resistance (MDR) mechanisms. Systems under research Advances in lipid nanotechnology were instrumental in engineering medical nanodevices and novel drug delivery systems, as well as in developing sensing applications. Another system for microRNA delivery under preliminary research is nanoparticles formed by the self-assembly of two different microRNAs to possibly shrink tumors. One potential application is based on small electromechanical systems, such as nanoelectromechanical systems being investigated for the active release of drugs and sensors for possible cancer treatment with iron nanoparticles or gold shells. Another system of drug delivery involving nanoparticles is the use of aquasomes, self-assembled nanoparticles with a nanocrystalline center and a coating made of a polyhydroxyl oligomer, covered in the desired drug, which protects it from dehydration and conformational change. Applications Some nanotechnology-based drugs that are commercially available or in human clinical trials include: Doxil was originally approved by the FDA for use against HIV-related Kaposi's sarcoma. It is now also being used to treat ovarian cancer and multiple myeloma. The drug is encased in liposomes, which helps to extend the life of the drug that is being distributed. Liposomes are self-assembling, spherical, closed colloidal structures that are composed of lipid bilayers that surround an aqueous space. The liposomes also help to increase the drug's functionality and to decrease the damage it does to the heart muscle specifically. Onivyde, liposome-encapsulated irinotecan for the treatment of metastatic pancreatic cancer, was approved by the FDA in October 2015. Rapamune is a nanocrystal-based drug that was approved by the FDA in 2000 to prevent organ rejection after transplantation. The nanocrystal components allow for increased drug solubility and dissolution rate, leading to improved absorption and high bioavailability. Cabenuva is approved by the FDA as cabotegravir extended-release injectable nano-suspension, plus rilpivirine extended-release injectable nano-suspension. It is indicated as a complete regimen for the treatment of HIV-1 infection in adults to replace the current antiretroviral regimen in those who are virologically suppressed (HIV-1 RNA less than 50 copies per mL) on a stable antiretroviral regimen with no history of treatment failure and with no known or suspected resistance to either cabotegravir or rilpivirine. This is the first FDA-approved injectable, complete regimen for HIV-1 infected adults that is administered once a month. Imaging In vivo imaging is another area where tools and devices are being developed. Using nanoparticle contrast agents, images such as ultrasound and MRI scans have better distribution of the agent and improved contrast. In cardiovascular imaging, nanoparticles have potential to aid visualization of blood pooling, ischemia, angiogenesis, atherosclerosis, and focal areas where inflammation is present. The small size of nanoparticles gives them properties that can be very useful in oncology, particularly in imaging. Quantum dots (nanoparticles with quantum confinement properties, such as size-tunable light emission), when used in conjunction with MRI (magnetic resonance imaging), can produce exceptional images of tumor sites. 
Nanoparticles of cadmium selenide (quantum dots) glow when exposed to ultraviolet light. When injected, they seep into cancer tumors. The surgeon can see the glowing tumor and use it as a guide for more accurate tumor removal. These nanoparticles are much brighter than organic dyes and only need one light source for activation. This means that the use of fluorescent quantum dots could produce higher-contrast images at a lower cost than today's organic dyes used as contrast media. The downside, however, is that quantum dots are usually made of quite toxic elements, but this concern may be addressed by use of fluorescent dopants, substances added to create fluorescence. Tracking movement can help determine how well drugs are being distributed or how substances are metabolized. It is difficult to track a small group of cells throughout the body, so scientists used to dye the cells. Because these dyes needed to be excited by light of a certain wavelength in order to light up, and different color dyes absorb different frequencies of light, as many light sources were needed as there were differently dyed cell groups. A way around this problem is with luminescent tags. These tags are quantum dots attached to proteins that penetrate cell membranes. The dots can be random in size, can be made of bio-inert material, and they demonstrate the nanoscale property that color is size-dependent. As a result, sizes are selected so that the frequency of light used to make a group of quantum dots fluoresce is an even multiple of the frequency required to make another group incandesce. Then both groups can be lit with a single light source. Researchers have also found a way to insert nanoparticles into the affected parts of the body so that those parts will glow, showing tumor growth or shrinkage as well as organ trouble. Sensing Nanotechnology-on-a-chip is one more dimension of lab-on-a-chip technology. Magnetic nanoparticles, bound to a suitable antibody, are used to label specific molecules, structures or microorganisms. Silica nanoparticles, in particular, are inert from a photophysical perspective and can accumulate a large number of dye(s) within their shells. Gold nanoparticles tagged with short DNA segments can be used to detect genetic sequences in a sample. Multicolor optical coding for biological assays has been achieved by embedding different-sized quantum dots into polymeric microbeads. Nanopore technology for analysis of nucleic acids converts strings of nucleotides directly into electronic signatures. Sensor test chips containing thousands of nanowires, able to detect proteins and other biomarkers left behind by cancer cells, could enable the detection and diagnosis of cancer in the early stages from a few drops of a patient's blood. Nanotechnology is helping to advance the use of arthroscopes, which are pencil-sized devices that are used in surgeries with lights and cameras so surgeons can do the surgeries with smaller incisions. The smaller the incision, the faster the healing time, which is better for patients. It is also helping to find a way to make an arthroscope smaller than a strand of hair. Research on nanoelectronics-based cancer diagnostics could lead to tests that can be done in pharmacies. The results promise to be highly accurate and the product promises to be inexpensive. They could take a very small amount of blood and detect cancer anywhere in the body in about five minutes, with a sensitivity that is a thousand times better than a conventional laboratory test. 
These devices are built with nanowires to detect cancer proteins; each nanowire detector is primed to be sensitive to a different cancer marker. The biggest advantage of the nanowire detectors is that they could test for anywhere from ten to one hundred similar medical conditions without adding cost to the testing device. Nanotechnology has also helped to personalize oncology for the detection, diagnosis, and treatment of cancer. Treatment can now be tailored to each individual's tumor for better performance. Researchers have found ways to target the specific part of the body that is affected by cancer. Sepsis treatment In contrast to dialysis, which works on the principle of the size-related diffusion of solutes and ultrafiltration of fluid across a semi-permeable membrane, purification using nanoparticles allows specific targeting of substances. Additionally, larger compounds which are commonly not dialyzable can be removed. The purification process is based on functionalized iron oxide or carbon coated metal nanoparticles with ferromagnetic or superparamagnetic properties. Binding agents such as proteins, antibiotics, or synthetic ligands are covalently linked to the particle surface. These binding agents are able to interact with target species, forming an agglomerate. Applying an external magnetic field gradient exerts a force on the nanoparticles, allowing them to be separated from the bulk fluid, thus removing contaminants. This can neutralize the toxicity of sepsis, but runs the risk of nephrotoxicity and neurotoxicity. The small size (< 100 nm) and large surface area of functionalized nanomagnets offer advantageous properties compared to hemoperfusion, which is a clinically used technique for the purification of blood and is based on surface adsorption. These advantages include high loading capacity, high selectivity towards the target compound, fast diffusion, low hydrodynamic resistance, and low dosage requirements. Tissue engineering Nanotechnology may be used as part of tissue engineering to help reproduce, repair, or reshape damaged tissue using suitable nanomaterial-based scaffolds and growth factors. If successful, tissue engineering may replace conventional treatments like organ transplants or artificial implants. Nanoparticles such as graphene, carbon nanotubes, molybdenum disulfide and tungsten disulfide are being used as reinforcing agents to fabricate mechanically strong biodegradable polymeric nanocomposites for bone tissue engineering applications. The addition of these nanoparticles to the polymer matrix at low concentrations (~0.2 weight %) significantly improves the compressive and flexural mechanical properties of polymeric nanocomposites. These nanocomposites may potentially serve as novel, mechanically strong, lightweight bone implants. For example, a flesh welder was demonstrated to fuse two pieces of chicken meat into a single piece using a suspension of gold-coated nanoshells activated by an infrared laser. This could be used to weld arteries during surgery. Another example is nanonephrology, the use of nanomedicine on the kidney. The full potential and implications of nanotechnology use within tissue engineering are not yet fully understood, despite research spanning the past two decades. Vaccine development Today, a significant proportion of vaccines against viral diseases are created using nanotechnology. 
Solid lipid nanoparticles represent a novel delivery system for some vaccines against SARS-CoV-2 (the virus that causes COVID-19). In recent decades, nanosized adjuvants have been widely used to enhance immune responses to targeted vaccine antigens. Inorganic nanoparticles of aluminum, silica and clay, as well as organic nanoparticles based on polymers and lipids, are commonly used adjuvants within modern vaccine formulations. Nanoparticles of natural polymers such as chitosan are commonly used adjuvants in modern vaccine formulations. Ceria nanoparticles appear very promising for both enhancing vaccine responses and mitigating inflammation, as their adjuvanticity can be adjusted by modifying parameters such as size, crystallinity, surface state, and stoichiometry. In addition, virus-like nanoparticles are also being researched. These structures allow vaccines to self-assemble without encapsulating viral RNA, making them non-infectious and incapable of replication. These virus-like nanoparticles are designed to elicit a strong immune response by using a self-assembled layer of virus capsid proteins. Medical devices Neuro-electronic interfacing Neuro-electronic interfacing is a visionary goal dealing with the construction of nanodevices that will permit computers to connect and interact with the nervous system. This idea requires the building of a molecular structure that will permit control and detection of nerve impulses by an external computer. Powering such nanodevices is a further challenge: a refuelable system implies energy is refilled continuously or periodically with external sonic, chemical, tethered, magnetic, or biological electrical sources, while a non-refuelable system implies that all power is drawn from internal energy storage, ceasing operation once the energy is depleted. A nanoscale enzymatic biofuel cell for self-powered nanodevices has been developed, using glucose from biofluids such as human blood or watermelons. One limitation to this innovation is the potential for electrical interference, leakage, or overheating due to power consumption. The wiring of the structure is extremely difficult because the wires must be positioned precisely in the nervous system. The structures that will provide the interface must also be compatible with the body's immune system. Cell repair machines Molecular nanotechnology is a speculative subfield of nanotechnology that explores the potential to engineer molecular assemblers—machines capable of reorganizing matter at a molecular or atomic scale. Nanomedicine would make use of these nanorobots, introduced into the body, to repair or detect damage and infections. Molecular nanotechnology is highly theoretical, seeking to anticipate what inventions nanotechnology might yield and to propose an agenda for future inquiry. The proposed elements of molecular nanotechnology, such as molecular assemblers and nanorobots, are far beyond current capabilities. Future advances in nanomedicine could give rise to life extension through the repair of many processes thought to be responsible for aging. K. Eric Drexler, one of the founders of nanotechnology, postulated cell repair machines, including ones operating within cells and utilizing as yet hypothetical molecular machines, in his 1986 book Engines of Creation, with the first technical discussion of medical nanorobots by Robert Freitas appearing in 1999. 
Raymond Kurzweil, a futurist and transhumanist, stated in his book The Singularity Is Near that he believes that advanced medical nanorobotics could completely remedy the effects of aging by 2030. According to Richard Feynman, it was his former graduate student and collaborator Albert Hibbs who originally suggested to him the idea of a medical use for Feynman's theoretical micromachines (see nanotechnology). Hibbs suggested that certain repair machines might one day be reduced in size to the point that it would, in theory, be possible to (as Feynman put it) "swallow the doctor". The idea was incorporated into Feynman's 1959 essay There's Plenty of Room at the Bottom. Regulatory impacts As nanomedicine continues to develop and becomes a potential treatment for diseases, regulatory challenges have come to light. Key regulatory considerations and challenges are faced by the Food and Drug Administration (FDA), the European Medicines Agency (EMA), and each manufacturing organization. The major challenges that companies face are reproducible manufacturing processes, scalability, availability of appropriate characterization methods, safety issues, and poor understanding of disease heterogeneity and patient preselection strategies. Despite these challenges, several therapeutic nanomedicine products have been approved by the FDA and EMA. In order to be approved for market, these therapies are evaluated for biocompatibility and immunotoxicity, and must undergo preclinical assessment. The current scope of approved nanomedicines is mainly nano-drugs, but as the field continues to grow and more applications of nanomedicine progress to a marketable scale, more regulatory oversight will be needed. See also British Society for Nanomedicine Biopharmaceutical Colloidal gold Heart nanotechnology IEEE P1906.1 – Recommended Practice for Nanoscale and Molecular Communication Framework Impalefection Monitoring (medicine) Nanobiotechnology Nanoparticle–biomolecule conjugate Nanozymes Nanotechnology in fiction Photodynamic therapy Top-down and bottom-up design References Nanotechnology Biotechnology
Nanomedicine
Materials_science,Engineering,Biology
4,748
20,246,683
https://en.wikipedia.org/wiki/Pneumococcal%20infection
Pneumococcal infection is an infection caused by the bacterium Streptococcus pneumoniae. S. pneumoniae is a common member of the bacterial flora colonizing the nose and throat of 5–10% of healthy adults and 20–40% of healthy children. However, it is also a cause of significant disease, being a leading cause of pneumonia, bacterial meningitis, and sepsis. The World Health Organization estimates that in 2005, pneumococcal infections were responsible for the death of 1.6 million children worldwide. Infections Pneumococcal pneumonia represents 15%–50% of all episodes of community-acquired pneumonia, 30–50% of all cases of acute otitis media, and a significant proportion of bloodstream infections and bacterial meningitis. As estimated by the WHO, in 2005 it killed about 1.6 million children every year worldwide with 0.7–1 million of them being under the age of five. The majority of these deaths were in developing countries. Pathogenesis S. pneumoniae is normally found in the nose and throat of 5–10% of healthy adults and 20–40% of healthy children. It can be found in higher amounts in certain environments, especially those where people are spending a great deal of time in close proximity to each other (day-care centers, military barracks). It attaches to nasopharyngeal cells through interaction of bacterial surface adhesins. This normal colonization can become infectious if the organisms are carried into areas such as the Eustachian tube or nasal sinuses where it can cause otitis media and sinusitis, respectively. Pneumonia occurs if the organisms are inhaled into the lungs and not cleared (again, viral infection, or smoking-induced ciliary paralysis might be contributing factors). The organism's polysaccharide capsule makes it resistant to phagocytosis and if there is no pre-existing anticapsular antibody alveolar macrophages cannot adequately kill the pneumococci. The organism spreads to the blood stream (where it can cause bacteremia) and is carried to the meninges, joint spaces, bones, and peritoneal cavity, and may result in meningitis, brain abscess, septic arthritis, or osteomyelitis. S. pneumoniae has several virulence factors, including the polysaccharide capsule mentioned earlier, that help it evade a host's immune system. It has pneumococcal surface proteins that inhibit complement-mediated opsonization, and it secretes IgA1 protease that will destroy secretory IgA produced by the body and mediates its attachment to respiratory mucosa. The risk of pneumococcal infection is much increased in persons with impaired IgG synthesis, impaired phagocytosis, or defective clearance of pneumococci. In particular, the absence of a functional spleen, through congenital asplenia, surgical removal of the spleen, or sickle-cell disease predisposes one to a more severe course of infection (overwhelming post-splenectomy infection) and prevention measures are indicated. People with a compromised immune system, such as those living with HIV, are also at higher risk of pneumococcal disease. In HIV patients with access to treatment, the risk of invasive pneumoccal disease is 0.2–1% per year and has a fatality rate of 8%. There is an association between pneumococcal pneumonia and influenza. Damage to the lining of the airways (respiratory epithelium) and upper respiratory system caused by influenza may facilitate pneumococcal entry and infection. Other risk factors include smoking, injection drug use, hepatitis C, and COPD. Virulence factors S. 
pneumoniae expresses different virulence factors on its cell surface and inside the organism. These virulence factors contribute to some of the clinical manifestations during infection with S. pneumoniae. Polysaccharide capsule—prevents phagocytosis by host immune cells by inhibiting C3b opsonization of the bacterial cells Pneumolysin (Ply)—a 53-kDa pore-forming protein that can cause lysis of host cells and activate complement Autolysin (LytA)—activation of this protein lyses the bacteria releasing its internal contents (i.e., pneumolysin) Hydrogen peroxide—causes damage to host cells (can cause apoptosis in neuronal cells during meningitis) and has bactericidal effects against competing bacteria (Haemophilus influenzae, Neisseria meningitidis, Staphylococcus aureus) Pili—hair-like structures that extend from the surface of many strains of S. pneumoniae. They contribute to colonization of upper respiratory tract and increase the formation of large amounts of TNF by the immune system during sepsis, raising the possibility of septic shock Choline binding protein A / Pneumococcal surface protein A (CbpA/PspA)—an adhesin that can interact with carbohydrates on the cell surface of pulmonary epithelial cells and can inhibit complement-mediated opsonization of pneumococci Competence for genetic transformation likely plays an important role in nasal colonization fitness and virulence (lung infectivity) Extracellular vesicles (pEVs)—secretory vesicles that carry virulence factors, such as serine-threonine kinase, which, upon internalization by host epithelial cells, phosphorylates Beclin 1, leading to autophagy-mediated degradation of the tight junction protein occludin (OCLN), subsequent disruption of the alveolar epithelial barrier, and dissemination of S. pneumoniae Diagnosis Depending on the nature of infection an appropriate sample is collected for laboratory identification. Pneumococci are typically gram-positive cocci seen in pairs or chains. When cultured on blood agar plates with added optochin antibiotic disk they show alpha-hemolytic colonies and a clear zone of inhibition around the disk indicating sensitivity to the antibiotic. Pneumococci are also bile soluble. Just like other streptococci they are catalase-negative. A Quellung test can identify specific capsular polysaccharides. Pneumococcal antigen (cell wall C polysaccharide) may be detected in various body fluids. Older detection kits, based on latex agglutination, added little value above Gram staining and were occasionally false-positive. Better results are achieved with rapid immunochromatography, which has a sensitivity (identifies the cause) of 70–80% and >90% specificity (when positive identifies the actual cause) in pneumococcal infections. The test was initially validated on urine samples but has been applied successfully to other body fluids. Chest X-rays can also be conducted to confirm inflammation though are not specific to the causative agent. Prevention Due to the importance of disease caused by S. pneumoniae, several vaccines have been developed to protect against invasive infection. The World Health Organization recommend routine childhood pneumococcal vaccination; it is incorporated into the childhood immunization schedule in a number of countries including the United Kingdom, United States, and South Africa. Treatment Throughout history treatment relied primarily on β-lactam antibiotics. In the 1960s nearly all strains of S. 
pneumoniae were susceptible to penicillin, but more recently there has been an increasing prevalence of penicillin resistance, especially in areas of high antibiotic use. A varying proportion of strains may also be resistant to cephalosporins, macrolides (such as erythromycin), tetracycline, clindamycin and the fluoroquinolones. Notably, macrolide-resistant S. pneumoniae has been declared a medium-priority pathogen by the WHO due to its growing clinical and public health significance. Penicillin-resistant strains are more likely to be resistant to other antibiotics. Most isolates remain susceptible to vancomycin, though its use in a β-lactam-susceptible isolate is less desirable because of tissue distribution of the medication and concerns of development of vancomycin resistance. More advanced beta-lactam antibiotics (cephalosporins) are commonly used in combination with other antibiotics to treat meningitis and community-acquired pneumonia. In adults, recently developed fluoroquinolones such as levofloxacin and moxifloxacin are often used to provide empiric coverage for patients with pneumonia, but in parts of the world where these medications are used to treat tuberculosis, resistance has been described. Susceptibility testing should be routine, with empiric antibiotic treatment guided by resistance patterns in the community in which the organism was acquired. There is currently debate as to how relevant the results of susceptibility testing are to clinical outcome. There is slight clinical evidence that penicillins may act synergistically with macrolides to improve outcomes. Resistant pneumococcal strains are called penicillin-resistant pneumococci (PRP), penicillin-resistant Streptococcus pneumoniae (PRSP), Streptococcus pneumoniae penicillin resistant (SPPR), or drug-resistant Streptococcus pneumoniae (DRSP). History In the 19th century it was demonstrated that immunization of rabbits with killed pneumococci protected them against subsequent challenge with viable pneumococci. Serum from immunized rabbits or from humans who had recovered from pneumococcal pneumonia also conferred protection. In the 20th century, the efficacy of immunization was demonstrated in South African miners. It was discovered that the pneumococcus's capsule made it resistant to phagocytosis, and in the 1920s it was shown that an antibody specific for capsular polysaccharide aided the killing of S. pneumoniae. In 1936, a pneumococcal capsular polysaccharide vaccine was used to abort an epidemic of pneumococcal pneumonia. In the 1940s, experiments on capsular transformation by pneumococci first identified DNA as the material that carries genetic information. In 1900 it was recognized that different serovars of pneumococci exist and that immunization with a given serovar did not protect against infection with other serovars. Since then over ninety serovars have been discovered, each with a unique polysaccharide capsule that can be identified by the quellung reaction. Because some of these serovars cause disease more commonly than others, it is possible to provide reasonable protection by immunizing with fewer than 90 serovars; current vaccines contain up to 23 serovars (i.e., they are "23-valent"). 
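To make the diagnostic test characteristics quoted earlier for rapid immunochromatography more concrete, the expected outcomes can be tabulated for a hypothetical batch of specimens. This is a rough numerical sketch only: the cohort size and the 20% prevalence are assumed values chosen for illustration, not data from the article.

# Illustrative arithmetic only; the sample numbers below are assumptions.
samples = 1000          # hypothetical specimens tested
prevalence = 0.20       # assumed fraction that are truly pneumococcal
sensitivity = 0.75      # mid-point of the 70-80% sensitivity quoted above
specificity = 0.90      # lower bound of the >90% specificity quoted above

diseased = samples * prevalence              # 200 true pneumococcal cases
healthy = samples - diseased                 # 800 other cases
true_positives = diseased * sensitivity      # about 150 correctly identified
false_positives = healthy * (1 - specificity)  # up to about 80 false alarms
print(true_positives, false_positives)       # 150.0 80.0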
References External links November 2nd: World Pneumonia Day Website Pneumococcal Vaccine Accelerated Development and Introduction Plan Pneumonia Vaccine-preventable diseases
Pneumococcal infection
Biology
2,365
25,222,830
https://en.wikipedia.org/wiki/Productivity%20%28ecology%29
In ecology, the term productivity refers to the rate of generation of biomass in an ecosystem, usually expressed in units of mass per unit volume (or unit surface) per unit of time, such as grams per square metre per day (g m−2 d−1). The unit of mass can relate to dry matter or to the mass of generated carbon. The productivity of autotrophs, such as plants, is called primary productivity, while the productivity of heterotrophs, such as animals, is called secondary productivity. The productivity of an ecosystem is influenced by a wide range of factors, including nutrient availability, temperature, and water availability. Understanding ecological productivity is vital because it provides insights into how ecosystems function and the extent to which they can support life. Primary production Primary production is the synthesis of organic material from inorganic molecules. Primary production in most ecosystems is dominated by the process of photosynthesis, in which organisms synthesize organic molecules from sunlight, H2O, and CO2. Aquatic primary productivity refers to the production of organic matter, such as phytoplankton, aquatic plants, and algae, in aquatic ecosystems, which include oceans, lakes, and rivers. Terrestrial primary productivity refers to the organic matter production that takes place in terrestrial ecosystems such as forests, grasslands, and wetlands. Primary production is divided into Net Primary Production (NPP) and Gross Primary Production (GPP). Gross primary production measures all carbon assimilated into organic molecules by primary producers. Net primary production also measures the amount of carbon assimilated into organic molecules by primary producers, but does not include organic molecules that are then broken down again by these organisms for biological processes such as cellular respiration. The formula used to calculate NPP is net primary production = gross primary production - respiration. Primary producers Photoautotrophs Organisms that rely on light energy to fix carbon, and thus participate in primary production, are referred to as photoautotrophs. Photoautotrophs exist across the tree of life. Many bacterial taxa are known to be photoautotrophic, such as cyanobacteria and some Pseudomonadota (formerly Proteobacteria). Eukaryotic organisms gained the ability to participate in photosynthesis through the development of plastids derived from endosymbiotic relationships. Archaeplastida, which includes red algae, green algae, and plants, have evolved chloroplasts originating from an ancient endosymbiotic relationship with a cyanobacterium. The productivity of plants, though they are photoautotrophs, also depends on factors such as salinity and other abiotic stressors in the surrounding environment. The rest of the eukaryotic photoautotrophic organisms are within the SAR clade (comprising Stramenopila, Alveolata, and Rhizaria). Organisms in the SAR clade that developed plastids did so through secondary or tertiary endosymbiotic relationships with green algae and/or red algae. The SAR clade includes many aquatic and marine primary producers such as kelp, diatoms, and dinoflagellates. Lithoautotrophs The other process of primary production is lithoautotrophy. Lithoautotrophs use reduced chemical compounds such as hydrogen gas, hydrogen sulfide, methane, or ferrous ion to fix carbon and participate in primary production. 
Lithoautotrophic organisms are prokaryotic and are represented by members of both the bacterial and archaeal domains. Lithoautotrophy is the only form of primary production possible in ecosystems without light, such as groundwater ecosystems, hydrothermal vent ecosystems, soil ecosystems, and cave ecosystems. Secondary production Secondary production is the generation of biomass of heterotrophic (consumer) organisms in a system. This is driven by the transfer of organic material between trophic levels, and represents the quantity of new tissue created through the use of assimilated food. Secondary production is sometimes defined to only include consumption of primary producers by herbivorous consumers (with tertiary production referring to carnivorous consumers), but is more commonly defined to include all biomass generation by heterotrophs. Organisms responsible for secondary production include animals, protists, fungi and many bacteria. Secondary production can be estimated through a number of different methods including increment summation, removal summation, the instantaneous growth method and the Allen curve method. The choice between these methods will depend on the assumptions of each and the ecosystem under study. For instance, whether cohorts should be distinguished, whether linear mortality can be assumed and whether population growth is exponential. Net ecosystem production is defined as the difference between gross primary production (GPP) and ecosystem respiration. The formula to calculate net ecosystem production is NEP = GPP - respiration (by autotrophs) - respiration (by heterotrophs). The key difference between NPP and NEP is that NPP focuses primarily on autotrophic production, whereas NEP incorporates the contributions of other aspects of the ecosystem to the total carbon budget; a brief worked example of both formulas is given at the end of this article. Productivity Following is the list of ecosystems in order of decreasing productivity. Species diversity and productivity relationship The connection between plant productivity and biodiversity is a significant topic in ecology, although it has been controversial for decades. Both productivity and species diversity are constrained by other variables such as climate, ecosystem type, and land use intensity. Some research on the correlation between plant diversity and ecosystem functioning finds that productivity increases as species diversity increases. One explanation for this is that the likelihood of discovering a highly productive species increases as the number of species initially present in an ecosystem increases. Other researchers believe that the relationship between species diversity and productivity is unimodal within an ecosystem. A 1999 study on grassland ecosystems in Europe, for example, found that increasing species diversity initially increased productivity, which then leveled off at intermediate levels of diversity. More recently, a meta-analysis of 44 studies from various ecosystem types observed that the interaction between diversity and production was unimodal in all but one study. Human interactions Anthropogenic activities (human activities) have impacted the productivity and biomass of several ecosystems. Examples of these activities include habitat modification, freshwater consumption, an increase in nutrients due to fertilizers, and many others. Increased nutrients can stimulate an algal bloom in waterbodies, increasing primary production but making the ecosystem less stable. 
This would raise secondary production and have a trophic cascade effect across the food chain, ultimately increasing overall ecosystem productivity. See also Biomass (ecology) Community ecology Food web Agricultural productivity References Aquatic ecology Biological oceanography Chemical oceanography
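To make the NPP and NEP formulas given above concrete, here is a minimal numerical sketch. The flux values are invented solely for the example (expressed in grams of carbon per square metre per year); they are not measurements from the article.

# Illustrative arithmetic only; all flux values are assumed, in g C per m^2 per year.
GPP = 1200.0             # gross primary production
R_autotrophs = 700.0     # respiration by the primary producers themselves
R_heterotrophs = 350.0   # respiration by consumers and decomposers

NPP = GPP - R_autotrophs                     # net primary production = 500.0
NEP = GPP - R_autotrophs - R_heterotrophs    # net ecosystem production = 150.0
print(NPP, NEP)

As the sketch shows, NEP is always smaller than NPP whenever heterotrophic respiration is non-zero, which is the difference between the two quantities described above.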
Productivity (ecology)
Chemistry,Biology
1,366
32,280,107
https://en.wikipedia.org/wiki/Otidea%20bufonia
Otidea bufonia is a species of apothecial fungus belonging to the family Pyronemataceae. The fruit body appears from late summer to early autumn as a dark brown, deep cup, split down one side, up to high and the same across. A rare European species, it occurs singly or in small groups on soil in woodland. While similar to many other species within Otidea, bufonia can be characterized by its narrow fusoid ascospores and the presence of hyphae with striate resinous exudates in the medullary excipulum. References Xu Y-Y, Mao N, Yang J-J, Fan L. New Species and New Records of Otidea from China Based on Molecular and Morphological Data. Journal of Fungi. 2022 [1] External links Fungi described in 1822 Pyronemataceae Taxa named by Christiaan Hendrik Persoon Fungus species
Otidea bufonia
Biology
196
28,789,704
https://en.wikipedia.org/wiki/Dicopper%20chloride%20trihydroxide
Dicopper chloride trihydroxide is the compound with the chemical formula Cu2(OH)3Cl. It is often referred to as tribasic copper chloride (TBCC), copper trihydroxyl chloride or copper hydroxychloride. This greenish substance is encountered as the minerals atacamite, paratacamite, and botallackite. Similar materials are assigned to green solids formed upon corrosion of various copper objects. These materials have been used in agriculture. Industrial production Air oxidation of copper(I) chloride in brine solution Large scale industrial production of basic copper chloride was devoted to making either a fungicide for crop protection or an intermediate in the manufacture of other copper compounds. In neither of those applications was the polymorphic nature of the compound, or the size of individual particles, of particular importance, so the manufacturing processes were simple precipitation schemes. The compound can be prepared by air oxidation of CuCl in brine solution. The CuCl solution is usually made by the reduction of copper(II) solutions over copper metal. A solution with concentrated brine is contacted with copper metal until the Cu(II) is completely reduced. The resulting CuCl is then heated and aerated to effect the oxidation and hydrolysis. The oxidation reaction can be performed with or without the copper metal. The precipitated product is separated, and the mother liquor, containing NaCl, is recycled back to the process. The product from this process consists of fine particles, 1–5 μm in size, and is usable as an agricultural fungicide. A stable, free-flowing, non-dusty green powder with a typical particle size of 30–100 microns has been used in the preparation of uniform animal feed mixtures. There are two types of spent etching solutions from printed circuit board manufacturing operations: an acidic cupric chloride solution and an alkaline tetraamminedichloridocopper(II) solution. Tribasic copper chloride is generated by neutralization of either one of these two solutions (acidic or alkaline pathway), or by combination of the two solutions, a self-neutralization reaction. In the acidic pathway, the cupric chloride solution can be neutralized with caustic soda, ammonia, lime, or another base. In the alkaline pathway, the cuprammine chloride solution can be neutralized with HCl or other available acidic solutions. More efficiently, the two spent etching solutions are combined under mildly acidic conditions, one neutralizing the other, to produce a higher yield of basic copper chloride. Seeding is introduced during crystallization. The production is operated continuously under well-defined conditions (pH, feeding rate, concentrations, temperature, etc.). Product with good particle size is produced and can be easily separated from background salt and other impurities in the mother liquor. After a simple rinse with water and drying, a pure, free-flowing, non-dusty green crystalline solid with a typical particle size of 30–100 microns is obtained. The product from this process is predominantly atacamite and paratacamite, the stable crystal forms of basic copper chloride, and is called alpha basic copper chloride for simplicity. Careful control of process conditions to favor the alpha polymorphs results in a product that remains free flowing over extended storage times, thus avoiding caking as occurs with both copper sulfate and the botallackite crystal form, also called beta basic copper chloride. 
This process is used to manufacture thousands of tons of tribasic copper chloride every year, and has been the predominant route of commercial production since it was introduced by Steward in 1994. Applications As an agricultural fungicide The finely divided compound has been used as a fungicidal spray on tea, orange, grape, rubber, coffee, cardamom, cotton, etc., and as an aerial spray on rubber for control of Phytophthora attack on leaves. As a pigment Basic copper chloride has been used as a pigment and as a colorant for glass and ceramics. It was widely used as a coloring agent in wall painting, manuscript illumination, and other paintings by ancient people. It was also used in cosmetics by ancient Egyptians. In pyrotechnics The compound has been used as a blue/green coloring agent in pyrotechnics. As a catalyst The compound has been used in the preparation of catalysts and as a catalyst in organic synthesis for chlorination and/or oxidation. It has been shown to be a catalyst in the chlorination of ethylene. The atacamite and paratacamite crystal forms have been found to be active species in supported catalyst systems for the oxidative carbonylation of methanol to dimethyl carbonate. A number of supported catalysts have also been prepared and studied for this conversion. Dimethyl carbonate is an environmentally benign chemical product and unique intermediate with versatile chemical reactivity. The compound has also been identified as a new catalytically active material for the partial oxidation of n-butane to maleic anhydride. A mixture of ultrafine CuO and basic copper chloride powders has been shown to be effective in the photocatalytic decolorization of dyes such as amido black and indigo carmine. As a feed supplement Copper is one of the most critically important of the trace minerals that are essential elements in numerous enzymes that support metabolic functions in most organisms. Since the early 1900s, copper has routinely been added to animal feedstuffs to support good health and normal development. Starting in the 1950s, there was increasing focus on the issue of bioavailability of trace mineral supplements, which led to copper sulfate pentahydrate becoming the predominant source. Its high water solubility, and thus hygroscopicity, leads to destructive reactions in feed mixtures. These are notoriously destructive in hot, humid climates. Recognition that basic copper chloride would reduce feed stability problems led to issuance of patents on the use of the compound as a nutritional source. Subsequently, animal feeding studies revealed that the alpha crystal form of basic copper chloride has a rate of chemical reactivity that is well matched to biological processes. The strength of the bonds holding copper in the alpha crystal polymorphs could prevent undesirable, anti-nutritive interactions with other feed ingredients while delivering controlled amounts of copper throughout the active zones in the digestive tract of an animal. Success in producing alpha basic copper chloride on a large scale allowed for the widespread application of basic copper chloride in feed, thereby supplying the copper requirements of all major livestock groups. This form of the compound has proven to be particularly suitable as a commercial feed supplement for use in livestock and aquaculture due to its inherent chemical and physical characteristics. 
Compared to copper sulfate, the alpha crystal form of basic copper chloride provides many benefits, including improved feed stability; less oxidative destruction of vitamins and other essential feed ingredients; superior blending in feed mixtures; and reduced handling costs. It has been widely used in feed formulations for most species, including chickens, turkeys, pigs, beef and dairy cattle, horses, pets, aquaculture and exotic zoo animals. Natural occurrence The compound occurs as natural minerals in four polymorphic crystal forms: atacamite, paratacamite, clinoatacamite, and botallackite. Atacamite is orthorhombic, paratacamite is rhombohedral, and the other two polymorphs are monoclinic. Atacamite and paratacamite are common secondary minerals in areas of copper mineralization and frequently form as corrosion products of Cu-bearing metals. The most common polymorph is atacamite. It is an oxidation product of other copper minerals, especially under arid, saline conditions. It has been found in fumarolic deposits and as a weathering product of sulfides in subsea black smoker deposits. It was named for the Atacama Desert in Chile. Its color varies from blackish to emerald green. It is the sugar-like coating of dark green glistening crystals found on many bronze objects from Egypt and Mesopotamia. It has also been found in living systems such as the jaws of the marine bloodworm Glycera dibranchiata. The stability of atacamite is evidenced by its ability to endure dynamic regimes in its natural geologic environment. Paratacamite is another polymorph that was named for the Atacama Desert in Chile. It has been identified in the powdery light-green corrosion product that forms on a copper or bronze surface – at times in corrosion pustules. It can be distinguished from atacamite by the rhombohedral shape of its crystals. Botallackite is the least stable of the four polymorphs. It is pale bluish-green in color. This rare mineral was first found, and later identified, in the Botallack Mine in Cornwall, England. It is also a rare corrosion product on archaeological finds. For instance, it was identified on an Egyptian statue of Bastet. The fourth polymorph of the family is clinoatacamite. It was found and identified near Chuquicamata, Chile, in 1996. It was named in allusion to its monoclinic morphology and relationship to atacamite. It too is pale green but has monoclinic crystals. Clinoatacamite can be easily confused with the closely related paratacamite. It is believed that clinoatacamite should replace most previously reported occurrences of paratacamite in the conservation literature. Structure of naturally occurring forms Atacamite is orthorhombic, space group Pnma, with two crystallographically independent copper atoms and oxygen atoms of hydroxyl groups in the asymmetric unit. Both Cu atoms display characteristically Jahn-Teller distorted octahedral (4+2) coordination geometry: each Cu is bonded to four nearest OH groups with a Cu-OH distance of 2.01 Å; in addition, one of the Cu atoms is bonded to two Cl atoms (at 2.76 Å) to form an octahedron, and the other Cu atom is bonded to one Cl atom (at 2.75 Å) and a distant OH group (at 2.36 Å) to form an octahedron. The two different types of octahedra are edge-linked to form a three-dimensional framework, with one type of octahedron cross-linking the octahedral layers parallel to (110) (Figure 1). Botallackite crystallizes in the monoclinic system with space group P21/m. As in atacamite, there are two different types of Jahn-Teller distorted octahedral Cu coordination geometries. 
But these octahedra assemble in different ways. Each octahedron shares six edges with surrounding octahedra, forming a two-dimensional sheet-type structure parallel to (100). The adjacent sheets are held together by hydrogen bonding between the hydroxyl oxygen atoms of one sheet and the opposing chlorine atoms in the other sheets. The resulting weak bonding between the sheets accounts for the perfect (100) cleavage and the typical platy habit of botallackite (Figure 2). Paratacamite is rhombohedral, space group R3. It has a well-developed substructure with a’=a/2, c’=c, apparent space group R3m. There are four crystallographically independent Cu atoms in the asymmetric unit. The Cu atoms display three different types of octahedral coordination geometries. Three quarters of the Cu atoms are coordinated to four near OH groups and two distant Cl atoms, giving the expected (4+2) configuration. Three sixteenths of the Cu atoms are bonded to two near OH groups at 1.93 Å and four stretched OH groups at 2.20 Å to form an axially compressed (2+4) octahedral coordination, and the remaining one sixteenth of the Cu atoms are bonded to six equivalent OH groups at 2.12 Å to form a regular octahedral coordination. The Jahn-Teller distorted octahedra share edges and form partially occupied layers parallel to (001), and the compressed and regular octahedra cross-link the adjacent octahedral layers to form a three-dimensional framework. The existence of the regular octahedral coordination is unusual, and it has been shown that partial substitution of Zn or Ni for copper at this special site (3b) is necessary to stabilize the paratacamite structure at ambient temperature. Due to the high symmetry of the special position, only about 2 wt% Zn is necessary to stabilize the rhombohedral structure. In fact, most paratacamite crystals studied contain significant amounts of Zn or Ni (> 2 wt%) (Figure 3). Clinoatacamite is monoclinic, space group P21/m. The structure is very close to that of paratacamite, but the octahedron that is regular in paratacamite is Jahn-Teller distorted in clinoatacamite. The Jahn-Teller distorted octahedra share edges to form partially occupied layers parallel to (101). This layer is topologically the same as that in mica. Adjacent layers of octahedra are offset, such that vacant sites in one sheet align with occupied sites in the neighboring sheet. The octahedra link the layers to form a 3-dimensional network (Figure 4). Thermodynamic data based on the free energy of formation indicate that the order of stability of these polymorphs is clinoatacamite > atacamite > botallackite. Spectroscopic studies show that the strength of hydrogen bonding in these polymorphs is in the order paratacamite > atacamite > botallackite. Studies on the formation of basic copper chloride indicate that botallackite is a key intermediate and crystallizes first under most conditions; subsequent recrystallization of botallackite to atacamite or paratacamite depends on the nature of the reaction medium.

References
Coordination complexes Nutrition Copper(II) minerals
Dicopper chloride trihydroxide
Chemistry
2,878
15,533,451
https://en.wikipedia.org/wiki/Flettner%20airplane
A Flettner airplane is a type of rotor airplane which uses a Flettner rotor to provide lift. The rotor comprises a spinning cylinder with circular end plates and, in an aircraft, spins about a spanwise horizontal axis. When the aircraft moves forward, the Magnus effect creates lift. Anton Flettner, after whom the rotor is named, used it successfully as the sails of a rotor ship. He also suggested its use as a wing for a rotor airplane. The Butler Ames Aerocycle was built in 1910 and tested aboard a warship. There is no record of it having flown. The Plymouth A-A-2004 was built for Zaparka in 1930 by three anonymous American inventors. It was reported to have made successful flights over Long Island Sound. An inherent safety concern is that if power to the rotating drums were lost—even if thrust was maintained—the aircraft would lose its ability to generate lift as the drum slowed and it would not be able to sustain flight. See also Cyclogyro FanWing Servo tab References External links Aircraft configurations
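The article describes the Magnus-effect lift only qualitatively. The sketch below is a rough, idealized estimate based on the Kutta–Joukowski theorem, with the circulation approximated as the rotor's surface speed times its circumference; none of the numbers or variable names come from the article, and this potential-flow idealization generally overestimates the lift of a real rotor.

```python
# Idealized estimate of Flettner-rotor lift per unit span using the
# Kutta-Joukowski theorem (L' = rho * V * Gamma). The circulation is
# approximated as Gamma = 2 * pi * R**2 * omega, i.e. the cylinder's
# surface speed times its circumference -- a textbook idealization,
# not data for any specific aircraft. All values are illustrative.
import math

rho = 1.225          # air density at sea level, kg/m^3
V = 30.0             # forward (free-stream) speed, m/s
R = 0.5              # rotor radius, m
omega = 60.0         # rotor spin rate, rad/s

gamma = 2.0 * math.pi * R**2 * omega      # circulation, m^2/s
lift_per_span = rho * V * gamma           # N per metre of rotor span

spin_ratio = omega * R / V                # surface speed / free-stream speed
print(f"spin ratio        : {spin_ratio:.2f}")
print(f"circulation       : {gamma:.1f} m^2/s")
print(f"lift per unit span: {lift_per_span:.0f} N/m")
```

Setting omega to zero makes the circulation, and hence the estimated lift, vanish, which is the safety concern noted above: a stopped rotor produces no lift even if forward thrust is maintained.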
Flettner airplane
Engineering
215
17,423,266
https://en.wikipedia.org/wiki/PaNie
PaNie is a 25 kDa protein produced by the root rot disease-causing pathogen Pythium aphanidermatum. It stands for Pythium aphanidermatum Necrosis inducing elicitor. PaNie (aka NLPPya) belongs to a family of elicitors named the Nep1-like proteins (NLPs), which cause necrosis when injected into the leaves of dicotyledonous plants. References Oomycete proteins Elicitors
PaNie
Chemistry,Biology
103
23,474,670
https://en.wikipedia.org/wiki/Geometric%20progression
A geometric progression, also known as a geometric sequence, is a mathematical sequence of non-zero numbers where each term after the first is found by multiplying the previous one by a fixed number called the common ratio. For example, the sequence 2, 6, 18, 54, ... is a geometric progression with a common ratio of 3. Similarly 10, 5, 2.5, 1.25, ... is a geometric sequence with a common ratio of 1/2. Examples of a geometric sequence are powers r^k of a fixed non-zero number r, such as 2^k and 3^k. The general form of a geometric sequence is a, ar, ar^2, ar^3, ar^4, ..., where r is the common ratio and a is the initial value. The sum of a geometric progression's terms is called a geometric series.

Properties
The nth term of a geometric sequence with initial value a = a_1 and common ratio r is given by a_n = a r^(n−1), and in general a_n = a_m r^(n−m). Geometric sequences satisfy the linear recurrence relation a_n = r a_(n−1) for every integer n ≥ 2. This is a first order, homogeneous linear recurrence with constant coefficients. Geometric sequences also satisfy the nonlinear recurrence relation a_n^2 = a_(n−1) a_(n+1) for every integer n ≥ 2. This is a second order nonlinear recurrence with constant coefficients. When the common ratio of a geometric sequence is positive, the sequence's terms will all share the sign of the first term. When the common ratio of a geometric sequence is negative, the sequence's terms alternate between positive and negative; this is called an alternating sequence. For instance the sequence 1, −3, 9, −27, 81, −243, ... is an alternating geometric sequence with an initial value of 1 and a common ratio of −3. When the initial term and common ratio are complex numbers, the terms' complex arguments follow an arithmetic progression. If the absolute value of the common ratio is smaller than 1, the terms will decrease in magnitude and approach zero via an exponential decay. If the absolute value of the common ratio is greater than 1, the terms will increase in magnitude and approach infinity via an exponential growth. If the absolute value of the common ratio equals 1, the terms will stay the same size indefinitely, though their signs or complex arguments may change. Geometric progressions show exponential growth or exponential decline, as opposed to arithmetic progressions showing linear growth or linear decline. This comparison was taken by T.R. Malthus as the mathematical foundation of his An Essay on the Principle of Population. The two kinds of progression are related through the exponential function and the logarithm: exponentiating each term of an arithmetic progression yields a geometric progression, while taking the logarithm of each term in a geometric progression yields an arithmetic progression.

Geometric series
Product
The infinite product of a geometric progression is the product of all of its terms. The partial product of a geometric progression up to the term with power n is a · ar · ar^2 · ... · ar^n = a^(n+1) r^(n(n+1)/2). When a and r are positive real numbers, this is equivalent to taking the geometric mean of the partial progression's first and last individual terms and then raising that mean to the power given by the number of terms, (√(a · ar^n))^(n+1). This corresponds to a similar property of sums of terms of a finite arithmetic sequence: the sum of an arithmetic sequence is the number of terms times the arithmetic mean of the first and last individual terms. This correspondence follows the usual pattern that any arithmetic sequence is a sequence of logarithms of terms of a geometric sequence and any geometric sequence is a sequence of exponentiations of terms of an arithmetic sequence. Sums of logarithms correspond to products of exponentiated values.
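The formulas above are easy to check numerically. The following sketch (plain Python, with an arbitrary example of a = 2 and r = 3; the variable names are ours, not from the article) verifies the nth-term expression, both recurrences, and the closed form of the partial product derived in the proof below.

```python
# Quick numerical check of the geometric-progression formulas:
# n-th term a_n = a * r**(n-1), the two recurrences, and the closed
# form of the partial product a^(n+1) * r^(n*(n+1)/2).
import math

a, r = 2.0, 3.0
terms = [a * r**k for k in range(8)]          # a_1 .. a_8: 2, 6, 18, 54, ...

# Linear recurrence: a_n = r * a_(n-1)
assert all(terms[i] == r * terms[i - 1] for i in range(1, len(terms)))

# Nonlinear recurrence: a_n^2 = a_(n-1) * a_(n+1)
assert all(terms[i]**2 == terms[i - 1] * terms[i + 1]
           for i in range(1, len(terms) - 1))

# Partial product of a, ar, ..., ar^n equals a^(n+1) * r^(n*(n+1)/2)
n = len(terms) - 1
product = math.prod(terms)
closed_form = a**(n + 1) * r**(n * (n + 1) // 2)
print(product, closed_form)   # both equal 2**8 * 3**28 for this example
```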
Proof
Let P_n represent the product up to power n. Written out in full, P_n = a · ar · ar^2 · ... · ar^n. Carrying out the multiplications and gathering like terms, P_n = a^(n+1) r^(1+2+...+n). The exponent of r is the sum of an arithmetic sequence. Substituting the formula for that sum, 1 + 2 + ... + n = n(n+1)/2, gives P_n = a^(n+1) r^(n(n+1)/2), which concludes the proof. One can rearrange this expression to P_n = (a r^(n/2))^(n+1). Rewriting a as √(a · a) and r^(n/2) as √(r^n), though this is not valid for negative a or r, gives P_n = (√(a · ar^n))^(n+1), which is the formula in terms of the geometric mean.

History
A clay tablet from the Early Dynastic Period in Mesopotamia (c. 2900 – c. 2350 BC), identified as MS 3047, contains a geometric progression with base 3 and multiplier 1/2. It has been suggested to be Sumerian, from the city of Shuruppak. It is the only known record of a geometric progression from before the time of old Babylonian mathematics beginning in 2000 BC. Books VIII and IX of Euclid's Elements analyze geometric progressions (such as the powers of two, see the article for details) and give several of their properties.

See also
References
Hall & Knight, Higher Algebra, p. 39.
External links
Derivation of formulas for sum of finite and infinite geometric progression at Mathalino.com Geometric Progression Calculator Nice Proof of a Geometric Progression Sum at sputsoft.com Sequences and series Mathematical series Articles containing proofs
Geometric progression
Mathematics
979
26,334,770
https://en.wikipedia.org/wiki/International%20Moss%20Stock%20Center
The International Moss Stock Center (IMSC) is a biorepository which is specialized in collecting, preserving and distributing moss plants of a high value of scientific research. The IMSC is located at the Faculty of Biology, Department of Plant Biotechnology, at the Albert-Ludwigs-University of Freiburg, Germany. Moss collection The moss collection of the IMSC currently includes various ecotypes of Physcomitrella patens, Physcomitrium and Funaria as well as several transgenic and mutant lines of Physcomitrella patens, including knockout mosses. Storage conditions The long-term storage of moss samples in the IMSC is carried out via cryopreservation in the gas phase of liquid nitrogen at temperatures below −135 °C in special freezer containers. It has been shown for Physcomitrella patens that the regeneration rate after cryopreservation is 100%. Trackable accession numbers which may be used for citation purposes in publications are automatically assigned to all samples. Financial support The IMSC is supported financially by the Chair Plant Biotechnology of Prof. Ralf Reski and the Centre for Biological Signalling Studies (bioss). References External links Website International Moss Stock Center (IMSC) Freiburg Website Chair Plant Biotechnology, University of Freiburg Website Centre for Biological Signalling Studies (bioss) Sciencedaily: Mosses, deep frozen BIOPRO "A small moss turns professional" Botanical research institutes University of Freiburg Cryopreservation Biorepositories
International Moss Stock Center
Chemistry,Biology
307
56,840,282
https://en.wikipedia.org/wiki/Martin%20Johnson%20House
The Martin Johnson House, at 45 W. 400 South in Glenwood, Utah, was built in c.1880. It was listed on the National Register of Historic Places in 1982. Martin Johnson was born in Denmark in 1861, came to Utah with his parents in 1866, and probably built this house in preparation for his marriage; he was married in 1884. The house is a one-and-a-half-story adobe structure laid out in a modified pair-house plan. It has Gothic Revival-style cross gable above the main entrance, though not symmetrically placed. It has decorative details including Doric columns on its porch and scroll-cut bargeboards. References Pair-houses Houses on the National Register of Historic Places in Utah Gothic Revival architecture in Utah Houses completed in 1880 Sevier County, Utah
Martin Johnson House
Engineering
164
16,402,793
https://en.wikipedia.org/wiki/Idempotent%20measure
In mathematics, an idempotent measure on a metric group is a probability measure that equals its convolution with itself; in other words, an idempotent measure is an idempotent element in the topological semigroup of probability measures on the given metric group. Explicitly, given a metric group X and two probability measures μ and ν on X, the convolution μ ∗ ν of μ and ν is the measure given by (μ ∗ ν)(A) = ∫_X μ(Ax⁻¹) dν(x) = ∫_X ν(x⁻¹A) dμ(x) for any Borel subset A of X. (The equality of the two integrals follows from Fubini's theorem.) With respect to the topology of weak convergence of measures, the operation of convolution makes the space of probability measures on X into a topological semigroup. Thus, μ is said to be an idempotent measure if μ ∗ μ = μ. It can be shown that the only idempotent probability measures on a complete, separable metric group are the normalized Haar measures of compact subgroups. References (See chapter 3, section 3.) Group theory Measures (measure theory) Metric geometry
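The abstract statement can be made concrete on a finite group, where convolution is a finite sum and the normalized Haar measure is just the uniform distribution. The sketch below (the function and variable names are ours; Z_6 is an arbitrary choice) checks numerically that the uniform measure on Z_6, and the uniform measure on its subgroup {0, 3}, are idempotent under convolution, while a generic probability vector is not, in line with the theorem quoted above.

```python
# Finite sanity check of the idempotence criterion on the cyclic group Z_n.
# Convolution on Z_n: (mu * nu)(k) = sum_j mu(j) * nu((k - j) mod n).
import numpy as np

def convolve_zn(mu, nu):
    """Convolution of two probability vectors on the cyclic group Z_n."""
    n = len(mu)
    out = np.zeros(n)
    for k in range(n):
        for j in range(n):
            out[k] += mu[j] * nu[(k - j) % n]
    return out

n = 6
haar = np.full(n, 1.0 / n)                       # uniform = normalized Haar measure on Z_6
sub = np.array([0.5, 0.0, 0.0, 0.5, 0.0, 0.0])   # Haar measure of the subgroup {0, 3}
other = np.array([0.5, 0.2, 0.1, 0.1, 0.05, 0.05])

print(np.allclose(convolve_zn(haar, haar), haar))    # True  -> idempotent
print(np.allclose(convolve_zn(sub, sub), sub))       # True  -> idempotent
print(np.allclose(convolve_zn(other, other), other)) # False -> not idempotent
```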
Idempotent measure
Physics,Mathematics
225
38,033,909
https://en.wikipedia.org/wiki/L%C3%A9n%C3%A1rt%20sphere
A Lénárt sphere is an educational manipulative and writing surface for exploring spherical geometry, invented by Hungarian István Lénárt as a modern replacement for a spherical blackboard. It can be used for visualizing the geometry of points, great and small circles, triangles, polygons, conics, and other objects on a sphere, and comparing spherical geometry to Euclidean geometry as drawn on a flat piece of paper or blackboard. The included spherical ruler and compass support synthetic straightedge and compass construction on the sphere.

Products
The Lénárt sphere and accessories are produced by the company Lénárt Educational Research and Technology. The basic set includes:
An eight-inch transparent plastic sphere
A torus-shaped support to place under the sphere
Hemispherical transparencies that fit over the sphere for students to draw on with marker pens or cut out shapes with scissors
A spherical ruler with two scaled edges for drawing great-circle arcs and measuring spherical angles and great-circle distances
A spherical compass and center locator for drawing small circles
A set of wet-wipe markers for writing and drawing on the sphere and transparencies
A hanger for displaying spherical constructions and designs
A 16-page booklet of suggested activities, "Getting Started on the Lenart Sphere"
A four-color polyconic projection of the earth that one can cut out and transform into a globe
The company also sells replacement parts, extra transparency sheets, wet-wipe markers, and Lénárt's book Non-Euclidean Adventures on the Lenart Sphere, which describes more activities for students.

Related tools
Spherical Easel is an interactive geometry software tool for exploring spherical geometry. Other interactive geometry software is typically limited to the flat plane.

History
Spherical trigonometry is fundamental to ancient astronomy and astrology, celestial navigation, and geodesy and cartography, and it used to be a standard part of undergraduate mathematics education. In recent decades hand computations have been replaced by electronic computers and spherical trigonometry has been pushed out of the typical mathematics curriculum by other topics. The Lénárt sphere was invented by István Lénárt in Hungary in the early 1990s and its use is described in his 2003 book comparing planar and spherical geometry. The Lénárt sphere is widely used throughout Europe in university courses on non-Euclidean geometry and geographic information systems (GIS).

See also
Blackboard Whiteboard Spherical geometry Celestial navigation Mathematics education Non-Euclidean geometry

References
External links
Official website, lenartsphere.com Spherical Easel, interactive spherical geometry software Profile of István Lénárt at ResearchGate with some of Lénárt's research papers Trigonometry Non-Euclidean geometry Spherical geometry Spherical trigonometry Projective geometry Geometry education
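The comparison with Euclidean geometry that the sphere is meant to make tangible can also be checked numerically. The short sketch below (not part of any Lénárt product; the triangle and all names are our own choices) computes the angle sum and area of the spherical triangle whose vertices lie on the three coordinate axes of a unit sphere, illustrating Girard's theorem that the angle sum exceeds 180 degrees by exactly the triangle's area.

```python
# Angle sum and area of a spherical triangle on the unit sphere.
# The triangle below has a vertex on each coordinate axis, so all three
# angles are right angles: angle sum 270 degrees, area pi/2.
import numpy as np

def angle_at(a, b, c):
    """Spherical angle at vertex a of triangle abc (unit vectors)."""
    tb = b - np.dot(a, b) * a          # tangent vector at a toward b
    tc = c - np.dot(a, c) * a          # tangent vector at a toward c
    cosang = np.dot(tb, tc) / (np.linalg.norm(tb) * np.linalg.norm(tc))
    return np.arccos(np.clip(cosang, -1.0, 1.0))

A = np.array([1.0, 0.0, 0.0])
B = np.array([0.0, 1.0, 0.0])
C = np.array([0.0, 0.0, 1.0])

angles = [angle_at(A, B, C), angle_at(B, C, A), angle_at(C, A, B)]
angle_sum = np.degrees(sum(angles))
excess = sum(angles) - np.pi           # spherical excess = area on the unit sphere

print(f"angle sum: {angle_sum:.1f} degrees (a flat triangle gives 180)")
print(f"area (spherical excess): {excess:.4f}  (pi/2 = {np.pi/2:.4f})")
```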
Lénárt sphere
Mathematics
546
5,951,576
https://en.wikipedia.org/wiki/History%20of%20electromagnetic%20theory
The history of electromagnetic theory begins with ancient measures to understand atmospheric electricity, in particular lightning. People then had little understanding of electricity, and were unable to explain the phenomena. Scientific understanding and research into the nature of electricity grew throughout the eighteenth and nineteenth centuries through the work of researchers such as André-Marie Ampère, Charles-Augustin de Coulomb, Michael Faraday, Carl Friedrich Gauss and James Clerk Maxwell. In the 19th century it had become clear that electricity and magnetism were related, and their theories were unified: wherever charges are in motion electric current results, and magnetism is due to electric current. The source for electric field is electric charge, whereas that for magnetic field is electric current (charges in motion).

Ancient and classical history
The knowledge of static electricity dates back to the earliest civilizations, but for millennia it remained merely an interesting and mystifying phenomenon, without a theory to explain its behavior, and it was often confused with magnetism. The ancients were acquainted with rather curious properties possessed by two minerals, amber (Greek ēlektron) and magnetic iron ore (Greek magnētis lithos, "the Magnesian stone, lodestone"). Amber, when rubbed, attracts lightweight objects, such as feathers; magnetic iron ore has the power of attracting iron. Based on his find of an Olmec hematite artifact in Central America, the American astronomer John Carlson has suggested that "the Olmec may have discovered and used the geomagnetic lodestone compass earlier than 1000 BC". If true, this "predates the Chinese discovery of the geomagnetic lodestone compass by more than a millennium". Carlson speculates that the Olmecs may have used similar artifacts as a directional device for astrological or geomantic purposes, or to orient their temples, the dwellings of the living or the interments of the dead. The earliest Chinese literature reference to magnetism lies in a 4th-century BC book called The Book of the Devil Valley Master: "When the people of Cheng go out to collect jade, they carry a south-pointer with them so as not to lose their way." Long before any knowledge of electromagnetism existed, people were aware of the effects of electricity. Lightning and other manifestations of electricity such as St. Elmo's fire were known in ancient times, but it was not understood that these phenomena had a common origin. Ancient Egyptians were aware of shocks when interacting with electric fish (such as the electric catfish) or other animals (such as electric eels). The shocks from animals were apparent to observers since pre-history by a variety of peoples that came into contact with them. Texts from 2750 BC by the ancient Egyptians referred to these fish as "thunderer of the Nile" and saw them as the "protectors" of all the other fish. Another possible approach to the discovery of the identity of lightning and electricity from any other source is to be attributed to the Arabs, who before the 15th century used the same Arabic word for lightning and the electric ray. Thales of Miletus, writing at around 600 BC, noted that rubbing fur on various substances such as amber would cause them to attract specks of dust and other light objects. Thales wrote on the effect now known as static electricity. The Greeks noted that if they rubbed the amber for long enough they could even get an electric spark to jump.
The ancient Indian medical text Sushruta Samhita describes using magnetic properties of the lodestone to remove arrows embedded in a person's body. These electrostatic phenomena were again reported millennia later by Roman and Arabic naturalists and physicians. Several ancient writers, such as Pliny the Elder and Scribonius Largus, attested to the numbing effect of electric shocks delivered by catfish and electric rays. Pliny in his books writes: "The ancient Tuscans by their learning hold that there are nine gods that send forth lightning and those of eleven sorts." This was in general the early pagan idea of lightning. The ancients held some concept that shocks could travel along conducting objects. Patients with ailments such as gout or headache were directed to touch electric fish in the hope that the powerful jolt might cure them. A group of objects found in Iraq in 1938 dated to the early centuries AD (Sassanid Mesopotamia), called the Baghdad Battery, resembles a galvanic cell and is believed by some to have been used for electroplating. The claims are controversial because of disputes over the supporting evidence and theories for the uses of the artifacts, over whether the physical evidence on the objects is conducive to electrical functions, and over whether they were electrical in nature. As a result, the nature of these objects is based on speculation, and the function of these artifacts remains in doubt. Magnetic attraction was once accounted for by Aristotle and Thales as the working of a soul in the stone.

Middle Ages and the Renaissance
The magnetic needle compass was developed in the 11th century and it improved the accuracy of navigation by employing the astronomical concept of true north (Dream Pool Essays, 1088). The Chinese scientist Shen Kuo (1031–1095) was the first person known to write about the magnetic needle compass and by the 12th century the Chinese were known to use the lodestone compass for navigation. In Europe, the first description of the compass and its use for navigation is by Alexander Neckam (1187), although the use of compasses was already common. Its development, in European history, was due to Flavio Gioja from Amalfi. In the 13th century, Peter Peregrinus, a native of Maricourt in Picardy, conducted experiments on magnetism and wrote the first extant treatise describing the properties of magnets and pivoting compass needles. In 1282, the properties of magnets and the dry compasses were discussed by Al-Ashraf Umar II, a Yemeni scholar. The dry compass was invented around 1300 by Italian inventor Flavio Gioja. Archbishop Eustathius of Thessalonica, Greek scholar and writer of the 12th century, records that Woliver, king of the Goths, was able to draw sparks from his body. The same writer states that a certain philosopher was able while dressing to draw sparks from his clothes, a result seemingly akin to that obtained by Robert Symmer in his silk stocking experiments, a careful account of which may be found in the Philosophical Transactions, 1759. Italian physician Gerolamo Cardano wrote about electricity in De Subtilitate (1550) distinguishing, perhaps for the first time, between electrical and magnetic forces.

17th century
Toward the late 16th century, a physician of Queen Elizabeth's time, William Gilbert, in De Magnete, expanded on Cardano's work and invented the Neo-Latin word electricus from ēlektron, the Greek word for "amber".
Gilbert undertook a number of careful electrical experiments, in the course of which he discovered that many substances other than amber, such as sulphur, wax, glass, etc., were capable of manifesting electrical properties. Gilbert also discovered that a heated body lost its electricity and that moisture prevented the electrification of all bodies, due to the now well-known fact that moisture impaired the insulation of such bodies. He also noticed that electrified substances attracted all other substances indiscriminately, whereas a magnet only attracted iron. The many discoveries of this nature earned for Gilbert the title of founder of the electrical science. By investigating the forces on a light metallic needle, balanced on a point, he extended the list of electric bodies, and found also that many substances, including metals and natural magnets, showed no attractive forces when rubbed. He noticed that dry weather with north or east wind was the most favourable atmospheric condition for exhibiting electric phenomena—an observation liable to misconception until the difference between conductor and insulator was understood. Gilbert's work was followed up by Robert Boyle (1627–1691), the famous natural philosopher who was once described as "father of Chemistry, and uncle of the Earl of Cork." Boyle was one of the founders of the Royal Society when it met privately in Oxford, and became a member of the council after the Society was incorporated by Charles II in 1663. He left a detailed account of his research under the title of Experiments on the Origin of Electricity. He discovered electrified bodies attracted light substances in a vacuum, indicating the electrical effect did not depend upon the air as a medium. He also added resin, and other substances, to the then known list of electrics. In 1663 Otto von Guericke invented a device that is now recognized as an early (possibly the first) electrostatic generator, but he did not recognize it primarily as an electrical device or conduct electrical experiments with it. By the end of the 17th century, researchers had developed practical means of generating electricity by friction with an electrostatic generator, but the development of electrostatic machines did not begin in earnest until the 18th century, when they became fundamental instruments in the studies about the new science of electricity. The first usage of the word electricity is ascribed to Sir Thomas Browne in his 1646 work, Pseudodoxia Epidemica. The first appearance of the term electromagnetism was in Magnes, by the Jesuit luminary Athanasius Kircher, in 1641, which carries the provocative chapter-heading: "Elektro-magnetismos i.e. On the Magnetism of amber, or electrical attractions and their causes" ( id est sive De Magnetismo electri, seu electricis attractionibus earumque causis). 18th century Improving the electric machine The electric machine was subsequently improved by Francis Hauksbee, his student Litzendorf, and by Prof. Georg Matthias Bose, about 1750. Litzendorf, researching for Christian August Hausen, substituted a glass ball for the sulphur ball of Guericke. Bose was the first to employ the "prime conductor" in such machines, this consisting of an iron rod held in the hand of a person whose body was insulated by standing on a block of resin. Ingenhousz, during 1746, invented electric machines made of plate glass. 
Experiments with the electric machine were largely aided by the discovery that a glass plate, coated on both sides with tinfoil, would accumulate electric charge when connected with a source of electromotive force. The electric machine was soon further improved by Andrew Gordon, a Scotsman, Professor at Erfurt, who substituted a glass cylinder in place of a glass globe; and by Giessing of Leipzig who added a "rubber" consisting of a cushion of woollen material. The collector, consisting of a series of metal points, was added to the machine by Benjamin Wilson about 1746, and in 1762, John Canton of England (also the inventor of the first pith-ball electroscope in 1754) improved the efficiency of electric machines by sprinkling an amalgam of tin over the surface of the rubber. Electrics and non-electrics In 1729, Stephen Gray conducted a series of experiments that demonstrated the difference between conductors and non-conductors (insulators), showing amongst other things that a metal wire and even packthread conducted electricity, whereas silk did not. In one of his experiments he sent an electric current through 800 feet of hempen thread which was suspended at intervals by loops of silk thread. When he tried to conduct the same experiment substituting the silk for finely spun brass wire, he found that the electric current was no longer carried throughout the hemp cord, but instead seemed to vanish into the brass wire. From this experiment he classified substances into two categories: "electrics" like glass, resin and silk and "non-electrics" like metal and water. "Non-electrics" conducted charges while "electrics" held the charge. Vitreous and resinous Intrigued by Gray's results, in 1732, C. F. du Fay began to conduct several experiments. In his first experiment, Du Fay concluded that all objects except metals, animals, and liquids could be electrified by rubbing and that metals, animals and liquids could be electrified by means of an electric machine, thus discrediting Gray's "electrics" and "non-electrics" classification of substances. In 1733 Du Fay discovered what he believed to be two kinds of frictional electricity; one generated from rubbing glass, the other from rubbing resin. From this, Du Fay theorized that electricity consists of two electrical fluids, "vitreous" and "resinous", that are separated by friction and that neutralize each other when combined. This picture of electricity was also supported by Christian Gottlieb Kratzenstein in his theoretical and experimental works. The two-fluid theory would later give rise to the concept of positive and negative electrical charges devised by Benjamin Franklin. Leyden jar The Leyden jar, a type of capacitor for electrical energy in large quantities, was invented independently by Ewald Georg von Kleist on 11 October 1744 and by Pieter van Musschenbroek in 1745–1746 at Leiden University (the latter location giving the device its name). William Watson, when experimenting with the Leyden jar, discovered in 1747 that a discharge of static electricity was equivalent to an electric current. Capacitance was first observed by Von Kleist of Leyden in 1754. Von Kleist happened to hold, near his electric machine, a small bottle, in the neck of which there was an iron nail. Touching the iron nail accidentally with his other hand he received a severe electric shock. In much the same way Musschenbroeck assisted by Cunaens received a more severe shock from a somewhat similar glass bottle. 
Sir William Watson of England greatly improved this device, by covering the bottle, or jar, outside and in with tinfoil. This piece of electrical apparatus will be easily recognized as the well-known Leyden jar, so called by the Abbot Nollet of Paris, after the place of its discovery. In 1741, John Ellicott "proposed to measure the strength of electrification by its power to raise a weight in one scale of a balance while the other was held over the electrified body and pulled to it by its attractive power". As early as 1746, Jean-Antoine Nollet (1700–1770) had performed experiments on the propagation speed of electricity. By involving 200 Carthusian monks connected from hand to hand by iron wires so as to form a circle of about 1.6 km, he was able to prove that this speed is finite, even though very high. In 1749, Sir William Watson conducted numerous experiments to ascertain the velocity of electricity in a wire. These experiments, although perhaps not so intended, also demonstrated the possibility of transmitting signals to a distance by electricity. In these experiments, the signal appeared to travel the 12,276-foot length of the insulated wire instantaneously. Le Monnier in France had previously made somewhat similar experiments, sending shocks through an iron wire 1,319 feet long. About 1750, first experiments in electrotherapy were made. Various experimenters made tests to ascertain the physiological and therapeutical effects of electricity. Typical for this effort was Kratzenstein in Halle who in 1744 wrote a treatise on the subject. Demainbray in Edinburgh examined the effects of electricity upon plants and concluded that the growth of two myrtle trees was quickened by electrification. These myrtles were electrified "during the whole month of October, 1746, and they put forth branches and blossoms sooner than other shrubs of the same kind not electrified." Abbé Ménon in France tried the effects of a continued application of electricity upon men and birds and found that the subjects experimented on lost weight, thus apparently showing that electricity quickened the excretions. The efficacy of electric shocks in cases of paralysis was tested in the county hospital at Shrewsbury, England, with rather poor success. Late 18th century Benjamin Franklin promoted his investigations of electricity and theories through the famous, though extremely dangerous, experiment of having his son fly a kite through a storm-threatened sky. A key attached to the kite string sparked and charged a Leyden jar, thus establishing the link between lightning and electricity. Following these experiments, he invented a lightning rod. It is either Franklin (more frequently) or Ebenezer Kinnersley of Philadelphia (less frequently) who is considered to have established the convention of positive and negative electricity. Theories regarding the nature of electricity were quite vague at this period, and those prevalent were more or less conflicting. Franklin considered that electricity was an imponderable fluid pervading everything, and which, in its normal condition, was uniformly distributed in all substances. He assumed that the electrical manifestations obtained by rubbing glass were due to the production of an excess of the electric fluid in that substance and that the manifestations produced by rubbing wax were due to a deficit of the fluid. This explanation was opposed by supporters of the "two-fluid" theory like Robert Symmer in 1759. 
In this theory, the vitreous and resinous electricities were regarded as imponderable fluids, each fluid being composed of mutually repellent particles while the particles of the opposite electricities are mutually attractive. When the two fluids unite as a result of their attraction for one another, their effect upon external objects is neutralized. The act of rubbing a body decomposes the fluids, one of which remains in excess on the body and manifests itself as vitreous or resinous electricity. Up to the time of Franklin's historic kite experiment, the identity of the electricity developed by rubbing and by electrostatic machines (frictional electricity) with lightning had not been generally established. Dr. Wall, Abbot Nollet, Hauksbee, Stephen Gray and John Henry Winkler had indeed suggested the resemblance between the phenomena of "electricity" and "lightning", Gray having intimated that they only differed in degree. It was doubtless Franklin, however, who first proposed tests to determine the sameness of the phenomena. In a letter to Peter Collinson of London, on 19 October 1752, Franklin described his kite experiment. On 10 May 1752 Thomas-François Dalibard, at Marly (near Paris), using a vertical iron rod 40 feet long, obtained results corresponding to those recorded by Franklin and somewhat prior to the date of Franklin's experiment. Franklin's important demonstration of the sameness of frictional electricity and lightning added zest to the efforts of the many experimenters in this field in the last half of the 18th century, to advance the progress of the science. Franklin's observations aided later scientists such as Michael Faraday, Luigi Galvani, Alessandro Volta, André-Marie Ampère and Georg Simon Ohm, whose collective work provided the basis for modern electrical technology and for whom fundamental units of electrical measurement are named. Others who would advance the field of knowledge included William Watson, Georg Matthias Bose, Smeaton, Louis-Guillaume Le Monnier, Jacques de Romas, Jean Jallabert, Giovanni Battista Beccaria, Tiberius Cavallo, John Canton, Robert Symmer, Abbot Nollet, John Henry Winkler, Benjamin Wilson, Ebenezer Kinnersley, Joseph Priestley, Franz Aepinus, Edward Hussey Delaval, Henry Cavendish, and Charles-Augustin de Coulomb. Descriptions of many of the experiments and discoveries of these early electrical scientists may be found in the scientific publications of the time, notably the Philosophical Transactions, Philosophical Magazine, Cambridge Mathematical Journal, Young's Natural Philosophy, Priestley's History of Electricity, Franklin's Experiments and Observations on Electricity, Cavallo's Treatise on Electricity and De la Rive's Treatise on Electricity. Henry Elles was one of the first people to suggest links between electricity and magnetism. In 1757 he claimed that he had written to the Royal Society in 1755 about the links between electricity and magnetism, asserting that "there are some things in the power of magnetism very similar to those of electricity" but he did "not by any means think them the same". In 1760 he similarly claimed that in 1750 he had been the first "to think how the electric fire may be the cause of thunder". Among the more important of the electrical research and experiments during this period were those of Franz Aepinus, a noted German scholar (1724–1802), and Henry Cavendish of London, England.
Franz Aepinus is credited as the first to conceive of the view of the reciprocal relationship of electricity and magnetism. In his work Tentamen Theoria Electricitatis et Magnetism, published in Saint Petersburg in 1759, he gives the following amplification of Franklin's theory, which in some of its features is measurably in accord with present-day views: "The particles of the electric fluid repel each other, attract and are attracted by the particles of all bodies with a force that decreases in proportion as the distance increases; the electric fluid exists in the pores of bodies; it moves unobstructedly through non-electric (conductors), but moves with difficulty in insulators; the manifestations of electricity are due to the unequal distribution of the fluid in a body, or to the approach of bodies unequally charged with the fluid." Aepinus formulated a corresponding theory of magnetism excepting that, in the case of magnetic phenomena, the fluids only acted on the particles of iron. He also made numerous electrical experiments apparently showing that, in order to manifest electrical effects, tourmaline must be heated to between 37.5 °C and 100 °C. In fact, tourmaline remains unelectrified when its temperature is uniform, but manifests electrical properties when its temperature is rising or falling. Crystals that manifest electrical properties in this way are termed pyroelectric; along with tourmaline, these include sulphate of quinine and quartz. Henry Cavendish independently conceived a theory of electricity nearly akin to that of Aepinus. In 1784, he was perhaps the first to utilize an electric spark to produce an explosion of hydrogen and oxygen in the proper proportions that would create pure water. Cavendish also discovered the inductive capacity of dielectrics (insulators), and, as early as 1778, measured the specific inductive capacity for beeswax and other substances by comparison with an air condenser. Around 1784 C. A. Coulomb devised the torsion balance, discovering what is now known as Coulomb's law: the force exerted between two small electrified bodies varies inversely as the square of the distance, not as Aepinus in his theory of electricity had assumed, merely inversely as the distance. According to the theory advanced by Cavendish, "the particles attract and are attracted inversely as some less power of the distance than the cube." A large part of the domain of electricity became virtually annexed by Coulomb's discovery of the law of inverse squares. Through the experiments of William Watson and others proving that electricity could be transmitted to a distance, the idea of making practical use of this phenomenon began, around 1753, to engross the minds of inquisitive people. To this end, suggestions as to the employment of electricity in the transmission of intelligence were made. The first of the methods devised for this purpose was probably that of Georges Lesage in 1774. This method consisted of 24 wires, insulated from one another and each having had a pith ball connected to its distant end. Each wire represented a letter of the alphabet. To send a message, a desired wire was charged momentarily with electricity from an electric machine, whereupon the pith ball connected to that wire would fly out. Other methods of telegraphing in which frictional electricity was employed were also tried, some of which are described in the history on the telegraph. 
The era of galvanic or voltaic electricity represented a revolutionary break from the historical focus on frictional electricity. Alessandro Volta discovered that chemical reactions could be used to create positively charged anodes and negatively charged cathodes. When a conductor was attached between these, the difference in the electrical potential (also known as voltage) drove a current between them through the conductor. The potential difference between two points is measured in units of volts in recognition of Volta's work. The first mention of voltaic electricity, although not recognized as such at the time, was probably made by Johann Georg Sulzer in 1767, who, upon placing a small disc of zinc under his tongue and a small disc of copper over it, observed a peculiar taste when the respective metals touched at their edges. Sulzer assumed that when the metals came together they were set into vibration, acting upon the nerves of the tongue to produce the effects noticed. In 1790, Prof. Luigi Alyisio Galvani of Bologna, while conducting experiments on "animal electricity", noticed the twitching of a frog's legs in the presence of an electric machine. He observed that a frog's muscle, suspended on an iron balustrade by a copper hook passing through its dorsal column, underwent lively convulsions without any extraneous cause, the electric machine being at this time absent. To account for this phenomenon, Galvani assumed that electricity of opposite kinds existed in the nerves and muscles of the frog, the muscles and nerves constituting the charged coatings of a Leyden jar. Galvani published the results of his discoveries, together with his hypothesis, which engrossed the attention of the physicists of that time. The most prominent of these was Volta, professor of physics at Pavia, who contended that the results observed by Galvani were the result of the two metals, copper and iron, acting as electromotors, and that the muscles of the frog played the part of a conductor, completing the circuit. This precipitated a long discussion between the adherents of the conflicting views. One group agreed with Volta that the electric current was the result of an electromotive force of contact at the two metals; the other adopted a modification of Galvani's view and asserted that the current was the result of a chemical affinity between the metals and the acids in the pile. Michael Faraday wrote in the preface to his Experimental Researches, relative to the question of whether metallic contact is productive of a part of the electricity of the voltaic pile: "I see no reason as yet to alter the opinion I have given; ... but the point itself is of such great importance that I intend at the first opportunity renewing the inquiry, and, if I can, rendering the proofs either on the one side or the other, undeniable to all." Even Faraday himself, however, did not settle the controversy, and while the views of the advocates on both sides of the question have undergone modifications, as subsequent investigations and discoveries demanded, up to 1918 diversity of opinion on these points continued to crop out. Volta made numerous experiments in support of his theory and ultimately developed the pile or battery, which was the precursor of all subsequent chemical batteries, and possessed the distinguishing merit of being the first means by which a prolonged continuous current of electricity was obtainable. 
Volta communicated a description of his pile to the Royal Society of London and shortly thereafter Nicholson and Carlisle (1800) produced the decomposition of water by means of the electric current, using Volta's pile as the source of electromotive force.

19th century
Early 19th century
In 1800 Alessandro Volta constructed the first device to produce a large electric current, later known as the electric battery. Napoleon, informed of his works, summoned him in 1801 for a command performance of his experiments. He received many medals and decorations, including the Légion d'honneur. Davy in 1806, employing a voltaic pile of approximately 250 cells, or couples, decomposed potash and soda, showing that these substances were respectively the oxides of potassium and sodium, metals which previously had been unknown. These experiments were the beginning of electrochemistry, the investigation of which Faraday took up, and concerning which in 1833 he announced his important law of electrochemical equivalents, viz.: "The same quantity of electricity — that is, the same electric current — decomposes chemically equivalent quantities of all the bodies which it traverses; hence the weights of elements separated in these electrolytes are to each other as their chemical equivalents." Employing a battery of 2,000 elements of a voltaic pile Humphry Davy in 1809 gave the first public demonstration of the electric arc light, using charcoal enclosed in a vacuum. Somewhat important to note, it was not until many years after the discovery of the voltaic pile that the sameness of animal and frictional electricity with voltaic electricity was clearly recognized and demonstrated. Thus as late as January 1833 we find Faraday writing in a paper on the electricity of the electric ray. "After an examination of the experiments of Walsh, Ingenhousz, Henry Cavendish, Sir H. Davy, and Dr. Davy, no doubt remains on my mind as to the identity of the electricity of the torpedo with common (frictional) and voltaic electricity; and I presume that so little will remain on the mind of others as to justify my refraining from entering at length into the philosophical proof of that identity. The doubts raised by Sir Humphry Davy have been removed by his brother, Dr. Davy; the results of the latter being the reverse of those of the former. ... The general conclusion which must, I think, be drawn from this collection of facts (a table showing the similarity of properties of the diversely named electricities) is, that electricity, whatever may be its source, is identical in its nature." It is proper to state, however, that prior to Faraday's time the similarity of electricity derived from different sources was more than suspected. Thus, William Hyde Wollaston wrote in 1801: "This similarity in the means by which both electricity and galvanism (voltaic electricity) appear to be excited in addition to the resemblance that has been traced between their effects shows that they are both essentially the same and confirm an opinion that has already been advanced by others, that all the differences discoverable in the effects of the latter may be owing to its being less intense, but produced in much larger quantity."
In the same paper Wollaston describes certain experiments in which he uses very fine wire in a solution of sulphate of copper through which he passed electric currents from an electric machine. This is interesting in connection with the later-day use of almost similarly arranged fine wires in electrolytic receivers in wireless, or radio-telegraphy. In the first half of the 19th century many very important additions were made to the world's knowledge concerning electricity and magnetism. For example, in 1820 Hans Christian Ørsted of Copenhagen discovered the deflecting effect of an electric current traversing a wire upon a suspended magnetic needle. This discovery gave a clue to the subsequently proved intimate relationship between electricity and magnetism, which was promptly followed up by Ampère, who some months later, in September 1820, presented the first elements of his new theory, which he developed in the following years, culminating with the publication in 1827 of his "Mémoire sur la théorie mathématique des phénomènes électrodynamiques uniquement déduite de l'expérience" (Memoir on the Mathematical Theory of Electrodynamic Phenomena, Uniquely Deduced from Experience) announcing his celebrated theory of electrodynamics, relating to the force that one current exerts upon another, by its electro-magnetic effects, namely:
Two parallel portions of a circuit attract one another if the currents in them are flowing in the same direction, and repel one another if the currents flow in the opposite direction.
Two portions of circuits crossing one another obliquely attract one another if both the currents flow either towards or from the point of crossing, and repel one another if one flows to and the other from that point.
When an element of a circuit exerts a force on another element of a circuit, that force always tends to urge the second one in a direction at right angles to its own direction.
Ampère brought a multitude of phenomena into theory by his investigations of the mechanical forces between conductors supporting currents and magnets. James Clerk Maxwell, in his "A Treatise on Electricity and Magnetism", named Ampère "the Newton of electricity". The German physicist Seebeck discovered in 1821 that when heat is applied to the junction of two metals that had been soldered together an electric current is set up. This is termed thermoelectricity. Seebeck's device consists of a strip of copper bent at each end and soldered to a plate of bismuth. A magnetic needle is placed parallel with the copper strip. When the heat of a lamp is applied to the junction of the copper and bismuth an electric current is set up which deflects the needle. Around this time, Siméon Denis Poisson attacked the difficult problem of induced magnetization, and his results, though differently expressed, are still the theory, as a most important first approximation. It was in the application of mathematics to physics that his services to science were performed. Perhaps the most original, and certainly the most permanent in their influence, were his memoirs on the theory of electricity and magnetism, which virtually created a new branch of mathematical physics. George Green wrote An Essay on the Application of Mathematical Analysis to the Theories of Electricity and Magnetism in 1828. The essay introduced several important concepts, among them a theorem similar to the modern Green's theorem, the idea of potential functions as currently used in physics, and the concept of what are now called Green's functions.
George Green was the first person to create a mathematical theory of electricity and magnetism and his theory formed the foundation for the work of other scientists such as James Clerk Maxwell, William Thomson, and others. Peltier in 1834 discovered an effect opposite to thermoelectricity, namely, that when a current is passed through a couple of dissimilar metals the temperature is lowered or raised at the junction of the metals, depending on the direction of the current. This is termed the Peltier effect. The variations of temperature are found to be proportional to the strength of the current and not to the square of the strength of the current as in the case of heat due to the ordinary resistance of a conductor. This second law is the I2R law, discovered experimentally in 1841 by the English physicist Joule. In other words, this important law is that the heat generated in any part of an electric circuit is directly proportional to the product of the resistance R of this part of the circuit and to the square of the strength of current I flowing in the circuit. In 1822 Johann Schweigger devised the first galvanometer. This instrument was subsequently much improved by Wilhelm Weber (1833). In 1825 William Sturgeon of Woolwich, England, invented the horseshoe and straight bar electromagnet, receiving therefor the silver medal of the Society of Arts. In 1837 Carl Friedrich Gauss and Weber (both noted workers of this period) jointly invented a reflecting galvanometer for telegraph purposes. This was the forerunner of the Thomson reflecting and other exceedingly sensitive galvanometers once used in submarine signaling and still widely employed in electrical measurements. Arago in 1824 made the important discovery that when a copper disc is rotated in its own plane, and if a magnetic needle be freely suspended on a pivot over the disc, the needle will rotate with the disc. If on the other hand the needle is fixed it will tend to retard the motion of the disc. This effect was termed Arago's rotations. Futile attempts were made by Charles Babbage, Peter Barlow, John Herschel and others to explain this phenomenon. The true explanation was reserved for Faraday, namely, that electric currents are induced in the copper disc by the cutting of the magnetic lines of force of the needle, which currents in turn react on the needle. Georg Simon Ohm did his work on resistance in the years 1825 and 1826, and published his results in 1827 as the book Die galvanische Kette, mathematisch bearbeitet. He drew considerable inspiration from Fourier's work on heat conduction in the theoretical explanation of his work. For experiments, he initially used voltaic piles, but later used a thermocouple as this provided a more stable voltage source in terms of internal resistance and constant potential difference. He used a galvanometer to measure current, and knew that the voltage between the thermocouple terminals was proportional to the junction temperature. He then added test wires of varying length, diameter, and material to complete the circuit. He found that his data could be modeled through a simple equation with variable composed of the reading from a galvanometer, the length of the test conductor, thermocouple junction temperature, and a constant of the entire setup. From this, Ohm determined his law of proportionality and published his results. 
In 1827, he announced the now famous law that bears his name, that is, that the strength of the current in a conductor is directly proportional to the electromotive force and inversely proportional to the resistance (I = E/R). Ohm brought into order a host of puzzling facts connecting electromotive force and electric current in conductors, which all previous electricians had only succeeded in loosely binding together qualitatively under some rather vague statements. Ohm found that the results could be summed up in such a simple law, and by Ohm's discovery a large part of the domain of electricity became annexed to theory.

Faraday and Henry
The discovery of electromagnetic induction was made almost simultaneously, although independently, by Michael Faraday, who was first to make the discovery in 1831, and Joseph Henry in 1832. Henry's discovery of self-induction and his work on spiral conductors using a copper coil were made public in 1835, just before those of Faraday. In 1831 began the epoch-making researches of Michael Faraday, the famous pupil and successor of Humphry Davy at the head of the Royal Institution, London, relating to electric and electromagnetic induction. The remarkable researches of Faraday, the prince of experimentalists, on electrostatics and electrodynamics and the induction of currents were rather long in being brought from the crude experimental state to a compact system expressing their real essence. Faraday was not a competent mathematician, but had he been one, he would have been greatly assisted in his researches, have saved himself much useless speculation, and would have anticipated much later work. He would, for instance, knowing Ampère's theory, by his own results have readily been led to Neumann's theory, and the connected work of Helmholtz and Thomson. Faraday's studies and researches extended from 1831 to 1855 and a detailed description of his experiments, deductions and speculations is to be found in his compiled papers, entitled Experimental Researches in Electricity. Faraday was by profession a chemist. He was not in the remotest degree a mathematician in the ordinary sense; indeed, it is a question if in all his writings there is a single mathematical formula. The experiment which led Faraday to the discovery of electromagnetic induction was made as follows: He constructed what is now and was then termed an induction coil, the primary and secondary wires of which were wound on a wooden bobbin, side by side, and insulated from one another. In the circuit of the primary wire he placed a battery of approximately 100 cells. In the secondary wire he inserted a galvanometer. On making his first test he observed no results, the galvanometer remaining quiescent, but on increasing the length of the wires he noticed a deflection of the galvanometer in the secondary wire when the circuit of the primary wire was made and broken. This was the first observed instance of the development of electromotive force by electromagnetic induction. He also discovered that induced currents are established in a second closed circuit when the current strength is varied in the first wire, and that the direction of the current in the secondary circuit is opposite to that in the first circuit. Also that a current is induced in a secondary circuit when another circuit carrying a current is moved to and from the first circuit, and that the approach or withdrawal of a magnet to or from a closed circuit induces momentary currents in the latter. In short, within the space of a few months Faraday discovered by experiment virtually all the laws and facts now known concerning electro-magnetic induction and magneto-electric induction.
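The text describes Faraday's induction results only qualitatively; in modern notation they are summarized by Faraday's law of induction, emf = −N dΦ/dt, where Φ is the magnetic flux through one turn of the circuit. The following minimal sketch uses purely illustrative values (none of them come from Faraday's apparatus) to show how a changing flux produces an electromotive force.

```python
# Minimal numerical illustration of Faraday's law of induction,
# emf = -N * dPhi/dt, for a coil in a linearly ramping magnetic field.
# All quantities are illustrative; the historical experiments described
# above were purely qualitative.
N = 100                      # turns in the coil
area = 0.01                  # coil area, m^2
B_start, B_end = 0.0, 0.5    # magnetic flux density, tesla
dt = 0.2                     # time over which the field ramps, s

dPhi = (B_end - B_start) * area      # change in flux through one turn, Wb
emf = -N * dPhi / dt                 # induced electromotive force, volts
print(f"induced EMF = {emf:.2f} V")  # -2.50 V for these values
```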
Upon these discoveries, with scarcely an exception, depends the operation of the telephone, the dynamo machine, and incidental to the dynamo electric machine practically all the gigantic electrical industries of the world, including electric lighting, electric traction, the operation of electric motors for power purposes, and electro-plating, electrotyping, etc. In his investigations of the peculiar manner in which iron filings arrange themselves on a cardboard or glass in proximity to the poles of a magnet, Faraday conceived the idea of magnetic "lines of force" extending from pole to pole of the magnet and along which the filings tend to place themselves. On the discovery being made that magnetic effects accompany the passage of an electric current in a wire, it was also assumed that similar magnetic lines of force whirled around the wire. For convenience and to account for induced electricity it was then assumed that when these lines of force are "cut" by a wire in passing across them or when the lines of force in rising and falling cut the wire, a current of electricity is developed, or to be more exact, an electromotive force is developed in the wire that sets up a current in a closed circuit. Faraday advanced what has been termed the molecular theory of electricity which assumes that electricity is the manifestation of a peculiar condition of the molecule of the body rubbed or the ether surrounding the body. Faraday also, by experiment, discovered paramagnetism and diamagnetism, namely, that all solids and liquids are either attracted or repelled by a magnet. For example, iron, nickel, cobalt, manganese, chromium, etc., are paramagnetic (attracted by magnetism), whilst other substances, such as bismuth, phosphorus, antimony, zinc, etc., are repelled by magnetism or are diamagnetic. Brugans of Leyden in 1778 and Le Baillif and Becquerel in 1827 had previously discovered diamagnetism in the case of bismuth and antimony. Faraday also rediscovered specific inductive capacity in 1837, the results of the experiments by Cavendish not having been published at that time. He also predicted the retardation of signals on long submarine cables due to the inductive effect of the insulation of the cable, in other words, the static capacity of the cable. In 1816 telegraph pioneer Francis Ronalds had also observed signal retardation on his buried telegraph lines, attributing it to induction. The 25 years immediately following Faraday's discoveries of electromagnetic induction were fruitful in the promulgation of laws and facts relating to induced currents and to magnetism. In 1834 Heinrich Lenz and Moritz von Jacobi independently demonstrated the now familiar fact that the currents induced in a coil are proportional to the number of turns in the coil. Lenz also announced at that time his important law that, in all cases of electromagnetic induction the induced currents have such a direction that their reaction tends to stop the motion that produces them, a law that was perhaps deducible from Faraday's explanation of Arago's rotations. The induction coil was first designed by Nicholas Callan in 1836. In 1845 Joseph Henry, the American physicist, published an account of his valuable and interesting experiments with induced currents of a high order, showing that currents could be induced from the secondary of an induction coil to the primary of a second coil, thence to its secondary wire, and so on to the primary of a third coil, etc. 
Heinrich Daniel Ruhmkorff further developed the induction coil; the Ruhmkorff coil was patented in 1851, and he utilized long windings of copper wire to achieve a spark of approximately 2 inches (50 mm) in length. In 1857, after examining a greatly improved version made by an American inventor, Edward Samuel Ritchie, Ruhmkorff improved his design (as did other engineers), using glass insulation and other innovations to allow the production of considerably longer sparks. Middle 19th century Up to the middle of the 19th century, indeed up to about 1870, electrical science was, it may be said, a sealed book to the majority of electrical workers. Prior to this time a number of handbooks had been published on electricity and magnetism, notably Auguste de La Rive's exhaustive 'Treatise on Electricity', in 1851 (French) and 1853 (English); August Beer's Einleitung in die Elektrostatik, die Lehre vom Magnetismus und die Elektrodynamik; Wiedemann's 'Galvanismus'; and Reiss' 'Reibungselektricität'. But these works consisted in the main of details of experiments with electricity and magnetism, and dealt but little with the laws and facts of those phenomena. Henry d'Abria published the results of some researches into the laws of induced currents, but owing to the complexity of the investigation the work was not productive of very notable results. Around the mid-19th century, Fleeming Jenkin's work on electricity and magnetism and Clerk Maxwell's 'Treatise on Electricity and Magnetism' were published. These books were departures from the beaten path. As Jenkin states in the preface to his work, the science of the schools was so dissimilar from that of the practical electrician that it was quite impossible to give students sufficient, or even approximately sufficient, textbooks. A student, he said, might have mastered de la Rive's large and valuable treatise and yet feel as if in an unknown country, listening to an unknown tongue, in the company of practical men. As another writer has said, with the coming of Jenkin's and Maxwell's books all impediments in the way of electrical students were removed, "the full meaning of Ohm's law becomes clear; electromotive force, difference of potential, resistance, current, capacity, lines of force, magnetization and chemical affinity were measurable, and could be reasoned about, and calculations could be made about them with as much certainty as calculations in dynamics". About 1850, Kirchhoff published his laws relating to branched or divided circuits. He also showed mathematically that according to the then prevailing electrodynamic theory, electricity would be propagated along a perfectly conducting wire with the velocity of light. Helmholtz investigated mathematically the effects of induction upon the strength of a current and deduced therefrom equations, which experiment confirmed, showing amongst other important points the retarding effect of self-induction under certain conditions of the circuit. In 1853, Sir William Thomson (later Lord Kelvin) predicted as a result of mathematical calculations the oscillatory nature of the electric discharge of a condenser circuit. To Henry, however, belongs the credit of discerning, as a result of his experiments in 1842, the oscillatory nature of the Leyden jar discharge. He wrote: "The phenomena require us to admit the existence of a principal discharge in one direction, and then several reflex actions backward and forward, each more feeble than the preceding, until the equilibrium is obtained."
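The oscillation Thomson predicted follows from the circuit constants alone; neglecting resistance, a condenser of capacitance C discharging through an inductance L rings at a frequency of 1/(2π√(LC)). The component values in the sketch below are assumed purely for illustration:

    # Oscillatory discharge of a condenser (Leyden jar) through an inductive
    # circuit, as predicted by Thomson in 1853.  Component values are assumed.
    import math

    capacitance_farad = 1.0e-9     # assumed Leyden-jar capacitance (1 nF)
    inductance_henry = 1.0e-5      # assumed circuit inductance (10 uH)

    # Neglecting resistance, the discharge rings at f = 1 / (2*pi*sqrt(L*C)).
    frequency_hz = 1.0 / (2.0 * math.pi * math.sqrt(inductance_henry * capacitance_farad))
    print(f"Ring frequency of the discharge: {frequency_hz / 1e6:.2f} MHz")
    # With resistance included the oscillation decays, each swing "more feeble
    # than the preceding", as Henry described.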
These oscillations were subsequently observed by B. W. Feddersen (1857) who using a rotating concave mirror projected an image of the electric spark upon a sensitive plate, thereby obtaining a photograph of the spark which plainly indicated the alternating nature of the discharge. Sir William Thomson was also the discoverer of the electric convection of heat (the "Thomson" effect). He designed for electrical measurements of precision his quadrant and absolute electrometers. The reflecting galvanometer and siphon recorder, as applied to submarine cable signaling, are also due to him. About 1876 the American physicist Henry Augustus Rowland of Baltimore demonstrated the important fact that a static charge carried around produces the same magnetic effects as an electric current. The Importance of this discovery consists in that it may afford a plausible theory of magnetism, namely, that magnetism may be the result of directed motion of rows of molecules carrying static charges. After Faraday's discovery that electric currents could be developed in a wire by causing it to cut across the lines of force of a magnet, it was to be expected that attempts would be made to construct machines to avail of this fact in the development of voltaic currents. The first machine of this kind was due to Hippolyte Pixii, 1832. It consisted of two bobbins of iron wire, opposite which the poles of a horseshoe magnet were caused to rotate. As this produced in the coils of the wire an alternating current, Pixii arranged a commutating device (commutator) that converted the alternating current of the coils or armature into a direct current in the external circuit. This machine was followed by improved forms of magneto-electric machines due to Edward Samuel Ritchie, Joseph Saxton, Edward M. Clarke 1834, Emil Stohrer 1843, Floris Nollet 1849, Shepperd 1856, Van Maldern, Werner von Siemens, Henry Wilde and others. A notable advance in the art of dynamo construction was made by Samuel Alfred Varley in 1866 and by Siemens and Charles Wheatstone, who independently discovered that when a coil of wire, or armature, of the dynamo machine is rotated between the poles (or in the "field") of an electromagnet, a weak current is set up in the coil due to residual magnetism in the iron of the electromagnet, and that if the circuit of the armature be connected with the circuit of the electromagnet, the weak current developed in the armature increases the magnetism in the field. This further increases the magnetic lines of force in which the armature rotates, which still further increases the current in the electromagnet, thereby producing a corresponding increase in the field magnetism, and so on, until the maximum electromotive force which the machine is capable of developing is reached. By means of this principle the dynamo machine develops its own magnetic field, thereby much increasing its efficiency and economical operation. Not by any means, however, was the dynamo electric machine perfected at the time mentioned. In 1860 an important improvement had been made by Dr. Antonio Pacinotti of Pisa who devised the first electric machine with a ring armature. This machine was first used as an electric motor, but afterward as a generator of electricity. 
The discovery of the principle of the reversibility of the dynamo electric machine (variously attributed to Walenn 1860; Pacinotti 1864; Fontaine, Gramme 1873; Deprez 1881, and others) whereby it may be used as an electric motor or as a generator of electricity has been termed one of the greatest discoveries of the 19th century. In 1872 the drum armature was devised by Hefner-Alteneck. This machine in a modified form was subsequently known as the Siemens dynamo. These machines were presently followed by the Schuckert, Gulcher, Fein, Brush, Hochhausen, Edison and the dynamo machines of numerous other inventors. In the early days of dynamo machine construction the machines were mainly arranged as direct current generators, and perhaps the most important application of such machines at that time was in electro-plating, for which purpose machines of low voltage and large current strength were employed. Beginning about 1887 alternating current generators came into extensive operation and the commercial development of the transformer, by means of which currents of low voltage and high current strength are transformed to currents of high voltage and low current strength, and vice versa, in time revolutionized the transmission of electric power to long distances. Likewise the introduction of the rotary converter (in connection with the "step-down" transformer) which converts alternating currents into direct currents (and vice versa) has effected large economies in the operation of electric power systems. Before the introduction of dynamo electric machines, voltaic, or primary, batteries were extensively used for electro-plating and in telegraphy. There are two distinct types of voltaic cells, namely, the "open" and the "closed", or "constant", type. The open type in brief is that type which operated on closed circuit becomes, after a short time, polarized; that is, gases are liberated in the cell which settle on the negative plate and establish a resistance that reduces the current strength. After a brief interval of open circuit these gases are eliminated or absorbed and the cell is again ready for operation. Closed circuit cells are those in which the gases in the cells are absorbed as quickly as liberated and hence the output of the cell is practically uniform. The Leclanché and Daniell cells, respectively, are familiar examples of the "open" and "closed" type of voltaic cell. Batteries of the Daniell or "gravity" type were employed almost generally in the United States and Canada as the source of electromotive force in telegraphy before the dynamo machine became available. In the late 19th century, the term luminiferous aether, meaning light-bearing aether, was a conjectured medium for the propagation of light. The word aether stems via Latin from the Greek αιθήρ, from a root meaning to kindle, burn, or shine. It signifies the substance which was thought in ancient times to fill the upper regions of space, beyond the clouds. Maxwell In 1864 James Clerk Maxwell of Edinburgh announced his electromagnetic theory of light, which was perhaps the greatest single step in the world's knowledge of electricity. Maxwell had studied and commented on the field of electricity and magnetism as early as 1855/6 when On Faraday's lines of force was read to the Cambridge Philosophical Society. The paper presented a simplified model of Faraday's work, and how the two phenomena were related. He reduced all of the current knowledge into a linked set of differential equations with 20 equations in 20 variables. 
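In the compact vector notation later introduced by Oliver Heaviside (whose reformulation is mentioned below), the content of those 20 equations is today usually condensed into four field equations. The modern SI form given here is a later condensation, not Maxwell's original notation; ρ is the charge density and J the current density:

    \[
      \nabla \cdot \mathbf{E} = \frac{\rho}{\varepsilon_0}, \qquad
      \nabla \cdot \mathbf{B} = 0, \qquad
      \nabla \times \mathbf{E} = -\frac{\partial \mathbf{B}}{\partial t}, \qquad
      \nabla \times \mathbf{B} = \mu_0 \mathbf{J} + \mu_0 \varepsilon_0 \frac{\partial \mathbf{E}}{\partial t}.
    \]

The final term, Maxwell's displacement current, is the idea taken up below in connection with dielectrics.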
This work was later published as On Physical Lines of Force in March 1861. In order to determine the force which is acting on any part of the machine we must find its momentum, and then calculate the rate at which this momentum is being changed. This rate of change will give us the force. The method of calculation which it is necessary to employ was first given by Lagrange, and afterwards developed, with some modifications, by Hamilton's equations. It is usually referred to as Hamilton's principle; when the equations in the original form are used they are known as Lagrange's equations. Now Maxwell logically showed how these methods of calculation could be applied to the electro-magnetic field. The energy of a dynamical system is partly kinetic, partly potential. Maxwell supposes that the magnetic energy of the field is kinetic energy, the electric energy potential. Around 1862, while lecturing at King's College, Maxwell calculated that the speed of propagation of an electromagnetic field is approximately that of the speed of light. He considered this to be more than just a coincidence, and commented "We can scarcely avoid the conclusion that light consists in the transverse undulations of the same medium which is the cause of electric and magnetic phenomena." Working on the problem further, Maxwell showed that the equations predict the existence of waves of oscillating electric and magnetic fields that travel through empty space at a speed that could be predicted from simple electrical experiments; using the data available at the time, Maxwell obtained a velocity of 310,740,000 m/s. In his 1864 paper A Dynamical Theory of the Electromagnetic Field, Maxwell wrote, The agreement of the results seems to show that light and magnetism are affections of the same substance, and that light is an electromagnetic disturbance propagated through the field according to electromagnetic laws. As already noted herein Faraday, and before him, Ampère and others, had inklings that the luminiferous ether of space was also the medium for electric action. It was known by calculation and experiment that the velocity of electricity was approximately 186,000 miles per second; that is, equal to the velocity of light, which in itself suggests the idea of a relationship between -electricity and "light." A number of the earlier philosophers or mathematicians, as Maxwell terms them, of the 19th century, held the view that electromagnetic phenomena were explainable by action at a distance. Maxwell, following Faraday, contended that the seat of the phenomena was in the medium. The methods of the mathematicians in arriving at their results were synthetical while Faraday's methods were analytical. Faraday in his mind's eye saw lines of force traversing all space where the mathematicians saw centres of force attracting at a distance. Faraday sought the seat of the phenomena in real actions going on in the medium; they were satisfied that they had found it in a power of action at a distance on the electric fluids. Both of these methods, as Maxwell points out, had succeeded in explaining the propagation of light as an electromagnetic phenomenon while at the same time the fundamental conceptions of what the quantities concerned are, radically differed. 
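Maxwell's numerical check mentioned above can be repeated today in a few lines: the speed of electromagnetic waves follows from the two constants measured in purely electrical experiments, the permeability and permittivity of free space. Modern values are used in this sketch, so the result is closer to the accepted speed of light than Maxwell's 310,740,000 m/s:

    # Speed of electromagnetic waves from electrical constants: c = 1/sqrt(mu0*eps0).
    # Modern constants are used here; Maxwell's 1860s data gave about 3.1074e8 m/s.
    import math

    mu0 = 4.0e-7 * math.pi          # permeability of free space (H/m, classical value)
    eps0 = 8.8541878128e-12         # permittivity of free space (F/m)

    c = 1.0 / math.sqrt(mu0 * eps0)
    print(f"Predicted wave speed: {c:,.0f} m/s")   # roughly 299,792,458 m/s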
The mathematicians assumed that insulators were barriers to electric currents; that, for instance, in a Leyden jar or electric condenser the electricity was accumulated at one plate and that by some occult action at a distance electricity of an opposite kind was attracted to the other plate. Maxwell, looking further than Faraday, reasoned that if light is an electromagnetic phenomenon and is transmissible through dielectrics such as glass, the phenomenon must be in the nature of electromagnetic currents in the dielectrics. He therefore contended that in the charging of a condenser, for instance, the action did not stop at the insulator, but that some "displacement" currents are set up in the insulating medium, which currents continue until the resisting force of the medium equals that of the charging force. In a closed conductor circuit, an electric current is also a displacement of electricity. The conductor offers a certain resistance, akin to friction, to the displacement of electricity, and heat is developed in the conductor, proportional to the square of the current (as already stated herein), which current flows as long as the impelling electric force continues. This resistance may be likened to that met with by a ship as it displaces in the water in its progress. The resistance of the dielectric is of a different nature and has been compared to the compression of multitudes of springs, which, under compression, yield with an increasing back pressure, up to a point where the total back pressure equals the initial pressure. When the initial pressure is withdrawn the energy expended in compressing the "springs" is returned to the circuit, concurrently with the return of the springs to their original condition, this producing a reaction in the opposite direction. Consequently, the current due to the displacement of electricity in a conductor may be continuous, while the displacement currents in a dielectric are momentary and, in a circuit or medium which contains but little resistance compared with capacity or inductance reaction, the currents of discharge are of an oscillatory or alternating nature. Maxwell extended this view of displacement currents in dielectrics to the ether of free space. Assuming light to be the manifestation of alterations of electric currents in the ether, and vibrating at the rate of light vibrations, these vibrations by induction set up corresponding vibrations in adjoining portions of the ether, and in this way the undulations corresponding to those of light are propagated as an electromagnetic effect in the ether. Maxwell's electromagnetic theory of light obviously involved the existence of electric waves in free space, and his followers set themselves the task of experimentally demonstrating the truth of the theory. By 1871, Maxwell could already reflect on the philosophy of science. End of the 19th century In 1887, the German physicist Heinrich Hertz in a series of experiments proved the actual existence of electromagnetic waves, showing that transverse free space electromagnetic waves can travel over some distance as predicted by Maxwell and Faraday. Hertz published his work in a book titled: Electric waves: being researches on the propagation of electric action with finite velocity through space. The discovery of electromagnetic waves in space led to the development of radio in the closing years of the 19th century. The electron as a unit of charge in electrochemistry was posited by G. Johnstone Stoney in 1874, who also coined the term electron in 1894. 
Plasma was first identified in a Crookes tube, and so described by Sir William Crookes in 1879 (he called it "radiant matter"). The place of electricity in leading up to the discovery of those beautiful phenomena of the Crookes Tube (due to Sir William Crookes), viz., Cathode rays, and later to the discovery of Roentgen or X-rays, must not be overlooked, since without electricity as the excitant of the tube the discovery of the rays might have been postponed indefinitely. It has been noted herein that Dr. William Gilbert was termed the founder of electrical science. This must, however, be regarded as a comparative statement. Oliver Heaviside was a self-taught scholar who reformulated Maxwell's field equations in terms of electric and magnetic forces and energy flux, and independently co-formulated vector analysis. During the late 1890s a number of physicists proposed that electricity, as observed in studies of electrical conduction in conductors, electrolytes, and cathode ray tubes, consisted of discrete units, which were given a variety of names, but the reality of these units had not been confirmed in a compelling way. However, there were also indications that the cathode rays had wavelike properties. Faraday, Weber, Helmholtz, Clifford and others had glimpses of this view; and the experimental works of Zeeman, Goldstein, Crookes, J. J. Thomson and others had greatly strengthened this view. Weber predicted that electrical phenomena were due to the existence of electrical atoms, the influence of which on one another depended on their position and relative accelerations and velocities. Helmholtz and others also contended that the existence of electrical atoms followed from Faraday's laws of electrolysis, and Johnstone Stoney, to whom is due the term "electron", showed that each chemical ion of the decomposed electrolyte carries a definite and constant quantity of electricity, and inasmuch as these charged ions are separated on the electrodes as neutral substances there must be an instant, however brief, when the charges must be capable of existing separately as electrical atoms; while in 1887, Clifford wrote: "There is great reason to believe that every material atom carries upon it a small electric current, if it does not wholly consist of this current." In 1896, J. J. Thomson performed experiments indicating that cathode rays really were particles, found an accurate value for their charge-to-mass ratio e/m, and found that e/m was independent of cathode material. He made good estimates of both the charge e and the mass m, finding that cathode ray particles, which he called "corpuscles", had perhaps one thousandth of the mass of the least massive ion known (hydrogen). He further showed that the negatively charged particles produced by radioactive materials, by heated materials, and by illuminated materials, were universal. The nature of the Crookes tube "cathode ray" matter was identified by Thomson in 1897. In the late 19th century, the Michelson–Morley experiment was performed by Albert A. Michelson and Edward W. Morley at what is now Case Western Reserve University. It is generally considered to be the evidence against the theory of a luminiferous aether. The experiment has also been referred to as "the kicking-off point for the theoretical aspects of the Second Scientific Revolution." Primarily for this work, Michelson was awarded the Nobel Prize in 1907. 
Dayton Miller continued with experiments, conducting thousands of measurements and eventually developing the most accurate interferometer in the world at that time. Miller and others, such as Morley, continue observations and experiments dealing with the concepts. A range of proposed aether-dragging theories could explain the null result but these were more complex, and tended to use arbitrary-looking coefficients and physical assumptions. By the end of the 19th century electrical engineers had become a distinct profession, separate from physicists and inventors. They created companies that investigated, developed and perfected the techniques of electricity transmission, and gained support from governments all over the world for starting the first worldwide electrical telecommunication network, the telegraph network. Pioneers in this field included Werner von Siemens, founder of Siemens AG in 1847, and John Pender, founder of Cable & Wireless. William Stanley made the first public demonstration of a transformer that enabled commercial delivery of alternating current in 1886. Large two-phase alternating current generators were built by a British electrician, J. E. H. Gordon, in 1882. Lord Kelvin and Sebastian Ferranti also developed early alternators, producing frequencies between 100 and 300 hertz. After 1891, polyphase alternators were introduced to supply currents of multiple differing phases. Later alternators were designed for varying alternating-current frequencies between sixteen and about one hundred hertz, for use with arc lighting, incandescent lighting and electric motors. The possibility of obtaining the electric current in large quantities, and economically, by means of dynamo electric machines gave impetus to the development of incandescent and arc lighting. Until these machines had attained a commercial basis voltaic batteries were the only available source of current for electric lighting and power. The cost of these batteries, however, and the difficulties of maintaining them in reliable operation were prohibitory of their use for practical lighting purposes. The date of the employment of arc and incandescent lamps may be set at about 1877. Even in 1880, however, but little headway had been made toward the general use of these illuminants; the rapid subsequent growth of this industry is a matter of general knowledge. The employment of storage batteries, which were originally termed secondary batteries or accumulators, began about 1879. Such batteries are now utilized on a large scale as auxiliaries to the dynamo machine in electric power-houses and substations, in electric automobiles and in immense numbers in automobile ignition and starting systems, also in fire alarm telegraphy and other signal systems. For the 1893 World's Columbian International Exposition in Chicago, General Electric proposed to power the entire fair with direct current. Westinghouse slightly undercut GE's bid and used the fair to debut their alternating current based system, showing how their system could power poly-phase motors and all the other AC and DC exhibits at the fair. Second Industrial Revolution The Second Industrial Revolution, also known as the Technological Revolution, was a phase of rapid industrialization in the final third of the 19th century and the beginning of the 20th. 
Along with the expansion of railroads, iron and steel production, widespread use of machinery in manufacturing, and greatly increased use of steam power and petroleum, the period saw expansion in the use of electricity and the adoption of electromagnetic theory in developing various technologies. The 1880s saw the spread of large scale commercial electric power systems, first used for lighting and eventually for electro-motive power and heating. Early systems used both alternating current and direct current. Large centralized power generation became possible when it was recognized that alternating current electric power lines could use transformers to take advantage of the fact that each doubling of the voltage would allow the same size cable to transmit the same amount of power four times the distance (a relationship illustrated in the brief sketch after this paragraph). Transformers were used to raise the voltage at the point of generation (a representative number is a generator voltage in the low kilovolt range) to a much higher voltage (tens of thousands to several hundred thousand volts) for primary transmission, followed by several downward transformations for commercial and residential use. Between 1885 and 1890 poly-phase currents combined with electromagnetic induction and practical AC induction motors were developed. The International Electro-Technical Exhibition of 1891 was held between 16 May and 19 October on the disused site of the three former "Westbahnhöfe" (Western Railway Stations) in Frankfurt am Main. The exhibition featured the first long-distance transmission of high-power, three-phase electric current, which was generated 175 km away at Lauffen am Neckar. As a result of this successful field trial, three-phase current became established for electrical transmission networks throughout the world. Much was done in the direction of the improvement of railroad terminal facilities, and it was difficult to find a steam railroad engineer who would have denied that all the important steam railroads of this country were eventually to be operated electrically. In other directions the progress of events as to the utilization of electric power was expected to be equally rapid. In every part of the world the power of falling water, nature's perpetual motion machine, which had been going to waste since the world began, was now being converted into electricity and transmitted by wire hundreds of miles to points where it was usefully and economically employed. The first windmill for electricity production was built in Scotland in July 1887 by the Scottish electrical engineer James Blyth. Across the Atlantic, in Cleveland, Ohio, a larger and heavily engineered machine was designed and constructed in 1887–88 by Charles F. Brush; it was built by his engineering company at his home and operated from 1886 until 1900. The Brush wind turbine had a large-diameter rotor and was mounted on a 60-foot (18 m) tower. Although large by today's standards, the machine was only rated at 12 kW; it turned relatively slowly since it had 144 blades. The connected dynamo was used either to charge a bank of batteries or to operate up to 100 incandescent light bulbs, three arc lamps, and various motors in Brush's laboratory. The machine fell into disuse after 1900, when electricity became available from Cleveland's central stations, and was abandoned in 1908.
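The advantage of high-voltage transmission noted above can be made concrete with a small sketch: for a fixed power carried over a line of fixed resistance per unit length, the current falls as the voltage rises, so the I²R loss fraction falls with the square of the voltage, and doubling the voltage permits roughly four times the distance for the same relative loss. All numbers below are illustrative assumptions:

    # Why doubling the transmission voltage allows roughly four times the distance
    # for the same cable and the same relative loss.  Assumed, illustrative numbers.
    delivered_power_w = 100_000.0        # power to be transmitted (100 kW)
    resistance_per_km_ohm = 0.5          # assumed line resistance per kilometre

    def loss_fraction(voltage_v, distance_km):
        """Fraction of the transmitted power dissipated in the line (I^2 * R)."""
        current_a = delivered_power_w / voltage_v
        line_resistance = resistance_per_km_ohm * distance_km
        return current_a ** 2 * line_resistance / delivered_power_w

    for voltage, distance in [(2_000, 10), (4_000, 40)]:
        print(f"{voltage:>6} V over {distance:>3} km -> "
              f"{loss_fraction(voltage, distance) * 100:.1f}% of the power lost in the line")
    # Doubling the voltage while quadrupling the distance leaves the loss fraction unchanged.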
20th century Various units of electricity and magnetism have been adopted and named by representatives of the electrical engineering institutes of the world, which units and names have been confirmed and legalized by the governments of the United States and other countries. Thus the volt, from the Italian Volta, has been adopted as the practical unit of electromotive force; the ohm, from the enunciator of Ohm's law, as the practical unit of resistance; the ampere, after the eminent French scientist of that name, as the practical unit of current strength; and the henry as the practical unit of inductance, after Joseph Henry and in recognition of his early and important experimental work in mutual induction. Dewar and John Ambrose Fleming predicted that at absolute zero, pure metals would become perfect electromagnetic conductors (though, later, Dewar altered his opinion on the disappearance of resistance, believing that there would always be some resistance). Walther Hermann Nernst developed the third law of thermodynamics and stated that absolute zero was unattainable. Carl von Linde and William Hampson, both commercial researchers, filed for patents on the Joule–Thomson effect at nearly the same time. Linde's patent was the climax of 20 years of systematic investigation of established facts, using a regenerative counterflow method. Hampson's design also used a regenerative method. The combined process became known as the Linde–Hampson liquefaction process. Heike Kamerlingh Onnes purchased a Linde machine for his research. Zygmunt Florenty Wróblewski conducted research into electrical properties at low temperatures, though his research ended early due to his accidental death. Around 1864, Karol Olszewski and Wróblewski predicted the electrical phenomenon of dropping resistance levels at ultra-cold temperatures. Olszewski and Wróblewski documented evidence of this in the 1880s. A milestone was achieved on 10 July 1908 when Onnes, at Leiden University, produced liquefied helium for the first time, the achievement that later enabled his discovery of superconductivity in 1911. In 1900, William Du Bois Duddell developed the singing arc, producing melodic sounds, from a low to a high tone, from an arc lamp. Lorentz and Poincaré Between 1900 and 1910, many scientists like Wilhelm Wien, Max Abraham, Hermann Minkowski, or Gustav Mie believed that all forces of nature are of electromagnetic origin (the so-called "electromagnetic world view"). This was connected with the electron theory developed between 1892 and 1904 by Hendrik Lorentz. Lorentz introduced a strict separation between matter (electrons) and the aether, whereby in his model the ether is completely motionless and is not set in motion in the neighborhood of ponderable matter. Contrary to earlier electron models, the electromagnetic field of the ether appears as a mediator between the electrons, and changes in this field can propagate no faster than the speed of light. In 1896, three years after submitting his thesis on the Kerr effect, Pieter Zeeman disobeyed the direct orders of his supervisor and used laboratory equipment to measure the splitting of spectral lines by a strong magnetic field. Lorentz theoretically explained the Zeeman effect on the basis of his theory, for which both received the Nobel Prize in Physics in 1902. A fundamental concept of Lorentz's theory in 1895 was the "theorem of corresponding states" for terms of order v/c. This theorem states that a moving observer (relative to the ether) makes the same observations as a resting observer.
This theorem was extended for terms of all orders by Lorentz in 1904. Lorentz noticed that it was necessary to change the space-time variables when changing frames, and introduced concepts like physical length contraction (1892) to explain the Michelson–Morley experiment, and the mathematical concept of local time (1895) to explain the aberration of light and the Fizeau experiment. That resulted in the formulation of the so-called Lorentz transformation by Joseph Larmor (1897, 1900) and Lorentz (1899, 1904). As Lorentz later noted (1921, 1928), he considered the time indicated by clocks resting in the aether as "true" time, while local time was seen by him as a heuristic working hypothesis and a mathematical artifice. Therefore, Lorentz's theorem is seen by modern historians as being a mathematical transformation from a "real" system resting in the aether into a "fictitious" system in motion. Continuing the work of Lorentz, Henri Poincaré between 1895 and 1905 formulated on many occasions the principle of relativity and tried to harmonize it with electrodynamics. He declared simultaneity only a convenient convention which depends on the speed of light, whereby the constancy of the speed of light would be a useful postulate for making the laws of nature as simple as possible. In 1900 he interpreted Lorentz's local time as the result of clock synchronization by light signals, and introduced the electromagnetic momentum by comparing electromagnetic energy to what he called a "fictitious fluid" of mass m = E/c². Finally, in June and July 1905 he declared the relativity principle a general law of nature, including gravitation. He corrected some mistakes of Lorentz and proved the Lorentz covariance of the electromagnetic equations. Poincaré also suggested that there exist non-electrical forces to stabilize the electron configuration, and asserted that gravitation is a non-electrical force as well, contrary to the electromagnetic world view. However, historians have pointed out that he still used the notion of an ether and distinguished between "apparent" and "real" time, and therefore did not arrive at special relativity in its modern understanding. Einstein's Annus Mirabilis In 1905, while he was working in the patent office, Albert Einstein had four papers published in the Annalen der Physik, the leading German physics journal. These are the papers that history has come to call the Annus Mirabilis papers: His paper on the particulate nature of light put forward the idea that certain experimental results, notably the photoelectric effect, could be simply understood from the postulate that light interacts with matter as discrete "packets" (quanta) of energy, an idea that had been introduced by Max Planck in 1900 as a purely mathematical manipulation, and which seemed to contradict contemporary wave theories of light. This was the only work of Einstein's that he himself called "revolutionary." His paper on Brownian motion explained the random movement of very small objects as direct evidence of molecular action, thus supporting the atomic theory. His paper on the electrodynamics of moving bodies introduced the radical theory of special relativity, which showed that the observed independence of the speed of light from the observer's state of motion required fundamental changes to the notion of simultaneity. Consequences of this include the time-space frame of a moving body slowing down and contracting (in the direction of motion) relative to the frame of the observer.
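The transformation underlying both Lorentz's electron theory and Einstein's kinematics can be stated compactly. For two frames in uniform relative motion with speed v along the x-axis it reads, in modern notation:

    \[
      x' = \gamma\,(x - v t), \qquad y' = y, \qquad z' = z, \qquad
      t' = \gamma\!\left(t - \frac{v x}{c^{2}}\right), \qquad
      \gamma = \frac{1}{\sqrt{1 - v^{2}/c^{2}}}.
    \]

The length contraction and the slowing of moving clocks mentioned above follow directly from these relations, as derived in Einstein's electrodynamics paper.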
This paper also argued that the idea of a luminiferous aether—one of the leading theoretical entities in physics at the time—was superfluous. In his paper on mass–energy equivalence (mass and energy having previously been considered distinct concepts), Einstein deduced from his equations of special relativity what later became the well-known expression E = mc², suggesting that tiny amounts of mass could be converted into huge amounts of energy. All four papers are today recognized as tremendous achievements—and hence 1905 is known as Einstein's "Wonderful Year". At the time, however, they were not noticed by most physicists as being important, and many of those who did notice them rejected them outright. Some of this work—such as the theory of light quanta—remained controversial for years. Mid-20th century The first formulation of a quantum theory describing radiation and matter interaction is due to Paul Dirac, who, during the 1920s, was the first to compute the coefficient of spontaneous emission of an atom. Dirac described the quantization of the electromagnetic field as an ensemble of harmonic oscillators with the introduction of the concept of creation and annihilation operators of particles. In the following years, with contributions from Wolfgang Pauli, Eugene Wigner, Pascual Jordan and Werner Heisenberg, and an elegant formulation of quantum electrodynamics due to Enrico Fermi, physicists came to believe that, in principle, it would be possible to perform any computation for any physical process involving photons and charged particles. However, further studies by Felix Bloch with Arnold Nordsieck, and by Victor Weisskopf, in 1937 and 1939, revealed that such computations were reliable only at a first order of perturbation theory, a problem already pointed out by Robert Oppenheimer. At higher orders in the series infinities emerged, making such computations meaningless and casting serious doubts on the internal consistency of the theory itself. With no solution for this problem known at the time, it appeared that a fundamental incompatibility existed between special relativity and quantum mechanics. In December 1938, the German chemists Otto Hahn and Fritz Strassmann sent a manuscript to Naturwissenschaften reporting they had detected the element barium after bombarding uranium with neutrons; simultaneously, they communicated these results to Lise Meitner. Meitner and her nephew Otto Robert Frisch correctly interpreted these results as nuclear fission. Frisch confirmed this experimentally on 13 January 1939. In 1944, Hahn received the Nobel Prize in Chemistry for the discovery of nuclear fission. Some historians who have documented the history of the discovery of nuclear fission believe Meitner should have been awarded the Nobel Prize with Hahn. Difficulties with the quantum theory increased through the end of the 1940s. Improvements in microwave technology made it possible to take more precise measurements of the shift of the levels of the hydrogen atom, now known as the Lamb shift, and of the magnetic moment of the electron. These experiments unequivocally exposed discrepancies which the theory was unable to explain. With the invention of bubble chambers and spark chambers in the 1950s, experimental particle physics discovered a large and ever-growing number of particles called hadrons. It seemed that such a large number of particles could not all be fundamental.
Shortly after the end of the war in 1945, Bell Labs formed a Solid State Physics Group, led by William Shockley and chemist Stanley Morgan; other personnel included John Bardeen and Walter Brattain, physicist Gerald Pearson, chemist Robert Gibney, electronics expert Hilbert Moore and several technicians. Their assignment was to seek a solid-state alternative to fragile glass vacuum tube amplifiers. Their first attempts were based on Shockley's ideas about using an external electrical field on a semiconductor to affect its conductivity. These experiments failed every time, in all sorts of configurations and with all sorts of materials. The group was at a standstill until Bardeen suggested a theory that invoked surface states that prevented the field from penetrating the semiconductor. The group changed its focus to study these surface states, and its members met almost daily to discuss the work. The rapport of the group was excellent, and ideas were freely exchanged. As for the divergence problems of quantum electrodynamics, a path to a solution was given by Hans Bethe. In 1947, while he was traveling by train to reach Schenectady from New York, after giving a talk at the conference at Shelter Island on the subject, Bethe completed the first non-relativistic computation of the shift of the lines of the hydrogen atom as measured by Lamb and Retherford. Despite the limitations of the computation, agreement was excellent. The idea was simply to absorb the infinities into corrections to mass and charge that were actually fixed to a finite value by experiments. In this way, the infinities get absorbed in those constants and yield a finite result in good agreement with experiments. This procedure was named renormalization. Based on Bethe's intuition and fundamental papers on the subject by Shin'ichirō Tomonaga, Julian Schwinger, Richard Feynman and Freeman Dyson, it was finally possible to get fully covariant formulations that were finite at any order in a perturbation series of quantum electrodynamics. Shin'ichirō Tomonaga, Julian Schwinger and Richard Feynman were jointly awarded the Nobel Prize in Physics in 1965 for their work in this area. Their contributions, and those of Freeman Dyson, were about covariant and gauge-invariant formulations of quantum electrodynamics that allow computations of observables at any order of perturbation theory. Feynman's mathematical technique, based on his diagrams, initially seemed very different from the field-theoretic, operator-based approach of Schwinger and Tomonaga, but Freeman Dyson later showed that the two approaches were equivalent. Renormalization, the need to attach a physical meaning to certain divergences appearing in the theory through integrals, has subsequently become one of the fundamental aspects of quantum field theory and has come to be seen as a criterion for a theory's general acceptability. Even though renormalization works very well in practice, Feynman was never entirely comfortable with its mathematical validity, even referring to renormalization as a "shell game" and "hocus pocus". QED has served as the model and template for all subsequent quantum field theories. Building on the work of Peter Higgs, Jeffrey Goldstone, and others, Sheldon Glashow, Steven Weinberg and Abdus Salam independently showed how the weak nuclear force and quantum electrodynamics could be merged into a single electroweak force. Robert Noyce credited Kurt Lehovec for the principle of p–n junction isolation, caused by the action of a biased p–n junction (the diode), as a key concept behind the integrated circuit.
Jack Kilby recorded his initial ideas concerning the integrated circuit in July 1958 and successfully demonstrated the first working integrated circuit on September 12, 1958. In his patent application of February 6, 1959, Kilby described his new device as "a body of semiconductor material ... wherein all the components of the electronic circuit are completely integrated." Kilby won the 2000 Nobel Prize in Physics for his part of the invention of the integrated circuit. Robert Noyce also came up with his own idea of an integrated circuit half a year later than Kilby. Noyce's chip solved many practical problems that Kilby's had not. Noyce's chip, made at Fairchild Semiconductor, was made of silicon, whereas Kilby's chip was made of germanium. Philo Farnsworth developed the Farnsworth–Hirsch Fusor, or simply fusor, an apparatus designed by Farnsworth to create nuclear fusion. Unlike most controlled fusion systems, which slowly heat a magnetically confined plasma, the fusor injects high temperature ions directly into a reaction chamber, thereby avoiding a considerable amount of complexity. When the Farnsworth-Hirsch Fusor was first introduced to the fusion research world in the late 1960s, the Fusor was the first device that could clearly demonstrate it was producing fusion reactions at all. Hopes at the time were high that it could be quickly developed into a practical power source. However, as with other fusion experiments, development into a power source has proven difficult. Nevertheless, the fusor has since become a practical neutron source and is produced commercially for this role. Parity violation The mirror image of an electromagnet produces a field with the opposite polarity. Thus the north and south poles of a magnet have the same symmetry as left and right. Prior to 1956, it was believed that this symmetry was perfect, and that a technician would be unable to distinguish the north and south poles of a magnet except by reference to left and right. In that year, T. D. Lee and C. N. Yang predicted the nonconservation of parity in the weak interaction. To the surprise of many physicists, in 1957 C. S. Wu and collaborators at the U.S. National Bureau of Standards demonstrated that under suitable conditions for polarization of nuclei, the beta decay of cobalt-60 preferentially releases electrons toward the south pole of an external magnetic field, and a somewhat higher number of gamma rays toward the north pole. As a result, the experimental apparatus does not behave comparably with its mirror image. Electroweak theory The first step towards the Standard Model was Sheldon Glashow's discovery, in 1960, of a way to combine the electromagnetic and weak interactions. In 1967, Steven Weinberg and Abdus Salam incorporated the Higgs mechanism into Glashow's electroweak theory, giving it its modern form. The Higgs mechanism is believed to give rise to the masses of all the elementary particles in the Standard Model. This includes the masses of the W and Z bosons, and the masses of the fermions – i.e. the quarks and leptons. After the neutral weak currents caused by boson exchange were discovered at CERN in 1973, the electroweak theory became widely accepted and Glashow, Salam, and Weinberg shared the 1979 Nobel Prize in Physics for discovering it. The W and Z bosons were discovered experimentally in 1981, and their masses were found to be as the Standard Model predicted. 
The theory of the strong interaction, to which many contributed, acquired its modern form around 1973–74, when experiments confirmed that the hadrons were composed of fractionally charged quarks. The establishment of quantum chromodynamics in the 1970s finalized a set of fundamental and exchange particles, which allowed for the establishment of a "standard model" based on the mathematics of gauge invariance, which successfully described all forces except for gravity, and which remains generally accepted within the domain to which it is designed to be applied. The 'standard model' groups the electroweak interaction theory and quantum chromodynamics into a structure denoted by the gauge group SU(3)×SU(2)×U(1). The formulation of the unification of the electromagnetic and weak interactions in the standard model is due to Abdus Salam, Steven Weinberg and, subsequently, Sheldon Glashow. After the discovery, made at CERN, of the existence of neutral weak currents, mediated by the boson foreseen in the standard model, the physicists Salam, Glashow and Weinberg received the 1979 Nobel Prize in Physics for their electroweak theory. Since then, discoveries of the bottom quark (1977), the top quark (1995), the tau neutrino (2000) and the Higgs boson (2012) have given credence to the Standard Model. 21st century Electromagnetic technologies There are a range of emerging energy technologies. By 2007, solid state micrometer-scale electric double-layer capacitors based on advanced superionic conductors had been developed for low-voltage electronics such as deep-sub-voltage nanoelectronics and related technologies (the 22 nm technological node of CMOS and beyond). Also, the nanowire battery, a lithium-ion battery, was invented by a team led by Dr. Yi Cui in 2007. Magnetic resonance Reflecting the fundamental importance and applicability of magnetic resonance imaging in medicine, Paul Lauterbur of the University of Illinois at Urbana–Champaign and Sir Peter Mansfield of the University of Nottingham were awarded the 2003 Nobel Prize in Physiology or Medicine for their "discoveries concerning magnetic resonance imaging". The Nobel citation acknowledged Lauterbur's insight of using magnetic field gradients to determine spatial localization, a discovery that allowed rapid acquisition of 2D images. Wireless electricity Wireless electricity is a form of wireless energy transfer, the ability to provide electrical energy to remote objects without wires. The term WiTricity was coined in 2005 by Dave Gerding and later used for a project led by Prof. Marin Soljačić in 2007. The MIT researchers successfully demonstrated the ability to power a 60 watt light bulb wirelessly, using two 5-turn copper coils of 60 cm (24 in) diameter that were 2 m (7 ft) apart, at roughly 45% efficiency. This technology can potentially be used in a large variety of applications, including consumer, industrial, medical and military. Its aim is to reduce the dependence on batteries. Further applications for this technology include transmission of information—it would not interfere with radio waves and thus could be used as a cheap and efficient communication device without requiring a license or a government permit. Unified theories A Grand Unified Theory (GUT) is a model in particle physics in which, at high energy, the electromagnetic force is merged with the other two gauge interactions of the Standard Model, the weak and strong nuclear forces. Many candidates have been proposed, but none is directly supported by experimental evidence.
GUTs are often seen as intermediate steps towards a "Theory of Everything" (TOE), a putative theory of theoretical physics that fully explains and links together all known physical phenomena, and, ideally, has predictive power for the outcome of any experiment that could be carried out in principle. No such theory has yet been accepted by the physics community. Open problems The magnetic monopole in the quantum theory of magnetic charge started with a paper by the physicist Paul A.M. Dirac in 1931. The detection of magnetic monopoles is an open problem in experimental physics. In some theoretical models, magnetic monopoles are unlikely to be observed, because they are too massive to be created in particle accelerators, and also too rare in the Universe to enter a particle detector with much probability. After more than twenty years of intensive research, the origin of high-temperature superconductivity is still not clear, but it seems that instead of electron-phonon attraction mechanisms, as in conventional superconductivity, one is dealing with genuine electronic mechanisms (e.g. by antiferromagnetic correlations), and instead of s-wave pairing, d-wave pairings are substantial. One goal of all this research is room-temperature superconductivity. See also Histories History of electromagnetic spectrum, History of electrical engineering, History of Maxwell's equations, History of radio, History of optics, History of physics General Coulomb's law, Biot–Savart law, Gauss's law, Ampère's circuital law, Gauss's law for magnetism, Faraday's law of induction, Ponderomotive force, Telluric currents, Terrestrial magnetism, ampere hours, Transverse waves, Longitudinal waves, Plane waves, Refractive index, torque, Revolutions per minute, Photosphere, Vortex, vortex rings, Theory permittivity, scalar product, vector product, tensor, divergent series, linear operator, unit vector, parallelepiped, osculating plane, standard candle Technology Solenoid, electro-magnets, Nicol prisms, rheostat, voltmeter, gutta-percha covered wire, Electrical conductor, ammeters, Gramme machine, binding posts, Induction motor, Lightning arresters, Technological and industrial history of the United States, Western Electric Company, Lists Outline of energy development Timelines Timeline of electromagnetism, Timeline of luminiferous aether References Citations and notes Attribution Bibliography Bakewell, F. C. (1853). Electric science; its history, phenomena, and applications. London: Ingram, Cooke. Benjamin, P. (1898). A history of electricity (The intellectual rise in electricity) from antiquity to the days of Benjamin Franklin. New York: J. Wiley & Sons. Durgin, W. A. (1912). Electricity, its history and development. Chicago: A.C. McClurg. Einstein, Albert: "Ether and the Theory of Relativity" (1920), republished in Sidelights on Relativity (Dover, New York, 1922). Einstein, Albert, The Investigation of the State of Aether in Magnetic Fields, 1895. (PDF format) . This annus mirabilis paper on the photoelectric effect was received by Annalen der Physik March 18. . This annus mirabilis paper on Brownian motion was received May 11. . This annus mirabilis paper on special relativity was received June 30. . This annus mirabilis paper on mass-energy equivalence was received September 27. The Encyclopedia Americana; a library of universal knowledge; "Electricity, its history and Progress". (1918). New York: Encyclopedia Americana Corp. Page 171 Gibson, C. R. (1907). 
Electricity of to-day, its work & mysteries described in non-technical language. London: Seeley and co., limited Heaviside, O. (1894). Electromagnetic theory. London: "The Electrician" Print. and Pub. Ireland commissioners of nat. educ., (1861). Electricity, galvanism, magnetism, electro-magnetism, heat, and the steam engine. Oxford University. Jeans, J. H. (1908). The mathematical theory of electricity and magnetism. Cambridge: University Press. Lord Kelvin (Sir William Thomson), "On Vortex Atoms". Proceedings of the Royal Society of Edinburgh, Vol. VI, 1867, pp. 197–206. (ed., Reprinted in Phil. Mag. Vol. XXXIV, 1867, pp. 15–24.) Kolbe, Bruno; Francis ed Legge, Joseph Skellon, tr., "An Introduction to Electricity". Kegan Paul, Trench, Trübner, 1908. Lodge, Oliver, "Ether", Encyclopædia Britannica, Thirteenth Edition (1926). Lodge, Oliver, "The Ether of Space". (paperback) (hardcover) Lodge, Oliver, "Ether and Reality". Lyons, T. A. (1901). A treatise on electromagnetic phenomena, and on the compass and its deviations aboard ship. Mathematical, theoretical, and practical. New York: J. Wiley & Sons. Maxwell, J. C., & Thompson, J. J. (1892). A treatise on electricity and magnetism. Clarendon Press series. Oxford: Clarendon. Priestley, J., & Mynde, J. (1775). The history and present state of electricity, with original experiments. London: Printed for C. Bathurst, and T. Lowndes; J. Rivington, and J. Johnson; S. Crowder [and 4 others in London]. Schaffner, Kenneth F. : Nineteenth-Century Aether Theories, Oxford: Pergamon Press, 1972. (contains several reprints of original papers of famous physicists) Slingo, M., Brooker, A., Urbanitzky, A., Perry, J., & Dibner, B. (1895). The cyclopædia of electrical engineering: containing a history of the discovery and application of electricity with its practice and achievements from the earliest period to the present time: the whole being a practical guide to artisans, engineers and students interested in the practice and development of electricity, electric lighting, motors, thermo-piles, the telegraph, the telephone, magnets and every other branch of electrical application. Philadelphia: The Gebbie Pub. Co., Limited. Steinmetz, C. P., "Transient Electric Phenomena". Page 38. (ed., contained in: General Electric Company. General Electric review. Schenectady: General Electric Co..) A New System of Alternating Current Motors and Transformers, by Nikola Tesla, 1888 Thompson, S. P. (1891). The electromagnet, and electromagnetic mechanism. London: E. & F.N. Spon. Whittaker, E. T., "A History of the Theories of Aether and Electricity, from the Age of Descartes to the Close of the 19th century". Dublin University Press series. London: Longmans, Green and Co.; Urbanitzky, A. v., & Wormell, R. (1886). Electricity in the service of man: a popular and practical treatise on the applications of electricity in modern life. London: Cassell &. External links Electrickery, BBC Radio 4 discussion with Simon Schaffer, Patricia Fara & Iwan Morus (In Our Time, Nov. 4, 2004) Magnetism, BBC Radio 4 discussion with Stephen Pumphrey, John Heilbron & Lisa Jardine (In Our Time, Sep. 29, 2005) Electricity Magnetism Electromagnetism Electromagnetism Electromagnetic theory
History of electromagnetic theory
Technology,Engineering
20,241
61,209,349
https://en.wikipedia.org/wiki/Multi-level%20converter
A multi-level converter (MLC), or multi-level inverter, is a method of generating high-voltage waveforms from lower-voltage components. The origins of the MLC go back more than a hundred years, to the 1880s, when the advantages of long-distance DC transmission became evident. Modular multi-level converters (MMC) were investigated by Tricoli et al. in 2017. Although their viability for electric vehicles (EVs) was established, suitable low-cost semiconductors to make this topology competitive were not yet available as of 2019.

Description

As early as 1999 it was described that such a system can not only operate motors but can also be charged without the need for an additional AC charger. A notable example is the work of the startup Pulsetrain, which is pioneering the use of MLCs for electric mobility by leveraging advances in semiconductor, control, and hardware technology, bringing MLCs closer to widespread adoption in vehicles.

MLCs offer several potential advantages:

Extended Battery Lifespan: MLCs can increase battery lifespan by up to 80%, primarily through pulsed charging and discharging, a process not feasible with conventional systems. By rapidly switching the current on and off, this approach minimizes lithium plating, a common cause of battery degradation. Additionally, the ability to switch individual cells on and off offers a further potential lifespan increase of up to 60% (a minimal sketch of such per-cell selection follows this list). Combined, these mechanisms are expected to provide even greater benefits, but current estimates conservatively focus on the 80% improvement to avoid setting overly high expectations. The exact extent of these savings will become clearer as ongoing research efforts worldwide continue to advance.

Reduced Battery Formation Time and Cost: MLCs streamline the battery formation process, a critical and resource-intensive step in manufacturing. This reduces costs while improving battery efficiency.

Enhanced Electric Motor Efficiency: The holistic integration of the inverter, charger, and battery management system into a single multi-level converter enables optimized electric motor performance. This integration ensures that the entire drivetrain operates more efficiently than in traditional systems. MLCs also achieve up to a 30% reduction in energy losses, particularly during partial-load operation, where vehicles are not running at full speed. This improvement is especially beneficial in everyday driving scenarios such as city traffic and stop-and-go conditions, which represent a significant portion of typical vehicle usage. By reducing losses and enhancing efficiency in these common driving environments, MLCs contribute to a more sustainable and cost-effective electric mobility solution.

System-Wide Optimization: The MLC concept encourages a system-level approach to electric vehicle design, in which batteries, power electronics, and motors are co-optimized. This comprehensive strategy has the potential to revolutionize electric drivetrains.
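To make the cell-selection idea above concrete, the following is a minimal, hypothetical Python sketch (not taken from any cited source) of how an MLC battery system could choose which submodule cells to switch in at each step: the stacked cell voltages approximate a target output level (nearest-level modulation), and cells are picked in order of state of charge so the pack stays balanced. All names, voltages, and thresholds are illustrative assumptions.

# Hypothetical sketch: pick which cells to stack so the output approximates
# a target voltage while preferring high-SoC cells (balancing the pack).
from dataclasses import dataclass
import math

@dataclass
class Cell:
    voltage: float   # terminal voltage of one submodule cell [V]
    soc: float       # state of charge, 0.0 .. 1.0

def select_cells(cells, v_target, discharging=True):
    """Return (chosen cells, stacked voltage): stack cells until adding one
    more would overshoot the target by more than half a cell voltage,
    taking the most-charged cells first when discharging."""
    order = sorted(cells, key=lambda c: c.soc, reverse=discharging)
    chosen, v_sum = [], 0.0
    for cell in order:
        if v_sum + cell.voltage / 2 > v_target:   # nearest-level rounding
            break
        chosen.append(cell)
        v_sum += cell.voltage
    return chosen, v_sum

pack = [Cell(voltage=3.7, soc=0.5 + 0.001 * i) for i in range(100)]
# Approximate one period of a 230 V RMS sine from ~3.7 V cells.
for step in range(8):
    v_ref = 230 * math.sqrt(2) * math.sin(2 * math.pi * step / 8)
    chosen, v_out = select_cells(pack, abs(v_ref))
    print(f"target {v_ref:7.1f} V -> {len(chosen):3d} cells, about {v_out:6.1f} V")

In a real converter this selection would run at much higher rates per phase leg, and the negative half-wave would be produced by reversing submodule polarity with full-bridge switches; the absolute value above stands in for that step.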
The MLC has two principal disadvantages:

Complex Control Requirements: The control of an MLC is inherently more complex than that of a traditional 2-level converter, but many of these challenges have been resolved in recent years. Managing and balancing the voltages, state of charge (SoC), state of health (SoH), and temperature of each submodule battery is now achievable with modern computing power, which has become both affordable and efficient. Operating each battery submodule at its optimal frequency is also no longer a prohibitive challenge. These advancements make MLC technology not only viable but also cost-effective when the substantial advantages over conventional solutions are taken into account.

Lack of DC Voltage Output: MLCs lack a direct DC voltage output (e.g., 400 V or 800 V) of the kind commonly required for ancillary systems such as heating or air conditioning in electric vehicles. This limitation necessitates additional hardware, slightly increasing system complexity and cost.

Given these factors, MLC technology is currently best suited to applications with single-motor systems, where the absence of a fixed DC output is less critical. Because the batteries are integrated into the inverter design, the voltage is no longer held constant (e.g., at 400 V) but varies dynamically over time, making this approach particularly effective for such applications.

Variable DC sources as an intermediate step

An emerging variation of the MLC is the concept of variable DC sources, which offers a gradual transition toward full MLC adoption. These systems function similarly to MLCs but with a key difference: instead of switching voltages in microseconds, they adjust more slowly, over the course of seconds (a minimal sketch of this slower reconfiguration appears below, after the application list). This slower switching allows much of the same hardware design to be reused with minimal modification. For example, the full-bridge configurations common in MLCs become optional, enabling the use of simpler half-bridge designs. This approach is used primarily as a battery management system (BMS), allowing variable DC sources to retain many of the advantages of MLCs, such as improved battery utilization and better energy distribution, while eliminating the need for a complete system redesign in vehicles. By simply swapping the battery system, OEMs can integrate the technology into existing vehicle platforms without major alterations, making it easier for manufacturers to gain experience and build trust in the technology. While this intermediate step sacrifices some of the advanced benefits of MLCs, it retains a significant portion of their advantages, such as cost efficiency and increased sustainability. It also demonstrates the price attractiveness of the technology, paving the way for broader adoption and an eventual transition to fully integrated multi-level systems.

High voltage DC converters

HVDC converters typically use series-connected switched capacitor blocks. The blocks are switched into or out of the circuit to form the desired waveform, typically three-phase AC.

Low voltage DC converters

Multi-level converters are adaptable to a wide range of applications, many of which are still in the research phase:

Mobility and Stationary Energy Storage: MLCs are expected to achieve their first major breakthroughs in electric vehicles and stationary storage systems, offering improved battery lifespan and system efficiency.
Hydrogen Generation: MLCs can manage the high currents and moderate voltages required for electrolysis through configurations such as galvanically isolated LLC resonant converters.
Aerospace: Systems like ELAPSED explore the use of MLCs for regulating rapidly changing magnetic fields in aviation.
Magnetic Stimulation: Academic projects use MLCs in medicine and neuroengineering for precise magnetic field control.
Space and Fusion: Future applications include regulating magnetic fields in space systems and nuclear fusion reactors.
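As a companion to the earlier MLC sketch, here is a minimal, hypothetical Python sketch of the variable-DC-source idea described above: rather than shaping an AC waveform microsecond by microsecond, a slow battery-management loop re-selects, every few seconds, which cells are connected in series so that the pack approximates a DC target while the most-charged cells carry the load and weaker cells rest. Function names, cell counts, and voltages are illustrative assumptions, not a real product API.

def reconfigure(cell_socs, cell_voltage, v_target):
    """Return indices of cells to keep in series: enough cells to reach
    v_target, taken from the highest-SoC cells so weaker cells can rest."""
    n_needed = max(1, round(v_target / cell_voltage))
    ranked = sorted(range(len(cell_socs)), key=lambda i: cell_socs[i], reverse=True)
    return ranked[:n_needed]

socs = [0.80, 0.78, 0.90, 0.60, 0.85, 0.75]   # per-cell state of charge
for cycle in range(3):                        # a few slow BMS cycles, ~seconds apart
    active = reconfigure(socs, cell_voltage=3.7, v_target=14.8)
    print(f"cycle {cycle}: cells in series -> {sorted(active)}")
    for i in active:                          # crude discharge model for the demo
        socs[i] -= 0.05

Because the switching is slow, half-bridge submodules suffice, and the same selection logic doubles as cell balancing, which is why the text above describes this mode primarily as a battery management system.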
Multi-level converter
Physics,Mathematics,Technology,Engineering
1,316
11,027,904
https://en.wikipedia.org/wiki/Epsilon%20calculus
In logic, Hilbert's epsilon calculus is an extension of a formal language by the epsilon operator, where the epsilon operator substitutes for quantifiers in that language as a method leading to a proof of consistency for the extended formal language. The epsilon operator and the epsilon substitution method are typically applied to a first-order predicate calculus, followed by a demonstration of consistency. The epsilon-extended calculus is then further extended and generalized to cover those mathematical objects, classes, and categories for which there is a desire to show consistency, building on consistency already shown at earlier levels.

Epsilon operator

Hilbert notation

For any formal language L, extend L by adding the epsilon operator to redefine quantification:

∃x A(x) ≡ A(ϵx A)
∀x A(x) ≡ A(ϵx (¬A))

The intended interpretation of ϵx A is some x that satisfies A, if it exists. In other words, ϵx A returns some term t such that A(t) is true; otherwise it returns some default or arbitrary term. If more than one term can satisfy A, any one of these terms (which make A true) can be chosen, non-deterministically. Equality is required to be defined under L, and the only rules required for L extended by the epsilon operator are modus ponens and the substitution of A(t) to replace A(x) for any term t.

Bourbaki notation

In tau-square notation from N. Bourbaki's Theory of Sets, the quantifiers are defined as follows:

∃x A ≡ (τ_x(A)|x)A
∀x A ≡ ¬∃x(¬A) ≡ ¬(τ_x(¬A)|x)(¬A)

where A is a relation in L, x is a variable, and τ_x(A) juxtaposes a τ at the front of A, replaces all instances of x with a box □, and links them back to the τ. Letting Y be an assembly, (Y|x)A denotes the replacement of all instances of the variable x in A with Y. This notation is equivalent to the Hilbert notation and is read the same. It is used by Bourbaki to define cardinal assignment, since they do not use the axiom of replacement. Defining quantifiers in this way leads to great inefficiencies. For instance, the expansion of Bourbaki's original definition of the number one, using this notation, has length approximately 4.5 × 10^12, and for a later edition of Bourbaki that combined this notation with the Kuratowski definition of ordered pairs, this number grows to approximately 2.4 × 10^54.

Modern approaches

Hilbert's program for mathematics was to justify those formal systems as consistent in relation to constructive or semi-constructive systems. While Gödel's incompleteness results rendered Hilbert's program moot to a great extent, modern researchers find that the epsilon calculus provides alternative approaches to proofs of systemic consistency, as described in the epsilon substitution method.

Epsilon substitution method

A theory to be checked for consistency is first embedded in an appropriate epsilon calculus. Second, a process is developed for re-writing quantified theorems so that they are expressed in terms of epsilon operations, via the epsilon substitution method. Finally, the re-writing process must be shown to normalize, so that the re-written theorems satisfy the axioms of the theory.
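As an illustration of the definitions above, here is a small worked example in LaTeX (not taken from the article; the arithmetic language and the witness 3 are assumptions made for illustration) showing how the epsilon operator eliminates an existential quantifier, together with Hilbert's characteristic axiom scheme.

% Eliminating an existential quantifier over the natural numbers:
\[
  \exists x\,(x > 2) \;\equiv\; \bigl(\epsilon x\,(x > 2)\bigr) > 2 .
\]
% Here \(\epsilon x\,(x > 2)\) denotes some natural number greater than 2
% if one exists (any witness, e.g. 3, will do); if no such number existed,
% the term would denote an arbitrary default and the right-hand side would
% simply be false.
%
% The characteristic axiom scheme of the epsilon calculus (its "critical
% formulas") is
\[
  A(t) \;\rightarrow\; A\bigl(\epsilon x\, A(x)\bigr)
\]
% for any term t: if any term satisfies A, then so does the epsilon term.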
Epsilon calculus
Mathematics
643