Dataset schema (column: type, value or length range):
id: int64, 39 to 79M
url: string, lengths 31 to 227
text: string, lengths 6 to 334k
source: string, lengths 1 to 150
categories: list, lengths 1 to 6
token_count: int64, 3 to 71.8k
subcategories: list, lengths 0 to 30
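A minimal loading sketch for rows with this schema, assuming a Hugging Face-style dataset (an assumption; the dataset identifier below is a placeholder, not a real name from the source):

```python
# Hypothetical example: load rows matching the schema above and filter them.
# "org/wiki-categorized" is a placeholder identifier, not the real dataset name.
from datasets import load_dataset

ds = load_dataset("org/wiki-categorized", split="train")

# Keep short articles labelled with the "Mathematics" category.
short_math = ds.filter(
    lambda row: "Mathematics" in row["categories"] and row["token_count"] < 1000
)

for row in short_math.select(range(min(3, len(short_math)))):
    print(row["id"], row["source"], row["url"])
```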
13,229,499
https://en.wikipedia.org/wiki/Bochner%20identity
In mathematics, specifically differential geometry, the Bochner identity is an identity concerning harmonic maps between Riemannian manifolds. The identity is named after the American mathematician Salomon Bochner. Statement of the result Let M and N be Riemannian manifolds and let u : M → N be a harmonic map. Let du denote the derivative (pushforward) of u, ∇ the gradient, Δ the Laplace–Beltrami operator, RiemN the Riemann curvature tensor on N and RicM the Ricci curvature tensor on M. Then the identity shown below holds. See also Bochner's formula References External links Differential geometry Mathematical identities
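A standard statement of the identity in the notation just introduced (a reconstruction of the usual textbook form, offered for reference rather than copied from the source):

$$\Delta\Big(\tfrac{1}{2}|\nabla u|^{2}\Big) \;=\; |\nabla(\mathrm{d}u)|^{2} \;+\; \big\langle \mathrm{d}u\cdot\mathrm{Ric}^{M},\, \mathrm{d}u \big\rangle \;-\; \big\langle \mathrm{Riem}^{N}(u)(\mathrm{d}u, \mathrm{d}u)\,\mathrm{d}u,\, \mathrm{d}u \big\rangle$$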
Bochner identity
[ "Mathematics" ]
130
[ "Mathematical theorems", "Mathematical identities", "Mathematical problems", "Algebra" ]
13,229,877
https://en.wikipedia.org/wiki/FED%202
The FED 2 was a 35 mm rangefinder camera introduced in 1955 by FED. The name FED comes from the initials of Felix Edmundovich Dzerzhinsky. Major features The FED 2 is a new design, quite different from the FED 1. It has a longer rangefinder base (67 mm), a combined viewfinder and rangefinder window, an adjustable diopter for the viewing window, a self-timer, and a detachable back for film loading. Variations There are six models. FED 2a was introduced in 1955. FED 2b has flash sync added. The new shutter speed dial has the reference point on a center post that rotates with the dial as the shutter is fired. FED 2c has the flash sync port moved from the body to the top deck. It also has a mushroom-shaped film advance knob. FED 2d has a new set of shutter speeds, from 1/30 to 1/500, instead of the older set of 1/25 to 1/500. FED 2L is the only factory-designated model number; all other models are stamped FED 2 by the factory. The body is identical to the FED 2d body, but the lens supplied is an Industar-61 with lanthanum glass instead of the Industar-26 used in models 2b to 2d. FED 2e is a FED 3b with the FED 2d shutter, which lacks the slow speeds. Like the 3b, it has no strap lugs but has a film-advance lever. Production ended around 1970. Operation To load a film, two locks in the base of the camera need to be turned. The entire back and bottom can then be removed as a single unit, allowing easy access to the film chamber. Standard 35 mm film cassettes are used, with film being wound onto a removable take-up spool (the latter often becomes difficult to remove on older cameras). Winding the film cocks the shutter and advances the frame counter simultaneously. The FED 2 has a manual frame counter located below the wind-on knob, which must be reset by hand when loading film. Shutter The FED 2 has a curtain shutter with speeds of B and 1/25 to 1/500 s. After detaching the back, two screws under the camera allow adjustment of the spring tension to correct shutter speeds that may have become slow over time. As with similar cameras, it is important to cock the shutter before operating the shutter speed dial; failing to do so may harm the mechanism. When firing, this dial rotates. After re-cocking, the set speed is indicated correctly again. Lens The FED 2 takes 39 mm screw-mount lenses. A typical lens is the 50 mm Jupiter-8; many FEDs come with Industar lenses. On the Jupiter-8, the aperture is set on the front and focusing is done with a focusing ring. The rangefinder is coupled to the lens. The field of view in the viewfinder is that of a 5 cm lens; for other focal lengths, a separate turret viewfinder was placed on the accessory shoe. External links Matt's Cameras: FED 2 Lionel's FED 2 overview at 35mm-compact.com Fed 2 at www.collection-appareils.com by Sylvain Halgand FED section at Retrography.com by Simon Simonsen, Denmark Rangefinder cameras
FED 2
[ "Technology" ]
696
[ "Rangefinder cameras", "System cameras" ]
13,230,101
https://en.wikipedia.org/wiki/Iloca
The Iloca was a 35mm rangefinder camera produced from 1952 to 1959 by Wilhelm Witt of Hamburg. Models designated "Rapid" had a rapid winding lever. The Iloca Electric was the first 35mm camera with an integrated electric motor wind. It was very expensive and sold poorly in Europe, but was much more successful in the USA, where it was sold as the Graphic 35 Electric. The company was acquired by Agfa in 1960, and the Iloca Electric was re-introduced as the Agfa Selecta m, with a fixed f/2.8 Solinar lens in place of the interchangeable bayonet mount. Iloca cameras Iloca IIa Iloca Stereo II - 1951 Iloca Rapid (A) - 1952 Iloca Rapid B / Sears Tower 51 - 1954 Iloca Rapid I - 1956 Iloca Rapid IL / MPP Iloca - 1956 Iloca Rapid IIL / Sears Tower 52 / Argus V-100 - 1956 Iloca Rapid III - 1959 Iloca Automatic Iloca Electric / Graphic 35 Electric - 1959 External links Iloca Rapid at Tigin's Classic Cameras Repair notes of the Iloca Rapid (A), Iloca Rapid B and Iloca IIa at Daniel Mitchell's camera site Rangefinder cameras
Iloca
[ "Technology" ]
252
[ "Rangefinder cameras", "System cameras" ]
13,230,505
https://en.wikipedia.org/wiki/List%20of%20countries%20by%20number%20of%20Internet%20users
Below is a sortable list of countries by number of Internet users as of 2024. Internet users are defined as persons who accessed the Internet in the last 12 months from any device, including mobile phones. Percentage is the percentage of a country's population that are Internet users. Estimates are derived either from household surveys or from Internet subscription data. All United Nations member states are included, except North Korea, whose number of internet users is estimated at a few thousand. Data from Statista and Internet World Stats estimate that the total number of internet users at the end of 2023 was around 5.3 billion. Table The table contains the following: the percentage of the population that uses the internet (data from the World Bank); the percentage of the population that uses the internet (data from the International Telecommunication Union); and the estimated number of internet users (data from the CIA). See also Global digital divide National broadband plan Loon LLC, a Google research and development project to provide Internet access to rural and remote areas Starlink - globally available satellite internet List of social networking services List of sovereign states by Internet connection speeds List of sovereign states by number of broadband Internet subscriptions List of countries by number of telephone lines in use List of countries by smartphone penetration List of mobile network operators List of multiple-system operators List of telecommunications companies Notes References External links Number Of Internet Users Worldwide (Live-Counter) International telecommunications Internet users Internet-related lists
List of countries by number of Internet users
[ "Technology" ]
284
[ "Computing-related lists", "Internet-related lists" ]
13,230,920
https://en.wikipedia.org/wiki/Furstenberg%27s%20proof%20of%20the%20infinitude%20of%20primes
In mathematics, particularly in number theory, Hillel Furstenberg's proof of the infinitude of primes is a topological proof that the integers contain infinitely many prime numbers. When examined closely, the proof is less a statement about topology than a statement about certain properties of arithmetic sequences. Unlike Euclid's classical proof, Furstenberg's proof is a proof by contradiction. The proof was published in 1955 in the American Mathematical Monthly while Furstenberg was still an undergraduate student at Yeshiva University. Furstenberg's proof Define a topology on the integers ℤ, called the evenly spaced integer topology, by declaring a subset U ⊆ ℤ to be an open set if and only if it is a union of arithmetic sequences S(a, b) for a ≠ 0, or is empty (which can be seen as a nullary union (empty union) of arithmetic sequences), where S(a, b) = {an + b : n ∈ ℤ} = aℤ + b. Equivalently, U is open if and only if for every x in U there is some non-zero integer a such that S(a, x) ⊆ U. The axioms for a topology are easily verified: ∅ is open by definition, and ℤ is just the sequence S(1, 0), and so is open as well. Any union of open sets is open: for any collection of open sets Ui and x in their union U, any of the numbers ai for which S(ai, x) ⊆ Ui also shows that S(ai, x) ⊆ U. The intersection of two (and hence finitely many) open sets is open: let U1 and U2 be open sets and let x ∈ U1 ∩ U2 (with numbers a1 and a2 establishing membership). Set a to be the least common multiple of a1 and a2. Then S(a, x) ⊆ S(ai, x) ⊆ Ui. This topology has two notable properties: Since any non-empty open set contains an infinite sequence, a finite non-empty set cannot be open; put another way, the complement of a finite non-empty set cannot be a closed set. The basis sets S(a, b) are both open and closed: they are open by definition, and we can write S(a, b) as the complement of an open set as follows: S(a, b) = ℤ ∖ ⋃_{j=1}^{a−1} S(a, b + j). The only integers that are not integer multiples of prime numbers are −1 and +1, i.e. ℤ ∖ {−1, +1} = ⋃_{p prime} S(p, 0). Now, by the first topological property, the set on the left-hand side cannot be closed. On the other hand, by the second topological property, the sets S(p, 0) are closed. So, if there were only finitely many prime numbers, then the set on the right-hand side would be a finite union of closed sets, and hence closed. This would be a contradiction, so there must be infinitely many prime numbers. Topological properties The evenly spaced integer topology on ℤ is the topology induced by the inclusion ℤ ⊆ Ẑ, where Ẑ is the profinite integer ring with its profinite topology. It is homeomorphic to the rational numbers with the subspace topology inherited from the real line, which makes it clear that any finite subset of it, such as {−1, +1}, cannot be open. Notes References Keith Conrad https://kconrad.math.uconn.edu/blurbs/ugradnumthy/primestopology.pdf External links Furstenberg's proof that there are infinitely many prime numbers at Everything2 Article proofs General topology Prime numbers
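A small numeric illustration of the covering identity that drives the contradiction, offered as a sketch (the finite window is an arbitrary choice; this is a demonstration, not part of the published proof):

```python
# Illustrative check: on a finite window of integers, verify that
# Z \ {-1, +1} equals the union of the sequences S(p, 0) = pZ over primes p.

def S(a, b, window):
    """Finite slice of the arithmetic sequence {a*n + b : n in Z}."""
    return {a * n + b for n in range(-window, window + 1)}

def primes_up_to(n):
    """Simple sieve of Eratosthenes."""
    sieve = [True] * (n + 1)
    sieve[0:2] = [False, False]
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = [False] * len(sieve[p * p :: p])
    return [p for p, is_prime in enumerate(sieve) if is_prime]

WINDOW = 50
integers = set(range(-WINDOW, WINDOW + 1))
union_of_pZ = set()
for p in primes_up_to(WINDOW):
    union_of_pZ |= S(p, 0, WINDOW) & integers

# Every integer other than -1 and +1 (0 included) lies in some S(p, 0).
assert union_of_pZ == integers - {-1, 1}
print("Checked on [-50, 50]: Z \\ {-1, +1} is covered by the sets S(p, 0).")
```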
Furstenberg's proof of the infinitude of primes
[ "Mathematics" ]
701
[ "General topology", "Prime numbers", "Mathematical objects", "Article proofs", "Topology", "Numbers", "Number theory" ]
13,231,083
https://en.wikipedia.org/wiki/Dave%20Barr%20%28motorcyclist%29
Dave Barr (April 12, 1952 - November 8, 2024) was an American veteran of the Vietnam War, born in Los Angeles, California, and a motorcyclist best known for being the first double amputee to circumnavigate the globe. He lived in Bodfish, Kern County, California, where he ran a motorcycle tour company. He was also the author of several books and was inducted into the Motorcycle Hall of Fame in 2000. Military service Barr joined the US Marine Corps when he was 17 and served in Vietnam on a helicopter gunship. After his discharge from the Marines in 1972 he lived in various locations around the world and served in the armed forces of several countries, including two years with the Israeli Parachute Regiment and one year with the Rhodesian Light Infantry. He was in the middle of two years of service as an enlisted army paratrooper with the South African Defence Force when he was injured in a land-mine explosion: in 1981, while he was riding in a military vehicle in southern Angola, the vehicle drove over a land mine, and the resulting explosion cost him both of his legs, above the knee on the right and below the knee on the left. Prosthetic legs allowed him to complete his tour of duty. Motorcyclist In 1994 Barr set a world record for riding a Harley-Davidson 83,000 miles around the world, including a 13,000 mile Atlantic-to-Pacific segment across Northern Europe and Siberia. The bike that Barr used for the journey is currently on display at the AMA Motorcycle Hall of Fame in Pickerington, Ohio. Barr also set a second world record for riding the so-called Southern Cross: in just 45 days during 1996, he completed the first motorcycle journey ever between the four extreme geographical corners of the Australian continent. Barr was inducted into the Motorcycle Hall of Fame in 2000 not only for his world-record-setting exploits, but also for the charity work he did for the disabled along the way. Writing Barr wrote a travelogue, Riding the Edge, in 1995. References External links davebarr1972.com davebarr1972.com/patriot-express 1952 births American amputees American expatriates in South Africa American volunteers in the Rhodesian Bush War United States Marine Corps personnel of the Vietnam War Military personnel from Los Angeles South African military personnel of the Border War Long-distance motorcycle riders 20th-century Israeli military personnel South African Army personnel World record holders Explosion survivors Landmine victims
Dave Barr (motorcyclist)
[ "Chemistry" ]
503
[ "Explosion survivors", "Explosions" ]
13,231,140
https://en.wikipedia.org/wiki/%C5%8Ckunoshima
Ōkunoshima is a small island in the Inland Sea of Japan. It is considered to be part of the city of Takehara, Hiroshima Prefecture. It is accessible by ferry from Tadanoumi and Ōmishima. There are campsites, walking trails and places of historical interest on the island. It is often called Usagi Jima ("Rabbit Island") because of the large population of free-ranging domestic rabbits that roam the island. The rabbits are rather tame and will approach humans. Ōkunoshima played a key role during World War II as a poison gas factory for much of the chemical warfare that was carried out in China. History The island was a cultivated area until the Russo-Japanese War, when ten forts were built to protect it. Three fishing families lived on the island. In 1925, the Imperial Japanese Army Institute of Science and Technology initiated a secret program to develop chemical weapons, based on extensive research that showed that chemical weapons were being produced throughout the United States and Europe. A chemical munitions plant was built on the island between 1927 and 1929 and was home to a chemical weapons facility that would go on to produce over six kilotons of mustard gas and tear gas. Japan was a signatory of the 1925 Geneva Protocol, which banned the use of chemical warfare but not the development and storage of chemical weapons. Nevertheless, Japan went to great lengths to keep the chemical munitions plant a secret, even going so far as to remove records of the island from some maps. The island was chosen for its isolation, security, and distance from Tokyo and other areas in case of disaster. Under the jurisdiction of the Japanese military, the local fish preservation processor was converted into a toxic gas reactor. Residents and potential employees were not told what the plant was manufacturing, and everything was kept secret. Working conditions were harsh and many suffered from toxic-exposure related illnesses due to inadequate safety equipment. When World War II ended, documents concerning the plant were burned and Allied Occupation Forces disposed of the gas either by dumping, burning, or burying it. People were told to be silent about the project, and several decades would pass before victims from the plant were given government aid for treatment. In 1988, the Ōkunoshima Poison Gas Museum was opened. Present day The island is presently inhabited by many feral rabbits. Many of them are descended from rabbits intentionally let loose when the island was developed as a park after World War II. During the war, rabbits were used in the chemical munitions plant for testing the effectiveness of the chemical weapons, but those rabbits were euthanized when the factory was demolished and are not related to the rabbits currently on the island. Hunting the rabbits is forbidden, and dogs and cats are not allowed on the island. In 2015, the BBC presented a short television series called Pets – Wild at Heart about the behaviours of pets, which featured the rabbits on the island. The series also showed tourists coming to feed the rabbits. The ruins of the old forts and the gas factory still exist all over the island, but entry is prohibited as it is too dangerous. Since it is part of the Inland Sea National Park system of Japan, there is a resource center and a museum. Poison Gas Museum The Poison Gas Museum was opened in 1988 and "was established in order to alert as many people as possible to the dreadful truths about poison gas." 
As expressed by its curator, Murakami Hatsuichi, to The New York Times, "My hope is that people will see the museum in Hiroshima City and also this one, so they will learn that we [Japanese] were both victims and aggressors in the war. I hope people will realize both facets and recognize the importance of peace." The small museum is only two rooms large and provides a basic overview of the construction of the chemical plant, working conditions, and the effects of poison gas on humans. Families of workers who suffered the aftereffects of the harsh working conditions donated numerous artifacts to help tell the story of the workers' plight. The second room shows how poison gas affects the human body through the lungs, eyes, skin, and heart. Images of victims from Iraq and Iran add to the message of the museum. The museum also offers guides to the numerous remains of the forts from the Russo-Japanese War and the poison gas factory. Most of the buildings are run-down and condemned, but still recognizable. The museum is aimed primarily at Japanese tourists, but English translations are provided on the overall summary for each section. Other buildings and structures The island is connected to Takehara on the mainland by the Chūshi Powerline Crossing, the tallest powerline in Japan. Travel Access to Ōkunoshima from mainland Japan is via the Sanyō Shinkansen to Mihara Station (only the Kodama stops there). At Mihara, travelers catch the Kure Line local train to Tadanoumi Station, and from there walk to the ferry terminal and catch a ferry. Habu Shosen now also runs direct ferries from Mihara Port to Ōkunoshima on weekends. See also Tashirojima, Japan, also known as Cat Island due to a high population of cats Aoshima, Ehime, cat island References External links Up-to-date information on getting to Rabbit Island Rabbit Island Kyukamura Ohkunoshima Paper from Dr. Yukutake on poison gas usage and treatment Documentary film about Ōkunoshima and Japan's poison gas history Imperial Japanese Army Chemical warfare facilities Islands of Hiroshima Prefecture Islands of the Seto Inland Sea Takehara, Hiroshima
Ōkunoshima
[ "Chemistry" ]
1,093
[ "Chemical warfare facilities" ]
13,231,523
https://en.wikipedia.org/wiki/Nishkama%20Karma
Nishkama Karma (Sanskrit IAST: Niṣkāmakarma), selfless or desireless action, is action performed without any expectation of fruits or results, and is the central tenet of the Karma Yoga path to liberation. Its modern advocates stress achieving success by following the principles of Yoga and stepping beyond personal goals and agendas, pursuing action for the greater good; this has become well known as the central message of the Bhagavad Gita. In Indian philosophy, action or Karma has been divided into three categories according to its intrinsic qualities or gunas. Nishkama Karma belongs to the first category, the sattva (pure), actions which add to calmness; Sakama Karma (self-centred action) comes under the second, rājasika (aggression); and Vikarma (worst action) comes under the third, tāmasika, which correlates to darkness or inertia. Nishkama Karma in the workplace The opposite of Sakama Karma (action with desire), Nishkama Karma has been variously explained as 'duty for duty's sake' and as 'detached involvement', which is neither a negative attitude nor indifference; it has today found many advocates in the modern business arena, where the emphasis has shifted to ethical business practices adhering to intrinsic human values and reducing stress at the workplace. Another aspect that differentiates it from Sakama, or selfish action, is that the former is guided by inspiration while the latter is driven by motivation, and this makes the central difference in their results: for example, Sakama Karma might lead to excessive work pressure and workaholism, as it aims at success, and hence creates more chances of physical and psychological burnout. Nishkama Karma, by contrast, means a more balanced approach to work, in which work is turned into a pursuit of personal excellence; this results in greater personal satisfaction, which one would otherwise have sought in job satisfaction coming from external rewards. One important consequence of this shift is that where the one is essentially an ethical practice from the inside out, letting the adage 'Work is worship' show itself literally at the workplace and leading to greater work commitment, the other, being so result-oriented, can lead to unethical business and professional practices, as is seen so often in the modern workplace. The central tenet of practising Nishkama Karma is mindfulness in the present moment. Over time, this practice leads to equanimity of mind, as it allows the practitioner to stay detached from results, and hence from the ups and downs of business that are inevitable in any business arena, while maintaining constant work commitment, since work has now been turned into a personal act of worship. Further, in the long run it leads not only to cleansing of the heart but also to spiritual growth and holistic development. Nishkama Karma in the Bhagavad Gita Nishkama Karma has an important role in the Bhagavad Gita, the central text of the Mahabharata, where Krishna advocates 'Nishkama Karma Yoga' (the Yoga of Selfless Action) as the ideal path to realize the Truth. Allocated work done without expectations, motives, or thinking about its outcomes tends to purify one's mind and gradually makes an individual fit to see the value of reason and the benefits of renouncing the work itself. 
These concepts are described in the following verses: See also Karma Karma Yoga Puruṣārtha References External links Ramana Maharishi talks of Nishkama Karma Bhagavad Gita on Nishkama Karma Yoga Applied ethics Corporate social responsibility Hindu philosophical concepts Karma in Hinduism
Nishkama Karma
[ "Biology" ]
756
[ "Behavior", "Human behavior", "Applied ethics" ]
13,232,165
https://en.wikipedia.org/wiki/High-redundancy%20actuation
High-redundancy actuation (HRA) is a new approach to fault-tolerant control in the area of mechanical actuation. Overview The basic idea is to use many small actuation elements, so that a fault of one element has only a minor effect on the overall system. This way, a high-redundancy actuator can remain functional even after several elements are at fault. This property is also called graceful degradation. Fault-tolerant operation in the presence of actuator faults requires some form of redundancy. Actuators are essential, because they are used to keep the system stable and to bring it into the desired state. Both require a certain amount of power or force to be applied to the system. No control approach can work unless the actuators produce this necessary force. So the common solution is to err on the side of safety by over-actuation: much more control action than strictly necessary is built into the system. For critical systems, the normal approach involves straightforward replication of the actuators. Often three or four actuators are used in parallel for aircraft flight control systems, even if one would be sufficient from a control point of view. So if one actuator fails, the remaining actuators can still keep the system operating. While this approach is certainly successful, it also makes the system expensive, heavy and inefficient. Inspiration of high-redundancy actuation The idea of high-redundancy actuation (HRA) is inspired by the human musculature. A muscle is composed of many individual muscle cells, each of which provides only a minute contribution to the force and the travel of the muscle. These properties allow the muscle as a whole to be highly resilient to damage of individual cells. Technical realisation The aim of high-redundancy actuation is not to produce man-made muscles, but to use the same principle of cooperation in technical actuators to provide intrinsic fault tolerance. To achieve this, a high number of small actuator elements are assembled in parallel and in series to form one actuator (see Series and parallel circuits). Faults within the actuator will affect the maximum capability, but through robust control, full performance can be maintained without either adaptation or reconfiguration. Some form of condition monitoring is necessary to provide warnings to the operator calling for maintenance. But this monitoring has no influence on the system itself, unlike in adaptive methods or control reconfiguration, which simplifies the design of the system significantly. The HRA is an important new approach within the overall area of fault-tolerant control, using concepts of reliability engineering on a mechanical level. When applicable, it can provide actuators that degrade gracefully and that continue to operate at close to nominal performance even in the presence of multiple faults in the actuator elements. Using actuation elements in series An important feature of high-redundancy actuation is that the actuator elements are connected both in parallel and in series. While the parallel arrangement is commonly used, the configuration in series is rarely employed, because it is perceived to be less efficient. However, there is one fault that is difficult to deal with in a parallel arrangement: the locking up of one actuator element. Because parallel actuator elements always have the same extension, one locked-up element can render the whole assembly useless. 
It is possible to mitigate this by guarding the elements against locking or by limiting the force exerted by a single element, but these measures both reduce the effectiveness of the system and introduce new points of failure. The analysis of the serial configuration shows that it remains operational when one element is locked up. This fact is important for the high-redundancy actuator, as fault tolerance is required for different fault types. The goal of the HRA project is to use parallel and serial actuator elements to accommodate both the blocking and the inactivity (loss of force) of an element; a toy model of this series/parallel behaviour is sketched after the reading list below. Available technology The basic idea of high-redundancy actuation is technology agnostic: it should be applicable to a wide range of actuator technologies, including different kinds of linear actuators and rotational actuators. However, initial experiments were performed with electric actuators, especially with electromechanical and electromagnetic technology. Compared to pneumatic actuators, the electrical drive allows much finer control of position and force. Further reading M. Blanke, M. Kinnaert, J. Lunze, M. Staroswiecki, J. Schröder: "Diagnosis and Fault-Tolerant Control", Springer, New York, 2006. S. Chen, G. Tao, and S. M. Joshi: "On matching conditions for adaptive state tracking control of systems with actuator failures", in IEEE Transactions on Automatic Control, vol. 47, no. 3, pp. 473–478, 2002. X. Du, R. Dixon, R.M. Goodall, and A.C. Zolotas: "LQG Control for a Highly Redundant Actuator", in Preprint of the IFAC Conference for Advanced Intelligent Mechatronics (AIM), Zurich, 2007. X. Du, R. Dixon, R.M. Goodall, and A.C. Zolotas: "Assessment Of Strategies For Control Of High Redundancy Actuators", ACTUATOR 2006, Germany. X. Du, R. Dixon, R.M. Goodall, and A.C. Zolotas: "Modelling And Control Of A Highly Redundant Actuator", CONTROL 2006, Scotland, 2006. T. Steffen, J. Davies, R. Dixon, R.M. Goodall and A.C. Zolotas: "Using a Series of Moving Coils as a High Redundancy Actuator", in Preprint of the IFAC Conference for Advanced Intelligent Mechatronics (AIM), Zurich, 2007. Arun Manohar Gollapudi, V. Velagapudi, S. Korla: "Modeling and simulation of a high-redundancy direct-driven linear electromechanical actuator for fault-tolerance under various fault conditions", Engineering Science and Technology, an International Journal, Volume 23, Issue 5, October 2020, Pages 1171-1181. External links home page of the initial project Control engineering
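The toy model promised above. The fault semantics and numbers are illustrative assumptions (a "loose" element produces no force, a "locked" element contributes no travel), not values from the HRA project:

```python
# Toy model of a high-redundancy actuator built from small elements.
# - Elements in SERIES add their travel: a locked element loses its own
#   travel but does not block the chain.
# - Series chains in PARALLEL add their force: a chain with a "loose"
#   (force-loss) fault stops contributing force, but the others still act.

NOMINAL_TRAVEL = 1.0  # travel per element (arbitrary units)
NOMINAL_FORCE = 1.0   # force per series chain (arbitrary units)

def chain_travel(states):
    """Travel of one series chain; 'locked' elements contribute nothing."""
    return sum(NOMINAL_TRAVEL for s in states if s != "locked")

def actuator_capability(chains):
    """(total force, usable travel) of parallel chains of elements."""
    working = [c for c in chains if "loose" not in c]
    force = NOMINAL_FORCE * len(working)
    # Parallel chains share the same extension, so usable travel is the
    # minimum travel any force-producing chain can still deliver.
    travel = min((chain_travel(c) for c in working), default=0.0)
    return force, travel

if __name__ == "__main__":
    # Four parallel chains of four elements each, with two element faults.
    chains = [
        ["ok", "ok", "ok", "ok"],
        ["ok", "locked", "ok", "ok"],  # graceful degradation: less travel
        ["ok", "ok", "loose", "ok"],   # this chain stops producing force
        ["ok", "ok", "ok", "ok"],
    ]
    print(actuator_capability(chains))  # -> (3.0, 3.0) instead of (4.0, 4.0)
```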
High-redundancy actuation
[ "Engineering" ]
1,330
[ "Control engineering" ]
13,232,257
https://en.wikipedia.org/wiki/Robot%20II
The Robot II was a mechanical 135 film camera made by Robot, introduced in 1938. It was a slightly larger camera than the Robot I, with some significant improvements, but still using the basic mechanism. Among the standard lenses were a 3 cm Zeiss Tessar in 1:2.8 and 1:3.5 variations, a 1:2.0/40 mm Zeiss Biotar and a 1:4/7.5 cm Zeiss Sonnar. The film cassette system was redesigned, but it was only with the Robot IIa, launched in 1951, that the camera could accept a standard 35 mm cassette. The special Robot type-N cassettes continued in their role for take-up. A small bakelite box was sold to allow people to rewind colour film into the original cassettes, as demanded by the film processing companies. The camera was synchronized for flash. The swinging viewfinder was retained but was now operated by a lever rather than by moving the entire housing. Both the deep purple filter and the yellow filter were eliminated in the redesign. Some versions were available with a double wind motor which could expose 50 frames. The Second World War stopped civilian production of the Robot, but it was used as a gun camera by the Luftwaffe. The Robot II took square photographs in the 24×24 mm format, which allowed 50 exposures to fit on a 36-exposure film. It featured an all-metal rotary shutter (1/4 to 1/500), a direct vision finder, and a screw mount for interchangeable lenses by Schneider. The standard lens is a Schneider Xenon 40 mm f/1.9. Also available was the Robot Star II/50, with a double spring motor for 50 exposures and a Xenar 38 mm f/2.8 lens. External links Robot IIa Cameras
Robot II
[ "Technology" ]
365
[ "Recording devices", "Cameras" ]
13,232,598
https://en.wikipedia.org/wiki/Wireless%20intercom
A wireless intercom is a telecommunications device that enables voice communication without the need to run copper wires between intercom stations. A wired intercom system may incorporate wireless elements. There are many types of wireless intercom systems on the market. Most wireless intercom systems communicate by radio waves using one of the frequencies allotted by various government agencies. Some wireless intercom systems communicate using the 802.11 standard. There are also systems that advertise themselves as wireless, but communicate over existing building AC electrical wiring. Basic terms Station - A wireless intercom unit. Outdoor Intercom - An intercom that can be placed at a building's doors; it operates like a doorbell, but people inside can talk to the visitor. Channels - Some wireless intercom systems have more than one channel, so private conversations can occur between groups of intercoms. Range - The maximum range over which an intercom will communicate under ideal conditions; ideal conditions mean no obstructions between units. Monitor - Usually this means the ability to listen to what is happening at a wireless intercom unit. Conference - The ability to talk to multiple intercom units at once. Paging - The paging function enables broadcasting to all the stations in the location. Wired vs. Wireless One reason to use a wireless intercom system is that the cost of retrofitting a building for a wired intercom system is high. Another reason is the increased portability of a wireless system. With battery-powered radio frequency wireless intercom units, a person can carry a station as they walk around. One of the challenges of a wireless system is the possibility of interference. Radio frequency wireless systems may get interference from other wireless devices. Some wireless intercom designs reduce this interference by using "digital spread spectrum". Encrypted wireless Wired intercom is inherently private, so long as the wiring system isn't tapped by outside parties. Wireless intercom is not inherently private; conversations on a wireless intercom are broadcast using publicly available wireless frequencies, which means other users with similar devices could listen in if they are within range. Most units on the market will allow intercom conversations to be overheard through other devices such as scanners, baby monitors, cordless telephones, or the same brand of wireless intercom. Wireless intercom privacy can be provided if the audio stream is encrypted. Telex, HME, Altair and other intercom manufacturers offer encrypted wireless intercom for corporate, military and sports team customers desiring instant voice communications with privacy. The first use of encrypted wireless intercom in American football was in 1996; by 1999 it was being used in the Super Bowl. Audio frequency response of current products is limited to less than 4 kHz; this means that natural vocal sibilances above 4 kHz are absent. "S" sounds like "F", requiring additional spoken clarification such as saying "'S' as in 'Sam'". U.S. & Canada Wireless Frequencies The United States and Canada have several frequency ranges for wireless intercom systems and other wireless products. They are 49 MHz, FM band (160–270 kHz), 900 MHz, 2.4 GHz, 5.8 GHz, and MURS (150 MHz). The frequency that will work best for an application depends on the wireless devices already in use, not only in the building itself but also in surrounding buildings. 
For instance, if a residence is using wireless networking that operates in the 2.4 GHz range, a wireless intercom that operates in this range may interfere with the network, and vice versa. Ideally, the best intercom for an application is one that uses a frequency not already in use in the surrounding area, or one that uses digital spread spectrum to reduce the possibility of interference. Systems that use existing electrical cabling The first intercom systems communicated over a set of low-voltage signal wires installed in the walls of a building. The installation was typically done during the building's construction, but buildings could be retrofitted with communication wires, at a cost. Non-radio "wireless" intercom designs were developed that use a building's existing electrical wiring to carry communication signals. Such systems work similarly to normal wired intercom designs, with intercom stations using wires to connect to electrical outlets in rooms. This method of intercom connection is most useful in offices and homes served by a single electrical service. Products are available from a variety of manufacturers, including Westinghouse and GE. References Radio communications
Wireless intercom
[ "Engineering" ]
909
[ "Telecommunications engineering", "Radio communications" ]
13,232,642
https://en.wikipedia.org/wiki/Night%20Sky%20%28magazine%29
Night Sky is a discontinued American bimonthly magazine for entry-level stargazers. It was published between May/June 2004 and March/April 2007 by Sky Publishing, which also produces Sky & Telescope (S&T). Night Sky was intended to be less technical than S&T. The target audience was recreational naked-eye and low-power instrument observers. The magazine was discontinued because of low sales, and subscriptions were converted to an equal number of issues of S&T. References Amateur astronomy Bimonthly magazines published in the United States Astronomy magazines Defunct magazines published in the United States Magazines established in 2004 Magazines disestablished in 2007 Science and technology magazines published in the United States
Night Sky (magazine)
[ "Astronomy" ]
143
[ "Astronomy magazines", "Works about astronomy", "Astronomy stubs" ]
13,232,655
https://en.wikipedia.org/wiki/Contaflex%20SLR
The Contaflex series is a family of 35mm single-lens reflex (SLR) cameras equipped with a leaf shutter, produced by Zeiss Ikon in the 1950s and 1960s. The name was first used by Zeiss Ikon in 1935 for a 35mm twin-lens reflex camera, the Contaflex TLR; for the earlier TLR, the -flex suffix referred to the integral reflex mirror for the viewfinder. The first SLR models, the Contaflex I and II (introduced in 1953), have fixed lenses, while the later models have interchangeable lenses; eventually the Contaflexes became a camera system with a wide variety of accessories. History The Mecaflex was presented at photokina 1951 and launched two years later as one of the first SLRs, fitted with a leaf shutter behind the removable lens and a waist-level viewfinder with a reflex mirror that swings out of the way during the film exposure. Compared to twin-lens reflex cameras, the SLR offered several advantages: the photographer could view the scene through exactly the same lens that would be used to expose the film, and only a single lens was required, reducing costs. The later Hasselblad 500C, introduced in 1957, is a similar SLR design that uses leaf shutters; for the Hasselblad, each of its interchangeable lenses has a shutter. The first Contaflex SLRs were introduced in 1953, following the general design of the Mecaflex in using a Compur leaf shutter and reflex mirror, but the Contaflex cameras were equipped with an integral eye-level finder and a fixed lens. The advantages of using the leaf shutter are low manufacturing costs, compactness, quieter operation, and flash synchronization at all shutter speeds. However, using a leaf shutter in an SLR requires additional mechanical complications to cock the shutter and return the mirror after the shutter is released and the film is wound; these were seen more as a challenge than a drawback at Zeiss Ikon, but no Contaflex model ever got a rapid-return mirror. Only a very limited range of interchangeable lenses ever became available. For the models I and II, having a fixed lens, only three add-on converters were offered using a slide-on adapter, but from models III and IV onwards interchangeable lenses from 35mm to 115mm focal length were provided; at the time this was regarded as quite sufficient, as most cameras would only be used with the standard lens anyway. Three years later, during 1956, the Kodak Retina Reflex was launched, followed by the Voigtländer Bessamatic and the Ultramatic. The market soon flourished with leaf-shuttered SLR cameras. These mechanically complex cameras required precision assembly and high quality materials. More often than not, camera makes suffered from reliability issues, while the few better ones performed well, selling in quantity. Cameras Contaflex I and II The Contaflex I, launched in 1953, was equipped with a fixed Zeiss Tessar 45 mm lens with front-cell focusing. The earliest Contaflex I cameras had a Synchro-Compur shutter with the old scale of shutter speeds (1-2-5-10-25-50-100-250-500) and no self-timer, but it very soon adopted the new scale 1-2-4-8-15-30-60-125-250-500. The Contaflex II, introduced the following year, was the same camera with an uncoupled selenium meter added to one side of the front plate. For both, the Teleskop 1.7× supplementary lens could be attached to the front of the fixed lens using an accessory carrier bracket; as the name suggests, this extended the focal length by 70%, to approximately 75 mm. The same bracket could be used for the Steritar A attachment, which was used for stereo photography. 
Contaflex III and IV The Contaflex III, launched in 1956, was the same as the I, but equipped with a Zeiss Tessar 50mm with unit helical focusing. The Contaflex IV, introduced the same year, was the same camera with the uncoupled meter inherited from the Contaflex II. The III and IV were equipped with a convertible lens system branded Pro-Tessar, where the front element of the standard lens was removable and could be replaced by supplementary lenses, as discussed in the section Contaflex lenses, to create 35 mm and 80 mm lenses. Contaflex Alpha and Beta The Contaflex Alpha and Contaflex Beta, both introduced in 1957, were lower-cost versions of the convertible-lens Contaflex III and IV, respectively; to reduce costs, the lens was changed to a Rodenstock (Zeiss-branded) Pantar 45 mm triplet with front-element focusing, and the Compur shutter was replaced by a Prontor Reflex shutter, with a slight reduction in the top shutter speed. The Alpha had no meter, like the I/III, and the Beta had the selenium meter of the II/IV. The front element of the lens could be interchanged with supplemental lenses to create 30 mm and 75 mm lenses. These supplemental lenses had been introduced with, and were shared with, the earlier (1955) Contina III 35mm viewfinder camera. Contaflex Rapid and Super The Contaflex Rapid was introduced in 1958; compared to the III, which it replaced, the Rapid had a slightly longer body, a built-in accessory shoe, a winding lever and a rewind crank. It retained the 50 mm Tessar and convertible lens system from the III. The "Contaflex" name engraved on the front of the prism was changed to a script typeface instead of the sans-serif used on prior Contaflex cameras. It was the meterless version and was discontinued in 1960. The Contaflex Super, launched the following year, was based on the Rapid and had a coupled selenium exposure meter on the front side of the prism. It is easily recognized by the wheel on the front plate for setting the film speed (DIN). The meter needle was visible in the finder as well as on the top plate from the outside. It is sometimes referred to parenthetically as the Super (old style) to avoid confusion with the later Super (new). The major innovation for the Rapid/Super over the III/IV was the introduction of interchangeable film magazines, which permitted the photographer to swap emulsions mid-roll. The new body of the Rapid and Super allowed them to take magazine backs, interchangeable with a partly exposed film inside. Magazine backs, rare among 35mm cameras, were also supplied for the Zeiss Ikon Contarex. The Rapid and Super (old style) could take the same supplementary 35 mm and 80 mm lenses as the III and IV, and newer Pro-Tessar supplementary lenses were available for the Rapid and Super to create 35 mm, 85 mm, and 115 mm lenses. Contaflex Prima The Contaflex Prima, launched in 1959 and sold until 1965, was based on the body of the Rapid, retaining the new film magazine and lever wind, but with costs reduced by fitting the Pantar triplet lens and the Prontor shutter like the Alpha and Beta. The Prima had a coupled exposure meter placed on the side of the front plate, similar to the Beta. The Prima could take the same Pantar supplementary lenses as the Alpha and Beta. Contaflex Super (new) and Super B The Contaflex Super (new) and Contaflex Super B are very similar cameras. Both have a new body design, being longer with added bulk. 
Reference books are somewhat contradictory about which came first, but it seems the Super (new) was launched in 1962, introducing the new body design and a new selenium exposure meter in a prominent rectangle marked Zeiss Ikon in front of the prism. The aperture wheel was replaced by a more traditional aperture control, and the meter read-out was visible both on the exterior and in the finder. The Super B was launched in 1963 and added a shutter-priority automatic aperture, along with some other small changes. The Super B can be distinguished by the presence of an "A"utomatic setting on the shutter speed ring and an EV scale in the viewfinder. From the Super (new) and Super B onwards, the Zeiss Tessar 50mm f:2.8 lens was recomputed and supposedly performed better. They could still take the same supplementary lenses, with one exception discussed in the relevant section. Contaflex Super BC and S The Contaflex Super BC was introduced in 1965, and was a Super B with the selenium meter replaced by a CdS through-the-lens exposure meter. It still had a black rectangle marked Zeiss Ikon on the front of the prism, but it was only decorative. It had a battery compartment at the bottom front. The Contaflex S was the last variant, introduced in 1968, and was simply a renamed Super BC, sold until Zeiss Ikon ceased production in 1972. It had a black rectangle marked Contaflex S on the front, and a different, newer Zeiss Ikon logo. It proudly sported the word Automatic on the front of the shutter. The Super BC and S could take the magazine backs, as well as the usual supplementary lenses. Both the Contaflex Super BC and S were, along with the 126-format Contaflex 126, available in chrome or black finish. Contaflex 126 The Contaflex 126 is related to the Contaflex SLR family primarily by its name and general appearance, as it takes a different film format (126 film) and uses a different shutter technology (focal plane shutter) than the rest of the family. Voigtländer had developed it as the Icarex 126, and it was released as a Zeiss Ikon camera after Voigtländer's operations were consolidated into its larger parent in the late 1960s. It was introduced in 1967 to accept Kodak 126 (Instamatic) cartridges. It was one of the very few SLRs taking 126 film, and one of the very few cameras using that film aimed at the premium market. Two other examples of 126 SLRs are the Rollei SL26 and Kodak Instamatic Reflex. Former Zeiss Ikon chief designer Hubert Nerwin, who designed the famous Contax 2 and 3 rangefinder cameras and other cameras for Zeiss Ikon, later invented the 126 film cassette, after he had emigrated to the U.S. following World War II and was working for Kodak. The Contaflex 126 is an SLR with a focal-plane shutter and interchangeable lenses. It was available in chrome or black finish. The range of lenses was: Zeiss Distagon 25/4 Zeiss Distagon 32/2.8 Zeiss Color-Pantar 45/2.8, three-element, cheaper Zeiss Tessar 45/2.8, four-element, better Zeiss Sonnar 85/2.8 Zeiss Tele-Tessar 135/4 Zeiss Tele-Tessar 200/4 The Contaflex 126 lenses are often confused with other lenses by sellers. They can only be used on the Contaflex 126 body, which can only accept the obsolete 126 film cartridge, so the value of these lenses is not very high, despite their famous names. Weber SL75 When Zeiss Ikon stopped making cameras in 1972, they had prototypes in various stages of development. One of them was the SL725, which would have been a successor to the Contaflex line with an electronic shutter. 
The prototype ended up in the hands of a company named Weber, which presented the camera at a photokina show under the name Weber SL75, but could not afford to put it into production and did not find a partner to do so. The lens mount was a modification of the Contarex camera lens mount. Carl Zeiss advertised a range of lenses for the Weber SL75, all with the T* multicoating: 18/4 Distagon 25/2.8 Distagon 35/2.8 Distagon 50/1.4 Planar 85/2.8 Sonnar 135/2.8 Sonnar 200/3.5 Tele-Tessar An eBay seller appears to have uncovered a small stock of the Planar lens and sold a couple of them; in 2021, several of these lenses surfaced again and were sold on eBay. No SL75 body seems to have surfaced so far, and the only published picture appeared in an Italian photo magazine as a preview in its November 1974 issue. Contaflex lenses There are three classes of supplemental lenses available for Contaflex SLRs, which are not interchangeable between classes: The Contaflex I and II could only take the Teleskop 1.7x supplementary lenses, and the Alpha, Beta and Prima had their own limited range of Pantar supplementary lenses. The models III, IV, Rapid, Super, Super (new), Super B, Super BC and S all have a Zeiss Tessar 50mm f:2.8 lens (27mm screw-in or 28.5mm push-on filters); the front element can be removed and replaced by a supplemental lens: Zeiss Pro-Tessar 35/4 (49mm filters), later replaced by the Pro-Tessar 35/3.2 (60mm screw-over filters) Zeiss Pro-Tessar 85/4 (60mm screw-over filters), later replaced by the Pro-Tessar 85/3.2 (60mm filters) Zeiss Pro-Tessar 115/4 (67mm filters) Monocular 8x30B, equivalent to a 400mm lens (attaches to the 50mm f/2.8 Tessar lens). There was also a Zeiss Pro-Tessar M 1:1 supplementary lens, which kept the focal length of 50mm but allowed 1:1 reproduction. The effective speed of the M 1:1 lens is f/5.6. The 50mm standard front elements, as well as the Pro-Tessar M 1:1 elements, were different between the early models III, IV, Rapid and Super with the old model of Tessar, and the later models Super (new), Super B, Super BC and S with the recomputed Tessar. It appears that the mount was very slightly modified, and it seems physically impossible to mismatch the elements, as the journal diameter above the bayonet mount had been reduced by approximately 0.006 in. There were also stereo attachments: Steritar A for the Contaflex I and II Steritar B for the other Tessar-equipped models Near Steritar for close-up stereo pictures, 0.2–2.5 meters (normally interchangeable with the older Tessar line of Steritar B camera lenses) Steritar D for the Pantar-equipped models A complete line of these Contaflex Steritar lenses can be seen at https://www.flickr.com/photos/12670411@N02/ Zeiss Proxar for Contaflex: 1 m, 0.5 m, 0.3 m, 0.2 m and 0.1 m Accessories Slip-on metal lens hood Screw-in metal lens hood Film back Zeiss Proxar lens set References Bibliography Barringer, C. and Small, M. Zeiss Compendium East and West — 1940–1972. Small Dole, UK: Hove Books, 1999 (2nd edition). External links Contaflex II and Contaflex S at La Chambre Claire Contaflex 126 at www.collection-appareils.com by Sylvain Halgand Contaflex II at www.collection-appareils.com by Sylvain Halgand User manuals, Ads about Contaflex at www.collection-appareils.com by Sylvain Halgand Single-lens reflex cameras
Contaflex SLR
[ "Technology" ]
3,414
[ "System cameras", "Single-lens reflex cameras" ]
13,232,736
https://en.wikipedia.org/wiki/Energy%20transfer%20upconversion
Energy transfer upconversion (ETU) is a physical principle, most commonly encountered in solid-state laser physics, that involves the excitation of a laser-active ion to a level above that which would be achieved by simple absorption of a pump photon, with the required additional energy being transferred from another laser-active ion undergoing nonradiative deexcitation. ETU involves two fundamental ideas: energy transfer and upconversion. The analysis below discusses ETU in the context of an optically pumped solid-state laser. A solid-state laser has laser-active ions embedded in a host medium. Energy may be transferred between these by dipole–dipole interaction (over short distances) or by fluorescence and reabsorption (over longer distances). In the case of ETU it is primarily dipole–dipole energy transfer that is of interest. If a laser-active ion is in an excited state, it can decay to a lower state either radiatively (i.e. energy is conserved by the emission of a photon, as required for laser operation) or nonradiatively. Nonradiative decay may be via Auger decay or via energy transfer to another laser-active ion. If this occurs, the ion receiving the energy will be excited to a higher energy state than that already achieved by absorption of a pump photon. This process of further exciting an already excited laser-active ion is known as photon upconversion. ETU is normally an unwanted effect when building lasers. Nonradiative decay is itself an inefficiency (in a perfect laser every downward transition would be a stimulated emission event), whilst the excitation of the energy-receiving ion can result in heating of the gain medium. When ETU occurs due to a clustering of ions within the host medium, it is sometimes termed concentration quenching. References Solid-state lasers
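The loss channel that ETU adds is often written as a quadratic term in a macroscopic rate equation. The following form is a common illustration from the rate-equation literature, not an equation taken from this source, and all symbols are assumptions:

$$\frac{\mathrm{d}n_{2}}{\mathrm{d}t} \;=\; R_{\mathrm{p}} \;-\; \frac{n_{2}}{\tau} \;-\; 2\,W_{\mathrm{ETU}}\,n_{2}^{2}$$

Here n2 is the population density of the excited (upper) level, Rp the pump rate, τ the lifetime of the level, and W_ETU the upconversion parameter. The term is quadratic in n2, with a factor of 2, because each ETU event involves a pair of excited ions: one is promoted further up while the donor drops down, so two ions leave the level.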
Energy transfer upconversion
[ "Chemistry" ]
399
[ "Solid state engineering", "Solid-state lasers" ]
13,235,454
https://en.wikipedia.org/wiki/International%20Mass%20Spectrometry%20Foundation
The International Mass Spectrometry Foundation (IMSF) is a non-profit scientific organization in the field of mass spectrometry. It operates the International Mass Spectrometry Society, which consists of 37 member societies, and sponsors the International Mass Spectrometry Conference, held once every two years. Aims The foundation has four aims: organizing international conferences and workshops in mass spectrometry improving mass spectrometry education standardizing terminology in the field aiding in the dissemination of mass spectrometry through publications Conferences Before the formation of the IMSF, the first International Mass Spectrometry Conference was held in London in 1958, and 41 papers were presented. Since then, conferences were held every three years until 2012, and have been held every two years since. Conference proceedings are published in a book series, Advances in Mass Spectrometry, which is the oldest continuous series of publications in mass spectrometry. The International Mass Spectrometry Society evolved from this series of International Mass Spectrometry Conferences. The IMSF was officially registered in the Netherlands in 1998 following an agreement at the 1994 conference. Awards The society sponsors several awards, including the Curt Brunnée Award for achievements in instrumentation by a scientist under 45 years of age and the Thomson Medal Award for achievements in mass spectrometry, as well as travel awards and student paper awards. See also American Society for Mass Spectrometry British Mass Spectrometry Society Canadian Society for Mass Spectrometry List of female mass spectrometrists References External links Chemistry societies Mass spectrometry Organisations based in Gelderland Scientific organisations based in the Netherlands
International Mass Spectrometry Foundation
[ "Physics", "Chemistry" ]
349
[ "Spectrum (physical sciences)", "Instrumental analysis", "Mass", "Mass spectrometry", "Chemistry societies", "Matter" ]
15,909,871
https://en.wikipedia.org/wiki/Symbolic%20trajectory%20evaluation
Symbolic trajectory evaluation (STE) is a lattice-based model checking technology that uses a form of symbolic simulation. STE is used mainly for computer hardware, that is, circuit verification. The technique uses abstraction, meaning that details of the circuit behaviour are removed from the circuit model. It was first developed by Carl Seger and Randy Bryant in 1995 as an alternative to "classical" symbolic model checking. References C.-J. H. Seger and R. E. Bryant, Formal Verification by Symbolic Evaluation of Ordered Trajectories, Formal Methods in System Design, Vol. 6, No. 2 (March 1995), pp. 147–190 Model checking Management cybernetics
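The core abstraction can be illustrated with the ternary 0/1/X domain that lattice-based symbolic simulation builds on. This sketch shows the general idea only; it is not code from Seger and Bryant's work:

```python
# Ternary simulation of an AND gate over the lattice {0, 1, X},
# where X means "unknown". Abstraction lets one simulation run
# cover many concrete input assignments at once.
X = "X"

def and_ternary(a, b):
    # 0 dominates: the output is 0 whenever either input is known to be 0.
    if a == 0 or b == 0:
        return 0
    # Both inputs known to be 1: the output is known to be 1.
    if a == 1 and b == 1:
        return 1
    # Otherwise the output cannot be determined: X propagates.
    return X

assert and_ternary(0, X) == 0  # one abstract run covers both concrete cases
assert and_ternary(1, X) == X  # here the output depends on the unknown input
```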
Symbolic trajectory evaluation
[ "Technology" ]
140
[ "Computing stubs", "Computer science", "Computer science stubs" ]
15,909,922
https://en.wikipedia.org/wiki/Dicro
Dicro Oy is a Finnish contract manufacturer of electronics, cable and electromechanical assemblies for various industrial, medical and telecommunications businesses. The company headquarters is in Nummela, a town in the municipality of Vihti, Finland. The company was founded in 1987 and in 2001 opened a new manufacturing plant in Estonia. On February 13, 2006, Dicro bought out another Nummela-based firm, the RFI filter manufacturer GE Procond Oy, until then part of General Electric. The new subsidiary was renamed Procond Oy. Procond Oy The company Procond Oy is based in Nummela and manufactures industrial RFI and EMP filters. These include: Power line filters RFI filters EMC filters EMP filters Rectifiers References External links Electronics companies of Finland Electronics companies established in 1987 Finnish companies established in 1987 Vihti Filter manufacturers
Dicro
[ "Chemistry" ]
187
[ "Filter manufacturers", "Filters" ]
15,910,901
https://en.wikipedia.org/wiki/Cryptotope
A cryptotope is an antigenic site or epitope hidden in a protein or virion by surface subunits. Cryptotopes are antigenically active only after the dissociation of protein aggregates and virions. Some infectious pathogens are known to escape immunological targeting by B-cells by masking antigen-binding sites as cryptotopes. A cryptotope can also be referred to as a cryptic epitope. Cryptotopes are becoming important for HIV vaccine research as a number of studies have shown that cryptic epitopes can be revealed or exposed when HIV gp120 binds to CD4. References Immune system
Cryptotope
[ "Biology" ]
133
[ "Immune system", "Organ systems" ]
15,912,991
https://en.wikipedia.org/wiki/Hopf%20maximum%20principle
The Hopf maximum principle is a maximum principle in the theory of second order elliptic partial differential equations and has been described as the "classic and bedrock result" of that theory. Generalizing the maximum principle for harmonic functions, which was already known to Gauss in 1839, Eberhard Hopf proved in 1927 that if a function satisfies a second order partial differential inequality of a certain kind in a domain of Rn and attains a maximum in the domain, then the function is constant. The simple idea behind Hopf's proof, the comparison technique he introduced for this purpose, has led to an enormous range of important applications and generalizations. Mathematical formulation Let u = u(x), x = (x1, ..., xn), be a C2 function which satisfies the differential inequality Lu = Σij aij(x) ∂²u/∂xi∂xj + Σi bi(x) ∂u/∂xi ≥ 0 in an open domain (connected open subset of Rn) Ω, where the symmetric matrix aij = aji(x) is locally uniformly positive definite in Ω and the coefficients aij, bi are locally bounded. If u takes a maximum value M in Ω then u ≡ M. The coefficients aij, bi are just functions. If they are known to be continuous then it is sufficient to demand pointwise positive definiteness of aij on the domain. It is usually thought that the Hopf maximum principle applies only to linear differential operators L. In particular, this is the point of view taken by Courant and Hilbert's Methoden der mathematischen Physik. In the later sections of his original paper, however, Hopf considered a more general situation which permits certain nonlinear operators L and, in some cases, leads to uniqueness statements in the Dirichlet problem for the mean curvature operator and the Monge–Ampère equation. Boundary behaviour If the domain Ω has the interior sphere property (for example, if Ω has a smooth boundary), slightly more can be said. If, in addition to the assumptions above, u is C1 up to the boundary point and takes a maximum value M at a point x0 on the boundary ∂Ω, then for any outward direction ν at x0, there holds ∂u/∂ν(x0) > 0, unless u ≡ M. References Elliptic partial differential equations Mathematical principles
Hopf maximum principle
[ "Mathematics" ]
431
[ "Mathematical principles" ]
15,914,332
https://en.wikipedia.org/wiki/AmbieSense
AmbieSense was a large European project funded by the Information Society Technologies, Fifth Framework Programme of the European Commission (EU-IST 2001-34244). A company of the same name has been formed and has contributed to initiatives such as the open source Webinos project. Objectives The AmbieSense project looked into the future of the ambient intelligence landscape. Miniature, wireless context tags are mounted in everyday surroundings and situations. The tags are smart objects embedded in the environment of people with mobile devices. Project vision: "Relevant information to the right situation and user". The resulting AmbieSense technology and applications inspired several of the later Sixth Framework Programme (FP6) projects. Project impact - from idea to market The AmbieSense project has been described as "turning the mobile operator model on its head". The invented system enables new and flexible business models for the distribution and delivery of, and interaction with, mobile information. Applications for travel and tourism were implemented for Oslo Airport, Gardermoen and Seville city centre, in which Lonely Planet was also involved as a content provider. The piloting of the applications and technologies was well received, which led to the commercialisation of the project outcome (see: External links below). AmbieSense main components An AmbieSense system includes three cornerstones: Wireless context tags populated in the environment A content service provider The users with mobile phones The system integrates context tags with information from content service providers. The mobile information came both from a general travel guide publisher and from local information providers. Content service providers are able to provide online information directly to a user or via tags mounted in various strategic places, thus creating an information zone. Information can be uploaded remotely via WiFi or Ethernet (by the content service provider or building owner, for example) and is accessible locally by the user who is in that environment and situation, via Bluetooth push and/or pull. For example, in the context of an ambient travel guide, historic and cultural web pages, local sights, shops, maps, and local events can be communicated to mobile phones. Web pages and other multimedia content were relayed and distributed via the tags. One of the applications, the travel guide for mobile phones, was also presented on EuroNews HiTech, a TV programme and web column, on January 6, 2005. Users can also receive recommendations or search results based on their context. These may be explicitly stated, implicitly derived by analysing their search behaviour, or drawn from environment information provided by the context tags. Other dissemination results Press coverage in many newspapers in Spain, Scotland, Germany, and Norway (both online and paper versions). Three Spanish radio stations, one Spanish TV channel, and the international channel EuroNews. Several articles in Gemini, a popular scientific magazine in Norway. Additional information can be found on various web sites around the world. News items on the IST Results web (Europe), the Association for Computing Machinery (ACM) Bulletin web (United States), and the EuroNews web. References External links AmbieSense commercial company AmbieSense EU-IST project Ambient intelligence Distributed computing projects Information technology organizations based in Europe
AmbieSense
[ "Technology", "Engineering" ]
643
[ "Distributed computing projects", "Computing and society", "Ambient intelligence", "Information technology projects" ]
7,109,264
https://en.wikipedia.org/wiki/Model%20engine
A model engine is a small internal combustion engine typically used to power a radio-controlled aircraft, radio-controlled car, radio-controlled boat, free flight, control line aircraft, or ground-running tether car model. Because of the square–cube law, the behaviour of many engines does not scale up or down at the same rate as the machine's size; at best this causes a dramatic loss of power or efficiency, and at worst it causes them not to work at all. Methanol and nitromethane are common fuels. Overview The fully functional, albeit small, engines vary from the most common single-cylinder two-stroke to the exotic single- and multiple-cylinder four-stroke, the latter taking shape in boxer, v-twin, inline and radial form; a few Wankel engine designs are also used. Most model engines run on a blend of methanol, nitromethane, and lubricant (either castor or synthetic oil). Two-stroke model engines, most often designed since 1970 with Schnuerle porting for best performance, typically range in size from .12 cubic inches (2 cubic centimeters) to 1.2 ci (19.6 cc) and generate between .5 horsepower (370 watts) and 5 hp (3.7 kW); they can be as small as .010 ci (.16 cc) and as large as 3-4 ci (49–66 cc). Four-stroke model engines have been made in sizes as small as 0.20 in3 (3.3 cc) for the smallest single-cylinder models, all the way up to 3.05 in3 (50 cc) for the largest single-cylinder units; twin- and multi-cylinder engines on the market range from as small as 10 cc for opposed-cylinder twins to somewhat larger than 50 cc, and even well above 200 cc for some model boxer opposed-piston, inline and radial engines. While the methanol and nitromethane blended "glow fuel" engines are the most common, many larger (especially above 15 cc/0.90 ci displacement) model engines, both two-stroke and a growing number of four-stroke examples, are spark ignition and are primarily fueled with gasoline. Some examples of both two- and four-stroke glow plug-designed methanol aeromodeling engines are capable, with aftermarket upgrades, of having battery-powered, electronically controlled spark ignition systems replace the glow plugs normally used. Model engines refitted in such a manner often run more efficiently on methanol-based glow plug engine fuels, often with the ability to exclude nitromethane altogether from their fuel formulas. This article concerns itself with the methanol engines; gasoline-powered model engines are similar to those built for use in string trimmers, chainsaws, and other yard equipment, unless they happen to be purpose-built for aeromodeling use, as is especially true of four-stroke gasoline-fueled model engines. Such engines usually use a fuel that contains a small percentage of motor oil for lubrication, as a two-stroke engine does, since most model four-stroke engines (be they glow plug or spark ignition) have no built-in reservoir for motor oil in their crankcase or engine block design. The majority of model engines have used, and continue to use, the two-stroke cycle principle to avoid needing valves in the combustion chamber, but a growing number of model engines use the four-stroke cycle design instead. Both reed valve and rotary valve-type two-strokes are common, with four-stroke model engines using either conventional poppet valve or rotary valve formats for induction and exhaust. The engine shown to the right has its carburetor in the center of the zinc alloy casting to the left.
(It uses a flow restriction, like the choke on an old car engine, because the venturi effect is not effective on such a small scale.) The valve reed, cross-shaped above its retainer spring, is still made of beryllium copper alloy in this old engine. The glow plug is built into the cylinder head. Large production volume makes it possible to use a machined cylinder and an extruded crank case (cut away by hand in the example shown). These Cox Bee reed valve engines are notable for their low cost and ability to survive crashes. The components of the engine shown come from several different engines. Comparison of engines Images of a glowplug engine and a "diesel" engine are shown below for comparison. The most obvious external difference is seen on top of the cylinder head. The glowplug engine's glow plug has a pinlike terminal for its center contact, which serves as the electrical connector for the glow plug. The "diesel" engine has a T-bar which is used for adjusting the compression. The cylindrical object behind the glowplug engine is an exhaust silencer or muffler. Glowplug engines Glow plugs are used for starting as well as for continuing the power cycle. The glow plug consists of a durable, mostly platinum, helically wound wire filament, within a cylindrical pocket in the plug body, exposed to the combustion chamber. A small direct current voltage (around 1.5 volts) is applied to the glow plug, the engine is then started, and the voltage is removed. The burning of the fuel/air mixture in a glow-plug model engine, which requires methanol for the glow plug to work in the first place, and sometimes includes nitromethane for greater power output and steadier idle, occurs due to the catalytic reaction of the methanol vapor with the platinum in the filament, thus causing the ignition. This keeps the plug's filament glowing hot, and allows it to ignite the next charge. Since the ignition timing is not controlled electrically, as in a spark ignition engine, or by fuel injection, as in an ordinary diesel, it must be adjusted by the richness of the mixture, the ratio of nitromethane to methanol, the compression ratio, the cooling of the cylinder head, the type of glow plug, etc. A richer mixture will tend to cool the filament and so retard ignition, slowing the engine; a rich mixture also eases starting. After starting, the engine can easily be leaned (by adjusting a needle valve in the spraybar) to obtain maximum power. Glowplug engines are also known as nitro engines. Nitro engines require a 1.5 volt ignitor to heat the glow plug in the cylinder head. Once primed, pulling the starter with the ignitor attached will start the engine. Diesel engines Diesel engines are an alternative to methanol glow plug engines. These "diesels" run on a mixture of kerosene, ether, castor oil or vegetable oil, and a cetane booster such as amyl nitrate. Despite their name, their use of compression ignition, and the use of a kerosene fuel that is similar to diesel, model diesels share very little with full-size diesel engines. Full-size diesel engines, such as those found in a truck, are fuel injected and either two-stroke or four-stroke. They use compression ignition to ignite the mixture: the compression within the cylinder heats the inlet charge sufficiently to cause ignition, without requiring an applied ignition source. A fundamental feature of such engines, unlike petrol (gasoline) engines, is that they draw in air alone and the fuel is only mixed by being injected into the combustion chamber separately.
Model diesel engines are instead carbureted two-strokes using crankcase compression. The carburetor supplies a mixture of fuel and air into the engine, with the proportions kept fairly constant and their total volume throttled to control the engine power. Apart from sharing the diesel's use of compression ignition, their construction has more in common with a small two-stroke motorcycle or lawnmower engine. In addition to this, model diesels have variable compression ratios. This variable compression is achieved by a "contra-piston", at the top of the cylinder, which can be adjusted by a screwed "T-bar". The swept volume of the engine remains the same, but as the volume of the combustion chamber at top dead centre is changed by adjusting the contra-piston, the compression ratio, defined as (swept volume + combustion chamber volume) / combustion chamber volume, changes accordingly. Model diesels are found to produce more torque than glow engines of the same displacement, and are thought to get better fuel efficiency, because the same power is produced at a lower rpm and in a smaller displacement engine. However, the specific power may not be significantly superior to a glow engine, due to the heavier construction needed to ensure that the engine can withstand the much higher compression ratio, sometimes reaching 30:1. Diesels also run significantly quieter, due to the more rapid combustion; two-stroke glow engines, by contrast, may still be burning the charge when the exhaust ports are uncovered, causing a significant amount of noise. Recent developments in model engineering have produced true diesel model engines, with a traditional injector and injector pump, and these engines operate in the same way as a large diesel engine. See also Four-stroking Glow plug (model engine) Glow fuel Nitro engine Schnuerle porting, used on model two-stroke engines since the 1970s Makers Bullitt Engines Cox Model Engines Enya Model Engines (two and four-stroke model engines) FOX Manufacturing FX Royal Racing Engines K&B Manufacturing Laser Engines LRP electronic (rebranded OS Engines) Mantua Models GAUI GPOWER MECOA Motori Cipolla Ninja Engine Novarossi nVision O.S. Engines (two and four-stroke model engines) OPS (engine) Picco Micromotori RB Products rcvengines Reds Racing Saito Seisakusho (four-stroke and model steam engine specialist) Team Orion Thunder Tiger Webra Yamada Engines (YS) (two and four-stroke model engines) References External links K&B Manufacturing Yamada Engines Saito Seisakusho WEBRA FOX Manufacturing MECOA COX Hobbies Engine technology Model engines Radio control Scale modeling
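To make the contra-piston arithmetic described above concrete, here is a minimal Python sketch of the compression-ratio formula. The displacement and chamber volumes are invented illustrative figures, not measurements of any particular engine.

```python
# Sketch of the variable-compression formula quoted above:
# ratio = (swept volume + combustion chamber volume) / combustion chamber volume.
# Volumes below are made-up example figures, not data for any real engine.

def compression_ratio(swept_cc: float, chamber_cc: float) -> float:
    """Return the geometric compression ratio for volumes given in cc."""
    return (swept_cc + chamber_cc) / chamber_cc

swept = 2.5  # cc, in the range of a small model diesel's displacement
# Screwing the T-bar down shrinks the combustion chamber and raises the ratio.
for chamber in (0.20, 0.12, 0.086):
    ratio = compression_ratio(swept, chamber)
    print(f"chamber {chamber:.3f} cc -> compression ratio {ratio:.1f}:1")
```

With the smallest chamber volume here, the ratio approaches the 30:1 figure mentioned in the text.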
Model engine
[ "Physics", "Technology" ]
2,103
[ "Engine technology", "Scale modeling", "Model engines", "Engines" ]
7,109,281
https://en.wikipedia.org/wiki/Award%20pin
An award pin is a small object, usually made from metal or plastic, with a pin on the back, presented as an award of achievement or a mark of appreciation. They are worn on clothes such as jackets, shirts or hats. Description Award pins usually have an image or words, or both, depicting the reason for the award. An award pin series that is offered by the U.S. Government to all eligible civilians is the Pilot Proficiency Award Program. Award pins are commonly given to participants in youth sports as a method to reinforce excellent play and sportsmanship. There are many companies that provide sports award pins. Award pins can usually be plated in gold (plain or antique), silver (plain or antique), nickel, black nickel or copper (plain or antique). During the manufacturing process pins can be filled with enamel colors and then covered with a thin coat of epoxy to protect these colors. Noteworthy award pins The Astronaut pin is awarded to military and civilian personnel who have completed training and performed a successful spaceflight, and is the least-awarded qualification badge of the United States military. After the Apollo 1 fire in 1967, NASA turned to Charles Schulz to use the character Snoopy for its new safety award. The Silver Snoopy award pin was created to reward those who significantly contribute to the safety of spaceflight operations. It is one of the highest awards in NASA and the space industry. In 2010, the French Minister of Culture Frédéric Mitterrand lightly pricked Marion Cotillard's chest with the pin as he decorated her with the insignia of Chevalier de l'Ordre des Arts et des Lettres. See also Badge Lapel pin References Award items Memorabilia Badges
Award pin
[ "Mathematics" ]
342
[ "Symbols", "Badges" ]
7,110,145
https://en.wikipedia.org/wiki/Spectra%20Shield
Spectra Shield is a composite material (specifically, an ultra-high-molecular-weight polyethylene (UHMWPE) fiber) used in bulletproof vests and vehicle armour. It is manufactured by Honeywell. Spectra is a fiber offered by Honeywell, and "Shield" is their patented process for using resin and a "plastic film" to bind multiple overlapping layers of Spectra without having to weave them together. Other popular fibers with similar uses are aramid (Kevlar or Twaron) and Dyneema (another UHMWPE). References External links Honeywell: Spectra Fiber Composite materials
Spectra Shield
[ "Physics" ]
132
[ "Materials", "Composite materials", "Matter" ]
7,110,810
https://en.wikipedia.org/wiki/Pocket%20shark
The pocket shark (Mollisquama parini) is a species of kitefin shark in the family Dalatiidae. The species is found in deep water off Chile in the southeastern Pacific Ocean. It was the only member of the genus Mollisquama until another species, M. mississippiensis, was discovered in the Gulf of Mexico. Both species are distinguished from other sharks by two pockets next to the front fins. The pockets are large, measuring about 4% of the shark's body length. Some researchers hypothesize that the pockets may excrete some kind of glowing fluid or pheromones. Etymology The specific name, parini, is in honor of Russian ichthyologist Nikolai Vasilevich Parin (born 1932). Distribution and habitat The first specimen of M. parini was found off the coast of Chile on the Nazca Submarine Ridge. This specimen was an adolescent female with a total length of , taken at a depth of , in 1979. This initially suggested that the species was distributed throughout the Pacific Ocean. In February 2010, a similar specimen with a total length of was caught off the coast of Louisiana, in the Gulf of Mexico. This second specimen was determined to be a new species, which was described and named as M. mississippiensis. It is now believed that the genus Mollisquama is more widely dispersed than previously hypothesized. Description Sharks of the family Dalatiidae are small-sized to medium-sized, with two spineless dorsal fins. They are described as having strong jaws with dagger-like upper teeth and wider blade-like teeth in the lower jaw. From the one finding of the pocket shark in the Gulf of Mexico (M. mississippiensis), the mouth was described as having a rectangular-like opening on the underside of the body. The juvenile male shark found in the Gulf of Mexico weighed and had a total length of . The overall shape of the shark is cylindrical, with a wide, rounded snout tapering back toward the caudal fin. Pocket gland The pocket shark, Mollisquama parini, gets its common name from a small pocket gland that is found behind each pectoral fin on either side of the shark. The purpose of this gland is still unknown, as not enough specimens have been found to investigate the matter. The closest suggestion for the purpose of this gland is that it acts as a luminous pouch, as found on the species Euprotomicroides zantedeschia. While this pocket gland appears to have a slightly darker gray coloration, the rest of the shark's body is described as light gray with brown undertones. The pocket is located approximately from the base of the pectoral fin and was measured to be long and wide. Environmental threats and conservation There is essentially no interaction between M. parini and humans, and the species does not seem to pose any threat to other species, including humans. No methods of conservation are in place to protect this species, as population numbers are unknown. References Further reading Dolganov VN (1984). "[New shark from family Squalidae caught on Naska Submarine Ridge]". Zoologicheskii zhurnal 63: 1589-1591. (Mollisquama, new genus; Mollisquama parini, new species). (in Russian). External links NOAA and Tulane researchers identify second possible specimen ever found Dalatiidae Fish described in 1984 Species known from a single specimen
Pocket shark
[ "Biology" ]
713
[ "Individual organisms", "Species known from a single specimen" ]
7,111,308
https://en.wikipedia.org/wiki/Tubular%20NDT
Tubular NDT (nondestructive testing) is the application of various technologies to detect anomalies such as corrosion and manufacturing defects in metallic tubes. Tubing can be found in such equipment as boilers and heat exchangers. To carry out an examination in situ (i.e., examination of the tubes in position, where they are installed), a manhole cover is usually removed to allow a technician access to the tubes. Alternatively, a tube bundle may be removed from a heat-exchanger and transported by forklift to a maintenance area for easier access. The usual means of examination is to insert some type of probe into the tubes, one at a time, while data is recorded for later interpretation. The technologies listed below (ECT, RFT, IRIS, and MFL) are all able to detect defects on the outside of the tube from the inside. The tubes must be clean enough to allow passage of the probe: deposits of debris, rust, or scale may have to be removed by chemicals or pressure washing. In water-tube boilers, the tubes may be examined from the outside when the boiler is shut down, often using ultrasonic testing. Common methods Eddy-current testing (ECT) is commonly used on non-ferromagnetic metals and alloys such as copper, brass, and copper nickel. Variations on ECT are partial saturation ECT and magnetic biased ECT, both of which use magnets to allow ECT to operate in lightly ferromagnetic materials or in thin-wall ferromagnetic tubes. Remote field testing (RFT) is used on ferromagnetic materials such as carbon steel. IRIS (internal rotary inspection system) can be used on all types of metal tubes. IRIS is very slow, but very accurate, and is often used as a back-up to a remote field examination. Magnetic flux leakage (MFL) testing is used on carbon steel tubes, although it tends to be less accurate than remote field testing. References Sources Heat exchangers: Monitoring and maintenance H. Sadek, NDE technologies for the examination of heat exchangers and boiler tubes – principles, advantages and limitations, PDF, 2.1 MB. Fathi E. Al-Qadeeb, Tubing Inspection Using Multiple NDT Techniques, PDF, 118 kB. Nondestructive testing
Tubular NDT
[ "Materials_science" ]
484
[ "Nondestructive testing", "Materials testing" ]
7,111,771
https://en.wikipedia.org/wiki/UVB-induced%20apoptosis
UVB-induced apoptosis is the programmed cell death of cells that have become damaged by ultraviolet rays. This is notable in skin cells, where it helps prevent melanoma. Some studies have shown that exercise accelerates this process. Description Apoptosis is a physiological process that promotes the active, programmed death of cells to the organism's advantage, unlike necrosis, which results from trauma. In the average human adult it is estimated that 50 to 70 billion cells die each day from apoptosis. One of the largest promoters of apoptosis is exposure to ultraviolet (UV) light. While UV light is essential to human life, it can also cause harm by inducing cancer, immunosuppression, photoaging, inflammation, and cell death. Of the various components of sunlight, ultraviolet radiation B (UVB) (290-320 nm) is considered to be the most harmful. This type of radiation acts primarily on the epidermis, and in particular the keratinocytes. Keratinocytes are known to form a barrier within the skin that provides a layer of protection against environmental hazards. Within the epidermis, in addition to the keratinocytes, there are melanocytes (melanin-producing cells). These cells produce pigment that provides the keratinocytes with protection against UVB radiation. Once keratinocytes have been damaged irreparably as a result of UVB radiation, they are marked for destruction by apoptosis to eliminate them as potentially mutagenic cells. Failure of the body to remove DNA-damaged cells increases the risk of skin cancer. One consequence of acute UVB exposure is the occurrence of sunburn cells, keratinocytes, within the epidermis. It has been found that when exposed to UVB radiation, the DNA in an epidermis cell undergoes fragmentation, which could result in the growth of tumor cells. To prevent this, the cell undergoes a morphological change into a sunburn cell. These damaged keratinocytes exhibit the capacity to release TNF-α (tumor necrosis factor alpha), which stops the growth of the tumor by promoting the death of the cell. If keratinocyte cells have been damaged by UVB radiation, the term "sunburn cell" or "SBC formation" is used. It is thought that when keratinocytes have been damaged by UVB radiation, this triggers a series of processes, caused in part by damage to the DNA. A study indicates that it may be at the mitochondria where the various pathways (ligand-dependent receptor activation and cytosolic signaling) are activated by the production of reactive oxygen species (ROS), which may direct the destruction of keratinocytes through apoptosis by activating caspases. Increased exposure to an oxygen-reduced environment promotes the development of ROS, thereby linking the incidence of ROS with keratinocytes and making these cells more sensitive to UVB radiation. A 2002 study by Tobi et al. linked ROS with cytotoxicity, apoptosis, mutations, and carcinogenesis. Mild hypoxia (1-5%) sensitized keratinocytes to UVB-induced apoptosis, while protecting melanocytes from environmental stresses. A study by Mark Schotanus et al. has demonstrated that in addition to potential damage to keratinocytes and melanocytes, exposure to UVB radiation may also produce a loss of potassium ions, which may then cause the activation of apoptotic pathways in lymphocytes and neuronal cells as opposed to keratinocytes and melanocytes. It has been demonstrated that incubation of lymphocytes and neuronal cells in elevated concentrations of potassium ions provides protection from apoptosis.
This phenomenon was demonstrated in tears, which have higher levels of potassium ions and bathe the cells of the eye, thereby providing protection from UVB radiation. Reduction of potassium ions promotes apoptosis and the synthesis of the initiator caspase-8 and the effector caspase-3. A study reported in the International Journal of Molecular Sciences in 2012; 13(3), pages 2560-2675, published February 28, 2012, by Terrence J. Piva, Catherine M. Davern, Paula M. Hall, Clay M. Winterford and Kay A.O. Ellem, found that while caspases may play a role in apoptosis, caspase-3 specifically is not responsible. The study reported that the process of apoptosis includes: "detachment from the substrate, followed by loss of specialized membrane structures such as microvilli. The cell then undergoes rounding, shrinkage and blebbing before condensation of chromatin is observed in the nucleus. After a period of time the cell fragments into apoptotic bodies, which in vivo are engulfed and degraded by phagocytic cells such as macrophages". Caspase-1 is involved in the aforementioned cell membrane activity, but not caspase-3. UVB-induced apoptosis pathway The sequence of events that leads to apoptosis is multifaceted and complex. Despite the simple concept of apoptosis, the sequence of events that leads to it, and the conditions that attempt to counteract it, can be very involved. Since apoptosis is a last-resort alternative, multiple other genes (ING2, p53, or the Ras subfamily) must be expressed before the cell is finally programmed for death. In addition, genes like Survivin can attempt to suppress apoptosis. References Free Radical Biology and Medicine, Vol 52, Issue 6, 15 March 2012, pages 1111-1120. Skin mild hypoxia enhances killing of UVB-damaged keratinocytes through reactive oxygen species-mediated apoptosis requiring Noxa and Bim. Kris Nys, Hannelore Maes, Graciela Andrei, Robert Snoeck, Maria Garmyn, Patrizia Agostinis. Experimental Eye Research, Vol 93, Issue 5, November 2011, pages 735-740. Stratified corneal limbal epithelial cells are protected from UVB-induced apoptosis by elevated extracellular potassium ions. Mark Schotanus, Leah R. Koetje, Rachel E. Van Dyken, John L. Ubels. Methods 2008; 44; pages 205-221. Apoptosis and necrosis: detection, discrimination and phagocytosis. Krysko D.V., Vanden Berghe T., D'Herde K., Vandenabeele P. External links LiveScience article on the subject Cell signaling Immune system Programmed cell death
UVB-induced apoptosis
[ "Chemistry", "Biology" ]
1,394
[ "Immune system", "Signal transduction", "Senescence", "Organ systems", "Programmed cell death" ]
7,112,124
https://en.wikipedia.org/wiki/Making%20Sweden%20an%20Oil-Free%20Society
In 2005 the government of Sweden appointed a commission to draw up a comprehensive programme to reduce Sweden's dependence on petroleum, natural gas and other 'fossil raw materials' by 2020. In June 2006 (less than three months before the 2006 general election) the commission issued its report, entitled Making Sweden an Oil-Free Society. The report cited four reasons to reduce oil dependence: The impact of oil prices on Swedish economic growth and employment The link between oil, peace and security throughout the world The great potential to use Sweden's own clean renewable energy resources in place of oil The threat of climate change resulting from the extensive burning of fossil fuels As of 2005, oil supplies provided about 32% of the country's energy supply, with nuclear power and hydroelectricity providing much of the remainder. Although the report did not propose to end the use of oil entirely, the 2020 date was suggested as a marker in a continuing process of "oil phase-out in Sweden". Following the defeat of the incumbent government coalition in the 2006 general election, the proposals were not included in the energy policy or in any law. "Sweden's energy policy, in both the short and the long term, is to safeguard the supply of electricity and other forms of energy on terms that are competitive with the rest of the world. It is intended to create the right conditions for efficient use of energy and a cost efficient Swedish supply of energy, with minimum adverse effect on health, the environment or climate, and assisting the move towards an ecologically sustainable society." Commission on Oil Independence To make recommendations on how dependency on oil should be broken, the government created a Commission on Oil Independence, headed by the then Prime Minister Göran Persson, which reported in June 2006. In their report, the Commission proposed the following targets for 2020: consumption of oil in road transport to be reduced by 40–50 per cent; consumption of oil in industry to be cut by 25–40 per cent; heating buildings with oil, a practice already cut by 70% since the 1973 oil crisis, should be phased out; overall, energy should be used 20% more efficiently. The commission envisioned replacing oil with renewable energy sources and cutting total energy use through conservation measures. This was also expected to result in cuts in carbon emissions and to strengthen the country's role in sustainable development technologies, as well as increasing its international economic competitiveness. Energy sources Technical solutions under consideration include the further development of domestically grown biofuels, solar cells, fuel cells, wind farms, wave energy, a major increase in district heating schemes and greater use of geothermal heat pumps. It is expected that research, development and commercialization of such technologies will be supported by government. The commission also recommended that the government should not sanction the creation of a national natural gas infrastructure, on the belief that this would inhibit the development of biofuels and encourage the use of gas in place of oil. Energy use To cut energy use, the commission anticipated that by 2020 at least 75% of all new housing would use low-energy building techniques similar to the German passive house standard, and that it would also be necessary to modernize the existing housing stock, including replacing direct electric heating systems (with systems heated by district heating, biofuels or heat pumps).
They also expected there to be greater use of remote work, videotelephony and web conferencing, public transport, ship transport, hybrid vehicles, and smaller, lighter, biodiesel cars. As part of reducing industrial consumption, it was proposed that carbon allowances issued in Sweden under the European Union Emission Trading Scheme should be cut to 75% of their initial levels by 2020. The taxation system is also likely to be used to influence energy choices, together with education and public awareness initiatives. Progress On their release, the commission's proposals were supported by the national automotive industry association, BIL Sweden. They were, however, opposed by the timber industry, which feared that land producing profitable exports might instead be used for low-income domestic biofuel production. As of 2008, 43% of the Swedish primary energy supply comes from renewable sources, the largest share in any European Union country. In September 2015, the Swedish government announced its plan to drastically cut its reliance on fossil fuels by 2020. This plan also includes the goal of having the capital, Stockholm, 100% powered by renewable resources by 2050. Though the goal is to have the entire country run on renewable resources, no target date has yet been set. Ban of fossil fuel-driven vehicles In 2008, the Swedish political party Centerpartiet proposed banning gasoline-driven vehicles by 2025–2030. See also Climate change in Sweden Coal phase out Alternative propulsion Hydrogen fuel replacement in Iceland Renewable energy development Renewable energy in the European Union References External links Swedish Energy Agency Sweden's first biogas train Swedish Bioenergy Association The road to Sweden's oil-free future Towards an Oil Free Economy in Ireland: Lessons from the Swedish Commission for Oil Independence Report SWEDISH OIL LLC Bioenergy organizations Biofuel in Sweden Climate change in Sweden Economy of Sweden Energy in Sweden Environmental reports Fossil fuels in Sweden Petroleum politics Politics of Sweden Renewable energy policy Sustainability in Sweden Transport in Sweden Fossil fuel phase-out Environmental policy in the EU 2005 establishments in Sweden
Making Sweden an Oil-Free Society
[ "Chemistry" ]
1,073
[ "Petroleum", "Petroleum politics" ]
7,112,300
https://en.wikipedia.org/wiki/Barberpole%20illusion
The barberpole illusion is a visual illusion that reveals biases in the processing of visual motion in the human brain. When a diagonally striped pole is rotated around its vertical axis, the stripes appear to move in the direction of that axis (downwards in the case of the animation to the right) rather than around it. History In 1929, psychologist J.P. Guilford informally noted a paradox in the perceived motion of stripes on a rotating barber pole. The barber pole turns in place on its vertical axis, but the stripes appear to move upwards rather than turning with the pole. Guilford tentatively attributed the phenomenon to eye movements, but acknowledged the absence of data on the question. In 1935, Hans Wallach published a comprehensive series of experiments related to this topic, but since the article was in German it was not immediately known to English-speaking researchers. An English summary of the research was published in 1976, and a complete English translation of the 1935 paper was published by Sophie Wuerger, Robert Shapley, and Nava Rubin in 1996. Wallach's analysis focused on the interaction between the terminal points of the diagonal lines and the implicit aperture created by the edges of the pole. Explanation This illusion occurs because a bar or contour within a frame of reference provides ambiguous information about its "real" direction of movement: many different physical motions of the line are consistent with what is seen. The shape of the aperture thus tends to determine the perceived direction of motion for an otherwise identically moving contour. A vertically elongated aperture makes vertical motion dominant, whereas a horizontally elongated aperture makes horizontal motion dominant. In the case of a circular or square aperture, the perceived direction of movement is usually orthogonal to the orientation of the stripes (diagonal, in this case). The perceived direction of movement relates to the termination of the line's end points within the inside border of the occluder. The vertical aperture, for instance, has longer edges at the vertical orientation, creating a larger number of terminators unambiguously moving vertically. This stronger motion signal forces us to perceive vertical motion. Functionally, this mechanism has evolved to ensure that we perceive a moving pattern as a rigid surface moving in one direction. Individual motion-sensitive neurons in the visual system have only limited information, as each sees only a small portion of the visual field (a situation referred to as the "aperture problem"). In the absence of additional information, the visual system prefers the slowest possible motion: that is, motion orthogonal to the moving line. Neurons which may correspond to perceiving barber-pole-like patterns have been identified in the visual cortex of ferrets. Auditory analogue A similar effect occurs in the Shepard tone, which is an auditory illusion. See also Screw (simple machine) – screws convert rotational motion to linear motion and exhibit the same mechanic Motion perception Auditory illusion References Notes External links Barberpole effect animation and explanation. Optical illusions
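The preference for the slowest consistent motion mentioned above has a compact standard formulation. The following LaTeX fragment is a textbook-style sketch of the aperture-problem equation, not a formula taken from this article's cited sources.

```latex
% Aperture problem: through a small aperture, only the component of a
% contour's velocity perpendicular to the contour is measurable.
% Let \mathbf{v} be the true velocity and \mathbf{n} the unit normal
% to the contour; the measurable component is
\[
  \mathbf{v}_{\perp} \;=\; (\mathbf{v}\cdot\mathbf{n})\,\mathbf{n}.
\]
% Every velocity with the same normal component is consistent with the
% local measurement; \mathbf{v}_{\perp} is the slowest of them, which is
% why an extended grating appears to drift perpendicular to its stripes.
```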
Barberpole illusion
[ "Physics" ]
604
[ "Optical phenomena", "Physical phenomena", "Optical illusions" ]
7,113,411
https://en.wikipedia.org/wiki/Betulin
Betulin is an abundant, naturally occurring triterpene. It is commonly isolated from the bark of birch trees and forms up to 30% of the dry weight of silver birch bark. It is also found in birch sap, and the fungus Inonotus obliquus contains betulin as well. The compound gives the bark its white color, which appears to protect the tree from mid-winter overheating by the sun. As a result, birches are some of the northernmost-occurring deciduous trees. History Betulin was discovered in 1788 by the German-Russian chemist Johann Tobias Lowitz. Chemistry Chemically, betulin is a triterpenoid of lupane structure. It has a pentacyclic ring structure, and hydroxyl groups in positions C3 and C28. See also Abietic acid Stanol ester Phytosterols References Triterpenes Isopropenyl compounds Pentacyclic compounds Diols Primary alcohols Secondary alcohols
Betulin
[ "Chemistry" ]
201
[ "Isopropenyl compounds", "Functional groups" ]
7,113,672
https://en.wikipedia.org/wiki/Mobile%20DTV%20Alliance
The Mobile DTV Alliance is a marketing organization based in San Ramon, California that was founded in 2006 by a consortium of companies to promote open standards for mobile TV. Its goal is the rapid adoption of mobile TV technology via DVB-H and the furthering of the mobile TV experience in North America. The president of the Mobile DTV Alliance, Yoram Solomon, also sits on the boards of the WiMedia Alliance and the Wi-Fi Alliance. The Alliance was founded by Intel, Microsoft, Modeo, Motorola, Nokia and Texas Instruments. References Alliance formed to promote mobile TV, Financial Times, Jan 23, 2006 DTV Alliance Takes Mobile TV To The Masses, PC Magazine, Jan 24, 2006 Report: Consumers ready for mobile TV, Information Week, Jan 25, 2007 The other US mobile DTV alliance opens the door for a truce Consortia in the United States Mobile television 2006 establishments in California Organizations based in the San Francisco Bay Area San Ramon, California
Mobile DTV Alliance
[ "Technology" ]
193
[ "Mobile television" ]
7,113,944
https://en.wikipedia.org/wiki/Augmented%20cognition
Augmented cognition is an interdisciplinary area of psychology and engineering, attracting researchers from the more traditional fields of human-computer interaction, psychology, ergonomics and neuroscience. Augmented cognition research generally focuses on tasks and environments where human–computer interaction and interfaces already exist. Developers, leveraging the tools and findings of neuroscience, aim to develop applications which capture the human user's cognitive state in order to drive real-time computer systems. In doing so, these systems are able to provide operational data specifically targeted for the user in a given context. Three major areas of research in the field are: Cognitive State Assessment (CSA), Mitigation Strategies (MS), and Robust Controllers (RC). A subfield of the science, Augmented Social Cognition, endeavours to enhance the "ability of a group of people to remember, think, and reason." History In 1962 Douglas C. Engelbart released the report "Augmenting Human Intellect: A Conceptual Framework" which introduced, and laid the groundwork for, augmented cognition. In this paper, Engelbart defines "augmenting human intellect" as "increasing the capability of a man to approach a complex problem situation, to gain comprehension to suit his particular needs, and to derive solutions to problems." Modern augmented cognition began to emerge in the early 2000s. Advances in cognitive, behavioral, and neurological sciences during the 1990s set the stage for the emerging field of augmented cognition – this period has been termed the "Decade of the Brain." Major advancements in functional magnetic resonance imaging (fMRI) and electroencephalography (EEG) have been pivotal in the emergence of augmented cognition technologies which seek to monitor the user's cognitive abilities. As these tools were primarily used in controlled environments, their further development was essential to pragmatic augmented cognition applications. Research DARPA's Augmented Cognition Program The Defense Advanced Research Projects Agency (DARPA) has been one of the primary funding agencies for augmented cognition investigators. A major focus of DARPA's augmented cognition program (AugCog) has been developing more robust tools for monitoring cognitive state and integrating them with computer systems. The program envisions "order of magnitude increases in available, net thinking power resulting from linked human-machine dyads [that] will provide such clear informational superiority that few rational individuals or organizations would challenge under the consequences of mortality." The program began in 2001 and has since been renamed the Improving Warfighter Information Intake Under Stress Program. By leveraging such tools, the program seeks to provide warfighters with enhanced cognitive abilities, especially under complex or stressful war conditions. As of 2002, the program vision is divided into four phases: Phase 1: Real-time cognitive state detection Phase 2: Real-time cognitive state manipulation Phase 3: Autonomous cognitive state manipulation Phase 4: Operational demonstration and transition Proof of concept was carried out in two phases: near real-time monitoring of the user's cognitive activity, and subsequent manipulation of the user's cognitive state. Augmented Cognition International (ACI) Society The Augmented Cognition International (ACI) Society held its first conference in July 2005.
At the society's first conference, attendees from diverse backgrounds including academia, government, and industry came together to create an agenda for future research. The agenda focused on near-, medium-, and long-term research and development goals in key augmented cognition science and technology areas. The International Conference on Human Computer Interaction, where the society first established itself, continues to host the society's activities. Translation engines Thad Starner and the American Sign Language (ASL) Research Group at Georgia Tech have been researching systems for the recognition of ASL. Telesign, a one-way translation system from ASL to English, was shown to have a 94% accuracy rate on a vocabulary of 141 signs. Augmentation Factor Ron Fulbright proposed the augmentation factor (A+) as a measure of the degree to which a human is cognitively enhanced by working in collaborative partnership with an artificial cognitive system (cog). If WH is the cognitive work performed by the human in a human-machine dyad, and WC is the cognitive work done by the cog, then A+ = WC/WH. In situations where a human is working alone without assistance, WC = 0, resulting in A+ = 0, meaning the human is not cognitively augmented at all. In situations where the human does more cognitive work than the cog, A+ < 1. In situations where the cog does more cognitive work than the human, A+ > 1. As cognitive systems continue to advance, A+ will increase. In situations where a cog performs all cognitive work without the assistance of a human, WH = 0, leaving A+ undefined; attempting to calculate the augmentation factor is nonsensical since there is no human involved to be augmented. Human/Cog Ensembles Whereas DARPA's AugCog program focuses on human/machine dyads, it is possible for there to be more than one human and more than one artificial element involved. Human/Cog Ensembles involve one or more humans working with one or more cognitive systems (cogs). In a human/cog ensemble, the total amount of cognitive work performed by the ensemble, W*, is the sum of the cognitive work performed by each of the N humans in the ensemble plus the sum of the cognitive work performed by each of the M cognitive systems in the ensemble: W* = Σ WkH + Σ WkC, where the first sum runs over the N humans (k = 1, ..., N) and the second over the M cogs (k = 1, ..., M). A brief numerical sketch of these formulas is given at the end of this section. Controversy Privacy concerns The increasing sophistication of brain-reading technologies has led many to investigate their potential applications for lie detection. Legally required brain scans arguably violate "the guarantee against self-incrimination" because they differ from acceptable forms of bodily evidence, such as fingerprints or blood samples, in an important way: they are not simply physical, hard evidence, but evidence that is intimately linked to the defendant's mind. Under US law, brain-scanning technologies might also raise implications for the Fourth Amendment, calling into question whether they constitute an unreasonable search and seizure. Human augmentation Many of the same arguments in the debate around human enhancement can be analogized to augmented cognition. Economic inequality, for instance, may serve to exacerbate societal advantages and disadvantages due to the limited availability of such technologies. Fearing the potential applications of devices like Google Glass, certain gambling establishments (such as Caesar's Palace in Las Vegas) banned its use even before it was commercially available.
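The augmentation-factor and ensemble-work formulas above amount to simple arithmetic. The following Python sketch illustrates them with made-up work values; Fulbright's formulation does not prescribe units or a measurement method for cognitive work, so the numbers are purely illustrative.

```python
# Illustrative sketch of the augmentation factor A+ = WC / WH and the
# ensemble work W* described above. Work values are arbitrary example
# numbers; no units or measurement method are prescribed by the source.

def augmentation_factor(w_human: float, w_cog: float) -> float | None:
    """Return A+ = W_C / W_H, or None when no human work is involved."""
    if w_human == 0:
        return None  # cog working alone: augmentation is undefined
    return w_cog / w_human

def ensemble_work(human_work: list[float], cog_work: list[float]) -> float:
    """Return W* = sum of W_k^H over N humans + sum of W_k^C over M cogs."""
    return sum(human_work) + sum(cog_work)

print(augmentation_factor(10.0, 0.0))    # human working alone -> 0.0
print(augmentation_factor(10.0, 25.0))   # cog does more work -> 2.5 (A+ > 1)
print(augmentation_factor(0.0, 25.0))    # no human involved -> None
print(ensemble_work([10.0, 8.0], [25.0, 5.0, 3.0]))  # 2 humans, 3 cogs -> 51.0
```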
See also Augmented reality Intelligence amplification Neuroergonomics Human-computer interaction Dylan Schmorrow References Further reading Dylan Schmorrow, Ivy V. Estabrooke, Marc Grootjen: Foundations of Augmented Cognition. Neuroergonomics and Operational Neuroscience, 5th International Conference, FAC 2009 Held as Part of HCI International 2009 San Diego, CA, USA, July 19–24, 2009, Proceedings Springer 2009. Fuchs, Sven, Hale, Kelly S., Axelsson, Par, "Augmented Cognition can increase human performance in the control room," Human Factors and Power Plants and HPRCT 13th Annual Meeting, 2007 IEEE 8th, vol., no., pp. 128–132, 26–31 Aug. 2007 Neuroscience Ergonomics Human–computer interaction Cognition
Augmented cognition
[ "Engineering", "Biology" ]
1,498
[ "Human–computer interaction", "Neuroscience", "Human–machine interaction" ]
7,114,058
https://en.wikipedia.org/wiki/Boronic%20acid
A boronic acid is an organic compound related to boric acid (B(OH)3) in which one of the three hydroxyl groups (−OH) is replaced by an alkyl or aryl group (represented by R in the general formula RB(OH)2). As a compound containing a carbon–boron bond, members of this class thus belong to the larger class of organoboranes. Boronic acids act as Lewis acids. Their unique feature is that they are capable of forming reversible covalent complexes with sugars, amino acids, hydroxamic acids, etc. (molecules with vicinal, (1,2) or occasionally (1,3) substituted Lewis base donors (alcohol, amine, carboxylate)). The pKa of a boronic acid is ~9, but they can form tetrahedral boronate complexes with pKa ~7. They are occasionally used in the area of molecular recognition to bind to saccharides for fluorescent detection or selective transport of saccharides across membranes. Boronic acids are used extensively in organic chemistry as chemical building blocks and intermediates, predominantly in the Suzuki coupling. A key concept in its chemistry is transmetallation of its organic residue to a transition metal. The compound bortezomib with a boronic acid group is a drug used in chemotherapy. The boron atom in this molecule is a key substructure because through it certain proteasomes are blocked that would otherwise degrade proteins. Boronic acids are known to bind to active site serines and are part of inhibitors for porcine pancreatic lipase, subtilisin and the protease Kex2. Furthermore, boronic acid derivatives constitute a class of inhibitors for human acyl-protein thioesterase 1 and 2, which are cancer drug targets within the Ras cycle. Structure and synthesis In 1860, Edward Frankland was the first to report the preparation and isolation of a boronic acid. Ethylboronic acid was synthesized by a two-stage process. First, diethylzinc and triethyl borate reacted to produce triethylborane. This compound then oxidized in air to form ethylboronic acid. Several synthetic routes are now in common use, and many air-stable boronic acids are commercially available. Boronic acids typically have high melting points. They are prone to forming anhydrides by loss of water molecules, typically to give cyclic trimers. Synthesis Boronic acids can be obtained via several methods. The most common way is reaction of organometallic compounds based on lithium or magnesium (Grignards) with borate esters. For example, phenylboronic acid is produced from phenylmagnesium bromide and trimethyl borate followed by hydrolysis PhMgBr + B(OMe)3 → PhB(OMe)2 + MeOMgBr PhB(OMe)2 + 2 H2O → PhB(OH)2 + 2 MeOH Another method is reaction of an arylsilane (RSiR3) with boron tribromide (BBr3) in a transmetallation to RBBr2 followed by acidic hydrolysis. A third method is by palladium catalysed reaction of aryl halides and triflates with diboronyl esters in a coupling reaction known as the Miyaura borylation reaction. An alternative to esters in this method is the use of diboronic acid or tetrahydroxydiboron ([B(OH)2]2). Boronic esters (also named boronate esters) Boronic esters are esters formed between a boronic acid and an alcohol. The compounds can be obtained from borate esters by condensation with alcohols and diols. Phenylboronic acid can be self-condensed to the cyclic trimer called triphenyl anhydride or triphenylboroxin. Compounds with 5-membered cyclic structures containing the C–O–B–O–C linkage are called dioxaborolanes and those with 6-membered rings dioxaborinanes.
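The pKa values quoted above reflect the hydration equilibrium between the trigonal boronic acid and the tetrahedral boronate anion. The display below is a sketch of that standard textbook equilibrium, not a formula drawn from this article's references; R denotes the organic group.

```latex
% Acid--boronate equilibrium for a generic boronic acid RB(OH)2:
\[
  \mathrm{R{-}B(OH)_2} \;+\; 2\,\mathrm{H_2O}
  \;\rightleftharpoons\;
  \mathrm{R{-}B(OH)_3^{-}} \;+\; \mathrm{H_3O^{+}},
  \qquad \mathrm{p}K_{\mathrm{a}} \approx 9.
\]
% Binding of a 1,2-diol (e.g. a sugar) gives a cyclic boronate ester whose
% tetrahedral anionic form is favoured near the lower effective pKa (~7)
% mentioned in the text.
```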
Organic chemistry applications Suzuki coupling reaction Boronic acids are used in organic chemistry in the Suzuki reaction. In this reaction the boron atom exchanges its aryl group with an alkoxy group from palladium. Chan–Lam coupling In the Chan–Lam coupling the alkyl, alkenyl or aryl boronic acid reacts with an N–H or O–H containing compound with Cu(II) such as copper(II) acetate and oxygen and a base such as pyridine, forming a new carbon–nitrogen bond or carbon–oxygen bond, for example in the reaction of 2-pyridone with trans-1-hexenylboronic acid. The reaction mechanism sequence is deprotonation of the amine, coordination of the amine to the copper(II), transmetallation (transferring the alkyl boron group to copper and the copper acetate group to boron), oxidation of Cu(II) to Cu(III) by oxygen and finally reductive elimination of Cu(III) to Cu(I) with formation of the product. In catalytic systems oxygen also regenerates the Cu(II) catalyst. Liebeskind–Srogl coupling In the Liebeskind–Srogl coupling a thiol ester is coupled with a boronic acid to produce a ketone. Conjugate addition The boronic acid organic residue is a nucleophile in conjugate addition, also in conjunction with a metal. In one study the pinacol ester of allylboronic acid was reacted with dibenzylidene acetone in such a conjugate addition. The catalyst system in this reaction is tris(dibenzylideneacetone)dipalladium(0) / tricyclohexylphosphine. Another conjugate addition is that of gramine with phenylboronic acid catalyzed by cyclooctadiene rhodium chloride dimer. Oxidation Boronic esters are oxidized to the corresponding alcohols with base and hydrogen peroxide (for an example see: carbenoid). Homologation In boronic ester homologation an alkyl group shifts from boron in a boronate to carbon. In this reaction dichloromethyllithium converts the boronic ester into a boronate. A Lewis acid then induces a rearrangement of the alkyl group with displacement of the chlorine group. Finally an organometallic reagent such as a Grignard reagent displaces the second chlorine atom, effectively leading to insertion of an RCH2 group into the C-B bond. Another reaction featuring a boronate alkyl migration is the Petasis reaction. Electrophilic allyl shifts Allyl boronic esters engage in electrophilic allyl shifts very much like their silicon counterparts in the Sakurai reaction. In one study a diallylation reagent combined both. Hydrolysis Hydrolysis of boronic esters back to the boronic acid and the alcohol can be accomplished in certain systems with thionyl chloride and pyridine. Aryl boronic acids or esters may be hydrolyzed to the corresponding phenols by reaction with hydroxylamine at room temperature. C–H coupling reactions The diboron compound bis(pinacolato)diboron reacts with aromatic heterocycles or simple arenes to give an arylboronate ester, with the iridium catalyst [IrCl(COD)]2 (a modification of Crabtree's catalyst) and the base 4,4′-di-tert-butyl-2,2′-bipyridine, in a C-H coupling reaction, for example with benzene. In one modification the arene reacts using only a stoichiometric equivalent rather than a large excess, using the cheaper pinacolborane. Unlike in ordinary electrophilic aromatic substitution (EAS), where electronic effects dominate, the regioselectivity in this reaction type is solely determined by the steric bulk of the iridium complex.
This is exploited in a meta-bromination of m-xylene, which by standard EAS would give the ortho product. Protonolysis Protodeboronation is a chemical reaction involving the protonolysis of a boronic acid (or other organoborane compound) in which a carbon-boron bond is broken and replaced with a carbon-hydrogen bond. Protodeboronation is a well-known undesired side reaction, and is frequently associated with metal-catalysed coupling reactions that utilise boronic acids (see Suzuki reaction). For a given boronic acid, the propensity to undergo protodeboronation is highly variable and dependent on various factors, such as the reaction conditions employed and the organic substituent of the boronic acid. Supramolecular chemistry Saccharide recognition The covalent pair-wise interaction between boronic acids and hydroxy groups, as found in alcohols and acids, is rapid and reversible in aqueous solutions. The equilibrium established between boronic acids and the hydroxyl groups present on saccharides has been successfully employed to develop a range of sensors for saccharides. One of the key advantages of this dynamic covalent strategy lies in the ability of boronic acids to overcome the challenge of binding neutral species in aqueous media. If arranged correctly, the introduction of a tertiary amine within these supramolecular systems will permit binding to occur at physiological pH and allow signalling mechanisms such as photoinduced electron transfer mediated fluorescence emission to report the binding event. Potential applications for this research include blood glucose monitoring systems to help manage diabetes mellitus. As the sensors employ an optical response, monitoring could be achieved using minimally invasive methods; one such example is the investigation of a contact lens that contains a boronic acid based sensor molecule to detect glucose levels within ocular fluids. Safety Some commonly used boronic acids and their derivatives give a positive Ames test and act as chemical mutagens. The mechanism of mutagenicity is thought to involve the generation of organic radicals via oxidation of the boronic acid by atmospheric oxygen. Notes References External links Boronic acids database Functional groups
Boronic acid
[ "Chemistry" ]
2,205
[ "Functional groups" ]
7,114,714
https://en.wikipedia.org/wiki/Isostere
Classical isosteres are molecules or ions with similar shape and often similar electronic properties. Many definitions are available, but the term is usually employed in the context of bioactivity and drug development. A biologically active compound containing an isostere is called a bioisostere. This is frequently used in drug design: the bioisostere will still be recognized and accepted by the body, but its functions there will be altered as compared with the parent molecule. History and additional definitions Non-classical isosteres do not obey the above classifications, but they still produce similar biological effects in vivo. Non-classical isosteres may be made up of similar atoms, but their structures do not follow an easily definable set of rules. The isostere concept was formulated by Irving Langmuir in 1919, and later modified by Grimm. Hans Erlenmeyer extended the concept to biological systems in 1932. Classical isosteres were originally defined as atoms, ions and molecules that have identical outer shells of electrons. This definition has since been broadened to include groups that produce compounds which can sometimes have similar biological activities. Some evidence for the validity of this notion was the observation that pairs among compounds such as benzene, thiophene, furan, and even pyridine exhibited similarities in many physical and chemical properties. References Theoretical chemistry Drug discovery
Isostere
[ "Chemistry", "Biology" ]
277
[ "Life sciences industry", "Drug discovery", "Theoretical chemistry", "nan", "Medicinal chemistry" ]
7,115,286
https://en.wikipedia.org/wiki/Asymmetric%20cell%20division
An asymmetric cell division produces two daughter cells with different cellular fates. This is in contrast to symmetric cell divisions, which give rise to daughter cells of equivalent fates. Notably, stem cells divide asymmetrically to give rise to two distinct daughter cells: one copy of the original stem cell as well as a second daughter programmed to differentiate into a non-stem cell fate. (In times of growth or regeneration, stem cells can also divide symmetrically, to produce two identical copies of the original cell.) In principle, there are two mechanisms by which distinct properties may be conferred on the daughters of a dividing cell. In one, the daughter cells are initially equivalent but a difference is induced by signaling between the cells, from surrounding cells, or from the precursor cell. This mechanism is known as extrinsic asymmetric cell division. In the second mechanism, the prospective daughter cells are inherently different at the time of division of the mother cell. Because this latter mechanism does not depend on interactions of cells with each other or with their environment, it must rely on intrinsic asymmetry. The term asymmetric cell division usually refers to such intrinsic asymmetric divisions. Intrinsic asymmetry In order for asymmetric division to take place, the mother cell must be polarized and the mitotic spindle must be aligned with the axis of polarity. The cell biology of these events has been most studied in three animal models: the mouse, the nematode Caenorhabditis elegans, and the fruit fly Drosophila melanogaster. A later focus has been on development in spiralians. In C. elegans development In C. elegans, a series of asymmetric cell divisions in the early embryo is critical in setting up the anterior/posterior, dorsal/ventral, and left/right axes of the body plan. After fertilization, events are already occurring in the zygote to allow for the first asymmetric cell division. This first division produces two distinctly different blastomeres, termed AB and P1. When the sperm cell fertilizes the egg cell, the sperm pronucleus and centrosomes are deposited within the egg, which causes a cytoplasmic flux resulting in the movement of the pronucleus and centrosomes towards one pole. The centrosomes deposited by the sperm are responsible for the establishment of the posterior pole within the zygote. Sperm with mutant or absent centrosomes fail to establish a posterior pole. The establishment of this polarity initiates the polarized distribution of the PAR proteins (partitioning defective), a conserved group of proteins present in the zygote that function in establishing cell polarity during development. These proteins are initially distributed uniformly throughout the zygote and then become polarized with the creation of the posterior pole. This series of events allows the single-celled zygote to obtain polarity through an unequal distribution of multiple factors. The single cell is now set up to undergo an asymmetric cell division; however, the orientation in which the division occurs is also an important factor. The mitotic spindle must be oriented correctly to ensure that the proper cell fate determinants are distributed appropriately to the daughter cells. The alignment of the spindle is mediated by the PAR proteins, which regulate the positioning of the centrosomes along the A/P axis as well as the movement of the mitotic spindle along the A/P axis.
Following this first asymmetric division, the AB daughter cell divides symmetrically, giving rise to ABa and ABp, while the P1 daughter cell undergoes another asymmetric cell division to produce P2 and EMS. This division is also dependent on the distribution of the PAR proteins. In Drosophila neural development In Drosophila melanogaster, asymmetric cell division plays an important role in neural development. Neuroblasts are the progenitor cells which divide asymmetrically to give rise to another neuroblast and a ganglion mother cell (GMC). The neuroblast repeatedly undergoes this asymmetric cell division while the GMC continues on to produce a pair of neurons. Two proteins play an important role in setting up this cell fate asymmetry in the neuroblast: Prospero and Numb. These proteins are both synthesized in the neuroblast and segregate only into the GMC during divisions. Numb is a suppressor of Notch; therefore, the asymmetric segregation of Numb to the basal cortex biases the response of the daughter cells to Notch signaling, resulting in two distinct cell fates. Prospero is required for gene regulation in GMCs. It is equally distributed throughout the neuroblast cytoplasm, but becomes localized at the basal cortex when the neuroblast starts to undergo mitosis. Once the GMC buds off from the basal cortex, Prospero is translocated into the GMC nucleus to act as a transcription factor. Other proteins present in the neuroblast mediate the asymmetric localization of Numb and Prospero. Miranda is an anchoring protein that binds to Prospero and keeps it in the basal cortex. Following the generation of the GMC, Miranda releases Prospero and is then degraded. The segregation of Numb is mediated by Pon (the Partner of Numb protein). Pon binds to Numb and colocalizes with it during neuroblast cell division. The mitotic spindle must also align parallel to the asymmetrically distributed cell fate determinants to allow them to become segregated into one daughter cell and not the other. The mitotic spindle orientation is mediated by Inscuteable, which is segregated to the apical cortex of the neuroblast. Without the presence of Inscuteable, the positioning of the mitotic spindle and the cell fate determinants in relationship to each other becomes randomized. Inscuteable mutants display a uniform distribution of Miranda and Numb at the cortex, and the resulting daughter cells display identical neuronal fates. In addition to having separate fates, the two daughter cells differ in size: the resulting neuroblast is much larger than the GMC. However, unlike the segregation of fate determinants, the asymmetric cell division that gives rise to cell size asymmetry is spindle-independent. The mechanism instead relies on the spatial and temporal organization of myosin on the cell cortex and its upstream components. Apical localization of Pins (Partner of Inscuteable) by Inscuteable allows Pins-dependent apical Protein Kinase N (Pkn) localization during metaphase. Pkn inhibits Rho-kinase (Rok), resulting in the timely loss of myosin and Rok from the apical cortex at anaphase onset. The apical myosin flows basally to where the cleavage furrow is positioned. Subsequently, the proteins Tum and Pav at the central spindle recruit myosin to increase myosin concentration, generating a myosin gradient to drive apical myosin flow from the basal cortex.
This spatiotemporal control of myosin localization results in the asymmetric loss of cortical tension that normally pushes against hydrostatic pressure. In other words, the loss of apical cortical myosin allows hydrostatic pressure to push against the apical cell membrane, increasing the size of the apical region that is bound to become the larger neuroblast after cell division. Generation of apical and basal myosin flows simultaneously results in symmetric cell division, and delaying of basal myosin flows prevents normal expansion of the basal region of the dividing cell. Although this mechanism is spindle-independent, the spindle is important for setting up the cleavage furrow position, for bringing myosin to the cleavage furrow, and for driving basal myosin clearing. Actomyosin-based cortical flows direct a reorganization of the plasma membrane and cell cortex of the neuroblast, which is needed to generate the size difference between daughter cells. Early in mitosis, cortical flows collect membrane folds and protrusions around the apical pole, forming a polarized membrane reservoir. As myosin clears from the apical cortex and cleavage furrow ingression causes hydrostatic pressure to increase, the stores of membrane within the reservoir are used to expand the apical region, which becomes the larger daughter cell after division. In spiralian development Spiralia (commonly synonymous with lophotrochozoa) represent a diverse clade of animals whose species comprise the bulk of the bilaterian animals present today. Examples include mollusks, annelid worms, and the entoprocta. Although much is known at the cellular and molecular level about the other bilaterian clades (ecdysozoa and deuterostomia), research into the processes that govern spiralian development is comparatively lacking. However, one unifying feature shared among spiralia is the pattern of cleavage in the early embryo known as spiral cleavage. Mechanisms of asymmetric division: Tubifex tubifex: The sludge worm Tubifex tubifex exhibits a distinctive asymmetric cell division at the point of first embryonic cleavage. Unlike the classic idea of cortical differences at the zygotic membrane that determine spindle asymmetry in the C. elegans embryo, the first cleavage in tubifex relies on the number of centrosomes. Embryos inherit a single centrosome, which localizes in the prospective larger CD cell cytoplasm and emits radial microtubules during anaphase that contribute to both the mitotic spindle and cortical asters. However, the microtubule organizing center of the prospective smaller AB cell emits only microtubules that commit to the mitotic spindle, and no cortex-bound asters. When embryos are compressed or deformed, asymmetric spindles still form, and staining for gamma tubulin reveals that the second microtubule organizing center lacks the molecular signature of a centrosome. Furthermore, when centrosome number is doubled, tubifex embryos cleave symmetrically, suggesting this monoastral mechanism of asymmetric cell division is centrosome-dependent. Helobdella robusta: The leech Helobdella robusta exhibits an asymmetry in the first embryonic division similar to that of C. elegans and tubifex, but relies on a modified mechanism. Compression experiments on the robusta embryo do not affect asymmetric division, suggesting the mechanism, like that of tubifex, uses a cortex-independent molecular pathway.
In robusta, antibody staining reveals that the mitotic spindle forms symmetrically until metaphase and stems from two biastral centrosomes. At the onset of metaphase, asymmetry becomes apparent as the centrosome of the prospective larger CD cell lengthens cortical asters while the asters of the prospective smaller AB cell become downregulated. Experiments using nocodazole and taxol support this observation. Taxol, which stabilizes microtubules, forced a significant number of embryos to cleave symmetrically when used at a moderate concentration. Moreover, embryos treated with nocodazole, which sequesters tubulin dimers and promotes microtubule depolymerization, were similarly forced into symmetric division in a significant number of cases. Treatment with either drug at these concentrations fails to disrupt normal centrosome dynamics, suggesting that a balance of microtubule polymerization and depolymerization represents another mechanism for establishing asymmetric cell division in spiralian development. Ilyanassa obsoleta: A third, less traditional mechanism contributing to asymmetric cell division in spiralian development has been discovered in the mollusk Ilyanassa obsoleta. In situ hybridization and immunofluorescence experiments show that mRNA transcripts co-localize with centrosomes during early cleavage. Consequently, these transcripts are inherited in a stereotypical fashion by distinct cells. All of the mRNA transcripts followed have been implicated in body axis patterning, and in situ hybridization for transcripts associated with other functions fails to exhibit such a localization. Moreover, disruption of microtubule polymerization with nocodazole, and of actin polymerization with cytochalasin B, shows the cytoskeleton is also important in this asymmetry. It appears that microtubules are not required to recruit the mRNA to the centrosome, and that actin is required to attach the centrosome to the cortex. Finally, introducing multiple centrosomes into one cell by inhibiting cytokinesis shows that mRNA dependably localizes on the correct centrosome, suggesting intrinsic differences in the composition of each centrosome. It is important to note that these results reflect experiments performed after the first two divisions, yet they still demonstrate a different molecular means of establishing asymmetry in a dividing cell. In stem cells and progenitors Animals are made up of a vast number of distinct cell types. During development, the zygote undergoes many cell divisions that give rise to various cell types, including embryonic stem cells. Asymmetric divisions of these embryonic cells give rise to one cell of the same potency (self-renewal) and another that may be of the same potency or may be stimulated to differentiate further into specialized cell types such as neurons. This stimulated differentiation arises from many factors, which can be divided into two broad categories: intrinsic and extrinsic. Intrinsic factors generally involve differing amounts of cell-fate determinants being distributed into each daughter cell. Extrinsic factors involve interactions with neighboring cells and the micro and macro environment of the precursor cell. In addition to the aforementioned Drosophila neuronal example, it has been proposed that the macrosensory organs of Drosophila, specifically the glial cells, also arise from a similar set of asymmetric divisions from a single progenitor cell via regulation of the Notch signaling pathway and transcription factors.
An example of how extrinsic factors bring about this phenomenon is the physical displacement of one of the daughter cells out of the original stem cell niche, exposing it to signalling molecules such as chondroitin sulfate. In this manner, the daughter cell is forced to interact with the heavily sulfated molecules, which stimulate it to differentiate while the other daughter cell remains in the original niche in a quiescent state. Role in disease In normal stem and progenitor cells, asymmetric cell division balances proliferation and self-renewal with cell-cycle exit and differentiation. Disruption of asymmetric cell division leads to aberrant self-renewal and impairs differentiation, and could therefore constitute an early step in the tumorigenic transformation of stem and progenitor cells. In normal non-tumor stem cells, a number of genes responsible for pluripotency have been described, such as Bmi-1, Wnt and Notch. These genes have also been found in cancer stem cells, where their aberrant expression is essential for the formation of the tumor cell mass. For example, it has been shown that gastrointestinal cancers contain a rare subpopulation of cancer stem cells capable of dividing asymmetrically. The asymmetric division in these cells is regulated by the cancer niche (microenvironment) and the Wnt pathway. Blocking the Wnt pathway with IWP2 (a WNT antagonist) or siRNA-TCF4 resulted in strong suppression of asymmetric cell division. Loss-of-function mutations in regulators of asymmetric cell division are also involved in tumor growth. The first suggestion that loss of asymmetric cell division might be involved in tumorigenesis came from studies of Drosophila. Studies of loss-of-function mutations in key regulators of asymmetric cell division, including lgl, aurA, polo, numb and brat, revealed hyperproliferative phenotypes in situ. In these mutants, cells divide more symmetrically and generate mis-specified progeny that fail to exit the cell cycle and differentiate, but rather proliferate continuously and form a tumor cell mass. References Further reading Developmental biology Articles containing video clips
Asymmetric cell division
[ "Biology" ]
3,366
[ "Behavior", "Developmental biology", "Reproduction" ]
7,115,718
https://en.wikipedia.org/wiki/Ball-and-stick%20model
In chemistry, the ball-and-stick model is a molecular model of a chemical substance which displays both the three-dimensional position of the atoms and the bonds between them. The atoms are typically represented by spheres, connected by rods which represent the bonds. Double and triple bonds are usually represented by two or three curved rods, respectively, or alternatively by correctly positioned sticks for the sigma and pi bonds. In a good model, the angles between the rods should be the same as the angles between the bonds, and the distances between the centers of the spheres should be proportional to the distances between the corresponding atomic nuclei. The chemical element of each atom is often indicated by the sphere's color. In a ball-and-stick model, the radius of the spheres is usually much smaller than the rod lengths, in order to provide a clearer view of the atoms and bonds throughout the model. As a consequence, the model does not provide clear insight into the space occupied by the molecule. In this aspect, ball-and-stick models are distinct from space-filling (calotte) models, where the sphere radii are proportional to the Van der Waals atomic radii in the same scale as the atom distances, and therefore show the occupied space but not the bonds. Ball-and-stick models can be physical artifacts or virtual computer models. The former are usually built from molecular modeling kits, consisting of a number of coil springs or plastic or wood sticks, and a number of plastic balls with pre-drilled holes. The sphere colors commonly follow the CPK coloring. Some university courses on chemistry require students to buy such models as learning material. History In 1865, German chemist August Wilhelm von Hofmann was the first to make ball-and-stick molecular models. He used such models in lectures at the Royal Institution of Great Britain. Specialist companies manufacture kits and models to order. One of the earlier companies was Woosters at Bottisham, Cambridgeshire, UK. Besides tetrahedral, trigonal and octahedral holes, there were all-purpose balls with 24 holes. These models allowed rotation about the single rod bonds, which could be both an advantage (showing molecular flexibility) and a disadvantage (models are floppy). The approximate scale was 5 cm per ångström (0.5 m/nm or 500,000,000:1), but was not consistent over all elements. The Beevers Miniature Models company in Edinburgh (now operating as Miramodus) produced small models beginning in 1961 using PMMA balls and stainless steel rods. In these models, the use of individually drilled balls with precise bond angles and bond lengths enabled large crystal structures to be accurately created in a light and rigid form. See also VSEPR theory References Molecular modelling
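To make the quoted scale concrete, the short Python sketch below converts typical covalent bond lengths into rod lengths at the 5 cm per ångström scale mentioned above; the bond lengths are standard textbook values, included here only for illustration.

SCALE_CM_PER_ANGSTROM = 5.0  # the approximate scale quoted above (500,000,000:1)

def rod_length_cm(bond_length_angstrom: float) -> float:
    # Length of the model rod representing a bond of the given real length.
    return bond_length_angstrom * SCALE_CM_PER_ANGSTROM

for bond, angstrom in [("C-H", 1.09), ("C-C", 1.54), ("C=C", 1.34), ("C=O", 1.23)]:
    print(f"{bond}: {angstrom} angstrom -> {rod_length_cm(angstrom):.1f} cm")

At this scale a carbon-carbon single bond becomes a rod of roughly 7.7 cm, which explains why even modest molecules make for sizeable physical models.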
Ball-and-stick model
[ "Chemistry" ]
560
[ "Theoretical chemistry", "Molecular modelling", "Molecular physics" ]
7,115,965
https://en.wikipedia.org/wiki/IETF%20language%20tag
An IETF BCP 47 language tag is a standardized code that is used to identify human languages on the Internet. The tag structure has been standardized by the Internet Engineering Task Force (IETF) in Best Current Practice (BCP) 47; the subtags are maintained by the IANA Language Subtag Registry. To distinguish language variants for countries, regions, or writing systems (scripts), IETF language tags combine subtags from other standards such as ISO 639, ISO 15924, ISO 3166-1 and UN M.49. For example, the tag en stands for English; es-419 for Latin American Spanish; rm-sursilv for Romansh Sursilvan; sr-Cyrl for Serbian written in Cyrillic script; nan-Hant-TW for Min Nan Chinese using traditional Han characters, as spoken in Taiwan; yue-Hant-HK for Cantonese using traditional Han characters, as spoken in Hong Kong; and gsw-u-sd-chzh for Zürich German. It is used by computing standards such as HTTP, HTML, XML and PNG. History IETF language tags were first defined in RFC 1766, edited by Harald Tveit Alvestrand, published in March 1995. The tags used ISO 639 two-letter language codes and ISO 3166 two-letter country codes, and allowed registration of whole tags that included variant or script subtags of three to eight letters. In January 2001, this was updated by RFC 3066, which added the use of ISO 639-2 three-letter codes, permitted subtags with digits, and adopted the concept of language ranges from HTTP/1.1 to help with matching of language tags. The next revision of the specification came in September 2006 with the publication of RFC 4646 (the main part of the specification), edited by Addison Phillips and Mark Davis, and RFC 4647 (which deals with matching behaviour). RFC 4646 introduced a more structured format for language tags, added the use of ISO 15924 four-letter script codes and UN M.49 three-digit geographical region codes, and replaced the old registry of tags with a new registry of subtags. The small number of previously defined tags that did not conform to the new structure were grandfathered in order to maintain compatibility with RFC 3066. The current version of the specification, RFC 5646, was published in September 2009. The main purpose of this revision was to incorporate three-letter codes from ISO 639-3 and 639-5 into the Language Subtag Registry, in order to increase the interoperability between ISO 639 and BCP 47. Syntax of language tags Each language tag is composed of one or more "subtags" separated by hyphens (-). Each subtag is composed of basic Latin letters or digits only. With the exceptions of private-use language tags beginning with an x- prefix and grandfathered language tags (including those starting with an i- prefix and those previously registered in the old Language Tag Registry), subtags occur in the following order: A single primary language subtag based on a two-letter language code from ISO 639-1 (2002) or a three-letter code from ISO 639-2 (1998), ISO 639-3 (2007) or ISO 639-5 (2008), or registered through the BCP 47 process and composed of five to eight letters; Up to three optional extended language subtags composed of three letters each, separated by hyphens; (There is currently no extended language subtag registered in the Language Subtag Registry without an equivalent and preferred primary language subtag. This component of language tags is preserved for backwards compatibility and to allow for future parts of ISO 639.)
An optional script subtag, based on a four-letter script code from ISO 15924 (usually written in Title Case); An optional region subtag based on a two-letter country code from ISO 3166-1 alpha-2 (usually written in upper case), or a three-digit code from UN M.49 for geographical regions; Optional variant subtags, separated by hyphens, each composed of five to eight letters, or of four characters starting with a digit; (Variant subtags are registered with IANA and not associated with any external standard.) Optional extension subtags, separated by hyphens, each composed of a single character, with the exception of the letter x, and a hyphen followed by one or more subtags of two to eight characters each, separated by hyphens; An optional private-use subtag, composed of the letter x and a hyphen followed by subtags of one to eight characters each, separated by hyphens. Subtags are not case-sensitive, but the specification recommends using the same case as in the Language Subtag Registry, where region subtags are UPPERCASE, script subtags are Title Case, and all other subtags are lowercase. This capitalization follows the recommendations of the underlying ISO standards. It is preferable to omit optional script and region subtags when they add no distinguishing information to a language tag. For example, es is preferred over es-Latn, as Spanish is fully expected to be written in the Latin script; ja is preferred over ja-JP, as Japanese as used in Japan does not differ markedly from Japanese as used elsewhere. Not all linguistic regions can be represented with a valid region subtag: the subnational regional dialects of a primary language are registered as variant subtags. For example, the valencia variant subtag for the Valencian variant of Catalan is registered in the Language Subtag Registry with the prefix ca. As this dialect is spoken almost exclusively in Spain, the region subtag ES can normally be omitted. Furthermore, there are script tags that do not refer to traditional scripts such as Latin, or even to scripts at all; these usually begin with a Z. For example, Zsye refers to emojis, Zmth to mathematical notation, Zxxx to unwritten documents and Zyyy to undetermined scripts. IETF language tags have been used as locale identifiers in many applications. It may be necessary for these applications to establish their own strategy for defining, encoding and matching locales if the strategy described in RFC 4647 is not adequate. The use, interpretation and matching of IETF language tags is currently defined in RFC 5646 and RFC 4647. The Language Subtag Registry lists all currently valid public subtags. Private-use subtags are not included in the Registry as they are implementation-dependent and subject to private agreements between third parties using them. These private agreements are out of scope of BCP 47. List of common primary language subtags The following is a list of some of the more commonly used primary language subtags. The list represents only a small subset (less than 2 percent) of primary language subtags; for full information, the Language Subtag Registry should be consulted directly. Relation to other standards Although some types of subtags are derived from ISO or UN core standards, they do not follow these standards absolutely, as this could lead to the meaning of language tags changing over time.
In particular, a subtag derived from a code assigned by ISO 639, ISO 15924, ISO 3166, or UN M49 remains a valid (though deprecated) subtag even if the code is withdrawn from the corresponding core standard. If the standard later assigns a new meaning to the withdrawn code, the corresponding subtag will still retain its old meaning. This stability was introduced in RFC 4646. ISO 639-3 and ISO 639-1 RFC 4646 defined the concept of an "extended language subtag" (sometimes referred to as extlang), although no such subtags were registered at that time. RFC 5645 and RFC 5646 added primary language subtags corresponding to ISO 639-3 codes for all languages that did not already exist in the Registry. In addition, codes for languages encompassed by certain macrolanguages were registered as extended language subtags. Sign languages were also registered as extlangs, with the prefix sgn. These languages may be represented either with the subtag for the encompassed language alone (cmn for Mandarin) or with a language-extlang combination (zh-cmn). The first option is preferred for most purposes. The second option is called "extlang form" and is new in RFC 5646. Whole tags that were registered prior to RFC 4646 and are now classified as "grandfathered" or "redundant" (depending on whether they fit the new syntax) are deprecated in favor of the corresponding ISO 639-3–based language subtag, if one exists. To list a few examples, nan is preferred over zh-min-nan for Min Nan Chinese; hak is preferred over i-hak and zh-hakka for Hakka Chinese; and ase is preferred over sgn-US for American Sign Language. Windows Vista and later versions of Microsoft Windows have RFC 4646 support. ISO 639-5 and ISO 639-1/2 ISO 639-5 defines language collections with alpha-3 codes in a different way than they were initially encoded in ISO 639-2 (including one code already present in ISO 639-1, Bihari coded inclusively as bh in ISO 639-1 and bih in ISO 639-2). Specifically, the language collections are now all defined in ISO 639-5 as inclusive, rather than some of them being defined exclusively. This means that language collections have a broader scope than before, in some cases where they could encompass languages that were already encoded separately within ISO 639-2. For example, the ISO 639-2 code afa was previously associated with the name "Afro-Asiatic (Other)", excluding languages such as Arabic that already had their own code. In ISO 639-5, this collection is named "Afro-Asiatic languages" and includes all such languages. ISO 639-2 changed the exclusive names in 2009 to match the inclusive ISO 639-5 names. To avoid breaking implementations that may still depend on the older (exclusive) definition of these collections, ISO 639-5 defines a grouping type attribute for all collections that were already encoded in ISO 639-2 (such grouping type is not defined for the new collections added only in ISO 639-5). BCP 47 defines a "Scope" property to identify subtags for language collections. However, it does not define any given collection as inclusive or exclusive, and does not use the ISO 639-5 grouping type attribute, although the description fields in the Language Subtag Registry for these subtags match the ISO 639-5 (inclusive) names. As a consequence, BCP 47 language tags that include a primary language subtag for a collection may be ambiguous as to whether the collection is intended to be inclusive or exclusive. 
ISO 639-5 does not define precisely which languages are members of these collections; only the hierarchical classification of collections is defined, using the inclusive definition of these collections. Because of this, RFC 5646 does not recommend the use of subtags for language collections for most applications, although they are still preferred over subtags whose meaning is even less specific, such as "Multiple languages" and "Undetermined". In contrast, the classification of individual languages within their macrolanguage is standardized, in both ISO 639-3 and the Language Subtag Registry. ISO 15924, ISO/IEC 10646 and Unicode Script subtags were first added to the Language Subtag Registry when RFC 4646 was published, from the list of codes defined in ISO 15924. They are encoded in the language tag after primary and extended language subtags, but before other types of subtag, including region and variant subtags. Some primary language subtags are defined with a property named "Suppress-Script" which indicates the cases where a single script can usually be assumed by default for the language, even if it can be written with another script. When this is the case, it is preferable to omit the script subtag, to improve the likelihood of successful matching. A different script subtag can still be appended to make the distinction when necessary. For example, yi is preferred over yi-Hebr in most contexts, because the Hebrew script subtag is assumed for the Yiddish language. As another example, zh-Hans-SG may be considered equivalent to zh-Hans, because the region code is probably not significant; the written form of Chinese used in Singapore uses the same simplified Chinese characters as in other countries where Chinese is written. However, the script subtag is maintained because it is significant. ISO 15924 includes some codes for script variants (for example, Hans and Hant for simplified and traditional forms of Chinese characters) that are unified within Unicode and ISO/IEC 10646. These script variants are most often encoded for bibliographic purposes, but are not always significant from a linguistic point of view (for example, Latf and Latg script codes for the Fraktur and Gaelic variants of the Latin script, which are mostly encoded with regular Latin letters in Unicode and ISO/IEC 10646). They may occasionally be useful in language tags to expose orthographic or semantic differences, with different analysis of letters, diacritics, and digraphs/trigraphs as default grapheme clusters, or differences in letter casing rules. ISO 3166-1 and UN M.49 Two-letter region subtags are based on codes assigned, or "exceptionally reserved", in ISO 3166-1. If the ISO 3166 Maintenance Agency were to reassign a code that had previously been assigned to a different country, the existing BCP 47 subtag corresponding to that code would retain its meaning, and a new region subtag based on UN M.49 would be registered for the new country. UN M.49 is also the source for numeric region subtags for geographical regions, such as 005 for South America. The UN M.49 codes for economic regions are not allowed. Region subtags are used to specify the variety of a language "as used in" a particular region. They are appropriate when the variety is regional in nature, and can be captured adequately by identifying the countries involved, as when distinguishing British English (en-GB) from American English (en-US). 
When the difference is one of script or script variety, as for simplified versus traditional Chinese characters, it should be expressed with a script subtag instead of a region subtag; in this example, zh-Hans and zh-Hant should be used instead of zh-CN/zh-SG/zh-MY and zh-TW/zh-HK/zh-MO. When a distinct language subtag exists for a language that could be considered a regional variety, it is often preferable to use the more specific subtag instead of a language-region combination. For example, ar-DZ (Arabic as used in Algeria) may be better expressed as arq for Algerian Spoken Arabic. Adherence to core standards Disagreements about language identification may extend to BCP 47 and to the core standards that inform it. For example, some speakers of Punjabi believe that the ISO 639-3 distinction between [pan] "Panjabi" and [pnb] "Western Panjabi" is spurious (i.e. they feel the two are the same language); that sub-varieties of the Arabic script should be encoded separately in ISO 15924 (as, for example, the Fraktur and Gaelic styles of the Latin script are); and that BCP 47 should reflect these views and/or overrule the core standards with regard to them. BCP 47 delegates this type of judgment to the core standards, and does not attempt to overrule or supersede them. Variant subtags and (theoretically) primary language subtags may be registered individually, but not in a way that contradicts the core standards. Extensions Extension subtags (not to be confused with extended language subtags) allow additional information to be attached to a language tag that does not necessarily serve to identify a language. One use for extensions is to encode locale information, such as calendar and currency. Extension subtags are composed of multiple hyphen-separated character strings, starting with a single character (other than x), called a singleton. Each extension is described in its own IETF RFC, which identifies a Registration Authority to manage the data for that extension. IANA is responsible for allocating singletons. Two extensions have been assigned as of January 2014. Extension T (Transformed Content) Extension T allows a language tag to include information on how the tagged data was transliterated, transcribed, or otherwise transformed. For example, the tag en-t-jp could be used for content in English that was translated from the original Japanese. Additional substrings could indicate that the translation was done mechanically, or in accordance with a published standard. Extension T is described in the informational RFC 6497, published in February 2012. The Registration Authority is the Unicode Consortium. Extension U (Unicode Locale) Extension U allows a wide variety of locale attributes found in the Common Locale Data Repository (CLDR) to be embedded in language tags. These attributes include country subdivisions, calendar and time zone data, collation order, currency, number system, and keyboard identification. Some examples include: gsw-u-sd-chzh represents Swiss German as used in the Canton of Zurich. ar-u-nu-latn represents Arabic-language content using Basic Latin digits (0 through 9) instead of Arabic-script digits (٠ through ٩). he-IL-u-ca-hebrew-tz-jeruslm represents Hebrew as spoken in Israel, using the traditional Hebrew calendar, and in the "Asia/Jerusalem" time zone as identified in the tz database. Extension U is described in the informational RFC 6067, published in December 2010. The Registration Authority is the Unicode Consortium. 
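The subtag shapes and conventions described in this article are regular enough to illustrate in code. The following Python sketch is illustrative only, not a conforming implementation: it applies the recommended capitalization based purely on subtag position and shape, canonicalizes a few of the deprecated tags mentioned above from a small hard-coded table (a real implementation would read these mappings from the IANA Language Subtag Registry), and splits off the sections introduced by extension and private-use singletons. It performs no validation against the Registry and does not handle grandfathered tags.

def normalize_tag(tag: str) -> str:
    # Recommended BCP 47 casing: region subtags UPPERCASE, script subtags
    # Title Case, everything else lowercase. Subtags following a singleton
    # (extension or private-use sections) are left lowercase.
    result, after_singleton = [], False
    for i, sub in enumerate(tag.split("-")):
        if i == 0 or after_singleton:
            result.append(sub.lower())
        elif len(sub) == 1:
            after_singleton = True
            result.append(sub.lower())
        elif len(sub) == 2 and sub.isalpha():
            result.append(sub.upper())   # region, e.g. "GB"
        elif len(sub) == 4 and sub.isalpha():
            result.append(sub.title())   # script, e.g. "Cyrl"
        else:
            result.append(sub.lower())   # language, extlang, variant, digits
    return "-".join(result)

# A few of the preferred-form substitutions mentioned above, hard-coded
# purely for illustration.
PREFERRED = {"zh-min-nan": "nan", "i-hak": "hak", "zh-hakka": "hak",
             "sgn-us": "ase", "zh-cmn": "cmn"}

def canonicalize(tag: str) -> str:
    return PREFERRED.get(tag.lower(), normalize_tag(tag))

def split_extensions(tag: str):
    # Split a tag into its base subtags and the sections introduced by
    # each singleton ("x" introduces the private-use section).
    base, sections, current = [], {}, None
    for i, sub in enumerate(tag.lower().split("-")):
        if len(sub) == 1 and i > 0:
            current = sub
            sections[current] = []
        elif current is not None:
            sections[current].append(sub)
        else:
            base.append(sub)
    return "-".join(base), sections

assert normalize_tag("sr-cyrl") == "sr-Cyrl"
assert normalize_tag("ES-419") == "es-419"
assert canonicalize("zh-min-nan") == "nan"
assert split_extensions("he-IL-u-ca-hebrew-tz-jeruslm") == ("he-il", {"u": ["ca", "hebrew", "tz", "jeruslm"]})

For production use, libraries that consume the Language Subtag Registry and CLDR data directly should be preferred over hand-rolled parsing of this kind.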
See also Codes for constructed languages Internationalization and localization Locale (computer software) References External links BCP 47 Language Tags – the current specification comprises two RFCs published separately at different dates but concatenated in a single document: RFC 4647 – Matching of Language Tags RFC 5646 – Tags for Identifying Languages (also referencing the related informational RFC 5645, which complements the previous informational RFC 4645, as well as other individual registration forms published separately by others for each language added or modified in the Registry between these BCP 47 revisions) Language Subtag Registry – maintained by IANA Language Subtag Registry Search – find subtags and view entries in the Registry "Language tags in HTML and XML" – from the W3C "Language Tags" – from the IETF Language Tag Registry Update working group Internet properties established in 1995 Internet governance Request for Comments ISO standards Language identifiers Unique identifiers Internationalization and localization
IETF language tag
[ "Technology" ]
3,937
[ "Natural language and computing", "Internationalization and localization" ]
7,116,046
https://en.wikipedia.org/wiki/Animal%20rights
Animal rights is the philosophy according to which many or all sentient animals have moral worth independent of their utility to humans, and that their most basic interests—such as avoiding suffering—should be afforded the same consideration as similar interests of human beings. The argument from marginal cases is often used to reach this conclusion. This argument holds that if marginal human beings such as infants, senile people, and the cognitively disabled are granted moral status and negative rights, then nonhuman animals must be granted the same moral consideration, since animals do not lack any known morally relevant characteristic that marginal-case humans have. Broadly speaking, and particularly in popular discourse, the term "animal rights" is often used synonymously with "animal protection" or "animal liberation". More narrowly, "animal rights" refers to the idea that many animals have fundamental rights to be treated with respect as individuals—rights to life, liberty, and freedom from torture—that may not be overridden by considerations of aggregate welfare. Many animal rights advocates oppose assigning moral value and fundamental protections on the basis of species membership alone. They consider this idea, known as speciesism, a prejudice as irrational as any other, and hold that animals should not be considered property or used as food, clothing, entertainment, or beasts of burden merely because they are not human. Cultural traditions such as Jainism, Taoism, Hinduism, Buddhism, Shinto, and animism also espouse varying forms of animal rights. In parallel to the debate about moral rights, North American law schools now often teach animal law, and several legal scholars, such as Steven M. Wise and Gary L. Francione, support extending basic legal rights and personhood to nonhuman animals. The animals most often considered in arguments for personhood are hominids. Some animal-rights academics support this because it would break the species barrier, but others oppose it because it predicates moral value on mental complexity rather than sentience alone. Bans on hominoid experimentation have been enacted in 29 countries; Argentina granted captive orangutans basic human rights in 2014. Outside of primates, animal-rights discussions most often address the status of mammals (compare charismatic megafauna). Other animals (considered less sentient) have gained less attention—insects relatively little (outside Jainism) and animal-like bacteria hardly any. The vast majority of animals have no legally recognised rights. Critics of animal rights argue that nonhuman animals are unable to enter into a social contract, and thus cannot have rights, a view summarised by the philosopher Roger Scruton, who writes that only humans have duties, and therefore only humans have rights. Another argument, associated with the utilitarian tradition, maintains that animals may be used as resources so long as there is no unnecessary suffering; animals may have some moral standing, but any interests they have may be overridden in cases of comparatively greater gains to aggregate welfare made possible by their use, though what counts as "necessary" suffering or a legitimate sacrifice of interests can vary considerably. Certain forms of animal-rights activism, such as the destruction of fur farms and of animal laboratories by the Animal Liberation Front, have attracted criticism, including from within the animal-rights movement itself, and prompted the U.S.
Congress to enact laws, including the Animal Enterprise Terrorism Act, allowing the prosecution of this sort of activity as terrorism. History The concept of moral rights for animals dates to Ancient India, with roots in early Jain and Hindu history, while Eastern, African, and Indigenous peoples also have rich traditions of animal protection. In the Western world, Aristotle viewed animals as lacking reason and existing for human use, though other ancient philosophers believed animals deserved gentle treatment. Major religious traditions, chiefly Indian or Dharmic religions, opposed animal cruelty. While scholars like Descartes saw animals as unconscious automata, and Kant denied direct duties to animals, Jeremy Bentham emphasized their capacity to suffer. The publications of Charles Darwin eventually eroded the Cartesian view of animals. Darwin noted the mental and emotional continuity between humans and animals, suggesting the possibility of animal suffering. The anti-vivisection movement emerged in the late 19th and early 20th centuries, driven significantly by women. From the 1970s onward, growing scholarly and activist interest in animal treatment has aimed to raise awareness and reform laws to improve animal rights and human–animal relationships. In religion For some, the basis of animal rights is in religion or animal worship (or in general nature worship), with some religions banning the killing of any animal. In other religions, animals are considered unclean. Hindu and Buddhist societies abandoned animal sacrifice and embraced vegetarianism from the 3rd century BCE. One of the most important sanctions of the Jain, Hindu, and Buddhist faiths is the concept of ahimsa, or refraining from the destruction of life. According to Buddhism, humans do not deserve preferential treatment over other living beings. The Dharmic interpretation of this doctrine prohibits the killing of any living being. These Indian religions' dharmic beliefs are reflected in the ancient Indian works of the Tolkāppiyam and Tirukkural, which contain passages that extend the idea of nonviolence to all living beings. In Islam, animal rights were recognized early by the Sharia. This recognition is based on both the Qur'an and the Hadith. The Qur'an contains many references to animals, detailing that they have souls, form communities, communicate with God, and worship Him in their own way. Muhammad forbade his followers to harm any animal and asked them to respect animals' rights. Nevertheless, Islam does allow the eating of certain species of animals. According to Christianity, all animals, from the smallest to the largest, are cared for and loved. According to the Bible, "All these animals waited for the Lord, that the Lord might give them food at the hour. The Lord gives them, they receive; The Lord opens his hand, and they are filled with good things." It further says God "gave food to the animals, and made the crows cry." Philosophical and legal approaches Overview The two main philosophical approaches to animal ethics are utilitarian and rights-based. The former is exemplified by Peter Singer, and the latter by Tom Regan and Gary Francione. Their differences reflect a distinction philosophers draw between ethical theories that judge the rightness of an act by its consequences (consequentialism/teleological ethics, or utilitarianism), and those that focus on the principle behind the act, almost regardless of consequences (deontological ethics).
Deontologists argue that there are acts we should never perform, even if failing to do so entails a worse outcome. There are a number of positions that can be defended from a consequentialist or deontological perspective, including the capabilities approach, represented by Martha Nussbaum, and the egalitarian approach, which has been examined by Ingmar Persson and Peter Vallentyne. The capabilities approach focuses on what individuals require to fulfill their capabilities: Nussbaum (2006) argues that animals need a right to life, some control over their environment, company, play, and physical health. Stephen R. L. Clark, Mary Midgley, and Bernard Rollin also discuss animal rights in terms of animals being permitted to lead a life appropriate for their kind. Egalitarianism favors an equal distribution of happiness among all individuals, which makes the interests of the worse off more important than those of the better off. Another approach, virtue ethics, holds that in considering how to act we should consider the character of the actor, and what kind of moral agents we should be. Rosalind Hursthouse has suggested an approach to animal rights based on virtue ethics. Mark Rowlands has proposed a contractarian approach. Utilitarianism Nussbaum (2004) writes that utilitarianism, starting with Jeremy Bentham and John Stuart Mill, has contributed more to the recognition of the moral status of animals than any other ethical theory. The utilitarian philosopher most associated with animal rights is Peter Singer, professor of bioethics at Princeton University. Singer is not a rights theorist but a preference utilitarian, meaning that he judges the rightness of an act by the extent to which it satisfies the preferences (interests) of those affected. His position is that there is no reason not to give equal consideration to the interests of humans and nonhumans, though his principle of equality does not require identical treatment. A mouse and a man both have an interest in not being kicked, and there are no moral or logical grounds for failing to accord those interests equal weight. Interests are predicated on the ability to suffer, nothing more, and once it is established that a being has interests, those interests must be given equal consideration. Singer quotes the English philosopher Henry Sidgwick (1838–1900): "The good of any one individual is of no more importance, from the point of view ... of the Universe, than the good of any other." Singer argues that equality of consideration is a prescription, not an assertion of fact: if the equality of the sexes were based only on the idea that men and women were equally intelligent, we would have to abandon the practice of equal consideration if this were later found to be false. But the moral idea of equality does not depend on matters of fact such as intelligence, physical strength, or moral capacity. Equality therefore cannot be grounded on the outcome of scientific investigations into the intelligence of nonhumans. All that matters is whether they can suffer. Commentators on all sides of the debate now accept that animals suffer and feel pain, although it was not always so. Bernard Rollin, professor of philosophy, animal sciences, and biomedical sciences at Colorado State University, writes that Descartes's influence continued to be felt until the 1980s.
Veterinarians trained in the US before 1989 were taught to ignore pain, he writes, and at least one major veterinary hospital in the 1960s did not stock narcotic analgesics for animal pain control. In his interactions with scientists, he was often asked to "prove" that animals are conscious, and to provide "scientifically acceptable" evidence that they could feel pain. Scientific publications have made it clear since the 1980s that the majority of researchers do believe animals suffer and feel pain, though it continues to be argued that their suffering may be reduced by an inability to experience the same dread of anticipation as humans or to remember the suffering as vividly. The ability of animals to suffer, even if it varies in severity, is the basis for Singer's application of equal consideration. The problem of animal suffering, and animal consciousness in general, arose primarily because it was argued that animals have no language. Singer writes that, if language were needed to communicate pain, it would often be impossible to know when humans are in pain, though we can observe pain behavior and make a calculated guess based on it. He argues that there is no reason to suppose that the pain behavior of nonhumans would have a different meaning from the pain behavior of humans. Subjects-of-a-life Tom Regan, professor emeritus of philosophy at North Carolina State University, argues in The Case for Animal Rights (1983) that nonhuman animals are what he calls "subjects-of-a-life", and as such are bearers of rights. He writes that, because the moral rights of humans are based on their possession of certain cognitive abilities, and because these abilities are also possessed by at least some nonhuman animals, such animals must have the same moral rights as humans. Although only humans act as moral agents, both marginal-case humans, such as infants, and at least some nonhumans must have the status of "moral patients". Moral patients are unable to formulate moral principles, and as such are unable to do right or wrong, even though what they do may be beneficial or harmful. Only moral agents are able to engage in moral action. Animals for Regan have "intrinsic value" as subjects-of-a-life, and cannot be regarded as a means to an end, a view that places him firmly in the abolitionist camp. His theory does not extend to all animals, but only to those that can be regarded as subjects-of-a-life; he argues that all normal mammals of at least one year of age would qualify. Whereas Singer is primarily concerned with improving the treatment of animals and accepts that, in some hypothetical scenarios, individual animals might be used legitimately to further human or nonhuman ends, Regan believes we ought to treat nonhuman animals as we would humans. He applies the strict Kantian ideal (which Kant himself applied only to humans) that they ought never to be sacrificed as a means to an end, and must be treated as ends in themselves. Abolitionism Gary Francione, professor of law and philosophy at Rutgers Law School in Newark, is a leading abolitionist writer, arguing that animals need only one right, the right not to be owned. Everything else would follow from that paradigm shift. He writes that, although most people would condemn the mistreatment of animals, and in many countries there are laws that seem to reflect those concerns, "in practice the legal system allows any use of animals, however abhorrent." The law only requires that any suffering not be "unnecessary".
In deciding what counts as "unnecessary", an animal's interests are weighed against the interests of human beings, and the latter almost always prevail. Francione's Animals, Property, and the Law (1995) was the first extensive jurisprudential treatment of animal rights. In it, Francione compares the situation of animals to the treatment of slaves in the United States, where legislation existed that appeared to protect them while the courts ignored that the institution of slavery itself rendered the protection unenforceable. He offers the United States Animal Welfare Act as an example of symbolic legislation, intended to assuage public concern about the treatment of animals, but difficult to implement. He argues that a focus on animal welfare, rather than animal rights, may worsen the position of animals by making the public feel comfortable about using them and entrenching the view of them as property. He calls animal rights groups that pursue animal welfare issues, such as People for the Ethical Treatment of Animals, the "new welfarists", arguing that they have more in common with 19th-century animal protectionists than with the animal rights movement; indeed, the terms "animal protection" and "protectionism" are increasingly favored. His position in 1996 was that there is no animal rights movement in the United States. Contractarianism Mark Rowlands, professor of philosophy at the University of Florida, has proposed a contractarian approach, based on the original position and the veil of ignorance—a "state of nature" thought experiment that tests intuitions about justice and fairness—in John Rawls's A Theory of Justice (1971). In the original position, individuals choose principles of justice (what kind of society to form, and how primary social goods will be distributed), unaware of their individual characteristics—their race, sex, class, or intelligence, whether they are able-bodied or disabled, rich or poor—and therefore unaware of which role they will assume in the society they are about to form. The idea is that, operating behind the veil of ignorance, they will choose a social contract in which there is basic fairness and justice for them no matter the position they occupy. Rawls did not include species membership as one of the attributes hidden from the decision-makers in the original position. Rowlands proposes extending the veil of ignorance to include rationality, which he argues is an undeserved property similar to characteristics such as race, sex and intelligence. Prima facie rights theory American philosopher Timothy Garry has proposed an approach that deems nonhuman animals worthy of prima facie rights. In a philosophical context, a prima facie (Latin for "on the face of it" or "at first glance") right is one that appears to be applicable at first glance, but upon closer examination may be outweighed by other considerations. In his book Ethics: A Pluralistic Approach to Moral Theory, Lawrence Hinman characterizes such rights as "the right is real but leaves open the question of whether it is applicable and overriding in a particular situation". The idea that nonhuman animals are worthy of prima facie rights is to say that, in a sense, animals have rights that can be overridden by many other considerations, especially those conflicting with a human's right to life, liberty, property, and the pursuit of happiness.
In sum, Garry suggests that humans have obligations to nonhuman animals, but that animals do not, and ought not to, have uninfringible rights against humans. Feminism and animal rights Women have played a central role in animal advocacy since the 19th century. The anti-vivisection movement in the 19th and early 20th century in England and the United States was largely run by women, including Frances Power Cobbe, Anna Kingsford, Lizzy Lind af Hageby and Caroline Earle White (1833–1916). Garner writes that 70 per cent of the membership of the Victoria Street Society (one of the anti-vivisection groups founded by Cobbe) were women, as were 70 per cent of the membership of the British RSPCA in 1900. The modern animal advocacy movement has a similar representation of women. They are not invariably in leadership positions: during the March for Animals in Washington, D.C., in 1990—the largest animal rights demonstration held until then in the United States—most of the participants were women, but most of the platform speakers were men. Nevertheless, several influential animal advocacy groups have been founded by women, including the British Union for the Abolition of Vivisection by Cobbe in London in 1898; the Animal Welfare Board of India by Rukmini Devi Arundale in 1962; and People for the Ethical Treatment of Animals, co-founded by Ingrid Newkirk in 1980. In the Netherlands, Marianne Thieme and Esther Ouwehand were elected to parliament in 2006 representing the Party for the Animals. The preponderance of women in the movement has led to a body of academic literature exploring feminism and animal rights, such as feminism and vegetarianism or veganism, the oppression of women and animals, and the male association of women and animals with nature and emotion, rather than reason—an association that several feminist writers have embraced. Lori Gruen writes that women and animals serve the same symbolic function in a patriarchal society: both are "the used"; the dominated, submissive "Other". When the British feminist Mary Wollstonecraft (1759–1797) published A Vindication of the Rights of Woman (1792), Thomas Taylor (1758–1835), a Cambridge philosopher, responded with an anonymous parody, A Vindication of the Rights of Brutes (1792), saying that Wollstonecraft's arguments for women's rights could be applied equally to animals, a position he intended as a reductio ad absurdum. In her works The Sexual Politics of Meat: A Feminist-Vegetarian Critical Theory (1990) and The Pornography of Meat (2004), Carol J. Adams focuses in particular on what she argues are the links between the oppression of women and that of non-human animals. Transhumanism Some transhumanists argue for animal rights, liberation, and "uplift" of animal consciousness into machines. Transhumanism also understands animal rights on a gradation or spectrum with other types of sentient rights, including human rights and the rights of conscious artificial intelligences (posthuman rights). Socialism and anti-capitalism According to sociologist David Nibert of Wittenberg University, the struggle for animal liberation must happen in tandem with a more generalized struggle against human oppression and exploitation under global capitalism.
He says that under a more egalitarian democratic socialist system, one that would "allow a more just and peaceful order to emerge" and be "characterized by economic democracy and a democratically controlled state and mass media", there would be "much greater potential to inform the public about vital global issues", and the potential for "campaigns to improve the lives of other animals" to be "more abolitionist in nature". Philosopher Steven Best of the University of Texas at El Paso states that the animal liberation movement, as characterized by the Animal Liberation Front and its various offshoots, "is a significant threat to global capital." Critics R. G. Frey R. G. Frey, professor of philosophy at Bowling Green State University, is a preference utilitarian. In his early work, Interests and Rights (1980), Frey disagreed with Singer—who wrote in Animal Liberation (1975) that the interests of nonhuman animals must be given equal consideration when judging the consequences of an act—on the grounds that animals have no interests. Frey argues that interests are dependent on desire, and that no desire can exist without a corresponding belief. Animals have no beliefs, because a belief state requires the ability to hold a second-order belief—a belief about the belief—which he argues requires language: "If someone were to say, e.g. 'The cat believes that the door is locked,' then that person is holding, as I see it, that the cat holds the declarative sentence 'The door is locked' to be true; and I can see no reason whatever for crediting the cat or any other creature which lacks language, including human infants, with entertaining declarative sentences." Carl Cohen Carl Cohen, professor of philosophy at the University of Michigan, argues that rights holders must be able to distinguish between their own interests and what is right. "The holders of rights must have the capacity to comprehend rules of duty governing all, including themselves. In applying such rules, [they] ... must recognize possible conflicts between what is in their own interest and what is just. Only in a community of beings capable of self-restricting moral judgments can the concept of a right be correctly invoked." Cohen rejects Singer's argument that, since a brain-damaged human could not make moral judgments, moral judgments cannot be used as the distinguishing characteristic for determining who is awarded rights. Cohen writes that the test for moral judgment "is not a test to be administered to humans one by one", but should be applied to the capacity of members of the species in general. Richard Posner Judge Richard Posner of the United States Court of Appeals for the Seventh Circuit debated the issue of animal rights in 2001 with Peter Singer. Posner posits that his moral intuition tells him "that human beings prefer their own. If a dog threatens a human infant, even if it requires causing more pain to the dog to stop it than the dog would have caused to the infant, then we favour the child. It would be monstrous to spare the dog." Singer challenges this by arguing that formerly unequal rights for gays, women, and certain races were justified using the same set of intuitions. Posner replies that equality in civil rights did not occur because of ethical arguments, but because facts mounted that there were no morally significant differences between humans based on race, sex, or sexual orientation that would support inequality.
If and when similar facts emerge about humans and animals, the differences in rights will erode too. But facts will drive equality, not ethical arguments that run contrary to instinct, he argues. Posner calls his approach "soft utilitarianism", in contrast to Singer's "hard utilitarianism". Roger Scruton Roger Scruton, the British philosopher, argued that rights imply obligations. Every legal privilege, he wrote, imposes a burden on the one who does not possess that privilege: that is, "your right may be my duty." Scruton therefore regarded the emergence of the animal rights movement as "the strangest cultural shift within the liberal worldview", because the idea of rights and responsibilities is, he argued, distinctive to the human condition, and it makes no sense to spread them beyond our own species. He accused animal rights advocates of "pre-scientific" anthropomorphism, attributing traits to animals that are, he says, Beatrix Potter-like, where "only man is vile." It is within this fiction that the appeal of animal rights lies, he argued. The world of animals is non-judgmental, filled with dogs who return our affection almost no matter what we do to them, and cats who pretend to be affectionate when, in fact, they care only about themselves. It is, he argued, a fantasy, a world of escape. Scruton singled out Peter Singer, a prominent Australian philosopher and animal-rights activist, for criticism. He wrote that Singer's works, including Animal Liberation, "contain little or no philosophical argument. They derive their radical moral conclusions from a vacuous utilitarianism that counts the pain and pleasure of all living things as equally significant and that ignores just about everything that has been said in our philosophical tradition about the real distinction between persons and animals." Tom Regan countered this view of rights by distinguishing between moral agents and moral patients. Public attitudes According to a 2000 paper by Harold Herzog and Lorna Dorr, previous academic surveys of attitudes toward animal rights tended to have small sample sizes and non-representative groups. However, a number of factors appear to correlate with people's attitudes about the treatment of animals and animal rights. These include gender, age, occupation, religion, and level of education. There is also evidence suggesting that experience with pets may be a factor in people's attitudes. According to some studies, women are more likely to empathize with the cause of animal rights than men. A 1996 study suggested that factors that may partially explain this discrepancy include attitudes towards feminism and science, scientific literacy, and the presence of a greater emphasis on "nurturance or compassion" among women. A common misconception about animal rights is that its proponents want to grant nonhuman animals the same legal rights as humans, such as the right to vote. This is false. Rather, the idea is that animals should have rights that accord with their interests (for example, cats have no interest in voting, and so should not have the right to vote). A 2016 study found that support for animal testing may not be based on cogent philosophical rationales and that more open debate is warranted.
A 2007 survey that examined whether people who believe in evolution are more likely to support animal rights than creationists and believers in intelligent design found that this was largely the case; according to the researchers, strong Christian fundamentalists and believers in creationism were less likely to advocate for animal rights than those who were less fundamentalist in their beliefs. The findings extended previous research, such as a 1992 study that found that 48% of animal rights activists were atheists or agnostic. A 2019 Washington Post study found that those with favorable attitudes toward animal rights also tend to have favorable views of universal healthcare; reducing discrimination against African Americans, the LGBT community, and undocumented immigrants; and expanding welfare to aid the poor. Two surveys found that attitudes toward animal rights tactics, such as direct action, are very diverse within the animal rights communities. Nearly half (50% and 39% in two surveys) of activists do not support direct action. One survey concluded, "it would be a mistake to portray animal rights activists as homogeneous." Even though around 90% of U.S. adults regularly consume meat, almost half of them appear to support a ban on slaughterhouses: in Sentience Institute's 2017 survey of 1,094 U.S. adults' attitudes toward animal farming, 49% "support a ban on factory farming, 47% support a ban on slaughterhouses, and 33% support a ban on animal farming". The 2017 survey was replicated by researchers at Oklahoma State University, who found similar results: 73% of respondents answered "yes" to the question "Were you aware that slaughterhouses are where livestock are killed and processed into meat, such that, without them, you would not be able to consume meat?" In the U.S., the National Farmers Organization held many public protest slaughters in the late 1960s and early 1970s. Protesting low prices for meat, farmers killed their animals in front of media representatives. The carcasses were wasted and not eaten. This effort backfired because it angered people to see animals needlessly and wastefully killed. See also Animal cognition Animal consciousness Animal–industrial complex Animal liberation Animal liberation movement Animal liberationist Animal rights by country or territory Animal studies Animal suffering Animal trial Animal Welfare Institute Antinaturalism (politics) Cambridge Declaration on Consciousness Chick culling Cruelty to animals Critical animal studies Deep ecology Do Animals Have Rights? (book) List of animal rights advocates List of songs about animal rights Moral circle expansion Non-human electoral candidate Open rescue Plant rights Sentientism Timeline of animal welfare and rights Wild animal suffering World Animal Day References Bibliography Books and papers are cited in short form in the footnotes, with full citations here. News and other sources are cited in full in the footnotes. Benthall, Jonathan (2007). "Animal liberation and rights", Anthropology Today, volume 23, issue 2, April. Bentham, Jeremy (1781). Principles of Penal Law. Beauchamp, Tom (2009). "The Moral Standing of Animals", in Marc Bekoff. Encyclopedia of Animal Rights and Animal Welfare. Greenwood. Beauchamp, Tom (2011a). "Introduction," in Tom Beauchamp and R.G. Frey (eds.). The Oxford Handbook of Animal Ethics. Oxford University Press. Beauchamp, Tom (2011b). "Rights Theory and Animal Rights," in Beauchamp and Frey, op cit. Clark, Stephen R. L. (1977). The Moral Status of Animals.
Oxford University Press. Cohen, Carl (1986). "The Case for the Use of Animals in Biomedical Research", New England Journal of Medicine, vol. 315, issue 14, October, pp. 865–870. Cohen, Carl and Regan, Tom (2001). The Animal Rights Debate. Rowman & Littlefield. Craig, Edward (ed.) (1998). "Deontological Ethics" and "Consequentialism." Routledge Encyclopedia of Philosophy. DeGrazia, David (2002). Animal Rights: A Very Short Introduction. Oxford University Press. Donovan, Josephine (1993). "Animal Rights and Feminist Theory," in Greta Gaard. Ecofeminism: Women, Animals, Nature. Temple University Press. Francione, Gary (1996). Rain Without Thunder: The Ideology of the Animal Rights Movement. Temple University Press. Francione, Gary (1995). Animals, Property, and the Law. Temple University Press. Francione, Gary (2008). Animals as Persons. Columbia University Press. Francione, Gary and Garner, Robert (2010). The Animal Rights Debate: Abolition Or Regulation? Columbia University Press. Fellenz, Mark R. (2007). The Moral Menagerie: Philosophy and Animal Rights. University of Illinois Press. Frey, R.G. (1980). Interests and Rights: The Case against Animals. Clarendon Press. Frey, R.G. (1989). "Why Animals Lack Beliefs and Desires," in Peter Singer and Tom Regan (eds.). Animal Rights and Human Obligations. Prentice Hall. Garner, Robert (2004). Animals, Politics and Morality. Manchester University Press. Garner, Robert (2005). The Political Theory of Animal Rights. Manchester University Press. Giannelli, Michael A. (1985). "Three Blind Mice, See How They Run: A Critique of Behavioral Research With Animals". In M.W. Fox & L.D. Mickley (eds.), Advances in Animal Welfare Science 1985/1986 (pp. 109–164). Washington, DC: The Humane Society of the United States. Gruen, Lori (1993). "Dismantling Oppression: An Analysis of the Connection Between Women and Animals", in Greta Gaard. Ecofeminism: Women, Animals, Nature. Temple University Press. Griffin, Donald (1984). Animal Thinking. Harvard University Press. Horta, Oscar (2010). "What Is Speciesism?", Journal of Agricultural and Environmental Ethics, Vol. 23, No. 3, June, pp. 243–266. Hursthouse, Rosalind (2000a). On Virtue Ethics. Oxford University Press. Hursthouse, Rosalind (2000b). Ethics, Humans and Other Animals. Routledge. Jakopovich, Daniel (2021). "The UK's Animal Welfare (Sentience) Bill Excludes the Vast Majority of Animals: Why We Must Expand Our Moral Circle to Include Invertebrates", Animals & Society Research Initiative, University of Victoria, Canada. Kant, Immanuel (1785). Groundwork of the Metaphysic of Morals. Kean, Hilda (1995). "The 'Smooth Cool Men of Science': The Feminist and Socialist Response to Vivisection", History Workshop Journal, No. 40 (Autumn), pp. 16–38. Lansbury, Coral (1985). The Old Brown Dog: Women, Workers, and Vivisection in Edwardian England. University of Wisconsin Press. Legge, Debbi and Brooman, Simon (1997). Law Relating to Animals. Cavendish Publishing. Leneman, Leah (1999). "No Animal Food: The Road to Veganism in Britain, 1909–1944," Society and Animals, 7, 1–5. Locke, John (1693). Some Thoughts Concerning Education. MacKinnon, Catharine A. (2004). "Of Mice and Men," in Nussbaum and Sunstein, op cit. Mason, Peter (1997). The Brown Dog Affair. Two Sevens Publishing. Midgley, Mary (1984). Animals and Why They Matter. University of Georgia Press. Molland, Neil (2004). "Thirty Years of Direct Action" in Best and Nocella, op cit. Monaghan, Rachael (2000). "Terrorism in the Name of Animal Rights," in Taylor, Maxwell and Horgan, John. The Future of Terrorism. Routledge. Murray, L. (2006). "The ASPCA–Pioneers in Animal Welfare", Encyclopædia Britannica's Advocacy for Animals. Nash, Roderick (1989). The Rights of Nature: A History of Environmental Ethics. University of Wisconsin Press. Newkirk, Ingrid (2004). "The ALF: Who, Why, and What?", in Steven Best and Anthony Nocella (eds.). Terrorists or Freedom Fighters? Reflections on the Liberation of Animals. Lantern, 2004. Nussbaum, Martha (2004). "Beyond Compassion and Humanity: Justice for Nonhuman Animals", in Cass Sunstein and Martha Nussbaum (eds.). Animal Rights: Current Debates and New Directions. Oxford University Press. Nussbaum, Martha (2006). Frontiers of Justice: Disability, Nationality, Species Membership. Belknap Press. Posner, Richard and Singer, Peter (June 15, 2001). Posner-Singer debate, Slate. Posner, Richard and Singer, Peter (2004). "Animal rights" in Sunstein and Nussbaum, op cit. Rachels, James (2009). "Darwin, Charles," in Bekoff, op cit. Redclift, Michael R. (2010). The International Handbook of Environmental Sociology. Edward Elgar Publishing. Regan, Tom (1983). The Case for Animal Rights. University of California Press. Regan, Tom (2001). Defending Animal Rights. University of Illinois Press. Rollin, Bernard (1981). Animal Rights and Human Morality. Prometheus Books. Rollin, Bernard (1989). The Unheeded Cry: Animal Consciousness, Animal Pain, and Science. New York: Oxford University Press. Rollin, Bernard (2007). "Animal research: a moral science", EMBO Reports, vol. 8, no. 6, pp. 521–525. Rowlands, Mark (2009) [1998]. Animal Rights: A Defense. Palgrave Macmillan. Ryder, Richard (2000) [1989]. Animal Revolution: Changing Attitudes Towards Speciesism. Berg. Sapontzis, Steve (1985). "Moral Community and Animal Rights", American Philosophical Quarterly, Vol. 22, No. 3 (July), pp. 251–257. Scruton, Roger (1998). Animal Rights and Wrongs. Claridge Press. Scruton, Roger (2000). "Animal Rights", City Journal, Summer. Singer, Peter (April 5, 1973). "Animal liberation", The New York Review of Books, Volume 20, Number 5. Singer, Peter (1990) [1975]. Animal Liberation. New York Review Books. Singer, Peter (2000) [1998]. Ethics into Action: Henry Spira and the Animal Rights Movement. Rowman & Littlefield Publishers, Inc. Singer, Peter (2003). "Animal liberation at 30", The New York Review of Books, vol. 50, no. 8, May 15. Singer, Peter (2004). "Ethics Beyond Species and Beyond Instincts," in Sunstein and Nussbaum, op cit. Singer, Peter (2011) [1979]. Practical Ethics. Cambridge University Press. Sprigge, T.L.S. (1981). "Interests and Rights: The Case against Animals", Journal of Medical Ethics, June, 7(2): 95–102. Stamp Dawkins, Marian (1980). Animal Suffering: The Science of Animal Welfare. Chapman and Hall. Stucki, Saskia (2020). "Towards a Theory of Legal Animal Rights: Simple and Fundamental Rights", Oxford Journal of Legal Studies 40: 533–560. Sunstein, Cass R. (2004). "Introduction: What are Animal Rights?" in Sunstein and Nussbaum, op cit. Sunstein, Cass R. and Nussbaum, Martha (2005). Animal Rights: Current Debates and New Directions. Oxford University Press. Taylor, Angus (2009). Animals and Ethics: An Overview of the Philosophical Debate. Broadview Press. Taylor, Thomas (1792). "A Vindication of the Rights of Brutes," in Craciun, Adriana (2002). A Routledge Literary Sourcebook on Mary Wollstonecraft's A Vindication of the Rights of Woman. Routledge.
Vallentyne, Peter (2005). "Of Mice and Men: Equality and Animals", The Journal of Ethics, Vol. 9, No. 3/4, pp. 403–433. Vallentyne, Peter (2007). "Of Mice and Men: Equality and Animals" in Nils Holtug and Kasper Lippert-Rasmussen (eds.) (2007). Egalitarianism: New Essays on the Nature and Value of Equality. Oxford University Press. Walker, Stephen (1983). Animal Thoughts. Routledge. Weir, Jack (2009). "Virtue Ethics," in Marc Bekoff. Encyclopedia of Animal Rights and Animal Welfare. Greenwood. Williams, Erin E. and DeMello, Margo (2007). Why Animals Matter. Prometheus Books. Wise, Steven M. (2000). Rattling the Cage: Toward Legal Rights for Animals. Da Capo Press. Wise, Steven M. (2002). Drawing the Line: Science and the Case for Animal Rights. Perseus. Wise, Steven M. (2004). "Animal Rights, One Step at a Time," in Sunstein and Nussbaum, op cit. Wise, Steven M. (2007). "Animal Rights", Encyclopædia Britannica. Further reading Lubinski, Joseph (2002). "Overview Summary of Animal Rights", The Animal Legal and Historical Center at Michigan State University College of Law. "Great Apes and the Law", The Animal Legal and Historical Center at Michigan State University College of Law. Bekoff, Marc (ed.) (2009). The Encyclopedia of Animal Rights and Animal Welfare. Greenwood. Best, Steven and Nocella II, Anthony J. (eds.) (2004). Terrorists or Freedom Fighters? Reflections on the Liberation of Animals. Lantern Books. Chapouthier, Georges and Nouët, Jean-Claude (eds.) (1998). The Universal Declaration of Animal Rights. Ligue Française des Droits de l'Animal. Dawkins, Richard (1993). "Gaps in the mind", in Cavalieri, Paola and Singer, Peter (eds.). The Great Ape Project. St. Martin's Griffin. Dombrowski, Daniel (1997). Babies and Beasts: The Argument from Marginal Cases. University of Illinois Press. Finlayson, Lorna, "Let them eat oysters" (review of Peter Singer, Animal Liberation Now, Penguin, 2023, 368 pp; and Martha Nussbaum, Justice for Animals, Simon & Schuster, 2023, 372 pp.), London Review of Books, vol. 45, no. 19 (5 October 2023), pp. 3, 5–8. The question of animal rights has been approached from a variety of theoretical orientations, including utilitarianism and the capabilities approach ("CA") – none of them satisfactory to reviewer Lorna Finlayson, who teaches philosophy at England's University of Essex and ends up suggesting "think[ing] politically [and pragmatically] about animals": "It ought to be – it is – possible to arrange society differently" (p. 8). Foltz, Richard (2006). Animals in Islamic Tradition and Muslim Cultures. Oneworld Publications. Franklin, Julian H. (2005). Animal Rights and Moral Philosophy. Columbia University Press. Gruen, Lori (2003). "The Moral Status of Animals", Stanford Encyclopedia of Philosophy, July 1, 2003. Gruen, Lori (2011). Ethics and Animals. Cambridge University Press. Hall, Lee (2006). Capers in the Churchyard: Animal Rights Advocacy in the Age of Terror. Nectar Bat Press. Linzey, Andrew and Clarke, Paul A. B. (eds.) (1990). Animal Rights: A Historic Anthology. Columbia University Press. Mann, Keith (2007). From Dusk 'til Dawn: An Insider's View of the Growth of the Animal Liberation Movement. Puppy Pincher Press. McArthur, Jo-Anne and Wilson, Keith (eds.) (2020). Hidden: Animals in the Anthropocene. Lantern Publishing & Media. Neumann, Jean-Marc (2012). "The Universal Declaration of Animal Rights or the Creation of a New Equilibrium between Species", Animal Law Review, vol. 19, no. 1. Nibert, David (2002).
Animal Rights, Human Rights: Entanglements of Oppression and Liberation. Rowman and Littlefield. Patterson, Charles (2002). Eternal Treblinka: Our Treatment of Animals and the Holocaust. Lantern. Rachels, James (1990). Created from Animals: The Moral Implications of Darwinism. Oxford University Press. Regan, Tom and Singer, Peter (eds.) (1976). Animal Rights and Human Obligations. Prentice-Hall. Spiegel, Marjorie (1996). The Dreaded Comparison: Human and Animal Slavery. Mirror Books. Sztybel, David (2006). "Can the Treatment of Animals Be Compared to the Holocaust?" Ethics and the Environment 11 (Spring): 97–132. Tobias, Michael (2000). Life Force: The World of Jainism. Asian Humanities Press. Wilson, Scott (2010). "Animals and Ethics", Internet Encyclopedia of Philosophy. Donaldson, Sue and Kymlicka, Will (2011). Zoopolis: A Political Theory of Animal Rights. Oxford University Press. Bioethics Political movements Ethical schools and movements Animal ethics
Animal rights
[ "Technology" ]
9,128
[ "Bioethics", "Ethics of science and technology" ]
7,116,205
https://en.wikipedia.org/wiki/APM%2008279%2B5255
APM 08279+5255 is a very distant, broad absorption line quasar located in the constellation Lynx. It is magnified and split into multiple images by the gravitational lensing effect of a foreground galaxy through which its light passes. Its host appears to be a giant elliptical galaxy with a supermassive black hole and associated accretion disk. It possesses large regions of hot dust and molecular gas, as well as regions with starburst activity. Gravitational lensing APM 08279+5255 was initially identified as a quasar in 1998 during an Automatic Plate Measuring Facility (APM) survey to find carbon stars in the galactic halo. The combination of its high redshift (z = 3.87) and brightness (particularly in the infrared) made it the most luminous object yet seen in the universe. It was suspected of being a gravitationally lensed object, with its luminosity magnified. Observations in the infrared with the NICMOS high-resolution camera on board the Hubble Space Telescope (HST) showed that the source was composed of three discrete images. Even accounting for the magnification, the quasar is an extremely powerful object, with a luminosity of 10¹⁴ to 10¹⁵ times that of the Sun. Subsequent observations with the Hubble Space Telescope Imaging Spectrograph confirmed the presence of a third faint image between the two brighter images. Each component has the same spectral energy distribution and is an image of the quasar. Gravitationally lensed systems with odd numbers of images are extremely rare; most contain two or four. Initially, the magnification due to gravitational lensing was thought to be large, in the range of 40 to 90 times. After detailed observations at many wavelengths, the best model of the lensing galaxy is a tilted spiral galaxy. This gives a magnification of about 4. The additional observations led to a revised redshift of 3.911. Galactic structure APM 08279+5255 is a bright source at almost all wavelengths and has become one of the most studied of distant sources. It has been mapped in X-rays with the AXAF CCD Imaging Spectrometer on the Chandra X-ray Observatory, in the infrared with the Hubble Space Telescope, and, using interferometry, in the radio with the Very Long Baseline Array. Measurements with the IRAM Plateau de Bure Interferometer and other instruments looked at the distribution of molecules such as CO, CN, HCN, and HCO+ as well as atomic carbon. From these observations, APM 08279+5255 resides in a giant elliptical galaxy with large amounts of gas, dust, and an active galactic nucleus (AGN) at its core. The AGN is radio-quiet with no evidence for a relativistic jet. It is powered by one of the largest known supermassive black holes: 23 billion solar masses (based on the molecular disk velocities), or alternatively 10 billion solar masses (based on reverberation mapping). The black hole is surrounded by an accretion disk of material spiraling into it, a few parsecs in size. Further out is a dust torus, a doughnut-shaped cloud of dust and gas with a radius of about 100 parsecs. Both the accretion disk and dust torus appear to be almost face-on to us. The radiation from the molecular gas comes from a flattened disk at the center of the galaxy with a radius of 550 pc. This is also the starburst region of the galaxy. The gas is heated both by activity in the AGN and by the newly forming stars. APM 08279+5255 is an ultra-luminous infrared galaxy (ULIRG). Its high redshift shifts the far-infrared spectrum into millimeter wavelengths where it can be observed from observatories on the ground.
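The cosmological scales implied by the revised redshift can be estimated with standard tools. The following is a minimal sketch, assuming astropy's bundled Planck 2018 parameters rather than the cosmologies used in the original papers, so the printed values will differ slightly from published figures; the variable names are illustrative only.

```python
# Minimal sketch: cosmological scales at the revised redshift of
# APM 08279+5255, using astropy's built-in Planck 2018 cosmology
# (an assumption; the discovery-era papers used earlier parameter sets).
from astropy.cosmology import Planck18
import astropy.units as u

z = 3.911  # revised redshift

age = Planck18.age(z)                  # age of the universe at emission
d_l = Planck18.luminosity_distance(z)  # needed to convert observed flux to luminosity

print(f"Age of universe at z = {z}: {age.to(u.Gyr):.2f}")   # roughly 1.6 Gyr
print(f"Luminosity distance: {d_l.to(u.Gpc):.1f}")          # tens of Gpc
```

The age at emission computed this way is consistent with the emission time quoted below for the water vapor observations.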
In 2008 and 2009 the intensities of its water vapor spectral lines were measured using the millimeter wave spectrometer Z-Spec at the Caltech Submillimeter Observatory. Comparing the spectrum to that of Markarian 231, another ULIRG, showed that it had 50 times the water vapor of that galaxy. This made it the largest mass of water in the known universe—100 trillion times more water than that held in all of Earth's oceans combined. Its discovery shows that water has been prevalent in the known universe for nearly its entire existence; the radiation was emitted 1.6 billion years after the Big Bang. Gallery References External links Lynx (constellation) Gravitationally lensed quasars IRAS catalogue objects Starburst galaxies Supermassive black holes Astronomical objects discovered in 1998
APM 08279+5255
[ "Physics", "Astronomy" ]
952
[ "Black holes", "Lynx (constellation)", "Unsolved problems in physics", "Supermassive black holes", "Constellations" ]
7,116,441
https://en.wikipedia.org/wiki/Simple%20lipid
A simple lipid is an ester of fatty acids with various alcohols that carries no other substances. These lipids belong to a heterogeneous class of predominantly nonpolar compounds, mostly insoluble in water, but soluble in nonpolar organic solvents such as chloroform and benzene. "Simple lipid" can refer to many different types of lipid depending on the classification system used, but the most basic definitions usually classify simple lipids as those whose hydrolysis yields no products other than fatty acids and alcohols. The simple lipids are then divided further into glycerides, cholesteryl esters, and waxes. The term was first used in 1947 to separate "simple" greases and waxes from "mixed" triglycerides found in animal fats. See also Lipid Lipids References
Simple lipid
[ "Chemistry", "Biology" ]
173
[ "Biomolecules by chemical classification", "Biotechnology stubs", "Organic compounds", "Biochemistry stubs", "Biochemistry", "Lipids" ]
7,116,447
https://en.wikipedia.org/wiki/Saponifiable%20lipid
A saponifiable lipid is a lipid that contains an ester functional group. Such lipids are made up of long-chain carboxylic (or fatty) acids connected to an alcohol through an ester linkage, which can undergo a saponification reaction. The fatty acids are released upon base-catalyzed ester hydrolysis to form ionized salts; for example, a triglyceride treated with sodium hydroxide yields glycerol and the sodium salts (soaps) of its three fatty acids. The primary saponifiable lipids are free fatty acids, neutral glycerolipids, glycerophospholipids, sphingolipids, and glycolipids. By comparison, the non-saponifiable class of lipids is made up of terpenes, including the fat-soluble vitamins A and E, and certain steroids, such as cholesterol. Applications Saponifiable lipids have relevant applications as a source of biofuel and can be extracted from various forms of biomass to produce biodiesel. See also Lipids Simple lipid References H. Stephen Stoker. General, Organic, and Biological Chemistry, 6th ed. Cengage Learning, Nov 15, 2011, p. 697. Lipids
Saponifiable lipid
[ "Chemistry", "Biology" ]
235
[ "Biomolecules by chemical classification", "Biotechnology stubs", "Biochemistry stubs", "Organic compounds", "Biochemistry", "Lipids" ]
7,118,201
https://en.wikipedia.org/wiki/Downhill%20folding
Downhill folding is a process in which a protein folds without encountering any significant macroscopic free energy barrier. It is a key prediction of the folding funnel hypothesis of the energy landscape theory of proteins. Overview Downhill folding is predicted to occur under conditions of extreme native bias, i.e. at low temperatures or in the absence of denaturants. This corresponds to the type 0 scenario in the energy landscape theory. At temperatures or denaturant concentrations close to their apparent midpoints, proteins may switch from downhill to two-state folding, the type 0 to type 1 transition. Global downhill folding (or one-state folding) is another scenario in which the protein folds in the absence of a free energy barrier under all conditions. In other words, there is a unimodal population distribution at all temperatures and denaturant concentrations, suggesting a continuous unfolding transition in which different ensembles of structures are populated under different conditions. This is in contrast to two-state folding, which assumes only two ensembles (folded and unfolded) and a sharp unfolding transition. Free energy barriers in protein folding are predicted to be small because they arise as a result of compensation between large energetic and entropic terms. Non-synchronization between the gain in stabilizing energy and the loss in conformational entropy results in two-state folding, while synchronization between these two terms as the folding proceeds results in downhill folding. Experimental studies Transition state structures in two-state folding are not experimentally accessible (by definition they are the least populated along the reaction coordinate), but the folding sub-ensembles in downhill folding processes are theoretically distinguishable by spectroscopy. The 40-residue protein BBL, which is an independently folding domain from the E2 subunit of the 2-oxoglutarate dehydrogenase multi-enzyme complex of E. coli, has been experimentally shown to fold globally downhill. Also, a mutant of the lambda repressor protein has been shown to shift from downhill to two-state folding upon changing the temperature/solvent conditions. However, the status of BBL as a downhill-folding protein, and by extension the existence of naturally occurring downhill folders, has been controversial. The current controversy arises from the fact that the only way a protein can be labeled as two-state or downhill is by analyzing the experimental data with models that explicitly deal with these two situations, i.e. by allowing the barrier heights to vary. Unfortunately, most of the experimental data so far have been analyzed with a simple chemical two-state model. In other words, the presence of a rather large free energy barrier has been pre-assumed, ruling out the possibility of identifying downhill or globally downhill protein folding. This is critical because any sigmoidal unfolding curve, irrespective of the degree of cooperativity, can be fit to a two-state model. Kinetically, the presence of a barrier guarantees single-exponential kinetics, but not vice versa. Nevertheless, in some proteins such as the yeast phosphoglycerate kinase and a mutant human ubiquitin, non-exponential kinetics suggesting downhill folding have been observed. A proposed solution to these problems is to develop models that can differentiate between the different situations, and to identify simple but robust experimental criteria for identifying downhill folding proteins. These are outlined below.
Equilibrium criteria Differences in apparent melting temperatures An analysis based on an extension of Zwanzig's model of protein folding indicates that global downhill folding proteins should reveal different apparent melting temperatures (Tm values) when monitored by different techniques. This was experimentally confirmed in the protein BBL mentioned above. The unfolding followed by differential scanning calorimetry (DSC), circular dichroism (CD), fluorescence resonance energy transfer (FRET) and fluorescence all revealed different apparent melting temperatures. A wavelength-dependent melting temperature was also observed in the CD experiments. The data analyzed with a structure-based statistical mechanical model resulted in a unimodal population distribution at all temperatures, indicating a structurally uncoupled continuous unfolding process. The crucial issue in such experiments is to use probes that monitor different aspects of the structure. For example, DSC gives information on the heat capacity changes (and hence enthalpy) associated with unfolding, fluorescence on the immediate environment of the fluorophore, FRET on the average dimensions of the molecule and CD on the secondary structure. A more stringent test would involve following the chemical shifts of each and every atom in the molecule by nuclear magnetic resonance (NMR) as a function of temperature/denaturant. Though time-consuming, this method does not require any specific model for the interpretation of data. The Tm values for all the atoms should be identical within experimental error if the protein folds in a two-state manner. But for a protein that folds globally downhill, the unfolding curves should have widely different Tm values. The atomic unfolding behavior of BBL was found to follow the latter, showing a large spread in Tm values consistent with global downhill behavior. The Tm values of some atoms were found to be similar to the global Tm (obtained from a low-resolution technique like CD or fluorescence), indicating that the unfolding of multiple atoms has to be followed, instead of a few as is frequently done in such experiments. The average atomic unfolding behavior was strikingly similar to that of CD, underlining the fact that unfolding curves from low-resolution experiments are highly simplified representations of a more complex behavior. Calorimetry and crossing baselines Baselines frequently used in two-state fits correspond to the fluctuations in the folded or unfolded well. They are purely empirical, as there is little or no information on how the properties of the folded or unfolded states change with temperature/chemical denaturant. This assumes even more importance in the case of DSC experiments, as the changes in heat capacity correspond to both fluctuations in the protein ensemble and exposure of hydrophobic residues upon unfolding. The DSC profiles of many small fast-folding proteins are broad, with steep pre-transition slopes. Two-state fits to these profiles result in crossing of baselines, indicating that the two-state assumption is no longer valid. This was recognized by Muñoz and Sánchez-Ruiz, resulting in the development of the variable-barrier model. Instead of attempting a model-free inversion of the DSC profile to extract the underlying probability density function, they assumed a specific free energy functional with either one or two minima (similar to the Landau theory of phase transitions), thus enabling the extraction of free energy barrier heights.
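To make the distinction between the one-minimum and two-minima cases concrete, the sketch below uses a generic quartic Landau-style functional; this illustrates the idea only and is not the exact enthalpy-based functional fitted by Muñoz and Sánchez-Ruiz, and the coefficients a and b are arbitrary.

```python
# Schematic Landau-style free energy over an order parameter x.
# For b > 0 the landscape has two minima separated by a barrier of
# b**2 / (4a) at x = 0 (two-state folding); for b <= 0 there is a
# single minimum and no barrier (downhill folding).
import numpy as np

def landau_free_energy(x, a=1.0, b=1.0):
    """F(x) = a*x**4 - b*x**2 (arbitrary units)."""
    return a * x**4 - b * x**2

x = np.linspace(-1.5, 1.5, 301)  # grid includes x = 0 exactly
for b in (1.0, 0.0, -1.0):
    F = landau_free_energy(x, b=b)
    barrier = F[np.argmin(np.abs(x))] - F.min()  # height at x = 0 above the lowest minimum
    regime = "two-state" if barrier > 1e-9 else "downhill"
    print(f"b = {b:+.1f}: barrier = {barrier:.3f} -> {regime}")
```

Letting the barrier parameter vary in a fit of this kind, rather than fixing it to a large value, is what allows equilibrium data to discriminate two-state from downhill folding.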
The variable-barrier model is the first of its kind in physical biochemistry to enable the determination of barrier heights from equilibrium experiments. Analysis of the DSC profile of BBL with this model resulted in zero barrier height, i.e. downhill folding, confirming the earlier result from the statistical mechanical model. When the variable-barrier model was applied to a set of proteins for which both the rate and DSC data are available, a very high correlation of 0.95 was obtained between the rates and barrier heights. Many of the proteins examined had small barriers (<20 kJ/mol), with baseline crossing evident for proteins that fold in less than 1 ms. This is in contrast to the traditional assumption that the free energy barriers between the folded and unfolded states are large. Simulations Because downhill folding is difficult to measure experimentally, molecular dynamics and Monte Carlo simulations have been performed on fast-folding proteins to explore their folding kinetics. Proteins whose folding rate is at or near the folding "speed limit", whose timescales make their folding more accessible to simulation methods, may more commonly fold downhill. Simulation studies of the BBL protein imply that its rapid folding rate and very low energy barrier arise from a lack of cooperativity in the formation of native contacts during the folding process; that is, a low contact order. The link between lack of cooperativity and low contact order was also observed in the context of Monte Carlo lattice simulations. These data suggest that the average number of "nonlocal contacts" per residue in a protein serves as an indicator of the barrier height, where very low nonlocal contact values imply downhill folding. Coarse-grained simulations by Knott and Chan also support the experimental observation of global downhill folding in BBL. A more recent study using constant-pH molecular dynamics (CpHMD) simulation has reconciled the opposing downhill and two-state folding mechanisms and found that the folding barrier vanishes at acidic pH, leading to downhill folding. See also Dr. Victor Muñoz References Further reading Bieri O, Kiefhaber T. (2000). Kinetic models in protein folding. In Mechanisms of Protein Folding 2nd ed. Ed. RH Pain. Frontiers in Molecular Biology series. Oxford University Press: Oxford, UK. Gruebele M. (2008) Fast protein folding. In Protein Folding, Misfolding and Aggregation Ed. V Muñoz. RSC Biomolecular Sciences series. Royal Society of Chemistry Publishing: Cambridge, UK. Protein structure Statistical mechanics
Downhill folding
[ "Physics", "Chemistry" ]
1,837
[ "Statistical mechanics", "Protein structure", "Structural biology" ]
7,118,210
https://en.wikipedia.org/wiki/Electrical%20engineering%20technology
Electrical/Electronics engineering technology (EET) is an engineering technology field that implements and applies the principles of electrical engineering. Like electrical engineering, EET deals with the "design, application, installation, manufacturing, operation or maintenance of electrical/electronic(s) systems." However, EET is a specialized discipline with a greater focus on application, applied design, and implementation, while electrical engineering places a more generalized emphasis on theory and conceptual design. Electrical/Electronic engineering technology is the largest branch of engineering technology and includes a diverse range of sub-disciplines, such as applied design, electronics, embedded systems, control systems, instrumentation, telecommunications, and power systems. Education Accreditation The Accreditation Board for Engineering and Technology (ABET) is the recognized organization for accrediting both undergraduate engineering and engineering technology programs in the United States. Coursework EET curricula can vary widely by institution type, degree type, program objective, and expected student outcome. Each year, however, ABET publishes a set of minimum criteria that a given EET program (either associate degree or bachelor's degree) must meet in order to maintain its ABET accreditation. These criteria may be classified as either general criteria, which apply to all ABET accredited programs, or as program criteria, which are discipline-specific. Associate degree Associate degree programs emphasize the practical field knowledge that is needed to maintain or troubleshoot existing electrical/electronic systems or to build and test new design prototypes. Discipline-specific program outcomes include the application of circuit analysis and design, analog and digital electronics, computer programming, associated software, and relevant engineering standards. Coursework must be, at a minimum, algebra and trigonometry based. Bachelor's degree Bachelor's degree programs emphasize the analysis, design, and implementation of electrical/electronic systems. Some programs may focus on a specific sub-discipline, such as control systems or communications systems, while others may take a broader approach, introducing the student to several different sub-disciplines. Mathematics through differential equations is a minimum requirement for ABET accredited bachelor's level EET degrees. In addition, graduates must demonstrate an understanding of basic project management skills. The United States Department of Commerce classifies the bachelor of science in electrical engineering technology (BSEET) as a STEM undergraduate engineering degree field. In many states, recent graduates and students who are close to finishing an undergraduate BSEET degree are qualified to sit for the Fundamentals of Engineering exam, while those BSEETs who have already gained at least four years' post-college experience are qualified to sit for the Professional Engineer exam for licensure in the United States. Licensing board requirements, which depend upon location, level of education, required years of experience, and the BSEET's sub-discipline, are the passageway to becoming a licensed engineer. The knowledge obtained in a TAC/ABET accredited program is one pathway that may help students prepare for and pass the FE/PE exams. For example, in the United States and Canada, "only a licensed engineer may seal engineering work for public and private clients".
Career Graduates of electrical/electronics engineering technology programs work in a wide range of career fields. Some examples include: Engineering management Telecommunications Signal processing Medical technology and devices Instrumentation Integration Engineer Control Aerospace and avionics Computers Electrical power industry and power distribution Optics and Optoelectronics Manufacturing and manufacturing test engineer Marine Engineering Research and development Project management and Operations research Supervision/Management Systems analyst Technology management Associate degree Electrical/electronic engineering technicians may have a two-year associate degree and are considered craftsman technicians. With additional experience and certifications, craftsman technicians may advance to master craftsman technicians. Bachelor degree Electrical/electronic engineering technologists are broad specialists rather than narrowly focused technicians. EETs have a bachelor's degree and are considered applied electrical or electronic engineers because they apply electrical engineering concepts in their work. Entry-level jobs in electrical or electronics engineering generally require a bachelor's degree in electrical engineering, electronics engineering, or electrical engineering technology. See also Outline of engineering IEEE Applied science Mechanical engineering technology Computer engineering Manufacturing engineering References External links IEEE Global History Network A wiki-based site with many resources about the history of IEEE, its members, their professions and electrical and informational technologies and sciences. Technology by type Electrical engineering
Electrical engineering technology
[ "Engineering" ]
878
[ "Electrical engineering" ]
7,118,482
https://en.wikipedia.org/wiki/Dog%20behavior
Dog behavior is the internally coordinated responses of individuals or groups of domestic dogs to internal and external stimuli. It has been shaped by millennia of contact with humans and their lifestyles. As a result of this physical and social evolution, dogs have acquired the ability to understand and communicate with humans. Behavioral scientists have uncovered a wide range of social-cognitive abilities in domestic dogs. Co-evolution with humans The origin of the domestic dog (Canis familiaris) is not clear. Whole-genome sequencing indicates that the dog, the gray wolf and the extinct Taymyr wolf diverged around the same time 27,000–40,000 years ago. How dogs became domesticated is not clear; however, the two main hypotheses are self-domestication or human domestication. There exists evidence of human-canine behavioral coevolution. Intelligence Dog intelligence is the ability of the dog to perceive information and retain it as knowledge in order to solve problems. Dogs have been shown to learn by inference. A study with Rico, a border collie, showed that he knew the labels of over 200 different items. He inferred the names of novel items by exclusion learning and correctly retrieved those novel items immediately. He also retained this ability four weeks after the initial exposure. Dogs have advanced memory skills. A study documented the learning and memory capabilities of a border collie, "Chaser", who had learned the names of over 1,000 words and could retrieve the corresponding items by verbal command. Dogs are able to read and react appropriately to human body language such as gesturing and pointing, and to understand human voice commands. After undergoing training to solve a simple manipulation task, dogs that are faced with an unsolvable version of the same problem look at the human, while socialized wolves do not. Dogs demonstrate a theory of mind by engaging in deception. Senses The dog's senses include vision, hearing, sense of smell, taste, touch, proprioception, and sensitivity to the Earth's magnetic field. Communication behavior Dog communication is about how dogs "speak" to each other, how they understand messages that humans send to them, and how humans can translate the ideas that dogs are trying to transmit. These communication behaviors include eye gaze, facial expression, vocalization, body posture (including movements of bodies and limbs) and gustatory communication (scents, pheromones and taste). Humans communicate with dogs by using vocalization, hand signals, and body posture. Dogs can also learn to understand the communication of emotions with humans by reading human facial expressions. Social behavior Two studies have indicated that dog behavior varies based on size, body weight, and skull size. Play Dog and dog Play between dogs usually involves several behaviors often seen in aggressive encounters, such as nipping, biting and growling. It is therefore important for the dogs to place these behaviors in the context of play, rather than aggression. Dogs signal their intent to play with a range of behaviors including a "play-bow", "face-paw", "open-mouthed play face" and postures inviting the other dog to chase the initiator. Similar signals are given throughout the play to maintain the context of the potentially aggressive activities. From a young age, dogs engage in play with one another. Dog play is made up primarily of mock fights. It is believed that this behavior, which is most common in puppies, is training for important behaviors later in life.
Play between puppies is not necessarily a 50:50 symmetry of dominant and submissive roles between the individuals; dogs who engage in greater rates of dominant behaviors (e.g. chasing, forcing partners down) at later ages also initiate play at higher rates. This could imply that winning during play becomes more important as puppies mature. Emotional contagion is linked to facial mimicry in humans and primates. Facial mimicry is an automatic response that occurs in less than 1 second, in which one person involuntarily mimics another person's facial expressions, forming empathy. It has also been found in dogs at play, and play sessions lasted longer when there were facial mimicry signals from one dog to another. Dog and human The motivation for a dog to play with another dog is different from that of a dog playing with a human. Dogs walked together, with opportunities to play with one another and with their owners, choose to play with their owners at the same frequency as dogs being walked alone. Dogs in households with two or more dogs play only slightly more often with their owners than dogs in households with a single dog, indicating that the motivation to play with other dogs does not substitute for the motivation to play with humans. It is a common misconception that winning and losing games such as "tug-of-war" and "rough-and-tumble" can influence a dog's dominance relationship with humans. Rather, how dogs play indicates their temperament and relationship with their owner. Dogs that play rough-and-tumble are more amenable and show lower separation anxiety than dogs which play other types of games, and dogs playing tug-of-war and "fetch" are more confident. Dogs that start most games are less amenable and more likely to be aggressive. Playing with humans can affect the cortisol levels of dogs. In one study, the cortisol responses of police dogs and border guard dogs were assessed after playing with their handlers. The cortisol concentrations of the police dogs increased, whereas the border guard dogs' hormone levels decreased. The researchers noted that during the play sessions, police officers were disciplining their dogs, whereas the border guards were truly playing with them, i.e. play that included bonding and affectionate behaviors. They commented that several studies have shown that behaviors associated with control, authority or aggression increase cortisol, whereas play and affiliation behavior decrease cortisol levels. Empathy In 2012, a study found that dogs oriented toward their owner or a stranger more often when the person was pretending to cry than when they were talking or humming. When the stranger pretended to cry, rather than approaching their usual source of comfort, their owner, dogs sniffed, nuzzled and licked the stranger instead. The dogs' pattern of response was behaviorally consistent with an expression of empathic concern. A study found a third of dogs suffered from anxiety when separated from others. Personalities The term personality has been applied to human research, whereas the term temperament has been mostly used for animal research. However, both terms have been used interchangeably in the literature, or purely to distinguish humans from animals and avoid anthropomorphism. Personality can be defined as "a set of behaviors that are consistent over context and time". Studies of dogs' personalities have tried to identify the presence of broad personality traits that are stable and consistent over time.
There are different approaches to assessing dog personality: Ratings of individual dogs: either a caretaker or a dog expert who is familiar with the dog is asked to answer a questionnaire, for instance the Canine Behavioral Assessment and Research Questionnaire, concerning how often the dog shows certain types of behavior. Tests: the dog is submitted to a set of tests and its reactions are evaluated on a behavioral scale. For instance, the dog is presented to a familiar and then an unfamiliar person in order to measure sociability or aggression. Observational test: the dog's behavior is evaluated in a selected but not controlled environment. An observer focuses on the dog's reactions to naturally occurring stimuli. For example, a walk through the supermarket can allow the observer to see the dog in various types of conditions (crowded, noisy…) Several potential personality traits have been identified in dogs, for instance "Playfulness", "Curiosity/Fearlessness", "Chase-proneness", "Sociability and Aggressiveness" and "Shyness–Boldness". A meta-analysis of 51 published peer-reviewed articles identified seven dimensions of canine personality: Reactivity (approach or avoidance of new objects, increased activity in novel situations) Fearfulness (shaking, avoiding novel situations) Activity Sociability (initiating friendly interactions with people and other dogs) Responsiveness to training (working with people, learning quickly) Submissiveness Aggression With regard to the nature versus nurture debate, an April 2022 study led by Kathleen Morrill analyzed genetic and survey data on nearly 2,000 dogs, most of them with their entire genomes sequenced, together with survey results from 16,000 dog owners. The dogs included mixes and purebreds, with 128 breeds represented. The study found that about 80% of a dog's physical traits can be attributed to DNA and that retrieving and friendliness around humans were predominantly genetic. Breed alone, however, is responsible for only about 9% of individual personality differences, with about 25% of personality traits determined by (mainly individual) genetics in total, and the rest determined by the environment. However, a study in December 2022 challenged those findings: after researching the genetic codes of 4,000 dogs and data from 46,000 dog owners, it concluded that a dog's breed does genetically influence its personality. The effects of age and sex have not been clearly determined. The personality models can be used for a range of tasks, including guide and working dog selection, finding appropriate families to re-home shelter dogs, or selecting breeding stock. Leadership, dominance and social groups Dominance is a descriptive term for the relationship between pairs of individuals. Among ethologists, dominance has been defined as "an attribute of the pattern of repeated, antagonistic interactions between two individuals, characterized by a consistent outcome in favor of the same dyad member and a default yielding response of its opponent rather than escalation. The status of the consistent winner is dominant and that of the loser subordinate." Another definition is that a dominant animal has "priority of access to resources". Dominance is a relative attribute, not absolute; there is no reason to assume that a high-ranking individual in one group would also become high ranking if moved to another. Nor is there any good evidence that "dominance" is a lifelong character trait.
Competitive behavior is characterized by the exchange of confident (e.g. growl, inhibited bite, stand over, stare at, chase, bark at) and submissive (e.g. crouch, avoid, displacement lick/yawn, run away) patterns. One test to ascertain which dog in a group is dominant used the following criteria: When a stranger comes to the house, which dog starts to bark first or, if they start to bark together, which dog barks more or longer? Which dog more often licks the other dog's mouth? If the dogs get food at the same time and at the same spot, which dog starts to eat first or eats the other dog's food? If the dogs start to fight, which dog usually wins? Domestic dogs appear to pay little attention to relative size, despite the large weight differences between the largest and smallest individuals; for example, size was not a predictor of the outcome of encounters between dogs meeting while being exercised by their owners, nor was size a predictor among neutered male dogs. Therefore, many dogs do not appear to pay much attention to the actual fighting ability of their opponent, presumably allowing differences in motivation (how much the dog values the resource) and perceived motivation (what the behavior of the other dog signifies about the likelihood that it will escalate) to play a much greater role. When two dogs contest possession of a highly valued resource for the first time, the outcome of the interaction may differ if one is in a state of emotional arousal or in pain, or if its reactivity is influenced by recent endocrine changes or by motivational states such as hunger, compared with the case in which none of these factors is present. Equally, the threshold at which aggression is shown may be influenced by a range of medical factors, or, in some cases, precipitated entirely by pathological disorders. Hence, the contextual and physiological factors present when two dogs first encounter each other may profoundly influence the long-term nature of the relationship between those dogs. The complexity of the factors involved in this type of learning means that dogs may develop different "expectations" about the likely response of another individual for each resource in a range of different situations. Puppies learn early not to challenge an older dog, and this respect stays with them into adulthood. When adult animals meet for the first time, they have no expectations of the behavior of the other: they will both, therefore, be initially anxious and vigilant in this encounter (characterized by the tense body posture and sudden movements typically seen when two dogs first meet), until they start to be able to predict the responses of the other individual. The outcome of these early adult–adult interactions will be influenced by the specific factors present at the time of the initial encounters. As well as contextual and physiological factors, the experiences that each member of the dyad has had with other dogs will also influence their behavior. Scent Dogs have an olfactory sense 40 times more sensitive than a human's, and they commence their lives operating almost exclusively on smell and touch. The special scents that dogs use for communication are called pheromones. Different hormones are secreted when a dog is angry, fearful or confident, and some chemical signatures identify the sex and age of the dog, and whether a female is in the estrus cycle, pregnant or has recently given birth.
Many of the pheromone chemicals can be found dissolved in a dog's urine, and sniffing where another dog has urinated gives the dog a great deal of information about that dog. Male dogs prefer to mark vertical surfaces, and having the scent higher allows the air to carry it farther. The height of the marking tells other dogs about the size of the dog, as among canines size is an important factor in dominance. Dogs (and wolves) mark their territories with urine and their stools. The anal glands of canines give a particular signature to fecal deposits, identifying the marker as well as the place where the dung is left. Dogs are very particular about these landmarks, and engage in what is to humans a meaningless and complex ritual before defecating. Most dogs start with a careful bout of sniffing of a location, perhaps to establish an exact line or boundary between their territory and another dog's territory. This behavior may also involve a small degree of elevation, such as a rock or fallen branch, to aid scent dispersal. Scratching the ground after defecating is a visual sign pointing to the scent marking. The freshness of the scent gives visitors some idea of the current status of a piece of territory and whether it is used frequently. Regions under dispute, or used by different animals at different times, may lead to marking battles, with every scent marked over by a new competitor. Feral dogs Feral dogs are those dogs living in a wild state with no food and shelter intentionally provided by humans, and showing a continuous and strong avoidance of direct human contact. In the developing world pet dogs are uncommon, but feral, village or community dogs are plentiful around humans. The distinction between feral, stray, and free-ranging dogs is sometimes a matter of degree, and a dog may shift its status throughout its life. In some rare but documented cases, a feral dog that was not born wild but is living with a feral group can become behaviorally re-adapted to life as a domestic dog with an owner. A dog can become a stray when it escapes human control, by abandonment or by being born to a stray mother. A stray dog can become feral when forced out of the human environment or when co-opted or socially accepted by a nearby feral group. Feralization occurs through the development of the human avoidance response. Feral dogs are not reproductively self-sustaining, suffer from high rates of juvenile mortality, and depend indirectly on humans for their food, their space, and the supply of co-optable individuals. See further: behavior compared to other canids. Other behavior Dogs have a general behavioral trait of strongly preferring novelty ("neophilia") over familiarity. The average sleep time of a dog in captivity in a 24-hour period is 10.1 hours. Reproduction behavior Estrous cycle and mating Although puppies do not have the urge to procreate, males sometimes engage in sexual play in the form of mounting. In some puppies, this behavior occurs as early as 3 or 4 weeks of age. Dogs reach sexual maturity and can reproduce during their first year, in contrast to wolves at two years of age. Female dogs have their first estrus ("heat") at 6 to 12 months of age; smaller dogs tend to come into heat earlier, whereas larger dogs take longer to mature. Female dogs have an estrous cycle that is nonseasonal and monestrous, i.e. there is only one estrus per estrous cycle. The interval between one estrus and another is, on average, seven months; however, this may range between 4 and 12 months. 
This interestrous period is not influenced by the photoperiod or by pregnancy. The average duration of estrus is 9 days, with spontaneous ovulation usually about 3 days after the onset of estrus. For several days before estrus, a phase called proestrus, the female dog may show greater interest in male dogs and "flirt" with them (proceptive behavior). There is progressive vulval swelling and some bleeding. If males try to mount a female dog during proestrus, she may avoid mating by sitting down or turning round and growling or snapping. Estrous behavior in the female dog is usually indicated by her standing still with the tail held up, or to the side of the perineum, when the male sniffs the vulva and attempts to mount. This tail position is sometimes called "flagging". The female dog may also turn, presenting the vulva to the male. The male dog mounts the female and is able to achieve intromission with a non-erect penis, which contains a bone called the os penis. The dog's penis enlarges inside the vagina, thereby preventing its withdrawal; this is sometimes known as the "tie" or "copulatory lock". The male dog thrusts rapidly into the female for 1–2 minutes, then dismounts with the erect penis still inside the vagina and turns to stand rear-end to rear-end with the female dog for up to 30 to 40 minutes; the penis is twisted 180 degrees in a lateral plane. During this time, prostatic fluid is ejaculated. The female dog can bear another litter within 8 months of the previous one. Dogs are polygamous, in contrast to wolves, which are generally monogamous. Therefore, dogs do not have the pair bonding and protection of a single mate, but rather have multiple mates in a year. The consequence is that wolves put a lot of energy into producing a few pups, in contrast to dogs, which maximize the production of pups. This higher pup production rate enables dogs to maintain or even increase their population with a lower pup survival rate than wolves, and allows dogs a greater capacity than wolves to grow their population after a population crash or when entering a new habitat. It is proposed that these differences are an alternative breeding strategy, one adapted to a life of scavenging instead of hunting. Parenting and early life All of the wild members of the genus Canis display complex coordinated parental behaviors. Wolf pups are cared for primarily by their mother for the first 3 months of their life, when she remains in the den with them while they rely on her milk for sustenance and her presence for protection. The father brings her food. Once they leave the den and can chew, the parents and pups from previous years regurgitate food for them. Wolf pups become independent by 5 to 8 months, although they often stay with their parents for years. In contrast, dog pups are cared for by the mother and rely on her for milk and protection, but she gets no help from the father nor from other dogs. Once pups are weaned, around 10 weeks of age, they are independent and receive no further maternal care. Behavior problems There are many different types of behavioural issues that a dog can exhibit, including growling, snapping, barking, and invading a human's personal space. A survey of 203 dog owners in Melbourne, Australia, found that the main behaviour problems reported by owners were overexcitement (63%) and jumping up on people (56%). Some problems are related to attachment while others are neurological, as seen below. 
Separation anxiety When dogs are separated from humans, usually the owner, they often display behaviors which can be broken into the following four categories: exploratory behaviour, object play, destructive behaviour, and vocalization, and they are related to the canine's level of arousal. These behaviours may manifest as destructiveness, fecal or urinary elimination, hypersalivation or vocalization among other things. Dogs from single-owner homes are approximately 2.5 times more likely to have separation anxiety compared to dogs from multiple-owner homes. Furthermore, sexually intact dogs are only one third as likely to have separation anxiety as neutered dogs. The sex of dogs and whether there is another pet in the home do not have an effect on separation anxiety. It has been estimated that at least 14% of dogs examined at typical veterinary practices in the United States have shown signs of separation anxiety. Dogs that have been diagnosed with profound separation anxiety can be left alone for no more than minutes before they begin to panic and exhibit the behaviors associated with separation anxiety. Separation problems have been found to be linked to the dog's dependency on its owner, not because of disobedience. In the absence of treatment, affected dogs are often relinquished to a humane society or shelter, abandoned, or euthanized. Resource guarding Resource guarding is exhibited by many canines, and is one of the most commonly reported behaviour issues to canine professionals. It is seen when a dog uses specific behaviour patterns so that they can control access to an item, and the patterns are flexible when people are around. If a canine places value on some resource (i.e. food, toys, etc.) they may attempt to guard it from other animals as well as people, which leads to behavioural problems if not treated. The guarding can show in many different ways from rapid ingestion of food to using the body to shield items. It manifests as aggressive behaviour including, but not limited to, growling, barking, or snapping. Some dogs will also resource guard their owners and can become aggressive if the behaviour is allowed to continue. Owners must learn to interpret their dog's body language in order to try to judge the dog's reaction, as visual signals are used (i.e. changes in body posture, facial expression, etc.) to communicate feeling and response. These behaviours are commonly seen in shelter animals, most likely due to insecurities caused by a poor environment. Resource guarding is a concern since it can lead to aggression, but research has found that aggression over guarding can be contained by teaching the dog to drop the item they are guarding. Jealousy Canines are one of a number of non-human animals that can express jealousy towards other animals or animal-like objects. This emotion may feed into other behavioural problems, manifest as attention-seeking behaviour, withdrawing from social activity, or aggression towards their owner or another animal or person. Noise anxiety Canines often fear, and exhibit stress responses to, loud noises. Noise-related anxieties in dogs may be triggered by fireworks, thunderstorms, gunshots, and even loud or sharp bird noises. Associated stimuli may also come to trigger the symptoms of the phobia or anxiety, such as a change in barometric pressure being associated with a thunderstorm, thus causing an anticipatory anxiety. Tail chasing Tail chasing can be classified as a stereotypy. 
It falls under obsessive-compulsive disorder, a neuropsychiatric disorder that can present in dogs as canine compulsive disorder. In one clinical study of this potential behavioral problem, 18 tail-chasing terriers were given clomipramine orally at a dosage of 1 to 2 mg/kg (0.5 to 0.9 mg/lb) of body weight, every 12 hours. Three of the dogs required treatment at a slightly higher dosage range to control tail chasing; after 1 to 12 weeks of treatment, 9 of 12 dogs were reported to have a 75% or greater reduction in tail chasing. Personality can also be a factor in tail chasing. Dogs who chase their tails have been found to be more shy than those who do not, and some dogs also show a lower level of response during tail-chasing bouts. Behavior compared to other canids Comparisons made within the wolf-like canids allow the identification of those behaviors that may have been inherited from common ancestry and those that may have been the result of domestication or other relatively recent environmental changes. Studies of free-ranging African Basenjis and New Guinea Singing Dogs indicate that their behavioral and ecological traits were the result of environmental selection pressures or selective breeding choices, and not the result of artificial selection imposed by humans. Early aggression Dog pups show unrestrained fighting with their siblings from 2 weeks of age, with injury avoided only due to their undeveloped jaw muscles. This fighting gives way to play-chasing with the development of running skills at 4–5 weeks. Wolf pups possess more-developed jaw muscles from 2 weeks of age, when they first show signs of play-fighting with their siblings. Serious fighting occurs during 4–6 weeks of age. Compared to wolf and dog pups, golden jackal pups develop aggression at the age of 4–6 weeks, when play-fighting frequently escalates into uninhibited biting intended to harm. This aggression ceases by 10–12 weeks, when a hierarchy has formed. Tameness Unlike other domestic species, which were primarily selected for production-related traits, dogs were initially selected for their behaviors. In 2016, a study found only 11 genes with fixed variants differing between wolves and dogs. These gene variations were unlikely to have been the result of natural evolution, and they indicate selection on both morphology and behavior during dog domestication. These genes have been shown to affect the catecholamine synthesis pathway, with the majority of the genes affecting the fight-or-flight response (i.e. selection for tameness) and emotional processing. Dogs generally show reduced fear and aggression compared to wolves. Some of these genes have been associated with aggression in some dog breeds, indicating their importance in both the initial domestication and then later in breed formation. Social structure Among canids, packs are the social units that hunt, rear young and protect a communal territory as a stable group, and their members are usually related. Members of feral dog groups, by contrast, are usually not related. Feral dog groups are composed of a stable 2–6 members, compared to the 2–15-member wolf pack, whose size fluctuates with the availability of prey and reaches a maximum in wintertime. The feral dog group consists of several monogamous breeding pairs, compared to the single breeding pair of the wolf pack. 
Agonistic behavior does not extend to the individual level and does not support a higher social structure, in contrast to the ritualized agonistic behavior of the wolf pack, which upholds its social structure. Feral pups have a very high mortality rate that adds little to the group size, and studies show that adults are usually killed through accidents with humans; therefore, other dogs need to be co-opted from villages to maintain stable group size. Socialization The critical period for socialization begins with walking and exploring the environment. Dog and wolf pups both develop the ability to see, hear and smell at 4 weeks of age. Dogs begin to explore the world around them at 4 weeks of age with these senses available to them, while wolves begin to explore at 2 weeks of age, when they have the sense of smell but are functionally blind and deaf. The consequence of this is that more things are novel and frightening to wolf pups. The critical period for socialization closes with the avoidance of novelty, when the animal runs away from – rather than approaching and exploring – novel objects. For dogs this develops between 4 and 8 weeks of age. Wolves reach the end of the critical period after 6 weeks, after which it is not possible to socialize a wolf. Dog puppies require as little as 90 minutes of contact with humans during their critical period of socialization to form a social attachment. This will not create a highly social pet, but a dog that will solicit human attention. Wolves require 24 hours of contact a day starting before 3 weeks of age. To create a socialized wolf, the pups are removed from the den at 10 days of age and kept in constant human contact until they are 4 weeks old, when they begin to bite their sleeping human companions; they then spend only their waking hours in the presence of humans. This socialization process continues until age 4 months, when the pups can join other captive wolves but will require daily human contact to remain socialized. Despite this intensive socialization process, a well-socialized wolf will behave differently to a well-socialized dog and will display species-typical hunting and reproductive behaviors, only closer to humans than a wild wolf would. These wolves do not generalize their socialization to all humans in the same manner as a socialized dog, and they remain more fearful of novelty compared to socialized dogs. In 1982, a study observed the differences between dog and wolf puppies raised in similar conditions. The dog puppies preferred to sleep more at the beginning of their lives, while the wolf puppies were much more active. The dog puppies also preferred the company of humans rather than that of their canine foster mother, though the wolf puppies were the exact opposite, spending more time with their foster mother. The dogs also showed a greater interest in the food given to them and paid little attention to their surroundings, while the wolf puppies found their surroundings to be much more intriguing than their food or food bowl. The wolf puppies took part in antagonistic play at a younger age, while the dog puppies did not display dominant/submissive roles until they were much older. The wolf puppies were rarely seen being aggressive to each other or towards the other canines. The dog puppies, on the other hand, were much more aggressive to each other and to other canines, and were often seen outright attacking their foster mother or one another. 
A 2005 study comparing dog and wolf pups concluded that extensively socialised dogs, as well as unsocialised dog pups, showed greater attachment to a human owner than wolf pups did, even when the wolf pups had been socialised. The study concluded that dogs may have evolved a capacity for attachment to humans functionally analogous to that displayed by human infants. Cognition Despite claims that dogs show more human-like social cognition than wolves, several recent studies have demonstrated that if wolves are properly socialized to humans and have the opportunity to interact with humans regularly, then they too can succeed on some human-guided cognitive tasks, in some cases out-performing dogs at an individual level. Similar to dogs, wolves can also follow more complex point types made with body parts other than the human arm and hand (e.g. elbow, knee, foot). Both dogs and wolves have the cognitive capacity for prosocial behavior toward humans; however, it is not guaranteed. For canids to perform well on traditional human-guided tasks (e.g. following the human point), both relevant lifetime experiences with humans – including socialization to humans during the critical period for social development – and opportunities to associate human body parts with certain outcomes (such as food being provided by human hands, a human throwing or kicking a ball, etc.) are required. After undergoing training to solve a simple manipulation task, dogs that are faced with an insoluble version of the same problem look at the human, while socialized wolves do not. Reproduction Dogs reach sexual maturity and can reproduce during their first year, in contrast to wolves at two years. The female dog can bear another litter within 8 months of the last one. The genus Canis is influenced by the photoperiod and generally reproduces in the springtime. Domestic dogs are not reliant on seasonality for reproduction, in contrast to the wolf, coyote, Australian dingo and African basenji, which may have only one, seasonal, estrus each year. Feral dogs are influenced by the photoperiod, with around half of the breeding females mating in the springtime, which is thought to indicate an ancestral reproductive trait not overcome by domestication, as can be inferred from wolves and Cape hunting dogs. Domestic dogs are polygamous, in contrast to wolves, which are generally monogamous. Therefore, domestic dogs do not have the pair bonding and protection of a single mate, but rather have multiple mates in a year. There is no paternal care in dogs, as opposed to wolves, where all pack members assist the mother with the pups. The consequence is that wolves put a lot of energy into producing a few pups, in contrast to dogs, which maximize the production of pups. This higher pup production rate enables dogs to maintain or even increase their population with a lower pup survival rate than wolves, and allows dogs a greater capacity than wolves to grow their population after a population crash or when entering a new habitat. It is proposed that these differences are an alternative breeding strategy adapted to a life of scavenging instead of hunting. In contrast to domestic dogs, feral dogs are monogamous. Domestic dogs tend to have a litter size of 10, wolves 3, and feral dogs 5–8. Feral pups have a very high mortality rate, with only 5% surviving to the age of one year, and sometimes the pups are left unattended, making them vulnerable to predators. Domestic dogs stand alone among all canids for a total lack of paternal care. 
Dogs differ from wolves and most other large canid species in that they generally do not regurgitate food for their young, nor for the young of other dogs in the same territory. However, this difference was not observed in all domestic dogs. Regurgitation of food by females for the young, as well as care for the young by the males, has been observed in domestic dogs, dingoes and feral or semi-feral dogs. In one study of a group of free-ranging dogs, for the first 2 weeks immediately after parturition the lactating females were observed to be more aggressive in protecting the pups. The male parents were in contact with the litters as 'guard' dogs for the first 6–8 weeks of the litters' life. In the absence of the mothers, they were observed to prevent the approach of strangers by vocalizations or even by physical attacks. Moreover, one male fed the litter by regurgitation, showing the existence of paternal care in some free-roaming dogs. Space Space used by feral dogs is not dissimilar from that of most other canids in that they use defined traditional areas (home ranges) that tend to be defended against intruders, and have core areas where most of their activities are undertaken. Urban domestic dogs have a home range of 2–61 hectares, in contrast to a feral dog's home range of 58 square kilometers. Wolf home ranges vary from 78 square kilometers, where prey is deer, to 2.5 square kilometers at higher latitudes, where prey is moose and caribou. Wolves will defend their territory based on prey abundance and pack density, but feral dogs will defend their home ranges all year. Where wolf ranges and feral dog ranges overlap, the feral dogs will site their core areas closer to human settlement. Predation and scavenging Despite claims in the popular press, studies could not find evidence of a single instance of predation on cattle by feral dogs. However, domestic dogs were responsible for the deaths of 3 calves over one 5-year study. Other studies in Europe and North America indicate only limited success in the consumption of wild boar, deer and other ungulates; however, it could not be determined whether this was predation or scavenging on carcasses. A newer study has shown, though, that these cases were most likely due to predation. Feral dogs, like their ancestors, do participate in pup rearing. Several studies show that feral dogs are not primarily scavengers, despite claims in the popular press. Studies in the modern era show that their diet is very opportunistic, ranging from garbage and carrion to live prey. The primary feature that distinguishes feral from domestic dogs is the degree of reliance or dependence on humans and, in some respects, their behavior toward people. Feral dogs survive and reproduce independently of human intervention or assistance. While it is true that some feral dogs use human garbage for food, others acquire their primary subsistence by hunting and scavenging like other wild canids. Dogs may resort to hunting more than garbage consumption when their garbage food source is scarce. Even well-fed domestic dogs are prone to scavenge; gastro-intestinal veterinary visits increase during warmer weather, as dogs are prone to eat decaying material. Some dogs consume feces, which may contain nutrients. On occasion, well-fed dogs have been known to scavenge their owners' corpses. Dogs in human society Studies using an operant framework have indicated that humans can influence the behavior of dogs through food, petting and voice. Food and 20–30 seconds of petting maintained operant responding in dogs. 
Some dogs will show a preference for petting once food is readily available, and dogs will remain in proximity to a person providing petting and show no satiation to that stimulus. Petting alone was sufficient to maintain the operant response of military dogs to voice commands, and responses to basic obedience commands in all dogs increased when only vocal praise was provided for correct responses. In a study using dogs that had been trained to remain motionless, unsedated and unrestrained, in an MRI scanner, the dogs exhibited caudate activation in response to a hand signal associated with reward. Further work found that the magnitude of the canine caudate response is similar to that of humans, while the between-subject variability in dogs may be less than that in humans. In a further study, 5 scents were presented (self, familiar human, strange human, familiar dog, strange dog). While the olfactory bulb/peduncle was activated to a similar degree by all the scents, the caudate was activated maximally by the scent of the familiar human. Importantly, the scent of the familiar human was not that of the handler, meaning that the caudate response differentiated the scent in the absence of the person being present. The caudate activation suggested that not only did the dogs discriminate that scent from the others, they had a positive association with it. Although these scent signals came from two different people, both humans lived in the same household as the dog and therefore represented the dog's primary social circle. And while dogs are highly attuned to a vast range of smells, it seems that the "reward response" is reserved for their humans. Research has shown that there are individual differences in the interactions between dogs and their humans that have significant effects on dog behavior. In 1997, a study showed that the type of relationship between dog and master, characterized as either a companionship or a working relationship, significantly affected the dog's performance on a cognitive problem-solving task. The researchers speculated that companion dogs have a more dependent relationship with their owners, and look to them to solve problems. In contrast, working dogs are more independent. Dogs in the family In 2013, a study produced the first evidence under controlled experimental observation of a correlation between the owner's personality and their dog's behaviour. Dogs at work Service dogs are those that are trained to help people with disabilities such as blindness, epilepsy, diabetes and autism. Detection dogs are trained to use their sense of smell to detect substances such as explosives, illegal drugs, wildlife scat, or blood. In science, dogs have helped humans understand the conditioned reflex. Attack dogs, dogs that have been trained to attack on command, are employed in security, police, and military roles. Service dog programs have been established to help individuals suffering from post-traumatic stress disorder (PTSD) and have been shown to have positive results. Attacks The human-dog relationship is based on unconditional trust; however, if this trust is lost, it will be difficult to reinstate. In the UK between 2005 and 2013, there were 17 fatal dog attacks. In 2007–08, there were 4,611 hospital admissions due to dog attacks, which increased to 5,221 in 2008–09. It was estimated in 2013 that more than 200,000 people a year are bitten by dogs in England, with the annual cost to the National Health Service of treating the injuries at about £3 million. 
A report published in 2014 stated there were 6,743 hospital admissions specifically caused by dog bites, a 5.8% increase from the 6,372 admissions in the previous 12 months. In the US between 1979 and 1996, there were more than 300 human dog-bite-related fatalities. In the US in 2013, there were 31 dog-bite-related deaths. Each year, more than 4.5 million people in the US are bitten by dogs, and almost 1 in 5 require medical attention. A dog's thick fur protects it from the bite of another dog, but humans are furless and are not so protected. Attack training is condemned by some as promoting ferocity in dogs; a 1975 American study showed that 10% of dogs that had bitten a person had received attack dog training at some point. Genetic basis of dog behavior It is well established that a simple genetic architecture underlies the morphological differences between dog breeds, but the genetic basis of domestic canine behavior is more commonly disputed. Early studies of the genetics of breed-specific behaviors in herding dogs in the 1940s concluded that 'showing eye' and 'bark' behaviors do not follow simple Mendelian inheritance. Decades later, the same conclusion has been reached for all studied behaviors, but the complex modes of inheritance have not been completely deciphered. Like human behavior, canine behavior is a result of the interactions between the protein products coded for by genes and the environment in which the organism lives. The first study to identify a specific locus associated with a behavioral phenotype in dogs (with genome-wide significance) appeared in 2010, when allelic variation in the cdh2 gene was found to be linked to compulsive behavioral phenotypes. A 2019 genome-wide association study concluded that a large proportion of behavioral variance across breeds is attributable to genetic factors. Within breeds, the most heritable traits include characteristics selected for in breeding, such as trainability, stranger-directed aggression, chasing, and attachment/attention seeking. Another study, which compared breed data from C-BARQ to daily behavioral patterns, concluded that separation anxiety and owner-directed aggression were the only two of the traits studied not found to have significant heritability. Meanwhile, agitation, attention seeking, barking, excitability, fetching, human/object fear, noise fear, non-owner aggression, and trainability were found to have a genetic basis. Genes containing SNPs associated with dog behavior are likely to be expressed in the brain, contributing to pathways related to the development and expression of behavior and cognition (i.e. they influence behavioral processes through expression in the brain). Examples of chromosomal loci for putative SNPs and their associated traits are discussed below. Geneticists continue to explore candidate genes that are responsible for the regulation of neurotransmitters, specifically dopamine and serotonin, as major differences in their concentration, receptivity, and binding ability are linked to behavioral disorders. For example, attachment and attention-seeking behaviors were linked to genes associated with dopamine transport and metabolism. As with some physical diseases, it is conceivable that a similar presentation of behavioral traits across breeds could be caused by several different kinds of mutations and, conversely, that mutations of the same genes could result in diverse phenotypes. 
For example, studies have found that certain loci associated with breed differences in stranger-directed aggression contain SNPs in GRM8, a gene that codes for a glutamate receptor; glutamate is one of the major excitatory neurotransmitters in the central nervous system. Allelic variation in another glutamate-related gene, slc1a2, has been associated with increased stranger-directed aggression in Shiba Inus and with higher activity levels in Labrador Retrievers. SNPs in PDE7B, a gene that functions in dopaminergic pathways, were also associated with breed differences in aggression. Additionally, it has been discovered that there are common genetic mechanisms for individual differences in social behavior between dogs and humans. For example, the structural variation in the GTF2I and GTF2IRD1 genes at the locus responsible for Williams-Beuren syndrome in humans is also associated with hypersociability in dogs. Genes associated with temperament and startle response in humans, such as OTORD and CACNA1C, were linked to breed differences in fear and fear response. Genes associated with aggression in dogs, including CPNE4 and OPCML, have been linked to aggressive behavior in humans. The frequency of energetic, boisterous and playful behavior has been linked to genes previously associated with resting heart rate, daytime rest, and sleep duration in humans, such as TMEM132D, AGMO, SNX29, and CACNA2D3. Trainability has previously been associated with the intelligence and information-processing-speed genes ERG, SNX29, CSMD2, and ATRNL1. CAMKMT, a gene relating to stranger fear in dogs, is also associated with anxiety in humans. At present, there are still limitations to understanding the genetic basis of canine behavior, including inconsistent phenotyping methods, strong environmental/developmental influences on behavior, and a lack of international collaboration. See also Alpha roll Dog communication Dog intelligence Calming signals Pack (canine) Pack hunter Separation anxiety disorder (humans) Temperament test References Further reading External links Ethology
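As a rough illustration of the single-marker association tests behind the genome-wide studies described above, the following sketch runs a chi-square test of independence on genotype counts. The SNP, trait, and all numbers are invented for illustration; real studies additionally correct for relatedness, population structure, and multiple testing.

```python
from scipy import stats

# Hypothetical genotype counts at one SNP for dogs scored "high" or "low"
# on stranger-directed aggression. Columns: copies of the minor allele (0, 1, 2).
high_aggression = [12, 30, 18]  # invented counts
low_aggression = [35, 40, 5]    # invented counts

# The chi-square test asks whether genotype frequencies differ between the
# two trait groups - the core of a case/control association test.
chi2, p, dof, expected = stats.chi2_contingency([high_aggression, low_aggression])
print(f"chi2 = {chi2:.2f}, p = {p:.4g}")

# In a genome-wide scan this test is repeated for hundreds of thousands of
# SNPs, so p-values must clear a much stricter "genome-wide significance"
# threshold (conventionally p < 5e-8) before a locus is reported.
```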
Dog behavior
[ "Biology" ]
9,446
[ "Behavioural sciences", "Ethology", "Behavior" ]
14,383,139
https://en.wikipedia.org/wiki/Allotropes%20of%20sulfur
The element sulfur exists as many allotropes. In number of allotropes, sulfur is second only to carbon. In addition to the allotropes, each allotrope often exists in polymorphs (different crystal structures of the same covalently bonded Sn molecules) delineated by Greek prefixes (α, β, etc.). Furthermore, because elemental sulfur has been an item of commerce for centuries, its various forms are given traditional names. Early workers identified some forms that have later proved to be single or mixtures of allotropes. Some forms have been named for their appearance, e.g. "mother of pearl sulfur", or alternatively named for a chemist who was pre-eminent in identifying them, e.g. "Muthmann's sulfur I" or "Engel's sulfur". The most commonly encountered form of sulfur is the orthorhombic polymorph of S8, which adopts a puckered ring – or "crown" – structure. Two other polymorphs are known, also with nearly identical molecular structures. In addition to S8, sulfur rings of 6, 7, 9–15, 18, and 20 atoms are known. At least five allotropes are uniquely formed at high pressures, two of which are metallic. The number of sulfur allotropes reflects the relatively strong S−S bond of 265 kJ/mol. Furthermore, unlike most elements, the allotropes of sulfur can be manipulated in solutions of organic solvents and are analysed by HPLC. Phase diagram The pressure-temperature (P-T) phase diagram for sulfur is complex (see image). The region labeled I (a solid region) is α-sulfur. High-pressure solid allotropes In a high-pressure study at ambient temperatures, four new solid forms, termed II, III, IV, V, have been characterized, where α-sulfur is form I. Solid forms II and III are polymeric, while IV and V are metallic (and are superconductive below 10 K and 17 K, respectively). Laser irradiation of solid samples produces three sulfur forms below 200–300 kbar (20–30 GPa). Solid cyclo allotrope preparation Two methods exist for the preparation of the cyclo-sulfur allotropes. One of the methods, which is most famous for preparing hexasulfur, is to treat hydrogen polysulfides with polysulfur dichloride: H2Sx + SyCl2 → cyclo-Sx+y + 2 HCl A second strategy uses titanocene pentasulfide as a source of the S5 unit. This complex is easily made from polysulfide solutions: (C5H5)2TiCl2 + (NH4)2S5 → (C5H5)2TiS5 + 2 NH4Cl Titanocene pentasulfide reacts with polysulfur chloride: (C5H5)2TiS5 + SnCl2 → cyclo-Sn+5 + (C5H5)2TiCl2 Solid cyclo-sulfur allotropes Cyclo-hexasulfur, cyclo-S6 This allotrope was first prepared by M. R. Engel in 1891 by treating thiosulfate with HCl. Cyclo-S6 is orange-red and forms a rhombohedral crystal. It is called ρ-sulfur, ε-sulfur, Engel's sulfur and Aten's sulfur. Another method of preparation involves the reaction of a polysulfane with sulfur monochloride: H2S4 + S2Cl2 → cyclo-S6 + 2 HCl (dilute solution in diethyl ether) The sulfur ring in cyclo-S6 has a "chair" conformation, reminiscent of the chair form of cyclohexane. All of the sulfur atoms are equivalent. Cyclo-heptasulfur, cyclo-S7 It is a bright yellow solid. Four (α-, β-, γ-, δ-) forms of cyclo-heptasulfur are known. Two forms (γ-, δ-) have been characterized. The cyclo-S7 ring has an unusual range of bond lengths, 199.3–218.1 pm. It is said to be the least stable of all of the sulfur allotropes. Cyclo-octasulfur, cyclo-S8 Octasulfur contains puckered S8 rings, and is known in three forms that differ only in the way the rings are packed in the crystal. α-Sulfur α-Sulfur is the form most commonly found in nature. When pure it has a greenish-yellow colour (traces of cyclo-S7 in commercially available samples make it appear yellower). 
It is practically insoluble in water and is a good electrical insulator with poor thermal conductivity. It is quite soluble in carbon disulfide: 35.5 g/100 g of solvent at 25 °C. It has an orthorhombic crystal structure. α-Sulfur is the predominant form found in "flowers of sulfur", "roll sulfur" and "milk of sulfur". It contains puckered S8 rings, alternatively described as crown-shaped. The S–S bond lengths are all 203.7 pm and the S-S-S angles are 107.8°, with a dihedral angle of 98°. At 95.3 °C, α-sulfur converts to β-sulfur. β-Sulfur β-Sulfur is a yellow solid with a monoclinic crystal form and is less dense than α-sulfur. It is unusual because it is only stable above 95.3 °C; below this temperature it converts to α-sulfur. β-Sulfur can be prepared by crystallising at 100 °C and cooling rapidly to slow down the formation of α-sulfur. It has a melting point variously quoted as 119.6 °C and 119.8 °C, but as it decomposes to other forms at around this temperature, the observed melting point can vary. The 119 °C melting point has been termed the "ideal melting point" and the typical lower value (114.5 °C), when decomposition occurs, the "natural melting point". γ-Sulfur γ-Sulfur was first prepared by F.W. Muthmann in 1890. It is sometimes called "nacreous sulfur" or "mother of pearl sulfur" because of its appearance. It crystallises in pale yellow monoclinic needles. It is the densest form of the three. It can be prepared by slowly cooling molten sulfur that has been heated above 150 °C or by chilling solutions of sulfur in carbon disulfide, ethyl alcohol or hydrocarbons. It is found in nature as the mineral rosickyite. It has been tested in carbon-fiber-stabilized form as a cathode in lithium-sulfur (Li-S) batteries and was observed to stop the formation of the polysulfides that compromise battery life. Cyclo-Sn (n = 9–15, 18, 20) These allotropes have been synthesised by various methods, for example, treating titanocene pentasulfide with a dichlorosulfane of suitable sulfur chain length, Sn−5Cl2: (C5H5)2TiS5 + Sn−5Cl2 → cyclo-Sn + (C5H5)2TiCl2 or alternatively treating a dichlorosulfane, SxCl2, with a polysulfane, H2Sy: SxCl2 + H2Sy → cyclo-Sx+y + 2 HCl Cyclo-S12, cyclo-S18 and cyclo-S20 can also be prepared from cyclo-S8. With the exception of cyclo-S12, the rings contain S–S bond lengths and S-S-S bond angles that differ from one another. Cyclo-S12 is the most stable of these cyclo-allotropes. Its structure can be visualised as having sulfur atoms in three parallel planes, 3 in the top, 6 in the middle and 3 in the bottom. Two forms (α-, β-) of cyclo-S9 are known, one of which has been characterized. Two forms of cyclo-S18 are known, in which the conformation of the ring differs. To differentiate these structures, rather than using the normal crystallographic convention of α-, β-, etc., which in other cyclo-Sn compounds refer to different packings of essentially the same conformer, these two conformers have been termed endo- and exo-. Cyclo-S6·cyclo-S10 adduct This adduct is produced from a solution of cyclo-S6 and cyclo-S10 in carbon disulfide. It has a density midway between those of cyclo-S6 and cyclo-S10. The crystal consists of alternate layers of cyclo-S6 and cyclo-S10. This material is a rare example of an allotrope that contains molecules of different sizes. Catena sulfur forms The term "catena sulfur forms" refers to mixtures of sulfur allotropes that are high in catena (polymer chain) sulfur. The naming of the different forms is very confusing, and care has to be taken to determine what is being described, because some names are used interchangeably. 
Amorphous sulfur Amorphous sulfur is the quenched product of molten sulfur hotter than the λ-transition at 160 °C, where polymerization yields catena sulfur molecules. (Above this temperature, the properties of the liquid melt change remarkably. For example, the viscosity increases more than 10,000-fold as the temperature increases through the transition.) As it anneals, solid amorphous sulfur changes from its initial glassy form to a plastic form, hence its other names of plastic, and glassy or vitreous, sulfur. The plastic form is also called χ-sulfur. Amorphous sulfur contains a complex mixture of catena-sulfur forms mixed with cyclo-forms. Insoluble sulfur Insoluble sulfur is obtained by washing quenched liquid sulfur with CS2. It is sometimes called polymeric sulfur, μ-S or ω-S. Fibrous (φ-) sulfur Fibrous (φ-) sulfur is a mixture of the allotropic ψ- form and γ-cyclo-S8. ω-Sulfur ω-Sulfur is a commercially available product prepared from amorphous sulfur that has not been stretched prior to extraction of soluble forms with CS2. It is sometimes called "white sulfur of Das" or supersublimated sulfur. It is a mixture of ψ-sulfur and lamina sulfur. The composition depends on the exact method of production and the sample's history. One well-known commercial form is "Crystex". ω-Sulfur is used in the vulcanization of rubber. λ-Sulfur λ-Sulfur is molten sulfur just above the melting temperature. It is a mixture containing mostly cyclo-S8. Cooling λ-sulfur slowly gives predominantly β-sulfur. μ-Sulfur μ-Sulfur is the name applied to solid insoluble sulfur and to the melt prior to quenching. π-Sulfur π-Sulfur is a dark-coloured liquid formed when λ-sulfur is left to stay molten. It contains a mixture of Sn rings. Biradical catena chains This term is applied to the biradical catena-sulfur chains in sulfur melts, or to such chains in the solid. Solid catena allotropes The production of pure forms of catena-sulfur has proved to be extremely difficult. Complicating factors include the purity of the starting material and the thermal history of the sample. ψ-Sulfur This form, also called fibrous sulfur or ω1-sulfur, has been well characterized. It has a density of 2.01 g·cm−3 (α-sulfur: 2.069 g·cm−3) and decomposes around its melting point of 104 °C. It consists of parallel helical sulfur chains. These chains have both left- and right-handed "twists" and a radius of 95 pm. The S–S bond length is 206.6 pm, the S-S-S bond angle is 106° and the dihedral angle is 85.3° (comparable figures for α-sulfur are 203.7 pm, 107.8° and 98.3°). Lamina sulfur Lamina sulfur has not been well characterized but is believed to consist of criss-crossed helices. It is also called χ-sulfur or ω2-sulfur. High-temperature gaseous allotropes Monatomic sulfur can be produced by photolysis of carbonyl sulfide. Disulfur, S2 Disulfur, S2, is the predominant species in sulfur vapour above 720 °C (a temperature above that shown in the phase diagram); at low pressure (1 mmHg) at 530 °C, it comprises 99% of the vapor. It is a triplet diradical (like dioxygen and sulfur monoxide), with an S−S bond length of 188.7 pm. The blue colour of burning sulfur is due to the emission of light by the S2 molecule produced in the flame. The S2 molecule has been trapped in the compound [S2I4][EF6]2 (E = As, Sb) for crystallographic measurements, produced by treating elemental sulfur with excess iodine in liquid sulfur dioxide. The [S2I4]2+ cation has an "open-book" structure, in which each [I2]+ ion donates the unpaired electron in its π* molecular orbital to a vacant orbital of the S2 molecule. 
Trisulfur, S3 Trisulfur, S3, is found in sulfur vapour, comprising 10% of vapour species at 440 °C and 10 mmHg. It is cherry red in colour, with a bent structure similar to that of ozone, O3. Tetrasulfur, S4 Tetrasulfur, S4, has been detected in the vapour phase, but it has not been well characterized. Diverse structures (e.g. chains, branched chains and rings) have been proposed. Theoretical calculations suggest a cyclic structure. Pentasulfur, S5 Pentasulfur, S5, has been detected in sulfur vapours but has not been isolated in pure form. List of allotropes and forms References Bibliography External links Amorphous solids
Allotropes of sulfur
[ "Physics", "Chemistry" ]
2,769
[ "Amorphous solids", "Allotropes of sulfur", "Unsolved problems in physics", "Allotropes" ]
14,383,330
https://en.wikipedia.org/wiki/Aquacultural%20engineering
Aquacultural engineering is a multidisciplinary field of engineering that aims to solve technical problems associated with farming aquatic vertebrates, invertebrates, and algae. Common aquaculture systems requiring optimization and engineering include sea cages, ponds, and recirculating systems. The design and management of these systems is based on their production goals and the economics of the farming operation. Aquaculture technology is varied, with design and development requiring knowledge of mechanical, biological and environmental systems, along with materials engineering and instrumentation. Furthermore, engineering techniques often involve solutions borrowed from wastewater treatment, fisheries, and traditional agriculture. Aquacultural engineering has played a role in the expansion of the aquaculture industry, which now accounts for half of all seafood products consumed in the world. To identify effective solutions, the discipline is combined with knowledge of both fish physiology and business economics. Recirculating aquaculture systems Recirculating aquaculture systems often involve intensive, high-density culture of a species with limited water usage and extensive filtration. In a typical recirculating aquaculture system, a series of filtration steps maintains a high level of water quality that promotes rapid fish growth. Steps include solids removal, biofiltration, oxygenation, and pumping, with each one requiring different equipment and engineering considerations. Comprehensive instrumentation and sensor controls are required to monitor this equipment and the underlying water conditions such as temperature, dissolved oxygen, and pH. As of 2017, development of recirculating aquaculture systems was still underway, and engineering advances are needed to make the systems economically viable for culturing most species. Research The journal Aquacultural Engineering publishes engineering studies related to the design and development of aquacultural systems. Worldwide, universities provide aquacultural engineering education, often under the umbrella of agricultural or biological engineering. See also Recirculating aquaculture systems References Aquaculture Engineering disciplines
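As an illustration of the kind of sensor-control logic described above, the following sketch checks water-quality readings against setpoint ranges. The parameter names and ranges are invented for illustration; real setpoints depend on the cultured species and the design of the system.

```python
# Minimal sketch of threshold-based water-quality monitoring for a
# recirculating aquaculture system. All setpoints are hypothetical.

SETPOINTS = {
    "temperature_c": (22.0, 28.0),
    "dissolved_oxygen_mg_l": (6.0, 12.0),
    "ph": (6.8, 8.2),
}

def check_readings(readings: dict) -> list:
    """Return alarm messages for any reading outside its setpoint range."""
    alarms = []
    for name, value in readings.items():
        low, high = SETPOINTS[name]
        if not low <= value <= high:
            alarms.append(f"{name} out of range: {value} (expected {low}-{high})")
    return alarms

# Example: dissolved oxygen has dropped below the acceptable range,
# which in a real system would trigger aeration or an operator alert.
print(check_readings({"temperature_c": 26.1,
                      "dissolved_oxygen_mg_l": 4.9,
                      "ph": 7.4}))
```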
Aquacultural engineering
[ "Engineering" ]
373
[ "nan" ]
14,383,955
https://en.wikipedia.org/wiki/Temples%20of%20Humankind
The Temples of Humankind are a collection of subterranean temples built by the Federation of Damanhur. They are decorated in several motifs stressing peaceful human collaboration. The Temples are located in the foothills of the Alps in northern Italy, north of Turin, in the valley of Valchiusella. The temples were created under the direction of Oberto Airaudi, who claimed that from the age of 10 he had visions of ancient temples from a previous life; excavation and building began in August 1978. By 1991 most of the chambers were reportedly complete when Italian police, acting on a tip from villagers, conducted a raid on the Temples. However, since the temples were so well hidden, police were unable to locate them until state prosecutor Bruno Tinti threatened: "show us these temples or we will dynamite the entire hillside." Eventually the Italian government reportedly granted retroactive excavation and building permits, and the Temples are now open to visitors. Structure Parts of the Temples: Hall of Water – dedicated to the feminine principle, it is in the shape of a chalice and invites receptivity Blue Hall – for meditation on social matters; used as a place of inspiration and reflection Hall of Earth – dedicated to the masculine principle, to the earth as an element and planet and to past and future reincarnations Hall of Metals – represents the different ages and developmental stages of humankind and the shadow elements of the human psyche Labyrinth Hall – showing interfaith worship through the centuries, uniting different cultures and peoples Hall of Spheres – positioned where 3 synchronic lines merge, inviting planetary contact and transmission of messages, ideas and dreams to create harmony between nations Hall of Mirrors – dedicated to the sky, air and light, solar energy, strength and life. There are 4 altars to earth, water, air and fire References Publisher of Damanhur Biography The Damanhur Temples ABC News website External links Photos of Temples of Humankind 1978 establishments in Italy New religious movements Religious buildings and structures completed in 1991 Architecture related to utopias
Temples of Humankind
[ "Engineering" ]
404
[ "Architecture related to utopias", "Architecture" ]
14,384,856
https://en.wikipedia.org/wiki/Fourth%20International%20Conference%20on%20Environmental%20Education
The Tbilisiplus30, or the Fourth International Conference on Environmental Education, was held at the Centre for Environment Education, Ahmedabad, India, between November 24, 2007 and November 28, 2007. The conference was the fourth in the series of international conferences on environmental education held since the first such conference in Tbilisi (in the former USSR) in 1977. The second conference was organised in Moscow in 1987, and the third conference was held in Thessaloniki in 1997. The United Nations declared the decade 2005 to 2014 the "Decade of Education for Sustainable Development" (DESD). This conference underlined the key role of education in achieving sustainable development. The participants and delegates from countries across the globe came together to bridge the gap between environmental education and Education for Sustainable Development. They examined the development of environmental education since the first conference, thirty years earlier, and set a global agenda for the DESD. The conference served as a platform for sharing practices and ideas on initiatives in environmental education throughout the world. There was a significant amount of participation in workshops on topics including "Education for Sustainable Development" and "Teacher Education," research for the DESD, "DESD Monitoring and Evaluation," "ESD and Media," "Man and Biosphere Reserves" and "World Heritage Sites" as learning sites for environmental development, "Floods and Disaster Reduction" and "Education for Sustainable Consumption". References 2007 in the environment Environmental conferences Environmental education International sustainable development
Fourth International Conference on Environmental Education
[ "Environmental_science" ]
283
[ "Environmental education", "Environmental social science" ]
14,385,549
https://en.wikipedia.org/wiki/Strongly%20minimal%20theory
In model theory—a branch of mathematical logic—a minimal structure is an infinite one-sorted structure such that every subset of its domain that is definable with parameters is either finite or cofinite. A strongly minimal theory is a complete theory all models of which are minimal. A strongly minimal structure is a structure whose theory is strongly minimal. Thus a structure is minimal only if the parametrically definable subsets of its domain cannot be avoided, because they are already parametrically definable in the pure language of equality. Strong minimality was one of the early notions in the new field of classification theory and stability theory that was opened up by Morley's theorem on totally categorical structures. The nontrivial standard examples of strongly minimal theories are the one-sorted theories of infinite-dimensional vector spaces, and the theories ACFp of algebraically closed fields of characteristic p. As the example ACFp shows, the parametrically definable subsets of the square of the domain of a minimal structure can be relatively complicated ("curves"). More generally, a subset of a structure that is defined as the set of realizations of a formula φ(x) is called a minimal set if every parametrically definable subset of it is either finite or cofinite. It is called a strongly minimal set if this is true even in all elementary extensions. A strongly minimal set, equipped with the closure operator given by algebraic closure in the model-theoretic sense, is an infinite matroid, or pregeometry. A model of a strongly minimal theory is determined up to isomorphism by its dimension as a matroid. Totally categorical theories are controlled by a strongly minimal set; this fact explains (and is used in the proof of) Morley's theorem. Boris Zilber conjectured that the only pregeometries that can arise from strongly minimal sets are those that arise in vector spaces, projective spaces, or algebraically closed fields. This conjecture was refuted by Ehud Hrushovski, who developed a method known as "Hrushovski construction" to build new strongly minimal structures from finite structures. See also C-minimal theory o-minimal theory References Model theory
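A minimal worked sketch of the definition for the standard example ACFp (the specific polynomial is chosen only for illustration):

```latex
% Why ACF_p is strongly minimal: by quantifier elimination, any formula
% \varphi(x) in one free variable with parameters in an algebraically
% closed field K is equivalent to a Boolean combination of polynomial
% conditions q(x) = 0 and q(x) \neq 0. Each zero set has at most
% \deg q elements, so every parametrically definable subset of K is
% finite or cofinite, in every model of the theory. For example:
\[
  \{\, x \in K : x^{2} = a \,\} \ \text{has at most two elements, while} \quad
  \{\, x \in K : x^{2} \neq a \,\} \ \text{is cofinite.}
\]
```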
Strongly minimal theory
[ "Mathematics" ]
455
[ "Mathematical logic", "Model theory" ]
14,385,751
https://en.wikipedia.org/wiki/Margaret%20T.%20Fuller
Margaret "Minx" T. Fuller is an American developmental biologist known for her research on the male germ line and defining the role of the stem cell environment (the hub cells that establish the niche of particular cells) in specifying cell fate and differentiation. Fuller is the Reed-Hodgson Professor of Human Biology at Stanford University, and former chair of the Stanford Department of Developmental Biology. Biography Fuller earned a B.A. in physics from Brandeis University in 1974, and a Ph.D. in microbiology from MIT in 1980, working with Jonathan King. She completed her postdoctoral work in developmental genetics at Indiana University, working with Elizabeth Raff and Thomas Kaufman, from 1980 to 1983. Fuller joined the University of Colorado faculty and then joined Stanford University in 1990, where she began working on spermatogenesis, doing genetic analysis of microtubule structure and function. Fuller is married to fellow biologist Matthew P. Scott. Key papers Raff, E.C. and M. T. Fuller, et al., "Regulation of tubulin gene expression during embryogenesis in Drosophila melanogaster", Cell v.28, pp. 33–40 (1982). Fuller, M.T. et al., "Genetic Analysis of Microtubule Structure: A b-tubulin Mutation Causes the Formation of Aberrant Microtubule in vivo and in vitro", Journal of Cell Biology, v.104, pp. 385–394 (1987). Fuller, M.T. and P.G. Wilson, "Force and Counter Force in the Mitotic Spindle", Cell, v.71, pp. 547–550 (1992). Fuller, M.T., "Riding the Polar Winds: Chromosomes Motor Down East," Cell, v.81, pp. 5–8 (1995). Hales, K.G., M.T. Fuller, "Developmentally Regulated Mitochondrial Fusion Mediated by a Conserved, Novel, Predicted GTPase", Cell (1997). G. J. Hermann, J.W. Thatcher, J.P. Mills, K.G. Hales, M.T. Fuller, "Mitochondrial Fusion in Yeast Requires the Transmembrane GTPase Fzo1p", Journal of Cell Biology (1998). Kiger, A., H. White-Cooper, and M.T. fuller, "Somatic support cells restrict germ line stem cell self-renewal and promote differentiation", Nature v.407, pp. 750–754 (2000). Additional publications Margaret T. Fuller and Allan C. Spradling, Review, "Male and Female Drosophila Germline Stem Cells: Two Versions of Immortality", Science, v.316, n.5823, pp. 402–404 (April 20, 2007). Awards 1980 - Jane Coffin Childs Fellow 1985-86 - Searle Scholar 2004 - Reed-Hodgson Professor, Human Biology, Stanford University 2006 - Elected member, American Academy of Arts and Sciences 2008 - Elected member, National Academy of Sciences 2022 - Genetics Society of America Medal References 21st-century American biologists Stem cell researchers Living people Place of birth missing (living people) Year of birth missing (living people) Stanford University School of Medicine faculty Fellows of the American Academy of Arts and Sciences American developmental biologists Members of the United States National Academy of Sciences American women biologists 21st-century American women scientists 20th-century American women scientists 20th-century American biologists Brandeis University alumni Massachusetts Institute of Technology School of Science alumni University of Colorado Boulder faculty Searle Scholars Program recipients Members of the National Academy of Medicine
Margaret T. Fuller
[ "Biology" ]
748
[ "Stem cell researchers", "Stem cell research" ]
14,386,018
https://en.wikipedia.org/wiki/Glyoxalase%20system
The glyoxalase system is a set of enzymes that carry out the detoxification of methylglyoxal and the other reactive aldehydes that are produced as a normal part of metabolism. This system has been studied in both bacteria and eukaryotes. The detoxification is accomplished by the sequential action of two thiol-dependent enzymes: first glyoxalase I, which catalyzes the isomerization of the spontaneously formed hemithioacetal adduct between glutathione and 2-oxoaldehydes (such as methylglyoxal) into S-2-hydroxyacylglutathione; and second glyoxalase II, which hydrolyses these thiolesters and, in the case of methylglyoxal catabolism, produces D-lactate and GSH from S-D-lactoylglutathione. This system shows many of the typical features of the enzymes that dispose of endogenous toxins. Firstly, in contrast to the remarkably broad substrate range of many of the enzymes involved in xenobiotic metabolism, it shows a narrow substrate specificity. Secondly, intracellular thiols are required as part of its enzymatic mechanism, and thirdly, the system acts to recycle reactive metabolites back to a form which may be useful to cellular metabolism. Overview of Glyoxalase Pathway The pathway involves glyoxalase I (GLO1), glyoxalase II (GLO2), and reduced glutathione (GSH). In bacteria, there is an additional enzyme, the third glyoxalase protein, glyoxalase 3 (GLO3), which functions when no GSH is available. GLO3 has not yet been found in humans. The pathway begins with methylglyoxal (MG), which is produced from non-enzymatic reactions with DHAP or G3P produced in glycolysis. Methylglyoxal is then converted into S-D-lactoylglutathione by the enzyme GLO1 with a catalytic amount of GSH; this thiolester is hydrolyzed into non-toxic D-lactate by GLO2, during which GSH is regenerated to be consumed again by GLO1 with a new molecule of MG. D-lactate ultimately goes on to be metabolized into pyruvate. Regulation There are several small molecule inducers that can induce the glyoxalase pathway, either by promoting GLO1 function to increase conversion of MG into D-lactate (these are called GLO1 activators) or by directly reducing levels of MG or of its precursors (these are called MG scavengers). GLO1 activators include the synthetic drug candesartan and the natural compounds resveratrol, fisetin, the binary combination of trans-resveratrol and hesperetin (tRES-HESP), mangiferin, allyl isothiocyanate, phenethyl isothiocyanate, sulforaphane, and bardoxolone methyl; MG scavengers include aminoguanidine, alagebrium, and benfotiamine. There is also the small molecule pyridoxamine, which acts as both a GLO1 activator and an MG scavenger. Many inhibitors of GLO1 have been discovered, since GLO1 activity tends to be elevated in cancer cells; GLO1 thus serves as a potential therapeutic target for anti-cancer drug treatment and has been the focus of many research studies regarding its regulation in tumor cells. Medical Applications/Pharmacology Hyperglycemia, a hallmark of diabetes, combines with oxidative stress to create advanced glycation end-products (AGEs) that can lead to diabetic retinopathy (DR) and cause symptoms such as blindness in adults. The manipulation of the glyoxalase system in the mouse retina has shown that there is potential for targeting the glyoxalase system as a therapeutic treatment for DR by lowering the production of AGEs. Oxidative stress can lead to worsening of neurological diseases such as Alzheimer's, Parkinson's, and autism spectrum disorder. 
Flavonoids, a class of antioxidants that combat oxidative stress in the body, have been found to help decrease the production of reactive oxygen species (ROS), mostly by preventing the formation of free radicals but also partially by promoting the glyoxalase pathway, increasing transcription of the enzymes that synthesize GSH and thereby raising intracellular levels of GSH. Major metabolic pathways converging on the glyoxalase cycle Although the glyoxalase pathway is the main metabolic system that reduces methylglyoxal levels in the cell, other enzymes have also been found to convert methylglyoxal into non-AGE-producing species: specifically, 99% of MG is processed by glyoxalase metabolism, while less than 1% is metabolized into hydroxyacetone by aldo-keto reductases (AKRs) or into pyruvate by aldehyde dehydrogenases (ALDH). Other reactions have been found to produce MG that also feeds into the glyoxalase pathway. These reactions include catabolism of threonine and acetone, peroxidation of lipids, autoxidation of glucose, and degradation of glycated proteins. See also References Metabolism
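As a sketch, the two-step detoxification described above can be summarized in the following scheme (MG = methylglyoxal); the arrows paraphrase the text rather than quote a source:

```latex
\mathrm{MG} + \mathrm{GSH} \;\rightleftharpoons\; \text{hemithioacetal}
\;\xrightarrow{\ \mathrm{GLO1}\ }\; S\text{-D-lactoylglutathione}
\;\xrightarrow[\ \mathrm{H_2O}\ ]{\ \mathrm{GLO2}\ }\; \text{D-lactate} + \mathrm{GSH}
```

Note how GSH is consumed in the first step and regenerated in the last, so it acts catalytically across the cycle.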
Glyoxalase system
[ "Chemistry", "Biology" ]
1,156
[ "Cellular processes", "Biochemistry", "Metabolism" ]
14,386,363
https://en.wikipedia.org/wiki/Ferranti%20effect
In electrical engineering, the Ferranti effect is the increase in voltage occurring at the receiving end of a very long (> 200 km) AC electric power transmission line, relative to the voltage at the sending end, when the load is very small, or no load is connected. It can be stated as a factor, or as a percent increase. It was first observed during the installation of underground cables in Sebastian Ziani de Ferranti's 10,000-volt AC power distribution system in 1887. The capacitive line charging current produces a voltage drop across the line inductance that is in phase with the sending-end voltage, assuming negligible line resistance. Therefore, both line inductance and capacitance are responsible for this phenomenon. This can be analysed by considering the line as a transmission line whose source impedance is lower than the load impedance (unterminated). The effect is similar to an electrically short version of the quarter-wave impedance transformer, but with smaller voltage transformation. The Ferranti effect is more pronounced the longer the line and the higher the voltage applied. The relative voltage rise is proportional to the square of the line length and the square of frequency. The Ferranti effect is much more pronounced in underground cables, even in short lengths, because of their high capacitance per unit length and lower electrical impedance. An equivalent to the Ferranti effect occurs when inductive current flows through a series capacitance. Indeed, a lagging current I flowing through a capacitive impedance Z = −jXC produces a voltage difference ΔV = Z·I that adds to, rather than subtracts from, the sending-end voltage, hence an increased voltage on the receiving side. See also LC circuit "A series resonant circuit provides voltage magnification." Reflections of signals on conducting lines Failure of the first trans-Atlantic telegraph cable Telegrapher's equations Characteristic impedance#Transmission line model References Electric power transmission Electrical phenomena Ferranti Transmission lines
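A minimal sketch of the no-load voltage rise, using a lumped nominal-π line model; the per-unit-length inductance L′, capacitance C′ and line length l are illustrative symbols, not taken from the article:

```latex
% Receiving end open: the charging current of the receiving-end half of the
% shunt capacitance flows back through the series inductance.
V_s = V_r\Bigl(1 - \tfrac{1}{2}\,\omega^{2} L' C'\, l^{2}\Bigr)
\quad\Longrightarrow\quad
\frac{V_r}{V_s} \approx 1 + \tfrac{1}{2}\,\omega^{2} L' C'\, l^{2}
```

The rise grows with the square of both the length l and the angular frequency ω, matching the proportionality stated above.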
Ferranti effect
[ "Physics" ]
395
[ "Physical phenomena", "Electrical phenomena" ]
14,387,852
https://en.wikipedia.org/wiki/Lapaquistat
Lapaquistat (TAK-475) is a cholesterol-lowering drug candidate that was abandoned before being marketed. Unlike statins, which inhibit HMG-CoA reductase, lapaquistat metabolites inhibit squalene synthase, which is further downstream in the synthesis of cholesterol. It was hoped that side effects could be reduced by not disturbing the mevalonate pathway, which is important for other biochemical molecules besides cholesterol. However, there is increasing evidence that statins (which inhibit the mevalonate pathway) may be clinically useful precisely because they affect these other molecules (including protein prenylation). On March 28, 2008, Takeda halted further development of lapaquistat. While effective at lowering low-density lipoprotein cholesterol in a dose-dependent manner, development of the drug was ceased due to observations in clinical trials that it might cause liver damage in the high-dose trial groups. Data from knockout mouse studies suggest that accumulation of high levels of the metabolic substrate of squalene synthase and derivatives thereof accounts for the liver toxicity of squalene synthase inhibitors, and efforts to mitigate this substrate accumulation would likely be necessary for clinical success of a squalene synthase inhibitor. References Further reading Hypolipidemic agents Piperidines Carboxamides Benzoxazepines Phenol ethers Chloroarenes Abandoned drugs
Lapaquistat
[ "Chemistry" ]
292
[ "Drug safety", "Abandoned drugs" ]
14,388,992
https://en.wikipedia.org/wiki/MT-ND5
MT-ND5 is a gene of the mitochondrial genome coding for the NADH-ubiquinone oxidoreductase chain 5 protein (ND5). The ND5 protein is a subunit of NADH dehydrogenase (ubiquinone), which is located in the mitochondrial inner membrane and is the largest of the five complexes of the electron transport chain. Variations in human MT-ND5 are associated with mitochondrial encephalomyopathy, lactic acidosis, and stroke-like episodes (MELAS) as well as some symptoms of Leigh's syndrome and Leber's hereditary optic neuropathy (LHON). Structure MT-ND5 is located in mitochondrial DNA from base pair 12,337 to 14,148. The MT-ND5 gene produces a 67 kDa protein composed of 603 amino acids. MT-ND5 is one of seven mitochondrial genes encoding subunits of the enzyme NADH dehydrogenase (ubiquinone), together with MT-ND1, MT-ND2, MT-ND3, MT-ND4, MT-ND4L, and MT-ND6. Also known as Complex I, this enzyme is the largest of the respiratory complexes. The structure is L-shaped, with a long, hydrophobic transmembrane domain and a hydrophilic domain for the peripheral arm that includes all the known redox centres and the NADH binding site. MT-ND5 and the rest of the mitochondrially encoded subunits are the most hydrophobic of the subunits of Complex I and form the core of the transmembrane region. Function The MT-ND5 product is a subunit of the respiratory chain Complex I that is believed to belong to the minimal assembly of core proteins required to catalyze NADH dehydrogenation and electron transfer to ubiquinone (coenzyme Q10). Initially, NADH binds to Complex I and transfers two electrons to the isoalloxazine ring of the flavin mononucleotide (FMN) prosthetic arm to form FMNH2. The electrons are transferred through a series of iron-sulfur (Fe-S) clusters in the prosthetic arm and finally to coenzyme Q10 (CoQ), which is reduced to ubiquinol (CoQH2). The flow of electrons changes the redox state of the protein, resulting in a conformational change and pK shift of the ionizable side chain, which pumps four hydrogen ions out of the mitochondrial matrix. Clinical significance A small percentage of cases of mitochondrial encephalomyopathy, lactic acidosis, and stroke-like episodes (MELAS) are caused by a G>A mutation at base pair 13513 in the MT-ND5 gene. Mutations in the MT-ND5 gene cause impaired Complex I function of the mitochondrial electron transport system, affecting those tissues that require significant energy input, such as the brain and muscles. Cardiac and renal involvement as well as symptoms such as myopathy and lactic acidosis can also be observed. Some patients with MT-ND5 mutations display the major features of MELAS and MERRF, while others show symptoms of Leigh's syndrome and/or Leber's hereditary optic neuropathy (LHON). Interactions MT-ND5 interacts with glutamine synthetase (GLUL), LIG4 and YME1L1. References External links Mass spectrometry characterization of MT-ND5 at COPaKB GeneReviews/NCBI/NIH/UW entry on Mitochondrial DNA-Associated Leigh Syndrome and NARP Proteins Human mitochondrial genes
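The overall Complex I reaction sketched in the Function section can be written as a single equation; this is the standard textbook stoichiometry, not quoted from this article:

```latex
\mathrm{NADH} + \mathrm{H^+} + \mathrm{CoQ} + 4\,\mathrm{H^+_{matrix}}
\;\longrightarrow\;
\mathrm{NAD^+} + \mathrm{CoQH_2} + 4\,\mathrm{H^+_{intermembrane}}
```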
MT-ND5
[ "Chemistry" ]
785
[ "Biomolecules by chemical classification", "Proteins", "Molecular biology" ]
14,388,997
https://en.wikipedia.org/wiki/MT-ND1
MT-ND1 is a gene of the mitochondrial genome coding for the NADH-ubiquinone oxidoreductase chain 1 (ND1) protein. The ND1 protein is a subunit of NADH dehydrogenase, which is located in the mitochondrial inner membrane and is the largest of the five complexes of the electron transport chain. Variants of the human MT-ND1 gene are associated with mitochondrial encephalomyopathy, lactic acidosis, and stroke-like episodes (MELAS), Leigh's syndrome (LS), Leber's hereditary optic neuropathy (LHON) and increases in adult BMI. Structure MT-ND1 is located in mitochondrial DNA from base pair 3,307 to 4,262. The MT-ND1 gene produces a 36 kDa protein composed of 318 amino acids. MT-ND1 is one of seven mitochondrial genes encoding subunits of the enzyme NADH dehydrogenase (ubiquinone), together with MT-ND2, MT-ND3, MT-ND4, MT-ND4L, MT-ND5, and MT-ND6. Also known as Complex I, this enzyme is the largest of the respiratory complexes. The structure is L-shaped with a long, hydrophobic transmembrane domain and a hydrophilic domain for the peripheral arm that includes all the known redox centres and the NADH binding site. The MT-ND1 product and the rest of the mitochondrially encoded subunits are the most hydrophobic of the subunits of Complex I and form the core of the transmembrane region. Function MT-ND1-encoded NADH-ubiquinone oxidoreductase chain 1 is a subunit of the respiratory chain Complex I that is supposed to belong to the minimal assembly of core proteins required to catalyze NADH dehydrogenation and electron transfer to ubiquinone (coenzyme Q10). Initially, NADH binds to Complex I and transfers two electrons to the isoalloxazine ring of the flavin mononucleotide (FMN) prosthetic arm to form FMNH2. The electrons are transferred through a series of iron-sulfur (Fe-S) clusters in the prosthetic arm and finally to coenzyme Q10 (CoQ), which is reduced to ubiquinol (CoQH2). The flow of electrons changes the redox state of the protein, resulting in a conformational change and pK shift of the ionizable side chain, which pumps four hydrogen ions out of the mitochondrial matrix. Clinical significance Pathogenic variants of the mitochondrial gene MT-ND1 are known to cause mtDNA-associated Leigh syndrome, as are variants of MT-ATP6, MT-TL1, MT-TK, MT-TW, MT-TV, MT-ND2, MT-ND3, MT-ND4, MT-ND5, MT-ND6 and MT-CO3. Abnormalities in mitochondrial energy generation result in neurodegenerative disorders like Leigh syndrome, which is characterized by an onset of symptoms between 12 months and three years of age. The symptoms frequently present themselves following a viral infection and include movement disorders and peripheral neuropathy, as well as hypotonia, spasticity and cerebellar ataxia. Roughly half of affected individuals die of respiratory or cardiac failure by the age of three. Leigh syndrome is a maternally inherited disorder and its diagnosis is established through genetic testing of the aforementioned mitochondrial genes, including MT-ND1. The m.4171C>A/MT-ND1 mutation also leads to a Leigh-like phenotype as well as bilateral brainstem lesions affecting the vestibular nuclei, resulting in vision loss, vomiting and vertigo. These complex I genes have been associated with a variety of neurodegenerative disorders, including Leber's hereditary optic neuropathy (LHON), mitochondrial encephalomyopathy with stroke-like episodes (MELAS), overlap between LHON and MELAS, and the previously mentioned Leigh syndrome. 
Mitochondrial dysfunction resulting from variants of MT-ND1, MT-ND2 and MT-ND4L has been linked to BMI in adults and has been implicated in metabolic disorders including obesity, diabetes and hypertension. References Further reading External links Mass spectrometry characterization of MT-ND1 at COPaKB Proteins Human mitochondrial genes
MT-ND1
[ "Chemistry" ]
949
[ "Biomolecules by chemical classification", "Proteins", "Molecular biology" ]
14,389,789
https://en.wikipedia.org/wiki/Proteinase%20K
In molecular biology, Proteinase K (also known as protease K, endopeptidase K, Tritirachium alkaline proteinase, Tritirachium album serine proteinase, or Tritirachium album proteinase K) is a broad-spectrum serine protease. The enzyme was discovered in 1974 in extracts of the fungus Parengyodontium album (formerly Engyodontium album or Tritirachium album). Proteinase K is able to digest hair (keratin), hence the name "Proteinase K". The predominant site of cleavage is the peptide bond adjacent to the carboxyl group of aliphatic and aromatic amino acids with blocked alpha-amino groups. It is commonly used for its broad specificity. This enzyme belongs to peptidase family S8 (subtilisin). The molecular weight of Proteinase K is 28,900 daltons (28.9 kDa). Enzyme activity Activated by calcium, the enzyme digests proteins preferentially after hydrophobic amino acids (aliphatic and aromatic residues). Although calcium ions do not affect the enzyme activity, they do contribute to its stability. Proteins will be completely digested if the incubation time is long and the protease concentration high enough. Upon removal of the calcium ions, the stability of the enzyme is reduced, but the proteolytic activity remains. Proteinase K has two binding sites for Ca2+, which are located close to the active center but are not directly involved in the catalytic mechanism. The residual activity is sufficient to digest proteins, which usually contaminate nucleic acid preparations. Therefore, digestion with Proteinase K for the purification of nucleic acids is usually performed in the presence of EDTA (inhibition of metal-ion-dependent enzymes such as nucleases). Proteinase K is also stable over a wide pH range (4–12), with a pH optimum of 8.0. An elevation of the reaction temperature from 37 °C to 50–60 °C may increase the activity several-fold, as can the addition of 0.5–1% sodium dodecyl sulfate (SDS), guanidinium chloride (3 M), guanidinium thiocyanate (1 M) or urea (4 M). The above-mentioned conditions enhance Proteinase K activity by making its substrate cleavage sites more accessible. Temperatures above 65 °C, trichloroacetic acid (TCA) or the serine protease inhibitors AEBSF, PMSF or DFP inhibit the activity. Proteinase K will not be inhibited by guanidinium chloride, guanidinium thiocyanate, urea, Sarkosyl, Triton X-100, Tween 20, SDS, citrate, iodoacetic acid, EDTA, or by other serine protease inhibitors such as Nα-tosyl-Lys chloromethyl ketone (TLCK) and Nα-tosyl-Phe chloromethyl ketone (TPCK). Protease K activity in commonly used buffers Applications Proteinase K is commonly used in molecular biology to digest protein and remove contamination from preparations of nucleic acid. Addition of Proteinase K to nucleic acid preparations rapidly inactivates nucleases that might otherwise degrade the DNA or RNA during purification. It is highly suited to this application since the enzyme is active in the presence of chemicals that denature proteins, such as SDS and urea, chelating agents such as EDTA, sulfhydryl reagents, as well as trypsin or chymotrypsin inhibitors. Proteinase K is used for the destruction of proteins in cell lysates (tissue, cell culture cells) and for the release of nucleic acids, since it very effectively inactivates DNases and RNases. 
Some examples of applications: Proteinase K is very useful in the isolation of highly native, undamaged DNAs or RNAs, since most microbial or mammalian DNases and RNases are rapidly inactivated by the enzyme, particularly in the presence of 0.5–1% SDS. The enzyme's activity towards native proteins is stimulated by denaturants such as SDS. In contrast, when measured using peptide substrates, denaturants inhibit the enzyme. The reason for this result is that the denaturing agents unfold the protein substrates and make them more accessible to the protease. Inhibitors Proteinase K has two disulfide bonds, but it exhibits higher proteolytic activity in the presence of reducing agents (e.g. 5 mM DTT), suggesting that the presumed reduction of its own disulfide bonds does not lead to its irreversible inactivation. Proteinase K is inhibited by serine protease inhibitors such as phenylmethylsulfonyl fluoride (PMSF), diisopropyl fluorophosphate (DFP), or 4-(2-aminoethyl)benzenesulfonyl fluoride (AEBSF). Proteinase K activity is unaffected by the sulfhydryl-modifying reagents para-chloromercuribenzoic acid (PCMB), N-alpha-tosyl-L-lysyl chloromethyl ketone (TLCK), and N-alpha-tosyl-L-phenylalanine chloromethyl ketone (TPCK), although presumably if these reagents were included alongside disulfide-reducing reagents which exposed the typically unavailable Proteinase K thiols, it might then become inhibited. References External links Proteinase K Worthington enzyme manual Biochemistry methods EC 3.4.21
Proteinase K
[ "Chemistry", "Biology" ]
1,234
[ "Biochemistry methods", "Biochemistry" ]
14,389,994
https://en.wikipedia.org/wiki/Natural%20landscape
A natural landscape is the original landscape that exists before it is acted upon by human culture. The natural landscape and the cultural landscape are separate parts of the landscape. However, in the 21st century, landscapes that are totally untouched by human activity no longer exist, so that reference is sometimes now made to degrees of naturalness within a landscape. In Silent Spring (1962) Rachel Carson describes a roadside verge as it used to look: "Along the roads, laurel, viburnum and alder, great ferns and wildflowers delighted the traveler's eye through much of the year" and then how it looks now following the use of herbicides: "The roadsides, once so attractive, were now lined with browned and withered vegetation as though swept by fire". Even though the landscape before it is sprayed is biologically degraded, and may well contain alien species, the concept of what might constitute a natural landscape can still be deduced from the context. The phrase "natural landscape" was first used in connection with landscape painting and landscape gardening, to contrast a formal style with a more natural one, closer to nature. Alexander von Humboldt (1769–1859) was to further conceptualize this into the idea of a natural landscape separate from the cultural landscape. Then in 1908 geographer Otto Schlüter developed the terms original landscape (Urlandschaft) and its opposite cultural landscape (Kulturlandschaft) in an attempt to give the science of geography a subject matter that was different from the other sciences. An early use of the actual phrase "natural landscape" by a geographer can be found in Carl O. Sauer's paper "The Morphology of Landscape" (1925). Origins of the term The concept of a natural landscape was first developed in connection with landscape painting, though the actual term itself was first used in relation to landscape gardening. In both cases it was used to contrast a formal style with a more natural one, that is closer to nature. Chunglin Kwa suggests "that a seventeenth-century or early-eighteenth-century person could experience natural scenery 'just like on a painting,' and so, with or without the use of the word itself, designate it as a landscape." With regard to landscape gardening, John Aikin commented in 1794: "Whatever, therefore, there be of novelty in the singular scenery of an artificial garden, it is soon exhausted, whereas the infinite diversity of a natural landscape presents an inexhaustible store of new forms". Writing in 1844, the prominent American landscape gardener Andrew Jackson Downing comments: "straight canals, round or oblong pieces of water, and all the regular forms of the geometric mode ... would evidently be in violent opposition to the whole character and expression of natural landscape". In his extensive travels in South America, Alexander von Humboldt became the first to conceptualize a natural landscape separate from the cultural landscape, though he does not actually use these terms. Andrew Jackson Downing was aware of, and sympathetic to, Humboldt's ideas, which therefore influenced American landscape gardening. Subsequently, the geographer Otto Schlüter, in 1908, argued that defining geography as a Landschaftskunde (landscape science) would give geography a logical subject matter shared by no other discipline. He defined two forms of landscape: the Urlandschaft (original landscape), or landscape that existed before major human-induced changes, and the Kulturlandschaft (cultural landscape), a landscape created by human culture. 
Schlüter argued that the major task of geography was to trace the changes in these two landscapes. The term natural landscape is sometimes used as a synonym for wilderness, but for geographers natural landscape is a scientific term which refers to the biological, geological, climatological and other aspects of a landscape, not the cultural values that are implied by the word wilderness. The natural and conservation Matters are complicated by the fact that the words nature and natural have more than one meaning. On the one hand there is the main dictionary meaning for nature: "The phenomena of the physical world collectively, including plants, animals, the landscape, and other features and products of the earth, as opposed to humans or human creations." On the other hand, there is the growing awareness, especially since Charles Darwin, of humanity's biological affinity with nature. The dualism of the first definition has its roots in an "ancient concept", because early people viewed "nature, or the nonhuman world […] as a divine Other, godlike in its separation from humans." In the West, Christianity's myth of the fall, that is the expulsion of humankind from the Garden of Eden, where all creation lived in harmony, into an imperfect world, has been the major influence. Cartesian dualism, from the seventeenth century on, further reinforced this dualistic thinking about nature. With this dualism goes a value judgement as to the superiority of the natural over the artificial. Modern science, however, is moving towards a holistic view of nature. America What is meant by natural, within the American conservation movement, has been changing over the last century and a half. In the mid-nineteenth century Americans began to realize that the land was becoming more and more domesticated and wildlife was disappearing. This led to the creation of American National Parks and other conservation sites. Initially it was believed that all that needed to be done was to separate what was seen as natural landscape and "avoid disturbances such as logging, grazing, fire and insect outbreaks." This, and subsequent environmental policy, until recently, was influenced by ideas of the wilderness. However, this policy was not consistently applied, and in Yellowstone Park, to take one example, the existing ecology was altered, firstly by the exclusion of Native Americans and later with the virtual extermination of the wolf population. A century later, in the mid-twentieth century, it began to be believed that the earlier policy of "protection from disturbance was inadequate to preserve park values", and that direct human intervention was necessary to restore the landscape of National Parks to its 'natural' condition. In 1963 the Leopold Report argued that "A national park should represent a vignette of primitive America". This policy change eventually led to the restoration of wolves in Yellowstone Park in the 1990s. However, recent research in various disciplines indicates that a pristine natural or "primitive" landscape is a myth, and it is now realised that people have been changing the natural into a cultural landscape for a long while, and that there are few places untouched in some way by human influence. The earlier conservation policies are now seen as cultural interventions. 
The idea of what is natural and what artificial or cultural, and how to maintain the natural elements in a landscape, has been further complicated by the discovery of global warming and how it is changing natural landscapes. Also important is a recent reaction amongst scholars against dualistic thinking about nature and culture. Maria Kaika comments: "Nowadays, we are beginning to see nature and culture as intertwined once again – not ontologically separated anymore […]. What I used to perceive as a compartmentalized world, consisting of neatly and tightly sealed, autonomous 'space envelopes' (the home, the city, and nature) was, in fact, a messy socio-spatial continuum". And William Cronon argues against the idea of wilderness because it "involves a dualistic vision in which the human is entirely outside the natural" and affirms that "wildness (as opposed to wilderness) can be found anywhere", even "in the cracks of a Manhattan sidewalk." According to Cronon we have to "abandon the dualism that sees the tree in the garden as artificial […] and the tree in the wilderness as natural […] Both in some ultimate sense are wild." Here he bends somewhat the regular dictionary meaning of wild, to emphasise that nothing natural, even in a garden, is fully under human control. Europe The landscape of Europe has been considerably altered by people, and even in an area with a low population density, like the Cairngorm Mountains of Scotland, only the high summits consist entirely of natural elements. These high summits are of course only part of the Cairngorms, and there are no longer wolves, bears, wild boar or lynx in Scotland's wilderness. The Scots pine in the form of the Caledonian forest also covered much more of the Scottish landscape than it does today. The Swiss National Park, however, represents a more natural landscape. It was founded in 1914, and is one of the earliest national parks in Europe. Visitors are not allowed to leave the motor road or the paths through the park, make fire, or camp. The only building within the park is the Chamanna Cluozza mountain hut. It is also forbidden to disturb the animals or the plants, or to take home anything found in the park. Dogs are not allowed. Due to these strict rules, the Swiss National Park is the only park in the Alps that has been categorized by the IUCN as a strict nature reserve, which is the highest protection level. History of natural landscape No place on the Earth is unaffected by people and their culture. People are part of biodiversity, but human activity affects biodiversity, and this alters the natural landscape. Humans have altered the landscape to such an extent that few places on earth remain pristine; but once free of human influences, the landscape can return to a natural or near-natural state. Even the remote Yukon and Alaskan wilderness, the bi-national Kluane-Wrangell-St. Elias-Glacier Bay-Tatshenshini-Alsek park system comprising Kluane, Wrangell-St Elias, Glacier Bay and Tatshenshini-Alsek parks, a UNESCO World Heritage Site, is not free from human influence, because Kluane National Park lies within the traditional territories of the Champagne and Aishihik First Nations and the Kluane First Nation, who have a long history of living in this region. Through their respective Final Agreements with the Canadian Government, they have made into law their rights to harvest in this region. 
Processes Over different intervals of time, natural landscapes have been shaped by a series of processes, including tectonics, erosion, weathering and vegetation. Examples of cultural forces Cultural forces, intentionally or unintentionally, have an influence upon the landscape. Cultural landscapes are places or artifacts created and maintained by people. Examples of cultural intrusions into a landscape are: fences, roads, parking lots, sand pits, buildings, hiking trails, management of plants, including the introduction of invasive species, extraction or removal of plants, management of animals, mining, hunting, natural landscaping, farming and forestry, pollution. Areas that might be confused with a natural landscape include public parks, farms, orchards, artificial lakes and reservoirs, managed forests, golf courses, nature center trails, and gardens. See also Notes References External links Developing a forest naturalness indicator for Europe Scottish heritage: Natural Spaces Carl O. Sauer. "The Morphology of Landscape", University of California Publications in Geography, vol. 2, No. 2, 12 October 1925, pp. 19–53 Biodiversity Biology terminology Ecology Environmental science Environmental law Evolution Geography terminology Habitats Landscape Philosophy of biology Wilderness
Natural landscape
[ "Biology", "Environmental_science" ]
2,323
[ "Biodiversity", "Ecology" ]
14,390,885
https://en.wikipedia.org/wiki/Effective%20molarity
In chemistry, the effective molarity (denoted EM) is defined as the ratio between the first-order rate constant of an intramolecular reaction and the second-order rate constant of the corresponding intermolecular reaction (kinetic effective molarity) or the ratio between the equilibrium constant of an intramolecular reaction and the equilibrium constant of the corresponding intermolecular reaction (thermodynamic effective molarity). EM has the dimension of concentration. High EM values always indicate greater ease of intramolecular processes over the corresponding intermolecular ones. Effective molarities can be used to get a deeper understanding of the effects of intramolecularity on reaction courses. See also Cyclic compound Intramolecular reaction Macrocycle Polymerization References Physical organic chemistry
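In symbols (generic notation, with units shown to make the dimensional claim explicit):

```latex
\mathrm{EM}_{\text{kinetic}} = \frac{k_{\text{intra}}\ [\mathrm{s^{-1}}]}{k_{\text{inter}}\ [\mathrm{M^{-1}\,s^{-1}}]},
\qquad
\mathrm{EM}_{\text{thermodynamic}} = \frac{K_{\text{intra}}}{K_{\text{inter}}}
```

Both ratios carry the dimension of concentration (molarity), as stated above.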
Effective molarity
[ "Chemistry" ]
164
[ "Physical organic chemistry" ]
14,391,189
https://en.wikipedia.org/wiki/Alexei%20Strolman
Alexei Petrovich Strolman (c. 1811–1898, aged 86–87), Строльман (Алексей Петрович), was a Russian mining engineer, historian and author. He is known for his work with the commission for the emancipation of the serfs, and for his history of mining in Russia published as a series of articles in the Mining Journal (Russia). Strolman received his education from the mining cadet school in St. Petersburg, and began publishing on mining and geological topics as early as 1835. He was a compatriot of Nikolay Milyutin. In 1859, he was part of the Editing Commission which worked out the text of the Emancipation Manifesto of 3 March 1861 (NS), and then he worked for four years on its implementation. From 1870 to 1880, Strolman was the scientist member on the Imperial Mining Board Committee. He devoted the final years of his life to writing, producing a series of articles on mining for the Mining Journal (Russia) (Горный журнал, Gornyi Zhurnal). Some of Strolman's articles were translated into French and German. Notes External links Мир словарей: СТРОЛЬМАН АЛЕКСЕЙ ПЕТРОВИЧ in Russian (Mir Dictionary and Encyclopedia: Strolman, Alexei Petrovich) Mining engineers 1810s births 1898 deaths Engineers from the Russian Empire
Alexei Strolman
[ "Engineering" ]
318
[ "Mining engineering", "Mining engineers" ]
14,391,787
https://en.wikipedia.org/wiki/Bayes%20linear%20statistics
Bayes linear statistics is a subjectivist statistical methodology and framework. Traditional subjective Bayesian analysis is based upon fully specified probability distributions, which are very difficult to specify at the necessary level of detail. Bayes linear analysis attempts to solve this problem by developing theory and practice for using partially specified probability models. Bayes linear in its current form has been primarily developed by Michael Goldstein. Mathematically and philosophically it extends Bruno de Finetti's Operational Subjective approach to probability and statistics. Motivation Consider first a traditional Bayesian analysis where you expect to shortly know D and you would like to know more about some other observable B. In the traditional Bayesian approach it is required that every possible outcome is enumerated, i.e. the outcome space is the cross product of the partitions of B and D. If represented on a computer, where B requires n bits and D m bits, then the number of states required is 2^(n+m). The first step to such an analysis is to determine a person's subjective probabilities, e.g. by asking about their betting behaviour for each of these outcomes. When we learn D, conditional probabilities for B are determined by the application of Bayes' rule. Practitioners of subjective Bayesian statistics routinely analyse datasets where the size of this set is large enough that subjective probabilities cannot be meaningfully determined for every element of D × B. This is normally accomplished by assuming exchangeability and then the use of parameterized models with prior distributions over parameters, appealing to de Finetti's theorem to justify that this produces valid operational subjective probabilities over D × B. The difficulty with such an approach is that the validity of the statistical analysis requires that the subjective probabilities are a good representation of an individual's beliefs, yet this method results in a very precise specification over D × B, and it is often difficult to articulate what it would mean to adopt these belief specifications. In contrast to the traditional Bayesian paradigm, Bayes linear statistics, following de Finetti, uses prevision or subjective expectation as a primitive; probability is then defined as the expectation of an indicator variable. Instead of specifying a subjective probability for every element in the partition D × B, the analyst specifies subjective expectations for just a few quantities that they are interested in or feel knowledgeable about. Then, instead of conditioning, an adjusted expectation is computed by a rule that is a generalization of Bayes' rule that is based upon expectation. The use of the word linear in the title refers to de Finetti's arguments that probability theory is a linear theory (de Finetti argued against the more common measure-theoretic approach). Example In Bayes linear statistics, the probability model is only partially specified, and it is not possible to calculate conditional probability by Bayes' rule. Instead Bayes linear suggests the calculation of an adjusted expectation. To conduct a Bayes linear analysis it is necessary to identify some values that you expect to know shortly by making measurements D and some future value which you would like to know B. Here D refers to a vector containing data and B to a vector containing quantities you would like to predict. For the following example B and D are taken to be two-dimensional vectors, i.e. B = (Y1, Y2), D = (X1, X2). 
In order to specify a Bayes linear model it is necessary to supply expectations for the vectors B and D, and to also specify the correlation between each component of B and each component of D. For example the expectations are specified as E(Y1) = 5, E(Y2) = 3, E(X1) = 5, E(X2) = 3, and the covariance matrix is specified by Var(X1) = Var(X2) = Var(Y1) = Var(Y2) = 1, Cov(X1, X2) = u, Cov(Y1, Y2) = v, and Cov(Xi, Yj) = γ for all i, j. The repetition in this matrix has some interesting implications to be discussed shortly. An adjusted expectation is a linear estimator of the form c0 + c1X1 + c2X2, where c0, c1 and c2 are chosen to minimise the prior expected loss for the observations, i.e. Y1 and Y2 in this case. That is, for Y1, the coefficients c0, c1, c2 are chosen to minimise E([Y1 − c0 − c1X1 − c2X2]^2), the prior expected squared error in estimating Y1. In general the adjusted expectation is calculated as E_D(X) = Σ_{i=0}^{k} h_i D_i, setting h_0, ..., h_k to minimise E([X − Σ_{i=0}^{k} h_i D_i]^2). From a proof provided in (Goldstein and Wooff 2007) it can be shown that E_D(X) = E(X) + Cov(X, D) Var(D)^{-1} (D − E(D)). For the case where Var(D) is not invertible the Moore–Penrose pseudoinverse should be used instead. Furthermore, the adjusted variance of the variable X after observing the data D is given by Var_D(X) = Var(X) − Cov(X, D) Var(D)^{-1} Cov(D, X). See also Imprecise probability External links Bayes Linear Methods References Goldstein, M. (1981) Revising Previsions: a Geometric Interpretation (with Discussion). Journal of the Royal Statistical Society, Series B, 43(2), 105-130 Goldstein, M. (2006) Subjectivism principles and practice. Bayesian Analysis Michael Goldstein, David Wooff (2007) Bayes Linear Statistics, Theory & Methods, Wiley. de Finetti, B. (1931) "Probabilism: A Critical Essay on the Theory of Probability and on the Value of Science," (translation of 1931 article) in Erkenntnis, volume 31, September 1989. The entire double issue is devoted to de Finetti's philosophy of probability. de Finetti, B. (1937) "La Prévision: ses lois logiques, ses sources subjectives," Annales de l'Institut Henri Poincaré, - "Foresight: its Logical Laws, Its Subjective Sources," (translation of the 1937 article in French) in H. E. Kyburg and H. E. Smokler (eds), Studies in Subjective Probability, New York: Wiley, 1964. de Finetti, B. (1974) Theory of Probability, (translation by A Machi and AFM Smith of 1970 book) 2 volumes, New York: Wiley, 1974-5. Linear statistics Probability interpretations
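A minimal numerical sketch of the adjustment formulas above, using NumPy; the values chosen for u, v, γ and the observed data are illustrative assumptions only, not taken from the article:

```python
import numpy as np

# Numerical sketch of the adjustment rules quoted above:
#   E_D(B)   = E(B) + Cov(B, D) Var(D)^{-1} (D - E(D))
#   Var_D(B) = Var(B) - Cov(B, D) Var(D)^{-1} Cov(D, B)
E_D_prior = np.array([5.0, 3.0])      # E(X1), E(X2)
E_B_prior = np.array([5.0, 3.0])      # E(Y1), E(Y2)
u, v, gamma = 0.5, 0.5, 0.3           # assumed covariance parameters

var_D  = np.array([[1.0, u], [u, 1.0]])   # Var(D)
var_B  = np.array([[1.0, v], [v, 1.0]])   # Var(B)
cov_BD = np.full((2, 2), gamma)           # Cov(B, D); Cov(D, B) is its transpose

d_obs = np.array([6.0, 2.5])              # observed data D

# The pseudoinverse handles a singular Var(D), as the text recommends.
var_D_pinv = np.linalg.pinv(var_D)

adjusted_expectation = E_B_prior + cov_BD @ var_D_pinv @ (d_obs - E_D_prior)
adjusted_variance = var_B - cov_BD @ var_D_pinv @ cov_BD.T

print("E_D(B)   =", adjusted_expectation)
print("Var_D(B) =\n", adjusted_variance)
```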
Bayes linear statistics
[ "Mathematics" ]
1,133
[ "Probability interpretations" ]
14,391,804
https://en.wikipedia.org/wiki/Wilberforce%20pendulum
A Wilberforce pendulum, invented by British physicist Lionel Robert Wilberforce around 1896, consists of a mass suspended by a long helical spring and free to turn on its vertical axis, twisting the spring. It is an example of a coupled mechanical oscillator, often used as a demonstration in physics education. The mass can both bob up and down on the spring, and rotate back and forth about its vertical axis with torsional vibrations. When correctly adjusted and set in motion, it exhibits a curious motion in which periods of purely rotational oscillation gradually alternate with periods of purely up and down oscillation. The energy stored in the device shifts slowly back and forth between the translational 'up and down' oscillation mode and the torsional 'clockwise and counterclockwise' oscillation mode, until the motion eventually dies away. Despite the name, in normal operation it does not swing back and forth as ordinary pendulums do. The mass usually has opposing pairs of radial 'arms' sticking out horizontally, threaded with small weights that can be screwed in or out to adjust the moment of inertia to 'tune' the torsional vibration period. Explanation The device's intriguing behavior is caused by a slight coupling between the two motions or degrees of freedom, due to the geometry of the spring. When the weight is moving up and down, each downward excursion of the spring causes it to unwind slightly, giving the weight a slight twist. When the weight moves up, it causes the spring to wind slightly tighter, giving the weight a slight twist in the other direction. So when the weight is moving up and down, each oscillation gives a slight alternating rotational torque to the weight. In other words, during each oscillation some of the energy in the translational mode leaks into the rotational mode. Slowly the up and down movement gets less, and the rotational movement gets greater, until the weight is just rotating and not bobbing. Similarly, when the weight is rotating back and forth, each twist of the weight in the direction that unwinds the spring also reduces the spring tension slightly, causing the weight to sag a little lower. Conversely, each twist of the weight in the direction of winding the spring tighter causes the tension to increase, pulling the weight up slightly. So each oscillation of the weight back and forth causes it to bob up and down more, until all the energy is transferred back from the rotational mode into the translational mode and it is just bobbing up and down, not rotating. A Wilberforce pendulum can be designed by approximately equating the frequency of harmonic oscillations of the spring-mass oscillator fT, which is dependent on the spring constant k of the spring and the mass m of the system, and the frequency of the rotating oscillator fR, which is dependent on the moment of inertia I and the torsional coefficient κ of the system. The pendulum is usually adjusted by moving the moment of inertia adjustment weights towards or away from the centre of the mass by equal amounts on each side in order to modify fR, until the rotational frequency is close to the translational frequency, so the alternation period will be slow enough to allow the change between the two modes to be clearly seen. Alternation or 'beat' frequency The frequency at which the two modes alternate is equal to the difference between the oscillation frequencies of the modes. The closer in frequency the two motions are, the slower will be the alternation between them. 
This behavior, common to all coupled oscillators, is analogous to the phenomenon of beats in musical instruments, in which two tones combine to produce a 'beat' tone at the difference between their frequencies. For example, if the pendulum bobs up and down at a rate of fT = 4 Hz, and rotates back and forth about its axis at a rate of fR = 4.1 Hz, the alternation rate falt will be falt = fR − fT = 4.1 Hz − 4 Hz = 0.1 Hz, corresponding to an alternation period of 1/falt = 10 seconds. So the motion will change from rotational to translational in 5 seconds and then back to rotational in the next 5 seconds. If the two frequencies are exactly equal, the beat frequency will be zero, and resonance will occur. References External links Video of Wilberforce pendulum oscillating, by Berkeley Lecture Demonstrations, YouTube.com, retrieved April 25, 2008 Pendulums Dynamics (mechanics)
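A small sketch of the tuning and beat calculation described above; the numerical values for k, m, κ and I are illustrative assumptions, not measurements from any real pendulum:

```python
import math

# Translational and torsional natural frequencies of the pendulum:
#   f_T = (1 / 2*pi) * sqrt(k / m),   f_R = (1 / 2*pi) * sqrt(kappa / I)
k, m = 5.0, 0.5            # spring constant (N/m), mass (kg) -- assumed values
kappa, I = 1.1e-3, 1.0e-4  # torsion coefficient (N*m/rad), moment of inertia (kg*m^2)

f_T = math.sqrt(k / m) / (2 * math.pi)
f_R = math.sqrt(kappa / I) / (2 * math.pi)

# The two modes alternate at the difference of the two frequencies.
f_alt = abs(f_R - f_T)
print(f"f_T = {f_T:.3f} Hz, f_R = {f_R:.3f} Hz, alternation = {f_alt:.3f} Hz")
if f_alt > 0:
    print(f"full energy-exchange cycle takes {1 / f_alt:.1f} s")
```

Adjusting the moment of inertia I (by moving the small weights on the arms) brings f_R toward f_T, slowing the alternation, as the text describes.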
Wilberforce pendulum
[ "Physics" ]
894
[ "Motion (physics)", "Physical phenomena", "Dynamics (mechanics)", "Classical mechanics" ]
14,392,996
https://en.wikipedia.org/wiki/Document%20camera
Document cameras, also known as visual presenters, visualizers, digital overheads, or docucams, are high-resolution, real-time image capture devices used to display an object to a large audience, such as in a classroom or a lecture hall, or in online presentations such as webinars. A camera is mounted on an arm, allowing it to be positioned over a page. The camera connects to a projector or similar video display system, enabling a presenter to write on a sheet of paper or display a two- or three-dimensional object while the audience watches. Alternatively, the images can be broadcast to an online audience or recorded for later use. Larger objects can be positioned in front of the camera, which can then be rotated as needed. Use cases Document cameras are commonly used in: Lecture halls and classrooms Presentation of material in conferences, meetings, and training sessions Videoconferencing and telepresence Presentation of evidence in courtrooms Various medical applications (telemedicine, telepathology, display of radiology images) Document cameras can be used instead of overhead projectors. A document camera can enlarge the small print in books and project a printed page as if it were a traditional transparency by using a zoom feature. Most document cameras can also send a video signal to a computer. History Document cameras were developed to meet increased demand for the ability to project and present original documents, plans, drawings, and objects directly, rather than necessitating the prior preparation required for their use as part of an overhead projector-based presentation. The first document camera, also known as a visualizer, was developed by WolfVision and Elmo and launched at the Photokina Trade Fair in 1988. The widespread use of computers, projectors, and popular presentation programs such as Microsoft PowerPoint in meeting rooms led to overhead projectors being used less frequently. Early prototypes were simple video cameras mounted on copy stands. During the mid-1970s, these were assembled and equipped with additional lighting to provide a consistent quality of projected image, as well as to enable use in a darkened room. Toward the end of the 1990s, progressive scan cameras were introduced. Many visualizers available on the market today are capable of an output of at least 30 frames per second. Technology The design and specification of a document camera combine several different technologies. Image quality depends on primary components: optics, camera, lighting, and the motherboard with appropriate firmware (software). The finished product is then realized by the production of different mechanical designs by individual manufacturers. Some document cameras offer HDMI output, audio/video recording, and Wi-Fi connectivity. Optics Optics are critical to image quality and vary based on the device's cost. Simple or highly complex optical systems can be used, which can differ significantly in quality and size. The iris or aperture is another important component of the optics. The iris controls and regulates the amount of light that passes through the lens onto the image sensor. A lens focuses on a single point of the object, projecting it onto the sensor. However, there is also an area in front of and behind the point of focus that is perceived as being in sharp focus by the human eye. This is called the depth of field, and it is dependent upon the size of the iris or aperture. 
The smaller the aperture, the greater the depth of field, which brings more of the image into sharp focus. Camera Progressive scan cameras use either CCD sensors or CMOS sensors. The general advantage of progressive scanning over the interlaced method is a much higher resolution. A progressive scan camera captures all scan lines at the same time, whereas an interlaced camera uses alternating sets of lines. Image sensors provide only monochrome images. With a 1-chip camera, color information can be obtained through the use of color filters over each pixel. With 1-chip cameras, the Bayer filter is very commonly used. Red, green, and blue filters are arranged in a pattern where the number of green pixels is twice as large as that of the blue or red; thus, the higher sensitivity and resolution of the human eye is replicated. To get a color image, different algorithms are then used to interpolate the missing color information. A 3CCD camera module is another way to produce color images. A prism is used to split white light into its red, green, and blue components, and a separate sensor is then used for each color. This camera technology is used in 3-chip cameras and allows for excellent color reproduction at very high resolutions. Modern camera systems used in a document camera are able to provide high-resolution color images at 30 frames per second. In a 3-chip camera, the measured resolution may be up to 1,500 lines. In addition, the image can be adapted to fit common display aspect ratios of 4:3, 16:9, and 16:10. Lighting system A uniform lighting system is essential for accurate color rendition in document cameras. High-intensity lighting permits the document camera to produce clear images regardless of ambient light conditions. Utilization of powerful lighting systems enables smaller apertures to be used; this, in turn, can provide an increase in the depth of field that can be achieved by the document camera. If the quality of the light source is increased, more light can reach the camera sensor, which results in less noticeable noise, so that image quality is not degraded. Some document camera models integrate additional functionality into the light system, such as a synchronized light field that indicates to the user at all times, via an illuminated image capture area or laser markers, the size and position of the imaging area, which adjusts simultaneously as the lens zooms in or out. Motherboard and firmware The motherboard plays an important role in image processing, and it influences the quality of the eventual image that is produced. Larger resolutions and high refresh rates generate large amounts of data that must be processed in real time. Document cameras can be equipped with a range of advanced automated systems designed to enhance ease of use and improve functionality and image quality. For instance, permanent auto-focus detection automatically adjusts focus settings whenever a new object is displayed, eliminating the need for manual adjustments. Other examples of automated features include automatic iris adjustment, auto exposure, white balance, and automatic gain control. Modern motherboards have a variety of connections to ensure flexibility of use. In addition to HDMI, DVI and VGA ports for connecting to displays (projectors, monitors, and video conferencing systems), there are also several interfaces provided to facilitate connection to a computer or interactive whiteboard. These interfaces are most commonly USB, network (LAN), and serial. 
In addition, an external PC or laptop can be connected to the document camera to allow for switching between a PowerPoint presentation and a live demonstration. Some models can also handle external storage devices and play files directly from a USB flash drive, or save images taken during the presentation onto the drive. Document camera types Document cameras are generally divided into three groups: Portable: Smaller and lightweight models Desktop: Larger, sturdier, and more stable units Visualizers: Ceiling-mounted above a tabletop or podium Portable and desktop models Portable and desktop models allow a working environment similar to an overhead projector. Many document camera users appreciate the added flexibility regarding the variety of objects that can be displayed to an audience. Portable devices can be used in multiple locations without requiring any prior installation. Ceiling models Ceiling-mounted document cameras/visualizers are a variation of the traditional desktop models and allow for larger objects to be displayed. There is no desktop technical equipment to restrict the views of the speaker and audience, as the technology is installed unobtrusively in the ceiling. Ceiling models are often used to support videoconferencing or telepresence systems to further enhance the immersive experience for participants. Document camera scanners Document cameras have also been used as replacements for image scanners. Capturing images on document cameras differs from that of flatbed and automatic document feeder scanners in that there are no moving parts required to scan the object. Conventionally, either the illumination/reflector rod inside the scanner must be moved over the document (such as for a flatbed scanner), or the document must be passed over the rod (such as for feeder scanners) in order to produce a scan of a whole image. Document cameras capture the whole document or object in one step, usually instantly. Typically, documents are placed on a flat surface underneath the capture area of the document camera. The process of capturing an entire surface at once has the benefit of improving reaction time for the workflow of scanning. After being captured, the images are usually processed with software that may enhance the image and perform tasks such as automatic rotation, cropping, and straightening. The documents or objects being scanned are not required to make contact with the document camera, increasing the flexibility of the types of documents that can be scanned. Objects that have previously been difficult to scan or may get jammed in conventional scanners––including documents of varying sizes and shapes, such as books, magazines, receipts, letters, or tickets, as well as those which are stapled, in folders, bent, or crumpled––can now be displayed with one device. The lack of moving parts also removes the need for maintenance, a consideration in the total cost of ownership, which includes the continuing operational costs of scanners. Increased reaction time whilst scanning also has benefits in the realm of context-scanning. ADF scanners, whilst very fast and very good at batch scanning, also require pre- and post-processing of the documents. Document cameras can be integrated directly into a workflow or process as documents are scanned in real time for the customer, in the context in which the scan is being used. Reaction time is an advantage in these situations. Document cameras usually require a small amount of space and are often portable. 
While document cameras have a quick reaction time when scanning individual items, large batch scanning of flat, unstapled documents is more efficient with an ADF scanner. This kind of technology faces challenges regarding external factors (such as lighting) that may influence the scan results. How well these issues are resolved depends largely on the sophistication of the product. See also Planetary scanner References Cameras
Document camera
[ "Technology" ]
2,101
[ "Recording devices", "Cameras" ]
14,393,992
https://en.wikipedia.org/wiki/F2%20propagation
F2 propagation (F2-skip) is the reflection of VHF signals off the F2 layer of the ionosphere. The phenomenon is rare compared to other forms of propagation (such as sporadic E propagation, or E-skip) but can reflect signals thousands of miles beyond their intended broadcast area, substantially farther than E-skip. F2-skip affects the upper ends of the high frequency (HF) spectrum and the low ends of the very high frequency (VHF) spectrum; only a small portion of F2's effective range overlaps frequencies used by consumer broadcast reception, also contributing to the phenomenon being rarely encountered. Theory Solar activity has a cycle of approximately 11 years. During this period, sunspot activity rises to a peak and gradually falls again to a low level. When sunspot activity increases, the reflecting capabilities of the F1 layer surrounding earth enable high frequency short-wave communications. The highest-reflecting layer, the F2 layer, which lies roughly 300–400 km above earth, receives ultraviolet radiation from the sun, causing ionisation of the gases within this layer. During the daytime when sunspot activity is at a maximum, the F2 layer can become intensely ionized due to radiation from the sun. When solar activity is sufficiently high, the maximum usable frequency (MUF) increases; hence the ionisation density is sufficient to reflect signals well into the 30-60 MHz VHF spectrum. Since the MUF progressively increases, F2 reception on lower frequencies can indicate potential low band 45-55 MHz VHF TV as well as VHF amateur radio paths. A rising MUF will initially affect the 27 MHz CB band and the amateur 28 MHz 10-meter band before reaching 45-55 MHz TV and the 6-meter amateur band. The F2 MUF generally increases at a slower rate compared to the Es MUF. Since the height of the F2 layer is some 300–400 km, it follows that single-hop F2 signals will be received at thousands rather than hundreds of miles. A single-hop F2 signal will usually span a few thousand kilometres at minimum, and a maximum F2 single hop can reach approximately 4,000 km. Multi-hop F2 propagation has enabled Band 1 VHF reception over far greater distances, well in excess of 10,000 km. Since F2 reception is directly related to radiation from the Sun on both a daily basis and in relation to the sunspot cycle, it follows that for optimum reception the centre of the signal path will be roughly at midday. Outside a solar maximum it can still occur somewhat regularly within about 15 to 20 degrees from the geomagnetic equator, with the peak generally being in spring time. However, this type of F2 propagation is mostly specifically referred to as TEP (Trans Equatorial Propagation) to differentiate it from the less common mid-latitude F2 propagation. The F2 layer tends to predominantly propagate signals below 30 MHz (HF) during a solar minimum, which includes the 27 MHz CB radio and 28 MHz 10-meter amateur radio bands. During a solar maximum, television, amateur radio signals, private land mobile, and other services in the 30-60 MHz VHF spectrum are also propagated over considerable distances. In North America, F2 is most likely to affect only VHF TV channel 2; in Europe and the Middle East, channels E2 and E3 (and the now-deprecated Italian channel A); and in eastern Europe, channel R1. Television pictures propagated via F2 tend to suffer from characteristic ghosting and smearing, although they are mostly stronger and more stable than double-hop Sporadic E signals. Picture degradation and signal strength attenuation increase with each subsequent F2 hop. 
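The single-hop distances discussed above follow from simple geometry; this sketch (assumed layer heights, straight-ray grazing approximation) is illustrative, not a propagation model:

```python
import math

R_E = 6371.0  # mean Earth radius, km


def max_single_hop(height_km: float) -> float:
    """Greatest ground distance for one reflection, assuming straight rays
    that graze the Earth's surface at both ends (a first-order rule of thumb)."""
    # Half-hop central angle for a grazing ray reaching a layer at this height.
    theta = math.acos(R_E / (R_E + height_km))
    return 2 * R_E * theta


for h in (100, 300, 400):  # sporadic-E versus typical F2 heights (assumed)
    print(f"layer at {h} km -> max single hop ~ {max_single_hop(h):.0f} km")
```

For a layer height of 300–400 km this gives roughly 3,800–4,400 km, consistent with the approximately 4,000 km maximum single hop quoted above, and with the much shorter hops of the lower sporadic E layer.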
Notable F2 DX receptions In November 1938, 405-line video from the BBC Alexandra Palace television station (London, England) on channel B1 (45.0 MHz) was received in New York, US. In 1958, the FM broadcast radio DX record was set by DXer Gordon Simkin in southern California, United States, when he logged a 45 MHz commercial FM station from Korea via trans-Pacific F2 propagation at a distance of . In October 1979, Anthony Mann (Perth, Western Australia) received 48.25 MHz audio and 51.75 MHz video from the Holme Moss BBC channel B2 television transmitter. This F2 reception is a world record for reception from a BBC 405-line channel B2 transmitter. During October to December 1979, United Kingdom DXers Roger Bunney (Hampshire), Hugh Cocks (Sussex), Mike Allmark (Leeds), and Ray Davies (Norwich) all received viewable television pictures from Australian channel TVQ 0 Brisbane (46.26 MHz) via multi-hop F2 propagation. On January 31, 1981, Todd Emslie, Sydney, Australia, received 41.5 MHz channel B1 television audio transmitted from the Crystal Palace transmitter by the BBC's television service, away. This BBC B1 reception was also recorded on to audio tape. He also received Dubai's DCRTV 48.25 MHz video on November 23, 1991, at the same location. On February 8, 1992, the DXer emedxer from Perth, Western Australia, received ARD E2 video from Grünten on 48.2604 MHz at a distance of 13,750 km. On April 18, 2014, the DXer HughTVDX received Canal 2 Posadas (Misiones), Argentina (55.251 MHz video), in southern Portugal, about 8,700 km away. From late March until mid-April 2023, Dante's Enigmatic World received various TV signals from the Philippines, such as AMBS ALLTV and TV5 A2, with video and audio (55.25 MHz video, 59.75 MHz audio), in Kyoto, Japan, 3,200 km away. See also Tropospheric propagation Federal Standard 1037C MW DX Skywave Radio propagation Clear-channel station References External links TV/FM Antenna Locator Worldwide TV/FM DX Association Worldwide TV/FM DX Association Forums Band 1 TVDX from Europe, North African and Middle East FMDX database British FM & TV Circle, Home of FM & TV DX in the UK Girard Westerberg's page, including a live DX webcam Mike's TV and FM DX Page since 1999 Todd Emslie's TV FM DX Page Jeff Kadet's TV DX Page FM DX Italy The official FM & TV DX website in Italy fmdxITALY Home of FM & TV DX in Italy FMLIST is a non-commercial worldwide database of FM stations, including a bandscan and logbook tool (FMINFO/myFM) Mixture.fr AM/FM/DAB database for France MeteorComm Meteor Burst Technology used for Data Communication FMSCAN reception prediction of FM, TV, MW, SW stations (also use the expert options for better results) Herman Wijnants' FMDX pages TV/FM Skip Log qth.net Mailing Lists for Radio, Television, Amateur and other related information for Enthusiasts. North American TV Logo Gallery VHF DXing - From Fort Walton Beach, Florida Radio-info.com DX and Reception FM DX RDS LogBook Software Ionosphere Radio frequency propagation
F2 propagation
[ "Physics" ]
1,461
[ "Physical phenomena", "Spectrum (physical sciences)", "Radio frequency propagation", "Electromagnetic spectrum", "Waves" ]
14,394,227
https://en.wikipedia.org/wiki/Generalized%20Ozaki%20cost%20function
In economics, the generalized-Ozaki (GO) cost function is a general description of the cost of production proposed by Shinichiro Nakamura. The GO cost function is notable for explicitly considering nonhomothetic technology, where the proportions of inputs can vary as the output changes. This stands in contrast to the standard production model, which assumes homothetic technology. The GO function For a given output , at time and a vector of input prices , the generalized-Ozaki (GO) cost function is expressed as Here, and , . By applying Shephard's lemma, we derive the demand function for input , : The GO cost function is flexible in the price space, and treats scale effects and technical change in a highly general manner. The concavity condition, which ensures that a cost function is consistent with cost minimization for a specific set of , requires that its Hessian (the matrix of second partial derivatives with respect to and ) be negative semidefinite. Several notable special cases can be identified: Homotheticity (HT): for all . All input levels () scale proportionally with the overall output level (). Homogeneity (of degree one) in output (HG): in addition to HT. Factor limitationality (FL): for all . None of the input levels () depend on . Neutral technical change (NT): for all . When (HT) holds, the GO function reduces to the Generalized Leontief function of Diewert, a well-known flexible functional form for cost and production functions. When (FL) holds, it reduces to a non-linear version of Leontief's model, which explains the cross-sectional variation of when variations in input prices were negligible: Background Cost- and production functions In economics, production technology is typically represented by the production function , which, in the case of a single output and inputs, is written as . When considering cost minimization for a given set of prices and , the corresponding cost function can be expressed as: The duality theorems of cost and production functions state that once a well-behaved cost function is established, one can derive the corresponding production function, and vice versa. For a given cost function , the corresponding production function can be obtained as (a more rigorous derivation involves using a distance function instead of a production function): In essence, under general conditions, a specific technology can be equally effectively represented by both cost and production functions. One advantage of using a cost function rather than a production function is that the demand functions for inputs can be easily derived from the former using Shephard's lemma, whereas this process can become cumbersome with the production function. Homothetic and Nonhomothetic Technology Commonly used forms of production functions, such as the Cobb-Douglas and Constant Elasticity of Substitution (CES) functions, exhibit homotheticity. This property means that the production function can be represented as a positive monotone transformation of a linear-homogeneous function : where for any . The Cobb-Douglas function is a special case of the CES function for which the elasticity of substitution between the inputs, , is one. For a homothetic technology, the cost function can be represented as where is a monotone increasing function, and is termed a unit cost function.
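As a concrete illustration of the homothetic special case, the sketch below evaluates a Generalized Leontief cost function and its Shephard's-lemma demands numerically. The functional form C(y,p) = y * sum_ij b_ij * sqrt(p_i * p_j) with symmetric b_ij is the standard GL form; the coefficient values are made-up illustrations, not estimates from the literature.

```python
import numpy as np

def gl_cost(y: float, p: np.ndarray, B: np.ndarray) -> float:
    """Generalized Leontief cost: C(y, p) = y * sum_ij b_ij * sqrt(p_i * p_j)."""
    root_p = np.sqrt(p)
    return y * root_p @ B @ root_p

def gl_demands(y: float, p: np.ndarray, B: np.ndarray) -> np.ndarray:
    """Input demands via Shephard's lemma: x_i = dC/dp_i = y * sum_j b_ij * sqrt(p_j / p_i)."""
    return y * (B @ np.sqrt(p)) / np.sqrt(p)

B = np.array([[0.4, 0.1],
              [0.1, 0.3]])   # illustrative, symmetric coefficients
p = np.array([1.0, 2.0])
y = 10.0

x = gl_demands(y, p, B)
# Homotheticity in action: doubling output doubles every input,
# so input ratios depend on prices only, not on the scale of output.
assert np.allclose(gl_demands(2 * y, p, B), 2 * x)
print(gl_cost(y, p, B), x)
```

The assertion makes the homotheticity point tangible: under (HT) the input mix is pinned down by prices alone, which is exactly the restriction the GO function is designed to relax.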
From Shephard's lemma, we obtain the following expression for the ratio of inputs and : , which implies that for a homothetic technology, the ratio of inputs depends solely on prices and not on the scale of output. However, empirical studies on the cross-section of establishments show that the FL model () effectively explains the data, particularly for heavy industries such as steel mills, paper mills, basic chemical sectors, and power stations, indicating that homotheticity may not be applicable. Furthermore, in the area of trade, homothetic and monolithic functional models do not accurately predict results. One example is the gravity equation for trade, which predicts how much two countries will trade with each other based on GDP and distance. This led researchers to explore non-homothetic models of production to fit cross-section analyses of producer behavior, for example, when producers would begin to minimize costs by switching inputs or investing in increased production. Flexible Functional Forms CES functions (note that Cobb-Douglas is a special case of CES) typically involve only two inputs, such as capital and labor. While they can be extended to include more than two inputs, assuming the same degree of substitutability for all inputs may seem overly restrictive (refer to CES for further details on this topic, including the potential for accommodating diverse elasticities of substitution among inputs, although this capability is somewhat constrained). To address this limitation, flexible functional forms have been developed. These general functional forms are called flexible functional forms (FFFs) because they do not impose any a priori restrictions on the degree of substitutability among inputs. These FFFs can provide a second-order approximation to any twice-differentiable function that meets the necessary regularity conditions, including basic technological conditions and those consistent with cost minimization. Widely used examples of FFFs are the transcendental logarithmic (translog) function and the Generalized Leontief (GL) function. The translog function extends the Cobb-Douglas function to the second order, while the GL function performs a similar extension of the Leontief production function. Limitations A drawback of the GL function is its inability to be globally concave without sacrificing flexibility in the price space. This limitation also applies to the GO function, as it is a non-homothetic extension of the GL. In a subsequent study, Nakamura attempted to address this issue by employing the Generalized McFadden function. For further advancements in this area, refer to Ryan and Wales. Moreover, both the GO function and the underlying GL function presume immediate adjustment of inputs in response to changes in and . This oversimplifies reality, where technological change entails significant investments in plant and equipment and thus requires time, often occurring over years rather than instantaneously. One way to address this issue is to use a variable cost function that explicitly takes into account differences in the speed of adjustment among inputs. Notes References See also Production function List of production functions Constant elasticity of substitution Shephard's lemma Returns to scale Functions and mappings Production economics
Generalized Ozaki cost function
[ "Mathematics" ]
1,331
[ "Mathematical analysis", "Functions and mappings", "Mathematical relations", "Mathematical objects" ]
14,397,549
https://en.wikipedia.org/wiki/Siemens%20C25
The Siemens C25 is a mobile phone introduced by Siemens in 1999. It is positioned as an entry-level model: a small, lightweight, handy device. The model was available in only five colors (Classic Green, Classic Blue, Anthracite, Bright Blue or Bright Yellow), but removable front panels could be bought separately. There is a function to write your own ringtones, and several programs exist for writing melodies; for example, MIDI-2-C25 converts standard MIDI music files into Siemens C25 note sequences. It weighs 135 g and its dimensions are 117 × 47 × 27 mm (length (without the antenna) × width × depth). Its display is a 3 × 12-character monochrome LCD with a green backlight. The battery powers the phone for 300 minutes of talk time, or up to 160 hours in stand-by mode; a NiMH battery is used as standard. It is a dual-band mobile phone, supporting both GSM 900 and GSM 1800 network frequencies. It supports up to 21 monophonic ringtones, as well as SMS sending and receiving. Reviews Despite usability flaws, Mobile Review found it "a beautiful phone to hold and use". Mobiles magazine scored it 88/100, despite also criticising certain omissions. References C25 Mobile phones introduced in 1999
Siemens C25
[ "Technology" ]
291
[ "Mobile technology stubs", "Mobile phone stubs" ]
9,241,978
https://en.wikipedia.org/wiki/Visual%20MODFLOW
Visual MODFLOW (VMOD) is a graphical user interface (GUI) for the open source groundwater modeling engine MODFLOW. VMOD was developed by Waterloo Hydrogeologic and first released in 1994 as the first commercially available GUI for MODFLOW. In May 2012 a .NET version of the software was rebranded as Visual MODFLOW Flex. The program includes proprietary extensions, such as MODFLOW-SURFACT, MT3DMS (mass-transport 3D multi-species) and a three-dimensional model explorer. Visual MODFLOW supports MODFLOW-2000, MODFLOW-2005, MODFLOW-NWT, MODFLOW-LGR, MODFLOW-SURFACT, and SEAWAT. The software is used primarily by hydrogeologists to simulate groundwater flow and contaminant transport. History The original version of Visual MODFLOW, developed for DOS by Nilson Guiguer, Thomas Franz and Bob Cleary, was released in August 1994. It was based on the USGS MODFLOW-88 and MODPATH code, and resembled the FLOWPATH program developed by Waterloo Hydrogeologic Inc. The first Windows-based version was released in 1997. The main programmers were Sergei Schmakov, Alexander Liftshits, and Sean Wilson. A .NET version that included non-grid-based, graphical conceptual modelling features was released in 2012. On January 10, 2005, Waterloo Hydrogeologic was acquired by Schlumberger's Water Services Technology Group. On May 1, 2012, Waterloo Hydrogeologic released Visual MODFLOW Flex. On March 13, 2015, Waterloo Hydrogeologic was acquired by Nova Metrix. References External links Waterloo Hydrogeologic, Inc Scientific simulation software Hydrogeology software
Visual MODFLOW
[ "Environmental_science" ]
350
[ "Hydrology", "Hydrology stubs" ]
9,243,099
https://en.wikipedia.org/wiki/Competitive%20equilibrium
Competitive equilibrium (also called: Walrasian equilibrium) is a concept of economic equilibrium, introduced by Kenneth Arrow and Gérard Debreu in 1951, appropriate for the analysis of commodity markets with flexible prices and many traders, and serving as the benchmark of efficiency in economic analysis. It relies crucially on the assumption of a competitive environment where each trader decides upon a quantity that is so small compared to the total quantity traded in the market that their individual transactions have no influence on the prices. Competitive markets are an ideal standard by which other market structures are evaluated. Definitions A competitive equilibrium (CE) consists of two elements: A price function . It takes as argument a vector representing a bundle of commodities, and returns a positive real number that represents its price. Usually the price function is linear: it is represented as a vector of prices, one price for each commodity type. An allocation matrix . For every , is the vector of commodities allotted to agent . These elements should satisfy the following requirement: Satisfaction (market-envy-freeness): Every agent weakly prefers his bundle to any other affordable bundle: , if then . Often, there is an initial endowment matrix : for every , is the initial endowment of agent . Then, a CE should satisfy some additional requirements: Market Clearance: the demand equals the supply, no items are created or destroyed: . Individual Rationality: all agents are better off after the trade than before the trade: . Budget Balance: all agents can afford their allocation given their endowment: . Definition 2 This definition explicitly allows for the possibility that there may be multiple commodity arrays that are equally appealing, and it also allows for zero prices. An alternative definition relies on the concept of a demand-set. Given a price function P and an agent with a utility function U, a certain bundle of goods x is in the demand-set of the agent if: for every other bundle y. A competitive equilibrium is a price function P and an allocation matrix X such that: The bundle allocated by X to each agent is in that agent's demand-set for the price-vector P; Every good which has a positive price is fully allocated (i.e. every unallocated item has price 0). Approximate equilibrium In some cases it is useful to define an equilibrium in which the rationality condition is relaxed. Given a positive value (measured in monetary units, e.g., dollars), a price vector and a bundle , define as a price vector in which all items in x have the same price they have in P, and all items not in x are priced more than their price in P. In a -competitive-equilibrium, the bundle x allocated to an agent should be in that agent's demand-set for the modified price vector, . This approximation is realistic when there are buy/sell commissions. For example, suppose that an agent has to pay dollars for buying a unit of an item, in addition to that item's price. That agent will keep his current bundle as long as it is in the demand-set for the price vector . This makes the equilibrium more stable. Examples The following examples involve an exchange economy with two agents, Jane and Kelvin, two goods e.g. bananas (x) and apples (y), and no money. 1. Graphical example: Suppose that the initial allocation is at point X, where Jane has more apples than Kelvin does and Kelvin has more bananas than Jane does.
By looking at the indifference curves of Jane and of Kelvin, we can see that this is not an equilibrium - both agents are willing to trade with each other at the prices and . After trading, both Jane and Kelvin move to an indifference curve which depicts a higher level of utility, and . The new indifference curves intersect at point E. The slope of the common tangent of both curves equals the negative of the price ratio. The marginal rate of substitution (MRS) of Jane equals that of Kelvin. Therefore, the two-agent society reaches Pareto efficiency, where there is no way to make Jane or Kelvin better off without making the other worse off. 2. Arithmetic example: suppose that both agents have Cobb–Douglas utilities: where are constants. Suppose the initial endowment is . The demand function of Jane for x is: The demand function of Kelvin for x is: The market clearance condition for x is: This equation yields the equilibrium price ratio: We could do a similar calculation for y, but this is not needed, since Walras' law guarantees that the results will be the same. Note that in a CE, only relative prices are determined; we can normalize the prices, e.g., by requiring that . Then we get . But any other normalization will also work. (A numerical sketch of this computation appears after the existence discussion below.) 3. Non-existence example: Suppose the agents' utilities are: and the initial endowment is [(2,1),(2,1)]. In a CE, every agent must have either only x or only y (the other product does not contribute anything to the utility, so the agent would like to exchange it away). Hence, the only possible CE allocations are [(4,0),(0,2)] and [(0,2),(4,0)]. Since the agents have the same income, necessarily . But then, the agent holding 2 units of y will want to exchange them for 4 units of x. 4. For existence and non-existence examples involving linear utilities, see Linear utility#Examples. Indivisible items When there are indivisible items in the economy, it is common to assume that there is also money, which is divisible. The agents have quasilinear utility functions: their utility is the amount of money they have plus the utility from the bundle of items they hold. A. Single item: Alice has a car which she values at 10. Bob has no car, and he values Alice's car at 20. A possible CE is: the price of the car is 15, Bob gets the car and pays 15 to Alice. This is an equilibrium because the market is cleared and both agents prefer their final bundle to their initial bundle. In fact, every price between 10 and 20 will be a CE price, with the same allocation. The same situation holds when the car is not initially held by Alice but rather offered in an auction in which both Alice and Bob are buyers: the car will go to Bob and the price will be anywhere between 10 and 20. On the other hand, any price below 10 is not an equilibrium price because there is an excess demand (both Alice and Bob want the car at that price), and any price above 20 is not an equilibrium price because there is an excess supply (neither Alice nor Bob wants the car at that price). This example is a special case of a double auction. B. Substitutes: A car and a horse are sold in an auction. Alice only cares about transportation, so for her these are perfect substitutes: she gets utility 8 from the horse, 9 from the car, and if she has both of them then she uses only the car, so her utility is 9. Bob gets a utility of 5 from the horse and 7 from the car, but if he has both of them then his utility is 11, since he also likes the horse as a pet. In this case it is more difficult to find an equilibrium (see below).
A possible equilibrium is that Alice buys the horse for 5 and Bob buys the car for 7. This is an equilibrium since Bob wouldn't like to pay 5 for the horse which will give him only 4 additional utility, and Alice wouldn't like to pay 7 for the car which will give her only 1 additional utility. C. Complements: A horse and a carriage are sold in an auction. There are two potential buyers: AND and XOR. AND wants only the horse and the carriage together - they receive a utility of from holding both of them but a utility of 0 for holding only one of them. XOR wants either the horse or the carriage but doesn't need both - they receive a utility of from holding one of them and the same utility for holding both of them. Here, when , a competitive equilibrium does NOT exist, i.e., no price will clear the market. Proof: consider the following options for the sum of the prices (horse-price + carriage-price): The sum is less than . Then, AND wants both items. Since the price of at least one item is less than , XOR wants that item, so there is excess demand. The sum is exactly . Then, AND is indifferent between buying both items and not buying any item. But XOR still wants exactly one item, so there is either excess demand or excess supply. The sum is more than . Then, AND wants no item and XOR still wants at most a single item, so there is excess supply. D. Unit-demand consumers: There are n consumers. Each consumer has an index . There is a single type of good. Each consumer wants at most a single unit of the good, which gives him a utility of . The consumers are ordered such that is a weakly increasing function of . If the supply is units, then any price satisfying is an equilibrium price, since there are k consumers that either want to buy the product or are indifferent between buying and not buying it. Note that an increase in supply causes a decrease in price. Existence of a competitive equilibrium Divisible resources The Arrow–Debreu model shows that a CE exists in every exchange economy with divisible goods satisfying the following conditions: All agents have strictly convex preferences; All goods are desirable. This means that, if any good is given for free (), then all agents want as much as possible from that good. The proof proceeds in several steps. A. For concreteness, assume that there are agents and divisible goods. Normalize the prices such that their sum is 1, i.e. . Then the space of all possible prices is the -dimensional unit simplex in . We call this simplex the price simplex. B. Let be the excess demand function. This is a function of the price vector when the initial endowment is kept constant: It is known that, when the agents have strictly convex preferences, the Marshallian demand function is continuous. Hence, is also a continuous function of . C. Define the following function from the price simplex to itself: This is a continuous function, so by the Brouwer fixed-point theorem there is a price vector such that: so, D. Using Walras' law and some algebra, it is possible to show that for this price vector, there is no excess demand in any product, i.e.: E. The desirability assumption implies that all products have strictly positive prices: By Walras' law, . But this implies that the inequality above must be an equality: This means that is a price vector of a competitive equilibrium. Note that linear utilities are only weakly convex, so they do not qualify for the Arrow–Debreu model.
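The arithmetic example above can be checked numerically by solving for the zero of the excess-demand function that the existence proof works with. The sketch below is not from the article: the Cobb-Douglas exponents and endowments are made-up numbers, and the closed-form price ratio is the standard Cobb-Douglas result (with the price of y normalized to 1).

```python
import numpy as np

# Two agents with Cobb-Douglas utilities u_i = x**a_i * y**(1 - a_i)
# and endowments (wx_i, wy_i) of the two goods.
a = np.array([0.5, 0.5])     # hypothetical exponents
wx = np.array([4.0, 2.0])    # hypothetical endowments of x
wy = np.array([1.0, 3.0])    # hypothetical endowments of y

def excess_demand_x(px: float, py: float = 1.0) -> float:
    income = px * wx + py * wy
    demand_x = a * income / px          # Marshallian demand for x
    return demand_x.sum() - wx.sum()

# Market clearance for x pins down the relative price px/py;
# by Walras' law, the market for y then clears automatically.
px = (a * wy).sum() / ((1 - a) * wx).sum()
assert abs(excess_demand_x(px)) < 1e-9
print("equilibrium price ratio px/py =", px)   # 2/3 for these numbers
```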
Nevertheless, for linear utilities, David Gale proved that a CE exists in every linear exchange economy satisfying certain conditions. For details see Linear utilities#Existence of competitive equilibrium. Algorithms for computing the market equilibrium are described in market equilibrium computation. Indivisible items In the examples above, a competitive equilibrium existed when the items were substitutes but not when the items were complements. This is not a coincidence. Given a utility function on two goods X and Y, say that the goods are weakly gross-substitute (GS) if they are either independent goods or gross substitute goods, but not complementary goods. This means that . I.e., if the price of Y increases, then the demand for X either remains constant or increases, but does not decrease. If the price of Y decreases, then the demand for X either remains constant or decreases. A utility function is called GS if, according to this utility function, all pairs of different goods are GS. With a GS utility function, if an agent has a demand set at a given price vector, and the prices of some items increase, then the agent has a demand set which includes all the items whose price remained constant. He may decide that he doesn't want an item which has become more expensive; he may also decide that he wants another item instead (a substitute); but he may not decide that he doesn't want a third item whose price hasn't changed. When the utility functions of all agents are GS, a competitive equilibrium always exists. Moreover, the set of GS valuations is the largest set containing unit-demand valuations for which the existence of a competitive equilibrium is guaranteed: for any non-GS valuation, there exist unit-demand valuations such that a competitive equilibrium does not exist for these unit-demand valuations coupled with the given non-GS valuation. For the computational problem of finding a competitive equilibrium in a special kind of market, see Fisher market#indivisible. The competitive equilibrium and allocative efficiency By the fundamental theorems of welfare economics, any CE allocation is Pareto efficient, and any efficient allocation can be sustained by a competitive equilibrium. Furthermore, by Varian's theorems, a CE allocation in which all agents have the same income is also envy-free. At the competitive equilibrium, the value society places on a good is equivalent to the value of the resources given up to produce it (marginal benefit equals marginal cost). This ensures allocative efficiency: the additional value society places on another unit of the good is equal to what society must give up in resources to produce it. Note that microeconomic analysis does not assume additive utility, nor does it assume any interpersonal utility tradeoffs. Efficiency, therefore, refers to the absence of Pareto improvements. It does not in any way opine on the fairness of the allocation (in the sense of distributive justice or equity). An efficient equilibrium could be one where one player has all the goods and the other players have none (in an extreme example); this is efficient in the sense that no Pareto improvement exists - no change could make some players better off without making any player (including the one holding everything) worse off.
Welfare theorems for indivisible item assignment In the case of indivisible items, we have the following strong versions of the two welfare theorems: Any competitive equilibrium maximizes the social welfare (the sum of utilities), not only over all realistic assignments of items, but also over all fractional assignments of items. I.e., even if we could assign fractions of an item to different people, we couldn't do better than a competitive equilibrium in which only whole items are assigned. If there is an integral assignment (with no fractional assignments) that maximizes the social welfare, then there is a competitive equilibrium with that assignment. Finding an equilibrium In the case of indivisible item assignment, when the utility functions of all agents are GS (and thus an equilibrium exists), it is possible to find a competitive equilibrium using an ascending auction. In an ascending auction, the auctioneer publishes a price vector, initially zero, and the buyers declare their favorite bundle under these prices. If each item is desired by at most a single bidder, the items are divided and the auction is over. If there is excess demand on one or more items, the auctioneer increases the price of an over-demanded item by a small amount (e.g. a dollar), and the buyers bid again (a minimal sketch of this procedure appears at the end of this entry). Several different ascending-auction mechanisms have been suggested in the literature. Such mechanisms are often called a Walrasian auction, Walrasian tâtonnement or English auction. See also Envy-free pricing - a relaxation of Walrasian equilibrium in which some items may remain unallocated. Fisher market - a simplified market model, with a single seller and many buyers, in which a CE can be computed efficiently. Allocative efficiency Economic equilibrium General equilibrium theory Walrasian auction References External links Competitive equilibrium, Walrasian equilibrium and Walrasian auction in Economics Stack Exchange. Market (economics) Competition (economics) Auction theory
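The following sketch implements the ascending-auction procedure described above, simplified to unit-demand bidders (each bidder wants at most one item). The bidder names, items, and valuations are illustrative; they echo the "substitutes" example, with Bob simplified to unit demand.

```python
def ascending_auction(values, items, step=1):
    """values[bidder][item] -> valuation; each bidder demands at most one item."""
    prices = {it: 0 for it in items}
    while True:
        demand = {}
        for bidder, vals in values.items():
            # Favorite item at current prices; demand it only if surplus is positive.
            best = max(items, key=lambda it: vals.get(it, 0) - prices[it])
            if vals.get(best, 0) - prices[best] > 0:
                demand.setdefault(best, []).append(bidder)
        over_demanded = [it for it, bs in demand.items() if len(bs) > 1]
        if not over_demanded:
            return prices, {it: bs[0] for it, bs in demand.items()}
        for it in over_demanded:          # raise prices of over-demanded items
            prices[it] += step

values = {"Alice": {"horse": 8, "car": 9},
          "Bob":   {"horse": 5, "car": 7}}
prices, assignment = ascending_auction(values, ["horse", "car"])
print(prices, assignment)   # Alice ends up with the horse, Bob with the car
```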
Competitive equilibrium
[ "Mathematics" ]
3,303
[ "Game theory", "Auction theory" ]
9,243,645
https://en.wikipedia.org/wiki/Vela%20X-1
Vela X-1 is a pulsing, eclipsing high-mass X-ray binary (HMXB) system, associated with the Uhuru source 4U 0900-40 and the supergiant star HD 77581. The X-ray emission of the neutron star is caused by the capture and accretion of matter from the stellar wind of the supergiant companion. Vela X-1 is the prototypical detached HMXB. The orbital period of the system is 8.964 days, with the neutron star being eclipsed for about two days of each orbit by HD 77581. It has been given the variable star designation GP Velorum, and it varies from visual magnitude 6.76 to 6.99. The spin period of the neutron star is about 283 seconds and gives rise to strong X-ray pulsations. The mass of the pulsar is estimated to be at least solar masses. Characteristics Long-term monitoring of the spin period shows small random increases and decreases over time, similar to a random walk; these random spin period changes are caused by the accreting matter. However, a recent study has detected nearly periodic spin period reversals in Vela X-1 on long time-scales of about 5.9 years. See also High-mass X-ray binary List of X-ray pulsars X-ray binary References External links Spin frequency history of Vela X-1 Long-term spin period evolution of Vela X-1 for about five decades B-type supergiants X-ray binaries Neutron stars Pulsars Runaway stars Vela (constellation) Durchmusterung objects 077581 044368 Velorum, GP
Vela X-1
[ "Astronomy" ]
352
[ "Vela (constellation)", "Constellations" ]
9,243,688
https://en.wikipedia.org/wiki/Multiuser%20detection
Multiuser detection deals with demodulation of the mutually interfering digital streams of information that occur in areas such as wireless communications, high-speed data transmission, DSL, satellite communication, digital television, and magnetic recording. It is also currently being investigated for demodulation in low-power inter-chip and intra-chip communication. Multiuser detection encompasses both receiver technologies devoted to joint detection of all the interfering signals and single-user receivers which are interested in recovering only one user but are made robust against multiuser interference, not just background noise. Mutual interference is unavoidable in modern spectrally efficient wireless systems: even when using orthogonal multiplexing systems such as TDMA, synchronous CDMA or OFDMA, multiuser interference originates from channel distortion and from out-of-cell interference. In addition, in multi-antenna (MIMO) systems, the digitally modulated streams emanating from different antennas interfere at the receiver, and the MIMO receiver uses multiuser detection techniques to separate them. By exploiting the structure of the interfering signals, multiuser detection can increase spectral efficiency, receiver sensitivity, and the number of users the system can sustain. Because of the mistaken belief in some quarters of the spread spectrum community that little could be gained from receivers more sophisticated than the single-user matched filter, multiuser detection did not start developing until the early 1980s. Verdu showed that the near-far problem suffered by CDMA was not inherent to this multiplexing technology and could be overcome by an optimum receiver that demodulates all users simultaneously. Verdu's receiver consisted of a bank of matched filters followed by a Viterbi algorithm. In the context of the capacity of the narrowband Gaussian two-user multiple-access channel, Cover showed the achievability of the capacity region by means of a successive cancellation receiver, which decodes one user treating the other as noise, re-encodes its signal and subtracts it from the received signal. The same near-far resistance of the optimum receiver can be achieved with the decorrelating receiver. Adaptive multiuser detectors that do not require prior knowledge of the interfering waveforms have also been proposed. References Mobile technology
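The successive cancellation idea described above is easy to demonstrate on a toy two-user synchronous CDMA link. Everything in the sketch below is illustrative - the spreading codes, amplitudes, and noise level are made up, and real receivers estimate amplitudes rather than knowing them.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two non-orthogonal, unit-energy spreading codes (cross-correlation 0.5),
# so the users genuinely interfere with each other.
s1 = np.array([1.0, 1.0, 1.0, 1.0]) / 2.0
s2 = np.array([1.0, 1.0, 1.0, -1.0]) / 2.0

A1, A2 = 4.0, 1.0                        # user 1 is the strong "near" user
b1, b2 = rng.choice([-1.0, 1.0], 2)      # BPSK symbols
r = A1 * b1 * s1 + A2 * b2 * s2 + 0.1 * rng.standard_normal(4)

# Successive cancellation: decode the strong user treating the weak one
# as noise, re-encode and subtract its contribution, then decode the weak user.
b1_hat = np.sign(r @ s1)
r_residual = r - A1 * b1_hat * s1
b2_hat = np.sign(r_residual @ s2)
print((b1, b2), "->", (b1_hat, b2_hat))
```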
Multiuser detection
[ "Technology" ]
456
[ "nan" ]
9,244,813
https://en.wikipedia.org/wiki/Cold%20Spring%20Canyon%20Arch%20Bridge
The Cold Spring Canyon Arch Bridge in the Santa Ynez Mountains links Santa Barbara, California with Santa Ynez, California. The bridge is signed as part of State Route 154. It is currently the highest arch bridge in the U.S. state of California and among the highest bridges in the United States. At its highest point, the bridge deck is above the canyon floor. The bridge is also the largest steel arch bridge in the state. It was determined to be eligible for the National Register of Historic Places with exceptional significance. History The current bridge was completed and opened to traffic in February 1964. It was constructed by U.S. Steel Corp.'s American Bridge division and Massman Construction Co. The structure won awards for engineering, design, and beauty. It was among the five longest-span arch bridges of this "supported deck" type in the world until the 1990s. Cold Spring Tavern, originally a stagecoach stop, is approximately 600 m south of the bridge's west base in the canyon below, on a stub of Old San Marcos Pass Road (now named Stagecoach Rd.) connecting with SR 154 at Camino Cielo and Paradise Roads. The bridge was designated as a Historic Civil Engineering Landmark by the American Society of Civil Engineers in 1976. Seismic retrofitting was completed in 1998 by American Bridge, one of the companies involved in the original construction. Barriers , the bridge had been the site of 55 suicides since its completion, about one per year on average; however, some years saw several more, such as the eight deaths recorded in 2009. In an effort to prevent future incidents, the California Department of Transportation installed a tall barrier in the form of an inwardly-curved, finely-gridded mesh fence in March 2012. The fence cost $3.2 million. A Santa Monica man committed suicide from the bridge six months later, in September 2012. The most recent suicide was in April 2019; the victim, Daniel Lacy, was the son of actress Julia Duffy. Gallery See also Gaviota Pass List of bridges documented by the Historic American Engineering Record in California Suicide bridge References External links Cold Spring Bridge, dot.ca.gov, retrieved on 2007-01-31. List of Awards "Highways and Roads" (CA DOT CalHwyIndex.pdf), pg. 28 "Fatal Attraction - Cold Spring Toll Hits 47", EdHat.com, November 28, 2008 Buckland & Taylor Ltd..com: Seismic retrofit in 1999 2008 Flickr.com: Aerial photo Road bridges in California Transportation buildings and structures in Santa Barbara County, California Open-spandrel deck arch bridges in the United States Steel bridges in the United States Santa Ynez Mountains Bridges completed in 1963 1963 establishments in California Historic American Engineering Record in California Historic Civil Engineering Landmarks
Cold Spring Canyon Arch Bridge
[ "Engineering" ]
563
[ "Civil engineering", "Historic Civil Engineering Landmarks" ]
9,245,024
https://en.wikipedia.org/wiki/List%20of%20Bluetooth%20protocols
The wireless data exchange standard Bluetooth uses a variety of protocols. Core protocols are defined by the trade organization Bluetooth SIG. Additional protocols have been adopted from other standards bodies. This article gives an overview of the core protocols and those adopted protocols that are widely used. The Bluetooth protocol stack is split in two parts: a "controller stack" containing the timing critical radio interface, and a "host stack" dealing with high level data. The controller stack is generally implemented in a low cost silicon device containing the Bluetooth radio and a microprocessor. The host stack is generally implemented as part of an operating system, or as an installable package on top of an operating system. For integrated devices such as Bluetooth headsets, the host stack and controller stack can be run on the same microprocessor to reduce mass production costs; this is known as a hostless system. Controller stack Asynchronous Connection-Less [logical transport] (ACL) The normal type of radio link used for general data packets using a polling TDMA scheme to arbitrate access. It can carry packets of several types, which are distinguished by: length (1, 3, or 5 time slots depending on required payload size) Forward error correction (optionally reducing the data rate in favour of reliability) modulation (Enhanced Data Rate packets allow up to triple data rate by using a different RF modulation for the payload) A connection must be explicitly set up and accepted between two devices before packets can be transferred. ACL packets are retransmitted automatically if unacknowledged, allowing for correction of a radio link that is subject to interference. For isochronous data, the number of retransmissions can be limited by a flush timeout; but without using L2CAP retransmission and flow control mode or EL2CAP, a higher layer must handle the packet loss. ACL links are disconnected if there is nothing received for the supervision timeout period; the default timeout is 20 seconds, but this may be modified by the master. Synchronous Connection-Oriented (SCO) link The type of radio link used for voice data. A SCO link is a set of reserved time slots separated by the SCO interval Tsco, which is determined during logical link establishment by the Central device. Each device transmits encoded voice data in the reserved time slot. There are no retransmissions, but forward error correction can be optionally applied. SCO packets may be sent every 1, 2, or 3 time slots. Enhanced SCO (eSCO) links allow greater flexibility in setting up links: they may use retransmissions to achieve reliability, allow for a wider variety of packet types and for greater intervals between packets than SCO, thus increasing radio availability for other links. Link Management Protocol (LMP) Used for control of the radio link between two devices, querying device abilities, and power control. Implemented on the controller. Host Controller Interface (HCI) Standardized communication between the host stack (e.g., a PC or mobile phone OS) and the controller (the Bluetooth integrated circuit (IC)). This standard allows the host stack or controller IC to be swapped with minimal adaptation. There are several HCI transport layer standards, each using a different hardware interface to transfer the same command, event and data packets. The most commonly used are USB (in PCs) and UART (in mobile phones and PDAs).
In Bluetooth devices with simple functionality (e.g., headsets), the host stack and controller can be implemented on the same microprocessor. In this case the HCI is optional, although often implemented as an internal software interface. Low Energy Link Layer (LE LL) This is the LMP equivalent for Bluetooth Low Energy (LE), but is simpler. It is implemented on the controller and manages advertisement, scanning, connection and security from a low-level, close-to-the-hardware perspective. Host stack Logical link control and adaptation protocol (L2CAP) L2CAP is used within the Bluetooth protocol stack. It passes packets to either the Host Controller Interface (HCI) or, on a hostless system, directly to the Link Manager/ACL link. L2CAP's functions include: Multiplexing data between different higher layer protocols. Segmentation and reassembly of packets. Providing one-way transmission management of multicast data to a group of other Bluetooth devices. Quality of service (QoS) management for higher layer protocols. L2CAP is used to communicate over the host ACL link. Its connection is established after the ACL link has been set up. In basic mode, L2CAP provides packets with a payload configurable up to 64 kB, with 672 bytes as the default MTU, and 48 bytes as the minimum mandatory supported MTU. In retransmission and flow control modes, L2CAP can be configured for reliable or asynchronous data per channel by performing retransmissions and CRC checks. Reliability in either of these modes is optionally and/or additionally guaranteed by the lower layer Bluetooth BR/EDR air interface by configuring the number of retransmissions and the flush timeout (the time after which the radio will flush packets). In-order sequencing is guaranteed by the lower layer. The EL2CAP specification adds an additional enhanced retransmission mode (ERTM) to the core specification, which is an improved version of the retransmission and flow control modes. ERTM is required when using an AMP (Alternate MAC/PHY), such as 802.11abgn. Bluetooth network encapsulation protocol (BNEP) BNEP is used for delivering network packets on top of L2CAP. This protocol is used by the personal area networking (PAN) profile. BNEP performs a similar function to the Subnetwork Access Protocol (SNAP) in Wireless LAN. In the protocol stack, BNEP is bound to L2CAP. Radio frequency communication (RFCOMM) The Bluetooth protocol RFCOMM is a simple set of transport protocols, built on top of the L2CAP protocol, providing emulated RS-232 serial ports (up to sixty simultaneous connections to a Bluetooth device at a time). The protocol is based on the ETSI standard TS 07.10. RFCOMM is sometimes called serial port emulation. The Bluetooth serial port profile (SPP) is based on this protocol. RFCOMM provides a simple, reliable data stream to the user, similar to TCP. It is used directly by many telephony-related profiles as a carrier for AT commands, as well as being a transport layer for OBEX over Bluetooth. Many Bluetooth applications use RFCOMM because of its widespread support and publicly available API on most operating systems. Additionally, applications that used a serial port to communicate can be quickly ported to use RFCOMM. In the protocol stack, RFCOMM is bound to L2CAP. Service discovery protocol (SDP) Used to allow devices to discover what services each other support, and what parameters to use to connect to them.
For example, when connecting a mobile phone to a Bluetooth headset, SDP will be used to determine which Bluetooth profiles are supported by the headset (headset profile, hands-free profile, advanced audio distribution profile, etc.) and the protocol multiplexer settings needed to connect to each of them. Each service is identified by a Universally Unique Identifier (UUID), with official services (Bluetooth profiles) assigned a short-form UUID (16 bits rather than the full 128). In the protocol stack, SDP is bound to L2CAP. Telephony control protocol (TCS) Also referred to as telephony control protocol specification binary (TCS binary). Used to set up and control speech and data calls between Bluetooth devices. The protocol is based on the ITU-T standard Q.931, with the provisions of Annex D applied, making only the minimum changes necessary for Bluetooth. TCS is used by the intercom (ICP) and cordless telephony (CTP) profiles. The telephony control protocol specification is not called TCP, to avoid confusion with the transmission control protocol (TCP) used for Internet communication. Audio/video control transport protocol (AVCTP) Used by the remote control profile to transfer AV/C commands over an L2CAP channel. The music control buttons on a stereo headset use this protocol to control the music player. In the protocol stack, AVCTP is bound to L2CAP. Audio/video data transport protocol (AVDTP) Used by the advanced audio distribution profile to stream music to stereo headsets over an L2CAP channel. Intended to be used by the video distribution profile. In the protocol stack, AVDTP is bound to L2CAP. Object exchange (OBEX) Object exchange (OBEX; also termed IrOBEX) is a communications protocol that facilitates the exchange of binary objects between devices. It is maintained by the Infrared Data Association but has also been adopted by the Bluetooth Special Interest Group and the SyncML wing of the Open Mobile Alliance (OMA). In Bluetooth, OBEX is used for many profiles that require simple data exchange (e.g., object push, file transfer, basic imaging, basic printing, phonebook access, etc.). Low Energy Attribute Protocol (ATT) Similar in scope to SDP but specially adapted and simplified for Bluetooth Low Energy. It allows a client to read and/or write certain attributes exposed by the server in a non-complex, low-power-friendly manner. In the protocol stack, ATT is bound to L2CAP. Low Energy Security Manager Protocol (SMP) This is used by Bluetooth Low Energy implementations for pairing and transport-specific key distribution. In the protocol stack, SMP is bound to L2CAP. References External links Bluetooth.com - Data Transport Architecture Oracle.com - Bluetooth protocol stack overview with diagram (halfway down the page) Bluetooth Specifications directory Bluetooth
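As an illustration of how SDP and RFCOMM fit together in practice, the sketch below uses the third-party PyBluez library to look up a Serial Port Profile service via SDP and open the advertised RFCOMM channel. The device address is a placeholder, and PyBluez's availability varies by platform; treat this as a sketch of the flow, not a tested recipe.

```python
import bluetooth  # PyBluez

ADDR = "00:11:22:33:44:55"                          # hypothetical device address
SPP_UUID = "00001101-0000-1000-8000-00805F9B34FB"   # standard Serial Port Profile UUID

# SDP query: ask the remote device which RFCOMM channel hosts the service.
services = bluetooth.find_service(uuid=SPP_UUID, address=ADDR)
if services:
    svc = services[0]
    sock = bluetooth.BluetoothSocket(bluetooth.RFCOMM)
    sock.connect((svc["host"], svc["port"]))   # port = RFCOMM channel from SDP
    sock.send(b"AT\r")                         # e.g., an AT command over the emulated serial port
    print(sock.recv(1024))
    sock.close()
```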
List of Bluetooth protocols
[ "Technology" ]
2,102
[ "Computing-related lists", "Lists of network protocols", "Wireless networking", "Bluetooth" ]
9,245,403
https://en.wikipedia.org/wiki/Misuse%20detection
Misuse detection actively works against potential insider threats to vulnerable computer data. Misuse Misuse detection is an approach to detecting computer attacks. In a misuse detection approach, abnormal system behaviour is defined first, and all other behaviour is then defined as normal. This stands in contrast to the anomaly detection approach, which takes the reverse route: normal system behaviour is defined first, and all other behaviour is defined as abnormal. With misuse detection, anything not known to be bad is treated as normal. An example of misuse detection is the use of attack signatures in an intrusion detection system (a minimal signature-matching sketch appears at the end of this entry). Misuse detection has also been used more generally to refer to all kinds of computer misuse. References Further reading For more information on misuse detection, including papers written on the subject, consider the following: Misuse Detection Concepts and Algorithms - article by the IR Lab at IIT. Data security
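To make the signature idea concrete: in a misuse detector, only events matching a known attack signature raise alerts, and everything unmatched passes as normal. The signatures and log lines below are made up for illustration; real systems use far richer rule languages.

```python
import re

# Tiny illustrative signature database: name -> pattern of known-bad behaviour.
SIGNATURES = {
    "sql-injection": re.compile(r"union\s+select", re.IGNORECASE),
    "path-traversal": re.compile(r"\.\./\.\./"),
}

def detect(event: str):
    """Return the names of all signatures the event matches (empty = 'normal')."""
    return [name for name, sig in SIGNATURES.items() if sig.search(event)]

log = [
    "GET /index.html",
    "GET /item?id=1 UNION SELECT password FROM users",
    "GET /../../etc/passwd",
]
for line in log:
    hits = detect(line)
    print("ALERT" if hits else "ok   ", line, hits)
```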
Misuse detection
[ "Engineering" ]
166
[ "Cybersecurity engineering", "Data security" ]
9,245,479
https://en.wikipedia.org/wiki/Montgomery%20Woods%20State%20Natural%20Reserve
Montgomery Woods State Natural Reserve is a 1,323-acre (535 ha) state-owned park located in the Coastal Range in Mendocino County, California, United States. The Reserve occupies the headwaters of Montgomery Creek, a tributary of Big River, which flows into the Pacific Ocean at Mendocino Headlands State Park. The virgin groves of Coast Redwood (Sequoia sempervirens) in Montgomery Woods are examples of a now rare upland riparian meadow habitat; most other preserved redwood groves are on broad alluvial plains. The Reserve is accessed from a parking area along Orr Springs Road, west of Ukiah and east of Comptche. A moderately steep trail from the parking area climbs uphill along Montgomery Creek for about three-quarters of a mile. Once in the grove, the trail makes a meandering loop, with substantial use of boardwalks to protect the fragile forest floor. The reserve was initiated by a 9-acre (3.6 ha) donation from Robert T. Orr in 1945, with additional land donated since 1947 by the Save the Redwoods League. Between 1999 and 2004, the tallest tree in Montgomery Woods, named the Mendocino Tree, was the world's tallest known tree. It was displaced by the discovery of a number of taller trees in Humboldt Redwoods State Park and later Redwood National Park in Humboldt County. The tree is one of dozens of similar height in the grove, and was never specifically marked, in order to protect it. Earlier well-publicized candidates for the world's tallest tree suffered damage from stresses resulting from crowds of tourists. References External links Montgomery Woods State Natural Reserve State parks of California Parks in Mendocino County, California Protected areas established in 1945 1945 establishments in California Old-growth forests Coast redwood groves
Montgomery Woods State Natural Reserve
[ "Biology" ]
359
[ "Old-growth forests", "Ecosystems" ]
9,245,509
https://en.wikipedia.org/wiki/Milnor%20conjecture%20%28knot%20theory%29
In knot theory, the Milnor conjecture says that the slice genus of the (p, q) torus knot is (p-1)(q-1)/2. It is in a similar vein to the Thom conjecture. It was first proved by gauge-theoretic methods by Peter Kronheimer and Tomasz Mrowka. Jacob Rasmussen later gave a purely combinatorial proof using Khovanov homology, by means of the s-invariant. References Geometric topology Knot theory 4-manifolds Conjectures that have been proved
Milnor conjecture (knot theory)
[ "Mathematics" ]
96
[ "Geometric topology", "Topology", "Conjectures that have been proved", "Mathematical problems", "Mathematical theorems" ]
9,245,549
https://en.wikipedia.org/wiki/Vestibule%20%28architecture%29
A vestibule (also anteroom, antechamber, air-lock entry or foyer) is a small room leading into a larger space such as a lobby, entrance hall, or passage, for the purpose of waiting, withholding the larger space from view, reducing heat loss, providing storage space for outdoor clothing, etc. The term applies to structures in both modern and classical architecture since ancient times. In antiquity, antechambers were employed as transitional spaces leading to more significant rooms, such as throne rooms in palaces or the naos in temples. In ancient Roman architecture, a vestibule (vestibulum) was a partially enclosed area between the interior of the house and the street. In modern architecture, a vestibule is typically a small room next to the outer door and connecting it with the interior of the building. Ancient usage Ancient Greece Vestibules were common in ancient Greek temples. Because of the construction techniques available at the time, it was not possible to build large spans. Consequently, many entranceways had two rows of columns that supported the roof and created a distinct space around the entrance. In ancient Greek houses, the prothyrum was the space just outside the door of a house, which often had an altar to Apollo, a statue, or a laurel tree. In elaborate houses or palaces, the vestibule could be divided into three parts: the prothyron (πρόθυρον), the thyroreion (θυρωρεῖον), and the proaulion (προαύλιον). The vestibule in ancient Greek homes served as a barrier to the outside world, discouraging unwanted entrance into the home and unwanted glances into it. The vestibule's alignment at right angles to private interior spaces, and the use of doors and curtains, also added security and privacy from the outside. The Classical Period marked a change in the need for privacy in Greek society, which ultimately led to the design and use of vestibules in Greek homes. Ancient Rome In ancient Roman architecture, where the term originates, a vestibule (vestibulum) was a space that was sometimes present between the interior fauces of a building leading to the atrium and the street. Vestibules were common in ancient architecture. A Roman house was typically divided into two different sections: the first, front section, or the public part, was introduced with a vestibule. These vestibules contained two rooms, which usually served as waiting rooms or a porters' lodge where visitors could get directions or information. Upon entering a Roman house or domus, one would have to pass through the vestibule before entering the fauces, which led to the atrium. The structure was a mixture between a modern hall and porch. Church architecture From the 5th century onward, churches of Eastern and Western Christianity utilized vestibules. In Roman Catholic and some Anglican churches, the vestibule is usually a spacious area which holds church information such as literature, pamphlets, and bulletin announcements, as well as holy water for worshippers. In Orthodox and Byzantine church architecture, the temple antechamber is more commonly referred to as an exonarthex. In early Christian architecture, the vestibule replaced the more extravagant atrium or quadriporticus in favor of a more simplified area to house the vase of holy water. Palace architecture Vestibules are common in palace architecture.
The style of vestibule used in Genoa, Italy, was transformed from a previously modest design to a more ornamental structure, which satisfied the Genoese aristocracy and became an influential model for Italian palaces. The Genoese vestibule became a prominent feature of the city's palace architecture. These vestibules would sometimes include a fountain or large statue. The Genoese vestibule was large and exaggerated, and seemed "rather designed to accommodate a race of giants". Modern usage In contemporary usage, a vestibule constitutes an area surrounding the exterior door. It acts as an antechamber between the exterior and the interior structure. Often it connects the doorway to a lobby or hallway. It is the space one occupies once past the door, but not yet in the main interior of the building. Although vestibules such as a modified mud room are common in private residences, they are especially prevalent in more opulent buildings, such as government ones, designed to elicit a sense of grandeur by contrasting the vestibule's small space with the following greater one, and by adding an element of anticipation. The residence of the White House in the United States is such an example: at the north portico, it contains a tiny vestibule between doors flush with the outer and inner faces of the exterior wall of the Entrance Hall (incorrectly called the Vestibule); in the past, the vestibule sat inside the Entrance Hall itself. The difference in size between a vestibule and the following space is better illustrated by the so-called entrance (15) to the main gallery in the Solomon R. Guggenheim Museum by Frank Lloyd Wright. Many government buildings mimic the classical architecture from which the vestibule originates. A purely utilitarian use of vestibules in modern buildings is to create an airlock entry. Such vestibules consist of a set of inner doors and a set of outer doors, the intent being to reduce air infiltration to the building by having only one set of doors open at any given time. ATM vestibule An ATM vestibule is an enclosed area with automated teller machines that is attached to the outside of a building, but typically features no further entrance to the building and is not accessible from within. There may be a secure entrance to the vestibule which requires a card to open. ATM vestibules may also contain security devices, such as panic alarms and CCTV, to help prevent criminal activity. Railway use The vestibule on a railway passenger car is an enclosed area at the end of the car body, usually separated from the main part of the interior by a door, which is power-operated on most modern equipment. Entrance to and exit from the car is through the side doors, which lead into the vestibule. When passenger cars are coupled, their vestibules are joined by mating faceplate and diaphragm assemblies to create a weather-tight seal for the safety and comfort of passengers who are stepping from car to car. In British usage the term refers to the part of the carriage where the passenger doors are located; this can be at the ends of the carriage (on long-distance stock) or at intermediate positions along its length (typical on modern suburban stock). Commercial buildings The U.S. Department of Energy Building Energy Codes Program released a publication on 19 June 2018, which detailed the requirements of a vestibule to be used in commercial buildings.
The publication states that vestibules are required in order to reduce the amount of air that infiltrates a space, aiding energy conservation and increasing comfort near entrance doors. By creating an airlock entry, vestibules reduce infiltration losses or gains caused by wind. Designers of commercial buildings must install a vestibule between the main entry doors leading to spaces that are greater than or equal to . One other requirement of the design is that it must not be necessary for both sets of doors to be open in order to pass through the vestibule, and the doors should have devices that allow for self-closing. For example, in New York City in the winter, temporary sidewalk vestibules are commonly placed in front of entrances to restaurants to reduce cold drafts reaching customers inside. See also Antarala – vestibule in certain Hindu temples Entryway Genkan – entryway area in Japanese buildings Propylaeum – monumental gateway in Ancient Greek architecture Revolving door – used for similar functions References Citations Sources Further reading External links Architectural elements Rooms
Vestibule (architecture)
[ "Technology", "Engineering" ]
1,620
[ "Building engineering", "Rooms", "Architectural elements", "Components", "Architecture" ]
9,246,109
https://en.wikipedia.org/wiki/Eikos
Eikos, Inc was a technology company based in Franklin, Massachusetts that developed transparent, electrically conductive carbon nanotube films and nanotube inks for transparent conductive coatings. Eikos branded its technology as Invisicon. It was founded in 1996 by Joseph Piche. Eikos aimed to replace indium tin oxide (ITO) and conducting polymers with carbon nanotube transparent conductors in several common electronic devices, such as touch screens, LCDs, OLEDs, photovoltaics, electroluminescent lamps, and electronic paper. Nanotube films have several advantages over ITO that make them attractive in these markets; for example, nanotube films are exceptionally flexible and can be deposited using low-cost, atmospheric coating methods. Eikos' coatings were recognized by R&D Magazine with an R&D 100 Award in 2005 for environmentally friendly coatings and with inclusion in the Micro/Nano 25 in 2006. Eikos was also awarded a Technology Innovation Award for Materials and other Base Technologies by The Wall Street Journal in September 2006. References External links Eikos Website Companies based in Massachusetts Chemical companies of the United States Nanotechnology institutions Defunct electronics companies of the United States Nanotechnology companies Companies established in 1996
Eikos
[ "Materials_science" ]
254
[ "Nanotechnology", "Nanotechnology institutions", "Nanotechnology companies" ]
9,246,645
https://en.wikipedia.org/wiki/25D/Neujmin
25D/Neujmin, otherwise known as Comet Neujmin 2, is a periodic comet in the Solar System discovered by Grigory N. Neujmin (Simeis) on February 24, 1916. The discovery was confirmed by George Van Biesbroeck (Yerkes Observatory, Wisconsin, United States) and Frank Watson Dyson (Greenwich Observatory, England) on March 1. A prediction by Andrew Crommelin (Royal Observatory, Greenwich, England) for 1921 was considered unfavourable and no observations were made. The comet was recovered in 1926 and last observed on February 10, 1927. Searches in 1932 and 1937 were unsuccessful, and the comet has consequently remained a lost comet since 1927. Using the JPL Horizons nominal orbit, the comet is still expected to come to perihelion around 1.3 AU from the Sun. References External links Orbital simulation from JPL (Java) / Horizons Ephemeris 25D at Kronk's Cometography 25D at Kazuo Kinoshita's Comets 25D at Seiichi Yoshida's Comet Catalog Periodic comets Lost comets 0025 19160224
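The quoted perihelion distance is simple arithmetic on the orbital elements: q = a(1 − e), and Kepler's third law gives the period from the semi-major axis. A minimal Python sketch follows; the element values below are illustrative assumptions in the right range for a short-period comet, not 25D's catalogued elements:

```python
import math

def perihelion_au(a_au: float, e: float) -> float:
    """Perihelion distance q = a * (1 - e), in AU."""
    return a_au * (1.0 - e)

def period_years(a_au: float) -> float:
    """Kepler's third law for heliocentric orbits: P[yr] = a[AU]**1.5."""
    return a_au ** 1.5

# Hypothetical elements, chosen only to illustrate the arithmetic.
a, e = 3.0, 0.57
print(f"q ~ {perihelion_au(a, e):.2f} AU, P ~ {period_years(a):.1f} yr")
# -> q ~ 1.29 AU, consistent with a perihelion "around 1.3 AU from the Sun"
```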
25D/Neujmin
[ "Astronomy" ]
239
[ "Astronomy stubs", "Comet stubs" ]
9,246,889
https://en.wikipedia.org/wiki/Netgraph
netgraph is the graph-based kernel networking subsystem of FreeBSD since 3.4 and of DragonFly BSD since its fork from FreeBSD. Netgraph provides support for L2TP, PPTP, ATM, and Bluetooth using a modular set of nodes that form the graph. Netgraph has also been ported to other operating systems: NetBSD kernel 1.5V (not integrated into the mainline kernel) Linux kernel 2.4 and 2.6 by 6WIND (commercial closed-source port) Linux kernel 3.0 by LANA History Netgraph was originally designed and implemented at Whistle Communications by Julian Elischer and Archie Cobbs for the Whistle InterJet small office router product. The purpose of the project was to create a flexible framework for implementing new networking protocols. Key requirements included the ability to prototype with user-space programs while still retaining the ability to interact with data flows normally hidden within the kernel. References External links netgraph(4) man page Netgraph article BSD software Free network-related software
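Netgraph nodes are typically inspected and wired together from user space with the ngctl(8) utility. As a rough sketch (assuming a FreeBSD host with netgraph loaded, run with sufficient privileges), the following Python wrapper shells out to ngctl to list the current nodes of the graph:

```python
import subprocess

def netgraph_nodes() -> str:
    # 'ngctl list' prints one line per netgraph node (name, type, ID, hook count).
    result = subprocess.run(["ngctl", "list"],
                            capture_output=True, text=True, check=True)
    return result.stdout

if __name__ == "__main__":
    print(netgraph_nodes())
```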
Netgraph
[ "Technology" ]
211
[ "Computing stubs", "Computer network stubs" ]
9,246,933
https://en.wikipedia.org/wiki/Text%20Services%20Framework
The Text Services Framework (TSF) is a COM framework and API in Windows XP and later Windows operating systems that supports advanced text input and text processing. The Language Bar is the core user interface for the Text Services Framework. Overview The Text Services Framework is designed to offer advanced language and word processing features to applications. It supports features such as multilingual support, keyboard drivers, handwriting recognition, speech recognition, as well as spell checking and other text and natural language processing functions. It is also downloadable for older Windows operating systems. The Language Bar enables text services to add UI elements to the toolbar and enables these elements when an application has focus. From the Language Bar, users can select the input language, and control keyboard input, handwriting recognition and speech recognition. The Language Bar also provides a direct means to switch between installed languages, even when a non-TSF-enabled application has focus. Starting with Windows XP Tablet PC Edition 2005 and Windows Vista, the RichEdit control supports the Text Services Framework. Windows Speech Recognition in Windows Vista is also implemented using the Text Services Framework. Features TSF is extensible. Independent software vendors can write their own text processing features for TSF. TSF-enabled applications can receive text input from any text service that supports TSF without having to be aware of any details of the text source. Services built using TSF are globally available to any application. TSF enables a text service to store metadata with a document, a piece of text, or an object within the document. For example, a speech input text service can store sound information associated with a block of text. TSF enables text services to provide accurate and complete text conversion, with continuous access to the document buffer. Text services using TSF can avoid separating their functionality into modes for input and modes for editing. This input architecture enables the buffered and accumulating text stream to change dynamically, thereby enabling more efficient keyboard input and text editing. TSF is device-independent and enables text services for multiple input devices including keyboard, electronic pen or stylus, and microphone. ctfmon and CTF ctfmon (ctfmon.exe) is a process used to activate the Alternative User Input Text Input Processor (TIP) and the Microsoft Language Bar. Ctfmon is also a component of Windows XP, Windows Vista and Windows 7 which enables advanced user input services in applications (pen and ink, speech, etc.). ctfmon.exe in Windows XP superseded internat.exe (short for "international") from Windows 95, Windows NT 4.0, Windows 98 and Windows 2000. CTF stands for Common Text Framework (codenamed Cicero), according to the leaked Windows XP source code and patent text. In August 2019, Google Project Zero discovered and publicly exposed a critical security vulnerability in CTF that dated back to its first release in Windows XP. The vulnerability, known as CVE-2019-1162, allows privilege escalation and security boundary traversal. Microsoft patched this vulnerability in August 2019. References External links Text Services Framework documentation on MSDN Text Services Framework blog How to use the language bar in Windows XP The Language Bar in Windows XP Language Bar Overview: Windows Vista Help Windows components Natural language and computing
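Applications reach TSF through COM interfaces such as ITfThreadMgr, which msctf.dll can create via its documented TF_CreateThreadMgr export. A minimal Windows-only sketch in Python via ctypes (it only creates the thread manager and makes no further interface calls; error handling and interface release are omitted):

```python
import ctypes
from ctypes import byref, c_void_p

ole32 = ctypes.OleDLL("ole32")   # OleDLL raises OSError on a failing HRESULT
msctf = ctypes.OleDLL("msctf")

ole32.CoInitialize(None)                      # TSF is COM-based: set up the apartment first
thread_mgr = c_void_p()                       # receives an ITfThreadMgr* on success
msctf.TF_CreateThreadMgr(byref(thread_mgr))   # documented msctf.dll entry point
print("ITfThreadMgr created at", hex(thread_mgr.value))
# A real text service would now call Activate() and create document managers;
# releasing the interface before CoUninitialize() is skipped in this sketch.
ole32.CoUninitialize()
```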
Text Services Framework
[ "Technology" ]
660
[ "Input methods", "Natural language and computing" ]
9,247,154
https://en.wikipedia.org/wiki/Franco-British%20Nuclear%20Forum
The first meeting of the Franco–British Nuclear Forum was held in Paris in November 2007, chaired by the Minister for Energy and the French Industry Minister. Its working groups focused on specific areas for collaboration. A follow-up meeting in London was planned for March 2008, but did not take place. See also Nuclear power in the United Kingdom Nuclear power in France External links BBC: France and UK boost nuclear ties References Nuclear energy in the United Kingdom Nuclear energy in France France–United Kingdom relations
Franco-British Nuclear Forum
[ "Physics" ]
103
[ "Nuclear and atomic physics stubs", "Nuclear physics" ]
9,247,352
https://en.wikipedia.org/wiki/Open%20Design%20Alliance
Open Design Alliance is a nonprofit organization creating software development kits (SDKs) for engineering applications. ODA offers interoperability tools for the CAD, BIM, and mechanical industries, covering .dwg, .dxf, .dgn, Autodesk Revit, Autodesk Navisworks, and .ifc files, as well as additional tools for visualization, web development, 3D PDF publishing and modeling. History 1998-2014 The Alliance was formed in February 1998 as the OpenDWG Alliance, with its initial release of code based on the AUTODIRECT libraries written by Matt Richards of MarComp. In 2002, the OpenDWG library was renamed to DWGdirect, and the same year, the alliance was renamed to Open Design Alliance. On November 22, 2006, Autodesk sued the Open Design Alliance alleging that its DWGdirect libraries infringed Autodesk's trademark for the word "Autodesk", by writing the TrustedDWG code (including the word "AutoCAD") into DWG files it created. In April 2007, the suit was dropped, with Autodesk modifying the warning message in AutoCAD 2008 (to make it more benign), and the Open Design Alliance removing support for the TrustedDWG code from its DWGdirect libraries. In 2008, support was added for .dgn files with DGNdirect. In April 2010, DWGdirect was renamed to Teigha for .dwg files, OpenDWG was renamed to Teigha Classic and DGNdirect was renamed to Teigha for .dgn files. 2015-2024 Since August 2017 (v. 4.3.1), Teigha contains production support for version 2018 .dwg files, including architectural, civil and mechanical custom objects. In February 2018 (v. 4.3.2), support for STL and OBJ files was announced. In September 2018 the Teigha brand was retired. In October 2018 ODA started work on its IFC solution. In January 2019 Drawings 2019.2 introduced extrude and revolve 3D solid modeling operations as part of the standard SDK. Also that month, ODA announced the release of its new BimNv SDK. In May 2020 ODA switched to monthly releases. In June 2020 ODA released its free Open IFC Viewer, and in July 2021 ODA started development of STEP support. In October 2021 ODA released its IFC validation engine. In January 2022 ODA started Scan-to-BIM development. In September 2022 ODA started MCAD SDK development, and in October 2022 ODA released the STEP SDK for production use. In September 2024 ODA removed the free trial downloads of the ODAFileConverter. ODA products and supported file formats CAD Drawings SDK is a development toolkit that provides access to all data in .dwg and .dgn files through an object-oriented API, allows creating and editing any type of .dwg or .dgn drawing file, and can be extended with custom .dwg objects. (Old names: Teigha Drawings, Teigha for .dwg files and Teigha for .dgn files; OpenDWG and DWGdirect; DGNdirect.) Drawings SDK also provides exchange of additional file formats to and from .dwg and .dgn. Architecture SDK is a development toolkit for building .dwg-based architectural design applications. It offers interoperability with Autodesk Architecture files (old name: Teigha Architecture). Civil SDK is a development toolkit for working with Autodesk Civil 3D files. The Civil API provides read/write access to data in civil custom objects (old name: Teigha Civil). Map SDK is a development toolkit for working with Autodesk Map 3D custom objects in any ODA-based application. BIM BimRv SDK is a development toolkit for reading, writing, and creating .rvt and .rfa files. IFC SDK is a development toolkit featuring 100% compatibility with the buildingSMART IFC standard.
It offers a geometry building module for creating IFC geometry, which includes the ODA facet modeler and B-Rep modeler. BimNv is a development toolkit for reading, visualizing and creating Autodesk Navisworks files. Scan-To-BIM is a development toolkit for converting point cloud data to 3D BIM models. Mechanical Mechanical SDK is a development toolkit for working with Autodesk Mechanical files. STEP SDK is one of the newest ODA development toolkits; it provides access to STEP model data and has been in production since October 2022. MCAD SDK is an open exchange platform for 3D MCAD file formats such as Inventor, IGES, Rhino, CATIA V4, CADDS, 3Shape DCM, CATIA V5, PLMXML, Parasolid, SolidWorks, Creo, STEP, SolidEdge, ProE, UG NX, CGR, CATIA V6, JT, and Procera. ODA Core Platform Technologies Visualize SDK is a graphics toolkit designed for engineering applications development. Web SDK uses Visualize SDK to embed engineering models into web pages and create web/SaaS applications. Publish SDK is a development toolkit for creating 2D and 3D .pdf and .prc models. All PDFs are compatible with ISO standards and Adobe tools. Publish SDK can create PRC-based 3D PDF documents that contain full B-Rep models and can include animation, interactive views, part lists, etc. Membership There are six types of ODA membership: Educational: qualified university use only, 1 year limit Non-commercial: any kind of internal automation for in-house use and R&D, 2 year limit Commercial: limited commercial use (sell up to 100 copies), web/SaaS use not allowed Sustaining: unlimited commercial use, web/SaaS use allowed Founding: unlimited commercial use with full source code Corporate: unlimited commercial use across multiple business units There is also a free trial period. Releases Open Design Alliance provides monthly production releases. Annual ODA conference Open Design Alliance holds an ODA conference every year in September. The two-day conference includes presentations from directors and developers and face-to-face meetings for non-members, members, ODA developers, and ODA executives. Anyone who is interested can register and attend the conference. Member organizations of the ODA The following is an incomplete list of members of the Open Design Alliance. Corporate members Alias Limited Allplan Autodesk AutoDWG AVEVA Bricsys Dassault Systèmes Nemetschek Design Data Corporation Graphisoft Hexagon AB Intergraph IronCAD Knowledge Base Microsoft Nanosoft OpenText Corp Shenzhen Jiang & Associates Creative Design Spatial Corp Vianova Systems AS Founding members The following is an incomplete list of founding member organizations of the Open Design Alliance. 4M SA Accusoft Corporation Advanced Computer Solutions Andor Corporation Beijing Glory PKPM Technology Bentley Systems BlueCielo ECM Solutions Software Central South University Chongqing Chinabyte Network Co Ltd CSoft Development EntIT Software LLC Epic Games Esri Glodon Graebert GmbH GRAITEC INNOVATION SAS Gstarsoft Haofang Tech Hilti Hyland IMSI/Design IntelliCAD Intrasec ITI TranscenData MIDAS Information Technology Onshape Oracle Photron Relativity Robert McNeel And Associates Safe Software Shandong Hoteam Software Shenzhen ZhiHuiRuiTu Information Technology Siemens Stabiplan Trimble UNIFI Labs Watchtower Bible and Tract Society ZWCAD Software ODA developers in Ukraine Since 2016 ODA has had a 30-person development team in Chernihiv, Ukraine (mostly students of Chernihiv Polytechnic National University).
On 4 April 2022, in response to the full-scale Russian invasion of Ukraine and the continuous shelling of Chernihiv, Neil Peterson, ODA President, announced a campaign to collect donations for Ukrainian team members and their families, and stated that help with relocation and temporary housing was being provided. See also AutoCAD DWG Digital modeling and fabrication Open Cascade Technology Building Information Modeling Industry Foundation Classes References External links Computer-aided engineering software Information technology organizations CAD file formats Open formats
Open Design Alliance
[ "Technology" ]
1,738
[ "Information technology", "Information technology organizations" ]
9,247,603
https://en.wikipedia.org/wiki/Protistology
Protistology is a scientific discipline devoted to the study of protists, a highly diverse group of eukaryotic organisms. All eukaryotes apart from animals, plants and fungi are considered protists. Its field of study therefore overlaps with the more traditional disciplines of phycology, mycology, and protozoology, just as protists embrace mostly unicellular organisms described as algae, some organisms regarded previously as primitive fungi, and protozoa ("animal" motile protists lacking chloroplasts). They are a paraphyletic group with very diverse morphologies and lifestyles. Their sizes range from unicellular picoeukaryotes only a few micrometres in diameter to multicellular marine algae several metres long. History The history of the study of protists has its origins in the 17th century. Since the beginning, the study of protists has been intimately linked to developments in microscopy, which have allowed important advances in the understanding of these organisms due to their generally microscopic nature. Among the pioneers was Anton van Leeuwenhoek, who observed a variety of free-living protists and in 1674 named them “very little animalcules”. During the 19th century, studies on the Infusoria were dominated by Christian Gottfried Ehrenberg and Félix Dujardin. The term "protozoology" has become dated as understanding of the evolutionary relationships of the eukaryotes has improved, and is frequently replaced by the term "protistology". For example, the Society of Protozoologists, founded in 1947, was renamed International Society of Protistologists in 2005. However, the older term is retained in some cases (e.g., the Polish journal Acta Protozoologica). Journals and societies Dedicated academic journals include: Archiv für Protistenkunde, 1902-1998, Germany (renamed Protist, 1998-); Archives de la Societe Russe de Protistologie, 1922-1928, Russia; Journal of Protozoology, 1954-1993, USA (renamed Journal of Eukaryotic Microbiology, 1993-); Acta Protozoologica, 1963-, Poland; Protistologica, 1968-1987, France (renamed European Journal of Protistology, 1987-); Japanese Journal of Protozoology, 1968-2017, Japan (renamed Journal of Protistology, 2018-); Protistology, 1999-, Russia. Other less specialized journals, important to protistology before the appearance of the more specialized ones: Comptes rendus de l'Académie des sciences, 1666-, France; Quarterly Journal of Microscopical Science, 1853-1966, UK (renamed Journal of Cell Science, 1966-); Archiv für mikroskopische Anatomie, 1865-1923, Germany; Transactions of the Microscopical Society, 1841-1869, UK (renamed Journal of Microscopy, 1869-); Transactions of the American Microscopical Society, 1880-1994, USA (renamed Invertebrate Biology, 1995-); Memórias do Instituto Oswaldo Cruz, 1909-, Brazil. Some societies: Society of Protozoologists, 1947-2005, USA (renamed International Society of Protistologists, 2005-), with many affiliates; International Society for Evolutionary Protistology, 1975, USA. Protistology UK (previously British Society for Protist Biology) International Society of Protistologists (previously the Society of Protozoologists) Notable protistologists (sorted by alphabetical order of surnames) The field of protistology was idealized by Haeckel, but its widespread recognition is more recent.
In fact, many of the researchers cited below considered themselves protozoologists, phycologists, mycologists, microbiologists, microscopists, parasitologists, limnologists, biologists, naturalists, zoologists, botanists, etc., but made significant contributions to the field. References External links Portal to protistology by the International Society of Protistologists Branches of biology
Protistology
[ "Biology" ]
851
[ "Eukaryotes", "Protists", "nan" ]
9,248,446
https://en.wikipedia.org/wiki/Enterprise%20Privacy%20Authorization%20Language
Enterprise Privacy Authorization Language (EPAL) is a formal language for writing enterprise privacy policies to govern data handling practices in IT systems according to fine-grained positive and negative authorization rights. It was submitted by IBM to the World Wide Web Consortium (W3C) in 2003 to be considered for recommendation. In 2004, a lawsuit was filed by Zero-Knowledge Systems claiming that IBM had breached a copyright agreement from when the two companies worked together in 2001-2002 to create the Privacy Rights Markup Language (PRML). EPAL is based on PRML, which is why Zero-Knowledge argued it should be a co-owner of the standard. See also XACML - eXtensible Access Control Markup Language, a standard by OASIS. References EPAL 1.2 submission to the W3C, 10 Nov 2003 Technology Report on EPAL from OASIS A Comparison of Two Privacy Policy Languages: EPAL and XACML by Anne Anderson, Sun Microsystems Laboratories Computer security procedures XML-based standards IBM software
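An EPAL policy pairs user categories, data categories, purposes, and actions with allow/deny rulings; the normative vocabulary is defined in the EPAL 1.2 submission. The category names and rule shape below are invented for illustration, and the evaluator is a generic ordered-rules sketch rather than IBM's implementation:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Rule:
    ruling: str          # "allow" or "deny"
    user_category: str
    data_category: str
    purpose: str
    action: str

# Illustrative rules in the spirit of EPAL's fine-grained authorizations.
POLICY = [
    Rule("allow", "sales-agent", "contact-data", "order-fulfilment", "read"),
    Rule("deny",  "sales-agent", "contact-data", "marketing",        "read"),
]

def evaluate(user, data, purpose, action, default="deny"):
    # First applicable rule wins, mirroring ordered-rule policy languages.
    for r in POLICY:
        if (r.user_category, r.data_category, r.purpose, r.action) == (user, data, purpose, action):
            return r.ruling
    return default

print(evaluate("sales-agent", "contact-data", "marketing", "read"))  # -> deny
```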
Enterprise Privacy Authorization Language
[ "Technology", "Engineering" ]
203
[ "Computer standards", "Computer security procedures", "Cybersecurity engineering", "XML-based standards" ]
9,248,456
https://en.wikipedia.org/wiki/Deuterated%20chloroform
Deuterated chloroform, also known as chloroform-d, is the organic compound with the formula CDCl3. Deuterated chloroform is a common solvent used in NMR spectroscopy. Its properties are virtually identical to those of ordinary chloroform (CHCl3). Deuterochloroform was first made in 1935 during the years of research on deuterium. Preparation Deuterated chloroform is commercially available. It is more easily produced and less expensive than deuterated dichloromethane. Deuterochloroform is produced by the reaction of hexachloroacetone with deuterium oxide, using pyridine as a catalyst. The large difference in boiling points between the starting material and the product facilitates purification by distillation. Treating chloral with sodium deuteroxide (NaOD) also gives deuterated chloroform. NMR solvent In proton NMR spectroscopy, deuterated solvent (enriched to >99% deuterium) is typically used to avoid recording a large interfering signal or signals from the proton(s) (i.e., hydrogen-1) present in the solvent itself. If nondeuterated chloroform (containing a full equivalent of protium) were used as solvent, the solvent signal would almost certainly overwhelm and obscure any nearby analyte signals. In addition, modern instruments usually require the presence of deuterated solvent, as the field frequency is locked using the deuterium signal of the solvent to prevent frequency drift. Commercial chloroform-d does, however, still contain a small amount (0.2% or less) of non-deuterated chloroform; this results in a small singlet at 7.26 ppm, known as the residual solvent peak, which is frequently used as an internal chemical shift reference. In carbon-13 NMR spectroscopy, the sole carbon in deuterated chloroform shows a triplet at a chemical shift of 77.16 ppm, with the three peaks about equal in size, resulting from splitting by spin coupling to the attached spin-1 deuterium atom (for comparison, non-deuterated chloroform has a chemical shift of 77.36 ppm). Deuterated chloroform is a general-purpose NMR solvent, as it is not very chemically reactive and unlikely to exchange its deuterium with its solute, and its low boiling point allows for easy sample recovery. It is, however, incompatible with strongly basic, nucleophilic, or reducing analytes, including many organometallic compounds. Hazards Chloroform reacts photochemically with oxygen to form chlorine, phosgene and hydrogen chloride. To slow this process and reduce the acidity of the solvent, chloroform-d is stored in brown-tinted bottles, often over copper chips or silver foil as a stabilizer. Instead of metals, a small amount of a neutralizing base like potassium carbonate may be added. It is less toxic to the liver and kidneys than ordinary chloroform due to the stronger C–D bond as compared to the C–H bond, making it somewhat less prone to form the destructive trichloromethyl radical (•CCl3). References Deuterated solvents Organochlorides Trichloromethyl compounds Nuclear magnetic resonance
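The 1:1:1 triplet follows from the multiplicity rule 2nI + 1 for coupling to n equivalent spin-I nuclei: one deuterium (I = 1) gives three equally intense lines. A small arithmetic sketch in Python; the one-bond C–D coupling constant of roughly 32 Hz and the 100 MHz carbon-13 observation frequency are assumed typical values used only for illustration:

```python
def multiplicity(n: int, spin: float) -> int:
    """Number of lines from coupling to n equivalent nuclei of a given spin."""
    return int(2 * n * spin + 1)

# One attached deuterium (spin 1) splits the 13C signal of CDCl3.
lines = multiplicity(n=1, spin=1.0)          # -> 3 lines, 1:1:1 intensities
center_ppm = 77.16                            # triplet centre from the text
j_cd_hz = 32.0                                # assumed typical 1J(C-D) value
carbon_freq_mhz = 100.0                       # 13C frequency on a "400 MHz" instrument
offsets_hz = [(k - (lines - 1) / 2) * j_cd_hz for k in range(lines)]
peaks_ppm = [center_ppm + off / carbon_freq_mhz for off in offsets_hz]
print(lines, [round(p, 2) for p in peaks_ppm])  # 3 lines ~0.32 ppm apart at 100 MHz
```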
Deuterated chloroform
[ "Physics", "Chemistry" ]
691
[ "Deuterated solvents", "Nuclear magnetic resonance", "Nuclear physics" ]
9,248,704
https://en.wikipedia.org/wiki/30P/Reinmuth
Comet 30P/Reinmuth, also known as Comet Reinmuth 1, is a periodic comet in the Solar System, first discovered by Karl Reinmuth (Landessternwarte Heidelberg-Königstuhl, Germany) on February 22, 1928. First calculations of its orbit gave a period of 25 years, but this was revised down to seven years, prompting speculation that it was the same comet as Comet Taylor, which had been lost since 1915. Further calculations by George van Biesbroeck concluded they were different comets. The 1935 approach was observed, though it was not as favourable. In 1937 the comet passed close to Jupiter, which increased its perihelion distance and orbital period. Due to miscalculations the 1942 appearance was missed, but the comet has been observed at every subsequent appearance. The comet nucleus is estimated to be 7.8 kilometers in diameter. References External links Orbital simulation from JPL (Java) / Horizons Ephemeris 30P/Reinmuth magnitude plot for 2010 30P at Kronk's Cometography 30P at Kazuo Kinoshita's Comets 30P at Seiichi Yoshida's Comet Catalog Periodic comets 0030 Discoveries by Karl Wilhelm Reinmuth 030P 19280222
30P/Reinmuth
[ "Astronomy" ]
256
[ "Astronomy stubs", "Comet stubs" ]
9,249,185
https://en.wikipedia.org/wiki/69P/Taylor
Comet Taylor is a periodic comet in the Solar System, first discovered by Clement J. Taylor (Cape Town, South Africa) on November 24, 1915. George van Biesbroeck and E. E. Barnard (Yerkes Observatory, Wisconsin, United States) observed that the comet had split into two distinct nuclei, but this was not seen after March 16. The comet was predicted to return in 1922, but was lost (see lost comet). In 1928 the newly discovered Comet Reinmuth 1 was originally assumed to be Comet Taylor, and in 1951 the same assumption was made about Comet Arend-Rigaux. The 1976 return was predicted by N. A. Belyaev and V. V. Emel'yanenko, and on January 25, 1977, Charles Kowal (Palomar Observatory, California, United States) found images on photographic plates taken on December 13, 1976. The comet was recovered at the returns in 1984 and 1990, and in January 1998 it was observed at magnitude 12 when it was 1 AU from Earth. There were six recovery images of 69P in October 2018, when the comet had a magnitude of about 20.5. Due to the lack of observations, when the comet comes to perihelion on March 18, 2019, at 2.45 AU from Earth, the 3-sigma uncertainty in the comet's Earth distance will be about ±6000 km. References External links Orbital simulation from JPL (Java) / Horizons Ephemeris 69P/Taylor – Seiichi Yoshida @ aerith.net 69P at Kronk's Cometography 69P at Kazuo Kinoshita's Comets 69P at Seiichi Yoshida's Comet Catalog Periodic comets 0069 069P 19151124
69P/Taylor
[ "Astronomy" ]
359
[ "Astronomy stubs", "Comet stubs" ]
9,249,236
https://en.wikipedia.org/wiki/List%20of%20WiMAX%20networks
The following is a list of WiMAX networks. Standards IEEE 802.16 - called fixed WiMAX because it provides a static connection without handover. IEEE 802.16e - called mobile WiMAX because it allows handover between base stations. IEEE 802.16m - advanced air interface with data rates of 100 Mbit/s for mobile and 1 Gbit/s for fixed use. Networks See also List of LTE networks References IEEE 802 Metropolitan area networks Network access Technology-related lists Telecommunications lists WiMAX
List of WiMAX networks
[ "Technology", "Engineering" ]
100
[ "Electronic engineering", "WiMAX", "Wireless networking", "Network access" ]
9,249,358
https://en.wikipedia.org/wiki/Ditch%20Witch
Ditch Witch, a trade name of Charles Machine Works, is an American brand of underground utility construction equipment, principally trenchers, which has been in operation since 1949. It is the leading subsidiary of Charles Machine Works, headquartered in Perry, Oklahoma. Charles Machine Works has been, since 2019, a subsidiary of the Toro Company. Innovation of Ditch Witch machines started in the 1940s, when a compact trenching machine was created to replace the pick and shovel for the installation of underground residential utility services. The Ditch Witch organization specializes in the design and manufacture of underground construction equipment. The company is a source for trenchers, vibratory plows, horizontal directional drilling systems, drill pipe, downhole tools, vacuum excavation systems, fluid management systems, and mini skid steers. Because of its extensive experience in the construction of subterranean structures and systems, CMW and Ditch Witch have been called "The Underground Authority." History In 1902, Carl Frederick Malzahn, a German immigrant seeking to escape the harsh winters of Minnesota, moved his family to Perry, Oklahoma, and opened the Malzahn Blacksmith Shop with his sons, Charlie and Gus. The sons took over the business in 1913 and renamed it Malzahn Brothers' General Blacksmithing. The business prospered, and several years later, with the advent of an oil boom, it began specializing in repairs for the nearby oil fields. After Gus died in 1928, Charlie renamed the business Charlie's Machine Shop. In 1944, Charlie persuaded his son, Edwin "Ed" Malzahn (July 3, 1921 - December 11, 2015), by then an Oklahoma A&M (now Oklahoma State)-trained mechanical engineer, to join the business. Compact trencher development In the late 1940s, Malzahn began to apply his mechanical engineering knowledge to inventing a device that he believed would be in great demand once it was produced. At the time, the process of installing residential utility services—electric, gas and plumbing lines—involved slow, tedious pick-and-shovel labor. Malzahn's idea was to create a compact trencher that would dramatically reduce the time and effort of this process. Working together, Ed and his father spent months in the family machine shop creating the prototype of what would be known as the DWP, which stood for Ditch Witch Power. As described by the ASME, "The DWP used a vertical bucket line with an endless bucket chain to carry off the spoil... Small two piece buckets with sharp, finger-like edges were mounted on the vertical chain to gouge out chunks of dirt. The buckets were attached in sequence onto an endless moving chain that carried them down a ladder type mechanism to chew out chunks of soil, then upward to dump the spoil in neat piles on the ground as they began the downward descent to bring up more dirt." A 6-inch wide trench with a digging depth of 30 inches was the goal. The first production trencher rolled off the assembly line in 1949. Called the "endless conveyor ditch digging machine," it was the first mechanized, compact service-line trencher developed for laying underground water lines between the street main and the house. It was initially marketed for $750 per machine. Before the end of the 1950s, the company bought land west of Perry and built a new manufacturing facility. In 1955, Ed Malzahn's endless conveyor ditch digging machine received U.S. Patent No. 2,714,262.
Alex Baker, a landscape contractor from Long Island, New York, bought the first DWP machine that came off the production line. He used it to install underground sprinkler systems until he traded it in for a newer model in 1959. The Ditch Witch company restored the older model to mint condition and put it on display in the Ditch Witch museum in Perry. In 1958, Charles Machine Works incorporated, with Charlie and Ed Malzahn having equal control. During the same year, the first Ditch Witch international office opened in Australia. Meanwhile, the DWP led to the creation of the compact trencher industry, which today produces all types of equipment for efficiently installing any type of underground utilities including water, sewer and gas lines, as well as telecommunications, CATV, and fiber-optic cables. By 1969, customers included utility, telephone, and cable television companies, government agencies, and general contractors. In 2019, the Toro Company acquired Charles Machine Works, maintaining it as a Toro subsidiary. Still based in Perry, Oklahoma, as of 2022 the Charles Machine Works designs and manufactures a wide variety of underground construction equipment: trenchers, vibratory plows, horizontal directional drilling systems, drill pipe, downhole tools, vacuum excavation systems, fluid management systems, and mini skid steers, all bearing the Ditch Witch name. Charles Machine Works' campus outside of Perry contains a manufacturing plant as well as training, testing, research, and product development facilities. It employs more than 1,300 people, making the company the largest employer in Perry and in Noble County. Accolades and awards The Ditch Witch compact trencher has twice been named "one of the 100 best American-made products in the world" by Fortune magazine. On December 16, 2002, American Society of Mechanical Engineers President Susan H. Kemp awarded the Ditch Witch organization a bronze plaque designating the DWP as a historical mechanical engineering landmark. Notes References External links Company website Encyclopedia of Oklahoma History and Culture - Ditch Witch Voices of Oklahoma interview with Ed Malzahn. First person interview conducted on April 14, 2011, with Ed Malzahn, founder of Ditch Witch. 1949 establishments in Oklahoma American inventions Noble County, Oklahoma Construction equipment manufacturers of the United States Engineering vehicles Manufacturing companies based in Oklahoma Manufacturing companies established in 1949 2019 mergers and acquisitions
Ditch Witch
[ "Engineering" ]
1,172
[ "Engineering vehicles" ]
9,249,581
https://en.wikipedia.org/wiki/Ultimate%20reality
Ultimate reality is "the supreme, final, and fundamental power in all reality". It refers to the most fundamental fact about reality, especially when it is seen as also being the most valuable fact. It may overlap with the concept of the Absolute in certain philosophies. Greek philosophy Anaximander (c. 610–546 BCE) believed that the ultimate substance of the universe, generally known as arche, was apeiron, an infinite and eternal substance that is the origin of all things. Aristotle (384–322 BCE) held that the unmoved mover "must be an immortal, unchanging being, ultimately responsible for all wholeness and orderliness in the sensible world" and that its existence is necessary to support everyday change. Democritus (c. 460–370 BCE) and Epicureanism (c. 307 BCE) rejected the idea of an ultimate reality, saying that only atoms and void exist, though atoms and the void do share the eternal, unbounded, and self-caused character that non-materialist views ascribe to ultimate reality. In Neoplatonism (3rd century CE), the first principle of reality is "the One", a perfectly simple and ineffable principle which is the source of the universe and exists without multiplicity, beyond being and non-being. Stoic physics (c. 300 BCE–3rd century CE) called the primitive substance of the universe pneuma or God, which is everything that exists and is a creative force that develops and shapes the cosmos. Buddhism In Theravada Buddhism, Nirvana is ultimate reality. Nirvana is described in negative terms; it is unconstructed and unconditioned. Mahayana Buddhism has different conceptions of ultimate reality, which is framed within the context of the two truths, the relative truth of everyday things and the ultimate truth. Some traditions, specifically those which rely on the Madhyamaka philosophy, reject the notion of a truly existing or essential ultimate reality, regarding any existent as empty (sunyata) of inherent existence (svabhava). Other strands of Mahayana thought have a more positive or cataphatic view of the ultimate reality. The Yogacara school tends to follow an idealistic metaphysics. Other examples include those traditions which rely more heavily on Buddha-nature thought, such as East Asian Mahayana schools like Huayan and Tibetan traditions like shentong. Hinduism In Hinduism, Brahman connotes the highest universal principle, the ultimate reality in the universe. In major schools of Hindu philosophy, it is the material, efficient, formal and final cause of all that exists. It is the pervasive, genderless, infinite, eternal truth and bliss which does not change, yet is the cause of all changes. Brahman as a metaphysical concept is the single binding unity behind diversity in all that exists in the universe. Taoism In Taoism, the Tao is the impersonal principle that underlies reality. It is a metaphysical principle and process that refers to how nature develops, being an enigmatic process of transformation. It is described as the source of existence, an ineffable mystery, and something that can be individually harnessed for the good. It is thought of as being "the flow of the universe" and the source of its order and its qi, but it is not considered a deity to be worshipped, even if some interpretations believed it had the power to bless or illuminate. Abrahamic religions Abrahamic conceptions of ultimate reality show diversity: some perspectives consider God to be a personal deity, while others have taken more abstract views. John Scotus Eriugena held that God's essence is uncaused and incomprehensible.
Similarly, Maimonides believed that God is a perfect unity and is indescribable with positive attributes, and that anthropomorphic imagery in the Bible is metaphorical. Modern philosophy Baruch Spinoza believed that God is the natural world, existing eternally and necessarily, and that everything is an effect of God's nature. He defined God as a metaphysical substance rather than a personal being, and wrote in Ethics that "blessedness" comes from the love of God, meaning knowledge of reality as it is. Contemporary philosophy notes the possibility that reality has no fundamental explanation and should be seen as a brute fact. Adherents of the principle of sufficient reason reject this, holding that everything must have a reason. Representation According to Dadosky, the concept of "ultimate reality" is difficult to express in words, poetry, mythology, and art. Paradox or contradiction is often used as a medium of expression because of the "contradictory aspect of the ultimate reality". According to Mircea Eliade, ultimate reality can be mediated or revealed through symbols. For Eliade the "archaic" mind is constantly aware of the presence of the Sacred, and for this mind all symbols are religious (relinking to the Origin). Through symbols human beings can get an immediate "intuition" of certain features of the inexhaustible Sacred. The mind makes use of images to grasp the ultimate reality of things because reality manifests itself in contradictory ways and therefore can't be described in concepts. It is therefore the image as such, as a whole bundle of meaning, that is "true" (faithful, trustworthy). Common symbols of ultimate reality include world trees, the tree of life, the microcosm, fire, and children. Paul Tillich held that God is the ground of being and is something that precedes the subject–object dichotomy. He considered God to be what people are ultimately concerned with, existentially, and held that religious symbols can be recovered as meaningful even without faith in the personal God of traditional Christianity. See also Absolute (philosophy) Ein Sof I Am that I Am Nondualism Pantheism Tao The One Wuji References Sources John Daniel Dadosky. The Structure of Religious Knowing: Encountering the Sacred in Eliade and Lonergan. State University of New York Press, 2004. Further reading Conceptions of God Infinity
Ultimate reality
[ "Mathematics" ]
1,256
[ "Mathematical objects", "Infinity" ]
9,249,813
https://en.wikipedia.org/wiki/Forward%20kinematics
In robot kinematics, forward kinematics refers to the use of the kinematic equations of a robot to compute the position of the end-effector from specified values for the joint parameters. The kinematics equations of the robot are used in robotics, computer games, and animation. The reverse process, which computes the joint parameters that achieve a specified position of the end-effector, is known as inverse kinematics. Kinematics equations The kinematics equations for the series chain of a robot are obtained using a rigid transformation [Z] to characterize the relative movement allowed at each joint and a separate rigid transformation [X] to define the dimensions of each link. The result is a sequence of rigid transformations alternating joint and link transformations from the base of the chain to its end link, which is equated to the specified position for the end link:

[T] = [Z_1][X_1][Z_2][X_2]\cdots[X_{n-1}][Z_n],

where [T] is the transformation locating the end-link. These equations are called the kinematics equations of the serial chain. Link transformations In 1955, Jacques Denavit and Richard Hartenberg introduced a convention for the definition of the joint matrices [Z] and link matrices [X] to standardize the coordinate frames for spatial linkages. This convention positions the joint frame so that it consists of a screw displacement along the Z-axis,

[Z_i] = \mathrm{Trans}_{Z_i}(d_i)\,\mathrm{Rot}_{Z_i}(\theta_i),

and it positions the link frame so it consists of a screw displacement along the X-axis,

[X_i] = \mathrm{Trans}_{X_i}(a_{i,i+1})\,\mathrm{Rot}_{X_i}(\alpha_{i,i+1}).

Using this notation, each link transformation along the serial chain of the robot is described by one of these coordinate transformations, where \theta_i, d_i, \alpha_{i,i+1} and a_{i,i+1} are known as the Denavit-Hartenberg parameters. Kinematics equations revisited The kinematics equations of a serial chain of n links, with joint parameters \theta_i, are given by

[T] = {}^{0}T_{n} = {}^{0}T_{1}\,{}^{1}T_{2}\cdots{}^{n-1}T_{n},

where {}^{i-1}T_{i} is the transformation matrix from the frame of link i to the frame of link i−1. In robotics, these are conventionally described by Denavit–Hartenberg parameters. Denavit-Hartenberg matrix The matrices associated with these operations are:

[Z_i] = \begin{bmatrix} \cos\theta_i & -\sin\theta_i & 0 & 0 \\ \sin\theta_i & \cos\theta_i & 0 & 0 \\ 0 & 0 & 1 & d_i \\ 0 & 0 & 0 & 1 \end{bmatrix}.

Similarly,

[X_i] = \begin{bmatrix} 1 & 0 & 0 & a_{i,i+1} \\ 0 & \cos\alpha_{i,i+1} & -\sin\alpha_{i,i+1} & 0 \\ 0 & \sin\alpha_{i,i+1} & \cos\alpha_{i,i+1} & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}.

The use of the Denavit-Hartenberg convention yields the link transformation matrix {}^{i-1}T_{i} = [Z_i][X_i] as

{}^{i-1}T_{i} = \begin{bmatrix} \cos\theta_i & -\sin\theta_i\cos\alpha_{i,i+1} & \sin\theta_i\sin\alpha_{i,i+1} & a_{i,i+1}\cos\theta_i \\ \sin\theta_i & \cos\theta_i\cos\alpha_{i,i+1} & -\cos\theta_i\sin\alpha_{i,i+1} & a_{i,i+1}\sin\theta_i \\ 0 & \sin\alpha_{i,i+1} & \cos\alpha_{i,i+1} & d_i \\ 0 & 0 & 0 & 1 \end{bmatrix},

known as the Denavit-Hartenberg matrix. Computer animation The forward kinematic equations can be used as a method in 3D computer graphics for animating models. The essential concept of forward kinematic animation is that the positions of particular parts of the model at a specified time are calculated from the position and orientation of the object, together with any information on the joints of an articulated model. So, for example, if the object to be animated is an arm with the shoulder remaining at a fixed location, the location of the tip of the thumb would be calculated from the angles of the shoulder, elbow, wrist, thumb and knuckle joints. Three of these joints (the shoulder, wrist and the base of the thumb) have more than one degree of freedom, all of which must be taken into account. If the model were an entire human figure, then the location of the shoulder would also have to be calculated from other properties of the model. Forward kinematic animation can be distinguished from inverse kinematic animation by this means of calculation - in inverse kinematics the orientation of articulated parts is calculated from the desired position of certain points on the model. It is also distinguished from other animation systems by the fact that the motion of the model is defined directly by the animator - no account is taken of any physical laws that might be in effect on the model, such as gravity or collision with other models.
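Computed numerically, the forward kinematics of a serial chain is simply a product of these 4×4 link matrices. Below is a minimal Python sketch with NumPy; the two-link planar arm, its link lengths, and the joint angles are illustrative assumptions, not parameters of any particular robot:

```python
import numpy as np

def dh_matrix(theta, d, alpha, a):
    """Denavit-Hartenberg link transformation [Z_i][X_i] as a 4x4 matrix."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0],
    ])

def forward_kinematics(dh_params):
    """Chain the per-link transforms: T = T_1 @ T_2 @ ... @ T_n."""
    T = np.eye(4)
    for theta, d, alpha, a in dh_params:
        T = T @ dh_matrix(theta, d, alpha, a)
    return T

# Planar 2R arm: both joints rotate about Z, link lengths 1.0 and 0.5.
params = [(np.pi / 4, 0.0, 0.0, 1.0),
          (np.pi / 4, 0.0, 0.0, 0.5)]
T = forward_kinematics(params)
# Expected: x = cos(45) + 0.5*cos(90) ~ 0.707, y = sin(45) + 0.5*sin(90) ~ 1.207
print("end-effector position:", np.round(T[:3, 3], 3))
```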
See also Inverse kinematics Kinematic chain Robot control Mechanical systems Robot kinematics Kinematic synthesis References 3D computer graphics Computational physics Robot kinematics
Forward kinematics
[ "Physics", "Engineering" ]
755
[ "Robot kinematics", "Robotics engineering", "Computational physics" ]
9,250,002
https://en.wikipedia.org/wiki/Energy%20planning
Energy planning has a number of different meanings, but the most common meaning of the term is the process of developing long-range policies to help guide the future of a local, national, regional or even the global energy system. Energy planning is often conducted within governmental organizations but may also be carried out by large energy companies such as electric utilities or oil and gas producers, whose operations release greenhouse gas emissions. Energy planning may be carried out with input from different stakeholders drawn from government agencies, local utilities, academia and other interest groups. Since 1973, energy modeling, on which energy planning is based, has developed significantly. Energy models can be classified into three groups: descriptive, normative, and futuristic forecasting. Energy planning is often conducted using integrated approaches that consider both the provision of energy supplies and the role of energy efficiency in reducing demands (Integrated Resource Planning). Energy planning should always reflect the outcomes of population growth and economic development. There are also several alternative energy solutions which avoid the release of greenhouse gases, such as electrifying current machines and using nuclear energy. A new energy plan for a city is created through a careful planning process that integrates city planning and energy planning and provides energy solutions for high-level cities and industrial parks. Planning and market concepts Energy planning has traditionally played a strong role in setting the framework for regulations in the energy sector (for example, influencing what type of power plants might be built or what prices were charged for fuels). But in the past two decades many countries have deregulated their energy systems so that the role of energy planning has been reduced, and decisions have increasingly been left to the market. This has arguably led to increased competition in the energy sector, although there is little evidence that this has translated into lower energy prices for consumers. Indeed, in some cases, deregulation has led to significant concentrations of "market power", with large, very profitable companies having a strong influence as price setters. Integrated resource planning Approaches to energy planning depend on the planning agent and the scope of the exercise. Several catch-phrases are associated with energy planning. Basic to all is resource planning, i.e. a view of the possible sources of energy in the future. A forking in methods is whether the planner considers the possibility of influencing the consumption (demand) for energy. The 1970s energy crisis ended a period of relatively stable energy prices and a stable supply-demand relation. Concepts of demand side management, least cost planning and integrated resource planning (IRP) emerged, with new emphasis on the need to reduce energy demand through new technologies or simple energy saving. Sustainable energy planning Further global integration of energy supply systems, together with local and global environmental limits, amplifies the scope of planning both in subject and in time perspective. Sustainable energy planning, which is a long-term process, should consider the environmental impacts of energy consumption and production, particularly in light of the threat of global climate change, which is caused largely by emissions of greenhouse gases from the world's energy systems.
The 2022 renewable energy industry outlook shows that supportive policies from an administration focused on combating climate change aid the expected growth of the renewable energy industry. Biden has argued in favor of developing the clean energy industry in the US and in the world to vigorously address climate change, and President Biden has expressed his intention to move away from the oil industry. The administration's "Plan for Climate Change and Environmental Justice" aims to reach 100% carbon-free power generation by 2035 and net-zero emissions by 2050 in the USA. Many OECD countries and some U.S. states are now moving to more closely regulate their energy systems. For example, many countries and states have been adopting targets for emissions of CO2 and other greenhouse gases. In light of these developments, broad-scope integrated energy planning could become increasingly important. Sustainable energy planning takes a more holistic approach to the problem of planning for future energy needs. It is based on a structured decision-making process with six key steps, namely: Exploration of the context of the current and future situation. Formulation of particular problems and opportunities which need to be addressed as part of the sustainable energy planning process; this could include such issues as "peak oil" or "economic recession/depression", as well as the development of energy demand technologies. Creation of a range of models to predict the likely impact of different scenarios; this traditionally would consist of mathematical modelling but is evolving to include "soft system methodologies" such as focus groups, peer ethnographic research, "what if" logical scenarios etc. Analysis, in which the results from a wide range of modelling exercises, literature reviews, open forum discussions etc. are analysed and structured in an easily interpreted format. Interpretation of the results to determine the scope, scale and likely implementation methodologies which would be required to ensure successful implementation. Quality assurance, a stage which actively interrogates each stage of the sustainable energy planning process and checks that it has been carried out rigorously, without bias, and that it furthers the aims of sustainable development and does not act against them. The last stage of the process is to take action. This may consist of the development, publication and implementation of a range of policies, regulations, procedures or tasks which together will help to achieve the goals of the sustainable energy plan. Designing for implementation is often carried out using "Logical Framework Analysis", which interrogates a proposed project and checks that it is completely logical, that it has no fatal errors and that appropriate contingency arrangements have been put in place to ensure that the complete project will not fail if a particular strand of the project fails. Sustainable energy planning is particularly appropriate for communities who want to develop their own energy security while employing best available practice in their planning processes. Energy planning tools (software) Energy planning can be conducted on different software platforms and over various timespans and with different qualities of resolution (i.e., very short divisions of time/space or very large divisions).
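As a flavor of what such tools compute, here is a deliberately tiny demand-projection sketch in Python; the growth rates and starting demand are invented placeholder numbers, not data from any named planning tool:

```python
def project_demand(base_twh: float, growth_rate: float, years: int) -> list[float]:
    """Compound annual growth applied to a starting electricity demand."""
    return [base_twh * (1 + growth_rate) ** t for t in range(years + 1)]

# Three what-if scenarios (placeholder values): low, reference, and high growth.
scenarios = {"low": 0.005, "reference": 0.015, "high": 0.03}
for name, rate in scenarios.items():
    path = project_demand(base_twh=4000.0, growth_rate=rate, years=10)
    print(f"{name:>9}: {path[0]:.0f} TWh -> {path[-1]:.0f} TWh after 10 years")
```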
Multiple platforms are available for all sorts of energy planning analysis, each with a focus on different areas, and the range of modeling software and platforms has grown significantly in recent years. Energy planning tools can be classified as commercial, open source, educational, free, or custom tools used by governments. Potential energy solutions Electrification One potential energy option is the move to electrify all machines that currently use fossil fuels as their energy source. Electric alternatives such as electric cars, electric cooktops, and electric heat pumps are already available; these products now need to be widely implemented to electrify and decarbonize our energy use. Reducing our dependence on fossil fuels by transferring to electric machines requires that all electricity be generated from renewable sources. As of 2020, 60.3% of all energy generated in the United States came from fossil fuels, 19.7% came from nuclear energy, and 19.8% came from renewables; the United States is still heavily reliant on fossil fuels as a source of energy. For the electrification of our machines to help the efforts to decarbonize, more renewable energy sources, such as wind and solar, would have to be built. Another potential problem that comes with the use of renewable energy is energy transmission. A study conducted by Princeton University found that the locations with the highest renewable potential are in the Midwest; however, the places with the highest energy demand are coastal cities. To make effective use of the electricity coming from these renewable sources, the U.S. electric grid would have to be nationalized, and more high-voltage transmission lines would have to be built. The total amount of electricity that the grid can accommodate at any given moment would also have to increase: if more electric cars were driven, there would be a decline in gasoline demand and an increased demand for electricity, which would require the electric grid to transport more energy at any given moment than is currently viable. Nuclear Energy Nuclear energy is sometimes considered to be a clean energy source. Nuclear energy's only associated carbon emission takes place during the process of mining for uranium; the process of obtaining energy from uranium does not emit any carbon. A primary concern in using nuclear energy stems from the issue of what to do with radioactive waste. The highest-level source of radioactive waste is spent reactor fuel, whose radioactivity decreases over time through radioactive decay. The time it takes for the radioactive waste to decay depends on the length of the substance's half-life. Currently, the United States does not have a permanent disposal facility for high-level nuclear waste. Public support for increasing nuclear energy production is an important consideration when planning for sustainable energy. Nuclear energy production has a complicated past, and multiple nuclear power plants having accidents or meltdowns has tainted the reputation of nuclear energy for many. A considerable section of the public is concerned about the health and environmental impacts of a nuclear power plant melting down, believing that the risk is not worth the reward.
However, a portion of the population believes that expanding nuclear energy is necessary and that the threats of climate change far outweigh the possibility of a meltdown, especially considering the technological advancements made in recent decades. Global greenhouse gas emissions and energy production The majority of global manmade greenhouse gas emissions is derived from the energy sector, which contributes 72.0% of global emissions. Within the energy sector, producing electricity and heat is the largest contributor (31.0%), followed by transportation (15%), manufacturing (12%), agriculture (11%) and forestry (6%). There are multiple molecular compounds that fall under the classification of greenhouse gases, including carbon dioxide, methane, and nitrous oxide. Carbon dioxide is the most emitted greenhouse gas, making up 76% of global emissions. Methane is the second most emitted greenhouse gas at 16%; methane is primarily emitted by the agriculture industry. Lastly, nitrous oxide makes up 6% of global emitted greenhouse gases; agriculture and industry are its largest emitters. The challenges in the energy sector include the reliance on coal: coal production remains key to the energy mix, and global imports rely on coal to meet the growing demand for gas. Energy planning evaluates the current energy situation and estimates future changes based on industrialization patterns and resource availability. Many of the future changes and solutions depend on the global effort to move away from coal, to develop energy-efficient technology and to continue to electrify the world. See also References External links An online community for energy planners working on energy for sustainable development. A masters education on energy planning at Aalborg University in Denmark. Energy development Energy policy Climate change policy
Energy planning
[ "Environmental_science" ]
2,176
[ "Environmental social science", "Energy policy" ]
9,250,314
https://en.wikipedia.org/wiki/Selective%20adsorption
In surface science, selective adsorption is the effect in which minima associated with bound-state resonances occur in the specular intensity in atom-surface scattering. In crystal growth, selective adsorption refers to the phenomenon where adsorbing molecules attach preferentially to certain crystal faces. An example of selective adsorption can be demonstrated in the growth of Rochelle salt crystals. If copper ions are added to the solution during the growth process, some crystal faces will grow more slowly as copper apparently becomes a barrier to adsorption. However, by then adding sodium hydroxide to the solution, the preferred crystal faces will change once again. Discovery Pronounced intensity minima were first observed in 1930 by Immanuel Estermann, Otto Frisch, and Otto Stern, during a series of gas-surface interaction experiments attempting to demonstrate the wave nature of atoms and molecules. The phenomenon was explained in 1936 by John Lennard-Jones and A. F. Devonshire in terms of resonant transitions to bound surface states. Significance The selective adsorption binding energies can supply information on the gas-surface interaction potentials by yielding the vibrational energy spectrum of the gas atom bound to the surface. Starting from the 1970s, the effect has been extensively studied, both theoretically and experimentally. Energy levels measured with this technique are available for many systems. References Surface science
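The resonance condition is kinematic: the total energy E of the incident atom must equal the parallel kinetic energy in a diffraction channel G plus a (negative) bound-state energy, E = ħ²|K + G|²/2m + ε_n, where K is the parallel component of the incident wavevector. The sketch below scans incidence angles for a helium beam; the beam energy, lattice period and bound-state energy are illustrative assumptions, not measured values for any surface:

```python
import numpy as np

HBAR = 1.054571817e-34    # J*s
M_HE = 6.6464731e-27      # kg, helium-4 atom
MILLI_EV = 1.602176634e-22  # J per meV

def resonance_mismatch(E, theta, G, eps_n):
    """Residual of E = hbar^2 |K+G|^2 / 2m + eps_n, in joules."""
    k = np.sqrt(2 * M_HE * E) / HBAR   # incident wavevector magnitude
    K = k * np.sin(theta)              # component parallel to the surface
    return E - (HBAR**2 * (K + G)**2) / (2 * M_HE) - eps_n

# Illustrative numbers: 20 meV He beam, 4 angstrom period, bound level at -5 meV.
E = 20 * MILLI_EV
G = 2 * np.pi / 4e-10                  # one reciprocal-lattice vector of the surface
eps_n = -5 * MILLI_EV
thetas = np.radians(np.linspace(5, 85, 2000))
res = resonance_mismatch(E, thetas, G, eps_n)
idx = np.argmin(np.abs(res))           # angle where the condition is closest to met
print(f"resonant incidence angle ~ {np.degrees(thetas[idx]):.1f} deg")
```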
Selective adsorption
[ "Physics", "Chemistry", "Materials_science" ]
267
[ "Physical chemistry stubs", "Condensed matter physics", "Surface science" ]
9,250,392
https://en.wikipedia.org/wiki/Tonic%20%28physiology%29
Tonic in physiology refers to a physiological response that is slow and may be graded. The term is typically used in opposition to a fast response. For instance, tonic muscles are contrasted with the more typical and much faster twitch muscles, and tonic sensory nerve endings are contrasted with the much faster phasic sensory nerve endings. Tonic muscles Tonic muscles are much slower than twitch fibers in the time from stimulus to full activation, the time to full relaxation after stimulation ceases, and the maximal shortening velocity. These muscles are rare in mammals (found only in the muscles moving the eye and in the middle ear) but are common in reptiles and amphibians. Tonic sensory receptors Tonic receptors adapt slowly to a stimulus and continue to produce action potentials for its duration, thereby conveying information about how long the stimulus lasts. In contrast, phasic receptors adapt rapidly: the cell's response diminishes very quickly and then stops, providing no information about stimulus duration; instead, some phasic receptors convey information about rapid changes in stimulus intensity and rate. Examples of tonic receptors are pain receptors, the joint capsule, the muscle spindle, and the Ruffini corpuscle. See also Tonic-clonic seizure References Physiology
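The tonic/phasic contrast can be illustrated with a toy firing-rate model (a hypothetical sketch with made-up time constants, not a physiological simulation): a tonic receptor's firing rate is sustained for the duration of a stimulus, while a phasic receptor's rate decays rapidly after stimulus onset.

```python
# Toy model of receptor adaptation: firing rate during a sustained stimulus.
# Rates and time constants are illustrative, not physiological measurements.
import math

def firing_rate(t, peak_rate=100.0, tau=None):
    # tau is an adaptation time constant in seconds; None means no adaptation.
    if tau is None:
        return peak_rate                      # tonic: sustained response
    return peak_rate * math.exp(-t / tau)     # phasic: rapid decay

for t in [0.0, 0.1, 0.5, 1.0, 2.0]:           # seconds after stimulus onset
    tonic = firing_rate(t)                    # slowly adapting (idealized)
    phasic = firing_rate(t, tau=0.2)          # rapidly adapting
    print(f"t={t:.1f}s  tonic={tonic:5.1f} Hz  phasic={phasic:5.1f} Hz")
```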
Tonic (physiology)
[ "Biology" ]
260
[ "Physiology" ]
9,251,112
https://en.wikipedia.org/wiki/Giovanni%20%28meteorology%29
Giovanni is a Web interface that allows users to analyze NASA's gridded data from various satellite and surface observations. Giovanni lets researchers examine data on atmospheric chemistry, atmospheric temperature, water vapor and clouds, atmospheric aerosols, precipitation, and ocean chlorophyll and surface temperature. The primary data consist of global gridded data sets with reduced spatial resolution. Basic analytical functions performed by Giovanni are carried out by the Grid Analysis and Display System (GrADS). Giovanni is an acronym for GES-DISC Interactive Online Visualization ANd aNalysis Infrastructure. It allows access to data from multiple remote sites; supports multiple data formats, including Hierarchical Data Format (HDF), HDF-EOS, network Common Data Form (netCDF), GRIdded Binary (GRIB), and binary; and offers multiple plot types, including area plots, time series, Hovmöller diagrams, and image animations. References J. G. Acker and G. Leptoukh, Online Analysis Enhances Use of NASA Earth Science Data, EOS, January 9, 2007, vol. 88, pages 14 and 17 (the American Geophysical Union's weekly newspaper). External links Meteorological data and networks Atmospheric chemistry
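As an illustration of working with Giovanni output, here is a minimal sketch of inspecting a downloaded netCDF file with the Python xarray library; the filename and variable name are hypothetical, since actual names depend on the dataset and analysis chosen.

```python
# Minimal sketch: inspect a gridded netCDF file downloaded from Giovanni.
# Requires xarray with a netCDF backend (e.g. the netCDF4 package) installed.
import xarray as xr

ds = xr.open_dataset("giovanni_aerosol_optical_depth.nc")  # hypothetical filename
print(ds)                        # list dimensions, coordinates, and variables

aod = ds["AOD_550"]              # hypothetical variable name for this dataset
print(float(aod.mean()))         # mean value over the gridded field
```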
Giovanni (meteorology)
[ "Chemistry" ]
243
[ "nan" ]
9,251,253
https://en.wikipedia.org/wiki/Opus%20latericium
Opus latericium (Latin for "brickwork") is an ancient Roman construction technique in which course-laid brickwork is used to face a core of opus caementicium. Opus latericium was the dominant form of wall construction in the Imperial era. In the time of the architectural writer Vitruvius, the term opus latericium seems to have designated structures built using unfired mud bricks. See also References Ancient Roman construction techniques
Opus latericium
[ "Engineering" ]
93
[ "Architecture stubs", "Architecture" ]
9,251,789
https://en.wikipedia.org/wiki/Streaming%20Text%20Oriented%20Messaging%20Protocol
Simple (or Streaming) Text Oriented Message Protocol (STOMP), formerly known as TTMP, is a simple text-based protocol designed for working with message-oriented middleware (MOM). It provides an interoperable wire format that allows STOMP clients to talk with any message broker supporting the protocol. Overview The protocol is broadly similar to HTTP and works over TCP using the following commands: CONNECT, SEND, SUBSCRIBE, UNSUBSCRIBE, BEGIN, COMMIT, ABORT, ACK, NACK, and DISCONNECT. Communication between client and server is through a "frame" consisting of a number of lines. The first line contains the command, followed by headers in the form <key>: <value> (one per line), followed by a blank line and then the body content, ending in a null character. Communication between server and client is through a MESSAGE, RECEIPT, or ERROR frame with a similar format of headers and body content. Example
SEND
destination:/queue/a
content-type:text/plain

hello queue a
^@
Implementations Some message-oriented middleware products that support STOMP include Apache ActiveMQ (and Fuse Message Broker), HornetQ, Open Message Queue (OpenMQ), RabbitMQ, syslog-ng, and the Spring Framework. References Internet protocols Application layer protocols Message-oriented middleware
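To make the frame format concrete, here is a minimal sketch of a STOMP 1.2 client over a raw TCP socket in Python. The broker address, port, and header values are assumptions (brokers differ in defaults and may require authentication headers); a production client would normally use an established STOMP library instead.

```python
# Minimal STOMP 1.2 client sketch over a raw TCP socket.
# Broker host, port, and vhost below are assumptions; e.g. ActiveMQ and
# RabbitMQ commonly listen for STOMP on port 61613.
import socket

def frame(command, headers, body=""):
    # A STOMP frame: command line, header lines, blank line, body, NUL byte.
    lines = [command] + [f"{k}:{v}" for k, v in headers.items()]
    return ("\n".join(lines) + "\n\n" + body + "\x00").encode("utf-8")

sock = socket.create_connection(("localhost", 61613))  # hypothetical broker
sock.sendall(frame("CONNECT", {"accept-version": "1.2", "host": "localhost"}))
print(sock.recv(4096).decode())  # expect a CONNECTED frame from the broker

sock.sendall(frame("SEND",
                   {"destination": "/queue/a", "content-type": "text/plain"},
                   "hello queue a"))
sock.sendall(frame("DISCONNECT", {}))
sock.close()
```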
Streaming Text Oriented Messaging Protocol
[ "Technology" ]
267
[ "Computing stubs", "Computer network stubs" ]
9,252,226
https://en.wikipedia.org/wiki/Pitchers%20%28ceramic%20material%29
Pitchers are pottery that has been broken in the course of manufacture. Biscuit (unglazed) pitchers can be crushed, ground, and re-used, either as a low-percentage addition to the virgin raw materials at the same factory or elsewhere as grog. Because of the adhering glaze, glost (glazed) pitchers find less use. The crushed material can also be used in other industries as an inert filler. Archaeologists call ancient pitchers sherds (or shards); fragments bearing inscriptions are known as ostraca (singular: ostracon). References Ceramic materials
Pitchers (ceramic material)
[ "Physics", "Engineering" ]
111
[ "Materials stubs", "Materials", "Ceramic materials", "Ceramic engineering", "Matter" ]
9,252,237
https://en.wikipedia.org/wiki/P-Xylene%20%28data%20page%29
This page provides supplementary chemical data on p-xylene. Material Safety Data Sheet The handling of this chemical may require notable safety precautions. It is highly recommended that you obtain the Material Safety Data Sheet (MSDS) for this chemical from a reliable source, such as Matheson Tri-Gas, Inc., and follow its directions. Structure and properties Thermodynamic properties Vapor pressure of liquid (table data obtained from the CRC Handbook of Chemistry and Physics, 44th ed.) Distillation data Spectral data References Notes Bibliography NIST Standard Reference Database Xylene Chemical data pages cleanup
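For readers reconstructing the vapor-pressure table, a common approach is the Antoine equation, log10(P) = A − B/(C + T). The sketch below uses illustrative coefficients of the kind tabulated for p-xylene (T in °C, P in mmHg); verify them against the CRC Handbook or the NIST database before relying on them.

```python
# Sketch: estimate p-xylene vapor pressure with the Antoine equation.
# The coefficients are illustrative/assumed values, valid only within the
# fitted temperature range; check an authoritative source before use.
A, B, C = 6.990, 1453.4, 215.3   # assumed coefficients: T in deg C, P in mmHg

def vapor_pressure_mmhg(t_celsius):
    # Antoine equation: log10(P) = A - B / (C + T)
    return 10 ** (A - B / (C + t_celsius))

print(round(vapor_pressure_mmhg(25.0), 1))   # a few mmHg at room temperature
print(round(vapor_pressure_mmhg(138.0), 0))  # ~760 mmHg near the boiling point
```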
P-Xylene (data page)
[ "Chemistry" ]
116
[ "Chemical data pages", "nan" ]
9,252,911
https://en.wikipedia.org/wiki/Deviation%20%28statistics%29
In mathematics and statistics, deviation serves as a measure to quantify the disparity between an observed value of a variable and another designated value, frequently the mean of that variable. Deviations with respect to the sample mean and the population mean (or "true value") are called residuals and errors, respectively. The sign of the deviation reports the direction of that difference: the deviation is positive when the observed value exceeds the reference value. The absolute value of the deviation indicates the size or magnitude of the difference. In a given sample, there are as many deviations as sample points. Summary statistics can be derived from a set of deviations, such as the standard deviation and the mean absolute deviation (measures of dispersion) and the mean signed deviation (a measure of bias). The deviation of each data point is calculated by subtracting the mean of the data set from the individual data point. Mathematically, the deviation d of a data point x in a data set with respect to the mean m is given by the difference d = x − m. This calculation represents the "distance" of a data point from the mean and provides information about how much individual values vary from the average. Positive deviations indicate values above the mean, while negative deviations indicate values below the mean. The sum of squared deviations is a key component in the calculation of variance, another measure of the spread or dispersion of a data set; variance is calculated by averaging the squared deviations. Deviation is a fundamental concept in understanding the distribution and variability of data points in statistical analysis. Types A deviation that is the difference between an observed value and the true value of a quantity of interest (where the true value denotes the expected value, such as the population mean) is an error. Signed deviations A deviation that is the difference between the observed value and an estimate of the true value (e.g. the sample mean) is a residual. These concepts are applicable for data at the interval and ratio levels of measurement. Unsigned or absolute deviation Absolute deviation in statistics is a metric that measures the overall difference between individual data points and a central value, typically the mean or median of a dataset. It is determined by taking the absolute value of the difference between each data point and the central value and then averaging these absolute differences. The formula is Di = |xi − m(X)|, where Di is the absolute deviation, xi is the data element, and m(X) is the chosen measure of central tendency of the data set (sometimes the mean x̄, but most often the median). The average absolute deviation (AAD) in statistics is a measure of the dispersion or spread of a set of data points around a central value, usually the mean or median. It is calculated by taking the average of the absolute differences between each data point and the chosen central value. AAD provides a measure of the typical magnitude of deviations from the central value in a dataset, giving insights into the overall variability of the data, as the sketch below illustrates.
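A minimal sketch of these quantities in Python, using illustrative data:

```python
# Signed deviations from the mean, and average absolute deviation (AAD).
data = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]
mean = sum(data) / len(data)                       # m = 5.0 for this sample

deviations = [x - mean for x in data]              # d_i = x_i - m
print(deviations)                                  # signed deviations
print(sum(deviations))                             # 0.0 by construction

aad = sum(abs(d) for d in deviations) / len(data)  # average absolute deviation
print(aad)                                         # 1.5 for this sample
```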
Least absolute deviation (LAD) is a statistical method used in regression analysis to estimate the coefficients of a linear model. Unlike the more common least squares method, which minimizes the sum of squared vertical distances (residuals) between the observed and predicted values, the LAD method minimizes the sum of the absolute vertical distances. In the context of linear regression, if (x1,y1), (x2,y2), ... are the data points, and a and b are the coefficients to be estimated for the linear model y = a + bx, the least absolute deviation estimates of a and b are obtained by minimizing the sum S(a,b) = Σi |yi − (a + bxi)|. The LAD method is less sensitive to outliers than the least squares method, making it a robust regression technique in the presence of skewed or heavy-tailed residual distributions. Summary statistics Mean signed deviation For an unbiased estimator, the average of the signed deviations of the observations from the unobserved population parameter value tends to zero over an arbitrarily large number of samples. However, by construction the average of signed deviations of values from the sample mean is always zero, though the average signed deviation from another measure of central tendency, such as the sample median, need not be zero. Mean signed deviation is a statistical measure used to assess the average deviation of a set of values from a central point, usually the mean. It is calculated by taking the arithmetic mean of the signed differences between each data point and the mean of the dataset. The term "signed" indicates that the deviations are considered with their respective signs: positive deviations (above the mean) and negative deviations (below the mean) are both included in the calculation. The mean signed deviation provides a measure of the average distance and direction of data points from the mean, offering insights into the overall trend and distribution of the data. Dispersion Statistics of the distribution of deviations are used as measures of statistical dispersion. Standard deviation is a widely used measure of the spread or dispersion of a dataset. It quantifies the average amount of variation of individual data points from the mean of the dataset. Because it is built from squared deviations it has desirable mathematical properties, but it is also sensitive to extreme values, making it non-robust. Average absolute deviation is a measure of dispersion that is less influenced by extreme values. It is calculated by finding the absolute difference between each data point and the mean, summing these absolute differences, and then dividing by the number of observations. This metric provides a more robust estimate of variability than the standard deviation. Median absolute deviation is a robust statistic that employs the median, rather than the mean, to measure the spread of a dataset. It is calculated by finding the absolute difference between each data point and the median, then computing the median of these absolute differences. This makes the median absolute deviation less sensitive to outliers, offering a robust alternative to the standard deviation. Maximum absolute deviation is a straightforward measure of the maximum difference between any individual data point and the mean of the dataset. However, it is highly non-robust, as it can be disproportionately influenced by a single extreme value, and may not provide a reliable measure of dispersion for datasets containing outliers. The sketch below compares these measures on data with and without an outlier.
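A short sketch comparing these dispersion measures on illustrative data (numpy is assumed to be available): a single outlier inflates the standard deviation and especially the maximum absolute deviation, while the median absolute deviation barely moves.

```python
# Comparing dispersion measures on clean data versus data with one outlier.
import numpy as np

def summaries(x):
    x = np.asarray(x, dtype=float)
    dev = x - x.mean()
    return {
        "std":     x.std(ddof=1),                        # standard deviation
        "aad":     np.mean(np.abs(dev)),                 # average absolute deviation
        "mad":     np.median(np.abs(x - np.median(x))),  # median absolute deviation
        "max_abs": np.max(np.abs(dev)),                  # maximum absolute deviation
    }

clean   = [9.8, 10.1, 10.0, 9.9, 10.2]
spoiled = clean + [50.0]                                 # one gross outlier

for name, x in [("clean", clean), ("with outlier", spoiled)]:
    print(name, {k: round(v, 2) for k, v in summaries(x).items()})
```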
Normalization Deviations, which measure the difference between observed values and some reference point, inherently carry units corresponding to the measurement scale used. For example, if lengths are being measured, deviations are expressed in units such as meters or feet. To make deviations unitless and to facilitate comparisons across different datasets, one can nondimensionalize. One common method involves dividing deviations by a measure of scale (statistical dispersion), with the population standard deviation used for standardizing or the sample standard deviation for studentizing (e.g., the Studentized residual). Another approach to nondimensionalization focuses on scaling by location rather than dispersion. The percent deviation offers an illustration of this method: it is calculated as the difference between the observed value and the accepted value, divided by the accepted value, and then multiplied by 100%. By scaling the deviation by the accepted value, this technique expresses deviations in percentage terms, providing a clear perspective on the relative difference between the observed and accepted values. Both methods of nondimensionalization serve the purpose of making deviations comparable and interpretable beyond the specific measurement units. Examples In one example, a series of measurements of the speed of sound in a particular medium is taken. The accepted or expected value for the speed of sound in this medium, based on theoretical calculations, is 343 meters per second. During an experiment, multiple measurements are taken by different researchers. Researcher A measures the speed of sound as 340 meters per second, a deviation of −3 meters per second from the expected value. Researcher B measures the speed as 345 meters per second, a deviation of +2 meters per second. In this scientific context, deviation helps quantify how individual measurements differ from the theoretically predicted or accepted value. It provides insight into the accuracy and precision of experimental results, allowing researchers to assess the reliability of their data and potentially identify factors contributing to discrepancies. In another example, suppose a chemical reaction is expected to yield 100 grams of a specific compound based on stoichiometry, and several trials are conducted under different conditions. In Trial 1, the actual yield is measured to be 95 grams, a deviation of −5 grams from the expected yield. In Trial 2, the actual yield is measured to be 102 grams, a deviation of +2 grams. These deviations from the expected value provide valuable information about the efficiency and reproducibility of the chemical reaction under different conditions. Scientists can analyze these deviations to optimize reaction conditions, identify potential sources of error, and improve the overall yield and reliability of the process. The concept of deviation is crucial in assessing the accuracy of experimental results and in making informed decisions to improve the outcomes of scientific experiments. See also Anomaly (natural sciences) Squared deviations Deviate (statistics) Variance References Statistical deviation and dispersion Statistical distance
Deviation (statistics)
[ "Physics" ]
1,893
[ "Physical quantities", "Statistical distance", "Distance" ]
9,252,913
https://en.wikipedia.org/wiki/Red%20Barn%20Observatory
The Red Barn Observatory was established in 2006 and is dedicated to follow-up observations and detections of asteroids, comets, and near-Earth objects. Plans for the observatory began in 2002 and construction was completed in 2005. In August 2006, the Minor Planet Center assigned the observatory code H68. The observatory currently has a roll-off roof, with plans to install an 8-foot dome in the summer of 2007. It is located in Ty Ty, Georgia, USA – well away from any city light pollution and in an excellent location for regularly performing follow-up observations of near-Earth objects and potentially hazardous asteroids in the vicinity of Earth. The observatory also performs an early-evening sky survey (in the manner of the Palomar sky survey or NEAT – Near-Earth Asteroid Tracking) to search for new comets and other unknown objects low on the horizon that are easily overlooked because of their position; most amateur-discovered comets are found in this region of the sky. Future plans for the observatory include an amateur-based asteroid study program that will give the "amateur astrometrist" online access to observatory images so they can perform astrometry on all detected asteroids or comets. Established in July 2007, the Georgia Fireball Network monitors the skies for bright meteors and fireballs. There are currently two stations in the network: Station 1 at the Deer Run Observatory in Buena Vista, and Station 2 at the Red Barn Observatory in Ty Ty, Georgia. Together, the stations monitor skies over most of Georgia and parts of Florida and Alabama. Observer/Owner Steve E. Farmer Jr. See also List of astronomical observatories References External links Planetary Society NEAT IAU Sky Surveys Red Barn Observatory Clear Sky Clock for Red Barn Observatory Georgia Fireball Network 2006 establishments in Georgia (U.S. state) Astronomical observatories in Georgia (U.S. state) Buildings and structures in Tift County, Georgia Buildings and structures completed in 2005 Minor-planet discovering observatories Space telescopes Science and technology in Georgia (U.S. state) Asteroid surveys
Red Barn Observatory
[ "Astronomy" ]
451
[ "Space telescopes" ]
9,253,234
https://en.wikipedia.org/wiki/Keidel%20vacuum
The Keidel vacuum tube was a type of blood-collecting device, first manufactured by Hynson, Westcott and Dunning in around 1922. This device was one of the first evacuated blood-collection systems, predating the better-known Vacutainer. Its primary use was in testing for syphilis and typhoid fever. Process Essentially, the Keidel vacuum consists of a sealed, evacuated ampule, with or without a culture medium. Connected to the ampule was a short rubber tube with a needle at the end, capped by a small glass tube. Inserting the needle into the vein and crushing the ampule's seal releases the vacuum, drawing blood into the container. Typically a prominent vein in the forearm, such as the median cubital vein, would suffice, although the Keidel vacuum can take blood from any prominent peripheral vein. The concept did not become popular until World War II, when quick and efficient first aid was needed on the battlefield. As a result, the Vacutainer became the leading device used for blood collection. See also Phlebotomy Fingerprick References History of medicine Blood tests Hematology
Keidel vacuum
[ "Chemistry" ]
235
[ "Blood tests", "Chemical pathology" ]