Dataset columns: id (int64, 39 to 79M); url (string, 31 to 227 chars); text (string, 6 to 334k chars); source (string, 1 to 150 chars); categories (list, 1 to 6 items); token_count (int64, 3 to 71.8k); subcategories (list, 0 to 30 items)
11,917,122
https://en.wikipedia.org/wiki/Scanning%20Hall%20probe%20microscope
Scanning Hall probe microscope (SHPM) is a variety of scanning probe microscope that combines the accurate sample approach and positioning of the scanning tunnelling microscope with a semiconductor Hall sensor. Developed in 1996 by Oral, Bending and Henini, SHPM allows mapping of the magnetic induction associated with a sample. Current state-of-the-art SHPM systems use two-dimensional electron gas (2DEG) materials (e.g. GaAs/AlGaAs) to provide high spatial resolution (~300 nm) imaging with high magnetic field sensitivity. Unlike the magnetic force microscope (MFM), the SHPM provides direct quantitative information on the magnetic state of a material. The SHPM can also image magnetic induction under applied fields up to ~1 tesla and over a wide range of temperatures (millikelvins to 300 K). The SHPM can be used to image many types of magnetic structures, such as thin films, permanent magnets, MEMS structures, current-carrying traces on PCBs, permalloy disks, and recording media. Advantages over other magnetic raster scanning methods SHPM is a superior magnetic imaging technique for several reasons. Although MFM provides higher spatial resolution (~30 nm) imaging, the Hall probe exerts negligible force on the underlying magnetic structure and is noninvasive. Unlike the magnetic decoration technique, the same area can be scanned over and over again. The magnetic field produced by the Hall probe itself is so small that it has a negligible effect on the sample being measured. The sample does not need to be an electrical conductor, unless STM is used for height control. The measurement can be performed from 5 to 500 K in ultra-high vacuum (UHV) and is nondestructive to the crystal lattice or structure. Tests require no special surface preparation or coating. The detectable magnetic field range is approximately 0.1 µT to 10 T. SHPM can be combined with other scanning methods such as STM. Limitations There are some shortcomings or difficulties when working with an SHPM. High-resolution scans become difficult due to the thermal noise of extremely small Hall probes. There is a minimum scanning height due to the construction of the Hall probe (especially significant with 2DEG semiconductor probes due to their multilayer design). The scanning (lift) height affects the obtained image. Scanning large areas takes a significant amount of time. The practical scanning range is relatively short (on the order of thousands of micrometres in any direction). The housing is important for shielding against electromagnetic noise (Faraday cage), acoustic noise (anti-vibration tables), air flow (air isolation cupboard), and static charge on the sample (ionizing units). References Scanning probe microscopy
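The article does not spell out how the Hall sensor's output maps to field, but the conversion follows from the standard Hall relation V_H = I·B/(n_s·e) for a two-dimensional electron gas with sheet carrier density n_s. The sketch below is illustrative only; the sheet density and bias current are assumed typical values, not figures from the article.

```python
# Minimal sketch: convert a measured Hall voltage to magnetic induction
# using the standard Hall relation. For a 2DEG the sheet density n_s is
# the natural parameter, so B = V_H * n_s * e / I.
# The numeric values below are assumptions, not from the article.

E_CHARGE = 1.602e-19          # elementary charge, C

def field_from_hall_voltage(v_hall, bias_current, sheet_density):
    """Return magnetic induction B (tesla) seen by a 2DEG Hall cross.

    v_hall        -- measured Hall voltage, V
    bias_current  -- probe bias current, A
    sheet_density -- 2DEG sheet carrier density, m^-2 (assumed value)
    """
    return v_hall * sheet_density * E_CHARGE / bias_current

# Example with typical GaAs/AlGaAs 2DEG values (assumed for illustration):
n_s = 3e15         # sheet density, m^-2
i_bias = 10e-6     # bias current, 10 uA
print(field_from_hall_voltage(5e-6, i_bias, n_s))   # ~2.4e-4 T for 5 uV
```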
Scanning Hall probe microscope
[ "Chemistry", "Materials_science" ]
549
[ "Nanotechnology", "Scanning probe microscopy", "Microscopy" ]
11,917,317
https://en.wikipedia.org/wiki/Camp%20bed
A camp bed is a narrow, light-weight bed, often made of sturdy cloth stretched over a folding frame. The term camp bed is common in the United Kingdom, but in North America they are often referred to as cots. Camp beds are used by the military in temporary camps and in emergency situations where large numbers of people are in need of housing after disasters. They are also used for recreational purposes, such as overnight camping trips. Ancient history It is believed that King Tutankhamun, who reigned in Egypt from approximately 1332 to 1323 BC, may have had the first camping bed. When Tutankhamun's tomb was opened in 1922, a room full of furniture was found to contain a three-section camping bed that folded up into a Z shape. Though the king, who had a clubfoot, may never have taken part in long-distance explorations, the elaborate folding bed suggests he had an interest in camping and hunting. 18th- and early 19th-century history The New-York Historical Society owns a camp bed thought to have been used by General George Washington during the American Revolutionary War, including during the hard winter at Valley Forge. It is made in three sections, with each section consisting of a wood frame stretched with canvas, supported by an X-shaped wooden base with iron mounts. According to the donor, Washington gave the camp bed to his recording secretary, Richard Varick, at the close of the war. It was passed down through Varick's descendants until it was donated to the Historical Society in 1871. Napoleon Bonaparte and his high-ranking officers used camp beds with a frame of gilt copper. The bed's six legs had wheels, and its vertical poles could support a canopy. Striped twill was attached to the frame by means of hooks in the copper frame. Napoleon died in such a camp bed on 5 May 1821, on the island of Saint Helena. Gallery See also Camping chair Stretcher Wall bed References External links Video of George Washington's military camp bed Beds Portable furniture
Camp bed
[ "Biology" ]
413
[ "Beds", "Behavior", "Sleep" ]
11,917,751
https://en.wikipedia.org/wiki/Brownout%20%28electricity%29
A brownout is a drop in the magnitude of voltage in an electrical power system. Unintentional brownouts can be caused by excessive electricity demand, severe weather events, or a malfunction or error affecting electrical grid control or monitoring systems. Intentional brownouts are used for load reduction in an emergency, or to prevent a total grid power outage due to high demand. The term brownout comes from the dimming of incandescent lighting when the voltage drops. In some countries, the term brownout refers not to a drop in voltage but to an intentional or unintentional power outage (or blackout). Effects Different types of electrical apparatus will react in different ways to a voltage reduction. Some devices will be severely affected, while others may not be affected at all. Resistive loads The heat output of any resistive device, such as an electric space heater, toaster, oven, or incandescent bulb, is equal to its power consumption, which is directly proportional to the square of the applied voltage if the resistance stays constant. Therefore, a significant reduction of heat output will occur with a relatively small reduction in voltage. An incandescent lamp will dim due to lower heat generation in the filament, as well as lower conversion of heat to light. Generally speaking, no damage will occur, but functionality will be impaired. Motors Commutated electric motors, such as universal motors, will run at reduced speed or reduced torque. Depending on the motor design, no harm may occur. However, under load, the motor may draw more current due to the reduced back-EMF developed at the lower armature speed. Unless the motor has ample cooling capacity, it may eventually overheat and burn out. An induction motor will draw more current to compensate for the decreased voltage, which may lead to overheating and burnout. If a substantial part of a grid's load is electric motors, reducing voltage may not actually reduce load and can result in damage to customers' equipment. Power supplies An unregulated DC supply will produce a lower output voltage. The output voltage ripple will decrease in line with the usually reduced load current. In a cathode-ray tube television, the reduced output voltage will make the screen image smaller, dimmer and fuzzier. A linear regulated DC supply will maintain the output voltage unless the brownout is severe and the input voltage drops below the dropout voltage for the regulator, at which point the output voltage will fall and high levels of ripple from the rectifier/reservoir capacitor will appear on the output. A switched-mode power supply will be affected if the brownout voltage is lower than the minimum input voltage of the power supply. As the input voltage falls, the current draw will increase to maintain the same output voltage and current, until such a point that the power supply malfunctions or its under-voltage protection kicks in and disables the output. Digital systems Brownouts can cause unexpected behavior in systems with digital control circuits. Reduced voltages can bring control signals below the threshold at which logic circuits can reliably detect which state is being represented. As the voltage returns to normal levels, the logic can latch in an incorrect state, to the extent that even "can't happen" states become possible. The seriousness of this effect and whether steps need to be taken by the designer to prevent it depend on the nature of the equipment being controlled; for instance, a brownout may cause a motor to begin running backwards.
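The square-law dependence for resistive loads is easy to quantify. The sketch below is illustrative only; the 1500 W, 120 V heater is an assumed example, not a figure from the article.

```python
# Minimal sketch of the square-law effect on a resistive load:
# P = V^2 / R, so a sag of x% in voltage cuts heat output by roughly 2x%.
# The heater rating and supply voltage are assumed values for illustration.

def resistive_power(voltage, resistance):
    """Power dissipated by a fixed resistance at a given voltage (watts)."""
    return voltage ** 2 / resistance

nominal_v = 120.0                     # nominal supply voltage, V
rated_p = 1500.0                      # rated heater power at nominal voltage, W
r = nominal_v ** 2 / rated_p          # equivalent resistance, ohms (9.6)

for sag in (0.05, 0.10, 0.15):        # 5%, 10%, 15% brownouts
    v = nominal_v * (1 - sag)
    p = resistive_power(v, r)
    print(f"{sag:.0%} sag -> {p:.0f} W ({1 - p / rated_p:.1%} less heat)")
# e.g. a 10% voltage sag gives ~1215 W, about 19% less heat.
```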
See also Black start Dumsor Power outage Undervoltage lockout (UVLO) Voltage drop References Electrical grid Voltage stability
Brownout (electricity)
[ "Physics" ]
740
[ "Voltage", "Voltage stability", "Physical quantities" ]
11,918,162
https://en.wikipedia.org/wiki/Environmental%20engineering%20law
Environmental engineering law is a profession that requires expertise in both environmental engineering and law. The field includes professionals with both a legal and an environmental engineering education. This dual educational requirement is typically satisfied through an ABET-accredited degree in environmental engineering and an ABA-accredited law degree. Likewise, the profession requires both licensure as a professional environmental engineer and admission to at least one bar. Environmental engineering law is the professional application of law and engineering principles to improve the environment (air, water, and/or land resources), to provide healthy water, air, and land for human habitation and for other organisms, and to remediate polluted sites. Environmental engineering lawyers seek to promote the advancement of technical engineering knowledge in the legal profession and to enhance informed legal analysis of complex environmental matters. Practice areas Environmental engineering law professionals offer a sound knowledge base in the fields of both environmental engineering and law to address complex environmental problems which demand both professional technical practice and legal expertise. Areas of practice are continually expanding, but frequently include complex land transactions, such as: Brownfields redevelopment Asbestos baseline surveys and building re-evaluation ahead of asbestos abatement Soil contamination assessment and remediation, including the development of a remedial action workplan (RAWP) and engineering controls, such as an environmental land use restriction (ELUR) Total maximum daily load (TMDL) nutrient loading studies (e.g., for NPDES wastewater discharges) and regulatory negotiation of nutrient discharge limits, such as phosphorus and nitrogen limits, for waste treatment plants. See also Engineering law Environmental law Environmental agreements Environmental Engineering Science Environmental impact statement Environmental justice International environmental law References Environmental engineering Environmental law
Environmental engineering law
[ "Chemistry", "Engineering" ]
328
[ "Chemical engineering", "Civil engineering", "Environmental engineering" ]
11,919,629
https://en.wikipedia.org/wiki/Rod%20calculus
Rod calculus or rod calculation was the mechanical method of algorithmic computation with counting rods in China from the Warring States period to the Ming dynasty, before the counting rods were increasingly replaced by the more convenient and faster abacus. Rod calculus played a key role in the development of Chinese mathematics to its height in the Song dynasty and Yuan dynasty, culminating in the invention of polynomial equations of up to four unknowns in the work of Zhu Shijie. Hardware The basic equipment for carrying out rod calculus is a bundle of counting rods and a counting board. The counting rods are usually made of bamboo sticks, about 12 cm to 15 cm in length and 2 mm to 4 mm in diameter, and sometimes of animal bone, ivory, or jade (for well-heeled merchants). A counting board could be a tabletop, a wooden board with or without a grid, the floor, or sand. In 1971 Chinese archaeologists unearthed a bundle of well-preserved animal bone counting rods stored in a silk pouch from a tomb in Qianyang county in Shaanxi province, dating back to the first half of the Han dynasty (206 BC – 8 AD). In 1975 a bundle of bamboo counting rods was unearthed. The use of counting rods for rod calculus flourished in the Warring States period, although no archaeological artefacts have been found earlier than the Western Han dynasty (the first half of the Han dynasty); however, archaeologists did unearth written "software" artefacts of rod calculus dating back to the Warring States period. Since the rod calculus software must have gone along with rod calculus hardware, there is no doubt that rod calculus was already flourishing during the Warring States period more than 2,200 years ago. Software The key software required for rod calculus was a simple 45-phrase positional decimal multiplication table used in China since antiquity, called the nine-nine table, which was learned by heart by pupils, merchants, government officials and mathematicians alike. Rod numerals Displaying numbers Rod numerals are the only numeral system that uses different placement combinations of a single symbol to convey any number or fraction in the decimal system. For numbers in the units place, each vertical rod represents 1. Two vertical rods represent 2, and so on, up to 5 vertical rods, which represent 5. For numbers between 6 and 9, a biquinary system is used, in which a horizontal bar on top of the vertical bars represents 5. The first row shows the numbers 1 to 9 in rod numerals, and the second row shows the same numbers in horizontal form. For numbers larger than 9, a decimal system is used. Rods placed one place to the left of the units place represent 10 times that number. For the hundreds place, another set of rods is placed to the left, representing 100 times that number, and so on. As shown in the adjacent image, the number 231 is represented in rod numerals in the top row, with one rod in the units place representing 1, three rods in the tens place representing 30, and two rods in the hundreds place representing 200, with a sum of 231. When doing calculations, there was usually no grid on the surface, so if the rod numerals two, three, and one were placed consecutively in vertical form, the arrangement could be mistaken for 51 or 24, as shown in the second and third row of the adjacent image. To avoid confusion, numbers in consecutive places are placed in alternating vertical and horizontal form, with the units place in vertical form, as shown in the bottom row on the right.
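The display convention just described is mechanical enough to capture in a few lines of code. The sketch below is illustrative only (the function names and textual encoding are my own): it renders each digit as unit rods plus an optional five-bar and alternates orientation by place, as the article describes.

```python
# Minimal sketch of the rod-numeral display rules above (encoding invented
# for illustration): digits 1-5 are that many unit rods, 6-9 are a bar worth
# five plus unit rods, zero is a blank, and orientation alternates by place,
# with the units place vertical.

def rod_digit(d, vertical):
    """Describe one decimal digit as rods on the board."""
    if d == 0:
        return "blank"
    if d <= 5:
        fives, units = 0, d
    else:
        fives, units = 1, d - 5        # biquinary: a bar on top counts as 5
    orient = "vertical" if vertical else "horizontal"
    parts = (["one five-bar"] if fives else []) + \
            ([f"{units} unit rod(s)"] if units else [])
    return f"{' + '.join(parts)}, {orient}"

def rod_numeral(n):
    """Yield (power of ten, rod description) for n, least significant first."""
    for place, ch in enumerate(reversed(str(n))):
        yield place, rod_digit(int(ch), vertical=(place % 2 == 0))

for place, desc in rod_numeral(231):
    print(f"10^{place}: {desc}")
# 10^0: 1 unit rod(s), vertical
# 10^1: 3 unit rod(s), horizontal
# 10^2: 2 unit rod(s), vertical
```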
Displaying zeroes In rod numerals, zero is represented by a space, which serves both as a number and a place-holder. Unlike Hindu-Arabic numerals, rod numerals have no specific symbol to represent zero. Before the introduction of a written zero, in addition to a space to indicate no units, the numeral in the subsequent column would be rotated by 90° to reduce the ambiguity of a single zero. For example, 107 (𝍠 𝍧) and 17 (𝍩𝍧) would be distinguished by rotation in addition to the space, though multiple zero units could still lead to ambiguity, e.g. 1007 (𝍩 𝍧) and 10007 (𝍠 𝍧). In the adjacent image, the number zero is merely represented by a space. Negative and positive numbers Song mathematicians used red to represent positive numbers and black for negative numbers. Another way was to add a diagonal slash through the last place to show that the number was negative. Decimal fraction The Mathematical Treatise of Sunzi used decimal fraction metrology. The unit of length was 1 chi, with 1 chi = 10 cun, 1 cun = 10 fen, 1 fen = 10 li, 1 li = 10 hao, 1 hao = 10 si, 1 si = 10 hu. The length 1 chi 2 cun 3 fen 4 li 5 hao 6 si 7 hu is laid out on the counting board with the chi as the unit of measurement. The Southern Song dynasty mathematician Qin Jiushao extended the use of decimal fractions beyond metrology. In his book Mathematical Treatise in Nine Sections, he formally expressed 1.1446154 days in rod numerals, marking the units place with the word 日 (day) underneath it. Addition Rod calculus works on the principle of addition. Unlike Arabic numerals, digits represented by counting rods have additive properties. The process of addition involves mechanically moving the rods without the need to memorise an addition table. This is the biggest difference from Arabic numerals, as one cannot mechanically put 1 and 2 together to form 3, or 2 and 3 together to form 5. The adjacent image presents the steps in adding 3748 to 289: Place the augend 3748 in the first row and the addend 289 in the second. Calculate from left to right, starting with the 2 of 289: take the two rods from the bottom row and add them to the 7 on top to make 9. Next, add the 8 to the 4 in the tens place: move 2 rods from the top's 4 down to the 8 to complete 10, carry 1 forward to the 9, which becomes zero and carries to the 3 to make 4; remove the 8 from the bottom row. Finally, move one rod from the 8 on the top row to the 9 on the bottom to form 10, carrying 1 to the next rank to make the 2 rods on the top row into 3, and leaving 7 in the units place of the top row. Result: 3748 + 289 = 4037. The rods in the augend change throughout the addition, while the rods in the addend at the bottom "disappear". Subtraction Without borrowing In situations where no borrowing is needed, one only needs to take away the number of rods in the subtrahend from the minuend. The result of the calculation is the difference. The adjacent image shows the steps in subtracting 23 from 54. Borrowing In situations where borrowing is needed, such as 4231 − 789, one must use a more complicated procedure. The steps for this example are shown on the left. Place the minuend 4231 on top and the subtrahend 789 on the bottom. Calculate from left to right. Borrow 1 from the thousands place (leaving 3) for a ten in the hundreds place; 10 minus the 7 in the row below is 3, which is added to the 2 on top to form 5. The 7 on the bottom is subtracted, shown by the space. Borrow 1 from the hundreds place, which leaves 4. The 10 in the tens place minus the 8 below results in 2, which is added to the 3 above to form 5. The top row is now 3451, the bottom 9. Borrow 1 from the 5 in the tens place on top, which leaves 4.
The 1 borrowed from the tens becomes 10 in the units place; subtracting 9 leaves 1, which is added to the 1 on top to form 2. With all rods in the bottom row subtracted, the 3442 in the top row is the result of the calculation. Multiplication Sunzi Suanjing described the algorithm of multiplication in detail. On the left are the steps to calculate 38 × 76: Place the multiplicand on top and the multiplier on the bottom. Line up the units place of the multiplier with the highest place of the multiplicand. Leave room in the middle for recording. Start calculating from the highest place of the multiplicand (in the example, calculate 30 × 76, and then 8 × 76). Using the multiplication table, 3 times 7 is 21. Place 21 in rods in the middle, with the 1 aligned with the tens place of the multiplier (on top of the 7). Then, 3 times 6 equals 18; place 18 as shown in the image. With the 3 of the multiplicand fully multiplied, take its rods off. Move the multiplier one place to the right; change the 7 to horizontal form and the 6 to vertical. 8 × 7 = 56; place 56 in the second row in the middle, with the units place aligned with the digit being multiplied in the multiplier. Take the 7 out of the multiplier, since it has been multiplied. 8 × 6 = 48; the 4 added to the 6 of the last step makes 10, so carry 1 over. Take off the 8 in the units place of the multiplicand and the 6 in the units place of the multiplier. Sum the 2380 and 508 in the middle, which results in 2888: the product. Division The animation on the left shows the steps for calculating 309 ÷ 7. Place the dividend, 309, in the middle row and the divisor, 7, in the bottom row. Leave space for the top row. Move the divisor, 7, one place to the left, changing it to horizontal form. Using the Chinese multiplication table and division, 30 ÷ 7 equals 4 remainder 2. Place the quotient, 4, in the top row and the remainder, 2, in the middle row. Move the divisor one place to the right, changing it to vertical form. 29 ÷ 7 equals 4 remainder 1. Place the quotient, 4, on top, leaving the divisor in place, and place the remainder in the middle row in place of the dividend. The result: the quotient is 44 with a remainder of 1. The Sunzi algorithm for division was transmitted in its entirety to the Islamic world by al-Khwarizmi, from Indian sources, in 825 AD. Al-Khwarizmi's book was translated into Latin in the 13th century, and the Sunzi division algorithm later evolved into galley division in Europe. The division algorithm in Abu'l-Hasan al-Uqlidisi's 925 AD book Kitab al-Fusul fi al-Hisab al-Hindi and in the 11th-century Principles of Hindu Reckoning by Kushyar ibn Labban was identical to Sunzi's division algorithm. Fractions If there is a remainder in a place-value decimal rod calculus division, both the remainder and the divisor must be left in place, one on top of the other. In Liu Hui's notes to the Jiuzhang suanshu (a work of the 2nd century BCE), the number on top is called "shi" (实), while the one at the bottom is called "fa" (法). In Sunzi Suanjing, the number on top is called "zi" (子) or "fenzi" (lit., son of fraction), and the one on the bottom is called "mu" (母) or "fenmu" (lit., mother of fraction). Fenzi and fenmu are also the modern Chinese names for numerator and denominator, respectively. As shown on the right, the remainder 1 is the numerator and the divisor 7 the denominator, forming the fraction 1/7. The quotient of the division is 44 + 1/7. Liu Hui used many calculations with fractions in Haidao Suanjing.
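The division procedure above, with its remainder kept standing over the divisor as zi over mu, is straightforward to express in code. A minimal sketch of that reading (the helper name is invented for illustration):

```python
from fractions import Fraction

# Minimal sketch of rod-calculus division as described above: long division
# working from the most significant place, with any final remainder kept
# over the divisor as a "zi over mu" fraction.

def rod_divide(dividend, divisor):
    """Return (quotient, remainder), working left to right as on a board."""
    quotient = 0
    remainder = 0
    for ch in str(dividend):               # take the dividend digit by digit
        remainder = remainder * 10 + int(ch)
        q, remainder = divmod(remainder, divisor)
        quotient = quotient * 10 + q
    return quotient, remainder

q, r = rod_divide(309, 7)
print(q, r)                                # 44 1
print(q + Fraction(r, 7))                  # 309/7, i.e. 44 + 1/7
```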
This form of fraction, with the numerator on top and the denominator at the bottom without a horizontal bar in between, was transmitted to the Arab world via India in an 825 AD book of al-Khwarizmi, and was in use by the 10th-century Abu'l-Hasan al-Uqlidisi and in the 15th century in Jamshīd al-Kāshī's work "Arithmetic Key". Addition Put the two numerators 1 and 2 on the left side of the counting board, and the two denominators 3 and 5 on the right-hand side. Cross-multiply 1 with 5 and 2 with 3 to get 5 and 6, replacing the numerators with the corresponding cross products. Multiply the two denominators: 3 × 5 = 15, put at the bottom right. Add the two numerators: 5 + 6 = 11, put at the top right of the counting board. Result: 1/3 + 2/5 = 11/15. Subtraction Put down the rod numerals for the numerators 1 and 8 at the left-hand side of the counting board. Put down the rods for the denominators 5 and 9 at the right-hand side. Cross-multiply: 1 × 9 = 9 and 5 × 8 = 40, replacing the corresponding numerators. Multiply the denominators: 5 × 9 = 45, put 45 at the bottom right of the counting board, replacing the denominator 5. Subtract: 40 − 9 = 31, put at the top right. Result: 8/9 − 1/5 = 31/45. Multiplication 3 1/3 × 5 2/5 Arrange the counting rods for 3 1/3 and 5 2/5 on the counting board in the shang, shi, fa tabulation format. Multiply shang by fa and add to shi: 3 × 3 + 1 = 10; 5 × 5 + 2 = 27. Multiply shi by shi: 10 × 27 = 270. Multiply fa by fa: 3 × 5 = 15. Divide shi by fa: 270 ÷ 15 = 18. Highest common factor and fraction reduction The algorithm for finding the highest common factor of two numbers and the reduction of fractions was laid out in Jiuzhang suanshu. The highest common factor is found by successive division with remainders until the last two remainders are identical. The animation on the right illustrates the algorithm for finding the highest common factor and reducing a fraction. In this case the hcf is 25; dividing the numerator and denominator by 25 gives the reduced fraction. Interpolation The calendarist and mathematician He Chengtian (何承天) used a fraction interpolation method, called "harmonisation of the divisor of the day" (调日法), to obtain a better approximate value than the old one by iteratively adding the numerators and denominators of a "weaker" fraction to those of a "stronger" fraction. Zu Chongzhi's legendary approximation of π, 355/113, could be obtained with He Chengtian's method. System of linear equations Chapter Eight, Rectangular Arrays, of Jiuzhang suanshu provided an algorithm for solving systems of linear equations by a method of elimination: Problem 8-1: Suppose we have 3 bundles of top-quality cereal, 2 bundles of medium-quality cereal, and 1 bundle of low-quality cereal, with a total weight of 39 dou. We also have 2, 3 and 1 bundles of the respective cereals amounting to 34 dou, and 1, 2 and 3 bundles of the respective cereals totaling 26 dou. Find the quantity of top-, medium-, and low-quality cereal. In algebra, this problem can be expressed as three equations in three unknowns: 3x + 2y + z = 39, 2x + 3y + z = 34, x + 2y + 3z = 26. This problem was solved in Jiuzhang suanshu with counting rods laid out on a counting board in a tabular format similar to a 3×4 matrix. Algorithm: Multiply the centre column by the top-quality number of the right column. Repeatedly subtract the right column from the centre column until the top number of the centre column is 0. Multiply the left column by the top-row value of the right column, and repeatedly subtract the right column from the left column until the top number of the left column is 0. After applying the above elimination to the reduced centre column and the left column, the matrix is reduced to triangular shape.
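Read in modern terms, the Rectangular Arrays procedure is elimination on the columns of an augmented array, which is why it is often described as an early form of Gaussian elimination. A minimal sketch under that reading, solving problem 8-1 exactly with rational arithmetic (it does not reproduce the board layout itself):

```python
from fractions import Fraction

# Minimal sketch: problem 8-1 solved by elimination, the modern reading of
# the Rectangular Arrays column method. Rows encode
# 3x+2y+z=39, 2x+3y+z=34, x+2y+3z=26.

A = [[Fraction(3), Fraction(2), Fraction(1), Fraction(39)],
     [Fraction(2), Fraction(3), Fraction(1), Fraction(34)],
     [Fraction(1), Fraction(2), Fraction(3), Fraction(26)]]

n = 3
for col in range(n):                       # forward elimination
    for row in range(col + 1, n):
        factor = A[row][col] / A[col][col]
        A[row] = [a - factor * b for a, b in zip(A[row], A[col])]

x = [Fraction(0)] * n                      # back substitution
for row in range(n - 1, -1, -1):
    s = A[row][n] - sum(A[row][c] * x[c] for c in range(row + 1, n))
    x[row] = s / A[row][row]

print(x)    # [Fraction(37, 4), Fraction(17, 4), Fraction(11, 4)]
            # i.e. 9 1/4, 4 1/4 and 2 3/4 dou
```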
Back substitution gives the amount of one bundle of low-quality cereal as 2 3/4 dou, from which the amounts of one bundle of top- and medium-quality cereal are easily found: one bundle of top-quality cereal = 9 1/4 dou, and one bundle of medium-quality cereal = 4 1/4 dou. Extraction of square root The algorithm for the extraction of square roots was described in Jiuzhang suanshu and, with minor differences in terminology, in Sunzi Suanjing. The animation shows the rod calculus algorithm for extracting an approximation of the square root, from chapter 2, problem 19, of Sunzi Suanjing: "Now there is a square area of 234567; find one side of the square." The algorithm is as follows: Set up 234567 on the counting board, in the second row from the top, named shi. Set up a marker 1 at the 10000 position in the 4th row, named xia fa. Estimate the first digit of the square root to be the counting rod numeral 4, and put it on the top row (shang) in the hundreds position. Multiply the shang 4 by the xia fa 1 and put the product 4 in the 3rd row, named fang fa. Multiply shang by fang fa and deduct the product 4 × 4 = 16 from shi: 23 − 16 = 7, leaving numeral 7. Double the fang fa 4 to become 8, shift it one position right, and change the vertical 8 into a horizontal 8 after moving it. Move xia fa two positions right. Estimate the second digit of shang as 8: put numeral 8 in the tens position on the top row. Multiply xia fa by the new digit of shang and add to fang fa; eight eights are 64, so subtract 64 from the top-row numeral 74, leaving one rod at the most significant digit. Double the last digit of fang fa (the 8 becomes 16) and add it to the 80, giving 96. Move fang fa 96 one position right, changing convention, and move the xia fa 1 two positions right. Estimate the 3rd digit of shang to be 4. Multiply the new digit of shang, 4, by xia fa 1 and combine with fang fa to make 964. Subtract successively 4 × 9 = 36, 4 × 6 = 24, 4 × 4 = 16 from the shi, leaving 311. Double the last digit, 4, of the fang fa into 8 and merge it with the fang fa; the result is a square root of 484 with remainder 311. The Northern Song dynasty mathematician Jia Xian developed an additive multiplicative algorithm for square root extraction, in which he replaced the traditional "doubling" of fang fa by adding the shang digit to the fang fa digit, with the same effect. Extraction of cube root Jiuzhang suanshu vol. iv, "shaoguang", provided an algorithm for the extraction of cube roots. Problem 19: "We have a volume of 1860867 cubic chi; what is the length of a side?" Answer: 123 chi. Jia Xian invented a method similar to a simplified form of the Horner scheme for the extraction of cube roots. The animation at right shows Jia Xian's algorithm for solving problem 19 in Jiuzhang suanshu vol. 4. Polynomial equation Jia Xian also invented a Horner scheme for solving simple 4th-order equations. The Southern Song dynasty mathematician Qin Jiushao improved Jia Xian's Horner method to solve polynomial equations of up to 10th order. The following is the algorithm for solving a polynomial equation in his Mathematical Treatise in Nine Sections, vol. 6, problem 2; the equation was arranged bottom-up with counting rods on the counting board in tabular form. Algorithm: Arrange the coefficients in tabular form, with the constant at shi, the coefficient of x at shang lian, and the coefficient of the highest-order term at yi yu; align the numbers at the unit rank.
Advance shang lian two ranks and yi yu three ranks. Estimate shang = 20. Let xia lian = shang × yi yu. Let fu lian = shang × yi yu and merge fu lian with shang lian. Let fang = shang × shang lian. Subtract shang × fang from shi. Add shang × yi yu to xia lian. Retract xia lian 3 ranks and yi yu 4 ranks. The second digit of shang is 0. Merge shang lian into fang and yi yu into xia lian. Add yi yu to fu lian and subtract the result from fang; let the result be the denominator. Find the highest common factor, 25, and simplify the fraction to obtain the solution. Tian yuan shu The Yuan dynasty mathematician Li Zhi developed rod calculus into tian yuan shu. Example: Li Zhi, Ceyuan haijing, vol. II, problem 14, an equation of one unknown, with the character 元 marking the unknown in the rod layout. Polynomial equations of four unknowns The mathematician Zhu Shijie further developed rod calculus to include polynomial equations of two to four unknowns; for example, polynomials of three unknowns are laid out as equation 1, equation 2, and equation 3, with the character 太 marking the constant term. After successive elimination of two unknowns, the polynomial equations of three unknowns were reduced to a polynomial equation of one unknown, solved to give x = 5, ignoring three other roots, two of which are repeated. See also Chinese mathematics Counting rods References Lam Lay Yong (蓝丽蓉) and Ang Tian Se (洪天赐), Fleeting Footsteps, World Scientific. Jean-Claude Martzloff, A History of Chinese Mathematics. Chinese mathematics Mathematical tools Science and technology in China
Rod calculus
[ "Mathematics", "Technology" ]
4,275
[ "Applied mathematics", "Mathematical tools", "History of computing", "nan" ]
11,920,038
https://en.wikipedia.org/wiki/Vitascan
Vitascan (sometimes alternately spelled VitaScan) was an early color television camera system developed by the American television equipment manufacturer DuMont Laboratories. Development began in 1949 and the product was released on an experimental basis in 1956. Vitascan was fully compatible with the NTSC color system, and DuMont Labs hoped the system would catch on in the television industry. However, Vitascan cameras worked only indoors, because Vitascan was in essence a flying-spot scanner system. The system's camera essentially worked in reverse: instead of a pickup tube as in conventional television cameras, a cathode-ray tube (CRT) mounted behind the lens projected light through the camera's lens onto the subject, providing the "flying spot". Four photomultiplier tubes (two for red, one for green, and one for blue), mounted inside special "scoops" placed in the studio and pointed at the subject, would pick up the light reflected from the subject and produce the final image to be televised. Normally, with any flying-spot scanned system, the area between the flying-spot CRT and the photomultiplier tubes (the whole studio in Vitascan's case) would have to be completely darkened, to prevent any light other than the flying spot from the CRT from interfering with the photomultiplier tubes. Darkening the whole room would make things quite inconvenient for any talent present in a Vitascan studio, so to get around this, strobe lighting was used in the studio for the aid of the talent. The strobe light, referred to as a "sync-lite" by DuMont, would light up only during the vertical blanking intervals of the video the photomultiplier scoops generated, to prevent any light interference with the photomultiplier tubes. Because of this, the system could not be used outdoors, as sunlight would interfere during the scanning phase. From 1956 to 1959, Vitascan cameras were in use at the independent television station WITI in Milwaukee, Wisconsin, for its local TV news programs. However, the limitations of the cameras eventually caused WITI to return to monochrome cameras. The television industry never adopted Vitascan, and television stations continued to operate mostly in black-and-white for many more years. Vitascan, like earlier DuMont technologies such as the Electronicam, failed to catch on. External links Early Television entry for Vitascan DuMont Television Network Cameras by type Television technology
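The sync-lite scheme effectively confines studio lighting to the vertical blanking fraction of each field. As a rough back-of-envelope illustration (the NTSC timing figures are general knowledge, not from the article, and the blanked-line count is approximate):

```python
# Rough sketch of the sync-lite duty cycle: the strobe may only fire during
# vertical blanking, so studio light is limited to that fraction of each
# field. NTSC figures are general knowledge, not from the article; the
# exact number of blanked lines varied in practice.

FIELD_RATE_HZ = 60            # NTSC fields per second (approx.)
LINES_PER_FIELD = 262.5       # total scan lines per field
BLANKED_LINES = 21            # lines in the vertical blanking interval (approx.)

field_period_ms = 1000 / FIELD_RATE_HZ
blanking_fraction = BLANKED_LINES / LINES_PER_FIELD
strobe_window_ms = field_period_ms * blanking_fraction

print(f"Strobe window: {strobe_window_ms:.2f} ms per {field_period_ms:.2f} ms "
      f"field ({blanking_fraction:.0%} duty cycle)")
# -> roughly 1.33 ms of light per 16.67 ms field, an 8% duty cycle,
#    which is why the studio otherwise had to stay dark.
```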
Vitascan
[ "Technology" ]
514
[ "Information and communications technology", "Television technology" ]
11,920,671
https://en.wikipedia.org/wiki/Machine%20perception
Machine perception is the capability of a computer system to interpret data in a manner that is similar to the way humans use their senses to relate to the world around them. The basic way computers take in and respond to their environment is through attached hardware. Until recently, input was limited to a keyboard or a mouse, but advances in technology, both in hardware and software, have allowed computers to take in sensory input in a way similar to humans. Machine perception allows the computer to use this sensory input, as well as conventional computational means of gathering information, to gather information with greater accuracy and to present it in a way that is more comfortable for the user. These capabilities include computer vision, machine hearing, machine touch, and machine smelling, since artificial scents are indiscernible from natural ones at the chemical, molecular, and atomic level. The end goal of machine perception is to give machines the ability to see, feel and perceive the world as humans do, and therefore to be able to explain in a human way why they are making their decisions, to warn us when they are failing and, more importantly, why they are failing. This purpose is very similar to the proposed purposes for artificial intelligence generally, except that machine perception would only grant machines limited sentience, rather than bestow upon machines full consciousness, self-awareness, and intentionality. Machine vision Computer vision is a field that includes methods for acquiring, processing, analyzing, and understanding images and high-dimensional data from the real world to produce numerical or symbolic information, e.g., in the form of decisions. Computer vision has many applications already in use today, such as facial recognition, geographical modeling, and even aesthetic judgment. However, machines still struggle to interpret visual input accurately if it is blurry or if the viewpoint at which stimuli are viewed varies often. Computers also struggle to determine the proper nature of a stimulus if it is overlapped by or seamlessly touching another stimulus; this relates to the principle of good continuation. Machines also struggle to perceive and record stimuli functioning according to the apparent movement principle, a field of research in Gestalt psychology. Machine hearing Machine hearing, also known as machine listening or computer audition, is the ability of a computer or machine to take in and process sound data such as speech or music. This area has a wide range of applications, including music recording and compression, speech synthesis, and speech recognition. Moreover, this technology allows the machine to replicate the human brain's ability to selectively focus on a specific sound against many other competing sounds and background noise; this ability is called "auditory scene analysis". The technology enables the machine to segment several streams occurring at the same time. Many commonly used devices, such as smartphones, voice translators, and cars, make use of some form of machine hearing. Present technology still has challenges in speech segmentation: it is occasionally unable to correctly split words within sentences, especially when they are spoken with an atypical accent. Machine touch Machine touch is an area of machine perception where tactile information is processed by a machine or computer.
Applications include tactile perception of surface properties and dexterity, whereby tactile information can enable intelligent reflexes and interaction with the environment. Though this could conceivably be done by measuring when and where friction occurs, as well as the nature and intensity of the friction, machines still have no way of measuring some ordinary physical human experiences, including physical pain. For example, scientists have yet to invent a mechanical substitute for the nociceptors in the body and brain that are responsible for noticing and measuring physical human discomfort and suffering. Machine olfaction Scientists are developing machine olfaction: computers that can recognize and measure smells as well. Airborne chemicals are sensed and classified with a device sometimes known as an electronic nose. Machine taste Future Other than those listed above, some of the future hurdles that the science of machine perception still has to overcome include, but are not limited to: - Embodied cognition - the theory that cognition is a full-body experience, and therefore can only exist, and be measured and analyzed, in fullness if all required human abilities and processes are working together through a mutually aware and supportive systems network. - Moravec's paradox - The principle of similarity - the ability young children develop to determine what family a newly introduced stimulus falls under, even when that stimulus is different from the members with which the child usually associates that family. (An example could be a child figuring out that a chihuahua is a dog and house pet rather than vermin.) - Unconscious inference - the natural human behavior of determining whether a new stimulus is dangerous, what it is, and then how to relate to it, without ever requiring any new conscious effort. - The innate human ability to follow the likelihood principle in order to learn from circumstances and others over time. - The recognition-by-components theory - being able to mentally analyze and break even complicated mechanisms into manageable parts to interact with. For example: a person seeing both the cup and the handle parts that make up a mug full of hot cocoa, in order to use the handle to hold the mug so as to avoid being burned. - The free energy principle - determining long beforehand how much energy one can safely delegate to being aware of things outside oneself without losing the energy one requires for sustaining one's life and functioning satisfactorily. This allows one to become optimally aware of the surrounding world without depleting one's energy so much as to experience damaging stress, decision fatigue, and/or exhaustion. See also Robotic sensing Sensors SLAM History of artificial intelligence References Artificial intelligence Artificial intelligence engineering
Machine perception
[ "Engineering" ]
1,154
[ "Artificial intelligence engineering", "Software engineering" ]
11,920,944
https://en.wikipedia.org/wiki/Tympanum%20%28architecture%29
A tympanum (plural: tympana; from Greek and Latin words meaning "drum") is the semi-circular or triangular decorative wall surface over an entrance, door or window, which is bounded by a lintel and an arch. It often contains pedimental sculpture or other imagery or ornaments. Many architectural styles include this element, although it is most commonly associated with Romanesque and Gothic architecture. Alternatively, the tympanum may hold an inscription, or in modern times, a clock face. Tympanums in antiquity and the Early Middle Ages Tympanums are by definition the surfaces enclosed by a pediment; however, the evolution of tympanums gives them more specific implications. Pediments first emerged early in classical Greece, around 700-480 BCE, with early examples such as the Parthenon remaining famous to this day. Pediments spread across the Hellenistic world with the rest of classical architecture. The fields above the entablature at the time were sometimes blank but often contained statues of the gods and representations of geographic features. There are countless stories and messages in these compositions; however, the symbolism remained closely related to the philosophy and democracy associated with the classical Greek city-states. These themes continued when the Romans spread the style further into Europe, giving the pediment an aspect of authoritarian symbolism in provinces captured by conquest. Originally serving as the end of a gabled roof, the form of the pediment was greatly adjusted in later imperial Rome. Pediments started being placed above any doorway, and curved shapes were introduced in place of triangular ones, ignoring structural value and instead using the now-abstracted form purely for decoration. After the collapse of the Roman Empire, regions with significant classical architecture quickly adopted and transformed these features. In France, examples such as the Baptistry of St. Jean at Poitiers, created in the 6th through 7th century CE, defined Merovingian architecture. The form became even more abstracted in this period, replacing sculptures with geometric engravings and mosaics, and using small alternating curved and triangular pediments above windows on churches such as St. Generoux from the 9th or 10th century. This transformation continued throughout the later parts of the early Middle Ages, gradually shifting into the large circular stained glass windows of the Gothic era known as rose windows. While tympanums are inspired by the shape and placement of pediments, classical pediments more closely transformed into rose windows than into tympanums; when pedimented shapes reappeared over Gothic and Romanesque portals, the inspiration can be traced in other directions. According to the Gospel of Luke, above Jesus on the cross was written "this is the king of the Jews" to mock his powerlessness. This inspired buildings as early as the Arch of Constantine and Old Saint Peter's Basilica, both of which featured an engraving of Christ with a poem inscribed in the second person, an essential feature of later tympanum inscriptions. Early reliquaries and pilgrimage churches employed this convention, such as the Shrine of Saint Martin at Tours, which in 558 installed engravings of the life of Christ and the church's patron saint (Saint Martin). These engravings were situated directly above the main entrances and had poems inscribed directing visitors on how they should feel entering the church.
This convention was quickly replicated in Carolingian-era churches such as the Abbey of Saint Gall in Switzerland, completing the form of the tympanum. Romanesque tympanums The Romanesque era (1000–1200) saw massive change in church architecture. Pilgrimage required churches to rethink layouts and symbolism, and the ever-rising Benedictine Order changed rules on how churches should operate and appear. Architecturally, the Romanesque era saw an increased appreciation for classical forms, coupled with an increase in church construction related to several factors, including political turmoil and thanking God for not ending the world in the year 1000. Tympanums are one of the most prominent features of Romanesque architecture, originating in this time and replicated in Christian architecture ever since. France The tympanum above the west portal of the Sainte-Foy church in Conques is one of the most iconic tympanums; carved in the early 1100s, it is emblematic of the style, purpose and culture of Romanesque tympanums. This tympanum depicts the Last Judgement, which was the subject of a large portion of tympanums; however, the Conques tympanum is far more detailed in its figures and scenes, in a way reminiscent of Roman reliefs. This work was meant to be horrifying to the people who passed under it: on the right, demons torture the souls of the damned, sinners are fed to grotesque monsters, and people are crammed into small spaces as they await their judgment. Contrasting this is Christ in the middle and the saved souls on the left, serving as a reminder for pilgrims of why they made their journey. The imagery on this tympanum is primarily meant to remind the viewer of the power of God's judgement, one of many ways in which tympanums from the era mentally prepared pilgrims for the experience of the church. There are many more subtle messages, however, such as encouraging donations by depicting a miser character being damned, and even commenting on politics by showing Charlemagne bowing his head. The Coucy donjon tympanum was carved between 1225 and 1230 and is evidence that tympanums were used in secular settings as well. The large tympanum was positioned above the door to the largest tower, as a way to convey a message to anyone entering the building. The message of this particular tympanum is relatively unknown; it features a figure, likely from Coucy family history, fighting a lion. Lions had many symbolic meanings in the Romanesque era, and this one is likely a reference to a king or an event from the Crusades. Despite the secular theme, it has a very similar style, form, and purpose to the many tympanums on nearby churches, retaining the shape and inscription and showing a large central character with classically inspired detail, movement and emotion. Spain Tympanums are also prominent in Spanish architecture, appearing on the pilgrimage churches that spread southwest through the Reconquista. Santiago de Compostela was one of the most prominent pilgrimage churches and features a tympanum over both of its portals, with the archetypical deep carving and emotional display. Many other examples appeared throughout the Iberian peninsula, starting with the church in Jaca, which in the 1090s was carved with one of the first archetypical Romanesque tympanums in Europe. Spanish architecture from the Reconquista era is defined by the combination of Christian and Arabian styles, and tympanums were no exception to this.
Many of the sculptors for Spanish Romanesque churches were Moorish and adapted Arabian forms and styles into tympanums, resulting in brighter reds and nature-like geometric patterns. These sculptors continued their work throughout the continent, spreading Arabian influences as far as Le Puy and Conques. Eastern Europe Tympanums were an essential part of Christian architecture in this time, and thus were common in the highly religious Byzantine Empire. The Hagia Sophia has several tympanums, carved either when the church was finished in the 500s or during renovations in the 800s and 1200s. While this seems to challenge the development of tympanums explained in the first section of this article, these late-antiquity tympanums were an evolutionary precursor to Romanesque tympanums. The major differences are that early Byzantine tympanums are all mosaics in the Byzantine style, are all inside their churches, and very few are above doors. Despite this, there are still notable similarities, namely the half-moon shape and a large central image of Christ or an important saint. Coupled with their inscriptions, these early tympanums would have had the same purpose and message as later tympanums, which used this convention and changed the medium to integrate them more with other features and emphasize their message. While many of the distinct changes in tympanum style happened in France and Spain in the 1000s, we find the Romanesque style all across the Christian world. The church at Jvari, Georgia, built in the 600s, was significant throughout the Middle Ages, sitting at a hotspot for war and pilgrimage. The tympanums, at the time of their carving, served to align Jvari with western conventions while using the imagery to support their political struggle. This was quickly copied throughout the Caucasus and further afield; for example, Mren church in Armenia has a typical tympanum layout and common Christian figures, but these figures are carved in a Persian style with Persian clothes, showing regional stylistic differences. Gothic tympanums Despite being most heavily associated with the Romanesque era, tympanums are still used to this day. Gothic architecture heavily featured tympanums, taking influence from Romanesque examples and adapting them to better match the Gothic style. Gothic architecture and decoration are known for their ostentatious detail, and tympanums were no exception, becoming more decorative through deep carvings and intricate archivolts. Another important change in Gothic tympanums is the loss of the inscription, making more room for decoration and reflecting changing ideas about how people were meant to view churches. France While France is often credited with inventing the Gothic style, by the 1200s tympanums had already spread throughout Europe. There was still significant innovation in French early Gothic tympanums, much of which can be credited to the sculptor Gislebertus. Gislebertus worked on several churches between France and Italy and applied many similar features across them. The Autun Cathedral is an excellent example, emphasizing thinness and decoration in everything from the towers to the walls to the tympanum. Also common in Gislebertus's work, the Autun tympanum has a very narrow inscription below it, and while this inscription is still very emphasized, it foreshadows the complete removal of the inscription. The Notre Dame Cathedral is one of the most iconic examples of French Romanesque-Gothic architecture, and has several tympanums.
The original 1163 reliefs are typical Romanesque tympanums in form and style, featuring common characters such as Christ, Mary, and a select couple of saints. In the late 1300s there were significant renovations, which sought to make the cathedral match contemporary Gothic styles by widening, restoring and replacing many of the tympanums. The image on the right shows many of the features common to Gothic tympanums: the shape and large central figure surrounded by smaller characters are retained, but much more space is given to intricate detail, manifested in the archivolts, caryatids, and relief. Spain While the Reconquista was unconducive to large building projects, as Iberia became more peaceful in the early Gothic era, the lingering Arabian influences led to many unique architectural developments. The Cathedral of Barcelona, from the early 1400s, is similar to Moorish buildings in almost everything about its plan, both on the interior and exterior; namely, the cathedral is filled with evenly distributed columns over an open space, which was one of the most common elements of Moorish architecture. Much of it is still Christian, however, and the tympanum was an effective way to display messages on a medium associated with Christian culture. Saint Hermandad in Toledo is another 1400s Gothic cathedral that also shows the influence of Moorish culture. Its tympanums are carved with both religious and secular images, showing the head of Christ on one side and an eagle on the other. This shows how, even without the inscription, Gothic tympanums still served to set a mindset for people entering, reminding the (religiously diverse, in the case of Spain) population of the importance of Christianity and of the royals who justified their power through God. Italy While there are many consistent features of Gothic architecture, regional differences were strong throughout Europe. Similarly to Spain, Italian tympanums show influence from local culture, keeping features that were already common in Italian architecture and adding new developments. The Scuola Vecchia was a confraternity meeting house built in Venice around 1445–50. What makes it particularly famous is the tympanum over the main door, which has been "a prominent feature in the Venetian landscape for over five hundred years", despite being moved five times. This building is very similar to many other Italian churches, such as the nearby Madonna dell'Orto and the church of S. Maria Gloriosa dei Frari, both of which feature especially pointed tympanums with columns on either side and another column or caryatid above. The feature most associated with Italian Gothic tympanums, however, is that the inscription is retained, which we can see on all of the tympanums mentioned in this paragraph. The tradition of putting inscriptions on curved pediments originated in Italy in late-antiquity constructions, such as Old Saint Peter's Basilica. While these were an inspiration for the archetypical tympanum, Italians kept many of their architectural traditions consistent, retaining features such as the pointed shape and the inscription. This by no means implies that there was no outside influence: Italian tympanums feature the same style and purpose as other examples, using large, deeply carved central figures of Christ and an important contemporary person to remind people of the importance of these figures as they enter the church.
Gallery See also Lunette: semi-circular tympanum Church architecture Gable Pediment Portal Citations External links Sculpted tympanums Chartres Cathedral, West Front, Central Portal Tympanum of the last Judgment - western portal of the abbey-church of Saint Foy Arches and vaults Architectural elements
Tympanum (architecture)
[ "Technology", "Engineering" ]
2,775
[ "Building engineering", "Architectural elements", "Components", "Architecture" ]
11,921,616
https://en.wikipedia.org/wiki/Virtual%20Link%20Aggregation%20Control%20Protocol
Virtual LACP (VLACP) is an Avaya extension of the Link Aggregation Control Protocol that provides a Layer 2 handshaking protocol able to detect end-to-end failure between two physical Ethernet interfaces. It allows the switch to detect unidirectional or bidirectional link failures irrespective of intermediary devices, and enables link recovery in less than one second. With VLACP, far-end failures can be detected, which allows a link aggregation trunk to fail over properly when end-to-end connectivity is not guaranteed for certain links in an aggregation group, such as links carried across the internet. When a remote link failure is detected, the change is propagated to the partner port. See also MLT SMLT RSMLT External links Virtual Link Aggregation Control Protocol (VLACP), retrieved 29 July 2011 ERS-8600 All Documentation, retrieved 29 July 2011 VSP 7000 Command Line Interface Commands Reference, Command: vlacp, retrieved 1 May 2020 Avaya Ethernet Link protocols Network protocols Network topology Nortel protocols
Virtual Link Aggregation Control Protocol
[ "Mathematics", "Technology" ]
212
[ "Network topology", "Computing stubs", "Topology", "Computer network stubs" ]
11,921,686
https://en.wikipedia.org/wiki/Operating%20system%20Wi-Fi%20support
Operating system Wi-Fi support is defined as the facilities an operating system may include for Wi-Fi networking. It usually consists of two pieces of software: device drivers, and applications for configuration and management. Driver support is typically provided by chipset manufacturers or end-product manufacturers; for Unix-like systems such as Linux, drivers are sometimes also developed through open-source projects. Configuration and management support consists of software to enumerate, join, and check the status of available Wi-Fi networks, including support for various encryption methods. These facilities are often provided by the operating system, backed by a standard driver model. In most cases, drivers emulate an Ethernet device and use the configuration and management utilities built into the operating system. In cases where built-in configuration and management support is non-existent or inadequate, hardware manufacturers may include software to handle those tasks. Microsoft Windows Microsoft Windows has comprehensive driver-level support for Wi-Fi, the quality of which depends on the hardware manufacturer. Hardware manufacturers almost always ship Windows drivers with their products. Windows ships with very few Wi-Fi drivers and depends on the original equipment manufacturers (OEMs) and device manufacturers to make sure users get drivers. Configuration and management depend on the version of Windows. Earlier versions of Windows, such as 98, ME, and 2000, do not have built-in configuration and management support and must depend on software provided by the manufacturer. Microsoft Windows XP has built-in configuration and management support: the original shipping version included rudimentary support, which was dramatically improved in Service Pack 2, while support for WPA2 and some other security protocols requires updates from Microsoft. Many hardware manufacturers include their own software and require the user to disable Windows' built-in Wi-Fi support. Windows Vista, Windows 7, Windows 8, and Windows 10 have improved Wi-Fi support over Windows XP, with a better interface and a suggestion to connect to a public Wi-Fi network when no other connection is available. macOS and Classic Mac OS Apple was an early adopter of Wi-Fi, introducing its AirPort product line, based on the 802.11b standard, in July 1999. Apple later introduced AirPort Extreme, an implementation of 802.11g. All Apple computers, starting with the original iBook in 1999, either included AirPort 802.11 networking or were designed specifically to provide 802.11 networking with only the addition of an internal AirPort Card (or, later, an AirPort Extreme Card) connecting to the computer's built-in antennae. In late 2006, Apple began shipping Macs with Broadcom Wi-Fi chips that also supported the draft 802.11n standard, but this capability was disabled, and Apple did not claim or advertise the hardware's capability until some time later, when the draft had progressed further. At the January 2007 Macworld Expo, Apple announced that their computers would begin shipping with draft 802.11n support. Apple produces the operating system, computer hardware, accompanying drivers, AirPort Wi-Fi base stations, and configuration and management software. The built-in configuration and management are integrated throughout many of the operating system's applications and utilities. Mac OS X has Wi-Fi support, including WPA2, and ships with drivers for all of Apple's current and past AirPort Extreme and AirPort cards.
MacOS also supports extending this functionality through external third-party hardware. Mac OS 9 supported AirPort and AirPort Extreme as well, and drivers exist for other equipment from other manufacturers, providing Wi-Fi options for earlier systems not designed for AirPort cards. Versions of Mac OS before Mac OS 9 predate Wi-Fi and do not have any Wi-Fi support, although some third-party hardware manufacturers have made drivers and connection software that allow earlier versions to use Wi-Fi. Open-source Unix-like systems Linux, FreeBSD and similar Unix-like clones have much more varied support for Wi-Fi. Due to the open source nature of these operating systems, many different standards have been developed for configuring and managing Wi-Fi devices. The open source nature also fosters open source drivers, which have enabled many third-party and proprietary devices to work under these operating systems. See Comparison of Open Source Wireless Drivers for more information on those drivers. In Linux, Wi-Fi support is an optional kernel feature rather than a requirement. This is especially true for older kernel versions, such as the 2.6 series, which is still widely used by enterprise distributions. Native drivers for many Wi-Fi chipsets are available either commercially or at no cost, although some manufacturers don't produce a Linux driver, only a Windows one. Consequently, many popular chipsets either don't have a native Linux driver at all, or only have a half-finished one. For these, the freely available NdisWrapper and its commercial competitor DriverLoader allow Windows NDIS drivers (both x86 and 64-bit variants) to be used on x86-based Linux systems, with x86_64 support available as of January 6, 2005. As well as the lack of native drivers, some Linux distributions do not offer a convenient user interface, and configuring Wi-Fi on them can be a clumsy and complicated operation compared to configuring wired Ethernet drivers. This is changing with the adoption of utilities such as NetworkManager and wicd that allow users to automatically switch between networks without root access or command-line invocation of the traditional wireless tools. Some distributions, such as Ubuntu, include a large number of preinstalled drivers. FreeBSD has Wi-Fi support similar to Linux. FreeBSD 7.0 introduced full support for WPA and WPA2, although in some cases this is driver dependent. FreeBSD comes with drivers for many wireless cards and chipsets, including those made by Atheros, Intel (Centrino), Ralink, Cisco, D-Link, and Netgear, and provides support for others through the ports collection. FreeBSD also has "Project Evil", which provides the ability to use Windows x86 NDIS drivers on x86-based FreeBSD systems as NdisWrapper does on Linux, and Windows amd64 NDIS drivers on amd64-based systems. NetBSD, OpenBSD, and DragonFly BSD have Wi-Fi support similar to FreeBSD. Code for some of the drivers, as well as the kernel framework to support them, is mostly shared among the four BSDs. Haiku has had preliminary Wi-Fi support since September 2009. Solaris and OpenSolaris have the Wireless Networking Project to provide Wi-Fi drivers and support. Android has built-in support for Wi-Fi, which is preferred over mobile telephony networks. Unison OS has built-in support for embedded Wi-Fi on a broad set of modules, with Wi-Fi preferred over mobile telephony networks (which also have off-the-shelf support); mixed Wi-Fi and Bluetooth support for embedded systems is also provided.
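On systems that use NetworkManager, the enumeration and joining tasks described above can also be scripted against its command-line front end. The following is a small, hypothetical sketch in Python: it shells out to nmcli (assuming a Linux system with NetworkManager running), the field parsing is simplified, and error handling is deliberately minimal.

```python
# Hypothetical sketch: list nearby Wi-Fi networks via NetworkManager's nmcli.
# Assumes a Linux system with NetworkManager installed; not portable elsewhere.
import subprocess

def list_wifi_networks():
    """Return (ssid, signal) pairs from `nmcli device wifi list`, strongest first."""
    out = subprocess.run(
        ["nmcli", "-t", "-f", "SSID,SIGNAL", "device", "wifi", "list"],
        capture_output=True, text=True, check=True,
    ).stdout
    networks = []
    for line in out.splitlines():
        ssid, _, signal = line.rpartition(":")  # SIGNAL is the last field
        if ssid and signal.isdigit():
            networks.append((ssid, int(signal)))
    return sorted(networks, key=lambda pair: -pair[1])

for ssid, signal in list_wifi_networks():
    print(f"{signal:3d}%  {ssid}")
```

Joining a network is a single further call (nmcli device wifi connect <ssid> password <passphrase>), which is essentially what the graphical front ends invoke on the user's behalf.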
See also List of WLAN channels Wireless access point References External links Wi-Fi Alliance IEEE 802.11 Computer networking Wi-Fi
Operating system Wi-Fi support
[ "Technology", "Engineering" ]
1,445
[ "Computer networking", "Computer engineering", "Wireless networking", "Wi-Fi", "Computer science" ]
11,922,647
https://en.wikipedia.org/wiki/Graphis%20Inc.
Graphis Inc. is an international publisher of books and awards for the visual communications industry. Based in New York City, Graphis presents and promotes the best submitted work in graphic design, advertising, photography, poster design, branding, typeface design, logo design and illustration. Graphis award competitions, which include Graphic Design, Advertising, Photography, Posters, New Talent (Student), Packaging and Protest Posters, are juried by award-winning leading creatives. The award-winning work is published online and in fine art quality hardbound books. Other Graphis books include: Takenobu Igarashi: Design and Fine Art, Ally & Gargano, Kit Hinrichs' Narrative Design, and Richard Wilde's Wilde Years, among many others. Graphis also publishes a New Talent Annual that presents the best student work of the year, providing young professionals with exposure and recognition. Graphis Magazine ended at issue No. 355, and a new Graphis Journal magazine, starting with Issue No. 356, was introduced in 2018. The Graphis Journal presents stories on the top talents who have been consistent Platinum and Gold winners in the Graphis annual competitions. History Graphis was founded in 1944 by Walter Herdeg and Dr. Walter Amstutz in Zurich, Switzerland. The magazine was started with the September/October 1944 issue. In 1986, B. Martin Pedersen purchased the company from Mr. Herdeg and later moved the headquarters to New York City. The Annuals were redefined, and new books were added to the roster, including New Talent (student work), Typography, Branding, and Logo & Letterhead, among others. Awards Graphis annually invites professionals across the design industry to enter. Among these submissions, Graphis selects the most compelling work of the year for Platinum, Gold, and Silver awards, which are featured in hardcover Annuals and on the website. In addition, Honorable Mentions are also archived permanently on Graphis' website. Up to 500 submissions are published and presented on the site, supporting professionals, as well as young and emerging talent. Graphis Masters Design Anderson, Charles Spencer ARSONAL Bankov, Peter Bass, Saul Bundi, Stephan Castelletti, Andrea Chung, Hoon-Dong Cold Open Collins, Brian Colrat, Pascal Cronan, Michael Cross, James A. Cummings & Good Danne, Richard Fraile, Eduardo del Doret, Michael Duffy, Joe Emery, Garry Fili, Louise Fleckhaus, Willy Fletcher, Alan Frost, Vince Gee, Earl Gericke, Michael Glaser, Milton Gottschalk+Ash Int'l Gran, Martin Gravdahl, John Haller, Carmit He, Jianping Hickmann, Fons Hillman, David Hinrichs, Kit Hofmann, Matthias Huerta, Gerard Hvass&Hannibal Ide, Toshiaki Igarashi, Takenobu Imboden, Melchior Jeker, Werner Kamekura, Yusaku Katsui, Mitsuo KMS TEAM GmbH Ljubicic, Boris Lloyd, Doug Loesch, Uwe Loiri, Pekka Lorenc, Jan Lubalin, Herb Machado, João Matsuda, Jisuke Matsunaga, Shin Matthies, Holger McCandliss and Campbell Minini, Marcos Miyake, Issey Morla, Jennifer Müller-Brockmann, Josef Nygaard, Finn Osborne, Michael PepsiCo Design & Innovation Pérez, Álvaro Peteet, Rex Piippo, Kari Pirtle, Woody Porsche, F.A. 
Poulin, Richard Pulfer, Adrian Rambow, Gunter Rand, Paul Reisinger, Dan Rhubarb Rodriguez, Robert Sagmeister, Stefan Sagmeister & Walsh Saito, Makoto Sandstrom, Steve Satoh, Taku Scher, Paula Schmidt, Anders Schwab, Michael Simmons, Art Stavro, Astrid (Atlas) Skolos Wedell Smith, Marlena Buczek Sottsass, Ettore Stout, DJ Stranger & Stranger Studio Eduardo Aires Sych, Paul Taft, Ron Tanaka, Ikko Throndsen, Morten Troxler, Niklaus Turner Duckworth Vanderbyl, Michael Vignelli, Lella Vignelli, Massimo Wilker, Karlsson Woodward, Fred Yagi, Tamotsu Photography Almas, Erik Azevedo, Athena Balog, James Buckley, Dana Colrat, Pascal Cutler, Craig DeBoer, Bruce Cumptich, Ricardo de Vicq de Diodato, Bill Faulkner, Colin Fellman, Sandi Flach, Tim Frankel, Laurie Furman, Michael Greenfield-Sanders, Timothy Gudnason, Torkil Heffernan, Terry Iooss, Walter joSon JUCO Knight, Nick Knopf, Caroline Knowles, Jonathan Kohanim, Parish Kretschmer, Hugh Laita, Mark Leutwyler, Henry Madere, John Marco, Phil McCandliss and Campbell Mendlowitz, Benjamin M. RJ Muna Musilek, Stan Newell, Lennette Norberg, Marc O'Brien, Michael Olson, Rosanne Poon, Kah Robert, Francois Saraceno, Joseph Schatz, Howard Schoenfeld, Michael Seliger, Mark Shoan, Tatijana Stirton, Brent Tardio, Robert Turner, Pete Vasquez, Rafael Voorhes, Adam Wartenberg, Frank P. Watson, Albert Weitz, Allan Wilson, Christopher Zuckerman, Andrew Advertising Bailey Lauerman BBDO Carmichael Lynch Corcoran, Colin daDá, david chandi DeVito, Sal DM9 DDB Fallon, Pat Gargano, Amil Goodby Silverstein & Partners Krone, Helmut Lewis Communications Lloyd, Doug Ogilvy, David Robert Talarczyk Wieden + Kennedy Zulu Alpha Kilo Art/Illustration Billout, Guy Blackshear, Thomas Bralds, Braldt Cosgrove, Dan Deas, Michael Fasolino, Teresa Flagg, James M. Forbes, Bart Foster, Jeff Staudinger + Franke Frazier, Craig Glenwood, Michael Hess, Richard Hess, Mark Hvass&Hannibal Johnson, Doug Kraemer, Peter Larsson, Carl Mattos, John O'Brien, Tim Pantuso, Michael Parrish, Maxfield Pelavin, Daniel Rockwell, Norman Rodriguez, Robert Stahl, Nancy Summers, Mark Unruh, Jack Vasarely, Victor Wyeth, N.C. Education Anselmo, Frank Bartlett, Brad Bekker, Phil DeVito, Sal Seung-Min Han & Dong-Joo Park Goldberg, Carin Mariucci, Jack Pulfer, Adrian Richardson, Hank Smith, Mark T. Sommese, Lanny Sommese, Kristin White, Mel References External links 2009 Design Annual Platinum Award Winners 2022 Poster Annual Award Winners 2021 Design Annual Award Winners 2021 Advertising Annual Award Winners Swiss companies established in 1944 Advertising awards Book art awards Book design Book publishing companies based in New York City Communication design Design awards Design books Design magazines Graphic design Magazines established in 1944 Magazines published in Zurich Photography awards Publishing companies based in New York City Publisher awards Visual arts magazines published in the United States
Graphis Inc.
[ "Engineering" ]
1,495
[ "Design magazines", "Design awards", "Communication design", "Book design", "Design" ]
11,923,400
https://en.wikipedia.org/wiki/Tate%20Etc.
Tate Etc. is an arts magazine produced within Britain's Tate organisation of arts and museums. It has the largest circulation of any art magazine in the world. The magazine was edited by Simon Grant from its launch in 2004 until the Autumn 2021 issue. As well as being sold in shops, the magazine is sent for free to Tate members. History Prior to the 2004 launch of Tate Etc., the Tate published a magazine for its members. In 2002 the Tate's magazine was taken over by Condé Nast, who relaunched it as a bi-monthly general arts magazine which would, for the first time, carry consumer advertising. The magazine was brought back in-house in 2004 as Tate Etc., founded by Simon Grant and Bice Curiger. Tate Etc. first appeared in the summer of 2004, and issues have since been produced three times a year. In 2007, then-art director Cornel Windlin spoke about the relative freedom afforded to the editorial team at Tate Etc. because the magazine was sent to Tate members, so there was less pressure to sell on newsstands. This influenced the cover design, with more prominence given to text and less to eye-grabbing images on the magazine's covers during that time. In 2017 the magazine was redesigned, partly to emphasise its independence and make it look more distinct from Tate's marketing materials. In 2018 the magazine launched a digital edition. Founding editor Simon Grant's final issue working on the magazine was the Autumn 2021 issue. Notes External links Visual arts magazines published in the United Kingdom Design magazines Magazines published in London Magazines established in 2004 2004 establishments in England
Tate Etc.
[ "Engineering" ]
329
[ "Design magazines", "Design" ]
4,175,003
https://en.wikipedia.org/wiki/Magnetic%20pressure
In physics, magnetic pressure is an energy density associated with a magnetic field. In SI units, the energy density of a magnetic field with strength B can be expressed as E_B = B²/(2μ₀), where μ₀ is the vacuum permeability. Any magnetic field has an associated magnetic pressure contained by the boundary conditions on the field. It is identical to any other physical pressure except that it is carried by the magnetic field rather than (in the case of a gas) by the kinetic energy of gas molecules. A gradient in field strength causes a force due to the magnetic pressure gradient called the magnetic pressure force. Mathematical statement In SI units, the magnetic pressure P_B in a magnetic field of strength B is P_B = B²/(2μ₀), where μ₀ is the vacuum permeability; P_B has units of energy density. Magnetic pressure force In ideal magnetohydrodynamics (MHD) the magnetic pressure force in an electrically conducting fluid with a bulk plasma velocity field v, current density J, mass density ρ, magnetic field B, and plasma pressure p can be derived from the Cauchy momentum equation: ρ(∂v/∂t + (v · ∇)v) = J × B − ∇p, where the first term on the right hand side represents the Lorentz force and the second term represents pressure gradient forces. The Lorentz force can be expanded using Ampère's law, J = (∇ × B)/μ₀, and the vector identity (1/2)∇(B · B) = (B · ∇)B + B × (∇ × B) to give J × B = (B · ∇)B/μ₀ − ∇(B²/(2μ₀)), where the first term on the right hand side is the magnetic tension and the second term is the magnetic pressure force. Magnetic tension and pressure are both implicitly included in the Maxwell stress tensor. Terms representing these two forces are present along the main diagonal where they act on differential area elements normal to the corresponding axis. Wire loops The magnetic pressure force is readily observed in an unsupported loop of wire. If an electric current passes through the loop, the wire serves as an electromagnet, such that the magnetic field strength inside the loop is much greater than the field strength just outside the loop. This gradient in field strength gives rise to a magnetic pressure force that tends to stretch the wire uniformly outward. If enough current travels through the wire, the loop of wire will form a circle. At even higher currents, the magnetic pressure can create tensile stress that exceeds the tensile strength of the wire, causing it to fracture, or even explosively fragment. Thus, management of magnetic pressure is a significant challenge in the design of ultrastrong electromagnets. The force (in cgs) exerted on a coil by its own current is F = (I²/c²)(ln(8R/a) − 1 + Y), where R is the radius of the loop, a is the radius of the wire, and Y is the internal inductance factor of the coil, defined by the distribution of current. Y is 0 for high frequency currents carried mostly by the outer surface of the conductor, and 0.25 for DC currents distributed evenly throughout the conductor. See inductance for more information. Interplay between magnetic pressure and ordinary gas pressure is important to magnetohydrodynamics and plasma physics. Magnetic pressure can also be used to propel projectiles; this is the operating principle of a railgun. Force-free fields When all electric currents present in a conducting fluid are parallel to the magnetic field, the magnetic pressure gradient and magnetic tension force are balanced, and the Lorentz force vanishes. If non-magnetic forces are also neglected, the field configuration is referred to as force-free. Furthermore, if the current density is zero, the magnetic field is the gradient of a magnetic scalar potential, and the field is subsequently referred to as potential. 
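As a quick numerical illustration of the expressions above, the following Python sketch evaluates the magnetic pressure B²/(2μ₀) for a few field strengths, and the hoop tension of a current loop using the SI form of the coil formula (μ₀/4π replacing the cgs factor 1/c²); the loop current and dimensions are arbitrary illustrative values, not taken from the text.

```python
# Worked example, SI units. Loop parameters below are illustrative only.
import math

MU_0 = 4e-7 * math.pi  # vacuum permeability, T*m/A
ATM = 101325.0         # standard atmosphere, Pa

for b in (0.1, 1.0, 10.0):        # field strength, tesla
    p = b**2 / (2.0 * MU_0)       # magnetic pressure, Pa
    print(f"B = {b:5.1f} T  ->  P_B = {p:.3e} Pa  ({p / ATM:9.2f} atm)")

# Hoop tension on a circular loop, SI form of the coil formula above:
#   T = (mu_0 * I^2 / (4*pi)) * (ln(8R/a) - 1 + Y)
i_loop, r_loop, a_wire, y = 1.0e4, 0.5, 0.005, 0.25  # A, m, m, DC value of Y
tension = MU_0 * i_loop**2 / (4 * math.pi) * (math.log(8 * r_loop / a_wire) - 1 + y)
print(f"Loop tension at {i_loop:.0f} A: {tension:.1f} N")
```

A 1 tesla field, for instance, carries a pressure of roughly 4 × 10⁵ Pa, about four atmospheres, which is one reason even modest laboratory magnets must be mechanically robust.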
See also Magnetic tension force Maxwell stress tensor Electromagnetically induced acoustic noise and vibration Alfvén wave References Plasma parameters Electromagnetism
Magnetic pressure
[ "Physics" ]
691
[ "Electromagnetism", "Physical phenomena", "Fundamental interactions" ]
4,175,228
https://en.wikipedia.org/wiki/Xia%E2%80%93Shang%E2%80%93Zhou%20Chronology%20Project
The Xia–Shang–Zhou Chronology Project (夏商周断代工程) was a multi-disciplinary project commissioned by the People's Republic of China in 1996 to determine with accuracy the location and time frame of the Xia, Shang, and Zhou dynasties. The project was directed by Professor Li Xueqin of Tsinghua University in Beijing, and involved around 200 experts. It used radiocarbon dating, archaeological dating methods, historical textual analysis, astronomy, and other methods to achieve greater temporal and geographic accuracy. Preliminary results were released in November 2000 and the final report was published in June 2022. Among other findings, it dated the beginning of the Xia to c. 2070 BC, the Shang to c. 1600 BC, and the Zhou to 1046 BC. However, some scholars have disputed several of the project's methods and conclusions. Background The traditional account of ancient China, represented by the Records of the Grand Historian written by Sima Qian in the Han dynasty, begins with the Three Sovereigns and Five Emperors, leading through a sequence of dynasties, the Xia, Shang and Zhou. Sima Qian felt able to give a year-by-year chronology back to the start of the Gonghe Regency in 841 BC, early in the Zhou dynasty. For the period before that date, his sources (now mostly lost) were unreliable and inconsistent, and he gave only lists of kings and accounts of isolated events. Later scholars were unable to push a precise chronology back past Sima Qian's date of 841 BC. Many elements of the traditional account, especially the early parts, were clearly mythical. In the 1920s, Gu Jiegang and other scholars of the Doubting Antiquity School noted that the earliest figures appeared latest in the literature, and suggested that the traditional history had accreted layers of myth. Noting parallels between the accounts of the Xia and Shang, they suggested that the history of the Xia was invented by the Zhou to support their doctrine of the Mandate of Heaven, by which they justified their conquest of the Shang. Some even doubted the historicity of the Shang dynasty. In 1899, the scholar Wang Yirong examined some curious symbols carved on "dragon bones" purchased from a Chinese pharmacist, and identified them as an early form of Chinese writing. The bones were finally traced back in 1928 to a site (now called Yinxu) near Anyang, north of the Yellow River in modern Henan province. The inscriptions on the bones were found to be divination records from the reigns of the last nine Shang kings, from the reign of Wu Ding. Moreover, from the sacrificial schedule recorded on the bones it was possible to reconstruct a sequence of Shang kings that closely matched the list given by Sima Qian. Archaeologists focused on the Yellow River valley in Henan as the most likely site of the states described in the traditional histories. After 1950, remnants of an earlier walled city of the Erligang culture were discovered near Zhengzhou, and in 1959 the site of the Erlitou culture was found in Yanshi, south of the Yellow River near Luoyang. Radiocarbon dating suggests that the Erlitou culture flourished c. 2100 BC to 1800 BC. They built large palaces, suggesting the existence of an organized state. More recently the picture has been complicated by the discovery of advanced civilizations in Sichuan and the Yangtze valley, such as Sanxingdui and Wucheng, of which the traditional histories make no mention. Until the mid-20th century, many popular works, both Chinese and Western, used a traditional chronology calculated by Liu Xin early in the first century AD. 
However, modern scholars studying inscriptions on Shang oracle bones and Zhou bronzes were proposing shorter chronologies, for example typically placing the Zhou conquest of the Shang in the mid-11th century BC instead of the 12th. In 1994, Song Jian, a state councillor for science, was impressed on a visit to Egypt by chronologies stretching back to the 3rd millennium BC. He proposed a multi-disciplinary project to establish a similar chronology for China. The project was approved as part of the ninth five-year plan (1996–2000). A preliminary report of the project was issued in 2000. After lengthy review, the full report was sent to the publishers in 2019 and the offices of the Project were closed, with their materials sent to the Institute of Archaeology of the Chinese Academy of Social Sciences. The full report was published in June 2022 after more than a decade of revision. Although the final report noted that some archaeological finds after the publication of the preliminary report were inconsistent with its findings, the chronology of the preliminary report was adopted without change. Methods The Project used a combination of methods to attempt to correlate the traditional literature with archeological discoveries and the astronomical record. Western Zhou kings The contemporary evidence for the Western Zhou consists of thousands of bronzes, many bearing inscriptions. Around 60 of these record dates of important events as the day in the sexagenary cycle, the phase of the moon, the month and the year of reign. However, the rules of the Western Zhou lunisolar calendar, in particular the start of a month or year and the insertion of intercalary months, were not fixed. In addition, the current king is typically not identified. Occasionally an unusual astronomical event was recorded. A key reference point was the accession of King Yih of Zhou, when according to the "old text" Bamboo Annals the day dawned twice. The Project adopted (without acknowledgement) the proposal of the Korean scholar Pang Sunjoo (方善柱) that this referred to an annular solar eclipse at dawn that occurred in 899 BC. Other scholars have challenged both this interpretation of the text and the astronomical calculations involved. King Wu's conquest of the Shang Perhaps the most significant event requiring dating is the conquest of the Shang by the Zhou, described in traditional histories as the Battle of Muye, though the site of the battle has not been identified. Previous chronologies had proposed at least 44 different dates for this event, ranging from 1130 to 1018 BC. The most popular have been 1122 BC, calculated by the Han dynasty astronomer Liu Xin, and 1027 BC, deduced from a statement in the "old text" Bamboo Annals that the Western Zhou (whose end point is known to be 770 BC) had lasted 257 years. A few documents relate astronomical observations to this event: A quotation in the Book of Han from the lost Wǔchéng 武成 chapter of the Book of Documents appears to describe a lunar eclipse just before the beginning of King Wu's campaign. This date, and the date of his victory, are given as months and sexagenary days. A passage in the Guoyu gives the positions of the Sun, Moon, Jupiter and two stars on the day King Wu attacked the Shang. The "current text" Bamboo Annals mentions conjunctions of all five planets occurring before and after the Zhou conquest. Han-period texts mention the first conjunction as occurring in the 32nd year of the reign of the last king. 
Such events are rare, but all five planets did gather on 28 May 1059 BC and again on 26 September 1019 BC. Although the recorded positions in the sky of these two events are the reverse of what occurred, they could not have been retrospectively calculated at the time the account first appears. The strategy adopted by the Project was to use archaeological investigation to narrow the range of dates that would need to be compared with the astronomical data. Although no archaeological traces of King Wu's campaign have been found, the pre-conquest Zhou capital at Fengxi in Shaanxi has been excavated and strata at the site have been identified with the Predynastic Zhou. Radiocarbon dating of samples from the site as well as at late Yinxu and early Zhou capitals, using the wiggle matching technique, yielded a date for the conquest between 1050 and 1020 BC. The only date within that range matching all the astronomical data is 20 January 1046 BC. This date had previously been proposed by David Pankenier, who had matched the above passages from the classics with the same astronomical events, but here it resulted from a thorough consideration of a broader range of evidence. Other scholars have raised several criticisms of this process. The connection between the layers at the archaeological sites and the conquest is uncertain. The narrow range of radiocarbon dates is cited with a less stringent confidence interval (68%) than the standard requirement of 95%, which would have produced a much wider range. The texts describing the relevant astronomical phenomena are extremely obscure. For example, the inscription on the Li gui, a key text used in dating the conquest, can be interpreted in several different ways, with one alternative reading leading to the date of 9 January 1044 BC. Late Shang kings For the late Shang, the oracle bones provide less detail than Zhou bronzes, routinely recording only the day in the sexagenary cycle. However, calculations using a longer ritual cycle were used to date the reigns of the last two Shang kings. Mentions of five lunar eclipses in oracle bone divinations from the late Wu Ding and Zu Geng reigns were identified with events spanning the period from 1201 to 1181 BC, from which a start date for Zu Geng's reign was derived. The start date of Wu Ding's reign was then calculated using the statement in the "Against Luxurious Ease" chapter of the Book of Documents that his reign lasted 59 years. Early Shang and Xia According to the traditional histories, Pan Geng, three reigns earlier than Wu Ding, moved the Shang capital to its last site, generally identified with the Yinxu site in Anyang. Different interpretations of the text of the Bamboo Annals give intervals of 275, 273 or 253 years between this event and the Zhou conquest. The project settled on a date near the shortest of these intervals. The four phases of the Erlitou culture have been divided between the Xia and Shang dynasties in different ways by various prominent archaeologists. The project assigned all four phases to the Xia, identifying the establishment of the Shang dynasty with the building of the Yanshi walled city north-east of the Erlitou site. The time span of the Xia dynasty was taken from reign-lengths given in the Bamboo Annals and from a conjunction of five planets during the reign of Yu the Great recorded in later texts. 
As this period was longer than the time spanned by the Erlitou culture, the project also included the later phases of the Wangwan III variant of the Longshan culture within the Xia period. Chronological table The Xia–Shang–Zhou Chronology Project established precise dates for the accessions of rulers from Wu Ding, the Shang dynasty king whose reign produced the oldest known oracle bone records. These dates were compared with the traditional dates and those used in the Cambridge History of Ancient China. Earlier dates are given more approximately: The relocation of the Shang capital to Yin during the reign of Pan Geng is aligned with the earliest layers at Yinxu, dated at c. 1300 BC. The establishment of the Shang dynasty was identified with the foundation of an Erligang culture walled city at Yanshi, dated at c. 1600 BC, compared with the Cambridge History's c. 1570 BC and the traditional date of 1766 BC. The establishment of the Xia dynasty was dated at c. 2070 BC, compared with the traditional date of 2205 BC. Reception Coverage of the project in non-Chinese press focused on the conflict between nationalism and scholarship. However, not every member of the chronology project agrees on all of the dates. Indeed, the project has been unafraid to contest dates proposed even by the director. This suggests that the dates are being considered on their own merits rather than by deferring to authority, and that politics does not influence the detailed work of the project. In addition to methodological concerns, scholars have complained that the project is part of a tradition of relegating archaeology to a role of verifying traditional histories. They argue that this forces archaeological evidence into a framework of a single sequence of similar dominant states, as depicted in the histories and reflected in the title "Three Dynasties". However, when evaluated on its own merits, the evidence reveals a much more complex origin of Chinese civilization, with many other advanced states that are not mentioned in the histories. A session of the Annual Conference of the Association for Asian Studies in 2002 was devoted to the preliminary report, where its methods were criticised by David Nivison, among others. An international conference on chronology arranged for October 2003 was postponed due to the SARS outbreak, but never rescheduled. The Project's dates have, however, become the orthodox chronology in Chinese textbooks and reference works. Some bronze inscriptions discovered since the draft report was issued in 2000 are inconsistent with the project's dates for the Western Zhou. For example: The Jue Gong gui, an early Western Zhou vessel probably from the reign of King Cheng but possibly from the following King Kang, has an inscription stating that it was cast in the 28th year of the king, whereas the project gave reign lengths of 22 and 25 years respectively to these kings. The Jun gui, assigned to the reign of King Yih, has an inscription stating that it was cast in the 10th year of the king, to whom the project assigned a reign of 8 years. The final report acknowledged many of these problems, but did not alter the date table issued in the preliminary report. 
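The sexagenary day counts used in the oracle-bone and bronze records above are, computationally, simple modular arithmetic: days repeat in a fixed cycle of 60 (ten Heavenly Stems paired with twelve Earthly Branches). The sketch below illustrates the mechanics using a standard Julian-day-number formula; the epoch offset that aligns the count with the traditional cycle is an assumed placeholder that would have to be calibrated against a securely dated day, so the output is illustrative only.

```python
# Illustrative sketch of sexagenary day-cycle arithmetic.
# EPOCH_OFFSET is an assumption, not a calibrated historical value.

def julian_day_number(year, month, day):
    """Julian day number for a proleptic Gregorian date; uses astronomical
    year numbering, so e.g. 1046 BC is written as year -1045."""
    a = (14 - month) // 12
    y = year + 4800 - a
    m = month + 12 * a - 3
    return day + (153 * m + 2) // 5 + 365 * y + y // 4 - y // 100 + y // 400 - 32045

EPOCH_OFFSET = 49  # placeholder alignment of the 60-day cycle

def sexagenary_day(year, month, day):
    """Position (1-60) of the date within the 60-day cycle."""
    return (julian_day_number(year, month, day) + EPOCH_OFFSET) % 60 + 1

print(sexagenary_day(2000, 1, 1))    # sanity check on an arbitrary modern date
print(sexagenary_day(-1045, 1, 20))  # an ancient example query
```

Because the cycle runs unbroken over millennia, one securely anchored day fixes the cycle position of every other day, which is what makes the sexagenary records useful for cross-checking the astronomical identifications discussed above.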
See also History of China History of Qing (People's Republic) Five thousand years of Chinese civilization Notes References Footnotes Works cited Bronze Age in China 2000 documents Historiography of China Chronology Archaeology timelines Archaeological theory Periods and stages in archaeology Projects in Asia Projects established in 1996 Organizations disestablished in 2000 University projects Research projects
Xia–Shang–Zhou Chronology Project
[ "Physics" ]
2,784
[ "Spacetime", "Chronology", "Physical quantities", "Time" ]
4,175,276
https://en.wikipedia.org/wiki/Time%20Matters
Time Matters is practice management software, produced by PCLaw | Time Matters LLC. It differs from contact management software such as ACT! or GoldMine because, in addition to contacts, it manages calendaring, email, documents, research, billing, accounting, and matters or projects. It integrates with a variety of other software products from both LexisNexis and other vendors. Some of these vendors are Quicken, Microsoft, Palm, Mozilla, Corel, and Adobe. Developed originally for law firms, Time Matters competes with Gavel, Amicus, Tabs, and other legal practice management products. It also may be used in conjunction with document modelling and document assembly software products like HotDocs and Deal Builder. Time Matters was developed by DATA.TXT Corporation, originally of Coral Gables, Florida, later of Cary, North Carolina. Since its inception, DATA.TXT Corporation focused on making Time Matters an all-encompassing professional office software package, providing Calendar, Tickler, Contact, Matter, Document, and Messaging Management functions for personal computers and networks of all sizes. DATA.TXT was founded in December 1989 by Robert Butler, who was later joined as co-founder by Kevin Stilwell in 1992; the entire management and programming staff that began Time Matters' development in 1989 remained on the team until 2004, providing continuity and reliability rarely seen in software developed for specialized markets. Time Matters was purchased by Reed Elsevier in March 2004. LexisNexis developed Time Matters out of offices in Cary, later moving operations to Raleigh, North Carolina on the North Carolina State University campus. Time Matters for Windows has been shipping since 1994 (the DOS version of Time Matters started shipping in 1989). Time Matters was previously available in three editions: Professional, Enterprise, and World. The Enterprise edition used Microsoft SQL Server as its database engine. Time Matters Browser Edition (formerly World Edition) served up Time Matters in web browsers for remote access to a law firm's data. An international network of Certified Independent Consultants ("CICs") supports, trains, and customizes this product for end-users. Time Matters Professional, discontinued with the release of Time Matters 10.0 in 2009, was based on the TPS file system developed by SoftVelocity. Currently, Time Matters relies on SQL Server for its database. With the release of Version 10 in October 2009, Time Matters became available only in the Enterprise Edition (but was sold as Time Matters). In May 2010, LexisNexis introduced an Annual Maintenance Plan (AMP) subscription program. AMP subscribers are eligible to download product upgrades and to receive technical support. In 2018, Time Matters introduced Time Matters Go, a mobile application for iOS and Android devices. AMP subscribers also receive free access to online training and are eligible to subscribe to the Time Matters Go mobile app service for Android and iOS. No per-incident technical support options are available. Time Matters 16.4 was released on January 30, 2019. This release provided improved integration with Microsoft Exchange Server, and improved add-ins for supported versions of Adobe Acrobat and Microsoft Office applications. In May 2019, LexisNexis entered a joint venture with LEAP Legal Software, providing a migration option from the server-based Time Matters to the cloud-based product offered by LEAP. 
At the time, LexisNexis reported that it had 15,000 paying customers and 130,000 users across its PCLaw and Time Matters products. A new software company, PCLaw | Time Matters, was born out of the joint venture and continues to develop Time Matters. See also LexisNexis References External links Time Matters website Business software Legal software Timekeeping
Time Matters
[ "Physics" ]
743
[ "Spacetime", "Timekeeping", "Physical quantities", "Time" ]
4,175,283
https://en.wikipedia.org/wiki/Portal%20frame
Portal frame is a construction technique where vertical supports are connected to horizontal beams or trusses via fixed joints with designed-in moment-resisting capacity. The result is wide spans and open floors. Portal frame structures can be constructed using a variety of materials and methods. These include steel, reinforced concrete and laminated timber such as glulam. First developed in the 1960s, portal frames have become the most common form of enclosure for spans of 20 to 60 meters. Because of these very strong and rigid joints, some of the bending moment in the rafters is transferred to the columns. This means that the size of the rafters can be reduced or the span can be increased for the same size rafters. This makes portal frames a very efficient construction technique to use for wide span buildings. Portal frame construction is therefore typically seen in warehouses, barns and other places where large, open spaces are required at low cost and a pitched roof is acceptable. Generally portal frames are used for single-story buildings but they can be used for low-rise buildings with several floors where they can be economic if the floors do not span right across the building (in these circumstances a skeleton frame, with internal columns, would be a more economic choice). A typical configuration might be where there is office space built against one wall of a warehouse. Portal frames can be clad with various materials. For reasons of economy and speed, the most popular solution is some form of lightweight insulated metal cladding with cavity masonry work to the bottom 2 m of the wall to provide security and impact resistance. The lightweight cladding would be carried on sheeting rails spanning between the columns of the portal frames. Portal frames can be defined as two-dimensional rigid frames that have the basic characteristics of a rigid joint between column and beam. The main objective of this form of design is to reduce bending moment in the beam, which allows the frame to act as one structural unit. The transfer of stresses from the beam to the column results in rotational movement at the foundation, which can be overcome by the introduction of a pin/hinge joint. For warehouses and industrial buildings, a sloping roof made of purlins and AC sheet roofing between portals is provided. For assembly halls, portals with an R.C. slab roof cast monolithically are used. Portal frames are designed for the following loads: roof load and wind load. It has been shown that limit state design/load and resistance factor design (LRFD) and permissible stress design/allowable strength design (ASD) can produce significantly different designs of steel gable frames. There are a few situations where ASD produces significantly lighter-weight steel gable frame designs. Additionally, it has been shown that in high snow regions, the difference between the methods is more dramatic. While designing, care should be taken over proper joints, foundations, and bracing. If the joints are not rigid, they will "open up" and the frame will be unstable when subjected to loads. This is the pack of cards effect. Vertical loading results in the walls being pushed outwards. If the foundation cannot resist horizontal push, outward movement will occur and the frame will lose strength. Wind subjects the frame to uplift forces, overturning forces on the sides and ends of the building, and drag forces on the roof and sides. 
These destabilizing forces are resisted essentially by the weight of the building and in this regard, the foundations contribute significantly to this weight. The foundations are regarded as the building's anchors. References Structural system
Portal frame
[ "Technology", "Engineering" ]
703
[ "Structural system", "Structural engineering", "Building engineering" ]
4,175,450
https://en.wikipedia.org/wiki/Shockley%20diode%20equation
The Shockley diode equation, or the diode law, named after transistor co-inventor William Shockley of Bell Labs, models the exponential current–voltage (I–V) relationship of semiconductor diodes in moderate constant current forward bias or reverse bias: I = I_S [exp(V_D / (n V_T)) − 1], where I is the diode current, I_S is the reverse-bias saturation current (or scale current), V_D is the voltage across the diode, V_T is the thermal voltage, and n is the ideality factor, also known as the quality factor, emission coefficient, or the material constant. The equation is called the Shockley ideal diode equation when the ideality factor n equals 1, and n is thus sometimes omitted. The ideality factor typically varies from 1 to 2 (though it can in some cases be higher), depending on the fabrication process and semiconductor material. The ideality factor was added to account for imperfect junctions observed in real transistors, mainly due to carrier recombination as charge carriers cross the depletion region. The thermal voltage is defined as: V_T = kT/q, where k is the Boltzmann constant, T is the absolute temperature of the p–n junction, and q is the elementary charge (the magnitude of an electron's charge). For example, it is approximately 25.852 mV at 300 K. The reverse saturation current I_S is not constant for a given device, but varies with temperature; usually more significantly than V_T, so that V_D typically decreases as T increases. Under reverse bias, the diode equation's exponential term is near 0, so the current is near the somewhat constant −I_S reverse current value (roughly a picoampere for silicon diodes or a microampere for germanium diodes, although this is obviously a function of size). For moderate forward bias voltages the exponential becomes much larger than 1, since the thermal voltage is very small in comparison. The −1 in the diode equation is then negligible, so the forward diode current will approximate I ≈ I_S exp(V_D / (n V_T)). The use of the diode equation in circuit problems is illustrated in the article on diode modeling. Limitations Internal resistance causes "leveling off" of a real diode's I–V curve at high forward bias. The Shockley equation doesn't model this, but adding a resistance in series will. The reverse breakdown region (particularly of interest for Zener diodes) is not modeled by the Shockley equation. The Shockley equation doesn't model noise (such as Johnson–Nyquist noise from the internal resistance, or shot noise). The Shockley equation is a constant current (steady state) relationship, and thus doesn't account for the diode's transient response, which includes the influence of its internal junction and diffusion capacitance and reverse recovery time. Derivation Shockley derives an equation for the voltage across a p-n junction in a long article published in 1949. Later he gives a corresponding equation for current as a function of voltage under additional assumptions, which is the equation we call the Shockley ideal diode equation. He calls it "a theoretical rectification formula giving the maximum rectification", with a footnote referencing a paper by Carl Wagner, Physikalische Zeitschrift 32, pp. 641–645 (1931). 
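Before following the derivation, it may help to see the law evaluated numerically. The following is a minimal sketch in Python; the saturation current I_S and ideality factor n are illustrative placeholder values, not parameters of any particular device.

```python
# Minimal numerical sketch of the Shockley diode equation
#   I = I_S * (exp(V_D / (n * V_T)) - 1)
# The values of i_s and n below are illustrative assumptions.
import math

K_B = 1.380649e-23     # Boltzmann constant, J/K
Q_E = 1.602176634e-19  # elementary charge, C

def diode_current(v_d, i_s=1e-12, n=1.0, temp=300.0):
    """Diode current (A) at voltage v_d (V) and junction temperature temp (K)."""
    v_t = K_B * temp / Q_E  # thermal voltage, ~25.852 mV at 300 K
    return i_s * (math.exp(v_d / (n * v_t)) - 1.0)

for v in (-0.5, 0.0, 0.3, 0.6):
    print(f"V_D = {v:+.2f} V  ->  I = {diode_current(v):+.3e} A")
```

Under reverse bias the output settles near −I_S, while each additional n·V_T·ln(10) ≈ 60 mV of forward bias multiplies the current roughly tenfold, matching the behavior described above.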
To derive his equation for the voltage, Shockley argues that the total voltage drop can be divided into three parts: the drop of the quasi-Fermi level of holes from the level of the applied voltage at the p terminal to its value at the point where doping is neutral (which we may call the junction), the difference between the quasi-Fermi level of the holes at the junction and that of the electrons at the junction, the drop of the quasi-Fermi level of the electrons from the junction to the n terminal. He shows that the first and the third of these can be expressed as a resistance times the current: I R_1 and I R_3. As for the second, the difference between the quasi-Fermi levels at the junction, he says that we can estimate the current flowing through the diode from this difference. He points out that the current at the p terminal is all holes, whereas at the n terminal it is all electrons, and the sum of these two is the constant total current. So the total current is equal to the decrease in hole current from one side of the diode to the other. This decrease is due to an excess of recombination of electron-hole pairs over generation of electron-hole pairs. The rate of recombination is equal to the rate of generation when at equilibrium, that is, when the two quasi-Fermi levels are equal. But when the quasi-Fermi levels are not equal, then the recombination rate is exp(V_J / V_T) times the rate of generation, where V_J is the separation of the quasi-Fermi levels expressed as a voltage. We then assume that most of the excess recombination (or decrease in hole current) takes place in a layer going by one hole diffusion length (L_p) into the n material and one electron diffusion length (L_n) into the p material, and that the difference between the quasi-Fermi levels is constant in this layer at V_J. Then we find that the total current, or the drop in hole current, is I = I_S [exp(V_J / V_T) − 1], where I_S = q A (L_p + L_n) g, A is the cross-sectional area of the junction, and g is the generation rate. We can solve for V_J in terms of I: V_J = V_T ln(I/I_S + 1), and the total voltage drop is then V = I (R_1 + R_3) + V_T ln(I/I_S + 1). When we assume that I (R_1 + R_3) is small, we obtain V ≈ V_T ln(I/I_S + 1), that is, I = I_S [exp(V / V_T) − 1], and the Shockley ideal diode equation. The small current that flows under high reverse bias is then the result of thermal generation of electron–hole pairs in the layer. The electrons then flow to the n terminal, and the holes to the p terminal. The concentrations of electrons and holes in the layer are so small that recombination there is negligible. In 1950, Shockley and coworkers published a short article describing a germanium diode that closely followed the ideal equation. In 1954, Bill Pfann and W. van Roosbroek (who were also of Bell Telephone Laboratories) reported that while Shockley's equation was applicable to certain germanium junctions, for many silicon junctions the current (under appreciable forward bias) was proportional to exp(V / (n V_T)), with n having a value as high as 2 or 3. This is the ideality factor above. Feynman gave a derivation using the Brownian ratchet in The Feynman Lectures on Physics I.46. Photovoltaic energy conversion In 1981, Alexis de Vos and Herman Pauwels showed that a more careful analysis of the quantum mechanics of a junction, under certain assumptions, gives a current versus voltage characteristic of the form I = q A [F_i − 2 F_o(V)], in which A is the cross-sectional area of the junction, F_i is the number of incoming photons per unit area, per unit time, with energy over the band-gap energy, and F_o(V) is the corresponding flux of outgoing photons, given by F_o(V) = (2π/(h³c²)) ∫ from E_g to ∞ of E² dE / [exp((E − qV)/(kT)) − 1]. The factor of 2 multiplying the outgoing flux is needed because photons are emitted from both sides, but the incoming flux is assumed to come from just one side. 
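The outgoing flux F_o(V) written above can be evaluated by straightforward numerical quadrature. A sketch follows; the band gap and bias voltages are illustrative assumptions, and the finite upper integration limit simply truncates the exponentially decaying tail of the integrand.

```python
# Numerical sketch of the outgoing-photon flux
#   F_o(V) = (2*pi / (h^3 c^2)) * Integral_{E_g}^inf E^2 / (exp((E - qV)/kT) - 1) dE
# The band gap (1.1 eV) and voltages below are illustrative assumptions.
import math
from scipy.integrate import quad

H = 6.62607015e-34     # Planck constant, J*s
C = 2.99792458e8       # speed of light, m/s
K_B = 1.380649e-23     # Boltzmann constant, J/K
Q_E = 1.602176634e-19  # elementary charge, C

def outgoing_flux(v, e_gap_ev=1.1, temp=300.0):
    """Photons emitted per unit area per unit time above the band gap."""
    e_gap = e_gap_ev * Q_E
    mu = Q_E * v  # photon chemical potential qV
    integrand = lambda e: e**2 / (math.exp((e - mu) / (K_B * temp)) - 1.0)
    upper = e_gap + 40.0 * K_B * temp  # integrand is negligible beyond this
    val, _ = quad(integrand, e_gap, upper)
    return 2.0 * math.pi / (H**3 * C**2) * val

for v in (0.0, 0.3, 0.6):
    print(f"V = {v:.1f} V  ->  F_o = {outgoing_flux(v):.3e} photons m^-2 s^-1")
```

As the sketch makes visible, raising V pushes the photon chemical potential toward the band gap and the flux, and hence the current, grows very rapidly, which is the divergence discussed next.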
Although the analysis was done for photovoltaic cells under illumination, it applies also when the illumination is simply background thermal radiation, provided that a factor of 2 is then used for this incoming flux as well. The analysis gives a more rigorous expression for ideal diodes in general, except that it assumes that the cell is thick enough that it can produce this flux of photons. When the illumination is just background thermal radiation, the characteristic is I = 2 q A [F_o(V) − F_o(0)]. Note that, in contrast to the Shockley law, the current goes to infinity as the voltage goes to the gap voltage E_g/q. This of course would require an infinite thickness to provide an infinite amount of recombination. This equation was recently revised to account for a new temperature scaling in the saturation current, using a recent model for Schottky diodes based on 2D materials. References Diodes Electrical engineering Eponymous equations of physics
Shockley diode equation
[ "Physics", "Engineering" ]
1,566
[ "Electrical engineering", "Eponymous equations of physics", "Equations of physics" ]
4,175,709
https://en.wikipedia.org/wiki/Science%20in%20the%20Renaissance
During the Renaissance, great advances occurred in geography, astronomy, chemistry, physics, mathematics, manufacturing, anatomy and engineering. The collection of ancient scientific texts began in earnest at the start of the 15th century and continued up to the Fall of Constantinople in 1453, and the invention of printing allowed a faster propagation of new ideas. Nevertheless, some have seen the Renaissance, at least in its initial period, as one of scientific backwardness. Historians like George Sarton and Lynn Thorndike criticized how the Renaissance affected science, arguing that progress was slowed for some amount of time. Humanists favored human-centered subjects like politics and history over the study of natural philosophy or applied mathematics. More recently, however, scholars have acknowledged the positive influence of the Renaissance on mathematics and science, pointing to factors like the rediscovery of lost or obscure texts and the increased emphasis on the study of language and the correct reading of texts. Marie Boas Hall coined the term Scientific Renaissance to designate the early phase of the Scientific Revolution, 1450–1630. More recently, Peter Dear has argued for a two-phase model of early modern science: a Scientific Renaissance of the 15th and 16th centuries, focused on the restoration of the natural knowledge of the ancients; and a Scientific Revolution of the 17th century, when scientists shifted from recovery to innovation. Context During and after the Renaissance of the 12th century, Europe experienced an intellectual revitalization, especially with regard to the investigation of the natural world. In the 14th century, however, a series of events that would come to be known as the Crisis of the Late Middle Ages was underway. When the Black Death came, it wiped out so many lives that it disrupted the entire system. It brought a sudden end to the previous period of massive scientific change. The plague killed 25–50% of the people in Europe, especially in the crowded conditions of the towns, where the heart of innovation lay. Recurrences of the plague and other disasters caused a continuing decline of population for a century. The Renaissance The 14th century saw the beginning of the cultural movement of the Renaissance. By the early 15th century, an international search for ancient manuscripts was underway and would continue unabated until the Fall of Constantinople in 1453, when many Byzantine scholars had to seek refuge in the West, particularly Italy. Likewise, the invention of the printing press was to have great effect on European society: the easier dissemination of the printed word democratized learning and allowed a faster propagation of new ideas. Initially, there were no new developments in physics or astronomy, and the reverence for classical sources further enshrined the Aristotelian and Ptolemaic views of the universe. Renaissance philosophy lost much of its rigor as the rules of logic and deduction were seen as secondary to intuition and emotion. At the same time, under the influence of Renaissance humanism, nature came to be viewed as an animate spiritual creation that was not governed by laws or mathematics. Only later, when no more manuscripts could be found, did humanists turn from collecting to editing and translating them, and new scientific work began with the work of such figures as Copernicus, Cardano, and Vesalius. 
Important developments Alchemy and chemistry While differing in some respects, alchemy and chemistry often had similar goals during the Renaissance period, and together they are sometimes referred to as chymistry. Alchemy is the study of the transmutation of materials through obscure processes. Although it is often viewed as a pseudoscientific endeavor, many of its practitioners utilized widely accepted scientific theories of their times to formulate hypotheses about the constituents of matter and the ways matter could be changed. One of the main aims of alchemists was to find a method of creating gold and other precious metals from the transmutation of base materials. A common belief of alchemists was that there is an essential substance from which all other substances formed, and that if you could reduce a substance to this original material, you could then construct it into another substance, like lead to gold. Medieval alchemists worked with two main elements or "principles", sulphur and mercury. Paracelsus was a chymist and physician of the Renaissance period who believed that, in addition to sulphur and mercury, salt served as one of the primary alchemical principles from which everything else was made. Paracelsus was also instrumental in helping to put chemical practices to practical medicinal use through a recognition that the body operates through processes which may be seen as chemical in nature. These lines of thinking directly conflicted with many long-held traditional beliefs, such as those popularized by Aristotle; however, Paracelsus was insistent that questioning principles of nature was essential to continue the general growth of knowledge. Despite its frequent basis in what may be considered scientific practices by modern standards, numerous factors caused chymistry as a discipline to remain separate from general academia until near the end of the Renaissance, when it finally began appearing as a portion of some university education. The commercial nature of chymistry at the time, along with the lack of classical basis for the practice, were some of the contributing factors which led to the general view of the discipline as a craft rather than a respectable academic discipline. Astronomy The astronomy of the late Middle Ages was based on the geocentric model described by Claudius Ptolemy in antiquity. Probably very few practicing astronomers or astrologers actually read Ptolemy's Almagest, which had been translated into Latin by Gerard of Cremona in the 12th century. Instead they relied on introductions to the Ptolemaic system such as the De sphaera mundi of Johannes de Sacrobosco and the genre of textbooks known as Theorica planetarum. For the task of predicting planetary motions they turned to the Alfonsine tables, a set of astronomical tables based on the Almagest models but incorporating some later modifications, mainly the trepidation model attributed to Thabit ibn Qurra. Contrary to popular belief, astronomers of the Middle Ages and Renaissance did not resort to "epicycles on epicycles" in order to correct the original Ptolemaic models—until one comes to Copernicus himself. Sometime around 1450, mathematician Georg Purbach (1423–1461) began a series of lectures on astronomy at the University of Vienna. Regiomontanus (1436–1476), who was then one of his students, collected his notes on the lecture and later published them as Theoricae novae planetarum in the 1470s. This "New Theorica" replaced the older theorica as the textbook of advanced astronomy. 
Purbach also began to prepare a summary and commentary on the Almagest. He died after completing only six books, however, and Regiomontanus continued the task, consulting a Greek manuscript brought from Constantinople by Cardinal Bessarion. When it was published in 1496, the Epitome of the Almagest made the highest levels of Ptolemaic astronomy widely accessible to many European astronomers for the first time. The last major event in Renaissance astronomy is the work of Nicolaus Copernicus (1473–1543). He was among the first generation of astronomers to be trained with the Theoricae novae and the Epitome. Shortly before 1514 he began to revive Aristarchus's idea that the Earth revolves around the Sun. He spent the rest of his life attempting a mathematical proof of heliocentrism. When De revolutionibus orbium coelestium was finally published in 1543, Copernicus was on his deathbed. A comparison of his work with the Almagest shows that Copernicus was in many ways a Renaissance scientist rather than a revolutionary, because he followed Ptolemy's methods and even his order of presentation. Not until the works of Johannes Kepler (1571–1630) and Galileo Galilei (1564–1642) was Ptolemy's manner of doing astronomy superseded. The use of more advanced tables and mathematics would provide the impetus for the establishment of the Gregorian calendar in 1582 (primarily to reform the calculation of the date of Easter), replacing the Julian calendar, which by then had accumulated an error of ten days. Mathematics The accomplishments of Greek mathematicians survived throughout Late Antiquity and the Middle Ages through a long and indirect history. Much of the work of Euclid, Archimedes, and Apollonius, along with later authors such as Hero and Pappus, was copied and studied in both Byzantine culture and in Islamic centers of learning. Translations of these works began already in the 12th century, with the work of translators in Spain and Sicily, working mostly from Arabic and Greek sources into Latin. Two of the most prolific were Gerard of Cremona and William of Moerbeke. The greatest of all translation efforts, however, took place in the 15th and 16th centuries in Italy, as attested by the numerous manuscripts dating from this period currently found in European libraries. Virtually all leading mathematicians of the era were obsessed with the need to restore the mathematical works of the ancients. Not only did humanists assist mathematicians with the retrieval of Greek manuscripts, but they also took an active role in translating these works into Latin, often commissioned by religious leaders such as Nicholas V and Cardinal Bessarion. Some of the leading figures in this effort include Regiomontanus, who made a copy of the Latin Archimedes and had a program for printing mathematical works; Commandino (1509–1575), who likewise produced an edition of Archimedes, as well as editions of works by Euclid, Hero, and Pappus; and Maurolico (1494–1575), who not only translated the work of ancient mathematicians but added much of his own work to these. Their translations ensured that the next generation of mathematicians would be in possession of techniques far in advance of what was generally available during the Middle Ages. It must be borne in mind that the mathematical output of the 15th and 16th centuries was not exclusively limited to the works of the ancient Greeks. 
Some mathematicians, such as Tartaglia and Luca Pacioli, welcomed and expanded on the medieval traditions of both Islamic scholars and people like Jordanus and Fibonacci. Giordano Bruno also critiqued the works of authorities such as Aristotle, whose logic he believed to be flawed, and developed a mathematical doctrine for computation in physics, with which he attempted to transform theories of nature. Physics The progress being made in math was complemented by advancements in physics, with people like Galileo attempting to bridge the gap between the two fields and question Aristotelian ideas. The revived investigation of physics opened up many opportunities in subfields like mechanics, optics, navigation, and cartography. Mechanical theories had originated with the Greeks, especially Aristotle and Archimedes. Mechanics and philosophy had been related disciplines in ancient Greece, and only in the Renaissance did the two subjects begin to split. Much of the work of developing new mechanical ideas and theories was carried out by Italians such as Rafael Bombelli, though the Fleming Simon Stevin also provided many ideas. Galileo also contributed to the advancement of this field with a treatise on mechanics in 1593, helping to develop ideas on relativity, freely falling bodies, and accelerated linear motion, though he lacked the means to properly communicate his findings at the time. In June 1609, Galileo's interests shifted to his telescopic investigations after having been close to revolutionizing the science of mechanics. Navigation was an important topic of the time, and many innovations were made that, with the introduction of better ships and applications of the compass, would later lead to geographical discoveries. The calculations involved in navigation proved to be difficult, with the technology of the time unable to accurately predict weather or determine one's geographic position. Determining one's longitude proved especially challenging, since one's local time needed to be calculated on the basis of an astronomical observation. One theory that was tested was to record the time of an eclipse and use Regiomontanus' Ephemerides to compare it with Nuremberg time or Zacuto's Almanach perpetuum to compare it with Salamanca time, though the margin of error in such calculations was unacceptably great (around 25.5 degrees). Until longitude could be accurately determined, navigators had to rely on dead reckoning, with its many uncertainties. Medicine With the Renaissance came an increase in experimental investigation, principally in the field of dissection and body examination, thus advancing our knowledge of human anatomy. The development of modern neurology began in the 16th century with Andreas Vesalius, who described the anatomy of the brain and other organs; he had little knowledge of the brain's function, thinking that it resided mainly in the ventricles. Understanding of medical sciences and diagnosis improved, but with little direct benefit to health care. Few effective drugs existed, beyond opium and quinine. William Harvey provided a refined and complete description of the circulatory system. The most useful tomes in medicine, used both by students and expert physicians, were materiae medicae and pharmacopoeiae. Geography and the New World In the history of geography, the key classical text was the Geographia of Claudius Ptolemy (2nd century). It was translated into Latin in the 15th century by Jacopo d'Angelo. 
Medicine With the Renaissance came an increase in experimental investigation, principally in the field of dissection and body examination, thus advancing our knowledge of human anatomy. The development of modern neurology began in the 16th century with Andreas Vesalius, who described the anatomy of the brain and other organs; he had little knowledge of the brain's function, thinking that it resided mainly in the ventricles. Understanding of medical sciences and diagnosis improved, but with little direct benefit to health care. Few effective drugs existed, beyond opium and quinine. William Harvey provided a refined and complete description of the circulatory system. The most useful tomes in medicine, used both by students and expert physicians, were materiae medicae and pharmacopoeiae. Geography and the New World In the history of geography, the key classical text was the Geographia of Claudius Ptolemy (2nd century). It was translated into Latin in the 15th century by Jacopo d'Angelo. It was widely read in manuscript and went through many print editions after it was first printed in 1475. Regiomontanus worked on preparing an edition for print prior to his death; his manuscripts were consulted by later mathematicians in Nuremberg. Ptolemy's Geographia became the basis for most maps made in Europe throughout the 15th century. Even as new knowledge began to replace the content of old maps, the rediscovery of Ptolemy's mapping system, including the use of coordinates and projection, helped to redefine the overall field of cartography as a scientific pursuit rather than an artistic one. The information provided by Ptolemy, as well as Pliny the Elder and other classical sources, was soon seen to be in contradiction to the lands explored in the Age of Discovery. The new discoveries revealed shortcomings in classical knowledge; they also opened European imagination to new possibilities. In particular, Christopher Columbus' voyage to the New World in 1492 helped set the tone for what would soon after become a wave of European expansion. Thomas More's Utopia was inspired partly by the discovery of the New World. Most maps developed prior to this period grossly underestimated the extent of the lands separating Europe from India on a westward route through the New World; however, through contributions of explorers such as Ferdinand Magellan, efforts were made to create more accurate maps during this period. See also Continuity thesis The Copernican Question Renaissance magic Renaissance technology Notes References Dear, Peter. Revolutionizing the Sciences: European Knowledge and Its Ambitions, 1500–1700. Princeton: Princeton University Press, 2001. Debus, Allen G. Man and Nature in the Renaissance. Cambridge: Cambridge University Press, 1978. Grafton, Anthony, et al. New Worlds, Ancient Texts: The Power of Tradition and the Shock of Discovery. Cambridge: Belknap Press of Harvard University Press, 1992. Hall, Marie Boas. The Scientific Renaissance, 1450–1630. New York: Dover Publications, 1962, 1994. External links Renaissance science and technology at Britannica.com Renaissance Science
Science in the Renaissance
[ "Technology" ]
3,143
[ "History of science", "History of science and technology" ]
4,175,889
https://en.wikipedia.org/wiki/University%20of%20Iowa%20Driving%20Safety%20Research%20Institute
The Driving Safety Research Institute (DSRI) at the University of Iowa College of Engineering houses the National Advanced Driving Simulator (NADS-1) and a fleet of instrumented on-road research vehicles. The NADS-1 is one of the largest ground vehicle driving simulators in the world. The National Highway Traffic Safety Administration (NHTSA) owns the NADS-1 simulator, while the University of Iowa takes responsibility for operation and maintenance. In 2024, the institute received funding from NHTSA for a project assessing driver monitoring systems' effectiveness at determining the level of impairment. Mission Make roads safer by researching the connection between humans and vehicles. Driving research The Driving Safety Research Institute conducts research with both simulators and on-road vehicles. Funded by government, military, and industry partners, the institute's areas of expertise include: Human factors Distracted driving Drowsy driving Drugged driving Connected and automated vehicles Mobility At-risk populations (older and novice drivers) Simulation science Crash biomechanics Safety and crash data analysis Simulators NADS-1 simulator: One of the world’s most realistic driving simulators NADS-2 simulator: A fixed-base simulator with high-resolution graphics miniSim™: A low-cost PC-based portable simulator available for purchase On-road research vehicles The Driving Safety Research Institute's faculty and staff utilize a fleet of on-road, custom-instrumented vehicles to conduct driving research: Ford Transit shuttle bus Tesla Model S75D Lincoln MKZ Volvo XC90 Toyota Camry XLE Additionally, DSRI often receives vehicles as long-term loans from vehicle manufacturers and other partnering organizations for research. All vehicles are maintained in-house at the University of Iowa or in cooperation with partnering manufacturers/organizations. History The NADS-1 was developed from 1996 through 2001 by the National Highway Traffic Safety Administration (NHTSA) to conduct human factors research on driver behavior. 1992 NHTSA selects the University of Iowa to house the National Advanced Driving Simulator (NADS-1 simulator), which would become the most sophisticated research driving simulator in the world at the time. 1994 The first automated driving simulations in the world are done at the University of Iowa on the Iowa Driving Simulator, predecessor to the NADS-1. Forward collision warning and adaptive cruise control (ACC) systems are designed, developed, and tested for NHTSA. 1997 The University of Iowa (UI) begins building virtual replicas of military proving grounds, such as the Aberdeen Proving Ground in Maryland, where the government tests military vehicles. 1998 Ground is broken for the new NADS facility. 1999 UI begins its first drugged driving study: “Effects of Fexofenadine, Diphenhydramine, and Alcohol on Driving Performance.” 2001 (fall) NADS-1 is operational. The facility is operated on a self-sustaining basis by the UI. NHTSA owns the simulator while the UI takes responsibility for operation and maintenance. UI owns the building, land, and the software that runs the NADS-1. 2001 The first formal study done on the NADS-1 is a study on tire failure and loss of control. 2002 A wireless phone study is conducted—the first at NADS about driver distraction. 2003 NADS begins work with John Deere, and a tractor cab is created for use in the NADS-1 simulator. 2005 NADS builds a portable simulator for outreach to high school students, which eventually leads to the creation of the miniSim program in 2009.
2006 The NADS-2 simulator—the second simulator at the facility—is ready for business. Based partially on research done at the UI, NHTSA mandates that all new vehicles must have electronic stability control. 2011 The first cannabis study on driving performance is conducted at NADS. The first on-road vehicle is purchased for DSRI research, a Toyota Camry. 2013 The UI is awarded a grant that would grow to $11.2 million over eight years from the U.S. DOT to fund SAFER-SIM: Safety Research Using Simulation. 2015 MyCarDoesWhat.org campaign launched to educate consumers about advanced driver assistance systems. 2016–2018 Partially-automated vehicles are added to the NADS fleet: a Volvo XC90, Tesla Model S75D, and Lincoln MKZ. 2019 U.S. DOT awards NADS a $7 million grant for the Automated Driving Systems for Rural America project. 2023 The research institute officially changes its name from the University of Iowa National Advanced Driving Simulator (NADS) to the Driving Safety Research Institute (DSRI) to better reflect their expertise in both simulation and in on-road research. The National Advanced Driving Simulator name is retained for the suite of simulators. References External links Driving simulators University of Iowa
University of Iowa Driving Safety Research Institute
[ "Technology" ]
959
[ "Driving simulators", "Real-time simulation" ]
4,176,459
https://en.wikipedia.org/wiki/Modo%20%28wireless%20device%29
Modo (stylized in all lowercase) was a wireless device developed by Scout Electromedia. Utilizing pager networks, the device was designed to provide city-specific "lifestyle" content, such as reviews of restaurants or bars and movie listings, in addition to original content curated by Scout's developers. Officially announced on August 28, 2000, and targeting a "young hipster" urban demographic with a reported $20 million spent on marketing, the Modo was released in September 2000 in two US cities, New York and San Francisco, with plans to roll out in other major urban areas such as Los Angeles and Chicago. After the company failed to receive additional funding and fired one of its chief executives, Geoff Pitfield, Scout Electromedia was liquidated, and the Modo, along with its wireless service, was discontinued in October 2000, just one month after its release and one day before its Los Angeles launch. It has been noted as one of the most notorious failed ventures of the dot-com bubble. History After the company was funded, one of its venture backers, Flatiron, backed a similar company, Vindigo, which aimed to bring a broader range of information to the PalmPilot platform. Because of Scout's focus on delivering mobile information to a young, design-conscious audience that had no interest in using a traditional PDA, Vindigo was considered by the backers to be a complementary product offering. Scout Electromedia received an estimated $40 million to develop and market the Modo. The industrial design was done by IDEO (which took an investment in the startup), while the device software was based on Pixo's operating system (the OS that later powered the Apple iPod). All of the electrical engineering, wireless, and system development was done in-house by the company. The Modo was advertised heavily in its target markets of Los Angeles, New York, Chicago, and San Francisco, and was sold online via its website and in retailers such as DKNY and Virgin Megastores. The product was launched in the late summer of 2000 and made it to two of the four planned cities, but only shipped for one day in San Francisco. While the stock sold out, reviews of the device were mixed: reviewers praised its design and concept, but criticized its one-way service and limited city availability, and drew unfavorable comparisons to competitors Vindigo and Palm. On October 20, 2000, Geoff Pitfield, Scout Electromedia's CEO, was fired, and on October 24, 2000, the company was shut down, stopping all development and service on the Modo. Over time, it came out that the company's venture backers had left the company to die as many of them experienced their own financial problems due to the dot-com bubble (notably Idealab, Flatiron, and Chase Capital). See also Microsoft Kin – another short-lived device marketed towards a young adult demographic References External links Idealab's Bill Gross discussing Scout in BusinessWeek Dennis Crowley's Modo Tribute Page Modo preview video by Scout Electromedia (2000) on Vimeo IDEO Case Study for Scout Electromedia Pagers
Modo (wireless device)
[ "Technology" ]
641
[ "Pagers", "Radio paging" ]
4,176,711
https://en.wikipedia.org/wiki/Boron%20arsenide
Boron arsenide (or arsenic boride) is a chemical compound of boron and arsenic, usually with the chemical formula BAs. Other boron arsenide compounds are known, such as the subarsenide B12As2. Chemical synthesis of cubic BAs is very challenging, and its single-crystal forms usually have defects. Properties BAs is a cubic (sphalerite) semiconductor in the III-V family with a lattice constant of 0.4777 nm and an indirect band gap of 1.82 eV. Cubic BAs is reported to decompose to the subarsenide B12As2 at temperatures above 920 °C. Boron arsenide has a melting point of 2076 °C. The thermal conductivity of BAs is exceptionally high, recently measured in single-crystal BAs to be around 1300 W/(m·K) at room temperature, making it among the highest of all metals and semiconductors. The basic physical properties of cubic BAs have been experimentally measured: band gap (1.82 eV), optical refractive index (3.29 at wavelength 657 nm), elastic modulus (326 GPa), shear modulus, Poisson's ratio, thermal expansion coefficient (3.85×10⁻⁶/K), and heat capacity. It can be alloyed with gallium arsenide to produce ternary and with indium gallium arsenide to form quaternary semiconductors. BAs has high electron and hole mobility, >1000 cm²/(V·s), unlike silicon, which has high electron mobility but low hole mobility. In 2023, a study in the journal Nature reported that, when subjected to high pressure, BAs decreases in thermal conductivity, contrary to the typical increase seen in most materials. Boron subarsenide Boron arsenide also occurs as subarsenides, including the icosahedral boride B12As2. It belongs to the R-3m space group, with a rhombohedral structure based on clusters of boron atoms and two-atom As–As chains. It is a wide-bandgap semiconductor (3.47 eV) with the extraordinary ability to "self-heal" radiation damage. This form can be grown on substrates such as silicon carbide. A use in solar cell fabrication has also been proposed, but it is not currently employed for this purpose. Applications Boron arsenide is most attractive for use in electronics thermal management. Experimental integration with gallium nitride transistors to form GaN-BAs heterostructures has been demonstrated and shows better performance than the best GaN HEMT devices on silicon carbide or diamond substrates. The manufacture of BAs composites has been developed for highly conducting and flexible thermal interfaces. First-principles calculations have predicted that the thermal conductivity of cubic BAs is remarkably high, over 2,200 W/(m·K) at room temperature, which is comparable to that of diamond and graphite. Subsequent measurements yielded a value of only 190 W/(m·K) due to the high density of defects. More recent first-principles calculations incorporating four-phonon scattering predict a thermal conductivity of 1400 W/(m·K). Later, defect-free boron arsenide crystals were experimentally realized and measured to have an ultrahigh thermal conductivity of 1300 W/(m·K), consistent with theoretical predictions. Crystals with a small density of defects have shown thermal conductivity of 900–1000 W/(m·K). Cubic boron arsenide has been found to conduct heat better than silicon, and reportedly conducts both electrons and their positively charged counterparts, electron holes, better than silicon.
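To put the quoted conductivities in perspective, the sketch below applies Fourier's law of one-dimensional heat conduction, q = k·A·ΔT/L, to a thin heat-spreading slab. The slab geometry, the temperature drop, and the silicon figure are illustrative assumptions chosen only to compare materials, not measured values.

```python
# Steady-state heat conducted through a slab: q = k * A * dT / L (Fourier's law).
conductivity = {                 # W/(m·K), room-temperature values
    "BAs (defect-free)": 1300.0, # single-crystal value quoted above
    "copper": 401.0,
    "silicon": 150.0,            # typical bulk value, assumed for comparison
}

area = 1e-4       # m^2: a 1 cm x 1 cm slab face (assumed)
thickness = 1e-3  # m: 1 mm thick (assumed)
delta_T = 10.0    # K: temperature drop across the slab (assumed)

for name, k in conductivity.items():
    q = k * area * delta_T / thickness  # watts conducted through the slab
    print(f"{name}: {q:.0f} W")
```

With the same geometry and temperature drop, the defect-free BAs slab conducts roughly three times the heat of the copper one, which is what makes the material attractive as a heat spreader.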
References External links 2020 paper by Malica and Dal Corso - Temperature dependent elastic constants and thermodynamic properties of BAs: An ab initio investigation Matweb data High ambipolar mobility in cubic boron arsenide, Science Boron compounds Arsenides III-V compounds III-V semiconductors Zincblende crystal structure
Boron arsenide
[ "Chemistry" ]
824
[ "Semiconductor materials", "III-V compounds", "Inorganic compounds", "III-V semiconductors" ]
4,176,728
https://en.wikipedia.org/wiki/Dog%20behaviourist
A dog behaviourist is a person who works to modify or change behaviour in dogs. They can be experienced dog handlers, who have developed their expertise over many years of hands-on work, or may have formal training up to degree level. Some have backgrounds in veterinary science, animal science, zoology, sociology, biology, or animal behaviour, and have applied their experience and knowledge to the interaction between humans and dogs. Professional certification may be offered through either industry associations or local educational institutions. There is, however, no compulsion for behaviourists to be a member of a professional body nor to take formal training. Overview While any person who works to modify a dog's behaviour might be considered a dog behaviourist in the broadest sense of the term, "animal behaviourist" is a title generally given to individuals who have obtained relevant professional qualifications. The professional fields and courses of study for dog behaviourists include, but are not limited to, animal science, zoology, sociology, biology, psychology, ethology, and veterinary science. People with these credentials usually refer to themselves as Clinical Animal Behaviourists, Applied Animal Behaviourists (PhD), or Veterinary Behaviourists (veterinary degree). If they limit their practice to a particular species, they might refer to themselves as a dog/cat/bird behaviourist. While there are many dog trainers who work with behavioural issues, there are relatively few qualified dog behaviourists. For the majority of the general public, the cost of the services of a dog behaviourist usually reflects both the supply/demand inequity and the level of training they have obtained. Some behaviourists can be identified in the U.S. by the post-nominals "CAAB", indicating that they are a Certified Applied Animal Behaviourist (which requires a Ph.D. or veterinary degree), or "DACVB", indicating that they are a diplomate of the American College of Veterinary Behaviorists (which requires a veterinary degree). In the UK, veterinary and non-veterinary behaviourists certified by an ABTC Practitioner organisation may use the postnominals “CCAB” or “ABTC-CAB”, while in the wider European context, Veterinary Specialists in Behavioural Medicine use “DipECAWBM(BM)”. Discipline Behaviourism is the theory or doctrine that human or animal psychology can be accurately studied only through the examination and analysis of objectively observable and quantifiable behavioural events, in contrast with subjective mental states. A dog practitioner using a behavioural or psychobiological approach, regardless of title, typically works one-on-one with a dog and its owner. This may be carried out in the dog's home, the practitioner's office, the place where the dog is showing behavioural problems, or a variety of these locations for different sessions during the treatment time. By observing the dog in its environment and skillfully interviewing the owner, the behaviourist creates a working hypothesis on what is motivating, and thus sustaining, the behaviour. Office-bound behaviourists may be disadvantaged when it comes to assessing behavioural modification, as the dog may act very differently in different locations, and interviewing owners, no matter how thorough, may not provide enough detail.
After establishing a motivating cause, the practitioner will develop a step-wise, goal-based plan to alter the behaviour in stages, continue their work with the pet owner to guide and make changes in the plan as the goals are met (or not), and conclude with a final write-up of the case and its outcome. The methods and tools of the behaviourist will depend on several factors, including the dog's temperament, the behaviourist's personal philosophy on training, the behaviourist's experience, and the behavioural problems being addressed. At one end of the spectrum, some behaviourists attempt to train dogs while refraining from the use of aversive or coercive methods (and the tools associated with them, such as choke, prong/pinch, or electric shock collars, kicking, hitting, poking, staring, shaking, or rolling), choosing instead to rely on reward-based methods. Dog behaviourists and dog trainers with a knowledge of how to approach training in a behavioural way usually do not offer guaranteed results. Other behaviourists believe that the use of verbal corrections, head collars, correction collars, or electric collars is necessary or useful when treating particular dogs or particular behavioural problems. The general philosophy in use is to avoid methods that could cause confusion, fear, pain, or anything other than mild stressors. Dog trainers who use these techniques may or may not be utilising a behavioural approach and may or may not have an understanding of the science behind behaviour modification. Dog behaviourists who lack professional credentials are generally dog trainers who have developed their expertise in working with problem dogs over many years of hands-on experience. They may or may not have studied behaviour formally in college or any dog training school. The differences between a dog behavioural problem and a dog training problem may be difficult for some dog owners to understand, due to the lack of a formal definition. At the same time, the dog training techniques utilised by dog trainers and behaviourists may often compete when considering which practitioner is better qualified to meet the dog's or owner's needs. The disciplines of dog trainers who follow a behavioural approach, informed by the study of the science of behaviour modification, can sometimes be juxtaposed against dog trainers who present themselves as experts at solving behavioural problems. The discussion and assessment, for some, may be more about appropriate methods and tools than about use of the term behaviourist. Professional associations To help dog owners and trainers understand and utilise this behavioural training, or to become certified in the practice, professional associations dedicated to the development of behavioural dog training offer tools to further their development. Different associations have different standards, goals, and requirements for membership. Board-certified veterinary behaviourists are required to pass a credentialing application and exam to be recognised as board-certified in the view of the American Veterinary Medical Association (AVMA) or similar bodies in other parts of the world, including the Animal Behaviour and Training Council in the UK, the European Board of Veterinary Specialists (EBVS), and the Australian and New Zealand College of Veterinary Scientists (ANZCVS). Behaviourists may work and study towards formal accreditation with one of the many colleges providing training.
Some associations might require accreditation to join, while others may require a declaration of intent for continuing personal development. Accreditation may also be offered through local colleges and educational institutions. See also Ivan Pavlov B.F. Skinner John Broadus Watson References External links Dog-related professions and professionals Ethology
Dog behaviourist
[ "Biology" ]
1,375
[ "Behavioural sciences", "Ethology", "Behavior" ]
4,177,188
https://en.wikipedia.org/wiki/Thermal%20management%20%28electronics%29
All electronic devices and circuitry generate excess heat and thus require thermal management to improve reliability and prevent premature failure. The amount of heat output is equal to the power input, if there are no other energy interactions. There are several techniques for cooling, including various styles of heat sinks, thermoelectric coolers, forced air systems and fans, heat pipes, and others. In cases of extremely low environmental temperatures, it may actually be necessary to heat the electronic components to achieve satisfactory operation. Overview Thermal resistance of devices This is usually quoted as the thermal resistance from junction to case of the semiconductor device. The units are °C/W. For example, a heatsink rated at 10 °C/W will get 10 °C hotter than the surrounding air when it dissipates 1 watt of heat. Thus, a heatsink with a low °C/W value is more efficient than a heatsink with a high °C/W value. Given two semiconductor devices in the same package, a lower junction-to-case resistance (RθJ-C) indicates a more efficient device. However, when comparing two devices with different die-free package thermal resistances (e.g., DirectFET MT vs. wirebond 5x6mm PQFN), their junction-to-ambient or junction-to-case resistance values may not correlate directly to their comparative efficiencies. Different semiconductor packages may have different die orientations, different copper (or other metal) mass surrounding the die, different die attach mechanics, and different molding thickness, all of which could yield significantly different junction-to-case or junction-to-ambient resistance values, and could thus obscure overall efficiency numbers. Thermal time constants A heatsink's thermal mass can be considered as a capacitor (storing heat instead of charge) and the thermal resistance as an electrical resistance (giving a measure of how fast stored heat can be dissipated). Together, these two components form a thermal RC circuit with an associated time constant given by the product of R and C. This quantity can be used to calculate the dynamic heat dissipation capability of a device, in an analogous way to the electrical case. Thermal interface material A thermal interface material or mastic (aka TIM) is used to fill the gaps between thermal transfer surfaces, such as between microprocessors and heatsinks, in order to increase thermal transfer efficiency. It has a higher thermal conductivity value in the Z-direction than in the XY-plane. Applications Personal computers Due to recent technological developments and public interest, the retail heat sink market has reached an all-time high. In the early 2000s, CPUs were produced that emitted more and more heat, escalating requirements for quality cooling systems. Overclocking has always meant greater cooling needs, and the inherently hotter chips meant more concerns for the enthusiast. Efficient heat sinks are vital to overclocked computer systems because the higher a microprocessor's cooling rate, the faster the computer can operate without instability; generally, faster operation leads to higher performance. Many companies now compete to offer the best heat sink for PC overclocking enthusiasts. Prominent aftermarket heat sink manufacturers include Aero Cool, Foxconn, Thermalright, Thermaltake, Swiftech, and Zalman.
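The series thermal-resistance arithmetic described in the overview above lends itself to a short worked example. The sketch below is illustrative only: the power level, resistance values, and thermal mass are assumptions, not data for any real part.

```python
# Series thermal-resistance model: T_junction = T_ambient + P * (R_jc + R_cs + R_sa).
P = 65.0          # W dissipated by the device (assumed)
T_ambient = 25.0  # °C
R_jc = 0.5        # °C/W, junction to case (assumed datasheet value)
R_cs = 0.2        # °C/W, case to sink, through the thermal interface material (assumed)
R_sa = 0.6        # °C/W, sink to ambient for the heat sink (assumed)

T_junction = T_ambient + P * (R_jc + R_cs + R_sa)
print(f"Junction temperature: {T_junction:.1f} °C")  # 109.5 °C with these numbers

# Thermal RC analogy: with a heat capacity C (in J/°C), the sink approaches its
# steady-state temperature with time constant tau = R_sa * C, as in an RC circuit.
C = 500.0         # J/°C thermal mass of the heat sink (assumed)
tau = R_sa * C    # seconds; 300 s for these numbers
```

The point of the model is that the junction-to-case, interface, and sink-to-air resistances simply add in series, exactly like electrical resistors.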
Soldering Temporary heat sinks were sometimes used while soldering circuit boards, preventing excessive heat from damaging sensitive nearby electronics. In the simplest case, this means partially gripping a component using a heavy metal crocodile clip or similar clamp. Modern semiconductor devices, which are designed to be assembled by reflow soldering, can usually tolerate soldering temperatures without damage. On the other hand, electrical components such as magnetic reed switches can malfunction if exposed to higher-powered soldering irons, so this practice is still very much in use. Batteries In batteries used for electric vehicles, nominal performance is usually specified for working temperatures somewhere in the +20 °C to +30 °C range; however, the actual performance can deviate substantially from this if the battery is operated at higher or, in particular, lower temperatures, so some electric cars have heating and cooling for their batteries. Methodologies Heat sinks Heat sinks are widely used in electronics and have become essential to modern microelectronics. In common use, a heat sink is a metal object brought into contact with an electronic component's hot surface, though in most cases a thin thermal interface material mediates between the two surfaces. Microprocessors and power-handling semiconductors are examples of electronics that need a heat sink to reduce their temperature through increased thermal mass and heat dissipation (primarily by conduction and convection and to a lesser extent by radiation); heat sinks are almost indispensable in modern integrated circuits such as microprocessors, DSPs, and GPUs. A heat sink usually consists of a metal structure with one or more flat surfaces to ensure good thermal contact with the components to be cooled, and an array of comb- or fin-like protrusions to increase the surface contact with the air, and thus the rate of heat dissipation. A heat sink is sometimes used in conjunction with a fan to increase the rate of airflow over the heat sink. This maintains a larger temperature gradient by replacing warmed air faster than convection would. This is known as a forced air system. Cold plate Placing a conductive thick metal plate, referred to as a cold plate, as a heat transfer interface between a heat source and a cold flowing fluid (or any other heat sink) may improve the cooling performance. In such an arrangement, the heat source is cooled under the thick plate instead of being cooled in direct contact with the cooling fluid. It has been shown that the thick plate can significantly improve the heat transfer between the heat source and the cooling fluid by conducting the heat current in an optimal manner. The two most attractive advantages of this method are that it requires no additional pumping power and no extra heat transfer surface area, which is quite different from fins (extended surfaces). Principle Heat sinks function by efficiently transferring thermal energy ("heat") from an object at high temperature to a second object at a lower temperature with a much greater heat capacity. This rapid transfer of thermal energy quickly brings the first object into thermal equilibrium with the second, lowering the temperature of the first object, fulfilling the heat sink's role as a cooling device. Efficient function of a heat sink relies on rapid transfer of thermal energy from the first object to the heat sink, and from the heat sink to the second object. The most common design of a heat sink is a metal device with many fins. The high thermal conductivity of the metal combined with its large surface area results in the rapid transfer of thermal energy to the surrounding, cooler air.
This cools the heat sink and whatever it is in direct thermal contact with. Use of fluids (for example coolants in refrigeration) and thermal interface material (in cooling electronic devices) ensures good transfer of thermal energy to the heat sink. Similarly, a fan may improve the transfer of thermal energy from the heat sink to the air. Construction and materials A heat sink usually consists of a base with one or more flat surfaces and an array of comb- or fin-like protrusions to increase the heat sink's surface area contacting the air, and thus increasing the heat dissipation rate. While a heat sink is a static object, a fan often aids a heat sink by providing increased airflow over the heat sink, thus maintaining a larger temperature gradient by replacing the warmed air more quickly than passive convection achieves alone; this is known as a forced-air system. Ideally, heat sinks are made from a good thermal conductor such as silver, gold, copper, or aluminum alloy. Copper and aluminum are among the most frequently used materials for this purpose within electronic devices. Copper (401 W/(m·K) at 300 K) is significantly more expensive than aluminum (237 W/(m·K) at 300 K) but is also roughly twice as efficient as a thermal conductor. Aluminum has the significant advantage that it can be easily formed by extrusion, thus making complex cross-sections possible. Aluminum is also much lighter than copper, placing less mechanical stress on delicate electronic components. Some heat sinks made from aluminum have a copper core as a trade-off. The heat sink's contact surface (the base) must be flat and smooth to ensure the best thermal contact with the object needing cooling. Frequently a thermally conductive grease is used to ensure optimal thermal contact; such compounds often contain colloidal silver. Further, a clamping mechanism, screws, or thermal adhesive holds the heat sink tightly onto the component, but specifically without pressure that would crush the component. Performance Heat sink performance (including free convection, forced convection, liquid cooling, and any combination thereof) is a function of material, geometry, and overall surface heat transfer coefficient. Generally, forced convection heat sink thermal performance is improved by increasing the thermal conductivity of the heat sink materials, increasing the surface area (usually by adding extended surfaces, such as fins or foam metal), and by increasing the overall area heat transfer coefficient (usually by increasing fluid velocity, for example by adding fans, pumps, etc.). Online heat sink calculators from companies such as Novel Concepts, Inc. and at www.heatsinkcalculator.com can accurately estimate forced and natural convection heat sink performance. For more complex heat sink geometries, or heat sinks with multiple materials or multiple fluids, computational fluid dynamics (CFD) analysis is recommended.
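As a rough complement to the calculators and CFD tools just mentioned, a first-order sink-to-air thermal resistance can be estimated from Newton's law of cooling as R ≈ 1/(h·A), ignoring fin efficiency and spreading resistance. The fin geometry, heat load, and heat-transfer coefficients in the sketch below are assumptions typical of the two regimes, not design data.

```python
# First-order heat sink estimate: R_sink ≈ 1 / (h * A_total), from Newton's law
# of cooling. Neglects fin efficiency and spreading resistance, so it is optimistic.
fin_count = 20
fin_area = 0.05 * 0.05 * 2   # m^2 per fin: both sides of a 5 cm x 5 cm fin (assumed)
base_area = 0.05 * 0.05      # m^2 of exposed base (assumed)
A_total = fin_count * fin_area + base_area   # ~0.10 m^2

load = 30.0                  # W dissipated into the sink (assumed)
for regime, h in [("natural convection", 10.0), ("forced convection", 50.0)]:
    # h in W/(m^2·K); the two values are order-of-magnitude assumptions.
    R_sink = 1.0 / (h * A_total)   # °C/W
    rise = load * R_sink           # temperature rise above ambient
    print(f"{regime}: R = {R_sink:.2f} °C/W, rise at {load:.0f} W = {rise:.1f} °C")
```

The same relation explains why adding fins or a fan helps: both increase the h·A product, driving the sink-to-air resistance down.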
Convective air cooling This term describes device cooling by the convection currents of the warm air being allowed to escape the confines of the component, to be replaced by cooler air. Since warm air normally rises, this method usually requires venting at the top or sides of the casing to be effective. Forced air cooling If there is more air being forced into a system than being pumped out (due to an imbalance in the number of fans), this is referred to as a 'positive' airflow, as the pressure inside the unit is higher than outside. A balanced or neutral airflow is the most efficient, although a slightly positive airflow can result in less dust build-up if filtered properly. Heat pipes A heat pipe is a heat transfer device that uses evaporation and condensation of a two-phase "working fluid" or coolant to transport large quantities of heat with a very small difference in temperature between the hot and cold interfaces. A typical heat pipe consists of a sealed hollow tube made of a thermoconductive metal such as copper or aluminium, and a wick to return the working fluid from the evaporator to the condenser. The pipe contains both saturated liquid and vapor of a working fluid (such as water, methanol, or ammonia), all other gases being excluded. The most common heat pipe for electronics thermal management has a copper envelope and wick, with water as the working fluid. Copper/methanol is used if the heat pipe needs to operate below the freezing point of water, and aluminum/ammonia heat pipes are used for electronics cooling in space. The advantage of heat pipes is their great efficiency in transferring heat. The effective thermal conductivity of heat pipes can be as high as 100,000 W/(m·K), in contrast to copper, which has a thermal conductivity of around 400 W/(m·K). Peltier cooling plates Peltier cooling plates take advantage of the Peltier effect to create a heat flux at the junction of two different electrical conductors by applying an electric current. This effect is commonly used for cooling electronic components and small instruments. In practice, many such junctions may be arranged in series to increase the effect to the amount of heating or cooling required. There are no moving parts, so a Peltier plate is maintenance-free. It has a relatively low efficiency, so thermoelectric cooling is generally used for electronic devices, such as infra-red sensors, that need to operate at temperatures below ambient. For cooling these devices, the solid-state nature of the Peltier plates outweighs their poor efficiency. Thermoelectric junctions are typically around 10% as efficient as the ideal Carnot cycle refrigerator, compared with 40% achieved by conventional compression cycle systems. Synthetic jet air cooling A synthetic jet is produced by a continual flow of vortices that are formed by alternating brief ejection and suction of air across an opening such that the net mass flux is zero. A unique feature of these jets is that they are formed entirely from the working fluid of the flow system in which they are deployed, and they can thus impart net momentum to the flow of a system without net mass injection into the system. Synthetic jet air movers have no moving parts and are thus maintenance-free. Because of their high heat transfer coefficients and high reliability but lower overall flow rates, synthetic jet air movers are usually used at the chip level rather than at the system level for cooling. However, depending on the size and complexity of the system, they can at times be used for both. Electrostatic fluid acceleration An electrostatic fluid accelerator (EFA) is a device which pumps a fluid such as air without any moving parts. Instead of using rotating blades, as in a conventional fan, an EFA uses an electric field to propel electrically charged air molecules. Because air molecules are normally neutrally charged, the EFA has to create some charged molecules, or ions, first.
Thus there are three basic steps in the fluid acceleration process: ionize air molecules, use those ions to push many more neutral molecules in a desired direction, and then recapture and neutralize the ions to eliminate any net charge. The basic principle has been understood for some time, but only in recent years have there been developments in the design and manufacture of EFA devices that may allow them to find practical and economical applications, such as in micro-cooling of electronic components. Recent developments More recently, high thermal conductivity materials such as synthetic diamond and boron arsenide cooling sinks are being researched to provide better cooling. Boron arsenide has been reported to have high thermal conductivity and high thermal boundary conductance with gallium nitride transistors, and thus better performance than diamond and silicon carbide cooling technologies. For example, funded by the U.S. Department of Defense, research has been underway using high-power-density gallium nitride transistors with synthetic diamonds as thermal conductors. Also, some heat sinks are constructed of multiple materials with desirable characteristics, such as phase change materials, which can store a great deal of energy due to their heat of fusion. Thermal simulation of electronics Thermal simulations give engineers a visual representation of the temperature and airflow inside the equipment. Thermal simulations enable engineers to design the cooling system; to optimise a design to reduce power consumption, weight, and cost; and to verify the thermal design to ensure there are no issues when the equipment is built. Most thermal simulation software uses computational fluid dynamics (CFD) techniques to predict the temperature and airflow of an electronics system. Design Thermal simulation is often required to determine how to effectively cool components within design constraints. Simulation enables the design and verification of the thermal design of the equipment at a very early stage and throughout the design of the electronic and mechanical parts. Designing with thermal properties in mind from the start reduces the risk of last-minute design changes to fix thermal issues. Using thermal simulation as part of the design process enables the creation of an optimal and innovative product design that performs to specification and meets customers' reliability requirements. Optimise It is easy to design a cooling system for almost any equipment if there is unlimited space, power, and budget. However, the majority of equipment will have a rigid specification that leaves a limited margin for error. There is constant pressure to reduce power requirements, system weight, and parts cost, without compromising performance or reliability. Thermal simulation allows experimentation with optimisation, such as modifying heatsink geometry or reducing fan speeds in a virtual environment, which is faster, cheaper, and safer than physical experiment and measurement. Verify Traditionally, the first time the thermal design of the equipment is verified is after a prototype has been built. The device is powered up, perhaps inside an environmental chamber, and temperatures of the critical parts of the system are measured using sensors such as thermocouples. If any problems are discovered, the project is delayed while a solution is sought. A change to the design of a PCB or enclosure part may be required to fix the issue, which will take time and cost a significant amount of money.
If thermal simulation is used as part of the design process of the equipment, thermal design issues will be identified before a prototype is built. Fixing an issue at the design stage is both quicker and cheaper than modifying the design after a prototype is created. Software A wide range of software tools is designed for thermal simulation of electronics, including 6SigmaET, Ansys' IcePak, and Mentor Graphics' FloTHERM. Telecommunications environments Thermal management measures must be taken to accommodate high heat release equipment in telecommunications rooms. Generic supplemental/spot cooling techniques, as well as turnkey cooling solutions developed by equipment manufacturers, are viable solutions. Such solutions could allow very high heat release equipment to be housed in a central office that has a heat density at or near the cooling capacity available from the central air handler. According to Telcordia GR-3028, Thermal Management in Telecommunications Central Offices, the most common way of cooling modern telecommunications equipment internally is by utilizing multiple high-speed fans to create forced convection cooling. Although direct and indirect liquid cooling may be introduced in the future, the current design of new electronic equipment is geared towards maintaining air as the cooling medium. A well-developed "holistic" approach is required to understand current and future thermal management problems. Space cooling on one hand, and equipment cooling on the other, cannot be viewed as two isolated parts of the overall thermal challenge. The main purpose of an equipment facility's air-distribution system is to distribute conditioned air in such a way that the electronic equipment is cooled effectively. The overall cooling efficiency depends on how the air distribution system moves air through the equipment room, how the equipment moves air through the equipment frames, and how these airflows interact with one another. High heat-dissipation levels rely heavily on a seamless integration of equipment-cooling and room-cooling designs. The existing environmental solutions in telecommunications facilities have inherent limitations. For example, most mature central offices have limited space available for the large air duct installations that are required for cooling high heat density equipment rooms. Furthermore, steep temperature gradients develop quickly should a cooling outage occur; this has been well documented through computer modeling and direct measurements and observations. Although environmental backup systems may be in place, there are situations when they will not help. In a recent case, telecommunications equipment in a major central office was overheated, and critical services were interrupted by a complete cooling shutdown initiated by a false smoke alarm. A major obstacle to effective thermal management is the way heat-release data is currently reported. Suppliers generally specify the maximum (nameplate) heat release from the equipment. In reality, equipment configuration and traffic diversity will result in significantly lower heat release numbers. Equipment cooling classes As stated in GR-3028, most equipment environments maintain cool front (maintenance) aisles and hot rear (wiring) aisles, where cool supply air is delivered to the front aisles and hot air is removed from the rear aisles. This scheme provides multiple benefits, including effective equipment cooling and high thermal efficiency.
In the traditional room cooling class utilized by the majority of service providers, equipment cooling would benefit from air intake and exhaust locations that help move air from the front aisle to the rear aisle. The traditional front-bottom to top-rear pattern, however, has been replaced in some equipment with other airflow patterns that may not ensure adequate equipment cooling in high heat density areas. A classification of equipment (shelves and cabinets) into Equipment-Cooling (EC) classes serves the purpose of classifying the equipment with regard to the cooling air intake and hot air exhaust locations, i.e., the equipment airflow schemes or protocols. The EC-Class syntax provides a flexible and important “common language.” It is used for developing Heat-Release Targets (HRTs), which are important for network reliability, equipment and space planning, and infrastructure capacity planning. HRTs take into account physical limitations of the environment and environmental baseline criteria, including the supply airflow capacity, air diffusion into the equipment space, and air-distribution/equipment interactions. In addition to being used for developing the HRTs, the EC Classification can be used to show compliance on product sheets, provide internal design specifications, or specify requirements in purchase orders. The Room-Cooling classification (RC-Class) refers to the way the overall equipment space is air-conditioned (cooled). The main purpose of RC-Classes is to provide a logical classification and description of legacy and non-legacy room-cooling schemes or protocols in the central office environment. In addition to being used for developing HRTs, the RC-classification can be used in internal central office design specifications or in purchase orders. Supplemental-Cooling classes (SC-Class) provide a classification of supplemental cooling techniques. Service providers use supplemental/spot-cooling solutions to supplement the cooling capacity (e.g., to treat occurrences of “hot spots”) provided by the general room-cooling protocol as expressed by the RC-Class. Economic impact Energy consumption by telecommunications equipment currently accounts for a high percentage of the total energy consumed in central offices. Most of this energy is subsequently released as heat to the surrounding equipment space. Since most of the remaining central office energy use goes to cool the equipment room, the economic impact of making the electronic equipment energy-efficient would be considerable for companies that use and operate telecommunications equipment. It would reduce capital costs for support systems, and improve thermal conditions in the equipment room. See also Heat generation in integrated circuits Thermal resistance in electronics Thermal management of high-power LEDs Thermal design power Heat pipe Computer cooling Radiator Active cooling References Further reading External links Computer hardware cooling Electronic design
Thermal management (electronics)
[ "Engineering" ]
4,615
[ "Electronic design", "Electronic engineering", "Design" ]
4,177,253
https://en.wikipedia.org/wiki/James%20Bay%20Road
The James Bay Road, officially the Billy-Diamond Highway, is a remote wilderness highway winding its way through the Canadian Shield in northwestern Quebec and reaching into the James Bay region. It starts in Matagami as an extension of Route 109 and ends at Radisson. The road is fully paved, well maintained, and plowed during the winter. It was originally constructed to carry loads of 300 tons and has mostly gentle curves and hills with wide shoulders. The road is maintained by the Eeyou Istchee James Bay Regional Government (formerly by the municipality of Baie-James). Connecting to other routes such as the Trans-Taiga Road and the Route du Nord, the highway draws tourists interested in reaching the remote wilderness surrounding James Bay, part of Hudson Bay. On November 10, 2020, the James Bay Road was renamed in honour of Billy Diamond, former Grand Chief of the Grand Council of the Crees and chief Cree negotiator of the James Bay and Northern Quebec Agreement. There is currently a proposal supported by the region's Cree communities to build a gravel extension farther north to the twin communities on the Great Whale River: the Cree village of Whapmagoostui and the northern (primarily Inuit) village of Kuujjuarapik, in the Nunavik region. History The James Bay Road was conceived as an access road for the hydroelectric projects developed in the James Bay region in the 1970s and onwards. Construction began in 1971 and was completed in October 1974. Gravel branch routes have since been built from the highway, including four roads west to Cree villages on or near James Bay (the one to Chisasibi is paved for most of the way). The Trans-Taiga Road was built and reached Caniapiscau in 1979. The Route du Nord (North Road), which likewise is not a numbered route, connects from km 275 (measured from Matagami) southeast to near Chibougamau, Quebec. Description There are no services or development along the James Bay Road except for a single full-service station. The station, located at kilometre marker 381 (measured from Matagami), is operational 24 hours per day, 7 days per week, and is complete with a cafeteria and rudimentary lodging. Because of the remote nature of the road, there is a registration office along the side of the road for travellers to register. Located a few kilometers north of Matagami, it is staffed 24 hours per day, 7 days per week, and also serves as a tourist office for the communities along or off the James Bay Road. As further safety provisions, there are six roadside emergency telephones, which connect with staff in the registration office. See also List of Quebec provincial highways References External links James Bay Road unofficial website Article on proposed extension to Great Whale Roads in Nord-du-Québec James Bay Project
James Bay Road
[ "Engineering" ]
585
[ "James Bay Project", "Macro-engineering" ]
4,177,264
https://en.wikipedia.org/wiki/XMK%20%28operating%20system%29
The eXtreme Minimal Kernel (XMK) is a real-time operating system (RTOS) designed for minimal RAM/ROM use. It achieves this goal even though it is almost entirely written in the C programming language; as a consequence, it can easily be ported to any 8-, 16-, or 32-bit microcontroller. XMK comes as two independent packages: the XMK Scheduler, the core kernel containing everything necessary to run a multithreaded embedded application, and the Application Programming Layer (APL), which provides higher-level functions atop the XMK Scheduler API. The XMK distribution contains no standard libraries such as libc; these are expected to be provided by the development tools for the target system. External links XMK: eXtreme Minimal Kernel project home page (broken link) Real-time operating systems Embedded operating systems
XMK (operating system)
[ "Technology" ]
184
[ "Operating system stubs", "Computing stubs", "Real-time computing", "Real-time operating systems" ]
4,177,343
https://en.wikipedia.org/wiki/Gentiobiose
Gentiobiose is a disaccharide composed of two units of D-glucose joined with a β(1→6) linkage. It is a white crystalline solid that is soluble in water or hot methanol. Gentiobiose is incorporated into the chemical structure of crocin, the chemical compound that gives saffron its color. It is a product of the caramelization of glucose. During starch hydrolysis for glucose syrup production, gentiobiose, which tastes bitter, is formed as an undesirable by-product through the acid-catalyzed condensation of two D-glucose molecules. Elongating the bitter disaccharide by one β-D-glucose unit, giving the trimer gentiotriose, reduces its bitterness by a fifth, as determined by human volunteers. Gentiobiose is also produced via enzymatic hydrolysis of glucans, including pustulan and β-1,3-1,6-glucan. References Disaccharides
Gentiobiose
[ "Chemistry" ]
225
[ "Organic compounds", "Organic compound stubs", "Organic chemistry stubs" ]
4,177,356
https://en.wikipedia.org/wiki/Java.net
java.net was a Java technology-related community website. It also offered a web-based source code repository for Java projects. It was shut down in April 2017. History java.net was announced by Sun Microsystems during JavaOne 2003. In January 2010, Oracle announced that it would migrate the java.net portal to the Project Kenai codebase, encouraging users to move their Kenai projects to java.net. In June 2016, Oracle announced that "the Java.net and Kenai.com forges will be going dark on April 28, 2017." Javapedia The Javapedia project was launched in June 2003 during the JavaOne developer conference. It was part of java.net. The project aimed at creating an online encyclopedia covering all aspects of the Java platform and was openly inspired by Wikipedia. The prominent differences between Wikipedia and Javapedia included feature restrictions (for example, editing was open to registered users only), software used (TWiki), links (camelCase was used), and content licensing (Creative Commons 1.0 Attribution license). See also Comparison of source code hosting facilities Notes External links java.net Javapedia Free software websites Wiki communities Java platform Computing websites
Java.net
[ "Technology" ]
252
[ "Computing platforms", "Computing websites", "Free software websites", "Computing stubs", "Java platform" ]
4,177,506
https://en.wikipedia.org/wiki/Root%20mean%20square%20deviation%20of%20atomic%20positions
In bioinformatics, the root mean square deviation of atomic positions, or simply root mean square deviation (RMSD), is the measure of the average distance between the atoms (usually the backbone atoms) of superimposed molecules. In the study of globular protein conformations, one customarily measures the similarity in three-dimensional structure by the RMSD of the Cα atomic coordinates after optimal rigid body superposition. When a dynamical system fluctuates about some well-defined average position, the RMSD from the average over time can be referred to as the RMSF or root mean square fluctuation. The size of this fluctuation can be measured, for example using Mössbauer spectroscopy or nuclear magnetic resonance, and can provide important physical information. The Lindemann index is a method of placing the RMSF in the context of the parameters of the system. A widely used way to compare the structures of biomolecules or solid bodies is to translate and rotate one structure with respect to the other to minimize the RMSD. Coutsias et al. presented a simple derivation, based on quaternions, for the optimal solid body transformation (rotation-translation) that minimizes the RMSD between two sets of vectors. They proved that the quaternion method is equivalent to the well-known Kabsch algorithm. The solution given by Kabsch is an instance of the solution of the d-dimensional problem, introduced by Hurley and Cattell. The quaternion solution to compute the optimal rotation was published in the appendix of a paper of Petitjean. This quaternion solution and the calculation of the optimal isometry in the d-dimensional case were both extended to infinite sets and to the continuous case in Appendix A of another paper of Petitjean. The RMSD is given by the equation $\mathrm{RMSD} = \sqrt{\tfrac{1}{N}\sum_{i=1}^{N}\delta_i^2}$, where δi is the distance between atom i and either a reference structure or the mean position of the N equivalent atoms. This is often calculated for the backbone heavy atoms C, N, O, and Cα or sometimes just the Cα atoms. Normally a rigid superposition which minimizes the RMSD is performed, and this minimum is returned. Given two sets of $N$ points $\mathbf{v}$ and $\mathbf{w}$, the RMSD is defined as follows: $\mathrm{RMSD}(\mathbf{v},\mathbf{w}) = \sqrt{\tfrac{1}{N}\sum_{i=1}^{N}\lVert v_i - w_i\rVert^2}$. An RMSD value is expressed in length units. The most commonly used unit in structural biology is the Ångström (Å), which is equal to 10⁻¹⁰ m. Uses Typically RMSD is used as a quantitative measure of similarity between two or more protein structures. For example, the CASP protein structure prediction competition uses RMSD as one of its assessments of how well a submitted structure matches the known target structure. Thus, the lower the RMSD, the better the model is in comparison to the target structure. Also, some scientists who study protein folding by computer simulations use RMSD as a reaction coordinate to quantify where the protein is between the folded state and the unfolded state. The study of RMSD for small organic molecules (commonly called ligands when binding to macromolecules such as proteins) is common in the context of docking, as well as in other methods to study the configuration of ligands when bound to macromolecules. Note that, for the case of ligands (contrary to proteins, as described above), their structures are most commonly not superimposed prior to the calculation of the RMSD. RMSD is also one of several metrics that have been proposed for quantifying evolutionary similarity between proteins, as well as the quality of sequence alignments.
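The definition above translates directly into code. The following is a minimal NumPy sketch of both the plain RMSD and the minimum RMSD after optimal rigid-body superposition via the Kabsch algorithm discussed earlier; the function names are illustrative, and the inputs are assumed to be matched (N, 3) coordinate arrays in the same atom order.

```python
import numpy as np

def rmsd(V, W):
    """Plain RMSD between two (N, 3) coordinate arrays, without superposition."""
    return np.sqrt(np.mean(np.sum((V - W) ** 2, axis=1)))

def kabsch_rmsd(P, Q):
    """Minimum RMSD after optimal rigid-body superposition (Kabsch algorithm)."""
    # Remove the translational part by centering both sets on their centroids.
    P = P - P.mean(axis=0)
    Q = Q - Q.mean(axis=0)
    # Covariance matrix and its singular value decomposition.
    H = P.T @ Q
    U, S, Vt = np.linalg.svd(H)
    # Guard against a reflection: force a proper rotation with det(R) = +1.
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    # Rotate P onto Q and report the residual deviation.
    return rmsd((R @ P.T).T, Q)
```

For two identical structures the result is 0, and the value is expressed in the units of the input coordinates (Å for typical PDB data).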
See also Root mean square deviation Root mean square fluctuation Quaternion – used to optimise RMSD calculations Kabsch algorithm – an algorithm used to minimize the RMSD by first finding the best rotation GDT – a different structure comparison measure TM-score – a different structure comparison measure Longest continuous segment (LCS) — A different structure comparison measure Global distance calculation (GDC_sc, GDC_all) — Structure comparison measures that use full-model information (not just α-carbon) to assess similarity Local global alignment (LGA) — Protein structure alignment program and structure comparison measure References Further reading Shibuya T (2009). "Searching Protein 3-D Structures in Linear Time." Proc. 13th Annual International Conference on Research in Computational Molecular Biology (RECOMB 2009), LNCS 5541:1–15. External links Molecular Distance Measures—a tutorial on how to calculate RMSD RMSD—another tutorial on how to calculate RMSD with example code Secondary Structure Matching (SSM) — a tool for protein structure comparison. Uses RMSD. GDT, LCS and LGA — different structure comparison measures. Description and services. SuperPose — a protein superposition server. Uses RMSD. superpose — structural alignment based on secondary structure matching. By the CCP4 project. Uses RMSD. A Python script is available at https://github.com/charnley/rmsd An alternate Python script is available at https://github.com/jewettaij/superpose3d Statistical deviation and dispersion Protein methods Bioinformatics
Root mean square deviation of atomic positions
[ "Chemistry", "Engineering", "Biology" ]
1,066
[ "Biochemistry methods", "Biological engineering", "Protein methods", "Protein biochemistry", "Bioinformatics" ]
4,177,947
https://en.wikipedia.org/wiki/Aircraft%20diesel%20engine
The aircraft diesel engine or aero diesel is a diesel-powered aircraft engine. They were used in airships and tried in aircraft in the late 1920s and 1930s, but were never widely adopted until recently. Their main advantages are their excellent specific fuel consumption, the reduced flammability and somewhat higher density of their fuel, but these have been outweighed by a combination of inherent disadvantages compared to gasoline-fueled or turboprop engines. The ever-rising cost of avgas and doubts about its future availability have spurred a resurgence in aircraft diesel engine production in the early 2010s. Using diesel engines in aircraft is additionally advantageous from the standpoint of environmental protection as well as the protection of human health, since the tetraethyllead antiknock ingredient of avgas has long been known to be highly toxic as well as polluting. Development Early diesel aircraft A number of manufacturers built diesel aero engines in the 1920s and 1930s; the best known were the Packard air-cooled radial and the Junkers Jumo 205, which was moderately successful, but proved unsuitable for combat use in World War II. The Blohm & Voss BV 138 trimotor maritime patrol flying boat, however, was powered with the more developed Junkers Jumo 207 powerplant, and was more successful, its trio of diesel Jumo 207s conferring upwards of a maximum 2,100 km (1,300 mi) combat radius upon the nearly 300 examples of the BV 138 built during World War II. The first successful diesel engine developed specifically for aircraft was the Packard DR-980 radial diesel of 1928–1929, which was laid out in the familiar air-cooled radial format similar to Wright and Pratt & Whitney designs, and was contemporary with the Beardmore Tornado used in the R101 airship. The use of a diesel had been specified for its low-fire-risk fuel. The first successful flight of a diesel-powered aircraft was made on September 18, 1928, in a Stinson model SM-1DX Detroiter, registration number X7654. Around 1936, the heavier but less thirsty diesel engines were preferred over gasoline engines when flight time exceeded 6–7 hours. Entering service in the early 1930s, the two-stroke Junkers Jumo 205 opposed-piston engine was much more widely used than previous aero diesels. It was moderately successful in its use in the Blohm & Voss Ha 139 and even more so in airship use. In Britain, Napier & Son license-built the larger Junkers Jumo 204 as the Napier Culverin, but it did not see production use in this form. A Daimler-Benz diesel engine was also used in Zeppelins, including the ill-fated LZ 129 Hindenburg. This engine proved unsuitable in military applications and subsequent German aircraft engine development concentrated on gasoline and jet engines. The Soviet World War II-era four-engine strategic bomber Petlyakov Pe-8 was built with Charomskiy ACh-30 diesel engines; but just after the war's end, both its diesels and the gasoline-fueled Mikulin inline V12 engines of surviving Pe-8 airframes were replaced with Shvetsov-designed radial gasoline engines because of efficiency concerns. The Yermolaev Yer-2 long-range medium bomber was also built with Charomskiy diesel engines. Other manufacturers also experimented with diesel engines in this period, such as the French Bloch (later Dassault Aviation), whose MB203 bomber prototype used Clerget diesels of radial design.
The Royal Aircraft Establishment developed an experimental compression ignition (diesel) version of the Rolls-Royce Condor in 1932, flying it in a Hawker Horsley for test purposes. Postwar development Interest in diesel engines in the postwar period was sporadic. The lower power-to-weight ratio of diesels, particularly compared to turboprop engines, weighed against the diesel engine. With fuel available cheaply and most research interest in turboprops and jets for high-speed airliners, diesel-powered aircraft virtually disappeared. The stagnation of the general aviation market in the 1990s saw a massive decline in the development of any new aircraft engine types. Napier & Son in Britain had developed the Napier Culverin, a derivative of the Junkers Jumo 205, before World War II, and took up aero diesel engines again in the 1950s. The British Air Ministry supported the development of the Napier Nomad, a combination of piston and turboprop engines, which was exceptionally efficient in terms of brake specific fuel consumption, but judged too bulky and complex and canceled in 1955. Modern developments Several factors have emerged to change this equation. First, a number of new manufacturers of general aviation aircraft developing new designs have emerged. Second, in Europe in particular, avgas has become very expensive. Third, in several (particularly remote) locations, avgas is harder to obtain than diesel fuel. Finally, automotive diesel technologies have improved greatly in recent years, offering higher power-to-weight ratios more suitable for aircraft application. Certified diesel-powered light planes are currently available, and a number of companies are developing new engine and aircraft designs for the purpose. Many of these run on readily available jet fuel (kerosene), or on conventional automotive diesel. Simulations indicate lower maximum payload due to the heavier engine, but also longer range at medium payload. Applications Airships The zeppelins LZ 129 Hindenburg and LZ 130 Graf Zeppelin II were propelled by reversible diesel engines. The direction of operation was changed by shifting gears on the camshaft. From full power forward, the engines could be brought to a stop, changed over, and brought to full power in reverse in less than 60 seconds. Nevil Shute Norway wrote that the demonstration flight of the airship R100 was changed from India to Canada, "when she got petrol engines, because it was thought that a flight to the tropics with petrol on board would be too hazardous. It is curious after over twenty years to recall how afraid everyone was of petrol in those days (c. 1929), because since then aeroplanes with petrol engines have done innumerable hours of flying in the tropics, and they don't burst into flames on every flight. I think the truth is that everyone was diesel-minded in those days; it seemed as if the diesel engine for aeroplanes was only just around the corner, with the promise of great fuel economy". Hence, the ill-fated diesel-engined R101 — which crashed in 1930 — was to fly to India, though her diesel engines had petrol starter engines, and there had only been time to replace one with a diesel starter engine. The R101 used the Beardmore Tornado aero diesel engine, with two of the five engines reversible by an adjustment to the camshaft. This engine was developed from an engine used in railcars. Certified engines Technify Motors Continental Motors, Inc. 
subsidiary Technify Motors GmbH of Sankt Egidien, Germany, is the new TC holder of the Thielert TAE 110 certified by the EASA on 8 March 2001, a 4-cylinder, four-stroke engine with common rail direct injection, turbocharger, 1:1.4138 reduction gearbox and FADEC producing at take-off at 3675 rpm and continuously at 3400 rpm for . The TAE 125-01, certified 3 May 2002, is the same with a 1:1.689 gearbox, weighs and outputs maximum at 3900 rpm, like the later TAE 125-02-99 certified on 14 August 2006, then the TAE 125-02-114 on 6 March 2007 for at 3900 rpm, and the TAE 125-02-125 outputting at 3400 rpm for . The Centurion 4.0 is a four-stroke 8-cylinder, with common rail, 2 turbochargers, 1:1.689 reduction gearbox, propeller governor and FADEC, weighing and certified on 26 September 2007 for up to maximum, continuous at 3900 rpm. The Centurion 4.0 V8 has not been certified for installation in any airframes. EASA certified on 20 June 2017, the Centurion 3.0 is a four-stroke V6, also with common rail, turbocharger, Electronic Engine Control Unit (EECU) and 1:1.66 reduction gearbox, weighing and outputting 221 kW (300 hp) at take-off, 202 kW (272 hp) continuously, both at 2340 propeller RPM. Thielert Thielert, based in Lichtenstein, Saxony, Germany, was the original TC holder of the 1.7 based on the Mercedes A-class turbo diesel, running on diesel and jet A-1 fuel. It was certified for retrofitting to Cessna 172s and Piper Cherokees, replacing the Lycoming O-320 Avgas engine. The of the 1.7 engine is similar to the O-320 but its displacement is less than a third and it achieves maximum power at 2300 prop RPM instead of 2700. Austrian aircraft manufacturer Diamond Aircraft Industries offered its single-engine Diamond DA40-TDI Star with the 1.7 engine, and the Diamond DA42 Twin Star with two, offering a low fuel consumption of . Robin Aircraft also offered a DR400 Ecoflyer with the Thielert engine. In May 2008, Thielert went bankrupt and although Thielert's insolvency administrator, Bruno M. Kubler, was able to announce in January 2009 that the company was "in the black and working to capacity," by then Cessna had dropped plans to install Thielert engines in some models, and Diamond Aircraft has now developed its own in-house diesel engine: the Austro Engine E4. Several hundred Thielert-powered airplanes are flying. SMA Engines SMA Engines, located in Bourges, France, have designed the SMA SR305-230: a direct-drive four-stroke, air- and oil-cooled turbo-diesel of four horizontally opposed cylinders displacing , with electronically controlled mechanical-pump fuel injection; it obtained EASA certification on 20 April 2001 for at 2200 rpm, weighing . An SR305-260 was certified in February 2019. The SR305-230 obtained US FAA certification in July 2002. It is now certified as a retrofit on several Cessna 182 models in Europe and the US, and Maule is working toward certification of the M-9-230. SMA's engineering team came from Renault Sport (Formula 1) and designed it from the ground up. SMA is developing a six-cylinder version, the SR460. At AERO Friedrichshafen 2016, SMA debuted a high power density engine demonstrator: a 135 hp (100 kW), 38-cubic-inch (0.62 liter) single-cylinder four-stroke for 215 hp (160 kW) per liter, scalable from and up to 1.5 hp/lb (2.5 kW/kg) power density with a specific fuel consumption of 0.35 lb/hp/hr (210 g/kWh). Austro Engine Austro Engine GmbH, based in Wiener Neustadt, Austria, had the E4 engine certified by EASA on 28 January 2009.
It is a 4-cylinder, 1991 cm³ four-stroke engine with common rail direct injection, turbocharger, 1:1.69 reduction gearbox and an Electronic Engine Control Unit. It produces at take-off and continuously, at 2300 propeller RPM for . The same-weight E4P was certified on 26 March 2015 for at take-off at the same speed, and continuously at 2200 propeller RPM. In 2011, Austro Engine was developing a 6-cylinder in cooperation with Steyr Motors, based on their block, to be used in the Diamond DA50. DieselJet DieselJet s.r.l. of Castel Maggiore, Italy, had its TDA CR 1.9 8V EASA certified on 11 June 2010: a liquid-cooled, 4-cylinder, 4-stroke, 8-valve engine with a turbocharger, Common Rail injection, a 1:0.644 reduction gearbox and dual FADEC; it produces 118 kW (160 hp) at take-off and 107 kW (146 hp) continuously at 2450 propeller RPM for . The TDA CR 2.0 16V, certified 8 March 2016, is a 16-valve engine with a 1:0.607 reduction ratio and a similar configuration, producing 142 kW (193 hp) continuous and 160 kW (217.5 hp) at take-off at 2306 propeller RPM for . In 2016, DieselJet was developing a TDA CR 3.0 24V. Continental Motors Continental Motors, Inc. of Mobile, Alabama, received on December 19, 2012, a type certification for its Continental CD-230 under the official TD-300-B designation: a turbocharged 4-stroke, direct-drive, flat-four air-cooled engine of , with direct fuel injection and an electronic control unit with a mechanical back-up, outputting continuously at 2200 RPM, weighing 431 lb (195.5 kg) dry. It is developed from the SMA SR305-230. RED Aircraft RED Aircraft GmbH of Adenau, Germany, obtained EASA type certification on 19 December 2014 for its RED A03 V12 four-stroke, with common rail, turbocharger, 1:1.88 reduction gearbox and single-lever FADEC/EECS, outputting 368 kW (500 hp) at take-off at 2127 propeller RPM and 338 kW (460 hp) at 1995 propeller RPM continuously for dry. The RED A05 is a 3550cc V6 preliminary design, outputting at take-off at 2127 propeller RPM and continuously at 1995 propeller RPM, with a best brake specific fuel consumption. DeltaHawk Engines DeltaHawk Engines, Inc., an American company, is currently developing three V-4 designs of 160, 180 and , the latter two versions being turbocharged. Using a ported two-stroke design, they have also flown a prototype engine in a pusher configuration. Velocity aircraft have claimed deliveries of non-certified engines since 2005; certification was originally hoped for in early 2011. DeltaHawk engines have a dry oil sump, so they can run in any orientation: upright, inverted or vertical shaft, by changing the location of the oil scavenge port. They can also run counter-rotation for installation in twins to eliminate the critical engine issue. A water-cooled DeltaHawk engine has been successfully fitted to a Rotorway helicopter, weighing the same as an air-cooled petrol engine of similar power and being capable of maintaining that power to 17,000 feet. The 180 hp DeltaHawk DH180 received its FAA type certification in May 2023; first deliveries were planned for 2024. Experimental engines A number of other manufacturers are currently developing experimental diesel engines, many using aircraft-specific designs rather than adapted automotive engines. Many are using two-stroke designs, with some opposed-piston layouts directly inspired by the original Junkers design. Diesel Air Limited, Wilksch and Zoche have all had considerable problems bringing their prototype designs into production, with delays running into several years.
The Diesel Air Limited-powered airship is no longer registered by the Civil Aviation Authority in the UK. Two-stroke Wilksch Airmotive, a British company, is developing a three-cylinder two-stroke diesel (WAM-120) and is working on a four-cylinder design (WAM-160). In 2007 Wilksch claimed that they had completed multiple tests on the WAM-100 LSA in accordance with ASTM F 2538 – the WAM-100 LSA is a derated WAM-120. Wilksch originally showed a two-cylinder prototype alongside the three- and four-cylinder models. In April 2008 IndUS Aviation introduced the first diesel light-sport aircraft with a WAM 120 having flown 400 hours on a Thorp T211 in England for the past four years. By mid-2009, approximately 40 WAM-120 units had been sold, with around half currently flying. The British owner of a VANS RV-9A fitted with a WAM-120 reports getting TAS at on 15 litre/hr of Jet A-1 fuel. A Rutan LongEz canard-pusher (G-LEZE) has also flown with the WAM-120 engine with test flights showing a TAS of at and 22 litre/hr. At economy cruise of at the fuel consumption is , giving a range of . GAP Diesel Engine is a NASA development. With the branding Zoche aero-diesel, the company "Michael Zoche Antriebstechnik" in Munich, Germany, has produced a prototype range of three radial air-cooled two-stroke diesel aero-engines, comprising a V-twin, a single-row cross-4 and a double-row cross-8. A Zoche engine has run successfully in wind tunnel tests. Zoche seem barely closer to production than they were a decade ago. Andy Higgs' company Higgs Diesel (also reported as Advanced Component Engineering) designed a step-piston V12 weighing 665 lb (302 kg) with the reduction gearbox, to replace low-end PT6s such as in the Cessna Caravan; a , 302 lb (137 kg) four-cylinder with a gearbox to reduce the prop RPM to 2300 from 5300; and a V4 weighing 103 lb (47 kg) and producing . Details of the Falcon FL200 and Hawk V4 are given at http://www.higgs-diesel.com/higgs-diesel/falconfl200/ and http://www.higgs-diesel.com/higgs-diesel/hawkv4/. The V12 can power generators, tanks, boats or blimps, and V4 and V8 versions can be derived. The Bourke engine, designed by Russell Bourke, of Petaluma, CA, is an opposed, rigidly connected twin-cylinder design using the detonation principle. Opposed-piston engines Diesel Air Limited is a British company developing a twin-cylinder (therefore four-piston), two-stroke opposed-piston engine inspired by the original Junkers design. Their engine has flown in test aircraft and airship installations. Unlike the Junkers, it is made for horizontal installation with a central output shaft for the geared cranks, the overall installed shape thereby approximately resembling a four-stroke flat-four engine. Powerplant Developments, a British company, is developing an opposed-piston engine called the Gemini 100/120 that resembles the Diesel Air Limited engine and uses the Junkers twin-crank principle, again for horizontal installation with a central output shaft for the geared cranks. However, the Gemini 100 is an engine. Like Diesel Air Limited, Powerplant Developments claim to be using Weslake Air Services for production. They have recently announced that Tecnam will test a prototype with the Gemini engine.
Superior Air Parts' subsidiary Gemini Diesel develops three-cylinder two-stroke designs with six opposed pistons: a 100 hp (75 kW) version weighing 159.5 lb (72.5 kg) and a turbocharged 125 hp (93 kW) version weighing 175 lb (79 kg), both measuring 23” wide × 16” high × 23” long (58 × 40 × 58 cm) and reaching BSFC, respectively; larger three-cylinder (six-piston) engines would produce 180–200 hp (134–149 kW) weighing 276 lb (125 kg) in 29” W × 16” H × 29” L (73 × 41.5 × 72.5 cm), or 300–360 hp (224–268 kW) turbocharged whilst weighing 386 lb (175 kg) within 29” W × 19” H × 37” L (73 × 47.5 × 95 cm); five-cylinder (10-piston) engines would produce 450 hp (336 kW) whilst weighing 474 lb (215 kg) within 29” W × 22” H × 43” L (73 × 55 × 110 cm); and six-cylinder (12-piston) engines would attain 550 hp (410 kW) whilst weighing 551 lb (250 kg) within 29” W × 22” H × 48” L (73 × 55 × 122 cm), burning . The 100 hp version will cost less than $25,000. Weslake Engine, another UK-based company, displayed its A80 lightweight diesel engine at Friedrichshafen Aero 2015. Four-stroke Wisconsin-based Engineered Propulsion Systems develops its Graflight liquid-cooled V-8 engine with steel pistons and a compacted graphite iron crankcase for better strength and durability than aluminium at similar weight, increasing time between overhauls to 3,000 hours. It is managed by a Bosch ECU and consumes Jet A, JP-8 or straight diesel for general aviation aircraft and small helicopters, military drones, small boats or troop carriers, and its low vibration allows the use of composite or aluminium airscrews. At , 75% of the maximum power, it consumes , in comparison to the Continental TSIO-550-E, which burns .
Automotive derived
Raptor Turbo Diesel LLC, an American company, is currently developing the Raptor 105 diesel engine. It is a four-stroke inline turbocharged engine. Formerly Vulcan Aircraft Engines (until September 2007). ECO Motors developed the EM 80 and EM four-stroke, four-cylinder diesels with FADEC, based on a car engine, for dry, but the company has not been heard from since 2008. The FlyEco diesel is a three-cylinder engine producing 80 hp (58.8 kW) at up to 3,800 RPM, reduced by 1:1.50–1.79, derived from the Smart car. It powers the Siemens-FlyEco Magnus eFusion hybrid electric aircraft. Teos/Austro Engine AE440 Within the Green Rotorcraft European Clean Sky Joint Technology Initiative environmental research program started in 2011, an Airbus Helicopters H120 Colibri technology demonstrator equipped with a HIPE AE440 high-compression diesel engine, running on jet fuel, first flew on 6 November 2015. The powerplant is a liquid-cooled, dry-sump-lubricated 90° V8 engine with common rail direct injection, fully machined aluminium blocks, titanium connecting rods, steel pistons and liners, and one turbocharger per cylinder bank. With an air/air intercooler, it weighs (dry) without gearbox and the installed powerpack weighs . Its brake specific fuel consumption is 200 g/kWh. It is manufactured by Teos Powertrain Engineering, a joint venture between Mecachrome and D2T (IFPEN group) for the mechanical design, engine main-parts manufacturing, assembly and testing, and Austro Engine for the dual-channel FADEC and harness, fuel system, and airworthiness. See also List of aircraft engines References External links Aircraft engines Diesel engines
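Specific fuel consumption figures in this article are quoted both in lb/hp/hr and in g/kWh. The conversion between the two units is a fixed constant, as the short sketch below shows; it uses the 0.35 lb/hp/hr and roughly 210 g/kWh pair quoted for the SMA demonstrator above as a cross-check.

```python
# Conversion between the two brake specific fuel consumption (BSFC)
# units used in this article: lb/hp/hr (imperial) and g/kWh (metric).
GRAMS_PER_POUND = 453.592
KW_PER_HP = 0.7457          # mechanical horsepower

def lb_per_hp_hr_to_g_per_kwh(bsfc_lb_hp_hr):
    return bsfc_lb_hp_hr * GRAMS_PER_POUND / KW_PER_HP

# The SMA demonstrator's quoted 0.35 lb/hp/hr works out to about 213 g/kWh,
# matching the article's "210 g/kWh" figure to within rounding.
print(round(lb_per_hp_hr_to_g_per_kwh(0.35)))  # -> 213
```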
Aircraft diesel engine
[ "Technology" ]
4,695
[ "Engines", "Aircraft engines" ]
4,177,961
https://en.wikipedia.org/wiki/Benzamidine
Benzamidine is an organic compound with the formula C6H5C(NH)NH2. It is the simplest aryl amidine. The compound is a white solid that is slightly soluble in water. It is usually handled as the hydrochloride salt, a white, water-soluble solid. Structure Benzamidine has one short C=NH bond and one longer C-NH2 bond, which are 129 and 135 pm in length, respectively. The triangular diamine group gives it a distinctive shape which shows up in difference density maps. Applications Benzamidine is a reversible competitive inhibitor of trypsin, trypsin-like enzymes, and serine proteases. It is often used as a ligand in protein crystallography to prevent proteases from degrading a protein of interest. The benzamidine moiety is also found in some pharmaceuticals, such as dabigatran. Condensation with various haloketones provides a synthetic route to 2,4-disubstituted imidazoles. References Phenyl compounds Amidines
Benzamidine
[ "Chemistry" ]
224
[ "Bases (chemistry)", "Amidines", "Functional groups" ]
4,178,225
https://en.wikipedia.org/wiki/Additive%20identity
In mathematics, the additive identity of a set that is equipped with the operation of addition is an element which, when added to any element x in the set, yields x. One of the most familiar additive identities is the number 0 from elementary mathematics, but additive identities occur in other mathematical structures where addition is defined, such as in groups and rings.

Elementary examples
The additive identity familiar from elementary mathematics is zero, denoted 0. For example,

5 + 0 = 5 = 0 + 5.

In the natural numbers ℕ (if 0 is included), the integers ℤ, the rational numbers ℚ, the real numbers ℝ, and the complex numbers ℂ, the additive identity is 0. This says that for a number n belonging to any of these sets,

n + 0 = n = 0 + n.

Formal definition
Let N be a group that is closed under the operation of addition, denoted +. An additive identity for N, denoted e, is an element in N such that for any element n in N,

e + n = n = n + e.

Further examples
In a group, the additive identity is the identity element of the group, is often denoted 0, and is unique (see below for proof).
A ring or field is a group under the operation of addition and thus these also have a unique additive identity 0. This is defined to be different from the multiplicative identity 1 if the ring (or field) has more than one element. If the additive identity and the multiplicative identity are the same, then the ring is trivial (proved below).
In the ring of m-by-n matrices over a ring R, the additive identity is the zero matrix, denoted O or 0, and is the m-by-n matrix whose entries consist entirely of the identity element 0 in R. For example, in the 2×2 matrices over the integers M2(ℤ) the additive identity is

0 = [0 0]
    [0 0].

In the quaternions, 0 is the additive identity.
In the ring of functions from ℝ to ℝ, the function mapping every number to 0 is the additive identity.
In the additive group of vectors in ℝⁿ, the origin or zero vector is the additive identity.

Properties

The additive identity is unique in a group
Let (G, +) be a group and let 0 and 0' in G both denote additive identities, so for any g in G,

0 + g = g = g + 0 and 0' + g = g = g + 0'.

It then follows from the above that

0' = 0' + 0 = 0,

where the first equality holds because 0 is an additive identity and the second because 0' is.

The additive identity annihilates ring elements
In a system S with a multiplication operation that distributes over addition, the additive identity is a multiplicative absorbing element, meaning that for any s in S, s · 0 = 0. This follows because:

s · 0 = s · (0 + 0) = s · 0 + s · 0,

and subtracting s · 0 from both sides gives s · 0 = 0.

The additive and multiplicative identities are different in a non-trivial ring
Let R be a ring and suppose that the additive identity 0 and the multiplicative identity 1 are equal, i.e. 0 = 1. Let r be any element of R. Then

r = r · 1 = r · 0 = 0,

proving that R is trivial, i.e. R = {0}. The contrapositive, that if R is non-trivial then 0 is not equal to 1, is therefore shown.

See also 0 (number) Additive inverse Identity element Multiplicative identity References Bibliography David S. Dummit, Richard M. Foote, Abstract Algebra, Wiley (3rd ed.), 2003. External links Abstract algebra Elementary algebra Group theory Ring theory 0 (number)
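As an illustrative aside, the zero-matrix example above can be checked directly in Python with NumPy; the particular matrix A below is arbitrary.

```python
import numpy as np

# The additive identity in the ring of 2-by-2 integer matrices is the
# zero matrix Z: adding it on either side leaves any matrix A unchanged.
A = np.array([[3, -1],
              [4,  2]])
Z = np.zeros((2, 2), dtype=int)

assert (A + Z == A).all() and (Z + A == A).all()

# The additive identity also annihilates under multiplication, matching
# the absorbing-element property proved above: A @ Z is again Z.
assert (A @ Z == Z).all() and (Z @ A == Z).all()
print("zero matrix behaves as the additive identity")
```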
Additive identity
[ "Mathematics" ]
595
[ "Ring theory", "Elementary algebra", "Elementary mathematics", "Group theory", "Fields of abstract algebra", "Abstract algebra", "Algebra" ]
4,178,640
https://en.wikipedia.org/wiki/Hazchem
Hazchem (from "hazardous chemicals") is a warning plate system used in Australia, Hong Kong, Malaysia, New Zealand, India and the United Kingdom for vehicles transporting hazardous substances, and on storage facilities. The top-left section of the plate gives the Emergency Action Code (EAC) telling the fire brigade what actions to take if there is an accident or fire. The middle-left section containing a 4-digit number gives the UN Substance Identification Number describing the material. The lower-left section gives the telephone number that should be called if special advice is needed. The warning symbol in the top right indicates the general hazard class of the material. The bottom-right of the plate carries a company logo or name. There is also a standard null Hazchem plate to indicate the transport of non-hazardous substances. The null plate does not include an EAC or substance identification. The National Chemical Emergency Centre (NCEC) in the United Kingdom provides a Free Online Hazchem Guide. Emergency Action Code The Emergency Action Code (EAC) is a three-character code displayed on all carriers classed as carrying dangerous goods, and provides a quick assessment to first responders and emergency responders (i.e. firefighters and police) of what actions to take should the carrier carrying such goods become involved in an incident (a traffic collision, for example). EACs are characterised by a single number (1 to 4) and either one or two letters (depending on the hazard). NCEC was commissioned by the Department for Communities and Local Government (CLG) to edit the EAC List 2013 publication, outlining the application of Hazchem Emergency Action Codes (EACs) in Britain for 2013. The Dangerous Goods Emergency Action Code (EAC) List is reviewed every two years and is an essential compliance document for all emergency services, local government and for those who may control the planning for, and prevention of, emergencies involving dangerous goods. The current EAC List is the 2013 edition. NCEC has been at the heart of the UK EAC system since its inception in the early 1970s, publishing the list on behalf of the UK Government until 1996 and later resuming its management. The printed version of the book can be purchased directly from TSO or downloaded as a PDF file from NCEC's website. HazChem fire suppression The number leading the EAC indicates the type of fire-suppressing agent that should be used to prevent or extinguish a fire caused by the chemical. * These indicators are used only in product documentation and are displayed on vehicle plates as 2 and 3 respectively. The system ranks suppression media in order of their suitability, so that a fire may be fought with a suppression medium of equal or higher EAC number. For example, a chemical with EAC number 2 - indicating water fog - may be fought additionally with media 3 (foam) or 4 (dry agent), but not with 1 (coarse spray). This is especially important for chemicals requiring medium 4 (dry agent), as these chemicals react violently with water and so using lower-numbered media will be actively dangerous. HazChem safety parameters Each EAC contains at least one letter, which determines which category the chemical falls under, and which also highlights the violence of the chemical (i.e. likelihood to spontaneously combust, explode etc.), what personal protective equipment to use while working around the chemical and what action to take when disposing of the chemical.
Each category is assigned a letter to determine what actions are required when handling, containing and disposing of the chemical in question. Eight 'major categories' exist, which are commonly denoted by a black letter on a white background. Four subcategories exist which specifically deal with what type of personal protective equipment responders must wear when handling the emergency, denoted by a white letter on a black background. In Australia, with the update of the Australian Dangerous Goods Code volume 7 as of 2010, the white letter on a black background has been removed, making BA (breathing apparatus) a requirement at all large incidents regardless of whether the substance is involved in a fire. If a category is classed as violent, this means that the chemical can be violently or explosively reactive, either with the atmosphere or water, or both (which could be marked by the Dangerous when Wet symbol). Protection is divided into three categories of personal protective equipment: Full, BA and BA for fire only. Full denotes that full personal protective equipment provisions must be used around and in contact with the chemical, which will usually include a portable breathing apparatus and a watertight, chemical-proof suit. BA (an acronym for breathing apparatus) specifies that a portable breathing apparatus must be used at all times in and around the chemical, and BA for fire only specifies that a breathing apparatus is not necessary for short exposure periods to the chemical but is required if the chemical is alight. BA for fire only is denoted within the emergency action code as a white letter on a black background, while a black letter on a white background denotes breathing apparatus at all times. When changing the background colour is not possible (such as with handwriting), the use of brackets means the same as a black background. "3[Y]E" means the same as "3YE" (a white letter on a black background). Substance control specifies what to do with the chemical in the event of a spill: either dilute or contain. Dilute means that the chemical may be washed down the drain with large quantities of water. Contain requires that the spillage must not come in contact with drains or water courses. In the event of a chemical incident, the EAC may specify that an evacuation may be necessary, as the chemical poses a public hazard which may extend beyond the immediate vicinity. If evacuation is not possible, advice to stay indoors and secure all points of ventilation may be necessary. This condition is denoted by an E at the end of any emergency action code. It is an optional letter, depending on the nature of the chemical. Examples A very commonly displayed example is 3YE on petrol tankers. This means that a fire must be fought using foam or dry agent (if a small fire), that it can react violently and is explosive, that firefighters must wear a portable breathing apparatus at all times, or, if a white-on-black Y, only if there is a fire, and that the run-off needs to be contained. It also indicates to the incident controller that evacuation of the surrounding area may be necessary. Calculation of Hazchem action code for multi-loads or sites with multiple Hazchem codes Example: There are three substances to be carried as a multi-load, having emergency action codes of 3Y, •2S and 4WE. 1st Character (Number): The first character of the EAC for each of the three substances is 3, 2 and 4.
The highest number must be taken as the first character of the code for the multi-load and therefore the first character will be 4. The bullet in •2S is not assigned to the mixed load because the other EACs do not include a bullet. 2nd Character (Letter): The second character of the EAC for each of the three substances is Y, S and W. The correct character to use may be determined with the chart on the right. Taking the Y along the top row of the chart, and the S along the left-hand column, the intersection is at Y and therefore the character for the first two substances would be Y. This resultant character (Y) is then taken along the top row and the character for the third substance (W) is taken along the left-hand column. The intersection point is now W. The second character of the code for the three substances is therefore W. The second character can also be determined using the table below. When assigning a new character to a multi-load EAC, three things must be taken into consideration: substance control - if any one of the hazardous chemicals requires containment, the entire load must be contained; protection - if any one of the hazardous chemicals requires the use of full PPE, the entire load requires the use of full PPE; and violence - if any one of the hazardous chemicals is violent, the entire load must be considered violent. Working from right to left with the table below, the new second character for a multi-load can be determined. The following examples act as a guideline for the method. Example 1: A multi-load consisting of category P and T hazardous chemicals. First, compare the substance control method of both categories; in this example both categories should be diluted, so the resulting character will align with "Dilute" in the table. Second, compare the protection required by the two categories; in this example category P requires full PPE, and category T requires the use of breathing apparatus, so the resulting character will align with "Full" in the table. Third, compare the violence of the two categories; in this example category P is considered violent and category T is not, so the resulting character will align with "V" in the table. Combining the three requirements, the resultant category is P, which is violent, requires full PPE and should be diluted. Example 2: A multi-load consisting of category R and Z hazardous chemicals. First, compare the substance control method of both categories; in this example category R should be diluted, and category Z should be contained, so the resulting character will align with "Contain" in the table. Second, compare the protection required by the two categories; in this example category R requires full PPE, and category Z requires the use of breathing apparatus, so the resulting character will align with "Full" in the table. Third, compare the violence of the two categories; in this example both categories are considered non-violent, so the resulting character will align with a blank space in the table. Combining the three requirements, the resultant category is X, which is non-violent, requires full PPE and should be contained. Example 3: A multi-load consisting of category T and Z hazardous chemicals. First, compare the substance control method of both categories; in this example category T should be diluted, and category Z should be contained, so the resulting character will align with "Contain" in the table.
Second, compare the protection required by the two categories; in this example both categories require the use of breathing apparatus, so the resulting character will align with "BA" in the table. Third, compare the violence of the two categories; in this example both categories are considered non-violent, so the resulting character will align with a blank space in the table. Combining the three requirements, the resultant category is Z, which is non-violent, requires the use of breathing apparatus and should be contained. Letter 'E': The third substance has an 'E' as a third character and therefore the multi-load must also have an 'E'. The resultant Hazchem Code for the three substances carried as a multi-load will therefore be 4WE. See also Hazmat NFPA 704—the equivalent system for marking the presence of dangerous goods buildings and fixed storage tanks in the United States, intended for emergency services. ADR—the equivalent system used for identifying dangerous goods while being transported in mainland Europe. Globally Harmonized System of Classification and Labelling of Chemicals—a new international standard for marking hazardous materials. Hazardous Materials Identification System—a system for marking dangerous materials in the United States, intended for workers. References External links EAC List 2023 NCEC Dangerous Goods Emergency Action List 2017 Example of UK Hazchem Panel with Hazchem Emergency Action Code (EAC) General 'Hazchem Information for UK Emergency Services' Site Dangerous Goods Emergency Action Code List Symbols Safety codes Standards of the United Kingdom Warning systems
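Because the combination rules described above are mechanical, they are straightforward to automate. The following Python sketch is an unofficial illustration only: the per-letter property table is reconstructed from this article's examples, so it must be verified against the current official EAC list before any real-world use.

```python
# Unofficial sketch of the multi-load EAC combination rules described
# above. Each letter maps to (substance control, protection, violent);
# this table is reconstructed from the article's examples -- check it
# against the official EAC list before relying on it.
LETTERS = {
    "P": ("dilute",  "full", True),  "R": ("dilute",  "full", False),
    "S": ("dilute",  "ba",   True),  "T": ("dilute",  "ba",   False),
    "W": ("contain", "full", True),  "X": ("contain", "full", False),
    "Y": ("contain", "ba",   True),  "Z": ("contain", "ba",   False),
}
BY_PROPS = {props: letter for letter, props in LETTERS.items()}

def combine_eacs(eacs):
    """Combine EACs such as ["3Y", "•2S", "4WE"] into one multi-load EAC."""
    codes = [c.lstrip("•") for c in eacs]          # bullets are not carried over
    number = max(c[0] for c in codes)              # most demanding fire medium
    props = [LETTERS[c[1]] for c in codes]
    control = "contain" if any(p[0] == "contain" for p in props) else "dilute"
    protection = "full" if any(p[1] == "full" for p in props) else "ba"
    violent = any(p[2] for p in props)
    e_flag = "E" if any(c.endswith("E") for c in codes) else ""
    return number + BY_PROPS[(control, protection, violent)] + e_flag

# The worked examples from the text (the "2" digits below are arbitrary,
# since Examples 1-3 only specify the letters):
assert combine_eacs(["3Y", "•2S", "4WE"]) == "4WE"
assert combine_eacs(["2P", "2T"])[1] == "P"   # Example 1
assert combine_eacs(["2R", "2Z"])[1] == "X"   # Example 2
assert combine_eacs(["2T", "2Z"])[1] == "Z"   # Example 3
```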
Hazchem
[ "Mathematics", "Technology", "Engineering" ]
2,402
[ "Warning systems", "Symbols", "Safety engineering", "Measuring instruments" ]
4,179,047
https://en.wikipedia.org/wiki/Vanadium%20oxytrichloride
Vanadium oxytrichloride is the inorganic compound with the formula VOCl3. This yellow distillable liquid hydrolyzes readily in air. It is an oxidizing agent. It is used as a reagent in organic synthesis. Samples often appear red or orange owing to an impurity of vanadium tetrachloride. Properties VOCl3 is a vanadium compound with vanadium in the +5 oxidation state and as such is diamagnetic. It is tetrahedral with O-V-Cl bond angles of 111° and Cl-V-Cl bond angles of 108°. The V-O and V-Cl bond lengths are 157 and 214 pm, respectively. VOCl3 is highly reactive toward water and evolves HCl upon standing. It is soluble in nonpolar solvents such as benzene, CH2Cl2, and hexane. In some respects, the chemical properties of VOCl3 and POCl3 are similar. One distinction is that VOCl3 is a strong oxidizing agent, whereas the phosphorus compound is not. Neat VOCl3 is the usual chemical shift standard for 51V NMR spectroscopy. Preparation VOCl3 arises by the chlorination of V2O5. The reaction proceeds near 600 °C: 3 Cl2 + V2O5 → 2 VOCl3 + 1.5 O2 Heating an intimate (well-blended, finely divided) mixture of V2O5, chlorine, and carbon at 200–400 °C also gives VOCl3. In this case the carbon serves as a deoxygenation agent, akin to its use in the chloride process for manufacturing TiCl4 from TiO2. Vanadium(III) oxide can also be used as a precursor: 3 Cl2 + V2O3 → 2 VOCl3 + 0.5 O2 A more typical laboratory synthesis involves the chlorination of V2O5 using SOCl2. V2O5 + 3 SOCl2 → 2 VOCl3 + 3 SO2 Reactions Hydrolysis and alcoholysis VOCl3 quickly hydrolyzes, resulting in vanadium pentoxide and hydrochloric acid. An intermediate in this process is VO2Cl: 2 VOCl3 + 3 H2O → V2O5 + 6 HCl VOCl3 reacts with alcohols, especially in the presence of a proton acceptor, to give alkoxides, as illustrated by this synthesis of vanadyl isopropoxide: VOCl3 + 3 HOCH(CH3)2 → VO(OCH(CH3)2)3 + 3 HCl Interconversions to other V-O-Cl compounds VOCl3 is also used in the synthesis of vanadium oxydichloride. V2O5 + 3 VCl3 + VOCl3 → 6 VOCl2 VO2Cl can be prepared by an unusual reaction involving Cl2O. VOCl3 + Cl2O → VO2Cl + 2 Cl2 At >180 °C, VO2Cl decomposes to V2O5 and VOCl3. Similarly, VOCl2 also decomposes to give VOCl3, together with VOCl. Adduct formation VOCl3 is strongly Lewis acidic, as demonstrated by its tendency to form adducts with various bases such as acetonitrile and amines. In forming the adducts, vanadium changes from four-coordinate tetrahedral geometry to six-coordinate octahedral geometry: VOCl3 + 2 H2NEt → VOCl3(H2NEt)2 Organic chemistry VOCl3 is a catalyst or precatalyst in the production of ethylene-propylene rubbers (EPDM). In organic synthesis, it has been used for oxidative coupling of phenols and anisoles. References Vanadium(V) compounds Oxychlorides Metal halides Vanadyl compounds
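As a back-of-the-envelope illustration of the first preparation reaction above (not taken from the cited literature), the theoretical yield of VOCl3 from a given mass of V2O5 follows from standard atomic masses; the 100 g input below is arbitrary.

```python
# Theoretical yield for the chlorination route described above:
#   3 Cl2 + V2O5 -> 2 VOCl3 + 1.5 O2
# using standard atomic masses in g/mol.
M = {"V": 50.942, "O": 15.999, "Cl": 35.453}

M_V2O5 = 2 * M["V"] + 5 * M["O"]           # ~181.88 g/mol
M_VOCl3 = M["V"] + M["O"] + 3 * M["Cl"]    # ~173.30 g/mol

grams_V2O5 = 100.0                         # arbitrary input mass
moles_VOCl3 = 2 * (grams_V2O5 / M_V2O5)    # 2 mol product per mol V2O5
print(f"{moles_VOCl3 * M_VOCl3:.1f} g VOCl3 theoretical")  # ~190.6 g
```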
Vanadium oxytrichloride
[ "Chemistry" ]
826
[ "Inorganic compounds", "Metal halides", "Salts" ]
4,179,425
https://en.wikipedia.org/wiki/Royal%20Netherlands%20Meteorological%20Institute
The Royal Netherlands Meteorological Institute (KNMI) is the Dutch national weather forecasting service, which has its headquarters in De Bilt, in the province of Utrecht, central Netherlands. The primary tasks of KNMI are weather forecasting, monitoring of climate changes and monitoring seismic activity. KNMI is also the national research and information centre for climate, climate change and seismology. History KNMI was established by royal decree of King William III on 21 January 1854 under the title "Royal Meteorological Observatory". Professor C. H. D. Buys Ballot was appointed as the first Director. The year before, Professor Ballot had moved the Utrecht University Observatory to the decommissioned fort at Sonnenborgh. It was only later, in 1897, that the headquarters of the KNMI moved to the Koelenberg estate in De Bilt. The "Royal Meteorological Observatory" originally had two divisions: the land branch under Dr. Frederik Wilhelm Christiaan Krecke and the marine branch under navy Lt. Marin H. Jansen. Like Robert FitzRoy, who founded the Meteorological Office in Britain the same year, Ballot was disenchanted with the non-scientific weather reports found in European newspapers at the time. Like the Met Office, the KNMI also pioneered daily weather predictions, which Ballot called by the newly coined word "weervoorspelling" (weather prognostication). Research Applied research at KNMI is focused on three areas: Research aimed at improving the quality, usefulness and accessibility of meteorological and oceanographical data in support of operational weather forecasting and other applications of such data. Climate-related research on oceanography; atmospheric boundary layer processes, clouds and radiation; the chemical composition of the atmosphere (e.g. ozone); climate variability research; the analysis of climate, climate variability and climatic change; modelling support and policy support to the Dutch Government with respect to climate and climatic change. Seismological research as well as monitoring of seismic activity (earthquakes). Development of atmospheric dispersion models KNMI's applied research also encompasses the development and operational use of atmospheric dispersion models. Whenever a disaster occurs within Europe which causes the emission of toxic gases or radioactive material into the atmosphere, it is of utmost importance to quickly determine where the atmospheric plume of toxic material is being transported by the prevailing winds and other meteorological factors. At such times, KNMI activates a special calamity service. For this purpose, a group of seven meteorologists is constantly on call day and night. KNMI's role in supplying information during emergencies is included in municipal and provincial disaster management plans. Civil services, fire departments and the police can be provided with weather and other relevant information directly by the meteorologist on duty, through dedicated telephone connections. KNMI has available two atmospheric dispersion models for use by its calamity service: PUFF - In cooperation with the Netherlands National Institute for Public Health and the Environment (Dutch: Rijksinstituut voor Volksgezondheid en Milieuhygiene, or simply RIVM), KNMI has developed the dispersion model PUFF. It has been designed to calculate the dispersion of air pollution on European scales. The model was originally tested by using measurements of the dispersion of radioactivity caused by the accident in the nuclear power plant of Chernobyl in 1986.
A few years later, in 1994, a dedicated dispersion experiment called ETEX (European Tracer EXperiment) was carried out, which also provided useful data for further testing of PUFF. CALM - CALM is a CALamity Model designed for the calculation of air pollution dispersion on small spatial scales, within the Netherlands. The algorithms and parameters contained in the CALM model are practically identical to those of the PUFF model. However, the meteorological input can only be supplied manually in CALM. The user provides both observed and predicted values for wind velocity at the 10-meter height level, the atmospheric stability classification and the mixing height. After the model calculations have been performed, a map is created and displayed with the derived trajectories of the pollution plume and an indication of how and where the cloud will disperse. Storm naming In 2019, KNMI decided to join the western storm naming group to help raise awareness of the danger of storms; the first named storm was Storm Ciara on 9 February 2020. See also Atmospheric dispersion modeling List of atmospheric dispersion models National Center for Atmospheric Research NERI, the National Environmental Research Institute of Denmark NILU, the Norwegian Institute for Air Research Roadway air dispersion modeling Swedish Meteorological and Hydrological Institute TA Luft UK Atmospheric Dispersion Modelling Liaison Committee UK Dispersion Modelling Bureau University Corporation for Atmospheric Research References External links KNMI website (in Dutch) KNMI website (in English) KNMI atmospheric dispersion models RIVM website (in English) Atmospheric dispersion modeling Organisations based in De Bilt Governmental meteorological agencies in Europe Independent government agencies of the Netherlands Organisations based in the Netherlands with royal patronage Research institutes in the Netherlands
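KNMI's PUFF and CALM implementations are not described in detail here, so the sketch below is only a textbook-style illustration of the kind of calculation a puff dispersion model performs: the concentration due to a single instantaneous Gaussian puff advected by a uniform wind, with reflection at the ground. All parameter values are invented for the example and have no connection to KNMI's actual models.

```python
import numpy as np

def gaussian_puff(q, u, t, x, y, z, h, sigma_y, sigma_z):
    """Concentration (kg/m^3) of one instantaneous Gaussian puff.

    q        released mass (kg)
    u        wind speed (m/s), blowing along the x axis
    t        time since release (s)
    x, y, z  receptor coordinates (m)
    h        effective release height (m)
    sigma_*  dispersion parameters at time t (m); sigma_x is taken equal
             to sigma_y for simplicity, whereas a real model would derive
             these from stability class and travel time
    """
    sigma_x = sigma_y
    norm = q / ((2 * np.pi) ** 1.5 * sigma_x * sigma_y * sigma_z)
    along = np.exp(-((x - u * t) ** 2) / (2 * sigma_x ** 2))
    across = np.exp(-(y ** 2) / (2 * sigma_y ** 2))
    vertical = (np.exp(-((z - h) ** 2) / (2 * sigma_z ** 2))
                + np.exp(-((z + h) ** 2) / (2 * sigma_z ** 2)))  # ground reflection
    return norm * along * across * vertical

# 1 kg released at 50 m height in a 5 m/s wind: ground-level concentration
# 10 minutes later at a receptor 3 km directly downwind.
print(gaussian_puff(q=1.0, u=5.0, t=600.0, x=3000.0, y=0.0, z=0.0,
                    h=50.0, sigma_y=200.0, sigma_z=60.0))
```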
Royal Netherlands Meteorological Institute
[ "Chemistry", "Engineering", "Environmental_science" ]
1,022
[ "Atmospheric dispersion modeling", "Environmental modelling", "Environmental engineering" ]
4,179,627
https://en.wikipedia.org/wiki/Brettanomyces%20bruxellensis
Brettanomyces bruxellensis (the anamorph of Dekkera bruxellensis) is a yeast associated with the Senne valley near Brussels, Belgium. Despite its Latin species name, B. bruxellensis is found all over the globe. In the wild, it is often found on the skins of fruit. Beer production B. bruxellensis plays a key role in the production of typical Belgian beer styles such as lambic, Flanders red ales, gueuze and kriek, and is part of the spontaneous fermentation biota. The Trappist ale Orval also contains a small amount of it. It is naturally found in the brewery environment, living within oak barrels that are used for the storage of beer during the secondary conditioning stage. Here it completes the long slow fermentation or super-attenuation of beer, often in symbiosis with Pediococcus sp. Macroscopically visible colonies look whitish and show a dome-shaped aspect, depending on the age and size. B. bruxellensis is increasingly being used by American craft brewers, especially in Maine, California and Colorado. Jolly Pumpkin Artisan Ales, Allagash Brewing Company, Port Brewing Company, Sierra Nevada Brewing Company, Russian River Brewing Company and New Belgium Brewing Company have all brewed beers fermented with B. bruxellensis. The beers have a slightly sour, earthy character. Some have described them as having a "barnyard" or "wet horse blanket" flavor. Wine production In the wine industry, B. bruxellensis is generally considered a spoilage yeast and it and other members of the genus are often referred to as Brettanomyces ("brett"). Its metabolic products can impart "sweaty saddle leather", "barnyard", "burnt plastic" or "band-aid" aromas to wine. Some winemakers in France, and occasionally elsewhere, consider it a desirable addition to wine, e.g., in Château de Beaucastel, but New World vintners generally consider it a defect. Some authorities consider brett to be responsible for 90% of the spoilage problems in premium red wines. One defense against brett is to limit potential sources of contamination. It occurs more commonly in some vineyards than others, so producers can avoid purchasing grapes from such sources. Used wine barrels purchased from other vintners are another common source. Some producers sanitize used barrels with ozone. Others steam or soak them for many hours in very hot water, or wash them with either citric acid or peroxycarbonate. If wine becomes contaminated by brett, some vintners sterile filter it, add SO2, or treat it with dimethyl dicarbonate. Both knowledge and experience are considered helpful in avoiding brett and the problems it can cause. Biochemistry B. bruxellensis contains the enzyme vinylphenol reductase. See also Lambic Wine fault Yeast in winemaking References "Breaking the mold", Wine Spectator, 2006 (March 31), 30(16), pp. 99–100 & 103. Wild Brews: Beer Beyond the Influence of Brewer's Yeast, Jeff Sparrow, Brewers Publications, Boulder, Colo., 2005 Yeasts Oenology Fungal grape diseases Yeasts used in brewing Saccharomycetes Fungus species
Brettanomyces bruxellensis
[ "Biology" ]
689
[ "Yeasts", "Fungi", "Fungus species" ]
4,179,702
https://en.wikipedia.org/wiki/Yelp
Yelp Inc. is an American company that develops the Yelp.com website and the Yelp mobile app, which publish crowd-sourced reviews about businesses. It also operates Yelp Guest Manager, a table reservation service. It is headquartered in San Francisco. Yelp was founded in 2004 by former PayPal employees Russel Simmons and Jeremy Stoppelman. It has since become one of the leading sources of user-generated reviews and ratings for businesses. Yelp grew in usage and raised several rounds of funding in the following years. By 2010, it had $30 million in revenue, and the website had published about 4.5 million crowd-sourced reviews. From 2009 to 2012, Yelp expanded throughout Europe and Asia. In 2009, it entered unsuccessful negotiations to be acquired by Google. Yelp became a public company via an initial public offering in March 2012 and became profitable for the first time two years later. As of December 31, 2023, approximately 287 million reviews have been contributed to Yelp. In 2023, the company had over 36 million desktop unique visitors and over 60 million mobile web unique visitors. Yelp estimates that over 55% of its audience has an annual household income of more than $100,000. The company has been accused of using unfair practices to raise revenue from the businesses that are reviewed on its site: e.g., by presenting more negative review information for companies that do not purchase its advertising services, by prominently featuring advertisements of the competitors of such non-paying companies, or, conversely, by excluding negative reviews from companies' overall rating on the basis that the reviews "are not currently recommended". There have also been complaints of aggressive and misleading tactics by some of its advertising sales representatives. The company's review system's reliability has also been affected by the submission of fake reviews by external users, such as false positive reviews submitted by a company to promote its own business or false negative reviews submitted about competing businesses – a practice sometimes known as "astroturfing" – which the company has tried to combat in various ways. Company history (2004–present) Origins (2004–2009) Two former PayPal employees, Jeremy Stoppelman and Russel Simmons, founded Yelp at a business incubator, MRL Ventures, in 2004. Stoppelman and Simmons conceived the initial idea for Yelp as an email-based referral network, after Stoppelman caught the flu and had a difficult time finding an online recommendation for a local doctor. Max Levchin, the co-founders' former colleague as founding chief technology officer of PayPal and founder of MRL Ventures, provided $1 million in angel financing. MRL co-founder David Galbraith, who instigated the local services project based on user reviews, came up with the name "Yelp". Stoppelman explained that they decided on "Yelp" for the company's name because "it was short, memorable, easy to spell, and was familiar with 'the help' and 'yellow pages'". According to Fortune, Yelp's initial email-based system was "convoluted". The idea was rejected by investors and did not attract users beyond the cofounders' friends and family. Usage data showed that users were not answering requests for referrals, but were using the "Real Reviews" feature, which allowed them to write reviews unsolicited. According to The San Francisco Chronicle, "the site's popularity soared" after it was re-designed in late 2005 with the distinctive Burst logo.
Yelp raised $5 million in funding in 2005 from Bessemer Venture Partners and $10 million in November 2006 from Benchmark Capital. The number of reviewers on the site grew from 12,000 in 2005, to 100,000 in 2006. By the summer of 2006, the site had one million monthly visitors. It raised $15 million in funding from DAG Ventures in February 2008. In 2010, Elevation Partners invested $100 million; $75 million was spent on purchasing equity from employees and investors, while $25 million was invested in sales staff and expansion. Yelp grew from 6 million monthly visitors in 2007 to 16.5 million in 2008 and from 12 to 24 cities during the same time period. By 2009, the site had 4.5 million reviews. By 2010, Yelp's revenues were estimated to be $30 million and it employed 300 people. Private company (2009–2012) Yelp introduced a site for the United Kingdom in January 2009 and one for Canada that August. The first non-English Yelp site was introduced in France in 2010; users had the option to read and write content in French or English. From 2010 to 2011, Yelp launched several more sites, in Austria, Germany, Spain, and the Netherlands. International website traffic doubled during the same time period. An Australian website went live in November 2011. It was supported through a partnership with Telstra, which provided one million initial business listings, and was initially glitchy. By the end of 2012, Yelp was publishing reviews for establishments in 20 countries, including Turkey and Denmark. Yelp's first site in Asia was introduced in September 2012 in Singapore, which was followed by Japan in 2014. In December 2009, Google entered into negotiations with Yelp to acquire the company, but the two parties failed to reach an agreement. According to The New York Times, Google offered about $500 million, but the deal fell through after Yahoo offered $1 billion. TechCrunch reported that Google refused to match Yahoo's offer. Both offers were later abandoned following a disagreement between Yelp's management and board of directors about the offers. In June 2015, Yelp published a study alleging Google was altering search results to benefit its own online services. Yelp began a service called Yelp Deals in April 2011, but by August it cut back on Deals due to increased competition and market saturation. That September, the Federal Trade Commission investigated Yelp's allegations that Google was using Yelp web content without authorization and that Google's search algorithms favored Google Places over similar services provided by Yelp. In order to avoid an FTC anti-trust lawsuit, in January 2014, Google agreed to allow services like Yelp the ability to opt out of having their data scraped and used on Google's websites. Public entity (2012–present) Having filed for an initial public offering (IPO) with the Securities Exchange Commission in November 2011, Yelp's stock began public trading on the New York Stock Exchange on March 2, 2012. In 2012, Yelp acquired its largest European rival, Qype, for $50 million. The following year, CEO Jeremy Stoppelman reduced his salary to $1. Yelp acquired start-up online reservation company SeatMe for $12.7 million in cash and stock in 2013. Yelp's second quarter 2013 revenue of $55 million "exceeded expectations", but the company was not yet profitable. In 2012/13, Yelp moved into its new corporate headquarters, occupying about 150,000 square feet on 12 floors of 140 New Montgomery (the former PacBell building) in San Francisco. 
The company was profitable for the first time in the second quarter of 2014, as a result of increasing ad spending by business owners and possibly from changes in Google's local search algorithm. The algorithm update, dubbed Google Pigeon, made authoritative local directory sites like Yelp and TripAdvisor more visible. Over the course of the year, Yelp websites were launched in Mexico, Japan, and Argentina. Also in 2014, Yelp expanded in Europe through the acquisitions of German-based restaurant review site Restaurant-Kritik and French-based CityVox. In early February 2015, Yelp announced it had bought Eat24, an online food-ordering service, for $134 million. Then in August 2017, Yelp sold Eat24 to Grubhub for $287.5 million. The acquisition resulted in a partnership to integrate Grubhub delivery into the Yelp profiles of restaurants. In late 2015, a "Public Services & Government" section was introduced to Yelp, and the General Services Administration began encouraging government agencies to create and monitor official government pages. For example, the Transportation Security Administration created official TSA Yelp pages. Later that year, Yelp began experimenting in San Francisco with consumer alerts that were added to pages about restaurants with poor hygiene scores in government inspections. Research conducted by the Boston Children's Hospital found that Yelp reviews with keywords associated with food poisoning correlate strongly with poor hygiene at the restaurant. Researchers at Columbia University used data from Yelp to identify three previously unreported restaurant-related food poisoning outbreaks. On November 2, 2016, concurrent with its earnings report for Q3 2016, Yelp announced it would drastically scale back its operations outside North America and halt international expansion. This resulted in the termination of essentially all international employees across Yelp's 30+ international markets from the sales, marketing, public relations, business outreach, and government relations departments. Overseas employees now primarily consist of engineering and product management staff. These layoffs affected 175 individuals, or about 4% of its total workforce. In March 2017, Yelp acquired the restaurant reservation app Nowait for $40 million. In April 2017, Yelp acquired Wi-Fi marketing company Turnstyle Analytics for $20 million. In early 2020, Yelp listed space at 55 Hawthorne Street, San Francisco, for 235 employees as available for sublease. Business closures and stay-at-home orders during the COVID-19 pandemic in the United States caused a massive decline in searches on Yelp (down 64–83% from March to April, depending on category) and company revenues. On April 9, the company announced it would lay off 1,000 employees, furlough about 1,100 with benefits, reduce hours for others, cut executive pay by 20–30%, and stop paying the CEO for the rest of 2020. In September 2021, Yelp announced that it was relocating its corporate headquarters to a smaller space at 350 Mission Street to be subleased from Salesforce. On June 1, 2023, Yelp decided to close its offices in Phoenix, Arizona and Hamburg, Germany. According to an announcement made by the company, less than 6 percent of the available workstations in these offices were being utilized. This move came after Yelp had already shut down its New York, Chicago, and Washington, D.C. offices. As of mid-2023, Yelp maintains a single remaining office in the United States, in San Francisco.
Additionally, the company will continue its operations in Toronto, Canada, and London, United Kingdom. The closure and downsizing of these offices are expected to result in approximately $27 million in annual cost savings for Yelp during the 2023–24 fiscal year. As of February 2024, its website listed reviews for establishments in 32 countries. Features Yelp's website, Yelp.com, is a crowd-sourced local business review and social networking site. The site has pages devoted to individual locations, such as restaurants or schools, where Yelp users can submit a review of their products or services using a one to five stars rating scale. Businesses can update contact information, hours, and other basic listing information or add special deals. In addition to writing reviews, users can react to reviews, plan events, or discuss their personal lives. 78% of businesses listed on the site had a rating of three stars or better, but some negative reviews were very personal or extreme. Some of the reviews are written entertainingly or creatively. As of 2014, users could give a "thumbs-up" to reviews they liked, which caused these reviews to be featured more prominently in the system. As of 2008, each day a "Review of the Day" was determined based on a vote by users. 72% of Yelp searches are done from a mobile device. The Yelp iPhone mobile app was introduced in December 2008. In August 2009, Yelp released an update to the iPhone app with a hidden Easter Egg augmented reality feature called Monocle, which allowed users looking through their iPhone camera to see Yelp data on businesses seen through the camera. Check-in features were added in 2010. Yelp users can make restaurant reservations in Yelp through Yelp Reservations, a feature initially added in June 2010; in 2021 the service was consolidated with others into "Yelp Guest Manager". Yelp's reservation features have been done through SeatMe, which was acquired by Yelp in 2013. Prior to that, Yelp had offered reservation services through OpenTable. In 2013, features to have food ordered and delivered were added to Yelp as well as the ability to view hygiene inspection scores and make appointments at spas. Yelp's content was integrated into Apple Inc.'s Siri "virtual assistant" and the mapping and directions app of Apple's September 2012 release of the iOS 6 computer operating system. In March 2014, Yelp added features for ordering and scheduling manicures, flower deliveries, golf games, and legal consultations, among other things. In October 2014, the company, working in collaboration with hotel search site Hipmunk, added features to book hotels through Yelp. Yelp started a 7–10% cash-back program at some US restaurants in 2016 through a partnership with Empyr, which links credit card purchases to online advertising. On February 14, 2017, Yelp launched Yelp Questions and Answers, a feature for users to ask venue-specific questions about businesses. In June 2020, Yelp launched a COVID-19 section that enables businesses to update their health and safety measures as well as their service offering changes. Starting January 2021, users can provide detailed feedback regarding what health and safety measures the business has implemented through editing in the COVID-19 section on Yelp business pages. In April 2023, Yelp introduced Yelp Guaranteed, which provides a refund of up to $2,500 if something goes wrong with a project. It also improved its search features with AI and added the option to add video to reviews. 
In April 2024, Yelp released Yelp Assistant, an AI chatbot that helps users find a professional for a project. It also introduced an API that allows developers to search Yelp data from other applications, and made other improvements. Features for businesses Yelp added the ability for business owners to respond to reviews in 2008. Businesses can respond privately by messaging the reviewer or publicly on their profile page. In some cases, Yelp users that had a bad experience have updated their reviews more favorably due to the business's efforts to resolve their complaints. In some other cases, disputes between reviewers and business owners have led to harassment and physical altercations. The system has led to criticisms that business owners can bribe reviewers with free food or discounts to increase their rating. However, Yelp users say this rarely occurs. A business owner can "claim" a profile, which allows them to respond to reviews and see traffic reports. Businesses can also offer discounts to Yelp users that visit often using a Yelp "check in" feature. In 2014, Yelp released an app for business owners to respond to reviews and manage their profiles from a mobile device. Business owners can also flag a review to be removed, if the review violates Yelp's content guidelines. Yelp's revenues primarily come from selling ads and sponsored listings to small businesses. Advertisers can pay to have their listing appear at the top of search results or feature ads on the pages of their competitors. In 2016, advertising revenue grew at a rate of 30% year over year. Yelp will only allow businesses with at least a three-star rating to sign up for advertising. Originally a sponsored "favorite review" could place a positive review above negative ones, but Yelp stopped offering this option in 2010 in an effort to deter the valid criticism that advertisers were able to obtain a more positive review appearance in exchange for pay. On June 5, 2020, Yelp launched a tool to allow businesses on the platform to identify themselves as black-owned, allowing customers to search for black-owned companies they want to support. There were more than 2.5 million searches for black-owned businesses on Yelp from May 25 to July 10. Searches for black-owned businesses were up 2,400% in 2020. In August 2021 Yelp added a feature to let users filter businesses based on their COVID precautions. Relationship with businesses A Harvard Business School study published in 2011 found that each "star" in a Yelp rating affected the business owner's sales by 5–9%. A 2012 study by two University of California, Berkeley economists found that an increase from 3.5 to 4 stars on Yelp resulted in a 19% increase in the chances of the restaurant being booked during peak hours. A 2014 survey of 300 small business owners done by Yodle found that 78% were concerned about negative reviews. Also, 43% of respondents said they felt online reviews were unfair, because there is no verification that the review is written by a legitimate customer. Controversy and litigation Yelp has a complicated relationship with small businesses. Criticism of Yelp continues to focus on the legitimacy of reviews, public statements of Yelp manipulating and blocking reviews in order to increase ad spending, as well as concerns regarding the privacy of reviewers. Astroturfing As Yelp became more influential, the phenomenon of business owners and competitors writing fake reviews, known as "astroturfing", became more prevalent. 
A study from Harvard Associate Professor Michael Luca and Georgios Zervas of Boston University analyzed 316,415 reviews in Boston and found that the percentage of fake reviews rose from 6% of the site's reviews in 2006 to 20% in 2014. Yelp's own review filter identifies 25% of reviews as suspicious. Yelp has a proprietary algorithm that attempts to evaluate whether a review is authentic and filters out reviews that it believes are not based on a patron's actual personal experiences, as required by the site's Terms of Use. The review filter was first developed two weeks after the site was founded and the company saw their "first obviously fake reviews". Filtered reviews are moved into a special area and not counted towards the businesses' star-rating. The filter sometimes filters legitimate reviews, leading to complaints from business owners. New York Attorney General Eric T. Schneiderman said Yelp has "the most aggressive" astroturfing filter out of the crowd-sourced websites it looked into. Yelp has also been criticized for not disclosing how the filter works, which it says would reveal information on how to defeat it. Yelp also conducts "sting operations" to uncover businesses writing their own reviews. In October 2012, Yelp placed a 90-day "consumer alert" on 150 business listings believed to have paid for reviews. The alert read "We caught someone red-handed trying to buy reviews for this business". In June 2013, Yelp filed a lawsuit against BuyYelpReview/AdBlaze for allegedly writing fake reviews for pay. In 2013, Yelp sued a lawyer it alleged was part of a group of law firms that exchanged Yelp reviews, saying that many of the firm's reviews originated from their own office. The lawyer said Yelp was trying to get revenge for his legal disputes and activism against Yelp. An effort to win dismissal of the case was denied in December 2014. In September 2013, Yelp cooperated with Operation Clean Turf, a sting operation by the New York Attorney General that uncovered 19 astroturfing operations. In April 2017, a Norfolk, Massachusetts, jury awarded a jewelry store over $34,000 after it determined that its competitor's employee had filed a false negative Yelp review that knowingly caused emotional distress. In December 2019, Yelp won a court case that challenged the company's explanation of how its review recommendation software worked. The court ruling stated that "None of the evidence presented at the trial showed anything nefarious or duplicitous on the part of Yelp in connection with the assertions made in the Challenged Statements." This was one of a number of court cases that ruled in favor of Yelp over the years. Alleged unfair business practices Yelp has a complicated relationship with small businesses. There have been allegations that Yelp has manipulated reviews based on participation in its advertising programs. Many business owners have said that Yelp salespeople have offered to remove or suppress negative reviews if they purchase advertising. Others report seeing negative reviews featured prominently and positive reviews buried, and then soon afterwards, they would receive calls from Yelp attempting to sell paid advertising. Yelp staff acknowledged that they had allowed their advertising partners to move their favorite review to the top of the listings as a "featured review", but said the reviews were not otherwise manipulated to favor the partner businesses. 
Such featured reviews were shown with a strip above them that said "One of [Insert Business Here]'s Favorite Reviews" and "This business is a Yelp sponsor." The company also said it might have had some rogue salespeople who misrepresented their practices when selling advertising services. In response to the criticism that it had allowed advertising partners to manipulate review listings, Yelp ceased its "featured review" practice in 2010. Several lawsuits have been filed against Yelp accusing it of extorting businesses into buying advertising products. Each has been dismissed by a judge before reaching trial. In February 2010, a class-action lawsuit was filed against Yelp alleging it asked a Long Beach veterinary hospital to pay $300 a month for advertising services that included the suppression or deletion of disparaging customer reviews. The following month, nine additional businesses joined the class-action lawsuit, and two similar lawsuits were filed. That May the lawsuits were combined into one class-action lawsuit, which was dismissed by San Francisco U.S. District Judge Edward Chen in 2011. Chen said the reviews were protected by the Communications Decency Act of 1996 and that there was no evidence of manipulation by Yelp. The plaintiffs filed an appeal. In September 2014, the United States Court of Appeals for the Ninth Circuit upheld the dismissal, finding that even if Yelp did manipulate reviews to favor advertisers, this would not fall under the court's legal definition of extortion. In August 2013, Yelp launched a series of town hall style meetings in 22 major American cities in an effort to address concerns among local business owners. Many attendees expressed frustration with seeing Yelp remove positive reviews after they declined to advertise, receiving reviews from users that never entered the establishment, and other issues. A 2011 "working paper" published by Harvard Business School from Harvard Associate Professor Michael Luca and Georgios Zervas of Boston University found that there was no significant statistical correlation between being a Yelp advertiser and having more favorable reviews. The Federal Trade Commission received 2,046 complaints about Yelp from 2008 to 2014, most from small businesses regarding allegedly unfair or fake reviews or negative reviews that appear after declining to advertise. According to Yelp, the Federal Trade Commission finished a second examination of Yelp's practices in 2015 and in both cases did not pursue an action against the company. Journalist David Lazarus of the Los Angeles Times also criticized Yelp in 2014 for the practice of selling competitors' ads to run on top of business listings and then offering to have the ads removed as part of a paid feature. The 2019 film Billion Dollar Bully documents Yelp's alleged business practices. In 2018, in the case Hassell v. Bird, the California Supreme Court held by a narrow 4–3 margin that a business cannot force Yelp to remove a review, even if the review is defamatory of the business. A 2019 investigation by Vice News and the podcast Underunderstood found that in some cases, Yelp was replacing restaurants' direct phone numbers with numbers that routed through GrubHub, which would then charge restaurants for the calls under marketing agreements GrubHub has with restaurants. 
Political expression and politically motivated ratings Eater reported that between 2012 and 2015, a number of users who review restaurants on the site have posted reviews that contained comments about the political activities and political views of businesses and their owners or have submitted ratings affected by political motivations. The article found that in some instances, the Yelp review area for a business has become flooded with such review submissions after a business was involved in politically sensitive action. Yelp has removed reviews of this nature and has tried to suppress their submission. Litigation over review content According to data compiled in 2014 by the Wall Street Journal, Yelp receives about six subpoenas a month asking for the names of anonymous reviewers, mostly from business owners seeking litigation against those writing negative reviews. In 2012, the Alexandria Circuit Court and the Virginia Court of Appeals held Yelp in contempt for refusing to disclose the identities of seven reviewers who anonymously criticized a carpet-cleaning business. In 2014, Yelp appealed to the Virginia Supreme Court. A popular public argument in favor of Yelp at the time was that a ruling against Yelp would negatively affect free speech online. The judge from an early ruling said that if the reviewers did not actually use the businesses' services, their communications would be false claims not protected by free speech laws. The Virginia Supreme Court ruled that Yelp, a non-resident company in the state of Virginia, could not be subpoenaed by a lower court. Also in 2014, a California state law was enacted that prohibits businesses from using "disparagement clauses" in their contracts or terms of use that allow them to sue or fine customers that write negatively about them online. Investigations A 2020 Business Insider investigation questioned the culture, ethics and practices within Yelp. An April 2022 Vice article highlighted that some Elite reviewers use their status to sell reviews. Community According to Inc. Magazine most reviewers (sometimes called "Yelpers") are "well-intentioned" and write reviews in order to express themselves, improve their writing, or to be creative. In some cases, they write reviews in order to lash out at corporate interests or businesses they dislike. Reviewers may also be motivated by badges and honors, such as being the first to review a new location, or by praise and attention from other users. Many reviews are written in an entertaining or creative manner. Users can give a review a "thumbs-up" rating, which will cause it to be ranked higher in the review listings. Each day a "Review of the Day" is determined based on a vote by users. According to The Discourse of Online Consumer Reviews many Yelp reviewers are internet-savvy adults aged 18–25 or "suburban baby boomers". Reviewers are encouraged to use real names and photos. Each year members of the Yelp community are invited or self-nominated to the "Yelp Elite Squad" and some are accepted based on an evaluation of the quality and frequency of their reviews. Members may nominate other reviewers for elite status. Users must use their real name and photo on Yelp to qualify for the Elite Squad. To accept a nomination, members must not own a business. Elite Squad Yelpers are governed by a council and estimated to include several thousand members. Yelp does not disclose how the Yelp Elite are selected. Elite Squad members are given different color badges based on how long they've been an elite member. 
The Yelp Elite Squad originated with parties Yelp began throwing for members in 2005, and in 2006 it was formally codified; the name came from a joking reference to prolific reviewers who were invited to Yelp parties as the "Yelp Elite Squad". Members are invited to special opening parties, given gifts, and receive other perks. As of 2017, there are over 80 local Elite Squads in North America. As of 2017, Yelp employed a staff of over 80 community managers who organize parties for prolific reviewers, send encouraging messages to reviewers, and host classes for small business owners. Yelp reviewers are not required to disclose their identity, but Yelp encourages them to do so. See also Crowdsourcing Reputation management You're Not Yelping References External links Official websites United States United Kingdom Yelp Reservations official website 2012 initial public offerings American companies established in 2004 American review websites Android (operating system) software Companies based in San Francisco Companies listed on the New York Stock Exchange Consumer guides Geosocial networking Internet properties established in 2004 IOS software Online companies of the United States Recommender systems Restaurant guides South of Market, San Francisco WatchOS software Windows Phone software Volunteered geographic information
Yelp
[ "Technology" ]
5,843
[ "Information systems", "Recommender systems" ]
4,179,861
https://en.wikipedia.org/wiki/White%20Lead%20%28Painting%29%20Convention%2C%201921
White Lead (Painting) Convention, 1921 is an International Labour Organization Convention established in 1921 to advance the prohibition of using white lead in paint. As of 2017, many leading global nations, including the United States, the United Kingdom, Germany, Japan, China and India, remain outside the convention. Ratifications As of 2013, the convention has been ratified by 63 states: References External links Text. Ratifications. Health treaties White Occupational safety and health treaties Treaties concluded in 1921 Treaties entered into force in 1923 Lead Treaties of the Kingdom of Afghanistan Treaties of Algeria Treaties of Argentina Treaties of the First Austrian Republic Treaties of Azerbaijan Treaties of Belgium Treaties of the Republic of Dahomey Treaties of Bosnia and Herzegovina Treaties of Bulgaria Treaties of Burkina Faso Treaties of the Kingdom of Cambodia (1953–1970) Treaties of Cameroon Treaties of the Central African Republic Treaties of Chad Treaties of Chile Treaties of Colombia Treaties of the Comoros Treaties of the Republic of the Congo Treaties of Ivory Coast Treaties of Croatia Treaties of Cuba Treaties of Czechoslovakia Treaties of the Czech Republic Treaties of Djibouti Treaties of Estonia Treaties of Finland Treaties of the French Third Republic Treaties of Gabon Treaties of the Kingdom of Greece Treaties of Guatemala Treaties of Guinea Treaties of the Hungarian People's Republic Treaties of the Iraqi Republic (1958–1968) Treaties of Italy Treaties of the Kingdom of Laos Treaties of Latvia Treaties of Madagascar Treaties of Luxembourg Treaties of Mali Treaties of Malta Treaties of Mauritania Treaties of Mexico Treaties of Morocco Treaties of the Netherlands Treaties of Nicaragua Treaties of Niger Treaties of Norway Treaties of Panama Treaties of the Second Polish Republic Treaties of the Kingdom of Romania Treaties of the Soviet Union Treaties of Senegal Treaties of Serbia and Montenegro Treaties of Slovakia Treaties of Slovenia Treaties of Spain under the Restoration Treaties of Suriname Treaties of Sweden Treaties of North Macedonia Treaties of Togo Treaties of Tunisia Treaties of Uruguay Treaties of Venezuela Treaties of Montenegro Treaties extended to French Algeria Treaties extended to the French Southern and Antarctic Lands Treaties extended to Clipperton Island Treaties extended to French Comoros Treaties extended to French Somaliland Treaties extended to French Guiana Treaties extended to French Polynesia Treaties extended to Guadeloupe Treaties extended to Martinique Treaties extended to Mayotte Treaties extended to New Caledonia Treaties extended to Réunion Treaties extended to Saint Pierre and Miquelon Treaties extended to Wallis and Futuna Treaties extended to Surinam (Dutch colony) Chemical safety 1921 in labor relations
White Lead (Painting) Convention, 1921
[ "Chemistry" ]
470
[ "Chemical safety", "Chemical accident", "nan" ]
4,179,924
https://en.wikipedia.org/wiki/Crypt%20%28Unix%29
In Unix computing, crypt or enigma is a utility program used for encryption. Due to the ease of breaking it, it is considered to be obsolete. The program is usually used as a filter, and it has traditionally been implemented using a "rotor machine" algorithm based on the Enigma machine. It is considered to be cryptographically far too weak to provide any security against brute-force attacks by modern, commodity personal computers. Some versions of Unix shipped with an even weaker version of the crypt(1) command in order to comply with contemporaneous laws and regulations that limited the exportation of cryptographic software. Some of these were simply implementations of the Caesar cipher (effectively no more secure than ROT13, which is implemented as a Caesar cipher with a well-known key). History Cryptographer Robert Morris wrote an M-209-based crypt, which first appeared in Version 3 Unix, to encourage codebreaking experiments; Morris managed to break it by hand. Dennis Ritchie automated decryption with a method by James Reeds, and a new Enigma-based version appeared in Version 7, which Reeds and Peter J. Weinberger also broke. crypt(1) under Linux Linux distributions generally do not include a Unix compatible version of the crypt command. This is largely due to a combination of three major factors: crypt is relatively obscure and rarely used for e-mail attachments or as a file format crypt is considered to be cryptographically far too weak to withstand brute-force attacks by modern computing systems (Linux systems generally ship with GNU Privacy Guard which is considered to be reasonably secure by modern standards) During the early years of Linux development and adoption there was some concern that, weak as the algorithm used by crypt was, it might still run afoul of ITAR's export controls; so mainstream distribution developers in the United States generally excluded it, leaving their customers to fetch GnuPG or other strong cryptographic software from international sites, sometimes providing packages or scripts to automate that process. The source code to several old versions of the crypt command is available in The Unix Heritage Society's Unix Archive. The recent crypt source code is available in the OpenSolaris project. A public domain version is available from the Crypt Breaker's Workbench. Enhanced symmetric encryption utilities are available for Linux (and should also be portable to any other Unix-like system) including mcrypt and ccrypt. While these provide support for much more sophisticated and modern algorithms, they can be used to encrypt and decrypt files which are compatible with the traditional crypt(1) command by providing the correct command line options. Breaking crypt(1) encryption Programs for breaking crypt(1) encryption are widely available. Bob Baldwin's public domain Crypt Breaker's Workbench, which was written in 1984-1985, is an interactive tool that provides successive plaintext guesses that must be corrected by the user. It also provides a working crypt(1) implementation used by modern BSD distributions. Peter Selinger's unixcrypt-breaker uses a simple statistical model similar to a dictionary-attack that takes a set of plain texts as input and processes it to guess plausible plaintexts, and does not require user interaction. 
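To see concretely how little protection a Caesar-grade scheme offers, consider the small Python sketch below. It is a toy stand-in, not the actual crypt(1) rotor algorithm: the byte-wise cipher, the function names, and the "printable ASCII" plausibility test are inventions for this illustration.

# Toy sketch: brute-forcing a byte-wise Caesar cipher, a stand-in for the
# export-weakened crypt variants (NOT the real crypt(1) rotor algorithm).
def caesar(data: bytes, key: int) -> bytes:
    return bytes((b + key) % 256 for b in data)

ciphertext = caesar(b"attack at dawn", 131)

# Only 256 candidate keys exist, so try every one and keep any guess that
# decodes to printable ASCII, a crude plausibility filter in the spirit of
# the statistical guessing that tools like unixcrypt-breaker automate.
for key in range(256):
    guess = caesar(ciphertext, -key)
    if all(32 <= b < 127 for b in guess):
        print(key, guess)

The loop finishes essentially instantly, recovering the plaintext among a handful of printable candidates; a real rotor-machine keyspace is larger, but, as the tools above show, still far too small to resist modern attacks.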
Relationship to password hash function There is also a Unix password hash function with the same name, crypt. Though both are used for securing data in some sense, they are otherwise essentially unrelated. To distinguish between the two, writers often refer to the utility program as crypt(1), because it is documented in section 1 of the Unix manual pages, and refer to the password hash function as crypt(3), because its documentation is in section 3 of the manual. See also crypt – an unrelated Unix C library function Key derivation function References External links Source code for crypt(1) from OpenSolaris (published after clearing up export regulations) Source code for crypt(1) from Version 7 Unix (trivialised one-rotor Enigma-style machine) Source code for crypt(1) from Version 6 Unix (implementation of Boris Hagelin's M-209 cryptographic machine) Unix security-related software Cryptographic software Broken cryptography algorithms
Crypt (Unix)
[ "Mathematics" ]
848
[ "Cryptographic software", "Mathematical software" ]
4,180,022
https://en.wikipedia.org/wiki/Taoist%20sexual%20practices
Taoist sexual practices are the ways Taoists may practice sexual activity. These practices are also known as "joining energy" or "the joining of the essences". Practitioners believe that by performing these sexual arts, one can stay in good health, and attain longevity or spiritual advancement. History Some Taoist sects during the Han dynasty performed sexual intercourse as a spiritual practice, called héqì (lit. "joining energy"). The earliest sexual texts that survive today are those found at Mawangdui. While Taoism had not yet fully evolved as a religion at this time, these texts shared some remarkable similarities with later Tang dynasty texts, such as the Ishinpō. The sexual arts arguably reached their climax between the end of the Han dynasty and the end of the Tang dynasty. After AD 1000, Confucian restraining attitudes towards sexuality became stronger, so that by the beginning of the Qing dynasty in 1644, sex was a taboo topic in public life. These Confucians alleged that the separation of genders in most social activities existed 2,000 years ago and suppressed the sexual arts. Because of the taboo surrounding sex, there was much censoring done during the Qing in literature, and the sexual arts disappeared in public life. As a result, some of the texts survived only in Japan, and most scholars had no idea that such a different concept of sex existed in early China. Ancient and medieval practices Qi (lifeforce) and jing (essence) The basis of much Taoist thinking is that qi is part of everything in existence. Qi is related to another energetic substance contained in the human body known as jing, and once all this has been expended the body dies. Jing can be lost in many ways, but most notably through the loss of body fluids. Taoists may use practices to stimulate/increase and conserve their bodily fluids to great extents. The fluid believed to contain the most jing is semen. Therefore, some Taoists believe in decreasing the frequency of, or totally avoiding, ejaculation in order to conserve life essence. Male control of ejaculation Many Taoist practitioners link the loss of ejaculatory fluids to the loss of vital life force: where excessive fluid loss results in premature aging, disease, and general fatigue. While some Taoists contend that one should never ejaculate, others provide a specific formula to determine the maximum number of regular ejaculations in order to maintain health. The general idea is to limit the loss of fluids as much as possible to the level of your desired practice. As these sexual practices were passed down over the centuries, some practitioners have given less importance to the limiting of ejaculation. This variety has been described as "...while some declare non-ejaculation injurious, others condemn ejaculating too fast in too much haste." Nevertheless, the "retention of the semen" is one of the foundational tenets of Taoist sexual practice. There are different methods to control ejaculation prescribed by the Taoists. In order to avoid ejaculation, the man could do one of several things. He could pull out immediately before orgasm, a method also more recently termed as "coitus conservatus." A second method involved the man applying pressure on the perineum, thus retaining the sperm. While if done incorrectly this can cause retrograde ejaculation, the Taoists believed that the jing traveled up into the head and "nourished the brain." Cunnilingus was believed to be ideal by preventing the loss of semen and vaginal liquids. 
Practice control Another important concept of "the joining of the essences" was that the union of a man and a woman would result in the creation of jing, a type of sexual energy. When in the act of lovemaking, jing would form, and the man could transform some of this jing into qi, and therefore replenish his lifeforce. By having as much sex as possible, men had the opportunity to transform more and more jing, and as a result would see many health benefits. Yin and yang The concept of yin and yang is important in Taoism and consequently also holds special importance in sex. Yang usually referred to the male sex, whereas yin could refer to the female sex. Man and woman were the equivalent of heaven and earth, but became disconnected. Therefore, while heaven and earth are eternal, man and woman suffer a premature death. Every interaction between yin and yang had significance. Because of this significance, every position and action in lovemaking had importance. Taoist texts described a large number of special sexual positions that served to cure or prevent illness, similar to the Kama Sutra. There was the notion that men released yang during orgasm, while women shed yin during theirs. Every orgasm from the user would nourish the partner's energy. Women For Taoists, sex was not just about pleasing a man. The woman also had to be stimulated and pleased in order to benefit from the act of sex. Sunü (素女), female advisor to the Yellow Emperor Huangdi (黃帝), noted ten important indications of female satisfaction. If sex were performed in this manner, the woman would create more jing, and the man could more easily absorb the jing to increase his own qi. According to Jolan Chang, in early Chinese history, women played a significant role in the Tao of loving, and the degeneration into subordinate roles came much later in Chinese history. Women were also given a prominent place in the Ishinpō, with the tutor being a woman. One of the reasons women had a great deal of strength in the act of sex was that they walked away undiminished from the act. The woman had the power to bring forth life, and did not have to worry about ejaculation or refractory period. To quote Laozi from the Tao Te Ching: "The Spirit of the Valley is inexhaustible... Draw on it as you will, it never runs dry." Many of the ancient texts were dedicated explanations of how a man could use sex to extend his own life, but his life was extended only through the absorption of the woman's vital energies (jing and qi). Some Taoists came to call the act of sex "the battle of stealing and strengthening". These sexual methods could be correlated with Taoist military methods. Instead of storming the gates, the battle was a series of feints and maneuvers that would sap the enemy's resistance. Fang described this battle as "the ideal was for a man to 'defeat' the 'enemy' in the sexual 'battle' by keeping himself under complete control so as not to emit semen, while at the same time exciting the woman until she reached orgasm and shed her Yin essence, which was then absorbed by the man." Jolan Chang points out that it was after the Tang dynasty (AD 618–906) that "the Tao of Loving" was "steadily corrupted", and that it was these later corruptions that reflected battle imagery and elements of a "vampire" mindset. Other research into early Taoism found more harmonious attitudes of yin-yang communion. 
Multiple partners This practice was not limited to male on female, however, as it was possible for women to do the same in turn with the male yang. The deity known as the Queen Mother of the West was described to have no husband, instead having intercourse with young virgin males to nourish her female element. Age of partners Some Ming dynasty Taoist sects believed that one way for men to achieve longevity or 'towards immortality' is by having intercourse with virgins, particularly young virgins. Taoist sexual books by Liangpi and Sanfeng call the female partner ding (鼎) and recommend sex with premenarche virgins. Liangpi concludes that the ideal ding is a pre-menarche virgin just under 14 years of age and women older than 18 should be avoided. Sanfeng went further and divided ding partners into three ranks of descending importance: premenarche virgins aged 14-16, menstruating virgins aged 16-20 and women aged 21-25. According to Ge Hong, a 4th-century Taoist alchemist, "those seeking 'immortality' must perfect the absolute essentials. These consist of treasuring the jing, circulating the qi, and consuming the great medicine." The sexual arts concerned the first precept, treasuring the jing. This is partially because treasuring the jing involved sending it up into the brain. In order to send the jing into the brain, the male had to refrain from ejaculation during sex. According to some Taoists, if this was done, the jing would travel up the spine and nourish the brain instead of leaving the body. Ge Hong also states, however, that it is folly to believe that performing the sexual arts only can achieve immortality and some of the ancient myths on sexual arts had been misinterpreted and exaggerated. Indeed, the sexual arts had to be practiced alongside alchemy to attain longevity. Ge Hong also warned it could be dangerous if practiced incorrectly. See also Aiki (Japanese) Huanjing bunao Jiutian Xuannü, goddess of sexuality as well as warfare and longevity Sex magic Tantric sex Yangsheng (Daoism) Notes References Contemporary texts David Deida. The Superior Lover. 2001. Chang, Jolan. The Tao of Love and Sex. Plume, 1977. Chang, Stephen T.. The Tao of Sexology: The Book of Infinite Wisdom. Tao Longevity LLC, 1986. Chia, Mantak and Maneewan. Healing Love Through the Tao: Cultivating Female Sexual Energy. Healing Tao, 1986. Chia, Mantak and Michael Winn. Taoist Secrets of Love: Cultivating Male Sexual Energy. Aurora, 1984. Chia, Mantak and Douglas Abrams Arava. The Multi-Orgasmic Man. HarperCollins, 1996. Chia, Mantak and Maneewan. The Multi-Orgasmic Couple. HarperOne, 2002. Chia, Mantak and Rachel Carlton Abrams. The Multi-Orgasmic Woman. Rodale, 2005. Frantzis, Bruce. Taoist Sexual Meditation. North Atlantic Books, 2012. Holden, Lee and Rachel Carlton Abrams. Taoist Sexual Secrets: Harness Your Qi Energy for Ecstasy, Vitality, and Transformation - Audio CD set. Sounds True, 2010. Hsi Lai. The Sexual Teachings of the White Tigress: Secrets of the Female Taoist Masters. Destiny Books, 2001. Needham, Joseph. Science and Civilization in China, 5:2. Cambridge: Cambridge University, 1983. Reid, Daniel P. The Tao of Health, Sex & Longevity. Simon & Schuster, 1989. Robinet, Isabelle. Taoism: Growth of a Religion (Stanford: Stanford University Press, 1997 [original French 1992]). Van Gulik, Robert. The Sexual Life of Ancient China: A Preliminary Survey of Chinese Sex and Society from ca. 1500 B.C. till 1644 A.D. Leiden: Brill, 1961. Ruan Fang Fu. 
Sex in China: Studies in Sexology in Chinese Culture Plenum Press, 1991. Wik, Mieke and Stephan. Beyond Tantra: Healing through Taoist Sacred Sex. Findhorn Press, 2005. Wile, Douglas. The Art of the Bedchamber: The Chinese Sexual Yoga Classics including Women's Solo Meditation Texts. Albany: State University of New York, 1992. Zettnersan, Chian. Taoist Bedroom Secrets, Twin Lakes, WI: Lotus Press, 2002. Classical texts Su Nu Jing Health Benefits of the Bedchamber Ishinpō (醫心方) "Priceless Recipe" by Sun S'su-Mo (Tang) "Hsiu Chen Yen I" by Wu Hsien (Han) External links Chinese Sexology "Seizing Immortality from the Jaws of Impermanence" The Great Tao Answers to Everyday Problems. History of Taoist Sexual Development in China Sample of the Taoist Manuals Sexology Sexual acts Sexuality and religion Sexuality in China Taoist practices
Taoist sexual practices
[ "Biology" ]
2,514
[ "Behavior", "Sexual acts", "Sexology", "Behavioural sciences", "Sexuality", "Mating" ]
4,180,667
https://en.wikipedia.org/wiki/Tolman%E2%80%93Oppenheimer%E2%80%93Volkoff%20limit
The Tolman–Oppenheimer–Volkoff limit (or TOV limit) is an upper bound to the mass of cold, non-rotating neutron stars, analogous to the Chandrasekhar limit for white dwarf stars. Stars more massive than the TOV limit collapse into a black hole. The original calculation in 1939, which neglected complications such as nuclear forces between neutrons, placed this limit at approximately 0.7 solar masses (M☉). Later, more refined analyses have resulted in larger values. Theoretical work in 1996 placed the limit at approximately 1.5 to 3.0 M☉, corresponding to an original stellar mass of 15 to 20 M☉; additional work in the same year gave a more precise range of 2.2 to 2.9 M☉. Data from GW170817, the first gravitational wave observation attributed to merging neutron stars (thought to have collapsed into a black hole within a few seconds after merging), placed the limit in the range of 2.01 to 2.17 M☉. In the case of a rigidly spinning neutron star, meaning that different levels in the interior of the star all rotate at the same rate, the mass limit is thought to increase by up to 18–20%. History The idea that there should be an absolute upper limit for the mass of a cold (as distinct from thermal pressure supported) self-gravitating body dates back to the 1932 work of Lev Landau, based on the Pauli exclusion principle. Pauli's principle shows that the fermionic particles in sufficiently compressed matter would be forced into energy states so high that their rest mass contribution would become negligible when compared with the relativistic kinetic contribution (RKC). RKC is determined just by the relevant quantum wavelength λ, which would be of the order of the mean interparticle separation. In terms of Planck units, with the reduced Planck constant ħ, the speed of light c, and the gravitational constant G all set equal to one, there will be a corresponding pressure given roughly by P = 1/λ⁴. At the upper mass limit, that pressure will equal the pressure needed to resist gravity. The pressure to resist gravity for a body of mass M will be given according to the virial theorem roughly by P³ = M²ρ⁴, where ρ is the density. This will be given by ρ = m/λ³, where m is the relevant mass per particle. It can be seen that the wavelength cancels out so that one obtains an approximate mass limit formula of the very simple form M = 1/m². In this relationship, m can be taken to be given roughly by the proton mass. This even applies in the white dwarf case (that of the Chandrasekhar limit) for which the fermionic particles providing the pressure are electrons. This is because the mass density is provided by the nuclei in which the neutrons are at most about as numerous as the protons. Likewise the protons, for charge neutrality, must be exactly as numerous as the electrons outside. 
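As a quick numeric sanity check (a back-of-the-envelope calculation added here for illustration, not part of the original argument), the Planck-unit result M = 1/m² translates to M ≈ m_Planck³/m_proton² in ordinary units, which a few lines of Python can evaluate:

# Order-of-magnitude check of the Landau-style limit M ~ m_Planck^3 / m_proton^2.
# Constants in SI units.
hbar = 1.054571817e-34     # reduced Planck constant, J*s
c = 2.99792458e8           # speed of light, m/s
G = 6.67430e-11            # gravitational constant, m^3 kg^-1 s^-2
m_proton = 1.67262192e-27  # proton mass, kg
m_sun = 1.98892e30         # solar mass, kg

m_planck = (hbar * c / G) ** 0.5         # Planck mass, ~2.18e-8 kg
m_limit = m_planck ** 3 / m_proton ** 2  # crude upper mass limit, kg
print(m_limit / m_sun)                   # ~1.8, i.e. roughly 1.8 solar masses

The answer, roughly 1.8 M☉, lands in the same ballpark as the refined limits quoted above, which is all a dimensional argument of this kind can be expected to deliver.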
In the case of neutron stars this limit was first worked out by J. Robert Oppenheimer and George Volkoff in 1939, using the work of Richard Chace Tolman. Oppenheimer and Volkoff assumed that the neutrons in a neutron star formed a degenerate cold Fermi gas. They thereby obtained a limiting mass of approximately 0.7 solar masses, which was less than the Chandrasekhar limit for white dwarfs. Oppenheimer and Volkoff's paper notes that "the effect of repulsive forces, i.e., of raising the pressure for a given density above the value given by the Fermi equation of state ... could tend to prevent the collapse." And indeed, the most massive neutron star detected so far, PSR J0952–0607, is estimated to be much heavier than Oppenheimer and Volkoff's TOV limit, at approximately 2.35 M☉. More realistic models of neutron stars that include baryon strong force repulsion predict a neutron star mass limit of 2.2 to 2.9 M☉. The uncertainty in the value reflects the fact that the equations of state for extremely dense matter are not well known. Applications In a star less massive than the limit, the gravitational compression is balanced by short-range repulsive neutron–neutron interactions mediated by the strong force and also by the quantum degeneracy pressure of neutrons, preventing collapse. If its mass is above the limit, the star will collapse to some denser form. It could form a black hole, or change composition and be supported in some other way (for example, by quark degeneracy pressure if it becomes a quark star). Because the properties of hypothetical, more exotic forms of degenerate matter are even more poorly known than those of neutron-degenerate matter, most astrophysicists assume, in the absence of evidence to the contrary, that a neutron star above the limit collapses directly into a black hole. A black hole formed by the collapse of an individual star must have mass exceeding the Tolman–Oppenheimer–Volkoff limit. Theory predicts that because of mass loss during stellar evolution, a black hole formed from an isolated star of solar metallicity can have a mass of no more than approximately 10 solar masses. Observationally, because of their large mass, relative faintness, and X-ray spectra, a number of massive objects in X-ray binaries are thought to be stellar black holes. These black hole candidates are estimated to have masses between 3 and 20 solar masses. LIGO has detected black hole mergers involving black holes in the 7.5–50 solar mass range; it is possible – although unlikely – that these black holes were themselves the result of previous mergers. Oppenheimer and Volkoff discounted the influence of heat, stating in reference to work by Landau (1932), 'even [at] 10⁷ degrees... the pressure is determined essentially by the density only and not by the temperature' – yet it has been estimated that temperatures can reach approximately 10⁹ K or more during the formation of a neutron star, mergers and binary accretion. Another source of heat and therefore collapse-resisting pressure in neutron stars is 'viscous friction in the presence of differential rotation.' Oppenheimer and Volkoff's calculation of the mass limit of neutron stars also neglected to consider the rotation of neutron stars; however, we now know that neutron stars are capable of spinning at much faster rates than were known in Oppenheimer and Volkoff's time. The fastest-spinning neutron star known is PSR J1748-2446ad, rotating at a rate of 716 times per second or 43,000 revolutions per minute, giving a linear (tangential) speed at the surface on the order of 0.24c (i.e., nearly a quarter the speed of light). Star rotation interferes with convective heat loss during supernova collapse, so rotating stars are more likely to collapse directly to form a black hole. List of least massive black holes List of objects in mass gap This list contains objects that may be neutron stars, black holes, quark stars, or other exotic objects. This list is distinct from the list of least massive black holes due to the undetermined nature of these objects, largely because of indeterminate mass or other poor observational data. See also Tolman–Oppenheimer–Volkoff equation Oppenheimer–Snyder model Bekenstein bound Quark star Notes References Astrophysics Neutron stars Black holes J. Robert Oppenheimer
Tolman–Oppenheimer–Volkoff limit
[ "Physics", "Astronomy" ]
1,513
[ "Black holes", "Physical phenomena", "Physical quantities", "Unsolved problems in physics", "Astrophysics", "Density", "Stellar phenomena", "Astronomical objects", "Astronomical sub-disciplines" ]
4,180,990
https://en.wikipedia.org/wiki/Wallace%20tree
A Wallace multiplier is a hardware implementation of a binary multiplier, a digital circuit that multiplies two integers. It uses a selection of full and half adders (the Wallace tree or Wallace reduction) to sum partial products in stages until two numbers are left. Wallace multipliers reduce as much as possible on each layer, whereas Dadda multipliers try to minimize the required number of gates by postponing the reduction to the upper layers. Wallace multipliers were devised by the Australian computer scientist Chris Wallace in 1964. The Wallace tree has three steps: Multiply each bit of one of the arguments by each bit of the other. Reduce the number of partial products to two by layers of full and half adders. Group the wires in two numbers, and add them with a conventional adder. Compared to naively adding partial products with regular adders, the benefit of the Wallace tree is its faster speed. It has O(log n) reduction layers, but each layer has only O(1) propagation delay. A naive addition of the partial products, using a balanced tree of ordinary carry-propagate adders, would require O(log² n) time. As making the partial products is O(1) and the final addition is O(log n), the total multiplication is O(log n), not much slower than addition. From a complexity theoretic perspective, the Wallace tree algorithm puts multiplication in the class NC1. The downside of the Wallace tree, compared to naive addition of partial products, is its much higher gate count. These computations only consider gate delays and don't deal with wire delays, which can also be very substantial. The Wallace tree can also be represented by a tree of 3/2 or 4/2 adders. It is sometimes combined with Booth encoding. Detailed explanation The Wallace tree is a variant of long multiplication. The first step is to multiply each digit (each bit) of one factor by each digit of the other. Each of these partial products has weight equal to the product of its factors. The final product is calculated by the weighted sum of all these partial products. The first step, as said above, is to multiply each bit of one number by each bit of the other, which is accomplished with a simple AND gate per pair, resulting in n² bits for two n-bit inputs; the partial product of bit a_i by bit b_j has weight 2^(i+j). In the second step, the resulting bits are reduced to two numbers; this is accomplished as follows: As long as there are three or more wires with the same weight add a following layer: Take any three wires with the same weights and input them into a full adder. The result will be an output wire of the same weight and an output wire with a higher weight for each three input wires. If there are two wires of the same weight left, input them into a half adder. If there is just one wire left, connect it to the next layer. In the third and final step, the two resulting numbers are fed to an adder, obtaining the final product. 
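The reduction lends itself to a compact software model. The Python sketch below is a behavioral illustration with invented names, operating on lists of 0/1 wire values rather than gates; it follows the three steps just described, and its layer-by-layer wire counts should mirror the worked example that follows.

from collections import defaultdict

def wallace_multiply(a: int, b: int, n: int = 8) -> int:
    # Step 1: partial products a_i AND b_j, grouped by weight 2^(i+j).
    cols = defaultdict(list)
    for i in range(n):
        for j in range(n):
            cols[i + j].append((a >> i) & (b >> j) & 1)
    # Step 2: layers of full/half adders until every column holds <= 2 wires.
    while any(len(col) > 2 for col in cols.values()):
        nxt = defaultdict(list)
        for w in sorted(cols):
            col = cols[w]
            while len(col) >= 3:          # full adder: three wires in,
                x, y, z = col.pop(), col.pop(), col.pop()
                nxt[w].append(x ^ y ^ z)  # sum wire at the same weight,
                nxt[w + 1].append((x & y) | (x & z) | (y & z))  # carry above
            if len(col) == 2:             # half adder: two wires in
                x, y = col.pop(), col.pop()
                nxt[w].append(x ^ y)
                nxt[w + 1].append(x & y)
            nxt[w].extend(col)            # a lone wire passes to the next layer
        cols = nxt
    # Step 3: group the two remaining rows and add them conventionally.
    return sum(bit << w for w, col in cols.items() for bit in col)

assert wallace_multiply(13, 11) == 143

The full and half adders preserve the numeric value at every layer (x + y + z equals the sum bit plus twice the carry bit), which is why the final conventional addition yields the exact product.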
Example: multiplying a3a2a1a0 by b3b2b1b0 (n = 4). First we multiply every bit by every bit: weight 1 – a0b0 weight 2 – a1b0, a0b1 weight 4 – a2b0, a1b1, a0b2 weight 8 – a3b0, a2b1, a1b2, a0b3 weight 16 – a3b1, a2b2, a1b3 weight 32 – a3b2, a2b3 weight 64 – a3b3 Reduction layer 1: Pass the only weight-1 wire through, output: 1 weight-1 wire Add a half adder for weight 2, outputs: 1 weight-2 wire, 1 weight-4 wire Add a full adder for weight 4, outputs: 1 weight-4 wire, 1 weight-8 wire Add a full adder for weight 8, and pass the remaining wire through, outputs: 2 weight-8 wires, 1 weight-16 wire Add a full adder for weight 16, outputs: 1 weight-16 wire, 1 weight-32 wire Add a half adder for weight 32, outputs: 1 weight-32 wire, 1 weight-64 wire Pass the only weight-64 wire through, output: 1 weight-64 wire Wires at the output of reduction layer 1: weight 1 – 1 weight 2 – 1 weight 4 – 2 weight 8 – 3 weight 16 – 2 weight 32 – 2 weight 64 – 2 Reduction layer 2: Add a full adder for weight 8, and half adders for weights 4, 16, 32, 64 Outputs: weight 1 – 1 weight 2 – 1 weight 4 – 1 weight 8 – 2 weight 16 – 2 weight 32 – 2 weight 64 – 2 weight 128 – 1 Group the wires into a pair of integers and use an adder to add them. See also Dadda tree References Further reading External links Generic VHDL Implementation of Wallace Tree Multiplier. Arithmetic logic circuits Computer arithmetic Multiplication 1964 introductions 1964 in science
Wallace tree
[ "Mathematics" ]
927
[ "Computer arithmetic", "Arithmetic" ]
4,181,062
https://en.wikipedia.org/wiki/Dadda%20multiplier
The Dadda multiplier is a hardware binary multiplier design invented by computer scientist Luigi Dadda in 1965. It uses a selection of full and half adders to sum the partial products in stages (the Dadda tree or Dadda reduction) until two numbers are left. The design is similar to the Wallace multiplier, but the different reduction tree reduces the required number of gates (for all but the smallest operand sizes) and makes it slightly faster (for all operand sizes). Dadda and Wallace multipliers have the same three steps for two bit strings w1 and w2 of lengths ℓ1 and ℓ2 respectively: Multiply (logical AND) each bit of w1 by each bit of w2, yielding ℓ1 · ℓ2 results, grouped by weight in columns Reduce the number of partial products by stages of full and half adders until we are left with at most two bits of each weight. Add the final result with a conventional adder. As with the Wallace multiplier, the multiplication products of the first step carry different weights reflecting the magnitude of the original bit values in the multiplication. For example, the product of bit a_i and bit b_j has weight 2^(i+j). Unlike Wallace multipliers that reduce as much as possible on each layer, Dadda multipliers attempt to minimize the number of gates used, as well as input/output delay. Because of this, Dadda multipliers have a less expensive reduction phase, but the final numbers may be a few bits longer, thus requiring slightly bigger adders. Description To achieve a more optimal final product, the structure of the reduction process is governed by slightly more complex rules than in Wallace multipliers. The progression of the reduction is controlled by a maximum-height sequence d_j, defined by d_1 = 2 and d_{j+1} = floor(1.5 · d_j). This yields a sequence like so: 2, 3, 4, 6, 9, 13, 19, 28, ... The initial value of j is chosen as the largest value such that d_j < min(n1, n2), where n1 and n2 are the number of bits in the input multiplicand and multiplier. The lesser of the two bit lengths will be the maximum height of each column of weights after the first stage of multiplication. For each stage j of the reduction, the goal of the algorithm is to reduce the height of each column so that it is less than or equal to the value of d_j. For each stage j, reduce each column i starting at the lowest-weight column, according to these rules: If the column has at most d_j bits, it does not require reduction; move to column i+1 If the column has exactly d_j + 1 bits, add the top two elements in a half-adder, placing the result at the bottom of the column and the carry at the bottom of column i+1, then move to column i+1 Else, add the top three elements in a full-adder, placing the result at the bottom of the column and the carry at the bottom of column i+1, then restart at step 1 
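The height sequence and the starting stage are easy to compute; the short Python helper below (an illustrative sketch, not canonical code) does so for given operand widths:

def dadda_heights(n1, n2):
    # d_1 = 2, d_{j+1} = floor(1.5 * d_j), kept while d_j < min(n1, n2):
    # min(n1, n2) is the tallest column after partial-product generation.
    limit = min(n1, n2)
    d, seq = 2, []
    while d < limit:
        seq.append(d)
        d = d * 3 // 2
    return seq

print(dadda_heights(8, 8))  # [2, 3, 4, 6]: an 8 x 8 reduction starts at d_4 = 6

The stages then run from the last entry of the list back down to d_1 = 2, which is exactly the progression followed in the example below.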
Algorithm example The following example (originally accompanied by a diagram) illustrates the reduction of an 8 × 8 multiplier, explained here. The initial state is chosen as d_4 = 6, the largest value less than 8. Stage j = 4 (d_4 = 6) w0 through w5 are all less than or equal to six bits in height, so no changes are made w6 is seven bits, so a half-adder is applied, reducing it to six bits and adding its carry bit to w7 w7 is nine bits including the carry bit from w6, so we apply a full-adder and a half-adder to reduce it to six bits w8 is nine bits including two carry bits from w7, so we again apply a full-adder and a half-adder to reduce it to six bits w9 is eight bits including two carry bits from w8, so we apply a single full-adder and reduce it to six bits w10 through w14 are all less than or equal to six bits in height including carry bits, so no changes are made Stage j = 3 (d_3 = 4) w0 through w3 are all less than or equal to four bits in height, so no changes are made w4 is five bits, so a half-adder is applied, reducing it to four bits and adding its carry bit to w5 w5 is seven bits including the carry bit from w4, so we apply a full-adder and a half-adder to reduce it to four bits w6 through w10 are eight bits including previous carry bits, so we apply two full-adders to reduce them to four bits w11 is six bits including previous carry bits, so we apply a full-adder to reduce it to four bits w12 through w14 are all less than or equal to four bits in height including carry bits, so no changes are made Stage j = 2 (d_2 = 3) w0 through w2 are all less than or equal to three bits in height, so no changes are made w3 is four bits, so a half-adder is applied, reducing it to three bits and adding its carry bit to w4 w4 through w12 are five bits including previous carry bits, so we apply one full-adder to reduce them to three bits w13 and w14 are less than or equal to three bits in height including carry bits, so no changes are made Stage j = 1 (d_1 = 2) w0 and w1 are less than or equal to two bits in height, so no changes are made w2 is three bits, so a half-adder is applied, reducing it to two bits and adding its carry bit to w3 w3 through w13 are four bits including previous carry bits, so we apply one full-adder to reduce them to two bits w14 is two bits including the carry bit from w13, so no changes are made Addition The output of the last stage leaves 15 columns of height two or less which can be passed into a standard adder. See also Booth's multiplication algorithm Fused multiply–add Wallace tree BKM algorithm for complex logarithms and exponentials Kochanski multiplication for modular multiplication References Further reading Arithmetic logic circuits Computer arithmetic Multiplication 1965 introductions 1965 in computing
Dadda multiplier
[ "Mathematics" ]
1,060
[ "Computer arithmetic", "Arithmetic" ]
4,181,397
https://en.wikipedia.org/wiki/Comparison%20of%20hex%20editors
The following is a comparison of notable hex editors. General Features See also Comparison of HTML editors Comparison of integrated development environments Comparison of text editors Comparison of word processors Notes ao: ANSI is the Windows character set, OEM is the DOS character set. Both are based on ASCII. References External links "Harry's Windows Hex Editor Review" (July 2002). harrymnielsen.tripod.com. Retrieved October 15, 2019. Hex editors Text editor comparisons
Comparison of hex editors
[ "Technology" ]
102
[ "Software comparisons", "Computing comparisons" ]
4,181,616
https://en.wikipedia.org/wiki/Zubov%27s%20method
Zubov's method is a technique for computing the basin of attraction for a set of ordinary differential equations (a dynamical system). The domain of attraction is the set {x : v(x) < 1}, where v(x) is the solution to a partial differential equation known as the Zubov equation. Zubov's method can be used in a number of ways. Statement Zubov's theorem states that: If x' = f(x) is an ordinary differential equation in R^n with f(0) = 0, a set A containing 0 in its interior is the domain of attraction of zero if and only if there exist continuous functions v and h such that: (1) v(0) = h(0) = 0, and 0 < v(x) < 1 for x in A \ {0}; (2) h > 0 on R^n \ {0}; (3) for every γ2 > 0 there exist γ1 > 0 and α1 > 0 such that v(x) > γ1 and h(x) > α1 if ||x|| > γ2; (4) v(x_n) → 1 for all sequences with x_n → ∂A or ||x_n|| → ∞; (5) v satisfies the Zubov equation ∇v(x) · f(x) = −h(x)(1 − v(x))√(1 + ||f(x)||²). If f is continuously differentiable, then the differential equation has at most one continuously differentiable solution v satisfying v(0) = 0. References Ordinary differential equations
Zubov's method
[ "Mathematics" ]
154
[ "Mathematical analysis", "Mathematical analysis stubs" ]
4,181,767
https://en.wikipedia.org/wiki/Cambridge%20University%20primates
Cambridge University primate experiments came to public attention in 2002 after the publication that year of material from a ten-month undercover investigation in 1998 by the British Union for the Abolition of Vivisection (BUAV). The experiments were being conducted on marmosets, and included the removal of parts of their brains intended to simulate the symptoms of stroke or Parkinson's disease. Some of the research was theoretical, aimed at advancing knowledge of the brain, while some of it was applied. BUAV said the investigation revealed examples of animal abuse indicating that animals were inadequately protected by the Animals (Scientific Procedures) Act 1986. After a review by the government's chief inspector of animals ruled against BUAV's argument that the project licences should not have been granted, BUAV applied to the High Court for a judicial review. The review ruled against BUAV on three of the four grounds, but on the remaining ground it found the Home Office had underestimated the suffering of the marmosets by categorising the experiments as "moderate," rather than "substantial." The Home Office announced a review of its procedures for categorising animal suffering. Nature of the research As of October 2002, Cambridge University had three project licences, issued by the Home Office under the Animals (Scientific Procedures) Act 1986, permitting the controlled use of one New World non-human primate species, the common marmoset, Callithrix jacchus. The licence authorised the use of animals bred specifically for research use at breeding establishments in the UK in experiments to study brain function in relation to human disorders. According to the chief inspector of animals, the experimental protocols involved "the training and testing of animals using a range of behavioural and cognitive tasks; then disrupting normal brain function by chemical or physical lesions; the subsequent administration of experimental treatments intended to minimise the functional defects or repair the damage caused; and further testing to evaluate brain function." The animals were killed at the end of the experiments, most of them for tissue analysis. Scientists using marmosets at Cambridge have published their work in peer reviewed journals. This includes discoveries relating to the role of the prefrontal cortex in behaviour, understanding learning and memory, modelling Parkinson's disease, and the role of the amygdala in conditioned reinforcement. Allegations of cruelty According to the British government's inspector of animals and the British Union for the Abolition of Vivisection, in some experimental protocols, the monkeys were trained to perform certain behavioural and cognitive tasks, then were made to repeat them after minimally invasive surgery to switch off a small area of the brain, to assess how this had affected their functioning. For example, some of the monkeys suffered from a damaged arm after the experiments. They were then tethered in a way that forced them to use that arm to retrieve food or water. To encourage use of the limb, the monkeys were deprived of food or water for 22 out of every 24 hours for up to two and a half years. The monkeys were usually given an extra feed on Friday afternoons, but some researchers allegedly deprived the monkeys of this too, so that they could keep them hungry for further tests on the Monday. 
During training for these tasks prior to brain surgery, BUAV claims that researchers were given instructions such as:

Chase monkey into test box
Keep "miserable" or "angry" marmosets in test apparatus
Bang on the shutter, bang on the window
Punish bad habits such as grooming by making a loud bang every time he does something wrong
Lower the shutter ... if necessary onto their fingers
Use food restrictions to make the marmosets more amenable to "shaping"

One effect of the brain damage was that the monkeys would engage in stereotypical rotating movements. BUAV reported that one test for Parkinson's disease involved shutting them in a small Perspex box for up to one hour at a time to see how often they would rotate, and injecting them with amphetamine to make them rotate faster. BUAV says the monkeys were often "clearly distressed and bewildered; they could be seen crying out, twisting frantically, retching or desperately trying to escape." BUAV also says their investigator discovered monkeys who had had the tops of their scalps sawn off to have strokes induced, and who were then left unattended for 15 hours overnight without veterinary attention, because Cambridge staff worked nine to five. Three full-time animal care staff were employed to look after 400 animals, according to a British government review, with the research scientists themselves responsible for the welfare of animals undergoing experimental procedures. A film produced by BUAV shows a monkey regaining muscle tone during surgery, an indication that the animal was insufficiently anaesthetised. The BUAV report suggested there was a delay of some minutes before more anaesthetic was given. Response to the allegations The British government's chief inspector of animals conducted a review and published a report in October 2002. It concluded the veterinary input at Cambridge was "exemplary"; the facility "seems adequately staffed"; and the animals were afforded "appropriate standards of accommodation and care." The caging system was "no longer state of the art" but complied with Home Office provisions; and the marmoset colony was "generally healthy." The inspector noted four instances of non-compliance with the licence: in two experiments, the surgical procedure was at variance with the project licence; on one occasion, the water restriction schedule was at variance; on one occasion, the licence holder did not inform the department that the severity limit of an experiment had been exceeded; and there were minor technical irregularities in reports of how the animals were used. The reviewers consulted two experts in veterinary anaesthesia to investigate the consequences of a monkey regaining muscle tone during surgery. They advised that "unless purposeful or voluntary movements had accompanied the return of muscle tone then ... the anaesthetic agents should have been sufficient to block awareness of pain." Cambridge University welcomed the report as "confirmation that there was no evidence to support the allegations made by the BUAV." The BUAV was invited to give evidence to the inquiry, but declined. Nor did it make available the unedited video footage from its film.
After publication of the report, the group said it was "utterly appalled and deeply angered by the Home Office's complete dismissal of overwhelming evidence of animal suffering" and that "the government's claim that it was correct to categorise as moderate suffering experiments where monkeys had the top of their skull sawn off and part of their brain sucked out is ludicrous in the extreme." Judicial review As a result of the information obtained during their investigation and in light of the subsequent review, BUAV applied to the UK's High Court for permission to seek a judicial review of the legality of the Home Office's interpretation of the Cambridge case, and the wider implementation of vivisection legislation. Mr Justice Burnton rejected four grounds for review directly related to the Cambridge case, but granted permission to seek judicial review on two wider grounds: whether death was an effect to be weighed in cost-benefit analysis and whether guidelines on restricting food and water should be a code of practice under the Animals (Scientific Procedures) Act. At the Court of Appeal, Lord Justice Keene allowed the review to proceed on two more counts that had originally been refused, on the grounds of public interest. These relate to the questions of whether the Home Office underestimated the suffering of the Cambridge marmosets when setting severity limits and whether out-of-hours care and veterinary cover are required by law. The 2007 review found in favour of the Home Office on three of the grounds. On the issue of suffering, the court found that the Home Secretary had unlawfully categorised the experiments as "moderate", rather than "substantial". The Home Office was given leave to appeal the decision, which it did successfully in April 2008, with costs awarded to the Home Office. See also Non-human primate experiments References Animal testing in the United Kingdom Anti-vivisection movement Animal testing on non-human primates Cruelty to animals
Cambridge University primates
[ "Chemistry" ]
1,641
[ "Animal testing", "Anti-vivisection movement", "Vivisection" ]
4,181,953
https://en.wikipedia.org/wiki/Railway%20Preservation%20Society%20of%20Ireland
The Railway Preservation Society of Ireland (RPSI) is a railway preservation group founded in 1964 and operating throughout Ireland. Mainline steam train railtours are operated from Dublin, while short train rides are operated up and down the platform at Whitehead, County Antrim, and as of 2023, the group sometimes operates mainline trains in Northern Ireland using hired-in NIR diesel trains from Belfast. The RPSI has bases in Dublin and Whitehead, with the latter having a museum. The society owns heritage wagons, carriages, steam engines, diesel locomotives and metal-bodied carriages suitable for mainline use. Bases The society has developed several bases over time, with Whitehead joined by Sallins, then Mullingar, and also Inchicore and Connolly in Dublin. As of 2019, three locations are in operation: Whitehead, Inchicore and Connolly. Current operations Whitehead site and museum Whitehead, near Belfast, has a long history as an excursion station, and the RPSI developed a working steam and engineering depot there. This was added to by the development of a museum. The Whitehead Railway Museum opened without ceremony in early 2017, after a five-year project to expand the site from a depot to include a rebuilt Whitehead excursion station and the museum. The total cost was £3.1m from various funding sources. The museum received 10,000 visitors in 2017, its first year, and 15,000 in 2018. The museum hosts five galleries, and visitors can see various heritage steam and diesel locomotives and observe work on railway carriage restoration. Guides from the society are present. Inchicore, Dublin The RPSI has arrangements for storage of stock at Inchicore Works, with maintenance also being carried out there. Connolly shed In 2015 the RPSI gained an arrangement with Iarnród Éireann to lease the locomotive shed just to the north of Connolly station for the maintenance and storage of mainline diesel locomotives. Historic operations Mullingar The RPSI moved into the loco shed at Mullingar in 1974 and based steam locos 184 and 186 there. Carriages were also restored there. The base has since become derelict, with funding instead being channelled to Whitehead, including a board decision not to spend money on the green carriages based at Mullingar. Generating Van 3173 was the last vehicle to be overhauled. Sallins Prior to Mullingar, Sallins Goods Shed was used as a base. Whitehead and Belfast The Society used to operate mainline steam trains from Whitehead and Belfast. Since 2023, these have ceased, as Northern Ireland Railways is no longer training staff as steam drivers. This leaves Whitehead focused on short steam train rides up and down the platform there. Rolling stock Steam locomotives The Society possesses nine steam locomotives (plus one more operated by them but owned by the Ulster Folk and Transport Museum); typically only a small number will be operational at any time: Passenger tender locomotives The RPSI has three Great Northern Railway of Ireland tender locomotives within its fleet. No. 131, a Q class, was built in 1901. The others are S class No. 171 Slieve Gullion and V class No. 85 Merlin, although the latter is owned by the Ulster Folk & Transport Museum and is on loan. These locomotives are suitable for longer distance main line work, but are speed restricted if they need to run tender-first in the event they cannot be turned. Mixed large tank locomotive The RPSI's Northern Counties Committee (NCC) WT class No. 4 holds significant records.
It worked the last steam passenger train on Northern Ireland Railways, and with No. 53 operated the last stone goods train on 22 October 1970. Acquired by the RPSI in June 1971, it went on to work over most of the remaining Irish railway network. They also own an SLNCR Lough class locomotive. Goods tender locomotives The Society possesses three goods tender locomotives, all of which are suitable for slower speed passenger workings. Two of these are from the 101 (J15) class, of which over 100 were built between 1866 and 1903 and which lasted until the end of the steam era on CIÉ in 1963. The RPSI's two examples of these simple, reliable and robust engines are No. 184, with a saturated boiler and round-shaped firebox, and No. 186, with a superheated boiler and squarer Belpaire firebox. No. 461, a DSER 15 and 16 Class heavy goods locomotive, is the only Dublin and South Eastern Railway example that has been preserved. Shunting locomotives Shunting locomotives are useful and economical for shunting and short passenger work within Whitehead yard. These include No. 3 'R.H. Smyth', affectionately known as Harvey, which has also been used to pull ballast hoppers for NIR. There is also No. 3BG "Guinness", a Hudswell Clarke engine presented by Guinness to the Society in 1965. Diesel and other locomotives The RPSI has indicated it has a strategy to create a mainline heritage diesel fleet. It has acquired four 65t General Motors Bo-Bos: CIÉ 121 Class number 134 and CIÉ 141 Class numbers 141, 142 and 175. The RPSI used to own two NIR 101 Class Hunslet diesels, numbered 101 and 102. No. 101 was scrapped and No. 102 was transferred to the Ulster Folk & Transport Museum. The RPSI also has some small diesel shunters, including a Ruston from Carlow sugar factory, a Planet diesel from Irish Shell and a Unilok diesel from the UTA. Carriages and other stock In the 2000s, with more stringent rail regulations, the RPSI was forced to acquire rakes of metal-bodied carriages for mainline railtours. Freight wagons and other stock Whitehead has a collection of historic wagons, including a GNR brake van named Ivan (restored by the society's award-winning youth team), a Guinness van, an NCC hand crane, a GSWR ballast hopper and an oil tanker from Irish Shell. Operations Railtours The main work of the society is in securing and maintaining steam rolling stock, with a view to running rail tours. Mulligan, in "One Hundred and Fifty Years of Irish Railways", noted that the RPSI did "sterling work" in organising such rail tours around the island following the end of steam as a regular means of service provision on UTA and CIÉ lines. Films The RPSI has been able to assist in the provision of suitable rolling stock for train-related scenes in films made on the island of Ireland. The shooting of The First Great Train Robbery in 1978 was an early significant involvement in film making by the RPSI. Publication Five Foot Three is the RPSI's membership magazine. It is published annually. Incidents On 7 November 2014, an RPSI train chartered by Web Summit blocked a level crossing in Midleton for over 25 minutes. The operation was referred to the Commission for Railway Regulation. The resulting investigation found that the Society had knowingly run a train that was too long for the station's platform and that it would block a level crossing, yet senior IR management overrode their internal safety department by allowing the train to run.
On 7 July 2019, a serious incident occurred at Gorey when No. 85 ran out of water and the fusible plug melted in the firebox. The Civil Defence had to cool down the boiler with hoses while the crew were evacuated from the cab and a rescue diesel was summoned from Dublin. See also List of heritage railways in Northern Ireland List of heritage railways in the Republic of Ireland Irish Steam Preservation Society Irish Traction Group References Footnotes Notes Sources Primary sources External links RPSI website Engineering preservation societies Railway societies All-Ireland organisations Museums in County Antrim Railway museums in Northern Ireland Railway companies of Ireland Railway companies of the Republic of Ireland 1964 establishments in Ireland
Railway Preservation Society of Ireland
[ "Engineering" ]
1,584
[ "Engineering societies", "Engineering preservation societies" ]
4,182,220
https://en.wikipedia.org/wiki/Metarhizium%20robertsii
Metarhizium robertsii is a fungus that grows naturally in soils throughout the world and causes disease in various insects by acting as a parasitoid. It is a mitosporic fungus with asexual reproduction, which was formerly classified in the form class Hyphomycetes of the phylum Deuteromycota (also often called fungi imperfecti). Many isolates have long been recognised to be specific, and they were assigned variety status, but they have now been assigned as new Metarhizium species in light of new molecular evidence; one of these was M. robertsii. Other examples were M. majus and M. acridum (which was M. anisopliae var. acridum and included the isolates used for locust control). Metarhizium taii was placed in M. anisopliae var. anisopliae, but has now been described as a synonym of M. guizhouense (see Metarhizium). The commercially important isolate M.a. 43 (or F52, Met52, etc.), which infects Coleoptera and other insect orders, has now been assigned to Metarhizium brunneum. Important isolates This species was named after Prof. Donald W. Roberts, whose PhD dissertation focused on destruxins of the insect-pathogenic fungus then called "Metarhizium anisopliae". Roberts continued to work with entomopathogenic fungi as a research professor, especially with an isolate called ARSEF 23, which eventually became the type of this species. Biology Insect diseases caused by fungi in this genus are sometimes called green muscardine disease because of the green colour of their spores. When these mitotic (asexual) spores (called conidia) of the fungus come into contact with the body of an insect host, they germinate and the hyphae that emerge penetrate the cuticle. The fungus then develops inside the body, eventually killing the insect after a few days; this lethal effect is very likely aided by the production of insecticidal cyclic peptides (destruxins). The cuticle of the cadaver often becomes red. If the ambient humidity is high enough, a white mould then grows on the cadaver that soon turns green as spores are produced. Most insects living near the soil have evolved natural defenses against entomopathogenic fungi like M. robertsii. This fungus is, therefore, locked in an evolutionary battle to overcome these defenses, which has led to a large number of isolates (or strains) that are adapted to certain groups of insects. Economic importance A simplified method of microencapsulation has been demonstrated to increase the shelf-life of M. robertsii spores commercialised for biological control of pest insects, potentially increasing their efficiency against red imported fire ants. Metarhizium robertsii has been shown to convert highly toxic forms of mercury into less toxic forms. The fungus has been genetically engineered to improve its ability to perform this task. See also Beauveria bassiana, the fungus that causes white muscardine disease in various insects References External links Index Fungorum record, links to a list of synonyms Fungi Make Biodiesel Efficiently at Room Temperature Clavicipitaceae Parasitic fungi Fungi described in 1879 Fungus species
Metarhizium robertsii
[ "Biology" ]
690
[ "Fungi", "Fungus species" ]
8,770,692
https://en.wikipedia.org/wiki/Obturating%20ring
An obturating ring is a ring of relatively soft material designed to obturate under pressure to form a seal. Obturating rings are often found in artillery and other ballistics applications, and similar devices are also used in other applications such as plumbing, like the olive in a compression fitting. The term "O-ring" is sometimes used to describe this kind of pressure seal. Ballistics uses Obturating rings are common in artillery, where the steel or cast-iron casing of the shell is too hard to practically deform to provide a tight seal for the propellant gases. An obturating ring made of a softer material, in this application called a driving band, is the standard solution to that problem. Mortar bombs also use obturating rings to provide a seal around the projectile. Recoilless rifles and some artillery use rings with a reverse impression of the rifling cut in them for a tighter seal even at very low pressures. Another obturating ring may be used on sliding/falling breech-blocks from the opposite side of the chamber to provide a tight seal there if the charge is bagged and lacks a case (examples range from early Krupp guns to the Royal Ordnance L11 and the M777). The obturating ring provides the sealing that would normally be provided by a cartridge case. See also Broadwell ring Charles Ragon de Bange References 81 mm mortar shell information, showing the obturating ring Artillery components
Obturating ring
[ "Technology" ]
295
[ "Artillery components", "Components" ]
8,770,844
https://en.wikipedia.org/wiki/Fish%21%20Philosophy
The Fish! Philosophy (styled FISH! Philosophy), modeled after the Pike Place Fish Market, is a business technique that is aimed at creating happy individuals in the workplace. John Christensen created this philosophy in 1998 to improve organizational culture. The central four ideas are: "play", "be there", "make their day", and "choose your attitude". History On a visit to Seattle in 1997, John Christensen, owner of ChartHouse Learning, observed fish sellers at Pike Place Fish Market tossing trout and salmon through the air of the market, providing high energy that energized many pedestrians passing by on their lunch breaks. They gave their complete attention to each customer and ensured each had an enjoyable visit. Christensen noticed the actual work of selling fish was repetitive, cold and exhausting. It occurred to him that the fishmongers might not enjoy every part of their job, but they chose to bring joy to how they approached it. They also sold a lot of fish. He asked the fishmongers if he could film them and they agreed. Lee Copeland Gladwin reports that these events spawned a film entitled Fish, released in June 1998. John Christensen created the Fish Philosophy in 1998. The film led to a book entitled Fish! A Remarkable Way to Boost Morale and Improve Results, by Stephen C. Lundin, Harry Paul, and John Christensen. When Christensen and his team examined the footage, they identified four simple practices anyone could apply to their work and life. Karen Boynes asserts that once the four concepts of choose your attitude, play, make someone's day, and be there are put into practice, the environment changes to welcome positivity into the workplace. ChartHouse Learning called these concepts The Fish! Philosophy. Business use A number of organizations have used The Fish! Philosophy language to guide how they approach work. Ranken Jordan Pediatric Specialty Hospital in St. Louis has four FISH! banners, one for each practice, hanging in its lobby as a symbol of its commitment to patients, parents and visitors. The staff uses the philosophy as a reminder to thank and recognize each other. Ranken Jordan's patient/parent satisfaction is above 95 percent and its employee retention above 97 percent. Stephania Davis reports that the P.T. Barnum pediatric unit at Bridgeport Hospital applied the four beliefs to the team to help ease the patients' and families' stay. Each principle was assigned to a group of the pediatric team by Ms. Gomez, the unit's charge nurse. The feedback from the parents after the change was very promising. Ms. Gomez has stated, "They [the employees] like coming to work again…". The Fish! Philosophy is applied in companies as far away as the Middle East. Wild Wadi Water Park, in Dubai, United Arab Emirates, uses the video and principles in the socialization process of each of its new hires. The employees also continue to live by its principles under the initiative of the current General Manager, who makes sure both that his management style reflects these principles and that his employees are working in an environment that allows them to simultaneously have fun and be productive. A visible indicator of this is the team video produced for the Wild Wadi's version of the Harlem Shake (song). In 2004, the water park won the SWIM Award for its Front Line Employee Training Program using "The Fish Philosophy". The Wild Wadi is not the only company in Dubai to actively use the Fish!
philosophy; the American Hospital also uses the video in customer service training for its front line staff. Customers such as Bill Bean are well aware when the energy in a business is negative. In an article he wrote for "The Recorder", he talked about how, on his usual visits to the Ontario Ministry of Health, the atmosphere was very dark and cold. On his last visit, the team excitedly welcomed him into the office to renew his Ontario Health Card. The sudden change in the attitude of the staff was all thanks to the implementation of the Fish philosophy in the day-to-day operations of the business. Tile Tech, a roofing company in Tacoma, WA, focused on being there for each other to increase awareness of safety hazards, decreasing its injury rate by 50%. Charlotte Tucker believes Industrial Piping Systems has fully embraced the Fish! ideology, as the fish replicas hanging from the ceilings and attached to walls attest. Christine Wardrop, president of IPS, describes the idea by saying, "They're the rules of Life… You should focus on them and keep them in your mind". Rochester Ford Toyota in Rochester, MN, known for tough negotiating, shifted to a fixed price and an emphasis on making the customer's day. New car sales doubled and it recorded a 30% rise in customer satisfaction. In April 2000, the Ford Motor Company decided to incorporate the Fish Philosophy in its training programs. This decision came about as a result of a lack of motivation in one division of the company. Sprint call center in Lenexa, KS, used Play to make the job more fun. Employees selected music for common areas and the dress code was relaxed. Managers worked to Be There by asking employees for their ideas on improving the business. Four-year productivity rose 20% and first-year employee retention increased 25%. K-12 education use Educators may use The Fish! Philosophy to build supportive relationships with students and help students practice personal responsibility. Both are keys in creating effective classrooms. The Fish! Philosophy is thought to spark creativity in the schoolhouse and the workplace. Criticism In his book Organization Theory: A Libertarian Perspective, Kevin Carson calls Fish! "vile" and a "lesson from the powerful to the powerless". References External links ChartHouse Learning Flying Fish Presentation Aquarium Fish Videos Motivation Office administration Business fables
Fish! Philosophy
[ "Biology" ]
1,170
[ "Ethology", "Behavior", "Motivation", "Human behavior" ]
8,770,941
https://en.wikipedia.org/wiki/Empathy-altruism
Empathy-altruism is a form of altruism based on moral emotions or feelings for others. Under social exchange theory, seemingly altruistic behavior occurs only when the benefit to the helper outweighs the cost the helper bears; such behavior is thus ultimately self-interested. In contrast, C. Daniel Batson holds that people help others in need out of genuine concern for the well-being of the other person. The key ingredient to such helping is empathic concern. According to Batson's empathy-altruism hypothesis, if someone feels empathy towards another person, they will help them, regardless of what they can gain from it. An alternative hypothesis is empathy-joy, which states that a person helps because they find pleasure in seeing another person experience relief. When a person does not feel empathy, the standards of social exchange theory apply. Evidence There has been significant debate over whether other-helping behavior is motivated by self- or other-interest. The prime actors in this debate are Daniel Batson, arguing for empathy-altruism, and Robert Cialdini, arguing for self-interest. Batson recognizes that people sometimes help for selfish reasons. He and his team were interested in finding ways to distinguish between motives. In one experiment, students were asked to listen to tapes from a radio program. One of the interviews was with a woman named Carol, who talked about her bad car accident in which both of her legs were broken, her struggles, and how far behind she was falling in class. Students who were listening to this particular interview were given a letter asking them to share lecture notes and meet with her. The experimenters varied the level of empathy by telling one group to try to focus on how she was feeling (high empathy level) and the other group not to be concerned with that (low empathy level). The experimenters also varied the cost of not helping: the high cost group was told that Carol would be in their psychology class after returning to school, and the low cost group believed she would finish the class at home. The results confirmed the empathy-altruism hypothesis: those in the high empathy group were almost equally likely to help her in either circumstance, while the low empathy group helped out of self-interest (seeing her in class every day made them feel guilty if they did not help). Countering hypotheses Batson and colleagues set out to show that empathy motivates other-regarding helping behavior not out of self-interest but out of true interest in the well-being of others. They addressed two hypotheses that counter the empathy-altruism hypothesis: Empathy-specific reward: Empathy triggers the need for social reward which can be gained by helping. Empathy-specific punishment: Empathy triggers the fear of social punishment which can be avoided by helping. See also Affective neuroscience C. Sue Carter Edward O. Wilson Frans de Waal Helping behavior Jean Decety Moral emotions Social neuroscience Stephen Porges Sympathy W. D. Hamilton References Further reading Batson, C. D., & Leonard, B. (1987). "Prosocial Motivation: Is it ever Truly Altruistic?" Advances in Experimental Social Psychology (Vol. 20, pp. 65–122): Academic Press. Decety, J. & Batson, C.D. (2007). "Social neuroscience approaches to interpersonal sensitivity." Social Neuroscience, 2(3-4), 151–157. Decety, J. & Ickes, W. (Eds.). (2009). The Social Neuroscience of Empathy. Cambridge, MA: MIT Press. Thompson, E. (2001). "Empathy and consciousness." Journal of Consciousness Studies, 8, 1–32.
Zahn-Waxler, C., & Radke-Yarrow, M. (1990). "The origins of empathic concern." Motivation and Emotion, 14, 107–125. Altruism Moral psychology Empathy
Empathy-altruism
[ "Biology" ]
808
[ "Behavior", "Altruism" ]
8,771,002
https://en.wikipedia.org/wiki/Hanna%20Nasser%20%28academic%29
Hanna Nasir (born 14 January 1935), alternately transliterated Hanna Nasser, is a Palestinian academic and political figure. Early life and education Nasir was born in Jaffa in 1935. His cousin was Kamal Nasser, who was assassinated by Israeli forces in Beirut in 1973. Nasir holds a PhD in nuclear physics from Purdue University in the United States. Career and activities Nasir was a long-time president of Birzeit University, which his father, Musa Nasser, founded. He directed the school's transition from a community college to an accredited university. In November 1974 Nasir was exiled by the Israeli authorities. He continued to serve as Birzeit's president in exile; while the school's vice-president managed its day-to-day business, Birzeit officials regularly visited Nasir in Amman to receive his input on major decisions. Nasir served on the Executive Committee of the Palestine Liberation Organization between 1981 and 1984 and held the position of Head of the Palestine National Fund between 1982 and 1984. Nasir, along with 29 other exiles, was allowed to return to the West Bank in May 1993 as the peace process got under way. He remained president of Birzeit until his retirement in 2004. In 2002, Yasser Arafat appointed Nasir to the post of Chairman of the Palestinian Central Elections Commission (CEC). The CEC was established by the Palestinian Authority in 1995 as an independent body, responsible for the conduct of elections in the Palestinian territories. In the post, Nasir oversaw the presidential election in 2005, the legislative election in 2006, and the local elections in the West Bank in 2012 and 2017. Personal life Born to a Palestinian Christian family, Nasir is the father of three sons and one daughter. Awards He holds honorary titles including the French Legion of Honour and an honorary doctorate from the American University in Cairo. Notes Further reading An extensive discussion of Nasir's career can be found in Gabi Baramki's Peaceful Resistance: Building a Palestinian University under Occupation, Pluto Press, October 2009. Nuclear physicists Palestinian physicists Palestinian Christians 20th-century Palestinian politicians Palestine Liberation Organization members Academic staff of Birzeit University Purdue University alumni Living people Arab people in Mandatory Palestine People from Jaffa 1935 births Presidents of Birzeit University
Hanna Nasser (academic)
[ "Physics" ]
456
[ "Nuclear physicists", "Nuclear physics" ]
8,771,245
https://en.wikipedia.org/wiki/SchoolTool
SchoolTool is a GPL-licensed, free student information system for schools around the world. The goals of the project are to create a student information system, including demographics, gradebook, attendance, calendaring and reporting for primary and secondary schools, as well as a framework for building customized applications and configurations for individual schools or states. SchoolTool is built as a free software/open source software stack, licensed under the GNU General Public License, Version 2, written in Python using the Zope 3 framework. The sub-projects of SchoolTool are as follows: The SchoolTool Calendar and SchoolBell are calendar and resource management tools for schools available as part of the Edubuntu Linux distribution. A SchoolTool student information system is being developed and tested in collaboration with schools. CanDo is a SchoolTool-based skills tracking program developed by Virginia students and teachers to track which skills students are acquiring in their classes and at what level of competency. SchoolTool is configured by default to act as what is often called a student information system or SIS. The focus is on tracking information related to students: demographics, enrollment, grades, attendance, reporting. It is a subset of a complete "management information system" (MIS) for schools, which might also cover systems like accounting. SchoolTool is not a learning management system, or LMS, such as Moodle, although they share some overlapping feature sets, such as a gradebook. SchoolTool does not contain curriculum or learning objects. A post on the product news page in October 2016 titled "The Future of SIELibre and SchoolTool" indicates that the primary SchoolTool developers have moved on to other things. This was accompanied by a Google document explaining the decision and thanking contributors for their efforts. SchoolTool Features Customizable demographics; Student contact management; Calendars for the school, groups, and individuals; Resource booking; Teacher gradebooks; Class attendance; Report card generation. See also OpenEMIS Enterprise Application Integration Open Knowledge Initiative Web services FET References External links Website 2004 software Edubuntu Educational software Cross-platform free software Free educational software Free content management systems Free software programmed in Python School-administration software
SchoolTool
[ "Technology" ]
448
[ "Computing stubs", "World Wide Web stubs" ]
8,771,473
https://en.wikipedia.org/wiki/Kernel%20%28statistics%29
The term kernel is used in statistical analysis to refer to a window function. The term "kernel" has several distinct meanings in different branches of statistics. Bayesian statistics In statistics, especially in Bayesian statistics, the kernel of a probability density function (pdf) or probability mass function (pmf) is the form of the pdf or pmf in which any factors that are not functions of any of the variables in the domain are omitted. Note that such factors may well be functions of the parameters of the pdf or pmf. These factors form part of the normalization factor of the probability distribution, and are unnecessary in many situations. For example, in pseudo-random number sampling, most sampling algorithms ignore the normalization factor. In addition, in Bayesian analysis of conjugate prior distributions, the normalization factors are generally ignored during the calculations, and only the kernel considered. At the end, the form of the kernel is examined, and if it matches a known distribution, the normalization factor can be reinstated. Otherwise, it may be unnecessary (for example, if the distribution only needs to be sampled from). For many distributions, the kernel can be written in closed form, but not the normalization constant. An example is the normal distribution. Its probability density function is

$p(x \mid \mu, \sigma^2) = \frac{1}{\sqrt{2\pi\sigma^2}} e^{-\frac{(x-\mu)^2}{2\sigma^2}}$

and the associated kernel is

$p(x \mid \mu, \sigma^2) \propto e^{-\frac{(x-\mu)^2}{2\sigma^2}}$

Note that the factor in front of the exponential has been omitted, even though it contains the parameter $\sigma$, because it is not a function of the domain variable $x$. Pattern analysis The kernel of a reproducing kernel Hilbert space is used in the suite of techniques known as kernel methods to perform tasks such as statistical classification, regression analysis, and cluster analysis on data in an implicit space. This usage is particularly common in machine learning. Nonparametric statistics In nonparametric statistics, a kernel is a weighting function used in non-parametric estimation techniques. Kernels are used in kernel density estimation to estimate random variables' density functions, or in kernel regression to estimate the conditional expectation of a random variable. Kernels are also used in time series, in the use of the periodogram to estimate the spectral density, where they are known as window functions. An additional use is in the estimation of a time-varying intensity for a point process, where window functions (kernels) are convolved with time-series data. Commonly, kernel widths must also be specified when running a non-parametric estimation. Definition A kernel is a non-negative real-valued integrable function K. For most applications, it is desirable to define the function to satisfy two additional requirements: Normalization: $\int_{-\infty}^{+\infty} K(u) \, du = 1$; Even-function symmetry: $K(-u) = K(u)$ for all values of $u$. The first requirement ensures that the method of kernel density estimation results in a probability density function. The second requirement ensures that the average of the corresponding distribution is equal to that of the sample used. If $K$ is a kernel, then so is the function $K^*$ defined by $K^*(u) = \lambda K(\lambda u)$, where $\lambda > 0$. This can be used to select a scale that is appropriate for the data. Kernel functions in common use Several types of kernel functions are commonly used: uniform, triangle, Epanechnikov, quartic (biweight), tricube, triweight, Gaussian, quadratic and cosine. If a kernel $K$ has bounded support, then $K(u) = 0$ for values of $u$ lying outside the support.
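The nonparametric use above is easy to make concrete. Below is a minimal Python sketch of kernel density estimation using two of the common kernels; the sample values and bandwidth are arbitrary illustrations, not taken from the article.

```python
import math

def gaussian_kernel(u):
    # Gaussian kernel: integrates to 1 and satisfies K(-u) = K(u).
    return math.exp(-0.5 * u * u) / math.sqrt(2.0 * math.pi)

def epanechnikov_kernel(u):
    # Epanechnikov kernel: 3/4 (1 - u^2) on [-1, 1], zero outside its support.
    return 0.75 * (1.0 - u * u) if abs(u) <= 1.0 else 0.0

def kde(x, data, bandwidth, kernel=gaussian_kernel):
    # Kernel density estimate: f_hat(x) = (1 / (n h)) * sum K((x - x_i) / h).
    # The bandwidth h rescales the kernel exactly as K*(u) = lambda K(lambda u).
    n = len(data)
    return sum(kernel((x - xi) / bandwidth) for xi in data) / (n * bandwidth)

# Toy usage with made-up observations and an arbitrary bandwidth.
sample = [1.2, 1.9, 2.1, 2.4, 3.3]
for x in (1.0, 2.0, 3.0):
    print(x, kde(x, sample, bandwidth=0.5),
          kde(x, sample, bandwidth=0.5, kernel=epanechnikov_kernel))
```

A smaller bandwidth yields a spikier estimate that follows the sample closely; a larger one smooths it out, which is the scale-selection role of $\lambda$ described above.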
See also Kernel density estimation Kernel smoother Stochastic kernel Positive-definite kernel Density estimation Multivariate kernel density estimation Kernel method Notes References Nonparametric statistics Time series Point processes Bayesian statistics
Kernel (statistics)
[ "Mathematics" ]
727
[ "Point processes", "Point (geometry)" ]
8,771,583
https://en.wikipedia.org/wiki/Network%20theory%20of%20aging
The network theory of aging supports the idea that multiple connected processes contribute to the biology of aging. Kirkwood and Kowald helped to establish the first model of this kind by connecting theories and predicting specific mechanisms. In a departure from investigating a single mechanistic cause or single molecules that lead to senescence, the network theory of aging takes a systems biology view to integrate theories in conjunction with computational models and quantitative data related to the biology of aging. Implications The free radical theory, describing the reactions of free radicals, antioxidants and proteolytic enzymes, was computationally connected with the protein error theory to describe the error propagation loops within the cellular translation machinery. The study of gene networks revealed proteins associated with aging to have significantly higher connectivity than expected by chance. Investigation of aging on multiple levels of biological organization contributed to a physiome view, from genes to organisms, predicting lifespans based on scaling laws, fractal supply networks and metabolism as well as aging related molecular networks. The network theory of aging has encouraged the development of databases related to human aging. Proteomic network maps suggest a relationship between the genetics of development and the genetics of aging. Hierarchical Elements The network theory of aging provides a deeper look at the damage and repair processes at the cellular level and the ever changing balance between those processes. To fully understand the network theory as it is applied to aging, one must look at the different hierarchical elements of the theory as it pertains to aging.

Elementary particles of quantum systems: The aging process is described as an equation where a structure in an unbalanced state begins to change, and that is seen primarily in the actions of quantum particles.
Monomers of biological macro-molecules: After a while, different types of protein damage become widespread due to the build-up of damage within the protein. Over time, the maturation of cross-links, proteolytic cuts, and amino acid truncations is very apparent.
Proteins: Protein-protein exchanges either cease to exist or the connections between them become weaker due to energy loss and injury to the protein itself. This then leads to the protein being displaced in the cell.
Cells: Connections within the cell begin to either tighten or loosen up, eventually leading to weakened connections. There is a high price associated with these connections, especially within the brain.
Organisms: As individuals age, their social networks begin to decline. Only the contacts for the most important social functions remain. Cognitive deterioration due to aging and loss of support systems leads to further declines in old age.
Social groups: A decline in social groups mimics the declines associated with the aging process.
Ecosystems forming a global ecological network: Networks within our ecosystems show us that we should be very concerned about the aging of our habitat.
Elements of human systems: The aging process can be portrayed through human conceptual, cultural, and technological networks. With time, each of these networks begins to decline.

See also DNA damage theory of aging References Systems biology Theories of ageing Theories of biological ageing Proximate theories of biological ageing
Network theory of aging
[ "Biology" ]
615
[ "Senescence", "Theories of biological ageing", "Systems biology" ]
8,771,643
https://en.wikipedia.org/wiki/As-Easy-As
As-Easy-As for DOS and As-Easy-As for Windows was a shareware 32-bit spreadsheet program developed in 1986 for MS-DOS and later for Microsoft Windows. The name is a play on the phrase "as easy as 1-2-3", a reference to the dominant MS-DOS spreadsheet at that time, Lotus 1-2-3, with which it competed at a fraction of the price. The program was developed and sold by TRIUS, Inc. (a company founded by David Schulz and Paris Karahalios, a shareware pioneer; not to be confused with Tritus, the makers of the Tritus SPF clone of the mainframe ISPF interface and editor). The company eventually branched out, developing CAD (DraftChoice, StarFlic, ProtoCAD 3D, KeyCAD), GIS/mapping software (Precision Mapping) and SDKs (MapPro, MaptiVate), no longer focusing on As-Easy-As. History As-Easy-As is historically significant as one of the earliest and most useful shareware programs that competed with commercial software on the basis of both price and features. For small businesses and personal users, the price of Lotus 1-2-3 was prohibitive, and As-Easy-As provided basic spreadsheet functionality for about a tenth of the price. This paradigm of undercutting the spreadsheet market leader would be adopted by Borland's Quattro Pro (which was not released until 1990). Subsequent versions of As-Easy-As became as powerful as any MS-DOS spreadsheet. Like Quattro Pro, As-Easy-As combined some elements of the 1-2-3 user interface, while modernizing them. One such modernization is the use of pull-down menus. Cell formulae are very similar to Lotus 1-2-3, including the letter-number addressing scheme (A1, B2, etc.) and the @function syntax (e.g., @SUM(A1..A10), using the ".." range separator, also like Lotus 1-2-3). The product included a detailed electronic manual describing the spreadsheet's functions and some basic MS-DOS operations. Updated versions of As-Easy-As were made available at frequent intervals. Because these new versions often included valuable new capabilities, users were encouraged to support the continuing development of the program. Supporters who paid for a license received a 200+ page printed, detailed user's manual. The graphic defaults were more attuned to science and engineering users than to business users. This enabled a user to rapidly create x-y graphs of data, whereas the major commercial spreadsheets of the DOS era (Lotus 1-2-3 and Quattro) by default produced more business-oriented graphs. Many calculation functions were appealing to the science and engineering markets, such as improved capabilities for regression analysis and matrix operations. The program was translated by TRIUS, Inc. into Spanish, German, French, Portuguese, Italian and Chinese, and at the height of its popularity it was being published locally in 10+ countries in the Americas, Europe, Asia and Australia. It was also private-labelled for publishing and distribution by SoftKey International and could be purchased at many major retailers. End of development On 3 November 2004, Trius discontinued the last version of As-Easy-As for DOS, and on 10 January 2006 the last version of As-Easy-As for Windows, though copies of both programs were made available for download with free full licenses. The earliest preserved historical version available on the Internet from the MS-DOS shareware era is version 3 from 1987.
Awards Some of the awards As-Easy-As received include: 1992 - Shareware Industry Awards, Best Application, Winner 1992 - PCM Reader's Best Award, Best Shareware Program 1998 - AS-EASY-AS for Win95/NT, Shareware Industry Award, Best Application Notes and references External links Version 3.0 from 1987 is at the abandonware website Vetusware, along with a later version. (Assuming that, if the Windows version is free, then the historical versions would also be free.) DOS software Spreadsheet software for Windows Spreadsheet software Shareware Freeware
As-Easy-As
[ "Mathematics" ]
888
[ "Spreadsheet software", "Mathematical software" ]
8,771,718
https://en.wikipedia.org/wiki/Instant%20Insanity
Instant Insanity is the name given by Parker Brothers to their 1967 version of a puzzle which has existed since antiquity, and which has been marketed by many toy and puzzle makers under a variety of names, including: Devil's Dice (Pressman); DamBlocks (Schaper); Logi-Qubes (Schaeffer); Logi Cubes (ThinkinGames); Daffy Dots (Reiss); Those Blocks (Austin); PsykoNosis (A to Z Ideas), and many others. The puzzle consists of four cubes with faces colored with four colors (commonly red, blue, green, and white). The objective of the puzzle is to stack these cubes in a column so that each side of the stack (front, back, left, and right) shows each of the four colors. The distribution of colors on each cube is unique, and the order in which the four cubes are stacked is irrelevant as long as each side shows every color. This problem has a graph-theoretic solution in which a graph with four vertices labeled B, G, R, W (for blue, green, red, and white) can be used to represent each cube; there is an edge between two vertices if the two colors are on the opposite sides of the cube, and a loop at a vertex if the opposite sides have the same color. Each individual cube can be placed in one of 24 positions, by placing any one of the six faces upward and then giving the cube up to three quarter-turns. Once the stack is formed, it can be rotated up to three quarter-turns without altering the orientation of any cube relative to the others. Ignoring the order in which the cubes are stacked, the total possible number of arrangements is therefore 3,456 (24 × 24 × 24 × 24 / (4 × 4!)). The puzzle is studied by D. E. Knuth in an article on estimating the running time of exhaustive search procedures with backtracking. Every position of the puzzle can be solved in eight moves or fewer. The first known patented version of the puzzle was created by Frederick Alvin Schossow in 1900, and marketed as the Katzenjammer puzzle. The puzzle was recreated by Franz Owen Armbruster, also known as Frank Armbruster, and independently published by Parker Brothers and Pressman, in 1967. Over 12 million puzzles were sold by Parker Brothers alone. The puzzle is similar or identical to numerous other puzzles (e.g., The Great Tantalizer, circa 1940, and the most popular name prior to Instant Insanity). One version of the puzzle is currently being marketed by Winning Moves Games USA. Solution Given the already colored cubes, with four distinct colors (red, green, blue, white), we try to generate a graph which gives a clear picture of all the positions of colors in all the cubes. The resultant graph contains four vertices, one for each color, and each edge is numbered from one through four (one number for each cube). If an edge connects two vertices (Red and Green) and the number of the edge is three, then it means that the third cube has Red and Green faces opposite to each other. To find a solution to this problem, we need the arrangement of four faces of each of the cubes. To represent the information of two opposite faces of all four cubes we need a directed subgraph instead of an undirected one, because two directions can only represent two opposite faces, but not whether a face should be at the front or at the back. So if we have two directed subgraphs, we can represent all four faces (the ones that matter) of all four cubes. The first directed graph will represent the front and back faces. The second directed graph will represent the left and right faces.
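To make the search concrete, here is a minimal Python sketch of this method. It looks for the two edge-disjoint subgraphs directly, enforcing exactly the three criteria discussed next; the cube colorings are a made-up solvable instance, not the actual Parker Brothers coloring.

```python
from itertools import product

# Each cube is described by its three pairs of opposite faces (the edges of
# the graph). Colors: R, G, B, W. This instance is hypothetical but solvable.
CUBES = [
    [("R", "G"), ("B", "W"), ("R", "R")],  # cube 1
    [("G", "B"), ("W", "R"), ("G", "G")],  # cube 2
    [("B", "W"), ("R", "G"), ("B", "B")],  # cube 3
    [("W", "R"), ("G", "B"), ("W", "W")],  # cube 4
]
COLORS = "RGBW"

def degrees_ok(edges):
    # Every color vertex must have total degree 2; a loop counts twice.
    deg = dict.fromkeys(COLORS, 0)
    for a, b in edges:
        deg[a] += 1
        deg[b] += 1
    return all(d == 2 for d in deg.values())

def solve(cubes):
    # Pick one opposite-face pair per cube for the front/back subgraph and a
    # different pair per cube for the left/right subgraph.
    for fb_pick in product(range(3), repeat=4):
        fb = [cubes[i][fb_pick[i]] for i in range(4)]
        if not degrees_ok(fb):
            continue
        for lr_pick in product(range(3), repeat=4):
            if any(lr_pick[i] == fb_pick[i] for i in range(4)):
                continue  # the two subgraphs must share no edge
            lr = [cubes[i][lr_pick[i]] for i in range(4)]
            if degrees_ok(lr):
                return fb, lr
    return None

result = solve(CUBES)
if result:
    fb, lr = result
    for i in range(4):
        print(f"cube {i + 1}: front/back pair {fb[i]}, left/right pair {lr[i]}")
else:
    print("no solution")
```

A degree-2 multigraph on the four color vertices splits into cycles, and orienting each cycle assigns which color of every pair faces front versus back (or left versus right), exactly as the arrows in the images do.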
We cannot randomly select any two subgraphs, so what are the criteria for selecting? We need to choose graphs such that:

The two subgraphs have no edges in common, because a common edge would mean that at least one cube uses the same pair of opposite faces for both roles: if a cube's front and back faces are Red and Blue, the same two faces would also have to serve as its left and right faces, which is impossible.
A subgraph contains only one edge from each cube, because the subgraph has to account for all the cubes and one edge can completely represent a pair of opposite faces.
A subgraph can contain only vertices of degree two, because a degree of two means a color can only be present on faces of two cubes. An easy way to understand this is that each subgraph accounts for eight faces, which must be equally divided among four colors: two per color.

After understanding these restrictions, if we try to derive the two subgraphs, we may end up with one possible set as shown in Image 3. Each edge line style represents a cube. The upper subgraph lets one derive the left and the right face colors of the corresponding cube. E.g.: The solid arrow from Red to Green says that the first cube will have Red on the left face and Green on the right. The dashed arrow from Blue to Red says that the second cube will have Blue on the left face and Red on the right. The dotted arrow from White to Blue says that the third cube will have White on the left face and Blue on the right. The dash-dotted arrow from Green to White says that the fourth cube will have Green on the left face and White on the right. The lower subgraph lets one derive the front and the back face colors of the corresponding cube. E.g.: The solid arrow from White to Blue says that the first cube will have White on the front face and Blue at the back. The dashed arrow from Green to White says that the second cube will have Green on the front face and White at the back. The dotted arrow from Blue to Red says that the third cube will have Blue on the front face and Red at the back. The dash-dotted arrow from Red to Green says that the fourth cube will have Red on the front face and Green at the back. The third image shows the derived stack of cubes which is the solution to the problem. It is important to note that:

1. You can arbitrarily label the cubes, as one such solution will render 23 more by swapping the positions of the cubes but not changing their configurations.
2. The two directed subgraphs can represent front-to-back and left-to-right interchangeably, i.e. one of them can represent front-to-back or left-to-right. This is because one such solution also renders 3 more just by rotating. Adding the effect in 1, we generate 95 more solutions by providing only one. To put it into perspective, such four cubes can generate 24³ × 3 = 41472 configurations.
3. It is not important to take notice of the top and the bottom of the stack of cubes.

Generalizations Given n cubes, with the faces of each cube coloured with one of n colours, determining if it is possible to stack the cubes so that each colour appears exactly once on each of the 4 sides of the stack is NP-complete. The cube stacking game is a two-player game version of this puzzle. Given an ordered list of cubes, the players take turns adding the next cube to the top of a growing stack of cubes. The loser is the first player to add a cube that causes one of the four sides of the stack to have a color repeated more than once.
Robertson and Munro proved that this game is PSPACE-complete, which illustrates the observation that NP-complete puzzles tend to lead to PSPACE-complete games. References Computational problems in graph theory Combination puzzles NP-complete problems
Instant Insanity
[ "Mathematics" ]
1,599
[ "Computational problems in graph theory", "Computational mathematics", "Graph theory", "Computational problems", "Mathematical relations", "Mathematical problems", "NP-complete problems" ]
8,771,785
https://en.wikipedia.org/wiki/Aluminium%20Plant%20Podgorica
The Aluminium Plant Podgorica (abbr. KAP), also known latterly as Uniprom KAP, is a Montenegrin aluminium smelter company located in Podgorica, Montenegro. Uniprom KAP also has operations in Germany, Poland and the Czech Republic. Synopsis The KAP produces its own alumina, extracting it via the Bayer process from the bauxite shipped from the Nikšić bauxite mine. The factory also has its own production of pre-baked anodes. The smelter has an installed capacity of 120,000 tons of liquid aluminium per year. KAP is connected by railway with bauxite mines near Nikšić and the Port of Bar, and the Podgorica Airport is only a few kilometers away. History Construction of an aluminium smelter in Montenegro was first proposed in the 1960s, when significant quantities of high quality bauxite ore were discovered near Nikšić. With support from Pechiney, construction of KAP began in 1969, while production of aluminium began in 1971. Breakup of Yugoslavia (1990s) The plant had its most difficult times during the UN-imposed economic sanctions on FR Yugoslavia. During the sanctions, production was reduced to 13% of capacity. In the period 1997–1999, KAP accounted for 8.2–6.7% of Montenegro's GDP and 65–67% of its exports. Most of the time, the KAP acquired necessary raw materials and spare parts from Glencore. The entire export was also conducted by Glencore. The company was one of the few Montenegrin companies to recover quickly after the breakup of Yugoslavia. Russian ownership (2005–2013) On 1 December 2005, KAP was privatized, with 65.4394% of shares being sold to Salomon Enterprises Limited (later renamed CEAC – Central European Aluminum Company), a company based in Cyprus, for €48.5 million and obligations to invest over €50 million in its modernization and environmental upgrade. CEAC is fully owned by En+ Group. The negotiations on the sale were conducted directly between Oleg Deripaska and the then Prime Minister of Montenegro, Milo Đukanović. In May 2006 CEAC said that "various breaches of representations and warranties" of the deal were discovered by accountants Deloitte, including KAP having "hidden" debts and obligations towards the state totalling tens of millions of euros. In addition, the government-certified 2004 accounts were deemed inaccurate when it came to working capital and other assets. "It became evident to CEAC that KAP's initial financial situation had been misrepresented," the company claimed. In the following years, KAP struggled to survive the impact of the ongoing economic crisis. The low trading price of aluminium and expensive production inputs, primarily electricity and alumina, resulted in KAP generating daily losses of up to €200,000. The company has been unable to survive ever since without constant government subsidies, primarily in the form of written-off electricity debts. In June 2009, the financial situation at the company had not improved, leaving KAP in danger of being closed by CEAC. The government, not wanting to see its largest company being shut down, agreed to guarantee a €45 million loan. In exchange, the government would receive half of the stakes owned by CEAC, leaving CEAC with a stake of 29.3%. As the relationship between the owners and management and the Government of Montenegro became increasingly sour, there was an ongoing debate within the country about the fate of the company.
The size of the company, its number of employees, and its impact on the Montenegrin trade balance imply that efforts will be made by the Government to keep the company alive, although sustainable ownership and management arrangements are yet to be made given the current dissatisfaction with the Russian owners. On 8 July 2013, KAP officially went bankrupt, having up to that point accumulated a debt of 383 million euros, while the company itself was worth 180 million euros. Montenegrin ownership (2014–present) In July 2014, KAP was sold for 28 million euros to the Montenegrin company Uniprom, which is 100% owned by Veselin Pejović. Controversies The Podgorica Aluminium Plant (KAP) has been a focal point of controversy and criticism, despite its status as the largest individual contributor to Montenegro's GDP and exports. The primary source of discontent revolves around environmental concerns, as the plant is heavily criticized for polluting the fertile Zeta Plain. KAP's red mud pond is notorious for emitting dry red dust that disperses through the villages in Zeta due to wind patterns. KAP also faces scrutiny for consuming a significant portion of Montenegro's electrical power at reduced rates, while Montenegrin citizens experience frequent electricity shortages and pay substantially higher prices for it. Political entities such as the Movement for Changes argue that the sale of KAP was disadvantageous for Montenegro. They contend that the plant was undervalued and raise concerns about the business practices of CEAC's owners, alleging consistent annual losses to avoid dividend payments to minority shareholders. References External links Uniprom KAP at montenegroberza.com Metal companies of Montenegro Manufacturing companies established in 1969 1969 establishments in Montenegro Aluminium smelters Smelters of Yugoslavia Smelters of Montenegro Metal companies of Germany Metal companies of Poland Metal companies of the Czech Republic
Aluminium Plant Podgorica
[ "Chemistry" ]
1,114
[ "Metallurgical industry of Montenegro", "Metallurgical industry by country" ]
8,771,790
https://en.wikipedia.org/wiki/Filtered%20air%20positive%20pressure
A filtered air positive pressure environment in laboratory animal science is a space that is kept under positive pressure with respect to the outside world, so that no germs that could affect the lab animals or threaten the SPF status can enter the facility. Incoming air is passed through high-efficiency particulate air (HEPA) filters. In some facilities, the opposite is done: negative pressure inside prevents the escape of germs to the outside world. This is done when the facility does research on dangerous germs such as anthrax. In addition, these facilities can have other hygiene control elements: air locks for personnel, feed, bedding, cages and materials (e.g. an autoclave with two doors, one to the inside, one to the outside); showers for personnel; and an emergency power generator or uninterruptible power supplies. References Pressure Animal testing
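A minimal sketch of what the HEPA specification implies for the incoming air, using the standard HEPA rating (99.97% capture of 0.3 µm particles, the most-penetrating size); the inlet concentration is an assumed illustrative value:

```python
# Particle penetration through a HEPA filter at its rated efficiency.
# The HEPA rating (99.97% at 0.3 um) is the standard definition; the
# inlet concentration below is an arbitrary illustrative value.

HEPA_EFFICIENCY = 0.9997          # fraction captured at 0.3 um
inlet_particles_per_litre = 1e6   # assumed outdoor-air loading

penetration = 1.0 - HEPA_EFFICIENCY
outlet = inlet_particles_per_litre * penetration
print(f"Downstream: {outlet:.0f} particles/litre "
      f"({penetration:.2%} penetration)")
# -> 300 particles/litre; two filters in series would square the penetration
```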
Filtered air positive pressure
[ "Physics", "Chemistry" ]
185
[ "Scalar physical quantities", "Animal testing", "Mechanical quantities", "Physical quantities", "Pressure", "Wikipedia categories named after physical quantities" ]
8,772,299
https://en.wikipedia.org/wiki/Johnson%20bar
A Johnson bar (also Johnson corrugated bar) is a type of corrugated high-carbon steel rebar used to reinforce concrete. The Johnson bar was invented by A.L. Johnson of the St. Louis Expanded Metal Company. Its specific efficacy comes from having "alternate elevations and depressions to grip the concrete," with the shoulders of the corrugations having "an inclination with the axis of the bar" to prevent slipping between the bar and the concrete. The pattern of elevations and depressions aids in the stability of the structure; even if a Johnson bar is no longer bonded to the concrete (due to vibrations, for instance, or being smeared with oil as may happen during careless construction, reducing the adhesion significantly), it will have a stronger hold on the concrete than a plain smooth-sided bar. References Building materials Concrete Structural steel
Johnson bar
[ "Physics", "Engineering" ]
172
[ "Structural engineering", "Structural steel", "Building engineering", "Construction", "Materials", "Building materials", "Civil engineering", "Civil engineering stubs", "Concrete", "Matter", "Architecture" ]
8,772,590
https://en.wikipedia.org/wiki/Parotid%20plexus
The parotid plexus or plexus parotideus is the branch point of the facial nerve (extratemporal) after it leaves the stylomastoid foramen. This division takes place within the parotid gland. Branches Commonly, it divides into the following branches (with several variations): The temporal branches cross the zygomatic arch to the temporal region. The zygomatic branches cross the zygomatic bone to the orbit. The buccal branches pass forward to below the orbit and around the mouth. The marginal mandibular branch passes forward to the lower lip and chin. The cervical branch runs forward, forming a series of arches over the suprahyoid region, to the platysma muscle. References External links http://www.dartmouth.edu/~humananatomy/figures/chapter_47/47-5.HTM Nerves Facial nerve Cranial nerves Nervous system Otorhinolaryngology
Parotid plexus
[ "Biology" ]
208
[ "Organ systems", "Nervous system" ]
8,773,095
https://en.wikipedia.org/wiki/Grove%20cell
The Grove cell was an early electric primary cell named after its inventor, Welsh physical scientist William Robert Grove, and consisted of a zinc anode in dilute sulfuric acid and a platinum cathode in concentrated nitric acid, the two separated by a porous ceramic pot. Cell details The Grove cell voltage is about 1.9 volts and arises from the following reaction: Zn + H2SO4 + 2 HNO3 → ZnSO4 + 2 H2O + 2 NO2↑ Use The Grove cell was the favored power source of the early American telegraph system in the period 1840–1860 because it offered a high current output and higher voltage than the earlier Daniell cell (at 1.9 volts and 1.1 volts, respectively). Disadvantages By the time of the American Civil War, as telegraph traffic increased, the Grove cell's tendency to discharge poisonous nitrogen dioxide (NO2) fumes proved increasingly hazardous to health, and as telegraphs became more complex, the need for constant voltage became critical. The Grove cell was limited in this respect, because as the cell discharged, its voltage fell. Eventually, Grove cells were replaced in use by Daniell cells. See also List of battery types History of the battery Bunsen cell, using cheaper carbon instead of platinum. Notes Disposable batteries
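A small sketch of where the quoted ~1.9 volts comes from, using textbook standard electrode potentials and a simplified Nernst correction for the concentrated nitric acid; the activity term is an assumed illustrative value, not a measured one:

```python
import math

# Standard electrode potentials (textbook values, 25 C)
E_ZN = -0.76   # Zn2+ + 2e- -> Zn
E_NO3 = 0.80   # NO3- + 2H+ + e- -> NO2 + H2O

e_standard = E_NO3 - E_ZN          # EMF under standard conditions
print(f"Standard-state EMF: {e_standard:.2f} V")   # -> ~1.56 V

# Simplified Nernst correction for the cathode: concentrated HNO3 raises
# the nitrate/nitrogen-dioxide couple above its standard value. The
# activity term below is an illustrative assumption, not a measurement.
R, T, F = 8.314, 298.15, 96485.0
n = 1                              # electrons in the cathode half-reaction
activity_term = 500.0              # assumed effective reaction quotient

e_cathode = E_NO3 + (R * T) / (n * F) * math.log(activity_term)
print(f"EMF with concentrated acid: {e_cathode - E_ZN:.2f} V")
# -> about 1.7 V with this assumed activity; stronger acid pushes the
#    value toward the ~1.9 V quoted above
```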
Grove cell
[ "Chemistry" ]
271
[ "Analytical chemistry stubs" ]
8,773,629
https://en.wikipedia.org/wiki/Gustaf%20Larson
Erik Gustaf Larson (8 July 1887 – 4 July 1968) was a Swedish automotive engineer and the co-founder of Volvo. He held a Master of Science (M.Sc.) degree in mechanical engineering from the Royal Institute of Technology in Stockholm. Biography Larson was responsible for the technical design of the first Volvo model ÖV 4, introduced on 14 April 1927. He and Assar Gabrielsson founded Volvo. He was appointed vice president and technical manager for AB Volvo in Gothenburg from the time the company was founded in 1927. He worked for Volvo until his death in 1968. Volvo Invention and development In June 1924, when Gustaf Larson met his old friend Assar Gabrielsson in Skåne, Gabrielsson unveiled his plans to try to establish the manufacturing of a new Swedish automobile. Larson had worked for SKF between 1917 and 1919 but now worked for the company AB Galco in Stockholm. They made a verbal agreement in August 1924 at the Sturehof restaurant in Stockholm, then signed a written contract more than one year later, on 16 December 1925. Under this contract Larson was to carry out the engineering work for a new car, as well as an investment plan for a complete new manufacturing plant, but would only be rewarded for that work if the project turned out well. "Well" meant at least 100 produced cars, and only if this was achieved before 1 January 1928. This famous contract shows that Assar Gabrielsson "owned" the project and that it was a high-risk project without any guarantees. Gabrielsson took the economic risks himself and Larson, in the worst case, would have worked on the project without being rewarded, but would still have had his salary from AB Galco in Stockholm. Most of the capital that Gabrielsson intended to use for the project initially was actually extra sales commissions that he had saved from the time he was the managing director for the SKF subsidiary in Paris in 1921–22. Gabrielsson had decided to build a test series of ten vehicles with his own financing and later present the car and an entire investment plan to SKF. The idea to build a pre-series of ten vehicles was most certainly related to the fact that no company would have given him and Larson an offer for the components (engines, gearboxes, chassis components, etc.) in a smaller quantity. They were certain that SKF would approve the plans in due time. Engineering, documentation and investment planning were carried out exactly as if the work had been done by SKF in order to prepare for the setting up of a new automobile company. Gabrielsson most certainly had the full support of the managing director of SKF, Mr. Björn Prytz, at that time, as long as his 'private' project did not interfere with his work as sales manager for SKF. The first ten pre-series vehicles, model Volvo XC78, were designed and assembled in Stockholm at AB Galco under the supervision of Gustaf Larson, who at that time still had his other work at AB Galco to attend to. A "design office", often called "Volvo's first design office", was established in one room of Larson's private apartment at Rådmansgatan 59 in Stockholm. The design work started in the autumn of 1924 and a number of engineers were involved, among them engineer Jan G. Smith, who had returned from America in 1924, and later engineer Henry Westerberg. All the invoices related to the project were sent to Gabrielsson's private address at Kungsportsavenyn 32 in Gothenburg.
In many of the orders that Larson placed himself, he referred to Gabrielsson as a "guarantee" so that the deliveries would be paid by him personally. These details show that the Volvo automobile project was in the beginning a true private project, not sponsored by SKF from an economic point of view. The first prototype car was ready in June 1926. Larson and Gabrielsson took it and drove it themselves, on bumpy roads, down to SKF in Gothenburg to show it to the SKF board and present the final investment plan. AB Volvo founded At a board meeting held in Hofors, Sweden, on 10 August 1926, SKF decided to use the old subsidiary company Volvo AB for the automobile project. AB Volvo, which was first registered in 1915 on the initiative of Björn Prytz, was originally set up to be used for a special series of ball bearings for the American market, but it was never really used for this purpose. A small series of ball bearings stamped with Volvo was manufactured but it was never introduced on a larger scale. A contract was signed on 12 August 1926 between SKF and Gabrielsson, stipulating that all ten prototype cars, engineering drawings, calculations etc. should be handed over to Volvo AB, and Gabrielsson in return would be refunded most of his private investments in the prototype cars. The contract was signed by Björn Prytz, managing director of SKF, and Gabrielsson. Gustaf Larson finally got paid for the initial engineering work on the ÖV4 according to the 'private' contract he and Gabrielsson had signed on 16 December 1925. That 1925 contract had stated that the automobile project could be sold to any company that would be interested, but they hoped that SKF would be the first company to make an offer. Gustaf Larson was appointed vice president and technical manager on 1 January 1927, and left his employment at AB Galco in Stockholm. First production Volvo On 14 April 1927, at about 10 p.m., the first series production model, the ÖV 4, left the newly established factory on Hisingen in Gothenburg. The ten prototype cars that had been assembled in Stockholm were never sold, except for one that was sold to Volvo's photographer Sven Sjöstedt and was later donated to the Volvo Industrial Museum around 1930. The others were, however, used as transport vehicles within the manufacturing plant and as 'test benches' for newly developed components during the first years. Economic problems The new company did not show any profit for the first couple of years and SKF invested more money to keep the company running. In 1928 the production of trucks began with the basic chassis components from the ÖV4. The production of trucks was on a small scale, but the concept was successful from the start. However, in late 1929 SKF nearly sold the company to Charles Nash, president of Nash Motors in the United States. Björn Prytz and Gabrielsson managed to convince the SKF board to call the deal off, just one day before Charles Nash arrived by boat in Gothenburg. At the end of 1930 AB Volvo showed a small profit for the first time. In 1935 SKF came to the conclusion that Volvo was now ready to stand on its own feet. Volvo was introduced on the Stockholm stock exchange and SKF sold most of its shares. SKF could now concentrate on its core business, the development and manufacturing of bearings, as it still does, more than 100 years after the company was founded in 1907. Sales success In 1941, the 50,000th Volvo car was delivered. It took ten years to produce the first 25,000 cars but only four years for the next 25,000 cars.
In 1944–45, towards the end of the Second World War, the modern-styled family car PV444, with a completely new design, was introduced, and the car was a sales success. The company now stood on solid ground and the production of both cars and trucks continued to increase for the rest of Larson's life. Family Gustaf Larson was married to Elin Octavia Fröberg in 1918. They had four children: Erik, Anders, Gunnel, and Britt. Gustaf Larson died on 4 July 1968, and is buried at the family tomb in Båstad, Sweden. See also Volvo Cars Volvo Museum — in Gothenburg—Göteborg. References Title: Volvo Personvagnar från 20-tal till 80-tal, by Björn Eric Lindh, 1984. (In Swedish only). Title: Volvo Göteborg Sverige, by Christer Olsson, 1996. (In Swedish only). External links Volvo people Swedish automotive pioneers Automotive engineers Swedish founders of automobile manufacturers 20th-century Swedish businesspeople People from Gothenburg KTH Royal Institute of Technology alumni 1887 births 1968 deaths
Gustaf Larson
[ "Engineering" ]
1,674
[ "Automotive engineering", "Automotive engineers" ]
8,774,050
https://en.wikipedia.org/wiki/Telecommunications%20engineering
Telecommunications engineering is a subfield of electronics engineering which seeks to design and devise systems of communication at a distance. The work ranges from basic circuit design to strategic mass developments. A telecommunication engineer is responsible for designing and overseeing the installation of telecommunications equipment and facilities, such as complex electronic switching systems and other plain old telephone service facilities, optical fiber cabling, IP networks, and microwave transmission systems. Telecommunications engineering also overlaps with broadcast engineering. Telecommunication is a diverse field of engineering connected to electronic, civil and systems engineering. Ultimately, telecom engineers are responsible for providing high-speed data transmission services. They use a variety of equipment and transport media to design the telecom network infrastructure; the most common media used by wired telecommunications today are twisted pair, coaxial cables, and optical fibers. Telecommunications engineers also provide solutions revolving around wireless modes of communication and information transfer, such as wireless telephony services, radio and satellite communications, internet, Wi-Fi and broadband technologies. History Telecommunication systems are generally designed by telecommunication engineers; the discipline sprang from technological improvements in the telegraph industry in the late 19th century and the radio and telephone industries in the early 20th century. Today, telecommunication is widespread and devices that assist the process, such as the television, radio and telephone, are common in many parts of the world. There are also many networks that connect these devices, including computer networks, the public switched telephone network (PSTN), radio networks, and television networks. Computer communication across the Internet is one of many examples of telecommunication. Telecommunication plays a vital role in the world economy, and the telecommunication industry's revenue has been placed at just under 3% of the gross world product. Telegraph and telephone Samuel Morse independently developed a version of the electrical telegraph that he unsuccessfully demonstrated on 2 September 1837. Soon after, he was joined by Alfred Vail, who developed the register — a telegraph terminal that integrated a logging device for recording messages to paper tape. This was demonstrated successfully over three miles (five kilometres) on 6 January 1838 and eventually over forty miles (sixty-four kilometres) between Washington, D.C. and Baltimore on 24 May 1844. The patented invention proved lucrative and by 1851 telegraph lines in the United States spanned over 20,000 miles (32,000 kilometres). The first successful transatlantic telegraph cable was completed on 27 July 1866, allowing transatlantic telecommunication for the first time. Earlier transatlantic cables installed in 1857 and 1858 only operated for a few days or weeks before they failed. The international use of the telegraph has sometimes been dubbed the "Victorian Internet". The first commercial telephone services were set up in 1878 and 1879 on both sides of the Atlantic in the cities of New Haven and London. Alexander Graham Bell held the master patent for the telephone that was needed for such services in both countries. The technology grew quickly from this point, with inter-city lines being built and telephone exchanges in every major city of the United States by the mid-1880s.
Despite this, transatlantic voice communication remained impossible for customers until January 7, 1927, when a connection was established using radio. However, no cable connection existed until TAT-1 was inaugurated on September 25, 1956, providing 36 telephone circuits. In 1880, Bell and co-inventor Charles Sumner Tainter conducted the world's first wireless telephone call via modulated lightbeams projected by photophones. The scientific principles of their invention would not be utilized for several decades, when they were first deployed in military and fiber-optic communications. Radio and television Over several years starting in 1894, the Italian inventor Guglielmo Marconi built the first complete, commercially successful wireless telegraphy system based on airborne electromagnetic waves (radio transmission). In December 1901, he went on to establish wireless communication between Britain and Newfoundland, which earned him the Nobel Prize in Physics in 1909 (shared with Karl Braun). In 1900, Reginald Fessenden was able to wirelessly transmit a human voice. On March 25, 1925, Scottish inventor John Logie Baird publicly demonstrated the transmission of moving silhouette pictures at the London department store Selfridges. In October 1925, Baird was successful in obtaining moving pictures with halftone shades, which were by most accounts the first true television pictures. This led to a public demonstration of the improved device on 26 January 1926, again at Selfridges. Baird's first devices relied upon the Nipkow disk and thus became known as the mechanical television. It formed the basis of semi-experimental broadcasts done by the British Broadcasting Corporation beginning September 30, 1929. Satellite The first U.S. satellite to relay communications was Project SCORE in 1958, which used a tape recorder to store and forward voice messages. It was used to send a Christmas greeting to the world from U.S. President Dwight D. Eisenhower. In 1960 NASA launched an Echo satellite; the aluminized PET film balloon served as a passive reflector for radio communications. Courier 1B, built by Philco, also launched in 1960, was the world's first active repeater satellite. Satellites are used today for many applications, such as GPS, television, internet and telephone services. Telstar was the first active, direct relay commercial communications satellite. Belonging to AT&T as part of a multi-national agreement between AT&T, Bell Telephone Laboratories, NASA, the British General Post Office, and the French National PTT (Post Office) to develop satellite communications, it was launched by NASA from Cape Canaveral on July 10, 1962, the first privately sponsored space launch. Relay 1 was launched on December 13, 1962, and became the first satellite to broadcast across the Pacific on November 22, 1963. The first and historically most important application for communication satellites was in intercontinental long distance telephony. The fixed Public Switched Telephone Network relays telephone calls from land line telephones to an earth station, where they are then transmitted to a receiving satellite dish via a geostationary satellite in Earth orbit. Improvements in submarine communications cables, through the use of fiber-optics, caused some decline in the use of satellites for fixed telephony in the late 20th century, but they still exclusively service remote islands such as Ascension Island, Saint Helena, Diego Garcia, and Easter Island, where no submarine cables are in service.
There are also some continents and some regions of countries where landline telecommunications are rare to nonexistent, for example Antarctica, plus large regions of Australia, South America, Africa, Northern Canada, China, Russia and Greenland. After commercial long distance telephone service was established via communication satellites, a host of other commercial telecommunications were also adapted to similar satellites starting in 1979, including mobile satellite phones, satellite radio, satellite television and satellite Internet access. The earliest adaptation for most such services occurred in the 1990s as the pricing for commercial satellite transponder channels continued to drop significantly. Computer networks and the Internet On 11 September 1940, George Stibitz was able to transmit problems using a teleprinter to his Complex Number Calculator in New York and receive the computed results back at Dartmouth College in New Hampshire. This configuration of a centralized computer or mainframe computer with remote "dumb terminals" remained popular throughout the 1950s and into the 1960s. However, it was not until the 1960s that researchers started to investigate packet switching — a technology that allows chunks of data to be sent between different computers without first passing through a centralized mainframe. A four-node network emerged on 5 December 1969. This network soon became the ARPANET, which by 1981 would consist of 213 nodes. ARPANET's development centered around the Request for Comments process, and on 7 April 1969, RFC 1 was published. This process is important because ARPANET would eventually merge with other networks to form the Internet, and many of the communication protocols that the Internet relies upon today were specified through the Request for Comments process. In September 1981, RFC 791 introduced the Internet Protocol version 4 (IPv4) and RFC 793 introduced the Transmission Control Protocol (TCP) — thus creating the TCP/IP protocol that much of the Internet relies upon today. Optical fiber Optical fiber can be used as a medium for telecommunication and computer networking because it is flexible and can be bundled into cables. It is especially advantageous for long-distance communications, because light propagates through the fiber with little attenuation compared to electrical cables. This allows long distances to be spanned with few repeaters. In 1966 Charles K. Kao and George Hockham proposed optical fibers at STC Laboratories (STL) at Harlow, England, when they showed that the losses of 1000 dB/km in existing glass (compared to 5–10 dB/km in coaxial cable) were due to contaminants, which could potentially be removed. Optical fiber was successfully developed in 1970 by Corning Glass Works, with attenuation low enough for communication purposes (about 20 dB/km), and at the same time GaAs (gallium arsenide) semiconductor lasers were developed that were compact and therefore suitable for transmitting light through fiber optic cables for long distances. After a period of research starting from 1975, the first commercial fiber-optic communications system was developed, which operated at a wavelength around 0.8 μm and used GaAs semiconductor lasers. This first-generation system operated at a bit rate of 45 Mbit/s with repeater spacing of up to 10 km. Soon after, on 22 April 1977, General Telephone and Electronics sent the first live telephone traffic through fiber optics at a 6 Mbit/s throughput in Long Beach, California.
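The attenuation figures above translate directly into unrepeated span length through a simple link budget. A minimal sketch, assuming an illustrative transmit power and receiver sensitivity (neither figure comes from the text):

```python
# Simple optical link budget: how far a signal can travel before the
# received power falls below the receiver's sensitivity.
# Transmit power and sensitivity below are assumed for illustration.

def max_span_km(tx_dbm: float, rx_sensitivity_dbm: float,
                attenuation_db_per_km: float, margin_db: float = 3.0) -> float:
    """Longest unrepeated span the loss budget allows."""
    budget_db = tx_dbm - rx_sensitivity_dbm - margin_db
    return budget_db / attenuation_db_per_km

TX_DBM, RX_DBM = 0.0, -50.0   # assumed laser output and receiver floor

# 1966 bulk glass (~1000 dB/km) vs. Corning's 1970 fibre (~20 dB/km)
for name, alpha in [("1966 glass", 1000.0), ("1970 fibre", 20.0)]:
    print(f"{name}: max span {max_span_km(TX_DBM, RX_DBM, alpha):.3f} km")
# -> 0.047 km vs 2.350 km: a 50x drop in attenuation buys a 50x longer
#    span; the few-dB/km fibres that followed made the ~10 km repeater
#    spacing mentioned above achievable
```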
The first wide area network fibre optic cable system in the world seems to have been installed by Rediffusion in Hastings, East Sussex, UK in 1978. The cables were placed in ducting throughout the town, and had over 1000 subscribers. They were used at that time for the transmission of television channels that were not otherwise available because of local reception problems. The first transatlantic telephone cable to use optical fiber was TAT-8, based on Desurvire optimized laser amplification technology. It went into operation in 1988. In the late 1990s through 2000, industry promoters and research companies such as KMI and RHK predicted massive increases in demand for communications bandwidth due to increased use of the Internet and the commercialization of various bandwidth-intensive consumer services, such as video on demand. Internet Protocol data traffic was increasing exponentially, at a faster rate than integrated circuit complexity had increased under Moore's Law. Concepts Basic elements of a telecommunication system Transmitter A transmitter (information source) takes information and converts it to a signal for transmission. In electronics and telecommunications a transmitter or radio transmitter is an electronic device which, with the aid of an antenna, produces radio waves. In addition to their use in broadcasting, transmitters are necessary component parts of many electronic devices that communicate by radio, such as cell phones. Transmission medium The transmission medium is the medium over which the signal is transmitted. For example, the transmission medium for sounds is usually air, but solids and liquids may also act as transmission media for sound. Many transmission media are used as communications channels. One of the most common physical media used in networking is copper wire. Copper wire is used to carry signals over long distances using relatively low amounts of power. Another example of a physical medium is optical fiber, which has emerged as the most commonly used transmission medium for long-distance communications. Optical fiber is a thin strand of glass that guides light along its length. The absence of a material medium in vacuum may also constitute a transmission medium for electromagnetic waves such as light and radio waves. Receiver A receiver (information sink) receives and converts the signal back into the required information. In radio communications, a radio receiver is an electronic device that receives radio waves and converts the information carried by them to a usable form. It is used with an antenna. The information produced by the receiver may be in the form of sound (an audio signal), images (a video signal) or digital data. Wired communication Wired communications make use of underground communications cables (less often, overhead lines), electronic signal amplifiers (repeaters) inserted into connecting cables at specified points, and terminal apparatus of various types, depending on the type of wired communications used. Wireless communication Wireless communication involves the transmission of information over a distance without the help of wires, cables or any other forms of electrical conductors. Wireless operations permit services, such as long-range communications, that are impossible or impractical to implement with the use of wires. The term is commonly used in the telecommunications industry to refer to telecommunications systems (e.g. radio transmitters and receivers, remote controls etc.) which use some form of energy (e.g. radio waves, acoustic energy, etc.)
to transfer information without the use of wires. Information is transferred in this manner over both short and long distances. Roles Telecom equipment engineer A telecom equipment engineer is an electronics engineer that designs equipment such as routers, switches, multiplexers, and other specialized computer/electronics equipment designed to be used in the telecommunication network infrastructure. Network engineer A network engineer is a computer engineer who is in charge of designing, deploying and maintaining computer networks. In addition, they oversee network operations from a network operations center, design backbone infrastructure, or supervise interconnections in a data center. Central-office engineer A central-office engineer is responsible for designing and overseeing the implementation of telecommunications equipment in a central office (CO for short), also referred to as a wire center or telephone exchange. A CO engineer is responsible for integrating new technology into the existing network, assigning the equipment's location in the wire center, and providing power, clocking (for digital equipment), and alarm monitoring facilities for the new equipment. The CO engineer is also responsible for providing more power, clocking, and alarm monitoring facilities if there are currently not enough available to support the new equipment being installed. Finally, the CO engineer is responsible for designing how the massive amounts of cable will be distributed to various equipment and wiring frames throughout the wire center and overseeing the installation and turn-up of all new equipment. Sub-roles As structural engineers, CO engineers are responsible for the structural design and placement of racking and bays for the equipment to be installed in, as well as for the plant to be placed on. As electrical engineers, CO engineers are responsible for the resistance, capacitance, and inductance (RCL) design of all new plant to ensure telephone service is clear and crisp and data service is clean as well as reliable. Attenuation, or gradual loss in intensity, and loop loss calculations are required to determine the cable length and size required to provide the service called for. In addition, power requirements have to be calculated and provided to power any electronic equipment being placed in the wire center. Overall, CO engineers have seen new challenges emerging in the CO environment. With the advent of data centers, Internet Protocol (IP) facilities, cellular radio sites, and other emerging-technology equipment environments within telecommunication networks, it is important that a consistent set of established practices or requirements be implemented. Installation suppliers or their sub-contractors are expected to provide requirements with their products, features, or services. These services might be associated with the installation of new or expanded equipment, as well as the removal of existing equipment. Several other factors must be considered, such as: Regulations and safety in installation Removal of hazardous material Commonly used tools to perform installation and removal of equipment Outside-plant engineer Outside plant (OSP) engineers are also often called field engineers, because they frequently spend much time in the field taking notes about the civil environment: aerial, above ground, and below ground. OSP engineers are responsible for taking plant (copper, fiber, etc.) from a wire center to a distribution point or destination point directly.
If a distribution point design is used, then a cross-connect box is placed in a strategic location to feed a determined distribution area. The cross-connect box, also known as a serving area interface, is then installed to allow connections to be made more easily from the wire center to the destination point, and it ties up fewer facilities by not having dedicated facilities from the wire center to every destination point. The plant is then taken directly to its destination point or to another small closure called a terminal, where access can also be gained to the plant, if necessary. These access points are preferred as they allow faster repair times for customers and save telephone operating companies large amounts of money. The plant facilities can be delivered via underground facilities, either direct buried or through conduit or in some cases laid under water, via aerial facilities such as telephone or power poles, or via microwave radio signals for long distances where either of the other two methods is too costly. Sub-roles As structural engineers, OSP engineers are responsible for the structural design and placement of cellular towers and telephone poles, as well as calculating the pole capabilities of existing telephone or power poles onto which new plant is being added. Structural calculations are required when boring under heavy traffic areas such as highways or when attaching to other structures such as bridges. Shoring also has to be taken into consideration for larger trenches or pits. Conduit structures often include encasements of slurry that need to be designed to support the structure and withstand the environment around it (soil type, high traffic areas, etc.). As electrical engineers, OSP engineers are responsible for the resistance, capacitance, and inductance (RCL) design of all new plant to ensure telephone service is clear and crisp and data service is clean as well as reliable. Attenuation, or gradual loss in intensity, and loop loss calculations are required to determine the cable length and size required to provide the service called for; a loop-resistance sketch follows below. In addition, power requirements have to be calculated and provided to power any electronic equipment being placed in the field. Ground potential has to be taken into consideration when placing equipment, facilities, and plant in the field to account for lightning strikes, high voltage intercept from improperly grounded or broken power company facilities, and various sources of electromagnetic interference. As civil engineers, OSP engineers are responsible for drafting plans, either by hand or using computer-aided design (CAD) software, for how telecom plant facilities will be placed. Often, when working with municipalities, trenching or boring permits are required, and drawings must be made for these. Often these drawings include about 70% of the detailed information required to pave a road or add a turn lane to an existing street. As civil engineers, telecom engineers provide the modern communications backbone for all technological communications distributed throughout civilizations today.
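A minimal illustration of the loop-loss sizing mentioned in the electrical sub-role above: a DC loop-resistance check against a commonly used plain-old-telephone-service planning limit. The per-gauge resistances are standard copper values; the 1,300-ohm limit is an assumed typical figure, not one taken from this article:

```python
# DC loop-resistance check for a copper subscriber loop. Resistance per
# 1000 ft per conductor is a standard copper value at room temperature;
# the 1300-ohm loop limit is a common POTS planning figure, assumed here
# for illustration.

OHMS_PER_1000FT = {19: 8.05, 22: 16.14, 24: 25.67, 26: 40.81}  # per conductor
LOOP_LIMIT_OHMS = 1300.0

def max_loop_ft(gauge: int) -> float:
    """Longest loop (two conductors, out and back) within the limit."""
    ohms_per_ft_loop = 2 * OHMS_PER_1000FT[gauge] / 1000.0
    return LOOP_LIMIT_OHMS / ohms_per_ft_loop

for awg in sorted(OHMS_PER_1000FT):
    print(f"{awg} AWG: max loop ~{max_loop_ft(awg) / 1000:.1f} kft")
# -> coarser (lower-AWG) wire supports much longer loops, which is why
#    the engineer sizes the cable to the distance being served
```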
Unique to telecom engineering is the use of air-core cable, which requires an extensive network of air handling equipment such as compressors, manifolds, regulators and hundreds of miles of air pipe per system, connecting to pressurized splice cases, all designed to pressurize this special form of copper cable to keep moisture out and provide a clean signal to the customer. As a political and social ambassador, the OSP engineer is a telephone operating company's face and voice to the local authorities and other utilities. OSP engineers often meet with municipalities, construction companies and other utility companies to address their concerns and educate them about how the telephone utility works and operates. Additionally, the OSP engineer has to secure real estate in which to place outside facilities, such as an easement to place a cross-connect box. See also Computer engineering Computer networking Electronic design automation Electronic engineering Electronic media Fiber-optic communication History of telecommunication Information theory List of electrical engineering topics (alphabetical) List of electrical engineering topics (thematic) Professional engineer Radio Receiver (radio) Telecommunication Telephone Television Telecommunications cable Transmission medium Transmitter Two-way radio Wired communication Wireless References Further reading External links Telecommunications engineering
Telecommunications engineering
[ "Engineering" ]
3,996
[ "Electrical engineering", "Telecommunications engineering" ]
8,775,663
https://en.wikipedia.org/wiki/Old%20West%20Academy
Old West Academy, formerly called Majestic Ranch Academy, was a boarding school located in Randolph, Utah. Founded in 1986, it detained boys and girls with behavioral issues, ages 7 to 14. Marketing for the school was (as of May 2018) conducted by the Teen Paths subsidiary of the controversial World Wide Association of Specialty Programs and Schools. Controversy Like other schools marketed by Teen Paths and the World Wide Association of Specialty Programs and Schools (WWASPS or WWASP), there have been numerous allegations of physical and sexual abuse at Old West Academy. In 2002, Old West Academy director Wayne Winder was arrested and charged with aggravated sexual abuse, child abuse, and dealing in material harmful to a minor after he allegedly sexually abused students and showed them pornography. A staff member was apparently fired after reporting child abuse at the school to police. Students at the school had limited contact with their parents and the outside world, and all telephone calls were monitored by staff, so it was very difficult for them to report any abuse. There was very little regulatory oversight of the school, and staff apparently received minimal training to prepare them to handle children with behavioral problems. Even after Wayne Winder's arrest on child sexual abuse charges, he continued working at the school as the director. In 2005, the mother of a male student at Majestic Ranch Academy filed a lawsuit against the school, claiming that her son was seriously injured after Sean E. Coombs slammed him against a wall and a table, threw him, and struck him. The lawsuit also alleged that the boy was repeatedly restrained and placed in handcuffs during his time at the school. Students have apparently been forced to stand barefoot on milk crates for long hours, outdoors in sub-freezing temperatures, as a punishment. References External links Majestic Ranch Academy homepage Old West Academy homepage Behavior modification Boarding schools in Utah Buildings and structures in Rich County, Utah Private middle schools in Utah Private elementary schools in Utah World Wide Association of Specialty Programs and Schools Troubled teen programs
Old West Academy
[ "Biology" ]
401
[ "Behavior modification", "Behavior", "Human behavior", "Behaviorism" ]
8,775,865
https://en.wikipedia.org/wiki/List%20of%20web%20browsers%20for%20Unix%20and%20Unix-like%20operating%20systems
The following is a list of web browsers for various Unix and Unix-like operating systems. Not all of these browsers are specific to these operating systems; some are available on non-Unix systems as well. Some, but not most, have a mobile version. Graphical Text-based Links ELinks Line-mode browser Lynx w3m See also List of web browsers Comparison of web browsers Comparison of lightweight web browsers References https://www.mozilla.org/en-US/firefox/android/ Web browsers
List of web browsers for Unix and Unix-like operating systems
[ "Technology" ]
124
[ "Computing-related lists", "Lists of software" ]
8,776,322
https://en.wikipedia.org/wiki/Osteomed
OsteoMed L.P., formerly known as OsteoMed Corporation, is a medical device manufacturer specializing in craniofacial titanium fixation, small bone drills and saws, and a variety of implantable devices used in foot and ankle surgery. OsteoMed's focus is on neurosurgery, reconstructive plastic surgery, oral surgery, podiatry, and foot and ankle orthopedics. OsteoMed was founded in 1990 in Glendale, California by Rick Buss, a medical device sales representative, and Jim Lafferty, a medical device engineer. The company was founded on the principle of close collaboration between the company and doctors, with products made to their specifications. In the mid-1990s OsteoMed relocated to Addison, TX, seeking a more central location for product distribution. In 1999, after the company had grown, Buss and Lafferty sold OsteoMed Corporation to the Marmon Group, a privately held conglomerate owned solely by the Pritzker family of Chicago, Illinois. In the turmoil following the death of Jay Pritzker, his brother Robert Pritzker spun off sole possession of OsteoMed (among others) into his private holding company, Colson Associates. References External links Official site Medical equipment Manufacturing companies based in California
Osteomed
[ "Biology" ]
273
[ "Medical equipment", "Medical technology" ]
8,777,614
https://en.wikipedia.org/wiki/Atomic%20Industrial%20Forum
The Atomic Industrial Forum (AIF) was an industrial policy organization for the commercial development of nuclear power and energy. History 1950s The Atomic Industrial Forum's history dates to autumn 1952, when it was first being organized: some 30 industrialists, engineers, and educators met in January 1953 to establish the forum. The AIF was formally incorporated on April 10, 1953, in New York City, and marked the beginning of the commercial nuclear power industry in the United States. The first Executive Director of AIF was Charles Robbins. As a trade association the AIF advocated the peaceful uses of atomic energy and an increasing role for the private sector in its development. Its first order of business was to advocate revising the Atomic Energy Act of 1946 to allow and foster the commercial ownership of non-weapons nuclear facilities, such as the production of radioactive isotopes and nuclear power plants. AIF established strong working relationships with the U.S. Atomic Energy Commission and the Congressional Joint Committee on Atomic Energy. AIF's efforts helped to achieve the passage of the Atomic Energy Act of 1954, which resulted in the growth of a commercial nuclear industry. AIF was organized on the basis of an executive committee, the annual election of officers and a permanent operations staff, headed by an Executive Director, Charles Robbins. 1960s In 1963 AIF established an international public information program. Working with other forums around the world, the program sought, through publications, workshops, exhibitions, speeches and outreach, to foster and achieve better understanding of the peaceful uses of atomic energy. Its first program director was Charles B. Yulish. Both the government and private sectors' involvement in atomic energy grew steadily; eventually, more than 125 commercial nuclear power plants provided 20 percent of America's electricity. At the same time there were increasing debates on safeguards and regulation. The Atomic Energy Commission, which promoted, developed and regulated nuclear development, was split into two agencies—the Energy Research and Development Administration, now the Department of Energy, and the independent U.S. Nuclear Regulatory Commission. As new challenges and opportunities evolved, new industry efforts and resources were required to address these matters. 1980s In 1987 the AIF was reconfigured into the Nuclear Utility Management and Resources Council (NUMARC), which addressed generic regulatory and technical issues, and the U.S. Council for Energy Awareness (USCEA), founded in 1979. In 1994 these two organizations were again reorganized and re-purposed. The Nuclear Energy Institute and the American Nuclear Energy Council (ANEC) conducted public affairs, and the nuclear division of the Edison Electric Institute (EEI) was responsible for issues involving nuclear fuel supply and management, and the economics of nuclear energy. 2000s In 2011, the Nuclear Energy Institute became the leading organization representing the nuclear industry. NEI headquarters is in Washington, DC. References Trade associations based in the United States Nuclear organizations Organizations established in 1953 1953 establishments in the United States
Atomic Industrial Forum
[ "Engineering" ]
592
[ "Nuclear organizations", "Energy organizations" ]
8,778,629
https://en.wikipedia.org/wiki/GPS%20Block%20III
GPS Block III (previously Block IIIA) consists of the first ten GPS III satellites, which are used to keep the Navstar Global Positioning System operational. Lockheed Martin designed, developed and manufactured the GPS III Non-Flight Satellite Testbed (GNST) and all ten Block III satellites. The first satellite in the series was launched in December 2018. History The United States' Global Positioning System (GPS) reached Full Operational Capability on 17 July 1995, completing its original design goals. Advances in technology and new demands on the existing system led to the effort to modernize the GPS system. In 2000, the U.S. Congress authorized the effort, referred to as GPS III. The project involves new ground stations and new satellites, with additional navigation signals for both civilian and military users, and aims to improve the accuracy and availability for all users. Raytheon was awarded the Next Generation GPS Operational Control System (OCX) contract on 25 February 2010. The first satellite in the series was projected to launch in 2014, but significant delays pushed the launch to December 2018. The tenth and final GPS Block III launch is projected in FY2026. Development Block III satellites use Lockheed Martin's A2100M satellite bus structure. The propellant and pressurant tanks are manufactured by Orbital ATK from lightweight, high-strength composite materials. Each satellite will carry eight deployable JIB antennas designed and manufactured by Northrop Grumman Astro Aerospace. With the program already delayed significantly beyond the first satellite's planned 2014 launch, on 27 April 2016 SpaceX, of Hawthorne, California, was awarded a US$82.7 million firm-fixed-price contract for launch services to deliver a GPS III satellite to its intended orbit. The contract included launch vehicle production, mission integration, and launch operations for a GPS III mission, to be performed in Hawthorne, California; Cape Canaveral Air Force Station, Florida; and McGregor, Texas. In December 2016, the Director of the U.S. Air Force's Global Positioning Systems Directorate announced the first satellite would launch in the spring of 2018. In March 2017, the U.S. Government Accountability Office stated "Technical issues with both the GPS III satellite and the OCX Block 0 launch control and checkout system have combined to place the planned March 2018 launch date for the first GPS III satellite at risk". The delays were caused by a number of factors, primarily issues found in the navigation payload. Further launch date slippages were caused by the need for additional testing and validation of a SpaceX Falcon 9 rocket, which ultimately launched the satellite on 23 December 2018. On 22 August 2019, the second GPS III satellite was launched aboard a Delta IV rocket. On 21 September 2016, the U.S. Air Force exercised a US$395 million contract option with Lockheed Martin for the ninth and tenth Block III space vehicles, expected to be available for launch by 2022. Launch history 7 of 10 GPS Block III satellites have been launched. 6 are currently operational, with 1 undergoing testing. New navigation signals Civilian L2 (L2C) One of the first announcements was the addition of a new civilian-use signal to be transmitted on a frequency other than the L1 frequency used for the existing GPS Coarse Acquisition (C/A) signal. Ultimately, this became known as the L2C signal because it is broadcast on the L2 frequency (1227.6 MHz). It can be transmitted by all Block IIR-M and later design satellites.
The original plan stated that until the new OCX (Block 1) system is in place, the signal would consist of a default message ("Type 0") that contains no navigational data. OCX Block 1 with the L2C navigation data was scheduled to enter service in February 2016, but was delayed until 2022 or later. As a result of OCX delays, the L2C signal was decoupled from the OCX deployment schedule. All satellites capable of transmitting the L2C signal (all GPS satellites launched since 2005) began broadcasting pre-operational civil navigation (CNAV) messages in April 2014, and in December 2014 the U.S. Air Force started transmitting CNAV uploads on a daily basis. The L2C signal will be considered fully operational once it is broadcast by at least 24 space vehicles, projected to happen in 2023. As of October 2017, L2C was being broadcast from 19 satellites; by June 2022 there were 24 satellites broadcasting this signal. The L2C signal is tasked with providing improved accuracy of navigation, providing an easy-to-track signal, and acting as a redundant signal in case of localized interference. The immediate effect of having two civilian frequencies being transmitted from one satellite is the ability to directly measure, and therefore remove, the ionospheric delay error for that satellite. Without such a measurement, a GPS receiver must use a generic model or receive ionospheric corrections from another source (such as a Satellite Based Augmentation System). Advances in technology for the GPS satellites and the GPS receivers have made ionospheric delay the largest source of error in the C/A signal. A receiver capable of performing this measurement is referred to as a dual frequency receiver (a sketch of this combination follows below). Its technical characteristics are: L2C contains two distinct PRN sequences: CM (for Civilian Moderate length code) is 10,230 bits in length, repeating every 20 milliseconds. CL (for Civilian Long length code) is 767,250 bits, repeating every 1,500 milliseconds (i.e., every 1.5 seconds). Each signal is transmitted at 511,500 bits per second (bit/s); however, they are multiplexed to form a 1,023,000 bit/s signal. CM is modulated with a 25 bit/s navigation message with forward error correction, whereas CL contains no additional modulated data. The long, non-data CL sequence provides for approximately 24 dB greater correlation protection (~250 times stronger) than L1 C/A. L2C signal characteristics provide 2.7 dB greater data recovery and 0.7 dB greater carrier tracking than L1 C/A. The L2C signals' transmission power is 2.3 dB weaker than the L1 C/A signal. In a single frequency application, L2C has 65% more ionospheric error than L1. It is defined in IS-GPS-200. Military (M-code) A major component of the modernization process, a new military signal called M-code was designed to further improve the anti-jamming and secure access of the military GPS signals. The M-code is transmitted in the same L1 and L2 frequencies already in use by the previous military code, the P(Y) code. The new signal is shaped to place most of its energy at the edges (away from the existing P(Y) and C/A carriers). Unlike the P(Y) code, the M-code is designed to be autonomous, meaning that users can calculate their positions using only the M-code signal. P(Y) code receivers must typically first lock onto the C/A code and then transfer to lock onto the P(Y) code. In a major departure from previous GPS designs, the M-code is intended to be broadcast from a high-gain directional antenna, in addition to a wide angle (full Earth) antenna.
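Returning to the dual-frequency ionospheric correction described earlier in this section: since ionospheric group delay scales as 1/f², a fixed linear combination of the L1 and L2 pseudoranges cancels it to first order. A minimal sketch; the pseudorange and delay values are made-up placeholders chosen only to show the algebra:

```python
# Ionosphere-free combination of dual-frequency pseudoranges. Because the
# ionospheric group delay scales as 1/f^2, a linear combination of the L1
# and L2 measurements cancels it to first order. The pseudoranges below
# are made-up illustrative values, not real measurements.

F1 = 1575.42e6  # L1 center frequency, Hz
F2 = 1227.60e6  # L2 center frequency, Hz

def iono_free(p1_m: float, p2_m: float) -> float:
    """First-order ionosphere-free pseudorange, in metres."""
    g1, g2 = F1 ** 2, F2 ** 2
    return (g1 * p1_m - g2 * p2_m) / (g1 - g2)

true_range = 22_000_000.0            # metres (illustrative)
iono_l1 = 5.0                        # metres of delay on L1 (illustrative)
iono_l2 = iono_l1 * (F1 / F2) ** 2   # same ionosphere seen at L2: ~1.65x larger

p1 = true_range + iono_l1
p2 = true_range + iono_l2
print(f"L1-only error:   {p1 - true_range:.2f} m")
print(f"Iono-free error: {iono_free(p1, p2) - true_range:.6f} m")
# -> the combination removes the first-order ionospheric term entirely
```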
The directional antenna's signal, termed a spot beam, is intended to be aimed at a specific region (i.e., several hundred kilometers in diameter) and increase the local signal strength by 20 dB (10× voltage field strength, 100× power). A side effect of having two antennas is that, for receivers inside the spot beam, the GPS satellite will appear as two GPS signals occupying the same position. While the full-Earth M-code signal is available on the Block IIR-M satellites, the spot beam antennas will not be available until the Block III satellites are deployed. Like the other new GPS signals, M-code is dependent on OCX—specifically Block 2—which was scheduled to enter service in October 2016, but which was delayed until 2022, and that initial date did not reflect the two-year first-satellite launch delays expected by the GAO. Other M-code characteristics are: Satellites will transmit two distinct signals from two antennas: one for whole Earth coverage, one in a spot beam. Binary offset carrier modulation. Occupies 24 MHz of bandwidth. It uses a new MNAV navigational message, which is packetized instead of framed, allowing for flexible data payloads. There are four effective data channels; different data can be sent on each frequency and on each antenna. It can include FEC and error detection. The spot beam is ~20 dB more powerful than the whole Earth coverage beam. M-code signal at Earth's surface: –158 dBW for the whole Earth antenna, –138 dBW for spot beam antennas. Safety of Life (L5) Safety of Life is a civilian-use signal, broadcast on the L5 frequency (1176.45 MHz). In 2009, a WAAS satellite sent the initial L5 signal test transmissions. SVN-62, the first GPS Block IIF satellite, continuously broadcast the L5 signal starting on 28 June 2010. As a result of schedule delays to the GPS III control segment, the L5 signal was decoupled from the OCX deployment schedule. All satellites capable of transmitting the L5 signal (all GPS satellites launched since May 2010) began broadcasting pre-operational civil navigation (CNAV) messages in April 2014, and in December 2014 the Air Force started transmitting CNAV uploads on a daily basis. The L5 signal will be considered fully operational once at least 24 space vehicles are broadcasting the signal, currently projected to happen in 2027. As of 10 July 2023, L5 is being broadcast from 17 satellites, after the removal of the Block IIF satellite SVN-63. Improves signal structure for enhanced performance. Higher transmission power than the L1 or L2C signal (~3 dB, or twice as powerful). Wider bandwidth, yielding a 10-times processing gain. Longer spreading codes (10 times longer than used on the C/A code). Located in the Aeronautical Radionavigation Services band, a frequency band that is available worldwide. WRC-2000 added a space signal component to this aeronautical band so the aviation community can manage interference to L5 more effectively than L2. It is defined in IS-GPS-705. New civilian L1 (L1C) L1C is a civilian-use signal, to be broadcast on the same L1 frequency (1575.42 MHz) that contains the C/A signal used by all current GPS users. L1C broadcasting will start when the GPS III Control Segment (OCX) Block 1 becomes operational, scheduled for 2022. The L1C signal will reach full operational status when being broadcast from at least 24 GPS Block III satellites, projected for the late 2020s. Implementation will provide C/A code to ensure backward compatibility. Assured of 1.5 dB increase in minimum C/A code power to mitigate any noise floor increase.
Non-data signal component contains a pilot carrier to improve tracking. Enables greater civil interoperability with Galileo L1. It is defined in IS-GPS-800. Improvements Increased signal power at the Earth's surface: M-code: −158 dBW / −138 dBW. L1 and L2: −157 dBW for the C/A code signal and −160 dBW for the P(Y) code signal. L5 will be −154 dBW. (These decibel figures are unpacked in a sketch below.) Researchers from The Aerospace Corporation confirmed that the most efficient means to generate the high-power M-code signal would entail a departure from full-Earth coverage, characteristic of all the user downlink signals up until that point. Instead, a high-gain antenna would be used to produce a directional spot beam several hundred kilometers in diameter. Originally, this proposal was considered as a retrofit to the planned Block IIF satellites. Upon closer inspection, program managers realized that the addition of a large deployable antenna, combined with the changes that would be needed in the operational control segment, presented too great a challenge for the then existing system design. NASA has requested that Block III satellites carry laser retro-reflectors. This allows tracking the orbits of the satellites independent of the radio signals, which allows satellite clock errors to be disentangled from ephemeris errors. This, a standard feature of GLONASS, will be included in the Galileo positioning system, and was included as an experiment on two older GPS satellites (satellites 35 and 36). The USAF is working with NASA to add a Distress Alerting Satellite System (DASS) payload to the second increment of GPS III satellites as part of the MEOSAR search and rescue system. Control segment The GPS Operational Control Segment (OCS), consisting of a worldwide network of satellite operations centers, ground antennas and monitoring stations, provides Command and Control (C2) capabilities for GPS Block II satellites. The latest update to the GPS OCS, Architectural Evolution Plan 7.5, was operationally accepted in 2019. Next-Generation operational control segment (OCX) In 2010, the United States Air Force announced plans to develop a modern control segment, a critical part of the GPS modernization initiative. OCS will continue to serve as the ground control system of record until the new system, Next Generation GPS Operational Control System (OCX), is fully developed and functional. OCX features are being delivered to the United States Air Force in three separate phases, known as "blocks". The OCX blocks are numbered zero through two. With each block delivered, OCX gains additional functionality. In June 2016, the U.S. Air Force formally notified Congress that the OCX program's projected program costs had risen above US$4.25 billion, thus exceeding baseline cost estimates of US$3.4 billion by 25%, also known as a critical Nunn-McCurdy breach. Factors leading to the breach include "inadequate systems engineering at program inception" and "the complexity of cybersecurity requirements on OCX". In October 2016, the Department of Defense formally certified the program, a necessary step to allow development to continue after a critical breach. By July 2021, all OCX monitor station installations had been completed. OCX monitoring stations are expected to transition to operations in "early 2023," and the U.S. Space Force hopes to complete operational acceptance for all of OCX in 2027.
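As promised above, the decibel figures used throughout this article (−158 dBW received power, the 20 dB spot-beam advantage, ~250× correlation protection) all follow from the same base-ten conversion. A small sketch of that arithmetic, using only figures quoted in the text:

```python
# Decibel arithmetic behind the signal-power figures quoted above.
# dBW is decibels relative to one watt: P_dBW = 10*log10(P_watts).

def dbw_to_watts(dbw: float) -> float:
    return 10.0 ** (dbw / 10.0)

# Received power at the Earth's surface (figures from the text)
for label, dbw in [("L1 C/A", -157), ("M-code, full Earth", -158),
                   ("M-code, spot beam", -138)]:
    print(f"{label}: {dbw_to_watts(dbw):.2e} W")

# A 20 dB spot-beam advantage is a factor of 100 in power,
# and ~24 dB of extra correlation protection is a factor of ~250:
print(f"20 dB -> {dbw_to_watts(-138) / dbw_to_watts(-158):.0f}x power")
print(f"24 dB -> {10 ** (24 / 10):.0f}x")
# -> 1.58e-16 W for full-Earth M-code: GPS signals are extraordinarily
#    weak at the surface, which is why processing gain and spot beams matter
```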
OCX Block 0 (launch and checkout for Block III) OCX Block 0 provides the minimum subset of full OCX capabilities necessary to support launch and early on-orbit spacecraft bus checkout on GPS III space vehicles. Block 0 completed two cybersecurity testing events in April and May 2018 with no new vulnerabilities found. In June 2018, Block 0 had its third successful integrated launch rehearsal with GPS III. The U.S. Air Force accepted delivery of OCX Block 0 in November 2017 and used it to prepare for the first GPS III launch in December 2018. As of May 2022, OCX Block 0 has successfully supported the launch and checkout of GPS III SV 01–05. OCX Block 1 (civilian GPS III features) OCX Block 1 is an upgrade to OCX Block 0; with its delivery, the OCX system achieves Initial Operating Capability (IOC). Once Block 1 is deployed, OCX will for the first time be able to command and control both Block II and Block III GPS satellites, as well as support the ability to begin broadcasting the civilian L1C signal. In November 2016, the GAO reported that OCX Block 1 had become the primary cause of delay in activating the GPS III PNT mission. Block 1 completed the final iteration of Critical Design Review (CDR) in September 2018. Software development on Block 1 was scheduled to complete in 2019, after which the Block 1 software would undergo 2.5 years of system testing. OCX Block 2 (military GPS III features, civilian signal monitoring) OCX Block 2 upgrades OCX with the advanced M-code features for military users and the ability to monitor the performance of the civilian signals. In March 2017, the contractor rephased its OCX delivery schedule so that Block 2 would be delivered to the Air Force concurrently with Block 1. In July 2017, an additional nine-month delay to the schedule was announced. According to the July 2017 program schedule, OCX will be delivered to the U.S. Air Force in April 2022. OCX Block 3F (launch and checkout for Block IIIF) OCX Block 3F upgrades OCX with the ability to perform launch and checkout for Block IIIF satellites. Block IIIF satellites are expected to start launching in 2026. The OCX Block 3F contract, valued at $228 million, was awarded to Raytheon Intelligence and Space on 30 April 2021. Contingency operations GPS III Contingency Operations ("COps") is an update to the GPS Operational Control Segment, allowing OCS to provide Block IIF Position, Navigation, and Timing (PNT) features from GPS III satellites. The Contingency Operations effort enables GPS III satellites to participate in the GPS constellation, albeit in a limited fashion, without having to wait until OCX Block 1 becomes operational (scheduled for 2022). The United States Space Force awarded the US$96 million Contingency Operations contract in February 2016. Contingency Operations was operationally accepted in April 2020. See also GPS Block IIIF GPS signals GPS satellite blocks List of GPS satellites Michibiki – Japanese satellite system designed to enhance GPS within Japan. References External links Global Positioning System Satellites using the A2100 bus SpaceX military payloads
GPS Block III
[ "Technology", "Engineering" ]
3,718
[ "Global Positioning System", "Wireless locating", "Aircraft instruments", "Aerospace engineering" ]
8,778,829
https://en.wikipedia.org/wiki/Biogenic%20sulfide%20corrosion
Biogenic sulfide corrosion is a bacterially mediated process in which hydrogen sulfide gas is formed and subsequently converted to sulfuric acid that attacks concrete and steel within wastewater environments. The hydrogen sulfide gas is biochemically oxidized in the presence of moisture to form sulfuric acid. The effect of sulfuric acid on concrete and steel surfaces exposed to severe wastewater environments can be devastating. In the USA alone, corrosion causes sewer asset losses estimated at $14 billion per year. This cost is expected to increase as the aging infrastructure continues to fail. Environment Corrosion may occur where stale sewage releases hydrogen sulfide gas into an atmosphere containing oxygen gas and high relative humidity. There must be an underlying anaerobic aquatic habitat containing sulfates and an overlying aerobic aquatic habitat, separated by a gas phase containing both oxygen and hydrogen sulfide at concentrations in excess of 2 ppm. Conversion of sulfate to hydrogen sulfide Fresh domestic sewage entering a wastewater collection system contains proteins, including organic sulfur compounds oxidizable to sulfates (SO4^2−), and may contain inorganic sulfates. Dissolved oxygen is depleted as bacteria begin to catabolize organic material in the sewage. In the absence of dissolved oxygen and nitrates, sulfates are reduced to hydrogen sulfide (H2S) as an alternative source of oxygen for catabolizing organic waste by sulfate-reducing bacteria (SRB), identified primarily with the obligately anaerobic genus Desulfovibrio. Hydrogen sulfide production depends on various physicochemical, topographic, and hydraulic parameters, such as: Sewage oxygen concentration. The threshold is 0.1 mg/l; above this value, sulfides produced in sludge and sediments are oxidized by oxygen; below this value, sulfides are emitted in the gaseous phase. Temperature. The higher the temperature, the faster the kinetics of H2S production. Sewage pH. It must lie between 5.5 and 9, with an optimum at 7.5–8. Sulfate concentration. Nutrient concentration, associated with the biochemical oxygen demand. Design of the sewer network. H2S forms only in anaerobic conditions: slow flow and long retention times give aerobic bacteria more time to consume all available dissolved oxygen in the water, creating anaerobic conditions. The flatter the land, the less slope can be given to the sewer network, and this favors slower flow and more pumping stations (where retention time is generally longer). Conversion of hydrogen sulfide to sulfuric acid Some hydrogen sulfide gas diffuses into the headspace environment above the wastewater. Moisture evaporated from warm sewage may condense on unsubmerged walls of sewers and is likely to hang in partially formed droplets from the horizontal crown of the sewer. As a portion of the hydrogen sulfide gas and oxygen gas from the air above the sewage dissolves into these stationary droplets, they become a habitat for sulfur-oxidizing bacteria (SOB) of the genus Acidithiobacillus. Colonies of these aerobic bacteria metabolize the hydrogen sulfide gas to sulfuric acid (H2SO4). Corrosion Sulfuric acid produced by microorganisms will interact with the surface of the structural material. In ordinary Portland cement, it reacts with the calcium hydroxide in concrete to form calcium sulfate. This change simultaneously destroys the polymeric nature of the calcium hydroxide and substitutes a larger molecule into the matrix, causing pressure and spalling of the adjacent concrete and aggregate particles. 
The weakened crown may then collapse under heavy overburden loads. Even within a well-designed sewer network, a rule of thumb in the industry suggests that about 5% of the total length will suffer from biogenic corrosion. In these specific areas, biogenic sulfide corrosion can deteriorate metal or concrete at rates of several millimeters per year. For calcium aluminate cements, the processes are completely different because they are based on another chemical composition. At least three different mechanisms contribute to their better resistance to biogenic corrosion: The first barrier is the larger acid-neutralizing capacity of calcium aluminate cements versus ordinary Portland cement; one gram of calcium aluminate cement can neutralize around 40% more acid than a gram of ordinary Portland cement. For a given production of acid by the biofilm, a calcium aluminate cement concrete will therefore last longer. The second barrier is due to the precipitation, when the surface pH falls below 10, of a layer of alumina gel (AH3 in cement chemistry notation). AH3 is a stable compound down to a pH of 4, and it will form an acid-resistant barrier as long as the surface pH is not lowered below 3–4 by the bacterial activity. The third barrier is a bacteriostatic effect locally activated when the surface reaches pH values less than 3–4. At this level, the alumina gel is no longer stable and will dissolve, liberating aluminum ions. These ions accumulate in the thin biofilm. Once the concentration reaches 300–500 ppm, it produces a bacteriostatic effect on bacterial metabolism. In other words, the bacteria stop oxidizing the sulfur from H2S to produce acid, and the pH stops decreasing. A mortar made of calcium aluminate cement combined with calcium aluminate aggregates, i.e. a 100% calcium aluminate material, will last much longer, as the aggregates can also limit microorganisms' growth and inhibit acid generation at the source itself. Prevention There are several options for addressing biogenic sulfide corrosion problems: impairing H2S formation, venting out the H2S, or using materials resistant to biogenic corrosion. For example, sewage flows more rapidly through steeper-gradient sewers, reducing the time available for hydrogen sulfide generation. Likewise, removing sludge and sediments from the bottom of the pipes reduces the extent of the anoxic areas responsible for sulfate-reducing bacteria growth. Providing good ventilation of sewers can reduce atmospheric concentrations of hydrogen sulfide gas and may dry exposed sewer crowns, but this may create odor issues for neighbors around the venting shafts. Three other effective methods involve continuous operation of mechanical equipment: a chemical reactant such as calcium nitrate can be added continuously to the sewage to impair H2S formation; active ventilation through odor-treatment units can remove H2S; or compressed air can be injected into pressurized mains to prevent anaerobic conditions from developing. In sewerage areas where biogenic sulfide corrosion is expected, acid-resistant materials such as calcium aluminate cements, PVC, or vitrified clay pipe may be substituted for ordinary concrete or steel sewers. Existing structures with extensive exposure to biogenic corrosion, such as sewer manholes and pump station wet wells, can be rehabilitated. Rehabilitation can be done with materials such as a structural epoxy coating designed both to resist acid and to strengthen the compromised concrete structure. 
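Since the H2S-formation conditions listed earlier are simple numeric thresholds, they lend themselves to a first-pass screening check. The sketch below (Python) is only an illustration: the threshold values are the ones quoted in this article, but the function itself and the way the checks combine into a single verdict are invented here, not a published design method.

```python
# First-pass screen for biogenic H2S risk in a sewer reach, using the
# conditions quoted above: dissolved oxygen below ~0.1 mg/l lets sulfides
# escape to the gas phase, sulfate-reducing bacteria need pH 5.5-9
# (optimum 7.5-8), and warmer sewage speeds up H2S production.
def h2s_risk(dissolved_oxygen_mg_l: float, ph: float, temp_c: float) -> str:
    if dissolved_oxygen_mg_l >= 0.1:
        return "low: oxygen re-oxidizes sulfides in sludge and sediments"
    if not 5.5 <= ph <= 9.0:
        return "low: pH outside the range tolerated by SRB"
    if 7.5 <= ph <= 8.0 and temp_c >= 25.0:
        return "high: anaerobic, optimal pH, warm sewage"
    return "moderate: anaerobic and within the SRB pH range"

print(h2s_risk(0.05, 7.7, 28.0))  # high: anaerobic, optimal pH, warm sewage
print(h2s_risk(0.30, 7.7, 28.0))  # low: oxygen re-oxidizes sulfides ...
```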
See also Corrosion Microbial corrosion Sulfide References Brongers, M.P.H., Virmani, P.Y., Payer, J.H., 2002. Drinking Water and Sewer Systems in Corrosion Costs and preventive Strategies in the United States. United States Department of Transportation Federal Highway Administration. Sydney, R., Esfandi, E., Surapaneni, S., 1996. Control concrete sewer corrosion via the crown spray process. Water Environ. Res. 68 (3), 338–347. United States Environmental Protection Agency, 1991. Hydrogen Sulphide Corrosion in Wastewater Collection and Treatment Systems (Technical Report). United States Environmental Protection Agency (1985) Design Manual, Odor and Corrosion Control in Sanitary Sewerage Systems and Treatment Plants (Technical Report). Morton R.L., Yanko W.A., Grahom D.W., Arnold R.G. (1991) Relationship between metal concentrations and crown corrosion in Los Angeles County sewers. Research Journal of Water Pollution Control Federation, 63, 789–798. Mori T., Nonaka T., Tazaki K., Koga M., Hikosaka Y., Noda S. (1992) Interactions of nutrients, moisture, and pH on microbial corrosion of concrete sewer pipes. Water Research, 26, 29–37. Ismail N., Nonaka T., Noda S., Mori T. (1993) Effect of carbonation on microbial corrosion of concrete. Journal of Construction Management and Engineering, 20, 133–138. Davis J.L. (1998) Characterization and modeling of microbially induced corrosion of concrete sewer pipes. Ph.D. Dissertation, University of Houston, Houston, TX. Monteny J., De Belie N., Vincke E., Verstraete W., Taerwe L. (2001) Chemical and microbiological tests to simulate sulfuric acid corrosion of polymer-modified concrete. Cement and Concrete Research, 31, 1359–1365. Vincke E., Van Wanseele E., Monteny J., Beeldens A., De Belie N., Taerwe L., Van Gemert D., Verstraete W. (2002) Influence of polymer addition on biogenic sulfuric acid attack. International Biodeterioration and Biodegradation, 49, 283–292. Herisson J., Van Hullebusch E., Gueguen Minerbe M., Chaussadent T. (2014) Biogenic corrosion mechanism: study of parameters explaining calcium aluminate cement durability. CAC 2014 – International Conference on Calcium Aluminates, May 2014, France. 12 p. Hammer, Mark J. Water and Waste-Water Technology John Wiley & Sons (1975) Metcalf & Eddy Wastewater Engineering McGraw-Hill (1972) Pomeroy, R.D., 1976, "The problem of hydrogen sulphide in sewers". Published by the Clay Pipes Development Association *Pomeroy's report contains errors in the equation: the pipeline slope (S, p. 8) is quoted as m/100m, but should be m/m. This introduces a factor of 10 underestimate in the calculation of the "Z factor", used to indicate if there is a risk of sulfide-induced corrosion, if the published units are used. The web link is to the revised 1992 edition, which contains the units error - the 1976 edition has the correct units. Sawyer, Clair N. & McCarty, Perry L. Chemistry for Sanitary Engineers (2nd edition) McGraw-Hill (1967) United States Department of the Interior (USDI) Concrete Manual (8th edition) United States Government Printing Office (1975) Weismann, D. & Lohse, M. (Hrsg.): "Sulfid-Praxishandbuch der Abwassertechnik; Geruch, Gefahr, Korrosion verhindern und Kosten beherrschen!" 1. Auflage, VULKAN-Verlag, 2007, Notes Bacteria Cement Concrete Corrosion Sewerage
Biogenic sulfide corrosion
[ "Chemistry", "Materials_science", "Engineering", "Biology", "Environmental_science" ]
2,293
[ "Structural engineering", "Metallurgy", "Prokaryotes", "Corrosion", "Water pollution", "Sewerage", "Electrochemistry", "Bacteria", "Environmental engineering", "Concrete", "Materials degradation", "Microorganisms" ]
8,779,393
https://en.wikipedia.org/wiki/Isofuran
Isofurans are nonclassic eicosanoids formed nonenzymatically by free radical-mediated peroxidation of arachidonic acid. The isofurans are similar to the isoprostanes and are formed under similar conditions, but contain a substituted tetrahydrofuran ring. The concentration of oxygen affects this process: at elevated oxygen concentrations, the formation of isofurans is favored, whereas the formation of isoprostanes is disfavored. References Eicosanoids
Isofuran
[ "Chemistry", "Biology" ]
105
[ "Biochemistry stubs", "Biotechnology stubs", "Biochemistry" ]
8,779,868
https://en.wikipedia.org/wiki/Comet%20McNaught
Comet McNaught, also known as the Great Comet of 2007 and given the designation C/2006 P1, is a non-periodic comet discovered on 7 August 2006 by British-Australian astronomer Robert H. McNaught using the Uppsala Southern Schmidt Telescope. It was the brightest comet in over 40 years, and was easily visible to the naked eye for observers in the Southern Hemisphere in January and February 2007. With an estimated peak magnitude of −5.5, the comet was the second-brightest since 1935. Around perihelion on 12 January, it was visible worldwide in broad daylight. Its tail measured an estimated 35 degrees in length at its peak. The brightness of C/2006 P1 near perihelion was enhanced by forward scattering. Discovery McNaught discovered the comet in a CCD image on 7 August 2006 during the course of routine observations for the Siding Spring Survey, which searched for Near-Earth Objects that might represent a collision threat to Earth. The comet was discovered in Ophiuchus, shining very dimly at a magnitude of about +17. From August through November 2006, the comet was imaged and tracked as it moved through Ophiuchus and Scorpius, brightening as high as magnitude +9, still too dim to be seen with the unaided eye. Then, for most of December, the comet was lost in the glare of the Sun. Upon recovery, it became apparent that the comet was brightening very fast, reaching naked-eye visibility in early January 2007. It was visible to northern hemisphere observers, in Sagittarius and surrounding constellations, until about 13 January. Perihelion was 12 January at a distance of 0.17 AU. This was close enough to the Sun to be observed by the space-based Solar and Heliospheric Observatory (SOHO). The comet entered SOHO's LASCO C3 camera's field of view on 12 January, and was viewable on the web in near real-time. The comet left SOHO's field of view on 16 January. Due to its proximity to the Sun, the Northern Hemisphere ground-based viewers had a short window for viewing, and the comet could be spotted only during bright twilight. As it reached perihelion on 12 January, it became the brightest comet since Comet Ikeya–Seki in 1965. The comet was dubbed the Great Comet of 2007 by Space.com. On 13 and 14 January 2007, the comet attained an estimated maximum apparent magnitude of −5.5. It was bright enough to be visible in daylight about 5°–10° southeast of the Sun from 12 to 14 January. The closest approach to the Earth occurred on 15 January 2007, at a distance of 0.82 AU. After passing the Sun, McNaught became visible in the Southern Hemisphere. In Australia, according to Siding Spring Observatory at Coonabarabran, where the comet was discovered, it was to have reached its theoretical peak in brightness on Sunday 14 January just after sunset, when it would have been visible for 23 minutes. On 15 January the comet was observed at Perth Observatory with an estimated apparent magnitude of −4.0. Ulysses probe The Ulysses spacecraft made an unexpected pass through the tail of the comet on 3 February 2007. Evidence of the encounter was published in the 1 October 2007 issue of The Astrophysical Journal. Ulysses flew through McNaught's ion tail 260 million kilometres (160 million miles) from the comet's core and instrument readings showed that there was "complex chemistry" in the region. The Solar Wind Ion Composition Spectrometer (SWICS) aboard Ulysses measured Comet McNaught's tail composition and detected unexpected ions. It was the first time that O3+ oxygen ions were detected near a comet. 
This suggested that the solar wind ions, which had originally been stripped of most of their electrons, gained some electrons back while passing through the comet's atmosphere. SWICS also measured the speed of the solar wind and found that even at 260 million kilometres (160 million miles) from the comet's nucleus, the tail had slowed the solar wind to half the speed it would normally have at that distance from the Sun. Prof. George Gloeckler, the principal investigator on the Solar Wind Ion Composition Spectrometer (SWICS), said the discovery was important, as the composition of comets tells us about conditions approximately 4.5 billion years ago, when the Solar System was formed. Period Comet C/2006 P1 came directly from the Oort cloud, a journey that took millions of years. It follows a hyperbolic trajectory (with an osculating eccentricity larger than 1) during its passage through the inner Solar System, but the eccentricity will drop below 1 after it leaves the influence of the planets, and it will remain bound to the Solar System as an Oort cloud comet. Given the orbital eccentricity of this object, different epochs can generate quite different heliocentric unperturbed two-body best-fit solutions for the object's aphelion (maximum) distance. For objects at such high eccentricity, the Sun's barycentric coordinates are more stable than heliocentric coordinates. Using JPL Horizons, the barycentric orbital elements for epoch 2050 generate a semi-major axis of 2050 AU and a period of approximately 92,700 years. See also Other comets with the name McNaught Lists of comets List of interstellar comets List of comets by type List of non-periodic comets List of periodic comets Notes Solution using the Solar System Barycenter Read osculating orbit for more details about heliocentric unperturbed two-body solutions References External links C/2006 P1 at Cometary Science Center Info and gallery, from skytonight Comet McNaught in Perth skies Current hotshots of comet, from NASA's Solar and Heliospheric Observatory website Animation of recent images within LASCO C3's FOV McNaught in STEREO HI1a Montage McNaught in STEREO HI1a Comet McNaught photo gallery from Southern Hemisphere NASA Astronomy Pictures of the Day: 5 January – Comet McNaught Heads for the Sun 9 January – McNaught Now Brightest Comet in Decades 13 January – Comet Over Krakow 15 January – Comet McNaught Over Catalonia 17 January – Comet McNaught from New STEREO Satellite 18 January – Southern Comet 19 January – McNaught's Matinee 20 January – SOHO: Comet McNaught Movie 22 January – The Magnificent Tail of Comet McNaught 24 January – A Comet Tail Horizon 1 February – A Tail of Two Hemispheres 5 February – Comet Between Fireworks and Lightning 12 February – Comet McNaught Over New Zealand Non-periodic comets Comets visited by spacecraft 2007 in science 20060807 Comets in 2007 Great comets Oort cloud
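The ~92,700-year figure in the Period section above is just Kepler's third law applied to the quoted semi-major axis: for an orbit around the Sun, the period in years is approximately a^(3/2) with a in AU. A quick check (Python):

```python
a_au = 2050.0               # barycentric semi-major axis quoted above (AU)
period_years = a_au ** 1.5  # Kepler's third law: P^2 = a^3 (years, AU)
print(f"{period_years:,.0f} years")  # ~92,800, consistent with ~92,700
```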
Comet McNaught
[ "Astronomy" ]
1,413
[ "Astronomical hypotheses", "Oort cloud" ]
8,781,283
https://en.wikipedia.org/wiki/Hedgehog%20mushroom
Hedgehog mushroom is a common name for several species of fungi and may refer to: Hydnum repandum Hericium erinaceus
Hedgehog mushroom
[ "Biology" ]
30
[ "Set index articles on fungus common names", "Set index articles on organisms" ]
8,782,085
https://en.wikipedia.org/wiki/Electracy
Electracy is a theory by Gregory Ulmer that describes the skills necessary to exploit the full communicative potential of new electronic media such as multimedia, hypermedia, social software, and virtual worlds. Concept According to Ulmer, electracy "is to digital media what literacy is to print". It encompasses the broader cultural, institutional, pedagogical, and ideological implications inherent in the major societal transition from print to electronic media. Electracy is a portmanteau of "electricity" and Jacques Derrida's term "trace". Electracy denotes a broad spectrum of research possibilities including the history and invention of writing and mnemonic practices, the epistemological and ontological changes resulting from such practices, the sociological and psychological implications of a networked culture, and the pedagogical implementation of practices derived from such explorations. Ulmer's work considers other historical moments of radical technological change such as the inventions of the alphabet, writing, and the printing press. Also, electracy is grammatological in deriving a methodology from the history of writing and mnemonic practices. Ulmer introduced electracy in Teletheory (1989). First citations of the work appear in 1997. James Inman regarded electracy as one of the "most prominent" contemporary designations for what Walter J. Ong once described as a "secondary orality" that will eventually supplant print literacy. Inman distinguishes electracy from other literacies (such as metamedia), stating that it is a broader concept unique for being ontologically dependent exclusively on electronic media. Some scholars have viewed the electracy paradigm, along with other "apparatus theories" such as Ong's, with skepticism, arguing that they are "essentialist" or "determinist". Pedagogy Lisa Gye states that the transition from literacy to electracy has changed "the ways in which we think, write and exchange ideas," and that Ulmer's primary concern is to understand how that has transformed learning. Electracy as an educational aim has been recognized by scholars in several fields including English composition and rhetoric, literary and media criticism, digital media and art, and architecture. Mikesch Muecke explains that "Gregory Ulmer's ideas on electracy provide ... a model for a new pedagogy where learning is closer to invention than verification." Alan Clinton, in a review of Internet Invention, writes that "Ulmer's pedagogy ultimately levels the playing field between student and teacher." Ulmer's educational methods fit into a constructivist pedagogical theory and practice. He discusses the relationship between pedagogy and electracy at length in an interview with Sung-Do Kim published in 2005. See also Computer literacy Information literacy Transliteracy References Mnemonics Information society Philosophy of education Literacy
Electracy
[ "Technology" ]
600
[ "Computing and society", "Information society" ]
9,448,193
https://en.wikipedia.org/wiki/Boolean%20network
A Boolean network consists of a discrete set of Boolean variables, each of which has a Boolean function (possibly different for each variable) assigned to it; the function takes inputs from a subset of those variables and outputs a value that determines the state of the variable it is assigned to. This set of functions in effect determines a topology (connectivity) on the set of variables, which then become nodes in a network. Usually, the dynamics of the system is taken as a discrete time series where the state of the entire network at time t+1 is determined by evaluating each variable's function on the state of the network at time t. This may be done synchronously or asynchronously. Boolean networks have been used in biology to model regulatory networks. Although Boolean networks are a crude simplification of genetic reality, where genes are not simple binary switches, there are several cases where they correctly convey the pattern of expressed and suppressed genes. The seemingly simple (synchronous) model was only fully understood mathematically in the mid-2000s. Classical model A Boolean network is a particular kind of sequential dynamical system, where time and states are discrete, i.e. both the set of variables and the set of states in the time series each have a bijection onto an integer series. A random Boolean network (RBN) is one that is randomly selected from the set of all possible Boolean networks of a particular size, N. One can then study statistically how the expected properties of such networks depend on various statistical properties of the ensemble of all possible networks. For example, one may study how the RBN behavior changes as the average connectivity is changed. The first Boolean networks were proposed by Stuart A. Kauffman in 1969, as random models of genetic regulatory networks, but their mathematical understanding only started in the 2000s. Attractors Since a Boolean network has only 2^N possible states, a trajectory will sooner or later reach a previously visited state, and thus, since the dynamics are deterministic, the trajectory will fall into a steady state or cycle called an attractor (though in the broader field of dynamical systems a cycle is only an attractor if perturbations from it lead back to it). If the attractor has only a single state it is called a point attractor, and if the attractor consists of more than one state it is called a cycle attractor. The set of states that lead to an attractor is called the basin of the attractor. States which occur only at the beginning of trajectories (no trajectories lead to them) are called garden-of-Eden states, and the dynamics of the network flow from these states towards attractors. The time it takes to reach an attractor is called transient time. With growing computer power and increasing understanding of the seemingly simple model, different authors have given different estimates for the mean number and length of the attractors. Stability In dynamical systems theory, the structure and length of the attractors of a network correspond to the dynamic phase of the network. The stability of Boolean networks depends on the connections of their nodes. A Boolean network can exhibit stable, critical or chaotic behavior. This phenomenon is governed by a critical value K_c of the average number of connections of nodes, and can be characterized by the Hamming distance as distance measure. 
In the unstable regime, the distance between two initially close states on average grows exponentially in time, while in the stable regime it decreases exponentially. Here, "initially close states" means that the Hamming distance is small compared with the number of nodes (N) in the network. For the N-K model, the network is stable if K < K_c, critical if K = K_c, and unstable if K > K_c. The state of a given node n_i is updated according to its truth table, whose outputs are randomly populated. p_i denotes the probability of assigning an "off" output to a given series of input signals. If p_i = p is constant for every node, the transition between the stable and chaotic ranges depends on p. According to Bernard Derrida and Yves Pomeau, the critical value of the average number of connections is K_c = 1/[2p(1−p)]. If the connectivity is not constant, and there is no correlation between the in-degrees and out-degrees, the condition of stability is determined by the mean in-degree ⟨K^in⟩: the network is stable if ⟨K^in⟩ < K_c, critical if ⟨K^in⟩ = K_c, and unstable if ⟨K^in⟩ > K_c. The conditions of stability are the same in the case of networks with scale-free topology, where the in- and out-degree distributions are power-law distributions, P(K) ∝ K^−γ, and ⟨K^in⟩ = ⟨K^out⟩, since every out-link from a node is an in-link to another. Sensitivity shows the probability that the output of the Boolean function of a given node changes if its input changes. For random Boolean networks, the sensitivity of node i is q_i = 2p_i(1−p_i). In the general case, stability of the network is governed by the largest eigenvalue λ_Q of the matrix Q, where Q_ij = q_i A_ij and A is the adjacency matrix of the network. The network is stable if λ_Q < 1, critical if λ_Q = 1, unstable if λ_Q > 1. Variations of the model Other topologies One theme is to study different underlying graph topologies. The homogeneous case simply refers to a grid, which reduces the model to the famous Ising model. Scale-free topologies may be chosen for Boolean networks. One can distinguish the cases where only the in-degree distribution is power-law distributed, only the out-degree distribution, or both. Other updating schemes Classical Boolean networks (sometimes called CRBN, i.e. Classic Random Boolean Network) are synchronously updated. Motivated by the fact that genes don't usually change their state simultaneously, different alternatives have been introduced. A common classification is the following: Deterministic asynchronous updated Boolean networks (DRBNs) are not synchronously updated but a deterministic solution still exists. A node i will be updated when t ≡ Q_i (mod P_i) where t is the time step. The most general case is full stochastic updating (GARBN, general asynchronous random Boolean networks). Here, one (or more) node(s) are selected at each computational step to be updated. The Partially-Observed Boolean Dynamical System (POBDS) signal model differs from all previous deterministic and stochastic Boolean network models by removing the assumption of direct observability of the Boolean state vector and allowing uncertainty in the observation process, addressing the scenario encountered in practice. Autonomous Boolean networks (ABNs) are updated in continuous time (t is a real number, not an integer), which leads to race conditions and complex dynamical behavior such as deterministic chaos. Application of Boolean Networks Classification The Scalable Optimal Bayesian Classification approach developed an optimal classification of trajectories accounting for potential model uncertainty, and also proposed a particle-based trajectory classification that is highly scalable for large networks with much lower complexity than the optimal solution. 
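A minimal sketch (Python with NumPy; the network size N, in-degree K, and bias p are illustrative choices, and K = 2 with p = 0.5 sits exactly at the critical point K_c = 1/[2p(1−p)] = 2 derived above) ties the classical model together: it builds a random N-K network, iterates the synchronous dynamics until a state repeats, which locates an attractor and its transient, and then flips a single node to track the Hamming distance between two initially close trajectories.

```python
import numpy as np

rng = np.random.default_rng(0)
N, K, p = 12, 2, 0.5  # illustrative; K = 2, p = 0.5 is the critical point

# Each node reads K randomly chosen inputs through a random truth table;
# an 'off' (False) output is drawn with probability p.
inputs = np.array([rng.choice(N, size=K, replace=False) for _ in range(N)])
tables = rng.random((N, 2 ** K)) >= p

def step(state):
    """Synchronous update: every node applies its function to time t."""
    idx = (state[inputs] * (2 ** np.arange(K))).sum(axis=1)
    return tables[np.arange(N), idx]

def attractor(state):
    """Iterate until a state repeats; return (transient, cycle length)."""
    seen, t = {}, 0
    while state.tobytes() not in seen:
        seen[state.tobytes()] = t
        state, t = step(state), t + 1
    first = seen[state.tobytes()]
    return first, t - first

s0 = rng.random(N) < 0.5
print("transient %d, attractor length %d" % attractor(s0.copy()))

# Stability diagnostic: perturb one node and follow the Hamming distance.
a, b = s0.copy(), s0.copy()
b[0] = ~b[0]
for _ in range(10):
    a, b = step(a), step(b)
print("Hamming distance after 10 steps:", int((a != b).sum()))
```

In the chaotic regime the final Hamming distance tends to grow toward order N; in the stable regime it tends back to zero, which is exactly the criterion described above.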
See also NK model References Dubrova, E., Teslenko, M., Martinelli, A., (2005). *Kauffman Networks: Analysis and Applications, in "Proceedings of International Conference on Computer-Aided Design", pages 479-484. External links Analysis of Dynamic Algebraic Models (ADAM) v1.1 bioasp/bonesis: Synthesis of Most Permissive Boolean Networks from network architecture and dynamical properties CoLoMoTo (Consortium for Logical Models and Tools) DDLab NetBuilder Boolean Networks Simulator Open Source Boolean Network Simulator JavaScript Kauffman Network Probabilistic Boolean Networks (PBN) RBNLab A SAT-based tool for computing attractors in Boolean Networks Bioinformatics Logic Spin models Exactly solvable models Statistical mechanics
Boolean network
[ "Physics", "Engineering", "Biology" ]
1,586
[ "Biological engineering", "Spin models", "Quantum mechanics", "Bioinformatics", "Statistical mechanics" ]
9,448,373
https://en.wikipedia.org/wiki/D-Wave%20Systems
D-Wave Quantum Systems Inc. is a quantum computing company with locations in Palo Alto, California and Burnaby, British Columbia. D-Wave claims to be the world's first company to sell computers that exploit quantum effects in their operation. D-Wave's early customers include Lockheed Martin, the University of Southern California, Google/NASA, and Los Alamos National Laboratory. D-Wave does not implement a generic quantum computer; instead, their computers implement specialized quantum annealing. History D-Wave was founded by Haig Farris, Geordie Rose, Bob Wiens, and Alexandre Zagoskin. Farris taught a business course at the University of British Columbia (UBC), where Rose obtained his PhD, and Zagoskin was a postdoctoral fellow. The company name refers to their first qubit designs, which used d-wave superconductors. D-Wave operated as an offshoot from UBC, while maintaining ties with the Department of Physics and Astronomy. It funded academic research in quantum computing, thus building a collaborative network of research scientists. The company collaborated with several universities and institutions, including UBC, IPHT Jena, Université de Sherbrooke, University of Toronto, University of Twente, Chalmers University of Technology, University of Erlangen, and Jet Propulsion Laboratory. These partnerships were listed on D-Wave's website until 2005. In June 2014, D-Wave announced a new quantum applications ecosystem with computational finance firm 1QB Information Technologies (1QBit) and cancer research group DNA-SEQ to focus on solving real-world problems with quantum hardware. On May 11, 2011, D-Wave Systems announced D-Wave One, described as "the world's first commercially available quantum computer", operating on a 128-qubit chipset using quantum annealing (a general method for finding the global minimum of a function by a process using quantum fluctuations) to solve optimization problems. The D-Wave One was built on early prototypes such as D-Wave's Orion Quantum Computer. The prototype was a 16-qubit quantum annealing processor, demonstrated on February 13, 2007, at the Computer History Museum in Mountain View, California. D-Wave demonstrated what they claimed to be a 28-qubit quantum annealing processor on November 12, 2007. The chip was fabricated at the NASA Jet Propulsion Laboratory Microdevices Lab in Pasadena, California. In May 2013, a collaboration between NASA, Google, and the Universities Space Research Association (USRA) launched a Quantum Artificial Intelligence Lab based on the D-Wave Two 512-qubit quantum computer that would be used for research into machine learning, among other fields of study. On August 20, 2015, D-Wave Systems announced the general availability of the D-Wave 2X system, a 1000-qubit+ quantum computer. This was followed by an announcement on September 28, 2015, that it had been installed at the Quantum Artificial Intelligence Lab at NASA Ames Research Center. In January 2017, D-Wave released the D-Wave 2000Q, and an open-source repository containing software tools for quantum annealers. It contains Qbsolv, which is open-source software that solves quadratic unconstrained binary optimization problems on both the company's quantum processors and classic hardware architectures. Additional systems were released in 2020 with another system planned for late 2024 or 2025 as shown below. D-Wave operated from various locations in Vancouver, British Columbia, and laboratory spaces at UBC before moving to its current location in the neighboring suburb of Burnaby. 
D-Wave also has offices in Palo Alto, California, and Vienna, Virginia, USA. Computer systems The first commercially produced D-Wave processor was a programmable, superconducting integrated circuit with up to 128 pair-wise coupled superconducting flux qubits. The 128-qubit processor was superseded by a 512-qubit processor in 2013. The processor is designed to implement special-purpose quantum annealing, as opposed to being operated as a universal gate-model quantum computer. The underlying ideas for the D-Wave approach arose from experimental results in condensed matter physics, in particular work on quantum annealing in magnets performed by Gabriel Aeppli, Thomas Felix Rosenbaum, and collaborators, who had been testing the advantages, proposed by Bikas K. Chakrabarti and collaborators, of quantum tunneling/fluctuations in the search for ground states in spin glasses. These ideas were later recast in the language of quantum computation by MIT physicists Edward Farhi, Seth Lloyd, Terry Orlando, and Bill Kaminsky, whose publications in 2000 and 2004 provided both a theoretical model for quantum computation that fit with the earlier work in quantum magnetism (specifically the adiabatic quantum computing model and quantum annealing, its finite-temperature variant) and a specific enablement of that idea using superconducting flux qubits, a close cousin to the designs D-Wave produced. Much of the controversy around the D-Wave approach stems from the fact that it arose not from the conventional quantum information field but from experimental condensed matter physics. D-Wave maintains a list of peer-reviewed technical publications by their scientists and others on their website. Orion prototype On February 13, 2007, D-Wave demonstrated the Orion system, running three different applications at the Computer History Museum in Mountain View, California. This marked the first claimed public demonstration of a quantum computer and an associated service. The first application, an example of pattern matching, performed a search for a similar compound to a known drug within a database of molecules. The next application computed a seating arrangement for an event subject to compatibilities and incompatibilities between guests. The last involved solving a Sudoku puzzle. The processors at the heart of D-Wave's "Orion quantum computing system" are designed for use as hardware accelerators rather than general-purpose computer microprocessors. The system is designed to solve a particular NP-complete problem related to the two-dimensional Ising model in a magnetic field. D-Wave terms the device a 16-qubit superconducting adiabatic quantum computer processor. According to the company, a conventional front-end running an application that requires the solution of an NP-complete problem, such as pattern matching, passes the problem to the Orion system. According to Geordie Rose, founder and Chief Technology Officer of D-Wave, NP-complete problems "are probably not exactly solvable, no matter how big, fast or advanced computers get"; the adiabatic quantum computer used by the Orion system is intended to quickly compute an approximate solution. 2009 Google demonstration On December 8, 2009, at the Neural Information Processing Systems (NeurIPS) conference, a Google research team led by Hartmut Neven used D-Wave's processor to train a binary image classifier. 
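The "particular NP-complete problem" described above, finding ground states of an Ising model, is equivalent to quadratic unconstrained binary optimization (QUBO), the input format of D-Wave's later tools such as Qbsolv. A toy sketch (Python; the three-variable QUBO matrix is invented purely for illustration, and it is solved here by classical brute force, not on any D-Wave hardware) shows the shape of such a problem:

```python
from itertools import product

# A QUBO asks for the binary vector x minimizing the sum over (i, j) of
# Q[i, j] * x[i] * x[j]. This tiny Q is an invented example; a real
# annealer targets thousands of variables, where the 2**n search below
# becomes hopeless.
Q = {(0, 0): -1.0, (1, 1): -1.0, (2, 2): -1.0,   # linear (diagonal) terms
     (0, 1):  2.0, (1, 2):  2.0}                 # couplings penalize 1-1 pairs

def energy(x):
    return sum(c * x[i] * x[j] for (i, j), c in Q.items())

best = min(product((0, 1), repeat=3), key=energy)
print(best, energy(best))   # (1, 0, 1) with energy -2.0
```

A quantum annealer attacks the same minimization by mapping Q onto the couplings of its qubit graph and letting the system relax toward low-energy states.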
D-Wave One On May 11, 2011, D-Wave Systems announced the D-Wave One, an integrated quantum computer system running on a 128-qubit processor. The processor used in the D-Wave One, known as Rainier, performs a single mathematical operation, discrete optimization. Rainier uses quantum annealing to solve optimization problems. The D-Wave One was claimed to be the world's first commercially available quantum computer system. Its price was quoted at approximately US$10,000,000. A research team led by Matthias Troyer and Daniel Lidar found evidence of quantum annealing in the D-Wave One but saw no speed increase compared to classical computers. They implemented an optimized classical algorithm to solve the same particular problem as the D-Wave One. Lockheed Martin and D-Wave collaboration In November 2010, Lockheed Martin signed a multi-year contract with D-Wave Systems to realize the benefits of a quantum annealing processor applied to some of Lockheed's most challenging computation problems. The contract was later announced on May 25, 2011. The contract included the purchase of the D-Wave One quantum computer, maintenance, and associated professional services. Optimization problem-solving in protein structure determination In August 2012, a team of Harvard University researchers presented results of the largest protein-folding problem solved to date using a quantum computer. The researchers solved instances of a lattice protein folding model, known as the Miyazawa–Jernigan model, on a D-Wave One quantum computer. D-Wave Two In early 2012, D-Wave Systems revealed a 512-qubit quantum computer, which was launched as a production processor in 2013. In May 2013, Catherine McGeoch, a consultant for D-Wave, published the first comparison of the technology against regular top-end desktop computers running an optimization algorithm. Using a configuration with 439 qubits, the system performed 3,600 times as fast as CPLEX, the best algorithm on the conventional machine, solving problems with 100 or more variables in half a second compared with half an hour. The results were presented at the Computing Frontiers 2013 conference. In March 2013, several groups of researchers at the Adiabatic Quantum Computing workshop at the Institute of Physics in London, England, produced evidence, though only indirect, of quantum entanglement in the D-Wave chips. In May 2013, it was announced that a collaboration between NASA, Google, and the USRA launched a Quantum Artificial Intelligence Lab at the NASA Advanced Supercomputing Division at Ames Research Center in California, using a 512-qubit D-Wave Two that would be used for research into machine learning, among other fields of study. D-Wave 2X and D-Wave 2000Q On August 20, 2015, D-Wave announced the general availability of their D-Wave 2X computer, with 1000 qubits in a Chimera graph architecture (although, due to magnetic offsets and manufacturing variability inherent in the superconductor circuit fabrication, fewer than 1152 qubits are functional and available for use; the exact number of qubits yielded will vary with each specific processor manufactured). This was accompanied by a report comparing speeds with high-end single-threaded CPUs. Unlike previous reports, this one explicitly stated that the question of quantum speedup was not something they were trying to address, and focused on constant-factor performance gains over classical hardware. 
For general-purpose problems, a speedup of 15x was reported, but these classical algorithms parallelize efficiently, so the computer would perform roughly on par with 30 traditional high-end single-threaded cores. The D-Wave 2X processor is based on a 2048-qubit chip with half of the qubits disabled; these were activated in the D-Wave 2000Q. Advantage In February 2019, D-Wave announced the next-generation system that would become the Advantage and delivered that system in 2020. The Advantage architecture would increase the total number of qubits to 5760 and switch to the Pegasus graph topology, increasing the per-qubit connections to 15. D-Wave claimed the Advantage architecture provided a 10x speedup in time-to-solve over the 2000Q product offering. D-Wave claims that an incremental follow-up, the Advantage Performance Update, provides a 2x speedup over Advantage and a 20x speedup over 2000Q, among other improvements. Advantage 2 In 2021, D-Wave announced the next-generation system that would become the Advantage 2, with delivery expected in late 2024 or early 2025. The Advantage 2 architecture was expected to increase the total number of qubits to over 7000 and switch to the Zephyr graph topology, increasing the per-qubit connections to 20. See also List of companies involved in quantum computing or communication Adiabatic quantum computation Analog computer AQUA@home Flux qubit Quantum annealing Superconducting quantum computing IBM Q System One References External links Companies based in Burnaby Technology companies established in 1999 Computer hardware companies Quantum computing Companies involved in quantum computing Quantum information science Technology companies of Canada Companies listed on the New York Stock Exchange 1999 establishments in British Columbia
D-Wave Systems
[ "Technology" ]
2,487
[ "Computer hardware companies", "Computers" ]
9,448,987
https://en.wikipedia.org/wiki/Glossary%20of%20phytopathology
This is a glossary of some of the terms used in phytopathology. Phytopathology is the study of plant diseases. It is a multi-disciplinary science, since the prerequisites for disease development are the presence of a susceptible host species, a pathogen, and the appropriate environmental conditions. This is known as the disease triangle. Because of this interaction, the terminology used in phytopathology often comes from other disciplines, including those dealing with the host species (botany / plant science, plant physiology), the pathogen (bacteriology, mycology, nematology, virology), the environment and disease management practices (agronomy, soil science, meteorology, environmental science, ecology, plant breeding, pesticides, entomology), and areas of study that apply to both the host and pathogen (molecular biology, genetics, molecular genetics). The result is that most phytopathological glossaries include terms from these other disciplines in addition to terms (disease incidence, horizontal resistance, gene-for-gene relationship, blast, scab and so on) that are specific to, or which have a unique meaning in, phytopathology. This glossary is no exception. However, for the sake of brevity, it has, for the most part, restricted terms from other disciplines to those that pertain to the pathogen. At some point, these terms should be moved to other glossaries (e.g. glossary of mycology, glossary of nematology, and so on). A abiotic Disease not caused by living organisms acceptable daily intake acervulus (pl. acervuli) The acervulus is an erumpent, cushionlike fruiting body bearing conidiophores, conidia, and sometimes setae. It is distinguished from a stroma in not having a peridium or covering of fungal tissue of any kind. acid precipitation acid rain acropetal Actinomycetes The Actinobacteria or Actinomycetes are a group of Gram-positive bacteria. acute acute toxicity aeciospore aecium aflatoxin agar aggressiveness air pollution alkaloids allele allelopathy alternate host alternative hosts alternation of generations amphid amphigynous amphimixis amphimobile anaerobic anamorph (adj. anamorphic; syn. imperfect state) anastomosis (pl. anastomoses) anthracnose antibiotic antibody antigen antiseptic apothecium The apothecium is an open, cuplike, or saucer-shaped sexual fungal fruiting body (ascocarp) containing asci. appressorium (pl. appressoria) arbuscular mycorrhiza (abbr. AM; syn. endomycorrhiza) arbuscule ascocarp (syn. ascoma) ascogenous ascogonium (pl. ascogonia) ascoma (pl. ascomata; syn. ascocarp) Ascomycetes asci ascospore ascostroma (pl. ascostromata) ascus (pl. asci) aseptate asexual asexual reproduction atrophy AUDPC (abbr. for Area Under Disease Progress Curve) autotroph avirulence (avr) gene avirulent (syn. nonpathogenic) axenic autoecious B bacilliform bacterial streaming bactericide bacteriocin bacteriophage bacterium (pl. bacteria) basal knob (syn. stylet knob) basidiocarp (syn. basidioma) Basidiomycetes The Division Basidiomycota is a large taxon within the Kingdom Fungi that includes those species that produce spores in a club-shaped structure called a basidium. basidiospore basidium (pl. basidia; adj. basidial) binary fission binucleate bioassay biocide biocontrol (syn. biological control) biotic A disease caused by a living organism biotype bitunicate blasting blight blotch breaking broadcast application brooming brown rot (of wood) burn bursa C canker capsid (syn. 
coat protein) carcinogen carrier casting causal agent certification cfu (abbr. for colony forming unit) chemotaxis (syn. chemotropism) chemotherapy chlamydospore A chlamydospore is the thick-walled, large resting spore of several kinds of fungi. chlorosis chronic toxicity chytridiomycetes circulative-propagative transmission (syn. propagative transmission) circulative transmission (syn. persistent transmission) cirrus cleistothecium clamp connection clavate (or claviform) coalesce coat protein (syn. capsid) coccus (pl. cocci) coelomycetes colonization colony colony forming unit (abbr. cfu) compartmentalization conidiogenesis conidiogenous conidioma (pl. conidiomata) conidiophore conidium (pl. conidia) conjugation conk constitutive contact fungicide (syn. protectant fungicide) coremium (pl. coremia; syn. synnema) cross-protection crozier cryptobiosis (hidden life) curl cyst cytopathology D damping-off days to harvest decay degree-day demicyclic deuteromycetes (syn. Fungi Imperfecti) diagnostic antigen diploid Diploid (2x) cells have two copies (homologs) of each chromosome, usually one from the mother and one from the father. diapause dieback (v. die back) differential host (syn. differential cultivar) differential medium differentiation dikaryon (adj. dikaryotic) dilution plating dilution streaking dimorphic direct penetration Discomycetes disease disease cycle disease incidence disease progress curve disease pyramid disease severity disease triangle disinfect disinfest dispersal (syn. dissemination) dissemination (syn. dispersal) dolipore septum dormancy (adj. dormant) downy mildew drift (of pesticides) drought durable resistance dwarfing E echinulate economic threshold ectomycorrhiza (pl. ectomycorrhizae) ectoparasite ectotrophic elicitor enation encapsidate encyst endemic endogenous endophytic endoconidium (pl. endoconidia) endomycorrhiza (pl. endomycorrhizae; syn. arbuscular mycorrhiza) endoparasite endospore epidemic epidemiology epinasty epiphytotic The epidemic condition of a disease in a plant population. Compare with enphytotic. eradicant eradication ergot ergotism erumpent escape etiolation etiology exclusion exogenous exudate F f. sp. (abbr. for forma specialis) facultative parasite facultative saprotroph fasciation fastidious filamentous (syn. filiform) flagellum flagging fleck focus (pl. foci) forest decline forma specialis (abbr. f.sp.; pl. formae speciales) fructification fruiting body fumigant (v. fumigate) Fungi Imperfecti (syn. Deuteromycetes) fungicide (adj. fungicidal) Chemical designed to kill fungi fungus (pl. fungi) fungistat (adj. fungistatic) Inhibits growth of some fungi. fungistasis fusiform G gall gametangium (pl. gametangia) gametophyte gene-for-gene hypothesis general resistance (syn. horizontal resistance, race non-specific resistance) genotype germ theory giant cell girdle giant cells gram-negative Gram-negative bacteria are those that do not retain crystal violet dye in the Gram staining protocol. gram-positive Gram-positive bacteria are classified as bacteria that retain a crystal violet dye during the Gram stain process. gram stain growth regulator (syn. hormone) gummosis (pl. gummoses) H haploid hardiness haustorium (pl. haustoria) The haustorium is the hyphal tip of a parasitic fungus. hemiparasite hermaphrodite (adj. hermaphroditic) heteroecious heterokaryon (adj. heterokaryotic) heterothallic heterotroph holomorph holoparasite homokaryon (adj. homokaryotic) homothallism (adj. homothallic) horizontal resistance (syn. 
general resistance, race non-specific resistance) host plant host range hyaline hymenium hyperparasite hyperplasia hypertrophy hypersensitive hypersensitive reaction and pathogenicity (hrp) gene hypersensitive response (HR) hypha hyphal sheath (syn. mantle) hyphomycetes hyphopodium hypoplasia hypovirulence hypovirulence is reduced virulence of a pathogen. Hypovirulence in fungi can be caused by a virus within the fungus. The virus reduces virulence and sporulation. A hypovirus-fungus can be used in biological control. I immune immunity imperfect fungi (syn. Fungi Imperfecti, deuteromycetes) imperfect state (syn. anamorph) in planta in situ in vitro in vivo incubation period indicator plant indirect penetration induced induced systemic resistance (ISR) infection court infection cushion infection focus infection peg (syn. penetration peg) infection period infectious infective infest (n. infestation) initial inoculum (syn. primary inoculum) injury inoculate (n. inoculation) inoculum (pl. inocula) inoculum density integrated pest management (abbr. IPM) intumescence (syn. edema or oedema) IPM (abbr. for integrated pest management) isolate K klendusity The disease-escaping ability of plants. Koch's postulates knot L latent infection latent period leaf dip leaf spot leafroll lesion life cycle lignification local lesion local necrosis lodge M macerate macroconidium (pl. macroconidia) macrocyclic macronutrient mantle (syn. hyphal sheath) mating types mechanical injury mechanical transmission medium (pl. media) melanin microbial microclimate microconidium (pl. microconidia) microcyclic microflora micronutrient microorganism (syn. microbe) microsclerotium mildew MLO (syn. mycoplasmalike organism) mold mollicute monocyclic monoecious monogenic monogenic resistance (syn. single gene resistance) monotrichous monoxenic culture mosaic motile mottle movement protein multigenic resistance (syn. polygenic resistance) multiline multinucleate multiparticulate virus multipartite virus multiseptate mummification mummy mushroom mutagen Mycelia sterilia mycelium (pl. mycelia) Mycelium is the vegetative part of a fungus consisting of a mass of branching, threadlike hyphae that exists below the ground or within another substrate. mycology mycoparasite mycoplasmalike organism (syn. MLO) mycorrhiza (pl. mycorrhizae; adj. mycorrhizal) mycotoxin mycovirus Myxomycetes (syn. slime molds) N necrosis (adj. necrotic) necrotroph needle cast (of conifers) nematicide A nematicide is a type of chemical pesticide used to kill parasitic nematodes. nematode Nematodes are unsegmented, bilaterally symmetric and triploblastic protostomes with a complete digestive system. nitrogen oxides noninfectious disease nonpathogenic (syn. avirulent) nonpersistent transmission (syn. stylet-borne transmission) nonseptate O obligate parasite (syn. biotroph) occlusion oedema (also edema; syn. intumescence) oligogenic resistance oogonium (pl. oogonia) An oogonium is a female gametogonium. oomycetes (adj. oomycetous) oospore ooze ostiole (adj. ostiolate) overwinter P pandemic papilla paragynous parasexualism parasite (adj. parasitic) parasitism parthenogenesis (adj. parthenogenetic) partial resistance pasteurization pathogen (adj. pathogenic) pathogenesis-related (PR) proteins pathogenicity pathology pathotype pathovar (abbr. pv.) penetration penetration peg (syn. infection peg) perfect (see teleomorph) perithecium (pl. perithecia) peritrichate persistent transmission (syn. 
circulative transmission) pest pesticide phenological synchrony phenotype phloem necrosis Phycomycete phyllody phylloplane-competent physiogenic disease phytoalexin phytopathogenic phytopathology (syn. plant pathology) phytoplasma (syn. mycoplasmalike organism, MLO) phytosanitary certificate phytotoxic plant pathology (syn. phytopathology) plasmodiophoromycetes plasmodium (pl. plasmodia) polycyclic polyetic polygenic resistance (syn. multigenic resistance) polymorphism polyprotein powdery mildew predispose (n. predisposition) primary inoculum (syn. initial inoculum) proinhibitin prokaryote promycelium (pl. promycelia) propagative transmission (syn. circulative propagative transmission) propagule protectant protectant fungicide (syn. contact fungicide) pseudothecium (pl. pseudothecia) Puccinia pathway pustule pv. (abbr. for pathovar) pycnidiospore pycnium (pl. pycnia; syn. spermagonium) Q qualitative resistance quantitative resistance quarantine quiescent quiescent dormant or inactive quorum sensing R race race non-specific resistance (syn. general resistance, horizontal resistance) receptive hypha reniform resinosis resistant (n. resistance) rhizomorph rhizosphere rhizosphere-competent ringspot rosette rot roundworm rugose russet rust S sanitation sap transmission saprobe (syn. saprotroph) saprotroph A saprotroph (or saprobe) is an organism that obtains its nutrients from non-living organic matter, usually dead and decaying plant or animal matter, by absorbing soluble organic compounds. scab scald sclerenchyma (adj. sclerenchymatous) sclerotium (pl. sclerotia) A sclerotium is a compact mass of hardened mycelium (as an ergot) stored with reserve food material that in some higher fungi becomes detached and remains dormant until a favorable opportunity for growth occurs. scorch secondary infection secondary inoculum secondary metabolite secondary organism seed treatment seedborne selective medium septate serrate sessile seta (pl. setae) sexual spore sexually compatible shot-hole sign single gene resistance (syn. monogenic resistance) slime molds (syn. Myxomycetes) smut soft rot soil drench soilborne soil pasteurization soil sterilization solarization sooty mold sorus (pl. sori) sp. (abbr. for species; pl. spp.) species specific resistance (syn. vertical resistance) spermagonium (pl. spermagonia; syn. pycnium for rust fungi) spermatium (pl. spermatia; syn. pycniospore for rust fungi) spicule spiroplasma – helical, motile, cell wall-less bacterium; member of genus Spiroplasma in class Mollicutes sporangiophore sporangiospore sporangium (pl. sporangia) spore sporidium (pl.sporidia) sporocarp spore-bearing fruiting body sporodochium (pl.sporodochia) sporogenous sporophore sporophyte sporulate spot stabilizing selection staghead stem pitting sterigma (pl. sterigmata) sterilant sterile fungus sterilization (adj. sterilized) stippling strain streak striate (n. striations) stroma (pl. stromata) stunting stylet knob (syn. basal knob) stylet-borne transmission (syn. nonpersistent transmission) subspecies substrate sunscald or sunburn suppressive soil susceptible (n. susceptibility) symptom symptomless carrier syncytium (pl. syncytia) synergism (adj. synergistic) synnema (pl. synnemata; syn. coremium) systematics systemic systemic acquired resistance (SAR) systemic fungicide T teleomorph (syn. 
perfect state) teliospore (sometimes called teleutospore, teleutosporodesm) Teliospore (sometimes called teleutospore) is the thick-walled resting spore of some fungi (rusts and smuts), from which the basidium arises. telium (pl. telia) temporary wilt thallus thermotherapy tolerance (adj. tolerant) toxicity toxin transmit (n. transmission) trap crop trenching tumor (syn. gall) type U urediniospore (also urediospore, uredospore) uredinium (also uredium; pl. uredinia) V vascular wilt disease vector vein banding vein clearing vermiform vertical resistance (syn. specific resistance) viable (n. viability) virescence virion viroid viroplasm virulence virulent viruliferous virus-laden, usually applied to insects or nematodes as vectors virus A virus is a microscopic particle (ranging in size from 20–300 nm) that can infect the cells of a biological organism. viscin W walling-off water-soaked white rot (of wood) white rust wild type wilt winterburn witches' broom wound X XLB (xylem-limited fastidious bacteria) xylem-limited fastidious bacteria (XLB) Y yellowing yellows Z zonate zoosporangium zoospore A zoospore is a motile asexual spore utilizing a flagellum for locomotion. Zygomycetes zygospore A zygospore is a sexual part of a fungus, a chlamydospore that is created by the nuclear fusion of haploid hyphae of different mating types. References Illustrated Glossary of Plant Pathology, American Phytopathological Society Plant Disease Control Glossary, Oregon State University Mycological Glossary, Illinois Mycological Association On-Line Glossary of Technical Terms in Plant Pathology, Cornell University (with pronunciation guide) Phytopathology Wikipedia glossaries using description lists
Glossary of phytopathology
[ "Biology" ]
4,224
[ "Botanical terminology", "Glossaries of botany", "Glossaries of biology" ]
9,449,652
https://en.wikipedia.org/wiki/When%20Engineering%20Fails
When Engineering Fails is a 1998 film written and presented by Henry Petroski. It examines the causes of major disasters, including the explosion of the Space Shuttle Challenger, and compares the risks of computer-assisted design with those of traditional engineering methods. The original title of the film was To Engineer Is Human, the title of Petroski's non-fiction book about design failures. References 1998 films Documentary films about technology 1998 documentary films American documentary films Mechanical failure 1990s American films
When Engineering Fails
[ "Materials_science", "Engineering" ]
97
[ "Mechanical failure", "Materials science", "Mechanical engineering" ]
9,450,320
https://en.wikipedia.org/wiki/British%20Pharmacological%20Society
The British Pharmacological Society is the primary UK learned society for pharmacologists, concerned with research into drugs and the ways in which they work. Members work in academia, industry, regulatory agencies, and the health services, and many are medically qualified. The Society covers the whole spectrum of pharmacology, including laboratory, clinical, and toxicological aspects. Clinical pharmacology is the medical speciality dedicated to promoting safe and effective use of medicines for patient benefit. Many clinical pharmacologists work as consultants in the National Health Service, and many hold prominent positions in UK universities and in regulatory bodies, such as the Medicines and Healthcare products Regulatory Agency (MHRA) and the National Institute of Health and Care Excellence (NICE). History The Society was founded in 1931, in Oxford, by a group of about 20 pharmacologists. They were brought together on the initiative of Professor James Andrew Gunn, through a letter signed by Gunn, Henry H. Dale, and Walter E. Dixon, and sent to the heads of university departments of pharmacology and of institutions for pharmacological research in Great Britain, with proposals for the formation of a pharmacological club. There were favourable replies to this letter, and most of the recipients met in Wadham College, Oxford, on the evening of 3 July 1931, the day before the meeting of the Physiological Society. Gunn presided over the meeting. It was agreed that a Society should be founded, to meet at least once a year for the reading of papers on pharmacological subjects, for discussion of questions of teaching and publication, and to promote friendly relations between pharmacologists. The first female member of the society was Mary Pickford (1935), and other early eminent women members included Marthe Vogt and Edith Bülbring. Publications The British Pharmacological Society publishes several works that promote pharmacology and clinical pharmacology: The British Journal of Pharmacology is published by Wiley The British Journal of Clinical Pharmacology is published by Wiley Pharmacology Research & Perspectives is published by ASPET, the British Pharmacological Society, and Wiley. Pharmacology Matters (originally called pA2) is the house magazine of the British Pharmacological Society. Presidents The office of president was formally created in 1999. Before that the role was taken by members of the society under titles such as secretary and general secretary. Since 2010, all previous general secretaries and presidents have been awarded the title president emeritus. Secretary and treasurer 1931 - M. H. MacKeith 1934 - Joshua Harold Burn 1945 - Frank R. Winton Secretary 1947 - George Brownlee 1952 - D. R. Wood 1955 - Miles Weatherall 1956 - D. R. Wood 1957 - Walter L. M. Perry 1961 - James D. P. Graham General secretary 1968 - Juan P. Quilliam 1971 - John R. Vane 1974 - James F. Mitchell 1977 - G. P. Lewis 1980 - A. Michael Barrett 1983 - Anthony ('Tony') Birmingham 1986 - Geoffrey N. Woodruff 1989 - A. Richard Green 1992 - Jennifer Maclagan 1995 - Norman G. Bowery 1998 - Tom P. Blackburn President 1999 - Norman G. Bowery 2001 - Rod J. Flower 2004 - Julia C. Buckingham 2006 - Graeme Henderson 2008 - Jeffrey K. Aronson 2010 - Raymond G. Hill 2012 - Philip A. Routledge 2014 - Humphrey P. 
Rang 2016 - David Webb 2018 - Stephen Hill 2020 - Munir Pirmohamed 2022 - Clive Page 2024 - Mark Caulfield Eminent pharmacologists The society elects eminent, deceased contributors to the subject of pharmacology, whether or not they were members, to the Pharmacology Hall of Fame: James Black Bill Bowman Edith Bülbring Henry Hallett Dale Derrick Dunlop John Gaddum Hans Kosterlitz Heinz Otto Schild John Vane Marthe Vogt Members of the society awarded the Nobel Prize in Physiology or Medicine include Black, Dale and Vane. Fellows of the society Fellowships (FBPhS) of the society are awarded to members who have made significant contributions to both the study of pharmacology and the Society. A full list of Fellows is published by the Society. Honorary Fellowships (HonFBPhS) are awarded to members or non-members for a distinguished and sustained leadership role in pharmacology. Fellows and Honorary Fellows use the post-nominal FBPhS. Notable current honorary fellows include: Jeffrey K Aronson, president emeritus and former editor-in-chief, British Journal of Clinical Pharmacology Y S Bakhle Sir Peter Barnes Michael Berridge Dame Kate Bingham Thomas Blackburn Susan Brain Sir Mark Caulfield, chief scientist for Genomics England Judy MacArthur Clark Sir Rory Collins David Colquhoun John H. Coote Dame Sally Davies Sir Gordon Duff, former chairman of the Medicines and Healthcare products Regulatory Agency; former principal St Hilda's College, Oxford. Robin E Ferner Garret A. FitzGerald Roderick Flower, president emeritus Sir Charles George Dame Sarah Gilbert Nuala Helsby, professor of molecular medicine and pathology in New Zealand Graeme Henderson, president emeritus Raymond Hill, president emeritus Stephen Hill, president emeritus David Lawson, former chairman of the Committee on the Review of Medicines (CRM) and of the Medicines Commission Hilary Little Ian McGrath, former editor-in-chief, British Journal of Pharmacology Sir Salvador Moncada David Nutt Sir Munir Pirmohamed, president emeritus; president, Association of Physicians of Great Britain and Ireland Dame Nancy Rothwell Philip Routledge, former chairman, All Wales Medicines Strategy Group Sir Patrick Vallance, former chief scientific adviser, UK Government Tom Walley, former director, Health Technology Assessment (HTA) Programme, National Institute for Health Research (NIHR) David Webb, president emeritus Sir Christopher Whitty, Chief Medical Officer (CMO), England, and Chief Medical Adviser, UK Government Sir Kent Woods, former director of the NHS Health Technology Assessment Programme, chief executive of the Medicines and Healthcare products Regulatory Agency (MHRA), and chairman of the management board of the European Medicines Agency See also Pharmacology Clinical pharmacology Wiley References External links The British Pharmacological Society The British Journal of Pharmacology The British Journal of Clinical Pharmacology Pharmacology Research & Perspectives Health in the London Borough of Islington Learned societies of the United Kingdom Medical associations based in the United Kingdom 1931 establishments in the United Kingdom Organisations based in the London Borough of Islington Pharmacological societies Scientific organizations established in 1931
British Pharmacological Society
[ "Chemistry" ]
1,348
[ "Pharmacology", "Pharmacological societies" ]
9,451,015
https://en.wikipedia.org/wiki/Expo%20Mimio
EXPO mimio is a brand name of computer whiteboard capture devices marketed by Sanford Brands. EXPO mimio devices allow users to digitally capture whiteboard images and text. The devices link a physical whiteboard to software-created whiteboards, such as those in NetMeeting, and can also be used to control desktop applications and documents directly from a whiteboard when used with a projector and computer. On October 4, 2006, Newell Rubbermaid acquired the mimio interactive whiteboard (iWB) product line. The mimio line has become part of the Sanford Brands portfolio of products. Models In production: EXPO mimio Interactive EXPO mimio Xi EXPO mimio Board EXPO mimio wireless EXPO mimio studio (win) EXPO mimio Mac EXPO mimio writingRecognition (win) EXPO mimio screenRecorder (win) Computer peripherals
Expo Mimio
[ "Technology" ]
172
[ "Computer peripherals", "Components" ]
9,451,796
https://en.wikipedia.org/wiki/Disk%20covering%20problem
The disk covering problem asks for the smallest real number r(n) such that n disks of radius r(n) can be arranged in such a way as to cover the unit disk. Dually, for a given radius ε, one wishes to find the smallest integer n such that n disks of radius ε can cover the unit disk. The best solutions known to date come from highly symmetric arrangements of the covering disks. Method As an example, a disk of radius 1 can be covered by six disks of radius ~0.6: one covering disk is placed centrally and the remaining five symmetrically around it. While this is not the best layout for r(6), similar arrangements of six, seven, eight, and nine disks around a central disk, all having the same radius, give the best known layouts for r(7), r(8), r(9), and r(10), respectively; each such arrangement is characterized by a corresponding symmetry angle θ. References External links Finch, S. R. "Circular Coverage Constants." §2.2 in Mathematical Constants. Cambridge, England: Cambridge University Press, pp. 484–489, 2003. Discrete geometry Covering problems
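For the case n = 7 the optimum is known exactly: r(7) = 1/2, attained by one disk centered at the origin and six disks centered at distance sqrt(3)/2 from it, 60 degrees apart. The short Python sketch below (a grid-sampling sanity check rather than a proof; all names in it are illustrative) verifies that this arrangement covers the unit disk:

```python
import math

R = 0.5  # candidate covering radius, r(7)

# One disk at the origin plus six centers at distance sqrt(3)/2, 60 degrees apart.
centers = [(0.0, 0.0)] + [
    (math.sqrt(3) / 2 * math.cos(k * math.pi / 3),
     math.sqrt(3) / 2 * math.sin(k * math.pi / 3))
    for k in range(6)
]

def covered(x, y):
    """True if (x, y) lies in at least one covering disk (small tolerance
    for points sitting exactly on a disk boundary)."""
    return any(math.hypot(x - cx, y - cy) <= R + 1e-9 for cx, cy in centers)

# Sample the unit disk on a fine grid; every sampled point must be covered.
n = 601
ok = all(
    covered(-1 + 2 * i / (n - 1), -1 + 2 * j / (n - 1))
    for i in range(n) for j in range(n)
    if (-1 + 2 * i / (n - 1)) ** 2 + (-1 + 2 * j / (n - 1)) ** 2 <= 1.0
)
print("7 disks of radius 0.5 cover the unit disk:", ok)  # expected: True
```

The worst-covered boundary points lie at angles midway between adjacent outer centers, where the distance to the nearest center is exactly 1/2, which is why no smaller radius suffices for this arrangement.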
Disk covering problem
[ "Mathematics" ]
245
[ "Discrete geometry", "Discrete mathematics", "Geometry", "Geometry stubs" ]
9,451,873
https://en.wikipedia.org/wiki/L%C3%A9on%20Croizat
Léon Camille Marius Croizat (16 July 1894 – 30 November 1982) was a French-Italian scholar and botanist who developed an orthogenetic synthesis of the evolution of biological form over space, in time, which he called panbiogeography. Life Croizat was born in Turin, Italy to Vittorio Croizat (aka Victor Croizat) and Maria (Marie) Chaley, who had emigrated to Turin from Chambéry, France. Despite his great aptitude for the natural sciences, Leon studied and received a degree in law from the University of Turin. Croizat and his family (wife Lucia and two children) emigrated to the United States in 1924; an avid artist, Leon worked selling his artwork for several years, but could not succeed economically as a working artist after the stock market crash of 1929. During the 1930s, Croizat found a job identifying plants as part of a topographic inventory performed in the public parks of New York City. During his visits to the Bronx Botanical Gardens, he became acquainted with Dr. E. D. Merrill. When Merrill was appointed director of the Arnold Arboretum of Harvard University in 1936, he hired Leon as a technical assistant (in 1937). Croizat became a prolific student and publisher, studying important aspects of the distribution and evolution of biological species. It was during this time that he began to formulate a novel current of thought in evolutionary theory, opposed in some respects to Darwinism, on the evolution and dispersal of biota over space, through time. In 1947, Croizat moved to Venezuela after receiving an invitation from botanist Henri Pittier. Croizat then obtained a position in the Faculty of the Department of Agronomy at the Central University of Venezuela. In 1951 he was promoted and was awarded the title of Professor of Botany and Ecology at the University of the Andes, Venezuela. Between 1951 and 1952 he participated in the Franco-Venezuelan expedition to discover the sources of the Orinoco river. Croizat served with the expedition as a botanist with professor Jose Maria Cruxent. During his time in Venezuela Croizat divorced his first wife. Croizat later remarried; his second wife was Catalina Krishaber, a Hungarian immigrant. In 1953 Croizat gave up all official academic positions to work full-time researching biology. Croizat and his wife Catalina lived in Caracas until 1976. In 1976 they took over as first directors of the "Jardin Botanico Xerofito" in Coro, a city approximately 400 kilometres west of Caracas. Jardin Botanico Xerofito was a botanical garden which they founded together. Croizat and Catalina worked for six years to establish Jardin Botanico Xerofito. Croizat died at Coro on 30 November 1982, of a heart attack. During his life, Croizat published around 300 scientific papers and seven books, amounting to more than 15,000 printed pages. He was honoured by Venezuela with the Henri Pittier Order of Merit in Conservation, and by the government of Italy with the Order of Merit. Croizat is commemorated in the scientific name of a species of lizard, Panopa croizati. Several plant and animal species (and one genus) have been named after Croizat. Concepts Panbiogeography is a discipline based on the analysis of patterns of distribution of organisms. The method analyzes biogeographic distributions through the drawing of tracks, and derives information from the form and orientation of those tracks. A track is a line connecting collection localities or disjunct areas of a particular taxon. 
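The graph-theoretic reading of a track (formalized below as a minimum spanning tree over collection localities) can be made concrete with a short sketch. This is an illustration only: the locality names and coordinates are invented, and a real analysis would use geographic (great-circle) rather than planar distances.

```python
import math

# Hypothetical collection localities (x, y coordinates) for one taxon.
localities = {
    "A": (-70.0, -15.0),
    "B": (-58.0, -20.0),
    "C": (-47.0, -23.0),
    "D": (-65.0, -33.0),
}

def dist(p, q):
    """Planar Euclidean distance between two localities."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

# Prim's algorithm: grow the minimum spanning tree (the 'track') from one locality.
names = list(localities)
in_tree, edges = {names[0]}, []
while len(in_tree) < len(names):
    u, v = min(
        ((a, b) for a in in_tree for b in names if b not in in_tree),
        key=lambda e: dist(localities[e[0]], localities[e[1]]),
    )
    in_tree.add(v)
    edges.append((u, v))

print("track (MST edges):", edges)
```

Superimposing such trees for many unrelated taxa, and looking for shared edges and intersections, is the panbiogeographic move described next.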
Several individual tracks for unrelated groups of organisms form a generalized ('standard') track, where the individual components are relict fragments of an ancestral, more widespread biota fragmented by geological and/or climatic changes. A node arises from the intersection of two or more generalized tracks. In graph theory a track is equated to a minimum spanning tree connecting all localities by the shortest path. To explain disjunct distributions, Croizat proposed the existence of broadly distributed ancestors that established their range during a period of mobilism, followed by a form-making process over a broad front. Disjunctions are explained as extinctions in the previously continuous range. Orthogenesis is a term used by Croizat, in his words "... in a pure mechanistic sense", which refers to the fact that a variation in form is limited and constrained. Croizat considered organism evolution as a function of time, space and form. Of these three essential factors, space is the one with which biogeography is primarily concerned. However, space necessarily interplays with time and form, and therefore all three factors are of biogeographic concern. Put another way, when evolution is considered to be guided by developmental constraints or by phylogenetic constraints, it is orthogenetic. Some researchers consider Croizat one of the most original thinkers of modern comparative biology, whose contributions provided the foundation of a new synthesis between earth and life sciences. While some biologists have continued to apply panbiogeographic approaches, the theory has been highly criticised and has been dismissed by mainstream biologists. The theory was described as "almost moribund" in 2007, and as having "fallen by the wayside" in 2023. Robert H. Cowie, writing in a book review in Heredity, stated "Panbiogeography seems to me at best to offer little new insight, at worst to be fundamentally flawed", criticising panbiogeographers for not placing enough emphasis on phylogenetics, which Cowie states is "the underpinning of any biogeographical analysis". Selected works Manual of Phytogeography or An Account of Plant Dispersal Throughout the World. Junk, The Hague, 1952. 696 pp. Panbiogeography or An Introductory Synthesis of Zoogeography, Phytogeography, Geology; with notes on evolution, systematics, ecology, anthropology, etc. Published by the author, Caracas, 1958. 2755 pp. Principia Botanica or Beginnings of Botany. Published by the author, Caracas, 1961. 1821 pp. Space, Time, Form: The Biological Synthesis. Published by the author, Caracas, 1964. 881 pp. References Further reading Morrone JJ (2004). Homología Biogeográfica: las Coordenadas Espaciales de la Vida. México, DF: Cuadernos del Instituto de Biología 37, Instituto de Biología, UNAM. (in Spanish). Morrone JJ (2007). La Vita tra lo Spazio e il Tempo. Il Retaggio di Croizat e la Nuova Biogeografia. M. Zunino (Ed.). Palermo: Medical Books. (in Italian). Nelson G (1973). "Comments on Leon Croizat's Biogeography". Systematic Zoology 22 (3): 312–320. Rosen D (1974). "Space, Time, Form: The Biological Synthesis". Systematic Zoology 23 (2): 288–290. External links Selected papers References on Panbiogeography Croizatia. 
Revista Multidisciplinaria de Ciencia y Tecnología Video documentary on Croizat 1894 births 1982 deaths Biogeographers Harvard University staff Arnold Arboretum 20th-century Italian botanists Academic staff of the Central University of Venezuela Non-Darwinian evolution Scientists from Turin Italian emigrants to the United States
Léon Croizat
[ "Biology" ]
1,559
[ "Non-Darwinian evolution", "Biology theories" ]
9,453,024
https://en.wikipedia.org/wiki/Vroman%20effect
The Vroman effect, named after Leo Vroman, describes the process of competitive protein adsorption to a surface by blood serum proteins. The highest mobility proteins generally arrive first and are later replaced by less mobile proteins that have a higher affinity for the surface. The order of protein adsorption also depends on the molecular weight of the adsorbing species. Typically, low molecular weight proteins are displaced by high molecular weight proteins, while the opposite, high molecular weight proteins being displaced by low molecular weight ones, does not occur. A typical example occurs when fibrinogen displaces earlier adsorbed proteins on a biopolymer surface and is later replaced by high molecular weight kininogen. The process is delayed in narrow spaces, and on hydrophobic surfaces fibrinogen is usually not displaced. Under stagnant conditions, initial protein deposition takes place in the sequence albumin, globulin, fibrinogen, fibronectin, factor XII, and high molecular weight kininogen (HMWK). Molecular Mechanisms of Action While the exact mechanism of action is still unknown, many physical properties of proteins play a part in the Vroman effect. These properties include protein size, charge, mobility, stability, and the structure and composition of the protein domains that make up the protein's tertiary structure. Protein size determines the molecular weight. Protein charge determines whether preferential or selectively favorable interactions exist between the protein and a biomaterial. Protein mobility is a factor in adsorption kinetics. Adsorption-Desorption Model The simplest molecular explanation for the exchange of proteins on a surface is the adsorption/desorption model. Here, proteins interact with the surface of a biomaterial and "stick" to the material through interactions between the protein and the biomaterial surface. Once a protein has adsorbed onto the surface of a biomaterial, the protein may change conformation (structure) and even become nonfunctional. The spaces between the proteins on the biomaterial then become available for new proteins to adsorb. Desorption occurs when the protein leaves the biomaterial surface. This simple model is too limited, however, since Vroman-like behavior has been observed on hydrophobic surfaces as well as hydrophilic ones. Furthermore, adsorption and desorption do not completely explain competitive protein exchange on hydrophilic surfaces. Transient Complex Model A "transient complex" model was first proposed by Huetz et al. to explain this competitive exchange. This transient complex exchange occurs in three distinct steps. Initially, a protein embeds itself into an already adsorbed, homogeneous protein monolayer. The aggregation of this new heterogeneous protein mixture causes the "turning" of the double-protein complex, which exposes the initially adsorbed protein to the solution. In the third step, the protein that was initially adsorbed can now diffuse out into the solution and the new protein takes over. This three-part "transient complex" mechanism is further explained and verified through AFM imaging by Hirsh et al. Jung et al. also describe a molecular mechanism for fibrinogen displacement involving pH cycling. 
Here the αC domains of fibrinogen change charge after pH cycling, which results in conformational changes that lead to stronger interactions between the protein and the biomaterial. Mathematical Models The simplest mathematical model of the Vroman effect is the Langmuir model, using the Langmuir isotherm. More complex models include the Freundlich isotherm and other modifications of the Langmuir model. This model describes the kinetics of reversible adsorption and desorption, assuming the adsorbate behaves as an ideal gas under isothermal conditions. See also Protein adsorption Langmuir adsorption model References Surface science Blood
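To sketch what the Langmuir picture looks like here (standard textbook forms, not a Vroman-specific derivation), the fractional surface coverage θ of a single adsorbing protein at bulk concentration c obeys an adsorption-desorption balance, and the competitive multi-protein extension assigns each protein i an equilibrium coverage:

```latex
\frac{d\theta}{dt} = k_a\,c\,(1-\theta) - k_d\,\theta,
\qquad
\theta_{\mathrm{eq}} = \frac{K c}{1 + K c}, \quad K = \frac{k_a}{k_d},
\qquad
\theta_i = \frac{K_i c_i}{1 + \sum_j K_j c_j}.
```

In this picture a protein with a large K_i (high surface affinity) eventually dominates the coverage even if it arrives slowly, which is qualitatively the displacement sequence the Vroman effect describes.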
Vroman effect
[ "Physics", "Chemistry", "Materials_science" ]
810
[ "Condensed matter physics", "Surface science" ]
9,453,048
https://en.wikipedia.org/wiki/1%2C8-Octanediol
1,8-Octanediol, also known as octamethylene glycol, is a diol with the molecular formula HO(CH2)8OH. 1,8-Octanediol is a white solid. It is produced by hydrogenation of esters of suberic acid. 1,8-Octanediol is used as a monomer in the synthesis of some polymers such as polyesters and polyurethanes. As with other fatty alcohols, octane-1,8-diol is used in cosmetics as an emollient and humectant. See also Ethylene glycol 1,2-Octanediol References Monomers Alkanediols
1,8-Octanediol
[ "Chemistry", "Materials_science" ]
148
[ "Monomers", "Polymer chemistry" ]
9,453,784
https://en.wikipedia.org/wiki/Lotaustralin
Lotaustralin is a cyanogenic glucoside found in small amounts in the austral trefoil (Lotus australis, Fabaceae), cassava (Manihot esculenta), lima bean (Phaseolus lunatus), roseroot (Rhodiola rosea) and white clover (Trifolium repens), among other plants. Lotaustralin is the glucoside of methyl ethyl ketone cyanohydrin and is structurally related to linamarin, the acetone cyanohydrin glucoside also found in these plants. Both lotaustralin and linamarin may be hydrolyzed by the enzyme linamarase to form glucose and a precursor to the toxic compound hydrogen cyanide. References Cyanogenic glycosides Glucosides Plant toxins
Lotaustralin
[ "Chemistry" ]
178
[ "Chemical ecology", "Plant toxins" ]
9,454,062
https://en.wikipedia.org/wiki/Castor%20%28mountain%29
Castor () is a mountain in the Pennine Alps on the border between Valais, Switzerland and the Aosta Valley in Italy. It is the higher of a pair of twin peaks (), the other being Pollux, named after the Gemini twins of Roman mythology. Castor's peak is at an elevation of , and it lies between Breithorn and the Monte Rosa. It is separated from Pollux by a pass at , named Passo di Verra in Italian and Zwillingsjoch in German. Ascents are usually made from the alpine hut Capanna Quintino Sella on the Italian side, by means of the Felikjoch and the long and narrow southeast ridge. From the Swiss side, ascents start from Klein Matterhorn and go by way of the Italian glacier Grand Glacier of Verra and the mountain's west flank. The first ascent was made on August 23, 1861. Image gallery See also List of 4000 metre peaks of the Alps References External links Castor on Summitpost Alpine four-thousanders Mountains of the Alps Mountains of Italy Mountains of Switzerland Pennine Alps Italy–Switzerland border International mountains of Europe Mountains of Valais Four-thousanders of Switzerland Castor and Pollux
Castor (mountain)
[ "Astronomy" ]
250
[ "Castor and Pollux", "Astronomical myths" ]
9,454,227
https://en.wikipedia.org/wiki/River%20of%20Gods
River of Gods is a 2004 science fiction novel by British writer Ian McDonald. It depicts a futuristic India in 2047, a century after its independence from Britain, characterized both by ancient traditions and advanced technologies such as artificial intelligences, robots and nanotechnology. The novel won the British Science Fiction Award in 2004 and was nominated for a Hugo. It was followed by a short story collection called Cyberabad Days in 2009. Plot introduction The novel follows a number of different characters' viewpoints on and around the date of 15 August 2047, the centenary of India's partition and independence from the colonial British Raj. This future India has become balkanized into a number of smaller competing states, such as Awadh, Bharat, and Bangla. The global information network is now inhabited by artificial intelligences, rendered phonetically as aeais in the novel, of varying levels of intelligence. Aeais higher than level 2.5 (able to pass the Turing test and imitate humans) are banned, and their destruction ("excommunication") is the responsibility of "Krishna Cops", like Mr. Nandha. While some pockets of the subcontinent are still steeped in ancient tradition and values, mainstream culture is replete with aeais in TV entertainment and robotic swarms in defense. During such a time, Ranjit Ray steps down from his control of Ray Power, a key energy company, and the responsibility falls on his son Vishram Ray. The playboy Vishram is struggling to make it on his own as a stand-up comedian in Scotland when he is flown back to Varanasi to assume his role at Ray Power, for which he finds himself terribly ill-equipped but eventually surprisingly effective. He learns that his company is working on harvesting zero-point energy from other universes, and sees the particle collider built by his father with the help of Odeco, a clandestine investment firm. After a prolonged drought, a severe water shortage threatens to jeopardize the peace between the subcontinental states. To avert this crisis, governments are melting glaciers and modifying natural systems. To take advantage of the unrest, a Hindu fundamentalist leader named N.K. Jeevanji organises a "rath yatra" on a spectacular juggernaut. He starts releasing key information to the press via Najia Askarzadah, an ambitious Swedish-Afghan reporter with a desire to be part of history as it is being made. Lisa Durnau notices an apocalyptic crisis brewing in Alterre, a simulated evolution of earth created by AI scientist Thomas Lull, who is currently hiding in a South Indian coastal village. While Lisa is sent into space to investigate an asteroid, Thomas Lull runs into Aj, a girl with mysterious powers that allow her to see into people's lives, pasts and futures. He decides to follow her and protect her during her quest to find her own true identity, but it is soon revealed that Aj's powers extend beyond mere mortals, when she brings a robot army to a halt by raising a hand. Tal is a beautiful nute (of neutral gender) involved in the design team of India's greatest "soapi", Town & Country, some of the main stars of which are not human actors, but aeais. Tal falls prey to a conspiracy that compromises the career of Shaheen Badoor Khan, Private Secretary to the Prime Minister Sajida Rana, leading to Rana's assassination and the fall of the government. All this leads to riots and popular fury against Muslims and transsexuals across Varanasi. 
Lisa Durnau discovers that at the center of the mysterious asteroid is an 8-billion-year-old grey sphere, possibly a black hole remnant, or an alien artifact from another civilization. This "Tabernacle" communicates a message to the scientists, and this leads Lisa to India to find Thomas Lull, who alone can explain this phenomenon. Awards and nominations British Science Fiction Association Best Novel winner, 2004 Arthur C. Clarke Award Best Novel nominee (2005) Hugo Awards, Best Novel nominee (2005) Release details January 2004: United Kingdom. Simon & Schuster. (paperback). June 2004: United Kingdom. Simon & Schuster. (hardcover). April 2005: United Kingdom. Simon & Schuster. (paperback). March 2006: United States. Prometheus Books (hardcover). References 2004 British novels 2004 science fiction novels Fiction set in 2047 British science fiction novels Novels about artificial intelligence Fiction about asteroids Fiction about nanotechnology Novels by Ian McDonald Novels set in India Postcyberpunk novels Religion in science fiction Novels about robots Fiction about water scarcity Novels with transgender themes Simon & Schuster books Novels set in the 2040s
River of Gods
[ "Materials_science" ]
964
[ "Fiction about nanotechnology", "Nanotechnology" ]
9,454,590
https://en.wikipedia.org/wiki/Crab%20mentality
Crab mentality, also known as crab theory, crabs in a bucket mentality, or the crab-bucket effect, is a mentality in which people try to prevent others from gaining a favourable position in something, even when that success would have no effect on those trying to stop them. It is usually summarized with the phrase "If I can't have it, neither can you". The metaphor is derived from anecdotal claims about the behavior of crabs contained in an open bucket: if a crab starts to climb out, it will be pulled back in by the others, ensuring the group's collective demise. The analogous theory in human behavior is that members of a group will attempt to reduce the self-confidence of any member who achieves success beyond the others, out of envy, jealousy, resentment, spite, conspiracy, or competitive feelings, in order to halt their progress. Self-evaluation maintenance theory Tesser's self-evaluation maintenance theory (SEM) suggests that individuals engage in self-evaluation not only through introspection but also through comparison to others, especially those within their close social circles. When someone close to an individual excels in an area the individual values, the individual may feel threatened and act in ways that downplay the other person's achievements. This mechanism can partly explain why individuals may attempt to pull down those who achieve more than themselves as a way to protect their own self-esteem and social standing. Emotions such as envy may be generated when individuals feel threatened during self-evaluation. This can lead to a desire to diminish the well-being of others, particularly when their success highlights the individual's own failures or inadequacies. Relative deprivation theory Relative deprivation theory proposes that feelings of dissatisfaction and injustice arise when people compare their situation unfavorably with others' situations. This sense of inequality, rooted in subjective perceptions rather than objective measures, can deeply influence social behavior, including the phenomenon of crab mentality. When individuals see their peers achieving success or receiving recognition that they feel is undeserved or unattainable for themselves, it can trigger actions aimed at undermining those peers' accomplishments. The concept emerged from a study of American soldiers by Stouffer. Soldiers in units with more promotions were paradoxically less satisfied, feeling left out if not promoted themselves, despite better odds of advancement. This reflects how relative deprivation fuels dissatisfaction by comparing one's situation to others'. By "dragging" others down to a similar level, individuals might feel a sense of satisfaction. Thus, crab mentality can be viewed as a response to perceived social inequality, where pulling others down becomes a strategy to cope with feelings of inadequacy or injustice. Zero-sum bias Zero-sum bias, where individuals perceive that they can only gain at the expense of others, may contribute to crab mentality. This bias is rooted in a fundamental misunderstanding of success and resource distribution, leading to the incorrect belief that success and resources are limited and one person's gain is necessarily another's loss. Such a worldview fosters competitive rather than collaborative social interactions, encouraging behaviors that aim at hindering others' achievements to protect one's perceived share of limited resources, like crabs in a bucket. In Daniel V. 
Meegan's study, researchers found that students expected lower grades for peers after seeing many high grades already awarded, despite being in a system where high grades are unlimited. This illustrates how people often view success as a limited resource. Thus, when they see their peers successfully "climbing out of the bucket", they may try to hinder their progress to ensure that their own chances of success remain unchanged. It is crucial to differentiate crab mentality from strategic competition, in which actions are rationally calculated for self-interest and personal gain. Because it is driven by cognitive biases and emotions, crab mentality is often a reactive, non-rational behavior that seeks to level the playing field by pulling others down, even though doing so brings no direct benefit to the individual. Note See also References Further reading Competition Cultural anthropology Metaphors referring to animals Non-cooperative games Social psychology concepts Crabs in culture
Crab mentality
[ "Mathematics" ]
842
[ "Game theory", "Non-cooperative games" ]
9,455,578
https://en.wikipedia.org/wiki/Thaumasite
Thaumasite is a calcium silicate mineral, containing Si atoms in an unusual octahedral configuration, with chemical formula Ca3Si(OH)6(CO3)(SO4)·12H2O, also sometimes more simply written as CaSiO3·CaCO3·CaSO4·15H2O. It occurs as colorless to white prismatic hexagonal crystals, typically as acicular radiating groups. It also occurs as fibrous masses. Its Mohs hardness is 3.5 and it has a specific gravity of 1.88 to 1.90. Optically it is uniaxial negative with indices of refraction of nω = 1.507 and nε = 1.468. It occurs as a hydrothermal alteration mineral in sulfide ore deposits and in geothermally altered basalt and tuff. It occurs with zeolites, apophyllite, analcime, calcite, gypsum and pyrite. Thaumasite can also form in man-made concrete structures, to the detriment of calcium silicate hydrates (C-S-H, with dashes denoting the non-stoichiometry of this hydrated cement phase acting as the "glue" in hardened cement paste), during cement alteration, especially when sulfate attack develops. The reaction consuming the silicates of the "cement glue" can lead to harmful decohesion and softening (more rarely to expansion and cracking) of concrete. Unlike conventional sulfate attack, in which the calcium hydroxide (portlandite) and calcium aluminate hydrates react with sulfates to form gypsum and ettringite (an expansive phase) respectively, in the case of the thaumasite form of sulfate attack (TSA) the calcium silicate hydrates ensuring the cohesion of the hardened cement paste are also destroyed. As a consequence, even concrete containing sulfate-resisting Portland cement may be affected. It was first described in 1878 in Sweden and named from the Greek "thaumazein", to be surprised, in reference to its unusual composition with carbonate, sulfate and hydroxysilicate anions. The silicate structure of thaumasite is unusual due to the presence of non-tetrahedral silicon in its crystal lattice. Indeed, an atypical octahedral configuration is observed for Si, present in thaumasite in the form of hexahydroxysilicate, [Si(OH)6]2−, a species exhibiting a geometry similar to that of hexafluorosilicate, [SiF6]2−. See also Other calcium silicate hydrate (C-S-H) minerals: Afwillite Hexafluorosilicic acid, a chemical species with a central hexacoordinated octahedral silicon atom Gyrolite Jennite Stishovite, a rare high-pressure mineral also with hexacoordinated octahedral silica Tobermorite References Further reading External links Thaumasite form of sulfate attack (TSA) Calcium minerals Carbonate minerals Cement Concrete Hexagonal minerals Minerals in space group 173 Luminescent minerals Silicate minerals Sulfate minerals
Thaumasite
[ "Chemistry", "Engineering" ]
672
[ "Structural engineering", "Luminescence", "Luminescent minerals", "Hydrates", "Concrete", "Hydrate minerals" ]
9,456,188
https://en.wikipedia.org/wiki/Monolithic%20architecture
Monolithic architecture describes buildings which are carved, cast or excavated from a single piece of material, historically from rock. The most basic form of monolithic architecture is a rock-cut building, such as the monolithic churches of Ethiopia built by the Zagwe dynasty, or the Pancha Rathas in India. These are cut out of solid rock, to which they remain attached at the base. In most cases, this is evident from the remaining surrounding rock, but sometimes a building is cut from an outcrop, as in the Shore Temple in southern India, and only inspection at close quarters reveals that the building is monolithic. The terms monolith and monolithic column are normally used for objects made from a single large piece of rock which is detached from the ground. They may have been moved a considerable distance, as with several ancient Egyptian obelisks, which have been moved around the world. Buildings with a structural material that is poured into place, most commonly concrete, can also be described as monolithic. Extreme examples are monolithic domes, where the material is sprayed inside of a form to produce the solid structure. An ancient example of a monolithic dome is that of the Mausoleum of Theodoric in Ravenna, Italy, the roof of which is made from a single stone. See also Architecture of cathedrals and great churches Church architecture Early Christian art and architecture Statue of Ahimsa, largest monolithic sculpture References Russell Sturgis, Sturgis' Illustrated Dictionary of Architecture and Building External links Monoliths
Monolithic architecture
[ "Engineering" ]
310
[ "Architecture stubs", "Architecture" ]
9,456,259
https://en.wikipedia.org/wiki/Phycoplast
The phycoplast is a microtubule structure observed during cytokinesis in members of the Chlorophytina, the largest and best-known subphylum of chlorophyte green algae. Cytokinesis in green algae occurs via a diverse range of mechanisms, including cleavage furrows in some algae and cell plates in others. Plants (=Chloroplastida) of the clade Phragmoplastophyta (a subgroup of charophytes which includes the land plants, desmids, water silk, stoneworts etc.) use structures called phragmoplasts to organize and guide the growing cell plate. In these plants, the microtubules of the telophase spindle give rise to the phragmoplast and are oriented perpendicular to the plane of cell division and the forming cell plate. The growth of the cell plate eventually disrupts the telophase spindle (see case 4 in the picture). In the Chlorophyceae, the most common form of cell division occurs via a phycoplast. In these algae, the spindle collapses and a new system of microtubules forms that is oriented in parallel to the plane of cell division. This phycoplast can be observed in algae undergoing cytokinesis via a cleavage furrow (case 1 in the picture) as well as in algae utilizing a cell plate (case 3 in the picture). The phycoplast may play a role in assuring that the plane of cell division will pass between the two daughter nuclei. Typically, these algae undergo "closed" mitosis, in which the nuclear envelope persists throughout mitosis. References P.H. Raven, R.F. Evert, S.E. Eichhorn (2005): Biology of Plants, 7th Edition, W.H. Freeman and Company Publishers, New York. Cell cycle Mitosis Plant cells
Phycoplast
[ "Biology" ]
399
[ "Cell cycle", "Cellular processes", "Mitosis" ]
9,458,068
https://en.wikipedia.org/wiki/RNA%20silencing
RNA silencing or RNA interference refers to a family of gene silencing effects by which gene expression is negatively regulated by non-coding RNAs such as microRNAs. RNA silencing may also be defined as sequence-specific regulation of gene expression triggered by double-stranded RNA (dsRNA). RNA silencing mechanisms are conserved among most eukaryotes. The most common and well-studied example is RNA interference (RNAi), in which endogenously expressed microRNA (miRNA) or exogenously derived small interfering RNA (siRNA) induces the degradation of complementary messenger RNA. Other classes of small RNA have been identified, including piwi-interacting RNA (piRNA) and its subspecies repeat-associated small interfering RNA (rasiRNA). Background RNA silencing describes several mechanistically related pathways which are involved in controlling and regulating gene expression. RNA silencing pathways are associated with the regulatory activity of small non-coding RNAs (approximately 20–30 nucleotides in length) that function as factors involved in inactivating homologous sequences, promoting endonuclease activity, translational arrest, and/or chromatin or DNA modification. In the context in which the phenomenon was first studied, small RNA was found to play an important role in defending plants against viruses. For example, these studies demonstrated that enzymes detect double-stranded RNA (dsRNA) not normally found in cells and digest it into small pieces that are not able to cause disease. While some functions of RNA silencing and its machinery are understood, many are not. For example, RNA silencing has been shown to be important in the regulation of development and in the control of transposition events. RNA silencing has been shown to play a role in antiviral protection in plants as well as insects. Also in yeast, RNA silencing has been shown to maintain heterochromatin structure. However, the varied and nuanced role of RNA silencing in the regulation of gene expression remains a subject of ongoing scientific inquiry. A range of diverse functions have been proposed for a growing number of characterized small RNA sequences, e.g. the regulation of development, neuronal cell fate, cell death, proliferation, fat storage, haematopoietic cell fate, and insulin secretion. RNA silencing functions by repressing translation or by cleaving messenger RNA (mRNA), depending on the degree of complementarity of base-pairing. RNA has largely been investigated in its role as an intermediary in the translation of genes into proteins. More active regulatory functions, however, only began to be addressed by researchers in the late 1990s. The landmark study providing an understanding of the first identified mechanism was published in 1998 by Fire et al., demonstrating that double-stranded RNA could act as a trigger for gene silencing. Since then, various other classes of RNA silencing have been identified and characterized. Presently, the therapeutic potential of these discoveries is being explored, for example, in the context of targeted gene therapy. While RNA silencing is an evolving class of mechanisms, a common theme is the fundamental relationship between small RNAs and gene expression. It has also been observed that the major RNA silencing pathways currently identified have mechanisms of action which may involve both post-transcriptional gene silencing (PTGS) and chromatin-dependent gene silencing (CDGS) pathways. 
CDGS involves the assembly of small RNA complexes on nascent transcripts and is regarded as encompassing mechanisms of action which implicate transcriptional gene silencing (TGS) and co-transcriptional gene silencing (CTGS) events. This is significant because evidence suggests that small RNAs play a role in the modulation of chromatin structure and TGS. Despite early focus in the literature on RNA interference (RNAi) as a core mechanism which occurs at the level of messenger RNA translation, others have since been identified in the broader family of conserved RNA silencing pathways acting at the DNA and chromatin level. RNA silencing refers to the silencing activity of a range of small RNAs and is generally regarded as a broader category than RNAi. While the terms have sometimes been used interchangeably in the literature, RNAi is generally regarded as a branch of RNA silencing. To the extent it is useful to craft a distinction between these related concepts, RNA silencing may be thought of as referring to the broader scheme of small RNA related controls involved in gene expression and the protection of the genome against mobile repetitive DNA sequences, retroelements, and transposons to the extent that these can induce mutations. The molecular mechanisms for RNA silencing were initially studied in plants but have since broadened to cover a variety of subjects, from fungi to mammals, providing strong evidence that these pathways are highly conserved. At least three primary classes of small RNA have currently been identified, namely: small interfering RNA (siRNA), microRNA (miRNA), and piwi-interacting RNA (piRNA). small interfering RNA (siRNA) siRNAs act in the nucleus and the cytoplasm and are involved in RNAi as well as CDGS. siRNAs come from long dsRNA precursors derived from a variety of single-stranded RNA (ssRNA) precursors, such as sense and antisense RNAs. siRNAs also come from hairpin RNAs derived from transcription of inverted repeat regions. siRNAs may also arise enzymatically from non-coding RNA precursors. The volume of literature on siRNA within the framework of RNAi is extensive. One potent application of siRNAs is the ability to distinguish target from non-target sequences that differ by a single nucleotide. This approach is considered therapeutically crucial for silencing dominant gain-of-function (GOF) disorders, in which the disease-causing mutant allele differs from the wild-type allele by a single nucleotide (nt). siRNAs capable of distinguishing a single-nucleotide difference are termed allele-specific siRNAs. microRNA (miRNA) The majority of miRNAs act in the cytoplasm and mediate mRNA degradation or translational arrest. However, some plant miRNAs have been shown to act directly to promote DNA methylation. miRNAs come from hairpin precursors generated by the RNaseIII enzymes Drosha and Dicer. Both miRNA and siRNA form either the RNA-induced silencing complex (RISC) or the nuclear form of RISC known as RNA-induced transcriptional silencing complex (RITS). The volume of literature on miRNA within the framework of RNAi is extensive. Three prime untranslated regions and microRNAs Three prime untranslated regions (3'UTRs) of messenger RNAs (mRNAs) often contain regulatory sequences that post-transcriptionally cause RNA interference. Such 3'-UTRs often contain both binding sites for microRNAs (miRNAs) as well as for regulatory proteins. 
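As a concrete illustration of such binding sites (a toy sketch with invented sequences, not a target-prediction tool), the following Python fragment scans a 3'-UTR for sites complementary to a miRNA "seed", nucleotides 2–8 of the miRNA:

```python
COMPLEMENT = {"A": "U", "U": "A", "G": "C", "C": "G"}

def seed_sites(mirna, utr):
    """Return 0-based positions in `utr` matching the reverse complement
    of the miRNA seed (nucleotides 2-8), i.e. simple 7-mer seed sites."""
    seed = mirna[1:8]  # positions 2-8 of the miRNA, written 5' to 3'
    site = "".join(COMPLEMENT[b] for b in reversed(seed))
    return [i for i in range(len(utr) - 6) if utr[i:i + 7] == site]

mirna = "UGAGGUAGUAGGUUGUAUAGUU"    # a let-7-like sequence, used only as an example
utr = "AAGCUACCUCAGGAUUCUACCUCAA"   # invented 3'-UTR containing two seed matches
print(seed_sites(mirna, utr))       # -> [3, 16]
```

Real prediction tools additionally weigh site type, conservation, and sequence context, but the core operation is this kind of complementary-site scan.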
By binding to specific sites within the 3'-UTR, miRNAs can decrease gene expression of various mRNAs by either inhibiting translation or directly causing degradation of the transcript. The 3'-UTR also may have silencer regions that bind repressor proteins that inhibit the expression of an mRNA. The 3'-UTR often contains microRNA response elements (MREs). MREs are sequences to which miRNAs bind. These are prevalent motifs within 3'-UTRs. Among all regulatory motifs within the 3'-UTRs (e.g. including silencer regions), MREs make up about half of the motifs. As of 2014, the miRBase web site, an archive of miRNA sequences and annotations, listed 28,645 entries in 233 biological species. Of these, 1,881 miRNAs were in annotated human miRNA loci. miRNAs were predicted to have an average of about four hundred target mRNAs (affecting expression of several hundred genes). Friedman et al. estimate that >45,000 miRNA target sites within human mRNA 3'UTRs are conserved above background levels, and >60% of human protein-coding genes have been under selective pressure to maintain pairing to miRNAs. Direct experiments show that a single miRNA can reduce the stability of hundreds of unique mRNAs. Other experiments show that a single miRNA may repress the production of hundreds of proteins, but that this repression often is relatively mild (less than 2-fold). The effects of miRNA dysregulation of gene expression seem to be important in cancer. For instance, in gastrointestinal cancers, nine miRNAs have been identified as epigenetically altered and effective in down-regulating DNA repair enzymes. The effects of miRNA dysregulation of gene expression also seem to be important in neuropsychiatric disorders, such as schizophrenia, bipolar disorder, major depression, Parkinson's disease, Alzheimer's disease and autism spectrum disorders. piwi-interacting RNA (piRNA) piRNAs represent the largest class of small non-coding RNA molecules expressed in animal cells, deriving from a large variety of sources, including repetitive DNA and transposons. However, the biogenesis of piRNAs is also the least well understood. piRNAs appear to act both at the post-transcriptional and chromatin levels. They are distinct from miRNAs at least in their greater size and complexity. Repeat-associated small interfering RNAs (rasiRNAs) are considered to be a subspecies of piRNA. Mechanism The most basic mechanistic flow for RNA silencing is as follows: (For a more detailed explanation of the mechanism, refer to the RNAi:Cellular mechanism article.) 1: RNA with inverted repeats (hairpin/panhandle constructs) --> 2: dsRNA --> 3: miRNAs/siRNAs --> 4: RISC --> 5: Destruction of target mRNA It has been found that the best precursor to effective RNA silencing is single-stranded antisense RNA with inverted repeats, which in turn folds into small hairpin RNA and panhandle constructs. The hairpin or panhandle constructs exist so that the RNA can remain independent and not anneal with other RNA strands. These small hairpin RNAs and/or panhandles are then transported from the nucleus to the cytosol through the nuclear export receptor exportin-5, and there exist as dsRNA (double-stranded RNA), which, like DNA, is a double-stranded series of nucleotides. If the mechanism used single strands rather than dsRNA, there would be a higher chance of hybridizing to other, functional mRNAs; as a double strand, the silencing RNA can be kept on call until it is needed. 
The dsRNA is then cut up by Dicer into small (21–28 nucleotides (nt) long) strands of miRNAs (microRNAs) or siRNAs (short interfering RNAs). Dicer is an endoribonuclease of the RNase III family. Lastly, the double-stranded miRNAs/siRNAs separate into single strands; the antisense RNA strand of the two combines with an enzyme complex called RISC (RNA-induced silencing complex), which includes the catalytic component Argonaute, and guides the RISC to break up the "perfectly complementary" target mRNA or viral genomic RNA so that it can be destroyed. This means that a target mRNA is recognized and cut on the basis of a short sequence-specific region; to ensure degradation, it is typically cleaved at multiple such sites. (If the mechanism required matching a long stretch, the chance of finding and pairing with the complementary mRNA in time would be lower.) It has also been shown that repeat-associated short interfering RNAs (rasiRNAs) have a role in guiding chromatin modification. Biological functions Immunity against viruses or transposons RNA silencing is the mechanism that cells in all kingdoms use to fight RNA viruses and transposons (which can originate from the cell's own genome as well as from external vehicles). In the case of RNA viruses, the viral RNA is destroyed immediately by the mechanism described above. In the case of transposons, the process is a little more indirect. Since transposons are located in different parts of the genome, transcription from different promoters produces complementary mRNAs that can hybridize with each other. When this happens, the RNAi machinery goes into action, degrading the mRNAs of the proteins that would be required to move the transposons themselves. Down-regulation of genes For a detailed explanation of the down-regulation of genes, see RNAi:downregulation of genes Up-regulation of genes For a detailed explanation of the up-regulation of genes, see RNAi:upregulation of genes RNA silencing also gets regulated In the same way that RNA silencing regulates downstream target mRNAs, RNA silencing itself is regulated. For example, silencing signals are spread between cells by a group of enzymes called RdRPs (RNA-dependent RNA polymerases) or RDRs. Practical applications Growing understanding of small RNA gene-silencing mechanisms involving dsRNA-mediated sequence-specific mRNA degradation has directly impacted the fields of functional genomics, biomedicine, and experimental biology. The following section describes various applications involving the effects of RNA silencing. These include uses in biotechnology, therapeutics, and laboratory research. Bioinformatics techniques are also being applied to identify and characterize large numbers of small RNAs and their targets. Biotechnology Artificial introduction of long dsRNAs or siRNAs has been adopted as a tool to inactivate gene expression, both in cultured cells and in living organisms. Structural and functional resolution of small RNAs as the effectors of RNA silencing has had a direct impact on experimental biology. For example, dsRNA may be synthesized to have a specific sequence complementary to a gene of interest. Once introduced into a cell or biological system, it is recognized as exogenous genetic material and activates the corresponding RNA silencing pathway. 
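As a hedged sketch of that design step, using only the broad rules of thumb that the synthetic trigger is complementary to the target and that moderate GC content is preferred, the following scans a hypothetical target mRNA for 21-nt candidate windows and emits the antisense (guide) strand for each. The sequence and thresholds are illustrative, not a validated design pipeline:

```python
COMPLEMENT = {"A": "U", "U": "A", "G": "C", "C": "G"}

def reverse_complement(rna):
    """Antisense strand of an RNA sequence, written 5' to 3'."""
    return "".join(COMPLEMENT[b] for b in reversed(rna))

def sirna_candidates(mrna, length=21, gc_range=(0.30, 0.52)):
    """Slide a window along the target mRNA; keep windows whose GC fraction
    falls in gc_range, returning (position, sense_target, antisense_guide)."""
    out = []
    for i in range(len(mrna) - length + 1):
        window = mrna[i:i + length]
        gc = (window.count("G") + window.count("C")) / length
        if gc_range[0] <= gc <= gc_range[1]:
            out.append((i, window, reverse_complement(window)))
    return out

target = "AUGGCUACGUUAGCAAUCGGAUCCUAGCUUAAGGCAUCGAU"  # invented target fragment
for pos, sense, guide in sirna_candidates(target)[:3]:
    print(pos, sense, guide)
```

The guide strand is what ultimately loads into RISC, as described in the mechanism section above.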
This mechanism can be used to effect decreases in gene expression with respect to the target, useful for investigating loss of function for genes relative to a phenotype. That is, studying the phenotypic and/or physiologic effects of expression decreases can reveal the role of a gene product. The observable effects can be nuanced, such that some methods can distinguish between "knockdown" (decreased expression) and "knockout" (eliminated expression) of a gene. RNA interference technologies have been noted recently as one of the most widely utilized techniques in functional genomics. Screens developed using small RNAs have been used to identify genes involved in fundamental processes such as cell division, apoptosis and fat regulation. Biomedicine Since at least the mid-2000s, there has been intensifying interest in developing short interfering RNAs for biomedical and therapeutic applications. Bolstering this interest is a growing number of experiments which have successfully demonstrated the clinical potential and safety of small RNAs for combating diseases ranging from viral infections to cancer as well as neurodegenerative disorders. In 2004, the first Investigational New Drug applications for siRNA were filed in the United States with the Food and Drug Administration; it was intended as a therapy for age-related macular degeneration. RNA silencing in vitro and in vivo has been accomplished by creating triggers (nucleic acids that induce RNAi) either via expression in viruses or synthesis of oligonucleotides. Many studies indicate that small RNA-based therapies may offer novel and potent weapons against pathogens and diseases where small molecule/pharmacologic and vaccine/biologic treatments have failed or proved less effective in the past. However, it has also been cautioned that the design and delivery of small RNA effector molecules should be carefully considered in order to ensure safety and efficacy. The role of RNA silencing in therapeutics, clinical medicine, and diagnostics is a fast-developing area, and it is expected that in the next few years some of the compounds using this technology will reach market approval. Clinical domains in which RNA silencing is playing an increasingly important role include ocular and retinal disorders, cancer, kidney disorders, LDL lowering, and antiviral therapy. A number of RNAi-based therapies are in various phases of clinical trials; their status can be monitored on the ClinicalTrials.gov website, a service of the National Institutes of Health (NIH). Of note are treatments in development for ocular and retinal disorders, which were among the first compounds to reach clinical development. AGN211745 (sirna027) (Allergan) and bevasiranib (Cand5) (Opko) underwent clinical development for the treatment of age-related macular degeneration, but trials were terminated before the compounds reached the market. Other compounds in development for ocular conditions include SYL040012 (Sylentis) and QPI-007 (Quark). SYL040012 (bamosinan) is a drug candidate under clinical development for glaucoma, a progressive optic neurodegeneration frequently associated with increased intraocular pressure; QPI-007 is a candidate for the treatment of angle-closure glaucoma and non-arteritic anterior ischaemic optic neuropathy; both compounds are currently undergoing phase II clinical trials. 
Several compounds are also under development for conditions such as cancer and rare diseases.

Main challenge

As with conventional manufactured drugs, the main challenge in developing successful RNAi-based drugs is the precise delivery of the RNAi triggers to where they are needed in the body. Treatments for ocular macular degeneration succeeded sooner than treatments for other diseases because the eyeball is almost a closed system, so the preparation can be injected with a needle exactly where it needs to act. Future successful drugs will be those that are able to reach where they are needed, possibly with the help of nanoscale carriers ("nanobots").

Laboratory

The scientific community has been quick to harness RNA silencing as a research tool. The strategic targeting of mRNA can provide a large amount of information about gene function and its ability to be turned on and off. Induced RNA silencing can serve as a controlled method for suppressing gene expression. Since the machinery is conserved across most eukaryotes, these experiments scale well to a range of model organisms. In practice, expression of synthetic short hairpin RNAs (shRNAs) can be used to achieve stable knockdown. If promoters can be made to express these designer short hairpin RNAs, the result is often potent, stable, and controlled gene knockdown in both in vitro and in vivo contexts. Short hairpin RNA vector systems can be seen as roughly analogous in scope to using cDNA overexpression systems. Overall, synthetic and natural small RNAs have proven to be an important tool for studying gene function in cells as well as animals.

Bioinformatics approaches to identify small RNAs and their targets have returned several hundred, if not thousands of, small RNA candidates predicted to affect gene expression in plants, C. elegans, D. melanogaster, zebrafish, mouse, rat, and human. These methods are largely directed toward identifying small RNA candidates for knockout experiments but may have broader applications. One bioinformatics approach evaluated sequence-conservation criteria by filtering for seed-complementary target-binding sites. The cited study predicted that approximately one third of mammalian genes are regulated, in this case, by miRNAs.

Ethics and risk-benefit analysis

Aspects of RNA silencing to consider include its possible off-target effects, toxicity, and delivery methods. If RNA silencing is to become a conventional drug, it must first pass the ethical scrutiny typical of biomedicine. Using risk-benefit analysis, researchers can determine whether RNA silencing conforms to ethical principles such as nonmaleficence, beneficence, and autonomy. There is a risk of creating infection-competent viruses that could infect non-consenting people, and a risk of affecting future generations through these treatments. With respect to autonomy, both scenarios are potentially unethical. At this moment, unsafe delivery methods and unintended effects of vector viruses add to the argument against RNA silencing. In terms of off-target effects, siRNA can induce innate interferon responses, inhibit endogenous miRNAs through saturation, and may have sequences complementary to other, non-target mRNAs. Off-target activity can also up-regulate genes, including oncogenes and antiapoptotic genes. The toxicity of RNA silencing is still under review, as there are conflicting reports.
RNA silencing is a quickly developing field, and because of that its ethical issues need to be discussed further. Guided by general ethical principles, researchers must continuously perform risk-benefit analyses.

See also
RNAi
siRNA
miRNA
piwiRNA
rasiRNA

References

RNA Gene expression
RNA silencing
[ "Chemistry", "Biology" ]
4,483
[ "Gene expression", "Molecular genetics", "Cellular processes", "Molecular biology", "Biochemistry" ]
9,458,413
https://en.wikipedia.org/wiki/South%20Coast%20Air%20Quality%20Management%20District
The South Coast Air Quality Management District, also known by the acronym South Coast AQMD, formed in 1976, is the air pollution agency responsible for regulating stationary sources of air pollution in the South Coast Air Basin and the Coachella Valley portion of the Salton Sea Air Basin, in Southern California. The separate California Air Resources Board is responsible for regulating mobile sources (e.g. vehicles) in the air basin.

Basin geography

The South Coast AQMD includes all of Orange County and the non-desert regions of Los Angeles County, San Bernardino County, and Riverside County, including the Coachella Valley. The South Coast Air Basin area encompassed by the South Coast AQMD amounts to about 10,750 square miles (27,850 square kilometres) and is the second most populous area in the United States. This area has a severe problem with smog, and the South Coast AQMD has been a leader in the nation's efforts to reduce air pollution emissions. The main office is located in the city of Diamond Bar.

Operations

South Coast AQMD develops, adopts and implements an Air Quality Management Plan for bringing the area into compliance with the clean air standards established by national and state legislation. Air quality and permissible air pollutant emission "rules" are promulgated to reduce emissions from various sources, including specific types of equipment, industrial processes, paints, solvents and certain consumer products. Permits are issued to the pertinent industries and businesses to enforce compliance with the air quality and emission rules, and South Coast AQMD staff conduct periodic inspections to ensure such compliance. South Coast AQMD's rules apply to businesses ranging from large oil refineries and power plants to gasoline (petrol) fueling stations and dry cleaning plants. There are about 30,000 such businesses operating under South Coast AQMD permits. In general, the agency is limited to establishing rules for regulating stationary sources. Emission standards for mobile sources (automobiles, trucks, buses, railroads, airplanes and marine vessels) are established by the U.S. Environmental Protection Agency and the California Air Resources Board.

Air quality monitoring network

South Coast AQMD also operates an extensive network of air quality monitoring stations (about 40 stations) and issues daily air quality forecasts. The forecasts are made available to the public through newspapers, television, radio, its award-winning smartphone application, social media, its website, and a toll-free Smog Update telephone line.

Air quality and air pollution dispersion modeling

The air quality modeling activities of the South Coast AQMD are among the functions of the Planning, Rule Development and Area Sources section. That section is also responsible for oversight of, and comment upon, air pollution dispersion modeling studies performed as part of any environmental impact studies that may be reviewed by or requested by South Coast AQMD. The models that may be utilized include:

California Line Source Dispersion Model (CALINE-4)
AERMOD Model
Hotspots Analysis and Reporting Program (HARP)
U.S. Environmental Protection Agency (EPA)'s Air Quality Models
California Air Resources Board (CARB)'s Air Quality Models

Indirect source rule

The SCAQMD has authority to regulate stationary sources of air pollution. These include warehouses, which induce truck trips and thus qualify as indirect sources of air pollution.
In 2021, the SCAQMD board voted to authorize the Indirect Source Rule, which requires warehouses larger than 100,000 square feet to take steps to cut pollution or fund similar air pollution mitigation projects, or else face penalties. The Inland Empire has experienced substantial development of warehouses which serve the e-commerce industry. The indirect source rule's main program is the Warehouse Actions and Investments to Reduce Emissions (WAIRE). Under WAIRE, affected warehouses must earn WAIRE points, which creates incentives to reduce emissions from the trucks that serve them. They can earn points by operating electric trucks or other zero-emissions vehicles, installing charging stations or solar panels, or paying to install particulate filters at local schools. Warehouse operators that do not earn sufficient points must pay mitigation fees.

On August 2, 2024, the air district board unanimously voted to authorize a new indirect source rule to reduce nitrogen oxide emissions from railyards in the district. The rule would apply to 25 facilities and would require each facility, or all facilities operated by the same operator in the state of California, to reduce emissions by a specified percentage below the projected baseline, or to demonstrate through reduced throughput that the facility had a reduced Aggregate Emission Factor. The rule builds upon the CARB In-Use Locomotive rule and the Advanced Clean Fleets rule, which have yet to be approved by the EPA; the railyard rule therefore cannot go into effect until those are approved. The rule is expected to prevent 300 premature deaths and 2,100 hospitalizations annually.

Governing board

South Coast AQMD has a Governing Board of 13 members. Ten of the members are county supervisors and city council members. The remaining three are appointed by California state officials. Current members of the Governing Board include:

Vanessa Delgado (Chair), California State Senate Rules Committee appointee
Michael A. Cacciotti (Vice Chair), Eastern Los Angeles County cities
Andrew Do, Orange County Supervisor
Nithya Raman, Los Angeles City Council
Gideon Kracov, Governor's appointee†
Holly Mitchell, Los Angeles County Supervisor
Larry McCallon, San Bernardino County cities
Veronica Padilla-Campos, California State Assembly Speaker appointee
V. Manuel Perez, Riverside County Supervisor
Jose Luis Solache, Western Los Angeles County cities
Carlos Rodriguez, Orange County cities
Curt Hagman, San Bernardino County Supervisor
Patricia Lock Dawson, Riverside County cities

†South Coast AQMD member to the California Air Resources Board

The representative for Orange County cities is chosen by the 34 members of the Orange County City Selection Committee. Appointment requires both a majority vote of the committee and a majority vote when weighted by population. In November 2015, Republicans blocked reappointment of Santa Ana Mayor Miguel Pulido, a Democrat, to the board, voting to replace him with Republican Lake Forest Councilman Dwight Robinson.

Administration

The Chief Executive Officer of the South Coast AQMD reports to the Governing Board.
The following departments report to the Executive Officer:

Administrative departments
Legal Counsel
Prosecutor
Legislative, Public Affairs, and Media
Finance
Human Resources
Information Management

Operational departments
Engineering and Permitting
Compliance and Enforcement
Planning, Rule Development and Implementation
Monitoring and Analysis
Technology Advancement Office

Funding for South Coast AQMD

South Coast AQMD utilizes a system of evaluation fees, annual operating fees, emission fees, Hearing Board fees, penalties/settlements and investments that together generate around 73% of its revenue. The remaining 27% of its revenue comes from federal grants, California Air Resources Board (CARB) subvention funds, and California Clean Air Act motor vehicle fees.

See also
South Coast Air Basin
California Air Resources Board
California Department of Toxic Substances Control
AP 42 Compilation of Air Pollutant Emission Factors
Environmental remediation
Hal Bernson, former board member
Clean Air Act (1990)
Clean Air Act (1970)
List of California air districts
U.S. Environmental Protection Agency dispersion models
National Ambient Air Quality Standards (NAAQS)
National Emissions Standards for Hazardous Air Pollutants (NESHAP)
PHEV Research Center
Public Smog
Ventura County Air Pollution Control District

References

External links
Official South Coast Air Quality Management District (SCAQMD) website

Air pollution in California Air pollution organizations Southern California Environmental agencies in the United States Environmental agencies of country subdivisions Environment of Greater Los Angeles Special districts of California Government of Los Angeles County, California Government of San Bernardino County, California Government of Orange County, California Government of Riverside County, California Government of Los Angeles Government in Long Beach, California Atmospheric dispersion modeling Organizations based in Orange County, California 1976 establishments in California Environmental organizations based in Los Angeles
South Coast Air Quality Management District
[ "Chemistry", "Engineering", "Environmental_science" ]
1,576
[ "Atmospheric dispersion modeling", "Environmental modelling", "Environmental engineering" ]
14,606,380
https://en.wikipedia.org/wiki/Humphrey%E2%80%93Parkes%20terminology
Humphrey–Parkes terminology is a system of nomenclature for the plumage of birds. Before the Humphrey–Parkes system, plumages were named according to the belief that a particular plumage was the breeding plumage and the others were not. Because this assumption did not always hold, the Humphrey–Parkes terminology came into use to rectify the error. The terminology is named after P. S. Humphrey and K. C. Parkes.

Under the Humphrey–Parkes nomenclature, the main adult plumage, especially when it is produced by a complete molt, is called the basic plumage. In most birds, the non-breeding plumage, which is worn longer than the breeding plumage, is known as the basic plumage. In birds that molt only once a year, the regular and only plumage is known as the basic plumage.

In some birds, a partial molt occurs before the bird breeds. The resulting plumage is known as the alternate plumage and is generally what was previously known as a bird's breeding plumage. If a bird produces a third plumage in addition to the basic and alternate plumages, it is known as supplemental plumage. This plumage is most frequently found in ptarmigans.

The unique plumage of a juvenile bird is known as juvenal (or, less precisely, juvenile) plumage. When the bird is molting, the molt is known as a prejuvenal, prebasic, prealternate, or presupplemental molt, depending on which plumage follows the molt.

For birds that do not molt completely into full adult plumage the first time, a numbering system is used to signify which plumage the bird is in. For example, the first time a bird enters basic plumage, the plumage is known as first basic plumage; the second time, second basic plumage. The numbers are dropped after a bird achieves its full adult plumage.

References

Birds
Humphrey–Parkes terminology
[ "Biology" ]
379
[ "Birds", "Animals" ]
14,606,730
https://en.wikipedia.org/wiki/Cass%20criterion
The Cass criterion, also known as the Malinvaud–Cass criterion, is a central result in the theory of overlapping generations models in economics. It is named after David Cass.

A major feature which sets overlapping generations models apart from the standard model with a finite number of infinitely lived individuals is that the First Welfare Theorem might not hold; that is, competitive equilibria may not be Pareto optimal. If $p_t$ represents the vector of Arrow–Debreu commodity prices prevailing in period $t$ and if

$$\sum_{t=1}^{\infty} \frac{1}{\lVert p_t \rVert} < \infty,$$

then a competitive equilibrium allocation is inefficient.

References

Economics and time
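As a minimal worked illustration, assuming a hypothetical geometric price path (not taken from the literature): suppose the price norms grow geometrically, $\lVert p_t \rVert = (1+g)^t$ for some $g > 0$. Then

$$\sum_{t=1}^{\infty} \frac{1}{\lVert p_t \rVert} = \sum_{t=1}^{\infty} (1+g)^{-t} = \frac{1}{g} < \infty,$$

so the criterion flags the equilibrium as inefficient. If instead the norms remain bounded, say $\lVert p_t \rVert \le B$ for all $t$, then every term of the series is at least $1/B$, the sum diverges, and the criterion does not signal inefficiency.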
Cass criterion
[ "Physics" ]
120
[ "Spacetime", "Economics and time", "Physical quantities", "Time" ]
14,607,208
https://en.wikipedia.org/wiki/Wolfgang%20Wickler
Wolfgang Wickler (18 November 1931 – 12 January 2024) was a German zoologist, behavioral researcher and author. He led the ethological department of the Max Planck Institute for Behavioral Physiology from 1974, and he took over as director of the institute in 1975. Even after he was given emeritus status, he remained closely associated with the institute in Seewiesen and ensured its smooth transition into the newly created Max Planck Institute for Ornithology.

Career

After finishing secondary school in 1951, Wickler studied biology and then received a grant to go to the Max Planck Institute for Behavioral Physiology, where he was a student of Konrad Lorenz and Erich von Holst. After completing his doctoral work on the behavior of fish, he was a scientific assistant in Seewiesen from 1960 and qualified as a professor (habilitation) at the University of Munich in 1969. He was also appointed professor in the faculty of natural sciences there in 1976. By 1970, he was a lecturer in the Catholic theological faculty on the biological foundations of human moral concepts.

Wickler's areas of specialisation were the reconstruction of the evolutionary history of animal communities and the analysis of animal communication. Among other areas, he investigated the "dialects" of birds, and in 1968 he wrote a book about mimicry which remained the only book on the subject in the German language until 2002. Other research fields of his department at the Max Planck Institute for Behavioral Physiology were studies of the social behavior of spiders and grasshoppers; of food acquisition, reproduction and mating in prawns; as well as rather philosophical publications on "biological explanation" in connection with ethical questions (such as "Die Biologie der zehn Gebote", The Biology of the Ten Commandments, in 1971).

Wickler received great attention from the broad public in 1981 with the sociobiologically oriented book "Das Prinzip Eigennutz" (The Principle of Self-Interest), which he wrote with Ute Seibt, as well as with the book "Männlich – weiblich. Ein Naturgesetz und seine Folgen" (Male – Female: A Natural Law and its Consequences), also written with Ute Seibt, in 1983. The focal point of both books was the evolution of behavior. The central question was formulated as, "How must the behavior of living things have been formed, if the theory of evolution is correct?" (from the foreword to "Das Prinzip Eigennutz"). Although their books did not deny cultural influences on human behavior, the authors were reproached for giving these influences only marginal consideration. It is extremely unusual that new editions of several of his books continued to be published decades after their first publication.

In November 1997, the council of the Max Planck Society decided to close the Max Planck Institute for Behavioral Physiology upon the retirement of Prof. Wolfgang Wickler on 30 November 1999. Its ornithological research has been continued in the Max Planck Institute for Ornithology (in Erling-Andechs, Radolfzell and Seewiesen).

Along with the former Bonn behavioral biologist Hanna-Maria Zippelius, Wolfgang Wickler was one of the most outspoken critics of the instinct theory of his mentor, Konrad Lorenz.

Wolfgang Wickler was married to Agnes Oehm from 1956 and had four grown children; he was also active as an organist. He died on 12 January 2024, at the age of 92.

Selected publications

Wickler, W.: Mimikry. Nachahmung und Täuschung in der Natur. Munich: 1968.
In German. English edition: Mimicry in Plants and Animals (translated by R. D. Martin). McGraw-Hill, New York: 1968.
Wickler, W.: Sind wir Sünder?: Naturgesetze der Ehe. With an introduction by Konrad Lorenz. Munich: 1969. In German.
Wickler, W.: Antworten der Verhaltensforschung. Munich: 1970. In German.
Wickler, W.: Verhalten und Umwelt. Hoffmann und Campe Verlag, Hamburg: 1972. In German.
Wickler, W. and Seibt, U. (eds.): Vergleichende Verhaltensforschung (reader). Hamburg: 1973. In German.
Wickler, W.: Stammesgeschichte und Ritualisierung. Zur Entstehung tierischer und menschlicher Verhaltensmuster. Munich: 1975. In German.
Wickler, W.: Die Biologie der Zehn Gebote. Warum die Natur für uns kein Vorbild ist. Munich: 1991 (new edition). In German. English edition: The Biology of the Ten Commandments (translated by David Smith). McGraw-Hill, New York: 1972.
Wickler, W. and Seibt, U.: Das Prinzip Eigennutz. Zur Evolution sozialen Verhaltens. Munich / Zurich: 1991 (new edition). In German.
Wickler, W. and Seibt, U.: Männlich Weiblich. Ein Naturgesetz und seine Folgen. Heidelberg / Berlin: 1998 (new edition). In German.
Wickler, W. and Seibt, U.: Kalenderwurm und Perlenpost. Biologen entschlüsseln ungeschriebene Botschaften. Heidelberg / Berlin: 1998. In German.

External links

References

1931 births 2024 deaths Ethologists 20th-century German zoologists 21st-century German zoologists Max Planck Institute directors Max Planck Society people Academic staff of the Ludwig Maximilian University of Munich Scientists from Berlin
Wolfgang Wickler
[ "Biology" ]
1,217
[ "Ethology", "Behavior", "Ethologists" ]
14,608,785
https://en.wikipedia.org/wiki/Firewalls%20and%20Internet%20Security
Firewalls and Internet Security: Repelling the Wily Hacker is a 1994 book by William R. Cheswick and Steven M. Bellovin that helped define the concept of a network firewall. Describing in detail one of the first major firewall deployments, at AT&T, the book influenced the formation of the perimeter security model, which became the dominant network security architecture in the mid-1990s. In 2003, a second edition was published, adding Aviel D. Rubin to its authors.

References

External links
Web page for the second edition
Firewalls and Internet Security at Google Books

Internet security Computer security books 1994 non-fiction books Books about the Internet Works about security and surveillance Works about computer hacking
Firewalls and Internet Security
[ "Technology" ]
144
[ "Computing stubs", "Computer book stubs" ]
14,609,028
https://en.wikipedia.org/wiki/State%20Scientific%20Research%20Institute%20of%20Aviation%20Systems
State Scientific Research Institute of Aviation Systems, or GosNIIAS for short, is a Russian aerospace research centre. It was founded by a decree of the Council of Ministers of the USSR on 26 February 1946 from a number of laboratories of the Flight Research Institute, for operations research and the development of aviation weapons systems. The new institute was named NII-2. In March 1994 the institute received its current name (GosNIIAS). Initially, the institute was located in the buildings of the former Sergievo-Elizabethan Asylum.

GosNIIAS hosts six base departments that train students and graduate students from three universities:

Department FUPM MIPT, "Avionics. Control and Information Systems". Organized in 1969; head of the department: Academician E. A. Fedosov.
Department MAI, "System design of air complexes". Organized in 1969; head of the department: Doctor of Technical Sciences V. A. Stefanov.
Department MAI, "External design and efficiency of aviation complexes". Organized in 1973; head of the department: Doctor of Technical Sciences A. M. Zherebin.
Department MAI, "Systems of automatic and intelligent control". Organized in 1942; head of the department: Academician of the Russian Academy of Sciences S. Y. Zheltov.
Department MIREA, "Aviation and space information processing and control systems". Organized in 2002; head of the department: Corresponding Member of the Russian Academy of Sciences G. G. Sebryakov.
Department MIREA, "Avionics". Organized in 1988; head of the department: Academician of the Russian Academy of Sciences E. A. Fedosov.

Bibliography
List of GosNIIAS publications in the Scientific electronic library elibrary.ru

Notes

References

1946 establishments in Russia Defence companies of the Soviet Union Companies based in Moscow Metal companies of the Soviet Union Buran program Research institutes in Russia Research institutes in the Soviet Union Aviation in the Soviet Union Aerospace research institutes Aviation research institutes Aerospace engineering organizations Research and development organizations Federal State Unitary Enterprises of Russia
State Scientific Research Institute of Aviation Systems
[ "Engineering" ]
411
[ "Aerospace engineering", "Aerospace engineering organizations", "Aeronautics organizations" ]
14,609,061
https://en.wikipedia.org/wiki/Cultural%20astronomy
Cultural astronomy, sometimes called the study of Astronomy in Culture, has been described as investigating "the diversity of ways in which cultures, both ancient and modern, perceive celestial objects and integrate them into their view of the world." As such, it encompasses the interdisciplinary fields studying the astronomies of current or ancient societies and cultures. It developed from the two interdisciplinary fields of archaeoastronomy, the study of the use of astronomy and its role in ancient cultures and civilizations, and ethnoastronomy, "a closely allied research field which merges astronomy, textual scholarship, ethnology, and the interpretation of ancient iconography for the purpose of reconstructing lifeways, astronomical techniques, and rituals." It is also related to historical astronomy (the analysis of historical astronomical data), the history of astronomy (the study of how the discipline of astronomy has evolved over the course of human knowledge) and the history of astrology (the investigation of relationships between astrology and astronomy).

See also
Archaeoastronomy

References

Astronomical sub-disciplines History of astrology
Cultural astronomy
[ "Astronomy" ]
217
[ "History of astrology", "Astronomical sub-disciplines", "History of astronomy" ]
14,609,233
https://en.wikipedia.org/wiki/Holographic%20algorithm
In computer science, a holographic algorithm is an algorithm that uses a holographic reduction. A holographic reduction is a constant-time reduction that maps solution fragments many-to-many such that the sum of the solution fragments remains unchanged. These concepts were introduced by Leslie Valiant, who called them holographic because "their effect can be viewed as that of producing interference patterns among the solution fragments". The algorithms are unrelated to laser holography, except metaphorically. Their power comes from the mutual cancellation of many contributions to a sum, analogous to the interference patterns in a hologram.

Holographic algorithms have been used to find polynomial-time solutions to problems without such previously known solutions for special cases of satisfiability, vertex cover, and other graph problems. They have received notable coverage due to speculation that they are relevant to the P versus NP problem and their impact on computational complexity theory. Although some of the general problems are #P-hard problems, the special cases solved are not themselves #P-hard, and thus do not prove FP = #P. Holographic algorithms have some similarities with quantum computation, but are completely classical.

Holant problems

Holographic algorithms exist in the context of Holant problems, which generalize counting constraint satisfaction problems (#CSP). A #CSP instance is a hypergraph G=(V,E) called the constraint graph. Each hyperedge represents a variable, and each vertex $v$ is assigned a constraint $f_v$. A vertex is connected to a hyperedge if the constraint on the vertex involves the variable on the hyperedge. The counting problem is to compute

$$\sum_{\sigma} \prod_{v \in V} f_v\!\left(\sigma|_{E(v)}\right), \qquad (1)$$

which is a sum over all variable assignments $\sigma$ of the product of every constraint, where the inputs to the constraint $f_v$ are the values that $\sigma$ assigns to the variables on the hyperedges $E(v)$ incident to $v$.

A Holant problem is like a #CSP except that the input must be a graph, not a hypergraph. Restricting the class of input graphs in this way is indeed a generalization. Given a #CSP instance, replace each hyperedge e of size s with a vertex v of degree s with edges incident to the vertices contained in e. The constraint on v is the equality function of arity s. This identifies all of the variables on the edges incident to v, which has the same effect as the single variable on the hyperedge e.

In the context of Holant problems, the expression in (1) is called the Holant, after a related exponential sum introduced by Valiant. A brute-force evaluation of this sum on a small example is sketched below.

Holographic reduction

A standard technique in complexity theory is a many-one reduction, where an instance of one problem is reduced to an instance of another (hopefully simpler) problem. However, holographic reductions between two computational problems preserve the sum of solutions without necessarily preserving correspondences between solutions. For instance, the total number of solutions in both sets can be preserved, even though individual problems do not have matching solutions. The sum can also be weighted, rather than simply counting the number of solutions, using linear basis vectors.

General example

It is convenient to consider holographic reductions on bipartite graphs. A general graph can always be transformed into a bipartite graph while preserving the Holant value. This is done by replacing each edge in the graph by a path of length 2, which is also known as the 2-stretch of the graph. To keep the same Holant value, each new vertex is assigned the binary equality constraint.
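To make the Holant sum in (1) concrete, here is a minimal brute-force evaluator, an illustrative sketch (the function and the toy example are invented for this purpose, not drawn from the literature). It enumerates all 0/1 assignments to the edges and multiplies the constraint values at each vertex; the demo also checks that the 2-stretch just described preserves the Holant value.

```python
from itertools import product

def holant(edges, constraints):
    """Brute-force Holant: `edges` is a list of vertex pairs, and
    `constraints[v]` maps the tuple of 0/1 values on v's incident edges
    (in edge-list order) to a number.  Returns the sum, over all 0/1
    edge assignments, of the product of all constraint values."""
    total = 0
    for sigma in product((0, 1), repeat=len(edges)):
        term = 1
        for v, f in constraints.items():
            incident = tuple(sigma[i] for i, e in enumerate(edges) if v in e)
            term *= f(incident)
        total += term
    return total

OR = lambda xs: int(any(xs))             # at least one incident edge is 1
EQ = lambda xs: int(len(set(xs)) == 1)   # all incident edges agree

# A triangle with OR at every vertex: this Holant counts edge covers.
tri_edges = [(0, 1), (1, 2), (0, 2)]
tri_constraints = {v: OR for v in (0, 1, 2)}
print(holant(tri_edges, tri_constraints))  # -> 4

# Its 2-stretch: edge (a, b) becomes a path a - m - b, and each new
# midpoint vertex m gets the binary equality constraint.
stretch_edges, stretch_constraints = [], {v: OR for v in (0, 1, 2)}
for a, b in tri_edges:
    m = f"m{a}{b}"                       # midpoint vertex for edge (a, b)
    stretch_edges += [(a, m), (m, b)]
    stretch_constraints[m] = EQ
print(holant(stretch_edges, stretch_constraints))  # -> 4, Holant preserved
```

With the OR constraint at each vertex of a triangle, the Holant counts the triangle's four edge covers, and the 2-stretch with binary equality at the new vertices returns the same value.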
Consider a bipartite graph G=(U,V,E) where the constraint assigned to every vertex $u \in U$ is $f$ and the constraint assigned to every vertex $v \in V$ is $g$. Denote this counting problem by $\operatorname{Holant}(f \mid g)$. If the vertices in U are viewed as one large vertex of degree |E|, then the constraint of this vertex is the tensor product of $f$ with itself $|U|$ times, which is denoted $f^{\otimes |U|}$. Likewise, if the vertices in V are viewed as one large vertex of degree |E|, then the constraint of this vertex is $g^{\otimes |V|}$. Let the constraint $f$ be represented by its weighted truth table as a row vector and the constraint $g$ be represented by its weighted truth table as a column vector. Then the Holant of this constraint graph is simply $f^{\otimes |U|} g^{\otimes |V|}$.

Now, for any complex 2-by-2 invertible matrix T (the columns of which are the linear basis vectors mentioned above), there is a holographic reduction between $\operatorname{Holant}(f \mid g)$ and $\operatorname{Holant}(f\,T^{\otimes \operatorname{arity}(f)} \mid (T^{-1})^{\otimes \operatorname{arity}(g)}\,g)$. To see this, insert the identity $T^{\otimes |E|} (T^{-1})^{\otimes |E|}$ in between $f^{\otimes |U|}$ and $g^{\otimes |V|}$ to get

$$f^{\otimes |U|}\, g^{\otimes |V|} = f^{\otimes |U|}\, T^{\otimes |E|} (T^{-1})^{\otimes |E|}\, g^{\otimes |V|} = \left(f\, T^{\otimes \operatorname{arity}(f)}\right)^{\otimes |U|} \left((T^{-1})^{\otimes \operatorname{arity}(g)}\, g\right)^{\otimes |V|}.$$

Thus, $\operatorname{Holant}(f \mid g)$ and $\operatorname{Holant}(f\,T^{\otimes \operatorname{arity}(f)} \mid (T^{-1})^{\otimes \operatorname{arity}(g)}\,g)$ have exactly the same Holant value for every constraint graph. They essentially define the same counting problem.

Specific examples

Vertex covers and independent sets

Let G be a graph. There is a 1-to-1 correspondence between the vertex covers of G and the independent sets of G. For any set S of vertices of G, S is a vertex cover in G if and only if the complement of S is an independent set in G. Thus, the number of vertex covers in G is exactly the same as the number of independent sets in G.

The equivalence of these two counting problems can also be proved using a holographic reduction. For simplicity, let G be a 3-regular graph. The 2-stretch of G gives a bipartite graph H=(U,V,E), where U corresponds to the edges in G and V corresponds to the vertices in G. The Holant problem that naturally corresponds to counting the number of vertex covers in G is $\operatorname{Holant}(\mathrm{OR}_2 \mid \mathrm{EQUAL}_3)$. The truth table of OR2 as a row vector is (0,1,1,1). The truth table of EQUAL3 as a column vector is $(1,0,0,0,0,0,0,1)^{\mathrm T}$. Then under a holographic transformation by $T = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}$,

$$\operatorname{Holant}(\mathrm{OR}_2 \mid \mathrm{EQUAL}_3) = \operatorname{Holant}(\mathrm{OR}_2\, T^{\otimes 2} \mid (T^{-1})^{\otimes 3}\, \mathrm{EQUAL}_3) = \operatorname{Holant}(\mathrm{NAND}_2 \mid \mathrm{EQUAL}_3),$$

which is the Holant problem that naturally corresponds to counting the number of independent sets in G. (A numerical check of this transformation is sketched below.)

History

As with any type of reduction, a holographic reduction does not, by itself, yield a polynomial time algorithm. In order to get a polynomial time algorithm, the problem being reduced to must also have a polynomial time algorithm. Valiant's original application of holographic algorithms used a holographic reduction to a problem where every constraint is realizable by matchgates, which he had just proved is tractable by a further reduction to counting the number of perfect matchings in a planar graph. The latter problem is tractable by the FKT algorithm, which dates to the 1960s. Soon after, Valiant found holographic algorithms with reductions to matchgates for #7Pl-Rtw-Mon-3CNF and #7Pl-3/2Bip-VC (the subscript 7 denoting counting modulo 7). These problems may appear somewhat contrived, especially with respect to the modulus. Both problems were already known to be #P-hard when ignoring the modulus, and Valiant supplied proofs of #P-hardness modulo 2, which also used holographic reductions. Valiant found these two problems by a computer search that looked for problems with holographic reductions to matchgates. He called their algorithms accidental algorithms, saying "when applying the term accidental to an algorithm we intend to point out that the algorithm arises from satisfying an apparently onerous set of constraints." The "onerous" set of constraints in question are polynomial equations that, if satisfied, imply the existence of a holographic reduction to matchgate realizable constraints.
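As a numerical sanity check of the basis change in the vertex-cover example above, the following sketch (an illustration of this article's formulas, assuming NumPy; the helper name is invented) verifies that the transformation by T = [[0, 1], [1, 0]] carries OR2 to NAND2 while fixing EQUAL3:

```python
import numpy as np
from functools import reduce

def tensor_power(M, k):
    """k-fold Kronecker (tensor) power of a matrix or vector."""
    return reduce(np.kron, [M] * k)

T = np.array([[0, 1], [1, 0]])        # the holographic basis change
T_inv = np.linalg.inv(T)              # here T happens to be its own inverse

OR2 = np.array([0, 1, 1, 1])          # row vector, indexed 00, 01, 10, 11
NAND2 = np.array([1, 1, 1, 0])
EQUAL3 = np.zeros(8)                  # column vector, indexed 000 ... 111
EQUAL3[0] = EQUAL3[7] = 1

# f -> f . T^{arity(f)} on the U side ...
print(np.allclose(OR2 @ tensor_power(T, 2), NAND2))          # True
# ... and g -> (T^{-1})^{arity(g)} . g on the V side.
print(np.allclose(tensor_power(T_inv, 3) @ EQUAL3, EQUAL3))  # True
```

Since both sides transform consistently, Holant(OR2 | EQUAL3) and Holant(NAND2 | EQUAL3) agree on every constraint graph, matching the vertex-cover/independent-set correspondence; the `holant` evaluator from the earlier sketch can be used to confirm the counts on any small 3-regular example.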
After several years of developing (what is known as) matchgate signature theory, Jin-Yi Cai and Pinyan Lu were able to explain the existence of Valiant's two accidental algorithms. These two problems are just special cases of two much larger families of problems: #2^k−1Pl-Rtw-Mon-kCNF and #2^k−1Pl-k/2Bip-VC (counting modulo 2^k − 1) for any positive integer k. The modulus 7 is just the third Mersenne number, and Cai and Lu showed that these types of problems with parameter k can be solved in polynomial time exactly when the modulus is the kth Mersenne number 2^k − 1, by using holographic reductions to matchgates and the Chinese remainder theorem.

Around the same time, Jin-Yi Cai, Pinyan Lu and Mingji Xia gave the first holographic algorithm that did not reduce to a problem that is tractable by matchgates. Instead, they reduced to a problem that is tractable by Fibonacci gates, which are symmetric constraints whose truth tables satisfy a recurrence relation similar to the one that defines the Fibonacci numbers. They also used holographic reductions to prove that certain counting problems are #P-hard. Since then, holographic reductions have been used extensively as ingredients in both polynomial time algorithms and proofs of #P-hardness.

References

Algorithms
Holographic algorithm
[ "Mathematics" ]
1,747
[ "Algorithms", "Mathematical logic", "Applied mathematics" ]
14,609,540
https://en.wikipedia.org/wiki/De-perimeterisation
In information security, de-perimeterisation is the removal of a boundary between an organisation and the outside world. De-perimeterisation means protecting an organization's systems and data on multiple levels, using a mixture of encryption, secure computer protocols, secure computer systems and data-level authentication, rather than relying on the organization's network boundary to the Internet. Successful implementation of a de-perimeterised strategy within an organization implies that the perimeter, or outer security boundary, has been removed.

Metaphorically, de-perimeterisation is similar to the historic dismantling of city walls to allow the free flow of goods and information. To achieve this there was a shift from city states to nation states and the creation of standing armies, so that city boundaries were extended to surround multiple cities.

The term de-perimeterisation was coined by Jon Measham, a former employee of the UK's Royal Mail, in a 2001 research paper, and was subsequently used by the Jericho Forum, of which the Royal Mail was a founding member.

Potential benefits

Claims made for removal of this border include the freeing up of business-to-business transactions, a reduction in cost and the ability for a company to be more agile. Taken to its furthest extent, an organisation could operate securely directly on the Internet. Operating without a hardened border frees organizations to collaborate, utilizing solutions based on a Collaboration Oriented Architecture framework.

Relevance to other computing areas

The work on de-perimeterisation, particularly by the Jericho Forum, has fed into two key areas of computing:

Originally described as "computing outside your perimeter", this is now referred to as cloud computing.
The zero trust security model is the architectural response to the problem statement posed by de-perimeterisation.

Variations

More recently, the term has been used to describe the erosion of perimeters that results both from entropy and from the deliberate activities of individuals within organizations who usurp perimeters, often for well-intentioned reasons. The Jericho Forum paper named "Collaboration Oriented Architecture" refers to this trend of de-perimeterisation as a problem:

Problem: The traditional electronic boundary between a corporate (or 'private') network and the Internet is breaking down in the trend which we have called de-perimeterisation.

Variations of the term have been used to describe aspects of de-perimeterisation, such as:

"You've already been de-perimeterised", describing the Internet worms, viruses and other exploits which are designed to bypass the border, typically using web and e-mail.
"Re-perimeterisation", describing the interim step of moving perimeters to protect groups of computer servers or a data centre, rather than sitting at the network perimeter.
"Macro-perimeterisation", the act of moving the security perimeter into "the cloud" (see Security as a Service); examples of such security services in the cloud are email cleaning services or proxy filtering services provided by towers in the internet.
"Micro-perimeterisation", moving the security perimeter to surround the data itself; interim steps might include moving the perimeter around individual computer systems or an individual application (consisting of a cluster of computers).

Notes

References

Data security
De-perimeterisation
[ "Engineering" ]
619
[ "Cybersecurity engineering", "Data security" ]