Dataset columns: id (int64, values 580 to 79M); url (string, lengths 31 to 175); text (string, lengths 9 to 245k); source (string, lengths 1 to 109); categories (string, 160 classes); token_count (int64, values 3 to 51.8k)
4,182,759
https://en.wikipedia.org/wiki/High-Speed%20Serial%20Interface
The High-Speed Serial Interface (HSSI) is a differential ECL serial interface standard developed by Cisco Systems and T3plus Networking primarily for use in WAN router connections. It is capable of speeds up to 52 Mbit/s with cables up to in length. While HSSI uses a 50-pin connector physically similar to that used by SCSI-2, it requires a cable with an impedance of 110 Ω (as opposed to the 75 Ω of a SCSI-2 cable). The physical layer of the standard is defined by EIA-613 and the electrical layer by EIA-612. It has been supported by the Linux kernel since version 3.4-rc2. References External links What is HSSI? HSSI Description Serial buses
High-Speed Serial Interface
Technology
158
49,047,722
https://en.wikipedia.org/wiki/Rhizopogon%20parvisporus
Rhizopogon parvisporus is a small, truffle-like fungus in the family Rhizopogonaceae. Found in Canada, it was described as new to science in 1962 by Constance Bowerman, from collections made in Newfoundland. Description The roughly spherical to irregularly shaped fruitbodies of the fungus measure in diameter when fresh, although they tend to shrink when dry. They have a hard, wrinkled surface that is yellowish brown or lighter in color. The peridium is 300–570 μm thick. The spores have the shape of narrow ellipsoids, and rarely exceed 5 μm in length. They often contain two oil droplets, but occasionally have three or four. Habitat and distribution The fungus is only known from Fort Smith (Northwest Territories), and Newfoundland. In the former location, it was found along a riverbank in spruce woods, while in the latter it grew on mossy slopes in thickets of alder and fir. References External links Fungi of Canada Rhizopogonaceae Fungi described in 1992 Fungi without expected TNC conservation status Fungus species
Rhizopogon parvisporus
Biology
224
9,792,866
https://en.wikipedia.org/wiki/Bruker
Bruker Corporation is an American manufacturer of scientific instruments for molecular and materials research, as well as for industrial and applied analysis. It is headquartered in Billerica, Massachusetts, and is the publicly traded parent company of the Bruker Scientific Instruments (Bruker AXS, Bruker BioSpin, Bruker Daltonics and Bruker Optics) and Bruker Energy & Supercon Technologies (BEST) divisions. In April 2010, Bruker created a Chemical Analysis Division (headquartered in Fremont, CA) under the Bruker Daltonics subsidiary. This division contains three former Varian product lines: ICP-MS systems, laboratory gas chromatography (GC), and GC-triple quadrupole mass spectrometers (originally designed by Bear Instruments and acquired by Varian in 2001). In 2012, it sponsored the Fritz Feigl Prize, and since 1999 the company has also sponsored the Günther Laukien Prize. History The company was founded on September 7, 1960, in Karlsruhe, Germany, as Bruker-Physik AG by five people, one of them being Günther Laukien, who was a professor at the University of Karlsruhe at the time. The name Bruker originates from co-founder Emil Bruker, as Günther Laukien himself was formally not allowed to commercialize his research while being a professor. At the time, Bruker produced nuclear magnetic resonance (NMR) and EMR spectroscopy equipment. In the early 1960s, the company had around 60 employees and was growing rapidly. One of its early successes was the HFX 90 NMR spectroscopy system, which had three independent channels and was the first NMR system built using only semiconductor transistors. In 1969, Bruker launched the first commercial Fourier transform NMR spectroscopy system (FT-NMR), and in the 1970s the company was the first to commercialize a superconducting FT-NMR. Later, the company would expand its product range with MRI, FTIR and FT-Raman spectrometers and with mass spectrometers. In 1968, Bruker shipped NMR systems to Yale University in Connecticut. After that, demand from the US grew, so Bruker opened an office in Elmsford, New York, which marked the start of its US activities. In 2008, after a corporate reorganization lasting eight years, all divisions were merged into a unified Bruker Corporation. Günther Laukien died in 1997; one of his four sons, Frank Laukien, is currently the CEO of Bruker. Another son, Jörg C. Laukien, also works for the company. Another son, Dirk D. Laukien, is a former company executive. Acquisitions Bruker acquisitions include GE NMR Instruments (1992), Siemens AXS (1997), Nonius (2001), MacScience (2002), Vacuumschmelze Hanau (2003), Röntec (2005), SOCABIM (2005), PGT (2005), Keymaster (2006), Quantron (2006), JuWe (2008), SIS (2008), ACCEL (2009), Michrom Bioresources (2011), Skyscan (2012), Prairie Technologies (2013), Oncovision (Preclinical PET imaging business, 2016), Oxford Instruments Superconducting Technology (2016), Hysitron Inc. (2017), XGLab (2017), Luxendo (2017), Alicona (2018), PMOD Technologies LLC (2019), Optimal Group (2022), Neurescence Inc (2022), and MIRO Analytical (majority 2023). Other In 1964, the company bought the NMR division of the Swiss firm Trüb-Täuber. Bruker made several offers to take over its supplier Oxford Instruments during the 1970s, but after almost a decade of negotiations, an acquisition was eventually rejected by Oxford Instruments. In 1997, the analytical X-ray division of Siemens was acquired by Bruker. In 2010, Bruker bought three product lines from Agilent, which Agilent had acquired from Varian.
These included mass spectrometry and gas chromatography instruments. They have since divested these products to Scion Instruments, with the exception of the triple quadrupole. In 2012, Bruker bought parts of Carestream Health, including their in-vivo imaging portfolio and related aspects. In 2019, Bruker bought Alicona, known for production of metrology equipment based on focus variation, to extend its analytics business in the industrial market. In November 2022, it was announced Bruker had acquired the Mountain View-headquartered miniaturized microscope / miniscope company, Inscopix, Inc. Products Bruker develops and delivers a wide variety of professional and scientific analysis devices, including mass spectrometers, single-crystal and powder X-ray diffractometers, X-ray tomography devices, NMR spectroscopy devices, fluorescence microscopes, Raman spectroscopes, atomic-force microscopes, and profilometers. Notable product use Bruker products are used globally in a variety of situations. The National High Magnetic Field Laboratory at Florida State University selected Bruker to build the world's first 21.0 tesla FT-ICR MS. The Total Carbon Column Observing Network uses high-resolution FT-IR spectrometers made by Bruker to measure various greenhouse gases across the globe. Awards In May 2004, Frost & Sullivan selected the company's Bruker Daltonics subsidiary for its 2004 Product Line Innovation Award for the Life Sciences. Bruker Daltonics received this award for its innovative development of sophisticated mass spectrometers. References External links Companies based in Billerica, Massachusetts Technology companies established in 1960 Companies listed on the Nasdaq Instrument-making corporations Laboratory equipment manufacturers Research support companies 1960 establishments in West Germany Life science companies based in Massachusetts Companies in the S&P 400
Bruker
Biology
1,236
4,212,710
https://en.wikipedia.org/wiki/Duvenhage%20lyssavirus
Duvenhage lyssavirus (DUVV) is a member of the genus Lyssavirus, which also contains the rabies virus. The virus was discovered in 1970, when a South African farmer (after whom the virus is named) died of a rabies-like encephalitic illness, after being bitten by a bat. In 2006, Duvenhage virus killed a second person, when a man was scratched by a bat in North West Province, South Africa, 80 km from the 1970 infection. He developed a rabies-like illness 27 days after the bat encounter, and died 14 days after the onset of illness. A 34-year-old woman who died in Amsterdam on December 8, 2007, was the third recorded fatality. She had been scratched on the nose by a small bat while travelling through Kenya in October 2007, and was admitted to hospital four weeks later with rabies-like symptoms. Microbats are believed to be the natural reservoir of Duvenhage virus. It has been isolated twice from insectivorous bats, in 1981 from Miniopterus schreibersi, and in 1986 from Nycteris thebaica, and the virus is closely related to another bat-associated lyssavirus endemic to Africa, Lagos bat lyssavirus. References Lyssaviruses
Duvenhage lyssavirus
Biology
278
36,988,616
https://en.wikipedia.org/wiki/RKA%20Mission%20Control%20Center
The RKA Mission Control Center (), also known by its acronym TsUP () or by its radio callsign Mission Control Moscow, is the mission control center of Roscosmos. It is located in Korolyov, Moscow Oblast, on Pionerskaya Street near the S.P. Korolev Rocket and Space Corporation Energia plant. It contains an active control room for the International Space Station. It also houses a memorial control room for the Mir space station, where the last few orbits of Mir before it burned up in the atmosphere are shown on the display screens. TsUP provides practical flight control for spacecraft of several different classes: crewed orbital complexes, spaceships, space probes, and civilian and scientific satellites. At the same time, it carries out scientific and engineering research and development of methods, algorithms, and tools for control problems, ballistics, and navigation. Notes References External links Science and technology in Russia Space program of Russia Roscosmos Soviet and Russian space program locations
RKA Mission Control Center
Astronomy
207
4,313,931
https://en.wikipedia.org/wiki/Fuel%20economy%20in%20automobiles
The fuel economy of an automobile relates to the distance traveled by a vehicle and the amount of fuel consumed. Consumption can be expressed in terms of the volume of fuel to travel a distance, or the distance traveled per unit volume of fuel consumed. Since fuel consumption of vehicles is a significant factor in air pollution, and since the importation of motor fuel can be a large part of a nation's foreign trade, many countries impose requirements for fuel economy. Different methods are used to approximate the actual performance of the vehicle. The energy in fuel is required to overcome various losses (wind resistance, tire drag, and others) encountered while propelling the vehicle, and in providing power to vehicle systems such as ignition or air conditioning. Various strategies can be employed to reduce losses at each of the conversions between the chemical energy in the fuel and the kinetic energy of the vehicle. Driver behavior can affect fuel economy; maneuvers such as sudden acceleration and heavy braking waste energy. Electric cars do not directly burn fuel, and so do not have fuel economy per se, but equivalence measures, such as miles per gallon gasoline equivalent, have been created to attempt to compare them. Quantities and units of measure The fuel efficiency of motor vehicles can be expressed in multiple ways: Fuel consumption is the amount of fuel used per unit distance; for example, litres per 100 kilometres (L/100 km). The lower the value, the more economic a vehicle is (the less fuel it needs to travel a certain distance); this is the measure generally used across Europe (except the UK, Denmark and the Netherlands – see below), New Zealand, Australia, and Canada, as well as in Uruguay, Paraguay, Guatemala, Colombia, China, Madagascar, and the post-Soviet states. Fuel economy is the distance travelled per unit volume of fuel used; for example, kilometres per litre (km/L) or miles per gallon (MPG), where 1 MPG (imperial) ≈ 0.354006 km/L. The higher the value, the more economic a vehicle is (the more distance it can travel with a certain volume of fuel). This measure is popular in the US and the UK (mpg), but in Europe, India, Japan, South Korea and Latin America the metric unit km/L is used instead. The formula for converting to miles per US gallon (3.7854 L) from L/100 km is 235.215/x, where x is the value in L/100 km. For miles per Imperial gallon (4.5461 L) the formula is 282.481/x. In parts of Europe, the two standard measuring cycles for the "litre/100 km" value are "urban" traffic with speeds up to 50 km/h from a cold start, and then "extra urban" travel at various speeds up to 120 km/h which follows the urban test. A combined figure is also quoted, showing the total fuel consumed in both tests divided by the total distance traveled in both tests. Fuel economy can be expressed in two ways: Units of fuel per fixed distance: Generally expressed in liters per 100 kilometers (L/100 km), used in most European countries, Canada, China, South Africa, Australia and New Zealand. Irish law allows for the use of miles per imperial gallon alongside liters per 100 kilometers. Liters per 100 kilometers may be used alongside miles per imperial gallon in the UK. The window sticker on new US cars displays the vehicle's fuel consumption in US gallons per 100 miles, in addition to the traditional mpg number. A lower number means more efficient, while a higher number means less efficient.
Units of distance per fixed fuel unit: Miles per gallon (mpg) is commonly used in the United States, the United Kingdom, and Canada (alongside L/100 km). Kilometers per liter (km/L) is more commonly used elsewhere in the Americas, Asia, parts of Africa and Oceania. In the Levant, km/20 L is used, known as kilometers per tanaka, a metal container which has a volume of twenty liters. When mpg is used, it is necessary to identify the type of gallon: the imperial gallon is 4.54609 liters, and the U.S. gallon is 3.785 liters. When using a measure expressed as distance per fuel unit, a higher number means more efficient, while a lower number means less efficient. Conversions of units: Statistics While the thermal efficiency (mechanical output to chemical energy in fuel) of petroleum engines has increased since the beginning of the automotive era, this is not the only factor in fuel economy. The design of the automobile as a whole and its usage pattern affect fuel economy. Published fuel economy is subject to variation between jurisdictions due to variations in testing protocols. One of the first studies to determine fuel economy in the United States was the Mobil Economy Run, an event that took place every year from 1936 (except during World War II) to 1968. It was designed to provide realistic fuel efficiency numbers during a coast-to-coast test on real roads and with regular traffic and weather conditions. The Mobil Oil Corporation sponsored it and the United States Auto Club (USAC) sanctioned and operated the run. In more recent studies, the average fuel economy for a new passenger car in the United States improved from 17 mpg (13.8 L/100 km) in 1978 to more than 22 mpg (10.7 L/100 km) in 1982. The average fuel economy for new 2020 model year cars, light trucks and SUVs in the United States was . 2019 model year cars (excluding EVs) classified as "midsize" by the US EPA ranged from 12 to 56 mpgUS (20 to 4.2 L/100 km). However, due to environmental concerns caused by CO2 emissions, new EU regulations are being introduced to reduce the average emissions of cars sold beginning in 2012, to 130 g/km of CO2, equivalent to 4.5 L/100 km (52 mpgUS, 63 mpgimp) for a diesel-fueled car, and 5.0 L/100 km (47 mpgUS, 56 mpgimp) for a gasoline (petrol)-fueled car. The average consumption across the fleet is not immediately affected by new vehicle fuel economy: for example, Australia's car fleet average in 2004 was 11.5 L/100 km (20.5 mpgUS), compared with the average new car consumption in the same year of 9.3 L/100 km (25.3 mpgUS). Speed and fuel economy studies Fuel economy at steady speeds with selected vehicles was studied in 2010. The most recent study indicates greater fuel efficiency at higher speeds than earlier studies; for example, some vehicles achieve better fuel economy at rather than at , although not their best economy, such as the 1994 Oldsmobile Cutlass Ciera with the LN2 2.2L engine, which has its best economy at (), and gets better economy at than at ( vs ). The proportion of driving on high speed roadways varies from 4% in Ireland to 41% in the Netherlands. When the US National Maximum Speed Law's speed limit was mandated from 1974 to 1995, there were complaints that fuel economy could decrease instead of increase. The 1997 Toyota Celica got better fuel-efficiency at than it did at ( vs ), although even better at than at ( vs ), and its best economy () at only . Other vehicles tested had from 1.4 to 20.2% better fuel-efficiency at vs. .
Their best economy was reached at speeds of (see graph). Officials hoped that the limit, combined with a ban on ornamental lighting, no gasoline sales on Sunday, and a 15% cut in gasoline production, would reduce total gasoline consumption by 200,000 barrels a day, representing a 2.2% drop from annualized 1973 gasoline consumption levels. This was partly based on a belief that cars achieve maximum efficiency between 40 and 50 mph (65 and 80 km/h) and that trucks and buses were most efficient at . In 1998, the U.S. Transportation Research Board footnoted an estimate that the 1974 National Maximum Speed Limit (NMSL) reduced fuel consumption by 0.2 to 1.0 percent. Rural interstates, the roads most visibly affected by the NMSL, accounted for 9.5% of U.S. vehicle-miles traveled in 1973, but such free-flowing roads typically provide more fuel-efficient travel than conventional roads. Discussion of statistics A reasonably modern European supermini and many mid-size cars, including station wagons, may manage motorway travel at 5 L/100 km (47 mpg US/56 mpg imp) or 6.5 L/100 km in city traffic (36 mpg US/43 mpg imp), with carbon dioxide emissions of around 140 g/km. An average North American mid-size car travels 21 mpg (US) (11 L/100 km) city, 27 mpg (US) (9 L/100 km) highway; a full-size SUV usually travels 13 mpg (US) (18 L/100 km) city and 16 mpg (US) (15 L/100 km) highway. Pickup trucks vary considerably; whereas a four-cylinder light pickup can achieve 28 mpg (8 L/100 km), a V8 full-size pickup with extended cabin only travels 13 mpg (US) (18 L/100 km) city and 15 mpg (US) (15 L/100 km) highway. The average fuel economy for all vehicles on the road is higher in Europe than in the United States because the higher cost of fuel changes consumer behaviour. In the UK, a gallon of gas without tax would cost US$1.97, but with taxes cost US$6.06 in 2005. The average cost in the United States was US$2.61. European-built cars are generally more fuel-efficient than US vehicles. While Europe has many higher-efficiency diesel cars, European gasoline vehicles are on average also more efficient than gasoline-powered vehicles in the USA. Most European vehicles cited in the CSI study run on diesel engines, which tend to achieve greater fuel efficiency than gas engines. Selling those cars in the United States is difficult because of emission standards, notes Walter McManus, a fuel economy expert at the University of Michigan Transportation Research Institute. "For the most part, European diesels don't meet U.S. emission standards", McManus said in 2007. Another reason why many European models are not marketed in the United States is that labor unions object to having the Big Three import any new foreign-built models, regardless of fuel economy, while laying off workers at home. An example of European cars' fuel economy capabilities is the microcar Smart Fortwo cdi, which can achieve up to 3.4 L/100 km (69.2 mpg US) using a turbocharged three-cylinder 41 bhp (30 kW) diesel engine. The Fortwo is produced by Daimler AG and is only sold by one company in the United States. Furthermore, the world record in fuel economy of production cars is held by the Volkswagen Group, with special production models (labeled "3L") of the Volkswagen Lupo and the Audi A2 consuming as little as . Diesel engines generally achieve greater fuel efficiency than petrol (gasoline) engines. Passenger car diesel engines have energy efficiency of up to 41% but more typically 30%, and petrol engines of up to 37.3%, but more typically 20%.
A common margin is 25% more miles per gallon for an efficient turbodiesel. For example, the current model Skoda Octavia, using Volkswagen engines, has a combined European fuel efficiency of for the petrol engine and for the — and heavier — diesel engine. The higher compression ratio is helpful in raising the energy efficiency, but diesel fuel also contains approximately 10% more energy per unit volume than gasoline, which contributes to the reduced fuel consumption for a given power output. In 2002, the United States had 85,174,776 trucks, and averaged . Large trucks, over , averaged . The average economy of automobiles in the United States in 2002 was . By 2010 this had increased to . Average fuel economy in the United States gradually declined until 1973, when it reached a low of , and has gradually increased since, as a result of higher fuel costs. A study indicates that a 10% increase in gas prices will eventually produce a 2.04% increase in fuel economy. One method used by car makers to increase fuel efficiency is lightweighting, in which lighter-weight materials are substituted to improve engine performance and handling. Differences in testing standards Identical vehicles can have varying fuel consumption figures listed depending upon the testing methods of the jurisdiction. Lexus IS 250 – petrol 2.5 L 4GR-FSE V6, 204 hp (153 kW), 6-speed automatic, rear-wheel drive Australia (L/100 km) – 'combined' 9.1, 'urban' 12.7, 'extra-urban' 7.0 Canada (L/100 km) – 'combined' 9.6, 'city' 11.1, 'highway' 7.8 European Union (L/100 km) – 'combined' 8.9, 'urban' 12.5, 'extra-urban' 6.9 United States (L/100 km) – 'combined' 9.8, 'city' 11.2, 'highway' 8.1 Energy considerations Since the total force opposing the vehicle's motion (at constant speed) multiplied by the distance through which the vehicle travels represents the work that the vehicle's engine must perform, the study of fuel economy (the amount of energy consumed per unit of distance traveled) requires a detailed analysis of the forces that oppose a vehicle's motion. In terms of physics, force equals the rate at which the amount of work generated (energy delivered) varies with the distance traveled, or F = dW/dd, where W is the work (energy) delivered and d is the distance traveled. Note: The amount of work generated by the vehicle's power source (energy delivered by the engine) would be exactly proportional to the amount of fuel energy consumed by the engine if the engine's efficiency were the same regardless of power output, but this is not necessarily the case due to the operating characteristics of the internal combustion engine. For a vehicle whose source of power is a heat engine (an engine that uses heat to perform useful work), the amount of fuel energy that a vehicle consumes per unit of distance (level road) depends upon: The thermodynamic efficiency of the heat engine; Frictional losses within the drivetrain; Rolling resistance within the wheels and between the road and the wheels; Non-motive subsystems powered by the engine, such as air conditioning, engine cooling, and the alternator; Aerodynamic drag from moving through air; Energy converted by frictional brakes into waste heat, or losses from regenerative braking in hybrid vehicles; Fuel consumed while the engine is not providing power but still running, such as while idling, minus the subsystem loads. Ideally, a car traveling at a constant velocity on level ground in a vacuum with frictionless wheels could travel at any speed without consuming any energy beyond what is needed to get the car up to speed.
Less ideally, any vehicle must expend energy on overcoming road load forces, which consist of aerodynamic drag, tire rolling resistance, and inertial energy that is lost when the vehicle is decelerated by friction brakes. With ideal regenerative braking, the inertial energy could be completely recovered, but there are few options for reducing aerodynamic drag or rolling resistance other than optimizing the vehicle's shape and the tire design. Road load energy, or the energy demanded at the wheels, can be calculated by evaluating the vehicle equation of motion over a specific driving cycle. The vehicle powertrain must then provide this minimum energy to move the vehicle and will lose a large amount of additional energy in the process of converting fuel energy into work and transmitting it to the wheels. Overall, the sources of energy loss in moving a vehicle may be summarized as follows: Engine efficiency (20–30%), which varies with engine type, the mass of the automobile and its load, and engine speed (usually measured in RPM). Aerodynamic drag force, which increases roughly with the square of the car's speed; note that drag power increases with the cube of the car's speed. Rolling friction. Braking, although regenerative braking captures some of the energy that would otherwise be lost. Losses in the transmission. Manual transmissions can be up to 94% efficient, whereas older automatic transmissions may be as low as 70% efficient. Automated manual transmissions, which have the same mechanical internals as conventional manual transmissions, will give the same efficiency as a pure manual gearbox, plus the added benefit of intelligent selection of optimal shifting points, and/or automated clutch control but manual shifting, as with older semi-automatic transmissions. Air conditioning. The power required for the engine to turn the compressor decreases fuel-efficiency, though only when in use. This may be offset by the reduced drag of the vehicle compared with driving with the windows down. The efficiency of AC systems gradually deteriorates due to dirty filters etc.; regular maintenance prevents this. The extra mass of the air conditioning system will cause a slight increase in fuel consumption. Power steering. The older hydraulic power steering systems are powered by a hydraulic pump constantly engaged to the engine. Power assistance required for steering is inversely proportional to the vehicle speed, so the constant load on the engine from a hydraulic pump reduces fuel efficiency. More modern designs improve fuel efficiency by only activating the power assistance when needed; this is done by using either direct electrical power steering assistance or an electrically powered hydraulic pump. Cooling. The older cooling systems used a constantly engaged mechanical fan to draw air through the radiator at a rate directly related to the engine speed. This constant load reduces efficiency. More modern systems use electrical fans to draw additional air through the radiator when extra cooling is required. Electrical systems. Headlights, battery charging, active suspension, circulating fans, defrosters, media systems, speakers, and other electronics can also significantly increase fuel consumption, as the energy to power these devices causes an increased load on the alternator. Since alternators are commonly only 40–60% efficient, the added load from electronics on the engine can be as high as at any speed, including idle.
In the FTP 75 cycle test, a 200-watt load on the alternator reduces fuel efficiency by 1.7 mpg. Headlights, for example, consume 110 watts on low and up to 240 watts on high. These electrical loads can cause much of the discrepancy between real-world and EPA tests, which only include the electrical loads required to run the engine and basic climate control. Standby. The energy needed to keep the engine running while it is not providing power to the wheels, i.e., when stopped, coasting or braking. Fuel-efficiency decreases from electrical loads are most pronounced at lower speeds because most electrical loads are constant while engine load increases with speed. So at a lower speed, a higher proportion of engine horsepower is used by electrical loads. Hybrid cars see the greatest effect on fuel-efficiency from electrical loads because of this proportional effect. Fuel economy-boosting technologies Engine-specific technology Other vehicle technologies Future technologies Technologies that may improve fuel efficiency, but are not yet on the market, include: HCCI (Homogeneous Charge Compression Ignition) combustion Scuderi engine Compound engines Two-stroke diesel engines High-efficiency gas turbine engines BMW's Turbosteamer – using the heat from the engine to spin a mini turbine to generate power Vehicle electronic control systems that automatically maintain distances between vehicles on motorways/freeways that reduce ripple-back braking, and consequent re-acceleration. Time-optimized piston path, to capture energy from hot gases in the cylinders when they are at their highest temperatures Stirling hybrid battery vehicle Many aftermarket consumer products exist that are purported to increase fuel economy; many of these claims have been discredited. In the United States, the Environmental Protection Agency maintains a list of devices that have been tested by independent laboratories and makes the test results available to the public. Fuel economy maximizing behaviors Governments, various environmentalist organizations, and companies like Toyota and Shell Oil Company have historically urged drivers to maintain adequate air pressure in tires and to practice careful acceleration/deceleration habits. Keeping track of fuel efficiency stimulates fuel economy-maximizing behavior. A five-year partnership between Michelin and Anglian Water shows that 60,000 liters of fuel can be saved by maintaining correct tire pressure. The Anglian Water fleet of 4,000 vans and cars is now lasting its full lifetime. This shows the impact that tire pressures have on fuel efficiency. Fuel economy as part of quality management regimes Environmental management systems such as EMAS, as well as good fleet management, include record-keeping of the fleet fuel consumption. Quality management uses those figures to steer the measures acting on the fleets. This is a way to check whether procurement, driving, and maintenance in total have contributed to changes in the fleet's overall consumption. Fuel economy standards and testing procedures * highway ** combined Australia From October 2008, all new cars had to be sold with a sticker on the windscreen showing the fuel consumption and the CO2 emissions. Fuel consumption figures are expressed as urban, extra-urban and combined, measured according to ECE Regulations 83 and 101 – which are based on the European driving cycle; previously, only the combined number was given.
Australia also uses a star rating system, from one to five stars, that combines greenhouse gases with pollution, rating each from 0 to 10, with 10 being best. To get 5 stars, a combined score of 16 or better is needed, so a car with a 10 for economy (greenhouse) and a 6 for emission, or 6 for economy and 10 for emission, or anything in between, would get the highest 5-star rating. The lowest-rated car is the Ssangyong Korando with automatic transmission, with one star, while the highest-rated was the Toyota Prius hybrid. The Fiat 500, Fiat Punto and Fiat Ritmo as well as the Citroen C3 also received 5 stars. The greenhouse rating depends on the fuel economy and the type of fuel used. A greenhouse rating of 10 requires 60 or fewer grams of CO2 per km, while a rating of zero is more than 440 g/km CO2. The highest greenhouse rating of any 2009 car listed is the Toyota Prius, with 106 g/km CO2 and . Several other cars also received the same rating of 8.5 for greenhouse. The lowest rated was the Ferrari 575 at 499 g/km CO2 and . The Bentley also received a zero rating, at 465 g/km CO2. The best fuel economy of any year is the 2004–2005 Honda Insight, at . Canada Vehicle manufacturers follow a controlled laboratory testing procedure to generate the fuel consumption data that they submit to the Government of Canada. This controlled method of fuel consumption testing, including the use of standardized fuels, test cycles and calculations, is used instead of on-road driving to ensure that all vehicles are tested under identical conditions and that the results are consistent and repeatable. Selected test vehicles are "run in" for about 6,000 km before testing. The vehicle is then mounted on a chassis dynamometer programmed to take into account the aerodynamic efficiency, weight and rolling resistance of the vehicle. A trained driver runs the vehicle through standardized driving cycles that simulate trips in the city and on the highway. Fuel consumption ratings are derived from the emissions generated during the driving cycles. The 5-cycle test: The city test simulates urban driving in stop-and-go traffic with an average speed of 34 km/h and a top speed of 90 km/h. The test runs for approximately 31 minutes and includes 23 stops. The test begins from a cold engine start, which is similar to starting a vehicle after it has been parked overnight during the summer. The final phase of the test repeats the first eight minutes of the cycle but with a hot engine start. This simulates restarting a vehicle after it has been warmed up, driven and then stopped for a short time. Over five minutes of test time are spent idling, to represent waiting at traffic lights. The ambient temperature of the test cell starts at 20 °C and ends at 30 °C. The highway test simulates a mixture of open highway and rural road driving, with an average speed of 78 km/h and a top speed of 97 km/h. The test runs for approximately 13 minutes and does not include any stops. The test begins from a hot engine start. The ambient temperature of the test cell starts at 20 °C and ends at 30 °C. In the cold temperature operation test, the same driving cycle is used as in the standard city test, except that the ambient temperature of the test cell is set to −7 °C. In the air conditioning test, the ambient temperature of the test cell is raised to 35 °C. The vehicle's climate control system is then used to lower the internal cabin temperature. Starting with a warm engine, the test averages 35 km/h and reaches a maximum speed of 88 km/h.
Five stops are included, with idling occurring 19% of the time. The high speed/quick acceleration test averages 78 km/h and reaches a top speed of 129 km/h. Four stops are included and brisk acceleration maximizes at a rate of 13.6 km/h per second. The engine begins warm and air conditioning is not used. The ambient temperature of the test cell is constantly 25 °C. Tests 1, 3, 4, and 5 are averaged to create the city driving fuel consumption rate. Tests 2, 4, and 5 are averaged to create the highway driving fuel consumption rate. Europe In the European Union, passenger vehicles are commonly tested using two drive cycles, and corresponding fuel economies are reported as "urban" and "extra-urban", in liters per 100 km and (in the UK) in miles per imperial gallon. The urban economy is measured using the test cycle known as ECE-15, first introduced in 1970 by EC Directive 70/220/EWG and finalized by EEC Directive 90/C81/01 in 1999. It simulates a 4,052 m (2.518 mile) urban trip at an average speed of 18.7 km/h (11.6 mph) and at a maximum speed of 50 km/h (31 mph). The extra-urban driving cycle or EUDC lasts 400 seconds (6 minutes 40 seconds) at an average speed of 62.6 km/h (39 mph) and a top speed of 120 km/h (74.6 mph). EU fuel consumption numbers are often considerably lower than corresponding US EPA test results for the same vehicle. For example, the 2011 Honda CR-Z with a six-speed manual transmission is rated 6.1/4.4 L/100 km in Europe and 7.6/6.4 L/100 km (31/37 mpg) in the United States. In the European Union, advertising has to show carbon dioxide (CO2) emission and fuel consumption data in a clear way as described in the UK Statutory Instrument 2004 No 1661. Since September 2005 a color-coded "Green Rating" sticker has been available in the UK, which rates fuel economy by CO2 emissions: A: <= 100 g/km, B: 100–120, C: 121–150, D: 151–165, E: 166–185, F: 186–225, and G: 226+. Depending on the type of fuel used, for gasoline A corresponds to about and G about . Ireland has a very similar label, but the ranges are slightly different, with A: <= 120 g/km, B: 121–140, C: 141–155, D: 156–170, E: 171–190, F: 191–225, and G: 226+. From 2020, the EU requires manufacturers to average 95 g/km emissions or less, or pay an excess emissions premium. In the UK, the ASA (Advertising Standards Authority) has claimed that fuel consumption figures are misleading. This is often the case with European vehicles, as the MPG (miles per gallon) figures that can be advertised often do not match "real world" driving. The ASA has said that car manufacturers can use "cheats" to prepare their vehicles for their compulsory fuel efficiency and emissions tests in a way set out to make themselves look as "clean" as possible. This practice is common in gasoline and diesel vehicle tests, but hybrid and electric vehicles are not immune, as manufacturers apply these techniques to fuel efficiency. Car experts also assert that the official MPG figures given by manufacturers do not represent the true MPG values from real-world driving. Websites have been set up to show the real-world MPG figures, based on crowd-sourced data from real users, vs the official MPG figures. The major loopholes in the current EU tests allow car manufacturers a number of "cheats" to improve results. Car manufacturers can: Disconnect the alternator, so that no energy is used to recharge the battery; Use special lubricants that are not used in production cars, in order to reduce friction; Turn off all electrical gadgets, i.e.
Air Con/Radio; Adjust brakes or even disconnect them to reduce friction; Tape up cracks between body panels and windows to reduce air resistance; Remove wing mirrors. According to the results of a 2014 study by the International Council on Clean Transportation (ICCT), the gap between official and real-world fuel-economy figures in Europe had risen to about 38% in 2013 from 10% in 2001. The analysis found that for private cars, the difference between on-road and official values rose from around 8% in 2001 to 31% in 2013, and 45% for company cars in 2013. The report is based on data from more than half a million private and company vehicles across Europe. The analysis was prepared by the ICCT together with the Netherlands Organization for Applied Scientific Research (TNO), and the German Institut für Energie- und Umweltforschung Heidelberg (IFEU). In the 2018 update of the ICCT data, the difference between the official and real figures was again 38%. Japan The evaluation criteria used in Japan reflect commonly encountered driving conditions, as the typical Japanese driver does not drive as fast as drivers in other regions internationally (see Speed limits in Japan). 10–15 mode The 10–15 mode driving cycle test is the official fuel economy and emission certification test for new light duty vehicles in Japan. Fuel economy is expressed in km/L (kilometers per liter) and emissions are expressed in g/km. The test is carried out on a dynamometer and consists of 25 tests which cover idling, acceleration, steady running and deceleration, and simulates typical Japanese urban and/or expressway driving conditions. The running pattern begins with a warm start, lasts for 660 seconds (11 minutes) and runs at speeds up to . The distance of the cycle is , average speed of , and duration 892 seconds (14.9 minutes), including the initial 15 mode segment. JC08 A new, more demanding test, called the JC08, was established in December 2006 for Japan's new standard that goes into effect in 2015, but it is already being used by several car manufacturers for new cars. The JC08 test is significantly longer and more rigorous than the 10–15 mode test. The running pattern with JC08 stretches out to 1200 seconds (20 minutes), there are both cold and warm start measurements, and the top speed is . The economy ratings of the JC08 are lower than those of the 10–15 mode cycle, but they are expected to be more representative of real-world driving. The Toyota Prius became the first car to meet Japan's new 2015 Fuel Economy Standards measured under the JC08 test. New Zealand Starting on 7 April 2008, all cars of up to 3.5 tonnes GVW sold other than by private sale need to have a fuel economy sticker applied (if available) that shows a rating from one half star to six stars, with the most economic cars having the most stars and the more fuel-hungry cars the least, along with the fuel economy in L/100 km and the estimated annual fuel cost for driving 14,000 km (at present fuel prices). The stickers must also appear on vehicles to be leased for more than 4 months. All new cars currently rated range from to and received respectively from 4.5 to 5.5 stars. Saudi Arabia The Kingdom of Saudi Arabia announced new light-duty vehicle fuel economy standards in November 2014, which became effective 1 January 2016 and will be fully phased in by 1 January 2018 (Saudi Standards regulation SASO-2864). A review of the targets will be carried out by December 2018, at which time targets for 2021–2025 will be set.
United States US Energy Tax Act The Energy Tax Act of 1978 in the US established a gas guzzler tax on the sale of new model year vehicles whose fuel economy fails to meet certain statutory levels. The tax applies only to cars (not trucks) and is collected by the IRS. Its purpose is to discourage the production and purchase of fuel-inefficient vehicles. The tax was phased in over ten years with rates increasing over time. It applies only to manufacturers and importers of vehicles, although presumably some or all of the tax is passed along to automobile consumers in the form of higher prices. Only new vehicles are subject to the tax, so no tax is imposed on used car sales. The tax is graduated to apply a higher tax rate for less-fuel-efficient vehicles. To determine the tax rate, manufacturers test all the vehicles at their laboratories for fuel economy. The US Environmental Protection Agency confirms a portion of those tests at an EPA lab. In some cases, this tax may apply only to certain variants of a given model; for example, the 2004–2006 Pontiac GTO (captive import version of the Holden Monaro) did incur the tax when ordered with the four-speed automatic transmission, but did not incur the tax when ordered with the six-speed manual transmission. EPA testing procedure through 2007 Two separate fuel economy tests simulate city driving and highway driving: the "city" driving program, or Urban Dynamometer Driving Schedule (UDDS) or FTP-72, is defined in and consists of starting with a cold engine and making 23 stops over a period of 31 minutes for an average speed of 20 mph (32 km/h) and with a top speed of 56 mph (90 km/h). The "highway" program, or Highway Fuel Economy Driving Schedule (HWFET), is defined in and uses a warmed-up engine and makes no stops, averaging 48 mph (77 km/h) with a top speed of 60 mph (97 km/h) over a distance. A weighted average of city (55%) and highway (45%) fuel economies is used to determine the combined rating and guzzler tax. This rating is what is also used for light-duty vehicle corporate average fuel economy regulations. The procedure has been updated to FTP-75, adding a "hot start" cycle which repeats the "cold start" cycle after a 10-minute pause. Because EPA figures had almost always indicated better efficiency than real-world fuel-efficiency, the EPA has modified the method starting with 2008. Updated estimates are available for vehicles back to the 1985 model year. EPA testing procedure: 2008 and beyond US EPA altered the testing procedure effective MY2008, adding three new Supplemental Federal Test Procedure (SFTP) tests to include the influence of higher driving speed, harder acceleration, colder temperature and air conditioning use. SFTP US06 is a high speed/quick acceleration loop that lasts 10 minutes, covers , averages and reaches a top speed of . Four stops are included, and brisk acceleration maximizes at a rate of per second. The engine begins warm and air conditioning is not used. Ambient temperature varies between to . SFTP SC03 is the air conditioning test, which raises ambient temperatures to , and puts the vehicle's climate control system to use. Lasting 9.9 minutes, the loop averages and maximizes at a rate of . Five stops are included, idling occurs 19 percent of the time and acceleration of 5.1 mph per second is achieved. Engine temperatures begin warm. Lastly, a cold temperature cycle uses the same parameters as the current city loop, except that ambient temperature is set to .
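The 55%/45% weighting of city and highway figures described above is applied to fuel consumption (fuel per distance) rather than to the mpg numbers directly, which amounts to a harmonic weighting of the two ratings. A minimal Python sketch of that calculation, under that assumption and with hypothetical city/highway values (the function name and numbers are illustrative, not from the article):

```python
def combined_mpg(city_mpg: float, highway_mpg: float) -> float:
    """Combine city and highway ratings using the 55%/45% weighting.

    The weights are applied to fuel consumption (gallons per mile),
    i.e. a harmonic weighting of the mpg values; this is an assumption
    about how the "weighted average" in the text is computed.
    """
    gallons_per_mile = 0.55 / city_mpg + 0.45 / highway_mpg
    return 1.0 / gallons_per_mile

# Hypothetical example values, not taken from the article:
print(round(combined_mpg(23.0, 33.0), 1))  # about 26.6 mpg combined
```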
EPA tests for fuel economy do not include electrical load tests beyond climate control, which may account for some of the discrepancy between EPA and real world fuel-efficiency. A 200 W electrical load can produce a 0.4 km/L (0.94 mpg) reduction in efficiency on the FTP 75 cycle test. Beginning with model year 2017 the calculation method changed to improve the accuracy of the estimated 5-cycle city and highway fuel economy values derived from just the FTP and HFET tests, with lower uncertainty for fuel efficient vehicles. Electric vehicles and hybrids Following the efficiency claims made for vehicles such as Chevrolet Volt and Nissan Leaf, the National Renewable Energy Laboratory recommended to use EPA's new vehicle fuel efficiency formula that gives different values depending on fuel used. In November 2010 the EPA introduced the first fuel economy ratings in the Monroney stickers for plug-in electric vehicles. For the fuel economy label of the Chevy Volt plug-in hybrid EPA rated the car separately for all-electric mode expressed in miles per gallon gasoline equivalent (MPG-e) and for gasoline-only mode expressed in conventional miles per gallon. EPA also estimated an overall combined city/highway gas-electricity fuel economy rating expressed in miles per gallon gasoline equivalent (MPG-e). The label also includes a table showing fuel economy and electricity consumed for five different scenarios: , , and driven between a full charge, and a never charge scenario. This information was included to make the consumers aware of the variability of the fuel economy outcome depending on miles driven between charges. Also the fuel economy for a gasoline-only scenario (never charge) was included. For electric-only mode the energy consumption estimated in kWh per is also shown. For the fuel economy label of the Nissan Leaf electric car EPA rated the combined fuel economy in terms of miles per gallon gasoline equivalent, with a separate rating for city and highway driving. This fuel economy equivalence is based on the energy consumption estimated in kWh per 100 miles, and also shown in the Monroney label. In May 2011, the National Highway Traffic Safety Administration (NHTSA) and EPA issued a joint final rule establishing new requirements for a fuel economy and environment label that is mandatory for all new passenger cars and trucks starting with model year 2013, and voluntary for 2012 models. The ruling includes new labels for alternative fuel and alternative propulsion vehicles available in the US market, such as plug-in hybrids, electric vehicles, flexible-fuel vehicles, hydrogen fuel cell vehicle, and natural gas vehicles. The common fuel economy metric adopted to allow the comparison of alternative fuel and advanced technology vehicles with conventional internal combustion engine vehicles is miles per gallon of gasoline equivalent (MPGe). A gallon of gasoline equivalent means the number of kilowatt-hours of electricity, cubic feet of compressed natural gas (CNG), or kilograms of hydrogen that is equal to the energy in a gallon of gasoline. The new labels also include for the first time an estimate of how much fuel or electricity it takes to drive , providing US consumers with fuel consumption per distance traveled, the metric commonly used in many other countries. 
EPA explained that the objective is to avoid the traditional miles per gallon metric that can be potentially misleading when consumers compare fuel economy improvements, and known as the "MPG illusion" – this illusion arises because the reciprocal (i.e. non-linear) relationship between cost (equivalently, volume of fuel consumed) per unit distance driven and MPG value means that differences in MPG values are not directly meaningful – only ratios are (in mathematical terms, the reciprocal function does not commute with addition and subtraction; in general, a difference in reciprocal values is not equal to the reciprocal of their difference). It has been claimed that many consumers are unaware of this, and therefore compare MPG values by subtracting them, which can give a misleading picture of relative differences in fuel economy between different pairs of vehicles – for instance, an increase from 10 to 20 MPG corresponds to a 100% improvement in fuel economy, whereas an increase from 50 to 60 MPG is only a 20% improvement, although in both cases the difference is 10 MPG. The EPA explained that the new gallons-per-100-miles metric provides a more accurate measure of fuel efficiency – notably, it is equivalent to the normal metric measurement of fuel economy, liters per 100 kilometers (L/100 km). CAFE standards The Corporate Average Fuel Economy (CAFE) regulations in the United States, first enacted by Congress in 1975, are federal regulations intended to improve the average fuel economy of cars and light trucks (trucks, vans and sport utility vehicles) sold in the US in the wake of the 1973 Arab Oil Embargo. Historically, it is the sales-weighted average fuel economy of a manufacturer's fleet of current model year passenger cars or light trucks, manufactured for sale in the United States. Under Truck CAFE standards 2008–2011 this changes to a "footprint" model where larger trucks are allowed to consume more fuel. The standards were limited to vehicles under a certain weight, but those weight classes were expanded in 2011. Federal and state regulations The Clean Air Act of 1970 prohibited states from establishing their own air pollution standards. However, the legislation authorized the EPA to grant a waiver to California, allowing the state to set higher standards. The law provides a “piggybacking” provision that allows other states to adopt vehicle emission limits that are the same as California's. California's waivers were routinely granted until 2007, when the George W. Bush administration rejected the state's bid to adopt global warming pollution limits for cars and light trucks. California and 15 other states that were trying to put in place the same emissions standards sued in response. The case was tied up in court until the Obama administration reversed the policy in 2009 by granting the waiver. In August 2012, President Obama announced new standards for American-made automobiles of an average of 54.5 miles per gallon by the year 2025. In April 2018, EPA Administrator Scott Pruitt announced that the Trump administration planned to roll back the 2012 federal standards and would also seek to curb California's authority to set its own standards. Although the Trump administration was reportedly considering a compromise to allow state and national standards to stay in place, on 21 February 2019 the White House declared that it had abandoned these negotiations. 
A government report subsequently found that, in 2019, new light-duty vehicle fuel economy fell 0.2 miles per gallon (to 24.9 miles per gallon) and pollution increased 3 grams per mile traveled (to 356 grams per mile). A decrease in fuel economy and an increase in pollution had not occurred for the previous five years. The Obama-era rule was officially rolled back on 31 March 2020 during the Trump administration, but the rollback was reversed on 20 December 2021 during the Biden administration. Fuel economy of trucks Trucks are usually bought as an investment good. They are meant to earn money. As the diesel fuel burnt in heavy trucks accounts for around 30% of the total costs of a freight forwarding company, there is always strong interest in both the haulage industry and the truck-building industry in striving for the best fuel economy. For truck buyers, the fuel economy measured by standard procedures is only a first guideline. Professional trucking companies measure the fuel economy of their trucks and truck fleets in real usage. Fuel economy of trucks in real usage is determined by four important factors: The truck technology, which is constantly improved by the various OEMs. The driver's driving style, which contributes a lot to the real fuel economy (different from the test cycles, where a standard driving style is used). The maintenance condition of the vehicle, which influences the fuel efficiency – again different from standardized procedures, where the trucks are always presented in flawless condition. Last but not least, the usage of the vehicle influences the fuel consumption: hilly roads and heavy loads will increase the fuel consumption of a vehicle. Effect on pollution Fuel efficiency directly affects pollution-causing emissions by affecting the amount of fuel used. However, it also depends on the fuel source used to drive the vehicle concerned. Cars, for example, can run on a number of fuel types other than gasoline, such as natural gas, LPG, biofuel or electricity, which create various quantities of atmospheric pollution. A kilogram of carbon, whether contained in petrol, diesel, kerosene, or any other hydrocarbon fuel in a vehicle, leads to approximately 3.6 kg of CO2 emissions. Due to the carbon content of gasoline, its combustion emits 2.3 kg/L (19.4 lb/US gal) of CO2; since diesel fuel is more energy dense per unit volume, diesel emits 2.6 kg/L (22.2 lb/US gal). This figure is only the CO2 emissions of the final fuel product and does not include additional CO2 emissions created during the drilling, pumping, transportation and refining steps required to produce the fuel. Additional measures to reduce overall emissions include improvements to the efficiency of air conditioners, lights and tires.
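The per-litre CO2 figures just quoted translate directly into the grams-per-kilometre values used in the emission targets mentioned elsewhere in this article. A short Python sketch of that arithmetic, using only the 2.3 kg/L (gasoline) and 2.6 kg/L (diesel) factors given above (the function name and the example consumption figure are illustrative):

```python
# Tailpipe CO2 emitted per litre of fuel burned, from the figures in the text.
CO2_GRAMS_PER_LITRE = {"gasoline": 2300.0, "diesel": 2600.0}

def co2_g_per_km(l_per_100km: float, fuel: str = "gasoline") -> float:
    """Tailpipe CO2 in g/km for a given fuel consumption in L/100 km."""
    litres_per_km = l_per_100km / 100.0
    return litres_per_km * CO2_GRAMS_PER_LITRE[fuel]

# Example: a gasoline car using 6.0 L/100 km emits roughly 138 g/km,
# consistent with the "around 140 g/km" figure quoted for superminis above.
print(round(co2_g_per_km(6.0, "gasoline")))  # ~138
```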
Unit conversions US Gallons 1 mpg ≈ 0.425 km/L 235.2/mpg ≈ L/100 km 1 mpg ≈ 1.201 mpg (imp) Imperial gallons 1 mpg ≈ 0.354 km/L 282/mpg ≈ L/100 km 1 mpg ≈ 0.833 mpg (US) Conversion from mpg Conversion from km/L and L/100 km See also Automobile costs ACEA agreement Battery electric vehicle Car speed and energy consumption Car tuning Emission standard Energy conservation Energy-efficient driving FF layout Fuel efficiency in transportation Fuel saving devices Gasoline gallon equivalent Motorized quadricycle (vehicles with low power engines/low top speed) Miles per gallon gasoline equivalent Passenger miles per gallon The Very Light Car Vehicle Efficiency Initiative Vehicle metrics Green vehicle Low-carbon economy Low-rolling resistance tires Microcar Plug-in hybrid Annotations References External links Real fuel consumption by user reports Model Year 2014 Fuel Economy Guide , U.S. Environmental Protection Agency and U.S. Department of Energy, April 2014. Fuel Efficiency in Electric, Hybrid and Petrol Cars – Model Year 2019 Fuel Consumption Calculator Online Energy economics Fuel technology Green vehicles Car costs
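The conversion factors listed just above, together with the "MPG illusion" discussed in the United States section, can be made concrete in a few lines of Python. This is an illustrative sketch only; the function names are invented for the example:

```python
L_PER_US_GAL = 3.785411784   # litres per US gallon
L_PER_IMP_GAL = 4.54609      # litres per imperial gallon
KM_PER_MILE = 1.609344

def l_per_100km_to_mpg_us(l_per_100km: float) -> float:
    """L/100 km -> miles per US gallon (the 235.2/x rule above)."""
    return 100.0 * L_PER_US_GAL / (KM_PER_MILE * l_per_100km)

def l_per_100km_to_mpg_imp(l_per_100km: float) -> float:
    """L/100 km -> miles per imperial gallon (the 282/x rule above)."""
    return 100.0 * L_PER_IMP_GAL / (KM_PER_MILE * l_per_100km)

# The relation is its own inverse, so the same constant converts back:
def mpg_us_to_l_per_100km(mpg_us: float) -> float:
    return 100.0 * L_PER_US_GAL / (KM_PER_MILE * mpg_us)

# The "MPG illusion": equal mpg differences are not equal fuel savings.
for low, high in [(10, 20), (50, 60)]:
    saved = 100.0 / low - 100.0 / high   # US gallons saved per 100 miles
    print(f"{low} -> {high} mpg saves {saved:.1f} gallons per 100 miles")
# 10 -> 20 mpg saves 5.0 gallons per 100 miles; 50 -> 60 mpg saves only ~0.3.
```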
Fuel economy in automobiles
Environmental_science
9,551
13,283,342
https://en.wikipedia.org/wiki/Mining%20lamp
A mining lamp is a lamp developed for the demanding requirements of underground mining operations. Most often it is worn on a hard hat in the form of a headlamp. History Types 1813 Dr William Reid Clanny exhibited the Clanny lamp 1815 Humphry Davy exhibited the Davy lamp 1815 George Stephenson exhibited his lamp The Davy safety lamp was made in London by Humphry Davy. George Stephenson invented a similar lamp, but Davy's invention was safer due to it having a fine wire gauze that surrounded the flame. This enabled the light to pass through and reduced the risk of explosion by stopping the "firedamp" methane gas from coming into contact with the flame. 1840 Mathieu Mueseler exhibited the Mueseler lamp in Belgium. 1859 William Clark patented the first electrical mining lamp. 1870s J.B. Marsaut (France) double gauze design 1872 Coal Mines Regulation Act required locked lamps under certain conditions 1881 Joseph Swan exhibited his first electric lamp 1882 A 'bonnetted' Clanny lamp was made 1883 Ellis Lever of Culcheth Hall, Bowdon, offered a £500 prize for creation of a safe portable mining lamp. 1885 Thomas Evans of Aberdare made a Clanny type of safety lamp 1886 Royal Commission on Accidents in Mines tested lamps and made recommendations 1887 Coal Mines Regulation Act – requirements on construction, examination, where used, etc. 1889 John Davis and Co, Derby, were supplying portable electric lamps 1896 Coal Mines Regulation Act – requirements on provision by mine owners, where to be used, etc. 1909 Cap (helmet) lamps introduced in Scotland 1911 Prize offered for best electrical lamp 1911 Coal Mines Act made requirements for pit managers to take examinations, where lamps can be used (including electrical), etc. 1920 Electrical lamp with built-in accumulator 1924 Miners Lamp Committee – tests and recommendations 1950 Shale miner's electric safety cap lamp and battery pack made in England and supplied by Concordia Electric Safety Lamp Company Ltd, Cardiff. Variants Carbide lamp, a lamp that produces and burns acetylene Safety lamp, any of several types of lamp designed to be safe to use in coal mines Davy lamp, an early flame safety lamp in which the flame is enclosed by wire gauze Geordie lamp, a safety lamp Wheat lamp See also Headlamp (outdoor) References Mining equipment Types of lamp Safety equipment Mine safety Coal mining
Mining lamp
Engineering
472
4,372,065
https://en.wikipedia.org/wiki/Alpha%20Pyxidis
Alpha Pyxidis, Latinised from α Pyxidis, is a giant star in the constellation Pyxis. It is the brightest star in Pyxis, and is easily visible to the naked eye. It has a stellar classification of B1.5III and is a Beta Cephei variable. This star has more than ten times the mass of the Sun and more than six times the Sun's radius. The surface temperature is and the star is about 10,000 times as luminous as the Sun. Stars such as this with more than 10 solar masses are expected to end their lives by exploding as a supernova. [Figure: a light curve for Alpha Pyxidis, plotted from TESS data.] Naming In Chinese, (), meaning Celestial Dog, refers to an asterism consisting of α Pyxidis, e Velorum, f Velorum, β Pyxidis, γ Pyxidis and δ Pyxidis. Consequently, α Pyxidis itself is known as (, ). References External links B-type giants Beta Cephei variables Pyxis Pyxidis, Alpha PD-32 02399 074575 042828 3468
Alpha Pyxidis
Astronomy
274
197,544
https://en.wikipedia.org/wiki/Pentagram
A pentagram (sometimes known as a pentalpha, pentangle, or star pentagon) is a regular five-pointed star polygon, formed from the diagonal line segments of a convex (or simple, or non-self-intersecting) regular pentagon. Drawing a circle around the five points creates a similar symbol referred to as the pentacle, which is used widely by Wiccans and in paganism, or as a sign of life and connections. The word pentagram comes from the Greek word πεντάγραμμον (pentagrammon), from πέντε (pente), "five" + γραμμή (grammē), "line". The word pentagram refers to just the star and the word pentacle refers to the star within a circle, although there is some overlap in usage. The word pentalpha is a 17th-century revival of a post-classical Greek name of the shape. History Early history Early pentagrams have been found on Sumerian pottery from Ur c. 3500 BCE, and the five-pointed star was at various times the symbol of Ishtar or Marduk. Pentagram symbols from about 5,000 years ago were found in the Liangzhu culture of China. A pentagram appeared in a Chinese text on music theory from the Warring States period (221 BC) as a diagram of the mathematical relations between the five notes in a particular Chinese musical scale. The pentagram was known to the ancient Greeks, with a depiction on a vase possibly dating back to the 7th century BCE. Pythagoreanism originated in the 6th century BCE and used the pentagram as a symbol of mutual recognition, of wellbeing, and to recognize good deeds and charity. From around 300–150 BCE the pentagram stood as the symbol of Jerusalem, marked by the 5 Hebrew letters ירשלם spelling its name. In Neoplatonism, the pentagram was said to have been used as a symbol or sign of recognition by the Pythagoreans, who called the pentagram "health". Western symbolism Middle Ages The pentagram was used in ancient times as a Christian symbol for the five senses, or of the five wounds of Christ. The pentagram plays an important symbolic role in the 14th-century English poem Sir Gawain and the Green Knight, in which the symbol decorates the shield of the hero, Gawain. The unnamed poet credits the symbol's origin to King Solomon, and explains that each of the five interconnected points represents a virtue tied to a group of five: Gawain is perfect in his five senses and five fingers, faithful to the Five Wounds of Christ, takes courage from the five joys that Mary had of Jesus, and exemplifies the five virtues of knighthood, which are generosity, friendship, chastity, chivalry, and piety. The North rose of Amiens Cathedral (built in the 13th century) exhibits a pentagram-based motif. Some sources interpret the unusual downward-pointing star as symbolizing the Holy Spirit descending on people. Renaissance Heinrich Cornelius Agrippa and others perpetuated the popularity of the pentagram as a magic symbol, attributing the five neoplatonic elements to the five points, in typical Renaissance fashion. Agrippa depicts the human body inscribed in an 'upright' (point-up) pentagram and another with its hands in rotated pentagrams, among numerous other geometrical figures, in the section on 'the proportions and harmonious measures of the human body', and an 'inverted' (point-down) version of the Pythagorean 'hygeia' pentagram in the section on 'characters, received only by revelation, which no other kind of reasoning can discover', alongside variations of the Chi-Rho and the Hebrew word Makabi. 
'Of this type are the signet shown to Constantine, which most people called a cross, inscribed in Latin letters, 'in this conquer', and another revealed to Antiochus who was surnamed Soteris, in the shape of a pentagon, which issued health, for resolved into letters, it issued the word ὑγίεα, that is, 'health', in the confidence and virtue of which signs, each of the kings won a notable victory against their enemies. Thus Judas, who for this reason was afterwards known as Maccabeus, was about to fight with the Jews against Antiochus Eupatorus, and received that noble seal מׄכׄבׄיׄ from the angel'. Romanticism By the mid-19th century, a further distinction had developed amongst occultists regarding the pentagram's orientation. With a single point upwards it depicted spirit presiding over the four elements of matter, and was essentially "good". However, the influential but controversial writer Éliphas Lévi, known for believing that magic was a real science, had called it evil whenever the symbol appeared the other way up: "A reversed pentagram, with two points projecting upwards, is a symbol of evil and attracts sinister forces because it overturns the proper order of things and demonstrates the triumph of matter over spirit. It is the goat of lust attacking the heavens with its horns, a sign execrated by initiates." "The flaming star, which, when turned upside down, is the sign of the goat of black magic, whose head may be drawn in the star, the two horns at the top, the ears to the right and left, the beard at the bottom. It is a sign of antagonism and fatality. It is the goat of lust attacking the heavens with its horns." "Let us keep the figure of the Five-pointed Star always upright, with the topmost triangle pointing to heaven, for it is the seat of wisdom, and if the figure is reversed, perversion and evil will be the result." The apotropaic (protective) use in German folklore of the pentagram symbol (called Drudenfuss in German) is referred to by Goethe in Faust (1808), where a pentagram prevents Mephistopheles from leaving a room (but did not prevent him from entering by the same way, as the outward pointing corner of the diagram happened to be imperfectly drawn): Also protective is the use in Icelandic folklore of a gestured or carved rather than painted pentagram (called in Icelandic), according to 19th century folklorist Jón Árnason: A butter that comes from the fake vomit is called a fake butter; it looks like any other butter; but if one makes a sign of a cross over it, or carves a cross on it, or a figure called a buttermilk-knot,* it all explodes into small pieces and becomes like a grain of dross, so that nothing remains of it, except only particles, or it subsides like foam. Therefore it seems more prudent, if a person is offered a horrible butter to eat, or as a fee, to make either mark on it, because a fake butter cannot withstand either a cross mark or a butter-knot. * The butter-knot is shaped like this: Uses in modern occultism Based on Renaissance-era occultism, the pentagram found its way into the symbolism of modern occultists. Its major use is a continuation of the ancient Babylonian use of the pentagram as an apotropaic charm to protect against evil forces. Éliphas Lévi claimed that "The Pentagram expresses the mind's domination over the elements and it is by this sign that we bind the demons of the air, the spirits of fire, the spectres of water, and the ghosts of earth."
In this spirit, the Hermetic Order of the Golden Dawn developed the use of the pentagram in the lesser banishing ritual of the pentagram, which is still used to this day by those who practice Golden Dawn-type magic. Aleister Crowley made use of the pentagram in the system of magick used in Thelema: an adverse or inverted pentagram represents the descent of spirit into matter, according to the interpretation of Lon Milo DuQuette. Crowley contradicted his old comrades in the Hermetic Order of the Golden Dawn, who, following Levi, considered this orientation of the symbol evil and associated it with the triumph of matter over spirit. Use in new religious movements Baháʼí Faith The five-pointed star is a symbol of the Baháʼí Faith. In the Baháʼí Faith, the star is known as the Haykal (), and it was initiated and established by the Báb. The Báb and Bahá'u'lláh wrote various works in the form of a pentagram. The Church of Jesus Christ of Latter-day Saints The Church of Jesus Christ of Latter-day Saints is theorized to have begun using both upright and inverted five-pointed stars in Temple architecture, dating from the Nauvoo Illinois Temple dedicated on 30 April 1846. Other temples decorated with five-pointed stars in both orientations include the Salt Lake Temple and the Logan Utah Temple. These usages come from the symbolism found in Revelation chapter 12: "And there appeared a great wonder in heaven; a woman clothed with the sun, and the moon under her feet, and upon her head a crown of twelve stars." Wicca Because of a perceived association with Satanism and occultism, many United States schools in the late 1990s sought to prevent students from displaying the pentagram on clothing or jewelry. In public schools, such actions by administrators were determined in 2000 to be in violation of students' First Amendment right to free exercise of religion. The encircled pentagram (referred to as a pentacle by the plaintiffs) was added to the list of 38 approved religious symbols to be placed on the tombstones of fallen service members at Arlington National Cemetery on 24 April 2007. The decision was made following ten applications from families of fallen soldiers who practiced Wicca. The government paid the families to settle their pending lawsuits. Other religious use Satanism The inverted pentagram is broadly used in Satanism, sometimes depicted with the goat's head of Baphomet, as popularized by the Church of Satan since 1968. LaVeyan Satanists pair the goat head with Hebrew letters at the five points of the pentagram to form the Sigil of Baphomet. The Baphomet sigil was adapted for the Joy of Satan Ministries logo, using cuneiform characters at the five points of the pentagram, reflecting the shape's earliest use in Sumeria. The inverted pentagram also appears in The Satanic Temple logo, with an alternative depiction of Baphomet's head. Other depictions of the Satanic goat's head resemble the inverted pentagram without its explicit outline. Serer religion The five-pointed star is a symbol of the Serer religion and the Serer people of West Africa. Called Yoonir in their language, it symbolizes the universe in the Serer creation myth, and also represents the star Sirius. 
Other modern use The pentagram is featured on the national flags of Morocco (adopted 1915) and Ethiopia (adopted 1996 and readopted 2009). The Order of the Eastern Star, an organization (established 1850) associated with Freemasonry, uses a pentagram as its symbol, with the five isosceles triangles of the points colored blue, yellow, white, green, and red. In most Grand Chapters the pentagram is used pointing down, but in a few, it is pointing up. Grand Chapter officers often have a pentagon inscribed around the star (the emblem of the Prince Hall Association, for example). A pentagram is featured on the flag of the Dutch city of Haaksbergen, as well as on its coat of arms. A pentagram is featured on the flag of the Japanese city of Nagasaki, as well as on its emblem. Geometry The pentagram is the simplest regular star polygon. The pentagram contains ten points (the five points of the star, and the five vertices of the inner pentagon) and fifteen line segments. It is represented by the Schläfli symbol {5/2}. Like a regular pentagon, and a regular pentagon with a pentagram constructed inside it, the regular pentagram has as its symmetry group the dihedral group of order 10. It can be seen as a net of a pentagonal pyramid, although with isosceles triangles. Construction The pentagram can be constructed by connecting alternate vertices of a pentagon; see details of the construction. It can also be constructed as a stellation of a pentagon, by extending the edges of a pentagon until the lines intersect. Golden ratio The golden ratio, φ = (1 + √5) / 2 ≈ 1.618, satisfying φ² = φ + 1, plays an important role in regular pentagons and pentagrams. Each intersection of edges sections the edges in the golden ratio: the ratio of the length of the edge to the longer segment is φ, as is the length of the longer segment to the shorter. Also, the ratio of the length of the shorter segment to the segment bounded by the two intersecting edges (a side of the pentagon in the pentagram's center) is φ. The pentagram includes ten isosceles triangles: five acute and five obtuse isosceles triangles. In all of them, the ratio of the longer side to the shorter side is φ. The acute triangles are golden triangles; the obtuse isosceles triangles are golden gnomons. Trigonometric values As a result, in an isosceles triangle with one or two angles of 36°, the longer of the two side lengths is φ times that of the shorter of the two, both in the case of the acute and in the case of the obtuse triangle. Spherical pentagram A pentagram can be drawn as a star polygon on a sphere, composed of five great circle arcs, all of whose internal angles are right angles. This shape was described by John Napier in his 1614 book Mirifici logarithmorum canonis descriptio (Description of the wonderful rule of logarithms) along with rules that link the values of trigonometric functions of five parts of a right spherical triangle (two angles and three sides). It was studied later by Carl Friedrich Gauss. Three-dimensional figures Several polyhedra incorporate pentagrams: Higher dimensions Orthogonal projections of higher dimensional polytopes can also create pentagrammic figures: All ten 4-dimensional Schläfli–Hess 4-polytopes have either pentagrammic faces or vertex figure elements. Pentagram of Venus The pentagram of Venus is the apparent path of the planet Venus as observed from Earth.
Successive inferior conjunctions of Venus repeat with an orbital resonance of approximately 13:8 (that is, Venus orbits the Sun approximately 13 times for every eight orbits of Earth), shifting 144° at each inferior conjunction. The tips of the five loops at the center of the figure have the same geometric relationship to one another as the five vertices, or points, of a pentagram, and each group of five intersections equidistant from the figure's center has the same geometric relationship. In computer systems The pentagram has several Unicode code points that enable it to be included in documents: See also Pentachoron – the 4-simplex References Further reading External links The Pythagorean Pentacle from the Biblioteca Arcana. Christian symbols Golden ratio Magic symbols National symbols of Ethiopia National symbols of Morocco 5 (number) Pythagorean symbols Religious symbols 05 Serer religious symbols Wicca Paganism
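The 13:8 resonance and the 144° shift quoted above can be checked with a short calculation; the worked sketch below uses approximate orbital periods of 365.25 days for Earth and 224.7 days for Venus, values which are assumptions not given in this article.

```latex
% Assumed sidereal periods: T_E = 365.25 d (Earth), T_V = 224.7 d (Venus).
\begin{align*}
  \frac{1}{T_{\mathrm{syn}}} &= \frac{1}{T_V} - \frac{1}{T_E}
      && \Rightarrow\ T_{\mathrm{syn}} \approx 583.9\ \text{d (synodic period)} \\
  \frac{8\,T_E}{T_{\mathrm{syn}}} &\approx \frac{2922\ \text{d}}{583.9\ \text{d}} \approx 5
      && \text{five inferior conjunctions in eight Earth years} \\
  \frac{8\,T_E}{T_V} &\approx \frac{2922\ \text{d}}{224.7\ \text{d}} \approx 13
      && \text{thirteen Venus orbits in the same interval} \\
  \frac{2 \times 360^\circ}{5} &= 144^\circ
      && \text{five conjunction points spread evenly over two full turns}
\end{align*}
```

The five conjunction points, spaced 144° apart, are what trace out the pentagram-like figure described above.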
Pentagram
Mathematics
3,230
26,602,636
https://en.wikipedia.org/wiki/Olaratumab
Olaratumab, sold under the brand name Lartruvo, is a monoclonal antibody medication developed by Eli Lilly and Company for the treatment of solid tumors. It is directed against the platelet-derived growth factor receptor alpha. It was removed from the United States and European Union markets in 2019, due to insufficient proof of its medical advantage (see below "Medical uses"). Medical uses Olaratumab is used in combination with doxorubicin for the treatment of adults with advanced soft-tissue sarcoma (STS) who cannot be cured by cancer surgery or radiation therapy, and who have not been previously treated with doxorubicin. In a randomised controlled trial with 133 STS patients, olaratumab plus doxorubicin improved the median of progression-free survival from 4.1 to 6.6 months as compared to doxorubicin alone (p = 0.0615, narrowly missing statistical significance), and overall survival from 14.7 to 26.5 months (p = 0.0003, highly significant). However, the ANNOUNCE phase 3 trial did not find any advantage in adding olaratumab to doxorubicin. Therefore, in January 2019, FDA and EMA decided to recommend against starting olaratumab for soft tissue sarcoma. In April 2019 the European Medicines Agency explicitly requested the marketing authorisation of the medicine to be revoked. Shortly afterwards the German Physician's Medicines Commission reported that olaratumab will be removed from the German market "in a few weeks" and asked doctors not to treat new patients with this drug outside of clinical trials. Lilly subsequently voluntarily withdrew its approval in the United States. Contraindications The drug has no contraindications apart from hypersensitivity reactions. Side effects In studies, the most serious side effects of the combination olaratumab/doxorubicin were neutropenia (low count of neutrophil white blood cells) with a severity of grade 3 or 4 in 55% of patients, and musculoskeletal pain grade 3 or 4 in 8% of patients. Common milder side effects were lymphopenia, headache, diarrhoea, nausea and vomiting, mucositis, and reactions at the infusion site; all typical effects of cancer therapies. Interactions No pharmacokinetic interactions with doxorubicin were observed in studies. Being a monoclonal antibody, olaratumab is neither metabolised by cytochrome P450 liver enzymes nor transported by transmembrane pumps, and is thus not expected to interact relevantly with other drugs. Pharmacology Mechanism of action Olaratumab inhibits growth of tumour cells by blocking subunit alpha of the platelet-derived growth factor receptor, a type of tyrosine kinase. Pharmacokinetics After intravenous infusion, olaratumab has a volume of distribution of 7.7 litres in steady state and a biological half-life of 11 days. History Olaratumab was originally developed by ImClone Systems, which was acquired by Eli Lilly in 2008. A Phase I clinical trial was conducted in Japanese patients in September 2010, followed by a Phase II trial in 133 patients, starting in October 2010. In February 2015, the European Medicines Agency assigned olaratumab orphan drug status for the treatment of soft-tissue sarcoma. The European Commission granted a conditional marketing authorisation, based on the mentioned Phase II study, valid throughout the European Union on 9 November 2016. Previously considered a promising drug, the FDA granted olaratumab fast track designation, breakthrough therapy designation and priority review status. 
In October 2016, the US FDA issued an accelerated approval notice for use of olaratumab with doxorubicin to treat adults with certain types of soft-tissue sarcoma, based on the same study. A phase III trial completed in 2019 showed no benefit from the addition of olaratumab to doxorubicin. As noted above, these results led to withdrawal of approval in the United States and Europe. References Monoclonal antibodies for tumors Orphan drugs Drugs developed by Eli Lilly and Company Withdrawn drugs
Olaratumab
Chemistry
882
38,507,281
https://en.wikipedia.org/wiki/C9H7NO4
The molecular formula C9H7NO4 (molar mass: 193.16 g/mol, exact mass: 193.0375 u) may refer to: DHICA Dopachrome Molecular formulas
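The molar mass quoted above can be reproduced from standard atomic weights; a minimal sketch, in which the rounded atomic weights are assumptions not given in the entry.

```python
# Recompute the molar mass of C9H7NO4 from standard atomic weights (g/mol).
ATOMIC_WEIGHT = {"C": 12.011, "H": 1.008, "N": 14.007, "O": 15.999}
COMPOSITION = {"C": 9, "H": 7, "N": 1, "O": 4}

molar_mass = sum(ATOMIC_WEIGHT[el] * n for el, n in COMPOSITION.items())
print(round(molar_mass, 2))  # 193.16, matching the value quoted above
```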
C9H7NO4
Physics,Chemistry
59
5,644,238
https://en.wikipedia.org/wiki/Female%20sabotage
Female sabotage is an evolutionary theory regarding the propensity of certain females to select "burdened" males of their species for mating. History Soon after Charles Darwin published his theory of natural selection, he was faced with a puzzle. If natural selection suggests "survival of the fittest," then there is a question as to why some males have traits that detract from their survival. Darwin knew that there was more to natural selection than simple fitness. An equally important part of the struggle of life concerns reproduction. In this case, the question becomes, "Why would a female burden her offspring with dangerous traits by mating with a similarly burdened male?" Noting that the males with burdensome traits are almost entirely those in polygamous species, where a minority of males generally mate with many females, Darwin had an insight. He realized that if females found these male burdens more "attractive", and if that attractiveness resulted in more matings by burdened males, then the increase in matings of a few sons might offset the death of many other sons as a result of the burden. In effect, if the success of the surviving males produced enough offspring to cover more than the loss of potential offspring from their lost brothers, then the female who mated with a burdened male had chosen correctly. Female sabotage theory In 1996, however, Joe Abraham presented a re-interpretation of the problem. In polygamous species, males generally contribute nothing to the nurturing of offspring, but nevertheless continue to consume finite resources. In such situations, males effectively become competitors with females and young once they are finished mating. This gives females a reason to sabotage males, and mating gives them an opportunity to do so. By choosing to mate exclusively with males who are unlikely to survive because of their burdens, the females ensure that as the males die, more food and other resources will remain for females and their young. Because females are the limiting resource in most species, as their numbers increase, population fitness will also increase. Just as a given amount of land can only produce a finite amount of grazing, and a limited amount of grazing can only support a limited number of grazing animals, so a given number of grazing animals can only sustain a limited number of predators. Similar limitations apply to all living things, and are known as the carrying capacity of a physical area. If males' burdens are more likely to draw the interest of local predators, then such males effectively shift predation away from females and their young. In this case, the females and young will gain an added benefit from decreased predation, and enjoy even higher rates of survivability. Abraham's explanation reunites the two sides of the major split in sexual selection, intrasexual competition (male combat) and intersexual selection (female choice), under one rubric. Under female sabotage, the increase in resources becomes the critical factor, and the cause of increased male mortality is secondary. The theory also offers new, feminist approaches to leks, harems, resource guarding and mate location. Perhaps the most attractive aspect of Abraham's explanation, however, is that it can easily work with any of the many current theories of sexual selection, and must play some role in them.
An increase in resources and a decrease in predation for females and their young is an inevitable result of increased male mortality, regardless of what mechanism drives females to mate with males carrying burdensome traits. References Abraham, J.N. 1998. "La Saboteuse: An Ecological Theory of Sexual Dimorphism in Animals." Acta Biotheoretica 46:23-35. Evolutionary biology
Female sabotage
Biology
737
56,340,765
https://en.wikipedia.org/wiki/Cohesion%20number
The Cohesion number (Coh) is a useful dimensionless number in particle technology by which the cohesivity of different powders can be compared. This is especially useful in DEM simulations (Discrete Element Method) of granular materials, where scaling of the size and stiffness of the particles is inevitable due to the computationally demanding nature of DEM modelling. Background In the simulation of granular materials, scaling the particle size while staying consistent with the particles' other physical and mechanical properties is challenging. Especially in the simulation of cohesive powders, the lack of a robust criterion for tuning the level of the surface energy of the particles can waste an enormous amount of time during calibration. The Bond number has traditionally been used in this regard, where the significance of the adhesive force (pull-off force) is compared with the particles' gravitational force (weight); nevertheless, the influence of the material properties, particularly the particles' stiffness, is not comprehensively captured in this number. The particles' stiffness, which is not present in the Bond number, has a considerable impact on how particles respond to an applied force. If the forces in the Bond number are substituted with potential and cohesion energies, a new dimensionless number is formed whereby the effect of the particles' stiffness is also considered. This was first proposed by Behjani et al., who introduced a dimensionless number called the Cohesion number. Definition and mathematical derivations The Cohesion number is a dimensionless number which expresses the ratio of the work required to detach two arbitrary solid particles (the work of cohesion) to their gravitational potential energy, as expressed below, For example, in the JKR contact model the work of cohesion is by which the Cohesion number is derived as follows: Mass can be written in terms of density and volume, and the constant factor can be eliminated, The final form of the Cohesion number is as follows: is the particle density is the gravity is the interfacial energy is the equivalent Young's modulus: is the material Poisson's ratio shows the equivalent radius: This number depends on the particles' surface energy, particle size, particle density, gravity, and Young's modulus. It reflects the fact that materials with lower stiffness become "stickier" if adhesive, and it is a useful scaling method for DEM simulations in which the Young's modulus is selected smaller than the real value in order to increase the computational speed. Recently, a rigorous analysis of contact stiffness reduction for adhesive contacts, used to speed up DEM calculations, has shown the same fractional form. See also Contact mechanics Surface tension References Dimensionless numbers of mechanics
Cohesion number
Physics
545
72,412,136
https://en.wikipedia.org/wiki/Operation%20Cyberstorm
Operation Cyberstorm was a two-year undercover operation in the United States by the Federal Bureau of Investigation (FBI) against illegal copying of software. At the time, it was the largest sweep ever conducted by the FBI against illegal copying. Investigations A number of individuals purchased software at a discount and resold it at a profit in violation of the software licenses. Convictions Mirza Ali, 60, of Fremont, California, and Sameena Ali, 53, also of Fremont, were sentenced in 2007 to 60 months of imprisonment, and forfeiture in the amount of $5,105,977. Keith Griffen, 56, of Oregon City, Oregon, was sentenced to 33 months of imprisonment, restitution to Microsoft Corporation in the amount of $20,000,000, three years of supervised release, and $900 in special assessments. William Glushenko, 66, was sentenced to one year of probation and 100 hours of community service after pleading guilty to misprision of felony. References Cyberstorm Copyright enforcement Cyberstorm
Operation Cyberstorm
Technology
209
45,114,355
https://en.wikipedia.org/wiki/Wheat%20Improvement%20Strategic%20Programme
The Wheat Improvement Strategic Programme (WISP) is a Biotechnology and Biological Sciences Research Council (BBSRC) funded collaborative programme for wheat improvement, which brings together experts from five UK institutions: John Innes Centre, Rothamsted Research, the National Institute for Agricultural Botany (NIAB) and the University of Nottingham, and the University of Bristol. The programme is divided into four pillars (Landraces, Synthetics, Alien Introgression, Elite Wheats) and two themes (Phenotyping and Genotyping). Aims Specific goals of the project are to: Understand the genetics behind factors limiting grain yield, such as drought tolerance, plant shape and resistance to pests and diseases. Identify new and useful genetic variation from related species and sources of wheat germplasm not adapted to target environments. Cross wheat lines to produce germplasm that allows the identification of genes influencing key traits. Generate a database of genetic markers, for use in precision breeding. The new germplasm and the information generated by this project will be made freely available. Plant breeders can use the germplasm to cross with their existing lines, while academics will be able to make use of it to understand the mechanistic basis of key traits in bread wheat. The WISP website gives access to current research outcomes and available resources. References External links http://www.wheatisp.org/ Agricultural organisations based in the United Kingdom College and university associations and consortia in the United Kingdom Genetics in the United Kingdom Plant genetics Wheat organizations
Wheat Improvement Strategic Programme
Biology
318
2,922,306
https://en.wikipedia.org/wiki/12%20Canis%20Majoris
12 Canis Majoris is a variable star located about 707 light years away from the Sun in the southern constellation of Canis Major. It has the variable star designation HK Canis Majoris; 12 Canis Majoris is the Flamsteed designation. This body is just barely visible to the naked eye as a dim, blue-white hued star with a baseline apparent visual magnitude of +6.07. It is moving away from the Earth with a heliocentric radial velocity of +16 km/s. This is the brightest star in the vicinity of the open cluster NGC 2287, although it is probably not a member based on its proper motion. This star has a stellar classification of B7 II/III, matching the spectrum of a B-type star intermediate between a giant and bright giant. (Cidale et al. (2007) show a class of B5 V, which would indicate it is instead a B-type main-sequence star.) It is a magnetic Bp star of the helium–weak variety (CP4), with the spectrum displaying evidence for vertical stratification of helium in the atmosphere. Holger Pedersen and Bjarne Thomsen discovered that 12 Canis Majoris is a variable star, in 1977. It was given its variable star designation in 1981. Samus et al. (2017) classify it as an SX Arietis variable that varies in brightness by about 0.05 magnitudes over a period of 2.18045 days. It has 4.8 times the mass of the Sun and 2.73 times the Sun's radius. The star is radiating 537 times the Sun's luminosity from its photosphere at an effective temperature of . References B-type giants B-type bright giants B-type main-sequence stars Helium-weak stars SX Arietis variables Canis Major Durchmusterung objects Canis Majoris, 12 049333 032504 2509 Canis Majoris, HK
12 Canis Majoris
Astronomy
412
55,822,704
https://en.wikipedia.org/wiki/New-collar%20worker
A new-collar worker is an individual who develops technical and soft skills needed to work in the contemporary technology industry through nontraditional education paths. The term was introduced by IBM CEO Ginni Rometty in late 2016 and refers to "middle-skill" occupations in technology, such as cybersecurity analysts, application developers and cloud computing specialists. Etymology The term "new-collar job" is a play on “blue-collar job”. It originated with IBM's CEO Ginni Rometty, relating to the company's efforts to increase the number of people qualified for technology jobs. In November 2016, Rometty wrote an open letter to then-President-elect Donald Trump, which introduced the idea of "new-collar jobs" and urged his support for the creation of these types of roles. Rometty coined the term in response to new employment designations as industries are moving into a new technology era, and jobs are created that require new skills in data science, cloud computing and artificial intelligence. Occupations and education requirements According to Rometty, "relevant skills, sometimes obtained through vocational training", are the qualifying characteristics of new-collar work. Typical new-collar jobs include: cloud computing technicians, database managers, cybersecurity analysts, user interface designers, and other assorted IT roles. Technical skills and education are required for these roles but not necessarily a four-year college degree. Skills may be developed through nontraditional education such as community college courses and industry certification programs. Employers of new-collar workers value the ability to adapt and learn, equally to more formal education. As well, training for new-collar jobs often involves development of relevant soft skills. Due to a widespread skills gap, industry demand for new-collar workers has led to the development of education initiatives focused on technical skills. Examples of such initiatives include a partnership between Delta Air Lines and about 37 aviation maintenance schools in the US to develop a curriculum focused on skills needed in the aviation industry, and IBM's P-Tech program for high-school and associate degree. Usage In the United States, the "New Collar Jobs Act" was released by Representatives Ted Lieu (California), Matt Cartwright (Pennsylvania) and Ann McLane Kuster (New Hampshire) in July 2017. The Act sought to provide scholarship funding and debt relief for individuals who study cybersecurity and take up cybersecurity roles, as well as establishing tax breaks for employers that offer cybersecurity training. In August 2017, Virginia Lt. Governor Ralph Northam announced a vocational training program titled "Get Skilled, Get A Job, and Give Back", focused on skills for new-collar jobs. See also Designation of workers by collar color IBM SkillsBuild References 2016 neologisms Employment classifications IBM Office work Social classes
New-collar worker
Technology
565
21,491,913
https://en.wikipedia.org/wiki/WHO%20Drug%20Dictionary
The WHODrug Dictionary is an international classification of medicines created by the WHO Programme for International Drug Monitoring and managed by the Uppsala Monitoring Centre. It is used by pharmaceutical companies, clinical trial organizations and drug regulatory authorities for identifying drug names in spontaneous ADR reporting (and pharmacovigilance) and in clinical trials. The dictionary was created in 1968 and is regularly updated; since 2005 there have been major developments in the form of a WHO Drug Dictionary Enhanced (with considerably more fields and data entries) and a WHO Herbal Dictionary, which covers traditional and herbal medicines. Since 2016 all of the WHODrug products have been available in a single subscription service called WHODrug Global. Organization The WHODrug drug code consists of 11 alphanumeric characters. It has 3 parts: Drug Record Number (Drug Rec No), Sequence number 1 (Seq1) and Sequence number 2 (Seq2). Drug Rec No consists of 6 characters. It uniquely identifies active moieties, regardless of salt form or plant part and extract. Seq1 is used to uniquely identify different variations (e.g. salts and esters), plant parts and extraction methods, thereby defining active substances or a combination of active substances. WHODrug records sharing the same Drug Rec No and Seq1 contain the same variation/plant part/extract variation of the same active moiety. For single-ingredient records, Seq1=01 identifies a specific active moiety. If Seq1 is higher than 01 it refers to variations of that active moiety. For multi-ingredient records, Seq1=01 identifies a combination of active moieties. If Seq1 is higher than 01 it refers to variations of one or more of the active moieties in the combination. Finally, Seq2 uniquely identifies the name of the record in WHODrug. Example The Drug Code for the substance Ibuprofen is 001092 01 001. The Drug Code for the trade name Advil infants pain & fever relief is 001092 01 A3D. Relationship to Anatomical Therapeutic Chemical Classification System WHODrug records are classified with at least one code from the Anatomical Therapeutic Chemical Classification System (including the HATC, which stands for Herbal ATC and is treated as part of ATC for mapping purposes). Preferably, a fourth level ATC code is assigned. ATC assignments in WHODrug are marked as 'official' or 'UMC-assigned'. Official ATC codes are classifications included in the official ATC index, while UMC-assigned ATC codes are classifications not included in the official ATC index. In addition, a separate cross reference called "Cross Reference ATC 5" is provided. In this additional reference, WHODrug records are matched to fifth level ATC codes where applicable. Formats WHODrug is offered in two formats (called B3 and C3). The B3 format is brief, while the C3 format contains additional columns on top of the B3 format. Dictionary Versions Standardised Drug Groupings WHODrug concepts can be organized into groups. Standardised Drug Groupings (SDGs) define groups of drugs, for example diuretics, corticosteroids, or drugs used in diabetes. Groups are also defined based on interaction, for example, drugs interacting with CYP2C8 or drugs interacting with UGT. References External links www.who-umc.org Pharmacological classification systems
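The 11-character layout described in the Organization section (a 6-character Drug Rec No, a 2-character Seq1 and a 3-character Seq2) can be illustrated with a small parser; this is a minimal sketch in which the function name and the tolerance for embedded spaces are illustrative assumptions, not part of the dictionary specification.

```python
# Split an 11-character WHODrug drug code into its three parts:
# Drug Record Number (6 chars), Seq1 (2 chars), Seq2 (3 chars).
def parse_whodrug_code(code: str) -> dict:
    compact = code.replace(" ", "")
    if len(compact) != 11:
        raise ValueError("WHODrug codes have 11 characters (excluding spaces)")
    return {
        "drug_rec_no": compact[:6],  # identifies the active moiety or combination
        "seq1": compact[6:8],        # salt/ester, plant part or extract variation
        "seq2": compact[8:],         # identifies the specific name (e.g. a trade name)
    }

# Examples taken from the article: the substance Ibuprofen and a trade-name record.
print(parse_whodrug_code("001092 01 001"))  # {'drug_rec_no': '001092', 'seq1': '01', 'seq2': '001'}
print(parse_whodrug_code("001092 01 A3D"))  # {'drug_rec_no': '001092', 'seq1': '01', 'seq2': 'A3D'}
```

Applied to the two examples above, it separates the shared active-moiety record 001092 from the name-level identifiers 001 and A3D.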
WHO Drug Dictionary
Chemistry
703
292,800
https://en.wikipedia.org/wiki/Dirac%20spinor
In quantum field theory, the Dirac spinor is the spinor that describes all known fundamental particles that are fermions, with the possible exception of neutrinos. It appears in the plane-wave solution to the Dirac equation, and is a certain combination of two Weyl spinors, specifically, a bispinor that transforms "spinorially" under the action of the Lorentz group. Dirac spinors are important and interesting in numerous ways. Foremost, they are important as they do describe all of the known fundamental particle fermions in nature; this includes the electron and the quarks. Algebraically they behave, in a certain sense, as the "square root" of a vector. This is not readily apparent from direct examination, but it has slowly become clear over the last 60 years that spinorial representations are fundamental to geometry. For example, effectively all Riemannian manifolds can have spinors and spin connections built upon them, via the Clifford algebra. The Dirac spinor is specific to that of Minkowski spacetime and Lorentz transformations; the general case is quite similar. This article is devoted to the Dirac spinor in the Dirac representation. This corresponds to a specific representation of the gamma matrices, and is best suited for demonstrating the positive and negative energy solutions of the Dirac equation. There are other representations, most notably the chiral representation, which is better suited for demonstrating the chiral symmetry of the solutions to the Dirac equation. The chiral spinors may be written as linear combinations of the Dirac spinors presented below; thus, nothing is lost or gained, other than a change in perspective with regards to the discrete symmetries of the solutions. The remainder of this article is laid out in a pedagogical fashion, using notations and conventions specific to the standard presentation of the Dirac spinor in textbooks on quantum field theory. It focuses primarily on the algebra of the plane-wave solutions. The manner in which the Dirac spinor transforms under the action of the Lorentz group is discussed in the article on bispinors. Definition The Dirac spinor is the bispinor in the plane-wave ansatz of the free Dirac equation for a spinor with mass , which, in natural units becomes and with Feynman slash notation may be written An explanation of terms appearing in the ansatz is given below. The Dirac field is , a relativistic spin-1/2 field, or concretely a function on Minkowski space valued in , a four-component complex vector function. The Dirac spinor related to a plane-wave with wave-vector is , a vector which is constant with respect to position in spacetime but dependent on momentum . The inner product on Minkowski space for vectors and is . The four-momentum of a plane wave is where is arbitrary, In a given inertial frame of reference, the coordinates are . These coordinates parametrize Minkowski space. In this article, when appears in an argument, the index is sometimes omitted. The Dirac spinor for the positive-frequency solution can be written as where is an arbitrary two-spinor, concretely a vector. is the Pauli vector, is the positive square root . For this article, the subscript is sometimes omitted and the energy simply written . In natural units, when is added to or when is added to , means in ordinary units; when is added to , means in ordinary units. When m is added to or to it means (which is called the inverse reduced Compton wavelength) in ordinary units. 
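Several expressions referred to in words in this Definition section did not survive extraction; the block below is a sketch of the standard textbook forms they describe (free Dirac equation, plane-wave ansatz and the positive-frequency spinor in the Dirac representation, in natural units), rather than a verbatim reconstruction of the article's own notation.

```latex
% Standard textbook forms (natural units, \hbar = c = 1), Dirac representation.
\begin{align*}
  (i\gamma^\mu \partial_\mu - m)\,\psi(x) &= 0
      && \text{free Dirac equation} \\
  \psi(x) &= u(\vec p)\, e^{-i p \cdot x}
      && \text{plane-wave ansatz} \\
  (\gamma^\mu p_\mu - m)\, u(\vec p) &= 0
      && \text{momentum-space form} \\
  u(\vec p) &= \sqrt{E_{\vec p} + m}\,
      \begin{pmatrix} \phi \\[4pt] \dfrac{\vec\sigma \cdot \vec p}{E_{\vec p} + m}\,\phi \end{pmatrix},
      \qquad E_{\vec p} = +\sqrt{\vec p^{\,2} + m^2}
\end{align*}
```

Here φ is the arbitrary two-spinor and σ the Pauli vector mentioned in the definition above.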
Derivation from Dirac equation The Dirac equation has the form In order to derive an expression for the four-spinor , the matrices and must be given in concrete form. The precise form that they take is representation-dependent. For the entirety of this article, the Dirac representation is used. In this representation, the matrices are These two 4×4 matrices are related to the Dirac gamma matrices. Note that and are 2×2 matrices here. The next step is to look for solutions of the form while at the same time splitting into two two-spinors: Results Using all of the above information to plug into the Dirac equation results in This matrix equation is really two coupled equations: Solve the 2nd equation for and one obtains Note that this solution needs to have in order for the solution to be valid in a frame where the particle has . Derivation of the sign of the energy in this case. We consider the potentially problematic term . If , clearly as . On the other hand, let , with a unit vector, and let . Hence the negative solution clearly has to be omitted, and . End derivation. Assembling these pieces, the full positive energy solution is conventionally written as The above introduces a normalization factor derived in the next section. Solving instead the 1st equation for a different set of solutions are found: In this case, one needs to enforce that for this solution to be valid in a frame where the particle has . The proof follows analogously to the previous case. This is the so-called negative energy solution. It can sometimes become confusing to carry around an explicitly negative energy, and so it is conventional to flip the sign on both the energy and the momentum, and to write this as In further development, the -type solutions are referred to as the particle solutions, describing a positive-mass spin-1/2 particle carrying positive energy, and the -type solutions are referred to as the antiparticle solutions, again describing a positive-mass spin-1/2 particle, again carrying positive energy. In the laboratory frame, both are considered to have positive mass and positive energy, although they are still very much dual to each other, with the flipped sign on the antiparticle plane-wave suggesting that it is "travelling backwards in time". The interpretation of "backwards-time" is a bit subjective and imprecise, amounting to hand-waving when one's only evidence are these solutions. It does gain stronger evidence when considering the quantized Dirac field. A more precise meaning for these two sets of solutions being "opposite to each other" is given in the section on charge conjugation, below. Chiral basis In the chiral representation for , the solution space is parametrised by a vector , with Dirac spinor solution where are Pauli 4-vectors and is the Hermitian matrix square-root. Spin orientation Two-spinors In the Dirac representation, the most convenient definitions for the two-spinors are: and since these form an orthonormal basis with respect to a (complex) inner product. Pauli matrices The Pauli matrices are Using these, one obtains what is sometimes called the Pauli vector: Orthogonality The Dirac spinors provide a complete and orthogonal set of solutions to the Dirac equation. This is most easily demonstrated by writing the spinors in the rest frame, where this becomes obvious, and then boosting to an arbitrary Lorentz coordinate frame. 
In the rest frame, where the three-momentum vanishes: one may define four spinors Introducing the Feynman slash notation the boosted spinors can be written as and The conjugate spinors are defined as which may be shown to solve the conjugate Dirac equation with the derivative understood to be acting towards the left. The conjugate spinors are then and The normalization chosen here is such that the scalar invariant really is invariant in all Lorentz frames. Specifically, this means Completeness The four rest-frame spinors indicate that there are four distinct, real, linearly independent solutions to the Dirac equation. That they are indeed solutions can be made clear by observing that, when written in momentum space, the Dirac equation has the form and This follows because which in turn follows from the anti-commutation relations for the gamma matrices: with the metric tensor in flat space (in curved space, the gamma matrices can be viewed as being a kind of vielbein, although this is beyond the scope of the current article). It is perhaps useful to note that the Dirac equation, written in the rest frame, takes the form and so that the rest-frame spinors can correctly be interpreted as solutions to the Dirac equation. There are four equations here, not eight. Although 4-spinors are written as four complex numbers, thus suggesting 8 real variables, only four of them have dynamical independence; the other four have no significance and can always be parameterized away. That is, one could take each of the four vectors and multiply each by a distinct global phase This phase changes nothing; it can be interpreted as a kind of global gauge freedom. This is not to say that "phases don't matter", as of course they do; the Dirac equation must be written in complex form, and the phases couple to electromagnetism. Phases even have a physical significance, as the Aharonov–Bohm effect implies: the Dirac field, coupled to electromagnetism, is a U(1) fiber bundle (the circle bundle), and the Aharonov–Bohm effect demonstrates the holonomy of that bundle. All this has no direct impact on the counting of the number of distinct components of the Dirac field. In any setting, there are only four real, distinct components. With an appropriate choice of the gamma matrices, it is possible to write the Dirac equation in a purely real form, having only real solutions: this is the Majorana equation. However, it has only two linearly independent solutions. These solutions do not couple to electromagnetism; they describe a massive, electrically neutral spin-1/2 particle. Apparently, coupling to electromagnetism doubles the number of solutions. But of course, this makes sense: coupling to electromagnetism requires taking a real field, and making it complex. With some effort, the Dirac equation can be interpreted as the "complexified" Majorana equation. This is most easily demonstrated in a generic geometrical setting, outside the scope of this article. Energy eigenstate projection matrices It is conventional to define a pair of projection matrices and , that project out the positive and negative energy eigenstates. Given a fixed Lorentz coordinate frame (i.e. a fixed momentum), these are These are a pair of 4×4 matrices. They sum to the identity matrix: are orthogonal and are idempotent It is convenient to notice their trace: Note that the trace, and the orthonormality properties hold independent of the Lorentz frame; these are Lorentz covariants. 
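The explicit projection matrices discussed at the end of this section were likewise lost in extraction; a sketch of the conventional forms, consistent with the properties listed (summing to the identity, orthogonality, idempotence and trace 2), is:

```latex
% Conventional energy-projection operators built from the plane-wave spinors.
\begin{align*}
  \Lambda_\pm(\vec p) &= \frac{\pm\gamma^\mu p_\mu + m}{2m}, \\[4pt]
  \Lambda_+ + \Lambda_- &= \mathbf{1}, \qquad
  \Lambda_\pm^2 = \Lambda_\pm, \qquad
  \Lambda_+ \Lambda_- = \Lambda_- \Lambda_+ = 0, \qquad
  \operatorname{tr}\Lambda_\pm = 2 .
\end{align*}
```

Idempotence and orthogonality follow directly from the on-shell condition p·p = m², using (γ^μ p_μ)² = p·p.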
Charge conjugation Charge conjugation transforms the positive-energy spinor into the negative-energy spinor. Charge conjugation is a mapping (an involution) having the explicit form where denotes the transpose, is a 4×4 matrix, and is an arbitrary phase factor, The article on charge conjugation derives the above form, and demonstrates why the word "charge" is the appropriate word to use: it can be interpreted as the electrical charge. In the Dirac representation for the gamma matrices, the matrix can be written as Thus, a positive-energy solution (dropping the spin superscript to avoid notational overload) is carried to its charge conjugate Note the stray complex conjugates. These can be consolidated with the identity to obtain with the 2-spinor being As this has precisely the form of the negative energy solution, it becomes clear that charge conjugation exchanges the particle and anti-particle solutions. Note that not only is the energy reversed, but the momentum is reversed as well. Spin-up is transmuted to spin-down. It can be shown that the parity is also flipped. Charge conjugation is very much a pairing of Dirac spinor to its "exact opposite". See also Dirac equation Weyl equation Majorana equation Helicity basis Spin(1,3), the double cover of SO(1,3) by a spin group References Quantum mechanics Quantum field theory Spinors Spinor
Dirac spinor
Physics
2,496
681,806
https://en.wikipedia.org/wiki/Near-extremal%20black%20hole
In theoretical physics, a near-extremal black hole is a black hole which is not far from the minimal possible mass that can be compatible with the given charges and angular momentum. The calculations of the properties of near-extremal black holes are usually performed using perturbation theory around the extremal black hole; the expansion parameter is called non-extremality. In supersymmetric theories, near-extremal black holes are often small perturbations of supersymmetric black holes. Such black holes have a very small Hawking temperature and consequently emit a small amount of Hawking radiation. Their black hole entropy can often be calculated in string theory, much like in the case of extremal black holes, at least to the first order in non-extremality. Black holes
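As a concrete worked example of these statements, consider the Reissner–Nordström (charged, non-rotating) case, which the article itself does not single out; in geometrized units (G = c = ħ = k_B = 1) the standard expressions are:

```latex
% Reissner--Nordstrom black hole of mass M and charge Q (geometrized units).
\begin{align*}
  r_\pm &= M \pm \sqrt{M^2 - Q^2}
      && \text{outer and inner horizons; extremality at } M = |Q| \\
  T_H &= \frac{r_+ - r_-}{4\pi r_+^2}
      && \text{Hawking temperature, } T_H \to 0 \text{ as } M \to |Q| \\
  M &= |Q|\,(1 + \epsilon), \quad 0 < \epsilon \ll 1
      && \text{near-extremal expansion in the non-extremality parameter } \epsilon
\end{align*}
```

The small parameter ε plays the role of the non-extremality mentioned above: perturbative calculations are organised as expansions in ε around the extremal (zero-temperature) solution.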
Near-extremal black hole
Physics,Astronomy
171
18,771,595
https://en.wikipedia.org/wiki/Levi-Civita%20parallelogramoid
In the mathematical field of differential geometry, the Levi-Civita parallelogramoid is a quadrilateral in a curved space whose construction generalizes that of a parallelogram in the Euclidean plane. It is named for its discoverer, Tullio Levi-Civita. Like a parallelogram, two opposite sides AA′ and BB′ of a parallelogramoid are parallel (via parallel transport side AB) and the same length as each other, but the fourth side A′B′ will not in general be parallel to or the same length as the side AB, although it will be straight (a geodesic). Construction A parallelogram in Euclidean geometry can be constructed as follows: Start with a straight line segment AB and another straight line segment AA′. Slide the segment AA′ along AB to the endpoint B, keeping the angle with AB constant, and remaining in the same plane as the points A, A′, and B. Label the endpoint of the resulting segment B′ so that the segment is BB′. Draw a straight line A′B′. In a curved space, such as a Riemannian manifold or more generally any manifold equipped with an affine connection, the notion of "straight line" generalizes to that of a geodesic. In a suitable neighborhood (such as a ball in a normal coordinate system), any two points can be joined by a geodesic. The idea of sliding the one straight line along the other gives way to the more general notion of parallel transport. Thus, assuming either that the manifold is complete, or that the construction is taking place in a suitable neighborhood, the steps to producing a Levi-Civita parallelogram are: Start with a geodesic AB and another geodesic AA′. These geodesics are assumed to be parameterized by their arclength in the case of a Riemannian manifold, or to carry a choice of affine parameter in the general case of an affine connection. "Slide" (parallel transport) the tangent vector of AA′ from A to B. The resulting tangent vector at B generates a geodesic via the exponential map. Label the endpoint of this geodesic by B′, and the geodesic itself BB′. Connect the points A′ and B′ by the geodesic A′B′. Quantifying the difference from a parallelogram The length of this last geodesic constructed connecting the remaining points A′B′ may in general be different than the length of the base AB. This difference is measured by the Riemann curvature tensor. To state the relationship precisely, let AA′ be the exponential of a tangent vector X at A, and AB the exponential of a tangent vector Y at A. Then where terms of higher order in the length of the sides of the parallelogram have been suppressed. Discrete approximation Parallel transport can be discretely approximated by Schild's ladder, which approximates Levi-Civita parallelogramoids by approximate parallelograms. Notes References Curvature (mathematics) Differential geometry Types of quadrilaterals
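The construction can be carried out explicitly on the unit 2-sphere, where geodesics are great circles and parallel transport along a geodesic preserves a vector's components along the tangent and the binormal; the sketch below is a minimal numerical illustration (using NumPy), and the particular base point, directions and side lengths are arbitrary choices, not taken from the article.

```python
import numpy as np

def exp_map(p, v, s):
    """Geodesic on the unit sphere from point p with unit tangent v, arclength s."""
    return p * np.cos(s) + v * np.sin(s)

def transport(w, p, v, s):
    """Parallel-transport tangent vector w along the geodesic t -> exp_map(p, v, t) to t = s."""
    n = np.cross(p, v)                    # binormal direction, unchanged by transport
    v_s = -p * np.sin(s) + v * np.cos(s)  # transported tangent direction of the geodesic
    return np.dot(w, v) * v_s + np.dot(w, n) * n

def dist(a, b):
    """Geodesic (great-circle) distance between unit vectors a and b."""
    return np.arccos(np.clip(np.dot(a, b), -1.0, 1.0))

# Base point A and two orthonormal tangent directions at A (illustrative choice).
A = np.array([0.0, 0.0, 1.0])
e1, e2 = np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])
len_AB, len_AA1 = 0.8, 0.5  # side lengths of the parallelogramoid

B = exp_map(A, e1, len_AB)                        # geodesic AB
A1 = exp_map(A, e2, len_AA1)                      # geodesic AA'
X_at_B = transport(len_AA1 * e2, A, e1, len_AB)   # slide the AA' tangent along AB
B1 = exp_map(B, X_at_B / np.linalg.norm(X_at_B), np.linalg.norm(X_at_B))  # geodesic BB'

print(f"|AB|   = {len_AB:.4f}")
print(f"|A'B'| = {dist(A1, B1):.4f}")
```

For this positively curved example the far side A′B′ comes out shorter than the base AB (about 0.70 versus 0.80), in line with the curvature correction described above.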
Levi-Civita parallelogramoid
Physics
641
51,198,091
https://en.wikipedia.org/wiki/O-Desmethylangolensin
O-Desmethylangolensin (O-DMA) is a phytoestrogen. It is an intestinal bacterial metabolite of the soy phytoestrogen daidzein. It is produced in some people, deemed O-DMA producers, but not in others. O-DMA producers were associated with 69% greater mammographic density and 6% greater bone density. See also Equol References Dihydrostilbenoids Phytoestrogens
O-Desmethylangolensin
Chemistry
102
48,497,560
https://en.wikipedia.org/wiki/Allomyces%20catenoides
Allomyces catenoides is a species of fungus in the family Blastocladiaceae. It was described by Frederick Kroeber Sparrow in 1964. References Fungi described in 1964 Blastocladiomycota Fungus species
Allomyces catenoides
Biology
47
52,156,116
https://en.wikipedia.org/wiki/H3K27me3
H3K27me3 is an epigenetic modification to the DNA packaging protein histone H3. It is a mark that indicates the tri-methylation of lysine 27 on the histone H3 protein. This tri-methylation is associated with the downregulation of nearby genes via the formation of heterochromatic regions. Nomenclature H3K27me3 indicates trimethylation of lysine 27 on the histone H3 protein subunit: H3 denotes the histone H3 family, K is the single-letter code for lysine, 27 is the position of the residue counted from the N-terminus, and me3 denotes the tri-methylated state. Lysine methylation Lysine methylation proceeds progressively from mono- to di- to tri-methylation; the tri-methylated state is the one present in H3K27me3. Understanding histone modifications The genomic DNA of eukaryotic cells is wrapped around special protein molecules known as histones. The complexes formed by the looping of the DNA are known as chromatin. The basic structural unit of chromatin is the nucleosome: this consists of the core octamer of histones (H2A, H2B, H3 and H4) as well as a linker histone and about 180 base pairs of DNA. These core histones are rich in lysine and arginine residues. The carboxyl (C) terminal end of these histones contributes to histone-histone interactions, as well as histone-DNA interactions. The amino (N) terminal charged tails are the site of the post-translational modifications, such as the one seen in H3K27me3. Mechanism and function of modification The placement of a repressive mark on lysine 27 requires the recruitment of chromatin regulators by transcription factors. These modifiers are either histone modification complexes, which covalently modify the histones to move around the nucleosomes and open the chromatin, or chromatin remodelling complexes, which involve movement of the nucleosomes without directly modifying them. These histone marks can serve as docking sites for other co-activators, as seen with H3K27me3. This occurs through Polycomb-mediated gene silencing via histone methylation and chromodomain interactions. A Polycomb repressive complex (PRC), PRC2, mediates the tri-methylation of histone 3 on lysine 27 through histone methyltransferase activity. This mark can recruit PRC1, which will bind and contribute to the compaction of the chromatin. The inflammatory transcription factor NF-κB can cause demethylation of H3K27me3 via Jmjd3. H3K27me3 is linked to the repair of DNA damage, particularly repair of double-strand breaks by homologous recombinational repair. Relationship with other modifications H3K27 can undergo a variety of other modifications. It can exist in mono- as well as di-methylated states. The roles of these respective modifications are not as well characterised as tri-methylation. PRC2 is however believed to be implicated in all the different methylations associated with H3K27me. H3K27me1 is linked to promotion of transcription and is seen to accumulate in transcribed genes. Histone-histone interactions play a role in this process. Regulation occurs via Setd2-dependent H3K36me3 deposition. H3K27me2 is broadly distributed within the core histone H3 and is believed to play a protective role by inhibiting non-cell-type-specific enhancers. Ultimately, this leads to the inactivation of transcription. Acetylation is usually linked to the upregulation of genes. This is the case for H3K27ac, which is an active enhancer mark. It is found in distal and proximal regions of genes. It is enriched at transcriptional start sites (TSS). H3K27ac shares a location with H3K27me3 and they interact in an antagonistic manner. H3K27me3 is often seen to interact with H3K4me3 in bivalent domains.
These domains are usually found in embryonic stem cells and are pivotal for proper cell differentiation. H3K27me3 and H3K4me3 determine whether a cell will remain unspecified or will eventually differentiate. The Grb10 gene in mice makes use of these bivalent domains. Grb10 displays imprinted gene expression: genes are expressed from one parental allele while simultaneously being silenced in the other parental allele. Demethylation of H3K27me3 can lead to up-regulation of genes controlling the senescence-associated secretory phenotype (SASP). Other well-characterised modifications are H3K9me3 as well as H4K20me3, which, just like H3K27me3, are linked to transcriptional repression via formation of heterochromatic regions. Mono-methylations of H3K27, H3K9, and H4K20 are all associated with gene activation. Epigenetic implications The post-translational modification of histone tails by either histone modifying complexes or chromatin remodelling complexes is interpreted by the cell and leads to complex, combinatorial transcriptional output. It is thought that a histone code dictates the expression of genes by a complex interaction between the histones in a particular region. The current understanding and interpretation of histones comes from two large-scale projects: ENCODE and the Epigenomic roadmap. The purpose of the epigenomic study was to investigate epigenetic changes across the entire genome. This led to chromatin states, which define genomic regions by grouping the interactions of different proteins and/or histone modifications together. Chromatin states were investigated in Drosophila cells by looking at the binding location of proteins in the genome. Use of ChIP sequencing revealed regions in the genome characterised by different banding. Different developmental stages were profiled in Drosophila as well, with an emphasis placed on the relevance of histone modifications. A look into the data obtained led to the definition of chromatin states based on histone modifications. Certain modifications were mapped, and enrichment was seen to localize in certain genomic regions. Five core histone modifications were found, each linked to a particular function: H3K4me3 (promoters), H3K4me1 (primed enhancers), H3K36me3 (gene bodies), H3K27me3 (polycomb repression), and H3K9me3 (heterochromatin). The human genome was annotated with chromatin states. These annotated states can be used as new ways to annotate a genome independently of the underlying genome sequence. This independence from the DNA sequence enforces the epigenetic nature of histone modifications. Chromatin states are also useful in identifying regulatory elements that have no defined sequence, such as enhancers. This additional level of annotation allows for a deeper understanding of cell-specific gene regulation. A cause-and-effect relationship between sperm-transmitted histone marks and gene expression and development has been demonstrated in offspring and grandoffspring. Clinical significance H3K27me3 is believed to be implicated in some diseases due to its regulation as a repressive mark. Cohen–Gibson syndrome Cohen–Gibson syndrome is a disorder linked to overgrowth and is characterised by dysmorphic facial features and variable intellectual disability. In some cases, a de novo missense mutation in EED was associated with decreased levels of H3K27me3 in comparison to wild type. This decrease was linked to loss of PRC2 activity.
Diffuse midline glioma Diffuse midline glioma, H3K27me3-altered (DMG), also known as diffuse intrinsic pontine glioma (DIPG), is a type of highly aggressive brain tumor mostly found in children. All DMGs exhibit loss of H3K27me3, in about 80% of cases due to a genetic mutation replacing lysine (K) with methionine (M), known as H3K27M. In rare forms, the loss of H3K27me3 is instead mediated by overexpression of an EZH-inhibiting protein, decreasing PRC2 activity. Spectrum disorders There is evidence that downregulation of H3K27me3, in conjunction with differential expression of H3K4me3 and DNA methylation, may play a role in fetal alcohol spectrum disorder (FASD) in C57BL/6J mice. This histone code is believed to affect the peroxisome-associated pathway and induce the loss of peroxisomes to ameliorate oxidative stress. Methods The histone mark H3K27me3 can be detected in a variety of ways: 1. Chromatin immunoprecipitation sequencing (ChIP sequencing) measures the enrichment of DNA fragments that are bound by a targeted protein and immunoprecipitated. It is well optimized and is used in vivo to reveal DNA-protein binding occurring in cells. ChIP-Seq can be used to identify and quantify various DNA fragments for different histone modifications along a genomic region. 2. Micrococcal nuclease sequencing (MNase-seq) is used to investigate regions that are bound by well-positioned nucleosomes. The micrococcal nuclease enzyme is employed to identify nucleosome positioning. Well-positioned nucleosomes are seen to have enrichment of sequences. 3. Assay for transposase-accessible chromatin sequencing (ATAC-seq) is used to look into regions that are nucleosome-free (open chromatin). It uses a hyperactive Tn5 transposase to highlight nucleosome localisation. See also Histone methylation Histone methyltransferase SET domain-containing lysine-specific Methyllysine JARID1B, an enzyme which can reverse the methylation Bivalent chromatin, where this repressing modification is often used with activator H3K4me3 References Epigenetics Post-translational modification
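A minimal sketch of how region-level enrichment might be computed from ChIP-seq-style read positions, assuming toy coordinates, a simple depth normalization, and an arbitrary two-fold cutoff (none of these values come from the article):

```python
# Toy per-region enrichment for a ChIP-seq-style experiment (e.g. an H3K27me3
# pull-down vs. an input control). All coordinates, counts, and the two-fold
# cutoff are hypothetical and for illustration only.
from bisect import bisect_left

def count_in_region(read_starts, start, end):
    """Count reads whose start position falls inside [start, end); input must be sorted."""
    return bisect_left(read_starts, end) - bisect_left(read_starts, start)

def region_enrichment(chip_starts, input_starts, regions, pseudocount=1.0):
    """Return (region, fold enrichment) pairs with a simple depth normalization."""
    chip_starts, input_starts = sorted(chip_starts), sorted(input_starts)
    scale = len(input_starts) / max(len(chip_starts), 1)
    out = []
    for start, end in regions:
        chip = count_in_region(chip_starts, start, end) * scale
        ctrl = count_in_region(input_starts, start, end)
        out.append(((start, end), (chip + pseudocount) / (ctrl + pseudocount)))
    return out

if __name__ == "__main__":
    chip = [100, 120, 860, 870, 880, 890, 900, 910, 920, 930]   # toy read starts
    ctrl = [110, 500, 520, 905]                                  # toy input reads
    regions = [(0, 200), (400, 600), (850, 1000)]
    for region, fold in region_enrichment(chip, ctrl, regions):
        print(region, round(fold, 2), "enriched" if fold >= 2.0 else "-")
```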
H3K27me3
Chemistry
2,110
29,873,791
https://en.wikipedia.org/wiki/Principia%20%28alga%29
Principia is a genus of alga that has been placed in the coralline stem group on the basis of its slightly differentiated thallus; it forms an "intermediate" between Hortonella, Neoprincipia and Archaeolithophyllum. Fossil algae
Principia (alga)
Biology
57
68,926,710
https://en.wikipedia.org/wiki/Thraustochytrium%20pachydermum
Thraustochytrium pachydermum is a species of heterokont. References Heterokont species Marine microorganisms Protists described in 1958
Thraustochytrium pachydermum
Biology
38
3,911,673
https://en.wikipedia.org/wiki/Fluoroacetic%20acid
Fluoroacetic acid is an organofluorine compound with the chemical formula FCH2CO2H. It is a colorless solid that is noted for its relatively high toxicity. The conjugate base, fluoroacetate, occurs naturally in at least 40 plants in Australia, Brazil, and Africa. It is one of only five known organofluorine-containing natural products. Toxicity Fluoroacetic acid is a harmful metabolite of some fluorine-containing drugs (median lethal dose, LD50 = 10 mg/kg in humans). The most common metabolic sources of fluoroacetic acid are fluoroamines and fluoroethers. Fluoroacetic acid can disrupt the Krebs cycle. Its metabolite, fluorocitric acid, is very toxic because it cannot be processed by aconitase in the Krebs cycle (fluorocitrate takes the place of citrate as the substrate); the enzyme is inhibited and the cycle stops working. In contrast with fluoroacetic acid, difluoroacetic acid and trifluoroacetic acid are far less toxic. Its pKa is 2.66, in contrast to 1.24 and 0.23 for difluoroacetic and trifluoroacetic acid, respectively. Uses Fluoroacetic acid is used to manufacture pesticides, especially rodenticides (see sodium fluoroacetate). See also Fluorocitric acid References Carboxylic acids Organofluorides Halogen-containing natural products Respiratory toxins Fluorine-containing natural products Aconitase inhibitors Fluorinated carboxylic acids Plant toxins
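A minimal sketch of how the pKa values quoted above translate into the fraction ionized at a given pH via the Henderson–Hasselbalch relation (the choice of pH 7.4 is illustrative):

```python
# Fraction of an acid present as the conjugate base (A-) at a given pH,
# from the Henderson-Hasselbalch relation: pH = pKa + log10([A-]/[HA]).
def fraction_ionized(pka, ph):
    ratio = 10 ** (ph - pka)        # [A-]/[HA]
    return ratio / (1.0 + ratio)

if __name__ == "__main__":
    # pKa values quoted in the article for the mono-, di- and trifluorinated acids.
    acids = {"fluoroacetic": 2.66, "difluoroacetic": 1.24, "trifluoroacetic": 0.23}
    for name, pka in acids.items():
        print(f"{name:>15}: {100 * fraction_ionized(pka, 7.4):.4f}% ionized at pH 7.4")
```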
Fluoroacetic acid
Chemistry
369
2,113,062
https://en.wikipedia.org/wiki/Piston%20pump
A piston pump is a type of positive displacement pump where the high-pressure seal reciprocates with the piston. Piston pumps can be used to move liquids or compress gases. They can operate over a wide range of pressures. High pressure operation can be achieved without adversely affecting flow rate. Piston pumps can also deal with viscous media and media containing solid particles. This pump type functions through a reciprocating piston and piston cup: the down-stroke creates a pressure differential that fills the pump chamber, and the up-stroke forces the pumped fluid out for use. Piston pumps are often used in scenarios requiring high, consistent pressure and in water irrigation or delivery systems. Types The two main types of piston pump are the lift pump and the force pump. Both types may be operated either by hand or by an engine. Lift pump In a lift pump, the upstroke of the piston draws water, through a valve, into the lower part of the cylinder. On the downstroke, water passes through valves set in the piston into the upper part of the cylinder. On the next upstroke, water is discharged from the upper part of the cylinder via a spout. This type of pump is limited by the height of water that can be supported by air pressure against a vacuum. Force pump In a force pump, the upstroke of the piston draws water, through an inlet valve, into the cylinder. On the downstroke, the water is discharged, through an outlet valve, into the outlet pipe. Piston pumps may be classified as either single-acting and single-effect (the fluid is pumped by a single face of the piston, and the active stroke is in only one direction) or double-acting and double-effect (the fluid is pumped by both faces of the piston, and the strokes in both directions are active). Calculation of delivery rate The calculation of a piston pump's theoretical delivery rate is relatively simple. Single-acting pumps In a single-acting pump, only one side of the piston is in contact with the fluid. As a result of this, only one stroke is a delivery stroke. The theoretical delivery rate can be calculated by using the following equation: Q = (π/4) d² h n, where Q is the delivery rate, d is the diameter of the piston, h is the stroke, and n is the rpm. If the pump has multiple cylinders, Q is multiplied by the number of cylinders. Double-acting pumps In a double-acting pump, both sides of the piston are in contact with the fluid. As a result of this, both strokes are delivery strokes. An approximation of the delivery rate is given by the following equation: Q ≈ 2 (π/4) d² h n. However, this equation fails to take into consideration the volume taken up by the piston rod. The true delivery rate can be calculated accordingly: Q = (π/4) (2d² − d1²) h n, where d1 is equal to the diameter of the piston rod. Fluctuation in delivery rate The piston in a plunger and piston pump does not move at a constant velocity and as a result of this the pressure and delivery fluctuate over the duration of the stroke. The following diagram shows the relation between the angle of the crankshaft and the delivery rate of a single-acting and double-acting pump. The line shows the average delivery rate of the pump. These fluctuations in pressure and delivery can cause undesired effects such as water hammer and thus are generally mitigated by the installation of an air-filled accumulator. The delivery can be further smoothed out by the use of multiple cylinders that are offset from one another.
As a result, the actual delivery rate is often smaller than the theoretical rate and can be found by the following equation: Qs = λ Q, where Qs is the actual delivery rate, Q is the theoretical rate, and λ is the loss coefficient. Others Axial piston pump Radial piston pump See also Plunger pump Diaphragm pump References Pumps
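A minimal sketch of the delivery-rate relations above, assuming hypothetical bore, stroke, speed, rod diameter, and loss coefficient:

```python
import math

def single_acting_rate(d, h, n, cylinders=1):
    """Theoretical delivery rate Q = (pi/4) * d^2 * h * n per cylinder."""
    return (math.pi / 4) * d**2 * h * n * cylinders

def double_acting_rate(d, h, n, d_rod=0.0, cylinders=1):
    """Rod-corrected double-acting rate Q = (pi/4) * (2*d^2 - d_rod^2) * h * n."""
    return (math.pi / 4) * (2 * d**2 - d_rod**2) * h * n * cylinders

if __name__ == "__main__":
    # Hypothetical pump: 0.10 m bore, 0.15 m stroke, 120 rpm, 0.03 m piston rod.
    d, h, n, d_rod = 0.10, 0.15, 120, 0.03
    q_single = single_acting_rate(d, h, n)           # m^3 per minute
    q_double = double_acting_rate(d, h, n, d_rod)    # m^3 per minute
    lam = 0.92                                       # illustrative loss coefficient
    print(f"single-acting : {q_single:.4f} m^3/min")
    print(f"double-acting : {q_double:.4f} m^3/min")
    print(f"actual (Qs=λQ): {lam * q_double:.4f} m^3/min")
```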
Piston pump
Physics,Chemistry
768
46,287,862
https://en.wikipedia.org/wiki/Jonathan%20Rosenberg%20%28SIP%20author%29
Jonathan Rosenberg (born ) is a technologist noted for his work in IP communications. Network World has referred to him as "a pioneer [in] the development of the SIP protocol", and he was included in the 2002 TR35 list of the world's top under-35 innovators, as published by MIT Technology Review. , Rosenberg is the chief technology officer and Head of A.I. at Five9, a cloud contact center provider. Prior to this, he was vice president and chief technology officer for Collaboration at Cisco, having previously worked as Skype's chief technology strategist. Rosenberg is a longtime member of the IETF, where he has served in several leadership positions over his career. As of August 2017, he remains the 8th most prolific author of internet standards. Career Rosenberg has worked in the VOIP field for over 20 years to date. He began working at Lucent Technologies in March 1993 as a Member of the Technical Staff. There, he led a small SIP research lab, acted as a team lead for DSP work, and conducted research in areas of wide area service discovery. It was during his time at Lucent that he received his PhD from Columbia University. In October 1999, Rosenberg left Lucent to serve as the chief technology officer of dynamicsoft. At dynamicsoft, Rosenberg conceived of, drove requirements for, designed, and led software development for the industry's first SIP application server. He also acted as architect and first product manager for the dynamicsoft presence engine; conceived of, helped develop requirements for, and designed a SIP firewall control proxy; and co-developed the architecture for the dynamicsoft network application engine. When dynamicsoft was acquired by Cisco, Rosenberg followed the company. At Cisco, Rosenberg earned the rank of Cisco Fellow, the company's most senior technical position. He drove Cisco's Intercompany Media Engine (IME) product from concept through ship and set technology strategy for Cisco's service provider voice business. In 2009, Rosenberg left Cisco to serve as Chief Technology Strategist at Skype. At Skype, he pioneered the Facebook video calling integration, along with many other developments. In 2013, Rosenberg left Skype to return to Cisco Systems, taking the position of CTO and Vice President of Cloud Collaboration. Shortly thereafter, he was promoted to CTO and Vice President of Collaboration. In September 2018, Rosenberg announced that he was leaving Cisco. In January 2019, he followed his former Cisco Systems boss Rowan Trollope to Five9, joining the company as CTO and Head of A.I. Personal life Jonathan Rosenberg lives in Freehold Township, New Jersey with his wife and two children. See also References Further reading Includes an interview and a short profile of Rosenberg. Living people American chief technology officers Cisco people 1970s births
Jonathan Rosenberg (SIP author)
Technology
568
1,836,008
https://en.wikipedia.org/wiki/L%C3%A9o%20Marion
Léo Edmond Marion (March 22, 1899 – July 16, 1979) was a Canadian organic chemist and academic administrator. He was Vice-President of the National Research Council of Canada. From 1964 until 1965 he was President of the Royal Society of Canada. From 1965 until 1969, he was Dean of the Faculty of Pure and Applied Science at the University of Ottawa. Honours In 1963 he was awarded an Honorary Doctor of Science from the University of British Columbia. In 1965 he was awarded an Honorary Doctor of Science from Carleton University. In 1967 he was made a Companion of the Order of Canada. In 1968 he was awarded an Honorary Doctor of Laws from the University of Saskatchewan. References External links 1899 births 1979 deaths Canadian organic chemists Canadian university and college faculty deans Companions of the Order of Canada Fellows of the Royal Society Fellows of the Royal Society of Canada Scientists from Ontario 20th-century Canadian chemists Academic staff of the University of Ottawa Canadian Members of the Order of the British Empire
Léo Marion
Chemistry
196
36,372,603
https://en.wikipedia.org/wiki/Arctur-1
Arctur-1 was a supercomputer located in Slovenia which was used by scientific and technical users in technologically intensive industries and research. In 2017 it was replaced by Arctur-2. The High Performance Computer (HPC) was located in Gorjansko (Slovenia) and was put into operation in October 2010. Arctur-1 was built with 84 IBM iDataPlex dx360 M3 nodes, each with two Intel Xeon X5650 processors (6 cores each, clocked at 2.66 GHz) for a total of 1008 cores, and 2.66 terabytes of memory (2.66 gigabytes per core), reaching a peak processing power of 10 TFlops (Rpeak). Compute nodes were connected with InfiniBand QDR 40 Gbit/s. The supercomputer was managed by Arctur. References Supercomputing in Europe X86 supercomputers
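A rough cross-check of the quoted peak figure using Rpeak = cores × clock × FLOPs per cycle, assuming 4 double-precision FLOPs per cycle (typical for Westmere-class Xeons, but an assumption here rather than a figure from the article):

```python
# Rough cross-check of the Arctur-1 peak figure: Rpeak = cores * clock * flops/cycle.
nodes = 84
cores_per_node = 2 * 6          # two Xeon X5650 processors, 6 cores each
clock_ghz = 2.66
flops_per_cycle = 4             # assumed value for Westmere-class Xeons

cores = nodes * cores_per_node
rpeak_tflops = cores * clock_ghz * flops_per_cycle / 1000.0
print(f"{cores} cores -> ~{rpeak_tflops:.1f} TFLOPS peak (article quotes ~10 TFLOPS Rpeak)")
```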
Arctur-1
Technology
193
48,246,926
https://en.wikipedia.org/wiki/Artomyces%20nothofagi
Artomyces nothofagi is a species of coral fungus in the family Auriscalpiaceae. Found in southern Chile, it was described as new to science in 2015 by Richard Kneal and Matthew Smith. The specific epithet nothofagi refers to the substrate it grows on, Nothofagus dombeyi. The species is distinguished from other Artomyces species by a combination of smooth spores, largely unbranched fruitbodies, and gloeocystidia that extend beyond the hymenium. Molecular phylogenetic analysis confirms that A. nothofagi is genetically distinct from other members of its genus. References External links Fungi described in 2015 Fungi of Chile Russulales Fungus species
Artomyces nothofagi
Biology
140
61,276,647
https://en.wikipedia.org/wiki/List%20of%20textbooks%20on%20classical%20mechanics%20and%20quantum%20mechanics
This is a list of notable textbooks on classical mechanics and quantum mechanics arranged according to level and surnames of the authors in alphabetical order. Undergraduate Classical mechanics Chapters 1–21. Numerous subsequent editions. Quantum mechanics Advanced undergraduate and graduate Classical mechanics Quantum mechanics Three volumes. Landau, L. D, and Lifshitz, E. M. Course of Theoretical Physics Volume 3 - Quantum Mechanics: Non-Relativistic Theory. Edited by Pitaevskiĭ L. P. Translated by J. B Sykes and J. S Bell, Third edition, revised and enlarged ed., Pergamon Press, 1977. . Leonard I. Schiff (1968) Quantum Mechanics McGraw-Hill Education Davydov A.S. (1965) Quantum Mechanics Pergamon ISBN 9781483172026 Both topics See also List of textbooks in thermodynamics and statistical mechanics List of textbooks in electromagnetism List of books on general relativity Teaching quantum mechanics External links A Physics Book List. John Baez. Department of Mathematics, University of California, Riverside. 1993–1997. Textbooks Lists of science textbooks Mathematics-related lists Physics-related lists Textbooks
List of textbooks on classical mechanics and quantum mechanics
Physics
238
74,929,430
https://en.wikipedia.org/wiki/General%20Company%20for%20Glass%20and%20Refractories
The glass factory, officially called the General Company for Glass and Refractories, is an Iraqi government factory for the production of glass, refractories, and ceramics. It was established in 1971, at a cost of 6,700,000 dinars, in the city of Ramadi, affiliated with the Ministry of Industry and Minerals, 80 kilometers west of Baghdad. The production of 9,000 tons of glass panels and bottles began in an initial phase in February 1972. The factory initially employed 1,375 workers, a number that later reached 5,000 employees. In 1978, the factory relied on the raw materials found in that region, and in 1979 the production capacity reached 22,700 tons of glass; the director of the factory stated that its production was "classified among the finest types of glass." After the Battle of the Mother of All Battles (also known as the Persian Gulf War) in 1991, and the imposition of an economic blockade on many raw materials, the government was forced to import materials with hard currency, and to buy used glass debris from citizens. It collected 2,325 tons between May and August. In 1992, the factory was able to produce 90% of what it had produced before the war. The factory has been closed since 2003, and 2,500 employees still belong to it. The factory contains 3 complexes: a glass complex, a ceramics complex, and a refractory complex. The factory was damaged as a result of the Islamic State's control over it after the Battle of Ramadi, which resulted in the destruction of most of the production factories. The company's director, Nazim Reda Hamad, said, "This company was subjected to acts of sabotage and destruction due to the entry of terrorist groups into it, which led to the destruction of most of the production plants. The company's staff were able to limit the damage and thus it was included in the investment files, which were presented based on the ministry's instructions." The factory is close to the Anbar desert, which is rich in local raw materials. The glass factory derives its water needs through direct pipes from the Euphrates River, which is very close by. The Technical Institute was also established in Ramadi to train and graduate intermediate technicians in glass technology. The purpose of its establishment was firstly: to qualify technical staff who are being appointed for the first time or who want to change the nature of their work, and secondly: to hold various technical, financial, commercial and administrative training courses for the company's employees. Economic expert Abdul Majeed Al-Anbari said, "The Ramadi Glass Factory constitutes a major investment in the natural resources available in Anbar... This industrial institution produces glass extracted from the city of Ramadi, which is considered one of the purest glass in the world." The site and its construction Natural glass is an ancient material, and Iraq is considered one of the oldest glass-manufacturing countries, as its manufacture began in Iraq more than 3,500 years ago. Iraq has a large reserve of pure white sand suitable for the manufacture of glass. Italian engineers said that Anbar is one of the regions of the world richest in silicic acid. Iraq was one of the few Arab countries that took the initiative to benefit from glass sand. It was planned that the glass factory would be established in Rutba and not Ramadi, which is about 300 km from Rutba. The location of Ramadi was preferred over Rutba for several reasons, including that Rutba is located in the middle of the desert of the
Western Region, where it is difficult to obtain water, and making glass requires a lot of water: producing one ton of glass requires 98,000 liters of water. Baghdad Magazine reported in 1967 that geologists "surveyed the Wadi Houran area in Rutba and found that there were approximately 5 million tons of sand suitable for making glass at a depth of 10 meters." In 1969, the chief engineer of the glass factory project said that the first phase, which includes the project management building, garages, sedimentation basins, and the factory fence, had been completed, that the second phase, which includes the fuel tanks, cooling tower, and glass paste making shops, was 60% complete, and that work had begun on the third stage. A team of 60 Russian experts helped build the factory and trained Iraqi cadres to work in it and manage it. The Journal of the California Institute of International Studies reported that Iraq would no longer need to import glass. In 1987, a contract for the expansion of the factory was awarded to Belgian contractors, so production increased and products diversified. During the imposition of the economic blockade on Iraq, Glass Industry magazine reported that a Russian team went to the glass factory in an effort to modernize and expand it. On March 18, 2021, investment contracts were concluded between the glass factory and Russian companies to rehabilitate and operate the company's factories and establish new ones. Ramadi west of the Euphrates The city of Ramadi was limited to the eastern side of the Euphrates, and when the glass factory was established west of the Euphrates, 7 kilometers away, urban growth began in the 1970s near the factory, and then the area increased in growth after the establishment of Anbar University. The area in which the glass factory is located is called the glass factory area, according to the municipal designation, and it is an industrial area. References Al Anbar Governorate Companies of Iraq Glassmaking companies
General Company for Glass and Refractories
Materials_science,Engineering
1,131
32,170,974
https://en.wikipedia.org/wiki/Cfr10I/Bse634I
In molecular biology, the Cfr10I/Bse634I family of restriction endonucleases includes the type II restriction endonucleases Cfr10I and Bse634I. They exhibit a conserved tetrameric architecture that is of functional importance, wherein two dimers are arranged, back-to-back, with their putative DNA-binding clefts facing opposite directions. These clefts are formed between two monomers that interact, mainly via hydrophobic interactions supported by a few hydrogen bonds, to form a U-shaped dimer. Each monomer is folded to form a compact alpha-beta structure, whose core is made up of a five-stranded mixed beta-sheet. The monomer may be split into separate N-terminal and C-terminal subdomains at a hinge located in helix alpha3. Both Cfr10I and Bse634I recognise the double-stranded sequence RCCGGY and cleave after the purine R. Recognition sequence: 5' RCCGGY 3' / 3' YGGCCR 5'. Cut: 5' ---R CCGGY--- 3' / 3' ---YGGCC R--- 5'. References Protein domains Bacterial enzymes Restriction enzymes EC 3.1
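A minimal sketch of locating RCCGGY sites and the top-strand cut position described above, using the IUPAC codes R (A/G) and Y (C/T); the example sequence is hypothetical:

```python
import re

# RCCGGY with IUPAC ambiguity codes: R = A or G, Y = C or T.
SITE = re.compile(r"[AG]CCGG[CT]")

def find_sites(seq):
    """Return 0-based start positions of RCCGGY sites on the top strand."""
    return [m.start() for m in SITE.finditer(seq.upper())]

if __name__ == "__main__":
    dna = "TTACCGGTAAGGCCGGCTT"   # hypothetical toy sequence
    for start in find_sites(dna):
        site = dna[start:start + 6]
        # Cleavage is after the purine R, i.e. between the first and second base
        # of the recognition site on the top strand: R / CCGGY.
        print(f"site {site} at {start}: top-strand cut between positions {start} and {start + 1}")
```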
Cfr10I/Bse634I
Biology
260
46,855,068
https://en.wikipedia.org/wiki/Allomyces%20neomoniliformis
Allomyces neomoniliformis is a species of fungus from Japan. References External links Mycobank entry Blastocladiomycota Fungi described in 1940 Fungi of Japan Fungus species
Allomyces neomoniliformis
Biology
41
71,485,381
https://en.wikipedia.org/wiki/Low-rise%20high-density
Low-rise high-density housing refers to residential developments which are typically 4 stories or less in height, have a high number of housing units per acre of land, and have between 35 and 80 dwellings per hectare. This housing type is thought to provide a middle ground between detached single-family homes and high-rise apartment buildings. Background Origins and Early Developments Although the concept of low-rise high-density housing can be traced back to Le Corbusier's unbuilt Roq et Rob project from 1949, a more direct influence was the pioneering work of the Swiss firm Atelier 5, whose Siedlung Halen project built in Bern, Switzerland in 1955-61 became a seminal example of the typology. Rise in popularity during the 1960s and 70s In the 1960s and 1970s, low-rise high-density housing gained popularity among architects as a reaction to the perceived social failures of high-rise "tower-in-the-park" public housing projects. Architects and planners began to rethink and reintroduce this housing model as a way to combine the benefits of urban and suburban living. Characteristics Low-rise buildings' height, often 2-4 stories, intended to maintain a human-scaled, "homelike" environment High number of housing units packed into a relatively small land area, creating density Incorporation of shared common spaces and amenities to offset smaller private living spaces Integration with walkable, transit-oriented neighborhoods to reduce car dependency Ability to be built in existing urban areas without major ecosystem destruction Main proponents The low-rise, high-density approach has regained popularity as an alternative to suburban sprawl and high-rise housing, offering a way to create density while providing a sense of community and connection to the ground. Le Corbusier: His Roq et Rob project in 1949 is considered an early influence on the low-rise, high-density approach. Atelier 5: The Swiss architecture firm designed Siedlung Halen in Bern, Switzerland from 1959-61, which is considered the most influential low-rise, high-density project of the 1960s. The New York State Urban Development Corporation (UDC): In 1973, the UDC, along with the Institute for Architecture and Urban Studies, presented the Marcus Garvey Park Village project in Brownsville, Brooklyn and the Another Chance for Housing: Low Rise Alternatives exhibition at the Museum of Modern Art. This showcased a future for housing in the U.S. that combined urban and suburban living benefits. Seven young architecture firms: Engaged by the UDC to further develop the low-rise, high-density prototype presented at MoMA, drawing from the pioneering work of architects like Atelier 5. Contemporary architects and researchers: Figures like Karen Kubey, exhibitor of Suburban Alternatives, which traced the typology of low-rise, high-density housing over time, advocate for this approach. Benefits of Low-Rise High-Density The aim of this housing model is to deliver the benefits of density, such as supporting public services and reducing environmental impact, while still providing residents with a sense of community and individual identity more typical of single-family homes.
Studies have found that low-rise high-density developments have several potential benefits: Increasing property values in the surrounding area Attracting new businesses and employers to the community Reducing urban decay by repurposing unused buildings Decreasing overall energy consumption and carbon emissions compared to low-density sprawl Challenges and Considerations While low-rise high-density housing is seen as a valuable alternative to high-rise towers, it presents several challenges: Overcoming regulatory constraints and zoning that favors single-family homes Addressing resident perceptions and preferences for detached housing Carefully planning maintenance costs and sinking funds to avoid disrepair Integrating the developments seamlessly into existing neighborhoods Examples The low-rise high-density housing projects built in the London borough of Camden between 1965 and 1973 stood in contrast to the post-war high-rise models, aiming to create a more "homelike" and human-scaled environment for residents. In 2021, Minneapolis eliminated single-family zoning and permitted the construction of duplexes, triplexes, and fourplexes on all residential lots throughout the city, with the goal of increasing housing supply and affordability. Advocacy and Criticism Advocates of low-rise, high-density architecture argue that this type of development can provide an effective "missing middle" between low-density suburbs and high-rise towers. 3-7 story mid-rise buildings, often in a perimeter block configuration with a central courtyard, are cited as an example of this "missing middle" that can enable walkable neighborhoods with multiple different uses and housing types. Proponents suggest that this medium-density approach can achieve higher densities without the perceived downsides of high-rise towers, such as limited access to outdoor space, reduced community cohesion, and higher maintenance costs. Mid-rise, medium-density development is more common in Europe than in North America and Australia, where urban development has tended towards either low-density suburbs or high-rise towers. Criticisms or challenges associated with low-rise, high-density architecture include: Difficulty Achieving Density Targets: low-rise, high-density housing can be an effective "missing middle" between low-density suburbs and high-rise towers, but achieving the desired density levels may be challenging. The Residences at Sandford Lodge project in Ireland is cited as achieving around 100 units per hectare, which is comparable to taller apartment buildings, but this may not always be the case. Potential Overlooking and Privacy Issues: The Sandford Lodge project is noted for addressing issues of overlooking adjoining properties and privacy through its design, but this may not be universal among low-rise, high-density developments. Ensuring adequate privacy and separation between units can be a design challenge. Lack of Contemporary Scholarship: Low-rise, high-density housing has been an under-examined typology and has had limited contemporary scholarship and examples compared to single-family homes or high-rise developments. Halting of Prototype Development: The New York State Urban Development Corporation's efforts to further develop low-rise, high-density prototypes were halted in the 1970s due to policy changes.
Economic Factors and Public Perception: Architect Michael Pyatok has discussed the economic factors influencing low-rise, high-density housing and the need to "charm the public" in order to make it appealing, implying potential resistance or skepticism from the public. See also Low-rise building Louis Sauer Postmodern architecture References Postmodern architecture Structural system
Low-rise high-density
Technology,Engineering
1,314
47,069,434
https://en.wikipedia.org/wiki/Arun%20Kumar%20Basak
Arun Kumar Basak FInstP CPhys (born October 17, 1941) is a Bangladeshi physicist. He is Professor Emeritus in the Department of Physics, University of Rajshahi. Early life and education Basak was born in Radhanagor of Pabna town, Bengal Presidency, British India to parents Haripada Basak and Usha Rani Basak. Basak matriculated in 1957, securing First Division from R.M. Academy. He secured the second position in the merit list in the Intermediate Science Examination in 1959 from Govt. Edward College. He was placed in First Class with the first position in the B.Sc. (Hons) examination from Rajshahi College in 1961. In the M.Sc. Examination (1963) from the University of Rajshahi, he obtained the first position in first class and was awarded an RU Gold Medal. Career In December 1963, Basak joined the University of Rajshahi as a lecturer in the Department of Physics. In 1978, Basak was appointed as an associate professor by the University of Dhaka, but he preferred to stay in Rajshahi, where he became associate professor in the later part of 1978. He was awarded a merit scholarship for securing the highest marks in the Faculty of Science and got admission to Imperial College, London; owing to the 1965 Indo-Pak war, he could not avail himself of the opportunity. In 1972, he went to the University of Birmingham with a Commonwealth Scholarship. He worked with the tensor polarized deuteron and the polarized 3He beams, the latter being the only one of its kind in the world. He earned his Ph.D. degree in 1975. Professional membership Senior associate of the International Centre for Theoretical Physics at Trieste, Italy during 1987–96. Elected as a fellow of the Bangladesh Academy of Sciences in 2001. Elected as a fellow of the Institute of Physics (London) in 2001. Was a principal investigator from Bangladesh in a collaborative project which was funded by the US National Science Foundation. Fellow of the Bangladesh Physical Society from 1987 (life membership). A member of the American Physical Society during 2000-03 and from 2013 (life membership). Others Was a post-doctoral fellow in nuclear physics at the Ohio State University, United States during 1981–82. An associate member of ICTP, Italy during 1988–1995. Visiting scholar at Southern Illinois University, US in 1997. Visiting professor at Kent State University, US. Awards Bangladesh Academy of Sciences Gold Medal in Physical Sciences (2003) Star Lifetime Award on Physics (2016) References External links Top Publications of A. K. Basak 1941 births Living people Bengali Hindus Bangladeshi Hindus Bangladeshi physicists University of Rajshahi alumni Academic staff of the University of Rajshahi Alumni of the University of Birmingham Bengali physicists Nuclear physicists Theoretical physicists Fellows of Bangladesh Academy of Sciences Physics educators People from Pabna District Rajshahi College alumni Fellows of the Bangladesh Physical Society Pabna Edward College alumni
Arun Kumar Basak
Physics
594
3,057,518
https://en.wikipedia.org/wiki/Backward-wave%20oscillator
A backward wave oscillator (BWO), also called carcinotron or backward wave tube, is a vacuum tube that is used to generate microwaves up to the terahertz range. Belonging to the traveling-wave tube family, it is an oscillator with a wide electronic tuning range. An electron gun generates an electron beam that interacts with a slow-wave structure. It sustains the oscillations by propagating a traveling wave backwards against the beam. The generated electromagnetic wave power has its group velocity directed oppositely to the direction of motion of the electrons. The output power is coupled out near the electron gun. It has two main subtypes, the M-type (M-BWO), the most powerful, and the O-type (O-BWO). The output power of the O-type is typically in the range of 1 mW at 1000 GHz to 50 mW at 200 GHz. Carcinotrons are used as powerful and stable microwave sources. Due to the good quality wavefront they produce (see below), they find use as illuminators in terahertz imaging. Backward wave oscillators were first demonstrated in 1951, the M-type by Bernard Epsztein and the O-type by Rudolf Kompfner. The M-type BWO is a voltage-controlled non-resonant extrapolation of magnetron interaction. Both types are tunable over a wide range of frequencies by varying the accelerating voltage. They can be swept through the band fast enough that they appear to radiate over the whole band at once, which makes them suitable for effective radar jamming, quickly tuning into the radar frequency. Carcinotrons allowed airborne radar jammers to be highly effective. However, frequency-agile radars can hop frequencies fast enough to force the jammer to use barrage jamming, diluting its output power over a wide band and significantly impairing its efficiency. Carcinotrons are used in research, civilian and military applications. For example, the Czechoslovak Kopac passive sensor and Ramona passive sensor air defense detection systems employed carcinotrons in their receiver systems. Basic concept All travelling-wave tubes operate in the same general fashion, and differ primarily in details of their construction. The concept is dependent on a steady stream of electrons from an electron gun that travel down the center of the tube (see adjacent concept diagram). Surrounding the electron beam is some sort of radio frequency source signal; in the case of the traditional klystron this is a resonant cavity fed with an external signal, whereas in more modern devices there are a series of these cavities or a helical metal wire fed with the same signal. As the electrons travel down the tube, they interact with the RF signal. The electrons are attracted to areas with maximum positive bias and repelled from negative areas. This causes the electrons to bunch up as they are repelled or attracted along the length of the tube, a process known as velocity modulation. This process makes the electron beam take on the same general structure as the original signal; the density of the electrons in the beam matches the relative amplitude of the RF signal in the induction system. The electron current is a function of the details of the gun, and is generally orders of magnitude more powerful than the input RF signal. The result is a signal in the electron beam that is an amplified version of the original RF signal. As the electrons are moving, they induce a magnetic field in any nearby conductor. This allows the now-amplified signal to be extracted. In systems like the magnetron or klystron, this is accomplished with another resonant cavity.
In the helical designs, this process occurs along the entire length of the tube, reinforcing the original signal in the helical conductor. The "problem" with traditional designs is that they have relatively narrow bandwidths; designs based on resonators will work with signals within 10% or 20% of their design, as this is physically built into the resonator design, while the helix designs have a much wider bandwidth, perhaps 100% on either side of the design peak. BWO The BWO is built in a fashion similar to the helical TWT. However, instead of the RF signal propagating in the same (or similar) direction as the electron beam, the original signal travels at right angles to the beam. This is normally accomplished by drilling a hole through a rectangular waveguide and shooting the beam through the hole. The waveguide then goes through two right angle turns, forming a C-shape and crossing the beam again. This basic pattern is repeated along the length of the tube so the waveguide passes across the beam several times, forming a series of S-shapes. The original RF signal enters from what would be the far end of the TWT, where the energy would be extracted. The effect of the signal on the passing beam causes the same velocity modulation effect, but because of the direction of the RF signal and specifics of the waveguide, this modulation travels backward along the beam, instead of forward. This propagation, the slow-wave, reaches the next hole in the folded waveguide just as the same phase of the RF signal does. This causes amplification just like the traditional TWT. In a traditional TWT, the speed of propagation of the signal in the induction system has to be similar to that of the electrons in the beam. This is required so that the phase of the signal lines up with the bunched electrons as they pass the inductors. This places limits on the selection of wavelengths the device can amplify, based on the physical construction of the wires or resonant chambers. This is not the case in the BWO, where the electrons pass the signal at right angles and their speed of propagation is independent of that of the input signal. The complex serpentine waveguide places strict limits on the bandwidth of the input signal, such that a standing wave is formed within the guide. But the velocity of the electrons is limited only by the allowable voltages applied to the electron gun, which can be easily and rapidly changed. Thus the BWO takes a single input frequency and produces a wide range of output frequencies. Carcinotron The device was originally given the name "carcinotron", after the Greek name for the crayfish, which swim backwards. By simply changing the supply voltage, the device could produce any required frequency across a band that was much larger than any existing microwave amplifier could match - the cavity magnetron worked at a single frequency defined by the physical dimensions of their resonators, and while the klystron amplified an external signal, it only did so efficiently within a small range of frequencies. Previously, jamming a radar was a complex and time-consuming operation. Operators had to listen for potential frequencies being used, set up one of a bank of amplifiers on that frequency, and then begin broadcasting. When the radar station realized what was happening, they would change their frequencies and the process would begin again. 
In contrast, the carcinotron could sweep through all the possible frequencies so rapidly that it appeared to be a constant signal on all of the frequencies at once. Typical designs could generate hundreds or low thousands of watts, so at any one frequency, there might be a few watts of power that is received by the radar station. However, at long range the amount of energy from the original radar broadcast that reaches the aircraft is only a few watts at most, so the carcinotron can overpower them. The system was so powerful that it was found that a carcinotron operating on an aircraft would begin to be effective even before it rose above the radar horizon. As it swept through the frequencies it would broadcast on the radar's operating frequency at what were effectively random times, filling the display with random dots any time the antenna was pointed near it, perhaps 3 degrees on either side of the target. There were so many dots that the display simply filled with white noise in that area. As it approached the station, the signal would also begin to appear in the antenna's sidelobes, creating further areas that were blanked out by noise. At close range, on the order of , the entire radar display would be completely filled with noise, rendering it useless. The concept was so powerful as a jammer that there were serious concerns that ground-based radars were obsolete. Airborne radars had the advantage that they could approach the aircraft carrying the jammer, and, eventually, the huge output from their transmitter would "burn through" the jamming. However, interceptors of the era relied on ground direction to get into range, using ground-based radars. This represented an enormous threat to air defense operations. For ground radars, the threat was eventually solved in two ways. The first was that radars were upgraded to operate on many different frequencies and switch among them randomly from pulse to pulse, a concept now known as frequency agility. Some of these frequencies were never used in peacetime, and highly secret, with the hope that they would not be known to the jammer in wartime. The carcinotron could still sweep through the entire band, but then it would be broadcasting on the same frequency as the radar only at random times, reducing its effectiveness. The other solution was to add passive receivers that triangulated on the carcinotron broadcasts, allowing the ground stations to produce accurate tracking information on the location of the jammer and allowing them to be attacked. The slow-wave structure The needed slow-wave structures must support a radio frequency (RF) electric field with a longitudinal component; the structures are periodic in the direction of the beam and behave like microwave filters with passbands and stopbands. Due to the periodicity of the geometry, the fields are identical from cell to cell except for a constant phase shift Φ. This phase shift, a purely real number in a passband of a lossless structure, varies with frequency. According to Floquet's theorem (see Floquet theory), the RF electric field E(z,t) can be described at an angular frequency ω, by a sum of an infinity of "spatial or space harmonics" En where the wave number or propagation constant kn of each harmonic is expressed as kn = (Φ + 2nπ) / p (-π < Φ < +π) z being the direction of propagation, p the pitch of the circuit and n an integer. 
Two examples of slow-wave circuit characteristics are shown in the ω-k or Brillouin diagram: on figure (a), the fundamental n=0 is a forward space harmonic (the phase velocity vn=ω/kn has the same sign as the group velocity vg=dω/dkn), and the synchronism condition for backward interaction is at point B, the intersection of the line of slope ve - the beam velocity - with the first backward (n = -1) space harmonic; on figure (b) the fundamental (n=0) is backward. A periodic structure can support both forward and backward space harmonics, which are not modes of the field, and cannot exist independently, even if a beam can be coupled to only one of them. As the magnitude of the space harmonics decreases rapidly when the value of n is large, the interaction can be significant only with the fundamental or the first space harmonic. M-type BWO The M-type carcinotron, or M-type backward wave oscillator, uses crossed static electric field E and magnetic field B, similar to the magnetron, for focussing an electron sheet beam drifting perpendicularly to E and B, along a slow-wave circuit, with a velocity E/B. Strong interaction occurs when the phase velocity of one space harmonic of the wave is equal to the electron velocity. Both Ez and Ey components of the RF field are involved in the interaction (Ey parallel to the static E field). Electrons which are in a decelerating Ez electric field of the slow-wave lose the potential energy they have in the static electric field E and reach the circuit. The sole electrode is more negative than the cathode, in order to avoid collecting those electrons having gained energy while interacting with the slow-wave space harmonic. O-type BWO The O-type carcinotron, or O-type backward wave oscillator, uses an electron beam longitudinally focused by a magnetic field, and a slow-wave circuit interacting with the beam. A collector collects the beam at the end of the tube. O-BWO spectral purity and noise The BWO is a voltage tunable oscillator, whose voltage tuning rate is directly related to the propagation characteristics of the circuit. The oscillation starts at a frequency where the wave propagating on the circuit is synchronous with the slow space charge wave of the beam. Inherently the BWO is more sensitive than other oscillators to external fluctuations. Nevertheless, its ability to be phase- or frequency-locked has been demonstrated, leading to successful operation as a heterodyne local oscillator. Frequency stability The frequency–voltage sensitivity is given by the relation Δf/f = 1/2 [1/(1 + |vΦ/vg|)] (ΔV0/V0) The oscillation frequency is also sensitive to the beam current (called "frequency pushing"). The current fluctuations at low frequencies are mainly due to the anode voltage supply, and the sensitivity to the anode voltage is given by Δf/f = 3/4 [(ωq/ω)/(1 + |vΦ/vg|)] (ΔVa/Va) This sensitivity, as compared to the cathode voltage sensitivity, is reduced by the ratio ωq/ω, where ωq is the angular plasma frequency; this ratio is of the order of a few times 10−2. Noise Measurements on submillimeter-wave BWO's (de Graauw et al., 1978) have shown that a signal-to-noise ratio of 120 dB per MHz could be expected in this wavelength range. In heterodyne detection using a BWO as a local oscillator, this figure corresponds to a noise temperature added by the oscillator of only 1000–3000 K. Notes References Johnson, H. R. (1955). Backward-wave oscillators. Proceedings of the IRE, 43(6), 684–697. Ramo S., Whinnery J. R., Van Duzer T.
- Fields and Waves in Communication Electronics (3rd ed.1994) John Wiley & Sons Kantorowicz G., Palluel P. - Backward Wave Oscillators, in Infrared and Millimeter Waves, Vol 1, Chap. 4, K. Button ed., Academic Press 1979 de Graauw Th., Anderegg M., Fitton B., Bonnefoy R., Gustincic J. J. - 3rd Int. Conf. Submm. Waves, Guilford University of Surrey (1978) Convert G., Yeou T., in Millimeter and Submillimeter Waves, Chap. 4, (1964) Illife Books, London External links Virtual Valve Museum Thomson CSF CV6124 (Wayback Machine) Microwave technology Terahertz technology Vacuum tubes
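A minimal sketch evaluating the frequency-stability relations quoted above, assuming illustrative values for |vΦ/vg|, ωq/ω, and the relative supply fluctuations (none taken from the article):

```python
# Illustrative evaluation of the BWO frequency-stability relations quoted above:
#   df/f = 1/2 * [1 / (1 + |v_phi/v_g|)] * (dV0/V0)        (cathode voltage)
#   df/f = 3/4 * [(w_q/w) / (1 + |v_phi/v_g|)] * (dVa/Va)   (anode voltage)
def df_over_f_cathode(vphi_over_vg, dV0_over_V0):
    return 0.5 * dV0_over_V0 / (1 + abs(vphi_over_vg))

def df_over_f_anode(vphi_over_vg, wq_over_w, dVa_over_Va):
    return 0.75 * wq_over_w * dVa_over_Va / (1 + abs(vphi_over_vg))

if __name__ == "__main__":
    # Hypothetical operating point: |v_phi/v_g| = 1.5, w_q/w = 0.02,
    # 0.1% ripple on both supplies.
    print(f"cathode: df/f = {df_over_f_cathode(1.5, 1e-3):.2e}")
    print(f"anode  : df/f = {df_over_f_anode(1.5, 0.02, 1e-3):.2e}")
```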
Backward-wave oscillator
Physics
3,173
187,315
https://en.wikipedia.org/wiki/Antenna%20%28zoology%29
Antennae (singular: antenna), sometimes referred to as "feelers", are paired appendages used for sensing in arthropods. Antennae are connected to the first one or two segments of the arthropod head. They vary widely in form but are always made of one or more jointed segments. While they are typically sensory organs, the exact nature of what they sense and how they sense it is not the same in all groups. Functions may variously include sensing touch, air motion, heat, vibration (sound), and especially smell or taste. Antennae are sometimes modified for other purposes, such as mating, brooding, swimming, and even anchoring the arthropod to a substrate. Larval arthropods have antennae that differ from those of the adult. Many crustaceans, for example, have free-swimming larvae that use their antennae for swimming. Antennae can also locate other group members if the insect lives in a group, like the ant. The common ancestor of all arthropods likely had one pair of uniramous (unbranched) antenna-like structures, followed by one or more pairs of biramous (having two major branches) leg-like structures, as seen in some modern crustaceans and fossil trilobites. Except for the chelicerates and proturans, which have none, all non-crustacean arthropods have a single pair of antennae. Crustaceans Crustaceans bear two pairs of antennae. The pair attached to the first segment of the head are called primary antennae or antennules. This pair is generally uniramous, but is biramous in crabs and lobsters and remipedes. The pair attached to the second segment are called secondary antennae or simply antennae. The second antennae are plesiomorphically biramous, but many species later evolved uniramous pairs. The second antennae may be significantly reduced (e.g. remipedes) or apparently absent (e.g. barnacles). The subdivisions of crustacean antennae have many names, including flagellomeres (a shared term with insects), annuli, articles, and segments. The terminal ends of crustacean antennae have two major categorizations: segmented and flagellate. An antenna is considered segmented if each of the annuli is separate from those around it and has individual muscle attachments. Flagellate antennae, on the other hand, have muscle attachments only around the base, acting as a hinge for the flagellum, a flexible string of annuli with no muscle attachment. There are several notable non-sensory uses of antennae in crustaceans. Many crustaceans have a mobile larval stage called a nauplius, which is characterized by its use of antennae for swimming. Barnacles, which are highly modified crustaceans, use their antennae to attach to rocks and other surfaces. The second antennae in the burrowing Hippoidea and Corystidae have setae that interlock to form a tube or "snorkel" which funnels filtered water over the gills. Insects Some claim insects evolved from prehistoric crustaceans, and they have secondary antennae like crustaceans, but not primary antennae. Antennae are the primary olfactory sensors of insects and are accordingly well-equipped with a wide variety of sensilla (singular: sensillum). Paired, mobile, and segmented, they are located between the eyes on the forehead. Embryologically, they represent the appendages of the second head segment. All insects have antennae; however, they may be greatly reduced in the larval forms. Amongst the non-insect classes of the Hexapoda, both Collembola and Diplura have antennae, but Protura do not. Antennal fibrillae play an important role in Culex pipiens mating practices.
The erection of these fibrillae is considered to be the first stage in reproduction. These fibrillae serve different functions across the sexes. As antennal fibrillae are used by female C. pipiens to locate hosts to feed on, male C. pipiens utilize them to locate female mates. Structure The three basic segments of the typical insect antenna are the scape or scapus (base), the pedicel or pedicellus (stem), and finally the flagellum, which often comprises many units known as flagellomeres. The pedicel (the second segment) contains the Johnston's organ which is a collection of sensory cells. The scape is mounted in a socket in a more or less ring-shaped sclerotised region called the torulus, often a raised portion of the insect's head capsule. The socket is closed off by the membrane into which the base of the scape is set. However, the antenna does not hang free on the membrane, but pivots on a rigidly sprung projection from the rim of the torulus. That projection on which the antenna pivots is called the antennifer. The whole structure enables the insect to move the antenna as a whole by applying internal muscles connected to the scape. The pedicel is flexibly connected to the distal end of the scape and its movements in turn can be controlled by muscular connections between the scape and pedicel. The number of flagellomeres can vary greatly between insect species, and often is of diagnostic importance. True flagellomeres are connected by membranous linkage that permits movement, though the flagellum of "true" insects does not have any intrinsic muscles. Some other Arthropoda do however have intrinsic muscles throughout the flagellum. Such groups include the Symphyla, Collembola and Diplura. In many true insects, especially the more primitive groups such as Thysanura and Blattodea, the flagellum partly or entirely consists of a flexibly connected string of small ring-shaped annuli. The annuli are not true flagellomeres, and in a given insect species the number of annuli generally is not as consistent as the number of flagellomeres in most species. In many beetles and in the chalcidoid wasps, the apical flagellomeres form a club shape, called the clava. The collective term for the segments between the club and the antennal base is the funicle; traditionally in describing beetle anatomy, the term "funicle" refers to the segments between the club and the scape. However, traditionally in working on wasps the funicle is taken to comprise the segments between the club and the pedicel. Quite commonly the funicle beyond the pedicel is quite complex in Endopterygota such as beetles, moths and Hymenoptera, and one common adaptation is the ability to fold the antenna in the middle, at the joint between the pedicel and the flagellum. This gives an effect like a "knee bend", and such an antenna is said to be geniculate. Geniculate antennae are common in the Coleoptera and Hymenoptera. They are important for insects like ants that follow scent trails, for bees and wasps that need to "sniff" the flowers that they visit, and for beetles such as Scarabaeidae and Curculionidae that need to fold their antennae away when they self-protectively fold up all their limbs in defensive attitudes. Because the funicle is without intrinsic muscles, it generally must move as a unit, in spite of being articulated. However, some funicles are complex and very mobile. For example, the Scarabaeidae have lamellate antennae that can be folded tightly for safety or spread openly for detecting odours or pheromones. 
The insect manages such actions by changes in blood pressure, by which it exploits elasticity in walls and membranes in the funicles, which are in effect erectile. In the groups with more uniform antennae (for example: millipedes), all segments are called antennomeres. Some groups have a simple or variously modified apical or subapical bristle called an arista (this may be especially well-developed in various Diptera). Functions Olfactory receptors on the antennae bind to free-floating molecules, such as water vapour, and odours including pheromones. The neurons that possess these receptors signal this binding by sending action potentials down their axons to the antennal lobe in the brain. From there, neurons in the antennal lobes connect to mushroom bodies that identify the odour. The sum of the electrical potentials of the antennae to a given odour can be measured using an electroantennogram. In the monarch butterfly, antennae are necessary for proper time-compensated solar compass orientation during migration. Antennal clocks exist in monarchs, and they are likely to provide the primary timing mechanism for sun compass orientation. In the African cotton leafworm, antennae have an important function in signaling courtship. Specifically, antennae are required for males to answer the female mating call. Although females do not require antennae for mating, a mating that resulted from a female without antennae was abnormal. In the diamondback moth, antennae serve to gather information about a host plant's taste and odor. After the desired taste and odor has been identified, the female moth will deposit her eggs onto the plant. Giant swallowtail butterflies also rely on antenna sensitivity to volatile compounds to identify host plants. It was found that females are actually more responsive with their antenna sensing, most likely because they are responsible for oviposition on the correct plant. In the crepuscular hawk moth (Manduca sexta), antennae aid in flight stabilization. Similar to halteres in Dipteran insects, the antennae transmit coriolis forces through the Johnston's organ that can then be used for corrective behavior. A series of low-light, flight stability studies in which moths with flagellae amputated near the pedicel showed significantly decreased flight stability over those with intact antennae. To determine whether there may be other antennal sensory inputs, a second group of moths had their antennae amputated and then re-attached, before being tested in the same stability study. These moths showed slightly decreased performance from intact moths, indicating there are possibly other sensory inputs used in flight stabilization. Re-amputation of the antennae caused a drastic decrease in flight stability to match that of the first amputated group. References Arthropod anatomy Zoology
Antenna (zoology)
Biology
2,163
19,174,753
https://en.wikipedia.org/wiki/Mueller%E2%80%93Hinton%20agar
Mueller Hinton agar is a type of growth medium used in microbiology to culture bacterial isolates and test their susceptibility to antibiotics. This medium was first developed in 1941 by John Howard Mueller and Jane Hinton, who were microbiologists working at Harvard University. Mueller Hinton agar is made up of several components, including beef extract, acid hydrolysate of casein, and starch, as well as agar to solidify the mixture. The composition of Mueller Hinton agar can vary depending on the manufacturer and the intended use, but the medium is generally nutrient-rich and free of inhibitors that could interfere with bacterial growth. Mueller Hinton agar is commonly used in the disk diffusion method, a simple and widely used method for testing the susceptibility of bacterial isolates to antibiotics. In this method, small disks impregnated with different antibiotics are placed on the surface of the agar, and the zone of inhibition around each disk is measured to determine the susceptibility of the bacterial isolate to that antibiotic. Mueller Hinton agar is particularly useful for testing a wide range of antibiotics, as it has a low content of calcium and magnesium ions, which can interfere with the activity of certain antibiotics. MH agar may also be used in the laboratory for the rapid presumptive identification of C. albicans, as an alternative to the germ tube test (Mattie. As, 2014). The medium is also free of inhibitors that could interfere with bacterial growth, making it a reliable and consistent substrate for bacterial cultures. The composition of Mueller Hinton agar can affect the growth characteristics of bacterial isolates, as well as their response to antibiotics. For example, variations in the pH of the medium can affect the activity of certain antibiotics, and the presence of certain nutrients can promote the growth of specific bacterial species. Careful selection and preparation of Mueller Hinton agar are therefore important for accurate microbiological assays. The use of Mueller Hinton agar has been critical in the development of antibiotics and in the study of antibiotic resistance. Mueller–Hinton agar is a microbiological growth medium that is commonly used for antibiotic susceptibility testing, specifically disk diffusion tests. It is also used to isolate and maintain Neisseria and Moraxella species. It typically contains: 2.0 g beef extract, 17.5 g casein hydrolysate, 1.5 g starch, and 17.0 g agar in 1 liter of distilled water, with the pH adjusted to neutral at 25 °C. Five percent sheep's blood and nicotinamide adenine dinucleotide may also be added when susceptibility testing is done on Streptococcus and Campylobacter species. It has a few properties that make it excellent for antibiotic susceptibility testing. First, it is a nonselective, nondifferential medium, meaning that almost all organisms plated on it will grow. Second, it contains starch, which absorbs toxins released from bacteria so that they cannot interfere with the antibiotics. Third, it is a loose agar, which allows for better diffusion of the antibiotics than most other plates; better diffusion leads to a truer zone of inhibition. Mueller–Hinton agar was co-developed by the microbiologist John Howard Mueller and the veterinary scientist Jane Hinton at Harvard University as a culture medium for gonococcus and meningococcus. They co-published the method in 1941. References Microbiological media Cell culture media
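As a sketch of how zone-of-inhibition measurements from the disk diffusion method described above are turned into a susceptibility call, consider the following Python fragment. The antibiotic names and millimetre breakpoints are illustrative placeholders only, not published CLSI or EUCAST values.

```python
# Minimal sketch: interpreting disk-diffusion zone diameters measured on Mueller-Hinton agar.
# The breakpoint values below are illustrative assumptions, not CLSI/EUCAST breakpoints.

BREAKPOINTS_MM = {
    # antibiotic: (resistant_if_below, susceptible_if_at_or_above)
    "ampicillin": (14, 17),
    "ciprofloxacin": (16, 21),
}

def interpret_zone(antibiotic: str, zone_diameter_mm: float) -> str:
    """Classify one disk's measured zone of inhibition as susceptible/intermediate/resistant."""
    resistant_below, susceptible_at = BREAKPOINTS_MM[antibiotic]
    if zone_diameter_mm >= susceptible_at:
        return "susceptible"
    if zone_diameter_mm < resistant_below:
        return "resistant"
    return "intermediate"

print(interpret_zone("ampicillin", 19))     # susceptible
print(interpret_zone("ciprofloxacin", 12))  # resistant
```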
Mueller–Hinton agar
Biology
744
25,234,160
https://en.wikipedia.org/wiki/Properties%20of%20concrete
Concrete has relatively high compressive strength (resistance to breaking when squeezed), but significantly lower tensile strength (resistance to breaking when pulled apart). The compressive strength is typically controlled with the ratio of water to cement when forming the concrete, and tensile strength is increased by additives, typically steel, to create reinforced concrete. Concrete itself is made up of sand (a fine aggregate), ballast (a coarse aggregate), cement (which acts as the binder), and water. Reinforced concrete Concrete has relatively high compressive strength, but significantly lower tensile strength. As a result, without compensation, concrete would almost always fail from tensile stresses even when loaded in compression. The practical implication of this is that concrete elements subjected to tensile stresses must be reinforced with materials that are strong in tension (often steel). The elasticity of concrete is relatively constant at low stress levels but starts decreasing at higher stress levels as matrix cracking develops. Concrete has a very low coefficient of thermal expansion, and as it matures concrete shrinks. All concrete structures will crack to some extent, due to shrinkage and tension. Concrete which is subjected to long-duration forces is prone to creep. The density of concrete varies, but is around . Reinforced concrete is the most common form of concrete. The reinforcement is often steel rebar (mesh, spiral, bars and other forms). Structural fibers of various materials are available. Concrete can also be prestressed (reducing tensile stress) using internal steel cables (tendons), allowing for beams or slabs with a longer span than is practical with reinforced concrete alone. Inspection of existing concrete structures can be non-destructive if carried out with equipment such as a Schmidt hammer, which is sometimes used to estimate relative concrete strengths in the field. Mix design The ultimate strength of concrete is influenced by the water-cementitious ratio (w/cm), the design constituents, and the mixing, placement and curing methods employed. All things being equal, concrete with a lower water-cement (cementitious) ratio makes a stronger concrete than that with a higher ratio. The total quantity of cementitious materials (portland cement, slag cement, pozzolans) can affect strength, water demand, shrinkage, abrasion resistance and density. All concrete will crack independent of whether or not it has sufficient compressive strength. In fact, high Portland cement content mixtures can actually crack more readily due to increased hydration rate. As concrete transforms from its plastic state, hydrating to a solid, the material undergoes shrinkage. Plastic shrinkage cracks can occur soon after placement, but if the evaporation rate is high they can also occur during finishing operations, for example in hot weather or on a breezy day. In very high-strength concrete mixtures (greater than 70 MPa) the crushing strength of the aggregate can be a limiting factor for the ultimate compressive strength. In lean concretes (with a high water-cement ratio) the crushing strength of the aggregates is not so significant. The internal forces in common shapes of structure, such as arches, vaults, columns and walls, are predominantly compressive forces, with floors and pavements subjected to tensile forces. Compressive strength is widely used for specification requirements and quality control of concrete. 
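To illustrate the water-cement ratio effect described in the mix-design discussion above, here is a minimal Python sketch based on Abrams'-law-style behaviour, in which strength falls as the water-cement ratio rises. The constants A and B are illustrative assumptions only; real values depend on the cement, aggregates, and curing, and are calibrated from trial batches.

```python
# Illustrative sketch of an Abrams'-law-style relation: S = A / B**(w/c).
# The constants A and B below are assumed placeholders, not design values.

def estimated_strength_mpa(water_cement_ratio: float, a: float = 96.5, b: float = 8.2) -> float:
    """Rough compressive-strength estimate (MPa) as a function of the w/c ratio."""
    return a / (b ** water_cement_ratio)

for wc in (0.40, 0.50, 0.60):
    print(f"w/c = {wc:.2f} -> ~{estimated_strength_mpa(wc):.1f} MPa")
# Lower w/c ratios give higher estimated strengths, as described in the text.
```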
Engineers know their target tensile (flexural) requirements and will express these in terms of compressive strength. Wired.com reported on 13 April 2007 that a team from the University of Tehran, competing in a contest sponsored by the American Concrete Institute, demonstrated several blocks of concrete with abnormally high compressive strengths between at 28 days. The blocks appeared to use an aggregate of steel fibres and quartz – a mineral with a compressive strength of 1100 MPa, much higher than typical high-strength aggregates such as granite (). Reactive powder concrete, also known as ultra-high-performance concrete, can be even stronger, with strengths of up to 800 MPa (116,000 PSI). These are made by eliminating large aggregate completely, carefully controlling the size of the fine aggregates to ensure the best possible packing, and incorporating steel fibers (sometimes produced by grinding steel wool) into the matrix. Reactive powder concretes may also make use of silica fume as a fine aggregate. Commercial reactive powder concretes are available in the strength range. Elasticity The modulus of elasticity of concrete is a function of the modulus of elasticity of the aggregates and the cement matrix and their relative proportions. The modulus of elasticity of concrete is relatively constant at low stress levels but starts decreasing at higher stress levels as matrix cracking develops. The elastic modulus of the hardened paste may be in the order of 10-30 GPa and aggregates about 45 to 85 GPa. The concrete composite is then in the range of 30 to 50 GPa. The American Concrete Institute allows the modulus of elasticity to be calculated using the following equation: Ec = 33 · wc^1.5 · √(f′c) (psi), where wc = weight of concrete (pounds per cubic foot) and f′c = compressive strength of concrete at 28 days (psi). This equation is completely empirical and is not based on theory. Note that the value of Ec found is in units of psi. For normal weight concrete (defined as concrete with a wc of 150 lb/ft3 and subtracting 5 lb/ft3 for steel) Ec is permitted to be taken as 57,000 · √(f′c) (psi). The publication used by structural bridge engineers is the AASHTO Load and Resistance Factor Design Manual, or "LRFD." From the LRFD, section 5.4.2.4, Ec is determined by: Ec = 33,000 · K1 · wc^1.5 · √(f′c) (ksi), where K1 = correction factor for aggregate source (taken as 1.0 unless determined otherwise), wc = weight of concrete (kips per cubic foot), where 0.090 ≤ wc ≤ 0.155, and f′c = specified compressive strength of concrete at 28 days (ksi). For normal weight concrete (wc = 0.145 kips per cubic foot) Ec may be taken as: Ec = 1,820 · √(f′c) (ksi). Thermal properties Expansion and shrinkage Concrete has a very low coefficient of thermal expansion. However, if no provision is made for expansion, very large forces can be created, causing cracks in parts of the structure not capable of withstanding the force or the repeated cycles of expansion and contraction. The coefficient of thermal expansion of Portland cement concrete is 0.000009 to 0.000012 (per degree Celsius) (8 to 12 microstrains/°C) (8–12 1/MK). Thermal conductivity Concrete has moderate thermal conductivity, much lower than metals, but significantly higher than other building materials such as wood, and is a poor insulator. A layer of concrete is frequently used for 'fireproofing' of steel structures. However, the term fireproof is inappropriate, for high temperature fires can be hot enough to induce chemical changes in concrete, which in the extreme can cause considerable structural damage to the concrete. 
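The two empirical elastic-modulus equations given in the Elasticity discussion above translate directly into code. The following Python sketch is only a worked illustration of those formulas; the input values are arbitrary examples, not design recommendations.

```python
# Worked example of the ACI and AASHTO LRFD elastic-modulus equations discussed above.
import math

def aci_ec_psi(wc_pcf: float, fc_psi: float) -> float:
    """ACI empirical equation: Ec = 33 * wc^1.5 * sqrt(f'c), with wc in pcf and f'c in psi."""
    return 33.0 * wc_pcf ** 1.5 * math.sqrt(fc_psi)

def lrfd_ec_ksi(wc_kcf: float, fc_ksi: float, k1: float = 1.0) -> float:
    """AASHTO LRFD 5.4.2.4: Ec = 33,000 * K1 * wc^1.5 * sqrt(f'c), with wc in kcf and f'c in ksi."""
    return 33000.0 * k1 * wc_kcf ** 1.5 * math.sqrt(fc_ksi)

# Example: normal-weight concrete with a 4,000 psi (4 ksi) specified strength.
print(f"ACI:  Ec ~ {aci_ec_psi(145, 4000):,.0f} psi")   # roughly 3.6 million psi
print(f"LRFD: Ec ~ {lrfd_ec_ksi(0.145, 4.0):,.0f} ksi")  # roughly 3,600 ksi, i.e. the same value
```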
Cracking As concrete matures it continues to shrink, due to the ongoing reaction taking place in the material, although the rate of shrinkage falls relatively quickly and keeps reducing over time (for all practical purposes concrete is usually considered to not shrink due to hydration any further after 30 years). The relative shrinkage and expansion of concrete and brickwork require careful accommodation when the two forms of construction interface. All concrete structures will crack to some extent. One of the early designers of reinforced concrete, Robert Maillart, employed reinforced concrete in a number of arched bridges. His first bridge was simple, using a large volume of concrete. He then realized that much of the concrete was very cracked, and could not be a part of the structure under compressive loads, yet the structure clearly worked. His later designs simply removed the cracked areas, leaving slender, beautiful concrete arches. The Salginatobel Bridge is an example of this. Concrete cracks due to tensile stress induced by shrinkage or stresses occurring during setting or use. Various means are used to overcome this. Fiber reinforced concrete uses fine fibers distributed throughout the mix or larger metal or other reinforcement elements to limit the size and extent of cracks. In many large structures, joints or concealed saw-cuts are placed in the concrete as it sets to make the inevitable cracks occur where they can be managed and out of sight. Water tanks and highways are examples of structures requiring crack control. Shrinkage cracking Shrinkage cracks occur when concrete members undergo restrained volumetric changes (shrinkage) as a result of either drying, autogenous shrinkage, or thermal effects. Restraint is provided either externally (i.e. supports, walls, and other boundary conditions) or internally (differential drying shrinkage, reinforcement). Once the tensile strength of the concrete is exceeded, a crack will develop. The number and width of shrinkage cracks that develop are influenced by the amount of shrinkage that occurs, the amount of restraint present, and the amount and spacing of reinforcement provided. These are minor indications and have no real structural impact on the concrete member. Plastic-shrinkage cracks are immediately apparent, visible within 0 to 2 days of placement, while drying-shrinkage cracks develop over time. Autogenous shrinkage also occurs when the concrete is quite young and results from the volume reduction resulting from the chemical reaction of the Portland cement. Tension cracking Concrete members may be put into tension by applied loads. This is most common in concrete beams where a transversely applied load will put one surface into compression and the opposite surface into tension due to induced bending. The portion of the beam that is in tension may crack. The size and length of cracks is dependent on the magnitude of the bending moment and the design of the reinforcing in the beam at the point under consideration. Reinforced concrete beams are designed to crack in tension rather than in compression. This is achieved by providing reinforcing steel which yields before failure of the concrete in compression occurs and allowing remediation, repair, or if necessary, evacuation of an unsafe area. Creep Creep is the permanent movement or deformation of a material in order to relieve stresses within the material. Concrete that is subjected to long-duration forces is prone to creep. 
Short-duration forces (such as wind or earthquakes) do not cause creep. Creep can sometimes reduce the amount of cracking that occurs in a concrete structure or element, but it also must be controlled. The amount of primary and secondary reinforcing in concrete structures contributes to a reduction in the amount of shrinkage, creep and cracking. Water retention Portland cement concrete holds water. However, some types of concrete (like Pervious concrete) allow water to pass, hereby being perfect alternatives to Macadam roads, as they do not need to be fitted with storm drains. Concrete testing Engineers usually specify the required compressive strength of concrete, which is normally given as the 28-day compressive strength in megapascals (MPa) or pounds per square inch (psi). Twenty eight days is a long wait to determine if desired strengths are going to be obtained, so three-day and seven-day strengths can be useful to predict the ultimate 28-day compressive strength of the concrete. A 25% strength gain between 7 and 28 days is often observed with 100% OPC (ordinary Portland cement) mixtures, and between 25% and 40% strength gain can be realized with the inclusion of pozzolans such as flyash, and supplementary cementitious materials (SCMs) such as slag cement. Strength gain depends on the type of mixture, its constituents, the use of standard curing, proper testing by certified technicians, and care of cylinders in transport. For practical immediate considerations, it is incumbent to accurately test the fundamental properties of concrete in its fresh, plastic state. Concrete is typically sampled while being placed, with testing protocols requiring that test samples be cured under laboratory conditions (standard cured). Additional samples may be field cured (non-standard) for the purpose of early 'stripping' strengths, that is, form removal, evaluation of curing, etc. but the standard cured cylinders comprise acceptance criteria. Concrete tests can measure the "plastic" (unhydrated) properties of concrete prior to, and during placement. As these properties affect the hardened compressive strength and durability of concrete (resistance to freeze-thaw), the properties of workability (slump/flow), temperature, density and age are monitored to ensure the production and placement of 'quality' concrete. Depending on project location, tests are performed per ASTM International, European Committee for Standardization or Canadian Standards Association. As measurement of quality must represent the potential of concrete material delivered and placed, it is imperative that concrete technicians performing concrete tests are certified to do so according to these standards. Structural design, concrete material design and properties are often specified in accordance with national/regional design codes such as American Concrete Institute. Compressive strength tests are conducted by certified technicians using an instrumented, hydraulic ram which has been annually calibrated with instruments traceable to the Cement and Concrete Reference Laboratory (CCRL) of the National Institute of Standards and Technology (NIST) in the U.S., or regional equivalents internationally. Standardized form factors are 6" by 12" or 4" by 8" cylindrical samples, with some laboratories opting to utilize cubic samples. These samples are compressed to failure. Tensile strength tests are conducted either by three-point bending of a prismatic beam specimen or by compression along the sides of a standard cylindrical specimen. 
These destructive tests are not to be equated with nondestructive testing using a rebound hammer or probe systems, which are hand-held indicators of the relative strength of the top few millimeters of comparative concretes in the field. Mechanical properties at elevated temperature Temperatures elevated above degrade the mechanical properties of concrete, including compressive strength, fracture strength, tensile strength, and elastic modulus, as a result of deleterious changes in its structure. Chemical changes With elevated temperature, concrete will lose its hydration product because of water evaporation. As a result, the concrete's resistance to moisture flow decreases, and the number of unhydrated cement grains grows with the loss of chemically bonded water, resulting in lower compressive strength. Also, the decomposition of calcium hydroxide in concrete forms lime and water. When the temperature decreases, the lime reacts with water and expands, causing a reduction of strength. Physical changes At elevated temperatures, small cracks form and propagate inside the concrete with increased temperature, possibly caused by differential thermal coefficients of expansion within the cement matrix. Likewise, when water evaporates from concrete, the loss of water offsets the expansion of the cement matrix by causing it to shrink. Moreover, when the temperatures reach , siliceous aggregates transform from α-phase, hexagonal crystal system, to β-phase, bcc structure, causing expansion of concrete and decreasing the strength of the material. Spalling Spalling at elevated temperature is pronounced, driven by vapor pressure and thermal stresses. When the concrete surface is subjected to a sufficiently high temperature, the water close to the surface starts to move out from the concrete into the atmosphere. However, with a high temperature gradient between the surface and the interior, vapor can also move inwards, where it may condense at lower temperatures. A water-saturated interior resists the further movement of vapor into the mass of the concrete. If the condensation rate of vapor is much faster than the escaping speed of vapor out of the concrete, due to a sufficiently high heating rate or a sufficiently dense pore structure, a large pore pressure can cause spalling. At the same time, thermal expansion on the surface will generate a perpendicular compressive stress opposing the tensile stress within the concrete. Spalling occurs when the compressive stress exceeds the tensile stress. See also Segregation in concrete - particle segregation in concrete applications Creep and shrinkage of concrete References Properties
Properties of concrete
Engineering
3,230
47,885,504
https://en.wikipedia.org/wiki/HD%20137509
HD 137509 is a star in the southern constellation of Apus, positioned less than a degree from the northern constellation boundary with Triangulum Australe. It has the variable star designation of NN Apodis, or NN Aps for short, and ranges in brightness from an apparent visual magnitude of 6.86 down to 6.93 with a period of 4.4916 days. The star is located at a distance of approximately 647 light years from the Sun based on parallax, and is drifting further away with a radial velocity of +0.50 km/s. In 1973, W. P. Bidelman and D. J. MacConnell found this to be a peculiar A star of the silicon type. During a reclassification of the spectra of southern stars in 1975, A. P. Cowley and N. Houk noted the strength of hydrogen lines and weakness of helium are more typical of a class near B9. It shows a luminosity above the main sequence, which is common for a peculiar A star. The stellar atmosphere appears deficient in helium, but shows a rich variety of metallic lines. However, there are no lines of manganese or mercury, so it is not a Hg–Mn Ap star. HD 137509 is now classified as or , matching a late-type, helium-weak Bp star with overabundances of silicon, chromium, and iron. This star was found to be photometrically variable by L. O. Lodén and A. Sundman in 1989, and a variable spectrum was noted by H. Pedersen in 1979. Both forms of variability allow the star's rotation period to be precisely measured. It has one of the strongest magnetic fields recorded for a chemically peculiar star, measured at around , and shows a strong quadrupolar component. It is classified as an Alpha2 Canum Venaticorum variable. The star is about 124 million years old with 3.4 times the mass of the Sun and 2.8 times the Sun's radius. On average it is radiating ~123 times the luminosity of the Sun from its photosphere at an effective temperature of 13,100 K. References B-type main-sequence stars Ap stars Helium-weak stars Alpha2 Canum Venaticorum variables Apus Durchmusterung objects 137509 076011 Apodis, NN
HD 137509
Astronomy
498
749,598
https://en.wikipedia.org/wiki/Windows%20Chat
Windows Chat (not to be confused with Microsoft Comic Chat) is a simple LAN-based text chatting program included in Windows for Workgroups and, later, the Windows NT-line of operating systems, including Windows NT 3.x, NT 4.0, Windows 2000, Windows XP and Windows Server 2003 and also Windows 95. In later Windows versions, the Network DDE service may need to be enabled to receive calls. It utilizes the NetBIOS session service and NetDDE. Users can chat with each other over an IPX LAN. The shortcut to the executable is not present in the Start Menu in newer versions of Windows; it must instead be run by using Start > Run... > WinChat.exe. Windows Chat utilizes a split screen user interface similar to UNIX talk. Windows Chat is real time text, with typing being transmitted immediately. Microsoft removed the application from Windows versions from Vista on, with the removal of NetDDE, though the program and the DDE service it needs may be manually installed. However the application can still be used through programs such as virtual machines if earlier versions of Windows are installed on them. See also WinPopup Microsoft Comic Chat Microsoft V-Chat References External links Using Windows Chat in Windows XP Windows components LAN messengers
Windows Chat
Technology
263
2,633,325
https://en.wikipedia.org/wiki/Norman%20Macleod%20Ferrers
Norman Macleod Ferrers (11 August 1829 – 31 January 1903) was a British mathematician and university administrator and editor of a mathematical journal. Career and research Ferrers was educated at Eton College before studying at Gonville and Caius College, Cambridge, where he was Senior Wrangler in 1851. He was appointed to a Fellowship at the college in 1852, was called to the bar in 1855 and was ordained deacon in 1859 and priest in 1860. In 1880, he was appointed Master of the college, and served as vice-chancellor of Cambridge University from 1884 to 1885. Ferrers made many contributions to mathematical literature. From 1855 to 1891, he worked with J. J. Sylvester as editors, with others, in publishing The Quarterly Journal of Pure and Applied Mathematics. Ferrers assembled the papers of George Green for publication in 1871. In 1861 he published "An Elementary Treatise on Trilinear Co-ordinates". One of his early contributions was on Sylvester's development of Poinsot's representation of the motion of a rigid body about a fixed point. In 1871 he first suggested to extend the equations of motion with nonholonomic constraints. Another treatise of his on "Spherical Harmonics", published in 1877, presented many original features. In 1881, he studied Kelvin's investigation of the law of distribution of electricity in equilibrium on an uninfluenced spherical bowl and made the addition of finding the potential at any point of space in zonal harmonics. He died at the College Lodge on 31 January 1903. Integer partitions Ferrers is associated with a particular way of arranging the partition of a natural number p. If p is the sum of n terms, the largest of which is m, then the Ferrers diagram starts with a row of m dots. The terms are arranged in order, and a row of dots corresponds to each term. Adams, Ferrers, and Sylvester articulated this theorem of partitions: "The number of modes of partitioning (n) into (m) parts is equal to the number of modes of partitioning (n) into parts, one of which is always (m), and the others (m) or less than (m)." The proof, attributed to Ferrers by Sylvester in 1883, involves flipping a Ferrers diagram about a diagonal line. In 1951 Jacques Riguet adopted this manner of ordering to the rows of a logical matrix. Alignment of rows of ones along the right side of a matrix is used, instead of the alignment of dots on the left. The logical matrix corresponds to a heterogeneous relation of Ferrers type. Family On 3 April 1866, he married Emily, daughter of John Lamb, dean of Bristol cathedral. They had four sons and one daughter. Bibliography 1861: An Elementary Treatise on Trilinear Coordinates (London), link from Internet Archive 1871: Mathematical papers of the late George Green, Edited By N.M. Ferrers 1877: An elementary treatise on spherical harmonics and subjects connected with them (London) from Internet Archive References External links The National Archives | National Register of Archives | Person details | Archive Details at www.nra.nationalarchives.gov.uk http://www.macleodgenealogy.org/ACMS/D0035/I2304.html 1829 births 1903 deaths 19th-century British mathematicians Alumni of Gonville and Caius College, Cambridge Combinatorialists Fellows of Gonville and Caius College, Cambridge Fellows of the Royal Society Masters of Gonville and Caius College, Cambridge People educated at Eton College Senior Wranglers 19th-century English Anglican priests Vice-chancellors of the University of Cambridge
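The Ferrers diagram and the diagonal "flip" used in the partition proof described above can be made concrete with a short Python sketch; the partition chosen is just an example.

```python
# Sketch: drawing a Ferrers diagram and taking its conjugate (the diagonal flip
# used in the partition argument attributed to Ferrers by Sylvester).

def ferrers(partition):
    """Print a Ferrers diagram: one row of dots per part, largest part first."""
    for part in sorted(partition, reverse=True):
        print("." * part)

def conjugate(partition):
    """Conjugate partition: the column lengths of the Ferrers diagram."""
    parts = sorted(partition, reverse=True)
    return [sum(1 for p in parts if p > i) for i in range(parts[0])]

p = [4, 3, 1]          # a partition of 8
ferrers(p)             # ....
                       # ...
                       # .
print(conjugate(p))    # [3, 2, 2, 1] -- also a partition of 8, read off the columns
```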
Norman Macleod Ferrers
Mathematics
748
31,350,649
https://en.wikipedia.org/wiki/NGC%20371
NGC 371, also called Hodge 53, is an open cluster 200,000 light-years (61,320 pc) away located in the Small Magellanic Cloud in Tucana constellation. It was discovered on 1 August 1826 by Scottish astronomer James Dunlop. See also List of NGC objects (1–1000) References External links The Rose-red Glow of Star Formation - ESO Photo release Open clusters 0371 Small Magellanic Cloud 18260801 Tucana Discoveries by James Dunlop
NGC 371
Astronomy
101
6,796,998
https://en.wikipedia.org/wiki/Gleason%27s%20theorem
In mathematical physics, Gleason's theorem shows that the rule one uses to calculate probabilities in quantum physics, the Born rule, can be derived from the usual mathematical representation of measurements in quantum physics together with the assumption of non-contextuality. Andrew M. Gleason first proved the theorem in 1957, answering a question posed by George W. Mackey, an accomplishment that was historically significant for the role it played in showing that wide classes of hidden-variable theories are inconsistent with quantum physics. Multiple variations have been proven in the years since. Gleason's theorem is of particular importance for the field of quantum logic and its attempt to find a minimal set of mathematical axioms for quantum theory. Statement of the theorem Conceptual background In quantum mechanics, each physical system is associated with a Hilbert space. For the purposes of this overview, the Hilbert space is assumed to be finite-dimensional. In the approach codified by John von Neumann, a measurement upon a physical system is represented by a self-adjoint operator on that Hilbert space sometimes termed an "observable". The eigenvectors of such an operator form an orthonormal basis for the Hilbert space, and each possible outcome of that measurement corresponds to one of the vectors comprising the basis. A density operator is a positive-semidefinite operator on the Hilbert space whose trace is equal to 1. In the language of von Weizsäcker, a density operator is a "catalogue of probabilities": for each measurement that can be defined, the probability distribution over the outcomes of that measurement can be computed from the density operator. The procedure for doing so is the Born rule, which states that where is the density operator, and is the projection operator onto the basis vector corresponding to the measurement outcome . The Born rule associates a probability with each unit vector in the Hilbert space, in such a way that these probabilities sum to 1 for any set of unit vectors comprising an orthonormal basis. Moreover, the probability associated with a unit vector is a function of the density operator and the unit vector, and not of additional information like a choice of basis for that vector to be embedded in. Gleason's theorem establishes the converse: all assignments of probabilities to unit vectors (or, equivalently, to the operators that project onto them) that satisfy these conditions take the form of applying the Born rule to some density operator. Gleason's theorem holds if the dimension of the Hilbert space is 3 or greater; counterexamples exist for dimension 2. Deriving the state space and the Born rule The probability of any outcome of a measurement upon a quantum system must be a real number between 0 and 1 inclusive, and in order to be consistent, for any individual measurement the probabilities of the different possible outcomes must add up to 1. Gleason's theorem shows that any function that assigns probabilities to measurement outcomes, as identified by projection operators, must be expressible in terms of a density operator and the Born rule. This not only gives the rule for calculating probabilities, but also determines the set of possible quantum states. 
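Before the formal statement that follows, a small numerical sketch may help fix ideas: using NumPy, it applies the Born rule tr(ρΠ) to the projectors of an orthonormal basis and checks that the resulting probabilities sum to 1. The particular state and basis are arbitrary examples, not anything singled out by the theorem.

```python
# Minimal numerical illustration of the Born rule: p_i = tr(rho @ Pi_i) over the
# projectors Pi_i of an orthonormal basis gives probabilities that sum to 1.
import numpy as np

def born_probabilities(rho, basis_vectors):
    """Probabilities assigned by the density operator rho to each basis vector."""
    probs = []
    for v in basis_vectors:
        v = v.reshape(-1, 1)
        projector = v @ v.conj().T          # rank-one projector onto the basis vector
        probs.append(np.real(np.trace(rho @ projector)))
    return np.array(probs)

# An example 3-dimensional density operator: a mixture of two pure states.
psi1 = np.array([1, 0, 0], dtype=complex)
psi2 = np.array([0, 1, 1], dtype=complex) / np.sqrt(2)
rho = 0.6 * np.outer(psi1, psi1.conj()) + 0.4 * np.outer(psi2, psi2.conj())

basis = [np.eye(3, dtype=complex)[:, i] for i in range(3)]  # the standard basis
p = born_probabilities(rho, basis)
print(p, p.sum())  # [0.6 0.2 0.2] 1.0
```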
Let be a function from projection operators to the unit interval with the property that, if a set of projection operators sum to the identity matrix (that is, if they correspond to an orthonormal basis), then Such a function expresses an assignment of probability values to the outcomes of measurements, an assignment that is "noncontextual" in the sense that the probability for an outcome does not depend upon which measurement that outcome is embedded within, but only upon the mathematical representation of that specific outcome, i.e., its projection operator. Gleason's theorem states that for any such function , there exists a positive-semidefinite operator with unit trace such that Both the Born rule and the fact that "catalogues of probability" are positive-semidefinite operators of unit trace follow from the assumptions that measurements are represented by orthonormal bases, and that probability assignments are "noncontextual". In order for Gleason's theorem to be applicable, the space on which measurements are defined must be a real or complex Hilbert space, or a quaternionic module. (Gleason's argument is inapplicable if, for example, one tries to construct an analogue of quantum mechanics using p-adic numbers.) History and outline of Gleason's proof In 1932, John von Neumann also managed to derive the Born rule in his textbook Mathematical Foundations of Quantum Mechanics. However, the assumptions on which von Neumann built his no hidden variables proof were rather strong and eventually regarded to not be well-motivated. Specifically, von Neumann assumed that the probability function must be linear on all observables, commuting or non-commuting. His proof was derided by John Bell as "not merely false but foolish!". Gleason, on the other hand, did not assume linearity, but merely additivity for commuting projectors together with noncontextuality, assumptions seen as better motivated and more physically meaningful. By the late 1940s, George Mackey had grown interested in the mathematical foundations of quantum physics, wondering in particular whether the Born rule was the only possible rule for calculating probabilities in a theory that represented measurements as orthonormal bases on a Hilbert space. Mackey discussed this problem with Irving Segal at the University of Chicago, who in turn raised it with Richard Kadison, then a graduate student. Kadison showed that for 2-dimensional Hilbert spaces there exists a probability measure that does not correspond to quantum states and the Born rule. Gleason's result implies that this only happens in dimension 2. Gleason's original proof proceeds in three stages. In Gleason's terminology, a frame function is a real-valued function on the unit sphere of a Hilbert space such that whenever the vectors comprise an orthonormal basis. A noncontextual probability assignment as defined in the previous section is equivalent to a frame function. Any such measure that can be written in the standard way, that is, by applying the Born rule to a quantum state, is termed a regular frame function. Gleason derives a sequence of lemmas concerning when a frame function is necessarily regular, culminating in the final theorem. First, he establishes that every continuous frame function on the Hilbert space is regular. This step makes use of the theory of spherical harmonics. Then, he proves that frame functions on have to be continuous, which establishes the theorem for the special case of . This step is regarded as the most difficult of the proof. 
Finally, he shows that the general problem can be reduced to this special case. Gleason credits one lemma used in this last stage of the proof to his doctoral student Richard Palais. Robin Lyth Hudson described Gleason's theorem as "celebrated and notoriously difficult". Cooke, Keane and Moran later produced a proof that is longer than Gleason's but requires fewer prerequisites. Implications Gleason's theorem highlights a number of fundamental issues in quantum measurement theory. As Fuchs argues, the theorem "is an extremely powerful result", because "it indicates the extent to which the Born probability rule and even the state-space structure of density operators are dependent upon the theory's other postulates". In consequence, quantum theory is "a tighter package than one might have first thought". Various approaches to rederiving the quantum formalism from alternative axioms have, accordingly, employed Gleason's theorem as a key step, bridging the gap between the structure of Hilbert space and the Born rule. Hidden variables Moreover, the theorem is historically significant for the role it played in ruling out the possibility of certain classes of hidden variables in quantum mechanics. A hidden-variable theory that is deterministic implies that the probability of a given outcome is always either 0 or 1. For example, a Stern–Gerlach measurement on a spin-1 atom will report that the atom's angular momentum along the chosen axis is one of three possible values, which can be designated , and . In a deterministic hidden-variable theory, there exists an underlying physical property that fixes the result found in the measurement. Conditional on the value of the underlying physical property, any given outcome (for example, a result of ) must be either impossible or guaranteed. But Gleason's theorem implies that there can be no such deterministic probability measure. The mapping is continuous on the unit sphere of the Hilbert space for any density operator . Since this unit sphere is connected, no continuous probability measure on it can be deterministic. Gleason's theorem therefore suggests that quantum theory represents a deep and fundamental departure from the classical intuition that uncertainty is due to ignorance about hidden degrees of freedom. More specifically, Gleason's theorem rules out hidden-variable models that are "noncontextual". Any hidden-variable model for quantum mechanics must, in order to avoid the implications of Gleason's theorem, involve hidden variables that are not properties belonging to the measured system alone but also dependent upon the external context in which the measurement is made. This type of dependence is often seen as contrived or undesirable; in some settings, it is inconsistent with special relativity. To construct a counterexample for 2-dimensional Hilbert space, known as a qubit, let the hidden variable be a unit vector in 3-dimensional Euclidean space. Using the Bloch sphere, each possible measurement on a qubit can be represented as a pair of antipodal points on the unit sphere. Defining the probability of a measurement outcome to be 1 if the point representing that outcome lies in the same hemisphere as and 0 otherwise yields an assignment of probabilities to measurement outcomes that obeys Gleason's assumptions. However, this probability assignment does not correspond to any valid density operator. 
By introducing a probability distribution over the possible values of , a hidden-variable model for a qubit that reproduces the predictions of quantum theory can be constructed. Gleason's theorem motivated later work by John Bell, Ernst Specker and Simon Kochen that led to the result often called the Kochen–Specker theorem, which likewise shows that noncontextual hidden-variable models are incompatible with quantum mechanics. As noted above, Gleason's theorem shows that there is no probability measure over the rays of a Hilbert space that only takes the values 0 and 1 (as long as the dimension of that space exceeds 2). The Kochen–Specker theorem refines this statement by constructing a specific finite subset of rays on which no such probability measure can be defined. The fact that such a finite subset of rays must exist follows from Gleason's theorem by way of a logical compactness argument, but this method does not construct the desired set explicitly. In the related no-hidden-variables result known as Bell's theorem, the assumption that the hidden-variable theory is noncontextual instead is replaced by the assumption that it is local. The same sets of rays used in Kochen–Specker constructions can also be employed to derive Bell-type proofs. Pitowsky uses Gleason's theorem to argue that quantum mechanics represents a new theory of probability, one in which the structure of the space of possible events is modified from the classical, Boolean algebra thereof. He regards this as analogous to the way that special relativity modifies the kinematics of Newtonian mechanics. The Gleason and Kochen–Specker theorems have been cited in support of various philosophies, including perspectivism, constructive empiricism and agential realism. Quantum logic Gleason's theorem finds application in quantum logic, which makes heavy use of lattice theory. Quantum logic treats the outcome of a quantum measurement as a logical proposition and studies the relationships and structures formed by these logical propositions. They are organized into a lattice, in which the distributive law, valid in classical logic, is weakened, to reflect the fact that in quantum physics, not all pairs of quantities can be measured simultaneously. The representation theorem in quantum logic shows that such a lattice is isomorphic to the lattice of subspaces of a vector space with a scalar product. Using Solèr's theorem, the (skew) field K over which the vector space is defined can be proven, with additional hypotheses, to be either the real numbers, complex numbers, or the quaternions, as is needed for Gleason's theorem to hold. By invoking Gleason's theorem, the form of a probability function on lattice elements can be restricted. Assuming that the mapping from lattice elements to probabilities is noncontextual, Gleason's theorem establishes that it must be expressible with the Born rule. Generalizations Gleason originally proved the theorem assuming that the measurements applied to the system are of the von Neumann type, i.e., that each possible measurement corresponds to an orthonormal basis of the Hilbert space. Later, Busch and independently Caves et al. proved an analogous result for a more general class of measurements, known as positive-operator-valued measures (POVMs). The set of all POVMs includes the set of von Neumann measurements, and so the assumptions of this theorem are significantly stronger than Gleason's. This made the proof of this result simpler than Gleason's, and the conclusions stronger. 
Unlike the original theorem of Gleason, the generalized version using POVMs also applies to the case of a single qubit. Assuming noncontextuality for POVMs is, however, controversial, as POVMs are not fundamental, and some authors defend that noncontextuality should be assumed only for the underlying von Neumann measurements. Gleason's theorem, in its original version, does not hold if the Hilbert space is defined over the rational numbers, i.e., if the components of vectors in the Hilbert space are restricted to be rational numbers, or complex numbers with rational parts. However, when the set of allowed measurements is the set of all POVMs, the theorem holds. The original proof by Gleason was not constructive: one of the ideas on which it depends is the fact that every continuous function defined on a compact space attains its minimum. Because one cannot in all cases explicitly show where the minimum occurs, a proof that relies upon this principle will not be a constructive proof. However, the theorem can be reformulated in such a way that a constructive proof can be found. Gleason's theorem can be extended to some cases where the observables of the theory form a von Neumann algebra. Specifically, an analogue of Gleason's result can be shown to hold if the algebra of observables has no direct summand that is representable as the algebra of 2×2 matrices over a commutative von Neumann algebra (i.e., no direct summand of type I2). In essence, the only barrier to proving the theorem is the fact that Gleason's original result does not hold when the Hilbert space is that of a qubit. Notes References Hilbert spaces Quantum measurement Probability theorems
Gleason's theorem
Physics,Mathematics
3,176
3,572,234
https://en.wikipedia.org/wiki/Terrace%20%28geology%29
In geology, a terrace is a step-like landform. A terrace consists of a flat or gently sloping geomorphic surface, called a tread, that is typically bounded on one side by a steeper ascending slope, which is called a "riser" or "scarp". The tread and the steeper descending slope (riser or scarp) together constitute the terrace. Terraces can also consist of a tread bounded on all sides by a descending riser or scarp. A narrow terrace is often called a bench. The sediments underlying the tread and riser of a terrace are also commonly, but incorrectly, called terraces, leading to confusion. Terraces are formed in various ways. Fluvial terraces Fluvial terraces are remnants of the former floodplain of a stream or river. They are formed by the downcutting of a river or stream channel into and the abandonment and lateral erosion of its former floodplain. The downcutting, abandonment, and lateral erosion of a former floodplain can be the result of either changes in sea level, local or regional tectonic uplift; changes in local or regional climate; changes in the amount of sediment being carried by the river or stream; change in discharge of the river; or a complex mixture of these and other factors. The most common sources of the variations in rivers and streams that create fluvial terraces are vegetative, geomorphic, and hydrologic responses to climate. More recently, the direct modification of rivers and streams and their watersheds by cultural processes have resulted in the development of terraces along many rivers and streams. Kame terraces Kame terraces are formed on the side of a glacial valley and are the deposits of meltwater streams flowing between the ice and the adjacent valley side. Marine terraces A marine terrace represents the former shoreline of a sea or ocean. It can be formed by marine abrasion or erosion of materials comprising the shoreline (marine-cut terraces or wave-cut platforms); the accumulations of sediments in the shallow-water to slightly emerged coastal environments (marine-built terraces or raised beach); or the bioconstruction by coral reefs and accumulation of reef materials (reef flats) in intertropical regions. The formation of a marine terrace follows this general process: A wave cut platform must be carved into bedrock (high wave energy is needed for this process). Although this is the first step to the process for the formation of a marine terrace, not all wave cut platforms will become a marine terrace. After the wave cut platform is formed it must be removed from interaction with the high wave energy. This process happens by either change in sea level due to glacial-interglacial cycles or tectonically rising landmasses. When the wave cut has been raised above sea level it is preserved. The terraces are most commonly preserved in flights along the coastline. Lacustrine terraces A lake (lacustrine) terrace represents the former shoreline of either a nonglacial, glacial, or proglacial lake. As with marine terraces, a lake terrace can be formed by either the abrasion or erosion of materials comprising the shoreline, the accumulations of sediments in the shallow-water to slightly emerged environments, or some combination of these. Given the smaller size of lakes relative to the size of typical marine water bodies, lake terraces are overall significantly narrower and less well developed than marine terraces. However, not all lake terraces are relict shorelines. 
In case of the lake terraces of ancient ice-walled lakes, some proglacial lakes, and alluvium-dammed (slackwater) lakes, they often represent the relict bottom of these lakes. Finally, glaciolacustrine kame terraces are either the relict deltas or bottoms of ancient ice marginal lakes. Structural terraces In geomorphology, a structural terrace is a terrace created by the differential erosion of flat-lying or nearly flat-lying layered strata. The terrace results from preferential stripping by erosion of a layer of softer strata from an underlying layer of harder strata. The preferential removal of softer material exposes the flat surface of the underlying harder layer, creating the tread of a structural terrace. Structural terraces are commonly paired and not always associated with river valleys. Travertine terraces A travertine terrace is formed when geothermally heated supersaturated alkaline waters emerge to the surface and form waterfalls of precipitated carbonates. See also Terrace Crossing - a geographical zone between the sedimentation (downstream) part and the erosion (upstream) part of a river References External links Here is a good example of a river terrace: http://www.geographie.uni-erlangen.de/mrichter/gallery/photos/asia/images/river_terraces_near_kasbeki.jpg Landforms Geomorphology Riparian zone Archaeological features Lacustrine landforms
Terrace (geology)
Environmental_science
1,005
12,465,812
https://en.wikipedia.org/wiki/C6H6O4
{{DISPLAYTITLE:C6H6O4}} The molecular formula C6H6O4 (molar mass : 142.10 g/mol, exact mass : 142.026608 u) may refer to: Dimethyl acetylenedicarboxylate 5-Hydroxymaltol 2-Hydroxymuconate semialdehyde Kojic acid Muconic acid Tetrahydroxybenzene 1,2,3,4-Tetrahydroxybenzene 1,2,3,5-Tetrahydroxybenzene 1,2,4,5-Tetrahydroxybenzene
C6H6O4
Chemistry
145
2,642,217
https://en.wikipedia.org/wiki/Apyrase
Apyrase (, ATP-diphosphatase, adenosine diphosphatase, ADPase, ATP diphosphohydrolase) is a calcium-activated plasma membrane-bound enzyme (magnesium can also activate it) () that catalyses the hydrolysis of ATP to yield AMP and inorganic phosphate. Two isoenzymes are found in commercial preparations from S. tuberosum. One with a higher ratio of substrate selectivity for ATP:ADP (approx 10) and another with no selectivity (ratio 1). It can also act on ADP and other nucleoside triphosphates and diphosphates with the general reaction being NTP -> NDP + Pi -> NMP + 2Pi. This is the same activity that has been employed in the degradation of unincorporated nucleosides during pyrosequencing. The salivary apyrases of blood-feeding arthropods are nucleotide hydrolysing enzymes that are implicated in the inhibition of host platelet aggregation through the hydrolysis of extracellular adenosine diphosphate. References External links Protein families EC 3.6.1
Apyrase
Biology
242
910,281
https://en.wikipedia.org/wiki/Variscite
Variscite is a hydrated aluminium phosphate mineral (). It is a relatively rare phosphate mineral. It is sometimes confused with turquoise; however, variscite is usually greener in color. The green color results from the presence of small amounts of trivalent chromium (). Geology Variscite is a secondary mineral formed by direct deposition from phosphate-bearing water which has reacted with aluminium-rich rocks in a near-surface environment. It occurs as fine-grained masses in nodules, cavity fillings, and crusts. Variscite often contains white veins of the calcium aluminium phosphate mineral crandallite. It was first described in 1837 and named for the locality of Variscia, the historical name of the Vogtland, in Germany. At one time, variscite was called Utahlite. At times, materials which may be turquoise or may be variscite have been marketed as "variquoise". Appreciation of the color ranges typically found in variscite have made it a popular gem in recent years. Variscite from Nevada typically contains black spiderwebbing in the matrix and is often confused with green turquoise. Most of the Nevada variscite recovered in recent decades has come from mines located in Lander County and Esmeralda County, specifically in the Candelaria Hills. Notable localities are Lucin, Snowville, and Fairfield in Utah, United States. Most recently found in Wyoming as well. It is also found in Germany, Australia, Poland, Spain, Italy (Sardinia), and Brazil. Jewelry Variscite has been used in Europe to make personal ornaments, especially beads, since Neolithic times. Its use continued during the Bronze Age and in Roman times although it was not until the 19th century that it was determined that all variscite used in Europe came from three sites in Spain, Gavá (Barcelona), Palazuelo de las Cuevas (Zamora), and Encinasola (Huelva). Variscite is sometimes used as a semi-precious stone, and is popular for carvings and ornamental use due to its beautiful and intense green color, and is commonly used in silversmithing in place of turquoise. Variscite is more rare and less common than turquoise, but because it is not as commonly available as turquoise or as well known to the general public, raw variscite tends to be less expensive than turquoise. Gallery See also (same etymology, as named from the ancient locality of Variscia in Germany) List of minerals References Aluminium minerals Phosphate minerals Orthorhombic minerals Minerals in space group 61 Luminescent minerals Gemstones Dihydrate minerals Minerals described in 1837
Variscite
Physics,Chemistry
546
21,559,940
https://en.wikipedia.org/wiki/Animal%20spirits%20%28Keynes%29
Animal spirits is a term used by John Maynard Keynes in his 1936 book The General Theory of Employment, Interest and Money to describe the instincts, proclivities and emotions that seemingly influence human behavior, which can be measured in terms of consumer confidence. Use by Keynes The original passage by Keynes reads: Earlier uses Philosophy and social science The notion of animal spirits has been described by René Descartes, Isaac Newton, and other scientists as how the notion of the vitality of the body is used. In one of his letters about light, Newton wrote that animated spirits live in "the brain, nerves, and muscles, [which] may become a convenient vessel to hold so subtle a spirit." These spirits, as described by Newton, are animated spirits of an ethereal nature, relating to life in the body. Later, it became a concept that acquired a psychological content, but was always thought of in connection with the life processes of the body. Therefore, they retained a lower overall animal status. William Safire explored the origins of the phrase in his 2009 article "On Language: 'Animal Spirits'": Thomas Hobbes used the phrase "animal spirits" to refer to passive emotions and instincts, as well as natural functions like breathing. Ralph Waldo Emerson in Society and Solitude (1870) wrote of "animal spirits" as prompting people to action, in a broader sense than Keynes's: In social science, Karl Marx refers to "animal spirits" in the 1887 English translation of Capital, Volume 1. Marx speaks of the animal spirits of the workers, which he believes a capitalist can either impel by encouraging social interaction and competition within their factory or depress by adopting assembly-line work whereby the worker repeats a single task. Earlier and contemporaneous English use The authors P. G. Wodehouse and Arthur Conan Doyle were popular among public school boys in England before the Great War. Both used the phrase "animal spirits" in their writings. John Coates of Cambridge University describes the Edwardian public school environment as one where dynamism and leadership coexist with less constructive traits such as recklessness, heedlessness, and in-caution. Coates attributes this to fluctuations in hormonal balances; abnormally high levels of testosterone may create individual success but also collective excessive aggression, overconfidence, and herd behavior, while too much cortisol can promote irrational pessimism and risk aversion. The author's remedy for this is to shift the employment balance in finance towards women and older men and monitor traders' biology. Contemporary research The term "animal spirits" was used in the works of a psychologist that Keynes had studied in 1905 and also suggests that Keynes implicitly drew upon an evolutionary understanding of human instinct. In 2009, economists George Akerlof and Robert J. Shiller advised in addition that: Shiller further contends that "animal spirits" refers also to the sense of trust humans have in one another, including a sense of fairness in economic dealings. See also Behavioral economics Decision making Drive theory Notes Further reading Akerlof, George A., and Robert J. Shiller. Animal spirits: How human psychology drives the economy, and why it matters for global capitalism (Princeton University Press, 2010) online. Dow, Alexander, and Sheila C. Dow. "Animal spirits revisited." Capitalism and Society 6.2 (2011). online Francois, Patrick, and Huw Lloyd-Ellis. "Animal spirits through creative destruction." 
American Economic Review 93.3 (2003): 530–550. online Howitt, Peter, and R. Preston McAfee. "Animal spirits." American Economic Review (1992): 493–507. online Lears, Jackson. Animal Spirits: The American Pursuit of Vitality from Camp Meeting to Wall Street (2023) excerpt External links "Animal spirits" from The Economist economic terms "A special report on the future of finance: Wild-animal spirits", The Economist, 22 January 2009 "Animal Spirits Depend on Trust: The proposed stimulus isn't big enough to restore confidence" by Robert J. Shiller, The Wall Street Journal, 27 January 2009 Loewenstein, George and Ted O'Donoghue. "Animal Spirits: Affective and Deliberative Processes in Economic Behavior", Cornell University Working Paper 04–14, August 2004 In 2013, NPR's Planet Money produced a video series and web site following the making of a tee shirt that they designed, featuring a visual pun on Keynes' animal spirits. Economics catchphrases Human behavior Keynesian economics
Animal spirits (Keynes)
Biology
935
68,089,820
https://en.wikipedia.org/wiki/Gallus%20giganteus
Gallus giganteus, Jago cock, or Jago Fowl, is a hypothetical species of fowl. It was first described in 1813 by Coenraad Jacob Temminck, who named it on the basis of a large, spurred foot that he had received. Temminck thought it was a wild bird from southern Sumatra and western Java. He assigned large domesticated fowl to the binomial Gallus patavinus (from Padua, as there were two such breeds from Italy), a name used by Mathurin Jacques Brisson. It was again described by Leopold Fitzinger in 1878. The birds were described as being large, with a comb that extended back in a line along the eye and was thick, raised, and appearing truncated on the top. The throat was bare and the wattles under the mandibles small. Later authors began to use the name Gallus giganteus for large domesticated breeds, including one from Malabar that was illustrated in Hardwicke's collection. Temminck believed that Gallus giganteus was one of six wild ancestors of the domestic chicken. Edward Blyth believed that domestic chickens were entirely derived by artificial selection of the red junglefowl (Gallus gallus). Charles Darwin also favoured Blyth's hypothesis. References Hypothetical species
Gallus giganteus
Biology
270
944,836
https://en.wikipedia.org/wiki/List%20of%20North%20American%20broadcast%20station%20classes
This is a list of broadcast station classes applicable in much of North America under international agreements between the United States, Canada and Mexico. Effective radiated power (ERP) and height above average terrain (HAAT) are listed unless otherwise noted. All radio and television stations within of the US-Canada or US-Mexico border must get approval by both the domestic and foreign agency. These agencies are Industry Canada/Canadian Radio-television and Telecommunications Commission (CRTC) in Canada, the Federal Communications Commission (FCC) in the US, and the Federal Telecommunications Institute (IFT) in Mexico. AM Station class descriptions All domestic (United States) AM stations are classified as A, B, C, or D. A (formerly I) — clear-channel stations — 10 kW to 50 kW, 24 hours. Class A stations are only protected within a radius of the transmitter site. The old Class I was divided into three: Class I-A, I-B and I-N. NARBA distinguished between Class I-A, which were true clear-channel stations that did not share their channel with another Class I station, and Class I-B, in which a station operated with 50 kW at night but shared its channel with at least one other I-B station, requiring directional operation. This distinction was superseded by the Regional Agreement for the Medium Frequency Broadcasting Service in Region 2 (Rio Agreement), which instituted the current class system. The former Class I-As are omnidirectional, with the exception of 870 WWL New Orleans and 1030 WBZ Boston, which use directional antennas to put a better signal over their largest population areas. Most former Class I-Bs are directional at night, although a few are also directional during days. (A handful of I-Bs did not have to use directional antennas: 680 KNBR San Francisco, 810 WGY Schenectady, 850 KOA Denver, 940 XEQ Mexico City, 1070 KNX Los Angeles and 1070 CBA Moncton. KNX and CBA were far enough apart that both could operate without using a directional antenna. XEQ is far enough from Montreal that it did not need a directional antenna. KNBR and KOA are the only Class Is on their frequency but share those frequencies with several Class II-Bs.) Former Class I-N stations exist only in Alaska, where they are too remote to interfere with other clear-channel stations in the contiguous 48 states. They are only held to Class B efficiency standards (although higher efficiency is acceptable). No new Class A stations are licensed in the conterminous United States, although the FCC states it may be possible to license additional Class A stations in Alaska. B (formerly II and III) — regional stations — 250 W to 50 kW, 24 hours. Stations on the AM expanded band, 1610 kHz to 1700 kHz, are limited to 10 kW days and 1 kW nights, non-directionally. Several expanded band stations operate DA-N or even DA-2 with up to 10 kW during all hours, after providing proof that such operations will not cause co- or adjacent-channel interference. If under 250 W at night, the antenna must be efficient enough to radiate more than 140.82 mV/m at 1 km. C (formerly IV) — local unlimited-time stations — 250 W to 1 kW, 24 hours. Class C stations that were licensed at 100 W are grandfathered. Rare Class Cs operate with directional arrays, such as KYPA and KHCB. D (formerly II-D, II-S, III-S) — current and former daytimers — Daytime 250 W to 50 kW, nighttime under 250 W or off-air. Field strength is limited to 140 mV/m (millivolts per meter) at 1 km. 
No new class D stations are licensed, with the exception of Class B stations that are downgrading their nighttime operations to Class D (i.e., less than 250 W). The station's daytime operation is then also reclassified as Class D. If a Class D station is on the air at night, it is not protected from any co-channel interference. TIS/HAR — travelers' information stations / highway advisory radio stations — Up to 10 W transmitter output power. Stations within US national parks are licensed by NTIA and not the FCC. Unlicensed broadcasting — (see low-power broadcasting) — 100 mW DC input to final amplifier with a maximum length radiator, no license needed, may be measured at edge of campus for school stations and neighborhood broadcasters. Notes: In the Western Hemisphere (ITU region 2), medium wave AM broadcasts are on channels spaced 10 kHz apart from 530 kHz to 1700 kHz, with certain classes restricted to subsets of the available frequencies. With few exceptions, Class A stations can be found only on the frequencies of 540 kHz, 640 to 780 kHz, 800 to 900 kHz, 940 kHz, 990 to 1140 kHz, 1160 to 1220 kHz, and 1500 to 1580 kHz. The exceptions are cited in relevant international treaties. While US and Canadian Class A stations are authorized to operate at a maximum of 50,000 watts day and night (and a minimum of 10,000 watts at night, if grandfathered), certain existing Mexican Class A stations and certain new Cuban Class A stations are authorized to operate at higher power. Certain Mexican Class A stations are authorized to operate at less than 50,000 watts at night, if grandfathered, but may operate at up to 100,000 watts during the day. Class B and D stations can be found on any frequencies from 540 kHz to 1700 kHz except where frequencies have been reserved for Class C stations. Class C stations can be found in the lower 48 US states on the frequencies of 1230 kHz, 1240 kHz, 1340 kHz, 1400 kHz, 1450 kHz, and 1490 kHz (commonly known as "graveyard" frequencies). Other countries may use other frequencies for their Class C stations. American territories in ITU region 3 with AM broadcasting stations (Guam and the Northern Mariana Islands) use the 9 kHz spacing customary to the rest of the world. All stations are class B or lower. Canada also defines Class CC (carrier current, restricted to the premises) and Class LP (less than 100 watts). TIS stations can be found on any frequency from 530 kHz to 1700 kHz in the US, but may only carry non-commercial messages without music. There is a network of TISs on 1710 kHz in New Jersey. Low-power AM stations located on a school campus are allowed to be more powerful, so long as their signal strength does not exceed roughly 14 to 45 μV/m (microvolts per meter), depending on frequency, at a distance of 30 meters (98.4 ft) from campus. Former system AM station classes were previously assigned Roman numerals from I to IV in the US, with subclasses indicated by a letter suffix. Current class A is equivalent to the old class I; class B is the old classes II and III, with class D being the II-D, II-S, and III-S subclasses; and class C is the old class IV. The conversion from old to new AM station classes is as follows: old Class I became Class A; old Classes II and III became Class B; old Class IV became Class C; and old Classes II-S, III-S, and II-D (daytime only) became Class D. AM station classes and clear channels listed by frequency The following chart lists frequencies on the AM broadcast band and which classes broadcast on these frequencies; Class A and Class B, 10,000 watt and higher (full-time) stations in North America which broadcast on clear-channel station frequencies are also shown. By international agreement, Class A stations must be 10,000 watts and above, with a 50,000 watt maximum for the US and Canada, but no maximum for other governments in the region. Mexico, for example, typically runs 150,000 to 500,000 watts, but some stations are grandfathered at 10,000 to 20,000 watts at night; by treaty, these sub-50,000 watt Mexican stations may operate with a maximum of 100,000 watts during the daytime. Because the AM broadcast band developed before technology suitable for directional antennas, there are numerous exceptions, such as the US use of 800 kHz and 900 kHz non-directionally in Alaska, limited to 5 kW at night; and 1050 kHz and 1220 kHz, directionally, in the continental US, and without time limits; each of these being assigned to specific cities (and each of these being Mexican Class I-A clear channels). In return for these limits on US stations, Mexico accepted limits on 830 and 1030 in Mexico City, non-directionally, restricted to 5 kW at night (both of these being US Class I-A clear channels). FM Station class description Notes: Canada protects all radio stations out to a signal strength of 0.5 mV/m (54 dBu), whereas in the US only commercial Class B stations receive that protection. Commercial B1 in the US is protected to 0.7 mV/m (57 dBu), and all other stations to 1.0 mV/m (60 dBu). Noncommercial-band stations (88.1 to 91.9) are not afforded this protection, and are treated as C3 and C2 even when they are B1 or B. C3 and C2 may also be reported internationally as B1 and B, respectively. Class C0 is for former C stations, demoted at the request of another station which needs the downgrade to accommodate its own facilities. In practice, many stations are above the maximum HAAT for a particular class, and correspondingly must downgrade their power to remain below the reference distance. Conversely, they may not increase power if they are below maximum HAAT. All class D (including L1 and L2 LPFM and translator) stations are secondary in the US, and can be bumped or forced off-air completely, even if they are not just a repeater and are the only station a licensee has. The United States is divided into regions that have different restrictions for FM stations. Zone I (much of the US Northeast and Midwest) and Zone I-A (most of California, plus Puerto Rico) are limited to classes B and B1, while Zone II (everything else) has only the C classes. All areas have the same classes for A and D. Power and height restrictions were put in place in 1962. A number of previously existing stations were grandfathered in, such as KRUZ in Santa Barbara, California, and WLFP in Memphis, Tennessee. The following table lists the various classes of FM stations, the reference facilities for each station class, and the protected and city grade contours for each station class: Historically, there were local "Class A" frequencies (like AM radio's class C stations) to which only class A stations would be allocated, and the other frequencies could not have a class A. 
According to the 1982 FCC rules & regulations, those frequencies were: 92.1, 92.7, 93.5, 94.3, 95.3, 95.9, 96.7, 97.7, 98.3, 99.3, 100.1, 100.9, 101.7, 102.3, 103.1, 103.9, 104.9, 105.5, 106.3 & 107.1. Stations on those twenty frequencies were limited to having equivalent signals no greater than 3 kW at above average terrain. FM zones The US is divided into three zones for FM broadcasting: I, I-A and II. The zone where a station is located may limit the choices of broadcast class available to a given FM station. Zone I in the US includes all of Connecticut, the District of Columbia, Delaware, Illinois, Indiana, Massachusetts, Maryland, New Jersey, Ohio, Pennsylvania, Rhode Island, and West Virginia. It also includes the areas south of latitude 43.5°N in Michigan, New Hampshire, New York, and Vermont; as well as coastal Maine, southeastern Wisconsin, and northern and eastern Virginia. Zone I-A includes California south of 40°N, as well as Puerto Rico and the US Virgin Islands. Zone II includes the remainder of the continental US, plus Alaska and Hawaii. In Zones I and I-A, there are no Class C, C0, or C1 stations. However, there are a few Class B stations with grandfathered power limits in excess of 50 kW, such as WETA (licensed for Washington DC in zone I, at a power of 75 kW ERP), WNCI (Columbus, Ohio in zone I, at 175 kW ERP), KPFK (Los Angeles in zone I-A, at 110 kW ERP), and, the most extreme example, WBCT (Grand Rapids, Michigan, in zone I, at 320 kW ERP). TV Full-power stations in the US VHF low (2-6): 100 kW video analog at in Zone I and in Zone II and Zone III above average terrain; 10 kW in Zone I and 45 kW in Zone II and Zone III digital at above average terrain VHF high (7-13): 316 kW video analog at in Zone I and in Zone II and Zone III above average terrain; 30 kW in Zone I and 160 kW in Zone II and Zone III digital at above average terrain UHF (14-36): video analog at above average terrain; digital at above average terrain Notes: All full-power analog television station transmissions in the US were terminated at midnight Eastern Daylight Time on June 12, 2009. Many broadcasters replaced their analog signal with their digital ATSC signal on the same transmission channel at that time. All US digital stations received a -DT suffix during the analog-to-digital transition. At analog shutdown, the FCC assigned to each digital station the call sign its associated analog station had used (with a -TV suffix if the analog station had this suffix, and without the -TV suffix if it didn't). Stations could optionally choose to keep the -DT suffix, but most did not. For US analog stations, the -TV suffix was required if there was a radio station with the same three- or four-letter callsign. Stations not required to use the -TV suffix may optionally request it if desired. Analog audio power was limited to 22% of video. Full-power stations in Canada Class A: UHF, 10 kW video/ EHAAT Class B: UHF, 100 kW video/ EHAAT Class C: UHF, video/ EHAAT (?) Class D: UHF, more than / EHAAT Class R: VHF, 100 kW low-band (channels 2–6), 325 kW high-band (channels 7–13). Class S: VHF, more than 100 kW low-band/325 kW high-band. Notes: Official definitions of these classes are difficult to locate. The values above are inferred from the Industry Canada database. There is some ambiguity about the difference between Classes C and D. 
Power-level limitations are not firmly enforced in Canada, and Industry Canada has been known to license stations for power levels much higher than the generally accepted limits. For example, CFRN-TV in Edmonton, Alberta operated on Channel 3 at over 600 kW but was not subject to international co-ordination due to its location north of the border. In Canada, the callsigns of all private TV stations have the -TV suffix. Most CBC Television and Ici Radio-Canada Télé TV callsigns end in the letter T and have no suffix. A few Radio-Canada stations, purchased by the CBC from private owners, retain the old -TV callsigns. Canadian digital stations all receive the -DT suffix. (this includes CBC and Radio-Canada stations) The Industry Canada database shows -PT suffixes for the channel allotments for permanent post-transition digital operation but when licences are issued for these permanent digital stations, -DT callsigns are used. Low-power TV (US) LPTV (secondary) (suffix: -LP, or a sequential-numbered callsign in format W##XX with no suffix for analog or with -D suffix for digital, or -LD for low-power digital stations): VHF: 3 kW analog video; 3 kW digital UHF: 150 kW analog video; 15 kW digital Experimental Unlicensed: not allowed except for medical telemetry, and certain wireless microphones The LPTV (low-power television) service was created in 1982 by the FCC to allocate channels for smaller, local stations, and community channels, such as public access stations. LPTV stations that meet additional requirements such as children's "E/I" core programming and Emergency Alert System broadcasting capabilities can qualify for a Class A (-CA) license. Broadcast translators, boosters, and other LPTV stations are considered secondary to full-power stations, unless they have upgraded to class A. Class A is still considered LPTV with respect to stations in Canada and Mexico. Class A television (US) Class-A stations (US) (suffix: -CA or -CD for digital class A): VHF: 3 kW analog video; 3 kW digital UHF: 150 kW analog video; 15 kW digital The class-A television class is a variant of LPTV created in 2000 by the FCC to allocate and protect some low-power affiliates. Class-A stations are still low-power, but are protected from RF interference and from having to change channel should a full-service station request that channel. Additionally, class-A stations, LPTV stations, and translators are the only stations currently authorized to broadcast both analog and digital signals, unlike full-power stations which must broadcast a digital signal only. Low-power TV (Canada) In Canada, there is no formal transmission power below which a television transmitter is considered broadcasting at low power. Industry Canada considers that a low power digital television undertaking "shall not normally extend a distance of 20 km in any direction from the antenna site," based on the determined noise-limited bounding contour. Mexico All digital television stations in Mexico have -TDT callsign suffixes. Analog stations, which existed until December 31, 2016, had -TV callsign suffixes. The equivalent of low power or translator service in Mexico is the equipo complementario de zona de sombra, which is intended only to fill in gaps between a station's expected and actual service area caused by terrain; a station of this type shares the callsign of another station. 
In analog, these services often were broadcast on the same or adjacent channels to their parent station, except in certain areas with tight packing of television stations (such as central Mexico). In digital, these services usually operate on the same RF channel as their parent station, except for those with conflicting full-power applications (XHBS-TDT Cd. Obregón, Son., channel 30 instead of 25), in certain other cases where it is technically not feasible (XHAW-TDT Guadalupe, NL, channel 26 instead of 25) or to make way for eventual repacking on upper UHF (XHPNW-TDT has four shadows on 33, its post-repacking channel, instead of 39). Equipos complementarios can relay their parent station, or a station that carries 75% or more of the same programming as its parent station. Stations of either type may have unusually low or high effective radiated powers. XHSMI-TDT in Oaxaca is licensed for two watts in digital. The highest-powered shadows are XEQ-TDT Toluca and XHBS-TDT Ciudad Obregón, both at 200 kW. FCC service table The United States Federal Communications Commission lists the following services on their website for television broadcasting: See also Call signs in North America - How call signs and classes are used in North America ITU prefix - How callsigns and classes are used worldwide Low-power broadcasting Class A television service References External links FCC AM classes FCC FM classes FCC LPTV Facts FCC Class-A TV Information Broadcast engineering Broadcast station classes, North America
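The old-to-new AM class conversion and the local "graveyard" channels described in the AM section above are simple enough to capture in a lookup table. The following Python sketch is purely illustrative (it is not an official FCC tool, and the function names are invented for the example):

```python
# Illustrative sketch of the old-to-new AM class conversion and the six US
# "graveyard" local channels described above. Not an official FCC tool.

OLD_TO_NEW_AM_CLASS = {
    "I": "A",
    "II": "B",
    "III": "B",
    "IV": "C",
    "II-S": "D",
    "III-S": "D",
    "II-D": "D",   # daytime-only stations
}

GRAVEYARD_KHZ = {1230, 1240, 1340, 1400, 1450, 1490}  # US Class C local channels


def new_am_class(old_class: str) -> str:
    """Map a pre-Rio Agreement class (Roman numeral) to the current letter class."""
    try:
        return OLD_TO_NEW_AM_CLASS[old_class.upper()]
    except KeyError:
        raise ValueError(f"Unknown legacy AM class: {old_class!r}")


def is_graveyard_frequency(khz: int) -> bool:
    """True if the frequency is one of the US local ('graveyard') Class C channels."""
    return khz in GRAVEYARD_KHZ


if __name__ == "__main__":
    print(new_am_class("II-D"))            # D
    print(is_graveyard_frequency(1450))    # True
```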
List of North American broadcast station classes
Engineering
4,214
1,249,440
https://en.wikipedia.org/wiki/Osmotic%20stress%20technique
The osmotic stress technique is a method for measuring the effect of water on biological molecules, particularly enzymes. Just as the properties of molecules can depend on the presence of salts, pH, and temperature, they can depend significantly on the amount of water present. In the osmotic stress technique, flexible neutral polymers such as polyethylene glycol and dextran are added to the solution containing the molecule of interest, replacing a significant part of the water. The amount of water replaced is characterized by the chemical activity of water. See also Osmotic shock References Tables containing osmotic pressure data for use in the osmotic stress technique Biochemistry methods
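The chemical activity of water mentioned above is linked to the osmotic pressure of the added polymer by the standard thermodynamic relation ln a_w = -ΠV̄_w/RT. A minimal Python sketch of that conversion follows; it is an illustration only, and the partial molar volume of water and the default temperature are assumed values rather than figures taken from any particular study:

```python
# Minimal sketch of the standard relation between osmotic pressure and water
# activity: ln(a_w) = -Pi * V_w / (R * T). Illustrative values only.

import math

R = 8.314          # J/(mol*K), gas constant
V_W = 1.805e-5     # m^3/mol, approximate partial molar volume of water (assumed)


def water_activity(osmotic_pressure_pa: float, temperature_k: float = 298.15) -> float:
    """Water activity a_w implied by a polymer solution's osmotic pressure (Pa)."""
    return math.exp(-osmotic_pressure_pa * V_W / (R * temperature_k))


def osmotic_pressure(a_w: float, temperature_k: float = 298.15) -> float:
    """Inverse relation: osmotic pressure (Pa) corresponding to a given water activity."""
    return -R * temperature_k * math.log(a_w) / V_W


if __name__ == "__main__":
    # e.g. a concentrated PEG solution exerting ~5 MPa of osmotic pressure
    print(round(water_activity(5.0e6), 4))   # ~0.964
```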
Osmotic stress technique
Chemistry,Biology
136
24,803,004
https://en.wikipedia.org/wiki/Smaart
Smaart (System Measurement Acoustical Analysis in Real Time) is a suite of audio and acoustical measurement and instrumentation software tools introduced in 1996 by JBL's professional audio division. It is designed to help the live sound engineer optimize sound reinforcement systems before public performance and actively monitor acoustical parameters in real time while an audio system is in use. Most earlier analysis systems required specific test signals sent through the sound system, ones that would be unpleasant for the audience to hear. Smaart is a source-independent analyzer and therefore will work effectively with a variety of test signals including speech or music. The product has been known as JBL-SMAART, SIA-SMAART Pro, EAW SMAART, and SmaartLive. As of 2008 the product has been branded as simply Smaart. An acoustician version has been offered as Smaart Acoustic Tools; however, as of Smaart v7.4, Acoustic Tools has been included within the Impulse Response mode of Smaart. A standalone sound pressure level monitoring-only version called Smaart SPL was released in 2020. Smaart is a real-time single and dual-channel fast Fourier transform (FFT) analyzer. Smaart has two modes: Real-Time Mode and impulse response mode. Real-time mode views include single channel Spectrum and dual channel Transfer Function measurements to display RTA, Spectrograph, and Transfer Function (Live IR, Phase, Coherence, Magnitude) measurements. The impulse response mode will display time domain graphs such as Lin (Linear), Log (Logarithmic), ETC (Energy Time Curve), as well as Frequency, Spectrograph, and Histogram graphs. Impulse Response mode also includes a suite of acoustical intelligibility criteria such as STI, STIPA, Clarity, RT60, EDT, etc. Smaart has been licensed and owned by several companies since JBL and is currently owned and developed by Rational Acoustics. First written as a native Windows application to work within Windows 3.1 and Windows 95 on IBM PC–compatible computers, Smaart gained a version in 2006 that was compatible with both Windows and Apple Macintosh operating systems. Smaart reached its 9th version in September 2022. Use Smaart is based on real-time fast Fourier transform (FFT) analysis, including dual-FFT audio signal comparison, called "transfer function", and a single-FFT spectrum analyzer. It includes maximum length sequence (MLS) analysis as a choice for impulse response, for the measurement of room acoustics. The FFT implementation of Smaart includes a proprietary multi-time window (MTW) selection in which the FFT, rather than being a fixed length, is made increasingly shorter as the frequency increases. This feature allows the software to 'ignore' later signal reflections from walls and other surfaces, increasing in coherence as the audio frequency increases. Smaart 8 runs under Windows 7 or newer and Mac OS X 10.7 or newer, including 32- and 64-bit versions. A computer having a dual-core processor with a clock rate of at least 2 GHz is recommended. Smaart can be set to sample rates of 44.1 kHz, 48 kHz or 96 kHz, and to bit depths of 16 or 24. The software works with the ASIO, Core Audio, WAV or WDM computer audio drivers. Transfer function Smaart's transfer function requires a stereo input to the computer because it analyzes two channels of audio signal. Using its dual-FFT mode, Smaart compares one channel with the other to show the difference. 
This is used by live sound engineers to set up concert sound systems before a show and to monitor and adjust these systems during the performance. The first channel of audio undergoing analysis is connected directly from one of the main outputs of the mixing console and the second channel is connected to a microphone placed in the audience listening area, usually an omnidirectional test microphone with a flat, neutral pickup characteristic. The direct mixing console audio output is compared with the microphone input to determine how the sound is changed by the sound system elements such as loudspeakers and amplifiers, and by the room acoustics indoors or by the weather conditions and acoustic environment outdoors. Smaart displays the difference between the intended sound from the mixer and the received sound at the microphone, and this real-time display informs the audio engineer's decisions regarding delay times, equalization and other sound system adjustment parameters. Although pink noise is a traditional choice for test signal, Smaart is a source-independent analyzer, which means that it does not rely on a specific test signal to produce measurement data. Pink noise is still in common usage because its energy distribution allows for quick measurement acquisition, but music or another broadband test signal can be used instead. Transfer function measurements can also be used to examine the frequency response of audio equipment, including individual amplifiers, loudspeakers and digital signal processors such as audio crossovers and equalizers. It can be used to compare a known neutral-response test microphone with another microphone in order to better understand its frequency response and, by changing the angle of the microphone under test, its polar response. Transfer function measurements can be used to adjust audio crossover settings for multi-way loudspeakers; similarly, they can be used to adjust only the subwoofer-to-top box crossover characteristics in a sound system where the main, non-subwoofer loudspeakers are flown or rigged but the subwoofers are placed on the ground. One of the traces in the Smaart display shows phase response. To properly align adjacent frequency bands through a crossover, the two phase responses should be adjusted until they are seen in Smaart to be parallel through the crossover frequency. The transfer function measurement can be used to measure frequency-related electrical impedance, one of the electrical characteristics of dynamic loudspeakers. Grateful Dead sound system engineer "Dr. Don" Pearson worked out the method in 2000, using Smaart to compare the voltage drop through a simple resistor between a loudspeaker and a random noise generator. Real-time analyzer In spectrograph view, Smaart displays a real-time spectrum analysis, showing the relative strength of audio frequencies for one audio signal. Needing only one channel of audio input, this capability can be used for a variety of purposes. With Smaart's input connected to the mixing console's pre-fade listen (PFL) or cue bus, spectrograph view can display the frequency response of individual channels, several selected channels, or various mixes. Spectrograph mode can be used to display room resonances: pink noise is applied to the room's sound system, and the signal from a test microphone in the room is displayed on Smaart. When the pink noise is muted, the display shows the lingering tails of noise frequencies that are resonating. 
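The dual-FFT transfer-function measurement described above amounts to estimating the system response between the console reference and the measurement microphone from their averaged spectra. The sketch below shows the general idea in Python with SciPy; it is a simplified illustration of the underlying math, not Smaart's own code (and not its proprietary MTW averaging), and the signal names are invented for the example:

```python
# A bare-bones dual-FFT transfer-function estimate in the spirit of what the
# text describes. Illustrative only; not Smaart's implementation.

import numpy as np
from scipy import signal

FS = 48_000  # Hz, one of the sample rates mentioned above


def transfer_function(reference: np.ndarray, measured: np.ndarray, nfft: int = 4096):
    """H1 estimate of the system between the console feed and the measurement mic.

    Returns frequency bins, magnitude in dB, phase in degrees, and coherence.
    """
    f, p_xx = signal.welch(reference, fs=FS, nperseg=nfft)            # reference auto-spectrum
    _, p_xy = signal.csd(reference, measured, fs=FS, nperseg=nfft)    # cross-spectrum
    h = p_xy / p_xx                                                   # H1 transfer function
    _, coh = signal.coherence(reference, measured, fs=FS, nperseg=nfft)
    mag_db = 20 * np.log10(np.abs(h) + 1e-12)
    phase_deg = np.degrees(np.angle(h))
    return f, mag_db, phase_deg, coh


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    console_feed = rng.standard_normal(FS * 5)          # white-noise stand-in for the console feed
    mic = 0.5 * np.roll(console_feed, 48) + 0.01 * rng.standard_normal(FS * 5)  # delayed, attenuated, noisy
    f, mag, phase, coh = transfer_function(console_feed, mic)
    print(f"{mag[100]:.1f} dB at {f[100]:.0f} Hz, coherence {coh[100]:.2f}")
```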
Impulse response Smaart can be used to find the delay time between two signals, in which case the computer needs two input channels and the software uses a transfer function measurement engine. Called "Delay Locator", the software calculates the impulse responses of two continuous audio signals, finding the similarities in the signals and measuring how much time has elapsed between them. This is used to set delay times for delay towers at large outdoor sound systems, and it is used to set delay times for other loudspeaker zones in smaller systems. Veteran Van Halen touring sound engineer Jim Yakabuski calls such delay locator programs as Smaart a "must have" item, useful for quickly aligning sound system elements when setup time is limited. Market Smaart is primarily aimed at sound system operators to assist them in setting up and tuning sound systems. Other users include audio equipment designers and architectural acousticians. Author and sound engineer Bob McCarthy wrote in 2007 that because of Smaart's widespread acceptance at all levels of live sound mixing, the paradigm has reversed from the 1980s one of surprise at finding scientific tools in the concert sound scene to one of surprise if the observer finds that such tools are not being used to tune a sound system. Smaart has been compared to other software-based sound system measurement tools such as SIM by Meyer Sound Laboratories and IASYS by Audio Control, both of which offer delay finder tools. Smaart has been described as "a newer, slimmer and much cheaper—but not necessarily better—version of the Meyer SIM system." MLSSA, developed by DRA Laboratories in 1987, and TEF, a time delay spectrometry product by Gold Line, are other products predating Smaart that are used to tune loudspeakers such as studio monitors. A software tool that reached Mac users in 1997 was named SpectraFoo, by Metric Halo. At the same time, some early Smaart users found that after tweaking their MIDI drivers they could get Smaart to work on an Apple computer, the software running inside an x86 emulator such as SoftWindows "with varying results". History As early as 1978, field analysis of rock concert audio was undertaken by Don Pearson, known by his nickname "Dr. Don", while working on sound systems used by the Grateful Dead. Pearson published articles about impulse response measurements taken during setup and testing of concert sound systems, and recommended the Dead buy an expensive Brüel & Kjær 2032 Dual Channel FFT analyzer, made for industrial engineering. Along with Dead audio engineer Dan Healy, Pearson developed methods of working with this system to set up sound systems on tour, and he assisted Meyer engineers working on a more suitable source-independent measurement system which was to become their SIM product. As well, Pearson had an "intimate involvement" with the engineers who were creating Smaart, including a meeting with Jamie Anderson. Smaart was developed by Sam Berkow in association with Alexander "Thorny" Yuill-Thornton II, touring sound engineer with Luciano Pavarotti and The Three Tenors. In 1995, Berkow and Thorny founded SIA Software Company, produced Smaart and licensed the product to JBL. First exhibited in New York City at the Audio Engineering Society's 99th convention in October 1995 and described the next month in Billboard magazine, in May 1996 the software product was introduced at the price of $695, the equivalent of $ in today's currency. 
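The delay-locator function described in the impulse-response section above can be illustrated with a plain cross-correlation: the lag at which the correlation between the reference and the microphone signal peaks gives the time offset. This is a generic sketch of the technique, not Smaart's implementation:

```python
# A minimal illustration of estimating the offset between a console reference
# and a delayed microphone signal by cross-correlation. Not Smaart's algorithm.

import numpy as np
from scipy import signal

FS = 48_000  # Hz


def find_delay_ms(reference: np.ndarray, measured: np.ndarray) -> float:
    """Delay (ms) of `measured` relative to `reference` at the cross-correlation peak."""
    corr = signal.correlate(measured, reference, mode="full")
    lags = signal.correlation_lags(len(measured), len(reference), mode="full")
    best_lag = lags[np.argmax(np.abs(corr))]
    return 1000.0 * best_lag / FS


if __name__ == "__main__":
    rng = np.random.default_rng(1)
    ref = rng.standard_normal(FS)                          # stand-in for the reference feed
    delayed = np.concatenate([np.zeros(480), ref])[:FS]    # ~10 ms later, as if from a distant mic
    print(f"estimated delay: {find_delay_ms(ref, delayed):.1f} ms")  # ~10.0 ms
```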
Studio Sound magazine described Smaart in 1996 as "the most talked about new product" at the 100th AES convention in Copenhagen, exemplifying a new trend in software audio measurement. Calvert Dayton joined SIA Software in 1996 as graphic designer, technical writer and website programmer. Smaart was unusual because it helped audio professionals such as theatrical sound designers do what was previously possible only with highly sophisticated and expensive measurement devices. Audio system engineers from Clair Brothers used Smaart to tune the sound system at each stop during U2's PopMart Tour 1997–1998. As it increased in popularity, engineers who used Smaart found mixed results: touring veteran Doug Fowler wrote that "misuse was rampant" when the software first started appearing in the field. He warned users against faulty interpretation, saying "I still see bad decisions based on bad data, or bad decisions based on a fundamental lack of understanding of the issues at hand." Nevertheless, Clive Young, editor of Pro Sound News, wrote in 2005 that the introduction of Smaart in 1995 was the start of "the modern era of sound reinforcement system analysis software". In 1998, JBL Smaart Pro won the TEC Awards category for computer software and peripherals. Eastern Acoustic Works (EAW) bought SIA Software, and brought in Jamie Anderson to manage the division. Version 3 was introduced under EAW's ownership, with the additional capability of accepting optional plug-ins which could be used to apply sound system adjustments, as measured by Smaart, to digital signal processing (DSP) equipment. The external third party DSP would perform the corrections indicated by Smaart. Versions 4 and 5 were built upon the foundation of version 3, but with each major release, the application was getting more and more difficult to write, and further improvements appeared practically impossible to implement. For version 6, the designers decided to tear Smaart back down to its basics and rebuild it on a flexible multi-tasking, multi-platform framework which would allow it to be used on Mac OS X and Windows machines. Writing it took two years, and it was released in a package which included the earlier version 5 because there was not enough time to incorporate all elements of the existing feature set. Anderson said in 2007, "we released Version 6 without all of the features of 5, but we are adding those features back in." Smaart 6 was nominated for a TEC Award in 2007 but did not win. EAW developed a digital mixing console prototype in 2005, the UMX.96; a console which incorporated SmaartLive 5 internally. Any selected channel on the mixer could be used as a source for Smaart analysis, displaying, for instance, the real-time results of channel equalization. The console could be configured to send multiple microphone inputs to Smaart, and it offered constant metering of sound pressure level in decibels. When it was put into production in 2007, band engineer Don Dodge took the mixer out on a world tour with Foreigner, the first concert mixed in March 2007. With its 15-inch touchscreen able to serve both audio control and Smaart analysis functions, Dodge continued to mix Foreigner on it throughout 2007 and 2008. Rational Acoustics was incorporated on April 1, 2008. On November 9, 2009, under the leadership of Jamie and Karen Anderson, programmer Adam Black and technical chief Calvert Dayton, Rational Acoustics became the full owner of the Smaart brand. 
Rational released Smaart 7 on April 14, 2010, a version which uses less processing power than v5 and v6 because of efficiencies brought about in the redesigned code. Smaart 7 was written using a new object-oriented code architecture and was given improved data acquisition. Other new features include graphic user interface changes and delay tracking. Users can run simultaneously displayed real-time measurements in multiple windows, as many as their computer hardware will allow. Smaart 7 was nominated in 2010 for a TEC Award but did not win. In April 2011, Smaart 7 was named one of four Live Design Sound Products of the Year 2010–2011. Version history May 1996 – JBL-Smaart 1.0 March 1997 – JBL-Smaart 1.4 1998 – SIA-Smaart Pro 2 April 1999 – SIA-Smaart Pro 3 2000 – SIA SmaartLive 4 October 2000 – SIA SmaartLive 4.1 April 2001 – SIA SmaartLive 4.5 September 2001 – SIA SmaartLive 4.6 June 2002 – SIA SmaartLive 5 October 2003 – SIA SmaartLive 5.3 2006 – EAW Smaart 6 April 2010 – Smaart 7 October 2010 – Smaart 7.1 April 2011 – Smaart 7.2 July 2011 – Smaart 7.3 August 2012 – Smaart 7.4 April 2014 – Smaart 7.5 March 2016 – Smaart 8.0 November 2016 – Smaart 8.1 December 2017 – Smaart 8.2 October 2018 – Smaart 8.3 November 2019 – Smaart 8.4 June 2020 – Smaart 8.5 September 2022 – Smaart 9 (Suite, RT, LE, and SPL) References External links Rational Acoustics Home Page Smaart Basics: Example System Overview, video with Jamie Anderson Sam Berkow NAMM Oral History Interview (2011) 1996 software Audiovisual introductions in 1996 Acoustics Windows multimedia software MacOS multimedia software
Smaart
Physics
3,300
506,291
https://en.wikipedia.org/wiki/Phenakistiscope
The phenakistiscope (also known by the spellings phénakisticope or phenakistoscope) was the first widespread animation device that created a fluid illusion of motion. Dubbed and ('stroboscopic discs') by its inventors, it has been known under many other names until the French product name became common (with alternative spellings). The phenakistiscope is regarded as one of the first forms of moving media entertainment that paved the way for the future motion picture and film industry. Similar to a GIF animation, it can only show a short continuous loop. Etymology and spelling When it was introduced in the French newspaper in June 1833, the term 'phénakisticope' was explained to be from the root Greek word phenakistikos (or rather from φενακίζειν phenakizein), meaning "deceiving" or "cheating", and ὄψ óps, meaning "eye" or "face", so it was probably intended loosely as 'optical deception' or 'optical illusion'. The term phénakisticope was first used by the French company Alphonse Giroux et Compagnie in their application for an import license (29 May 1833) and this name was used on their box sets. Fellow Parisian publisher Junin also used the term 'phenakisticope' (both with and without the accent). Inventor Joseph Plateau did not give a name for the device when he first published about it in January 1833. Later in 1833 he used 'phénakisticope' in an article to refer to the published versions that he was not involved with. By then, he had an authorized set published first as Phantasmascope (by Ackermann in London), which some months later was changed into Fantascope for a new edition and sets by other animators. In many writings and presentations Plateau used both the terms phénakisticope and fantascope, seemingly accepting phénakisticope as the better-known name and holding on to fantascope as the name he preferred. The spelling 'phenakistiscope' was possibly introduced by lithographers Forrester & Nichol in collaboration with optician John Dunn; they used the title "The Phenakistiscope, or, Magic Disc" for their box sets, as advertised in September 1833. The corrupted part 'scope' was understood to be derived from Greek 'skopos', meaning "aim", "target", "object of attention" or "watcher", "one who watches" (or rather from skopein) and was quite common in the naming of optical devices (e.g. Telescope, Microscope, Kaleidoscope, Fantascope, Bioscope). The misspelling 'phenakistoscope' can already be found in 1835 in The American Journal of Science and Arts and later ended up as a standard name through encyclopedias, for instance in A Dictionary of Science, Literature, & Art (London, 1842)Iconographic Encyclopaedia of Science, Literature, and Art (New York, 1852). Technology The phénakistiscope usually comes in the form of a spinning cardboard disc attached vertically to a handle. Arrayed radially around the disc's center is a series of pictures showing sequential phases of the animation. Small rectangular apertures are spaced evenly around the rim of the disc. The user would spin the disc and look through the moving slits at the images reflected in a mirror. The scanning of the slits across the reflected images keeps them from simply blurring together so that the user can see a rapid succession of images that appear to be a single moving picture. When there is the same number of images as slots, the images will animate in a fixed position, but will not drift across the disc. Fewer images than slots and the images will drift in the opposite direction to that of the spinning disc. 
More images than slots and the images will drift in the same direction as the spinning disc. Unlike the zoetrope and other successors, common versions of the phénakisticope could only practically be viewed by one person at a time. The pictures of the phénakisticope became distorted when spun fast enough to produce the illusion of movement; they appeared a bit slimmer and were slightly curved. Sometimes animators drew an opposite distortion in their pictures to compensate for this. However, most animations were not intended to give a realistic representation and the distortion isn't very obvious in cartoonish pictures. The distortion and the flicker caused by the rotating slits are not seen in most phénakisticope animations now found online (for instance the GIF animation on this page). These are usually animations created with software. These do not replicate the actual viewing experience of a phénakisticope, but they can present the work of the animators in an optimized fashion. Some miscalculated modern re-animations also have the slits rotating (which would appear motionless when viewed through an actual phénakisticope) and the figures moving across the discs where they were supposed to stand still (or standing still when they were supposed to move around). Most commercially produced discs are lithographic prints that were colored by hand, but also multi-color lithography and other printing techniques have been used by some manufacturers. Invention The phenakisticope was invented almost simultaneously around December 1832 by the Belgian physicist Joseph Plateau and the Austrian professor of practical geometry Simon Stampfer. As a university student Plateau noticed in some early experiments that when looking from a small distance at two concentric cogwheels that turned fast in opposite directions, it produced the optical illusion of a motionless wheel. He later read Peter Mark Roget's 1824 article Explanation of an optical deception in the appearance of the spokes of a wheel when seen through vertical apertures which addressed the same illusion. Plateau decided to investigate the phenomenon further and later published his findings in Correspondance Mathématique et Physique in 1828. In a letter to the same scientific periodical dated December 5, 1829 he presented his (still nameless) Anorthoscope, a disc that turns an anamorphic picture into a normal picture when it is spun fast and seen through the four radial slits of a counter-rotating black disc. This invention was later marketed, for instance by Newton & Co in London. On 10 December 1830 Michael Faraday presented a paper at the Royal Institution of Great Britain called On a Peculiar Class of Optical Deceptions about the optical illusions that could be found in rotating wheels. He referred to Roget's paper and described his associated new findings. Much was similar to what Plateau had published and Faraday not only acknowledged this publicly but also corresponded with Plateau personally and sent him his paper. Some of Faraday's experiments were new to Plateau and especially the one with a fixed image produced by a turning wheel in front of the mirror inspired Plateau with the idea for new illusions. In July 1832 Plateau sent a letter to Faraday and added an experimental disc with some "anamorphoses" that produced a "completely immobile image of a little perfectly regular horse" when rotated in front of a mirror. 
After several attempts and many difficulties he constructed a working model of the phénakisticope in November or December 1832. Plateau published his invention in a 20 January 1833 letter to Correspondance Mathématique et Physique. He believed that if the manner of producing the illusions could be somehow modified, they could be put to other uses, "for example, in phantasmagoria". Stampfer read about Faraday's findings in December 1832 and was inspired to do similar experiments, which soon led to his invention of what he called Stroboscopischen Scheiben oder optischen Zauberscheiben (stroboscope discs or optical magic discs). Stampfer had thought of placing the sequence of images on either a disc, a cylinder (like the later zoetrope) or, for a greater number of images, on a long, looped strip of paper or canvas stretched around two parallel rollers (much like film reels). He also suggests covering up most of the disc or the mirror with a cut-out sheet of cardboard so that one sees only one of the moving figures and painting theatrical coulisses and backdrops around the cut-out part (somewhat similar to the later Praxinoscope-Theatre). Stampfer also mentioned a version which has a disc with pictures on one end and a slotted disc on the other side of an axis, but he found spinning the disc in front of a mirror more simple. By February 1833 he had prepared six double-sided discs, which were later published by Trentsensky & Vieweg. Matthias Trentsensky and Stampfer were granted an Austrian patent (Kaiserlichen königlichen Privilegium) for the discs on 7 May 1833. Publisher and Plateau's doctoral adviser Adolphe Quetelet claimed to have received a working model to present to Faraday as early as November 1832. Plateau mentioned in 1836 that he thought it difficult to state the exact time when he got the idea, but he believed he was first able to successfully assemble his invention in December. He stated to trust the assertion of Stampfer to have invented his version at the same time. Peter Mark Roget claimed in 1834 to have constructed several phénakisticopes and showed them to many friends as early as in the spring of 1831, but as a consequence of more serious occupations he did not get around to publishing any account of his invention. Commercial production According to Mathias Trentsensky, of art dealer and publishing company Trentsensky & Vieweg, Stampfer had prepared six double-sided discs as early as February 1833 and had repeatedly demonstrated these to many friends. In April 1833 Trentsensky applied for an Austrian patent (k.k. Privilegium) together with Stampfer, which was granted on 7 May 1833. A first edition of four double-sided discs was soon published, but it sold out within four weeks and left them unable to ship orders. These discs probably had round holes as illustrated in an 1868 article and a 1922 reconstruction by William Day, but no original copies are known to still exist. Trentsensky & Vieweg published an improved and expanded set of eight double-sided discs with vertical slits in July 1833. English editions were published not much later with James Black and Joseph Myers & Co. A total of 28 different disc designs have been credited to Professor Stampfer. Joseph Plateau never patented his invention, but he did design his own set of six discs for Ackermann & Co in London. The series was published in July 1833 as Phantasmascope. 
In October 1833, Ackermann & Co changed the name of the series to Fantascope and released two more sets of six discs each, one designed by Thomas Talbot Bury and one by Thomas Mann Baynes. In the meantime some other publishers had apparently been inspired by the first edition of Professor Stampfer's Stroboscopische Scheiben: Alphonse Giroux et Compagnie applied for a French import license on 28 May 1833 for 'Le Phénakisticope' and were granted one on 5 August 1833. They had a first set of 12 single sided discs available before the end of June 1833. Before the end of December 1833 they released two more sets. By 16 June 1833, Joh. Val. Albert published Die belebte Wunderscheibe in Frankfurt and soon marketed internationally. This version had uncut discs with pictures and a separate larger disc with round holes. The set of Die Belebte Wunderscheibe in Dick Balzer's collection shows several discs with designs that are very similar to those of Stampfer and about half of them are also very similar to those of Giroux's first set. It is unclear where these early designs (other than Stampfer's) originated, but many of them would be repeated on many discs of many other publishers. It is unlikely that much of this copying was done with any licensing between companies or artists. Joseph Plateau and Simon Stampfer both complained around July 1833 that the designs of the discs they had seen around (besides their own) were poorly executed and they did not want to be associated with them. The phénakisticope became very popular and soon there were very many other publishers releasing discs with numerous names, including: Periphanoscop – oder Optisches Zauber-theater / ou Le Spectacle Magique / or The Magical Spectacle (by R.S. Siebenmann, Arau, August 1833) Toover-schijf (by A. van Emden, Amsterdam, August 1833) Fores's Moving Panorama, or Optical Illusions (London, September 1833) The Phenakistiscope or Magic Disc (by Forrester & Nichol & John Dunn, September 1833) Motoscope, of wonderschijf (Amsterdam, September 1833) McLean's Optical Illusions, or, Magic Panorama (London, November 1833) Le Fantascope (by Dero-Becker, Belgium, December 1833) The Phenakisticope, or Living Picture (by W. Soffe, December 1833) Soffe's Phantascopic Pantomime, or Magic Illusions (December 1834) Wallis's Wheel of Wonders (London, December 1834) The Laughingatus, or Magic Circle (by G.S. Tregear, ) Le Phenakisticope (by Junin, Paris, 1839?) Das Phorolyt oder die magische Doppelscheibe (by Purkyně & Pornatzki, Breslau, 1841) Optische Zauber-Scheiben / Disques Magique (unknown origin, one set executed by Frederic Voigtlaender) Optische Belustigungen – Optical Amusements – Optic Amusements (unknown origin) Fantasmascope. Tooneelen in den spiegel (K. Fuhri, The Hague, 1848) Kinesiskop (designed by Purkyně, published by Ferdinand Durst, Prague, 1861) The Magic Wheel (by J. Bradburn, US, 1864) L'Ékonoscope (by Pellerin & Cie, France, 1868) Pantinoscope (with Journal des Demoiselles, France, 1868) Magic Circle (by G. Ingram, ) Tableaux Animés – Nouveau Phénakisticope (by Wattilaux, France, ) The Zoopraxiscope (by Eadward Muybridge, US, 1893) Prof. Zimmerman's Ludoscope (by Harbach & Co, Philadelphia, 1904) After its commercial introduction by the Milton Bradley Company, the Zoetrope (patented in 1867) soon became the more popular animation device and consequently fewer phénakisticopes were produced. 
Variations Many versions of the phénakisticope used smaller illustrated uncut cardboard discs that had to be placed on a larger slotted disc. A common variant had the illustrated disc on one end of a brass axis and the slotted disc on the other end; this was slightly more unwieldy but needed no mirror and was claimed to produce clearer images. Fores offered an Exhibitor: a handle for two slotted discs with the pictures facing each other which allowed two viewers to look at the animations at the same time, without a mirror. A few discs had a shaped edge on the cardboard to allow for the illusion of figures crawling over the edge. Ackermann & Co published three of those discs in 1833, including one by inventor Joseph Plateau. Some versions added a wooden stand with a hand-cranked mechanism to spin the disc. Several phénakisticope projectors with glass discs were produced and marketed since the 1850s. Joseph Plateau created a combination of his phénakisticope and his Anorthoscope sometime between 1844 and 1849, resulting in a back-lit transparent disc with a sequence of figures that are animated when it is rotated behind a counter-rotating black disc with four illuminated slits, spinning four times as fast. Unlike the phénakisticope several persons could view the animation at the same time. This system has not been commercialised; the only known two handmade discs are in the Joseph Plateau Collection of the Ghent University. Belgian painter Jean Baptiste Madou created the first images on these discs and Plateau painted the successive parts. In 1849 Joseph Plateau discussed the possibilities of combining the phénakisticope with the stereoscope as suggested to him by its inventor Charles Wheatstone. In 1852 Duboscq patented such a "Stéréoscope-fantascope, stéréofantscope ou Bïoscope". Of three planned variations only one was actually produced but without much success. Only one extant disc is known, which is in the Plateau collection of Ghent University. Projection The first known plan for a phénakisticope projector with a transparent disc was made by Englishman T.W. Naylor in 1843 in the Mechanical's Magazine – Volume 38. His letter was illustrated with a detailed side view of the device. Naylor suggested tracing the pictures of available phenakisticopes onto glass with transparent paint and painting the rest black. Nothing else is known of Naylor or his machine. Franz von Uchatius possibly read about Naylor's idea in German or Austrian technical journals and started to develop his own version around 1851. Instrument maker Wenzel Prokesch made a first model for him which could only project images of a few inches in diameter. A more successful second model by Prokesch had a stationary disc with transparent pictures with a separate lens for each picture focused on the same spot on a screen. A limelight revolved rapidly behind the disc to project the sequential images one by one in succession. This model was demonstrated to the Austrian Academy of Sciences in 1853. Prokesch marketed the machine and sold one to magician Ludwig Döbler who used it in his shows that also included other magic lantern techniques, like dissolving views. From around 1853 until the 1890s J. Duboscq in Paris marketed different models of a projection phénakisticope. It had a glass disc with a diameter of 34 centimeters for the pictures and a separate disc with four lenses. The discs rotated at different speeds. An "Optical Instrument" was patented in the U.S. in 1869 by O.B. 
Brown, using a phenakistiscope-like disc with a technique very close to the later cinematograph; with Maltese Cross motion; a star-wheel and pin being used for intermittent motion, and a two-sector shutter. Thomas Ross developed a small transparent phénakisticope system, called Wheel of life, which fitted inside a standard magic lantern slide. A first version, patented in 1869, had a glass disc with eight phases of a movement and a counter-rotating glass shutter disc with eight apertures. The discs depicted Ice Skaters, Fishes, Giant's Ladder, Bottle Imp and other subjects. An improved version had 13 images and a single slot shutter disc and received British Patent 2685 on 10 October 1871. Henry Renno Heyl presented his Phasmatrope on 5 February 1870 at the Philadelphia Academy of Music. This modified magic lantern had a wheel that could hold 16 photographic slides and a shutter. The wheel was rotated in front of the light source by an intermittent mechanism to project the slides successively (probably with a speed of 3 fps). The program contained three subjects: All Right (a popular Japanese acrobat), Brother Jonathan and a waltzing couple. Brother Jonathan addressed the audience with a voice actor behind the screen and professed that "this art will rapidly develop into one of the greatest merit for instruction and enjoyment." The pictures of the waltzing couple survived and consist of four shots of costumed dancers (Heyl and a female dancing partner) that were repeated four times in the wheel. The pictures were posed. Capturing movement with "instantaneous photography" would first be established by Eadward Muybridge in 1878. Eadward Muybridge created his Zoopraxiscope in 1879 and lectured until 1894 with this projector for glass discs on which pictures in transparent paint were derived from his chronophotographic plates. Scientific use The phénakisticope was invented through scientific research into optical illusions and published as such, but soon the device was marketed very successfully as an entertaining novelty toy. After the novelty wore off, it was mostly seen as a toy for children. Nonetheless, some scientists still regard it as a useful demonstration tool. The Czech physiologist Jan Purkyně used his version, called Phorolyt, in lectures since 1837. In 1861 one of the subjects he illustrated was the beating of a heart. German physicist Johann Heinrich Jakob Müller published a set of 8 discs depicting several wave motions (waves of sound, air, water, etcetera) with J.V. Albert in Frankfurt in 1846. The famous English pioneer of photographic motion studies Eadweard Muybridge built a phenakisticope projector for which he had his photographs rendered as contours on glass discs. The results were not always very scientific; he often edited his photographic sequences for aesthetic reasons and for the glass discs he sometimes even reworked images from multiple photographs into new combinations. An entertaining example is the sequence of a man somersaulting over a bull chased by a dog. For only one disc he chose a photographic representation; the sequence of a running horse skeleton, which was probably too detailed to be painted on glass. This disc was most likely the very first time a stop motion technique was successfully applied. Muybridge first called his apparatus Zoogyroscope, but soon settled on the name Zoöpraxiscope. He used it in countless lectures on human and animal locomotion between 1880 and 1895. 
20th and 21st centuries The Joseph Plateau Award, a trophy resembling a phénakisticope, was a Belgian movie award given yearly between 1985 and 2006. Several vinyl music releases have phénakistiscope-like animations on the labels or on the vinyl itself. In 1956 Red Raven Movie Records started a series of 78 RPM 8" singles with animations to be viewed with a device with small mirrors similar to a praxinoscope to be placed on the center of the disc. Since 2010 audio-visual duo Sculpture has released several picture discs with very elaborate animations to be viewed under a stroboscope flashing exactly 25 times per second, or filmed with a video camera shooting progressively at a very high shutter speed with a frame rate of 25fps. Gallery See also Eadweard Muybridge Electrotachyscope Flip book History of animation History of film List of film formats List of multiple discoveries Kaleidoscope Optical toys Praxinoscope Precursors of film Strobe light Thaumatrope Zoetrope Zoopraxiscope References External links Collection of simulated phenakistiscopes in action – Museum For The History Of Sciences The Richard Balzer Collection (animated gallery) An exhibit of similar optical toys, including the zoetrope (Laura Hayes and John Howard Wileman Exhibit of Optical Toys in the NCSSM) Some pictures – Example of the phenakistiscope Magic Wheel optical toy, 1864, in the Staten Island Historical Society Online Collections Database 1830s toys Audiovisual introductions in 1832 Austrian inventions Belgian inventions History of animation Optical illusions Optical toys Precursors of film
Phenakistiscope
Physics
4,824
20,759,958
https://en.wikipedia.org/wiki/Bencyclane
Bencyclane is an antispasmodic, vasodilator, and platelet aggregation inhibitor. Synthesis Grignard addition of benzylmagnesiumbromide to suberone would give 1-benzylcycloheptanol [4006-73-9] (1'). Williamson ether synthesis with 3-dimethylaminopropylchloride [109-54-6] (2) completed the synthesis of bencyclane (3). See also Clofenciclan References Calcium channel blockers Ethers Dimethylamino compounds
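The synthesis route above can be sanity-checked computationally. The following is only an illustrative sketch, not part of the cited synthesis: the SMILES strings are assumed encodings of suberone, 1-benzylcycloheptanol, 3-dimethylaminopropyl chloride and bencyclane, checked with the open-source RDKit toolkit; bencyclane should come out as C19H31NO with a molecular weight near 289.5 g/mol.

```python
# Illustrative sketch (assumed SMILES encodings, not from the cited route):
# check the proposed intermediates and product with RDKit.
from rdkit import Chem
from rdkit.Chem import Descriptors, rdMolDescriptors

structures = {
    "suberone (cycloheptanone)":      "O=C1CCCCCC1",
    "1-benzylcycloheptanol":          "OC1(Cc2ccccc2)CCCCCC1",
    "3-dimethylaminopropyl chloride": "ClCCCN(C)C",
    "bencyclane":                     "CN(C)CCCOC1(Cc2ccccc2)CCCCCC1",
}

for name, smiles in structures.items():
    mol = Chem.MolFromSmiles(smiles)   # returns None if a SMILES string is invalid
    print(f"{name:32s} {rdMolDescriptors.CalcMolFormula(mol):>10s} "
          f"MW = {Descriptors.MolWt(mol):.1f}")
# Expected last line: bencyclane as C19H31NO, MW close to 289.5
```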
Bencyclane
Chemistry
125
12,507,185
https://en.wikipedia.org/wiki/Alan%20Carter%20%28philosopher%29
Alan Brian Carter (born 1952) is Emeritus Professor of Moral Philosophy at the University of Glasgow. Life and work Carter earned a BA at the University of Kent at Canterbury, a MA at the University of Sussex and a DPhil at St Cross College at the University of Oxford. Carter's first academic position was lecturer in political theory at University College Dublin. He then became head of the Philosophy Department at Heythrop College, University of London. Subsequently, he was professor of philosophy and environmental studies at the University of Colorado at Boulder. He has been a visiting professor at the University of British Columbia and at the University of Bucharest. For a number of years Carter was joint editor of the Journal of Applied Philosophy. He works principally in political philosophy, moral philosophy, and environmental philosophy. Carter has published on a wide range of topics: within political philosophy he has written on political obligation, equality, and property rights; within environmental philosophy he has written on the moral status of both nonhuman animals and ecosystems; within applied ethics he has written on problems regarding future persons and world hunger; within political theory he has written on theories of the state and Third World underdevelopment; and within anarchism and Marxism Carter has written on their respective theories of history. He is currently developing an environmentalist moral theory that is, normatively, value pluralist and, metaethically, projectivist, topics he has previously written about in moral theory. Some of Carter's work in environmental philosophy is discussed critically by Robin Attfield. Carter's state-primacy theory has been discussed by Robyn Eckersley and criticized by John Barry. and, most fully, by Simon Hailwood. Carter has responded by arguing that his critics fail to take sufficiently into account the problems the military causes in modern societies: "it is telling how little attention green liberal critics of the state-primacy theory have paid to the role of the military and to its highly distorting effects. Failing to examine in any detail military requirements within ostensibly 'liberal democracies', whether existing or imagined, is more like simply ignoring an argument rather than answering it." Carter was one of the founder members of the London-based Anarchist Research Group. Colin Ward has described Carter, with Murray Bookchin, as one of the leading eco-anarchist thinkers. Outside of academia, Carter is a former Chair of the World Development Movement Scotland and a former Board Member of Friends of the Earth Scotland. He is also a former Board Member and a former Trustee of Friends of the Earth. 
Publications Carter's publications include over 50 articles in academic journals and he is the author of 3 books: (1999) (1988) (1987) Selected articles "A Solution to the Purported Non-Transitivity of Normative Evaluation," Journal of Philosophy 112, 1 (2015): 23-45 "A distinction within egalitarianism," Journal of Philosophy 108, 10 (2011): 535–54 "Anarchism: some theoretical foundations," Journal of Political Ideologies 16, 3 (2011): 245-264 "Beyond primacy: Marxism, anarchism and radical green political theory," Environmental Politics 19, 6 (2010): 951-972 "The problem of political compliance in Rawls's theories of justice: Parts I and II," The Journal of Moral Philosophy 3, 1 (2006): 7–21 and 3, 2 (2006): 135–157 "A defense of egalitarianism," Philosophical Studies 131, 2 (2006): 269–302 "Some Theoretical Foundations for Radical Green Politics," Environmental Values 13, 3 (2004): 305–28 "Saving nature and feeding people," Environmental Ethics 26, 4 (2004): 339–60 "Value-pluralist egalitarianism," Journal of Philosophy 99, 11 (2002): 577–99 "Can we harm future people?" Environmental Values 10, 4 (2001): 429–454 "Humean nature," Environmental Values 9, 1 (2000): 3–37 "Analytical anarchism: some conceptual foundations," Political Theory 28, 2 (2000): 230–53 "In defense of radical disobedience," The Journal of Applied Philosophy 15, 1 (1998): 29–47 "Towards a green political theory" in Andrew Dobson and Paul Lucardie (eds.), The Politics of Nature: Explorations in Green Political Theory (London: Routledge, 1993), pp. 39–62 See also Anarchism in the United Kingdom Citations External links Alan Carter's webpage at Academia.edu 1952 births Living people 21st-century British philosophers Academics of Heythrop College Academics of the University of Glasgow Alumni of the University of Kent Alumni of St Cross College, Oxford Alumni of the University of Sussex Anarchist theorists English anarchists English philosophers English political philosophers Environmental ethicists Green anarchists Scholars of Marxism Academic staff of the University of British Columbia People educated at Monkwearmouth School
Alan Carter (philosopher)
Environmental_science
1,024
2,205,922
https://en.wikipedia.org/wiki/List%20of%20biodiversity%20conservation%20sites%20in%20the%20United%20Kingdom
This article provides a list of sites in the United Kingdom which are recognised for their importance to biodiversity conservation. The list is divided geographically by region and county. Inclusion criteria Sites are included in this list if they are given any of the following designations: Sites of importance in a global context Biosphere Reserves (BR) World Heritage Sites (WHS) (where biological interest forms part of the reason for designation) all Ramsar Sites Sites of importance in a European context all Special Protection Areas (SPA) all Special Area of Conservation (SAC) all Important Bird Areas (IBA) Sites of importance in a national context all sites which were included in the Nature Conservation Review (NCR site) all national nature reserves (NNR) Sites of Special Scientific Interest (SSSI), where biological interest forms part of the justification for notification (SSSIs which are designated purely for their geological interest are not included unless they meet other criteria) England Southwest Cornwall Devon Dorset Somerset Avon Wiltshire Gloucestershire Southeast Bedfordshire Berkshire Buckinghamshire Essex Greater London Hampshire Hertfordshire Kent Oxfordshire Surrey Sussex Rye Harbour Nature Reserve Midlands Derbyshire Herefordshire Leicestershire Northamptonshire Shropshire Staffordshire Nottinghamshire Warwickshire Worcestershire East Anglia Northwest Cheshire Northeast Lincolnshire Yorkshire County Durham Wales Anglesey Scotland Northeast Scotland Shetland Unst Orkney Outer Hebrides Lewis and Harris North Uist, South Uist and Benbecula Other islands See also Conservation in the United Kingdom National Nature Reserves in the United Kingdom Sites of Special Scientific Interest References Biodiversity Biodiversity Conservation in the United Kingdom
List of biodiversity conservation sites in the United Kingdom
Biology
291
75,049
https://en.wikipedia.org/wiki/Electrostatic%20discharge
Electrostatic discharge (ESD) is a sudden and momentary flow of electric current between two differently-charged objects when brought close together or when the dielectric between them breaks down, often creating a visible spark associated with the static electricity between the objects. ESD can create spectacular electric sparks (lightning, with the accompanying sound of thunder, is an example of a large-scale ESD event), but also less dramatic forms which may be neither seen nor heard, yet still be large enough to cause damage to sensitive electronic devices. Electric sparks require a field strength above approximately 4 × 10⁶ V/m in air, as notably occurs in lightning strikes. Other forms of ESD include corona discharge from sharp electrodes, brush discharge from blunt electrodes, etc. ESD can cause harmful effects of importance in industry, including explosions in gas, fuel vapor and coal dust, as well as failure of solid state electronics components such as integrated circuits. These can suffer permanent damage when subjected to high voltages. Electronics manufacturers therefore establish electrostatic protective areas free of static, using measures to prevent charging, such as avoiding highly charging materials and measures to remove static such as grounding human workers, providing antistatic devices, and controlling humidity. ESD simulators may be used to test electronic devices, for example with a human body model or a charged device model. Causes One of the causes of ESD events is static electricity. Static electricity is often generated through tribocharging, the separation of electric charges that occurs when two materials are brought into contact and then separated. Examples of tribocharging include walking on a rug, rubbing a plastic comb against dry hair, rubbing a balloon against a sweater, ascending from a fabric car seat, or removing some types of plastic packaging. In all these cases, the breaking of contact between two materials results in tribocharging, thus creating a difference of electrical potential that can lead to an ESD event. Another cause of ESD damage is through electrostatic induction. This occurs when an electrically charged object is placed near a conductive object isolated from the ground. The presence of the charged object creates an electrostatic field that causes electrical charges on the surface of the other object to redistribute. Even though the net electrostatic charge of the object has not changed, it now has regions of excess positive and negative charges. An ESD event may occur when the object comes into contact with a conductive path. For example, charged regions on the surfaces of styrofoam cups or bags can induce potential on nearby ESD sensitive components via electrostatic induction and an ESD event may occur if the component is touched with a metallic tool. ESD can also be caused by energetic charged particles impinging on an object. This causes increasing surface and deep charging. This is a known hazard for most spacecraft. Types Electrostatic discharge (ESD) phenomena vary in complexity and magnitude, with the electric spark being the most visible and dramatic example. This occurs when a strong electric field ionizes the air, creating a conductive channel that can convey an electric current. People may experience this as a small jolt of discomfort, but ESD can inflict severe damage on electronic components, potentially leading to malfunctions and failures. 
In hazardous environments where flammable gases or dust particles are present, ESD can trigger fires or explosions. Not all ESD events, however, are accompanied by a visible spark or noise. It is possible for a person to carry a charge that, while undetectable to the human senses, can still be potent enough to harm delicate electronics. Some components can be compromised by discharges as faint as 30 V, with such damage sometimes not becoming apparent until significant usage has occurred, thus affecting the lifespan and performance of the devices. Cable discharge events (CDEs) are discharges occurring when connecting electrical cables to a device. Sparks A spark is triggered when the electric field strength exceeds approximately 4–30 kV/cm, the dielectric field strength of air. This may cause a very rapid increase in the number of free electrons and ions in the air, temporarily causing the air to abruptly become an electrical conductor in a process called dielectric breakdown. Perhaps the best known example of a natural spark is lightning. In this case the electric potential between a cloud and ground, or between two clouds, is typically hundreds of millions of volts. The resulting current that cycles through the stroke channel causes an enormous transfer of energy. On a much smaller scale, sparks can form in air during electrostatic discharges from objects charged to as little as 380 V (Paschen's law). Earth's atmosphere consists of 21% oxygen (O₂) and 78% nitrogen (N₂). During an electrostatic discharge, such as a lightning flash, the affected atmospheric molecules become electrically overstressed. The diatomic oxygen molecules are split, and then recombine to form ozone (O₃), which is unstable, or reacts with metals and organic matter. If the electrical stress is high enough, nitrogen oxides (NOx) can form. Both products are toxic to animals, and nitrogen oxides are essential for nitrogen fixation. Ozone attacks all organic matter by ozonolysis and is used in water purification. Sparks are an ignition source in combustible environments that may lead to catastrophic explosions in concentrated fuel environments. Most explosions can be traced back to a tiny electrostatic discharge, whether it was an unexpected combustible fuel leak invading a known open air sparking device, or an unexpected spark in a known fuel rich environment. The result is the same if oxygen is present and the three criteria of the fire triangle have been combined. Damage prevention in electronics Many electronic components, especially integrated circuits and microchips, can be damaged by ESD. Sensitive components need to be protected during and after manufacture, during shipping and device assembly, and in the finished device. Grounding is especially important for effective ESD control. It should be clearly defined, and regularly evaluated. Protection during manufacturing In manufacturing, prevention of ESD is based on an Electrostatic Discharge Protected Area (EPA). The EPA can be a small workstation or a large manufacturing area. The main principle of an EPA is that there are no highly-charging materials in the vicinity of ESD sensitive electronics, all conductive and dissipative materials are grounded, workers are grounded, and charge build-up on ESD sensitive electronics is prevented. International standards are used to define a typical EPA and can be found for example from International Electrotechnical Commission (IEC) or American National Standards Institute (ANSI). 
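As a rough illustration of the spark-formation figures above (a dielectric strength of air of a few megavolts per metre and a minimum sparking voltage of about 380 V), the sketch below estimates whether a voltage across a small air gap could produce a spark. The threshold constants are assumptions drawn from the ranges quoted above, and the uniform-field check ignores the gap- and pressure-dependence captured by Paschen's law.

```python
# Order-of-magnitude check only (assumed thresholds); not a design rule.
E_BREAKDOWN_V_PER_M = 3e6   # ~30 kV/cm, within the 4-30 kV/cm range quoted above
PASCHEN_MIN_V = 380         # approximate minimum sparking voltage in air

def may_spark(voltage_v: float, gap_m: float) -> bool:
    """Crude uniform-field test: True if the voltage could break down the air gap."""
    return voltage_v >= PASCHEN_MIN_V and (voltage_v / gap_m) >= E_BREAKDOWN_V_PER_M

print(may_spark(5_000, 1e-3))   # 5 kV across 1 mm -> 5e6 V/m: True
print(may_spark(300, 1e-4))     # 300 V is below the ~380 V Paschen minimum: False
```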
ESD prevention within an EPA may include using appropriate ESD-safe packing material, the use of conductive filaments on garments worn by assembly workers, conducting wrist straps and foot-straps to prevent high voltages from accumulating on workers' bodies, anti-static mats or conductive flooring materials to conduct harmful electric charges away from the work area, and humidity control. Humid conditions prevent electrostatic charge generation because the thin layer of moisture that accumulates on most surfaces serves to dissipate electric charges. Ionizers are used especially when insulative materials cannot be grounded. Ionization systems help to neutralize charged surface regions on insulative or dielectric materials. Insulating materials prone to triboelectric charging of more than 2,000 V should be kept at least 12 inches away from sensitive devices to prevent accidental charging of devices through field induction. On aircraft, static dischargers are used on the trailing edges of wings and other surfaces. Manufacturers and users of integrated circuits must take precautions to avoid ESD. ESD prevention can be part of the device itself and include special design techniques for device input and output pins. External protection components can also be used with circuit layout. Due to the dielectric nature of electronic components and assemblies, electrostatic charging cannot be completely prevented during handling of devices. Most ESD-sensitive electronic assemblies and components are also so small that manufacturing and handling is done with automated equipment. ESD prevention activities are therefore important with those processes where components come into direct contact with equipment surfaces. In addition, it is important to prevent ESD when an electrostatic discharge sensitive component is connected with other conductive parts of the product itself. An efficient way to prevent ESD is to use materials that are not too conductive but will slowly conduct static charges away. These materials are called static dissipative and have resistivity values below 10¹² ohm-meters. Materials in automated manufacturing which will touch conductive areas of ESD-sensitive electronics should be made of dissipative material, and the dissipative material must be grounded. These special materials are able to conduct electricity, but do so very slowly. Any built-up static charges dissipate without the sudden discharge that can harm the internal structure of silicon circuits. Protection during transit Sensitive devices need to be protected during shipping, handling, and storage. The buildup and discharge of static can be minimized by controlling the surface resistance and volume resistivity of packaging materials. Packaging is also designed to minimize frictional or triboelectric charging of packs due to rubbing together during shipping, and it may be necessary to incorporate electrostatic or electromagnetic shielding in the packaging material. A common example is that semiconductor devices and computer components are usually shipped in an antistatic bag made of a partially conductive plastic, which acts as a Faraday cage to protect the contents against ESD. Simulation and testing for electronic devices For testing the susceptibility of electronic devices to ESD from human contact, an ESD Simulator with a special output circuit, called the human body model (HBM) is often used. This consists of a capacitor in series with a resistor. 
The capacitor is charged to a specified high voltage from an external source, and then suddenly discharged through the resistor into an electrical terminal of the device under test. One of the most widely used models is defined in the JEDEC 22-A114-B standard, which specifies a 100 picofarad capacitor and a 1,500 ohm resistor. Other similar standards are MIL-STD-883 Method 3015, and the ESD Association's ESD STM5.1. For compliance to European Union standards for Information Technology Equipment, the IEC/EN 61000-4-2 test specification is used. Another specification referenced by equipment maker Schaffner calls for C = 150 pF and R = 330 Ω which provides high fidelity results. While the theory is mostly there, very few companies measure the real ESD survival rate. Guidelines and requirements are given for test cell geometries, generator specifications, test levels, discharge rate and waveform, types and points of discharge on the "victim" product, and functional criteria for gauging product survivability. A charged device model (CDM) test is used to define the ESD a device can withstand when the device itself has an electrostatic charge and discharges due to metal contact. This discharge type is the most common type of ESD in electronic devices and causes most of the ESD damages in their manufacturing. CDM discharge depends mainly on parasitic parameters of the discharge and strongly depends on size and type of component package. One of the most widely used CDM simulation test models is defined by the JEDEC. Other standardized ESD test circuits include the machine model (MM) and transmission line pulse (TLP). See also Automotive Electronics Council, which defines in some of its standards, ESD test qualification requirements for electronic components used in vehicles Dielectric wireless receiver Electric arc Electromagnetic pulse Electrostatic voltmeter ESD Association, the industry leader in electrostatic discharge education and standards. ESD turnstile ggNMOS Latchup, for qualification testing of semiconductor devices, ESD and latchup are commonly considered together Spark gap Static electricity Wimshurst machine References External links ESD Association Electrical breakdown Electrical safety Electrostatics Plasma phenomena
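Electrically, the human body model described above is just a charged capacitor discharging through a resistor. The sketch below uses the JEDEC values quoted above (100 pF, 1,500 ohms) and an assumed 2 kV test level to work out the idealized discharge: a 150 ns time constant and a peak current of about 1.3 A. Real HBM testers also specify rise times and parasitic elements, so this is only the bare RC arithmetic.

```python
# Idealized HBM discharge maths; the 2 kV test level is an assumed example value.
import math

C = 100e-12      # farads, JEDEC HBM capacitance quoted above
R = 1.5e3        # ohms, JEDEC HBM series resistance quoted above
V0 = 2000.0      # volts, assumed test level

tau = R * C                   # time constant: 150 ns
i_peak = V0 / R               # peak current: ~1.33 A at 2 kV
energy = 0.5 * C * V0 ** 2    # energy stored on the capacitor: ~200 microjoules

print(f"tau = {tau * 1e9:.0f} ns, peak = {i_peak:.2f} A, energy = {energy * 1e6:.0f} uJ")
for t_ns in (0, 150, 300, 450):   # the discharge current decays as exp(-t/tau)
    print(f"t = {t_ns:3d} ns  i = {i_peak * math.exp(-t_ns * 1e-9 / tau):.3f} A")
```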
Electrostatic discharge
Physics
2,447
1,724,862
https://en.wikipedia.org/wiki/Capsella%20bursa-pastoris
Capsella bursa-pastoris, known as shepherd's purse because of its triangular flat fruits, which are purse-like, is a small annual and ruderal flowering plant in the mustard family (Brassicaceae). Scientists have referred to this species as a protocarnivore, since it has been found that its seeds attract and kill nematodes as a means to locally enrich the soil. It is native to Eurasia but is naturalized and considered a common weed in many parts of the world, especially in colder climates. It has a number of culinary uses. Description Capsella bursa-pastoris plants grow from a rosette of lobed leaves at the base. From the base emerges a stem most often tall, but occasionally as much as or as little as , which bears a few pointed leaves which partly grasp the stem. The flowers, which appear in any month of the year in the British Isles, are white and small, in diameter, with four petals and six stamens. They are borne in loose racemes, and produce flattened, two-chambered seed pods known as silicles, which are triangular to heart-shaped, each containing several seeds. Like a number of other plants in several plant families, its seeds contain a substance known as mucilage, a condition known as myxospermy. Recently, this has been demonstrated experimentally to perform the function of trapping nematodes, as a form of 'protocarnivory'. Capsella bursa-pastoris is closely related to the model organism Arabidopsis thaliana and is also used as a model organism, because the variety of genes expressed throughout its life cycle can be compared to genes that have been well studied in A. thaliana. Unlike most flowering plants, it flowers almost all year round. Like other annual ruderals exploiting disturbed ground, C. bursa-pastoris reproduces entirely from seed, has a long soil seed bank, and short generation time, and is capable of producing several generations each year. Chemistry Fumaric acid has been isolated from C. bursa-pastoris. Taxonomy Capsella bursa-pastoris is classified in the Capsella genus of plants in the family Brassicaceae. It has two subspecies, bursa-pastoris and thracicus. History In China, where it is known as jìcài, the term first appears in the song and poetry collection Shijing. However, these early mentions may not be referring to shepherd's purse, but to other plants. While today ji clearly indicates this species, previously it was used for all plants with leaves consumed in soups. A very early European illustration of Capsella bursa-pastoris was published in a medieval Herbarius in approximately 1486. The book was printed in Louvain in what is now Belgium. The species was apparently not included in the ancient pharmacopoeia, with William Turner stating in 1548 that it and twenty or thirty others had come to be known as medicinal plants from Arab sources. It was formally described by the Swedish botanist Carl Linnaeus in his seminal publication Species Plantarum in 1753, and then published by Friedrich Kasimir Medikus in Pflanzen-Gattungen (Pfl.-Gatt.) in 1792. Names William Coles wrote in his book, Adam in Eden (1657), "It is called Shepherd's purse or Scrip (wallet) from the likeness of the seed hath with that kind of leathearne bag, wherein Shepherds carry their Victualls [food and drink] into the field." In England and Scotland, it was once commonly called 'mother's heart', from which was derived a child's game/trick of picking the seed pod, which then would burst and the child would be accused of 'breaking his mother's heart'. 
Distribution and habitat It is native to eastern Europe and Asia minor, but is naturalized and considered a common weed in many parts of the world, especially in colder climates, including the British Isles, where it is regarded as an archaeophyte, North America and China, but also in the Mediterranean and North Africa. C. bursa-pastoris is the second-most prolific wild plant in the world, and is common on cultivated ground and waysides and meadows. Ecology Pathogens of this plant include: White rust Albugo candida One species of downy mildew Hyaloperonospora parasitica Phoma herbarum Uses Capsella bursa-pastoris gathered from the wild or cultivated has many uses, including for food, to supplement animal feed, for cosmetics, and in traditional medicine—reportedly to stop bleeding. The plant can be eaten raw; the leaves are best when gathered young. Native Americans ground it into a meal and made a beverage from it. Cooking It is cultivated as a commercial food crop in Asia. In China, where it is known as jìcài (; ) its use as food has been recorded since the Zhou Dynasty. Historically, it was used to make geng soup, congee, and preserved as yāncài ( ). In the Ming-dynasty famine survival guide Jiuhuang bencao, it was recommended to mix jìcài with water and other ingredients to make bread-like bing. Today, it is commonly used in food in Shanghai and the surrounding Jiangnan region. The savory leaf is stir-fried with nian gao rice cakes and other ingredients or as part of the filling in wontons. It is one of the ingredients of the symbolic dish consumed in the Japanese spring-time festival, Nanakusa-no-sekku. In Korea, it is known as naengi () and used as a root vegetable in the characteristic Korean dish, namul (fresh greens and wild vegetables). Culture In a poem in the Shijing, the taste of the jìcài was compared to a happy marriage. Its sweet taste is also recorded in the Erya lexicon, compiled ). References External links Mrs. M. Grieve. A Modern Herbal. Shepherd's Purse bursa-pastoris Carnivorous plants of Europe Cosmopolitan species Ruderal species Edible plants Asian vegetables Medicinal plants of Asia Medicinal plants of Europe Plants described in 1753 Taxa named by Carl Linnaeus Plants used in Native American cuisine
Capsella bursa-pastoris
Biology
1,312
4,371,558
https://en.wikipedia.org/wiki/Tiger%20bush
Tiger bush, or brousse tigrée in the French language, is a patterned vegetation community and ground consisting of alternating bands of trees, shrubs, or grass separated by bare ground or low herb cover, that run roughly parallel to contour lines of equal elevation. The patterns occur on low slopes in arid and semi-arid regions, such as in Australia, Sahelian West Africa, and North America. Due to the natural water harvesting capacity, many species in tiger bush usually occur only under a higher rainfall regime. Formation The alternating pattern arises from the interplay of hydrological, ecological, and erosional phenomena. In the regions where tiger bush is present, plant growth is water-limited - the shortage of rainfall prevents vegetation from covering the entire landscape. Instead, trees and shrubs are able to establish by either tapping soil moisture reserves laterally or by sending roots to deeper, wetter soil depths. By a combination of plant litter, root macropores, and increased surface roughness, infiltration into the soil around the base of these plants is enhanced. Surface runoff arriving at these plants will thus likely to become run-on, and infiltrate into the soil. By contrast, the areas between these larger plants contain a greater portion of bare ground and herbaceous plants. Both bare soil, with its smoother surface and soil crusts, and herbaceous plants, with fewer macropores, inhibit infiltration. This causes much of the rainfall that falls in the inter-canopy areas to flow downslope, and infiltrate beneath the larger plants. The larger plants are in effect harvesting rainfall from the ground immediately up-slope. Although these vegetation patterns may seem very stable through time, such patterning requires specific climatic conditions. For instance, a decrease in rainfall is able to trigger patterning in formerly homogeneous vegetation within a few decades. More water will infiltrate at the up-slope edge of the canopies than down-slope. This favours the establishment and growth of plants at the up-slope edge, and mortality of those down-slope. Differences in growth and mortality across the vegetation band result in the band moving gradually upslope. Tiger bush never develops on moderate to steep slopes, because in these cases surface runoff concentrates into narrow threads or rills instead of flowing over the surface as sheet flow. Sheet flow distributes water more evenly across a hillslope, allowing a continuous vegetation band to form. The exact roles and importance of the different phenomena is still the subject of research, especially of research in physics since the 1990s. Exploitation and conservation The woody plants which make up tiger bush are used for fire wood and as a source of foliage for grazers. The extensive loss of tiger bush around Niamey, Niger, now threatens local giraffe populations. In neighbouring Burkina Faso, the tiger bush vegetation is also declining. Scientific research The pattern was first described in 1950 in British Somaliland by W.A. Macfadyen. The term tiger bush was first coined by Albert Clos-Arceduc in 1956. References See also Ecohydrology Patterns in nature Biogeography Biogeomorphology Ecosystems Ecosystems of Niger
Tiger bush
Biology
637
65,269,068
https://en.wikipedia.org/wiki/Shallow%20%28underwater%20relief%29
A shallow is an elevation of the bottom of a sea, river, or lake that impedes navigation. It is a type of underwater relief in which the water depth is low compared with that of the surrounding area. Shallows are usually formed by sand or pebble deposits, but can also be of volcanic origin or the result of human or animal activity. A shallow near the shore of a body of water or watercourse is called a shoal; the shallow ocean area adjacent to the mainland is the continental shelf. Shallows can be permanently hidden under water or emerge at the surface periodically (for example, at low tide in the sea, or as river levels change with discharge) in the form of islands, sediment deposits, side channels, spits, and the like. Where a river shoal makes it possible to cross the river on foot or by land transport, fords are established. See also Spit (landform) Rapids Reef Ocean bank Bibliography Jean-Jacques Delannoy, Philip Deline, René Lhénaff, Géographie physique: aspects et dynamique du géosystème terrestre, De Boeck Superieur, 2016, p. 634. Republished in 2001 then in 2014 under the title Dictionnaire de la mer: savoir-faire, traditions, vocabulaires-techniques, Omnibus, XXIV-861 p., Hydrology
Shallow (underwater relief)
Chemistry,Engineering,Environmental_science
289
39,572,504
https://en.wikipedia.org/wiki/Infosec%20Standard%205
HMG Infosec Standard 5, or IS5, is a data destruction standard used by the British government. Context IS5 is part of a larger family of IT security standards published by CESG; it is referred to by the more general Infosec Standard No.1. IS5 is similar to DOD 5220.22-M (used in the USA). Requirements IS5 sets a wide range of requirements—not just the technical detail of overwriting data, but also the policies and processes that organisations should have in place, to ensure that media are disposed of securely. IS5 also touches on risk management accreditation, because secure reuse and disposal of media is an important control for organisations handling high-impact data. It's not sufficient just to sanitise media; the sanitisation should also be auditable, and records must be kept. IS5 defines two different levels of overwriting: Baseline overwriting of data involves one pass, overwriting every sector of the storage medium once with zeros. Enhanced overwriting involves three passes; each sector is overwritten first with 1s, then with 0s, and then with randomly generated 1s and 0s. Regardless of which level is used, verification is needed to ensure that overwriting was successful. Apart from overwriting, other methods could be used, such as degaussing, or physical destruction of the media. With some inexpensive media, destruction and replacement may be cheaper than sanitisation followed by reuse. ATA Secure Erase is not approved. Different methods apply to different media, ranging from paper to CDs to mobile phones. The choice of method affects reusability. Four different outcomes are considered: Reuse of media in a similarly secure environment; Reuse of media in a less-secure environment (accredited at a lower IL); Reuse anywhere (i.e. an untrusted or unknown environment); Destruction. Stricter requirements apply to data with a stronger protective marking or IL. In some cases, media at or above IL4 / CONFIDENTIAL may have to be handled at a secure site, such as a List X site. References Classified information in the United Kingdom Computer security in the United Kingdom Information assurance standards IT risk management
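A simplified, hedged illustration of the two overwrite levels described above, applied to a single file in Python. Real IS5 sanitisation targets whole media using approved tooling and must be auditable; a file-level overwrite like this can miss remapped sectors, journals, and wear-levelled flash, so it sketches only the pattern of passes, not a compliant procedure.

```python
# Sketch of the IS5-style overwrite patterns described above (baseline vs enhanced).
# Not a compliant sanitisation tool: it works on one file, not a whole device.
import os

def overwrite_file(path: str, enhanced: bool = False, chunk: int = 1 << 20) -> None:
    """Baseline: one pass of zeros. Enhanced: passes of ones, zeros, then random bytes."""
    passes = [lambda n: b"\x00" * n]
    if enhanced:
        passes = [lambda n: b"\xff" * n, lambda n: b"\x00" * n, os.urandom]
    size = os.path.getsize(path)
    for make_chunk in passes:
        with open(path, "r+b") as f:
            remaining = size
            while remaining:
                n = min(chunk, remaining)
                f.write(make_chunk(n))
                remaining -= n
            f.flush()
            os.fsync(f.fileno())   # push each pass to the device before the next one

def verify_zeroed(path: str, chunk: int = 1 << 20) -> bool:
    """Read-back verification for the baseline (all-zero) pattern."""
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(chunk), b""):
            if any(block):
                return False
    return True
```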
Infosec Standard 5
Technology
451
12,571
https://en.wikipedia.org/wiki/Galaxy%20groups%20and%20clusters
Galaxy groups and clusters are the largest known gravitationally bound objects to have arisen thus far in the process of cosmic structure formation. They form the densest part of the large-scale structure of the Universe. In models for the gravitational formation of structure with cold dark matter, the smallest structures collapse first and eventually build the largest structures, clusters of galaxies. Clusters have therefore formed relatively recently, between 10 billion years ago and now. Groups and clusters may contain ten to thousands of individual galaxies. The clusters themselves are often associated with larger, non-gravitationally bound groups called superclusters. Groups of galaxies Groups of galaxies are the smallest aggregates of galaxies. They typically contain no more than 50 galaxies in a diameter of 1 to 2 megaparsecs (Mpc) (see 10²² m for distance comparisons). Their mass is approximately 10¹³ solar masses. The spread of velocities for the individual galaxies is about 150 km/s. However, this definition should be used as a guide only, as larger and more massive galaxy systems are sometimes classified as galaxy groups. Groups are the most common structures of galaxies in the universe, comprising at least 50% of the galaxies in the local universe. Groups have a mass range between those of the very large elliptical galaxies and clusters of galaxies. Our own galaxy, the Milky Way, is contained in the Local Group of more than 54 galaxies. In July 2017 S. Paul, R. S. John et al. defined clear distinguishing parameters for classifying galaxy aggregations as ‘galaxy groups’ and ‘clusters’ on the basis of scaling laws that they followed. According to this paper, galaxy aggregations less massive than 8 × 10¹³ solar masses are classified as galaxy groups. Clusters of galaxies Clusters are larger than groups, although there is no sharp dividing line between the two. When observed visually, clusters appear to be collections of galaxies held together by mutual gravitational attraction. However, their velocities are too large for them to remain gravitationally bound by their mutual attractions, implying the presence of either an additional invisible mass component, or an additional attractive force besides gravity. X-ray studies have revealed the presence of large amounts of intergalactic gas known as the intracluster medium. This gas is very hot, between 10⁷ K and 10⁸ K, and hence emits X-rays in the form of bremsstrahlung and atomic line emission. The total mass of the gas is greater than that of the galaxies by roughly a factor of two. However, this is still not enough mass to keep the galaxies in the cluster. Since this gas is in approximate hydrostatic equilibrium with the overall cluster gravitational field, the total mass distribution can be determined. It turns out the total mass deduced from this measurement is approximately six times larger than the mass of the galaxies or the hot gas. The missing component is known as dark matter and its nature is unknown. In a typical cluster perhaps only 5% of the total mass is in the form of galaxies, maybe 10% in the form of hot X-ray emitting gas and the remainder is dark matter. Brownstein and Moffat use a theory of modified gravity to explain X-ray cluster masses without dark matter. Observations of the Bullet Cluster are the strongest evidence for the existence of dark matter; however, Brownstein and Moffat have shown that their modified gravity theory can also account for the properties of the cluster. 
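As a back-of-the-envelope consistency check on the group figures quoted above (a velocity spread of about 150 km/s, a radius of roughly 1 Mpc, and a mass of order 10¹³ solar masses), the virial relation M ~ σ²R/G can be evaluated directly; the order-unity prefactor is ignored, so this is an order-of-magnitude estimate only, not a measurement method from the text.

```python
# Order-of-magnitude virial check (prefactor of 1 assumed) using the figures quoted above.
G     = 6.674e-11     # m^3 kg^-1 s^-2
MPC   = 3.086e22      # metres in one megaparsec
M_SUN = 1.989e30      # kg

sigma = 150e3         # m/s, velocity spread of group members
R     = 1.0 * MPC     # m, roughly half the 1-2 Mpc diameter

M = sigma ** 2 * R / G                       # dynamical mass estimate in kg
print(f"M ~ {M / M_SUN:.1e} solar masses")   # ~5e12, i.e. of order 10^13
```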
Observational methods Clusters of galaxies have been found in surveys by a number of observational techniques and have been studied in detail using many methods: Optical or infrared: The individual galaxies of clusters can be studied through optical or infrared imaging and spectroscopy. Galaxy clusters are found by optical or infrared telescopes by searching for overdensities, and then confirmed by finding several galaxies at a similar redshift. Infrared searches are more useful for finding more distant (higher redshift) clusters. X-ray: The hot plasma emits X-rays that can be detected by X-ray telescopes. The cluster gas can be studied using both X-ray imaging and X-ray spectroscopy. Clusters are quite prominent in X-ray surveys and along with AGN are the brightest X-ray emitting extragalactic objects. Radio: A number of diffuse structures emitting at radio frequencies have been found in clusters. Groups of radio sources (that may include diffuse structures or AGN) have been used as tracers of cluster location. At high redshift imaging around individual radio sources (in this case AGN) has been used to detect proto-clusters (clusters in the process of forming). Sunyaev-Zel'dovich effect: The hot electrons in the intracluster medium scatter radiation from the cosmic microwave background through inverse Compton scattering. This produces a "shadow" in the observed cosmic microwave background at some radio frequencies. Gravitational lensing: Clusters of galaxies contain enough matter to distort the observed orientations of galaxies behind them. The observed distortions can be used to model the distribution of dark matter in the cluster. Temperature and density Clusters of galaxies are the most recent and most massive objects to have arisen in the hierarchical structure formation of the Universe and the study of clusters tells one about the way galaxies form and evolve. Clusters have two important properties: their masses are large enough to retain any energetic gas ejected from member galaxies and the thermal energy of the gas within the cluster is observable within the X-Ray bandpass. The observed state of gas within a cluster is determined by a combination of shock heating during accretion, radiative cooling, and thermal feedback triggered by that cooling. The density, temperature, and substructure of the intracluster X-Ray gas therefore represents the entire thermal history of cluster formation. To better understand this thermal history one needs to study the entropy of the gas because entropy is the quantity most directly changed by increasing or decreasing the thermal energy of intracluster gas. List of groups and clusters See also Entropy Fossil galaxy group Galactic orientation Galaxy filament Illustris project Intracluster medium Large-scale structure of the Cosmos List of galaxy groups and clusters Supercluster Timeline of knowledge about galaxies, clusters of galaxies, and large-scale structure References Further reading Large-scale structure of the cosmos tr:Galaksi kümesi
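The gas "entropy" referred to above is commonly quantified in X-ray cluster work through an entropy index K = kT / nₑ^(2/3), quoted in keV cm²; this convention and the illustrative numbers below are assumptions added here, not values taken from the text.

```python
# One common convention in X-ray cluster studies (an assumption, not defined in the text):
# the entropy index K = kT / n_e^(2/3), in keV cm^2.
def entropy_index(kT_keV: float, n_e_per_cm3: float) -> float:
    """kT in keV, electron density in cm^-3; returns K in keV cm^2."""
    return kT_keV / n_e_per_cm3 ** (2.0 / 3.0)

# Illustrative (assumed) values: kT ~ 5 keV gas at an electron density of 1e-3 cm^-3.
print(f"K ~ {entropy_index(5.0, 1e-3):.0f} keV cm^2")   # ~500 keV cm^2
```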
Galaxy groups and clusters
Astronomy
1,281
2,234,333
https://en.wikipedia.org/wiki/Data%20%28computer%20science%29
In computer science, data (treated as singular, plural, or as a mass noun) is any sequence of one or more symbols; datum is a single symbol of data. Data requires interpretation to become information. Digital data is data that is represented using the binary number system of ones (1) and zeros (0), instead of analog representation. In modern (post-1960) computer systems, all data is digital. Data exists in three states: data at rest, data in transit and data in use. Data within a computer, in most cases, moves as parallel data. Data moving to or from a computer, in most cases, moves as serial data. Data sourced from an analog device, such as a temperature sensor, may be converted to digital using an analog-to-digital converter. Data representing quantities, characters, or symbols on which operations are performed by a computer are stored and recorded on magnetic, optical, electronic, or mechanical recording media, and transmitted in the form of digital electrical or optical signals. Data pass in and out of computers via peripheral devices. Physical computer memory elements consist of an address and a byte/word of data storage. Digital data are often stored in relational databases, like tables or SQL databases, and can generally be represented as abstract key/value pairs. Data can be organized in many different types of data structures, including arrays, graphs, and objects. Data structures can store data of many different types, including numbers, strings and even other data structures. Characteristics Metadata helps translate data to information. Metadata is data about the data. Metadata may be implied, specified or given. Data relating to physical events or processes will have a temporal component. This temporal component may be implied. This is the case when a device such as a temperature logger receives data from a temperature sensor. When the temperature is received it is assumed that the data has a temporal reference of now. So the device records the date, time and temperature together. When the data logger communicates temperatures, it must also report the date and time as metadata for each temperature reading. Fundamentally, computers follow a sequence of instructions they are given in the form of data. A set of instructions to perform a given task (or tasks) is called a program. A program is data in the form of coded instructions to control the operation of a computer or other machine. In the nominal case, the program, as executed by the computer, will consist of machine code. The elements of storage manipulated by the program, but not actually executed by the central processing unit (CPU), are also data. At its most essential, a single datum is a value stored at a specific location. Therefore, it is possible for computer programs to operate on other computer programs, by manipulating their programmatic data. To store data bytes in a file, they have to be serialized in a file format. Typically, programs are stored in special file types, different from those used for other data. Executable files contain programs; all other files are also data files. However, executable files may also contain data used by the program which is built into the program. In particular, some executable files have a data segment, which nominally contains constants and initial values for variables, both of which can be considered data. The line between program and data can become blurry. An interpreter, for example, is a program. 
The input data to an interpreter is itself a program, just not one expressed in native machine language. In many cases, the interpreted program will be a human-readable text file, which is manipulated with a text editor program. Metaprogramming similarly involves programs manipulating other programs as data. Programs like compilers, linkers, debuggers, program updaters, virus scanners and such use other programs as their data. For example, a user might first instruct the operating system to load a word processor program from one file, and then use the running program to open and edit a document stored in another file. In this example, the document would be considered data. If the word processor also features a spell checker, then the dictionary (word list) for the spell checker would also be considered data. The algorithms used by the spell checker to suggest corrections would be either machine code data or text in some interpretable programming language. In an alternate usage, binary files (which are not human-readable) are sometimes called data as distinguished from human-readable text. The total amount of digital data in 2007 was estimated to be 281 billion gigabytes (281 exabytes). Data keys and values, structures and persistence Keys in data provide the context for values. Regardless of the structure of data, there is always a key component present. Keys in data and data-structures are essential for giving meaning to data values. Without a key that is directly or indirectly associated with a value, or collection of values in a structure, the values become meaningless and cease to be data. That is to say, there has to be a key component linked to a value component in order for it to be considered data. Data can be represented in computers in multiple ways, as per the following examples: RAM Random access memory (RAM) holds data that the CPU has direct access to. A CPU may only manipulate data within its processor registers or memory. This is as opposed to data storage, where the CPU must direct the transfer of data between the storage device (disk, tape...) and memory. RAM is an array of linear contiguous locations that a processor may read or write by providing an address for the read or write operation. The processor may operate on any location in memory at any time in any order. In RAM the smallest element of data is the binary bit. The capabilities and limitations of accessing RAM are processor specific. In general main memory is arranged as an array of locations beginning at address 0 (hexadecimal 0). Each location can store usually 8 or 32 bits depending on the computer architecture. Keys Data keys need not be a direct hardware address in memory. Indirect, abstract and logical keys codes can be stored in association with values to form a data structure. Data structures have predetermined offsets (or links or paths) from the start of the structure, in which data values are stored. Therefore, the data key consists of the key to the structure plus the offset (or links or paths) into the structure. When such a structure is repeated, storing variations of the data values and the data keys within the same repeating structure, the result can be considered to resemble a table, in which each element of the repeating structure is considered to be a column and each repetition of the structure is considered as a row of the table. In such an organization of data, the data key is usually a value in one (or a composite of the values in several) of the columns. 
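The ideas above — that values only become data when tied to a key, and that a repeating structure with fixed field offsets can be read as a table of rows and columns — can be made concrete with a small sketch; the record layout and sample values are invented for illustration.

```python
# Sketch: a repeating fixed-layout record read as a table. The layout is illustrative.
import struct

# Each record: key (4-byte int), reading (float), flag (1 byte).
RECORD = struct.Struct("<ifB")   # field offsets within each record are fixed by this format

rows = [(1, 21.5, 0), (2, 22.1, 1), (3, 19.8, 0)]
buffer = b"".join(RECORD.pack(*row) for row in rows)

# "Key to the structure" = record index times record size; the field offset selects the column.
for i in range(len(rows)):
    key, reading, flag = RECORD.unpack_from(buffer, i * RECORD.size)
    print(f"row {i}: key={key} reading={reading:.1f} flag={flag}")
```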
Organised recurring data structures The tabular view of repeating data structures is only one of many possibilities. Repeating data structures can be organised hierarchically, such that nodes are linked to each other in a cascade of parent-child relationships. Values and potentially more complex data-structures are linked to the nodes. Thus the nodal hierarchy provides the key for addressing the data structures associated with the nodes. This representation can be thought of as an inverted tree. Modern computer operating system file systems are a common example, and XML is another. Sorted or ordered data Data has some inherent features when it is sorted on a key. All the values for subsets of the key appear together. When passing sequentially through the data, the point at which the key (or a subset of the key) changes is referred to in data processing circles as a break, or a control break. It particularly facilitates the aggregation of data values on subsets of a key. Peripheral storage Until the advent of bulk non-volatile memory like flash, persistent data storage was traditionally achieved by writing the data to external block devices like magnetic tape and disk drives. These devices typically seek to a location on the magnetic media and then read or write blocks of data of a predetermined size. In this case, the seek location on the media is the data key and the blocks are the data values. Early file systems that used raw disk data, and disc operating systems, reserved contiguous blocks on the disc drive for data files. In those systems, the files could be filled up, running out of data space before all the data had been written to them. Thus much unused data space was reserved unproductively to ensure adequate free space for each file. Later file-systems introduced partitions. They reserved blocks of disc data space for partitions and used the allocated blocks more economically, by dynamically assigning blocks of a partition to a file as needed. To achieve this, the file system had to keep track of which blocks were used or unused by data files in a catalog or file allocation table. Though this made better use of the disc data space, it resulted in fragmentation of files across the disc, and a concomitant performance overhead due to additional seek time to read the data. Modern file systems reorganize fragmented files dynamically to optimize file access times. Further developments in file systems resulted in virtualization of disc drives, i.e. where a logical drive can be defined as partitions from a number of physical drives. Indexed data Retrieving a small subset of data from a much larger set may imply inefficiently searching through the data sequentially. Indexes are a way to copy out keys and location addresses from data structures in files, tables and data sets, then organize them using inverted tree structures to reduce the time taken to retrieve a subset of the original data. In order to do this, the key of the subset of data to be retrieved must be known before retrieval begins. The most popular indexes are the B-tree and the dynamic hash key indexing methods. Indexing is overhead for filing and retrieving data. There are other ways of organizing indexes, e.g. sorting the keys and using a binary search algorithm. 
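Two of the notions above can be shown in a short sketch: a control break aggregation over records sorted on a key, and a simple index of sorted keys located with a binary search instead of a sequential scan. The sample records are invented for illustration.

```python
# Sketch: control-break aggregation and a simple sorted-key index (illustrative data).
from bisect import bisect_left
from itertools import groupby

records = [("ACME", 120), ("ACME", 80), ("BETA", 40), ("BETA", 60), ("BETA", 10)]

# Control break: the records are already sorted on the key, so each change of key
# closes one group (and its aggregate) and starts the next.
for key, group in groupby(records, key=lambda r: r[0]):
    print(key, sum(amount for _, amount in group))

# Simple index: copy out the (already sorted) keys once, then binary-search them.
keys = [key for key, _ in records]

def first_position(key: str) -> int:
    pos = bisect_left(keys, key)
    if pos == len(keys) or keys[pos] != key:
        raise KeyError(key)
    return pos

print(first_position("BETA"))   # 2
```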
Abstraction and indirection Object-oriented programming uses two basic concepts for understanding data and software: The taxonomic rank-structure of classes, which is an example of a hierarchical data structure; and at run time, the creation of references to in-memory data-structures of objects that have been instantiated from a class library. It is only after instantiation that an object of a specified class exists. After an object's reference is cleared, the object also ceases to exist. The memory locations where the object's data was stored are garbage and are reclassified as unused memory available for reuse. Database data The advent of databases introduced a further layer of abstraction for persistent data storage. Databases use metadata, and a structured query language protocol between client and server systems, communicating over a computer network, using a two phase commit logging system to ensure transactional completeness, when saving data. Parallel distributed data processing Modern scalable and high-performance data persistence technologies, such as Apache Hadoop, rely on massively parallel distributed data processing across many commodity computers on a high bandwidth network. In such systems, the data is distributed across multiple computers and therefore any particular computer in the system must be represented in the key of the data, either directly, or indirectly. This enables the differentiation between two identical sets of data, each being processed on a different computer at the same time. See also Big data Data Data dictionary Data modeling Data stream Data set Database index State (computer science) Tuple References
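The run-time behaviour described above — an object of a class exists only after instantiation, and its storage becomes reclaimable garbage once the last reference to it is cleared — can be shown in a few lines; the class and values are purely illustrative.

```python
# Illustration of instantiation, references and garbage (example class, not from the text).
import gc

class Reading:
    """A class; objects of it exist only after instantiation."""
    def __init__(self, key: str, value: float):
        self.key = key
        self.value = value

r = Reading("sensor-1", 21.5)   # instantiation: the in-memory object now exists
alias = r                       # a second reference to the same object
print(alias.value)              # 21.5

r = None
alias = None                    # the last reference is cleared: the object is now garbage
gc.collect()                    # CPython already frees it when the refcount hits zero
```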
Data (computer science)
Technology
2,352
289,406
https://en.wikipedia.org/wiki/Blood%20sugar%20level
The blood sugar level, blood sugar concentration, blood glucose level, or glycemia is the measure of glucose concentrated in the blood. The body tightly regulates blood glucose levels as a part of metabolic homeostasis. For a 70 kg (154 lb) human, approximately four grams of dissolved glucose (also called "blood glucose") is maintained in the blood plasma at all times. Glucose that is not circulating in the blood is stored in skeletal muscle and liver cells in the form of glycogen; in fasting individuals, blood glucose is maintained at a constant level by releasing just enough glucose from these glycogen stores in the liver and skeletal muscle in order to maintain homeostasis. Glucose can be transported from the intestines or liver to other tissues in the body via the bloodstream. Cellular glucose uptake is primarily regulated by insulin, a hormone produced in the pancreas. Once inside the cell, the glucose can now act as an energy source as it undergoes the process of glycolysis. In humans, properly maintained glucose levels are necessary for normal function in a number of tissues, including the human brain, which consumes approximately 60% of blood glucose in fasting, sedentary individuals. A persistent elevation in blood glucose leads to glucose toxicity, which contributes to cell dysfunction and the pathology grouped together as complications of diabetes. Glucose levels are usually lowest in the morning, before the first meal of the day, and rise after meals for an hour or two by a few millimoles per litre. Abnormal persistently high glycemia is referred to as hyperglycemia; low levels are referred to as hypoglycemia. Diabetes mellitus is characterized by persistent hyperglycemia from a variety of causes, and it is the most prominent disease related to the failure of blood sugar regulation. Diabetes mellitus is also characterized by frequent episodes of low sugar, or hypoglycemia. There are different methods of testing and measuring blood sugar levels. Drinking alcohol causes an initial surge in blood sugar and later tends to cause levels to fall. Also, certain drugs can increase or decrease glucose levels. Units of measurement There are two ways of measuring blood glucose levels: In the United Kingdom and Commonwealth countries (Australia, Canada, India, etc.) and ex-USSR countries molar concentration, measured in mmol/L (millimoles per litre, or millimolar, abbreviated mM). In the United States, Germany, Japan and many other countries mass concentration is measured in mg/dL (milligrams per decilitre). Unit conversion formula from mmol/L to mg/dL Since the molar mass of glucose C₆H₁₂O₆ is 180.156 g/mol, the factor between the two units is about 18, so 1 mmol/L of glucose is equivalent to 18 mg/dL. Normal value range Humans Normal blood glucose level (tested while fasting) for non-diabetics should be 3.9–5.5 mmol/L (70–100 mg/dL). According to the American Diabetes Association, the fasting blood glucose target range for diabetics should be 3.9–7.2 mmol/L (70–130 mg/dL) and less than 10 mmol/L (180 mg/dL) two hours after meals (as measured by a blood glucose monitor). Normal value ranges may vary slightly between laboratories. Glucose homeostasis, when operating normally, restores the blood sugar level to a narrow range of about 4.4 to 6.1 mmol/L (79 to 110 mg/dL) (as measured by a fasting blood glucose test). The global mean fasting plasma blood glucose level in humans is about 5.5 mmol/L (100 mg/dL); however, this level fluctuates throughout the day. 
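The unit conversion follows directly from the molar mass quoted above: 1 mmol/L corresponds to 180.156 / 10 ≈ 18.02 mg/dL. A minimal sketch:

```python
# Conversion between mmol/L and mg/dL from the molar mass of glucose quoted above.
MOLAR_MASS_GLUCOSE = 180.156                  # g/mol
MG_DL_PER_MMOL_L = MOLAR_MASS_GLUCOSE / 10    # ~18.02 (g/mol = mg/mmol; 10 dL per L)

def mmol_l_to_mg_dl(mmol_l: float) -> float:
    return mmol_l * MG_DL_PER_MMOL_L

def mg_dl_to_mmol_l(mg_dl: float) -> float:
    return mg_dl / MG_DL_PER_MMOL_L

print(round(mmol_l_to_mg_dl(5.5), 1))   # 99.1 mg/dL -- the "about 100" quoted above
print(round(mg_dl_to_mmol_l(100), 2))   # 5.55 mmol/L
```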
Blood sugar levels for those without diabetes and who are not fasting are usually below 6.9 mmol/L (125 mg/dL). Despite widely variable intervals between meals or the occasional consumption of meals with a substantial carbohydrate load, human blood glucose levels tend to remain within the normal range. However, shortly after eating, the blood glucose level may rise, in non-diabetics, temporarily up to 7.8 mmol/L (140 mg/dL) or slightly more. The actual amount of glucose in the blood and body fluids is very small. In a healthy adult male with a blood volume of 5 L, a blood glucose level of 5.5 mmol/L (100 mg/dL) amounts to 5 g, equivalent to about a teaspoonful of sugar. Part of the reason why this amount is so small is that, to maintain an influx of glucose into cells, enzymes modify glucose by adding phosphate or other groups to it. Other animals In general, ranges of blood sugar in common domestic ruminants are lower than in many monogastric mammals. However, this generalization does not extend to wild ruminants or camelids. For serum glucose in mg/dL, reference ranges of 42 to 75 for cows, 44 to 81 for sheep, and 48 to 76 for goats, but 61 to 124 for cats, 62 to 108 for dogs, 62 to 114 for horses, 66 to 116 for pigs, 75 to 155 for rabbits, and 90 to 140 for llamas have been reported. A 90 percent reference interval for serum glucose of 26 to 181 mg/dL has been reported for captured mountain goats (Oreamnos americanus), where no effects of the pursuit and capture on measured levels were evident. For beluga whales, the 25–75 percent range for serum glucose has been estimated to be 94 to 115 mg/dL. For the white rhinoceros, one study has indicated that the 95 percent range is 28 to 140 mg/dL. For harp seals, a serum glucose range of 4.9 to 12.1 mmol/L [i.e. 88 to 218 mg/dL] has been reported; for hooded seals, a range of 7.5 to 15.7 mmol/L [i.e. about 135 to 283 mg/dL] has been reported. Regulation The body's homeostatic mechanism keeps blood glucose levels within a narrow range. It is composed of several interacting systems, of which hormone regulation is the most important. There are two types of mutually antagonistic metabolic hormones affecting blood glucose levels: catabolic hormones (such as glucagon, cortisol and catecholamines), which increase blood glucose; and one anabolic hormone (insulin), which decreases blood glucose. These hormones are secreted from pancreatic islets (bundles of endocrine tissue), of which there are four types: alpha (A) cells, beta (B) cells, delta (D) cells and F cells. Glucagon is secreted from alpha cells, while insulin is secreted by beta cells. Together they regulate blood-glucose levels through negative feedback, a process in which the end product of one reaction stimulates the beginning of another reaction. Insulin lowers the concentration of glucose in the blood; the lower blood-glucose level (a product of the insulin secretion) then triggers glucagon to be secreted, and the cycle repeats. In order for blood glucose to be kept stable, modifications to insulin, glucagon, epinephrine and cortisol are made. Each of these hormones has a different responsibility to keep blood glucose regulated; when blood sugar is too high, insulin tells muscles to take up excess glucose for storage in the form of glycogen. Glucagon responds to a blood glucose level that is too low; it informs the tissue to release some glucose from the glycogen stores.
Epinephrine prepares the muscles and respiratory system for activity in the case of a "fight or flight" response. Lastly, cortisol supplies the body with fuel in times of heavy stress. Abnormalities High blood sugar If blood sugar levels remain too high, the body suppresses appetite over the short term. Long-term hyperglycemia causes many health problems, including heart disease, cancer, and eye, kidney, and nerve damage. Blood sugar levels above 16.7 mmol/L (300 mg/dL) can cause fatal reactions. Ketones will be very high (a magnitude higher than when eating a very low-carbohydrate diet), initiating ketoacidosis. The ADA (American Diabetes Association) recommends seeing a doctor if blood glucose reaches 13.3 mmol/L (240 mg/dL), and seeking emergency treatment at 15 mmol/L (270 mg/dL) blood glucose if ketones are present. The most common cause of hyperglycemia is diabetes. When diabetes is the cause, physicians typically recommend an anti-diabetic medication as treatment. From the perspective of the majority of patients, treatment with an old, well-understood diabetes drug such as metformin will be the safest, most effective, least expensive, and most comfortable route to managing the condition. Treatment varies for the distinct forms of diabetes and can differ from person to person based on how they respond to treatment. Dietary changes and exercise may also be part of a treatment plan for diabetes. Some medications may cause a rise in the blood sugar of diabetics, such as steroid medications, including cortisone, hydrocortisone, prednisolone, prednisone, and dexamethasone. Low blood sugar A blood sugar level below 70 mg/dL is referred to as low blood sugar. Low blood sugar is very frequent among type 1 diabetics. There are several causes of low blood sugar, including taking an excessive amount of insulin, not consuming enough carbohydrates, drinking alcohol, spending time at a high elevation, puberty, and menstruation. If blood sugar levels drop too low, a potentially fatal condition called hypoglycemia develops. Symptoms may include lethargy, impaired mental functioning, irritability, shaking, twitching, weakness in arm and leg muscles, pale complexion, sweating, and loss of consciousness. Mechanisms that restore satisfactory blood glucose levels after extreme hypoglycemia (below 2.2 mmol/L or 40 mg/dL) must be quick and effective to prevent extremely serious consequences of insufficient glucose: confusion or unsteadiness and, in the extreme (below 0.8 mmol/L or 15 mg/dL), loss of consciousness and seizures. Without discounting the potentially serious conditions and risks due to, or often accompanying, hyperglycemia, especially in the long term (diabetes or pre-diabetes, obesity or overweight, hyperlipidemia, hypertension, etc.), it is still generally more dangerous to have too little glucose in the blood than too much, at least temporarily, because glucose is so important for metabolism and nutrition and the proper functioning of the body's organs. This is especially the case for organs that are metabolically active or that require a constant, regulated supply of blood sugar (the liver and brain are examples). Symptomatic hypoglycemia is most often associated with diabetes and liver disease (especially overnight or postprandially), with absent or incorrect treatment, possibly in combination with carbohydrate malabsorption, physical over-exertion or drugs.
Many other less likely illnesses, such as cancer, can also be a cause. Starvation, possibly due to eating disorders such as anorexia, will also eventually lead to hypoglycemia. Hypoglycemic episodes can vary greatly between persons and from time to time, both in severity and swiftness of onset. For severe cases, prompt medical assistance is essential, as damage to the brain and other tissues, and even death, will result from sufficiently low blood-glucose levels. Glucose measurement In the past, measuring blood glucose required taking a blood sample, as explained below, but since 2015 it has also been possible to use a continuous glucose monitor, which involves an electrode placed under the skin. Both methods, as of 2023, cost hundreds of dollars or euros per year for the supplies needed. Sample source Glucose testing in a fasting individual shows comparable levels of glucose in arterial, venous, and capillary blood. But following meals, capillary and arterial blood glucose levels can be significantly higher than venous levels. Although these differences vary widely, one study found that following the consumption of 50 grams of glucose, "the mean capillary blood glucose concentration is higher than the mean venous blood glucose concentration by 35%." Sample type Glucose is measured in whole blood, plasma or serum. Historically, blood glucose values were given in terms of whole blood, but most laboratories now measure and report plasma or serum glucose levels. Because red blood cells (erythrocytes) have a higher concentration of protein (e.g., hemoglobin) than serum, serum has a higher water content and consequently more dissolved glucose than does whole blood. To convert from whole-blood glucose, multiplication by 1.14 has been shown to generally give the serum/plasma level. To prevent contamination of the sample with intravenous fluids, particular care should be given to drawing blood samples from the arm opposite the one in which an intravenous line is inserted. Alternatively, blood can be drawn from the same arm with an IV line after the IV has been turned off for at least 5 minutes and the arm has been elevated to drain infused fluids away from the vein. Inattention can lead to large errors, since as little as 10% contamination with a 5% glucose solution (D5W) will elevate glucose in a sample by 500 mg/dL or more. The actual concentration of glucose in blood is very low, even in the hyperglycemic. Measurement techniques Two major methods have been used to measure glucose. The first, still in use in some places, is a chemical method exploiting the nonspecific reducing property of glucose in a reaction with an indicator substance that changes color when reduced. Since other blood compounds also have reducing properties (e.g., urea, which can be abnormally high in uremic patients), this technique can produce erroneous readings in some situations (errors of 5–15 mg/dL have been reported). The more recent technique, using enzymes specific to glucose, is less susceptible to this kind of error. The two most commonly employed enzymes are glucose oxidase and hexokinase. Average blood glucose concentrations can also be measured. This method measures the level of glycated hemoglobin, which is representative of the average blood glucose levels over the last, approximately, 120 days. In either case, the chemical system is commonly contained on a test strip which is inserted into a meter, and then has a blood sample applied.
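The whole-blood to serum/plasma adjustment mentioned above amounts to a single multiplication. Below is a minimal sketch (not part of the original article); the factor of 1.14 is the approximate value quoted above, and the function name is illustrative.

```python
# Approximate conversion of a whole-blood glucose reading to the
# corresponding serum/plasma level, using the ~1.14 factor quoted above.
WHOLE_BLOOD_TO_PLASMA = 1.14

def whole_blood_to_plasma(glucose_mg_dl: float) -> float:
    return glucose_mg_dl * WHOLE_BLOOD_TO_PLASMA

print(round(whole_blood_to_plasma(90), 1))  # a 90 mg/dL whole-blood reading ~ 102.6 mg/dL plasma
```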
Test-strip shapes and their exact chemical composition vary between meter systems and cannot be interchanged. Formerly, some test strips were read (after timing and wiping away the blood sample) by visual comparison against a color chart printed on the vial label. Strips of this type are still used for urine glucose readings, but for blood glucose levels they are obsolete. Their error rates were, in any case, much higher. Errors when using test strips were often caused by the age of the strip or exposure to high temperatures or humidity. More precise blood glucose measurements are performed in a medical laboratory, using hexokinase, glucose oxidase, or glucose dehydrogenase enzymes. Urine glucose readings, however taken, are much less useful. In properly functioning kidneys, glucose does not appear in urine until the renal threshold for glucose has been exceeded. This is substantially above any normal glucose level, and is evidence of an existing severe hyperglycemic condition. However, as urine is stored in the bladder, any glucose in it might have been produced at any time since the last time the bladder was emptied. Since metabolic conditions change rapidly, as a result of any of several factors, this is delayed news and gives no warning of a developing condition. Blood glucose monitoring is far preferable, both clinically and for home monitoring by patients. Healthy urine glucose levels were first standardized and published in 1965 by Hans Renschler. A noninvasive method of sampling to monitor glucose levels has emerged using an exhaled breath condensate. However, this method requires highly sensitive glucose biosensors. Clinical correlation The fasting blood glucose level, which is measured after a fast of 8 hours, is the most commonly used indication of overall glucose homeostasis, largely because disturbing events such as food intake are avoided. Conditions affecting glucose levels are shown in the table below. Abnormalities in these test results are due to problems in the multiple control mechanisms of glucose regulation. The metabolic response to a carbohydrate challenge is conveniently assessed by a postprandial glucose level drawn 2 hours after a meal or a glucose load. In addition, the glucose tolerance test, consisting of several timed measurements after a standardized amount of oral glucose intake, is used to aid in the diagnosis of diabetes. Error rates for blood glucose measurement systems vary, depending on laboratories and on the methods used. Colorimetry techniques can be biased by color changes in test strips (from airborne or finger-borne contamination, perhaps) or interference (e.g., tinting contaminants) with the light source or the light sensor. Electrical techniques are less susceptible to these errors, though not to others. In home use, the most important issue is not accuracy, but trend. Thus, if a meter/test-strip system is consistently wrong by 10%, there will be little consequence, as long as changes (e.g., due to exercise or medication adjustments) are properly tracked. In the US, home-use blood test meters must be approved by the federal Food and Drug Administration before they can be sold. Finally, there are several influences on blood glucose level aside from food intake. Infection, for instance, tends to change blood glucose levels, as does stress, either physical or psychological. Exercise, especially if prolonged or long after the most recent meal, will have an effect as well.
In the typical person, maintenance of blood glucose at near constant levels will nevertheless be quite effective. See also Blood glucose monitoring Glycemic index Saccharide recognition by boronic acids References Further reading External links Glucose (blood, serum, plasma): analyte monograph – The Association for Clinical Biochemistry and Laboratory Medicine Blood tests Concentration indicators Diabetes Diagnostic endocrinology Human homeostasis
Blood sugar level
Chemistry,Biology
3,853
256,162
https://en.wikipedia.org/wiki/Neutrophil
Neutrophils are a type of phagocytic white blood cell and part of innate immunity. More specifically, they form the most abundant type of granulocytes and make up 40% to 70% of all white blood cells in humans. Their functions vary in different animals. They are also known as neutrocytes, heterophils or polymorphonuclear leukocytes. They are formed from stem cells in the bone marrow and differentiated into subpopulations of neutrophil-killers and neutrophil-cagers. They are short-lived (between 5 and 135 hours; see Life span below) and highly mobile, as they can enter parts of tissue where other cells/molecules cannot. Neutrophils may be subdivided into segmented neutrophils and banded neutrophils (or bands). They form part of the polymorphonuclear cell family (PMNs) together with basophils and eosinophils. The name neutrophil derives from staining characteristics on hematoxylin and eosin (H&E) histological or cytological preparations. Whereas basophilic white blood cells stain dark blue and eosinophilic white blood cells stain bright red, neutrophils stain a neutral pink. Normally, neutrophils contain a nucleus divided into 2–5 lobes. Neutrophils are a type of phagocyte and are normally found in the bloodstream. During the beginning (acute) phase of inflammation, particularly as a result of bacterial infection, environmental exposure, and some cancers, neutrophils are among the first inflammatory cells to migrate toward the site of inflammation. They migrate through the blood vessels and then through interstitial space, following chemical signals such as interleukin-8 (IL-8), C5a, fMLP, leukotriene B4, and hydrogen peroxide (H2O2) in a process called chemotaxis. They are the predominant cells in pus, accounting for its whitish/yellowish appearance. Neutrophils are recruited to the site of injury within minutes following trauma and are the hallmark of acute inflammation. They not only play a central role in combating infection but also contribute to pain in the acute period by releasing pro-inflammatory cytokines and other mediators that sensitize nociceptors, leading to heightened pain perception. However, because some pathogens are indigestible, neutrophils may not be able to resolve certain infections without the assistance of other types of immune cells. Structure When adhered to a surface, neutrophil granulocytes have an average diameter of 12–15 micrometers (μm) in peripheral blood smears. In suspension, human neutrophils have an average diameter of 8.85 μm. With the eosinophil and the basophil, they form the class of polymorphonuclear cells, named for the nucleus' multilobulated shape (as compared to lymphocytes and monocytes, the other types of white cells). The nucleus has a characteristic lobed appearance, the separate lobes connected by chromatin. The nucleolus disappears as the neutrophil matures, which happens in only a few other types of nucleated cells. Up to 17% of female human neutrophil nuclei have a drumstick-shaped appendage which contains the inactivated X chromosome. In the cytoplasm, the Golgi apparatus is small, mitochondria and ribosomes are sparse, and the rough endoplasmic reticulum is absent. The cytoplasm also contains about 200 granules, of which a third are azurophilic. Neutrophils show increasing segmentation (many segments of the nucleus) as they mature. A normal neutrophil should have 3–5 segments. Hypersegmentation is not normal but occurs in some disorders, most notably vitamin B12 deficiency.
This is noted in a manual review of the blood smear and is positive when most or all of the neutrophils have 5 or more segments. Neutrophils are the most abundant white blood cells in the human body (approximately 10¹¹ are produced daily); they account for approximately 50–70% of all white blood cells (leukocytes). The stated normal range for human blood counts varies between laboratories, but a neutrophil count of 2.5–7.5 × 10⁹/L is a standard normal range. People of African and Middle Eastern descent may have lower counts, which are still normal. A report may divide neutrophils into segmented neutrophils and bands. When circulating in the bloodstream and inactivated, neutrophils are spherical. Once activated, they change shape and become more amorphous or amoeba-like and can extend pseudopods as they hunt for antigens. The capacity of neutrophils to engulf bacteria was reduced when simple sugars such as glucose and fructose, as well as sucrose, honey and orange juice, were ingested, while the ingestion of starches had no effect. Fasting, on the other hand, strengthened the neutrophils' phagocytic capacity to engulf bacteria. It was concluded that the function, and not the number, of phagocytes in engulfing bacteria was altered by the ingestion of sugars. In 2007, researchers at the Whitehead Institute for Biomedical Research found that, given a selection of sugars on microbial surfaces, neutrophils reacted to some types of sugars preferentially. The neutrophils preferentially engulfed and killed beta-1,6-glucan targets compared to beta-1,3-glucan targets. Development Life span The average lifespan of inactivated human neutrophils in the circulation has been reported by different approaches to be between 5 and 135 hours. Upon activation, they marginate (position themselves adjacent to the blood vessel endothelium) and undergo selectin-dependent capture followed by integrin-dependent adhesion in most cases, after which they migrate into tissues, where they survive for 1–2 days. Neutrophils have also been demonstrated to be released into the blood from a splenic reserve following myocardial infarction. The distribution ratio of neutrophils in bone marrow, blood and connective tissue is 28:1:25. Neutrophils are much more numerous than the longer-lived monocyte/macrophage phagocytes. A pathogen (disease-causing microorganism or virus) is likely to first encounter a neutrophil. Some experts hypothesize that the short lifetime of neutrophils is an evolutionary adaptation. The short lifetime of neutrophils minimizes propagation of those pathogens that parasitize phagocytes (e.g. Leishmania) because the more time such parasites spend outside a host cell, the more likely they will be destroyed by some component of the body's defenses. Also, because neutrophil antimicrobial products can also damage host tissues, their short life limits damage to the host during inflammation. Neutrophils are removed after phagocytosis of pathogens by macrophages. PECAM-1 and phosphatidylserine on the cell surface are involved in this process. Function Chemotaxis Neutrophils undergo a process called chemotaxis via amoeboid movement, which allows them to migrate toward sites of infection or inflammation. Cell surface receptors allow neutrophils to detect chemical gradients of molecules such as interleukin-8 (IL-8), interferon gamma (IFN-γ), C3a, C5a, and leukotriene B4, which these cells use to direct the path of their migration.
Neutrophils have a variety of specific receptors, including ones for complement, cytokines like interleukins and IFN-γ, chemokines, lectins, and other proteins. They also express receptors to detect and adhere to endothelium and Fc receptors for opsonins. In leukocytes responding to a chemoattractant, the cellular polarity is regulated by activities of small Ras or Rho guanosine triphosphatases (Ras or Rho GTPases) and the phosphoinositide 3-kinases (PI3Ks). In neutrophils, lipid products of PI3Ks regulate activation of Rac1, hematopoietic Rac2, and RhoG GTPases of the Rho family and are required for cell motility. Ras-GTPases and Rac-GTPases regulate cytoskeletal dynamics and facilitate neutrophil adhesion, migration, and spreading. They accumulate asymmetrically at the plasma membrane at the leading edge of polarized cells. By spatially regulating Rho GTPases and organizing the leading edge of the cell, PI3Ks and their lipid products could play pivotal roles in establishing leukocyte polarity, as compass molecules that tell the cell where to crawl. It has been shown in mice that, in certain conditions, neutrophils have a specific type of migration behaviour referred to as neutrophil swarming, during which they migrate in a highly coordinated manner and accumulate and cluster at sites of inflammation. Anti-microbial function Being highly motile, neutrophils quickly congregate at a focus of infection, attracted by cytokines expressed by activated endothelium, mast cells, and macrophages. Neutrophils express and release cytokines, which in turn amplify inflammatory reactions by several other cell types. In addition to recruiting and activating other cells of the immune system, neutrophils play a key role in the front-line defense against invading pathogens, and contain a broad range of proteins. Neutrophils have three methods for directly attacking microorganisms: phagocytosis (ingestion), degranulation (release of soluble anti-microbials), and generation of neutrophil extracellular traps (NETs). Phagocytosis Neutrophils are phagocytes, capable of ingesting microorganisms or particles. For targets to be recognized, they must be coated in opsonins, a process known as antibody opsonization. They can internalize and kill many microbes, each phagocytic event resulting in the formation of a phagosome into which reactive oxygen species and hydrolytic enzymes are secreted. The consumption of oxygen during the generation of reactive oxygen species has been termed the "respiratory burst", although it is unrelated to respiration or energy production. The respiratory burst involves the activation of the enzyme NADPH oxidase, which produces large quantities of superoxide, a reactive oxygen species. Superoxide decays spontaneously or is broken down via enzymes known as superoxide dismutases (Cu/ZnSOD and MnSOD) to hydrogen peroxide, which is then converted to hypochlorous acid (HClO) by the green heme enzyme myeloperoxidase. It is thought that the bactericidal properties of HClO are enough to kill bacteria phagocytosed by the neutrophil, but this may instead be a step necessary for the activation of proteases. Though neutrophils can kill many microbes, the interaction of neutrophils with microbes and molecules produced by microbes often alters neutrophil turnover. The ability of microbes to alter the fate of neutrophils is highly varied, can be microbe-specific, and ranges from prolonging the neutrophil lifespan to causing rapid neutrophil lysis after phagocytosis.
Chlamydia pneumoniae and Neisseria gonorrhoeae have been reported to delay neutrophil apoptosis. Thus, some bacteria, particularly those that are predominantly intracellular pathogens, can extend the neutrophil lifespan by disrupting the normal process of spontaneous apoptosis and/or PICD (phagocytosis-induced cell death). On the other end of the spectrum, some pathogens such as Streptococcus pyogenes are capable of altering neutrophil fate after phagocytosis by promoting rapid cell lysis and/or accelerating apoptosis to the point of secondary necrosis. Degranulation Neutrophils also release an assortment of proteins in three types of granules by a process called degranulation. The contents of these granules have antimicrobial properties, and help combat infection. Glitter cells are polymorphonuclear leukocyte neutrophils with granules. Degranulation is postulated to occur in a hierarchical manner, with the sequential release of secretory vesicles, tertiary granules, specific granules, and azurophilic granules in response to increasing intracellular calcium concentrations. The release of neutrophil granules by degranulation occurs through exocytosis, regulated by exocytotic machinery including SNARE proteins, RAC2, RAB27, and others. Neutrophil extracellular traps In 2004, Brinkmann and colleagues described a striking observation that activation of neutrophils causes the release of web-like structures of DNA; this represents a third mechanism for killing bacteria. These neutrophil extracellular traps (NETs) comprise a web of fibers composed of chromatin and serine proteases that trap and kill extracellular microbes. It is suggested that NETs provide a high local concentration of antimicrobial components and bind, disarm, and kill microbes independent of phagocytic uptake. In addition to their possible antimicrobial properties, NETs may serve as a physical barrier that prevents further spread of pathogens. Trapping of bacteria may be a particularly important role for NETs in sepsis, where NETs are formed within blood vessels. Finally, NET formation has been demonstrated to augment macrophage bactericidal activity during infection. Recently, NETs have been shown to play a role in inflammatory diseases, as NETs could be detected in preeclampsia, a pregnancy-related inflammatory disorder in which neutrophils are known to be activated. Neutrophil NET formation may also impact cardiovascular disease, as NETs may influence thrombus formation in coronary arteries. NETs are now known to exhibit pro-thrombotic effects both in vitro and in vivo. More recently, in 2020, NETs were implicated in the formation of blood clots in cases of severe COVID-19. Tumor-associated neutrophils (TANs) TANs can exhibit an elevated extracellular acidification rate when there is an increase in glycolysis. A metabolic shift in TANs can lead to tumor progression in certain areas of the body, such as the lungs. TANs support the growth and progression of tumors, unlike normal neutrophils, which would inhibit tumor progression through the phagocytosis of tumor cells. Using a mouse model, researchers identified that both Glut1 expression and glucose metabolism increased in TANs in mice with lung adenocarcinoma. A study showed that lung tumor cells can remotely activate osteoblasts, and these osteoblasts can worsen tumors in two ways. First, they can induce the formation of SiglecF-high-expressing neutrophils, which in turn promote lung tumor growth and progression.
Second, the osteoblasts can promote bone growth, thus forming a favorable environment for tumor cells to grow and form bone metastases. Clinical significance Low neutrophil counts are termed neutropenia. This can be congenital (developed at or before birth) or it can develop later, as in the case of aplastic anemia or some kinds of leukemia. It can also be a side-effect of medication, most prominently chemotherapy. Neutropenia makes an individual highly susceptible to infections. It can also be the result of colonization by intracellular neutrophilic parasites. In alpha 1-antitrypsin deficiency, the important neutrophil elastase is not adequately inhibited by alpha 1-antitrypsin, leading to excessive tissue damage in the presence of inflammation – the most prominent one being emphysema. Negative effects of elastase have also been shown in cases when the neutrophils are excessively activated (in otherwise healthy individuals) and release the enzyme in extracellular space. Unregulated activity of neutrophil elastase can lead to disruption of the pulmonary barrier, showing symptoms corresponding with acute lung injury. The enzyme also influences the activity of macrophages by cleaving their toll-like receptors (TLRs) and downregulating cytokine expression by inhibiting nuclear translocation of NF-κB. In Familial Mediterranean fever (FMF), a mutation in the pyrin (or marenostrin) gene, which is expressed mainly in neutrophil granulocytes, leads to a constitutively active acute-phase response and causes attacks of fever, arthralgia, peritonitis, and – eventually – amyloidosis. Hyperglycemia can lead to neutrophil dysfunction. Dysfunction in the neutrophil biochemical pathway myeloperoxidase as well as reduced degranulation are associated with hyperglycemia. The absolute neutrophil count (ANC) is also used in diagnosis and prognosis. ANC is the gold standard for determining severity of neutropenia, and thus neutropenic fever. Any ANC < 1500 cells/mm³ is considered neutropenia, but < 500 cells/mm³ is considered severe. There is also new research tying ANC to myocardial infarction as an aid in early diagnosis. Neutrophils promote ventricular tachycardia in acute myocardial infarction. In autopsy, the presence of neutrophils in the heart or brain is one of the first signs of infarction, and is useful in the timing and diagnosis of myocardial infarction and stroke. Pathogen evasion and resistance Just like other phagocytes, neutrophils may be evaded or infected by pathogens. Some bacterial pathogens have evolved various mechanisms, such as virulence molecules, to avoid being killed by neutrophils. These molecules collectively may alter or disrupt neutrophil recruitment, apoptosis or bactericidal activity. Neutrophils can also serve as host cells for various parasites that infect them while avoiding phagocytosis, including: Leishmania major – uses neutrophils as a vehicle to parasitize phagocytes M. tuberculosis M. leprae Yersinia pestis Chlamydia pneumoniae Neutrophil antigens There are five (HNA 1–5) sets of neutrophil antigens recognized. The three HNA-1 antigens (a–c) are located on the low-affinity Fc-γ receptor IIIb (FCGR3B; CD16b). The single known HNA-2a antigen is located on CD177. The HNA-3 antigen system has two antigens (3a and 3b) which are located on the seventh exon of the CTL2 gene (SLC44A2). The HNA-4 and HNA-5 antigen systems each have two known antigens (a and b) and are located in the β2 integrin. HNA-4 is located on the αM chain (CD11b) and HNA-5 is located on the αL integrin unit (CD11a).
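The ANC arithmetic referred to above is simple enough to show directly. The snippet below is a minimal illustrative sketch, not from the article; it assumes the conventional estimate ANC = WBC × (% segmented neutrophils + % band neutrophils) / 100 and reuses the neutropenia cut-offs quoted in the text. The function names and example values are hypothetical.

```python
# Absolute neutrophil count (ANC) from a complete blood count.
# WBC is given in cells per microliter (equivalently cells/mm^3).
def absolute_neutrophil_count(wbc_per_ul: float, segs_pct: float, bands_pct: float) -> float:
    """Return the estimated ANC in cells/mm^3."""
    return wbc_per_ul * (segs_pct + bands_pct) / 100.0

def classify_anc(anc: float) -> str:
    """Classify severity using the cut-offs quoted in the text above."""
    if anc < 500:
        return "severe neutropenia"
    if anc < 1500:
        return "neutropenia"
    return "not neutropenic by the quoted cut-off"

anc = absolute_neutrophil_count(wbc_per_ul=2200, segs_pct=15, bands_pct=5)
print(anc, classify_anc(anc))   # 440.0 severe neutropenia
```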
Subpopulations Two functionally unequal subpopulations of neutrophils were identified on the basis of different levels of their reactive oxygen metabolite generation, membrane permeability, activity of enzyme systems, and ability to be inactivated. The cells of one subpopulation with high membrane permeability (neutrophil-killers) intensively generate reactive oxygen metabolites and are inactivated as a consequence of interaction with the substrate, whereas cells of the other subpopulation (neutrophil-cagers) produce reactive oxygen species less intensively, do not adhere to the substrate and preserve their activity. Additional studies have shown that lung tumors can be infiltrated by various populations of neutrophils. Video Neutrophils display highly directional amoeboid motility in the infected footpad and phalanges. Intravital imaging was performed in the footpad path of LysM-eGFP mice 20 minutes after infection with Listeria monocytogenes. Additional images See also List of distinct cell types in the adult human body References External links Neutropenia Information Absolute Neutrophil Count Calculator Neutrophil Trace Element Content and Distribution Articles containing video clips Cell biology Granulocytes Human cells Phagocytes
Neutrophil
Biology
4,471
60,724,218
https://en.wikipedia.org/wiki/Feedback%20terminal
A feedback terminal is a physical device that is commonly used to collect large amounts of anonymous real-time feedback from customers, employees, travelers, visitors or students. Typically, feedback terminals feature buttons that users can press to indicate how satisfied they are with the service provided. This information can then be used by organisations to analyze where the experience is optimal and where it can be improved. Applications Feedback terminals are used to measure and improve customer and employee experience across a broad range of industries, including retail, healthcare, the hospitality industry, airports, and educational institutions. Feedback terminals also allow for the collection of real-time feedback. For example, by collecting real-time feedback in a public restroom, a facilities manager can be alerted if the measured customer experience has dropped below a certain threshold and can then immediately send out cleaners. Feedback terminals are also often used to measure Net Promoter Score (NPS) on-site, a metric which can be used to gauge the loyalty of a company's customer relationships. Using a five-button feedback terminal, the Net Promoter Score can be calculated based on responses to a question asking about a customer's experience. Doing so can allow companies to more efficiently categorize customers based on their expected behavior, as sketched in the example below. Benefits The main benefit of such feedback collection is that organisations can collect a larger amount of data when compared to traditional surveys, potentially registering thousands of impressions in a day from a wide range of customers. Measuring people's experience along the customer journey helps businesses and other organisations understand how people feel about their experiences at various touchpoints. The organisations can use the feedback data to make improvements that meet the expectations of the majority. Real-time feedback is one of the major benefits of feedback terminals compared to alternative survey methods. As opposed to traditional surveys, people can use feedback terminals to express their opinion right after the experience has happened, so the feedback can often be more accurate. In addition, the ability to track feedback in real time enables organisations to pinpoint sudden drops in customer or employee experience and solve problems before they get out of control or too many people are disappointed with their experiences. Some providers of feedback terminals also allow their customers to switch between smiley and multiple-choice questions to poll for more precise answers. For example, using multiple-choice feedback buttons, customers can express their opinion on what the company needs to improve next. One major benefit also comes from a marketing perspective. By displaying positive feedback scores, companies can boost confidence in their products and services and in turn win more new customers. This tactic has been widely used among businesses in the tourism sector by displaying feedback or review scores from popular tourism websites. Additionally, feedback terminals often collect more responses compared to traditional surveys. Organisations using feedback terminals could expect to receive feedback from up to 30% or more of the footfall. Finally, by deploying several feedback terminals, organisations can compare results from multiple locations.
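The following is a minimal sketch (not from the article) of how a Net Promoter Score might be derived from a five-button terminal. The mapping of the five buttons onto the standard promoter/passive/detractor categories is an assumption made here for illustration; real products may map buttons differently, and the sample data is invented.

```python
# Net Promoter Score from hypothetical five-button feedback data.
# NPS = (% promoters - % detractors), reported as a whole number.
from collections import Counter

# Hypothetical button presses collected in one day: 1 = very unhappy ... 5 = very happy
presses = [5, 5, 4, 3, 5, 2, 4, 5, 1, 4, 5, 3, 5, 4, 5]

# Assumed mapping of buttons to NPS categories (illustrative only).
CATEGORY = {1: "detractor", 2: "detractor", 3: "passive", 4: "passive", 5: "promoter"}

def net_promoter_score(button_presses):
    counts = Counter(CATEGORY[p] for p in button_presses)
    total = len(button_presses)
    promoters = counts["promoter"] / total
    detractors = counts["detractor"] / total
    return round(100 * (promoters - detractors))

print(net_promoter_score(presses))  # 33 for the sample above
```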
This feature can be very useful in the pursuit of providing the same level of customer experience across all outlets that belong to the same brand. In one notable example, feedback terminals were used to gauge customer experience across one hundred and fifty gas stations, and managers could immediately see substantial differences in customer experience that had to be addressed. Criticism Some experts argue that organisations should be careful when interpreting data collected using feedback terminals, because such surveys are likely to have biased results due to unrepresentative data collection. Collected data may over-represent extremes, and this can lead to inaccurate projections. The anonymity of feedback terminals has been criticized too. According to some experts, the anonymity of the data prevents organizations from knowing whether they satisfy their most important customers. References Smart devices
Feedback terminal
Technology
755
41,768,703
https://en.wikipedia.org/wiki/GLIC
The GLIC receptor is a bacterial (Gloeobacter) Ligand-gated Ion Channel, homologous to the nicotinic acetylcholine receptors. It is a proton-gated, cation-selective channel: the channel opens when it binds a proton (H+ ion), and it selectively lets positive ions through. Like the nicotinic acetylcholine receptors, it is a functional pentameric oligomer (the channel normally works as an assembly of five subunits). However, while its eukaryotic homologues are hetero-oligomeric (assembled from different subunits), all bacteria known so far to express such ligand-gated ion channels encode a single subunit, indicating that GLIC is a functional homo-oligomer (assembled from identical subunits). The similarity of its amino-acid sequence to the eukaryotic LGICs is not localized to any single tertiary domain, indicating that GLIC functions similarly to its eukaryotic equivalents. Regardless, the purpose of regulating the threshold for action potential excitation in the nerve signal transmission of multicellular organisms cannot translate to single-cell organisms, so the purpose of bacterial LGICs is not immediately obvious. Structure The structure of the open channel was solved by two independent research teams in 2009 at low pH values of 4–4.6 (GLIC being proton-gated). See also Cys-loop receptors Ion channel Receptor (biochemistry) References External links Electrophysiology Ion channels Ionotropic receptors Molecular neuroscience Neurochemistry Protein families
GLIC
Chemistry,Biology
331
24,993,240
https://en.wikipedia.org/wiki/Van%20der%20Corput%20lemma%20%28harmonic%20analysis%29
In mathematics, in the field of harmonic analysis, the van der Corput lemma is an estimate for oscillatory integrals named after the Dutch mathematician J. G. van der Corput. The following result is stated by E. Stein: Suppose that a real-valued function φ is smooth in an open interval (a, b), and that |φ^(k)(x)| ≥ 1 for all x in (a, b). Assume that either k ≥ 2, or that k = 1 and φ′ is monotone for x in (a, b). Then there is a constant c_k, which does not depend on φ or λ, such that |∫_a^b e^{iλφ(x)} dx| ≤ c_k λ^(−1/k) for any λ > 0. Sublevel set estimates The van der Corput lemma is closely related to the sublevel set estimates, which give an upper bound on the measure of the set where a function takes values not larger than a given ε. Suppose that a real-valued function φ is smooth on a finite or infinite interval I, and that |φ^(k)(x)| ≥ 1 for all x in I. There is a constant c_k, which does not depend on φ, such that for any ε > 0 the measure of the sublevel set {x in I : |φ(x)| ≤ ε} is bounded by c_k ε^(1/k). References Inequalities Harmonic analysis Fourier analysis
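As a quick numerical illustration of the decay rate in the statement above, the following minimal sketch (assuming NumPy is available; not part of the original article) evaluates the oscillatory integral for φ(x) = x² on (0, 1), where |φ''| = 2 ≥ 1, so the k = 2 case predicts decay at least like λ^(−1/2).

```python
# Numerical check of van der Corput decay for phi(x) = x^2 on (0, 1):
# |I(lam)| = |integral_0^1 exp(i*lam*x^2) dx| should decay at least like lam^(-1/2).
import numpy as np

def oscillatory_integral(lam, n=2_000_000):
    # Midpoint rule on a dense grid; fine enough for the lambda values used below.
    x = (np.arange(n) + 0.5) / n
    return np.exp(1j * lam * x**2).mean()

for lam in [10.0, 100.0, 1000.0, 10000.0]:
    value = abs(oscillatory_integral(lam))
    print(f"lambda = {lam:>8.0f}   |I| = {value:.5f}   lambda**(-1/2) = {lam**-0.5:.5f}")
```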
Van der Corput lemma (harmonic analysis)
Mathematics
201
34,551,376
https://en.wikipedia.org/wiki/Antieigenvalue%20theory
In applied mathematics, antieigenvalue theory was developed by Karl Gustafson from 1966 to 1968. The theory is applicable to numerical analysis, wavelets, statistics, quantum mechanics, finance and optimization. The antieigenvectors are the vectors most turned by a matrix or operator A, that is to say those for which the angle between the original vector and its transformed image is greatest. The corresponding antieigenvalue is the cosine of the maximal turning angle. The maximal turning angle is φ(A) and is called the angle of the operator A. Just as the eigenvalues may be ordered as a spectrum from smallest to largest, the theory of antieigenvalues orders the antieigenvalues of an operator A from the smallest to the largest turning angle. References Operator theory Matrix theory
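To make the definition concrete, here is a minimal numerical sketch (assuming NumPy and SciPy are available; not part of the original article). It estimates the first antieigenvalue of a symmetric positive definite matrix by minimizing cos(angle between x and Ax) over several random starting vectors, and compares the result with the closed form 2·sqrt(λ_min·λ_max)/(λ_min + λ_max), which is the known value for such matrices and is assumed here; all names in the snippet are illustrative.

```python
# First antieigenvalue of a symmetric positive definite matrix A:
# the smallest value of cos(angle(x, Ax)) = <Ax, x> / (||Ax|| ||x||) over nonzero x.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
M = rng.standard_normal((5, 5))
A = M @ M.T + 5 * np.eye(5)            # symmetric positive definite test matrix

def cos_turning_angle(x):
    Ax = A @ x
    return (x @ Ax) / (np.linalg.norm(Ax) * np.linalg.norm(x))

# Minimize the cosine (i.e. maximize the turning angle) over several random starts.
mu1_numeric = min(minimize(cos_turning_angle, rng.standard_normal(5)).fun
                  for _ in range(20))

lam = np.linalg.eigvalsh(A)
mu1_formula = 2 * np.sqrt(lam[0] * lam[-1]) / (lam[0] + lam[-1])  # assumed closed form

print("numerical first antieigenvalue :", round(mu1_numeric, 6))
print("closed-form value              :", round(mu1_formula, 6))
print("angle of the operator (degrees):", round(float(np.degrees(np.arccos(mu1_numeric))), 3))
```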
Antieigenvalue theory
Mathematics
166
69,383,671
https://en.wikipedia.org/wiki/Global%20catastrophe%20scenarios
Scenarios in which a global catastrophic risk creates harm have been widely discussed. Some sources of catastrophic risk are anthropogenic (caused by humans), such as global warming, environmental degradation, and nuclear war. Others are non-anthropogenic or natural, such as meteor impacts or supervolcanoes. The impact of these scenarios can vary widely, depending on the cause and the severity of the event, ranging from temporary economic disruption to human extinction. Many societal collapses have already happened throughout human history. Anthropogenic Experts at the Future of Humanity Institute at the University of Oxford and the Centre for the Study of Existential Risk at the University of Cambridge prioritize anthropogenic over natural risks due to their much greater estimated likelihood. They are especially concerned by, and consequently focus on, risks posed by advanced technology, such as artificial intelligence and biotechnology. Artificial intelligence The creators of a superintelligent entity could inadvertently give it goals that lead it to annihilate the human race. It has been suggested that if AI systems rapidly become super-intelligent, they may take unforeseen actions or out-compete humanity. According to philosopher Nick Bostrom, it is possible that the first super-intelligence to emerge would be able to bring about almost any possible outcome it valued, as well as to foil virtually any attempt to prevent it from achieving its objectives. Thus, even a super-intelligence indifferent to humanity could be dangerous if it perceived humans as an obstacle to unrelated goals. In Bostrom's book Superintelligence, he defines this as the control problem. Physicist Stephen Hawking, Microsoft founder Bill Gates, and SpaceX founder Elon Musk have echoed these concerns, with Hawking theorizing that such an AI could "spell the end of the human race". In 2009, the Association for the Advancement of Artificial Intelligence (AAAI) hosted a conference to discuss whether computers and robots might be able to acquire any sort of autonomy, and how much these abilities might pose a threat or hazard. They noted that some robots have acquired various forms of semi-autonomy, including being able to find power sources on their own and being able to independently choose targets to attack with weapons. They also noted that some computer viruses can evade elimination and have achieved "cockroach intelligence". They noted that self-awareness, as depicted in science-fiction, is probably unlikely, but there are other potential hazards and pitfalls. Various media sources and scientific groups have noted separate trends in differing areas which might together result in greater robotic functionalities and autonomy, and which pose some inherent concerns. A survey of AI experts estimated that the chance of human-level machine learning having an "extremely bad (e.g., human extinction)" long-term effect on humanity is 5%. A 2008 survey by the Future of Humanity Institute estimated a 5% probability of extinction by super-intelligence by 2100. Eliezer Yudkowsky believes risks from artificial intelligence are harder to predict than any other known risks due to bias from anthropomorphism. Since people base their judgments of artificial intelligence on their own experience, he claims they underestimate the potential power of AI. Biotechnology Biotechnology can pose a global catastrophic risk in the form of bioengineered organisms (viruses, bacteria, fungi, plants, or animals). 
In many cases the organism will be a pathogen of humans, livestock, crops, or other organisms we depend upon (e.g. pollinators or gut bacteria). However, any organism able to catastrophically disrupt ecosystem functions, e.g. highly competitive weeds outcompeting essential crops, poses a biotechnology risk. A biotechnology catastrophe may be caused by accidentally releasing a genetically engineered organism from controlled environments, by the planned release of such an organism which then turns out to have unforeseen and catastrophic interactions with essential natural or agro-ecosystems, or by intentional usage of biological agents in biological warfare or bioterrorism attacks. Pathogens may be intentionally or unintentionally genetically modified to change virulence and other characteristics. For example, a group of Australian researchers unintentionally changed characteristics of the mousepox virus while trying to develop a virus to sterilize rodents. The modified virus became highly lethal even in vaccinated and naturally resistant mice. The technological means to genetically modify virus characteristics are likely to become more widely available in the future if not properly regulated. In December 2024, a broad coalition of scientists warned that mirror life, organisms that use the mirror images of naturally occurring chiral biomolecules, should not be created, because if it escaped into the environment it would evade predation by natural organisms and compete against them for non-chiral nutrients. Biological weapons, whether used in war or terrorism, could result in human extinction. Terrorist applications of biotechnology have historically been infrequent. To what extent this is due to a lack of capabilities or motivation is not resolved. However, given current development, more risk from novel, engineered pathogens is to be expected in the future. Exponential growth has been observed in the biotechnology sector, and Noun and Chyba predict that this will lead to major increases in biotechnological capabilities in the coming decades. They argue that risks from biological warfare and bioterrorism are distinct from nuclear and chemical threats because biological pathogens are easier to mass-produce and their production is hard to control (especially as the technological capabilities are becoming available even to individual users). In 2008, a survey by the Future of Humanity Institute estimated a 2% probability of extinction from engineered pandemics by 2100. Noun and Chyba propose three categories of measures to reduce risks from biotechnology and natural pandemics: regulation or prevention of potentially dangerous research, improved recognition of outbreaks, and developing facilities to mitigate disease outbreaks (e.g. better and/or more widely distributed vaccines). Chemical weapons By contrast with nuclear and biological weapons, chemical warfare, while able to create multiple local catastrophes, is unlikely to create a global one. Choice to have fewer children The world population may decline through a preference for fewer children. If developing world demographics are assumed to become developed world demographics, and if the latter are extrapolated, some projections suggest an extinction before the year 3000. John A. Leslie estimates that if the reproduction rate drops to the German or Japanese level the extinction date will be 2400. However, some models suggest the demographic transition may reverse itself due to evolutionary biology.
Climate change Human-caused climate change has been driven by technology since the 19th century or earlier. Projections of future climate change suggest further global warming, sea level rise, and an increase in the frequency and severity of some extreme weather events and weather-related disasters. Effects of global warming include loss of biodiversity, stresses to existing food-producing systems, increased spread of known infectious diseases such as malaria, and rapid mutation of microorganisms. A common belief is that the current climate crisis could spiral into human extinction. In November 2017, a statement by 15,364 scientists from 184 countries indicated that increasing levels of greenhouse gases from use of fossil fuels, human population growth, deforestation, and overuse of land for agricultural production, particularly by farming ruminants for meat consumption, are trending in ways that forecast an increase in human misery over coming decades. Carl Sagan and others have raised the prospect of extreme runaway global warming turning Earth into an uninhabitable Venus-like planet. Some scholars argue that much of the world would become uninhabitable under severe global warming, but even these scholars do not tend to argue that it would lead to complete human extinction, according to Kelsey Piper of Vox. All the IPCC scenarios, including the most pessimistic ones, predict temperatures compatible with human survival. The question of human extinction under "unlikely" outlier models is not generally addressed by the scientific literature. Factcheck.org judges that climate change fails to pose an established "existential risk", stating: "Scientists agree climate change does pose a threat to humans and ecosystems, but they do not envision that climate change will obliterate all people from the planet." Cyberattack Cyberattacks have the potential to destroy everything from personal data to electric grids. Christine Peterson, co-founder and past president of the Foresight Institute, believes a cyberattack on electric grids has the potential to be a catastrophic risk. She notes that little has been done to mitigate such risks, and that mitigation could take several decades of readjustment. Environmental disaster An environmental or ecological disaster, such as world crop failure and collapse of ecosystem services, could be induced by the present trends of overpopulation, economic development, and non-sustainable agriculture. Most environmental scenarios involve one or more of the following: Holocene extinction event, scarcity of water that could lead to approximately half the Earth's population being without safe drinking water, pollinator decline, overfishing, massive deforestation, desertification, climate change, or massive water pollution episodes. Detected in the early 21st century, a threat in this direction is colony collapse disorder, a phenomenon that might foreshadow the imminent extinction of the Western honeybee.
As the bee plays a vital role in pollination, its extinction would severely disrupt the food chain. An October 2017 report published in The Lancet stated that toxic air, water, soils, and workplaces were collectively responsible for nine million deaths worldwide in 2015, particularly from air pollution which was linked to deaths by increasing susceptibility to non-infectious diseases, such as heart disease, stroke, and lung cancer. The report warned that the pollution crisis was exceeding "the envelope on the amount of pollution the Earth can carry" and "threatens the continuing survival of human societies". A May 2020 analysis published in Scientific Reports found that if deforestation and resource consumption continue at current rates they could culminate in a "catastrophic collapse in human population" and possibly "an irreversible collapse of our civilization" within the next several decades. The study says humanity should pass from a civilization dominated by the economy to a "cultural society" that "privileges the interest of the ecosystem above the individual interest of its components, but eventually in accordance with the overall communal interest." The authors also note that "while violent events, such as global war or natural catastrophic events, are of immediate concern to everyone, a relatively slow consumption of the planetary resources may be not perceived as strongly as a mortal danger for the human civilization." Evolution Some scenarios envision that humans could use genetic engineering or technological modifications to split into normal humans and a new species – posthumans. Such a species could be fundamentally different from any previous life form on Earth, e.g. by merging humans with technological systems. Such scenarios assess the risk that the "old" human species will be outcompeted and driven to extinction by the new, posthuman entity. Experimental accident Nick Bostrom suggested that in the pursuit of knowledge, humanity might inadvertently create a device that could destroy Earth and the Solar System. Investigations in nuclear and high-energy physics could create unusual conditions with catastrophic consequences. All of these worries have so far proven unfounded. For example, scientists worried that the first nuclear test might ignite the atmosphere. Early in the development of thermonuclear weapons there were some concerns that a fusion reaction could "ignite" the atmosphere in a chain reaction that would engulf Earth. Calculations showed the energy would dissipate far too quickly to sustain a reaction. Others worried that the RHIC or the Large Hadron Collider might start a chain-reaction global disaster involving black holes, strangelets, or false vacuum states. It has been pointed out that much more energetic collisions take place currently in Earth's atmosphere. Though these particular concerns have been challenged, the general concern about new experiments remains. Mineral resource exhaustion Romanian American economist Nicholas Georgescu-Roegen, a progenitor in economics and the paradigm founder of ecological economics, has argued that the carrying capacity of Earth—that is, Earth's capacity to sustain human populations and consumption levels—is bound to decrease sometime in the future as Earth's finite stock of mineral resources is presently being extracted and put to use; and consequently, that the world economy as a whole is heading towards an inevitable future collapse, leading to the demise of human civilization itself. 
Ecological economist and steady-state theorist Herman Daly, a student of Georgescu-Roegen, has propounded the same argument by asserting that "all we can do is to avoid wasting the limited capacity of creation to support present and future life [on Earth]." Ever since Georgescu-Roegen and Daly published these views, various scholars in the field have been discussing the existential impossibility of allocating Earth's finite stock of mineral resources evenly among an unknown number of present and future generations. This number of generations is likely to remain unknown to us, as there is no way—or only little way—of knowing in advance if or when mankind will ultimately face extinction. In effect, any conceivable intertemporal allocation of the stock will inevitably end up with universal economic decline at some future point. Nanotechnology Many nanoscale technologies are in development or currently in use. The only one that appears to pose a significant global catastrophic risk is molecular manufacturing, a technique that would make it possible to build complex structures at atomic precision. Molecular manufacturing requires significant advances in nanotechnology, but once achieved could produce highly advanced products at low costs and in large quantities in nanofactories of desktop proportions. When nanofactories gain the ability to produce other nanofactories, production may only be limited by relatively abundant factors such as input materials, energy and software. Molecular manufacturing could be used to cheaply produce, among many other products, highly advanced, durable weapons. Being equipped with compact computers and motors these could be increasingly autonomous and have a large range of capabilities. Chris Phoenix and Treder classify catastrophic risks posed by nanotechnology into three categories: From augmenting the development of other technologies such as AI and biotechnology. By enabling mass-production of potentially dangerous products that cause risk dynamics (such as arms races) depending on how they are used. From uncontrolled self-perpetuating processes with destructive effects. Several researchers say the bulk of risk from nanotechnology comes from the potential to lead to war, arms races, and destructive global government. Several reasons have been suggested why the availability of nanotech weaponry may with significant likelihood lead to unstable arms races (compared to e.g. nuclear arms races): A large number of players may be tempted to enter the race since the threshold for doing so is low; The ability to make weapons with molecular manufacturing will be cheap and easy to hide; Therefore, lack of insight into the other parties' capabilities can tempt players to arm out of caution or to launch preemptive strikes; Molecular manufacturing may reduce dependency on international trade, a potential peace-promoting factor; Wars of aggression may pose a smaller economic threat to the aggressor since manufacturing is cheap and humans may not be needed on the battlefield. Since self-regulation by all state and non-state actors seems hard to achieve, measures to mitigate war-related risks have mainly been proposed in the area of international cooperation. International infrastructure may be expanded giving more sovereignty to the international level. This could help coordinate efforts for arms control. 
International institutions dedicated specifically to nanotechnology (perhaps analogous to the International Atomic Energy Agency, IAEA) or general arms control may also be designed. One may also jointly make differential technological progress on defensive technologies, a policy that players should usually favour. The Center for Responsible Nanotechnology also suggests some technical restrictions. Improved transparency regarding technological capabilities may be another important facilitator for arms control. Gray goo is another catastrophic scenario, which was proposed by Eric Drexler in his 1986 book Engines of Creation and has been a theme in mainstream media and fiction. This scenario involves tiny self-replicating robots that consume the entire biosphere (ecophagy) using it as a source of energy and building blocks. Nowadays, however, nanotech experts—including Drexler—discredit the scenario. According to Phoenix, a "so-called grey goo could only be the product of a deliberate and difficult engineering process, not an accident". Nuclear war Some fear a hypothetical World War III could cause the annihilation of humankind. Nuclear war could yield unprecedented human death tolls and habitat destruction. Detonating large numbers of nuclear weapons would have immediate, short-term and long-term effects on the climate, potentially causing cold weather known as a "nuclear winter" with reduced sunlight and photosynthesis that may generate significant upheaval in advanced civilizations. However, while popular perception sometimes takes nuclear war as "the end of the world", experts assign low probability to human extinction from nuclear war. In 1982, Brian Martin estimated that a US–Soviet nuclear exchange might kill 400–450 million directly, mostly in the United States, Europe and Russia, and maybe several hundred million more through follow-up consequences in those same areas. In 2008, a survey by the Future of Humanity Institute estimated a 4% probability of extinction from warfare by 2100, with a 1% chance of extinction from nuclear warfare. The scenarios that have been explored most frequently are nuclear warfare and doomsday devices. Mistakenly launching a nuclear attack in response to a false alarm is one possible scenario; this nearly happened during the 1983 Soviet nuclear false alarm incident. Although the probability of a nuclear war per year is slim, Professor Martin Hellman has described it as inevitable in the long run; unless the probability approaches zero, inevitably there will come a day when civilization's luck runs out. During the Cuban Missile Crisis, U.S. president John F. Kennedy estimated the odds of nuclear war at "somewhere between one out of three and even". The United States and Russia have a combined arsenal of 14,700 nuclear weapons, and there is an estimated total of 15,700 nuclear weapons in existence worldwide. World population and agricultural crisis The Global Footprint Network estimates that current activity uses resources twice as fast as they can be naturally replenished, and that growing human population and increased consumption pose the risk of resource depletion and a concomitant population crash. Evidence suggests birth rates may be rising in the 21st century in the developed world. Projections vary; researcher Hans Rosling has projected population growth to start to plateau around 11 billion, and then to slowly grow or possibly even shrink thereafter. 
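Hellman's observation in the nuclear-war passage above, that any fixed non-zero annual probability accumulates toward near-certainty over a long enough horizon, can be made concrete with a short calculation. This is a minimal sketch; the 1% annual figure is purely illustrative and is not a sourced estimate.

# Cumulative probability of at least one event over n years, assuming a
# constant, independent annual probability p (an illustrative model only).
def cumulative_risk(p_per_year: float, years: int) -> float:
    return 1.0 - (1.0 - p_per_year) ** years

for horizon in (10, 50, 100, 500):
    print(horizon, round(cumulative_risk(0.01, horizon), 2))
# With p = 0.01 the totals come out to roughly 0.10, 0.39, 0.63 and 0.99, which is
# the sense in which "civilization's luck runs out" unless the probability approaches zero.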
A 2014 study published in Science asserts that the human population will grow to around 11 billion by 2100 and that growth will continue into the next century. The 20th century saw a rapid increase in human population due to medical developments and massive increases in agricultural productivity such as the Green Revolution. Between 1950 and 1984, as the Green Revolution transformed agriculture around the globe, world grain production increased by 250%. The Green Revolution in agriculture helped food production to keep pace with worldwide population growth or actually enabled population growth. The energy for the Green Revolution was provided by fossil fuels in the form of fertilizers (natural gas), pesticides (oil), and hydrocarbon-fueled irrigation. David Pimentel, professor of ecology and agriculture at Cornell University, and Mario Giampietro, senior researcher at the National Research Institute on Food and Nutrition (INRAN), place in their 1994 study Food, Land, Population and the U.S. Economy the maximum U.S. population for a sustainable economy at 200 million. To achieve a sustainable economy and avert disaster, the United States must reduce its population by at least one-third, and world population will have to be reduced by two-thirds, says the study. The authors of this study believe the mentioned agricultural crisis will begin to have an effect on the world after 2020 and will become critical after 2050. Geologist Dale Allen Pfeiffer claims that coming decades could see spiraling food prices without relief and massive starvation on a global level such as never experienced before. Since supplies of petroleum and natural gas are essential to modern agriculture techniques, a fall in global oil supplies (see peak oil for global concerns) could cause spiking food prices and unprecedented famine in the coming decades. Wheat is humanity's third-most-produced cereal. Extant fungal infections such as Ug99 (a kind of stem rust) can cause 100% crop losses in most modern varieties. Little or no treatment is possible and the infection spreads on the wind. Should the world's large grain-producing areas become infected, the ensuing crisis in wheat availability would lead to price spikes and shortages in other food products. Human activity has triggered an extinction event often referred to as the sixth "mass extinction", which scientists consider a major threat to the continued existence of human civilization. The 2019 Global Assessment Report on Biodiversity and Ecosystem Services, published by the United Nations' Intergovernmental Science-Policy Platform on Biodiversity and Ecosystem Services, asserts that roughly one million species of plants and animals face extinction from human impacts such as expanding land use for industrial agriculture and livestock rearing, along with overfishing. A 1997 assessment states that over a third of Earth's land has been modified by humans, that atmospheric carbon dioxide has increased around 30 percent, that humans are the dominant source of nitrogen fixation, that humans control most of the Earth's accessible surface fresh water, and that species extinction rates may be over a hundred times faster than normal. Ecological destruction which impacts food production could produce a human population crash. Non-anthropogenic Of all species that have ever lived, 99% have gone extinct. Earth has experienced numerous mass extinction events, in which up to 96% of all species present at the time were eliminated. 
A notable example is the K-T extinction event, which killed the dinosaurs. The types of threats posed by nature have been argued to be relatively constant, though this has been disputed. A number of other astronomical threats have also been identified. Asteroid impact An impact event involving a near-Earth object (NEO) could result in localized or widespread destruction, including widespread extinction and possibly human extinction. Several asteroids have collided with Earth in recent geological history. The Chicxulub asteroid, for example, was about ten kilometers (six miles) in diameter and is theorized to have caused the extinction of non-avian dinosaurs at the end of the Cretaceous. No sufficiently large asteroid currently exists in an Earth-crossing orbit; however, a comet of sufficient size to cause human extinction could impact the Earth, though the annual probability may be less than 10⁻⁸. Geoscientist Brian Toon estimates that while a few people, such as "some fishermen in Costa Rica", could plausibly survive a ten-kilometer (six-mile) meteorite, a hundred-kilometer (sixty-mile) meteorite would be large enough to "incinerate everybody". Asteroids with around a 1 km diameter have impacted the Earth on average once every 500,000 years; these are probably too small to pose an extinction risk, but might kill billions of people. Larger asteroids are less common. Small near-Earth asteroids are regularly observed and can impact anywhere on the Earth, injuring local populations. As of 2013, Spaceguard estimates it has identified 95% of all NEOs over 1 km in size. None of the large "dinosaur-killer" asteroids known to Spaceguard pose a near-term threat of collision with Earth. In April 2018, the B612 Foundation reported "It's 100 per cent certain we'll be hit [by a devastating asteroid], but we're not 100 per cent sure when." Also in 2018, physicist Stephen Hawking, in his final book Brief Answers to the Big Questions, considered an asteroid collision to be the biggest threat to the planet. In June 2018, the US National Science and Technology Council warned that America is unprepared for an asteroid impact event, and has developed and released the "National Near-Earth Object Preparedness Strategy Action Plan" to better prepare. According to expert testimony in the United States Congress in 2013, NASA would require at least five years of preparation before a mission to intercept an asteroid could be launched. Planetary or interstellar collision In April 2008, it was announced that two simulations of long-term planetary movement, one at the Paris Observatory and the other at the University of California, Santa Cruz, indicate a 1% chance that Mercury's orbit could be made unstable by Jupiter's gravitational pull sometime during the lifespan of the Sun. Were this to happen, the simulations suggest a collision with Earth could be one of four possible outcomes (the others being Mercury colliding with the Sun, colliding with Venus, or being ejected from the Solar System altogether). Collision with or a near miss by a large object from outside the Solar System could also be catastrophic to life on Earth. Interstellar objects, including asteroids, comets, and rogue planets, are difficult to detect with current technology until they enter the Solar System, and could potentially do so at high speed. 
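The recurrence figures in the asteroid-impact passage above can be converted into rough probabilities per time window. This is an illustrative Poisson-style estimate using only the 500,000-year mean interval quoted above, not a sourced risk assessment.

# Approximate probability of at least one impact in a window, treating impacts
# as a Poisson process with the quoted mean recurrence interval (illustrative).
import math

def probability_in_window(mean_interval_years: float, window_years: float) -> float:
    rate = 1.0 / mean_interval_years
    return 1.0 - math.exp(-rate * window_years)

print(probability_in_window(500_000, 1))    # ~2e-06 per year for ~1 km impactors
print(probability_in_window(500_000, 100))  # ~2e-04 over a century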
If Mercury or a rogue planet of similar size were to collide with Earth, all life on Earth could be obliterated entirely: an asteroid 15 km wide is believed to have caused the extinction of the non-avian dinosaurs, whereas Mercury is 4,879 km in diameter. The destabilization of Mercury's orbit is unlikely in the foreseeable future. A close pass by a large object could cause massive tidal forces that could trigger anything from minor earthquakes to liquefaction of the Earth's crust to Earth being torn apart, becoming a disrupted planet. Stars and black holes are easier to detect from a longer distance, but are much more difficult to deflect. Passage through the Solar System could result in the destruction of the Earth or the Sun by direct consumption. Astronomers expect the collision of the Milky Way Galaxy with the Andromeda Galaxy in about four billion years, but due to the large amount of empty space between them, most stars are not expected to collide directly. The passage of another star system into or close to the outer reaches of the Solar System could trigger a swarm of asteroid impacts as the orbits of objects in the Oort Cloud are disturbed, or objects orbiting the two stars collide. It also increases the risk of catastrophic irradiation of the Earth. Astronomers have identified fourteen stars with a 90% chance of coming within 3.26 light years of the Sun in the next few million years, and four within 1.6 light years, including HIP 85605 and Gliese 710. Observational data on nearby stars was too incomplete for a full catalog of near misses, but more data is being collected by the Gaia spacecraft. Physics hazards Strangelets, if they exist, might naturally be produced by strange stars, and in the case of a collision, might escape and hit the Earth. Likewise, a false vacuum collapse could be triggered elsewhere in the universe. Gamma-ray burst Another interstellar threat is a gamma-ray burst, typically produced by a supernova when a star collapses inward on itself and then "bounces" outward in a massive explosion. Under certain circumstances, these events are thought to produce massive bursts of gamma radiation emanating outward from the axis of rotation of the star. If such an event were to occur oriented towards the Earth, the massive amounts of gamma radiation could significantly affect the Earth's atmosphere and pose an existential threat to all life. Such a gamma-ray burst may have been the cause of the Ordovician–Silurian extinction events. This scenario is unlikely in the foreseeable future. Astroengineering projects proposed to mitigate the risk of gamma-ray bursts include shielding the Earth with ionised smartdust and star lifting of nearby high mass stars likely to explode in a supernova. A gamma-ray burst would be able to vaporize anything in its beams out to around 200 light-years. The Sun A powerful solar flare, solar superstorm or a solar micronova, which is a drastic and unusual decrease or increase in the Sun's power output, could have severe consequences for life on Earth. The Earth will naturally become uninhabitable due to the Sun's stellar evolution, within about a billion years. In around 1 billion years from now, the Sun's brightness may increase as a result of a shortage of hydrogen, and the heating of its outer layers may cause the Earth's oceans to evaporate, leaving only minor forms of life. Well before this time, the level of carbon dioxide in the atmosphere will be too low to support plant life, destroying the foundation of the food chains. 
See Future of the Earth. About 7–8 billion years from now, if and after the Sun has become a red giant, the Earth will probably be engulfed by an expanding Sun and destroyed. Uninhabitable universe The ultimate fate of the universe is uncertain, but is likely to eventually become uninhabitable, either suddenly or gradually. If it does not collapse into the Big Crunch, over very long time scales the heat death of the universe may render life impossible. The expansion of spacetime could cause the destruction of all matter in a Big Rip scenario. If our universe lies within a false vacuum, a bubble of lower-energy vacuum could come to exist by chance or otherwise in our universe, and catalyze the conversion of our universe to a lower energy state in a volume expanding at nearly the speed of light, destroying all that is known without forewarning. Such an occurrence is called vacuum decay, or the "Big Slurp". Extraterrestrial invasion Intelligent extraterrestrial life, if it exists, could invade Earth, either to exterminate and supplant human life, enslave it under a colonial system, exploit the planet's resources, or destroy it altogether. Although the existence of sentient alien life has never been conclusively proven, scientists such as Carl Sagan have posited it to be very likely. Scientists consider such a scenario technically possible, but unlikely. An article in The New York Times Magazine discussed the possible threats for humanity of intentionally sending messages aimed at extraterrestrial life into the cosmos in the context of the SETI efforts. Several public figures such as Stephen Hawking and Elon Musk have argued against sending such messages, on the grounds that extraterrestrial civilizations with technology are probably far more advanced than, and could therefore pose an existential threat to, humanity. Invasion by microscopic life is also a possibility. In 1969, the "Extra-Terrestrial Exposure Law" was added to the United States Code of Federal Regulations (Title 14, Section 1211) in response to the possibility of biological contamination resulting from the U.S. Apollo Space Program. It was removed in 1991. Natural pandemic A pandemic involving one or more viruses, prions, or antibiotic-resistant bacteria. Epidemic diseases that have killed millions of people include smallpox, bubonic plague, influenza, HIV/AIDS, COVID-19, cocoliztli, typhus, and cholera. Endemic tuberculosis and malaria kill over a million people each year. Sudden introduction of various European viruses decimated indigenous American populations. A deadly pandemic restricted to humans alone would be self-limiting as its mortality would reduce the density of its target population. A pathogen with a broad host range in multiple species, however, could eventually reach even isolated human populations. U.S. officials assess that an engineered pathogen capable of "wiping out all of humanity", if left unchecked, is technically feasible and that the technical obstacles are "trivial". However, they are confident that in practice, countries would be able to "recognize and intervene effectively" to halt the spread of such a microbe and prevent human extinction. There are numerous historical examples of pandemics that have had a devastating effect on a large number of people. 
The present, unprecedented scale and speed of human movement make it more difficult than ever to contain an epidemic through local quarantines, and other sources of uncertainty and the evolving nature of the risk mean natural pandemics may pose a realistic threat to human civilization. There are several classes of argument about the likelihood of pandemics. One stems from history, where the limited size of historical pandemics is evidence that larger pandemics are unlikely. This argument has been disputed on grounds including the changing risk due to changing population and behavioral patterns among humans, the limited historical record, and the existence of an anthropic bias. Another argument is based on an evolutionary model that predicts that naturally evolving pathogens will ultimately develop an upper limit to their virulence. This is because pathogens with high enough virulence quickly kill their hosts and reduce their chances of spreading the infection to new hosts or carriers. This model has limits, however, because the fitness advantage of limited virulence is primarily a function of a limited number of hosts. Any pathogen with a high virulence, high transmission rate and long incubation time may have already caused a catastrophic pandemic before ultimately virulence is limited through natural selection. Additionally, a pathogen that infects humans as a secondary host and primarily infects another species (a zoonosis) has no constraints on its virulence in people, since the accidental secondary infections do not affect its evolution. Lastly, in models where virulence level and rate of transmission are related, high levels of virulence can evolve. Virulence is instead limited by the existence of complex populations of hosts with different susceptibilities to infection, or by some hosts being geographically isolated. The size of the host population and competition between different strains of pathogens can also alter virulence. Neither of these arguments is applicable to bioengineered pathogens, and this poses entirely different risks of pandemics. Experts have concluded that "Developments in science and technology could significantly ease the development and use of high consequence biological weapons", and these "highly virulent and highly transmissible [bio-engineered pathogens] represent new potential pandemic threats". Natural climate change Climate change refers to a lasting change in the Earth's climate. The climate has ranged from ice ages to warmer periods when palm trees grew in Antarctica. It has been hypothesized that there was also a period called "snowball Earth" when all the oceans were covered in a layer of ice. These global climatic changes occurred slowly, near the end of the last Major Ice Age when the climate became more stable. However, abrupt climate change on the decade time scale has occurred regionally. A natural variation into a new climate regime (colder or hotter) could pose a threat to civilization. In the history of the Earth, many Ice Ages are known to have occurred. An ice age would have a serious impact on civilization because vast areas of land (mainly in North America, Europe, and Asia) could become uninhabitable. Currently, the world is in an Interglacial period within a much older glacial event. The last glacial expansion ended about 10,000 years ago, and all civilizations evolved later than this. Scientists do not predict that a natural ice age will occur anytime soon. 
The amount of heat-trapping gases emitted into Earth's oceans and atmosphere will prevent the next ice age, which otherwise would begin in around 50,000 years, and likely more glacial cycles. On a long time scale, natural shifts such as Milankovitch cycles (hypothesized quaternary climatic oscillations) could create unknown climate variability and change. Volcanism A geological event such as massive flood basalt, volcanism, or the eruption of a supervolcano could lead to a so-called volcanic winter, similar to a nuclear winter. Human extinction is a possibility. One such event, the Toba eruption, occurred in Indonesia about 71,500 years ago. According to the Toba catastrophe theory, the event may have reduced human populations to only a few tens of thousands of individuals. Yellowstone Caldera is another such supervolcano, having undergone 142 or more caldera-forming eruptions in the past 17 million years. A massive volcano eruption would eject extraordinary volumes of volcanic dust, toxic and greenhouse gases into the atmosphere with serious effects on global climate (towards extreme global cooling: volcanic winter if short-term, and ice age if long-term) or global warming (if greenhouse gases were to prevail). When the supervolcano at Yellowstone last erupted 640,000 years ago, the thinnest layers of the ash ejected from the caldera spread over most of the United States west of the Mississippi River and part of northeastern Mexico. The magma covered much of what is now Yellowstone National Park and extended beyond, covering much of the ground from Yellowstone River in the east to Idaho falls in the west, with some of the flows extending north beyond Mammoth Springs. According to a recent study, if the Yellowstone caldera erupted again as a supervolcano, an ash layer one to three millimeters thick could be deposited as far away as New York, enough to "reduce traction on roads and runways, short out electrical transformers and cause respiratory problems". There would be centimeters of thickness over much of the U.S. Midwest, enough to disrupt crops and livestock, especially if it happened at a critical time in the growing season. The worst-affected city would likely be Billings, Montana, population 109,000, which the model predicted would be covered with ash estimated as 1.03 to 1.8 meters thick. The main long-term effect is through global climate change, which reduces the temperature globally by about 5–15 °C for a decade, together with the direct effects of the deposits of ash on their crops. A large supervolcano like Toba would deposit one or two meters thickness of ash over an area of several million square kilometers. (1000 cubic kilometers is equivalent to a one-meter thickness of ash spread over a million square kilometers). If that happened in some densely populated agricultural area, such as India, it could destroy one or two seasons of crops for two billion people. However, Yellowstone shows no signs of a supereruption at present, and it is not certain that a future supereruption will occur. Research published in 2011 finds evidence that massive volcanic eruptions caused massive coal combustion, supporting models for the significant generation of greenhouse gases. Researchers have suggested that massive volcanic eruptions through coal beds in Siberia would generate significant greenhouse gases and cause a runaway greenhouse effect. 
Massive eruptions can also throw enough pyroclastic debris and other material into the atmosphere to partially block out the sun and cause a volcanic winter, as happened on a smaller scale in 1816 following the eruption of Mount Tambora, the so-called Year Without a Summer. Such an eruption might cause the immediate deaths of millions of people several hundred kilometers (or miles) from the eruption, and perhaps billions of deaths worldwide, due to the failure of the monsoons, resulting in major crop failures causing starvation on a profound scale. A much more speculative concept is the verneshot: a hypothetical volcanic eruption caused by the buildup of gas deep underneath a craton. Such an event may be forceful enough to launch an extreme amount of material from the crust and mantle into a sub-orbital trajectory. See also Great Filter Notes References Works cited. Existential risk Man-made disasters International responses to disasters Doomsday scenarios Apocalyptic fiction
Global catastrophe scenarios
Biology
8,186
57,821
https://en.wikipedia.org/wiki/Wireless%20Markup%20Language
Wireless Markup Language (WML), based on XML, is an obsolete markup language intended for devices that implement the Wireless Application Protocol (WAP) specification, such as mobile phones. It provides navigational support, data input, hyperlinks, text and image presentation, and forms, much like HTML (Hypertext Markup Language). It preceded the use of other markup languages used with WAP, such as XHTML and HTML itself, which achieved dominance as processing power in mobile devices increased. WML history Building on Openwave's HDML, Nokia's "Tagged Text Markup Language" (TTML) and Ericsson's proprietary markup language for mobile content, the WAP Forum created the WML 1.1 standard in 1998. WML 2.0 was specified in 2001, but has not been widely adopted. It was an attempt at bridging WML and XHTML Basic before the WAP 2.0 spec was finalized. In the end, XHTML Mobile Profile became the markup language used in WAP 2.0. The newest WML version in active use is 1.3. The first company to launch a public WML site was the Dutch mobile phone network operator Telfort, in October 1999; Telfort was also the first company in the world to launch the Nokia 7110. The Telfort WML site was created and developed as a side project to test the device's capabilities by a billing engineer called Christopher Bee and National Deployment Manager Euan McLeod. The site consisted of four pages in both Dutch and English; the Dutch pages contained many grammatical errors, as neither developer was a native Dutch speaker and the two were unaware that the WML site was configured as the home page on the Nokia 7110. WML markup WML documents are XML documents that validate against the WML DTD (Document Type Definition). The W3C Markup Validation service (http://validator.w3.org/) can be used to validate WML documents (they are validated against their declared document type). For example, the following WML page could be saved as "example.wml": 
<?xml version="1.0"?>
<!DOCTYPE wml PUBLIC "-//WAPFORUM//DTD WML 1.1//EN" "http://www.wapforum.org/DTD/wml_1.1.xml">
<wml>
 <head>
  <meta http-equiv="Content-Type" content="text/vnd.wap.wml; charset=UTF-8" />
 </head>
 <card id="main" title="First Card">
  <p mode="wrap">This is a sample WML page.</p>
 </card>
</wml>
A WML document is known as a "deck". Data in the deck is structured into one or more "cards" (pages), each of which represents a single interaction with the user. WML decks are stored on an ordinary web server configured to serve the text/vnd.wap.wml MIME type in addition to plain HTML and variants. The WML cards, when requested by a device, are accessed by a bridge (WAP gateway), which sits between mobile devices and the World Wide Web, passing pages from one to the other much like a proxy. The gateways send the WML pages on in a form suitable for mobile device reception (WAP Binary XML). This process is hidden from the phone, so it may access the page in the same way as a browser accesses HTML, using a URL (for example, http://example.com/foo.wml), provided the mobile phone operator has not specifically locked the phone to prevent access of user-specified URLs. WML has a scaled-down set of procedural elements, which can be used by the author to control navigation to other cards. Mobile devices are moving towards allowing more XHTML and even standard HTML as processing power in handsets increases. These standards are concerned with formatting and presentation. They do not, however, address cell-phone or mobile device hardware interfacing in the same way as WML. 
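As a concrete illustration of the serving requirement described above, the sketch below configures Python's built-in HTTP server to return .wml decks with the text/vnd.wap.wml MIME type. It is an illustrative example rather than part of the WAP specification: the handler name and port are arbitrary choices, and a production deployment would normally sit behind a WAP gateway as described above.

# Minimal sketch: serve files from the current directory, mapping the .wml
# extension to the MIME type WML decks are expected to use.
import http.server
import socketserver

class WmlHandler(http.server.SimpleHTTPRequestHandler):
    # Copy the default extension map and register .wml explicitly.
    extensions_map = dict(http.server.SimpleHTTPRequestHandler.extensions_map)
    extensions_map[".wml"] = "text/vnd.wap.wml"

if __name__ == "__main__":
    with socketserver.TCPServer(("", 8080), WmlHandler) as httpd:
        # A deck saved as example.wml is then reachable at http://localhost:8080/example.wml
        httpd.serve_forever()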
WML capability in desktop browsers The Presto layout engine (used by Opera before its switch to Blink) understood WML natively. Mozilla-based browsers (Firefox (before version 57), SeaMonkey, MicroB) could interpret WML through the WMLBrowser add-on. Google Chrome can also interpret WML via two extensions: WML and FireMobileSimulator. Criticism See also WMLScript Wireless Application Protocol Bitmap Format Mobile browser List of document markup languages Comparison of document markup languages XHTML Mobile Profile References External links Technical Specifications at the WAP Forum XHTML-MP Authoring Practices Open Mobile Alliance Wireless Application Protocol Open Mobile Alliance standards XML markup languages
Wireless Markup Language
Technology
1,038
38,826,716
https://en.wikipedia.org/wiki/HD%2079917
HD 79917 is a single star in the southern constellation of Vela. It has the Bayer designation l (lowercase L) Velorum, while HD 79917 is the star's identifier from the Henry Draper Catalogue. The star has an orange hue and is faintly visible to the naked eye with an apparent visual magnitude of +4.92. It is located at a distance of approximately 228 light-years from the Sun based on parallax, and is drifting further away with a radial velocity of +1.6 km/s. This is an aging giant star with a stellar classification of K1III, having exhausted its core hydrogen, then cooled and expanded off the main sequence. It has 12.6 times the girth of the Sun and is radiating 67 times the Sun's luminosity from its enlarged photosphere at an effective temperature of . References K-type giants Vela (constellation) Velorum, l Durchmusterung objects 079917 045439 3682
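As a quick consistency sketch (a back-calculation from the quoted 228 light-year distance, not a sourced parallax measurement), the distance corresponds to roughly 70 parsecs, i.e. an annual parallax of about 14 milliarcseconds.

# Back-calculate the parallax implied by the quoted distance (illustrative only).
LY_PER_PARSEC = 3.2616

distance_pc = 228.0 / LY_PER_PARSEC    # ~69.9 pc
parallax_mas = 1000.0 / distance_pc    # ~14.3 milliarcseconds
print(round(distance_pc, 1), round(parallax_mas, 1))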
HD 79917
Astronomy
212
50,760,704
https://en.wikipedia.org/wiki/GEOVIA
Dassault Systèmes GEOVIA is a set of geologic modeling and mining engineering software applications developed by the French engineering software company Dassault Systèmes. Formerly known as Gemcom, the company was founded in 1985 as a spin-off from mining consultants SRK Consulting, with headquarters in Vancouver, British Columbia, Canada. References Geology software Mining engineering
GEOVIA
Engineering
73
144,929
https://en.wikipedia.org/wiki/Boron%20group
The boron group are the chemical elements in group 13 of the periodic table, consisting of boron (B), aluminium (Al), gallium (Ga), indium (In), thallium (Tl) and nihonium (Nh). This group lies in the p-block of the periodic table. The elements in the boron group are characterized by having three valence electrons. These elements have also been referred to as the triels. Several group 13 elements have biological roles in the ecosystem. Boron is a trace element in humans and is essential for some plants. Lack of boron can lead to stunted plant growth, while an excess can also cause harm by inhibiting growth. Aluminium has neither a biological role nor significant toxicity and is considered safe. Indium and gallium can stimulate metabolism; gallium is credited with the ability to bind itself to iron proteins. Thallium is highly toxic, interfering with the function of numerous vital enzymes, and has seen use as a pesticide. Characteristics Like other groups, the members of this family show patterns in electron configuration, especially in the outermost shells, resulting in trends in chemical behavior (the ground-state configurations are collected in the listing below). The boron group is notable for these trends in the electron configuration and in some of its elements' characteristics. Boron differs from the other group members in its hardness, refractivity and reluctance to participate in metallic bonding. An example of a trend in reactivity is boron's tendency to form reactive compounds with hydrogen. Although situated in the p-block, the group is notorious for violation of the octet rule by its members boron and (to a lesser extent) aluminium. All members of the group are characterized as trivalent. Chemical reactivity Hydrides Most of the elements in the boron group show increasing reactivity as the elements get heavier in atomic mass and higher in atomic number. Boron, the first element in the group, is generally unreactive with many elements except at high temperatures, although it is capable of forming many compounds with hydrogen, sometimes called boranes. The simplest borane is diborane, or B2H6. Another example is B10H14. The next group-13 elements, aluminium and gallium, form fewer stable hydrides, although both AlH3 and GaH3 exist. Indium, the next element in the group, is not known to form many hydrides, except in complex compounds such as the phosphine complex (Cy=cyclohexyl). No stable compound of thallium and hydrogen has been synthesized in any laboratory. Oxides All of the boron-group elements are known to form a trivalent oxide, with two atoms of the element bonded covalently with three atoms of oxygen. These oxides show a trend of increasing basicity down the group (from acidic to basic). Boron oxide (B2O3) is slightly acidic, aluminium and gallium oxide (Al2O3 and Ga2O3 respectively) are amphoteric, indium(III) oxide (In2O3) is nearly amphoteric, and thallium(III) oxide (Tl2O3) is a Lewis base because it dissolves in acids to form salts. Each of these compounds is stable, but thallium oxide decomposes at temperatures higher than 875 °C. Halides The elements in group 13 are also capable of forming stable compounds with the halogens, usually with the formula MX3 (where M is a boron-group element and X is a halogen). Fluorine, the first halogen, is able to form stable compounds with every element that has been tested (except neon and helium), and the boron group is no exception. 
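The electron-configuration comparison table from the source article did not survive this plain-text extraction. As a reference reconstruction, the listing below gives the standard ground-state configurations of the group-13 elements; nihonium's configuration is a prediction rather than a measurement.

# Ground-state electron configurations of the boron-group elements.
# Every member ends in ns2 np1: the three valence electrons that define the group.
GROUP_13_CONFIGURATIONS = {
    "B":  "[He] 2s2 2p1",
    "Al": "[Ne] 3s2 3p1",
    "Ga": "[Ar] 3d10 4s2 4p1",
    "In": "[Kr] 4d10 5s2 5p1",
    "Tl": "[Xe] 4f14 5d10 6s2 6p1",
    "Nh": "[Rn] 5f14 6d10 7s2 7p1 (predicted)",
}

for symbol, configuration in GROUP_13_CONFIGURATIONS.items():
    print(f"{symbol:>2}: {configuration}")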
It is even hypothesized that nihonium could form a compound with fluorine, NhF3, before spontaneously decaying due to nihonium's radioactivity. Chlorine also forms stable compounds with all of the elements in the boron group, including thallium, and is hypothesized to react with nihonium. All of the elements will react with bromine under the right conditions, as with the other halogens but less vigorously than either chlorine or fluorine. Iodine will react with all natural elements in the periodic table except for the noble gases, and is notable for its explosive reaction with aluminium to form AlI3. Astatine, the fifth halogen, has only formed a few compounds, due to its radioactivity and short half-life, and no reports of a compound with an At–Al, –Ga, –In, –Tl, or –Nh bond have been seen, although scientists think that it should form salts with metals. Tennessine, the sixth and final member of group 17, may also form compounds with the elements in the boron group; however, because Tennessine is purely synthetic and thus must be created artificially, its chemistry has not been investigated, and any compounds would likely decay nearly instantly after formation due to its extreme radioactivity. Physical properties It has been noticed that the elements in the boron group have similar physical properties, although most of boron's are exceptional. For example, all of the elements in the boron group, except for boron itself, are soft. Moreover, all of the other elements in group 13 are relatively reactive at moderate temperatures, while boron's reactivity only becomes comparable at very high temperatures. One characteristic that all do have in common is having three electrons in their valence shells. Boron, being a metalloid, is a thermal and electrical insulator at room temperature, but a good conductor of heat and electricity at high temperatures. Unlike boron, the metals in the group are good conductors under normal conditions. This is in accordance with the long-standing generalization that all metals conduct heat and electricity better than most non-metals. Oxidation states The inert s-pair effect is significant in the group-13 elements, especially the heavier ones like thallium. This results in a variety of oxidation states. In the lighter elements, the +3 state is the most stable, but the +1 state becomes more prevalent with increasing atomic number, and is the most stable for thallium. Boron is capable of forming compounds with lower oxidization states, of +1 or +2, and aluminium can do the same. Gallium can form compounds with the oxidation states +1, +2 and +3. Indium is like gallium, but its +1 compounds are more stable than those of the lighter elements. The strength of the inert-pair effect is maximal in thallium, which is generally only stable in the oxidation state of +1, although the +3 state is seen in some compounds. Stable and monomeric gallium, indium and thallium radicals with a formal oxidation state of +2 have since been reported. Nihonium may have +5 oxidation state. Periodic trends There are several trends that can be observed in the properties of the boron group members. The boiling points of these elements drop from period to period, while densities tend to rise. Nuclear With the exception of the synthetic nihonium, all of the elements of the boron group have stable isotopes. 
Because all their atomic numbers are odd, boron, gallium and thallium have only two stable isotopes, while aluminium and indium are monoisotopic, having only one, although most indium found in nature is the weakly radioactive 115In. 10B and 11B are both stable, as are 27Al, 69Ga and 71Ga, 113In, and 203Tl and 205Tl. All of these isotopes are readily found in macroscopic quantities in nature. In theory, though, all isotopes with an atomic number greater than 66 are supposed to be unstable to alpha decay. Conversely, all elements with atomic numbers less than or equal to 66 (except Tc, Pm, Sm and Eu) have at least one isotope that is theoretically energetically stable to all forms of decay (with the exception of proton decay, which has never been observed, and spontaneous fission, which is theoretically possible for elements with atomic numbers greater than 40). Like all other elements, the elements of the boron group have radioactive isotopes, either found in trace quantities in nature or produced synthetically. The longest-lived of these unstable isotopes is the indium isotope 115In, with its extremely long half-life of . This isotope makes up the vast majority of all naturally occurring indium despite its slight radioactivity. The shortest-lived is 7B, with a half-life of a mere , being the boron isotope with the fewest neutrons and a half-life long enough to measure. Some radioisotopes have important roles in scientific research; a few are used in the production of goods for commercial use or, more rarely, as a component of finished products. History The boron group has had many names over the years. According to former conventions it was Group IIIB in the European naming system and Group IIIA in the American. The group has also gained two collective names, "earth metals" and "triels". The latter name is derived from the Latin prefix tri- ("three") and refers to the three valence electrons that all of these elements, without exception, have in their valence shells. The name "triels" was first suggested by the International Union of Pure and Applied Chemistry (IUPAC) in 1970. Boron was known to the ancient Egyptians, but only in the mineral borax. The metalloid element was not known in its pure form until 1808, when Humphry Davy was able to extract it by the method of electrolysis. Davy devised an experiment in which he dissolved a boron-containing compound in water and sent an electric current through it, causing the elements of the compound to separate into their pure states. To produce larger quantities he shifted from electrolysis to reduction with sodium. Davy named the element boracium. At the same time two French chemists, Joseph Louis Gay-Lussac and Louis Jacques Thénard, used iron to reduce boric acid. The boron they produced was oxidized to boron oxide. Aluminium, like boron, was first known in minerals before it was finally extracted from alum, a common mineral in some areas of the world. Antoine Lavoisier and Humphry Davy had each separately tried to extract it. Although neither succeeded, Davy had given the metal its current name. It was only in 1825 that the Danish scientist Hans Christian Ørsted successfully prepared a rather impure form of the element. Many improvements followed, a significant advance being made just two years later by Friedrich Wöhler, whose slightly modified procedure still yielded an impure product. The first pure sample of aluminium is credited to Henri Etienne Sainte-Claire Deville, who substituted sodium for potassium in the procedure. 
At that time aluminium was considered precious, and it was displayed next to such metals as gold and silver. The method used today, electrolysis of aluminium oxide dissolved in cryolite, was developed by Charles Martin Hall and Paul Héroult in the late 1880s. Thallium, the heaviest stable element in the boron group, was discovered by William Crookes and Claude-Auguste Lamy in 1861. Unlike gallium and indium, thallium had not been predicted by Dmitri Mendeleev, having been discovered before Mendeleev invented the periodic table. As a result, no one was really looking for it until the 1850s when Crookes and Lamy were examining residues from sulfuric acid production. In the spectra they saw a completely new line, a streak of deep green, which Crookes named after the Greek word θαλλός (), referring to a green shoot or twig. Lamy was able to produce larger amounts of the new metal and determined most of its chemical and physical properties. Indium is the fourth element of the boron group but was discovered before the third, gallium, and after the fifth, thallium. In 1863 Ferdinand Reich and his assistant, Hieronymous Theodor Richter, were looking in a sample of the mineral zinc blende, also known as sphalerite (ZnS), for the spectroscopic lines of the newly discovered element thallium. Reich heated the ore in a coil of platinum metal and observed the lines that appeared in a spectroscope. Instead of the green thallium lines that he expected, he saw a new line of deep indigo-blue. Concluding that it must come from a new element, they named it after the characteristic indigo color it had produced. Gallium minerals were not known before August 1875, when the element itself was discovered. It was one of the elements that the inventor of the periodic table, Dmitri Mendeleev, had predicted to exist six years earlier. While examining the spectroscopic lines in zinc blende the French chemist Paul Emile Lecoq de Boisbaudran found indications of a new element in the ore. In just three months he was able to produce a sample, which he purified by dissolving it in a potassium hydroxide (KOH) solution and sending an electric current through it. The next month he presented his findings to the French Academy of Sciences, naming the new element after the Greek name for Gaul, modern France. The last confirmed element in the boron group, nihonium, was not discovered but rather created or synthesized. The element's synthesis was first reported by the Dubna Joint Institute for Nuclear Research team in Russia and the Lawrence Livermore National Laboratory in the United States, though it was the Dubna team who successfully conducted the experiment in August 2003. Nihonium was discovered in the decay chain of moscovium, which produced a few precious atoms of nihonium. The results were published in January of the following year. Since then around 13 atoms have been synthesized and various isotopes characterized. However, their results did not meet the stringent criteria for being counted as a discovery, and it was the later RIKEN experiments of 2004 aimed at directly synthesizing nihonium that were acknowledged by IUPAC as the discovery. Etymology The name "boron" comes from the Arabic word for the mineral borax, (بورق, boraq) which was known before boron was ever extracted. The "-on" suffix is thought to have been taken from "carbon". Aluminium was named by Humphry Davy in the early 1800s. It is derived from the Greek word alumen, meaning bitter salt, or the Latin alum, the mineral. 
Gallium is derived from the Latin Gallia, referring to France, the place of its discovery. Indium comes from the Latin word indicum, meaning indigo dye, and refers to the element's prominent indigo spectroscopic line. Thallium, like indium, is named after the Greek word for the color of its spectroscopic line: , meaning a green twig or shoot. "Nihonium" is named after Japan (Nihon in Japanese), where it was discovered. Occurrence and abundance Boron Boron, with its atomic number of 5, is a very light element. Almost never found free in nature, it is very low in abundance, composing only 0.001% (10 ppm) of the Earth's crust. It is known to occur in over a hundred different minerals and ores, however: the main source is borax, but it is also found in colemanite, boracite, kernite, tusionite, berborite and fluoborite. Major world miners and extractors of boron include Turkey, the United States, Argentina, China, Bolivia and Peru. Turkey is by far the most prominent of these, accounting for around 70% of all boron extraction in the world. The United States is second, most of its yield coming from the state of California. Aluminium Aluminium, in contrast to boron, is the most abundant metal in the Earth's crust, and the third most abundant element. It composes about 8.2% (82,000 ppm) of the Earth's crust, surpassed only by oxygen and silicon. It is like boron, however, in that it is uncommon in nature as a free element. This is due to aluminium's tendency to attract oxygen atoms, forming several aluminium oxides. Aluminium is now known to occur in nearly as many minerals as boron, including garnets, turquoises and beryls, but the main source is the ore bauxite. The world's leading countries in the extraction of aluminium are Ghana, Suriname, Russia and Indonesia, followed by Australia, Guinea and Brazil. Gallium Gallium is a relatively rare element in the Earth's crust and is not found in as many minerals as its lighter homologues. Its abundance on the Earth is a mere 0.0018% (18 ppm). Its production is very low compared to other elements, but has increased greatly over the years as extraction methods have improved. Gallium can be found as a trace in a variety of ores, including bauxite and sphalerite, and in such minerals as diaspore and germanite. Trace amounts have been found in coal as well. The gallium content is greater in a few minerals, including gallite (CuGaS2), but these are too rare to be counted as major sources and make negligible contributions to the world's supply. Indium Indium is another rare element in the boron group at only 0.000005% (0.05 ppm),. Very few indium-containing minerals are known, all of them scarce: an example is indite. Indium is found in several zinc ores, but only in minute quantities; likewise some copper and lead ores contain traces. As is the case for most other elements found in ores and minerals, the indium extraction process has become more efficient in recent years, ultimately leading to larger yields. Canada is the world's leader in indium reserves, but both the United States and China have comparable amounts. Thallium Thallium is of intermediate abundance in the Earth's crust, estimated to be 0.00006% (0.6 ppm). It is found on the ground in some rocks, in the soil and in clay. Many sulfide ores of iron, zinc and cobalt contain thallium. In minerals it is found in moderate quantities: some examples are crookesite (in which it was first discovered), lorandite, routhierite, bukovite, hutchinsonite and sabatierite. 
There are other minerals that contain small amounts of thallium, but they are very rare and do not serve as primary sources. Nihonium Nihonium is an element that is never found in nature but has been created in a laboratory. It is therefore classified as a synthetic element with no stable isotopes. Applications With the exception of synthetic nihonium, all the elements in the boron group have numerous uses and applications in the production and content of many items. Boron Boron has found many industrial applications in recent decades, and new ones are still being found. A common application is in fiberglass. There has been rapid expansion in the market for borosilicate glass; most notable among its special qualities is a much greater resistance to thermal expansion than regular glass. Another commercially expanding use of boron and its derivatives is in ceramics. Several boron compounds, especially the oxides, have unique and valuable properties that have led to their substitution for other materials that are less useful. Boron may be found in pots, vases, plates, and ceramic pan-handles for its insulating properties. The compound borax is used in bleaches, for both clothes and teeth. The hardness of boron and some of its compounds give it a wide array of additional uses. A small part (5%) of the boron produced finds use in agriculture. Aluminium Aluminium is a metal with numerous familiar uses in everyday life. It is most often encountered in construction materials, in electrical devices, especially as the conductor in cables, and in tools and vessels for cooking and preserving food. Aluminium's lack of reactivity with food products makes it particularly useful for canning. Its high affinity for oxygen makes it a powerful reducing agent. Finely powdered pure aluminium oxidizes rapidly in air, generating a huge amount of heat in the process (burning at about or ), leading to applications in welding and in other settings where a large amount of heat is needed. Aluminium is a component of alloys used for making lightweight bodies for aircraft. Cars also sometimes incorporate aluminium in their framework and body, and there are similar applications in military equipment. Less common uses include components of decorations and some guitars. The element also sees use in a diverse range of electronics. Gallium Gallium and its derivatives have only found applications in recent decades. Gallium arsenide has been used in semiconductors, in amplifiers, in solar cells (for example in satellites) and in tunnel diodes for FM transmitter circuits. Gallium alloys are used mostly for dental purposes. Gallium ammonium chloride is used for the leads in transistors. A major application of gallium is in LED lighting. The pure element has been used as a dopant in semiconductors, and has additional uses in electronic devices with other elements. Gallium has the property of being able to 'wet' glass and porcelain, and thus can be used to make mirrors and other highly reflective objects. Gallium can be added to alloys of other metals to lower their melting points. Indium Indium's uses can be divided into four categories: the largest part (70%) of the production is used for coatings, usually combined as indium tin oxide (ITO); a smaller portion (12%) goes into alloys and solders; a similar amount is used in electrical components and in semiconductors; and the final 6% goes to minor applications. 
Among the items in which indium may be found are platings, bearings, display devices, heat reflectors, phosphors, and nuclear control rods. Indium tin oxide has found a wide range of applications, including glass coatings, solar panels, streetlights, electrophoretic displays (EPDs), electroluminescent displays (ELDs), plasma display panels (PDPs), electrochromic displays (ECs), field emission displays (FEDs), sodium lamps, windshield glass and cathode-ray tubes, making it the single most important indium compound. Thallium Thallium is used in its elemental form more often than the other boron-group elements. Uncompounded thallium is used in low-melting glasses, photoelectric cells, switches, mercury alloys for low-range glass thermometers, and thallium salts. It can be found in lamps and electronics, and is also used in myocardial imaging. The possibility of using thallium in semiconductors has been researched, and it is a known catalyst in organic synthesis. Thallium hydroxide (TlOH) is used mainly in the production of other thallium compounds. Thallium sulfate (Tl2SO4) is an outstanding vermin-killer, and it is a principal component in some rat and mouse poisons. However, the United States and some European countries have banned the substance because of its high toxicity to humans. In other countries, though, the market for the substance is growing. Tl2SO4 is also used in optical systems. Biological role None of the group-13 elements has a major biological role in complex animals, but some are at least associated with a living being. As in other groups, the lighter elements usually have more biological roles than the heavier. The heaviest ones are toxic, as are the other elements in the same periods. Boron is essential in most plants, whose cells use it for such purposes as strengthening cell walls. It is found in humans, certainly as a trace element, but there is ongoing debate over its significance in human nutrition. Boron's chemistry does allow it to form complexes with such important molecules as carbohydrates, so it is plausible that it could be of greater use in the human body than previously thought. Boron has also been shown to be able to replace iron in some of its functions, particularly in the healing of wounds. Aluminium has no known biological role in plants or animals, despite its widespread occurrence in nature. Gallium is not essential for the human body, but its relation to iron(III) allows it to become bound to proteins that transport and store iron. Gallium can also stimulate metabolism. Indium and its heavier homologues have no biological role, although indium salts in small doses, like gallium, can stimulate metabolism. Toxicity Each element of the boron group has a unique toxicity profile to plants and animals. As an example of boron toxicity, it has been observed to harm barley in concentrations exceeding 20 mM. The symptoms of boron toxicity are numerous in plants, complicating research: they include reduced cell division, decreased shoot and root growth, decreased production of leaf chlorophyll, inhibition of photosynthesis, lowering of stomatal conductance, reduced proton extrusion from roots, and deposition of lignin and suberin. Aluminium does not present a prominent toxicity hazard in small quantities, but very large doses are slightly toxic. Gallium is not considered toxic, although it may have some minor effects. 
Indium is not toxic and can be handled with nearly the same precautions as gallium, but some of its compounds are slightly to moderately toxic. Thallium, unlike gallium and indium, is extremely toxic, and has caused many poisoning deaths. Its most noticeable effect, apparent even from tiny doses, is hair loss all over the body, but it causes a wide range of other symptoms, disrupting and eventually halting the functions of many organs. The nearly colorless, odorless and tasteless nature of thallium compounds has led to their use by murderers. The incidence of thallium poisoning, intentional and accidental, increased when thallium (with its similarly toxic compound, thallium sulfate) was introduced to control rats and other pests. The use of thallium pesticides has therefore been prohibited since 1975 in many countries, including the USA. Nihonium is a highly unstable element and decays by emitting alpha particles. Due to its strong radioactivity, it would definitely be extremely toxic, although significant quantities of nihonium (larger than a few atoms) have not yet been assembled. Notes References Bibliography External links oxide (chemical compound) – Britannica Online Encyclopedia. Britannica.com. Retrieved on 2011-05-16. Visual Elements: Group 13. Rsc.org. Retrieved on 2011-05-16. Trends In Chemical Reactivity Of Group 13 Elements. Tutorvista.com. Retrieved on 2011-05-16. etymonline.com Retrieved on 2011-07-27 Periodic table Groups (periodic table)
Boron group
Chemistry
5,683
19,841,881
https://en.wikipedia.org/wiki/Nokia%202760
The Nokia 2760 is a clamshell mobile phone released by Nokia in 2007 and manufactured in Hungary. It operates on dual-band GSM 900/1800 (RM-258) or 850/1900 (RM-259, RM-391 for T-Mobile USA). The phone supports EDGE for mobile broadband. The Nokia 2760 was popular in the late 2000s, and was the most popular choice of mobile phone in Finland in 2010. Nokia 2760 Flip In December 2021, HMD Global announced a re-release of the Nokia 2760. Branded the Nokia 2760 Flip, the phone features KaiOS, USB Type-C, a five-megapixel camera, MicroSD storage (only available on very early production units) and support for Wi-Fi and Bluetooth connectivity, as well as GPS navigation and 4G connectivity. Nokia 2780 Flip The Nokia 2780 Flip is essentially the 2760 Flip with an FM radio and MicroSD card slot. The 2780 Flip is also not carrier-locked. The Nokia 2780 Flip was announced in November 2022. References 2760 Mobile phones introduced in 2007 Series 40 devices
Nokia 2760
Technology
235
25,923,914
https://en.wikipedia.org/wiki/Plutonium%E2%80%93gallium%20alloy
Plutonium–gallium alloy (Pu–Ga) is an alloy of plutonium and gallium, used in nuclear weapon pits, the component of a nuclear weapon where the fission chain reaction is started. This alloy was developed during the Manhattan Project. Overview Metallic plutonium has several different solid allotropes. The δ phase is the least dense and most easily machinable. It is formed at temperatures of 310–452 °C at ambient pressure (1 atmosphere), and is thermodynamically unstable at lower temperatures. However, plutonium can be stabilized in the δ phase by alloying it with a small amount of another metal. The preferred alloy is 3.0–3.5 mol.% (0.8–1.0 wt.%) gallium. Pu–Ga has many practical advantages: stable between −75 and 475 °C, very low thermal expansion, low susceptibility to corrosion (4% of the corrosion rate of pure plutonium), good castability; since plutonium has the rare property that the molten state is denser than the solid state, the tendency to form bubbles and internal defects is decreased. Use in nuclear weapons Stabilized δ-phase Pu–Ga is ductile, and can be rolled into sheets and machined by conventional methods. It is suitable for shaping by hot pressing at about 400 °C. This method was used for forming the first nuclear weapon pits. More modern pits are produced by casting. Subcritical testing showed that wrought and cast plutonium performance is the same. As only the ε-δ transition occurs during cooling, casting Pu-Ga is easier than casting pure plutonium. δ phase Pu–Ga is still thermodynamically unstable, so there are concerns about its aging behavior. There are substantial differences of density (and therefore volume) between the various phases. The transition between δ-phase and α-phase plutonium occurs at a low temperature of 115 °C and can be reached by accident. Prevention of the phase transition and the associated mechanical deformations and consequent structural damage and/or loss of symmetry is of critical importance. Under 4 mol.% gallium the pressure-induced phase change is irreversible. However, the phase change is useful during the operation of a nuclear weapon. As the reaction starts, it generates enormous pressures, in the range of hundreds of gigapascals. Under these conditions, δ phase Pu–Ga transforms to α phase, which is 25% denser and thus more critical. Effect of gallium Plutonium in its α phase has a low internal symmetry, caused by uneven bonding between the atoms, more resembling (and behaving like) a ceramic than a metal. Addition of gallium causes the bonds to become more even, increasing the stability of the δ phase. The α phase bonds are mediated by the 5f shell electrons, and can be disrupted by increased temperature or by presence of suitable atoms in the lattice which reduce the available number of 5f electrons and weaken their bonds. The alloy is denser in molten state than in solid state, which poses an advantage for casting as the tendency to form bubbles and internal defects is decreased. Gallium tends to segregate in plutonium, causing "coring"—gallium-rich centers of grains and gallium-poor grain boundaries. To stabilize the lattice and reverse and prevent segregation of gallium, annealing is required at the temperature just below the δ–ε phase transition, so gallium atoms can diffuse through the grains and create homogeneous structure. The time to achieve homogenization of gallium increases with increasing grain size of the alloy and decreases with increasing temperature. 
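The relationship between the two composition scales quoted above (mol.% and wt.% gallium) can be checked with a short calculation. The sketch below is only an illustration: it assumes the plutonium is essentially pure plutonium-239 (molar mass ≈ 239.05 g/mol) and uses the natural molar mass of gallium (≈ 69.72 g/mol), ignoring the minor isotopes present in real weapons-grade material.

```python
# Illustrative conversion of gallium content from mol.% to wt.% in a Pu-Ga
# binary alloy. Molar masses are assumptions (Pu-239 and natural Ga), so the
# result is approximate rather than a specification.
M_PU = 239.05  # g/mol, plutonium-239
M_GA = 69.72   # g/mol, natural gallium

def ga_mol_to_wt_percent(mol_percent_ga):
    """Convert gallium mole percent to weight percent for a two-component Pu-Ga alloy."""
    ga_mass = mol_percent_ga * M_GA
    pu_mass = (100.0 - mol_percent_ga) * M_PU
    return 100.0 * ga_mass / (ga_mass + pu_mass)

for mol_pct in (3.0, 3.5):
    print(f"{mol_pct} mol.% Ga = {ga_mol_to_wt_percent(mol_pct):.2f} wt.% Ga")
# Prints roughly 0.89 and 1.05 wt.%, in line with the 0.8-1.0 wt.% figure quoted above.
```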
The structure of stabilized plutonium at room temperature is the same as unstabilized at δ-phase temperature, with the difference that gallium atoms substitute for plutonium in the fcc lattice. The presence of gallium in plutonium signifies its origin from weapon plants or decommissioned nuclear weapons. The isotopic signature of plutonium then allows rough identification of its origin, manufacturing method, type of reactor used in its production, and rough history of the irradiation, and matching to other samples, which is of importance in the investigation of nuclear smuggling. Aging There are several plutonium and gallium intermetallic compounds: PuGa, Pu3Ga, and Pu6Ga. During aging of the stabilized δ alloy, gallium segregates from the lattice, forming regions of Pu3Ga (ζ'-phase) within α phase, with the corresponding dimensional and density change and buildup of internal strains. The decay of plutonium, however, produces energetic particles (alpha particles and uranium-235 nuclei) that cause local disruption of the ζ' phase, establishing a dynamic equilibrium with only a modest amount of ζ' phase present, which explains the alloy's unexpectedly slow, graceful aging. The alpha particles are trapped as interstitial helium atoms in the lattice, coalescing into tiny (about 1 nm diameter) helium-filled bubbles in the metal and causing negligible levels of void swelling; the size of bubbles appears to be limited, though their number increases with time. Addition of 7.5 wt.% of plutonium-238, which has a significantly faster decay rate, to the alloy increases the aging damage rate by 16 times, assisting with plutonium aging research. The Blue Gene supercomputer aided with simulations of plutonium aging processes. Production Plutonium alloys can be produced by adding a metal to molten plutonium. However, if the alloying metal is sufficiently reductive, plutonium can be added in the form of oxides or halides. The δ phase plutonium–gallium and plutonium–aluminium alloys are produced by adding plutonium(III) fluoride to molten gallium or aluminium, which has the advantage of avoiding dealing directly with the highly reactive plutonium metal. Reprocessing into MOX fuel For reprocessing of surplus warhead pits into MOX fuel, the majority of the gallium has to be removed, as its high content could interfere with the fuel rod cladding (gallium attacks zirconium) and with migration of fission products in the fuel pellets. In the ARIES process, the pits are converted to oxide by converting the material to plutonium hydride, then optionally to nitride, and then to oxide. Gallium is then mostly removed from the solid oxide mixture by heating at 1100 °C in a 94% argon 6% hydrogen atmosphere, reducing the gallium content from 1% to 0.02%. Further dilution of plutonium oxide during the MOX fuel manufacture brings the gallium content to levels considered negligible. A wet route of gallium removal, using ion exchange, is also possible. Electrorefining is another way to separate gallium and plutonium. Development history During the Manhattan Project (1942–1945), the maximum proportion of diluent atoms in plutonium that would not affect the explosion efficiency was calculated to be 5 mol.%. Two stabilizing elements were considered, silicon and aluminium, but only aluminium produced satisfactory alloys. Aluminium's tendency to react with α-particles and emit neutrons, however, limited its maximum content to 0.5 mol.%; the next element from the boron group of elements, gallium, was tried and found to be satisfactory. 
The early atomic bomb design secrets passed to the Soviets by spy Klaus Fuchs included the gallium trick for stabilizing phases of plutonium, and thus the first Soviet atomic bomb used this alloy also. References Gallium alloys Plutonium compounds Low thermal expansion materials Nuclear weapons
Plutonium–gallium alloy
Physics,Chemistry
1,559
78,357,190
https://en.wikipedia.org/wiki/Streptomycin%20thallous%20acetate%20actidione%20agar
Streptomycin thallous acetate actidione agar, often abbreviated STAA, is a selective culture medium designed to favor the growth of Brochothrix thermosphacta for lab study. This medium was developed in 1966 by George Alan Gardner. Typical composition STAA agar typically contains (w/v): 2.0% peptone 0.2% yeast extract 0.75% glycerol 0.1% dipotassium hydrogen phosphate 0.1% magnesium sulfate heptahydrate 1.3% agar pH adjusted to 7.0 at 25 °C After autoclaving at 121 °C for 15 minutes, the medium is cooled to 50 °C and the additives are added: 0.05% Streptomycin sulphate 0.005% Thallous acetate STAA contains streptomycin sulphate, which inhibits some Gram-positive organisms and most Gram-negatives at higher concentrations, whilst Brochothrix thermosphacta remains resistant. Thallous acetate inhibits most yeasts as well as many aerobic and facultatively anaerobic bacteria. The test sample is homogenized in sterile 0.1% peptone water and diluted. 0.1 ml volumes are transferred to the agar plate and spread across the surface. The agar plates are incubated at 22 °C for 48 hours aerobically. Brochothrix thermosphacta typically grows as straw-colored colonies, 0.5–1.0 mm in diameter. Some pseudomonads are able to grow on STAA and may be differentiated from Brochothrix thermosphacta by performing an oxidase test. References Microbiological media
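Because the base composition above is given in w/v percentages (grams per 100 ml), scaling it to a chosen batch volume is simple arithmetic. The sketch below covers only the base components that are autoclaved, not the heat-labile streptomycin and thallous acetate added afterwards; the 500 ml batch size is an arbitrary example, not part of the published formulation.

```python
# Scaling the STAA base-medium recipe (w/v percentages, i.e. g per 100 ml)
# to an arbitrary batch volume. A unit-conversion sketch, not a lab protocol.
BASE_MEDIUM_WV_PERCENT = {
    "peptone": 2.0,
    "yeast extract": 0.2,
    "glycerol": 0.75,
    "dipotassium hydrogen phosphate": 0.1,
    "magnesium sulfate heptahydrate": 0.1,
    "agar": 1.3,
}

def grams_for_batch(volume_ml):
    """Return grams of each base component needed for the requested volume."""
    return {name: pct * volume_ml / 100.0 for name, pct in BASE_MEDIUM_WV_PERCENT.items()}

for component, grams in grams_for_batch(500).items():  # e.g. a 500 ml batch
    print(f"{component}: {grams:.2f} g")
# A 500 ml batch works out to 10 g peptone, 1 g yeast extract, 6.5 g agar, and so on.
```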
Streptomycin thallous acetate actidione agar
Biology
370
1,195,281
https://en.wikipedia.org/wiki/Rugby%20league%20positions
A rugby league team consists of 13 players on the field, with 4 substitutes on the bench. Each of the 13 players is assigned a position, normally with a standardised number, which reflects their role in attack and defence, although players can take up any position at any time. Players are divided into two general types, forwards and backs. Forwards are generally chosen for their size and strength. They are expected to run with the ball, to attack, and to make tackles. Forwards are required to improve the team's field position thus creating space and time for the backs. Backs are usually smaller and faster, though a big, fast player can be of advantage in the backs. Their roles require speed and ball-playing skills, rather than just strength, to take advantage of the field position gained by the forwards. Typically forwards tend to operate in the centre of the field, while backs operate nearer to the touch-lines, where more space can usually be found. Names and numbering The laws of the game recognise standardised numbering of positions. The starting side normally wear the numbers corresponding to their positions, only changing in the case of substitutions and position shifts during the game. In some competitions, such as Super League, players receive a squad number to use all season, no matter what positions they play in. The positions and the numbers are defined by the game's laws as: Backs 1 Full back 2 Right wing 3 Right centre 4 Left centre 5 Left wing 6 Stand-off half (Predominately used in the Northern hemisphere) or Five-eighth (Elsewhere) 7 Scrum half (Predominately used in the Northern hemisphere) or Half-back Forwards 8 Prop (Front Row Forward) 9 Hooker or Dummy-half 10 Prop (Front Row Forward) 11 Second Row Forward 12 Second Row Forward 13 Lock Forward or Loose Forward In practice, the term 'front row forward' is used less frequently than the term 'Prop' of which a team has two. The scrum half is often known as the half back, especially in Australasia, and the lock forward is usually known as loose forward in England. Backs There are seven backs, numbered 1 to 7. For these positions, the emphasis is on speed and ball-handling skills. Generally, the "back-line" consists of smaller, more agile players. Fullback Numbered 1, the fullback's primary role is the last line of defence, standing behind the main line of defenders. Defensively, fullbacks must be able to chase and tackle any player who breaks the first line of defence, and must be able to catch and return kicks made by the attacking side. Their role in attack is usually as a support player, and they are often used to come into the line to create an overlap in attack. Fullbacks that feature in their respective nations' rugby league halls of fame are France's Puig Aubert, Australia's Clive Churchill, Charles Fraser, Graeme Langlands, Graham Eadie and Billy Slater, Great Britain/Wales' Jim Sullivan, New Zealand's Des White and Great Britain's Kris Radlinski. Threequarters There are four threequarters: two wingers and two centres - right wing (2), right centre (3), left centre (4) and left wing (5). Typically these players work in pairs, with one winger and one centre occupying each side of the field. Wing Also known as wingers. There are two wings in a rugby league team, numbered 2 and 5. They are usually positioned closest to the touch-line on each side of the field. They are generally among the fastest players in a team, with the speed to exploit space that is created for them and finish an attacking move. 
In defence their primary role is to mark their opposing wingers, and they are also usually required to catch and return kicks made by an attacking team, often dropping behind the defensive line to help the fullback. Wingers that feature in their nations' rugby league halls of fame are Great Britain's Billy Batten, Billy Boston and Clive Sullivan, Australia's Brian Bevan, John Ferguson, Ken Irvine, Harold Horder and Brian Carlson, South African Tom van Vollenhoven and France's Raymond Contrastin. Centre There are only 2 centres, right and left, numbered 3 and 4 respectively. They are usually positioned just inside the wingers and are typically the second-closest players to the touch-line on each side of the field. In attack their primary role is to provide an attacking threat out wide and as such they often need to be some of the fastest players on the pitch, often providing the pass for their winger to finish off a move, by drawing and passing to give the fast wingers space to move. In defence, they are expected to mark their opposite centre. Centres that feature in their countries' halls of fame are France's Max Rousié, England's Eric Ashton, Harold Wagstaff and Neil Fox, Wales' Gus Risman and Australia's Reg Gasnier, H "Dally" Messenger, Dave Brown, Jim Craig, Bob Fulton, Mal Meninga, and Greg Inglis. Half pair There are two halves. Positioned more centrally in attack, beside or behind the forwards, they direct the ball and are usually the team's main play-makers, and as such are typically required to be the most skillful and intelligent players on the team. These players also usually perform most tactical kicking for their team. Stand-off / five-eighth Numbered 6, the stand-off or five-eighth is usually a strong passer and runner, while also being agile. Often this player is referred to as "second receiver", as in attacking situations they are typically the second player to receive the ball (after the half-back) and are then able to initiate an attacking move. Scrum-half / half-back Numbered 7, the scrum-half or half-back is usually involved in directing the team's play. The position is sometimes referred to as "first receiver", as half-backs are often the first to receive the ball from the dummy-half after a play-the-ball. This makes them important decision-makers in attack. Forwards A rugby league forward pack consists of six players who tend to be bigger and stronger than backs, and generally rely more on their strength and size to fulfill their roles than play-making skills. The forwards also traditionally formed and contested scrums; however, in the modern game it is largely immaterial which players pack down in the scrum. Despite this, forwards are still referred to by the position they would traditionally take in the scrum. Front row The front row of the scrum traditionally included the hooker with the two props on either side. All three may be referred to as front-rowers, but this term is now most commonly just used as a colloquialism to refer to the props. Hooker The hooker or rake, numbered 9, traditionally packs in the middle of the scrum's front row. The position is named because of the traditional role of "hooking" the ball back with the foot when it enters the scrum. It is usually the hooker who plays in the dummy-half position, receiving the ball from the play-the-ball and continuing the team's attack by passing the ball to a teammate or by running with the ball. As such, hookers are required to be reliable passers and often possess a similar skill-set to half backs. 
Prop There are two props, numbered 8 and 10, who pack into the front row of the scrum on either side of the hooker. Sometimes called "bookends" in Australasia, the props are usually the largest and heaviest players on a team. In attack, their size and strength mean that they are primarily used for running directly into the defensive line, as a kind of "battering ram" to simply gain metres. Similarly, props are relied upon to defend against such running from the opposition's forwards. Prop forwards that feature in their respective nations' rugby league halls of fame are Australia's Arthur Beetson, Duncan Hall, Frank Burge and Herb Steinohrt and New Zealand's Cliff Johnson. Back row Three forwards make up the back row of the scrum: two second-rowers and a loose forward. All three may be referred to as back-rowers. Second-row forward Second-row forwards are numbered 11 and 12. While their responsibilities are similar in many ways to the props, these players typically possess more speed and agility and take up a wider position in attack and defence. Often each second rower will cover a specific side of the field, working in unison with their respective centre and winger. Second rowers are often relied upon to perform large numbers of tackles in defence. Second-row forwards that feature in their nations' halls of fame include New Zealand's Mark Graham, Australia's Norm Provan, George Treweek and Harry Bath, France's Jean Galia, and Great Britain & England's Martin Hodgson. Loose forward / lock forward Numbered 13, the loose forward or lock forward packs behind the two second-rowers in the scrum. Some teams choose to simply deploy a third prop in the loose forward position, while other teams use a more skilful player as an additional playmaker. Loose forwards that feature in their nation's Halls of Fame include Australia's Ron Coote, Johnny Raper, Bradley Clyde and Wally Prigg, Great Britain's Vince Karalius, Ellery Hanley and 'Rocky' Turner, and New Zealand's Charlie Seeling. Interchange In addition to the thirteen on-field players, there is a maximum of five substitute players, who start the game on their team's bench. Usually, they will be numbered 14, 15, 16, 17 and 18. Each player normally keeps their number for the whole game, regardless of which position they play in. That is, if player number 14 replaces the fullback, they will wear the number 14 for the whole game, and not change shirts to display the number 1. The rules governing if and when a replacement can be used have varied over the history of the game; currently they can be used for any reason by their coach – typically because of injury, to manage fatigue, for tactical reasons or due to poor performance. Under current rules, players who have been substituted are typically allowed to be substituted back into the game later on. Leagues in different countries have had different rules on how many interchanges can be made in a game. The Super League allowed up to ten interchanges per team in each game; this was reduced to eight interchanges per team per game commencing in the 2019 season. Commencing in the 2016 season, Australia's National Rugby League permits up to eight interchanges per team per game. Additionally, if a player is injured due to foul play and an opposition player has been sin-binned or sent off, then the injured player's team is given a free interchange. 
Often an interchange bench will include at least one (and usually two) replacement props, as prop is generally considered to be the most physically taxing position and these players are likely to tire the quickest. Concussion substitute Commencing in 2021, a player named as the squad's 18th player on match day is able to take the field when three players fail a head injury assessment, or when a player suffers a match-ending injury caused by foul play for which the opposing player was either sin-binned or sent off. Since the change, and in the wake of a few incidents in the NRL, there have been calls to lower this threshold from three players to two. The concussion substitute was used during the 2021 Rugby League World Cup played in 2022, and adopted by the RFL in 2023. Roles As well as their positions, players' roles may be referred to by a range of other terms. Marker Following a tackle, the defending team may position two players – known as markers – at the play-the-ball to stand one behind the other, facing the tackled player and the attacking team's dummy-half. Dummy half The dummy half (or acting half-back) is the player who stands behind the play-the-ball and collects the ball, before passing, running or kicking the ball. The hooker has become almost synonymous with the dummy half role. However, any player of any position can play the role at any time and this often happens during a game, particularly when the hooker is the player tackled. First receiver The first receiver is the name given to the first player to receive the ball off the play-the-ball, i.e. from the dummy-half. Second receiver If the ball is passed immediately by the first receiver, then the player catching it is sometimes referred to as the second receiver. Utility A player who can play in a number of different positions is often referred to as a "utility player", "utility forward", or "utility back". Goal-kicker Although any player can attempt their team's kicks at goal (penalty kicks or conversions), most teams have specific players who train extensively at kicking, and often use only one player to take goal kicks during a game. Captain The captain is the on-field leader of a team and a point of contact between the referee and a team, and can be a player of any position. Some of the captain's responsibilities are stipulated in the laws. Before a match, the two teams' captains toss a coin with the referee. The captain that wins the toss can decide to kick off or can choose which end of the field to defend. The captain that loses the toss then takes the other of the alternatives. The captain is often seen as responsible for a team's discipline. When a team persistently breaks the laws, the referee, while issuing a caution, will often speak with the team's captain to encourage them to improve their team's discipline. The captains are also traditionally responsible for appointing a substitute should a player suffer an injury during a game, although in the professional game there are other procedures in place for dealing with this. See also Rugby league gameplay Notes References Rugby league
Rugby league positions
Mathematics
2,902
2,884,728
https://en.wikipedia.org/wiki/Automated%20reasoning
In computer science, in particular in knowledge representation and reasoning and metalogic, the area of automated reasoning is dedicated to understanding different aspects of reasoning. The study of automated reasoning helps produce computer programs that allow computers to reason completely, or nearly completely, automatically. Although automated reasoning is considered a sub-field of artificial intelligence, it also has connections with theoretical computer science and philosophy. The most developed subareas of automated reasoning are automated theorem proving (and the less automated but more pragmatic subfield of interactive theorem proving) and automated proof checking (viewed as guaranteed correct reasoning under fixed assumptions). Extensive work has also been done in reasoning by analogy using induction and abduction. Other important topics include reasoning under uncertainty and non-monotonic reasoning. An important part of the uncertainty field is that of argumentation, where further constraints of minimality and consistency are applied on top of the more standard automated deduction. John Pollock's OSCAR system is an example of an automated argumentation system that is more specific than being just an automated theorem prover. Tools and techniques of automated reasoning include the classical logics and calculi, fuzzy logic, Bayesian inference, reasoning with maximal entropy and many less formal ad hoc techniques. Early years The development of formal logic played a big role in the field of automated reasoning, which itself led to the development of artificial intelligence. A formal proof is a proof in which every logical inference has been checked back to the fundamental axioms of mathematics. All the intermediate logical steps are supplied, without exception. No appeal is made to intuition, even if the translation from intuition to logic is routine. Thus, a formal proof is less intuitive and less susceptible to logical errors. Some consider the Cornell Summer meeting of 1957, which brought together many logicians and computer scientists, as the origin of automated reasoning, or automated deduction. Others say that it began before that with the 1955 Logic Theorist program of Newell, Shaw and Simon, or with Martin Davis’ 1954 implementation of Presburger's decision procedure (which proved that the sum of two even numbers is even). Automated reasoning, although a significant and popular area of research, went through an "AI winter" in the eighties and early nineties. The field subsequently revived, however. For example, in 2005, Microsoft started using verification technology in many of their internal projects and is planning to include a logical specification and checking language in their 2012 version of Visual C. Significant contributions Principia Mathematica was a milestone work in formal logic written by Alfred North Whitehead and Bertrand Russell. Principia Mathematica - also meaning Principles of Mathematics - was written with a purpose to derive all or some of the mathematical expressions, in terms of symbolic logic. Principia Mathematica was initially published in three volumes in 1910, 1912 and 1913. Logic Theorist (LT) was the first ever program developed in 1956 by Allen Newell, Cliff Shaw and Herbert A. Simon to "mimic human reasoning" in proving theorems and was demonstrated on fifty-two theorems from chapter two of Principia Mathematica, proving thirty-eight of them. 
In addition to proving the theorems, the program found a proof for one of the theorems that was more elegant than the one provided by Whitehead and Russell. After an unsuccessful attempt at publishing their results, Newell, Shaw, and Simon reported in their publication in 1958, The Next Advance in Operations Research: "There are now in the world machines that think, that learn and that create. Moreover, their ability to do these things is going to increase rapidly until (in a visible future) the range of problems they can handle will be co-extensive with the range to which the human mind has been applied." Examples of Formal Proofs {| class="wikitable" |- ! Year !! Theorem !! Proof System !! Formalizer !! Traditional Proof |- | 1986 || First Incompleteness || Boyer-Moore || Shankar || Gödel |- | 1990 || Quadratic Reciprocity || Boyer-Moore || Russinoff || Eisenstein |- | 1996 || Fundamental Theorem of Calculus || HOL Light || Harrison || Henstock |- | 2000 || Fundamental Theorem of Algebra || Mizar || Milewski || Brynski |- | 2000 || Fundamental Theorem of Algebra || Coq || Geuvers et al. || Kneser |- | 2004 || Four Color || Coq || Gonthier || Robertson et al. |- | 2004 || Prime Number || Isabelle || Avigad et al. || Selberg-Erdős |- | 2005 || Jordan Curve || HOL Light || Hales || Thomassen |- | 2005 || Brouwer Fixed Point || HOL Light || Harrison || Kuhn |- | 2006 || Flyspeck 1 || Isabelle || Bauer–Nipkow || Hales |- | 2007 || Cauchy Residue || HOL Light || Harrison || Classical |- | 2008 || Prime Number || HOL Light || Harrison || Analytic proof |- | 2012 || Feit-Thompson || Coq || Gonthier et al. || Bender, Glauberman and Peterfalvi |- | 2016 || Boolean Pythagorean triples problem || Formalized as SAT || Heule et al. || None |} Proof systems Boyer-Moore Theorem Prover (NQTHM) The design of NQTHM was influenced by John McCarthy and Woody Bledsoe. Started in 1971 at Edinburgh, Scotland, this was a fully automatic theorem prover built using Pure Lisp. The main aspects of NQTHM were: the use of Lisp as a working logic. the reliance on a principle of definition for total recursive functions. the extensive use of rewriting and "symbolic evaluation". an induction heuristic based on the failure of symbolic evaluation. HOL Light Written in OCaml, HOL Light is designed to have a simple and clean logical foundation and an uncluttered implementation. It is essentially another proof assistant for classical higher order logic. Coq Developed in France, Coq is another automated proof assistant, which can automatically extract executable programs from specifications, as either Objective CAML or Haskell source code. Properties, programs and proofs are formalized in the same language called the Calculus of Inductive Constructions (CIC). Applications Automated reasoning has been most commonly used to build automated theorem provers. Oftentimes, however, theorem provers require some human guidance to be effective and so more generally qualify as proof assistants. In some cases such provers have come up with new approaches to proving a theorem. Logic Theorist is a good example of this. The program came up with a proof for one of the theorems in Principia Mathematica that was more efficient (requiring fewer steps) than the proof provided by Whitehead and Russell. Automated reasoning programs are being applied to solve a growing number of problems in formal logic, mathematics and computer science, logic programming, software and hardware verification, circuit design, and many others. 
The TPTP (Sutcliffe and Suttner 1998) is a library of such problems that is updated on a regular basis. There is also a competition among automated theorem provers held regularly at the CADE conference (Pelletier, Sutcliffe and Suttner 2002); the problems for the competition are selected from the TPTP library. See also Automated machine learning (AutoML) Automated theorem proving Reasoning system Semantic reasoner Program analysis (computer science) Applications of artificial intelligence Outline of artificial intelligence Casuistry • Case-based reasoning Abductive reasoning Inference engine Commonsense reasoning Conferences and workshops International Joint Conference on Automated Reasoning (IJCAR) Conference on Automated Deduction (CADE) International Conference on Automated Reasoning with Analytic Tableaux and Related Methods Journals Journal of Automated Reasoning Communities Association for Automated Reasoning (AAR) References External links International Workshop on the Implementation of Logics Workshop Series on Empirically Successful Topics in Automated Reasoning Theoretical computer science Automated theorem proving Logic in computer science
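As a toy illustration of what reasoning "completely automatically" means in the simplest possible setting, the sketch below decides the validity of a propositional formula by exhaustively checking every truth assignment. This brute-force decision procedure is not how the systems described above work (they handle far richer logics and enormous search spaces); it is only meant to make the idea of a fully automatic proof procedure concrete.

```python
# Toy decision procedure for propositional validity: check every truth
# assignment. Unrelated to the provers discussed above; purely illustrative.
from itertools import product

def is_valid(formula, num_vars):
    """Return True if `formula` evaluates to True under every assignment of its variables."""
    return all(formula(*assignment) for assignment in product([False, True], repeat=num_vars))

implies = lambda a, b: (not a) or b

# Peirce's law ((p -> q) -> p) -> p is a classical tautology, so this prints True.
print(is_valid(lambda p, q: implies(implies(implies(p, q), p), p), 2))

# p -> q is not valid (it fails when p is True and q is False), so this prints False.
print(is_valid(lambda p, q: implies(p, q), 2))
```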
Automated reasoning
Mathematics
1,694
78,810,231
https://en.wikipedia.org/wiki/V553%20Centauri
V553 Centauri is a variable star in the southern constellation of Centaurus, abbreviated V553 Cen. It ranges in brightness from an apparent visual magnitude of 8.22 down to 8.80 with a period of 2.06 days. At that magnitude, it is too dim to be visible to the naked eye. Based on parallax measurements, it is located at a distance of approximately 1,890 light years from the Sun. Observations The variability of this star was announced in 1936 by C. Hoffmeister. In 1957, he determined it to be a Delta Cepheid variable with a magnitude range of and a periodicity of . The observers M. W. Feast and G. H. Herbig noted a peculiar spectrum with strong absorption lines of the molecules CH and CN, while neutral iron lines are unusually weak. They found a stellar classification of G5p I–III. In 1972, T. Lloyd-Evans and associates found the star's prominent bands of C2, CH, and CN varied with the Cepheid phase, being strongest at minimum. They suggested a large overabundance of carbon in the star's atmosphere. Chemical analysis of the atmosphere in 1979 showed a metallicity close to solar, with an enhancement of carbon and nitrogen. It was proposed that V553 Cen is an evolved RR Lyrae variable and is now positioned above the horizontal branch on the HR diagram. V553 Cen is classified as a BL Herculis variable, being a low–mass type II Cepheid with a period between . As with other variables of this type, it displays a secondary bump on its light curve. It is a member of a small group of carbon Cepheids, and is one of the brightest stars of that type. V553 Cen does not appear to have a companion. From the luminosity and shape of the light curve, stellar models from 1981 suggest a mass equal to 49% of the Sun's with 9.9 times the radius of the Sun. Further analysis of the spectrum showed that oxygen is not enhanced, but sodium may be moderately enhanced. There is no evidence of s-process enhancement of elements. Instead, the abundance peculiarities are the result of nuclear reaction sequences followed by dredge-up. In particular, these are the product of triple-α, CN, ON, and perhaps some Ne–Na reactions. See also Carbon star RT Trianguli Australis References Further reading BL Herculis variables G-type giants Centaurus CD−31 11449 129981 072257 Centauri, V553
V553 Centauri
Astronomy
544
47,706,049
https://en.wikipedia.org/wiki/Gomphus%20pleurobrunnescens
Gomphus pleurobrunnescens is a species of fungus in the genus Gomphus, family Gomphaceae. It has been recorded from tropical locales of southeastern Mexico. References External links Fungi of Mexico Fungi described in 2010 Gomphaceae Fungi without expected TNC conservation status Fungus species
Gomphus pleurobrunnescens
Biology
64
13,902,650
https://en.wikipedia.org/wiki/Little%20brown%20bat
The little brown bat or little brown myotis (Myotis lucifugus) is an endangered species of mouse-eared microbat found in North America. It has a small body size and glossy brown fur. It is similar in appearance to several other mouse-eared bats, including the Indiana bat, northern long-eared bat, and Arizona myotis, to which it is closely related. Despite its name, the little brown bat is not closely related to the big brown bat, which belongs to a different genus. Its mating system is polygynandrous, or promiscuous, and females give birth to one offspring annually. The offspring, called pups, are quickly weaned and reach adult size in some dimensions by three weeks old. The little brown bat has a mean lifespan of 6.5 years, though one individual in the wild reached 34 years old. It is nocturnal, foraging for its insect prey at night and roosting in hollow trees or buildings during the day, among less common roost types. It navigates and locates prey with echolocation. It has few natural predators, but may be killed by raptors such as owls, as well as terrestrial predators such as raccoons. Other sources of mortality include diseases such as rabies and white-nose syndrome. White-nose syndrome has been a significant cause of mortality since 2006, killing over one million little brown bats by 2011. In the Northeastern United States, population loss has been extreme, with surveyed hibernacula (caves used for hibernation) averaging a population loss of 90%. Humans frequently encounter the little brown bat due to its habit of roosting in buildings. Colonies in buildings are often considered pests because of the production of waste or the concern of rabies transmission. Little brown bats rarely test positive for rabies, however. Some people attempt to attract little brown bats to their property, but not their houses, by installing bat houses. Taxonomy The little brown bat was described as a new species in 1831 by American naturalist John Eatton Le Conte. It was initially in the genus Vespertilio, with a binomial of Vespertilio lucifugus, before it was re-categorized as belonging to the Myotis genus. "Myotis" is a Neo-Latin construction, from the Greek "muós (meaning "mouse") and "oûs" (meaning ear), literally translating to "mouse-eared". "Lucifugus" is from Latin "lux" (meaning "light") and "fugere" (meaning "to shun"), literally translating to "light-shunning". The holotype had possibly been collected in Georgia near the Le Conte Plantation near Riceboro, but this has been disputed because the initial record lacked detail on where the specimen was collected. Within its family, the Vespertilionidae (vesper bats), the little brown bat is a member of the subfamily Myotinae, which contains only the mouse-eared bats of genus Myotis. Based on a 2007 study using mitochondrial and nuclear DNA, it is part of a Nearctic clade of mouse-eared bats. Its sister taxon is the Arizona myotis, M. occultus. As of 2005, five subspecies of the little brown bat are recognized: M. l. lucifugus, M. l. alascensis, M. l. carissima, M. l. pernox, and M. l. relictus. Formerly, the Arizona myotis and southeastern myotis (M. austroriparius) were also considered subspecies (M. l. occultus and M. l. austroriparius), but both are now recognized as full species. In a 2018 study by Morales and Carstens, they concluded that the five subspecies are independent, paraphyletic lineages, meaning that grouping them together excludes other lineages with the same common ancestor, and therefore each warrant specific status. 
Results of one study suggested that the little brown bat can hybridize with Yuma myotis, M. yumanensis. The two species occur in the same area in much of the Western United States, as well as southern British Columbia. The two species are morphologically different throughout most of the range, but in some regions, individuals have been documented that are intermediate in appearance between the two. However, a 1983 study by Herd and Fenton found no morphological, genetic, or ecological evidence to support the notion that the two species hybridize. Anatomy and physiology External characteristics The little brown bat is a small species, with individuals weighing with a total body length of . Individuals have the lowest weight in the spring as they emerge from hibernation. It has a forearm length of and a wingspan of . It is a sexually dimorphic species, with females larger than males on average. A variety of fur colors is possible, with pelage ranging from pale tan or reddish to dark brown. Its belly fur is a lighter color than its back fur. Its fur is glossy in appearance, though less so on its belly. A variety of pigmentation disorders have been documented in this species, including albinism (total lack of pigment), leucism (partial lack of pigment), and melanism (over-pigmentation). Head and teeth It is a diphyodont mammal, meaning that it has two sets of teeth during its lifetime—milk teeth and adult teeth. The dental formula of the milk teeth is for a total of 22 teeth, while that of the adult teeth is for a total of 38 teeth. Newborns ("pups") are born with 20 milk teeth which becomes 22 when the final upper premolars emerge. Pups begin losing milk teeth once they have reached a body length of ; total loss of milk teeth and emergence of adult teeth is usually complete by the time a juvenile is long. It has a relatively short snout and a gently sloped forehead. It lacks a sagittal crest, which can be used to distinguish it from the Arizona myotis. Its skull length is . The braincase appears nearly circular though somewhat flattened when viewed from the back. Its ears are long, while the tragi, or cartilaginous flaps that project in front of the ear openings, are long. The tragi are blunt at the tips and considered of medium length for a mouse-eared bat. Senses The little brown bat is dichromatic and its eyesight is likely sensitive to ultraviolet and red light, based on a genetic analysis that discovered that the genes SWS1 and M/LWS were present and functional. Its ability to see ultraviolet light may be useful in capturing insects, as 80% of nocturnal moths' wings reflect UV light. It is unclear if or how seeing red light is advantageous for this species. It is adapted to see best in low-light conditions. It lacks eyeshine. The little brown bat lacks a vomeronasal organ. Relative to frugivorous bat species such as the Jamaican fruit bat, it has small eyes and a reduced olfactory epithelium. Instead, it has a more sophisticated system of echolocation, suggesting that reliance on echolocation decreases the need for orientation via sight or smell. Physiology In fall through spring, the little brown bat enters torpor, a state of decreased physiological activity, daily. Torpor saves energy for the bat when ambient temperatures are below throughout the year and in the winter; instead of expending energy to maintain a constant body temperature, it allows its body to cool and physiological activity to slow. 
While in torpor, its heart rate drops from up to 210 beats per minute to as few as 8 beats per minute. The exception to this rule is females at the end of pregnancy, which no longer have the ability to thermoregulate, and therefore must roost in warm places. During daily roosting, it can cope with high levels of water loss of up to 25%. In the winter time, it enters a prolonged state of torpor known as hibernation. To conserve energy, it limits how frequently it arouses from torpor, with individuals existing in uninterrupted torpor for up to 90 days. Arousal is the most energetically costly phase of torpor, which is why individuals do so infrequently. Despite the energy-saving mechanism of hibernation, individuals lose a quarter of their pre-hibernation body mass during the winter. Similar species The little brown bat can be confused with the Indiana bat (M. sodalis) in appearance. The two can be differentiated by the little brown bat's lack of a keeled calcar—the cartilaginous spur on its uropatagium (the flight membrane between its hind legs). While it does have a calcar, that of the little brown bat is not nearly as pronounced. Additionally, the little brown bat can be distinguished by the presence of hairs on its toes and feet that extend beyond the length of the digits. The northern long-eared bat (M. septentrionalis), another similar species, can be distinguished by its much longer ears, and tragi that are long and sharply pointed. Biology and ecology Reproduction and life cycle The little brown bat has a promiscuous mating structure, meaning that individual bats of both sexes mate with multiple partners. It is a seasonal breeder, with mating taking place in the fall before the annual hibernation. As a seasonal breeder, males do not produce sperm year-round; instead, spermatogenesis occurs May through August each year. Throughout the spring and summer, males and females roost separately. In the fall, however, individuals of both sexes will congregate in the same roost in a behavior known as "swarming". Like several other bat species, males of this species exhibit homosexual behaviors, with male bats mating indiscriminately with torpid, roosting bats, regardless of sex. Although copulation occurs in the fall, fertilization does not occur until the spring due to sperm storage. Gestation proceeds for 50–60 days following fertilization. The litter size is one individual. At birth, pups weigh approximately and have a forearm length less than . While they have a small absolute mass, they are enormous relative to their mothers, weighing up to 30% of her postpartum body weight at birth. Pups' eyes and ears are closed at first, but open within a few hours of birth. They exhibit rapid growth; at around three weeks old, the young start flying, begin the weaning process, and are of a similar size to adults in forearm length but not weight. The young are totally weaned by 26 days old. Females may become sexually mature in the first year of life. Males become sexually mature in their second year. It is a very long-lived species relative to its body size. In the wild, individuals have been documented living up to 34 years. The average lifespan, however, is around 6.5 years. Males and females have high annual survival rates (probability of surviving another year), though survival rates vary by sex and region. 
One colony documented in Ontario had a male survival rate of 81.6% and a female survival rate of 70.8%; a colony in southern Indiana had survival rates of 77.1% and 85.7% for males and females, respectively. Social behavior The little brown bat is a colonial species, with hibernating colonies consisting of up to 183,500 individuals, though the average colony size is little more than 9,000. Historically, individuals within these colonies were highly aggregated and densely clustered together, though the disease white-nose syndrome is making solitary hibernation more common. During the spring and summer, maternity colonies of almost all female individuals form. These colonies usually consist of several hundred bats. Outside of these maternity colonies, adult males and non-reproductive females will roost by themselves or in small aggregations. Maternity colonies begin to break apart in late summer. Diet and foraging The little brown bat is nocturnal, resting during the day and foraging at night. Individuals typically emerge from their roosts at dusk, foraging for 1.5–3 hours before stopping to roost. A second foraging bout usually occurs later in the night, ending at dawn. Based on documenting one individual flying in a wind tunnel, it flies at approximately ; this increased to when flying over the surface of water. Home range size is variable; in one study of 22 females in Canada, pregnant females had an average home range of and lactating females had an average of . It produces calls that are high intensity frequency modulated (FM) and that last from less than one millisecond (ms) to about 5 ms and have a sweep rate of 80–40 kHz, with most of their energy at 45 kHz. Individuals emit approximately 20 calls per second when in flight. It consumes a variety of arthropod species, including insects and spiders. Prey species include beetles, flies, mayflies, true bugs, ants, moths, lacewings, stoneflies, and caddisflies. It also consumes mosquitoes, with one study documenting that, across twelve colonies in Wisconsin, 71.9% of all little brown bat guano (feces) samples contained mosquito DNA. During late pregnancy, when energetic demands are high, females consume around of insects nightly, or of insects per hour of foraging. With an average body mass of , that means that pregnant females consume 61% of their body weight nightly. Energetic demands during lactation are even higher, though, with females consuming of insects nightly, or of insects per hour of foraging. Because lactating females have an average mass of , this means that they consume nearly 85% of their body weight nightly. As the pup grows, lactation requires more and more energy; at the predicted lactation peak of 18 days old, a female would have to consume of insects per night, or 125% of her own weight. An often-mentioned statement is that "bats can eat 1000 mosquitoes per hour." While the little brown bat does consume mosquitoes and has high energetic needs, the study that is the basis for this claim was an experiment in which individuals were put into rooms full of either mosquitoes or fruit flies. For a duration up to 31 minutes, they captured an average of 1.5–5.7 mosquitoes per minute. The individual most efficient at catching fruit flies caught an average of 14.8 per minute for 15 minutes. Extrapolating these numbers results in conclusions that it can eat approximately 340 mosquitoes per hour, or 890 fruit flies. 
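The hourly figures quoted above are straightforward extrapolations of the per-minute capture rates measured in the enclosure experiments; a minimal sketch of that arithmetic follows (the reasons to treat these numbers cautiously are discussed immediately below).

```python
# Extrapolating the enclosure capture rates quoted above to per-hour figures.
# This is a simple rate conversion, not an estimate of foraging in the wild.
mosquito_rate_per_min = 5.7    # upper end of the observed 1.5-5.7 captures per minute
fruit_fly_rate_per_min = 14.8  # best observed fruit-fly capture rate

print(round(mosquito_rate_per_min * 60))   # ~342, quoted above as approximately 340 per hour
print(round(fruit_fly_rate_per_min * 60))  # 888, quoted above as roughly 890 per hour
```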
However, there is no assurance that individuals forage with such high efficiencies for long periods of time, or that prey is dense enough in natural settings to allow capture rates observed in enclosed areas. Predation and disease The little brown bat likely has few predators. Known predators include owls such as the eastern screech owl, northern saw-whet owl, and the great horned owl. Raccoons are also opportunistic predators of the little brown bat, picking individuals off the cave walls of their hibernacula (caves used for hibernation) or eating individuals that have fallen to the cave floor. The presence of helminth parasites in the gastrointestinal tract of the little brown bat is most common in the spring and fall and least common in the summer. Digenetic trematodes are the most common of these parasites, with the most common species including Ototrema schildti and Plagiorchis vespertilionis. The little brown bat is also affected by ectoparasites (external parasites), including bat fleas such as Myodopsylla insignis, chiggers like Leptotrombidium myotis, and the bat mite Spinturnix americanus. When parasitizing a female bat, bat mites synchronize their reproductive cycle with that of their host, with their own reproduction tied to the host's pregnancy hormones. Lactating females have a higher intensity of parasitization by mites, which may promote vertical transmission—the transfer of mites to the bat's offspring. The little brown bat is affected by the rabies virus—specifically, the strain associated with this species is known as MlV1. However, it is susceptible to other strains of the virus, including those of the big brown bat and the silver-haired bat, which is most lethal to humans. The rabies virus can be present in an individual's saliva, meaning that it can be spread through bites, 12–18 days before the individual begins showing symptoms. Individuals do not always develop rabies after exposure, though. In one study, no little brown bats developed rabies after subcutaneous exposure to the MlV1 strain. Some individuals in the wild have antibodies for the rabies virus. The little brown bat is also susceptible to the disease white-nose syndrome, which is caused by the fungus Pseudogymnoascus destructans. The disease affects individuals when they are hibernating, which is when their body temperatures are within the ideal growth range of P. destructans, . Pseudogymnoascus destructans is the first known pathogen that kills a mammal host during its torpor. Mortality from white-nose syndrome begins to manifest 120 days after hibernation begins, and mortality peaks 180 days after bats enter hibernacula. The growth of P. destructans on bats erodes the skin of their wing and tail membranes, muzzles, and ears. White-nose syndrome causes affected bats to burn through their energy reserves twice as fast as uninfected individuals. In addition to visible fungus growth on the nose, ears, and wings, white-nose syndrome results in higher carbon dioxide levels in the blood, causing acidosis, and hyperkalemia (elevated blood potassium). Arousal from torpor becomes more frequent, and water loss increases due to an increased respiration rate in an attempt to remove excess carbon dioxide from the blood. The premature loss of fat reserves during hibernation results in starvation. Survivors of white-nose syndrome have longer bouts of torpor and lower body temperatures during torpor than individuals that die. 
Some individuals are more likely to survive based on their genetics, which predisposes them to remain in torpor longer and have larger fat reserves. Little brown bats are most affected by white-nose syndrome when they exhibit social, grouping behavior when hibernating, as P. destructans is transmitted by direct contact. In hibernacula where bats exhibit more solitary behavior, colonies are more prone to avoid infections of white-nose syndrome. In some colonies where grouping behavior was common before exposure to white-nose syndrome, bats now hibernate in a more solitary fashion. Before white-nose syndrome, only 1.16% of little brown bats hibernated singly; after white-nose syndrome, the percentage grew to 44.5%. Range and habitat The little brown bat lives throughout much of North America. In the north, its range extends as far west as Alaska and across much of Canada to Labrador. In the south, its range extends to Southern California and across the northern parts of Arizona and New Mexico. Historically, the largest known aggregations of this species occurred in the karstic regions of the Eastern United States. Roosting habitat The little brown bat roosts in sheltered places during the day. These roosts can include human structures or natural structures such as tree hollows, wood piles, rocky outcrops, or, occasionally, caves. Species of trees used for roosting include quaking aspen, balsam poplar, oak, and maple. It prefers roosts that are warm and dark. For maternity colonies, females prefer roosts that are . Hibernation habitat The little brown bat hibernates in caves or old mines. Females migrate up to hundreds of kilometers from their summer ranges to reach these hibernacula. It prefers hibernacula in which the relative humidity is greater than 90% and ambient temperatures are above the freezing point. Preferred hibernacula also maintain a constant temperature throughout the winter. Foraging habitat The little brown bat forages along the edges of vegetated habitat. It also forages along the edges of bodies of water or streams. In one study in the Canadian province of Alberta, its foraging activity was significantly higher in old-growth forest than would be expected based on its relative availability. Conservation As of 2021, the little brown bat is evaluated as an endangered species by the IUCN, a dramatic change from 2008 when it was designated as the lowest conservation priority, least concern. Until recently, the species was regarded as one of the most common bats in North America. However, a serious threat to the species has emerged in the form of a fungus-caused disease known as white-nose syndrome. It was one of the first bat species documented with the disease, which now affects at least seven hibernating bat species in the United States and Canada. From 2006 to 2011, over one million little brown bats died from the disease in the Northeastern United States, with winter hibernacula populations declining up to 99%. As of 2017, hibernacula counts for little brown bats in the Northeast had declined by an average of 90%. White-nose syndrome first appeared in New York in 2006; it has steadily diffused from eastern New York, though, until recently, remaining east of the Rocky Mountains. In March 2016, white-nose syndrome was detected on a little brown bat in King County, Washington, representing a jump from the previous westernmost extent of the disease in any bat species. In 2010, Frick et al. predicted a 99% chance of local extinction of little brown bats by the year 2026. 
They also predicted that the pre-white-nose syndrome population of 6.5 million individuals could be reduced to as few as 65,000 (1%) via the disease outbreak. Despite heavy declines, the species has avoided extinction in the Northeast through the persistence of small, localized populations. While the mortality rate of the disease is very high, some individuals that are exposed do survive. In 2010, Kunz and Reichard published a report arguing that the precipitous decline of the little brown bat justified its emergency listing as a federally endangered species under the U.S. Endangered Species Act. However, it is not federally listed as threatened or endangered as of 2018, though several U.S. states list it as endangered (Connecticut, Maine, Massachusetts, New Hampshire, Pennsylvania, Vermont, Virginia), threatened (Tennessee, Wisconsin), or of Special Concern (Michigan, Ohio). The little brown bat was listed as an endangered species by the Committee on the Status of Endangered Wildlife in Canada in February 2012 after an emergency assessment. The emergency designation as endangered was confirmed in November 2013. Relationship to people Little brown bats commonly occupy human structures. Females will situate maternity colonies within buildings. This small body size of this species can make it challenging to prevent individuals from entering a structure, as they can take advantage of gaps or holes as small as × . Once inside a building, a colony of little brown bats can disturb human inhabitants with their vocalizations and production of guano and urine. Large accumulations of guano can provide a growth medium for fungi, including the species that causes histoplasmosis. Concerns about humans becoming affected by bat ectoparasites such as ticks, fleas, or bat bugs are generally unfounded, as parasites that feed on bats are often specific to bats and die without them. Because they are often found in proximity to humans, the little brown bat and the not-closely related big brown bat are the two bat species most frequently submitted for rabies testing in the United States. Little brown bats infrequently test positive for the rabies virus; of the 586 individuals submitted for testing across the United States in 2015, the most recent data available as of 2018, 16 (2.7%) tested positive for the virus. Little brown bats are a species that will use bat houses for their roosts. Landowners will purchase or construct bat houses and install them, hoping to attract bats for various reasons. Some install bat houses in an attempt to negate the effects of removing a colony from a human structure ("rehoming" them into a more acceptable space). While this can be effective for other species, there is not evidence to suggest that this is effective for little brown bats, though it has been shown that little brown bats will choose to occupy artificial bat boxes installed at the sites of destroyed buildings that once housed colonies. Others are attempting to help bats out of concern for them due to the effects of white-nose syndrome. Bat houses are also installed in an attempt to control the bats' insect prey such as mosquitoes or taxa that harm crops. Little brown bats are vulnerable near moving vehicles on roads, either foraging or crossing. Bats can easily be pulled into the slipstreams of faster moving vehicles. When little brown bats cross roads, they approach the road using canopy tree cover and avoid crossing where there is no cover. When the cover is lower, bats cross roads lower. 
References Animal models Bats of Canada Bats of the United States Mammals described in 1831 Little Brown Bat Taxa named by John Eatton Le Conte Articles containing video clips
Little brown bat
Biology
5,243
32,744,201
https://en.wikipedia.org/wiki/Rogers%20polynomials
In mathematics, the Rogers polynomials, also called Rogers–Askey–Ismail polynomials and continuous q-ultraspherical polynomials, are a family of orthogonal polynomials introduced by Leonard James Rogers in the course of his work on the Rogers–Ramanujan identities. They are q-analogs of ultraspherical polynomials, and are the Macdonald polynomials for the special case of the A1 affine root system. Askey and Ismail, and Gasper and Rahman, discuss the properties of Rogers polynomials in detail. Definition The Rogers polynomials can be defined in terms of the q-Pochhammer symbol and the basic hypergeometric series, with x = cos(θ); one standard explicit form is sketched below. References Orthogonal polynomials Q-analogs
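A commonly used explicit form of the continuous q-ultraspherical (Rogers) polynomials, given here as a sketch because the displayed formula was not preserved in the source text (the original may use a different but equivalent normalization):

    C_n(\cos\theta; \beta \mid q) \;=\; \sum_{k=0}^{n} \frac{(\beta;q)_k\,(\beta;q)_{n-k}}{(q;q)_k\,(q;q)_{n-k}}\, e^{i(n-2k)\theta},

where (a;q)_k = \prod_{j=0}^{k-1}(1 - a q^{j}) is the q-Pochhammer symbol. Because the coefficients are symmetric under k → n − k, the sum is real, and it can equivalently be written as a terminating basic hypergeometric series.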
Rogers polynomials
Mathematics
132
59,478,233
https://en.wikipedia.org/wiki/Perpendicular%20paramagnetic%20bond
A perpendicular paramagnetic bond is a type of chemical bond that does not exist under normal atmospheric conditions. Such a phenomenon was first hypothesized through simulation to exist in the atmospheres of white dwarf stars, whose magnetic fields, on the order of 10⁵ teslas, could allow such interactions. In a very strong magnetic field, excited electrons in molecules may be stabilized, causing these molecules to abandon their usual orientation parallel to the magnetic field and instead lie perpendicular to it. Normally, at such extreme temperatures as those near a white dwarf, more common molecular bonds cannot form and existing ones decompose. References Astrophysics Chemical bonding White dwarfs Hypothetical processes Exotic matter Magnetism in astronomy
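A rough order-of-magnitude check, not taken from the source, of why fields of this strength can reshape bonding: the magnetic energy of a single electron magnetic moment becomes comparable to typical covalent bond energies,

    E \;\approx\; \mu_B B \;=\; (9.27\times10^{-24}\ \mathrm{J/T})\,(10^{5}\ \mathrm{T}) \;\approx\; 9.3\times10^{-19}\ \mathrm{J} \;\approx\; 5.8\ \mathrm{eV},

which is on the same scale as ordinary bond energies of a few electronvolts, so the field can compete with, and reorganize, the electronic structure responsible for bonding.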
Perpendicular paramagnetic bond
Physics,Chemistry,Materials_science,Astronomy
143
35,791,919
https://en.wikipedia.org/wiki/20%CE%B1%2C22R-Dihydroxycholesterol
20α,22R-Dihydroxycholesterol, or (3β)-cholest-5-ene-3,20,22-triol is an endogenous, metabolic intermediate in the biosynthesis of the steroid hormones from cholesterol. Cholesterol ((3β)-cholest-5-en-3-ol) is hydroxylated by cholesterol side-chain cleavage enzyme (P450scc) to form 22R-hydroxycholesterol, which is subsequently hydroxylated again by P450scc to form 20α,22R-dihydroxycholesterol, and finally the bond between carbons 20 and 22 is cleaved by P450scc to form pregnenolone ((3β)-3-hydroxypregn-5-en-20-one), the precursor to the steroid hormones. See also 22R-Hydroxycholesterol 27-Hydroxycholesterol References Cholestanes Sterols
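A compact scheme of the three successive P450scc-catalyzed steps described above (a summary sketch; the isocaproaldehyde by-product of the final side-chain cleavage is included for completeness and is not mentioned in the text):

    \text{cholesterol} \;\xrightarrow{\text{P450scc}}\; \text{22R-hydroxycholesterol} \;\xrightarrow{\text{P450scc}}\; 20\alpha,\text{22R-dihydroxycholesterol} \;\xrightarrow[\text{C20–C22 cleavage}]{\text{P450scc}}\; \text{pregnenolone} + \text{isocaproaldehyde}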
20α,22R-Dihydroxycholesterol
Chemistry,Biology
222
628,083
https://en.wikipedia.org/wiki/Thomas%20Digges
Thomas Digges (; c. 1546 – 24 August 1595) was an English mathematician and astronomer. He was the first to expound the Copernican system in English but discarded the notion of a fixed shell of immoveable stars to postulate infinitely many stars at varying distances. He was also the first to postulate the "dark night sky paradox". Life Thomas Digges, born about 1546, was the son of Leonard Digges (c. 1515 – c. 1559), the mathematician and surveyor, and Bridget Wilford, the daughter of Thomas Wilford, esquire, of Hartridge in Cranbrook, Kent, by his first wife, Elizabeth Culpeper, the daughter of Walter Culpeper, esquire. Digges had two brothers, James and Daniel, and three sisters, Mary, who married a man with the surname of Barber; Anne, who married William Digges; and Sarah, whose first husband was surnamed Martin, and whose second husband was John Weston. After the death of his father, Digges grew up under the guardianship of John Dee, a typical Renaissance natural philosopher. In 1583, Lord Burghley appointed Digges, with John Chamber and Henry Savile, to sit on a commission to consider whether England should adopt the Gregorian calendar, as proposed by Dee. Digges served as a member of parliament for Wallingford and also had a military career as a Muster-Master General to the English forces from 1586 to 1594 during the war in the Spanish Netherlands. In his capacity as Muster-Master General he was instrumental in promoting improvements at the Port of Dover. Digges died on 24 August 1595. His last will, in which he specifically excluded both his brother, James Digges, and William Digges, was proved on 1 September. Digges was buried in the chancel of the church of St Mary Aldermanbury, London. Marriage and issue Digges married Anne St Leger (1555–1636), daughter of Sir Warham St Leger and his first wife, Ursula Neville (d. 1575), the fifth daughter of George Neville, 5th Baron Bergavenny, by his third wife, Mary Stafford. In his will he named two surviving sons, Sir Dudley Digges (1583–1639), politician and statesman, and Leonard Digges (1588–1635), poet, and two surviving daughters, Margaret and Ursula. After Digges's death, his widow, Anne, married Thomas Russell of Alderminster in Warwickshire, "whom in 1616 William Shakespeare named as an overseer of his will". Work Digges attempted to determine the parallax of the 1572 supernova observed by Tycho Brahe, and concluded it had to be beyond the orbit of the Moon. This contradicted Aristotle's view of the universe, according to which no change could take place among the fixed stars. In 1576, he published a new edition of his father's perpetual almanac, A Prognostication everlasting. The text written by Leonard Digges for the third edition of 1556 was left unchanged, but Thomas added new material in several appendices. The most important of these was A Perfit Description of the Caelestiall Orbes according to the most aunciente doctrine of the Pythagoreans, latelye revived by Copernicus and by Geometricall Demonstrations approved. Contrary to the Ptolemaic cosmology of the original book by his father, the appendix featured a detailed discussion of the controversial and still poorly known Copernican heliocentric model of the Universe. This was the first publication of that model in English, and a milestone in the popularisation of science. For the most part, the appendix was a loose translation into English of chapters from Copernicus' book De revolutionibus orbium coelestium. 
Thomas Digges went further than Copernicus, however, by proposing that the universe is infinite and contains infinitely many stars; he may have been the first person to do so, predating the similar views of Giordano Bruno (1584) and William Gilbert (1600). Harrison discusses this development in Darkness at Night. Digges's illustration of the Copernican universe depicts the orb of fixed stars extending indefinitely beyond the planetary spheres; its outer inscription, in Elizabethan spelling, describes that orb as extending itself spherically in altitude without end. In 1583, Lord Burghley appointed Digges, along with Henry Savile (Bible translator) and John Chamber, to sit on a commission to consider whether England should adopt the Gregorian calendar, as proposed by John Dee; in fact Britain did not adopt the calendar until 1752. References Sources and further reading Text of the Perfit Description: Johnson, Francis R. and Larkey, Sanford V., "Thomas Digges, the Copernican System and the idea of the Infinity of the Universe in 1576," Huntington Library Bulletin 5 (1934): 69–117. Harrison, Edward Robert (1987) Darkness at Night. Harvard University Press: 211–17. An abridgement of the preceding. Internet version at Dartmouth retrieved on 2 November 2013 Gribbin, John, 2002. Science: A History. Penguin. Johnson, Francis R., Astronomical Thought in Renaissance England: A Study of the English Scientific Writings from 1500 to 1645, Johns Hopkins Press, 1937. Kugler, Martin Astronomy in Elizabethan England, 1558 to 1585: John Dee, Thomas Digges, and Giordano Bruno, Montpellier: Université Paul Valéry, 1982. Vickers, Brian (ed.), Occult & Scientific Mentalities in the Renaissance. Cambridge: Cambridge University Press, 1984. External links Digges, Thomas Thomas Digges, Gentleman and Mathematician John Dee, Thomas Digges and the identity of the mathematician Digges's Mactutor biography Digges, Thomas (1546–1595), History of Parliament 1540s births 1595 deaths 16th-century Calvinist and Reformed Christians 16th-century English mathematicians Alumni of Queens' College, Cambridge John Dee 16th-century English astronomers English MPs 1572–1583 English MPs 1584–1585 Copernican Revolution People from Dover District
Thomas Digges
Astronomy
1,272
47,026,749
https://en.wikipedia.org/wiki/Calvatia%20oblongispora
Calvatia oblongispora is a species of puffball from the genus Calvatia. Found in Brazil, it was described as new to science in 2009. The fruitbody is spherical or nearly so, measuring about in diameter. The thin, fragile peridium is readily detachable from the internal gleba. It is light beige and wrinkled, with a small, short, thin mycelium cord up to 5 mm long. The spores are cylindrical to ellipsoid in shape, hyaline (translucent), and measure 5.4–7.5 by 3.6–4.3 μm. They are covered in small spiny protrusions and have a single oil droplet within. References External links Agaricaceae Fungi of Brazil Fungi described in 2009 Puffballs Fungus species oblongispora
Calvatia oblongispora
Biology
174
21,028,827
https://en.wikipedia.org/wiki/Oil%20megaprojects%20%282020%29
This page summarizes projects that propose to bring more than of new liquid fuel capacity to market with the first production of fuel beginning in 2020. This is part of the Wikipedia summary of Oil Megaprojects. Quick links to other years Detailed list of projects for 2020 References Oil megaprojects Oil fields Proposed energy projects 2020 in technology
Oil megaprojects (2020)
Engineering
70
10,374,695
https://en.wikipedia.org/wiki/NGC%206782
NGC 6782 is a barred spiral galaxy located in the southern constellation of Pavo, at a distance of approximately from the Milky Way. It was discovered on July 12, 1834 by English astronomer John Herschel. John L. E. Dreyer described it as, "considerably faint, considerably small, round, a little brighter middle, 9th magnitude star to south". The morphological classification of NGC 6782 is (R1R′2)SB(r)a, indicating a barred spiral galaxy with a multiple ring system and tightly-wound spiral arms. It is seen nearly face-on, being inclined by an angle of to the line of sight from the Earth. At the galactic core is an almost circular nuclear ring at the inner Lindblad resonance. This is attached to the primary bar, which extends out to a somewhat pointy, diamond-shaped inner ring. It is actually a double-barred galaxy, with an interior bar inside the nuclear ring. A pair of faint spiral arms extend out from the inner ring to the outer parts of the galaxy, where they join a double outer ring system. Both inner rings of the galaxy are undergoing star formation, producing hot OB stars, with little star formation occurring in the remainder. Gallery See also Messier 94 - a similar spiral galaxy References External links Hubble Heritage site: Pictures and description NGC 6782 Pavo (constellation) 6782 63168
NGC 6782
Astronomy
289
6,261,649
https://en.wikipedia.org/wiki/World%20Telecommunication%20and%20Information%20Society%20Day
World Telecommunication and Information Society Day is an international day proclaimed in November 2006 by the International Telecommunication Union Plenipotentiary Conference in Antalya, Turkey, to be celebrated annually on 17 May. History World Telecommunication Day The day had previously been known as 'World Telecommunication Day' to commemorate the founding of the International Telecommunication Union on 17 May 1865. It was instituted by the Plenipotentiary Conference in Malaga-Torremolinos in 1973. The main objective of the day was to raise global awareness of social changes brought about by the Internet and new technologies. It also aims to help reduce the digital divide. World Information Society Day World Information Society Day was an international day proclaimed to be on 17 May by a United Nations General Assembly resolution, following the 2005 World Summit on the Information Society in Tunis. World Telecommunication and Information Society Day In November 2006, the ITU Plenipotentiary Conference in Antalya, Turkey, decided to celebrate both events on 17 May as World Telecommunication and Information Society Day. All previous topics 1969 The role and activities of the Union 1970 Telecommunications and training 1971 Space and Telecommunications 1972 World Telecommunication Network 1973 International Cooperation 1974 Telecommunications and Transportation 1975 Telecommunications and Meteorology 1976 Telecommunications and Information 1977 Telecommunications and Development 1978 Radio Communications 1979 Telecommunications in the Service of Mankind 1980 Rural Telecom 1981 Telecommunications and Health 1982 International Cooperation 1983 One world, One network 1984 Telecommunications: a Broad Vision 1985 Telecom is Good For Development 1986 Partner On The Move 1987 Telecom Serves All Countries 1988 Dissemination of Technological Knowledge in The Electronic Age 1989 International Cooperation 1990 Telecommunications and Industrial Development 1991 Telecommunications and Human Security 1992 Telecommunications and Space: Xintiandi 1993 Telecommunications and Human Development 1994 Telecommunications and Culture 1995 Telecommunications and Environment 1996 Telecommunications and Sports 1997 Telecommunications and Humanitarian Aid 1998 Telecom Trade 1999 E-commerce 2000 Mobile Communications 2001 Internet: Challenges, Opportunities and Prospects 2002 Helping People Bridge the Digital Divide 2003 Helping all Mankind Communicate 2004 Information and Communication Technology: A Path to Sustainable Development 2005 Take Action to Create a Fair Information Society 2006 Advancing Global Cyber Security 2007 Let ICT Benefit The Next Generation 2008 Let ICT Benefit People With Disabilities, and Let All People Enjoy ICT Opportunities 2009 Protect Children's Online Safety 2010 ICT Makes Urban Life Better 2011 ICT Makes Rural Life Better 2012 Information Communication and Women 2013 ICT and Improving Road Safety 2014 Broadband Promotes Sustainable Development 2015 Telecommunications and Information and Communication Technology: Driving Forces of Innovation 2016 Promote ICT Entrepreneurship and Expand Social Impact 2017 Develop Big Data and Expand Influence 2018 Promote the Proper Use of Artificial Intelligence For the Benefit of All Mankind 2019 Narrowing the Standardization Gap 2020 Connectivity Goal 2030: Using ICT to Promote the Achievement of the Sustainable Development Goals 2021 Accelerating Digital Transformation in challenging times 2022 Digital technologies for older persons and healthy ageing 
2024 Digital Innovation for Sustainable Development See also System Administrator Appreciation Day Programmers' Day World Development Information Day World Television Day References External links World Telecommunication and Information Society Day — United Nations World Telecommunication and Information Society Day — International Telecommunication Union Recurring events established in 2006 United Nations General Assembly resolutions Non-profit technology Internet governance International Telecommunication Union May observances United Nations days
World Telecommunication and Information Society Day
Technology
614
56,523,750
https://en.wikipedia.org/wiki/Cyanoalanine
Cyanoalanine (more accurately β-cyano-L-alanine) is an amino acid with the formula NCCH2CH(NH2)CO2H. Like most amino acids, it exists predominantly as the zwitterionic tautomer NCCH2CH(NH3+)CO2−. It is a rare example of a nitrile-containing amino acid. It is a white, water-soluble solid. It can be found in common vetch seeds. Cyanoalanine arises in nature by the action of cyanide on cysteine, catalyzed by L-3-cyanoalanine synthase: HSCH2CH(NH2)CO2H + HCN → NCCH2CH(NH2)CO2H + H2S It is converted to aspartic acid and asparagine enzymatically. References Alpha-Amino acids Nitriles
Cyanoalanine
Chemistry
187
11,128,746
https://en.wikipedia.org/wiki/Verticillium%20dahliae
Verticillium dahliae is a fungal plant pathogen. It causes verticillium wilt in many plant species, causing leaves to curl and discolor. It may cause death in some plants. Over 400 plant species are affected by the Verticillium complex. Management Verticillium dahliae has a wide host range and can persist as microsclerotia in the soil for years, so management via fallowing or crop rotation generally has little success. The exception to this is rotation using broccoli, which has been shown to decrease Verticillium severity and incidence in cauliflower fields. This is likely due to the production of allyl isothiocyanate in broccoli, which can suppress the growth of plant pathogenic fungi. Seed choice may reduce disease presence. Purchasing seed stock from certified Verticillium-free growers and utilizing resistant or partially resistant cultivars can decrease disease incidence. Even resistant cultivars may show symptoms if the field has a high concentration of Verticillium, so site selection is still essential for minimizing disease incidence. Using fertilizers high in nitrogen and overwatering crops, especially early in the season, may increase disease incidence, so proper fertilizer ratios and drip irrigation are recommended. Following harvest, burning crop residues will limit the amount of Verticillium that can enter the soil and overwinter. Hosts and symptoms There are many strains of Verticillium dahliae, which are categorized into vegetative compatibility groups (VCGs). These groups comprise strains that are able to exchange genetic material via anastomosis. Each VCG affects only one or a few hosts, and the virulence of the pathogen varies by host. While individual V. dahliae strains are relatively host-specific, the species as a whole has a wide host range. Verticillium dahliae has a very wide host range, affecting over 300 plant species. Some susceptible crops include Brussels sprouts, cabbage, eggplant, cucumbers, mint, pepper, potatoes, pumpkin, spinach, tomato, watermelon, honeydew, and cantaloupe. Of these, tomato, potato, and eggplant have resistant or tolerant varieties. Symptoms of this disease are seen throughout the plant. Leaves may have abnormal coloration, necrotic areas, wilt, and/or fall off the plant. The stem may have discolored vascular tissue, exhibit rosetting (shortened internodes of the plant caused by reduced growth, resulting in a rosette-like appearance), and/or be stunted. Early senescence and dieback may also occur. Microsclerotia can be seen under a lens as small black structures in the vasculature of living and dead plants. This feature can be used to distinguish V. dahliae from V. albo-atrum, the other verticillium wilt pathogen. Disease cycle Verticillium dahliae invades the host plant via natural wounds or by penetrating the root tissue. Following entry, the pathogen enters the xylem where conidia are spread throughout the host. The plant responds to the pathogen by producing tyloses which block the xylem, resulting in decreased water flow and wilting. When the plant dies, Verticillium survives as mycelia in dead tissue, as long-term resting spores in the form of microsclerotia, or saprophytically in the soil. Microsclerotia can be spread via wind and rain, resulting in infection of previously pathogen-free fields. Additionally, the disease can spread locally from the roots of affected plants to healthy plants, live in the vascular tissue of some resistant species, and spread via wind from host leaf tissue. 
With this pathogen's ability to survive saprophytically or form resting spores that can survive for over a decade, once a site is infected, it will most likely never be Verticillium-free again. Recombination V. dahliae, a fungus in the division Ascomycota, has a strongly clonal population structure. Recombination events have occurred between different clonal lineages, and less frequently within lineages. Two mating types have been identified. Homologs of eight meiosis specific genes are present in the V. dahliae genome. These findings suggest that the capability for meiotic sexual reproduction has been adaptively maintained in the clonal lineages of V. dahliae, and can occasionally be expressed as recombination between genetic markers. Perhaps, as suggested by Wallen and Perlin for Ascomycota fungi generally, in V. dahliae homologous recombination during sexual reproduction functions to repair DNA damage, especially under stressful conditions. References External links Index Fungorum USDA ARS Fungal Database Fungal plant pathogens and diseases Fungi described in 1913 Enigmatic Hypocreales taxa Fungus species
Verticillium dahliae
Biology
1,022
24,111,099
https://en.wikipedia.org/wiki/Psychotropic%20alkylamines
Psychotropic alkylamines are alkylamines that share the defining property of lacking an aromatic nucleus while still being biologically active. While many of these molecules are stimulants, others are antiviral, have competitive NMDA antagonist activity, or are nicotinic receptor antagonists. Alkenylamines Isometheptene Alkanolamines Heptaminol References Amines
Psychotropic alkylamines
Chemistry
88
35,184,684
https://en.wikipedia.org/wiki/Mycoplasma%20haemomuris
Mycoplasma haemomuris, formerly known as Haemobartonella muris and Bartonella muris, is a Gram-negative bacillus. It is known to cause anemia in rats and mice. References Further reading haemomuris
Mycoplasma haemomuris
Biology
59
59,572,014
https://en.wikipedia.org/wiki/SN-22
SN-22 is a chemical compound which acts as a moderately selective agonist at the 5-HT2 family of serotonin receptors, with a Ki of 19 nM at 5-HT2 subtypes versus 514 nM at 5-HT1A receptors. Many related derivatives are known, most of which are ligands for 5-HT1A, 5-HT6 or dopamine D2 receptors or show SSRI activity. See also BRL-54443 LY-334370 LY-367,265 MPMI MPTP Naratriptan N,N-Dimethyltryptamine RU-24969 Sertindole References Serotonin receptor agonists
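For context, a simple calculation on the affinity figures quoted above (arithmetic only, not an additional measurement) shows why the selectivity is described as moderate:

    \frac{K_i(\text{5-HT}_{1A})}{K_i(\text{5-HT}_{2})} = \frac{514\ \text{nM}}{19\ \text{nM}} \approx 27\text{-fold preference for 5-HT}_{2}\text{ receptors.}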
SN-22
Chemistry
153
8,893,870
https://en.wikipedia.org/wiki/NGC%204194
NGC 4194, the Medusa merger, is a galaxy merger in the constellation Ursa Major about away. It was discovered on April 2, 1791 by German-British astronomer William Herschel. Due to its disturbed appearance, it is object 160 in Halton Arp's 1966 Atlas of Peculiar Galaxies. The morphological classification of NGC 4194 is Im, indicating an irregular form. This galaxy consists of a brighter central region spanning an angular size across, with an accompanying system of loops and arcs. Additional material is thinly spread out to a radius of from the central region. There is a tidal tail and regions undergoing high levels of star formation, making this a starburst galaxy. It is a source of strong infrared and radio emission. These features indicate NGC 4194 is a late-stage galaxy merger. A region of extreme star formation across exists in the center of the Eye of Medusa, the central gas-rich region. Within of the dynamic center of NGC 4194, star formation is occurring at a rate of ·yr−1. The star forming regions in this volume range from 5 to 9 million years in age, with the youngest occurring in areas of the highest star formation rate. As of 2014, no galactic nucleus has been detected based on radio emissions, nor have the respective nuclei of the merger galaxies. However, X-ray emission from a black hole in the tidal tail was detected by Chandra in 2009. References Further reading External links Peculiar galaxies Galaxy mergers Luminous infrared galaxies Ursa Major 160 4194 39068 Markarian galaxies 07241
NGC 4194
Astronomy
322