id: int64 (580 to 79M)
url: string (lengths 31 to 175)
text: string (lengths 9 to 245k)
source: string (lengths 1 to 109)
categories: string (160 classes)
token_count: int64 (3 to 51.8k)
12,870,604
https://en.wikipedia.org/wiki/Top%20Tier%20Detergent%20Gasoline
Top Tier Detergent Gasoline and Top Tier Diesel Fuel are performance specifications and trademarks designed and supported by several automakers. BMW, General Motors, Fiat Chrysler Automobiles, Ford, Acura/Honda, Toyota, Volkswagen, Mercedes-Benz, Navistar, Audi, and Volvo support the gasoline standard, while General Motors, Volkswagen, Detroit Diesel, and Navistar support the diesel standard. Top Tier fuels must maintain levels of detergent additives that are believed to result in a higher standard of engine cleanliness and performance as compared to the United States Environmental Protection Agency (EPA) requirement. In addition, Top Tier fuels may not contain metallic additives, which can harm the vehicle emission system and create pollutants. As of 2018, Top Tier Detergent Gasoline is available from 61 licensed retail brands, and Top Tier Diesel Fuel is available from 5 licensed retail brands. Licensed Top Tier fuel retailers use a higher level of detergent additive, which can improve fuel economy and help maintain optimal engine performance. According to an automotive industry spokesman, the regular use of this type of gasoline results in improved engine life. The Top Tier standards must apply to all grades of gasoline or diesel that a company sells, whether economy (low-octane) or premium (high-octane). Purpose of detergents in gasoline Detergent additives serve to prevent the buildup of engine "gunk," which can cause a host of mechanical problems. Automotive journalist Craig Cole writes, "Gasoline is an impure substance refined from a very impure base stock – crude oil. It’s an explosive hydrocarbon cocktail containing all kinds of different chemicals. In addition to its own molecular variability, refiners and retailers incorporate additional substances into the mix, from ethanol to octane enhancers." While General Motors' fuels engineer Andrew Buczynsky claims that no one has identified the exact molecule in gasoline that causes engine buildup, he suggests using Top Tier Detergent Gasoline to keep one's engine cleaner. Engine gunk typically builds up in fuel injectors and on intake valves, and if severe can result in reduced fuel efficiency, acceleration, and power. Left unchecked, engine gunk can also contribute to increased emissions, rough idling, and a tendency to stall, and can therefore lead to more frequent engine repairs. When fuel injectors accumulate deposits, they do not distribute fuel evenly, creating pockets of too much fuel and too little fuel. Too little fuel around the spark plug dampens the combustion that drives the piston downward and may cause a misfire. When the frequency of misfires reaches a certain point, the on-board computer turns on the "service engine" light on the dash. The repair for this type of problem depends on the severity of the deposits. In milder cases, a mechanic may solve the problem by adding a can of fuel-injector cleaner to the gas tank. However, in some cases, the fuel injectors must be replaced. Deposits formed on the intake valves may be removed via walnut shell blasting. In severe cases, a more costly cylinder-head rebuild may be necessary. Certain forms of sulfur that refiners or pipelines may leave in finished gasoline, such as mercaptans and hydrogen sulfide, can contaminate fuel sending units and lead to erratic dashboard fuel gauge readings, which may be expensive to repair. 
However, this problem has become less common since 2006, as manufacturers have been making these units with improved alloys that are less affected by these forms of sulfur. Chris Martin at Honda states, "We've supported it [Top Tier gasoline] because we've seen a benefit from it for our consumers in the long run. . . We don't require that our vehicle owners use Top Tier gas [but it helps] make sure the engines are going to last as long as they could." Characteristics of Top Tier gasoline To be certified as Top Tier, a gasoline must pass a series of performance tests that demonstrate specified levels of: 1) deposit control on intake valves; 2) deposit control on fuel injectors; 3) deposit control on combustion chambers; 4) prevention of intake-valve sticking. Gasoline marketers agree, when they sign on to the Top Tier program, that all their grades of gasoline meet these standards. However, premium grade gasoline may have still higher levels of detergent additives. Typically, Top Tier gasoline will contain two to three times the amount of detergent additives currently required by the EPA. The extra additives are estimated to cost less than a cent per gallon. In addition to the detergent additive requirement, Top Tier gasoline cannot contain metallic additives, because they can be harmful to a car's emissions-control systems. According to auto industry research and automotive journalists, all vehicles will benefit from using Top Tier Detergent Gasoline over gasoline meeting the basic EPA standard. New vehicles will supposedly benefit by keeping their engines clean and running optimally, while older vehicles may benefit from increased engine performance and prolonged vehicle life. History In the late 1980s, automakers became concerned with fuel additives as more advanced fuel injection technology became widely used in new cars. The injectors often became clogged, and the problem was traced to inadequate levels of detergent additives in some gasoline. The automakers began to recommend specific brands of gas with adequate detergent content to their customers. But some fuel marketers were still not using detergents, and in a move supported by the auto industry, the federal government mandated specific levels of additives. The U.S. Environmental Protection Agency (EPA) introduced the minimum gasoline detergent standard in 1995. However, the new regulations had unintended consequences. The new EPA standards required lower levels of detergent additives than were then being used by a few major fuel marketers. When the new regulations came in, most gasoline marketers who had previously provided higher levels of detergents reduced the level of detergents in their gasolines to meet the new standard. The EPA detergent additive levels were designed to meet emissions standards, not engine longevity standards. Automakers said they were seeing persistent problems such as clogged fuel injectors and contaminated combustion chambers, resulting in higher emissions and lower fuel economy. By 2002, the automakers said their repair records suggested that the EPA standard for detergents was not high enough, but the EPA was not responsive when the automakers asked it to raise the standard. These concerns were heightened by plans to introduce a new generation of vehicles that would meet the EPA's “Tier Two” environmental standards for reduced emissions. These vehicles require higher levels of detergents to avoid reduced performance. 
Cars with gasoline direct injection (GDI) have been especially prone to carbon buildup, and car makers recommend fuels with higher detergent levels to combat the problem. At first GDI was mainly available in high-end autos, but it is now being used in mid-range and economy cars, such as the Hyundai Sonata, Ford Focus and Hyundai Accent. In 2004 representatives of BMW, General Motors, Honda, and Toyota got together to specify what makes a good fuel. Using recommendations from the Worldwide Fuel Charter, a global committee of automakers and engine manufacturers, they established a proprietary standard for a class of gasoline called "TOP TIER" Detergent Gasoline. The new standard required increased levels of detergents and restricted metallic content. Volkswagen/Audi joined the group of automakers in 2007. Gas brands can participate and get a TOP TIER license if they meet certain standards, which include performance tests for intake valve and combustion chamber deposits, fuel injector fouling, and intake valve sticking. Additive manufacturers pay for the testing, the cost of which varies from year to year, while gasoline companies pay an annual fee based on the number of stations they operate to participate in the program. In addition to higher detergent levels, Top Tier standards also require that gasoline be free of metallic additives, which can be harmful to the emissions control systems in cars. In October 2017 a Top Tier Diesel Fuel program was launched. Reception Most of the fuel experts and auto mechanics who have publicly commented on Top Tier gasoline recommend it. A 2007 USA Today article quoted three critics who said it has little or no benefit, but the same article quoted three endorsers of the new standard. Tom Magliozzi, co-host of NPR's weekly radio show, Car Talk, said that using Top Tier detergent gasoline is only critical on high-end vehicles. For other vehicles, he and another source said that periodic use of a concentrated engine cleaner every 100,000 miles will "often" clean out carbon buildup. However, journalist and automotive mechanics instructor Jim Kerr says that with some brands of gasoline, deposits can build up on intake valves in less than 10,000 kilometers (6,200 miles). And General Motors fuels engineer Andrew Buczynsky says the various engine-cleaning additives available at auto-parts stores should be used with caution. He said some work but most do not, and that care must be taken when using these additives because some may contaminate the catalytic converter. Also, if too much is used, the additive may cling to valve stems and cause them to hang open. Most mechanics agree that consistent use of a fuel with adequate cleaning ability is best. Magliozzi's co-host, Ray Magliozzi, said that to be sure of preventing buildup on fuel injectors and valves, motorists should use Top Tier gasoline "at least most of the time." Several others agree: mechanic Pam Oakes says Top Tier gas is effective in cleaning carbon from engines and is worth buying. She says she's seen the difference it can make and recommends it to all of her customers. Westside Autos in Clive, Iowa, and Motor Age columnist Larry Hammer also recommend Top Tier for removing carbon build-up, adding that a cleaner engine will also burn fuel more cleanly and therefore produce fewer emissions. Automotive mechanics instructor Jim Kerr concurs: "All gasoline is not created equal . . . Top Tier does have benefits." Availability In 2004 the standard was adopted by ten gasoline distributors. 
Chevron and QuikTrip were first, followed that same year by 76 Stations, Conoco, Phillips 66, Road Ranger, Kwik Trip/Kwik Star, Shell, and MFA Oil Company. Since then, many more gasoline distributors have met the proprietary standard, and TOP TIER gasoline can now be found in gas stations all over the U.S. and Canada. Top Tier is also available from select brands in Canada, El Salvador, Guatemala, Honduras, Mexico, Panama, and Puerto Rico. Meeting this standard allows gasoline marketers to differentiate themselves from their competition. All stations selling the brand must meet Top Tier standards before the brand is qualified. They must pass separate tests measuring the ability of their gasoline to keep intake valves, combustion chambers, and fuel injectors clean, and to prevent intake valves from sticking. Top Tier licensed retail brands As of April 2022, there are two Top Tier Diesel brands and more than 60 Top Tier gasoline brands in the US and Canada. See also Gasoline List of gasoline additives Vehicle emissions control References External links Petroleum products Engine technology Fuel additives
Top Tier Detergent Gasoline
Chemistry,Technology
2,279
52,452,533
https://en.wikipedia.org/wiki/NGC%206717
NGC 6717 (also known as Palomar 9) is a globular cluster in the constellation Sagittarius, and is a member of the Palomar Globular Clusters group. Palomar 9 was discovered by William Herschel on August 7, 1784. It is located about 7,300 parsecs away from Earth. The globular cluster, which has an apparent magnitude of 9.28 and diameter of 9.9 arcminutes, is located just south of the star ν2 Sagittarii. The bright star region on the north-eastern edge has the separate designation IC 4802. References External links NGC 6717 Globular clusters Sagittarius (constellation) 6717
NGC 6717
Astronomy
147
78,872,132
https://en.wikipedia.org/wiki/Li%20%28short%29
The li () in Mandarin, or lei in Cantonese, is a traditional Chinese unit of length. One li equals 10 hao, 1/10 of a fen, 1/1000 of a chi, or 1/3 mm in China. Chinese length units promulgated in 1915 Present law on Chinese length units This law of length measurement was issued by the Chinese government in 1929 and has been in effect since 1 January 1930. The base unit chi is defined to be 1/3 meter. Chinese length units in engineering These units are based on the metric system. The Chinese word for metre is mǐ, which can take the Chinese standard SI prefixes (for "kilo-", "centi-", etc.). A kilometre, however, may also be called gōnglǐ, i.e. a metric lǐ. In the engineering field, traditional units are rounded to metric units. Compounds 差之毫釐,謬以千里 (chāzhīháolí, miùyǐqiānlǐ) See also Li (length) Fen (length) Chinese units of measurement References History of science and technology in China Units of length Customary units of measurement
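To make the chain of conversions above concrete, here is a minimal Python sketch (function and constant names are invented for illustration) that expresses one li in the other units named in the article: 1 chi = 1/3 metre, 1 li = 1/1000 chi = 1/3 mm, and 1 li = 10 hao.

```python
from fractions import Fraction

# Factors from the definitions above, kept as exact fractions of a metre.
CHI_IN_M = Fraction(1, 3)          # 1 chi = 1/3 m
LI_IN_M = CHI_IN_M / 1000          # 1 li = 1/1000 chi = 1/3000 m = 1/3 mm
FEN_IN_M = LI_IN_M * 10            # 1 fen = 10 li
HAO_IN_M = LI_IN_M / 10            # 1 li = 10 hao

def li_to_mm(li: Fraction) -> Fraction:
    """Convert a length given in li to millimetres."""
    return li * LI_IN_M * 1000

if __name__ == "__main__":
    print(li_to_mm(Fraction(1)))   # prints 1/3, matching the 1/3 mm definition
```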
Li (short)
Mathematics
236
67,798,185
https://en.wikipedia.org/wiki/Zero-knowledge%20service
In cloud computing, the term zero-knowledge (or occasionally no-knowledge or zero access) refers to an online service that stores, transfers or manipulates data in a way that maintains a high level of confidentiality, where the data is only accessible to the data's owner (the client), and not to the service provider. This is achieved by encrypting the raw data at the client's side or end-to-end (in case there is more than one client), without disclosing the password to the service provider. This means that neither the service provider, nor any third party that might intercept the data, can decrypt and access the data without prior permission, allowing the client a higher degree of privacy than would otherwise be possible. In addition, zero-knowledge services often strive to hold as little metadata as possible, holding only that data that is functionally needed by the service. The term "zero-knowledge" was popularized by the backup service SpiderOak, which later switched to using the term "no knowledge" to avoid confusion with the computer science concept of zero-knowledge proof. Disadvantages Most cloud storage services keep a copy of the client's password on their servers, allowing clients who have lost their passwords to retrieve and decrypt their data using alternative means of authentication; but since zero-knowledge services do not store copies of clients' passwords, if a client loses their password then their data cannot be decrypted, making it practically unrecoverable. Most cloud storage services are also able to comply with data access requests from law enforcement agencies for similar reasons; zero-knowledge services, however, are unable to do so, since their systems are designed to make clients' data inaccessible without the client's explicit cooperation. References Privacy Computer security Backup software Secure communication Internet terminology
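The client-side encryption described above can be illustrated with a short sketch. This is not how any particular zero-knowledge provider implements it; it is a minimal Python example, assuming the widely used cryptography package, in which a key is derived from the user's password with PBKDF2 so that only the ciphertext and a random salt would ever reach the provider.

```python
import base64
import os

from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC

def derive_key(password: bytes, salt: bytes) -> bytes:
    # Derive a symmetric key from the password; the provider never sees the password itself.
    kdf = PBKDF2HMAC(algorithm=hashes.SHA256(), length=32, salt=salt, iterations=480_000)
    return base64.urlsafe_b64encode(kdf.derive(password))

password = b"correct horse battery staple"   # known only to the client
salt = os.urandom(16)                        # stored alongside the ciphertext

ciphertext = Fernet(derive_key(password, salt)).encrypt(b"my private file contents")
# Only `ciphertext` and `salt` are uploaded; without the password, neither the
# provider nor an interceptor can recover the plaintext.
plaintext = Fernet(derive_key(password, salt)).decrypt(ciphertext)
```

Note that losing the password in this scheme really does mean losing the data, which is the recoverability trade-off described under Disadvantages.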
Zero-knowledge service
Technology
374
58,732,452
https://en.wikipedia.org/wiki/Gogeldrie%20Weir
The Gogeldrie Weir is a heritage-listed former weir and now recreation area and weir at Murrumbidgee River near Narrandera, Leeton Shire, New South Wales, Australia. It was designed and built by the WC & IC from 1958 to 1959. It was added to the New South Wales State Heritage Register on 2 April 1999. History The Gogeldrie Weir was completed in 1959 to divert water from the Murrumbidgee River to the Coleambally Irrigation Area via the Coleambally Canal, and to part of the Murrumbidgee Irrigation Areas and associated irrigation districts via the Stuart Canal. The Coleambally Irrigation Scheme was one of the last major public irrigation schemes undertaken by the government to enable agriculture to expand in the Coleambally area south of the Murrumbidgee. The scheme utilised the regulated flows from the Snowy Scheme and the Blowering Dam. The mechanics of Coleambally were similar to the MIA, with a major diversionary weir at Gogeldrie on the Murrumbidgee distributing water by gravity through networks of canals and channels. The first farms were taken up in 1960. Description The Gogeldrie Weir is one of seven major weirs on the Murrumbidgee River. It is approximately downstream of Narrandera. The weir is between abutments. The weir structure comprises a concrete sill floor reinforced with steel sheet piling cut-off walls; the floor is surmounted by concrete piers and a steel superstructure providing supports for the steel sluice gates. There are six gates, each measuring high and wide, weighing . The gates are opened individually by electric motors placed centrally between piers. The gates move vertically, and the counterweights drop into the counterweight wells allowed for in each of the concrete piers. The original gate control meter was replaced by a computerised meter in 1996. The weir provides a pool level suitable for the diversion of water from the Murrumbidgee River into Coleambally Canal supplying the Coleambally Irrigation Area, and via Coononcoocabil Lagoon into the Stuart Canal to supply part of the Murrumbidgee Irrigation Areas and associated irrigation districts. At full supply level, the weir holds . The effective capacity of both canals for long-term operation is about per day. Heritage listing As at 6 December 2000, Gogeldrie Weir is associated with the Coleambally Irrigation Area and also part of the Murrumbidgee Irrigation Area. It is a major component in the Coleambally Irrigation Scheme, being the diversion weir that controls and diverts water from the Murrumbidgee River to the Coleambally area. The weir is a landmark in the region. Gogeldrie Weir was listed on the New South Wales State Heritage Register on 2 April 1999. See also References Bibliography Attribution External links New South Wales State Heritage Register Narrandera Parks in New South Wales Dams in New South Wales Articles incorporating text from the New South Wales State Heritage Register Weirs Dams completed in 1959 1959 establishments in Australia Murrumbidgee River
Gogeldrie Weir
Environmental_science
634
21,409,235
https://en.wikipedia.org/wiki/Jump%20wire
A jump wire (also known as jumper, jumper wire, DuPont wire) is an electrical wire, or group of them in a cable, with a connector or pin at each end (or sometimes without them, simply "tinned"), which is normally used to interconnect the components of a breadboard or other prototype or test circuit, internally or with other equipment or components, without soldering. Individual jump wires are fitted by inserting their "end connectors" into the slots provided in a breadboard, the header connector of a circuit board, or a piece of test equipment. Types There are different types of jumper wires. Some have the same type of electrical connector at both ends, while others have different connectors. Some common connectors are: Solid tips – are used to connect on/with a breadboard or female header connector. The arrangement of the elements and ease of insertion on a breadboard allows increasing the mounting density of both components and jump wires without fear of short-circuits. The jump wires vary in size and colour to distinguish the different working signals. Crocodile clips – are used, among other applications, to temporarily bridge sensors, buttons and other elements of prototypes with components or equipment that have arbitrary connectors, wires, screw terminals, etc. Banana connectors – are commonly used on test equipment for DC and low-frequency AC signals. Registered jack (RJnn) – are commonly used in telephone (RJ11) and computer networking (RJ45). RCA connectors – are often used for audio, low-resolution composite video signals, or other low-frequency applications requiring a shielded cable. RF connectors – are used to carry radio frequency signals between circuits, test equipment, and antennas. RF jumper cables – a smaller and more bendable corrugated cable used to connect antennas and other components to network cabling. Jumpers are also used in base stations to connect antennas to radio units. Usually the most bendable jumper cable diameter is 1/2". See also Jumper (computing) Pin header References External links What is a 'breadboard'? Lego Electronic Lab Kit Techniques progressive wiring effective construction techniques Electronic design
Jump wire
Engineering
442
76,093,849
https://en.wikipedia.org/wiki/IBM%20Granite
IBM Granite is a series of decoder-only AI foundation models created by IBM. It was announced on September 7, 2023, and an initial paper was published 4 days later. Initially intended for use in IBM's cloud-based data and generative AI platform Watsonx along with other models, IBM later opened the source code of some of its code models. Granite models are trained on datasets curated from the Internet, academic publications, code datasets, and legal and finance documents. Foundation models A foundation model is an AI model trained on broad data at scale such that it can be adapted to a wide range of downstream tasks. Granite's first foundation models were Granite.13b.instruct and Granite.13b.chat. The "13b" in their name refers to the 13 billion parameters they have, fewer than most of the larger models of the time. Later models vary from 3 to 34 billion parameters. On May 6, 2024, IBM released the source code of four variations of Granite Code Models under Apache 2, an open source permissive license that allows completely free use, modification and sharing of the software, and put them on Hugging Face for public use. According to IBM's own report, Granite 8b outperforms Llama 3 on several coding-related tasks within a similar range of parameters. See also Mistral AI, a company that also provides open source models GPT LLaMA Cyc Gemini References External links GitHub page IBM Granite Playground IBM products IBM software Large language models Generative artificial intelligence Artificial neural networks 2023 software Free software Open-source artificial intelligence
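Because the code variants are published on Hugging Face under Apache 2, a model of this family can in principle be loaded with the standard transformers API. The sketch below is illustrative only: the repository id "ibm-granite/granite-8b-code-base" and the prompt are assumptions rather than details taken from the article, and a machine with enough memory (ideally a GPU) is assumed.

```python
# Hypothetical usage sketch: load a Granite code model from Hugging Face with
# the transformers library and complete a short code prompt.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ibm-granite/granite-8b-code-base"   # assumed repository id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "def fibonacci(n):"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```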
IBM Granite
Engineering
329
43,357,825
https://en.wikipedia.org/wiki/SoloPower%20Systems
SoloPower Systems Inc. technology is used to create ultra-lightweight, thin-film, flexible solar panels based on CIGS (copper indium gallium selenide). Originally developed by San Jose, California-based SoloPower Inc., the technology is now owned by SoloPower Systems Inc., a solar panel development and manufacturing company based in Portland, Oregon. SoloPower technology features an electroplating process that utilizes nearly 100% of its materials to manufacture its CIGS cells. Company history SoloPower technology was created by SoloPower Inc., founded by Bulent Basol and Homayoun Talieh in 2005, which began a pilot manufacturing line for SoloPower solar panels in December of that same year. In February 2011, the United States Department of Energy conditionally approved a $197 million federal loan guarantee to the company to help it build and equip a high-volume manufacturing plant in Portland, Oregon. By the time the plant was completed in early 2013, however, the company opted to finance the project with private rather than public funds. In 2012, SoloPower panels became the first CIGS flexible solar panels to obtain both UL and IEC certification. In March 2012, SoloPower modules also set a world-record aperture efficiency of 13.4% for flexible CIGS solar panels, as measured by the US Department of Energy’s National Renewable Energy Laboratory (NREL), and in May 2012, SoloPower technology was announced as a winner of the 2012 TiE50 award in the category of Renewable Energy. In April 2013, following a sharp downturn in market conditions – caused largely by shrinking government investment and stiff competition from low-cost foreign producers – SoloPower Inc. announced plans to close its pilot plant in San Jose, California, and temporarily suspended operations at its Portland plant; in the summer of 2017, the Oregon Department of Energy paid $641,835 of SoloPower's rent in Oregon. In July 2013, the company’s major creditors transferred the SoloPower technology to a new entity named SoloPower Systems Inc., with plans to begin commercially producing SoloPower panels in the summer of 2014. Funding Lead investors in the development of SoloPower technology – Hudson Capital Management, Firsthand Technology Value Fund, Crosslink Capital, and Convexa Capital Ventures – have invested some $200 million over the period. The technology has also received the benefit of loans from the Oregon Department of Energy (which also paid for rent in 2017) and the California Energy Commission, and other state tax incentives. See also SoloPower References External links SoloPower company website Solar energy companies of the United States Photovoltaics manufacturers Thin-film cell manufacturers Technology companies of the United States Companies based in Portland, Oregon Energy companies established in 2005 Renewable resource companies established in 2005 Technology companies established in 2005 2005 establishments in California Energy companies established in 2013 Renewable resource companies established in 2013 2013 establishments in Oregon American brands American companies established in 2005 American companies established in 2013
SoloPower Systems
Engineering
602
43,573,636
https://en.wikipedia.org/wiki/Storage%20security
Storage security is a specialty area of security that is concerned with securing data storage systems and ecosystems and the data that resides on these systems. Introduction According to the Storage Networking Industry Association (SNIA), storage security represents the convergence of the storage, networking, and security disciplines, technologies, and methodologies for the purpose of protecting and securing digital assets. Historically, the focus has been on both the vendor aspects of making storage products more secure and the consumer aspects associated with using storage products in secure ways. The SNIA Dictionary defines storage security as: Technical controls, which may include integrity, confidentiality and availability controls, that protect storage resources and data from unauthorized users and uses. ISO/IEC 27040 provides the following more comprehensive definition for storage security: application of physical, technical and administrative controls to protect storage systems and infrastructure as well as the data stored within them Note 1 to entry: Storage security is focused on protecting data (and its storage infrastructure) against unauthorized disclosure, modification or destruction while assuring its availability to authorized users. Note 2 to entry: These controls may be preventive, detective, corrective, deterrent, recovery or compensatory in nature. Principles of storage security Integrity: Stored data cannot be changed. Confidentiality: Only authorized users will have access to the data locally or through the network. Availability: Manage and minimize the risk of inaccessibility due to deliberate destruction or accidents such as natural disaster, mechanical and power failures. Data sanitization is a practice in which storage media are destroyed on-site. For instance, if a hard drive needs to be upgraded or replaced, it would be considered insecure to sell or recycle the drive, since it is possible traces of the data may still exist even after formatting. Therefore, destroying the drive rather than allowing it to leave the site is a common practice. 
Relevant standards and specifications Applying security to storage systems and ecosystems requires one to have a good working knowledge of an assortment of standards and specifications, including, but not limited to: ISO Guide 73:2009, Risk management — Vocabulary ISO 7498-2:1989, Information technology — Open Systems Interconnection — Basic Reference Model — Part 2: Security Architecture ISO 16609:2004, Banking — Requirements for message authentication using symmetric techniques ISO/PAS 22399:2007, Societal security — Guideline for incident preparedness and operational continuity management ISO/IEC 10116:2006, Information technology — Security techniques — Modes of operation for an n-bit block cipher ISO/TR 10255:2009, Document management applications — Optical disk storage technology, management and standards ISO/TR 18492:2005, Long-term preservation of electronic document-based information ISO 16175-1:2010, Information and documentation — Principles and functional requirements for records in electronic office environments — Part 1: Overview and statement of principles ISO 16175-2:2011, Information and documentation — Principles and functional requirements for records in electronic office environments — Part 2: Guidelines and functional requirements for digital records management systems ISO 16175-3:2010, Information and documentation — Principles and functional requirements for records in electronic office environments — Part 3: Guidelines and functional requirements for records in business systems ISO/IEC 11770 (all parts), Information technology — Security techniques — Key management ISO/IEC 17826:2012, Information technology — Cloud Data Management Interface (CDMI) ISO/IEC 19790:2006, Information technology — Security techniques — Security requirements for cryptographic modules ISO/IEC 24759:2008, Information technology — Security techniques — Test requirements for cryptographic modules ISO/IEC 24775, Information technology — Storage management (to be published) ISO/IEC 27000:2014, Information technology — Security techniques — Information security management systems — Overview and vocabulary ISO/IEC 27001:2013, Information technology — Security techniques — Information security management systems — Requirements ISO/IEC 27002:2013, Information technology — Security techniques — Code of practice for information security controls ISO/IEC 27003:2010, Information technology — Security techniques — Information security management systems implementation guidance ISO/IEC 27005:2008, Information technology — Security techniques — Information security risk management ISO/IEC 27031:2011, Information technology — Security techniques — Guidelines for information and communication technology readiness for business continuity ISO/IEC 27033-1:2009, Information technology — Security techniques — Network security — Part 1: Overview and concepts ISO/IEC 27033-2, Information technology — Security techniques — Network security — Part 2: Guidelines for the design and implementation of network security ISO/IEC 27033-3:2010, Information technology — Security techniques — Network security — Part 3: Reference networking scenarios — Threats, design techniques and control issues ISO/IEC 27033-4:2014, Information technology — Security techniques — Network security — Part 4: Securing communications between networks using security gateways ISO/IEC 27037:2012, Information technology — Security techniques — Guidelines for identification, collection, acquisition, and preservation of digital evidence ISO/IEC/IEEE 
24765-2010, Systems and software engineering — Vocabulary IEEE 1619-2007, IEEE Standard for Wide-Block Encryption for Shared Storage Media IEEE 1619.1-2007, IEEE Standard for Authenticated Encryption with Length Expansion for Storage Devices IEEE 1619.2-2010, IEEE Standard for Cryptographic Protection of Data on Block-Oriented Storage Devices IETF RFC 1813 NFS Version 3 Protocol Specification IETF RFC 3195 Reliable Delivery for syslog IETF RFC 3530 Network File System (NFS) version 4 Protocol IETF RFC 3720 Internet Small Computer Systems Interface (iSCSI) IETF RFC 3723 Securing Block Storage Protocols over IP IETF RFC 3821 Fibre Channel Over TCP/IP (FCIP) IETF RFC 4303 IP Encapsulating Security Payload (ESP) IETF RFC 4595 Use of IKEv2 in the Fibre Channel Security Association Management Protocol IETF RFC 5246 The Transport Layer Security (TLS) Protocol Version 1.2 IETF RFC 5424 The Syslog Protocol IETF RFC 5425 TLS Transport Mapping for Syslog IETF RFC 5426 Transmission of Syslog Messages over UDP IETF RFC 5427 Textual Conventions for Syslog Management IETF RFC 5661 Network File System (NFS) Version 4 Minor Version 1 Protocol IETF RFC 5663 Parallel NFS (pNFS) Block/Volume Layout IETF RFC 5848 Signed Syslog Messages IETF RFC 6012 Datagram Transport Layer Security (DTLS) Transport Mapping for Syslog IETF RFC 6071 IP Security (IPsec) and Internet Key Exchange (IKE) Document Roadmap IETF RFC 6587 Transmission of Syslog Messages over TCP IETF RFC 7146, Securing Block Storage Protocols over IP: RFC 3723 Requirements Update for IPsec v3 ANSI INCITS 400–2004, Information technology — SCSI Object-based Storage Device Commands (OSD) ANSI INCITS 458–2011, Information technology — SCSI Object-Based Storage Device Commands – 2 (OSD-2) ANSI INCITS 461–2010, Fibre Channel — Switch Fabric — 5 (FC-SW-5) ANSI INCITS 462–2010, Information Technology — Fibre Channel - Backbone — 5 (FC-BB-5) ANSI INCITS 463–2010, Fibre Channel — Generic Services — 6 (FC-GS-6) ANSI INCITS 470–2011, Fibre Channel — Framing and Signaling-3 (FC-FS-3) ANSI INCITS 482–2012, Information Technology — ATA/ATAPI Command Set — 2 (ACS-2) ANSI INCITS 496–2012, Information Technology — Fibre Channel — Security Protocols — 2 (FC-SP-2) ANSI INCITS 512–2013, Information Technology — SCSI Block Commands — 3 (SBC-3) NIST FIPS 140–2, Security Requirements for Cryptographic Modules NIST FIPS 197, Advanced Encryption Standard NIST Special Publication 800-38A, Recommendation for Block Cipher Modes of Operation: Three Variants of Ciphertext Stealing for CBC Mode NIST Special Publication 800-38C, Recommendation for Block Cipher Modes of Operation: the CCM Mode for Authentication and Confidentiality NIST Special Publication 800-38D, Recommendation for Block Cipher Modes of Operation: Galois/Counter Mode (GCM) and GMAC NIST Special Publication 800-38E, Recommendation for Block Cipher Modes of Operation: The XTS-AES Mode for Confidentiality on Storage Devices NIST Special Publication 800-57 Part 1, Recommendation for Key Management: Part 1: General (Revision 3) NIST Special Publication 800-57 Part 2, Recommendation for Key Management: Part 2: Best Practices for Key Management Organization NIST Special Publication 800-67, Recommendation for the Triple Data Encryption Algorithm (TDEA) Block Cipher NIST Special Publication 800-88 Revision 1, Guidelines for Media Sanitization, http://nvlpubs.nist.gov/nistpubs/SpecialPublications/NIST.SP.800-88r1.pdf Storage Networking Industry Association (SNIA), Storage Management Initiative – Specification (SMI-S), Version 
1.5, Architecture Book, http://www.snia.org/tech_activities/standards/curr_standards/smi Storage Networking Industry Association (SNIA), SNIA Technical Position: TLS Specification for Storage Systems v1.0, http://www.snia.org/tls Trusted Computing Group, Storage Architecture Core Specification, Version 2.0, November 2011 Trusted Computing Group, Storage Security Subsystem Class: Enterprise, Version 1.0, January 2011 Trusted Computing Group, Storage Security Subsystem Class: Opal, Version 2.0, February 2012 OASIS, Key Management Interoperability Protocol Specification (Version 1.2 or later) OASIS, Key Management Interoperability Protocol Profiles (Version 1.2 or later) Recommendation ITU-T X.1601 (2013), Security framework for cloud computing Recommendation ITU-T Y.3500 | ISO/IEC 17788:2014, Information technology — Cloud computing — Overview and vocabulary External links Storage Networking Industry Association: SNIA Home References Security Information assurance standards
Storage security
Technology
2,121
44,190,450
https://en.wikipedia.org/wiki/C%20cap
The term C cap (C-cap, Ccap) describes an amino acid in a particular position within a protein or polypeptide. The C cap residue of an alpha helix is the last amino acid residue at the C terminus of the helix. More precisely, it is defined as the last residue (i) whose NH group is hydrogen-bonded to the CO group of residue i-4 (or sometimes residue i-3). Because of this, it is sometimes also described as the residue following the helix. Certain motifs occur commonly at or around the C cap, notably the Schellman loop and the niche (protein structural motif). The N cap is the corresponding amino acid residue at the other end of the helix. References Amino acids
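The i to i-4 hydrogen-bond rule above can be expressed as a tiny search over a helix's backbone hydrogen bonds. The following Python sketch is schematic, not a standard structural-biology routine: the input format (a set of (donor_residue, acceptor_residue) pairs for backbone NH to CO bonds) is an assumption made purely for illustration.

```python
def find_c_cap(helical_hbonds: set[tuple[int, int]]) -> int | None:
    """Return the C-cap residue index: the last residue i whose backbone NH
    is hydrogen-bonded to the CO of residue i-4 (the definition given above).

    `helical_hbonds` holds (donor_residue, acceptor_residue) pairs, e.g.
    (12, 8) means the NH of residue 12 bonds to the CO of residue 8.
    """
    candidates = [i for (i, j) in helical_hbonds if i - j == 4]
    return max(candidates) if candidates else None

# Toy helix spanning residues 5-13: NH(i) -> CO(i-4) bonds for i = 9..13
example = {(i, i - 4) for i in range(9, 14)}
assert find_c_cap(example) == 13   # residue 13 is the C cap in this toy case
```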
C cap
Chemistry
151
8,632,359
https://en.wikipedia.org/wiki/Software%20metering
Software metering is the monitoring and controlling of software for analytics and enforcement of agreements. It can be either passive data collection or active restriction. Types Software metering can take different forms: Tracking and maintaining software licenses. Making sure that the number of concurrent users of the software does not exceed the terms of the license. This can include monitoring of concurrent usage of software for real-time enforcement of license limits. Real-time monitoring of all (or selected) applications running on the computers within the organization in order to detect unregistered or unlicensed software and prevent their execution, or limit their execution to within certain hours. The system administrator can configure the software metering agent on each computer in the organization. Fixed planning to allocate software usage to computers according to the policies a company specifies and to maintain a record of usage and attempted usage. A company can check out and check in licenses for mobile users, and can also keep a record of all licenses in use. This is often used when limited license counts are available to avoid violating strict license controls. A method of software licensing where the licensed software automatically records how many times, or for how long, one or more functions in the software are used, and the user pays fees based on this actual usage (also known as 'pay-per-use'). References See also License manager Product activation Software license Systems management System administration Key server (software licensing) System administration Computer systems
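As an illustration of the concurrent-usage form of metering described above, here is a minimal, self-contained Python sketch of a license check-out/check-in counter. It is not based on any particular metering product; the class and method names are invented for the example.

```python
import threading

class ConcurrentLicensePool:
    """Toy license meter: allows at most `seats` simultaneous users."""

    def __init__(self, seats: int):
        self._seats = seats
        self._in_use: set[str] = set()
        self._lock = threading.Lock()

    def check_out(self, user: str) -> bool:
        """Grant a seat if one is free; refuse otherwise."""
        with self._lock:
            if user in self._in_use:
                return True                      # user already holds a seat
            if len(self._in_use) >= self._seats:
                return False                     # license limit reached
            self._in_use.add(user)
            return True

    def check_in(self, user: str) -> None:
        """Release the user's seat so another user can run the software."""
        with self._lock:
            self._in_use.discard(user)

pool = ConcurrentLicensePool(seats=2)
assert pool.check_out("alice") and pool.check_out("bob")
assert not pool.check_out("carol")    # third concurrent user is refused
pool.check_in("alice")
assert pool.check_out("carol")        # a freed seat can be reused
```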
Software metering
Technology,Engineering
293
8,810,974
https://en.wikipedia.org/wiki/Onverwacht%20Group
The Onverwacht Group or Onverwacht Series is a series of greenstone belts and volcanic rock formations from the Archean Eon in the Kaapvaal Craton in South Africa and Eswatini. A well known part of the Onverwacht Series is visible in the Komati valley, located in the east of the Transvaal region. Subdivision The Onverwacht Group can be divided into two subgroups with six formations: Geluk Subgroup Swartkoppie Formation – Kromberg Formation – Hooggenoeg Formation – Tjakastad Subgroup Komatii Formation – Theespruit Formation – Sandspruit Formation – The fossils found in the Onverwacht Series are among the oldest found on Earth. See also Archean life in the Barberton Greenstone Belt Warrawoona Group References Bibliography External links Der Barberton Greenstone Belt und Komatiite Geologic groups of Africa Geologic formations of South Africa Geology of Eswatini Archean Africa Fossiliferous stratigraphic units of Africa Paleontology in South Africa Origin of life
Onverwacht Group
Biology
222
56,570,847
https://en.wikipedia.org/wiki/Nuclear%20shaped%20charge
Nuclear shaped charges are nuclear weapons that focus the energy of their explosion into certain directions, as opposed to a spherical explosion. Edward Teller referred to such concepts as third-generation weapons, the first generation being the atom bomb and the second the H-bomb. The basic concept has been raised on several occasions, with the first known references being part of the Project Orion nuclear-powered spacecraft project in the 1960s. This used beryllium oxide to convert the X-rays released by a small bomb into longer wavelength radiation, which explosively vaporized a tamper material, normally tungsten, causing it to carry away much of the bomb's energy as kinetic energy in the form of tungsten plasma. The same concept was explored as a weapon in the Casaba/Howitzer proposals. The ideas were explored by Los Alamos National Laboratory as part of the Strategic Defense Initiative. Studies and tests Project Orion in the 1960s envisioned the use of nuclear shaped charges for propulsion. The nuclear explosion would turn a tungsten plate into a jet of plasma that would then hit the drive pusher plate. About 85% of the bomb's energy could be directed into the target as plasma, albeit with a very wide cone angle of 22.5 degrees. A 4,000 ton spacecraft would use 5 kiloton charges, and a 10,000 ton spacecraft would use 15 kiloton charges. Orion also researched the possibility of nuclear shaped charges being used as weapons in space warfare. These weapons would have yields of a few kilotons, could convert about 50% of that energy into a plasma jet with a velocity of 280 kilometers per second, and could theoretically achieve beam angles as low as 0.1 radians (5.73 degrees), quite wide but considerably narrower than the propulsion unit. The nuclear shaped charge concept was also studied extensively in the 1980s as part of Project Prometheus, along with bomb-pumped lasers. Using a combination of explosive wave-shaping and "gun-barrel" design, up to 5% of a small nuclear bomb's energy could reportedly be converted into kinetic energy driving a beam of particles with a beam angle of 0.001 radians (0.057 degrees), far more concentrated than the earlier-proposed plasma jet, though this decreases to 1% efficiency at 50 kilotons (half a kiloton of energy in the beam) and efficiency suffers greatly at even higher yields. There has only been one known nuclear shaped charge test, conducted in 1985 as part of Operation Grenadier. During the test, codenamed 'Chamita', the intent was to use a nuclear detonation to accelerate a one kilogram mass of tungsten to one hundred kilometers per second, in the form of small particles focused in a cone-shaped beam. The test succeeded in propelling one kilogram of tungsten/molybdenum particles to seventy kilometers per second, corresponding to the energy of about 0.59 tons of TNT. As the yield of the detonated nuclear device was 8 kilotons, this came out to only 0.007% efficiency. Princeton nuclear physicist Dan L. Fenstermacher stated that there is a fundamental problem associated with the Casaba Howitzer concept that becomes dire at higher yields: a good portion of the bomb's energy inevitably becomes black-body radiation, which would quickly overtake the propelled mass. This poses the risk that most of the particles will be vaporized or even ionized, rendering them useless for dealing damage to the target. He concluded: "The NKEW concept is thus one that may require subkiloton explosives to be feasible... 
Whatever the case may be, it is clear that demonstrating a rush of hypervelocity pellets from a nuclear blast, while perhaps impressive, in no way guarantees that a useful weapon will ever be derived from this concept." References Nuclear weapons
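The Chamita efficiency figures quoted above can be checked with a short back-of-the-envelope calculation, added here for illustration and using the standard convention that one ton of TNT equals 4.184 GJ:

\[
E_k = \tfrac{1}{2} m v^2 = \tfrac{1}{2}\,(1\,\mathrm{kg})\,(7\times 10^4\,\mathrm{m/s})^2 \approx 2.45\times 10^9\,\mathrm{J} \approx 0.59\ \text{tons of TNT}
\]

\[
\frac{E_k}{E_\text{yield}} = \frac{2.45\times 10^9\,\mathrm{J}}{8000 \times 4.184\times 10^9\,\mathrm{J}} \approx 7\times 10^{-5} \approx 0.007\%
\]

Both results agree with the 0.59-ton and 0.007% figures reported for the test.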
Nuclear shaped charge
Physics
789
481,170
https://en.wikipedia.org/wiki/Fiddlehead
Fiddleheads or fiddlehead greens are the furled fronds from a fledgling fern, harvested for use as a vegetable. Left on the plant, each fiddlehead would unroll into a new frond (circinate vernation). As fiddleheads are harvested early in the season, before the frond has opened and reached its full height, they are cut fairly close to the ground. Fiddleheads from brackens contain ptaquiloside, a compound associated with bracken toxicity, and thiaminase. Not all species contain ptaquiloside, such as Diplazium esculentum, a fern with fiddleheads regularly consumed in parts of East Asia, which differs from bracken (Pteridium aquilinum). The fiddlehead resembles the curled ornamentation (called a scroll) on the end of a stringed instrument, such as a fiddle. It is also called a crozier, after the curved staff used by bishops, which has its origins in the shepherd's crook. Varieties The fiddleheads of certain ferns are eaten as a cooked leaf vegetable. The most popular of these are: Bracken, Pteridium aquilinum, found worldwide (Toxic if not cooked fully) Ostrich fern, Matteuccia struthiopteris, found in northern regions worldwide, and the central/eastern part of North America (See health warning) Lady fern, Athyrium filix-femina, throughout most of the temperate northern hemisphere. Cinnamon fern or buckhorn fern, Osmunda cinnamomea, found in the eastern parts of North America, although not so palatable as ostrich fern. Royal fern, Osmunda regalis, found worldwide Midin, or Stenochlaena palustris, found in Sarawak, where it is prized as a local delicacy Zenmai or flowering fern, Osmunda japonica, found in East Asia Vegetable fern, Athyrium esculentum, found throughout Asia and Oceania Fiddleheads' ornamental value makes them very expensive in the temperate regions where they are not abundant. Sources and harvesting Available seasonally, fiddleheads are both foraged and commercially harvested in spring. When picking fiddleheads, it is recommended to take only one third the tops per plant/cluster for sustainable harvest. Each plant produces several tops that turn into fronds. Culinary uses Fiddleheads have been part of traditional diets in much of Northern France since the beginning of the Middle Ages, across Asia, and also among Native Americans for centuries. They are also part of the diet in the Russian Far East where they are often picked in the wild in autumn, preserved in salt over winter, and then consumed in spring. Asian cuisine In Indonesia, young fiddlehead ferns are cooked in a rich coconut sauce spiced with chili pepper, galangal, lemongrass, turmeric leaves and other spices. This dish is called gulai pakis or gulai paku, and originated from the Minangkabau ethnic group of Indonesia. In the Philippines, young fronds of Diplazium esculentum or pakô is a delicacy often made into a salad with tomato, salted egg slices, and a simple vinaigrette dressing. In East Asia, fiddleheads of bracken (Pteridium aquilinum) are eaten as a vegetable, called kogomi () in Japan, gosari () in Korea, and juécài () in China and Taiwan. In Korea, a typical banchan (small side dish) is gosari-namul (), which consists of prepared fernbrake fiddleheads that have been sauteed. It is also a component of the popular dish bibimbap, yukgaejang, and bindae-tteok. In Jeju Island, southernmost island of South Korea, collecting it in April to May is a convention. In Japan, bracken fiddleheads are a prized dish, and roasting the fiddleheads is reputed to neutralize any toxins in the vegetable. 
In Japan, fiddleheads of flowering fern (Osmunda japonica), known as zenmai (), as well as those of the ostrich fern (Matteuccia struthiopteris), known as kogomi (), are commonly eaten in springtime. Fiddleheads in Japan are considered sansai, or wild vegetables. They are also traditionally used to make warabimochi, a Japanese-style dessert. Indian cuisine In the Indian subcontinent, it is found in the Himalayan states of North and Northeast India. In the state of Tripura, it is known as muikhonchok in the Kokborok language. As part of the Tripuri cuisine; fiddlehead fern is prepared by stir frying as bhaja served as a side dish. In Manipur it is known as 'Chekoh' in the local Thadou language. It is usually eaten stir fried with chicken, eggs, prawns or other proteins. In Mandi (Himachal Pradesh) it is called Lingad and used for vegetable pickling. In the Kullu Valley in Himachal Pradesh, it is known locally as and is used to make a pickle . In the Kangra Valley it is called in the Kangri dialect and is eaten as a vegetable. In Chamba it is known as "kasrod". In Kumaon division of Uttarakhand, it is called limbra. In Garhwal division of Uttarakhand, it is called and eaten as a vegetable. In Darjeeling and Sikkim regions, it is called (नियुरो) and is common as a vegetable side dish, often mixed with local cheese and sometimes pickled. In Southern regions of West Bengal it is known as dheki shaak or dheki shaag. In Assam, it is known as (); there it is a popular side dish. In the area of Jammu in Jammu and Kashmir, it's known as kasrod (कसरोड). The most famous Dogra dish is kasrod ka achaar (fiddlehead fern pickle). In Poonch, it is known as 'Kandor'(कंडोर) in local language. In Kishtwar, it is known as (टेड‍‌) in the local language Kishtwari. It is also cooked as a dry vegetable side dish to be eaten with rotis or parathas. In Ramban district of Jammu and Kashmir, it is called "DheeD" in Khah language. Nepali cuisine In Nepal, it is a seasonal food called (नियुरो) or niuro (निउरो). There are three varieties of fiddlehead most commonly found in Nepali cuisine, namely सेती निउरो having whitish green stem, काली निउरो having dark purple stem, and ठूलो निउरो having large green stems. It is served as a vegetable side dish, often cooked in local clarified butter. It is also pickled. North American cooking Ostrich ferns (Matteuccia struthiopteris), known locally as "fiddleheads", grow wild in wet areas of northeastern North America in spring. The Maliseet, Mi'kmaq, and Penobscot peoples of Eastern Canada and Maine have traditionally harvested fiddleheads, and the vegetable was introduced first to the Acadian settlers in the early 18th century, and later to United Empire Loyalists as they began settling in New Brunswick in the 1780s. Fiddleheads remain a traditional dish in these regions, with most commercial harvesting occurring in New Brunswick, Quebec and Maine, and the vegetable is considered particularly emblematic of New Brunswick. North America's largest grower, packer and distributor of wild fiddleheads established Ontario's first commercial fiddlehead farm in Port Colborne in 2006. Fiddlehead-producing areas are also located in Nova Scotia, Vermont and New Hampshire. The Canadian village of Tide Head, New Brunswick bills itself as the "Fiddlehead Capital of the World." Fiddleheads are sold fresh and frozen. Fresh fiddleheads are available in the market for only a few weeks in springtime, and are fairly expensive. 
Pickled and frozen fiddleheads, however, can be found in some shops year-round. The vegetable is typically steamed, boiled and/or sautéed before being eaten hot, with hollandaise sauce, butter, lemon, vinegar and/or garlic, or chilled in salad or with mayonnaise. To cook fiddleheads, it is advised to remove the brown papery husk before washing in several changes of cold water, then boil or steam them. Boiling reduces the bitterness and the content of tannins and toxins. The Centers for Disease Control and Prevention associated a number of food-borne illness cases with fiddleheads in the early 1990s. Although they did not identify a toxin in the fiddleheads, the findings of that case suggest that fiddleheads should be cooked thoroughly before eating. The cooking time recommended by health authorities is 15 minutes if boiled and 10 to 12 minutes if steamed. The cooking method recommended by gourmets is to spread a thin layer in a steam basket and steam lightly, just until tender crisp. Māori cuisine Māori people have historically eaten young fern shoots called pikopiko, which can refer to several species of New Zealand ferns. Constituents Fiddleheads are low in sodium, but rich in potassium. Many ferns also contain the enzyme thiaminase, which breaks down thiamine. This can lead to beriberi, if consumed in extreme excess. Further, there is some evidence that certain varieties of fiddleheads, e.g. bracken (Pteridium genus), are toxic. It is recommended to fully cook fiddleheads to destroy the shikimic acid. Ostrich fern (Matteuccia struthiopteris) is not thought to cause cancer, although there is evidence it contains a toxin unidentified as yet. See also Boyi and Shuqi: two Chinese princes who were said to have famously survived exile in the wilderness for a long while on a diet of fiddleheads References Further reading Barrett, L. E. and Diket, Lin. FiddleMainia. WaveCloud Corporation: 2014. . Lyon, Amy, and Lynne Andreen. In a Vermont Kitchen. HP Books: 1999. . pp 68–69. Strickland, Ron. Vermonters: Oral Histories from Down Country to the Northeast Kingdom. New England Press: 1986. . External links Facts on Fiddleheads, University of Maine, 2018 Canadian cuisine Japanese cuisine Leaf vegetables New England cuisine Perennial vegetables Ferns Spring (season) Vermont cuisine
Fiddlehead
Biology
2,219
7,994,192
https://en.wikipedia.org/wiki/Sumatran%20flying%20squirrel
The Sumatran flying squirrel (Hylopetes winstoni) is a flying squirrel only found on the island of Sumatra. It is listed as data deficient on the IUCN red list. Originally discovered in 1949, it is known only from a single specimen. It is a nocturnal, arboreal creature, spending most of its life in the canopy. The Sumatran flying squirrel is threatened by a restricted range and habitat loss due to logging. Unlike most other flying squirrels, it does not have a membrane connecting to its tail. References External links indonesianfauna.com Hylopetes Mammals of Indonesia Mammals described in 1949 Species known from a single specimen
Sumatran flying squirrel
Biology
132
21,916,773
https://en.wikipedia.org/wiki/First-level%20NUTS%20of%20the%20European%20Union
The Classification of Territorial Units for Statistics (NUTS, for the French ) is a geocode standard for referencing the administrative divisions of countries for statistical purposes. The standard was developed by the European Union. There are three levels of NUTS defined, with two levels of local administrative units (LAUs). Depending on their size, not all countries have every level of division. One of the most extreme cases is Luxembourg, which has only LAUs; the three NUTS divisions each correspond to the entire country itself. There are 92 first-level NUTS regions of the European Union, and 240 second-level NUTS regions. Former member states Below are the first-level NUTS regions of former member states of the European Union. EFTA member states Below are the first-level NUTS regions of EFTA. EU candidates Below are the first-level NUTS regions of candidates of the European Union. See also Local government Regional policy of the European Union Region (Europe) External links Europa – Eurostat – Regions Overview maps of the NUTS and Statistical Regions of Europe – Overview map of EU Countries – NUTS level 1 [Archive] The 104 NUTS-1 EU Regions of 2016 to present 1 Types of geographical division
First-level NUTS of the European Union
Mathematics
239
157,774
https://en.wikipedia.org/wiki/Permafrost
Permafrost () is soil or underwater sediment which continuously remains below for two years or more: the oldest permafrost has been continuously frozen for around 700,000 years. Whilst the shallowest permafrost has a vertical extent of below a meter (3 ft), the deepest is greater than . Similarly, the area of individual permafrost zones may be limited to narrow mountain summits or extend across vast Arctic regions. The ground beneath glaciers and ice sheets is not usually defined as permafrost, so on land, permafrost is generally located beneath a so-called active layer of soil which freezes and thaws depending on the season. Around 15% of the Northern Hemisphere or 11% of the global surface is underlain by permafrost, covering a total area of around . This includes large areas of Alaska, Canada, Greenland, and Siberia. It is also located in high mountain regions, with the Tibetan Plateau being a prominent example. Only a minority of permafrost exists in the Southern Hemisphere, where it is confined to mountain slopes like in the Andes of Patagonia, the Southern Alps of New Zealand, or the highest mountains of Antarctica. Permafrost contains large amounts of dead biomass that have accumulated throughout millennia without having had the chance to fully decompose and release their carbon, making tundra soil a carbon sink. As global warming heats the ecosystem, frozen soil thaws and becomes warm enough for decomposition to start anew, accelerating the permafrost carbon cycle. Depending on conditions at the time of thaw, decomposition can release either carbon dioxide or methane, and these greenhouse gas emissions act as a climate change feedback. The emissions from thawing permafrost will have a sufficient impact on the climate to affect global carbon budgets. It is difficult to accurately predict how much greenhouse gas thawing permafrost will release, because the different thaw processes are still uncertain. There is widespread agreement that the emissions will be smaller than human-caused emissions and not large enough to result in runaway warming. Instead, the annual permafrost emissions are likely comparable to global emissions from deforestation, or to annual emissions of large countries such as Russia, the United States or China. Apart from its climate impact, permafrost thaw brings more risks. Formerly frozen ground often contains enough ice that when it thaws, hydraulic saturation is suddenly exceeded, so the ground shifts substantially and may even collapse outright. Many buildings and other infrastructure were built on permafrost when it was frozen and stable, and so are vulnerable to collapse if it thaws. Estimates suggest that nearly 70% of such infrastructure will be at risk by 2050, and that the associated costs could rise to tens of billions of dollars in the second half of the century. Furthermore, between 13,000 and 20,000 sites contaminated with toxic waste are present in the permafrost, as well as natural mercury deposits, which are all liable to leak and pollute the environment as the warming progresses. Lastly, concerns have been raised about the potential for pathogenic microorganisms surviving the thaw and contributing to future pandemics. However, this is considered unlikely, and a scientific review on the subject describes the risks as "generally low". Classification and extent Permafrost is soil, rock or sediment that is frozen for more than two consecutive years. In practice, this means that permafrost occurs at a mean annual temperature of or below. 
In the coldest regions, the depth of continuous permafrost can exceed . It typically exists beneath the so-called active layer, which freezes and thaws annually, and so can support plant growth, as the roots can only take hold in the soil that's thawed. Active layer thickness is measured during its maximum extent at the end of summer: as of 2018, the average thickness in the Northern Hemisphere is ~, but there are significant regional differences. Northeastern Siberia, Alaska and Greenland have the most solid permafrost with the lowest extent of active layer (less than on average, and sometimes only ), while southern Norway and the Mongolian Plateau are the only areas where the average active layer is deeper than , with the record of . The border between active layer and permafrost itself is sometimes called permafrost table. Around 15% of Northern Hemisphere land that is not completely covered by ice is directly underlain by permafrost; 22% is defined as part of a permafrost zone or region. This is because only slightly more than half of this area is defined as a continuous permafrost zone, where 90%–100% of the land is underlain by permafrost. Around 20% is instead defined as discontinuous permafrost, where the coverage is between 50% and 90%. Finally, the remaining <30% of permafrost regions consists of areas with 10%–50% coverage, which are defined as sporadic permafrost zones, and some areas that have isolated patches of permafrost covering 10% or less of their area. Most of this area is found in Siberia, northern Canada, Alaska and Greenland. Beneath the active layer annual temperature swings of permafrost become smaller with depth. The greatest depth of permafrost occurs right before the point where geothermal heat maintains a temperature above freezing. Above that bottom limit there may be permafrost with a consistent annual temperature—"isothermal permafrost". Continuity of coverage Permafrost typically forms in any climate where the mean annual air temperature is lower than the freezing point of water. Exceptions are found in humid boreal forests, such as in Northern Scandinavia and the North-Eastern part of European Russia west of the Urals, where snow acts as an insulating blanket. Glaciated areas may also be exceptions. Since all glaciers are warmed at their base by geothermal heat, temperate glaciers, which are near the pressure melting point throughout, may have liquid water at the interface with the ground and are therefore free of underlying permafrost. "Fossil" cold anomalies in the geothermal gradient in areas where deep permafrost developed during the Pleistocene persist down to several hundred metres. This is evident from temperature measurements in boreholes in North America and Europe. Discontinuous permafrost The below-ground temperature varies less from season to season than the air temperature, with mean annual temperatures tending to increase with depth due to the geothermal crustal gradient. Thus, if the mean annual air temperature is only slightly below , permafrost will form only in spots that are sheltered (usually with a northern or southern aspect, in the north and south hemispheres respectively) creating discontinuous permafrost. Usually, permafrost will remain discontinuous in a climate where the mean annual soil surface temperature is between . In the moist-wintered areas mentioned before, there may not even be discontinuous permafrost down to . 
Discontinuous permafrost is often further divided into extensive discontinuous permafrost, where permafrost covers between 50 and 90 percent of the landscape and is usually found in areas with mean annual temperatures between , and sporadic permafrost, where permafrost cover is less than 50 percent of the landscape and typically occurs at mean annual temperatures between . In soil science, the sporadic permafrost zone is abbreviated SPZ and the extensive discontinuous permafrost zone DPZ. Exceptions occur in un-glaciated Siberia and Alaska, where the present depth of permafrost is a relic of climatic conditions during glacial ages in which winters were up to colder than those of today. Continuous permafrost At mean annual soil surface temperatures below , the influence of aspect can never be sufficient to thaw permafrost, and a zone of continuous permafrost (abbreviated to CPZ) forms. A line of continuous permafrost in the Northern Hemisphere represents the southernmost border where land is covered by continuous permafrost or glacial ice. The line of continuous permafrost varies around the world, shifting northward or southward due to regional climatic changes. In the Southern Hemisphere, most of the equivalent line would fall within the Southern Ocean if there were land there. Most of the Antarctic continent is overlain by glaciers, under which much of the terrain is subject to basal melting. The exposed land of Antarctica is substantially underlain with permafrost, some of which is subject to warming and thawing along the coastline. Alpine permafrost A range of elevations in both the Northern and Southern Hemispheres is cold enough to support perennially frozen ground: some of the best-known examples include the Canadian Rockies, the European Alps, the Himalaya and the Tien Shan. In general, it has been found that extensive alpine permafrost requires a mean annual air temperature of , though this can vary depending on local topography, and some mountain areas are known to support permafrost at . It is also possible for subsurface alpine permafrost to be covered by warmer, vegetation-supporting soil. Alpine permafrost is particularly difficult to study, and systematic research efforts did not begin until the 1970s. Consequently, there remain uncertainties about its geography. As recently as 2009, permafrost was discovered in a new area – Africa's highest peak, Mount Kilimanjaro ( above sea level and approximately 3° south of the equator). In 2014, a collection of regional estimates of alpine permafrost extent established a global extent of . Yet, by 2014, alpine permafrost in the Andes had not been fully mapped, although its extent has been modeled to assess the amount of water bound up in these areas. Subsea permafrost Subsea permafrost occurs beneath the seabed and exists in the continental shelves of the polar regions. These areas formed during the last Ice Age, when a larger portion of Earth's water was bound up in ice sheets on land and sea levels were low. As the ice sheets melted and became seawater again during the Holocene glacial retreat, coastal permafrost became submerged shelves under relatively warm and salty boundary conditions, compared to surface permafrost. Since then, these conditions have led to the gradual and ongoing decline of subsea permafrost extent. 
Nevertheless, its presence remains an important consideration for the "design, construction, and operation of coastal facilities, structures founded on the seabed, artificial islands, sub-sea pipelines, and wells drilled for exploration and production". Subsea permafrost can also overlay deposits of methane clathrate, which were once speculated to be a major climate tipping point in what was known as the clathrate gun hypothesis, but are now no longer believed to play any role in projected climate change. Past extent of permafrost At the Last Glacial Maximum, continuous permafrost covered a much greater area than it does today, covering all of ice-free Europe south to about Szeged (southeastern Hungary) and the Sea of Azov (then dry land), and East Asia south to present-day Changchun and Abashiri. In North America, only an extremely narrow belt of permafrost existed south of the ice sheet at about the latitude of New Jersey through southern Iowa and northern Missouri, but permafrost was more extensive in the drier western regions, where it extended to the southern border of Idaho and Oregon. In the Southern Hemisphere, there is some evidence for former permafrost from this period in central Otago and Argentine Patagonia, but it was probably discontinuous and associated with tundra conditions. Alpine permafrost also occurred in the Drakensberg during glacial maxima above about . Manifestations Base depth Permafrost extends to a base depth where geothermal heat from the Earth and the mean annual temperature at the surface achieve an equilibrium temperature of . This base depth of permafrost can vary wildly – it is less than a meter (3 ft) in the areas where it is shallowest, yet reaches in the northern Lena and Yana River basins in Siberia. Calculations indicate that the formation time of permafrost greatly slows past the first several metres. For instance, over half a million years was required to form the deep permafrost underlying Prudhoe Bay, Alaska, a time period extending over several glacial and interglacial cycles of the Pleistocene. Base depth is affected by the underlying geology, and particularly by thermal conductivity, which is lower for permafrost in soil than in bedrock. Lower conductivity leaves permafrost less affected by the geothermal gradient, which is the rate of increasing temperature with respect to increasing depth in the Earth's interior. The gradient arises because the Earth's internal thermal energy, generated by radioactive decay of unstable isotopes, flows to the surface by conduction at a rate of ~47 terawatts (TW). Away from tectonic plate boundaries, this corresponds to an average geothermal gradient of 25–30 °C/km (72–87 °F/mi) near the surface (a rough illustrative calculation based on these figures appears below). Massive ground ice When the ice content of permafrost exceeds 250 percent (ice to dry soil by mass) it is classified as massive ice. Massive ice bodies can range in composition, in every conceivable gradation from icy mud to pure ice. Massive icy beds have a thickness of at least 2 m and a short diameter of at least 10 m. The first recorded North American observations of this phenomenon were made by European scientists at the Canning River (Alaska) in 1919. Russian literature provides an earlier date of 1735 and 1739, during the Great North Expedition by P. Lassinius and Khariton Laptev, respectively. Early investigators including I.A. Lopatin, B. Khegbomov, S. Taber and G. Beskow also formulated the original theories for ice inclusion in freezing soils. 
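Returning to the base-depth equilibrium described above: if one assumes a constant geothermal gradient and a steady-state temperature profile, the depth at which the ground first reaches 0 °C can be estimated by dividing the (sub-zero) mean annual surface temperature by the gradient. The following sketch is purely illustrative; the 25–30 °C/km gradient comes from the text, the surface temperatures are hypothetical values, and real profiles are also shaped by ice content, conductivity contrasts and past climate.

```python
# Illustrative, steady-state estimate of permafrost base depth.
# Assumes temperature rises linearly with depth at the geothermal gradient,
# starting from the mean annual ground-surface temperature (MAGST).
# Real profiles are affected by ice content, conductivity and past climate.

def permafrost_base_depth_m(surface_temp_c: float, gradient_c_per_km: float) -> float:
    """Depth (m) at which ground temperature reaches 0 degC, or 0 if the surface is unfrozen."""
    if surface_temp_c >= 0:
        return 0.0
    return (-surface_temp_c / gradient_c_per_km) * 1000.0

if __name__ == "__main__":
    for magst in (-1.0, -5.0, -10.0):          # hypothetical surface temperatures, degC
        for grad in (25.0, 30.0):              # geothermal gradient quoted above, degC/km
            depth = permafrost_base_depth_m(magst, grad)
            print(f"MAGST {magst:+.0f} degC, gradient {grad:.0f} degC/km -> ~{depth:.0f} m")
```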
While there are four categories of ice in permafrost – pore ice, ice wedges (also known as vein ice), buried surface ice and intrasedimental (sometimes also called constitutional) ice – only the last two tend to be large enough to qualify as massive ground ice. These two types usually occur separately, but may be found together, like on the coast of Tuktoyaktuk in western Arctic Canada, where the remains of Laurentide Ice Sheet are located. Buried surface ice may derive from snow, frozen lake or sea ice, aufeis (stranded river ice) and even buried glacial ice from the former Pleistocene ice sheets. The latter hold enormous value for paleoglaciological research, yet even as of 2022, the total extent and volume of such buried ancient ice is unknown. Notable sites with known ancient ice deposits include Yenisei River valley in Siberia, Russia as well as Banks and Bylot Island in Canada's Nunavut and Northwest Territories. Some of the buried ice sheet remnants are known to host thermokarst lakes. Intrasedimental or constitutional ice has been widely observed and studied across Canada. It forms when subterranean waters freeze in place, and is subdivided into intrusive, injection and segregational ice. The latter is the dominant type, formed after crystallizational differentiation in wet sediments, which occurs when water migrates to the freezing front under the influence of van der Waals forces. This is a slow process, which primarily occurs in silts with salinity less than 20% of seawater: silt sediments with higher salinity and clay sediments instead have water movement prior to ice formation dominated by rheological processes. Consequently, it takes between 1 and 1000 years to form intrasedimental ice in the top 2.5 meters of clay sediments, yet it takes between 10 and 10,000 years for peat sediments and between 1,000 and 1,000,000 years for silt sediments. Landforms Permafrost processes such as thermal contraction generating cracks which eventually become ice wedges and solifluction – gradual movement of soil down the slope as it repeatedly freezes and thaws – often lead to the formation of ground polygons, rings, steps and other forms of patterned ground found in arctic, periglacial and alpine areas. In ice-rich permafrost areas, melting of ground ice initiates thermokarst landforms such as thermokarst lakes, thaw slumps, thermal-erosion gullies, and active layer detachments. Notably, unusually deep permafrost in Arctic moorlands and bogs often attracts meltwater in warmer seasons, which pools and freezes to form ice lenses, and the surrounding ground begins to jut outward at a slope. This can eventually result in the formation of large-scale land forms around this core of permafrost, such as palsas – long (), wide () yet shallow (< tall) peat mounds – and the even larger pingos, which can be high and in diameter. Ecology Only plants with shallow roots can survive in the presence of permafrost. Black spruce tolerates limited rooting zones, and dominates flora where permafrost is extensive. Likewise, animal species which live in dens and burrows have their habitat constrained by the permafrost, and these constraints also have a secondary impact on interactions between species within the ecosystem. While permafrost soil is frozen, it is not completely inhospitable to microorganisms, though their numbers can vary widely, typically from 1 to 1000 million per gram of soil. 
The permafrost carbon cycle (Arctic Carbon Cycle) deals with the transfer of carbon from permafrost soils to terrestrial vegetation and microbes, to the atmosphere, back to vegetation, and finally back to permafrost soils through burial and sedimentation due to cryogenic processes. Some of this carbon is transferred to the ocean and other portions of the globe through the global carbon cycle. The cycle includes the exchange of carbon dioxide and methane between terrestrial components and the atmosphere, as well as the transfer of carbon between land and water as methane, dissolved organic carbon, dissolved inorganic carbon, particulate inorganic carbon and particulate organic carbon. Most of the bacteria and fungi found in permafrost cannot be cultured in the laboratory, but the identity of the microorganisms can be revealed by DNA-based techniques. For instance, analysis of 16S rRNA genes from late Pleistocene permafrost samples in eastern Siberia's Kolyma Lowland revealed eight phylotypes, which belonged to the phyla Actinomycetota and Pseudomonadota. "Muot-da-Barba-Peider", an alpine permafrost site in eastern Switzerland, was found to host a diverse microbial community in 2016. Prominent bacteria groups included phylum Acidobacteriota, Actinomycetota, AD3, Bacteroidota, Chloroflexota, Gemmatimonadota, OD1, Nitrospirota, Planctomycetota, Pseudomonadota, and Verrucomicrobiota, in addition to eukaryotic fungi like Ascomycota, Basidiomycota, and Zygomycota. In the presently living species, scientists observed a variety of adaptations for sub-zero conditions, including reduced and anaerobic metabolic processes. Construction on permafrost There are only two large cities in the world built in areas of continuous permafrost (where the frozen soil forms an unbroken, below-zero sheet) and both are in Russia – Norilsk in Krasnoyarsk Krai and Yakutsk in the Sakha Republic. Building on permafrost is difficult because the heat of the building (or pipeline) can spread to the soil, thawing it. As ice content turns to water, the ground's ability to provide structural support is weakened, until the building is destabilized. For instance, during the construction of the Trans-Siberian Railway, a steam engine factory complex built in 1901 began to crumble within a month of operations for these reasons. Additionally, there is no groundwater available in an area underlain with permafrost. Any substantial settlement or installation needs to make some alternative arrangement to obtain water. A common solution is placing foundations on wood piles, a technique pioneered by Soviet engineer Mikhail Kim in Norilsk. However, warming-induced change of friction on the piles can still cause movement through creep, even as the soil remains frozen. The Melnikov Permafrost Institute in Yakutsk found that pile foundations should extend down to to avoid the risk of buildings sinking. At this depth the temperature does not change with the seasons, remaining at about . Two other approaches are building on an extensive gravel pad (usually thick); or using anhydrous ammonia heat pipes. The Trans-Alaska Pipeline System uses heat pipes built into vertical supports to prevent the pipeline from sinking and the Qingzang railway in Tibet employs a variety of methods to keep the ground cool, both in areas with frost-susceptible soil. Permafrost may necessitate special enclosures for buried utilities, called "utilidors". 
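For construction purposes, a first-order estimate of how deep the seasonal thaw will reach beneath a cleared surface is often made with the classical Stefan solution, a standard textbook formula rather than something discussed in this article. The sketch below applies that formula with hypothetical soil parameters (thermal conductivity, dry density and water content are assumed values); it neglects snow cover, vegetation and the surface n-factor corrections used in practice.

```python
import math

# Stefan solution: a standard first-order estimate of seasonal thaw depth.
# x = sqrt(2 * k_t * I_t / (L * rho_d * w))
#   k_t   thermal conductivity of thawed soil [W/(m K)]   (assumed value)
#   I_t   surface thawing index [degC * seconds]          (assumed value)
#   L     latent heat of fusion of water = 334,000 J/kg
#   rho_d dry density of the soil [kg/m3]                 (assumed value)
#   w     gravimetric water (ice) content [fraction]      (assumed value)

L_FUSION = 334_000.0  # J/kg

def stefan_thaw_depth_m(k_t: float, thaw_index_degc_days: float,
                        rho_d: float, w: float) -> float:
    i_t = thaw_index_degc_days * 86_400.0           # degC*days -> degC*seconds
    return math.sqrt(2.0 * k_t * i_t / (L_FUSION * rho_d * w))

if __name__ == "__main__":
    # Hypothetical silty soil: k_t = 1.5 W/(m K), rho_d = 1500 kg/m3, w = 0.25
    for index in (500.0, 1000.0, 1500.0):           # thawing index, degC*days
        depth = stefan_thaw_depth_m(1.5, index, 1500.0, 0.25)
        print(f"thawing index {index:>6.0f} degC*days -> thaw depth ~{depth:.2f} m")
```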
Impacts of climate change Increasing active layer thickness Globally, permafrost warmed by about between 2007 and 2016, with stronger warming observed in the continuous permafrost zone relative to the discontinuous zone. Observed warming was up to in parts of Northern Alaska (early 1980s to mid-2000s) and up to in parts of the Russian European North (1970–2020). This warming inevitably causes permafrost to thaw: active layer thickness has increased in the European and Russian Arctic across the 21st century and at high elevation areas in Europe and Asia since the 1990s. Between 2000 and 2018, the average active layer thickness had increased from ~ to ~, at an average annual rate of ~. In Yukon, the zone of continuous permafrost might have moved poleward since 1899, but accurate records only go back 30 years. The extent of subsea permafrost is decreasing as well; as of 2019, ~97% of permafrost under Arctic ice shelves is becoming warmer and thinner. Based on high agreement across model projections, fundamental process understanding, and paleoclimate evidence, it is virtually certain that permafrost extent and volume will continue to shrink as the global climate warms, with the extent of the losses determined by the magnitude of warming. Permafrost thaw is associated with a wide range of issues, and International Permafrost Association (IPA) exists to help address them. It convenes International Permafrost Conferences and maintains Global Terrestrial Network for Permafrost, which undertakes special projects such as preparing databases, maps, bibliographies, and glossaries, and coordinates international field programmes and networks. Climate change feedback As recent warming deepens the active layer subject to permafrost thaw, this exposes formerly stored carbon to biogenic processes which facilitate its entrance into the atmosphere as carbon dioxide and methane. Because carbon emissions from permafrost thaw contribute to the same warming which facilitates the thaw, it is a well-known example of a positive climate change feedback. Permafrost thaw is sometimes included as one of the major tipping points in the climate system due to the exhibition of local thresholds and its effective irreversibility. However, while there are self-perpetuating processes that apply on the local or regional scale, it is debated as to whether it meets the strict definition of a global tipping point as in aggregate permafrost thaw is gradual with warming. In the northern circumpolar region, permafrost contains organic matter equivalent to 1400–1650 billion tons of pure carbon, which was built up over thousands of years. This amount equals almost half of all organic material in all soils, and it is about twice the carbon content of the atmosphere, or around four times larger than the human emissions of carbon between the start of the Industrial Revolution and 2011. Further, most of this carbon (~1,035 billion tons) is stored in what is defined as the near-surface permafrost, no deeper than below the surface. However, only a fraction of this stored carbon is expected to enter the atmosphere. In general, the volume of permafrost in the upper 3 m of ground is expected to decrease by about 25% per of global warming, yet even under the RCP8.5 scenario associated with over of global warming by the end of the 21st century, about 5% to 15% of permafrost carbon is expected to be lost "over decades and centuries". 
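The stock and loss figures above can be translated into carbon dioxide mass with a simple unit conversion, since one tonne of carbon corresponds to about 3.67 tonnes of CO2. The sketch below applies that conversion to the near-surface stock and the 5–15% loss range quoted in this section; it is arithmetic only, not a projection, and ignores both the timing of release and the fraction emitted as methane.

```python
# Unit conversion only: tonnes of carbon (C) -> tonnes of carbon dioxide (CO2).
# Molar masses: C = 12.011 g/mol, CO2 = 44.009 g/mol.

C_TO_CO2 = 44.009 / 12.011   # ~3.66

NEAR_SURFACE_STOCK_GT_C = 1035.0      # near-surface permafrost carbon, from the text
LOSS_FRACTIONS = (0.05, 0.15)         # 5-15% "over decades and centuries", from the text

for fraction in LOSS_FRACTIONS:
    carbon_gt = NEAR_SURFACE_STOCK_GT_C * fraction
    co2_gt = carbon_gt * C_TO_CO2
    print(f"{fraction:.0%} of {NEAR_SURFACE_STOCK_GT_C:.0f} Gt C "
          f"= {carbon_gt:.0f} Gt C  ~= {co2_gt:.0f} Gt CO2")
```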
The exact amount of carbon that will be released due to warming in a given permafrost area depends on depth of thaw, carbon content within the thawed soil, physical changes to the environment, and microbial and vegetation activity in the soil. Notably, estimates of carbon release alone do not fully represent the impact of permafrost thaw on climate change. This is because carbon can be released through either aerobic or anaerobic respiration, which results in carbon dioxide (CO2) or methane (CH4) emissions, respectively. While methane lasts less than 12 years in the atmosphere, its global warming potential is around 80 times larger than that of CO2 over a 20-year period and about 28 times larger over a 100-year period. While only a small fraction of permafrost carbon will enter the atmosphere as methane, those emissions will cause 40–70% of the total warming caused by permafrost thaw during the 21st century. Much of the uncertainty about the eventual extent of permafrost methane emissions is caused by the difficulty of accounting for the recently discovered abrupt thaw processes, which often increase the fraction of methane emitted over carbon dioxide in comparison to the usual gradual thaw processes. Another factor which complicates projections of permafrost carbon emissions is the ongoing "greening" of the Arctic. As climate change warms the air and the soil, the region becomes more hospitable to plants, including larger shrubs and trees which could not survive there before. Thus, the Arctic is losing more and more of its tundra biomes, yet it gains more plants, which proceed to absorb more carbon. Some of the emissions caused by permafrost thaw will be offset by this increased plant growth, but the exact proportion is uncertain. It is considered very unlikely that this greening could offset all of the emissions from permafrost thaw during the 21st century, and even less likely that it could continue to keep pace with those emissions after the 21st century. Further, climate change also increases the risk of wildfires in the Arctic, which can substantially accelerate emissions of permafrost carbon. Impact on global temperatures Altogether, it is expected that cumulative greenhouse gas emissions from permafrost thaw will be smaller than the cumulative anthropogenic emissions, yet still substantial on a global scale, with some experts comparing them to emissions caused by deforestation. The IPCC Sixth Assessment Report estimates that carbon dioxide and methane released from permafrost could amount to the equivalent of 14–175 billion tonnes of carbon dioxide per of warming. For comparison, by 2019, annual anthropogenic emissions of carbon dioxide alone stood around 40 billion tonnes. A major review published in the year 2022 concluded that if the goal of preventing of warming was realized, then the average annual permafrost emissions throughout the 21st century would be equivalent to the year 2019 annual emissions of Russia. Under RCP4.5, a scenario considered close to the current trajectory and where the warming stays slightly below , annual permafrost emissions would be comparable to year 2019 emissions of Western Europe or the United States, while under the scenario of high global warming and worst-case permafrost feedback response, they would approach year 2019 emissions of China. Fewer studies have attempted to describe the impact directly in terms of warming. 
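One way to read the IPCC range quoted above is in years of present-day emissions: dividing the 14–175 billion tonnes of CO2-equivalent per degree of warming by the roughly 40 billion tonnes of CO2 emitted annually around 2019. The warming levels in the sketch below are illustrative choices, and the comparison deliberately ignores timing, methane share and the uncertainty in both figures.

```python
# Rough comparison of the IPCC per-degree permafrost estimate with annual
# anthropogenic CO2 emissions (~40 Gt CO2/yr around 2019, from the text).

PERMAFROST_GT_CO2E_PER_DEGREE = (14.0, 175.0)   # IPCC AR6 range quoted in the text
ANNUAL_ANTHROPOGENIC_GT_CO2 = 40.0

for warming_degc in (1.5, 2.0, 3.0):            # illustrative warming levels
    low, high = (x * warming_degc for x in PERMAFROST_GT_CO2E_PER_DEGREE)
    print(f"{warming_degc:.1f} degC of warming: {low:.0f}-{high:.0f} Gt CO2e "
          f"~= {low / ANNUAL_ANTHROPOGENIC_GT_CO2:.1f}-"
          f"{high / ANNUAL_ANTHROPOGENIC_GT_CO2:.1f} years of 2019 emissions")
```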
A 2018 paper estimated that if global warming was limited to , gradual permafrost thaw would add around to global temperatures by 2100, while a 2022 review concluded that every of global warming would cause and from abrupt thaw by the years 2100 and 2300. Around of global warming, abrupt (around 50 years) and widespread collapse of permafrost areas could occur, resulting in an additional warming of . Thaw-induced ground instability As the water drains or evaporates, soil structure weakens and sometimes becomes viscous until it regains strength with decreasing moisture content. One visible sign of permafrost degradation is the random displacement of trees from their vertical orientation in permafrost areas. Global warming has been increasing permafrost slope disturbances and sediment supplies to fluvial systems, resulting in exceptional increases in river sediment. On the other hand, disturbance of formerly hard soil increases drainage of water reservoirs in northern wetlands. This can dry them out and compromise the survival of plants and animals adapted to the wetland ecosystem. In high mountains, much of the structural stability can be attributed to glaciers and permafrost. As the climate warms, permafrost thaws, decreasing slope stability and increasing stress through the buildup of pore-water pressure, which may ultimately lead to slope failure and rockfalls. Over the past century, an increasing number of alpine rock slope failure events in mountain ranges around the world have been recorded, and some have been attributed to permafrost thaw induced by climate change. The 1987 Val Pola landslide that killed 22 people in the Italian Alps is considered one such example. In 2002, massive rock and ice falls (up to 11.8 million m3), earthquakes (up to magnitude 3.9 on the Richter scale), floods (up to 7.8 million m3 of water), and rapid rock-ice flow over long distances (up to 7.5 km at 60 m/s) were attributed to slope instability in high mountain permafrost. Permafrost thaw can also result in the formation of frozen debris lobes (FDLs), which are defined as "slow-moving landslides composed of soil, rocks, trees, and ice". This is a notable issue in Alaska's southern Brooks Range, where some FDLs measured over in width, in height, and in length by 2012. As of December 2021, there were 43 frozen debris lobes identified in the southern Brooks Range, where they could potentially threaten both the Trans Alaska Pipeline System (TAPS) corridor and the Dalton Highway, which is the main transport link between Interior Alaska and the Alaska North Slope. Infrastructure As of 2021, there are 1162 settlements located directly atop Arctic permafrost, which host an estimated 5 million people. By 2050, the permafrost layer below 42% of these settlements is expected to thaw, affecting all their inhabitants (currently 3.3 million people). Consequently, a wide range of infrastructure in permafrost areas is threatened by the thaw. By 2050, it is estimated that nearly 70% of global infrastructure located in permafrost areas would be at high risk of permafrost thaw, including 30–50% of "critical" infrastructure. The associated costs could reach tens of billions of dollars by the second half of the century. Reducing greenhouse gas emissions in line with the Paris Agreement is projected to stabilize the risk after mid-century; otherwise, it will continue to worsen. 
In Alaska alone, damages to infrastructure by the end of the century would amount to $4.6 billion (at 2015 dollar value) if RCP8.5, the high-emission climate change scenario, were realized. Over half of this stems from damage to buildings ($2.8 billion), but there is also damage to roads ($700 million), railroads ($620 million), airports ($360 million) and pipelines ($170 million). Similar estimates were made for RCP4.5, a less intense scenario which leads to around by 2100, a level of warming similar to the current projections. In that case, total damages from permafrost thaw are reduced to $3 billion, while damages to roads and railroads are lessened by approximately two-thirds (from $700 million and $620 million to $190 million and $220 million) and damages to pipelines are reduced more than ten-fold, from $170 million to $16 million. Unlike the other costs stemming from climate change in Alaska, such as damages from increased precipitation and flooding, climate change adaptation is not a viable way to reduce damages from permafrost thaw, as it would cost more than the damage incurred under either scenario. In Canada, the Northwest Territories have a population of only 45,000 people in 33 communities, yet permafrost thaw is expected to cost them $1.3 billion over 75 years, or around $51 million a year. In 2006, the cost of adapting Inuvialuit homes to permafrost thaw was estimated at $208/m2 if they were built on pile foundations, and $1,000/m2 if they were not. At the time, the average area of a residential building in the territory was around 100 m2. Thaw-induced damage is also unlikely to be covered by home insurance, and to address this reality, the territorial government currently funds the Contributing Assistance for Repairs and Enhancements (CARE) and Securing Assistance for Emergencies (SAFE) programs, which provide long- and short-term forgivable loans to help homeowners adapt. It is possible that in the future, mandatory relocation would instead take place as the cheaper option. However, it would effectively tear the local Inuit away from their ancestral homelands. Currently, their average personal income is only half that of the median NWT resident, meaning that adaptation costs are already disproportionate for them. By 2022, up to 80% of buildings in some northern Russian cities had already experienced damage. By 2050, the damage to residential infrastructure may reach $15 billion, while total public infrastructure damages could amount to $132 billion. This includes oil and gas extraction facilities, of which 45% are believed to be at risk. Outside of the Arctic, the Qinghai–Tibet Plateau (sometimes known as "the Third Pole") also has an extensive permafrost area. It is warming at twice the global average rate, and 40% of it is already considered "warm" permafrost, making it particularly unstable. The Qinghai–Tibet Plateau has a population of over 10 million people – double the population of permafrost regions in the Arctic – and over 1 million m2 of buildings are located in its permafrost area, as well as 2,631 km of power lines and 580 km of railways. There are also 9,389 km of roads, and around 30% of them are already sustaining damage from permafrost thaw. Estimates suggest that under the scenario most similar to today, SSP2-4.5, around 60% of the current infrastructure would be at high risk by 2090, and simply maintaining it would cost $6.31 billion, with adaptation reducing these costs by 20.9% at most. 
Holding global warming to would reduce these costs to $5.65 billion, and fulfilling the optimistic Paris Agreement target of would save a further $1.32 billion. In particular, fewer than 20% of railways would be at high risk by 2100 under , yet this increases to 60% at , while under SSP5-8.5, this level of risk is reached by mid-century. Release of toxic pollutants For much of the 20th century, it was believed that permafrost would "indefinitely" preserve anything buried there, and this made deep permafrost areas popular locations for hazardous waste disposal. In places like Alaska's Prudhoe Bay oil field, procedures were developed documenting the "appropriate" way to inject waste beneath the permafrost. This means that as of 2023, there are ~4500 industrial facilities in Arctic permafrost areas which either actively process or store hazardous chemicals. Additionally, there are between 13,000 and 20,000 sites which have been heavily contaminated, 70% of them in Russia, and their pollution is currently trapped in the permafrost. About a fifth of both the industrial and the polluted sites (1000 and 2200–4800, respectively) are expected to start thawing in the future even if the warming does not increase from its 2020 levels. Only about 3% more sites would start thawing between now and 2050 under the climate change scenario consistent with the Paris Agreement goals, RCP2.6, but by 2100, about 1100 more industrial facilities and 3500 to 5200 more contaminated sites are expected to start thawing even then. Under the very high emission scenario RCP8.5, 46% of industrial and contaminated sites would start thawing by 2050, and virtually all of them would be affected by the thaw by 2100. Organochlorines and other persistent organic pollutants are of particular concern, due to their potential to repeatedly reach local communities after their re-release through biomagnification in fish. At worst, future generations born in the Arctic would enter life with weakened immune systems due to pollutants accumulating across generations. A notable example of the pollution risks associated with permafrost was the 2020 Norilsk oil spill, caused by the collapse of a diesel fuel storage tank at Norilsk-Taimyr Energy's thermal power plant No. 3. It spilled 6,000 tonnes of fuel onto the land and 15,000 tonnes into the water, polluting the Ambarnaya, the Daldykan and many smaller rivers on the Taimyr Peninsula, even reaching Lake Pyasino, which is a crucial water source in the area. A state of emergency was declared at the federal level. The event has been described as the second-largest oil spill in modern Russian history. Another issue associated with permafrost thaw is the release of natural mercury deposits. An estimated 800,000 tons of mercury are frozen in permafrost soil. According to observations, around 70% of it is simply taken up by vegetation after the thaw. However, if the warming continues under RCP8.5, then permafrost emissions of mercury into the atmosphere would match the current global emissions from all human activities by 2200. Mercury-rich soils also pose a much greater threat to humans and the environment if they thaw near rivers. Under RCP8.5, enough mercury will enter the Yukon River basin by 2050 to make its fish unsafe to eat under EPA guidelines. By 2100, mercury concentrations in the river are projected to double. By contrast, even if mitigation is limited to the RCP4.5 scenario, mercury levels will increase by about 14% by 2100 and will not breach EPA guidelines even by 2300. 
Revival of ancient organisms Microorganisms Bacteria are known for being able to remain dormant to survive adverse conditions, and viruses are not metabolically active outside of host cells in the first place. This has motivated concerns that permafrost thaw could free previously unknown microorganisms, which may be capable of infecting either humans or important livestock and crops, potentially resulting in damaging epidemics or pandemics. Further, some scientists argue that horizontal gene transfer could occur between the older, formerly frozen bacteria, and modern ones, and one outcome could be the introduction of novel antibiotic resistance genes into the genome of current pathogens, exacerbating what is already expected to become a difficult issue in the future. At the same time, notable pathogens like influenza and smallpox appear unable to survive being thawed, and other scientists argue that the risk of ancient microorganisms being both able to survive the thaw and to threaten humans is not scientifically plausible. Likewise, some research suggests that antimicrobial resistance capabilities of ancient bacteria would be comparable to, or even inferior to modern ones. Plants In 2012, Russian researchers proved that permafrost can serve as a natural repository for ancient life forms by reviving a sample of Silene stenophylla from 30,000-year-old tissue found in an Ice Age squirrel burrow in the Siberian permafrost. This is the oldest plant tissue ever revived. The resultant plant was fertile, producing white flowers and viable seeds. The study demonstrated that living tissue can survive ice preservation for tens of thousands of years. History of scientific research Between the middle of the 19th century and the middle of the 20th century, most of the literature on basic permafrost science and the engineering aspects of permafrost was written in Russian. One of the earliest written reports describing the existence of permafrost dates to 1684, when well excavation efforts in Yakutsk were stumped by its presence. A significant role in the initial permafrost research was played by Alexander von Middendorff (1815–1894) and Karl Ernst von Baer, a Baltic German scientist at the University of Königsberg, and a member of the St Petersburg Academy of Sciences. Baer began publishing works on permafrost in 1838 and is often considered the "founder of scientific permafrost research." Baer laid the foundation for modern permafrost terminology by compiling and analyzing all available data on ground ice and permafrost. Baer is also known to have composed the world's first permafrost textbook in 1843, "materials for the study of the perennial ground-ice", written in his native language. However, it was not printed then, and a Russian translation wasn't ready until 1942. The original German textbook was believed to be lost until the typescript from 1843 was discovered in the library archives of the University of Giessen. The 234-page text was available online, with additional maps, preface and comments. Notably, Baer's southern limit of permafrost in Eurasia drawn in 1843 corresponds well with the actual southern limit verified by modern research. Beginning in 1942, Siemon William Muller delved into the relevant Russian literature held by the Library of Congress and the U.S. Geological Survey Library so that he was able to furnish the government an engineering field guide and a technical report about permafrost by 1943. 
That report coined the English term permafrost as a contraction of permanently frozen ground, in what was considered a direct translation of the Russian term vechnaya merzlota (вечная мерзлота). In 1953, this translation was criticized by another USGS researcher, Inna Poiré, as she believed the term had created unrealistic expectations about its stability; more recently, some researchers have argued that "perpetually refreezing" would be a more suitable translation. The report itself was classified (as U.S. Army. Office of the Chief of Engineers, Strategic Engineering Study, no. 62, 1943), until a revised version was released in 1947, which is regarded as the first North American treatise on the subject. Between 11 and 15 November 1963, the First International Conference on Permafrost took place on the grounds of Purdue University in the American town of West Lafayette, Indiana. It involved 285 participants (including "engineers, manufacturers and builders" who attended alongside the researchers) from a range of countries (Argentina, Austria, Canada, Germany, Great Britain, Japan, Norway, Poland, Sweden, Switzerland, the US and the USSR). This marked the beginning of modern scientific collaboration on the subject. Conferences continue to take place every five years. During the fourth conference in 1983, a special meeting between the "Big Four" participant countries (US, USSR, China, and Canada) officially created the International Permafrost Association. In recent decades, permafrost research has attracted more attention than ever due to its role in climate change. Consequently, there has been a massive acceleration in published scientific literature: around 1990, almost no papers were released containing the words "permafrost" and "carbon", while by 2020, around 400 such papers were published every year. References Sources Climate Change 2013 Working Group 1 website. External links International Permafrost Association (IPA) Map of permafrost in Antarctica. Permafrost – what is it? – Alfred Wegener Institute YouTube video 1940s neologisms Cryosphere Geography of the Arctic Geomorphology Montane ecology Patterned grounds Pedology Periglacial landforms
Permafrost
Environmental_science
9,187
3,156,532
https://en.wikipedia.org/wiki/Zinc%20dithiophosphate
Zinc dialkyldithiophosphates (often referred to as ZDDP) are a family of coordination compounds developed in the 1940s that feature zinc bound to the anion of a dialkyldithiophosphoric acid (the same anion found in salts such as ammonium diethyl dithiophosphate). These uncharged compounds are not salts. They are soluble in nonpolar solvents, and the longer-chain derivatives easily dissolve in mineral and synthetic oils used as lubricants. They come under CAS number . In aftermarket oil additives, the percentage of ZDDP ranges approximately between 2 and 15%. Zinc dithiophosphates have many names, including ZDDP, ZnDTP, and ZDP. Applications The main application of ZDDPs is as anti-wear additives in lubricants, including greases, hydraulic oils, and motor oils. ZDDPs also act as corrosion inhibitors and antioxidants. Concentrations in lubricants range from 600 ppm for modern, energy-conserving low-viscosity oils to 2000 ppm in some racing oils. It has been reported that zinc and phosphorus emissions may damage catalytic converters, and standard formulations of lubricating oils for gasoline engines now have reduced amounts of the additive because the API limits its concentration in new API SM and SN oils; however, this affects only 20- and 30-grade "ILSAC" oils. Grades 40 and higher have no regulation regarding the concentration of ZDDP, except for diesel oils meeting the API CJ-4 specification, which have had the level of ZDDP reduced slightly, although most diesel heavy-duty engine oils still have a higher concentration of this additive. Crankcase oils with reduced ZDDP have been cited as causing damage to, or failure of, classic/collector car flat-tappet camshafts and lifters, which undergo very high boundary-layer pressures and/or shear forces at their contact faces, and in other regions such as main bearings, piston rings and pins. Roller camshafts/followers are more commonly used to reduce camshaft lobe friction in modern engines. Additives such as STP Oil Treatment, some racing oils such as PurOl, PennGrade 1 and Valvoline VR-1, and products such as Kixx Hydraulic Oil are available in the retail market with the necessary amount of ZDDP for engines using increased valve spring pressures. Tribofilm formation mechanism Various mechanisms have been proposed for how ZDDP forms protective tribofilms on solid surfaces. In-situ atomic-force microscopy (AFM) experiments show that the growth of ZDDP tribofilms increases exponentially with both the applied pressure and temperature, consistent with a stress-promoted thermal activation reaction rate model. Subsequently, experiments with negligible solid-solid contact demonstrated that the film formation rate depends on the applied shear stress. Synthesis and structure With the formula Zn[S2P(OR)2]2, zinc dithiophosphates feature diverse R groups. Typically, R is a branched or linear alkyl group between 1 and 14 carbons in length. Examples include 2-butyl, pentyl, hexyl, 1,3-dimethylbutyl, heptyl, octyl, isooctyl (2-ethylhexyl), 6-methylheptyl, 1-methylpropyl, dodecylphenyl, and others. A mix of zinc dialkyl(C3-C6)dithiophosphates comes under CAS number . Zinc dithiophosphates are produced in two steps. First, phosphorus pentasulfide is treated with suitable alcohols (ROH) to give the dithiophosphoric acid. A wide variety of alcohols can be employed, which allows the lipophilicity of the final zinc product to be fine-tuned. 
The resulting dithiophosphoric acid is then neutralized by adding zinc oxide; the two steps are:
P2S5 + 4 ROH → 2 (RO)2PS2H + H2S
2 (RO)2PS2H + ZnO → Zn[S2P(OR)2]2 + H2O
Structural chemistry In Zn[S2P(OR)2]2, the zinc has tetrahedral geometry. This monomeric compound Zn[S2P(OR)2]2 exists in equilibrium with dimers, oligomers, and polymers [Zn[S2P(OR)2]2]n (n > 1). For example, zinc diethyldithiophosphate, Zn[S2P(OEt)2]2, crystallizes as a polymeric solid consisting of linear chains. Reaction of Zn[S2P(OR)2]2 with additional zinc oxide gives rise to the oxygen-centered cluster Zn4O[S2P(OR)2]6, which adopts the structure seen for basic zinc acetate. See also Transition metal dithiophosphate complexes References dithiophosphate Phosphorothioates Lubricants Corrosion inhibitors
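Lubricant specifications usually quote elemental zinc and phosphorus contents in ppm rather than a ZDDP treat rate, and the two are related through the molar mass of the particular ZDDP used. The sketch below computes the zinc and phosphorus mass fractions of one representative compound, zinc di-(2-ethylhexyl)dithiophosphate, from standard atomic masses, and converts assumed treat rates into elemental ppm; the choice of compound and the treat rates are illustrative assumptions, not figures from this article.

```python
# Elemental Zn and P content of a representative ZDDP:
# zinc di-(2-ethylhexyl)dithiophosphate, Zn[S2P(O-C8H17)2]2 -> C32H68O4P2S4Zn.
# Atomic masses in g/mol.
ATOMIC_MASS = {"C": 12.011, "H": 1.008, "O": 15.999, "P": 30.974, "S": 32.06, "Zn": 65.38}
FORMULA = {"C": 32, "H": 68, "O": 4, "P": 2, "S": 4, "Zn": 1}

molar_mass = sum(ATOMIC_MASS[el] * n for el, n in FORMULA.items())
zn_frac = ATOMIC_MASS["Zn"] * FORMULA["Zn"] / molar_mass
p_frac = ATOMIC_MASS["P"] * FORMULA["P"] / molar_mass
print(f"molar mass ~{molar_mass:.0f} g/mol, Zn {zn_frac:.1%}, P {p_frac:.1%}")

# Illustrative treat rates (mass fraction of ZDDP in the finished oil).
for treat_rate in (0.005, 0.010, 0.015):
    print(f"{treat_rate:.1%} ZDDP -> ~{treat_rate * zn_frac * 1e6:.0f} ppm Zn, "
          f"~{treat_rate * p_frac * 1e6:.0f} ppm P")
```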
Zinc dithiophosphate
Chemistry
1,096
10,020,856
https://en.wikipedia.org/wiki/Overlay%20control
In semiconductor manufacturing, overlay control is the control of the pattern-to-pattern alignment between the successive layers printed on a silicon wafer. Silicon wafers are currently manufactured in a sequence of steps, each stage placing a pattern of material on the wafer; in this way transistors, contacts, etc., all made of different materials, are laid down. In order for the final device to function correctly, these separate patterns must be aligned correctly – for example, contacts, lines and transistors must all line up. Overlay control has always played an important role in semiconductor manufacturing, helping to monitor layer-to-layer alignment on multi-layer device structures. Misalignment of any kind can cause short circuits and connection failures, which in turn impact fab yield and profit margins. Overlay control has become even more critical now because the combination of increasing pattern density and innovative techniques such as double patterning and 193 nm immersion lithography creates a novel set of pattern-based yield challenges at the 45 nm technology node and below. This combination causes error budgets to shrink below 30 percent of design rules, where existing overlay metrology solutions cannot meet total measurement uncertainty (TMU) requirements. Overlay metrology solutions with both higher measurement accuracy/precision and process robustness are key factors when addressing increasingly tighter overlay budgets. Higher-order overlay control and in-field metrology using smaller, micro-grating or other novel targets are becoming essential for successful production ramps and higher yields at 45 nm and beyond. Examples of widely adopted overlay measurement tools worldwide are KLA-Tencor's Archer series and Nanometrics' Caliper series of overlay metrology platforms. Semiconductor device fabrication
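In practice, raw overlay measurements are usually decomposed into correctable terms before the residual is compared against the budget. A common simplified model, assumed here purely for illustration and not tied to any specific commercial tool, expresses the misregistration at each site as a combination of translation, scaling (magnification) and rotation; higher-order overlay control extends the same idea with quadratic and cubic terms. The sketch below fits such a linear model to synthetic measurements by least squares.

```python
import numpy as np

# Simplified linear overlay model (translation, scaling, rotation):
#   dx = tx + mx * x - r * y
#   dy = ty + my * y + r * x
# Higher-order overlay control adds quadratic/cubic terms to the same idea.
rng = np.random.default_rng(0)

# Synthetic wafer: measurement sites on a grid, assumed "true" correctables, plus noise.
x, y = np.meshgrid(np.linspace(-100, 100, 5), np.linspace(-100, 100, 5))
x, y = x.ravel(), y.ravel()                               # site coordinates, mm
true = dict(tx=3.0, ty=-2.0, mx=0.02, my=0.015, r=0.01)   # nm, nm, nm/mm, nm/mm, nm/mm
dx = true["tx"] + true["mx"] * x - true["r"] * y + rng.normal(0, 0.5, x.size)
dy = true["ty"] + true["my"] * y + true["r"] * x + rng.normal(0, 0.5, x.size)

# Stack both equations into one least-squares system for (tx, ty, mx, my, r).
zeros, ones = np.zeros_like(x), np.ones_like(x)
a = np.vstack([np.column_stack([ones, zeros, x, zeros, -y]),
               np.column_stack([zeros, ones, zeros, y, x])])
b = np.concatenate([dx, dy])
params, *_ = np.linalg.lstsq(a, b, rcond=None)
residual = b - a @ params
print("fitted [tx, ty, mx, my, r]:", np.round(params, 3))
print(f"3-sigma residual overlay: {3 * residual.std():.2f} nm")
```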
Overlay control
Materials_science
348
43,371,242
https://en.wikipedia.org/wiki/International%20Society%20for%20Stem%20Cell%20Research
The International Society for Stem Cell Research (ISSCR) is an independent 501(c)(3) nonprofit organization based in Evanston, Illinois, United States. The organization's mission is to promote excellence in stem cell science and applications to human health. History The International Society for Stem Cell Research was formed in 2002 (incorporated on March 30, 2001) to foster the exchange of information on stem cell research. Leonard Zon, professor of pediatrics at Harvard Medical School, served as the organization's first president. In June 2003, the International Society for Stem Cell Research held its first convention. More than 600 scientists attended, many of whom expressed frustration over restrictions that President George W. Bush's administration had placed on the field of stem-cell research, slowing the pace of research. Scientists who were leaders in their fields were prohibited from using funding from the National Institutes of Health to conduct certain experiments that could provide significant medical achievements. As a service to the field, in 2006, the ISSCR developed guidelines that address the international diversity of cultural, political, legal, and ethical perspectives related to stem cell research and its translation to medicine. The guidelines were designed to underscore widely shared principles in science that call for rigor, oversight, and transparency in all areas of practice. Adherence to the ISSCR guidelines would provide assurance that stem cell research is conducted with scientific and ethical integrity and that new therapies are evidence-based. In response to advances in science, the guidelines were updated in 2008, and again in 2016, to encompass a broader and more expansive scope of research and clinical endeavor than before, imposing rigor on all stages of research, addressing the cost of regenerative medicine products, and highlighting the need for accurate and effective public communication. The 2016 Guidelines for Stem Cell Research and Clinical Translation have been adopted by researchers, clinicians, organizations, and institutions around the world. In 2013, the Society's official journal, Stem Cell Reports, was established; it is published monthly by Cell Press on the Society's behalf. In March 2015, scientists, including an inventor of CRISPR, urged a worldwide hold on germline gene therapy, writing that "scientists should avoid even attempting, in lax jurisdictions, germline genome modification for clinical application in humans" until the full implications "are discussed among scientific and governmental organizations". After the publication that a Chinese group had used CRISPR to modify a gene in human embryos, the group repeated their call for a suspension of "attempts at human clinical germ-line genome editing while extensive scientific analysis of the potential risks is conducted, along with broad public discussion of the societal and ethical implications." The ISSCR’s Annual Meetings are the largest stem cell research conferences in the world, drawing nearly 3,900 attendees in 2020 for the organization's first global, virtual event, ISSCR 2020 Digital . The ISSCR’s membership includes international leaders of stem cell research and regenerative medicine representing more than 70 countries worldwide. 
In 2021, the ISSCR published an update to its internationally recognized Guidelines for Stem Cell Research and Clinical Translation, which address the international diversity of cultural, political, legal, and ethical issues associated with stem cell research and its translation to medicine. In 2022, the Society hosted its first hybrid annual meeting in San Francisco, USA, and launched ISSCR.digital, which offers scientific education and opportunities to network and build new connections with the global community. References External links International conferences Bioethics Stem cell research Ethics of science and technology Organizations established in 2001 Non-profit organizations based in Chicago Academic and educational organizations in Chicago
International Society for Stem Cell Research
Chemistry,Technology,Biology
731
1,208,872
https://en.wikipedia.org/wiki/Shannon%27s%20source%20coding%20theorem
In information theory, Shannon's source coding theorem (or noiseless coding theorem) establishes the statistical limits to possible data compression for data whose source is an independent identically-distributed random variable, and the operational meaning of the Shannon entropy. Named after Claude Shannon, the source coding theorem shows that, in the limit, as the length of a stream of independent and identically-distributed random variable (i.i.d.) data tends to infinity, it is impossible to compress such data such that the code rate (average number of bits per symbol) is less than the Shannon entropy of the source, without it being virtually certain that information will be lost. However, it is possible to get the code rate arbitrarily close to the Shannon entropy, with negligible probability of loss. The source coding theorem for symbol codes places an upper and a lower bound on the minimal possible expected length of codewords as a function of the entropy of the input word (which is viewed as a random variable) and of the size of the target alphabet. Note that, for data that exhibits more dependencies (whose source is not an i.i.d. random variable), the Kolmogorov complexity, which quantifies the minimal description length of an object, is more suitable for describing the limits of data compression. Shannon entropy takes into account only frequency regularities, while Kolmogorov complexity takes into account all algorithmic regularities, so in general the latter is smaller. On the other hand, if an object is generated by a random process in such a way that it has only frequency regularities, entropy is close to complexity with high probability (Shen et al. 2017). Statements Source coding is a mapping from (a sequence of) symbols from an information source to a sequence of alphabet symbols (usually bits) such that the source symbols can be exactly recovered from the binary bits (lossless source coding) or recovered within some distortion (lossy source coding). This is one approach to data compression. Source coding theorem In information theory, the source coding theorem (Shannon 1948) informally states that (MacKay 2003, pg. 81, Cover 2006, Chapter 5): N i.i.d. random variables each with entropy H(X) can be compressed into more than N·H(X) bits with negligible risk of information loss, as N → ∞; but conversely, if they are compressed into fewer than N·H(X) bits it is virtually certain that information will be lost. The coded sequence represents the compressed message in a biunivocal way, under the assumption that the decoder knows the source. From a practical point of view, this hypothesis is not always true. Consequently, when the entropy encoding is applied, the transmitted message is . Usually, the information that characterizes the source is inserted at the beginning of the transmitted message. Source coding theorem for symbol codes Let Σ1 and Σ2 denote two finite alphabets and let Σ1* and Σ2* denote the set of all finite words from those alphabets (respectively). Suppose that X is a random variable taking values in Σ1 and let f be a uniquely decodable code from Σ1* to Σ2*, where |Σ2| = a. Let S denote the random variable given by the length of codeword f(X). If f is optimal in the sense that it has the minimal expected word length for X, then (Shannon 1948): H(X)/log2(a) ≤ E[S] < H(X)/log2(a) + 1, where E denotes the expected value operator. Proof: source coding theorem Given X is an i.i.d. source, its time series X1, ..., Xn is i.i.d. with entropy H(X) in the discrete-valued case and differential entropy in the continuous-valued case. The source coding theorem states that for any ε > 0, i.e. 
for any rate larger than the entropy of the source, there is large enough and an encoder that takes i.i.d. repetition of the source, , and maps it to binary bits such that the source symbols are recoverable from the binary bits with probability of at least . Proof of Achievability. Fix some , and let The typical set, , is defined as follows: The asymptotic equipartition property (AEP) shows that for large enough , the probability that a sequence generated by the source lies in the typical set, , as defined approaches one. In particular, for sufficiently large , can be made arbitrarily close to 1, and specifically, greater than (See AEP for a proof). The definition of typical sets implies that those sequences that lie in the typical set satisfy: The probability of a sequence being drawn from is greater than . , which follows from the left hand side (lower bound) for . , which follows from upper bound for and the lower bound on the total probability of the whole set . Since bits are enough to point to any string in this set. The encoding algorithm: the encoder checks if the input sequence lies within the typical set; if yes, it outputs the index of the input sequence within the typical set; if not, the encoder outputs an arbitrary digit number. As long as the input sequence lies within the typical set (with probability at least ), the encoder does not make any error. So, the probability of error of the encoder is bounded above by . Proof of converse: the converse is proved by showing that any set of size smaller than (in the sense of exponent) would cover a set of probability bounded away from . Proof: Source coding theorem for symbol codes For let denote the word length of each possible . Define , where is chosen so that . Then where the second line follows from Gibbs' inequality and the fifth line follows from Kraft's inequality: so . For the second inequality we may set so that and so and and so by Kraft's inequality there exists a prefix-free code having those word lengths. Thus the minimal satisfies Extension to non-stationary independent sources Fixed rate lossless source coding for discrete time non-stationary independent sources Define typical set as: Then, for given , for large enough, . Now we just encode the sequences in the typical set, and usual methods in source coding show that the cardinality of this set is smaller than . Thus, on an average, bits suffice for encoding with probability greater than , where and can be made arbitrarily small, by making larger. See also Channel coding Error exponent Noisy-channel coding theorem References Information theory Coding theory Data compression Presentation layer protocols Mathematical theorems in theoretical computer science Articles containing proofs
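The symbol-code bound can also be checked numerically: for a binary target alphabet, an optimal prefix code such as a Huffman code has an expected length of at least H(X) and less than H(X) + 1 bits. The sketch below builds a Huffman code for an arbitrary example distribution (the probabilities are assumed purely for illustration) and verifies the inequality.

```python
import heapq
import math

def entropy_bits(probs):
    """Shannon entropy in bits."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def huffman_lengths(probs):
    """Codeword lengths of a binary Huffman code for the given probabilities."""
    heap = [(p, i, {i: 0}) for i, p in enumerate(probs)]   # (prob, tiebreak, symbol -> depth)
    heapq.heapify(heap)
    while len(heap) > 1:
        p1, _, d1 = heapq.heappop(heap)
        p2, i2, d2 = heapq.heappop(heap)
        merged = {s: d + 1 for s, d in {**d1, **d2}.items()}   # one level deeper after merge
        heapq.heappush(heap, (p1 + p2, i2, merged))
    return heap[0][2]

probs = [0.4, 0.2, 0.15, 0.15, 0.1]        # assumed example distribution
lengths = huffman_lengths(probs)
expected_len = sum(probs[s] * l for s, l in lengths.items())
h = entropy_bits(probs)
print(f"H(X) = {h:.3f} bits, E[length] = {expected_len:.3f} bits")
assert h <= expected_len < h + 1           # the symbol-code bound
```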
Shannon's source coding theorem
Mathematics,Technology,Engineering
1,289
42,966,879
https://en.wikipedia.org/wiki/Rewrite%20order
In theoretical computer science, in particular in automated reasoning about formal equations, reduction orderings are used to prevent endless loops. Rewrite orders, and, in turn, rewrite relations, are generalizations of this concept that have turned out to be useful in theoretical investigations. Motivation Intuitively, a reduction order R relates two terms s and t if t is properly "simpler" than s in some sense. For example, simplification of terms may be a part of a computer algebra program, and may be using the rule set { x+0 → x , 0+x → x , x*0 → 0, 0*x → 0, x*1 → x , 1*x → x }. In order to prove impossibility of endless loops when simplifying a term using these rules, the reduction order defined by "sRt if term t is properly shorter than term s" can be used; applying any rule from the set will always properly shorten the term. In contrast, to establish termination of "distributing-out" using the rule x*(y+z) → x*y+x*z, a more elaborate reduction order will be needed, since this rule may blow up the term size due to duplication of x. The theory of rewrite orders aims at helping to provide an appropriate order in such cases. Formal definitions Formally, a binary relation (→) on the set of terms is called a rewrite relation if it is closed under contextual embedding and under instantiation; formally: if l→r implies u[lσ]p→u[rσ]p for all terms l, r, u, each path p of u, and each substitution σ. If (→) is also irreflexive and transitive, then it is called a rewrite ordering, or rewrite preorder. If the latter (→) is moreover well-founded, it is called a reduction ordering, or a reduction preorder. Given a binary relation R, its rewrite closure is the smallest rewrite relation containing R. A transitive and reflexive rewrite relation that contains the subterm ordering is called a simplification ordering. Properties The converse, the symmetric closure, the reflexive closure, and the transitive closure of a rewrite relation is again a rewrite relation, as are the union and the intersection of two rewrite relations. The converse of a rewrite order is again a rewrite order. While rewrite orders exist that are total on the set of ground terms ("ground-total" for short), no rewrite order can be total on the set of all terms. A term rewriting system is terminating if its rules are a subset of a reduction ordering. Conversely, for every terminating term rewriting system, the transitive closure of (::=) is a reduction ordering, which need not be extendable to a ground-total one, however. For example, the ground term rewriting system { f(a)::=f(b), g(b)::=g(a) } is terminating, but can be shown so using a reduction ordering only if the constants a and b are incomparable. A ground-total and well-founded rewrite ordering necessarily contains the proper subterm relation on ground terms. Conversely, a rewrite ordering that contains the subterm relation is necessarily well-founded, when the set of function symbols is finite. A finite term rewriting system is terminating if its rules are subset of the strict part of a simplification ordering. Notes References Rewriting systems Order theory
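As a toy illustration of the length-based reduction order described above, the following sketch (our own representation of terms as nested tuples, not part of any rewriting tool) applies the simplification rules and prints the term size after each step; the size strictly decreases, so no endless loop is possible:

```python
def size(t):
    """Number of symbols in a term; terms are nested tuples like ('+', a, b)."""
    return 1 + sum(size(a) for a in t[1:]) if isinstance(t, tuple) else 1

def step(t):
    """Apply one rule from {x+0->x, 0+x->x, x*0->0, 0*x->0, x*1->x, 1*x->x}
    at the root or inside a subterm; return None if no rule applies."""
    if not isinstance(t, tuple):
        return None
    op, a, b = t
    if op == '+' and b == 0: return a
    if op == '+' and a == 0: return b
    if op == '*' and (a == 0 or b == 0): return 0
    if op == '*' and b == 1: return a
    if op == '*' and a == 1: return b
    for i, arg in ((1, a), (2, b)):          # otherwise try to rewrite a subterm
        s = step(arg)
        if s is not None:
            return (op, s, b) if i == 1 else (op, a, s)
    return None

t = ('+', ('*', 'x', 1), ('*', 0, 'y'))      # (x*1) + (0*y)
while True:
    print(t, 'size', size(t))                # size: 7, 5, 3, 1 — strictly decreasing
    nxt = step(t)
    if nxt is None:
        break
    t = nxt
```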
Rewrite order
Mathematics
746
75,266,397
https://en.wikipedia.org/wiki/OnePlus%2011
The OnePlus 11 is an Android-based smartphone manufactured by OnePlus and co-developed with Hasselblad. It was released on January 9, 2023 and succeeded the OnePlus 10 Pro. It ships with a screen protector pre-applied. It diverged from OnePlus's usual portfolio in that only one flagship is offered; this effort to streamline the flagship line marked the end of the 'Pro' naming scheme. A lower-specification variant, the OnePlus 11R, is available only in India.
Design The OnePlus 11 has a sleek, modern design that reflects the brand's focus on aesthetics and build quality. It is built from a combination of glass and metal: the front and back are covered in Gorilla Glass Victus and Gorilla Glass 5 respectively, supported by an aluminium frame. Two colors were available at launch, with more added later: Titan Black - matte-frosted glass with a silky, fingerprint-resistant finish. Eternal Green - a glossy finish inspired by the shade of rainforest dusk, with an internal layering treatment that reduces fingerprint smudges; only available for the 256 GB model and above. OnePlus also offers another new edition, the OnePlus 11 Jupiter Rock Limited Edition, likewise powered by the Qualcomm Snapdragon 8 Gen 2 and available only in China; its distinguishing appearance lies in the design of its back camera. The phone features a circular camera housing on the back, which contains a triple-camera setup and the flash, with Hasselblad branding visible in the middle. The circular design is distinctive enough to be one of the standout design elements of the OnePlus 11. Both the front screen and the back are curved. The front camera is placed at the upper left. The volume rocker sits on the left frame; on the right are the alert slider - a signature feature that returns after being removed on the previous series - and, below it, the power button. The SIM tray, a microphone, the USB-C connector and a speaker are on the bottom; the top hosts another speaker and microphone.
Hardware Chipset The OnePlus 11 uses the Snapdragon 8 Gen 2 processor with the Adreno 740 GPU. Display It has a curved 120 Hz 2K Super Fluid AMOLED display manufactured by Samsung Display. The 6.7-inch panel is a third-generation LTPO AMOLED with a 1440 x 3216 px resolution, 10-bit color depth, Dolby Vision and Dolby Atmos support, and a variable refresh rate. Camera system Connectivity The OnePlus 11 supports eSIM in addition to the conventional dual nano-SIM card slot. It has Bluetooth version 5.3. Supported codecs: aptX HD, aptX, LDAC, LHDC, AAC, SBC. Battery & charging The OnePlus 11 comes with a non-removable dual-cell battery, 2,500 mAh per cell, for a total of 5,000 mAh. For the fastest charging it uses OnePlus's proprietary 100 W SuperVOOC technology; units sold in the United States are supplied with an 80 W variant owing to limitations of the country's mains voltage. Charging to full capacity takes around 25 minutes with the global proprietary charger.
Software & support The OnePlus 11 ships with OxygenOS 13.0 on top of Android 13 for the global market; the China variant uses ColorOS. OnePlus claims to support at least four generations of Android OS upgrades and five years of security updates from the launch date.
Reception Allison Johnson at The Verge praised its screen, long software support, and price tag, while criticizing its lack of wireless charging and inconsistent telephoto camera performance. 
References External links OnePlus mobile phones Phablets Android (operating system) devices Mobile phones with multiple rear cameras Mobile phones with 8K video recording Mobile phones introduced in 2023
OnePlus 11
Technology
805
8,101,372
https://en.wikipedia.org/wiki/Staggered%20truss%20system
The staggered truss system is a type of structural steel framing used in high-rise buildings. The system consists of a series of story-high trusses spanning the total width between two rows of exterior columns and arranged in a staggered pattern on adjacent column lines. William LeMessurier, the founder of the Cambridge, Massachusetts engineering firm LeMessurier Consultants, has been credited with developing this award-winning system as part of his research at the Massachusetts Institute of Technology.
History The staggered truss system came about through research sponsored by U.S. Steel at the Massachusetts Institute of Technology's Departments of Architecture and Civil Engineering in the 1960s. The research attempted to achieve the same floor-to-floor height with steel as could be achieved with flat-plate concrete. The system was presented at the 1966 AISC Conference (the predecessor to the current North American Steel Construction Conference). Additional benefits discovered were high resistance to wind loads and versatility of floor layout with large column-free areas. It has been used on a number of LeMessurier Consultants projects, including hotels such as the Lafayette Place Hotel in Boston and the Aladdin Hotel in Las Vegas. Other buildings that use this system include the Resorts International Hotel in Atlantic City, New Jersey; the Embassy Suites hotel in New York City; the Baruch College Academic Center in New York City; the Trump Taj Mahal in Atlantic City, New Jersey; and the Renaissance Hotel in Nashville, Tennessee.
Description The staggered truss system for steel framing is an efficient structural system for high-rise apartments, hotels, motels, dormitories, and hospitals. The arrangement of story-high trusses in a staggered pattern at alternate column lines provides large column-free areas for room layouts. These column-free areas can be utilized for ballrooms, concourses, and other large spaces. The staggered truss structural system consists of story-high steel trusses placed on alternating column lines on each floor so that the long axis of one truss is always between the trusses on the floor below. The system staggers trusses on a 12-foot module, meaning that on any given floor the trusses are 24 feet apart. The interaction of the floors, trusses, and columns makes the structure perform as a single unit, thereby taking maximum advantage of the strength and rigidity of all the components simultaneously. Each component performs its particular function, totally dependent upon the others for its performance. The total frame behaves as a cantilever beam when subjected to lateral loads. All columns are placed on the exterior walls of the building and function as the flanges of the beam, while the trusses, which span the total transverse width between columns, function as the web of the cantilever beam. While earlier staggered truss systems utilized channels for web diagonals and verticals, today most of the trusses are designed with hollow structural sections (HSS) for vertical and diagonal members because they are more structurally efficient and easier to fabricate. The trusses are fabricated with camber to compensate for dead load and are transported to the site, stored and then erected, generally in one piece. Fabrication of this type of structure requires certified welders and overhead cranes capable of lifting 10-to-15-ton trusses and columns for projects up to 20 stories. Fabrication involves the following components: columns, spandrel beams, trusses, secondary columns and beams, and the floor system. 
Advantages Large clear-span open areas for ballrooms or other wide concourses are possible at the first-floor level, because columns are located only on the exterior faces of the building. This allows spaces of as much as 60 feet in each direction, with columns often appearing only on the perimeter of the structure. It also increases design flexibility, especially for atrium placement and open-plan floor layouts. Floor spans may use short bay lengths while providing two-column-bay spacing for room arrangements. This results in low floor-to-floor heights; typically, an 8'-8" floor-to-floor height is achieved. Columns have minimum bending moments due to gravity and wind loads, because of the cantilever action of the double-planar system of framing. Columns are oriented with their strong axis resisting lateral forces in the longitudinal direction of the building. Maximum live-load reductions may be realized because tributary areas may be adjusted to suit code requirements. Foundations are on column lines only and may consist of two strip footings. Because the vertical loads are concentrated at a few column points, less foundation formwork is required. Drift is small, because the total frame acts as a stiff truss with direct axial loads acting in most structural members; secondary bending occurs only in the chords of the trusses. High-strength steels may be used to advantage, because all truss members and columns are subjected, for all practical purposes, to axial loads only. A lightweight steel structure is achieved by the use of high-strength steels and an efficient framing system. Since this reduces the weight of the superstructure, there is a substantial cost saving in foundation work. The system is faster to erect than comparable concrete structures. Once two floors are erected, window installation can start and stay right behind the steel and floor erection. No time is lost waiting for other trades, such as bricklayers, to start work. Except for foundations, topping slab, and grouting, all "wet" trades are eliminated. Fire resistance: steel is localized to the trusses, which occur only every 58 to 70 feet on a floor, so the fireproofing operation can be completed efficiently. Furthermore, the trusses are typically placed within demising walls, and the necessary fire rating can often be achieved entirely by enclosing the trusses with gypsum wallboard. Finally, if spray-on protection is desired, the applied thickness can be kept to a minimum due to the compact nature of the truss elements. References External links Buildings - "Turning Green Into Gold" article on the Aladdin Hotel in Las Vegas Architectural Record Case Study: Embassy Suites Hotel Staggered Truss Framing Systems Using ETABS Aluminum Truss Frame Systems Construction Structural system
Staggered truss system
Technology,Engineering
1,213
32,248,932
https://en.wikipedia.org/wiki/Cerato-platanin
In molecular biology, the cerato-platanin family of proteins includes the phytotoxin cerato-platanin (CP) produced by the ascomycete Ceratocystis platani. CP homologs are also found in both the Ascomycota and the Basidiomycota branches of Dikarya. This toxin causes the severe plant disease canker stain. The protein occurs in the cell wall of the fungus, is involved in the host-pathogen interaction, and induces both cell necrosis and phytoalexin synthesis, which is one of the first plant defense-related events. CP, like other fungal surface proteins, is able to self-assemble in vitro. CP is a 120 amino acid protein containing 40% hydrophobic residues. It is one of the rare examples of a protein that contains a Hopf link. The link is formed by covalent loops - pieces of the protein backbone closed by two disulphide bonds (formed from four cysteine residues). The N-terminal region of CP is very similar to cerato-ulmin, a phytotoxic protein produced by the Ophiostoma species belonging to the hydrophobin family, which also self-assembles. References Protein families
Cerato-platanin
Biology
265
9,712,670
https://en.wikipedia.org/wiki/Eurocities
Eurocities is a network of large cities in Europe, established in 1986 by the mayors of six large cities: Barcelona, Birmingham, Frankfurt, Lyon, Milan, and Rotterdam. Today, Eurocities members include over 200 of Europe's major cities from 38 countries, which between them represent over 130 million people. Eurocities is one of the major city networks in the EU. It is an example of how city diplomacy is seeking influence and prominence in the established world of international relations. At the EU level, Eurocities promotes the implementation of the European Union's subsidiarity principle. This offers multiple opportunities to engage and influence EU initiatives and policies, especially on urban development and more recently the European Green Deal. Eurocities is sometimes seen as an interest group more focused on re-establishing the power of the city over the nation-state, rather than connecting EU citizens across cities and borders. Recently, EU mayors of the network have tried to raise their global profile for their efforts to tackle climate change. Strategy and activities Eurocities coordinates multiple projects in the field of mobility, environmental transition, social inclusion, and digital innovation. The Eurocities secretariat is based in Brussels, Belgium. The network is led by an executive committee composed of 12 elected cities and their mayors. The executive committee meets at least three times a year and oversees the annual work programme, internal rules and budget, as approved by the annual general meeting (AGM). Thematic work is coordinated in six forums and a number of related working groups covering, among other topics, culture, economic development, environment, knowledge society, mobility, and social affairs. Eurocities activities include: Advocacy: representing the voice of cities at EU level, to bring about change on the ground Insights: Monitoring and communicating to cities the latest EU developments, funding opportunities, and trends affecting them Sharing of best practices: Facilitating the exchange of knowledge, experience and good practices between cities to scale up urban solutions Training: develop the capacity to face current and future urban challenges Membership criteria Membership of Eurocities is open to any European city with a population of 250,000 or more. Cities within the European Union become full members, and other European cities become associate members. Local authorities of smaller cities, but with a population of more than 50,000 can become partners. Companies and businesses can become associated business partners. Members See also B40 Balkan Cities Network Notes References External links Lists of cities in Europe Municipal international relations Cross-European advocacy groups 1986 establishments in Europe Organizations established in 1986 Urban planning Diplomacy
Eurocities
Engineering
514
851,716
https://en.wikipedia.org/wiki/Tugtupite
Tugtupite is a beryllium aluminium tectosilicate. It also contains sodium and chlorine and has the formula Na4AlBeSi4O12Cl. Tugtupite is a member of the silica-deficient feldspathoid mineral group. It occurs in high-alkali intrusive igneous rocks. Tugtupite is tenebrescent, sharing much of its crystal structure with sodalite, and the two minerals are occasionally found together in the same sample. Tugtupite occurs as vitreous, transparent to translucent masses of tetragonal crystals and is commonly found in white, pink to crimson, and even blue and green. It has a Mohs hardness of 4 and a specific gravity of 2.36. It fluoresces crimson under ultraviolet radiation. It was first found in 1962 at Tugtup agtakôrfia in the Ilimaussaq intrusive complex of southwest Greenland. It has also been found at Mont-Saint-Hilaire in Quebec and in the Lovozero Massif of the Kola Peninsula in Russia. The name is derived from the Greenlandic Inuit word for reindeer (tuttu), and means "reindeer blood". The U.S. Geological Survey reports that in Nepal, tugtupite, as well as jasper and nephrite, has been found extensively in most of the rivers from the Bardia to the Dang. It is used as a gemstone. References Sodium minerals Beryllium minerals Aluminium minerals Feldspathoid Sodalite group Natural history of Greenland Tetragonal minerals Minerals in space group 82 Gemstones
Tugtupite
Physics
335
24,738,927
https://en.wikipedia.org/wiki/QuRiNet
The Quail Ridge Wireless Mesh Network project is an effort to provide a wireless communications infrastructure to the Quail Ridge Reserve, a wildlife reserve in California in the United States. The network is intended to benefit on-site ecological research and to provide a wireless mesh network testbed for development and analysis. The project is a collaboration between the University of California Natural Reserve System and the Networks Lab at the Department of Computer Science, UC Davis. Project The large-scale wireless mesh network would consist of various sensor networks gathering temperature, visual, and acoustic data at certain locations. This information would then be stored at the field station or relayed further over Ethernet. The backbone nodes would also serve as access points, enabling wireless access at their locations. The Quail Ridge Reserve would also be used for further research into wireless mesh networks. External links qurinet.cs.ucdavis.edu spirit.cs.ucdavis.edu nrs.ucdavis.edu/quail.html nrs.ucop.edu Computer networking
QuRiNet
Technology,Engineering
210
31,754,091
https://en.wikipedia.org/wiki/SN%202011by
SN 2011by was a supernova in NGC 3972. It was a type Ia supernova, discovered by Zhangwei Jin and Xing Gao (China). SN 2011by is about 5.3" east and 19.1" north of the center of NGC 3972. References External links Light curves and spectra on the Open Supernova Catalog 2011 in science Supernovae 20110426 Ursa Major
SN 2011by
Chemistry,Astronomy
89
34,361,584
https://en.wikipedia.org/wiki/Hot%20Line%20%28TV%20series%29
Hot Line is an American erotic anthology series featured on Cinemax. The series features simulated sex scenes, and thus can be categorized as "softcore" or "voyeur". The premise of the show is listeners of a fictional radio show titled, "Hot Line", call in to recount their sexual exploits. Many high-profile porn stars made an appearance on the show. Episodes Season 1 (1995) "Voyeur" – January 6, 1995 "Highest Bidder" – January 13, 1995 "The Homecoming" – January 20, 1995 "Vision of Love" – January 27, 1995 "Payback" – February 3, 1995 "Fountain of Youth" – February 10, 1995 Season 2 (1996) "Hung Jury" – July 5, 1996 "The Sitter" – July 12, 1996 "E-Mail" – July 19, 1996 "Sleepless Nights" – July 26, 1996 "Sexual Chemistry" – August 2, 1996 "Hannah's Surprise" – August 9, 1996 "The Gardener" – August 16, 1996 "Shutterbugs" – August 23, 1996 "Double Exposure" – August 30, 1996 "Where Were We" – September 13, 1996 "Brunch Club" – September 20, 1996 "Mistaken Identity" – September 27, 1996 External links 1990s American drama television series 1995 American television series debuts 1996 American television series endings 1990s American anthology television series Cinemax original programming Television series by Warner Bros. Television Studios Erotic television series American English-language television shows Erotic drama television series
Hot Line (TV series)
Biology
313
37,376,969
https://en.wikipedia.org/wiki/Magnesium%20oxalate
Magnesium oxalate is an organic compound comprising a magnesium cation with a 2+ charge bonded to an oxalate anion. It has the chemical formula MgC2O4. Magnesium oxalate is a white solid that comes in two forms: an anhydrous form and a dihydrate form where two water molecules are complexed with the structure. Both forms are practically insoluble in water and are insoluble in organic solutions.
Natural occurrence Magnesium oxalate has been found naturally near Mill of Johnston, which is located close to Insch in northeast Scotland. This naturally occurring magnesium oxalate is called glushinskite and occurs at the lichen/rock interface on serpentinite as a creamy white layer mixed in with the hyphae of the lichen fungus. A scanning electron micrograph of samples taken showed that the crystals had a pyramidal structure with both curved and striated faces. The size of these crystals ranged from 2 to 5 μm.
Synthesis and reactions Magnesium oxalate can be synthesized by combining a magnesium salt or ion with an oxalate. Mg2+ + C2O42− → MgC2O4 A specific example of a synthesis would be mixing Mg(NO3)2 and KOH and then adding that solution to dimethyl oxalate, (COOCH3)2. When heated, magnesium oxalate will decompose. First, the dihydrate will decompose at 150 °C into the anhydrous form. MgC2O4·2H2O → MgC2O4 + 2 H2O With additional heating the anhydrous form will decompose further into magnesium oxide and carbon oxides between 420 °C and 620 °C. First, carbon monoxide and magnesium carbonate form. The carbon monoxide then oxidizes to carbon dioxide, and the magnesium carbonate decomposes further to magnesium oxide and carbon dioxide. MgC2O4 → MgCO3 + CO CO + 1/2 O2 → CO2 MgCO3 → MgO + CO2 Magnesium oxalate dihydrate has also been used in the synthesis of nano-sized particles of magnesium oxide, which have a larger surface area to volume ratio than conventionally synthesized particles and are optimal for various applications, such as in catalysis. By using a sol-gel synthesis, which involves combining a magnesium salt, in this case magnesium oxalate, with a gelating agent, nano-sized particles of magnesium oxide can be produced.
Health and safety Magnesium oxalate is a skin and eye irritant. If inhaled, it will irritate the lungs and mucous membranes. Magnesium oxalate has no known chronic effects nor any carcinogenic effects. Magnesium oxalate is non-flammable and stable, but in fire conditions it will give off toxic fumes. According to OSHA, magnesium oxalate is considered to be hazardous. See also Calcium oxalate Oxalic acid References Oxalates Magnesium compounds Inorganic compounds
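As an illustrative mass balance for the decomposition sequence above, the following sketch computes the residual mass after each step as a percentage of the starting dihydrate (molar masses from standard atomic weights; the percentages are our own arithmetic, not values reported in the text):

```python
# Molar masses in g/mol from standard atomic weights (rounded).
M = {'Mg': 24.305, 'C': 12.011, 'O': 15.999, 'H': 1.008}

MgC2O4    = M['Mg'] + 2 * M['C'] + 4 * M['O']        # anhydrous oxalate
dihydrate = MgC2O4 + 2 * (2 * M['H'] + M['O'])       # MgC2O4 . 2H2O
MgCO3     = M['Mg'] + M['C'] + 3 * M['O']
MgO       = M['Mg'] + M['O']

# Expected residual mass (as % of the dihydrate) after each decomposition step
print(f"MgC2O4.2H2O -> MgC2O4 : {100 * MgC2O4 / dihydrate:.1f}% remains")   # ~75.7%
print(f"MgC2O4      -> MgCO3  : {100 * MgCO3  / dihydrate:.1f}% remains")   # ~56.8%
print(f"MgCO3       -> MgO    : {100 * MgO    / dihydrate:.1f}% remains")   # ~27.2%
```

This is the kind of check one would compare against a thermogravimetric curve for the dihydrate.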
Magnesium oxalate
Chemistry
607
76,644,460
https://en.wikipedia.org/wiki/MCG%20-03-04-014
MCG -03-04-014, or PGC 4167, is a spiral galaxy located 450 million light-years away in the constellation of Cetus. MCG -03-04-014 is classified as a luminous infrared galaxy, meaning it has regions of intense star formation. Its galactic center is obscured by dust lanes and presents an abundant supply of molecular gas. The reasons behind the luminosity of this galaxy are debated among astronomers. Some attribute it to recent starbursts, while others point to activity of the galaxy's supermassive black hole. It is also considered that both factors may contribute; the exact cause remains uncertain. According to SIMBAD, it is considered to be a Seyfert type 1 galaxy, a possible explanation for its luminosity level. References Spiral galaxies -03-04-014 004167 Luminous infrared galaxies Cetus J01100897-1651096 IRAS catalogue objects
MCG -03-04-014
Astronomy
209
40,521,492
https://en.wikipedia.org/wiki/Avnu%20Alliance
Avnu Alliance is a consortium of member companies working together to create an interoperable ecosystem of low-latency, time-synchronized, highly reliable networking devices using the IEEE open standard Time-Sensitive Networking (TSN) and its Pro AV networking protocol, Milan. Avnu Alliance creates comprehensive certification programs to ensure interoperability of network devices. In the professional audio video (AV) industry, Alliance member companies worked together to develop Milan, a standards-based, user-driven deterministic network protocol for real-time professional media that, through certification, assures devices will work together at new levels of convenience, reliability, and functionality. Avnu members may use the Avnu-certified or Milan-certified logo on devices that pass the conformance tests from Avnu. Not every device based on AVB or TSN is submitted for certification to the Avnu Alliance; the lack of the Avnu logo does not necessarily imply a device is incompatible with other Avnu-certified devices. The Alliance, in conjunction with other complementary standards bodies and alliances, provides a united network foundation for use in the professional AV, automotive, industrial control and consumer segments.
History Audio Video Bridging is the set of standards developed by the IEEE Audio Video Bridging Task Group of the IEEE 802.1 standards committee. The committee developed the original technical standards for AVB, a way to simplify audio and video streaming through the use of Ethernet cabling, rather than the complicated approach traditionally taken using an array of analog, one-way, single-purpose, point-to-point cables. Avnu Alliance was launched on August 25, 2009, to create certification processes based on AVB standards that would ensure interoperability. Founding members include Broadcom, Cisco Systems, Harman International, Intel and Xilinx. In 2012, Avnu Alliance announced the formation of the Avnu Automotive AVB Gen2 Council (AAA2C), a committee of automotive industry experts that would collectively identify automotive requirements for future development of the second generation of AVB standards. In April 2013, the forum launched the Avnu Alliance Broadcast Advisory Council (AABAC) to assess and improve AVB requirements in the broadcast industry. Created with the participation of Avnu Alliance members, network technologists and broadcasters, the AABAC also intends to promote the use of AVB in broadcast applications. Avnu Alliance's president is Dave Cavalcanti of Intel Corporation; Gary Stuebing of Cisco is vice president and chairman of Avnu Alliance.
Certification Avnu Alliance invites industry companies to participate and collaborate in its efforts to improve audio/video quality. Its members create a broad array of products, including cars, semiconductors, loudspeakers, consoles and microphones. By 2012, Avnu Alliance had created a single set of open standards for AVB, which it uses to certify devices to guarantee interoperability. Since January 2012, Avnu Alliance has worked with the test house, the University of New Hampshire InterOperability Laboratory (UNH-IOL), to test interoperability and provide validation for its certified products. Avnu's certification testing officially began at UNH-IOL in February 2013. 
The UNH-IOL is a neutral, independent testing service that works with other audio/video industry consortiums, including the Ethernet Alliance, Wi-Fi Alliance and IPv6 Forum, to provide third-party verification, lower costs through collaborative testing, and help guide industry acceptance of a technology standard. In October 2021, Avnu Alliance expanded global TSN testing with new Registered Test Facilities (RTF) around the world including: Allion in Taichung, Taiwan; Excelfore in Tokyo, Japan; and Granite River Labs in both Santa Clara, CA, USA and Karlsruhe, Germany. The technology ecosystem supporting and accelerating the development of Avnu certified products has matured to include standards-compliant silicon devices, FPGA IP, open-source software, and also 3rd-party AVB certification services such as Coveloz's AVB Certification Service. When a product is submitted to Avnu Alliance for certification, it is tested against over 400 pages of conformance requirements, which combine IEEE 802.1 standard requirements with additional requirements developed by Avnu. Any issues the product may have are then reported to the manufacturer. After fixing any outstanding issues, the manufacturer can submit the product to Avnu for official approval and permission to use the Avnu certification logo on the product and any accompanying marketing efforts. In February 2014, Avnu Alliance announced their first certified products, a series of AVB switches from Extreme Networks that passed all conformance tests and will bear the Avnu-certified logo. Extreme Networks' Summit X440 is a series of stackable switches that extend the benefits of the ExtremeXOS software. They are intended for professional AV and IT applications, allowing data, audio and video to co-exist on a single standards-based network. Standardization The Avnu Alliance's goal is to make it easier to implement network systems by promoting the adoption of the IEEE 802.1, 1722 and 1733 AVB standards in automotive, professional and consumer electronics markets, ensuring that AVB products from different manufacturers would be able to interconnect seamlessly. Along with ensuring interoperability, the adoption of IEEE 802.1 (and the related IEEE 1722 and IEEE 1733) AVB standards over various networks would reduce technical issues, such as synching, glitches and delays, while improving content streaming capabilities. The Alliance's industry standards improved Ethernet technology, making it simpler to add enhanced performance and capabilities to audio/video networks, while bringing down costs by using lighter, cheaper cable that is easier to set up and able to carry a larger amount of information than what was regularly used in networking environments. Milan In 2018, the Avnu Alliance announced the Milan (Media-integrated local area network), an initiative to define implementation details such as media formats, clocking, redundancy and device control, and provide certification and testing program to ensure that devices from different vendors are interoperable within common device profiles. 
Milan specification is based on the following IEEE standards: IEEE 802.1BA-2011 Audio Video Bridging (AVB) Systems - consists of usage-specific profiles for device interoperability; IEEE 802.1Q-2011 Media Access Control (MAC) Bridges and Virtual Bridged Local Area Networks - defines methods for traffic shaping (Forwarding and Queuing for Time-Sensitive Streams) and bandwidth reservation (Stream Reservation Protocol) in network bridges and VLANs; IEEE 802.1AS-2011 Timing and Synchronization for Time-Sensitive Applications - defines the Generalized Precision Time Protocol (gPTP); IEEE 1722-2016 Layer 2 Transport Protocol for Time Sensitive Applications - defines (AV Transport Protocol, AVTP) and payload formats; IEEE 1722.1-2013 Device Discovery, Enumeration, Connection Management and Control Protocol (AVDECC). The specification requires media clocking based on the IEEE 1722 CRF (Clock Reference Format) and sample rate of 48 kHz (optionally 96 and 192 kHz); audio stream format is based on IEEE 1722 32-bit Standard AAF Audio Format with 8 channels per stream (optionally, 24 and 32 bit High Capacity Format with 56 and 64 channels). Redundancy is provided with two independent logical networks for every endpoint and a seamless switchover mechanism. AVDECC defines operations to discover device addition and removal, retrieve device entity model, connect and disconnect streams, manage device and connection status, and remote control devices. In October 2021, Avnu Alliance introduced the Milan Advanced Certification Program to make certification testing more streamlined for its members including the introduction of the Avnu Express Test Tool that vendors can use to internally verify device conformance prior to submission for certification testing, providing valuable insights into the product that can be used to optimize product development, increasing the probability of certification success, and saving manufacturers time, resources, and money. Markets Automotive The Alliance members represent a variety of facets of automotive technology. Because of the growing complexity of in-vehicle audio/video systems and the increasing number of in-vehicle applications (such as infotainment, safety and multiple cameras), testing to ensure interoperability is increasingly important in the automotive market. Automotive systems with multiple applications require interoperability to work properly. As of 2013, Avnu Alliance works with the GENIVI Alliance and the JASPAR Alliance to standardize in-vehicle Ethernet requirements. Professional AV The IEEE AVB Task Group has developed a series of enhancements that provide highly-reliable delivery of low latency, synchronized audio and video. This technology enables the construction of affordable, high performance professional media networks. Video interoperability specifications for the pro market are currently in development. Meyer Sound was noted for adopting Avnu Alliance's Ethernet standards to assist in the development of its first AVB-capable loudspeaker, named CAL, which stands for column array loudspeaker, completed in 2013. Consumer electronics The improvements to Ethernet developed by IEEE's AVB task force also benefit those desiring to distribute digital content among multiple devices in their home networks. In 2013, the Avnu Alliance began to establish testing requirements for AVB interoperability specifications for reliable, time-synced AV streaming over Ethernet and wireless networks in residential applications. 
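As a rough feel for the stream formats listed above, the sketch below computes the raw audio payload rate of a single stream; it deliberately ignores AVTP, Ethernet and VLAN header overhead, which the real protocol adds on top, so the figures are our own back-of-the-envelope arithmetic rather than numbers from the specification:

```python
def payload_bandwidth_mbps(channels, sample_rate_hz, bits_per_sample):
    """Raw audio payload rate for one stream in Mbit/s, excluding packet overhead."""
    return channels * sample_rate_hz * bits_per_sample / 1e6

# Default AAF format: 8 channels, 48 kHz, 32-bit samples
print(payload_bandwidth_mbps(8, 48_000, 32))    # ~12.3 Mbit/s of audio payload

# High channel count option: 64 channels at 48 kHz
print(payload_bandwidth_mbps(64, 48_000, 32))   # ~98.3 Mbit/s of audio payload
```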
Industrial control Avnu's Industrial group defines compliance and interoperability requirements for TSN networking elements. Milestones See also IEEE Standards Association References External links Official website IEEE standards IEEE 802 Ethernet standards Interoperable communications Companies based in Beaverton, Oregon Consortia in the United States
Avnu Alliance
Technology
1,983
19,980,114
https://en.wikipedia.org/wiki/Target%20strength
The target strength or acoustic size is a measure of the area of a sonar target, usually quantified in decibels. For fish such as salmon, the target size varies with the length of the fish, and a 5 cm fish could have a target strength of about −50 dB. The target strength of a fish also depends on the orientation of the fish at the moment of insonification, which in turn changes the scattering cross-section of the fish and of any air-filled cavities of the fish. As a result, behavioral reactions affect the observed biomass; for example, fish may evade the research vessel at night because of strong lights and vibrations from the hull and machinery. Target strength is often observed on or near a specific frequency where the target is most resonant. Narrowband (CW) pulses have historically been used, but there is ongoing research into using wideband (FM) pulses for improved classification.
Formula For some simple shapes, target strength can be derived mathematically. For other objects like fish, where the size of the air bladder is the main factor, target strength is commonly derived empirically. Target strength (TS) is referenced to 1 meter from the acoustic center of the target, assuming isotropic reflection: TS = 10 log10(Ir / Ii), where Ir is the reflected intensity from the target (measured at the 1 m reference distance), Ii is the incident intensity on the target; equivalently, TS = 10 log10(σbs / 1 m²), where σbs is the backscattering cross-section. For a sphere with radius a, large compared to the wavelength, σbs = a²/4, so, assuming a reference distance of 1 meter, TS = 10 log10(a²/4). Thus, for a sphere of radius 2 meters, the target strength is 0 dB. NOAA has a calculator that can be used to inspect the target strength of calibration spheres made out of copper or tungsten carbide in relation to physical parameters found in the ocean.
References Further reading "Introduction to the use of sonar systems for estimating fish biomass, FAO Fisheries Technical Paper No. 191, Revision 1, FAO 1982" Fisheries Acoustics Simmonds, E John and MacLennan, David N (2005) Blackwell Publishing. C. S. Clay & H. Medwin, Acoustical Oceanography (Wiley, New York, 1977). Acoustics Oceanography Fisheries science Sonar
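Under the geometric-scattering convention sketched above (σbs = a²/4, referenced to 1 m²), the sphere formula is a one-liner; the helper below is our own illustration, not code from any of the cited references:

```python
from math import log10

def sphere_target_strength_db(radius_m):
    """Target strength of a rigid sphere much larger than the wavelength,
    using sigma_bs = a^2 / 4 referenced to 1 m^2 (geometric-scattering limit)."""
    sigma_bs = radius_m ** 2 / 4.0
    return 10 * log10(sigma_bs)

print(sphere_target_strength_db(2.0))    # 0.0 dB, as in the text
print(sphere_target_strength_db(0.1))    # about -26 dB for a 10 cm sphere
```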
Target strength
Physics,Environmental_science
442
4,336,277
https://en.wikipedia.org/wiki/EDAG
EDAG Engineering Group AG (short Edag, own spelling EDAG) is an international corporate group active in the Engineering services sector. Since 2015, it has been based in Arbon, Canton Thurgau, Switzerland. The EDAG Group is one of the world's largest independent engineering partners for vehicles and smart factories. The holding which incorporated the company was founded in Germany during 1969. During 1987, EDAG Group's first overseas branch office was opened in Martorell (Barcelona), Spain. Since incorporation, its structure has been changed several times, including operations as an Aktiengesellschaft, a joint-stock company and a Kommanditgesellschaft auf Aktien. The main operational company of the EDAG Group is EDAG Engineering GmbH, which is based in Wiesbaden, Germany. By November 2018, the company was reportedly operating from 60 sites across 19 countries. It is active in the fields of product development, production plant development, plant engineering, limited series manufacturing, modules, and optimization. History On 1 February 1969, the company Eckard Design was founded by Horst Eckard in Groß-Zimmern, outside Darmstadt. The firm's first customer was the car manufacturer Ford at their works in Cologne; the firm's first offices were also established in this city. Before the end of its first year, another customer in the form of local car manufacturer BMW had also been acquired. By the end of 1969, Eckard Design employed a total of eight staff members, some of whom would become key officials of the business for the next 30 years. During 1970, a new head office was opened in Fulda; the city's central location meant that Eckard Design's headquarters were never more than 300 km away from any of the domestic car manufacturers. Two years later, additional offices in neighbouring Steinau were also secured; that same year, turnover breached DM 2 million for the first time. In 1973, Volkswagen became a customer of the firm, while turnover rose to DM 3.3 million. During the following year, the company was reorganised as Eckard Design GbR; it also completed its first acquisition, that being of the Mücke-based manufacturing company FFT. In 1975, vehicle manufacturer Opel became a client of Eckard Design, while a major order for Karmann was placed during the following year. During the late 1970s, a close working relationship with Audi also developed. In 1981, company turnover exceeded DM 20 million for the first time; the Fulda office was enlarged during the following year. In 1983, Eckard Design began investing in computer-aided design (CAD) technology in line with clients such as Audi. Headcount reached 250 during 1984, while the firm's offices continued to expand. Since 1986, the firm has also been involved in the construction of prototypes at its own experimental shop in Fulda. One year later, the company launched its first international activities, deciding to open its first overseas branch office in Martorell (Barcelona), Spain. During 1989, its 20th anniversary, turnover breached the DM 100 million milestone; the same year, the company received its first order from Italian luxury car specialist Ferrari. During the early 1990s, considerable investment was directed into growth into the international automotive market. During 1992, the company's Rechtsform was reorganised, when it became an Aktiengesellschaft and the name Edag Engineering + Design AG came into being. 
In 1998, EDAG was the first automotive service provider to be admitted to the Verband der Automobilindustrie (English: Association of the German Automotive Industry). During 2004, FFT Flexible Fertigungstechnik GmbH & Co. KG, which is presently called FFT Produktionssysteme GmbH & Co. KG, was integrated into the company. In February 2006, Lutz Helmig, owner of Aton GmbH and founder of the Helios-Kliniken, purchased EDAG from the nine founding families. In 2007, personnel service provider ED Work GmbH & Co. KG was founded. As a result of the takeover by Aton GmbH, the legal form of the company changed on 11 January 2008 from a joint-stock company to a Kommanditgesellschaft auf Aktien. A further consequence of the re-structuring was that directors Klaus Blickle and Jürgen Böhm left the company on 8 April 2008, and were replaced by long-standing managers Jörg Ohlsen, Manfred Hahl and Rainer Bauer. As a consequence of the merger with Rücker GmbH during early 2014, the executive board was re-structured, while the head office was transferred to Wiesbaden. Since March 2015, EDAG has been operating under the name EDAG Engineering GmbH. With the listing of EDAG Engineering Group AG on the German stock exchange in Frankfurt am Main (Prime Standard), ISIN 0303692047, EDAG Engineering GmbH became part of EDAG Engineering Group AG. The vast majority of EDAG Group's revenue has typically been derived from its activity in the automotive sector, comprising 62 percent of turnover during 2017; other major contributors to turnover include its electronics businesses, contributing 20 percent that year, as well as a further 18 percent from production solutions activities for that same year. According to market analysts Edison Group, EDAG has been increasingly moving its activities into the international market in recent years; however, it has been facing increasingly intensive competition from rivals on the global stage. For several decades, EDAG has been active throughout the aerospace sector. During the early 2000s, aircraft manufacturer AvCraft contracted EDAG to produce wing tooling for a new final assembly line at Oberpfaffenhofen for the Fairchild-Dornier 328JET. The company has acted as an intermediate party between various other suppliers and the multinational aerospace conglomerate Airbus.
Offices The branch offices on five continents are mostly located at important sites of partner industries. Selection of sites in Germany: Aachen, Eisenach, Fulda, Hamburg, Ingolstadt, Cologne, Leipzig, Munich, Rostock, Ulm, Wolfsburg. Selection of sites in Europe: Turin – Sant'Agata Bolognese – Fiorano Modenese – Mladá Boleslav – Břeclav – Warwickshire – Győr – Gothenburg – Helmond – Kocaeli. Worldwide: São Bernardo do Campo (São Paulo) – Troy (Detroit) – Puebla – Yokohama – Seoul – Shah Alam (Kuala Lumpur) – Pune (Maharashtra) – Shanghai.
Automotive industry EDAG is noted for its concept cars such as the EDAG Biwak estate concept based on the Beetle, the EDAG Pontiac Solstice Hardtop and the EDAG Show Car No. 8 based on Smart mechanicals. EDAG has conducted design work for the application of 3D-printing (additive manufacturing) technology under their "Genesis project", in which Fraunhofer, Laser Zentrum Nord and DMRC Paderborn participated. 
Concept Cars and Prototypes Source: edag.de EDAG Scout (1999) EDAG 2000 (2000) EDAG Keinath GT/C (2001) EDAG Keinath GT/C Cabrio (2002) EDAG Pontiac Solstice Hardtop EDAG Cinema 7D (2003) EDAG GenX (2004) EDAG/Rinspeed Chopster (2005) EDAG Show Car No.8 (2005) EDAG Pontiac Solstice Hardtop (2006) EDAG Biwak (2006) EDAG LUV (2007) EDAG Light Car - Open Source (2009) EDAG Light Car Sharing (2011) EDAG Genesis (2014) EDAG Light Cocoon (2015) References External links Website of EDAG Group AG Website of EDAG Engineering GmbH Website of EDAG Engineering Group AG (Investor Relations) International engineering consulting firms Manufacturing companies of Germany Manufacturing companies of Switzerland Aircraft manufacturers of Germany Engineering companies of Germany Engineering companies of Switzerland Auto parts suppliers of Germany Automotive companies of Switzerland Manufacturing companies established in 1969 1969 establishments in Germany companies based in Thurgau
EDAG
Engineering
1,658
5,811,728
https://en.wikipedia.org/wiki/Volterra%20series
The Volterra series is a model for non-linear behavior similar to the Taylor series. It differs from the Taylor series in its ability to capture "memory" effects. The Taylor series can be used for approximating the response of a nonlinear system to a given input if the output of the system depends strictly on the input at that particular time. In the Volterra series, the output of the nonlinear system depends on the input to the system at all other times. This provides the ability to capture the "memory" effect of devices like capacitors and inductors. It has been applied in the fields of medicine (biomedical engineering) and biology, especially neuroscience. It is also used in electrical engineering to model intermodulation distortion in many devices, including power amplifiers and frequency mixers. Its main advantage lies in its generalizability: it can represent a wide range of systems. Thus, it is sometimes considered a non-parametric model. In mathematics, a Volterra series denotes a functional expansion of a dynamic, nonlinear, time-invariant functional. The Volterra series is frequently used in system identification. The Volterra series, which is used to prove the Volterra theorem, is an infinite sum of multidimensional convolutional integrals.
History The Volterra series is a modernized version of the theory of analytic functionals from the Italian mathematician Vito Volterra, in his work dating from 1887. Norbert Wiener became interested in this theory in the 1920s due to his contact with Volterra's student Paul Lévy. Wiener applied his theory of Brownian motion for the integration of Volterra analytic functionals. The use of the Volterra series for system analysis originated from a restricted 1942 wartime report of Wiener's, who was then a professor of mathematics at MIT. He used the series to make an approximate analysis of the effect of radar noise in a nonlinear receiver circuit. The report became public after the war. As a general method of analysis of nonlinear systems, the Volterra series came into use after about 1957 as the result of a series of reports, at first privately circulated, from MIT and elsewhere. The name itself, Volterra series, came into use a few years later.
Mathematical theory The theory of the Volterra series can be viewed from two different perspectives: an operator mapping between two function spaces (real or complex), or a real or complex functional mapping from a function space into real or complex numbers. The latter functional mapping perspective is more frequently used due to the assumed time-invariance of the system.
Continuous time A continuous time-invariant system with x(t) as input and y(t) as output can be expanded in the Volterra series as y(t) = h0 + Σ_{n=1..N} ∫_a^b ... ∫_a^b h_n(τ1, ..., τn) x(t − τ1) ··· x(t − τn) dτ1 ··· dτn. Here the constant term h0 on the right side is usually taken to be zero by suitable choice of output level. The function h_n(τ1, ..., τn) is called the n-th-order Volterra kernel. It can be regarded as a higher-order impulse response of the system. For the representation to be unique, the kernels must be symmetrical in the n variables τ1, ..., τn. If a kernel is not symmetrical, it can be replaced by a symmetrized kernel, which is the average over the n! permutations of these n variables. If N is finite, the series is said to be truncated. If a, b, and N are finite, the series is called doubly finite. Sometimes the n-th-order term is divided by n!, a convention which is convenient when taking the output of one Volterra system as the input of another ("cascading"). 
The causality condition: since in any physically realizable system the output can only depend on previous values of the input, the kernels will be zero if any of the variables τ1, ..., τn are negative. The integrals may then be written over the half range from zero to infinity; so if the operator is causal, a ≥ 0. Fréchet's approximation theorem: the use of the Volterra series to represent a time-invariant functional relation is often justified by appealing to a theorem due to Fréchet. This theorem states that a time-invariant functional relation (satisfying certain very general conditions) can be approximated uniformly and to an arbitrary degree of precision by a sufficiently high finite-order Volterra series. Among other conditions, the set of admissible input functions for which the approximation will hold is required to be compact. It is usually taken to be an equicontinuous, uniformly bounded set of functions, which is compact by the Arzelà–Ascoli theorem. In many physical situations, this assumption about the input set is a reasonable one. The theorem, however, gives no indication as to how many terms are needed for a good approximation, which is an essential question in applications.
Discrete time The discrete-time case is similar to the continuous-time case, except that the integrals are replaced by summations: y(n) = h0 + Σ_{p=1..P} Σ_{τ1=a..b} ... Σ_{τp=a..b} h_p(τ1, ..., τp) x(n − τ1) ··· x(n − τp), where each function h_p(τ1, ..., τp) is called a discrete-time Volterra kernel. If P is finite, the series operator is said to be truncated. If a, b and P are finite, the series operator is called a doubly finite Volterra series. If a ≥ 0, the operator is said to be causal. We can always consider, without loss of generality, the kernel as symmetrical. In fact, by the commutativity of the multiplication it is always possible to symmetrize it by forming a new kernel taken as the average of the kernels for all permutations of the variables τ1, ..., τp. For a causal system with symmetrical kernels we can rewrite the n-th term approximately in triangular form.
Methods to estimate the kernel coefficients Estimating the Volterra coefficients individually is complicated, since the basis functionals of the Volterra series are correlated. This leads to the problem of simultaneously solving a set of integral equations for the coefficients. Hence, estimation of Volterra coefficients is generally performed by estimating the coefficients of an orthogonalized series, e.g. the Wiener series, and then recomputing the coefficients of the original Volterra series. The Volterra series' main appeal over the orthogonalized series lies in its intuitive, canonical structure, i.e. all interactions of the input have one fixed degree. The orthogonalized basis functionals will generally be quite complicated. An important aspect, with respect to which the following methods differ, is whether the orthogonalization of the basis functionals is to be performed over the idealized specification of the input signal (e.g. Gaussian white noise) or over the actual realization of the input (i.e. the pseudo-random, bounded, almost-white version of Gaussian white noise, or any other stimulus). The latter methods, despite their lack of mathematical elegance, have been shown to be more flexible (as arbitrary inputs can be easily accommodated) and precise (due to the effect that the idealized version of the input signal is not always realizable). Crosscorrelation method This method, developed by Lee and Schetzen, orthogonalizes with respect to the actual mathematical description of the signal, i.e. 
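The doubly finite, causal discrete form is straightforward to evaluate directly. The following sketch (a second-order example of our own; the kernel values and memory length are arbitrary, not taken from any reference) applies a truncated Volterra filter to an input sequence:

```python
import numpy as np

def volterra2(x, h0, h1, h2):
    """Evaluate a causal, doubly finite second-order Volterra series.
    h1 has shape (M,), h2 has shape (M, M) and is symmetric; M is the memory length."""
    M = len(h1)
    y = np.full(len(x), h0, dtype=float)
    for n in range(len(x)):
        # past samples x(n), x(n-1), ..., x(n-M+1), zero-padded at the start
        past = np.array([x[n - t] if n - t >= 0 else 0.0 for t in range(M)])
        y[n] += h1 @ past            # linear (first-order) term
        y[n] += past @ h2 @ past     # quadratic (second-order) term
    return y

rng = np.random.default_rng(0)
x = rng.standard_normal(8)
h1 = np.array([0.5, 0.2, 0.1])
h2 = 0.05 * np.eye(3)                # simple symmetric quadratic kernel
print(volterra2(x, h0=0.0, h1=h1, h2=h2))
```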
the projection onto the new basis functionals is based on the knowledge of the moments of the random signal. The Volterra series can be written in terms of homogeneous operators; to allow identification by orthogonalization, it must be rearranged in terms of orthogonal non-homogeneous G operators (the Wiener series). The G operators can be defined by the requirement that each operator of a given order be orthogonal to every homogeneous Volterra functional of lower order, whenever x(n) is some stationary white noise (SWN) with zero mean and variance A. Recalling that every Volterra functional is orthogonal to all Wiener functionals of greater order, the kernels can be identified from crosscorrelations of the output with products of delayed inputs. If x is SWN and the diagonal elements (repeated lags) are excluded, the p-th-order Wiener kernel can be estimated as k_p(τ1, ..., τp) = E[y(n) x(n − τ1) ··· x(n − τp)] / (p! A^p). If we want to consider the diagonal elements, the solution proposed by Lee and Schetzen is to subtract the outputs of all lower-order Wiener operators from y(n) before computing the crosscorrelation. The main drawback of this technique is that the estimation errors made on all elements of lower-order kernels will affect each diagonal element of order p by means of that summation of lower-order operator outputs, conceived as the solution for the estimation of the diagonal elements themselves. Efficient formulas to avoid this drawback, and references for diagonal kernel element estimation, exist. Once the Wiener kernels have been identified, the Volterra kernels can be obtained by using Wiener-to-Volterra conversion formulas, which can be written out explicitly for a fifth-order Volterra series.
Multiple-variance method In the traditional orthogonal algorithm, using inputs with a high variance A has the advantage of stimulating high-order nonlinearity, so as to achieve more accurate high-order kernel identification. As a drawback, the use of high variance values causes high identification error in lower-order kernels, mainly due to the nonideality of the input and truncation errors. On the contrary, the use of a lower variance in the identification process can lead to a better estimation of the lower-order kernels, but can be insufficient to stimulate high-order nonlinearity. This phenomenon, which can be called locality of the truncated Volterra series, can be revealed by calculating the output error of a series as a function of different input variances. This test can be repeated with series identified with different input variances, obtaining different curves, each with a minimum in correspondence of the variance used in the identification. To overcome this limitation, a low variance value should be used for the lower-order kernels and gradually increased for higher-order kernels. This is not a theoretical problem in Wiener kernel identification, since the Wiener functionals are orthogonal to each other, but an appropriate normalization is needed in the Wiener-to-Volterra conversion formulas to take into account the use of different variances; furthermore, new Wiener-to-Volterra conversion formulas are needed. The traditional Wiener kernel identification is changed accordingly, with impulse functions introduced for the identification of the diagonal kernel points. If the Wiener kernels are extracted with the new formulas, corresponding Wiener-to-Volterra formulas (written out explicitly up to the fifth order) are needed. As can be seen, the drawback with respect to the previous formulation is that for the identification of the n-th-order kernel, all lower kernels must be identified again with the higher variance. However, an outstanding improvement in the output MSE will be obtained if the Wiener and Volterra kernels are obtained with the new formulas. 
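For the first-order kernel the crosscorrelation idea reduces to dividing the input-output crosscorrelation by the input variance A. The toy sketch below (our own construction; it identifies only the linear kernel of a made-up system, and higher orders would need the full Lee-Schetzen procedure summarized above) illustrates this:

```python
import numpy as np

rng = np.random.default_rng(1)
A = 1.0                                   # variance of the white-noise input
N = 200_000
x = rng.normal(0.0, np.sqrt(A), N)

# Hypothetical system to identify: y(n) = 0.8 x(n) + 0.3 x(n-1) + 0.1 x(n-2)^2
y = 0.8 * x + 0.3 * np.roll(x, 1) + 0.1 * np.roll(x, 2) ** 2
y[:2] = 0.0                               # discard wrap-around samples

# First-order Wiener kernel estimate: k1(tau) ~= E[y(n) x(n - tau)] / A
k1 = [np.mean(y[3:] * np.roll(x, tau)[3:]) / A for tau in range(4)]
print(np.round(k1, 3))                    # roughly [0.8, 0.3, 0.0, 0.0]
```

The quadratic term does not leak into the first-order estimate because odd moments of the zero-mean Gaussian input vanish, which is exactly the orthogonality the Wiener series exploits.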
Feedforward network This method was developed by Wray and Green (1994) and utilizes the fact that a simple 2-fully connected layer neural network (i.e., a multilayer perceptron) is computationally equivalent to the Volterra series and therefore contains the kernels hidden in its architecture. After such a network has been trained to successfully predict the output based on the current state and memory of the system, the kernels can then be computed from the weights and biases of that network. The general notation for the n-th-order Volterra kernel is given by where is the order, the weights to the linear output node, the coefficients of the polynomial expansion of the output function of the hidden nodes, and are the weights from the input layer to the non-linear hidden layer. It is important to note that this method allows kernel extraction up until the number of input delays in the architecture of the network. Furthermore, it is vital to carefully construct the size of the network input layer so that it represents the effective memory of the system. Exact orthogonal algorithm This method and its more efficient version (fast orthogonal algorithm) were invented by Korenberg. In this method the orthogonalization is performed empirically over the actual input. It has been shown to perform more precisely than the crosscorrelation method. Another advantage is that arbitrary inputs can be used for the orthogonalization and that fewer data points suffice to reach a desired level of accuracy. Also, estimation can be performed incrementally until some criterion is fulfilled. Linear regression Linear regression is a standard tool from linear analysis. Hence, one of its main advantages is the widespread existence of standard tools for solving linear regressions efficiently. It has some educational value, since it highlights the basic property of Volterra series: linear combination of non-linear basis-functionals. For estimation, the order of the original should be known, since the Volterra basis functionals are not orthogonal, and thus estimation cannot be performed incrementally. Kernel method This method was invented by Franz and Schölkopf and is based on statistical learning theory. Consequently, this approach is also based on minimizing the empirical error (often called empirical risk minimization). Franz and Schölkopf proposed that the kernel method could essentially replace the Volterra series representation, although noting that the latter is more intuitive. Differential sampling This method was developed by van Hemmen and coworkers and utilizes Dirac delta functions to sample the Volterra coefficients. See also Wiener series Polynomial signal processing References Further reading Barrett J.F: Bibliography of Volterra series, Hermite functional expansions, and related subjects. Dept. Electr. Engrg, Univ.Tech. Eindhoven, NL 1977, T-H report 77-E-71. (Chronological listing of early papers to 1977) URL: http://alexandria.tue.nl/extra1/erap/publichtml/7704263.pdf Bussgang, J.J.; Ehrman, L.; Graham, J.W: Analysis of nonlinear systems with multiple inputs, Proc. IEEE, vol.62, no.8, pp. 1088–1119, Aug. 1974 Giannakis G.B & Serpendin E: A bibliography on nonlinear system identification. Signal Processing, 81 2001 533–580. (Alphabetic listing to 2001) www.elsevier.nl/locate/sigpro Korenberg M.J. Hunter I.W: The Identification of Nonlinear Biological Systems: Volterra Kernel Approaches, Annals Biomedical Engineering (1996), Volume 24, Number 2. 
Kuo Y L: Frequency-domain analysis of weakly nonlinear networks, IEEE Trans. Circuits & Systems, vol.CS-11(4) Aug 1977; vol.CS-11(5) Oct 1977 2–6. Rugh W J: Nonlinear System Theory: The Volterra–Wiener Approach. Baltimore 1981 (Johns Hopkins Univ Press) http://rfic.eecs.berkeley.edu/~niknejad/ee242/pdf/volterra_book.pdf Schetzen M: The Volterra and Wiener Theories of Nonlinear Systems, New York: Wiley, 1980. Mathematical series Functional analysis
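As a complement to the linear-regression section above, here is a minimal sketch of second-order Volterra kernel estimation by ordinary least squares on lagged-product regressors. The toy system, memory length, and variable names are illustrative assumptions, not taken from any of the referenced works.

```python
import numpy as np
from itertools import combinations_with_replacement

# Sketch: identify a truncated second-order Volterra model by ordinary least squares.
rng = np.random.default_rng(1)
N, M = 50_000, 3                                  # samples and memory length (illustrative)
x = rng.normal(size=N)
X = np.stack([np.roll(x, k) for k in range(M)])   # lagged copies of the input

# Toy system used only to generate the output (assumed for demonstration)
y = 1.0 * X[0] + 0.5 * X[1] - 0.2 * X[2] + 0.3 * X[0] * X[1] + 0.1 * X[2] ** 2

# Regressor matrix: constant term, first-order lags, and second-order lag products
cols = [np.ones(N)] + [X[k] for k in range(M)]
pairs = list(combinations_with_replacement(range(M), 2))
cols += [X[i] * X[j] for i, j in pairs]
Phi = np.column_stack(cols)

# The Volterra series is linear in its kernels, so least squares recovers them directly
theta, *_ = np.linalg.lstsq(Phi, y, rcond=None)
h0, h1, h2 = theta[0], theta[1:1 + M], dict(zip(pairs, theta[1 + M:]))
print(np.round(h1, 3), {k: round(v, 3) for k, v in h2.items()})
```

Because the model is linear in the kernel coefficients, any standard least-squares solver applies; the number of regressors, however, grows combinatorially with the order and memory length, which is the practical limitation noted above.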
Volterra series
Mathematics
3,027
2,400,249
https://en.wikipedia.org/wiki/Pegfilgrastim
Pegfilgrastim, sold under the brand name Neulasta among others, is a PEGylated form of the recombinant human granulocyte colony-stimulating factor (GCSF) analog filgrastim. It serves to stimulate the production of white blood cells (neutrophils). Pegfilgrastim was developed by Amgen. Pegfilgrastim treatment can be used to stimulate bone marrow to produce more neutrophils to fight infection in patients undergoing chemotherapy. Pegfilgrastim has a human half-life of 15 to 80 hours, much longer than the parent filgrastim (3–4 hours). Pegfilgrastim was approved for medical use in the United States in January 2002, in the European Union in August 2002, and in Australia in September 2002. It is on the World Health Organization's List of Essential Medicines. Medical uses Pegfilgrastim is indicated to decrease the incidence of infection, as manifested by febrile neutropenia, in people with non-myeloid malignancies receiving myelosuppressive anti-cancer drugs associated with a clinically significant incidence of febrile neutropenia; and to increase survival in people acutely exposed to myelosuppressive doses of radiation (hematopoietic subsyndrome of acute radiation syndrome). See also References Amgen Drugs developed by Hoffmann-La Roche Immunostimulants Recombinant proteins World Health Organization essential medicines
Pegfilgrastim
Biology
326
13,132,497
https://en.wikipedia.org/wiki/Heterococcus
Heterococcus is a genus of yellow-green algae (xanthophytes) in the family Heteropediaceae. It is the only xanthophyte genus known to form lichens. Pirula is regarded as a synonym. Species Heterococcus africanus Heterococcus akinetus Heterococcus anguinus Heterococcus arcticus Heterococcus botrys Heterococcus brevicellularis Heterococcus caespitosus Heterococcus canadensis Heterococcus capitatus Heterococcus chodatii Heterococcus clavatus Heterococcus conicus Heterococcus corniculatus Heterococcus crassulus Heterococcus curvatus Heterococcus curvirostrus Heterococcus dissociatus Heterococcus endolithicus Heterococcus erectus Heterococcus filiformis Heterococcus flavescens Heterococcus fontanus Heterococcus fuornensis Heterococcus furcatus Heterococcus gemmatus Heterococcus geniculatus Heterococcus granulatus Heterococcus implexus Heterococcus leptosiroides Heterococcus longicellularis Heterococcus mainxii Heterococcus marietanii Heterococcus mastigophorus Heterococcus maximus Heterococcus moniliformis Heterococcus nepalensis Heterococcus papillosus Heterococcus plectenchymaticus Heterococcus pleurococcoides Heterococcus polymorphus Heterococcus presolanensis Heterococcus protonematoides Heterococcus quadratus Heterococcus ramosissimus Heterococcus stellatus Heterococcus stigeoclonioides Heterococcus subterrestris Heterococcus tectiformis Heterococcus teleutosporoides Heterococcus tellii Heterococcus thermalis Heterococcus tiroliensis Heterococcus undulatus Heterococcus unguis Heterococcus vesiculosus Heterococcus virginis Heterococcus viridis Heterococcus zonatus References Xanthophyceae Ochrophyte genera Taxa described in 1908
Heterococcus
Biology
498
7,583,202
https://en.wikipedia.org/wiki/Lernmatrix
Lernmatrix (German for "learning matrix") is a special type of artificial neural network (ANN) architecture, similar to associative memory, invented around 1960 by Karl Steinbuch, a pioneer in computer science and ANNs. This model for learning systems could establish complex associations between certain sets of characteristics (e.g., letters of an alphabet) and their meanings. Function The Lernmatrix generally consists of n "characteristic lines" and m "meaning lines," where each characteristic line is connected to each meaning line, similar to how neurons in the brain are connected by synapses. (This can be realized in various ways – according to Steinbuch, this could be done by hardware or software). To train a Lernmatrix, values are specified on the corresponding characteristic and meaning lines (binary or real); then the connections between all pairs of characteristic and meaning lines are strengthened by the Hebb rule. A trained Lernmatrix, when given a specific input on the characteristic lines, activates the corresponding meaning lines. In modern language, it is a linear projection module. By appropriately interconnecting several Lernmatrices, a switching system can be built that, after completing certain training phases, is ultimately able to automatically determine the most probable associated meaning for an input sequence of features. See also Artificial neural networks External links A new theoretical framework for the Steinbuch's Lernmatrix Pattern recognition and classification using weightless neural networks (WNN) and Steinbuch Lernmatrix DARPA project will study neural network processes Discussion in the newsgroup de.sci.informatik.ki Wolfgang Hilberg: "Karl Steinbuch, ein zu Unrecht vergessener Pionier der künstlichen neuronalen Systeme", Communication from the Institute for Data Technology at the Technical University of Darmstadt, 1995, PDF file (size approx. 4 MB) accessed on June 17, 2017 Further reading Karl Steinbuch: Automat und Mensch. 1st ed. Springer 1961. References Artificial neural networks Neuroinformatics
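The Hebbian training and recall procedure described above can be illustrated with a minimal sketch; the class name, array shapes, and winner-take-all readout below are illustrative assumptions rather than Steinbuch's original hardware realization.

```python
import numpy as np

class Lernmatrix:
    """Minimal sketch of a Steinbuch-style Lernmatrix (binary associative memory)."""

    def __init__(self, n_features, n_meanings):
        # connection strengths between characteristic (feature) lines and meaning lines
        self.W = np.zeros((n_meanings, n_features))

    def train(self, feature, meaning):
        # Hebb rule: strengthen connections where a feature line and a meaning line are both active
        self.W += np.outer(meaning, feature)

    def recall(self, feature):
        # drive the meaning lines; the most strongly activated line wins
        activation = self.W @ feature
        out = np.zeros_like(activation)
        out[np.argmax(activation)] = 1
        return out

# usage: associate two feature patterns with two meanings, then recall one of them
lm = Lernmatrix(n_features=4, n_meanings=2)
lm.train(np.array([1, 1, 0, 0]), np.array([1, 0]))
lm.train(np.array([0, 0, 1, 1]), np.array([0, 1]))
print(lm.recall(np.array([1, 1, 0, 0])))  # -> [1. 0.]
```

The recall step is a linear projection followed by a selection of the most probable meaning, matching the description of the trained Lernmatrix as a linear projection module.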
Lernmatrix
Biology
426
6,805,053
https://en.wikipedia.org/wiki/Predispositioning%20theory
Predispositioning theory, in the field of decision theory and systems theory, is a theory focusing on the stages between a complete order and a complete disorder. Predispositioning theory was founded by Aron Katsenelinboigen (1927–2005), a professor in the Wharton School who dealt with indeterministic systems such as chess, business, economics, and other fields of knowledge and also made an essential step forward in elaboration of styles and methods of decision-making. Predispositioning theory Predispositioning theory is focused on the intermediate stage between a complete order and a complete disorder. According to Katsenelinboigen, the system develops gradually, going through several stages, starting with incomplete and inconsistent linkages between its elements and ending with complete and consistent ones. "Mess. The zero phase can be called a mess because it contains no linkages between the system's elements. Such a definition of mess as ‘a disorderly, un-tidy, or dirty state of things’ we find in Webster's New World Dictionary. (...) Chaos. Mess should not be confused with the next phase, chaos, as this term is understood today. Arguably, chaos is the first phase of indeterminism that displays sufficient order to talk of the general problem of system development. The chaos phase is characterized by some ordering of accumulated statistical data and the emergence of the basic rules of interactions of inputs and outputs (not counting boundary conditions). Even such a seemingly limited ordering makes it possible to fix systemic regularities of the sort shown by Feigenbaum numbers and strange attractors. (...) Different types of orderings in the chaos phase may be brought together under the notion of directing, for they point to a possible general direction of system development and even its extreme states. But even if a general path is known, enormous difficulties remain in linking algorithmically the present state with the final one and in operationalizing the algorithms. These objectives are realized in the next two large phases that I call predispositioning and programming. (...) Programming. When linkages between states are established through reactive procedures, either by table functions or analytically, it is often assumed that each state is represented only by essentials. For instance, the production function in economics ties together inputs and outputs in physical terms. When a system is represented as an equilibrium or an optimization model, the original and conjugated parameters are stated explicitly; in economics, they are products (resources) and prices, respectively.9 Deterministic economic models have been extensively formalized; they assume full knowledge of inputs, outputs, and existing technologies. (...) Predispositioning (...) exhibits less complete linkages between system's elements than programming but more complete than chaos." Methods like programming and randomness are well-known and developed while the methodology for the intermediate stages lying between complete chaos and complete order as well as their philosophical conceptualization have never been discussed explicitly and no methods of their measurements were elaborated. According to Katsenelinboigen, operative sub-methods of dealing with the system are programming, predispositioning, and randomness. They correspond to three stages of systems development. Programming is a formation of complete and consistent linkages between all the stages of the systems' development. 
Predispositioning is a formation of semi-efficient linkages between the stages of the system's development. In other words, predispositioning is a method responsible for creation of a predisposition. Randomness is a formation of inconsistent linkages between the stages of the system's development. In this context, for instance, Darwinism emphasizes the exclusive role of chance occurrences in the system's development since it gives top priority to randomness as a method. Conversely, creationism states that the system develops in a comprehensive fashion, i.e. that programming is the only method involved in the development of the system. As Aron Katsenelinboigen notices, both schools neglect the fact that the process of the system's development includes a variety of methods which govern different stages, depending on the systems’ goals and conditions. Unfortunately, predispositioning as a method as well as a predisposition as an intermediate stage have never been discussed by scholars, though there were some interesting intuitive attempts to deal with the formation of a predisposition. The game of chess, at this point, was one of the most productive fields in the study of predispositioning as a method. Owing to chess’ focus on the positional style, it elaborated a host of innovative strategies and tactics that Katsenelinboigen analyzed and systematized and made them a basis for his theory. To sum up, the main focus of predispositioning theory is on the intermediate stage of systems development, the stage that Katsenelinboigen proposed to call a "predisposition". This stage is distinguished by semi-complete and semi-consistent linkages between its elements. The most vital question when dealing with semi-complete and semi-consistent stages of the system is the question of its evaluation. To this end, Katsenelinboigen elaborated his structure of values, using the game of chess as a model. Structure of values According to Katsenelinboigen's predispositioning theory, in the chess game pieces are evaluated from two basic points of view – their weight in a given position on the chessboard and their weight independent to any particular position. Based on the degree of conditionality, the values are: Fully unconditional Unconditional Semi-conditional Conditional According to Katsenelinboigen, game pieces in chess are evaluated from two basic points of view: their weight with regard to a certain situation on the chessboard and their weight without regard to any particular situation, only to the position of the pieces. The latter are defined by Katsenelinboigen as semi-unconditional values, formed by the sole condition of the rules of piece interaction. The semiunconditional values of the pieces (such as queen 9, rook 5, bishop 3, knight 3, and pawn 1) appear as a result of the rules of interaction of a piece with the opponent's king. All other conditions, such as starting conditions, final goal, and a program that links the initial condition to the final state, are not taken into account. The degree of conditionality is increased by applying preconditions, and the presence of all four preconditions fully forms conditional values. Katsenelinboigen outlines two extreme cases of the spectrum of values—fully conditional and fully unconditional—and says that, in actuality, they are ineffectual in evaluating the material and so are sometimes replaced by semiconditional or semiunconditional valuations, which are distinguished by their differing degrees of conditionality. 
He defines fully conditional values as those based on complete and consistent linkages among all four preconditions." The conditional values are formed by the four basic conditions: starting conditions final goal a program that links the initial conditions with the final state rules of interaction The degree of unconditionality is predicated by the necessity to evaluate things under uncertainty (when the future is unknown) and conditions cannot be specified. Applying his concept of values to social systems, Katsenelinboigen shows how the degree of unconditionality forms morality and law. According to him, the moral values represented in the Torah as the Ten Commandments are analogous to semi-unconditional values in a chess game, for they are based exclusively on the rules of interactions. "The difference between these two approaches is clearly manifested in the various translations of the Torah. For instance, The Holy Scriptures (1955), a new translation based on the masoretic text (a vast body of the textual criticism of the Hebrew Bible), translates the commandment as ‘Thou shalt not commit murder.’ In The Holy Bible, commonly known as the authorized (King James) version (The Gideons International, 1983), this commandment is translated as ‘Thou shalt not kill.’ (...) The difference between unconditional and semi-unconditional evaluations will become more prominent if we use the same example of ‘Thou shalt not kill and ‘Thou shalt not murder’ to illustrate the conduct of man in accordance with his precepts. In an extreme case, one who follows ‘Thou shalt not kill’ will allow himself to be killed before he kills another. These views are held by one of the Hindu sects in Sri Lanka (the former Ceylon). To the best of my knowledge, the former prime minister of Ceylon, Solomon Bandaranaike (1899-1959), belonged to this sect. He did not allow himself to kill an attacker and was murdered. As he lay bleeding to death, he did crawl over to the murderer and knock the pistol from his hand before it could be used against his wife, Sirimavo Bandaranaike. She later became the prime minister of Ceylon-Sri Lanka." But how does one ascribe weights to certain parameters, establishes the degree of conditionality, etc.? How does the evaluative process go in indeterministic systems? The role of subjectivity Katsenelinboigen states that the evaluative category for indeterministic systems is based on subjectivity. "This pioneering approach to the evaluative process is the subject of Katsenelinboigen’s work on indeterministic systems. The roots of one’s subjective evaluation lie in the fact that the executor cannot be separated from the evaluator, who evaluates the system in accordance with his or her own particular ability to develop it. This can be observed in chess, in which the same position is evaluated differently by different chess players, or in literature with regard to hermeneutics." Katsenelinboigen writes: The subjective element arises not because the set of positional parameters and their valuations are formed based on a player’s intuition. Rather, the choice of relevant parameters depends on the actual executor of the position, that is, the particular strengths and weaknesses of a given player. The role of the executor becomes vital because the actual realization of the position is not known beforehand, so future moves will have to be made based on the contingent situation at hand. 
Katsenelinboigen clearly explains why subjectivity of the managerial decision is inevitable: "The original subjective evaluation of the situation by the decision-maker is critical in the creative strategic management. Subjectivity of the managerial decisions is inevitable due to the intrinsically indeterministic nature of the strategic management, meaning that the subjectivity arises not just because of the lack of scientific foundation in business management. The effective approach to the strategic decision-making, as demonstrated in the game of chess, presupposes that each player has a unique, individual vision of his strategic position. To make it more systematic, one should not substitute the player’s intuition with some objective laws that relate essential and positional parameters, but rather complement the intuition with the statistical analysis." To sum up, subjectivity becomes an important factor in evaluating a predisposition. The roots of one's subjective evaluation lie in the fact that the executor cannot be separated from the evaluator who evaluates the system in accordance with his own particular ability to develop it. The structure of values plays an essential part in calculus of predisposition. Calculus of predispositions Calculus of predispositions, a basic part of predispositioning theory, belongs to the indeterministic procedures. "The key component of any indeterministic procedure is the evaluation of a position. Since it is impossible to devise a deterministic chain linking the inter-mediate state with the outcome of the game, the most complex component of any indeterministic method is assessing these intermediate stages. It is precisely the function of predispositions to assess the impact of an intermediate state upon the future course of development." According to Katsenelinboigen, calculus of predispositions is another method of computing probability. Both methods may lead to the same results and, thus, can be interchangeable. However, it is not always possible to interchange them since computing via frequencies requires availability of statistics, possibility to gather the data as well as having the knowledge of the extent to which one can interlink the system's constituent elements. Also, no statistics can be obtained on unique events and, naturally, in such cases the calculus of predispositions becomes the only option. The procedure of calculating predispositions is linked to two steps – dissection of the system on its constituent elements and integration of the analyzed parts in a new whole. According to Katsenelinboigen, the system is structured by two basic types of parameters – material and positional. The material parameters constitute the skeleton of the system. Relationships between them form positional parameters. The calculus of predispositions primarily deals with analyzing the system's material and positional parameters as independent variables and measuring them in unconditional valuations. "In order to quantify the evaluation of a position we need new techniques, which I have grouped under the heading of calculus of predispositions. This calculus is based on a weight function, which represents a variation on the well-known criterion of optimality for local extremum. This criterion incorporates material parameters and their conditional valuations. 
The following key elements distinguish the modified weight function from the criterion of optimality: First and foremost, the weight function includes not only material parameters as independent (controlling) variables, but also positional (relational) parameters. The valuations of material and positional parameters composing the weight function are, to a certain extent, unconditional; that is, they are independent of the specific conditions, but do take into account the rules of the game and statistics (experience)." To conclude, there are some basic differences between frequency-based and predispositions-based methods of computing probability. The frequency-based method is grounded in statistics and frequencies of events. The predispositions-based method approaches a system from the point of view of its predisposition. It is used when no statistics is available. The predispositions-based method is used for the novel and unique situations. According to Katsenelinboigen, the two methods of computing probability may complement each other if, for instance, they are applied to a multilevel system with an increasing complexity of its composition at higher levels. See also Karl Popper References External links Concept of cancer Economy as a Dynamic System The concept of indeterminism and its applications The Language of Predispositioning A Concept of Dramatic Genre and the Comedy of a New Type Systems theory Systems engineering Decision theory
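As a rough illustration of the weight-function idea quoted above, which combines material parameters with positional (relational) parameters under largely unconditional valuations, the following is a toy, chess-flavoured sketch. The particular positional parameters, weights, and numbers are invented for demonstration and are not Katsenelinboigen's own calculus.

```python
# Toy sketch of a "weight function" mixing material and positional parameters.
# All parameter choices and weights below are illustrative assumptions.

PIECE_VALUES = {"Q": 9, "R": 5, "B": 3, "N": 3, "P": 1}   # semi-unconditional material values

def weight_function(material, positional, positional_weights):
    """Evaluate a predisposition as material value plus weighted positional parameters."""
    material_score = sum(PIECE_VALUES[p] for p in material)
    positional_score = sum(positional_weights[k] * v for k, v in positional.items())
    return material_score + positional_score

# Example: identical material, different positional parameters (mobility, king safety, pawn structure)
weights = {"mobility": 0.1, "king_safety": 0.5, "pawn_structure": 0.3}
side_a = weight_function(["Q", "R", "P", "P"],
                         {"mobility": 20, "king_safety": 1, "pawn_structure": 2}, weights)
side_b = weight_function(["Q", "R", "P", "P"],
                         {"mobility": 8, "king_safety": -1, "pawn_structure": 0}, weights)
print(side_a, side_b)   # same material, different predispositions
```

The point of the sketch is only that two positions with identical material can receive different evaluations once positional parameters enter the weight function, which is the distinction the theory draws between material and positional parameters.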
Predispositioning theory
Engineering
3,044
36,726,184
https://en.wikipedia.org/wiki/Planktivore
A planktivore is an aquatic organism that feeds on planktonic food, including zooplankton and phytoplankton. Planktivorous organisms encompass a range of some of the planet's smallest to largest multicellular animals in both the present day and in the past billion years; basking sharks and copepods are just two examples of giant and microscopic organisms that feed upon plankton. Planktivory can be an important mechanism of top-down control that contributes to trophic cascades in aquatic and marine systems. There is a tremendous diversity of feeding strategies and behaviors that planktivores utilize to capture prey. Some planktivores utilize tides and currents to migrate between estuaries and coastal waters; other aquatic planktivores reside in lakes or reservoirs where diverse assemblages of plankton are present, or migrate vertically in the water column searching for prey. Planktivore populations can impact the abundance and community composition of planktonic species through their predation pressure, and planktivore migrations facilitate nutrient transport between benthic and pelagic habitats. Planktivores are an important link in marine and freshwater systems that connect primary producers to the rest of the food chain. As climate change causes negative effects throughout the global oceans, planktivores are often directly impacted through changes to food webs and prey availability. Additionally, harmful algal blooms (HABs) can negatively impact many planktivores and can transfer harmful toxins from the phytoplankton, to the planktivores, and along up the food chain. As an important source of revenue for humans through tourism and commercial uses in fisheries, many conservation efforts are going on globally to protect these diverse animals known as planktivores. Plankton and planktivory across taxonomic classes Phytoplankton: prey Plankton are defined as any type of organism that is unable to swim actively against currents and are thus transported by the physical forcing of tides and currents in the ocean. Phytoplankton form the lowest trophic level of marine food webs and thus capture light energy and materials to provide food and energy for hundreds of thousands of types of planktivores. Because they require light and abundant nutrients, phytoplankton are typically found in surface waters where light rays can penetrate water. Nutrients that sustain phytoplankton include nitrate, phosphate, silicate, calcium, and micronutrients like iron; however, not all phytoplankton require all these identified nutrients and thus differences in nutrient availability impact phytoplankton species composition. This class of microscopic, photosynthetic organisms includes diatoms, coccolithophores, protists, cyanobacteria, dinoflagellates, and other microscopic algae. Phytoplankton conduct photosynthesis via pigments in their cells; phytoplankton can use chlorophyll as well as other accessory photosynthetic pigments like fucoxanthin, chlorophyll c, alloxanthin, and carotenoids, depending on species. Due to their environmental requirements for light and nutrients, phytoplankton are most commonly found near continental margins, the equator, high-latitudes, and nutrient-rich areas. They also form the foundation of the biological pump, which transports carbon to depth in the ocean. Zooplankton: predators and prey Zooplankton ("zoo" meaning "animal") are generally consumers of other organisms for food. 
Zooplankton may consume either phytoplankton or other zooplankton, making them the smallest class of planktivores. They are common to most marine pelagic environments and act as an important step in the food chain to transfer energy up from primary producers to the rest of the marine food web. Some zooplankton remain planktonic for their entire lives, while others eventually grow large enough to swim against currents. For instance, fish are born as planktonic larvae but once they grow large enough to swim, they are no longer considered plankton. Many taxonomic groups (e.g. fishes, krill, corals, etc.) are zooplankton at some point in their lives. For example, oysters begin as planktonic larvae; during this stage when they are considered zooplankton, they consume phytoplankton. Once they mature to adulthood, oysters continue to consume phytoplankton. The spiny water flea is another example of a planktivorous invertebrate. Some of the largest communities of zooplankton exist in high latitude systems like the eastern Bering Sea; pockets of dense zooplankton abundance also exist in the California Current and the Gulf of Mexico. Zooplankton are, in turn, common prey items for planktivores; they respond to environmental change very rapidly due to their relatively short life spans, and so scientists can track their dynamics to understand what might be occurring in the larger marine food web and environment. The relative ratios of certain zooplankton in the larger zooplankton community can also indicate an environmental change (e.g., eutrophication) that may be significant. For instance, an increase in rotifer abundance in the Great Lakes has been correlated with abnormally high levels of nutrients (eutrophication). Vertebrates: predators and prey Many fishes are planktivorous during all or part of their life cycles, and these planktivorous fish are important to human industry and as prey for other organisms in the environment like seabirds and piscivorous fishes. Planktivores comprise a large component of tropical ecosystems; in the Indo-Australian Archipelago, one study identified 350 planktivorous fish species in one studied grid cell and found that 27% of all fish species in this region were planktivorous. This global study found that coral reef habitats globally have a disproportionate amount of planktivorous fishes. In other habitats, examples of planktivorous fishes include many types of salmon like the pink salmon, sandeels, sardines, and silvery lightfish. In ancient systems (read more below), the Titanichthys was an early massive vertebrate pelagic planktivore, with a lifestyle similar to that of the modern basking, whale, and megamouth sharks, all of whom are also planktivores. Sea birds can also be planktivores; least auklets, crested auklets, storm petrels, ancient auklets, phalaropes, and many penguins are all examples of avian planktivores. Planktivorous seabirds can be indicators of ecosystem status because their dynamics often reflect processes affecting many trophic levels, like the consequences of climate change. Blue whales and bowhead whales as well as some seals like the crabeater seal (Lobodon carcinophagus) are also planktivorous. Blue whales were recently found to consume a vast amount more plankton than was previously understood, representing an important element of the ocean biogeochemical cycle. 
Feeding strategies As previously mentioned, some plankton communities are well-studied and respond to environmental change very rapidly; understanding unusual plankton dynamics can elucidate potential consequences to planktivorous species and the larger marine food chain. One well-studied planktivore species is the gizzard shad (Dorosoma cepedianum) which has a voracious appetite for various forms of plankton across its life cycle. Planktivores can be either obligate planktivores, meaning they can only feed on plankton, or facultative planktivores, which take plankton when available but eat other types of food as well. In the case of the gizzard shad, they are obligate planktivores when larvae and juveniles, in part due to their very small mouth size; larval gizzard shad are most successful when small zooplankton are present in adequate quantities within their habitat. As they grow, gizzard shad become omnivores, consuming phytoplankton, zooplankton, and larger pieces of nutritious detritus. Adult gizzard shad consume large volumes of zooplankton until it becomes scarce, then start consuming organic debris instead. Larval fishes and blueback herring are other well-studied examples of obligate planktivores, whereas fishes like the ocean sunfish can alternate between plankton and other food sources (i.e., are facultative planktivores). Facultative planktivores tend to be more opportunistic and live in ecosystems with many types of food sources. Obligate planktivores have fewer options for prey choices; they are typically restricted to marine pelagic ecosystems that have a dominant plankton presence, such as highly productive upwelling regions. Mechanics of consuming plankton Planktivores, whether obligate or facultative, obtain food in multiple ways. Particulate feeders eat planktonic items selectively, by identifying plankton and pursuing them in the water column. Filter feeders process large volumes of water internally via different mechanisms, explained below, and strain food items out en masse or remove food particles from water as it passes by. "Tow-net" filter feeders swim rapidly with mouths open to filter the water, whereas "pumping" filter feeders suck in water via pumping actions. The charismatic flamingo is a pumping filter feeder, using its muscular tongue to pump water along specialized grooves in its bill and pump water back out once plankton have been retrieved. In a different filter feeding process, stationary animals, like corals, use their tentacles to grab plankton particles out of the water column and transfer the particles into their mouth. There are numerous interesting adaptations to remove plankton from the water column. The phalaropes use surface tension feeding to transport particles of prey to their mouth to be swallowed. These birds capture individual particles of plankton held in a droplet of water, suspended in their beaks. They then use a sequence of actions that begin with a quick opening of their beak to increase the surface area of the water droplet encasing prey. The action of stretching out the water droplet ultimately pushes the water and prey to the back of the throat where it can be consumed. These birds also spin around at the water surface, creating their own eddies that draw prey up closer to their beaks. Some species actively hunt plankton: in certain habitats such as the deep open ocean, as mentioned above, the planktivorous basking shark (Cetorhinus maximus) track the movements of their prey closely up and down the water column. 
The megamouth shark (Megachasma pelagios), another planktivorous species, adopts a similar feeding strategy that mirrors the movement of their planktonic prey in the water column. Similar to active hunting, some zooplankton, like copepods, are ambush hunters, meaning they wait in the water column for prey to come within range and then rapidly attack and consume. Some fishes change their feeding strategy throughout their lives; the Atlantic menhaden (Brevoortia tyrannus) is an obligate filter feeder in early life stages, but matures into a particulate feeder. Some fishes, like the northern anchovy (Engraulis mordax), can simply modify their feeding behavior depending on the prey or environmental conditions. Some fishes also school together when feeding to help improve contact rates with plankton and simultaneously protect themselves from predation. Some fishes have gill rakers, an internal filtration structure that assists fishes with capturing plankton prey. The number of gill rakers can indicate planktivory as well as the typical size of plankton consumed, showing a correlation between gill raker structure and the consumed plankton type.

Nutritional value of plankton

Plankton have highly variable chemical compositions, which impact their nutritional quality as a food source. Scientists are still working to understand how nutritional quality varies with the type of plankton; for example, diatom nutritional quality is a controversial topic. The ratios of phosphorus and nitrogen to carbon within a given plankton determine its nutritional quality. More carbon in an organism relative to these two elements decreases the plankton's nutritional value. Additionally, plankton with higher amounts of polyunsaturated fatty acids are typically more energy dense. The nutritional value of plankton does sometimes depend on the nutritional needs of the planktivorous species. For fishes, the nutritional value of plankton is dependent on docosahexaenoic acid, long-chain polyunsaturated fatty acids, arachidonic acid, and eicosapentaenoic acid, with higher concentrations of those chemicals leading to higher nutritional value. However, lipids in plankton prey are not the only required chemical for larval fish; Malzahn et al. found that other nutrients, like phosphorus, were necessary before growth improvements due to lipid concentrations could be realized. Additionally, it has been shown experimentally that the nutritional value of prey is more important than prey abundance for larval fishes. With climate change, plankton may decrease in nutritional quality. Lau et al. discovered that warming conditions and inorganic nutrient depletion in lakes as a result of climate change decreased the nutritional value of plankton communities.

Planktivory across ecological systems

Ancient systems

Planktivory is a common feeding strategy among some of our planet's largest organisms in both the present and the past. Massive Mesozoic organisms like pachycormids have recently been identified as planktivores; some individuals of this group reached lengths upwards of 9 feet. Scientists also recently discovered the fossilized remains of another ancient organism, which they named the "false megamouth" (Pseudomegachasma) shark, and which was likely a filter-feeding planktivore during the Cretaceous period. This new discovery illuminated planktivory as an example of convergent evolution, whereby distinct lineages evolved to fulfill similar dietary niches.
In other words, the false megamouth and its planktivory evolved separate from the ancestors of present-day shark planktivores like the megamouth shark, whale shark, and basking shark, all mentioned above. Arctic systems The Arctic supports productive ecosystems that include many types of planktivorous species. Planktivorous pink salmon are common in the Arctic and the Bering Strait and have been suggested to exert significant control on structuring the phytoplankton and zooplankton dynamics in the subarctic North Pacific. Shifts in prey type have also been observed: in northern Arctic regions, salmon are typically piscivorous (consuming other fish) while in the southern Arctic and Bering Strait they are planktivorous. Capelin, Mallotus villosus, are also distributed across much of the Arctic and can exert significant control on zooplankton populations as a result of their planktivorous diet. Capelin have also been seen to exhibit cannibalism on their eggs when other types of preferred plankton sources become less available; alternatively, this behavior may be because increased spawning leads to more eggs in the environment for consumption. Arctic cod are also important zooplankton consumers and appear to follow aggregations of zooplankton around the region. Planktivorous birds like the fork-tailed storm-petrel and many types of auklets are also very common in the Arctic. Little auks are the most common Arctic planktivore species; as they reproduce on land, their planktivory creates an important link between marine and terrestrial nutrient reserves. This link is formed as little auks consume plankton with marine-derived nutrients at sea, then deposit nutrient-rich waste products on land during their reproductive process. Temperate and sub-arctic systems In freshwater lake systems, planktivory can be an important forcer of trophic cascades which can ultimately affect phytoplankton production. Fishes, in these systems, can promote phytoplankton productivity by preying on the zooplankton that control phytoplankton abundances. This is an example of top-down trophic control, where higher trophic organisms like fishes impose control on the abundance of lower trophic organisms, like phytoplankton. Such control on primary production via planktivorous organisms can be important in the functioning of mid-western United States lake systems. Fishes are often the most impactful zooplankton predators, as seen in Newfoundland where three-spine stickleback (Gasterosteus aculeatus) predate heavily upon zooplankton. In temperate lakes, the cyprinid and centrarchid fish families are commonly represented among the planktivore community. Planktivores can exert significant competition pressure on organisms in certain lake systems; for instance, in an Idaho lake the introduced planktivorous invertebrate shrimp Mysis relicta competes with the native landlocked planktivorous salmon kokanees. Because of the salmon's importance in trophic cycling, the loss of fishes in temperate lake systems could lead to widespread ecological consequences; in this example, such a loss could lead to unchecked predation on plankton by Mysis relicta. Planktivory can also be important in man-made reservoirs. In contrast to deeper and colder natural lakes, reservoirs are warmer, shallower, heavily modified human made systems with different ecosystem dynamics. Gizzard shad, the previously mentioned obligate planktivore, is frequently the most common fish in many reservoir systems. 
In certain sub-Arctic habitats, such as deep waters, the planktivorous basking shark closely tracks the movements of its prey up and down the water column. Other species like the megamouth shark adopt a similar feeding strategy that mirrors the movement of their planktonic prey in the water column. In sub-Arctic lakes, certain morphs of the whitefish (Coregonus lavaretus) are planktivorous; the pelagic whitefish feeds primarily on zooplankton and as such has more gill rakers for enhanced feeding than other, non-planktivorous morphs of the same species.

Nutrient limitation in lake systems

The primary limiting nutrient can shift between nitrogen and phosphorus as a consequence of changes in the structure of the food web, thereby limiting primary and secondary production in aquatic ecosystems. The bioavailability of such nutrients drives variation in the biomass and productivity of planktonic species. Due to variance in the N:P excretion of planktivorous fish species, consumer-driven nutrient cycling results in changes in nutrient availability. By feeding on zooplankton, planktivorous fish can increase the rate of nutrient recycling by releasing phosphorus from their prey. Planktivorous fish may release cyanobacteria from nutrient limitation by increasing the concentration of bioavailable phosphorus through excretion. The presence of planktivorous fish can disturb sediments, resulting in an increase in the amount of nutrients that are bioavailable to phytoplankton and further supporting phytoplankton nutrient demands.

Planktivore effects on a global scale

Trophic regulation

Planktivory can play an important role in the growth, abundance, and community composition of planktonic species via top-down trophic control. For example, the competitive superiority of large zooplankton over smaller species in lake systems leads to large-body dominance in the absence of planktivorous fish, as a result of increased food availability and grazing efficiency. Alternatively, the presence of planktivorous fish results in a decrease in the zooplankton population through predation and shifts the community composition towards smaller zooplankton by limiting food availability and influencing size-selective predation (see the "predation" page for more information regarding size-selective predation). Predation by planktivorous fish reduces grazing by zooplankton and subsequently increases phytoplankton primary production and biomass. When the population and growth rate of zooplankton are limited, obligate zooplanktivores are less likely to migrate to the area because of the lack of available food. For example, the presence of gizzard shad in reservoirs has been observed to strongly influence the recruitment of other planktivores. Variations in fish recruitment and mortality rates resulting from nutrient limitation have also been noted in lake ecosystems. Piscivory can have similar top-down effects on planktonic species by influencing the community composition of planktivores. The population of planktivorous fish can also be influenced through predation by piscivorous species such as marine mammals and aquatic birds. For example, planktivorous minnows in Lake Gatun experienced a rapid population decline after the introduction of peacock bass (Cichla ocellaris). However, a reduced population of planktivorous fish species results in a population increase of another class of planktivores – zooplankton.
In lake ecosystems, some fish have been observed to behave first as zooplanktivores and then as piscivores, affecting cascading trophic interactions. Planktivory pressure from zooplankton in marine communities (top-down control, as previously mentioned) has a large influence on phytoplankton productivity. Zooplankton can control phytoplankton seasonal dynamics because they exert the largest grazing pressure on phytoplankton; they also may modify their grazing strategies depending on environmental conditions, leading to seasonal change. For instance, copepods can switch between ambushing prey and using water flow to capture prey depending on external conditions and prey abundance. The planktivorous pressure zooplankton exert could explain the diversity of phytoplankton despite many phytoplankton occupying similar ecological niches (see the "paradox of the plankton" page for more information regarding this ecological conundrum). One notable example of trophic control is how planktivores have the ability to impact the species distribution of larval crabs in estuaries and coastal waters. Crab larvae, which are also planktivores, are hatched inside estuaries, but some species then begin their migration out to waters along the coast where there are fewer predators. These crab larvae then utilize the tides to return to the estuaries when they become benthic organisms and are no longer planktivores. Planktivores tend to live their early lives within estuaries. These juvenile fish tend to inhabit these regions throughout the warmer months of the year. Throughout the year, the risk to plankton within estuaries varies: it is highest from August to October and lowest from December to April, which is consistent with the theory that planktivory peaks in the summer months in this system. The risk of planktivory is strongly correlated with the number of planktivores within this system.

Nutrient transport

Consumers can regulate primary production in an ecosystem by altering ratios of nutrients via different rates of recycling. Nutrient transport is greatly influenced by planktivorous fish, which recycle and transport nutrients between benthic and pelagic habitats. Nutrients released by benthic-feeding fishes can increase the total nutrient content of pelagic waters, as transported nutrients are fundamentally different from those that are recycled. Additionally, planktivorous fish can have a significant effect on nutrient transport as well as total nutrient concentration by disturbing sediments through bioturbation. Increased nutrient cycling from near-sediment bioturbation by filter-feeding planktivores can increase phytoplankton populations via nutrient enrichment. Salmon accumulate marine nutrients as they mature in ocean environments, which they then transport back to their stream of origin to spawn. As the salmon decompose, the freshwater streams become enriched with nutrients that contribute to the development of the ecosystem. The physical transport of nutrients and plankton can greatly affect the community composition and food web structure within oceanic ecosystems. In nearshore regions, planktivores and piscivores have been shown to be highly sensitive to changes in ocean currents, while zooplankton populations are unable to withstand heightened levels of predation pressure.
Changes in phytoplankton communities and growth rates can modify the amount of grazing pressure present; grazing pressure can also be dampened by physical factors in the water column. The scientist Michael Behrenfeld proposed that the deepening of the mixed layer in the ocean, a vertical region near the surface made physically and chemically homogenous by active mixing, leads to a decrease in grazing interactions among planktivores and plankton because planktivores and plankton become more spatially distant from one another. This spatial distance thereby facilitates phytoplankton blooms and ultimately grazing rates by planktivores; both the physical changes and changes to grazing pressure have a significant influence on where and when phytoplankton blooms occur. The shallowing of the mixed layer due to physical processes within the water column conversely intensifies planktivore feeding. Harmful algal blooms Harmful algal blooms occur when there is a bloom of toxin producing phytoplankton. Planktivores such as fish and filter feeders that are present have a high likelihood of consuming these phytoplankton because that is what makes up the majority of their diet, or the diet of their prey. Since these planktivores near the bottom of the food chain consume harmful toxins, those toxins then move up the food web when predators consume these fish. The increasing concentration of some toxins through trophic levels presented here is called bioaccumulation, and this can lead to a range of impacts from non-lethal changes in behavior to major die-offs of large marine animals. There are monitoring programs in place for shellfish due to human health concerns and the ease of sampling in oysters. Some fish feed directly on phytoplankton, like the Atlantic herring (Clupea harengus), and Clupeidae, while other fish feed on zooplankton that consume the harmful algae. Domoic acid is a toxin carried by a type of diatom called Pseudo-nitzschia. Pseudo-nitzchia were the main organism responsible for a large HAB that took place along the west coast of the US in 2015 and had a large impact on the Dungeness crab fishery that year. When harmful algal blooms occur, planktivorous fish can act as vectors for poisonous substances like domoic acid. These planktivorous fish are eaten by larger fish and birds and the subsequent ingestion of toxins can then harm those species. Those animals consume planktivorous fish during a harmful algal bloom, and can have miscarriages, seizures, vomiting, and can sometimes die. Additionally, marine mammal mortality is occasionally attributed to harmful algal blooms, according to NOAA. Krill are another example of a planktivore that may exhibit high levels of domoic acid in their system; these large plankton are then consumed by humpback and blue whales. Since krill can have such a high level of domoic acid in their system when blooms are present, that concentration is rapidly transferred to whales which leads them to have a high concentration of domoic acid in their system as well. There is no evidence proving that this domoic acid has had a negative impact on the whales, but if the concentration of domoic acid is great enough, they could be impacted similarly to other marine mammals. The role of climate change Climate change is a worldwide phenomenon that affects everything from the largest planktivores such as whales, to even the smallest plankton. 
Climate change affects weather patterns, creates seasonal anomalies, alters sea surface temperature, alters ocean currents, can affect nutrient availability for phytoplankton, and may even spur HABs in some systems.

Arctic and Antarctic

The Arctic has been hit hard: shorter winters and hotter summers are reducing permafrost and rapidly melting ice caps, causing lower salinity levels. The coupling of higher ocean CO2 levels, higher temperatures, and lower salinity is causing changes in phytoplankton communities and diatom diversity. Thalassiosira spp. were replaced by solitary Cylindrotheca closterium or Pseudo-nitzschia spp., a common HAB-causing phytoplankton, under the combination of higher temperature and lower salinity. Community changes such as this one have large-scale effects through trophic levels. A shift in the primary producer communities can cause shifts in consumer communities, as the new food may provide different dietary benefits. As there is less permanent ice in the Arctic and less summer ice, some planktivorous species are already moving north into these new open waters. Atlantic cod and orcas have been documented in these new territories, while planktivores such as Arctic cod are losing their habitat and feeding grounds under and around the sea ice. Similarly, Arctic birds such as the least and crested auklets rely on zooplankton that live under the disappearing sea ice and have experienced dramatic effects on reproductive fitness and nutritional stress as the amount of zooplankton available in the Bering Sea basin decreases. In another prime example of shifting food webs, Moore et al. (2018) have found a shift from a benthic-dominated to a more pelagic-dominated ecosystem feeding structure. With longer open-water periods due to a loss of sea ice, the Chukchi Sea has seen such a shift over the past three decades. The increase in air temperature and the loss of sea ice have coupled to promote an increase in pelagic fishes and a decrease in benthic biomass. This shift has encouraged a change towards planktivorous seabirds instead of piscivorous seabirds. Pollock are planktivorous fish that rely on copepods as their primary diet as juveniles. According to the Oscillating Control Hypothesis, early ice retreat caused by a warming climate creates a later bloom of copepods and aphids (a plankton species). The later bloom produces fewer large, lipid-rich copepods and results in smaller, less nutrient-rich copepods. The older pollock then face winter starvation, causing carnivory on young pollock (<1 yr old) and reduced population numbers and fitness. Similar to the Arctic, sea ice in the Antarctic is melting rapidly, and permanent ice is steadily declining. This ice melt creates changes in freshwater input and ocean stratification, consequently affecting nutrient delivery to primary producers. As sea ice recedes, there is less valuable surface area for algae to grow on the bottom of the ice. This lack of algae leaves krill (a partly planktonic species) with less food available, consequently affecting the fitness of Antarctic primary consumers such as krill, squid, pollock, and other carnivorous zooplankton.

Subarctic

The Subarctic has seen similar ecosystem changes, especially in well-studied places such as Alaska. The warmer waters have helped increase zooplankton communities and have been creating a shift in ecosystem dynamics (Green 2017).
There has been a large shift from piscivorous seabirds such as Pacific loons and black-legged kittiwakes to planktivorous seabirds such as ancient auklets and short-tailed shearwaters. Marine planktivores such as the charismatic humpback, fin, and minke whales have been benefiting from the increase in zooplankton, such as krill. As these large whales spend more time migrating into these northern waters, they are taking up resources previously used only by Arctic planktivores, creating potential shifts in food availability and thus food webs.

Tropics

Tropical and equatorial marine regions are mainly characterized by coral reef communities or vast open oceans. Coral reefs are among the ecosystems most susceptible to climate change, in particular to the warming and acidification of the oceans. Ocean acidification, driven by rising CO2 levels in the ocean, has significant effects on zooplankton communities. Smith et al. (2016) discovered that increased levels of CO2 lead to reductions in zooplankton biomass but not zooplankton quality in tropical ecosystems, as increased CO2 had no negative effects on fatty acid compositions. This means that planktivores are not receiving less nutritious zooplankton, but are experiencing lower availability of zooplankton than is needed for survival. Among the most important planktivores in the tropics are corals themselves. Although they spend a portion of their life cycle as planktonic organisms themselves, established corals are sedentary organisms that can use their tentacles to capture plankton from the surrounding environment to help supplement the energy produced by their photosynthetic zooxanthellae. Climate change has had significant impacts on coral reefs: warming causes coral bleaching and increases in infectious diseases; sea-level rise causes more sedimentation that then smothers corals; stronger and more frequent storms cause breakage and structural destruction; an increase in land runoff brings more nutrients into the systems, causing algal blooms that cloud the water and therefore diminish light availability for photosynthesis; altered ocean currents change the dispersal of larvae and planktonic food availability; and, lastly, changes in ocean pH decrease structural integrity and growth rates. There is also a plethora of planktivorous fish throughout the tropics that play important ecological roles within marine systems. Similar to corals, planktivorous reef fish are directly affected by these changing systems, and these negative effects then disrupt food webs throughout the oceans. As plankton communities shift in species composition and availability, primary consumers have a harder time meeting energy budgets. This lack of food availability can influence reproduction and overall primary consumer populations, creating food shortages for higher trophic consumers.
In 2017, Alaska pollock was the United States' largest commercial fishery by volume, with 3.4 billion pounds caught at a total value of $413 million. Besides fishing, planktivorous marine animals drive the tourism economy as well. Tourists travel across the world for whale watching, to see charismatic megafauna such as humpback whales in Hawaii, minke whales in Alaska, grey whales in Oregon, and whale sharks in South America. Manta rays also drive dive and snorkel tourism, bringing in over $73 million annually in direct revenue across 23 countries around the world. The main participating countries in manta ray tourism include Japan, Indonesia, the Maldives, Mozambique, Thailand, Australia, Mexico, the United States, the Federated States of Micronesia, and Palau.

See also

Zooplankton
Plankton
Piscivore
Phytoplankton
Filter feeder
Carnivore

References

Ecology terminology Animals by eating behaviors Limnology
Planktivore
Biology
7,328
10,087,606
https://en.wikipedia.org/wiki/Symmetry%20operation
In mathematics, a symmetry operation is a geometric transformation of an object that leaves the object looking the same after it has been carried out. For example, a 1/3 turn rotation of a regular triangle about its center, a reflection of a square across its diagonal, a translation of the Euclidean plane, or a point reflection of a sphere through its center are all symmetry operations. Each symmetry operation is performed with respect to some symmetry element (a point, line or plane). In the context of molecular symmetry, a symmetry operation is a permutation of atoms such that the molecule or crystal is transformed into a state indistinguishable from the starting state. Two basic facts follow from this definition, which emphasize its usefulness. Physical properties must be invariant with respect to symmetry operations. Symmetry operations can be collected together in groups which are isomorphic to permutation groups. In the context of molecular symmetry, quantum wavefunctions need not be invariant, because the operation can multiply them by a phase or mix states within a degenerate representation, without affecting any physical property.

Molecules

Identity operation

The identity operation corresponds to doing nothing to the object. Because every molecule is indistinguishable from itself if nothing is done to it, every object possesses at least the identity operation. The identity operation is denoted by E or I. In the identity operation, no change can be observed for the molecule. Even the most asymmetric molecule possesses the identity operation. The need for such an identity operation arises from the mathematical requirements of group theory.

Reflection through mirror planes

The reflection operation is carried out with respect to symmetry elements known as planes of symmetry or mirror planes. Each such plane is denoted as σ (sigma). Its orientation relative to the principal axis of the molecule is indicated by a subscript. The plane must pass through the molecule and cannot be completely outside it. If the plane of symmetry contains the principal axis of the molecule (i.e., the molecular z-axis), it is designated as a vertical mirror plane, which is indicated by a subscript v (σv). If the plane of symmetry is perpendicular to the principal axis, it is designated as a horizontal mirror plane, which is indicated by a subscript h (σh). If the plane of symmetry bisects the angle between two 2-fold axes perpendicular to the principal axis, it is designated as a dihedral mirror plane, which is indicated by a subscript d (σd). Through the reflection in each mirror plane, the molecule must be able to produce an identical image of itself.

Inversion operation

In an inversion through a centre of symmetry, i (the element), we imagine taking each point in a molecule and then moving it out the same distance on the other side. In summary, the inversion operation projects each atom through the centre of inversion and out to the same distance on the opposite side. The inversion center is a point in space that lies in the geometric center of the molecule. As a result, all the Cartesian coordinates of the atoms are inverted (i.e. (x, y, z) to (−x, −y, −z)). The symbol used to represent the inversion center is i. When the inversion operation is carried out n times, it is denoted by iⁿ, where iⁿ = E when n is even and iⁿ = i when n is odd. Examples of molecules that have an inversion center include certain molecules with octahedral geometry (general formula AB6), square planar geometry (general formula AB4), and ethylene (C2H4).
Examples of molecules without inversion centers are cyclopentadienide () and molecules with trigonal pyramidal geometry (general formula ). Proper rotation operations or n-fold rotation A proper rotation refers to simple rotation about an axis. Such operations are denoted by where is a rotation of or performed times. The superscript is omitted if it is equal to one. is a rotation through 360°, where . It is equivalent to the Identity () operation. is a rotation of 180°, as is a rotation of 120°, as and so on. Here the molecule can be rotated into equivalent positions around an axis. An example of a molecule with symmetry is the water () molecule. If the molecule is rotated by 180° about an axis passing through the oxygen atom, no detectable difference before and after the operation is observed. Order of an axis can be regarded as a number of times that, for the least rotation which gives an equivalent configuration, that rotation must be repeated to give a configuration identical to the original structure (i.e. a 360° or 2 rotation). An example of this is proper rotation, which rotates by represents the first rotation around the axis by is the rotation by while is the rotation by is the identical configuration because it gives the original structure, and it is called an identity element (). Therefore, is an order of three, and is often referred to as a threefold axis. Improper rotation operations An improper rotation involves two operation steps: a proper rotation followed by reflection through a plane perpendicular to the rotation axis. The improper rotation is represented by the symbol where is the order. Since the improper rotation is the combination of a proper rotation and a reflection, will always exist whenever and a perpendicular plane exist separately. is usually denoted as , a reflection operation about a mirror plane. is usually denoted as , an inversion operation about an inversion center. When is an even number but when is odd Rotation axes, mirror planes and inversion centres are symmetry elements, not symmetry operations. The rotation axis of the highest order is known as the principal rotation axis. It is conventional to set the Cartesian -axis of the molecule to contain the principal rotation axis. Examples Dichloromethane, . There is a rotation axis which passes through the carbon atom and the midpoints between the two hydrogen atoms and the two chlorine atoms. Define the z axis as co-linear with the axis, the plane as containing and the plane as containing . A rotation operation permutes the two hydrogen atoms and the two chlorine atoms. Reflection in the plane permutes the hydrogen atoms while reflection in the plane permutes the chlorine atoms. The four symmetry operations , , and form the point group . Note that if any two operations are carried out in succession the result is the same as if a single operation of the group had been performed. Methane, . In addition to the proper rotations of order 2 and 3 there are three mutually perpendicular axes which pass half-way between the C-H bonds and six mirror planes. Note that Crystals In crystals, screw rotations and/or glide reflections are additionally possible. These are rotations or reflections together with partial translation. These operations may change based on the dimensions of the crystal lattice. The Bravais lattices may be considered as representing translational symmetry operations. 
Combinations of operations of the crystallographic point groups with the addition symmetry operations produce the 230 crystallographic space groups. See also Molecular symmetry Crystal structure Crystallographic restriction theorem References F. A. Cotton Chemical applications of group theory, Wiley, 1962, 1971 Physical chemistry Symmetry
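The dichloromethane example can also be made concrete numerically. Below is a minimal Python sketch (an illustration added here, not part of the original article) that represents the four operations E, C2(z), σ(xz) and σ(yz) as 3x3 Cartesian matrices and multiplies them pairwise; the choice of axes follows the convention above.

import numpy as np

# C2v operations as 3x3 Cartesian matrices (z is the C2 axis, as above).
E      = np.diag([ 1.0,  1.0, 1.0])   # identity
C2z    = np.diag([-1.0, -1.0, 1.0])   # 180-degree rotation about z
sig_xz = np.diag([ 1.0, -1.0, 1.0])   # reflection in the xz plane
sig_yz = np.diag([-1.0,  1.0, 1.0])   # reflection in the yz plane

ops = {"E": E, "C2": C2z, "s(xz)": sig_xz, "s(yz)": sig_yz}

def name_of(matrix):
    # Return the name of the group element equal to the given matrix.
    for name, op in ops.items():
        if np.allclose(op, matrix):
            return name
    return "not in the set"

# Closure: the product of any two operations is again an operation of C2v.
for a in ops:
    for b in ops:
        print(f"{a} * {b} = {name_of(ops[a] @ ops[b])}")

Every product lands back in the same four-element set (for example, σ(xz) followed by σ(yz) gives C2), illustrating the statement above that carrying out two operations in succession is equivalent to a single operation of the group.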
Symmetry operation
Physics,Chemistry,Mathematics
1,415
3,114,516
https://en.wikipedia.org/wiki/Hutchinson%27s%20ratio
In ecological theory, Hutchinson's ratio is the ratio of the size differences between similar species when they are living together compared with when they are isolated. It is named after G. Evelyn Hutchinson, who concluded that various key attributes of coexisting species varied in ratios ranging from about 1:1.1 to 1:1.4. See also Hutchinson's rule References External links https://web.archive.org/web/20010228025300/http://www.limnology.org/news/30/hutchinson.html Ecology
Hutchinson's ratio
Biology
116
9,065,634
https://en.wikipedia.org/wiki/Upsilon%20Librae
Upsilon Librae (υ Lib, υ Librae) is the Bayer designation for a double star in the zodiac constellation Libra. With an apparent visual magnitude of 3.628, it is visible to the naked eye. The distance to this star, based upon an annual parallax shift of 14.58 mas, is around 224 light years. It has a magnitude 10.8 companion at an angular separation of 2.0 arc seconds along a position angle of 151°, as of 2002. The brighter component is an evolved K-type giant star with a stellar classification of K3 III. The measured angular diameter, after correction for limb darkening, combined with the estimated distance of the star, yields a physical size of about 31.5 times the radius of the Sun. It has 1.67 times the mass of the Sun and radiates 309 times the solar luminosity from its outer atmosphere at an effective temperature of 4,135 K. The star is about three billion years old. Upsilon Librae will be the brightest star in the night sky in about 2.3 million years, and will peak in brightness with an apparent magnitude of −0.46, or more than 40 times its present-day brightness. References K-type giants Double stars Libra (constellation) Librae, Upsilon CD-27 10464 Librae, 39 139063 076470 5794
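The quoted distance follows directly from the parallax, and the conversion from an angular diameter to a physical radius is simple geometry. A short Python sketch of the arithmetic (added here for illustration; the angular-diameter value below is a placeholder, since the measured figure does not survive in the text):

LY_PER_PC = 3.2616          # light years per parsec
PC_IN_M = 3.0857e16         # metres per parsec
SOLAR_RADIUS_M = 6.957e8    # metres

parallax_mas = 14.58                    # annual parallax from the article
d_pc = 1000.0 / parallax_mas            # distance in parsecs
print(f"distance: {d_pc:.1f} pc = {d_pc * LY_PER_PC:.0f} light years")  # ~224 ly

# Hypothetical limb-darkened angular diameter (placeholder, not from the article),
# showing how such a measurement maps onto a physical radius at this distance.
theta_mas = 4.3
theta_rad = theta_mas * 1e-3 / 206265.0          # milliarcseconds -> radians
radius_solar = 0.5 * theta_rad * d_pc * PC_IN_M / SOLAR_RADIUS_M
print(f"radius for a {theta_mas} mas diameter: {radius_solar:.1f} solar radii")

With a diameter in the few-milliarcsecond range, the radius comes out near the roughly 31.5 solar radii quoted above.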
Upsilon Librae
Astronomy
298
13,318,208
https://en.wikipedia.org/wiki/Peter%20Adolf%20Thiessen
Peter Adolf Thiessen (6 April 1899 – 5 March 1990) was a German physical chemist and tribologist; he is credited as the founder of tribochemistry. At the close of World War II, he voluntarily went to the Soviet Union, where he played a crucial role in advancing the Soviet nuclear weapons program and was a recipient of national honors of the Soviet Union. Upon his return to East Germany in 1956, Thiessen devoted himself to the advancement of applied physical chemistry. Education Thiessen was born on 6 April 1899 in Schweidnitz, Silesia, Prussia, now known as Świdnica in Lower Silesia, Poland. He came from a wealthy German family that owned land in Schweidnitz. From 1919 to 1923, he studied chemistry at the University of Breslau, the University of Freiburg, the University of Greifswald, and the University of Göttingen. He received his doctorate in chemistry in 1923 under Richard Adolf Zsigmondy at Göttingen. Career Early years In 1923, Thiessen was a supernumerary assistant professor of chemistry at the University of Göttingen, and from 1924 to 1930 he was a regular professor. He joined the Nazi Party in 1925 and became a Privatdozent at Göttingen in 1926. In 1930, he became head of the department of inorganic chemistry there, and in 1932 he also became an untenured extraordinarius professor. In 1933, Thiessen became a department chair of chemistry at the Kaiser-Wilhelm Institute for Physical Chemistry and Electrical Chemistry (KWIPC) of the Kaiser-Wilhelm Gesellschaft (KWG). For a short time in 1935, he was an ordinarius professor of chemistry at the University of Münster. Later that year, and until 1945, he was an ordinarius professor at the Humboldt University of Berlin and director of the KWIPC in Berlin-Dahlem. As director of the KWIPC, he transformed it into a model scientific institution run along the Nazi Party's guidelines. Thiessen was the main advisor and confidant to Rudolf Mentzel, who was head of the chemistry and organic materials section of the Reichsforschungsrat (RFR, Reich Research Council). Thiessen, as director of the KWIPC, had a flat on Faradayweg in Dahlem that the former director Fritz Haber had used for business purposes; Thiessen shared this flat with Mentzel. In the Soviet Union Before the end of World War II, Thiessen had Communist contacts. He, Manfred von Ardenne, director of his private laboratory (the Research Laboratory for Electron Physics), Gustav Hertz, Nobel Laureate and director of the second research laboratory at Siemens, and Max Volmer, ordinarius professor and director of the Physical Chemistry Institute at the Technical University of Berlin, had made a pact. The pact was a pledge that whoever first made contact with the Soviet authorities would speak for the rest. The objectives of their pact were threefold: (1) prevent plunder of their institutes, (2) continue their work with minimal interruption, and (3) protect themselves from prosecution for any political acts of the past. On 27 April 1945, Thiessen arrived at von Ardenne’s institute in an armored vehicle with a major of the Soviet Army, who was also a leading Soviet chemist. All four were taken into Soviet custody and held in Russia, where von Ardenne was made head of Institute A, in Sinop, a suburb of Sukhumi. Hertz was made head of Institute G, in Agudseri (Agudzery), about 10 km southeast of Sukhumi and a suburb of Gul’rips (Gulrip’shi). Volmer went to the Nauchno-Issledovatel’skij Institut-9 (NII-9, Scientific Research Institute No.
9), in Moscow; he was given a design bureau to work on the production of heavy water. In Institute A, Thiessen became the leader for developing engineering design techniques for manufacturing porous barriers for isotope separation using gaseous diffusion and centrifugal technologies. In 1949, six German scientists, including Hertz, Thiessen, and Barwich, were called in for consultation at Sverdlovsk-44, which was responsible for uranium enrichment using gaseous diffusion. The plant, which was smaller than the American K-25 gaseous diffusion plant at Oak Ridge, was achieving only a little over half of the expected 90 percent or higher enrichment. Awards for uranium enrichment technologies were made in 1951, after the testing of a bomb using uranium; the first Soviet test had been with plutonium. Thiessen received a Stalin Prize, first class, in 1953. He is credited with founding the field of tribochemistry, which he formulated while working on the problems encountered in making the gaseous diffusion method feasible for the Soviet nuclear weapons program. Return to East Germany In 1953, Thiessen was notified by the Soviet administration that he would be allowed to return to Germany but had to be quarantined for at least two years, which was standard practice for German experts in the Soviet nuclear weapons program. He performed unclassified research in the Soviet Union and returned to East Germany in 1955, where he was elected a Fellow of the German Academy of Sciences in East Berlin, and from 1956 he was director of the Institute of Physical Chemistry in East Berlin. From 1957 to 1965, he was also chairman of the Research Council of the German Democratic Republic. From 1965 until 1990, Thiessen served in various research capacities to advance the field of tribology, of which he is credited as one of the founders. He died in East Berlin on 5 March 1990, aged 90. Books Peter Adolf Thiessen and Helmut Sandig Planung der Forschung (Dietz, 1961) Peter Adolf Thiessen Erfahrungen, Erkenntnisse, Folgerungen (Akademie-Verlag, 1979) Peter Adolf Thiessen Forschung und Praxis formen die neue Technik (Urania-Verl., 1961) Peter Adolf Thiessen Vorträge zum Festkolloquium anlässlich des 65. Geburtstages von P. A. Thiessen (Akademie-Verl., 1966) Peter Adolf Thiessen, Klaus Meyer, and Gerhard Heinicke Grundlagen der Tribochemie (Akademie-Verlag, 1967) Articles Peter Adolf Thiessen Die physikalische Chemie im nationalsozialistischen Staat, Der Deutscher Chemiker. Mitteilungen aus Stand / Beruf und Wissenschaft (Supplement to Angewandte Chemie. Zeitschrift des Vereins Deutsche Chemiker, No.19.) Volume 2, No. 5, May 9, 1936. Reprinted in English in Hentschel, Klaus (editor) and Ann M. Hentschel (editorial assistant and translator) Physics and National Socialism: An Anthology of Primary Sources (Birkhäuser, 1996) 134-137 as Document 48. Thiessen: Physical Chemistry in the National Socialist State [May 9, 1936].
Notes References Albrecht, Ulrich, Andreas Heinemann-Grüder, and Arend Wellmann Die Spezialisten: Deutsche Naturwissenschaftler und Techniker in der Sowjetunion nach 1945 (Dietz, 1992, 2001) Barwich, Heinz and Elfi Barwich Das rote Atom (Fischer-TB.-Vlg., 1984) Beneke, Klaus Die Kolloidwissenschaftler Peter Adolf Thiessen, Gerhart Jander, Robert Havemann, Hans Witzmann und ihre Zeit (Knof, 2000) Heinemann-Grüder, Andreas Die sowjetische Atombombe (Westfaelisches Dampfboot, 1992) Heinemann-Grüder, Andreas Keinerlei Untergang: German Armaments Engineers during the Second World War and in the Service of the Victorious Powers in Monika Renneberg and Mark Walker (editors) Science, Technology and National Socialism 30-50 (Cambridge, 2002 paperback edition) Hentschel, Klaus (editor) and Ann M. Hentschel (editorial assistant and translator) Physics and National Socialism: An Anthology of Primary Sources (Birkhäuser, 1996) Klaus Hentschel The Mental Aftermath: The Mentality of German Physicists 1945 – 1949 (Oxford, 2007) Holloway, David Stalin and the Bomb: The Soviet Union and Atomic Energy 1939–1956 (Yale, 1994) Kruglov, Arkadii The History of the Soviet Atomic Industry (Taylor and Francis, 2002) Naimark, Norman M. The Russians in Germany: A History of the Soviet Zone of Occupation, 1945-1949 (Hardcover - Aug 11, 1995) Belknap Oleynikov, Pavel V. German Scientists in the Soviet Atomic Project, The Nonproliferation Review Volume 7, Number 2, 1 – 30 (2000). The author has been a group leader at the Institute of Technical Physics of the Russian Federal Nuclear Center in Snezhinsk (Chelyabinsk-70). External links Fritz Haber Institute - MPG 1899 births 1990 deaths People from Świdnica German barons University of Breslau alumni University of Freiburg alumni University of Greifswald alumni University of Göttingen alumni German chemists German physical chemists 20th-century German chemists Scientists from the Province of Silesia Academic staff of the University of Göttingen Academic staff of the University of Münster Academic staff of the Humboldt University of Berlin Nazi Party members German expatriates in the Soviet Union Nuclear weapons program of the Soviet Union people Recipients of the Stalin Prize East German scientists Recipients of the National Prize of East Germany Members of the German Academy of Sciences at Berlin Max Planck Institute directors Tribologists Foreign members of the USSR Academy of Sciences
Peter Adolf Thiessen
Materials_science
2,062
60,184,869
https://en.wikipedia.org/wiki/David%20Dewhurst%20Medal
The David Dewhurst award is a bronze medal bestowed by Engineers Australia and is the most distinguished accolade within their biomedical engineering discipline. It is named in honour of David John Dewhurst (1919 - 1996), an outstanding Australian biophysicist and biomedical engineer who performed pioneering work in the area of the cochlear implant. The award was inaugurated in 1994 as the Eminent Biomedical Engineers Award and its first winner was David Dewhurst. Following his death in 1996 the award’s name was changed to the David Dewhurst Award as a permanent memorial. Recipients 1996 Keith Daniel, Nucleus Ltd. 1997 Peter C. Farrell, ResMed 1998 George Kossoff, CSIRO 1999 Richard Kirsner, La Trobe University 2000 Klaus Schindhelm, University of NSW 2001 Alex Watson, Premier Biomedical Engineering Pty Ltd 2002 Barry Seeger 2003 Laurie Knuckey 2004 Mark Pearcy 2005 John Southwell 2006 John Symonds 2007 Geoffrey Wickham, Telectronics 2009 Andrew Downing, Flinders University 2010 Alexander McLean 2011 Graham Grant 2012 David Burton, Compumedics 2013 Nigel Lovell, University of NSW 2014 James F. Patrick, University of Melbourne 2015 Derek Abbott, University of Adelaide 2016 Karen Reynolds, Flinders University 2017 Walter John Russell, University of Adelaide 2018 Christopher Bertram, University of Sydney 2019 Alan Finkel, Office of the Chief Scientist (Australia) 2020 Michael Griffiths 2021 Leo Barnes, TUV SUD GmbH 2023 Ed Skull See also List of engineering awards List of prizes named after people List of medicine awards References Australian science and technology awards Engineering awards Awards established in 1996
David Dewhurst Medal
Technology
322
9,873,718
https://en.wikipedia.org/wiki/NTBackup
NTBackup (also known as Windows Backup and Backup Utility) is the first built-in backup utility of the Windows NT family. It was introduced with Windows NT 3.51. NTBackup comprises a GUI (wizard-style) and a command-line utility to create, customize, and manage backups. It takes advantage of Shadow Copy (to create backups) and Task Scheduler (to schedule them). NTBackup stores backups in the BKF file format (a proprietary format at the time) on external sources, e.g., floppy disks, hard drives, tape drives, and Zip drives. When used with tape drives, NTBackup uses the Microsoft Tape Format (MTF), which is also used by BackupAssist, Backup Exec, and Veeam Backup & Replication and is compatible with BKF. Starting with Windows Vista and Windows Server 2008, NTBackup is replaced by Backup and Restore and Windows Server Backup. In addition to their corresponding GUIs, the command-line utility WBAdmin can operate both. The new backup system provides similar functionality but uses the Virtual Hard Disk file format to back up content. Neither Backup and Restore nor Windows Server Backup support the use of tape drives. To and restore NTBackup's BKF files, Microsoft has made available the NTBackup Restore utility for Windows Vista, Windows Server 2008, Windows 7, and Windows Server 2008 R2. Features NTBackup supports several operating system features including backing up the computer's System State. On computers that are not domain controllers, this includes the Windows Registry, boot files, files protected by Windows File Protection, Performance counter configuration information, COM+ class registration database, IIS metabase, replicated data sets, Exchange Server data, Cluster service information, and Certificate Services database. On domain controllers, NTBackup can back up Active Directory, including the SYSVOL directory share. NTBackup supports Encrypting File System, NTFS hard links and junction points, alternate data streams, disk quota information, mounted drive and remote storage information. It saves NTFS permissions, audit entries and ownership settings, respects the archive bit attribute on files and folders and can create normal, copy, differential, incremental and daily backups, backup catalogs, as well as Automated System Recovery. It supports logging and excluding files from the backup per-user or for all users. Hardware compression is supported if the tape drive supports it. Software compression is not supported, even in Backup to files. NTBackup can use removable media devices that are supported natively by the Removable Storage Manager (RSM) component of Windows. However, RSM supports only those tape devices which have RSM-aware WDM drivers. NTBackup from Windows XP and newer includes Volume Shadow Copy (VSS) support and thus can back up locked files. In the case of Windows XP Home Edition, NTBackup is not installed by default but is available on the Windows XP installation disc. Windows XP introduced a wizard-style user interface for NTBackup in addition to the advanced UI. An expert system administrator can use the NTBackup scripting language to create a functional backup system. Scripting enables the system administrator to automate and schedule backups of files and system state, control the RSM to follow a media rotation strategy, reprogram the RSM to work with external HDD and NAS as well as tape, send email reminders to prompt users to insert the media and compile backup reports that include logs and remaining capacity. 
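As an illustration of the command-line half of the utility described above, the sketch below drives ntbackup from a Python script; the job name, target path and switch values are assumptions made for the example, and only commonly documented switches (systemstate, /J, /F, /M) are used here. The exact options available should be checked against the documentation for the Windows version in question.

import subprocess

# Illustrative ntbackup invocation: back up the System State to a BKF file.
# The job name, file path and backup type here are example values.
cmd = [
    "ntbackup", "backup", "systemstate",
    "/J", "Nightly system state",          # job name recorded in the log
    "/F", r"D:\Backups\systemstate.bkf",   # target backup (BKF) file
    "/M", "normal",                        # backup type: normal, copy, incremental, ...
]
result = subprocess.run(cmd, capture_output=True, text=True)
print("ntbackup exit code:", result.returncode)

A scheduled task pointing at a script like this (or at an equivalent batch file) is one way the scripting and Task Scheduler features described above can be combined.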
An alternative to scripting is GUI software such as BackupAssist, which automates NTBackup and can perform automatic, scheduled backups of Windows-based servers and PCs using NTBackup. Third-party plug-ins can be used with the deprecated Removable Storage component in Microsoft Windows to support modern storage media such as external hard disks, flash memory, optical media such as CD, DVD and Blu-ray and network file systems exposing the pieces of media as virtual tape to NTBackup which is based on Removable Storage. NTBackup can be used under Windows Vista and up by copying the NTBackup files from a Windows XP machine. To use tapes or other backup locations that use the Removable Storage Manager, you will need to turn it on in the Turn Windows features on or off control panel, but in Windows 7 and up, the component was removed. Corrupt or damaged backup files Due to the large size typical of today's backups, and faulty data transmission over unreliable USB or FireWire interfaces, backup files are prone to be corrupt or damaged. When trying to restore, NTBackup may display messages like "The Backup File Is Unusable", "CRC failed error" or "Unrecognized Media". Third-party, mostly commercial solutions may recover corrupt BKF files. References Further reading External links Microsoft Backup Basics from Microsoft NTBackup Guide for Windows XP Professional MSKB104169: Files that are automatically skipped by NTBackup Microsoft Tape Format (MTF) Specification Document by Seagate mftar: a filter to convert MFT/BKF files to the more common tar format (Linux and Unices) mtftar is an updated tool for translating a MTF stream to a TAR stream Backup software for Windows Discontinued Windows components
NTBackup
Technology
1,098
49,916,168
https://en.wikipedia.org/wiki/Cavity%20optomechanics
Cavity optomechanics is a branch of physics which focuses on the interaction between light and mechanical objects on low-energy scales. It is a cross field of optics, quantum optics, solid-state physics and materials science. The motivation for research on cavity optomechanics comes from fundamental effects of quantum theory and gravity, as well as technological applications. The name of the field relates to the main effect of interest: the enhancement of radiation pressure interaction between light (photons) and matter using optical resonators (cavities). It first became relevant in the context of gravitational wave detection, since optomechanical effects must be taken into account in interferometric gravitational wave detectors. Furthermore, one may envision optomechanical structures to allow the realization of Schrödinger's cat. Macroscopic objects consisting of billions of atoms share collective degrees of freedom which may behave quantum mechanically (e.g. a sphere of micrometer diameter being in a spatial superposition between two different places). Such a quantum state of motion would allow researchers to experimentally investigate decoherence, which describes the transition of objects from states that are described by quantum mechanics to states that are described by Newtonian mechanics. Optomechanical structures provide new methods to test the predictions of quantum mechanics and decoherence models and thereby might allow to answer some of the most fundamental questions in modern physics. There is a broad range of experimental optomechanical systems which are almost equivalent in their description, but completely different in size, mass, and frequency. Cavity optomechanics was featured as the most recent "milestone of photon history" in nature photonics along well established concepts and technology like quantum information, Bell inequalities and the laser. Concepts of cavity optomechanics Physical processes Stokes and anti-Stokes scattering The most elementary light-matter interaction is a light beam scattering off an arbitrary object (atom, molecule, nanobeam etc.). There is always elastic light scattering, with the outgoing light frequency identical to the incoming frequency . Inelastic scattering, in contrast, is accompanied by excitation or de-excitation of the material object (e.g. internal atomic transitions may be excited). However, it is always possible to have Brillouin scattering independent of the internal electronic details of atoms or molecules due to the object's mechanical vibrations: where is the vibrational frequency. The vibrations gain or lose energy, respectively, for these Stokes/anti-Stokes processes, while optical sidebands are created around the incoming light frequency: If Stokes and anti-Stokes scattering occur at an equal rate, the vibrations will only heat up the object. However, an optical cavity can be used to suppress the (anti-)Stokes process, which reveals the principle of the basic optomechanical setup: a laser-driven optical cavity is coupled to the mechanical vibrations of some object. The purpose of the cavity is to select optical frequencies (e.g. to suppress the Stokes process) that resonantly enhance the light intensity and to enhance the sensitivity to the mechanical vibrations. The setup displays features of a true two-way interaction between light and mechanics, which is in contrast to optical tweezers, optical lattices, or vibrational spectroscopy, where the light field controls the mechanics (or vice versa) but the loop is not closed. 
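Written out explicitly (a reconstruction in common notation, since the symbols have not survived in the text above), with incoming light at frequency ω_in and a mechanical vibration at frequency Ω_m, the scattered sidebands appear at

\omega_{\mathrm{Stokes}} = \omega_{\mathrm{in}} - \Omega_m , \qquad \omega_{\mathrm{anti\text{-}Stokes}} = \omega_{\mathrm{in}} + \Omega_m ,

so the Stokes process deposits the energy \hbar\Omega_m in the vibration (creating a phonon and red-shifting the light), while the anti-Stokes process removes it (absorbing a phonon and blue-shifting the light).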
Radiation pressure force Another but equivalent way to interpret the principle of optomechanical cavities is by using the concept of radiation pressure. According to the quantum theory of light, every photon with wavenumber carries a momentum , where is the Planck constant. This means that a photon reflected off a mirror surface transfers a momentum onto the mirror due to the conservation of momentum. This effect is extremely small and cannot be observed on most everyday objects; it becomes more significant when the mass of the mirror is very small and/or the number of photons is very large (i.e. high intensity of the light). Since the momentum of photons is extremely small and not enough to change the position of a suspended mirror significantly, the interaction needs to be enhanced. One possible way to do this is by using optical cavities. If a photon is enclosed between two mirrors, where one is the oscillator and the other is a heavy fixed one, it will bounce off the mirrors many times and transfer its momentum every time it hits the mirrors. The number of times a photon can transfer its momentum is directly related to the finesse of the cavity, which can be improved with highly reflective mirror surfaces. The radiation pressure of the photons does not simply shift the suspended mirror further and further away as the effect on the cavity light field must be taken into account: if the mirror is displaced, the cavity's length changes, which also alters the cavity resonance frequency. Therefore, the detuning—which determines the light amplitude inside the cavity—between the changed cavity and the unchanged laser driving frequency is modified. It determines the light amplitude inside the cavity – at smaller levels of detuning more light actually enters the cavity because it is closer to the cavity resonance frequency. Since the light amplitude, i.e. the number of photons inside the cavity, causes the radiation pressure force and consequently the displacement of the mirror, the loop is closed: the radiation pressure force effectively depends on the mirror position. Another advantage of optical cavities is that the modulation of the cavity length through an oscillating mirror can directly be seen in the spectrum of the cavity. Optical spring effect Some first effects of the light on the mechanical resonator can be captured by converting the radiation pressure force into a potential, and adding it to the intrinsic harmonic oscillator potential of the mechanical oscillator, where is the slope of the radiation pressure force. This combined potential reveals the possibility of static multi-stability in the system, i.e. the potential can feature several stable minima. In addition, can be understood to be a modification of the mechanical spring constant, This effect is known as the optical spring effect (light-induced spring constant). However, the model is incomplete as it neglects retardation effects due to the finite cavity photon decay rate . The force follows the motion of the mirror only with some time delay, which leads to effects like friction. For example, assume the equilibrium position sits somewhere on the rising slope of the resonance. In thermal equilibrium, there will be oscillations around this position that do not follow the shape of the resonance because of retardation. The consequence of this delayed radiation force during one cycle of oscillation is that work is performed, in this particular case it is negative,, i.e. 
the radiation force extracts mechanical energy (there is extra, light-induced damping). This can be used to cool down the mechanical motion and is referred to as optical or optomechanical cooling. It is important for reaching the quantum regime of the mechanical oscillator where thermal noise effects on the device become negligible. Similarly, if the equilibrium position sits on the falling slope of the cavity resonance, the work is positive and the mechanical motion is amplified. In this case the extra, light-induced damping is negative and leads to amplification of the mechanical motion (heating). Radiation-induced damping of this kind has first been observed in pioneering experiments by Braginsky and coworkers in 1970. Quantized energy transfer Another explanation for the basic optomechanical effects of cooling and amplification can be given in a quantized picture: by detuning the incoming light from the cavity resonance to the red sideband, the photons can only enter the cavity if they take phonons with energy from the mechanics; it effectively cools the device until a balance with heating mechanisms from the environment and laser noise is reached. Similarly, it is also possible to heat structures (amplify the mechanical motion) by detuning the driving laser to the blue side; in this case the laser photons scatter into a cavity photon and create an additional phonon in the mechanical oscillator. The principle can be summarized as: phonons are converted into photons when cooled and vice versa in amplification. Three regimes of operation: cooling, heating, resonance The basic behaviour of the optomechanical system can generally be divided into different regimes, depending on the detuning between the laser frequency and the cavity resonance frequency : Red-detuned regime, (most prominent effects on the red sideband, ): In this regime state exchange between two resonant oscillators can occur (i.e. a beam-splitter in quantum optics language). This can be used for state transfer between phonons and photons (which requires the so-called "strong coupling regime") or the above-mentioned optical cooling. Blue-detuned regime, (most prominent effects on the blue sideband, ): This regime describes "two-mode squeezing". It can be used to achieve quantum entanglement, squeezing, and mechanical "lasing" (amplification of the mechanical motion to self-sustained optomechanical oscillations / limit cycle oscillations), if the growth of the mechanical energy overwhelms the intrinsic losses (mainly mechanical friction). On-resonance regime, : In this regime the cavity is simply operated as an interferometer to read the mechanical motion. The optical spring effect also depends on the detuning. It can be observed for high levels of detuning () and its strength varies with detuning and the laser drive. Mathematical treatment Hamiltonian The standard optomechanical setup is a Fabry–Pérot cavity, where one mirror is movable and thus provides an additional mechanical degree of freedom. This system can be mathematically described by a single optical cavity mode coupled to a single mechanical mode. The coupling originates from the radiation pressure of the light field that eventually moves the mirror, which changes the cavity length and resonance frequency. The optical mode is driven by an external laser. 
This system can be described by the following effective Hamiltonian: where and are the bosonic annihilation operators of the given cavity mode and the mechanical resonator respectively, is the frequency of the optical mode, is the position of the mechanical resonator, is the mechanical mode frequency, is the driving laser frequency, and is the amplitude. It satisfies the commutation relations is now dependent on . The last term describes the driving, given by where is the input power coupled to the optical mode under consideration and its linewidth. The system is coupled to the environment so the full treatment of the system would also include optical and mechanical dissipation (denoted by and respectively) and the corresponding noise entering the system. The standard optomechanical Hamiltonian is obtained by getting rid of the explicit time dependence of the laser driving term and separating the optomechanical interaction from the free optical oscillator. This is done by switching into a reference frame rotating at the laser frequency (in which case the optical mode annihilation operator undergoes the transformation ) and applying a Taylor expansion on . Quadratic and higher-order coupling terms are usually neglected, such that the standard Hamiltonian becomes where the laser detuning and the position operator . The first two terms ( and ) are the free optical and mechanical Hamiltonians respectively. The third term contains the optomechanical interaction, where is the single-photon optomechanical coupling strength (also known as the bare optomechanical coupling). It determines the amount of cavity resonance frequency shift if the mechanical oscillator is displaced by the zero point uncertainty , where is the effective mass of the mechanical oscillator. It is sometimes more convenient to use the frequency pull parameter, or , to determine the frequency change per displacement of the mirror. For example, the optomechanical coupling strength of a Fabry–Pérot cavity of length with a moving end-mirror can be directly determined from the geometry to be . This standard Hamiltonian is based on the assumption that only one optical and mechanical mode interact. In principle, each optical cavity supports an infinite number of modes and mechanical oscillators which have more than a single oscillation/vibration mode. The validity of this approach relies on the possibility to tune the laser in such a way that it only populates a single optical mode (implying that the spacing between the cavity modes needs to be sufficiently large). Furthermore, scattering of photons to other modes is supposed to be negligible, which holds if the mechanical (motional) sidebands of the driven mode do not overlap with other cavity modes; i.e. if the mechanical mode frequency is smaller than the typical separation of the optical modes. Linearization The single-photon optomechanical coupling strength is usually a small frequency, much smaller than the cavity decay rate , but the effective optomechanical coupling can be enhanced by increasing the drive power. With a strong enough drive, the dynamics of the system can be considered as quantum fluctuations around a classical steady state, i.e. , where is the mean light field amplitude and denotes the fluctuations. Expanding the photon number , the term can be omitted as it leads to a constant radiation pressure force which simply shifts the resonator's equilibrium position. The linearized optomechanical Hamiltonian can be obtained by neglecting the second order term : where . 
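Since the formulas themselves have been lost from the text above, it may help to restate the Hamiltonians in the form commonly used in reviews of the field (a reconstruction whose sign and naming conventions may differ from the original article). With cavity mode a, mechanical mode b, laser detuning Δ = ω_L − ω_cav and drive amplitude E, the standard Hamiltonian in the frame rotating at the laser frequency reads

H = -\hbar\Delta\, \hat a^{\dagger}\hat a + \hbar\Omega_m\, \hat b^{\dagger}\hat b - \hbar g_0\, \hat a^{\dagger}\hat a\,(\hat b + \hat b^{\dagger}) + \hbar E\,(\hat a^{\dagger} + \hat a),

with the single-photon coupling g_0 = (\partial\omega_{\mathrm{cav}}/\partial x)\, x_{\mathrm{ZPF}} and the zero-point motion x_{\mathrm{ZPF}} = \sqrt{\hbar/(2 m_{\mathrm{eff}}\,\Omega_m)}; for a Fabry–Pérot cavity of length L with a moving end mirror, g_0 ≈ (ω_cav / L) x_ZPF. Writing a = α + δa and dropping the term quadratic in the fluctuations gives the linearized interaction

H_{\mathrm{int}} \approx -\hbar g\,(\delta\hat a + \delta\hat a^{\dagger})(\hat b + \hat b^{\dagger}), \qquad g = g_0\,|\alpha| .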
While this Hamiltonian is a quadratic function, it is considered "linearized" because it leads to linear equations of motion. It is a valid description of many experiments, where is typically very small and needs to be enhanced by the driving laser. For a realistic description, dissipation should be added to both the optical and the mechanical oscillator. The driving term from the standard Hamiltonian is not part of the linearized Hamiltonian, since it is the source of the classical light amplitude around which the linearization was executed. With a particular choice of detuning, different phenomena can be observed (see also the section about physical processes). The clearest distinction can be made between the following three cases: : a rotating wave approximation of the linearized Hamiltonian, where one omits all non-resonant terms, reduces the coupling Hamiltonian to a beamsplitter operator, . This approximation works best on resonance; i.e. if the detuning becomes exactly equal to the negative mechanical frequency. Negative detuning (red detuning of the laser from the cavity resonance) by an amount equal to the mechanical mode frequency favors the anti-Stokes sideband and leads to a net cooling of the resonator. Laser photons absorb energy from the mechanical oscillator by annihilating phonons in order to become resonant with the cavity. : a rotating wave approximation of the linearized Hamiltonian leads to other resonant terms. The coupling Hamiltonian takes the form , which is proportional to the two-mode squeezing operator. Therefore, two-mode squeezing and entanglement between the mechanical and optical modes can be observed with this parameter choice. Positive detuning (blue detuning of the laser from the cavity resonance) can also lead to instability. The Stokes sideband is enhanced, i.e. the laser photons shed energy, increasing the number of phonons and becoming resonant with the cavity in the process. : In this case of driving on-resonance, all the terms in must be considered. The optical mode experiences a shift proportional to the mechanical displacement, which translates into a phase shift of the light transmitted through (or reflected off) the cavity. The cavity serves as an interferometer augmented by the factor of the optical finesse and can be used to measure very small displacements. This setup has enabled LIGO to detect gravitational waves. Equations of motion From the linearized Hamiltonian, the so-called linearized quantum Langevin equations, which govern the dynamics of the optomechanical system, can be derived when dissipation and noise terms to the Heisenberg equations of motion are added. Here and are the input noise operators (either quantum or thermal noise) and and are the corresponding dissipative terms. For optical photons, thermal noise can be neglected due to the high frequencies, such that the optical input noise can be described by quantum noise only; this does not apply to microwave implementations of the optomechanical system. For the mechanical oscillator thermal noise has to be taken into account and is the reason why many experiments are placed in additional cooling environments to lower the ambient temperature. These first order differential equations can be solved easily when they are rewritten in frequency space (i.e. a Fourier transform is applied). 
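In the same reconstructed notation (again, conventions may differ from the article's originals), the linearized quantum Langevin equations referred to above take the form

\dot{\delta\hat a} = \left(i\Delta - \tfrac{\kappa}{2}\right)\delta\hat a + i g\,(\hat b + \hat b^{\dagger}) + \sqrt{\kappa}\,\hat a_{\mathrm{in}}, \qquad \dot{\hat b} = \left(-i\Omega_m - \tfrac{\Gamma_m}{2}\right)\hat b + i g\,(\delta\hat a + \delta\hat a^{\dagger}) + \sqrt{\Gamma_m}\,\hat b_{\mathrm{in}},

where κ is the cavity decay rate, Γ_m the intrinsic mechanical damping, and a_in, b_in the optical and mechanical input noise operators; Fourier transforming these linear equations yields the optical spring and optical damping expressions discussed next.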
Two main effects of the light on the mechanical oscillator can then be expressed in the following ways: The equation above is termed the optical-spring effect and may lead to significant frequency shifts in the case of low-frequency oscillators, such as pendulum mirrors. In the case of higher resonance frequencies ( MHz), it does not significantly alter the frequency. For a harmonic oscillator, the relation between a frequency shift and a change in the spring constant originates from Hooke's law. The equation above shows optical damping, i.e. the intrinsic mechanical damping becomes stronger (or weaker) due to the optomechanical interaction. From the formula, in the case of negative detuning and large coupling, mechanical damping can be greatly increased, which corresponds to the cooling of the mechanical oscillator. In the case of positive detuning the optomechanical interaction reduces effective damping. Instability can occur when the effective damping drops below zero (), which means that it turns into an overall amplification rather than a damping of the mechanical oscillator. Important parameter regimes The most basic regimes in which the optomechanical system can be operated are defined by the laser detuning and described above. The resulting phenomena are either cooling or heating of the mechanical oscillator. However, additional parameters determine what effects can actually be observed. The good/bad cavity regime (also called the resolved/unresolved sideband regime) relates the mechanical frequency to the optical linewidth. The good cavity regime (resolved sideband limit) is of experimental relevance since it is a necessary requirement to achieve ground state cooling of the mechanical oscillator, i.e. cooling to an average mechanical occupation number below . The term "resolved sideband regime" refers to the possibility of distinguishing the motional sidebands from the cavity resonance, which is true if the linewidth of the cavity, , is smaller than the distance from the cavity resonance to the sideband (). This requirement leads to a condition for the so-called sideband parameter: . If the system resides in the bad cavity regime (unresolved sideband limit), where the motional sideband lies within the peak of the cavity resonance. In the unresolved sideband regime, many motional sidebands can be included in the broad cavity linewidth, which allows a single photon to create more than one phonon, which leads to greater amplification of the mechanical oscillator. Another distinction can be made depending on the optomechanical coupling strength. If the (enhanced) optomechanical coupling becomes larger than the cavity linewidth (), a strong-coupling regime is achieved. There the optical and mechanical modes hybridize and normal-mode splitting occurs. This regime must be distinguished from the (experimentally much more challenging) single-photon strong-coupling regime, where the bare optomechanical coupling becomes of the order of the cavity linewidth, . Effects of the full non-linear interaction described by only become observable in this regime. For example, it is a precondition to create non-Gaussian states with the optomechanical system. Typical experiments currently operate in the linearized regime (small ) and only investigate effects of the linearized Hamiltonian. 
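The size of the optical damping can be estimated numerically. The Python sketch below evaluates the expression commonly quoted in the review literature, Γ_opt = g²κ[((κ/2)² + (Δ+Ω_m)²)⁻¹ − ((κ/2)² + (Δ−Ω_m)²)⁻¹] with Δ = ω_L − ω_cav, using illustrative parameter values rather than numbers from the article, and also prints the sideband parameter Ω_m/κ discussed above.

import math

def gamma_opt(g, kappa, delta, omega_m):
    # Optomechanical damping rate; positive values mean net cooling.
    def lorentzian(x):
        return 1.0 / ((kappa / 2.0) ** 2 + x ** 2)
    return g ** 2 * kappa * (lorentzian(delta + omega_m) - lorentzian(delta - omega_m))

# Illustrative angular frequencies (rad/s), not taken from the article.
omega_m = 2 * math.pi * 10e6    # 10 MHz mechanical mode
kappa   = 2 * math.pi * 1e6     # 1 MHz cavity linewidth
g       = 2 * math.pi * 50e3    # drive-enhanced coupling

print("sideband parameter Omega_m/kappa:", omega_m / kappa)
print("damping on the red sideband (delta = -Omega_m):",
      gamma_opt(g, kappa, -omega_m, omega_m), "rad/s")
print("resolved-sideband estimate 4*g^2/kappa:", 4 * g ** 2 / kappa, "rad/s")

For these numbers the system sits deep in the resolved-sideband (good cavity) regime, and the red-sideband damping rate is close to the simple 4g²/κ estimate.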
Experimental realizations Setup The strength of the optomechanical Hamiltonian is the large range of experimental implementations to which it can be applied, which results in wide parameter ranges for the optomechanical parameters. For example, the size of optomechanical systems can be on the order of micrometers or in the case for LIGO, kilometers. (although LIGO is dedicated to the detection of gravitational waves and not the investigation of optomechanics specifically). Examples of real optomechanical implementations are: Cavities with a moving mirror: the archetype of an optomechanical system. The light is reflected from the mirrors and transfers momentum onto the movable one, which in turn changes the cavity resonance frequency. Membrane-in-the-middle system: a micromechanical membrane is brought into a cavity consisting of fixed massive mirrors. The membrane takes the role of the mechanical oscillator. Depending on the positioning of the membrane inside the cavity, this system behaves like the standard optomechanical system. Levitated system: an optically levitated nanoparticle is brought into a cavity consisting of fixed massive mirrors. The levitated nanoparticle takes the role of the mechanical oscillator. Depending on the positioning of the particle inside the cavity, this system behaves like the standard optomechanical system. Microtoroids that support an optical whispering gallery mode can be either coupled to a mechanical mode of the toroid or evanescently to a nanobeam that is brought in proximity. Optomechanical crystal structures: patterned dielectrics or metamaterials can confine optical and/or mechanical (acoustic) modes. If the patterned material is designed to confine light, it is called a photonic crystal cavity. If it is designed to confine sound, it is called a phononic crystal cavity. Either can be used respectively as the optical or mechanical component. Hybrid crystals, which confine both sound and light to the same area, are especially useful, as they form a complete optomechanical system. Electromechanical implementations of an optomechanical system use superconducting LC circuits with a mechanically compliant capacitance like a membrane with metallic coating or a tiny capacitor plate glued onto it. By using movable capacitor plates, mechanical motion (physical displacement) of the plate or membrane changes the capacitance , which transforms mechanical oscillation into electrical oscillation. LC oscillators have resonances in the microwave frequency range; therefore, LC circuits are also termed microwave resonators. The physics is exactly the same as in optical cavities but the range of parameters is different because microwave radiation has a larger wavelength than optical light or infrared laser light. A purpose of studying different designs of the same system is the different parameter regimes that are accessible by different setups and their different potential to be converted into tools of commercial use. Measurement The optomechanical system can be measured by using a scheme like homodyne detection. Either the light of the driving laser is measured, or a two-mode scheme is followed where a strong laser is used to drive the optomechanical system into the state of interest and a second laser is used for the read-out of the state of the system. This second "probe" laser is typically weak, i.e. its optomechanical interaction can be neglected compared to the effects caused by the strong "pump" laser. 
The optical output field can also be measured with single photon detectors to achieve photon counting statistics. Relation to fundamental research One of the questions which are still subject to current debate is the exact mechanism of decoherence. In the Schrödinger's cat thought experiment, the cat would never be seen in a quantum state: there needs to be something like a collapse of the quantum wave functions, which brings it from a quantum state to a pure classical state. The question is where the boundary lies between objects with quantum properties and classical objects. Taking spatial superpositions as an example, there might be a size limit to objects which can be brought into superpositions, there might be a limit to the spatial separation of the centers of mass of a superposition or even a limit to the superposition of gravitational fields and its impact on small test masses. Those predictions can be checked with large mechanical structures that can be manipulated at the quantum level. Some easier to check predictions of quantum mechanics are the prediction of negative Wigner functions for certain quantum states, measurement precision beyond the standard quantum limit using squeezed states of light, or the asymmetry of the sidebands in the spectrum of a cavity near the quantum ground state. Applications Years before cavity optomechanics gained the status of an independent field of research, many of its techniques were already used in gravitational wave detectors where it is necessary to measure displacements of mirrors on the order of the Planck scale. Even if these detectors do not address the measurement of quantum effects, they encounter related issues (photon shot noise) and use similar tricks (squeezed coherent states) to enhance the precision. Further applications include the development of quantum memory for quantum computers, high precision sensors (e.g. acceleration sensors) and quantum transducers e.g. between the optical and the microwave domain (taking advantage of the fact that the mechanical oscillator can easily couple to both frequency regimes). Related fields and expansions In addition to the standard cavity optomechanics explained above, there are variations of the simplest model: Pulsed optomechanics: the continuous laser driving is replaced by pulsed laser driving. It is useful for creating entanglement and allows backaction-evading measurements. Quadratic coupling: a system with quadratic optomechanical coupling can be investigated beyond the linear coupling term . The interaction Hamiltonian would then feature a term with . In membrane-in-the-middle setups it is possible to achieve quadratic coupling in the absence of linear coupling by positioning the membrane at an extremum of the standing wave inside the cavity. One possible application is to carry out a quantum nondemolition measurement of the phonon number. Reversed dissipation regime: in the standard optomechanical system the mechanical damping is much smaller than the optical damping. A system where this hierarchy is reversed can be engineered; i.e. the optical damping is much smaller than the mechanical damping (). Within the linearized regime, symmetry implies an inversion of the above described effects; For example, cooling of the mechanical oscillator in the standard optomechanical system is replaced by cooling of the optical oscillator in a system with reversed dissipation hierarchy. This effect was also seen in optical fiber loops in the 1970s. 
Dissipative coupling: the coupling between optics and mechanics arises from a position-dependent optical dissipation rate instead of a position-dependent cavity resonance frequency , which changes the interaction Hamiltonian and alters many effects of the standard optomechanical system. For example, this scheme allows the mechanical resonator to cool to its ground state without the requirement of the good cavity regime. Extensions to the standard optomechanical system include coupling to more and physically different systems: Optomechanical arrays: coupling several optomechanical systems to each other (e.g. using evanescent coupling of the optical modes) allows multi-mode phenomena like synchronization to be studied. So far many theoretical predictions have been made, but only few experiments exist. The first optomechanical array (with more than two coupled systems) consists of seven optomechanical systems. Hybrid systems: an optomechanical system can be coupled to a system of a different nature (e.g. a cloud of ultracold atoms and a two-level system), which can lead to new effects on both the optomechanical and the additional system. Cavity optomechanics is closely related to trapped ion physics and Bose–Einstein condensates. These systems share very similar Hamiltonians, but have fewer particles (about 10 for ion traps and 105–108 for Bose–Einstein condensates) interacting with the field of light. It is also related to the field of cavity quantum electrodynamics. See also Quantum harmonic oscillator Optical cavity Laser cooling Coherent control References Further reading Daniel Steck, Classical and Modern Optics Michel Deverot, Bejamin Huard, Robert Schoelkopf, Leticia F. Cugliandolo (2014). Quantum Machines: Measurement and Control of Engineered Quantum Systems. Lecture Notes of the Les Houches Summer School: Volume 96, July 2011. Oxford University Press Demir, Dilek,"A table-top demonstration of radiation pressure", 2011, Diplomathesis, E-Theses univie. doi:10.25365/thesis.16381 Quantum optics
Cavity optomechanics
Physics
5,971
23,188,156
https://en.wikipedia.org/wiki/BD%E2%88%9217%2063
BD−17 63 is a K-type main-sequence star in the southern constellation Cetus. It is a 10th magnitude star at a distance of 113 light-years from Earth. The star is rotating slowly with a negligible level of magnetic activity and an age of over 4 billion years. The star BD-17 63 is named Felixvarela. The name was selected in the NameExoWorlds campaign by Cuba, during the 100th anniversary of the IAU. Felix Varela (1788–1853) was the first to teach science in Cuba. Planetary system In October 2008 an exoplanet, BD−17 63 b, was reported to be orbiting this star on an eccentric orbit. This object was detected using the radial velocity method by search programs conducted using the HARPS spectrograph. An astrometric measurement of the planet's inclination and true mass was published in 2022 as part of Gaia DR3, with another astrometric orbital solution published in 2023. See also List of extrasolar planets References K-type main-sequence stars Planetary systems with one confirmed planet Cetus BD-17 0063 002247 Felixvarela
BD−17 63
Astronomy
242
3,448,809
https://en.wikipedia.org/wiki/Atom%20laser
An atom laser is a coherent state of propagating atoms. They are created out of a Bose–Einstein condensate of atoms that are output coupled using various techniques. Much like an optical laser, an atom laser is a coherent beam that behaves like a wave. There has been some argument that the term "atom laser" is misleading. Indeed, "laser" stands for light amplification by stimulated emission of radiation which is not particularly related to the physical object called an atom laser, and perhaps describes more accurately the Bose–Einstein condensate (BEC). The terminology most widely used in the community today is to distinguish between the BEC, typically obtained by evaporation in a conservative trap, from the atom laser itself, which is a propagating atomic wave obtained by extraction from a previously realized BEC. Some ongoing experimental research tries to obtain directly an atom laser from a "hot" beam of atoms without making a trapped BEC first. Introduction The first pulsed atom laser was demonstrated at MIT by Professor Wolfgang Ketterle et al. in November 1996. Ketterle used an isotope of sodium and used an oscillating magnetic field as their output coupling technique, letting gravity pull off partial pieces looking much like a dripping tap (See movie in External Links). From the creation of the first atom laser there has been a surge in the recreation of atom lasers along with different techniques for output coupling and in general research. The current developmental stage of the atom laser is analogous to that of the optical laser during its discovery in the 1960s. To that effect the equipment and techniques are in their earliest developmental phases and still strictly in the domain of research laboratories. The brightest atom laser so far has been demonstrated at IESL-FORTH, Crete, Greece. Physics The physics of an atom laser is similar to that of an optical laser. The main differences between an optical and an atom laser are that atoms interact with themselves, cannot be created as photons can, and possess mass whereas photons do not (atoms therefore propagate at a speed below that of light). The van der Waals interaction of atoms with surfaces makes it difficult to make the atomic mirrors, typical for conventional lasers. A pseudo-continuously operating atom laser was demonstrated for the first time by Theodor Hänsch, Immanuel Bloch and Tilman Esslinger at the Max Planck Institute for Quantum Optics in Munich. They produce a well controlled continuous beam spanning up to 100 ms, whereas their predecessor produced only short pulses of atoms. However, this does not constitute a continuous atom laser since the replenishing of the depleted BEC lasts approximately 100 times longer than the duration of the emission itself (i.e. the duty cycle is 1/100). Recent developments in the field have shown progress towards a continuous atom laser, namely the creation of a continuous Bose-Einstein-Condensate. Applications Atom lasers are critical for atom holography. Similar to conventional holography, atom holography uses the diffraction of atoms. The De Broglie wavelength of the atoms is much smaller than the wavelength of light, so atom lasers can create much higher resolution holographic images. Atom holography might be used to project complex integrated-circuit patterns, just a few nanometres in scale, onto semiconductors. Another application, which might also benefit from atom lasers, is atom interferometry. 
In an atom interferometer an atomic wave packet is coherently split into two wave packets that follow different paths before recombining. Atom interferometers, which can be more sensitive than optical interferometers, could be used to test quantum theory, and have such high precision that they may even be able to detect changes in space-time. This is because the de Broglie wavelength of the atoms is much smaller than the wavelength of light, the atoms have mass, and because the internal structure of the atom can also be exploited. See also Bose–Einstein condensate List of laser articles References External links Atom Laser Movie Atom lasers at physicsweb.org Research groups working with atom traps Quantum optics Laser types
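The wavelength comparison invoked above can be checked with a back-of-the-envelope calculation. The Python sketch below (with illustrative atom velocities, not values from the article) computes the de Broglie wavelength λ = h/(mv) for sodium-23 atoms and compares it with the 589 nm light of the sodium D line.

h = 6.62607015e-34     # Planck constant (J s)
u = 1.66053907e-27     # atomic mass unit (kg)
m_na = 23 * u          # approximate mass of a sodium-23 atom

for v in (0.05, 1.0, 10.0):       # illustrative atom velocities in m/s
    wavelength = h / (m_na * v)   # de Broglie wavelength
    print(f"v = {v:5.2f} m/s -> lambda = {wavelength * 1e9:8.2f} nm")

print("sodium D-line optical wavelength: 589 nm")

At velocities of a few metres per second the matter wavelength is already tens of nanometres or less, far below optical wavelengths, which is the basis of the resolution argument made for atom holography and the sensitivity argument made for atom interferometry.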
Atom laser
Physics
840
30,256,549
https://en.wikipedia.org/wiki/Computer%20technology%20for%20developing%20areas
Computer technology for developing areas is a field focused on using technology to improve the quality of life and support economic development in regions with limited access to resources and infrastructure. This area of research seeks to address the digital divide, which refers to the gap between those who have access to technology and those who do not, and the resulting inequalities in education, healthcare, and economic opportunities. Computer technology is often given to developing areas through donation. Many governmental, charitable, and for-profit organizations throughout the world give hardware, software, and infrastructure, along with the necessary training to use and maintain it all. Opportunity Developing countries lag behind other nations in terms of ready access to the internet, though computer access has started to bridge that gap. Access to computers, and to broadband in particular, remains rare for half of the world's population. For example, as of 2010, on average only one in 130 people in Africa had a computer, while in North America and Europe one in every two people had access to the Internet. 90% of students in Africa had never touched a computer. Industrialized countries have an average GNP ten times larger than that of developing countries. The per capita GNP of the United States compared to that of India stands at a ratio of roughly fifty to one. This may be due to differences in economic priorities and social needs. Salaries of clerical staff in developed countries average ten times those in developing countries. The purposes and usage of technology vary drastically owing to differing priorities between industrialized and developing countries. Underutilization of existing computers continues to be a problem in developing countries. Simple designs such as computer memory have still not been implemented or maximized in comparison with industrialized countries today. Local networks can provide significant access to software and information even without utilizing an internet connection, for example through use of the Wikipedia CD selection or the eGranary Digital Library. Focusing on Africa Exploring the introduction of computer technology in Africa Africa presents a unique cultural climate for the introduction of computer technology, not only because of its diverse population, varied geography and multifaceted issues but also because of its singular challenges. Africa is composed of 53 countries, many of which have gained independence since 1950, and is home to some 75 distinct ethnic groups and approximately 700 million people. It has been colonized, and hence influenced strongly, by Europeans from France, Portugal, Britain, Spain, Italy and Belgium, with the exception of the countries of Ethiopia and Liberia. Martin & O'Meara describe Africa's diversity and some of the issues that it presents: ethnicity, geography, rural/urban life styles, family life (class levels), access to developed-world products, education, and media. Despite this somewhat overwhelming diversity in Africa, the need for self-determination by Africans, as fought for, for example, by Nigeria's five Ogoni clans over oil rights during the 1990s, is paramount. The "bare necessities of life – water, electricity, roads, education and a right to self-determination so that we can be responsible for our resources and our environment" must be respected. Technology such as computers is considered by some to be important in obtaining such self-determination for Africa, especially in the area of education.
While education has already received a significant boost through the independence of many African countries, more education can lead to water, electricity, roads and greater self-determination. Bill Clinton supports the use of technology in education, stating, "[s]o, I think that the potential of information technology to empower individuals, promote growth, reduce inequality, increase government capacity, and make citizen interaction with government work better is enormous". And at the same forum, Bill Gates further stated, "Out of 6 billion people, somewhat less than 1 billion are using this technology. ... Part of how to do that is by having community access, getting it into schools and libraries, and many of the projects we've done, both here in Africa and around the world have that theme that, although it won't be in the home at first, it will be accessible." Africa is a diverse continent comprising 53 countries with over 75 ethnic groups and a population of approximately 1.3 billion people. The continent has a wide range of geographical features, including deserts, savannas, mountains, and forests. While Africa has seen significant progress in various sectors since gaining independence from European colonial powers, it continues to face multifaceted challenges, including poverty, disease, conflict, and underdevelopment. The continent's education system is also plagued with issues such as inadequate infrastructure, limited resources, and a shortage of qualified teachers. These factors have contributed to low literacy rates in many African countries. Despite these challenges, technology has been identified as a potential tool for addressing some of Africa's development issues. The use of computer technology in Africa has been mainly focused on education, health, agriculture, and e-commerce. However, there are challenges to introducing computer technology in Africa, including limited infrastructure, lack of electricity, and high costs. To overcome these challenges, various initiatives have been undertaken, such as providing community access to technology and creating partnerships with governments, non-governmental organizations, and the private sector. Despite these initiatives, the adoption of computer technology in Africa remains uneven, with many areas still lacking access to computers and the Internet. Nonetheless, the continent's commitment to embracing technology has led to the development of innovative solutions, such as mobile money and e-learning platforms, that have the potential to transform Africa's economy and society. South Africa and the Smart Cape Access Project South Africa has seen one of the largest and most successful introductions of computers to residents in Africa with the Smart Cape Access Project, initiated in 2000 in Cape Town, which won the Bill and Melinda Gates Foundation Access to Learning Award in 2003 (Valentine, 2004). The project piloted 36 computers in six public libraries in disadvantaged areas of Cape Town in 2002, with four computers designated for public use at each library. Libraries offered the necessary infrastructure, with security, electricity and telephone connections, and were already known and accessible to the public. Cape Town City Council sought information from librarians to build the project, realizing that free Internet access was critical to the project's success, along with training, a user guide, help-desk support and a feedback loop. They anticipated that Internet access would "create much-needed jobs for citizens, but ...
it can empower people to market themselves, start their own businesses, or gain access to useful information". Funding for the project relied on donations and partnerships from private organizations, with extensive volunteer help in obtaining open-source software, which is available from licensed vendors or free on the Internet. While the project has been plagued by slow Internet speeds, long lines of waiting users, hacking and budget constraints, the demand for more computers remains high. Residents have used Internet access to build their own businesses using Smart Cape for administration, to obtain jobs, sometimes overseas, to create some unsanctioned small-scale ventures such as paying an educated user to write one's resume, to write letters, e-mail, play games, complete homework and do research, and to obtain information such as BMW advertisements, among other uses. Older people, unemployed youth and school children have been the most prevalent users of the Internet, with 79 percent of users being men. With the first phase of the project completed in 2005 and the second phase, consisting of monitoring and evaluation of pilot sites, just completed in 2007, the roll-out of the final phase of the project is underway. Over one hundred thousand people have made use of the Smart Cape Access Project computers' free access since 2002 (Brown, 2007), which represents roughly a one-fifth increase in overall Internet access for Cape Town's population of 3.2 million, bringing total access to 17 percent of residents in 2008 (Mokgata, 2008). However, the project continues to be plagued by budget issues, leading to questions about long-term sustainability because of its heavy reliance on donations and volunteers. The project reports did not address the maintenance of the computers or the network, which could also be a rather large expenditure. Of further concern is the lack of use by women and girls, which culturally reflects a hierarchy problem because men are the public face; this is another topic to consider in the future. Africa and other less successful projects Unlike the Smart Cape Access Project, many other projects that attempt to introduce computers to Africa fail not only on sustainability but also on training, support and feedback, although in many cases access to the Internet via cable or wireless, and to electricity, remain overwhelming issues. Less than one percent of Africans access broadband and only four percent use the Internet, according to the BBC in an article about Intel backing wireless access in Africa. The cost of wireless remains prohibitive to most Africans and, possibly more important, there is no overall "education model" describing how to integrate the various forms of hardware needed to provide a wireless network. Kenya provides an example: it is pursuing the use of fiber-optic cable to connect to the Internet, which should lower access costs from $7,500 for a satellite-delivered megabyte to $400. The Alcatel-Lucent project started at the end of 2007 (with a two-year delivery date) and will piggyback on the expansion of electricity to many rural villages, providing Internet access. It will also provide speed that is currently lacking with the satellite connection.
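As a quick back-of-the-envelope illustration of the cost figures quoted above, the sketch below contrasts the reported satellite and fibre costs per megabyte; the monthly usage figure is an arbitrary, hypothetical example and not something taken from the source.

```python
# Contrast the reported satellite vs. fibre access costs for Kenya.
# The monthly usage figure is a hypothetical illustration only.
SATELLITE_USD_PER_MB = 7500   # reported satellite-delivered cost per megabyte
FIBRE_USD_PER_MB = 400        # reported fibre-delivered cost per megabyte
MONTHLY_MB = 50               # assumed usage for a small telecentre

reduction_factor = SATELLITE_USD_PER_MB / FIBRE_USD_PER_MB
savings = (SATELLITE_USD_PER_MB - FIBRE_USD_PER_MB) * MONTHLY_MB

print(f"Cost reduction factor: {reduction_factor:.1f}x")   # ~18.8x cheaper
print(f"Monthly saving at {MONTHLY_MB} MB: ${savings:,}")  # $355,000
```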
Freeplay Foundation has attempted to address the issue of electricity by first developing battery-powered lights for rural areas of Africa, piloting a project, also in Kenya, in 2008. "The World Bank estimates that more than 500 million people in sub-Saharan Africa do not have access to electricity supplies that could be used to light their homes" or power computers. Freeplay has also set up a distribution system through women, who earn income from selling, repairing and maintaining the lights for customers; prototyping began in Kenya early in 2008. While purchasing the lights may pose a sustainability issue, such inventions could hopefully be tapped to power computers in Africa in the future. An example of further difficulties surrounding the introduction of computers in Africa is found in the study of Mozambique, one of the poorest nations in the world, with 60 percent of its population below the poverty line. Despite their poverty, Mozambicans view education and access to the Internet as second only to obtaining enough food to eat. This is shown in statistics that identify the increase in computers per hundred inhabitants from 0.08 to 1.6 in just two years, between 1996 and 1998. However, in non-urban areas, where better-off residents might make 40 to 60 US dollars a month, access to the Internet could eat up half of their income, so community-owned settings have been instituted, with as yet unknown success. Other pilot programs are also proliferating across the country, with unknown results at this time. This lack of data regarding the overall implementation of computers in Mozambique highlights the sustainability issue of computers in Africa, as does the following example in Cameroon. Cameroon was the recipient of the School of Engineering and Applied Science communication technology through a student volunteer organization. Computers were obtained, shipped and refurbished, and their deployment was combined with teaching computer skills to residents. One recipient was the Presbyterian Teachers Training College, which interacts with primary and secondary schools. However, no maintenance or support procedures and facilities were available as part of this effort, and information on the continued value of the project is unavailable. Similarly, but on a larger scale, Computer Aid, a British charity, has shipped over 30,000 PCs to 87 developing countries and is currently shipping at a rate of 1,000 a month. While it refurbishes donated computers before shipping, it appears to have no follow-up on the placement of the computers. However, Rwanda seems eager to have these computers and is providing a government-sponsored Information and Communication Technology policy with access to computers through schools, community and health projects. While all of these projects are admirable, successful introduction of computers to Africa necessitates something more like the United Nations' Millennium Development Goals approach, which has been agreed to by countries and leading development institutions around the world to promote a comprehensive and coordinated approach to tackling many problems in developing countries ("Microsoft technology, partnerships", 2006). However, by 2008 Bill Gates had changed his perspective on technology solving problems in Africa, stating, "I mean, do people have a clear view of what it means to live on $1 a day? ..." He openly dismissed the notion that the world's poorest people constitute a significant market for high-tech products anytime soon, adding that
"... the world's poorest two billion people desperately need health care right now, not laptops". Here the dilemma is introduced: feed people through handouts, or provide tools for their own self-determination. From the standpoint of self-determination, without excluding the benefits of philanthropy, a review of the projects discussed above and others, merged with Martin Fisher's successful approach at KickStart International, could provide a framework for a more successful introduction of computers to Africa, possibly leapfrogging straight to first-world technology. Martin Fisher: a possible business plan Martin Fisher started KickStart International with Nick Moon in 1991 as a "non-profit organization that develops and markets new technologies for use in Africa". It develops technologies on the basis of understanding the cultural factors surrounding making money in Africa, rather than giving away technology and expertise that has little to do with Africans' ability to make a living. Moon and Fisher believe that "the poor people don't need handouts, they need concrete opportunities to use their skills and initiative". Fisher further states that "our approach is to design, market, and sell simple tools that poor entrepreneurs buy and use to create profitable new small businesses and earn a decent income". He also stresses the need to build tools that can be supported in Africa using limited materials and assembly methods. They have designed and marketed a number of tools focusing on farming in the African countries of Kenya, Tanzania and Mali, because 80 percent of the poor are farmers with only two assets: land and the skill of farming. For example, KickStart has created a Hip Pump selling for $34.00, which allows a farmer to use the motion of her or his hips against a lever as the drive mechanism. The pump is capable of lifting water from six meters below the ground to 13 meters above it, allowing a farmer to irrigate about three-quarters of an acre in eight hours. Other technologies have included pressing oil seeds, making building blocks from compacted soil, baling hay and producing a latrine cover. These technologies are being mass-produced in Africa. The company has successfully sold over 63,000 pumps (Perlin, 2006) and estimates that 42,000 new micro-enterprises have been started using KickStart equipment such as this pump, generating more than 42 million US dollars per year in new profits and wages. Fisher and Moon further estimate that they have helped 200,000 people escape from poverty. They have been successful in Africa because they have focused on: 1. Understanding the culture and environment. 2. Providing income-producing tools to create new wealth. 3. Building tools that can be supported in the environment.
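As a rough plausibility check on the Hip Pump figures quoted above, the sketch below estimates the pumping rate implied by irrigating three-quarters of an acre in eight hours. The assumed watering depth of 10 mm is a hypothetical illustration rather than a KickStart specification, so the resulting flow rate is only indicative.

```python
# Rough estimate of the flow rate implied by the Hip Pump figures above.
# The watering depth is a hypothetical assumption, not a KickStart figure.
AREA_ACRES = 0.75
SQ_METERS_PER_ACRE = 4046.86
WATERING_DEPTH_M = 0.010      # assumed 10 mm of water applied per irrigation
HOURS = 8

area_m2 = AREA_ACRES * SQ_METERS_PER_ACRE
volume_m3 = area_m2 * WATERING_DEPTH_M               # total water moved
litres_per_minute = volume_m3 * 1000 / (HOURS * 60)

print(f"Water moved: {volume_m3:.1f} cubic metres")    # ~30 m^3
print(f"Implied flow: {litres_per_minute:.0f} L/min")  # ~63 L/min
```

Under that assumption the pump would need to move on the order of a few tens of cubic metres of water per working day, which gives a sense of the sustained manual effort the tool is designed around.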
Although KickStart has not yet addressed the technical challenges of introducing computers to Africa, their business plan can be readily adapted to this goal. The Smart Cape Access Project in Cape Town is a notable success, demonstrating an understanding of the local culture and environment, but it also raises concerns about sustainability and female engagement. It is crucial to consider how access to the internet can offer income-generating tools, create new wealth, and improve maintenance plans. Moreover, promoting the involvement of women and girls will have a positive impact on the rollout of technology and the eventual introduction of computers to Africa. Sources of hardware Inexpensive new computer initiatives Initiatives such as the OLPC computer and Sakshat Tablet are intended to provide rugged technology at a price affordable for mass deployments. The World Bank surveyed the available ICT (Information and communication technologies for development) devices in 2010. The Raspberry Pi is a single-board computer used to promote low-cost educational computing and interfacing applications. Electronic waste statistics Uganda typically has both repairers and refurbishers of computers. In some countries charitable NPOs can give tax-deductible donation receipts for computers they're able to refurbish or otherwise reuse. Increased use of technology, especially ICT, together with low initial costs and the unplanned obsolescence of electrical and electronic equipment, has led to an e-waste generation problem for Uganda. A joint team from UNEP, NEMA and the Uganda Cleaner Production Centre (UCPC) estimates the current e-waste generated in Uganda at 10,300 tonnes from refrigerators, 3,300 tonnes from TVs, 2,600 tonnes from personal computers, 300 tonnes from printers and 170 tonnes from mobile phones. However, as a result of the ban on used electronics, the accumulation of e-waste from 2010 to 2011 was reduced by 40%. An e-waste strategy is being developed consultatively, involving various stakeholders in the environment sector, yet Uganda still has no e-waste recycler with the capacity to curb the accumulation of e-waste. List of Charitable organisations multi-national – Digital Partnership; multi-national – InterConnection; multi-national – Non-Profit Computing, Inc. (a United Nations advisor); multi-national – World Computer Exchange; Ireland – Camara; Japan – IDCE; Norway, Denmark and Sweden – FAIR (Fair Allocation of Infotech Resources); UK – Computers 4 Africa; UK – Digital Pipeline; Computers for African Schools; Computer Aid International; Digital Links International; UK – IT Schools Africa; US (some multi-national) – TechSoup Global. Problems encountered Technology leaders like Microsoft co-founder Bill Gates argue that developing areas have more pressing needs than computer technology: "'Fine, go to those Bangalore Infosys centres, but just for the hell of it go three miles aside and go look at the guy living with no toilet, no running water,' Gates says... 'The world is not flat and PCs are not, in the hierarchy of human needs, in the first five rungs.'" A 2010 research report from the Governance and Social Development Resource Centre found "Very few ICT4D activities have proved sustainable... Recent research has stressed the need to shift from a technology-led approach, where the emphasis is on technical innovation towards an approach that emphasises innovative use of already established technology (mobiles, radio, television)."
However, of 27 applications of ICTs for development, e-government, e-learning and e-health were found to have the potential for great success, as were the strengthening of social networks and the boosting of security (particularly of women). One key problem is the ability of the recipients to maintain the donated technology and teach others its use. Another significant problem can be the selection of software installed on technology – instructors trained in one set of software (for example Ubuntu) can be expected to have difficulty in navigating computers donated with different software (for example Windows XP). A pressing problem is also the handling of electronic waste in dangerous ways. Burning technology to obtain the metals inside releases toxic fumes into the air. (Certification of recyclers to e-Stewards or R2 Solutions standards is intended to preclude environmental pollution.) Finally, while countries may receive many donations of hardware, software, training, and technical support, internet penetration in developing countries is often extremely low compared with the developed world. However, in recent years, mobile internet has had massive growth in these regions and has become the primary way most people access the internet. Mobile internet penetration is not evenly distributed, however, with rural areas often having much lower rates of internet access. This furthers the economic and cultural divide between urban and rural areas in developing countries as internet access becomes more essential to everyday life. See also Basel Action Network Community informatics Electronic waste by country Electronic Waste Recycling Act (disambiguation) Green computing Non-profit technology NTAP (nonprofit technology assistance provider) Personal computer Plockton High School (Computers for Africa) Recycling Streetlites (African charity) Telecentre United Nations Information and Communication Technologies Task Force Waste Electrical and Electronic Equipment Directive References External links Computer Refurbishment Centre Opens for Business in Kampala (6 December 2008) Affordable handheld computer reaches Latin America (5 April 2009) Science and technology in Africa
Computer technology for developing areas
Technology
4,220
56,823,699
https://en.wikipedia.org/wiki/Poisoning%20of%20Sergei%20and%20Yulia%20Skripal
The poisoning of Sergei and Yulia Skripal, also known as the Salisbury Poisonings, was a botched attempt to assassinate Sergei Skripal, a former Russian military officer and double agent for the British intelligence agencies, in the city of Salisbury, England, on 4 March 2018. Sergei and his daughter, Yulia Skripal, were poisoned by means of a Novichok nerve agent. Both spent several weeks in hospital in a critical condition before being discharged. A police officer, Nick Bailey, was also taken into intensive care after attending the incident, and was later discharged. The British government accused Russia of attempted murder and announced a series of punitive measures against Russia, including the expulsion of diplomats. The UK's official assessment of the incident was supported by 28 other countries, which responded similarly. Altogether, an unprecedented 153 Russian diplomats were expelled by the end of March 2018. Russia denied the accusations, expelled foreign diplomats in retaliation for the expulsion of its own diplomats, and accused Britain of the poisoning. On 30 June 2018, a similar poisoning of two British nationals in Amesbury, north of Salisbury, involved the same nerve agent. Charlie Rowley found a perfume bottle, later discovered to contain the agent, in a litter bin somewhere in Salisbury and gave it to Dawn Sturgess, who sprayed it on her wrist. Sturgess fell ill within 15 minutes and died on 8 July, but Rowley, who had also come into contact with the poison, survived. British police believe this incident was not a targeted attack, but a result of the way the nerve agent was disposed of after the poisoning in Salisbury. A public inquiry was launched into the circumstances of Sturgess's death. On 5 September 2018, British authorities identified two Russian nationals, using the names Alexander Petrov and Ruslan Boshirov, as suspects in the Skripals' poisoning, and alleged that they were active officers in Russian military intelligence. Later, the investigative website Bellingcat stated that it had positively identified Ruslan Boshirov as the highly decorated GRU Colonel Anatoliy Chepiga, that Alexander Petrov was Alexander Mishkin, also of the GRU, and that a third GRU officer present in the UK at the time was identified as Denis Vyacheslavovich Sergeev, believed to hold the rank of major general in the GRU. The pattern of his communications while in the UK indicates that he liaised with superior officers in Moscow. The attempted assassination and the subsequent exposure of its agents were an embarrassment for Putin and for Russia's spying organisation. The operation was allegedly organised by the secret Unit 29155 of the Russian GRU, under the command of Major General Andrei V. Averyanov. On 27 November 2019, the Organisation for the Prohibition of Chemical Weapons (OPCW) added Novichok, the Soviet-era nerve agent used in the attack, to its list of banned substances. Chronology of events At 14:40 GMT on 3 March 2018, Yulia Skripal, the 33-year-old daughter of Sergei Skripal, a 66-year-old resident of Salisbury, flew into Heathrow Airport from Sheremetyevo International Airport in Moscow, Russia. At 09:15 on 4 March Sergei Skripal's burgundy 2009 BMW 320d was seen in the area of London Road, Churchill Way North and Wilton Road in Salisbury. At 13:30 Skripal's car was seen on Devizes Road on the way towards the town centre. At 13:40 the Skripals arrived in the upper-level car park at the Maltings, Salisbury, and then went to the Bishop's Mill pub in the town centre.
At 14:20 they dined at Zizzi on Castle Street, leaving at 15:35. At 16:15 an emergency services call reported that a man and woman, later identified as Sergei and Yulia, had been found unconscious on a public bench in the centre of Salisbury by the passing Chief Nursing Officer for the British Army and her daughter. An eyewitness saw the woman foaming at the mouth with her eyes wide open but completely white. According to a later British government statement they were "slipping in and out of consciousness on a public bench". At 17:10, they were taken separately to Salisbury District Hospital by an ambulance and an air ambulance. At 09:03 the following morning, Salisbury NHS Foundation Trust declared a major incident in response to concerns raised by medical staff; shortly afterwards this became a multi-agency incident named Operation Fairline. Health authorities checked 21 members of the emergency services and the public for possible symptoms; two police officers were treated for minor symptoms, said to be itchy eyes and wheezing, while one, Detective Sergeant Nick Bailey, who had been sent to Skripal's house, was in a serious condition. On 22 March, Bailey was discharged from the hospital. In a statement he said "normal life for me will probably never be the same" and thanked the hospital staff. On 26 March, Skripal and his daughter were reported to still be critically ill. On 29 March it was announced that Yulia's condition was improving and she was no longer in a critical condition. After three weeks in a critical condition, Yulia regained consciousness and was able to speak. Sergei was also in a critical condition until he regained consciousness one month after the attack. On 5 April, doctors said that Sergei was no longer in critical condition and was responding well to treatment. On 9 April, Yulia was discharged from hospital and taken to a secure location. On 18 May, Sergei Skripal was discharged from the hospital too. On 23 May, a handwritten letter and a video statement by Yulia were released to the Reuters news agency for the first time after the poisoning. She stated that she was lucky to be alive after the poisoning and thanked the staff of the Salisbury hospital. She described her treatment as slow, heavy and extremely painful and mentioned a scar on her neck, apparently from a tracheotomy. She expressed hope that someday she would return to Russia. She thanked the Russian embassy for its offer of assistance but said she and her father were "not ready to take it". On 5 April, British authorities said that inside Skripal's house, which had been sealed by the police, two guinea pigs were found dead by vets, when they were allowed in, along with a cat in a distressed state, which had to be put down. On 22 November the first interview with DS Bailey was released, in which he reported that he had been poisoned, despite the fact that he inspected the Skripals' house wearing a forensic suit. In addition to the poisoning, Bailey and his family had lost their home and all their possessions, because of contamination. Investigators said that the perfume bottle containing Novichok nerve agent, which was later found in a bin, had contained enough of the nerve agent to potentially kill thousands of people. In early 2019, building contractors built a scaffolding "sealed frame" over the house and the garage of Skripal's home. A military team then dismantled and removed the roofs on both buildings over the course of two weeks. 
Cleaning and decontamination was followed by rebuilding over a period of four months. On 22 February 2019, Government officials announced that the last of the 12 sites that had been undergoing an intense and hazardous clean-up – Skripal's house – had been judged safe. In May 2019, Sergei Skripal made a phone call and left a voice message to his niece Viktoria living in Russia. This was the first time after the poisoning that his voice had been heard by the public. In August 2019 it was confirmed that a second police officer had been poisoned while investigating, but only in trace amounts. Investigation The first public response to the poisoning came on 6 March. It was agreed under the Counter Terrorism Policing network that the Counter Terrorism Command based within the Metropolitan Police would take over the investigation from Wiltshire Police. Assistant Commissioner Mark Rowley, head of Counter Terrorism Policing, appealed for witnesses to the incident following a COBR meeting chaired by Home Secretary Amber Rudd. Samples of the nerve agent used in the attack tested positive at the Defence Science and Technology Laboratory at Porton Down for a "very rare" nerve agent, according to the UK Home Secretary. 180 military experts in chemical warfare defence and decontamination, as well as 18 vehicles, were deployed on 9 March to assist the Metropolitan Police to remove vehicles and objects from the scene and look for any further traces of the nerve agent. The personnel were drawn mostly from the Army, including instructors from the Defence CBRN Centre and the 29 Explosive Ordnance Disposal and Search Group, as well as from the Royal Marines and Royal Air Force. The vehicles included TPz Fuchs operated by Falcon Squadron from the Royal Tank Regiment. On 11 March, the UK government advised those present at either The Mill pub or the Zizzi restaurant in Salisbury on 4 and 5 March to wash or wipe their possessions, emphasising that the risk to the general public was low. Several days later, on 12 March, Prime Minister Theresa May said the agent had been identified as one of the Novichok family of agents, believed to have been developed in the 1980s by the Soviet Union. According to the Russian ambassador to the UK, Alexander Yakovenko, the British authorities identified the agent as A-234, derived from an earlier version known as A-232. By 14 March, the investigation was focused on Skripal's home and car, a bench where the two fell unconscious, a restaurant in which they dined and a pub where they had drinks. A recovery vehicle was removed by the military from Gillingham in Dorset on 14 March, in connection with the poisoning. Subsequently, there was speculation within the British media that the nerve agent had been planted in one of the personal items in Yulia Skripal's suitcase before she left Moscow for London, and in US media that it had been planted in their car. Ahmet Üzümcü, Director-General of the Organisation for the Prohibition of Chemical Weapons (OPCW), said on 20 March that it will take "another two to three weeks to finalise the analysis" of samples taken from the poisoning of Skripal. On 22 March, the Court of Protection gave permission for new blood samples to be obtained from Yulia and Sergei Skripal for use by the OPCW. By 28 March, the police investigation concluded that the Skripals were poisoned at Sergei's home, with the highest concentration being found on the handle of his front door. 
On 12 April the OPCW confirmed the UK's analysis of the type of nerve agent and reported it was of a "high purity", stating that the "name and structure of the identified toxic chemical are contained in the full classified report of the Secretariat, available to States Parties". A declassified letter from the UK's national security adviser, Sir Mark Sedwill, to NATO Secretary General Jens Stoltenberg, stated Russian military intelligence hacked Yulia Skripal's email account since at least 2013 and tested methods for delivering nerve agents including on door handles. The Department for Environment confirmed the nerve agent was delivered "in a liquid form". They said eight sites require decontamination, which will take several months to complete and cost millions of pounds. The BBC reported experts said the nerve agent does not evaporate or disappear over time. Intense cleaning with caustic chemicals is required to get rid of it. The Skripals' survival was possibly due to the weather – there had been heavy fog and high humidity, and according to its inventor and other scientists, moisture weakens the potency of this type of toxin. On 22 April 2018, it was reported that British counter-terror police had identified a suspect in the poisoning: a former Federal Security Service (FSB) officer (reportedly a 54-year-old former FSB captain) who acted under several code names including "Gordon" and "Mihails Savickis". According to detectives, he led a team of six Russian assassins who organised the chemical weapons attack. Sedwill reported on 1 May 2018 however that UK intelligence and police agencies had failed to identify the individual or individuals who carried out the attack. On 3 May 2018, the head of the OPCW, Ahmet Üzümcü, informed the New York Times that he had been told that about 50–100 grams of the nerve agent was thought to have been used in the attack, which indicated it was likely created for use as a weapon and was enough to kill a large number of people. The next day however the OPCW made a correcting statement that the "quantity should probably be characterised in milligrams", though "the OPCW would not be able to estimate or determine the amount of the nerve agent that was used". On 19 July the Press Association reported that police believed they had identified "several Russians" as the suspected perpetrators of the attack. They had been identified through CCTV, cross-checked with border entry data. On 6 August 2018, it was reported that the British government was "poised to submit an extradition request to Moscow for two Russians suspected of carrying out the Salisbury nerve agent attack". The Metropolitan Police used two super recognisers to identify the suspects after trawling through up to 5,000 hours of CCTV footage from Salisbury and numerous airports across the country. British Prime Minister Theresa May announced in the Commons the same day that British intelligence services had identified the two suspects as officers in the G. U. Intelligence Service (formerly known as GRU) and the assassination attempt was not a rogue operation and was "almost certainly" approved at a senior level of the Russian government. May also said Britain would push for the EU to agree new sanctions against Russia. On 5 September 2018, the Russian news site Fontanka reported that the numbers on leaked passport files for Petrov and Boshirov are only three digits apart, and fall in a range that includes the passport files for a Russian military official expelled from Poland for spying. 
It is not known how the passport files were obtained, but Andrew Roth, the Moscow correspondent for The Guardian, commented that "If the reporting is confirmed, it would be a major blunder by the intelligence agency, allowing any country to check passport data for Russians requesting visas or entering the country against a list of nearly 40 passport files of suspected GRU officers." On 14 September 2018, the online platforms Bellingcat and The Insider Russia observed that in Petrov's leaked passport files, there is no record of a residential address or any identification papers prior to 2009, suggesting that the name is an alias created that year; the analysis also noted that Petrov's dossier is stamped "Do not provide any information" and has the handwritten annotation "S.S.," a common abbreviation in Russian for "top secret". On 15 September 2018, the Russian opposition newspaper Novaya Gazeta reported finding in Petrov's passport files a cryptic number that seems to be a telephone number associated with the Russian Defence Ministry, most likely the Military Intelligence Directorate. As part of the announcement Scotland Yard and the Counter Terrorism Command released a detailed track of the individuals' 48 hours in the UK. This covered their arrival from Moscow at Gatwick Airport, a trip to Salisbury by train the day before the attack, stated by police to be for reconnaissance, a trip to Salisbury by train on the day of the attack, and return to Moscow via Heathrow Airport. The two spent both nights at the City Stay Hotel, next to Bow Church DLR station in Bow, East London. Novichok was found in their hotel room after police sealed it off on 4 May 2018. Neil Basu, National Lead for Counter Terrorism Policing said that tests were carried out on their hotel room and it was "deemed safe". On 26 September 2018, the real identity of the suspect named by police as Ruslan Boshirov was revealed as Colonel Anatoliy Vladimirovich Chepiga by The Daily Telegraph, citing reporting by itself and Bellingcat, with Petrov having a more junior rank in the GRU. The 39-year-old was made a Hero of the Russian Federation by decree of the President in 2014. Two European security sources confirmed that the details were accurate. The BBC commented: "The BBC understands there is no dispute over the identification." The Secretary of State for Defence Gavin Williamson wrote: "The true identity of one of the Salisbury suspects has been revealed to be a Russian Colonel. I want to thank all the people who are working so tirelessly on this case." However, that statement was subsequently deleted from Twitter. On 8 October 2018, the real identity of the suspect named by police as Alexander Petrov was revealed as Alexander Mishkin. On 22 November 2018, more CCTV footage, with the two suspects walking in Salisbury, was published by the police. On 19 December 2018, Mishkin (a.k.a. Petrov) and Chepiga (a.k.a. Boshirov) were added to the sanctions list of the United States Treasury Department, along with other 13 members of the GRU agency. On 6 January 2019, the Telegraph reported that the British authorities had established all the essential details of the assassination attempt, including the chain of command that leads up to Vladimir Putin. In February, a third GRU officer present in the UK at the time, Denis Sergeev, was identified. In September 2021, the BBC reported that Crown Prosecution Service had authorised charges against the three men but that formal charges could not be laid unless the men were arrested. 
The charges authorised against the three men are conspiracy to murder, attempted murder, causing grievous bodily harm, and use and possession of a chemical weapon. Response of the United Kingdom Within days of the attack, political pressure began to mount on Theresa May's government to take action against the perpetrators, and most senior politicians appeared to believe that the Russian government was behind the attack. The situation was additionally sensitive for Russia as Russian president Vladimir Putin was facing his fourth presidential election in mid-March, and Russia was to host the 2018 FIFA World Cup football competition in June. When giving a response to an urgent question from Tom Tugendhat, the chairman of the Foreign Affairs Select Committee of the House of Commons, who suggested that Moscow was conducting "a form of soft war against the West", Foreign Secretary Boris Johnson on 6 March said the government would "respond appropriately and robustly" if the Russian state was found to have been involved in the poisoning. UK Home Secretary Amber Rudd said on 8 March 2018 that the use of a nerve agent on UK soil was a "brazen and reckless act" of attempted murder "in the most cruel and public way". Prime Minister Theresa May told the House of Commons on 12 March that it was "highly likely" that Russia was responsible for the attack, and that either it had been a direct act by the Russian state against the UK or the Russian government had lost control of its nerve agent and allowed it to get into the hands of others. May also said that the UK government requested that Russia explain which of these two possibilities it was by the end of 13 March 2018. She also said: "[T]he extra-judicial killing of terrorists and dissidents outside Russia were given legal sanction by the Russian Parliament in 2006. And of course Russia used radiological substances in its barbaric assault on Mr Litvinenko." She said that the UK government would "consider in detail the response from the Russian State" and, in the event that there was no credible response, the government would "conclude that this action amounts to an unlawful use of force by the Russian State against the United Kingdom" and measures would follow. British media billed the statement as "Theresa May's ultimatum to Putin". On 13 March 2018, UK Home Secretary Amber Rudd ordered an inquiry by the police and security services into alleged Russian state involvement in 14 previous suspicious deaths of Russian exiles and businessmen in the UK. May unveiled a series of measures on 14 March 2018 in retaliation for the poisoning attack, after the Russian government refused to meet the UK's request for an account of the incident. One of the chief measures was the expulsion of 23 Russian diplomats, which she presented as "actions to dismantle the Russian espionage network in the UK", as these diplomats had been identified by the UK as "undeclared intelligence agents".
The BBC reported other responses, including: increased checks on private flights, customs and freight; the freezing of Russian state assets where there is evidence that they may be used to threaten the life or property of UK nationals or residents; plans to consider new laws to increase defences against "hostile state activity"; a boycott of the 2018 FIFA World Cup in Russia by ministers and the British royal family; the suspension of all high-level bilateral contacts between the UK and Russia; the retraction of the state invitation to Russia's foreign minister Sergey Lavrov; a new £48-million chemical weapons defence centre; and the offer of voluntary vaccinations against anthrax to British troops who are held at high readiness so that they are ready to deploy to areas where there is a risk of this type of attack. May said that some measures which the government planned could "not be shared publicly for reasons of national security". In his parliamentary response to May's statement, Jeremy Corbyn cast doubt on blaming the attack on Russia before the results of an independent investigation were available, which provoked criticism from some MPs, including members of his own party. A few days later, Corbyn was satisfied that the evidence pointed to Russia. He supported the expulsion but argued that a crackdown on money laundering by UK financial firms on behalf of Russian oligarchs would be a more effective measure against "the Putin regime" than the Conservative government's plans. Corbyn pointed to the pre-Iraq War judgements about Iraq and weapons of mass destruction as reason to be suspicious. The United Nations Security Council called an urgent meeting on 14 March 2018 on the initiative of the UK to discuss the Salisbury incident. According to the Russian mission's press secretary, the draft press statement introduced by Russia at the United Nations Security Council meeting was blocked by the UK. The UK and the US blamed Russia for the incident during the meeting, with the UK accusing Russia of breaking its obligations under the Chemical Weapons Convention. Separately, the White House fully supported the UK in attributing the attack to Russia, as well as the punitive measures taken against Russia. The White House also accused Russia of undermining the security of countries worldwide. The UK, and subsequently NATO, requested that Russia provide "full and complete disclosure" of the Novichok programme to the OPCW. On 14 March 2018, the government stated it would supply a sample of the substance used to the OPCW once UK legal obligations arising from the criminal investigation permitted. Boris Johnson said on 16 March that it was "overwhelmingly likely" that the poisoning had been ordered directly by Russian president Vladimir Putin, which marked the first time the British government accused Putin of personally ordering the poisoning. According to the UK Foreign Office, the UK attributed the attack to Russia based on Porton Down's determination that the chemical was Novichok, additional intelligence, and a lack of alternative explanations from Russia. The Defence Science and Technology Laboratory announced that it was "completely confident" that the agent used was Novichok, but that it still did not know the "precise source" of the agent. The UK had held an intelligence briefing with its allies in which it stated that the Novichok chemical used in the Salisbury poisoning was produced at a chemical facility in the town of Shikhany, Saratov Oblast, Russia.
Response of Russia Russian government On 6 March 2018 Andrey Lugovoy, deputy of Russia's State Duma and alleged killer of Alexander Litvinenko, in his interview with the Echo of Moscow said: "Something constantly happens to Russian citizens who either run away from Russian justice, or for some reason choose for themselves a way of life they call a change of their Motherland. So the more Britain accepts on its territory every good-for-nothing, every scum from all over the world, the more problems they will have." Russian Foreign Minister Sergey Lavrov on 9 March rejected Britain's claim of Russia's involvement in Skripal's poisoning and accused the United Kingdom of spreading propaganda. Lavrov said that Russia was "ready to cooperate" and demanded access to the samples of the nerve-agent which was used to poison Skripal. The request was rejected by the British government. Following Theresa May's 12 March statement in Parliament – in which she gave President Putin's administration until midnight of the following day to explain how a former spy was poisoned in Salisbury, otherwise she would conclude it was an "unlawful use of force" by the Russian state against the UK, Lavrov, talking to the Russian press on 13 March, said that the procedure stipulated by the Chemical Weapons Convention should be followed whereunder Russia was entitled to have access to the substance in question and 10 days to respond. On 17 March, Russia announced that it was expelling 23 British diplomats and ordered the closure of the UK's consulate in St Petersburg and the British Council office in Moscow, stopping all British Council activities in Russia. Russia has officially declared the poisoning to be a fabrication and a "grotesque provocation rudely staged by the British and U.S. intelligence agencies" to undermine the country. The Russian government and embassy of Russia in the United Kingdom repeatedly requested access to the Skripals, and sought to offer consular assistance. These requests and offers were respectively denied or declined. On 5 September 2018 Putin's Press Secretary, Dmitry Peskov, stated that Russia had not received any official request from Britain for assistance in identifying the two suspected Russian GRU military intelligence officers that Scotland Yard believes carried out the Skripal attack. The same day, the Foreign Ministry of Russia asserted that UK ambassador in Moscow, Laurie Bristow, had said that London would not provide Russia with the suspects' fingerprints, passport numbers, visa numbers, or any extra data. On 12 September 2018, Putin, while answering questions at the plenary meeting of the 4th Eastern Economic Forum in Russia's Far Eastern port city of Vladivostok said that the identities of both men London suspected of involvement in the Skripal case were known to the Russian authorities and that both were civilians, who had done nothing criminal. He also said he would like the men to come forward to tell their story. In a 13 September 2018 interview on the state-funded television channel RT, the accused claimed to be sports nutritionists who had gone to Salisbury merely to see the sights and look for nutrition products, saying that they took a second day-trip to Salisbury because slush had dampened their first one. 
On 26 September, the same day one of the suspects was identified as a colonel in the GRU, Lavrov urged the British authorities to cooperate in the investigation of the case, said Britain had given no proof of Russia's guilt, and suggested that Britain had something to hide. On 25 September, the FSB had begun searching for Ministry of Internal Affairs (MVD) officers who had provided journalists with foreign passport and flight information about the suspects. Russian state media For a few days following the poisoning, the story was discussed by web sites, radio stations and newspapers, but Russia's main state-run national TV channels largely ignored the incident. Eventually, on 7 March, anchor Kirill Kleimyonov of the state television station Channel One Russia's current affairs programme Vremya mentioned the incident by attributing the allegation to Boris Johnson. After speaking of Johnson disparagingly, Kleimyonov said that being "a traitor to the motherland" was one of the most hazardous professions and warned: "Don't choose England as a next country to live in. Whatever the reasons, whether you're a professional traitor to the motherland or you just hate your country in your spare time, I repeat, no matter, don't move to England. Something is not right there. Maybe it's the climate, but in recent years there have been too many strange incidents with a grave outcome. People get hanged, poisoned, they die in helicopter crashes and fall out of windows in industrial quantities." Kleimyonov's commentary was accompanied by a report highlighting previous suspicious Russia-related deaths in the UK, namely those of financier Alexander Perepilichny, businessman Boris Berezovsky, ex-FSB officer Alexander Litvinenko and radiation expert Matthew Puncher. Puncher had discovered that Litvinenko was poisoned by polonium; Puncher died in 2016, five months after a trip to Russia. Dmitry Kiselyov, a pro-Kremlin TV presenter, said on 11 March that the poisoning of Sergei Skripal, who was "completely wrung out and of little interest" as a source, was only advantageous to the British to "nourish their Russophobia" and organise a boycott of the FIFA World Cup scheduled for June 2018. Kiselyov referred to London as a "pernicious place for Russian exiles". The prominent Russian television hosts' warnings to Russians living in the UK were echoed by a similar direct warning from a senior member of the Russian Federation Council, Andrey Klimov, who said: "It's going to be very unsafe for you." Claims made by Russian media were fact-checked by UK media organisations. An interview with two men claiming to be the suspects named by the UK was aired on RT on 13 September 2018 with RT editor Margarita Simonyan. They said they were ordinary tourists who had wished to see Stonehenge, Old Sarum, and the "famous ... 123-metre spire" of Salisbury Cathedral. They also said that they "maybe approached Skripal's house, but we didn't know where it was located," and denied using Novichok, which they had allegedly transported in a fake perfume bottle, saying, "Is it silly for decent lads to have women's perfume? The customs are checking everything, they would have questions as to why men have women's perfume in their luggage." Although Simonyan avoided most questions about the two men's backgrounds, she hinted that they might be gay by asking, "All footage features you two together ... What do you have in common that you spend so much time together?" The New York Times interpreted the hint by noting that "The possibility that Mr. Petrov and Mr.
Boshirov could be gay would, for a Russian audience, immediately rule out the possibility that they serve as military intelligence officers." On 22 August 2022, the editor-in-chief of Kremlin-backed RT network, Margarita Simonyan, appeared to lend support to the suggestion that Russia had been involved in the poisoning, with her remark "I am sure we can find professionals willing to admire the famous spires in the vicinity of Tallinn" – seen as a reference to the agents' claims that they were sightseeing in Salisbury. Chemical weapons experts and intelligence Porton Down On 3 April 2018 Gary Aitkenhead, the chief executive of the Government's Defence Science and Technology Laboratory (Dstl) at Porton Down responsible for testing the substance involved in the case, said they had established the agent was Novichok or from that family but had been unable to verify the "precise source" of the nerve agent and that they had "provided the scientific info to Government who have then used a number of other sources to piece together the conclusions you have come to". Aitkenhead refused to comment on whether the laboratory had developed or maintains stocks of Novichok. He also dismissed speculations the substance could have come from Porton Down: "There is no way anything like that could have come from us or left the four walls of our facility." Aitkenhead stated the creation of the nerve agent was "probably only within the capabilities of a state actor", and that there was no known antidote. Former Russian scientists and intelligence officers Vil Mirzayanov, a former Soviet Union scientist who worked at the research institute that developed the Novichok class of nerve agents and lives in the United States, believes that hundreds of people could have been affected by residual contamination in Salisbury. He said that Sergei and Yulia Skripal, if poisoned with Novichok, would be left with debilitating health issues for the rest of their lives. He also criticised the response of Public Health England, saying that washing personal belongings was insufficient to remove traces of the chemical. Two other Russian scientists who now live in Russia and have been involved in Soviet-era chemical weapons development, Vladimir Uglev and Leonid Rink, were quoted as saying that Novichok agents had been developed in the 1970s–1980s within the programme that was officially titled FOLIANT, while the term Novichok referred to a whole system of chemical weapons use; they, as well as Mirzayanov, who published Novichok's formula in 2008, also noted that Novichok-type agents might be synthesised in other countries. In 1995, Leonid Rink received a one-year suspended sentence for selling Novichok agents to unnamed buyers, soon after the fatal poisoning of Russian banker Ivan Kivilidi by Novichok. A former KGB and FSB officer, Boris Karpichkov, who operated in Latvia in the 1990s and fled to the UK in 1998, told ITV's Good Morning Britain that on 12 February 2018, three weeks before the Salisbury attack and exactly on his birthday, he received a message over the burner phone from "a very reliable source" in the FSB telling Karpichkov that "something bad [wa]s going to happen with [him] and seven other people, including Mr. Skripal", whom he then knew nothing about. Karpichkov said he disregarded the message at the time, thinking it was not serious, as he had previously received such messages. According to Karpichkov, the FSB's list includes the names of Oleg Gordievsky and William Browder. 
Spiez Laboratory in Switzerland The Swiss Federal Intelligence Service announced on 14 September 2018 that two Russian spies had been caught in the Netherlands and expelled earlier in the year for attempting to hack into the Spiez Laboratory in the Swiss town of Spiez, a designated lab of the OPCW that had been tasked with confirming that the samples of poison collected in Salisbury were Novichok. The spies were discovered through a joint investigation by the Swiss, Dutch, and British intelligence services. The two men expelled were not the same as the Salisbury suspects. Response from other countries and organisations US government Following Theresa May's statement in Parliament, the US Secretary of State Rex Tillerson released a statement on 12 March that fully supported the stance of the UK government on the poisoning attack, including "its assessment that Russia was likely responsible for the nerve agent attack that took place in Salisbury". The following day, US President Donald Trump said that Russia was likely responsible. United States Ambassador to the United Nations Nikki Haley at the Security Council briefing on 14 March 2018 stated: "The United States believes that Russia is responsible for the attack on two people in the United Kingdom using a military-grade nerve agent". Following the United States National Security Council's recommendation, President Trump, on 26 March, ordered the expulsion of sixty Russian diplomats (referred to by the White House as "Russian intelligence officers") and the closure of the Russian consulate in Seattle. The action was cast as being "in response to Russia's use of a military-grade chemical weapon on the soil of the United Kingdom, the latest in its ongoing pattern of destabilising activities around the world". On 8 August, five months after the poisoning, the US government agreed to place sanctions on Russian banks and exports. On 6 August, the US State Department concluded that Russia was behind the poisoning. The sanctions, which are enforced under the Chemical and Biological Weapons Control and Warfare Elimination Act of 1991 (CBW Act), were planned to come into effect on 27 August. However, these sanctions were not implemented by the Trump administration. European Union and member states European Commission Vice-President Frans Timmermans argued for "unequivocal, unwavering and very strong" European solidarity with the United Kingdom when speaking to lawmakers in Strasburg on 13 March. Federica Mogherini, the High Representative of the Union for Foreign Affairs and Security Policy, expressed shock and offered the bloc's support. MEP and leader of the Alliance of Liberals and Democrats for Europe in the European Parliament Guy Verhofstadt proclaimed solidarity with the British people. During a meeting in the Foreign Affairs Council on 19 March, all foreign ministers of the European Union declared in a joint statement that the "European Union expresses its unqualified solidarity with the UK and its support, including for the UK's efforts to bring those responsible for this crime to justice." In addition, the statement also pointed out that "The European Union takes extremely seriously the UK Government's assessment that it is highly likely that the Russian Federation is responsible." 
Norbert Röttgen, a former federal minister in Angela Merkel's government and current chairman of Germany's parliamentary foreign affairs committee, said the incident demonstrated the need for Britain to review its open-door policy towards Russian capital of dubious origin. Sixteen EU countries expelled 33 Russian diplomats on 26 March. The European Union officially sanctioned 4 Russians that were suspected of carrying out the attack on 21 January 2019. The head of the GRU Igor Kostyukov and the deputy head Vladimir Alexseyev were both sanctioned along with Mishkin and Chepiga. The sanctions banned them from travelling to the EU and froze any assets they may have there along with banning any person or company in the EU providing any financial support to those sanctioned. Other non-EU countries Albania, Australia, Canada, Georgia, North Macedonia, Moldova, Norway and Ukraine expelled a total of 27 Russian diplomats who were believed to have been intelligence officers. Australia's Malcolm Turnbull said, "We responded with the solidarity we've always shown when Britain's freedoms have been challenged." The New Zealand Government also issued a statement supporting the actions, noting that it would have expelled any Russian intelligence agents who had been detected in the country. NATO NATO issued an official response to the attack on 14 March. The alliance expressed its deep concern over the first offensive use of a nerve agent on its territory since its foundation and said that the attack was in breach of international treaties. It called on Russia to fully disclose its research of the Novichok agent to the OPCW. Jens Stoltenberg, NATO Secretary General, announced on 27 March that NATO would be expelling seven Russian diplomats from the Russian mission to NATO in Brussels. In addition, 3 unfilled positions at the mission have been denied accreditation from NATO. Russia blamed the US for the NATO response. Joint responses The leaders of France, Germany, the United States and the United Kingdom released a joint statement on 15 March which supported the UK's stance on the incident, stating that it was "highly likely that Russia was responsible" and calling on Russia to provide complete disclosure to the OPCW concerning its Novichok nerve agent program. On 19 March, the European Union also issued a statement strongly condemning the attack and stating it "takes extremely seriously the UK Government's assessment that it is highly likely that the Russian Federation is responsible". On 6 September 2018, Canada, France, Germany and the United States issued a joint statement saying they had "full confidence" that the Salisbury attack was orchestrated by Russia's Main Intelligence Directorate and "almost certainly approved at a senior government level" and urged Russia to provide full disclosure of its Novichok programme to the OPCW. Expulsion of diplomats By the end of March 2018 a number of countries and other organisations expelled a total of more than 150 Russian diplomats in a show of solidarity with the UK. According to the BBC it was "the largest collective expulsion of Russian intelligence officers in history". The UK expelled 23 Russian diplomats on 14 March 2018. Three days later, Russia expelled an equal number of British diplomats and ordered the closure of the UK consulate in St. Petersburg and of the British Council in Russia. Nine countries expelled Russian diplomats on 26 March: along with 6 other EU nations, the US, Canada, Ukraine and Albania. 
The following day, several nations inside and outside of the EU, and NATO responded similarly. By 30 March, Russia expelled an equal number of diplomats of most nations who had expelled Russian diplomats. By that time, Belgium, Montenegro, Hungary and Georgia had also expelled one or more Russian diplomats. Additionally on 30 March, Russia reduced the size of the total UK mission's personnel in Russia to match that of the Russian mission to the UK. Bulgaria, Luxembourg, Malta, Portugal, Slovakia, Slovenia and the European Union itself have not expelled any Russian diplomats but have recalled their ambassadors from Russia for consultations. Furthermore, Iceland decided to diplomatically boycott the 2018 FIFA World Cup held in Russia. This cooperation of countries for the mass expulsions of Russian diplomats was used again just four years later in 2022 as the format for the diplomatic expulsions during the Russo-Ukrainian War. Notes 4 diplomats expelled. 3 pending applications declined. 7 expelled and 3 pending applications declined. Maximum delegation reduced by 10 (from 30 to 20). 48 Russian diplomats expelled from Washington D.C. and 12 expelled from New York. Aftermath Russia The failed poisoning of the Skripals became an embarrassment for Putin, and ended up causing severe damage to Russia's spying organisations. Once Bellingcat exposed the agents' names in September, Moscow then targeted interior ministry leaks that may have helped expose dozens of undercover operatives. It also prompted fury in the Kremlin, the result of which was a purge in the senior ranks of the GRU. Furthermore a number of botched attempts by the GRU were also revealed in October – the Sandworm cybercrime unit had attempted unsuccessfully to hack the UK Foreign Office and the Porton Down facility within a month of the poisonings. Another hack was attempted in April this time on the headquarters of the Organisation for the Prohibition of Chemical Weapons (OPCW) in the Netherlands. The OPCW was investigating the poisonings in the UK, as well the Douma chemical attack in Syria. Four Russian intelligence officers, believed to have been part of a 'GRU cleanup' unit for the earlier failed operations, travelled to The Hague on diplomatic passports. The incident was thwarted by Dutch military intelligence, who had been tipped off by British intelligence officials. The four tried – and failed – to destroy their equipment and were immediately put on a plane back to Moscow. Soon after these events Vladimir Putin's tone changed; at the Russian Energy Forum in Moscow he described Skripal as a "scum and a traitor to the motherland." The 2018 disclosure of the link on sequential passport numbers issued to GRU agents led to a number of other Russian agents fleeing the west and returning to Russia, including Maria Adela Kuhfeldt Rivera – real name Olga Kolobova, a deep cover agent in Naples. Another was Sergey Vladimirovich Cherkasov, arrested and jailed in Brazil in 2022. Russia's chief of military intelligence, Igor Korobov and his agency thus came under heavy criticism. Putin was angered over the identification of the agents and the botched failures, and in a meeting apparently scolded Korobov. Soon after this Korobov then collapsed at home in sudden "ill health", (claimed journalist Sergey Kanev) and died not long after in November after a "long illness". GRU defector Viktor Suvorov claimed that 'Korobov was murdered, and everyone in the GRU understood why'. 
Alexander Golts, a Russian military analyst, even admitted that agents 'got a bit too relaxed' and went on to say 'such sloppy work is the reality'. In February 2019, Bellingcat confirmed that a third GRU officer present in the UK at the time was identified as Denis Vyacheslavovich Sergeev, believed to hold the rank of major general in the GRU. The pattern of his communications while in the UK indicated that he liaised with superior officers in Moscow. In September 2021, Bellingcat revealed that "Russian authorities had taken the unusual measure of erasing any public records" of Sergeev's existence, as well as the other two main suspects in the Skripal poisoning. Sergeev is said to have held a more senior position than Chepiga and Mishkin and was likely in charge of coordinating the operation in Salisbury. In April 2021, Mishkin and Chepiga were named as having been involved in the 2014 Vrbětice ammunition warehouses explosions in the Czech Republic. The Moscow Times reported on a public opinion survey conducted later in the year of the poisonings. United Kingdom In the UK, the response to the poisonings was viewed as a success. Initially there were criticisms of the intelligence failures that had allowed the supposed GRU agents to gain access to the UK in the first place. After the Litvinenko poisoning, however, there had been calls for more robust action against Russia, should an event like it unfold. The Salisbury poisonings put that robustness into action, rallying significant solidarity from the West. In addition, the response also exposed many Russian intelligence officers, and British officials believe they did real damage to Russian intelligence operations, even if it was short term. Some of the emergency vehicles used in the response to the poisoning were buried in a landfill site near Cheltenham. In June 2019 it was revealed that emergency services had spent £891,000 on replacing and discarding contaminated vehicles. South Western ambulance service discarded eight vehicles, comprising three ambulances and five paramedic cars. Wiltshire Police destroyed a total of 16 vehicles at a cost of £460,000. On 13 September 2018, Chris Busby, a retired research scientist, who is regularly featured as an expert on the Russian government-controlled RT television network, was arrested after his home in Bideford was raided by police. Busby was an outspoken critic of the British Government's handling of the Salisbury poisoning. In one video he said: "Just to make it perfectly clear, there's no way that there's any proof that the material that poisoned the Skripals came from Russia." Busby was held for 19 hours under the Explosive Substances Act 1883, before being released with no further action. Following his release, Busby told the BBC he believed that the fact that two of the officers who had raided his property had felt unwell was explained by "psychological problems associated with their knowledge of the Skripal poisoning". On 16 September, fears of Novichok contamination flared up again after two people fell ill at a Prezzo restaurant, near the Zizzi restaurant where the Skripals had eaten before collapsing. The restaurant, a nearby pub, and surrounding streets were cordoned off, with some patrons under observation or unable to leave the area. The next day, the police said there was "nothing to suggest that Novichok" was the cause of the two people falling ill. 
However, on 19 September, one of the apparent victims, Anna Shapiro, claimed in The Sun newspaper that the incident had been an attempted assassination of her and her husband by Russia. This article was later removed from The Sun "for legal reasons" and the police began to investigate the incident as a "possible hoax" after the couple were discharged from hospital. In 2020, senior British officials told The Times that Sergei and Yulia Skripal had been given new identities and state support to start a new life. Both had relocated to New Zealand under the assumed identities. In May 2021 Nick Bailey, who continued to feel the effects of his poisoning and had retired early as a result, began personal injury litigation against Wiltshire Police; an undisclosed settlement was reached in April 2022. Recovery money As of 17 October 2018, a total of £7.5 million had been pledged by government in support of the city and to support businesses, boost tourism and cover unexpected costs. Wiltshire Council had spent or pledged £7,338,974 on recovery, and a further £500,000 "was in the pipeline": £733,381 towards unexpected closure and loss of footfall to businesses; £404,024 in revenue grants for 74 businesses; £99,891 in capital grants; £229,446 in business rate relief for 56 businesses; £210,491 on events to boost tourism; £500,000 from the Department of Digital, Culture, Media and Sport; £4,000 on dry cleaning or disposal of clothes believed to be contaminated by Novichok; £1 million towards keeping contaminated sites safe; £570,000 of recovery money to cover the costs of free parking and free park and ride services; and £4.1 million of the money pledged by the Home Office to cover Wiltshire Police's costs. A council commissioner said the total policing cost had exceeded £10 million. Having £6.6 million allocated for funding the police force, he said he hoped to "recoup the full amount from central government". Recognition of responders Deputy Chief Constable Paul Mills and Superintendent Dave Minty of Wiltshire Police were each awarded the Queen's Police Medal in the 2020 New Year Honours for their roles in responding to the incident. The combined Wiltshire Emergency Services received Wiltshire Life's 2019 "Pride of Wiltshire" award. Media depictions The Salisbury Poisonings, a three-part dramatisation of the events in Salisbury and Amesbury, with a focus on the response of local officials and the local community, was broadcast on BBC One in June 2020 and later released on Netflix in December 2021. 
See also 2018 Amesbury poisonings Intelligence agencies of Russia Assassination of Kim Jong-nam by North Korea with VX nerve agent Poisoning of Alexander Litvinenko putatively by Russian intelligence agents with Polonium-210 Poisoning of Alexei Navalny, Russian politician poisoned with Novichok Bulgarian umbrella used to assassinate Georgi Markov in London Lists of poisonings Russian spies in the Russo-Ukrainian War Diplomatic expulsions during the Russo-Ukrainian War Our Guys in Salisbury Notes References External links Report from the Russian Embassy to the UK, "Salisbury Unanswered Questions," 4 March 2019 "Salisbury & Amesbury Investigation – Counter Terrorism Policing", 5 September 2018 "Russian spy: What we know so far", BBC, 19 March 2018 "Amanda Erickson: The long, terrifying history of Russian dissidents being poisoned abroad", The Washington Post, 7 March 2018 "Joel Gunter: Sergei Skripal and the 14 deaths under scrutiny", bbc.com, 7 March 2018 Bellingcat's investigative page for the Chepiga identification – Skripal Suspect Boshirov Identified as GRU Colonel Anatoliy Chepiga 2018 controversies 2018 crimes in the United Kingdom 2018 in British politics 2018 in international relations Attacks in the United Kingdom in 2018 Crime in Wiltshire Diplomatic incidents Failed assassination attempts in the United Kingdom Forensic toxicology History of Salisbury March 2018 crimes in Europe March 2018 events in the United Kingdom Poisoning by drugs, medicaments and biological substances Russian intelligence operations Russia–United Kingdom relations Russia–United States relations Russian spies 2010s in Wiltshire Chemical weapons attacks State-sponsored terrorism Novichok agents
Poisoning of Sergei and Yulia Skripal
Chemistry,Environmental_science
10,489
24,999,087
https://en.wikipedia.org/wiki/Heat%20stroke
Heat stroke or heatstroke, also known as sun-stroke, is a severe heat illness that results in a body temperature greater than 40.0 °C (104.0 °F), along with red skin, headache, dizziness, and confusion. Sweating is generally present in exertional heatstroke, but not in classic heatstroke. The start of heat stroke can be sudden or gradual. Heatstroke is a life-threatening condition due to the potential for multi-organ dysfunction, with typical complications including seizures, rhabdomyolysis, or kidney failure. Heat stroke occurs because of high external temperatures and/or physical exertion. It usually occurs under preventable prolonged exposure to extreme environmental or exertional heat. However, certain health conditions can increase the risk of heat stroke, and patients, especially children, with certain genetic predispositions are vulnerable to heatstroke under relatively mild conditions. Preventive measures include drinking sufficient fluids and avoiding excessive heat. Treatment is by rapid physical cooling of the body and supportive care. Recommended methods include spraying the person with water and using a fan, putting the person in ice water, or giving cold intravenous fluids. Adding ice packs around a person is beneficial but does not by itself achieve the fastest possible cooling. Heat stroke results in more than 600 deaths a year in the United States. Rates increased between 1995 and 2015. Purely exercise-induced heat stroke, though a medical emergency, tends to be self-limiting (the patient stops exercising from cramp or exhaustion) and fewer than 5% of cases are fatal. Non-exertional heatstroke is a much greater danger: even the healthiest person, if left in a heatstroke-inducing environment without medical attention, will continue to deteriorate to the point of death, and 65% of the most severe cases are fatal even with treatment. Signs and symptoms Heat stroke generally presents with a hyperthermia of greater than 40 °C (104 °F) in combination with disorientation. There is generally a lack of sweating in classic heatstroke, while sweating is generally present in exertional heatstroke. Early symptoms of heat stroke include behavioral changes, confusion, delirium, dizziness, weakness, agitation, combativeness, slurred speech, nausea, and vomiting. In some individuals with exertional heatstroke, seizures and sphincter incontinence have also been reported. Additionally, in exertional heat stroke, the affected person may sweat excessively. Rhabdomyolysis, which is characterized by skeletal muscle breakdown with the products of muscle breakdown entering the bloodstream and causing organ dysfunction, is seen with exertional heatstroke. If treatment is delayed, patients could develop vital organ damage, unconsciousness and even organ failure. In the absence of prompt and adequate treatment, heatstroke can be fatal. Causes Heat stroke occurs when thermoregulation is overwhelmed by a combination of excessive metabolic production of heat (exertion), excessive heat in the physical environment, and insufficient or impaired heat loss, resulting in an abnormally high body temperature. Substances that inhibit cooling and cause dehydration such as alcohol, stimulants, medications, and age-related physiological changes predispose to so-called "classic" or non-exertional heat stroke (NEHS), most often in elderly and infirm individuals in summer situations with insufficient ventilation. 
Young children have age-specific physiologic differences that make them more susceptible to heat stroke, including an increased surface area to mass ratio (leading to increased environmental heat absorption), an underdeveloped thermoregulatory system, a decreased sweating rate and a decreased blood volume to body size ratio (leading to decreased compensatory heat dissipation by redirecting blood to the skin). Exertional heat stroke Exertional heat stroke (EHS) can happen in young people without health problems or medications, most often in athletes, outdoor laborers, or military personnel engaged in strenuous hot-weather activity or in first responders wearing heavy personal protective equipment. In environments that are not only hot but also humid, it is important to recognize that humidity reduces the degree to which the body can cool itself by perspiration and evaporation. For humans and other warm-blooded animals, excessive body temperature can disrupt enzymes regulating biochemical reactions that are essential for cellular respiration and the functioning of major organs. Cars Even when the outside temperature is mild, the temperature inside a car parked in direct sunlight can quickly climb far higher and reach dangerous levels. Young children or elderly adults left alone in a vehicle are at particular risk of succumbing to heat stroke. "Heat stroke in children and in the elderly can occur within minutes, even if a car window is opened slightly." As these groups of individuals may not be able to open car doors or to express discomfort verbally (or audibly, inside a closed car), their plight may not be immediately noticed by others in the vicinity. In 2018, 51 children in the United States died in hot cars, more than the previous high of 49 in 2010. Dogs are even more susceptible than humans to heat stroke in cars, as they cannot produce whole-body sweat to cool themselves. Leaving the dog at home with plenty of water on hot days is recommended instead, or, if a dog must be brought along, it can be tied up in the shade outside the destination and provided with a full water bowl. Pathophysiology The pathophysiology of heat stroke involves an intense heat overload followed by a failure of the body's thermoregulatory mechanisms. More specifically, heat stroke leads to inflammatory and coagulation responses that can damage the vascular endothelium and result in numerous platelet complications, including decreased platelet counts, platelet clumping, and suppressed platelet release from bone marrow. Growing evidence also suggests the existence of a second pathway underlying heat stroke that involves heat and exercise-driven endotoxemia. Although its exact mechanism is not yet fully understood, this model theorizes that extreme exercise and heat disrupt the intestinal barrier by making it more permeable and allowing lipopolysaccharides (LPS) from gram-negative bacteria within the gut to move into the circulatory system. High blood LPS levels can then trigger a systemic inflammatory response and eventually lead to sepsis and related consequences like blood coagulation, multi-organ failure, necrosis, and central nervous system dysfunction. Diagnosis Heat stroke is a clinical diagnosis, based on signs and symptoms. It is diagnosed based on an elevated core body temperature (usually above 40 degrees Celsius), a history of heat exposure or physical exertion, and neurologic dysfunction. 
However, high body temperature does not necessarily indicate that heat stroke is present, such as with people in high-performance endurance sports or with people experiencing fevers. In others with heatstroke, the core body temperature is not always above 40 degrees Celsius. Therefore, heat stroke is more accurately diagnosed based on a constellation of symptoms rather than just a specific temperature threshold. Tachycardia (or a rapid heart rate), tachypnea (rapid breathing) and hypotension (low blood pressure) are common clinical findings. Those with classic heat stroke usually have dry skin, whereas those with exertional heat stroke usually have wet or sweaty skin. A core body temperature (such as a rectal temperature) is the preferred method for monitoring body temperature in the diagnosis and management of heat stroke as it is more accurate than peripheral body temperatures (such as an oral or axillary temperatures). Other conditions which may present similarly to heat stroke include meningitis, encephalitis, epilepsy, drug toxicity, severe dehydration, and certain metabolic syndromes such as serotonin syndrome, neuroleptic malignant syndrome, malignant hyperthermia and thyroid storm. Prevention The risk of heat stroke can be reduced by observing precautions to avoid overheating and dehydration. Light, loose-fitting clothes will allow perspiration to evaporate and cool the body. Wide-brimmed hats in light colors help prevent the sun from warming the head and neck. Vents on a hat will help cool the head, as will sweatbands wetted with cool water. Strenuous exercise should be avoided during hot weather, especially in the sun peak hours. Strenuous exercise should also be avoided if a person is ill and exercise intensity should match one's fitness level. Avoiding confined spaces (such as automobiles) without air-conditioning or adequate ventilation. During heat waves and hot seasons further measures that can be taken to avoid classic heat stroke include staying in air conditioned areas, using fans, taking frequent cold showers, and increasing social contact and well being checks (especially for the elderly or disabled persons). In hot weather, people need to drink plenty of cool liquids and mineral salts to replace fluids lost from sweating. Thirst is not a reliable sign that a person needs fluids. A better indicator is the color of urine. A dark yellow color may indicate dehydration. Some measures that can help protect workers from heat stress include: Know signs/symptoms of heat-related illnesses. Block out direct sun and other heat sources. Drink fluids often, and before you are thirsty. Wear lightweight, light-colored, loose-fitting clothes. Avoid beverages containing alcohol or caffeine. Treatment Treatment of heat stroke involves rapid mechanical cooling along with standard resuscitation measures. The body temperature must be lowered quickly via conduction, convection, or evaporation. During cooling, the body temperature should be lowered to less than 39 degrees Celsius, ideally less than 38-38.5 degrees Celsius. In the field, the person should be moved to a cool area, such as indoors or to a shaded area. Clothing should be removed to promote heat loss through passive cooling. Conductive cooling methods such as ice-water immersion should also be used, if possible. Evaporative and convective cooling by a combination of cool water spray or cold compresses with constant air flow over the body, such as with a fan or air-conditioning unit, is also an effective alternative. 
In hospital mechanical cooling methods include ice water immersion, infusion of cold intravenous fluids, placing ice packs or wet gauze around the person, and fanning. Aggressive ice-water immersion remains the gold standard for exertional heat stroke and may also be used for classic heat stroke. This method may require the effort of several people and the person should be monitored carefully during the treatment process. Immersion should be avoided for an unconscious person but, if there is no alternative, it can be applied with the person's head above water. A rapid and effective cooling usually reverses concomitant organ dysfunction. Immersion in very cold water was once thought to be counterproductive by reducing blood flow to the skin and thereby preventing heat from escaping the body core. However, research has shown that this mechanism does not play a dominant role in the decrease in core body temperature brought on by cold water. Dantrolene, a muscle relaxant used to treat other forms of hyperthermia, is not an effective treatment for heat stroke. Antipyretics such as aspirin and acetaminophen are also not recommended as a means to lower body temperature in the treatment of heat stroke and their use may lead to worsening liver damage. A cardiopulmonary resuscitation (CPR) may be necessary if the person goes into cardiac arrest. The person's condition should be reassessed and stabilized by trained medical personnel. And the person's heart rate and breathing should be monitored. IV fluid resuscitation is usually needed for circulatory failure and organ dysfunction and is also indicated if rhabdomyolysis is present. In severe cases hemodialysis and ventilator support may be needed. Prognosis In elderly people who experience classic heat stroke the mortality exceeds 50%. The mortality rate in exertional heat stroke is less than 5%. It was long believed that heat strokes lead only rarely to permanent deficits and that convalescence is almost complete. However, following the 1995 Chicago heat wave, researchers from the University of Chicago Medical Center studied all 58 patients with heat stroke severe enough to require intensive care at 12 area hospitals between July 12 and 20, 1995, ranging in age from 25 to 95 years. Nearly half of these patients died within a year 21 percent before and 28 percent after release from the hospital. Many of the survivors had permanent loss of independent function; one-third had severe functional impairment at discharge, and none of them had improved after one year. The study also recognized that because of overcrowded conditions in all the participating hospitals during the crisis, the immediate care which is critical was not as comprehensive as it should have been. In rare cases, brain damage has been reported as a permanent sequela of severe heat stroke, most commonly cerebellar atrophy. Epidemiology Various aspects can affect the incidence of heat stroke, including sex, age, geographical location, and occupation. The incidence of heat stroke is higher among men; however, the incidence of other heat illnesses is higher among women. The incidence of other heat illnesses in women compared with men ranged from 1.30 to 2.89 per 1000 person-years versus 0.98 to 1.98 per 1000 person-years. Different parts of the world also have different rates of heat stroke. During the 2003 European heatwave more than 70,000 people died of heat related illnesses, and during the 2022 European heatwave, 61,672 people died from heat related illnesses. 
Society and culture In Slavic mythology, there is a personification of sunstroke, Poludnitsa (lady midday), a feminine demon clad in white that causes impairment or death to people working in the fields at midday. There was a traditional short break in harvest work at noon, to avoid attack by the demon. Antonín Dvořák's symphonic poem, The Noon Witch, was inspired by this tradition. Other animals Heatstroke can affect livestock, especially in hot, humid weather; or if the horse, cow, sheep or other is unfit, overweight, has a dense coat, is overworked, or is left in a horsebox in full sun. Symptoms include drooling, panting, high temperature, sweating, and rapid pulse. The animal should be moved to shade, drenched in cold water and offered water or electrolyte to drink. See also Hyperthermia Heat exhaustion Occupational heat stress References External links Heat stroke on MedicineNet.com Effects of external causes Medical emergencies Wikipedia medicine articles ready to translate Wikipedia emergency medicine articles ready to translate Thermoregulation
Heat stroke
Biology
3,027
45,577,911
https://en.wikipedia.org/wiki/Alliance%20for%20Space%20Development
The Alliance for Space Development is a space advocacy organization dedicated to influencing space policy towards the goal of permanent human settlements in space. The founding executive members of the Alliance are the National Space Society and the Space Frontier Foundation. Member organizations of the Alliance are the Lifeboat Foundation, Mars Foundation, Mars Society, The Moon Society, Space Development Foundation, Space Development Steering Committee, Space For Humanity, Space Renaissance USA, Space Tourism Society, Students for the Exploration and Development of Space, Students on Capitol Hill, Tea Party in Space, and Waypaver Foundation. The primary goals of the Alliance are to elevate the growth of the space industry, reduce the cost of accessing space, and to clearly define space settlement as the reason for sending humans to space. Objectives The 2015 objectives of the Alliance are to amend the NASA Space Act to make the development and settlement of space a national purpose, to reduce the cost of access to space, to complete support for the Commercial Crew Transportation program, and to ensure a gapless transition from the International Space Station to future space stations. The Alliance is also proposing a “Cheap Access to Space Act” to offer $3.5 billion in government prizes for the development of reusable launch vehicles. Activities The Alliance participates in the March Storm, a grassroots action to lobby Congress in Washington D.C., and the August Home District Blitz. Reception Paul Brower wrote that what was missing from the Alliance's goals was the objective of colonizing the moon. In response, Al Globus, an Alliance board member, wrote that the Alliance is focused on the technological development that must precede a successful space settlement, regardless of where that settlement is located. References External links Space advocacy organizations
Alliance for Space Development
Astronomy
342
1,967,506
https://en.wikipedia.org/wiki/Vehicle%20Information%20and%20Communication%20System
Vehicle Information and Communication System (VICS) is a technology used in Japan for delivering traffic and travel information to road vehicle drivers. It provides simple maps showing information about traffic jams, travel time, and road work, usually relevant to the vehicle's current location, with part of the information delivered through infrared beacons. It can be compared with the European TMC technology. VICS is transmitted using: FM multiplex broadcasting (uses DARC), with which the driver has to select road conditions on-screen manually; infrared beacons over Japan's highways and urban roads, with which road conditions are displayed automatically; and microwaves in the ISM band. It is an application of ITS. The VICS information can be displayed on the car navigation unit at three levels: Level-1: simple text data; Level-2: simple diagrams; Level-3: data superimposed on the map displayed on the navigation unit (e.g., traffic congestion data). Information transmitted generally includes traffic congestion data, data on availability of service areas (SA) and parking areas (PA), and information on road works and traffic collisions. Some advanced navigation units may use this data for route calculation (e.g., choosing a route to avoid congestion), or the driver may use his or her own discretion while acting on this information. See also G-Book Internavi CarWings External links VICS official website Information systems Warning systems Road transport in Japan Intelligent transportation systems Japanese inventions
Vehicle Information and Communication System
Technology,Engineering
294
17,153,924
https://en.wikipedia.org/wiki/Elastic%20pendulum
In physics and mathematics, in the area of dynamical systems, an elastic pendulum (also called spring pendulum or swinging spring) is a physical system where a piece of mass is connected to a spring so that the resulting motion contains elements of both a simple pendulum and a one-dimensional spring-mass system. For specific energy values, the system demonstrates all the hallmarks of chaotic behavior and is sensitive to initial conditions. At very low and very high energy, there also appears to be regular motion. The motion of an elastic pendulum is governed by a set of coupled ordinary differential equations. This behavior suggests a complex interplay between energy states and system dynamics. Analysis and interpretation The system is much more complex than a simple pendulum, as the properties of the spring add an extra dimension of freedom to the system. For example, when the spring compresses, the shorter radius causes the mass to move faster due to the conservation of angular momentum. It is also possible that the spring has a range that is overtaken by the motion of the pendulum, making it practically neutral to the motion of the pendulum. Lagrangian The spring has the rest length $l_0$ and can be stretched by a length $x$. The angle of oscillation of the pendulum is $\theta$. The Lagrangian is $L = T - V$, where $T$ is the kinetic energy and $V$ is the potential energy. Hooke's law gives the potential energy of the spring itself: $V_k = \tfrac{1}{2} k x^2$, where $k$ is the spring constant. The potential energy from gravity, on the other hand, is determined by the height of the mass. For a given angle and displacement, the potential energy is $V_g = -g m (l_0 + x)\cos\theta$, where $g$ is the gravitational acceleration. The kinetic energy is given by $T = \tfrac{1}{2} m v^2$, where $v$ is the velocity of the mass. To relate $v$ to the other variables, the velocity is written as a combination of a movement along and perpendicular to the spring: $T = \tfrac{1}{2} m \left(\dot{x}^2 + (l_0 + x)^2 \dot{\theta}^2\right)$. So the Lagrangian becomes: $L = \tfrac{1}{2} m \left(\dot{x}^2 + (l_0 + x)^2 \dot{\theta}^2\right) - \tfrac{1}{2} k x^2 + g m (l_0 + x)\cos\theta$. Equations of motion With two degrees of freedom, for $x$ and $\theta$, the equations of motion can be found using two Euler–Lagrange equations: $\frac{\partial L}{\partial x} - \frac{\mathrm{d}}{\mathrm{d}t}\frac{\partial L}{\partial \dot{x}} = 0$ and $\frac{\partial L}{\partial \theta} - \frac{\mathrm{d}}{\mathrm{d}t}\frac{\partial L}{\partial \dot{\theta}} = 0$. For $x$: $m\ddot{x} - m(l_0 + x)\dot{\theta}^2 + k x - g m\cos\theta = 0$, isolated: $\ddot{x} = (l_0 + x)\dot{\theta}^2 - \frac{k}{m} x + g\cos\theta$. And for $\theta$: $m(l_0 + x)^2\ddot{\theta} + 2 m(l_0 + x)\dot{x}\dot{\theta} + g m (l_0 + x)\sin\theta = 0$, isolated: $\ddot{\theta} = -\frac{g}{l_0 + x}\sin\theta - \frac{2\dot{x}}{l_0 + x}\dot{\theta}$. These can be further simplified by scaling length and time to dimensionless variables. Expressing the system in terms of the scaled variables results in nondimensional equations of motion, and the one remaining dimensionless parameter characterizes the system. The elastic pendulum is now described with two coupled ordinary differential equations. These can be solved numerically (a short numerical sketch is given below). Furthermore, one can use analytical methods to study the intriguing phenomenon of order-chaos-order in this system for various values of the dimensionless parameter and of the initial conditions. See also Double pendulum Duffing oscillator Pendulum (mathematics) Spring-mass system References Further reading External links Holovatsky V., Holovatska Y. (2019) "Oscillations of an elastic pendulum" (interactive animation), Wolfram Demonstrations Project, published February 19, 2019. Holovatsky V., Holovatskyi I., Holovatska Ya., Struk Ya. Oscillations of the resonant elastic pendulum. Physics and Educational Technology, 2023, 1, 10–17, https://doi.org/10.32782/pet-2023-1-2 http://journals.vnu.volyn.ua/index.php/physics/article/view/1093 Chaotic maps Dynamical systems Mathematical physics Pendulums
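The two coupled equations of motion above can be integrated directly with a standard ODE solver. The following is a minimal sketch, assuming NumPy and SciPy are available; the mass, spring constant, rest length and initial conditions are arbitrary illustrative values rather than anything taken from the article.

```python
# Minimal numerical integration of the elastic-pendulum equations of motion given above.
# Parameter values and initial conditions are illustrative choices, not values from the article.
import numpy as np
from scipy.integrate import solve_ivp

m, k, l0, g = 1.0, 40.0, 1.0, 9.81   # mass, spring constant, rest length, gravitational acceleration

def rhs(t, y):
    """State vector y = [x, x_dot, theta, theta_dot]; returns its time derivative."""
    x, x_dot, theta, theta_dot = y
    r = l0 + x                                   # instantaneous length of the spring
    x_ddot = r * theta_dot**2 - (k / m) * x + g * np.cos(theta)
    theta_ddot = -(g / r) * np.sin(theta) - (2.0 * x_dot / r) * theta_dot
    return [x_dot, x_ddot, theta_dot, theta_ddot]

y0 = [0.1, 0.0, 0.5, 0.0]                        # slightly stretched, displaced from vertical, at rest
sol = solve_ivp(rhs, (0.0, 20.0), y0, rtol=1e-9, atol=1e-9)

# Cartesian trajectory of the bob (pivot at the origin, y axis pointing upwards).
x, theta = sol.y[0], sol.y[2]
bob_x = (l0 + x) * np.sin(theta)
bob_y = -(l0 + x) * np.cos(theta)
print(bob_x[-1], bob_y[-1])
```

Varying the initial stretch and angle (and hence the total energy) is one way to explore the regular and chaotic regimes described above; plotting bob_x against bob_y traces the characteristic tangled trajectories of the swinging spring.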
Elastic pendulum
Physics,Mathematics
674
77,696,593
https://en.wikipedia.org/wiki/Association%20for%20Plant%20Breeding%20for%20the%20Benefit%20of%20Society
The Association for Plant Breeding for the Benefit of Society (APBREBES) is an international non-governmental organization founded in 2009 as a network to advocate on issues related to plant breeders' rights, peasants and farmers' rights, food sovereignty, and the sustainable management of agricultural biodiversity. APBREBES has the status of observer to the International Union for the Protection of New Varieties of Plants (UPOV). Background In 2009, seven NGOs joined to create APBREBES: the Center for International Environmental Law, Community Technology Development Trust, Development Fund (Norway), Local Initiatives for Biodiversity, Research and Development, Public Eye, Southeast Asia Regional Initiative for Community Empowerment, and Third World Network. The association's main areas of focus are the International Treaty on Plant Genetic Resources for Food and Agriculture (ITPGRFA) and the Convention on Biological Diversity. The network is also an important critic of the implementation of the UPOV system of plant breeders' rights. APBREBES emphasises equitable access to plant genetic resources and works to ensure that legal frameworks respect human rights and environmental sustainability. The association is active mostly at the UPOV, although it also occasionally works at the national or regional level, such as in Africa and elsewhere. In 2015, APBREBES developed guidelines for alternatives to the UPOV system for developing countries' plant variety protection laws. References Bioethics Botany Plant breeding organizations Plant genetics Biological patent law Intellectual property organizations Intergovernmental organizations established by treaty Organizations established in 2009 Organisations based in Geneva Seed associations Peasant food Food sovereignty Food security International law organizations
Association for Plant Breeding for the Benefit of Society
Chemistry,Technology,Biology
313
2,902,722
https://en.wikipedia.org/wiki/30%20Aquarii
30 Aquarii is a single star located about 301 light years away from the Sun in the zodiac constellation of Aquarius. 30 Aquarii is its Flamsteed designation. It is visible to the naked eye as a dim, orange-hued star with an apparent visual magnitude of 5.56. The star is moving further from the Earth with a heliocentric radial velocity of 40 km/s. This object is an aging G-type giant star with a stellar classification of G8 III, although Houk and Swift (1999) found a class of K1 IV. It is a red clump giant, which indicates it is on the horizontal branch and is generating energy through helium fusion at its core. The star is nearly two billion years old with a leisurely rotation rate, showing a projected rotational velocity of 1.6 km/s. It has double the mass of the Sun and has expanded to ten times the Sun's radius. The star is radiating 55 times the luminosity of the Sun from its swollen photosphere at an effective temperature of 4,944 K. References External links G-type giants K-type giants Horizontal-branch stars Aquarius (constellation) Durchmusterung objects Aquarii, 030 209396 108868 8401
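The radius, effective temperature, and luminosity quoted above are mutually consistent under the Stefan–Boltzmann relation. As a rough cross-check, taking a solar effective temperature of about 5,772 K (a standard reference value, not stated in the article): $\frac{L}{L_\odot} = \left(\frac{R}{R_\odot}\right)^2 \left(\frac{T_\mathrm{eff}}{T_{\mathrm{eff},\odot}}\right)^4 \approx 10^2 \times \left(\frac{4944}{5772}\right)^4 \approx 54$, in good agreement with the quoted 55 solar luminosities.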
30 Aquarii
Astronomy
266
54,611,624
https://en.wikipedia.org/wiki/Ophiocordyceps%20robertsii
Ophiocordyceps robertsii, known in New Zealand as vegetable caterpillar (Māori: āwhato or āwheto), is an entomopathogenic fungus belonging to the order Hypocreales (Ascomycota) in the family Ophiocordycipitaceae. It invades the caterpillars of leaf-litter dwelling moths and turns them into fungal mummies, sending up a fruiting spike above the forest floor to shed its spores. Caterpillars eat the spores whilst feeding on leaf litter to complete the fungal life cycle. Evidence of this fungus can be seen when small brown stems push through the forest floor: underneath will be the dried remains of the host caterpillar. This species was first thought by Europeans to be a worm or caterpillar that burrowed from the top of a tree to the roots, where it exited and then grew a shoot of the plant out of its head. It was the first fungus from New Zealand to be given a binomial name. Uses The parasitised caterpillar has been used by Māori as a food or to create an ink called ngārahu for traditional tā moko tattoos. The charred caterpillars were mixed with tree sap to make an almost black ink. Scientists suggest that the fungus produces antiseptic chemicals that can prevent infection. In the early 20th century, mummified caterpillars were sold to tourists as a curio. Genomics Ophiocordyceps robertsii has a genome that is relatively large for species in the Ophiocordycipitaceae family, estimated at between 95 and 103 million base pairs. This size is comparable to that of the related species Ophiocordyceps sinensis. Analysis of the genome sequence revealed that much of this size is due to repetitive DNA. The analysis also identified the mating-type locus, supporting a heterothallic lifecycle for the species, in which strains of the MAT1-1 and MAT1-2 types are required for mating. References External links Ophiocordyceps robertsii discussed in RNZ Critter of the Week, 21 July 2017 Fungi described in 1837 Fungi of New Zealand Ophiocordycipitaceae Taxa named by William Jackson Hooker Fungus species
Ophiocordyceps robertsii
Biology
467
13,246,451
https://en.wikipedia.org/wiki/Godwin%20Laboratory%2C%20University%20of%20Cambridge
The Godwin Laboratory is a research facility at the University of Cambridge. It was originally set up to investigate radiocarbon dating and its applications, and was one of the first laboratories to determine a radiocarbon calibration curve. The lab is named after the English scientist Harry Godwin. History With the late Professor Sir Nicholas Shackleton in charge, the focus of research shifted to marine isotope records, which document changes in the size of polar ice sheets and in temperature. This research helped to establish the Milankovitch Theory as the most plausible explanation of glacial/interglacial changes over the past million years, and it was continued in order to develop much more extensive geological timescales, covering the last 30 million years, on the basis of this hypothesis. Other areas researched by members of the laboratory include pollen records and tree rings as proxies for past climate. The laboratory changed principal allegiance from the Department of Plant Sciences to the Department of Earth Sciences around 1995. In 2005, after Nick Shackleton's retirement, the laboratory was incorporated into the building housing the Department of Earth Sciences, where it continues to operate. It is part of the inter-departmental Godwin Institute for Quaternary Research, a loose collection of Cambridge University research facilities and workers focused on research particularly addressing the history of the last 1.8 million years. References Institutions in the Department of Earth Sciences, University of Cambridge Geology of the United Kingdom Radiocarbon dating Research institutes in Cambridge
Godwin Laboratory, University of Cambridge
Physics,Chemistry
290
40,256
https://en.wikipedia.org/wiki/Trusted%20client
In computing, a trusted client is a device or program controlled by the user of a service, but with restrictions designed to prevent its use in ways not authorized by the provider of the service. That is, the client is a device that vendors trust and then sell to the consumers, whom they do not trust. Examples include video games played over a computer network or the Content Scramble System (CSS) in DVDs. Trusted client software is considered fundamentally insecure: once the security is broken by one user, the break is trivially copyable and available to others. As computer security specialist Bruce Schneier states, "Against the average user, anything works; there's no need for complex security software. Against the skilled attacker, on the other hand, nothing works." Trusted client hardware is somewhat more secure, but not a complete solution. Trusted clients are attractive to business as a form of vendor lock-in: sell the trusted client at a loss and charge more than would be otherwise economically viable for the associated service. One early example was radio receivers that were subsidized by broadcasters, but restricted to receiving only their radio station. Modern examples include video recorders being forced by law to include Macrovision copy protection, the DVD region code system and region-coded video game consoles. Trusted computing aims to create computer hardware which assists in the implementation of such restrictions in software, and attempts to make circumvention of these restrictions more difficult. See also Digital rights management Dongle Secure cryptoprocessor Trust References Clients (computing) Cybersecurity engineering
Trusted client
Technology,Engineering
320
8,105,109
https://en.wikipedia.org/wiki/Common%20Vulnerability%20Scoring%20System
The Common Vulnerability Scoring System (CVSS) is a technical standard for assessing the severity of vulnerabilities in computing systems. Scores are calculated based on a formula with several metrics that approximate ease and impact of an exploit. Scores range from 0 to 10, with 10 being the most severe. While many use only the CVSS Base score for determining severity, temporal and environmental scores also exist, to factor in availability of mitigations and how widespread vulnerable systems are within an organization, respectively. The current version of CVSS (CVSSv4.0) was released in November 2023. CVSS is not intended to be used as a method for patch management prioritization, but is used like that regardless. History Research by the National Infrastructure Advisory Council (NIAC) in 2003/2004 led to the launch of CVSS version 1 (CVSSv1) in February 2005, with the goal of being "designed to provide open and universally standard severity ratings of software vulnerabilities". This initial draft had not been subject to peer review or review by other organizations. In April 2005, NIAC selected the Forum of Incident Response and Security Teams (FIRST) to become the custodian of CVSS for future development. Feedback from vendors using CVSSv1 in production suggested there were "significant issues with the initial draft of CVSS". Work on CVSS version 2 (CVSSv2) began in April 2005 with the final specification being launched in June 2007. Further feedback resulted in work beginning on CVSS version 3 in 2012, ending with CVSSv3.0 being released in June 2015. Terminology The CVSS assessment measures three areas of concern: base metrics for qualities intrinsic to a vulnerability, temporal metrics for characteristics that evolve over the lifetime of vulnerability, and environmental metrics for vulnerabilities that depend on a particular implementation or environment. A numerical score is generated for each of these metric groups. A vector string (or simply "vector" in CVSSv2) represents the values of all the metrics as a block of text. Version 2 Complete documentation for CVSSv2 is available from FIRST. A summary is provided below. Base metrics Access Vector The access vector (AV) shows how a vulnerability may be exploited. Access Complexity The access complexity (AC) metric describes how easy or difficult it is to exploit the discovered vulnerability. Authentication The authentication (Au) metric describes the number of times that an attacker must authenticate to a target to exploit it. It does not include (for example) authentication to a network in order to gain access. For locally exploitable vulnerabilities, this value should only be set to Single or Multiple if further authentication is required after initial access. Impact metrics Confidentiality The confidentiality (C) metric describes the impact on the confidentiality of data processed by the system. Integrity The Integrity (I) metric describes the impact on the integrity of the exploited system. Availability The availability (A) metric describes the impact on the availability of the target system. Attacks that consume network bandwidth, processor cycles, memory, or any other resources affect the availability of a system. Calculations These six metrics are used to calculate the exploitability and impact sub-scores of the vulnerability. These sub-scores are used to calculate the overall base score. The metrics are concatenated to produce the CVSS Vector for the vulnerability. 
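The base-score arithmetic can be written out in a few lines. The following is a minimal sketch; the metric weights and equation constants (10.41, 20, 1.176) follow the published CVSSv2 specification, while the table and function names are our own. It reproduces the sub-scores used in the worked example that follows.

```python
# Sketch of the CVSS v2 base-score equations; weights and constants per the public v2 specification.
AV  = {"L": 0.395, "A": 0.646, "N": 1.0}     # Access Vector: Local, Adjacent network, Network
AC  = {"H": 0.35, "M": 0.61, "L": 0.71}      # Access Complexity: High, Medium, Low
AU  = {"M": 0.45, "S": 0.56, "N": 0.704}     # Authentication: Multiple, Single, None
CIA = {"N": 0.0, "P": 0.275, "C": 0.66}      # C/I/A impact: None, Partial, Complete

def cvss2_base(av, ac, au, c, i, a):
    impact = 10.41 * (1 - (1 - CIA[c]) * (1 - CIA[i]) * (1 - CIA[a]))
    exploitability = 20 * AV[av] * AC[ac] * AU[au]
    f_impact = 0.0 if impact == 0 else 1.176
    base = (0.6 * impact + 0.4 * exploitability - 1.5) * f_impact
    return round(impact, 1), round(exploitability, 1), round(base, 1)

# The buffer-overflow example described below, vector AV:N/AC:L/Au:N/C:P/I:P/A:C:
print(cvss2_base("N", "L", "N", "P", "P", "C"))   # -> (8.5, 10.0, 9.0)
```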
Example A buffer overflow vulnerability affects web server software that allows a remote user to gain partial control of the system, including the ability to cause it to shut down. The metrics are assessed as Access Vector: Network, Access Complexity: Low, Authentication: None, Confidentiality: Partial, Integrity: Partial, and Availability: Complete. This would give an exploitability sub-score of 10, and an impact sub-score of 8.5, giving an overall base score of 9.0. The vector for the base score in this case would be AV:N/AC:L/Au:N/C:P/I:P/A:C. The score and vector are normally presented together to allow the recipient to fully understand the nature of the vulnerability and to calculate their own environmental score if necessary. Temporal metrics The values of the temporal metrics change over the lifetime of the vulnerability, as exploits are developed, disclosed and automated and as mitigations and fixes are made available. Exploitability The exploitability (E) metric describes the current state of exploitation techniques or automated exploitation code. Remediation Level The remediation level (RL) of a vulnerability allows the temporal score of a vulnerability to decrease as mitigations and official fixes are made available. Report Confidence The report confidence (RC) of a vulnerability measures the level of confidence in the existence of the vulnerability and also the credibility of the technical details of the vulnerability. Calculations These three metrics are used in conjunction with the base score that has already been calculated to produce the temporal score for the vulnerability with its associated vector. The temporal score is the base score multiplied by the three temporal factors, rounded to one decimal place: TemporalScore = round_to_1_decimal(BaseScore × Exploitability × RemediationLevel × ReportConfidence). Example To continue with the example above, if the vendor was first informed of the vulnerability by a posting of proof-of-concept code to a mailing list, the initial temporal score would be calculated with Exploitability set to Proof-of-Concept, Remediation Level set to Unavailable, and Report Confidence set to Unconfirmed. This would give a temporal score of 7.3, with a temporal vector of E:P/RL:U/RC:UC (or a full vector of AV:N/AC:L/Au:N/C:P/I:P/A:C/E:P/RL:U/RC:UC). If the vendor then confirms the vulnerability, the score rises to 8.1, with a temporal vector of E:P/RL:U/RC:C. A temporary fix from the vendor would reduce the score back to 7.3 (E:P/RL:T/RC:C), while an official fix would reduce it further to 7.0 (E:P/RL:O/RC:C). As it is not possible to be confident that every affected system has been fixed or patched, the temporal score cannot reduce below a certain level based on the vendor's actions, and may increase if an automated exploit for the vulnerability is developed. Environmental metrics The environmental metrics use the base and current temporal score to assess the severity of a vulnerability in the context of the way that the vulnerable product or software is deployed. This measure is calculated subjectively, typically by affected parties. Collateral Damage Potential The collateral damage potential (CDP) metric measures the potential loss or impact on either physical assets such as equipment (and lives), or the financial impact upon the affected organisation if the vulnerability is exploited. Target Distribution The target distribution (TD) metric measures the proportion of vulnerable systems in the environment. Impact Subscore Modifier Three further metrics assess the specific security requirements for confidentiality (CR), integrity (IR) and availability (AR), allowing the environmental score to be fine-tuned according to the users' environment. Calculations The five environmental metrics are used in conjunction with the previously assessed base and temporal metrics to calculate the environmental score and to produce the associated environmental vector, as sketched below. 
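Both adjustments can be sketched briefly. The multipliers below follow the published CVSSv2 specification; variable and function names are illustrative, Python's round() stands in for the specification's round_to_1_decimal, and the environmental metric values are those of the banking example in the text that follows (CDP:MH/TD:H/CR:H/IR:H/AR:L).

```python
# Sketch of the CVSS v2 temporal and environmental equations, applied to the worked example above
# (base vector AV:N/AC:L/Au:N/C:P/I:P/A:C, base score 9.0). Multipliers per the public v2 spec.
E  = {"U": 0.85, "P": 0.90, "F": 0.95, "H": 1.0}   # Exploitability: Unproven .. High
RL = {"O": 0.87, "T": 0.90, "W": 0.95, "U": 1.0}   # Remediation Level: Official fix .. Unavailable
RC = {"UC": 0.90, "UR": 0.95, "C": 1.0}            # Report Confidence: Unconfirmed .. Confirmed

def temporal(base, e, rl, rc):
    return round(base * E[e] * RL[rl] * RC[rc], 1)

base = 9.0
print(temporal(base, "P", "U", "UC"))  # proof-of-concept, no fix, unconfirmed -> 7.3
print(temporal(base, "P", "U", "C"))   # vendor confirms the report            -> 8.1
print(temporal(base, "P", "T", "C"))   # temporary fix released                -> 7.3
print(temporal(base, "P", "O", "C"))   # official fix released                 -> 7.0

# Environmental score for the banking deployment described in the example that follows
# (CDP:MH/TD:H/CR:H/IR:H/AR:L, with the temporary fix in place).
CDP = {"N": 0.0, "L": 0.1, "LM": 0.3, "MH": 0.4, "H": 0.5}   # Collateral Damage Potential
TD  = {"N": 0.0, "L": 0.25, "M": 0.75, "H": 1.0}             # Target Distribution
REQ = {"L": 0.5, "M": 1.0, "H": 1.51}                        # CR / IR / AR requirement weights
C = I = 0.275   # Partial confidentiality and integrity impact (weights as in the base sketch)
A = 0.66        # Complete availability impact
exploitability = 20 * 1.0 * 0.71 * 0.704                     # AV:N, AC:L, Au:N

adj_impact = min(10.0, 10.41 * (1 - (1 - C * REQ["H"]) * (1 - I * REQ["H"]) * (1 - A * REQ["L"])))
adj_base = round((0.6 * adj_impact + 0.4 * exploitability - 1.5) * 1.176, 1)
adj_temporal = temporal(adj_base, "P", "T", "C")
env = round((adj_temporal + (10 - adj_temporal) * CDP["MH"]) * TD["H"], 1)
print(env)  # -> 8.2
```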
Example If the aforementioned vulnerable web server were used by a bank to provide online banking services, and a temporary fix was available from the vendor, then the environmental score could be assessed as: This would give an environmental score of 8.2, and an environmental vector of CDP:MH/TD:H/CR:H/IR:H/AR:L. This score is within the range 7.0-10.0, and therefore constitutes a critical vulnerability in the context of the affected bank's business. Criticism of Version 2 Several vendors and organizations expressed dissatisfaction with CVSSv2. Risk Based Security, which manages the Open Source Vulnerability Database, and the Open Security Foundation jointly published a public letter to FIRST regarding the shortcomings and failures of CVSSv2. The authors cited a lack of granularity in several metrics, which results in CVSS vectors and scores that do not properly distinguish vulnerabilities of different type and risk profiles. The CVSS scoring system was also noted as requiring too much knowledge of the exact impact of the vulnerability. Oracle introduced the new metric value of "Partial+" for Confidentiality, Integrity, and Availability, to fill perceived gaps in the description between Partial and Complete in the official CVSS specifications. Version 3 To address some of these criticisms, development of CVSS version 3 was started in 2012. The final specification was named CVSSv3.0 and released in June 2015. In addition to a Specification Document, a User Guide and Examples document were also released. Several metrics were changed, added, and removed. The numerical formulas were updated to incorporate the new metrics while retaining the existing scoring range of 0-10. Textual severity ratings of None (0), Low (0.1-3.9), Medium (4.0-6.9), High (7.0-8.9), and Critical (9.0-10.0) were defined, similar to the categories NVD defined for CVSSv2 that were not part of that standard. Changes from Version 2 Base metrics In the Base vector, the new metrics User Interaction (UI) and Privileges Required (PR) were added to help distinguish vulnerabilities that required user interaction or user or administrator privileges to be exploited. Previously, these concepts were part of the Access Vector metric of CVSSv2. UI can take the values None or Required; attacks that do not require logging in as a user are considered more severe. PR can take the values None, Low, or High; similarly, attacks requiring fewer privileges are more severe. The Base vector also saw the introduction of the new Scope (S) metric, which was designed to make clear which vulnerabilities may be exploited and then used to attack other parts of a system or network. These new metrics allow the Base vector to more clearly express the type of vulnerability being evaluated. The Confidentiality, Integrity, and Availability (C, I, A) metrics were updated to have scores consisting of None, Low, or High, rather than the None, Partial, and Complete of CVSSv2. This allows more flexibility in determining the impact of a vulnerability on CIA metrics. Access Complexity was renamed Attack Complexity (AC) to make clear that access privileges were moved to a separate metric. This metric now describes how repeatable exploit of this vulnerability may be; AC is High if the attacker requires perfect timing or other circumstances (other than user interaction, which is also a separate metric) which may not be easily duplicated on future attempts. 
Attack Vector (AV) saw the inclusion of a new metric value of Physical (P), to describe vulnerabilities that require physical access to the device or system to exploit. Temporal metrics The Temporal metrics were essentially unchanged from CVSSv2. Environmental metrics The Environmental metrics of CVSSv2 were completely removed and replaced with essentially a second Base score, known as the Modified vector. The Modified Base is intended to reflect differences within an organization or company compared to the world as a whole. New metrics to capture the importance of Confidentiality, Integrity, and Availability to a specific environment were added. Criticism of Version 3 In a blog post in September 2015, the CERT Coordination Center discussed limitations of CVSSv2 and CVSSv3.0 for use in scoring vulnerabilities in emerging technology systems such as the Internet of Things. Version 3.1 A minor update to CVSS was released on June 17, 2019. The goal of CVSSv3.1 was to clarify and improve upon the existing CVSSv3.0 standard without introducing new metrics or metric values, allowing for frictionless adoption of the new standard by scoring providers and scoring consumers alike. Usability was a prime consideration when making improvements to the CVSS standard. Several of the changes made in CVSSv3.1 improve the clarity of concepts introduced in CVSSv3.0, and thereby improve the overall ease of use of the standard. FIRST has used input from industry subject-matter experts to continue to enhance and refine CVSS to be increasingly applicable to the vulnerabilities, products, and platforms developed over the past 15 years and beyond. The primary goal of CVSS is to provide a deterministic and repeatable way to score the severity of a vulnerability across many different constituencies, allowing consumers of CVSS to use this score as input to a larger decision matrix of risk, remediation, and mitigation specific to their particular environment and risk tolerance. Updates to the CVSSv3.1 specification include clarification of the definitions and explanation of existing base metrics such as Attack Vector, Privileges Required, Scope, and Security Requirements. A new standard method of extending CVSS, called the CVSS Extensions Framework, was also defined, allowing a scoring provider to include additional metrics and metric groups while retaining the official Base, Temporal, and Environmental Metrics. The additional metrics allow industry sectors such as privacy, safety, automotive, healthcare, etc., to score factors that are outside the core CVSS standard. Finally, the CVSS Glossary of Terms has been expanded and refined to cover all terms used throughout the CVSSv3.1 documentation. Version 4.0 Version 4.0 was officially released in November 2023, and is available at FIRST. Among several clarifications, the most notable changes are the new base metric Attack Requirements, which complements the metric Attack Complexity with an assessment of what conditions at the target side are needed to exploit a vulnerability. Further, the Impact metrics are split into impact on the vulnerable system itself and impact on subsequent systems (this replaces the Scope metric from prior versions). The base metrics are now as follows. Attack Vector (AV): By which route can the vulnerability be exploited? [N] network, [A] adjacent (i.e., limited to direct connections), [L] local (e.g. logged in via SSH or at a keyboard), or [P] physical (e.g. manipulate or observe hardware).
Attack Complexity (AC): Are there any further countermeasures the attacker has to circumvent, and how hard is it to do so? [L] low, or [H] high (e.g. data execution prevention). Attack Requirements (AT): Are there any conditions necessary for an attack which the attacker cannot influence? [N] none, or [P] present (e.g. a race condition must be won, or the system is in a specific state). Privileges Required (PR): Is it required to have any privileges on the target system? [N] none (unauthenticated), [L] low (normal user), or [H] high (administrative access). User Interaction (UI): Does the (legitimate) user of the system need to do anything to make the attack possible? [N] none, [P] passive (e.g. accidentally visiting a malicious website), or [A] active (e.g. executing a malicious office macro). Vulnerable System Confidentiality Impact (VC): [N] none, [L] low, or [H] high. Vulnerable System Integrity Impact (VI): [N] none, [L] low, or [H] high. Vulnerable System Availability Impact (VA): [N] none, [L] low, or [H] high. Subsequent System Confidentiality Impact (SC): [N] none, [L] low, or [H] high. Subsequent System Integrity Impact (SI): [N] none, [L] low, or [H] high. Subsequent System Availability Impact (SA): [N] none, [L] low, or [H] high. In addition to these base metrics, there are optional metrics covering the public availability of an exploit, environment-specific threat modelling, system recovery, and other factors. Example Assume there is an SQL injection in an online web shop. The database user of the online shop software only has read access to the database. Further, the injection is in a view of the shop which is only visible to registered customers. The CVSS 4.0 base vector is as follows. AV:N as the vulnerability can be triggered over the web AC:L as SQL injections can be exploited reliably via scripts (assuming the online shop has no countermeasures). AT:N as the attack doesn't depend on specific system conditions PR:L as attackers need to be authenticated as a regular user, but no administrative rights are needed UI:N as no other users are involved VC:H as attackers can read all tables in the database VI:N as attackers have no write access VA:L as attackers might execute long queries on the database which temporarily render the database slower or unresponsive SC:N (we have no further information on subsequent systems) SI:N (we have no further information on subsequent systems) SA:L as we can expect other systems involved in order management and logistics to be affected by an unresponsive database This results in the vector AV:N/AC:L/AT:N/PR:L/UI:N/VC:H/VI:N/VA:L/SC:N/SI:N/SA:L Adoption Versions of CVSS have been adopted as the primary method for quantifying the severity of vulnerabilities by a wide range of organizations and companies, including: The National Vulnerability Database (NVD) The Open Source Vulnerability Database (OSVDB) CERT Coordination Center, which in particular makes use of CVSSv2 Base, Temporal and Environmental metrics See also Common Weakness Enumeration (CWE) Common Vulnerabilities and Exposures (CVE) Common Attack Pattern Enumeration and Classification (CAPEC) References External links The Forum of Incident Response and Security Teams (FIRST) CVSS site National Vulnerability Database (NVD) CVSS site Common Vulnerability Scoring System v2 Calculator Computer security standards Computer network security
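Returning to the web-shop example above, assembling a CVSS 4.0 base vector is mechanical once the eleven base metrics have been chosen. The sketch below simply validates the selections against the allowed values listed in this article and joins them in the order in which the metrics are defined; it makes no attempt to compute the numeric v4.0 score, which is defined by lookup tables in the FIRST specification, and the function name is an assumption for illustration.

```python
# Allowed values for the CVSS 4.0 base metrics, as listed above.
BASE_METRICS = {
    "AV": {"N", "A", "L", "P"},
    "AC": {"L", "H"},
    "AT": {"N", "P"},
    "PR": {"N", "L", "H"},
    "UI": {"N", "P", "A"},
    "VC": {"N", "L", "H"}, "VI": {"N", "L", "H"}, "VA": {"N", "L", "H"},
    "SC": {"N", "L", "H"}, "SI": {"N", "L", "H"}, "SA": {"N", "L", "H"},
}

def base_vector(**choices: str) -> str:
    """Validate the metric choices and join them into a v4.0 base vector string."""
    parts = []
    for metric, allowed in BASE_METRICS.items():
        value = choices[metric]
        if value not in allowed:
            raise ValueError(f"{metric} cannot take the value {value!r}")
        parts.append(f"{metric}:{value}")
    return "/".join(parts)

# The SQL-injection example from the article:
print(base_vector(AV="N", AC="L", AT="N", PR="L", UI="N",
                  VC="H", VI="N", VA="L", SC="N", SI="N", SA="L"))
# -> AV:N/AC:L/AT:N/PR:L/UI:N/VC:H/VI:N/VA:L/SC:N/SI:N/SA:L
```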
Common Vulnerability Scoring System
Technology,Engineering
3,719
1,082,946
https://en.wikipedia.org/wiki/Hydrodynamical%20helicity
In fluid dynamics, helicity is, under appropriate conditions, an invariant of the Euler equations of fluid flow, having a topological interpretation as a measure of linkage and/or knottedness of vortex lines in the flow. This was first proved by Jean-Jacques Moreau in 1961, and Moffatt derived it in 1969 without knowledge of Moreau's paper. This helicity invariant is an extension of Woltjer's theorem for magnetic helicity. Let u(x, t) be the velocity field and ω = ∇ × u the corresponding vorticity field. Under the following three conditions, the vortex lines are transported with (or 'frozen in') the flow: (i) the fluid is inviscid; (ii) either the flow is incompressible (∇ · u = 0), or it is compressible with a barotropic relation p = p(ρ) between pressure p and density ρ; and (iii) any body forces acting on the fluid are conservative. Under these conditions, any closed surface S whose normal vectors are orthogonal to the vorticity (that is, ω · n = 0 on S) is, like vorticity, transported with the flow. Let V be the volume inside such a surface. Then the helicity in V, denoted H, is defined by the volume integral H = ∫_V u · ω dV. For a localised vorticity distribution in an unbounded fluid, V can be taken to be the whole space, and H is then the total helicity of the flow. H is invariant precisely because the vortex lines are frozen in the flow and their linkage and/or knottedness is therefore conserved, as recognized by Lord Kelvin (1868). Helicity is a pseudo-scalar quantity: it changes sign under change from a right-handed to a left-handed frame of reference; it can be considered as a measure of the handedness (or chirality) of the flow. Helicity is one of the four known integral invariants of the Euler equations; the other three are energy, momentum and angular momentum. For two linked unknotted vortex tubes having circulations κ1 and κ2, and no internal twist, the helicity is given by H = ±2n κ1 κ2, where n is the Gauss linking number of the two tubes, and the plus or minus is chosen according as the linkage is right- or left-handed. For a single knotted vortex tube with circulation κ, then, as shown by Moffatt & Ricca (1992), the helicity is given by H = κ²(Wr + Tw), where Wr and Tw are the writhe and twist of the tube; the sum Wr + Tw is known to be invariant under continuous deformation of the tube. The invariance of helicity provides an essential cornerstone of the subject of topological fluid dynamics and magnetohydrodynamics, which is concerned with global properties of flows and their topological characteristics. Meteorology In meteorology, helicity corresponds to the transfer of vorticity from the environment to an air parcel in convective motion. Here the definition of helicity is simplified to use only the horizontal components of wind and vorticity, and to integrate only in the vertical direction, replacing the volume integral with a one-dimensional definite integral or line integral: H = ∫ V_h · ζ_h dz, where z is the altitude, V_h is the horizontal velocity, and ζ_h is the horizontal vorticity. According to this formula, if the horizontal wind does not change direction with altitude, H will be zero as V_h and ζ_h are perpendicular, making their scalar product nil. H is then positive if the wind veers (turns clockwise) with altitude and negative if it backs (turns counterclockwise). This helicity used in meteorology has units of energy per unit mass (m²/s²) and is thus interpreted as a measure of the energy transferred by the wind shear with altitude, including directional shear. This notion is used to predict the possibility of tornadic development in a thundercloud.
In this case, the vertical integration will be limited to below cloud tops (generally 3 km or 10,000 feet), and the horizontal wind will be replaced by the wind relative to the storm, obtained by subtracting the storm motion: SRH = ∫ (V_h − C) · ζ_h dz, where C is the cloud (storm) motion relative to the ground. Critical values of SRH (Storm Relative Helicity) for tornadic development, as researched in North America, are: SRH = 150-299 ... supercells possible with weak tornadoes according to the Fujita scale SRH = 300-499 ... very favourable to supercell development and strong tornadoes SRH > 450 ... violent tornadoes When calculated only below 1 km (about 3,300 feet), the cut-off value is 100. Helicity in itself is not the only ingredient of severe thunderstorms, and these values are to be taken with caution. That is why the Energy Helicity Index (EHI) has been created. It is the result of SRH multiplied by the CAPE (Convective Available Potential Energy) and then divided by a threshold CAPE: EHI = (CAPE × SRH) / 160,000. This incorporates not only the helicity but also the energy of the air parcel, and thus tries to eliminate weak potential for thunderstorms even in strong SRH regions. The critical values of EHI: EHI = 1 ... possible tornadoes EHI = 1-2 ... moderate to strong tornadoes EHI > 2 ... strong tornadoes Notes References Batchelor, G.K., (1967, reprinted 2000) An Introduction to Fluid Dynamics, Cambridge Univ. Press Ohkitani, K., "Elementary Account Of Vorticity And Related Equations". Cambridge University Press. January 30, 2005. Chorin, A.J., "Vorticity and Turbulence". Applied Mathematical Sciences, Vol 103, Springer-Verlag. March 1, 1994. Majda, A.J. & Bertozzi, A.L., "Vorticity and Incompressible Flow". Cambridge University Press; 1st edition. December 15, 2001. Tritton, D.J., "Physical Fluid Dynamics". Van Nostrand Reinhold, New York. 1977. Arfken, G., "Mathematical Methods for Physicists", 3rd ed. Academic Press, Orlando, FL. 1985. Moffatt, H.K. (1969) The degree of knottedness of tangled vortex lines. J. Fluid Mech. 35, pp. 117–129. Moffatt, H.K. & Ricca, R.L. (1992) Helicity and the Cǎlugǎreanu Invariant. Proc. R. Soc. Lond. A 439, pp. 411–429. Thomson, W. (Lord Kelvin) (1868) On vortex motion. Trans. Roy. Soc. Edin. 25, pp. 217–260. Fluid dynamics
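In practice the storm-relative helicity integral is evaluated from a discretised wind profile (a hodograph). The sketch below uses the standard discrete form of the integral, in which each layer contributes the signed area swept out by the storm-relative wind vector, and combines the result with CAPE to form the EHI; the wind profile, storm motion and CAPE value are made up purely for illustration, and the 160,000 threshold is the conventional constant rather than a value quoted in this article.

```python
def storm_relative_helicity(u, v, cx, cy):
    """Discrete SRH (m^2/s^2) from horizontal wind components u[k], v[k] (m/s)
    sampled from the ground upward, and storm motion (cx, cy).
    Each layer contributes the signed area swept by the storm-relative wind."""
    srh = 0.0
    for k in range(len(u) - 1):
        srh += (u[k + 1] - cx) * (v[k] - cy) - (u[k] - cx) * (v[k + 1] - cy)
    return srh

def energy_helicity_index(cape, srh, threshold=160_000.0):
    """EHI = CAPE * SRH / threshold (threshold in m^4/s^4)."""
    return cape * srh / threshold

# Illustrative (made-up) veering wind profile and storm motion:
u = [0.0, 5.0, 10.0, 15.0]   # eastward wind component at successive levels
v = [5.0, 8.0, 9.0, 8.0]     # northward wind component at successive levels
srh = storm_relative_helicity(u, v, cx=7.0, cy=4.0)
print(round(srh, 1), round(energy_helicity_index(cape=2000.0, srh=srh), 2))
# prints 76.0 0.95
```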
Hydrodynamical helicity
Chemistry,Engineering
1,355
25,935,889
https://en.wikipedia.org/wiki/Archaeometallurgical%20slag
Archaeometallurgical slag is slag discovered and studied in the context of archaeology. Slag, the byproduct of iron-working processes such as smelting or smithing, is left at the iron-working site rather than being moved away with the product. As it weathers well, it is readily available for study. The size, shape, chemical composition and microstructure of slag are determined by features of the iron-working processes used at the time of its formation. Overview The ores used in ancient smelting processes were rarely pure metal compounds. Impurities were removed from the ore through the process of slagging, which involves adding heat and chemicals. Slag is the material in which the impurities from ores (known as gangue), as well as furnace lining and charcoal ash, collect. The study of slag can reveal information about the smelting process used at the time of its formation. The finding of slag is direct evidence of smelting having occurred in that place as slag was not removed from the smelting site. Through slag analysis, archaeologists can reconstruct ancient human activities concerned with metal work such as its organization and specialization. The contemporary knowledge of slagging gives insights into ancient iron production. In a smelting furnace, up to four different phases might co-exist. From the top of the furnace to the bottom, the phases are slag, matte, speiss, and liquid metal. Slag can be classified as furnace slag, tapping slag or crucible slag depending on the mechanism of production. The slag has three functions. The first is to protect the melt from contamination. The second is to accept unwanted liquid and solid impurities. Finally, slag can help to control the supply of refining media to the melt. These functions are achieved if the slag has a low melting temperature, low density and high viscosity which ensure a liquid slag that separates well from the melting metal. Slag should also maintain its correct composition so that it can collect more impurities and be immiscible in the melt. Through chemical and mineralogical analysis of slag, factors such as the identity of the smelted metal, the types of ore used and technical parameters such as working temperature, gas atmosphere and slag viscosity can be learned. Slag formation Natural iron ores are mixtures of iron and unwanted impurities, or gangue. In ancient times, these impurities were removed by slagging. Slag was removed by liquation, that is, solid gangue was converted into a liquid slag. The temperature of the process was high enough for the slag to exist in its liquid form. Smelting was conducted in various types of furnaces. Examples are the bloomery furnace and the blast furnace. The condition in the furnace determines the morphology, chemical composition and the microstructure of the slag. The bloomery furnace produced iron in a solid state. This is because the bloomery process was conducted at a temperature lower than the melting point of iron metal. Carbon monoxide from the incomplete combustion of charcoal slowly diffused through the hot iron oxide ore, converting it to iron metal and carbon dioxide. Blast furnaces were used to produce liquid iron. The blast furnace was operated at higher temperatures and at a greater reducing condition than the bloomery furnace. A greater reducing environment was achieved by increasing the fuel to ore ratio. More carbon reacted with the ore and produced a cast iron rather than solid iron. Also, the slag produced was less rich in iron. 
A different process was used to make "tapped" slag. Here, only charcoal was added to the furnace. It reacted with oxygen and generated carbon monoxide, which reduced the iron ore to iron metal. The liquefied slag separated from the ore and was removed through the tapping arch of the furnace wall. In addition, the flux (purifying agent), the charcoal ash and the furnace lining contributed to the composition of the slag. Slag may also form during smithing and refining. The product of the bloomery process is a heterogeneous bloom with entrapped slag. Smithing is necessary to cut up and remove the trapped slag by reheating, softening the slag and then squeezing it out. On the other hand, refining is needed for the cast iron produced in the blast furnace. By re-melting the cast iron in an open hearth, the carbon is oxidized and removed from the iron. Liquid slag is formed and removed in this process. Slag analysis The analysis of slag is based on its shape, texture, isotopic signature, and chemical and mineralogical characteristics. Analytical tools such as the optical microscope, the scanning electron microscope (SEM), X-ray fluorescence (XRF), X-ray diffraction (XRD) and inductively coupled plasma mass spectrometry (ICP-MS) are widely employed in the study of slag. Macro-analysis The first step in the investigation of archaeometallurgical slag is the identification and macro-analysis of slag in the field. Physical properties of slag such as shape, colour, porosity and even smell are used to make a primary classification, to ensure that representative samples from slag heaps are obtained for future micro-analysis. For example, tap slag usually has a wrinkled upper face and a flat lower face due to contact with the soil. Furthermore, the macro-analysis of slag heaps can provide an estimated total weight, which in turn can be used to determine the scale of production at a particular smelting location. Bulk chemical analysis The chemical composition of slag can reveal much about the smelting process. XRF is the most commonly used tool for analysing the chemical composition of slag. Through chemical analysis, the composition of the charge, the firing temperature, the gas atmosphere and the reaction kinetics can be determined. Ancient slag composition is usually a quaternary eutectic system CaO-SiO2-FeO-Al2O3, often simplified to the ternary system CaO-SiO2-FeO, giving a low and uniform melting point. In some circumstances, the eutectic system was created according to the proportion of silicates to metal oxides in the gangue, together with the type of ore and the furnace lining. In other instances, a flux was required to achieve the correct system. The melting temperature of slag can be determined by plotting its chemical composition in a ternary plot. The viscosity of slag can be estimated from its chemical composition by means of an empirical index of viscosity. With recent advances in rotational viscometry techniques, viscosity measurements of iron oxide slags are also widely undertaken. Coupled with phase equilibria studies, these analyses provide a better understanding of the physico-chemical behaviour of slags at high temperatures. In the early stages of smelting, the separation between melting metal and slag is not complete. Hence, the main, minor and trace elements of the metal in the slag can be indicators of the type of ore used in the smelting process. Mineralogical analysis The optical microscope, scanning electron microscope, X-ray diffraction and petrographic analysis can be used to determine the types and distribution of minerals in slag.
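A hedged sketch of the kind of oxide arithmetic used when interpreting bulk (for example XRF) slag analyses is given below. It normalises an analysis to 100 wt% and computes the molar ratio of divalent metal oxides to silica, the MeO : SiO2 ratio used in the mineralogical classification discussed below; the sample composition, the function names and the choice of oxides grouped as "MeO" are assumptions for illustration only, not values or formulas from the studies cited in this article.

```python
# Illustrative oxide arithmetic for bulk slag analyses.
MOLAR_MASS = {"FeO": 71.84, "CaO": 56.08, "MgO": 40.30, "MnO": 70.94,
              "SiO2": 60.08, "Al2O3": 101.96}  # g/mol, standard values

def normalise(wt_percent):
    """Rescale an oxide analysis so the reported components sum to 100 wt%."""
    total = sum(wt_percent.values())
    return {oxide: 100.0 * w / total for oxide, w in wt_percent.items()}

def meo_sio2_molar_ratio(wt_percent):
    """Molar ratio of divalent metal oxides (FeO, CaO, MgO, MnO) to silica."""
    moles = {ox: w / MOLAR_MASS[ox] for ox, w in wt_percent.items() if ox in MOLAR_MASS}
    meo = sum(moles.get(ox, 0.0) for ox in ("FeO", "CaO", "MgO", "MnO"))
    return meo / moles["SiO2"]

# Invented XRF analysis of an iron-rich slag (wt%):
xrf = {"FeO": 52.0, "SiO2": 24.0, "CaO": 6.0, "MgO": 1.5, "MnO": 0.5, "Al2O3": 5.0}
print(round(meo_sio2_molar_ratio(normalise(xrf)), 2))  # ~2 : 1 suggests a fayalitic slag
```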
The minerals present in the slag are good indicators of the gas atmosphere in the furnace, the cooling rate of the slag and the homogeneity of the slag. The type of ore and flux used in the smelting process can be determined if there are elements of un-decomposed charge or even metal pills trapped in the slag. Slag minerals are classified as silicates, oxides and sulfides. Bachmann classified the main silicates in slag according to the ratio between metal oxides and silica. Ratio MeO : SiO2 silicate examples 2 : 1 fayalite 2 : 1 monticellite 1.5 : 1 melilite 1 : 1 pyroxene Fayalite (Fe2SiO4) is the most common mineral found in ancient slag. By studying the shape of the fayalite, the cooling rates of the slag can be roughly estimated. Fayalite reacts with oxygen to form magnetite: 3Fe2SiO4 + O2= 2FeO·Fe2O3 + 3SiO2 Therefore, the gas atmosphere in the furnace can be calculated from the ratio of magnetite to fayalite in the slag. The presence of metal sulfides suggests that a sulfidic ore has been used. Metal sulfides survive the oxidizing stage before smelting and therefore may also indicate a multi-stage smelting process. When fayalite is replete with CaO, monticellite and pyroxene form. They are an indicator of a high calcium content in the ore. Lead isotope analysis Lead isotope analysis is a technique for determining the source of ore in ancient smelting. Lead isotope composition is a signature of ore deposits and varies very little throughout the whole deposit. Also, lead isotope composition is unchanged in the smelting process. The amount of each of the four stable isotopes of lead are used in the analysis. They are 204Pb, 206Pb, 207Pb and 208Pb. Ratios: 208Pb/207Pb, 207Pb/206Pb and 206Pb/204Pb are measured by mass spectrometry. Apart from 204Pb, the lead isotopes are all products of the radioactive decay of uranium and thorium. When ore is deposited, uranium and thorium are separated from the ore. Thus, deposits formed in different geological periods will have different lead isotope signatures. 238U →206Pb 235U →207Pb 232Th→208Pb For example, Hauptmann performed lead isotope analysis on slags from Faynan, Jordan. The resulting signature was the same as that from ores from the dolomite, limestone and shale deposits in the Wadi Khalid and Wadi Dana areas of Jordan. Physical dating Ancient slag is difficult to date. It has no organic material with which to perform radiocarbon dating. There are no cultural artifacts like pottery shards in the slag with which to date it. Direct physical dating of slag through thermoluminescence dating could be a good method to solve this problem. Thermoluminescence dating is possible if the slag contains crystal elements such as quartz or feldspar. However, the complex composition of slag can make this technique difficult unless the crystal elements can be isolated. See also Archaeometallurgy Bog iron Iron metallurgy in Africa Iron Age References Archaeometallurgy History of metallurgy
Archaeometallurgical slag
Chemistry,Materials_science
2,246
42,494,206
https://en.wikipedia.org/wiki/S%C3%A9rgio%20Valle%20Duarte
Sergio Valle Duarte (born September 26, 1954) is a Brazilian multimedia artist and fine-art photographer. Biography Self-taught, he lives and works in São Paulo. Between 1972 and 1974, he worked as an actor in television advertisements for Campari and Nestlé. Due to the military dictatorship in Brazil, in 1976 he moved to London where he worked as assistant to Rex Features International Photographic Press Agency. As freelance photographer, he followed pop music groups The Who, Tangerine Dream, Genesis, Deep Purple, and ZZ Top. In 1977 Brazilian magazine Geração Pop (Editora Abril) featured a series of pictures he made in London of The Rolling Stones. Soon after, between Europe and South America, he collaborated with a range of magazines, Interview, Playboy, Vogue, Sony Style, (1978–1990). In those years he joined The Image Bank, Getty Images (1980–2005) and was featured in photography art magazines Collector's Photography U.S.A., Zoom France, Special Bresil, Zoom Italy, Newlook France, Newlook USA, and Playboy (Brazil). As Multimedia artist, since 1970, he participated in the exhibition New Media Art Multimedia 70/80 with the triptych "Video Oil" at the Armando Alvares Penteado Foundation, curated by Deysi Piccinini, also the exhibition The plot of Taste another look at the daily, at the Julio Plaza installation Electronic Amusement with the project "Video Hypnosis" at the Biennial Foundation, São Paulo, 1985 and The First Quadrienal de Fotografias, curated by Paulo Klein at Museu de Arte Moderna de São Paulo, 1985. Duarte evolved his work adding new technologies and techniques with digital images, electrophotography, Xerox art conceptualizing artistically the reading of DNA and also in the future, the writing of DNA. To his portraits he sewed strands of hair of the models to allow them a future cloning. The model Gianne Albertoni is a part of the series that is featured in the permanent collection of museums in Europe and South America. The series is denominated by the artist as "Eletrografias e Fotografias com Fios de Cabelo para Futura Clonagem" (Electrophotographs and Photographs with Human Hair for Future Cloning), BioArt. Duarte is inspired by the surrealist tradition and the originality of his work resides in the fantastic colors and in the richness of details that he uses. Irreverent, but never dramatic, with a playful irony, Duarte's works are constantly moving, dancing, flying, stretching, as if they are to expand out of the frame. During the 1980s, he befriended the Italian artist and philosopher Joseph Pace, founder in Paris of Filtranisme, a neo-existential philosophical and artistic current, joining, in 1990, the enlarged "filtranistes" group. Due to a leak in the roof of his artist studio at Spring Street during a summer storm in the late 1990s, much of his work was destroyed; it is rare to find analog works before this period. He authenticates his works with a thumbprint. Duarte focuses his personal expression interpreting freely sacred and profane themes. From 2005 to 2015 he collaborated as curator for Brazil for the Florence Biennale and for the Padua Art Fair. 
Collections São Paulo Museum of Modern Art, Brazil Itaú Cultural, São Paulo, Brazil Museum of Modern Art, Rio de Janeiro, Rio de Janeiro, Brazil Yokohama Museum of Art, Yokohama, Japan Musée de l'Élysée, Lausanne, Switzerland Museum für Fotokopie, Mülheim, Germany Auer Photo Foundation, Geneva, Switzerland Museum Afro Brasil, São Paulo, Brazil Musée Français de la Photographie, Bievres, France Museum of Art of the Parliament of São Paulo, Brazil Gallery Selected bibliography Robert Louit " Portifolio Revista Zoom Internacional", 1985, edição 121, pp. 26–31, France Daysi Peccinini – "Arte Meios Multimeios 70/80" FAAP – Projeto Video Oil,1985,(Brazil). Paola Sammartano " Portifolio Revista Zoom Internacional" 1995, pp. 62–67, (Italy). Tadeu Chiarelli "Catalog geral do acervo do Museu de Arte Moderna de São Paulo", 2002, pp. 85–89, (Brazil). Coleção Joaquim Paiva, "Visões e Alumbramentos" Museum of Modern Art, Rio de Janeiro", 2002, (Brazil). Eduardo Bueno "São Paulo 450 anos em 24 horas", Bueno e Bueno 2004, pp. 21–23, 197, (Brazil). References 1954 births Living people 20th-century Brazilian artists 20th-century Brazilian male artists 20th-century photographers Multimedia artists Fine art photographers BioArtists Xerox artists Brazilian photographers
Sérgio Valle Duarte
Technology
1,030
41,358,691
https://en.wikipedia.org/wiki/Hydrodynamic%20quantum%20analogs
In physics, the hydrodynamic quantum analogs refer to experimentally-observed phenomena involving bouncing fluid droplets over a vibrating fluid bath that behave analogously to several quantum-mechanical systems. The experimental evidence for diffraction through slits has been disputed, however, though the diffraction pattern of walking droplets is not exactly the same as in quantum physics, it does appear clearly in the high memory parameter regime (at high forcing of the bath) where all the quantum-like effects are strongest. A droplet can be made to bounce indefinitely in a stationary position on a vibrating fluid surface. This is possible due to a pervading air layer that prevents the drop from coalescing into the bath. For certain combinations of bath surface acceleration, droplet size, and vibration frequency, a bouncing droplet will cease to stay in a stationary position, but instead “walk” in a rectilinear motion on top of the fluid bath. Walking droplet systems have been found to mimic several quantum mechanical phenomena including particle diffraction, quantum tunneling, quantized orbits, the Zeeman Effect, and the quantum corral. Besides being an interesting means to visualise phenomena that are typical of the quantum-mechanical world, floating droplets on a vibrating bath have interesting analogies with the pilot wave theory, one of the many interpretations of quantum mechanics in its early stages of conception and development. The theory was initially proposed by Louis de Broglie in 1927. It suggests that all particles in motion are actually borne on a wave-like motion, similar to how an object moves on a tide. In this theory, it is the evolution of the carrier wave that is given by the Schrödinger equation. It is a deterministic theory and is entirely nonlocal. It is an example of a hidden variable theory, and all non-relativistic quantum mechanics can be accounted for in this theory. The theory was abandoned by de Broglie in 1932, gave way to the Copenhagen interpretation, but was revived by David Bohm in 1952 as De Broglie–Bohm theory. The Copenhagen interpretation does not use the concept of the carrier wave or that a particle moves in definite paths until a measurement is made. Physics of bouncing and walking droplets History Floating droplets on a vibrating bath were first described in writing by Jearl Walker in a 1978 article in Scientific American. In 2005, Yves Couder and his lab were the first to systematically study the dynamics of bouncing droplets and discovered most of the quantum mechanical analogs. John Bush and his lab expanded upon Couder's work and studied the system in greater detail. In 2015 three separate groups, including John Bush, attempted to reproduce the effect and were unsuccessful. Stationary bouncing droplet A fluid droplet can float or bounce over a vibrating fluid bath because of the presence of an air layer between the droplet and the bath surface. The behavior of the droplet depends on the acceleration of the bath surface. Below a critical acceleration, the droplet will take successively smaller bounces before the intervening air layer eventually drains from underneath, causing the droplet to coalesce. Above the bouncing threshold, the intervening air layer replenishes during each bounce so the droplet never touches the bath surface. Near the bath surface, the droplet experiences equilibrium between inertial forces, gravity, and a reaction force due to the interaction with the air layer above the bath surface. 
This reaction force serves to launch the droplet back above the air like a trampoline. Molacek and Bush proposed two different models for the reaction force. Walking droplet For a small range of frequencies and drop sizes, a fluid droplet on a vibrating bath can be made to “walk” on the surface if the surface acceleration is sufficiently high (but still below the Faraday instability). That is, the droplet does not simply bounce in a stationary position but instead wanders in a straight line or in a chaotic trajectory. When a droplet interacts with the surface, it creates a transient wave that propagates from the point of impact. These waves usually decay, and stabilizing forces keep the droplet from drifting. However, when the surface acceleration is high, the transient waves created upon impact do not decay as quickly, deforming the surface such that the stabilizing forces are not enough to keep the droplet stationary. Thus, the droplet begins to “walk.” Quantum phenomena on a macroscopic scale A walking droplet on a vibrating fluid bath was found to behave analogously to several different quantum mechanical systems, namely particle diffraction, quantum tunneling, quantized orbits, the Zeeman effect, and the quantum corral. Single and double slit diffraction It has been known since the early 19th century that when light is shone through one or two small slits, a diffraction pattern appears on a screen far from the slits. Light has wave-like behavior, and interferes with itself through the slits, creating a pattern of alternating high and low intensity. Single electrons also exhibit wave-like behavior as a result of wave-particle duality. When electrons are fired through small slits, the probability of the electron striking the screen at a specific point shows an interference pattern as well. In 2006, Couder and Fort demonstrated that walking droplets passing through one or two slits exhibit similar interference behavior. They used a square shaped vibrating fluid bath with a constant depth (aside from the walls). The “walls” were regions of much lower depth, where the droplets would be stopped or reflected away. When the droplets were placed in the same initial location, they would pass through the slits and be scattered, seemingly randomly. However, by plotting a histogram of the droplets based on scattering angle, the researchers found that the scattering angle was not random, but droplets had preferred directions that followed the same pattern as light or electrons. In this way, the droplet may mimic the behavior of a quantum particle as it passes through the slit. Despite that research, in 2015 three teams: Bohr and Andersen's group in Denmark, Bush's team at MIT, and a team led by the quantum physicist Herman Batelaan at the University of Nebraska set out to repeat the Couder and Fort's bouncing-droplet double-slit experiment. Having their experimental setups perfected, none of the teams saw the interference-like pattern reported by Couder and Fort. Droplets went through the slits in almost straight lines, and no stripes appeared. It has since been shown that droplet trajectories are sensitive to interactions with container boundaries, air currents, and other parameters. Though the diffraction pattern of walking droplets is not exactly the same as in quantum physics, and is not expected to show a Fraunhofer-like dependence of the number of peaks on the slit width, the diffraction pattern does appear clearly in the high memory regime (at high forcing of the bath). 
Quantum tunneling Quantum tunneling is the quantum mechanical phenomenon where a quantum particle passes through a potential barrier. In classical mechanics, a classical particle could not pass through a potential barrier if the particle does not have enough energy, so the tunneling effect is confined to the quantum realm. For example, a rolling ball would not reach the top of a steep hill without adequate energy. However, a quantum particle, acting as a wave, can undergo both reflection and transmission at a potential barrier. This can be shown as a solution to the time dependent Schrödinger Equation. There is a finite, but usually small, probability to find the electron at a location past the barrier. This probability decreases exponentially with increasing barrier width. The macroscopic analogy using fluid droplets was first demonstrated in 2009. Researchers set up a square vibrating bath surrounded by walls on its perimeter. These “walls” were regions of lower depth, where a walking droplet may be reflected away. When the walking droplets were allowed to move around in the domain, they usually were reflected away from the barriers. However, surprisingly, sometimes the walking droplet would bounce past the barrier, similar to a quantum particle undergoing tunneling. In fact, the crossing probability was also found to decrease exponentially with increasing width of the barrier, exactly analogous to a quantum tunneling particle. Quantized orbits When two atomic particles interact and form a bound state, such the hydrogen atom, the energy spectrum is discrete. That is, the energy levels of the bound state are not continuous and only exist in discrete quantities, forming “quantized orbits.” In the case of a hydrogen atom, the quantized orbits are characterized by atomic orbitals, whose shapes are functions of discrete quantum numbers. On the macroscopic level, two walking fluid droplets can interact on a vibrating surface. It was found that the droplets would orbit each other in a stable configuration with a fixed distance apart. The stable distances came in discrete values. The stable orbiting droplets analogously represent a bound state in the quantum mechanical system. The discrete values of the distance between droplets are analogous to discrete energy levels as well. Zeeman effect When an external magnetic field is applied to a hydrogen atom, for example, the energy levels are shifted to values slightly above or below the original level. The direction of shift depends on the sign of the z-component of the total angular momentum. This phenomenon is known as the Zeeman Effect. In the context of walking droplets, an analogous Zeeman Effect can be demonstrated by observing orbiting droplets in a vibrating fluid bath. The bath is also brought to rotate at a constant angular velocity. In the rotating bath, the equilibrium distance between droplets shifts slightly farther or closer. The direction of shift depends on whether the orbiting drops rotate in the same direction as the bath or in opposite directions. The analogy to the quantum effect is clear. The bath rotation is analogous to an externally applied magnetic field, and the distance between droplets is analogous to energy levels. The distance shifts under an applied bath rotation, just as the energy levels shift under an applied magnetic field. Quantum corral Researchers have found that a walking droplet placed in a circular bath does not wander randomly, but rather there are specific locations the droplet is more likely to be found. 
Specifically, the probability of finding the walking droplet as a function of the distance from the center is non-uniform and there are several peaks of higher probability. This probability distribution mimics that of an electron confined to a quantum corral. See also Pilot-wave models De Broglie–Bohm theory Superfluid vacuum theory Quantum hydrodynamics References External links Research on hydrodynamic quantum analogues Prof. John Bush (MIT) Wired "Have We Been Interpreting Quantum Mechanics Wrong This Whole Time?" 2014 Quantum models
Hydrodynamic quantum analogs
Physics
2,174
55,602,006
https://en.wikipedia.org/wiki/Chlorophyllide
Chlorophyllide a and chlorophyllide b are the biosynthetic precursors of chlorophyll a and chlorophyll b respectively. Their propionic acid groups are converted to phytyl esters by the enzyme chlorophyll synthase in the final step of the pathway. Thus the main interest in these chemical compounds has been in the study of chlorophyll biosynthesis in plants, algae and cyanobacteria. Chlorophyllide a is also an intermediate in the biosynthesis of bacteriochlorophylls. Structures Chlorophyllide a is a carboxylic acid (R = H). In chlorophyllide b, the methyl group at position 7 (IUPAC numbering) is replaced with a formyl group. Biosynthesis steps up to formation of protoporphyrin IX In the early steps of the biosynthesis, which starts from glutamic acid, a tetrapyrrole is created by the enzymes deaminase and cosynthetase, which transform aminolevulinic acid via porphobilinogen and hydroxymethylbilane to uroporphyrinogen III. The latter is the first macrocyclic intermediate common to haem, sirohaem, cofactor F430, cobalamin and chlorophyll itself. The next intermediates are coproporphyrinogen III and protoporphyrinogen IX, which is oxidised to the fully aromatic protoporphyrin IX. Insertion of iron into protoporphyrin IX (in mammals, for example) gives haem, the oxygen-carrying cofactor in blood, but plants incorporate magnesium instead to give, after further transformations, chlorophyll for photosynthesis. Biosynthesis of chlorophyllides from protoporphyrin IX Details of the late stages of the biosynthetic pathway to chlorophyll differ between the plants (for example Arabidopsis thaliana, Nicotiana tabacum and Triticum aestivum) and bacteria (for example Rubrivivax gelatinosus and Synechocystis) in which it has been studied. However, although the genes and enzymes vary, the chemical reactions involved are identical. Insertion of magnesium Chlorophyll is characterised by having a magnesium ion coordinated within a ligand called a chlorin. The metal is inserted into protoporphyrin IX by the enzyme magnesium chelatase, which catalyzes the reaction protoporphyrin IX + Mg2+ + ATP + H2O → ADP + phosphate + Mg-protoporphyrin IX + 2 H+ Esterification of the ring C propionate group The next step towards the chlorophyllides is the formation of a methyl (CH3) ester on one of the propionate groups, which is catalysed by the enzyme magnesium protoporphyrin IX methyltransferase in the methylation reaction Mg-protoporphyrin IX + S-adenosylmethionine → Mg-protoporphyrin IX 13-methyl ester + S-adenosyl-L-homocysteine From porphyrin to chlorin The chlorin ring system features a five-membered carbon ring (ring E), which is created when one of the propionate groups of the porphyrin is cyclised onto the carbon atom linking the original pyrrole rings C and D. A series of chemical steps catalysed by the enzyme magnesium-protoporphyrin IX monomethyl ester (oxidative) cyclase gives the overall reaction Mg-protoporphyrin IX 13-monomethyl ester + 3 NADPH + 3 H+ + 3 O2 → divinylprotochlorophyllide + 3 NADP+ + 5 H2O In barley the electrons are provided by reduced ferredoxin, which can obtain them from photosystem I or, in the dark, from ferredoxin—NADP(+) reductase: the cyclase protein is named XanL and is encoded by the Xantha-l gene. In anaerobic organisms such as Rhodobacter sphaeroides the same overall transformation occurs, but the oxygen incorporated into magnesium-protoporphyrin IX 13-monomethyl ester comes from water rather than from O2.
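The sequence of enzyme-catalysed steps described so far can be summarised in a small data structure, which is sometimes a convenient way of keeping track of intermediates when annotating pathway data. The sketch below only restates the intermediates and enzymes named in this article up to this point; it is an illustrative summary under that assumption, not a complete or authoritative pathway model, and the function and variable names are invented.

```python
# Each entry: (substrate, product, enzyme) as named in the text above.
EARLY_CHLOROPHYLL_PATHWAY = [
    ("protoporphyrin IX", "Mg-protoporphyrin IX", "magnesium chelatase"),
    ("Mg-protoporphyrin IX", "Mg-protoporphyrin IX 13-methyl ester",
     "magnesium protoporphyrin IX methyltransferase"),
    ("Mg-protoporphyrin IX 13-methyl ester", "divinylprotochlorophyllide",
     "Mg-protoporphyrin IX monomethyl ester (oxidative) cyclase"),
]

def trace(pathway, start):
    """Follow substrate -> product links from a starting compound."""
    here, route = start, [start]
    steps = {s: (p, e) for s, p, e in pathway}
    while here in steps:
        here, enzyme = steps[here]
        route.append(f"--[{enzyme}]--> {here}")
    return "\n".join(route)

print(trace(EARLY_CHLOROPHYLL_PATHWAY, "protoporphyrin IX"))
```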
Reduction steps to chlorophyllide a Two further transformations are required to produce chlorophyllide a. Both are reduction reactions: one converts a vinyl group to an ethyl group and the second adds two atoms of hydrogen to the pyrrole ring D, although the overall aromaticity of the macrocycle is retained. These reactions proceed independently and in some organisms the sequence is reversed. The enzyme divinyl chlorophyllide a 8-vinyl-reductase converts 3,8-divinylprotochlorophyllide to protochlorophyllide in reaction 3,8-divinylprotochlorophyllide + NADPH + H+ protochlorophyllide + NADP+ This is followed by the reaction in which the pyrrole ring D is reduced by the enzyme protochlorophyllide reductase protochlorophyllide + NADPH + H+ chlorophyllide a + NADP+ This reaction is light-dependent but there is an alternative enzyme, ferredoxin:protochlorophyllide reductase (ATP-dependent), that uses reduced ferredoxin as its cofactor and is not dependent on light; it performs the a similar reaction but with the alternative substrate 3,8-divinylprotochlorophyllide 3,8-divinylprotochlorophyllide + reduced ferredoxin + 2 ATP + 2 H2O 3,8-divinylchlorophyllide a + oxidized ferredoxin + 2 ADP + 2 phosphate In the organisms which use this alternative sequence of reduction steps, the process is completed by the reaction catalysed by an enzyme which can take a variety of substrates and perform the required vinyl-group reduction, for example in this case 3,8-divinylchlorophyllide a + 2 reduced ferredoxin + 2 H+ chlorophyllide a + 2 oxidized ferredoxin From chlorophyllide a to chlorophyllide b Chlorophyllide a oxygenase is the enzyme that converts chlorophyllide a to chlorophyllide b by catalysing the overall reaction chlorophyllide a + 2 O2 + 2 NADPH + 2 H+ chlorophyllide b + 3 H2O + 2 NADP+ Use in the biosynthesis of chlorophylls Chlorophyll synthase completes the biosynthesis of chlorophyll a by catalysing the reaction chlorophyllide a + phytyl diphosphate chlorophyll a + diphosphate This forms an ester of the carboxylic acid group in chlorophyllide a with the 20-carbon diterpene alcohol phytol. Chlorophyll b is made by the same enzyme acting on chlorophyllide b. The same is known for chlorophyll d and f, both made from corresponding chlorophyllides ultimately made from chlorophyllide a. Use in the biosynthesis of bacteriochlorophylls Bacteriochlorophylls are the light harvesting pigments found in photosynthetic bacteria: they do not produce oxygen as a side-product. There are many such structures but all are biosynthetically related by being derived from chlorophyllide a. BChl a: bacteriochlorin ring and sidechains Bacteriochlorophyll a is a typical example; its biosynthesis has been studied in Rhodobacter capsulatus and Rhodobacter sphaeroides. The first step is the reduction (with trans stereochemistry) of the pyrrole ring B, giving the characteristic 18-electron aromatic system of many bacteriochlorophylls. This is carried out by the enzyme chlorophyllide a reductase, which catalyses the reaction . chlorophyllide a + 2 reduced ferredoxin + ATP + H2O + 2 H+ 3-deacetyl 3-vinylbacteriochlorophyllide a + 2 oxidized ferredoxin + ADP + phosphate The next two steps convert the vinyl group first into a 1-hydroxyethyl group and then into the acetyl group of bacteriochlorophyllide a. 
The reactions are catalysed by chlorophyllide a 31-hydratase () and bacteriochlorophyllide a dehydrogenase () as follows: 3-deacetyl 3-vinylbacteriochlorophyllide a + H2O 3-deacetyl 3-(1-hydroxyethyl)bacteriochlorophyllide a 3-deacetyl 3-(1-hydroxyethyl)bacteriochlorophyllide a + NAD+ bacteriochlorophyllide a + NADH + H+ These three enzyme-catalysed reactions can occur in different sequences to produce bacteriochlorophyllide a ready for esterification to the final pigments for photosynthesis. The phytyl ester of bacteriochlorophyll a is not attached directly: rather, the initial intermediate is the ester with R=geranylgeranyl (from geranylgeranyl pyrophosphate) which is then subject to additional steps as three of the sidechain's alkene bonds are reduced. References Tetrapyrroles Photosynthetic pigments
Chlorophyllide
Chemistry
2,135
28,702,283
https://en.wikipedia.org/wiki/Islam%20and%20violence
The use of politically and religiously-motivated violence in Islam dates back to its early history. Islam has its origins in the behavior, sayings, and rulings of the Islamic prophet Muhammad, his companions, and the first caliphs in the 7th, 8th, and 9th centuries CE. Mainstream Islamic law stipulates detailed regulations for the use of violence, including corporal and capital punishment, as well as regulations on how, when, and whom to wage war against. Legal background Sharia law is the basic Islamic religious law derived from the religious precepts of Islam. The Quran and opinions of Muhammad (i.e., the Hadith and Sunnah) are the primary sources of sharia. For topics and issues not directly addressed in these primary sources, sharia is derived. The derivation differs between the various sects of Islam (Sunni and Shia are the majority), and various jurisprudence schools such as Hanafi, Maliki, Shafi'i, Hanbali and Jafari. The sharia in these schools is derived hierarchically using one or more of the following guidelines: Ijma (usually the consensus of Muhammad's companions), Qiyas (analogy derived from the primary sources), Istihsan (ruling that serves the interest of Islam in the discretion of Islamic jurists) and Urf (customs). Sharia is a significant source of legislation in various Muslim countries. Some apply all or a majority of the sharia, and these include Saudi Arabia, Sudan, Iran, Iraq, Afghanistan, Pakistan, Brunei, United Arab Emirates, Qatar, Yemen and Mauritania, respectively. In these countries, sharia-prescribed punishments, such as beheading, flogging and stoning, continue to be practiced judicially or extrajudicially. The introduction of sharia is a longstanding goal for Islamist movements globally, but attempts to impose sharia have been accompanied by controversy, violence, and even warfare. The differences between sharia and secular law have led to an ongoing controversy as to whether sharia is compatible with secular forms of government, human rights, freedom of thought, and women's rights. Types of violence Islam and war The first military rulings were formulated during the first hundred years after Muhammad established an Islamic state in Medina. These rulings evolved in accordance with the interpretations of the Quran (the Muslim Holy scriptures) and Hadith (the recorded traditions of Muhammad). The key themes in these rulings were the justness of war (see Justice in the Quran) and the injunction to jihad. The rulings do not cover feuds and armed conflicts in general. The millennium of Muslim conquests could be classified as a religious war. Some have pointed out that the current Western view of the need for a clear separation between Church and State was only first legislated into effect after 18 centuries of Christianity in the Western world. While some majority Muslim governments such as Turkey and many of the majority Muslim former Soviet republics have officially attempted to incorporate this principle of such a separation of powers into their governments, yet, the concept somewhat remains in a state of ongoing evolution and flux within the Muslim world. Islam has never had any officially recognized tradition of pacifism, and throughout its history, warfare has been an integral part of the Islamic theological system. Since the time of Muhammad, Islam has considered warfare to be a legitimate expression of religious faith, and has accepted its use for the defense of Islam. 
During approximately the first 1,000 years of its existence, the use of warfare by Muslim majority governments often resulted in the de facto propagation of Islam. The minority Sufi movement within Islam, which includes certain pacifist elements, has often been officially "tolerated" by many Muslim majority governments. Some notable Muslim clerics, such as Abdul Ghaffar Khan, have also developed alternative non-violent Muslim theologies. Some hold that the formal juristic definition of war in Islam constitutes an irrevocable and permanent link between the political and religious justifications for war within Islam. The Quranic concept of Jihad includes aspects of both a physical and an internal struggle. Jihad Jihad () is an Islamic term referring to the religious duty of Muslims to maintain the religion. In Arabic, the word jihād is a noun meaning "to strive, to apply oneself, to struggle, to persevere". A person engaged in jihad is called a mujahid, the plural of which is mujahideen (). The word jihad appears frequently in the Quran, often in the idiomatic expression "striving in the way of God (al-jihad fi sabil Allah)", to refer to the act of striving to serve the purposes of God on this earth. According to the classical Sharia law manual of Shafi'i, Reliance of the Traveller, a Jihad is a war that should be waged against non-Muslims, and the word Jihad is etymologically derived from the word mujahada, a mujahada is a war which should be waged to establish the religion. Jihad is sometimes referred to as the sixth pillar of Islam, though it occupies no such official status. In Twelver Shi'a Islam, however, jihad is one of the ten Practices of the Religion. Muslims and scholars do not all agree on its definition. Many observers—both Muslim and non-Muslim—as well as the Dictionary of Islam, talk of jihad having two meanings: an inner spiritual struggle (the "greater jihad"), and an outer physical struggle against the enemies of Islam (the "lesser jihad") which may take a violent or non-violent form. Jihad is often translated as "Holy War", although this term is controversial. According to orientalist Bernard Lewis, "the overwhelming majority of classical theologians, jurists", and specialists in the hadith "understood the obligation of jihad in a military sense." Javed Ahmad Ghamidi states that there is consensus among Islamic scholars that the concept of jihad will always include armed struggle against wrongdoers. According to Jonathan Berkey, jihad in the Quran was maybe originally intended against Muhammad's local enemies, the pagans of Mecca or the Jews of Medina, but the Quranic statements supporting jihad could be redirected once new enemies appeared. The first documentation of the law of Jihad was written by 'Abd al-Rahman al-Awza'i and Muhammad ibn al-Hasan al-Shaybani. The first forms of military Jihad occurred after the migration (hijra) of Muhammad and his small group of followers to Medina from Mecca and the conversion of several inhabitants of the city to Islam. The first revelation concerning the struggle against the Meccans was surah 22, verses 39–40: The main focus of Muhammad's later years was increasing the number of allies as well as the amount of territory under Muslim control. According to Richard Edwards and Sherifa Zuhur, offensive jihad was the type of jihad practiced by the early Muslim community because their weakness meant "no defensive action would have sufficed to protect them against the allied tribal forces determined to exterminate them." 
Jihad as a collective duty (Fard Kifaya) and offensive jihad is synonymous in classical Islamic law and tradition, which also asserted that offensive jihad could only be declared by the caliph, but an "individually incumbent jihad" (Fard Ayn) required only "awareness of an oppression targeting Islam or Islamic peoples." Tina Magaard, associate professor at the Aarhus University Department of Business Development and Technology, has analyzed the texts of the ten largest religions in the world. In an interview, she stated that the basic texts of Islam call for violence and aggression against followers of other faiths to a greater extent than texts of other religions. She has also argued that they contain direct incitements to terrorism. According to a number of sources, Shia doctrine taught that jihad (or at least full-scale jihad) can only be carried out under the leadership of the Imam (who will return from occultation to bring absolute justice to the world). However, "struggles to defend Islam" are permissible before his return. Caravan raids Ghazi () is an Arabic term originally referring to an individual who participates in Ghazw (), meaning military expeditions or raiding; after the emergence of Islam, it took on new connotations of religious warfare. The related word Ghazwa () is a singulative form meaning a battle or military expedition, often one led by Muhammad. The Caravan raids were a series of raids in which Muhammad and his companions participated. The raids were generally offensive and carried out to gather intelligence or seize the trade goods of caravans financed by the Quraysh. The raids were intended to weaken the economy of Mecca by Muhammad. His followers were also impoverished. Muhammad only attacked caravans as a response against Quraysh for confiscating the Muslims' homes and wealth back in Mecca and driving them into exile. Quran Islamic doctrine and teachings on matters of war and peace have become topics of heated discussion in recent years. Charles Matthews writes that there is a "large debate about what the Quran commands with regard to the 'sword verses' and the 'peace verses'". According to Matthews, "the question of the proper prioritization of these verses, and how they should be understood in relation to one another, has been a central issue for Islamic thinking about war." According to Dipak Gupta, "much of the religious justification of violence against nonbelievers (Dar ul Kufr) by the promoters of jihad is based on the Quranic "sword verses". The Quran contains passages that could be used to glorify or endorse violence. On the other hand, other scholars argue that such verses of the Qur'an are interpreted out of context, Micheline R. Ishay has argued that "the Quran justifies wars for self-defense to protect Islamic communities against internal or external aggression by non-Islamic populations, and wars waged against those who 'violate their oaths' by breaking a treaty". and British orientalist Gottlieb Wilhelm Leitner stated that jihad, even in self-defence, is "strictly limited". However, according to Oliver Leaman, a number of Islamic jurists asserted the primacy of the "sword verses" over the conciliatory verses in specific historical circumstances. For example, according to Diane Morgan, Ibn Kathir (1301–1372) asserted that the Sword Verse abrogated all peace treaties that had been promulgated between Muhammad and idolaters. Prior to the Hijra travel, Muhammad non-violently struggled against his oppressors in Mecca. 
It wasn't until after the exile that the Quranic revelations began to adopt a more defensive perspective. From that point onward, those dubious about the need to go to war were typically portrayed as lazy cowards allowing their love of peace to become a fitna to them. Hadiths The context of the Quran is elucidated by Hadith (the teachings, deeds, and sayings of Muhammad). Of the 199 references to jihad in perhaps the most standard collection of hadith—Sahih Bukhari—all refer to warfare. Quranism Quranists reject the hadith and only accept the Quran. The extent to which Quranists reject the authenticity of the Sunnah varies, but the more established groups have thoroughly criticised the authenticity of the hadith and refused it for many reasons, the most prevalent being the Quranist claim that hadith is not mentioned in the Quran as a source of Islamic theology and practice, was not recorded in written form until more than two centuries after the death of Muhammad, and contain perceived internal errors and contradictions. Ahmadiyya According to Ahmadi belief, Jihad can be divided into three categories: Jihad al-Akbar (Greater Jihad) is that against the self and refers to striving against one's low desires such as anger, lust and hatred; Jihad al-Kabīr (Great Jihad) refers to the peaceful propagation of Islam, with special emphasis on spreading the true message of Islam by the pen; Jihad al-Asghar (Smaller Jihad) is only for self-defence under situations of extreme religious persecution whilst not being able to follow one's fundamental religious beliefs, and even then only under the direct instruction of the Caliph. Ahmadi Muslims point out that as per Islamic prophecy, Mirza Ghulam Ahmad rendered Jihad in its military form as inapplicable in the present age as Islam, as a religion, is not being attacked militarily but through literature and other media, and therefore the response should be likewise. They believe that the answer to hate should be given by love. Concerning terrorism, the fourth Caliph of the Community writes: Various Ahmadis scholars, such as Muhammad Ali, Maulana Sadr-ud-Din and Basharat Ahmad, argue that when the Quran's verses are read in context, it clearly appears that the Quran prohibits initial aggression, and allows fighting only in self-defense. Ahmadi Muslims believe that no verse of the Quran abrogates or cancels another verse. All Quranic verses have equal validity, in keeping with their emphasis on the "unsurpassable beauty and unquestionable validity of the Qur'ān". The harmonization of apparently incompatible rulings is resolved through their juridical deflation in Ahmadī fiqh, so that a ruling (considered to have applicability only to the specific situation for which it was revealed), is effective not because it was revealed last, but because it is most suited to the situation at hand. Ahmadis are considered non-Muslims by the mainstream Muslims since they consider Mirza Ghulam Ahmad, founder of Ahmadiyya, as the promised Mahdi and Messiah. In a number of Islamic countries, especially Sunni-dominated nations, Ahmadis have been considered heretics and non-Muslim, and have been subject to various forms of religious persecution, discrimination and systematic oppression since the movement's inception in 1889. Islam and crime The Islamic criminal law is criminal law in accordance with Sharia. Strictly speaking, Islamic law does not have a distinct corpus of "criminal law." 
It divides crimes into three different categories depending on the offense – Hudud (crimes "against God", whose punishment is fixed in the Quran and the Hadiths); Qisas (crimes against an individual or family whose punishment is equal retaliation in the Quran and the Hadiths); and Tazir (crimes whose punishment is not specified in the Quran and the Hadiths, and is left to the discretion of the ruler or Qadi, i.e. judge). Some add the fourth category of Siyasah (crimes against government), while others consider it as part of either Hadd or Tazir crimes. Hudud is an Islamic concept: punishments under Islamic law (Shariah) are mandated and fixed by God. The Shariah divided offenses into those against God and those against man. Crimes against God violated his Hudud, or 'boundaries'. These punishments were specified by the Quran and in some instances by the Sunnah. They are namely for adultery, fornication, homosexuality, illegal sex by a slave girl, accusing someone of illicit sex but failing to present four male Muslim eyewitnesses, apostasy, consuming intoxicants, outrage (e.g. rebellion against the lawful Caliph, other forms of mischief against the Muslim state, or highway robbery), robbery and theft. The crimes against hudud cannot be pardoned by the victim or by the state, and the punishments must be carried out in public. These punishments range from public lashing to publicly stoning to death, amputation of hands and crucifixion. However, in most Muslim nations in modern times public stoning and execution are relatively uncommon, although they are found in Muslim nations that follow a strict interpretation of sharia, such as Saudi Arabia and Iran. Qisas is an Islamic term meaning "retaliation in kind" or revenge, "eye for an eye", "nemesis" or retributive justice. It is a category of crimes in Islamic jurisprudence, where Sharia allows equal retaliation as the punishment. Qisas principle is available against the accused, to the victim or victim's heirs, when a Muslim is murdered, suffers bodily injury, or suffers property damage. In the case of murder, Qisas means the right of a murder victim's nearest relative or Wali (legal guardian) to, if the court approves, take the life of the killer. The Quran mentions the "eye for an eye" concept as being ordained for the Children of Israel in : "O you who have believed, prescribed for you is legal retribution (Qasas) for those murdered – the free for the free, the slave for the slave, and the female for the female. But whoever overlooks from his brother anything, then there should be a suitable follow-up and payment to him with good conduct. This is an alleviation from your Lord and a mercy. But whoever transgresses after that will have a painful punishment." Shi'ite countries that use Islamic Sharia law, such as Iran, apply the "eye for an eye" rule literally. Tazir refers to punishment, usually corporal, for offenses at the discretion of the judge (Qadi) or ruler of the state. Capital punishment Beheading Beheading was the standard method of capital punishment under classical Islamic law. It was also, together with hanging, one of the ordinary methods of execution in the Ottoman Empire. Currently, Saudi Arabia is the only country in the world which uses decapitation within its Islamic legal system. The majority of executions carried out by the Wahhabi government of Saudi Arabia are public beheadings, which usually cause mass gatherings but are not allowed to be photographed or filmed. 
Beheading is reported to have been carried out by state authorities in Iran as recently as 2001, but as of 2014 it is no longer in use. It is also a legal form of execution in Qatar and Yemen, but the punishment has been suspended in those countries. In recent years, non-state Jihadist organizations such as the Islamic State and Tawhid and Jihad have carried out beheadings. Since 2002, they have circulated beheading videos as a form of terror and propaganda. Their actions have been condemned by other militant and terrorist groups, and they have also been condemned by mainstream Islamic scholars and organizations. Stoning Rajm () is an Arabic word that means "stoning". It is commonly used to refer to the Hudud punishment wherein an organized group throws stones at a convicted individual until that person dies. Under Islamic law, it is the prescribed punishment in cases of adultery committed by a married man or married woman. The conviction requires a confession from either the adulterer/adulteress, the testimony of four witnesses (as prescribed by the Quran in Surah an-Nur verse 4), or pregnancy outside of marriage. See Sexual crimes below. Blasphemy Blasphemy in Islam is an impious utterance or action concerning God, Muhammad, or anything considered sacred in Islam. The Quran admonishes blasphemy, but does not specify any worldly punishment for it. The hadiths, which are another source of Sharia, suggest various punishments for blasphemy, which may include death. There are a number of surahs in the Qur'an relating to blasphemy, of which Quranic verses 5:33 and 33:57–61 have been most commonly used in Islamic history to justify and punish blasphemers. Various fiqhs (schools of jurisprudence) of Islam prescribe different punishments for blasphemy, depending on whether the blasphemer is Muslim or non-Muslim, man or woman. The punishment can be fines, imprisonment, flogging, amputation, hanging, or beheading. Muslim clerics may call for the punishment of an alleged blasphemer by issuing a fatwā. According to Islamic sources, Nadr ibn al-Harith, who was an Arab Pagan doctor from Taif, used to tell stories of Rustam and Esfandiyār to the Arabs and scoffed at Muhammad. After the battle of Badr, al-Harith was captured and, in retaliation, Muhammad ordered his execution at the hands of Ali. Apostasy Apostasy in Islam is commonly defined as the conscious abandonment of Islam by a Muslim in word or through deed. A majority considers apostasy in Islam to be some form of religious crime, although Al-Baqara 256 says that there is "no compulsion in religion". The definition of apostasy from Islam and its appropriate punishment(s) are controversial, and they vary among Islamic scholars. Apostasy in Islam may include in its scope not only the renunciation of Islam by a Muslim and the joining of another religion or becoming non-religious, but also questioning or denying any "fundamental tenet or creed" of Islam, such as the divinity of God or the prophethood of Muhammad, as well as mocking God or worshipping one or more idols. The term apostate (murtadd مرتد) has also been used for people of religions that trace their origins to Islam, such as those of the Baháʼí Faith founded in Iran, but who were never actually Muslims themselves. Apostasy in Islam does not include acts against Islam or conversion to another religion that is involuntary, due to mental disorders, forced, or done as concealment out of fear of persecution or during war (Taqiyya or Kitman). 
Historically, the majority of Islamic scholars considered apostasy a hudud crime as well as a sin, an act of treason punishable with the death penalty, and regarded the Islamic law on apostasy and its punishment as one of the immutable laws of Islam. The punishment for apostasy includes state-enforced annulment of his or her marriage, seizure of the person's children and property with automatic assignment to guardians and heirs, and a death penalty for apostates, typically after a waiting period to allow the apostate time to repent and return to Islam. Female apostates could be either executed, according to the Shafi'i, Maliki, and Hanbali schools of Sunni Islamic jurisprudence (fiqh), or imprisoned until they revert to Islam, as advocated by the Sunni Hanafi school and by Shi'a scholars. The kind of apostasy generally deemed to be punishable by the jurists was of the political kind, although there were considerable legal differences of opinion on this matter. There were early Islamic scholars who disagreed with the death penalty and prescribed indefinite imprisonment until repentance. The Hanafi jurist Sarakhsi also called for different punishments for non-seditious religious apostasy and for apostasy of a seditious and political nature, or high treason. Some modern scholars also argue that the death penalty is an inappropriate punishment, inconsistent with Quranic injunctions such as Quran 88:21–22 or "no compulsion in religion"; and/or that it is not a general rule but was enacted at a time when the early Muslim community faced enemies who threatened its unity, safety, and security, and needed to prevent and punish the equivalent of desertion or treason, and should be enforced only if apostasy becomes a mechanism of public disobedience and disorder (fitna). To the Ahmadi Muslim sect, there is no punishment for apostasy, neither in the Quran nor as taught by the founder of Islam, Muhammad. This position of the Ahmadi sect is not widely accepted in other sects of Islam, and the Ahmadi sect acknowledges that major sects have a different interpretation and definition of apostasy in Islam. Ulama of major sects of Islam consider the Ahmadi Muslim sect to be kafirs (infidels) and apostates. Under current laws in Islamic countries, the actual punishment for the apostate ranges from execution to a prison term to no punishment. Islamic nations with sharia courts use civil code to void the Muslim apostate's marriage and deny child custody rights, as well as his or her inheritance rights, for apostasy. Twenty-three Muslim-majority countries, as of 2013, additionally covered apostasy in Islam through their criminal laws. Today, apostasy is a crime in 23 out of 49 Muslim-majority countries. It is subject in some countries, such as Iran and Saudi Arabia, to the death penalty, although executions for apostasy are rare. Apostasy is legal in secular Muslim countries such as Turkey. In numerous Islamic-majority countries, many individuals have been arrested and punished for the crime of apostasy without any associated capital crimes. In a 2013 report based on an international survey of religious attitudes, more than 50% of the Muslim population in 6 Islamic countries supported the death penalty for any Muslim who leaves Islam (apostasy). A similar survey of the Muslim population in the United Kingdom, in 2007, found nearly a third of 16- to 24-year-old Muslims believed that Muslims who convert to another religion should be executed, while less than a fifth of those over 55 believed the same. 
Sexual crimes Zina is an Islamic legal concept, recognized in both the four schools of Sunni fiqh (Islamic jurisprudence) and the two schools of Shi'a fiqh, concerning unlawful sexual relations between Muslims who are not married to one another through a Nikah. It includes extramarital sex and premarital sex, such as adultery (consensual sexual relations outside marriage), fornication (consensual sexual intercourse between two unmarried persons), illegal sex by a slave girl, and in some interpretations sodomy (anal intercourse between male same-sex partners). Traditionally, a married or unmarried Muslim male could have sex outside marriage with a non-Muslim slave girl, with or without her consent, and such sex was not considered zina. According to Quran 24:4, the proof that adultery has occurred requires four eyewitnesses to the act, which must have been committed by a man and a woman not validly married to one another, and consenting adults must have wilfully committed the act. Proof can also be established by a confession, but the confession must be voluntary and based on legal counsel; it must be repeated on four separate occasions and made by a person who is sane. Otherwise, the accuser is then accorded a sentence for defamation (which means flogging or a prison sentence), and his or her testimony is excluded in all future court cases. There is disagreement between Islamic scholars on whether female eyewitnesses are acceptable witnesses in cases of zina (for other crimes, sharia considers two female witnesses equal to the witness of one male). Zina is a Hudud crime, stated in multiple sahih hadiths to deserve the stoning (Rajm) punishment. In others, stoning is prescribed as punishment for illegal sex between a man and a woman. Some sunnah describe the method of stoning: first digging a pit and partly burying the person's lower half in it. Based on these hadiths, in some Muslim countries, married adulterers are sentenced to death, while consensual sex between unmarried people is sentenced to 100 lashes. Adultery can be punished by up to one hundred lashes, though this is not binding in nature, and the final decision will always be in the hands of a judge appointed by the state or community. However, the Quran makes no mention of stoning or capital punishment for adultery and prescribes only lashing. Nevertheless, most scholars maintain that there is sufficient evidence from hadiths to derive a ruling. Sharia law makes a distinction between adultery and rape and applies different rules. In the case of rape, the adult male perpetrator (i.e., rapist) of such an act is to receive the ḥadd zinā, but the non-consenting or invalidly consenting female (i.e., rape victim), proved by four eyewitnesses, is to be regarded as innocent of zinā and relieved of the ḥadd punishment. Confession and four witness-based prosecutions of zina are rare. Most prosecutions occur when the woman becomes pregnant or when she has been raped, seeks justice, and the Sharia authorities charge her with zina instead of duly investigating the rapist. Some fiqhs (schools of Islamic jurisprudence) created the principle of shubha (doubt), wherein there would be no zina charges if a Muslim man claims he believed he was having sex with a woman he was married to or with a woman he owned as a slave. 
Zina only applies to unlawful sex between free Muslims; the rape of a non-Muslim slave woman is not zina, as the act is considered an offense not against the raped slave woman but against the owner of the slave. The zina and rape laws of countries under Sharia law are the subjects of a global human rights debate and one of many items of reform and secularization debate with respect to Islam. Contemporary human rights activists refer to this as a new phase in the politics of gender in Islam, the battle between forces of traditionalism and modernism in the Muslim world, and the use of religious texts of Islam through state laws to sanction and practice gender-based violence. In contrast to human rights activists, Islamic scholars and Islamist political parties consider 'universal human rights' arguments to be the imposition of a non-Muslim culture on Muslim people and a disrespect of customary cultural practices and sexual codes that are central to Islam. Zina laws come under hudud, seen as a crime against Allah; the Islamists refer to this pressure and proposals to reform zina and other laws as 'contrary to Islam'. Attempts by international human rights groups to reform religious laws and codes of Islam have become Islamist rallying platforms during political campaigns. Violence against LGBT people The Quran contains seven references to the fate of "the people of Lot", and their destruction is explicitly associated with their sexual practices: Given the fact that the Quran is allegedly vague regarding the punishment for homosexual sodomy, Islamic jurists turned to the collections of the hadith and the seerah (accounts of Muhammad's life) to support their argument for Hudud punishment. With a few exceptions, all scholars of Sharia or Islamic law interpret homosexual activity as a punishable offense as well as a sin. No specific punishment is prescribed, however, and this is usually left to the discretion of the local Islamic authorities. There are several methods by which sharia jurists have advocated the punishment of gays or lesbians who are sexually active. One form of execution involves an individual convicted of homosexual acts being stoned to death by a crowd of Muslims. Other Muslim jurists have established an ijma ruling which states that persons who commit homosexual acts should be thrown from rooftops or other high places, and this is the perspective of most Salafists. Today, homosexuality is not socially or legally accepted in most of the Islamic world. In Afghanistan, Brunei, Iran, Mauritania, Nigeria, Saudi Arabia, the United Arab Emirates and Yemen, homosexual acts carry the death penalty. In other Muslim-majority countries, such as Algeria, the Gaza Strip, the Maldives, Malaysia, Pakistan, Qatar, Somalia, Sudan, and Syria, it is illegal. Same-sex sexual intercourse is legal in 20 Muslim-majority nations (Albania, Azerbaijan, Bahrain, Bosnia and Herzegovina, Burkina Faso, Chad, Djibouti, Guinea-Bissau, Lebanon, Jordan, Kazakhstan, Kosovo, Kyrgyzstan, Mali, Niger, Tajikistan, Turkey, the West Bank (State of Palestine), and most of Indonesia (except the province of Aceh), as well as Northern Cyprus). In Albania, Lebanon, and Turkey, there have been discussions about legalizing same-sex marriage. Homosexual relations between females are legal in Kuwait, Turkmenistan and Uzbekistan, but homosexual acts between males are illegal. 
Most Muslim-majority countries and the Organisation of Islamic Cooperation (OIC) have opposed moves to advance LGBT rights at the United Nations in the General Assembly and the UNHRC. In May 2016, a group of 51 Muslim states blocked 11 gay and transgender organizations from attending a high-level meeting on ending AIDS at the United Nations. However, Albania, Guinea-Bissau and Sierra Leone have signed a UN Declaration supporting LGBT rights. Kosovo as well as the (not internationally recognized) Muslim-majority Turkish Republic of Northern Cyprus also have anti-discrimination laws in place. On 12 June 2016, 49 people were killed and 53 other people were injured in a mass shooting at the Pulse gay nightclub in Orlando, Florida, in the second-deadliest mass shooting by an individual and the deadliest incident of violence against LGBT people in U.S. history. The shooter, Omar Mateen, pledged allegiance to the Islamic State. Investigators have classified the act as an Islamic terrorist attack and a hate crime, despite the fact that he was suffering from mental health issues and he acted alone. Upon further review, investigators indicated that Omar Mateen showed few signs of radicalization, suggesting that the shooter's pledge of allegiance to the Islamic State may have been a calculated move which he made in order to garner more news coverage for himself. Afghanistan, Algeria, Azerbaijan, Bahrain, Djibouti, Egypt, Iraq, Iran, Pakistan, Saudi Arabia, Turkey, Turkmenistan and the United Arab Emirates condemned the attack. Many American Muslims, including community leaders, swiftly condemned the attack. Prayer vigils for the victims were held at mosques across the country. The Florida mosque where Mateen sometimes prayed issued a statement in which it condemned the attack and offered its condolences to the victims. The Council on American–Islamic Relations called the attack "monstrous" and offered its condolences to the victims. CAIR Florida urged Muslims to donate blood and contribute funds in support of the victims' families. Domestic violence In Islam, while certain interpretations of Surah, An-Nisa, 34 in the Quran find that a husband hitting a wife is allowed, this has also been disputed. While some authors, such as Phyllis Chesler, argue that Islam is connected to violence against women, especially in the form of honor killings, others, such as Tahira Shahid Khan, a professor specializing in women's issues at the Aga Khan University in Pakistan, argue that it is the domination of men and inferior status of women in society that lead to these acts, not the religion itself. Public (such as through the media) and political discourse debating the relation between Islam, immigration, and violence against women is highly controversial in many Western countries. Many scholars claim Shari'a law encourages domestic violence against women when a husband suspects nushuz (disobedience, disloyalty, rebellion, ill conduct) in his wife. Other scholars claim wife beating for nashizah is not consistent with modern perspectives of Qur'an. Some conservative translations find that Muslim husbands are permitted to act what is known in Arabic as Idribuhunna with the use of "light force," and sometimes as much as to strike, hit, chastise, or beat. 
Contemporary Egyptian scholar Abd al-Halim Abu Shaqqa refers to the opinions of jurists Ibn Hajar al-Asqalani, a medieval Shafiite Sunni scholar of Islam who represents the entire realm of Shaykh al Islam, and al-Shawkani, a Yemeni Salafi scholar of Islam, jurist and reformer, who state that hitting should only occur in extraordinary cases. Some Islamic scholars and commentators have emphasized that hitting, even where permitted, is not to be harsh. Other interpretations of the verse claim it does not support hitting a woman but separating from her. Variations in interpretation are due to different schools of Islamic jurisprudence, histories and politics of religious institutions, conversions, reforms, and education. Although Islam permits women to divorce for domestic violence, they are subject to the laws of their nation, which might make it quite difficult for a woman to obtain a divorce. In deference to Surah 4:34, many nations with Shari'a law have refused to consider or prosecute cases of domestic abuse. Terrorism Islamic terrorism is, by definition, religiously-motivated terrorism which is engaged in by Muslim groups or individuals who profess Islamic, Islamic fundamentalist or Islamist motivations or goals, such as the imposition of slavery. In recent decades, incidents of Islamic terrorism have occurred on a global scale, not only in Muslim-majority states in Africa and Asia, but also in Europe, Russia, and the United States, and the targets of these attacks have been Muslims as well as non-Muslims. In a number of the worst-affected Muslim-majority regions, these terrorists have been met by armed, independent resistance groups, state actors and their proxies, and politically liberal Muslim protesters. Pacifism in Islam Different Muslim movements through history had linked pacifism with Muslim theology. However, warfare has been an integral part of Islamic history both for the defense and the spread of the faith since the time of Muhammad. Peace is an important aspect of Islam, and Muslims are encouraged, but not required to strive for peace and find peaceful solutions to all problems. However, most Muslims are generally not pacifists, because the teachings in the Qur'an and the Hadith allow Muslims to wage wars if they can be justified. According to James Turner Johnson, there is no normative tradition of pacifism in Islam. Prior to the Hijra travel, Muhammad waged a non-violent struggle against his opponents in Mecca. It was not until after the exile that the Quranic revelations began to adopt a more violent perspective. Fighting in self-defense is not only legitimate but considered obligatory upon Muslims, according to the Qur'an. The Qur'an, however, says that should the enemy's hostile behavior cease, then the reason for engaging the enemy also lapses. Statistics Older statistical academic studies have found evidence that violent crime is less common among Muslim populations than it is among non-Muslim populations. However, those studies insufficiently account for different definitions and report rates of violent crimes in other legal systems (e.g., domestic violence). The average homicide rate in the Muslim world was 2.4 per 100,000, less than a third of non-Muslim countries which had an average homicide rate of 7.5 per 100,000. 
The average homicide rate among the 19 most populous Muslim countries was 2.1 per 100,000, less than a fifth of the average homicide rate among the 19 most populous Christian countries which was 11.0 per 100,000, including 5.6 per 100,000 in the United States. A negative correlation was found between a country's homicide rate and its percentage of Muslims, in contrast to a positive correlation found between a country's homicide rate and its percentage of Christians. According to Professor Steven Fish: "The percentage of the society that is made up of Muslims is an extraordinarily good predictor of a country’s murder rate. More authoritarianism in Muslim countries does not account for the difference. I have found that controlling for political regime in statistical analysis does not change the findings. More Muslims, less homicide." At the same time, Fish states that: "In a recent book I reported that between 1994 and 2008, the world suffered 204 high-casualty terrorist bombings. Islamists were responsible for 125, or 61 percent of these incidents, which accounted for 70 percent of all deaths." Professor Jerome L. Neapolitan compared low crime rates in Islamic countries to low crime in Japan, comparing the role of Islam to that of Japan's Shinto and Buddhist traditions in fostering cultures emphasizing the importance of community and social obligation, contributing to less criminal behaviour than other nations. Gallup and Pew polls Polls have found Muslim-Americans to report less violent views than any other religious group in America. 89% of Muslim Americans claimed that the killing of civilians is never justified, compared to 71% of Catholics and Protestants, 75% of Jews, and 76% of atheists and non-religious groups. When Gallup asked if it is justifiable for the military to kill civilians, the percentage of people who said it is sometimes justifiable was 21% among Muslims, 58% among Protestants and Catholics, 52% among Jews, and 43% among atheists. According to 2006 data, Pew Research said that 46% of Nigerian Muslims, 29% of Jordan Muslims, 28% of Egyptian Muslims, 15% of British Muslims, and 8% of American Muslims thought suicide bombings are often or sometimes justified. The figure was unchanged – still 8% – for American Muslims by 2011. Pew in 2009 found that, among Muslims asked if suicide bombings against civilians was justifiable, 43% said it was justifiable in Nigeria, 38% in Lebanon, 15% in Egypt, 13% in Indonesia, 12% in Jordan, 7% among Arab Israelis, 5% in Pakistan, and 4% in Turkey. Pew Research in 2010 found that in Jordan, Lebanon, and Nigeria, roughly 50% of Muslims had favourable views of Hezbollah, and that Hamas also saw similar support. Counter-terrorism researchers suggests that support for suicide bombings is rooted in opposition to real or perceived foreign military occupation, rather than Islam, according to a Department of Defense-funded study by University of Chicago researcher Robert Pape. The Pew Research Center also found that support for the death penalty as punishment for "people who leave the Muslim religion" was 86% in Jordan, 84% in Egypt, 76% in Pakistan, 51% in Nigeria, 30% in Indonesia, 6% in Lebanon and 5% in Turkey. The different factors at play (e.g. sectarianism, poverty, etc.) and their relative impacts are not clarified. 
The Pew Research Center's 2013 poll showed that the majority of 14,244 Muslim, Christian, and other respondents in 14 countries with substantial Muslim populations are concerned about Islamic extremism and hold negative views on known terrorist groups. Gallup poll Gallup poll collected extensive data in a project called "Who Speaks for Islam?". John Esposito and Dalia Mogahed present data relevant to Islamic views on peace and more in their book Who Speaks for Islam? The book reports Gallup poll data from random samples in over 35 countries using Gallup's various research techniques (e.g., pairing male and female interviewers, testing the questions beforehand, communicating with local leaders when approval is necessary, travelling by foot if that is the only way to reach a region, etc.) There was a great deal of data. It suggests, firstly, that individuals who dislike America and consider the September 11 attacks to be "perfectly justified" form a statistically distinct group with much more extreme views. The authors call this 7% of Muslims "Politically Radicalized". They chose that title "because of their radical political orientation" and clarify, "we are not saying that all in this group commit acts of violence. However, those with extremist views are a potential source for recruitment or support for terrorist groups." The data also indicates that poverty is not simply to blame for the comparatively radical views of this 7% of Muslims, who tend to be better educated than moderates. The authors say that contrary to what the media may indicate, most Muslims believe that the September 11 attacks cannot actually be justified at all. The authors called this 55% of Muslims "Moderates". Included in that category were an additional 12% who said the attacks almost cannot be justified at all (thus, 67% of Muslims were classified as Moderates). 26% of Muslims were neither moderates nor radicals, leaving the remaining 7% called "Politically Radicalized". Esposito and Mogahed explain that the labels should not be taken as being perfectly definitive. Because there may be individuals who would generally not be considered radical, although they believe the attacks were justified, or vice versa. Perceptions of Islam Negative perceptions Philip W. Sutton and Stephen Vertigans describe Western views on Islam as based on a stereotype of it as an inherently violent religion, characterizing it as a 'religion of the sword'. They characterize the image of Islam in the Western world as a religion which is "dominated by conflict, aggression, 'fundamentalism', and global-scale violent terrorism." Juan Eduardo Campo writes that, "Europeans (have) viewed Islam in various ways: sometimes as a backward, violent religion; sometimes as an Arabian Nights fantasy; and sometimes as a complex and changing product of history and social life." Robert Gleave writes that, "at the centre of popular conceptions of Islam as a violent religion are the punishments carried out by regimes hoping to bolster both their domestic and international Islamic credentials." The 9/11 attack on the US has led many non-Muslims to indict Islam as a violent religion. According to Corrigan and Hudson, "some conservative Christian leaders (have) complained that Islam (is) incompatible with what they believed to be a Christian America." 
Examples of evangelical Christians who have expressed such sentiments include Franklin Graham, an American Christian evangelist and missionary, and Pat Robertson, an American media mogul, an executive chairman, and a former Southern Baptist minister. A survey conducted by LifeWay Research, a research group affiliated with the Southern Baptist Convention, found that two out of three Protestant pastors believe that Islam is a "dangerous" religion. Ed Stetzer, President of LifeWay, said "It's important to note our survey asked whether pastors viewed Islam as 'dangerous,' but that does not necessarily mean 'violent.'" Dr. Johannes J.G. Jansen was an Arabist who wrote an essay titled "Religious Roots of Muslim Violence", in which he discusses all aspects of the issue at length and unequivocally concludes that Muslim violence is mostly based on Islamic religious commands. Media coverage of terrorist attacks plays a critical role in creating negative perceptions of Islam and Muslims. Powell described how Islam initially appeared in U.S. news cycles because of its relationships to oil, Iraq, Iran, Afghanistan, and terrorism (92). Thus the audience was given a basis for associating Muslims with control of oil, with war, and with terrorism. A total of 11 terrorist attacks on U.S. soil since 9/11, and their coverage (in 1,638 news stories) in the national media, were analyzed "through frames composed of labels, common themes, and rhetorical associations" (Powell 94). The key findings are summarized below: The media coverage of terrorism in the U.S. feeds a culture of fear of Islam and describes the United States as a good Christian nation (Powell 105). A clear pattern of reporting was detected that differentiates "terrorists who were Muslim with international ties and terrorists who were U.S. citizens with no clear international ties" (Powell 105). This was used to frame a "war of Islam on the United States". "Muslim Americans are no longer 'free' to practice and to name their religion without fear of prosecution, judgment, or connection to terrorism." (Powell 107) Islamophobia Islamophobia denotes the prejudice against, the hatred towards, or the fear of the religion of Islam or Muslims. While the term is now widely used, both the term itself and the underlying concept of Islamophobia have been heavily criticized. In order to differentiate between prejudiced views of Islam and secularly motivated criticism of Islam, other terms have been proposed. The causes and characteristics of Islamophobia are still debated. Some commentators have posited an increase in Islamophobia resulting from the September 11 attacks, while others have associated it with the increased presence of Muslims in the United States, the European Union and other secular nations. Steven Salaita contends that indeed since 9/11, Arab Americans have evolved from what Nadine Naber described as an invisible group in the United States into a highly visible community that directly or indirectly has an effect on the United States' culture wars, foreign policy, presidential elections and legislative tradition. For example, Islamophobia is rampant in China, where more than one million Muslims have been arbitrarily detained in the Xinjiang region. Re-education camps are just one part of the government's crackdown on Uighurs. 
Favorable perceptions In response to these perceptions, Ram Puniyani, a secular activist and writer, says that "Islam does not condone violence but, like other religions, does believe in self-defence". Mark Juergensmeyer describes the teachings of Islam as ambiguous about violence. He states that, like all religions, Islam occasionally allows for force while stressing that the main spiritual goal is one of nonviolence and peace. Ralph W. Hood, Peter C. Hill and Bernard Spilka write in The Psychology of Religion: An Empirical Approach, "Although it would be a mistake to think that Islam is inherently a violent religion, it would be equally inappropriate to fail to understand the conditions under which believers might feel justified in acting violently against those whom their tradition feels should be opposed." Similarly, Chandra Muzaffar, a political scientist, Islamic reformist and activist, says, "The Quranic exposition on resisting aggression, oppression and injustice lays down the parameters within which fighting or the use of violence is legitimate. What this means is that one can use the Quran as the criterion for when violence is legitimate and when it is not." See also Christianity and violence Judaism and violence Persecution of Muslims Hindu terrorism Violence against Muslims in independent India Sectarian violence among Muslims Talibanization Notes References Further reading Criticism of Islam Islam-related controversies History of Islam Violence
Islam and violence
Biology
10,234
52,156,276
https://en.wikipedia.org/wiki/Tidal%20flooding
Tidal flooding, also known as sunny day flooding or nuisance flooding, is the temporary inundation of low-lying areas, especially streets, during exceptionally high tide events, such as at full and new moons. The highest tides of the year may be known as the king tide, with the month varying by location. These kinds of floods tend not to be a high risk to property or human safety, but further stress coastal infrastructure in low-lying areas. This kind of flooding is becoming more common in cities and other human-occupied coastal areas as sea level rise associated with climate change and other human-related environmental impacts such as coastal erosion and land subsidence increase the vulnerability of infrastructure. Geographies faced with these issues can utilize coastal management practices to mitigate the effects in some areas, but increasingly these kinds of floods may develop into coastal flooding that requires managed retreat or other, more extensive climate change adaptation practices for vulnerable areas. Effects on infrastructure Tidal flooding is capable of greatly inhibiting natural gravity-based drainage systems in low-lying areas when it reaches levels that are below visible inundation of the surface, but which are high enough to incapacitate the lower drainage or sewer system. Thus, even normal rainfall or storm surge events can cause greatly amplified flooding effects. One passive solution to intrusion through drainage systems is one-way back-flow valves in drainage ways. However, while this may prevent a majority of the tidal intrusion, it also inhibits drainage during exceptionally high tides that shut the valves. In Miami Beach, where resilience work is underway, the pump systems replace insufficient gravity-based systems. Relation to climate change Sunny day flooding is often associated with coastal regions, where sea level rise attributed to global warming can send water into the streets on days with elevated high tides. Further, regions with glaciers also experience sunny day flooding as climate change alters the dynamics of glacier meltwater. Abnormally hot temperatures not only swell rivers and creeks directly through accelerated snowmelt, but can burst ice dams and cause water from glacial lakes to swell waterways less predictably. A warming climate causes physical changes to the types of ice on a glacier. As glaciers retreat, there is less firn (water-retaining snow), so that more meltwater runs directly into the watershed over deeper, impervious glacial ice. Affected geographies United States Most of the coastal communities on the Eastern Seaboard of the United States are vulnerable to this kind of flooding as sea level rise increases. Due to changing geography such as subsidence, and poorly planned development, tidal flooding may exist separately from modern nuisance flooding associated with sea level rise and anthropogenic climate change. The widely publicized Holland Island in Maryland, for example, has disappeared over the years mainly due to subsidence and coastal erosion. In the New Orleans area on the Gulf Coast of Louisiana, land subsidence results in the Grand Isle tide gauge showing an extreme upward sea level trend. Florida In Florida, controversy arose when the state government mandated that the term "nuisance flooding" and other terms be used in place of terms such as sea level rise, climate change and global warming, prompting allegations of climate change denial, specifically against Governor Rick Scott. 
This occurred even as Florida, and specifically South Florida and the Miami metropolitan area, is one of the areas in the world most at risk from the potential effects of sea level rise, and one where the frequency and severity of tidal flooding events increased in the 21st century. The issue is more bipartisan in South Florida, particularly in places like Miami Beach, where a several hundred million dollar project is underway to install more than 50 pumps and physically raise roads to combat the flooding, mainly along the west side of South Beach, formerly a mangrove wetland where the average elevation is less than . In the Miami metropolitan area, where the vast majority of the land is below , even a one-foot increase over the average high tide can cause widespread flooding. The 2015 and 2016 king tide event levels reached about MLLW, above mean sea level, or about NAVD88, and nearly the same above MHHW. While the tide range is very small in Miami, averaging about , with the greatest range being less than , the area is very sensitive to minute differences, down to single inches, due to the vast area at low elevation. NOAA tide gauge data for most stations shows current water level graphs relative to a fixed vertical datum, as well as mean sea level trends for some stations. During the king tides, the local Miami area tide gauge at Virginia Key shows levels running at times or more over datum. Fort Lauderdale has installed over one hundred tidal valves since 2013 to combat flooding. Fort Lauderdale is nicknamed the "Venice of America" due to its roughly of canals. A recent University of Florida study correlated the increased tidal flooding in south Florida, at least from 2011 to 2015, to episodic atmospheric conditions. The rate was about 3/4 of an inch (19 mm) per year, versus the global rate of just over a tenth of an inch (3 mm) per year. See also Acqua alta, tidal peaks in the northern Adriatic Sea which cause flooding in the Venetian Lagoon References External links Water Levels for Virginia Key Tide Gauge (Miami) South Florida's Rising Seas - Sea Level Rise Documentary, South Florida PBS Tides Flood
Tidal flooding
Environmental_science
1,072
6,047,282
https://en.wikipedia.org/wiki/Dicoumarol
Dicoumarol (INN) or dicumarol (USAN) is a naturally occurring anticoagulant drug that depletes stores of vitamin K (similar to warfarin, a drug that dicoumarol inspired). It is also used in biochemical experiments as an inhibitor of reductases. Dicoumarol is a natural chemical substance of combined plant and fungal origin. It is a derivative of coumarin, a bitter-tasting but sweet-smelling substance made by plants that does not itself affect coagulation, but which is (classically) transformed, in mouldy feeds or silages, by a number of species of fungi into active dicoumarol. Dicoumarol does affect coagulation, and was discovered in mouldy wet sweet-clover hay as the cause of a naturally occurring bleeding disease in cattle. See warfarin for a more detailed discovery history. Identified in 1940, dicoumarol became the prototype of the 4-hydroxycoumarin anticoagulant drug class. Dicoumarol itself, for a short time, was employed as a medicinal anticoagulant drug, but since the mid-1950s it has been replaced by its simpler derivative warfarin and other 4-hydroxycoumarin drugs. It is given orally, and it acts within two days. Uses Dicoumarol was used, along with heparin, for the treatment of deep venous thrombosis. Unlike heparin, this class of drugs may be used for months or years. Mechanism of action Like all 4-hydroxycoumarin drugs, it is a competitive inhibitor of vitamin K epoxide reductase, an enzyme that recycles vitamin K, thus causing depletion of active vitamin K in blood. This prevents the formation of the active form of prothrombin and several other coagulant enzymes. These compounds do not antagonize vitamin K directly; rather, they promote the depletion of active vitamin K in bodily tissues, which is why vitamin K is an effective antidote for dicoumarol toxicity. The degree of vitamin K depletion, and thus the anticoagulant effect and toxicity of dicoumarol, is measured with the prothrombin time (PT) blood test. Poisoning Overdose results in serious, sometimes fatal uncontrolled hemorrhage. History Dicoumarol was isolated by Karl Link's laboratory at the University of Wisconsin, six years after a farmer had brought a dead cow and a milk can full of uncoagulated blood to an agricultural extension station of the university. The cow had died of internal bleeding after eating moldy sweet clover; an outbreak of such deaths had begun in the 1920s during the Great Depression as farmers could not afford to waste hay that had gone bad. Link's work led to the development of the rat poison warfarin and then to the anticoagulants still in clinical use today. See also Ethyl biscoumacetate References Further reading External links Vitamin K antagonists Coumarin drugs Dimers (chemistry) 4-Hydroxycoumarins
Dicoumarol
Chemistry,Materials_science
650
37,516,758
https://en.wikipedia.org/wiki/Segmented%20scan
In computer science, a segmented scan is a modification of the prefix sum with an equal-sized array of flag bits to denote segment boundaries on which the scan should be performed. Example In the following, the '1' flag bits indicate the beginning of each segment, and the running sum restarts at every flag. For the data [1, 2, 3, 4, 5, 6] with flags [1, 0, 0, 1, 0, 1], the segments and their running sums are: Group 1: 1 = 1; 3 = 1 + 2; 6 = 1 + 2 + 3. Group 2: 4 = 4; 9 = 4 + 5. Group 3: 6 = 6. The segmented scan result is therefore [1, 3, 6, 4, 9, 6]. An alternative method used by High Performance Fortran is to begin a new segment at every transition of flag value. An advantage of this representation is that it is useful with both prefix and suffix (backwards) scans without changing its interpretation. In HPF, the Fortran logical data type is used to represent segments. So one equivalent flag array for the above example would be T T T F F T, with the logical value held constant within each segment and flipped at each segment boundary. See also Flattening transformation References Concurrent algorithms Higher-order functions
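A minimal sequential sketch of the operation described in the example above, written in Python for illustration (the function name, the use of addition as the combining operator, and the flag-bit encoding with '1' marking a segment start are assumptions for this sketch, not part of the original article):

```python
def segmented_scan(data, flags, op=lambda a, b: a + b):
    """Inclusive segmented scan: restart the running reduction
    wherever the flag bit is 1 (the start of a new segment)."""
    if len(data) != len(flags):
        raise ValueError("data and flags must have the same length")
    result = []
    acc = None
    for value, flag in zip(data, flags):
        if flag == 1 or acc is None:
            acc = value            # start a new segment
        else:
            acc = op(acc, value)   # continue the current segment
        result.append(acc)
    return result

# Reproduces the worked example: segments [1, 2, 3], [4, 5], [6]
print(segmented_scan([1, 2, 3, 4, 5, 6], [1, 0, 0, 1, 0, 1]))
# -> [1, 3, 6, 4, 9, 6]
```

Parallel implementations typically recast this as an ordinary scan over (flag, value) pairs with an associative combining operator, so it can run in a logarithmic number of steps; the sequential loop above is only meant to fix the semantics.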
Segmented scan
Technology
176
78,923,897
https://en.wikipedia.org/wiki/Flezurafenib
Flezurafenib is an investigational new drug designed as a rapidly accelerated fibrosarcoma (RAF) kinase inhibitor which is being evaluated for the treatment of cancer. Developed by Jazz Pharmaceuticals, this novel therapeutic agent is currently being explored for its efficacy against solid tumors and hematological malignancies harboring oncogenic mutations that activate the RAS-RAF-MAPK signaling pathway. As of January 2025, flezurafenib has reached Phase 1 clinical trials, where it is being evaluated for the treatment of advanced cancers and advanced malignant solid neoplasms. References Antineoplastic drugs Chromanes Ethers 4-Fluorophenyl compounds Imidazoles Naphthyridines
Flezurafenib
Chemistry
150
63,018,589
https://en.wikipedia.org/wiki/Samsung%20Galaxy%20Watch%20Active%202
The Samsung Galaxy Watch Active 2 (stylized as Samsung Galaxy Watch Active2) is a smartwatch developed by Samsung Electronics, running the Tizen operating system. Announced on 5 August 2019, the Active 2 was scheduled for availability in the United States starting on 23 September 2019. The Active 2 was released in two sizes, 40mm or 44mm, and two connectivity formats, Bluetooth-only or LTE. The LTE version functions as a standalone phone and allows a user to call, text, pay, and stream music or video without a nearby smartphone. An Under Armour Edition of the Active 2 was released on October 11, 2019, containing a watch face and strap branded with the Under Armour logo. As part of the move from Tizen OS to Wear OS by Google, Samsung announced that, starting from August 2022, the Watch Active 2 will stop receiving software and security updates, while the Watch 3 will stop receiving software updates in 2023. Specifications References External links Galaxy Watch Active 2 on S Consumer electronics brands Products introduced in 2019 Smartwatches Samsung wearable devices
Samsung Galaxy Watch Active 2
Technology
215
3,117,016
https://en.wikipedia.org/wiki/List%20of%20textbooks%20in%20thermodynamics%20and%20statistical%20mechanics
A list of notable textbooks in thermodynamics and statistical mechanics, arranged by category and date. Only or mainly thermodynamics Both thermodynamics and statistical mechanics 2e Kittel, Charles; and Kroemer, Herbert (1980) New York: W.H. Freeman 2e (1988) Chichester: Wiley , . (1990) New York: Dover Stephen G. Brush (1976) The Kind of Motion We Call Heat I-II North-Holland ISBN 0-444-87008-3 Statistical mechanics . 2e (1936) Cambridge: University Press; (1980) Cambridge University Press. ; (1979) New York: Dover Vol. 5 of the Course of Theoretical Physics. 3e (1976) Translated by J.B. Sykes and M.J. Kearsley (1980) Oxford : Pergamon Press. . 3e (1995) Oxford: Butterworth-Heinemann . 2e (1987) New York: Wiley . 2e (1988) Amsterdam: North-Holland . 2e (1991) Berlin: Springer Verlag , ; (2005) New York: Dover 2e (2000) Sausalito, Calif.: University Science 2e (1998) Chichester: Wiley S. R. De Groot, P. Mazur (2011) Non-Equilibrium Thermodynamics, Dover Books on Physics, ISBN 978-0486647418. Specialized topics Kinetic theory Vol. 10 of the Course of Theoretical Physics (3rd Ed). Translated by J.B. Sykes and R.N. Franklin (1981) London: Pergamon , Quantum statistical mechanics Mathematics of statistical mechanics Translated by G. Gamow (1949) New York: Dover . Reissued (1974), (1989); (1999) Singapore: World Scientific ; (1984) Cambridge: University Press . 2e (2004) Cambridge: University Press Miscellaneous (available online here) Historical (1896, 1898) Translated by Stephen G. Brush (1964) Berkeley: University of California Press; (1995) New York: Dover Translated by J. Kestin (1956) New York: Academic Press. German Encyclopedia of Mathematical Sciences. Translated by Michael J. Moravcsik (1959) Ithaca: Cornell University Press; (1990) New York: Dover See also List of textbooks on classical mechanics and quantum mechanics List of textbooks in electromagnetism List of books on general relativity Further reading References External links Statistical Mechanics and Thermodynamics Texts Clark University curriculum development project Lists of science textbooks Mathematics-related lists Physics-related lists Textbooks Textbooks
List of textbooks in thermodynamics and statistical mechanics
Physics
535
38,324,410
https://en.wikipedia.org/wiki/HD%2064760
HD 64760 (J Puppis) is a class B0.5 supergiant star in the constellation Puppis. Its apparent magnitude is 4.24 and it is approximately 1,660 light years away based on parallax. The stellar wind structure of HD 64760 has been extensively studied. Its spectrum shows classic P Cygni profiles indicative of strong mass loss and high-velocity winds, but the spectral line profiles are also variable. The variation shows a 2.4 day modulation which is caused by non-radial pulsation of the star itself. Other pulsation periods around 4.81 hours have also been identified. HD 64760 rotates rapidly. Despite its large size it completes a rotation every 4.1 days compared to every 27 days for the sun. This causes the star to be an oblate spheroid, with the equatorial radius 20% larger than the polar radius. It is estimated that the temperature of the photosphere is 23,300 K at the equator and 29,000 K at the poles, due to gravity darkening. In addition, the surface has temperature variations due to its pulsations. The effective temperature for the star as a whole is 24,600 K, to match the bolometric luminosity of . References Puppis B-type supergiants Puppis, J CD-47 3396 038518 3090 064760
HD 64760
Astronomy
288
13,247,267
https://en.wikipedia.org/wiki/Lyndon%E2%80%93Hochschild%E2%80%93Serre%20spectral%20sequence
In mathematics, especially in the fields of group cohomology, homological algebra and number theory, the Lyndon spectral sequence or Hochschild–Serre spectral sequence is a spectral sequence relating the group cohomology of a normal subgroup N and the quotient group G/N to the cohomology of the total group G. The spectral sequence is named after Roger Lyndon, Gerhard Hochschild, and Jean-Pierre Serre. Statement Let $G$ be a group and $N$ be a normal subgroup. The latter ensures that the quotient $G/N$ is a group, as well. Finally, let $A$ be a $G$-module. Then there is a spectral sequence of cohomological type $H^p(G/N, H^q(N, A)) \Longrightarrow H^{p+q}(G, A)$ and there is a spectral sequence of homological type $H_p(G/N, H_q(N, A)) \Longrightarrow H_{p+q}(G, A)$, where the arrow '$\Longrightarrow$' means convergence of spectral sequences. The same statement holds if $G$ is a profinite group, $N$ is a closed normal subgroup and $H^*$ denotes the continuous cohomology. Examples Homology of the Heisenberg group The spectral sequence can be used to compute the homology of the Heisenberg group G with integral entries, i.e., matrices of the form $\begin{pmatrix} 1 & a & c \\ 0 & 1 & b \\ 0 & 0 & 1 \end{pmatrix}$ with $a, b, c \in \mathbb{Z}$. This group is a central extension $0 \to \mathbb{Z} \to G \to \mathbb{Z} \oplus \mathbb{Z} \to 0$, with center corresponding to the subgroup with $a = b = 0$. The spectral sequence for the group homology, together with the analysis of a differential in this spectral sequence, shows that $H_i(G, \mathbb{Z})$ is $\mathbb{Z}$ for $i = 0, 3$, is $\mathbb{Z} \oplus \mathbb{Z}$ for $i = 1, 2$, and vanishes otherwise (a worked sketch of this computation is given below). Cohomology of wreath products For a group G, the wreath product $G \wr \mathbb{Z}/p$ is an extension $1 \to G^p \to G \wr \mathbb{Z}/p \to \mathbb{Z}/p \to 1$. The resulting spectral sequence of group cohomology with coefficients in a field k is known to degenerate at the $E_2$-page. Properties The associated five-term exact sequence is the usual inflation-restriction exact sequence: $0 \to H^1(G/N, A^N) \to H^1(G, A) \to H^1(N, A)^{G/N} \to H^2(G/N, A^N) \to H^2(G, A)$. Generalizations The spectral sequence is an instance of the more general Grothendieck spectral sequence of the composition of two derived functors. Indeed, $H^*(G, -)$ is the derived functor of $(-)^G$ (i.e., taking G-invariants) and the composition of the functors $(-)^N$ and $(-)^{G/N}$ is exactly $(-)^G$. A similar spectral sequence exists for group homology, as opposed to group cohomology, as well. References (paywalled) Spectral sequences Group theory
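A worked sketch, in LaTeX, of the Heisenberg-group computation referenced above; the identification of the $E^2$ page and of the single relevant differential follows the standard argument (trivial coefficients, central extension) and is offered as an illustration rather than a quotation from the article:

```latex
% Integral Heisenberg group: central extension  0 -> Z -> G -> Z^2 -> 0.
% Hochschild–Serre spectral sequence in homology (the action is trivial because the extension is central):
\[
  E^2_{pq} \;=\; H_p\!\left(\mathbb{Z}^2,\, H_q(\mathbb{Z},\mathbb{Z})\right)
  \;\Longrightarrow\; H_{p+q}(G,\mathbb{Z}).
\]
% H_q(Z,Z) = Z for q = 0,1 and 0 otherwise; H_p(Z^2,Z) = Z, Z^2, Z for p = 0,1,2.
% Hence the E^2 page consists of two identical rows:
\[
  E^2_{p,q} \cong
  \begin{cases}
    \mathbb{Z}   & (p,q) \in \{(0,0),(2,0),(0,1),(2,1)\},\\
    \mathbb{Z}^2 & (p,q) \in \{(1,0),(1,1)\},\\
    0            & \text{otherwise.}
  \end{cases}
\]
% The only possibly nonzero differential is d^2 : E^2_{2,0} -> E^2_{0,1}.
% Since H_1(G) = G^{ab} = Z^2 (the commutator generates the center), this d^2 must be an
% isomorphism; all remaining entries survive to E^infty, giving
\[
  H_i(G,\mathbb{Z}) \;\cong\;
  \begin{cases}
    \mathbb{Z}   & i = 0,\ 3,\\
    \mathbb{Z}\oplus\mathbb{Z} & i = 1,\ 2,\\
    0            & i > 3.
  \end{cases}
\]
```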
Lyndon–Hochschild–Serre spectral sequence
Mathematics
411
5,906,977
https://en.wikipedia.org/wiki/Microtek
Microtek International Inc. () is a Taiwan-based multinational manufacturer of digital imaging products and other consumer electronics. It produces imaging equipment for medical, biological and industrial fields. It occupies 20 percent of the global imaging market and holds 450 patents worldwide. It is known for its scanner brands ScanMaker and ArtixScan. The company launched the world's first halftone optical film scanner in 1984, the world's first desktop halftone scanner in 1986, and the world's first color scanner in 1989. It has subsidiaries in Shanghai, Tokyo, Singapore and Rotterdam. It expanded its product lines into the manufacturing of LCD monitors, LCD projectors and digital cameras. History 1980-1985: Founding and incorporation In 1979, the Taiwanese government launched the Hsinchu Science and Industrial Park (HSIP) as a vision of Shu Shien-Siu to emulate Silicon Valley and to lure back overseas Taiwanese with their experience and knowledge in engineering and technology fields. Initially there were 14 companies, the first was Wang Computer (王氏電腦), by 2010 only six of the original pioneers remained: United Microelectronics Corporation (聯電), Microtek International, Inc. (全友), Quartz Frequency Technology (頻率), Tecom (東訊), Sino-American Silicon Products Inc. (中美矽晶) and Flow Asia Corporation (福祿遠東). Microtek (Microelectronics Technology) was co-founded in HSIP in 1980 by five Californian Taiwanese, three were colleagues who had worked at Xerox Bobo Wang (王渤渤), Robert Hsieh (謝志鴻), Carter Tseng (曾憲章) and two were colleagues from the University of Southern California, Benny Hsu (許正勳) and Hu Chung-hsing (胡忠信). They decided to set up root after Hsu was invited by HSIP Manager Dr. Irving Ho (何宜慈). In September 1983, the Allied Association for Science Parks Industries (台灣科學園區同業公會 abbr. 竹科) was established and Hsu was elected to be its first chairman. Microtek first entered the industry in 1983, when scanners were little more than expensive tools for hobbyists. In 1984, it introduced the MS-300A, a desktop halftone scanner. At about the same time, the company realized a need for scanning software for mainstream users and developed EyeStar, the world's first scanning software application. EyeStar made desktop scanning a functional reality, serving as the de facto standard for image format for importing graphics before TIFF came to fruition. Microtek proceeded to develop an OCR, or Optical Character Recognition, program for text scanning, once more successfully integrating a core function of scanning with its machines. 1985: Microtek Lab, Inc. In 1985, Microtek set up its United States subsidiary, Microtek Lab, Inc., in Cerritos, California. The company went public in 1988. It was one of Taiwan's initial technology initial public offerings. Microtek has research and development labs located in California and Taiwan dedicated to optics design, mechanical and electronic engineering, software development, product quality, and technological advancement. According to AnnaLee Saxenian's 2006 book The New Argonauts: Regional Advantage in a Global Economy, Microtek has produced more than 20% of the worldwide image scanner market. 1989: Ulead Systems In 1989, Microtek invested in Ulead Systems (based in Taipei) which became the first publicly traded software company in Taiwan in 1999. Ulead System was founded by Lotus Chen, Lewis Liaw and Way-Zen Chen three colleagues from Taiwan's Institute for Information Industry. 
Microtek helped Ulead by jointly purchasing CCD sensors from Kodak, which benefited both companies, since the sensors were a component not yet produced locally at the time. Products Herbarium Specimen Digitization ObjectScan 1600 is an on-top scanner designed for capturing high-resolution images of herbarium specimens. The device is bundled with ScanWizard Graphy, which provides scanner settings and image correction tools. The maximum resolution is 1600 dpi. ScanWizard Botany is workstation software for specimen image processing, electronic data capture, and uploading metadata to a database or server. The software has an OCR (Optical Character Recognition) function that can automatically detect label information and read barcodes on botanical collections. This information is saved as metadata. It also includes image processing tools such as brightness adjustment and contrast adjustment. MiVAPP Botany is a botanical database management system and web-server system. This system allows botanical gardens, universities, and museums to share their collections online. Operations Taiwan Microtek International Inc.: Headquarters, Science-Based Industrial Park, Hsinchu City Taipei Office: Da-an District, Taipei City Mainland China Shanghai Microtek Technology Co., Ltd: Shanghai Shanghai Microtek Medical Device Co., Ltd: Shanghai Shanghai Microtek Trading Co., Ltd: Shanghai Microtek Computer Technology (Wu Jiang) Co., Ltd: Jiangsu See also Ulead Systems List of companies in Taiwan References 1980 establishments in Taiwan Computer companies of Taiwan Computer hardware companies Computer peripheral companies Display technology companies Electronics companies of Taiwan Manufacturing companies based in Hsinchu Computer companies established in 1980 Manufacturing companies established in 1980 Companies listed on the Taiwan Stock Exchange Taiwanese brands Image scanners
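To give a rough sense of the data volumes involved when digitizing a herbarium sheet at the ObjectScan 1600's quoted maximum of 1600 dpi, the short Python sketch below estimates pixel dimensions and uncompressed file size. The sheet dimensions (roughly A3) and the 24-bit color depth are illustrative assumptions, not Microtek specifications.

dpi = 1600                              # maximum optical resolution quoted above
sheet_w_in, sheet_h_in = 11.7, 16.5     # assumed sheet size (about A3); not a Microtek spec
bytes_per_pixel = 3                     # assumed 24-bit RGB

width_px = round(sheet_w_in * dpi)
height_px = round(sheet_h_in * dpi)
size_mb = width_px * height_px * bytes_per_pixel / 1e6

print(f"{width_px} x {height_px} pixels, ~{size_mb:.0f} MB uncompressed")
# prints roughly 18720 x 26400 pixels, on the order of 1.5 GB before compression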
Microtek
Technology
1,092
69,026,183
https://en.wikipedia.org/wiki/Regonyl
Regonyl (developmental code name TX-380), also known as 17α-ethynyl-5α-androst-2-en-17β-ol 17β-acetate, is a steroidal drug described as an antiprogestogen and "antiprolactin" (prolactin inhibitor). It was studied for lactation inhibition in bitches. It has minimal to no androgenic, estrogenic, or progestogenic activity but is said to strongly inhibit the hypothalamic–pituitary–gonadal axis at central and peripheral levels and to markedly oppose the action of progesterone. However, the antiprogestogenic effects of regonyl do not appear to be due to direct interaction with the progesterone receptor. The actions of regonyl result in estrus cycle disturbances and impaired ovulation. Regonyl was proposed for use in humans, for instance in the treatment of gynecological disorders like endometriosis and benign breast disease, and in hormonal contraception. References Abandoned drugs Alcohols Ethynyl compounds Androstanes Antiprogestogens Drugs with unknown mechanisms of action Prolactin modulators Veterinary drugs
Regonyl
Chemistry
265
1,233,851
https://en.wikipedia.org/wiki/Principle%20of%20good%20enough
The principle of good enough or "good enough" principle is a rule in software and systems design. It indicates that consumers will use products that are good enough for their requirements, despite the availability of more advanced technology. See also 80:20 rule Heuristic KISS principle Minimalism (computing) Perfect is the enemy of good Proof of concept Rule of thumb Satisficing Worse is Better You aren't gonna need it References Software Craftsmanship: The New Imperative Creating a Software Engineering Culture Fundamental Concepts for the Software Quality Engineer, Volume 2 Software Creativity 2.0 Software War Stories: Case Studies in Software Management External links "The New Mantra of Tech: It's Good Enough" (Gizmodo by Mark Wilson April 27, 2009) Software architecture
Principle of good enough
Engineering
157
17,226,975
https://en.wikipedia.org/wiki/Electronic%20resource%20management
Electronic resource management (ERM) comprises the practices and techniques used by librarians and library staff to track the selection, acquisition, licensing, access, maintenance, usage, evaluation, retention, and de-selection of a library's electronic information resources. These resources include, but are not limited to, electronic journals, electronic books, streaming media, databases, datasets, CD-ROMs, and computer software. History Following the advent of the Digital Revolution, libraries began incorporating electronic information resources into their collections and services. The inclusion of these resources was driven by the core values of library science, as expressed by Ranganathan's five laws of library science, especially the belief that electronic technologies made access to information more direct, convenient, and timely. By the end of the 1990s, however, it became clear that the techniques used by librarians to manage physical resources did not transfer well to the electronic medium. In January 2000, the Digital Library Federation (DLF) conducted an informal survey aimed at identifying the major challenges facing research libraries regarding their use of information technologies. The survey revealed that digital collection development was considered the greatest source of anxiety and uncertainty among librarians, and that knowledge regarding the handling of electronic resources was rarely shared outside individual libraries. As a result, the Digital Library Federation created the Collection Practices Initiative and commissioned three reports with the goal of documenting effective practices in electronic resource management. In his 2001 report entitled Selection and Presentation of Commercially Available Electronic Resources, Timothy Jewell of the University of Washington discussed the home-grown and ad hoc management techniques academic libraries were employing to handle the acquisition, licensing, and activation of electronic resources. He concluded that "existing library management systems and software lack important features and functionality" to track electronic resources and that "coordinated efforts to define needs and establish standards may prove to be of broad benefit." Writing in The Scholarly Kitchen in 2019, Joseph Esposito noted that in a meeting with the heads of a number of academic libraries of various sizes, there was unanimous expression of frustration with electronic resource management systems. Data analysis In the 2020s, libraries have expanded their usage of open-source data analysis tools such as the non-profit Unpaywall Journals, which combines several methods to help librarians analyze data that can be used to select electronic resources. See also ERAMS (e-resource access and management services) OpenURL knowledge base UKSG E-Resources Management Handbook References Library management Library automation
Electronic resource management
Engineering
492
32,392,505
https://en.wikipedia.org/wiki/Social%20film
A social film is a type of interactive film that is presented through the lens of social media. A social film is distributed digitally and integrates with a social networking service, such as Facebook or Google+. It combines features of web video, social network games and social media. Key elements Social films are a recent phenomenon and, as a result, there are few precedents for their format. Nevertheless, the medium has certain identifiable elements: Casual entertainment Social media User-generated content Game mechanics Using a combination of these factors, a social film invites viewers to interact directly with the work, be it through social media functionality like comments and ranking or by adding directly to the narrative itself. Just as with memes, social film distribution relies on the viral spread enabled by social media. This is based on the viral expansion loops model, in which a viewer benefits from sharing the application with friends, exponentially creating new viewers compelled to share the application. History The first social film, Him, Her and Them, was produced and released by Murmur in April 2011. It was distributed exclusively through Facebook and promoted as the first “Facebook film.” The film is composed of short video clips and interactive slideshows, integrating Facebook's Social Graph API. Users participate via text-based additions to the story, which are viewable only by friends within their social network. Other examples of social film include lonelygirl15, which used YouTube posts to create an interactive video series about a fictional character, and Ron Howard's Project Imagin8ion, where a photo contest was used as the basis for a short film. In July 2011, Intel and Toshiba partnered to create Hollywood's first social film experience, a thriller called Inside, directed by D.J. Caruso and starring Emmy Rossum. The project is broken up into several segments across multiple social media platforms including Facebook, YouTube, and Twitter. In this instance, the audience is challenged to help Emmy Rossum's character, Christina, safely make it out of the room she's been trapped in. This form of social film is a major undertaking in that it combines social media activity with A-list acting talent to create a user experience that all happens in real time. In a related, though not identical, vein, Hollywood has been experimenting with interactive and crowd-sourced films. For example, Ridley Scott's Life in a Day is a documentary-style feature that strives to be the largest crowd-sourced feature film ever created. In August 2012, Intel and Toshiba partnered again to create The Beauty Inside, directed by Drake Doremus, starring Mary Elizabeth Winstead and Topher Grace. It was Hollywood's first social film to give everyone in the audience a chance to play Alex, the lead role. The experience was broken up into six filmed episodes interspersed with real-time interactive storytelling that all takes place on Alex's Facebook timeline. In August 2013, Intel and Toshiba released their third entry into the category, The Power Inside, directed by Will Speck and Josh Gordon and starring Harvey Keitel, Analeigh Tipton, and Craig Roberts. It was Hollywood's first social film to ask the audience to audition to help save or destroy the world. The experience is broken up into six filmed episodes interspersed with user-generated content and interactive storytelling on the main character's Facebook timeline. 
In 2015, Intel partnered with Dell for their fourth entry, What Lives Inside, directed by Robert Stromberg and starring Colin Hanks, Catherine O'Hara, and J. K. Simmons. The first of four episodes was released on Hulu on March 25, 2015. References Film genres Social media
Social film
Technology
746
48,668
https://en.wikipedia.org/wiki/Perlite
Perlite is an amorphous volcanic glass that has a relatively high water content, typically formed by the hydration of obsidian. It occurs naturally and has the unusual property of greatly expanding when heated sufficiently. It is an industrial mineral, suitable "as ceramic flux to lower the sintering temperature", and a commercial product useful for its low density after processing. Properties Perlite softens when it reaches temperatures of . Water trapped in the structure of the material vaporises and escapes, and this causes the expansion of the material to 7–16 times its original volume. The expanded material is a brilliant white, due to the reflectivity of the trapped bubbles. Unexpanded ("raw") perlite has a bulk density around 1100 kg/m3 (1.1 g/cm3), while typical expanded perlite has a bulk density of about 30–150 kg/m3 (0.03–0.150 g/cm3). Typical analysis 70–75% silicon dioxide: SiO2 12–15% aluminium oxide: Al2O3 3–4% sodium oxide: Na2O 3–5% potassium oxide: K2O 0.5-2% iron oxide: Fe2O3 0.2–0.7% magnesium oxide: MgO 0.5–1.5% calcium oxide: CaO 3–5% loss on ignition (chemical / combined water) Sources and production Perlite is a non-renewable resource. The world reserves of perlite are estimated at 700 million tonnes. The confirmed resources of perlite existing in Armenia amount to 150 million m3, whereas the total amount of projected resources reaches up to 3 billion m3. Considering a specific density of 1.1 t/m3, the confirmed reserves in Armenia amount to 165 million tons. Other reported reserves are: Greece - 120 million tonnes, Turkey, USA and Hungary - about 49-57 million tonnes. Perlite world production, led by China, Turkey, Greece, USA, Armenia and Hungary, totalled 4.6 million tonnes in 2018. The Osham hills of Patanvav, Gujarat, are the only source of perlite in India. Uses Because of its low density and relatively low price (about US$150 per tonne of unexpanded perlite), many commercial applications for perlite have been developed. Construction and manufacturing In the construction and manufacturing fields, it is used in lightweight plasters, concrete and mortar, insulation and ceiling tiles. It may also be used to build composite materials that are sandwich-structured or to create syntactic foam. Perlite filters are commonly used to filter beer before it is bottled. Small quantities of perlite are also used in foundries, cryogenic insulation, and ceramics (as a clay additive). It is also used by the explosives industry. Aquatic filtration Perlite is currently used in commercial pool filtration technology as a replacement for diatomaceous earth filters. Perlite is an excellent filtration aid and is used extensively as an alternative to diatomaceous earth, and its use as a filter medium is growing worldwide. Several products on the market provide perlite-based filtration. Several perlite filters and perlite media have met NSF-50 approval (Aquify PMF Series and AquaPerl), which standardizes water quality and technology safety and performance. Perlite can be safely disposed of through existing sewage systems, although some pool operators choose to separate the perlite using settling tanks or screening systems to be disposed of separately. Biotechnology Due to its thermal and mechanical stability, non-toxicity, and high resistance to microbial attack and organic solvents, perlite is widely used in biotechnological applications. 
Perlite was found to be an excellent support for immobilization of biocatalysts such as enzymes for bioremediation and sensing applications. Agriculture In horticulture, perlite can be used as a soil amendment or alone as a medium for hydroponics or for starting cuttings. When used as an amendment, it has high permeability and low water retention and helps prevent soil compaction. Cosmetics Perlite is used in cosmetics as an absorbent and mechanical exfoliant. Substitutes Perlite can be replaced by other materials in all of its uses. Substitutes include: Diatomite, used as a filter aid Expanded clay, an alternative lightweight filler for building materials Shale Pumice Slag Vermiculite - many perlite expanders also exfoliate vermiculite and belong to both trade associations Occupational safety As perlite contains silicon dioxide, goggles and silica filtering masks are recommended when handling large quantities. United States The Occupational Safety and Health Administration (OSHA) has set the legal limit (permissible exposure limit) for perlite exposure in the workplace as 15 mg/m3 total exposure and 5 mg/m3 respiratory exposure over an 8-hour workday. The National Institute for Occupational Safety and Health (NIOSH) has set a recommended exposure limit (REL) of 10 mg/m3 total exposure and 5 mg/m3 respiratory exposure over an 8-hour workday. See also Biochar Foam glass Industrial minerals Mortar (firestop) Vermiculite References External links The Perlite Institute Mineral Information Institute – perlite "That Wonderful Volcanic Popcorn." Popular Mechanics, December 1954, p. 136. CDC – NIOSH Pocket Guide to Chemical Hazards Felsic rocks Vitreous rocks Building stone Soil improvers Industrial minerals
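The bulk-density figures quoted earlier in the article (roughly 1100 kg/m3 raw versus 30–150 kg/m3 expanded) can be turned into approximate volume-expansion factors with simple arithmetic. The Python sketch below is only rough arithmetic on the cited numbers, not a process model.

raw_density = 1100.0                # kg/m3, unexpanded ("raw") perlite, from the article
expanded_densities = (30.0, 150.0)  # kg/m3, typical range for expanded perlite

for rho in expanded_densities:
    # For a fixed mass, bulk volume scales inversely with bulk density.
    factor = raw_density / rho
    print(f"{rho:>5.0f} kg/m3 expanded  ->  ~{factor:.0f}x the original bulk volume")

# The resulting bulk figures (roughly 7x to 37x) bracket the 7-16x particle
# expansion quoted in the article; bulk density also includes the air between granules.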
Perlite
Chemistry
1,121
7,788,962
https://en.wikipedia.org/wiki/Roman%20military%20engineering
Roman military engineering was of a scale and frequency far beyond that of its contemporaries. Indeed, military engineering was in many ways endemic in Roman military culture, as demonstrated by each Roman legionary having as part of his equipment a shovel, alongside his gladius (sword) and pila (javelins). Workers, craftsmen, and artisans, known collectively as fabri, served in the Roman military. Descriptions of early Roman army structure (initially by phalanx, later by legion) attributed to king Servius Tullius state that two centuriae of fabri served under an officer, the praefectus fabrum. Roman military engineering took both routine and extraordinary forms, the former a part of standard military procedure, and the latter of an extraordinary or reactive nature. Proactive and routine military engineering The Roman legionary fortified camp Each Roman legion had a legionary fort as its permanent base. However, when on the march, particularly in enemy territory, the legion would construct a rudimentary fortified camp or castra, using only earth, turf and timber. Camp construction was the responsibility of engineering units to which specialists of many types belonged, officered by architecti (engineers), from a class of troops known as immunes who were excused from regular duties. These engineers would requisition manual labour from the soldiers at large as required. A legion could throw up a camp under enemy attack in a few hours. The names of the different types of camps apparently represent the amount of investment: tertia castra, quarta castra: "a camp of three days", "four days", etc. Bridges The engineers built bridges from timber and stone. Some Roman stone bridges survive. Stone bridges were made possible by the innovative use of keystone arches. One notable example was Julius Caesar's Bridge over the Rhine River. This bridge was completed in only ten days and is conservatively estimated to have been more than 100 m (328 feet) long. The construction was deliberately over-engineered for Caesar's stated purpose of impressing the Germanic tribes. Caesar writes in his War in Gaul that he rejected the idea of simply crossing in boats because it "would not be fitting for my own prestige and that of Rome" (at the time, he did not know that the Germanic tribes, with little knowledge of engineering, had already withdrawn from the area upon his arrival), and because a bridge would emphasize that Rome could travel wherever she wished. Caesar was able to cross over the completed bridge and explore the area uncontested, before crossing back over the subsequently dismantled bridge. Caesar related in War in Gaul that when he "sent messengers to the Sugambri to demand the surrender of those who had made war on me and on Gaul, they replied that the Rhine was the limit of Roman power". The bridge was intended to show otherwise. Siege machines Although most Roman siege engines were adaptations of earlier Greek designs, the Romans were adept at engineering them swiftly and efficiently, as well as innovating variations such as the repeating ballista. The 1st century BC army engineer Vitruvius describes in detail many of the Roman siege machines in his manuscript De architectura. Roads When invading enemy territories, the Roman army would often construct roads as it went, to allow swift reinforcement and resupply, or for easy retreat if necessary. Roman road-making skills were such that some survive today. 
Michael Grant credits the Roman building of the Via Appia with winning them the Second Samnite War. Civilian engineering by military troops When soldiers were not engaged in military campaigns, the legions had little to do, while costing the Roman state large sums of money. Thus, soldiers were involved in building civilian works to keep them well accustomed to hard physical labour and out of mischief, since it was believed that idle armies were a potential source of mutiny. Soldiers were put to use in the construction of roads, town walls, the digging of canals, drainage projects, aqueducts, harbours, and even in the cultivation of vineyards. Mining operations Soldiers were used in mining operations such as building aqueducts needed for prospecting for metal veins, activities such as hydraulic mining, and building reservoirs to hold water at the minehead. Reactive and extraordinary engineering The knowledge and experience learned through routine engineering lent itself readily to extraordinary engineering projects. In such projects, Roman military engineering greatly exceeded that of its contemporaries in imagination and scope. One notable project was the circumvallation of the entire city of Alesia and its Celtic leader Vercingetorix, within a massive double-wall – one inward-facing to prevent escape or offensive sallies, and one outward-facing to prevent attack by Celtic reinforcements. This wall is estimated to have been over long. A second example is the massive ramp built using thousands of tons of stones and beaten earth up to the invested city of Masada during the Jewish Revolt. The siege works and the ramp remain in a remarkable state of preservation. See also Technological history of the Roman military List of Roman pontoon bridges Roman architecture Roman aqueducts Roman engineering Notes External links Traianus - Technical investigation of Roman public works Military engineering
Roman military engineering
Engineering
1,039
41,291
https://en.wikipedia.org/wiki/Isochronous%20timing
A sequence of events is isochronous if the events occur regularly, or at equal time intervals. The term isochronous is used in several technical contexts, but usually refers to the primary subject maintaining a constant period or interval (the reciprocal of frequency), despite variations in other measurable factors in the same system. Isochronous timing is a characteristic of a repeating event whereas synchronous timing refers to the relationship between two or more events. In dynamical systems theory, an oscillator is called isochronous if its frequency is independent of its amplitude. In horology, a mechanical clock or watch is isochronous if it runs at the same rate regardless of changes in its drive force, so that it keeps correct time as its mainspring unwinds or chain length varies. Isochrony is important in timekeeping devices. Simply put, if a power providing device (e.g. a spring or weight) provides constant torque to the wheel train, it will be isochronous, since the escapement will experience the same force regardless of how far the weight has dropped or the spring has unwound. In electrical power generation, isochronous means that the frequency of the electricity generated is constant under varying load; there is zero generator droop. (See Synchronization (alternating current).) In telecommunications, an isochronous signal is one where the time interval separating any two corresponding transitions is equal to the unit interval or to a multiple of the unit interval; but phase is arbitrary and potentially varying. The term is also used in data transmission to describe cases in which corresponding significant instants of two or more sequential signals have a constant phase relationship. Isochronous burst transmission is used when the information-bearer channel rate is higher than the input data signaling rate. In the Universal Serial Bus used in computers, isochronous is one of the four data flow types for USB devices (the others being Control, Interrupt and Bulk). It is commonly used for streaming data types such as video or audio sources. Similarly, the IEEE 1394 interface standard, commonly called Firewire, includes support for isochronous streams of audio and video at known constant rates. In particle accelerators an isochronous cyclotron is a cyclotron where the field strength increases with radius to compensate for relativistic increase in mass with speed. An isochrone is a contour line of equal time, for instance, in geological layers, tree rings or wave fronts. An isochrone map or diagram shows such contours. In linguistics, isochrony is the postulated rhythmic division of time into equal portions by a language. In neurology, isochronic tones are regular beats of a single tone used for brainwave entrainment. See also Anisochronous References Synchronization Telecommunication theory Horology
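The distinction drawn above between isochronous and merely synchronous timing can be made concrete with a small check over event timestamps. The following Python sketch is an illustration rather than an excerpt of any standard; the 1% tolerance is an arbitrary assumption. It tests whether a sequence of timestamps forms an isochronous stream, i.e. whether successive intervals are equal to within the stated tolerance.

def is_isochronous(timestamps, rel_tol=0.01):
    """Return True if successive intervals are equal within rel_tol.

    timestamps: event times in seconds, in increasing order.
    rel_tol: allowed relative deviation of any interval from the mean
             interval (1% is an arbitrary illustrative choice).
    """
    ts = list(timestamps)
    if len(ts) < 3:
        return True  # fewer than two intervals: trivially regular
    intervals = [b - a for a, b in zip(ts, ts[1:])]
    mean = sum(intervals) / len(intervals)
    return all(abs(iv - mean) <= rel_tol * mean for iv in intervals)

# An isochronous stream delivering one packet every millisecond:
packets = [n * 0.001 for n in range(100)]
print(is_isochronous(packets))                        # True
print(is_isochronous([0.0, 0.001, 0.0025, 0.003]))    # False: jittered intervals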
Isochronous timing
Physics,Engineering
602
51,055,369
https://en.wikipedia.org/wiki/Oxogestone%20phenpropionate
Oxogestone phenpropionate (OPP; ) (former developmental code name or tentative brand name Oxageston), also known as xinogestone, as well as 20β-hydroxy-19-norprogesterone 20β-(3-phenylpropionate), is a progestin related to the 19-norprogesterone derivatives which was developed as an injectable hormonal contraceptive, specifically a progestogen-only injectable contraceptive, in the 1960s and early 1970s but was never marketed. It was studied at a dose of 50 to 75 mg once a month by intramuscular injection but was associated with a high failure rate with this regimen and was not further developed. OPP is the 20β-(3-phenylpropionate) ester of oxogestone, which, similarly, was never marketed. See also List of progestogen esters References Abandoned drugs Hormonal contraception Norpregnanes Phenylpropionate esters Progestogen esters Progestogens
Oxogestone phenpropionate
Chemistry
231
1,527,537
https://en.wikipedia.org/wiki/Bevacizumab
Bevacizumab, sold under the brand name Avastin among others, is a monoclonal antibody medication used to treat a number of types of cancers and a specific eye disease. For cancer, it is given by slow injection into a vein (intravenous) and used for colon cancer, lung cancer, ovarian cancer, glioblastoma, hepatocellular carcinoma, and renal-cell carcinoma. In many of these diseases it is used as a first-line therapy. For age-related macular degeneration it is given by injection into the eye (intravitreal). Common side effects when used for cancer include nose bleeds, headache, high blood pressure, and rash. Other severe side effects include gastrointestinal perforation, bleeding, allergic reactions, blood clots, and an increased risk of infection. When used for eye disease side effects can include vision loss and retinal detachment. Bevacizumab is a monoclonal antibody that functions as an angiogenesis inhibitor. It works by slowing the growth of new blood vessels by inhibiting vascular endothelial growth factor A (VEGF-A), in other words anti–VEGF therapy. Bevacizumab was approved for medical use in the United States in 2004. It is on the World Health Organization's List of Essential Medicines. Medical uses Colorectal cancer Bevacizumab was approved in the United States in February 2004, for use in metastatic colorectal cancer when used with standard chemotherapy treatment (as first-line treatment). In June 2006, it was approved with 5-fluorouracil-based therapy for second-line metastatic colorectal cancer. It was approved by the European Medicines Agency (EMA) in January 2005, for use in colorectal cancer. Bevacizumab has also been examined as an add on to other chemotherapy drugs in people with non-metastatic colon cancer. The data from two large randomized studies showed no benefit in preventing the cancer from returning and a potential to cause harm in this setting. In the EU, bevacizumab in combination with fluoropyrimidine-based chemotherapy is indicated for treatment of adults with metastatic carcinoma of the colon or rectum. Lung cancer In 2006, the US Food and Drug Administration (FDA) approved bevacizumab for use in first-line advanced nonsquamous non-small cell lung cancer in combination with carboplatin/paclitaxel chemotherapy. The approval was based on the pivotal study E4599 (conducted by the Eastern Cooperative Oncology Group), which demonstrated a two-month improvement in overall survival in patients treated with bevacizumab (Sandler, et al. NEJM 2004). A preplanned analysis of histology in E4599 demonstrated a four-month median survival benefit with bevacizumab for people with adenocarcinoma (Sandler, et al. JTO 2010); adenocarcinoma represents approximately 85% of all non-squamous cell carcinomas of the lung. A subsequent European clinical trial, AVAiL, was first reported in 2009 and confirmed the significant improvement in progression-free survival shown in E4599 (Reck, et al. Ann. Oncol. 2010). An overall survival benefit was not demonstrated in patients treated with bevacizumab; however, this may be due to the more limited use of bevacizumab as maintenance treatment in AVAiL versus E4599 (this differential effect is also apparent in the European vs US trials of bevacizumab in colorectal cancer: Tyagi and Grothey, Clin Colorectal Cancer, 2006). As an anti-angiogenic agent, there is no mechanistic rationale for stopping bevacizumab before disease progression. 
Stated another way, the survival benefits achieved with bevacizumab can only be expected when it is used in accordance with the clinical evidence: continued until disease progression or treatment-limiting side effects. Another large European-based clinical trial with bevacizumab in lung cancer, AVAPERL, was reported in October 2011 (Barlesi, et al. ECCM 2011). First-line patients were treated with bevacizumab plus cisplatin/pemetrexed for four cycles, and then randomized to receive maintenance treatment with either bevacizumab/pemetrexed or bevacizumab alone until disease progression. Maintenance treatment with bevacizumab/pemetrexed demonstrated a 50% reduction in risk of progression vs bevacizumab alone (median PFS: 10.2 vs 6.6 months). Maintenance treatment with bevacizumab/pemetrexed did not confer a significant increase in overall survival vs bevacizumab alone on follow-up analysis. In the EU, bevacizumab, in addition to platinum-based chemotherapy, is indicated for first-line treatment of adults with unresectable advanced, metastatic or recurrent non-small cell lung cancer other than predominantly squamous cell histology. Bevacizumab, in combination with erlotinib, is indicated for first-line treatment of adults with unresectable advanced, metastatic or recurrent non-squamous non-small cell lung cancer with Epidermal Growth Factor Receptor (EGFR) activating mutations. Breast cancer In December 2010, the US Food and Drug Administration (FDA) announced its intention to remove the breast cancer indication from bevacizumab, saying that it had not been shown to be safe and effective in breast cancer patients. The combined data from four different clinical trials showed that bevacizumab neither prolonged overall survival nor slowed disease progression sufficiently to outweigh the risk it presents to patients. This decision only prevented Genentech from marketing bevacizumab for breast cancer. Doctors are free to prescribe bevacizumab off label, although insurance companies are less likely to approve off-label treatments. In June 2011, an FDA panel unanimously rejected an appeal by Roche. A panel of cancer experts ruled for a second time that Avastin should no longer be used in breast cancer patients, clearing the way for the US government to remove its endorsement from the drug. The June 2011 meeting of the FDA's oncologic drug advisory committee was the last step in an appeal by the drug's maker. The committee concluded that breast cancer clinical studies of patients taking Avastin have shown no advantage in survival rates, no improvement in quality of life, and significant side effects. In the EU, bevacizumab in combination with paclitaxel is indicated for first-line treatment of adults with metastatic breast cancer. Bevacizumab in combination with capecitabine is indicated for first-line treatment of adults with metastatic breast cancer in whom treatment with other chemotherapy options including taxanes or anthracyclines is not considered appropriate. Kidney cancer In certain kidney cancers, bevacizumab improves progression-free survival time but not overall survival time. In 2009, the FDA approved bevacizumab for use in metastatic renal cell cancer (a form of kidney cancer), following earlier reports of activity. EU approval was granted in 2007. In the EU, bevacizumab in combination with interferon alfa-2a is indicated for first-line treatment of adults with advanced and/or metastatic renal cell cancer. 
Brain cancers Bevacizumab slows tumor growth but does not affect overall survival in people with glioblastoma. The FDA granted accelerated approval for the treatment of recurrent glioblastoma multiforme in May 2009. A 2018 Cochrane review likewise found no good evidence for its use in recurrent disease. Macular degeneration In the EU, bevacizumab gamma (Lytenava) is indicated for the treatment of neovascular (wet) age-related macular degeneration (nAMD). Ovarian cancer In 2018, the US Food and Drug Administration (FDA) approved bevacizumab in combination with chemotherapy for stage III or IV ovarian cancer after initial surgery, followed by single-agent bevacizumab. The approval was based on a study of the addition of bevacizumab to carboplatin and paclitaxel. Progression-free survival was increased to 18 months from 13 months. In the EU, bevacizumab, in combination with carboplatin and paclitaxel, is indicated for the front-line treatment of adults with advanced (International Federation of Gynecology and Obstetrics (FIGO) stages IIIB, IIIC and IV) epithelial ovarian, fallopian tube, or primary peritoneal cancer. Bevacizumab, in combination with carboplatin and gemcitabine or in combination with carboplatin and paclitaxel, is indicated for treatment of adults with first recurrence of platinum-sensitive epithelial ovarian, fallopian tube or primary peritoneal cancer who have not received prior therapy with bevacizumab or other VEGF inhibitors or VEGF receptor-targeted agents. In May 2020, the FDA expanded the indication of olaparib to include its combination with bevacizumab for first-line maintenance treatment of adults with advanced epithelial ovarian, fallopian tube, or primary peritoneal cancer who are in complete or partial response to first-line platinum-based chemotherapy and whose cancer is associated with homologous recombination deficiency positive status defined by either a deleterious or suspected deleterious BRCA mutation, and/or genomic instability. Cervical cancer In the EU, bevacizumab, in combination with paclitaxel and cisplatin or, alternatively, paclitaxel and topotecan in people who cannot receive platinum therapy, is indicated for the treatment of adults with persistent, recurrent, or metastatic carcinoma of the cervix. Adverse effects Bevacizumab inhibits the growth of blood vessels, which is part of the body's normal healing and maintenance. The body grows new blood vessels in wound healing, and as collateral circulation around blocked or atherosclerotic blood vessels. One concern is that bevacizumab will interfere with these normal processes, and worsen conditions like coronary artery disease or peripheral artery disease. The main side effects are hypertension and heightened risk of bleeding. Bowel perforation has been reported. Fatigue and infection are also common. In advanced lung cancer, fewer than half of patients qualify for treatment. Nasal septum perforation and renal thrombotic microangiopathy have been reported. In December 2010, the FDA warned of the risk of developing perforations in the body, including in the nose, stomach, and intestines. In 2013, Hoffmann-La Roche announced that the drug was associated with 52 cases of necrotizing fasciitis from 1997 to 2012, in which 17 patients died. About 2/3 of cases involved patients with colorectal cancer, or patients with gastrointestinal perforations or fistulas. 
These effects are largely avoided in ophthalmological use since the drug is introduced directly into the eye thus minimizing any effects on the rest of the body. Neurological adverse events include reversible posterior encephalopathy syndrome. Ischemic and hemorrhagic strokes are also possible. Protein in the urine occurs in approximately 20% of people. This does not require permanent discontinuation of the drug. Nonetheless, the presence of nephrotic syndrome necessitates permanent discontinuation of bevacizumab. Mechanism of action Bevacizumab is a recombinant humanized monoclonal antibody that blocks angiogenesis by inhibiting vascular endothelial growth factor A (VEGF-A). VEGF-A is a growth factor protein that stimulates angiogenesis in a variety of diseases, especially in cancer. By binding VEGF-A, bevacizumab should act outside the cell, but in some cases (cervical and breast cancer) it is taken up by cells through constitutive endocytosis. It also is taken up by retinal photoreceptor cells after intravitreal injection. Chemistry Bevacizumab was originally derived from a mouse monoclonal antibody generated from mice immunized with the 165-residue form of recombinant human vascular endothelial growth factor. It was humanized by retaining the binding region and replacing the rest with a human full light chain and a human truncated IgG1 heavy chain, with some other substitutions. The resulting plasmid was transfected into Chinese hamster ovary cells which are grown in industrial fermentation systems. History Bevacizumab is a recombinant humanized monoclonal antibody and in 2004, it became the first clinically used angiogenesis inhibitor. Its development was based on the discovery of human vascular endothelial growth factor (VEGF), a protein that stimulated blood vessel growth, in the laboratory of Genentech scientist Napoleone Ferrara. Ferrara later demonstrated that antibodies against VEGF inhibit tumor growth in mice. His work validated the hypothesis of Judah Folkman, proposed in 1971, that stopping angiogenesis might be useful in controlling cancer growth. Approval It received its first approval in the United States in 2004, for combination use with standard chemotherapy for metastatic colon cancer. It has since been approved for use in certain lung cancers, renal cancers, ovarian cancers, and glioblastoma multiforme of the brain. In 2008, bevacizumab was approved for breast cancer by the FDA, but the approval was revoked on 18 November 2011 because, although there was evidence that it slowed progression of metastatic breast cancer, there was no evidence that it extended life or improved quality of life, and it caused adverse effects including severe high blood pressure and hemorrhaging. In 2008, the FDA gave bevacizumab provisional approval for metastatic breast cancer, subject to further studies. The FDA's advisory panel had recommended against approval. In July 2010, after new studies failed to show a significant benefit, the FDA's advisory panel recommended against the indication for advanced breast cancer. Genentech requested a hearing, which was granted in June 2011. The FDA ruled to withdraw the breast cancer indication in November 2011. FDA approval is required for Genentech to market a drug for that indication. Doctors may sometimes prescribe it for that indication, although insurance companies are less likely to pay for it. The drug remains approved for breast cancer use in other countries, including Australia. 
It has been funded by the English NHS Cancer Drugs Fund, but in January 2015 it was proposed to remove it from the approved list. It remains on the Cancer Drugs Fund as of March 2023. Society and culture Use for macular degeneration In 2015, there was a fierce debate in the UK and other European countries concerning the choice of prescribing bevacizumab or ranibizumab (Lucentis) for wet AMD. In the UK, part of the tension was between on the one hand, both the European Medicines Agency and the Medicines and Healthcare products Regulatory Agency which had approved Lucentis but not Avastin for wet AMD, and their interest in ensuring that doctors do not use medicines off-label when there are other, approved medications for the same indication, and on the other hand, NICE in the UK, which sets treatment guidelines, and has been unable so far to appraise Avastin as a first-line treatment, in order to save money for the National Health Service. Novartis and Roche (which respectively have marketing rights and ownership rights for Avastin) had not conducted clinical trials to get approval for Avastin for wet AMD and had no intention of doing so. Further, both companies lobbied against treatment guidelines that would make Avastin a first-line treatment, and when government-funded studies comparing the two drugs were published, they published papers emphasizing the risks of using Avastin for wet AMD. In March 2024, the Committee for Medicinal Products for Human Use (CHMP) of the European Medicines Agency (EMA) adopted a positive opinion, recommending the granting of a marketing authorization for the medicinal product Lytenava (bevacizumab gamma), intended for treatment of neovascular (wet) age-related macular degeneration (nAMD). The applicant for this medicinal product is Outlook Therapeutics Limited. Lytenava was approved for medical use in the European Union in May 2024. Breast cancer approval In March 2007, the European Commission approved bevacizumab in combination with paclitaxel for the first-line treatment of metastatic breast cancer. In 2008, the FDA approved bevacizumab for use in breast cancer. A panel of outside advisers voted 5 to 4 against approval, but their recommendations were overruled. The panel expressed concern that data from the clinical trial did not show any increase in quality of life or prolonging of life for patients—two important benchmarks for late-stage cancer treatments. The clinical trial did show that bevacizumab reduced tumor volumes and showed an increase in progression free survival time. It was based on this data that the FDA chose to overrule the recommendation of the panel of advisers. This decision was lauded by patient advocacy groups and some oncologists. Other oncologists felt that granting approval for late-stage cancer therapies that did not prolong or increase the quality of life for patients would give license to pharmaceutical companies to ignore these important benchmarks when developing new late-stage cancer therapies. In 2010, before the FDA announcement, The National Comprehensive Cancer Network (NCCN) updated the NCCN Clinical Practice Guidelines for Oncology (NCCN Guidelines) for Breast Cancer to affirm the recommendation regarding the use of bevacizumab in the treatment of metastatic breast cancer. In 2011, the US Food and Drug Administration removed bevacizumab indication for metastatic breast cancer after concluding that the drug has not been shown to be safe and effective. 
The specific indication that was withdrawn was for the use of bevacizumab in metastatic breast cancer, with paclitaxel for the treatment of people who have not received chemotherapy for metastatic HER2-negative breast cancer. Counterfeit In February 2012, Roche and its US biotech unit Genentech announced that counterfeit Avastin had been distributed in the United States. The investigation is ongoing, but differences in the outer packaging make identification of the bogus drugs simple for medical providers. Roche analyzed three bogus vials of Avastin and found they contained salt, starch, citrate, isopropanol, propanediol, t-butanol, benzoic acid, di-fluorinated benzene ring, acetone and phthalate moiety, but no active ingredients of the cancer drug. According to Roche, the levels of the chemicals were not consistent; whether the chemicals were at harmful concentrations could not therefore be determined. The counterfeit Avastin has been traced back to Egypt, and it entered legitimate supply chains via Europe to the United States. Biosimilars In July 2014, two pharming companies, PlantForm and PharmaPraxis, announced plans to commercialize a biosimilar version of bevacizumab made using a tobacco expression system in collaboration with the Fraunhofer Center for Molecular Biology. In September 2017, the US FDA approved Amgen's biosimilar (generic name bevacizumab-awwb, product name Mvasi) for six cancer indications. In January 2018, Mvasi was approved for use in the European Union. In February 2019, Zirabev was approved for use in the European Union. Zirabev was approved for medical use in the United States in June 2019, and in Australia in November 2019. In June 2020, Mvasi was approved for medical use in Australia. In August 2020, Aybintio was approved for use in the European Union. In September 2020, Equidacent was approved for use in the European Union. In January 2021, the Committee for Medicinal Products for Human Use (CHMP) of the European Medicines Agency (EMA) adopted a positive opinion, recommending the granting of a marketing authorization for the medicinal product Alymsys, intended for the treatment of carcinoma of the colon or rectum, breast cancer, non-small cell lung cancer, renal cell cancer, epithelial ovarian, fallopian tube or primary peritoneal cancer, and carcinoma of the cervix. Alymsys was approved for medical use in the European Union in March 2021. In January 2021, Onbevzi was approved for medical use in the European Union. In June 2019, and June 2021, Zirabev was approved for medical use in Canada. Oyavas was approved for medical use in the European Union in March 2021. Abevmy was approved for medical use in the European Union in April 2021, and in Australia in September 2021. In September 2021, Bambevi was approved for medical use in Canada. Bevacip and Bevaciptin were approved for medical use in Australia in November 2021. In November 2021, Abevmy and Aybintio were approved for medical use in Canada. In April 2022, bevacizumab-maly (Alymsys) was approved for medical use in the United States. In August 2022, Vegzelma was approved for medical use in the European Union. In September 2022, bevacizumab-adcd (Vegzelma) was approved for medical use in the United States. In June 2023, Enzene Biosciences launched its bevacizumab biosimilar in India. Bevacizumab-tnjn (Avzivi) was approved for medical use in the United States in December 2023. 
In May 2024, the CHMP adopted a positive opinion, recommending the granting of a marketing authorization for the medicinal product Avzivi, intended for the treatment of carcinoma of the colon or rectum, breast cancer, non-small cell lung cancer, renal cell cancer, epithelial ovarian, fallopian tube or primary peritoneal cancer and carcinoma of the cervix. The applicant for this medicinal product is FGK Representative Service GmbH. Avzivi was approved for medical use in the European Union in July 2024. Research A study released in April 2009 found that bevacizumab is not effective at preventing recurrences of non-metastatic colon cancer following surgery. Bevacizumab has been tested in ovarian cancer, where it has shown improvement in progression-free survival but not in overall survival, and in glioblastoma multiforme, where it failed to improve overall survival. Bevacizumab has been investigated as a possible treatment of pancreatic cancer, as an addition to chemotherapy, but studies have shown no improvement in survival. It may also cause higher rates of high blood pressure, bleeding in the stomach and intestine, and intestinal perforations. The drug has also undergone trials as an addition to established chemotherapy protocols and surgery in the treatment of pediatric osteosarcoma, and other sarcomas, such as leiomyosarcoma. Bevacizumab has been studied as a treatment for cancers that grow from the nerve connecting the ear and the brain. References Further reading External links Angiogenesis inhibitors Drugs developed by Genentech Drugs developed by Hoffmann-La Roche Monoclonal antibodies for tumors Ophthalmology drugs Orphan drugs Specialty drugs World Health Organization essential medicines
Bevacizumab
Biology
5,014
67,162,341
https://en.wikipedia.org/wiki/Mott%20Bridge
Mott Bridge is a historic timber braced spandrel arch bridge over the North Umpqua River in Douglas County, Oregon, United States. The bridge provides access from Oregon Route 138 to the Mott Trailhead on the North Umpqua Trail. Constructed from 1935 to 1936 by the Civilian Conservation Corps, the bridge is the only surviving example of three such structures built during the Great Depression in the Pacific Northwest. The bridge is named after Lawrence Mott (1881-1931), who had a nearby fishing camp by the junction of Steamboat Creek and the North Umpqua River. Prior to the opening of the bridge, guests arriving from the north side of the river rang a bell to call for someone in the camp to row them and their baggage across the river. Mott Bridge has been designated as an Oregon Historic Civil Engineering Landmark by the American Society of Civil Engineers. References External links Bridges completed in 1936 Bridges in Douglas County, Oregon Historic Civil Engineering Landmarks Road bridges in Oregon Tourist attractions in Douglas County, Oregon Wooden bridges in Oregon
Mott Bridge
Engineering
209
9,875,903
https://en.wikipedia.org/wiki/Gammator
ӀA Gammator was a gamma irradiator made by the Radiation Machinery Corporation during the U.S. Atoms for Peace project of the 1950s and 1960s. The gammator was distributed by the "Atomic Energy Commission to schools, hospitals, and private firms to promote nuclear understanding." Around 120-140 Gammators were distributed throughout the U.S. and the whereabouts of several of them are unknown, although the Department of Energy has removed and destroyed many of the units. Specifications A Gammator weighed about 1,850 pounds and contained about 400 curies of caesium-137 in a pellet roughly the size of a pen. Concerns Because of the massive shielding of a Gammator, the machine is very safe when used as intended (e.g. school science experiments); according to the Los Alamos National Laboratory, it is similar to machines used to irradiate blood. However, this amount of nuclear material could pose a significant problem if used as the radioactive component in a dirty bomb. References Nuclear technology Atoms for Peace
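Caesium-137 decays with a half-life of roughly 30 years (a well-established value, though not stated in the article), so the present-day activity of a surviving 1960s-era Gammator can be estimated with a simple exponential-decay calculation. The Python sketch below only illustrates that arithmetic; the distribution year is an assumption.

import math

initial_activity_ci = 400.0   # nominal Cs-137 load of a Gammator, in curies (from the article)
half_life_years = 30.1        # approximate half-life of Cs-137 (assumed, not from the article)
elapsed_years = 2024 - 1965   # assume a unit distributed in the mid-1960s

remaining_ci = initial_activity_ci * 0.5 ** (elapsed_years / half_life_years)
remaining_bq = remaining_ci * 3.7e10   # 1 curie = 3.7e10 becquerels
print(f"~{remaining_ci:.0f} Ci (about {remaining_bq:.1e} Bq) remaining after {elapsed_years} years")
# roughly 100 Ci would still be present, which is why orphaned units remain a concern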
Gammator
Physics
211
59,949,490
https://en.wikipedia.org/wiki/NGC%204093
NGC 4093 is an elliptical galaxy located 340 million light-years away in the constellation Coma Berenices. The galaxy was discovered by astronomer Heinrich d'Arrest on May 4, 1864. NGC 4093 is a member of the NGC 4065 Group and is a radio galaxy with a two sided jet. A rotating disk in the galaxy was detected by K. Geréb et al. See also List of NGC objects (4001–5000) References External links 4093 038323 Coma Berenices Astronomical objects discovered in 1864 Elliptical galaxies NGC 4065 Group Radio galaxies Discoveries by Heinrich Louis d'Arrest
NGC 4093
Astronomy
128
26,612,394
https://en.wikipedia.org/wiki/Gliese%20208
Gliese 208 (Gj 208) is a red dwarf star with an apparent magnitude of 8.9. It is 37 light years away in the constellation of Orion. It is an extremely wide binary with 2MASS J0536+1117, an M4 star 2.6 arcminutes away (at least 0.028 light years). The spectral type of Gj 208 has variously been described as between K6 and M1. Two of the most recent observations give a statistically calculated spectral type of K7.9 or a more traditional classification of M0.0 Ve. It is a cool dwarf star and probably a spectroscopic binary. Calculations from 2010 suggest that this star passed as close as 1.537 parsecs (5.0 light-years) from the Sun about 500,000 years ago. GJ 208 is an RS Canum Venaticorum variable, a class of close binary systems that show small-amplitude brightness changes caused by chromospheric activity. Its visual magnitude varies by about a quarter magnitude with a period of 12.285 days. References External links Wikisky image of HD 245409 (Gliese 208) BD+11 0878 245409 026335 0208 K-type main-sequence stars Orion (constellation) Orionis, V2689 RS Canum Venaticorum variables Emission-line stars TIC objects
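The quoted minimum separation of the wide pair follows from the angular separation and the distance via the small-angle approximation. The Python sketch below is a back-of-the-envelope check using only the figures given in the article (2.6 arcminutes at roughly 37 light years).

import math

distance_ly = 37.0         # distance to Gliese 208 quoted above
separation_arcmin = 2.6    # angular separation of 2MASS J0536+1117

# Small-angle approximation: projected separation = distance * angle (in radians).
angle_rad = math.radians(separation_arcmin / 60.0)
projected_ly = distance_ly * angle_rad
print(f"{projected_ly:.3f} light years")   # ~0.028 ly, matching the value in the article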
Gliese 208
Astronomy
292
4,716,591
https://en.wikipedia.org/wiki/Tellurium%20tetrachloride
Tellurium tetrachloride is the inorganic compound with the empirical formula TeCl4. The compound is volatile, subliming at 200 °C at 0.1 mmHg. Molten TeCl4 is ionic, dissociating into TeCl3+ and Te2Cl102−. Structure TeCl4 is monomeric in the gas phase, with a structure similar to that of SF4. In the solid state, it is a tetrameric cubane-type cluster, consisting of a Te4Cl4 core and three terminal chloride ligands for each Te. Alternatively, this tetrameric structure can be considered as a Te4 tetrahedron with face-capping chlorines and three terminal chlorines per tellurium atom, giving each tellurium atom a distorted octahedral environment. Synthesis TeCl4 is prepared by chlorination of tellurium powder: Te + 2 Cl2 → TeCl4 The reaction is initiated with heat. The product is isolated by distillation. Crude TeCl4 can be purified by distillation under an atmosphere of chlorine. Alternatively, TeCl4 can be prepared using sulfuryl chloride (SO2Cl2) as a chlorine source. Yet another method involves the reaction of tellurium with sulfur monochloride (S2Cl2) at room temperature. This exothermic reaction rapidly forms white needle-like crystals of TeCl4. Reactions Tellurium tetrachloride is the gateway compound for high-valent organotellurium compounds. Arylation gives, depending on conditions, . TeCl4 has few applications in organic synthesis. Its equivalent weight is high, and the toxicity of organotellurium compounds is problematic. Possible applications of tellurium tetrachloride to organic synthesis have been reported. It adds to alkenes to give Cl-C-C-TeCl3 derivatives, wherein the Te can be subsequently removed with sodium sulfide. Electron-rich arenes react to give aryl Te compounds. Thus, anisole gives TeCl2(C6H4OMe)2, which can be reduced to the diaryl telluride. TeCl4 is a precursor to tellurium-containing heterocycles like tellurophenes. Heating a mixture of TeCl4 and metallic tellurium gives tellurium dichloride (TeCl2). In moist air, TeCl4 forms tellurium oxychloride (TeOCl2), which further decomposes with excess water to form tellurous acid (H2TeO3). Safety considerations As is the case for other tellurium compounds, TeCl4 is toxic. It also releases HCl upon hydrolysis. References Tellurium(IV) compounds Chlorides Tellurium halides Deliquescent materials Chalcohalides
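For the chlorination route Te + 2 Cl2 → TeCl4 described above, the theoretical yield follows from ordinary stoichiometry. The short Python sketch below works the numbers for an arbitrary 10 g charge of tellurium; the sample mass is an assumption for illustration only.

# Approximate molar masses in g/mol
M_TE = 127.60
M_CL = 35.45
M_TECL4 = M_TE + 4 * M_CL               # ~269.4 g/mol

tellurium_g = 10.0                       # hypothetical charge of tellurium powder
moles_te = tellurium_g / M_TE
chlorine_g = moles_te * 2 * (2 * M_CL)   # 2 mol Cl2 (70.9 g/mol) per mol Te
tecl4_g = moles_te * M_TECL4

print(f"Cl2 required: {chlorine_g:.2f} g")       # ~11.1 g
print(f"Theoretical TeCl4: {tecl4_g:.2f} g")     # ~21.1 g (mass balance: 10 g Te + 11.1 g Cl2)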
Tellurium tetrachloride
Chemistry
598
3,105,691
https://en.wikipedia.org/wiki/Theobromine%20poisoning
Theobromine poisoning, also informally called chocolate poisoning or cocoa poisoning, is an overdosage reaction to the xanthine alkaloid theobromine, found in chocolate, tea, cola beverages, and some other foods. Sources Cocoa powder contains about theobromine by weight, so of raw cocoa contains approximately theobromine. Processed chocolate, in general, has smaller amounts. The amount found in highly refined chocolate candies or sweets (typically ) is much lower than that of dark chocolate or unsweetened baking chocolate ( or ). In species Humans Pharmacology Theobromine has a half-life of , but over may be unmodified after a single dose of In general, the amount of theobromine found in chocolate is small enough that chocolate can be safely consumed by humans with a negligible risk of poisoning. Toxicity Theobromine doses at per day, such as may be found in of cocoa powder, may be accompanied by sweating, trembling and severe headaches. These are the mild-to-moderate symptoms. The severe symptoms are cardiac arrhythmias, epileptic seizures, internal bleeding, heart attacks, and eventually death. Limited mood effects were shown at per day. In other species Toxicity Median lethal () doses of theobromine have only been published for cats, dogs, rats, and mice; these differ by a factor of 6 across species. Serious poisoning happens more frequently in domestic animals, which metabolize theobromine much more slowly than humans, and can easily consume enough chocolate to cause poisoning. The most common victims of theobromine poisoning are dogs, for whom it can be fatal. The toxic dose for cats is even lower than for dogs. However, cats are less prone to eating chocolate since they are unable to taste sweetness. Theobromine is less toxic to rats and mice, who all have an of about . In dogs, the biological half-life of theobromine is 17.5 hours; in severe cases, clinical symptoms of theobromine poisoning can persist for 72 hours. Medical treatment performed by a veterinarian involves inducing vomiting within two hours of ingestion and administration of benzodiazepines or barbiturates for seizures, antiarrhythmics for heart arrhythmias, and fluid diuresis. Theobromine is also suspected to induce right atrial cardiomyopathy after long term exposure at levels equivalent to approximately of dark chocolate per day. According to the Merck Veterinary Manual, baker's chocolate of approximately of a dog's body weight is sufficient to cause symptoms of toxicity. For example, of baker's chocolate would be enough to produce mild symptoms in a dog, while a 25% cacao chocolate bar (like milk chocolate) would be only 25% as toxic as the same dose of baker's chocolate. One ounce of milk chocolate per pound of body weight () is a potentially lethal dose in dogs. Wildlife In 2014, four American black bears were found dead at a bait site in New Hampshire. A necropsy and toxicology report performed at the University of New Hampshire in 2015 confirmed they died of heart failure caused by theobromine after they consumed of chocolate and doughnuts placed at the site as bait. A similar incident killed a black bear cub in Michigan in 2011. Pest control In previous research, the USDA investigated the possible use of theobromine as a toxicant to control coyotes preying on livestock. See also Xanthine oxidase Footnotes References Merck Veterinary Manual (Toxicology/Food Hazards section), Merck & Co., Inc., Chocolate Poisoning. 
(June 16, 2005) External links Toxicity basic facts Cat health Dog health Poisoning by drugs, medicaments and biological substances Veterinary toxicology
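The dose arithmetic described above scales linearly with both the amount of chocolate eaten and its cacao content. The sketch below is a minimal illustration of that relationship only; the theobromine concentrations are assumed placeholder values chosen for the example (the article's own figures are not reproduced), and the dictionary and function names are invented here. It is not veterinary guidance.

# A minimal sketch, assuming illustrative theobromine concentrations in mg per gram
# of chocolate; these numbers are placeholders, not values from the article.
ASSUMED_MG_PER_G = {
    "bakers_chocolate": 14.0,  # assumed reference value for unsweetened baking chocolate
    "milk_chocolate": 2.0,     # assumed value for a low-cacao chocolate
}

def estimated_dose_mg_per_kg(chocolate_type, grams_eaten, body_weight_kg):
    """Estimate ingested theobromine in mg per kg of body weight (linear model)."""
    return ASSUMED_MG_PER_G[chocolate_type] * grams_eaten / body_weight_kg

# The same amount of a lower-cacao chocolate yields a proportionally lower dose,
# mirroring the article's point that a 25% cacao bar is roughly 25% as toxic.
print(estimated_dose_mg_per_kg("bakers_chocolate", 100, 20))  # 70.0 mg/kg
print(estimated_dose_mg_per_kg("milk_chocolate", 100, 20))    # 10.0 mg/kg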
Theobromine poisoning
Environmental_science
772
2,462,396
https://en.wikipedia.org/wiki/Transitive%20set
In set theory, a branch of mathematics, a set X is called transitive if either of the following equivalent conditions holds: whenever x ∈ X and y ∈ x, then y ∈ X; whenever x ∈ X and x is not an urelement, then x is a subset of X. Similarly, a class M is transitive if every element of M is a subset of M. Examples Using the definition of ordinal numbers suggested by John von Neumann, ordinal numbers are defined as hereditarily transitive sets: an ordinal number is a transitive set whose members are also transitive (and thus ordinals). The class of all ordinals is a transitive class. Any of the stages V_α and L_α leading to the construction of the von Neumann universe V and Gödel's constructible universe L are transitive sets. The universes V and L themselves are transitive classes. This is a complete list of all finite transitive sets with up to 20 brackets: Properties A set X is transitive if and only if ⋃X ⊆ X, where ⋃X is the union of all elements of X that are sets. If X is transitive, then ⋃X is transitive. If X and Y are transitive, then X ∪ Y and X ∪ Y ∪ {X, Y} are transitive. In general, if Z is a class all of whose elements are transitive sets, then ⋃Z and Z ∪ ⋃Z are transitive. (The first sentence in this paragraph is the case of Z = {X, Y}.) A set X that does not contain urelements is transitive if and only if it is a subset of its own power set, X ⊆ P(X). The power set of a transitive set without urelements is transitive. Transitive closure The transitive closure of a set X is the smallest (with respect to inclusion) transitive set that includes X (i.e. X ⊆ TC(X)). Suppose one is given a set X; then the transitive closure of X is TC(X) = X ∪ ⋃X ∪ ⋃⋃X ∪ ⋃⋃⋃X ∪ … Proof. Denote X_0 = X and X_{n+1} = ⋃X_n. Then we claim that the set T = ⋃_{n ≥ 0} X_n is transitive, and whenever T_1 is a transitive set including X then T ⊆ T_1. Assume y ∈ x ∈ T. Then x ∈ X_n for some n, and so y ∈ ⋃X_n = X_{n+1}. Since X_{n+1} ⊆ T, y ∈ T. Thus T is transitive. Now let T_1 be as above. We prove by induction that X_n ⊆ T_1 for all n, thus proving that T ⊆ T_1: the base case holds since X_0 = X ⊆ T_1. Now assume X_n ⊆ T_1. Then X_{n+1} = ⋃X_n ⊆ ⋃T_1. But T_1 is transitive, so ⋃T_1 ⊆ T_1, hence X_{n+1} ⊆ T_1. This completes the proof. Note that this is the set of all of the objects related to X by the transitive closure of the membership relation, since the union of a set can be expressed in terms of the relative product of the membership relation with itself. The transitive closure of a set can be expressed by a first-order formula: T is the transitive closure of X iff T is the intersection of all transitive supersets of X (that is, every transitive superset of X contains T as a subset). Transitive models of set theory Transitive classes are often used for construction of interpretations of set theory in itself, usually called inner models. The reason is that properties defined by bounded formulas are absolute for transitive classes. A transitive set (or class) that is a model of a formal system of set theory is called a transitive model of the system (provided that the element relation of the model is the restriction of the true element relation to the universe of the model). Transitivity is an important factor in determining the absoluteness of formulas. In the superstructure approach to non-standard analysis, the non-standard universes satisfy strong transitivity. Here, a class C is defined to be strongly transitive if, for each set x ∈ C, there exists a transitive superset y with x ⊆ y ⊆ C. A strongly transitive class is automatically transitive. This strengthened transitivity assumption allows one to conclude, for instance, that C contains the domain of every binary relation in C. See also End extension Transitive relation Supertransitive class References Set theory
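As a concrete illustration of the construction above, the following sketch models hereditarily finite sets as Python frozensets and computes the transitive closure by repeatedly adjoining the union of the current set until a fixed point is reached, mirroring TC(X) = X ∪ ⋃X ∪ ⋃⋃X ∪ …. The function names and the frozenset encoding are choices made for this example, not part of the article.

# Hereditarily finite sets encoded as nested frozensets.
def union_of(x):
    """The union of all elements of x (every element here is itself a set)."""
    return frozenset(z for y in x for z in y)

def is_transitive(x):
    """x is transitive iff every element of x is a subset of x."""
    return all(y <= x for y in x)

def transitive_closure(x):
    """Smallest transitive set including x; terminates for hereditarily finite sets."""
    tc = x
    while not is_transitive(tc):
        tc = tc | union_of(tc)
    return tc

# Example: {{∅}} is not transitive, and its transitive closure is {∅, {∅}},
# which is the von Neumann ordinal 2.
empty = frozenset()
one = frozenset({empty})
print(is_transitive(frozenset({one})))                                  # False
print(transitive_closure(frozenset({one})) == frozenset({empty, one}))  # True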
Transitive set
Mathematics
738
9,551,090
https://en.wikipedia.org/wiki/Heptaminol
Heptaminol is an amino alcohol that is classified as a cardiac stimulant (positive inotropic action). It also increases coronary blood flow along with mild peripheral vasoconstriction. It is sometimes used in the treatment of low blood pressure, particularly orthostatic hypotension, as it is a potent positive inotrope (improving cardiac contraction). Use in doping Heptaminol is classified by the World Anti-Doping Agency as a doping substance. In 2008, the cyclist Dmitriy Fofonov tested positive for heptaminol at the Tour de France. In June 2010, the swimmer Frédérick Bousquet tested positive. In 2013, the cyclist Sylvain Georges tested positive at the Giro d'Italia. In 2014, baseball player Joel Piniero tested positive, as did St. Louis Cardinals minor league baseball player Yeison Medina. On March 22, 2019, Cycling South Africa reported that Ricardo Broxham had been sanctioned for an anti-doping rule violation of Articles 2.1 and 2.2 of the UCI Anti-Doping Rules after an in-competition test conducted on 18 August 2018 confirmed the presence of heptaminol in his sample. The UCI Anti-Doping Tribunal imposed a period of ineligibility of 12 months for the violation, applicable from 22 September 2018 up to and including 22 September 2019, together with a disqualification of all results from the 2018 UCI Junior Track Cycling World Championships. See also 1,3-Dimethylbutylamine Iproheptine Isometheptene Methylhexanamine Octodrine Oenethyl Tuaminoheptane References Vasodilators Amines Tertiary alcohols
Heptaminol
Chemistry
353
13,080,850
https://en.wikipedia.org/wiki/Sheer%20fabric
Sheer fabric is fabric which is made using thin thread or a low density of knit. This results in a semi-transparent and flimsy cloth. Some fabrics become transparent when wet. Overview The sheerness of a fabric is expressed as a numerical denier, which ranges from 3 (extremely rare, very thin, barely visible) through 15 (the standard sheer for stockings) and 30 (semi-opaque) up to 100 (opaque). The materials that can be made translucent include gossamer, silk, rayon, and nylon. Sheer fabric comes in a wide variety of colors, but for curtains, white and shades of white, such as cream, winter white, eggshell, and ivory, are popular. In some cases, sheer fabric is embellished with embroidered patterns or designs. A common use for sheer fabric is in curtains, which allow sunlight to pass through during the day while maintaining a level of privacy. However, when it is lighter on the inside of a room than on the outside (such as at nighttime), the inside of the room can be seen from the outside. Due to the loose weave in sheer fabrics, such curtains offer little heat insulation. Sheer fabric is used in clothing, in garments such as stockings, tights, dancewear, and lingerie, and sometimes as part of clothing, such as in wedding gowns and formal costumes. Sheer fabric offers very little in the way of warmth for the wearer, and for this reason is commonly worn in hot weather. It offers relatively low sun protection. Though sheer stockings have been popular since the 1920s, and sheer fabrics have been used in women's nightwear for some time, their use in other clothing has become more common in recent years. There has been a sheer trend in fashion circles since 2008, with sheer fabrics being used in tight clothes, layers, and delicate feminine draping. See also Bodystocking Georgette (fabric) See-through clothing Silk Ultra sheer References Fashion design Textiles Properties of textiles Transparent materials
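The denier scale described above maps a numeric thread weight to a rough level of opacity. The small sketch below encodes that mapping; the cut-off points are simply a reading of the figures quoted in the text (3, 15, 30, 100) and the function name is invented for the example, so it should not be taken as an industry classification.

def sheerness_category(denier):
    """Rough opacity category for a given denier, per the scale quoted above."""
    if denier <= 3:
        return "very thin, barely visible (extremely rare)"
    if denier <= 15:
        return "standard sheer (typical for stockings)"
    if denier <= 30:
        return "semi-opaque"
    return "opaque" if denier >= 100 else "increasingly opaque"

print(sheerness_category(15))  # standard sheer (typical for stockings)
print(sheerness_category(40))  # increasingly opaque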
Sheer fabric
Physics,Engineering
410
1,490,597
https://en.wikipedia.org/wiki/Magic%20User%20Interface
The Magic User Interface (MUI for short) is an object-oriented system by Stefan Stuntz for generating and maintaining graphical user interfaces. With the aid of a preferences program, the user of an application can customize the system according to personal taste. The Magic User Interface was written for AmigaOS and gained popularity amongst both programmers and users. It has been ported to PowerPC processors and adopted as the default GUI toolkit of the MorphOS operating system. The MUI application programmer interface has been cloned by the Zune toolkit used in the AROS Research Operating System. History Creating GUI applications on the Amiga was difficult for a very long time, mainly because the programmer got only a minuscule amount of support from the operating system. Beginning with Kickstart 2.0, the gadtools.library was a step in the right direction; however, even using this library to generate complex and flexible interfaces remained difficult and still required a great deal of patience. The largest problem with existing tools for the creation of user interfaces was their inflexible output. Most programs were still using built-in fonts and fixed window sizes, making the use of new high-resolution graphics hardware adapters nearly unbearable. Even the preference programs on the Workbench were still only using the default fixed-width font. In 1992 Stefan Stuntz started developing a new object-oriented GUI toolkit for the Amiga. The main goals for the new toolkit were: Font sensitivity: the font can be set for every application. Changeable window sizes: windows have a sizing gadget which allows users to change the window size until it suits their needs. Flexibility: elements can be changed by the user according to their own personal tastes. Controlling by keyboard: widgets can be controlled by the keyboard as well as by the mouse. System integration: every program has an ARexx port and can be iconified or uniconified by pushing a gadget or by using the Commodities Exchange program. Adjusting to its environment: every application can be made to open on any screen and adapts itself to its environment. MUI was released as shareware. Starting with MUI 3.9, an unrestricted version is integrated with MorphOS, but a shareware key is still required to activate all user configuration options in AmigaOS. Application theory UI development is done at source-code level without the aid of GUI builders. In a MUI application the programmer only defines the logical structure of the GUI, and the layout is determined at run time depending on the user's configuration. Unlike with other GUI toolkits, the developer does not specify exact coordinates for UI objects, but only their placement relative to each other using object groups. In traditional Intuition-based UI coding, the programmer had to calculate the placement of gadgets relative to font and border sizes. By default all UI elements are resizable and change their size to match the window size. MUI can also automatically switch to a smaller font or hide UI elements if there is not enough space on screen to display the window with its full contents. This makes it very easy to build a UI which adapts well to both tiny and large displays. There are over 50 built-in MUI classes today, as well as various third-party MUI classes.
Example

// Complete MUI application
#include <libraries/mui.h>
#include <proto/muimaster.h>

// Sample application:
ApplicationObject,
    SubWindow, WindowObject,
        WindowContents, VGroup,
            Child, TextObject,
                MUIA_Text_Contents, "Hello World!",
            End,
        End,
    End,
End;

This example code creates a small MUI application with the text "Hello World!" displayed on it. It is also possible to embed other BOOPSI-based GUI toolkit objects inside a MUI application. Applications Some notable applications that use MUI as a widget toolkit include: Aladdin4D - 3D rendering/animation application Ambient - desktop environment AmIRC - IRC client Digital Universe - desktop planetarium IBrowse - web browser Origyn Web Browser - web browser PageStream - desktop publishing SimpleMail - email client Voyager - web browser YAM - email client Other GUI toolkits Currently there are two main widget toolkits in the Amiga world, which compete with each other. The most widely used is MUI (adopted in AROS and MorphOS, and used in most Amiga programs); the other is ReAction, which was adopted in AmigaOS 3.5. A GTK-MUI wrapper is in development, which will allow the porting of various GTK-based software. There are also more modern interfaces based on XML, such as Feelin. Palette extension to Workbench defaults MUI extended Workbench's four-colour palette with four additional colours, allowing smoother gradients with less noticeable dithering. The MagicWB companion to MUI made use of this extended palette to provide more attractive icons to replace the dated Workbench defaults. MUI 4 added support for alpha blending and for user-defined widget shapes. See also ReAction GUI (ClassAct) Zune References External links MUI homepage Unofficial MUI nightly build directory Tutorial Widget toolkits Amiga APIs Amiga software AmigaOS AmigaOS 4 software MorphOS
Magic User Interface
Technology
1,077
42,181,888
https://en.wikipedia.org/wiki/Index%20of%20home%20automation%20articles
This is a list of home automation topics on Wikipedia. Home automation is the residential extension of building automation. It is automation of the home, housework or household activity. Home automation may include centralized control of lighting, HVAC (heating, ventilation and air conditioning), appliances, security locks of gates and doors and other systems, to provide improved convenience, comfort, energy efficiency and security. Home automation topics 0-9 6LoWPAN A Alarm.com, Inc. AlertMe AllJoyn Arduino B Belkin Wemo Bluetooth LE (BLE) Brillo (Project Brillo) Bticino Bus SCS Building automation C Connected Device C-Bus (protocol) CHAIN (industry standard) Clipsal C-Bus Comparison of domestic robots Control4 D Daintree Networks Dishwasher Domestic robot Dynalite E ESP32 ESP8266 Ember (company) European Home Systems Protocol Extron Electronics G Generalized Automation Language GreenPeak Technologies H Home Assistant (home automation software) Home automation Home automation for the elderly and disabled HomeLink Wireless Control System HomeOS HomeRF Honeywell, Inc. I Indoor positioning system Internet of Things Insteon Intelligent Home Control IoBridge iSmartAlarm IEEE 802.15.4 L Lagotek Lawn mower Lighting control system LinuxMCE LonWorks List of home automation topics List of home automation software List of network buses M Marata Vision Matter (standard) MCU (Micro Controller Unit) MiWi Mobile device Mobile Internet device N Nest Labs NodeMCU O OpenHAN Openpicus OpenTherm R Responsive architecture Robotic lawn mower Rotimatic S SM4All Smart device Smart environment Smart grid Smartlabs Smart lock Stardraw T Timer Thread (network protocol) U Universal Home API Universal powerline bus V Vacuum cleaner W Web of Things Washing machine Window blind X X10 (industry standard) X10 Firecracker XAP Home Automation protocol XPL Protocol Z Z-Wave Zigbee See also Home automation List of home automation topics List of home automation software List of home appliances Building automation Connected Devices Robotics References Home automation Building engineering
Index of home automation articles
Technology,Engineering
430
54,291,216
https://en.wikipedia.org/wiki/Micropound
The micropound (abbreviation μlb) is a small unit of avoirdupois weight and mass in the US and imperial systems of measurement, equal to one-millionth (10⁻⁶) of a pound. It is equal to exactly 4.5359237 × 10⁻⁷ kg, or about 453.6 μg. See also English, US, & imperial units of measurement Avoirdupois pound References Citations Bibliography Units of mass
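A one-line conversion check of the arithmetic above, using the international definition of the avoirdupois pound as exactly 0.45359237 kg; the snippet is only an illustration.

POUND_KG = 0.45359237              # exact, by definition of the avoirdupois pound
micropound_kg = POUND_KG * 1e-6    # one-millionth of a pound
print(micropound_kg)               # ≈ 4.5359237e-07 kg
print(micropound_kg * 1e9)         # ≈ 453.59 micrograms (1 kg = 1e9 µg)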
Micropound
Physics,Mathematics
82
10,502,708
https://en.wikipedia.org/wiki/Chartered%20Institution%20of%20Building%20Services%20Engineers
The Chartered Institution of Building Services Engineers (CIBSE; pronounced 'sib-see') is an international professional engineering association, based in London, England, that represents building services engineers. It is a full member of the Construction Industry Council, and is consulted by government on matters relating to construction, engineering and sustainability. It is also licensed by the Engineering Council to assess candidates for inclusion on its Register of Professional Engineers. History CIBSE was formed in 1976, and received a Royal Charter that same year, following a merger of the Institution of Heating and Ventilation Engineers (founded in 1897) and the Illuminating Engineering Society (founded in 1909). The body was previously known as CIBS; the word 'Engineers' was added in 1985, and hence the Institution became CIBSE. Royal Charter Under the CIBSE Royal Charter and By-laws, the Institution's primary objects are: The promotion for the benefit of the public in general of the art, science and practice of such engineering services as are associated with the built environment and with industrial processes, such art, science and practice being hereinafter called "building services engineering". The advancement of education and research in building services engineering, and the publication of the useful results of such research. CIBSE Regulations are informed by the Royal Charter and By-laws and cover matters relating to membership, election of the board, the chief executive, and regions and divisions. Membership CIBSE has seven grades of membership, with the upper four granting postnominals: Fellow (FCIBSE) Member (MCIBSE) Associate (ACIBSE) Licentiate (LCIBSE) Graduate Student - full and part-time Affiliate Members assessed by CIBSE for professional registration may be granted the following postnominals by the Engineering Council: Chartered Engineer (CEng) Incorporated Engineer (IEng) Engineering Technician (EngTech) Related bodies Four societies and one institute exist within CIBSE to reflect special areas of expertise that exist within the field of building services: Society of Facade Engineering (SFE) was set up in 2003 as a Society of CIBSE, but with the support of the IStructE and RIBA. Its aim is to advance knowledge of and practice in facade engineering. Society of Light and Lighting (SLL) acts as the professional body for lighting in the UK. It represents the interests of those involved in the art, science and engineering of light and lighting in their widest definition and has over 3,000 members in the UK and worldwide. The SLL was originally founded in 1909 as the London-based Illuminating Engineering Society (not to be confused with the Illuminating Engineering Society based in New York). Society of Public Health Engineers (SoPHE) provides a higher profile and focus for public health engineers within CIBSE. Institute of Local Exhaust Ventilation Engineers (ILEVE) was established in 2011 to promote air quality in the workplace and to reduce ill health and death due to airborne contamination and hazardous substances in the working environment. Society of Digital Engineering (SDE) was formed to provide a home for those involved in digitising the built environment, whether as designers, contractors, manufacturers, clients, facility managers or software vendors. Groups Various special interest groups operate within the Institution. These are free to join for both members and non-members.
ASHRAE Building Simulation Chimneys and Flues CHP and District Heating Daylight Electrical Services Energy Performance Young Energy Performance Group Facilities Management Healthcare Heritage Homes for the Future HVAC Systems Information Technology (IT) & Controls Intelligent Buildings Lifts Natural Ventilation Resilient Cities School Design Networks Young Engineers Network (YEN) Women in Building Services Engineering (WiBSE) Patrons CIBSE Patrons are businesses which collaborate to give financial, technical and moral backing to initiatives led by CIBSE. Certification In recent years, there has been an increasing focus on sustainability and green design by the UK government. The implementation of Part L (Conservation of Fuel and Power) of the U.K. Building Regulations in 2006 led CIBSE to set up the Low Carbon Consultants Register to ensure that a body of competent and trained professionals was available to implement the various requirements of the regulations, specifically in undertaking the relevant calculations to demonstrate the required reduction in carbon emissions from buildings, both in design and in operation. Members of the Register must undertake specific training and examinations to demonstrate their competence in various aspects of the regulations. The CIBSE scheme further offers accreditation as a Low Carbon Energy Assessor (LCEA), again subject to specific training and examinations; accredited assessors are then able to provide the Energy Performance Certificates (EPCs) and Display Energy Certificates (DECs) required under the Energy Performance in Buildings Regulations (EPB Regulations). These certificates can only be provided by accredited energy assessors who are members of an approved scheme such as the Low Carbon Energy Assessors Register. Furthermore, assessors are required to update their training regularly to ensure that continued high standards of competency are met. The LCC scheme has been expanded in recent years to include the grade of Low Carbon Consultant: Energy Management Systems, these LCCs having been trained and tested by CIBSE to ensure they have the relevant competencies to assist organisations to implement BS EN 16001. CIBSE also offers certification for Air Conditioning Inspectors, to perform inspections as required by the Energy Performance of Buildings (Certificates and Inspections) (England and Wales) Regulations 2007. Training Many training options are available through CIBSE, with the aim of providing specialised courses, conferences and seminars for those within the building services industry, and the provision of Continuing Professional Development (CPD) training to improve and enhance the skills required of an engineering professional. These include a range of courses, from fire safety and mechanical and electrical services to facilities management and business skills-focused training. Online modules can also be completed and used to contribute towards the Edexcel Advanced Professional Diploma in Building Services Engineering. Publications CIBSE publishes several guides to building services design, which include various recommended design criteria and standards, some of which are cited within the UK building regulations and therefore form a legislative requirement for major building services works.
The main guides are: Guide A: Environmental design Guide B: Heating, ventilating, air conditioning and refrigeration Guide C: Reference data Guide D: Transportation systems in buildings Guide E: Fire safety engineering Guide F: Energy efficiency in buildings Guide G: Public health and plumbing engineering Guide H: Building control systems Guide J: Weather, solar and illuminance data (now withdrawn) Guide K: Electricity in buildings Guide L: Sustainability Guide M: Maintenance engineering and management In November 2011, CIBSE made its full range of published guidance (including all the CIBSE Guides, CIBSE Commissioning Codes, Applications Manuals, Technical Memoranda, and Lighting Guides) available for free to its members through the Knowledge Portal. CIBSE publishes a monthly magazine, the CIBSE Journal (formerly the Building Services Journal). Two quarterly technical journals are published in association with Sage: Building Services Engineering Research & Technology (BSERT), which is free online to all CIBSE members, and Lighting Research & Technology (LR&T), which is free for Society of Light and Lighting members only. See also American Society of Heating, Refrigerating and Air-Conditioning Engineers Building engineer Society of Engineers Society of Professional Engineers UK References External links Chartered Institution of Building Services Engineers (CIBSE) CIBSE Journal – The official magazine of CIBSE The American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE) The Building Services Research and Information Association (BSRIA) BSRIA Building engineering organizations Heating, ventilation, and air conditioning Engineering societies based in the United Kingdom Building Services Engineers ECUK Licensed Members 1976 establishments in the United Kingdom Scientific organizations established in 1976 International professional associations Lighting organizations
Chartered Institution of Building Services Engineers
Engineering
1,587
152,900
https://en.wikipedia.org/wiki/Finitism
Finitism is a philosophy of mathematics that accepts the existence only of finite mathematical objects. It is best understood in comparison to the mainstream philosophy of mathematics, where infinite mathematical objects (e.g., infinite sets) are accepted as existing. Main idea The main idea of finitistic mathematics is not accepting the existence of infinite objects such as infinite sets. While all natural numbers are accepted as existing, the set of all natural numbers is not considered to exist as a mathematical object. Therefore, quantification over infinite domains is not considered meaningful. The mathematical theory often associated with finitism is Thoralf Skolem's primitive recursive arithmetic. History The introduction of infinite mathematical objects occurred a few centuries ago, when the use of infinite objects was already a controversial topic among mathematicians. The issue entered a new phase when Georg Cantor in 1874 introduced what is now called naive set theory and used it as a base for his work on transfinite numbers. When paradoxes such as Russell's paradox, Berry's paradox and the Burali-Forti paradox were discovered in Cantor's naive set theory, the issue became a heated topic among mathematicians. There were various positions taken by mathematicians. All agreed about finite mathematical objects such as natural numbers. However, there were disagreements regarding infinite mathematical objects. One position was the intuitionistic mathematics advocated by L. E. J. Brouwer, which rejected the existence of infinite objects until they are constructed. Another position was endorsed by David Hilbert: finite mathematical objects are concrete objects, infinite mathematical objects are ideal objects, and accepting ideal mathematical objects does not cause a problem regarding finite mathematical objects. More formally, Hilbert believed that it is possible to show that any theorem about finite mathematical objects that can be obtained using ideal infinite objects can also be obtained without them. Therefore, allowing infinite mathematical objects would not cause a problem regarding finite objects. This led to Hilbert's program of proving both the consistency and completeness of set theory using finitistic means, as this would imply that adding ideal mathematical objects is conservative over the finitistic part. Hilbert's views are also associated with the formalist philosophy of mathematics. Hilbert's goal of proving the consistency and completeness of set theory or even arithmetic through finitistic means turned out to be an impossible task due to Kurt Gödel's incompleteness theorems. However, Harvey Friedman's grand conjecture would imply that most mathematical results are provable using finitistic means. Hilbert did not give a rigorous explanation of what he considered finitistic and referred to as elementary. However, based on his work with Paul Bernays, some experts have argued that primitive recursive arithmetic can be considered an upper bound on what Hilbert considered finitistic mathematics. As a result of Gödel's theorems, it became clear that there is no hope of proving both the consistency and completeness of mathematics, and with the development of seemingly consistent axiomatic set theories such as Zermelo–Fraenkel set theory, most modern mathematicians do not focus on this topic. Classical finitism vs.
strict finitism In her book The Philosophy of Set Theory, Mary Tiles characterized those who allow potentially infinite objects as classical finitists, and those who do not allow potentially infinite objects as strict finitists: for example, a classical finitist would allow statements such as "every natural number has a successor" and would accept the meaningfulness of infinite series in the sense of limits of finite partial sums, while a strict finitist would not. Historically, the written history of mathematics was thus classically finitist until Cantor created the hierarchy of transfinite cardinals at the end of the 19th century. Views regarding infinite mathematical objects Leopold Kronecker remained a strident opponent to Cantor's set theory: Reuben Goodstein was another proponent of finitism. Some of his work involved building up to analysis from finitist foundations. Although he denied it, much of Ludwig Wittgenstein's writing on mathematics has a strong affinity with finitism. If finitists are contrasted with transfinitists (proponents of e.g. Georg Cantor's hierarchy of infinities), then also Aristotle may be characterized as a finitist. Aristotle especially promoted the potential infinity as a middle option between strict finitism and actual infinity (the latter being an actualization of something never-ending in nature, in contrast with the Cantorist actual infinity consisting of the transfinite cardinal and ordinal numbers, which have nothing to do with the things in nature): Other related philosophies of mathematics Ultrafinitism (also known as ultraintuitionism) has an even more conservative attitude towards mathematical objects than finitism, and has objections to the existence of finite mathematical objects when they are too large. Towards the end of the 20th century John Penn Mayberry developed a system of finitary mathematics which he called "Euclidean Arithmetic". The most striking tenet of his system is a complete and rigorous rejection of the special foundational status normally accorded to iterative processes, including in particular the construction of the natural numbers by the iteration "+1". Consequently Mayberry is in sharp dissent from those who would seek to equate finitary mathematics with Peano arithmetic or any of its fragments such as primitive recursive arithmetic. See also Temporal finitism Transcomputational problem Rational trigonometry Notes Further reading References Constructivism (mathematics) Infinity Epistemological theories
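To make the reference to primitive recursive arithmetic above a little more concrete, the following is a standard textbook-style illustration (not taken from this article) of the kind of definition finitists accept: functions on the natural numbers introduced by primitive recursion over the successor function S, with no quantification over a completed infinite totality.

% Addition and multiplication defined by primitive recursion (illustrative only)
\begin{align*}
  a + 0    &= a,        &  a \cdot 0    &= 0, \\
  a + S(b) &= S(a + b), &  a \cdot S(b) &= (a \cdot b) + a.
\end{align*}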
Finitism
Mathematics
1,125