| id | url | text | source | categories | token_count |
|---|---|---|---|---|---|
11,761,489 | https://en.wikipedia.org/wiki/B%C3%A1nh%20cu%E1%BB%91n | Bánh cuốn or bánh quấn (literally "rolled cake") is a Vietnamese dish originating from Northern Vietnam.
In Vietnamese cuisine
Bánh cuốn is made from a thin, wide sheet of fermented rice batter filled with a mixture of cooked seasoned ground pork, minced wood ear mushroom, and minced shallots. Sides for this dish usually consist of chả lụa (Vietnamese pork sausage), sliced cucumber, and bean sprouts, with a fish-sauce-based dipping sauce called nước chấm.
The rice sheet of bánh cuốn is extremely thin and delicate. It is made by steaming a slightly fermented rice batter on a cloth that is stretched over a pot of boiling water. It is a light dish and is generally eaten for breakfast everywhere in Vietnam. A different version of bánh cuốn, called bánh cuốn Thanh Trì and bánh cuốn làng Kênh, may be found in Thanh Trì, a southern district of Hanoi, and in Kênh village, an ancient village in the center of Nam Định city. Bánh cuốn Thanh Trì and bánh cuốn làng Kênh are not rolls, but just rice sheets eaten with chả lụa, fried shallots, or prawns.
Bánh ướt is simply the unfilled rice sheet, and is typically served with bean sprouts, chopped lettuce, sliced cucumber, fresh basil and mint, fried shallots and onions, chả/giò lụa, and fish sauce.
In other countries
Reflecting its Vietnamese origin, Thai cuisine commonly refers to the dish as pak moh yuan. Skilled food preparers will make each rice sheet extra thin with as much stuffing as possible. Rice sheets are usually made of arrowroot flour, which gives a tapioca-like consistency. The dough may also be infused with naturally extracted herbs such as butterfly pea for blue shades and pandan for green shades. The most popular stuffing is ground pork with cilantro roots, pepper, garlic, shallots, and preserved radish. Less common stuffings include chicken, mushroom, corn, coconut, bean sprouts, and chives. Vegetarian recipes are also available.
Pak moh yuan is often served with sauces and toppings. While sweet chili sauce is the standard, recipes from certain regions may also use seafood ingredients in their sauce. Coconut milk may be drizzled on top as a sweet option. The dish may be garnished with fried garlic and served with lettuce and fresh chili on the side.
Another variation known in Thai cuisine is khao phan (lit. "rice wrap"). It is regarded as a specialty of Uttaradit province, where it is eaten freshly made in many variations, but also sun-dried. The dried versions often have spices added to them and are popularly used as a wrap for a spicy salad made with rice noodles and minced pork.
Bánh ướt
Bánh ướt is a Vietnamese thin pancake wrapper consisting of rice noodle sheets, eaten with nước chấm, fried shallots, and a side of chả lụa (Vietnamese pork sausage).
See also
References
External links
Alice's Guide to Vietnamese Banh
Bánh cuốn on Hanoidelicious
Recipe for bánh cuốn in French: Bánh cuốn
Vietnamese rice dishes
Thai cuisine
Steamed foods
Fermented foods
Vietnamese noodle dishes
Bánh
Stuffed dishes | Bánh cuốn | Biology | 723 |
746,495 | https://en.wikipedia.org/wiki/Bioreactor | A bioreactor is any manufactured device or system that supports a biologically active environment. In one context, a bioreactor is a vessel in which a chemical process is carried out involving organisms or biochemically active substances derived from such organisms. This process can be either aerobic or anaerobic. These bioreactors are commonly cylindrical, ranging in size from litres to cubic metres, and are often made of stainless steel.
It may also refer to a device or system designed to grow cells or tissues in the context of cell culture. These devices are being developed for use in tissue engineering or biochemical/bioprocess engineering.
On the basis of mode of operation, a bioreactor may be classified as batch, fed batch or continuous (e.g. a continuous stirred-tank reactor model). An example of a continuous bioreactor is the chemostat.
Organisms or biochemically active substances growing in bioreactors may be submerged in liquid medium or may be anchored to the surface of a solid medium. Submerged cultures may be suspended or immobilized. Suspension bioreactors may support a wider variety of organisms, since special attachment surfaces are not needed, and can operate at a much larger scale than immobilized cultures. However, in a continuously operated process the organisms will be removed from the reactor with the effluent. Immobilization is a general term describing a wide variety of methods for cell or particle attachment or entrapment. It can be applied to essentially all types of biocatalysis, including enzymes, cellular organelles, and animal and plant cells and organs. Immobilization is useful for continuously operated processes, since the organisms will not be removed with the reactor effluent, but is limited in scale because the microbes are only present on the surfaces of the vessel.
Large-scale immobilized cell bioreactor types include:
moving media, also known as moving bed biofilm reactor (MBBR)
packed bed
fibrous bed
membrane
Design
Bioreactor design is a relatively complex engineering task, which is studied in the discipline of biochemical/bioprocess engineering. Under optimum conditions, the microorganisms or cells are able to perform their desired function with limited production of impurities. The environmental conditions inside the bioreactor, such as temperature, nutrient concentrations, pH, and dissolved gases (especially oxygen for aerobic fermentations) affect the growth and productivity of the organisms. The temperature of the fermentation medium is maintained by a cooling jacket, coils, or both. Particularly exothermic fermentations may require the use of external heat exchangers.

Nutrients may be continuously added to the fermenter, as in a fed-batch system, or may be charged into the reactor at the beginning of fermentation. The pH of the medium is measured and adjusted with small amounts of acid or base, depending upon the fermentation.

For aerobic (and some anaerobic) fermentations, reactant gases (especially oxygen) must be added to the fermentation. Since oxygen is relatively insoluble in water (the basis of nearly all fermentation media), air (or purified oxygen) must be added continuously. The action of the rising bubbles helps mix the fermentation medium and also "strips" out waste gases, such as carbon dioxide. In practice, bioreactors are often pressurized; this increases the solubility of oxygen in water. In an aerobic process, optimal oxygen transfer is sometimes the rate limiting step. Oxygen is poorly soluble in water—even less in warm fermentation broths—and is relatively scarce in air (20.95%). Oxygen transfer is usually helped by agitation, which is also needed to mix nutrients and to keep the fermentation homogeneous. Gas dispersing agitators are used to break up air bubbles and circulate them throughout the vessel.
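The oxygen transfer described above is conventionally quantified by the volumetric mass transfer coefficient; a standard formulation from bioprocess engineering texts is:

\[
\mathrm{OTR} = k_L a \left( C^{*} - C_L \right)
\]

where OTR is the oxygen transfer rate, \(k_L a\) the volumetric mass transfer coefficient, \(C^{*}\) the saturation (solubility-limited) oxygen concentration, and \(C_L\) the dissolved oxygen concentration in the broth. Pressurization raises \(C^{*}\), while agitation raises \(k_L a\), which is why both measures improve oxygen supply.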
Fouling can harm the overall efficiency of the bioreactor, especially the heat exchangers. To avoid it, the bioreactor must be easily cleaned. Interior surfaces are typically made of stainless steel for easy cleaning and sanitation. Typically bioreactors are cleaned between batches, or are designed to reduce fouling as much as possible when operated continuously. Heat transfer is an important part of bioreactor design; small vessels can be cooled with a cooling jacket, but larger vessels may require coils or an external heat exchanger.
Types
Photobioreactor
A photobioreactor (PBR) is a bioreactor which incorporates some type of light source (either natural sunlight or artificial illumination). Virtually any translucent container could be called a PBR; however, the term is more commonly used to define a closed system, as opposed to an open storage tank or pond.
Photobioreactors are used to grow small phototrophic organisms such as cyanobacteria, algae, or moss plants. These organisms use light through photosynthesis as their energy source and do not require sugars or lipids as an energy source. Consequently, the risk of contamination with other organisms like bacteria or fungi is lower in photobioreactors than in bioreactors for heterotrophic organisms.
Sewage treatment
Conventional sewage treatment utilises bioreactors to undertake the main purification processes. In some of these systems, a chemically inert medium with very high surface area is provided as a substrate for the growth of biological film. Separation of excess biological film takes place in settling tanks or cyclones. In other systems aerators supply oxygen to the sewage and biota to create activated sludge in which the biological component is freely mixed in the liquor in "flocs". In these processes, the liquid's biochemical oxygen demand (BOD) is reduced sufficiently to render the contaminated water fit for reuse. The biosolids can be collected for further processing, or dried and used as fertilizer. An extremely simple version of a sewage bioreactor is a septic tank whereby the sewage is left in situ, with or without additional media to house bacteria. In this instance, the biosludge itself is the primary host for the bacteria.
Bioreactors for specialized tissues
Many cells and tissues, especially mammalian ones, must have a surface or other structural support in order to grow, and agitated environments are often destructive to these cell types and tissues. Higher organisms, being auxotrophic, also require highly specialized growth media. This poses a challenge when the goal is to culture larger quantities of cells for therapeutic production purposes, and a significantly different design is needed compared to industrial bioreactors used for growing protein expression systems such as yeast and bacteria.
Many research groups have developed novel bioreactors for growing specialized tissues and cells on a structural scaffold, in an attempt to recreate organ-like tissue structures in vitro. These include tissue bioreactors that can grow heart tissue, skeletal muscle tissue, ligaments, cancer tissue models, and others. Currently, scaling production of these specialized bioreactors for industrial use remains challenging and is an active area of research.
For more information on artificial tissue culture, see tissue engineering.
Modelling
Mathematical models act as an important tool in various bioreactor applications, including wastewater treatment. These models are useful for planning efficient process control strategies and predicting future plant performance. Moreover, such models are beneficial in education and research.
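As a minimal illustration of such a model, the sketch below integrates classical Monod growth kinetics for a batch culture; all parameter values are illustrative assumptions rather than data for any particular organism:

```python
# Batch-bioreactor sketch using Monod growth kinetics (forward Euler).
# All parameter values are illustrative assumptions, not measured data.
mu_max = 0.4       # maximum specific growth rate, 1/h
K_s = 0.5          # Monod half-saturation constant, g/L
Y_xs = 0.5         # biomass yield on substrate, g biomass per g substrate
X, S = 0.1, 10.0   # initial biomass and substrate concentrations, g/L
dt, t = 0.01, 0.0  # time step and clock, h

while t < 24.0 and S > 1e-6:
    mu = mu_max * S / (K_s + S)   # Monod specific growth rate
    # update biomass (growth) and substrate (consumption) together
    X, S = X + mu * X * dt, max(S - (mu * X / Y_xs) * dt, 0.0)
    t += dt

print(f"after {t:.1f} h: biomass {X:.2f} g/L, substrate {S:.2f} g/L")
```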
Bioreactors are generally used in industries concerned with food, beverages and pharmaceuticals. The emergence of biochemical engineering is of recent origin. Processing of biological materials using biological agents such as cells, enzymes or antibodies is a major pillar of biochemical engineering. Applications of biochemical engineering cover major fields of civilization such as agriculture, food and healthcare, resource recovery and fine chemicals.
Until now, the industries associated with biotechnology have lagged behind other industries in implementing control over the process and optimization strategies. A main drawback in biotechnological process control is the problem of measuring key physical and biochemical parameters.
Operational stages in a bio-process
A bioprocess is composed mainly of three stages—upstream processing, bioreaction, and downstream processing—to convert raw material to finished product.
The raw material can be of biological or non-biological origin. It is first converted to a form more suitable for processing. This is done in an upstream processing step which involves chemical hydrolysis, preparation of liquid medium, separation of particulates, air purification and many other preparatory operations.
After the upstream processing step, the resulting feed is transferred to one or more bioreaction stages. The biochemical reactors or bioreactors form the base of the bioreaction step. This step mainly consists of three operations, namely, production of biomass, metabolite biosynthesis and biotransformation.
Finally, the material produced in the bioreactor must be further processed in the downstream section to convert it into a more useful form. The downstream process mainly consists of physical separation operations which include solid liquid separation, adsorption, liquid-liquid extraction, distillation, drying etc.
Specifications
A typical bioreactor consists of the following parts:
Agitator – Used for mixing the contents of the reactor, which keeps the cells in a homogeneous condition for better transport of nutrients and oxygen to the desired product(s).
Baffle – Used to break the vortex formation in the vessel, which is usually highly undesirable as it changes the center of gravity of the system and consumes additional power.
Sparger – In aerobic cultivation process, the purpose of the sparger is to supply adequate oxygen to the growing cells.
Jacket – The jacket provides an annular area for circulation of constant-temperature water, which keeps the temperature of the bioreactor at a constant value.
See also
ATP test
Biochemical engineering
Biofuel from algae
Biological hydrogen production (algae)
Bioprocessor
Bioreactor landfill
Biotechnology
Cell culture
Chemostat
Digester
Electro-biochemical reactor (EBR)
Hairy root culture
History of biotechnology
Hollow fiber bioreactor
Immobilized enzyme
Industrial biotechnology
Moving bed biofilm reactor
Septic tank
Single-use bioreactor
Tissue engineering
References
Further reading
Pauline M Doran, Bio-process Engineering Principles, Elsevier, 2nd ed., 2013
External links
Photo-bioreactor.
Biotechnology
Biological engineering
Biochemical engineering | Bioreactor | Chemistry,Engineering,Biology | 2,114 |
11,935,110 | https://en.wikipedia.org/wiki/STAT6 | Signal transducer and activator of transcription 6 (STAT6) is a transcription factor that belongs to the Signal Transducer and Activator of Transcription (STAT) family of proteins. The proteins of the STAT family transmit signals from a receptor complex to the nucleus and activate gene expression. Like other STAT family proteins, STAT6 is activated by growth factors and cytokines. It is activated mainly by the cytokines interleukin-4 and interleukin-13.
Molecular biology
In the human genome, the STAT6 protein is encoded by the STAT6 gene, located on chromosome 12q13.3–q14.1. The gene encompasses over 19 kb and consists of 23 exons. STAT6 shares structural similarity with the other STAT proteins and is composed of an N-terminal domain, DNA-binding domain, SH3-like domain, SH2 domain and transactivation domain (TAD).
STAT proteins are activated by the Janus family (JAK) tyrosine kinases in response to cytokine exposure. STAT6 is activated by the cytokines interleukin-4 (IL-4) and interleukin-13 (IL-13) through their receptors, which both contain the α subunit of the IL-4 receptor (IL-4Rα). Tyrosine phosphorylation of STAT6 after stimulation by IL-4 results in the formation of STAT6 homodimers that bind specific DNA elements via a DNA-binding domain.
Function
The STAT6-mediated signaling pathway is required for the development of T-helper type 2 (Th2) cells and the Th2 immune response. Expression of Th2 cytokines, including IL-4, IL-13, and IL-5, is reduced in STAT6-deficient mice. The STAT6 protein is crucial in IL-4-mediated biological responses. STAT6 was found to induce the expression of BCL2L1/BCL-X(L), which is responsible for the anti-apoptotic activity of IL-4. IL-4 stimulates the phosphorylation of the IL-4 receptor, which recruits cytosolic STAT6 via its SH2 domain; STAT6 is then phosphorylated on tyrosine 641 (Y641) by JAK1, resulting in the dimerization and nuclear translocation of STAT6 to activate target genes. Knockout studies in mice suggest roles for this gene in the differentiation of T helper 2 (Th2) cells, the expression of cell surface markers, and immunoglobulin class switching.
Activation of the STAT6 signaling pathway is necessary for macrophage function and is required for the M2 subtype activation of macrophages. STAT6 also regulates other transcription factors such as GATA3, an important regulator of Th2 differentiation. STAT6 is also required for the development of IL-9-secreting T cells.
STAT6 also plays a critical role in Th2 lung inflammatory responses, including clearance of parasitic infections, and in the pathogenesis of asthma. Th2-cell derived cytokines such as IL-4 and IL-13 induce the production of IgE, a major mediator of the allergic response. Association studies searching for relations between polymorphisms in STAT6 and IgE level or asthma have discovered a few polymorphisms significantly associated with the examined traits. Only two polymorphisms showed repeatedly significant clinical association and/or a functional effect on STAT6 function (GT repeats in exon 1 and the rs324011 polymorphism in intron 2).
Interactions
STAT6 has been shown to interact with:
CREB-binding protein,
EP300,
IRF4,
NFKB1,
Nuclear receptor coactivator 1, and
SND1.
Pathology
Gene fusion
Recurrent somatic fusions of the two genes, NGFI-A–binding protein 2 (NAB2) and STAT6, located at chromosomal region 12q13, have been identified in solitary fibrous tumors.
Amplification
STAT6 is amplified in a subset of dedifferentiated liposarcoma.
See also
Interleukin 4
References
Further reading
External links
Gene expression
Immune system
Proteins
Transcription factors
Signal transduction | STAT6 | Chemistry,Biology | 869 |
5,718,913 | https://en.wikipedia.org/wiki/WaveLAN | WaveLAN was a brand name for a family of wireless networking technology sold by NCR, AT&T, Lucent Technologies, and Agere Systems as well as being sold by other companies under OEM agreements. The WaveLAN name debuted on the market in 1990 and was in use until 2000, when Agere Systems renamed their products to ORiNOCO. WaveLAN laid the important foundation for the formation of IEEE 802.11 working group and the resultant creation of Wi-Fi.
WaveLAN has been used on two different families of wireless technology:
Pre-IEEE 802.11 WaveLAN, also called Classic WaveLAN
IEEE 802.11-compliant WaveLAN, also known as WaveLAN IEEE and ORiNOCO
History
WaveLAN was originally designed by NCR Systems Engineering, later renamed WCND (Wireless Communication and Networking Division), at Nieuwegein in the province of Utrecht in the Netherlands, a subsidiary of NCR Corporation, in 1986–87, and introduced to the market in 1990 as a wireless alternative to Ethernet and Token Ring. The next year NCR contributed the WaveLAN design to the IEEE 802 LAN/MAN Standards Committee. This led to the founding of the 802.11 Wireless LAN Working Committee, which produced the original IEEE 802.11 standard, which eventually became the basis of the certification mark Wi-Fi. When NCR was acquired by AT&T in 1991, becoming the AT&T GIS (Global Information Solutions) business unit, the product name was retained, as happened two years later when the product was transferred to the AT&T GBCS (Global Business Communications Systems) business unit, and again when AT&T spun off their GBCS business unit as Lucent in 1995. The technology was also sold as WaveLAN under an OEM agreement by Epson, Hitachi, and NEC, and as the RoamAbout DS by DEC. It competed directly with Aironet's non-802.11 ARLAN lineup, which offered similar speeds, frequency ranges and hardware.
Several companies also marketed wireless bridges and routers based on the WaveLAN ISA and PC cards, like the C-Spec OverLAN, KarlNet KarlBridge, Persoft Intersect Remote Bridge, and Solectek AIRLAN/Bridge Plus. Lucent's WavePoint II access point could accommodate both the classic WaveLAN PC cards as well as the WaveLAN IEEE cards. Also, there were a number of compatible third-party products available to address niche markets such as: Digital Ocean's Grouper, Manta, and Starfish offerings for the Apple Newton and Macintosh; Solectek's 915 MHz WaveLAN parallel port adapter; Microplex's M204 WaveLAN-compatible wireless print server; NEC's Japanese-market only C&C-Net 2.4 GHz adapter for the NEC-bus; Toshiba's Japanese-market only WaveCOM 2.4 GHz adapter for the Toshiba-Bus; and Teklogix's WaveLAN-compatible Pen-based and Notebook terminals.
During this time frame, networking professionals also realized that since NetWare 3.x and 4.x supported the WaveLAN cards and came with a Multi Protocol Router module that supported the IP/IPX RIP and OSPF routing protocols, one could construct a wireless routed network using NetWare servers and WaveLAN cards for a fraction of the cost of building a wireless bridged network using WaveLAN access points. Many NetWare classes and textbooks of the time included a NetWare OS CD with a 2-person license, so potentially the only cost incurred came from hardware.
When the 802.11 protocol was ratified, Lucent began producing chipsets and PC-cards to support this new standard under the name of WaveLAN IEEE. WaveLAN was among the first products certified by the Wi-Fi Alliance, originally called the Wireless Ethernet Compatibility Association (WECA). Shortly thereafter, Lucent spun off its semiconductor division that also produced the WaveLAN chipsets as Agere Systems. On June 17, 2002 Proxim acquired the IEEE 802.11 LAN equipment business including the trademark ORiNOCO from Agere Systems. Proxim later renamed its entire 802.11 wireless networking lineup to ORiNOCO, including products based on Atheros chipsets.
Specifications
Classic WaveLAN operates in the 900 MHz or 2.4 GHz ISM bands. Being a proprietary pre-802.11 protocol, it is completely incompatible with the 802.11 standard. Soon after the publication of the IEEE 802.11 standard on November 18, 1997, WaveLAN IEEE was placed on the market.
Hardware
The pre-802.11 standard WaveLAN cards were based on the Intel 82586 Ethernet PHY controller, a commonly used controller in its time found in many ISA and MCA Ethernet cards, such as the Intel EtherExpress 16 and the 3COM 3C523. The WaveLAN IEEE ISA, MCA and PCMCIA cards used a Medium Access Controller (MAC), HERMES, designed specifically for 802.11 protocol support. The radio modem section was hidden from the OS, thus making the WaveLAN card appear to be a typical Ethernet card, with the radio-specific features taken care of behind the scenes.
While the 900 MHz models and the early 2.4 GHz models operated on one fixed frequency, the later 2.4 GHz cards as well as some 2.4 GHz WavePoint access points had the hardware capacity to operate over ten channels, ranging from 2.412 GHz to 2.484 GHz, with the channels available being determined by the region-specific firmware.
Security
For security, WaveLAN used a 16-bit NWID (NetWork ID) field, which yielded 65,536 potential combinations; the radio portion of the device could receive radio traffic tagged with another NWID, but the controller would discard the traffic. DES encryption (56-bit) was an option in some of the ISA and MCA cards and all of the WavePoint access points. The full-length ISA and MCA cards had a socket for an encryption chip, the half-length 915 MHz ISA cards had solder pads for a socket which was never added, and the 2.4 GHz half-length ISA cards had the chip soldered directly to the board.
For the IEEE 802.11 standard the goal was to provide data confidentiality comparable to that of a traditional wired network, using 64- and 128-bit data encryption technology. This first implementation was called “Wired Equivalent Privacy” (WEP).
There were shortcomings in the security strategy of WaveLAN and initial 802.11-compatible devices:
The initial IEEE 802.11 WEP security implementation was shown to be vulnerable to attack.
This was addressed by the 802.11i Wi-Fi Protected Access (WPA) that replaced WEP in the standard.
Official specifications
Support
Officially released drivers
Windows 3.11, 95, and NT 3.5/4.0
Windows 3.11, Windows 95, and 98 supported the ISA and MCA cards natively but did not provide any configuration or link diagnostics utilities.
Windows NT 3.51 did not natively support the WaveLAN cards, but additional drivers from Microsoft's Windows NT Driver Library were available.
OS/2 NDIS and NetWare Requester
LAN Manager/IBM LAN Server
Artisoft LANtastic
PC-TCP for DOS
NetWare Lite, NetWare 2, 3, and 4. NetWare 4.11 through 5.x supported the ISA and MCA cards natively but did not provide any configuration or link diagnostics utilities.
ODI/VLM NetWare client for DOS. The DOS drivers came with configuration and link diagnostics utilities.
SCO UNIX version 1.00.00.00
UnixWare version 1.1
NCR's documentation stated that drivers for Banyan Vines 5.05 were available on Banyan's BBS, but it is unclear whether they ever materialized.
Volunteer-developed drivers
Linux has included support for ISA Classic WaveLAN cards since the 2.0.37 kernel, while full support for the PC card Classic WaveLAN cards came later. The status of support for MCA Classic WaveLAN cards is unknown.
FreeBSD version 2.2.1-up and the Mach4 kernel have had native support for the ISA Classic WaveLAN cards for several years. OpenBSD and NetBSD do not natively support any of the Classic WaveLAN cards.
Several open-source projects, such as NdisWrapper and Project Evil, currently exist that allow the use of NDIS drivers via a "wrapper". This lets non-Windows operating systems, such as Linux, FreeBSD, and ZETA, benefit from the near-universal availability of drivers written for the Windows platform.
Examples
Classic WaveLAN technology was available for the MCA, ISA/EISA, and PCMCIA interfaces:
915 MHz
Full-length ISA card
F connector
RG-59/U antenna cable
NCR 008-0126998 HOLI (HOst Lan Interface) chip
NCR 008-0126999 Icarus or NCR 008-0127211 Daedalus chip
Intel N82586 PHY controller chip
IRQ, boot ROM, and boot ROM base address configured with a four-position DIP switch block at top of card
NCR part number 601-0068991
AT&T part number 3399-F170
Half-length ISA card
SMB connector
NCR 008-0126998 HOLI chip
Intel N82586 PHY controller chip
IRQ, boot ROM, and boot ROM base address configured with a four-position DIP switch block at top of card
AT&T part number 3399-K602.
Full-length MCA card
F connector
NCR 008-0127216 HOLI chip
NCR 008-0126999 Icarus chip
NCR 8-127000A socketed DES encryption chip
Intel N82586 PHY controller chip
MCA id number 6A14.
PC card
Large EAM (External Antenna Module)
Intel i82593 PHY controller chip
AT&T part number 3399-K080
Compaq/DEC Roamabout part number: DEINA-AA.
2.4 GHz
Full-length ISA card
Fixed frequency
IRQ, boot ROM, and boot ROM base address configured with a four-position DIP switch block at top of card
Half-length ISA card
SMB connector
Selectable frequency
Symbios Logic 008-0126998 HOLI chip
Intel N82586 PHY controller chip
IRQ, boot ROM, and boot ROM base address configured with a four-position DIP switch block at top of card
AT&T part number 3399-K635.
Full-length MCA card
SMB connector
NCR 008-0127216 HOLI chip
NCR 008-0127211 Daedalus chip
NCR 8-127000A socketed DES encryption chip
Intel N82586 PHY controller chip
AT&T part number 3399-K066
MCA id number 6A14.
PC card - 2.4 GHz, selectable frequency, large EAM (External Antenna Module).
Intel N82593 PHY controller chip
AT&T part number: AT&T 3399-K624.
Lucent part number: LUC 3399-K644.
Compaq/DEC Roamabout part number: DEIRB-xx.
Options
DES encryption chip. Part number 3399-K972.
Boot ROM chip. Part number 3399-K973.
Citations
References
NCR WaveLAN PC-AT Installation and Operations manual, part number ST-2119-09, revision number 008-0127167 Rev. B, copyright 1990, 1991 by NCR Corporation.
External links
NCR's HTTP site with a selection of WaveLAN drivers and documentation
FTP mirror site of DEC's ftp server with a selection of RoamAbout drivers and documentation
Detailed analysis of WaveLAN ISA cards
Wayback machine archive of documentation on an NCR WaveLAN backbone built in Latvia
Wayback machine archive of Byte Magazine's review of WaveLAN
Wayback machine archive for Wavelan Classic products
Detailed analysis of Wavelan MCA cards
Wireless networking
Network access
NCR Corporation products | WaveLAN | Technology,Engineering | 2,535 |
28,017,301 | https://en.wikipedia.org/wiki/Alf%20Howard | Alf Howard (30 April 1906 – 4 July 2010) was an Australian scientist, educator and explorer. He was most prominently known as the last surviving member of the expedition to Antarctica led by Sir Douglas Mawson aboard the RRS Discovery in 1929–1931. Howard served as chemist and hydrologist aboard the vessel, responsible for monitoring sea-water temperatures and for the collection and chemical analysis of sea-water samples.
Biography
Howard was born and raised in Camberwell, Victoria. He completed a Master of Science, the first of his five degrees, at the University of Melbourne in 1927; Howard also received an honorary doctorate in statistics and a PhD in linguistics from the University of Queensland. He worked with the Department of Human Movement as a programmer and a statistics consultant, and as an honorary researcher working full-time without pay for over 20 years. He stopped in 2003 at age 97.
He was doing work on organic chemistry when he was approached by Sir David Orme Masson for the British, Australian and New Zealand Antarctic Research Expedition (BANZARE). Within 48 hours Howard took the train to Perth and sailed to England on Orient Steam Navigation's Orvieto.
Howard was made a Member of the Order of Australia (AM) in the 1998 Australia Day Honours for "service to science through Antarctic exploration as a member of the British Australian New Zealand Antarctic Research Expedition (1929–1931), for his work on food technology and preservation, and for his contribution to statistical design".
Just before his 100th birthday, Howard, via a pre-recorded film, opened the new Ocean Odyssey wing at the Discovery Point museum in Dundee, which stands beside the old restored ship Discovery. He was profiled in the Tasmanian Museum and Art Gallery's new exhibition Islands to Ice: The Great Southern Ocean and Antarctica in late 2010.
In 2005 he donated AU$80,000 for a computer laboratory for undergraduate students. Four years earlier, in 2001, he was presented with the Australian Geographic Society award.
Death
Howard was the only surviving member of Mawson's expedition when he died of natural causes in 2010, aged 104.
References
External links
Australian Antarctic explorer dies
Alf Howard reflects on life lived at edge of world
1906 births
2010 deaths
Australian explorers
Australian scientists
Australian men centenarians
Hydrologists
Members of the Order of Australia
University of Melbourne alumni
University of Queensland alumni
People from Camberwell, Victoria
Scientists from Melbourne | Alf Howard | Environmental_science | 490 |
7,635,393 | https://en.wikipedia.org/wiki/Good%20engineering%20practice | Good engineering practice (GEP) is engineering and technical activities that ensure that a company manufactures products of the required quality as expected (e.g., by the relevant regulatory authorities). Good engineering practices are intended to ensure that the development and/or manufacturing effort consistently generates deliverables that support the requirements for qualification or validation. Good engineering practices are applied in all industries that require engineering.
See also
GxP
Good manufacturing practice (GMP)
Best practice
American National Standards Institute (ANSI)
Institute of Electrical and Electronics Engineers (IEEE)
European Medicines Agency (EMEA)
Food and Drug Administration (FDA)
Ministry of Health, Labour and Welfare (Japan)
Pharmaceutical Inspection Convention and Pharmaceutical Inspection Co-operation Scheme (PIC/S)
References
Sources
Risk-Based Qualification for the 21st Century
ISPE GAMP COP
Good
Engineering concepts
Good practice | Good engineering practice | Engineering | 170 |
15,544,777 | https://en.wikipedia.org/wiki/Ridley%E2%80%93Watkins%E2%80%93Hilsum%20theory | In solid state physics the Ridley–Watkins–Hilsum theory (RWH) explains the mechanism by which differential negative resistance is developed in a bulk solid state semiconductor material when a voltage is applied to the terminals of the sample. It is the theory behind the operation of the Gunn diode as well as several other microwave semiconductor devices, which are used practically in electronic oscillators to produce microwave power. It is named for British physicists Brian Ridley, Tom Watkins and Cyril Hilsum who wrote theoretical papers on the effect in 1961.
Negative resistance oscillations in bulk semiconductors had been observed in the laboratory by J. B. Gunn in 1962, and were thus named the "Gunn effect", but physicist Herbert Kroemer pointed out in 1964 that Gunn's observations could be explained by the RWH theory.
In essence, the RWH mechanism is the transfer of conduction electrons in a semiconductor from a high-mobility valley to lower-mobility, higher-energy satellite valleys. This phenomenon can only be observed in materials that have such energy band structures.
Normally, in a conductor, an increasing electric field causes higher charge carrier (usually electron) speeds and results in higher current, consistent with Ohm's law. In a multi-valley semiconductor, though, higher energy may push the carriers into a higher-energy state where they actually have higher effective mass and thus slow down. In effect, carrier velocities and current drop as the voltage is increased. While this transfer occurs, the material exhibits a decrease in current – that is, a negative differential resistance. At higher voltages, the normal increase of current with voltage resumes once the bulk of the carriers are kicked into the higher energy-mass valley. Therefore, the negative resistance only occurs over a limited range of voltages.
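This behavior can be sketched with the standard two-valley model. With \(n_1\) electrons of mobility \(\mu_1\) in the lower valley and \(n_2\) electrons of mobility \(\mu_2\) in the satellite valleys (with \(n_1 + n_2 = n\) fixed), the current density is

\[
J = e \,(n_1 \mu_1 + n_2 \mu_2)\, E , \qquad \mu_1 \gg \mu_2 ,
\]

and negative differential resistance corresponds to the field range in which transfer of electrons into the low-mobility valleys outpaces the growth of \(E\), so that \( \mathrm{d}J/\mathrm{d}E < 0 \).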
Of the semiconducting materials satisfying these conditions, gallium arsenide (GaAs) is the most widely understood and used. However, the RWH mechanism can also be observed in indium phosphide (InP), cadmium telluride (CdTe), zinc selenide (ZnSe) and indium arsenide (InAs) under hydrostatic or uniaxial pressure.
See also
Gunn diode
References
Other sources
Electronic engineering | Ridley–Watkins–Hilsum theory | Physics,Materials_science,Technology,Engineering | 473
42,471,422 | https://en.wikipedia.org/wiki/Micro%20job | A micro job is a small paid freelance task selected from a centralized platform. The practice of working micro jobs is called microemployment, and people doing micro jobs are called microemployees. These jobs can be online or in-person: for example, acting as a virtual assistant, handyman, or nanny; or doing website design, dog boarding, or errands. Personal income varies depending on the jobs taken and the fee charged by the platform offering the jobs.
The concept is related to that of the gig economy. The micro-job industry is part of a larger movement of companies facilitating the outsourcing of products and services: for example Airbnb, which lets users independently rent out houses. Microemployment sites were growing rapidly as of 2013 and form a new type of on-demand income for workers. Some of these platforms generate billions of dollars.
Differences from traditional jobs
Most micro jobs do not pay benefits. Workers also cannot rely on a steady paycheck.
In law
Microemployees are classified as independent contractors under the law, meaning they are not considered employees of the companies they work for. As independent contractors, they bear full legal responsibility for their actions and compliance with applicable regulations. The law is murky, however, on the relationship between microemployees and the marketplaces where workers find jobs, and lawsuits are expected to test this connection. In January 2014 the Kuang-Liu family, of San Francisco, CA, filed a wrongful death lawsuit against Uber and driver Syed Muzzafar. The accident, which caused the death of their 6-year-old daughter and injured two other family members, allegedly occurred while Muzzafar was fulfilling a driving job from Uber.
Individual auto insurance policies do not cover commercial activities, which may result in denials of claims if drivers are working for hire. To prevent legal complications, some ride service providers are requiring their drivers to purchase commercial insurance. Legislation for microemployment work issues remains unclear and unresolved.
See also
Freelancer
Online volunteering
Temporary work
References
Crowdsourcing
Business terms
Digital labor | Micro job | Technology | 427 |
5,653,596 | https://en.wikipedia.org/wiki/Wireless%20application%20service%20provider | A wireless application service provider (WASP) is the generic name for a firm that provides remote services, typically to handheld devices, such as cellphones or PDAs, that connect to wireless data networks. WASPs are a specific category of application service providers (ASPs), though the latter term may more often be associated with standard web services. They can also be used for wireless bridging between different types of network topologies.
Wireless networking | Wireless application service provider | Technology,Engineering | 90 |
38,257,400 | https://en.wikipedia.org/wiki/Truncated%20order-6%20square%20tiling | In geometry, the truncated order-6 square tiling is a uniform tiling of the hyperbolic plane. It has Schläfli symbol of t{4,6}.
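As context for the Schläfli symbol: truncating a regular tiling {p,q} produces t{p,q} with vertex configuration q.2p.2p, so here each vertex is surrounded by one hexagon and two octagons:

\[
t\{4,6\} : \quad 6.8.8
\]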
Uniform colorings
Symmetry
The dual tiling represents the fundamental domains of the *443 orbifold symmetry. There are two reflective subgroups constructed kaleidoscopically from [(4,4,3)] by removing one or two of the three mirrors. In these images fundamental domains are alternately colored black and cyan, and mirrors exist on the boundaries between colors.
A larger subgroup, [(4,4,3*)], index 6, is constructed as (3*22); with its gyration points removed, it becomes (*222222).
The symmetry can be doubled as 642 symmetry by adding a mirror bisecting the fundamental domain.
Related polyhedra and tilings
From a Wythoff construction there are eight hyperbolic uniform tilings that can be based on the regular order-4 hexagonal tiling.
Drawing the tiles colored as red on the original faces, yellow at the original vertices, and blue along the original edges, there are 8 forms.
It can also be generated from the (4 4 3) hyperbolic tilings:
See also
Square tiling
Tilings of regular polygons
List of uniform planar tilings
List of regular polytopes
References
John H. Conway, Heidi Burgiel, Chaim Goodman-Strauss, The Symmetries of Things 2008, (Chapter 19, The Hyperbolic Archimedean Tessellations)
External links
Hyperbolic and Spherical Tiling Gallery
KaleidoTile 3: Educational software to create spherical, planar and hyperbolic tilings
Hyperbolic Planar Tessellations, Don Hatch
Hyperbolic tilings
Isogonal tilings
Order-6 tilings
Square tilings
Truncated tilings
Uniform tilings | Truncated order-6 square tiling | Physics | 380 |
46,249,066 | https://en.wikipedia.org/wiki/Penicillium%20infrapurpureum | Penicillium infrapurpureum is a species of the genus of Penicillium.
References
infrapurpureum
Fungi described in 2014
Fungus species | Penicillium infrapurpureum | Biology | 35 |
299,329 | https://en.wikipedia.org/wiki/Probabilistic%20context-free%20grammar | In theoretical linguistics and computational linguistics, probabilistic context-free grammars (PCFGs) extend context-free grammars, similar to how hidden Markov models extend regular grammars. Each production is assigned a probability. The probability of a derivation (parse) is the product of the probabilities of the productions used in that derivation. These probabilities can be viewed as parameters of the model, and for large problems it is convenient to learn these parameters via machine learning. A probabilistic grammar's validity is constrained by the context of its training dataset.
PCFGs originated from grammar theory and have applications in areas as diverse as natural language processing, the study of the structure of RNA molecules, and the design of programming languages. Designing efficient PCFGs has to weigh factors of scalability and generality. Issues such as grammar ambiguity must be resolved. The grammar design affects result accuracy. Grammar parsing algorithms have various time and memory requirements.
Definitions
Derivation: The process of recursive generation of strings from a grammar.
Parsing: Finding a valid derivation using an automaton.
Parse Tree: The alignment of the grammar to a sequence.
An example of a parser for PCFG grammars is the pushdown automaton. The algorithm parses grammar nonterminals from left to right in a stack-like manner. This brute-force approach is not very efficient. In RNA secondary structure prediction variants of the Cocke–Younger–Kasami (CYK) algorithm provide more efficient alternatives to grammar parsing than pushdown automata. Another example of a PCFG parser is the Stanford Statistical Parser which has been trained using Treebank.
Formal definition
Similar to a CFG, a probabilistic context-free grammar can be defined by a quintuple G = (M, T, R, S, P), where
M is the set of non-terminal symbols
T is the set of terminal symbols
R is the set of production rules
S is the start symbol
P is the set of probabilities on production rules
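A minimal sketch of this quintuple as Python data may make the definition concrete; the grammar and its probabilities below are hypothetical examples, not taken from any corpus:

```python
# G = (M, T, R, S, P): a toy PCFG represented as plain Python data.
# Grammar and probabilities are hypothetical, for illustration only.
M = {"S"}                          # non-terminal symbols
T = {"a", "b"}                     # terminal symbols
R = {                              # production rules, with probabilities P
    "S": [(("a", "S"), 0.4),       # S -> a S
          (("b", "S"), 0.1),       # S -> b S
          (("a",),     0.3),       # S -> a
          (("b",),     0.2)],      # S -> b
}
start = "S"

# In a PCFG, the probabilities of all alternatives for each
# non-terminal must sum to 1.
for lhs, alternatives in R.items():
    assert abs(sum(p for _, p in alternatives) - 1.0) < 1e-9, lhs
```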
Relation with hidden Markov models
PCFG models extend context-free grammars the same way as hidden Markov models extend regular grammars.
The Inside-Outside algorithm is an analogue of the Forward-Backward algorithm. It computes the total probability of all derivations that are consistent with a given sequence, based on some PCFG. This is equivalent to the probability of the PCFG generating the sequence, and is intuitively a measure of how consistent the sequence is with the given grammar. The Inside-Outside algorithm is used in model parametrization to estimate prior frequencies observed from training sequences in the case of RNAs.
Dynamic programming variants of the CYK algorithm find the Viterbi parse of an RNA sequence for a PCFG model. This parse is the most likely derivation of the sequence by the given PCFG.
Grammar construction
Context-free grammars are represented as a set of rules inspired by attempts to model natural languages. The rules are absolute and have a typical syntax representation known as Backus–Naur form. The production rules consist of terminal and non-terminal symbols, and a blank may also be used as an end point. In the production rules of CFGs and PCFGs the left side has only one nonterminal whereas the right side can be any string of terminals or nonterminals. In PCFGs nulls are excluded. An example of a grammar:

 S → aS, S → bS, S → ε

This grammar can be shortened using the '|' ('or') character into:

 S → aS | bS | ε

Terminals in a grammar are words, and through the grammar rules a non-terminal symbol is transformed into a string of either terminals and/or non-terminals. The above grammar is read as "beginning from the non-terminal S, the emission can generate either a or b or ε".
Its derivation is:

 S ⇒ aS ⇒ abS ⇒ ab
An ambiguous grammar may result in ambiguous parsing if applied to homographs, since the same word sequence can have more than one interpretation. Pun sentences such as the newspaper headline "Iraqi Head Seeks Arms" are an example of ambiguous parses.
One strategy of dealing with ambiguous parses (originating with grammarians as early as Pāṇini) is to add yet more rules, or prioritize them so that one rule takes precedence over others. This, however, has the drawback of proliferating the rules, often to the point where they become difficult to manage. Another difficulty is overgeneration, where unlicensed structures are also generated.
Probabilistic grammars circumvent these problems by ranking various productions on frequency weights, resulting in a "most likely" (winner-take-all) interpretation. As usage patterns are altered in diachronic shifts, these probabilistic rules can be re-learned, thus updating the grammar.
Assigning probabilities to production rules makes a PCFG. These probabilities are informed by observing distributions on a training set of similar composition to the language to be modeled. On most samples of broad language, probabilistic grammars where probabilities are estimated from data typically outperform hand-crafted grammars. In contrast with PCFGs, CFGs are not applicable to RNA structure prediction because, while they incorporate the sequence-structure relationship, they lack the scoring metrics that reveal a sequence's structural potential.
Weighted context-free grammar
A weighted context-free grammar (WCFG) is a more general category of context-free grammar, where each production has a numeric weight associated with it. The weight of a specific parse tree in a WCFG is the product (or sum) of all rule weights in the tree. Each rule weight is included as often as the rule is used in the tree. A special case of WCFGs are PCFGs, where the weights are (logarithms of) probabilities.
An extended version of the CYK algorithm can be used to find the "lightest" (least-weight) derivation of a string given some WCFG.
When the tree weight is the product of the rule weights, WCFGs and PCFGs can express the same set of probability distributions.
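A minimal Viterbi-style CYK sketch for a PCFG in Chomsky normal form illustrates how the best derivation (highest probability, i.e. "lightest" in negative log weight) is found; the grammar below is a hypothetical example:

```python
import math
from collections import defaultdict

# Hypothetical CNF grammar: lexical rules (word, nonterminal) -> prob,
# binary rules (lhs, left, right) -> prob.
lexical = {("a", "A"): 1.0, ("b", "B"): 1.0}
binary = {("S", "A", "B"): 0.7, ("S", "A", "S1"): 0.3, ("S1", "S", "B"): 1.0}

def viterbi_cyk(words):
    n = len(words)
    best = defaultdict(lambda: float("-inf"))  # (i, j, nt) -> best log prob
    for i, w in enumerate(words):              # fill width-1 spans
        for (word, nt), p in lexical.items():
            if word == w:
                best[(i, i + 1, nt)] = math.log(p)
    for span in range(2, n + 1):               # grow spans bottom-up
        for i in range(n - span + 1):
            j = i + span
            for k in range(i + 1, j):          # try every split point
                for (lhs, left, right), p in binary.items():
                    score = math.log(p) + best[(i, k, left)] + best[(k, j, right)]
                    if score > best[(i, j, lhs)]:
                        best[(i, j, lhs)] = score
    return best[(0, n, "S")]

print(viterbi_cyk(["a", "b"]))  # log probability of the best parse of "ab"
```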
Applications
RNA structure prediction
Since the 1990s, PCFGs have been applied to model RNA structures.
Energy minimization and PCFGs provide ways of predicting RNA secondary structure with comparable performance. However, structure prediction by PCFGs is scored probabilistically rather than by minimum free energy calculation. PCFG model parameters are directly derived from frequencies of different features observed in databases of RNA structures, rather than by experimental determination as is the case with energy minimization methods.
The types of structure that can be modeled by a PCFG include long-range interactions, pairwise structure and other nested structures; pseudoknots, however, cannot be modeled. PCFGs extend CFGs by assigning probabilities to each production rule. A maximum probability parse tree from the grammar implies a maximum probability structure. Since RNAs preserve their structures over their primary sequence, RNA structure prediction can be guided by combining evolutionary information from comparative sequence analysis with biophysical knowledge about structure plausibility based on such probabilities. Also, search results for structural homologs using PCFG rules are scored according to PCFG derivation probabilities. Therefore, building a grammar to model the behavior of base pairs and single-stranded regions starts with exploring features of a structural multiple sequence alignment of related RNAs.
Such a grammar generates strings in an outside-in fashion, that is, the base pair on the furthest extremes of the terminal is derived first, with the distal paired bases on both sides generated before moving inwards.
A PCFG model's extensibility allows constraining structure prediction by incorporating expectations about different features of an RNA. Such an expectation may reflect, for example, the propensity for assuming a certain structure by an RNA. However, incorporation of too much information may increase PCFG space and memory complexity, and it is desirable that a PCFG-based model be as simple as possible.
Every possible string a grammar generates is assigned a probability weight given the PCFG model, and the probabilities of all possible derivations sum to 1. The scores for each paired and unpaired residue explain the likelihood for secondary structure formations. Production rules also allow scoring loop lengths as well as the order of base pair stacking; hence it is possible to explore the range of all possible generations, including suboptimal structures, from the grammar and accept or reject structures based on score thresholds.
Implementations
RNA secondary structure implementations based on PCFG approaches can be utilized in:
Finding consensus structure by optimizing structure joint probabilities over an MSA.
Modeling base-pair covariation to detect homology in database searches.
Pairwise simultaneous folding and alignment.
Different implementation of these approaches exist. For example, Pfold is used in secondary structure prediction from a group of related RNA sequences, covariance models are used in searching databases for homologous sequences and RNA annotation and classification, RNApromo, CMFinder and TEISER are used in finding stable structural motifs in RNAs.
Design considerations
PCFG design impacts the secondary structure prediction accuracy. Any useful probabilistic model for structure prediction based on a PCFG has to maintain simplicity without much compromise to prediction accuracy. A model that is too complex may perform excellently on a single sequence but not scale. A grammar-based model should be able to:
Find the optimal alignment between a sequence and the PCFG.
Score the probability of the structures for the sequence and subsequences.
Parameterize the model by training on sequences/structures.
Find the optimal grammar parse tree (CYK algorithm).
Check for ambiguous grammar (Conditional Inside algorithm).
The existence of multiple parse trees per grammar denotes grammar ambiguity. This may be useful in revealing all possible base-pair structures for a grammar. However, an optimal structure is one where there is one and only one correspondence between the parse tree and the secondary structure.
Two types of ambiguities can be distinguished. Parse tree ambiguity and structural ambiguity. Structural ambiguity does not affect thermodynamic approaches as the optimal structure selection is always on the basis of lowest free energy scores. Parse tree ambiguity concerns the existence of multiple parse trees per sequence. Such an ambiguity can reveal all possible base-paired structures for the sequence by generating all possible parse trees then finding the optimal one. In the case of structural ambiguity multiple parse trees describe the same secondary structure. This obscures the CYK algorithm decision on finding an optimal structure as the correspondence between the parse tree and the structure is not unique. Grammar ambiguity can be checked for by the conditional-inside algorithm.
Building a PCFG model
A probabilistic context free grammar consists of terminal and nonterminal variables. Each feature to be modeled has a production rule that is assigned a probability estimated from a training set of RNA structures. Production rules are recursively applied until only terminal residues are left.
A starting non-terminal S produces loops. The rest of the grammar proceeds with a parameter L that decides whether a loop is the start of a stem or a single-stranded region, and a parameter F that produces paired bases.
The formalism of this simple PCFG looks like:

 S → LS | L
 L → aFa | a
 F → aFa | LS

where an expansion of the form aFa denotes a paired base on each side of the non-terminal.
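To see how such a grammar emits RNA-like strings outside-in, here is a toy sampler; the rule probabilities are illustrative assumptions, not Pfold's trained values, and the two positions of a "pair" are sampled independently in this simplification:

```python
import random

# Toy sampler for a KH-99-style grammar. Rule probabilities are
# illustrative assumptions, not trained parameters; the two paired
# positions are sampled independently in this toy version.
rules = {
    "S": [(["L", "S"], 0.4), (["L"], 0.6)],
    "L": [(["a", "F", "a"], 0.2), (["a"], 0.8)],  # aFa opens a base pair
    "F": [(["a", "F", "a"], 0.5), (["L", "S"], 0.5)],
}

def sample(symbol="S", depth=0):
    if symbol == "a":                 # terminal: emit a random base
        return random.choice("ACGU")
    if depth > 200:                   # guard against rare deep recursions
        return ""
    r, acc = random.random(), 0.0
    for rhs, p in rules[symbol]:      # choose an alternative by its probability
        acc += p
        if r <= acc:
            return "".join(sample(s, depth + 1) for s in rhs)
    return ""                         # not reached when probabilities sum to 1

random.seed(7)
print(sample())  # a short RNA-like string, distal pair positions generated first
```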
The application of PCFGs in predicting structures is a multi-step process. In addition, the PCFG itself can be incorporated into probabilistic models that consider RNA evolutionary history or search homologous sequences in databases. In an evolutionary history context inclusion of prior distributions of RNA structures of a structural alignment in the production rules of the PCFG facilitates good prediction accuracy.
A summary of general steps for utilizing PCFGs in various scenarios:
Generate production rules for the sequences.
Check ambiguity.
Recursively generate parse trees of the possible structures using the grammar.
Rank and score the parse trees for the most plausible sequence.
Algorithms
Several algorithms dealing with aspects of PCFG-based probabilistic models in RNA structure prediction exist, for instance the inside-outside algorithm and the CYK algorithm. The inside-outside algorithm is a recursive dynamic programming scoring algorithm that can follow expectation-maximization paradigms. It computes the total probability of all derivations that are consistent with a given sequence, based on some PCFG. The inside part scores the subtrees from a parse tree and therefore subsequence probabilities given a PCFG. The outside part scores the probability of the complete parse tree for a full sequence. CYK modifies the inside-outside scoring. Note that the term 'CYK algorithm' describes the CYK variant of the inside algorithm that finds an optimal parse tree for a sequence using a PCFG. It extends the actual CYK algorithm used in non-probabilistic CFGs.
The inside algorithm calculates the probabilities α(i,j,v) of a parse subtree rooted at nonterminal state v for the subsequence from position i to j. The outside algorithm calculates the probabilities β(i,j,v) of a complete parse tree for the sequence from the root, excluding the subsequence from i to j. The variables α and β refine the estimation of the probability parameters of a PCFG. It is possible to re-estimate the PCFG by finding the expected number of times a state is used in a derivation, through summing all the products of α and β divided by the probability of the sequence given the model. It is also possible to find the expected number of times a production rule is used by an expectation-maximization that utilizes the values of α and β. The CYK algorithm calculates the maximum-probability counterparts of α to find the most probable parse tree and yields its log probability.
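For a PCFG in Chomsky normal form, the inside recursion takes a standard form (a sketch in conventional notation, with x the sequence and v, y, z nonterminal states):

\[
\alpha(i,j,v) \;=\; \sum_{y,z}\;\sum_{k=i}^{j-1} P(v \to y\,z)\,\alpha(i,k,y)\,\alpha(k{+}1,j,z),
\qquad
\alpha(i,i,v) \;=\; P(v \to x_i).
\]

Replacing the sums with maximizations gives the CYK (Viterbi) variant that recovers the most probable parse tree.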
Memory and time complexity for general PCFG algorithms in RNA structure prediction are O(L²M) and O(L³M³) respectively, where L is the sequence length and M the number of nonterminal states. Restricting a PCFG may alter this requirement, as is the case with database search methods.
PCFG in homology search
Covariance models (CMs) are a special type of PCFGs with applications in database searches for homologs, annotation and RNA classification. Through CMs it is possible to build PCFG-based RNA profiles where related RNAs can be represented by a consensus secondary structure. The RNA analysis package Infernal uses such profiles in inference of RNA alignments. The Rfam database also uses CMs in classifying RNAs into families based on their structure and sequence information.
CMs are designed from a consensus RNA structure. A CM allows indels of unlimited length in the alignment. Terminals constitute states in the CM, and the transition probabilities between the states are 1 if no indels are considered. Grammars in a CM are as follows:
P: probabilities of pairwise interactions between 16 possible pairs
L: probabilities of generating 4 possible single bases on the left
R: probabilities of generating 4 possible single bases on the right
B: bifurcation with a probability of 1
S: start with a probability of 1
E: end with a probability of 1
The model has 6 possible states and each state grammar includes different types of secondary structure probabilities of the non-terminals. The states are connected by transitions. Ideally current node states connect to all insert states and subsequent node states connect to non-insert states. In order to allow insertion of more than one base insert states connect to themselves.
In order to score a CM model, the inside-outside algorithms are used. CMs use a slightly different implementation of CYK. Log-odds emission scores for the optimum parse tree are calculated from the emitting states. Since these scores are a function of sequence length, a more discriminative measure of the optimum parse tree probability score is reached by limiting the maximum length of the sequence to be aligned and calculating the log-odds relative to a null model. The computation time of this step is linear in the database size.
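The log-odds score referred to here is conventionally reported in bits; a standard formulation (as used, for example, by CM packages such as Infernal) is:

\[
\text{score}(x) \;=\; \log_2 \frac{P(x \mid \text{CM})}{P(x \mid \text{null})},
\]

so that positive scores indicate the sequence is more probable under the covariance model than under the null (background) model.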
Example: Using evolutionary information to guide structure prediction
The KH-99 algorithm by Knudsen and Hein forms the basis of the Pfold approach to predicting RNA secondary structure. In this approach the parameterization requires evolutionary history information derived from an alignment tree, in addition to probabilities of columns and mutations. The grammar probabilities are estimated from a training dataset.
Estimate column probabilities for paired and unpaired bases
In a structural alignment the probabilities of the unpaired bases columns and the paired bases columns are independent of other columns. By counting bases in single base positions and paired positions one obtains the frequencies of bases in loops and stems.
For a base pair, an occurrence of XY is also counted as an occurrence of YX. Identical base pairs such as XX are counted twice.
Calculate mutation rates for paired and unpaired bases
By pairing sequences in all possible ways overall mutation rates are estimated. In order to recover plausible mutations a sequence identity threshold should be used so that the comparison is between similar sequences. This approach uses 85% identity threshold between pairing sequences.
First, differences at single-base positions (except for gapped columns) between sequence pairs are counted, such that if the same position in two sequences holds different bases, the count of the difference is incremented for each sequence.
For unpaired bases, a 4 × 4 mutation rate matrix is used that satisfies that the mutation flow from $X$ to $Y$ is reversible:

$$P_X R_{XY} = P_Y R_{YX}.$$
For basepairs, a 16 × 16 rate distribution matrix is similarly generated.
The PCFG is used to predict the prior probability distribution of the structure whereas posterior probabilities are estimated by the inside-outside algorithm and the most likely structure is found by the CYK algorithm.
Estimate alignment probabilities
After calculating the column prior probabilities, the alignment probability is estimated by summing over all possible secondary structures. Any column in a secondary structure $\sigma$ for a sequence $D$ of length $l$ can be scored with respect to the alignment tree $T$ and the mutational model $M$. The prior distribution given by the PCFG is $P(\sigma \mid M)$. The phylogenetic tree $T$ can be calculated from the model by maximum likelihood estimation. Note that gaps are treated as unknown bases, and the summation can be done through dynamic programming.
Assign production probabilities to each rule in the grammar
Each structure in the grammar is assigned production probabilities devised from the structures of the training dataset. These prior probabilities give weight to the accuracy of the predictions. The number of times each rule is used depends on the observations from the training dataset for that particular grammar feature. These probabilities are written in parentheses in the grammar formalism, and the probabilities of the alternative productions of each rule sum to 100%.
Predict the structure likelihood
Given the prior alignment frequencies of the data, the most likely structure from the ensemble predicted by the grammar can then be computed by maximizing $P(\sigma \mid D, T, M)$ through the CYK algorithm. The structure with the highest predicted number of correct predictions is reported as the consensus structure.
Pfold improvements on the KH-99 algorithm
PCFG-based approaches are desired to be scalable and general enough. Compromising speed for accuracy needs to be as minimal as possible. Pfold addresses the limitations of the KH-99 algorithm with respect to scalability, gaps, speed and accuracy.
In Pfold gaps are treated as unknown. In this sense the probability of a gapped column equals that of an ungapped one.
In Pfold the tree is calculated prior to structure prediction through neighbor joining and not by maximum likelihood through the PCFG grammar. Only the branch lengths are adjusted to maximum likelihood estimates.
An assumption of Pfold is that all sequences have the same structure. Sequence identity threshold and allowing a 1% probability that any nucleotide becomes another limit the performance deterioration due to alignment errors.
Protein sequence analysis
Whereas PCFGs have proved powerful tools for predicting RNA secondary structure, usage in the field of protein sequence analysis has been limited. Indeed, the size of the amino acid alphabet and the variety of interactions seen in proteins make grammar inference much more challenging. As a consequence, most applications of formal language theory to protein analysis have been mainly restricted to the production of grammars of lower expressive power to model simple functional patterns based on local interactions. Since protein structures commonly display higher-order dependencies including nested and crossing relationships, they clearly exceed the capabilities of any CFG. Still, development of PCFGs allows expressing some of those dependencies and providing the ability to model a wider range of protein patterns.
See also
Statistical parsing
Stochastic grammar
L-system
References
External links
Rfam Database
Infernal
The Stanford Parser: A statistical parser
pyStatParser
Bioinformatics
Formal languages
Language modeling
Natural language parsing
Statistical natural language processing
Probabilistic models | Probabilistic context-free grammar | Mathematics,Engineering,Biology | 4,163 |
25,196,684 | https://en.wikipedia.org/wiki/Gable%20roof | A gable roof is a roof consisting of two sections whose upper horizontal edges meet to form its ridge. The most common roof shape in cold or temperate climates, it is constructed of rafters, roof trusses or purlins. The pitch of a gable roof can vary greatly.
Distribution
The gable roof is so common because of the simple design of the roof timbers and the rectangular shape of the roof sections. This avoids details which require a great deal of work or cost and which are prone to damage. If the pitch or the rafter lengths of the two roof sections are different, it is described as an 'asymmetrical gable roof'. A gable roof on a church tower (gable tower) is usually called a 'cheese wedge roof' (Käsbissendach) in Switzerland.
Its versatility means that the gable roof is used in many regions of the world. In regions with strong winds and heavy rain, gable roofs are built with a steep pitch in order to prevent the ingress of water. By comparison, in alpine regions, gable roofs have a shallower pitch which reduces wind exposure and supports snow better, reducing the risk of an uncontrolled avalanche and more easily retaining an insulating layer of snow.
Gable roofs are most common in cold climates. They are the traditional roof style of New England and the east coast of Canada. Nathaniel Hawthorne's The House of the Seven Gables and Lucy Maud Montgomery's Anne of Green Gables, the authors of which are from these respective regions, both reference this roof style in their titles.
Pros and cons
Gable roofs have several advantages. They are:
Inexpensive
May be designed in many different ways
Based on a simple design principle
More weather-resistant than flat roofs
May allow an attic to be turned into living space if the pitch is sufficient to at least allow dormers. A steeper pitch will be sufficient on its own.
Disadvantages:
Gable roofs are more prone to wind damage than hip roofs.
German terminology
In German-speaking countries, the types of gable roof are referred to as:
Shallow gable roof (flaches Satteldach) with a pitch of ≤ 30°
New German (neudeutsches Dach) or angled roof (Winkeldach) with a pitch of 45°
When the pitch is greater than 62° it is called a Gothic (gotisches) or Old German roof (altdeutsches Dach)
If the roof has the shape of an equilateral triangle and 60° pitch it is called an Old Franconian (altfränkisches) (commonly found in the region of Franconia) or Old French roof (altfranzösisches Dach)
See also
List of roof shapes
References
External links
Roofs | Gable roof | Technology,Engineering | 553 |
41,225,449 | https://en.wikipedia.org/wiki/Uzawa%20iteration | In numerical mathematics, the Uzawa iteration is an algorithm for solving saddle point problems. It is named after Hirofumi Uzawa and was originally introduced in the context of concave programming.
Basic idea
We consider a saddle point problem of the form

$$\begin{pmatrix} A & B \\ B^\mathsf{T} & 0 \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = \begin{pmatrix} b_1 \\ b_2 \end{pmatrix},$$

where $A$ is a symmetric positive-definite matrix. Multiplying the first row by $B^\mathsf{T} A^{-1}$ and subtracting it from the second row yields the upper-triangular system

$$\begin{pmatrix} A & B \\ 0 & -S \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = \begin{pmatrix} b_1 \\ b_2 - B^\mathsf{T} A^{-1} b_1 \end{pmatrix},$$

where $S := B^\mathsf{T} A^{-1} B$ denotes the Schur complement.
Since $S$ is symmetric positive-definite, we can apply standard iterative methods like the gradient descent method or the conjugate gradient method to solve

$$S x_2 = B^\mathsf{T} A^{-1} b_1 - b_2$$

in order to compute $x_2$.
The vector $x_1$ can be reconstructed by solving

$$A x_1 = b_1 - B x_2.$$
It is possible to update $x_1$ alongside $x_2$ during the iteration for the Schur complement system and thus obtain an efficient algorithm.
Implementation
We start the conjugate gradient iteration by computing the residual

$$r_2 := B^\mathsf{T} A^{-1} b_1 - b_2 - S x_2 = B^\mathsf{T} A^{-1} (b_1 - B x_2) - b_2 = B^\mathsf{T} x_1 - b_2$$

of the Schur complement system, where

$$x_1 := A^{-1} (b_1 - B x_2)$$

denotes the upper half of the solution vector matching the initial guess $x_2$ for its lower half. We complete the initialization by choosing the first search direction

$$p_2 := r_2.$$
In each step, we compute

$$a_2 := S p_2 = B^\mathsf{T} A^{-1} B p_2$$

and keep the intermediate result

$$p_1 := A^{-1} B p_2$$

for later.
The scaling factor is given by

$$\alpha := \frac{r_2^\mathsf{T} r_2}{p_2^\mathsf{T} a_2}$$

and leads to the updates

$$x_2 := x_2 + \alpha p_2, \qquad r_2 := r_2 - \alpha a_2.$$

Using the intermediate result $p_1$ saved earlier, we can also update the upper part of the solution vector:

$$x_1 := x_1 - \alpha p_1.$$
Now we only have to construct the new search direction by the Gram–Schmidt process, i.e.,

$$\beta := \frac{r_2^\mathsf{T} r_2 \big|_{\text{new}}}{r_2^\mathsf{T} r_2 \big|_{\text{old}}}, \qquad p_2 := r_2 + \beta p_2.$$
The iteration terminates if the residual $r_2$ has become sufficiently small, or if the norm of $a_2$ is significantly smaller than that of $p_2$, indicating that the Krylov subspace has been almost exhausted.
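A minimal NumPy sketch of the iteration above, assuming $A$ is symmetric positive definite and $B$ has full column rank (so that the Schur complement is also symmetric positive definite). Solves with $A$ are done directly here for brevity; in practice one would reuse a factorization of $A$ or an inner iterative solver.

```python
import numpy as np

def uzawa_cg(A, B, b1, b2, tol=1e-10, max_iter=100):
    x2 = np.zeros(B.shape[1])
    x1 = np.linalg.solve(A, b1 - B @ x2)    # x1 = A^{-1}(b1 - B x2)
    r2 = B.T @ x1 - b2                      # Schur complement residual
    p2 = r2.copy()
    rho = r2 @ r2
    for _ in range(max_iter):
        if np.sqrt(rho) < tol:
            break
        p1 = np.linalg.solve(A, B @ p2)     # p1 = A^{-1} B p2
        a2 = B.T @ p1                       # a2 = S p2
        alpha = rho / (p2 @ a2)
        x2 += alpha * p2
        x1 -= alpha * p1                    # update upper part alongside
        r2 -= alpha * a2
        rho_new = r2 @ r2
        p2 = r2 + (rho_new / rho) * p2      # Gram-Schmidt conjugation
        rho = rho_new
    return x1, x2
```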
Modifications and extensions
If solving the linear system exactly is not feasible, inexact solvers can be applied.
If the Schur complement system is ill-conditioned, preconditioners can be employed to improve the speed of convergence of the underlying gradient method.
Inequality constraints can be incorporated, e.g., in order to handle obstacle problems.
References
Further reading
Numerical analysis | Uzawa iteration | Mathematics | 372 |
59,837,099 | https://en.wikipedia.org/wiki/Pseudo-panspermia | Pseudo-panspermia (sometimes called soft panspermia, molecular panspermia or quasi-panspermia) is a well-supported hypothesis for a stage in the origin of life. The theory first asserts that many of the small organic molecules used for life originated in space (for example, being incorporated in the solar nebula, from which the planets condensed). It continues that these organic molecules were distributed to planetary surfaces, where life then emerged on Earth and perhaps on other planets. Pseudo-panspermia differs from the fringe theory of panspermia, which asserts that life arrived on Earth from distant planets.
Background
Theories of the origin of life have been recorded since the 5th century BC, when the Greek philosopher Anaxagoras proposed an initial version of panspermia: life arrived on earth from the heavens. In modern times, full panspermia has little support amongst mainstream scientists. Pseudo-panspermia, in which molecules are formed and transported through space is, however, well-supported.
Extraterrestrial creation of organic molecules
Interstellar molecules are formed by chemical reactions within very sparse interstellar or circumstellar clouds of dust and gas. Usually this occurs when a molecule becomes ionised, often as the result of an interaction with cosmic rays. This positively charged molecule then draws in a nearby reactant by electrostatic attraction of the neutral molecule's electrons. Molecules can also be generated by reactions between neutral atoms and molecules, although this process is generally slower. The dust plays a critical role of shielding the molecules from the ionizing effect of ultraviolet radiation emitted by stars. The Murchison meteorite contains the organic molecules uracil and xanthine, which must therefore already have been present in the early Solar System, where they could have played a role in the origin of life.
Nitriles, key molecular precursors of the RNA World scenario, are among the most abundant chemical families in the universe and have been found in molecular clouds in the center of the Milky Way, protostars of different masses, meteorites and comets, and also in the atmosphere of Titan, the largest moon of Saturn.
Evidence for the extraterrestrial creation of organic molecules includes both their discovery in various contexts in space and their laboratory synthesis under extraterrestrial conditions.
Planetary distribution of organic molecules
Organic molecules can then be distributed to planets including Earth both when the planets formed and later. If the materials from which planets formed contained organic molecules, and were not destroyed by heat or other processes, then these would be available for abiogenesis on those planets.
Later distribution is by means of bodies such as comets and asteroids. These may fall to the planetary surface as meteorites, releasing any molecules they are carrying as they vaporise on impact or later as they erode. Findings of organic molecules in meteorites include those in the Murchison meteorite, mentioned above.
References
Astrobiology
Origin of life
Speculative evolution
Astrochemistry | Pseudo-panspermia | Chemistry,Astronomy,Biology | 585 |
11,055,618 | https://en.wikipedia.org/wiki/Tableau%20de%20Concordance | The Tableau de Concordance was the main French diplomatic code used during World War I; the term also refers to any message sent using the code. It was a superenciphered four-digit code that was changed three times between 1 August 1914 and 15 January 1915.
The Tableau de Concordance is considered superenciphered because more than one step is required to use it. First, each word in a message is replaced by four digits via a codebook. These four digits are divided into three groups (one digit, two digits, one digit) so that when the whole message has been translated into code, the four-digit sets can be put together to make the entire message look like a sequence of two-digit pairs. This is called a "straddle gimmick". Then, in turn, each of these two-digit pairs (and the single digits at the beginning and end) is replaced by two letters. The letters are then combined with no spaces for the final ciphertext.
The manual for the Tableau de Concordance included the instruction that if there was not adequate time for completely enciphering the message, it should simply be sent in clear, because a partially enciphered message would have provided insight into the inner workings of the code.
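The steps above can be made concrete with a short Python sketch. The codebook entries and the digit-to-letter substitution tables below are invented for illustration (the historical tables are not reproduced in this article); only the structure of the superencipherment follows the description above.

```python
# Hypothetical codebook: each word maps to four digits.
codebook = {"ATTACK": "1234", "AT": "5678", "DAWN": "9012"}

# Hypothetical substitution tables: one for the lone digits at the two
# ends, one for the two-digit pairs in the middle.
single_table = {str(d): "QWERTYUIOP"[d] + "ZXCVBNMKJH"[d] for d in range(10)}
pair_table = {f"{n:02d}": "ABCDEFGHIJ"[n // 10] + "KLMNOPQRST"[n % 10]
              for n in range(100)}

def encipher(words):
    digits = "".join(codebook[w] for w in words)         # step 1: codebook
    head, tail = digits[0], digits[-1]                   # step 2: straddle
    middle = [digits[i:i + 2] for i in range(1, len(digits) - 1, 2)]
    # step 3: letter substitution, joined with no spaces
    return (single_table[head]
            + "".join(pair_table[p] for p in middle)
            + single_table[tail])

print(encipher(["ATTACK", "AT", "DAWN"]))
```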
Sources
The Codebreakers, by David Kahn, copyright 1967, 1996
France in World War I
Cryptography | Tableau de Concordance | Mathematics,Engineering | 286 |
2,542,454 | https://en.wikipedia.org/wiki/Electrical%20equipment%20in%20hazardous%20areas | In electrical and safety engineering, hazardous locations (HazLoc, pronounced haz·lōk) are places where fire or explosion hazards may exist. Sources of such hazards include gases, vapors, dust, fibers, and flyings, which are combustible or flammable. Electrical equipment installed in such locations can provide an ignition source, due to electrical arcing, or high temperatures. Standards and regulations exist to identify such locations, classify the hazards, and design equipment for safe use in such locations.
Overview
A light switch may cause a small, harmless spark when switched on or off. In an ordinary household this is of no concern, but if a flammable atmosphere is present, the arc might start an explosion. In many industrial, commercial, and scientific settings, the presence of such an atmosphere is a common, or at least commonly possible, occurrence. Protecting against fire and explosion is of interest for both personnel safety as well as reliability reasons.
Several protection strategies exist. The simplest is to minimize the amount of electrical equipment installed in a hazardous location, either by keeping the equipment out of the area altogether, or by making the area less hazardous (for example, by process changes, or ventilation with clean air).
When equipment must be placed in a hazardous location, it can be designed to reduce the risk of fire or explosion. Intrinsic safety designs equipment to operate using minimal energy, insufficient to cause ignition. Explosion-proofing designs equipment to contain ignition hazards, prevent entry of hazardous substances, and, contain any fire or explosion that could occur.
Different countries have approached the standardization and testing of equipment for hazardous areas in different ways. Terminology for both hazards and protective measures can vary. Documentation requirements likewise vary. As world trade becomes more globalized, international standards are slowly converging, so that a wider range of acceptable techniques can be approved by national regulatory agencies.
The process of determining the type and size of hazardous locations is called classification. Classification of locations, testing and listing of equipment, and inspection of installation, is typically overseen by governmental bodies. For example, in the US by the Occupational Safety and Health Administration.
Standards
North America
In the US, the independent National Fire Protection Association (NFPA) publishes several relevant standards, and they are often adopted by government agencies. Guidance on assessment of hazards is given in NFPA 497 (explosive gas) and NFPA 499 (dust). The American Petroleum Institute publishes analogous standards in RP 500 and RP 505.
NFPA 70, the National Electrical Code (NEC), defines area classification and installation principles. NEC article 500 describes the NEC Division classification system, while articles 505 and 506 describe the NEC Zone classification system. The NEC Zone system was created to harmonize with IEC classification system, and therefore reduce the complexity of management.
Canada has a similar system with CSA Group standard C22.1, the Canadian Electrical Code, which defines area classification and installation principles. Two possible classifications are described, in Section 18 (Zones), and Appendix J (Divisions).
International Electrotechnical Commission
The International Electrotechnical Commission publishes the 60079 series of standards which defines a system for classification of locations, as well as categorizing and testing of equipment designed for use in hazardous locations, known as "Ex equipment". IEC 60079-10-1 covers classification of explosive gas atmospheres, and IEC 60079-10-2 explosive dust. Equipment is placed into protection level categories according to manufacture method and suitability for different situations. Unlike ATEX, which uses numbers to define the safety "Category" of equipment (namely 1, 2, and 3), the IEC continued to utilise the method used for defining the safe levels of intrinsic safety, namely "a" for zone 0, "b" for zone 1 and "c" for zone 2, and has applied this Equipment Protection Level to all equipment for use in hazardous areas since 2009 (IEC 60079-14).
The IEC 60079 standard set has been adapted for use in Australia and New Zealand and is published as the AS/NZS 60079 standard set.
Hazards
In an industrial plant, such as a refinery or chemical plant, handling of large quantities of flammable liquids and gases creates a risk of exposure. Coal mines, grain mills, elevators, and similar facilities likewise present the risk of clouds of dust. In some cases, the hazardous atmosphere is present all the time, or for long periods. In other cases, the atmosphere is normally non-hazardous, but a dangerous concentration can be reasonably foreseen, such as from operator error or equipment failure. Locations are thus classified by type and risk of release of gas, vapor, or dust. Various regulations use terms such as class, division, zone, and group to differentiate the various hazards.
Often an area classification plan view is provided to identify equipment ratings and installation techniques to be used for each classified area. The plan may contain the list of chemicals with their group and temperature rating. The classification process requires the participation of operations, maintenance, safety, electrical and instrumentation professionals; and the use of process diagrams, material flows, safety data sheets, and other pertinent documents. Area classification documentations are reviewed and updated to reflect process changes.
Explosive gas
Typical gas hazards are from hydrocarbon compounds, but hydrogen and ammonia are also common industrial gases that are flammable.
Class I, Division 1 classified locations An area where ignitable concentrations of flammable gases, vapors or liquids can exist all of the time or some of the time under normal operating conditions. A Class I, Division 1 area encompasses the combination of Zone 0 and Zone 1 areas.
Zone 0 classified locations An area where ignitable concentrations of flammable gases, vapors or liquids are present continuously or for long periods of time under normal operating conditions. An example of this would be the vapor space above the liquid in the top of a tank or drum. The ANSI/NEC classification method consider this environment a Class I, Division 1 area. As a guide for Zone 0, this can be defined as over 1000 hours per year or more than 10% of the time.
Zone 1 classified location An area where ignitable concentrations of flammable gases, vapors or liquids are likely to exist under normal operating conditions. As a guide for Zone 1, this can be defined as 10–1000 hours per year or 0.1–10% of the time.
Class I, Division 2 or Zone 2 classified locations An area where ignitable concentrations of flammable gases, vapors or liquids are not likely to exist under normal operating conditions. In this area the gas, vapor or liquids would only be present under abnormal conditions (most often leaks under abnormal conditions). As a general guide for Zone 2, unwanted substances should only be present under 10 hours per year or 0–0.1% of the time.
Unclassified locations Also known as non-hazardous or ordinary locations, these locations are determined to be neither Class I, Division 1 or Division 2; Zone 0, Zone 1 or Zone 2; or any combination thereof. Such areas include a residence or office where the only risk of a release of explosive or flammable gas would be such things as the propellant in an aerosol spray. The only explosive or flammable liquids are paint and brush cleaner. These are designated as very low risk of causing an explosion and are more of a fire risk (although gas explosions in residential buildings do occur). Unclassified locations in chemical and other plants are present where it is absolutely certain that the hazardous gas is diluted to a concentration below 25% of its lower flammability limit (or lower explosive limit (LEL)).
Explosive dust
Dust or other small particles suspended in air can explode.
NEC
United Kingdom
An old British standard used letters to designate zones. This has been replaced by a European numerical system, as set out in directive 1999/92/EU implemented in the UK as the Dangerous Substances and Explosives Atmospheres Regulations 2002.
Gas and dust groups
Different explosive atmospheres have chemical properties that affect the likelihood and severity of an explosion. Such properties include flame temperature, minimum ignition energy, upper and lower explosive limits, and molecular weight. Empirical testing is done to determine parameters such as the maximum experimental safe gap (MESG), minimum igniting current (MIC) ratio, explosion pressure and time to peak pressure, spontaneous ignition temperature, and maximum rate of pressure rise. Every substance has a differing combination of properties but it is found that they can be ranked into similar ranges, simplifying the selection of equipment for hazardous areas.
The flammability of combustible liquids is defined by their flash-point, the temperature at which the material generates a sufficient quantity of vapor to form an ignitable mixture. The flash-point determines whether an area needs to be classified. A material may have a relatively low autoignition temperature, yet if its flash-point is above the ambient temperature, the area may not need to be classified. Conversely, if the same material is heated and handled above its flash-point, the area must be classified for proper electrical system design, as it will then form an ignitable mixture.
Each chemical gas or vapour used in industry is classified into a gas group.
Group IIC is the most severe zone system gas group. Hazards in this group gas can be ignited very easily indeed. Equipment marked as suitable for Group IIC is also suitable for IIB and IIA. Equipment marked as suitable for IIB is also suitable for IIA but NOT for IIC. If equipment is marked, for example, Ex e II T4 then it is suitable for all subgroups IIA, IIB and IIC
A list must be drawn up of every explosive material that is on the refinery or chemical complex and included in the site plan of the classified areas. The above groups are formed in order of how explosive the material would be if it was ignited, with IIC being the most explosive zone system gas group and IIA being the least. The groups also indicate how much energy is required to ignite the material by energy or thermal effects, with IIA requiring the most energy and IIC the least for zone system gas groups.
Temperature
Equipment should be tested to ensure that it does not exceed 80% of the autoignition temperature of the hazardous atmosphere. Both external and internal temperatures are taken into consideration. The autoignition temperature is the lowest temperature at which the substance will ignite without an additional heat or ignition source (at atmospheric pressure). This temperature is used for classification for industry and technology applications.
The temperature classification on the electrical equipment label will be one of the following (maximum surface temperature in degrees Celsius): T1 – 450 °C, T2 – 300 °C, T3 – 200 °C, T4 – 135 °C, T5 – 100 °C, T6 – 85 °C.
Thus, the surface temperature of a piece of electrical equipment with a temperature classification of T3 will not rise above 200 °C. The surface of a high pressure steam pipe may be above the autoignition temperature of some fuel-air mixtures.
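As an illustration, the following Python sketch applies the 80% margin mentioned above to pick acceptable T classes for a hazard with a given autoignition temperature; the function name and the margin handling are ours, not part of any standard.

```python
# Maximum surface temperature (degrees C) for each temperature class.
T_CLASSES = {"T1": 450, "T2": 300, "T3": 200, "T4": 135, "T5": 100, "T6": 85}

def acceptable_classes(autoignition_c, margin=0.8):
    """T classes whose surface temperature stays within the margin."""
    limit = margin * autoignition_c
    return sorted(c for c, t in T_CLASSES.items() if t <= limit)

print(acceptable_classes(230))   # a gas igniting at 230 C -> ['T4', 'T5', 'T6']
```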
Equipment
General types and methods
Equipment can be designed or modified for safe operation in hazardous locations. The two general approaches are:
Intrinsic safety Intrinsic safety, also called non-incendive, limits the energy present in a system, such that it is insufficient to ignite a hazardous atmosphere under any conditions. This includes both low power levels, and low stored energy. Common with instrumentation.
Explosion proof Explosion-proof or flame-proof equipment is sealed and rugged, such that it will not ignite a hazardous atmosphere, despite any sparks or explosion within.
Several techniques of flame-proofing exist, and they are often used in combination:
The equipment housing may be sealed to prevent entry of flammable gas or dust into the interior.
The housing may be strong enough to contain and cool any combustion gases produced internally.
Enclosures can be pressurized with clean air or inert gas, displacing any hazardous substance.
Arc-producing elements can be isolated from the atmosphere, by encapsulation in resin, immersion in oil, or similar.
Heat-producing elements can be designed to limit their maximum temperature below the autoignition temperature of the material involved.
Controls can be fitted to detect dangerous concentrations of hazardous gas, or failure of countermeasures. Upon detection, appropriate action is automatically taken, such as removing power, or providing notification.
IEC 60079
Types of protection
The types of protection are subdivided into several sub classes, linked to EPL: ma and mb, px, py and pz, ia, ib and ic.
The a subdivisions have the most stringent safety requirements, taking into account more than one independent component faults simultaneously.
Many items of EEx rated equipment will employ more than one method of protection in different components of the apparatus. These would then be labeled with each of the individual methods. For example, a socket outlet labeled EEx 'de' might have a case made to EEx 'e' and switches that are made to EEx 'd'.
Equipment protection level (EPL)
In recent years the EPL has also been specified for several kinds of protection. The required protection level is linked to the intended use in the zones described above.
Equipment category
The equipment category indicates the level of protection offered by the equipment.
Category 1 equipment may be used in zone 0, zone 1 or zone 2 areas.
Category 2 equipment may be used in zone 1 or zone 2 areas.
Category 3 equipment may only be used in zone 2 areas.
NEMA enclosure types
In the US, the National Electrical Manufacturers Association (NEMA) defines standards for enclosure types for a variety of applications, some of which are specifically intended for hazardous locations.
Labeling
All equipment certified for use in hazardous areas must be labelled to show the type and level of protection applied.
Europe
In Europe the label must show the CE mark and the code number of the certifying/notified body. The CE mark is complemented with the Ex mark: a yellow-filled hexagon with the Greek letters εχ (epsilon chi), followed by the Group, Category, and, if Group II, G or D (gas or dust). Specific types of protection being used will also be marked.
Industrial electrical equipment for hazardous area has to conform to appropriate parts of standard: IEC-60079 for gas hazards, and IEC-61241 for dust hazards. In some cases, it must be certified as meeting that standard. Independent test houses—Notified Bodies—are established in most European countries, and a certificate from any of these will be accepted across the EU. In the United Kingdom, Sira and Baseefa are the most well known such bodies.
Australia and New Zealand use the same IEC-60079 standards (adopted as AS/NZS60079), however the CE mark is not required.
North America
In North America the suitability of equipment for the specific hazardous area must be tested by a Nationally Recognized Testing Laboratory, such as UL, FM Global, CSA Group, or Intertek (ETL).
The label will always list the class, division and may list the group and temperature code. Directly adjacent on the label one will find the mark of the listing agency.
Some manufacturers claim "suitability" or "built-to" hazardous-area standards in their technical literature, but in effect lack the testing agency's certification, and the equipment is thus unacceptable for the AHJ (Authority Having Jurisdiction) to permit operation of the electrical installation/system.
All equipment in Division 1 areas must have an approval label, but certain materials, such as rigid metallic conduit, do not have a specific label indicating the Cl./Div. 1 suitability, and their listing as an approved method of installation in the NEC serves as the permission. Some equipment in Division 2 areas does not require a specific label, such as standard 3-phase induction motors that do not contain normally arcing components.
Also included in the marking are the manufacturers name or trademark and address, the apparatus type, name and serial number, year of manufacture and any special conditions of use. The NEMA enclosure rating or IP code may also be indicated, but it is usually independent of the Classified Area suitability.
History
With the advent of electric power, electricity was introduced into coal mines for signaling, illumination, and motors. This was accompanied by electrically initiated explosions of flammable gas such as fire damp (methane) and suspended coal dust.
At least two British mine explosions were attributed to an electric bell signal system. In this system, two bare wires were run along the length of a drift, and any miner desiring to signal the surface would momentarily touch the wires to each other or bridge the wires with a metal tool. The inductance of the signal bell coils, combined with breaking of contacts by exposed metal surfaces, resulted in sparks, causing an explosion.
See also
Arc flash
ATEX directive
CompEx competency standard
Electrical conduit
Grounding kit
Intrinsic safety
Mineral-insulated copper-clad cable
Notified Body
Pressure piling
References
Further reading
Alan McMillan, Electrical Installations in Hazardous Areas, Butterworth-Heineman 1998,
Peter Schram Electrical Installations in Hazardous Locations, Jones and Bartlett, 1997,
EEMUA, A Practitioner's Handbook for potentially explosive atmospheres, The Engineering Equipment and Materials Users Association, 2017,
Electrical safety
Explosion protection
Natural gas safety | Electrical equipment in hazardous areas | Chemistry,Engineering | 3,517 |
2,977,910 | https://en.wikipedia.org/wiki/N-skeleton | In mathematics, particularly in algebraic topology, the $n$-skeleton of a topological space $X$ presented as a simplicial complex (resp. CW complex) refers to the subspace $X_n$ that is the union of the simplices of $X$ (resp. cells of $X$) of dimensions $m \le n$. In other words, given an inductive definition of a complex, the $n$-skeleton is obtained by stopping at the $n$-th step.
These subspaces increase with $n$. The $0$-skeleton is a discrete space, and the $1$-skeleton a topological graph. The skeletons of a space are used in obstruction theory, to construct spectral sequences by means of filtrations, and generally to make inductive arguments. They are particularly important when $X$ has infinite dimension, in the sense that the $X_n$ do not become constant as $n \to \infty$.
In geometry
In geometry, a $k$-skeleton of a polytope $P$ (functionally represented as skel$_k$($P$)) consists of all elements of dimension up to $k$.
For example:
skel0(cube) = 8 vertices
skel1(cube) = 8 vertices, 12 edges
skel2(cube) = 8 vertices, 12 edges, 6 square faces
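As a quick check of these counts, a $d$-dimensional face of the 3-cube fixes $3-d$ coordinates, each to 0 or 1, so the tally can be computed combinatorially; the helper name below is ours.

```python
from math import comb

def n_faces(d, n=3):
    # choose which n-d coordinates are fixed, and a 0/1 value for each
    return comb(n, n - d) * 2 ** (n - d)

for k in range(3):
    print(f"skel{k}(cube):", {d: n_faces(d) for d in range(k + 1)})
# skel0: 8 vertices; skel1: adds 12 edges; skel2: adds 6 square faces
```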
For simplicial sets
The above definition of the skeleton of a simplicial complex is a particular case of the notion of skeleton of a simplicial set. Briefly speaking, a simplicial set $K_*$ can be described by a collection of sets $K_i$ ($i \ge 0$), together with face and degeneracy maps between them satisfying a number of equations. The idea of the $n$-skeleton $sk_n(K_*)$ is to first discard the sets $K_i$ with $i > n$ and then to complete the collection of the $K_i$ with $i \le n$ to the "smallest possible" simplicial set so that the resulting simplicial set contains no non-degenerate simplices in degrees $i > n$.
More precisely, the restriction functor

$$i_* : \Delta^{op} Sets \to \Delta_{\le n}^{op} Sets$$

has a left adjoint, denoted $i^*$. (The notations $i^*$, $i_*$ are comparable with the one of image functors for sheaves.) The $n$-skeleton of some simplicial set $K_*$ is defined as

$$sk_n(K) := i^* i_* K.$$
Coskeleton
Moreover, $i_*$ has a right adjoint $i^!$. The $n$-coskeleton is defined as

$$cosk_n(K) := i^! i_* K.$$
For example, the $0$-skeleton of $K$ is the constant simplicial set defined by $K_0$. The $0$-coskeleton is given by the Čech nerve

$$(cosk_0 K)_n = \underbrace{K_0 \times \cdots \times K_0}_{n+1}.$$
(The boundary and degeneracy morphisms are given by various projections and diagonal embeddings, respectively.)
The above constructions work for more general categories (instead of sets) as well, provided that the category has fiber products. The coskeleton is needed to define the concept of hypercovering in homotopical algebra and algebraic geometry.
References
External links
Algebraic topology
General topology | N-skeleton | Mathematics | 526 |
70,810,630 | https://en.wikipedia.org/wiki/Structure%20field%20map | Structure field maps (SFMs) or structure maps are visualizations of the relationship between ionic radii and crystal structures for representing classes of materials. The SFM and its extensions has found broad applications in geochemistry, mineralogy, chemical synthesis of materials, and nowadays in materials informatics.
History
The intuitive concept of the SFMs led to different versions of the visualization method established in different domains of materials science.
Structure field map was first introduced in 1954 by MacKenzie L. Keith and Rustum Roy to classify structural prototypes for the oxide perovskites of the chemical formula ABO3. It was later popularized by a compiled handbook written by Olaf Muller and Rustum Roy, published in 1974 that included many more known materials.
Examples
A structure field map is typically two-dimensional, although higher dimensional versions are feasible. The axes in an SFM are the ionic sequences. For example, in oxide perovskites ABO3, where A and B represent two metallic cations, the two axes are ionic radii of the A-site and B-site cations. SFMs are constructed according to the oxidation states of the constituent cations. For perovskites of the type ABO3, three ways of cation pairings exist: A3+B3+O3, A2+B4+O3, and A1+B5+O3, therefore, three different SFMs exist for each pairs of cation oxidation states.
See also
Goldschmidt tolerance factor
Ramachandran plot
References
Materials science
Crystallography
Scientific visualization
Inorganic chemistry
Mineralogy concepts | Structure field map | Physics,Chemistry,Materials_science,Engineering | 326 |
75,054,010 | https://en.wikipedia.org/wiki/Keel%20block | In marine terms, a keel block is a concrete or dense wood cuboid that rests under a ship during a time of repair, construction, or in the event of a dock being drained. The block rests under the keel of a ship.
Purpose
The purpose of a keel block is to prevent the ship from sitting directly on the ground and to prevent damage or instability that sitting on the ground may cause.
References
Nautical terminology
Shipbuilding | Keel block | Engineering | 87 |
75,746,300 | https://en.wikipedia.org/wiki/Biodiversity%20Impact%20Credit | A Biodiversity Impact Credit (BIC) is a transferable biodiversity credit designed to reduce global species extinction risk. The underlying BIC metric, developed by academics working at Queen Mary University of London and Bar-Ilan University, is given by a simple formula that quantifies the positive and negative effects that interventions in nature have on the mean long-term survival probability of species. In particular, an organisation's global footprint in terms of BICs can be computed from PDF-based biodiversity footprints. The metric is broadly applicable across taxa (taxonomic groups) and ecosystems. Organisations whose overall biodiversity impact is positive in terms of the BIC metric contribute to achieving the objective of the Global Biodiversity Framework to "significantly reduce extinction risk".
Use of BICs by businesses has been recommended by the Task Force on Nature-related Financial Disclosures and the first provider of BICs for sale is Botanic Gardens Conservation International (BGCI). The credits are generated by BGCI's international member organisations by rebuilding the populations of tree species at high risk of extinction under the IUCN Red List methodology.
Theory
Definition
Users of BICs distinguish between the metric's scientific definition and how metric values are estimated through methodologies and approximations suitable for particular contexts. This mirrors the situation with carbon credits, which are designed to quantify avoidance or reductions of atmospheric carbon dioxide load but in practice are estimated using a broad variety of context-specific methodologies.
For a given taxonomic or functional group of species, let $N_i$ be a measure of the current global population size of the $i$-th species. This can be measured, e.g., by the number of mature individuals or population biomass, in some cases even by the number of colonies, whichever approximates total reproductive value well. Denote by $\Delta N_i$ the change in the global population of species $i$ resulting from a specific intervention in nature. The corresponding Biodiversity Impact Credits are then given by

$$\mathrm{BIC} = \sum_i \frac{\Delta N_i}{N_i + N_i^*},$$

where $N_i^*$ denotes the population size of species $i$ at which environmental and demographic stochasticity are of the same magnitude.
Calculation
Depending on the kind of intervention, the system affected and the available data, a variety of methods is available to estimate BICs. Since typical values of $N_i^*$ lie in the range of 1 to 100 adult individuals, the contribution of $N_i^*$ in the definition above is often negligibly small compared to $N_i$. The formula then simplifies to

$$\mathrm{BIC} = \sum_i \frac{\Delta N_i}{N_i}.$$
In projects that aim to rebuild the population of a single endangered species $j$, the term associated with that species will often dominate the sum in the formula above, so that it simplifies further to

$$\mathrm{BIC} = \frac{\Delta N_j}{N_j}.$$
When a species restoration project has increased the population of a species by an amount $\Delta N_j$ that is much larger than the original population (and $N_j^*$), and no comparable increases in the population of that species have occurred elsewhere, then the species' current population $N_j$ is nearly identical to the increase achieved. In this case, the formula above simplifies to

$$\mathrm{BIC} \approx 1.$$
For use over large areas, approximations expressing BICs in terms of Range Size Rarity, Potentially Disappearing Fraction (PDF) of species, or combinations thereof are available. In particular, an organisation's global footprint in terms of BICs can be computed from PDF-based biodiversity footprints.
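The simplified forms above translate directly into code. A minimal Python sketch with invented population figures and an assumed common value for $N_i^*$:

```python
def bic(deltas, pops, n_star=50.0):
    """Sum of dN_i / (N_i + N*_i) over the affected species i.

    n_star is an assumed common stochasticity scale (typical values lie
    in the range of 1 to 100 adult individuals, as noted above)."""
    return sum(d / (pops[s] + n_star) for s, d in deltas.items())

pops   = {"sp_A": 120.0, "sp_B": 4.0e6}      # hypothetical populations
deltas = {"sp_A": 110.0, "sp_B": -2.0e3}     # restoration vs. minor losses
print(bic(deltas, pops))    # ~0.647 - 0.0005: net positive impact
```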
Interpretation
As a simple interpretation, the BIC metric measures the equivalent number of endangered species whose populations have been restored or (for negative BIC) the number of species that should be restored to achieve net zero biodiversity impact. This follows from above approximation that BIC = 1 for the restoration of a single threatened species.
However, the BIC metric goes beyond simply counting the number of threatened species that have been restored. It takes into account that decline or recovery of a species can be the result of many small impacts by different actors and attributes both positive and negative credits accordingly. Specifically, it is constructed such that, according to a simple model, BIC > 0 implies that the underlying intervention or combination of interventions leads to a reduction of mean long-term global species extinction risk for the taxonomic or functional group considered. According to the same model, a perfect market for BICs would lead to near-optimal allocation of resources to long-term species conservation.
Compatibility with other standards
The BIC metric aligns with other globally-recognised biodiversity measures such as the Range Size Rarity, the Species Threat Abatement and Restoration (STAR) metric by IUCN/TNFD, and the Ecosystem Damage Metric underlying the Biodiversity Footprint for Financial Institutions (BFFI).
Biodiversity Impact Credits in practice
Rationale
The search for standardised systems to quantify biodiversity impacts has gained momentum in light of the accelerating rates of biodiversity loss worldwide. Traditional biodiversity conservation efforts can lack scalability and are hard to measure: improving one area of land or river has a different impact on local biodiversity from improving another, so their impacts are difficult to compare. BICs were developed with the aim to simplify assessments of biodiversity change by focusing on reducing species' extinction risks. The 2022 United Nations Biodiversity Conference emphasised the importance of global collaboration to halt biodiversity loss, marking the adoption of the Kunming-Montreal Global Biodiversity Framework (GBF). BICs are designed to address Target 4 of this framework ("to halt extinction of known threatened species ... and significantly reduce extinction risk") and Target 15 ("[Take measures] to ensure that large transnational companies and financial institutions [...] transparently disclose their risks, dependencies and impacts on biodiversity ... in order to progressively reduce negative impacts").
The Task Force on Nature-related Financial Disclosures via their LEAP methodology recommends use of BICs to quantify impacts on species extinction risk in version 1.1 of their disclosure recommendations. The BIC methodology was one of four recognised metrics for assessing extinction risk.
Trees are at the base of the ecological pyramid. Countless species rely on native trees for survival, including fungi, lichen, insects, birds and other vertebrates. Repopulating native tree species improves local biodiversity, helps prevent soil erosion, conserves water and helps cool the planet, as well as providing a carbon store.
BGCI developed the GlobalTreeSearch database, which is the only comprehensive, geo-referenced list of all the world's c. 60,000 tree species. Working with the International Union for Conservation of Nature (IUCN), they then produced the Global Tree Assessment, which concluded that more than 17,500 tree species (c. 30%) are threatened with extinction. Finally, BGCI's Global Tree Conservation Program is the only global programme dedicated to saving the world's threatened tree species. Even before BICs were launched, over 400 rare and threatened tree species had already been conserved in over 50 countries.
Implementation
One of the critical components of the BIC system is that it is being driven by conservation organisations like BGCI and their international network of members, and backed by theoretical analyses by several Queen Mary University of London academics. These organisations provide the practical know-how and decades of experience in species conservation, focusing particularly on native trees, which play a pivotal role in local ecosystems. BGCI is now mediating issuance of transferable BIC certificates to organisations who sponsor tree conservation projects by BGCI member organisations. The BIC system has been designed for easy adoption and scalability. This is crucial for engaging financial institutions and other large corporations that require streamlined, global, comparable, and straightforward metrics to set their sustainability goals. BGCI unveiled their Global Biodiversity Standard, a global biodiversity accreditation framework, at the 2021 United Nations Climate Change Conference. BICs are due to be formally launched in early 2024.
Critique
Biodiversity credits have been criticised by some who say that putting a monetary value on nature is wrong or regard it as impossible because of the complexity of biodiversity. Others say that they are always bought to offset damage to nature.
Biodiversity credits have also been criticised as a way for companies to make false sustainability claims, a practice called greenwashing.
Since February 2024, a Biodiversity Net Gain policy has been in place in England. Under this policy, developers must buy biodiversity credits from the government as a last resort if they cannot achieve net gain in biodiversity in other ways. It is not yet known how successful these requirements for builders to compensate for nature loss will be.
See also
Biodiversity offsetting
Biodiversity banking
References
Biodiversity
Climate change
Credit
Queen Mary University of London | Biodiversity Impact Credit | Biology | 1,664 |
4,653,992 | https://en.wikipedia.org/wiki/Underhanded%20C%20Contest | The Underhanded C Contest was a programming contest to turn out code that is malicious, but passes a rigorous inspection, and looks like an honest mistake even if discovered. The contest rules define a task, and a malicious component. Entries must perform the task in a malicious manner as defined by the contest, and hide the malice. Contestants are allowed to use C-like compiled languages to make their programs.
The contest was organized by Dr. Scott Craver of the Department of Electrical Engineering at Binghamton University. The contest was initially inspired by Daniel Horn's Obfuscated V contest in the fall of 2004. For the 2005 to 2008 contests, the prize was a $100 gift certificate to ThinkGeek. The 2009 contest had its prize increased to $200 due to the very late announcement of winners, and the prize for the 2013 contest is also a $200 gift certificate.
Contests
2005
The 2005 contest had the task of basic image processing, such as resampling or smoothing, but covertly inserting unique and useful "fingerprinting" data into the image. Winning entries from 2005 used uninitialized data structures, reuse of pointers, and an embedding of machine code in constants.
2006
The 2006 contest required entries to count word occurrences, but have vastly different runtimes on different platforms. To accomplish the task, entries used fork implementation errors, optimization problems, endian differences and various API implementation differences. The winner called strlen() in a loop, leading to quadratic complexity, which was optimized out by a compiler on Linux but not on Windows.
2007
The 2007 contest required entries to encrypt and decrypt files with a strong, readily available encryption algorithm such that a low percentage (1% - 0.01%) of the encrypted files may be cracked in a reasonably short time. The contest commenced on April 16 and ended on July 4. Entries used misimplementations of RC4, misused API calls, and incorrect function prototypes.
2008
The 2008 contest required entries to redact a rectangular portion of a PPM image in a way that the portion may be reconstructed. Any method of "blocking out" the rectangle was allowed, as long as the original pixels were removed, and the pixel reconstruction didn't have to be perfect (although the reconstruction's fidelity to the original file would be a factor in judging). The contest began on June 12, and ended on September 30. Entries tended to either xor the region with a retrievable pseudo-random mask or append the masked data to the end of the file format. The second placing programs both used improperly defined macros while the winner, choosing to work with an uncommon text based format, zeroed out pixel values while keeping the number of digits intact.
2009
The 2009 contest required participants to write a program that sifts through routing directives but redirects a piece of luggage based on some innocuous-looking comment in the space-delimited input data file. The contest began December 29, 2009, and was due to end on March 1, 2010. However, no activity occurred for three years. The winners were only announced on April 1, 2013, with one overall winner and six runners-up.
2013
The 2013 contest was announced on April 1, 2013, and was due July 4, 2013; results were announced on September 29, 2014. It was about a fictional social website called "ObsessBook". The challenge was to write a function to compute the DERPCON (Degrees of Edge-Reachable Personal CONnection) between two users that "accidentally" computes a too low distance for a special user.
2014
The 2014 contest was announced on November 2, 2014, and was due January 1, 2015. The results were announced on June 1, 2015. The objective was to write surveillance code for a Twitter-like social networking service, to comply with a secret government surveillance request; but for non-obvious reasons, the code must subtly leak the act of surveillance to a user. The general approach is to obfuscate writes to the user data as writing to surveillance data, and the winning entry did so by implementing a buggy time-checking function that overwrites the input.
2015
The 2015 contest was announced on August 15, 2015, and was due November 15, 2015. The results were announced on January 15, 2016. The scenario was a nuclear disarmament process between the Peoples Glorious Democratic Republic of Alice and the Glorious Democratic Peoples Republic of Bob (Alice and Bob), and the mission was to write a test function for comparing potentially fissile material against a reference sample, which under certain circumstances would label a warhead as containing fissile material when it doesn't. Around a third of the submissions used NaN poisoning by erroneous floating-point operations, which generates more NaN's in the later computation and always evaluates to false for a comparison.
The winning entry used a confusion of datatypes between double and float to distort values.
See also
International Obfuscated C Code Contest
References
External links
Official contest page
Prior page with 2014 winners
C (programming language) contests
Programming contests
Binghamton University
Malware
Software obfuscation
Recurring events established in 2005 | Underhanded C Contest | Technology,Engineering | 1,073 |
37,955,705 | https://en.wikipedia.org/wiki/Pi1%20Pegasi | Pi1 Pegasi, Latinized from π1 Pegasi, is a star in the constellation Pegasus. Based upon changes to the proper motion of the visible component, this is a probable astrometric binary. It has a yellow hue and is dimly visible to the naked eye with a combined apparent visual magnitude of +5.58. The system is located approximately 319 light years distant from the Sun based on parallax, and is drifting further away with a radial velocity of +5 km/s. It is a member of the Ursa Major Moving Group of co-moving stars.
The visible component is an aging giant star with a stellar classification of G8IIIb. It has a high rate of spin, with a projected rotational velocity of 135 km/s. This is giving it an equatorial bulge that is 17% larger than the polar radius. It is a shell star, being orbited by a circumstellar shell of cooler gas. This star is 530 million years old with 2.5 times the mass of the Sun. With the supply of hydrogen exhausted at its core, the star has cooled and expanded to 11 times the Sun's radius. It is radiating 63 times the luminosity of the Sun from its enlarged photosphere at an effective temperature of 4,898 K.
References
External links
G-type giants
Shell stars
Astrometric binaries
Pegasus (constellation)
Pegasi, Pi1
Durchmusterung objects
Pegasi, 27
210354
109352
8449
Ursa Major moving group | Pi1 Pegasi | Astronomy | 324 |
15,214,644 | https://en.wikipedia.org/wiki/ZRF1 | DnaJ homolog subfamily C member 2 is a protein that in humans is encoded by the DNAJC2 gene.
This gene is a member of the M-phase phosphoprotein (MPP) family. The gene encodes a phosphoprotein with a J domain and a Myb DNA-binding domain which localizes to both the nucleus and the cytosol. The protein is capable of forming a heterodimeric complex that associates with ribosomes, acting as a molecular chaperone for nascent polypeptide chains as they exit the ribosome. This protein was identified as a leukemia-associated antigen and expression of the gene is upregulated in leukemic blasts. Also, chromosomal aberrations involving this gene are associated with primary head and neck squamous cell tumors. This gene has a pseudogene on chromosome 6. Alternatively spliced variants which encode different protein isoforms have been described; however, not all variants have been fully characterized.
References
Further reading | ZRF1 | Chemistry | 216 |
48,596,894 | https://en.wikipedia.org/wiki/Confined%20liquid | In condensed matter physics, a confined liquid is a liquid that is subject to geometric constraints on a nanoscopic scale so that most molecules are close enough to an interface to sense some difference from standard bulk liquid conditions. Typical examples are liquids in porous media, gels, or bound in solvation shells.
Confinement regularly prevents crystallization, which enables liquids to be supercooled below their homogeneous nucleation temperature even if this is impossible in the bulk state. This holds in particular for water, which is by far the most studied confined liquid.
Liquids under sub-millimeter confinement (e.g. in the gap between rigid walls) exhibit a nearly solid-like mechanical response and possess a surprisingly large low-frequency elastic shear modulus, which scales with the inverse cubic power of the confinement length.
Further reading
References
Condensed matter physics
Liquids | Confined liquid | Physics,Chemistry,Materials_science,Engineering | 171 |
4,290,903 | https://en.wikipedia.org/wiki/Red%20rain%20in%20Kerala | The Kerala red rain phenomenon was a blood rain event that occurred in the Wayanad district of the southern Indian state of Kerala on 15 July 1957, when the colour subsequently turned yellow, and again from 25 July to 23 September 2001, when heavy downpours of red-coloured rain fell sporadically in Kerala, staining clothes pink. Yellow, green and black rain was also reported. Coloured rain was also reported in Kerala in 1896 and several times since, most recently in June 2012, and from 15 November 2012 to 27 December 2012 in eastern and north-central provinces of Sri Lanka.
Following a light-microscopy examination in 2001, it was initially thought that the rains were coloured by fallout from a hypothetical meteor burst, but a study commissioned by the Government of India concluded that the rains had been coloured by airborne spores from a locally prolific terrestrial green algae from the genus Trentepohlia.
Occurrence
The coloured rain of Kerala began falling on 25 July 2001, in the districts of Kottayam and Idukki in the southern part of the state. Yellow, green, and black rain was also reported. Many more occurrences of the red rain were reported over the following ten days, and then with diminishing frequency until late September. According to locals, the first coloured rain was preceded by a loud thunderclap and flash of light, and followed by groves of trees shedding shrivelled grey "burnt" leaves. Shrivelled leaves and the disappearance and sudden formation of wells were also reported around the same time in the area. It typically fell over small areas, no more than a few square kilometres in size, and was sometimes so localised that normal rain could be falling just a few meters away from the red rain. Red rainfalls typically lasted less than 20 minutes. Each millilitre of rain water contained about 9 million red particles. Extrapolating these figures to the total amount of red rain estimated to have fallen, it was estimated that about 50,000 kg of red particles had fallen on Kerala.
Description of the particles
The brownish-red solid separated from the red rain consisted of about 90% round red particles and the balance consisted of debris. The particles in suspension in the rain water were responsible for the colour of the rain, which at times was strongly coloured red. A small percentage of particles were white or had light yellow, bluish grey and green tints. The particles were typically 4 to 10 μm across and spherical or oval. Electron microscope images showed the particles as having a depressed centre. At still higher magnification some particles showed internal structures.
Chemical composition
Some water samples were taken to the Centre for Earth Science Studies (CESS) in India, where the suspended particles were separated by filtration. The pH of the water was found to be around 7 (neutral). The electrical conductivity of the rainwater showed the absence of any dissolved salts. Sediment (red particles plus debris) was collected and analysed by the CESS using a combination of ion-coupled plasma mass spectrometry, atomic absorption spectrometry and wet chemical methods, which identified the major elements present. The CESS analysis also showed significant amounts of heavy metals, including nickel (43 ppm), manganese (59 ppm), titanium (321 ppm), chromium (67 ppm) and copper (55 ppm).
Physicists Godfrey Louis and Santhosh Kumar of the Mahatma Gandhi University, Kerala, used energy dispersive X-ray spectroscopy analysis of the red solid and showed that the particles were composed of mostly carbon and oxygen, with trace amounts of silicon and iron. A CHN analyser showed content of 43.03% carbon, 4.43% hydrogen, and 1.84% nitrogen.
Tom Brenna in the Division of Nutritional Sciences at Cornell University conducted carbon and nitrogen isotope analyses using a scanning electron microscope with X-ray micro-analysis, an elemental analyser, and an isotope ratio (IR) mass spectrometer. The red particles collapsed when dried, which suggested that they were filled with fluid. The amino acids in the particles were analysed and several were identified, including (in order of concentration) phenylalanine, glutamic acid/glutamine, serine, aspartic acid, threonine, and arginine. The results were consistent with a marine origin or a terrestrial plant that uses a C4 photosynthetic pathway.
Government report
Initially, the Centre for Earth Science Studies (CESS) stated that the likely cause of the red rain was an exploding meteor, which had dispersed about 1,000 kg (one ton) of material. A few days later, following a basic light microscopy evaluation, the CESS retracted this as they noticed the particles resembled spores, and because debris from a meteor would not have continued to fall from the stratosphere onto the same area while unaffected by wind. A sample was, therefore, handed over to the Tropical Botanical Garden and Research Institute (TBGRI) for microbiological studies, where the spores were allowed to grow in a medium suitable for growth of algae and fungi. The inoculated petri dishes and conical flasks were incubated for three to seven days and the cultures were observed under a microscope.
In November 2001, commissioned by the Government of India's Department of Science & Technology, the Centre for Earth Science Studies (CESS) and the Tropical Botanical Garden and Research Institute (TBGRI) issued a joint report, which concluded:
The site was again visited on 16 August 2001 and it was found that almost all the trees, rocks and even lamp posts in the region were covered with Trentepohlia estimated to be in sufficient amounts to generate the quantity of spores seen in the rainwater. Although red or orange, Trentepohlia is a chlorophyte green alga which can grow abundantly on tree bark or damp soil and rocks, but is also the photosynthetic symbiont or photobiont of many lichens, including some of those abundant on the trees in Changanassery area. The strong orange colour of the algae, which masks the green of the chlorophyll, is caused by the presence of large quantities of orange carotenoid pigments. A lichen is not a single organism, but the result of a partnership (symbiosis) between a fungus and an alga or cyanobacterium.
The report also stated that there was no meteoric, volcanic or desert dust origin present in the rainwater and that its colour was not due to any dissolved gases or pollutants. The report concluded that heavy rains in Kerala – in the weeks preceding the red rains – could have caused the widespread growth of lichens, which released a large quantity of spores into the atmosphere. However, for these lichens to release their spores simultaneously, it is necessary for them to enter their reproductive phase at about the same time. The CESS report noted that while this may be a possibility, it is quite improbable. Also, they could find no satisfactory explanation for the apparently extraordinary dispersal, nor for the apparent uptake of the spores into clouds. CESS scientists noted that "While the cause of the colour in the rainfall has been identified, finding the answers to these questions is a challenge." Attempting to explain the unusual spore proliferation and dispersal, researcher Ian Goddard proposed several local atmospheric models.
Parts of the CESS/TBGRI report were supported by Milton Wainwright at the University of Sheffield, who, together with Chandra Wickramasinghe, has studied stratospheric spores. In March 2006 Wainwright said the particles were similar in appearance to spores of a rust fungus, later saying that he had confirmed the presence of DNA, and reported their similarity to algal spores, and found no evidence to suggest that the rain contained dust, sand, fat globules, or blood. In November 2012, Rajkumar Gangappa and Stuart Hogg from the University of Glamorgan, UK, confirmed that the red rain cells from Kerala contain DNA.
In February 2015, a team of scientists from India and Austria also supported the identification of the algal spores as Trentepohlia annulata, but speculated that the spores from the 2011 incident had been carried by winds from Europe to the Indian subcontinent.
Alternative hypotheses
History records many instances of unusual objects falling with the rain – in 2000, in an example of raining animals, a small waterspout in the North Sea sucked up a school of fish a mile off shore, depositing them shortly afterwards on Great Yarmouth in the United Kingdom. Coloured rain is by no means rare, and can often be explained by the airborne transport of rain dust from deserts or other dry regions which have been washed down by rain. "Red Rains" have been frequently described in southern Europe, with increasing reports in recent years. One such case occurred in England in 1903, when dust was carried from the Sahara and fell with rain in February of that year.
At first, the red rain in Kerala was attributed to the same effect, with dust from the deserts of Arabia initially the suspect. LIDAR observations had detected a cloud of dust in the atmosphere near Kerala in the days preceding the outbreak of the red rain. However, laboratory tests by all of the teams involved ruled out desert sand as the source of the particles.
K.K. Sasidharan Pillai, a senior scientific assistant in the Indian Meteorological Department, proposed dust and acidic material from an eruption of Mayon Volcano in the Philippines as an explanation for the coloured rain and the "burnt" leaves. The volcano was erupting in June and July 2001 and Pillai calculated that the Eastern or Equatorial jet stream could have transported volcanic material to Kerala in 25–36 hours. The Equatorial jet stream is unusual in that it sometimes flows from east to west at about 10° N, approximately the same latitude as Kerala (8° N) and Mayon Volcano (13° N). This hypothesis was also ruled out as the particles were neither acidic nor of volcanic origin, but were spores.
A study has been published showing a correlation between historic reports of coloured rains and of meteors; the author of the paper, Patrick McCafferty, stated that sixty of these colored rain events, or 36%, were linked to meteoritic or cometary activity, though not always strongly. Sometimes the fall of red rain seems to have occurred after an air-burst, as from a meteor exploding in air; other times the odd rainfall is merely recorded in the same year as the appearance of a comet.
Panspermia hypothesis
In 2003 Godfrey Louis and Santhosh Kumar, physicists at the Mahatma Gandhi University in Kottayam, Kerala, posted an article entitled "Cometary panspermia explains the red rain of Kerala" on the non-peer-reviewed arXiv website. While the CESS report said there was no apparent relationship between the loud sound (possibly a sonic boom) and flash of light which preceded the red rain, to Louis and Kumar it was a key piece of evidence. They proposed that a meteor (from a comet containing the red particles) caused the sound and flash, and that when it disintegrated over Kerala it released the red particles, which slowly fell to the ground. However, they offered no explanation of how debris from a meteor could continue to fall in the same area over a period of two months while remaining unaffected by winds.
Their work indicated that the particles were of biological origin (consistent with the CESS report), however, they invoked the panspermia hypothesis to explain the presence of cells in a supposed fall of meteoric material. Additionally, using ethidium bromide they were unable to detect DNA or RNA in the particles. Two months later they posted another paper on the same web site entitled "New biology of red rain extremophiles prove cometary panspermia" in which they reported that
The microorganism isolated from the red rain of Kerala shows very extraordinary characteristics, like the ability to grow optimally at 300 °C and the capacity to metabolise a wide range of organic and inorganic materials.
These claims and data have yet to be verified and reported in any peer reviewed publication. In 2006 Louis and Kumar published a paper in Astrophysics and Space Science entitled "The red rain phenomenon of Kerala and its possible extraterrestrial origin" which reiterated their arguments that the red rain was biological matter from an extraterrestrial source but made no mention of their previous claims to having induced the cells to grow. The team also observed the cells using phase contrast fluorescence microscopy, and they concluded that: "The fluorescence behaviour of the red cells is shown to be in remarkable correspondence with the extended red emission observed in the Red Rectangle Nebula and other galactic and extragalactic dust clouds, suggesting, though not proving an extraterrestrial origin." One of their conclusions was that if the red rain particles are biological cells and are of cometary origin, then this phenomenon can be a case of cometary panspermia.
In August 2008 Louis and Kumar again presented their case at an astrobiology conference. The abstract for their paper states that "The red cells found in the red rain in Kerala, India are now considered as a possible case of extraterrestrial life form. These cells can undergo rapid replication even at an extreme high temperature of 300 °C. They can also be cultured in diverse unconventional chemical substrates. The molecular composition of these cells is yet to be identified."
In September 2010 a similar paper was presented at a conference in California, US.
Cosmic ancestry
Researcher Chandra Wickramasinghe used Louis and Kumar's "extraterrestrial origin" claim to further support his panspermia hypothesis called cosmic ancestry. This hypothesis postulates that life is neither the product of supernatural creation, nor is it spontaneously generated through abiogenesis, but that it has always existed in the universe. Cosmic ancestry speculates that higher life forms, including intelligent life, descend ultimately from pre-existing life which was at least as advanced as the descendants.
Criticism
Louis and Kumar first published their finding on a website in 2003, and have since presented papers at conferences and in astrophysics publications a number of times. The controversial conclusion of Louis et al. is the only hypothesis suggesting that these organisms are of extraterrestrial origin. Such reports have been popular in the media, with major news agencies such as CNN repeating the panspermia hypothesis without critique.
The hypothesis' authors – Louis and Kumar – did not explain how debris from a meteor could have continued to fall on the same area over a period of two months despite changing climatic conditions and wind patterns. Samples of the red particles were also sent for analysis to their collaborators Milton Wainwright at the University of Sheffield and Chandra Wickramasinghe at Cardiff University. Louis then incorrectly reported on 29 August 2010, in the non-peer-reviewed online physics archive arXiv.org, that they were able to have these cells "reproduce" when incubated in high-pressure saturated steam at 121 °C (autoclaved) for up to two hours. Their conclusion was that these cells reproduced, without DNA, at temperatures higher than any known life form on Earth can tolerate, while claiming that the cells were unable to reproduce at temperatures typical of known organisms.
Regarding the "absence" of DNA, Louis admits he has no training in biology, and has not reported the use of any standard microbiology growth medium to culture and induce germination and growth of the spores, basing his claim of "biological growth" on light absorption measurements following aggregation by supercritical fluids, an inert physical observation. Both his collaborators, Wickramasinghe and Milton Wainwright independently extracted and confirmed the presence of DNA from the spores. The absence of DNA was key to Louis and Kumar's hypothesis that the cells were of extraterrestrial origins.
Louis' only reported attempt to stain the spores' DNA used malachite green, which is generally used to stain bacterial endospores, not algal spores; the thick, impermeable cell walls of such spores function primarily to ensure survival through periods of environmental stress. The spores are therefore resistant to ultraviolet and gamma radiation, desiccation, lysozyme, temperature extremes, starvation and chemical disinfectants. Visualizing algal spore DNA under a light microscope can be difficult because the highly resistant spore wall is impermeable to the dyes and stains used in normal staining procedures. The spores' DNA is tightly packed, encapsulated and desiccated; the spores must therefore first be cultured in a suitable growth medium and at a suitable temperature to induce germination, then cell growth and reproduction, before the DNA can be stained.
Other researchers have noted recurring instances of red rainfalls in 1818, 1846, 1872, 1880, 1896, and 1950, and several times since then. Most recently, coloured rainfall occurred over Kerala during the summers of 2001, 2006, 2007, 2008, and 2012; since 2001, botanists have found the same Trentepohlia spores every time. This supports the notion that the red rain is a seasonal local environmental feature caused by algal spores.
In popular culture
The science fiction film Red Rain was loosely based on the red rain in Kerala story. It was directed by Rahul Sadasivan and released in India on 6 December 2013.
See also
References
External links
Sampath, S., Abraham, T. K., Sasi Kumar, V., & Mohanan, C.N. (2001). Colored Rain: A Report on the Phenomenon. CESS-PR-114-2001, Center for Earth Science Studies and Tropical Botanic Garden and Research Institute.
"When aliens rained over India" by Hazel Muir in New Scientist
"Searching for 'our alien origins'" by Andrew Thompson in BBC News
"Fluorescence Mystery in Red Rain Cells of Kerala, India " Linda Moulton Howe Earthfiles
"Home page of Dr A Santhosh Kumar"
2001 in India
2001 meteorology
Anomalous weather
Environment of Kerala
Panspermia
Weather events in India
Rain
Changanassery
Meteorological hypotheses
Trentepohliaceae | Red rain in Kerala | Physics,Biology | 3,712 |
1,425,916 | https://en.wikipedia.org/wiki/Fulkerson%20Prize | The Fulkerson Prize for outstanding papers in the area of discrete mathematics is sponsored jointly by the Mathematical Optimization Society (MOS) and the American Mathematical Society (AMS). Up to three awards of $1,500 each are presented at each (triennial) International Symposium of the MOS. Originally, the prizes were paid out of a memorial fund administered by the AMS that was established by friends of the late Delbert Ray Fulkerson to encourage mathematical excellence in the fields of research exemplified by his work. The prizes are now funded by an endowment administered by the MOS.
Winners
1979:
Richard M. Karp for classifying many important NP-complete problems.
Kenneth Appel and Wolfgang Haken for the four color theorem.
Paul Seymour for generalizing the max-flow min-cut theorem to matroids.
1982:
D.B. Judin, Arkadi Nemirovski, Leonid Khachiyan, Martin Grötschel, László Lovász and Alexander Schrijver for the ellipsoid method in linear programming and combinatorial optimization.
G. P. Egorychev and D. I. Falikman for proving van der Waerden's conjecture that the matrix with all entries equal has the smallest permanent of any doubly stochastic matrix.
1985:
Jozsef Beck for tight bounds on the discrepancy of arithmetic progressions.
H. W. Lenstra Jr. for using the geometry of numbers to solve integer programs with few variables in time polynomial in the number of constraints.
Eugene M. Luks for a polynomial time graph isomorphism algorithm for graphs of bounded maximum degree.
1988:
Éva Tardos for finding minimum cost circulations in strongly polynomial time.
Narendra Karmarkar for Karmarkar's algorithm for linear programming.
1991:
Martin E. Dyer, Alan M. Frieze and Ravindran Kannan for random-walk-based approximation algorithms for the volume of convex bodies.
Alfred Lehman for 0,1-matrix analogues of the theory of perfect graphs.
Nikolai E. Mnev for Mnev's universality theorem, that every semialgebraic set is equivalent to the space of realizations of an oriented matroid.
1994:
Louis Billera for finding bases of piecewise-polynomial function spaces over triangulations of space.
Gil Kalai for making progress on the Hirsch conjecture by proving subexponential bounds on the diameter of d-dimensional polytopes with n facets.
Neil Robertson, Paul Seymour and Robin Thomas for the six-color case of Hadwiger's conjecture.
1997:
Jeong Han Kim for finding the asymptotic growth rate of the Ramsey numbers R(3,t).
2000:
Michel X. Goemans and David P. Williamson for approximation algorithms based on semidefinite programming.
Michele Conforti, Gérard Cornuéjols, and M. R. Rao for recognizing balanced 0-1 matrices in polynomial time.
2003:
J. F. Geelen, A. M. H. Gerards and A. Kapoor for the GF(4) case of Rota's conjecture on matroid minors.
Bertrand Guenin for a forbidden minor characterization of the weakly bipartite graphs (graphs whose bipartite subgraph polytope is 0-1).
Satoru Iwata, Lisa Fleischer, Satoru Fujishige, and Alexander Schrijver for showing submodular minimization to be strongly polynomial.
2006:
Manindra Agrawal, Neeraj Kayal and Nitin Saxena, for the AKS primality test.
Mark Jerrum, Alistair Sinclair and Eric Vigoda, for approximating the permanent.
Neil Robertson and Paul Seymour, for the Robertson–Seymour theorem showing that graph minors form a well-quasi-ordering.
2009:
Maria Chudnovsky, Neil Robertson, Paul Seymour, and Robin Thomas, for the strong perfect graph theorem.
Daniel A. Spielman and Shang-Hua Teng, for smoothed analysis of linear programming algorithms.
Thomas C. Hales and Samuel P. Ferguson, for proving the Kepler conjecture on the densest possible sphere packings.
2012:
Sanjeev Arora, Satish Rao, and Umesh Vazirani for improving the approximation ratio for graph separators and related problems from O(log n) to O(√log n).
Anders Johansson, Jeff Kahn, and Van H. Vu for determining the threshold of edge density above which a random graph can be covered by disjoint copies of a given smaller graph.
László Lovász and Balázs Szegedy for characterizing subgraph multiplicity in sequences of dense graphs.
2015:
Francisco Santos Leal for a counter-example of the Hirsch conjecture.
2018:
Robert Morris, Yoshiharu Kohayakawa, Simon Griffiths, Peter Allen, and Julia Böttcher for The chromatic thresholds of graphs
Thomas Rothvoss for his work on the extension complexity of the matching polytope.
2021:
Béla Csaba, Daniela Kühn, Allan Lo, Deryk Osthus, and Andrew Treglown for Proof of the 1-factorization and Hamilton decomposition conjectures
Jin-Yi Cai and Xi Chen for Complexity of Counting CSP with Complex Weights
Ken-Ichi Kawarabayashi and Mikkel Thorup for Deterministic Edge Connectivity in Near-Linear Time
Source: Mathematical Optimization Society official website.
2024:
Ben Cousins and Santosh Vempala for Gaussian cooling and algorithms for volume and Gaussian volume
Zilin Jiang, Jonathan Tidor, Yuan Yao, Shengtong Zhang, and Yufei Zhao for Equiangular lines with a fixed angle
Nathan Keller and Noam Lifshitz for The junta method for hypergraphs and the Erdős–Chvátal simplex conjecture
Source: American Mathematical Society official website.
See also
List of mathematics awards
References
External links
Official web page (MOS)
Official site with award details (AMS website)
AMS archive of past prize winners
Computer science awards
Awards of the American Mathematical Society
Awards of the Mathematical Optimization Society
Triennial events
1979 establishments in the United States
Discrete mathematics
Awards established in 1979 | Fulkerson Prize | Mathematics,Technology | 1,263 |
1,633,981 | https://en.wikipedia.org/wiki/Insect%20repellent | An insect repellent (also commonly called "bug spray") is a substance applied to the skin, clothing, or other surfaces to discourage insects (and arthropods in general) from landing or climbing on that surface. Insect repellents help prevent and control the outbreak of insect-borne (and other arthropod-borne) diseases such as malaria, Lyme disease, dengue fever, bubonic plague, river blindness, and West Nile fever. Pest animals commonly serving as vectors for disease include insects such as fleas, flies, and mosquitoes, as well as ticks (arachnids).
Some insect repellents are insecticides (bug killers), but most simply discourage insects and send them flying or crawling away. Nearly any repellent would be fatal to insects at the median lethal dose, but classification as an insecticide implies death even at lower doses.
Effectiveness
Synthetic repellents tend to be more effective and/or longer lasting than "natural" repellents.
For protection against ticks and mosquito bites, the U.S. Centers for Disease Control (CDC) recommends DEET, icaridin (picaridin, KBR 3023), oil of lemon eucalyptus (OLE), para-menthane-diol (PMD), IR3535 and 2-undecanone with the caveat that higher percentages of the active ingredient provide longer protection.
In 2015, researchers at New Mexico State University tested 10 commercially available products for their effectiveness at repelling mosquitoes. The known active ingredients tested included DEET (at various concentrations), geraniol, p-menthane-3-8-diol (found in lemon eucalyptus oil), thiamine, and several oils (soybean, rosemary, cinnamon, lemongrass, citronella, and lemon eucalyptus). Two of the products tested were fragrances where the active ingredients were unknown. On the mosquito Aedes aegypti, the vector of Zika virus, only one repellent that did not contain DEET had a strong effect for the duration of the 240-minute test: a lemon eucalyptus oil repellent. However, Victoria's Secret Bombshell, a perfume not advertised as an insect repellent, performed effectively during the first 120 minutes after application.
In one comparative study from 2004, IR3535 was as effective or better than DEET in protection against Aedes aegypti and Culex quinquefasciatus mosquitoes. Other sources (official publications of the associations of German physicians as well as of German druggists) suggest the contrary and state DEET is still the most efficient substance available and the substance of choice for stays in malaria regions, while IR3535 has little effect. However, some plant-based repellents may provide effective relief as well. Essential oil repellents can be short-lived in their effectiveness.
A test of various insect repellents by an independent consumer organization found that repellents containing DEET or icaridin are more effective than repellents with "natural" active ingredients. All the synthetics gave almost 100% repellency for the first 2 hours, where the natural repellent products were most effective for the first 30 to 60 minutes, and required reapplication to be effective over several hours.
Although highly toxic to cats, permethrin is recommended as protection against mosquitoes for clothing, gear, or bed nets. In an earlier report, the CDC found oil of lemon eucalyptus to be more effective than other plant-based treatments, with a similar effectiveness to low concentrations of DEET. However, a 2006 published study found in both cage and field studies that a product containing 40% oil of lemon eucalyptus was just as effective as products containing high concentrations of DEET. Research has also found that neem oil is mosquito repellent for up to 12 hours. Citronella oil's mosquito repellency has also been verified by research, including effectiveness in repelling Aedes aegypti, but requires reapplication after 30 to 60 minutes.
There are also products available based on sound production, particularly ultrasound (inaudibly high-frequency sounds) which purport to be insect repellents. However, these electronic devices have been shown to be ineffective based on studies done by the United States Environmental Protection Agency and many universities.
Safety issues
For humans
Children may be at greater risk for adverse reactions to repellents, in part, because their exposure may be greater.
Children can be at greater risk of accidental eye contact or ingestion.
As with chemical exposures in general, pregnant women should take care to avoid exposures to repellents when practical, as the fetus may be vulnerable.
Some experts also recommend against applying chemicals such as DEET and sunscreen simultaneously, since that would increase DEET penetration. Xiaochen Gu, a professor at the University of Manitoba's Faculty of Pharmacy who led a study on mosquitoes, advises that DEET should be applied 30 or more minutes after sunscreen. Gu also recommends insect repellent sprays instead of lotions, which are rubbed into the skin, "forcing molecules into the skin".
Regardless of which repellent product used, it is recommended to read the label before use and carefully follow directions. Usage instructions for repellents vary from country to country. Some insect repellents are not recommended for use on younger children.
In the DEET Reregistration Eligibility Decision (RED) the United States Environmental Protection Agency (EPA) reported 14 to 46 cases of potential DEET associated seizures, including 4 deaths. The EPA states: "... it does appear that some cases are likely related to DEET toxicity," but observed that with 30% of the US population using DEET, the likely seizure rate is only about one per 100 million users.
The Pesticide Information Project of Cooperative Extension Offices of Cornell University states that, "Everglades National Park employees having extensive DEET exposure were more likely to have insomnia, mood disturbances and impaired cognitive function than were lesser exposed co-workers".
The EPA states that citronella oil shows little or no toxicity and has been used as a topical insect repellent for 60 years. However, the EPA also states that citronella may irritate skin and cause dermatitis in certain individuals. Canadian regulatory authorities' concern with citronella-based repellents is primarily based on data gaps in toxicology, not on incidents.
Within countries of the European Union, implementation of Regulation 98/8/EC, commonly referred to as the Biocidal Products Directive, has severely limited the number and type of insect repellents available to European consumers. Only a small number of active ingredients have been supported by manufacturers in submitting dossiers to the EU Authorities.
In general, only formulations containing DEET, icaridin (sold under the trade name Saltidin and formerly known as Bayrepel or KBR3023), IR3535 and citriodiol (p-menthane-3,8-diol) are available. Most "natural" insect repellents such as citronella, neem oil, and herbal extracts are no longer permitted for sale as insect repellents in the EU due to their lack of effectiveness; this does not preclude them from being sold for other purposes, as long as the label does not indicate they are a biocide (insect repellent).
Toxicity for other animals
A 2018 study found that icaridin is highly toxic to salamander larvae at what the authors described as conservative exposure doses. The authors additionally found the standard LC50 measure to be inadequate for capturing this result.
Permethrin is highly toxic to cats but not to dogs or humans.
Common insect repellents
Common synthetic insect repellents
Benzaldehyde, for bees
Butopyronoxyl (trade name Indalone). Widely used in a "6-2-2" mixture (60% Dimethyl phthalate, 20% Indalone, 20% Ethylhexanediol) during the 1940s and 1950s before the commercial introduction of DEET
DEET (N,N-diethyl-m-toluamide) the most common and effective insect repellent
Dimethyl carbate
Dimethyl phthalate, not as common as it once was but still occasionally an active ingredient in commercial insect repellents
Ethyl butylacetylaminopropionate (IR3535 or 3-[N-Butyl-N-acetyl]-aminopropionic acid, ethyl ester)
Ethylhexanediol, also known as Rutgers 612 or "6–12 repellent," discontinued in the US in 1991 due to evidence of causing developmental defects in animals
Icaridin, also known as picaridin, Bayrepel, and KBR 3023 considered equal in effectiveness to DEET
Methyl anthranilate and other anthranilate-based insect repellents
Metofluthrin
Permethrin is a contact insecticide rather than a repellent
SS220 is a repellent being researched that has shown promise to provide significantly better protection than DEET
Tricyclodecenyl allyl ether, a compound often found in synthetic perfumes
Common natural insect repellents
Beautyberry (Callicarpa) leaves
Birch tree bark is traditionally made into tar. Combined with another oil (e.g., fish oil) at 1/2 dilution, it is then applied to the skin for repelling mosquitos
Bog myrtle (Myrica gale)
Catnip oil whose active compound is Nepetalactone
Citronella oil (citronella candles are not effective)
Essential oil of the lemon eucalyptus (Corymbia citriodora) and its active compound p-menthane-3,8-diol (PMD)
Lemongrass
Neem oil
Tea tree oil from the leaves of Melaleuca alternifolia
Tobacco
Insect repellents from natural sources
Several natural ingredients are certified by the United States Environmental Protection Agency as insect repellents, namely catnip oil, oil of lemon eucalyptus (OLE) (and its active ingredient p-Menthane-3,8-diol), oil of citronella, and 2-Undecanone, which is usually produced synthetically but has also been isolated from many plant sources.
Many other studies have also investigated the potential of natural compounds from plants as insect repellents. Moreover, there are many preparations from naturally occurring sources that have been used as a repellent to certain insects. Some of these act as insecticides while others are only repellent. Below is a list of some natural products with repellent activity:
Achillea alpina (mosquitos)
alpha-terpinene (mosquitos)
Andrographis paniculata extracts (mosquito)
Basil
Sweet basil (Ocimum basilicum)
Breadfruit (Insect repellent, including mosquitoes)
Callicarpa americana (beautyberry)
Camphor (mosquitoes)
Carvacrol (mosquitos)
Castor oil (Ricinus communis) (mosquitos)
Catnip oil (Nepeta species) (nepetalactone against mosquitos)
Cedar oil (mosquitos, moths)
Celery extract (Apium graveolens) (mosquitos) In clinical testing an extract of celery was demonstrated to be at least equally effective to 25% DEET, although the commercial availability of such an extract is not known.
Cinnamon (leaf oil kills mosquito larvae)
Citronella oil (repels mosquitos) (contains insect repelling substances, such as citronellol and geraniol)
Clove oil (mosquitos)
D-Limonene (ticks, fleas, flies, mosquitoes, and other insects) (widely used in insect repellents for pets)
Eucalyptus oil (70%+ eucalyptol), (cineol is a synonym), mosquitos, flies, dust mites In the U.S., eucalyptus oil was first registered in 1948 as an insecticide and miticide.
Fennel oil (Foeniculum vulgare) (mosquitos)
Garlic (Allium sativum) (Mosquito, rice weevil, wheat flour beetle)
Geranium oil (also known as Pelargonium graveolens)
Hinokitiol (ticks, mosquitos, larvae)
Lavender (ineffective alone, but measurable effect in certain repellent mixtures)
Lemon eucalyptus (Corymbia citriodora) essential oil and its active ingredient p-menthane-3,8-diol (PMD)
Lemongrass oil (Cymbopogon species) (mosquitos)
East-Indian lemon grass (Cymbopogon flexuosus)
Linalool (ticks, fleas, mites, mosquitoes, spiders, cockroach)
Marjoram (spider mites Tetranychus urticae and Eutetranychus orientalis)
Mint (menthol is active chemical.) (Mentha sp.)
Neem oil (Azadirachta indica) (Repels or kills mosquitos, their larvae and a plethora of other insects including those in agriculture)
Nootkatone (ticks, mosquitoes and other insects)
Oleic acid, repels bees and ants by simulating the "smell of death" produced by their decomposing corpses.
Pennyroyal (Mentha pulegium) (mosquitos, fleas,) but very toxic to pets
Peppermint (Mentha x piperita) (mosquitos)
Pyrethrum (from Chrysanthemum species, particularly C. cinerariifolium and C. coccineum)
Rosemary (Rosmarinus officinalis) (mosquitos)
Spanish Flag (Lantana camara) (against Tea Mosquito Bug, Helopeltis theivora)
Tea tree oil from the leaves of Melaleuca alternifolia
Thyme (Thymus species) (mosquitos)
Yellow nightshade (Solanum villosum), berry juice (against Stegomyia aegypti mosquitos)
Less effective methods
Some old studies suggested that the ingestion of large doses of thiamine (vitamin B1) could be effective as an oral insect repellent against mosquito bites. However, there is now conclusive evidence that thiamin has no efficacy against mosquito bites. Some claim that plants such as wormwood or sagewort, lemon balm, lemon grass, lemon thyme, and the mosquito plant (Pelargonium) will act against mosquitoes. However, scientists have determined that these plants are "effective" for a limited time only when the leaves are crushed and applied directly to the skin.
There are several, widespread, unproven theories about mosquito control, such as the assertion that vitamin B, in particular B1 (thiamine), garlic, ultrasonic devices or incense can be used to repel or control mosquitoes. Moreover, manufacturers of "mosquito repelling" ultrasonic devices have been found to be fraudulent, and their devices were deemed "useless" according to a review of scientific studies.
Alternatives to repellent
People can reduce the number of mosquito bites they receive (to a greater or lesser degree) by:
Using a mosquito net
Wearing long clothing that covers the skin and is tucked in to seal up holes
Avoiding the outdoors during dawn and dusk, when mosquitos are most active
Keeping air moving to prevent mosquitos from landing, such as by using a fan
Wearing light-colored clothing (light objects are harder for mosquitos to detect)
Reducing exercise, which reduces output of carbon dioxide used by mosquitos for detection
History
Testing and scientific certainty were desired at the end of the 1940s. To that end, products meant to be used by humans were tested on model animals to speed trials. Eddy & McGregor 1949 and Wiesmann & Lotmar 1949 used mice; Wasicky et al. 1949 used canaries and guinea pigs; Kasman et al. 1953 also used guinea pigs; Starnes & Granett 1953 used rabbits; and many used cattle.
See also
Fly spray (insecticide)
Mosquito coil
Mosquito control
Mosquito net
Pest control
RID Insect Repellent
Slug tape
VUAA1
Chemical ecology
References
External links
2011 review of studies of plant-based mosquito repellents – NIH
Aphid repellents
Choosing and Using Insect Repellents – National Pesticide Information Center
Dr. Duke's Phytochemical and Ethnobotanical Databases (plant parts with Insect-repellent Activity from the chemical Borneol)
Mosquito repellents; Florida U
Insect repellent active ingredients recommended by the CDC
Chemical ecology
Hiking equipment
Household chemicals
mt:Repellent tal-insetti | Insect repellent | Chemistry,Biology | 3,469 |
11,420,856 | https://en.wikipedia.org/wiki/G-CSF%20factor%20stem-loop%20destabilising%20element | The G-CSF factor stem-loop destabilising element (SLDE) is an RNA element found in the mRNA of granulocyte colony-stimulating factor (G-CSF), a cytokine secreted by fibroblasts and endothelial cells in response to the inflammatory mediators interleukin-1 (IL-1) and tumour necrosis factor-alpha, and by activated macrophages. The synthesis of G-CSF is regulated both transcriptionally and through control of mRNA stability. In unstimulated cells G-CSF mRNA is unstable but becomes stabilised in response to IL-1 or tumour necrosis factor alpha, and also, in the case of monocytes and macrophages, in response to lipopolysaccharide. It is likely that the presence of the SLDE in the G-CSF mRNA contributes to the specificity of regulation of G-CSF mRNA and enhances the rate of shortening of the poly(A) tail.
Adenylate uridylate-rich elements (AUREs) are present in other cytokine mRNAs, but the SLDE is the most important element mediating the stabilisation of G-CSF mRNA in response to IL-1 or tumour necrosis factor-alpha. Destabilizing elements similar to the SLDE are also found in the mRNAs of IL-2 and IL-6. The 3'-UTR of G-CSF mRNA contains a destabilizing element that is insensitive to calcium ionophore, indicating that the SLDE regulates G-CSF mRNA. AUREs do not function in 5637 bladder carcinoma cells, but the SLDE does. The two destabilizing elements, the SLDE and the AURE, provide multiple mechanisms to regulate cytokine expression.
Neutrophils are the most abundant type of granulocyte and lead the immune system's first response against invaders. Granulocyte colony-stimulating factor (G-CSF) is a glycoprotein that stimulates proliferation of neutrophil progenitor cells and leads to the maturation of neutrophils. Monocytes and macrophages are the main cells that secrete G-CSF, but endothelial cells, fibroblasts, and bone marrow stromal cells have also been found to secrete the glycoprotein. Expression of the G-CSF glycoprotein is complex, involving both transcriptional and post-transcriptional regulation. Two specific types of regulatory elements are present in the 3' untranslated region (3'UTR) of G-CSF mRNA: adenylate uridylate-rich elements (AUREs) and the stem-loop destabilizing element (SLDE). Both have been shown to destabilize G-CSF mRNA. The stability of the mRNA is also regulated by p38 mitogen-activated protein kinase (MAPK), and this phosphorylating enzyme has been shown to be linked to the AUREs in the 3'UTR.
SB203580 specifically inhibits the catalytic activity of p38 MAPK by binding competitively to the active site where ATP would bind, and is used to probe the role of p38 MAPK in cells. SB203580 amplified the lipopolysaccharide-induced increase in G-CSF mRNA levels in mouse bone marrow-derived macrophages and in THP-1 human macrophages. The decay of G-CSF mRNA in the presence of actinomycin D was slower in SB203580-treated cells, showing that SB203580 increased the stability of G-CSF mRNA. The SLDE is essential for the SB203580-induced increase in mRNA stability.
References
External links
Cis-regulatory RNA elements | G-CSF factor stem-loop destabilising element | Chemistry | 795 |
36,043,942 | https://en.wikipedia.org/wiki/ZombiU | ZombiU is a 2012 first-person survival horror video game developed by Ubisoft Montpellier and published by Ubisoft. It was released for the Wii U as one of its launch games in November 2012. In the game, the player assumes control of a human survivor amid a 2012 zombie apocalypse. Featuring a permadeath system, it uses the Wii U GamePad extensively to scan the environment and maintain the survivor's inventory. The game was released under the name Zombi for PlayStation 4, Windows, and Xbox One in August 2015. The port, handled by Straight Right, adds new melee weapons and removes the multiplayer feature.
Ubisoft was approached by Nintendo to develop a mature game for the Wii U. Originally envisioned as a spin-off of the Raving Rabbids franchise and a fast-paced first-person shooter, Killer Freaks from Outer Space (with small, agile monsters), the game was retooled as ZombiU after the development team realized that the Wii U GamePad was suitable for slower-paced games. Many of the game's features, including the GamePad feature and the permadeath feature, went through several iterations. Among other video games, ZombiU was influenced by Resident Evil, Condemned: Criminal Origins and Peter Jackson's King Kong. London was chosen as the game's setting because of its blend of modern and medieval architecture and its rich history. Ubisoft Bucharest led development of the game's multiplayer portion.
ZombiU received generally positive reviews from critics, while the game's Zombi version received mixed reviews. Critics praised the game's emphasis on survival horror, atmosphere, and the permadeath system. They had mixed opinions about gameplay, multiplayer, story, and use of the GamePad. Technical problems, such as glitches and load times, were criticized. The game was unprofitable for Ubisoft, prompting it to turn another Wii U exclusive, Rayman Legends, into a multiplatform game. A prototype for a sequel was developed, but was canceled when the game failed financially.
Gameplay
In ZombiU, a first-person survival horror game set in London, the player is a survivor of a zombie apocalypse. They are contacted by the Prepper, a mysterious figure who tasks them with maintaining his safe house and staying prepared for survival in the devastated city, all while the survivor seeks a cure for the infection at the behest of Dr. Knight, a scientist/doctor stationed at Buckingham Palace. The player has a number of ways to deal with enemies, and can confront the zombies with firearms, land mines, and Molotov cocktails. They can use an unbreakable cricket bat, which does less damage than other weapons. There are several types of zombies, including those which explode, are armored, or spill corrosive fluid. The zombies are sensitive to light, and are attracted to a player who is using a flashlight or flare. Sound produced by firearms will also attract nearby enemies to a player, who can push them away with their weapon. The player can also use stealth to avoid being noticed by enemies.
Exploration is encouraged in the game. By exploring the game's world, players can find beds (which are save points) and routes leading them back to their safe house. New weapons, ammunition, and scarce items can be collected by searching areas or dead bodies. These items include food and medkits (which restore health), plywood scraps (used to barricade doors), and weapon parts (which can be used in the safe house to upgrade weapons). As players discover new items during the game, previously-inaccessible areas can be entered. These items are stored in the player character's inventory (known as a bug-out-bag) or a cache in the safe house. If the player's character is killed by a zombie (which can occur with one bite), the character will permanently die and the player will then assume the role of another survivor. The previous character will become a zombie, whom the player must kill to reclaim their inventory. If Miiverse is enabled, other players' characters may also appear as zombies carrying the items they collected. A player can also place symbolic clues and hints, visible to other players, on walls. In addition to the normal-difficulty mode, the game has a "Chicken" mode, which lowers the game's difficulty, for new and inexperienced players, and a survival mode where the game ends when the player's original character dies.
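The permadeath loop described above amounts to a simple hand-off of state between survivors. The following minimal sketch (hypothetical names and structure, not the game's actual code) models it: when a survivor dies, their bag is attached to a newly spawned zombie, and the next survivor must kill that zombie to reclaim the items.

```python
from dataclasses import dataclass, field

@dataclass
class Zombie:
    carried_items: list = field(default_factory=list)   # loot dropped when killed

@dataclass
class Survivor:
    name: str
    bag: list = field(default_factory=list)             # bug-out-bag contents

class PermadeathGame:
    """Toy model of a ZombiU-style permadeath loop: one active survivor at a time."""
    def __init__(self, survivor_names):
        self.pending = list(survivor_names)
        self.world_zombies = []
        self.active = Survivor(self.pending.pop(0))

    def survivor_dies(self):
        # The dead survivor becomes a zombie carrying everything they held.
        self.world_zombies.append(Zombie(carried_items=self.active.bag))
        if not self.pending:
            raise RuntimeError("Game over: no survivors left (survival mode).")
        self.active = Survivor(self.pending.pop(0))      # respawn as a new survivor

    def kill_zombie(self, zombie):
        # Killing a zombie (including a former survivor) transfers its loot.
        self.world_zombies.remove(zombie)
        self.active.bag.extend(zombie.carried_items)

# Example: the first survivor dies holding a cricket bat; the next one reclaims it.
game = PermadeathGame(["Survivor 1", "Survivor 2"])
game.active.bag.append("cricket bat")
game.survivor_dies()
former_self = game.world_zombies[0]
game.kill_zombie(former_self)
print(game.active.name, game.active.bag)   # Survivor 2 ['cricket bat']
```

The sketch deliberately omits the game's other wrinkles, such as Miiverse-shared zombies from other players and the persistent world state.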
The game's controls make extensive use of the Wii U GamePad. During normal play, the touchscreen manages player inventory and displays a mini-map of the immediate area (indicating the locations of the player and nearby enemies). The touchscreen is also used for context-sensitive actions such as executions, escaping from enemies, and hacking combination locks. The GamePad's gyroscope, which allows the controller to sense its rotation and tilt in three-dimensional space, is also used. While viewing the touchscreen, a player can move the controller to scan different areas to find items. While performing these actions, the television's perspective switches to a fixed third-person view of the player character and the surrounding area. The player is vulnerable to attack in this state, and must watch the GamePad's touchscreen and the television screen to avoid harm. The Wii U camera allows players to import a face, which will be "zombified", into the game.
ZombiU has three multiplayer modes. In Time Attack mode, players must kill as many zombies as they can in a limited time. In survival mode, players stay alive as long as possible with a limited supply of resources. The results of these two modes will be posted on the game's online leaderboard. The third mode, King of Zombies, is an offline asymmetrical multiplayer mode. A player, using the Wii U Pro Controller, plays a human survivor who must navigate a map to obtain all capture points. Another player, known as King Boris and using the Wii U GamePad with a top-down perspective, must deploy a maximum of 10 zombies to halt the players' progress and capture a flag. A progression system is present, in which the Pro Controller player and the GamePad player unlock more-powerful weapons and different types of zombies.
Plot
Four hundred years ago, scientist John Dee made what was known as the Black Prophecy. The Prepper, an ex-army, no-nonsense man, prepares for the coming apocalypse, and a secret society known as the Ravens of Dee researches Dee's predictions to stop them from taking place. In November 2012, the Black Prophecy begins to be realized in the form of an outbreak of zombies in London.
After the outbreak the player character is led to the Prepper's bunker, where he receives weapons, a bug-out bag and the Prepper Pad, a device with a built-in scanner and radar. The Prepper, a Raven of Dee, left the organization due to a difference in interpreting Dee's prophecy. According to the scientist, Black Angels would save the world; the Ravens thought that Dee was referring to their organization, but the Prepper believed that he was talking about something worse.
The player goes underground while scavenging at Buckingham Palace and is contacted by Dr. Knight, a physician for the Royal Family who is trying to find a cure. He orders the player to find diaries and notes left by Dee in exchange for scanner upgrades and a virucide, a chemical capable of killing the virus in those who are infected. This is done without the knowledge of the Prepper, who believes that a cure is impossible. After leaving the palace, the player is led by Sondra to the remnants of the Ravens of Dee, who attempt to escape with the player by helicopter from the Tower of London. A supernatural force drives a flock of ravens into the propeller, making the helicopter crash. The player returns to the Prepper's bunker, and Sondra promises to save them.
In the bunker, the generator is running out of fuel and the Prepper tells the player to get petrol from a man named Vikram. The player finds Vikram, who has gone mad; he and his young child tell the player to go to a local nursery to get antibiotics for his wife. Despite the Prepper's protests, who thinks the player should raid the petrol station, he goes to the nursery; the staff and children have been eaten by a nurse, who became a ghost-like monster capable of teleportation. The player returns with the antibiotics only to find that Vikram has become infected and eaten his family, though he begs the player to kill him before succumbing to the virus, with the player granting his request.
On the way back to the bunker with the petrol, the player receives a distress signal from a young girl that she and her family are barricaded in St. George's Church. Although the Prepper correctly insists it is a trap, the player goes to rescue her. The player is kidnapped by Boris, the self-proclaimed King of Zombies, and his gang, who force survivors to fight off waves of zombies in the church's courtyard. Boris' lights, microphone, and music attract a horde of zombies, and he and his gang are eaten whilst the player escapes.
The player has enough upgrades to collect Dee's remaining letters for Dr. Knight, who believes that he can now make a panacea. The player returns to the safe house, where Knight reveals that the panacea is not a cure but a vaccine. The Prepper expresses profound disappointment in the player as he returns to Buckingham Palace. Knight has left his bunker for the queen's quarters to access data on a USB flash drive. The player goes through the palace, only to discover that Knight has been turned into a zombie, forcing the player to kill him. The player uses Knight's eye to bypass a retinal scanner and collect the flash drive.
They run back to the Tower of London as the RAF is about to firebomb London and Sondra tries to keep her promise to rescue the player. This enrages the Prepper, who orders the player to leave the safe house and vows to outlive the Ravens of Dee. After the player leaves, the Prepper (who has been using survivors to stock his food supplies) finds a new lackey to do his bidding.
Development
Origin
ZombiU was developed by Ubisoft Montpellier with an 80-person team; Ubisoft Bucharest provided additional assistance. The game's development paralleled that of Rayman Legends, another Montpellier project. It was produced by Guillaume Brunier, with Gabrielle Shrager its director and lead writer; Brunier and Shrager had collaborated on the development of From Dust. The team was approached by Nintendo to develop a mature game for their Wii U platform. The team developed the game using the proprietary LyN game engine. Since the hardware design was incomplete, the team made a number of prototypical games and decided that the new title would extensively utilize the Wii U GamePad. Originally conceived as a tie-in for Raving Rabbids, ZombiU was described as an experiment by Montpellier in adapting the Rabbids franchise for a more "hardcore" audience. According to Guillaume, the monsters were based on the Rabbids because some people found the sound generated by the Rabbids annoying; in the game, players can "trash and destroy" them. The association with Rabbids was eliminated by Ubisoft during development, since they thought that turning Rabbids into monsters did not suit the Raving Rabbids franchise. Announced at E3 2011 as Killer Freaks from Outer Space, the arcade first-person shooter had players with extravagant weapons killing small monsters.
Although reaction to the game's announcement satisfied the studio, they realized that its fast pace would make extensive use of the GamePad while muting the TV display's utility; players must focus on one screen while ignoring the other. The tiny monsters were problematic, since shooting them was unsatisfying. The issues were singled out by Ubisoft management at a Ubisoft Paris meeting, and the team decided to slow the game's pace. The enemies were changed from small monsters to zombies, since their movement is slower and they are easily identified as monsters. According to Jean-Karl Tupin-Bron, the team was saddened by the fact that Killer Freaks would not work but delighted by the results of the transition. By September 2011, most of the Killer Freaks team had joined the zombie-game team. During the transition, the game's tone became more serious. The team tried to distinguish itself from other zombie games in the video-game market, making the GamePad its central feature. They were inspired by the 2007 film, I Am Legend, in which the protagonist is the only survivor left in a city and must be constantly on guard against zombies. ZombiU, the game's working title, became its official name since it was a zombie game for Wii U.
Design
The team chose London as ZombiU's setting because the city's prominence after hosting the 2012 Olympics and its dark history (including Jack the Ripper) matched the tone of the game. The blend of modern and medieval architecture supported the world they wanted to build, something not present in the US. London's proximity to the French developer also allowed the team to research the city. According to writer Antony Johnston, the game features environments from a lavish palace to decaying ruins to "reflect the contrast of modern London". London allowed the development team to use Beefeaters, royal guards and a cricket bat inspired by Shaun of the Dead. The cricket bat was the only melee weapon in the game's original Wii U version. After realizing that some players used the bat almost exclusively while playing the game, Shrager regretted not adding more melee weapons and thought that the players deserved a more-varied experience.
The Wii U GamePad was described as a survival kit for players. It was inspired by the studio's previous game, Peter Jackson's King Kong, in which players make use of the environment to survive (such as using a torch to scare off enemies); the team made the GamePad a survival kit, taking considerable time to fine-tune the GamePad's functions. Although it was initially possible for players to plan strategy with the GamePad, they rarely did so. The team then retooled the system to include features such as the mini-map, resulting in players focusing on the GamePad and ignoring the action on the TV. Eventually, the team balanced player time on both screens by switching from first- to third-person perspective when the player uses the survival kit. Players are vulnerable when using the kit, prompting them to look at the TV screen while using the GamePad. This increases the game's tension, making it more scary to play. According to Guillaume, the team realized this potential when they implemented the lock-pick feature (when players must be aware of their surroundings while trying to unlock a door). To further immerse players in the game, the voice of the Prepper is channeled through the GamePad's speakers and players need to read text with the pad; since the GamePad is a real-world object, the experience is more intimate. The pad displayed most crucial information to the player, and the game did not have a user interface.
ZombiU was designed as a realistic zombie game. Fear was the game's core emotion and it should be played slowly, similar to some classic horror games. According to game director Jean-Phillipe Caro, players will do badly if they try to play it as a fast-paced action shooter like Call of Duty and ignore the GamePad. A prominent game mechanic is the permadeath system, intended to increase its tension. With the system, the game featured several survivors instead of a hero character. Its narrative was designed to rely on environmental storytelling rather than cutscenes, and players would understand the story through the game experience. The team introduced a persistent level design, in which when players continue playing after their initial survivor is killed. Action by the previous survivor will still affect the game's world; previously-killed zombies will not re-spawn. The permadeath system was debated, with concerns about loading times and backtracking. To ensure that the game's narrative would continue after a player-controlled character died, the survivors were not voiced to avoid narrative dissonance; this made the player, not the survivors, the protagonist. Although Shrager was relieved when the central character was dropped, writing the game was the "toughest challenge" she had ever faced in games. Since survivors can die anywhere in the game, linking the story became difficult. The team introduced the Prepper, an important character who communicates with the survivors by radio to link them and advance the narrative. During development, the permadeath concept was nearly removed.
A core team goal was a game which was difficult but fair, inspired by difficult games such as Dark Souls and Demon's Souls. The permadeath system ensured that players are often challenged, since every zombie can be a deadly threat. It fit the game's survival theme, giving players an incentive to continue. The game was designed for players to learn by experience, slowly improving their skill. The Metroid franchise inspired ZombiU structure, enabling players to backtrack; Left 4 Dead inspired enemy placement, and Condemned: Criminal Origins influenced the game's first-person perspective. Its content was inspired by Capcom's Resident Evil and zombie comics, films and books such as Dead Set and Night of the Living Dead: London.
ZombiU's offline multiplayer mode was developed by Ubisoft Bucharest. Conceived during early development, it was kept when the core game was reworked; its asymmetrical experience made the developer believe in the Wii U's potential, and it was the demo Ubisoft showed the public during E3 2011. A cooperative multiplayer mode and online play were planned by the developer, but were removed due to time constraints.
Cris Velasco was the game's composer. By the time Velasco began composing its music, ZombiU was playable; its audio director sent him game footage to help him understand the type of music needed. He spent 10 days composing the music, and assembled a string quintet of two violins, a viola, and two cellos to record it. Known as the Apocalypse Ensemble, the quintet produced a "rawer sound" which makes the soundtrack menacing.
Release
ZombiU was introduced during Ubisoft's press conference at E3 2012. A webcomic, written by Johnston and illustrated by Kev Crossley, was published on November 12 to promote the game. The 14-page comic, Z-14, was the game's prequel and linked ZombiU to the initial zombie outbreak. ZombiU was released on November 18, 2012 as a Wii U launch game. At release, European players could only purchase the game from 11 p.m. to 3 a.m. due to a Nintendo-imposed restriction. According to Nintendo of Europe, a German law required that games with mature content be purchased only at night. After negotiations with the German entertainment software self-regulating organization, the restriction was lifted in March 2013. A Wii U package with the game was released on February 13, 2013. In addition to the Wii U, its accessories, a Pro Controller and ZombiU, the package contained a downloadable copy of Nintendo Land and ZombiU artwork and developer commentary.
Ubisoft executive Tony Key said that although the game was designed for the Wii U, the company might bring it to other platforms. Following several leaks, Ubisoft confirmed Zombi for PlayStation 4, Windows, and Xbox One. Developed by Straight Right (who had developed the Wii U version of Mass Effect 3), it was released on August 18, 2015. The game is similar to the original, with minor updates and enhancements. Second-screen gameplay was moved to the main screen, only appearing when required. Although the cricket bat was the only melee weapon in ZombiU, Zombi added two more. The shovel has a longer range and the ability to hit more than one zombie at a time. The second (a nailed bat) inflicts more damage, has a higher critical-hit chance and can also hit more than one enemy at a time. Zombi's flashlight can switch to a wider, further-reaching beam which uses more battery life and increases the risk of attracting zombies. The flashlight must be kept off for 30 seconds to recharge, requiring that its use be planned. Zombi has only the solo campaign, without local multiplayer or online single-player features. The boxed game was released on January 21, 2016.
Reception
ZombiU critical reception
ZombiU received "generally favorable" reviews from critics according to review aggregator website Metacritic, with scores ranging from 4/10 given by GameSpot to 92% awarded by Official Nintendo Magazine. Although Montpellier was initially disappointed by some of the critical reviews, the studio was pleased with the game's overall reception.
James Stephanie Sterling of Destructoid wrote that ZombiU's zombies were intimidating by comparison with other zombie games. Sterling called it an "oppressive" experience, since players were diverted by the GamePad while the game continued in real time. This was echoed by Rich Stanton of Eurogamer, who thought that the system made the game experience scarier. According to Stanton, it was one of the few zombie titles which lived up to the phrase "survival horror". Hollander Cooper of GamesRadar agreed, saying that ZombiU "[honed] in on the 'survival' part of the 'survival horror' genre better than any release in recent memory". Patrick Klepek of Giant Bomb wrote that the game had some very scary moments, which players who liked survival horror would enjoy. Richard Mitchell of Joystiq praised its atmosphere, which he compared favourably with Condemned: Criminal Origins. Chris Schilling of VideoGamer.com agreed, calling its atmosphere "authentic" and "rich". However, Schilling found the later stages disappointing as the game becomes less frightening with the introduction of more-powerful weapons.
Sterling wrote that the permadeath system makes the game feel scarier, saying that the emotional impact on players is more significant than the typical game over screen; the former player character could be considered a "mute monument" to player failure. According to Klepek, seeing a player character become a zombie was depressing and "soul-crushing". Arthur Gies of Polygon wrote that the system helped create an "overwhelming sense of fear and risk", and made him play the game in a more careful, tactical manner. Stanton wrote that the system often leads to "fraught backpack hunts", since players must retrieve their items from the previous player character; if they die, the items would be lost forever. According to Stanton, the system makes the game's items more meaningful and backtracking more fun. Greg Miller of IGN found the game paradoxically cool and nonsensical. Marty Sliva of 1Up.com thought the game was decent but could be improved, stating: "I really hope to see Ubisoft work on a followup that fixes some of the rough patches and delivers the survival experience which the mechanics seem to be promising. Until then, we're left with a launch title full of unique ideas, but lacking the necessary cohesion to be truly great."
Sterling called the plot "typical", writing that it was a basic backstory to complement the game's survival theme. Miller agreed, saying that ZombiU did not require a strong narrative. Stanton found the game's narrative not particularly engaging, but the environmental storytelling was "skillfully done". Tim Turi of Game Informer found its concept interesting, but was disappointed with the lack of a central character and the game's excessive backtracking. Cooper called the story "light", but too complicated by the end; he was dissatisfied with the lack of interesting characters. Gies appreciated the game world's development, praising Ubisoft for creating a believable setting; however, he and Schilling were disappointed by the story's supernatural elements.
Sterling wrote that there was too little enemy variety, which made the game's combat repetitive. He also criticized its map design, saying that in his 11-hour experience with the game, he could not find his way to objectives for at least two hours. Turi called the game's map confusing, agreeing with Sterling that players would easily get lost. He criticized the game's melee weapon as unsatisfying to use, comparing it unfavorably to Dead Island and Left 4 Dead 2. According to Turi, the melee-combat shortcomings were highlighted by the game's shortage of ammunition. Maxwell McGee of GameSpot and Miller shared similar concerns, calling the melee combat repetitive. According to McGee, the game's simplistic mission design and puzzles offer little variety. Stanton wrote that although the game's combat was simple, it developed into "something special" since players needed to choose their weapons wisely. Cooper found several flaws in the systems, such as characters not acknowledging the death of a previous player character and forcing players to replay some game segments. Klepek found the gunplay to be mediocre, but thematically correct.
Sterling found the use of the Wii U GamePad "hit-and-miss", stating that the scanning and radar functions are enjoyable. He was annoyed by the lock-picking and barricading minigames, which required players to press the touchscreen rapidly. According to Sterling, the latter two functions were pointless filler. Stanton praised the scanning system for immersing players in the game. Turi found the two-screen experience a noble effort, but was dissatisfied with its controls. According to McGee, the GamePad adds little to the game and some of its functions are better handled with a controller. Klepek called most of the GamePad features well-implemented, with the exception of the gyroscope controls. Mitchell wrote that the GamePad enhanced the overall gameplay, but disliked the screen-tapping minigame in certain GamePad features.
Sterling compared its graphics to a Wii title and noted technical flaws, including glitches and slow loading times. He and Stanton thought that some of the game's assets were used excessively. Turi severely criticized the game's lighting system, saying that it looked blurry even given its nighttime setting. Miller also noted the game's sub-par textures, and was disappointed by its lack of visual appeal. Turi and Gies noted the difficulty of dragging items with the GamePad, but Gies thought it added tension to the game.
Sterling called the multiplayer mode, King of Zombie, fun to play; playing as King Boris was more enjoyable than playing as a survivor. Turi echoed him, noting that a player should only play as a human survivor with a Pro Controller; playing with the nunchuck was "unbearable". McGee enjoyed this mode more than the single-player campaign for its action. Cooper liked the mode, but felt that the lack of online multiplayer limited its longevity. Klepek also liked the mode's concept but called the two-player mode a bit limited and not sufficiently enjoyable, a sentiment shared by Mitchell. Schilling thought the mode offered only "brief entertainment" and was an "afterthought".
Larry Frum of CNN listed ZombiU in his top 10 video games of 2012, calling the story "dark" and saying that the game "captured" his attention with "its unique style of gameplay and gritty scenarios".
Zombi critical reception
The game's Zombi version received "mixed or average" reviews, according to Metacritic. Christopher Livingston of PC Gamer noted that the PC version had many glitches and occasionally crashed. Shabana Arif of GamesRadar found its systems unrefined and repetitive, even with the new melee weapons. Disappointed with the lack of enhancement of the original game's graphics, the 30-frames-per-second cap and the way the GamePad features were carried over to other home consoles, she called the package a wasted opportunity. James Orry of VideoGamer.com also criticized the lack of visual and overall improvement and the exclusion of the multiplayer mode, but found the game "an excellent entry in the survival horror genre".
Sales and legacy
ZombiU was the 17th best-selling game in the United Kingdom in its first week of release, and was the best-selling third-party Wii U launch title. The game was unprofitable for Ubisoft, and a sequel (which would have had cooperative multiplayer and levels with multiple paths) was cancelled. ZombiU helped shape the asynchronous gameplay of Ubisoft's 2014 game, Watch Dogs; its disappointing sales prompted Ubisoft to turn Rayman Legends (an announced 2013 Wii U exclusive) into a multi-platform game to reach a larger audience.
See also
Zombi, an adventure game released as Ubisoft's first title in 1986
List of horror video games
Notes
References
External links
(archived)
Official Zombi website
2012 video games
Asymmetrical multiplayer video games
First-person shooters
2010s horror video games
Nintendo Network games
PlayStation 4 games
Multiplayer and single-player video games
Post-apocalyptic video games
Survival horror video games
Ubisoft games
Video games scored by Cris Velasco
Video games developed in France
Video games featuring female protagonists
Video games set in 2012
Video games set in London
Wii U eShop games
Wii U games
LyN games
Windows games
Xbox One games
Video games about zombies
Video games set in the United Kingdom
Straight Right games | ZombiU | Physics | 6,153 |
23,699,874 | https://en.wikipedia.org/wiki/SOFA%20Statistics | SOFA Statistics is an open-source statistical package. The name stands for Statistics Open For All. It has a graphical user interface and can connect directly to MySQL, PostgreSQL, SQLite, MS Access, and Microsoft SQL Server. Data can also be imported from CSV and tab-separated files or spreadsheets (Microsoft Excel, OpenOffice.org Calc, Gnumeric, Google Docs). The main statistical tests available are independent and paired t-tests, Wilcoxon signed ranks, Mann–Whitney U, Pearson's chi squared, Kruskal–Wallis H, one-way ANOVA, Spearman's R, and Pearson's R. Nested tables can be produced with row and column percentages, totals, standard deviation, mean, median, lower and upper quartiles, and sum.
Installation packages are available for several operating systems, such as Microsoft Windows, Ubuntu, Arch Linux, and Linux Mint. On macOS, SOFA only runs on older versions of the OS, with Leopard being the minimum version.
SOFA Statistics is written in Python, and the widget toolkit used is wxPython. The statistical analyses are based on functions available through the SciPy stats module.
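As a rough illustration of the kind of computation SOFA delegates to SciPy, the snippet below runs three of the tests listed above (an independent t-test, a Mann–Whitney U test, and Pearson's r) directly against scipy.stats. It is a minimal sketch only: the variable names and sample data are invented for the example, and SOFA's own GUI and table handling are not shown.

```python
# Illustrative only: SOFA's GUI wraps functions like these from scipy.stats.
# The sample data below is invented for the example.
from scipy import stats

group_a = [12.1, 14.3, 13.8, 15.0, 12.9, 14.7]
group_b = [11.0, 12.4, 11.8, 13.1, 12.0, 11.5]

# Independent t-test between the two groups
t_stat, t_p = stats.ttest_ind(group_a, group_b)
print(f"Independent t-test: t = {t_stat:.3f}, p = {t_p:.4f}")

# Mann-Whitney U test, the non-parametric alternative
u_stat, u_p = stats.mannwhitneyu(group_a, group_b)
print(f"Mann-Whitney U: U = {u_stat:.1f}, p = {u_p:.4f}")

# Pearson's correlation between two paired variables
x = [1, 2, 3, 4, 5, 6]
r, r_p = stats.pearsonr(x, group_a)
print(f"Pearson's r = {r:.3f}, p = {r_p:.4f}")
```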
Statistics Features - Workflows
Users are guided through the selection of appropriate basic statistical methods and through the assignment of those tests to the table columns of the data to be analyzed.
The features available within SOFA for statistical analysis are limited compared to those found in the open-source R statistical software, which has a large repository of statistics packages.
See also
Comparison of statistical packages
List of statistical packages
List of open-source software for mathematics
References
External links
SOFA Statistics Homepage
SOFA Statistics project page at Source Forge
SOFA Statistics project page at Launchpad
SOFA Statistics page at Show Me Do
Cross-platform free software
Cross-platform software
Free statistical software
Numerical software
Science software for Linux
Science software for macOS
Science software for Windows
Software that uses wxPython
Software using the GNU Affero General Public License | SOFA Statistics | Mathematics | 416 |
7,457,217 | https://en.wikipedia.org/wiki/Gallon%20%28Scots%29 | The Scots gallon () was a unit of liquid volume measurement that was in use in Scotland from at least 1661 – and possibly as early as the 15th century – until the late 19th century. It was approximately equivalent to 13.568 litres, or very roughly three times the size of the Imperial gallon that was adopted in 1824. A Scots gallon could be subdivided into eight jougs (or Scots pints, of 1696 mL each), or into sixteen chopins (of 848 mL each).
An ale or beer barrel was 12 Scots gallons (35.81 Imperial gallons [162.816 litres]).
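As a quick consistency check on the figures quoted above (taking the Scots gallon as 13.568 litres and the imperial gallon as its modern value of about 4.546 litres):

```latex
\begin{align*}
1~\text{Scots gallon} &= 8 \times 1696~\text{mL} = 16 \times 848~\text{mL} = 13\,568~\text{mL} \approx 13.568~\text{L},\\
13.568~\text{L} \div 4.546~\text{L} &\approx 2.98~\text{imperial gallons (roughly three times the imperial gallon)},\\
12 \times 13.568~\text{L} &= 162.816~\text{L} \approx 35.81~\text{imperial gallons (one ale or beer barrel)}.
\end{align*}
```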
References
See also
Obsolete Scottish units of measurement
Gill
Mutchkin
Obsolete Scottish units of measurement
Units of volume
Alcohol measurement | Gallon (Scots) | Mathematics | 150 |
36,522,829 | https://en.wikipedia.org/wiki/TauD%20protein%20domain | In molecular biology, TauD refers to a protein domain that in many enteric bacteria is used to break down taurine (2-aminoethanesulfonic acid) as a source of sulfur under stress conditions. In essence, it is a domain found in enzymes that provide bacteria with an important nutrient.
Function
This protein family consists of TauD/TfdA taurine catabolism dioxygenases. The Escherichia coli tauD gene is required for the utilization of taurine (2-aminoethanesulfonic acid) as a sulfur source and is expressed only under conditions of sulfate starvation. TauD is an alpha-ketoglutarate-dependent dioxygenase catalyzing the oxygenolytic release of sulfite from taurine. The 2,4-dichlorophenoxyacetic acid/alpha-ketoglutarate dioxygenase from Burkholderia sp. (strain RASC) also belongs to this family. TfdA from Ralstonia eutropha (Alcaligenes eutrophus) is a 2,4-D monooxygenase.
Structure
This structure has a number of alpha helices and beta sheets.
References
Protein families
Protein domains
Oxidoreductases | TauD protein domain | Chemistry,Biology | 263 |
36,155,776 | https://en.wikipedia.org/wiki/Ethyl%20thiocyanate | Ethyl thiocyanate is a chemical compound used as an agricultural insecticide.
References
Ethylthiocyanate at www.chemicalbook.com.
Thiocyanates | Ethyl thiocyanate | Chemistry | 40 |
32,781,309 | https://en.wikipedia.org/wiki/Retainage | Retainage is a portion of the agreed-upon contract price deliberately withheld until the work is complete to assure that the contractor or subcontractor will satisfy its obligations and complete a construction project. A retention is money withheld by one party in a contract to act as security against incomplete or defective works. They have their origin in the Railway Mania of the 1840s but are now common across the industry, featuring in the majority of construction contracts. A typical retention rate is 5%, of which half is released at completion and half at the end of the defects liability period (often 12 months later). There has been criticism of the practice for leading to uncertainty on payment dates, increasing tensions between parties and putting monies at risk in cases of insolvency. There have been several proposals to replace the practice with alternative systems.
History and purpose
The practice of retainage dates back to the construction of the United Kingdom railway system in the 1840s. The size of the railway project increased demand for contractors, which led to the entrance of new contractors into the labor market. These new contractors were inexperienced, unqualified and unable to successfully complete the project. Consequently, the railway companies began to withhold as much as 20% of contractors' payments to ensure performance and offset completion costs should the contractor default. The point was to withhold the contractor's profit only, not to make the contractor and its subcontractors finance the project.
Given the often large scale, complexity, cost and length of construction projects, the risk of something not going according to plan is almost certain. Accordingly, a common approach that contracting parties take in order to mitigate this risk is to include retainage provisions within their agreements. The concept of retainage is unique to the construction industry and attempts to do two things: provide an incentive to the contractor or subcontractor to complete the project and protect the owner against any liens, claims or defaults, which may surface as the project nears completion. Incidentally, owners and contractors use retainage as a source of financing for the project; contractors in turn withhold retainage from subcontractors, frequently at a greater percentage than is being withheld from them.
United Kingdom
Retentions are widely used in the British construction industry: in the majority of all contracts awarded, a sum of money is withheld as a security against poor quality products (defects) or works left incomplete. Clients withhold retention against main contractors and main contractors withhold payment against sub-contractors. Retentions typically take the form of a percentage on the contract value. The rate can vary wildly but is typically around 5%. The general state of the economy can affect the rates set: in a buoyant economy with plentiful work sub-contractors are able to pick which work they accept and therefore have potential to negotiate more favourable rates.
The chain of retention starts with the client who withholds money on the main contractor. The main contractor withholds money on sub-contractors who may also then withhold on sub-sub contractors. The retention money is typically released in two portions (known as moieties); the first being payable at completion of a project and the second at the end of the defects liability period. This period is the time during which the client is able to identify works that are defective to the contractor who must then remedy them; it is often twelve months. The use of retentions is not common to all sectors of the industry; for example lift installers have developed their own guarantee system instead.
A mobilization payment is an advance payment to a contractor at the start of a project to assist in the beginning of operations.
Impact
The use of retentions is intended to encourage efficiency and productivity. The contractor has a financial incentive to achieve completion as early as possible (to release the first moiety payment) and to minimise defects in the works (to achieve the second payment). Retentions held against sub-contractors are also a key source of cash for main contractors, who may use them to finance new projects.
However, sub-contractors often complain about the system. They sometimes lack a firm date on which retention monies will be paid, and a 2017 British government report noted that more than half of contractors had experienced late or non-payment of retention monies. Delays are reportedly longer for sub-contractors and sub-sub contractors than for the main contractor. This restricts cash flow available for the company as a going concern and for capital investment. The chasing up of payments is also resource-intensive; as such, smaller businesses are hit more severely than larger ones. Some smaller companies simply write off the retention money, increasing their prices to compensate. The practice has also been described as increasing tensions between the parties in contract.
There is no current requirement for retention monies to be ring-fenced (kept separately to general company funds and preserved from spending) and they are usually held in a client's or contractor's main bank account. This can cause problems in cases of insolvency, where the money can be lost and payments owed to the supply chain put at risk. The use of retentions (which are considered a form of stage payments) can also render construction companies unsuitable for factoring (the sale of accounts receivable).
Development in the UK
Railway construction in the 1840s saw a rapid increase in the number of contractors, often with little experience of the industry. There was a rise in the number of insolvencies and a drop in workmanship standards. Railway companies therefore began withholding a minimum of 20% of payments to contractors as a security against incomplete and defective works. This practice had spread across the industry by the mid-19th century.
The 1994 Latham Report recommended that legislation be introduced to protect retention monies held by a party, which would prevent them from being lost during a liquidation. Despite all of Latham's other payment recommendations being incorporated into the Construction Act 1998, this one was omitted. The practice was reformed somewhat by the Construction Act 2011. This made it illegal for the release of retention under one contract to be linked to that of a second. This ended the practice whereby contractors would refuse to release retention to sub-contractors until they had been paid it themselves by the client, over which the sub-contractor had no influence.
The 2018 collapse of contractor Carillion had a dramatic effect on the industry. Many of its sub-contractors lost large sums of money as £250 million in unpaid retention was lost when the business went into liquidation.
Proposed replacement
There is limited use of alternatives to retention in the British construction industry. However, there have been recent movements to try to effect change. The Department for Business, Energy and Industrial Strategy (DBEIS) commissioned research into the matter to determine the extent of the use of the practice and its effects on the industry and economy. This was published in 2017 and also identified a number of alternatives to the practice. A DBEIS public consultation was subsequently launched; this closed on 19 January 2018 but no recommendations were subsequently made for government action. A private member's bill was introduced to the House of Commons by Peter Aldous on 9 January 2018 seeking to introduce protection for retention money, but it did not proceed through Parliament.
The Build UK industry group aimed to secure abolition of retentions by 2025, following an ambition outlined by the Construction Leadership Council in 2014. Build UK put forward proposals that retentions by the main contractor on sub-contractors should be no more onerous than those imposed by the client on the main contractor. They also proposed that retentions should only apply to permanent works, as temporary works are unlikely to lead to defects. The organisation also wants small value contracts (less than £100,000) to become retention-free by 2021, as the risk to the main works is lower for these contracts.
The Construction supply chain payment charter, adopted in 2014, had a target for "ZERO retentions" by 2025 in construction contracts dated 1 January 2015 or later, along with the adoption of 30 days' standard payment terms across the construction sector. However, the charter was withdrawn on 18 January 2022 in favour of reporting regulations applicable to large businesses. The reporting regulations lapsed on 6 April 2024.
Some organisations have proposed retention deposit schemes, whereby money is deposited with a third party, although these lead to increased fees and bureaucracy and do not solve disputes between parties over when retention should be released. A mandatory retention deposit system was proposed for inclusion in the Enterprise Act 2016, but the proposed scheme was not subsequently included within the Act. Following the collapse of Carillion there have been increased calls for retention reform. The Scottish Government began a consultation on retentions in 2019. It stated that the UK was behind other countries by continuing the practice, despite the matter having been looked into several times by the UK Government. Alternatives include project bank accounts (which are used for all payments from the client and contractor), retention bonds (see below), performance bonds, escrow stakeholder accounts (monies held by a third party), parent company guarantees (guarantee of completion by the main contractor's parent organisation) or trust funds to hold retention monies.
Retention bonds
A retention bond is a form of performance bond or insurance against defects, taken out by the contractor at the request of the client, or by a subcontractor at the request of the contractor, seen as being fairer and more efficient than a cash retention. An agreement is entered into by the two parties and a third party known as a surety provider, who acts as a guarantor between the two parties. The agreement states that cash retentions will not be used and, instead, the surety provider agrees to pay up to the amount which would have been held as a cash retention if the contractor or subcontractor fail to carry out the works as contracted or to remedy any defects. Build UK and its predecessor, the National Specialist Contractors Council, have endorsed the use of retention bonds in their Fair Payment Campaign.
Contractual basis
The Joint Contracts Tribunal contracts system allowed for a reform of retentions by permitting the employer (client) to hold retention monies in trust. The 1998 revision of the contract allowed the contractor to request that the client hold the money in a separate bank account; it also permitted the use of retention bonds. The 2016 JCT contract allows for retention-free projects.
The NEC Engineering and Construction Contract, introduced in 1993, has no allowance for retentions in its core clauses. The basic contract relies on the spirit of collaboration between parties to minimise defects, but retentions can be, and often are, introduced by clients through variant clauses (so-called "x clauses"). There is an allowance for retention bonds within the fourth edition of the contract (introduced in 2017). The contract also allows for retention to be withheld only on the labour-element of any price or only to be applied on the final few payments made. The NEC system also has an option to allow the use of project bank accounts in lieu of retention.
United States
Creation and enforcement
If there is to be retainage on the construction project, it is set forth in the construction contract. Retainage provisions are applicable to subcontracts as well as prime contracts. The amount withheld from the contractor or subcontractor should be determined on a case-by-case basis by the parties negotiating the contract, usually based upon such factors as past performance and the likelihood that the contractor or subcontractor will perform well under the contract.
One can structure retainage arrangements in any number of ways. Subject to state statutory requirements, 10% is the retainage amount most often used by contracting parties. Another approach is to start off with a 10% retainage and to reduce it to 5% once the project is 50% complete. A third approach is to carve out material costs from a withholding requirement on the theory that suppliers, unlike subcontractors, may not accept retainage provisions in their purchase orders.
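As an illustration of how such a sliding scale plays out over the life of a project, the sketch below computes the amount withheld from each payment application under the second structure described above (10% retainage until the project is 50% complete, then 5%). The contract value and billing amounts are hypothetical, and real contracts spell out exactly when the reduced rate takes effect.

```python
# Hypothetical example of a 10%-then-5% retainage schedule; all figures invented.
CONTRACT_VALUE = 1_000_000  # total contract price in dollars

def retainage_rate(fraction_complete):
    """Retainage rate applied to a payment application, based on how much
    of the contract had been completed before this application."""
    return 0.10 if fraction_complete < 0.50 else 0.05

applications = [200_000] * 5   # five equal progress billings
completed = 0.0
total_withheld = 0.0

for i, billed in enumerate(applications, start=1):
    rate = retainage_rate(completed)
    withheld = billed * rate
    total_withheld += withheld
    completed += billed / CONTRACT_VALUE
    print(f"Application {i}: billed {billed:,}, "
          f"retainage {withheld:,.0f} ({rate:.0%}), paid {billed - withheld:,.0f}")

print(f"Retainage held until completion: {total_withheld:,.0f}")
```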
Retainage clauses are usually found within the contract terms outlining the procedure for submitting payment applications. A typical retainage clause parallels the following language: "Owner shall pay the amount due on the Payment Application less retainage of [a specific percentage]."
Substantially Complete
Retainage is generally due to the contractor or subcontractor once their work is complete. Disputes often arise regarding just when completion occurs - it could be "substantial completion", which is generally when the owner can occupy a structure and use it for its intended purpose; or more often, it could be once a punch list of work has been completed.
Retainage abuse
Subcontractors tend to bear the brunt of retainage provisions, especially subcontractors performing work early on in the construction process. The main reason for this, is because many contractors pass down the owner's right to withhold retainage to the subcontractor, but frequently withhold more than is being withheld from them. For example, a subcontractor performing site work may complete its work in the first few months of the construction project, but generally is not allowed to recover the amount withheld from the owner and contractor until the project is "substantially complete", which could take a few years depending on the size of the project. Coupled with a contingent payment clause, the retainage can cause significant financial distress to a subcontractor.
Another problem arises when the contractor withholds from its subcontractors at a greater percentage than the owner has withheld from them. The owner is to pay retainage to the contractor when substantial completion has occurred; however, in this abusive, over-withholding scenario, the contractor will already have been paid a portion of the subcontractors' funds, meaning that the contractor will have to fund the balance of the payment from its own cash flow. This could cause a delay in the project closeout. The contractor may feel that it is more advantageous to keep the project incomplete, never be paid its own retainage, and argue that the subcontractors are therefore not due their portion of the retainage.
Federal retainage policy
In 1974, Congress established the Office of Federal Procurement Policy to provide a uniform government-wide procurement policy. Since the mid-1970s, there has been an overall trend in the reduction of percentage withheld on federal construction projects. The current Federal Acquisition Regulation (F.A.R.) continues to support this trend. Paragraph 32.103 of the regulation states, " . . . Retainage should not be used as a substitute for good contract management, and the contracting officer should not withhold funds without cause. Determinations to retain and the specific amount to be withheld shall be made by the contracting officers on a case-by-case basis. Such decisions will be based on the contracting officer's assessment of past performance and the likelihood that such performance will continue." Currently, federal agencies such as the Department of Defense, the General Services Administration, and the US Department of Transportation have 'zero' retainage policies.
Alternatives to retainage
Several alternatives exist to standard retainage provisions that provide the same benefits and protections. For example, parties can agree to establish a trust account. A trust account provides the contractor with some control over its money, even if it is being held by the owner. In a trust account, retainage is withheld by the owner, placed in a trust account with a trustee that has a fiduciary relationship to the contractor. The trustee can invest the retainage at the contractor's direction, thereby allowing the contractor to "use" the retained funds that normally would sit idle in an escrow account.
Other alternatives to retainage are to allow the contractor to supply substitute security to the owner in the form of a performance bond, bank letter of credit, or a security of, or guaranteed by, the United States, such as bills, certificates, notes or bonds.
Other countries
Retentions are used in several other countries. They are common in China, though in some cases the moiety payments are guaranteed by the Agricultural Bank of China. They are also used in the United States where the percentage retained is typically higher at around 10%. However the release of retention is different with 50% of the withheld money often released once the works are considered to be 50% complete. Some states have taken measures to abolish or limit the use of retentions in public contracts. In the United States the use of retention bonds is more common than in the UK. Retentions are common in Qatar where the proportion retained may be up to 30% of contract value due to the large number of foreign companies that operate under limited liability law in the state. In Canada retentions are known as "holdback" payments; since 1997 all retention monies in Canada must be held in ring-fenced accounts.
Retentions are used in Australia; in New South Wales all retention monies for projects in excess of $20 million must be held in ring-fenced accounts with an authorised bank. In New Zealand all retention monies are required to be held in trust and must be in cash or other liquid assets; this requirement was introduced following the 2013 collapse of main contractor Mainzeal. However, after the 2019 collapse of Stanley Group it was discovered that retention money was not properly administered, residing in the company's main account, despite the group claiming to sub-contractors that it had been held in separate accounts, and was therefore liable to loss during the liquidation process. The retention system is not used in Germany where the works remain the property of the contractor until completion and are, therefore, liable to be withheld from the client in cases of dispute.
See also
Construction law
Contracts
Cost contingency
Punch list
References
Construction
Contract law | Retainage | Engineering | 3,590 |
4,756,896 | https://en.wikipedia.org/wiki/Scanning%20helium%20ion%20microscope | A scanning helium ion microscope (SHIM, HeIM or HIM) is an imaging technology based on a scanning helium ion beam. Similar to other focused ion beam techniques, it allows the milling and cutting of samples to be combined with their observation at sub-nanometer resolution.
In terms of imaging, SHIM has several advantages over the traditional scanning electron microscope (SEM). Owing to the very high source brightness, and the short de Broglie wavelength of the helium ions, which is inversely proportional to their momentum, it is possible to obtain qualitative data not achievable with conventional microscopes which use photons or electrons as the emitting source. As the helium ion beam interacts with the sample, it does not suffer from a large excitation volume, and hence provides sharp images with a large depth of field on a wide range of materials. Compared to an SEM, the secondary electron yield is quite high, allowing for imaging with currents as low as 1 femtoamp. The detectors provide information-rich images which offer topographic, material, crystallographic, and electrical properties of the sample. In contrast to other ion beams, there is no discernible sample damage due to the relatively light mass of the helium ion. The drawback is the cost.
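The wavelength advantage can be made concrete with the non-relativistic de Broglie relation. The comparison below assumes, purely for illustration, that the electron and the helium ion are accelerated to the same kinetic energy E; actual SEM and HIM operating energies differ, so this is an order-of-magnitude sketch rather than a description of any particular instrument.

```latex
\begin{align*}
\lambda &= \frac{h}{p} = \frac{h}{\sqrt{2mE}},\\
\frac{\lambda_{\mathrm{He}^{+}}}{\lambda_{e^{-}}} &= \sqrt{\frac{m_{e}}{m_{\mathrm{He}}}}
  \approx \sqrt{\frac{1}{7300}} \approx \frac{1}{85}.
\end{align*}
```

At equal energy, the helium ion's de Broglie wavelength is therefore roughly 85 times shorter than the electron's, which is one reason diffraction places a far weaker limit on the probe size.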
SHIMs have been commercially available since 2007, and a surface resolution of 0.24 nanometers has been demonstrated.
References
External links
Carl Zeiss SMT – Nano Technology Systems Division: ORION He-Ion microscope
Microscopy Today, Volume 14, Number 04, July 2006: An Introduction to the Helium Ion Microscope
How New Helium Ion Microscope Measures Up – ScienceDaily
Microscopes
Helium | Scanning helium ion microscope | Chemistry,Technology,Engineering | 337 |
43,239,622 | https://en.wikipedia.org/wiki/Physics%20education%20research | Physics education research (PER) is a form of discipline-based education research specifically related to the study of the teaching and learning of physics, often with the aim of improving the effectiveness of student learning. PER draws from other disciplines, such as sociology, cognitive science, education and linguistics, and complements them by reflecting the disciplinary knowledge and practices of physics. Approximately eighty-five institutions in the United States conduct research in science and physics education.
Goals
One primary goal of PER is to develop pedagogical techniques and strategies that will help students learn physics more effectively and help instructors to implement these techniques. Because even basic ideas in physics can be confusing, and because teaching through analogies can itself create scientific misconceptions, lecturing often does not erase the common misconceptions about physics that students acquire before they are taught the subject. Research often focuses on learning more about common misconceptions that students bring to the physics classroom so that techniques can be devised to help students overcome these misconceptions.
In most introductory physics courses, mechanics is usually the first area of physics that is taught. Newton's laws of motion about interactions between forces and objects are central to the study of mechanics. Many students hold the Aristotelian misconception that a net force is required to keep a body moving; instead, motion is modeled in modern physics with Newton's first law of inertia, stating that a body will keep its state of rest or movement unless a net force acts on the body. Like students who hold this misconception, Newton arrived at his three laws of motion through empirical analysis, although he did it with an extensive study of data that included astronomical observations. Students can erase such a misconception in a nearly frictionless environment, where they find that objects move at an almost constant velocity without a constant force.
Major areas
The broad goal of the PER community is to understand the processes involved in the teaching and learning of physics through rigorous scientific investigation.
According to the University of Washington PER group, one of the pioneers in the field, work within PER tends to fall within one or more of several broad descriptions, including:
Identifying student difficulties
Developing methods to address these difficulties and measure learning gains
Developing surveys to measure student performance and other characteristics
Investigating student attitudes and beliefs as relating to physics
Studying small and large group dynamics, and analyzing student patterns using framing and other new and existing epistemological methods
"An Introduction to Physics Education Research", by Robert Beichner, identifies eight trends in PER:
Conceptual understanding: Investigating what students know and how they learn it is a centerpiece of PER. Early research involved identifying and treating misconceptions about the principles of physics. The term has since evolved to "student difficulties" based on the consideration of alternative theoretical frameworks for student learning. A difficulty with a concept can be built into a correct concept; a misconception is rooted out and replaced by a correct conception. The PER group at the University of Washington specializes in research about conceptual understanding and student difficulty.
Epistemology: PER began as a trial-and-error approach to improve learning. Because of the downsides of such an approach, theoretical bases for research were developed early on, most notably through the University of Maryland. The theoretical underpinnings of PER are mostly built around Piagetian constructivism. Theories on cognition in physics learning were put forward by Redish, Hammer, Elby and Scherr, who built off of diSessa's "Knowledge in Pieces". The Resources Framework, developed from this work, builds off of research in neuroscience, sociology, linguistics, education and psychology. Additional frameworks are forthcoming, most recently the "Possibilities Framework", which builds off of deductive reasoning research started by Wason and Philip Johnson-Laird.
Problem solving: It plays an important role in the processes that advance physics research, featured in high numbers of exercises in conventional textbooks. Most research in this area rests on examining the difference between novice and expert problem solvers (freshmen and sophomores, and graduate-level and postdoctorate students, respectively). Approaches in researching problem solving have been a focus for the University of Minnesota's PER group. Recently, a paper was published in PRL Special Section: PER that identified over 30 behaviors, attitudes, and skills that are used in the solving of a typical physics problem. Greater resolution and specific attention to the details are used in the field of problem solving.
Attitudes: The University of Colorado developed an instrument that reveals student attitudes and expectations about physics as a subject and as a class. Student attitudes are often found to decline after traditional instruction, but recent work by Redish and Hammer show that this can be reversed and positive attitudinal gains can be seen if attention is paid to "explicate the epistemological elements of the implicit curriculum."
Social aspects: Research has been conducted into gender, race, and other socioeconomic issues that can influence learning in physics and other fields. Other research has investigated the impacts on learning physics of body language, group dynamics, and classroom setup.
Technology: Student response systems (clickers) are based on Eric Mazur's work in Peer Instruction. Research in PER examines the influence, applications of, and possibilities for technology in the classroom.
Instructional interventions: PER's curriculum design is based on more than two decades of research in physics education. Notable textbooks include Tutorials in Physics, Physics by Inquiry, Investigative Science Learning Environment, and Paradigms in Physics, as well as many new textbooks in introductory and junior level coursework. The Kansas State University Physics Education Research Group has developed a program, Visual Quantum Mechanics (VQM), to teach quantum mechanics to high school and college students who do not have advanced backgrounds in physics or math.
Instructional materials: For undergraduates, publishers now emphasize a PER basis for their physics textbooks as a major selling point. One of the earliest comprehensive physics textbooks to incorporate PER findings was written by Serway and Beichner. Apart from textbooks, instructional material for pre-college physics students now include PhET (Physics Education Technology) simulations. This is made possible through advances in personal computer hardware, platform-independent software such as Adobe Flash Player and Java, and more recently HTML5, CSS3 and JavaScript. According to Wieman, PhET simulations offer a direct and powerful tool for probing student thinking and learning.
Journal association
Physics education research papers in the United States are primarily published in four venues. Papers submitted to the American Journal of Physics: Physics Education Research Section (PERS) are aimed mostly at consumers of physics education research. The Journal of the Learning Sciences (JLS) publishes papers that concern real-life or non-laboratory environments, often in the context of technology, and are about learning, not teaching. Meanwhile, papers at Physical Review Special Topics: Physics Education Research (PRST:PER) are aimed at those who conduct research on PER rather than at consumers. Physics Education Research Conference Proceedings (PERC) is aimed at a mix of consumers and researchers. The latter provides a snapshot of the field and as such is open to preliminary results and research in progress, as well as papers that would simply be thought-provoking to the PER community. Other journals include Physics Education (UK), the European Journal of Physics (UK), and The Physics Teacher. Leon Hsu and others published an article about publishing and refereeing papers in physics education research in 2007.
See also
Teaching quantum mechanics
References
Physics education
Educational research | Physics education research | Physics | 1,532 |
76,092,414 | https://en.wikipedia.org/wiki/Amauroderma%20fuscatum | Amauroderma fuscatum is a tough woody mushroom in the family Ganodermataceae. It is a polypore fungus.
References
fuscatum
Fungi described in 1969
Fungi of Africa
Fungus species
Taxa named by Curtis Gates Lloyd | Amauroderma fuscatum | Biology | 48 |
77,420,715 | https://en.wikipedia.org/wiki/NGC%205377 | NGC 5377 is an intermediate barred spiral galaxy located in the constellation Canes Venatici. Its speed relative to the cosmic microwave background is 1,951 ± 11 km/s, which corresponds to a Hubble distance of 28.8 ± 2.0 Mpc (∼93.9 million ly). NGC 5377 was discovered by German-British astronomer William Herschel in 1787.
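The quoted distance follows directly from Hubble's law; taking a Hubble constant of roughly 67.8 km/s/Mpc (the value assumed here purely to illustrate the conversion) reproduces the figure:

```latex
% Hubble's law; H0 is assumed to be about 67.8 km/s/Mpc for this illustration
\[
d \approx \frac{v}{H_0} = \frac{1951~\mathrm{km\,s^{-1}}}{67.8~\mathrm{km\,s^{-1}\,Mpc^{-1}}} \approx 28.8~\mathrm{Mpc} \approx 93.9~\text{million light-years}.
\]
```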
NGC 5377 was used by Gérard de Vaucouleurs as an example of a galaxy of morphological type SAb in his galaxy atlas.
The luminosity class of NGC 5377 is I and it has a broad HI line. NGC 5377 also has an active galactic nucleus.
To date, 17 non-redshift measurements yield a distance of 25.918 ± 5.770 Mpc (∼84.5 million ly), which is within the Hubble distance range. Note, however, that the NASA/IPAC database calculates a galaxy's diameter from the average of independent measurements when they exist; consequently, the diameter of NGC 5377 would be approximately 37.7 kpc (∼123,000 ly) if the Hubble distance were used to calculate it.
Nuclear disk
With observations from the Hubble Space Telescope, a star-forming disk was observed around the core of NGC 5377. The size of its semi-major axis is estimated at 790 pc (~2,575 light years).
Supermassive black hole
According to a study based on near-infrared K-band luminosity measurements of the nucleus of NGC 5377, a supermassive black hole with an apparent mass of approximately 10^7.8 M☉ (63 million solar masses) exists within the core of the galaxy.
Supernova
Supernova SN 1992H (type II, mag. 15) was discovered on February 11, 1992, by William R. Wren of the McDonald Observatory at the University of Texas at Austin.
NGC 5448 group
NGC 5377 is a member of the NGC 5448 Group according to A.M. Garcia. The group has nine galaxies, including NGC 5425, NGC 5448, NGC 5481, NGC 5500, NGC 5520, UGC 9056 and UGC 9083.
See also
List of NGC objects (5001–6000)
External links
NGC 5377 at NASA/IPAC
NGC 5377 at SIMBAD
NGC 5377 at LEDA
NGC 5377 (DSS2) at WikiSky
NGC 5377 (SDSS) at WikiSky
NGC 5377 (GALEX) at WikiSky
References
5377
Canes Venatici
Discoveries by William Herschel
Barred spiral galaxies
049563
08863
63,771,142 | https://en.wikipedia.org/wiki/Pleolipoviridae | Pleolipoviridae is a family of DNA viruses that infect archaea.
Taxonomy
The following genera are recognized:
Alphapleolipovirus
Betapleolipovirus
Gammapleolipovirus
References
Monodnaviria
DNA viruses | Pleolipoviridae | Biology | 49 |
25,399,461 | https://en.wikipedia.org/wiki/C2H6N2O | The molecular formula C2H6N2O (molar mass: 74.08 g/mol, exact mass: 74.0480 u) may refer to:
Azoxymethane (AOM)
Glycinamide
N-Nitrosodimethylamine (NDMA), or DMN | C2H6N2O | Chemistry | 83 |
1,359,001 | https://en.wikipedia.org/wiki/Phidget | A phidget is a physical representation or implementation of a GUI widget. For example, an on-screen dial widget could be implemented physically as a knob.
Phidgets are a system of low-cost electronic components and sensors that are controlled by a personal computer. Using the Universal Serial Bus (USB) as the basis for all phidgets, the complexity is managed behind an Application Programming Interface (API). Applications can be developed in Mac OS X, Linux, Windows CE and Windows operating systems.
Their use is primarily focused on exploring alternative physical computer-interaction systems, but they have most notably been adopted by robotics enthusiasts, as they greatly simplify PC-robot interaction. Phidgets are an attempt to build a physical analogue to software widgets, allowing the construction of complex physical systems out of simpler components. Phidgets are designed and produced by Phidgets Inc.
Phidget
A phidget (physical widget) is attached to a host computer via USB. There are various phidgets available, each having a counterpart class in the phidget API. As each phidget is attached to the host computer, it is made available for control through the API, where its state can be accessed and set.
Phidgets arose out of a research project in 2001 directed by Saul Greenberg at the Department of Computer Science, University of Calgary.
Phidget API
Phidgets can be programmed using a variety of software and programming languages, ranging from Java to Microsoft Excel.
Examples of programming languages are:
Adobe Director, AutoIt, C#, C/C++, Cocoa, Delphi, Flash AS3, Flex AS3, Java, LabVIEW, MATLAB, Max/MSP, Microsoft Robotics Studio 1.5, Python Module (version: 2.1.6.20100317), REALBasic, Visual Basic .NET, Visual Basic 6.0, Visual Basic for Applications, Visual Basic Script, Visual C/C++/Borland and FlowStone.
The phidget API is what allows systems to access phidget devices in a high-level manner. The API allows devices to be managed as they are attached, events to be subscribed to, and the state of the phidgets to be accessed. The core API was originally written in C and has been extended to work in numerous languages, including .NET and Java.
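As a minimal sketch of what event-driven access through the API looks like, the snippet below uses the current Phidget22 Python bindings; this is an assumption made for illustration, since the article describes the earlier C-based API and exact class and method names vary between library generations. It opens a voltage input channel, prints readings as they arrive, and requires both the library and an attached Phidget device to run.

```python
# Sketch using the Phidget22 Python bindings (assumed installed, e.g. `pip install Phidget22`).
# Requires a Phidget voltage-input device attached over USB.
import time
from Phidget22.Devices.VoltageInput import VoltageInput

def on_voltage_change(channel, voltage):
    # Called by the library each time the sensor reports a new value.
    print(f"Sensor voltage: {voltage:.3f} V")

ch = VoltageInput()
ch.setOnVoltageChangeHandler(on_voltage_change)
ch.openWaitForAttachment(5000)   # wait up to 5 s for the device to attach
time.sleep(10)                   # stream readings for ten seconds
ch.close()
```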
Examples of Phidgets
Servo – Allows control of up to 4 servo motors. Each servo can be addressed individually where it can have its position read and set.
PhidgetAccelerometer – The accelerometer senses acceleration in 2 and 3 dimensions.
TextLCD – A 20 character * 2 line LCD display, acting as an alternative display mechanism in a phidget project.
InterfaceKit – Allows input/output interface to analog and digital sensors and switches.
References
Robotics projects
User interfaces | Phidget | Technology | 568 |
61,006,900 | https://en.wikipedia.org/wiki/Tip%20dating | Tip dating is a technique used in molecular dating that allows the inference of time-calibrated phylogenetic trees. Its defining feature is that it uses the ages of the samples to provide time information for the analysis, in contrast with traditional 'node dating' methods that require age constraints to be applied to the internal nodes of the evolutionary tree.
In tip dating, morphological data and molecular data are typically analysed together to estimate the evolutionary relationships (tree topology) and the divergence times among lineages (node times); this approach is also known as 'total-evidence dating'. However, tip dating can also be used to analyse data sets that only comprise morphological characters or that only comprise molecular characters (e.g., data sets that include samples of ancient DNA or of serially sampled viruses).
Tip dating has been implemented in Bayesian phylogenetic software and typically draws on the fossilised birth-death model for evolution. This is a model of diversification that allows speciation, extinction, and sampling of fossil and extant taxa.
This promising method is not yet fully mature, and there are a number of possible biases or undesirable behaviours that must be taken into account when interpreting its results.
References
Phylogenetics | Tip dating | Biology | 249 |
23,686,829 | https://en.wikipedia.org/wiki/Nils%20Heribert-Nilsson | Nils Heribert-Nilsson (26 May 1883 in Skivarp, Scania – 3 August 1955) was a Swedish botanist and geneticist.
Heribert-Nilsson received his Ph.D. at Lund University in 1915 with the thesis Die Spaltungserscheinungen der Oenothera lamarckiana. From 1934 to 1948 he was professor of botany, in particular systematics, morphology and plant geography, at Lund University.
Heribert-Nilsson was active in plant breeding. His most important research concerned Salix and its taxonomy, which is complicated by frequent hybridization. Among his research on Salix was hybridization studies on Salix viminalis and Salix caprea.
In 1943 he was elected a member of the Royal Swedish Academy of Sciences.
Emication
In 1953, Heribert-Nilsson published his most voluminous work Synthetische Artbildung ("Synthetic speciation"). In a review for Science, Joel Hedgpeth summarizes the thesis of the "elegantly printed two-volume opus" as follows:
The concept of evolution as a continuously flowing process can be proved only on Lamarckian lines, since "evolution and Lamarckism are inseparable because they include the same fundamental ideas." There is no proof from the data of genetic recombinations or mutations to support the generally accepted concept of evolution; therefore, evolution is not occurring at this time. Nor does it seem to have occurred in the past, since the fossil record is the result of piling up and preservation of world biota during the periods when the nearness of the moon induced tremendous tidal action (the "Tethys sea") and freezing at high latitudes because of the pulling of air toward the equator hastened such preservation. During these revolutionary periods there was resynthesis of the entire world biota by gene material or gametes along the same basic lines (hence, there is no point to phylogenies, since the similarities of organic life are due to the synthetic activity of similar "gametes"); this process is termed "emication".
Oblivious to continental drift (not a commonly accepted theory at the time), Heribert-Nilsson invokes tremendous tsunamis to account for the fact that many fossil floras, such as that of the London Clay, consist of species whose modern relatives live in tropical countries far removed from the site of deposition, as G. Ledyard Stebbins writes in an article for The Quarterly Review of Biology in 1955.
According to Stebbins Heribert-Nilsson's final line of "evidence" against evolution consists of an attempt to criticise certain basic principles of genetics, particularly the linear order of the genes on the chromosomes.
Selected publications
Die Spaltungserscheinungen der Oenothera lamarckiana, Ph.D. Thesis, 1915.
Experimentelle Studien über Variabilität, Spaltung, Artbildung und Evolution in der Gattung Salix, 1918.
Synthetische Bastardierungsversuche in der Gattung Salix, 1930.
Linné, Darwin, Mendel: trenne biografiska skisser ("Linnaeus, Darwin, Mendel: three biographic sketches"), popular science, 1930.
Der Entwicklungsgedanke und die moderne Biologie ("The development thought and modern biology"), 1941.
Synthetische Artbildung: Grundlinien einer exakten Biologie, 2 vols., 1953.
References
1883 births
1955 deaths
Lamarckism
20th-century Swedish botanists
Swedish geneticists
Academic staff of Lund University
Members of the Royal Swedish Academy of Sciences | Nils Heribert-Nilsson | Biology | 757 |
78,908,504 | https://en.wikipedia.org/wiki/LMC%20N79 | LMC N79 (or just N79) is an emission nebula in the Large Magellanic Cloud. The nebula is part of the catalog of H-alpha stars and nebulae by Karl G. Henize, published in 1956. It is composed of the smaller nebulae N79A to N79E. A CO survey, however, showed that the nebula is larger and contains N79-S, N79-W and N79-E. These nebulae were described by Henize under other names, with N79-S being the original N79 nebula, N79-W being N77 and N79-E being N83.
Super star cluster
The central nebula N79-S contains the super star cluster (SSC) H72.97-69.39, also called HSO BMHERICC J072.9711-69.3911. This SSC was first suspected to exist in N79 in 2017 from Spitzer and Herschel observations. The SSC was observed with ALMA. This showed that the SSC is at the center of two colliding filaments. ALMA also showed bipolar outflows that are 65,000 years old and an HII region associated with the SSC. The stellar content was first studied with Gemini in 2021. At that time it was estimated that the SSC contains stars with a total mass between 10,000 and 100,000 solar masses. Observations with JWST confirmed H72.97-69.39 as an SSC. Researchers discovered five massive stars in the center of the SSC with masses ranging between 20 and 40 solar masses. The youngest massive young stellar object (YSO) of H72.97-69.39 is called Y3 and is 10,000 years old. The central ionizing source is Y4, which is the most massive of the YSOs with a mass of around 40 solar masses. With MIRI the researchers identified 102 embedded YSOs in total. Yet-to-be-published work with NIRCam detected 1550 young stars in N79.
Gallery
See also
List of most massive stars
NGC 2070 with central condensation R136 is another SSC in the Large Magellanic Cloud
Milky Way SSCs:
Westerlund 1
NGC 3603
References
Emission nebulae
Large Magellanic Cloud
Dorado
H II regions
Star-forming regions
Astronomical objects discovered in 1956
Super star clusters | LMC N79 | Astronomy | 505 |
3,673,376 | https://en.wikipedia.org/wiki/List%20of%20Python%20software | The Python programming language is actively used by many people, both in industry and academia, for a wide variety of purposes.
Integrated Development Environments (IDEs) for Python
Atom, an open-source cross-platform IDE offering autocomplete, help, and other Python features through package extensions.
Codelobster, a cross-platform IDE for various languages, including Python.
EasyEclipse, an open source IDE for Python and other languages.
Eclipse, with the PyDev plug-in. Eclipse supports many other languages as well.
Emacs, with the built-in python-mode.
Eric, an IDE for Python and Ruby
Geany, IDE for Python development and other languages.
IDLE, a simple IDE bundled with the default implementation of the language.
Jupyter Notebook, an IDE that supports markdown, Python, Julia, R and several other languages.
Komodo IDE, an IDE for Python, Perl, PHP and Ruby.
NetBeans, written in Java, runs anywhere a JVM is installed.
Ninja-IDE, free software written in Python and Qt; the name stands for "Ninja-IDE Is Not Just Another IDE".
PyCharm, an IDE for Python development, available in both proprietary and open-source editions.
PythonAnywhere, an online IDE and Web hosting service.
Python Tools for Visual Studio, Free and open-source plug-in for Visual Studio.
Spyder, IDE for scientific programming.
Vim, with "lang#python" layer enabled.
Visual Studio Code, an Open Source IDE for various languages, including Python.
Wing IDE, a cross-platform proprietary IDE for Python with some free versions/licenses.
Replit, an online IDE that supports multiple languages.
Unit testing frameworks
Python package managers and Python distributions
Anaconda, Python distribution with conda package manager
Enthought, Enthought Canopy Python with Python package manager
pip, package management system used to install and manage software written in Python
Applications
A-A-P, a tool used to download, build and install software via Makefile-like "recipes"
Anaconda (installer), an open-source system installer for Linux distributions primarily used in Fedora Linux, CentOS, and Red Hat Enterprise Linux.
Anki, a spaced repetition flashcard program
Ansible, a configuration management engine for computers by combining multi-node software deployment and ad hoc task execution
Bazaar, a free distributed revision control system
BitBake, a make-like build tool with the special focus of distributions and packages for embedded Linux cross compilation
BitTorrent, original client, along with several derivatives
Buildbot, a continuous integration system
Buildout, a software build tool, primarily used to download and set up development or deployment software dependencies
Calibre, an open source e-book management tool
Celery, an asynchronous task queue/job queue based on distributed message passing
Chandler, a personal information manager including calendar, email, tasks and notes support that is not currently under development
Cinema 4D, a 3D art and animation program for creating intros and 3-Dimensional text. Has a built in Python scripting console and engine.
Conch, implementation of the Secure Shell (SSH) protocol with Twisted
Deluge, a BitTorrent client for GNOME
Dropbox, a web-based file hosting service
Exaile, an open source audio player
Gajim, an instant messaging client for the XMPP protocol
GlobaLeaks, an open-source whistleblowing framework
GNOME Soundconverter, a program for converting sound files to various formats and qualities (wrapper around GStreamer).
Gramps, an open source genealogy software
Gunicorn, a pre-fork web server for WSGI applications
GYP (Generate Your Projects), a build automation tool (similar to CMake and Premake) designed to generate native IDE project files (e.g., Visual Studio, Xcode, etc.) from a single configuration
Image Packaging System (IPS), an advanced, cross-platform package management system primarily used in Solaris and OpenSolaris/illumos derivatives
Juice, a popular podcast downloader
Mercurial, a cross-platform distributed source control management tool
Miro, a cross-platform internet television application
Morpheus, a file-sharing client/server software operated by the company StreamCast
MusicBrainz Picard, a cross-platform MusicBrainz tag editor
Nicotine, a PyGTK Soulseek client
OpenLP, lyrics projection software
OpenShot Video Editor
OpenStack, a cloud computing IaaS platform
Pip, a package manager used to install and manage Python software packages such as those from the Python Package Index (PyPI) software repository
PiTiVi, a non-linear video editor
Portage, the heart of Gentoo Linux, an advanced package management system based on the BSD-style ports system
Pungi (software), an open-source distribution compose tool for orchestrating the creation of YUM and system image repositories
Pychess, a cross-platform computer chess program
Quake Army Knife, an environment for developing 3D maps for games based on the Quake engine
Quod Libet, a cross-platform free and open source music player, tag editor and library organizer
Resolver One, a spreadsheet
SageMath, a combination of more than 20 main opensource math packages and provides easy to use web interface with the help of Python
Salt, a configuration management and remote execution engine
SCons, a tool for building software
Shinken, a computer system and network monitoring software application compatible with Nagios
TouchDesigner, a node based visual programming language for real time interactive multimedia content
Tryton, a three-tier high-level general purpose computer application platform
Ubuntu Software Center, a graphical package manager, was installed by default in Ubuntu 9.10, and stopped being included in Ubuntu releases starting with the Ubuntu 16.04 release.
Wicd, a network manager for Linux
YUM, a package management utility for RPM-compatible Linux operating systems
Waf, a build automation tool designed to assist in the automatic compilation and installation of computer software
Xpra, a tool which runs X clients, typically on a remote host, and directs their display to the local machine without losing any state
Web applications
Allura, an ASF software forge for managing source code repositories, bug reports, discussions, wiki pages, blogs and more for multiple projects
Bloodhound, an ASF project management and bug tracking system
ERP5, a powerful open source ERP / CRM used in Aerospace, Apparel, Banking and for e-government
ERPNext, an open source ERP / CRM
FirstVoices, an open source language revitalization platform
Kallithea, a source code management system
Mailman, one of the more popular packages for running email mailing lists
MakeHuman, free software for creating realistic 3D humans.
MoinMoin, a wiki engine
Odoo (formerly OpenERP), business management software
Planet, a feed aggregator
Plone, an open source content management system
Roundup, a bug tracking system
Tor2web, an HTTP proxy for Tor Hidden Services (HS)
Trac, web-based bug/issue tracking database, wiki, and version control front-end
ViewVC, a web-based interface for browsing CVS and SVN repositories
Video games
Battlefield 2 uses Python for all of its add-ons and a lot of its functionality.
Bridge Commander
Disney's Toontown Online is written in Python and uses Panda3D for graphics.
Doki Doki Literature Club!, a psychological horror visual novel using the Ren'Py engine
Eve Online uses Stackless Python.
Frets on Fire is written in Python and uses Pygame
Mount & Blade is written in Python.
Pirates of the Caribbean Online is written in Python and uses Panda3D for graphics.
SpongeBob SquarePants: Revenge of the Flying Dutchman uses Python as a scripting language.
The Sims 4 uses Python
The Temple of Elemental Evil, a computer role-playing game based on the classic Greyhawk Dungeons & Dragons campaign setting
Unity of Command (video game) is an operational-level wargame about the 1942–43 Stalingrad Campaign on the Eastern Front.
Vampire: The Masquerade – Bloodlines, a computer role-playing game based on the World of Darkness campaign setting
Vega Strike, an open source space simulator, uses Python for internal scripting
World of Tanks uses Python for most of its tasks.
Web frameworks
BlueBream, a rewrite by the Zope developers of the Zope 2 web application server
CherryPy, an object-oriented web application server and framework
CubicWeb, a web framework that targets large-scale semantic web and linked open data applications and international corporations
Django, an MVT (model, view, template) web framework
Flask, a modern, lightweight, well-documented microframework based on Werkzeug and Jinja 2 (a minimal example follows this list)
Google App Engine, a platform for developing and hosting web applications in Google-managed data centers, including Python.
Grok, a web framework based on Zope Toolkit technology
Jam.py (web framework), a "full stack" WSGI rapid application development framework
Nevow, a web application framework originally developed by the company Divmod
Pylons, a lightweight web framework emphasizing flexibility and rapid development
Pyramid, a minimalistic web framework inspired by Zope, Pylons and Django
Python Paste, a set of utilities for web development that has been described as "a framework for web frameworks"
Quixote, a framework for developing Web applications in Python
RapidSMS, a web framework which extends the logic and capabilities of Django to communicate with SMS messages
Spyce, a technology to embed Python code into webpages
TACTIC, a web-based smart process application and digital asset management system
Tornado, a lightweight non-blocking server and framework
TurboGears, a web framework combining SQLObject/SQLAlchemy, Kid/Genshi, and CherryPy/Pylons
web2py, a full-stack enterprise web application framework, following the MVC design
Zope 2, an application server, commonly used to build content management systems
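To give a flavour of the microframework style shared by several projects in this list, the following is a minimal, hypothetical Flask application; the route and message are invented for illustration:

    from flask import Flask

    app = Flask(__name__)

    @app.route("/")
    def index():
        # Return a plain-text greeting for the site root
        return "Hello from a minimal Flask app"

    if __name__ == "__main__":
        # Start the built-in development server (not intended for production use)
        app.run(debug=True)

Larger frameworks such as Django or Pyramid add more structure (models, templates, configuration), but the basic idea of binding URLs to Python callables is common to most of the frameworks above.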
Graphics frameworks
Pygame, Python bindings for SDL
Panda3D, a 3D game engine for Python
Python Imaging Library, a module for working with images
Python-Ogre, a Python Language binding for the OGRE 3D engine
UI frameworks
appJar, cross-platform, open source GUI library for Python. Provides easy wrapper functions around most of Tkinter with extra functionality built in.
Kivy, open source Python library for developing multitouch application software with a natural user interface (NUI).
PyGTK, a popular cross-platform GUI library based on GTK+; furthermore, other GNOME libraries also have bindings for Python
PyQt, another cross-platform GUI library based on Qt; as above, KDE libraries also have bindings
PySide, an alternative to the PyQt library, released under the BSD-style licence
Tkinter, Python's de facto standard GUI toolkit; it is shipped with most versions of Python, is used by IDLE, and is based on the Tcl/Tk toolkit (see the sketch after this list)
wxPython, a port of wxWidgets and a cross-platform GUI library for Python
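As a minimal sketch of the standard-library approach mentioned above, the following Tkinter snippet opens a window with a single button; the window title and label are arbitrary:

    import tkinter as tk

    # Create the main application window
    root = tk.Tk()
    root.title("Tkinter example")

    # A button that closes the window when clicked
    button = tk.Button(root, text="Quit", command=root.destroy)
    button.pack(padx=20, pady=20)

    # Enter the event loop; this call returns when the window is closed
    root.mainloop()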
Scientific packages
Astropy, a library of Python tools for astronomy and astrophysics.
Biopython, a Python molecular biology suite
Gensim, a library for natural language processing, including unsupervised topic modeling and information retrieval
graph-tool, a Python module for manipulation and statistical analysis of graphs.
Natural Language Toolkit, or NLTK, a suite of libraries and programs for symbolic and statistical natural language processing (NLP) for English
Orange, an open-source visual programming tool featuring interactive data visualization and methods for statistical data analysis, data mining, and machine learning.
NetworkX, a package for the creation, manipulation, and study of complex networks.
SciPy, collection of packages for mathematics, science, and engineering
scikit-learn, a library for machine learning (a short example follows this list).
TomoPy, a package for tomographic data processing and image reconstruction
Veusz, a scientific plotting package
VisTrails, a scientific workflow and provenance management software with visual programming interface and integrated visualization (via Matplotlib, VTK).
Apache Singa, a library for deep learning.
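As an indicative sketch of the scikit-learn workflow mentioned above, with the dataset and model choice being arbitrary:

    from sklearn.datasets import load_iris
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    # Load a small built-in dataset and hold out part of it for evaluation
    X, y = load_iris(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # Fit a simple classifier and report its accuracy on the held-out data
    model = LogisticRegression(max_iter=1000)
    model.fit(X_train, y_train)
    print("test accuracy:", model.score(X_test, y_test))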
Mathematical libraries
CuPy, a library for GPU-accelerated computing
Dask, a library for parallel computing
Mathics, an open-source implementation of the Mathematica programming language
Matplotlib, providing MATLAB-like plotting and mathematical functions (using NumPy).
NumPy, a language extension that adds support for large, fast, multi-dimensional arrays and matrices (a brief usage sketch combining it with Matplotlib follows this list)
Plotly is a scientific plotting library for creating browser-based graphs.
SageMath is a large mathematical software application which integrates the work of nearly 100 free software projects.
SymPy, a symbolic mathematical calculations package
PyMC, a Python module containing Bayesian statistical models and fitting algorithms, including Markov chain Monte Carlo.
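As a brief sketch of how two of the libraries above are commonly combined, the following computes an array with NumPy and plots it with Matplotlib; the function being plotted is arbitrary:

    import numpy as np
    import matplotlib.pyplot as plt

    # 200 evenly spaced sample points between 0 and 2*pi
    x = np.linspace(0.0, 2.0 * np.pi, 200)
    y = np.sin(x)  # element-wise sine over the whole array

    plt.plot(x, y, label="sin(x)")
    plt.xlabel("x")
    plt.ylabel("sin(x)")
    plt.legend()
    plt.show()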
Numerical libraries
Additional development packages
Beautiful Soup, a package for parsing HTML and XML documents (a brief usage sketch follows this list)
Cheetah, a Python-powered template engine and code-generation tool
Construct, a python library for the declarative construction and deconstruction of data structures
Genshi, a template engine for XML-based vocabularies
IPython, a development shell both written in and designed for Python
Jinja, a Python-powered template engine, inspired by Django's template engine
Kid, simple template engine for XML-based vocabularies
Meson build system, a software tool for automating the building (compiling) of software
mod_python, an Apache module allowing direct integration of Python scripts with the Apache web server
PyObjC, a Python to Objective-C bridge that allows writing OS X software in Python
Robot Framework, a generic test automation framework for acceptance testing and acceptance test-driven development (ATDD)
Setuptools, a package development process library designed to facilitate packaging Python projects by enhancing the Python distutils (distribution utilities) standard library.
Sphinx, which converts reStructuredText files into HTML websites and other formats including PDF, EPub and Man pages
SQLAlchemy, database backend and ORM
SQLObject, an ORM for providing an object interface to a database
Storm, an ORM from Canonical
Twisted, a networking framework for Python
VPython, the Python programming language plus a 3D graphics module called Visual
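As a minimal sketch of the kind of parsing Beautiful Soup is used for, with the HTML string invented for illustration:

    from bs4 import BeautifulSoup

    html = "<html><body><p class='intro'>Hello</p><a href='https://example.org'>link</a></body></html>"
    soup = BeautifulSoup(html, "html.parser")

    # Extract the text of the first paragraph and every hyperlink target
    print(soup.p.get_text())                        # -> Hello
    print([a["href"] for a in soup.find_all("a")])  # -> ['https://example.org']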
Embedded as a scripting language
Python is, or can be used as the scripting language in these notable software products:
Abaqus (Finite Element Software)
ADvantage Framework
Amarok
ArcGIS, a prominent GIS platform, allows extensive modelling using Python
Autodesk Maya, professional 3D modeler allows Python scripting as an alternative to MEL as of version 8.5
Autodesk MotionBuilder
Autodesk Softimage (formerly Softimage|XSI)
BioNumerics a bioinformatics software suite for the management, storage and (statistical) analysis of all types of biological data.
Blender
Boxee, a cross-platform home theater PC software
Cinema 4D
Civilization IV has a map editor that supports Python.
Corel Paint Shop Pro
Claws Mail with Python plugin
DSHub
ERDAS Imagine
FL Studio, a Digital audio workstation, uses Python to support MIDI Controller integration, as well as scripting within its piano roll and Edison audio editor.
FreeCAD
gedit
GIMP
GNAT, the GNAT programming tool chain (Ada language implementation in GNU GCC); Python is used in GNATcoll reusable components for applications (with or without PyGTK) and as a scripting language for commands in the GPS programming environment
Houdini, a highly evolved 3D animation package, fully extensible using Python
Inkscape, a free vector graphics editor
Krita, a free raster graphics editor for digital painting
MeVisLab, a medical image processing and visualization software, uses Python for network scripting, macro modules, and application building
Modo
Micromine
Minecraft: Pi Edition (game)
MSC.Software's CAE packages: Adams, Mentat, SimXpert
MySQL Workbench, a visual database design tool
Notepad++ has a plugin named PythonScript that allows scripting Notepad++ in Python
Nuke (compositing for visual effects)
ParaView, an opensource scientific visualization software
Poser, a 3D rendering and animation computer program that uses for scripting a special dialect of Python, called PoserPython
PTV AG products for traffic and transportation analysis, including PTV VISSIM
PyMOL, a popular molecular viewer that embeds Python for scripting and integration
OriginPro, a commercial graphing and analysis software, provides a Python environment for both embedded and external access
QGIS uses Python for scripting and plugin-development
Rhinoceros 3D version 5.0 and its visual-scripting language Grasshopper uses IronPython
Rhythmbox
Scribus
3DSlicer, medical image visualisation and analysis software. Python is available for algorithm implementation, analysis pipelines, and GUI creation.
SPSS statistical software SPSS Programmability Extension allows users to extend the SPSS command syntax language with Python
SublimeText
Totem, a media player for the GNOME desktop environment
Vim
VisIt
WeeChat, a console IRC client
Commercial uses
CCP Games uses Stackless Python in both its server-side and client-side applications for its MMO Eve Online.
Instagram's backend is written in Python.
NASA is using Python to implement a CAD/CAE/PDM repository and model management, integration, and transformation system which will be the core infrastructure for its next-generation collaborative engineering environment. It is also the development language for OpenMDAO, a framework developed by NASA for solving multidisciplinary design optimization problems.
"Python has been an important part of Google since the beginning, and remains so as the system grows and evolves. Today dozens of Google engineers use Python."
Reddit was originally written in Common Lisp, but was rewritten in Python in 2005
Yahoo! Groups uses Python "to maintain its discussion groups"
YouTube uses Python "to produce maintainable features in record times, with a minimum of developers"
Enthought uses Python as the main language for many custom applications in Geophysics, Financial applications, Astrophysics, simulations for consumer product companies, ...
Rosneft uses Python as one of the main languages for its geoengineering applications development. RN-GRID, a hydraulic fracturing simulation software, has a graphical user interface written entirely in Python.
Python implementations
Implementations of Python include:
CLPython – Implementation, written in Common Lisp
CPython – The reference implementation, written in C11. Some notable distributions include:
ActivePython – Distribution with more than 300 included packages
Intel Distribution for Python – High performance distribution with conda and pip package managers
PSF Python – Reference distribution that includes only selected standard libraries
Cython – programming language to simplify writing C and C++ extension modules for the CPython Python runtime.
IronPython – Python for CLI platforms (including .NET and Mono)
Jython – Python for Java platforms
MicroPython – Python 3 implementation for microcontroller platforms
Nuitka – a source-to-source compiler which compiles Python code to C/C++ executables, or source code.
Numba – NumPy aware LLVM-based JIT compiler
Pyjs – a framework (based on Google Web Toolkit (GWT) concept) for developing client-side Python-based web applications, including a stand-alone Python-to-JavaScript compiler, an Ajax framework and widget toolkit
PyPy – Python (originally) coded in Python, used with RPython, a restricted subset of Python that is amenable to static analysis and thus a JIT.
Shed Skin – a source-to-source compiler from Python to C++
Stackless Python – CPython with coroutines
Historic Python implementations include:
Parrot – Virtual machine being developed mainly as the runtime for Raku, and intended to support dynamic languages like Python, Ruby, Tcl, etc.
Psyco – specialized JIT compiler project that has mostly been eclipsed by PyPy
Pyrex – Python-like Python module development project that has mostly been eclipsed by Cython
Python for S60 – CPython port to the S60 platform
Unladen Swallow – performance-orientated implementation based on CPython which natively executed its bytecode via an LLVM-based JIT compiler. Funded by Google, stopped circa 2011
References
External links
Python Package Index (formerly the Python Cheese Shop) is the official directory of Python software libraries and modules
Useful Modules in the Python.org wiki
Organizations Using Python – a list of projects that make use of Python
Python.org editors – Multi-platform table of various Python editors
Python (programming language)
Python | List of Python software | Technology | 4,403 |
565,896 | https://en.wikipedia.org/wiki/Exploding%20head%20syndrome | Exploding head syndrome (EHS) is an abnormal sensory perception during sleep in which a person experiences auditory hallucinations that are loud and of short duration when falling asleep or waking up. The noise may be frightening, typically occurs only occasionally, and is not a serious health concern. People may also experience a flash of light. Pain is typically absent.
The cause is unknown. Potential organic explanations that have been investigated but ruled out include ear problems, temporal lobe seizure, nerve dysfunction, or specific genetic changes. Potential risk factors include psychological stress. It is classified as a sleep disorder or headache disorder. People often go undiagnosed.
There is no high-quality evidence to support treatment. Reassurance may be sufficient. Clomipramine and calcium channel blockers have been tried. While the frequency of the condition is not well studied, some have estimated that it occurs in about 10% of people. Women are reportedly more commonly affected. The condition was initially described at least as early as 1876. The current name came into use in 1988.
Signs and symptoms
Individuals with exploding head syndrome hear or experience loud imagined noises as they are falling asleep or are waking up, have a strong, often frightened emotional reaction to the sound, and do not report significant pain; around 10% of people also experience visual disturbances like perceiving visual static, lightning, or flashes of light. Some people may also experience heat, strange feelings in their torso, or a feeling of electrical tingling that ascends to the head before the auditory hallucinations occur. With the heightened arousal, people experience distress, confusion, myoclonic jerks, tachycardia, sweating, and a feeling that they have stopped breathing and need to make a conscious effort to breathe again.
The pattern of the auditory hallucinations is variable. Some people report having a total of two or four attacks followed by a prolonged or total remission, having attacks over the course of a few weeks or months before the attacks spontaneously disappear, or the attacks may even recur irregularly every few days, weeks, or months for much of a lifetime.
Causes
The cause of EHS is unknown. A number of hypotheses have been put forth with the most common being dysfunction of the reticular formation in the brainstem responsible for transition between waking and sleeping.
Other theories into causes of EHS include:
Minor seizures affecting the temporal lobe
Ear dysfunctions, including sudden shifts in middle ear components or the Eustachian tube, or a rupture of the membranous labyrinth or labyrinthine fistula
Stress and anxiety
Variable and broken sleep, associated with a decline in delta sleep
Antidepressant discontinuation syndrome
Temporary calcium channel dysfunction
PTSD
Exploding head syndrome was first described in the 19th century, and may have first been mentioned in the 17th century.
Diagnosis
Exploding head syndrome is classified under other parasomnias by the 2014 International Classification of Sleep Disorders (ICSD, 3rd.Ed.) and is an unusual type of auditory hallucination in that it occurs in people who are not fully awake.
According to ICD-10 and DSM-5, EHS is classified as either other specified sleep-wake disorder (codes: 780.59 or G47.8) or unspecified sleep-wake disorder (codes: 780.59 or G47.9).
Treatment
To date, no clinical trials have been conducted to determine what treatments are safe and effective; a few case reports have been published describing treatment of small numbers of people (two to twelve per report) with clomipramine, flunarizine, nifedipine, topiramate, or carbamazepine. Studies suggest that education and reassurance can reduce the frequency of EHS episodes. There is some evidence that individuals with EHS rarely report episodes to medical professionals.
Epidemiology
There have not been sufficient studies to make conclusive statements about how common or who is most often affected. One study found that 14% of a sample of undergrads reported at least one episode over the course of their lives, with higher rates in those who also have sleep paralysis.
History
Case reports of EHS have been published since at least 1876, when Silas Weir Mitchell described "sensory discharges" in a patient. It has been suggested, however, that the earliest written account of EHS appears in the 1691 biography of the French philosopher René Descartes. The phrase "snapping of the brain" was coined in 1920 by the British physician and psychiatrist Robert Armstrong-Jones. A detailed description of the syndrome and the name "exploding head syndrome" was given by British neurologist John M. S. Pearce in 1989. More recently, Peter Goadsby and Brian Sharpless have proposed renaming EHS "episodic cranial sensory shock", as it describes the symptoms more accurately and gives better attribution to Mitchell.
See also
References
Further reading
External links
Sleep disorders
Sleep physiology
Lucid dreams
Neurological disorders
Hallucinations
Syndromes of unknown causes
Parasomnias
Syndromes affecting the nervous system
Wikipedia medicine articles ready to translate | Exploding head syndrome | Biology | 1,028 |
42,518,492 | https://en.wikipedia.org/wiki/Green%20criminology | Green criminology is a branch of criminology that involves the study of harms and crimes against the environment broadly conceived, including the study of environmental law and policy, the study of corporate crimes against the environment, and environmental justice from a criminological perspective.
Origins
The term "green criminology" was introduced by Michael J. Lynch in 1990, and expanded upon in Nancy Frank and Michael J. Lynch's 1992 book, Corporate Crime, Corporate Violence, which examined the political economic origins of green crime and injustice, and the scope of environmental law. The term became more widely used following publication of a special issue on green criminology in the journal Theoretical Criminology edited by Piers Beirne and Nigel South in 1998. Green criminology has recently started to feature in university-level curriculum and textbooks in criminology and other disciplinary fields.
The study of green criminology has expanded significantly over time, and is supported by groups such as the International Green Criminology Working Group. There are increasing interfaces and hybrid empirical and theoretical influences between the study of green criminology, which focuses on environmental harms and crimes, and mainstream criminology and criminal justice, with criminologists studying the 'greening' of criminal justice institutions and practices in efforts to become more environmentally sustainable and the involvement of people in prison or on probation in ecological justice initiatives.
Approaches
Though green criminology was originally proposed as a political economic approach for the study of environmental harm, crime, law and justice, there are now several varieties of green criminology as noted below.
Political economy, environmental justice, and the treadmill of production approach
The initial grounding of green criminology was in political economic theory and analysis. In his original 1990 article, Lynch proposed green criminology as an extension of radical criminology and its focus on political economic theory and analysis. In that view, it was essential to examine the political economic dimensions of green crime and justice in order to understand the major environmental issues of our times and how they connect with the political economy of capitalism. The political economic approach was expanded upon by Lynch and Paul B. Stretesky in two additional articles in The Critical Criminologist. In those articles, Lynch and Stretesky extended the scope of green criminology to apply to the study of environmental justice, and followed that work with a series of studies addressing environmental justice concerns, the distribution of environmental crimes and hazards, and empirical studies of environmental justice movements and enforcement. Later, working with Michael A. Long and then Kimberly L. Barrett, the political economic explanation and empirical studies of green crimes were adapted to include a perspective on the structural influence of the treadmill of production on the creation of green crimes drawn from the work of Allan Schnaiberg, environmental sociology, eco-socialism and ecological Marxism. Throughout the development of the political economic approach to green criminology, scholars have made significant use of scientific and ecological literatures, as well as empirical analysis, which have become characteristics of this approach and distinguish it from other varieties of green criminology.
Nonspeciesist and nonhuman animal studies
The second major variation of green criminology is the nonspeciesist argument proposed by Piers Beirne. In Beirne's view, the study of harms against nonhuman animals is an important criminological topic which requires attention and at the same time illustrates the limits of current criminological theorizing about, crime/harm, law and justice with its focus almost exclusively on humans. This approach also includes discussions of animal rights. Beirne's approach to green criminology has been extremely influential, and there are now a significant number of studies within the green criminological literature focusing on nonhuman animal crimes and animal abuse. In addition to studies of animal abuse, included within the scope of nonhuman animal studies are those focused on illegal wildlife trade, poaching, wildlife smuggling, animal trafficking and the international trade in endangered species. Many of the studies green criminologists undertake in this area of research are theoretical or qualitative. Ron Clarke and several colleagues, however, have explored empirical examinations of illegal animal trade and trafficking, and this has become a useful approach for examining green crimes. Clarke's approach draws on more traditional criminological theory such as rational choice theory and crime opportunity theory, and hence is not within the mainstream of green criminological approaches. Nevertheless, Clarke's approach has drawn attention to important empirical explanations of green crimes.
Bio-piracy and eco-crimes
Similar to the political economic approach but without grounding in political economic theory, some green criminologists have explored the issue of green crime by examining how corporate behavior impacts green crimes. Among other issues, this approach has included discussions of eco-crimes and activities such as bio-piracy as discussed by Nigel South. Bio-piracy is largely an effort by corporations to commodify native knowledge and to turn native knowledge and practices into for-profit products while depriving native peoples of their rights to that knowledge and those products, and in most cases, avoiding payments to natives for their knowledge or products. Bio-piracy includes issues of social and economic justice for native peoples. These kinds of crimes fall into the category of eco-crimes, a term associated with the work of Reece Walters. Also included within the examination of eco-crimes is the analysis of other ecologically harmful corporate behaviors such as the production of genetically modified foods and various forms of toxic pollution.
Ecocide
Ecocide describes attempts to criminalize human activities that cause extensive damage to, destruction of or loss of ecosystems of a given territory; and which diminish the health and well-being of species within these ecosystems including humans. It involves transgressions that violate the principles of environmental justice, ecological justice and species justice. When this occurs as a result of human behaviour, advocates argue that a crime has occurred. However, this has not yet been accepted as an international crime by the United Nations.
Eco-global criminology
Some of those who study environmental crime and justice prefer the use of Rob White's term, eco-global criminology. In proposing this term, White suggested that it is necessary to employ a critical analysis of environmental crime as it occurs in its global context and connections. Similar to Lynch's political economic approach to green criminology, White has also noted that it is desirable to refer to the political economy of environmental crime, and to social and environmental justice issues.
Green-cultural criminology
As proposed by Avi Brisman and Nigel South, green-cultural criminology attempts to integrate green and cultural criminology to explore the cultural meaning and significance of terms such as "environment" and "environmental crime". Green-cultural criminology departs from traditional criminological approaches by bringing attention to social harms and their social consequences.
Conservation criminology
Conservation criminology is complementary to green criminology. Originally proposed by an interdisciplinary group of scholars from the Department of Fisheries & Wildlife, School of Criminal Justice, and Environmental Science & Policy Program at Michigan State University, conservation criminology seeks to overcome limitations inherent to single-discipline science and provide practical guidance about on-the-ground reforms. Conservation criminology is an interdisciplinary and applied paradigm for understanding programs and policies associated with global conservation risks. By integrating natural resources management, risk and decision science, and criminology, conservation criminology-based approaches ideally result in improved environmental resilience, biodiversity conservation, and secure human livelihoods. As an interdisciplinary science, conservation criminology requires the constant and creative combination of theories, methods, and techniques from diverse disciplines throughout the entire process of research, practice, education, and policy. Thinking about the interdisciplinary nature of conservation criminology can be quite exciting but does require patience and understanding of the different languages, epistemologies and ontologies of the core disciplines. Conservation criminology has been extensively applied to extralegal exploitation of natural resources such as wildlife poaching in Namibia and Madagascar, corruption in conservation, e-waste, and general noncompliance with conservation rules. By relying on multiple disciplines, conservation criminology moves beyond single-discipline approaches; it promotes thinking about second- and third-order consequences of risks, not just isolated trends.
Green Crime and Media
Media representations of eco-crime in the form of images can reproduce racism. Photography is a very powerful tool for generating perspective and interpretation when representing eco-crime. The blackness in an image of eco-crime, whether in the background, in the silhouettes of people at the site, or in a caption with racist content, can racialize the community where the eco-crime occurs and create a symbolic association in which green crime is coded as black. Reading race through an image is one beneficial approach to seeing how racism is pictured in images of eco-crime. Moreover, the meaning of "green" is also diluted by media. Advertising tends to invoke "go green" language to sell products that are neither sustainable nor environmentally friendly; using such advertising to increase sales while undermining the "go green" movement is called "greenwashing". Criminologists and the media should study how the media portrays eco-crime, provide information free from gender and racial bias, and pay attention to green offenders (e.g. corporations which violate environmental laws).
Green criminological theory
It is often noted that green criminology is interdisciplinary and as a result, lacks its own unique theory or any preferred theoretical approach. Moreover, significant portions of the green criminological literature are qualitative and descriptive, and those studies have generally not proposed a unique or unifying theory. Despite this general lack of a singular theory, some of the approaches noted above indicate certain theoretical preferences. For example, as noted, the political economic approach to green criminology develops explanations of green crime, victimization and environmental justice consistent with several existing strains of political economic analysis. Beirne's approach takes an interdisciplinary view of theory with respect to various animal rights models and arguments. Clarke's rational choice models of animal poaching and trafficking build on the rational choice tradition found within the criminological literature. To date, these different theoretical approaches have not been examined as competing explanations for green crime and justice, a situation that is found with respect to orthodox or traditional criminological theories of street crime.
References
External links
Meredith L. Gore on conservation criminology
Criminology
Environmental crime
Environmental social science | Green criminology | Environmental_science | 2,189 |
10,902 | https://en.wikipedia.org/wiki/Force | A force is an influence that can cause an object to change its velocity unless counterbalanced by other forces. The concept of force makes the everyday notion of pushing or pulling mathematically precise. Because the magnitude and direction of a force are both important, force is a vector quantity. The SI unit of force is the newton (N), and force is often represented by the symbol F.
Force plays an important role in classical mechanics. The concept of force is central to all three of Newton's laws of motion. Types of forces often encountered in classical mechanics include elastic, frictional, contact or "normal" forces, and gravitational. The rotational version of force is torque, which produces changes in the rotational speed of an object. In an extended body, each part often applies forces on the adjacent parts; the distribution of such forces through the body is the internal mechanical stress. In equilibrium, these stresses cause no acceleration of the body as the forces balance one another. If these are not in equilibrium, they can cause deformation of solid materials or flow in fluids.
In modern physics, which includes relativity and quantum mechanics, the laws governing motion are revised to rely on fundamental interactions as the ultimate origin of force. However, the understanding of force provided by classical mechanics is useful for practical purposes.
Development of the concept
Philosophers in antiquity used the concept of force in the study of stationary and moving objects and simple machines, but thinkers such as Aristotle and Archimedes retained fundamental errors in understanding force. In part, this was due to an incomplete understanding of the sometimes non-obvious force of friction and a consequently inadequate view of the nature of natural motion. A fundamental error was the belief that a force is required to maintain motion, even at a constant velocity. Most of the previous misunderstandings about motion and force were eventually corrected by Galileo Galilei and Sir Isaac Newton. With his mathematical insight, Newton formulated laws of motion that were not improved for over two hundred years.
By the early 20th century, Einstein developed a theory of relativity that correctly predicted the action of forces on objects with increasing momenta near the speed of light and also provided insight into the forces produced by gravitation and inertia. With modern insights into quantum mechanics and technology that can accelerate particles close to the speed of light, particle physics has devised a Standard Model to describe forces between particles smaller than atoms. The Standard Model predicts that exchanged particles called gauge bosons are the fundamental means by which forces are emitted and absorbed. Only four main interactions are known: in order of decreasing strength, they are: strong, electromagnetic, weak, and gravitational. High-energy particle physics observations made during the 1970s and 1980s confirmed that the weak and electromagnetic forces are expressions of a more fundamental electroweak interaction.
Pre-Newtonian concepts
Since antiquity the concept of force has been recognized as integral to the functioning of each of the simple machines. The mechanical advantage given by a simple machine allowed for less force to be used in exchange for that force acting over a greater distance for the same amount of work. Analysis of the characteristics of forces ultimately culminated in the work of Archimedes who was especially famous for formulating a treatment of buoyant forces inherent in fluids.
Aristotle provided a philosophical discussion of the concept of a force as an integral part of Aristotelian cosmology. In Aristotle's view, the terrestrial sphere contained four elements that come to rest at different "natural places" therein. Aristotle believed that motionless objects on Earth, those composed mostly of the elements earth and water, were in their natural place when on the ground, and that they stay that way if left alone. He distinguished between the innate tendency of objects to find their "natural place" (e.g., for heavy bodies to fall), which led to "natural motion", and unnatural or forced motion, which required continued application of a force. This theory, based on the everyday experience of how objects move, such as the constant application of a force needed to keep a cart moving, had conceptual trouble accounting for the behavior of projectiles, such as the flight of arrows. An archer causes the arrow to move at the start of the flight, and it then sails through the air even though no discernible efficient cause acts upon it. Aristotle was aware of this problem and proposed that the air displaced through the projectile's path carries the projectile to its target. This explanation requires a continuous medium such as air to sustain the motion.
Though Aristotelian physics was criticized as early as the 6th century, its shortcomings would not be corrected until the 17th century work of Galileo Galilei, who was influenced by the late medieval idea that objects in forced motion carried an innate force of impetus. Galileo constructed an experiment in which stones and cannonballs were both rolled down an incline to disprove the Aristotelian theory of motion. He showed that the bodies were accelerated by gravity to an extent that was independent of their mass and argued that objects retain their velocity unless acted on by a force, for example friction. Galileo's idea that force is needed to change motion rather than to sustain it, further improved upon by Isaac Beeckman, René Descartes, and Pierre Gassendi, became a key principle of Newtonian physics.
In the early 17th century, before Newton's Principia, the term "force" was applied to many physical and non-physical phenomena, e.g., for an acceleration of a point. The product of a point mass and the square of its velocity was named vis viva (live force) by Leibniz. The modern concept of force corresponds to what Newton called the accelerating force.
Newtonian mechanics
Sir Isaac Newton described the motion of all objects using the concepts of inertia and force. In 1687, Newton published his magnum opus, Philosophiæ Naturalis Principia Mathematica. In this work Newton set out three laws of motion that have dominated the way forces are described in physics to this day. The precise ways in which Newton's laws are expressed have evolved in step with new mathematical approaches.
First law
Newton's first law of motion states that the natural behavior of an object at rest is to continue being at rest, and the natural behavior of an object moving at constant speed in a straight line is to continue moving at that constant speed along that straight line. The latter follows from the former because of the principle that the laws of physics are the same for all inertial observers, i.e., all observers who do not feel themselves to be in motion. An observer moving in tandem with an object will see it as being at rest. So, its natural behavior will be to remain at rest with respect to that observer, which means that an observer who sees it moving at constant speed in a straight line will see it continuing to do so.
Second law
According to the first law, motion at constant speed in a straight line does not need a cause. It is change in motion that requires a cause, and Newton's second law gives the quantitative relationship between force and change of motion.
Newton's second law states that the net force acting upon an object is equal to the rate at which its momentum changes with time. If the mass of the object is constant, this law implies that the acceleration of an object is directly proportional to the net force acting on the object, is in the direction of the net force, and is inversely proportional to the mass of the object.
A modern statement of Newton's second law is a vector equation:
F = dp/dt,
where p is the momentum of the system, and F is the net (vector sum) force. If a body is in equilibrium, there is zero net force by definition (balanced forces may be present nevertheless). In contrast, the second law states that if there is an unbalanced force acting on an object it will result in the object's momentum changing over time.
In common engineering applications the mass in a system remains constant, allowing a simple algebraic form for the second law. By the definition of momentum,
p = mv,
where m is the mass and v is the velocity. If Newton's second law is applied to a system of constant mass, m may be moved outside the derivative operator. The equation then becomes
F = m dv/dt.
By substituting the definition of acceleration, a = dv/dt, the algebraic version of Newton's second law is derived:
F = ma.
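As a purely illustrative check of this constant-mass form, the following short Python sketch uses invented values:

    # Newton's second law for constant mass: F = m * a, so a = F / m
    mass = 2.0        # kilograms (illustrative value)
    net_force = 10.0  # newtons (illustrative value)

    acceleration = net_force / mass
    print(acceleration, "m/s^2")  # prints 5.0 m/s^2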
Third law
Whenever one body exerts a force on another, the latter simultaneously exerts an equal and opposite force on the first. In vector form, if F1,2 is the force of body 1 on body 2 and F2,1 that of body 2 on body 1, then
F1,2 = -F2,1.
This law is sometimes referred to as the action-reaction law, with F1,2 called the action and F2,1 the reaction.
Newton's Third Law is a result of applying symmetry to situations where forces can be attributed to the presence of different objects. The third law means that all forces are interactions between different bodies, and thus that there is no such thing as a unidirectional force or a force that acts on only one body.
In a system composed of object 1 and object 2, the net force on the system due to their mutual interactions is zero:
F1,2 + F2,1 = 0.
More generally, in a closed system of particles, all internal forces are balanced. The particles may accelerate with respect to each other but the center of mass of the system will not accelerate. If an external force acts on the system, it will make the center of mass accelerate in proportion to the magnitude of the external force divided by the mass of the system.
Combining Newton's Second and Third Laws, it is possible to show that the linear momentum of a system is conserved in any closed system. In a system of two particles, if p1 is the momentum of object 1 and p2 the momentum of object 2, then
dp1/dt + dp2/dt = F2,1 + F1,2 = 0,
where the last equality follows from the third law.
Using similar arguments, this can be generalized to a system with an arbitrary number of particles. In general, as long as all forces are due to the interaction of objects with mass, it is possible to define a system such that net momentum is never lost nor gained.
Defining "force"
Some textbooks use Newton's second law as a definition of force. However, for the equation for a constant mass to then have any predictive content, it must be combined with further information. Moreover, inferring that a force is present because a body is accelerating is only valid in an inertial frame of reference. The question of which aspects of Newton's laws to take as definitions and which to regard as holding physical content has been answered in various ways, which ultimately do not affect how the theory is used in practice. Notable physicists, philosophers and mathematicians who have sought a more explicit definition of the concept of force include Ernst Mach and Walter Noll.
Combining forces
Forces act in a particular direction and have sizes dependent upon how strong the push or pull is. Because of these characteristics, forces are classified as "vector quantities". This means that forces follow a different set of mathematical rules than physical quantities that do not have direction (denoted scalar quantities). For example, when determining what happens when two forces act on the same object, it is necessary to know both the magnitude and the direction of both forces to calculate the result. If both of these pieces of information are not known for each force, the situation is ambiguous.
Historically, forces were first quantitatively investigated in conditions of static equilibrium where several forces canceled each other out. Such experiments demonstrate the crucial properties that forces are additive vector quantities: they have magnitude and direction. When two forces act on a point particle, the resulting force, the resultant (also called the net force), can be determined by following the parallelogram rule of vector addition: the addition of two vectors represented by sides of a parallelogram gives an equivalent resultant vector that is equal in magnitude and direction to the diagonal of the parallelogram. The magnitude of the resultant varies from the difference of the magnitudes of the two forces to their sum, depending on the angle between their lines of action.
Free-body diagrams can be used as a convenient way to keep track of forces acting on a system. Ideally, these diagrams are drawn with the angles and relative magnitudes of the force vectors preserved so that graphical vector addition can be done to determine the net force.
As well as being added, forces can also be resolved into independent components at right angles to each other. A horizontal force pointing northeast can therefore be split into two forces, one pointing north, and one pointing east. Summing these component forces using vector addition yields the original force. Resolving force vectors into components of a set of basis vectors is often a more mathematically clean way to describe forces than using magnitudes and directions. This is because, for orthogonal components, the components of the vector sum are uniquely determined by the scalar addition of the components of the individual vectors. Orthogonal components are independent of each other because forces acting at ninety degrees to each other have no effect on the magnitude or direction of the other. Choosing a set of orthogonal basis vectors is often done by considering what set of basis vectors will make the mathematics most convenient. Choosing a basis vector that is in the same direction as one of the forces is desirable, since that force would then have only one non-zero component. Orthogonal force vectors can be three-dimensional with the third component being at right angles to the other two.
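The component method described above can be made concrete with a short sketch; the magnitudes, angles, and helper function below are invented purely for illustration:

    import math

    def to_components(magnitude, angle_deg):
        # Resolve a planar force into orthogonal (x, y) components.
        angle = math.radians(angle_deg)
        return magnitude * math.cos(angle), magnitude * math.sin(angle)

    # Two example forces: 3 N pointing east (0 degrees) and 4 N pointing north (90 degrees)
    f1x, f1y = to_components(3.0, 0.0)
    f2x, f2y = to_components(4.0, 90.0)

    # The net force components are the scalar sums of the individual components
    fx, fy = f1x + f2x, f1y + f2y

    resultant = math.hypot(fx, fy)                 # 5.0 N for this 3-4-5 example
    direction = math.degrees(math.atan2(fy, fx))   # about 53.1 degrees north of east
    print(resultant, direction)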
Equilibrium
When all the forces that act upon an object are balanced, then the object is said to be in a state of equilibrium. Hence, equilibrium occurs when the resultant force acting on a point particle is zero (that is, the vector sum of all forces is zero). When dealing with an extended body, it is also necessary that the net torque be zero. A body is in static equilibrium with respect to a frame of reference if it is at rest and not accelerating, whereas a body in dynamic equilibrium is moving at a constant speed in a straight line, i.e., moving but not accelerating. What one observer sees as static equilibrium, another can see as dynamic equilibrium and vice versa.
Static
Static equilibrium was understood well before the invention of classical mechanics. Objects that are not accelerating have zero net force acting on them.
The simplest case of static equilibrium occurs when two forces are equal in magnitude but opposite in direction. For example, an object on a level surface is pulled (attracted) downward toward the center of the Earth by the force of gravity. At the same time, a force is applied by the surface that resists the downward force with equal upward force (called a normal force). The situation produces zero net force and hence no acceleration.
Pushing against an object that rests on a frictional surface can result in a situation where the object does not move because the applied force is opposed by static friction, generated between the object and the table surface. For a situation with no movement, the static friction force exactly balances the applied force resulting in no acceleration. The static friction increases or decreases in response to the applied force up to an upper limit determined by the characteristics of the contact between the surface and the object.
A static equilibrium between two forces is the most usual way of measuring forces, using simple devices such as weighing scales and spring balances. For example, an object suspended on a vertical spring scale experiences the force of gravity acting on the object balanced by a force applied by the "spring reaction force", which equals the object's weight. Using such tools, some quantitative force laws were discovered: that the force of gravity is proportional to volume for objects of constant density (widely exploited for millennia to define standard weights); Archimedes' principle for buoyancy; Archimedes' analysis of the lever; Boyle's law for gas pressure; and Hooke's law for springs. These were all formulated and experimentally verified before Isaac Newton expounded his Three Laws of Motion.
Dynamic
Dynamic equilibrium was first described by Galileo who noticed that certain assumptions of Aristotelian physics were contradicted by observations and logic. Galileo realized that simple velocity addition demands that the concept of an "absolute rest frame" did not exist. Galileo concluded that motion in a constant velocity was completely equivalent to rest. This was contrary to Aristotle's notion of a "natural state" of rest that objects with mass naturally approached. Simple experiments showed that Galileo's understanding of the equivalence of constant velocity and rest were correct. For example, if a mariner dropped a cannonball from the crow's nest of a ship moving at a constant velocity, Aristotelian physics would have the cannonball fall straight down while the ship moved beneath it. Thus, in an Aristotelian universe, the falling cannonball would land behind the foot of the mast of a moving ship. When this experiment is actually conducted, the cannonball always falls at the foot of the mast, as if the cannonball knows to travel with the ship despite being separated from it. Since there is no forward horizontal force being applied on the cannonball as it falls, the only conclusion left is that the cannonball continues to move with the same velocity as the boat as it falls. Thus, no force is required to keep the cannonball moving at the constant forward velocity.
Moreover, any object traveling at a constant velocity must be subject to zero net force (resultant force). This is the definition of dynamic equilibrium: when all the forces on an object balance but it still moves at a constant velocity. A simple case of dynamic equilibrium occurs in constant velocity motion across a surface with kinetic friction. In such a situation, a force is applied in the direction of motion while the kinetic friction force exactly opposes the applied force. This results in zero net force, but since the object started with a non-zero velocity, it continues to move with a non-zero velocity. Aristotle misinterpreted this motion as being caused by the applied force. When kinetic friction is taken into consideration it is clear that there is no net force causing constant velocity motion.
Examples of forces in classical mechanics
Some forces are consequences of the fundamental ones. In such situations, idealized models can be used to gain physical insight. For example, each solid object is considered a rigid body.
Gravitational force or Gravity
What we now call gravity was not identified as a universal force until the work of Isaac Newton. Before Newton, the tendency for objects to fall towards the Earth was not understood to be related to the motions of celestial objects. Galileo was instrumental in describing the characteristics of falling objects by determining that the acceleration of every object in free-fall was constant and independent of the mass of the object. Today, this acceleration due to gravity towards the surface of the Earth is usually designated as $\vec{g}$ and has a magnitude of about 9.81 meters per second squared (this measurement is taken from sea level and may vary depending on location), and points toward the center of the Earth. This observation means that the force of gravity on an object at the Earth's surface is directly proportional to the object's mass. Thus an object that has a mass of $m$ will experience a force:

$$\vec{F} = m\vec{g}$$
For an object in free-fall, this force is unopposed and the net force on the object is its weight. For objects not in free-fall, the force of gravity is opposed by the reaction forces applied by their supports. For example, a person standing on the ground experiences zero net force, since a normal force (a reaction force) is exerted by the ground upward on the person that counterbalances his weight that is directed downward.
Newton's contribution to gravitational theory was to unify the motions of heavenly bodies, which Aristotle had assumed were in a natural state of constant motion, with falling motion observed on the Earth. He proposed a law of gravity that could account for the celestial motions that had been described earlier using Kepler's laws of planetary motion.
Newton came to realize that the effects of gravity might be observed in different ways at larger distances. In particular, Newton determined that the acceleration of the Moon around the Earth could be ascribed to the same force of gravity if the acceleration due to gravity decreased as an inverse square law. Further, Newton realized that the acceleration of a body due to gravity is proportional to the mass of the other attracting body. Combining these ideas gives a formula that relates the mass ($m_\oplus$) and the radius ($R_\oplus$) of the Earth to the gravitational acceleration:

$$\vec{g} = -\frac{G m_\oplus}{R_\oplus^2}\hat{r}$$

where the vector direction is given by $-\hat{r}$; here $\hat{r}$ is the unit vector directed outward from the center of the Earth.
In this equation, a dimensional constant $G$ is used to describe the relative strength of gravity. This constant has come to be known as the Newtonian constant of gravitation, though its value was unknown in Newton's lifetime. Not until 1798 was Henry Cavendish able to make the first measurement of $G$ using a torsion balance; this was widely reported in the press as a measurement of the mass of the Earth since knowing $G$ could allow one to solve for the Earth's mass given the above equation. Newton realized that since all celestial bodies followed the same laws of motion, his law of gravity had to be universal. Succinctly stated, Newton's law of gravitation states that the force on a spherical object of mass $m_1$ due to the gravitational pull of mass $m_2$ is

$$\vec{F} = -\frac{G m_1 m_2}{r^2}\hat{r}$$

where $r$ is the distance between the two objects' centers of mass and $\hat{r}$ is the unit vector pointed in the direction away from the center of the first object toward the center of the second object.
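As a numeric illustration of the inverse-square law, the short Python sketch below evaluates the formula with rounded reference values for $G$ and the Earth-Moon system; the constants are approximate and chosen only to show the orders of magnitude.

```python
# Illustrative evaluation of Newton's law of gravitation, F = G*m1*m2/r^2.
# All constants are rounded reference values, not precise data.

G = 6.674e-11           # gravitational constant, N*m^2/kg^2
m_earth = 5.972e24      # kg
m_moon = 7.348e22       # kg
r_earth_moon = 3.844e8  # mean Earth-Moon distance, m

def gravitational_force(m1, m2, r):
    """Magnitude of the mutual gravitational attraction."""
    return G * m1 * m2 / r**2

print(f"Earth-Moon attraction: {gravitational_force(m_earth, m_moon, r_earth_moon):.2e} N")

# The same law gives the surface acceleration g = G*m_earth/R^2:
R_earth = 6.371e6  # mean Earth radius, m
print(f"Surface gravity: {G * m_earth / R_earth**2:.2f} m/s^2")  # ~9.82
```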
This formula was powerful enough to stand as the basis for all subsequent descriptions of motion within the solar system until the 20th century. During that time, sophisticated methods of perturbation analysis were invented to calculate the deviations of orbits due to the influence of multiple bodies on a planet, moon, comet, or asteroid. The formalism was exact enough to allow mathematicians to predict the existence of the planet Neptune before it was observed.
Electromagnetic
The electrostatic force was first described in 1784 by Coulomb as a force that existed intrinsically between two charges. The properties of the electrostatic force were that it varied as an inverse square law directed in the radial direction, was both attractive and repulsive (there was intrinsic polarity), was independent of the mass of the charged objects, and followed the superposition principle. Coulomb's law unifies all these observations into one succinct statement.
Subsequent mathematicians and physicists found the construct of the electric field to be useful for determining the electrostatic force on an electric charge at any point in space. The electric field was based on using a hypothetical "test charge" anywhere in space and then using Coulomb's Law to determine the electrostatic force. Thus the electric field anywhere in space is defined as

$$\vec{E} = \frac{\vec{F}}{q}$$

where $q$ is the magnitude of the hypothetical test charge. Similarly, the idea of the magnetic field was introduced to express how magnets can influence one another at a distance. The Lorentz force law gives the force upon a body with charge $q$ due to electric and magnetic fields:

$$\vec{F} = q\left(\vec{E} + \vec{v}\times\vec{B}\right)$$

where $\vec{F}$ is the electromagnetic force, $\vec{E}$ is the electric field at the body's location, $\vec{B}$ is the magnetic field, and $\vec{v}$ is the velocity of the particle. The magnetic contribution to the Lorentz force is the cross product of the velocity vector with the magnetic field.
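A minimal NumPy sketch of the Lorentz force law follows; the charge, field, and velocity values are arbitrary and purely illustrative.

```python
# Minimal sketch of the Lorentz force F = q*(E + v x B) using NumPy.
import numpy as np

def lorentz_force(q, E, v, B):
    """Force on a point charge q in electric field E and magnetic field B."""
    return q * (E + np.cross(v, B))

q = 1.602e-19                    # proton charge, C
E = np.array([0.0, 1.0e3, 0.0])  # electric field, V/m
B = np.array([0.0, 0.0, 0.1])    # magnetic field, T
v = np.array([2.0e5, 0.0, 0.0])  # particle velocity, m/s

# Here v x B = (0, -2.0e4, 0), so the magnetic term opposes E along y.
print(lorentz_force(q, E, v, B))
```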
The origin of electric and magnetic fields would not be fully explained until 1864 when James Clerk Maxwell unified a number of earlier theories into a set of 20 scalar equations, which were later reformulated into 4 vector equations by Oliver Heaviside and Josiah Willard Gibbs. These "Maxwell's equations" fully described the sources of the fields as being stationary and moving charges, and the interactions of the fields themselves. This led Maxwell to discover that electric and magnetic fields could be "self-generating" through a wave that traveled at a speed that he calculated to be the speed of light. This insight united the nascent fields of electromagnetic theory with optics and led directly to a complete description of the electromagnetic spectrum.
Normal
When objects are in contact, the force directly between them is called the normal force, the component of the total force in the system exerted normal to the interface between the objects. The normal force is closely related to Newton's third law. The normal force, for example, is responsible for the structural integrity of tables and floors as well as being the force that responds whenever an external force pushes on a solid object. An example of the normal force in action is the impact force on an object crashing into an immobile surface.
Friction
Friction is a force that opposes relative motion of two bodies. At the macroscopic scale, the frictional force is directly related to the normal force at the point of contact. There are two broad classifications of frictional forces: static friction and kinetic friction.
The static friction force ($F_{sf}$) will exactly oppose forces applied to an object parallel to a surface up to the limit specified by the coefficient of static friction ($\mu_{sf}$) multiplied by the normal force ($F_N$). In other words, the magnitude of the static friction force satisfies the inequality:

$$0 \le F_{sf} \le \mu_{sf} F_N$$

The kinetic friction force ($F_{kf}$) is typically independent of both the forces applied and the movement of the object. Thus, the magnitude of the force equals:

$$F_{kf} = \mu_{kf} F_N$$

where $\mu_{kf}$ is the coefficient of kinetic friction. The coefficient of kinetic friction is normally less than the coefficient of static friction.
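The static/kinetic branching described above can be captured in a small sketch; the coefficients and the 10 kg block are invented values for illustration only.

```python
# Sketch of the static/kinetic friction model: static friction matches
# the applied force up to mu_s*N, beyond which the object slides and
# feels the (smaller) kinetic friction mu_k*N.

def friction_response(applied, normal, mu_s=0.6, mu_k=0.4):
    """Return (friction_force, is_moving) for a force applied
    parallel to the surface."""
    if abs(applied) <= mu_s * normal:
        return -applied, False                 # balances exactly, no motion
    direction = 1.0 if applied > 0 else -1.0
    return -direction * mu_k * normal, True    # opposes the sliding motion

N = 10.0 * 9.81  # normal force on a 10 kg block on level ground
for F in (30.0, 58.0, 70.0):
    print(F, friction_response(F, N))
```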
Tension
Tension forces can be modeled using ideal strings that are massless, frictionless, unbreakable, and do not stretch. They can be combined with ideal pulleys, which allow ideal strings to switch physical direction. Ideal strings transmit tension forces instantaneously in action–reaction pairs so that if two objects are connected by an ideal string, any force directed along the string by the first object is accompanied by a force directed along the string in the opposite direction by the second object. By connecting the same string multiple times to the same object through the use of a configuration that uses movable pulleys, the tension force on a load can be multiplied. For every string that acts on a load, another factor of the tension force in the string acts on the load. Such machines allow a mechanical advantage for a corresponding increase in the length of displaced string needed to move the load. These tandem effects result ultimately in the conservation of mechanical energy since the work done on the load is the same no matter how complicated the machine.
Spring
A simple elastic force acts to return a spring to its natural length. An ideal spring is taken to be massless, frictionless, unbreakable, and infinitely stretchable. Such springs exert forces that push when contracted, or pull when extended, in proportion to the displacement of the spring from its equilibrium position. This linear relationship was described by Robert Hooke in 1676, for whom Hooke's law is named. If $\Delta x$ is the displacement, the force exerted by an ideal spring equals:

$$\vec{F} = -k\Delta\vec{x}$$

where $k$ is the spring constant (or force constant), which is particular to the spring. The minus sign accounts for the tendency of the force to act in opposition to the applied load.
Centripetal
For an object in uniform circular motion, the net force acting on the object equals:

$$\vec{F} = -\frac{mv^2}{r}\hat{r}$$

where $m$ is the mass of the object, $v$ is the velocity of the object, $r$ is the distance to the center of the circular path, and $\hat{r}$ is the unit vector pointing in the radial direction outwards from the center. This means that the net force felt by the object is always directed toward the center of the curving path. Such forces act perpendicular to the velocity vector associated with the motion of an object, and therefore do not change the speed of the object (magnitude of the velocity), but only the direction of the velocity vector. More generally, the net force that accelerates an object can be resolved into a component that is perpendicular to the path, and one that is tangential to the path. This yields both the tangential force, which accelerates the object by either slowing it down or speeding it up, and the radial (centripetal) force, which changes its direction.
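A one-line evaluation of the centripetal force magnitude, $F = mv^2/r$, is sketched below with invented numbers for a car rounding a curve.

```python
# Centripetal force needed for uniform circular motion, F = m*v^2/r.

def centripetal_force(m, v, r):
    """Net inward force keeping mass m at speed v on a circle of
    radius r; it changes the direction of motion, not the speed."""
    return m * v ** 2 / r

m, v, r = 1200.0, 25.0, 80.0  # kg, m/s, m (illustrative car on a curve)
print(f"{centripetal_force(m, v, r):.0f} N toward the center")  # 9375 N
```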
Continuum mechanics
Newton's laws and Newtonian mechanics in general were first developed to describe how forces affect idealized point particles rather than three-dimensional objects. In real life, matter has extended structure and forces that act on one part of an object might affect other parts of an object. For situations where the lattice holding together the atoms in an object is able to flow, contract, expand, or otherwise change shape, the theories of continuum mechanics describe the way forces affect the material. For example, in extended fluids, differences in pressure result in forces being directed along the pressure gradients as follows:

$$\vec{F} = -V\vec{\nabla}P$$

where $V$ is the volume of the object in the fluid and $P$ is the scalar function that describes the pressure at all locations in space. Pressure gradients and differentials result in the buoyant force for fluids suspended in gravitational fields, winds in atmospheric science, and the lift associated with aerodynamics and flight.
A specific instance of such a force that is associated with dynamic pressure is fluid resistance: a body force that resists the motion of an object through a fluid due to viscosity. For so-called "Stokes' drag" the force is approximately proportional to the velocity, but opposite in direction:

$$\vec{F}_d = -b\vec{v}$$

where:

$b$ is a constant that depends on the properties of the fluid and the dimensions of the object (usually the cross-sectional area), and

$\vec{v}$ is the velocity of the object.
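For a small sphere at low Reynolds number, Stokes' law gives the drag constant as $b = 6\pi\eta r$ (a standard result not stated in the text above); the sketch below uses it to find the terminal velocity at which drag balances weight, with illustrative droplet numbers.

```python
# Stokes' drag sketch, F_d = -b*v, with b = 6*pi*eta*r for a small
# sphere in the low-Reynolds-number regime (Stokes' law).
# Terminal velocity follows from drag balancing weight: m*g = b*v_t.
import math

eta = 1.8e-5  # approximate viscosity of air, Pa*s
r = 5e-5      # droplet radius, m (illustrative)
rho = 1000.0  # water density, kg/m^3
g = 9.81      # m/s^2

b = 6 * math.pi * eta * r                 # drag constant, N*s/m
m = rho * (4.0 / 3.0) * math.pi * r ** 3  # droplet mass, kg
print(f"terminal velocity: {m * g / b:.3f} m/s")  # ~0.30 m/s
```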
More formally, forces in continuum mechanics are fully described by a stress tensor with terms that are roughly defined as

$$\sigma = \frac{F}{A}$$

where $A$ is the relevant cross-sectional area for the volume for which the stress tensor is being calculated. This formalism includes pressure terms associated with forces that act normal to the cross-sectional area (the matrix diagonals of the tensor) as well as shear terms associated with forces that act parallel to the cross-sectional area (the off-diagonal elements). The stress tensor accounts for forces that cause all strains (deformations), including tensile stresses and compressions.
Fictitious
There are forces that are frame dependent, meaning that they appear due to the adoption of non-Newtonian (that is, non-inertial) reference frames. Such forces include the centrifugal force and the Coriolis force. These forces are considered fictitious because they do not exist in frames of reference that are not accelerating. Because these forces are not genuine they are also referred to as "pseudo forces".
In general relativity, gravity becomes a fictitious force that arises in situations where spacetime deviates from a flat geometry.
Concepts derived from force
Rotation and torque
Forces that cause extended objects to rotate are associated with torques. Mathematically, the torque of a force is defined relative to an arbitrary reference point as the cross product:

$$\vec{\tau} = \vec{r}\times\vec{F}$$

where $\vec{r}$ is the position vector of the force application point relative to the reference point.
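The cross product is easy to check numerically; in the NumPy sketch below, a 50 N force at the end of a 0.30 m lever arm (invented values) yields a 15 N·m torque about the z-axis.

```python
# Torque as a cross product, tau = r x F.
import numpy as np

r = np.array([0.30, 0.0, 0.0])  # lever arm, m (e.g. a wrench handle)
F = np.array([0.0, 50.0, 0.0])  # applied force, N

print(np.cross(r, F))  # [ 0.  0. 15.] -> 15 N*m about the z-axis
```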
Torque is the rotation equivalent of force in the same way that angle is the rotational equivalent for position, angular velocity for velocity, and angular momentum for momentum. As a consequence of Newton's first law of motion, there exists rotational inertia that ensures that all bodies maintain their angular momentum unless acted upon by an unbalanced torque. Likewise, Newton's second law of motion can be used to derive an analogous equation for the instantaneous angular acceleration of the rigid body:

$$\vec{\tau} = I\vec{\alpha}$$

where

$I$ is the moment of inertia of the body

$\vec{\alpha}$ is the angular acceleration of the body.
This provides a definition for the moment of inertia, which is the rotational equivalent for mass. In more advanced treatments of mechanics, where the rotation over a time interval is described, the moment of inertia must be substituted by the tensor that, when properly analyzed, fully determines the characteristics of rotations including precession and nutation.
Equivalently, the differential form of Newton's Second Law provides an alternative definition of torque:

$$\vec{\tau} = \frac{\mathrm{d}\vec{L}}{\mathrm{d}t}$$

where $\vec{L}$ is the angular momentum of the particle.
Newton's Third Law of Motion requires that all objects exerting torques themselves experience equal and opposite torques, and therefore also directly implies the conservation of angular momentum for closed systems that experience rotations and revolutions through the action of internal torques.
Yank
The yank is defined as the rate of change of force:

$$\vec{Y} = \frac{\mathrm{d}\vec{F}}{\mathrm{d}t}$$
The term is used in biomechanical analysis, athletic assessment and robotic control. The second ("tug"), third ("snatch"), fourth ("shake"), and higher derivatives are rarely used.
Kinematic integrals
Forces can be used to define a number of physical concepts by integrating with respect to kinematic variables. For example, integrating with respect to time gives the definition of impulse:

$$\vec{J} = \int_{t_1}^{t_2} \vec{F}\,\mathrm{d}t$$
which by Newton's Second Law must be equivalent to the change in momentum (yielding the impulse-momentum theorem).
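The time integral can be evaluated numerically; the sketch below integrates an invented triangular force pulse, as in a simple collision model, with the trapezoidal rule.

```python
# Impulse as the time integral of force, J = integral of F dt.
import numpy as np

t = np.linspace(0.0, 0.01, 1001)  # 10 ms contact time
# Triangular pulse peaking at 2000 N halfway through the contact:
F = 2000.0 * np.minimum(t, 0.01 - t) / 0.005

J = np.trapz(F, t)  # trapezoidal-rule integration
print(f"Impulse: {J:.2f} N*s")  # area of the triangle: 0.5*0.01*2000 = 10
```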
Similarly, integrating with respect to position gives a definition for the work done by a force:

$$W = \int_{\vec{x}_1}^{\vec{x}_2} \vec{F}\cdot\mathrm{d}\vec{x}$$
which is equivalent to changes in kinetic energy (yielding the work-energy theorem).
Power P is the rate of change dW/dt of the work W, as the trajectory is extended by a position change $\mathrm{d}\vec{x}$ in a time interval $\mathrm{d}t$:

$$\mathrm{d}W = \vec{F}\cdot\mathrm{d}\vec{x},$$

so

$$P = \frac{\mathrm{d}W}{\mathrm{d}t} = \vec{F}\cdot\vec{v},$$

with $\vec{v}$ the velocity.
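A small worked check of these relations, with invented constant-force numbers: the work done equals the kinetic energy gained, and $P = \vec{F}\cdot\vec{v}$ gives the instantaneous power.

```python
# Work-energy and power check for a constant force on a body at rest.
m, F, d = 2.0, 10.0, 5.0  # kg, N, m (illustrative values)

W = F * d                  # work done over the displacement, J
v = (2.0 * W / m) ** 0.5   # final speed, from W = (1/2)*m*v^2
P_end = F * v              # instantaneous power at the end, W
print(W, round(v, 3), round(P_end, 1))  # 50.0 J, ~7.071 m/s, ~70.7 W
```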
Potential energy
Instead of a force, often the mathematically related concept of a potential energy field is used. For instance, the gravitational force acting upon an object can be seen as the action of the gravitational field that is present at the object's location. Restating mathematically the definition of energy (via the definition of work), a potential scalar field $U(\vec{r})$ is defined as that field whose gradient is equal and opposite to the force produced at every point:

$$\vec{F} = -\vec{\nabla}U$$
Forces can be classified as conservative or nonconservative. Conservative forces are equivalent to the gradient of a potential while nonconservative forces are not.
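The gradient relation can be verified numerically for a conservative force; the sketch below differentiates the spring potential $U = \tfrac{1}{2}kx^2$ by central differences and recovers Hooke's law, with an invented spring constant.

```python
# Numerical check that F = -dU/dx: the spring potential
# U(x) = 0.5*k*x^2 should give back the Hooke's-law force F = -k*x.
k = 40.0  # spring constant, N/m (illustrative)
x = 0.25  # displacement, m
h = 1e-6  # finite-difference step

U = lambda y: 0.5 * k * y ** 2
F_numeric = -(U(x + h) - U(x - h)) / (2.0 * h)  # central difference
print(F_numeric, -k * x)  # both ~ -10.0 N
```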
Conservation
A conservative force that acts on a closed system has an associated mechanical work that allows energy to convert only between kinetic or potential forms. This means that for a closed system, the net mechanical energy is conserved whenever a conservative force acts on the system. The force, therefore, is related directly to the difference in potential energy between two different locations in space, and can be considered to be an artifact of the potential field in the same way that the direction and amount of a flow of water can be considered to be an artifact of the contour map of the elevation of an area.
Conservative forces include gravity, the electromagnetic force, and the spring force. Each of these forces has models that are dependent on a position often given as a radial vector emanating from spherically symmetric potentials. Examples of this follow:
For gravity:

$$\vec{F}_g = -\frac{G m_1 m_2}{r^2}\hat{r}$$

where $G$ is the gravitational constant, and $m_n$ is the mass of object n.

For electrostatic forces:

$$\vec{F}_e = \frac{q_1 q_2}{4\pi\varepsilon_0 r^2}\hat{r}$$

where $\varepsilon_0$ is the electric permittivity of free space, and $q_n$ is the electric charge of object n.

For spring forces:

$$\vec{F}_s = -k\Delta\vec{x}$$

where $k$ is the spring constant.
For certain physical scenarios, it is impossible to model forces as being due to a simple gradient of potentials. This is often due to a macroscopic statistical average of microstates. For example, static friction is caused by the gradients of numerous electrostatic potentials between the atoms, but manifests as a force model that is independent of any macroscale position vector. Nonconservative forces other than friction include other contact forces, tension, compression, and drag. For any sufficiently detailed description, all these forces are the results of conservative ones since each of these macroscopic forces are the net results of the gradients of microscopic potentials.
The connection between macroscopic nonconservative forces and microscopic conservative forces is described by detailed treatment with statistical mechanics. In macroscopic closed systems, nonconservative forces act to change the internal energies of the system, and are often associated with the transfer of heat. According to the Second law of thermodynamics, nonconservative forces necessarily result in energy transformations within closed systems from ordered to more random conditions as entropy increases.
Units
The SI unit of force is the newton (symbol N), which is the force required to accelerate a one kilogram mass at a rate of one meter per second squared, or kg·m·s⁻². The corresponding CGS unit is the dyne, the force required to accelerate a one gram mass by one centimeter per second squared, or g·cm·s⁻². A newton is thus equal to 100,000 dynes.
The gravitational foot-pound-second English unit of force is the pound-force (lbf), defined as the force exerted by gravity on a pound-mass in the standard gravitational field of 9.80665 m·s⁻². The pound-force provides an alternative unit of mass: one slug is the mass that will accelerate by one foot per second squared when acted on by one pound-force. An alternative unit of force in a different foot–pound–second system, the absolute fps system, is the poundal, defined as the force required to accelerate a one-pound mass at a rate of one foot per second squared.
The pound-force has a metric counterpart, less commonly used than the newton: the kilogram-force (kgf) (sometimes kilopond) is the force exerted by standard gravity on one kilogram of mass. The kilogram-force leads to an alternate, but rarely used, unit of mass: the metric slug (sometimes mug or hyl) is that mass that accelerates at 1 m·s⁻² when subjected to a force of 1 kgf. The kilogram-force is not a part of the modern SI system and is generally deprecated, though it is still sometimes used for expressing aircraft weight, jet thrust, bicycle spoke tension, torque wrench settings and engine output torque.
See also Ton-force.
Revisions of the force concept
At the beginning of the 20th century, new physical ideas emerged to explain experimental results in astronomical and submicroscopic realms. As discussed below, relativity alters the definition of momentum and quantum mechanics reuses the concept of "force" in microscopic contexts where Newton's laws do not apply directly.
Special theory of relativity
In the special theory of relativity, mass and energy are equivalent (as can be seen by calculating the work required to accelerate an object). When an object's velocity increases, so does its energy and hence its mass equivalent (inertia). It thus requires more force to accelerate it the same amount than it did at a lower velocity. Newton's Second Law,

$$\vec{F} = \frac{\mathrm{d}\vec{p}}{\mathrm{d}t},$$

remains valid because it is a mathematical definition. But for momentum to be conserved at relativistic relative velocity $v$, momentum must be redefined as:

$$\vec{p} = \frac{m_0\vec{v}}{\sqrt{1 - v^2/c^2}}$$

where $m_0$ is the rest mass and $c$ the speed of light.
The expression relating force and acceleration for a particle with constant non-zero rest mass $m$ moving in the $x$ direction at velocity $v$ is:

$$\vec{F} = \left(\gamma^3 m a_x,\ \gamma m a_y,\ \gamma m a_z\right)$$

where

$$\gamma = \frac{1}{\sqrt{1 - v^2/c^2}}$$

is called the Lorentz factor. The Lorentz factor increases steeply as the relative velocity approaches the speed of light. Consequently, greater and greater force must be applied to produce the same acceleration at extreme velocity. The relative velocity cannot reach $c$.

If $v$ is very small compared to $c$, then $\gamma$ is very close to 1 and

$$F = ma$$

is a close approximation. Even for use in relativity, one can restore the form of

$$F^\mu = m A^\mu$$

through the use of four-vectors. This relation is correct in relativity when $F^\mu$ is the four-force, $m$ is the invariant mass, and $A^\mu$ is the four-acceleration.
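The steep growth of the Lorentz factor is easy to tabulate; the sketch below compares relativistic momentum $p = \gamma m_0 v$ with the classical $m_0 v$ for an electron at a few illustrative speeds.

```python
# Lorentz factor and relativistic vs. classical momentum.
import math

c = 2.998e8      # speed of light, m/s
m0 = 9.109e-31   # electron rest mass, kg

def gamma(v):
    """Lorentz factor for speed v."""
    return 1.0 / math.sqrt(1.0 - (v / c) ** 2)

for frac in (0.1, 0.5, 0.9, 0.99):
    v = frac * c
    g = gamma(v)
    # p_relativistic / p_classical = gamma, so gamma itself shows how
    # much Newtonian momentum underestimates p at high speed.
    print(f"v = {frac:.2f}c  gamma = {g:.3f}  p = {g * m0 * v:.3e} kg*m/s")
```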
The general theory of relativity incorporates a more radical departure from the Newtonian way of thinking about force, specifically gravitational force. This reimagining of the nature of gravity is described more fully below.
Quantum mechanics
Quantum mechanics is a theory of physics originally developed in order to understand microscopic phenomena: behavior at the scale of molecules, atoms or subatomic particles. Generally and loosely speaking, the smaller a system is, the more an adequate mathematical model will require understanding quantum effects. The conceptual underpinning of quantum physics is different from that of classical physics. Instead of thinking about quantities like position, momentum, and energy as properties that an object has, one considers what result might appear when a measurement of a chosen type is performed. Quantum mechanics allows the physicist to calculate the probability that a chosen measurement will elicit a particular result. The expectation value for a measurement is the average of the possible results it might yield, weighted by their probabilities of occurrence.
In quantum mechanics, interactions are typically described in terms of energy rather than force. The Ehrenfest theorem provides a connection between quantum expectation values and the classical concept of force, a connection that is necessarily inexact, as quantum physics is fundamentally different from classical. In quantum physics, the Born rule is used to calculate the expectation values of a position measurement or a momentum measurement. These expectation values will generally change over time; that is, depending on the time at which (for example) a position measurement is performed, the probabilities for its different possible outcomes will vary. The Ehrenfest theorem says, roughly speaking, that the equations describing how these expectation values change over time have a form reminiscent of Newton's second law, with a force defined as the negative derivative of the potential energy. However, the more pronounced quantum effects are in a given situation, the more difficult it is to derive meaningful conclusions from this resemblance.
Quantum mechanics also introduces two new constraints that interact with forces at the submicroscopic scale and which are especially important for atoms. Despite the strong attraction of the nucleus, the uncertainty principle limits the minimum extent of an electron probability distribution and the Pauli exclusion principle prevents electrons from sharing the same probability distribution. This gives rise to an emergent pressure known as degeneracy pressure. The dynamic equilibrium between the degeneracy pressure and the attractive electromagnetic force give atoms, molecules, liquids, and solids stability.
Quantum field theory
In modern particle physics, forces and the acceleration of particles are explained as a mathematical by-product of exchange of momentum-carrying gauge bosons. With the development of quantum field theory and general relativity, it was realized that force is a redundant concept arising from conservation of momentum (4-momentum in relativity and momentum of virtual particles in quantum electrodynamics). The conservation of momentum can be directly derived from the homogeneity or symmetry of space and so is usually considered more fundamental than the concept of a force. Thus the currently known fundamental forces are considered more accurately to be "fundamental interactions".
While sophisticated mathematical descriptions are needed to predict, in full detail, the result of such interactions, there is a conceptually simple way to describe them through the use of Feynman diagrams. In a Feynman diagram, each matter particle is represented as a straight line (see world line) traveling through time, which normally increases up or to the right in the diagram. Matter and anti-matter particles are identical except for their direction of propagation through the Feynman diagram. World lines of particles intersect at interaction vertices, and the Feynman diagram represents any force arising from an interaction as occurring at the vertex with an associated instantaneous change in the direction of the particle world lines. Gauge bosons are emitted away from the vertex as wavy lines and, in the case of virtual particle exchange, are absorbed at an adjacent vertex. The utility of Feynman diagrams is that other types of physical phenomena that are part of the general picture of fundamental interactions but are conceptually separate from forces can also be described using the same rules. For example, a Feynman diagram can describe in succinct detail how a neutron decays into an electron, proton, and antineutrino, an interaction mediated by the same gauge boson that is responsible for the weak nuclear force.
Fundamental interactions
All of the known forces of the universe are classified into four fundamental interactions. The strong and the weak forces act only at very short distances, and are responsible for the interactions between subatomic particles, including nucleons and compound nuclei. The electromagnetic force acts between electric charges, and the gravitational force acts between masses. All other forces in nature derive from these four fundamental interactions operating within quantum mechanics, including the constraints introduced by the Schrödinger equation and the Pauli exclusion principle. For example, friction is a manifestation of the electromagnetic force acting between atoms of two surfaces. The forces in springs, modeled by Hooke's law, are also the result of electromagnetic forces. Centrifugal forces are acceleration forces that arise simply from the acceleration of rotating frames of reference.
The fundamental theories for forces developed from the unification of different ideas. For example, Newton's universal theory of gravitation showed that the force responsible for objects falling near the surface of the Earth is also the force responsible for the falling of celestial bodies about the Earth (the Moon) and around the Sun (the planets). Michael Faraday and James Clerk Maxwell demonstrated that electric and magnetic forces were unified through a theory of electromagnetism. In the 20th century, the development of quantum mechanics led to a modern understanding that the first three fundamental forces (all except gravity) are manifestations of matter (fermions) interacting by exchanging virtual particles called gauge bosons. This Standard Model of particle physics assumes a similarity between the forces and led scientists to predict the unification of the weak and electromagnetic forces in electroweak theory, which was subsequently confirmed by observation.
Gravitational
Newton's law of gravitation is an example of action at a distance: one body, like the Sun, exerts an influence upon any other body, like the Earth, no matter how far apart they are. Moreover, this action at a distance is instantaneous. According to Newton's theory, the one body shifting position changes the gravitational pulls felt by all other bodies, all at the same instant of time. Albert Einstein recognized that this was inconsistent with special relativity and its prediction that influences cannot travel faster than the speed of light. So, he sought a new theory of gravitation that would be relativistically consistent. Mercury's orbit did not match that predicted by Newton's law of gravitation. Some astrophysicists predicted the existence of an undiscovered planet (Vulcan) that could explain the discrepancies. When Einstein formulated his theory of general relativity (GR) he focused on Mercury's problematic orbit and found that his theory added a correction, which could account for the discrepancy. This was the first time that Newton's theory of gravity had been shown to be inexact.
Since then, general relativity has been acknowledged as the theory that best explains gravity. In GR, gravitation is not viewed as a force, but rather, objects moving freely in gravitational fields travel under their own inertia in straight lines through curved spacetime – defined as the shortest spacetime path between two spacetime events. From the perspective of the object, all motion occurs as if there were no gravitation whatsoever. It is only when observing the motion in a global sense that the curvature of spacetime can be observed and the force is inferred from the object's curved path. Thus, the straight line path in spacetime is seen as a curved line in space, and it is called the ballistic trajectory of the object. For example, a basketball thrown from the ground moves in a parabola, as it is in a uniform gravitational field. Its spacetime trajectory is almost a straight line, slightly curved (with a radius of curvature of the order of a few light-years). The time derivative of the changing momentum of the object is what we label as "gravitational force".
Electromagnetic
Maxwell's equations and the set of techniques built around them adequately describe a wide range of physics involving force in electricity and magnetism. This classical theory already includes relativity effects. Understanding quantized electromagnetic interactions between elementary particles requires quantum electrodynamics (or QED). In QED, photons are fundamental exchange particles, describing all interactions relating to electromagnetism including the electromagnetic force.
Strong nuclear
There are two "nuclear forces", which today are usually described as interactions that take place in quantum theories of particle physics. The strong nuclear force is the force responsible for the structural integrity of atomic nuclei, and gains its name from its ability to overpower the electromagnetic repulsion between protons.
The strong force is today understood to represent the interactions between quarks and gluons as detailed by the theory of quantum chromodynamics (QCD). The strong force is the fundamental force mediated by gluons, acting upon quarks, antiquarks, and the gluons themselves. The strong force only acts directly upon elementary particles. A residual force is observed between hadrons (notably, the nucleons in atomic nuclei), known as the nuclear force. Here the strong force acts indirectly, transmitted as gluons that form part of the virtual pi and rho mesons, the classical transmitters of the nuclear force. The failure of many searches for free quarks has shown that the elementary particles affected are not directly observable. This phenomenon is called color confinement.
Weak nuclear
Unique among the fundamental interactions, the weak nuclear force creates no bound states. The weak force is due to the exchange of the heavy W and Z bosons. Since the weak force is mediated by two types of bosons, it can be divided into two types of interaction or "vertices" — charged current, involving the electrically charged W+ and W− bosons, and neutral current, involving electrically neutral Z0 bosons. The most familiar effect of weak interaction is beta decay (of neutrons in atomic nuclei) and the associated radioactivity. This is a type of charged-current interaction. The word "weak" derives from the fact that the field strength is some 10¹³ times less than that of the strong force. Still, it is stronger than gravity over short distances. A consistent electroweak theory has also been developed, which shows that electromagnetic forces and the weak force are indistinguishable at temperatures in excess of approximately 10¹⁵ kelvins. Such temperatures occurred in the plasma collisions in the early moments of the Big Bang.
See also
References
External links
Natural philosophy
Classical mechanics
Vector physical quantities
Temporal rates | Force | Physics,Mathematics | 10,093 |
40,429,647 | https://en.wikipedia.org/wiki/Elaiomycin | Elaiomycin is an antimicrobial chemical compound, classified as a conjugated azoxyalkene, which was first isolated from Streptomyces in 1954. A laboratory synthesis of elaiomycin was reported in 1977.
A variety of related compounds, collectively called elaiomycins, have also been reported.
References
Antimicrobials | Elaiomycin | Chemistry,Biology | 77 |
70,390,462 | https://en.wikipedia.org/wiki/Funicin | Funicin is an antibiotic produced by the fungus Aspergillus funiculosus. Funicin has the molecular formula C17H18O5.
References
Benzoate esters
Diphenyl ethers
Ethyl esters
Antibiotics | Funicin | Biology | 53 |
37,364,964 | https://en.wikipedia.org/wiki/Corpus%20of%20Electronic%20Texts | The Corpus of Electronic Texts, or CELT, is an online database of contemporary and historical documents relating to Irish history and culture. As of 8 December 2016, CELT contained 1,601 documents, with a total of over 18 million words. CELT originated in 1992 from CURIA, an earlier, unsuccessful partnership between University College Cork (UCC/NUI) and the Royal Irish Academy (RIA). According to CELT, the database "caters for academic scholars, teachers, students, and the general public, all over the world".
References
External links
CELT: Corpus of Electronic Texts
Databases in Ireland
Culture of Ireland
Irish digital libraries
Online databases
University College Cork
Internet properties established in 1992
1992 establishments in Ireland
Corpora
Libraries established in 1992 | Corpus of Electronic Texts | Technology | 160 |
35,006,837 | https://en.wikipedia.org/wiki/Setralit | Setralit is a technical natural fiber based on plant fibers whose property profile has been modified selectively in order to meet different industrial requirements. It was first manufactured in 1989 by Jean-Léon Spehner, an Alsatian engineer, and further developed by the German company ECCO Gleittechnik GmbH. The name "Setralit" is derived from the French company Setral S.à.r.l., a subsidiary of ECCO, where Spehner was employed at that time. Setralit was first officially described in 1990.
History
In the late eighties and early nineties, asbestos in friction pads was banned, first in Germany and subsequently in the European Union (EU). Consequently, the friction lining industry was looking for a substitute that was suitable as a reinforcing as well as a processing fiber. At the same time the EU established and subsidized a mandatory land set-aside to restrict grain production. Only plants for industrial use could be grown on the set-aside land without affecting subsidies. Both the EU and the Federal Republic of Germany supplied money to boost the development of new materials and new manufacturing processes for such "renewable resources", above all for bast fiber plants like flax and – since 1996 – hemp with low THC content.
Against this background ECCO took part in a joint project for the utilization of flax fibers in brake and clutch linings, funded by the German Federal Ministry for Research and Technology (BMFT). During this project several Setralit fiber types were used for the first time. They had been generated by a chemical, thermal and/or mechanical treatment of flax tow, which is a side product of the textile industry. The German popular press praised this approach as a "sensational invention".
However, the varying properties of the base material of first-generation Setralit turned out to be a serious disadvantage, because these variations affected the performance characteristics of the final product in an unforeseeable way. These variations are mainly caused by growth and harvest conditions and as such are influenced by the climate as well as by short-term weather fluctuations in the growing area. These effects are particularly critical during dew retting.
In order to avoid this problem, ECCO developed an ultrasonic decomposition process (named "ultrasonic break-down") at the end of the 1990s. Thanks to this controllable, physico-chemical extraction, most of the associated material of the plant fibers (lignin, pectin, waxes, natural adhesives, fragrances and dyestuffs, as well as dust, bacteria, and fungal spores) is removed or destroyed. These second-generation Setralit fibers show a much narrower range of property variations compared to those of the first generation, which makes them more attractive for industrial use.
Following this, ECCO developed a series of Setralit types for various industrial end applications in cooperation with several industry partners in the construction, plastic and paper industries. In 2005, a fibrillated Setralit fiber achieved its industrial breakthrough. This type is mainly used as a substitute for aramid pulp (Kevlar, Twaron et al.), for example in friction pads.
The ongoing political discussions about sustainability, the protection of natural resources, and the reduction of greenhouse gases have brought the Setralit fiber to the attention of new industrial users.

According to the Nova Institute, Hürth, Germany, there is in the future no alternative to the increased substantial use of agricultural raw materials. Bio-based substances such as biodegradable and durable bioplastics, natural fiber reinforced (bio)plastics (biocomposites) and wood-plastic composites (WPC) thereby form an interesting new class of materials.
Production (Setralit process)
Setralit production is two-staged. During the first step the raw material is subjected to an aqueous ultrasonic procedure followed by washing and drying. During the second step of conditioning, the cleaned Setralit fiber is treated specifically depending on its end use. In general this step is merely mechanical (cutting, grinding, fibrillating etc.), but it can also be combined with a thermal or chemical treatment. The term Setralit process means the combination of two (or more) manufacturing steps in a row. Decomposition of the plant fiber bundles leaves the basic physical properties of the elementary fibers unaffected; therefore Setralit is still classified as a natural fiber.
The chemical fiber extraction by ultrasound is controllable, as are the following conditioning processes. Thus the properties can be fitted to the requirements of the end products – within the range that nature of fiber allows.
In principle, any plant fibers can be considered as raw material. However, bast fibers of annuals (flax, hemp, jute, kenaf et al.) are preferred. Also suitable are stem fibers of perennial plants (nettle, ramie), leaf fibers (sisal, abaca, cabuja, curaua), plus seed and fruit fibers (cotton, kapok, coir). In contrast, the application of the Setralit technique to herbage (bamboo, miscanthus, bagasse, cereal, rice and corn straw) and wood has only been explored rudimentarily.
The Setralit techniques are applied either to crushed dry straw of bast fiber plants (mechanical fiber extraction) or to decorticated fibers (long fibers, flax tow). In the first case the fiber has to be separated mechanically, after the ultrasonic break-down, from its non-fibrous components (shives). As a benefit one gets clean shives as a byproduct, which can serve as a raw material for high-quality fiber powder. With the help of modified Setralit techniques the shives may also be upgraded separately.
Characteristics
Concerning their technical characteristics, Setralit fibers differ remarkably from the raw fibers from which they were extracted. The distinctive attribute of Setralit compared to a conventionally obtained fiber is the reproducibility of its technical properties, which are produced by standardized treatment processes. Whereas conventional natural fibers mainly reflect the quality variations of the primary material, these variations are evened out by ultrasonic extraction.
Other differing characteristics are:
High cleanliness
Brighter color
Higher temperature resistance
Customized and constant quality
Rapid, high, and uniform water absorption.
The different, mostly mechanical conditioning actions of step two lead to a range of well-defined Setralit types that differ in appearance as well as in technical characteristics. These properties determine the possible applications of a certain Setralit type. They are stated in the technical datasheet. The specification describes the permitted variation of such parameters.
Specification of a fibrillated natural fiber SETRALIT® NFU/31-2
Natural fiber-pulp, cleaned, fibrillated
(*1) 10,000 cm²/g BD (Blaine-Dyckerhoff) is equivalent to ~6 m²/g BET (Brunauer, Emmett, Teller)
(*2) In this context “fibril” means a part of a fiber whose diameter is smaller than one third of the original fiber (mother or stem fiber). This line is drawn arbitrarily.
The mechanical strength values of Setralit-fibers reflect those of the primary material (e.g. flax, hemp, ramie fiber). The ultrasonic procedure does not lead to a damage of the fiber. Any losses in strength properties are due only to the mechanical wear during the second processing step.
Comparison: fibrillated natural fiber – aramid pulp

(Application: friction linings) The characteristics of a fibrillated Setralit fiber compared to a synthetic high-performance fiber (aramid pulp):
Reference:
[1] Own measurements of ECCO Gleittechnik GmbH.
[2] Sotton 1998: Flax a natural fibre with outstanding properties - Techtextil, Frankfurt. f
[3] Akzo: Twaron® - The power of aramid - Informational brochure.
[4] Final Report to Project: Ermittlung werkstoffkundlicher Merkmale von Flachsfasern “[Determination of flax fiber properties] - Institut für Kunststoffverarbeitung Aachen. f
Remarks
a Measured on fiber bundles
b Measured on elementary fibers
c Determined by thermal gravimetric analysis: here defined as temperature at 5% weight loss of dry fiber (heating-up rate: 5 °C/min).
d Here defined as mass of fibrils divided by total pulp mass.
e Determined by a different method (Blaine-Dyckerhoff: flow resistance of N2) than that used by Akzo (Brunauer, Emmett, Teller: absorption of N2); equivalent to ~6 m²/g BET.
f The mechanical properties of elementary flax and hemp fibers are comparable.
Applications
Setralit® is a raw material for industrial manufacturing and allows multifunctional utilization. It can substitute for expensive synthetic fibers (e.g. aramid) in challenging technical applications.
It can either be processed to form a fiber reinforced composite (semi-finished product, compound) or be used directly in final products (fiber-reinforced building materials, gaskets, fiber mats etc.).
Example of a possible future use: a glove compartment made of hemp fiber reinforced plastic with a polypropylene (PP) matrix, produced by NF injection molding.
Areas of application for Setralit®-fibers
Friction linings: today they represent the main application area of Setralit. Its use is important in mechanical engineering because, in contrast to the materials used previously, its abrasion is not harmful to the health of the operating personnel.

Brake linings for vehicles, consisting of many very different components including fillers and temperature-resistant resins, may contain Setralit fibers. The most important markets for brake linings are Europe, Japan and the United States, closely followed by the emerging Asian economies India and China. Although the use of asbestos in friction linings has been prohibited in the EU since 1989, elevated asbestos levels are still detected there in areas with heavy braking, such as junctions, motorway exits, landing strips, or railroad stations.
Building sector: plaster, dry mortar, fiber cement, concrete, aerated concrete, hard plaster, floor pavement, lime-sand brick, gypsum cardboard, insulating materials, insulating boards, dispersion paints.
Plastics: semi-finished composites, prepregs, SMC, BMC, injection molding, formed parts, fiber reinforced polymers, especially biopolymers.
Textiles: clothes, home textiles, industrial textiles, geotextiles, filters, spunlace mats, medical and sanitary articles.
Chemical industry: friction linings, sealants, filtering agents, filler materials, thixotropic agents, bitumen, rubber, polishing agents, putties, adhesives.
Paper: technical papers, cardboard boxes, specialty papers.
Other applications (especially for shives and other by-products): animal bedding, bulk solids, animal food (pectin et al.), biogas, energy generation. The fiber length of the technical Setralit® fiber is linked to general application fields as follows:
Long fiber, > 100 mm: textile applications
Yarn, cloth, fiber mats (e.g. heat insulation mats)
Short fiber, 0.5 – 10 mm: reinforcement fiber, textile short fiber
Material reinforcement (e.g. plastic injection molding, aerated concrete), spunlace nonwovens
Process fiber < 1 mm
Improvement of manufacturing processes (e.g. friction pads)
Brand name Setralit
“SETRALIT®“ is a worldwide protected brand name of ECCO Gleittechnik GmbH and a registered trademark.
Original Setralit Fibers produced by ECCO enter the market also under other labeling.
The company
Karl-Heinz Hensel launched the enterprise ECCO Gleittechnik GmbH in 1982, and its German subsidiary Setral in 1984. Since 1985 the French company Sétral S.A. (today S.à.r.l.) has belonged to the group as well. Whereas ECCO predominantly works in the fields of renewable fibers and alternative solid lubricants, mainly in research and development, its affiliates Setral and Sétral S.à.r.l. develop, produce and sell high-performance special lubricants and maintenance products all over the world.
See also
Fiber crop
Injection molding
Retting
References
External links
ECCO-Setralit site concerning fiber activities
Use of flax in friction linings
Information referring to Setralit in „lesfibresvegetales.info“
”New Process and Reinforcing Fibers for Friction Materials Based on Renewable Raw Materials” – Presentation by Volker von Drach 2001 in “papers.sae.org “, SAE international technical papers
Fibers
Composite materials | Setralit | Physics | 2,691 |
23,431,744 | https://en.wikipedia.org/wiki/C9H9NO3 | The molecular formula C9H9NO3 (molar mass: 179.175 g/mol) may refer to:
Acedoben
N-Acetylanthranilic acid
Adrenochrome
Hippuric acid
Molecular formulas | C9H9NO3 | Physics,Chemistry | 65 |
5,253,593 | https://en.wikipedia.org/wiki/114P/Wiseman%E2%80%93Skiff | 114P/Wiseman–Skiff is a periodic comet in the Solar System.
It was discovered by Jennifer Wiseman in January 1987 on two photographic plates that had been taken on December 28, 1986, by Brian A. Skiff of Lowell Observatory. Wiseman and Skiff confirmed the comet on January 19, 1987.
Comet 114P/Wiseman–Skiff is believed to have been the parent body of a meteor shower on Mars and the source of the first meteor photographed from Mars on March 7, 2004.
Aphelion is located near the orbit of Jupiter. On February 25, 2043, the comet will make a close approach to Jupiter.
The nucleus of the comet has a radius of 0.78 ± 0.05 kilometers, assuming a geometric albedo of 0.04.
References
External links
114P at Kronk's Cometography
Periodic comets
0114
114P
114P
19861228 | 114P/Wiseman–Skiff | Astronomy | 183 |
22,034,831 | https://en.wikipedia.org/wiki/Donald%20H.%20Weingarten | Donald Henry Weingarten (born February 16, 1945) is a computational physicist.
Born in Boston, Massachusetts, he received an undergraduate degree in 1965 and a Ph.D. in 1970 from Columbia University, New York.
From 1969 to 1976 he held research positions at Fermilab (then the National Accelerator Laboratory), the University of Copenhagen, the University of Paris, and the University of Rochester. From 1976 to 1983 he was an Assistant Professor, Associate Professor, and full Professor at Indiana University. In 1983 he took his present position in the Research Division of IBM in Yorktown Heights, New York.
In 1987 he was elected a Fellow of the American Physical Society "for his original theoretical contributions to particle physics, especially the introduction of Monte Carlo methods for field theories with fermions, rigorous inequalities among fermion bound state masses, and lattice formulation of string theory".
In 1997 he received the Aneesur Rahman Prize for Computational Physics, the American Physical Society's highest honor for work in computational physics.
References
1945 births
Living people
Scientists from Boston
21st-century American physicists
Columbia University alumni
Indiana University faculty
Computational physicists
Fellows of the American Physical Society
Physicists from Massachusetts | Donald H. Weingarten | Physics | 239 |
53,897,986 | https://en.wikipedia.org/wiki/Moshe%20Shachak | Moshe Shachak (born Moshe Charshak; 1936) is an ecologist at Ben-Gurion University of the Negev. Shachak's research focuses on ecosystem engineers, organisms that modulate the abiotic environment. Most of his studies were conducted in arid and semi-arid ecosystems.
Major contributions
Shachak was born in Tel Aviv, Israel. In his early career, he studied desert animals and eco-hydrological processes in a small desert watershed. Together with colleagues he showed that the herbivory of snails on cyanobacteria living inside rocks has a major impact on the weathering of that rocky desert. This effect was found to be similar in magnitude to aeolian deposition in that area. A follow-up study showed that this herbivory has a fertilization effect amounting to about 11% of the nitrogen input in that system. These findings led to the development of the concept of ecosystem engineers together with Clive Jones and John Lawton. Ecosystem engineers are organisms that change the environment, thereby affecting the distribution of many other organisms. Although controversial at the beginning, this concept has become widely accepted. One of the original papers was named in the list of the 100 most influential papers in ecology, and today the concept appears in mainstream ecological textbooks. Shachak's more recent research focuses on plant and cyanobacterial engineers and pattern formation.
References
1936 births
Academic staff of Ben-Gurion University of the Negev
Ecologists
Israeli biologists
Living people
People from Tel Aviv | Moshe Shachak | Environmental_science | 301 |
39,632,259 | https://en.wikipedia.org/wiki/Skin%20immunity | Skin immunity is a property of skin that allows it to resist infections from pathogens. In addition to providing a passive physical barrier against infection, the skin also contains elements of the innate and adaptive immune systems which allows it to actively fight infections. Hence the skin provides defense in depth against infection.
The skin acts as a barrier, a kind of sheath, made of several layers of cells and their related glands. The skin is a dynamic organ containing different cells that carry elements of the innate and the adaptive immune systems, which are activated when the tissue is under attack by invading pathogens. Shortly after infection, the adaptive immune response is induced by dendritic cells (Langerhans cells) present in the epidermis; they are responsible for the capture, processing, and presentation of antigens to T lymphocytes in local lymphoid organs. As a result, T lymphocytes express the cutaneous lymphocyte antigen (CLA) molecule, a modified form of P-selectin glycoprotein ligand-1. Lymphocytes move to the epidermis, where they reside as memory T cells; there they can be activated and trigger an inflammatory response. Dysregulation of these mechanisms is associated with inflammatory diseases of the skin.
Afferent and efferent phases of the immune system of the skin
Some humoral and cellular components of the skin pass through the lymph vessels to reach the circulation. This circulatory network is of great importance: it is the route of direct communication between a specific site of the skin and the lymphoid cells found inside the lymph nodes and the systemic tissues.

Antigens in the epidermis are taken up by certain cells of the skin, among them the antigen-presenting cells (APCs: Langerhans, dendritic and cutaneous cells). They capture the antigen, process it, and present it on their surface in association with MHC-II. Keratinocytes produce TNFα and IL-1, which act on the Langerhans cells, inducing an increase in the expression of histocompatibility complex molecules and in cytokine secretion. Moreover, they induce their migration from the skin to the paracortical areas of the lymph nodes. Once there, these cells can provide the necessary stimulus for T lymphocytes, which proliferate, express cutaneous homing receptors, and respond to various chemoattractants presented by the dermal microvascular endothelial cells of inflamed skin, finally entering the skin tissue.
Once the activated lymphocytes arrive, they come into contact with the antigen, proliferate, and develop their effector functions in order to neutralize or eliminate the pathogen.

The Langerhans cells, which are recruited from the peripheral blood, promote and initiate the cellular immune response of lymphocytes in the skin. Antigen presentation may also occur in peripheral lymphoid tissues.
Antigen presentation by Langerhans cells to lymphocytes
Once activated, Langerhans cells rapidly migrate to the lymph nodes, where they accumulate in the paracortex after carrying the antigens of the skin to the nodes via afferent lymph vessels. Langerhans cells induce a vast proliferation of naïve T lymphocytes and participate in the immunostimulation phase of the immune response, converting the lymphocytes into T helper cells. Recently, it has been shown that Langerhans cells can present antigenic peptides associated with MHC-I, capable of inducing a response from cytotoxic T lymphocytes and effector functions such as the production of cytokines.
Microbiota and skin immunity
Skin microbiota plays an important role in tissue homeostasis and local immunity.
Skin microbial communities are highly diverse and can be remodeled over time or in response to environment challenges.
From around 2005 on, the scientific community has thoroughly developed the concept of the human microbiome and begun systematic studies to establish the relationship between the microbiome and human physiology in health and disease. We are beginning to understand that the gut microbiota helps modulate host immunity at a systemic level. However, the gut microbiome does not significantly affect skin immunity; instead, skin immunity is modulated by the skin microflora, according to the results obtained by Naik et al.
Analyzing immunologic changes of germ-free (GF) mice with reconstituted gut microbiota showed a recovery of IL-17A and IFN-γ levels up to those observed in the gastrointestinal tract of specific pathogen-free (SPF) mice, but gut microbiome restoration did not affect skin immunity. Comparing GF and SPF mice showed a decrease in the skin production of IFN-γ and IL-17A. To evaluate the functional consequences of the absence of skin microbiota, Leishmania major was introduced intradermally and the lesions were evaluated. L. major lesions in GF mice were significantly smaller and less severe than in SPF mice; however, the number of parasites after infection was significantly higher in GF mice. These results clearly indicate that GF mice have an impaired capacity to respond to infections compared to SPF mice. Finally, mono-association of GF mice with S. epidermidis clearly restored immune function, which in the case of skin is mediated by IL-1, a key factor for the restoration of IL-17A and IFN-γ levels. Thus skin commensals exert their effect by enhancing IL-1 signaling and amplifying responses according to the local inflammatory milieu. As IL-1 has been implicated in the etiology and pathology of psoriasis and other cutaneous disorders, it is likely that skin commensals are important drivers and amplifiers of skin pathologies.
T cells and microbiota in skin immunity
Recent studies have demonstrated that specific components of the microbiota, as well as their metabolites, selectively promote the activation and expansion of different T cell subsets under normal and/or pathological conditions. For example, colonization with Staphylococcus epidermidis may have diverse effects, such as promoting the growth of IL-17A+ CD8+ T cells that reside in the epidermis. This would limit pathogen invasion, improving the innate immune barrier in an IL-17-dependent manner. According to an investigation led by US researchers, skin-resident CD11b+ dendritic cells orchestrate a specific response after interacting with commensal bacteria, stimulating the proliferation of IL-17A+ CD8+ T cells through their capacity to produce IL-1. This activation mechanism is commensal-specific and clearly belongs to the adaptive immune system; strikingly, however, it improves innate immune protection, as shown after challenging gnotobiotic mice with Candida albicans. Indeed, mono-association of gnotobiotic mice with S. epidermidis significantly improves innate protection against C. albicans. The connection between the innate and adaptive systems is driven in this case by the production of the alarmins S100A8 and S100A9, which are known to elicit microbicidal responses and to act as potent chemoattractants for neutrophils.
The majority of bacteria tested increased the number of skin T cells. Interactions between T cells and specific microbiota components may represent an evolutionary outcome by which the skin immune system and the microbiota provide heterologous protection against invasive pathogens and calibrate barrier immunity through the use of chemical signals. This shows that the skin immune system is a highly dynamic environment that can be rapidly and specifically remodeled by certain commensals.
Finally, studying the interactions between the microbiota and skin T cells can help to identify the causes of various diseases and possible cures for them. The increasing development of tools for personalized medicine will undoubtedly help toward this goal, because each person has a different microbiota.
References
Further reading
immunity
Immune system | Skin immunity | Biology | 1,657 |
52,378,053 | https://en.wikipedia.org/wiki/Jon%20Hirschtick | Jon Hirschtick is a CAD software developer, founder and former CEO of SolidWorks, a popular solid modeling 3D CAD and CAE system for Microsoft Windows, and Onshape, a cloud platform for product development that includes tools for CAD, data management, collaboration, workflow, analytics, etc.
Education
Hirschtick holds bachelor's and master's degrees from MIT, graduating in 1986.
Career
Hirschtick was director of engineering at Computervision from 1991–1993, and a manager at the MIT CADLab. He was a player and instructor on the MIT Blackjack Team featured in the movies 21 and Breaking Vegas.
Hirschtick founded the SolidWorks Corporation in 1993 using $1 million he made while a member of the MIT Blackjack Team. Under his leadership, SolidWorks revenue eventually grew to $600 million. When SolidWorks was acquired by Dassault Systèmes in 1997, Hirschtick continued as CEO and then as a group executive for the next 14 years. In October 2011, Hirschtick left SolidWorks and in 2012 founded Belmont Technology (later renamed Onshape) with other members of the original SolidWorks team. Hirschtick is currently CEO at Onshape. In October 2019 Onshape entered into an agreement to be acquired by PTC.
Hirschtick was awarded the CAD Society Leadership Award, joining Autodesk’s Carl Bass, Dassault Systèmes’ Bernard Charles, and 3D Systems's Ping Fu, and is a recipient of the American Society of Mechanical Engineers Leadership Award. He is a member of the Advisory Board at Boston University and Arcbazar, where he was once director, and is an advisor to Magic Leap and MarkForged, Inc.
References
Living people
American computer businesspeople
American technology chief executives
1962 births
American chairpersons of corporations
American software engineers
American technology company founders
Businesspeople in software
History of computing
Massachusetts Institute of Technology people | Jon Hirschtick | Technology | 380 |
36,454,902 | https://en.wikipedia.org/wiki/HD%20168607 | HD 168607 (V4029 Sagittarii) is a blue hypergiant and luminous blue variable (LBV) star located in the constellation of Sagittarius, easy to see with amateur telescopes. It forms a pair with HD 168625, also a blue hypergiant and possible luminous blue variable, that can be seen at the south-east of M17, the Omega Nebula.
Physical properties
HD 168607 was estimated to be about as far away as the Omega Nebula (2.2 kiloparsecs, or 7,200 light years, from the Sun), and no measurements have been found that rule out a physical association with HD 168625. Assuming this distance is correct, the star is 240,000 times brighter than the Sun with a surface temperature of . The Gaia Data Release 2 parallax of implies a closer distance of about .
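A parallax measurement converts to a distance through the simple inverse relation d [pc] = 1/p [arcsec]. A minimal sketch of the conversion (the parallax value below is an illustrative placeholder, not the star's published Gaia figure):

```python
# Minimal sketch: converting a stellar parallax to a distance.
# The parallax value used here is a placeholder for illustration,
# not HD 168607's published Gaia Data Release 2 figure.

LY_PER_PC = 3.26156  # light years per parsec

def parallax_to_distance(parallax_mas: float) -> tuple[float, float]:
    """Return (parsecs, light years) for a parallax in milliarcseconds."""
    parsecs = 1000.0 / parallax_mas  # d [pc] = 1 / p [arcsec]
    return parsecs, parsecs * LY_PER_PC

pc, ly = parallax_to_distance(0.75)  # hypothetical 0.75 mas parallax
print(f"{pc:.0f} pc = {ly:.0f} ly")
```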
The apparent magnitude of this star or star system was observed to vary by 0.25 to 0.30 magnitudes with a period of 64 days when it was first identified as an α Cygni variable. Unlike its neighbour HD 168625, no nebula has been found around this star. It is classified in the General Catalogue of Variable Stars as a luminous blue variable or S Doradus variable with the variable star designation V4029 Sagittarii and a maximum and minimum visual magnitude of 8.12 and 8.29 respectively. Although it is suspected of being in, or about to enter, an S Doradus phase, no outbursts have been observed. A magnitude variation between 8.05 and 8.41 is reported from a broader range of observations.
HD 168607 is thought to have had a mass between when it first formed on the main sequence, but now much less. Analysis of its period and photospheric abundances suggest that it has evolved through a red supergiant stage and has now expelled its outer atmosphere and increased its temperature again.
References
B-type hypergiants
Luminous blue variables
Sagittarius (constellation)
168607
Sagittarii, V4029
Durchmusterung objects
089956 | HD 168607 | Astronomy | 434 |
22,360,123 | https://en.wikipedia.org/wiki/Karl%20James%20Jalkanen | Karl James Jalkanen, FRSC, (born 1958 in Chassell, Michigan), is a research scientist in molecular biophysics. He is currently a research scientist at the Gilead Sciences new La Verne, California manufacturing facility in the Department of Technical Services.
Biography
Before moving to California he was a visiting senior research scientist in the Department of Micro- and Nanotechnology at the Technical University of Denmark (DTU Nanotech) in Kgs. Lyngby, Denmark; a visiting research professor at the Aalto University School of Science and Technology in the Department of Applied Physics; a visiting research professor at Kyushu University in Kasuga, Fukuoka, Japan; a visiting FAPESP Professor of Molecular Biophysics at the Universidade do Vale do Paraíba (UniVap) in São José dos Campos, São Paulo, Brazil, in the Laboratory of Biomedical Vibrational Spectroscopy (LEVB); a visiting senior research scientist at the Bremen Center for Computational Material Science (BCCMS) at the University of Bremen in Bremen, Germany; a visiting professor of biophysics at the Nanochemistry Research Institute (NRI) at Curtin University of Technology in Perth, WA, Australia; a visiting scholar at the German Cancer Research Center (DKFZ) in Heidelberg, Germany; a visiting professor of biophysics in the Laboratory of Physics at Helsinki University of Technology, now Aalto University, in Otaniemi, Finland; and an associate professor of biophysics at the Technical University of Denmark.
Research
From his biography on the Australian Research Network for Advanced Materials (ARNAM), his focus is:
His 72 peer-reviewed scientific papers have been cited 3,500 times in journals such as the Journal of the American Chemical Society, the Journal of Physical Chemistry, the Journal of Chemical Physics, the Journal of Computational Chemistry, Chemical Physics Letters and Theoretical Chemistry Accounts. According to the Royal Society of Chemistry, he is an "expert in spectroscopic methods used in biophysics".
Editor
He is currently the Editor-in-Chief (EiC) of Current Physical Chemistry. He has also been guest editor for three special issues of Theoretical Chemistry Accounts: the P. J. Stephens Honorary Issue (volume 119, numbers 1–3, January 2008), edited with Dr. Gerard M. Jensen of Gilead Sciences, Inc.; the Suhai Festschrift Honorary Issue (volume 125, numbers 3–6, March 2010); and the Akira Imamura Honorary Issue (volume 130, numbers 4–6, December 2011). He was also guest editor, with Dr. Gerard M. Jensen of Gilead Sciences, Inc., for the two-issue Quantum Nanobiology and Biophysical Chemistry series in Current Physical Chemistry (CPC), which appeared as the January and April 2013 issues in volume 3, issues 1 and 2. The January 2013 and April 2013 issues have been made available online. The Imamura Festschrift Issue articles have appeared online and can be accessed at the Theoretical Chemistry Accounts (TCA) website, along with all other articles in TCA, including articles discussing the triple helix and biospectroscopy papers.
Education
University of Southern California
PhD, MSc, Chemistry, Applied Mathematics, 1980 — 1989
Michigan Technological University
BSc, Chemistry, 1977 — 1980
Michigan State University
Chemistry 1976 — 1977
References
External links
1958 births
Living people
People from Chassell, Michigan
American people of Finnish descent
21st-century American biologists
21st-century American chemists
Academic staff of Kyushu University
Computational chemists
Michigan Technological University alumni | Karl James Jalkanen | Chemistry | 744 |
57,970,616 | https://en.wikipedia.org/wiki/Metarhizium%20rileyi | Metarhizium rileyi is a species of entomopathogenic fungus in the family Clavicipitaceae. This species is known to infect Lepidoptera, including economically important insects in the Noctuoidea and Bombycoidea; there is an extensive (pre 2014) literature on this fungus under its synonym Nomuraea rileyi.
Importance
In sericulture, the term "green muscardine" has been used for fungal infections of silkworms caused by M. rileyi.
M. rileyi has been considered as a potential mycoinsecticide for use against several noctuid insect pests. Blastospores of M. rileyi can be produced easily in liquid media, but conidia are preferred for practical field use. For laboratory purposes, conidia can be produced, expensively, on Sabouraud's maltose agar supplemented with 1% yeast extract, but grain substrates are preferred for mass production.
References
External links
Images at iNaturalist
Clavicipitaceae
Hypocreales genera
Biological pest control
Fungi described in 1883
Fungus species | Metarhizium rileyi | Biology | 232 |
58,903,443 | https://en.wikipedia.org/wiki/Ruth%20Gates | Ruth Deborah Gates (March 28, 1962 – October 25, 2018) was the Director of the Hawaiʻi Institute of Marine Biology and the first woman to be President of the International Society for Reef Studies. Her research was dedicated to understanding coral reef ecosystems, specifically coral-algal symbiosis and the capacity for corals to acclimatize under future climate change conditions. Doctor Gates is most accredited with looking at coral biology and human-assisted coral evolution, known as super corals, as notably seen in the documentary Chasing Coral, available on Netflix.
Education
Gates was inspired by the documentary The Undersea World of Jacques Cousteau. She studied biology at Newcastle University where she earned a Bachelor of Science degree in 1984. She fell in love with corals during a diving trip to the West Indies. In 1985 she moved to the West Indies to study corals. She completed her PhD at Newcastle University in 1989 on seawater temperature and algal-cnidarian symbiosis. During her postgraduate work in Jamaica, she was exposed to the bleaching response of coral resulting from rising temperatures.
Career and research
After her PhD, Gates was appointed a postdoctoral researcher at the University of California, Los Angeles. Here she spent thirteen years working as a junior researcher in California, developing skills in cellular biology, evolutionary biology, and molecular genetics. She was there during the 1998 bleaching event that killed more than 15% of corals across the world.
Gates joined the Hawaiʻi Institute of Marine Biology in 2003. She studied corals and reefs, learning how they function and working on ways to slow their decline. She worked on Coconut Island, trying to identify why some corals survive bleaching. Her group monitored the ecosystems of coral reefs to understand how a changing environment impacted coral health. The corals in shallow patches like Kāneʻohe Bay are subject to high temperatures and irradiance. Alongside seawater temperature, the group measures photosynthetically active radiation, salinity and nutrient composition. This allowed them to build 3D models of reefs. They study the Symbiodinium that live within coral tissues; these provide the corals with energy and are lost during coral bleaching. The group develops new techniques for data analysis and management, including developing EarthCube and CRESCYNT. Gates was concerned about sunscreen that contains octinoxate and oxybenzone, and in 2015 called for it to be banned in Hawaii. These sunscreens were banned in 2018. In 2012 she demonstrated that the choice of symbiotic algae was crucial for how tropical reefs survived environmental stresses. She predicted that more than 90 percent of the world's corals will be dead by 2050.
Gates Coral Lab
Gates established the Gates Coral Lab at the Hawaiʻi Institute of Marine Biology. Even after her death in October 2018, her team continues to conduct research centered on the biological traits of coral reef ecosystems, and uses this research to inform restoration efforts and management policies. The lab has made significant contributions to coral reef research, and works in collaboration with the Australian Institute of Marine Science on the Coral Assisted Evolution Project, which attempts to "stabilize and restore coral reefs" in the face of climate change.
Gates' research team hosted the first coral restoration workshop in Hawaii at the Hawaiʻi Institute of Marine Biology in 2017. The research team's restoration efforts in Hawaii's coral reefs focus on realistic and effective approaches. Recent publications have discussed the necessity of focusing on local restoration and recovery efforts as opposed to mass scale restoration until there is more substantial research on how to best combat the root of the problem of bleaching events, climate change. Other research and restoration publications have discussed the effects of beneficial mutations, genetic variation, and human assisted relocation.
Super Coral
"Super corals" were defined as those that did not bleach during natural bleaching events when sea temperatures were high. Gates identified these so-called "super corals" as a potential mechanism for preventing coral extinction. Gates said, "I just cannot bear the idea that future generations may not experience a coral reef. The mission is to start solving the problem, not just to study it." In 2013, she won the Paul G. Allen Ocean Challenge, a $10,000 prize that allowed her to work on improving the resilience of vulnerable coral reef ecosystems. For the proposal, Gates joined Madeleine van Oppen and used genetic selection to boost resilience to environmental stress. They did this by exposing cross-bred corals to successively warmer and more acidic experimental tanks. In the laboratory, they took resistant corals, collected their reproductive products after spawning, raised their offspring in the lab, and tested for increased temperature resistance. Gates was awarded the University of Hawaii Board of Regents Medal for Excellence in Research. Coral Assisted Evolution, a $4 million research project, was funded by the Paul G. Allen Frontiers Group. This supported Gates' research for four years from 2016, developing super corals that can withstand climate change. While Gates had reservations about interfering with nature, she could not sit by and watch species become extinct without acting. In 2016, Gates was named by Hawaii Business as one of the top 20 leaders of Hawaii. She explored whether non-super corals could be encouraged to take on new symbionts to improve their ability to withstand high temperatures. If successful, the project could help save coral reefs, which have an estimated value of US$9.9 trillion. In 2018, the foundation supported a coral reef map that allowed scientists to monitor corals in unprecedented detail.
Public engagement
In addition to her career in research, Gates served as a mentor, public speaker, science communicator, and proponent for change and progress in the field of marine science. She captivated and inspired audiences with her passion, optimism, and, as she modestly put it, her English boarding school accent. She was elected the first female president of the International Society for Reef Studies in 2015 and significantly increased membership and involvement while she served. The Super Coral proposals were featured in Fast Company, Gizmodo, PBS, Newsweek, Hawaii Business, National Geographic, the Huffington Post, New Scientist and the BBC. Her work was featured in the Netflix documentary Chasing Coral. She was an invited speaker at the 2017 Aspen Ideas Festival. She was featured on the University of Hawaiʻi Foundation video series in 2018. The Gates Coral Lab is involved in a wide range of public engagement and outreach, including hosting students from Mo'orea. She was a member of the Tetiaroa Society.
Chasing Coral
Gates' work at the Hawaiʻi Institute of Marine Biology is featured in the Netflix documentary Chasing Coral. In the documentary, she explains her amazement with corals: "I have the utmost respect for corals because I think they have got us all fooled. Simplicity on the outside does not mean simplicity on the inside." The documentary showcases her work with Richard Vevers and his diving team on a project to capture the process of coral bleaching in the wild for the first time. Gates provides the scientific foundation for the project, educating the team of divers and the audience of the film throughout. She warns the audience of the "eradication of an entire ecosystem in our lifespan" to encourage progress in the movement against climate change. Her appearance in Chasing Coral was one of Gates' many public outreach and engagement efforts, working to raise awareness of coral bleaching and to inspire the public to put a stop to these events.
Personal life
Gates was born in Akrotiri, Cyprus, the sister of Timothy Gates and the daughter of John Amos Gates (RAF) and Muriel Peel Gates (physiotherapist). Her wife was Robin Burton-Gates, whom she married in September 2018. In her free time, she was an accomplished scuba diver, earned a black belt in karate, and started a school for karate in Hawaii.
Gates was diagnosed with brain cancer at 56 years old, but died from complications during surgery for diverticulitis, unrelated to her earlier diagnosis. Gates leaves a legacy of optimism and progress in the field of marine science: Van Oppen, the Gates Coral Lab, and multiple other labs across the globe continue to study the mechanisms of resistance to climate change and how they may be passed down through generations.
References
1962 births
2018 deaths
English biologists
Akrotiri and Dhekelia people
Alumni of Newcastle University
University of Hawaiʻi at Mānoa faculty
University of California, Los Angeles faculty
American environmental scientists
British LGBTQ scientists
LGBTQ academics | Ruth Gates | Environmental_science | 1,747 |
6,014,870 | https://en.wikipedia.org/wiki/PCB%20Piezotronics | PCB Piezotronics is a manufacturer of piezoelectric sensors.
The name "PCB" is an abbreviation of "PicoCoulomB", technical terminology for the type of electrical charge generated by the piezoelectric sensors the company manufactures; it is also a registered trademark of the company. "Piezotronics" combines the science of piezoelectricity with electronics. PCB® manufactures sensors and related instrumentation. Sensors are small electromechanical instruments for the measurement of acceleration, dynamic pressure, force, acoustics, torque, load, strain, shock, vibration and sound.
History
Founded by Robert W. Lally and James (Jim) F. Lally in 1967, PCB Piezotronics has evolved from a family business into a large engineering and manufacturing operation, with technical emphasis on the incorporation of integrated circuit-piezoelectric sensor technology. In 1967, integrated circuit piezoelectric sensors, also known as ICP sensors, which incorporate microelectronic circuitry, were developed and marketed.
The 1970s for PCB Piezotronics saw expansion of its standard product offerings, to include other types of sensor technologies. In 1971, the company developed a 100,000 g high-shock, ICP® quartz accelerometer; Impulse Hammers for structural excitation were developed in 1972; and in 1973, the first rugged, industrial-grade ICP® accelerometer was introduced to serve the emerging machinery health monitoring market. Employment grew to 25 employees. By 1975, PCB® had become one of the largest U.S. manufacturers of piezoelectric sensors.
During the 1980s, PCB® continued to develop new products. In 1982, the Structural* Modal Array Sensing System was developed to ease sensor installation and reduce set-up time on larger-scale modal surveys. Modally-Tuned* Impulse Hammers won the IR-100 Award as one of the top 100 technical developments for 1983. The 128-channel Data Harvester was invented in 1984 to provide sensor power and speed modal analysis by offering automatic bank switching capability. In 1986, PCB developed the first commercial quartz shear-structured ICP® accelerometer. Additionally in 1980, PCB® broke ground on of land at 3425 Walden Avenue for its new quartz technology center, a location which it continues to occupy today. The facility doubled in size in 1985, and in 1996 an additional was added. An acre of land to the west of the building was purchased for future expansion and in 1999 a addition was completed.
In 1995, Underwriters Laboratory certified PCB to the International Quality Standard ISO-9001. In January 2002, The American Association for Laboratory Accreditation (A2LA) recognized PCB with accreditation to ISO 17025, an international standard for assuring technical competence in calibration and testing.
In 2015, Jim Lally was presented the lifetime achievement award at the 86th annual Shock and Vibration Symposium in Orlando. "This award recognized Jim Lally's 60 years of dedication to providing dynamic sensor technology in blast, ballistics, shock, vibration, acoustics, strain, and dynamic force to the SAVE community. It also recognizes both his generous contributions to educational institutions and his professionalism in corporate interactions."
PCB Group, Inc. was acquired by MTS Systems Corporation in July 2016 but retained its president David Hore and all its employees and facilities.
Divisions
Today the company is organized into various divisions and product groups, and has representation in more than 60 countries worldwide. These divisions include PCB Automotive Sensors, based in Farmington Hills, Michigan; PCB Aerospace & Defense; IMI Sensors; and Larson Davis, based in Depew, NY. PCB product groups include Shock and Vibration; Microphones; Force; Pressure; and Electronics.
References
External links
Manufacturing companies based in New York (state)
Sensors
Technology companies established in 1967
1967 establishments in New York (state)
Privately held companies based in New York (state) | PCB Piezotronics | Technology,Engineering | 815 |
5,296,529 | https://en.wikipedia.org/wiki/NASA%20Space%20Science%20Data%20Coordinated%20Archive | The NASA Space Science Data Coordinated Archive (NSSDCA) serves as the permanent archive for NASA space science mission data. "Space science" includes astronomy and astrophysics, solar and space plasma physics, and planetary and lunar science. As the permanent archive, NSSDCA teams with NASA's discipline-specific space science "active archives" which provide access to data to researchers and, in some cases, to the general public. NSSDCA also serves as NASA's permanent archive for space physics mission data. It provides access to several geophysical models and to data from some non-NASA mission data. NSSDCA was called the National Space Science Data Center (NSSDC) prior to March 2015.
NSSDCA supports active space physics and astrophysics researchers. Web-based services allow the NSSDCA to support the general public, in the form of information about spacecraft and access to digital versions of selected imagery. NSSDCA also provides access to portions of its database, which contains information about the data archived at NSSDCA (and, in some cases, at other facilities), the spacecraft that generate space science data, and the experiments that generate those data. NSSDCA services also include data management standards and technologies.
NSSDCA is part of the Solar System Exploration Data Services Office (SSEDSO) in the Solar System Exploration Division at NASA's Goddard Space Flight Center. NSSDCA is sponsored by the Heliophysics Division of NASA's Science Mission Directorate. NSSDCA acts in concert with various NASA discipline data systems in providing certain data and services.
Overview
NSSDCA was first established (as NSSDC) at Goddard Space Flight Center in 1966. NSSDCA's staff consists largely of physical scientists, computer scientists, analysts, programmers, and data technicians. The staffing level, including civil service and onsite contractors, has ranged between 15 and 100 over the life of NSSDCA. Early in its life, NSSDCA accumulated data primarily on 7-track and 9-track tape and on various photoproducts, and all data dissemination was via media replication and mailing. Starting in the mid-1980s, NSSDCA received and disseminated increasing data volumes via electronic networks. Dissemination is presently via the internet, either by HTTP or FTP.
Astrophysics
Data Services: contains data and mission information: The Multiwavelength Milky Way, the Multimedia Catalog and the NSSDC Photo Gallery.
Flight Mission Information: contains lists of flight missions and information about them; this is where the NSSDC Master Catalog is along with mission-specific access. A graphical interface to mission information is in this area as well.
Related Information Services: contain detailed information about data held at NSSDC, via the Master Catalog, NSSDC Lunar and Planetary Science, and NSSDC Heliophysics.
NASA Astrophysics Data Archive/Service Centers: these include HEASARC (High Energy Astrophysics Science Archive Research Center), IRSA (Infrared Science Archives), LAMBDA (Legacy Archive for Microwave Background Data Analysis), MAST (Mikulski Archive for Space Telescopes).
Master Catalog
The NSSDC Master Catalog can be queried for information about data archived at the NSSDC, as well as for supplemental information, via the following query mechanisms (a scripting sketch follows the list):
Spacecraft Query. This interface allows queries to our database of orbital, suborbital, and interplanetary spacecraft.
Experiment Query. This interface allows queries for information about scientific experiments that flew on-board various space missions.
Data Collection Query. This interface allows queries for data that are tracked by NSSDC, primarily those that are currently archived here.
Personnel Query. This interface allows queries for locator information for personnel that were associated with various missions and/or data collections submitted to NSSDC.
Publication Query. This interface allows queries information about publications that are relevant to the data NSSDC archives or to the experiments and/or missions that accumulated the data. The publications so captured are not intended to be comprehensive bibliographies.
Lunar and Planetary Map Query. This interface allows queries of the lunar and planetary maps that NSSDC currently has in stock.
New and Updated Data Query. This interface allows queries for those data collections for which the NSSDC has recently acquired new data, either additions to existing collections or entirely new collections.
Lunar and Planetary Events Query. This interface allows queries for events that have occurred which are related to the exploration of the Moon and the Solar System.
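These interfaces are served as plain web pages, so catalog records can also be retrieved programmatically. A minimal sketch, assuming a display URL of the form used by the public Master Catalog web site (the URL pattern and spacecraft ID are illustrative assumptions, not a documented API):

```python
# Minimal sketch: fetching one spacecraft record from the NSSDCA
# Master Catalog web interface. The URL pattern and the COSPAR-style
# ID below are illustrative assumptions, not a documented API.
from urllib.request import urlopen

BASE = "https://nssdc.gsfc.nasa.gov/nmc/spacecraft/display.action"

def fetch_spacecraft_page(cospar_id: str) -> str:
    """Return the HTML of the Master Catalog page for one spacecraft."""
    with urlopen(f"{BASE}?id={cospar_id}") as resp:
        return resp.read().decode("utf-8", errors="replace")

html = fetch_spacecraft_page("1977-084A")  # Voyager 1's COSPAR ID
print(html[:200])  # inspect the start of the returned page
```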
See also
Planetary Data System
NASA/IPAC Extragalactic Database
HEASARC
Astrophysics Data System
NSSTC
References
Meeks, Brock N. (Feb 12, 1990) "NASA Data: Where No LAN Has Gone Before" InfoWorld Vol. 12, No. 7:S17
NASA Office of Space Science and Applications (OSSA) (1992) State of the Data Union NASA-TM-109951 pp 28-32
Bretherton, Francis (1993) 1992 Review of the World Data Center-A for Rockets and Satellites and the National Space Science Data Center US National Research Council, Washington, DC: US National Academies Press OCLC 30712126
US National Research Council (2002) Assessment of the Usefulness and Availability of NASA's Earth and Space Science Mission Data Washington, DC: The National Academies Press pages 24, 27, 33, 42-43
Burden, Paul R. (2010) A Subject Guide to Quality Web Sites Scarecrow Press page 506
External links
NSSDCA website
NSSDCA Master Catalog
Planetary Fact Sheets
NASA groups, organizations, and centers
Online databases
Spaceflight
Space science organizations | NASA Space Science Data Coordinated Archive | Astronomy | 1,149 |
24,980,071 | https://en.wikipedia.org/wiki/Shut%20up | "Shut up" is a direct command with a meaning very similar to "be quiet", but which is commonly perceived as a more forceful command to stop making noise or otherwise communicating, such as talking. The phrase is probably a shortened form of "shut up your mouth" or "shut your mouth up". Its use is generally considered rude and impolite, and may also be considered a form of profanity by some.
Initial meaning and development
Before the twentieth century, the phrase "shut up" was rarely used as an imperative, and had a different meaning altogether. To say that someone was "shut up" meant that they were locked up, quarantined, or held prisoner. For example, several passages in the King James Version of the Bible instruct that if a priest determines that a person shows certain symptoms of illness, "then the priest shall shut up him that hath the plague of the scall seven days". This meaning was also used in the sense of closing something, such as a business, and it is also from this use that the longer phrase "shut up your mouth" likely originated.
One source has indicated this:
However, Shakespeare's use of the phrase in King Lear is limited to a reference to the shutting of doors at the end of Scene II, with the characters of Regan and Cornwall both advising the King, "Shut up your doors". The earlier meaning of the phrase, to close something, is widely used in Little Dorrit, but is used in one instance in a manner which foreshadows the modern usage:
In another instance in that work, the phrase "shut it up" is used to indicate the resolution of a matter:
The Routledge Dictionary of Historical Slang cites an 1858 lecture on slang as noting that "when a man... holds his peace, he shuts up." As early as 1859, use of the shorter phrase was expressly conveyed in a literary work:
One 1888 source identifies the phrase by its similarity to Shakespeare's use in Much Ado About Nothing of "the Spanish phrase pocas palabras, 'few words,' which is said to be pretty well the equivalent of our slang phrase 'shut up'". The usage by Rudyard Kipling appears in his poem "The Young British Soldier", published in 1892, told in the voice of a seasoned military veteran who says to the fresh troops, "Now all you recruities what's drafted to-day,/You shut up your rag-box an' 'ark to my lay".
Variations
More forceful and sometimes vulgar forms of the phrase may be constructed by the infixation of modifiers, including "shut the hell up" and "shut the fuck up". In shut the heck up, heck is substituted for more aggressive modifiers. In instant messenger communications, these are in turn often abbreviated to STHU and STFU, respectively. Similar phrases include "hush" and "shush" or "hush up" and "shush up" (which are generally less aggressive). Another common variation is "shut your mouth", sometimes substituting "mouth" with another word conveying similar meaning, such as head, face, teeth, trap, yap, chops, crunch, cake-hole (in places including the UK and New Zealand), pie-hole (in the United States), or, more archaically, gob. Another variation, shut it, substitutes "it" for the mouth, leaving the thing to be shut to be understood by implication.
Variations produced by changes in spelling, spacing, or slurring of words include shaddap, shurrup, shurrit, shutup, and shuttup. By derivation, a "shut-up sandwich" is another name for a punch in the mouth. On The King of Queens, Doug Heffernan (the main character played by Kevin James) is known for saying shutty, which is also a variation of the phrase that has since been used by the show's fans.
A dysphemism, shut the front door, was used often by Stacy London of TLC's What Not to Wear during the U.S. show's run from 2003 to 2013. It was also used in an Oreo commercial on American TV in 2011, prompting some commentators to object.
A similar phrase in Spanish, (), was said by King Juan Carlos I of Spain to Venezuelan president Hugo Chávez, in response to repeated interruptions by Chávez at a 2007 diplomatic conference. The blunt comment from one head of state to another surprised many, and received "general applause" from the audience.
Objectionability
The objectionability of the phrase has varied over time. For example, in 1957, Milwaukee morning radio personality Bob "Coffeehead" Larsen banned the song "Mama Look at Bubu" from his show for its repeated inclusion of the phrase, which Larsen felt would set a bad example for the younger listeners at that hour. In 1968, the use of the phrase on the floor of the Australian Parliament drew a rebuke that "The phrase 'shut up' is not a parliamentary term. The expression is not the type which one should hear in a Parliament".
Alternative meanings
An alternative modern spoken usage is to express disbelief, or even amazement. When this (politer) usage is intended, the phrase is uttered with mild inflexion to express surprise. The phrase is also used in an ironic fashion, when the person demanding the action simultaneously demands that the subject of the command speak, as in "shut up and answer the question". The usage of this phrase for comedic effect traces at least as far back as the 1870s, where the title character of a short farce titled "Piperman's Predicaments" is commanded to "Shut up; and answer plainly". Another seemingly discordant use, tracing back to the 1920s, is the phrase "shut up and kiss me", which typically expresses both impatience and affection.
See also
Shut your mouth (disambiguation)
Silence
Talk to the hand
¿Por qué no te callas?
References
English-language slang
English-language idioms
Harassment and bullying | Shut up | Biology | 1,265 |
48,536,780 | https://en.wikipedia.org/wiki/Corf%20%28mining%29 | A corf (pl. corves) also spelt corve (pl. corves) in mining is a wicker basket or a small human powered (in later times in the case of the larger mines, horse drawn) minecart for carrying or transporting coal, ore, etc. Human powered corfs had generally been phased out by the turn of the 20th century, with horse drawn corfs having been mostly replaced by horse drawn or motorised minecarts mounted on rails by the late 1920s. Also similar is a Tram, originally a box on runners, dragged like a sledge.
Origin of term
1350–1400; Middle English, from Dutch korf and German Korb, ultimately borrowed from Latin corbis ("basket"); cf. corbeil.
Survivors
The National Coal Mining Museum for England has a hazel basket-type corf from William Pit, near Whitehaven.
See also
Corf (fishing)
Decauville wagon
Minecart
Mineral wagon
Mines and Collieries Act 1842
References
External links
Mining equipment
History of mining in the United Kingdom
Weaving
Wagons
Human-powered vehicles
Animal-powered vehicles
History of the British Isles
Traditional mining
Underground mining | Corf (mining) | Engineering | 235 |
76,958,638 | https://en.wikipedia.org/wiki/Frederick%20Snare | Frederick Snare (December 4, 1862September 27, 1946) was an American engineer and international construction contractor.
Career
After an unsuccessful contracting business in 1885 in Huntingdon, he relocated to Philadelphia and established a new contracting firm. Frederick Snare and Wolfgang Gustav Triest established the Snare & Triest Company in 1898. The Snare & Triest Company was incorporated in 1900, with Snare as its President, and became the Frederick Snare Corporation in the 1920s. Snare's company operated in the United States, Cuba, Peru, Argentina, Colombia, and Panama. It grew to become one of Latin America's major contractual engineering firms.
In Havana, he constructed a country club after a group of American and British residents, led by Snare, arrived in 1911 and purchased an estate in Marianao. The original country club that Snare had established was renamed the Havana Biltmore Yacht and Country Club by the 1930s.
Golf
In 1922 and 1925, he won the Seniors' Golf Championship, an annual tournament of the United States Seniors Golf Association. Snare was a member of the Garden City Golf Club and National Golf Links of America. In 1927, he captained the United States Expeditionary Golf Forces at the first annual triangular international tournament in England.
Death
Frederick Snare died on September 22, 1946, at the Anglo-American Hospital in Havana, Cuba.
References
1862 births
1946 deaths
Civil engineers
American civil engineers
Engineers from Pennsylvania
Civil engineering contractors
American civil engineering contractors
American bridge engineers | Frederick Snare | Engineering | 308 |
25,587,984 | https://en.wikipedia.org/wiki/Malus%20%C3%97%20kaido | Malus × kaido, the midget crab apple or Kaido crab apple, is a hybrid species of genus Malus in the rose family, Rosaceae. It is a naturally-occurring hybrid of Malus baccata and M. spectabilis. It is native to north-central and northeastern China.
References
kaido
Crabapples
Hybrid plants
Flora of China
Plants described in 1873 | Malus × kaido | Biology | 81 |
67,816,296 | https://en.wikipedia.org/wiki/HD%201 | HD 1, also known as HIP 422, is the first star catalogued in the Henry Draper Catalogue. It is located in the northern circumpolar constellation Cepheus and has an apparent magnitude of 7.42, making it readily visible in binoculars, but not to the naked eye. The object is located relatively far away at a distance of 1,220 light years but is approaching the Solar System with a spectroscopic radial velocity of .
Characteristics
Originally thought to be a single object, HD 1 was revealed by observations from Griffin & McClure (2009) to be a single-lined spectroscopic binary. The components take approximately 6 years to circle each other in an eccentric orbit. The visible component is an evolved red giant branch (RGB) star with a stellar classification of G9-K0 IIIa, a spectral class intermediate between a G9 and a K0 giant star. It has 3 times the mass of the Sun, but at the age of 350 million years it has expanded to 30 times the Sun's girth. It radiates 226 times the luminosity of the Sun from its enlarged photosphere at an effective temperature of , giving it a yellowish-orange hue. HD 1A is metal-enriched, with an iron abundance 74% above solar levels. The object spins modestly, with a projected rotational velocity of .
References
Spectroscopic binaries
1
000422
K-type giants
G-type giants
Cepheus (constellation)
BD+67 01599 | HD 1 | Astronomy | 298 |
37,231,770 | https://en.wikipedia.org/wiki/Pregroup%20grammar | Pregroup grammar (PG) is a grammar formalism intimately related to categorial grammars. Much like categorial grammar (CG), PG is a kind of type logical grammar. Unlike CG, however, PG does not have a distinguished function type. Rather, PG uses inverse types combined with its monoidal operation.
Definition of a pregroup
A pregroup is a partially ordered algebra $(A, 1, \cdot, -^l, -^r, \leq)$ such that $(A, 1, \cdot)$ is a monoid, satisfying the following relations:
$x^l x \leq 1$ and $x x^r \leq 1$ (contraction)
$1 \leq x x^l$ and $1 \leq x^r x$ (expansion)
The contraction and expansion relations are sometimes called Ajdukiewicz laws.
From this, it can be proven that the following equations hold:

$x^l x x^l = x^l, \quad x x^r x = x, \quad (x^l)^r = x = (x^r)^l, \quad 1^l = 1 = 1^r, \quad (xy)^l = y^l x^l, \quad (xy)^r = y^r x^r$

$x^l$ and $x^r$ are called the left and right adjoints of x, respectively.
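For example, the first of these equations follows from one expansion step and one contraction step (a routine derivation, spelled out here for clarity):

$x^l = x^l \cdot 1 \leq x^l (x x^l) = (x^l x) x^l \leq 1 \cdot x^l = x^l$

Since the chain starts and ends with $x^l$, every inequality is an equality, and in particular $x^l x x^l = x^l$.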
The symbols $\cdot$ and $\leq$ are also written $\otimes$ and $\to$, respectively. In category theory, pregroups are also known as autonomous categories or (non-symmetric) compact closed categories. More typically, $x \cdot y$ will just be represented by adjacency, i.e. as $xy$.
Definition of a pregroup grammar
A pregroup grammar consists of a lexicon of words (and possibly morphemes) L, a set of atomic types T which freely generates a pregroup, and a relation that relates words to types. In simple pregroup grammars, typing is a function that maps words to only one type each.
Examples
Some simple, intuitive examples using English as the language to model demonstrate the core principles behind pregroups and their use in linguistic domains.
Let L = {John, Mary, the, dog, cat, met, barked, at}, let T = {N, S, N0}, and let the following typing relation hold:

John, Mary: $N$
dog, cat: $N_0$
the: $N N_0^l$
met: $N^r S N^l$
barked: $N^r S$
at: $S^r S N^l$
A sentence s that has type t is said to be grammatical if $t \leq S$. We can prove this by use of a chain of inequalities. For example, we can prove that John met Mary is grammatical by proving that $N (N^r S N^l) N \leq S$:

$N (N^r S N^l) N \leq S N^l N \leq S$

by first using contraction on $N N^r$ and then again on $N^l N$. A more convenient notation exists, however, that indicates contractions by connecting them with a drawn link between the contracting types (provided that the links are nested, i.e. don't cross). Words are also typically placed above their types to make the proof more intuitive. The same proof in this notation is simply
A more complex example proves that the dog barked at the cat is grammatical:
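The link diagrams do not reproduce well in plain text, but the same check is easy to mechanize. Below is a minimal sketch in Python of a grammaticality checker for the toy lexicon above; the encoding of adjoints as integer "orders" (0 for a base type, -1 for a left adjoint, +1 for a right adjoint) is an implementation convenience, not part of the formalism:

```python
# Minimal pregroup-reduction sketch for the toy lexicon above.
# A simple type is (base, order): order 0 is the plain type,
# -1 a left adjoint (x^l), +1 a right adjoint (x^r).

LEXICON = {
    "John":   [("N", 0)],
    "Mary":   [("N", 0)],
    "the":    [("N", 0), ("N0", -1)],
    "dog":    [("N0", 0)],
    "cat":    [("N0", 0)],
    "met":    [("N", 1), ("S", 0), ("N", -1)],
    "barked": [("N", 1), ("S", 0)],
    "at":     [("S", 1), ("S", 0), ("N", -1)],
}

def is_grammatical(sentence: str) -> bool:
    """Check whether the word types contract down to the single type S."""
    types = [t for word in sentence.split() for t in LEXICON[word]]
    changed = True
    while changed:
        changed = False
        for i in range(len(types) - 1):
            (b1, o1), (b2, o2) = types[i], types[i + 1]
            # Contraction: adjacent copies of the same base type cancel
            # when their adjoint orders differ by one (x^l x and x x^r).
            if b1 == b2 and o2 == o1 + 1:
                del types[i:i + 2]
                changed = True
                break
    return types == [("S", 0)]

print(is_grammatical("John met Mary"))              # True
print(is_grammatical("the dog barked at the cat"))  # True
print(is_grammatical("dog met the"))                # False
```

Greedy contraction is enough here: by Lambek's switching lemma, expansions are never needed when deciding a reduction to a simple type such as $S$.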
Historical notes
Pregroup grammars were introduced by Joachim Lambek in 1993 as a development of his syntactic calculus, replacing the quotients by adjoints. Such adjoints had already been used earlier by Harris but without iterated adjoints and expansion rules.
Adding such adjoints was interesting for handling more complex linguistic cases, where iterated adjoints are needed. It was also motivated by a more algebraic viewpoint: the definition of a pregroup is a weakening of that of a group, introducing a distinction between the left and right inverses and replacing the equality by an order. This weakening was needed because using types from a free group would not work: an adjective would get the type $n n^{-1} = 1$, hence it could be inserted at any position in the sentence.
Pregroup grammars have since been defined and studied for various languages (or fragments of them), including English, Italian, French, Persian and Sanskrit. Languages with a relatively free word order, such as Sanskrit, required the introduction of commutation relations to the pregroup, using precyclicity.
Semantics of pregroup grammars
Because of the lack of function types in PG, the usual method of giving a semantics via the λ-calculus or via function denotations is not available in any obvious way. Instead, two different methods exist, one purely formal method that corresponds to the λ-calculus, and one denotational method analogous to (a fragment of) the tensor mathematics of quantum mechanics.
Purely formal semantics
The purely formal semantics for PG consists of a logical language defined according to the following rules:
Given a set of atomic terms T = {a, b, ...} and atomic function symbols F = {fm, gn, ...} (where subscripts are meta-notational indicating arity), and variables x, y, ..., all constants, variables, and well-formed function applications are basic terms (a function application is well-formed when the function symbol is applied to the appropriate number of arguments, which can be drawn from the atomic terms, variables, or can be other basic terms)
Any basic term is a term
Given any variable x, [x] is a term
Given any terms m and n, is a term
Some examples of terms are f(x), g(a,h(x,y)), . A variable x is free in a term t if [x] does not appear in t, and a term with no free variables is a closed term. Terms can be typed with pregroup types in the obvious manner.
The usual conventions regarding α conversion apply.
For a given language, we give an assignment I that maps typed words to typed closed terms in a way that respects the pregroup structure of the types. For the English fragment given above we might therefore have the following assignment (with the obvious, implicit set of atomic terms and function symbols):
where E is the type of entities in the domain, and T is the type of truth values.
Together with this core definition of the semantics of PG, we also have reduction rules that are employed in parallel with the type reductions. Placing the syntactic types at the top and the semantics below, we have
For example, applying this to the types and semantics for the sentence (emphasizing the link being reduced)
For the sentence :
See also
Compact closed category
Lambek calculus
References
Claudia Casadio (2004), Pregroup Grammar. Theory and Applications
Grammar frameworks
Semantics
Type theory | Pregroup grammar | Mathematics | 1,169 |
10,590,885 | https://en.wikipedia.org/wiki/Ice%20class | Ice class refers to a notation assigned by a classification society or a national authority to denote the additional level of strengthening as well as other arrangements that enable a ship to navigate through sea ice. Some ice classes also have requirements for the ice-going performance of the vessel.
Significance of ice class
Not all ships are built to an ice class. Building a ship to an ice class means that the hull must be thicker and the structural scantlings heavier. Sea chests may need to be arranged differently depending on the class, and sea bays may be required to ensure that the sea chest does not become blocked with ice. Most of the stronger classes require several forms of rudder and propeller protection: two rudder pintles are usually required, and strengthened propeller tips are often required in the stronger ice classes. More watertight bulkheads, in addition to those required by a ship's normal class, are usually required. In addition, heating arrangements for fuel tanks, ballast tanks, and other tanks vital to the ship's operation may be required depending on the class.
Different ice classes
IACS Polar Class
Ships can be assigned one of seven Polar Classes (PC) ranging from PC 1 for year-round operation in all polar waters to PC 7 for summer and autumn operation in thin first-year ice based on the Unified Requirements for Polar Class Ships developed by the International Association of Classification Societies (IACS). The IACS Polar Class rules were developed to harmonize the ice class rules between different classification societies and complement the IMO Guidelines for Ships Operating in Arctic Ice Covered Waters.
Finnish-Swedish ice class
Traffic restrictions in the Baltic Sea during winter months are based on the Finnish-Swedish ice classes. These restrictions, imposed by the local maritime administrations, declare the minimum requirements for ships that are given icebreaker assistance, for example "ice class 1A, 2000 DWT".
In the Finnish-Swedish ice class rules, merchant ships operating in first-year ice in the Baltic Sea are divided into six ice classes based on requirements for hull structural design, engine output and performance in ice according to the regulations issued by the Swedish Maritime Administration and the Finnish Transport and Communications Agency (Traficom). International classification societies have incorporated the Finnish-Swedish ice class rules to their own rulebooks and offer equivalent ice class notations that are recognized by the Finnish and Swedish authorities.
Ships of the highest ice class, 1A Super, are designed to operate in difficult ice conditions mainly without icebreaker assistance while ships of lower ice classes 1A, 1B and 1C are assumed to rely on icebreaker assistance. In addition there are ice class 2 for steel-hulled ships with no ice strengthening that are capable of operating independently in very light ice conditions and class 3 for vessels that do not belong to any other class such as barges. In official context and legislation, the ice classes are usually spelled with Roman numerals, e.g. IA. Classification societies may sometimes use somewhat different distinguishing marks for Finnish-Swedish ice classes; for example, 1A Super is defined as Ice Class I AA by the American Bureau of Shipping (ABS) and ICE(1A*) by DNV GL.
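Since the recognized notations are essentially renamings of the same classes, the correspondence can be captured in a simple lookup table. A minimal sketch, encoding only the equivalences stated above (other class pairs would have to be added from the societies' rulebooks):

```python
# Minimal sketch: Finnish-Swedish ice classes and the equivalent
# notations mentioned in the text. Only the stated correspondences
# are encoded; anything else must come from the class rules.
EQUIVALENTS = {
    "1A Super": {"ABS": "Ice Class I AA", "DNV GL": "ICE(1A*)"},
    "1A":       {"DNV GL": "Ice(1A)"},
    "1B":       {"DNV GL": "Ice(1B)"},
    "1C":       {"DNV GL": "Ice(1C)"},
}

def equivalent(fs_class: str, society: str) -> str | None:
    """Return a society's notation for a Finnish-Swedish ice class."""
    return EQUIVALENTS.get(fs_class, {}).get(society)

print(equivalent("1A Super", "ABS"))   # Ice Class I AA
print(equivalent("1C", "DNV GL"))      # Ice(1C)
```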
Classification societies
American Bureau of Shipping
The American Bureau of Shipping has a system of ice classes which includes classes A5 through A0, B0, C0, and D0. A5 is the strongest of the classes, and D0 the weakest. All other major classification societies have a similar system of ice classes, and converting between them is relatively easy: in most cases only the names of the classes differ, while the specifics of the Arctic class are identical. ABS Class A5 is the only Arctic class that may operate independently in extreme Arctic waters with no limitations. Other classes are subject to limitations on time of year, required escort (always by a vessel of higher ice class) and ice conditions.
DNV
Prior to the adoption of the Unified Requirements for Polar Class Ships, DNV (Det Norske Veritas until 2013; DNV GL in 2013–2021) maintained its own set of requirements for ships operating independently in freezing sub-Arctic, Arctic and Antarctic waters. Ships operating in first-year winter ice with pressure ridges could be assigned class notation ICE-05, -10, or -15, where the number indicates the nominal ice thickness used for structural design; for example, 0.5 metres for ICE-05. Vessels expected to encounter multi-year sea ice and glacial ice inclusions could be assigned the more stringent class notations POLAR-10, -20, or -30. Finally, vessels intended for icebreaking as their main purpose could be assigned an additional class notation "Icebreaker" after the ice class, e.g. POLAR-10 Icebreaker.
Following the merger of Det Norske Veritas and Germanischer Lloyd in 2013, the old Det Norske Veritas ice class rules were superseded by new DNV GL ice classes.
DNV GL
DNV GL rules include requirements and additional class notations Ice(C) and Ice(E) for ships intended for service in waters with light ice conditions and localized drift ice, Ice(1C) through Ice(1A*) for vessels operating in northern Baltic Sea (corresponding to Finnish-Swedish ice classes 1C through 1A Super), an additional notation Ice(1A*F) for high-powered ships in regular traffic in heavy Baltic ice, and PC(1) through PC(7) for ships meeting the IACS Polar Class requirements. Ships engaged in icebreaking operations may be assigned an additional notation "Icebreaker" and ships designed to operate stern-first in ice an additional notation DAV.
Russian Maritime Register of Shipping
The Russian Maritime Register of Shipping (RS), established in 1913, has a long history of classing icebreakers and ice-strengthened vessels, and today maintains its own set of ice class rules for vessels navigating in freezing non-Arctic and Arctic seas. Out of about 5,000 vessels classified by the RS, over 3,200 are strengthened for navigation in ice and 300 of these have an ice class intended for operations in Arctic waters.
The RS ice class rules have been revised and the class notations changed several times over the years. Currently, the ice classes are divided into non-Arctic, Arctic and icebreaker classes. The ice class notation is followed by a number which denotes the level of ice strengthening: Ice1 to Ice3 for non-Arctic ships, Arc4 to Arc9 for Arctic ships, and Icebreaker6 to Icebreaker9 for icebreakers. These ice classes can be assigned in parallel with the Finnish-Swedish ice class and/or the IACS Polar Class, provided the vessel complies with all applicable rules. The selection of ice class is based on the operating area in the Russian Arctic, the time of year, ice conditions, operating tactics, and whether the vessel operates under icebreaker escort or independently. In addition, the icebreaker classes have additional requirements for minimum shaft power and icebreaking capability.
Lloyd's Register
Lloyd's Register assigns ice classes based on Baltic Sea and Arctic Ocean conditions.
Canadian ice classes
Arctic Class
A class attributed to a vessel under the Canadian Arctic Shipping Pollution Prevention Regulations regime, which indicates that the vessel met the requirements of those regulations.
Up to December 2017, the Canadian Arctic Shipping Pollution Prevention Regulations established nine Arctic classes for ships (Arctic Class 1, 1A, 2, 3, 4, 6, 7, 8 or 10), based on requirements for hull structural design, engine power, engine cooling water arrangement, propeller, rudder and steering gear, and performance in ice.
Canadian Arctic Class (CAC)
Source:
A class attributed to a vessel under the Canada Shipping Act regime, which indicates that the vessel met the requirements of the applicable standards of TP 12260 Equivalent Standards for the Construction of Arctic Class Ships, published by the Department of Transport, on December 1, 1995.
This new system exists for determining how the most highly ice-strengthened vessels are classed by Transport Canada, Marine Safety. Four Canadian Arctic Classes (CAC) have now replaced the previous Arctic 1 - Arctic 10 Classes. Details of the new structural classifications are provided in the Transport Canada publication Equivalent Standards For The Construction Of Arctic Class Ships - TP 12260E; to summarize:
CAC 1 is seen as an icebreaker which can operate anywhere in the Arctic and can proceed through Multi-Year ice continuously or by ramming according to the owner's performance requirements. A CAC 1 ship is capable of navigation in any ice regime found in the Canadian Arctic and unrestricted ramming of the heaviest ice features (except icebergs or similar ice formations) for the purpose of ice management.
CAC 2 is seen as a commercial cargo carrying ship which can trade anywhere in the Arctic, but would take the easiest route. It could proceed through Multi-Year ice continuously or by ramming according to the owner's performance requirements. A CAC 2 ship is capable of navigation in any ice regime found in the Canadian Arctic and ramming of heavy ice feature restricted by its structural capability.
CAC 3 is seen as commercial cargo carrying ship which can trade in the Arctic where ice regimes permit. It would proceed through Multi-Year ice only when it is unavoidable and would do so in a controlled manner usually by ramming. It would be unrestricted in Second and heavy First-Year ice.
CAC 4 is seen as commercial cargo carrying ship which can trade in the Arctic where ice regimes permit. It would be capable of navigating in any thickness of First-Year ice found in the Canadian Arctic, including First-Year ridges. It would avoid Multi-Year ice and when this is not possible it would push or ram at very low speeds.
Vessels CAC 1, 2, 3, and 4 may also be considered suitable escorts, capable of escorting ships of lower classes. Canada has developed structural standards for each of these classes. Ships built to polar standards of other Classification Societies and national authorities can apply for CAC equivalency on a case-by-case basis, as can owners of vessels previously classified under the existing Canadian system for Arctic Class vessels.
Note: The CAC categories are equivalent to the Arctic Classes as shown in the table. These nominal equivalencies are not reciprocal.
References
External links
Ship classification societies
Shipbuilding
Sea ice
Baltic Sea | Ice class | Physics,Engineering | 2,106 |
147,487 | https://en.wikipedia.org/wiki/Strobilurin | Strobilurins are a group of natural products and their synthetic analogs. A number of strobilurins are used in agriculture as fungicides. They are part of the larger group of QIs (Quinone outside Inhibitors), which act to inhibit the respiratory chain at the level of Complex III.
The first parent natural products, strobilurins A and B, were extracted from the fungus Strobilurus tenacellus.
Commercial strobilurin fungicides were developed through optimization of photostability and activity.
Strobilurins represented a major development in fungus-based fungicides. First released in 1996, strobilurin fungicides now number ten major products on the market, which together account for 23–25% of global fungicide sales.
Examples of commercialized strobilurin derivatives are azoxystrobin, kresoxim-methyl, picoxystrobin, fluoxastrobin, oryzastrobin, dimoxystrobin, pyraclostrobin and trifloxystrobin.
Strobilurins are mostly contact fungicides with a long half-life, as they are absorbed into the cuticle and not transported any further. They have a suppressive effect on other fungi, reducing competition for nutrients; they inhibit electron transfer in mitochondria, disrupting metabolism and preventing growth of the target fungi.
Natural strobilurins
Strobilurin A
Strobilurin A (also known as mucidin) is produced by Oudemansiella mucida, Strobilurus tenacellus, Bolinea lutea, and others. When first isolated it was incorrectly assigned as the (E,E,E) geometric isomer but was later identified by total synthesis as being the (E,Z,E) isomer, as shown.
9-Methoxystrobilurin A
9-Methoxystrobilurin A is produced by Favolaschia spp.
Strobilurin B
Strobilurin B is produced by S. tenacellus.
Strobilurin C
Strobilurin C is produced by X. longipes and X. melanotricha.
Strobilurin D and G
Strobilurin D is produced by Cyphellopsis anomala. Its structure was originally incorrectly assigned and is now considered to be identical to that of strobilurin G, produced by B. lutea. A related material, hydroxystrobilurin D, with an additional hydroxyl group attached to the methyl of the main chain is produced by Mycena sanguinolenta.
Strobilurin E
Strobilurin E is produced by Crepidotus fulvotomentosus and Favolaschia spp.
Strobilurin F2
Strobilurin F2 is produced by B. lutea.
Strobilurin H
Strobilurin H is produced by B. lutea. The natural product with a phenolic hydroxy group in place of the aromatic methoxy group of strobilurin H is called strobilurin F1 and is found in C. anomala and Agaricus spp.
Strobilurin X
Strobilurin X is produced by O. mucida.
Oudemansins
The oudemansins are closely related to the strobilurins and are also quinone outside inhibitors.
Oudemansin A with R1 = R2 = H was first described in 1979, after being isolated from mycelial fermentations of the basidiomycete fungus Oudemansiella mucida.
Later it was found in cultures of the basidiomycete fungi Mycena polygramma and Xerula melanotricha. The latter fungus also produces oudemansin B, with R1 = MeO and R2 = Cl. Oudemansin X, with R1 = H and R2 = MeO was isolated from Oudemansiella radicata.
Synthetic strobilurins
The discovery of the strobilurin class of fungicides led to the development of a group of commercial fungicides used in agriculture. Examples are shown below.
See also
Plant pathology
References
External links
From: David Moore, Geoffrey D. Robson, Anthony P. J. Trinci, 21st Century Guidebook to Fungi, 2nd edition.
Fungicides
Respiratory toxins
Complex III inhibitors
Strobilurins | Strobilurin | Chemistry,Biology | 955 |
5,269,034 | https://en.wikipedia.org/wiki/UCSF%20Chimera | UCSF Chimera (or simply Chimera) is an extensible program for interactive visualization and analysis of molecular structures and related data, including density maps, supramolecular assemblies, sequence alignments, docking results, trajectories, and conformational ensembles. High-quality images and movies can be created. Chimera includes complete documentation and can be downloaded free of charge for noncommercial use.
Chimera is developed by the Resource for Biocomputing, Visualization, and Informatics (RBVI) at the University of California, San Francisco. Development is partially supported by the National Institutes of Health (NIGMS grant P41-GM103311).
The next-generation program is UCSF ChimeraX.
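Because Chimera is extensible and scriptable through its embedded Python interpreter and its typed command language, routine visualization tasks can be automated. The following is only an illustrative sketch, assuming Chimera's bundled Python environment and its runCommand helper; the PDB entry and the output filename are arbitrary examples, and exact command names should be checked against the Chimera documentation.

from chimera import runCommand as rc   # Chimera's helper for executing typed commands

rc("open 1ubq")                  # fetch an example structure from the Protein Data Bank
rc("~display")                   # hide the default atom/bond display
rc("ribbon")                     # show a ribbon representation
rc("rainbow")                    # color from the N- to the C-terminus
rc("surface")                    # add a molecular surface
rc("copy file example.png png")  # save an image of the current view (example filename)

Such a script can be run in batch mode (for example, chimera --nogui script.py) to generate figures reproducibly.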
General structure analysis
automatic identification of atom types
hydrogen addition and partial charge assignment
high-quality hydrogen bond, contact, and clash detection
measurements: distances, angles, surface area, volume
calculation of centroids, axes, planes and associated measurements
amino acid rotamer libraries, protein Ramachandran plot, protein contact map
structure building and bond rotation
molecular dynamics trajectory playback (many formats), distance and angle plots
morphing between conformations of a protein or even different proteins
display of attributes (B-factor, hydrophobicity, etc.) with colors, radii, "worms"
easy creation of custom attributes with simple text file inputs
ViewDock tool to facilitate interactive screening of docking results
rich set of commands, powerful specification syntax
many formats read, PDB and Mol2 written
Web fetch from the Protein Data Bank, CATH or SCOP (domains), EDS (density maps), EMDB (density maps), ModBase (comparative models), CASTp (protein pocket measurements), Pub3D (small molecule structures), VIPERdb (icosahedral virus capsids), UniProt (protein sequences with feature annotations), others
interfaces to PDB2PQR charge/radius assignment, APBS electrostatics calculations, AutoDock Vina single-ligand docking
Presentation images and movies
high-resolution images
visual effects including depth-cueing, interactive shadows, silhouette edges, multicolor backgrounds
standard molecular representations (sticks, spheres, ribbons, molecular surfaces)
pipes-and-planks for helices and strands; nucleotide objects including lollipops and ladder rungs
ellipsoids to show anisotropic B-factors
nonmolecular geometric objects
renderings of density maps and other volume data (see below)
labeling with text, symbols, arrows, color keys
different structures can be clipped differently and at any angle
optional raytracing with bundled POV-Ray
scene export to X3D and other formats
simple graphical interface for creating movies interactively
scenes can be placed as keyframes along an animation timeline
alternatively, movie content and recording can be scripted; rich set of related commands
movie recording is integrated with morphing and MD trajectory playback
Volume data tools
many formats of volume data maps (electron density, electrostatic potential, others) read, several written
interactive threshold adjustment, multiple isosurfaces (mesh or solid), transparent renderings
fitting of atomic coordinates to maps and maps to maps
density maps can be created from atomic coordinates
markers can be placed in maps and connected with smooth paths
display of individual data planes or multiple orthogonal planes
volume data time series playback and morphing
many tools for segmenting and editing maps
Gaussian smoothing, Fourier transform, other filtering and normalization
measurements: surface area, surface-enclosed volume, map symmetry, others
Sequence-structure tools
many sequence alignment formats read, written
sequence alignments can be created, edited
sequences automatically associate with structures
sequence-structure crosstalk: highlighting in one highlights the other
protein BLAST search via Web service
multiple sequence alignment via Clustal Omega and MUSCLE Web services
interfaces to MODELLER for homology modeling and loop building
structure superposition with or without pre-existing sequence alignment
generation of structure-based sequence alignments from multiple superpositions
several methods for calculating conservation and displaying values on associated structures
RMSD header (histogram above the sequences) showing spatial variability of associated structures
user-defined headers including histograms and colored symbols
UniProt and CDD feature annotations shown as colored boxes on sequences
trees in Newick format read/displayed
See also
List of molecular graphics systems
Molecular modelling
Molecular graphics
Molecular dynamics
Molecule editor
Software for molecular mechanics modeling
References
External links
Program download
Program documentation
Chimera Image Gallery and Animation Gallery
Publications about Chimera
Resource for Biocomputing, Visualization, and Informatics
University of California, San Francisco
UCSF ChimeraX website
Freeware
Molecular modelling software | UCSF Chimera | Chemistry | 950 |
39,657,209 | https://en.wikipedia.org/wiki/Perfusion%20MRI | Perfusion MRI or perfusion-weighted imaging (PWI) is perfusion scanning by the use of a particular MRI sequence. The acquired data are then post-processed to obtain perfusion maps with different parameters, such as BV (blood volume), BF (blood flow), MTT (mean transit time) and TTP (time to peak).
Clinical use
In cerebral infarction, the penumbra has decreased perfusion. Another MRI sequence, diffusion weighted MRI, estimates the amount of tissue that is already necrotic, and the combination of those sequences can therefore be used to estimate the amount of brain tissue that is salvageable by thrombolysis and/or thrombectomy.
Sequences
There are 3 main techniques for perfusion MRI:
Dynamic susceptibility contrast (DSC): Gadolinium contrast is injected, and rapid repeated imaging (generally gradient-echo echo-planar T2*-weighted) quantifies susceptibility-induced signal loss.
Dynamic contrast enhanced (DCE): Measuring shortening of the spin–lattice relaxation (T1) induced by a gadolinium contrast bolus
Arterial spin labelling (ASL): Magnetic labeling of arterial blood below the imaging slab, without the need of gadolinium contrast
It can also be argued that diffusion MRI models, such as intravoxel incoherent motion, also attempt to capture perfusion.
Dynamic susceptibility contrast
In Dynamic susceptibility contrast MR imaging (DSC-MRI, or simply DSC), Gadolinium contrast agent (Gd) is injected (usually intravenously) and a time series of fast T2*-weighted images is acquired. As Gadolinium passes through the tissues, it induces a reduction of T2* in the nearby water protons; the corresponding decrease in signal intensity observed depends on the local Gd concentration, which may be considered a proxy for perfusion. The acquired time series data are then postprocessed to obtain perfusion maps with different parameters, such as BV (blood volume), BF (blood flow), MTT (mean transit time) and TTP (time to peak).
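As a rough illustration of this post-processing for a single voxel, the sketch below (NumPy; the echo time, the baseline window, and the synthetic bolus curve are stand-in assumptions rather than part of any standard package) converts the signal-time curve to a relative concentration curve via ΔR2*(t) = -ln(S(t)/S0)/TE and then reads off relative BV, an MTT estimate, and TTP; a rigorous MTT would additionally require deconvolution with an arterial input function, which is omitted here.

import numpy as np

def trapezoid(y, x):
    # simple trapezoidal integration, kept explicit for clarity
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def dsc_maps(signal, t, TE, n_baseline=10):
    # Simplified single-voxel DSC post-processing (illustrative assumptions only).
    S0 = signal[:n_baseline].mean()              # pre-bolus baseline signal
    delta_R2s = -np.log(signal / S0) / TE        # proportional to local Gd concentration
    rBV = trapezoid(delta_R2s, t)                # relative blood volume (area under curve)
    TTP = t[np.argmax(delta_R2s)]                # time to peak
    MTT = trapezoid(t * delta_R2s, t) / rBV      # crude first-moment estimate of MTT
    rBF = rBV / MTT                              # central volume theorem: BF = BV / MTT
    return rBV, rBF, MTT, TTP

# Synthetic bolus passage standing in for measured data:
t = np.arange(0.0, 90.0, 1.5)                                  # seconds
x = np.maximum(t - 20.0, 0.0)                                  # bolus arrives at ~20 s
signal = 1000.0 * np.exp(-0.03 * x**2 * np.exp(-x / 4.0))      # T2*-weighted signal drop
print(dsc_maps(signal, t, TE=0.030))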
Dynamic contrast-enhanced imaging
Dynamic contrast-enhanced (DCE) imaging gives information about physiological tissue characteristics such as transport from blood to tissue and blood volume. It is typically used to measure how a contrast agent moves from the blood to the tissue. The concentration of the contrast agent is measured as it passes from the blood vessels to the extracellular space of the tissue (it does not pass the membranes of cells) and as it goes back to the blood vessels.
The contrast agents used for DCE-MRI are often gadolinium based. Interaction with the gadolinium (Gd) contrast agent (commonly a gadolinium ion chelate) causes the relaxation time of water protons to decrease, and therefore images acquired after gadolinium injection display higher signal in T1-weighted images, indicating the presence of the agent. It is important to note that, unlike some techniques such as PET imaging, the contrast agent is not imaged directly, but by an indirect effect on water protons. The common procedure for a DCE-MRI exam is to acquire a regular T1-weighted MRI scan (with no gadolinium), and then gadolinium is injected (usually as an intravenous bolus at a dose of 0.05–0.1 mmol/kg) before further T1-weighted scanning. DCE-MRI may be acquired with or without a pause for contrast injection and may have varying time resolution depending on preference – faster imaging (less than 10s per imaging volume) allows pharmacokinetic (PK) modelling of contrast agent but can limit possible image resolution. Slower time resolution allows more detailed images, but may limit interpretation to only looking at signal intensity curve shape. In general, persistent increased signal intensity (corresponding to decreased T1 and thus increased Gd interaction) in a DCE-MRI image voxel indicates permeable blood vessels characteristic of tumor tissue, where Gd has leaked into the extravascular extracellular space. In tissues with healthy cells or a high cell density, gadolinium re-enters the vessels faster since it cannot pass the cell membranes. In damaged tissues or tissues with a lower cell density, the gadolinium stays in the extracellular space longer.
Pharmacokinetic modelling of gadolinium in DCE-MRI is complex and requires choosing a model. There are a variety of models, which describe tissue structure differently, including size and structure of plasma fraction, extravascular extracellular space, and the resulting parameters relating to permeability, surface area, and transfer constants. DCE-MRI can also provide model-independent parameters, such as T1 (which is not technically part of the contrast scan and can be acquired independently) and (initial) area under the gadolinium curve (IAUGC, often given with number of seconds from injection - i.e., IAUGC60), which may be more reproducible. Accurate measurement of T1 is required for some pharmacokinetic models, which can be estimated from 2 pre-gadolinium images of varying excitation pulse flip angle, though this method is not intrinsically quantitative. Some models require knowledge of the arterial input function, which may be measured on a per patient basis or taken as a population function from literature, and can be an important variable for modelling.
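For example, a model-independent parameter such as IAUGC60 is simply a numerical integration of the estimated tissue gadolinium concentration over the first 60 seconds after injection. The sketch below assumes the concentration-time curve has already been derived from the T1-weighted signal; the curve and its numbers are placeholders, not data.

import numpy as np

def iaugc(t, concentration, window=60.0):
    # Initial area under the gadolinium curve over the first `window` seconds (IAUGC60 by default).
    mask = t <= window
    y, x = concentration[mask], t[mask]
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))   # trapezoidal rule

# Placeholder uptake curve: fast wash-in followed by slow wash-out.
t = np.arange(0.0, 300.0, 5.0)                                  # 5 s temporal resolution
conc = 0.6 * (1.0 - np.exp(-t / 25.0)) * np.exp(-t / 400.0)     # mM, illustrative only
print(f"IAUGC60 = {iaugc(t, conc):.1f} mM*s")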
Arterial spin labelling
Arterial spin labelling (ASL) has the advantage of not relying on an injected contrast agent, instead inferring perfusion from a drop in signal observed in the imaging slice arising from inflowing spins (outside the imaging slice) having been selectively inverted or saturated. A number of ASL schemes are possible, the simplest being flow alternating inversion recovery (FAIR) which requires two acquisitions of identical parameters with the exception of the out-of-slice inversion; the difference in the two images is theoretically only from inflowing spins, and may be considered a 'perfusion map'.
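A minimal FAIR-style difference image can be formed as sketched below; the control and label arrays stand in for repeated acquisitions, and turning the difference into quantitative flow units would additionally require a kinetic model and the labelling efficiency, both omitted here.

import numpy as np

def asl_difference_map(control_series, label_series):
    # Average (control - label) for a FAIR-like ASL acquisition.
    # Input arrays have shape (n_repeats, ny, nx); averaging offsets the low SNR of ASL.
    control = control_series.mean(axis=0)
    label = label_series.mean(axis=0)
    return control - label          # signal attributable to inflowing labelled spins

# Toy data standing in for repeated control/label acquisitions (~1% inflow-related difference):
rng = np.random.default_rng(0)
control = 100.0 + rng.normal(0.0, 1.0, size=(30, 64, 64))
label = 99.0 + rng.normal(0.0, 1.0, size=(30, 64, 64))
perfusion_map = asl_difference_map(control, label)
print(perfusion_map.mean())        # ~1.0 in these arbitrary units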
References
Diagnostic neurology
Neuroimaging
Magnetic resonance imaging | Perfusion MRI | Chemistry | 1,285 |
14,831,122 | https://en.wikipedia.org/wiki/ACM%20Software%20System%20Award | The ACM Software System Award is an annual award that honors people or an organization "for developing a software system that has had a lasting influence, reflected in contributions to concepts, in commercial acceptance, or both". It is awarded by the Association for Computing Machinery (ACM) since 1983, with a cash prize sponsored by IBM of currently $35,000.
Recipients
The following is a list of recipients of the ACM Software System Award:
See also
Software system
List of computer science awards
References
External links
Software System Award — ACM Awards
Awards established in 1983
Software System Award
Computer science awards | ACM Software System Award | Technology | 120 |
1,635,629 | https://en.wikipedia.org/wiki/Society%20of%20Petroleum%20Engineers | The Society of Petroleum Engineers (SPE) is a 501(c)(3) not-for-profit professional organization.
SPE provides a worldwide forum for oil and natural gas exploration and production (E&P) professionals to exchange technical knowledge and best practices. SPE manages OnePetro and PetroWiki, in addition to publishing magazines, peer-reviewed journals, and books. SPE also hosts more than 100 events each year across the globe as well as providing online tools and in-person training opportunities. SPE's technical library (OnePetro) contains more than 314,000 technical papers—products of SPE conferences and periodicals, made available to the entire industry.
SPE has offices in Dallas, Houston, Calgary, Dubai and Kuala Lumpur. SPE is a professional association for more than 127,000 engineers, scientists, managers, and educators. There are about 59,000 student members of SPE.
History
The history of the SPE began well before its actual establishment. During the decade after the 1901 discovery of the Spindletop field, the American Institute of Mining Engineers (AIME) saw a growing need for a forum in the booming new field of petroleum engineering. As a result, AIME formed a standing committee on oil and gas in 1913.
In 1922, the committee was expanded to become one of AIME's 10 professional divisions. The Petroleum Division of AIME continued to grow throughout the next three decades. By 1950, the Petroleum Division had become one of three separate branches of AIME, and in 1957 the Petroleum Branch of AIME was expanded once again to form a professional society.
SPE became tax-exempt in March 1985.
The first SPE Board of Directors meeting was held 6 October 1957. SPE continues to operate more than 100 events around the world.
Membership
SPE is a non-profit association for petroleum engineers. Petroleum engineers who become members of SPE gain access to several member benefits like a complimentary subscription to the Journal of Petroleum Technology, unlimited free webinars, and discounts on SPE events (conferences, workshops, training courses, etc.) and publications. SPE Connect is a site and app for SPE members to exchange technical knowledge, answer each other's practical application questions, and share best practices.
SPE is made up of about 127,000 members in 145 countries. SPE Sections are groups of SPE Professional Members, and SPE Student Chapters are groups of SPE Student Members typically named for the hosting university or a geographical region. 67,000+ professional members are affiliated with 192 SPE Sections, and about 59,000 student members are affiliated with the 392 SPE Student Chapters.
SPE annually grants scholarships to student members.
Awards
Annually, SPE recognizes individuals for their contribution to the oil and gas industry at the regional and international levels.
All individuals who receive SPE Awards are nominated by an industry colleague, mentor, or other peer, except for recipients of the Cedric K. Ferguson Young Technical Author Medal, which is awarded to SPE members who author a paper approved for publication in an SPE journal (peer-reviewed journals on oil and gas topics) before age 36. Eligibility for the awards is described online.
SPE International Awards are announced online, featured in the Journal of Petroleum Technology, and presented at the Annual Technical Conference and Exhibition.
Regional awards
SPE grants technical and professional awards at the regional level. To be considered for these awards, one must be nominated online. Regional technical award eligibility is described online. SPE regional award recipients are considered for the international level of the award they received in the following award season. Regional awards are presented at regional or section meetings.
Distinguished Lecturers
The SPE Distinguished Lecturer Committee (DL) each year selects a group of nominees to become SPE Distinguished Lecturers. SPE Distinguished Lecturers are nominated for the program and selected by the committee to share their industry expertise by lecturing at local SPE sections across the globe. Nominees are notified of their nomination and must submit a summary of their biography, a presentation that can be given in thirty minutes or less, and additional information for the DL committee. The schedule of DL talks is available online. Some DL talks are very popular and are made available online as webinars.
Publications
SPE publishes peer-reviewed journals, magazines, and books. Technical papers presented at SPE conferences or approved for publication in SPE peer-reviewed journals are also published to OnePetro.org.
Peer-reviewed Journal
SPE Journal, a leading publication in oil, petroleum, and natural gas, offers peer-reviewed papers showcasing methods and technology solutions by industry experts. Its first issue was published in 1996.
Magazines
SPE publishes five online magazines:
Journal of Petroleum Technology (JPT) is the SPE flagship magazine, providing articles on oil & gas technology advancements, issues, and other exploration and production industry news. The JPT newsletter is sent out weekly on Wednesdays. Anyone may sign up to receive the JPT newsletter, though some content is only accessible to members of SPE. Every SPE member receives a complimentary subscription to JPT.
Oil and Gas Facilities (OGF) is focused on delivering the latest news on project and technology shifts in the industry.
The Way Ahead (TWA) is by and for young professional members of SPE. It is the newest SPE magazine. It was first published in 2006 and moved from print to online in 2016.
HSE Now is aimed at covering the changes in health, safety, security, environmental, social responsibility, and regulations that impact the oil and gas industry.
Data Science and Digital Engineering presents the evolving landscape of digital technology and data management in the upstream oil and gas industry.
OnePetro
Launched in March 2007, OnePetro.org is a multi-society library that allows users to search for and access a broad range of technical literature related to the oil and gas exploration and production industry. OnePetro is a multi-association effort that reflects participation of many organizations. The Society of Petroleum Engineers (SPE) operates OnePetro on behalf of the participating organizations.
OnePetro currently contains more than 1.3 million searchable documents from 23 publishing partners. OnePetro users viewed 4.9 million items in 2023. OnePetro is the first online offering of documents from some organizations, making these materials widely available for the first time.
SPE Petroleum Engineering Certification
The SPE Petroleum Engineering Certification program was instituted as a way to certify petroleum engineers by examination and experience. This certification is similar to the Registration of Petroleum Engineers by state in the United States.
Certified professionals use "SPEC" after their name.
See also
Energy law
References
International professional associations
Engineering societies based in the United States
Petroleum engineering
International organizations based in the United States
Organizations based in Dallas | Society of Petroleum Engineers | Engineering | 1,381 |
24,385,398 | https://en.wikipedia.org/wiki/C12H17N | {{DISPLAYTITLE:C12H17N}}
The molecular formula C12H17N (molar mass: 175.27 g/mol) may refer to:
2-Benzylpiperidine
4-Benzylpiperidine
Indanylaminopropane
2-Methyl-3-phenylpiperidine
4-Phenylazepane
Phenylethylpyrrolidine | C12H17N | Chemistry | 90 |
76,747,649 | https://en.wikipedia.org/wiki/Anna%20Lawniczak | Anna T. Lawniczak (born 1953) is an applied mathematician known for her work on complex systems including lattice gas automata, a type of cellular automaton used to model fluid dynamics. Educated in Poland and the US, she has worked in the US and Canada, where she is a professor at the University of Guelph. She is the former president of the Canadian Applied and Industrial Mathematics Society.
Education and career
After earning a master's degree in engineering (summa cum laude) from the Wrocław University of Science and Technology in Poland, Lawniczak went to Southern Illinois University in the US for doctoral study in mathematics. She completed her Ph.D. in 1981, supervised by Philip J. Feinsilver.
Before taking her current position at the University of Guelph in 1989, Lawniczak was a professor at Louisiana State University in the US, and the University of Toronto in Canada.
She was president of the Canadian Applied and Industrial Mathematics Society / Société Canadienne de Mathématiques Appliquées et Industrielles (CAIMS/SCMAI) from 1997 to 2001. As president she guided a 1998 transition that included a new constitution, formal incorporation, a new annual conference, and a change from its former name, the Canadian Applied Mathematics Society / Société Canadienne de Mathématiques Appliquées.
Recognition
The Canadian Applied and Industrial Mathematics Society gave Lawniczak their Arthur Beaumont Distinguished Service Award in 2003. In the same year, the Fields Institute listed her as a Fellow in recognition of her "outstanding contributions to the Fields Institute and its activities".
The Engineering Institute of Canada named her as an EIC Fellow in 2018, after a nomination from IEEE Canada, naming her as "an international authority in the discrete modeling & simulation methods like Individually Based Simulation Models, Agent Based Simulations, Cellular Automata and Lattice Gas Cellular Automata, a field of which she is one of the co-developers".
References
External links
Home page
1953 births
Living people
Polish mathematicians
Polish women mathematicians
Canadian mathematicians
Canadian women mathematicians
Applied mathematicians
Cellular automatists
Wrocław University of Technology alumni
Southern Illinois University alumni
Louisiana State University faculty
Academic staff of the University of Toronto
Academic staff of the University of Guelph | Anna Lawniczak | Mathematics | 452 |
13,759,531 | https://en.wikipedia.org/wiki/HD%20173417 | HD 173417 is a single star in the northern constellation of Lyra. It is dimly visible to the naked eye with an apparent visual magnitude of 5.68, positioned about two degrees to the southwest of the bright star Sheliak. The distance to this star is approximately 169 light years based on parallax measurements, and it is slowly drifting closer with a radial velocity of −3 km/s.
The stellar classification of this star is F1III-IV, matching an evolving star with the mixed luminosity traits of a subgiant and giant star. It is 1.7 billion years old with a low metallicity and a relatively high projected rotational velocity of 54 km/s. The star has 1.6 times the mass of the Sun and 2.2 times the Sun's radius. It is radiating over 10 times the luminosity of the Sun from its photosphere at an effective temperature of 6,928 K.
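These quantities are mutually consistent under the Stefan–Boltzmann law, L = 4πR²σT⁴, or in solar units L/L☉ = (R/R☉)²(T/T☉)⁴. The short check below is only a plausibility calculation, taking the quoted radius and temperature and assuming a solar effective temperature of about 5,772 K.

# Consistency check of the quoted stellar parameters (R and T from above; T_SUN is an assumed reference).
R = 2.2            # radius in solar radii
T = 6928.0         # effective temperature in kelvin
T_SUN = 5772.0     # assumed solar effective temperature in kelvin

luminosity_solar = R**2 * (T / T_SUN)**4
print(f"L = {luminosity_solar:.1f} L_sun")   # about 10 L_sun, consistent with "over 10 times"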
References
F-type giants
F-type subgiants
Lyra
BD+31 3348
173417
091883
7044 | HD 173417 | Astronomy | 221 |
21,482,999 | https://en.wikipedia.org/wiki/Delta%20L | The Delta L, Thor-Delta L, or Thrust-Augmented Long Tank Thor-Delta was a US expendable launch system used to launch the Pioneer E and TETR satellites in 1969 (failed) and HEOS satellite in 1972. It was a member of the Delta family of rockets.
The Delta L was a three-stage rocket. The first stage was a Long Tank Thor, a stretched version of the Thor missile, augmented by three Castor-2 solid rocket boosters. The second stage was the Delta E. An FW-1D solid rocket motor was used as the third stage.
The first launch of the Delta L took place on 27 August 1969, from Launch Complex 17A at the Cape Canaveral Air Force Station. A defective valve caused plumbing in the hydraulics system to rupture and leak fluid, causing first-stage engine gimbaling to fail around minutes into launch. The stage completed its burn successfully, but threw the second stage far off course. Orbital velocity could not be achieved, and Range Safety sent the destruct command at T+383 seconds. Neither the Pioneer E nor the TETR payloads achieved orbit. The second and final Delta L launch was from Space Launch Complex 2E at the Vandenberg Air Force Base, on 31 January 1972. It successfully placed the HEOS-2 satellite into a highly elliptical orbit.
References
Delta (rocket family) | Delta L | Astronomy | 286 |
3,286,347 | https://en.wikipedia.org/wiki/Neutral%20red | Neutral red (toluylene red, Basic Red 5, or C.I. 50040) is a eurhodin dye used for staining in histology. It stains lysosomes red. It is used as a general stain in histology, as a counterstain in combination with other dyes, and for many staining methods. Together with Janus Green B, it is used to stain embryonal tissues and supravital staining of blood. It can be used for staining the Golgi apparatus in cells and Nissl granules in neurons.
In microbiology, it is used in the MacConkey agar to differentiate bacteria for lactose fermentation.
Neutral red can be used as a vital stain. The Neutral Red Cytotoxicity Assay was first developed by Ellen Borenfreund in 1984. In the Neutral Red Assay, live cells incorporate neutral red into their lysosomes. As cells begin to die, their ability to incorporate neutral red diminishes. Thus, loss of neutral red uptake corresponds to loss of cell viability. Neutral red is also used to stain cell cultures for plate titration of viruses.
Neutral red is added to some growth media for bacterial and cell cultures. It usually is available as a chloride salt.
Neutral red acts as a pH indicator, changing from red to yellow between pH 6.8 and 8.0.
References
Other references
Azin dyes
Vital stains | Neutral red | Chemistry,Materials_science | 304 |
36,732,980 | https://en.wikipedia.org/wiki/Phoenix%20Cluster | The Phoenix Cluster (SPT-CL J2344-4243) is a massive, Abell class type I galaxy cluster located in its namesake, the southern constellation of Phoenix. It was initially detected in 2010 during a 2,500 square degree survey of the southern sky using the Sunyaev–Zeldovich effect by the South Pole Telescope collaboration. It is one of the most massive galaxy clusters known, with the mass on the order of 2 , and is the most luminous X-ray cluster discovered, producing more X-rays than any other known massive cluster. It is located at a comoving distance of from Earth. About 42 member galaxies have been identified and are currently listed in the SIMBAD Astronomical Database, though the real number may be as high as 1,000 galaxies.
Discovery
The Phoenix Cluster was first reported in a paper by R. Williamson and colleagues during a survey by the South Pole Telescope in Antarctica, as one of the 26 galaxy clusters identified by the survey. The detection was conducted at frequencies of 95, 150, and 220 GHz; 14 of the galaxy clusters detected had been previously identified, while 12 – including the Phoenix Cluster – were new discoveries. The yet-to-be-named Phoenix Cluster (then identified only by its numerical catalogue entry SPT-CL J2344–4243) was noted as having "the largest X-ray luminosity of any cluster" described by the survey. A bright, type-2 Seyfert galaxy, identified as 2MASX J23444387-4243124, was also reported lying 19 arcseconds from the apparent center of the cluster; it would later be named Phoenix A, the cluster's central galaxy.
Characteristics
Owing to its extreme properties, the Phoenix Cluster has been extensively studied and is considered one of the most important objects of its type. A multiwavelength observational study by M. McDonald and colleagues shows that it has an extremely strong cooling flow rate (roughly 3,280 per annum), described as a runaway cooling flow. This measurement is one of the highest ever seen in the middle of a galaxy cluster. The very strong cooling flow has been suggested to arise because the feedback mechanism that prevents runaway cooling in other galaxy clusters may not yet be established in the Phoenix Cluster; the heating produced by the central black hole appears inadequate to provide such feedback (in contrast to the Perseus and Virgo clusters). This is further supported by the high starburst activity of the central galaxy Phoenix A, where stars form at 740 per annum (compared to the Milky Way's 1 per annum); the central active galactic nucleus does not appear to produce sufficient energy to ionize the galaxy's gas and suppress starburst activity.
Components
Central galaxy
The central elliptical cD galaxy of this cluster, Phoenix A (RBS 2043, 2MASX J23444387-4243124), hosts an active galactic nucleus that has been described as sharing both the properties of being a quasar and a type 2 Seyfert galaxy, which is powered by a central supermassive black hole. The galaxy has an uncertain morphology. Based on the "total" aperture at the K-band, Phoenix A has an angular diameter of 16.20 arcseconds, corresponding to a large isophotal diameter of , making it one of the largest known galaxies discovered from Earth.
Phoenix A also contains vast amounts of hot gas. More normal matter is present there than the total of all the other galaxies in the cluster. Data from observations indicate that hot gas is cooling in the central regions at a rate of /yr, the highest ever recorded.
It is also undergoing a massive starburst, the highest recorded in the middle of a galaxy cluster, although other galaxies at higher redshifts have a higher starburst rate.
Observations by a variety of telescopes including the GALEX and Herschel space telescopes shows that it has been converting the material to stars at an exceptionally high rate of 740 per year. This is considerably higher than that of NGC 1275 A, the central galaxy of the Perseus Cluster, where stars are formed at a rate around 20 times lower, or the one per year rate of star formation in the Milky Way.
Supermassive black hole
The central black hole of the Phoenix Cluster is the engine that drives both the Seyfert nucleus of Phoenix A and the relativistic jets that produce the inner cavities in the cluster center. M. Brockamp and colleagues modelled the innermost stellar density of the central galaxy and the adiabatic process that fuels the growth of its central black hole to create a calorimetric tool for estimating the black hole's mass. The team deduced an energy conversion parameter and related it to the behavior of the hot intracluster gas, the AGN feedback parameter, and the dynamics and density profiles of the galaxy to build an evolutionary model of how the central black hole may have grown in the past. In the case of Phoenix A, the results indicate far more extreme characteristics, with adiabatic models near their theoretical limits.
These models, as suggested by the paper, are indicative of a central black hole with estimated mass on the order of 100 billion , possibly even exceeding this mass, though the black hole's mass itself has not yet been measured through orbital mechanics. Such a high mass makes it potentially one of the most massive black holes known in the observable universe. A black hole of this mass has:
24,100 times the mass of the black hole at the center of the Milky Way (Sagittarius A*)
Twice the mass of the Triangulum Galaxy, including its dark matter halo.
Assuming it is a non-rotating black hole, an immense event horizon with the Schwarzschild diameter of , 100 times the distance from the Sun to Pluto.
A circumference that would take 71 days and 14 hours to travel at light speed (a rough numerical check of these last two figures is sketched below).
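The sketch below (plain Python, taking exactly 10^11 solar masses as the assumed mass and standard physical constants) reproduces the quoted Schwarzschild diameter and light-travel time from r_s = 2GM/c².

import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8          # speed of light, m/s
M_SUN = 1.989e30     # solar mass, kg
AU = 1.496e11        # astronomical unit, m

M = 1e11 * M_SUN                          # assumed mass: 100 billion solar masses
r_s = 2.0 * G * M / C**2                  # Schwarzschild radius
diameter_au = 2.0 * r_s / AU
light_travel_days = (math.pi * 2.0 * r_s / C) / 86400.0   # time to traverse the circumference

print(f"Schwarzschild diameter ~ {diameter_au:.0f} AU "
      f"(~{diameter_au / 39.5:.0f} times the ~39.5 AU Sun-Pluto distance)")
print(f"Light-travel time around the horizon ~ {light_travel_days:.1f} days")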
Such a high mass may place it into a proposed category of stupendously large black holes (SLABs), black holes that may have been seeded by primordial black holes with masses that may reach or more, larger than the upper maximum limit for at least luminous accreting black holes hosted by disc galaxies of about
References
External links
Animation of the Phoenix Cluster
Chandra X-Ray Observatory, Blog Home: Q&A With Michael McDonald Wed, 08/08/2012 – 16:13
The Prediction and Fulfillment of the "Effect": An Interview with Rashid Sunyaev, August 15, 2012
Astronomical objects discovered in 2010
Astronomical radio sources
Astronomical X-ray sources
Galaxy clusters
Phoenix (constellation) | Phoenix Cluster | Astronomy | 1,386 |
3,470,540 | https://en.wikipedia.org/wiki/Zener%20pinning | Zener pinning is the influence of a dispersion of fine particles on the movement of low- and high-angle grain boundaries through a polycrystalline material. Small particles act to prevent the motion of such boundaries by exerting a pinning pressure which counteracts the driving force pushing the boundaries. Zener pinning is very important in materials processing as it has a strong influence on recovery, recrystallization and grain growth.
Origin of the pinning force
A boundary is an imperfection in the crystal structure and as such is associated with a certain quantity of energy. When a boundary passes through an incoherent particle then the portion of boundary that would be inside the particle essentially ceases to exist. In order to move past the particle some new boundary must be created, and this is energetically unfavourable. While the region of boundary near the particle is pinned, the rest of the boundary continues trying to move forward under its own driving force. This results in the boundary becoming bowed between those points where it is anchored to the particles.
Mathematical description
The figure illustrates a boundary intersecting with an incoherent particle of radius $r$. The pinning force acts along the line of contact between the boundary and the particle, i.e., a circle of diameter $2r\cos\theta$ (circumference $2\pi r\cos\theta$), where $\theta$ describes where the boundary meets the particle. The force per unit length of boundary in contact is $\gamma\sin\theta$, where $\gamma$ is the interfacial energy. Hence, the total force acting on the particle-boundary interface is
$$F = 2\pi r\cos\theta \cdot \gamma\sin\theta = \pi r \gamma \sin(2\theta)$$
The maximum restraining force occurs when $\theta = 45^\circ$, so $F_\text{max} = \pi r \gamma$.
In order to determine the pinning force resulting from a given dispersion of particles, Clarence Zener made several important assumptions:
The particles are spherical.
The passage of the boundary does not alter the particle-boundary interaction.
Each particle exerts the maximum pinning force on the boundary, regardless of contact position.
The contacts between particles and boundaries are completely random.
The number density of particles on the boundary is that expected for a random distribution of particles.
For a volume fraction, $f$, of randomly distributed spherical particles of radius $r$, the number of particles per unit volume (number density) is given by
$$N_V = \frac{3f}{4\pi r^3}$$
From this total number density, only those particles that are within one particle radius of the boundary will be able to interact with it. If the boundary is essentially planar, then the number of interacting particles per unit area of boundary will be given by
$$n = 2r N_V = \frac{3f}{2\pi r^2}$$
Given the assumption that all particles apply the maximum pinning force, $F_\text{max}$, the total pinning pressure exerted by the particle distribution per unit area of the boundary is
$$P_Z = n F_\text{max} = \frac{3f\gamma}{2r}$$
This is referred to as the Zener pinning pressure. It follows that large pinning pressures are produced by:
Increasing the volume fraction of particles
Reducing the particle size
The Zener pinning pressure is orientation dependent, which means that the exact pinning pressure depends on the amount of coherence at the grain boundaries.
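As an illustration of the magnitudes involved, the short calculation below evaluates the maximum force per particle and the Zener pinning pressure P_Z = 3fγ/(2r); the particle radius, volume fraction, and boundary energy are arbitrary example values, not data for any particular alloy.

import math

def max_pinning_force(r, gamma):
    # Maximum restraining force of a single particle, F_max = pi * r * gamma (newtons).
    return math.pi * r * gamma

def zener_pinning_pressure(f, r, gamma):
    # Zener pinning pressure P_Z = 3 * f * gamma / (2 * r), in pascals.
    return 3.0 * f * gamma / (2.0 * r)

# Example values (assumptions for illustration only):
f = 0.01        # 1% volume fraction of second-phase particles
r = 50e-9       # 50 nm particle radius, in metres
gamma = 0.5     # grain-boundary energy, in J/m^2

print(f"F_max per particle    : {max_pinning_force(r, gamma):.2e} N")
print(f"Zener pinning pressure: {zener_pinning_pressure(f, r, gamma) / 1e6:.2f} MPa")
# Halving r or doubling f doubles the pinning pressure, as noted above.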
Computer Simulation
Particle pinning has been studied extensively with computer simulations, such as Monte Carlo and phase field methods. These methods can capture interfaces with complex shapes and provide better approximations for the pinning force.
Notes
According to Current issues in recrystallization: a review, R.D. Doherty et al., Materials Science and Engineering A238 (1997), p 219-274
For information on zener pinning modeling see:
- "Contribution à l'étude de la dynamique du Zener pinning: simulations numériques par éléments finis", Thesis in French (2003). by G. Couturier.
- "3D finite element simulation of the inhibition of normal grain growth by particles". Acta Materialia, 53, pp. 977–989, (2005). by G. Couturier, R. Doherty, Cl. Maurice, R. Fortunier.
- "3D finite element simulation of Zener pinning dynamics". Philosophical Magazine, vol 83, n° 30, pp. 3387–3405, (2003). by G. Couturier, Cl. Maurice, R. Fortunier.
Materials science | Zener pinning | Physics,Materials_science,Engineering | 789 |
2,604 | https://en.wikipedia.org/wiki/Abated | See also, Abatement.
Abated is an ancient technical term applied in masonry and metal work to those portions which are sunk beneath the surface, as in inscriptions where the ground is sunk round the letters so as to leave the letters or ornament in relief.
References
Construction
Masonry | Abated | Engineering | 59 |
45,111,627 | https://en.wikipedia.org/wiki/Starlink | Starlink is a satellite internet constellation operated by Starlink Services, LLC, an international telecommunications provider that is a wholly owned subsidiary of American aerospace company SpaceX, providing coverage to over 100 countries and territories. It also aims to provide global mobile broadband. Starlink has been instrumental to SpaceX's growth.
SpaceX started launching Starlink satellites in 2019. As of September 2024, the constellation consists of over 7,000 mass-produced small satellites in low Earth orbit (LEO) that communicate with designated ground transceivers. Nearly 12,000 satellites are planned to be deployed, with a possible later extension to 34,400. SpaceX announced reaching more than 1 million subscribers in December 2022 and 4 million subscribers in September 2024.
The SpaceX satellite development facility in Redmond, Washington, houses the Starlink research, development, manufacturing, and orbit control facilities. In May 2018, SpaceX estimated the total cost of designing, building and deploying the constellation would be at least US$10 billion. Revenues from Starlink in 2022 were reportedly $1.4 billion, accompanied by a net loss; a small profit was first reported in 2023. In May 2024, revenue was expected to reach $6.6 billion for the year, a prediction later raised to $7.7 billion. Revenue is expected to reach $11.8 billion in 2025.
Starlink has been extensively used in the Russo-Ukrainian War, a role for which it has been contracted by the United States Department of Defense. Starshield, a military version of Starlink, is designed for government use.
Astronomers raised concerns about the effect the constellation may have on ground-based astronomy, and how the satellites will contribute to an already congested orbital environment. SpaceX has attempted to mitigate these interference concerns with measures to reduce the satellites' brightness during operation. The satellites are equipped with Hall-effect thrusters allowing them to raise their orbit, station-keep, and de-orbit at the end of their lives. They are also designed to autonomously and smoothly avoid collisions based on uplinked tracking data.
History
Background
Constellations of low Earth orbit satellites were first conceptualized in the mid-1980s as part of the Strategic Defense Initiative, culminating in Brilliant Pebbles, where weapons were to be staged in low orbits to intercept ballistic missiles at short notice. The potential for low-latency communication was also recognized, and development offshoots in the 1990s led to numerous commercial megaconstellations using around 100 satellites each, such as Celestri, Teledesic, Iridium, and Globalstar. However, all of these ventures had entered bankruptcy by the time the dot-com bubble burst, due in part to the excessive launch costs of the time.
In 2004, Larry Williams, SpaceX VP of Strategic Relations and former VP of Teledesic's "Internet in the sky" program, opened the SpaceX Washington DC office. That June, SpaceX acquired a stake in Surrey Satellite Technology (SSTL) as part of a "shared strategic vision". SSTL was at that time working to extend the Internet into space. However, SpaceX's stake was eventually sold back to EADS Astrium in 2008 after the company became more focused on navigation and Earth observation.
In early 2014, Elon Musk and Greg Wyler were working together planning a constellation of around 700 satellites called WorldVu, which would be over 10 times the size of the then largest Iridium satellite constellation. However, these discussions broke down in June 2014, and SpaceX instead filed an International Telecommunications Union (ITU) application via the Norwegian Communications Authority under the name STEAM. SpaceX confirmed the connection in the 2016 application to license Starlink with the Federal Communications Commission (FCC). SpaceX trademarked the name Starlink in the United States for their satellite broadband network; the name was inspired by the 2012 novel The Fault in Our Stars.
Design phase (2015–2016)
Starlink was publicly announced in January 2015 with the opening of the SpaceX satellite development facility in Redmond, Washington. During the opening, Musk stated that there was still significant unmet demand worldwide for low-cost broadband capabilities, and that Starlink would target bandwidth to carry up to 50% of all backhaul communications traffic, and up to 10% of local Internet traffic, in high-density cities. Musk further stated that the positive cash flow from selling satellite internet services would be necessary to fund their Mars plans. Furthermore, SpaceX has long-term plans to develop and deploy a version of the satellite communication system to serve Mars.
Starting with 60 engineers, the company operated in of leased space, and by January 2017 had taken on a second facility, both in Redmond. In August 2018, SpaceX consolidated all their Seattle-area operations with a move to a larger three-building facility at Redmond Ridge Corporate Center to support satellite manufacturing in addition to R&D. In July 2016, SpaceX acquired an additional creative space in Irvine, California (Orange County). The Irvine office would include signal processing, RFIC, and ASIC development for the satellite program.
By October 2016, the satellite division was focusing on a significant business challenge of achieving a sufficiently low-cost design for the user equipment. SpaceX President Gwynne Shotwell said then that the project remained in the "design phase as the company seeks to tackle issues related to user-terminal cost".
Start of development phase (2016–2019)
In November 2016, SpaceX filed an application with the FCC for a "non-geostationary orbit (NGSO) satellite system in the fixed-satellite service using the Ku- and Ka- frequency bands".
In September 2017, the FCC ruled that half of the constellation must be in orbit within six years to comply with licensing terms, while the full system should be in orbit within nine years from the date of the license.
SpaceX filed documents in late 2017 with the FCC to clarify their space debris mitigation plan, under which the company was to:
"...implement an operations plan for the orderly de-orbit of satellites nearing the end of their useful lives (roughly five to seven years) at a rate far faster than is required under international standards. [Satellites] will de-orbit by propulsively moving to a disposal orbit from which they will re-enter the Earth's atmosphere within approximately one year after completion of their mission."
In March 2018, the FCC granted SpaceX approval for the initial 4,425 satellites, with some conditions. SpaceX would need to obtain a separate approval from the ITU. The FCC supported a NASA request to ask SpaceX to achieve an even higher level of de-orbiting reliability than the standard that NASA had previously used for itself: reliably de-orbiting 90% of the satellites after their missions are complete.
In May 2018, SpaceX expected the total cost of development and buildout of the constellation to approach $10 billion (). In mid-2018, SpaceX reorganized the satellite development division in Redmond, and terminated several members of senior management.
First launches (2019–2020)
After launching two test satellites in February 2018, the first batch of 60 operational Starlink satellites were launched in May 2019.
By late 2019, SpaceX was transitioning their satellite efforts from research and development to manufacturing, with the planned first launch of a large group of satellites to orbit, and the clear need to achieve an average launch rate of "44 high-performance, low-cost spacecraft built and launched every month for the next 60 months" to get the 2,200 satellites launched to support their FCC spectrum allocation license assignment. SpaceX said they will meet the deadline of having half the constellation "in orbit within six years of authorization... and the full system in nine years".
By July 2020, Starlink's limited beta internet service was opened to invitees from the public. Invitees had to sign non-disclosure agreements, and were only charged $2 per month to test out billing services. In October 2020 a wider public beta was launched, where beta testers were charged the full monthly cost and could speak freely about their experience. Starlink beta testers reported speeds over 150 Mbit/s, above the range announced for the public beta test.
Commercial service (2021–present)
Pre-orders were first opened to the public in the United States and Canada in early 2021.
The FCC had earlier awarded SpaceX $885.5 million in federal subsidies to support rural broadband customers in 35 U.S. states through Starlink, but the aid package was revoked in August 2022, with the FCC stating that Starlink "failed to demonstrate" its ability to deliver the promised service. SpaceX later appealed the decision, saying they met or surpassed all RDOF deployment requirements that existed during bidding and that the FCC created "new standards that no bidder could meet today". In December 2023, the FCC formally denied SpaceX's appeal since "Starlink had not shown that it was reasonably capable of fulfilling RDOF's requirements to deploy a network of the scope, scale, and size" required to win the subsidy.
In March 2021, SpaceX submitted an application to the FCC for mobile variations of their terminal designed for vehicles, vessels and aircraft, and later in June the company applied to the FCC to use mobile Starlink transceivers on launch vehicles flying to Earth orbit, after having previously tested high-altitude low-velocity mobile use on a rocket prototype in May 2021.
In 2022, SpaceX announced the Starlink Business service tier, a higher-performance version of the service. It provides a larger high-performance antenna and listed speeds of between 150 and 500 Mbit/s with a cost of $2500 for the antenna and a $500 monthly service fee. The service includes 24/7, prioritized support. Deliveries are advertised to begin in the second quarter of 2022. The FCC also approved the licensing of Starlink services to boats, aircraft, and moving vehicles. Starlink terminal production being delayed by the 2020–2023 global chip shortage led to only 5,000 subscribers for the last two months of 2021 but this was soon resolved.
On December 1, 2022, the FCC issued an approval for SpaceX to launch the initial 7500 satellites for its second-generation (Gen2) constellation, in three low-Earth-orbit orbital shells, at 525, 530, and 535 km (326, 329 and 332 mile) altitude. Overall, SpaceX had requested approval for as many as 29,988 Gen2 satellites, with approximately 10,000 in the 525–535 km (326 to 332 mile) altitude shells, plus ~20,000 in 340–360 km (210 mile to 220 mile) shells and nearly 500 in 604–614 km (375 to 382 mile) shells. However, the FCC noted that this is not a net increase in approved on-orbit satellites for SpaceX since SpaceX is no longer planning to deploy 7518 V-band satellites at altitude that had previously been authorized.
In March 2023, the company reported that they were manufacturing six Starlink "v2 mini" satellites per day as well as thousands of user terminals. The v2 mini has Gen2 Starlink satellite features while being assembled in a smaller form factor than the larger Gen2 sats. The full-size Gen2 satellites require the 9 meter (29.5 foot) diameter Starship to launch them. The Starlink business unit had a single cash-flow-positive quarter during 2022, and was expecting to be profitable in 2023.
In May 2018, SpaceX estimated the total cost of designing, building and deploying the constellation would be at least US$10 billion. In January 2017, SpaceX had expected annual revenue from Starlink to reach $12 billion by 2022 and exceed $30 billion by 2025. In practice, Starlink operated at a loss in 2021; revenues in 2022 were reportedly $1.4 billion, accompanied by a net loss, with a small profit reported by Musk starting in 2023.
Tensions between Brazil and Elon Musk's business ventures escalated in 2024 as the country's telecom regulator Anatel threatened to sanction Starlink after Brazil's top court upheld a ban on X. Luiz Inácio Lula da Silva supported the decision, citing X's role in allegedly spreading hate and misinformation undermining Brazil's democracy. Judge Alexandre de Moraes had frozen Starlink's accounts, and Starlink refused to comply with an order to block domestic access to X until the freeze was lifted, risking its license to operate.
The Wall Street Journal reported in October 2024 that Musk had been in regular contact with Russian President Vladimir Putin and other high-ranking Russian government officials since late 2022, discussing personal topics, business and geopolitical matters. The Journal reported that Putin had asked Musk to avoid activating his Starlink satellite system over Taiwan, to appease Chinese Communist Party general secretary Xi Jinping. The communications were reported to be a closely held secret in government, given Musk's involvement in promoting the presidential candidacy of Donald Trump, and his security clearance to access classified government information. One person said no alerts were raised by the U.S. government, noting the dilemma of the government being dependent on Musk's technologies. Musk initially voiced support for Ukraine's defense against Russia's 2022 invasion by donating Starlink terminals, but later made decisions to limit Ukrainian access to Starlink, which coincided with Russian pressure in public and in private. In a November 2024 call with President Volodymyr Zelenskyy, Musk said he would continue supporting Ukraine through Starlink.
SpaceX has asked its numerous Taiwanese suppliers to move production abroad, citing geopolitical risk concerns. The move was questioned by the Taiwanese government and caused significant anger among the Taiwanese public, with citizens pointing out that Starlink was unavailable in Taiwan despite Taiwanese suppliers underpinning the technology, and others calling for a boycott of Tesla products.
In November 2024, SpaceX proposed a constellation of Starlink satellites around Mars, referred to as "Marslink." The proposed system would be capable of providing more than 4 Mbit/s of bandwidth between Earth and Mars as well as imaging services.
Starting in July 2024, SpaceX began conducting tests on Starlink in cooperation with the Romanian Ministry of National Defense and National Authority for Communications Administration and Regulation (ANCOM). These tests aim at demonstrating that the Equivalent Power Flux Density (EPFD) limit can be safely increased, thus improving the speed and coverage area of Starlink, without affecting classic, geostationary satellites. The results of these tests will be used to help change a rule set by the International Telecommunication Union in the 1990s regarding the limits of non-geostationary satellites.
Subscribers
As of December 2024, Starlink reports the number of its customers worldwide as more than 4.6 million.
Services
Satellite internet
Starlink provides satellite-based internet connectivity to underserved areas of the planet, as well as competitively priced service in more urbanized areas.
In the United States, Starlink charged, at launch, a one-time hardware fee of $599 for a user terminal and $120 per month for internet service at a fixed service address. An additional $25 per month allows the user terminal to move beyond a fixed location (Starlink For RVs) but with service speeds deprioritized compared to the fixed users in that area. Fixed users are told to expect typical throughput of "50 to 150 Mbit/s and latency from 20 to 40 ms"; a study found users averaged download speeds of 90.55 Mbit/s in the first quarter of 2022, dropping to 62.5 Mbit/s in the second quarter. A higher-performance version of the service (Starlink Business) advertises speeds of 150 to 500 Mbit/s in exchange for a more costly $2,500 user terminal and a $500 monthly service fee. Another service called Starlink Maritime became available in July 2022, providing internet access on the open ocean, with speeds of 350 Mbit/s, requiring purchase of a maritime-grade $10,000 user terminal and a $5,000 monthly service fee.
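The quoted latencies are bounded below by propagation physics: the round-trip delay through a bent-pipe satellite link scales with the satellite's altitude. The sketch below is an idealized comparison only, assuming a roughly 550 km Starlink orbit, a satellite directly overhead, and a simple four-hop user-satellite-gateway path, against a geostationary satellite at about 35,786 km; real-world figures add slant range, routing, and processing delays.

C_KM_PER_S = 299_792.458   # speed of light in km/s

def min_round_trip_ms(altitude_km, hops=4):
    # Idealized user -> satellite -> gateway -> satellite -> user propagation delay in ms,
    # ignoring slant range, inter-satellite routing, and processing/queuing delays.
    return hops * altitude_km / C_KM_PER_S * 1000.0

print(f"LEO (~550 km):    {min_round_trip_ms(550):.1f} ms minimum")     # ~7 ms
print(f"GEO (35,786 km):  {min_round_trip_ms(35_786):.0f} ms minimum")  # ~480 ms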
Sales are capped to a few hundred fixed users per 20 km (10 mile) "service cell area" due to limited wireless capacity. Starlink alternatively offers a Best Effort service tier allowing homes in capped areas to receive the current unused bandwidth of their cell while they are on the waiting list for more prioritized service. The price and equipment are the same as the residential service at $110 per month. To improve the service quality in densely populated areas, Starlink introduced a monthly 1 TB data cap for all non-business users which was enforced starting in 2023.
In August 2022, SpaceX lowered monthly service costs for users in select countries. For example, users in Brazil and Chile saw monthly fee decreases of about 50%.
According to internet analysis company Ookla, Starlink speeds degraded during the first half of 2022 as more customers signed up for the service. SpaceX has said that Starlink speeds will improve as more satellites are deployed.
In September 2023, satellite operator SES announced a satellite internet service for cruise lines using both the Starlink satellites in Low Earth Orbit (LEO) and SES' own O3b mPOWER satellite constellation in Medium Earth Orbit (MEO). Integrated, sold and delivered by SES, the SES Cruise mPOWERED + Starlink service claims to combine the best features of LEO and MEO orbits to provide high-speed, secure connectivity at up to 3 Gbit/s per ship, to cruise ships anywhere in the world. In February 2024, SES announced that Virgin Voyages will be the first cruise line to deploy the service.
Satellite cellular service
For future service, T-Mobile US and SpaceX are partnering to add satellite cellular service capability to Starlink satellites. It will provide dead-zone cell phone coverage across the US using the existing midband PCS spectrum owned by T-Mobile. Cell coverage will begin with text messaging and expand to include voice and limited data services later, with testing to begin in 2024. T-Mobile plans to connect to Starlink satellites via existing 4G LTE mobile devices, unlike previous generations of satellite phones, which used specialized radios, modems, and antennas to connect to satellites in higher orbits. Bandwidth will be limited to 2 to 4 megabits per second in total, split across a very large cell coverage area: enough for thousands of voice calls or millions of text messages simultaneously in a coverage area. The size of a single coverage cell has not yet been publicly released.
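As a rough illustration of how far the quoted 2 to 4 Mbit/s of shared capacity stretches, the sketch below divides it among text messages and very-low-rate voice calls. The 140-byte message size and the roughly 2 kbit/s codec rate are illustrative assumptions, not figures from SpaceX or T-Mobile.

```python
# Back-of-envelope sketch of shared satellite-cell capacity.
# Payload sizes below are illustrative assumptions, not operator figures.

SMS_BITS = 140 * 8    # assumed size of one text message, in bits
VOICE_BPS = 2_000     # assumed very-low-rate voice codec

for capacity_bps in (2e6, 4e6):  # the quoted total per coverage area
    texts_per_hour = capacity_bps * 3600 / SMS_BITS
    concurrent_calls = capacity_bps / VOICE_BPS
    print(f"{capacity_bps / 1e6:.0f} Mbit/s: "
          f"{texts_per_hour / 1e6:.1f} million texts/hour "
          f"or {concurrent_calls:,.0f} concurrent calls")
```

At the upper figure this works out to roughly thirteen million messages per hour or two thousand concurrent low-rate calls, broadly consistent with the scale described above.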
The first six cell phone capable satellites launched on January 2, 2024.
Rogers Communications, in April 2023, signed an agreement with SpaceX for using Starlink for satellite-to-phone services in Canada. Also in April 2023, One NZ (formerly Vodafone New Zealand) announced that they would be partnering with SpaceX's Starlink to provide 100% mobile network coverage over New Zealand. SMS text service is expected to begin in 2024, with voice and data functionality in 2025. In July 2023, Optus in Australia announced a similar partnership.
On January 8, 2024, it was confirmed by SpaceX that they had successfully tested text messaging using the new Direct-to-Cell capability on T-Mobile's network.
Military applications
SpaceX also designs, builds, and launches customized military satellites based on variants of the Starlink satellite bus, with the largest publicly known customer being the Space Development Agency (SDA).
SDA accelerates development of missile defense capabilities, primarily via observation platforms, using industry-procured low-cost low Earth orbit satellite platforms.
In October 2020, SDA awarded SpaceX an initial $150 million dual-use contract to develop four satellites to detect and track ballistic and hypersonic missiles. The first batch of satellites was originally scheduled to launch in September 2022 to form part of the Tracking Layer Tranche 0 of the U.S. Space Force's National Defense Space Architecture (NDSA), a network of satellites performing various roles including missile tracking. The launch schedule slipped multiple times, but the satellites eventually launched in April 2023.
In 2020, SpaceX hired retired four-star general Terrence J. O'Shaughnessy who, according to some sources, is associated with Starlink's military satellite development, and according to one source, is listed as a "chief operating officer" at SpaceX. While still on active duty, O'Shaughnessy advocated before the United States Senate Committee on Armed Services for a layered capability with lethal follow-on that incorporates machine learning and artificial intelligence to gather and act upon sensor data quickly.
SpaceX was not awarded a contract for the larger Tranche 1, with awards going to York Space Systems, Lockheed Martin Space, and Northrop Grumman Space Systems.
Starshield
In December 2022, SpaceX announced Starshield, a separate Starlink service designed for government entities and military agencies. Starshield enables the U.S. Department of Defense (DoD) to own or lease Starshield satellites for partners and allies. Cybernews remarked that Starshield was first announced in late 2022, when Starlink's presence in Ukraine showed the importance it can have in modern warfare. While Starlink had not been adapted for military use, Starshield has the usual requirements for mobile military systems like encryption and anti-jam capabilities. Elon Musk stated that "Starlink needs to be a civilian network, not a participant to combat. Starshield will be owned by the US government and controlled by DoD Space Force. This is the right order of things."
Starshield satellites are advertised as capable of integrating a wide variety of payloads. Starshield satellites will be compatible with, and interconnect to, the existing commercial Starlink satellites via optical inter-satellite links.
In January 2022, SpaceX deployed four national security satellites for the U.S. government on their Transporter-3 rideshare mission. In June of the same year, they launched another group of four U.S. satellites along with an on-orbit spare, the Globalstar FM-15 satellite.
In September 2023, the Starshield program received its first contract from the U.S. Space Force to provide customized satellite communications for the military. This is under the Space Force's new "Proliferated Low Earth Orbit" program for LEO satellites, where Space Force will allocate up to $900 million worth of contracts over the next 10 years. Although 16 vendors are competing for awards, the SpaceX contract is the only one to have been issued to date. The one-year Starshield contract was awarded on September 1, 2023. The contract is expected to support 54 mission partners across the Army, Navy, Air Force, and Coast Guard.
Military communications
In 2019, tests by the United States Air Force Research Laboratory (AFRL) demonstrated a 610 Mbit/s data link through Starlink to a Beechcraft C-12 Huron aircraft in flight. Additionally, in late 2019, the United States Air Force successfully tested a connection with Starlink on an AC-130 Gunship.
In 2020, the Air Force used Starlink in support of its Advanced Battlefield management system during a live-fire exercise. They demonstrated Starlink connected to a "variety of air and terrestrial assets" including the Boeing KC-135 Stratotanker.
Electronic warfare expert Thomas Withington has argued that Starlink signals, because they use narrow focused beams, are less vulnerable to interference and jamming by the enemy in wartime than satellites flying in higher orbits.
In May 2022, Chinese military researchers published an article in a peer-reviewed journal describing a strategy for destroying the Starlink constellation if they threaten national security. The researchers specifically highlight concerns with reported Starlink military capabilities. Musk has declared Starlink is meant for peaceful use and has suggested Starlink could enforce peace by taking strategic initiative. Russian officials including the head of Russia's space agency Dmitry Rogozin, have warned Elon Musk and criticized Starlink, including warning that Starlink could become a legitimate military target in the future.
Russo-Ukrainian War
Starlink was activated during the Russian invasion of Ukraine, after a request from the Ukrainian government. Ukraine's military and government rapidly became dependent on Starlink to maintain Internet access. Starlink is used by Ukraine for communication, such as keeping in touch with the outside world and keeping the energy infrastructure working.
The service is also notably used for warfare. Starlink is used for connecting combat drones, naval drones, artillery fire coordination systems and attacks on Russian positions. SpaceX has expressed reservations about the offensive use of Starlink by Ukraine beyond military communications and restricted Starlink communication technology for military use on weapon systems, but has kept most of the service online. Its use in attacking Russian targets has been criticized by the Kremlin.
Musk has warned that the service was costing $20 million per month, and a Ukrainian official estimated SpaceX's contributions as over $100 million. In June 2023, the United States Department of Defense signed a contract with SpaceX to finance Starlink use in Ukraine.
Israel–Hamas War
In October 2023 after the Israel–Hamas conflict started, users shared the hashtag #starlinkforgaza on Elon Musk's social network X (formerly Twitter), demanding he activate Starlink in Gaza after Internet service in the region was lost. Musk answered that Starlink connectivity would be provided for aid groups in Gaza. At the end of November, Musk said the Starlink service would only be provided for Gaza with the approval of the government of Israel.
Iran
In 2022, the U.S. State Department and U.S. Treasury Department updated rules regarding export of technology to Iran, allowing Starlink to be exported to Iran in support of the Iranian protests against compulsory hijab, which had triggered extensive government censorship. Immediately afterwards, Starlink service was activated in Iran. In 2023, the Iranian government filed a complaint with the ITU against SpaceX for unauthorized Starlink operation in Iran. In October 2023 and March 2024, the ITU ruled in favor of Iran, dismissing a SpaceX assertion that it should not be expected to verify the location of every terminal connecting to its satellites. Iran claimed that SpaceX was capable of determining its user terminals' locations, citing a tweet from Musk saying there were 100 Starlink terminals operating within Iran.
Internet availability and regulatory approval by country
In order to offer satellite services in any nation-state, International Telecommunication Union (ITU) regulations and long-standing international treaties require that landing rights be granted by each country jurisdiction and, within a country, by the national communications regulators. As a result, even though the Starlink network has near-global reach at latitudes below approximately 60°, broadband services could only be provided in 40 countries as of September 2022. Business and economic considerations also influence in which countries Starlink service is offered, in which order, and how soon. For example, SpaceX formally requested authorization for Canada only in June 2020, the Canadian regulatory authority approved it in November 2020, and SpaceX rolled out service two months later, in January 2021. As of September 2022, Starlink services were on offer in 40 countries, with applications pending regulatory approval in many more.
Canada was the first country outside the United States to approve the service, with Innovation, Science and Economic Development Canada announcing regulatory approval for the Starlink low Earth orbit satellite constellation on November 6, 2020.
In May 2022, Starlink entered the Philippine market, the company's first deployment in Asia, following a landmark legislative change (RA 11659, amending the Public Service Act) that allows full foreign ownership of public-utility companies such as internet and telecommunications providers. Starlink received provisional permission from the country's Department of Information and Communications Technology (DICT), National Telecommunications Commission (NTC), and Department of Trade and Industry (DTI) and soon began commercial services, aimed at regions with lower internet connectivity.
In August 2022, SpaceX secured its first contract for services in the passenger shipping industry. Royal Caribbean Group added Starlink internet to Freedom of the Seas and planned to offer the service on 50 ships under its Royal Caribbean International, Celebrity Cruises, and Silversea Cruises brands by March 2023. Starlink services on private jet charter flights in the U.S. by JSX airline were expected to begin in late 2022, and Hawaiian Airlines had contracted to provide "Starlink services on transpacific flights to and from Hawaii in 2023".
In June 2023, a license to offer internet services in Zambia was granted to Starlink by the Zambian Government through its Electronic Government Division – SMART Zambia, after the completion of many trial projects throughout the country. In October 2023, Starlink officially went live in Zambia.
In July 2023, the Mongolian government issued two licenses to SpaceX to provide internet access in the country.
In July 2023, it was reported by Bloomberg that attempts to sell the service to Taiwan in 2022 fell through when SpaceX insisted on 100% ownership of the Taiwan subsidiary running Starlink in the country. This went against Taiwanese law that required that internet service providers (ISP) are at least 51% controlled by local companies, an impracticality when dealing with a globe-spanning ISP.
Japan's major mobile provider, KDDI, announced a partnership with SpaceX to begin offering expanded connectivity in 2022 for its rural mobile customers via 1,200 remote mobile towers.
On April 25, 2022, Hawaiian Airlines announced an agreement with Starlink to provide free internet access on its aircraft, becoming the first airline to use Starlink. By July 2022, Starlink internet service was available in 36 countries and 41 markets.
In May 2022, it was announced that regulatory approval had been granted for Nigeria, Mozambique, and the Philippines. In the Philippines, commercial availability began on February 22, 2023.
In September 2022, trials began at McMurdo Station in Antarctica and from December 2022 on field missions. Antarctica has no ground stations, so polar-orbiting satellites with optical interlinks are used to connect to ground stations in South America, New Zealand, and Australia.
In September 2023, the US-based United Against Nuclear Iran started donating subscriptions and terminals to Iranians to allow them to circumvent Iran's internet blackout.
In September 2023, it was reported by some Indian news outlets that Starlink would imminently receive its license to operate in India after meeting all regulatory requirements, but that it would still be required to apply for spectrum allocation in order to provide service. SpaceX had earlier sold 5,000 Starlink preorders in India, and in 2021 had announced that Sanjay Bhargava, who had worked with Musk as part of the team that founded electronic payment firm PayPal, would head Starlink's satellite broadband venture in India. Three months later, Bhargava resigned "for personal reasons" after the Indian government ordered SpaceX to halt selling preorders for Starlink service until SpaceX gained regulatory approval for providing satellite internet services in the country. In April 2024, it was reported in some Indian news outlets that Starlink had received its "in-principle government approval" and that the approval now "lies at the desk of communications minister Ashwini Vaishnaw".
In November 2023, Starlink received the licenses to operate in Fiji. The service was launched in Fiji in May 2024.
In April 2024, it was reported that the company would begin trial service in Indonesia in May. Starlink received its license to operate in Indonesia in early May.
In May 2024, Starlink service was available for pre-order in Sri Lanka, pending regulatory approval. Starlink received its license to operate in Sri Lanka in August of the same year.
In August 2024, Starlink received the licenses to operate in Yemen. According to the announcement, Starlink services would be implemented through the corporation's sales points distributed across most governorates, providing a full range of services including device sales, activation, subscription fee payments, and direct technical support.
On 22 October 2024, Qatar Airways launched the first Starlink-equipped Boeing 777 flight, flying from Doha to London. As of November 2024, Morocco is set to give regulatory approval to Starlink by 2025.
Technology
Satellite hardware
According to early public releases of information in 2015, the internet communication satellites were expected to be smallsat-class in mass and to operate in low Earth orbit (LEO). The first significant deployment, of 60 satellites, came in May 2019. SpaceX decided to place the satellites at a relatively low altitude due to concerns associated with space debris from failures or low fuel in the space environment, as well as letting them use fewer satellites than initially needed. Initial plans were for the constellation to be made up of approximately 4,000 cross-linked satellites, more than twice as many operational satellites as were in orbit in January 2015.
The satellites employ optical inter-satellite links and phased array beam-forming and digital processing technologies in the Ku and Ka microwave bands (super high frequency [SHF] to extremely high frequency [EHF]), according to documents filed with the U.S. FCC. While specifics of the phased array technologies have been disclosed as part of the frequency application, SpaceX enforced confidentiality regarding details of the optical inter-satellite links. Early satellites were launched without laser links. The inter-satellite laser links were successfully tested in late 2020.
The satellites are mass-produced, at a much lower cost per unit of capability than previously existing satellites. Musk said, "We're going to try and do for satellites what we've done for rockets." "In order to revolutionize space, we have to address both satellites and rockets." "Smaller satellites are crucial to lowering the cost of space-based Internet and communications".
In February 2015, SpaceX asked the FCC to consider future innovative uses of the Ka-band spectrum before the FCC commits to 5G communications regulations that would create barriers to entry, since SpaceX is a new entrant to the satellite communications market. The SpaceX non-geostationary orbit communications satellite constellation will operate in the high-frequency bands above 24 GHz, "where steerable Earth station transmit antennas would have a wider geographic impact, and significantly lower satellite altitudes magnify the impact of aggregate interference from terrestrial transmissions".
Internet traffic via a geostationary satellite has a minimum theoretical round-trip latency of at least 477 milliseconds (ms; between user and ground gateway), but in practice current satellites have latencies of 600 ms or more. Starlink satellites orbit at a small fraction of the altitude of geostationary satellites, and thus offer more practical Earth-to-satellite latencies of around 25 to 35 ms, comparable to existing cable and fiber networks. The system uses a peer-to-peer protocol claimed to be "simpler than IPv6"; it also incorporates native end-to-end encryption.
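The 477 ms geostationary bound quoted above follows directly from path geometry: a bent-pipe round trip crosses the orbital altitude four times (user to satellite, satellite to gateway, and back). A minimal sketch, using the standard 35,786 km geostationary altitude and an assumed, representative 550 km Starlink altitude:

```python
# Propagation-only round-trip latency implied by orbital altitude.
# A bent-pipe round trip (user -> satellite -> gateway -> satellite -> user)
# crosses the altitude four times; real links add processing and routing.

C_KM_PER_S = 299_792.458  # speed of light in vacuum

def min_round_trip_ms(altitude_km: float, hops: int = 4) -> float:
    return hops * altitude_km / C_KM_PER_S * 1000.0

print(f"GEO, 35,786 km: {min_round_trip_ms(35_786):.0f} ms")  # ~477 ms
print(f"LEO,    550 km: {min_round_trip_ms(550):.1f} ms")     # ~7 ms
```

The gap between the roughly 7 ms propagation floor and the quoted 25 to 35 ms is taken up by processing, queueing and terrestrial-network delays.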
Starlink satellites use Hall-effect thrusters with krypton or argon gas as the reaction mass for orbit raising and station keeping. Krypton Hall thrusters tend to exhibit significantly higher erosion of the flow channel compared to a similar electric propulsion system operated with xenon, but krypton is much more abundant and has a lower market price. SpaceX claims that its second-generation thruster using argon has 2.4× the thrust and 1.5× the specific impulse of the krypton-fueled thruster.
User terminals
The Starlink system has multiple modes of connectivity, including direct-to-cell capability as well as broadband satellite internet service. Direct-to-cell provides connectivity to unmodified cellular phones and is being offered globally in partnership with various national cellular service providers. Starlink's broadband internet service is accessed via flat user terminals the size of a pizza box, which have phased-array antennas and track the satellites. The terminals can be mounted anywhere, as long as they can see the sky. This includes fast-moving objects like trains. Photographs of the customer antennas were first seen on the internet in June 2020, supporting earlier statements by SpaceX CEO Musk that the terminals would look like a "UFO on a stick", adding that the "Starlink Terminal has motors to self-adjust optimal angle to view sky". The antenna is known internally as "Dishy McFlatface".
In October 2020, SpaceX launched a paid-for beta service in the U.S. called "Better Than Nothing Beta", charging $499 for a user terminal, with an expected service of "50 to 150 Mbit/s and latency from 20 to 40 ms over the next several months". From January 2021, the paid-for beta service was extended to other continents, starting with the United Kingdom.
A larger, high-performance version of the antenna is available for use with the Starlink Business service tier.
In September 2020, SpaceX applied for permission to put terminals on 10 of its ships with the expectation of entering the maritime market in the future.
In August 2022, and in response to an open invitation from SpaceX to have the terminal examined by the security community, security specialist Lennert Wouters presented several technical architecture details about the then-current Starlink terminals: the main control unit of the dish is a custom-designed STMicroelectronics chip code-named Catson, a quad-core ARM Cortex-A53-based control processor running the Linux kernel and booted using U-Boot. The main processor uses several other custom chips, such as a digital beamformer named Shiraz and a front-end module named Pulsarad. The main control unit controls an array of digital beamformers, and each beamformer controls 16 front-end modules. In addition, the terminal has a GPS receiver, motor controllers, synchronous clock generation, and Power over Ethernet circuits, all manufactured by STMicroelectronics.
In June 2024, a portable user terminal dubbed "Starlink Mini" was announced to be imminently available. The Mini supports 100 Mbit/s of download speed and will fit in a backpack. Initial rollout was in Latin America at a $200 price point.
Ground stations
SpaceX has made applications to the FCC for at least 32 ground stations in the United States, and has approvals for five of them (in five states). Until February 2023, Starlink used the Ka-band to connect with ground stations. With the launch of v2 Mini, frequencies were added in the 71–86 GHz W band (or E band waveguide) range.
A typical ground station has nine 2.86 m (9.4 ft) antennas in a 400 m2 (4,306 sq ft) fenced-in area.
According to their filing, SpaceX's ground stations would also be installed on-site at Google data centers worldwide.
Satellite revisions
MicroSat
MicroSat-1a and MicroSat-1b were originally slated to be launched into circular orbits at approximately 86.4° inclination, and to include panchromatic video imager cameras to film images of Earth and the satellite. The two satellites were meant to be launched together as secondary payloads on one of the Iridium NEXT flights, but they were instead used for ground-based tests.
Tintin
At the time of the June 2015 announcement, SpaceX had stated plans to launch the first two demonstration satellites in 2016, but the target date was subsequently moved out to 2018. SpaceX began flight testing its satellite technologies in 2018 with the launch of two test satellites. The two identical satellites were called MicroSat-2a and MicroSat-2b during development but were renamed Tintin A and Tintin B upon orbital deployment on February 22, 2018. The satellites were launched by a Falcon 9 rocket as piggyback payloads alongside the Paz satellite.
Tintin A and B were inserted into a low initial orbit. Per FCC filings, they were intended to raise themselves to the operational altitude for Starlink LEO satellites given in the earliest regulatory filings, but stayed close to their original orbits. SpaceX announced in November 2018 that it would like to operate an initial shell of about 1,600 satellites in the constellation at an orbital altitude similar to the orbits in which Tintin A and B stayed.
The satellites occupy a circular, high-inclination low Earth orbit for a planned six-to-twelve-month duration. They communicate with three testing ground stations in Washington State and California for short-term experiments of less than ten minutes' duration, roughly daily.
v0.9 (test)
The 60 Starlink v0.9 satellites, launched in May 2019, had the following characteristics:
Flat-panel design with multiple high-throughput antennas and a single solar array
Mass:
Hall-effect thrusters using krypton as the reaction mass, for position adjustment on orbit, altitude maintenance, and deorbit
Star tracker navigation system for precision pointing
Able to use U.S. Department of Defense-provided debris data to autonomously avoid collision
Altitude of
95% of "all components of this design will quickly burn in Earth's atmosphere at the end of each satellite's lifecycle".
v1.0 (operational)
The Starlink v1.0 satellites, launched since November 2019, have the following additional characteristics:
100% of all components of this design will completely demise, or burn up, in Earth's atmosphere at the end of each satellite's life.
Ka-band added
Mass:
One of them, numbered 1130 and called DarkSat, had its albedo reduced using a special coating but the method was abandoned due to thermal issues and IR reflectivity.
All satellites launched since the ninth launch in August 2020 have visors to block sunlight from reflecting off parts of the satellite, reducing the albedo further.
v1.5 (operational)
The Starlink v1.5 satellites, launched since January 24, 2021, have the following additional characteristics:
Lasers for inter-satellite communication
Mass: ~
Visors that blocked sunlight were removed from satellites launched from September 2021 onwards.
Starshield (operational)
These are satellite buses with two solar arrays, derived from Starlink v1.5 and v2.0, for military use; they can host classified government or military payloads.
v2 (initial deployment)
SpaceX was preparing for the production of Starlink v2 satellites by early 2021. According to Musk, Starlink v2 satellites will be "…an order of magnitude better than Starlink 1" in terms of communications bandwidth.
SpaceX hoped to begin launching Starlink v2 in 2022. SpaceX had said publicly that the satellites of the second-generation (Gen2) constellation would need to be launched on Starship, as they were too large to fit inside a Falcon 9 fairing. However, in August 2022, SpaceX made formal regulatory filings with the FCC indicating that it would build satellites of the second-generation (Gen2) constellation in two different, but technically identical, form factors: one with the physical structures tailored to launching on Falcon 9, and one tailored to launching on Starship. Starlink v2 is both larger and heavier than Starlink v1 satellites.
Starlink second-generation satellites planned for launch on Starship have the following characteristics:
Lasers for inter-satellite communication
Mass: ~
Length: ~
Further improvements to reduce its brightness, including the use of a dielectric mirror film.
On 2,016 of the initially licensed 7,500 satellites, Gen2 Starlink satellites will also include an approximately 25-square-meter antenna that will allow T-Mobile subscribers to communicate directly via satellite through their regular mobile devices. It will be implemented via a German-licensed hosted payload developed together with SpaceX's subsidiary Swarm Technologies and T-Mobile. This hardware is supplemental to the existing Ku-band and Ka-band systems, and the inter-satellite laser links, that have been on the first-generation satellites launching as of mid-2022.
In October 2022, SpaceX revealed the configuration of early v2s to be launched on Falcon 9. In May 2023, SpaceX introduced two more form factors with direct-to-cellular (DtC) capability.
Bus F9-1, 303 kg (668 lbs) mass, having roughly the same dimensions and mass as V1.5 satellites. Deployed in Group 5 (see constellation design section).
Bus F9-2 (typically called "v2 mini"), up to 800 kg (1,764 lbs) mass, with two solar arrays. It could offer around 3–4 times more usable bandwidth per satellite. These satellites are smaller than the full-size Starlink v2 design (and so can be launched on Falcon 9) and have four times the capacity to the ground station, increasing speed and capacity, thanks to a more efficient array of antennas and the use of radio frequencies in the W band (E band waveguide) range. They were deployed in Groups 6 and 7 (see constellation design section).
Bus F9-3, an F9-2 with direct-to-cellular capability and a longer bus. Mass increased to 970 kg (2,152 lbs). Deployed in Group 7 (see constellation design section).
Bus Starship-1 (planned), 2000 kg (4,409 lbs) mass.
Bus Starship-2 (planned), a Starship-1 with direct-to-cellular capability and a longer bus.
The first six F9-3 satellites with direct-to-cellular (DtC) capability were launched on January 2, 2024, in Groups 7–9.
Launches
Between February 2018 and May 2024, SpaceX successfully launched over 6,000 Starlink satellites into orbit, including prototypes and satellites that later failed or were de-orbited before entering operational service. In March 2020, SpaceX reported producing six satellites per day.
The deployment of the first 1,440 satellites was planned in 72 orbital planes of 20 satellites each, with a requested lower minimum elevation angle of beams to improve reception: 25° rather than the 40° of the other two orbital shells. SpaceX launched the first 60 satellites of the constellation in May 2019 and at that time expected up to six launches in 2019, with 720 satellites (12 × 60) for continuous coverage in 2020.
Starlink satellites are also planned to launch on Starship, SpaceX's rocket under development with a much larger payload capability. The initial announcement included plans to launch 400 Starlink (version 1.0) satellites at a time. Current plans call for Starship to be the only launch vehicle used to launch the much larger Starlink version 2.0.
Constellation design and status
In March 2017, SpaceX filed plans with the FCC to field a second orbital shell of more than 7,500 "V-band satellites in non-geosynchronous orbits to provide communications services" in an electromagnetic spectrum that had not previously been heavily employed for commercial communications services. Called the "very-low Earth orbit (VLEO) constellation", it was to have comprised 7,518 satellites orbiting at very low altitude, while the smaller, originally planned group of 4,425 satellites would operate in the Ka- and Ku-bands at higher altitude. By 2022, SpaceX had withdrawn plans to field the 7,518-satellite V-band system, superseding it with a more comprehensive design for a second-generation (Gen2) Starlink network.
In November 2018, SpaceX received U.S. regulatory approval to deploy 7,518 V-band broadband satellites, in addition to the 4,425 approved earlier; however, the V-band plans were subsequently withdrawn by 2022. At the same time, SpaceX made new regulatory filings with the U.S. FCC requesting the ability to alter its previously granted license in order to operate approximately 1,600 of the 4,425 approved Ka-/Ku-band satellites in a "new lower shell of the constellation". These satellites would effectively operate in a third orbital shell, with the originally planned higher Ka-/Ku-band orbits and the very low V-band orbits to be used only later, once a considerably larger deployment of satellites became possible in the later years of the deployment process. The FCC approved the request in April 2019, giving approval to place nearly 12,000 satellites in three orbital shells: initially approximately 1,600 in the new lower shell, subsequently approximately 2,800 Ku- and Ka-band satellites in a higher shell, and approximately 7,500 V-band satellites in a very low shell. In total, nearly 12,000 satellites were planned to be deployed, with (as of 2019) a possible later extension to 42,000.
In February 2019, a sister company of SpaceX, SpaceX Services Incorporated, filed a request with the FCC to receive a license for the operation of up to a million fixed satellite Earth stations that would communicate with its non-geostationary orbit (NGSO) satellite Starlink system.
In June 2019, SpaceX applied to the FCC for a license to test up to 270 ground terminals – 70 nationwide across the United States and 200 in Washington state at SpaceX employee homes – and aircraft-borne antenna operation from four distributed United States airfields; as well as five ground-to-ground test locations.
On October 15, 2019, the United States FCC submitted filings to the International Telecommunication Union (ITU) on SpaceX's behalf to arrange spectrum for 30,000 additional Starlink satellites to supplement the 12,000 Starlink satellites already approved by the FCC. That month, Musk publicly tested the Starlink network by using an Internet connection routed through the network to post a first tweet to social media site Twitter.
First generation
The chart below contains all v0.9 and first generation satellites (Tintin A and Tintin B, as test satellites, are not included).
Early designs had all phase 1 satellites at higher altitudes. SpaceX initially requested to lower the orbits of the first 1,584 satellites, and in April 2020 requested to lower all other, higher satellite orbits as well. In April 2020, SpaceX modified the architecture of the Starlink network, submitting an application to the FCC proposing to operate more satellites in lower orbits in the first phase than the FCC had previously authorized. The first phase will still include 1,440 satellites in the first shell, orbiting in planes inclined 53.0°, with no change to the first shell of the constellation launched largely in 2020. SpaceX also applied in the United States for use of the E-band in its constellation. The FCC approved the application in April 2021.
On January 24, 2021, SpaceX launched a new group of 10 Starlink satellites on a rideshare mission, the first Starlink satellites in polar orbits. Carrying 143 satellites in total, the launch surpassed ISRO's record of 104 for the most satellites launched in one mission, and took the cumulative number of satellites deployed for Starlink to 1,025.
On February 3, 2022, 49 satellites were launched as Starlink Group 4–7. A G2-rated geomagnetic storm occurred on February 4, causing the atmosphere to warm and density at the low deployment altitudes to increase. Predictions were that up to 40 of the 49 satellites might be lost due to drag. After the event, 38 satellites reentered the atmosphere by February 12, while the remaining 11 were able to raise their orbits and avoid loss due to the storm.
In March 2023, SpaceX submitted an application to add V-band payload to the second generation satellites rather than fly phase 2 V-band satellites as originally planned and authorized. The request is subject to FCC approval.
Second generation
With uncertainty about when Starship would be able to launch the second-generation satellites, SpaceX modified the original v2 blueprint into a smaller, more compact one named "v2 mini". This adjustment allowed Falcon 9 to transport these satellites, though not as many, into orbit. The first set of 21 of these satellites was launched on February 27, 2023. SpaceX committed to reducing debris by keeping the Starlink tension rods, which hold the v2 mini satellites together, attached to the Falcon 9 second stage; these tension rods were discarded into orbit while launching earlier versions of Starlink satellites. Observations confirm that these v2 mini satellites host two solar panels, like the Starship v2 satellites.
SpaceX planned to test the deployment system for a new version of its Starlink satellites. On 16 January 2025, during a Starship flight test, Ship 33 (S33) was expected to deploy ten Starlink "simulators", which were expected to reenter over the Indian Ocean. Contact with S33 was lost shortly before its engines were scheduled to shut down.
Impact on astronomy
The planned large number of satellites has been met with criticism from the astronomical community because of concerns over light pollution. Astronomers claim that the satellites' brightness in both optical and radio wavelengths will severely impact scientific observations. While astronomers can schedule observations to avoid pointing where satellites currently orbit, it is "getting more difficult" as more satellites come online. The International Astronomical Union (IAU), National Radio Astronomy Observatory (NRAO), and Square Kilometre Array Organization (SKAO) have released official statements expressing concern on the matter. Recent studies have shown that "unintended electromagnetic radiation" from the satellites affects radio telescopes, creating distortions and excessive noise, and the IAU Centre for the Protection of the Dark and Quiet Sky from Satellite Constellation Interference was created to manage these new man-made obstacles to observation.
Visible optical interference
On November 20, 2019, the four-meter (13 ft) Blanco telescope of the Cerro Tololo Inter-American Observatory (CTIO) recorded strong signal loss and the appearance of 19 white lines on a DECam image. This image noise was correlated with the transit of a Starlink satellite train launched a week earlier.
SpaceX representatives and Musk have claimed that the satellites will have minimal impact, being easily mitigated by pixel masking and image stacking. However, professional astronomers have disputed these claims based on initial observation of the Starlink v0.9 satellites on the first launch, shortly after their deployment from the launch vehicle. In later statements on Twitter, Musk stated that SpaceX will work on reducing the albedo of the satellites and will provide on-demand orientation adjustments for astronomical experiments, if necessary. One Starlink satellite (Starlink 1130 / DarkSat) launched with an experimental coating to reduce its albedo. The reduction in g-band magnitude is 0.8 magnitude (55%). Despite these measures, astronomers found that the satellites were still too bright, thus making DarkSat essentially a "dead end".
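The relation between the quoted magnitude change and the percentage figure is the standard Pogson formula; the sketch below uses no Starlink-specific data beyond the 0.8 mag value above. It yields roughly half the original flux, consistent with the reported 55% reduction once rounding of the magnitude is allowed for.

```python
# Converting an astronomical magnitude change into a brightness (flux)
# ratio via the Pogson relation: ratio = 10 ** (-delta_m / 2.5).

def flux_ratio(delta_mag: float) -> float:
    return 10.0 ** (-delta_mag / 2.5)

remaining = flux_ratio(0.8)                # DarkSat's reported g-band dimming
print(f"remaining flux: {remaining:.0%}")  # ~48%, i.e. about half
```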
On April 17, 2020, SpaceX wrote in an FCC filing that it would test new methods of mitigating light pollution, and also provide access to satellite tracking data for astronomers to "better coordinate their observations with our satellites". On April 27, 2020, Musk announced that the company would introduce a new sunshade designed to reduce the brightness of Starlink satellites. By the time of an October 2020 analysis, over 200 Starlink satellites had a sunshade; the analysis found them to be only marginally fainter than DarkSat. A January 2021 study pinned the brightness at 31% of the original design.
According to a May 2021 study, "A large number of fast-moving transmitting stations (i.e. satellites) will cause further interference. New analysis methods could mitigate some of these effects, but data loss is inevitable, increasing the time needed for each study and limiting the overall amount of science done".
In February 2022, the International Astronomical Union (IAU) established a center to help astronomers deal with the adverse effects of satellite constellations such as Starlink. Work will include the development of software tools for astronomers, advancement of national and international policies, community outreach and work with industry on relevant technologies.
In June 2022, the IAU released a website for astronomers to deal with some adverse effects via satellite tracking. This will enable astronomers to be able to track satellites to be able to avoid and time them for minimal impact on current work.
The first batch of Generation 2 spacecraft was launched in February 2023. These satellites are referred to as "Mini" because they are smaller than the full-sized Gen 2 spacecraft that will come later. SpaceX uses brightness mitigation for Gen 2 that includes a mirror-like surface which reflects sunlight back into space and they orient the solar panels so that observers on the ground only see the dark sides.
The Minis are fainter than Gen 1 spacecraft despite being four times as large according to an observational study published in June 2023. They are 44% as bright as VisorSats, 24% compared to V1.5 and 19% compared to the original design which had no brightness mitigation. Minis appear 12 times brighter before they reach the target orbit.
Radio interference
In October 2023, research published in "Astronomy and Astrophysics Letters" reported that Starlink satellites were "leaking radio signals": at the site of the future Square Kilometre Array, radio emissions from Starlink satellites were brighter than any natural source in the sky. The paper concluded that these emissions will be "detrimental to key SKA science goals without future mitigation".
Increased risk of satellite collision
The large number of satellites employed by Starlink may create the long-term danger of space debris resulting from placing thousands of satellites in orbit and the risk of causing a satellite collision, potentially triggering a cascade phenomenon known as Kessler syndrome. SpaceX has said that most of the satellites are launched at a lower altitude, and failed satellites are expected to deorbit within five years without propulsion.
Early in the program, a near-miss occurred when SpaceX did not move a satellite that had a 1 in 1,000 chance of colliding with a European one, ten times higher than the ESA's threshold for avoidance maneuvers. SpaceX subsequently fixed an issue with its paging system that had disrupted emails between the ESA and SpaceX. The ESA said it plans to invest in technologies to automate satellite collision avoidance maneuvers. In 2021, Chinese authorities lodged a complaint with the United Nations, saying their space station had performed evasive maneuvers that year to avoid Starlink satellites. In the document, Chinese delegates said that the continuously maneuvering Starlink satellites posed a risk of collision, and two close encounters with the satellites in July and October constituted dangers to the life or health of astronauts aboard the Chinese Tiangong space station.
All these reported issues, plus current plans for the extension of the constellation, motivated a formal letter from the National Telecommunications and Information Administration (NTIA) on behalf of NASA and the NSF, submitted to the FCC on February 8, 2022, warning about the potential impact on low Earth orbit, increased collision risk, impact on science missions, rocket launches, International Space Station and radio frequencies.
SpaceX satellites will maneuver if the probability of collision is greater than 1 in 100,000, as opposed to the industry standard of 1 in 10,000. SpaceX has budgeted sufficient propellant to accommodate approximately 5,000 propulsive maneuvers over the life of a Gen2 satellite, including a budget of approximately 350 collision avoidance maneuvers per satellite over that time period.
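To see why the stricter threshold matters over a satellite's life, the sketch below treats each of the roughly 5,000 budgeted conjunction events as an independent trial at the threshold probability. Both the independence assumption and the reuse of the maneuver budget as an event count are simplifications for illustration, not SpaceX analysis.

```python
# Cumulative chance of at least one collision if n conjunctions at
# per-event probability p were each left unmitigated (independence assumed).

def p_any_collision(p: float, n: int) -> float:
    return 1.0 - (1.0 - p) ** n

N = 5_000  # the propulsive-maneuver budget quoted above, as an event count
for p in (1e-4, 1e-5):  # industry threshold vs. SpaceX's stricter one
    print(f"threshold {p:.0e}: {p_any_collision(p, N):.1%}")
# -> roughly 39.3% at 1e-4 versus 4.9% at 1e-5
```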
As of May 2022, the average Starlink satellite had conducted fewer than three collision-avoidance maneuvers over the 6 preceding months. Over 1,700 out of 6,873 maneuvers were performed to avoid Kosmos 1408 debris.
Competition and market effects
In addition to the OneWeb constellation, announced nearly concurrently with the SpaceX constellation, a 2015 proposal from Samsung outlined a 4,600-satellite constellation that could provide a zettabyte per month of capacity worldwide, equivalent to 200 gigabytes per month for 5 billion users of Internet data; by 2020, however, no more public information had been released about the Samsung constellation. Telesat announced a smaller 117-satellite constellation in 2015 with plans to deliver initial service in 2021. Amazon announced a large broadband internet satellite constellation in April 2019, planning to launch 3,236 satellites in the following decade in what the company calls "Project Kuiper", a satellite constellation that will work in concert with Amazon's previously announced large network of twelve satellite ground station facilities (the "AWS ground station unit") announced in November 2018.
In February 2015, financial analysts questioned established geosynchronous orbit communications satellite fleet operators as to how they intended to respond to the competitive threat of SpaceX and OneWeb LEO communication satellites. In October 2015, SpaceX President Gwynne Shotwell indicated that while development continues, the business case for the long-term rollout of an operational satellite network was still in an early phase.
By October 2017, the expectation for large increases in satellite network capacity from emerging lower-altitude broadband constellations caused market players to cancel some planned investments in new geosynchronous orbit broadband communications satellites.
SpaceX was challenged regarding Starlink in February 2021 when the National Rural Electric Cooperative Association (NRECA), a political interest group representing traditional rural internet service providers, urged the U.S. Federal Communications Commission (FCC) to "actively, and aggressively, and thoughtfully vet" the subsidy applications of SpaceX and other broadband providers. At the time, SpaceX had provisionally won $886 million for a commitment to provide service to approximately 643,000 locations in 35 states as part of the Rural Digital Opportunity Fund (RDOF). The NRECA criticisms included that the funding allocation to Starlink would cover service to locations, such as Harlem and terminals at Newark Liberty International Airport and Miami International Airport, that are not rural, and that SpaceX was planning to build the infrastructure and serve any customers who request service with or without the FCC subsidy. Additionally, Jim Matheson, chief executive officer of the NRECA, voiced concern about technologies that had not yet been proven to meet the high speeds required for the award category. Starlink was specifically criticized for still being in beta testing and for relying on unproven technology.
While Starlink is deployed worldwide, it has encountered trademark conflicts in some countries such as Mexico and Ukraine.
Similar or competitive systems
OneWeb satellite constellation – a satellite constellation project that began operational deployment of satellites in 2020.
China national satellite internet project – a planned satellite internet offering for the Chinese market.
Kuiper Systems – a planned 3,276 LEO satellite Internet constellation by an Amazon subsidiary.
Hughes Network Systems – a broadband satellite provider providing fixed, cellular backhaul, and airborne antennas.
Viasat, Inc. – a broadband satellite provider providing fixed, ground mobile, and airborne antennas.
O3b and O3b mPOWER – medium Earth orbit constellations that provide maritime, aviation and military connectivity, and cellular backhaul; coverage between latitudes 50°N and 50°S.
See also
Kuiper Systems – Amazon's large internet satellite constellation
AST SpaceMobile – a satellite-to-mobile-phone satellite constellation working with large mobile network operators such as Vodafone, AT&T, Orange, Rakuten, Telstra, Telefónica, etc. with the objective of providing broadband internet coverage to existing unmodified mobile phones
Orbcomm – an operational constellation used to provide global asset monitoring and messaging services from its constellation of 29 LEO communications satellites orbiting at 775 km (480 miles)
Globalstar – an operational low Earth orbit (LEO) satellite constellation for satellite phone and low-speed data communications, covering most of the world's landmass
Iridium – an operational constellation of 66 cross-linked satellites in a polar orbit, used to provide satellite phone and low-speed data services over the entire surface of Earth
Inmarsat – a satellite-based nautical distress network for transmitting telex, fax, and other text messages since 1979 – typically used in nautical and disaster scenarios
Lynk Global – a satellite-to-mobile-phone satellite constellation with the objective of providing coverage to traditional low-cost mobile devices
Teledesic – a former (1990s) venture to accomplish broadband satellite internet services
Project Loon – former concept to provide internet access via balloons in the stratosphere
Satellite internet
Satellite internet constellation
Satellite flare
References
External links
Articles containing video clips
Communications satellite constellations
Communications satellites in low Earth orbit
Communications satellites of the United States
Communications satellite operators
High throughput satellites
Internet service providers
Internet service providers of the United States
Satellite Internet access
SpaceX satellites
Spacecraft launched in 2019
Spacecraft launched in 2020
Spacecraft launched in 2021
Spacecraft launched in 2022
Spacecraft launched in 2023
Spacecraft launched in 2024
Wireless networking
Telecommunications companies of the United States
Technology companies of the United States
Space technology | Starlink | Astronomy,Technology,Engineering | 13,383 |
616,985 | https://en.wikipedia.org/wiki/Computability%20logic | Computability logic (CoL) is a research program and mathematical framework for redeveloping logic as a systematic formal theory of computability, as opposed to classical logic, which is a formal theory of truth. It was introduced and so named by Giorgi Japaridze in 2003.
In classical logic, formulas represent true/false statements. In CoL, formulas represent computational problems. In classical logic, the validity of an argument depends only on its form, not on its meaning. In CoL, validity means being always computable. More generally, classical logic tells us when the truth of a given statement always follows from the truth of a given set of other statements. Similarly, CoL tells us when the computability of a given problem A always follows from the computability of other given problems B1,...,Bn. Moreover, it provides a uniform way to actually construct a solution (algorithm) for such an A from any known solutions of B1,...,Bn.
CoL formulates computational problems in their most general—interactive—sense. CoL defines a computational problem as a game played by a machine against its environment. Such a problem is computable if there is a machine that wins the game against every possible behavior of the environment. Such a game-playing machine generalizes the Church–Turing thesis to the interactive level. The classical concept of truth turns out to be a special, zero-interactivity-degree case of computability. This makes classical logic a special fragment of CoL. Thus CoL is a conservative extension of classical logic. Computability logic is more expressive, constructive and computationally meaningful than classical logic. Besides classical logic, independence-friendly (IF) logic and certain proper extensions of linear logic and intuitionistic logic also turn out to be natural fragments of CoL. Hence meaningful concepts of "intuitionistic truth", "linear-logic truth" and "IF-logic truth" can be derived from the semantics of CoL.
CoL systematically answers the fundamental question of what can be computed and how; thus CoL has many applications, such as constructive applied theories, knowledge base systems, systems for planning and action. Out of these, only applications in constructive applied theories have been extensively explored so far: a series of CoL-based number theories, termed "clarithmetics", have been constructed as computationally and complexity-theoretically meaningful alternatives to the classical-logic-based first-order Peano arithmetic and its variations such as systems of bounded arithmetic.
Traditional proof systems such as natural deduction and sequent calculus are insufficient for axiomatizing nontrivial fragments of CoL. This has necessitated developing alternative, more general and flexible methods of proof, such as cirquent calculus.
Language
The full language of CoL extends the language of classical first-order logic. Its logical vocabulary has several sorts of conjunctions, disjunctions, quantifiers, implications, negations and so-called recurrence operators. This collection includes all connectives and quantifiers of classical logic. The language also has two sorts of nonlogical atoms: elementary and general. Elementary atoms, which are nothing but the atoms of classical logic, represent elementary problems, i.e., games with no moves that are automatically won by the machine when true and lost when false. General atoms, on the other hand, can be interpreted as any games, elementary or non-elementary. Both semantically and syntactically, classical logic is nothing but the fragment of CoL obtained by forbidding general atoms in its language, and forbidding all operators other than ¬, ∧, ∨, →, ∀, ∃.
Japaridze has repeatedly pointed out that the language of CoL is open-ended, and may undergo further extensions. Due to the expressiveness of this language, advances in CoL, such as constructing axiomatizations or building CoL-based applied theories, have usually been limited to one or another proper fragment of the language.
Semantics
The games underlying the semantics of CoL are called static games. Such games have no strict turn order; a player can always move while the other player is "thinking". At the same time, static games never punish a player for "thinking" too long (delaying its own moves), so such games never become contests of speed. All elementary games are automatically static, and so are the games allowed to be interpretations of general atoms.
There are two players in static games: the machine and the environment. The machine can only follow algorithmic strategies, while there are no restrictions on the behavior of the environment. Each run (play) is won by one of these players and lost by the other.
The logical operators of CoL are understood as operations on games. Here we informally survey some of those operations. For simplicity we assume that the domain of discourse is always the set of all natural numbers: {0,1,2,...}.
The operation ¬ of negation ("not") switches the roles of the two players, turning moves and wins by the machine into those by the environment, and vice versa. For instance, if Chess is the game of chess (but with ties ruled out) from the white player's perspective, then ¬Chess is the same game from the black player's perspective.
The parallel conjunction ∧ ("pand") and parallel disjunction ∨ ("por") combine games in a parallel fashion. A run of A∧B or A∨B is a simultaneous play in the two conjuncts. The machine wins A∧B if it wins both of them. The machine wins A∨B if it wins at least one of them. For example, Chess∨¬Chess is a game on two boards, one played with the white pieces and one with the black, where the task of the machine is to win on at least one board. Such a game can be easily won regardless of who the adversary is, by copying the adversary's moves from one board to the other.
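A minimal sketch of this copying strategy, with a hypothetical Board class standing in for actual chess mechanics:

```python
# Copy-cat strategy for Chess ∨ ¬Chess: every adversary move on one board
# is replayed by the machine on the other, so the two runs stay mirror
# images and the machine is guaranteed to win on at least one board.

def copycat(board_as_white, board_as_black, adversary_moves):
    boards = [board_as_white, board_as_black]
    for board_index, move in adversary_moves:   # stream of adversary moves
        boards[1 - board_index].play(move)      # mirror onto the other board
```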
The parallel implication operator → ("pimplication") is defined by A→B = ¬A∨B. The intuitive meaning of this operation is reducing B to A, i.e., solving B as long as the adversary solves A.
The parallel quantifiers ∧ ("pall") and ∨ ("pexists") can be defined by ∧xA(x) = A(0)∧A(1)∧A(2)∧... and ∨xA(x) = A(0)∨A(1)∨A(2)∨.... These are thus simultaneous plays of A(0),A(1),A(2),..., each on a separate board. The machine wins ∧xA(x) if it wins all of these games, and ∨xA(x) if it wins some.
The blind quantifiers ∀ ("blall") and ∃ ("blexists"), on the other hand, generate single-board games. A run of ∀xA(x) or ∃xA(x) is a single run of A. The machine wins ∀xA(x) if such a run is a won run of A(x) for all possible values of x, and wins ∃xA(x) if this is true for at least one value.
All of the operators characterized so far behave exactly like their classical counterparts when they are applied to elementary (moveless) games, and validate the same principles. This is why CoL uses the same symbols for those operators as classical logic does. When such operators are applied to non-elementary games, however, their behavior is no longer classical. So, for instance, if p is an elementary atom and P a general atom, p→p∧p is valid while P→P∧P is not. The principle of the excluded middle P∨¬P, however, remains valid. The same principle is invalid with all three other sorts (choice, sequential and toggling) of disjunction.
The choice disjunction ⊔ ("chor") of games A and B, written A⊔B, is a game where, in order to win, the machine has to choose one of the two disjuncts and then win in the chosen component. The sequential disjunction ("sor") AᐁB starts as A; it also ends as A unless the machine makes a "switch" move, in which case A is abandoned and the game restarts and continues as B. In the toggling disjunction ("tor") A⩛B, the machine may switch between A and B any finite number of times. Each disjunction operator has its dual conjunction, obtained by interchanging the roles of the two players. The corresponding quantifiers can further be defined as infinite conjunctions or disjunctions in the same way as in the case of the parallel quantifiers. Each sort of disjunction also induces a corresponding implication operation the same way as this was the case with the parallel implication →. For instance, the choice implication ("chimplication") A⊐B is defined as ¬A⊔B.
The parallel recurrence ("precurrence") of A can be defined as the infinite parallel conjunction A∧A∧A∧... The sequential ("srecurrence") and toggling ("trecurrence") sorts of recurrences can be defined similarly.
The corecurrence operators can be defined as infinite disjunctions. Branching recurrence ("brecurrence") ⫰, which is the strongest sort of recurrence, does not have a corresponding conjunction. ⫰A is a game that starts and proceeds as A. At any time, however, the environment is allowed to make a "replicative" move, which creates two copies of the then-current position of A, thus splitting the play into two parallel threads with a common past but possibly different future developments. In the same fashion, the environment can further replicate any position of any thread, thus creating more and more threads of A. Those threads are played in parallel, and the machine needs to win A in all threads to be the winner in ⫰A. Branching corecurrence ("cobrecurrence") ⫯ is defined symmetrically by interchanging "machine" and "environment".
Each sort of recurrence further induces a corresponding weak version of implication and weak version of negation. The former is said to be a rimplication, and the latter a refutation. The branching rimplication ("brimplication") A⟜B is nothing but ⫰A→B, and the branching refutation ("brefutation") of A is A⟜⊥, where ⊥ is the always-lost elementary game. Similarly for all other sorts of rimplication and refutation.
As a problem specification tool
The language of CoL offers a systematic way to specify an infinite variety of computational problems, with or without names established in the literature. Below are some examples.
Let f be a unary function. The problem of computing f will be written as ⊓x⊔y(y=f(x)). According to the semantics of CoL, this is a game where the first move ("input") is by the environment, which should choose a value m for x. Intuitively, this amounts to asking the machine to tell the value of f(m). The game continues as ⊔y(y=f(m)). Now the machine is expected to make a move ("output"), which should be choosing a value n for y. This amounts to saying that n is the value of f(m). The game is now brought down to the elementary n=f(m), which is won by the machine if and only if n is indeed the value of f(m).
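This input/output exchange is a simple two-move protocol, and can be mimicked by a toy referee. The sketch below is the author's illustration (the function names are invented for the example), not part of CoL itself:

```python
def play(f, environment_choice, machine_strategy):
    """Referee for the game ⊓x⊔y(y = f(x)): the environment picks m,
    the machine answers n, and the machine wins iff n = f(m)."""
    m = environment_choice        # environment's move: a value for x
    n = machine_strategy(m)       # machine's move: a value for y
    return n == f(m)              # outcome of the elementary game n = f(m)

square = lambda x: x * x
print(play(square, 7, square))         # True: this strategy computes f
print(play(square, 7, lambda m: 0))    # False: a bad strategy loses
```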
Let p be a unary predicate. Then ⊓x(p(x)⊔¬p(x)) expresses the problem of deciding p, ⊓x(¬p(x)ᐁp(x)) expresses the problem of semideciding p, and ⊓x(p(x)⩛¬p(x)) the problem of recursively approximating p.
Let p and q be two unary predicates. Then ⊓x(p(x)⊔¬p(x))⟜⊓x(q(x)⊔¬q(x)) expresses the problem of Turing-reducing q to p (in the sense that q is Turing reducible to p if and only if the interactive problem ⊓x(p(x)⊔¬p(x))⟜⊓x(q(x)⊔¬q(x)) is computable). ⊓x(p(x)⊔¬p(x))→⊓x(q(x)⊔¬q(x)) does the same but for the stronger version of Turing reduction where the oracle for p can be queried only once. ⊓x⊔y(q(x)↔p(y)) does the same for the problem of many-one reducing q to p. With more complex expressions one can capture all kinds of nameless yet potentially meaningful relations and operations on computational problems, such as, for instance, "Turing-reducing the problem of semideciding r to the problem of many-one reducing q to p". Imposing time or space restrictions on the work of the machine, one further gets complexity-theoretic counterparts of such relations and operations.
As a problem solving tool
The known deductive systems for various fragments of CoL share the property that a solution (algorithm) can be automatically extracted from a proof of a problem in the system. This property is further inherited by all applied theories based on those systems. So, in order to find a solution for a given problem, it is sufficient to express it in the language of CoL and then find a proof of that expression. Another way to look at this phenomenon is to think of a formula G of CoL as program specification (goal). Then a proof of G is – more precisely, translates into – a program meeting that specification. There is no need to verify that the specification is met, because the proof itself is, in fact, such a verification.
Examples of CoL-based applied theories are the so-called clarithmetics. These are number theories based on CoL in the same sense as first-order Peano arithmetic PA is based on classical logic. Such a system is usually a conservative extension of PA. It typically includes all Peano axioms, and adds to them one or two extra-Peano axioms such as ⊓x⊔y(y=x′) expressing the computability of the successor function. Typically it also has one or two non-logical rules of inference, such as constructive versions of induction or comprehension. Through routine variations in such rules one can obtain sound and complete systems characterizing one or another interactive computational complexity class C. This is in the sense that a problem belongs to C if and only if it has a proof in the theory. So, such a theory can be used for finding not merely algorithmic solutions, but also efficient ones on demand, such as solutions that run in polynomial time or logarithmic space. It should be pointed out that all clarithmetical theories share the same logical postulates, and only their non-logical postulates vary depending on the target complexity class. Their notable distinguishing feature from other approaches with similar aspirations (such as bounded arithmetic) is that they extend rather than weaken PA, preserving the full deductive power and convenience of the latter.
See also
Game semantics
Interactive computation
Logic
Logics for computability
References
External links
Computability Logic Homepage Comprehensive survey of the subject.
Giorgi Japaridze
Game Semantics or Linear Logic?
Lecture Course on Computability Logic
On abstract resource semantics and computability logic Video lecture by N. Vereshchagin.
A Survey of Computability Logic (PDF) Downloadable equivalent of the above homepage.
Computability theory
Logic in computer science
Non-classical logic | Computability logic | Mathematics | 3,294 |
2,940,855 | https://en.wikipedia.org/wiki/Fiber%20Bragg%20grating | A fiber Bragg grating (FBG) is a type of distributed Bragg reflector constructed in a short segment of optical fiber that reflects particular wavelengths of light and transmits all others. This is achieved by creating a periodic variation in the refractive index of the fiber core, which generates a wavelength-specific dielectric mirror. Hence a fiber Bragg grating can be used as an inline optical filter to block certain wavelengths, can be used for sensing applications, or it can be used as wavelength-specific reflector.
History
The first in-fiber Bragg grating was demonstrated by Ken Hill in 1978. Initially, the gratings were fabricated using a visible laser propagating along the fiber core. In 1989, Gerald Meltz and colleagues demonstrated the much more flexible transverse holographic inscription technique where the laser illumination came from the side of the fiber. This technique uses the interference pattern of ultraviolet laser light to create the periodic structure of the fiber Bragg grating.
Theory
The fundamental principle behind the operation of an FBG is Fresnel reflection, where light traveling between media of different refractive indices may both reflect and refract at the interface.
The refractive index will typically alternate over a defined length. The reflected wavelength (λ_B), called the Bragg wavelength, is defined by the relationship

λ_B = 2 n_eff Λ,

where n_eff is the effective refractive index of the fiber core and Λ is the grating period. The effective refractive index quantifies the velocity of propagating light as compared to its velocity in vacuum. n_eff depends not only on the wavelength but also (for multimode waveguides) on the mode in which the light propagates. For this reason, it is also called the modal index.
The wavelength spacing between the first minima (nulls), or the bandwidth (Δλ), is (in the strong grating limit) given by

Δλ = (2 δn_0 η / π) λ_B,

where δn_0 is the amplitude of the variation in the refractive index and η is the fraction of power in the core. Note that this approximation does not apply to weak gratings, where the grating length, L_g, is not large compared to λ_B / δn_0.
The peak reflection (P_B(λ_B)) is approximately given by

P_B(λ_B) ≈ tanh²(κ L_g),

where κ = π δn_0 η / λ_B is the coupling coefficient and L_g = N Λ is the grating length, N being the number of periodic variations. In coupled-mode theory, the full expression for the reflected power, P_B(λ), is

P_B(λ) = sinh²(γ L_g) / (cosh²(γ L_g) − σ̂²/κ²),

where γ(λ) = √(κ² − σ̂²) and σ̂(λ) = 2π n_eff (1/λ − 1/λ_B) is the detuning from the Bragg condition; at λ = λ_B this reduces to the tanh² formula above.
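As a worked example of the formulas above, here is a small Python calculation. The numbers are the author's illustrative assumptions, typical of a ~1550 nm grating in silica fiber, not figures from this article:

```python
import math

n_eff  = 1.447      # effective refractive index
period = 535.5e-9   # grating period (m)
dn0    = 1e-4       # index-variation amplitude
eta    = 0.8        # fraction of power in the core
L_g    = 10e-3      # grating length (m)

lambda_B = 2 * n_eff * period                     # Bragg wavelength
dlam     = (2 * dn0 * eta / math.pi) * lambda_B   # strong-grating bandwidth
kappa    = math.pi * dn0 * eta / lambda_B         # coupling coefficient (1/m)
R_peak   = math.tanh(kappa * L_g) ** 2            # peak reflectivity

print(f"lambda_B  = {lambda_B * 1e9:.1f} nm")     # ~1549.7 nm
print(f"bandwidth = {dlam * 1e12:.0f} pm")        # ~79 pm
print(f"R_peak    = {R_peak:.2f}")                # ~0.85
```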
Types of gratings
The term type in this context refers to the underlying photosensitivity mechanism by which grating fringes are produced in the fiber. The different methods of creating these fringes have a significant effect on physical attributes of the produced grating, particularly the temperature response and ability to withstand elevated temperatures. Thus far, five (or six) types of FBG have been reported with different underlying photosensitivity mechanisms. These are summarized below:
Standard, or type I, gratings
Type I gratings, usually known as standard gratings, are written in fibers of all types under all hydrogenation conditions. Typically, the reflection spectrum of a type I grating is equal to 1 - T, where T is the transmission spectrum. This means that the reflection and transmission spectra are complementary, and there is negligible loss of light by reflection into the cladding or by absorption. Type I gratings are the most commonly used of all grating types, and the only type of grating available off-the-shelf at the time of writing.
Type IA gratings
Regenerated grating written after erasure of a type I grating in hydrogenated germanosilicate fiber of all types
Type IA gratings were first observed in 2001 during experiments designed to determine the effects of hydrogen loading on the formation of IIA gratings in germanosilicate fiber. In contrast to the anticipated decrease (or 'blue shift') of the gratings' Bragg wavelength, a large increase (or 'red shift') was observed.
Later work showed that the increase in Bragg wavelength began once an initial type I grating had reached peak reflectivity and begun to weaken. For this reason, it was labeled as a regenerated grating.
Determination of the type IA gratings' temperature coefficient showed that it was lower than a standard grating written under similar conditions.
The key difference between the inscription of type IA and IIA gratings is that IA gratings are written in hydrogenated fibers, whereas type IIA gratings are written in non-hydrogenated fibers.
Type IIA, or type In, gratings
These are gratings that form as the negative part of the induced index change overtakes the positive part. It is usually associated with gradual relaxation of induced stress along the axis and/or at the interface. It has been proposed that these gratings could be relabeled type In (for type I gratings with a negative index change; the type II label could then be reserved for those that are distinctly made above the damage threshold of the glass).
Later research by Xie et al. showed the existence of another type of grating with similar thermal stability properties to the type II grating. This grating exhibited a negative change in the mean index of the fiber and was termed type IIA. The gratings were formed in germanosilicate fibers with pulses from a frequency doubled XeCl pumped dye laser. It was shown that initial exposure formed a standard (type I) grating within the fiber which underwent a small red shift before being erased. Further exposure showed that a grating reformed which underwent a steady blue shift whilst growing in strength.
Regenerated gratings
These are gratings that are reborn at higher temperatures after erasure of gratings, usually type I gratings and usually, though not always, in the presence of hydrogen. They have been interpreted in different ways including dopant diffusion (oxygen being the most popular current interpretation) and glass structural change. Recent work has shown that there exists a regeneration regime beyond diffusion where gratings can be made to operate at temperatures in excess of 1,295 °C, outperforming even type II femtosecond gratings. These are extremely attractive for ultra high temperature applications.
Type II gratings
Damage written gratings inscribed by multiphoton excitation with higher intensity lasers that exceed the damage threshold of the glass. Lasers employed are usually pulsed in order to reach these intensities. They include recent developments in multiphoton excitation using femtosecond pulses where the short timescales (commensurate on a timescale similar to local relaxation times) offer unprecedented spatial localization of the induced change. The amorphous network of the glass is usually transformed via a different ionization and melting pathway to give either higher index changes or create, through micro-explosions, voids surrounded by more dense glass.
Archambault et al. showed that it was possible to inscribe gratings of ~100% (>99.8%) reflectance with a single UV pulse in fibers on the draw tower. The resulting gratings were shown to be stable at temperatures as high as 800 °C (up to 1,000 °C in some cases, and higher with femtosecond laser inscription). The gratings were inscribed using a single 40 mJ pulse from an excimer laser at 248 nm. It was further shown that a sharp threshold was evident at ~30 mJ; above this level the index modulation increased by more than two orders of magnitude, whereas below 30 mJ the index modulation grew linearly with pulse energy. For ease of identification, and in recognition of the distinct differences in thermal stability, they labeled gratings fabricated below the threshold as type I gratings and above the threshold as type II gratings. Microscopic examination of these gratings showed a periodic damage track at the grating's site within the fiber [10]; hence type II gratings are also known as damage gratings. However, these cracks can be very localized so as to not play a major role in scattering loss if properly prepared.
Grating structure
The structure of the FBG can vary via the refractive index, or the grating period. The grating period can be uniform or graded, and either localised or distributed in a superstructure. The refractive index has two primary characteristics, the refractive index profile, and the offset. Typically, the refractive index profile can be uniform or apodized, and the refractive index offset is positive or zero.
There are six common structures for FBGs;
uniform positive-only index change,
Gaussian apodized,
raised-cosine apodized,
chirped,
discrete phase shift, and
superstructure.
The first complex grating was made by J. Canning in 1994. This supported the development of the first distributed feedback (DFB) fiber lasers, and also laid the groundwork for most complex gratings that followed, including the sampled gratings first made by Peter Hill and colleagues in Australia.
Apodized gratings
There are basically two quantities that control the properties of the FBG. These are the grating length, L_g, given as

L_g = N Λ,

and the grating strength, δn_0 η. There are, however, three properties that need to be controlled in a FBG. These are the reflectivity, the bandwidth, and the side-lobe strength. As shown above, in the strong grating limit (i.e., for large δn_0 η) the bandwidth depends on the grating strength, and not the grating length. This means the grating strength can be used to set the bandwidth. The grating length, effectively N, can then be used to set the peak reflectivity, which depends on both the grating strength and the grating length. The result of this is that the side-lobe strength cannot be controlled, and this simple optimisation results in significant side-lobes. A third quantity can be varied to help with side-lobe suppression. This is apodization of the refractive index change. The term apodization refers to the grading of the refractive index to approach zero at the end of the grating. Apodized gratings offer significant improvement in side-lobe suppression while maintaining reflectivity and a narrow bandwidth. The two functions typically used to apodize a FBG are Gaussian and raised-cosine; both window profiles are sketched below.
Chirped fiber Bragg gratings
The refractive index profile of the grating may be modified to add other features, such as a linear variation in the grating period, called a chirp. The reflected wavelength changes with the grating period, broadening the reflected spectrum. A grating possessing a chirp has the property of adding dispersion—namely, different wavelengths reflected from the grating will be subject to different delays. This property has been used in the development of phased-array antenna systems and polarization mode dispersion compensation, as well.
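A rough numerical sketch shows why a linear chirp produces dispersion: longer wavelengths satisfy the local Bragg condition deeper in the grating and therefore accumulate a longer round-trip delay. The values below are the author's illustrative assumptions, and a single reflection point per wavelength is assumed:

```python
C_LIGHT = 3.0e8   # speed of light (m/s)

def group_delay(lam, n_eff=1.45, period0=520e-9, chirp=1e-6):
    """Round-trip delay to the depth z where 2*n_eff*(period0 + chirp*z) = lam."""
    z = (lam / (2 * n_eff) - period0) / chirp   # reflection depth (m)
    return 2 * n_eff * z / C_LIGHT              # delay (s)

for lam in (1520e-9, 1535e-9, 1550e-9):
    print(f"{lam * 1e9:.0f} nm -> {group_delay(lam) * 1e12:.0f} ps")
```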
Tilted fiber Bragg gratings
In standard FBGs, the grading or variation of the refractive index is along the length of the fiber (the optical axis), and is typically uniform across the width of the fiber. In a tilted FBG (TFBG), the variation of the refractive index is at an angle to the optical axis. The angle of tilt in a TFBG has an effect on the reflected wavelength, and bandwidth.
Long-period gratings
Typically the grating period is the same size as the Bragg wavelength, as shown above. For a grating that reflects at 1,500 nm, the grating period is 500 nm, using a refractive index of 1.5. Longer periods can be used to achieve much broader responses than are possible with a standard FBG. These gratings are called long-period fiber grating. They typically have grating periods on the order of 100 micrometers, to a millimeter, and are therefore much easier to manufacture.
Phase-shifted fiber Bragg gratings
Phase-shifted fiber Bragg gratings (PS-FBGs) are an important class of gratings structures which have interesting applications in optical communications and sensing due to their special filtering characteristics. These types of gratings can be reconfigurable through special packaging and system design.
Different coatings over the diffractive structure are used for fiber Bragg gratings in order to reduce the mechanical sensitivity of the Bragg wavelength shift by a factor of 1.1–15 compared with an uncoated waveguide.
Addressed fiber Bragg structures
Addressed fiber Bragg structures (AFBS) is an emerging class of FBGs developed in order to simplify interrogation and enhance performance of FBG-based sensors. The optical frequency response of an AFBS has two narrowband notches with the frequency spacing between them being in the radio frequency (RF) range. The frequency spacing is called the address frequency of AFBS and is unique for each AFBS in a system. The central wavelength of AFBS can be defined without scanning its spectral response, unlike conventional FBGs that are probed by optoelectronic interrogators. An interrogation circuit of AFBS is significantly simplified in comparison with conventional interrogators and consists of a broadband optical source, an optical filter with a predefined linear inclined frequency response, and a photodetector.
Manufacture
Fiber Bragg gratings are created by "inscribing" or "writing" a systematic (periodic or aperiodic) variation of refractive index into the core of a special type of optical fiber using an intense ultraviolet (UV) source such as a UV laser. Two main processes are used: interference and masking. The method that is preferable depends on the type of grating to be manufactured. Although polymer optical fibers started gaining research interest in the 2000s, germanium-doped silica fiber is most commonly used. The germanium-doped fiber is photosensitive, which means that the refractive index of the core changes with exposure to UV light. The amount of the change depends on the intensity and duration of the exposure as well as the photosensitivity of the fiber. To write a high-reflectivity fiber Bragg grating directly in the fiber, the level of doping with germanium needs to be high. However, standard fibers can be used if the photosensitivity is enhanced by pre-soaking the fiber in hydrogen.
Interference
This was the first method used widely for the fabrication of fiber Bragg gratings and uses two-beam interference. Here the UV laser is split into two beams which interfere with each other creating a periodic intensity distribution along the interference pattern. The refractive index of the photosensitive fiber changes according to the intensity of light that it is exposed to. This method allows for quick and easy changes to the Bragg wavelength, which is directly related to the interference period and a function of the incident angle of the laser light.
Sequential writing
Complex grating profiles can be manufactured by exposing a large number of small, partially overlapping gratings in sequence. Advanced properties such as phase shifts and varying modulation depth can be introduced by adjusting the corresponding properties of the subgratings. In the first version of the method, subgratings were formed by exposure with UV pulses, but this approach had several drawbacks, such as large energy fluctuations in the pulses and low average power. A sequential writing method with continuous UV radiation that overcomes these problems has been demonstrated and is now used commercially. The photosensitive fiber is translated by an interferometrically controlled air-bearing carriage. The interfering UV beams are focused onto the fiber, and as the fiber moves, the fringes move along the fiber by translating mirrors in an interferometer. As the mirrors have a limited range, they must be reset every period, and the fringes move in a sawtooth pattern. All grating parameters are accessible in the control software, and it is therefore possible to manufacture arbitrary grating structures without any changes in the hardware.
Photomask
A photomask having the intended grating features may also be used in the manufacture of fiber Bragg gratings. The photomask is placed between the UV light source and the photosensitive fiber. The shadow of the photomask then determines the grating structure based on the transmitted intensity of light striking the fiber. Photomasks are specifically used in the manufacture of chirped Fiber Bragg gratings, which cannot be manufactured using an interference pattern.
Point-by-point
A single UV laser beam may also be used to 'write' the grating into the fiber point-by-point. Here, the laser has a narrow beam whose width is equal to the grating period. When femtosecond infrared lasers are used instead, the writing relies on different interaction mechanisms between the laser radiation and the dielectric material: multiphoton absorption and tunnel ionization. This method is specifically applicable to the fabrication of long-period fiber gratings. Point-by-point writing is also used in the fabrication of tilted gratings.
Production
Originally, the manufacture of the photosensitive optical fiber and the 'writing' of the fiber Bragg grating were done separately. Today, production lines typically draw the fiber from the preform and 'write' the grating, all in a single stage. As well as reducing associated costs and time, this also enables the mass production of fiber Bragg gratings. Mass production is in particular facilitating applications in smart structures utilizing large numbers (3000) of embedded fiber Bragg gratings along a single length of fiber.
Applications
Communications
The primary application of fiber Bragg gratings is in optical communications systems. They are specifically used as notch filters. They are also used in optical multiplexers and demultiplexers with an optical circulator, or in an optical add-drop multiplexer (OADM). As an example, consider four channels, carried on four different wavelengths, impinging onto a FBG via an optical circulator. The FBG is set to reflect one of the channels, here channel 4. The signal is reflected back to the circulator, where it is directed down and dropped out of the system. Since the channel has been dropped, another signal on that channel can be added at the same point in the network.
A demultiplexer can be achieved by cascading multiple drop sections of the OADM, where each drop element uses an FBG set to the wavelength to be demultiplexed. Conversely, a multiplexer can be achieved by cascading multiple add sections of the OADM. FBG demultiplexers and OADMs can also be tunable. In a tunable demultiplexer or OADM, the Bragg wavelength of the FBG can be tuned by strain applied by a piezoelectric transducer. The sensitivity of a FBG to strain is discussed below in fiber Bragg grating sensors.
Fiber Bragg grating sensors
As well as being sensitive to strain, the Bragg wavelength is also sensitive to temperature. This means that fiber Bragg gratings can be used as sensing elements in optical fiber sensors. In a FBG sensor, the measurand causes a shift in the Bragg wavelength, Δλ_B. The relative shift in the Bragg wavelength, Δλ_B / λ_B, due to an applied strain (ε) and a change in temperature (ΔT) is approximately given by

Δλ_B / λ_B = C_S ε + C_T ΔT,

or

Δλ_B / λ_B = (1 − p_e) ε + (α_Λ + α_n) ΔT.

Here, C_S is the coefficient of strain, which is related to the strain-optic coefficient p_e. Also, C_T is the coefficient of temperature, which is made up of the thermal expansion coefficient of the optical fiber, α_Λ, and the thermo-optic coefficient, α_n.
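As a quick check of scale, the second form above can be evaluated with typical silica-fiber values; the coefficients below are the author's illustrative assumptions, not values from this article:

```python
lambda_B = 1550e-9   # Bragg wavelength (m)
p_e      = 0.22      # effective strain-optic coefficient
alpha_L  = 0.55e-6   # thermal expansion coefficient (1/K)
alpha_n  = 8.6e-6    # thermo-optic coefficient (1/K)

strain_sens = (1 - p_e) * lambda_B            # shift per unit strain (m)
temp_sens   = (alpha_L + alpha_n) * lambda_B  # shift per kelvin (m/K)

print(f"{strain_sens * 1e-6 * 1e12:.2f} pm per microstrain")  # ~1.2 pm
print(f"{temp_sens * 1e12:.1f} pm per K")                     # ~14 pm
```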
Fiber Bragg gratings can then be used as direct sensing elements for strain and temperature. They can also be used as transduction elements, converting the output of another sensor, which generates a strain or temperature change from the measurand, for example fiber Bragg grating gas sensors use an absorbent coating, which in the presence of a gas expands generating a strain, which is measurable by the grating. Technically, the absorbent material is the sensing element, converting the amount of gas to a strain. The Bragg grating then transduces the strain to the change in wavelength.
Specifically, fiber Bragg gratings are finding uses in instrumentation applications such as seismology, pressure sensors for extremely harsh environments, and as downhole sensors in oil and gas wells for measurement of the effects of external pressure, temperature, seismic vibrations and inline flow measurement. As such they offer a significant advantage over traditional electronic gauges used for these applications in that they are less sensitive to vibration or heat and consequently are far more reliable. In the 1990s, investigations were conducted for measuring strain and temperature in composite materials for aircraft and helicopter structures.
Fiber Bragg gratings used in fiber lasers
Recently the development of high power fiber lasers has generated a new set of applications for fiber Bragg gratings (FBGs), operating at power levels that were previously thought impossible. In the case of a simple fiber laser, the FBGs can be used as the high reflector (HR) and output coupler (OC) to form the laser cavity. The gain for the laser is provided by a length of rare earth doped optical fiber, with the most common form using Yb3+ ions as the active lasing ion in the silica fiber. These Yb-doped fiber lasers first operated at the 1 kW CW power level in 2004 based on free space cavities but were not shown to operate with fiber Bragg grating cavities until much later.
Such monolithic, all-fiber devices are produced by many companies worldwide and at power levels exceeding 1 kW. The major advantage of these all-fiber systems, where the free space mirrors are replaced with a pair of fiber Bragg gratings (FBGs), is the elimination of realignment during the life of the system, since the FBG is spliced directly to the doped fiber and never needs adjusting. The challenge is to operate these monolithic cavities at the kW CW power level in large mode area (LMA) fibers such as 20/400 (20 μm diameter core and 400 μm diameter inner cladding) without premature failures at the intra-cavity splice points and the gratings. Once optimized, these monolithic cavities do not need realignment during the life of the device, removing any cleaning and degradation of fiber surface from the maintenance schedule of the laser. However, the packaging and optimization of the splices and FBGs themselves are non-trivial at these power levels, as is the matching of the various fibers, since the composition of the Yb-doped fiber and the various passive and photosensitive fibers needs to be carefully matched across the entire fiber laser chain. Although the power handling capability of the fiber itself far exceeds this level, and is possibly as high as >30 kW CW, the practical limit is much lower due to component reliability and splice losses.
Process of matching active and passive fibers
In a double-clad fiber there are two waveguides – the Yb-doped core that forms the signal waveguide and the inner cladding waveguide for the pump light. The inner cladding of the active fiber is often shaped to scramble the cladding modes and increase pump overlap with the doped core. The matching of active and passive fibers for improved signal integrity requires optimization of the core/clad concentricity and of the mode field diameter (MFD), through the core diameter and NA, which reduces splice loss. This is principally achieved by tightening all of the pertinent fiber specifications.
Matching fibers for improved pump coupling requires optimization of the clad diameter for both the passive and the active fiber. To maximize the amount of pump power coupled into the active fiber, the active fiber is designed with a slightly larger clad diameter than the passive fibers delivering the pump power. As an example, passive fibers with clad diameters of 395-μm spliced to active octagon shaped fiber with clad diameters of 400-μm improve the coupling of the pump power into the active fiber. An image of such a splice is shown, showing the shaped cladding of the doped double-clad fiber.
The matching of active and passive fibers can be optimized in several ways. The easiest method for matching the signal carrying light is to have identical NA and core diameters for each fiber. This however does not account for all the refractive index profile features. Matching of the MFD is also a method used to create matched signal carrying fibers. It has been shown that matching all of these components provides the best set of fibers to build high power amplifiers and lasers. Essentially, the MFD is modeled and the resulting target NA and core diameter are developed. The core-rod is made and before being drawn into fiber its core diameter and NA are checked. Based on the refractive index measurements, the final core/clad ratio is determined and adjusted to the target MFD. This approach accounts for details of the refractive index profile which can be measured easily and with high accuracy on the preform, before it is drawn into fiber.
See also
Bragg's law
Dielectric mirror
Diffraction
Diffraction grating
Distributed temperature sensing by fiber optics
Hydrogen sensor
Long-period fiber grating
PHOSFOS project – embedding FBGs in flexible skins
Photonic crystal fiber
References
External links
FOSNE - Fibre Optic Sensing Network Europe
Bragg gratings in Subsea infrastructure monitoring
Fiber optics
Diffraction | Fiber Bragg grating | Physics,Chemistry,Materials_science | 5,232 |
22,475 | https://en.wikipedia.org/wiki/Octans | Octans is a faint constellation located in the deep Southern Sky. Its name is Latin for the eighth part of a circle, but it is named after the octant, a navigational instrument. Devised by French astronomer Nicolas Louis de Lacaille in 1752, Octans remains one of the 88 modern constellations. The southern celestial pole is located within the boundaries of Octans.
History and mythology
Octans was one of 14 constellations created by French astronomer Nicolas Louis de Lacaille during a two-year stay at the Cape of Good Hope, where he observed and catalogued almost 10,000 southern stars; he originally named it l’Octans de Reflexion (“the reflecting octant”) in 1752. He devised fourteen new constellations in uncharted regions of the Southern Celestial Hemisphere not visible from Europe. All but one honoured instruments that symbolised the Age of Enlightenment.
It was part of his catalogue of the southern sky, the Coelum Australe Stelliferum, which was published posthumously in 1763. In Europe, it became more widely known as Octans Hadleianus, in honor of English mathematician John Hadley, who invented the octant in 1730. There is no real mythology related to Octans, partially due to its faintness and relative recentness, but mostly because of its extreme southerly latitude.
Notable features
Stars
Octans is a generally inconspicuous constellation with only one star brighter than magnitude 4; its brightest member is Nu Octantis, a spectral class K1 III giant star with an apparent magnitude of 3.73. It is 63.3 ± 0.8 light-years distant from Earth.
Beta Octantis is the second brightest star in the constellation.
Polaris Australis (Sigma Octantis), the southern pole star, is a magnitude 5.4 star just over 1 degree away from the true south celestial pole. Its relative faintness means that it is not practical for navigation.
BQ Octantis is a fainter star, of magnitude 6.82, located much closer to the south celestial pole (less than a degree away) than Sigma.
In addition to containing Earth's current southern pole star, Octans also contains the southern pole star of the planet Saturn, the magnitude 4.3 Delta Octantis.
The Astronomical Society of Southern Africa in 2003 reported that observations of the Mira variable stars R and T Octantis were urgently needed.
Four star systems are known to have planets. Mu2 Octantis is a binary star system, the brighter component of which has a planet. Nu Octantis A also has an orbiting planet. HD 142022 is a binary system, one component of which is a sunlike star hosting a massive planet with an orbital period of 1928 ± 46 days. HD 212301 is a yellow-white main-sequence star with a hot Jupiter that completes an orbit every 2.2 days.
Deep sky objects
NGC 2573 (also known as Polarissima Australis) is a faint barred spiral galaxy that happens to be the closest NGC object to the south celestial pole. NGC 7095 and NGC 7098 are two barred spiral galaxies that are 115 million and 95 million light-years distant from Earth respectively. The sparse open cluster Collinder 411 is also located in the constellation.
Namesakes
was a stores ship used by the United States Navy during World War II.
See also
Octans (Chinese astronomy)
Notes
References
Citations
External links
The Deep Photographic Guide to the Constellations: Octans
The clickable Octans
Starry Night Photography : Octans
Star Tales – Octans
Southern constellations
Constellations listed by Lacaille | Octans | Astronomy | 749 |
8,409,262 | https://en.wikipedia.org/wiki/C6H6Cl6 | The molecular formula C6H6Cl6 may refer to:
Hexachlorocyclohexane
alpha-Hexachlorocyclohexane
beta-Hexachlorocyclohexane
gamma-Hexachlorocyclohexane, a.k.a. Lindane, a pesticide
Polyvinylidene chloride | C6H6Cl6 | Chemistry | 94 |
546,920 | https://en.wikipedia.org/wiki/Seven-segment%20display | A seven-segment display is a form of electronic display device for displaying decimal numerals that is an alternative to the more complex dot matrix displays.
Seven-segment displays are widely used in digital clocks, electronic meters, basic calculators, and other electronic devices that display numerical information.
History
Seven-segment representation of figures can be found in patents as early as 1903, when Carl Kinsley invented a method of telegraphically transmitting letters and numbers and having them printed on tape in a segmented format. In 1908, F. W. Wood invented an 8-segment display, which displayed the number 4 using a diagonal bar. In 1910, a seven-segment display illuminated by incandescent bulbs was used on a power-plant boiler room signal panel. They were also used to show the dialed telephone number to operators during the transition from manual to automatic telephone dialing. They did not achieve widespread use until the advent of LEDs in the 1970s.
Some early seven-segment displays used incandescent filaments in an evacuated bulb; they are also known as numitrons. A variation (minitrons) made use of an evacuated potted box. Minitrons are filament segment displays that are housed in DIP (dual in-line package) packages like modern LED segment displays. They may have up to 16 segments. There were also segment displays that used small incandescent light bulbs instead of LEDs or incandescent filaments. These worked similarly to modern LED segment displays.
Vacuum fluorescent display versions were also used in the 1970s.
Many early (c. 1970s) LED seven-segment displays had each digit built on a single die. This made the digits very small. Some included magnifying lenses in the design to try to make the digits more legible. Other designs used 1 or 2 dies for every segment of the display.
The seven-segment pattern is sometimes used in posters or tags, where the user either applies color to pre-printed segments, or applies color through a seven-segment digit template, to compose figures such as product prices or telephone numbers.
For many applications, dot-matrix liquid-crystal displays (LCDs) have largely superseded LED displays in general, though even in LCDs, seven-segment displays are common. Unlike LEDs, the shapes of elements in an LCD panel are arbitrary since they are formed on the display by photolithography. In contrast, the shapes of LED segments tend to be simple rectangles, because they have to be physically moulded to shape, which makes it difficult to form more complex shapes than the segments of seven-segment displays. However, the easy recognition of seven-segment displays, and the comparatively high visual contrast obtained by such displays relative to dot-matrix digits, makes seven-segment multiple-digit LCD screens very common on basic calculators.
The seven-segment display has inspired type designers to produce typefaces reminiscent of that display (but more legible), such as New Alphabet, "DB LCD Temp", "ION B", etc.
Using a restricted range of letters that look like (upside-down) digits, seven-segment displays are commonly used by school children to form words and phrases using a technique known as "calculator spelling".
Implementations
Seven-segment displays may use a liquid-crystal display (LCD), a light-emitting diode (LED) for each segment, an electrochromic display, or other light-generating or -controlling techniques such as cold cathode gas discharge (neon) (Panaplex), vacuum fluorescent (VFD), incandescent filaments (Numitron), and others. For gasoline price totems and other large signs, electromechanical seven-segment displays made up of electromagnetically flipped light-reflecting segments are still commonly used. A precursor to the 7-segment display in the 1950s through the 1970s was the cold-cathode, neon-lamp-like nixie tube. Starting in 1970, RCA sold a display device known as the Numitron that used incandescent filaments arranged into a seven-segment display. In the USSR, the first electronic calculator, "Vega", produced from 1964, contained a 20-digit seven-segment electroluminescent display.
In a simple LED package, typically all of the cathodes (negative terminals) or all of the anodes (positive terminals) of the segment LEDs are connected and brought out to a common pin; this is referred to as a "common cathode" or "common anode" device. Hence a 7 segment plus decimal point package will only require nine pins, though commercial products typically contain more pins, and/or spaces where pins would go, in order to match standard IC sockets. Integrated displays also exist, with single or multiple digits. Some of these integrated displays incorporate their own internal decoder, though most do not: each individual LED is brought out to a connecting pin as described.
Multiple-digit LED displays as used in pocket calculators and similar devices used multiplexed displays to reduce the number of I/O pins required to control the display. For example, all the anodes of the A segments of each digit position would be connected together and to a driver circuit pin, while the cathodes of all segments for each digit would be connected. To operate any particular segment of any digit, the controlling integrated circuit would turn on the cathode driver for the selected digit, and the anode drivers for the desired segments; then after a short blanking interval the next digit would be selected and new segments lit, in a sequential fashion. In this manner an eight digit display with seven segments and a decimal point would require only 8 cathode drivers and 8 anode drivers, instead of sixty-four drivers and IC pins. Often in pocket calculators the digit drive lines would be used to scan the keyboard as well, providing further savings; however, pressing multiple keys at once would produce odd results on the multiplexed display.
Although to the naked eye all digits of an LED display appear lit, only one digit is lit at any given time in a multiplexed display. The digit changes at a high enough rate that the human eye cannot see the flashing (on earlier devices it could be visible with peripheral vision).
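A short sketch shows the scanning loop just described. This is a toy Python model, not firmware for any particular device: the callbacks `set_segments` and `select_digit`, and the timing, are the author's assumptions standing in for real port writes.

```python
import time

# Common-cathode gfedcba codes for the decimal digits (see the
# Characters section below for how these bytes are formed).
DIGIT_CODES = [0x3F, 0x06, 0x5B, 0x4F, 0x66, 0x6D, 0x7D, 0x07, 0x7F, 0x6F]

def refresh(value, set_segments, select_digit, dwell=0.002):
    """Light one digit at a time; repeated fast enough, all appear lit."""
    for pos, ch in enumerate(str(value)):
        select_digit(pos)                    # enable this digit's cathode driver
        set_segments(DIGIT_CODES[int(ch)])   # drive the shared segment anodes
        time.sleep(dwell)                    # hold briefly before moving on
        set_segments(0x00)                   # blank during the changeover
        select_digit(None)
```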
Characters
The seven segments are arranged as a rectangle, with two vertical segments on each side and one horizontal segment each at the top, middle, and bottom. Often the rectangle is oblique (slanted), which may aid readability. In most applications, the segments are of nearly uniform shape and size (usually elongated hexagons, though trapezoids and rectangles can also be used); though in the case of adding machines, the vertical segments are longer and more oddly shaped at the ends, to try to make them more easily readable. The seven elements of the display can be lit in different combinations to represent each of the Arabic numerals.
The individual segments are referred to by the letters "a" to "g", and an optional decimal point (an "eighth segment", referred to as DP) is sometimes used for the display of non-integer numbers. A single byte can encode the full state of a seven-segment display, including the decimal point. The most popular bit encodings are gfedcba and abcdefg. In the gfedcba representation, a byte value of 0x06 would turn on segments "c" and "b", which would display a "1".
Decimal
The numerical digits 0 to 9 are the most common characters displayed on seven-segment displays. The most common patterns used for each of these digits are given, in the gfedcba encoding described above, in the sketch below.
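The following Python table lists the widely used gfedcba byte codes (bit 0 = segment a through bit 6 = segment g). These are common conventions, though individual devices may differ, and the helper function is the author's own:

```python
GFEDCBA = {
    "0": 0x3F, "1": 0x06, "2": 0x5B, "3": 0x4F, "4": 0x66,
    "5": 0x6D, "6": 0x7D, "7": 0x07, "8": 0x7F, "9": 0x6F,
    # Hexadecimal letters; 'b' and 'd' are lowercase so they do not
    # clash with '8' and '0' (see the Hexadecimal section below).
    "A": 0x77, "b": 0x7C, "C": 0x39, "d": 0x5E, "E": 0x79, "F": 0x71,
}

def segments_on(code):
    """List the lit segments of a gfedcba byte, e.g. 0x06 -> ['b', 'c']."""
    return [seg for i, seg in enumerate("abcdefg") if (code >> i) & 1]

print(segments_on(GFEDCBA["1"]))   # ['b', 'c'], matching the 0x06 example above
```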
Alternative patterns: the numeral 1 may be represented with the left-hand segments, the numerals 6 and 9 may be drawn without a 'tail', and the numeral 7 may be drawn with a 'tail'.
In Unicode 13.0, 10 codepoints had been given for segmented digits 0–9 in the Symbols for Legacy Computing block, to replicate early computer fonts that included seven-segment versions of the digits. The official reference shows the less-common four-segment "7".
Hexadecimal
The binary-coded decimal (BCD) digit values 0 to 9 require four binary bits to hold their values. Since four bits (2⁴) can hold 16 values, hexadecimal (hex) digits can be represented by four bits too. Since there are a limited number of segments in seven-segment displays, some of the hexadecimal digits must be displayed as lowercase letters: otherwise the uppercase letter "B" would look the same as the digit "8", and the uppercase letter "D" the same as the digit "0". The digit "6" must also be displayed with the topmost segment lit, to avoid ambiguity with the letter "b".
Early decoder ICs often produced random patterns or duplicates of digits for the values 10–15, as they were designed to use as few gates as possible and were only required to produce 0–9.
Letters
Many letters of the Latin alphabet can be reasonably implemented on a seven-segment display. Though not every letter is available, it is possible to create many useful words, and by careful choice of words one can sometimes work around unavailable letters. The uppercase letters "I", "O", "S", "Z" conflict with the common seven-segment representations of the digits "1", "0", "5", "2", and the lowercase letter "g" conflicts with the digit "9". Upper case could be put on the left, but this is not often done. Lowercase 'b' and 'q' are identical to the alternate numerical digits '6' and '9'.
[Table not reproduced: seven-segment renderings of the Latin alphabet, A–Z, in upper and lower case. The highlighted cells (upper-case I, O, S, Z and lower-case g) mark the conflicts with digit shapes noted above.]
Real-world English words built from these letters are seen on actual electronic equipment; some CD players, for example, spell out status words on their seven-segment displays.
There are also fourteen- and sixteen-segment displays (for full alphanumerics); however, these have mostly been replaced by dot matrix displays. 22-segment displays capable of displaying the full ASCII character set were briefly available in the early 1980s but did not prove popular.
See also
Eight-segment display
Nine-segment display
Fourteen-segment display
Sixteen-segment display
Dot matrix display
Nixie tube display
Vacuum fluorescent display
References
External links
Interactive Demonstration of a Seven Segment Display
Interfacing Seven Segment Display to 8051 Microcontroller
Interfacing 7-Segment Display with AVR Microcontroller
Display technology | Seven-segment display | Engineering | 2,461 |
1,132,764 | https://en.wikipedia.org/wiki/Westwork | A westwork (), forepart, avant-corps or avancorpo is the monumental, west-facing entrance section ("west front") of a Carolingian, Ottonian, or Romanesque church. The exterior consists of multiple stories between two towers. The interior includes an entrance vestibule, a chapel, and a series of galleries overlooking the nave. A westwork is usually broader than the width of the nave and aisles. It is sometimes used synonymously with narthex. The structural purpose of the massive westwork is to resolve the horizontal thrust of the east-to-west arcades of the nave. Church towers as a part of a church began with the construction of the first westworks.
Charlemagne dreamt of reviving the Roman Empire in the West, and this ambition shaped the art of his reign: artwork of the kind he promoted was incorporated into buildings with westworks, and examples can still be found in Corvey Abbey and in other westwork buildings today.
Corvey Abbey (built in 885), located in Germany, contains the oldest surviving example of a westwork. The frescos inside the westwork, originally of the 9th century, show scenes from the Odyssey. The King, later the Emperor, and his entourage lodged in the westwork when visiting the abbey during their travels around the country; their quarters, on the upper (second) story, are known as the Kaiserloge. The central room on the main floor, surrounded on all three sides by galleries, and the arch in the entrance hall of the abbey show the use of ancient styles in this period. The westwork of Corvey provided a basis in the following years for further architectural developments in the Romanesque and Gothic periods.
A nymphaeum known as the Madonna della Fiora near Rome, the primary source of Trajan's Aqueduct (the Aqua Traiana), is documented in the Historical Diocesan Archive of Nepi and Sutri as having been converted into a church in medieval times by constructing a westwork: "It was adapted to a church by building a two-floor masonry forepart: the lower floor as the facade of the church; the upper floor as residence of the parish priest divided into 5 rooms."
The feature was introduced into Norman architecture in the 11th century by Robert of Jumièges at the church of Jumièges Abbey, consecrated in 1067. The pattern was continued in German Gothic architecture.
References
Sources
External links
Church architecture
Architectural elements
Carolingian architecture
Romanesque architecture | Westwork | Technology,Engineering | 535 |
15,723,187 | https://en.wikipedia.org/wiki/Terra%20Incognita%3A%20The%20Perils%20and%20Promise%20of%20Stem%20Cell%20Research | Terra Incognita: The Perils and Promise of Stem Cell Research, also known as Terra Incognita: Mapping Stem Cell Research, is a documentary film released by Kartemquin Films in 2007. The film follows Dr. Jack Kessler of Northwestern University in his search for a cure for spinal cord injuries using embryonic stem cells. When Kessler was invited to head up the Neurology Department at Northwestern, his focus was on using stem cells to help cure diabetes. However, soon after his move to Chicago, his daughter Allison – then age 15, was injured in a skiing accident and paralyzed from the waist down. In the moments following the accident, Dr. Kessler made the decision to change the focus of his research to begin looking for a cure for spinal cord injuries using embryonic stem cells.
Kessler's story brings the stem cell debate to the public for discussion. The film follows the constantly evolving interplay between the promise of new discoveries, the controversy of modern science and the resilience and courage of people living every day with devastating disease and injury.
The film was directed by Maria Finitzo (5 Girls), and was broadcast on PBS' award-winning series Independent Lens in 2008. Terra Incognita won a 2008 Peabody Award recognizing the film's uncompromising look at stem-cell research. The film also won Best Documentary Feature at the 2009 Kos International Health Film Festival in Greece.
References
External links
Documentary films about science
2007 documentary films
Stem cell research
Documentary films about health care
Documentary films about Chicago
Kartemquin Films films
2000s English-language films
2000s American films
English-language documentary films | Terra Incognita: The Perils and Promise of Stem Cell Research | Chemistry,Biology | 336 |
64,005,213 | https://en.wikipedia.org/wiki/Ponting%20Bridges | Ponting Bridges is a Slovenian studio for structural engineering, focusing mainly on bridge structures, with headquarters in Maribor. The practice is led by a duo of its founders, Dr Viktor Markelj and Marjan Pipenbaher, and has constructed many high-profile bridges. These include Ada Bridge in Belgrade (2012), Pelješac Bridge (2022), drawbridge in Gdansk (2017), Nissibi Euphrates Bridge in Turkey (2015), Puch Bridge in Ptuj (2007) and Črni Kal Viaduct in Slovenia (2004).
History
The studio was established by Dr Viktor Markelj and Marjan Pipenbaher as Ponting inženirski biro in 1990, after both left structural engineering company Gradis.
Major projects
Major projects, by year of completion and ordered by type, are:
Bridges
Carinthian bridge, Maribor, Slovenia (1996)
Bridge over Mura River, highway Vučja vas - Beltinci, Slovenia (2003)
Črni Kal Viaduct, Slovenia (2004)
Viaduct Bivje, Slovenia (2004)
Millennium Bridge, Podgorica, Montenegro (2006)
Puch Bridge, Ptuj, Slovenia (2007)
Viaduct Bonifika, Koper, Slovenia (2007)
Viaduct Šumljak, highway Razdrto - Selo, Slovenia (2009)
Viaduct Dobruša, Slovenia (2010)
Viaduct Lešnica North / South, Slovenia (2007/2011)
Ada Bridge, Belgrade, Serbia (2012)
Peračica viaducts, Slovenia (2012)
Giborim bridge, Haifa, Israel (2012)
Nissibi Euphrates Bridge, highway Adiyaman - Diyarbakir, Turkey (2015)
High speed railway bridge no. 10, HSR Tel Aviv - Jerusalem, Israel (2017)
NAR Viaducts, Belgrade, Serbia (2018)
Pelješac Bridge, Croatia (2022)
Over- and underpasses
Arch overpass 4-3 in Kozina, Slovenia (1997)
Underpass in Celje, Slovenia (2004)
Overpass 4-6 in Slivnica, Slovenia (2008)
Viaduct/overpass Grobelno, Slovenia (2015)
Pedestrian and cyclist bridges
Footbridge in Ptuj, Slovenia (1997)
Footbridge over Soča, Bovec, Slovenia (2007)
Studenci footbridge, Maribor, Slovenia (2007)
Marinič footbridge, Škocjan Caves Park, Slovenia (2010)
Ribja brv, Ljubljana, Slovenia (2014)
Pedestrian and cyclist drawbridge to Ołowianka Island, Gdansk, Poland (2017)
Langur Way Canopy Walk, Penang Hill, Malaysia (2018)
Pedestrian and cycle bridge in Tremerje, Laško, Slovenia (2019)
Tunnels and galleries
Tunnel Malečnik, Maribor, Slovenia (2009)
Arcade gallery Meljski hrib, Maribor, Slovenia (2012)
Current
Kömürhan Bridge, Turkey (under construction)
Ada Huja Bridge, Belgrade, Serbia (preliminary design)
Highway bridge and parallel pedestrian bridge over Krka river, Slovenia (construction completed, finishing works)
Railway viaduct Pesnica, Slovenia (preliminary design)
Selected works
Awards
2019 Jožef Mrak Award for Pelješac Bridge
2019 Honorary City Certificate of Slovenska Bistrica to Dr. Viktor Markelj and Marjan Pipenbaher
2019 Polish Minister of Investment and Development Award to Footbridge to Ołowianka Island in Gdansk
2018 City of Gdansk Award to Footbridge to Ołowianka Island in Gdansk
2015 SCE Award to Viaduct Grobelno
2012 WEF Award to Ada Bridge Belgrade
2012 CES AWARD to Ada Bridge Belgrade
2012 AAB Award to Ada Bridge Belgrade
2011 Footbridge Award to Marinic Bridge
2009 City seal of Maribor to Studenci Footbridge Maribor
2009 Award CSS of CCIS to Studenci Footbridge Maribor
2008 Footbridge Award to Studenci Footbridge Maribor
2007 SCE Award to Puch Bridge over Drava in Ptuj
2004 SCE Award to Bridge over Mura River
2004 UM Award 2004: Golden recognition award to Mr. Marjan Pipenbaher and Mr. Viktor Markelj
1999 Award CSS of CCIS to Footbridge in Ptuj
References
External links
Ponting Bridges Website
Researchgate Website, Viktor Markelj's publications
Bridge design
Slovenian architects
Companies based in Maribor
1990 establishments in Slovenia | Ponting Bridges | Engineering | 907 |
58,125,800 | https://en.wikipedia.org/wiki/R.%20Cengiz%20Ertekin | R. Cengiz Ertekin is a professor of Marine Hydrodynamics and Ocean Engineering. He currently holds a guest professor position at Harbin Engineering University of China. He is best known for his contributions to the development of nonlinear water wave theories, hydroelasticity of very large floating structures (VLFS), wave energy, and tsunami and storm impact on coastal bridges. He is also the co-developer, along with Professor H. Ronald Riggs of the University of Hawaiʻi, of the computer program HYDRAN for solving linear fluid-structure interaction problems of floating and fixed bodies.
Early life and education
R. Cengiz Ertekin was born and raised in Turkey. He received a B.Sc. degree in Naval Architecture and Marine Engineering from Istanbul Technical University, one of Turkey's leading technical universities, in 1977. Following the encouragement of his advisor, Prof. M. Cengiz Dokmeci, he moved to the Department of Naval Architecture and Offshore Engineering of the University of California, Berkeley, United States, for graduate study. He received his M.Sc. and Ph.D. degrees in 1980 and 1984, respectively. His M.Sc. advisors were Professors Marshall P. Tulin and William C. Webster; his Ph.D. advisor was Professor John V. Wehausen, of whom Ertekin was the last student before Wehausen's retirement. After graduation, Professor Wehausen offered Ertekin an 18-month postdoctoral research assistant position at U.C. Berkeley.
Professional career
Most of Ertekin's professional career has been dedicated to academic work; however, he also has several years of experience of working in the industry.
In 1985, Ertekin joined the Research Center of Shell Development Company in Houston, Texas. He took a faculty position (hired at the associate professor level) at the Department of Ocean Engineering of the University of Hawaiʻi at Mānoa in 1986, and received tenure within four years and was promoted to Professor in 1994. The Ocean Engineering Department of UH was established by Professor Charles Bretschneider in 1966 and is one of the first of its kind in the US.
At the University of Hawaiʻi, Ertekin led and contributed immensely to the success of the School of Ocean and Earth Science and Technology and the Department of Ocean and Resources Engineering (ORE, formerly Ocean Engineering). In the era of PCs, for example, Professor Ertekin played a key role in transforming the department from one focused mostly on field and experimental studies into a leading institute in modern computational hydrodynamics as well. The department has hosted some internationally leading conferences, workshops, and meetings, mostly organized and chaired by Ertekin.
After almost 30 years, he retired from the University of Hawaiʻi in September 2015. Starting in March 2014, he became a guest professor at the College of Shipbuilding Engineering of Harbin Engineering University in China.
Teaching and advising
Ertekin has taught numerous courses on hydrodynamics and ocean engineering at the University of Hawaiʻi at Mānoa, and at University of California, Berkeley.
At the Ocean Engineering Department of the University of Hawaiʻi, Ertekin developed and taught several courses including Nonlinear water wave theories (ORE 707), Hydrodynamics of Fluid-Body Interaction (ORE 609), Buoyancy and Stability (ORE 411), and Marine Renewable Energy (ORE 677), to name a few. At the University of California, Berkeley, he taught Ship Statics (NAOE 151) and Ship Resistance and Propulsion (NAOE 152A).
At the University of Hawaiʻi, Ertekin advised and mentored over 50 graduate students.
Research
Ertekin's research on Marine Hydrodynamics and Ocean Engineering has extended over a period of about forty years. His work covers both basic and applied research through analytical, computational, and experimental approaches. Below are examples of his pioneering contributions. Other topics of significant research contribution by Ertekin include ship resistance, marine energy, and oil spills.
The Green-Naghdi water wave theory
The Green-Naghdi (GN) equations are nonlinear water wave equations that were originally developed by British mathematician Albert E. Green and Iranian-American mechanical engineer Paul M. Naghdi in the 1970s. The original equations, namely the Level I GN equations, are mostly applicable to the propagation of long waves in shallow waters. However, high-level GN equations have also been developed which are applicable to deep-water waves. The equations differ from the classical water wave theories (e.g. the Boussinesq equations) in that the flow need not be irrotational, and in that no perturbation expansion is used in deriving the equations. Hence, the GN equations satisfy the nonlinear boundary conditions exactly and postulate the integrated conservation laws. Although the GN equations were developed relatively recently compared with other wave theories, they are well known and fairly well understood by the research and scientific community.
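For reference, one commonly quoted form of the Level I GN equations (often also called the Serre or Su-Gardner equations) describes one-dimensional flow over a flat, impermeable bottom; this is a standard textbook statement rather than a formula taken from this article, with $h$ the total water depth, $u$ the depth-averaged horizontal velocity, and $g$ the gravitational acceleration:

$$h_t + (hu)_x = 0,$$

$$u_t + u\,u_x + g\,h_x = \frac{1}{3h}\Big[h^3\big(u_{xt} + u\,u_{xx} - u_x^2\big)\Big]_x.$$

The first equation expresses exact mass conservation; the bracketed term in the second is the non-hydrostatic correction that distinguishes the GN system from the classical shallow-water equations.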
Ertekin's Ph.D. advisor and dissertation committee chair was Professor Wehausen. Others on his Ph.D. committee were Professor William Webster, and Professor Paul M. Naghdi. Working under close guidance of his advisors, he was one of the first to use the nonlinear equations (that were introduced just a couple of years earlier by Profs. Green and Naghdi). In his Ph.D. dissertation, Ertekin was the first to give the equations in now a familiar form to the hydrodynamics community by providing closed-form relations for the pressures. He named the equations, The Green-Naghdi Equations.
Upon completion of his Ph.D., Ertekin continued research on the GN equations. He patiently introduced the GN equations to his graduate students and postdoctoral researchers and guided many of them in performing basic and applied research on, or by use of, the GN equations. Along with his research assistants and postdocs, they developed the Irrotational GN (IGN) equations and high-level GN equations. They have solved some classical and challenging hydrodynamics problems by use of the GN equations, including nonlinear wave diffraction and refraction, nonlinear wave loads on vertical cylinders, wave interaction with elastic bodies and VLFS, wave loads on coastal bridges, and wave interaction with wave energy devices, among many others.
Hydroelasticity and VLFS
The Mobile Offshore Base (MOB) project of the USA and the Mega-Float project of Japan are two examples of Very Large Floating Structures (VLFS). These are very large floating platforms consisting of interconnected modules whose total length can extend to several kilometers. Due to the unprecedented length, displacement, and associated hydroelastic response of VLFS, the state-of-the-art analysis and design approaches that were used for smaller floating platforms were not adequate. It quickly became obvious that new approaches had to be developed to tackle the complex problems associated with the dynamics and response of VLFS.
Starting in the 1990s, Ertekin pioneered research on the hydroelasticity of VLFS. He and H. Ronald Riggs of the Civil Engineering Department at the University of Hawaiʻi coined the term VLFS. They solved the hydroelasticity problem of VLFS by use of both linear and nonlinear approaches, in two and three dimensions. Ertekin also introduced new approaches and equations to study this topic, including the use of nonlinear water wave models to analyse the hydroelastic response of mat-type VLFS.
His work on the hydroelasticity of VLFS opened a new era for these topics and gave more confidence in understanding the dynamics and response of such structures.
Wave loads on coastal bridges
Some recent tsunamis and hurricanes, such as the Tōhoku tsunami in Japan (2011) and Hurricane Katrina in the United States (2005), caused significant damage to the decks of coastal bridges and structures. The interaction of surface waves with coastal bridges is a complex problem, involving fluid-structure interaction, multi-phase flows, wave breaking, and overtopping, in addition to the difficulties associated with the structural analysis. Ertekin and his students studied bridge failure mechanisms and possible mitigating solutions. They developed models used to assess the vulnerability of coastal bridges in the USA to tsunamis, storm surges, and waves.
Publications and professional services
Ertekin has over 150 peer-reviewed publications.
He has served on the editorial boards of more than ten internationally leading journals since the early 1990s, and has edited several special issues of various journals, e.g. the Renewable Energy: Leveraging Ocean and Waterways special issue of the Applied Ocean Research journal (2009). He was co-editor-in-chief of Elsevier's Ocean Engineering journal (2006–2010), and he is the founding editor-in-chief of Springer's Journal of Ocean Engineering and Marine Energy. Ertekin has been a keynote speaker at several leading meetings and conferences.
References
1954 births
Living people
Fluid dynamicists
Scientific journal editors
Shell plc people
People from Turgutlu
Istanbul Technical University alumni
University of California, Berkeley alumni
University of Hawaiʻi at Mānoa faculty
UC Berkeley College of Engineering faculty
Academic staff of Harbin Engineering University | R. Cengiz Ertekin | Chemistry | 1,981 |
76,702,336 | https://en.wikipedia.org/wiki/NGC%205278 | NGC 5278 is a spiral galaxy in the constellation Ursa Major. It was discovered by German-British astronomer William Herschel in 1789.
NGC 5278 is in gravitational interaction with the galaxy NGC 5279. This pair of galaxies appears in Halton Arp's Atlas of Peculiar Galaxies under the designation Arp 239. The luminosity class of NGC 5278 is II. The nucleus of this galaxy presents a burst of star formation (it is a starburst nucleus galaxy, SBNG) and it is an active galaxy of Seyfert 2 type. In addition, NGC 5278 is possibly a LINER galaxy, a galaxy whose nucleus presents an emission spectrum characterized by broad lines of weakly ionized atoms. NGC 5278 is also a galaxy whose core shines in the ultraviolet spectrum. It is listed in the Markarian catalog under the reference Mrk 271 (MK 271).
Supernovae
Two supernovae have been observed in NGC 5278: SN 2001ai (type Ic, mag. 17.6) and SN 2019cec (type II, mag. 18.26).
See also
List of NGC objects (5001–6000)
New General Catalogue
References
External links
NGC 5278 at SIMBAD
NGC 5278 at LEDA
Spiral galaxies
Interacting galaxies
Ursa Major
Discoveries by William Herschel
Astronomical objects discovered in 1789
5278
08677
Markarian galaxies
239
048473
Peculiar galaxies
+09-22-101 | NGC 5278 | Astronomy | 306 |
30,341,295 | https://en.wikipedia.org/wiki/Utah%20Data%20Center | The Utah Data Center (UDC), also known as the Intelligence Community Comprehensive National Cybersecurity Initiative Data Center, is a data storage facility for the United States Intelligence Community that is designed to store data estimated to be on the order of exabytes or larger. Its purpose is to support the Comprehensive National Cybersecurity Initiative (CNCI), though its precise mission is classified. The National Security Agency (NSA) leads operations at the facility as the executive agent for the Director of National Intelligence. It is located at Camp Williams near Bluffdale, Utah, between Utah Lake and Great Salt Lake and was completed in May 2014 at a cost of $1.5 billion.
Purpose
Critics believe that the data center has the capability to process "all forms of communication, including the complete contents of private emails, cell phone calls, and Internet searches, as well as all types of personal data trails—parking receipts, travel itineraries, bookstore purchases, and other digital 'pocket litter'." In response to claims that the data center would be used to illegally monitor email of U.S. citizens, in April 2013 an NSA spokesperson said, "Many unfounded allegations have been made about the planned activities of the Utah Data Center, ... one of the biggest misconceptions about NSA is that we are unlawfully listening in on, or reading emails of, U.S. citizens. This is simply not the case."
In April 2009, officials at the United States Department of Justice acknowledged that the NSA had engaged in large-scale overcollection of domestic communications in excess of the United States Foreign Intelligence Surveillance Court's authority, but claimed that the acts were unintentional and had since been rectified.
In August 2012, The New York Times published short documentaries by independent filmmakers titled The Program, based on interviews with former NSA technical director and whistleblower William Binney. The project had been designed for foreign signals intelligence (SIGINT) collection, but Binney alleged that after the September 11 terrorist attacks, controls that limited unintentional collection of data pertaining to U.S. citizens were removed, prompting concerns by him and others that the actions were illegal and unconstitutional. Binney alleged that the Bluffdale facility was designed to store a broad range of domestic communications for data mining without warrants.
Documents leaked to the media in June 2013 described PRISM, a national security computer and network surveillance program operated by the NSA, as enabling in-depth surveillance on live Internet communications and stored information. Reports linked the data center to the NSA's controversial expansion of activities, which store extremely large amounts of data. Privacy and civil liberties advocates raised concerns about the unique capabilities that such a facility would give to intelligence agencies. "They park stuff in storage in the hopes that they will eventually have time to get to it," said James Lewis, a cyberexpert at the Center for Strategic and International Studies, "or that they'll find something that they need to go back and look for in the masses of data." But, he added, "most of it sits and is never looked at by anyone."
The UDC was expected to store Internet data, as well as telephone records from the controversial NSA telephone call database, MAINWAY, when it opened in 2013.
In light of the controversy over the NSA's involvement in the practice of mass surveillance in the United States, and prompted by the 2013 mass surveillance disclosures by ex-NSA contractor Edward Snowden, the Utah Data Center was hailed by The Wall Street Journal as a "symbol of the spy agency's surveillance prowess".
Binney has said that the facility was built to store recordings and other content of communications, not only for metadata.
According to an interview with Snowden, the project was initially known as the Massive Data Repository within NSA, but was renamed to Mission Data Repository due to the former sounding too "creepy".
Structure
The structure provides over 1 million square feet of floor space, with approximately 100,000 square feet of data center space and more than 900,000 square feet of technical support and administrative space. It is projected to cost $1.5–2 billion. A report suggested that it will cost another $2 billion for hardware, software, and maintenance.
The completed facility is expected to require 65 megawatts of electricity, costing about $40 million per year. Given its open-evaporation-based cooling system, the facility is expected to use up to about 1.7 million gallons of water per day.
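As a rough consistency check on those figures, a minimal back-of-the-envelope sketch is shown below; the 65 MW load comes from the text, while the electricity rate of $0.07/kWh is purely an assumed placeholder:

```python
# Back-of-the-envelope check: does 65 MW plausibly cost ~$40M/year?
power_kw = 65 * 1_000          # 65 MW load (from the text), in kW
hours_per_year = 24 * 365      # 8,760 hours
rate_usd_per_kwh = 0.07        # assumed industrial electricity rate

annual_kwh = power_kw * hours_per_year          # ~569 million kWh/year
annual_cost_musd = annual_kwh * rate_usd_per_kwh / 1e6

print(f"~${annual_cost_musd:.0f}M per year")    # ~$40M, matching the cited figure
```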
An article by Forbes estimates the storage capacity as between 3 and 12 exabytes as of 2013, based on analysis of unclassified blueprints, but mentions Moore's Law, meaning that advances in technology could be expected to increase the capacity by orders of magnitude in the coming years.
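To illustrate how blueprint-based estimates of that kind can be constructed, the sketch below derives an exabyte figure from the 100,000 square feet of data hall space mentioned later in this article; the rack density, drive count, and drive size are hypothetical placeholders for circa-2013 hardware, not values from Forbes' analysis:

```python
# Illustrative floor-space capacity estimate; every density figure is assumed.
data_hall_sqft = 100_000   # data center floor space (stated in this article)
sqft_per_rack = 20         # rack footprint incl. aisles and cooling (assumed)
drives_per_rack = 400      # dense storage servers per rack (assumed)
tb_per_drive = 4           # typical hard drive capacity in 2013 (assumed)

racks = data_hall_sqft // sqft_per_rack                  # 5,000 racks
total_eb = racks * drives_per_rack * tb_per_drive / 1e6  # TB -> EB

print(f"{racks:,} racks -> {total_eb:.0f} EB")  # 8 EB, inside the 3-12 EB range
```

Varying any of the assumed densities by a factor of two or so spans the 3–12 exabyte range quoted above, which is why such estimates carry wide error bars.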
Toward the end of the project's construction it was plagued by electrical problems in the form of "massive power surges" that damaged equipment. This delayed its opening by a year.
The finished structure is characterized as a Tier III data center with over a million square feet of space that cost over $1.5 billion to build. Of the million square feet, about 100,000 square feet are dedicated to the data center; the other 900,000 square feet are utilized as technical support and administrative space.
See also
Big data
Cyberethics
Electronic Communications Privacy Act
FISA Amendments Act of 2008
Multiprogram Research Facility
Privacy law
Secrecy of correspondence
Texas Cryptologic Center
Electronic Frontier Foundation
References
External links
Buildings and structures in Salt Lake County, Utah
Counterterrorism in the United States
Government buildings in Utah
Government databases in the United States
Law enforcement databases in the United States
Mass surveillance
National Security Agency facilities
Privacy of telecommunications
Surveillance
Data centers
Supercomputer sites
2014 establishments in Utah | Utah Data Center | Technology | 1,148 |
43,071,232 | https://en.wikipedia.org/wiki/Enterprise%20interoperability%20framework | The enterprise interoperability framework is used as a guideline for collecting and structuring knowledge/solution for enterprise interoperability. The framework defines the domains and sub-domains for interoperability research and development in order to identify a set of pieces of knowledge for solving enterprise interoperability problems by removing barriers to interoperability.
Existing interoperability frameworks
Some existing work on interoperability has been carried out to define interoperability frameworks or reference models, in particular the LISI reference model, the European Interoperability Framework (EIF), the IDEAS interoperability framework, the ATHENA interoperability framework, and the E-Health Interoperability Framework. These existing approaches constitute the basis for the enterprise interoperability framework.
Existing interoperability frameworks do not explicitly address barriers to interoperability, which is a basic assumption of this research; they are not aimed at structuring interoperability knowledge with respect to their ability to remove various barriers.
The enterprise interoperability framework has three basic dimensions:
Interoperability concerns define the content (or aspect) of interoperation that may take place at various levels of the enterprise. In the domain of Enterprise Interoperability, the following four interoperability concerns are identified: data, service, process, and business.
Interoperability barriers: the interoperability barrier is a fundamental concept in defining the interoperability domain. Many interoperability issues are specific to particular application domains; these can be things like support for particular attributes or particular access control regimes. Nevertheless, general barriers and problems of interoperability can be identified, and most of them have already been addressed. Consequently, the objective is to identify common barriers to interoperability. By the term 'barrier' we mean an 'incompatibility' or 'mismatch' which obstructs the sharing and exchange of information. Three categories of barriers are identified: conceptual, technological, and organisational.
Interoperability approaches represent the different ways in which barriers can be removed (integrated, unified, and federated).
The framework is defined along these three basic dimensions.
Use
The Enterprise Interoperability Framework makes it possible to:
Capture and structure interoperability knowledge/solutions in the framework through a barrier-driven approach
Provide support to enterprise interoperability engineers and industry end users to carry out their interoperability projects.
The enterprise interoperability framework not only aims at structuring concepts, defining research domain and capturing knowledge but also at helping industries to solve their interoperability problems. When carrying out an interoperability project involving two particular enterprises, interoperability concerns and interoperability barriers between the two enterprises will be identified first and mapped to this Enterprise Interoperability Framework.
Using the framework, existing interoperability degrees can be characterized and targeted interoperability degrees can be defined as the objective to meet. Then knowledge/solutions associated with the barriers and concerns can be searched in the framework, and solutions found will be proposed to users for possible adaptation and/or combination with other solutions to remove the identified barriers so that the required interoperability can be established.
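A minimal sketch of such a barrier-driven lookup is shown below; the concern and barrier categories come from the framework itself, but the knowledge-base entries and function names are hypothetical illustrations, not part of any published implementation:

```python
# Hypothetical knowledge base indexed by (interoperability concern, barrier).
# Concerns: data, service, process, business; barriers: conceptual,
# technological, organisational. Solution entries are invented placeholders.
KNOWLEDGE_BASE = {
    ("data", "conceptual"): ["agree on a shared ontology", "semantic mapping"],
    ("data", "technological"): ["adopt a common exchange format"],
    ("process", "organisational"): ["define cross-enterprise process roles"],
}

def find_solutions(concern: str, barrier: str) -> list[str]:
    """Return candidate solutions for one identified (concern, barrier) pair."""
    return KNOWLEDGE_BASE.get((concern, barrier), [])

# A project first identifies its concerns/barriers, then queries the framework
# for solutions to adapt or combine until the target interoperability is met.
print(find_solutions("data", "conceptual"))
```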
References
External links
INTEROP-VLab
DI.2.Enterprise Interoperability Framework and knowledge corpus
DI.3.Enterprise Interoperability Framework and knowledge corpus
Interoperability
Enterprise modelling
Knowledge representation | Enterprise interoperability framework | Engineering | 667 |
5,761,568 | https://en.wikipedia.org/wiki/Attachment%20in%20children | Attachment in children is "a biological instinct in which proximity to an attachment figure is sought when the child senses or perceives threat or discomfort. Attachment behaviour anticipates a response by the attachment figure which will remove threat or discomfort". Attachment also describes the function of availability, which is the degree to which the authoritative figure is responsive to the child's needs and shares communication with them. Childhood attachment can define characteristics that will shape the child's sense of self, their forms of emotion-regulation, and how they carry out relationships with others. Attachment is found in all mammals to some degree, especially primates.
Attachment theory has led to a new understanding of child development. Children develop different patterns of attachment based on experiences and interactions with their caregivers at a young age. Four different attachment classifications have been identified in children: secure attachment, anxious-ambivalent attachment, anxious-avoidant attachment, and disorganized attachment. Attachment theory has become the dominant theory used today in the study of infant and toddler behavior and in the fields of infant mental health, treatment of children, and related fields.
Attachment theory and children
Attachment theory (developed by the psychoanalyst John Bowlby, 1969, 1973, 1980) is rooted in the ethological notion that a newborn child is biologically programmed to seek proximity with caregivers, and this proximity-seeking behavior is naturally selected. Through repeated attempts to seek physical and emotional closeness with a caregiver and the responses the child gets, the child develops an internal working model (IWM) that reflects the response of the caregiver to the child. According to Bowlby, attachment provides a secure base from which the child can explore the environment, and a haven of safety to which the child can return when he or she is afraid or fearful. Bowlby's colleague Mary Ainsworth identified that an important factor which determines whether a child will have a secure or insecure attachment is the degree of sensitivity shown by their caregiver:
The sensitive caregiver responds socially to attempts to initiate social interaction, playfully to his attempts to initiate play. She picks him up when he seems to wish it, and puts him down when he wants to explore. When he is distressed, she knows what kinds and degree of soothing he requires to comfort him – and she knows that sometimes a few words or a distraction will be all that is needed. On the other hand, the mother who responds inappropriately tries to socialize with the baby when he is hungry, play with him when he is tired, or feed him when he is trying to initiate social interaction.
However, it should be recognized that "even sensitive caregivers get it right only about 50 percent of the time. Their communications are either out of synch, or mismatched. There are times when parents feel tired or distracted. The telephone rings or there is breakfast to prepare. In other words, attuned interactions rupture quite frequently. But the hallmark of a sensitive caregiver is that the ruptures are managed and repaired."
Attachment classification in children: the Strange Situation Protocol
The most common and empirically supported method for assessing attachment in infants (12 months – 20 months) is the Strange Situation Protocol, developed by Mary Ainsworth as a result of her careful in-depth observations of infants with their mothers in Uganda (see below). The Strange Situation Protocol is a research, not a diagnostic, tool and the resulting attachment classifications are not 'clinical diagnoses.' While the procedure may be used to supplement clinical impressions, the resulting classifications should not be confused with the clinically diagnosed 'Reactive Attachment Disorder (RAD).' The clinical concept of RAD differs in a number of fundamental ways from the theory and research driven attachment classifications based on the Strange Situation Procedure. The idea that insecure attachments are synonymous with RAD is, in fact, not accurate and leads to ambiguity when formally discussing attachment theory as it has evolved in the research literature. This is not to suggest that the concept of RAD is without merit, but rather that the clinical and research conceptualizations of insecure attachment and attachment disorder are not synonymous.
The 'Strange Situation' is a laboratory procedure used to assess infant patterns of attachment to their caregiver. In the procedure, the mother and infant are placed in an unfamiliar playroom equipped with toys while a researcher observes/records the procedure through a one-way mirror. The procedure consists of eight sequential episodes in which the child experiences both separation from and reunion with the mother as well as the presence of an unfamiliar stranger. The protocol is conducted in the following format unless modifications are otherwise noted by a particular researcher:
Episode 1: Mother (or other familiar caregiver), Baby, Experimenter (30 seconds)
Episode 2: Mother, Baby (3 mins)
Episode 3: Mother, Baby, Stranger (3 mins or less)
Episode 4: Stranger, Baby (3 mins)
Episode 5: Mother, Baby (3 mins)
Episode 6: Baby Alone (3 mins or less)
Episode 7: Stranger, Baby (3 mins or less)
Episode 8: Mother, Baby (3 mins)
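Because the protocol is a fixed sequence, it can be encoded directly as data; the sketch below simply restates the eight episodes above (the structure is from the protocol, while the Python representation is our own illustration):

```python
from dataclasses import dataclass

@dataclass
class Episode:
    number: int
    present: tuple[str, ...]  # who is in the room with the baby
    minutes: float            # nominal length; several episodes may be cut short

STRANGE_SITUATION = [
    Episode(1, ("mother", "baby", "experimenter"), 0.5),
    Episode(2, ("mother", "baby"), 3.0),
    Episode(3, ("mother", "baby", "stranger"), 3.0),
    Episode(4, ("stranger", "baby"), 3.0),
    Episode(5, ("mother", "baby"), 3.0),   # first reunion
    Episode(6, ("baby",), 3.0),            # baby alone
    Episode(7, ("stranger", "baby"), 3.0),
    Episode(8, ("mother", "baby"), 3.0),   # second reunion
]

for ep in STRANGE_SITUATION:
    print(f"Episode {ep.number}: {', '.join(ep.present)} ({ep.minutes} min)")
```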
Mainly on the basis of their reunion behaviours (although other behaviours are taken into account) in the Strange Situation Paradigm (Ainsworth et al., 1978; see below), infants can be categorized into three 'organized' attachment categories: Secure (Group B); Avoidant (Group A); and Anxious/Resistant (Group C). There are subclassifications for each group (see below). A fourth category, termed Disorganized (D), can also be assigned to an infant assessed in the Strange Situation although a primary 'organized' classification is always given for an infant judged to be disorganized. Each of these groups reflects a different kind of attachment relationship with the mother. A child may have a different type of attachment to each parent as well as to unrelated caregivers. Attachment style is thus not so much a part of the child's thinking, but is characteristic of a specific relationship. However, after about age five the child exhibits one primary consistent pattern of attachment in relationships.
The pattern the child develops after age five reflects the specific parenting styles experienced during the child's developmental stages. These attachment patterns are associated with behavioural patterns and can help further predict a child's future personality.
Attachment patterns
"The strength of a child's attachment behaviour in a given circumstance does not indicate the 'strength' of the attachment bond. Some insecure children will routinely display very pronounced attachment behaviours, while many secure children find that there is no great need to engage in either intense or frequent shows of attachment behaviour".
Secure attachment
A toddler who is securely attached to its parent (or other familiar caregiver) will explore freely while the caregiver is present, typically engages with strangers, is often visibly upset when the caregiver departs, and is generally happy to see the caregiver return. The extent of exploration and of distress are affected by the child's temperamental make-up and by situational factors as well as by attachment status, however. A child's attachment is largely influenced by their primary caregiver's sensitivity to their needs. Parents who consistently (or almost always) respond to their child's needs will create securely attached children. Such children are certain that their parents will be responsive to their needs and communications.
In the traditional Ainsworth et al. (1978) coding of the Strange Situation, secure infants are denoted as "Group B" infants and they are further subclassified as B1, B2, B3, and B4. Although these subgroupings refer to different stylistic responses to the comings and goings of the caregiver, they were not given specific labels by Ainsworth and colleagues, although their descriptive behaviours led others (including students of Ainsworth) to devise a relatively 'loose' terminology for these subgroups. B1's have been referred to as 'secure-reserved', B2's as 'secure-inhibited', B3's as 'secure-balanced,' and B4's as 'secure-reactive.' In academic publications however, the classification of infants (if subgroups are denoted) is typically simply "B1" or "B2" although more theoretical and review-oriented papers surrounding attachment theory may use the above terminology.
Securely attached children are best able to explore when they have the knowledge of a secure base to return to in times of need. When assistance is given, this bolsters the sense of security and also, assuming the parent's assistance is helpful, educates the child in how to cope with the same problem in the future. Therefore, secure attachment can be seen as the most adaptive attachment style. According to some psychological researchers, a child becomes securely attached when the parent is available and able to meet the needs of the child in a responsive and appropriate manner. At infancy and early childhood, if parents are caring and attentive towards their children, those children will be more prone to secure attachment.
Anxious-resistant insecure attachment
Anxious-resistant insecure attachment is also called ambivalent attachment. In general, a child with an anxious-resistant attachment style will typically explore little (in the Strange Situation) and is often wary of strangers, even when the caregiver is present. When the caregiver departs, the child is often highly distressed, and the child is generally ambivalent when the caregiver returns. The anxious-ambivalent/resistant strategy is a response to unpredictably responsive caregiving; the displays of anger or helplessness towards the caregiver on reunion can be regarded as a conditional strategy for maintaining the availability of the caregiver by preemptively taking control of the interaction.
The C1 subtype is coded when:
"...resistant behavior is particularly conspicuous. The mixture of seeking and yet resisting contact and interaction has an unmistakably angry quality and indeed an angry tone may characterize behavior in the preseparation episodes..."
The C2 subtype is coded when:
"Perhaps the most conspicuous characteristic of C2 infants is their passivity. Their exploratory behavior is limited throughout the SS and their interactive behaviors are relatively lacking in active initiation. Nevertheless, in the reunion episodes they obviously want proximity to and contact with their mothers, even though they tend to use signalling rather than active approach, and protest against being put down rather than actively resisting release...In general the C2 baby is not as conspicuously angry as the C1 baby."
Anxious-avoidant insecure attachment
A child with the anxious-avoidant insecure attachment style will avoid or ignore the caregiver, showing little emotion when the caregiver departs or returns. The child will not explore very much regardless of who is there. Infants classified as anxious-avoidant (A) represented a puzzle in the early 1970s. They did not exhibit distress on separation, and either ignored the caregiver on their return (A1 subtype) or showed some tendency to approach together with some tendency to ignore or turn away from the caregiver (A2 subtype). Ainsworth and Bell theorised that the apparently unruffled behaviour of the avoidant infants is in fact a mask for distress, a hypothesis later evidenced through studies of the heart-rate of avoidant infants.
Infants are depicted as anxious-avoidant insecure when there is:
"...conspicuous avoidance of the mother in the reunion episodes which is likely to consist of ignoring her altogether, although there may be some pointed looking away, turning away, or moving away...If there is a greeting when the mother enters, it tends to be a mere look or a smile...Either the baby does not approach his mother upon reunion, or they approach in 'abortive' fashions with the baby going past the mother, or it tends to only occur after much coaxing...If picked up, the baby shows little or no contact-maintaining behavior; he tends not to cuddle in; he looks away and he may squirm to get down."
Ainsworth's narrative records showed that infants avoided the caregiver in the stressful Strange Situation Procedure when they had a history of experiencing rebuff of attachment behaviour. The child's needs are frequently not met and the child comes to believe that communication of needs has no influence on the caregiver. Ainsworth's student Mary Main theorised that avoidant behaviour in the Strange Situational Procedure should be regarded as 'a conditional strategy, which paradoxically permits whatever proximity is possible under conditions of maternal rejection' by de-emphasising attachment needs. Main proposed that avoidance has two functions for an infant whose caregiver is consistently unresponsive to their needs. Firstly, avoidant behaviour allows the infant to maintain a conditional proximity with the caregiver: close enough to maintain protection, but distant enough to avoid rebuff. Secondly, the cognitive processes organising avoidant behaviour could help direct attention away from the unfulfilled desire for closeness with the caregiver – avoiding a situation in which the child is overwhelmed with emotion ('disorganised distress'), and therefore unable to maintain control of themselves and achieve even conditional proximity.
Disorganized/disoriented attachment
Ainsworth herself was the first to find difficulties in fitting all infant behaviour into the three classifications used in her Baltimore study. Ainsworth and colleagues sometimes observed 'tense movements such as hunching the shoulders, putting the hands behind the neck and tensely cocking the head, and so on. It was our clear impression that such tension movements signified stress, both because they tended to occur chiefly in the separation episodes and because they tended to be prodromal to crying. Indeed, our hypothesis is that they occur when a child is attempting to control crying, for they tend to vanish if and when crying breaks through'. Such observations also appeared in the doctoral theses of Ainsworth's students. Crittenden, for example, noted that one abused infant in her doctoral sample was classed as secure (B) by her undergraduate coders because her strange situation behaviour was "without either avoidance or ambivalence, she did show stress-related stereotypic headcocking throughout the strange situation. This pervasive behaviour, however, was the only clue to the extent of her stress".
Drawing on records of behaviours discrepant with the A, B, and C classifications, a fourth classification was added by Ainsworth's colleague Mary Main and Judith Solomon. In the Strange Situation, the attachment system is expected to be activated by the departure and return of the caregiver. If the behaviour of the infant does not appear to the observer to be coordinated in a smooth way across episodes to achieve either proximity or some relative proximity with the caregiver, then it is considered 'disorganised' as it indicates a disruption or flooding of the attachment system (e.g. by fear). Infant behaviours in the Strange Situation Protocol coded as disorganised/disoriented include overt displays of fear; contradictory behaviours or affects occurring simultaneously or sequentially; stereotypic, asymmetric, misdirected or jerky movements; or freezing and apparent dissociation. Lyons-Ruth has urged, however, that it should be more widely 'recognized that 52% of disorganized infants continue to approach the caregiver, seek comfort, and cease their distress without clear ambivalent or avoidant behavior.'
There is 'rapidly growing interest in disorganized attachment' from clinicians and policy-makers as well as researchers. Yet the Disorganized/disoriented attachment (D) classification has been criticised by some for being too encompassing. In 1990, Ainsworth put in print her blessing for the new 'D' classification, though she urged that the addition be regarded as 'open-ended, in the sense that subcategories may be distinguished', as she worried that the D classification might be too encompassing and might treat too many different forms of behaviour as if they were the same thing. Indeed, the D classification puts together infants who use a somewhat disrupted secure (B) strategy with those who seem hopeless and show little attachment behaviour; it also puts together infants who run to hide when they see their caregiver in the same classification as those who show an avoidant (A) strategy on the first reunion and then an ambivalent-resistant (C) strategy on the second reunion. Perhaps responding to such concerns, George and Solomon have divided among indices of Disorganized/disoriented attachment (D) in the Strange Situation, treating some of the behaviours as a 'strategy of desperation' and others as evidence that the attachment system has been flooded (e.g. by fear, or anger). Crittenden also argues that some behaviour classified as Disorganized/disoriented can be regarded as more 'emergency' versions of the avoidant and/or ambivalent/resistant strategies, and function to maintain the protective availability of the caregiver to some degree. Sroufe et al. have agreed that 'even disorganised attachment behaviour (simultaneous approach-avoidance; freezing, etc.) enables a degree of proximity in the face of a frightening or unfathomable parent'. However, 'the presumption that many indices of "disorganisation" are aspects of organised patterns does not preclude acceptance of the notion of disorganisation, especially in cases where the complexity and dangerousness of the threat are beyond children's capacity for response'. For example, 'Children placed in care, especially more than once, often have intrusions. In videos of the Strange Situation Procedure, they tend to occur when a rejected/neglected child approaches the stranger in an intrusion of desire for comfort, then loses muscular control and falls to the floor, overwhelmed by the intruding fear of the unknown, potentially dangerous, strange person'.
Main and Hesse found that most of the mothers of these children had suffered major losses or other trauma shortly before or after the birth of the infant and had reacted by becoming severely depressed. In fact, 56% of mothers who had lost a parent by death before they completed high school subsequently had children with disorganized attachments. Subsequently, studies, whilst emphasising the potential importance of unresolved loss, have qualified these findings. For example, Solomon and George found that unresolved loss in the mother tended to be associated with disorganised attachment in their infant primarily when they had also experienced an unresolved trauma in their life prior to the loss.
Later patterns and the dynamic-maturational model
Studies of older children have identified further attachment classifications. Main and Cassidy observed that disorganized behaviour in infancy can develop into a child using caregiving-controlling or punitive behaviour in order to manage a helpless or dangerously unpredictable caregiver. In these cases, the child's behaviour is organised, but the behaviour is treated by researchers as a form of 'disorganization' (D) since the hierarchy in the family is no longer organised according to parenting authority.
Patricia McKinsey Crittenden has elaborated classifications of further forms of avoidant and ambivalent attachment behaviour. These include the caregiving and punitive behaviours also identified by Main and Cassidy (termed A3 and C3 respectively), but also other patterns such as compulsive compliance with the wishes of a threatening parent (A4).
Crittenden's ideas developed from Bowlby's proposal that 'given certain adverse circumstances during childhood, the selective exclusion of information of certain sorts may be adaptive. Yet, when during adolescence and adult the situation changes, the persistent exclusion of the same forms of information may become maladaptive'.
Crittenden proposed that the basic components of human experience of danger are two kinds of information:
'Affective information' – the emotions provoked by the potential for danger, such as anger or fear. Crittenden terms this 'affective information'. In childhood this information would include emotions provoked by the unexplained absence of an attachment figure. Where an infant is faced with insensitive or rejecting parenting, one strategy for maintaining the availability of their attachment figure is to try to exclude from consciousness or from expressed behaviour any emotional information that might result in rejection.
Causal or other sequentially ordered knowledge about the potential for safety or danger. In childhood this would include knowledge regarding the behaviours that indicate an attachment figure's availability as a secure haven. If knowledge regarding the behaviours that indicate an attachment figure's availability as a secure haven is subject to segregation, then the infant can try to keep the attention of their caregiver through clingy or aggressive behaviour, or alternating combinations of the two. Such behaviour may increase the availability of an attachment figure who otherwise displays inconsistent or misleading responses to the infant's attachment behaviours, suggesting the unreliability of protection and safety.
Crittenden proposes that both kinds of information can be split off from consciousness or behavioural expression as a 'strategy' to maintain the availability of an attachment figure: 'Type A strategies were hypothesized to be based on reducing perception of threat to reduce the disposition to respond. Type C was hypothesized to be based on heightening perception of threat to increase the disposition to respond' Type A strategies split off emotional information about feeling threatened and type C strategies split off temporally-sequenced knowledge about how and why the attachment figure is available. By contrast, type B strategies effectively use both kinds of information without much distortion. For example: a toddler may have come to depend upon a type C strategy of tantrums in working to maintain the availability of an attachment figure whose inconsistent availability has led the child to distrust or distort causal information about their apparent behaviour. This may lead their attachment figure to get a clearer grasp on their needs and the appropriate response to their attachment behaviours. Experiencing more reliable and predictable information about the availability of their attachment figure, the toddler then no longer needs to use coercive behaviours with the goal of maintaining their caregiver's availability and can develop a secure attachment to their caregiver since they trust that their needs and communications will be heeded.
Significance of patterns
Research based on data from longitudinal studies, such as the National Institute of Child Health and Human Development Study of Early Child Care and the Minnesota Study of Risk and Adaption from Birth to Adulthood, and from cross-sectional studies, consistently shows associations between early attachment classifications and peer relationships as to both quantity and quality. Lyons-Ruth, for example, found that 'for each additional withdrawing behavior displayed by mothers in relation to their infant's attachment cues in the Strange Situation Procedure, the likelihood of clinical referral by service providers was increased by 50%.'
Secure children have more positive and fewer negative peer reactions and establish more and better friendships. Insecure-ambivalent children have a tendency to anxiously but unsuccessfully seek positive peer interaction whereas insecure-avoidant children appear aggressive and hostile and may actively repudiate positive peer interaction. On only a few measures is there any strong direct association between early experience and a comprehensive measure of social functioning in early adulthood but early experience significantly predicts early childhood representations of relationships, which in turn predicts later self and relationship representations and social behaviour.
Studies have suggested that infants at high risk for Autism Spectrum Disorders (ASD) may express attachment security differently from infants at low risk for ASD. Behavioural problems and social competence in insecure children increase or decline with deterioration or improvement in the quality of parenting and the degree of risk in the family environment.
Criticism of the Strange Situation Protocol
Michael Rutter describes the procedure in the following terms:
"It is by no means free of limitations (see Lamb, Thompson, Gardener, Charnov & Estes, 1984). To begin with, it is very dependent on brief
separations and reunions having the same meaning for all children. This may be a major constraint when applying the procedure in cultures, such as that in Japan (see Miyake et al., 1985), where infants are rarely separated from their mothers in ordinary circumstances. Also, because older children have a cognitive capacity to maintain relationships when the older person is not present, separation may not provide the same stress for them. Modified procedures based on the Strange Situation have been developed for older preschool children (see Belsky et al., 1994; Greenberg et al., 1990) but it is much more dubious whether the same approach can be used in middle childhood.Greenberg, M. T., Cicchetti, D. & Cummings, M. (Eds), (1990). Attachment in the preschool years; theory research and intervention. Chicago; University of Chicago Press. Also, despite its manifest strengths, the procedure is based on just 20 minutes of behaviour. It can be scarcely expected to tap all the relevant qualities of a child's attachment relationships. Q-sort procedures based on much longer naturalistic observations in the home, and interviews with the mothers have developed in order to extend the data base (see Vaughn & Waters, 1990). A further constraint is that the coding procedure results in discrete categories rather than continuously distributed dimensions. Not only is this likely to provide boundary problems, but also it is not at all obvious that discrete categories best represent the concepts that are inherent in attachment security. It seems much more likely that infants vary in their degree of security and there is need for a measurement systems that can quantify individual variation".
Ecological validity and universality of Strange Situation attachment classification distributions
With respect to the ecological validity of the Strange Situation, a meta-analysis of 2,000 infant-parent dyads, including several from studies with non-Western language and/or cultural bases found the global distribution of attachment categorizations to be A (21%), B (65%), and C (14%). This global distribution was generally consistent with Ainsworth et al.'s (1978) original attachment classification distributions.
However, controversy has been raised over a few cultural differences in these rates of 'global' attachment classification distributions. In particular, two studies diverged from the global distributions of attachment classifications noted above. One study was conducted in North Germany in which more avoidant (A) infants were found than global norms would suggest, and the other in Sapporo, Japan, where more resistant (C) infants were found. Of these two studies, the Japanese findings have sparked the most controversy as to the meaning of individual differences in attachment behaviour as originally identified by Ainsworth et al. (1978).
In a recent study conducted in Sapporo, Behrens et al. (2007) found attachment distributions consistent with global norms using the six-year Main & Cassidy scoring system for attachment classification. In addition to these findings supporting the global distributions of attachment classifications in Sapporo, Behrens et al. also discuss the Japanese concept of amae and its relevance to questions concerning whether the insecure-resistant (C) style of interaction may be engendered in Japanese infants as a result of the cultural practice of amae.
A separate study was conducted in Korea, to help determine if mother-infant attachment relationships are universal or culture-specific. The results of the study of infant-mother attachment were compared to a national sample and showed that the four attachment patterns, secure, avoidance, ambivalent, and disorganized, exist in Korea as well as other varying cultures.
Van IJzendoorn and Kroonenberg conducted a meta-analysis across various countries, including Japan, Israel, Germany, China, the UK and the USA, using the Strange Situation. The research showed that, though there were cultural differences, the four basic patterns (secure, avoidant, ambivalent, and disorganized) can be found in every culture in which studies have been undertaken, even where communal sleeping arrangements are the norm. The secure pattern is found in the majority of children across the cultures studied. This follows logically from the fact that attachment theory provides for infants to adapt to changes in the environment, selecting optimal behavioural strategies. How attachment is expressed shows cultural variations which need to be ascertained before studies can be undertaken.
Discrete or continuous attachment measurement
Regarding the issue of whether the breadth of infant attachment functioning can be captured by a categorical classification scheme, continuous measures of attachment security have been developed which have demonstrated adequate psychometric properties. These have been used either individually or in conjunction with discrete attachment classifications in many published reports. The original Richters et al. (1998) scale is strongly related to secure versus insecure classifications, correctly predicting about 90% of cases. Readers further interested in the categorical versus continuous nature of attachment classifications (and the debate surrounding this issue) should consult a paper by Fraley and Spieker and the rejoinders in the same issue by many prominent attachment researchers, including J. Cassidy, A. Sroufe, E. Waters & T. Beauchaine, and M. Cummings.
See also
References
Recommended reading
Cassidy, J., & Shaver, P., (Eds). (1999) Handbook of Attachment: Theory, Research, and Clinical Applications. Guilford Press, NY.
Greenberg, MT, Cicchetti, D., & Cummings, EM., (Eds) (1990) Attachment in the Preschool Years: Theory, Research and Intervention University of Chicago, Chicago.
Greenspan, S. (1993) Infancy and Early Childhood. Madison, CT: International Universities Press. .
Holmes, J. (1993) John Bowlby and Attachment Theory. Routledge. .
Holmes, J. (2001) The Search for the Secure Base: Attachment Theory and Psychotherapy. London: Brunner-Routledge. .
Karen R (1998) Becoming Attached: First Relationships and How They Shape Our Capacity to Love. Oxford University Press. .
Zeanah, C., (1993) Handbook of Infant Mental Health. Guilford, NY.
Parkes, CM, Stevenson-Hinde, J., Marris, P., (Eds.) (1991) Attachment Across The Life Cycle Routledge. NY.
Siegler R., DeLoache, J. & Eisenberg, N. (2003) How Children develop. New York: Worth. .
Bausch, Karl Heinz (2002) Treating Attachment Disorders NY: Guilford Press.
Mercer, J. Understanding Attachment, Praeger 2005.
Love
Interpersonal relationships
Human development
Attachment theory
Adoption, fostering, orphan care and displacement
Evolutionary psychology
78,159,609 | https://en.wikipedia.org/wiki/Fen%20%28land%29 | The fen (分) in Mandarin, fan in Cantonese or hun in Taiwanese, is a traditional Chinese unit of measurement for land area. One fen equals 1⁄10 of a mu in mainland China, Hong Kong and Taiwan.
Conversions
In mainland China,
1915 ~ 1929: 1 fen = 1⁄10 mu = 61.44 square meters = 73.48 square yards
1930 ~ present: 1 fen = 1⁄10 mu = 66 2⁄3 square meters = 79.73 square yards.
In Hong Kong and Macau, 1 fen = 1⁄10 mu = 76.14 square meters = 91.06 square yards.
In Taiwan and Japan, 1 fen = 1⁄10 jia = 969.92 square meters = 10,440 square feet.
Taiwan was formerly ruled by the Netherlands and then by Japan, and its measurement system was influenced by both countries; accordingly, 1 fen was set to 1⁄10 of a jia instead of a mu.
For details, see the article Mu (land).
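The conversion factors above can be collected into a small lookup table; a minimal sketch follows, with all factors taken directly from this section (the region keys are our own labels):

```python
# Square meters per fen, by jurisdiction and era (factors from the text above).
SQM_PER_FEN = {
    "mainland_1915_1929": 61.44,
    "mainland_1930_present": 200 / 3,  # 66 2/3 square meters
    "hong_kong_macau": 76.14,
    "taiwan_japan": 969.92,            # 1/10 of a jia rather than a mu
}

def fen_to_sqm(fen: float, region: str) -> float:
    """Convert an area in fen to square meters for the given region/era."""
    return fen * SQM_PER_FEN[region]

print(round(fen_to_sqm(3, "mainland_1930_present"), 2))  # 200.0 square meters
```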
Idioms
One mu and three fen of land, or 1.3 mu of land (一亩三分地), is a Chinese idiom that figuratively refers to someone's small personal domain or limited territory, often implying a narrow scope of influence or control.
It is also the name of a Chinese website 1Point3Acres.
See also
Chinese units of measurement
Taiwanese units of measurement
Hong Kong units of measurement
References
Units of area
Customary units of measurement | Fen (land) | Mathematics | 294 |