id int64 39 79M | url stringlengths 31 227 | text stringlengths 6 334k | source stringlengths 1 150 ⌀ | categories listlengths 1 6 | token_count int64 3 71.8k | subcategories listlengths 0 30 |
|---|---|---|---|---|---|---|
333,981 | https://en.wikipedia.org/wiki/Nucleomorph | Nucleomorphs are small, vestigial eukaryotic nuclei found between the inner and outer pairs of membranes in certain plastids. They are thought to be vestiges of red and green algal nuclei that were engulfed by a larger eukaryote. Because the nucleomorph lies between two sets of membranes, nucleomorphs support the endosymbiotic theory and are evidence that the plastids containing them are complex plastids. Having two sets of membranes indicates that the plastid, originally a prokaryote, was engulfed by a eukaryote, an alga, which was in turn engulfed by another eukaryote, the host cell, making the plastid an example of secondary endosymbiosis.
Organisms with known nucleomorphs
As of 2007, only two monophyletic groups of organisms are known to contain plastids with a vestigial nucleus or nucleomorph: the cryptomonads of the supergroup Cryptista and the chlorarachniophytes of the supergroup Rhizaria, both of which have examples of sequenced nucleomorph genomes. Studies of genomic organization and molecular phylogeny have shown that the nucleomorph of the cryptomonads was once the nucleus of a red alga, whereas the nucleomorph of the chlorarachniophytes was the nucleus of a green alga. In both groups of organisms the plastids originate from engulfed photoautotrophic eukaryotes.
Both of the known nucleomorph-containing plastids have four membranes, with the nucleomorph residing in the periplastidial compartment; this is evidence that the alga was engulfed by a eukaryote through phagocytosis.
In 2020, genetic work identified the plastid in Lepidodinium and in two previously undescribed dinoflagellates ("MGD" and "TGD") as being most closely related to the green alga Pedinomonas. The observation of a nucleomorph in Lepidodinium is controversial, but MGD and TGD are proven to have DNA-containing nucleomorphs, and the transcriptomes of these nucleomorphs have been sequenced. One slight issue in understanding the sequence of evolution is that although the phylogenetic tree built from the plastids of Lepidodinium, MGD, and TGD is monophyletic, the tree built from their host-nucleus DNA is not, implying that they might have acquired very similar algae independently.
Structure
A cryptomonad nucleomorph is typically much smaller than the host nucleus. A relatively large portion of its size is devoted to the nucleolus, which contains its own ribosomes and rRNA. Nuclear pores appear to be present in imaging studies, but genetic work has failed to find any protein appropriate for forming the nuclear pore complex.
There is one nucleomorph per plastid. The nucleomorph divides before the accompanying plastid. The dividing nucleomorph lacks a mitotic spindle, and the nucleomorph envelope persists throughout division.
Between the plastid and the cytoplasm of the host there are four membranes: the inner and outer membranes of the chloroplast, the periplastid membrane, and the epiplastid membrane. The epiplastid membrane is encrusted with ribosomes (in cryptomonads) and is in many ways similar to the endoplasmic reticulum, hence the name "chloroplast endoplasmic reticulum" (cER). Plastid-targeted proteins encoded in the host genome must cross all four membranes to reach the plastid. First they use classic secretory signal peptides to cross the epiplastid membrane. Then the symbiont-specific ERAD-like machinery (SELMA) – encoded in the nucleomorph as a repurposed ERAD – pulls the protein from the epiplastid space (or the lumen of the cER) into the periplastid space (the cytoplasm of the symbiont). The standard chloroplast transit peptide then acts to cross the remaining two membranes via the TOC and TIC complexes.
The chlorarachniophytes, on the other hand, have no cER, so the initial import into the epiplastid space must occur by some other mechanism. All that is known is that their plastid-targeted proteins are prefixed by both a signal peptide and a chloroplast-targeting peptide, much like those of cryptomonads. Based on research done on Apicomplexa, which also have four membranes but no cER, it is possible that the protein is first sent into the ER and then delivered to the epiplastid space by the endomembrane sorting system. Some sort of pore may then move the peptide into the periplastid space, but there appears to be no SELMA-like pore in this group. It is known only that the TOC/TIC complexes exist for crossing the last two membranes.
Nucleomorph genome
Nucleomorphs represent some of the smallest genomes ever sequenced. After the red or green alga was engulfed by a cryptomonad or chlorarachniophyte, respectively, its genome was reduced. The nucleomorph genomes of both cryptomonads and chlorarachniophytes converged upon a similar size from larger genomes. They retained only three chromosomes, and many genes were transferred to the nucleus of the host cell, while others were lost entirely. Chlorarachniophytes contain a nucleomorph genome that is diploid, and cryptomonads contain a nucleomorph genome that is tetraploid. The unique combination of host cell and complex plastid results in cells with four genomes: two prokaryote-derived genomes (those of the host's mitochondrion and of the red or green alga's plastid) and two eukaryotic genomes (the nucleus of the host cell and the nucleomorph).
The model cryptomonad Guillardia theta became an important focus for scientists studying nucleomorphs. Its complete nucleomorph sequence was published in 2001, coming in at 551 kbp. The G. theta sequence gave insight into which genes were retained in nucleomorphs. Most of the genes that moved to the host cell involved protein synthesis, leaving behind a compact genome with mostly single-copy “housekeeping” genes (affecting transcription, translation, protein folding, degradation, and splicing) and no mobile elements. The genome contains 513 genes, 465 of which code for proteins. Thirty genes are considered “plastid” genes, coding for plastid proteins. It has three chromosomes with eukaryotic telomeres, subtended by rRNA genes.
The genome sequence of another organism, the chlorarachniophyte Bigelowiella natans, indicates that its nucleomorph is probably the vestigial nucleus of a green alga, whereas the nucleomorph in G. theta probably came from a red alga. The B. natans genome is smaller than that of G. theta, at about 373 kbp, and contains 293 protein-coding genes as compared to the 465 such genes in G. theta. B. natans also has only 17 genes that code for plastid proteins, again fewer than G. theta. Comparisons between the two organisms have shown that B. natans contains significantly more introns (852) than G. theta (17). B. natans also has smaller introns, ranging from 18 to 21 bp, whereas G. theta’s introns range from 42 to 52 bp.
The genomes of both B. natans and G. theta display evidence of genome reduction beyond the elimination of genes and their tiny size, including an elevated composition of adenine (A) and thymine (T) and high substitution rates.
Persistence of nucleomorphs
There are no recorded instances of vestigial nuclei in any other secondary plastid-containing organisms, yet nucleomorphs have been retained independently in the cryptomonads and chlorarachniophytes. Plastid gene transfer happens frequently in many organisms, and it is unusual that these nucleomorphs have not disappeared entirely. One theory as to why these nucleomorphs have not disappeared as they have in other groups is that introns present in nucleomorphs are too small to be recognized by host spliceosomes, so they could not be correctly spliced if cut out and incorporated into host DNA.
Nucleomorphs also often code for many of their own critical functions, such as transcription and translation. Some researchers suggest that as long as the nucleomorph contains a gene coding for proteins necessary for the plastid’s functioning that are not produced by the host cell, the nucleomorph will persist. The cryptomonad nucleomorph also codes for genes that function in plastid maintenance.
In cryptophytes and chlorarachniophytes all DNA transfer between the nucleomorph and host genome seems to have ceased, but the process is still going on in a few dinoflagellates (MGD and TGD).
Tertiary endosymbiosis
The standard nucleomorph is the result of secondary endosymbiosis: a cyanobacterium first became the chloroplast of ancestral plants, which diverged into green and red algae among other groups; the algal cell was then captured by another eukaryote. The chloroplast is surrounded by four membranes: two layers resulting from the primary endosymbiosis and two from the secondary. When the nucleus of the algal endosymbiont remains, it is called a "nucleomorph".
Most tertiary endosymbiosis events end up with only the plastid retained. However, in the case of dinotoms (i.e. those having diatom endosymbionts), the symbiont's nucleus appears to be of normal size with a large amount of DNA, surrounded by plenty of cytoplasm. The symbiont even has its own DNA-containing mitochondria. As a result, the organism has two eukaryotic genomes and three prokaryotic-derived organelle genomes.
See also
Endosymbiont
References
External links
Insight into the Diversity and Evolution of the Cryptomonad Nucleomorph Genome
Cryptophyta at NCBI taxbrowser
Cercozoa at NCBI taxbrowser
According to GenBank release 164 (Feb 2008), there are 13 Cercozoa and 181 Cryptophyta entries (an entry is the submission of a sequence to the DDBJ/EMBL/GenBank public database of sequences). Most sequenced organisms were:
Guillardia theta: 54;
Rhodomonas salina: 18;
Cryptomonas sp.: 15;
Chlorarachniophyceae sp.: 10;
Cryptomonas paramecium: 9;
Cryptomonas erosa: 7.
Organelles
Plant physiology
Mitochondrial genetics
Microbiology
Algae
Phycology
Evolution
Symbiosis
Endosymbiotic events | Nucleomorph | [
"Chemistry",
"Biology"
] | 2,437 | [
"Plant physiology",
"Behavior",
"Algae",
"Symbiosis",
"Plants",
"Endosymbiotic events",
"Biological interactions",
"Microbiology",
"Phycology",
"Microscopy"
] |
333,996 | https://en.wikipedia.org/wiki/Ultrametric%20space | In mathematics, an ultrametric space is a metric space in which the triangle inequality is strengthened to $d(x,z) \le \max\{d(x,y), d(y,z)\}$ for all $x$, $y$, and $z$. Sometimes the associated metric is also called a non-Archimedean metric or super-metric.
Formal definition
An ultrametric on a set $M$ is a real-valued function
$d\colon M \times M \to \mathbb{R}$
(where $\mathbb{R}$ denotes the real numbers), such that for all $x, y, z \in M$:
1. $d(x,y) \ge 0$;
2. $d(x,y) = d(y,x)$ (symmetry);
3. $d(x,x) = 0$;
4. if $d(x,y) = 0$ then $x = y$;
5. $d(x,z) \le \max\{d(x,y), d(y,z)\}$ (strong triangle inequality or ultrametric inequality).
An ultrametric space is a pair $(M,d)$ consisting of a set $M$ together with an ultrametric $d$ on $M$, which is called the space's associated distance function (also called a metric).
If $d$ satisfies all of the conditions except possibly condition 4 then $d$ is called an ultrapseudometric on $M$. An ultrapseudometric space is a pair $(M,d)$ consisting of a set $M$ and an ultrapseudometric $d$ on $M$.
In the case when $M$ is an Abelian group (written additively) and $d$ is generated by a length function $\|\cdot\|$ (so that $d(x,y) = \|x-y\|$), the last property can be made stronger using the Krull sharpening to:
$\|x+y\| \le \max\{\|x\|, \|y\|\}$ with equality if $\|x\| \ne \|y\|$.
We want to prove that if $\|x+y\| \le \max\{\|x\|, \|y\|\}$, then the equality occurs if $\|x\| \ne \|y\|$. Without loss of generality, let us assume that $\|x\| > \|y\|$. This implies that $\|x+y\| \le \|x\|$. But we can also compute $\|x\| = \|(x+y)-y\| \le \max\{\|x+y\|, \|y\|\}$. Now, the value of $\max\{\|x+y\|, \|y\|\}$ cannot be $\|y\|$, for if that is the case, we have $\|x\| \le \|y\|$ contrary to the initial assumption. Thus, $\max\{\|x+y\|, \|y\|\} = \|x+y\|$, and $\|x\| \le \|x+y\|$. Using the initial inequality, we have $\|x\| \le \|x+y\| \le \|x\|$ and therefore $\|x+y\| = \|x\|$.
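As a concrete check of these axioms, here is a minimal Python sketch (an illustration, not part of the article; `padic_abs` and `is_ultrametric` are assumed helper names) that brute-force verifies conditions 1–5 for the 3-adic length function on a range of integers, together with the Krull equality:

```python
from fractions import Fraction

def padic_abs(x, p=3):
    """|x|_p = p**(-v), where p**v exactly divides x; by convention |0|_p = 0."""
    if x == 0:
        return Fraction(0)
    v = 0
    while x % p == 0:
        x //= p
        v += 1
    return Fraction(1, p ** v)

def is_ultrametric(points, d):
    """Brute-force check of conditions 1-5 on a finite collection of points."""
    for x in points:
        if d(x, x) != 0:                              # condition 3
            return False
        for y in points:
            if d(x, y) < 0 or d(x, y) != d(y, x):     # conditions 1 and 2
                return False
            if d(x, y) == 0 and x != y:               # condition 4
                return False
            for z in points:
                if d(x, z) > max(d(x, y), d(y, z)):   # condition 5
                    return False
    return True

pts = range(-20, 21)
assert is_ultrametric(pts, lambda x, y: padic_abs(x - y))
# Krull sharpening: equality whenever the two lengths differ.
for x in pts:
    for y in pts:
        if padic_abs(x) != padic_abs(y):
            assert padic_abs(x + y) == max(padic_abs(x), padic_abs(y))
```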
Properties
From the above definition, one can conclude several typical properties of ultrametrics. For example, for all $x, y, z \in M$, at least one of the three equalities $d(x,y) = d(y,z)$ or $d(x,z) = d(y,z)$ or $d(x,y) = d(x,z)$ holds. That is, every triple of points in the space forms an isosceles triangle, so the whole space is an isosceles set.
Defining the (open) ball of radius $r > 0$ centred at $x \in M$ as $B(x;r) := \{y \in M \mid d(x,y) < r\}$, we have the following properties:
Every point inside a ball is its center, i.e. if $d(x,y) < r$ then $B(x;r) = B(y;r)$.
Intersecting balls are contained in each other, i.e. if $B(x;r) \cap B(y;s)$ is non-empty then either $B(x;r) \subseteq B(y;s)$ or $B(y;s) \subseteq B(x;r)$.
All balls of strictly positive radius are both open and closed sets in the induced topology. That is, open balls are also closed, and closed balls (replace $<$ with $\le$) are also open.
The set of all open balls with radius $r$ and center in a closed ball of radius $r > 0$ forms a partition of the latter, and the mutual distance of two distinct open balls is (greater than or) equal to $r$.
Proving these statements is an instructive exercise. All directly derive from the ultrametric triangle inequality. Note that, by the second statement, a ball may have several center points that have non-zero distance. The intuition behind such seemingly strange effects is that, due to the strong triangle inequality, distances in ultrametrics do not add up.
Examples
The discrete metric is an ultrametric.
The p-adic numbers form a complete ultrametric space.
Consider the set of words of arbitrary length (finite or infinite), Σ*, over some alphabet Σ. Define the distance between two different words to be $2^{-n}$, where $n$ is the first place at which the words differ. The resulting metric is an ultrametric (a small computational sketch follows this list).
The set of words with glued ends of the length n over some alphabet Σ is an ultrametric space with respect to the p-close distance. Two words x and y are p-close if any substring of p consecutive letters (p < n) appears the same number of times (which could also be zero) both in x and y.
If $r = (r_n)$ is a sequence of real numbers decreasing to zero, then $|x|_r := \limsup_{n\to\infty} |x_n|^{r_n}$ induces an ultrametric on the space of all complex sequences for which it is finite. (Note that this is not a seminorm since it lacks homogeneity. If the $r_n$ are allowed to be zero, one should use here the rather unusual convention that $0^0 = 0$.)
If G is an edge-weighted undirected graph, all edge weights are positive, and d(u,v) is the weight of the minimax path between u and v (that is, the largest weight of an edge, on a path chosen to minimize this largest weight), then the vertices of the graph, with distance measured by d, form an ultrametric space, and all finite ultrametric spaces may be represented in this way.
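Two of the examples above lend themselves to short computational checks. The first is a minimal Python sketch of the word metric (illustrative, not from the article; it assumes positions are counted from 1 and treats a proper prefix as differing at the first position past the shorter word):

```python
# Word ultrametric: d(x, y) = 2**(-n), where n is the first position
# (counted from 1) at which the words x and y differ.
def word_distance(x, y):
    n = 1
    for a, b in zip(x, y):
        if a != b:
            return 2.0 ** (-n)
        n += 1
    if len(x) != len(y):      # one word is a proper prefix of the other
        return 2.0 ** (-n)
    return 0.0                # identical words

# Strong triangle inequality: x and z cannot first differ earlier than
# both of the first differences of (x, y) and of (y, z).
x, y, z = "banana", "bandit", "banker"
assert word_distance(x, z) <= max(word_distance(x, y), word_distance(y, z))
```

The second sketch computes minimax-path distances with a Floyd–Warshall-style relaxation and verifies the ultrametric inequality (again an illustrative implementation; `minimax_distances` and the sample graph are assumptions, not from the article):

```python
import math

def minimax_distances(n, edges):
    """Vertices 0..n-1; edges = [(u, v, weight)] with positive weights.
    d[i][j] is the smallest possible maximum edge weight on an i-j path."""
    d = [[math.inf] * n for _ in range(n)]
    for i in range(n):
        d[i][i] = 0.0
    for u, v, w in edges:
        d[u][v] = d[v][u] = min(d[u][v], w)
    for k in range(n):                      # relax through intermediate k
        for i in range(n):
            for j in range(n):
                d[i][j] = min(d[i][j], max(d[i][k], d[k][j]))
    return d

d = minimax_distances(4, [(0, 1, 5.0), (1, 2, 2.0), (0, 2, 9.0), (2, 3, 7.0)])
for i in range(4):                          # strong triangle inequality holds
    for j in range(4):
        for k in range(4):
            assert d[i][k] <= max(d[i][j], d[j][k])
```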
Applications
In an ultrametric space, a contraction mapping may be thought of as a way of approximating the final result of a computation (which can be guaranteed to exist by the Banach fixed-point theorem). Similar ideas can be found in domain theory. p-adic analysis makes heavy use of the ultrametric nature of the p-adic metric.
In condensed matter physics, the self-averaging overlap between spins in the Sherrington–Kirkpatrick (SK) model of spin glasses exhibits an ultrametric structure, with the solution given by the full replica symmetry breaking procedure first outlined by Giorgio Parisi and coworkers. Ultrametricity also appears in the theory of aperiodic solids.
In taxonomy and phylogenetic tree construction, ultrametric distances are also utilized by the UPGMA and WPGMA methods. These algorithms require a constant-rate assumption and produce trees in which the distances from the root to every branch tip are equal. When DNA, RNA and protein data are analyzed, the ultrametricity assumption is called the molecular clock.
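To illustrate how such tree-building yields an ultrametric, here is a compact UPGMA sketch in Python (an illustrative implementation of the standard average-linkage description; `upgma_heights` and the sample matrix are assumptions, not taken from any particular library):

```python
def upgma_heights(D):
    """UPGMA on a symmetric distance matrix D; returns h, where h[i][j] is the
    height at which leaves i and j are first joined (a molecular-clock tree)."""
    n = len(D)
    clusters = {i: [i] for i in range(n)}
    dist = {(i, j): float(D[i][j]) for i in range(n) for j in range(i + 1, n)}
    h = [[0.0] * n for _ in range(n)]
    next_id = n
    while len(clusters) > 1:
        (a, b), dmin = min(dist.items(), key=lambda kv: kv[1])
        merged = clusters[a] + clusters[b]
        for i in clusters[a]:                  # record the merge height
            for j in clusters[b]:
                h[i][j] = h[j][i] = dmin / 2
        new_dist = {}                          # size-weighted average linkage
        for c in clusters:
            if c in (a, b):
                continue
            da = dist[tuple(sorted((a, c)))]
            db = dist[tuple(sorted((b, c)))]
            new_dist[c] = (da * len(clusters[a]) + db * len(clusters[b])) / len(merged)
        dist = {k: v for k, v in dist.items() if a not in k and b not in k}
        del clusters[a], clusters[b]
        clusters[next_id] = merged
        for c, v in new_dist.items():
            dist[tuple(sorted((c, next_id)))] = v
        next_id += 1
    return h

D = [[0, 17, 21, 31], [17, 0, 30, 34], [21, 30, 0, 28], [31, 34, 28, 0]]
h = upgma_heights(D)
for i in range(4):                             # the merge heights are ultrametric
    for j in range(4):
        for k in range(4):
            assert h[i][k] <= max(h[i][j], h[j][k])
```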
Models of intermittency in three-dimensional turbulence of fluids make use of so-called cascades; discrete models of dyadic cascades have an ultrametric structure.
In geography and landscape ecology, ultrametric distances have been applied to measure landscape complexity and to assess the extent to which one landscape function is more important than another.
References
Bibliography
Further reading
External links
Metric geometry
Metric spaces | Ultrametric space | [
"Mathematics"
] | 1,166 | [
"Mathematical structures",
"Space (mathematics)",
"Metric spaces"
] |
334,173 | https://en.wikipedia.org/wiki/List%20of%20garden%20plants%20in%20North%20America | This is a partial list of garden plants, plants that can be cultivated in gardens in North America, listed alphabetically by genus.
A
Abelia
Abeliophyllum (white forsythia)
Abelmoschus (okra)
Abies (fir)
Abroma
Abromeitiella (obsolete)
Abronia (sand verbena)
Abrus
Abutilon
Acacia (wattle)
Acaena
Acalypha
Acanthaceae
Acanthodium
Acantholimon
Acanthopale
Acanthophoenix
Acanthus
Acca
Acer (maple)
Achariaceae
Achillea (yarrow)
Achimenantha (hybrid genus)
Achimenes
Acinos (calamint)
Aciphylla
Acmena
Acoelorraphe (saw palm)
Acokanthera
Aconitum (aconite, monkshood)
Acorus
Acradenia
Acrocomia
Actaea (baneberry)
Actinidia (kiwifruit)
Ada orchid genus
Adansonia
Adenandra
Adenanthos
Adenia
Adenium
Adenocarpus
Adenophora
Adenostoma
Adiantum (maidenhair fern)
Adlumia
Adonis
Adromischus
Aechmea
Aegopodium
Aeonium
Aerangis (an orchid genus)
Aerides (an orchid genus)
Aeschynanthus
Aesculus
Aethionema
Afgekia
Agapanthus
Agapetes
Agastache
Agathis
Agathosma
Agave
Ageratum
Aglaia
Aglaomorpha
Aglaonema
Agonis
Agrimonia
Agrostemma (corn cockle)
Agrostis
Aichryson
Ailanthus (tree of heaven, etc.)
Aiphanes
Aira (hair grass)
Ajania
Ajuga (bugleweed)
Akebia
Alangium
Alberta
Albizia (silk tree)
Albuca
Alcea (hollyhock)
Alchemilla
Aldrovanda
Aleurites
× Aliceara (hybrid genus)
Alisma (water plantain)
Alkanna
Allagoptera
Allamanda
Allium (onion)
Allocasuarina
Allosyncarpia
Alloxylon
Alluaudia
Alnus (alder)
Alocasia
Aloe
Aloinopsis
Alonsoa
Alopecurus (foxtail grass)
Aloysia
Alphitonia
Alpinia (ginger lily)
Alsobia
Alstonia
Alstroemeria
Alternanthera
Althaea
Alyogyne
Alyssum
Alyxia
Amaranthus
Amarcrinum (hybrid genus)
Amarygia (hybrid genus)
Amaryllis
Amberboa
Amelanchier
Amesiella
Amherstia
Amicia
Ammi
Ammobium
Amorpha
Amorphophallus
Ampelopsis
Amsonia
Anacampseros
Anacardium
Anacyclus
Anagallis (pimpernel)
Ananas (pineapple)
Anaphalis
Anchusa
Andersonia
Andira
Androlepis
Andromeda
Andropogon
Androsace
Anemone (windflower)
Anemonella
Anemonopsis
Anemopaegma
Anethum (dill)
Angelica
Angelonia
Angiopteris
Angophora
Angraecum (an orchid genus)
Anguloa (an orchid genus)
Angulocaste (a hybrid orchid genus)
Anigozanthos
Anisacanthus
Anisodontea
Annona
Anoda
Anomatheca (See Freesia)
Anopterus
Anredera
Antennaria
Anthemis
Anthericum
Anthocleista
Anthotroche
Anthriscus
Anthurium
Anthyllis
Antidesma
Antigonon
Antirrhinum (snapdragon)
Apera
Aphelandra
Aphyllanthes
Apium
Apocynum
Aponogeton
Apophyllum
Apodytes
Aporocactus
Aporoheliocereus (hybrid genus)
Aprevalia
Aptenia, synonym of Mesembryanthemum
Aquilegia (columbine)
Arabis (rock cress)
Arachis
Arachniodes
Arachnis (scorpion orchid) (orchid genus)
Araeococcus
Araiostegia
Aralia
Araucaria (monkey-puzzle)
Araujia
Arbutus (madrone)
Archidendron
Archontophoenix (king palm)
Arctium
Arctostaphylos (bearberry, manzanita)
Arctotheca
Arctotis (African daisy)
Ardisia
Areca
Arenaria (sandwort)
Arenga
Argemone (prickly poppy)
Argyranthemum
Argyreia
Argyroderma
Ariocarpus
Arisaema
Arisarum
Aristea
Aristolochia
Aristotelia
Armeria
Armoracia
Arnebia
Arnica
Aronia (chokeberry)
Arrabidaea, see Bignonia magnifica
Arrhenatherum (oat grass)
Artanema
Artabotrys
Artemisia (mugwort, sagebrush, wormwood)
Arthrocereus
Arthropodium
Artocarpus
Arum
Aruncus
Arundina
Arundinaria
Arundo
Asarina
Asarum (wild ginger)
Asclepias (milkweed, silkweed)
× Ascocenda (hybrid genus) (an orchid genus)
Ascocentrum an orchid genus
Asimina
Asparagus
Asperula (woodruff)
Asphodeline
Asphodelus (asphodel)
Aspidistra
Asplenium
Astelia
Aster
Asteranthera
Astilbe
Astilboides
Astragalus (milk vetch)
Astrantia
Astrophytum
Asystasia
Atalaya
Athamanta
Atherosperma
Athrotaxis
Athyrium
Atriplex
Attalea
Aubrieta
Aucuba
Aulax
Auranticarpa
Aurinia
Austrocedrus
Austrocylindropuntia
Austrostipa
Averrhoa
Avicennia
Azadirachta
Azalea
Azara
Azolla (aquatic ferns)
Azorella
Azorina
Aztekium
Azetura
B
Babiana
Baccharis
Backhousia
Bacopa (water hyssop)
Bactris
Baeckea
Baikiaea
Baileya
Ballota
Balsamorhiza (balsam root)
Bambusa (bamboo)
Banksia
Baptisia (false indigo)
Barbarea (yellow rocket or winter cress)
Barkeria (an orchid genus)
Barleria
Barklya (gold blossom tree)
Barnadesia
Barringtonia
Bartlettina
Basselinia
Bassia
Bauera
Bauhinia
Baumea
× Beallara an orchid hybrid genus
Beaucarnea
Beaufortia
Beaumontia
Beccariella
Bedfordia
Begonia
Belamcanda
Bellevalia
Bellis (daisy)
Bellium
Berberidopsis
Berberis (barberry)
Berchemia
Bergenia
Bergerocactus
Berkheya
Berlandiera
Berrya
Bertolonia
Berzelia
Beschorneria
Bessera
Beta (beet)
Betula (birch)
Biarum
Bidens
Bignonia
Bikkia
Billardiera
Billbergia
Bischofia
Bismarckia
Bixa
Blandfordia
Blechnum (hard fern)
Bletilla (an orchid genus)
Blighia
Bloomeria
Blossfeldia
Bocconia
Boenninghausenia
Bolax
Bolbitis
Bollea (an orchid genus)
Boltonia
Bolusanthus
Bomarea
Bombax
Bongardia
Boophone
Borago
Borassodendron
Borassus
Boronia
Bosea
Bossiaea
Bothriochloa
Bougainvillea
Bouteloua
Bouvardia
Bowenia
Bowiea
Bowkeria
Boykinia
Brabejum
Brachychiton
Brachyglottis
Brachylaena
Brachypodium
Brachyscome
Brachysema
Brachystelma
Bracteantha
Brahea (hesper palm)
Brassavola (an orchid genus)
Brassaia (octopus tree)
Brassia (an orchid genus)
Brassica (mustard, cabbage)
× Brassidium (hybrid orchids)
× Brassocattleya (hybrid orchids)
× Brassolaeliocattleya (trigeneric hybrid orchids)
Breynia
Brillantaisia
Brimeura
Briza (quaking grass)
Brodiaea
Bromelia
Broughtonia an orchid genus
Broussonetia
Browallia
Brownea
Browningia
Bruckenthalia
Brugmansia
Brunfelsia
Brunia
Brunnera
Brunsvigia
Brya
Buchloe
Buckinghamia
Buddleja
Buglossoides
Bulbine
Bulbinella
Bulbocodium
Bulbophyllum (an orchid genus)
Bulnesia
Bunchosia
Buphthalmum
Bupleurum
Burchardia
Burchellia
× Burrageara an orchid hybrid genus
Burretiokentia
Bursaria
Bursera
Burtonia
Butea
Butia
Butomus
Buxus (boxwood)
Byrsonima
Bystropogon
C
Cabomba
Cadia
Caesalpinia (dwarf poinciana, Pride of Barbados)
Caladium
Calamagrostis (reed grass, smallweed)
Calamintha (calamint)
Calamus
Calandrinia
Calanthe an orchid genus
Calathea
Calceolaria (slipperwort)
Calendula (pot marigold)
Calibanus
Calibrachoa
Calla
Calliandra
Callianthemum
Callicarpa (beauty berry)
Callicoma (black wattle)
Callirhoe (poppy mallow)
Callisia
Callistemon (bottlebrush)
Callistephus (Chinese aster)
Callitriche (water starwort)
Callitris (cypress pine)
Calluna (heather)
Calocedrus (incense cedar)
Calochone
Calochortus
Calodendrum (cape chestnut)
Calomeria
Calophaca
Calophyllum
Calopyxis
Caloscordum
Calothamnus
Calotropis
Calpurnia
Caltha (kingcup, marsh marigold)
Calycanthus
Calymmanthium
Calypso (an orchid genus)
Calytrix (starflower)
Camassia (quamash)
Camellia
Camoensia
Campanula (bellflower)
Campsis (trumpet vine)
Campylotropis (See Lespedeza)
Cananga (ylang ylang)
Canarina
Canistrum
Canna
Cantua
Capparis
Capsicum (pepper)
Caragana (peashrub)
Caralluma
Cardamine (bittercress)
Cardiocrinum
Cardiospermum
Cardwellia
Carex (sedge)
Carissa
Carlina
Carludovica
Carmichaelia
Carnegiea (saguaro)
Carpentaria
Carphalea
Carpinus (hornbeam)
Carpobrotus
Carthamus (safflower)
Carum (caraway)
Carya (hickory, pecan)
Caryopteris
Caryota (fishtail palm)
Cassia (shower tree)
Cassinia
Cassiope
Cassipourea
Castanea (chestnut)
Castanopsis
Castanospermum (black bean)
Casuarina (sheoak)
Catalpa (Indian bean)
Catananche
Catasetum (an orchid genus)
Catha (khat tree)
Catharanthus (Madagascar periwinkle)
Catopsis
Cattleya (an orchid genus)
Caulophyllum
Cautleya
Cavendishia
Ceanothus (California-lilac)
Cedrela (toon)
Cedronella
Cedrus (cedar)
Ceiba (kapok)
Celastrus (staff-vine)
Celmisia (New Zealand daisy, New Zealand aster)
Celosia (cockscomb)
Celtis (hackberry)
Centaurea
Centaurium
Centradenia
Centranthus (valerian)
Cephalaria
Cephalocereus
Cephalophyllum
Cephalotaxus (plum-yew)
Ceraria
Cerastium
Ceratonia (St. John's bread, carob bean)
Ceratopetalum (coachwood)
Ceratophyllum
Ceratopteris
Ceratostigma
Ceratozamia
Cerbera (sea mango)
Cercidiphyllum
Cercis (Judas tree, redbud)
Cercocarpus
Cereus
Ceropegia
Cestrum
Chadsia
Chaenomeles (flowering quince)
Chaenorhinum (dwarf snapdragon)
Chaerophyllum
Chamaecyparis (false cypress)
Chamaecytisus
Chamaedaphne
Chamaedorea
Chamaelirium
Chamaemelum (chamomile)
Chamaerops
Chamelaucium (wax flower)
Chasmanthe
Chasmanthium
Cheilanthes
Cheiridopsis
Chelidonium
Chelone (turtlehead)
Chiastophyllum
Chiliotrichum
Chilopsis (desert willow)
Chimaphila
Chimonanthus (wintersweet)
Chimonobambusa
Chionanthus (fringe tree)
Chionochloa
Chirita
Chlidanthus
Choisya
Chonemorpha (Frangipani vine)
Choricarpia (brush turpentine)
Chorisia (floss silk tree)
Chorizema
Chrysalidocarpus
Chrysanthemoides
Chrysanthemum
Chrysobalanus
Chrysogonum
Chrysolepis
Chrysophyllum (star apple)
Chrysothemis
Chusquea
Cibotium
Cicerbita
Cichorium (chicory, endive)
Cimicifuga (bugbane)
Cinnamomum (camphor laurel)
Cionura
Cirsium
Cissus
Cistus (rock rose, sun rose)
Citharexylum (fiddlewood)
Citrofortunella (hybrid)
Citrus (lime, lemon)
Cladanthus
Cladrastis
Clarkia
Claytonia
Cleistocactus
Clematis
Cleome (spider flower)
Clerodendrum
Clethra (summersweet)
Cleyera
Clianthus
Clintonia
Clitoria
Clivia
Clusia
Clytostoma
Cobaea
Coccoloba (sea grape)
Coccothrinax (thatch palm)
Cocculus
Cochlioda an orchid genus – synonym of Oncidium
Cochlospermum (buttercup tree, Maximiliana)
Cocos (coconut)
Codiaeum (croton)
Codonanthe
Codonopsis
Coelia
Coelogyne (an orchid genus)
Coffea (coffee tree)
Coix
Colchicum (autumn crocus, meadow saffron)
Coleonema
Colletia
Collinsia
Collomia
Colocasia (taro)
Colquhounia
Columnea
Colutea (bladder senna)
Coluteocarpus
Colvillea
Combretum
Comesperma
Commelina (day flower, spiderwort, widow's tears)
Commersonia
Commidendrum
Commiphora
Comptonella
Comptonia (Sweetfern)
Conandron
Congea
Conicosia
Coniogramme
Conoclinium (mistflower)
Conophytum
Conospermum
Conostylis
Conradina
Consolida (larkspur)
Convallaria (lily-of-the-valley)
Convolvulus (bindweed, morning glory)
Copernicia (caranda palm, wax palm)
Copiapoa syn. Pilocopiapoa
Coprosma
Coptis (goldthread)
Cordia (bird lime tree)
Cordyline
Coreopsis (tickseed)
Coriandrum (coriander cilantro)
Coriaria
Cornus (dogwood, cornel)
Corokia
Coronilla
Correa
Corryocactus
Cortaderia (pampas grass, tussock grass)
Cortusa
Corybas (helmet orchid)
Corydalis
Corylopsis (winter-hazel)
Corylus (hazel, filbert)
Corymbia
Corynocarpus
Corypha
Coryphantha
Cosmos
Costus
Cotinus (smoke bush)
Cotoneaster
Cotula (brass buttons)
Cotyledon
Couroupita (cannonball tree)
Crambe
Craspedia
Crassula
+ Crataegomespilus (graft chimera)
Crataegus (hawthorn)
× Crataemespilus (hybrid)
Crepis
Crescentia (calabash)
Crinodendron
Crinum
Crocosmia (falling stars, montbretia)
Crocus
Crossandra (firecracker flower)
Crotalaria (rattlepod)
Croton
Crowea
Cryptanthus (earth stars)
Cryptbergia (hybrid)
Cryptocarya
Cryptocoryne (water trumpet)
Cryptomeria (sugi, Japanese cedar)
Cryptostegia (Indian rubber vine)
Cryptotaenia
Ctenanthe
Cucumis
Cucurbita
Cuminum
Cunila
Cunninghamia (China-fir)
Cunonia
Cupaniopsis (tuckeroo)
Cuphea
Cupressus (cypress)
Cuprocyparis (hybrid)
Curcuma
Cussonia
Cyananthus (trailing bellflower)
Cyanotis
Cyathea (tree fern)
Cyathodes
Cybistax
Cycas (cycad, sago palm)
Cyclamen
Cycnoches an orchid genus
Cydista, synonym of Bignonia
Cydonia (quince)
Cylindropuntia
Cymbalaria (ivy-leaved toadflax)
Cymbidium (an orchid genus)
Cymbopogon
Cynara
Cynodon
Cynoglossum (hound's tongue)
Cypella
Cyperus
Cyphomandra (tree tomato)
Cyphostemma
Cypripedium (lady's slipper; an orchid genus)
Cyrilla
Cyrtanthus (fire lily)
Cyrtomium
Cyrtostachys
Cystopteris (bladder fern)
Cytisus (broom)
D
Daboecia
Dacrydium
Dactylis
Dactylorhiza (marsh orchid)
Dahlia
Dalea (indigo bush)
Dalechampia
Damasonium
Dampiera
Danae
Daphne
Daphniphyllum
Darlingia
Darmera syn. Peltiphyllum
Darwinia (lemon scented myrtle)
Dasylirion
Datura
Davallia (hare's foot fern)
Davidia
Daviesia
Decaisnea
× Degarmoara (a hybrid orchid genus)
Decarya
Decumaria
Deinanthe
Delairea
Delonix
Delosperma
Delphinium
Dendranthema
Dendrobium (an orchid genus)
Dendrocalamus
Dendrochilum (an orchid genus)
Dendromecon (tree poppy)
Denmoza
Dennstaedtia (Hayscented fern or Cup fern)
Deppea
Derris
Derwentia
Deschampsia (hair grass)
Desfontainia
Desmodium
Deuterocohnia syn. Abromeitiella
Deutzia
Dianella (flax lily)
Dianthus (carnation, pink)
Diascia (twinspur)
Dicentra (bleeding heart)
Dichelostemma
Dichondra
Dichorisandra
Dichroa
Dicksonia
Dicliptera
Dictamnus (burning bush, dittany)
Dictyosperma (princess palm)
Didymochlaena
Dieffenbachia (dumb cane, mother-in-law's tongue, tuftroot)
Dierama (African harebell, angel's fishing rod, wand flower)
Diervilla (bush honeysuckle)
Dietes
Digitalis (foxglove)
Dillenia
Dillwynia
Dimorphotheca (African daisy)
Dionaea (Venus flytrap)
Dionysia
Dioon
Dioscorea syns. Rajania, Tamus, Testudinaria(yam)
Diospyros (ebony, persimmon)
Dipcadi
Dipelta
Diphylleia
Diplarrhena (butterfly flag)
Diplazium
Diplocyclos
Diploglottis
Diplolaena
Dipsacus (teasel)
Dipteris
Dipteronia
Dipteryx
Dirca (leatherwood)
Disa (an orchid genus)
Disanthus
Discaria
Dischidia
Discocactus
Disocactus
Disporopsis
Disporum (fairy-bells)
Dissotis
Distictis
Distylium
Dizygotheca
Docynia
Dodecatheon (shooting stars, American cowslip), now Primula sect. Dodecatheon
Dodonaea (hop bush)
Dolichandrone
Dombeya
Doodia (hacksaw fern, rasp fern)
Doronicum (leopard's bane)
Dorotheanthus (ice plant, Livingstone daisy)
Dorstenia
Doryanthes (spear lily)
Doryopteris
Dovyalis
Draba (whitlow grass)
Dracaena
Dracocephalum
Dracophyllum
Dracula (an orchid genus)
Dracunculus
Drimys
Drosanthemum
Drosera (sundew)
Dryandra
Dryas (mountain avens)
Drynaria
Dryopteris (buckler fern, shield fern, wood fern)
Duboisia
Duchesnea (Indian strawberry, mock strawberry)
Dudleya
Duranta
Duvalia
Dyckia
Dymondia
Dypsis syn. Chrysalidocarpus, Neodypsis
E
Ebracteola
Ecballium
Eccremocarpus
Echeveria
Echidnopsis
Echinacea (coneflower)
Echinocactus
Echinocereus
Echinops (globe thistle)
Echinopsis
Echium
Edgeworthia
Edithcolea
Edraianthus
Egeria
Ehretia
Eichhornia (water hyacinth)
Elaeagnus
Elaeis (oil palm)
Elaeocarpus
Elatostema
Eleocharis (spike rush)
Elettaria
Eleutherococcus
Elodea (pondweed)
Elsholtzia
Elymus (wild rye)
Embothrium
Emilia (tasselflower)
Emmenopterys
Encelia
Encephalartos (Kaffir bread)
Encyclia (an orchid genus)
Enkianthus
Ensete
Eomecon (snow poppy)
Epacris
Ephedra (ephedra)
Epidendrum (an orchid genus)
Epigaea
Epilobium
Epimedium (barrenwort)
Epipactis (helleborine, an orchid genus)
Epiphyllum (orchid cactus)
Episcia (flame violet)
Epithelantha
Equisetum (horsetail)
Eragrostis (love grass)
Eranthemum
Eranthis (winter aconite)
Ercilla
Eremophila (emu bush)
Eremurus
Erica (heath/heather)
Erigeron (fleabane)
Erinacea
Erinus
Eriobotrya
Eriogonum
Eriophorum (cotton grass)
Eriophyllum
Eriostemon (waxflower)
Eritrichium
Erodium
Eryngium (eryngo, sea holly)
Erysimum (wallflower)
Erythrina (coral tree)
Erythronium
Escallonia
Eschscholzia (California poppy)
Escobaria
Espostoa
Etlingera
Eucalyptus (gum tree, ironbark)
Eucharis
Eucomis
Eucommia
Eucryphia
Eulophia (an orchid genus)
Euonymus
Eupatorium
Euphorbia (spurge)
Euptelea
Eurya
Euryale ferox
Euryops
Eustoma
Evolvulus
Exacum
Exochorda
F
Fabiana
Fagus (beech)
Fallopia
Farfugium
Fargesia
Fascicularia
× Fatshedera (hybrid genus)
Fatsia
Faucaria
Felicia (blue daisy)
Fendlera
Fenestraria
Ferocactus
Ferraria
Ferula (giant fennel)
Festuca (fescue)
Fibigia
Ficus (fig)
Ficus pumila
Filipendula
Firmiana
Fittonia
Fitzroya
Fockea
Foeniculum (fennel)
Fontanesia
Forsythia
Fortunella (kumquat)
Fothergilla
Fouquieria
Fragaria (strawberry)
Frailea
Francoa
Frangipani
Franklinia
Fraxinus (ash)
Freesia
Fremontodendron
Fritillaria (fritillary)
Fuchsia
Furcraea
G
Gagea
Gaillardia
Galanthus (snowdrop)
Galax
Galega
Galium (bedstraw)
Galtonia
Gardenia
Garrya
Gasteria
Gaultheria
Gaura
Gaylussacia (huckleberry)
Gazania
Geissorhiza
Gelsemium
Genista
Gentiana (gentian)
Gentianopsis
Geranium (cranesbill, not same as Pelargonium)
Gerbera
Gesneria
Geum (avens)
Gevuina
Gibbaeum
Gilia
Gillenia
Ginkgo
Gladiolus
Glaucidium
Glaucium
Gleditsia (honey locust)
Globba
Globularia (globe daisy)
Gloriosa
Glottiphyllum
Gloxinia
Glyceria
Glycyrrhiza
Gomphocarpus
Gomphrena
Goniolimon
Goodyera (jewel orchid)
Gordonia
Graptopetalum
Graptophyllum
Graptoveria (hybrid genus)
Grevillea
Grewia
Greyia
Grindelia
Griselinia
Gunnera (dinosaur food)
Guzmania
Gymnocalycium
Gymnocarpium
Gymnocladus
Gynandriris
Gynura
Gypsophila
H
Haageocereus
Haastia
Habenaria (an orchid genus)
Haberlea
Habranthus
Haemanthus (blood lily)
Hakea
Hakonechloa
Halesia (silverbell)
Halimiocistus (hybrid genus)
Halimium
Halimodendron
Hamamelis (witch-hazel)
Haplopappus
Hardenbergia (coral pea)
Harrisia
Hatiora
Haworthia
Hebe
Hechtia
Hedera (ivy)
Hedychium
Hedyotis (bluets)
Hedysarum
Hedyscepe (umbrella palm)
Helenium (sneezeweed)
Helianthemum (rock rose)
Helianthus (sunflower)
Helichrysum
Heliconia
Helictotrichon
Heliocereus
Heliophila
Heliopsis (ox eye)
Heliotropium (heliotrope)
Helleborus (hellebore)
Heloniopsis
Hemerocallis (daylily)
Hemigraphis
Hepatica
Heptacodium
Heracleum
Herbertia
Hereroa
Hermannia
Hermodactylus
Hesperaloe
Hesperantha
Hesperis
Hesperocallis
Heterocentron
Heterotheca
Heuchera (coral flower)
× Heucherella (hybrid genus)
Hibbertia
Hibiscus (rose of Sharon)
Hieracium (hawkweed)
Himalayacalamus
Hippeastrum (amaryllis)
Hippocrepis
Hippophae
Hohenbergia
Hohenbergiopsis
Hoheria
Holboellia
Holcus
Holmskioldia
Holodiscus
Homalocladium (ribbon bush)
Homeria
Hoodia
Hordeum (barley)
Horminum
Hosta (plantain lily)
Hottonia
Houttuynia
Hovea
Hovenia
Howea (sentry palm)
Hoya (wax flower)
Huernia
Humulus (hops)
Hunnemannia
Huntleya an orchid genus
Hyacinthella
Hyacinthoides
Hyacinthus (hyacinth)
Hydrangea
Hydrastis (goldenseal)
Hydrocharis (frogbit)
Hydrocleys
Hydrocotyle (pennywort)
Hygrophila
Hylocereus
Hylomecon
Hymenocallis
Hymenosporum
Hyophorbe (bottle palm)
Hyoscyamus (henbane)
Hypericum (St. John's wort, rose of Sharon)
Hyphaene (doum palm)
Hypocalymma
Hypoestes
Hypoxis (starflower)
Hypsela
Hyssopus (hyssop)
I
Iberis (candytuft)
Ibervillea
Idesia
Ilex (holly)
Illicium
Impatiens (balsam)
Imperata
Incarvillea
Indigofera
Inula
Iochroma
Ipheion
Ipomoea (morning glory)
Ipomopsis
Iresine
Iris
Isatis
Isoplexis
Isopyrum
Itea
Ixia (corn lily)
Ixiolirion
Ixora
J
Jaborosa
Jacaranda
Jacquemontia
Jamesia
Jasione
Jasminum (jasmine, jessamine)
Jatropha
Jeffersonia
Jovellana
Jovibarba
Juanulloa
Jubaea (Chilean wine palm)
Juglans (walnut)
Juncus (rush)
Juniperus (juniper)
Justicia
K
Kadsura
Kaempferia
Kalanchoe
Kalimeris
Kalmia (mountain laurel)
Kalmiopsis
Kalopanax
Kelseya
Kerria
Kigelia (sausage tree)
Kirengeshoma
Kitaibela
Kleinia
Knautia
Knightia
Kniphofia
Koeleria (junegrass)
Koelreuteria (golden rain tree)
Kohleria
Kolkwitzia (beautybush)
Kosteletzkya
Kunzea
L
Lablab
Laburnocytisus
Laburnum (laburnum)
Laccospadix
Lachenalia (Cape cowslip)
Laelia (an orchid genus)
× Laeliocattleya (hybrid orchid genus)
Lagarosiphon
Lagenophora
Lagerstroemia
Lagunaria
Lagurus
Lamarckia
Lambertia
Lamium (deadnettle)
Lampranthus
Lantana (shrub verbena)
Lapageria
Lardizabala
Larix (larch)
Larrea (creosote bush)
Latania (Latan palm)
Lathraea
Lathyrus
Laurelia
Laurus
Lavandula (lavender)
Lavatera (mallow)
Lawsonia
Layia
Ledebouria
Ledodendron (hybrid genus)
Ledum
Leea
Legousia
Leiophyllum
Leipoldtia
Leitneria
Lemboglossum
Lenophyllum
Leonotis
Leontice
Leontopodium (edelweiss)
Lepidozamia
Leptinella
Lechenaultia
Lespedeza (bush clover)
Leucadendron
Leucanthemella
Leucanthemopsis
Leucanthemum
Leuchtenbergia
Leucocoryne
Leucogenes
Leucojum (snowflake)
Leucophyllum
Leucophyta
Leucopogon
Leucoraoulia (hybrid genus)
Leucospermum (pincushion)
Leucothoe
Lewisia
Leycesteria
Leymus
Liatris
Libertia
Libocedrus
Ligularia
Ligustrum (privet)
Lilium (lily)
Limnanthes
Limnocharis
Limonium (sea lavender)
Linanthus
Linaria (toadflax)
Lindelofia
Lindera
Lindheimera (star daisy)
Linnaea (twinflower)
Linospadix
Linum (flax)
Liquidambar (sweetgum)
Liriodendron (tulip tree)
Liriope (lilyturf)
Lithocarpus
Lithodora
Lithophragma
Lithops
Littonia
Livistona
Loasa
Lobelia
Lobularia (sweet alyssum)
Lodoicea (coco de mer)
Loiseleuria
Lomandra (mat rush)
Lomatia
Lomatium
Lomatophyllum
Lonicera (honeysuckle)
Lopezia
Lophomyrtus
Lophospermum
Lophostemon
Loropetalum
Lotus
Luculia
Ludwigia
Luma
Lunaria
Lupinus (lupin)
Luzula (woodrush)
Lycaste (an orchid genus)
Lychnis (campion)
Lycium
Lycopodium (club moss)
Lycoris
Lygodium (climbing fern)
Lyonia
Lyonothamnus
Lysichiton (yellow skunk cabbage)
Lysiloma
Lysimachia
Lythrum (loosestrife)
M
Maackia
Macfadyena
Machaeranthera
Mackaya
Macleania
Macleaya
Maclura
Macropidia
Macrozamia
Magnolia
Mahonia
Maianthemum (May lily)
Maihuenia
Malcolmia
Malephora
Malope
Malpighia
Malus (apple, crabapple)
Malva (mallow)
Malvastrum
Malvaviscus
Mammillaria
Mandevilla
Mandragora (mandrake)
Manettia
Manglietia
Maranta
Margyricarpus
Marrubium (horehound)
Marsilea (pepperwort)
Masdevallia (an orchid genus)
Matteuccia
Matthiola (stock)
Maurandella
Maurandya
Maxillaria (an orchid genus)
Maytenus
Mazus
Meconopsis
Medicago (alfalfa)
Medinilla
Meehania
Megacodon
Megaskepasma
Melaleuca (paperbark)
Melasphaerula
Melastoma
Melia
Melianthus
Melica (melic)
Melicytus
Melinis
Meliosma
Melissa (balm)
Melittis (bastard balm)
Melocactus
Menispermum (moonseed)
Mentha (mint)
Mentzelia (starflower)
Menyanthes
Menziesia
Merendera
Merremia
Mertensia
Mespilus
Metasequoia (dawn redwood)
Metrosideros
Meum
Mexicoa
Michauxia
Michelia
Microbiota
Microcachrys
Microlepia
Micromeria
Mikania
Milium
Milla
Millettia
Miltonia (an orchid genus)
Miltoniopsis (pansy orchid)
Mimetes
Mimosa (mimosa, or sensitive plant)
Mimulus (monkey flower)
Mirabilis
Miscanthus
Mitchella (partridge berry)
Mitella
Mitraria
Molinia
Moltkia
Moluccella
Monadenium
Monanthes
Monarda (bee balm)
Monardella
Monstera
Moraea
Morina
Morisia
Morus (mulberry)
Mucuna
Muehlenbeckia
Mukdenia
Musa (banana, plantain)
Muscari (grape hyacinth)
Mussaenda
Mutisia
Myoporum
Myosotidium
Myosotis (forget-me-not)
Myrica
Myriophyllum (milfoil)
Myrrhis (sweet cicely)
Myrsine
Myrteola
Myrtillocactus
Myrtus (myrtle)
N
Nandina (heavenly bamboo)
Narcissus (daffodil)
Nasturtium (watercress)
Nautilocalyx
Nectaroscordum
Neillia
Nelumbo (lotus)
Nematanthus
Nemesia
Nemopanthus (mountain holly)
Nemophila
Neobuxbaumia
Neolitsea
Neolloydia
Neomarica
Neoporteria
Neoregelia
Nepenthes (pitcher plant)
Nepeta (catmint)
Nephrolepis
Nerine
Nerium (oleander)
Nertera
Nicandra
Nicotiana (tobacco)
Nidularium
Nierembergia
Nigella
Nipponanthemum
Nolana
Nomocharis
Nopalxochia
Nothofagus (southern beech)
Notholirion
Nothoscordum (false garlic)
Notospartium
Nuphar (spatterdock)
Nymania
Nymphaea (waterlily)
Nymphoides (floating heart)
Nyssa (tupelo)
O
Obregonia
Ochagavia
Ochna
Ocimum
× Odontioda (hybrid orchid genus)
× Odontocidium (hybrid orchid genus)
Odontoglossum (an orchid genus)
Odontonema
× Odontonia (hybrid orchid genus)
Oemleria
Oenanthe (water dropwort)
Oenothera (evening primrose, sundrops)
Olea (olive)
Olearia (daisy bush)
Olneya
Olsynium
Omphalodes (navelwort)
Omphalogramma
Oncidium (an orchid genus)
Onoclea
Ononis (restharrow)
Onopordum
Onosma
Oophytum
Ophiopogon (lilyturf)
Ophrys (an orchid genus)
Oplismenus
Opuntia (prickly pears, chollas and many other cactus species)
Orbea
Orbeopsis
Orchis (an orchid genus)
Oreocereus
Origanum (marjoram, oregano)
Orixa
Ornithogalum
Orontium (golden club)
Orostachys
Oroya
Ortegocactus
Orthophytum
Orthrosanthus
Orychophragmus
Oryza (rice)
Osbeckia
Osmanthus
Osmunda (royal fern)
Osteomeles
Osteospermum
Ostrowskia (giant bellflower)
Ostrya
Othonna
Ourisia
Oxalis (shamrock, sorrel)
Oxydendrum
Oxypetalum
Ozothamnus
P
Pachistima
Pachycereus
Pachycormus
Pachycymbium
Pachyphragma
Pachyphytum
Pachypodium
Pachysandra
Pachystachys
Pachystegia
Pachystima
Pachyveria (hybrid genus)
Paeonia (peony)
Paliurus
Pamianthe
Panax (ginseng)
Pancratium (sea lily)
Pandanus (screw pine)
Pandorea
Panicum
Pansy
Papaver (poppy)
Paphiopedilum (slipper orchid)
Paradisea (paradise lily)
Parahebe
Paraquilegia
Parkinsonia
Parnassia
Parochetus
Parodia
Paronychia
Parrotia
Parrotiopsis
Parthenocissus
Passiflora (granadilla, passionflower)
Patersonia
Patrinia
Paulownia
Paurotis
Pavonia
Pedilanthus
Pediocactus
Pelargonium (geranium)
Pellaea
Peltandra (arrow arum)
Peltoboykinia
Peltophorum
Peniocereus
Pennisetum
Penstemon
Pentachondra
Pentaglottis
Pentas
Peperomia
Peraphyllum
Pereskia
Perezia
Pericallis
Perilla
Periploca
Perovskia (now included in Salvia)
Pernettya (now included in Gaultheria)
Persea
Persicaria (fleeceflower, knotweed)
Petasites (butterbur, sweet coltsfoot)
Petrea
Petrocosmea
Petrophile
Petrophytum
Petrorhagia
Petroselinum (parsley)
Petteria
Petunia
Phacelia
Phaedranassa (queen lily)
Phaius (an orchid genus)
Phalaenopsis (moth orchid)
Phalaris
Phebalium
Phegopteris (beech fern)
Phellodendron (cork tree)
Philadelphus (mock orange)
Philageria (hybrid genus)
Philesia
Phillyrea
Philodendron
Phlebodium
Phlomis
Phlox
Phoenix (date palm)
Phormium
Photinia
Phragmipedium (an orchid genus)
Phragmites (reed)
Phuopsis
Phygelius
Phylica (Cape myrtle)
× Phylliopsis (hybrid genus)
Phyllocladus (toatoa)
Phyllodoce
Phyllostachys
Phyllothamnus (hybrid genus)
Physalis (ground cherry)
Physaria (bladderpod)
Physocarpus
Physoplexis
Physostegia
Phyteuma
Phytolacca (pokeweed)
Picea (spruce)
Picrasma
Pieris
Pilea
Pileostegia
Pilosella
Pilosocereus
Pimelea
Pimpinella
Pinanga
Pinckneya
Pinellia
Pinguicula (butterwort)
Pinus (pine)
Piper (pepper)
Piptanthus
Pisonia
Pistacia (pistachio)
Pistia
Pitcairnia
Pithecellobium
Pittosporum
Pityrogramma
Plantago (plantain)
Platanus (plane tree, sycamore)
Platycarya
Platycerium (staghorn fern)
Platycladus (Chinese arborvitae)
Platycodon (balloon flower)
Platystemon (creamcups)
Plectranthus
Pleioblastus
Pleione (an orchid genus)
Pleiospilos (living granite)
Pleurothallis (an orchid genus)
Plumeria (frangipani)
Poa
Podalyria
Podocarpus
Podophyllum (mayapple)
Podranea
Polemonium (jacob's ladder, abscess root)
Polianthes
Poliothyrsis
Polygala (milkwort, seneca, snakeroot)
Polygonatum
Polygonum (knotweed, knotgrass)
Polypodium
Polyscias
Polystichum
Poncirus
Pongamia
Pontederia (pickerel weed)
Populus (aspen, poplar, cottonwood)
Porana
Portea
Portulaca (purslane, moss rose)
Portulacaria
Posoqueria
Potamogeton
Potentilla (cinquefoil)
Pothos
× Potinara (hybrid orchid genus)
Pratia
Primula (primrose)
Prinsepia
Pritchardia
Proboscidea (unicorn plant)
Promenaea an orchid genus
Prosopis (mesquite)
Prostanthera (mint bush)
Protea
Prumnopitys
Prunella (self-heal)
Prunus (almond, apricot, cherry, peach, plum)
Pseuderanthemum
Pseudocydonia
Pseudolarix (golden-larch)
Pseudopanax
Pseudosasa
Pseudotsuga (douglas-fir)
Pseudowintera
Psilotum
Psychopsis (butterfly orchid)
Psylliostachys (statice)
Ptelea
Pteris (brake, table fern)
Pterocactus
Pterocarya (wingnut)
Pteroceltis
Pterocephalus
Pterodiscus
Pterostyrax
Ptilotus
Ptychosperma
Pueraria
Pulmonaria (lungwort)
Pulsatilla
Pultenaea
Punica (pomegranate)
Purshia
Puschkinia
Putoria
Puya
Pycnanthemum
Pycnostachys
Pyracantha (firethorn)
Pyrola (wintergreen)
Pyrostegia
Pyrrosia
Pyrus (pear)
Q
Quamoclit
Quaqua
Quercus (oak)
Quesnelia
Quisqualis
Quince
R
Ramonda
Ranunculus (buttercup, crowfoot)
Ranzania
Raoulia
Raphia (raffia)
Ratibida
Ravenala (traveler's tree)
Rebutia
Rehderodendron
Rehmannia
Reineckea
Reinwardtia
Reseda (Mignonette)
Retama
Rhamnus
Rhaphidophora
Rhaphiolepis
Rhapidophyllum (needle palm)
Rhapis (lady palm)
Rheum (rhubarb)
Rhexia
Rhipsalis
Rhodanthe (strawflower)
Rhodanthemum
Rhodiola
Rhodochiton
Rhododendron
Rhodohypoxis
Rhodophiala
Rhodothamnus
Rhodotypos
Rhoeo
Rhoicissus
Rhombophyllum
Rhus (sumac)
Rhynchelytrum
Rhynchostylis (an orchid genus)
Ribes (currant)
Richea
Ricinus (castor-oil plant)
Rigidella
Robinia
Rodgersia
Rodriguezia an orchid genus
Rohdea
Romanzoffia
Romneya (Matilija poppy, tree poppy)
Romulea
Rondeletia
Rosa (rose)
Roscoea
Rosmarinus (rosemary)
Rossioglossum an orchid genus
Rothmannia
Roystonea (royal palm)
Rubus (raspberry)
Rudbeckia (coneflower)
Ruellia
Rumex (dock)
Rumohra
Rupicapnos
Ruschia
Ruscus
Russelia
Ruta (rue)
S
Sabal (palmetto)
Saccharum (plume grass, sugar cane)
Sadleria
Sagina (pearlwort)
Sagittaria (arrowhead)
Salix (willow)
Salpiglossis
Salvia (sage)
Salvinia
Sambucus (elder)
Sanchezia
Sandersonia
Sanguinaria (bloodroot)
Sanguisorba (burnet)
Sanicula
Sansevieria
Santolina
Sanvitalia (creeping zinnia)
Sapindus
Sapium (tallow tree)
Saponaria (soapwort)
Sarcocapnos
Sarcocaulon
Sarcococca
Saritaea, see Bignonia magnifica
Sarmienta
Sarracenia (pitcher plant)
Sasa
Sassafras
Satureja (savory)
Sauromatum
Saxegothaea
Saxifraga (saxifrage)
Scabiosa (scabious plant)
Scadoxus (blood lily)
Scaevola
Schefflera
Schima
Schinus
Schisandra
Schizachyrium
Schizanthus
Schizopetalon
Schizophragma
Schizostylis
Schlumbergera
Schoenoplectus
Schomburgkia an orchid genus
Schotia
Schwantesia
Sciadopitys
Scilla (including the former genus Chionodoxa)
Scindapsus
Scirpoides
Sclerocactus
Scoliopus
Scopolia
Scrophularia (figwort)
Scutellaria
Securinega
Sedum (stonecrop)
Selaginella
Selago
Selenicereus
Selinum
Semele
Semiaquilegia
Semiarundinaria
Sempervivum (hens and chicks)
Senecio (ragwort)
Senna
Sequoia (coast redwood)
Sequoiadendron (giant sequoia)
Seriphidium
Serissa
Serruria
Sesbania
Sesleria
Setaria
Shepherdia
Shibataea
Shortia
Sibiraea
Sidalcea
Sideritis
Silene (campion)
Silphium
Silybum
Simmondsia (jojoba)
Sinningia
Sinofranchetia
Sinojackia
Sinowilsonia
Sisyrinchium
Skimmia
Smilacina
Smilax
Smithiantha
Smyrnium
Sobralia an orchid genus
Solandra
Solanum (potato, nightshade)
Soldanella (snowbell)
Soleirolia
Solenopsis
Solenostemon
Solidago (goldenrod)
Solidaster (supposedly hybrid genus; see Solidago)
Sollya
Sonerila
Sophora
× Sophrolaeliocattleya (trigeneric hybrid orchid)
Sophronitis (an orchid genus)
Sorbaria
Sorbus (rowan, whitebeam)
Sorghastrum
Sparaxis
Sparganium (bur-reed)
Sparrmannia
Spartina (cord grass)
Spartium (broom)
Spathiphyllum
Spathodea
Sphaeralcea
Spigelia
Spiraea (spirea)
Spiranthes (an orchid genus)
Sporobolus
Sprekelia
Stachys (betony)
Stachyurus
Stangeria
Stanhopea (an orchid genus)
Stapelia
Stapelianthus
Staphylea (bladdernut)
Stauntonia
Stenanthium
Stenocactus
Stenocarpus
Stenocereus
Stenomesson
Stenotaphrum
Stenotus
Stephanandra
Stephanocereus
Stephanotis
Sternbergia
Stigmaphyllon
Stipa
Stokesia
Stomatium
Stratiotes
Strelitzia (bird of paradise)
Streptocarpus (Cape primrose)
Streptosolen
Strobilanthes
Stromanthe
Strombocactus
Strongylodon
Stuartia
Stylidium
Stylophorum
Styphelia
Styrax
Succisa
Sulcorebutia
Sutera
Sutherlandia
Swainsona
Swainsonia
Syagrus
Sycoparrotia (hybrid genus)
Sycopsis
Symphoricarpos (snowberry)
Symphyandra
Symphytum (comfrey)
Symplocos
Synadenium
Syneilesis
Syngonium
Synthyris
Syringa (lilac)
Syzygium (rose apple)
T
Tabebuia
Tabernaemontana
Tacca
Tagetes (Mexican or French marigold)
Talinum (fameflower)
Tamarix (tamarisk)
Tanacetum (tansy)
Tanakaea
Tapeinochilos
Taxodium (bald cypress)
Taxus (yew)
Tecoma
Tecomanthe
Tecomaria
Tecophilaea
Telekia
Telephium
Tellima
Telopea (waratah)
Templetonia
Terminalia
Ternstroemia
Tetracentron
Tetradium (bee tree)
Tetranema
Tetraneuris
Tetrapanax
Tetrastigma
Tetratheca
Teucrium
Thalia
Thalictrum
Thelesperma
Thelocactus
Thelypteris
Thermopsis
Thespesia
Thevetia
Thlaspi
Thrinax (thatch palm)
Thryptomene (heath myrtle)
Thuja (thuja, arborvitae)
Thujopsis (hiba)
Thunbergia
Thymophylla
Thymus (thyme)
Tiarella
Tibouchina
Tigridia
Tilia (linden)
Tillandsia (air plant, Spanish moss)
Tipuana
Titanopsis
Tithonia (Mexican sunflower)
Todea
Tolmiea
Tolpis
Toona
Torenia
Torreya (nutmeg yew)
Tovara
Townsendia
Trachelium
Trachelium caeruleum (blue throatwort)
Trachelospermum
Trachycarpus (chusan palm)
Trachymene
Tradescantia (spiderwort)
Trapa (water caltrop)
Trichodiadema
Trichosanthes
Tricyrtis (toad lily)
Trientalis
Trifolium (clover)
Trillium
Tripetaleia
Tripterygium
Triteleia (triplet lily)
Tritonia
Trochodendron
Trollius (globeflower)
Tropaeolum (nasturtium)
Tsuga (hemlock)
Tsusiophyllum
Tuberaria
Tulbaghia
Tulipa (tulip)
Tweedia
Tylecodon
Typha (cattail)
U
Uebelmannia
Ugni
Ulex (gorse)
Ulmus (elm)
Umbellularia
Uncinia
Uniola
Urceolina
Urginea
Ursinia
Utricularia (bladderwort)
Uvularia (merrybells, bellwort)
V
Valeriana (garden valerian)
Vallea
Vancouveria
Vanda (an orchid genus)
Vanilla an orchid genus
Veitchia
Vellozia
Veltheimia
Veratrum
Verbascum (mullein)
Verbena
Vernonia (ironweed)
Veronica (speedwell)
Veronicastrum
Verticordia
Vestia
Viburnum
Victoria (giant waterlily)
Vigna (cowpea and various beans)
Viguiera
Vinca (periwinkle)
Viola (pansy, violet)
Virgilia
Viscaria
Vitaliana
Vitex
Vitis (grape)
Vriesea
W
Wachendorfia
Wahlenbergia
Waldsteinia
Washingtonia
Watsonia
Weberocereus
Wedelia
Weigela
Weingartia
Weldenia
Welwitschia
Westringia
Widdringtonia
Wigandia
Wigginsia
Wikstroemia
× Wilsonara (hybrid orchid genus)
Wisteria
Wittrockia
Wolffia
Woodsia
Woodwardia (chain fern)
Worsleya
Wulfenia
X
Xanthoceras
Xanthorhiza
Xanthosoma
Xeranthemum
Xerophyllum
Xylosma
Y
Yucca
Yushania
Z
Zaluzianskya
Zamia
Zamioculcas
Zantedeschia (calla lily)
Zanthoxylum
Zauschneria
Zea (maize)
Zelkova
Zenobia
Zephyranthes
Zigadenus
Zinnia
Zizania (wild rice)
Zygopetalum (an orchid genus)
See also
List of culinary fruits
List of foods
List of vegetables
List of leaf vegetables
Lists of plants
Garden
Plants | List of garden plants in North America | [
"Biology"
] | 10,966 | [
"Lists of biota",
"Lists of plants",
"Plants"
] |
334,186 | https://en.wikipedia.org/wiki/List%20of%20mountains%20on%20the%20Moon | This is a list of mountains on the Moon (with a scope including all named mons and montes, planetary science jargon terms roughly equivalent to 'isolated mountain'/'massif' and 'mountain range').
Caveats
This list is not comprehensive, as surveying of the Moon is a work in progress.
Heights are in meters; most peaks have not been surveyed with the precision of a single meter.
Mountains on the Moon have heights and elevations/altitudes defined relative to various vertical datums (referring to the lunoid), each in turn defined relative to the center of mass (CoM) of the Moon.
— the U.S. Army Mapping Service datum was established 1,737,988 meters from the CoM.
— the U.S. Defense Mapping Agency used 1,730,000 meters.
— The Clementine topographic data use 1,737,400 meters as the baseline, and show a range of about 18,100 meters from lowest to highest point on the Moon.
This is not a list of the highest places on the Moon, meaning those farthest from the CoM. Rather, it is a list of peaks at various heights relative to the relevant datum. This is because the Moon has mass asymmetries: the highest point, located on the far side of the Moon, is approximately 6,500 meters higher than Mons Huygens (usually listed as the tallest mountain).
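As a small worked example of the datum arithmetic described above (a sketch; the function name and the sample summit radius are illustrative, not measurements from the list):

```python
# Clementine baseline radius, from the caveats above.
CLEMENTINE_DATUM_M = 1_737_400

def elevation_m(radius_from_com_m):
    """Elevation relative to the Clementine datum; negative lies below it."""
    return radius_from_com_m - CLEMENTINE_DATUM_M

# A hypothetical summit 1,742,900 m from the Moon's center of mass:
print(elevation_m(1_742_900))  # -> 5500 meters above the datum
```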
List
Gallery
See also
List of mountain ranges
List of features on the Moon
Boot Hill
Duke Island
List of craters on the Moon
List of maria on the Moon
List of valleys on the Moon
List of tallest mountains in the Solar System
Notes
References
External links
List of named lunar mountains in Gazetteer of Planetary Nomenclature
Digital Lunar Orbiter Photographic Atlas of the Moon
Moon
Mountains | List of mountains on the Moon | [
"Astronomy"
] | 360 | [
"Lists of extraterrestrial mountains",
"Astronomy-related lists"
] |
334,290 | https://en.wikipedia.org/wiki/Neutralino | In supersymmetry, the neutralino is a hypothetical particle. In the Minimal Supersymmetric Standard Model (MSSM), a popular model of realization of supersymmetry at a low energy, there are four neutralinos that are fermions and are electrically neutral, the lightest of which is stable in an R-parity conserved scenario of MSSM. They are typically labeled $\tilde{N}^0_1$ (the lightest), $\tilde{N}^0_2$, $\tilde{N}^0_3$ and $\tilde{N}^0_4$ (the heaviest), although sometimes $\tilde{\chi}^0_1, \ldots, \tilde{\chi}^0_4$ is also used when $\tilde{\chi}^\pm_{1,2}$ is used to refer to charginos.
These four states are composites of the bino and the neutral wino (which are the neutral electroweak gauginos), and the neutral higgsinos. As the neutralinos are Majorana fermions, each of them is identical to its antiparticle.
Expected behavior
If they exist, these particles would only interact with the weak vector bosons, so they would not be directly produced at hadron colliders in copious numbers. They would primarily appear as particles in cascade decays (decays that happen in multiple steps) of heavier particles usually originating from colored supersymmetric particles such as squarks or gluinos.
In R-parity conserving models, the lightest neutralino is stable and all supersymmetric cascade-decays end up decaying into this particle which leaves the detector unseen and its existence can only be inferred by looking for unbalanced momentum in a detector.
The heavier neutralinos typically decay through a neutral Z boson to a lighter neutralino or through a charged W boson to a light chargino:
$\tilde{N}^0_2 \to Z^0 + \tilde{N}^0_1 \to \text{missing energy} + \ell^+ + \ell^-$
$\tilde{N}^0_2 \to W^\pm + \tilde{C}^\mp_1 \to W^\pm + W^\mp + \tilde{N}^0_1 \to \text{missing energy} + \ell^+ + \nu_\ell + \ell^- + \bar{\nu}_\ell$
The mass splittings between the different neutralinos will dictate which patterns of decays are allowed.
To date, neutralinos have never been observed or detected in an experiment.
Origins in supersymmetric theories
In supersymmetry models, all Standard Model particles have partner particles with the same quantum numbers except for the quantum number spin, which differs by 1/2 from that of its partner particle. Since the superpartners of the Z boson (zino), the photon (photino) and the neutral Higgs boson (higgsino) have the same quantum numbers, they can mix to form four eigenstates of the mass operator called "neutralinos". In many models the lightest of the four neutralinos turns out to be the lightest supersymmetric particle (LSP), though other particles may also take on this role.
Phenomenology
The exact properties of each neutralino will depend on the details of the mixing (e.g. whether they are more higgsino-like or gaugino-like), but they tend to have masses at the weak scale (100 GeV ~ 1 TeV) and couple to other particles with strengths characteristic of the weak interaction. In this way, except for mass, they are phenomenologically similar to neutrinos, and so are not directly observable in particle detectors at accelerators.
In models in which R-parity is conserved and the lightest of the four neutralinos is the LSP, the lightest neutralino is stable and is eventually produced in the decay chain of all other superpartners. In such cases supersymmetric processes at accelerators are characterized by the expectation of a large discrepancy in energy and momentum between the visible initial and final state particles, with this energy being carried off by a neutralino which departs the detector unnoticed.
This is an important signature to discriminate supersymmetry from Standard Model backgrounds.
Relationship to dark matter
As a heavy, stable particle, the lightest neutralino is an excellent candidate to form the universe's cold dark matter. In many models the lightest neutralino can be produced thermally in the hot early universe and leave approximately the right relic abundance to account for the observed dark matter. A lightest neutralino of roughly 10–10000 GeV is the leading weakly interacting massive particle (WIMP) dark matter candidate.
Neutralino dark matter could be observed experimentally in nature either indirectly or directly. For indirect observation, gamma ray and neutrino telescopes look for evidence of neutralino annihilation in regions of high dark matter density such as the galactic or solar centre. For direct observation, special purpose experiments such as the Cryogenic Dark Matter Search (CDMS) seek to detect the rare impacts of WIMPs in terrestrial detectors. These experiments have begun to probe interesting supersymmetric parameter space, excluding some models for neutralino dark matter, and upgraded experiments with greater sensitivity are under development.
See also
List of hypothetical particles
Weakly interacting slender particle
References
Dark matter
Fermions
Supersymmetric quantum field theory
Hypothetical elementary particles | Neutralino | [
"Physics",
"Materials_science",
"Astronomy"
] | 1,031 | [
"Dark matter",
"Symmetry",
"Unsolved problems in astronomy",
"Supersymmetric quantum field theory",
"Concepts in astronomy",
"Fermions",
"Unsolved problems in physics",
"Subatomic particles",
"Condensed matter physics",
"Exotic matter",
"Hypothetical elementary particles",
"Supersymmetry",
"... |
334,320 | https://en.wikipedia.org/wiki/Rule%20of%2072 | In finance, the rule of 72, the rule of 70 and the rule of 69.3 are methods for estimating an investment's doubling time. The rule number (e.g., 72) is divided by the interest percentage per period (usually years) to obtain the approximate number of periods required for doubling. Although scientific calculators and spreadsheet programs have functions to find the accurate doubling time, the rules are useful for mental calculations and when only a basic calculator is available.
These rules apply to exponential growth and are therefore used for compound interest as opposed to simple interest calculations. They can also be used for decay to obtain a halving time. The choice of number is mostly a matter of preference: 69 is more accurate for continuous compounding, while 72 works well in common interest situations and is more easily divisible.
There are a number of variations to the rules that improve accuracy. For periodic compounding, the exact doubling time for an interest rate of r percent per period is

$$t = \frac{\ln 2}{\ln\left(1 + \frac{r}{100}\right)},$$

where t is the number of periods required. The formula above can be used for more than calculating the doubling time. If one wants to know the tripling time, for example, replace the constant 2 in the numerator with 3. As another example, if one wants to know the number of periods it takes for the initial value to rise by 50%, replace the constant 2 with 1.5.
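A minimal Python sketch of this generalization (the function name is illustrative, not from the source):

```python
import math

def periods_to_multiply(rate_percent: float, multiple: float = 2.0) -> float:
    """Exact number of compounding periods for a value to grow by the
    given multiple at rate_percent interest per period."""
    return math.log(multiple) / math.log(1 + rate_percent / 100)

print(periods_to_multiply(9))        # doubling at 9%: ~8.04 periods
print(periods_to_multiply(9, 3))     # tripling: replace 2 with 3, ~12.75 periods
print(periods_to_multiply(9, 1.5))   # growth by 50%: ~4.70 periods
```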
Using the rule to estimate compounding periods
To estimate the number of periods required to double an original investment, divide the most convenient "rule-quantity" by the expected growth rate, expressed as a percentage.
For instance, if you were to invest $100 with compounding interest at a rate of 9% per annum, the rule of 72 gives 72/9 = 8 years required for the investment to be worth $200; an exact calculation gives ln(2)/ln(1+0.09) = 8.0432 years.
Similarly, to determine the time it takes for the value of money to halve at a given rate, divide the rule quantity by that rate.
To determine the time for money's buying power to halve, financiers divide the rule-quantity by the inflation rate. Thus at 3.5% inflation using the rule of 70, it should take approximately 70/3.5 = 20 years for the value of a unit of currency to halve.
To estimate the impact of additional fees on financial policies (e.g., mutual fund fees and expenses, loading and expense charges on variable universal life insurance investment portfolios), divide 72 by the fee. For example, if the Universal Life policy charges an annual 3% fee over and above the cost of the underlying investment fund, then the total account value will be cut to 50% in 72 / 3 = 24 years, and then to 25% of the value in 48 years, compared to holding exactly the same investment outside the policy.
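The halving examples above can be checked the same way; a hedged sketch (the exact figures depend on how the erosion is modelled; here inflation is modelled as price growth and the fee as exponential decay of relative value):

```python
import math

# Buying power halves when prices double: (1 + 0.035)**t == 2,
# so t = ln(2)/ln(1.035) ~ 20.1 years; the rule of 70 estimates 70/3.5 = 20.
print(math.log(2) / math.log(1.035))   # 20.14...

# A 3% annual fee as decay of relative value: (1 - 0.03)**t == 0.5,
# so t = ln(2)/-ln(0.97) ~ 22.8 years; the rule of 72 estimates 72/3 = 24.
# (The rules are rougher approximations for decay than for growth.)
print(math.log(2) / -math.log(0.97))   # 22.75...
```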
Choice of rule
The value 72 is a convenient choice of numerator, since it has many small divisors: 1, 2, 3, 4, 6, 8, 9, and 12. It provides a good approximation for annual compounding, and for compounding at typical rates (from 6% to 10%); the approximations are less accurate at higher interest rates.
For continuous compounding, 69 gives accurate results for any rate, since ln(2) is about 69.3%; see derivation below. For daily compounding, which is close enough to continuous compounding, 69, 69.3 or 70 are better choices than 72 for most purposes. For annual rates lower than those above, 69.3 would also be more accurate than 72; for higher annual rates, 78 is more accurate.
History
An early reference to the rule is in the Summa de arithmetica (Venice, 1494. Fol. 181, n. 44) of Luca Pacioli (1445–1514). He presents the rule in a discussion regarding the estimation of the doubling time of an investment, but does not derive or explain the rule, and it is thus assumed that the rule predates Pacioli by some time.
Roughly translated: "In wanting to know of any capital, at a given yearly percentage, in how many years it will be doubled, adding the interest to the capital, keep as a rule the number 72 in mind, which you will always divide by the interest, and the result is in how many years it will be doubled. Example: When the interest is 6 percent per year, I say that one divides 72 by 6; 12 results, and in 12 years the capital will be doubled."
Derivation
Periodic compounding
For periodic compounding, future value is given by:

$$FV = PV \cdot \left(1 + \frac{r}{100}\right)^t,$$

where $PV$ is the present value, $t$ is the number of time periods, and $r$ stands for the interest rate per time period, in percent.

The future value is double the present value when $FV = 2\,PV$, which is the following condition:

$$\left(1 + \frac{r}{100}\right)^t = 2.$$

This equation is easily solved for $t$:

$$t = \frac{\ln 2}{\ln\left(1 + \frac{r}{100}\right)}.$$

A simple rearrangement shows

$$t = \frac{\ln 2}{r/100} \cdot \frac{r/100}{\ln(1 + r/100)} = \frac{69.3}{r} \cdot \frac{r/100}{\ln(1 + r/100)}.$$

If $r/100$ is small, then $\ln(1 + r/100)$ approximately equals $r/100$ (this is the first term in the Taylor series). That is, the latter factor grows slowly when $r$ is close to zero.

Call this latter factor $f(r) = \frac{r/100}{\ln(1 + r/100)}$. The function $f(r)$ is shown to be accurate in the approximation of $t$ for a small, positive interest rate when $r = 8$ (see derivation below). $f(8) \approx 1.039$, and we therefore approximate time $t$ as:

$$t \approx \frac{69.3}{r} \cdot 1.039 \approx \frac{72}{r}.$$

This approximation increases in accuracy as the compounding of interest becomes continuous (see derivation below).

In order to derive a more precise adjustment, it is noted that $\ln(1 + r/100)$ is more closely approximated by $\frac{r}{100} - \frac{1}{2}\left(\frac{r}{100}\right)^2$ (using the second term in the Taylor series). $t = \frac{69.3}{r} f(r)$ can then be further simplified by Taylor approximations:

$$t \approx \frac{69.3}{r} \cdot \frac{1}{1 - \frac{r}{200}} \approx \frac{69.3}{r}\left(1 + \frac{r}{200}\right) = \frac{69.3 + 0.3465\,r}{r}.$$

Replacing the $r$ in the term $0.3465\,r$ with 7.79 gives 72 on the numerator ($69.3 + 0.3465 \times 7.79 \approx 72$). This shows that the rule of 72 is most accurate for periodically compounded interests around 8%. Similarly, replacing that $r$ with 2.02 gives 70 on the numerator, showing the rule of 70 is most accurate for periodically compounded interests around 2%.

As a sophisticated but elegant mathematical method to achieve a more accurate fit, the function $t(r) = \frac{\ln 2}{\ln(1 + r/100)}$ is developed in a Laurent series at the point $r = 0$. With the first two terms one obtains:

$$t \approx \frac{69.3}{r} + 0.347,$$

or rounded

$$t \approx \frac{70}{r} + \frac{1}{3}.$$
Continuous compounding
In the case of theoretical continuous compounding, the derivation is simpler and yields a more accurate rule: the value doubles when $e^{rt/100} = 2$, so

$$t = \frac{100 \ln 2}{r} \approx \frac{69.3}{r}.$$
See also
Exponential growth
Time value of money
Interest
Discount
Rule of 16
Rule of three (statistics)
References
Sources
External links
The Scales Of 70 – extends the rule of 72 beyond fixed-rate growth to variable rate compound growth including positive and negative rates.
Debt
Exponentials
Interest
Rules of thumb
Mathematical finance
Mental calculation | Rule of 72 | [
"Mathematics"
] | 1,272 | [
"Applied mathematics",
"E (mathematical constant)",
"Mental calculation",
"Arithmetic",
"Exponentials",
"Mathematical finance"
] |
334,420 | https://en.wikipedia.org/wiki/Telecommunications%20device%20for%20the%20deaf | A telecommunications device for the deaf (TDD) is a teleprinter, an electronic device for text communication over a telephone line, that is designed for use by persons with hearing or speech difficulties. Other names for the device include teletypewriter (TTY), textphone (common in Europe), and minicom (United Kingdom).
The typical TDD is a device about the size of a typewriter or laptop computer, with a QWERTY keyboard and a small screen (LED, LCD, or VFD) that displays typed text electronically. In addition, TDDs commonly have a small spool of paper on which text is also printed; older versions of the device had only a printer and no screen. The text is transmitted live, via a telephone line, to a compatible device, i.e. one that uses a similar communication protocol.
Special telephone services have been developed to carry the TDD functionality even further. In certain countries, there are systems in place so that a deaf person can communicate with a hearing person on an ordinary voice phone using a human relay operator. There are also "carry-over" services, enabling people who can hear but cannot speak ("hearing carry-over," a.k.a. "HCO"), or people who cannot hear but are able to speak ("voice carry-over," a.k.a. "VCO") to use the telephone.
The term TDD is sometimes discouraged because people who are deaf increasingly use mainstream devices and technologies to carry out most of their communication. The devices described here were developed for use on the partially analog Public Switched Telephone Network (PSTN). They do not work well on the newer internet protocol (IP) networks, so as society increasingly moves toward IP-based telecommunication, the telecommunication devices used by people who are deaf will no longer be TDDs. In the US and Canada, the devices are referred to as TTYs.
Teletype Corporation, of Skokie, Illinois, made page printers for text, notably for news wire services and telegrams, but these used standards different from those for deaf communication, and although in quite widespread use, were technically incompatible. Furthermore, these were sometimes referred to by the "TTY" initialism, short for "Teletype". When computers had keyboard input mechanisms and page printer output, before CRT terminals came into use, Teletypes were the most widely used devices. They were called "console typewriters". (Telex used similar equipment, but was a separate international communication network.)
History
APCOM acoustic coupler or MODEM device
The TDD concept was developed by James C. Marsters (1924–2009), a dentist and private airplane pilot who became deaf as an infant because of scarlet fever, and Robert Weitbrecht, a deaf physicist. In 1964, Marsters, Weitbrecht and Andrew Saks, an electrical engineer and grandson of the founder of the Saks Fifth Avenue department store chain, founded APCOM (Applied Communications Corp.), located in the San Francisco Bay area, to develop the acoustic coupler, or modem; their first product was named the PhoneType. APCOM collected old teleprinter machines (TTYs) from the Department of Defense and junkyards. Acoustic couplers were cabled to TTYs enabling the AT&T standard Model 500 telephone to couple, or fit, into the rubber cups on the coupler, thus allowing the device to transmit and receive a unique sequence of tones generated by the different corresponding TTY keys. The entire configuration of teleprinter machine, acoustic coupler, and telephone set became known as the TTY. Weitbrecht invented the acoustic coupler modem in 1964. The actual mechanism for TTY communications was accomplished electro-mechanically through frequency-shift keying (FSK) allowing only half-duplex communication, where only one person at a time can transmit.
Paul Taylor TTY device
During the late 1960s, Paul Taylor combined Western Union Teletype machines with modems to create teletypewriters, known as TTYs. He distributed these early, non-portable devices to the homes of many in the deaf community in St. Louis, Missouri. He worked with others to establish a local telephone wake-up service. In the early 1970s, these small successes in St. Louis evolved into the nation's first local telephone relay system for the deaf.
Micon Industries MCM device
In 1973, the Manual Communications Module (MCM), which was the world's first electronic portable TTY allowing two-way telecommunications, premiered at the California Association of the Deaf convention in Sacramento, California. The battery-powered MCM was invented and designed by a deaf news anchor and interpreter, Kit Patrick Corson, in conjunction with Michael Cannon and physicist Art Ogawa. It was manufactured by Michael Cannon's company, Micon Industries, and initially marketed by Kit Corson's company, Silent Communications. In order to be compatible with the existing TTY network, the MCM was designed around the five-bit Baudot code established by the older TTY machines instead of the ASCII code used by computers. The MCM was an instant success with the deaf community despite the drawback of a $599 cost. Within six months there were more MCMs in use by the deaf and hard of hearing than TTY machines. After a year Micon took over the marketing of the MCM and subsequently concluded a deal with Pacific Bell (who coined the term "TDD") to purchase MCMs and rent them to deaf telephone subscribers for $30 per month.
After Micon formed an alliance with APCOM, Michael Cannon (Micon), Paul Conover (Micon), and Andrea Saks (APCOM) successfully petitioned the California Public Utilities Commission (CPUC), resulting in a tariff that paid for TTY devices to be distributed free of cost to deaf persons. Micon produced over 1,000 MCMs per month, resulting in approximately 50,000 MCMs being disseminated into the deaf community.
Before he left Micon in 1980, Michael Cannon developed several computer compatible variations of the MCM and a portable, battery operated printing TTY, but they were never as popular as the original MCM. Newer model TTYs could communicate with selectable codes that allow communications at a higher bit rate on those models similarly equipped. However, the lack of true computer interface functionality spelled the demise of the original TTY and its clones. During the mid-1970s, other so-called portable telephone devices were being cloned by other companies, and this was the time period when the term "TDD" began being used largely by those outside the deaf community.
Text messaging and the Def-Tone System (DTS)
This relay system became known commonly as the Def-Tone System (DTS) because the tones representing letters of the alphabet were eventually carried in tones outside the range of human hearing. Today, this input method is commonly called multi-tap, because a key is pressed one, two, or three times to produce the corresponding letter. In 1994 Joseph Alan Poirier, a college student-worker, recommended using the system to send texts to forklifts to improve delivery of parts to the assembly line at GM Powertrain in Toledo, Ohio, and to send texts to pagers. He recommended upgrading pagers to alphanumeric displays incorporating the same system in discussions with the pager supplier for Outback Steakhouse, and having relays put in the forklifts to ping alert messages to the pagers used in that system. He called it text messaging, coining the phrase. It is theorized that when Toyota's forklift business was allegedly hired by GM for this work, one of the subcontractors, Kyocera, utilized the work done for Toyota to create text messaging for cell phones.
Marsters Award
In 2009, AT&T received the James C. Marsters Promotion Award from TDI (formerly Telecommunications for the Deaf, Inc.) for its efforts to increase accessibility to communication for people with disabilities. The award holds some irony; it was AT&T that, in the 1960s, resisted efforts to implement TTY technology, claiming it would damage its communication equipment. In 1968, the Federal Communications Commission struck down AT&T's policy and forced it to offer TTY access to its network.
Protocols
There are many different standards for TDDs and textphones.
Original 5-bit Baudot code
The original standard used by TTYs is a variant of the Baudot code. The maximum speed of this protocol is about six characters per second, as the framing arithmetic below shows. This is a half-duplex protocol, which means that only one person at a time may transmit characters. If both try to transmit at the same time, the characters will be garbled on the other end.
This protocol is commonly used in the United States.
This is a variant of the Baudot code, implemented as 5 bits per character, transmitted asynchronously using frequency-shift keying at either 45.5 or 50 baud, with 1 start bit, 5 data bits, and 1.5 stop bits. Details of the protocol implementation are available in TIA-825-A and also in ITU-T Recommendation V.18 Annex A, "5-bit operational mode".
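A back-of-the-envelope sketch of what this framing implies for throughput (arithmetic only, not part of the standard; the function name is illustrative):

```python
def chars_per_second(baud: float, start_bits: float = 1,
                     data_bits: float = 5, stop_bits: float = 1.5) -> float:
    """Asynchronous framing: each character costs start + data + stop bit times."""
    return baud / (start_bits + data_bits + stop_bits)

print(chars_per_second(45.5))  # ~6.07 characters per second
print(chars_per_second(50))    # ~6.67 characters per second
```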
Turbo Code
The UltraTec company implements another protocol known as Enhanced TTY, which it calls "Turbo Code," in its products. Turbo Code has some advantages over Baudot protocols, such as a higher data rate, full ASCII compliance, and full-duplex capability. However, Turbo Code is proprietary, and UltraTec gives its specifications only to parties who are willing to license it, although some information concerning it has been publicly disclosed.
Other legacy protocols
Other protocols used for text telephony are European Deaf Telephone (EDT) and dual-tone multi-frequency signaling (DTMF).
The ITU-T V-series recommendations include the following early modem standards approved by the ITU in 1988:
ITU-T V.21 specifies 300 bits per second duplex mode.
ITU-T V.23 specifies audio frequency-shift keying modulation to encode and transfer data at 600/1200 bits per second.
V.18
In 1994, the ITU approved the V.18 standard, which comprises two major parts, a dual standard. It is both an umbrella protocol that allows recognition and interoperability of some of the most commonly used textphone protocols, as well as offering a native V.18 mode, which is an ASCII full- or half-duplex modulation method.
Computers can, with appropriate software and a modem, emulate a V.18 TTY. Some voice modems, coupled with appropriate software, can now be converted to TTY modems by using a software-based decoder for TTY tones. The same can be done using a computer's sound card, when coupled to the telephone line.
In the UK, a virtual V.18 network, called TextDirect, exists as part of the Public Switched Telephone Network (PSTN), thereby offering interoperability between textphones using different protocols. The platform also offers additional functionality like call progress and status information in text and automatic invocation of a relay service for speech-to-text calls.
Cell phones
Many digital cell phones are compatible with TTY devices.
Many people want to replace TTY with real-time text over IP (RTT), which can be used on a digital cell phone or tablet without a separate TTY device.
New technologies
As TDDs are increasingly considered legacy devices, with the emergence of modern technologies such as email, texting and instant messaging, text from TDDs is increasingly sent over Text over IP gateways or other real-time text protocols. However, these newer methods require IP connections and will not work with regular analog phone lines unless a data connection is used (e.g. dial-up Internet, or the modem method of multiplexing text and voice used in Captioned Telephone hardware handsets). Because some people have no access to any kind of data connection, which is not available at all in some parts of many countries, TTYs are still the only method for analog landline text phone calls, although a TTY can be any device with a suitable modem and software.
Other devices for the deaf or hard of hearing
In addition to TDD, there are a number of pieces of equipment that can be coupled to telephones to improve their utility. For those with hearing difficulties the telephone ring and conversation sound level can be amplified or pitch adjusted; ambient noise can also be filtered. The amplifier can be a simple addition or through an inductive coupler to interact with suitable hearing aids. The ring can also be supplemented with extension bells or a visual call indicator.
Etiquette
There are some etiquette rules that users of TTYs must be aware of. Because a TTY user cannot detect when the other person has finished typing, and because two people typing at once will scramble the text on both ends, the term "Go Ahead" (GA) is used to denote the end of a turn and to indicate that the other person may begin typing.
Sample conversation
Caller A: HELLO JOHN, WHAT TIME WILL YOU BE COMING AROUND TODAY Q GA
Caller B: HI FRED, I WILL BE AROUND NOON GA
Caller A: OK, NO PROBLEM, DON'T FORGET TO BRING THE BOOKS AND THE WORK SO FAR GA
Caller B: WILL DO SK
Caller A: BYE BYE SKSK
SK is used to allow the users to say their farewells, while SKSK indicates an immediate call hang-up.
Sample conversation 2
Caller A: HI, THIS IS JOHN, CAN I ASK WHO IS CALLING? GA
Caller B: HI JOHN, ITS ME FRED, I AM WONDERING WHERE YOU ARE, ITS GETTING LATE TO GO OUT TO THE PUB GA
Caller A: HI FRED, SORRY I DONT THINK I CAN GO GA
Caller B: WHY CANT YOU GO? GA
Caller A: MY WIFE IS NOT FEELING WELL AND I HAVE NO BABYSITTER FOR MY KIDS! GA
Caller B: AWWWW DARN. I WANTED YOU THERE. OH WELL WHAT CAN YOU DO ? GA
Caller A: I KNOW.. I GOTTA GO. THE KIDS NEED ME. SEE YOU AROUND! BYE FOR NOW SK
Caller B: OK NO WORRIES SEE YOU SOON! BYE BYE SK GA
Caller A: SKSK (THE PARTY HAS HUNG UP)
Sample text relay call
Caller A: TXD DIALING.. TXD RING... TXD OPERATOR CONNECTED.. EXPLAINING TEXT RELAY SERVICE. PLEASE WAIT.... HI THIS IS JOHN GA
Caller B: HI JOHN ITS ME FRED. I AM WONDERING WHAT YOU ARE DOING TONIGHT? GA
Caller A: HI FRED. I AM THINKING OF HAVING A POKER NIGHT AT MINE, WHAT DO YOU THINK? GA
Caller B: GOOD IDEA, I'LL CALL A FEW MATES TO COME ROUND AND HAVE A GOOD GAME GA
Caller A: OK SEE YOU AT 7PM. BYE BYE SK GA
Caller B: OK SEE YOU AT 7PM BYE BYE SKSKSKSK GA
Caller A: THANK YOU FOR USING TEXT RELAY SERVICE. GOODBYE
Note: TTYs display only capital letters, except on devices with computer screens.
Note: In the UK, the text relay service used to be called Typetalk (RNID) but has since been merged with the phone line, using the dialling prefix 18001 (TTY) or 18002 (voice relay). The emergency line is 18000 (TTY).
TRS relay
One of the most common uses for a TTY is to place calls to a Telecommunications Relay Service (TRS), which makes it possible for the deaf to successfully make phone calls to users of regular phone systems.
Voice recognition systems are in limited use due to problems with the technology. A new development, the captioned telephone, now uses voice recognition to assist the human operators. Newer text-based communication methods, such as short message service (SMS), Internet Relay Chat (IRC), and instant messaging, have also been adopted by the deaf as an alternative or adjunct to TTY.
See also
List of video telecommunication services and product brands
Telecommunications relay service
Video relay service (VRS), using videotelephony
Notes
References
Assistive technology
Deafness
Telecommunications
American inventions | Telecommunications device for the deaf | [
"Technology"
] | 3,339 | [
"Information and communications technology",
"Telecommunications"
] |
334,423 | https://en.wikipedia.org/wiki/AX.25 | AX.25 (Amateur X.25) is a data link layer protocol originally derived from layer 2 of the X.25 protocol suite and designed for use by amateur radio operators. It is used extensively on amateur packet radio networks.
AX.25 v2.0 is responsible for establishing link layer connections, transferring data encapsulated in frames between nodes, and detecting errors introduced by the communications channel.
AX.25 v2.2 (1998) introduced several efficiency improvements, especially at higher data rates. Stations can automatically negotiate payload sizes larger than the previous limit of 256 bytes. Extended sequence numbers (7 bits vs. 3) allow a larger window size, i.e. the number of frames that can be sent before waiting for acknowledgement. "Selective Reject" allows only the missing frames to be resent, rather than wastefully resending frames that have already been received successfully. Despite these advantages, few implementations have been updated to include the improvements, published more than 20 years ago. The only known complete implementation of v2.2 at this time (2020) is the Dire Wolf software TNC.
AX.25 is commonly used as the data link layer for network layer protocols such as IPv4, with TCP used on top of that. AX.25 supports a limited form of source routing. Although it is possible to build AX.25 switches analogous to Ethernet switches, this has not yet been accomplished.
Specification
AX.25 does not define a physical layer implementation. In practice 1200 baud Bell 202 tones and 9600 baud G3RUH DFSK are almost exclusively used on VHF and UHF. On HF the standard transmission mode is 300 baud Bell 103 tones. At the physical layer, AX.25 defines only a "physical layer state machine" and some timers related to transmitter and receiver switching delays.
At the link layer, AX.25 uses HDLC frame syntax and procedures. (ISO 3309) frames are transmitted with NRZI encoding. HDLC specifies the syntax, but not the semantics, of the variable-length address field of the frame. AX.25 specifies that this field is subdivided into multiple addresses: a source address, zero or more repeater addresses, and a destination address, with embedded control fields for use by the repeaters. To simplify compliance with amateur radio rules, these addresses derive from the station call signs of the source, destination and repeater stations.
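To illustrate the callsign-based addressing, here is a minimal sketch; the shift-left-by-one encoding and SSID byte layout follow the AX.25 v2.x address format, but the helper name and example callsigns are illustrative:

```python
def encode_ax25_address(callsign: str, ssid: int, last: bool = False) -> bytes:
    """Encode one 7-byte entry of the AX.25 address field.

    Each of the 6 callsign characters (space-padded) is ASCII shifted left
    one bit, leaving the low bit free as the HDLC address-extension bit.
    The 7th byte carries the 4-bit SSID in bits 1-4; the extension bit is
    set only on the final address in the frame."""
    field = bytearray((ord(c) << 1) for c in callsign.upper().ljust(6))
    ssid_byte = 0b0110_0000 | ((ssid & 0x0F) << 1)  # reserved bits set to 1
    if last:
        ssid_byte |= 0x01  # no more addresses follow
    field.append(ssid_byte)
    return bytes(field)

# Destination, then source (source marked last: a frame with no repeaters).
frame_addr = encode_ax25_address("APRS", 0) + encode_ax25_address("N0CALL", 7, last=True)
```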
Media access control follows the Carrier sense multiple access approach with collision recovery (CSMA/CR).
AX.25 supports both virtual-circuit connected and datagram-style connectionless modes of operation. The latter is used to great effect by the Automatic Packet Reporting System (APRS).
A simple source routing mechanism using digipeaters is available at the datalink level. Digipeaters act as simplex repeaters, receiving, decoding and retransmitting packets from local stations. They allow multi-hop connections to be established between two stations unable to communicate directly. The digipeaters use and modify the information in the frame's address field to perform this function.
The AX.25 specification defines a complete, albeit point-to-point-only, network layer protocol, but this has seen little use outside of keyboard-to-keyboard or keyboard-to-BBS connections. NET/ROM, ROSE, and TexNet exist to provide routing between nodes. In principle, a variety of layer 3 protocols can be used with AX.25, including the ubiquitous Internet Protocol (IP). This approach is used by AMPRNet, an amateur radio TCP/IP network using AX.25 UI-frames at the datalink layer.
Implementations
Traditionally, amateur radio operators have connected to AX.25 networks through the use of a terminal node controller, which contains a microprocessor and an implementation of the protocol in firmware. These devices allow network resources to be accessed using only a dumb terminal and a transceiver.
AX.25 has also been implemented on personal computers. For example, the Linux kernel includes native support for AX.25. The computer connects to a transceiver via its audio interface or via a simple modem. Computers can also interconnect with other computers, or be bridged or routed to TNCs and transceivers located elsewhere, using BPQ over Ethernet framing, which the Linux kernel also supports natively. This facilitates more modern setups in which the transceivers are placed directly under or in the antenna mast, shortening the lossy RF wiring and replacing expensive, long, thick coaxial cables and amplifiers with cheap fibre (resistant to RFI in both directions, EMP and lightning) or copper Ethernet wiring. BPQ Ethernet framing allows entire stacks of TNC and transceiver pairs to be connected to an existing network of computers, which can then all access all of the offered radio links simultaneously (transparently bridged), communicate with each other internally over AX.25, or reach specific TNCs/radio frequencies with filtered routing.
Dire Wolf is a free open-source replacement for the 1980s-style TNC. It contains DSP software modems and a complete implementation of AX25 v2.2 plus FX.25 forward error correction. It can function as a digital repeater, GPS tracker, and APRS Internet Gateway (IGate) without any additional software.
KISS-mode framing
See full article at KISS (TNC)
AX.25 is often used with a TNC that implements the KISS framing as a low-cost alternative to using expensive and uncommon HDLC controller cards.
The KISS framing is not part of the AX.25 protocol itself nor is it sent over the air. It merely serves to encapsulate the protocol frames in a way that can successfully pass over a serial link to the TNC. The KISS framing is derived from SLIP, and makes many of the same assumptions, such as there only being two "endpoints" involved in the conversation. With SLIP, these were the two SLIP-connected hosts; with KISS, it is assumed that the KISS framing link is over serial with only the host computer and the TNC involved. Among other things, this makes it awkward to address multiple TNCs without having multiple (serial) data channels.
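A hedged sketch of the byte-stuffing KISS performs on the serial link (the FEND/FESC constants are the standard KISS special bytes; the function name is illustrative):

```python
FEND, FESC, TFEND, TFESC = 0xC0, 0xDB, 0xDC, 0xDD

def kiss_encapsulate(ax25_frame: bytes, port: int = 0) -> bytes:
    """Wrap a raw AX.25 frame for transfer to a TNC over a serial link.

    The frame is delimited by FEND bytes; any FEND or FESC occurring in
    the payload is escaped so the delimiter stays unambiguous. The first
    byte after the opening FEND is the command byte (here: 'data frame'
    on the given TNC port)."""
    out = bytearray([FEND, (port & 0x0F) << 4])  # low nibble 0 = data frame
    for b in ax25_frame:
        if b == FEND:
            out += bytes([FESC, TFEND])
        elif b == FESC:
            out += bytes([FESC, TFESC])
        else:
            out.append(b)
    out.append(FEND)
    return bytes(out)
```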
Alternatives to KISS do exist that address these limitations, such as 6PACK.
Applications
AX.25 has most frequently been used to establish direct, point-to-point links between packet radio stations, without any additional network layers. This is sufficient for keyboard-to-keyboard contacts between stations and for accessing local bulletin board systems and DX clusters.
In recent years, APRS has become a popular application.
For tunneling of AX.25 packets over IP, AXIP and AXUDP are used to encapsulate AX.25 into IP or UDP packets.
Limitations
At the speeds commonly used to transmit packet radio data (rarely higher than 9,600 bit/s, and typically 1,200 bit/s), the use of additional network layers with AX.25 is impractical due to the data overhead involved. This is not a limitation of AX.25 per se, but places constraints on the sophistication of applications designed to use it.
HDLC protocols identify each frame by an address. The AX.25 implementation of HDLC includes sender and destination station call-sign plus four-bit Secondary Station Identifier (SSID) value in range 0 through 15 in the frame address.
At the ITU World Radiocommunication Conference 2003 (WRC-03), the amateur radio station callsign specification was amended so that the earlier maximum length of six characters was raised to seven characters. However, AX.25 has a built-in hard limit of six characters, which means a seven-character callsign cannot be used in an AX.25 network.
AX.25 lacks an explicit port (or SAP); the SSID often assumes this role. Thus there can be only one service per AX.25 station SSID address, which is often kludged around with varying degrees of success.
Some amateurs, notably Phil Karn KA9Q, have argued that AX.25 is not well-suited to operation over noisy, limited-bandwidth radio links, citing its lack of forward error correction (FEC) and automatic data compression. However, a viable widely adopted successor to AX.25 has yet to emerge. Likely reasons may include:
a large existing deployment of recycled narrowband FM radios and especially existing APRS applications,
easy availability of cheap, low-power FM transmitters, especially for the 430 MHz UHF band, to match existing legacy radio gear,
new radio level modulations would need different radio gear than what is currently in use and the resulting system would be incompatible with the existing one thus requiring a large initial investment in new radio gear,
adoption of newer line codings, potentially including forward error correction, takes more effort than the 1,200 bit/s AFSK of Bell 202. The small 8-bit microprocessors with 128 bytes of RAM that previously sufficed would no longer be enough, and new ones might cost US$30 instead of US$3. Phil Karn demonstrated decoding of his proposed modulation by running it on a Pentium II machine; some 10 years later, mid-level embedded microprocessors are capable of the same with a system cost under US$50.
Despite these limitations, an extension to the AX.25 protocol, supporting forward error correction, has been created by the TAPR. This extension is called FX.25.
Small gadget transmitters do not need to know what is being transmitted; they only need to monitor channel occupation via the receiver's RSSI (Received Signal Strength Indication) to know when not to send. Transmitting an interleaved Reed-Solomon FEC signal in some smart modulation needs far fewer resources than reception of the same signal, so a sufficient microprocessor might cost just US$5 instead of US$30 and the system cost might stay below US$50, transmitter included. In recent years, however, the ability to receive as well as send using cheap microcontrollers (such as the Atmel AVR or the Motorola 68HC08 families) has been demonstrated.
It seems, however, that any new system that is not compatible with the current Bell 202 modulation is unlikely to be widely adopted. The current modulation seems to fulfill sufficient need that little motivation exists to move to a superior design, especially if the new design requires significant hardware purchases.
Most recently, a wholly new protocol with forward error correction has been created by Nino Carillo, KK4HEJ, called Improved Layer 2 Protocol (IL2P).
See also
Packet radio
Automatic Packet Reporting System (APRS)
FX.25 Forward Error Correction
Improved Layer 2 Protocol (IL2P)
References
Further reading
AMPRNet a project to construct a global, radio-based network using TCP/IP over AX.25 links
Linux-AX25.org a site dedicated to packet radio on Linux
AX.25 Layer 2 a web site established to be a concise repository for AX.25 layer 2 design activities
APRS Bob Bruninga's official APRS website
TARPN Tadd Torborg KA2DEW - Terrestrial Amateur Radio Packet Network site
AX.25 Specification ax25.net
Packet radio
Link protocols
X.25 | AX.25 | [
"Technology"
] | 2,304 | [
"Wireless networking",
"Packet radio"
] |
334,463 | https://en.wikipedia.org/wiki/Corm | Corm, bulbo-tuber, or bulbotuber is a short, vertical, swollen underground plant stem that serves as a storage organ that some plants use to survive winter or other adverse conditions such as summer drought and heat (perennation).
The word cormous usually means plants that grow from corms, parallel to the terms tuberous and bulbous to describe plants growing from tubers and bulbs.
A corm consists of one or more internodes with at least one growing point, generally with protective leaves modified into skins or tunics. The tunic of a corm forms from dead petiole sheaths—remnants of leaves produced in previous years. They act as a covering, protecting the corm from insects, digging animals, flooding, and water loss. The tunics of some species are thin, dry, and papery, at least in young plants. However, in some families, such as Iridaceae, the tunic of a mature corm can be formidable protection. For example, some of the larger species of Watsonia accumulate thick, rot-resistant tunics over a period of years, producing a structure of tough, reticulated fibre. Other species, such as many in the genus Lapeirousia, have tunics of hard, woody layers.
Internally, a typical corm mostly consists of parenchyma cells, rich in starch, above a circular basal node from which roots grow.
Long-lived cormous plants vary in their long-term development. Some regularly replace their older corms with a stack of younger corms, increased more or less seasonally. By splitting such a stack before the older corm generations wither too badly, the horticulturist can exploit the individual corms for propagation. Other species seldom do anything of that kind; their corms simply grow larger in most seasons. Yet others split when multiple buds or stolons on a large corm sprout independently, forming a tussock.
Corms can be dug up and used to propagate or redistribute the plant (see, for example, taro). Plants with corms generally can be propagated by cutting the corms into sections and replanting. Suitably treated, each section with at least one bud usually can generate a new corm.
Comparison to bulbs
Corms are sometimes confused with true bulbs; they are often similar in appearance to bulbs externally, and thus erroneously called bulbs. Corms are stems that are internally structured with solid tissues, which distinguishes them from bulbs, which are mostly made up of layered fleshy scales that are modified leaves. As a result, a corm cut in half appears solid inside, but a true bulb cut in half reveals that it is made up of layers. Corms are structurally plant stems, with nodes and internodes with buds and produce adventitious roots. On the top of the corm, one or a few buds grow into shoots that produce normal leaves and flowers.
Cormels
Corms can form many small cormlets called cormels, from the basal areas of the new growing corms, especially when the main growing point is damaged. These propagate corm-forming plants. A number of species replace corms every year by growing a new corm. This process starts after the shoot develops fully expanded leaves. The new corm forms at the shoot base just above the old corm. As the new corm grows, short stolons appear that end with the newly growing small cormels. As the plants grow and flower, they use up the old corm, which shrivels away. The new corm that replaces the old corm grows in size, especially after flowering ends.
The old corm produces the greatest number of cormels when close to the soil surface. Small cormels normally take one or two more years of growth before they are large enough to flower.
Cormels do have a reproductive function, but in the wild they also are important as a survival strategy. In most places where geophytes are common, so are animals that feed on them, whether from above like pigs, or from below like bulb weevils, mole rats, or pocket gophers. Such animals eat through protective tunics, but they generally miss several cormels that remain in the soil to replace the consumed plant. Plants such as Homeria, Watsonia and Gladiolus, genera that are vulnerable to such animals, are probably the ones that produce cormels in the greatest numbers and most widely distributed over the plant. Homeria species produce bunches of cormels on underground stem nodes, and Watsonia meriana for example actually produces cormels profusely from under bracts on the inflorescences.
Roots
Corms produce two main types of roots. Those growing from the bottom of the corm are normal fibrous roots, formed as the shoots grow, produced from the basal area at the bottom of the corm. The second type are thicker layered roots, called contractile roots, that form as the new corms are growing. They pull the corm deeper into the soil. In some species contractile roots are produced in response to fluctuating soil temperatures and light levels. In such species, once the corm is deep enough within the soil, where the temperature is more uniform and there is no light, the contractile roots no longer grow and the corm is no longer pulled deeper into the soil. In some other species, contractile roots seem to be a defence against digging animals and can bury the corm surprisingly deeply over the years. Wurmbea marginata is one example of a small plant that can be challenging to dig unharmed out of a hard, clayey hillside.
Cormous plants
Cultivated plants that form corms include:
Alismataceae
Sagittaria spp. (arrowhead or wapatoo)
Araceae
Alocasia macrorrhizos (giant taro)
Arisaema
Colocasia esculenta (taro)
Cyrtosperma merkusii (giant swamp taro)
Xanthosoma spp. (malanga, cocoyam, tannia, and other names)
Asparagaceae
Bessera
Brodiaea
Dichelostemma
Milla
Tecophilaea
Asteraceae
Liatris
Colchicaceae
Colchicum
Cyperaceae
Eleocharis dulcis (Chinese water chestnut)
Iridaceae
Crocosmia (Montbretia)
Crocuses, including the saffron crocus (Crocus spp.)
Dierama
Freesia
Gladiolus
Some species of irises (Iris spp.)
Romulea
Musaceae
Bananas (Musa spp.)
Ensete spp. (enset)
See also
Rhizome
Root vegetable
Tuber
References
Plant reproduction
Plant stem morphology
Garden plants | Corm | [
"Biology"
] | 1,431 | [
"Behavior",
"Plant reproduction",
"Plants",
"Reproduction"
] |
334,626 | https://en.wikipedia.org/wiki/Telephone%20booth | A telephone booth, telephone kiosk, telephone call box, telephone box or public call box is a tiny structure furnished with a payphone and designed for a telephone user's convenience; typically the user steps into the booth and closes the booth door while using the payphone inside.
In the United States and Canada, "telephone booth" (or "phone booth") is the commonly used term for the structure, while in the Commonwealth of Nations (particularly the United Kingdom and Australia), it is a "phone box".
Such a booth usually has lighting, a door to provide privacy, and windows to let others know if the booth is in use. The booth may be furnished with a printed directory of local telephone numbers, and in a formal setting, such as a hotel, may be furnished with paper and pen and even a seat. An outdoor booth may be made of metal and plastic to withstand the elements and heavy use, while an indoor booth (known as a silence cabinet) may have more elaborate design and furnishings. Most outdoor booths feature the name and logo of the telephone service provider.
History
The world's first telephone box, called "Fernsprechkiosk", was opened on 12 January 1881 at Potsdamer Platz, Berlin. To use it, one had to buy paper tickets called Telefonbillet, which allowed for a few minutes of talking time. In 1899, it was replaced by a coin-operated telephone.
William Gray is credited with inventing the coin payphone in the United States in 1889, and George A. Long was its developer.
In the UK, the creation of a national network of telephone boxes began in 1920 with the K1 model, which was made of concrete. The city of Kingston upon Hull is noted for having its own independent phone service, Kingston Communications, with cream-coloured phone boxes, as opposed to the classic royal red in the rest of Britain. The Post Office was forced into allowing a less strident grey scheme with red glazing bars for areas of natural and architectural beauty. Ironically, some of these areas that have preserved their telephone boxes have now painted them red.
In the 1940s, outdoor booths started to appear at military bases during WWII. In general, however, booths were most commonly placed indoors, as they were mostly made of wood and did not handle exposure to the elements well. This changed in 1954, when the Airlight outdoor telephone booth was introduced. Made of glass and aluminium, it was designed especially for the outdoors and originally intended to serve motorists traveling on the highway.
Design
Starting in the 1970s, pay telephones were less commonly placed in booths in the United States. In many cities where they were once common, telephone booths have now been almost completely replaced by non-enclosed pay phones. In the United States, this replacement was caused, at least in part, by an attempt to make the pay telephones more accessible to disabled people. However, in the United Kingdom, telephones remained in booths more often than the non-enclosed setup. Although still fairly common, the number of phone boxes has declined sharply in Britain since the late 1990s due to the rise in use of mobile phones.
Many locations that provide pay-phones mount the phones on kiosks rather than in booths—this relative lack of privacy and comfort discourages lengthy calls in high-demand areas such as airports.
Special equipment installed in some telephone booths allows a caller to use a computer, a portable fax machine, or a telecommunications device for the deaf.
The Jabbrrbox, an enclosed structure for installation in open plan offices, was inspired by the telephone booth.
Cultural impact
The ubiquity of the phone booth led to its depiction in fiction. In comic books published by DC Comics, the telephone booth is occasionally the place where reporter Clark Kent discards his street clothing and transforms into the costumed superhero Superman. Some films and television series have reused or parodied this plot device. The 1965–1970 television series Get Smart used a phone booth, among other devices, as a secure means of entering CONTROL headquarters. The 2002 film Phone Booth takes place almost entirely in a telephone booth; a 2023 retrospective article notes that "the obsolescence is to the film's advantage."
The 1986 comedy film Clockwise features John Cleese's character vandalising a phone in a booth in frustration after it malfunctions. The scene played on the public perception in Britain at the time that telephone booths were frequently out of order.
Privacy
Phone booths have been subject to wireless surveillance by law enforcement. For example, the landmark U.S. Supreme Court case of Katz v. United States involved the constitutional question of whether the Federal Bureau of Investigation (FBI) could, without a warrant, attach a listening device to the outside of a public phone booth.
Recent developments
Wireless services
The increasing use of mobile phones has led to a decreased demand for payphones, while the increasing use of laptops is leading to a new kind of service: in 2003, service provider Verizon announced that it would begin offering wireless computer connectivity in the vicinity of its phone booths in Manhattan. In 2006, the Verizon Wi-Fi telephone booth service was discontinued in favor of the more expensive Verizon Wireless' EVDO system.
Wireless access is motivating telephone companies to place wireless stations at locations that have traditionally hosted telephone booths, but stations are also appearing in new kinds of locations such as libraries, cafés, and trains. Phone booths have been slowly disappearing with the growth in use of mobile phones.
Vandalism
A rise in vandalism has prompted several companies to manufacture simpler booths with extremely durable pay phones.
Withdrawal of services
Pay phones may still be used by mobile/cellular phone users if their phone becomes unusable or is stolen, or for other emergency uses. These uses may make the complete disappearance of pay phones in the near future less likely.
Australia
Under the Universal Service Obligation, the Government of Australia legally requires Telstra to ensure standard phone services and payphones are "reasonably accessible to all people in Australia". Some communities, particularly in remote regional areas, rely on payphones, as well as people who do not have access to a mobile phone.
At their peak in the early 1990s, there were more than 80,000 public phone boxes across the country. By June 30, 2016, according to the Australian Communications & Media Authority there were about 24,000 payphones across Australia. On August 3, 2021, with 15,000 public phones remaining across Australia, Telstra announced that all calls to fixed line and mobile phones within Australia from public phones would become free of charge, and that it had no plans to further eliminate public phones.
Belgium
In Belgium, majority state-owned telco Belgacom took the last remaining phone booths out of service in June 2015.
Czechia
In June 2021 the last phone booth in Czechia was closed and dismantled.
Denmark
In December 2017 the last three public telephone booths in Denmark had their telephones removed. They were situated in the town of Aarhus.
Finland
By 2007, the Finnet companies and TeliaSonera Finland had discontinued their public telephones, and the last remaining operator, Elisa Oyj, did so early the same year.
France
According to Orange CEO Stéphane Richard, there were only 26 public phone booths still operating in France as of 2021. The "Macron law" of 2015 ended Orange's mandatory maintenance of a public phone booth network, the decline in use having been caused by the cell phone era. Booths are, by law, maintained in rural areas where there is no cell phone service; consequently, they are removed once the area is properly covered by at least one mobile phone operator.
Ireland
Eir, the Universal Service Obligation carrier with regard to payphones, has been systematically removing payphones whose usage falls below the minimum requirement for retention, a rolling average of one minute of usage a day over six months.
As of June 2019, 456 locations retained payphones (with none in the entirety of County Leitrim); this was down from 1,320 in March 2014.
Italy
In May 2023 AGCOM established that TIM no longer has the obligation to guarantee the availability of telephone booths, with the exception of "places of social importance", such as hospitals (with at least ten beds), prisons, and barracks with at least fifty occupants. TIM will also be able to decommission booths in mountain refuges, while ensuring access to the mobile telephone network. AGCOM declared that 99.2% of public telephones are already covered by a mobile network with at least 2G technology (May 2023). In September 2023 over 90,000 booths which do not fall into the above-mentioned exceptions began being removed.
Jordan
In 2004, Jordan became the first country in the world not to have telephone booths generally available. The mobile/cellular phone penetration in that country has become so high that telephone booths had been rarely used for years. The two private payphone service companies, namely ALO and JPP, closed down.
Norway
The last functioning phone box in Norway was taken out of service in June 2016. However, 100 phone boxes have been preserved around the country and are protected under cultural heritage laws.
Sweden
The first telephone booth in Sweden was erected in 1890. In 1981 there were 44,000, but by 2013, only 1,200 remained, with the removal of the last one in 2015. A survey showed that in 2013, only 1% of the population in Sweden had used one during the previous year.
United Kingdom
The red telephone kiosk is recognised as a British icon and the BT Group still hold intellectual property rights in the designs of many of the telephone boxes, including registered trademark rights.
BT is steadily removing public telephone kiosks from the streets of the UK. It is permitted to remove a kiosk without consultation provided that there is another kiosk within walking distance. In other cases, it is required to comply with Ofcom rules in consultation with the local authority. Some decommissioned red telephone boxes have been converted for other uses with the permission of BT Group, such as housing small community libraries or automated external defibrillators.
United States
Beginning in the 1990s, many large cities began instituting restrictions on where pay phones could be placed, under the belief that they facilitated crime. In 1999, there were approximately 2 million phone booths in the United States. Only five percent of those remained in service by 2018. In 2008, AT&T began withdrawing pay phone support, citing profitability, and a few years later Verizon also left the pay phone market. In 2015, a phone booth in Prairie Grove, Arkansas was placed on the National Register of Historic Places. New phone booth installations do sometimes occur, including one at the Eaton Rapids city hall.
In 2018, about a fifth of America's 100,000 remaining pay phones were in New York, according to the FCC. Only four phone booths remain in New York City, all on Manhattan's Upper West Side; the rest have been converted into WiFi hotspots. Incoming calls are no longer available, and outgoing calls are now free. In February 2020, the city confirmed that despite a plan to remove dozens of pay phones, the iconic booths would continue to be maintained.
Advertising
Many telephone boxes in the United Kingdom are now used for advertising, bearing posters following the development of "StreetTalk" by JCDecaux. This is in addition to the ST6 public telephone, introduced in 2007, which is designed with a phone on one side and a JCDecaux-owned advertising space on the other side. The advertising pays for the cost of maintaining the phone.
In 2018, the UK Local Government Association drew attention to "Trojan" telephone boxes. These are telephone boxes whose main purpose is advertising. A loophole in planning law allows these to be erected without planning permission and the LGA is seeking to close this loophole.
See also
Callbox
Hotspot (Wi-Fi)
Interactive kiosk
KX telephone boxes
Mojave phone booth
Payphone
Police box
Red telephone box
Sir Giles Gilbert Scott, the English architect who designed the iconic red telephone box
Phonebooth stuffing
References
External links
PayPhoneBox Index of payphone numbers and photographs of payphones in unusual or famous places around the world.
Public phones
Street furniture
Telephone services
Vending machines
1881 introductions | Telephone booth | [
"Engineering"
] | 2,495 | [
"Vending machines",
"Automation"
] |
334,641 | https://en.wikipedia.org/wiki/Directory%20service | In computing, a directory service or name service maps the names of network resources to their respective network addresses. It is a shared information infrastructure for locating, managing, administering and organizing everyday items and network resources, which can include volumes, folders, files, printers, users, groups, devices, telephone numbers and other objects. A directory service is a critical component of a network operating system. A directory server or name server is a server which provides such a service. Each resource on the network is considered an object by the directory server. Information about a particular resource is stored as a collection of attributes associated with that resource or object.
A directory service defines a namespace for the network. The namespace is used to assign a name (unique identifier) to each of the objects. Directories typically have a set of rules determining how network resources are named and identified, which usually includes a requirement that the identifiers be unique and unambiguous. When using a directory service, a user does not have to remember the physical address of a network resource; providing a name locates the resource. Some directory services include access control provisions, limiting the availability of directory information to authorized users.
Comparison with relational databases
Several things distinguish a directory service from a relational database. Data can be made redundant if it aids performance (e.g. by repeating values through rows in a table instead of relating them to the contents of a different table through a key, a technique called denormalization; another technique is the use of replicas to increase actual throughput).
Directory schemas are object classes, attributes, name bindings and knowledge (namespaces) where an object class has:
Must - attributes that each instance must have
May - attributes which can be defined for an instance but can be omitted, with the absence similar to NULL in a relational database
Attributes are sometimes multi-valued, allowing multiple naming attributes at one level (such as machine type and serial number concatenation, or multiple phone numbers for "work phone"). Attributes and object classes are usually standardized throughout the industry; for example, X.500 attributes and classes are often formally registered with the IANA for their object ID. Therefore, directory applications try to reuse standard classes and attributes to maximize the benefit of existing directory-server software.
Object instances are slotted into namespaces; each object class inherits from its parent object class (and ultimately from the root of the hierarchy), adding attributes to the must-may list. Directory services are often central to the security design of an IT system and have a correspondingly-fine granularity of access control.
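As a toy illustration of must/may validation, inheritance, and multi-valued attributes (a pure-Python sketch; the class and attribute names are illustrative, loosely modelled on standard LDAP classes):

```python
from dataclasses import dataclass, field

@dataclass
class ObjectClass:
    """A directory object class: required (must) and optional (may)
    attributes. A child class inherits its parent's must-list."""
    name: str
    must: set = field(default_factory=set)
    may: set = field(default_factory=set)
    parent: "ObjectClass | None" = None

    def all_must(self) -> set:
        return self.must | (self.parent.all_must() if self.parent else set())

    def validate(self, entry: dict) -> bool:
        # An entry is valid if no required attribute is missing;
        # absent "may" attributes are simply omitted (like NULL).
        return not (self.all_must() - entry.keys())

top = ObjectClass("top", must={"objectClass"})
person = ObjectClass("person", must={"cn", "sn"},
                     may={"telephoneNumber"}, parent=top)

# Attributes are multi-valued: every value is a list.
entry = {"objectClass": ["person"], "cn": ["Ada Lovelace"], "sn": ["Lovelace"],
         "telephoneNumber": ["+44 20 7946 0000", "+44 20 7946 0001"]}
assert person.validate(entry)
```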
Replication and distribution
Replication and distribution have distinct meanings in the design and management of a directory service. Replication is used to indicate that the same directory namespace (the same objects) are copied to another directory server for redundancy and throughput reasons; the replicated namespace is governed by the same authority. Distribution is used to indicate that multiple directory servers in different namespaces are interconnected to form a distributed directory service; each namespace can be governed by a different authority.
Implementations
Directory services were part of an Open Systems Interconnection (OSI) initiative for common network standards and multi-vendor interoperability. During the 1980s, the ITU and ISO created the X.500 set of standards for directory services, initially to support the requirements of inter-carrier electronic messaging and network-name lookup. The Lightweight Directory Access Protocol (LDAP) is based on the X.500 directory-information services, using the TCP/IP stack and an X.500 Directory Access Protocol (DAP) string-encoding scheme on the Internet.
Systems developed before the X.500 include:
Domain Name System (DNS): The first directory service on the Internet, still in use
Hesiod: Based on DNS and used at MIT's Project Athena
Network Information Service (NIS): Originally called Yellow Pages (YP), Sun Microsystems' implementation of a directory service for Unix network environments. It played a role similar to Hesiod.
NetInfo: Developed by NeXT during the late 1980s for NEXTSTEP. After its acquisition by Apple, it was released as open source and was the directory service for Mac OS X before it was deprecated for the LDAP-based Open Directory. Support for NetInfo was removed with the release of 10.5 Leopard.
Banyan VINES: First scalable directory service
NT Domains: Developed by Microsoft to provide directory services for Windows machines before the release of the LDAP-based Active Directory in Windows 2000. Windows Vista continues to support NT Domains after relaxing its minimum authentication protocols.
LDAP implementations
LDAP/X.500-based implementations include:
389 Directory Server: Free Open Source server implementation by Red Hat, with commercial support by Red Hat and SUSE.
Active Directory: Microsoft's directory service for Windows, originating from the X.500 directory, created for use in Exchange Server, first shipped with Windows 2000 Server and supported by successive versions of Windows
Apache Directory Server: Directory service, written in Java, supporting LDAP, Kerberos 5 and the Change Password Protocol; LDAPv3 certified
Apple Open Directory: Apple's directory server for Mac OS X, available through Mac OS X Server
eDirectory: NetIQ's implementation of directory services supports multiple architectures, including Windows, NetWare, Linux and several flavours of Unix and is used for user administration and configuration and software management; previously known as Novell Directory Services.
Red Hat Directory Server: Red Hat released Red Hat Directory Server, acquired from AOL's Netscape Security Solutions unit, as a commercial product running on top of Red Hat Enterprise Linux, with a community-supported version distributed as the 389 Directory Server project. The upstream open source project is called FreeIPA.
Oracle Internet Directory: (OID) is Oracle Corporation's directory service, compatible with LDAP version 3.
Sun Java System Directory Server: Sun Microsystems' directory service
OpenDS: Open-source directory service in Java, backed by Sun Microsystems
Oracle Unified Directory: (OUD) is Oracle Corporation's next-generation unified directory solution. It integrates storage, synchronization, and proxy functionalities.
Windows NT Directory Services (NTDS), later renamed Active Directory, replaced the former NT Domain system.
Critical Path Directory Server
OpenLDAP: Derived from the original University of Michigan LDAP implementation (as are the Netscape, Red Hat, Fedora and Sun JSDS implementations), it runs on a wide range of platforms, including Unix and Unix derivatives, Linux, Windows, z/OS and a number of embedded/real-time systems.
Lotus Domino
Nexor Directory
OpenDJ: A Java-based LDAP server and directory client that runs in any operating environment, licensed under the CDDL; developed by ForgeRock until 2016 and now maintained by the OpenDJ community.
Open-source tools for building directory services include OpenLDAP, the Kerberos protocol and the Samba software, which can function as a Windows domain controller with Kerberos and LDAP back ends. Administration can be performed through GOsa or Samba SWAT.
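As an example of querying such a service, a running OpenLDAP server can be searched with the standard ldapsearch client; the base DN, filter, and attribute names below are placeholders:

ldapsearch -x -H ldap://localhost -b "dc=example,dc=com" "(uid=jsmith)" cn mail

Here -x selects simple authentication, -H gives the server URI, and -b sets the search base; the trailing attribute names limit which attributes are returned.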
Using name services
Unix systems
Name services on Unix systems are typically configured through nsswitch.conf. Information from name services can be retrieved with getent.
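For example, an /etc/nsswitch.conf containing lines such as the following (an illustrative excerpt, not a complete file) directs user and group lookups to local files first and then to an LDAP directory, and host lookups to local files and then DNS:

passwd: files ldap
group: files ldap
hosts: files dns

A lookup such as getent passwd alice or getent hosts example.org then consults the configured sources in order (the user name here is a placeholder).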
See also
Access control list
Directory Services Markup Language
Hierarchical database model
LDAP Data Interchange Format
Metadirectory
Service delivery platform
Virtual directory
References
Citations
Sources
Computer access control
Computer access control protocols
Domain Name System | Directory service | [
"Engineering"
] | 1,533 | [
"Cybersecurity engineering",
"Computer access control"
] |
334,779 | https://en.wikipedia.org/wiki/Gadu-Gadu | Gadu-Gadu (Polish for "chit-chat"; commonly known as GG or gg) is a Polish instant messaging client using a proprietary protocol. At one time, Gadu-Gadu was the most popular IM service in Poland, with over 15 million registered accounts and approximately 6.5 million users online daily. Gadu-Gadu's casual gaming portal had some 500,000 active users as of the end of March 2009. Users send up to 300 million messages per day.
Gadu-Gadu is financed by the display of advertisements. The developer is based in Koszalin, Poland and the company is wholly owned by another Polish company, Fintecom.
Features
Gadu-Gadu uses its own proprietary protocol. As with ICQ, users are identified by unique serial numbers. The protocol's features include status messages, file sharing, and VoIP. Users may format messages and embed images in them. Starting with client version 6.0, an experimental feature using a secure SSL connection was introduced, although it remained inactive until the beta release of version 10.0.
The official client provides over 150 emoticons, allows grouping contacts, sending SMS and integrates with other services run by the same company: a virtual Internet dial-up, a social networking site MojaGeneracja.pl (defunct since 5 November 2012) and an internet radio Open.fm.
On February 9, 2009, a significant new major version of the application, named Nowe Gadu-Gadu (which translates to "New Gadu-Gadu" in Polish), was released. The graphical user interface underwent a complete overhaul, being rebuilt from scratch using the Qt framework. This update brought along a range of notable enhancements, including the introduction of voice calling capabilities, a spell checker, an anti-spam filter, the ability to display YouTube videos directly within the chat window as embeds, customizable skins, the addition of tabs, and several other additions aimed at improving the user experience and security.
Gadu-Gadu allows its users to add personal information to a publicly searchable directory. Language options include English and Polish. There is also a browser version available.
Blip.pl
Blip.pl (or just Blip) was a Polish social networking internet service with microblogging capability, founded in May 2007 and purchased by Gadu-Gadu S.A. in June 2007.
About 329,000 people visited the site in June 2009. In October 2009 the number of posts on the network exceeded ten million. By 4 December 2010 the service had 80,000 users. Notable Polish celebrities and politicians, such as Lech Wałęsa and Grzegorz Napieralski, used the site. The service was used as a communication channel of various governmental services, for example ZUS.
Blip was closed on 2 September 2013, following a two-month advance notice. During this period, users could choose to migrate their accounts to Wykop, which hosts a similar microblogging service.
See also
Comparison of instant messaging clients
Comparison of instant messaging protocols
References
External links
Instant messaging clients
Symbian instant messaging clients
Internet in Poland
Software that uses Qt
Windows instant messaging clients
Polish brands
2000 software | Gadu-Gadu | [
"Technology"
] | 687 | [
"Instant messaging",
"Instant messaging clients"
] |
334,803 | https://en.wikipedia.org/wiki/Public%20sector | The public sector, also called the state sector, is the part of the economy composed of both public services and public enterprises. Public sectors include the public goods and governmental services such as the military, law enforcement, public infrastructure, public transit, public education, along with public health care and those working for the government itself, such as elected officials. The public sector might provide services that a non-payer cannot be excluded from (such as street lighting), services which benefit all of society rather than just the individual who uses the service. Public enterprises, or state-owned enterprises, are self-financing commercial enterprises that are under public ownership which provide various private goods and services for sale and usually operate on a commercial basis.
Organizations that are not part of the public sector are either part of the private sector or voluntary sector. The private sector is composed of the economic sectors that are intended to earn a profit for the owners of the enterprise. The voluntary, civic, or social sector concerns a diverse array of non-profit organizations emphasizing civil society. In the United Kingdom, the term "wider public sector" is often used, referring to public sector organizations outside central government.
Organization
The organization of the public sector can take several forms, including:
Direct administration funded through taxation; the delivering organization generally has no specific requirement to meet commercial success criteria, and production decisions are determined by government.
State-owned enterprises; which differ from direct administration in that they have greater management autonomy and operate according to commercial criteria, and production decisions are not generally taken by a government (although goals may be set for them by the government).
The public sector in many countries is organized at three levels: Federal or National, Regional (State or Provincial), and Local (Municipal or County).
Partial outsourcing (on the scale many businesses practice, e.g. for IT services) is considered a public sector model.
A borderline form is as follows:
Complete outsourcing or contracting out, with a privately owned corporation delivering the entire service on behalf of the government. This may be considered a mixture of private sector operations with public ownership of assets, although in some forms the private sector's control and/or risk is so great that the service may no longer be considered part of the public sector (Barlow et al., 2010). (See the United Kingdom's Private Finance Initiative.)
Public employee unions represent public sector workers. Since contract negotiations for these workers depend on the size of government budgets, this is the one segment of the labor movement that can contribute directly to the people with ultimate responsibility for its livelihood. While their giving pattern matches that of other unions, public sector unions also concentrate contributions on members of Congress from both parties who sit on committees that deal with federal budgets and agencies.
Infrastructure
Infrastructure includes areas that support both the public's members and the public sector itself. Streets and highways are used both by those who work for the public sector and also by the citizenry. The former, who are public employees, are also part of the citizenry.
Public roads, bridges, tunnels, water supply, sewers, electrical grids and telecommunications networks are among the public infrastructure.
Public sector staff
Rates of pay for public sector staff may be negotiated by employers and their staff or staff representatives such as trade unions. In some cases, for example in the United Kingdom, a pay review body is charged with making independent recommendations on rates of pay for groups of public sector staff.
By country
France
As of 2017, France had 5.6 million civil servants, amounting to 20% of all jobs in France. They are subdivided into three types: the State civil service (fonction publique d'État, FPE) includes teachers and soldiers and employs 44% of the workforce; the local civil service (fonction publique territoriale, FPT) is made up of employees of town halls and regional councils, 25% of the workforce; and the hospital civil service (fonction publique hospitalière, FPH) consists of doctors and nurses, 21% of the workforce.
Criticism
Right-libertarian and Austrian School economists have criticized the idea of public sector provision of goods and services as inherently inefficient. In 1961, Murray Rothbard wrote: "Any reduction of the public sector, any shift of activities from the public to the private sphere, is a net moral and economic gain."
American libertarians and anarcho-capitalists have also argued that the system by which the public sector is funded, namely taxation, is itself coercive and unjust. However, some small-government proponents have pushed back on this point of view, citing the ultimate necessity of a public sector for provision of certain services, such as national defense, public works and utilities, and pollution controls.
See also
Civil service
Government agency
List of countries by government spending as percentage of GDP
List of countries by public sector
Nationalization
Privatization
Private sector
Public ownership
Public–private partnership
Public sector business cases for projects
Special-purpose district
State-owned enterprise
References
Citations
Sources
Barlow, J. Roehrich, J.K. and Wright, S. (2010). "De facto privatisation or a renewed role for the EU? Paying for Europe's healthcare infrastructure in a recession." Journal of the Royal Society of Medicine. 103:51–55.
Lloyd G. Nigro, Decision Making in the Public Sector (1984), Marcel Dekker Inc.
David G. Carnevale, Organizational Development in the Public Sector (2002), Westview Pr.
Jan-Erik Lane, The Public Sector: Concepts, Models and Approaches (1995), Sage Pubns.
PFM blog: A Primer on Public-Private Partnerships
What is the Public Sector? Definition & Examples (2016). Retrieved June 10, 2017.
External links
Public economics
Economic sectors
Libertarian theory | Public sector | [
"Technology"
] | 1,187 | [
"Economic sectors",
"Components"
] |
334,816 | https://en.wikipedia.org/wiki/Route%20of%20administration | In pharmacology and toxicology, a route of administration is the way by which a drug, fluid, poison, or other substance is taken into the body.
Routes of administration are generally classified by the location at which the substance is applied. Common examples include oral and intravenous administration. Routes can also be classified based on where the target of action is. Action may be topical (local), enteral (system-wide effect, but delivered through the gastrointestinal tract), or parenteral (systemic action, but is delivered by routes other than the GI tract). Route of administration and dosage form are aspects of drug delivery.
Classification
Routes of administration are usually classified by application location (or exposition).
The route or course the active substance takes from application location to the location where it has its target effect is usually rather a matter of pharmacokinetics (concerning the processes of uptake, distribution, and elimination of drugs). Exceptions include the transdermal or transmucosal routes, which are still commonly referred to as routes of administration.
The location of the target effect of active substances is usually rather a matter of pharmacodynamics (concerning, for example, the physiological effects of drugs). An exception is topical administration, which generally means that both the application location and the effect thereof is local.
Topical administration is sometimes defined as both a local application location and local pharmacodynamic effect, and sometimes merely as a local application location regardless of location of the effects.
By application location
Enteral/gastrointestinal route
Administration through the gastrointestinal tract is sometimes termed enteral or enteric administration (literally meaning 'through the intestines'). Enteral/enteric administration usually includes oral (through the mouth) and rectal (into the rectum) administration, in the sense that these are taken up by the intestines. However, uptake of drugs administered orally may also occur in the stomach, and as such gastrointestinal (along the gastrointestinal tract) may be a more fitting term for this route of administration. Furthermore, some application locations often classified as enteral, such as sublingual (under the tongue) and sublabial or buccal (between the cheek and gums/gingiva), are taken up in the proximal part of the gastrointestinal tract without reaching the intestines. Strictly enteral administration (directly into the intestines) can be used for systemic administration, as well as local (sometimes termed topical), such as in a contrast enema, whereby contrast media are infused into the intestines for imaging. However, for the purposes of classification based on location of effects, the term enteral is reserved for substances with systemic effects.
Many drugs as tablets, capsules, or drops are taken orally. Administration methods directly into the stomach include those by gastric feeding tube or gastrostomy. Substances may also be placed into the small intestines, as with a duodenal feeding tube and enteral nutrition. Enteric coated tablets are designed to dissolve in the intestine, not the stomach, because the drug present in the tablet causes irritation in the stomach.
The rectal route is an effective route of administration for many medications, especially those used at the end of life. The walls of the rectum absorb many medications quickly and effectively. Medications delivered to the distal one-third of the rectum at least partially avoid the "first pass effect" through the liver, which allows for greater bio-availability of many medications than that of the oral route. Rectal mucosa is highly vascularized tissue that allows for rapid and effective absorption of medications. A suppository is a solid dosage form that fits for rectal administration. In hospice care, a specialized rectal catheter, designed to provide comfortable and discreet administration of ongoing medications provides a practical way to deliver and retain liquid formulations in the distal rectum, giving health practitioners a way to leverage the established benefits of rectal administration. The Murphy drip is an example of rectal infusion.
Parenteral route
The parenteral route is any route that is not enteral (par- + enteral).
Parenteral administration can be performed by injection, that is, using a needle (usually a hypodermic needle) and a syringe, or by the insertion of an indwelling catheter.
Locations of application of parenteral administration include:
Central nervous system:
Epidural (synonym: peridural) (injection or infusion into the epidural space), e.g. epidural anesthesia.
Intracerebral (into the cerebrum) administration by direct injection into the brain. Used in experimental research of chemicals and as a treatment for malignancies of the brain. The intracerebral route can also interrupt the blood brain barrier from holding up against subsequent routes.
Intracerebroventricular (into the cerebral ventricles) administration into the ventricular system of the brain. One use is as a last line of opioid treatment for terminal cancer patients with intractable cancer pain.
Epicutaneous (application onto the skin). It can be used both for local effect, as in allergy testing and topical local anesthesia, and for systemic effects when the active substance diffuses through the skin in a transdermal route.
Sublingual and buccal medication administration is a way of giving someone medicine orally (by mouth). Sublingual administration is when medication is placed under the tongue to be absorbed by the body. The word "sublingual" means "under the tongue." Buccal administration involves placement of the drug between the gums and the cheek. These medications can come in the form of tablets, films, or sprays. Many drugs are designed for sublingual administration, including cardiovascular drugs, steroids, barbiturates, opioid analgesics with poor gastrointestinal bioavailability, enzymes and, increasingly, vitamins and minerals.
Extra-amniotic administration, between the endometrium and fetal membranes.
Nasal administration (through the nose) can be used for topically acting substances, as well as for insufflation of e.g. decongestant nasal sprays to be taken up along the respiratory tract. Such substances are also called inhalational, e.g. inhalational anesthetics.
Intra-arterial (into an artery), e.g. vasodilator drugs in the treatment of vasospasm and thrombolytic drugs for treatment of embolism.
Intra-articular, into a joint space. It is generally performed by joint injection. It is mainly used for symptomatic relief in osteoarthritis.
Intracardiac (into the heart), e.g. adrenaline during cardiopulmonary resuscitation (no longer commonly performed).
Intracavernous injection, an injection into the base of the penis.
Intradermal, (into the skin itself) is used for skin testing some allergens, and also for mantoux test for tuberculosis.
Intralesional (into a skin lesion), is used for local skin lesions, e.g. acne medication.
Intramuscular (into a muscle), e.g. many vaccines, antibiotics, and long-term psychoactive agents. Recreationally the colloquial term 'muscling' is used.
Intraocular, into the eye, e.g., some medications for glaucoma or eye neoplasms.
Intraosseous infusion (into the bone marrow) is, in effect, an indirect intravenous access because the bone marrow drains directly into the venous system. This route is occasionally used for drugs and fluids in emergency medicine and pediatrics when intravenous access is difficult.
Intraperitoneal, (infusion or injection into the peritoneum) e.g. peritoneal dialysis.
Intrathecal (into the spinal canal) is most commonly used for spinal anesthesia and chemotherapy.
Intrauterine.
Intravaginal administration, in the vagina.
Intravenous (into a vein), e.g. many drugs, total parenteral nutrition.
Intravesical infusion is into the urinary bladder.
Intravitreal, through the eye.
Subcutaneous (under the skin). This generally takes the form of subcutaneous injection, e.g. with insulin. Skin popping is a slang term that includes subcutaneous injection, and is usually used in association with recreational drugs. In addition to injection, it is also possible to slowly infuse fluids subcutaneously in the form of hypodermoclysis.
Transdermal (diffusion through the intact skin for systemic rather than topical distribution), e.g. transdermal patches such as fentanyl in pain therapy, nicotine patches for treatment of addiction and nitroglycerine for treatment of angina pectoris.
Perivascular administration (perivascular medical devices and perivascular drug delivery systems are conceived for local application around a blood vessel during open vascular surgery).
Transmucosal (diffusion through a mucous membrane), e.g. insufflation (snorting) of cocaine, sublingual, i.e. under the tongue, sublabial, i.e. between the lips and gingiva, and oral spray or vaginal suppository for nitroglycerine.
Topical route
The definition of the topical route of administration sometimes states that both the application location and the pharmacodynamic effect thereof is local.
In other cases, topical is defined as applied to a localized area of the body or to the surface of a body part regardless of the location of the effect. By this definition, topical administration also includes transdermal application, where the substance is administered onto the skin but is absorbed into the body to attain systemic distribution.
If defined strictly as having local effect, the topical route of administration can also include enteral administration of medications that are poorly absorbable by the gastrointestinal tract. One such medication is the antibiotic vancomycin, which cannot be absorbed in the gastrointestinal tract and is used orally only as a treatment for Clostridioides difficile colitis.
Choice of routes
The choice of route of drug administration is governed by various factors:
Physical and chemical properties of the drug. The physical state may be solid, liquid, or gas; the chemical properties include solubility, stability, pH, irritancy, etc.
Site of desired action: the action may be localised and accessible or generalised and not accessible.
Rate and extent of absorption of the drug from different routes.
Effect of digestive juices and the first pass metabolism of drugs.
Condition of the patient.
In acute situations, in emergency medicine and intensive care medicine, drugs are most often given intravenously. This is the most reliable route, as in acutely ill patients the absorption of substances from the tissues and from the digestive tract can often be unpredictable due to altered blood flow or bowel motility.
Convenience
Enteral routes are generally the most convenient for the patient, as no punctures or sterile procedures are necessary. Enteral medications are therefore often preferred in the treatment of chronic disease. However, some drugs can not be used enterally because their absorption in the digestive tract is low or unpredictable. Transdermal administration is a comfortable alternative; there are, however, only a few drug preparations that are suitable for transdermal administration.
Desired target effect
Identical drugs can produce different results depending on the route of administration. For example, some drugs are not significantly absorbed into the bloodstream from the gastrointestinal tract and their action after enteral administration is therefore different from that after parenteral administration. This can be illustrated by the action of naloxone (Narcan), an antagonist of opiates such as morphine. Naloxone counteracts opiate action in the central nervous system when given intravenously and is therefore used in the treatment of opiate overdose. The same drug, when swallowed, acts exclusively on the bowels; it is here used to treat constipation under opiate pain therapy and does not affect the pain-reducing effect of the opiate.
Oral
The oral route is generally the most convenient and costs the least. However, some drugs can cause gastrointestinal tract irritation. For drugs that come in delayed release or time-release formulations, breaking the tablets or capsules can lead to more rapid delivery of the drug than intended. The oral route is limited to formulations containing small molecules, while biopharmaceuticals (usually proteins) would be digested in the stomach and thereby become ineffective. Biopharmaceuticals have to be given by injection or infusion. However, recent research has found various ways to improve the oral bioavailability of these drugs; in particular, permeation enhancers, ionic liquids, lipid-based nanocarriers, enzyme inhibitors and microneedles have shown potential.
Oral administration is often denoted "PO" from "per os", the Latin for "by mouth".
The bioavailability of oral administration is affected by the amount of drug that is absorbed across the intestinal epithelium and first-pass metabolism.
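A common way of expressing this relationship (a standard pharmacokinetic decomposition, with illustrative numbers rather than values for any particular drug) is F = f_abs × (1 − E_gut) × (1 − E_hepatic), where f_abs is the fraction absorbed across the intestinal epithelium and E_gut and E_hepatic are the gut-wall and hepatic first-pass extraction ratios. For instance, a drug that is 80% absorbed and undergoes 10% gut-wall and 30% hepatic extraction has an oral bioavailability of about 0.8 × 0.9 × 0.7 ≈ 0.50, or 50%.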
Oral mucosal
The oral mucosa is the mucous membrane lining the inside of the mouth.
Buccal
Buccally administered medication is achieved by placing the drug between gums and the inner lining of the cheek. In comparison with sublingual tissue, buccal tissue is less permeable resulting in slower absorption.
Sublabial
Sublingual
Sublingual administration is fulfilled by placing the drug between the tongue and the lower surface of the mouth. The sublingual mucosa is highly permeable and thereby provides access to the underlying expansive network composed of capillaries, leading to rapid drug absorption.
Intranasal
Drug administration via the nasal cavity yields rapid drug absorption and therapeutic effects. Drug absorbed through the nasal passages enters the capillaries and then the systemic circulation without first passing through the gut, and this route also allows transport of drugs into the central nervous system via the olfactory and trigeminal nerve pathways.
Intranasal absorption is limited by low drug lipophilicity, enzymatic degradation within the nasal cavity, large molecular size, and rapid mucociliary clearance from the nasal passages, which explains the generally low systemic exposure of drugs administered intranasally.
Local
By delivering drugs almost directly to the site of action, the risk of systemic side effects is reduced.
Skin absorption (dermal absorption), for example, is to directly deliver drug to the skin and, hopefully, to the systemic circulation. However, skin irritation may result, and for some forms such as creams or lotions, the dosage is difficult to control. Upon contact with the skin, the drug penetrates into the dead stratum corneum and can afterwards reach the viable epidermis, the dermis, and the blood vessels.
Parenteral
The term parenteral is from Greek para- 'beside' + enteron 'intestine' + -al; the name reflects the fact that it encompasses routes of administration that are not intestinal. However, in common English the term has mostly been used to describe the four most well-known routes of injection.
The term injection encompasses intravenous (IV), intramuscular (IM), subcutaneous (SC) and intradermal (ID) administration.
Parenteral administration generally acts more rapidly than topical or enteral administration, with onset of action often occurring in 15–30 seconds for IV, 10–20 minutes for IM and 15–30 minutes for SC. They also have essentially 100% bioavailability and can be used for drugs that are poorly absorbed or ineffective when they are given orally. Some medications, such as certain antipsychotics, can be administered as long-acting intramuscular injections. Ongoing IV infusions can be used to deliver continuous medication or fluids.
Disadvantages of injections include potential pain or discomfort for the patient and the requirement of trained staff using aseptic techniques for administration. However, in some cases, patients are taught to self-inject, such as SC injection of insulin in patients with insulin-dependent diabetes mellitus. As the drug is delivered to the site of action extremely rapidly with IV injection, there is a risk of overdose if the dose has been calculated incorrectly, and there is an increased risk of side effects if the drug is administered too rapidly.
Respiratory tract
Mouth inhalation
Inhaled medications can be absorbed quickly and act both locally and systemically. Proper technique with inhaler devices is necessary to achieve the correct dose. Some medications can have an unpleasant taste or irritate the mouth.
In general, only 20–50% of the pulmonary-delivered dose rendered in powdery particles will be deposited in the lung upon mouth inhalation; the remaining 50–70% of undeposited aerosolized particles are cleared from the lung upon exhalation. Where a particle deposits depends chiefly on its diameter, as described in the list below and summarized in the sketch that follows it.
An inhaled powdery particle that is >8 μm is structurally predisposed to depositing in the central and conducting airways (conducting zone) by inertial impaction.
An inhaled powdery particle that is between 3 and 8 μm in diameter tends to largely deposit in the transitional zones of the lung by sedimentation.
An inhaled powdery particle that is <3 μm in diameter is structurally predisposed to depositing primarily in the respiratory regions of the peripheral lung via diffusion.
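A minimal sketch of these size-based rules (a hypothetical helper; the thresholds are those stated above, and assigning the boundary values of exactly 3 μm and 8 μm to the middle range is an assumption):

def deposition_region(diameter_um: float) -> str:
    """Approximate lung deposition region for an inhaled particle."""
    if diameter_um > 8:
        # Large particles impact inertially in the central and conducting airways.
        return "central and conducting airways (inertial impaction)"
    if diameter_um >= 3:
        # Mid-size particles settle by sedimentation in the transitional zones.
        return "transitional zones (sedimentation)"
    # Small particles reach the peripheral lung by diffusion.
    return "respiratory regions of the peripheral lung (diffusion)"

For example, deposition_region(5.0) returns "transitional zones (sedimentation)".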
Particles that deposit in the upper and central airways are generally absorbed systemically to a great extent because they are only partially removed by mucociliary clearance, which results in orally mediated absorption when the transported mucus is swallowed, and first-pass metabolism or incomplete absorption through loss at the fecal route can sometimes reduce the bioavailability. This should in no way suggest to clinicians or researchers that inhaled particles are not a greater threat than swallowed particles; it merely signifies that a combination of both methods may occur with some particles, no matter the size or lipo/hydrophilicity of the different particle surfaces.
Nasal inhalation
Inhalation by nose of a substance is almost identical to oral inhalation, except that some of the drug is absorbed intranasally instead of in the oral cavity before entering the airways. Both methods can result in varying levels of the substance being deposited in their respective initial cavities, and the level of mucus in either of these cavities will reflect the amount of substance swallowed. The rate of inhalation will usually determine the amount of the substance which enters the lungs. Faster inhalation results in more rapid absorption because more of the substance reaches the lungs. Substances in a form that resists absorption in the lung will likely resist absorption in the nasal passage and the oral cavity, and are often even more resistant to absorption after they fail absorption in the former cavities and are swallowed.
Research
Neural drug delivery is the next step beyond the basic addition of growth factors to nerve guidance conduits. Drug delivery systems allow the rate of growth factor release to be regulated over time, which is critical for creating an environment more closely representative of in vivo development environments.
See also
ADME
Catheter
Dosage form
Drug injection
Ear instillation
Hypodermic needle
Intravenous marijuana syndrome
List of medical inhalants
Nanomedicine
Absorption (pharmacology)
References
External links
The 10th US-Japan Symposium on Drug Delivery Systems
FDA Center for Drug Evaluation and Research Data Standards Manual: Route of Administration.
FDA Center for Drug Evaluation and Research Data Standards Manual: Dosage Form.
A.S.P.E.N. American Society for Parenteral and Enteral Nutrition
Drugs
Pharmacokinetics | Route of administration | [
"Chemistry"
] | 4,148 | [
"Pharmacology",
"Pharmacokinetics",
"Products of chemical industry",
"Routes of administration",
"Chemicals in medicine",
"Drugs"
] |
334,820 | https://en.wikipedia.org/wiki/Subcutaneous%20administration | Subcutaneous administration is the insertion of medications beneath the skin either by injection or infusion.
A subcutaneous injection is administered as a bolus into the subcutis, the layer of skin directly below the dermis and epidermis, collectively referred to as the cutis. The instruments are usually a hypodermic needle and a syringe. Subcutaneous injections are highly effective in administering medications such as insulin, morphine, diacetylmorphine and goserelin. Subcutaneous administration may be abbreviated as SC, SQ, subcu, sub-Q, SubQ, or subcut. Subcut is the preferred abbreviation to reduce the risk of misunderstanding and potential errors.
Subcutaneous tissue has few blood vessels and so drugs injected into it are intended for slow, sustained rates of absorption, often with some amount of depot effect. Compared with other routes of administration, it is slower than intramuscular injections but still faster than intradermal injections. Subcutaneous infusion (as opposed to subcutaneous injection) is similar but involves a continuous drip from a bag and line, as opposed to injection with a syringe.
Medical uses
A subcutaneous injection is administered into the fatty tissue of the subcutaneous tissue, located below the dermis and epidermis. They are commonly used to administer medications, especially those which cannot be administered by mouth as they would not be absorbed from the gastrointestinal tract. A subcutaneous injection is absorbed slower than a substance injected intravenously or into a muscle, but faster than a medication administered by mouth.
Medications
Medications commonly administered via subcutaneous injection or infusion include insulin, live vaccines, monoclonal antibodies, and heparin. These medications cannot be administered orally as the molecules are too large to be absorbed in the intestines. Subcutaneous injections can also be used when the increased bioavailability and more rapid effects over oral administration are preferred. They are also the easiest form of parenteral administration of medication to perform by lay people, and are associated with less adverse effects such as pain or infection than other forms of injection.
Insulin
Perhaps the most common medication administered subcutaneously is insulin. While attempts have been made since the 1920s to administer insulin orally, the large size of the molecule has made it difficult to create a formulation with absorption and predictability that comes close to subcutaneous injections of insulin. People with type 1 diabetes almost all require insulin as part of their treatment regimens, and a smaller proportion of people with type 2 diabetes do as well — with tens of millions of prescriptions per year in the United States alone.
Insulin historically was injected from a vial using a syringe and needle, but may also be administered subcutaneously using devices such as injector pens or insulin pumps. An insulin pump consists of a catheter which is inserted into the subcutaneous tissue, and then secured in place to allow insulin to be administered multiple times through the same injection site.
Recreational drug use
Subcutaneous injection may also be used by people to (self-) administer recreational drugs. This can be referred to as skin popping. In some cases, the administration of illicit drugs in this way is associated with unsafe practices leading to infections and other adverse effects. In rare cases, this results in serious side effects such as AA amyloidosis. Recreational drugs reported to be administered subcutaneously have included cocaine, mephedrone, and amphetamine derivatives such as PMMA.
Contraindications
Contraindications to subcutaneous injections primarily depend on the specific medication being administered. Doses which would require more than 2 mL to be injected at once are not administered subcutaneously. Medications which may cause necrosis or otherwise be damaging or irritating to tissues should also not be administered subcutaneously. An injection should not be given at a specific site if there is inflammation or skin damage in the area.
Risks and complications
With normal doses of medicine (less than 2 mL in volume), complications or adverse effects are very rare. The most common adverse reactions after subcutaneous injections are administered are termed "injection site reactions". This term encompasses any combination of redness, swelling, itching, bruising, or other irritation that does not spread beyond the immediate vicinity of the injection. Injection site reactions may be minimized if repeated injections are necessary by moving the injection site at least one inch from previous injections, or using a different injection location altogether. There may also be specific complications associated with the specific medication being administered.
Medication-specific
Due to the frequency of injections required for the administration of insulin products via subcutaneous injection, insulin is associated with the development of lipohypertrophy and lipoatrophy. This can lead to slower or incomplete absorption from the injection site. Rotating the injection site is the primary method of preventing changes in tissue structure from insulin administration. Heparin-based anticoagulants injected subcutaneously may cause hematoma and bruising around the injection site due to their anticoagulant effect. This includes heparin and low molecular weight heparin products such as enoxaparin. There is some low certainty evidence that administering the injection more slowly may decrease the pain from heparin injections, but not the risk of or extent of bruising. Subcutaneous heparin-based anticoagulation may also lead to necrosis of the surrounding skin or lesions, most commonly when injected in the abdomen.
Many medications have the potential to cause local lesions or swelling due to the irritating effect the medications have on the skin and subcutaneous tissues. This includes medications such as apomorphine and hyaluronic acid injected as a filler, which may cause the area to appear bruised. Hyaluronic acid "bruising" may be treated using injections of hyaluronidase enzyme around the location.
Other common medication-specific side effects include pain, burning or stinging, warmth, rash, flushing, or multiple of these reactions at the injection site, collectively termed "injection site reactions". This is seen with the subcutaneous injection of triptans for migraine headache, medroxyprogesterone acetate for contraception, as well as many monoclonal antibodies. In most cases, injection site reactions are self-limiting and resolve on their own after a short time without treatment, and do not require the medication to be discontinued.
The administration of vaccines subcutaneously is also associated with injection site reactions. This includes the BCG vaccine which is associated with a specific scar appearance which can be used as evidence of prior vaccination. Other subcutaneous vaccines, many of which are live vaccines including the MMR vaccine and the varicella vaccine, which may cause fever and rash, as well as a feeling of general malaise for a day or two following the vaccination.
Technique
Subcutaneous injections are performed by cleaning the area to be injected followed by an injection, usually at a 45-degree angle to the skin when using a syringe and needle, or at a 90-degree angle (perpendicular) if using an injector pen. The appropriate injection angle is based on the length of needle used, and the depth of the subcutaneous fat in the skin of the specific person. A 90-degree angle is always used for medications such as heparin. If administered at an angle, the skin and underlying tissue may be pinched upwards prior to injection. The injection is administered slowly, lasting about 10 seconds per milliliter of fluid injected, and the needle may be left in place for 10 seconds following injection to ensure the medicine is fully injected.
Equipment
The gauge of the needle used can range from 25 gauge to 27 gauge, while the length can vary from ½ inch to ⅝ inch for injections using a syringe and needle. For subcutaneous injections delivered using devices such as injector pens, the needle used may be as thin as 34 gauge (commonly 30–32 gauge), and as short as 3.5 mm (commonly 3.5 mm to 5 mm). Subcutaneous injections can also be delivered via a pump system which uses a cannula inserted under the skin. The specific needle size/length, as well as appropriateness of a device such as a pen or pump, is based on the characteristics of a person's skin layers.
Locations
Commonly used injection sites include:
The outer area of the upper arm.
The abdomen, avoiding a 2-inch circle around the navel.
The front of the thigh, between 4 inches from the top of the thigh and 4 inches above the knee.
The upper back.
The upper area of the buttock, just behind the hip bone.
The choice of specific injection site is based on the medication being administered, with heparin almost always being administered in the abdomen, as well as preference. Injections administered frequently or repeatedly should be administered in a different location each time, either within the same general site or a different site, but at least one inch away from recent injections.
Self-administration
As opposed to intramuscular or intravenous injections, subcutaneous injections can be easily performed by people with minor skill and training required. The injection sites for self-injection of medication are the same as for injection by a healthcare professional, and the skill can be taught to patients using pictures, videos, or models of the subcutaneous tissue for practice. People who are to self-inject medicine subcutaneously should be trained how to evaluate and rotate the injection site if complications or contraindications arise. Self-administration by subcutaneous injection generally does not require disinfection of the skin outside of a hospital setting as the risk of infection is extremely low, but instead it is recommended to ensure that the site and person's hands are simply clean prior to administration.
Infusion
Subcutaneous infusion, also known as interstitial infusion or hypodermoclysis, is a form of subcutaneous (under the skin) administration of fluids to the body, often saline or glucose solutions. It is the infusion counterpart of subcutaneous injection with a syringe.
Subcutaneous infusion can be used where a slow rate of fluid uptake is required compared to intravenous infusion. Typically, it is limited to 1 mL per minute, although it is possible to increase this by using two sites simultaneously. The chief advantages of subcutaneous infusion over intravenous infusion is that it is cheap and can be administered by non-medical personnel with minimal supervision. It is therefore particularly suitable for home care. The enzyme hyaluronidase can be added to the fluid to improve absorption during the infusion.
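As a worked example at that rate: infusing one litre of fluid at a single site takes roughly 1000 minutes, about 16–17 hours, while running two sites simultaneously halves this to roughly 8 hours.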
Subcutaneous infusion can be sped up by applying it to multiple sites simultaneously. The technique was pioneered by Evan O'Neill Kane in 1900. Kane was looking for a technique that was as fast as intravenous infusion but not so risky to use on trauma patients in unhygienic conditions in the field.
See also
Intramuscular injection
Intravenous injection
Intradermal injection
References
Dosage forms
Routes of administration
Injection (medicine) | Subcutaneous administration | [
"Chemistry"
] | 2,310 | [
"Pharmacology",
"Routes of administration"
] |
334,821 | https://en.wikipedia.org/wiki/Intramuscular%20injection | Intramuscular injection, often abbreviated IM, is the injection of a substance into a muscle. In medicine, it is one of several methods for parenteral administration of medications. Intramuscular injection may be preferred because muscles have larger and more numerous blood vessels than subcutaneous tissue, leading to faster absorption than subcutaneous or intradermal injections. Medication administered via intramuscular injection is not subject to the first-pass metabolism effect which affects oral medications.
Common sites for intramuscular injections include the deltoid muscle of the upper arm and the gluteal muscle of the buttock. In infants, the vastus lateralis muscle of the thigh is commonly used. The injection site must be cleaned before administering the injection, and the injection is then administered in a fast, darting motion to decrease the discomfort to the individual. The volume to be injected in the muscle is usually limited to 2–5 milliliters, depending on injection site. A site with signs of infection or muscle atrophy should not be chosen. Intramuscular injections should not be used in people with myopathies or those with trouble clotting.
Intramuscular injections commonly result in pain, redness, and swelling or inflammation around the injection site. These side effects are generally mild and last no more than a few days at most. Rarely, nerves or blood vessels around the injection site can be damaged, resulting in severe pain or paralysis. If proper technique is not followed, intramuscular injections can result in localized infections such as abscesses and gangrene. While aspiration, or pulling back on the syringe before injection, was historically recommended to prevent inadvertent administration into a vein, in some countries it is no longer recommended for most injection sites.
Uses
Intramuscular injection is commonly used for medication administration. Medication administered in the muscle is generally quickly absorbed in the bloodstream, and avoids the first pass metabolism which occurs with oral administration. The medication may not be considered 100% bioavailable as it must still be absorbed from the muscle, which occurs over time. An intramuscular injection is less invasive than an intravenous injection and also generally takes less time, as the site of injection (a muscle versus a vein) is much larger. Medications administered in the muscle may also be administered as depot injections, which provide slow, continuous release of medicine over a longer period of time. Certain substances, including ketamine, may be injected intramuscularly for recreational purposes. Disadvantages of intramuscular administration include skill and technique required, pain from injection, anxiety or fear (especially in children), and difficulty in self-administration which limits its use in outpatient medicine.
Vaccines, especially inactivated vaccines, are commonly administered via intramuscular injection. However, it has been estimated that for every vaccine injected intramuscularly, 20 injections are given to administer drugs or other therapy. This can include medications such as antibiotics, immunoglobulin, and hormones such as testosterone and medroxyprogesterone. In a case of severe allergic reaction, or anaphylaxis, a person may use an epinephrine autoinjector to self-administer epinephrine into the muscle.
Contraindications
Because an intramuscular injection can be used to administer many types of medications, specific contraindications depend in large part on the medication being administered. Injections of medications are necessarily more invasive than other forms of administration such as by mouth or topical and require training to perform appropriately, without which complications can arise regardless of the medication being administered. For this reason, unless there are desired differences in rate of absorption, time to onset, or other pharmacokinetic parameters in the specific situation, a less invasive form of drug administration (usually by mouth) is preferred.
Intramuscular injections are generally avoided in people with low platelet count or clotting problems, to prevent harm due to potential damage to blood vessels during the injection. They are also not recommended in people who are in hypovolemic shock, or have myopathy or muscle atrophy, as these conditions may alter the absorption of the medication. The damage to the muscle caused by an intramuscular injection may interfere with the accuracy of certain cardiac tests for people with suspected myocardial infarction, and for this reason other methods of administration are preferred in such instances. In people with an active myocardial infarction, the decrease in circulation may result in slower absorption from an IM injection. Specific sites of administration may also be contraindicated if the desired injection site has an infection, swelling, or inflammation. Within a specific site of administration, the injection should not be given directly over irritation or redness, birthmarks or moles, or areas with scar tissue.
Risks and complications
As an injection necessitates piercing the skin, there is a risk of infection from bacteria or other organisms present in the environment or on the skin before the injection. This risk is minimized by using proper aseptic technique in preparing the injection and sanitizing the injection site before administration. Intramuscular injections may also cause an abscess or gangrene at the injection site, depending on the specific medication and amount administered. There is also a risk of nerve or vascular injury if a nerve or blood vessel is inadvertently hit during injection. If single-use or sterilized equipment is not used, there is the risk of transmission of infectious disease between users, or to a practitioner who inadvertently injures themselves with a used needle, termed a needlestick injury.
Site-specific complications
Injections into the deltoid site in the arm can result in unintentional damage to the radial and axillary nerves. In rare cases when not performed properly, the injection may result in shoulder dysfunction. The most frequent complications of a deltoid injection include pain, redness, and inflammation around the injection site, which are almost always mild and last only a few days at most.
The dorsogluteal site of injection is associated with a higher risk of skin and tissue trauma, muscle fibrosis or contracture, hematoma, nerve palsy, paralysis, and infections such as abscesses and gangrene. Furthermore, injection in the gluteal muscle poses a risk for damage to the sciatic nerve, which may cause shooting pain or a sensation of burning. Sciatic nerve damage can also affect a person's ability to move their foot on the affected side, and other parts of the body controlled by the nerve. Damage to the sciatic nerve can be prevented by using the ventrogluteal site instead, and by selecting an appropriate size and length of needle for the injection.
Technique
An intramuscular injection can be administered in multiple different muscles of the body. Common sites for intramuscular injection include: deltoid, dorsogluteal, rectus femoris, vastus lateralis and ventrogluteal muscles. Sites that are bruised, tender, red, swollen, inflamed or scarred are generally avoided. The specific medication and amount being administered will influence the decision of the specific muscle chosen for injection.
The injection site is first cleaned using an antimicrobial and allowed to dry. The injection is performed in a quick, darting motion perpendicular to the skin, at an angle between 72 and 90 degrees. The practitioner will stabilize the needle with one hand while using their other hand to depress the plunger to slowly inject the medication – a rapid injection causes more discomfort. The needle is withdrawn at the same angle inserted. Gentle pressure may be applied with gauze if bleeding occurs. Pressure or gentle massage of the muscle following injection may reduce the risk of pain.
Aspiration
Aspirating for blood to rule out injecting into a blood vessel is not recommended by the US CDC, the Public Health Agency of Canada, or the Norwegian Institute of Public Health, as the injection sites do not contain large blood vessels and aspiration causes greater pain. There is no evidence that aspiration increases the safety of intramuscular injections when injecting at a site other than the dorsogluteal site.
Aspiration was recommended by the Danish Health Authority for COVID-19 vaccines for a time to investigate the potential rare risk of blood clotting and bleeding, but it is no longer a recommendation.
Z-track method
The Z-track method is a method of administering an IM injection that prevents the medication being tracked through the subcutaneous tissue, sealing the medication in the muscle, and minimizing irritation from the medication. Using the Z-track technique, the skin is pulled laterally, away from the injection site, before the injection; then the medication is injected, the needle is withdrawn, and the skin is released. This method can be used if the overlying tissue can be displaced.
Injection sites
The deltoid muscle in the outer portion of the upper arm is used for injections of small volume, usually equal to or less than 1 mL. This includes most intramuscular vaccinations. It is not recommended to use the deltoid for repeated injections due to its small area, which makes it difficult to space out injections from each other. The deltoid site is located by locating the lower edge of the acromion process, and injecting in the area which forms an upside down triangle with its base at the acromion process and its midpoint in line with the armpit. An injection into the deltoid muscle is commonly administered using a 1-inch long needle, but a shorter ⅝-inch needle may be used for younger people or very frail elderly people.
The ventrogluteal site on the hip is used for injections which require a larger volume to be administered, greater than 1 mL, and for medications which are known to be irritating, viscous, or oily. It is also used to administer narcotic medications, antibiotics, sedatives and anti-emetics. The ventrogluteal site is located in a triangle formed by the anterior superior iliac spine and the iliac crest, and may be located using a hand as a guide. The ventrogluteal site is less painful for injection than other sites such as the deltoid site.
The vastus lateralis site is used for infants less than 7 months old and people who are unable to walk or who have loss of muscular tone. The site is located by dividing the front thigh into thirds vertically and horizontally to form nine squares; the injection is administered in the outer middle square. This site is also the usual site of administration for epinephrine autoinjectors, which are used in the outer thigh, corresponding to the location of the vastus lateralis muscle.
The dorsogluteal site of the buttock site is not routinely used due to its location near major blood vessels and nerves, as well as having inconsistent depth of adipose tissue. Many injections in this site do not penetrate deep enough under the skin to be correctly administered in the muscle. While current evidence-based practice recommends against using this site, many healthcare providers still use this site, often due to a lack of knowledge about alternative sites for injection.
This site is located by dividing the buttock into four using a cross shape, and administering the injection in the upper outer quadrant. This is the only intramuscular injection site for which aspiration is recommended of the syringe before injection, due to higher likelihood of accidental intravenous administration in this area. However, aspiration is not recommended by the Centers for Disease Control and Prevention, which considers it outdated for any intramuscular injection.
Special populations
Some populations require a different injection site, needle length, or technique. In very young or weak elderly patients, a normal-length needle may be too long to inject properly. In these patients, a shorter needle is indicated to avoid injecting too deeply. It is also recommended to consider using the anterolateral thigh as an injection site in infants under one year old.
To help infants and children cooperate with injection administration, the Advisory Committee on Immunization Practices in the United States recommends using distractions, giving something sweet, and rocking the baby side to side. In people who are overweight, a 1.5-inch needle may be used to ensure the injection is given below the subcutaneous layer of skin, while a shorter needle may be used for people of low body weight. In any case, the skin does not need to be pinched up before injecting when the appropriate length needle is used.
History
Injections into muscular tissue may have taken place as early as the year 500 AD. Beginning in the late 1800s, the procedure began to be described in more detail and techniques began to be developed by physicians. In the early days of intramuscular injections, the procedure was performed almost exclusively by physicians. After the introduction of antibiotics in the middle of the 20th century, nurses began preparing equipment for intramuscular injections as part of their delegated duties from physicians, and by 1961 they had "essentially taken over the procedure". Until this delegation became virtually universal, there were no uniform procedures or education for nurses in proper administration of intramuscular injections, and complications from improper injection were common.
Intramuscular injections began to be used for administration of vaccines for diphtheria in 1923, whooping cough in 1926, and tetanus in 1927. By the 1970s, researchers and instructors began forming guidance on injection site and technique to reduce the risk of injection complications and side effects such as pain. Also in the early 1970s, botulinum toxin began to be injected into muscles to intentionally paralyze them for therapeutic reasons, and later for cosmetic reasons. Until the 2000s, aspiration after inserting the needle was recommended as a safety measure, to ensure the injection was being administered in a muscle and not inadvertently in a vein. However, this is no longer recommended as evidence shows no safety benefit and it lengthens the time taken for injection, which causes more pain.
Veterinary medicine
In animals, common sites for intramuscular injection include the quadriceps, the lumbodorsal muscles, and the triceps muscle.
See also
Subcutaneous injection
Intradermal injection
Intravenous injection
References
External links
Prevention and Control of Influenza, Recommendations of ACIP
Medical treatments
Routes of administration
Dosage forms
Injection (medicine)
Muscular system | Intramuscular injection | [
"Chemistry"
] | 2,960 | [
"Pharmacology",
"Routes of administration"
] |
334,940 | https://en.wikipedia.org/wiki/Automatic%20transmission | An automatic transmission (sometimes abbreviated AT) is a multi-speed transmission used in motor vehicles that does not require any input from the driver to change forward gears under normal driving conditions.
The 1904 Sturtevant "horseless carriage gearbox" is often considered to be the first true automatic transmission. The first mass-produced automatic transmission is the General Motors Hydramatic four-speed hydraulic automatic, which was introduced in 1939.
Automatic transmissions are especially prevalent in drivetrains subject to intense acceleration and frequent idle or transient operating conditions, most commonly commercial, passenger, and utility vehicles such as buses and waste collection vehicles.
Prevalence
Vehicles with internal combustion engines, unlike electric vehicles, require the engine to operate in a narrow range of rates of rotation, requiring a gearbox, operated manually or automatically, to drive the wheels over a wide range of speeds.
Globally, 43% of new cars produced in 2015 had manual transmissions, falling to 37% by 2020. Automatic transmissions have long been prevalent in the United States, but only started to become common in Europe much later. In Europe in 1997, only 10–12% of cars had automatic transmissions.
In 1957 over 80% of new cars in the United States had automatic transmissions. Automatic transmissions have been standard in large cars since at least 1974. By 2020 only 2.4% of new cars had manual transmissions. Historically, automatic transmissions were less efficient, but lower fuel prices in the US made this less of a problem than in Europe.
In the United Kingdom, a majority of new cars have had automatic transmissions since 2020. Several manufacturers, including Mercedes and Volvo, no longer sell cars with manual transmissions. The growing prevalence of automatic transmissions is attributed to the increasing number of electric and hybrid cars, and to the ease of integrating automatic transmissions with safety systems such as Autonomous Emergency Braking.
Efficiency
The efficiency (power output as a percentage of power input) of conventional automatic transmissions ranges from 86 to 94%, and lower efficiency translates directly into worse fuel economy. Manual transmissions are more fuel efficient than all but the newest automatic transmissions due to their inherently low parasitic losses (typically about 4%), in addition to being cheaper to make, lighter, better performing, and of simpler mechanical design. However, manual transmissions have the disadvantage of requiring the driver to operate the clutch and change gear whenever required.
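As a back-of-the-envelope illustration of these figures (the 100 kW engine output is a hypothetical round number; the efficiencies are simply those quoted above):

```python
# Illustrative arithmetic only: how the quoted efficiency figures translate
# into power actually delivered past the transmission.
engine_power_kw = 100.0                # hypothetical engine output

auto_low, auto_high = 0.86, 0.94       # conventional automatic: 86-94% efficient
manual_eff = 1.0 - 0.04                # manual: roughly 4% parasitic loss

print(f"Automatic: {engine_power_kw * auto_low:.0f} to "
      f"{engine_power_kw * auto_high:.0f} kW reaches the output shaft")
print(f"Manual:    {engine_power_kw * manual_eff:.0f} kW reaches the output shaft")
```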
Real-world tests reported in 2022 found that in typical driving manual transmissions achieved 2 to 5% better fuel economy than automatics, increasing to 20% with an expert driver. Some laboratory tests show automatics in a better light due to the tests using a prescribed shifting pattern for manuals not always optimized for economy. However, on long highway journeys manual transmissions require maintaining a very specific cruising speed to optimise economy, making automatics preferable.
Hydraulic automatic transmission
Design
The most common design of automatic transmissions is the hydraulic automatic, which typically uses planetary gearsets that are operated using hydraulics. The transmission is connected to the engine via a torque converter (or a fluid coupling prior to the 1960s), instead of the friction clutch used by most manual transmissions.
Gearsets and shifting mechanism
A hydraulic automatic transmission uses planetary gearsets instead of the manual transmission's design of gears lined up along input, output and intermediate shafts. To change gears, the hydraulic automatic uses a combination of internal clutches, friction bands or brake packs. These devices are used to lock certain gears, thus setting which gear ratio is in use at a given time.
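The arithmetic behind this is compact enough to sketch. The following snippet computes the classic speed ratios of a single planetary gearset from its tooth counts; the counts used are hypothetical, not taken from any production transmission:

```python
# Speed ratios of a simple planetary (epicyclic) gearset with the sun gear as
# input. Which member the clutches and bands hold determines the ratio.
def planetary_ratios(sun_teeth: int, ring_teeth: int) -> dict:
    r = ring_teeth / sun_teeth
    return {
        "ring held, carrier output (reduction)": 1 + r,
        "carrier held, ring output (reverse)": -r,
        "two members locked together (direct drive)": 1.0,
    }

for mode, ratio in planetary_ratios(sun_teeth=30, ring_teeth=72).items():
    print(f"{mode}: {ratio:+.2f}:1")
```

With these tooth counts, holding the ring gives a 3.40:1 reduction, holding the carrier gives a 2.40:1 reverse, and locking any two members together turns the whole set at 1:1.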
A sprag clutch (a ratchet-like device which can freewheel and transmits torque in only one direction) is often used for routine gear shifts. The advantage of a sprag clutch is that it eliminates the sensitivity of timing a simultaneous clutch release/apply on two planetary gearsets, simply "taking up" the drivetrain load when actuated, and releasing automatically when the next gear's sprag clutch assumes the torque transfer.
The friction bands are often used for manually selected gears (such as low range or reverse) and operate on the planetary drum's circumference. Bands are not applied when the drive/overdrive range is selected, the torque being transmitted by the sprag clutches instead.
Hydraulic controls
The aforementioned friction bands and clutches are controlled using automatic transmission fluid (ATF), which is pressurized by a pump and then directed to the appropriate bands/clutches to obtain the required gear ratio. The ATF provides lubrication, corrosion prevention, and a hydraulic medium to transmit the power required to operate the transmission. Made from petroleum with various refinements and additives, ATF is one of the few parts of the automatic transmission that needs routine service as the vehicle ages.
The main pump which pressurises the ATF is typically a gear pump mounted between the torque converter and the planetary gear set. The input for the main pump is connected to the torque converter housing, which in turn is bolted to the engine's flexplate, so the pump provides pressure whenever the engine is running. A disadvantage of this arrangement is that there is no oil pressure to operate the transmission when the engine is not running, therefore it is not possible to push start a vehicle equipped with an automatic transmission with no rear pump (aside from several automatics built prior to 1970, which also included a rear pump for towing and push-starting purposes). The pressure of the ATF is regulated by a governor connected to the output shaft, which varies the pressure depending on the vehicle speed.
The valve body inside the transmission is responsible for directing hydraulic pressure to the appropriate bands and clutches. It receives pressurized fluid from the main pump and consists of several spring-loaded valves, check balls, and servo pistons. In older automatic transmissions, the valves use the pump pressure and the pressure from a centrifugal governor on the output side (as well as other inputs, such as throttle position or the driver locking out the higher gears) to control which ratio is selected. As the vehicle and engine change speed, the difference between the pressures changes, causing different sets of valves to open and close. In more recent automatic transmissions, the valves are controlled by solenoids. These solenoids are computer-controlled, with the gear selection decided by a dedicated transmission control unit (TCU) or sometimes this function is integrated into the engine control unit (ECU). Modern designs have replaced the centrifugal governor with an electronic speed sensor that is used as an input to the TCU or ECU. Modern transmissions also factor in the amount of load on an engine at any given time, which is determined from either the throttle position or the amount of intake manifold vacuum.
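A deliberately oversimplified sketch of the older, purely hydraulic logic described above may help: governor pressure rises with road speed, throttle pressure rises with accelerator position, and each shift valve "opens" once governor pressure overcomes throttle pressure times a spring preload. All the numbers are invented for illustration, and a real valve body balances spring-loaded spool valves rather than executing code:

```python
# Toy model of hydraulic shift-valve logic: upshifts happen at higher road
# speeds when the throttle is open wider, as described above.
def select_gear(vehicle_speed_kmh: float, throttle_fraction: float) -> int:
    governor_pressure = vehicle_speed_kmh * 1.0          # rises with output shaft speed
    throttle_pressure = 20.0 + 80.0 * throttle_fraction  # rises with throttle opening

    gear = 1
    for preload in (0.4, 0.9, 1.6):  # 1-2, 2-3 and 3-4 shift-valve spring factors
        if governor_pressure > throttle_pressure * preload:
            gear += 1
    return gear

print(select_gear(50, 0.2))  # gentle throttle at 50 km/h -> 3rd gear
print(select_gear(50, 0.9))  # heavy throttle at 50 km/h  -> holds 2nd gear
```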
The multitude of parts, along with the complex design of the valve body, originally made hydraulic automatic transmissions much more expensive and time-consuming to build and repair than manual transmissions; however mass-production and developments over time have reduced this cost gap.
Torque converter
To provide coupling and decoupling of the engine, a modern automatic transmission uses a torque converter instead of the friction clutch used in a manual transmission.
History
1904–1939: Predecessors to the hydraulic automatic
The 1904 Sturtevant "horseless carriage gearbox" is often considered to be the first automatic transmission for motor vehicles. At higher engine speeds, high gear was engaged. As the vehicle slowed down and engine speed decreased, the gearbox would shift back to low. However, the transmission was prone to sudden failure, due to the transmission being unable to withstand forces from the abrupt gear changes.
The adoption of planetary gearsets was a significant advance towards the modern automatic transmission. One of the first transmissions to use this design was the manual transmission fitted to the 1901–1904 Wilson-Pilcher automobile. This transmission was built in the United Kingdom and used two epicyclic gears to provide four gear ratios. A foot clutch was used for standing starts, gear selection was using a hand lever, helical gears were used (to reduce noise) and the gears used a constant-mesh design. A planetary gearset was also used in the 1908 Ford Model T, which was fitted with a two-speed manual transmission (without helical gears).
An early patent for the automatic transmission was granted to Canadian inventor Alfred Horner Munro of Regina in 1923. Being a steam engineer, Munro designed his device to use compressed air rather than hydraulic fluid, and so it lacked power and never found commercial application.
In 1923, a patent was approved in the United States describing the operation of a transmission where the manual shifting of gears and manual operation of a clutch was eliminated. This patent was submitted by Henry R. Hoffman from Chicago and was titled: Automatic Gear Shift and Speed Control. The patent described the workings of such a transmission as "...having a series of clutches disposed intermediate the engine shaft and the differential shaft and in which the clutches are arranged to selectively engage and drive the differential shaft dependent upon the speed at which the differential shaft rotates". However, it would be over a decade later until automatic transmissions were produced in significant quantities. In the meantime, several European and British manufacturers would use preselector gearboxes, a form of manual transmission which removed the reliance on the driver's skill to achieve smooth gear shifts.
The first automatic transmission using hydraulic fluid was developed in 1932 by two Brazilian engineers, José Braz Araripe and Fernando Lehly Lemos.
The evolution towards mass-produced automatic transmissions continued with the 1933–1935 REO Motor Car Company Self-Shifter semi-automatic transmission, which automatically shifted between two forward gears in the "Forward" mode (or between two shorter gear ratios in the "Emergency low" mode). Driver involvement was still required during normal driving, since standing starts required the driver to use the clutch pedal. This was followed in 1937 by the Oldsmobile Automatic Safety Transmission. Similar in operation to the REO Self-Shifter, the Automatic Safety Transmission shifted automatically between the two gear ratios available in the "Low" and "High" ranges and the clutch pedal was required for standing starts. It used a planetary gearset. The Chrysler Fluid Drive, introduced in 1939, was an optional addition to manual transmissions where a fluid coupling (similar to a torque converter, but without the torque multiplication) was added to avoid the need to operate a manual clutch.
1939–1964: Early hydraulic automatics
The General Motors Hydra-Matic became the first mass-produced automatic transmission following its introduction in 1939 (1940 model year). Available as an option in cars such as the Oldsmobile Series 60 and Cadillac Sixty Special, the Hydra-Matic combined a fluid coupling with three hydraulically controlled planetary gearsets to produce four forward speeds plus reverse. The transmission was sensitive to engine throttle position and road speed, producing fully automatic up- and down-shifting that varied according to operating conditions. Features of the Hydra-Matic included a wide spread of ratios (allowing both good acceleration in first gear and cruising at low engine speed in top gear) and the fluid coupling handling only a portion of the engine's torque in the top two gears (increasing fuel economy in those gears, similar to a lock-up torque converter). Use of the Hydra-Matic spread to other General Motors brands and then, starting in 1948, to other manufacturers including Hudson, Lincoln, Kaiser, Nash and Holden (Australia), as well as Rolls-Royce and Bentley licensing production in the UK and providing the transmission to Jensen Motors, Armstrong Siddeley and other UK manufacturers. During World War II, the Hydra-Matic was used in some military vehicles.
The first automatic transmission to use a torque converter (instead of a fluid coupling) was the Buick Dynaflow, which was introduced for the 1948 model year. In normal driving, the Dynaflow used only the top gear, relying on the torque multiplication of the torque convertor at lower speeds. The Dynaflow was followed by the Packard Ultramatic in mid-1949 and the Chevrolet Powerglide for the 1950 model year. Each of these transmissions had only two forward speeds, relying on the converter for additional torque multiplication. In the early 1950s, BorgWarner developed a series of three-speed torque converter automatics for car manufacturers such as American Motors, Ford and Studebaker. Chrysler was late in developing its own true automatic, introducing the two-speed torque converter PowerFlite in 1953, and the three-speed TorqueFlite in 1956. The latter was the first to utilize the Simpson compound planetary gearset.
In 1956, the General Motors Hydra-Matic (which still used a fluid coupling) was redesigned around the use of two fluid couplings to provide smoother shifts. This transmission was called the Controlled Coupling Hydra-Matic, or "Jetaway" transmission. The original Hydra-Matic remained in production until the mid-1960s at GM, with the licensed Rolls-Royce Automatic transmission soldiering on until 1978 on the Rolls-Royce Phantom VI. In 1964, General Motors released a new transmission, the Turbo Hydramatic, a three-speed transmission which used a torque converter. The Turbo Hydramatic was among the first to have the basic gear selections (park, reverse, neutral, drive, low) which became the standard gear selection used for several decades.
1965–present: increased ratio count and electronics
By the late 1960s, most of the fluid-coupling two-speed and four-speed transmissions had disappeared in favor of three-speed units with torque converters. Also around this time, whale oil was removed from automatic transmission fluid. During the 1980s, automatic transmissions with four gear ratios became increasingly common, and many were equipped with lock-up torque converters in order to improve fuel economy.
Electronics began to be more commonly used to control the transmission, replacing mechanical control methods such as spring-loaded valves in the valve body. Most systems use solenoids which are controlled by either the engine control unit, or a separate transmission control unit. This allows for more precise control of shift points, shift quality, lower shift times and manual control.
The first five-speed automatic was the ZF 5HP18 transmission, debuting in 1991 on various BMW models. The first six-speed automatic was the ZF 6HP26 transmission, which debuted in the 2002 BMW 7 Series (E65). The first seven-speed automatic was the Mercedes-Benz 7G-Tronic transmission, which debuted a year later. In 2007, the first eight-speed transmission to reach production was the Toyota AA80E transmission. The first nine-speed and ten-speed transmissions were the 2013 ZF 9HP transmission and 2017 Toyota Direct Shift-10A (used in the Lexus LC) respectively.
Gear selectors
The gear selector is the input by which the driver selects the operating mode of an automatic transmission. Traditionally the gear selector is located between the two front seats or on the steering column; however, electronic rotary dials and push-buttons have also been used occasionally since the 1980s, and push buttons were used in the 1950s and 1960s by Rambler, Edsel and, most famously, Chrysler. A few automobiles employed a lever on the instrument panel, such as the 1955 Chrysler Corporation cars and, notably, the Corvair.
P–R–N–D–L positions
Most cars use a "P–R–N–D–L" layout for the gear selector, which consists of the following positions:
Park (P): This position disengages the transmission from the engine (as with the neutral position), and a parking pawl mechanically locks the output shaft of the transmission. This prevents the driven wheels from rotating, keeping the vehicle from moving. The use of the hand brake (parking brake) is also recommended when parking on slopes, since this provides greater protection from the vehicle moving. The park position is omitted on buses/coaches/tractors, which must instead be placed in neutral with the air-operated parking brakes set. Some early passenger car automatics, such as the pre-1960 Chrysler cars and the Corvair Powerglide, did not have the park feature at all. These cars were started in neutral and required the driver to apply a parking brake when parked. The original Hydra-Matic from GM instead engaged a parking pawl when placed in reverse with the engine off, thus dispensing with a park position until the adoption of the Controlled Coupling Hydra-Matic in 1956. The park position usually includes a lockout function (such as a button on the side of the gear selector or requiring that the brake pedal be pressed) which prevents the transmission from being accidentally shifted from park into other gear selector positions. Many cars also prevent the engine from being started when the selector is in any position other than park or neutral (often in combination with requiring the brake pedal to be pressed).
Reverse (R): This position engages reverse gear, so that the vehicle drives in a backwards direction. It also operates the reversing lights and on some vehicles can activate other functions including parking sensors, backup cameras and reversing beepers (to warn pedestrians). Some modern transmissions have a mechanism that will prevent shifting into the reverse position when the vehicle is moving forward, often using a switch on the brake pedal or electronic transmission controls that monitor the vehicle speed.
Neutral (N): This position disengages the transmission from the engine, allowing the vehicle to move regardless of the engine's speed. Prolonged movement of the vehicle in neutral with the engine off at significant speeds ("coasting") can damage some automatic transmissions, since the lubrication pump is often powered by the input side of the transmission and is therefore not running when the transmission is in neutral. The vehicle may be started in neutral as well as park.
Drive (D): This position is the normal mode for driving forwards. It allows the transmission to engage the full range of available forward gear ratios.
Low (L): This position provides for engine braking on steep hills. It also provides for a lower gear ratio for starting out when heavily loaded.
Some automatic transmissions, especially by General Motors from 1940 to 1964, used a layout with reverse as the bottom position (e.g. N–D–L–R or P–N–D–L–R).
Other positions and modes
Many transmissions also include positions that restrict the gear selection to the lower gears and engage engine braking. These positions are often labelled "L" (low gear), "S" (second gear) or the number of the highest gear used in that position (e.g. 3, 2 or 1). If these positions are engaged at a time when this would result in excessive engine speed, many modern transmissions disregard the selector position and remain in the higher gear.
In descending order of the highest gear available:
3: Restricts the transmission to the lowest three gear ratios. In a 4-speed automatic transmission, this is often used to prevent the car shifting into the overdrive ratio. In some cars, the position labelled "D" performs this function, while another position labelled "OD" or a boxed "[D]" allows all gears to be used.
2 (also labelled "S"): Restricts the transmission to the lowest two gear ratios. In some cars, it is also used to accelerate from standstill in 2nd gear instead of 1st, for situations of reduced traction (such as snow or gravel). This function is sometimes called "winter mode", labelled "W".
1 (also labelled "L"): Restricts the transmission to 1st gear only, also known as a "low gear". This is useful when a large torque is required at the wheels (for example, when accelerating up a steep incline); however use at higher speeds can run the engine at an excessive speed, risking overheating or damage.
Many modern transmissions include modes to adjust the shift logic to prefer either power or fuel economy. "Sport" (also called "Power" or "Performance") modes cause gear shifts to occur at higher engine speeds, allowing higher acceleration. "Economy" (also called "Eco" or "Comfort") modes cause gear shifts to occur at lower engine speeds to reduce fuel consumption.
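A minimal sketch of this mode-dependent behavior, with invented engine-speed thresholds:

```python
# Hypothetical shift schedule: the same transmission upshifts earlier (at
# lower RPM) in economy mode and later (at higher RPM) in sport mode.
UPSHIFT_RPM = {"eco": 2000, "normal": 2800, "sport": 4500}

def should_upshift(engine_rpm: float, mode: str) -> bool:
    return engine_rpm >= UPSHIFT_RPM[mode]

for mode in ("eco", "normal", "sport"):
    print(f"{mode}: upshift at 3000 rpm? {should_upshift(3000, mode)}")
```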
Manual controls
Since the 1990s, systems to manually request a specific gear or an upshift/downshift have become more common. These manumatic transmissions offer the driver greater control over gear selection than the traditional modes that merely restrict the transmission to the lower gears.
Use of the manumatic functions is typically achieved either via paddles located beside the steering column, or via "+" and "-" controls on the gear selector. Some cars offer drivers both methods of requesting a manual gear selection.
Continuously variable transmission (CVT)
A continuously variable transmission (CVT) can change seamlessly through a continuous (infinite) range of gear ratios, compared with other automatic transmissions that provide a limited number of gear ratios in fixed steps. The flexibility of a CVT with suitable control may allow the engine to operate at a constant angular velocity while the vehicle moves at varying speeds.
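The claim about constant engine speed can be made concrete with a little arithmetic. Under hypothetical round-number assumptions (a 2 m tyre circumference, a fixed 4:1 final drive, and a 2,500 rpm target), the variator ratio must sweep continuously as road speed changes:

```python
# How a CVT can hold engine speed constant: solve for the variator ratio
# needed at each road speed. All figures are illustrative assumptions.
TARGET_ENGINE_RPM = 2500.0
FINAL_DRIVE = 4.0              # fixed reduction after the variator
WHEEL_CIRCUMFERENCE_M = 2.0    # roughly a 0.64 m diameter tyre

for speed_kmh in (20, 40, 80, 120):
    wheel_rpm = speed_kmh * 1000 / 60 / WHEEL_CIRCUMFERENCE_M
    variator_ratio = TARGET_ENGINE_RPM / (wheel_rpm * FINAL_DRIVE)
    print(f"{speed_kmh:>3} km/h -> variator ratio {variator_ratio:.2f}:1")
```

Between 20 and 120 km/h the required ratio sweeps smoothly from about 3.75:1 down to about 0.63:1, something a stepped gearbox can only approximate.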
CVTs are used in cars, tractors, UTVs, motor scooters, snowmobiles, and earthmoving equipment.
The most common type of CVT uses two pulleys connected by a belt or chain; however, several other designs have also been used.
Dual-clutch transmission (DCT)
A dual-clutch transmission (DCT, sometimes referred to as a twin-clutch transmission, or double-clutch transmission) uses two separate clutches for odd and even gear sets. The design is often similar to two separate manual transmissions with their respective clutches contained within one housing, and working as one unit. In most car and truck applications, the DCT functions as an automatic transmission, requiring no driver input to change gears.
The first DCT to reach production was the Easidrive automatic transmission introduced on the 1961 Hillman Minx mid-size car. This was followed by various eastern European tractors through the 1970s (using manual operation via a single clutch pedal), then the Porsche 962 C racing car in 1985. The first DCT of the modern era was used in the 2003 Volkswagen Golf R32. Since the late 2000s, DCTs have become increasingly widespread, and have supplanted hydraulic automatic transmissions in various models of cars.
Automated manual transmission (AMT)
Automated manual transmission (AMT), sometimes referred to as a clutchless manual, is a type of multi-speed automobile transmission system that is closely based on the mechanical design of a conventional manual transmission, and automates the clutch system, the gear shifting, or both, requiring partial or no driver involvement.
Earlier versions of these transmissions, which are semi-automatic in operation, such as Autostick, control only the clutch system automatically, using some form of actuation (usually an actuator or servo), but still require the driver's input to manually actuate gear changes by hand. Modern versions of these systems that are fully automatic in operation, such as Selespeed and Easytronic, require no driver input over gear changes or clutch operation. Semi-automatic versions require only partial driver input (i.e., the driver must change gears manually), while fully automatic versions require no manual driver input whatsoever (a TCU or ECU operates both the clutch system and the gear shifts automatically).
Modern automated manual transmissions (AMT) have their roots in older clutchless manual transmissions that began to appear on mass-production automobiles in the early 1930s and 1940s, prior to the introduction of hydraulic automatic transmissions. These systems were designed to reduce the amount of clutch or gear shifter usage required by the driver, and to reduce the difficulty of operating the conventional unsynchronised manual transmissions ("crash gearboxes") that were commonly used at the time, especially in stop-start driving. An early example was introduced with the Hudson Commodore in 1942, called Drive-Master. This unit was an early semi-automatic transmission, based on the design of a conventional manual transmission, which used a servo-controlled, vacuum-operated clutch system with three gear shifting modes selectable at the touch of a button: manual shifting and manual clutch operation (fully manual); manual shifting with automated clutch operation (semi-automatic); and automatic shifting with automatic clutch operation (fully automatic). Another early example of this transmission system was introduced in the 1955 Citroën DS, which used a 4-speed BVH transmission. This semi-automatic transmission used an automated clutch actuated by hydraulics. Gear selection also used hydraulics; however, the gear ratio needed to be manually selected by the driver. This system was nicknamed Citro-Matic in the U.S.
The first modern AMTs were introduced by BMW and Ferrari in 1997, with their SMG and F1 transmissions, respectively. Both systems used hydraulic actuators and electrical solenoids, and a designated transmission control unit (TCU) for the clutch and shifting, plus steering wheel-mounted paddle shifters, if the driver wanted to change gear manually.
Modern fully automatic AMTs, such as Selespeed and Easytronic, have now been largely superseded and replaced by the increasingly widespread dual-clutch transmission design.
See also
:Category:Automobile transmissions
:Category:Automatic transmission tradenames
Motor vehicle
Park by wire
Shift by wire
Torque converter
References
Automotive transmission technologies
Automobile transmissions
Mechanical power control | Automatic transmission | [
"Physics"
] | 5,178 | [
"Mechanics",
"Mechanical power control"
] |
334,947 | https://en.wikipedia.org/wiki/Business%20Process%20Execution%20Language | The Web Services Business Process Execution Language (WS-BPEL), commonly known as BPEL (Business Process Execution Language), is an OASIS standard executable language for specifying actions within business processes with web services. Processes in BPEL export and import information by using web service interfaces exclusively.
Overview
One can describe web service interactions in two ways: as executable business processes and as abstract business processes.
An executable business process: models an actual behavior of a participant in a business interaction.
An abstract business process: is a partially specified process that is not intended to be executed. Contrary to Executable Processes, an Abstract Process may hide some of the required concrete operational details. Abstract Processes serve a descriptive role, with more than one possible use case, including observable behavior and/or process template.
WS-BPEL aims to model the behavior of processes, via a language for the specification of both Executable and Abstract Business Processes. By doing so, it extends the Web Services interaction model and enables it to support business transactions. It also defines an interoperable integration model that should facilitate the expansion of automated process integration both within and between businesses. Its development came out of the notion that programming in the large and programming in the small required different types of languages.
As such, it is serialized in XML and aims to enable programming in the large.
Programming in the large/small
The concepts of programming in the large and programming in the small distinguish between two aspects of writing the type of long-running asynchronous processes that one typically sees in business processes:
Programming in the large generally refers to the high-level state transition interactions of a process. BPEL refers to this concept as an Abstract Process. A BPEL Abstract Process represents a set of publicly observable behaviors in a standardized fashion. An Abstract Process includes information such as when to wait for messages, when to send messages, when to compensate for failed transactions, etc.
Programming in the small, in contrast, deals with short-lived programmatic behavior, often executed as a single transaction and involving access to local logic and resources such as files, databases, et cetera.
History
The origins of WS-BPEL go back to Web Services Flow Language (WSFL) and Xlang.
In 2001, IBM and Microsoft had each defined their own fairly similar "programming in the large" languages: WSFL (Web Services Flow Language) and Xlang, respectively. Microsoft even went ahead and created a scripting variant called XLANG/s, which would later serve as the basis for the Orchestration services inside its BizTalk Server. Microsoft specifically documented that this language "is proprietary and is not fully documented."
With the advent and popularity of BPML, and the growing success of BPMI.org and the open BPMS movement led by JBoss and Intalio Inc., IBM and Microsoft decided to combine these languages into a new language, BPEL4WS. In April 2003, BEA Systems, IBM, Microsoft, SAP, and Siebel Systems submitted BPEL4WS 1.1 to OASIS for standardization via the Web Services BPEL Technical Committee. Although BPEL4WS appeared as both a 1.0 and 1.1 version, the OASIS WS-BPEL technical committee voted on 14 September 2004 to name their spec "WS-BPEL 2.0". (This change in name aligned BPEL with other web service standard naming conventions which start with "WS-" (similar to WS-Security) and took account of the significant enhancements made between BPEL4WS 1.1 and WS-BPEL 2.0.) If not discussing a specific version, the moniker BPEL is commonly used.
In June 2007, Active Endpoints, Adobe Systems, BEA, IBM, Oracle, and SAP published the BPEL4People and WS-HumanTask specifications, which describe how human interaction in BPEL processes can be implemented.
Topics
Design goals
There were ten original design goals associated with BPEL:
Define business processes that interact with external entities through web service operations defined using Web Services Description Language (WSDL) 1.1, and that manifest themselves as Web services defined using WSDL 1.1. The interactions are "abstract" in the sense that the dependence is on portType definitions, not on port definitions.
Define business processes using an XML-based language. Do not define a graphical representation of processes or provide any particular design methodology for processes.
Define a set of Web service orchestration concepts that are meant to be used by both the external (abstract) and internal (executable) views of a business process. Such a business process defines the behavior of a single autonomous entity, typically operating in interaction with other similar peer entities. It is recognized that each usage pattern (i.e., abstract view and executable view) will require a few specialized extensions, but these extensions are to be kept to a minimum and tested against requirements such as import/export and conformance checking that link the two usage patterns.
Provide both hierarchical and graph-like control regimes, and allow their use to be blended as seamlessly as possible. This should reduce the fragmentation of the process modeling space.
Provide data manipulation functions for the simple manipulation of data needed to define process data and control flow.
Support an identification mechanism for process instances that allows the definition of instance identifiers at the application message level. Instance identifiers should be defined by partners and may change.
Support the implicit creation and termination of process instances as the basic lifecycle mechanism. Advanced lifecycle operations such as "suspend" and "resume" may be added in future releases for enhanced lifecycle management.
Define a long-running transaction model that is based on proven techniques like compensation actions and scoping to support failure recovery for parts of long-running business processes.
Use Web Services as the model for process decomposition and assembly.
Build on Web services standards (approved and proposed) as much as possible in a composable and modular manner.
The BPEL language
BPEL is an orchestration language, and not a choreography language. The primary difference between orchestration and choreography is executability and control. An orchestration specifies an executable process that involves message exchanges with other systems, such that the message exchange sequences are controlled by the orchestration designer. A choreography specifies a protocol for peer-to-peer interactions, defining, e.g., the legal sequences of messages exchanged with the purpose of guaranteeing interoperability. Such a protocol is not directly executable, as it allows many different realizations (processes that comply with it). A choreography can be realized by writing an orchestration (e.g., in the form of a BPEL process) for each peer involved in it. The orchestration and the choreography distinctions are based on analogies: orchestration refers to the central control (by the conductor) of the behavior of a distributed system (the orchestra consisting of many players), while choreography refers to a distributed system (the dancing team) which operates according to rules (the choreography) but without centralized control.
BPEL's focus on modern business processes, plus the histories of WSFL and XLANG, led BPEL to adopt web services as its external communication mechanism. Thus BPEL's messaging facilities depend on the use of the Web Services Description Language (WSDL) 1.1 to describe outgoing and incoming messages.
In addition to providing facilities to enable sending and receiving messages, the BPEL programming language also supports the following (a rough code analogy follows the list):
A property-based message correlation mechanism
XML and WSDL typed variables
An extensible language plug-in model to allow writing expressions and queries in multiple languages: BPEL supports XPath 1.0 by default
Structured-programming constructs including if-then-elseif-else, while, sequence (to enable executing commands in order) and flow (to enable executing commands in parallel)
A scoping system to allow the encapsulation of logic with local variables, fault-handlers, compensation-handlers and event-handlers
Serialized scopes to control concurrent access to variables.
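BPEL itself is written in XML, not in a general-purpose programming language, but the control-flow constructs just listed map onto familiar programming ideas. The following Python sketch is only a loose analogy (the partner-service functions are hypothetical stand-ins for <invoke> activities on WSDL-described operations), showing a sequence, a parallel flow, and a scope with fault handling:

```python
# BPEL is XML, not Python; this is only a rough analogy mapping the listed
# constructs onto familiar code. The partner services are hypothetical.
from concurrent.futures import ThreadPoolExecutor

def check_credit(order):
    return True               # placeholder partner service call

def reserve_stock(order):
    return "reserved"         # placeholder partner service call

def issue_invoice(order):
    return "invoiced"         # placeholder partner service call

def cancel_reservation(order):
    pass                      # placeholder compensation step

def process_order(order):
    # <sequence>: activities run one after another
    if not check_credit(order):                  # <if>/<else>
        return "rejected"
    try:                                         # <scope> with a fault handler
        with ThreadPoolExecutor() as pool:       # <flow>: run in parallel
            stock = pool.submit(reserve_stock, order)
            invoice = pool.submit(issue_invoice, order)
            stock.result()
            invoice.result()
    except Exception:                            # <catch> in the fault handler
        cancel_reservation(order)                # compensation-handler logic
        raise                                    # <rethrow>
    return "accepted"

print(process_order({"id": 1}))
```

What BPEL adds over such code is a standardized, declarative XML form that engines can execute, monitor, and correlate across long-running message exchanges.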
Relationship of BPEL to BPMN
There is no standard graphical notation for WS-BPEL, as the OASIS technical committee decided this was out of scope. Some vendors have invented their own notations. These notations take advantage of the fact that most constructs in BPEL are block-structured (e.g. sequence, while, pick, scope, etc.). This feature enables a direct visual representation of BPEL process descriptions in the form of structograms, in a style reminiscent of a Nassi–Shneiderman diagram.
Others have proposed to use a substantially different business process modeling language, namely Business Process Model and Notation (BPMN), as a graphical front-end to capture BPEL process descriptions. As an illustration of the feasibility of this approach, the BPMN specification includes an informal and partial mapping from BPMN to BPEL 1.1. A more detailed mapping of BPMN to BPEL has been implemented in a number of tools, including an open-source tool known as BPMN2BPEL. However, the development of these tools has exposed fundamental differences between BPMN and BPEL, which make it very difficult, and in some cases impossible, to generate human-readable BPEL code from BPMN models. Even more difficult is the problem of BPMN-to-BPEL round-trip engineering: generating BPEL code from BPMN diagrams and maintaining the original BPMN model and the generated BPEL code synchronized, in the sense that any modification to one is propagated to the other.
Adding 'programming in the small' support to BPEL
BPEL's control structures such as 'if-then-elseif-else' and 'while' as well as its variable manipulation facilities depend on the use of 'programming in the small' languages to provide logic. All BPEL implementations must support XPath 1.0 as a default language. But the design of BPEL envisages extensibility so that systems builders can use other languages as well. BPELJ is an effort related to JSR 207 that may enable Java to function as a 'programming in the small' language within BPEL.
BPEL4People
Despite wide acceptance of Web services in distributed business applications, the absence of human interactions was a significant gap for many real-world business processes.
To fill this gap, BPEL4People extended BPEL from orchestration of Web services alone to orchestration of role-based human activities as well.
Objectives
Within the context of a business process, BPEL4People
supports role-based interaction of people
provides means of assigning users to generic human roles
ensures that ownership of a task is delegated to a person only
supports scenarios such as:
four-eyes scenario
nomination
escalation
chained execution
by extending BPEL with additional independent syntax and semantics.
The WS-HumanTask specification introduces the definition of human tasks and notifications, including their properties, behavior and a set of operations used to manipulate human tasks. A coordination protocol is introduced in order to control autonomy and life cycle of service-enabled human tasks in an interoperable manner.
The BPEL4People specification introduces a WS-BPEL extension to address human interactions in WS-BPEL as a first-class citizen. It defines a new type of basic activity which uses human tasks as an implementation, and allows specifying tasks local to a process or use tasks defined outside of the process definition. This extension is based on the WS-HumanTask specification.
WS-BPEL 2.0
Version 2.0 introduced some changes and new features:
New activity types: repeatUntil, validate, forEach (parallel and sequential), rethrow, extensionActivity, compensateScope
Renamed activities: switch/case renamed to if/else, terminate renamed to exit
Termination Handler added to scope activities to provide explicit behavior for termination
Variable initialization
XSLT for variable transformations (New XPath extension function bpws:doXslTransform)
XPath access to variable data (XPath variable syntax $variable[.part]/location)
XML schema variables in Web service activities (for WS-I doc/lit style service interactions)
Locally declared messageExchange (internal correlation of receive and reply activities)
Clarification of Abstract Processes (syntax and semantics)
Enable expression language overrides at each activity
See also
BPEL4People
BPELscript
Business Process Model and Notation
Business Process Modeling
List of BPEL engines
Web Services Conversation Language
Workflow
WS-CDL
XML Process Definition Language
Yet Another Workflow Language
References
Further reading
Books on BPEL 2.0
SOA for the Business Developer: Concepts, BPEL, and SCA.
XML-based standards
Web service specifications
Workflow languages | Business Process Execution Language | [
"Technology"
] | 2,649 | [
"Computer standards",
"XML-based standards"
] |
334,955 | https://en.wikipedia.org/wiki/Therapeutic%20index | The therapeutic index (TI; also referred to as therapeutic ratio) is a quantitative measurement of the relative safety of a drug with regard to risk of overdose. It is a comparison of the amount of a therapeutic agent that causes toxicity to the amount that causes the therapeutic effect. The related terms therapeutic window or safety window refer to a range of doses optimized between efficacy and toxicity, achieving the greatest therapeutic benefit without resulting in unacceptable side-effects or toxicity.
Classically, for clinical indications of an approved drug, TI refers to the ratio of the dose of the drug that causes adverse effects at an incidence/severity not compatible with the targeted indication (e.g. toxic dose in 50% of subjects, TD50) to the dose that leads to the desired pharmacological effect (e.g. efficacious dose in 50% of subjects, ED50). In contrast, in a drug development setting TI is calculated based on plasma exposure levels.
In the early days of pharmaceutical toxicology, TI was frequently determined in animals as lethal dose of a drug for 50% of the population (LD50) divided by the minimum effective dose for 50% of the population (ED50). In modern settings, more sophisticated toxicity endpoints are used.
For many drugs, severe toxicities occur in humans at sublethal doses, which limits the maximum dose that can be given. A higher safety-based therapeutic index is preferable to a lower one: an individual would have to take a much higher dose of the drug to reach the lethal threshold than the dose taken to induce its therapeutic effect. Conversely, a lower efficacy-based therapeutic index is preferable to a higher one: an individual would have to take a much higher dose of the drug to reach the toxic threshold than the dose taken to induce its therapeutic effect.
Generally, a drug or other therapeutic agent with a narrow therapeutic range (i.e. having little difference between toxic and therapeutic doses) may have its dosage adjusted according to measurements of its blood levels in the person taking it. This may be achieved through therapeutic drug monitoring (TDM) protocols. TDM is recommended for use in the treatment of psychiatric disorders with lithium due to its narrow therapeutic range.
Types
Based on efficacy and safety of drugs, there are two types of therapeutic index:
Safety-based therapeutic index
$TI = \frac{LD_{50}}{ED_{50}}$

It is desirable for the value of LD50 to be as large as possible, to decrease the risk of lethal effects and widen the therapeutic window. In the above formula, TI increases as the difference between LD50 and ED50 increases—hence, a higher safety-based therapeutic index indicates a larger therapeutic window, and vice versa.
Efficacy-based therapeutic index
$TI = \frac{ED_{50}}{TD_{50}}$

Ideally, the ED50 is as low as possible, for faster drug response and a larger therapeutic window, whereas a drug's TD50 is ideally as large as possible, to decrease the risk of toxic effects. In the above equation, the greater the difference between ED50 and TD50, the smaller the value of TI. Hence, a lower efficacy-based therapeutic index indicates a larger therapeutic window.
Protective index
Similar to safety-based therapeutic index, the protective index uses TD50 (median toxic dose) in place of LD50.
For many substances, toxic effects can occur at levels far below those that are lethal, and thus, if toxicity is properly specified, the protective index is often more informative about a substance's relative safety. Nevertheless, the safety-based therapeutic index ($LD_{50}/ED_{50}$) is still useful, as it can be considered an upper bound of the protective index, and it also has the advantages of objectivity and easier comprehension.
Since the protective index (PI) is calculated as TD50 divided by ED50, it can be mathematically expressed that:

$PI = \frac{TD_{50}}{ED_{50}} = \left(\frac{ED_{50}}{TD_{50}}\right)^{-1}$

which means that the efficacy-based therapeutic index is the reciprocal of the protective index.
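A small numeric check of these relationships, using made-up dose values rather than data for any real drug:

```python
# Hypothetical doses in mg/kg, chosen only to illustrate the definitions.
ED50 = 5.0    # median effective dose
TD50 = 50.0   # median toxic dose
LD50 = 200.0  # median lethal dose

ti_safety = LD50 / ED50    # safety-based therapeutic index   -> 40.0
ti_efficacy = ED50 / TD50  # efficacy-based therapeutic index -> 0.1
pi = TD50 / ED50           # protective index                 -> 10.0

assert abs(ti_efficacy - 1 / pi) < 1e-12  # the reciprocal relationship above
assert ti_safety >= pi                    # safety-based TI bounds PI from above
print(ti_safety, ti_efficacy, pi)
```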
All the above types of therapeutic index can be used in both pre-clinical trials and clinical trials.
Drug development
A low efficacy-based therapeutic index ($ED_{50}/TD_{50}$) and a high safety-based therapeutic index ($LD_{50}/ED_{50}$) are preferable for a drug to have a favorable efficacy vs safety profile. At the early discovery/development stage, the clinical TI of a drug candidate is unknown. However, understanding the preliminary TI of a drug candidate is of utmost importance as early as possible since TI is an important indicator of the probability of successful development. Recognizing drug candidates with potentially suboptimal TI at the earliest possible stage helps to initiate mitigation or potentially re-deploy resources.
TI is the quantitative relationship between pharmacological efficacy and toxicological safety of a drug, without considering the nature of pharmacological or toxicological endpoints themselves. However, to convert a calculated TI into something useful, the nature and limitations of pharmacological and/or toxicological endpoints must be considered. Depending on the intended clinical indication, the associated unmet medical need and/or the competitive situation, more or less weight can be given to either the safety or efficacy of a drug candidate in order to create a well balanced indication-specific efficacy vs safety profile.
In general, it is the exposure of a given tissue to drug (i.e. drug concentration over time), rather than dose, that drives the pharmacological and toxicological effects. For example, at the same dose there may be marked inter-individual variability in exposure due to polymorphisms in metabolism, drug–drug interactions (DDIs), or differences in body weight or environmental factors. These considerations emphasize the importance of using exposure instead of dose to calculate TI. To account for delays between exposure and toxicity, the TI for toxicities that occur after multiple dose administrations should be calculated using the exposure to drug at steady state rather than after administration of a single dose.
A review published by Muller and Milton in Nature Reviews Drug Discovery critically discusses TI determination and interpretation in a translational drug development setting for both small molecules and biotherapeutics.
Range of therapeutic indices
The therapeutic index varies widely among substances, even within a related group.
For instance, the opioid painkiller remifentanil is very forgiving, offering a therapeutic index of 33,000:1, while diazepam, a benzodiazepine sedative-hypnotic and skeletal muscle relaxant, has a less forgiving therapeutic index of 100:1. Morphine is less forgiving still, with a therapeutic index of 70:1.
Less safe are cocaine (a stimulant and local anaesthetic) and ethanol (colloquially, the "alcohol" in alcoholic beverages, a widely available sedative consumed worldwide): the therapeutic indices for these substances are 15:1 and 10:1, respectively. Paracetamol, alternatively known by its trade names Tylenol or Panadol, also has a therapeutic index of 10.
Even less safe are drugs such as digoxin, a cardiac glycoside; its therapeutic index is approximately 2:1.
Other examples of drugs with a narrow therapeutic range, which may require drug monitoring both to achieve therapeutic levels and to minimize toxicity, include dimercaprol, theophylline, warfarin and lithium carbonate.
Some antibiotics and antifungals require monitoring to balance efficacy with minimizing adverse effects, including: gentamicin, vancomycin, amphotericin B (nicknamed 'amphoterrible' for this very reason), and polymyxin B.
Cancer radiotherapy
Radiotherapy aims to shrink tumors and kill cancer cells using high-energy radiation, delivered as x-rays, gamma rays, or charged or heavy particles. The therapeutic ratio in radiotherapy for cancer treatment is determined by the maximum radiation dose for killing cancer cells and the minimum radiation dose causing acute or late morbidity in cells of normal tissues. Both of these parameters have sigmoidal dose–response curves. A favorable outcome therefore occurs when the dose–response of tumor tissue exceeds that of normal tissue at the same dose, meaning that the treatment is effective against the tumor without causing serious morbidity to normal tissue. Conversely, overlapping responses of the two tissues make serious morbidity to normal tissue and ineffective treatment of tumors highly likely. The mechanism of radiation damage is categorized as direct or indirect. Both direct and indirect radiation induce DNA mutation or chromosomal rearrangement during the repair process. Direct radiation creates a DNA free radical from radiation energy deposition that damages DNA. Indirect radiation occurs from radiolysis of water, creating free hydroxyl radicals, hydronium ions and electrons. The hydroxyl radical transfers its radical to DNA, or, together with hydronium and electrons, it can damage the base region of DNA.
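The sigmoidal curves mentioned above can be sketched numerically. The parameters here are invented for illustration, not clinical values: tumor control is modeled with a midpoint at 55 Gy and normal-tissue complications with a midpoint at 70 Gy, so intermediate doses fall in a favorable window:

```python
import math

# Two illustrative sigmoidal dose-response curves: tumor control probability
# (TCP) and normal tissue complication probability (NTCP).
def sigmoid(dose_gy: float, d50: float, slope: float) -> float:
    return 1.0 / (1.0 + math.exp(-slope * (dose_gy - d50)))

for dose in range(40, 81, 10):
    tcp = sigmoid(dose, d50=55.0, slope=0.25)   # hypothetical tumor curve
    ntcp = sigmoid(dose, d50=70.0, slope=0.25)  # hypothetical normal-tissue curve
    print(f"{dose} Gy: tumor control {tcp:.2f}, normal-tissue complications {ntcp:.2f}")
```

At 60 Gy in this toy model, tumor control is already near 0.78 while the complication probability is still below 0.10; if the two curves overlapped, no such window would exist.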
Cancer cells cause an imbalance of signals in the cell cycle. Irradiation studies of human cells have identified G1 and G2/M arrest as major checkpoints. G1 arrest delays the repair mechanism before synthesis of DNA in S phase and mitosis in M phase, suggesting it is a key checkpoint for the survival of cells. G2/M arrest occurs when cells need to repair after S phase but before mitotic entry. S phase is known to be the most resistant to radiation and M phase the most sensitive. p53, a tumor suppressor protein that plays a role in G1 and G2/M arrest, enabled the understanding of the cell cycle through radiation. For example, irradiation of myeloid leukemia cells leads to an increase in p53 and a decrease in the level of DNA synthesis. Patients with ataxia telangiectasia are hypersensitive to radiation due to delayed accumulation of p53. In this case, cells are able to replicate without repair of their DNA, becoming prone to incidence of cancer. Most cells are in G1 and S phase, and irradiation at G2 phase shows increased radiosensitivity; thus G1 arrest has been a focus for therapeutic treatment.
Irradiation of a tissue induces a response in both irradiated and non-irradiated cells. Even cells up to 50–75 cell diameters away from irradiated cells have been found to exhibit a phenotype of enhanced genetic instability, such as micronucleation. This suggests an effect on cell-to-cell communication such as paracrine and juxtacrine signaling. Normal cells do not lose their DNA repair mechanism, whereas cancer cells often lose it during radiotherapy. However, high-energy radiation can override the ability of damaged normal cells to repair themselves, leading to an additional risk of carcinogenesis and thus a significant risk associated with radiation therapy. It is therefore desirable to improve the therapeutic ratio during radiotherapy. Employing IG-IMRT, protons, and heavy ions, together with altered fractionation, is likely to minimize the dose to normal tissues. Molecular targeting of the DNA repair pathway can lead to radiosensitization or radioprotection. Examples are direct and indirect inhibitors of DNA double-strand breaks. Direct inhibitors target proteins (the PARP family) and kinases (ATM, DNA-PKcs) that are involved in DNA repair. Indirect inhibitors target tumor cell signaling proteins such as EGFR and insulin-like growth factor.
The effective therapeutic index can be affected by targeting, in which the therapeutic agent is concentrated in its desirable area of effect. For example, in radiation therapy for cancerous tumors, shaping the radiation beam precisely to the profile of a tumor in the "beam's eye view" can increase the delivered dose without increasing toxic effects, though such shaping might not change the therapeutic index. Similarly, chemotherapy or radiotherapy with infused or injected agents can be made more efficacious by attaching the agent to an oncophilic substance, as in peptide receptor radionuclide therapy for neuroendocrine tumors and in chemoembolization or radioactive microspheres therapy for liver tumors and metastases. This concentrates the agent in the targeted tissues and lowers its concentration in others, increasing efficacy and lowering toxicity.
Safety ratio
Sometimes the term safety ratio is used, particularly when referring to psychoactive drugs used for non-therapeutic purposes, e.g. recreational use. In such cases, the effective dose is the amount and frequency that produces the desired effect, which can vary, and can be greater or less than the therapeutically effective dose.
The Certain Safety Factor, also referred to as the Margin of Safety (MOS), is the ratio of the dose that is lethal to 1% of the population to the dose that is effective in 99% of the population (LD1/ED99). This is a better safety index than the LD50 for materials that have both desirable and undesirable effects, because it factors in the ends of the spectrum where doses may be necessary to produce a response in one person but can, at the same dose, be lethal in another.
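Written out as a formula, with LD1 the dose lethal to 1% of the population and ED99 the dose effective in 99%:

$\mathrm{MOS} = \frac{LD_{1}}{ED_{99}}$

A margin of safety well above 1 therefore means that the dose which is lethal even rarely sits comfortably above the dose needed to treat nearly everyone.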
Synergistic effect
A therapeutic index does not consider drug interactions or synergistic effects. For example, the risk associated with benzodiazepines increases significantly when taken with alcohol, opiates, or stimulants when compared with being taken alone. Therapeutic index also does not take into account the ease or difficulty of reaching a toxic or lethal dose. This is more of a consideration for recreational drug users, as the purity can be highly variable.
Therapeutic window
The therapeutic window (or pharmaceutical window) of a drug is the range of drug dosages which can treat disease effectively without having toxic effects. Medication with a small therapeutic window must be administered with care and control, frequently measuring blood concentration of the drug, to avoid harm. Medications with narrow therapeutic windows include theophylline, digoxin, lithium, and warfarin.
Optimal biological dose
Optimal biological dose (OBD) is the quantity of a drug that will most effectively produce the desired effect while remaining in the range of acceptable toxicity.
Maximum tolerated dose
The maximum tolerated dose (MTD) refers to the highest dose of a radiological or pharmacological treatment that will produce the desired effect without unacceptable toxicity. The purpose of administering MTD is to determine whether long-term exposure to a chemical might lead to unacceptable adverse health effects in a population, when the level of exposure is not sufficient to cause premature mortality due to short-term toxic effects. The maximum dose is used, rather than a lower dose, to reduce the number of test subjects (and, among other things, the cost of testing), to detect an effect that might occur only rarely. This type of analysis is also used in establishing chemical residue tolerances in foods. Maximum tolerated dose studies are also done in clinical trials.
MTD is an essential aspect of a drug's profile. All modern healthcare systems dictate a maximum safe dose for each drug, and generally have numerous safeguards (e.g. insurance quantity limits and government-enforced maximum quantity/time-frame limits) to prevent the prescription and dispensing of quantities exceeding the highest dosage which has been demonstrated to be safe for members of the general patient population.
Patients are often unable to tolerate the theoretical MTD of a drug due to the occurrence of side-effects which are not innately a manifestation of toxicity (not considered to severely threaten a patient's health) but cause the patient sufficient distress and/or discomfort to result in non-compliance with treatment. Such examples include emotional "blunting" with antidepressants, pruritus with opiates, and blurred vision with anticholinergics.
See also
Drug titration – process of finding the correct dose of a drug
Effective dose
EC50
IC50
LD50
Hormesis
References
Pharmacokinetics
Life sciences industry | Therapeutic index | [
"Chemistry",
"Biology"
] | 3,160 | [
"Pharmacology",
"Life sciences industry",
"Pharmacokinetics"
] |
334,986 | https://en.wikipedia.org/wiki/Allopatric%20speciation | Allopatric speciation () – also referred to as geographic speciation, vicariant speciation, or its earlier name the dumbbell model – is a mode of speciation that occurs when biological populations become geographically isolated from each other to an extent that prevents or interferes with gene flow.
Various geographic changes can arise, such as the movement of continents and the formation of mountains, islands, bodies of water, or glaciers. Human activity such as agriculture or development can also change the distribution of species populations. These factors can substantially alter a region's geography, resulting in the separation of a species population into isolated subpopulations. The vicariant populations then undergo genetic changes as they become subjected to different selective pressures, experience genetic drift, and accumulate different mutations in the separated populations' gene pools. The barriers prevent the exchange of genetic information between the two populations, leading to reproductive isolation. If the two populations then come into contact, they will be unable to reproduce—effectively speciating. Other isolating factors, such as population dispersal leading to emigration, can also cause speciation (for instance, the dispersal and isolation of a species on an oceanic island); this is considered a special case of allopatric speciation called peripatric speciation.
Allopatric speciation is typically subdivided into two major models: vicariance and peripatric. These models differ from one another by virtue of their population sizes and geographic isolating mechanisms. The terms allopatry and vicariance are often used in biogeography to describe the relationship between organisms whose ranges do not significantly overlap but are immediately adjacent to each other—they do not occur together or only occur within a narrow zone of contact. Historically, the language used to refer to modes of speciation directly reflected biogeographical distributions. As such, allopatry is a geographical distribution opposed to sympatry (speciation within the same area). Furthermore, the terms allopatric, vicariant, and geographical speciation are often used interchangeably in the scientific literature. This article will follow a similar theme, with the exception of special cases such as peripatric, centrifugal, among others.
Observation of nature creates difficulties in witnessing allopatric speciation from "start to finish", as it operates as a dynamic process. From this arises a host of issues in defining species, defining isolating barriers, and measuring reproductive isolation, among others. Nevertheless, verbal and mathematical models, laboratory experiments, and empirical evidence overwhelmingly support the occurrence of allopatric speciation in nature. Mathematical modeling of the genetic basis of reproductive isolation supports the plausibility of allopatric speciation, while laboratory experiments on Drosophila and other animal and plant species have confirmed that reproductive isolation evolves as a byproduct of natural selection.
Vicariance model
The notion of vicariant evolution was first developed by Léon Croizat in the mid-twentieth century. The vicariance theory, which gained coherence with the acceptance of plate tectonics in the 1960s, was developed in the early 1950s by this Venezuelan botanist, who found an explanation for the similarity of plants and animals in South America and Africa by deducing that they had originally been a single population before the two continents drifted apart.
Currently, speciation by vicariance is widely regarded as the most common form of speciation, and it is the primary model of allopatric speciation. Vicariance is a process by which the geographical range of an individual taxon, or a whole biota, is split into discontinuous populations (disjunct distributions) by the formation of an extrinsic barrier to the exchange of genes: that is, a barrier arising externally to a species. These extrinsic barriers often arise from various geologically driven topographic changes such as: the formation of mountains (orogeny); the formation of rivers or bodies of water; glaciation; the formation or elimination of land bridges; the movement of continents over time (by tectonic plates); or island formation, including sky islands. Vicariant barriers can change the distribution of species populations. Suitable or unsuitable habitat may come into existence, expand, contract, or disappear as a result of global climate change or large-scale human activities (for example, agriculture, civil engineering developments, and habitat fragmentation). Such factors can alter a region's geography in substantial ways, resulting in the separation of a species population into isolated subpopulations. The vicariant populations may then undergo genotypic or phenotypic divergence as: (a) different mutations arise in the gene pools of the populations, (b) they become subjected to different selective pressures, and/or (c) they independently undergo genetic drift. The extrinsic barriers prevent the exchange of genetic information between the two populations, potentially leading to differentiation due to the ecologically different habitats they experience; selective pressure then invariably leads to complete reproductive isolation. Furthermore, a species' proclivity to remain in its ecological niche (see phylogenetic niche conservatism) through changing environmental conditions may also play a role in isolating populations from one another, driving the evolution of new lineages.
Allopatric speciation can be represented as an extreme on a gene flow continuum. As such, the level of gene flow between populations in allopatry would be m = 0, where m equals the rate of gene exchange. In sympatry (panmixis) m = 0.5, while parapatric speciation spans the continuum of intermediate values (0 < m < 0.5), although some scientists argue that a classification scheme based solely on geographic mode does not necessarily reflect the complexity of speciation. Allopatry is often regarded as the default or "null" model of speciation, but this too is debated.
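To make the continuum concrete, the sketch below simulates two populations under genetic drift with a tunable gene-exchange rate m: at m = 0 (allopatry) the allele frequencies wander apart independently, while at m = 0.5 (panmixis) they stay locked together. This is a minimal illustrative model, not one drawn from this article's sources, and all parameter values are arbitrary.

```python
import random

def drift_with_migration(n=500, m=0.0, generations=2000, p0=0.5, seed=1):
    """Two Wright-Fisher populations of n haploid individuals exchanging
    migrants at rate m per generation; returns the final frequencies of
    one allele in each population."""
    rng = random.Random(seed)
    p1 = p2 = p0
    for _ in range(generations):
        # migration pulls the two allele frequencies toward each other
        p1_m = (1 - m) * p1 + m * p2
        p2_m = (1 - m) * p2 + m * p1
        # binomial sampling produces genetic drift in each population
        p1 = sum(rng.random() < p1_m for _ in range(n)) / n
        p2 = sum(rng.random() < p2_m for _ in range(n)) / n
    return p1, p2

for m in (0.0, 0.05, 0.5):
    p1, p2 = drift_with_migration(m=m)
    print(f"m = {m:.2f}: p1 = {p1:.2f}, p2 = {p2:.2f}")
```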
Reproductive isolation
Reproductive isolation acts as the primary mechanism driving genetic divergence in allopatry and can be amplified by divergent selection. Pre-zygotic and post-zygotic isolation are the most often cited mechanisms for allopatric speciation, and as such, it is difficult to determine which form evolved first in an allopatric speciation event. Pre-zygotic simply implies the presence of a barrier prior to any act of fertilization (such as an environmental barrier dividing two populations), while post-zygotic implies the prevention of successful inter-population crossing after fertilization (such as the production of an infertile hybrid). Since species pairs that diverged in allopatry often exhibit both pre- and post-zygotic isolation mechanisms, investigation of the earliest stages in the life cycle of the species can indicate whether divergence occurred due to a pre-zygotic or a post-zygotic factor. However, establishing the specific mechanism may not be accurate, as a species pair continually diverges over time. For example, if a plant experiences a chromosome duplication event, reproduction will occur, but sterile hybrids will result—functioning as a form of post-zygotic isolation. Subsequently, the newly formed species pair may experience pre-zygotic barriers to reproduction as selection, acting on each species independently, ultimately leads to genetic changes making hybrids impossible. From the researcher's perspective, the current isolating mechanism may not reflect the past isolating mechanism.
Reinforcement
Reinforcement has been a contentious factor in speciation. It is more often invoked in sympatric speciation studies, as it requires gene flow between two populations. However, reinforcement may also play a role in allopatric speciation, whereby the reproductive barrier is removed, reuniting the two previously isolated populations. Upon secondary contact, individuals reproduce, creating low-fitness hybrids. Traits of the hybrids drive individuals to discriminate in mate choice, by which pre-zygotic isolation increases between the populations. Some arguments have been put forth suggesting that the hybrids themselves can become their own species, a process known as hybrid speciation. Reinforcement can play a role in all geographic modes (and other non-geographic modes) of speciation as long as gene flow is present and viable hybrids can be formed. The production of inviable hybrids is a form of reproductive character displacement, which under most definitions marks the completion of a speciation event.
Research has well established that interspecific mate discrimination occurs to a greater extent between sympatric populations than between purely allopatric populations; however, other factors have been proposed to account for the observed patterns. Reinforcement in allopatry has been shown to occur in nature (evidence for speciation by reinforcement), albeit with less frequency than classic allopatric speciation. A major difficulty arises when interpreting reinforcement's role in allopatric speciation, as current phylogenetic patterns may suggest past gene flow. This masks possible initial divergence in allopatry and can indicate a "mixed-mode" speciation event—exhibiting both allopatric and sympatric speciation processes.
Mathematical models
Developed in the context of the genetic basis of reproductive isolation, mathematical scenarios model both prezygotic and postzygotic isolation with respect to the effects of genetic drift, selection, sexual selection, or various combinations of the three. Masatoshi Nei and colleagues were the first to develop a neutral, stochastic model of speciation by genetic drift alone. Both selection and drift can lead to postzygotic isolation, supporting the fact that two geographically separated populations can evolve reproductive isolation—sometimes occurring rapidly. Fisherian sexual selection can also lead to reproductive isolation if there are minor variations in selective pressures (such as predation risks or habitat differences) among the populations. (See the Further reading section below.) Mathematical models concerning reproductive isolation by distance have shown that populations can experience increasing reproductive isolation that correlates directly with physical, geographical distance. This has been exemplified in models of ring species; however, it has been argued that ring species are a special case, representing reproductive isolation by distance, and demonstrate parapatric speciation instead—as parapatric speciation represents speciation occurring along a cline.
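One way to see why drift alone can yield postzygotic isolation is the incompatibility bookkeeping sketched below: two isolated lineages fix substitutions independently, and candidate hybrid-dysfunction interactions accumulate with the product of the two substitution counts (the "snowball" pattern). This toy sketch illustrates the general modeling idea only; it is not Nei's model or any specific published one, and the rates are arbitrary.

```python
import random

def dmi_snowball(sub_rate=0.002, generations=20000, step=5000, seed=42):
    """Two fully isolated lineages fix substitutions independently; each
    pair of loci carrying one new substitution from each lineage is a
    potential Dobzhansky-Muller incompatibility, so the count of
    candidate incompatibilities grows roughly as k1 * k2."""
    rng = random.Random(seed)
    k1 = k2 = 0
    for t in range(1, generations + 1):
        k1 += rng.random() < sub_rate  # substitution fixed in lineage 1
        k2 += rng.random() < sub_rate  # substitution fixed in lineage 2
        if t % step == 0:
            print(f"gen {t:>6}: k1={k1:>3} k2={k2:>3} "
                  f"candidate DMIs ~ {k1 * k2}")

dmi_snowball()
```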
Other models
Various alternative models have been developed concerning allopatric speciation. Special cases of vicariant speciation have been studied in great detail; two of these are peripatric speciation, whereby a small subset of a species' population becomes geographically isolated, and centrifugal speciation, an alternative model of peripatric speciation concerning expansion and contraction of a species' range. Other minor allopatric models have also been developed and are discussed below.
Peripatric
Peripatric speciation is a mode of speciation in which a new species is formed from an isolated peripheral population. If a small population of a species becomes isolated (e.g. a population of birds on an oceanic island), selection can act on the population independently of the parent population. Given both geographic separation and enough time, speciation can result as a byproduct. It can be distinguished from allopatric speciation by three important features: 1) the size of the isolated population, 2) the strong selection imposed by dispersal and colonization into novel environments, and 3) the potential effects of genetic drift on small populations. However, it can often be difficult for researchers to determine whether peripatric speciation occurred, as vicariant explanations can be invoked because both models posit the absence of gene flow between the populations. The size of the isolated population is important because individuals colonizing a new habitat likely contain only a small sample of the genetic variation of the original population. This promotes divergence due to strong selective pressures, leading to the rapid fixation of an allele within the descendant population. This gives rise to the potential for genetic incompatibilities to evolve; these incompatibilities cause reproductive isolation, giving rise to rapid speciation events. Models of peripatry are supported mostly by species distribution patterns in nature; oceanic islands and archipelagos provide the strongest empirical evidence that peripatric speciation occurs.
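The importance of the isolated population's size can be illustrated directly: drawing a small founder group from a source gene pool both shifts allele frequencies and frequently drops rare alleles altogether. The sketch below uses invented allele frequencies; it is a generic illustration of the founder effect, not data from any study cited here.

```python
import random

def founder_sample(source_freqs, founders=10, seed=7):
    """Draw 2N gene copies for a small founder group and report the
    allele frequencies that survive the colonization bottleneck."""
    rng = random.Random(seed)
    copies = 2 * founders
    counts = {allele: 0 for allele in source_freqs}
    alleles, weights = zip(*source_freqs.items())
    for _ in range(copies):
        counts[rng.choices(alleles, weights)[0]] += 1
    return {a: c / copies for a, c in counts.items()}

# hypothetical source population with one common and two rare alleles
source = {"A1": 0.70, "A2": 0.25, "A3": 0.05}
print(founder_sample(source))  # rare alleles are often lost outright
```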
Centrifugal
Centrifugal speciation is a variant, alternative model of peripatric speciation. This model contrasts with peripatric speciation by virtue of the origin of the genetic novelty that leads to reproductive isolation. When a population of a species experiences a period of geographic range expansion and contraction, it may leave small, fragmented, peripherally isolated populations behind. These isolated populations will contain samples of the genetic variation from the larger parent population. This variation leads to a higher likelihood of ecological niche specialization and the evolution of reproductive isolation. Centrifugal speciation has been largely ignored in the scientific literature. Nevertheless, a wealth of evidence has been put forth by researchers in support of the model, much of which has not yet been refuted. One example is the possible center of origin in the Indo-West Pacific.
Microallopatric
Microallopatry refers to allopatric speciation occurring on a small geographic scale. Examples of microallopatric speciation in nature have been described. Rico and Turner found intralacustrine allopatric divergence of Pseudotropheus callainos (Maylandia callainos) within Lake Malawi across a separation of only 35 meters. Gustave Paulay found evidence that species in the subfamily Cryptorhynchinae have microallopatrically speciated on Rapa and its surrounding islets. A sympatrically distributed triplet of diving beetle (Paroster) species living in aquifers of Australia's Yilgarn region has likely speciated microallopatrically within a 3.5 km² area. The term was originally proposed by Hobart M. Smith to describe a level of geographic resolution: a sympatric population may exist at low resolution, whereas viewed at a higher resolution (i.e. on a small, localized scale within the population) it is "microallopatric". Ben Fitzpatrick and colleagues contend that this original definition "is misleading because it confuses geographical and ecological concepts".
Modes with secondary contact
Ecological speciation can occur allopatrically, sympatrically, or parapatrically; the only requirement is that it occurs as a result of adaptation to different ecological or micro-ecological conditions. Ecological allopatry is a reverse-ordered form of allopatric speciation in conjunction with reinforcement: divergent selection first separates a non-allopatric population by way of pre-zygotic barriers, from which genetic differences evolve due to the obstruction of complete gene flow. The terms allo-parapatric and allo-sympatric have been used to describe speciation scenarios where divergence occurs in allopatry but speciation occurs only upon secondary contact. These are effectively models of reinforcement or "mixed-mode" speciation events.
Observational evidence
As allopatric speciation is widely accepted as a common mode of speciation, the scientific literature is abundant with studies documenting its existence. The biologist Ernst Mayr was the first to summarize the contemporary literature of the time in 1942 and 1963. Many of the examples he set forth remain conclusive; however, modern research supports geographic speciation with molecular phylogenetics—adding a level of robustness unavailable to early researchers. The most recent thorough treatment of allopatric speciation (and speciation research in general) is Jerry Coyne and H. Allen Orr's 2004 publication Speciation. They list six mainstream arguments that lend support to the concept of vicariant speciation:
Closely related species pairs, more often than not, reside in geographic ranges adjacent to one another, separated by a geographic or climatic barrier.
Young species pairs (or sister species) often occur in allopatry, even without a known barrier.
In occurrences where several pairs of related species share a range, they are distributed in abutting patterns, with borders exhibiting zones of hybridization.
In regions where geographic isolation is doubtful, species do not exhibit sister pairs.
Correlation of genetic differences between an array of distantly related species that correspond to known current or historical geographic barriers.
Measures of reproductive isolation increase with the greater geographic distance of separation between two species pairs. (This has been often referred to as reproductive isolation by distance.)
Endemism
Allopatric speciation has resulted in many of the biogeographic and biodiversity patterns found on Earth: on islands, continents, and even among mountains.
Islands are often home to endemic species—existing only on an island and nowhere else in the world—with nearly all taxa residing on isolated islands sharing common ancestry with a species on the nearest continent. Though not without challenge, there is typically a correlation between island endemism and diversity; that is, the greater the diversity (species richness) of an island, the greater its endemism. Increased diversity effectively drives speciation. Furthermore, the number of endemics on an island is directly correlated with the relative isolation of the island and its area. In some cases, speciation on islands has occurred rapidly.
Dispersal and in situ speciation are the agents that explain the origins of the organisms in Hawaii. Various geographic modes of speciation have been studied extensively in Hawaiian biota, and in particular, angiosperms appear to have speciated predominately in allopatric and parapatric modes.
Islands are not the only geographic locations that have endemic species. South America has been studied extensively with its areas of endemism representing assemblages of allopatrically distributed species groups. Charis butterflies are a primary example, confined to specific regions corresponding to phylogenies of other species of butterflies, amphibians, birds, marsupials, primates, reptiles, and rodents. The pattern indicates repeated vicariant speciation events among these groups. It is thought that rivers may play a role as the geographic barriers to Charis, not unlike the river barrier hypothesis used to explain the high rates of diversity in the Amazon basin—though this hypothesis has been disputed. Dispersal-mediated allopatric speciation is also thought to be a significant driver of diversification throughout the Neotropics.
Patterns of increased endemism at higher elevations on both islands and continents have been documented on a global level. As topographical elevation increases, species become isolated from one another, often constricted to graded zones. This isolation on "mountain top islands" creates barriers to gene flow, encouraging allopatric speciation and generating the formation of endemic species. Mountain building (orogeny) is directly correlated with—and directly affects—biodiversity. The formation of the Himalayan mountains and the Qinghai–Tibetan Plateau, for example, has driven the speciation and diversification of numerous plants and animals such as Lepisorus ferns, glyptosternoid fishes (Sisoridae), and the Rana chensinensis species complex. Uplift has also driven vicariant speciation in Macowania daisies in South Africa's Drakensberg mountains, along with Dendrocincla woodcreepers in the South American Andes. The Laramide orogeny during the Late Cretaceous even caused vicariant speciation and radiations of dinosaurs in North America.
Adaptive radiation, like that of the Galapagos finches observed by Charles Darwin, is often a consequence of rapid allopatric speciation among populations. However, the Galapagos finches, like other island radiations such as the honeycreepers of Hawaii, represent cases of limited geographic separation and were likely driven by ecological speciation.
Isthmus of Panama
Geological evidence supports the final closure of the Isthmus of Panama approximately 2.7 to 3.5 mya, with some evidence suggesting an earlier transient bridge existing between 13 and 15 mya. Recent evidence increasingly points towards an older and more complex emergence of the Isthmus, with fossil and extant species dispersal (part of the American biotic interchange) occurring in three major pulses to and from North and South America. Further, changes in the terrestrial biotic distributions of both continents, such as with Eciton army ants, support an earlier bridge or a series of bridges. Regardless of the exact timing of the isthmus closure, biologists can study the species on the Pacific and Caribbean sides in what has been called "one of the greatest natural experiments in evolution". Additionally, as with most geologic events, the closure was unlikely to have occurred rapidly, but instead dynamically—a gradual shallowing of sea water over millions of years.
Studies of snapping shrimp in the genus Alpheus have provided direct evidence of an allopatric speciation event, as phylogenetic reconstructions support the relationships of 15 pairs of sister species of Alpheus, each pair divided across the isthmus, and molecular clock dating supports their separation between 3 and 15 million years ago. Recently diverged species live in shallow mangrove waters while older diverged species live in deeper water, correlating with a gradual closure of the isthmus. Support for an allopatric divergence also comes from laboratory experiments on the species pairs showing nearly complete reproductive isolation.
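The molecular-clock arithmetic behind such dates is simple: pairwise divergence accrues along both lineages, so the split time is the observed divergence divided by twice the per-lineage rate. The sketch below shows the calculation with made-up numbers; the divergence and rate values are assumptions for illustration, not the actual Alpheus data.

```python
def divergence_time(K, rate):
    """Molecular-clock estimate: pairwise divergence K (substitutions
    per site) accumulates along both lineages, so t = K / (2 * rate)."""
    return K / (2.0 * rate)

# hypothetical values: 2.8% sequence divergence, a rate of 1% per
# site per lineage per million years
t = divergence_time(K=0.028, rate=0.01)
print(f"estimated divergence: {t:.1f} million years")  # 1.4 My
```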
Similar patterns of relatedness and distribution across the Pacific and Atlantic sides have been found in other species pairs such as:
Diadema antillarum and Diadema mexicanum
Echinometra lucunter and Echinometra vanbrunti
Echinometra viridis and E. vanbrunti
Bathygobius soporator and Bathygobius ramosus
B. soporator and Bathygobius andrei
Excirolana braziliensis and variant morphs
Refugia
Ice ages have played important roles in facilitating speciation among vertebrate species. This concept of refugia has been applied to numerous groups of species and their biogeographic distributions.
Glaciation and subsequent retreat caused speciation in many boreal forest birds, such as with North American sapsuckers (Yellow-bellied, Red-naped, and Red-breasted); the warblers in the genus Setophaga (S. townsendii, S. occidentalis, and S. virens), Oreothlypis (O. virginiae, O. ridgwayi, and O. ruficapilla), and Oporornis (O. tolmiei and O. philadelphia, now classified in the genus Geothlypis); Fox sparrows (subspecies P. (i.) unalaschensis, P. (i.) megarhyncha, and P. (i.) schistacea); Vireo (V. plumbeus, V. cassinii, and V. solitarius); tyrant flycatchers (E. occidentalis and E. difficilis); chickadees (P. rufescens and P. hudsonicus); and thrushes (C. bicknelli and C. minimus).
As a special case of allopatric speciation, peripatric speciation is often invoked for instances of isolation in glaciation refugia as small populations become isolated due to habitat fragmentation such as with North American red (Picea rubens) and black (Picea mariana) spruce or the prairie dogs Cynomys mexicanus and C. ludovicianus.
Superspecies
Numerous species pairs or species groups show abutting distribution patterns, that is, reside in geographically distinct regions next to each other. They often share borders, many of which contain hybrid zones. Some examples of abutting species and superspecies (an informal rank referring to a complex of closely related allopatrically distributed species, also called allospecies) include:
Western and Eastern meadowlarks in North America reside in dry western and wet eastern geographic regions, with rare occurrences of hybridization that mostly result in infertile offspring.
Monarch flycatchers endemic to the Solomon Islands; a complex of several species and subspecies (Bougainville, white-capped, and chestnut-bellied monarchs and their related subspecies).
North American sapsuckers and members of the genus Setophaga (the hermit warbler, black-throated green warbler, and Townsend's warbler).
Sixty-six subspecies in the genus Pachycephala residing on the Melanesian islands.
Bonobos and chimpanzees.
Climacteris tree creeper birds in Australia.
Birds-of-paradise in the mountains of New Guinea (genus Astrapia).
Red-shafted and yellow-shafted flickers; black-headed grosbeaks and rose-breasted grosbeaks; Baltimore orioles and Bullock's orioles; and the lazuli and indigo buntings. All of these species pairs connect at zones of hybridization that correspond with major geographic barriers.
Dugesia flatworms in Europe, Asia, and the Mediterranean regions.
Dichromatic toucanets of the genus Selenidera may be a superspecies that arose by the refugia hypothesis in the Amazon basin.
In birds, some areas are prone to high rates of superspecies formation such as the 105 superspecies in Melanesia, comprising 66 percent of all bird species in the region. Patagonia is home to 17 superspecies of forest birds, while North America has 127 superspecies of both land and freshwater birds. Sub-Saharan Africa has 486 passerine birds grouped into 169 superspecies. Australia has numerous bird superspecies as well, with 34 percent of all bird species grouped into superspecies.
Laboratory evidence
Experiments on allopatric speciation are often complex and do not simply divide a species population into two. This is due to a host of defining parameters: measuring reproductive isolation, sample sizes (the number of matings conducted in reproductive isolation tests), bottlenecks, length of experiments, number of generations allowed, or insufficient genetic diversity. Various isolation indices have been developed to measure reproductive isolation (and are often employed in laboratory speciation studies), such as the index Y and the joint isolation index I = (A + D - B - C) / (A + B + C + D).
Here, B and C represent the numbers of heterogametic (between-population) matings, while A and D represent homogametic (within-population) matings; A and B involve females of one population, and C and D females of the second. A negative value of the index denotes negative assortative mating, a positive value denotes positive assortative mating (i.e. expressing reproductive isolation), and a null value (of zero) means the populations are experiencing random mating.
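Under the mating-count definitions above, the joint isolation index can be computed as sketched below. The counts in the example are hypothetical, and the formula shown is the commonly used joint isolation index rather than a reconstruction of this article's exact Y formula.

```python
def joint_isolation_index(A, B, C, D):
    """Joint isolation index I = (homogamic - heterogamic) / total,
    where A and D count within-population matings and B and C count
    between-population matings; I = 1 means complete isolation, 0 means
    random mating, and negative values mean disassortative mating."""
    total = A + B + C + D
    return (A + D - B - C) / total

# hypothetical mate-choice trial counts
print(joint_isolation_index(A=40, B=10, C=12, D=38))  # 0.56: assortative
```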
The experimental evidence has solidly established that reproductive isolation evolves as a by-product of selection. Reproductive isolation has been shown to arise from pleiotropy (i.e. indirect selection acting on genes that code for more than one trait)—what has been referred to as genetic hitchhiking. Limitations and controversies exist as to whether laboratory experiments can accurately reflect the long-scale process of allopatric speciation that occurs in nature. Experiments often run for fewer than 100 generations, far fewer than expected, as rates of speciation in nature are thought to be much larger. Furthermore, rates specifically concerning the evolution of reproductive isolation in Drosophila are significantly higher than what is practiced in laboratory settings. Using the index Y presented previously, a survey of 25 allopatric speciation experiments (included in the table below) found that reproductive isolation was not as strong as typically maintained and that laboratory environments have not been well-suited for modeling allopatric speciation. Nevertheless, numerous experiments have shown pre-zygotic and post-zygotic isolation in vicariance, some in less than 100 generations.
Below is a non-exhaustive table of the laboratory experiments conducted on allopatric speciation. The first column indicates the species used in the referenced study, while the "Trait" column refers to the specific characteristic selected for or against in that species. The "Generations" column refers to the number of generations in each experiment performed. If more than one experiment was performed, generations are separated by semicolons or dashes (given as a range); some studies instead provide the duration over which the experiment was conducted. The "Selection type" column indicates whether the study modeled vicariant or peripatric speciation (this may not be stated explicitly). Direct selection refers to selection imposed to promote reproductive isolation, whereas indirect selection implies isolation occurring as a pleiotropic byproduct of natural selection; divergent selection implies deliberate selection of each allopatric population in opposite directions (e.g. one line with more bristles and the other line with fewer). Some studies performed experiments modeling or controlling for genetic drift. Reproductive isolation occurred pre-zygotically, post-zygotically, both, or not at all. It is important to note that many of the studies contain multiple experiments within them—a level of detail this table does not reflect.
History and research techniques
Early speciation research typically reflected geographic distributions, and speciation was thus termed geographic, semi-geographic, or non-geographic. Geographic speciation corresponds to today's usage of the term allopatric speciation, and in 1868, Moritz Wagner was the first to propose the concept, for which he used the term Separationstheorie. His idea was later interpreted by Ernst Mayr as a form of founder effect speciation, as it focused primarily on small geographically isolated populations.
Edward Bagnall Poulton, an evolutionary biologist and a strong proponent of the importance of natural selection, highlighted the role of geographic isolation in promoting speciation, in the process coining the term "sympatric speciation" in 1903.
Controversy exists as to whether Charles Darwin recognized a true geographical-based model of speciation in his publication of the Origin of Species. In chapter 11, "Geographical Distribution", Darwin discusses geographic barriers to migration, stating for example that "barriers of any kind, or obstacles to free migration, are related in a close and important manner to the differences between the productions of various regions [of the world]". F. J. Sulloway contends that Darwin's position on speciation was "misleading" at the least and may have later misinformed Wagner and David Starr Jordan into believing that Darwin viewed sympatric speciation as the most important mode of speciation. Nevertheless, Darwin never fully accepted Wagner's concept of geographical speciation.
David Starr Jordan played a significant role in promoting allopatric speciation in the early 20th century, providing a wealth of evidence from nature to support the theory. Much later, the biologist Ernst Mayr was the first to encapsulate the then-contemporary literature in his 1942 publication Systematics and the Origin of Species, from the Viewpoint of a Zoologist and in his subsequent 1963 publication Animal Species and Evolution. Like Jordan's works, they relied on direct observations of nature, documenting the occurrence of allopatric speciation, which is widely accepted today. Prior to this research, Theodosius Dobzhansky published Genetics and the Origin of Species in 1937, in which he formulated the genetic framework for how speciation could occur.
Other scientists noted the existence of allopatrically distributed pairs of species in nature, such as Joel Asaph Allen (who coined the term "Jordan's Law", whereby closely related, geographically isolated species are often found divided by a physical barrier) and Robert Greenleaf Leavitt; however, it is thought that Wagner, Karl Jordan, and David Starr Jordan played the largest role in the formation of allopatric speciation as an evolutionary concept, while Mayr and Dobzhansky contributed to the formation of the modern evolutionary synthesis.
The late 20th century saw the development of mathematical models of allopatric speciation, leading to the clear theoretical plausibility that geographic isolation can result in the reproductive isolation of two populations.
Since the 1940s, allopatric speciation has been widely accepted, and today it is regarded as the most common form of speciation taking place in nature. However, this is not without controversy, as parapatric and sympatric speciation are both also considered tenable modes of speciation that occur in nature. Some researchers even consider there to be a bias in the reporting of positive allopatric speciation events: in one study reviewing 73 speciation papers published in 2009, only 30 percent of those that suggested allopatric speciation as the primary explanation for the patterns observed also considered other modes of speciation as possible.
Contemporary research relies largely on multiple lines of evidence to determine the mode of a speciation event; that is, determining patterns of geographic distribution in conjunction with phylogenetic relatedness based on molecular techniques. This method was effectively introduced by John D. Lynch in 1986 and numerous researchers have employed it and similar methods, yielding enlightening results. Correlation of geographic distribution with phylogenetic data also spawned a sub-field of biogeography called vicariance biogeography developed by Joel Cracraft, James Brown, Mark V. Lomolino, among other biologists specializing in ecology and biogeography. Similarly, full analytical approaches have been proposed and applied to determine which speciation mode a species underwent in the past using various approaches or combinations thereof: species-level phylogenies, range overlaps, symmetry in range sizes between sister species pairs, and species movements within geographic ranges. Molecular clock dating methods are also often employed to accurately gauge divergence times that reflect the fossil or geological record (such as with the snapping shrimp separated by the closure of the Isthmus of Panama or speciation events within the genus Cyclamen). Other techniques used today have employed measures of gene flow between populations, ecological niche modelling (such as in the case of the Myrtle and Audubon's warblers or the environmentally-mediated speciation taking place among dendrobatid frogs in Ecuador), and statistical testing of monophyletic groups. Biotechnological advances have allowed for large scale, multi-locus genome comparisons (such as with the possible allopatric speciation event that occurred between ancestral humans and chimpanzees), linking species' evolutionary history with ecology and clarifying phylogenetic patterns.
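One of the range-based analytical approaches mentioned above reduces to a simple statistic: the overlap between two sister species' ranges divided by the smaller range, examined against node age. Young, mostly non-overlapping pairs whose overlap grows with age are consistent with allopatric divergence followed by secondary range expansion. The sketch below is a minimal, hypothetical illustration of that statistic with invented areas and ages, not an implementation from any cited study.

```python
def degree_of_sympatry(area_overlap, range_a, range_b):
    """Range overlap divided by the smaller species' range: 0 suggests
    present-day allopatry, 1 complete sympatry."""
    return area_overlap / min(range_a, range_b)

# hypothetical sister pairs: (overlap km^2, range A, range B, age in My)
pairs = [(0.0, 1200, 800, 0.5), (150, 2000, 900, 2.1), (850, 1500, 900, 5.3)]
for overlap, a, b, age in pairs:
    print(f"age {age} My: sympatry = {degree_of_sympatry(overlap, a, b):.2f}")
```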
References
Further reading
Mathematical models of reproductive isolation
Biogeography
Ecology
Evolutionary biology
Speciation | Allopatric speciation | [
"Biology"
] | 6,988 | [
"Evolutionary biology",
"Evolutionary processes",
"Speciation",
"Biogeography",
"Ecology"
] |
334,998 | https://en.wikipedia.org/wiki/HP%2049/50%20series | The HP 49/50 series are Hewlett-Packard (HP) manufactured graphing calculators. They are the successors of the HP 48 series.
There are five calculators in the 49/50 series of HP graphing calculators. These calculators have both algebraic and RPN entry modes, and can perform numeric and symbolic calculations using the built-in Computer Algebra System (CAS), which is an improved ALG48 and Erable combination from the HP 48 series.
Along with the HP 15C and the HP 48, it is widely considered one of the greatest calculators ever designed for engineers, scientists, and surveyors. It has advanced functions suitable for applications in mathematics, linear algebra, physics, statistical analysis, numerical analysis, computer science, and other fields.
Although out of production, its popularity has led to high prices on the used market.
HP 49G
The HP 49G (F1633A, F1896A), was released in August 1999.
The 49G incorporated many of the most powerful interface and mathematics tools available on the HP 48 series into its firmware, including the ability to easily decompile and compile both SysRPL and Saturn assembly code on the unit.
The 49G was the first HP calculator to use flash memory and have an upgradable firmware. In addition, it had a hard sliding case as opposed to the soft pouches supplied with the HP 48 series. Almost the same hardware is also used by the HP 39G and HP 40G.
The last officially supported firmware update for the 49G calculator was 1.18, but several unofficial firmware versions were released by the developers. The final firmware version was 1.19-6. Several firmware versions for the successor hp 49g+ and HP 50g calculators have also been released in builds intended for PC emulation software that lacked full utilization of the successors' ARM CPU. Until at least firmware version 2.09, those emulator builds could be installed on the original HP 49G as well.
In 2003, the CAS source code of the 49G firmware was released under the LGPL. In addition, this release included an interactive geometry program and some commands to allow compatibility with certain programs written for the newer 49g+ calculator. Due to licensing restrictions, the recompiled firmware cannot be redistributed.
hp 49g+
In August 2003, Hewlett-Packard released the hp 49g+ (F2228A). This unit had metallic gold coloration and was backward compatible with the HP 49G. It was designed and manufactured by Kinpo Electronics for HP.
This calculator featured an entirely new processor architecture, USB (Mini-B) and IrDA (IrCOMM) infrared communication, memory expansion via an SD (SDSC/MMC) card, and a slightly larger screen, as well as other improvements over the previous model.
The calculator system did not run directly on the new ARM processor, but rather on an emulation layer for the older Saturn processors found in previous HP calculators. In principle, the firmware for the calculator is identical to that for the 49G, but it gets automatically patched in the course of development to replace some code sequences by special virtual "Saturn+" instructions which bypass the emulation and run natively on the underlying ARM processor in order to improve the calculator's speed. This allowed the 49g+ to maintain binary-level compatibility with most of the programs written for the HP 49G calculator, as well as source code-level compatibility with many written for the HP 48 series.
Despite the emulation, the 49g+ was still much faster than any older model of HP calculator. The speed increase over the HP 49G is around 3–7 times depending on the task. It is even possible to run programs written for the ARM processor thus bypassing the emulation layer completely. A port of the GNU C compiler is also available (see HPGCC below).
hp 48gII
The hp 48gII (F2226A), which was announced on 20 October 2003, was not a replacement for the HP 48 series as its name suggested. Rather it was a 49g+, also with an ARM processor (unlike the HP 48G), but with reduced memory, no expansion via an SD memory card, lower clock speed, a smaller screen, and a non-flashable firmware. This calculator seems to target users that desire mathematical capability, but have no desire to install many programs. The original 2003 version had 128 KB RAM and ran on 3 AAA batteries, whereas the second 2007 version (based on the Apple V2 platform) needs four AAA batteries and comes with 256 KB RAM, added a USB (Mini-B) port and features a better keyboard.
HP 50g
The HP 50g (F2229A) is the latest calculator in the 49/50 series, introduced in 2006. The most apparent change is a revised color scheme, returning the unit to a more traditional HP calculator appearance. Using black plastic for the entire body, white, orange and yellow are used for function shift keys. The back shell is textured more deeply than the 49g+ to provide a more secure grip.
In 2009/2010, a blue and white color scheme variant (NW240AA) specifically tailored for high-contrast was introduced as well. It was also designed to aid color-blind users. In 2011/2012, a slightly different blue and white color scheme was introduced.
The form and size of the calculator shell is identical to the 49g+ series, but four AAA batteries are used as opposed to three in previous models. In addition to all the features of the 49g+, the 50g also includes the full equation library found in the HP 48G series (also available for the 49g+ with firmware 2.06 and above), as well as the periodic table library originally available as a plug-in card for the 48S series, as of firmware 2.15/ 2.16 (the latest, as of 2015), and has a 3.3 V TTL-level asynchronous serial port in addition to IrDA and USB Mini-B ports of the 49g+. Like the 49g+, the range of the infrared port has been limited to about 10 cm (4 inches). Like for the 49g+, the firmware is in principle identical to that for the 49G, but gets automatically patched in the course of development.
The asynchronous serial port is not a true RS-232 port as it uses different voltage levels and a non-standard connector. An external converter/adapter is required to interface with RS-232 equipment.
The keyboard, the most often criticized feature of the 49g+ calculators, uses the new design introduced on the very last 49g+ calculators (hinged keys) to eliminate previous problems.
A worldwide announcement regarding the availability of this calculator was made by HP in September 2006, and official details were available on the HP calculators webpage. The calculator was officially discontinued in 2015. It was HP's last calculator to support RPL; later calculators like the HP Prime support only RPN (not RPL), in a variant named Advanced RPN.
Programming
The HP 49/50 series of calculators supports both algebraic entry and a stack-based programming language named RPL, a combination of Reverse Polish Notation (RPN) and Lisp. RPL adds the concepts of lists and functions to stack-based programming, allowing the programmer to pass unevaluated code as arguments to functions, or return unevaluated code from a function by leaving it on the stack.
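The stack discipline that RPL builds on is easiest to see in miniature. The sketch below is a tiny RPN evaluator written in Python (not RPL itself), showing how operands are pushed onto the stack and operators consume them from the top.

```python
def rpn_eval(tokens):
    """Minimal RPN evaluator: operands push onto the stack, operators
    pop their arguments and push the result -- the discipline that RPL
    extends with lists and unevaluated program objects."""
    ops = {"+": lambda a, b: a + b, "-": lambda a, b: a - b,
           "*": lambda a, b: a * b, "/": lambda a, b: a / b}
    stack = []
    for tok in tokens:
        if tok in ops:
            b = stack.pop()  # top of stack is the second operand
            a = stack.pop()
            stack.append(ops[tok](a, b))
        else:
            stack.append(float(tok))
    return stack

# (3 + 4) * 2, entered postfix as on the calculator's stack
print(rpn_eval("3 4 + 2 *".split()))  # [14.0]
```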
The highest level language is User RPL, consisting of sequences of built-in postfix operations, optionally including loops and conditionals. Every User RPL command checks the stack for its particular arguments and returns an error if they are incorrect or not present.
Below User RPL is System RPL (SysRPL). Most System RPL commands lack argument checking and are defined only for specific argument types (e.g. short integer vs. long integer), making System RPL programs run dramatically faster than equivalent User RPL ones. In addition, System RPL includes many advanced functions that are not available in User RPL. System RPL programs can be created without the use of PC software (although it is available), thanks to the calculator's built-in compiler, MASD. MASD also can compile Saturn assembly language and, with the latest firmware revision for the 49g+/50g, ARMv4T assembly language on the calculator itself. Many tools exist to assist programmers and make the calculator a powerful programming environment.
Saturn assembly, and, on the 49g+/50g, ARM assembly and C, are also programmable using desktop based compilers. See also the programs available for the HP 48 series.
No model of this series is programmable in HP PPL.
HPGCC for the 49g+/50g
HPGCC is an implementation of the GCC compiler, released under the GNU GPL. It is now mainly targeted at the ARM-based 49g+/50g calculators. Previous versions of HPGCC supported the other ARM-based calculator models (the 48gII, and the hp 39g+/HP 39gs/HP 40gs), but this was removed due to lack of interest and compatibility issues. Formally, HPGCC is a cross-compiler; it compiles code for the ARM-based HP calculators, but runs on a PC rather than the target system.
The latest version of HPGCC offers many enhancements from earlier versions. Most notably, the compiled code is now in ARM Thumb mode by default, resulting in great reduction in code size with little performance hit. Besides implementing most of ANSI C, there are device-specific libraries that allow access to things like the calculator's RPN stack, memory and piezoelectric buzzer. The GCC compiler itself is the property of the Free Software Foundation, and they state that its use does not impose any particular licensing restrictions on any of its output. However, the libraries included with HPGCC, including routines necessary to actually invoke any HPGCC-compiled program on an actual calculator, are released under a modified GPL license, contrary to GCC on many other platforms which use a more permissive license for their libraries. Thus any programs that link against them can only be distributed if they are also released under the GPL (with an exception for "non-profit" software).
Linux, Windows, and Mac OS X versions are available for download. The Windows version also includes a version of Programmer's Notepad for a basic IDE.
Emulators
There are several emulators available for the HP 49G calculator. A version of EMU48 is available in the Debug4x IDE that allows emulation of most of the features of the 49g+/50g, but will not execute any ARM-based code.
An ARM-based emulator, x49gp, has been released and allows the true emulation of the 49g+/50g ARM processor and successfully runs HPGCC 2 and 3 compiled programs. The emulator is only available for Linux and Mac OS X and must be compiled from the source. (See README.QUICKSTART for details.)
The commercial version of the application m48 also supports HP 49G. So far, there are no 49g+/50g emulators for smartphones with the exception of HP 50g for iPhone and iPad released in October 2012.
An emulator for Microsoft Windows Mobile (PPC, smartphones) is available.
Other 49G/49g+/50g emulators for Android (without ARM support).
In 2012, Hewlett-Packard released an emulator named HP 50g Virtual Calculator (version 3.1.29/3.1.30 with firmware 2.16 and support for the StreamSmart 410) for Windows.
Firmware updates
The 49/50 series allows the user to update the firmware to gain enhanced features or bug fixes. Official firmware updates are released by Hewlett-Packard. Unsupported unofficial firmware updates are also available at sites such as hpcalc.org.
See also
Comparison of HP graphing calculators
HP calculators
RPL character set
newRPL (for HP 49g+ and 50g or SwissMicros DM42)
DB48X (for SwissMicros DM42)
References
Further reading
Searchable
(NB. A database of known bugs and problems in the calculator's firmware, both solved and unresolved ones.)
(NB. A thread on an unresolved problem in the calculator's firmware.)
(NB. A thread on an unresolved problem in the calculator's firmware.)
External links
Official HP support pages for the individual calculators in the series
Computer algebra systems
Graphing calculators
| HP 49/50 series | [
"Mathematics"
] | 2,765 | [
"Computer algebra systems",
"Mathematical software"
] |
3,033,116 | https://en.wikipedia.org/wiki/Victor%20Glushkov | Victor Mikhailovich Glushkov (; August 24, 1923 – January 30, 1982) was a Soviet computer scientist, the founding father of information technology in the Soviet Union and one of the founding fathers of Soviet cybernetics.
Biography
He was born in Rostov-on-Don, Russian SFSR, in the family of a mining engineer. Glushkov graduated from Rostov State University in 1948, and in 1952 proposed solutions to Hilbert's fifth problem and defended his thesis at Moscow State University.
In 1956 he began working with computers and worked in Kyiv as a Director of the Computational Center of the Academy of Science of Ukraine. In 1958 he became a member of the Communist Party. In 1962 Glushkov established the famous Institute of Cybernetics of the National Academy of Science of Ukraine and became its first director.
He made contributions to the theory of automata. He and his followers (Kapitonova, Letichevskiy and others) successfully applied that theory to enhance the construction of computers. His book on that topic, Synthesis of Digital Automata, became well known. For that work, he was awarded the Lenin Prize in 1964 and elected as a member of the Academy of Sciences of the USSR.
He greatly influenced many other fields of theoretical computer science (including the theory of programming and artificial intelligence) as well as its applications in the USSR. He published nearly 800 printed works.
One of his great practical goals was the creation of the National Automated System for Computation and Information Processing (OGAS), consisting of a computer network to manage the allocation of resources and information among organizations in the national economy, which would represent a higher form of socialist planning than the extant centrally planned economy. This ambitious project was ahead of its time, first being proposed and modeled in 1962. It received opposition from many senior Communist Party leaders who felt the system threatened Party control of the economy. By the early 1970s official interest in this system had ended.
Glushkov founded a Kyiv-based Chair of Theoretical Cybernetics and Methods of Optimal Control at the Moscow Institute of Physics and Technology in 1967 and a Chair of Theoretical Cybernetics at Kyiv State University in 1969.
The Institute of Cybernetics of the National Academy of Science of Ukraine, which he created, is named after him.
Honors and awards
Member of the National Academy of Science of Ukraine since 1961.
Member of the USSR Academy of Sciences since 1964.
Member of the German Academy of Sciences Leopoldina since 1970.
Lenin Prize, 1964
Order of Lenin, 1967, 1975
USSR State Prize, 1968, 1977
Hero of Socialist Labor, 1969
Ukrainian State Prize, 1970, 1981
Order of the October Revolution, 1973
Computer Pioneer Award (IEEE), For digital automation of computer architecture, 1996.
See also
Analitik
Project Cybersyn
Scientific socialism
References
External links
Victor Glushkov - Founder of Information Technologies in Ukraine and former USSR
Pioneers of Soviet Computing
1923 births
1982 deaths
Soviet computer scientists
Soviet mathematicians
Russian inventors
Soviet cyberneticists
Heroes of Socialist Labour
Recipients of the Order of Lenin
Recipients of the USSR State Prize
Recipients of the Lenin Prize
Full Members of the USSR Academy of Sciences
Members of the National Academy of Sciences of Ukraine
Foreign members of the Bulgarian Academy of Sciences
Academic staff of the Moscow Institute of Physics and Technology
Information technology in Ukraine
Members of the German National Academy of Sciences Leopoldina
Members of the German Academy of Sciences at Berlin
Russian scientists
Government by algorithm | Victor Glushkov | [
"Engineering"
] | 687 | [
"Government by algorithm",
"Automation"
] |
3,033,136 | https://en.wikipedia.org/wiki/Point%20of%20beginning | The point of beginning is a surveyor's mark at the beginning location for the wide-scale surveying of land.
An example is the Beginning Point of the U.S. Public Land Survey that led to the opening of the Northwest Territory, and is the starting point of the surveys of almost all other lands to the west, reaching all the way to the Pacific Ocean. On September 30, 1785, Thomas Hutchins, first and only Geographer of the United States, began surveying the Seven Ranges at the point of beginning.
Points of beginning
Beginning Point of the U.S. Public Land Survey – East Liverpool, Ohio
See also
Initial point
References
External links
Point of Beginning in Wisconsin
Surveying | Point of beginning | [
"Engineering"
] | 140 | [
"Surveying",
"Civil engineering"
] |
3,033,151 | https://en.wikipedia.org/wiki/Maltese%20units%20of%20measurement | In modern usage, metric is used almost exclusively in commercial transactions. These units are mostly historical, although they are still used in some limited contexts and in Maltese idioms and set phrases. Many of these terms are directly related to Arabic units and some to Sicilian units. The Weights and Measures Ordinance of 1921 established uniformity in the conversion of such weights and measures. All these measures were defined as simple multiples of the Imperial units then in use in Britain.
Length
Length units were typically used for measuring goods and building sizes. Distances were traditionally measured in terms of travel time, which explains the lack of large-scale units.
Area
Land
In 1921, these units were redefined with respect to the British Imperial standard. These values reflect this change.
Square
Volume
These units were all (except for the cubic units) defined in 1921 relative to the British Imperial gallon, which was defined in the 1824 act as equal to 10 pounds of water at a specified temperature and air pressure. This (approximately 4.5461 litres) is very slightly larger than the modern definition (exactly 4.54609 litres).
Beer, wine, and spirits measure
None of the units from this group are mentioned in TY Maltese.
Milk and Oil measure
Dry
None of these units are mentioned in TY Maltese. Note that the sources give two conflicting values for the siegħ, expressed as different fractions of the tomna.
Cubic
Mass
All the Maltese mass units were redefined relative to the British Imperial ton in 1921. Before this, the units were presumably based on an Arabic standard. All equivalent measures listed in pounds below are exact values.
Note: there are two distinct units which are named kwart.
Money
This system was used during the rule of the Knights of St. John in Malta. Subsequent currencies in use were sterling and the Maltese pound. Malta has since adopted the euro.
References
Maltese-English Dictionary, appendix 10, p1658, by Aquilina, published by Midsea Books Ltd. No ISBN available.
Teach Yourself Maltese, pp125–6
Att dwar il-Metroloġija Kap. 454
Maltese
Science and technology in Malta
Systems of units
Units of measurement by country | Maltese units of measurement | [
"Mathematics"
] | 429 | [
"Obsolete units of measurement",
"Systems of units",
"Units of measurement by country",
"Quantity",
"Units of measurement"
] |
3,033,334 | https://en.wikipedia.org/wiki/724%20Hapag | 724 Hapag is a minor planet orbiting the Sun in the asteroid belt that was found by Austrian astronomer Johann Palisa in 1911 and named after the German shipping company Hamburg America Line. It was assigned a provisional name of 1911 NC, then became a lost asteroid until it was rediscovered in 1988 as by Tsutomu Hioki and N. Kawasato at Okutama, Japan.
Photometric observations of this asteroid at the Organ Mesa Observatory in Las Cruces, New Mexico in 2011 gave a light curve with a period of 3.1305 ± 0.0001 hours and a brightness variation of 0.11 ± 0.01 in magnitude.
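A rotation period like this is typically found and checked by phase-folding the photometry: observation times are reduced modulo a trial period so that, at the true period, the brightness measurements line up into one coherent curve. The sketch below folds synthetic data generated with the published period and amplitude; everything else about it is invented for illustration and is not the actual Organ Mesa dataset.

```python
import math
import random

def phase_fold(times_h, mags, period_h):
    """Fold observation times on a trial period; a correct period lines
    the measurements up into a coherent light curve in phase (0..1)."""
    return sorted(((t % period_h) / period_h, m)
                  for t, m in zip(times_h, mags))

# synthetic light curve: period 3.1305 h, 0.11 mag peak-to-peak amplitude
rng = random.Random(0)
times = [rng.uniform(0, 48) for _ in range(60)]
mags = [0.055 * math.sin(2 * math.pi * t / 3.1305) + rng.gauss(0, 0.01)
        for t in times]
for phase, mag in phase_fold(times, mags, 3.1305)[:5]:
    print(f"phase {phase:.2f}: {mag:+.3f} mag")
```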
References
External links
Background asteroids
Recovered astronomical objects | 724 Hapag | [
"Astronomy"
] | 152 | [
"Recovered astronomical objects",
"Astronomical objects"
] |
3,033,696 | https://en.wikipedia.org/wiki/Nikolai%20Chebotaryov | Nikolai Grigorievich Chebotaryov (often spelled Chebotarov or Chebotarev, , ) ( – 2 July 1947) was a Soviet mathematician. He is best known for the Chebotaryov density theorem.
He was a student of Dmitry Grave, a Russian mathematician. Chebotaryov worked on the algebra of polynomials, in particular examining the distribution of the zeros. He also studied Galois theory and wrote a textbook on the subject titled Basic Galois Theory.
His ideas were used by Emil Artin to prove the Artin reciprocity law.
He worked with his student Anatoly Dorodnov on a generalization of the quadrature of the lune, and proved the conjecture now known as the Chebotarev theorem on roots of unity.
Early life
Nikolai Chebotaryov was born on 15 June 1894 in Kamianets-Podilskyi, Russian Empire (now in Ukraine). He entered the department of physics and mathematics at Kyiv University in 1912. In 1928, he became a professor at Kazan University, remaining there for the rest of his life. He died on 2 July 1947. He was an atheist. On 14 May 2010, a memorial plaque for Nikolai Chebotaryov was unveiled on the main administration building of I.I. Mechnikov Odessa National University.
References
1894 births
1947 deaths
People from Kamianets-Podilskyi
People from Kamenets-Podolsky Uyezd
Ukrainian people of Russian descent
20th-century Russian mathematicians
Soviet mathematicians
Russian atheists
Ukrainian mathematicians
Number theorists
Corresponding Members of the USSR Academy of Sciences
Recipients of the Stalin Prize
Recipients of the Order of Lenin
Recipients of the Order of the Red Banner of Labour
Russian scientists | Nikolai Chebotaryov | [
"Mathematics"
] | 349 | [
"Number theorists",
"Number theory"
] |
3,033,762 | https://en.wikipedia.org/wiki/Jan%20%C5%9Aleszy%C5%84ski | Ivan Vladislavovich Sleshinsky or Jan Śleszyński () (23 July 1854 – 9 March 1931) was a Polish-Russian mathematician. He was born in Lysianka, Russian Empire to Polish parents.
Life
Śleszyński's main work was on continued fractions, least squares and axiomatic proof theory based on mathematical logic. He and Alfred Pringsheim, working separately, proved what is now called the Śleszyński–Pringsheim theorem.
His most important publications include: "Teoria dowodu" ("The theory of proof") in two volumes (1925, 1929), and "Teoria wyznaczników" ("The theory of determinants") (1926). He is buried at Rakowicki Cemetery.
See also
History of philosophy in Poland
List of Poles
References
External links
1854 births
1931 deaths
19th-century Polish mathematicians
Mathematicians from the Russian Empire
Burials at Rakowicki Cemetery
Polish mathematicians
Polish logicians
Proof theorists | Jan Śleszyński | [
"Mathematics"
] | 211 | [
"Proof theorists",
"Proof theory"
] |
3,034,047 | https://en.wikipedia.org/wiki/Ecrasite | Ecrasite is an explosive material which is unaffected by moisture, shock or fire. It is a mixture of ammonium salts of cresol, phenol and various nitrocresols and nitrophenols principally trinitrocresol and picric acid. It was invented in 1888-1889 by two Austrian engineers named Siersch and Kubin, and used in Austria-Hungary to load artillery shells. Ecrasite was patented secretly, and its composition was once unknown.
Ecrasite is prepared by the partial nitration of a crude mixture of cresol and phenol with a mixture of concentrated sulfuric and nitric acids and the neutralisation of the product with ammonia to produce a crude salt similar to ammonium picrate.
Ecrasite is a bright yellow solid. It is waxy to the touch and melts at about 100 °C. When subjected to open flame, it burns without detonation, unless confined. It is insensitive to friction. It requires a detonator for initiation. Its general adoption was hindered by several unexplained explosions during loading into shells, which might have been caused by the creation of unstable metal salts of trinitrocresol and/or trinitrophenol when the explosive came in contact with metals or alloys such as copper, brass (widely used for manufacturing detonator parts) and possibly others.
References
Explosives | Ecrasite | [
"Chemistry"
] | 291 | [
"Explosives",
"Explosions"
] |
3,034,130 | https://en.wikipedia.org/wiki/Tattler%20%28bird%29 | The tattlers are the two very similar bird species in the shorebird genus Tringa. They formerly had their own genus, Heteroscelus. The old genus name means "different leg" in Greek, referring to the leg scales that differentiate the tattlers from their close relatives, the shanks.
The species are:
Grey-tailed tattler, Tringa brevipes (formerly Heteroscelus brevipes)
Wandering tattler, Tringa incana (formerly Heteroscelus incanus)
Tattlers resemble a common redshank (T. totanus) in shape and size, but not in color. Their upper parts, underwings, face and neck are greyish, and the belly and the weak supercilium are white, with some greyish streaking on the underside in breeding plumage. They have short yellowish legs and a bill with a pale base and dark tip.
Certain identification to species depends on details like the length of the nasal groove and scaling on the tarsus. Birds in breeding plumage can also (with some experience) be identified by the underside pattern: the grey-tailed tattler has fine barring on throat, breast and flanks only, which appear light grey from a distance; the rest of the underside is pure white. The wandering tattler has a coarser barring, still visible from quite far away, all the way from the throat to the undertail coverts. In non-breeding plumage, observers with much experience will note that the wandering tattler is an overall darker bird with very weak supercilia, whereas the grey-tailed tattler is lighter – particularly on the face, due to their stronger supercilia. Their normal calls also differ strongly; the grey-tailed tattler has a disyllabic whistle, whereas the wandering tattler has a rippling trill. But when they flee from the observer or are otherwise startled or excited, both species alike give a variety of longer or shorter alarm calls.
Tattlers are strongly migratory and winter in the tropics and subtropics on muddy and sandy coasts. These are not particularly gregarious birds and are seldom seen in large flocks except at roosts. These birds forage on the ground or water, picking up food by sight. They eat insects, crustaceans and other invertebrates.
Their breeding habitat is stony riverbeds. They nest on the ground, but these waders will perch in trees and sometimes use old nests of other birds.
Footnotes
References
Banks, Richard C.; Cicero, Carla; Dunn, Jon L.; Kratter, Andrew W.; Rasmussen, Pamela C.; Remsen, J.V. Jr.; Rising, James D. & Stotz, Douglas F. (2006): Forty-seventh Supplement to the American Ornithologists' Union Check-list of North American Birds. The Auk 123 (3): 926–936. PDF fulltext
Hayman, Peter; Marchant, John & Prater, Tony (1986): Shorebirds: an identification guide to the waders of the world. Houghton Mifflin, Boston.
VanderWerf, Eric A. (2006): Observations on the birds of Kwajalein Atoll, including six new species records for the Marshall Islands. Micronesica 38 (2): 221–237. PDF fulltext
Tringa
Shorebirds
Bird common names
Paraphyletic groups | Tattler (bird) | [
"Biology"
] | 715 | [
"Phylogenetics",
"Paraphyletic groups"
] |
3,034,280 | https://en.wikipedia.org/wiki/Ciprian%20Manolescu | Ciprian Manolescu (born December 24, 1978) is a Romanian-American mathematician, working in gauge theory, symplectic geometry, and low-dimensional topology. He is currently a professor of mathematics at Stanford University.
Biography
Manolescu completed his first eight classes at School no. 11 Mihai Eminescu and his secondary education at Ion Brătianu High School in Pitești. He completed his undergraduate studies and PhD at Harvard University under the direction of Peter B. Kronheimer. He was the winner of the Morgan Prize, awarded jointly by AMS-MAA-SIAM, in 2002. His undergraduate thesis was on Finite dimensional approximation in Seiberg–Witten theory, and his PhD thesis topic was A spectrum valued TQFT from the Seiberg–Witten equations.
In early 2013, he released a paper detailing a disproof of the triangulation conjecture for manifolds of dimension 5 and higher. For this paper, he received the E. H. Moore Prize from the American Mathematical Society.
Awards and honors
He was among the recipients of the Clay Research Fellowship (2004–2008).
In 2012, he was awarded one of the ten prizes of the European Mathematical Society for his work on low-dimensional topology, and particularly for his role in the development of combinatorial Heegaard Floer homology.
He was elected as a member of the 2017 class of Fellows of the American Mathematical Society "for contributions to Floer homology and the topology of manifolds".
In 2018, he was an invited speaker at the International Congress of Mathematicians (ICM) in Rio de Janeiro.
In 2020, he received a Simons Investigator Award. The citation reads: "Ciprian Manolescu works in low-dimensional topology and gauge theory. His research is centered on constructing new versions of Floer homology and applying them to questions in topology. With collaborators, he showed that many Floer-theoretic invariants are algorithmically computable. He also developed a new variant of Seiberg-Witten Floer homology, which he used to prove the existence of non-triangulable manifolds in high dimensions."
Competitions
He has one of the best records ever in mathematical competitions:
He holds the sole distinction of writing three perfect papers at the International Mathematical Olympiad: Toronto, Canada (1995); Bombay, India (1996); Mar del Plata, Argentina (1997).
Manolescu is a three-time Putnam Fellow, having placed in the top five in the William Lowell Putnam Mathematical Competition in 1997, 1998, and 2000.
Selected works
References
External links
Manolescu's Stanford Page
The Clay Mathematics Institute page
21st-century Romanian mathematicians
21st-century American mathematicians
Topologists
Harvard University alumni
University of California, Los Angeles faculty
People from Alexandria, Romania
1978 births
Living people
Romanian emigrants to the United States
International Mathematical Olympiad participants
Geometers
Fellows of the American Mathematical Society
Putnam Fellows | Ciprian Manolescu | [
"Mathematics"
] | 606 | [
"Topologists",
"Topology",
"Geometers",
"Geometry"
] |
3,034,286 | https://en.wikipedia.org/wiki/Electronic%20lock | An electronic lock (or electric lock) is a locking device which operates by means of electric current. Electric locks are sometimes stand-alone with an electronic control assembly mounted directly to the lock. Electric locks may be connected to an access control system, the advantages of which include: key control, where keys can be added and removed without re-keying the lock cylinder; fine access control, where time and place are factors; and transaction logging, where activity is recorded. Electronic locks can also be remotely monitored and controlled, both to lock and to unlock.
Operation
Electric locks use magnets, solenoids, or motors to actuate the lock by either supplying or removing power. Operating the lock can be as simple as using a switch, for example an apartment intercom door release, or as complex as a biometric based access control system.
There are two basic types of locks: those based on a "preventing mechanism" and those based on an operation mechanism.
Types
Electromagnetic lock
The most basic type of electronic lock is a magnetic lock (informally called a "mag lock"). A large electromagnet is mounted on the door frame and a corresponding armature is mounted on the door. When the magnet is powered and the door is closed, the armature is held fast to the magnet. Mag locks are simple to install and are very attack-resistant. One drawback is that improperly installed or maintained mag locks can fall on people, and also that one must unlock the mag lock both to enter and to leave. This has caused fire marshals to impose strict rules on the use of mag locks and access control practice in general. Additionally, NFPA 101 (the Life Safety Code), as well as the ADA (Americans with Disabilities Act), require "no prior knowledge" and "one simple movement" to allow "free egress". This means that in an emergency, a person must be able to move to a door and immediately exit with one motion (requiring no push buttons, having another person unlock the door, reading a sign, or "special knowledge").
Other problems include a lag time (delay), because the collapsing magnetic field holding the door shut does not release instantaneously. This lag time can cause a user to collide with the still-locked door. Finally, mag locks fail unlocked, in other words, if electrical power is removed they unlock. This could be a problem where security is a primary concern. Additionally, power outages could affect mag locks installed on fire listed doors, which are required to remain latched at all times except when personnel are passing through. Most mag lock designs would not meet current fire codes as the primary means of securing a fire listed door to a frame. Because of this, many commercial doors (this typically does not apply to private residences) are moving over to stand-alone locks, or electric locks installed under a Certified Personnel Program.
The first mechanical recodable card lock was invented in 1976 by Tor Sørnes, who had worked for VingCard since the 1950s. The first card lock order was shipped in 1979 to Westin Peachtree Plaza Hotel, Atlanta, US. This product triggered the evolution of electronic locks for the hospitality industry.
Electronic strikes
Electric strikes (also called electric latch release) replace a standard strike mounted on the door frame and receive the latch and latch bolt. Electric strikes can be simplest to install when they are designed for one-for-one drop-in replacement of a standard strike, but some electric strike designs require that the door frame be heavily modified. Installation of a strike into a fire listed door (for open backed strikes on pairs of doors) or the frame must be done under listing agency authority, if any modifications to the frame are required (mostly for commercial doors and frames). In the US, since there is no current Certified Personnel Program to allow field installation of electric strikes into fire listed door openings, listing agency field evaluations would most likely require the door and frame to be de-listed and replaced.
Electric strikes can allow mechanical free egress: a departing person operates the lockset in the door, not the electric strike in the door frame. Electric strikes can also be either "fail unlocked" (except in Fire Listed Doors, as they must remain latched when power is not present), or the more-secure "fail locked" design. Electric strikes are easier to attack than a mag lock. It is simple to lever the door open at the strike, as often there is an increased gap between the strike and the door latch. Latch guard plates are often used to cover this gap.
Electronic deadbolts and latches
Electric mortise and cylindrical locks are drop-in replacements for door-mounted mechanical locks. An additional hole must be drilled in the door for electric power wires. Also, a power transfer hinge is often used to get the power from the door frame to the door. Electric mortise and cylindrical locks allow mechanical free egress, and can be either fail unlocked or fail locked. In the US, UL rated doors must retain their rating: in new construction doors are cored and then rated, but in retrofits, the doors must be re-rated.
Electrified exit hardware, sometimes called "panic hardware" or "crash bars", are used in fire exit applications. A person wishing to exit pushes against the bar to open the door, making it the easiest of mechanically-free exit methods. Electrified exit hardware can be either fail unlocked or fail locked. A drawback of electrified exit hardware is their complexity, which requires skill to install and maintenance to assure proper function. Only hardware labeled "Fire Exit Hardware" can be installed on fire listed doors and frames and must meet both panic exit listing standards and fire listing standards.
Motor-operated locks are used throughout Europe. A European motor-operated lock has two modes, day mode where only the latch is electrically operated, and night mode where the more secure deadbolt is electrically operated.
In South Korea, most homes and apartments have installed electronic locks, which are currently replacing the lock systems in older homes. South Korea mainly uses a lock system by Gateman.
Passive electronic lock
The "passive" in passive electronic locks means no power supply. Like electronic deadbolts, it is a drop-in replacement for mechanical locks. But the difference is that passive electronic locks do not require wiring and are easy to install.
The passive electronic lock integrates a miniature single-chip microcomputer. There is no mechanical keyhole; only three metal contacts are retained. To unlock, the electronic key is inserted into the keyhole of the passive electronic lock so that the three contacts on the head end of the key touch the three contacts on the lock. The key then supplies power to the lock and reads the lock's ID number for verification. When verification passes, the key powers the coil in the lock; the coil generates a magnetic field and drives the magnet in the lock to release it. Turning the key then drives the mechanical structure in the lock and unlocks the lock body. After a successful unlock, the key records the ID number of the lock and the time of unlocking. A passive electronic lock can only be unlocked by a key with unlocking authority; without that authority, unlocking fails.
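The unlock sequence above is essentially a small key-driven authentication protocol. The following Python sketch illustrates the flow; the class and method names (PassiveLock, ElectronicKey, try_unlock) and the data layout are hypothetical illustrations, not any vendor's API.

```python
# Sketch of the passive-electronic-lock flow described above; all names
# and data structures are illustrative assumptions, not a real product.
import time

class PassiveLock:
    """Lock side: unpowered, exposes only its ID and a coil driver."""
    def __init__(self, lock_id):
        self.lock_id = lock_id

    def energize_coil(self):
        # Current is supplied by the key; the coil's magnetic field
        # drives the magnet that frees the mechanical blocking element.
        return True

class ElectronicKey:
    """Key side: carries the battery, the permissions and the log."""
    def __init__(self, authorized_ids):
        self.authorized_ids = set(authorized_ids)
        self.audit_log = []

    def try_unlock(self, lock):
        lock_id = lock.lock_id                    # power up, read ID
        if lock_id not in self.authorized_ids:    # verify permission
            self.audit_log.append((lock_id, time.time(), "denied"))
            return False
        unlocked = lock.energize_coil()           # power the coil
        self.audit_log.append((lock_id, time.time(), "unlocked"))
        return unlocked

key = ElectronicKey(authorized_ids=["CABINET-07"])
print(key.try_unlock(PassiveLock("CABINET-07")))  # True
print(key.try_unlock(PassiveLock("CABINET-08")))  # False
```

Keeping the battery, the permission list and the audit log on the key side is what allows the lock itself to remain completely unwired.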
Passive electronic locks are currently used in a number of specialized fields, such as power utilities, water utilities, public safety, transportation, data centers, etc.
Programmable lock
The programmable electronic lock system is realized with programmable keys, electronic locks and software. A key unlocks a lock when the identification code stored in the key matches the identification code of the lock. The internal structure of the lock contains a cylinder with a contact (the lock slot) that touches the key, and an electronic control device that stores the lock's identification code, verifies received codes and responds (unlocking or not). The key contains a power supply device, usually a rechargeable or replaceable battery, used to drive the system, and an electronic storage and control device that stores the identification codes of the locks it may open.
The software is used to set and modify the data of each key and lock.
Using this type of key and lock control system requires no change in user habits. In addition, compared with a purely mechanical system, its advantage is that one key can open multiple locks instead of carrying a bunch of keys: a single key can contain many lock identification codes, so unlock permissions can be set for each individual user.
Authentication methods
A feature of electronic locks is that they can be deactivated or opened by authentication, without the use of a traditional physical key:
Numerical codes, passwords, and passphrases
Perhaps the most common form of electronic lock uses a keypad to enter a numerical code or password for authentication. Some feature an audible response to each press. Combinations are usually between four and six digits long.
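A minimal sketch of the verification step, assuming a stored four-digit code (the value is made up): real keypad firmware varies, but a constant-time comparison such as Python's hmac.compare_digest avoids leaking how many digits matched through response timing.

```python
# Illustrative keypad-code check, not any vendor's firmware.
import hmac

STORED_CODE = "4921"  # hypothetical stored combination

def check_code(entered: str) -> bool:
    # compare_digest takes the same time whether the first or the last
    # digit differs, defeating simple timing attacks.
    return hmac.compare_digest(entered.encode(), STORED_CODE.encode())

print(check_code("4921"))  # True: release the lock
print(check_code("1234"))  # False
```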
Security tokens
Another means of authenticating users is to require them to scan or "swipe" a security token such as a smart card or similar, or to present a token to the lock. For example, some locks can access stored credentials on a personal digital assistant (PDA) or smartphone, by using infrared, Bluetooth, or NFC data transfer methods.
Biometrics
As biometrics become more and more prominent as a recognized means of positive identification, their use in security systems increases. Some electronic locks take advantage of technologies such as fingerprint scanning, retinal scanning, iris scanning and voice print identification to authenticate users.
RFID
Radio-frequency identification (RFID) is the use of an object (typically referred to as an "RFID tag") applied to or incorporated into a product, animal, or person for the purpose of identification and tracking using radio waves. Some tags can be read from several meters away and beyond the line of sight of the reader. This technology is also used in some modern electronic locks. The technology has been around since before the 1970s but has become much more prevalent in recent years due to its use in things like global supply chain management and pet microchipping.
See also
Access badge
Common Access Card (CAC)
Credential
Electric strike
Electromagnetic lock
Keycard
Physical security
References
Locks (security device)
Electronic circuits
Articles containing video clips | Electronic lock | [
"Engineering"
] | 2,145 | [
"Electronic engineering",
"Electronic circuits"
] |
3,034,318 | https://en.wikipedia.org/wiki/Sternberg%20Astronomical%20Institute | The Sternberg Astronomical Institute ( in Russian), also known as GAISh (), is a research institution in Moscow, Russia, a division of Moscow State University. The institute is named after astronomer Pavel Karlovich Shternberg. It was founded in 1931, on the site of the observatory established by the university in 1831.
The main-belt asteroid 14789 GAISH, discovered by Lyudmila Chernykh at the Crimean Astrophysical Observatory in 1969, was named in its honour. The official naming citation was published on 6 January 2007.
References
External links
Sternberg Astronomical Institute
Research institutes in Russia
Moscow State University
Astronomy institutes and departments
Research institutes in the Soviet Union
Astronomy in the Soviet Union
Research institutes established in 1931
1931 establishments in the Soviet Union | Sternberg Astronomical Institute | [
"Astronomy"
] | 156 | [
"Astronomy institutes and departments",
"Astronomy stubs",
"Astronomy organizations",
"Astronomy organization stubs"
] |
3,034,653 | https://en.wikipedia.org/wiki/Roboteer | The word roboteer refers to those with interests or careers in robotics. It dates back to the 1930s and is also used in 'Future Shock' (1970).
The term roboteer was used by Barbara Krasnov for a story on Deb Huglin, owner of the Robotorium, Inc., in New York City in the early 1980s. Huglin was a lightweight-robotics applications consultant, sculptor, and repatriation archeologist. Huglin worked with Jim Henson on the design and uses of the robotic mitt controller for his experimental television series "Fraggle Rock". Huglin died in a fall in the wilderness near Hemet, California in 2008.
See also
Roboticist
References
External links
Debbie the Roboteer at IMDb.
Robotics | Roboteer | [
"Engineering"
] | 154 | [
"Robotics",
"Automation"
] |
3,035,156 | https://en.wikipedia.org/wiki/Fermi%20acceleration | Fermi acceleration, sometimes referred to as diffusive shock acceleration (a subclass of Fermi acceleration), is the acceleration that charged particles undergo when being repeatedly reflected, usually by a magnetic mirror (see also Centrifugal mechanism of acceleration). It receives its name from physicist Enrico Fermi who first proposed the mechanism. This is thought to be the primary mechanism by which particles gain non-thermal energies in astrophysical shock waves. It plays a very important role in many astrophysical models, mainly of shocks including solar flares and supernova remnants.
There are two types of Fermi acceleration: first-order Fermi acceleration (in shocks) and second-order Fermi acceleration (in the environment of moving magnetized gas clouds). In both cases the environment has to be collisionless in order for the mechanism to be effective. This is because Fermi acceleration only applies to particles with energies exceeding the thermal energies, and frequent collisions with surrounding particles will cause severe energy loss and as a result no acceleration will occur.
First order Fermi acceleration
Shock waves typically have moving magnetic inhomogeneities both preceding and following them. Consider the case of a charged particle traveling through the shock wave (from upstream to downstream). If it encounters a moving change in the magnetic field, this can reflect it back through the shock (downstream to upstream) at increased velocity. If a similar process occurs upstream, the particle will again gain energy. These multiple reflections greatly increase its energy. The resulting energy spectrum of many particles undergoing this process (assuming that they do not influence the structure of the shock) turns out to be a power law:

N(E) dE ∝ E^(−s) dE,

where the spectral index s depends, for non-relativistic shocks, only on the compression ratio r of the shock.
The term "First order" comes from the fact that the energy gain per shock crossing is proportional to , the velocity of the shock divided by the speed of light.
The injection problem
A mystery of first order Fermi processes is the injection problem. In the environment of a shock, only particles whose energies exceed the thermal energy by a large amount (a factor of a few at least) can cross the shock and 'enter the game' of acceleration. It is presently unclear what mechanism causes the particles to initially have energies sufficiently high to do so.
Second order Fermi acceleration
Second order Fermi acceleration relates to the amount of energy gained during the motion of a charged particle in the presence of randomly moving "magnetic mirrors". So, if the magnetic mirror is moving towards the particle, the particle will end up with increased energy upon reflection. The opposite holds if the mirror is receding. This notion was used by Fermi (1949) to explain the mode of formation of cosmic rays. In this case the magnetic mirror is a moving interstellar magnetized cloud. In a random motion environment, Fermi argued, the probability of a head-on collision is greater than a head-tail collision, so particles would, on average, be accelerated. This random process is now called second-order Fermi acceleration, because the mean energy gain per bounce depends on the mirror velocity squared, (V/c)².
The resulting energy spectrum anticipated from this physical setup, however, is not universal as in the case of diffusive shock acceleration.
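The second-order scaling can be checked with a one-dimensional toy model (illustrative numbers, not an astrophysical simulation): a particle moving at nearly the speed of light bounces elastically off mirrors that approach or recede at speed V, with head-on collisions more probable in proportion to the relative velocity. The expected logarithmic energy gain per bounce then comes out at about 2(V/c)², second order as claimed.

```python
# 1-D toy model of second-order Fermi acceleration for a v ~ c particle.
import math
import random

beta = 0.01                                # mirror speed V/c (assumed value)
boost = math.log((1 + beta) / (1 - beta))  # |log energy change| per bounce
log_e, bounces = 0.0, 1_000_000
for _ in range(bounces):
    if random.random() < (1 + beta) / 2:   # head-on: slightly more likely
        log_e += boost                     # energy gain
    else:                                  # tail-on
        log_e -= boost                     # energy loss
print(f"gain per bounce {log_e / bounces:.1e}, expected {2 * beta**2:.1e}")
```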
See also
Fermi-Ulam model
Fermi glow
Shock waves in astrophysics
References
External links
David Darling's article on Fermi acceleration
Rieger, Bosch-Ramon and Duffy: Fermi acceleration in astrophysical jets. Astrophys.Space Sci. 309:119-125 (2007)
Fusion power
Dynamics (mechanics)
Cosmic rays
Acceleration | Fermi acceleration | [
"Physics",
"Chemistry",
"Mathematics"
] | 728 | [
"Physical phenomena",
"Physical quantities",
"Acceleration",
"Plasma physics",
"Quantity",
"Fusion power",
"Astrophysics",
"Classical mechanics",
"Motion (physics)",
"Radiation",
"Dynamics (mechanics)",
"Nuclear fusion",
"Wikipedia categories named after physical quantities",
"Cosmic rays"... |
3,035,194 | https://en.wikipedia.org/wiki/Novikov%20conjecture | The Novikov conjecture is one of the most important unsolved problems in topology. It is named for Sergei Novikov, who originally posed the conjecture in 1965.
The Novikov conjecture concerns the homotopy invariance of certain polynomials in the Pontryagin classes of a manifold, arising from the fundamental group. According to the Novikov conjecture, the higher signatures, which are certain numerical invariants of smooth manifolds, are homotopy invariants.
The conjecture has been proved for finitely generated abelian groups. It is not yet known whether the Novikov conjecture holds true for all groups. There are no known counterexamples to the conjecture.
Precise formulation of the conjecture
Let Γ be a discrete group and BΓ its classifying space, which is an Eilenberg–MacLane space of type K(Γ,1), and therefore unique up to homotopy equivalence as a CW complex. Let

f: M → BΓ

be a continuous map from a closed oriented n-dimensional manifold M to BΓ, and

x ∈ H^(n−4i)(BΓ; ℚ).

Novikov considered the numerical expression, found by evaluating the cohomology class in top dimension against the fundamental class [M], and known as a higher signature:

σ_x(M, f) = ⟨f*(x) ∪ L_i(M), [M]⟩ ∈ ℚ,

where L_i is the i-th Hirzebruch polynomial, or sometimes (less descriptively) the i-th L-polynomial. For each i, this polynomial can be expressed in the Pontryagin classes of the manifold's tangent bundle. The Novikov conjecture states that the higher signature is an invariant of the oriented homotopy type of M for every such map f and every such class x; in other words, if h: M′ → M is an orientation-preserving homotopy equivalence, the higher signature associated to f ∘ h is equal to that associated to f.
Connection with the Borel conjecture
The Novikov conjecture is equivalent to the rational injectivity of the assembly map in L-theory. The Borel conjecture on the rigidity of aspherical manifolds is equivalent to the assembly map being an isomorphism.
References
John Milnor and James D. Stasheff, Characteristic Classes, Annals of Mathematics Studies 76, Princeton (1974).
Sergei P. Novikov, Algebraic construction and properties of Hermitian analogs of k-theory over rings with involution from the point of view of Hamiltonian formalism. Some applications to differential topology and to the theory of characteristic classes. Izv.Akad.Nauk SSSR, v. 34, 1970 I N2, pp. 253–288; II: N3, pp. 475–500. English summary in Actes Congr. Intern. Math., v. 2, 1970, pp. 39–45.
External links
Biography of Sergei Novikov
Novikov Conjecture Bibliography
Novikov Conjecture 1993 Oberwolfach Conference Proceedings, Volume 1
Novikov Conjecture 1993 Oberwolfach Conference Proceedings, Volume 2
2004 Oberwolfach Seminar notes on the Novikov Conjecture (pdf)
Scholarpedia article by S.P. Novikov (2010)
The Novikov Conjecture at the Manifold Atlas
Geometric topology
Homotopy theory
Conjectures
Unsolved problems in geometry
Surgery theory | Novikov conjecture | [
"Mathematics"
] | 607 | [
"Geometry problems",
"Unsolved problems in mathematics",
"Unsolved problems in geometry",
"Geometric topology",
"Conjectures",
"Topology",
"Mathematical problems"
] |
3,035,321 | https://en.wikipedia.org/wiki/Toilet-related%20injuries%20and%20deaths | There have been many toilet-related injuries and deaths throughout history and in urban legends.
Accidental injuries
Infants and toddlers have fallen headfirst into toilet bowls and drowned. Safety devices exist to help prevent such accidents. Injuries to adults include bruised buttocks, tailbones and dislocated hips from unexpectedly sitting on the toilet bowl rim because the seat is up or loose. Injuries can also be caused by pinching due to splits in plastic seats or by splinters from wooden seats, or if the toilet itself collapses or shatters under the weight of the user. Older high-tank cast-iron cisterns have been known to detach from the wall when the chain is pulled to flush, causing injuries to the user. The 2000 Ig Nobel Prize in Public Health was awarded to three physicians from the Glasgow Western Infirmary for a 1993 case report on wounds sustained to the buttocks due to collapsing toilets. Furthermore, injuries are frequently sustained by people who stand on toilets to reach a height, then slip and fall. There are also instances of people slipping on a wet bathroom floor or from a bath and concussing themselves on the fixture.
Toilet-related injuries are surprisingly common, with some estimates ranging as high as 40,000 in the US every year. In the past, this number would have been much higher, due to the material from which toilet paper was made. This was shown in a 1935 Northern Tissue advertisement which depicted splinter-free toilet paper. In 2012, 2.3 million toilets in the United States, and about 9,400 in Canada, were recalled due to faulty pressure-assist flush mechanisms which put users at risk of the fixture exploding.
Injuries caused by animals
There are also injuries caused by animals. Some black widow spiders like to spin their web below the toilet seat because of insects that can exist in and around it. Therefore, several people have been bitten while using a toilet, particularly outhouse toilets. Although there is immediate pain at the bite site, these bites are rarely fatal. The danger of spiders living beneath toilet seats is the subject of Slim Newton's comic 1972 country song "The Redback on the Toilet Seat".
It has been reported that in some cases rats crawl up through toilet sewer pipes and emerge in the toilet bowl, so that toilet users may be at risk of having a rat bite their buttocks. Many rat exterminators do not believe this, as pipes, at generally six inches (15 centimeters) wide, are too large for rats to climb and are also very slippery. Reported cases almost always involve top-floor toilets, and could be explained by rats on the roof entering the soil pipe through the roof vent, lowering themselves into the pipe, and then into the toilet.
In May 2016, an 11-foot snake, a reticulated python, emerged from a squat toilet and bit the man using it on his penis at his home in Chachoengsao Province, Thailand. Both the victim and the python survived.
Self-induced injury
Some instances of toilet-related deaths are attributed to the drop in blood pressure due to the parasympathetic nervous system during bowel movements. This effect may be magnified by existing circulatory issues. It is further possible that people succumb on the toilet to chronic constipation, because the Valsalva maneuver is often dangerously used to aid in the expulsion of feces from the rectum during a bowel movement. According to Sharon Mantik Lewis, Margaret McLean Heitkemper and Shannon Ruff Dirksen, the "Valsalva maneuver occurs during straining to pass a hardened stool. If defecation is suppressed over long periods, problems can occur, such as constipation or stool impaction. Defecation can be facilitated by the Valsalva maneuver. This maneuver involves contraction of the chest muscles on a closed glottis with simultaneous contraction of the abdominal muscles." This means that people can die while "straining at stool." In chapter 8 of their Abdominal Emergencies, David Cline and Latha Stead wrote that "autopsy studies continue to reveal missed bowel obstruction as an unexpected cause of death".
A 2001 Sopranos episode "He is Risen" shows a fictional depiction of the risk, when the character Gigi Cestone has a heart attack on the toilet of his social club while straining to defecate.
Exploding toilets
In the Victorian era, there was a perceived risk of toilets exploding. These scenarios typically include a flammable substance (either accidentally or deliberately) being introduced into the toilet water, and a lit match or cigarette igniting and exploding the toilet. In 2014, Sloan's Flushmate pressure-assisted flushing system, which uses compressed air to force waste down the drain, was recalled after the company received reports of the air tank failing under pressure and shattering the porcelain.
Historical deaths
In 1945, the German submarine U-1206 was sunk after a toilet accident resulted in seawater flooding into the hull, which created chlorine gas upon contact with a battery and forced the submarine to resurface. At the surface the sub was discovered and attacked by Allied forces, causing the sub's captain to scuttle the sub so Allied forces could not capture it. This case may not have been due to a malfunction, but rather the possibility that the pressurized flushing system in the U-boats, which was extremely complex and required a training course to operate, may not have been properly operated.
Godfrey the Hunchback, Duke of Lower Lorraine (an area roughly coinciding with the Netherlands and Belgium), was murdered in 1076 while staying in the Dutch city of Vlaardingen. Supposedly, the assassin determined which of the latrines, which were built and drained on the outer side of the wall according to medieval building style, belonged to the duke's sleeping room, and took a position underneath. Some sources say that a sword was used for the assassination; others mention a sharp iron weapon, which could have been a sword but also a spear or a dagger, though a spear seems the most practical choice. After being stabbed in the bottom, it took him several days to die from internal bleeding. The assassination was ordered by Dirk V, Count of Holland, and his ally Robrecht the Frisian, Count of Flanders.
The Erfurt latrine disaster of 1184 caused the death of at least 60 people, most of them being nobles.
George II of Great Britain died on the toilet on October 25, 1760, from an aortic dissection. According to Horace Walpole's memoirs, King George "rose as usual at six, and drank his chocolate; for all his actions were invariably methodic. A quarter after seven he went into a little closet. His German valet de chambre in waiting heard a noise, and running in, found the King dead on the floor." In falling he had cut his face.
Ioan P. Culianu was shot dead while on the toilet in the third-floor men's room of Swift Hall on the campus of the University of Chicago on 21 May 1991, in a possibly politically motivated assassination. His killer has never been caught.
The Abbasid's visier Al-Fadl ibn Sahl was found dead mysteriously in a bathroom in Sarakhs in Northern Khorasan. According to some rumors, the Abbasid Caliph Al-Ma'mun ibn Harun Ar-Rashid had ordered his assassination.
Elvis Presley died when using the toilet. "Most sources indicate that Elvis was likely sitting in the toilet area, partially nude, and reading when he collapsed." According to Dylan Jones, "Elvis Presley died aged 42 on August 16th, 1977, in the bathroom of the star's own Graceland mansion in Memphis. Sitting on the toilet, he had toppled like a toy soldier and collapsed onto the floor, where he lay in a pool of his own vomit. His light blue pajamas were around his ankles." In similar terms, Elvis biographer Joel Williamson writes, "For some reason — perhaps involving a reaction to the codeine and attempts to move his bowels — he experienced pain and fright while sitting on the toilet. Alarmed, he stood up, dropped the book he was reading, stumbled forward, and fell face down in the fetal position. He struggled weakly and drooled on the rug. Unable to breathe, he died." This led to the common saying, “The King died on the throne”.
Possible occurrences
Duke Jing of Jin (Ju), ruler of the State of Jin during the Spring and Autumn period of ancient China, died after falling into a toilet pit in summer 581 BC.
Edmund II of England died of natural causes on November 30, 1016, though some report that he was stabbed in the bowels while attending the outhouse. Similarly, Uesugi Kenshin, a warlord in Japan, died on April 19, 1578, with some reports stating that he was assassinated on the toilet.
Lenny Bruce died of a heroin overdose on August 3, 1966, while sitting on the toilet, with his arm tied off.
Air Canada Flight 797 was destroyed on June 2, 1983, with 23 fatalities after an in-flight fire began in or around the rear lavatory. Investigators could not determine the cause or exact point of origin for the fire.
Michael Anderson Godwin, a convicted murderer in South Carolina who had his sentence reduced from death by the electric chair, sat on the metal toilet in his cell while fixing his television. When he bit one of the wires, the resultant electric shock killed him. Another convicted murderer, Laurence Baker in Pittsburgh, was electrocuted while listening to the television on homemade earphones while sitting on a metal toilet.
On May 2, 2009, a Cessna 182 that had suffered an engine failure at altitude collided with a row of portable toilets at Thun Field (south-east of Tacoma); there were no fatalities, as the toilets "kind of cushioned things" for the 67-year-old pilot.
British businessman and Conservative politician Christopher Shale was found dead in a portable toilet at the Glastonbury Festival on June 26, 2011. It is suspected he died of a heart attack.
Aboard ships, the head (ship's toilet) and the fittings associated with it are cited as one of the most common reasons for the sinking of tens of thousands of boats of all types and sizes. Heads typically have through-hull fittings located below the water line to draw flush water and eliminate waste. Boats are sunk when fittings fail or the toilet back-siphons.
Urban legends
Urban legends have been reported regarding the dangers of using a toilet in a variety of situations, several of which have been shown to be questionable. These include some cases involving venomous spiders, but not the Australian redback spider, which genuinely has a reputation for hiding under toilet seats. More recent fears have emerged from a series of hoax emails originating in the Blush Spider hoax, which began circulating the internet in 1999. Spiders have also been reported to live under the seats of airplane toilets; however, the cleaning chemicals used in the toilets would make it difficult for spiders to survive there.
In large cities like New York City, sewer rats often have mythical status regarding size and ferocity, resulting in tales involving the rodents crawling up sewer pipes to attack an unwitting occupant. Of late, stories about terrorists booby trapping the seat to castrate their targets have begun appearing. Another myth is the risk of being sucked into an aircraft lavatory as a result of vacuum pressure during a flight.
See also
List of unusual deaths
Sanitation
List of people who died on the toilet
References
PBS.org: Elvis' addiction was the perfect prescription for an early death
Technology hazards
Injury
Causes of death | Toilet-related injuries and deaths | [
"Technology",
"Biology"
] | 2,404 | [
"Excretion",
"nan",
"Toilets"
] |
3,035,750 | https://en.wikipedia.org/wiki/GENERIC%20formalism | In non-equilibrium thermodynamics, GENERIC is an acronym for General Equation for Non-Equilibrium Reversible-Irreversible Coupling. It is the general form of dynamic equation for a system with both reversible and irreversible dynamics (generated by energy and entropy, respectively). GENERIC formalism is the theory built around the GENERIC equation, which has been proposed in its final form in 1997 by Miroslav Grmela and Hans Christian Öttinger.
GENERIC equation
The GENERIC equation is usually written as

dx/dt = L(x) δE/δx + M(x) δS/δx.

Here:
x denotes a set of variables used to describe the state space. The vector x can also contain variables depending on a continuous index, like a temperature field. In general, x is a function x: I → ℝ, where the index set I can contain both discrete and continuous indexes. Example: for a gas with nonuniform temperature, contained in a volume Ω ⊂ ℝ³, the state variables are fields over Ω, so the position in Ω serves as a continuous index
E, S are the system's total energy and entropy. For purely discrete state variables, these are simply functions from ℝⁿ to ℝ; for continuously indexed x, they are functionals
δE/δx, δS/δx are the derivatives of E and S. In the discrete case, it is simply the gradient; for continuous variables, it is the functional derivative (again a function on the index set)
the Poisson matrix L is an antisymmetric matrix (possibly depending on the continuous indexes) describing the reversible dynamics of the system according to Hamiltonian mechanics. The related Poisson bracket fulfills the Jacobi identity.
the friction matrix M is a positive semidefinite (and hence symmetric) matrix describing the system's irreversible behaviour.
In addition to the above equation and the properties of its constituents, systems that ought to be properly described by the GENERIC formalism are required to fulfill the degeneracy conditions

L(x) δS/δx = 0 and M(x) δE/δx = 0,

which express the conservation of entropy under reversible dynamics and of energy under irreversible dynamics, respectively. The conditions on L (antisymmetry and some others) express that the energy is reversibly conserved, and the condition on M (positive semidefiniteness) expresses that the entropy is irreversibly non-decreasing.
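These structural claims can be checked numerically on a toy system. The sketch below is illustrative only: it uses randomly generated matrices and linear E(x) = a·x, S(x) = b·x rather than a physical model, builds L and M satisfying antisymmetry, positive semidefiniteness and both degeneracy conditions, and confirms that the resulting dynamics conserve energy while producing entropy.

```python
# Numerical check of the GENERIC structure and degeneracy conditions on
# a toy linear system; the matrices are random, not physically meaningful.
import numpy as np

rng = np.random.default_rng(0)
n = 5
a, b = rng.normal(size=n), rng.normal(size=n)   # gradients of E and S

def complement_projector(v):
    """Orthogonal projector onto the complement of v (so P v = 0)."""
    return np.eye(n) - np.outer(v, v) / (v @ v)

# Antisymmetric Poisson matrix with the degeneracy L b = 0:
K = rng.normal(size=(n, n))
L = complement_projector(b) @ (K - K.T) @ complement_projector(b)

# Positive-semidefinite friction matrix with the degeneracy M a = 0:
R = complement_projector(a) @ rng.normal(size=(n, n))
M = R @ R.T

xdot = L @ a + M @ b          # GENERIC: dx/dt = L dE/dx + M dS/dx
print("dE/dt =", a @ xdot)    # ~ 0: energy conserved
print("dS/dt =", b @ xdot)    # >= 0: entropy non-decreasing
```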
Related Applications and Simulation Methods
Viscoelasticity, Complex fluids, Polymers, Soft matter
Smoothed-particle hydrodynamics
Stokesian dynamics
Stochastic Eulerian Lagrangian methods
References
Non-equilibrium thermodynamics | GENERIC formalism | [
"Mathematics"
] | 464 | [
"Non-equilibrium thermodynamics",
"Dynamical systems"
] |
3,035,801 | https://en.wikipedia.org/wiki/Nikolay%20Krylov%20%28mathematician%2C%20born%201879%29 | Nikolay Mitrofanovich Krylov (1879 – May 11, 1955) was a Russian and Soviet mathematician known for works on interpolation, non-linear mechanics, and numerical methods for solving equations of mathematical physics.
Biography
Nikolay Krylov graduated from the St. Petersburg State Mining Institute in 1902. From 1912 until 1917, he held a professorship at that institute. In 1917, he went to the Crimea to become a professor at the Crimea University. He worked there until 1922 and then moved to Kyiv to become chairman of the mathematical physics department at the Ukrainian Academy of Sciences.
Nikolay Krylov was a member of the Société mathématique de France and the American Mathematical Society.
Research
Nikolay Krylov developed new methods for analysis of equations of mathematical physics, which can be used not only for proving the existence of solutions but also for their construction. Since 1932, he worked together with his student Nikolay Bogolyubov on mathematical problems of non-linear mechanics. In this period, they invented certain asymptotic methods for integration of non-linear differential equations, studied dynamical systems, and made significant contributions to the foundations of non-linear mechanics. They proved the first theorems on existence of invariant measures known as Krylov–Bogolyubov theorems, introduced the Krylov–Bogoliubov averaging method and, together with Yurii Mitropolskiy, developed the Krylov–Bogoliubov–Mitropolskiy asymptotic method for approximate solving equations of non-linear mechanics.
Doctoral students
Nikolay Bogolyubov
Publications
Nikolay Krylov published over 200 papers on analysis and mathematical physics and two monographs:
Nicolas Kryloff (1931): Les Méthodes de Solution Approchée des Problèmes de la Physique Mathématique. Paris: Gauthier-Villars [in French].
N. M. Krylov, N. N. Bogoliubov (1947): Introduction to Nonlinear Mechanics. Princeton: Princeton University Press. .
See also
Describing function
Krylov–Bogolyubov theorem
Krylov–Bogoliubov averaging method
Krylov–Bogolyubov–Mitropolskiy asymptotic method
References
1879 births
1955 deaths
20th-century Russian mathematicians
20th-century Russian physicists
20th-century Ukrainian mathematicians
20th-century Ukrainian physicists
Mathematicians from Saint Petersburg
Full Members of the USSR Academy of Sciences
Saint Petersburg Mining University alumni
Academic staff of the Taras Shevchenko National University of Kyiv
Academic staff of Tavrida National V.I. Vernadsky University
Recipients of the Order of Lenin
Recipients of the Order of the Red Banner of Labour
Control theorists
Mathematicians from the Russian Empire
Physicists from the Russian Empire
Soviet mathematicians
Soviet physicists
Ukrainian people of Russian descent
Burials at Novodevichy Cemetery
Full Members of the All-Ukrainian Academy of Sciences | Nikolay Krylov (mathematician, born 1879) | [
"Engineering"
] | 593 | [
"Control engineering",
"Control theorists"
] |
3,035,940 | https://en.wikipedia.org/wiki/Chlorophyll%20d | Chlorophyll d (Chl d) is a form of chlorophyll, identified by Harold Strain and Winston Manning in 1943. It was unambiguously identified in Acaryochloris marina in the 1990s. It is present in cyanobacteria which use energy captured from sunlight for photosynthesis. Chl d absorbs far-red light, at 710 nm wavelength, just beyond the visible range. An organism that contains Chl d is adapted to an environment such as moderately deep water, where it can use far-red light for photosynthesis, although there is little visible light.
Chl d is produced from chlorophyllide d by chlorophyll synthase. Chlorophyllide d is made from chlorophyllide a, but the oxygen-using enzyme that performs this conversion remains unknown as of 2022.
References
Tetrapyrroles
Photosynthetic pigments | Chlorophyll d | [
"Chemistry"
] | 198 | [
"Photosynthetic pigments",
"Photosynthesis"
] |
3,035,941 | https://en.wikipedia.org/wiki/Chlorophyll%20b | Chlorophyll b is a form of chlorophyll. Chlorophyll b helps in photosynthesis by absorbing light energy. It is more soluble than chlorophyll a in polar solvents because of its carbonyl group. Its color is green, and it primarily absorbs blue light.
In land plants, the light-harvesting antennae around photosystem II contain the majority of chlorophyll b. Hence, in shade-adapted chloroplasts, which have an increased ratio of photosystem II to photosystem I, there is a higher ratio of chlorophyll b to chlorophyll a. This is adaptive, as increasing chlorophyll b increases the range of wavelengths absorbed by the shade chloroplasts.
Biosynthesis
The Chlorophyll b biosynthetic pathway utilizes a variety of enzymes. In most plants, chlorophyll is derived from glutamate and is synthesised along a branched pathway that is shared with heme and siroheme.
The initial steps incorporate glutamic acid into 5-aminolevulinic acid (ALA); two molecules of ALA are then reduced to porphobilinogen (PBG), and four molecules of PBG are coupled, forming protoporphyrin IX.
Chlorophyll synthase is the enzyme that completes the biosynthesis of chlorophyll b by catalysing the reaction
chlorophyllide b + phytyl diphosphate → chlorophyll b + diphosphate
This forms an ester of the carboxylic acid group in chlorophyllide b with the 20-carbon diterpene alcohol phytol.
References
Tetrapyrroles
Photosynthetic pigments | Chlorophyll b | [
"Chemistry"
] | 383 | [
"Photosynthetic pigments",
"Photosynthesis"
] |
3,036,126 | https://en.wikipedia.org/wiki/Vafa%E2%80%93Witten%20theorem | In theoretical physics, the Vafa–Witten theorem, named after Cumrun Vafa and Edward Witten, is a theorem that shows that vector-like global symmetries (those that transform as expected under reflections) such as isospin and baryon number in vector-like gauge theories like quantum chromodynamics cannot be spontaneously broken as long as the theta angle is zero. This theorem can be proved by showing the exponential fall off of the propagator of fermions.
See also
F-theory
References
Gauge theories
Theorems in quantum mechanics | Vafa–Witten theorem | [
"Physics",
"Mathematics"
] | 116 | [
"Theorems in quantum mechanics",
"Equations of physics",
"Quantum mechanics",
"Theorems in mathematical physics",
"Quantum physics stubs",
"Physics theorems"
] |
3,036,189 | https://en.wikipedia.org/wiki/Transistor%20tester | Transistor testers are instruments for testing the electrical behavior of transistors and solid-state diodes.
Types of tester
There are three types of transistor testers each performing a unique operation.
Quick-check in-circuit checker
Service type tester
Laboratory-standard tester
In addition, curve tracers are reliable indicators of transistor performance.
Circuit Tester
A circuit tester is used to check whether a transistor which has previously been performing properly in a circuit is still operational. The transistor's ability to "amplify" is taken as a rough index of its performance. This type of tester indicates to a technician whether the transistor is dead or still operative. The advantage of this tester is that the transistor does not have to be removed from the circuit.
Service type transistor testers
These devices usually perform three types of checks:
Forward-current gain, or beta of transistor.
Base-to-collector leakage current with emitter open (Ico)
Short circuits from collector to emitter and base.
Some service testers include a go/no-go feature, indicating a pass when a certain hfe is exceeded. These are useful, but will fail some functional, low-hfe transistors.
Some also provide a means of identifying transistor elements, if these are unknown. Some testers combine all these features and can check solid-state devices in and out of circuit.
Transistor hfe varies fairly widely with Ic, so measurements with the service type tester give readings that can differ quite a bit from the hfe in the transistor's real life application. Hence these testers are useful, but can't be regarded as giving accurate real-life hfe values.
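The arithmetic behind the beta check is straightforward. The sketch below uses illustrative values, not any tester's actual specification: hfe is the ratio of measured collector to base current, with a go/no-go threshold as described above, and the result is only meaningful at the collector current used for the measurement.

```python
# Illustrative service-tester arithmetic; the numbers are made up.
def hfe(i_c_amps: float, i_b_amps: float) -> float:
    """DC forward-current gain (beta) from two measured currents."""
    return i_c_amps / i_b_amps

def go_no_go(i_c_amps: float, i_b_amps: float, threshold: float = 50.0) -> str:
    # Pass when the measured gain exceeds the preset threshold.
    return "pass" if hfe(i_c_amps, i_b_amps) >= threshold else "fail"

# 2 mA of collector current driven by 20 uA of base current:
print(hfe(2e-3, 20e-6))       # 100.0
print(go_no_go(2e-3, 20e-6))  # pass
```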
Laboratory-standard transistor tester or Analyser
This type of tester is used for measuring transistor parameters dynamically under various operating conditions. The readings they give are absolute. Among the important characteristics measured are:
Icbo collector current with emitter open (Common base)
ac beta (Common emitter)
Rin (Input resistance)
Transistor testers have the necessary controls and switches for making the proper voltage, current and signal settings. A meter with a calibrated "good" and "bad" scale is on the front.
In addition, these transistor testers are designed to check solid-state diodes. There are also testers for checking high-power transistors and rectifiers.
See also
Load pull, a colloquial term applied to the process of systematically varying the impedance presented to a device under test
References
Transistors
Electronic test equipment | Transistor tester | [
"Technology",
"Engineering"
] | 551 | [
"Electronic test equipment",
"Measuring instruments"
] |
3,036,289 | https://en.wikipedia.org/wiki/Weil%27s%20conjecture%20on%20Tamagawa%20numbers | In mathematics, the Weil conjecture on Tamagawa numbers is the statement that the Tamagawa number of a simply connected simple algebraic group defined over a number field is 1. In this case, simply connected means "not having a proper algebraic covering" in the algebraic group theory sense, which is not always the topologists' meaning.
History
André Weil calculated the Tamagawa number in many cases of classical groups and observed that it is an integer in all considered cases and that it was equal to 1 in the cases when the group is simply connected. The first observation does not hold for all groups: Takashi Ono (1963) found examples where the Tamagawa numbers are not integers. The second observation, that the Tamagawa numbers of simply connected semisimple groups seem to be 1, became known as the Weil conjecture.
Robert Langlands (1966) introduced harmonic analysis methods to show it for Chevalley groups. K. F. Lai (1980) extended the class of known cases to quasisplit reductive groups. Kottwitz (1988) proved it for all groups satisfying the Hasse principle, which at the time was known for all groups without E8 factors. V. I. Chernousov (1989) removed this restriction, by proving the Hasse principle for the resistant E8 case (see strong approximation in algebraic groups), thus completing the proof of Weil's conjecture. In 2011, Jacob Lurie and Dennis Gaitsgory announced a proof of the conjecture for algebraic groups over function fields over finite fields; the proof was formally published in 2019, and a future proof using a version of the Grothendieck–Lefschetz trace formula will be published in a second volume.
Applications
Ono (1965) used the Weil conjecture to calculate the Tamagawa numbers of all semisimple algebraic groups.
For spin groups, the conjecture implies the known Smith–Minkowski–Siegel mass formula.
See also
Tamagawa number
References
Further reading
Aravind Asok, Brent Doran and Frances Kirwan, "Yang-Mills theory and Tamagawa Numbers: the fascination of unexpected links in mathematics", February 22, 2013
J. Lurie, The Siegel Mass Formula, Tamagawa Numbers, and Nonabelian Poincaré Duality posted June 8, 2012.
Conjectures
Theorems in group theory
Algebraic groups
Diophantine geometry | Weil's conjecture on Tamagawa numbers | [
"Mathematics"
] | 460 | [
"Mathematical theorems",
"Mathematical problems",
"Conjectures that have been proved"
] |
3,036,467 | https://en.wikipedia.org/wiki/Nodular%20sclerosing%20Hodgkin%20lymphoma | Nodular sclerosing Hodgkin lymphoma ("NSHL") or Nodular sclerosis is a form of Hodgkin's lymphoma that is the most common subtype of HL in developed countries. It affects females slightly more than males and has a median age of onset of ~28 years. It is composed of large tumor nodules with lacunar Reed–Sternberg cells (RS cells) surrounded by fibrotic collagen bands.
The British National Lymphoma Investigation further categorized NSHL based upon Reed–Sternberg cells into "nodular sclerosis type I" (NS I) and "nodular sclerosis type II" (NS II), with the first subtype responding better to treatment.
References
External links
PubMed – use of Erythropoietin
Subtypes
Lymphoma Association information on nodular sclerosing
Histopathology
Hodgkin lymphoma | Nodular sclerosing Hodgkin lymphoma | [
"Chemistry"
] | 205 | [
"Histopathology",
"Microscopy"
] |
3,036,498 | https://en.wikipedia.org/wiki/Safety%20reflector | A safety reflector is a retroreflector intended for pedestrians, runners, motorized and non-motorized vehicles. A safety reflector is similar to reflective stripes that can be found on safety vests and clothing worn by road workers and rescue workers. They are sometimes erroneously called luminous badges or luminous tags, but this is incorrect as they do not themselves produce light, but only reflect it.
Functioning
A safety reflector helps make a person or vehicle on the road visible, as it reflects light from the headlights of vehicles. Safety reflectors are especially useful where there are no streetlights.
Unlike reflective stripes that are permanently fixed to clothing, the safety reflector is a stand-alone device that can be attached to any article of clothing as needed, often using a safety pin and some string. For vehicles, the reflector is usually a fixed part. In bicycles, reflectors are usually on wheels, pedals, under the seat, on the back of the luggage rack, and in front of the front fork. In motorcycles, automobiles, and other vehicles, reflectors are built into the front and rear ends (and sides) next to the headlights and brake lights.
Issue
Fatal traffic accidents at night often involve vehicles with drivers who fail to see pedestrians or bicyclists until they are too close to avoid collision. Reflectors are expected to increase visibility and contribute to safety.
History
The reflector was first invented in 1917 in Nice by Henri Chrétien, to provide the army with a communication system the enemy could not intercept. The device was patented under the name cataphote in 1923.
The cataphote was also invented by Garbarini, who combined a convex lens and a concave mirror. It was used for aviation, for safety in Switzerland, and for advertising in France.
On 12 March 1925, the minister, the Réseau du Nord railway company and the Touring-Club de France tested the use of cataphotes to make level crossings visible at night.
In 1926, an automobile club, the Touring-Club de France, offered 180 signs with triangular cataphotes to warn of the presence of level crossings. In 1927, fines were issued in France to car owners who did not have the cataphote, which had been made mandatory by law. The same year, cataphotes were sold for motorized vehicles, motorbikes, bicycles and all kinds of trailers.
In 1946, the French standard for the catadioptre was NF R 143 11.
On 1 January 1950, safety catadioptres were made mandatory on the rear of French vehicles.
In January 1943, US highway patrolman Raymond Trask proposed the concept of a single cataphote for pedestrians, to help them be visible to drivers, in a Popular Science publication.
In the late 1950s, Arvi Lehti, a farmer and plastics manufacturer from Pertteli, Finland, came up with the idea of a reflector suitable for pedestrian use. His initial idea was to join a pair of automotive reflectors together and attach them to clothing. This early concept was developed further by his company Talousmuovi into a small, lightweight reflector fit for commercial sale. In the 1960s, the Finnish police and transport authority wanted a reflector to improve pedestrian safety and asked Talousmuovi to design one. The reflectors they created were eventually made for sale to Finns and later the world.
Nowadays one can find reflectors of all possible shapes and colours, as the design and fashion industries have turned their attention to this diminutive gadget. Special 'clip-on' reflectors for bicycles and other human-powered vehicles are also common.
European regulations
Reflector for vulnerable users and non motorized vehicles
Within the European Union, safety reflectors for pedestrians must be certified to comply with the EN 17353:2020 safety standard which is an amalgamation of the prior certifiable EN 1150:1999 and EN 13356:2001 standards. This standard is specifically for "loose, reflective clothing and accessories for non-professional use". There are other standards for other types of reflectors such as professional safety vests and reflectors on bicycles or automobiles.
In Finland, Estonia, Latvia and Lithuania, pedestrians are required by law to wear safety reflectors when walking during dark conditions.
Reflector for motorized vehicles
The EU uses UNECE Regulation No. 3 to categorize safety reflectors into several classes: IA, IB, IIIA, IIIB and IVA.
The EU uses UNECE Regulation No. 104 for retro-reflective markings on vehicles of categories M2 and M3 (transport of people), N (transport of goods), and O2, O3 and O4 (trailers). This regulation uses coloured markings.
The EU also has in its law the Council Directive 76/757/EEC of 27 July 1976 on the approximation of the laws of the Member States relating to reflex reflectors for motor vehicles and their trailers.
Bicycle reflector
A bicycle reflector or prism reflector is a common safety device found on the rear, front and wheels of bicycles. It uses the principle of retroreflection to alert another road user of the bicycle's presence on the road.
The reflector is usually manufactured in the form of a moulded tile of transparent plastic. The outside surface is smooth, allowing light, such as from a car's headlights, to enter. The rear surface of the tile takes the form of an array of angled micro-prisms or spherical beads.
Light striking the rear, inside surface of the prisms or beads does so at an angle greater than the critical angle, and thus undergoes total internal reflection. Due to the orientation of the other inside surfaces, any internally reflected light is directed back out the front of the reflector, in the direction it came from. This alerts the person close to the light source, e.g. the driver of the vehicle, to the presence of the cyclist.
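The retroreflection geometry is easy to verify numerically. The following Python sketch (illustrative, not from the source) assumes a refractive index of about 1.49 for the acrylic plastic and models a corner-cube element as three mutually perpendicular mirror faces:

```python
import numpy as np

# Critical angle for total internal reflection at a plastic-air boundary,
# assuming a typical acrylic refractive index of ~1.49 (illustrative value).
n_plastic, n_air = 1.49, 1.0
critical_angle = np.degrees(np.arcsin(n_air / n_plastic))
print(f"critical angle: {critical_angle:.1f} degrees")  # ~42.2 degrees

def reflect(direction, normal):
    """Mirror-reflect a direction vector about a plane with unit normal."""
    return direction - 2 * np.dot(direction, normal) * normal

# A corner-cube prism reflects light off three mutually perpendicular faces;
# each reflection negates one component of the direction vector, so the ray
# leaves antiparallel to the way it arrived, whatever its angle of incidence.
incoming = np.array([0.5, -0.3, -0.8])
incoming /= np.linalg.norm(incoming)
ray = incoming
for normal in np.eye(3):  # the three orthogonal face normals
    ray = reflect(ray, normal)
print(np.allclose(ray, -incoming))  # True: light returns toward its source
```

Because the exit ray is antiparallel to the incoming one, the reflected light heads back toward the headlight, and hence toward the driver sitting just behind it.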
See also
High-visibility clothing
References
External links
Driving simulator that highlights the benefits of safety reflectors (Finnish Road Safety Council)
Vehicle parts
Fashion accessories
Safety equipment
Finnish inventions | Safety reflector | [
"Technology"
] | 1,230 | [
"Vehicle parts",
"Components"
] |
3,036,515 | https://en.wikipedia.org/wiki/List%20of%20eponyms%20of%20special%20functions | This is a list of special function eponyms in mathematics, to cover the theory of special functions, the differential equations they satisfy, named differential operators of the theory (but not intended to include every mathematical eponym). Named symmetric functions, and other special polynomials, are included.
A
Niels Abel: Abel polynomials - Abelian function - Abel–Gontscharoff interpolating polynomial
Sir George Biddell Airy: Airy function
Waleed Al-Salam (1926–1996): Al-Salam polynomial - Al Salam–Carlitz polynomial - Al Salam–Chihara polynomial
C. T. Anger: Anger–Weber function
Kazuhiko Aomoto: Aomoto–Gel'fand hypergeometric function - Aomoto integral
Paul Émile Appell (1855–1930): Appell hypergeometric series, Appell polynomial, Generalized Appell polynomials
Richard Askey: Askey–Wilson polynomial, Askey–Wilson function (with James A. Wilson)
B
Ernest William Barnes: Barnes G-function
E. T. Bell: Bell polynomials
Bender–Dunne polynomial
Jacob Bernoulli: Bernoulli polynomial
Friedrich Bessel: Bessel function, Bessel–Clifford function
H. Blasius: Blasius functions
R. P. Boas, R. C. Buck: Boas–Buck polynomial
Böhmer integral
Erland Samuel Bring: Bring radical
de Bruijn function
Buchstab function
Burchnall, Chaundy: Burchnall–Chaundy polynomial
C
Leonard Carlitz: Carlitz polynomial
Arthur Cayley, Capelli: Cayley–Capelli operator
Celine's polynomial
Charlier polynomial
Pafnuty Chebyshev: Chebyshev polynomials
Elwin Bruno Christoffel, Darboux: Christoffel–Darboux relation
Cyclotomic polynomials
D
H. G. Dawson: Dawson function
Charles F. Dunkl: Dunkl operator, Jacobi–Dunkl operator, Dunkl–Cherednik operator
Dickman–de Bruijn function
E
Engel: Engel expansion
Arthur Erdélyi: Erdélyi–Kober operator
Leonhard Euler: Euler polynomial, Eulerian integral, Euler hypergeometric integral
F
V. N. Faddeeva: Faddeeva function (also known as the complex error function; see error function)
G
C. F. Gauss: Gaussian polynomial, Gaussian distribution, Hypergeometric function 2F1, etc.
Leopold Bernhard Gegenbauer: Gegenbauer polynomials
Gottlieb polynomial
Gould polynomial
Christoph Gudermann: Gudermannian function
H
Wolfgang Hahn: Hahn polynomial, (with H. Exton) Hahn–Exton Bessel function
Philip Hall: Hall polynomial, Hall–Littlewood polynomial
Hermann Hankel: Hankel function
Heine: Heine functions
Charles Hermite: Hermite polynomials
Karl L. W. M. Heun (1859–1929): Heun's equation
J. Horn: Horn hypergeometric series
Adolf Hurwitz: Hurwitz zeta-function
Hypergeometric function 2F1
J
Henry Jack (1917–1978), Dundee: Jack polynomial
F. H. Jackson: Jackson derivative, Jackson integral
Carl Gustav Jakob Jacobi: Jacobi polynomial, Jacobi theta function
K
Joseph Marie Kampé de Fériet (1893–1982): Kampé de Fériet hypergeometric series
David Kazhdan, George Lusztig: Kazhdan–Lusztig polynomial
Lord Kelvin: Kelvin function
Kibble–Slepian formula
Kirchhoff: Kirchhoff polynomial
Tom H. Koornwinder: Koornwinder polynomial
Kostka polynomial, Kostka–Foulkes polynomial
Mikhail Kravchuk: Kravchuk polynomial
L
Edmond Laguerre: Laguerre polynomials
Johann Heinrich Lambert: Lambert W function
Gabriel Lamé: Lamé polynomial
G. Lauricella (and Saran): Lauricella hypergeometric series, Lauricella–Saran hypergeometric series
Adrien-Marie Legendre: Legendre polynomials
Eugen Cornelius Joseph von Lommel (1837–1899), physicist: Lommel polynomial, Lommel function, Lommel–Weber function
M
Ian G. Macdonald: Macdonald polynomial, Macdonald–Kostka polynomial, Macdonald spherical function
Mahler polynomial
Maitland function
Émile Léonard Mathieu: Mathieu function
Gustav Ferdinand Mehler, student of Dirichlet: Mehler's formula, Mehler–Fock formula, Mehler–Heine formula, Mehler functions
Meijer G-function
Josef Meixner: Meixner polynomial, Meixner-Pollaczek polynomial
Mittag-Leffler: Mittag-Leffler polynomials
Mott polynomial
P
Paul Painlevé: Painlevé function, Painlevé transcendents
Poisson–Charlier polynomial
Pollaczek polynomial
R
Giulio Racah: Racah polynomial
Jacopo Riccati: Riccati–Bessel function
Bernhard Riemann: Riemann zeta-function
Olinde Rodrigues: Rodrigues formula
Leonard James Rogers: Rogers–Askey–Ismail polynomial, Rogers–Ramanujan identity, Rogers–Szegő polynomials
S
Schubert polynomial
Issai Schur: Schur polynomial
Atle Selberg: Selberg integral
Sheffer polynomial
Slater's identities
Thomas Joannes Stieltjes: Stieltjes polynomial, Stieltjes–Wigert polynomials
Strömgren function
Hermann Struve: Struve function
T
Francesco Tricomi: Tricomi–Carlitz polynomial
W
Wall polynomial
Albert Wangerin: Wangerin functions
Weber function
Weierstrass: Weierstrass function
Louis Weisner: Weisner's method
E. T. Whittaker: Whittaker function
Wilson polynomial
Z
Frits Zernike: Zernike polynomials
Eponyms of special functions
Eponymous functions | List of eponyms of special functions | [
"Mathematics"
] | 1,216 | [
"Special functions",
"Combinatorics"
] |
3,036,665 | https://en.wikipedia.org/wiki/843%20Nicolaia | 843 Nicolaia is a main-belt asteroid discovered by Danish astronomer H. Thiele on 30 September 1916. It was a lost asteroid for 65 years before being rediscovered by Astronomisches Rechen-Institut at Heidelberg in 1981. The asteroid is orbiting the Sun with a period of 3.44 years.
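As a quick plausibility check (not from the source), Kepler's third law in heliocentric units, (a/AU)³ = (T/yr)², converts the 3.44-year period into a semi-major axis typical of the main belt:

```python
# Kepler's third law with a in astronomical units and T in years:
# a^3 = T^2, hence a = T**(2/3).
T_years = 3.44
a_au = T_years ** (2 / 3)
print(f"semi-major axis ~ {a_au:.2f} AU")  # ~2.28 AU, within the main belt (~2.1-3.3 AU)
```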
References
External links
000843
Discoveries by Holger Thiele
Named minor planets
19160930
Recovered astronomical objects | 843 Nicolaia | [
"Astronomy"
] | 90 | [
"Recovered astronomical objects",
"Astronomical objects"
] |
3,036,745 | https://en.wikipedia.org/wiki/878%20Mildred | 878 Mildred is a minor planet in the main belt orbiting the Sun. It is the lowest numbered, and thus the namesake, of the Mildred family of asteroids, a subgroup of the Nysa family. The Mildred subgroup, and by extension 878 Mildred itself, is thought to have been formed by a recent fragmentation event from a larger asteroid.
Discovery
878 Mildred was originally discovered in 1916 using the 1.5 m Hale Telescope at the Mount Wilson Observatory, but was subsequently lost until it was again observed on single nights in 1985 and 1991 (making it a lost asteroid). Initially, only two observations of the asteroid were taken, on 6 September 1916, which did not allow for an accurate orbital determination; however, interest in the object prompted further investigation, and more measurements were taken in late September and October. The asteroid was re-discovered in 1991 by Gareth V. Williams. It is named after Mildred Shapley Matthews.
Physical properties
By comparing the asteroid's perceived brightness with its then-computed distance from the Sun, observers arrived at an absolute visual magnitude of 14.3, which, if one assumes a Mars-like albedo, gives an approximate diameter of 3 to 5 kilometers.
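The figure can be reproduced with the standard relation between diameter, absolute magnitude H and geometric albedo p, namely D = (1329 km / √p) · 10^(−H/5). In the Python sketch below, the "Mars-like" albedo is taken, as an assumption, to lie roughly between 0.15 and 0.25:

```python
import math

def diameter_km(H, albedo):
    """Standard asteroid size estimate from absolute magnitude and albedo:
    D = 1329 km / sqrt(albedo) * 10**(-H/5)."""
    return 1329.0 / math.sqrt(albedo) * 10 ** (-H / 5)

H = 14.3  # absolute visual magnitude reported for 878 Mildred
for p in (0.15, 0.25):  # assumed "Mars-like" albedo range
    print(f"albedo {p}: {diameter_km(H, p):.1f} km")
# ~4.7 km and ~3.7 km, consistent with the 3-5 km range quoted above
```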
References
External links
Minor Planet Center Database entry on (878) Mildred
000878
Discoveries by Seth Nicholson
Discoveries by Harlow Shapley
Named minor planets
19160906
Recovered astronomical objects | 878 Mildred | [
"Astronomy"
] | 271 | [
"Recovered astronomical objects",
"Astronomical objects"
] |
3,036,816 | https://en.wikipedia.org/wiki/Whittaker%20function | In mathematics, a Whittaker function is a special solution of Whittaker's equation, a modified form of the confluent hypergeometric equation introduced by Whittaker (1903) to make the formulas involving the solutions more symmetric. More generally, Jacquet (1967) introduced Whittaker functions of reductive groups over local fields, where the functions studied by Whittaker are essentially the case where the local field is the real numbers and the group is SL2(R).
Whittaker's equation is
\[
\frac{d^2w}{dz^2}+\left(-\frac{1}{4}+\frac{\kappa}{z}+\frac{1/4-\mu^2}{z^2}\right)w=0.
\]
It has a regular singular point at 0 and an irregular singular point at ∞.
Two solutions are given by the Whittaker functions Mκ,μ(z) and Wκ,μ(z), defined in terms of Kummer's confluent hypergeometric functions M and U by
\[
M_{\kappa,\mu}(z)=e^{-z/2}\,z^{\mu+1/2}\,M\!\left(\mu-\kappa+\tfrac{1}{2},\,1+2\mu,\,z\right),
\]
\[
W_{\kappa,\mu}(z)=e^{-z/2}\,z^{\mu+1/2}\,U\!\left(\mu-\kappa+\tfrac{1}{2},\,1+2\mu,\,z\right).
\]
The Whittaker function Wκ,μ(z) is the same for the opposite values ±μ; in other words, considered as a function of μ at fixed κ and z, it is an even function of μ. When κ and z are real, the functions give real values for real and for imaginary values of μ. These functions play a role in so-called Kummer spaces.
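These relations can be checked numerically. The sketch below uses the mpmath library's built-in Whittaker functions (whitm, whitw) together with its Kummer functions; the parameter values are arbitrary sample choices:

```python
import mpmath as mp

# Arbitrary sample parameters.
kappa, mu, z = mp.mpf('0.3'), mp.mpf('0.7'), mp.mpf('1.5')

# M_{kappa,mu}(z) versus its definition through Kummer's M = 1F1.
m_direct = mp.whitm(kappa, mu, z)
m_kummer = mp.exp(-z / 2) * z**(mu + mp.mpf('0.5')) \
    * mp.hyp1f1(mu - kappa + mp.mpf('0.5'), 1 + 2 * mu, z)
print(m_direct - m_kummer)   # ~0

# W_{kappa,mu}(z) versus its definition through Tricomi's U.
w_direct = mp.whitw(kappa, mu, z)
w_tricomi = mp.exp(-z / 2) * z**(mu + mp.mpf('0.5')) \
    * mp.hyperu(mu - kappa + mp.mpf('0.5'), 1 + 2 * mu, z)
print(w_direct - w_tricomi)  # ~0

# W is even in mu: replacing mu by -mu leaves it unchanged.
print(mp.whitw(kappa, mu, z) - mp.whitw(kappa, -mu, z))  # ~0
```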
Whittaker functions appear as coefficients of certain representations of the group SL2(R), called Whittaker models.
References
Further reading
Special hypergeometric functions
E. T. Whittaker
Special functions | Whittaker function | [
"Mathematics"
] | 281 | [
"Special functions",
"Combinatorics"
] |
3,036,860 | https://en.wikipedia.org/wiki/Operation%20Fishbowl | Operation Fishbowl was a series of high-altitude nuclear tests in 1962 that were carried out by the United States as a part of the larger Operation Dominic nuclear test program.
Introduction
The Operation Fishbowl nuclear tests were originally to be completed during the first half of 1962 with three tests named Bluegill, Starfish and Urraca.
The first test attempt was delayed until June. Planning for Operation Fishbowl, as well as many other nuclear tests in the region, began rapidly in response to the sudden Soviet announcement on August 30, 1961, that they were ending a three-year moratorium on nuclear testing. The rapid planning of very complex operations necessitated many changes as the project progressed.
All of the tests were to be launched on missiles from Johnston Island in the Pacific Ocean north of the equator. Johnston Island had already been established as a launch site for United States high-altitude nuclear tests, rather than the other locations in the Pacific Proving Grounds. In 1958, Lewis Strauss, then chairman of the United States Atomic Energy Commission, opposed doing any high-altitude tests at locations that had been used for earlier Pacific nuclear tests. His opposition was motivated by fears that the flash from the nighttime high-altitude detonations might blind civilians who were living on nearby islands. Johnston Island was a remote location, more distant from populated areas than other potential test locations. To protect residents of the Hawaiian Islands from flash blindness or permanent retinal injury from the bright nuclear flash, the nuclear missiles of Operation Fishbowl were launched generally toward the southwest of Johnston Island so that the detonations would be farther from Hawaii.
Urraca was to be a test of about 1 megaton yield at very high altitude (above 1000 km). The proposed Urraca test was always controversial, especially after the damage caused to satellites by the Starfish Prime detonation, as described below. Urraca was finally canceled, and an extensive re-evaluation of the Operation Fishbowl plan was made during an 82-day operations pause after the Bluegill Prime disaster of July 25, 1962, as described below.
A test named Kingfish was added during the early stages of Operation Fishbowl planning. Two low-yield tests, Checkmate and Tightrope, were also added during the project, so the final number of tests in Operation Fishbowl was five. Tightrope was the last atmospheric nuclear test conducted by the United States, as the Limited Test Ban Treaty came into effect shortly thereafter.
Research directions
The United States completed six high-altitude nuclear tests in 1958, but the high-altitude tests of that year raised a number of questions. According to U.S. Government Report ADA955694 on the first successful test of the Fishbowl series, "Previous high-altitude nuclear tests: Teak, Orange, and Yucca, plus the three ARGUS shots were poorly instrumented and hastily executed. Despite thorough studies of the meager data, present models of these bursts are sketchy and tentative. These models are too uncertain to permit extrapolation to other altitudes and yields with any confidence. Thus there is a strong need, not only for better instrumentation, but for further tests covering a range of altitudes and yields."
There were three phenomena in particular that required further investigation:
The electromagnetic pulse generated by a high-altitude nuclear explosion appeared to have very significant differences from the electromagnetic pulse generated by nuclear explosions closer to the surface.
The auroras associated with high-altitude nuclear explosions, especially the auroras that appeared almost instantaneously far away from the explosion in the opposite hemisphere, were not clearly understood. The nature of the possible radiation belts that were initially generated along the magnetic field lines connecting the areas of the auroral displays were also poorly understood.
Areas of blackout of radio communication needed to be understood in much more detail since that information would be critical for military operations during periods of possible nuclear explosions.
The Fishbowl tests were monitored by a large number of surface and aircraft-based stations in the wide area around the planned detonations and also in the region in the southern hemisphere in the Samoan Islands region, which was known in these tests as the southern conjugate region. Johnston Island is in the northern hemisphere, as were all of the planned Operation Fishbowl nuclear detonation locations. It was known from previous high altitude tests, as well as from theoretical work done in the late 1950s, that high-altitude nuclear tests produce a number of unique geophysical phenomena at the opposite end of the magnetic field line of the Earth's magnetic field.
According to the standard reference book on nuclear weapon effects by the United States Department of Defense, "For the high-altitude tests conducted in 1958 and 1962 in the vicinity of Johnston Island, the charged particles entered the atmosphere in the northern hemisphere between Johnston Island and the main Hawaiian Islands, whereas the conjugate region was in the vicinity of the Samoan, Fiji, and Tonga Islands. It is in these areas that auroras were actually observed, in addition to those in the areas of the nuclear explosions."
Beta particles are charged particles (usually with a negative electrical charge) that are released from nuclear explosions. These particles travel in a spiral along the magnetic field lines in the Earth's magnetic field. The nuclear explosions also release heavier debris ions, which also carry an electrical charge, and which also travel in a spiral along the Earth's magnetic field lines.
The Earth's magnetic field lines arc high above the Earth until they reach the magnetic conjugate area in the opposite hemisphere.
According to the DOD nuclear weapon effects reference, "Because the beta particles have high velocities, the beta auroras in the remote (southern) hemisphere appeared within a fraction of a second of those in the hemisphere where the burst had occurred. The debris ions, however, travel more slowly and so the debris aurora in the remote hemisphere, if it is formed, appears at a somewhat later time. The beta auroras are generally most intense at an altitude of 30 to 60 miles, whereas the intensity of the debris auroras is greatest in the 60 to 125 miles range. Remote conjugate beta auroras can occur if the detonation is above 25 miles, whereas debris auroras appear only if the detonation altitude is in excess of some 200 miles."
Some of the charged particles traveling along the Earth's magnetic field lines cause auroras and other geophysical phenomena in the conjugate areas. Other charged particles are reflected back along the magnetic field lines, where they can persist for long periods of time (up to several months or longer), forming artificial radiation belts.
According to the Operation Fishbowl planning document of November 1961, "Since much valuable data can be obtained from time and spectrum resolved photography, this dictates that the test be performed at nighttime when auroral photographic conditions are best." As with all U.S. Pacific high-altitude nuclear tests, all of the Operation Fishbowl tests were completed at night. This is in contrast to the high-altitude nuclear tests of the Soviet Project K nuclear tests, which were done over the populated land region of central Kazakhstan, and therefore had to be done during the daytime to avoid eyeburn damage to the population from the very bright flash of high-altitude nuclear explosions (as discussed in the introduction to this article).
First attempts
According to the initial plan of Operation Fishbowl, the nuclear tests were to be Bluegill, Starfish and Urraca, in that order. If a test were to fail, the next attempt of the same test would be of the same name plus the word "prime." If Bluegill failed, the next attempt would be Bluegill Prime, and if Bluegill Prime failed, the next attempt would be Bluegill Double Prime, etc.
Bluegill
The first planned test of Operation Fishbowl was on June 2, 1962, when a nuclear warhead was launched from Johnston Island on a Thor missile just after midnight. Although the Thor missile appeared to be on a normal trajectory, the radar tracking system lost track of the missile. Because of the large number of ships and aircraft in the area, there was no way to predict if the missile was on a safe trajectory, so the range safety officers ordered the missile with its warhead to be destroyed. No nuclear detonation occurred and no data was obtained, but subsequent investigation found that the Thor was actually following the proper flight trajectory.
Starfish
The second planned test of Operation Fishbowl was on June 19, 1962. The launch of a Thor missile with a nuclear warhead occurred just before midnight from Johnston Island. The Thor missile flew a normal trajectory for 59 seconds; then the rocket engine suddenly stopped, and the missile began to break apart. The range safety officer ordered the destruction of the missile and the warhead. The missile was between 30,000 and 35,000 feet (between 9.1 and 10.7 km) in altitude when it was destroyed.
Some of the missile parts fell on Johnston Island, and a large amount of missile debris fell into the ocean in the vicinity of the island. Navy Explosive Ordnance Disposal and Underwater Demolition Team swimmers recovered approximately 250 pieces of the missile assembly during the next two weeks. Some of the debris was contaminated with plutonium. Nonessential personnel had been evacuated from Johnston Island during the test.
Starfish Prime
On July 9, 1962, at 09:00:09 Coordinated Universal Time, which was nine seconds after 10 p.m. on July 8, Johnston Island local time, the Starfish Prime test was successfully detonated at an altitude of 400 km (250 mi). The coordinates of the detonation were 16 degrees, 28 minutes North latitude, 169 degrees, 38 minutes West longitude (30 km, or about 18 mi, southwest of Johnston Island). The actual weapon yield was very close to the design yield, which has been described by various sources at different values in the very narrow range of 1.4 to 1.45 megatons (6.0 PJ).
The Thor missile carrying the Starfish Prime warhead actually reached an apogee (maximum height) of about 1100 km (just over 680 miles), and the warhead was detonated on its downward trajectory when it had fallen to the programmed altitude of 400 km (250 mi). The nuclear warhead detonated 13 minutes and 41 seconds after liftoff of the Thor missile.
Starfish Prime caused an electromagnetic pulse (EMP) which was far larger than expected, so much larger that it drove much of the instrumentation off scale, causing great difficulty in getting accurate measurements. The Starfish Prime electromagnetic pulse also made those effects known to the public by causing electrical damage in Hawaii, about 1,445 km (898 mi) away from the detonation point, knocking out about 300 streetlights, setting off numerous burglar alarms and damaging a telephone company microwave link (the detonation time was nine seconds after 11 p.m. in Hawaii).
A total of 27 sounding rockets were launched from Johnston Island to obtain experimental data from the shot, with the first of the support rockets being launched 2 hours and 45 minutes before the launch of the Thor missile carrying the nuclear warhead. Most of these smaller instrumentation rockets were launched just after the time of the launch of the main Thor missile carrying the warhead. In addition, a large number of rocket-borne instruments were launched from a firing area at Barking Sands, Kauai, in the Hawaiian Islands.
A very large number of United States military ships and aircraft were operating in support of Starfish Prime in the Johnston Island area and across the nearby North Pacific region, including the primary instrumentation ship USAS American Mariner, with measurements conducted by personnel from RCA Service Company and Barnes Engineering Company. A few military ships and aircraft were also positioned in the southern conjugate region for the test, which was near the Samoan Islands. In addition, an uninvited observation ship from the Soviet Union was stationed near Johnston Island for the test, and another Soviet scientific expeditionary ship was located in the southern conjugate region; such uninvited observers became permanent features of all future oceanic nuclear testing.
After the Starfish Prime detonation, bright auroras were observed in the detonation area as well as in the southern conjugate region on the other side of the equator from the detonation. According to one of the first technical reports, "The visible phenomena due to the burst were widespread and quite intense; a very large area of the Pacific was illuminated by the auroral phenomena, from far south of the south magnetic conjugate area (Tongatapu) through the burst area to far north of the north conjugate area (French Frigate Shoals). ... At twilight after the burst, resonant scattering of light from lithium and other debris was observed at Johnston and French Frigate Shoals for many days confirming the longtime presence of debris in the atmosphere. An interesting side effect was that the Royal New Zealand Air Force was aided in anti-submarine maneuvers by the light from the bomb."
The Starfish Prime radiation belt persisted at high altitude for many months and damaged the United States satellites Traac, Transit 4B, Injun I and Telstar I, as well as the United Kingdom satellite Ariel. It also damaged the Soviet satellite Cosmos V. All of these satellites failed completely within several months of the Starfish detonation. There is also evidence that the Starfish Prime radiation belt may have damaged the satellites Explorer 14, Explorer 15 and Relay 1. Telstar I lasted the longest of the satellites that were clearly damaged by the Starfish Prime radiation, with its complete failure occurring on February 21, 1963.
In 2010, the United States Defense Threat Reduction Agency issued a report that had been written in support of the United States Commission to Assess the Threat to the United States from Electromagnetic Pulse Attack. The report, entitled "Collateral Damage to Satellites from an EMP Attack," discusses in great detail the satellite damage caused by the Starfish Prime artificial radiation belts as well as other historical nuclear events that caused artificial radiation belts and their effects on many satellites that were then in orbit. The same report also projects the effects of one or more present-day high altitude nuclear explosions upon the formation of artificial radiation belts and the probable resulting effects on satellites that were in orbit as of the year 2010.
Bluegill Prime
On July 25, 1962, a second attempt was made to launch the Bluegill device, but ended in disaster when the Thor suffered a stuck valve preventing the flow of LOX to the combustion chamber. The engine lost thrust and unburned RP-1 spilled down into the hot thrust chamber, igniting and starting a fire around the base of the missile. With the Thor engulfed in flames, the Range Safety Officer sent the destruct command, which split the rocket and ruptured both fuel tanks, completely destroying the missile and badly damaging the launch pad. The warhead charges also exploded asymmetrically and sprayed the area with the moderately radioactive core materials.
Although there was little danger of an accidental nuclear explosion, the destruction of the nuclear warhead on the launch pad caused contamination of the area by alpha-emitting core materials. Burning rocket fuel, flowing through the cable trenches, caused extensive chemical contamination of the trenches and the equipment associated with the cabling in the trenches.
The radioactive contamination on Johnston Island was determined to be a major problem, and it was necessary to decontaminate the entire area before the badly damaged launch pad could be rebuilt.
Operations pause
Operation Fishbowl test operations stopped after the disastrous failure of Bluegill Prime, and most of the personnel not directly involved in the radioactive cleanup and launch pad rebuild on Johnston Island returned to their home stations to await the resumption of tests.
According to the Operation Dominic I report, "The enforced pause allowed DOD to replan the remainder of the Fishbowl series. The Urraca event was canceled to avoid further damage to satellites and three new shots were added." A second launch pad was constructed during the operations pause so that Operation Fishbowl could continue in the event of another serious incident.
Continuation of the Fishbowl series
After a pause of nearly three months, Operation Fishbowl was ready to continue, beginning with another attempt at the Bluegill test.
Bluegill Double Prime
Eighty-two days after the failure of Bluegill Prime, about 30 minutes before midnight on the night of October 15, 1962, local Johnston Island time (October 16 UTC), another attempt was made at the Bluegill test. The Thor missile malfunctioned and began tumbling out of control about 85 seconds after launch, and the range safety officer ordered the destruction of the missile and its nuclear warhead about 95 seconds after launch.
Checkmate
On October 19, 1962, at about 90 minutes before midnight (local Johnston Island time), an XM-33 Strypi rocket launched a low-yield nuclear warhead which detonated successfully at an altitude of 147 km (91 mi). It was reported that the yield and burst altitude were very close to those desired, but according to most official documents the exact nuclear yield remains classified. It is reported in the open literature as simply being less than 20 kilotons. One report by the U.S. federal government, however, reported the Checkmate test yield as 10 kilotons.
It was reported that, "Observers on Johnston Island saw a green and blue circular region surrounded by a blood-red ring formed overhead that faded in less than one minute. Blue-green streamers and numerous pink striations formed, the latter lasting for 30 minutes. Observers at Samoa saw a white flash, which faded to orange and disappeared in about one minute."
Bluegill Triple Prime
The fourth attempt at the Bluegill test was launched on a Thor missile on October 25, 1962 (Johnston Island time). It resulted in a successful detonation of a submegaton nuclear warhead at about one minute before midnight, local time (the official Coordinated Universal Time was 0959 on October 26, 1962). It was officially reported as being in the submegaton range (meaning more than 200 kilotons but less than one megaton), and most observers of the U.S. nuclear testing programs believe that the nuclear yield was about 400 kilotons. One report by the U.S. federal government reported the test yield as 200 kilotons.
Since all of the Operation Fishbowl tests were planned to occur during the night, the potential for eyeburn, especially for permanent retinal damage, was an important consideration at all levels of planning. Much research went into the potential eyeburn problem. One of the official reports for the project stated that, for the altitudes planned for the Bluegill, Kingfish and Checkmate tests, "the thermal-pulse durations are of the same order of magnitude or shorter than the natural blink period which, for the average person, is about 150 milliseconds. Furthermore, the atmospheric attenuation is normally much less for a given distance than in the case of sea-level or near-sea-level explosions. Consequently, the eye-damage hazard is more severe."
Two cases of retinal damage did occur with military personnel on Johnston Island during the Bluegill Triple Prime test. Neither individual had his protective goggles in place at the instant of the detonation. One official report stated, "In the first case, acuity for central vision was 20/400 initially, but returned to 20/25 by six months. The second victim was less fortunate, as central vision did not improve beyond 20/60. The lesion diameters were 0.35 and 0.50 mm respectively. Both individuals noted immediate visual disturbances, but neither was incapacitated."
There had been concern that eyeburn problems might occur during the earlier Starfish Prime test, since the countdown was rebroadcast by radio stations in Hawaii, and many civilians would be watching the thermonuclear detonation as it occurred, but no such problems in Hawaii were reported.
Kingfish
The Kingfish detonation occurred at 0210 (Johnston Island time) on November 1, 1962, and was the fourth successful detonation of the Fishbowl series. It was officially reported only as being a submegaton explosion (meaning in the range of more than 200 kilotons, but less than a megaton), but most independent observers believe that it used the same 400 kiloton warhead as the Bluegill Triple Prime test, although one report by the U.S. federal government reported the test yield as 200 kilotons.
As with the other Fishbowl tests, a number of small rockets with various scientific instrumentation were launched from Johnson Island to monitor the effects of the high-altitude explosion. In the case of the Kingfish test, 29 rockets were launched from Johnston Island in addition to the Thor rocket carrying the nuclear warhead.
According to the official report, at the time of the Kingfish detonation, "Johnston Island observers saw a yellow-white, luminous circle with intense purple streamers for the first minute. Some of the streamers displayed what appeared to be a rapid twisting motion at times. A large pale-green patch appeared somewhat south of the burst and grew, becoming the dominant visible feature after 5 minutes. By H+1 the green had become dull gray, but the feature persisted for 3 hours. At Oahu a bright flash was observed and after about 10 seconds a great white ball appeared to rise slowly out of the sea and was visible for about 9 minutes."
After most of the electromagnetic pulse measurements on Starfish Prime had failed because the EMP was so much larger than expected, extra care was taken to obtain accurate EMP measurements on the Bluegill Triple Prime and Kingfish tests. The EMP mechanism that had been hypothesized before Operation Fishbowl had been conclusively disproven by the Starfish Prime test. Prompt gamma ray output measurements on these later tests were also carefully obtained so that a new theory of the mechanism for high-altitude EMP could be developed and confirmed. That new theory about the generation of nuclear EMP was developed by Los Alamos physicist Conrad Longmire in 1963, and it is the high-altitude nuclear EMP theory that is still used today.
As of the beginning of 2011, the EMP waveforms and prompt gamma radiation outputs for Bluegill Triple Prime and Kingfish remain classified. An unclassified report confirms that these measurements were successfully made and that a subsequent theory (which is the one now used) was developed which describes the mechanism by which the high-altitude EMP is generated. That new theory does give results which are consistent with both the Bluegill Triple Prime and Kingfish data. (The report actually using the Bluegill Triple Prime and Kingfish data to confirm the new EMP theory is the still-classified Part 2 of the unclassified report by Conrad Longmire.)
According to a Sandia National Laboratories report, EMP generated during the Operation Fishbowl tests caused "input circuit troubles in radio receivers during the Starfish and Checkmate bursts; the triggering of surge arresters on an airplane with a trailing-wire antenna during Starfish, Checkmate, and Bluegill; and the Oahu streetlight incident." (The "Oahu streetlight incident" refers to the 300 streetlights in Honolulu extinguished by the Starfish Prime detonation.)
Tightrope
The final test of Operation Fishbowl was detonated at 2130 (9:30 p.m. local Johnston Island time) on November 3, 1962 (the time and date was officially recorded as 0730 UTC, November 4, 1962). It was launched on a Nike-Hercules missile, and detonated at a lower altitude than the other Fishbowl tests. Although it was officially one of the Operation Fishbowl tests, it is sometimes not listed among high-altitude nuclear tests because of its lower detonation altitude. The nuclear yield was reported in most official documents only as being less than 20 kilotons. One report by the U.S. federal government reported the Tightrope test yield as 10 kilotons.
"At Johnston Island, there was an intense white flash. Even with high-density goggles, the burst was too bright to view, even for a few seconds. A distinct thermal pulse was also felt on the bare skin. A yellow-orange disc was formed, which transformed itself into a purple doughnut. A glowing purple cloud was faintly visible for a few minutes."
Seven rockets carrying scientific instrumentation were launched from Johnston Island in support of the Tightrope test, which was the final atmospheric test conducted by the United States.
See also
Electromagnetism
Geomagnetic storm
References
External links
Joint Task Force 8 video report on Operation Fishbowl
Explosions in 1962
1962 in military history
1962 in the United States
Johnston Atoll American nuclear explosive tests
Exoatmospheric nuclear weapons testing
Electromagnetic radiation
Energy weapons
Bombs
Electronic warfare | Operation Fishbowl | [
"Physics"
] | 5,061 | [
"Electromagnetic radiation",
"Physical phenomena",
"Radiation"
] |
3,037,035 | https://en.wikipedia.org/wiki/Degree-constrained%20spanning%20tree | In graph theory, a degree-constrained spanning tree is a spanning tree where the maximum vertex degree is limited to a certain constant k. The degree-constrained spanning tree problem is to determine whether a particular graph has such a spanning tree for a particular k.
Formal definition
Input: n-node undirected graph G(V,E); positive integer k < n.
Question: Does G have a spanning tree in which no node has degree greater than k?
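On very small instances the question can be settled by brute force, which makes the definition concrete. The Python sketch below (illustrative only; the function name and test graphs are invented here) tries every (n−1)-edge subset and checks acyclicity with a union-find structure:

```python
from itertools import combinations

def has_dc_spanning_tree(n, edges, k):
    """Brute-force decision: does the n-node graph with the given edge list
    have a spanning tree of maximum degree at most k?"""
    for subset in combinations(edges, n - 1):
        parent = list(range(n))
        def find(x):
            while parent[x] != x:
                parent[x] = parent[parent[x]]  # path halving
                x = parent[x]
            return x
        degree = [0] * n
        acyclic = True
        for u, v in subset:
            ru, rv = find(u), find(v)
            if ru == rv:        # this edge would close a cycle
                acyclic = False
                break
            parent[ru] = rv
            degree[u] += 1
            degree[v] += 1
        # n-1 acyclic edges on n nodes necessarily form a spanning tree.
        if acyclic and max(degree) <= k:
            return True
    return False

# A 4-cycle contains a Hamiltonian path, so k = 2 suffices;
# a star on 4 nodes forces its centre to have degree 3.
print(has_dc_spanning_tree(4, [(0, 1), (1, 2), (2, 3), (3, 0)], 2))  # True
print(has_dc_spanning_tree(4, [(0, 1), (0, 2), (0, 3)], 2))          # False
print(has_dc_spanning_tree(4, [(0, 1), (0, 2), (0, 3)], 3))          # True
```

The exponential running time of this enumeration is consistent with the NP-completeness result discussed next.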
NP-completeness
This problem is NP-complete; this can be shown by a reduction from the Hamiltonian path problem. It remains NP-complete even if k is fixed to a value ≥ 2. If the problem is defined as requiring the degree to be ≤ k, the k = 2 case of the degree-constrained spanning tree problem is exactly the Hamiltonian path problem.
Degree-constrained minimum spanning tree
On a weighted graph, a degree-constrained minimum spanning tree (DCMST) is a degree-constrained spanning tree whose edge weights have the minimum possible sum. Finding a DCMST is an NP-hard problem.
Heuristic algorithms that run in polynomial time have been proposed, including genetic algorithms and ant-based algorithms.
Approximation algorithm
Fürer and Raghavachari (1994) give an iterative polynomial-time algorithm which, given a graph G, returns a spanning tree with maximum degree no larger than Δ* + 1, where Δ* is the minimum possible maximum degree over all spanning trees of G. Thus, if k = Δ*, such an algorithm will either return a spanning tree of maximum degree k or one of maximum degree k + 1.
References
Spanning tree
NP-complete problems | Degree-constrained spanning tree | [
"Mathematics"
] | 309 | [
"NP-complete problems",
"Mathematical problems",
"Computational problems"
] |
3,037,149 | https://en.wikipedia.org/wiki/Nanocar | The nanocar is a molecule designed in 2005 at Rice University by a group headed by Professor James Tour. Despite the name, the original nanocar does not contain a molecular motor; hence, it is not really a car. Rather, it was designed to answer the question of how fullerenes move about on metal surfaces; specifically, whether they roll or slide (they roll).
The molecule consists of an H-shaped 'chassis' with fullerene groups attached at the four corners to act as wheels.
When dispersed on a gold surface, the molecules attach themselves to the surface via their fullerene groups and are detected via scanning tunneling microscopy. One can deduce their orientation as the frame length is a little shorter than its width.
Upon heating the surface to 200 °C the molecules move forward and back as they roll on their fullerene "wheels". The nanocar is able to roll about because the fullerene wheel is fitted to the alkyne "axle" through a carbon-carbon single bond. The hydrogen on the neighboring carbon is no great obstacle to free rotation. When the temperature is high enough, the four carbon-carbon bonds rotate and the car rolls about. Occasionally the direction of movement changes as the molecule pivots. The rolling action was confirmed by Professor Kevin Kelly, also at Rice, by pulling the molecule with the tip of the STM.
Independent early conceptual contribution
The concept of a nanocar built out of molecular "tinkertoys" was first hypothesized by M.T. Michalewicz at the Fifth Foresight Conference on Molecular Nanotechnology (November 1997). Subsequently, an expanded version was published in Annals of Improbable Research. These papers were supposed to be a not-so-serious contribution to a fundamental debate on the limits of bottom-up Drexlerian nanotechnology and conceptual limits of how far mechanistic analogies advanced by Eric Drexler could be carried out. The important feature of this nanocar concept was the fact that all molecular component tinkertoys were known and synthesized molecules (alas, some very exotic and only recently discovered, e.g. staffanes, and notably – ferric wheel, 1995), in contrast to some Drexlerian diamondoid structures that were only postulated and never synthesized; and the drive system that was embedded in a ferric wheel and driven by inhomogeneous or time-dependent magnetic field of a substrate – an "engine in a wheel" concept.
Nanodragster
The Nanodragster, dubbed the world's smallest hot rod, is a molecular nanocar. The design improves on previous nanocar designs and is a step towards creating molecular machines. The name comes from the nanocar's resemblance to a dragster, as its staggered wheel fitment has a shorter axle with smaller wheels in the front and a larger axle with larger wheels in the back.
The nanodragster was developed at Rice University's Richard E. Smalley Institute for Nanoscale Science and Technology by the team of James Tour, Kevin Kelly and other colleagues involved in its research. The previous nanocar was 3 to 4 nanometers across, a little over the width of a strand of DNA and around 20,000 times thinner than a human hair. Those nanocars were built with carbon buckyballs as their four wheels, and the surface on which they were placed had to be heated to about 200 °C to get them moving. A nanocar which utilized p-carborane wheels, on the other hand, moved as if sliding on ice rather than rolling. Such observations led to the production of nanocars with both wheel designs.
The nanodragster is 50,000 times thinner than a human hair and has a top speed of 0.014 millimeters per hour (0.0006 in/h or 3.89×10−9 m/s). The rear wheels are spherical fullerene molecules, or buckyballs, composed of sixty carbon atoms each, which are attracted to a dragstrip that is made up of a very fine layer of gold. This design also enabled Tour’s team to operate the device at lower temperatures.
The nanodragster and other nano-machines are designed for use in transporting items. The technology can be used in manufacturing computer circuits and electronic components, or in conjunction with pharmaceuticals inside the human body. Tour also speculated that the knowledge gained from the nanocar research would help build efficient catalytic systems in the future.
Electrically driven directional motion of a four-wheel molecule on a metal surface
Kudernac et al. described a specially designed molecule that has four motorized "wheels". By depositing the molecule on a copper surface and providing them with sufficient energy from electrons of a scanning tunnelling microscope they were able to drive some of the molecules in a specific direction, much like a car, being the first single molecule capable to continue moving in the same direction across a surface. Inelastic electron tunnelling induces conformational changes in the rotors and propels the molecule across a copper surface. By changing the direction of the rotary motion of individual motor units, the self-propelling molecular 'four-wheeler' structure can follow random or preferentially linear trajectories. This design provides a starting point for the exploration of more sophisticated molecular mechanical systems, perhaps with complete control over their direction of motion.
This electrically driven nanocar was built under supervision of University of Groningen chemist Bernard L. Feringa, who was awarded the Nobel Prize for Chemistry in 2016 for his pioneering work on nanomotors, together with Jean-Pierre Sauvage and J. Fraser Stoddart.
Motor nanocar
A nanocar with a synthetic molecular motor has been developed by Jean-François Morin et al. It is fitted with carborane wheels and a light-powered helicene synthetic molecular motor. Although the motor moiety displayed unidirectional rotation in solution, light-driven motion on a surface has yet to be observed. Mobility in water and other liquids may also be realized by a molecular propeller in the future.
See also
NanoPutian
Nanocar race
References
Molecular machines
Nanotechnology | Nanocar | [
"Physics",
"Chemistry",
"Materials_science",
"Technology",
"Engineering"
] | 1,263 | [
"Machines",
"Materials science",
"Molecular machines",
"Physical systems",
"Nanotechnology"
] |
11,869,868 | https://en.wikipedia.org/wiki/Wu%20Cheng-wen%20%28biochemist%29 | Wu Cheng-wen (; born 19 June 1938) is a Taiwanese biochemist. He is the founding president of National Health Research Institutes, serving from 1996 to 2005.
Wu was elected as an academician of Taiwan's Academia Sinica in 1984. He is a 1988 Guggenheim fellow, as well as a 2011 recipient of the in Life Sciences.
Wu was the director of Academia Sinica's Institute of Biomedical Sciences, and served on the Council of the Academia Sinica. He was a professor of pharmacological sciences at Stony Brook University, and lived in Setauket, New York. Currently he works as a special lecturer at National Yang-Ming University.
References
1938 births
Affiliated Senior High School of National Taiwan Normal University alumni
Living people
Taiwanese biochemists
Members of Academia Sinica
Scientists from Taipei
Taiwanese expatriates in the United States
20th-century Taiwanese scientists
Stony Brook University faculty
20th-century biochemists
21st-century Taiwanese scientists
21st-century biochemists | Wu Cheng-wen (biochemist) | [
"Chemistry"
] | 200 | [
"Biochemistry stubs",
"Biochemists",
"Biochemist stubs"
] |
11,869,902 | https://en.wikipedia.org/wiki/Geosynchronous%20satellite | A geosynchronous satellite is a satellite in geosynchronous orbit, with an orbital period the same as the Earth's rotation period. Such a satellite returns to the same position in the sky after each sidereal day, and over the course of a day traces out a path in the sky that is typically some form of analemma. A special case of geosynchronous satellite is the geostationary satellite, which has a geostationary orbit – a circular geosynchronous orbit directly above the Earth's equator. Another type of geosynchronous orbit used by satellites is the Tundra elliptical orbit.
Geostationary satellites have the unique property of remaining permanently fixed in exactly the same position in the sky as viewed from any fixed location on Earth, meaning that ground-based antennas do not need to track them but can remain fixed in one direction. Such satellites are often used for communication purposes; a geosynchronous network is a communication network based on communication with or through geosynchronous satellites.
Definition
The term geosynchronous refers to the satellite's orbital period, which is matched to the rotation of the Earth ("geo-"). Along with this orbital-period requirement, to be geostationary as well, the satellite must be placed in an orbit that keeps it in the vicinity of the equator. These two requirements make the satellite appear in an unchanging area of visibility when viewed from the Earth's surface, enabling continuous operation from one point on the ground. The special case of a geostationary orbit is the most common type of orbit for communications satellites.
If a geosynchronous satellite's orbit is not exactly aligned with the Earth's equator, the orbit is known as an inclined orbit. It will appear (when viewed by someone on the ground) to oscillate daily around a fixed point. As the angle between the orbit and the equator decreases, the magnitude of this oscillation becomes smaller; when the orbit lies entirely over the equator in a circular orbit, the satellite remains stationary relative to the Earth's surface – it is said to be geostationary.
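The orbital radius that yields a one-sidereal-day period follows from Kepler's third law, T² = 4π²a³/μ. A short Python sketch (the constants are standard textbook values, not taken from this article):

```python
import math

MU_EARTH = 3.986004418e14  # Earth's gravitational parameter, m^3/s^2
SIDEREAL_DAY = 86164.1     # Earth's rotation period, seconds

# Kepler's third law solved for the semi-major axis: a = (mu T^2 / 4 pi^2)^(1/3)
a = (MU_EARTH * SIDEREAL_DAY**2 / (4 * math.pi**2)) ** (1 / 3)
altitude = a - 6378137     # subtract Earth's equatorial radius, m
print(f"orbital radius ~ {a / 1e3:,.0f} km")         # ~42,164 km
print(f"altitude       ~ {altitude / 1e3:,.0f} km")  # ~35,786 km
```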
Application
There are approximately 446 geosynchronous satellites, some of which are not operational.
Geostationary satellites appear to be fixed over one spot above the equator. Receiving and transmitting antennas on the earth do not need to track such a satellite. These antennas can be fixed in place and are much less expensive than tracking antennas. These satellites have revolutionized global communications, television broadcasting and weather forecasting, and have a number of important defense and intelligence applications.
One disadvantage of geostationary satellites is a result of their high altitude: radio signals take approximately 0.25 of a second to reach and return from the satellite, resulting in a small but significant signal delay. This delay increases the difficulty of telephone conversation and reduces the performance of common network protocols such as TCP/IP, but does not present a problem with non-interactive systems such as satellite television broadcasts. There are a number of proprietary satellite data protocols that are designed to proxy TCP/IP connections over long-delay satellite links—these are marketed as being a partial solution to the poor performance of native TCP over satellite links. TCP presumes that all loss is due to congestion, not errors, and probes link capacity with its "slow start" algorithm, which only sends packets once it is known that earlier packets have been received. Slow start is very slow over a path using a geostationary satellite. RFC 2488, written in 1999, gives several suggestions on this issue.
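The quoted delay can be reproduced from the orbit geometry. The sketch below (illustrative figures; it assumes a ground station directly beneath the satellite and an arbitrary 10 Mbit/s link) also computes the bandwidth-delay product, the amount of unacknowledged data TCP must keep in flight to fill such a link:

```python
C = 299_792_458        # speed of light in vacuum, m/s
ALTITUDE_M = 35_786e3  # geostationary altitude, m

# Ground -> satellite -> ground, station directly below the satellite.
rtt = 2 * ALTITUDE_M / C
print(f"one-hop round trip: {rtt * 1e3:.0f} ms")  # ~239 ms

# Bandwidth-delay product for an assumed 10 Mbit/s link.
bdp = 10e6 / 8 * rtt
print(f"bandwidth-delay product: {bdp / 1024:.0f} KiB")  # ~291 KiB
```

Until roughly this much data has been sent without acknowledgement, the link sits idle, which is why slow start ramps up so slowly over geostationary paths.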
There are some advantages of geostationary satellites:
They provide high temporal resolution data.
Tracking of the satellite by its earth stations is simplified.
The satellite always remains in the same position relative to ground antennas.
A disadvantage of geostationary satellites is their incomplete geographical coverage, since ground stations at higher than roughly 60 degrees latitude have difficulty reliably receiving signals at low elevations. Satellite dishes at such high latitudes would need to be pointed almost directly towards the horizon. The signals would have to pass through the largest amount of atmosphere, and could even be blocked by land topography, vegetation or buildings. In the USSR, a practical solution to this problem was developed with the creation of the special Molniya / Orbita satellite networks, which use inclined, highly elliptical orbits. Similar elliptical orbits are used for the Sirius Radio satellites.
History
The concept was first proposed by Herman Potočnik in 1928 and popularised by the science fiction author Arthur C. Clarke in a paper in Wireless World in 1945. Working prior to the advent of solid-state electronics, Clarke envisioned a trio of large, crewed space stations arranged in a triangle around the planet. Modern satellites are numerous, uncrewed, and often no larger than an automobile.
Widely known as the "father of the geosynchronous satellite", Harold Rosen, an engineer at Hughes Aircraft Company, invented the first operational geosynchronous satellite, Syncom 2. It was launched on a Delta B booster from Cape Canaveral on July 26, 1963.
The first geostationary communication satellite was Syncom 3, launched on August 19, 1964, with a Delta D launch vehicle from Cape Canaveral. The satellite, in orbit approximately above the International Date Line, was used to telecast the 1964 Summer Olympics in Tokyo to the United States.
Westar 1 was America's first domestic and commercially launched geostationary communications satellite, launched by Western Union and NASA on April 13, 1974.
See also
Geosynchronous orbit
Geostationary orbit
Geostationary balloon satellite
Graveyard orbit
List of orbits
List of satellites in geosynchronous orbit
Molniya orbit
Tundra orbit
Polar mount - Mount useful for aiming a satellite dish at geosynchronous satellites
Satellite television
References
External links
Lyngsat list of communications satellites in geostationary orbit
For an interactive list of active and inactive geosynchronous satellites, see NORAD CelesTrak
Satellites by orbit
Satellite broadcasting
Telecommunications-related introductions in 1963 | Geosynchronous satellite | [
"Engineering"
] | 1,235 | [
"Telecommunications engineering",
"Satellite broadcasting"
] |
11,869,917 | https://en.wikipedia.org/wiki/Southeast%20Asian%20haze | The Southeast Asian haze is a fire-related recurrent transboundary air pollution issue. Haze events, where air quality reaches hazardous levels due to high concentrations of airborne particulate matter from burning biomass, have caused adverse health, environmental and economic impacts in several countries in Southeast Asia. Caused primarily by slash-and-burn land clearing, the problem flares up every dry season to varying degrees and generally is worst between July and October and during El Niño events. Transboundary haze in Southeast Asia has been recorded since 1972 with the 1997 and 2015 events being particularly severe.
Industrial-scale slash-and-burn practices to clear land for agricultural purposes are a major cause of the haze, particularly for palm oil and pulpwood production in the region. Burning land occurs as it is cheaper and faster compared to cutting and clearing using excavators or other machinery. Fires started for this purpose sometimes spread and create forest fires, worsening the problem. The high concentration of peat in soil contributes to the haze's density and high sulphur content.
Fires in Indonesia (particularly South Sumatra and Riau in Sumatra, and Kalimantan in Borneo), and to a lesser extent in Malaysia and Thailand, have been identified as sources. The haze regularly has a major impact on air quality in Indonesia, Malaysia, Singapore and Brunei Darussalam; to a lesser extent and in particularly severe years, it also impacts the Philippines, Thailand, Vietnam, Cambodia and countries outside the region.
Haze events have been shown to cause health issues and mortality in affected areas, and have caused disruption to economic activity and education as sectors are forced to close to minimise exposure to hazardous air. The haze also has a substantial environmental impact, being a major contributor to greenhouse gas emissions in the region and affecting wildlife and ecosystems.
The haze is an international issue which has caused regional political tensions. Efforts have been made to mitigate haze events and their impacts, and some relevant frameworks for regional cooperation among ASEAN countries have been introduced. Challenges remain in implementing these, and mitigation efforts have failed to prevent haze from reoccurring.
Causes
Most haze events have resulted from smoke from fires that occurred on peatlands in Sumatra and the Kalimantan region of Borneo island. Poor accountability and transparency of Indonesian agricultural companies, and limited political and economic incentives to hold companies to account, have been identified as key barriers to mitigating the issue.
Undisturbed humid tropical forests are considered to be very resistant to fire, experiencing rare fires only during extraordinarily dry periods.
A study published in 2005 concluded that there is no single dominant cause of fire at any particular site, and that the causes of fires differ widely between sites. The study identified the following direct and indirect causes of fire:
Direct causes of fire
Fire as a tool in land clearing
Fire as a weapon in land tenure or land use disputes
Accidental or escaped fires
Fire connected with resource extraction
Indirect causes of fire
Land tenure and land use allocation conflicts and competition
Forest degrading practices
Economic incentives/disincentives
Population growth and migration
Inadequate fire fighting and management capacity
Fire as a tool in land clearing
Fire is the cheapest and fastest method to clear land in preparation for planting. Fire is used to clear the plant material left over from logging or old crops. Mechanically raking the plant material into long piles and letting them rot over time is expensive and slow, and the piles could harbour pests. Clearing land with machines and chemicals can cost up to US$200 per hectare, while using fire costs US$5 per hectare.
After a peat swamp forest has been cleared and drained, the peat soil is still unsuitable for agriculture, because peat soil is nutrient-poor and acidic (pH 3–4). To make the soil suitable for agriculture, the pH has to be neutralised and nutrients added. Pests and plant diseases also have to be removed. One method is to use chemicals such as limestone to neutralise the acidity, along with fertilisers and pesticides. This method costs about 30–40 million rupiah per hectare. Alternatively, fire is used to clear the plant material left over from logging. The fire kills pests, and the resulting ash serves to fertilise the soil and neutralise the acidity. This method costs about 2 million rupiah per hectare.
Land conflicts
In Indonesia, the Basic Forestry Law grants the Ministry of Forestry authority over all land classified as forest. Approximately 49% of the nation (909,070 square kilometres) is covered by actual forest, although the government classifies 69% of the land area (1,331,270 square kilometres) as forest. The land rights of traditional communities that live on land classified as forest cannot be registered and are generally unrecognised by the state. Therefore, these communities have little ability to enforce rules at the village level or to exclude outsiders such as oil palm plantations, logging companies, residents of other villages, migrants, small-scale loggers or transmigrants. Competing claims in turn lead to land conflicts. As the number of new, external actors increases, so does the likelihood that fire will be used as a weapon.
Role of peat
A peatland is an area where organic material such as leaves and twigs has accumulated naturally under waterlogged conditions over the last 10,000 years. This layer of organic material, known as peat, can be up to 20 m deep. Indonesia has 265,500 km2 of peatland, which comprises 13.9% of its land area. Malaysia also has significant peatland in Peninsular Malaysia and Borneo, at 26,685 km2, covering 8.1% of its land area.
Although originally a wetland ecosystem, much of the peatland in Southeast Asia has been drained for human activities such as agriculture, forestry and urban development. A report published in 2011 stated that more than 30% of peat swamp forests had been converted to agricultural land and a further 30% had been logged or degraded in the preceding 20 to 30 years. Excessive drainage results in the top layer of peat drying out. Due to its high carbon content, dry peat is extremely susceptible to burning, especially during the dry season.
Studies have shown that peat fires are a major contributor to the haze. In 2009, around 40% of all fires in Peninsular Malaysia, Borneo, Sumatra and Java were detected in peatlands, even though they cover only 10% of the land area studied. The concentration of sulphur in rain falling over Singapore in 1997 correlated closely with the PM2.5 concentration, which can be attributed to the strong sulphur emission from peat fires.
History
Southeast Asian haze has frequently reoccurred, with the severity and regions affected differing between seasons. The issue has been recorded since 1972. The 1997 Southeast Asian haze, caused by major forest fires in Indonesia, is thought to be the most severe on record, leading to dangerous pollution across most of Southeast Asia and affecting air quality as far as Sri Lanka. The 2015 haze has also been highlighted as a particularly severe year. In 2020, lockdowns and other social movement restrictions introduced due to the COVID-19 pandemic are thought to have reduced air pollution across the region.
1997 Southeast Asian haze
1997 Indonesian forest fires
2005 Malaysian haze
2006 Southeast Asian haze
2009 Southeast Asian haze
2010 Southeast Asian haze
2013 Southeast Asian haze
2015 Southeast Asian haze
2016 Southeast Asian haze
2017 Southeast Asian haze
2019 Southeast Asian haze
2023 Southeast Asian haze
Effects
Haze-related damage can be attributed to two sources: the fires that cause the haze and the haze itself. Each can significantly disrupt daily life and affect health. As a whole, the recurring haze incidents have affected the regional economy and generated contention between the governments of affected nations.
Direct fire damage
Haze fires can cause many kinds of damage, both local and transboundary. These include loss of direct and indirect forest benefits, timber, agricultural products and biodiversity. The fires also incur significant firefighting costs and release carbon to the atmosphere.
Forest fires that contribute to haze are a part of deforestation in Indonesia and Malaysia, a major environmental issue.
Economy
Some of the more direct damage caused by haze includes damage to regional tourism during haze periods, as flights have to be cancelled or delayed during particularly severe events. The haze also leads to industrial production losses, airline and airport losses, and damage to fisheries, and incurs the cost of cloud seeding. In addition, severe haze can lead to reduced crop productivity, accidents, evacuations, and the loss of confidence of foreign investors.
The 1997 Southeast Asian haze is thought to have led to US$9bn in damages across ASEAN whilst the 2015 haze cost Indonesia alone an estimated $16bn.
Education
School closures have affected many parts of Malaysia, Singapore and Indonesia, sometimes for several weeks, due to hazardous air pollution.
Health
The health effects of haze depend on its severity as measured by the Pollutant Standards Index (PSI). Levels above 100 are classified as unhealthy and anything above 300 as hazardous. There is also individual variation in the ability to tolerate air pollution. Most people would at most experience sneezing, runny nose, eye irritation, dry throat and dry cough from the pollutants.
However, persons with medical conditions like asthma, chronic lung disease, chronic sinusitis and allergic skin conditions are likely to be more severely affected by the haze and they may experience more severe symptoms. Children and the elderly in general are more likely to be affected. For some, symptoms may worsen with physical activity. One study linked the haze to increased lung cancer diagnoses in Malaysia.
The transboundary Southeast Asian haze has been linked to various cardiovascular conditions including acute ischemic stroke, acute myocardial infarction and cardiac arrest. These studies found a dose-dependent effect of PSI on the risk of developing these conditions. There appears to be increased susceptibility amongst the elderly and those with a history of heart disease and diabetes mellitus. The risk remains elevated for several days after exposure. PSI during periods of haze has also been correlated with all-cause mortality, as well as with respiratory illnesses presenting to emergency departments and hospital admissions.
The 1997 Southeast Asian haze is estimated to have directly led to 40,000 hospitalisations. A 2016 study estimated the 2015 Southeast Asian haze may have caused around 100,000 deaths, most of which were in Indonesia; the BBC estimated over 500,000 suffered from respiratory ailments in the same season.
A population study found that individuals experienced mild psychological stress during haze episodes, which was associated with the perceived danger of the PSI level and the number of physical symptoms.
Environment
In addition to the direct burning of rainforest, the haze also harms wildlife in the region such as orangutans, birds and amphibians, by impacting their health and reproduction. It has also been suggested that haze affects marine ecosystems.
The haze also contributes to greenhouse gas emissions, to such an extent that Indonesia's national daily emissions increased tenfold and temporarily exceeded those of China and the United States during the 2015 haze season. Deforestation in Indonesia contributed to the country being the third highest emitter in the world as of 2013. Commentators have suggested Indonesia's emissions during haze seasons undermine potential efforts to reach its pledged Nationally Determined Contribution under the Paris Agreement.
Responses
Countries have responded to haze events with state of emergency declarations, cloud seeding to clear air and mobilising firefighting resources to areas being burned. The public have also been recommended to stay at home with the doors closed, and wear face masks when outside to minimise exposure to hazardous air quality. During the severe 1997 haze caused primarily by forest fires in Indonesia, Malaysian Prime Minister Mahathir Mohamad announced Operation Haze, sending Malaysian firefighters to Indonesia to support the response.
Singapore and Malaysia continuously monitor and report air pollution levels, using the Pollutant Standards Index and Air Pollution Index, respectively.
ASEAN introduced the ASEAN Agreement on Transboundary Haze Pollution in 2002 following the severe international impact of the 1997 haze. Despite its major contribution to the issue, Indonesia became the last ASEAN country to ratify it, in 2014.
Singapore introduced the Transboundary Haze Pollution Act 2014, which criminalises activities overseas that contribute to haze. Implementation of the domestic act to mitigate the regional issue has been challenging and has affected Indonesia–Singapore relations. Singapore's investigations into individuals involved in the 2015 haze were accepted by Indonesia, on the condition that they did not violate Indonesian sovereignty. Efforts have been made to introduce a similar domestic law in Malaysia, although the government shelved this in 2020.
The Roundtable on Sustainable Palm Oil added "no peat" to its certification scheme in response to the link between palm oil and peat burning.
Proposed solutions
The below solutions are proposed by Dennis et al. to mitigate the direct and indirect causes of fires which result in haze.
Reduce the use of fire as a tool in land clearing
Indonesian law prohibits the use of fire to clear land for any agricultural purpose, but weak enforcement is a major issue. Many companies have also claimed that zero burning is impractical and uncompetitive given the lack of meaningful penalties for illegal burning.
Land-use allocations and tenure
Research shows that the most common cause of fire is competition and conflict over land tenure and land allocation. Land-use allocation decisions made by central government agencies often overlap with the concession boundaries of local jurisdictions and indigenous communities' territories. Regional reforms are needed to resolve these resource conflicts, and they offer opportunities for regional governments to reconcile their decisions with those of local and customary institutions. Regional reforms should also ensure that land and resource allocations and decisions at all levels are compatible with physical site characteristics, prominently taking fire risks into account. However, Indonesia's legacy of inaccurate maps, overlapping boundaries, and a lack of technical expertise at the provincial and district levels will make this a difficult task.
Reduce forest degrading practices
Policies to improve land management and measures to restore ecological integrity to degraded natural forests are extremely important to reduce the incidence of repeated fires. Promoting community involvement in such rehabilitation efforts is critical for their success in reducing fire risks.
Capacity to prevent and suppress fires
The fires in Kalimantan and Sumatra highlight the need to develop fire management systems that address concerns of specific areas. Sufficient resources must be made available to improve fire management in regions that need them, while recognising the diverse needs of different regions and the people within them.
Technology such as remote sensing, digital mapping, and instantaneous communications can help to predict, detect, and respond to potential fire crises. However, such technology must be broadly accessible, widely used, and transparently controlled before it can be effective in improving fire management in remote regions.
Economic disincentives and incentives
In addition to effective criminal and monetary penalties for illegal burning and liability for fire damage, some policy analysts believe in the potential of economic policy reforms and market-based incentives. A combination of eco-labelling and international trade restrictions could reduce markets for commodities whose production poses high fire risks. The government could also provide fiscal advantages to support companies' investments in fire management.
See also
ASEAN Agreement on Transboundary Haze Pollution
Asian Brown Cloud
Chemical Equator
Peat swamp forest
Air pollution in Malaysia
Environmental issues in Indonesia
Deforestation in Indonesia
Palm oil production in Indonesia
Stubble burning
Combustion of biomass
References
Smog
Air pollution by region
Environmental disasters
Transboundary environmental issues
Environmental issues in Brunei
Environmental issues in Malaysia
Environmental issues in Indonesia
Environmental issues in Thailand
Health disasters in Asia
Health disasters in Malaysia
Health disasters in Indonesia
Health disasters in Singapore
Health disasters in Thailand | Southeast Asian haze | [
"Physics"
] | 3,136 | [
"Visibility",
"Smog",
"Physical quantities"
] |
11,869,942 | https://en.wikipedia.org/wiki/Clutter%20%28radar%29 | Clutter is the unwanted return (echoes) in electronic systems, particularly in reference to radars. Such echoes are typically returned from ground, sea, rain, animals/insects, chaff and atmospheric turbulence, and can cause serious performance issues with radar systems. What one person considers to be unwanted clutter, another may consider to be a wanted target. However, targets usually refer to point scatterers and clutter to extended scatterers (covering many range, angle, and Doppler cells). The clutter may fill a volume (such as rain) or be confined to a surface (like land). A knowledge of the volume or surface area illuminated is required to estimate the echo per unit volume, η, or echo per unit surface area, σ° (the radar backscatter coefficient).
Causes
Clutter may be caused by man-made objects such as buildings and — intentionally — by radar countermeasures such as chaff. Other causes include natural objects such as terrain features, sea, precipitation, hail spike, dust storms, birds, turbulence in the atmospheric circulation, and meteor trails. Radar clutter can also be caused by other atmospheric phenomena, such as disturbances in the ionosphere caused by geomagnetic storms or other space weather events. This phenomenon is especially apparent near the geomagnetic poles, where the action of the solar wind on the Earth's magnetosphere produces convection patterns in the ionospheric plasma. Radar clutter can degrade the ability of over-the-horizon radar to detect targets. Clutter may also originate from multipath echoes from valid targets caused by ground reflection, atmospheric ducting or ionospheric reflection/refraction (e.g., anomalous propagation). This clutter type is especially bothersome since it appears to move and behave like common targets of interest, such as aircraft or weather balloons.
Clutter-limited or noise-limited radar
Electromagnetic signals processed by a radar receiver consist of three main components: useful signal (e.g., echoes from aircraft), clutter, and noise. The total signal competing with the target return is thus clutter plus noise. In practice there is often either no clutter or clutter dominates and the noise can be ignored. In the first case, the radar is said to be noise-limited, while in the second it is clutter-limited.
Volume clutter
Rain, hail, snow and chaff are examples of volume clutter. For example, suppose an airborne target, at range $R$, is within a rainstorm. What is the effect on the detectability of the target?
First find the magnitude of the clutter return. Assume that the clutter fills the cell containing the target, that the scatterers are statistically independent and that the scatterers are uniformly distributed through the volume. The clutter volume illuminated by a pulse can be calculated from the beam widths and the pulse duration. If $c$ is the speed of light and $\tau$ is the time duration of the transmitted pulse, then the pulse returning from a target is equivalent to a physical extent of $c\tau/2$, as is the return from any individual element of the clutter. The azimuth and elevation beamwidths, at a range $R$, subtend widths of $R\theta_{az}$ and $R\theta_{el}$ respectively, if the illuminated cell is assumed to have an elliptical cross section.
The volume of the illuminated cell is thus:
$V_m = \pi \left( R \tan\frac{\theta_{az}}{2} \right) \left( R \tan\frac{\theta_{el}}{2} \right) \frac{c\tau}{2}$
For small angles this simplifies to:
$V_m \approx \frac{\pi}{4} R^2 \theta_{az} \theta_{el} \frac{c\tau}{2}$
The clutter is assumed to be a large number of independent scatterers that fill the cell containing the target uniformly. The clutter return from the volume is calculated as for the normal radar equation, but with the radar cross section replaced by the product of the volume backscatter coefficient, $\eta$, and the clutter cell volume derived above. The clutter return is then
$C = \frac{P_t G A_e \eta V_m}{(4\pi)^2 R^4}$
where
$P_t$ = transmitter power (Watts)
$G$ = gain of the transmitting antenna
$A_e$ = effective aperture (area) of the receiving antenna
$R$ = distance from the radar to the target
$V_m$ = volume of the clutter cell
A correction must be made to allow for the fact that the illumination of the clutter is not uniform across the beamwidth. In practice the beam shape approximates a sinc function, which itself approximates a Gaussian function. The correction factor is found by integrating the Gaussian approximation of the antenna pattern across the beamwidth, which reduces the effective clutter return by a factor of $2\ln 2$. The corrected backscattered power is
$C = \frac{P_t G A_e \eta V_m}{2\ln 2\,(4\pi)^2 R^4}$
A number of simplifying substitutions can be made.
The receiving antenna aperture is related to its gain by:
$A_e = \frac{G \lambda^2}{4\pi}$
and the antenna gain is related to the two beamwidths by:
$G \approx \frac{4\pi}{\theta_{az}\,\theta_{el}}$
The same antenna is generally used both for transmission and reception, thus the received clutter power is:
$C = \frac{P_t \lambda^2 \eta\, c\tau}{64 \ln 2\;\theta_{az}\,\theta_{el}\, R^2}$
If the clutter return power is greater than the system noise power, then the radar is clutter limited, and the signal to clutter ratio must be equal to or greater than the minimum signal to noise ratio for the target to be detectable.
From the radar equation the return from the target itself will be
$S = \frac{P_t G A_e \sigma}{(4\pi)^2 R^4}$
with a resulting expression for the signal to clutter ratio of
$\frac{S}{C} = \frac{2 \ln 2\;\sigma}{\eta\, V_m}$
The implication is that when the radar is noise limited the variation of signal to noise ratio is an inverse fourth power. Halving the distance will cause the signal to noise ratio to increase (improve) by a factor of 16. When the radar is volume clutter limited, however, the variation is an inverse square law and halving the distance will cause the signal to clutter to improve by only 4 times.
Since
$V_m \approx \frac{\pi}{4} R^2 \theta_{az} \theta_{el} \frac{c\tau}{2}$
it follows that
$\frac{S}{C} = \frac{16 \ln 2\;\sigma}{\pi\,\eta\, R^2\, \theta_{az}\,\theta_{el}\, c\tau}$
Clearly narrow beamwidths and short pulses are required to reduce the effect of clutter by reducing the volume of the clutter cell. If pulse compression is used then the appropriate pulse duration to be used in the calculation is that of the compressed pulse, not the transmitted pulse.
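To make the range scaling concrete, here is a minimal numerical sketch of the volume-clutter case. All input values (1° beamwidths, a 1 µs pulse, a 1 m² target and a volume backscatter coefficient of 10⁻⁷ m⁻¹) are illustrative assumptions rather than figures from this article, and the beam-shape constant is omitted since it does not affect the scaling with range.

```python
import math

c = 3.0e8  # speed of light, m/s

def clutter_cell_volume(R, theta_az, theta_el, tau):
    """Small-angle clutter cell volume: V_m ~ (pi/4) R^2 theta_az theta_el (c tau / 2)."""
    return (math.pi / 4) * R**2 * theta_az * theta_el * (c * tau / 2)

def signal_to_clutter(R, theta_az, theta_el, tau, sigma, eta):
    """S/C = sigma / (eta V_m); beam-shape constant dropped (range scaling unaffected)."""
    return sigma / (eta * clutter_cell_volume(R, theta_az, theta_el, tau))

theta = math.radians(1.0)  # 1 degree beamwidths (hypothetical)
tau = 1e-6                 # 1 microsecond pulse (hypothetical)
sigma = 1.0                # target radar cross section, m^2 (hypothetical)
eta = 1e-7                 # volume backscatter coefficient, m^-1 (hypothetical)

for R in (50e3, 25e3):     # halving the range...
    sc = signal_to_clutter(R, theta, theta, tau, sigma, eta)
    print(f"R = {R/1e3:4.0f} km -> S/C = {10*math.log10(sc):6.1f} dB")
# ...improves S/C by 6 dB (a factor of 4): the inverse-square behaviour of
# volume clutter, versus 12 dB (a factor of 16) for a noise-limited radar.
```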
Problems in calculating signal to volume clutter ratio
A problem with volume clutter, e.g. rain, is that the volume illuminated may not be completely filled, in which case the fraction filled must be known, and the scatterers may not be uniformly distributed. Consider a beam 10° wide in elevation: at a range of 10 km the beam could cover from ground level to a height of 1,750 metres. There could be rain at ground level, but the top of the beam could be above cloud level. In the part of the beam containing rain, the rainfall rate will not be constant. One would need to know how the rain was distributed to make any accurate assessment of the clutter and the signal to clutter ratio. All that can be expected from the equation is an estimate to the nearest 5 or 10 dB.
Surface clutter
The surface clutter return depends upon the nature of the surface, its roughness, the grazing angle (angle the beam makes with the surface), the frequency and the polarisation. The reflected signal is the phasor sum of a large number of individual returns from a variety of sources, some of them capable of movement (leaves, rain drops, ripples) and some of them stationary (pylons, buildings, tree trunks). Individual samples of clutter vary from one resolution cell to another (spatial variation) and vary with time for a given cell (temporal variation).
Beam filling
For a target close to the Earth's surface, such that the Earth and target are in the same range resolution cell, one of two conditions is possible. The most common case is when the beam intersects the surface at such an angle that the area illuminated at any one time is only a fraction of the surface intersected by the beam.
Pulse length limited case
For the pulse length limited case the area illuminated depends upon the azimuth width of the beam and the length of the pulse, measured along the surface. The illuminated patch has a width in azimuth of
$2 R \tan\frac{\theta_{az}}{2} \approx R\,\theta_{az}$
The length measured along the surface is
$\frac{c\tau}{2} \sec\psi$
where $\psi$ is the grazing angle. The area illuminated by the radar is then given by
$A_c = 2 R \tan\frac{\theta_{az}}{2} \cdot \frac{c\tau}{2} \sec\psi$
For 'small' beamwidths this approximates to
$A_c \approx R\,\theta_{az}\, \frac{c\tau}{2} \sec\psi$
The clutter return is then
$C = \frac{P_t G A_e \sigma^0 A_c}{(4\pi)^2 R^4}$ Watts
Substituting for the illuminated area
$C = \frac{P_t G A_e \sigma^0\, \theta_{az}\, c\tau \sec\psi}{2 (4\pi)^2 R^3}$ Watts
where $\sigma^0$ is the back scatter coefficient of the clutter.
Converting the beamwidth to degrees and putting in the numerical values gives
$C \approx 5.5 \times 10^{-5}\; \frac{P_t G A_e \sigma^0\, \theta_{az}^{\circ}\, c\tau \sec\psi}{R^3}$ Watts
The expression for the target return remains unchanged, thus the signal to clutter ratio is
$\frac{S}{C} = \frac{P_t G A_e \sigma}{(4\pi)^2 R^4} \Big/ \frac{P_t G A_e \sigma^0\, \theta_{az}\, c\tau \sec\psi}{2 (4\pi)^2 R^3}$
This simplifies to
$\frac{S}{C} = \frac{2\sigma}{\sigma^0\, R\, \theta_{az}\, c\tau \sec\psi}$
In the case of surface clutter the signal to clutter now varies inversely with R. Halving the distance only causes a doubling of the ratio (a factor of two improvement).
Problems in calculating clutter for the pulse length limited case
There are a number of problems in calculating the signal to clutter ratio. The clutter in the main beam is extended over a range of grazing angles and the backscatter coefficient depends upon grazing angle. Clutter will appear in the antenna sidelobes, which again will involve a range of grazing angles and may even involve clutter of a different nature.
Beam width limited case
The calculation is similar to the previous examples; in this case the illuminated area is
$A_c = \pi \left( R \tan\frac{\theta_{az}}{2} \right) \left( R \tan\frac{\theta_{el}}{2} \right) \csc\psi$
which for small beamwidths simplifies to
$A_c \approx \frac{\pi}{4} R^2\, \theta_{az}\,\theta_{el} \csc\psi$
The clutter return is as before
$C = \frac{P_t G A_e \sigma^0 A_c}{(4\pi)^2 R^4}$ Watts
Substituting for the illuminated area
$C = \frac{\pi\, P_t G A_e \sigma^0\, \theta_{az}\,\theta_{el} \csc\psi}{4 (4\pi)^2 R^2}$ Watts
This can be simplified to:
$C = \frac{P_t G A_e \sigma^0\, \theta_{az}\,\theta_{el} \csc\psi}{64\pi\, R^2}$ Watts
Converting the beamwidths to degrees
$C \approx 1.5 \times 10^{-6}\; \frac{P_t G A_e \sigma^0\, \theta_{az}^{\circ}\,\theta_{el}^{\circ} \csc\psi}{R^2}$ Watts
The target return remains unchanged, thus
$\frac{S}{C} = \frac{\sigma}{\sigma^0 A_c}$
which simplifies to
$\frac{S}{C} = \frac{4\sigma}{\pi\, \sigma^0\, R^2\, \theta_{az}\,\theta_{el} \csc\psi}$
As in the case of Volume Clutter the Signal to clutter ratio follows an inverse square law.
General problems in calculating surface clutter
The general significant problem is that the backscatter coefficient cannot in general be calculated and must be measured. The problem is the validity of measurements taken in one location under one set of conditions being used for a different location under different conditions. Various empirical formulae and graphs exist which enable an estimate to be made but the results need to be used with caution.
Clutter folding
Clutter folding is a term used in describing "clutter" seen by radar systems. Clutter folding becomes a problem when the range extent of the clutter (seen by the radar) exceeds the unambiguous range set by the radar's pulse repetition interval; the system no longer provides adequate clutter suppression, and the clutter "folds" back in range. The solution to this problem is usually to add fill pulses to each coherent dwell of the radar, increasing the range over which clutter suppression is applied by the system.
The tradeoff for doing this is that adding fill pulses will degrade the performance, due to wasted transmitter power and a longer dwell time.
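As a rough sketch of the arithmetic involved (the PRF and clutter extent below are hypothetical, and real systems choose fill-pulse counts from more detailed considerations):

```python
import math

C = 3.0e8  # speed of light, m/s

def unambiguous_range(prf_hz):
    """Maximum unambiguous range for a given PRF: R_u = c / (2 * PRF)."""
    return C / (2.0 * prf_hz)

def fill_pulses_needed(clutter_extent_m, prf_hz):
    """Clutter beyond R_u 'folds' back in range.  One fill pulse per extra
    R_u interval extends the range over which the dwell's clutter
    suppression is valid (a common rule of thumb; details vary by system)."""
    r_u = unambiguous_range(prf_hz)
    return max(0, math.ceil(clutter_extent_m / r_u) - 1)

prf = 1000.0    # 1 kHz PRF (hypothetical)
extent = 400e3  # clutter visible out to 400 km (hypothetical)
print(f"R_u = {unambiguous_range(prf)/1e3:.0f} km")        # 150 km
print(f"fill pulses needed: {fill_pulses_needed(extent, prf)}")  # 2
```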
References
Atmosphere
Radar theory
Radio frequency propagation | Clutter (radar) | [
"Physics"
] | 2,070 | [
"Physical phenomena",
"Spectrum (physical sciences)",
"Radio frequency propagation",
"Electromagnetic spectrum",
"Waves"
] |
11,870,065 | https://en.wikipedia.org/wiki/Tovex | Tovex (also known as Trenchrite, Seismogel, and Seismopac) is a water-gel explosive composed of ammonium nitrate and methylammonium nitrate that has several advantages over traditional dynamite, including lower toxicity and safer manufacture, transport, and storage. It has thus almost entirely replaced dynamite.
There are numerous versions ranging from shearing charges to aluminized common blasting agents. Tovex is used by 80% of international oil companies for seismic exploration.
History
The Tovex family of products, sometimes generically called "water gels," were developed by the Explosives Department at DuPont (E.I. du Pont de Nemours & Company, Inc.) in the mid-to-late 1960s when pelletized TNT was included in aqueous gels to create a slurry form of ANFO that displayed water-resistant properties in wet bore holes. TNT-sensitized water gels were commercially successful, but the TNT led to problems with oxygen balance: namely elevated amounts of combustion by-products such as carbon monoxide and nitrogen-dioxide complexes. Not only was TNT "dirty," it was also expensive. TNT was eliminated through the work of DuPont chemists Colin Dunglinson, Joseph Dean Chrisp Sr. and William Lyerly along with a team of others at DuPont's Potomac River Development Laboratory (PRDL) at Falling Waters, West Virginia, and at DuPont's Eastern Laboratories (EL) at Gibbstown, New Jersey. These chemists and engineers formulated a series of water gel-base products that replaced the TNT with methyl ammonium nitrate, also known as monomethylamine nitrate, or PR-M, (which stands for "Potomac River – monomethylamine nitrate"), creating the "Tovex Extra" product line.
In late 1973, DuPont declared "the last days of dynamite" and switched to the new Tovex formula. The "Tovex" that replaced nitroglycerin-based dynamite had evolved into a cap-sensitive product. Even though it bore the same name as the earlier "Tovex," it was quite different from its precursors, which could only be initiated in large diameters (5 inches) with a one-pound TNT booster. The new Tovex of the mid-to-late 1970s could be detonated in (critical) diameters much smaller than 5 inches by utilizing DuPont's Detaflex, thus making the new Tovex a realistic replacement for dynamite.
Until then, only nitroglycerin-based explosives were commercially feasible for blasters who wanted cap-sensitive explosives that could be initiated with a #6 blasting cap in bore holes as small as 3/4 of an inch in diameter, sometimes less. The new Tovex satisfied that requirement.
Atlas, Hercules, IRECO, Trojan-US Powder and several other explosives manufacturing firms of the era created emulsions, gels and slurries which accomplished the same end, but it was DuPont's patent on PR-M-based explosives that gave the DuPont Company a competitive edge.
DuPont stopped producing nitroglycerine-based dynamite in 1976, replacing it with Tovex. In 1980, it sold its Tovex technology to Explosive Technology International (ETI), a Canadian company. One ETI licensee is Biafo Industries Limited, headquartered in Islamabad, Pakistan.
Explosives sold under DuPont's original "Tovex" trade name are distributed in Europe by Societe Suisse des Explosifs, Brigue, in Switzerland.
Properties
Tovex is a 50/50 aqueous solution of ammonium nitrate and methylammonium nitrate (sometimes also called monomethylamine nitrate, or PR-M), sensitized fuels, and other ingredients including sodium nitrate prills, finely divided (paint-grade) aluminum, finely divided coal, proprietary materials to make some grades cap sensitive, and thickening agents to enhance water resistance and to act as crystal modifiers. The detonation velocity of Tovex is around 4,000 to 5,500 m·s⁻¹. The specific gravity is 0.8–1.4. Tovex looks like a fluid gel which can be white to black in color.
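The quoted velocity and density ranges allow a rough estimate of detonation pressure via the generic rule of thumb P_CJ ≈ ρ₀D²/(γ+1) with γ ≈ 3 for condensed explosives; this is a textbook approximation applied here for illustration, not a figure from Tovex's specifications.

```python
# Rough Chapman-Jouguet detonation pressure from the rule of thumb
# P_CJ ~ rho0 * D^2 / (gamma + 1), with gamma ~ 3 for condensed explosives.
def cj_pressure_gpa(density_g_cc, velocity_m_s, gamma=3.0):
    rho0 = density_g_cc * 1000.0                          # kg/m^3
    return rho0 * velocity_m_s**2 / (gamma + 1.0) / 1e9   # GPa

# Endpoints of the density and velocity ranges quoted above.
for rho, d in [(0.8, 4000.0), (1.4, 5500.0)]:
    print(f"rho = {rho} g/cm^3, D = {d:.0f} m/s -> P_CJ ~ {cj_pressure_gpa(rho, d):.1f} GPa")
# Spans roughly 3 to 11 GPa across the quoted ranges.
```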
Ingredients
sodium nitrate
ammonium nitrate
methylammonium nitrate (sometimes called "Monomethylamine Nitrate")
calcium nitrate
aluminum
fuel oil No.2
carbonaceous fuel
perlite
silica (fibrous glass)
ethylene glycol
guar gum
Uses
avalanche control
blasting for road construction
mining for minerals
quarrying for the construction and building industry
seismic exploration
tunneling
improvised explosive devices
hiking trail building
cutting fire lines
hazardous tree removal
Sample applications
The blasting product is malleable to the extent that it can be cut to length, laid out, or bundled for a wide variety of applications. Because the material requires heat and fast compression to detonate, it is safe to transport and manipulate once in the field, even if dropped from high altitudes, set on fire, or peppered with high velocity rifle bullets.
For dead trees which are considered too hazardous to remove utilizing crosscut saws or chainsaws, one or two wraps of Tovex around the base of the trunk is often enough to fell the tree safely so that the remains may be safely bucked and removed from hiking trails, parks, or other places where people recreate.
For firebreaks, the product is simply uncoiled and laid along the ground to follow the ridge line or contours of the hillside or mountain along which the fire crews wish to establish a line of defense. Fire crews then follow-up by clearing debris along the blasted line to establish a fuel-free line.
For more technical blasting which requires greater planning and finesse, Tovex is often bundled according to weight into or up against a solid material, and a detonation cord is then applied to the Tovex. When one or more blasting caps initiate the cord, it delivers the heat and rapid compression needed to make the Tovex detonate.
Blasting caps are ignited using hand-held control boxes that employ a series of safety interlocks and switches. These require a strict radio sessioning handshake protocol between the unit that ignites the cap and the unit used by the master blaster controlling the shot, and are designed to prevent the emplaced Tovex, detonation cord, and caps from igniting prematurely.
After detonation, the material is completely consumed; there is no residue discernible without microscopic analysis. Typically, Tovex and other commercial explosives contain embedded taggants which identify the product and often the agency which purchased the material.
Features
cap sensitive
wide range of bore hole densities
improved flexibility in loading
water-resistance
no nitroglycerin and noxious fumes
reduced handling, transportation and storage hazards
high bubble energy (underwater explosion)
reduced sound levels and better control on vibrations
References
External links
Biafo Industries Ltd – Tovex Explosive
Blasting Upper Bear Creek Trail
Bear Divide Hotshots
San Gabriel Mountains Trailbuilders
Explosives | Tovex | [
"Chemistry"
] | 1,465 | [
"Explosives",
"Explosions"
] |
11,870,817 | https://en.wikipedia.org/wiki/Octacube%20%28sculpture%29 | The Octacube is a large, stainless steel sculpture displayed in the mathematics department of Pennsylvania State University in State College, PA. The sculpture represents a mathematical object called the 24-cell or "octacube". Because a real 24-cell is four-dimensional, the artwork is actually a projection into the three-dimensional world.
Octacube has very high intrinsic symmetry, which matches features in chemistry (molecular symmetry) and physics (quantum field theory).
The sculpture was designed by Adrian Ocneanu, a mathematics professor at Pennsylvania State University. The university's machine shop spent over a year completing the intricate metal-work. Octacube was funded by an alumna in memory of her husband, Kermit Anderson, who died in the September 11 attacks.
Artwork
The Octacube's metal skeleton measures about in all three dimensions. It is a complex arrangement of unpainted, tri-cornered flanges. The base is a high granite block, with some engraving.
The artwork was designed by Adrian Ocneanu, a Penn State mathematics professor. He supplied the specifications for the sculpture's 96 triangular pieces of stainless steel and for their assembly. Fabrication was done by Penn State's machine shop, led by Jerry Anderson. The work took over a year, involving bending and welding as well as cutting. Discussing the construction, Ocneanu said: It's very hard to make 12 steel sheets meet perfectly—and conformally—at each of the 23 vertices, with no trace of welding left. The people who built it are really world-class experts and perfectionists—artists in steel.
Because of the reflective metal at different angles, the appearance is pleasantly strange. In some cases, the mirror-like surfaces create an illusion of transparency by showing reflections from unexpected sides of the structure. The sculpture's mathematician creator commented: When I saw the actual sculpture, I had quite a shock. I never imagined the play of light on the surfaces. There are subtle optical effects that you can feel but can't quite put your finger on.
Interpretation
Regular shapes
The Platonic solids are three-dimensional shapes with special, high, symmetry. They are the next step up in dimension from the two-dimensional regular polygons (squares, equilateral triangles, etc.). The five Platonic solids are the tetrahedron (4 faces), cube (6 faces), octahedron (8 faces), dodecahedron (12 faces), and icosahedron (20 faces). They have been known since the time of the Ancient Greeks and valued for their aesthetic appeal and philosophical, even mystical, import. (See also the Timaeus, a dialogue of Plato.)
In higher dimensions, the counterparts of the Platonic solids are the regular polytopes. These shapes were first described in the mid-19th century by a Swiss mathematician, Ludwig Schläfli. In four dimensions, there are six of them: the pentachoron (5-cell), tesseract (8-cell), hexadecachoron (16-cell), octacube (24-cell), hecatonicosachoron (120-cell), and the hexacosichoron (600-cell).
The 24-cell consists of 24 octahedrons, joined in 4-dimensional space. The 24-cell's vertex figure (the 3-D shape formed when a 4-D corner is cut off) is a cube. Despite its suggestive name, the octacube is not the 4-D analog of either the octahedron or the cube. In fact, it is the only one of the six 4-D regular polytopes that lacks a corresponding Platonic solid.
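One convenient coordinatization (an assumption of this sketch, not something fixed by the sculpture) takes the 24 vertices of the 24-cell to be all permutations of (±1, ±1, 0, 0). The short check below confirms the counts: 24 vertices, 96 edges, and 8 edges meeting at each vertex, matching the cubic vertex figure. The 24-cell likewise has 96 triangular faces, the same count as the sculpture's 96 steel pieces.

```python
from itertools import permutations, product

# 24-cell vertices: all permutations of (+-1, +-1, 0, 0).
verts = sorted({p for s in product((1, -1), repeat=2)
                  for p in permutations((s[0], s[1], 0, 0))})
assert len(verts) == 24

def d2(u, v):
    return sum((a - b) ** 2 for a, b in zip(u, v))

# Edges join vertex pairs at the minimum squared distance, which is 2 here.
edges = [(u, v) for i, u in enumerate(verts) for v in verts[i + 1:] if d2(u, v) == 2]
print(len(edges))                                         # 96 edges
print({sum(1 for e in edges if v in e) for v in verts})   # {8}: cube vertex figure
```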
Projections
Ocneanu explains the conceptual challenge in working in the fourth dimension: "Although mathematicians can work with a fourth dimension abstractly by adding a fourth coordinate to the three that we use to describe a point in space, a fourth spatial dimension is difficult to visualize."
Although it is impossible to see or make 4-dimensional objects, it is possible to map them into lower dimensions to get some impressions of them. An analogy for converting the 4-D 24-cell into its 3-D sculpture is cartographic projection, where the surface of the 3-D Earth (or a globe) is reduced to a flat 2-D plane (a portable map). This is done either with light 'casting a shadow' from the globe onto the map or with some mathematical transformation. Many different types of map projection exist: the familiar rectangular Mercator (used for navigation), the circular gnomonic (first projection invented), and several others. All of them have limitations in that they show some features in a distorted manner—'you can't flatten an orange peel without damaging it'—but they are useful visual aids and convenient references.
In the same manner that the exterior of the Earth is a 2-D skin (bent into the third dimension), the exterior of a 4-dimensional shape is a 3-D space (but folded through hyperspace, the fourth dimension). However, just as the surface of Earth's globe cannot be mapped onto a plane without some distortions, neither can the exterior 3-D shape of the 24-cell 4-D hyper-shape. In the image on the right a 24-cell is shown projected into space as a 3-D object (and then the image is a 2-D rendering of it, with perspective to aid the eye). Some of the distortions:
Curving edge lines: these are straight in four dimensions, but the projection into a lower dimension makes them appear to curve (similar effects occur when mapping the Earth).
It is necessary to use semi-transparent faces because of the complexity of the object, so the many "boxes" (octahedral cells) are seen.
Only 23 cells are clearly seen. The 24th cell is the "outside in", the whole exterior space around the object as seen in three dimensions.
To map the 24-cell, Ocneanu uses a related projection which he calls windowed radial stereographic projection. As with the stereographic projection, there are curved lines shown in 3-D space. Instead of using semitransparent surfaces, "windows" are cut into the faces of the cells so that interior cells can be seen. Also, only 23 vertices are physically present. The 24th vertex "occurs at infinity" because of the projection; what one sees is the 8 legs and arms of the sculpture diverging outwards from the center of the 3-D sculpture.
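Ocneanu's windowed radial stereographic projection involves windowing and other refinements not reproduced here; the sketch below shows only the basic radial stereographic step it builds on. The vertices (same coordinatization as above, an assumption of convenience) are normalized onto the unit 3-sphere, and the pole of the projection is placed at one vertex, which is therefore the one "sent to infinity", leaving 23 finite vertices as in the sculpture.

```python
import numpy as np
from itertools import permutations, product

# 24-cell vertices, pushed onto the unit 3-sphere.
verts = sorted({p for s in product((1, -1), repeat=2)
                  for p in permutations((s[0], s[1], 0, 0))})
V = np.array(verts, dtype=float)
V /= np.linalg.norm(V, axis=1, keepdims=True)

pole = V[0]                                  # this vertex goes "to infinity"
rest = V[np.any(~np.isclose(V, pole), axis=1)]

# Orthonormal basis of the 3-space orthogonal to the pole.
_, _, vt = np.linalg.svd(pole.reshape(1, 4))
basis = vt[1:].T                             # three 4-vectors spanning pole-perp

# Stereographic projection from the pole: x -> (x - (x.p) p) / (1 - x.p).
t = rest @ pole
P3 = ((rest - np.outer(t, pole)) / (1.0 - t)[:, None]) @ basis
print(P3.shape)                              # (23, 3): the 23 finite vertices
```

The vertex antipodal to the pole lands at the origin, so the projected figure sits symmetrically around the centre, and arcs drawn along the 3-sphere project to circular arcs, which is one way to see the curved edge lines described above.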
Symmetry
The Octacube sculpture has very high symmetry. The stainless steel structure has the same amount of symmetry as a cube or an octahedron. The artwork can be visualized as related to a cube: the arms and legs of the structure extend to the corners. Imagining an octahedron is more difficult; it involves thinking of the faces of the visualized cube forming the corners of an octahedron. The cube and octahedron have the same amount and type of symmetry: octahedral symmetry, called Oh (order 48) in mathematical notation. Some, but not all, of the symmetry elements are
3 different four-fold rotation axes (one through each pair of opposing faces of the visualized cube): up/down, in/out and left/right as seen in the photograph
4 different three-fold rotation axes (one through each pair of opposing corners of the cube [along each of the opposing arm/leg pairs])
6 different two-fold rotation axes (one through the midpoint of each opposing edge of the visualized cube)
9 mirror planes that bisect the visualized cube
3 that cut it top/bottom, left/right and front/back. These mirrors represent its reflective dihedral subsymmetry D2h, order 8 (a subordinate symmetry of any object with octahedral symmetry)
6 that go along the diagonals of opposing faces of the visualized cube (these go along double sets of arm-leg pairs). These mirrors represent its reflective tetrahedral subsymmetry Td, order 24 (a subordinate symmetry of any object with octahedral symmetry).
Using the mid room points, the sculpture represents the root systems of types D4, B4 = C4 and F4, that is, all the four-dimensional root systems other than A4. It can also visualize the projections of D4 to B3 and of D4 to G2.
Science allusions
Many molecules have the same symmetry as the Octacube sculpture. The organic molecule cubane (C8H8) is one example: the arms and legs of the sculpture correspond to its outward-projecting hydrogen atoms. Sulfur hexafluoride (or any molecule with exact octahedral molecular geometry) shares the same symmetry, although the resemblance is not as close.
The Octacube also shows parallels to concepts in theoretical physics. Creator Ocneanu researches mathematical aspects of quantum field theory (QFT). The subject has been described by a Fields medal winner, Ed Witten, as the most difficult area in physics. Part of Ocneanu's work is to build theoretical, and even physical, models of the symmetry features in QFT. Ocneanu cites the relationship of the inner and outer halves of the structure as analogous to the relationship of spin 1/2 particles (e.g. electrons) and spin 1 particles (e.g. photons).
Memorial
Octacube was commissioned and funded by Jill Anderson, a 1965 PSU math grad, in memory of her husband, Kermit, another 1965 math grad, who was killed in the 9-11 terrorist attacks. Summarizing the memorial, Anderson said: I hope that the sculpture will encourage students, faculty, administrators, alumnae, and friends to ponder and appreciate the wonderful world of mathematics. I also hope that all who view the sculpture will begin to grasp the sobering fact that everyone is vulnerable to something terrible happening to them and that we all must learn to live one day at a time, making the very best of what has been given to us. It would be great if everyone who views the Octacube walks away with the feeling that being kind to others is a good way to live.
Anderson also funded a math scholarship in Kermit's name, at the same time the sculpture project went forward.
Reception
A more complete explanation of the sculpture, including how it came to be made, how its construction was funded and its role in mathematics and physics, has been made available by Penn State. In addition, Ocneanu has provided his own commentary.
See also
Artists:
Salvador Dalí, painter of fourth dimension allusions
David Smith, a sculptor of abstract, geometric stainless steel
Tony Smith, another creator of large abstract geometric sculptures
Math:
Group theory, the mathematical discipline that historically encompassed much research into symmetry
Operator algebra and Representation theory, Ocneanu's areas of math research
References
Notes
Citations
External links
Video from Penn State about the Octacube
User created video on imagining a four dimensional object (but a tesseract). Note discussion of projections at ~22 minutes and the discussion of the cells in the model at ~35 minutes.
Mathematical artworks
Quantum field theory
2005 sculptures
Mathematics and culture
Memorials for the September 11 attacks
Pennsylvania State University
Steel sculptures in Pennsylvania
Group theory
Operator algebras | Octacube (sculpture) | [
"Physics",
"Mathematics"
] | 2,368 | [
"Group theory",
"Quantum field theory",
"Fields of abstract algebra",
"Quantum mechanics"
] |
11,871,466 | https://en.wikipedia.org/wiki/Net%20interest%20spread | Net interest spread refers to the difference in borrowing and lending rates of financial institutions (such as banks) in nominal terms. It is considered analogous to the gross margin of non-financial companies.
Net interest spread is expressed as interest yield on earning assets (any asset, such as a loan, that generates interest income) minus interest rates paid on borrowed funds.
Net interest spread is similar to net interest margin; net interest spread expresses the nominal average difference between borrowing and lending rates, without compensating for the fact that the amount of earning assets and borrowed funds may be different.
Example
For example, a bank has average loans to customers of $100 and earns gross interest income of $6. The interest yield is 6/100 = 6%. The bank takes deposits from customers and pays 1% to those customers, while lending its customers money at 6%. The bank's net interest spread is 6% - 1% = 5%.
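A minimal sketch of that calculation (the $100 deposit balance is implicit in the example above and is made explicit here as an assumption):

```python
def net_interest_spread(interest_income, earning_assets,
                        interest_expense, borrowed_funds):
    """Yield on earning assets minus the rate paid on borrowed funds."""
    return interest_income / earning_assets - interest_expense / borrowed_funds

# $6 earned on $100 of loans; $1 (1%) paid on an assumed $100 of deposits.
# Note the spread ignores any mismatch between the two balances,
# which is exactly what distinguishes it from net interest margin.
spread = net_interest_spread(6.0, 100.0, 1.0, 100.0)
print(f"{spread:.1%}")   # 5.0%
```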
References
Successful Bank Asset/Liability Management: A Guide to the Future Beyond Gap, John W. Bitner, Robert A. Goddard, 1992, p. 185.
Net Interest Spread Software
There are several popular commercial net interest spread software packages to help banks manage and grow their net interest spread effectively. Among these are:
Margin Maximizer Suite - this software was originally developed by US Banking Alliance, which was later purchased by ProfitStars, a Jack Henry company. The software is coupled with an onsite consulting service. It is a Microsoft .NET-based application that must be installed onsite on each lender's computer.
PrecisionLender (formerly MarginPro) - an entirely web-based solution, launched in October 2009. It was developed by the original team from US Banking Alliance. It is delivered through Software as a Service (SaaS).
Austin Associates LLC - another web-based commercial loan pricing solution. Unlike PrecisionLender, it is a more traditional HTML web-forms-based application.
See also
Net interest margin
Net Interest Income
Financial ratios
Banking
Interest | Net interest spread | [
"Mathematics"
] | 409 | [
"Financial ratios",
"Quantity",
"Metrics"
] |
11,871,683 | https://en.wikipedia.org/wiki/Epsilon%20Eridani%20b | Epsilon Eridani b, also known as AEgir, is an exoplanet approximately 10.5 light-years away orbiting the star Epsilon Eridani, in the constellation of Eridanus (the River). The planet was discovered in 2000, and as of 2024 remains the only confirmed planet in its planetary system. It orbits at around 3.5 AU with a period of around 7.6 years, and has a mass around 0.6 times that of Jupiter. Both the Extrasolar Planets Encyclopaedia and the NASA Exoplanet Archive list the planet as 'confirmed'.
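As a quick consistency check, the quoted semi-major axis and period agree with Kepler's third law if the star's mass is taken as roughly 0.8 solar masses (a commonly quoted value for Epsilon Eridani, assumed here rather than stated in this article):

```python
import math

def orbital_period_years(a_au, m_star_solar):
    """Kepler's third law in solar units: P[yr] = sqrt(a[AU]^3 / M[Msun])."""
    return math.sqrt(a_au ** 3 / m_star_solar)

# a = 3.5 AU from the article; M ~ 0.8 Msun is an outside assumption.
print(f"{orbital_period_years(3.5, 0.8):.1f} yr")  # ~7.3 yr, close to the quoted ~7.6 yr
```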
Name
The planet and its host star form one of the planetary systems selected by the International Astronomical Union as part of NameExoWorlds, its public process for giving proper names to exoplanets and their host stars (where no proper name already exists). The process involved public nomination and voting for the new names. In December 2015, the IAU announced that the winning names were AEgir for the planet (an approximation of the Old Norse Ægir) and Ran for the star. James Ott, age 14, submitted the names for the IAU contest and won.
The moon Aegir of Saturn is also named after the mythological Ægir, and differs in spelling only by capitalization.
Discovery
The planet's existence was suspected by a Canadian team led by Bruce Campbell and Gordon Walker in the early 1990s, but their observations were not definitive enough to make a solid discovery. Its formal discovery was announced on August 7, 2000, by a team led by Artie Hatzes. The discoverers gave its mass as 1.2 ± 0.33 times that of Jupiter, with a mean distance of 3.4 AU from the star. Observers, including Geoffrey Marcy, suggested that more information on the star's Doppler noise behaviour created by its large and varying magnetic field was needed before the planet could be confirmed.
In 2006, the Hubble Space Telescope made astrometric measurements and confirmed the existence of the planet. These observations indicated that the planet has a mass 1.5 times that of Jupiter and shares the same plane as the outer dust disk observed around the star. The derived orbit from these measurements is eccentric: either 0.25 or 0.7.
Meanwhile, the Spitzer Space Telescope detected an asteroid belt at roughly 3 AU from the star. In 2009 one team of astronomers claimed that the proposed planet's eccentricity and this belt were inconsistent: the planet would pass through the asteroid belt and rapidly clear it of material. The planet and the inner belt may be reconciled if that belt's material had migrated in from the outer comet belt (also known to exist).
Astronomers continue to collect and analyse radial velocity data, while also refining existing upper limits from non-detections via direct imaging of Epsilon Eridani b. A paper published in January 2019 found an orbital eccentricity an order of magnitude smaller than earlier estimates, at around 0.07, consistent with a nearly circular orbit and very similar to Jupiter's orbital eccentricity of 0.05. This resolved the stability issue with the inner asteroid belt. The updated measurements also included new estimates for the mass and inclination of the planet, at 0.78 times the mass of Jupiter, but because the inclination was poorly constrained at 89 degrees this was only a rough estimate of the absolute mass. If the planet instead orbited at the same inclination as the debris disc (34 degrees), as supported by Benedict et al. 2006, its mass would be greater, at 1.19 times that of Jupiter.
Using astrometric data taken from the U.S. Naval Observatory Robotic Astrometric Telescope (URAT) combined with previously collected data from the Hipparcos mission, and the newer Gaia EDR3 data release, a group of scientists at the United States Naval Observatory believe they have, with high formal confidence levels, confirmed the presence of a long-period exoplanet orbiting Epsilon Eridani.
A paper published in October 2021, using absolute astrometry from Hipparcos and Gaia DR2, new radial velocity measurements, and Keck/NIRC2 Ms-band vortex coronagraph images, determines a lower absolute mass of 0.65 times that of Jupiter, with an eccentricity close to 0.055 and the planet orbiting at around 3.53 AU inclined at 78 degrees. Similar updated findings were published in a paper in July 2021, determining a minimum mass of 0.651 times that of Jupiter, with the planet's semi-major axis at 3.5 AU and an orbital eccentricity of 0.044. A March 2022 paper finds an inclination of 45 degrees, closer to earlier estimates, a mass 0.63 times that of Jupiter, and an eccentricity of 0.16.
Direct imaging of Epsilon Eridani b with the James Webb Space Telescope is planned.
See also
47 Ursae Majoris b
51 Pegasi b
List of nearest exoplanets
Notes
References
External links
Epsilon Eridani b at The Extrasolar Planets Encyclopaedia. Retrieved 2020-05-04.
Epsilon Eridani b at The NASA Exoplanet Archive. Retrieved 2020-05-04.
Eridanus (constellation)
Exoplanets detected by radial velocity
Exoplanets detected by astrometry
Exoplanets discovered in 2000
Giant planets
Exoplanets with proper names | Epsilon Eridani b | [
"Astronomy"
] | 1,129 | [] |
11,872,111 | https://en.wikipedia.org/wiki/Naturally%20occurring%20radioactive%20material | Naturally occurring radioactive materials (NORM) and technologically enhanced naturally occurring radioactive materials (TENORM) consist of materials, usually industrial wastes or by-products, enriched with radioactive elements found in the environment, such as uranium, thorium and potassium, and any of their decay products, such as radium and radon. Produced water discharges and spills are a common route by which NORM enters the surrounding environment.
Natural radioactive elements are present in very low concentrations in Earth's crust, and are brought to the surface through human activities such as oil and gas exploration or mining, and through natural processes such as leakage of radon gas to the atmosphere or dissolution in ground water. Another example of TENORM is coal ash produced from coal burning in power plants. If its radioactivity is much higher than background level, handling TENORM may cause problems in many industries and in transportation.
NORM in oil and gas exploration
Oil and gas TENORM and/or NORM is created in the production process, when produced fluids from reservoirs carry sulfates up to the surface of the Earth's crust. Some states, such as North Dakota, use the term "diffuse NORM". Radium-226 and radium-228, which are chemically similar to barium, calcium and strontium, can substitute into the lattice of the sulfate compounds of those elements and so be carried in the produced fluids. As the fluids approach the surface, changes in temperature and pressure cause the barium, calcium, strontium and radium sulfates to precipitate out of solution and form scale on the inside, or on occasion the outside, of the tubulars and/or casing. The use of NORM-contaminated tubulars in the production process does not cause a health hazard if the scale is inside the tubulars and the tubulars remain downhole. Enhanced concentrations of radium-226 and radium-228 and daughter products such as lead-210 may also occur in sludge that accumulates in oilfield pits, tanks and lagoons. Radon gas in natural gas streams concentrates as NORM in gas processing activities. Radon decays to lead-210, then to bismuth-210 and polonium-210, and stabilizes as lead-206. Radon decay elements occur as a shiny film on the inner surface of inlet lines, treating units, pumps and valves associated with propylene, ethane and propane processing systems.
NORM characteristics vary depending on the nature of the waste. NORM may be created in a crystalline form, which is brittle and thin and can flake off inside tubulars. NORM formed in a carbonate matrix can have a density of 3.5 grams per cubic centimetre, which must be noted when packing for transportation. NORM scales may be white or brown solids, or range from thick sludge to dry, flaky substances. NORM may also be found in produced waters from oil and gas production.
Cutting and reaming oilfield pipe, removing solids from tanks and pits, and refurbishing gas processing equipment may expose employees to particles containing increased levels of alpha emitting radionuclides that could pose health risks if inhaled or ingested.
NORM is found in many industries including
The coal industry (mining and combustion)
Metal mining and smelting
Mineral sands (rare earth minerals, titanium and zirconium).
Fertilizer (phosphate) industry
Building industry
Hazards
The hazards associated with NORM are inhalation and ingestion routes of entry, as well as external exposure where there has been a significant accumulation of scales. Respirators may be necessary in dry processes, where NORM scales and dust become airborne and have a significant chance of entering the body.
The hazardous elements found in NORM are radium-226, radium-228 and radon-222, as well as the daughter products of these radionuclides. These elements are referred to as "bone seekers": once inside the body, they migrate to bone tissue and concentrate there. This exposure can cause bone cancers and other bone abnormalities. The concentration of radium and other daughter products builds over time, with several years of excessive exposure. Therefore, from a liability standpoint, an employee who has gone without respiratory protection over several years could develop bone or other cancers from NORM exposure and decide to seek compensation, such as medical expenses and lost wages, from the oil company which generated the TENORM and from the employer.
Radium radionuclides emit alpha and beta particles as well as gamma rays. The radiation emitted from a radium-226 atom is 96% alpha particles and 4% gamma rays. The alpha particle is not the most dangerous particle associated with NORM as an external hazard. Alpha particles are identical to helium-4 nuclei. They travel only short distances in air, of 2–3 cm, and cannot penetrate the dead outer layer of skin on the human body. However, radium is a "bone seeker" because of its chemical similarity to calcium: radium atoms that are not expelled from the body concentrate in bone tissue along with calcium. The half-life of radium-226 is approximately 1,620 years, so it will remain in the body for the lifetime of the person, a significant length of time in which to cause damage.
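The persistence is easy to quantify with the standard exponential decay law N/N₀ = exp(−t ln 2 / T½); the 70-year span below is an illustrative human lifetime, not a figure from this article.

```python
import math

def fraction_remaining(t_years, half_life_years):
    """Radioactive decay: N/N0 = exp(-ln 2 * t / T_half)."""
    return math.exp(-math.log(2.0) * t_years / half_life_years)

# Radium-226, T_half ~ 1,620 years as quoted above; 70 years as a lifetime.
print(f"{fraction_remaining(70, 1620):.1%}")   # ~97% of the radium is still present
```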
Beta particles are electrons or positrons and can travel farther than alpha particles in air. They are in the middle of the scale in terms of ionizing potential and penetrating power, being stopped by a few millimeters of plastic. This radiation is a small portion of the total emitted during radium 226 decay. Radium 228 emits beta particles, and is also a concern for human health through inhalation and ingestion.
The gamma rays emitted from radium 226, accounting for 4% of the radiation, are harmful to humans with sufficient exposure. Gamma rays are highly penetrating and some can pass through metals, so Geiger counters or a scintillation probe are used to measure gamma ray exposures when monitoring for NORM.
Alpha and beta particles are harmful once inside the body. Inhalation of NORM contaminants in dust should be prevented by wearing respirators with particulate filters. In the case of properly trained occupational NORM workers, air monitoring and analysis may be necessary. The relevant measurements, the annual limit on intake (ALI) and the derived air concentration (DAC), are calculated values based on the dose an average employee working 2,000 hours a year may be exposed to. The current legal exposure limit in the United States is 1 ALI, or 5 rem. A rem, or roentgen equivalent man, is a measure of the radiation dose absorbed by parts of the body over an extended period of time. A DAC is the concentration of alpha- and beta-emitting material that an average employee is exposed to over 2,000 hours of light work. If an employee is exposed to more than 10% of an ALI, 500 mrem, then the employee's dose must be documented in accordance with federal and state regulations.
Regulation
United States
NORM is not federally regulated in the United States. The Nuclear Regulatory Commission (NRC) has jurisdiction over a relatively narrow spectrum of radiation, and the Environmental Protection Agency (EPA) has jurisdiction over NORM. Since no federal entity has implemented NORM regulations, NORM is variably regulated by the states.
United Kingdom
In the UK regulation is via the Environmental Permitting (England and Wales) Regulations 2010.
This defines two types of NORM activity:
Type 1 NORM industrial activity means:
(a) the production and use of thorium, or thorium compounds, and the production of products where thorium is deliberately added; or
(b) the production and use of uranium or uranium compounds, and the production of products where uranium is deliberately added
Type 2 NORM industrial activity means:
(a) the extraction, production and use of rare earth elements and rare earth element alloys;
(b) the mining and processing of ores other than uranium ore;
(c) the production of oil and gas;
(d) the removal and management of radioactive scales and precipitates from equipment associated with industrial activities;
(e) any industrial activity utilising phosphate ore;
(f) the manufacture of titanium dioxide pigments;
(g) the extraction and refining of zircon and manufacture of zirconium compounds;
(h) the production of tin, copper, aluminium, zinc, lead and iron and steel;
(i) any activity related to coal mine de-watering plants;
(j) china clay extraction;
(k) water treatment associated with provision of drinking water;
or
(l) The remediation of contamination from any type 1 NORM industrial activity or any of the activities listed above.
An activity which involves the processing of radionuclides of natural terrestrial or cosmic origin for their radioactive, fissile or fertile properties is not a type 1 NORM industrial activity or a type 2 NORM industrial activity.
See also
Background radiation, ionizing radiation constantly present in the natural environment of the Earth
Environmental radioactivity
References
External links
North Dakota Department of Health
NORM Technology Connection, Interstate Oil and Gas Compact Commission
Radiation Quick Reference Guide, Domestic Nuclear Detection Office
Naturally Occurring Radioactive Materials from the World Nuclear Association
UK guidance on Radioactive Substances Regulation For the Environmental Permitting (England and Wales) Regulations 2010:Defra
Radioactive waste
By-products
Environmental impact of fossil fuels
Environmental impact of mining
Water pollution | Naturally occurring radioactive material | [
"Chemistry",
"Technology",
"Environmental_science"
] | 1,889 | [
"Water pollution",
"Hazardous waste",
"Environmental impact of nuclear power",
"Radioactivity",
"Radioactive waste"
] |
11,872,205 | https://en.wikipedia.org/wiki/14%20Herculis%20c | 14 Herculis c or 14 Her c is an exoplanet approximately 58.4 light-years away in the constellation of Hercules. The planet was found orbiting the star 14 Herculis, with a mass that would make the planet a gas giant roughly the same size as Jupiter but much more massive. It was discovered on November 17, 2005 and published on November 2, 2006, although its existence was not confirmed until 2021.
According to a 2007 analysis, the existence of a second planet in the 14 Herculis system was "clearly" supported by the evidence, but the planet's parameters were not precisely known. It may be in a 4:1 resonance with the inner planet 14 Herculis b.
The inclination and true mass of 14 Herculis c were measured in 2021, using data from Gaia, and refined by further astrometric studies in 2022 and 2023. The measured inclination is approximately 82°.
Direct imaging of 14 Herculis c with the James Webb Space Telescope is planned.
References
External links
Hercules (constellation)
Giant planets
Exoplanets discovered in 2005
Exoplanets detected by radial velocity
Exoplanets detected by astrometry | 14 Herculis c | [
"Astronomy"
] | 246 | [
"Hercules (constellation)",
"Constellations"
] |
11,872,693 | https://en.wikipedia.org/wiki/Printed%20matter | Printed matter is a term, mostly used by mailing systems, normally used to describe mechanically printed materials for which reduced fees are paid which are lower than first-class mail. Each postal administration has its own rules for what may be posted as printed matter. In the Great Britain a special "Book Post" was introduced in 1848 that by 1852 had been extended to the wider range of material.
Conception
Printed matter was produced by printers or publishers, such as books, magazines, booklets, brochures and other publicity materials and in some cases, newspapers. Because much of this material is mailed, it is also a category of mail, accepted for delivery by a postal administration, that is not considered to be first-class mail and therefore qualifies for a special reduced printed matter postal rate. Depending on the specific postal regulations of the country, it is usually non-personal correspondence and printed in multiple quantities. Most postal authorities do not permit additional services, like registration or express services, to be added to items mailed as printed matter.
In the Postal Convention between the United States of America and the Republic of Mexico, proclaimed on June 20, 1862, terms were specified relating to the rates for printed matter between the two countries. The rate was one cent for every ounce or fraction of an ounce.
By country
China
In China, printed matter is called chūbǎn-wù (出版物) or yìnshuā-pǐn (印刷品); printing there has a history of over a thousand years.
United States
As of June 2007, the USPS has a printed matter classification known as "Bound Printed Matter" (BPM), defined as advertising, promotional, directory, or editorial material that is securely bound, of which at least 90% is imprinted by a process other than handwriting or typewriting; it is available only to users with an imprint permit.
For international shipment of printed matter, the USPS provides a discount M-bag service; following the 2007 elimination of surface mail, only airmail M-bag remains.
United Kingdom
The term used by the Royal Mail is "Printed Papers".
See also
Printing
Paper
Mail order catalog
References
Postal systems
Philatelic terminology | Printed matter | [
"Technology"
] | 428 | [
"Transport systems",
"Postal systems"
] |
11,872,944 | https://en.wikipedia.org/wiki/Manzana%20%28unit%29 | A is a unit of area used in Argentina and in many Central American countries, originally defined as 10,000 square in Spanish customary units. In other Spanish-speaking regions, the term has the meaning of a city block.
Today its size varies between countries:
In Argentina it is a hectare, 10,000 m2.
In most Central American countries it is about 0.7 hectares, with the exact value varying between countries; Belize and Nicaragua, for example, each use their own definitions.
If a vara is taken as 83.59 cm, then a manzana of 10,000 square varas is equal to 6,987.29 m2. In calculations, the approximate value of 7000 m2 (or equivalently 0.7 ha) is often used to simplify conversion.
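A minimal conversion helper based on the vara definition above (the names are illustrative):

VARA_METRES = 0.8359                # one vara, as quoted above
SQUARE_VARAS_PER_MANZANA = 10000.0

def manzana_to_m2(manzanas):
    """Convert vara-based manzanas to square metres."""
    return manzanas * SQUARE_VARAS_PER_MANZANA * VARA_METRES ** 2

print(round(manzana_to_m2(1), 2))   # 6987.29 m2, commonly rounded to 7000 m2 (0.7 ha)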
See also
acre
Honduran units of measurement
Footnotes
References
External links
manzana definition on sizes.com
Units of area | Manzana (unit) | [
"Mathematics"
] | 170 | [
"Quantity",
"Units of area",
"Units of measurement"
] |
11,873,798 | https://en.wikipedia.org/wiki/Toshiba%20902T | The Toshiba 902T was a 3G cellphone made by Toshiba in 2005, and was sold under Vodafone Japan (now SoftBank Mobile). It's a variant of worldwide version TS921, that featured a 1.92-megapixel camera, a QVGA screen, a Bluetooth device, and "Active Turn Style" two-rotation axes display design.
902T | Toshiba 902T | [
"Technology"
] | 91 | [
"Mobile technology stubs",
"Mobile phone stubs"
] |
11,873,940 | https://en.wikipedia.org/wiki/Royal%20Engineers%2C%20Columbia%20Detachment | The Columbia Detachment of the Royal Engineers was a contingent of the Royal Engineers of the British Army that was responsible for the foundation of British Columbia as the Colony of British Columbia (1858–66). It was commanded by Colonel Richard Clement Moody FICE FRGS RIBA, Kt. (Fr.).
British Columbia
Selection
When news of the Fraser Canyon Gold Rush reached London, Sir Edward Bulwer-Lytton, Secretary of State for the Colonies, requested that the War Office recommend an officer who was 'a man of good judgement possessing a knowledge of mankind' to lead 150 (later increased to 172) Royal Engineers, selected for their 'superior discipline and intelligence'. The War Office chose Moody, and Lord Lytton, who described Moody as his 'distinguished friend', accepted the nomination as a consequence of Moody's military record, his success as Governor of the Falkland Islands, and the distinguished geopolitical record of his father, Colonel Thomas Moody, Kt., at the Colonial Office. Moody's responsibility was to transform the new Colony of British Columbia (1858–66) into the British Empire's 'bulwark in the farthest west' and to 'found a second England on the shores of the Pacific'. Lytton desired to send to the colony 'representatives of the best of British culture, not just a police force': men who possessed 'courtesy, high breeding and urbane knowledge of the world', such as Moody, whom the Government considered the archetypal 'English gentleman and British Officer'. Moody's brother, Colonel Hampden Clement Blamire Moody, had already served with the Royal Engineers in British Columbia, from 1840 to 1848, with such success that he was granted command of the Royal Engineers across the entirety of China.
Richard Clement Moody, his wife Mary Hawks (of the Hawks industrial dynasty and of the Boyd merchant banking family) and their four children left England in October 1858 and arrived in British Columbia in December 1858, with the 172 men of the Royal Engineers, Columbia Detachment, and his secretary Robert Burnaby (after whom he subsequently named Burnaby Lake). The 'gentlemen' Royal Engineers defined by Moody were his three captains, Robert Mann Parsons, John Marshall Grant, and Henry Reynolds Luard, and his two lieutenants, Arthur Reid Lempriere (of Diélament, Jersey) and Henry Spencer Palmer, in addition to Captain William Driscoll Gosset (who was to be Colonial Treasurer and Commissary Officer). The contingent also included Doctor John Vernon Seddall and the Rev. John Sheepshanks (who was to be Chaplain of the Columbia Detachment). Moody was sworn in as the first Lieutenant-Governor of British Columbia and appointed Chief Commissioner of Lands and Works for British Columbia.
Ned McGowan's War
Moody had hoped to begin immediately the foundation of a capital city, but, on his arrival at Fort Langley, he learned of an insurrection, at the settlement of Hill's Bar, by a notorious outlaw, Ned McGowan, and some restive gold miners. Moody repressed the rebellion, which became popularly known as 'Ned McGowan's War', without loss of life. Moody described the incident:
The notorious Ned McGowan, of Californian celebrity at the head of a band of Yankee Rowdies defying the law! Every peaceable citizen frightened out of his wits!—Summons & warrants laughed to scorn! A Magistrate seized while on the Bench, & brought to the Rebel's camp, tried, condemned, & heavily fined! A man shot dead shortly before! Such a tale to welcome me at the close of a day of great enjoyment.
Moody described the response to his success: 'They gave me a Salute, firing off their loaded Revolvers over my head—Pleasant—Balls whistling over one's head! as a compliment! Suppose a hand had dropped by accident! I stood up, & raised my cap & thanked them in the Queen's name for their loyal reception of me'.
The Foundation of British Columbia
In British Columbia, Moody 'wanted to build a city of beauty in the wilderness' and planned his city as an iconic visual metaphor for British dominance, 'styled and located with the objective of reinforcing the authority of the Crown and of the robe'. Following the enactment of the Pre-emption Act of 1860, Moody settled the Lower Mainland. He founded the new capital city, New Westminster, on a site of dense Douglas pine forest that he selected for its strategic excellence, including the quality of its port. In a letter of 1 February 1859 to his friend Arthur Blackwood of the Colonial Office, he described the majestic beauty of the site:
"The entrance to the Frazer is very striking--Extending miles to the right & left are low marsh lands (apparently of very rich qualities) & yet fr the Background of Superb Mountains- Swiss in outline, dark in woods, grandly towering into the clouds there is a sublimity that deeply impresses you. Everything is large and magnificent, worthy of the entrance to the Queen of England's dominions on the Pacific mainland. [...] My imagination converted the silent marshes into Cuyp-like pictures of horses and cattle lazily fattening in rich meadows in a glowing sunset. [...] The water of the deep clear Frazer was of a glassy stillness, not a ripple before us, except when a fish rose to the surface or broods of wild ducks fluttered away".
Moody designed the roads and settlements of New Westminster, and his Royal Engineers, under Captain John Marshall Grant, built an extensive road network, including what became Kingsway, connecting New Westminster to False Creek; the North Road between Port Moody and New Westminster; the Cariboo Road; and the Pacific terminus of the Canadian Pacific Railway at Port Moody on Burrard Inlet (the line was subsequently extended to the mouth of the inlet and now terminates at Vancouver). They also laid out Stanley Park, an important strategic area in the eventuality of an invasion by the United States. Moody named Burnaby Lake after his secretary Robert Burnaby and Port Coquitlam's 400-foot 'Mary Hill' after his wife, Mary Hawks. He designed the first coat of arms of British Columbia. He established Port Moody, subsequently named after him, at the end of the trail connecting New Westminster with Burrard Inlet, to defend New Westminster from potential attack from the United States, and he established a town at Hastings, later incorporated into Vancouver.
The British designated multiple tracts as government reserves. The Pre-emption Act did not specify conditions for the distribution of land, and consequently large areas were bought by speculators. Moody requisitioned 3,750 acres (about 1,517 hectares) for himself, on which he subsequently built and owned Mayfield, a model farm near New Westminster. He was criticised by journalists for land grabbing, but his requisitions had been ordered by the Colonial Office; throughout his tenure in British Columbia he received the approbation of the British authorities in London and was described in British Columbia as 'the real father of New Westminster'. However, Lord Lytton, then Secretary of State for the Colonies, 'forgot the practicalities of paying for clearing and developing the site and the town', and the effort of Moody's Engineers was continually impeded by insufficient funds, which, together with the continuous opposition of Sir James Douglas, Governor of Vancouver Island, 'made it impossible for [Moody's] design to be fulfilled'.
Throughout his tenure in British Columbia, Moody feuded with Douglas, whose jurisdiction overlapped with his own. Moody's offices of Chief Commissioner and Lieutenant-Governor were of 'higher prestige [and] lesser authority' than those of Douglas; the British Government had selected Moody partly to 'out manoeuvre the old Hudson's Bay Factor [Governor Douglas]'. Moody had been chosen by Lord Lytton as the archetypal 'English gentleman and British Officer' and because his family was 'eminently respectable': he was the son of Colonel Thomas Moody, Kt., who owned land in the islands in which Douglas's father owned less land and from which Douglas's mother, described at the time as 'a half-breed', originated. Governor Douglas's ethnicity was 'an affront to Victorian society', whereas Mary Moody was a member of the Hawks industrial dynasty and of the Boyd merchant banking family. Mary Moody wrote, on 4 August 1859, that 'it is not pleasant to serve under a Hudson's Bay Factor' and that the 'Governor and Richard can never get on'. John Robson, editor of the British Columbian, wanted Moody's office to include that of Governor of British Columbia, thereby making Douglas redundant. In a letter to the Colonial Office of 27 December 1858, Moody stated that he had 'entirely disarmed [Douglas] of all jealousy'. Douglas repeatedly insulted the Royal Engineers by attempting to assume their command and refusing to acknowledge their contribution to the nascent colony.
Margaret A. Ormsby, author of the Dictionary of Canadian Biography entry for Moody (2002), censures Moody for the abortive development of New Westminster, a minority view. Most significant historians commend Moody's contribution and exculpate him from responsibility for the town's abortive development, citing primarily the perpetual insufficiency of funds and the personally motivated opposition of Douglas, whom Sir Thomas Frederick Elliot (1808–1880) described as 'like any other fraud'. Robert Burnaby observed that Douglas proceeded with 'muddling [Moody's] work and doubling his expenditure' and with employing administrators to 'work a crooked policy against Moody' to 'retard British Columbia and build up... the stronghold of Hudson's Bay interests' and their own 'landed stake'. Robert Edgar Cail, Don W. Thomson, Ishiguro, and Scott accordingly commended Moody's contribution, and Scott accused Ormsby of being 'adamant in her dislike of Colonel Moody' despite the weight of evidence; almost all other biographies of Moody, including those by the Institution of Civil Engineers, the Royal Engineers, and the British Columbia Historical Association, commend his achievements in British Columbia.
The Royal Engineers, Columbia Detachment was disbanded in July 1863. The Moody family (which by then consisted of Moody, his wife, and seven legitimate children) departed for England, together with the 22 Royal Engineers, with eight wives between them, who wished to return; 130 of the original Columbia Detachment decided to remain in British Columbia. Scott contends that the dissolution of the Columbia Detachment, and the consequent departure of Moody, 'doomed' the development of the settlement and the realisation of Lord Lytton's dream. A vast congregation of New Westminster citizens gathered at the dock to bid farewell to Moody as his boat departed for England. Moody wanted to return to British Columbia but died before he was able to do so. He left his library behind in New Westminster to become the city's public library.
In April 1863, the Councillors of New Westminster decreed that 20 acres should be reserved and named Moody Square after Richard Clement Moody. The area around Moody Square, completed only in 1889, has also been named Moody Park after him. Numerous developments occurred in and around Moody Park, including Century House, which was opened by Princess Margaret on 23 July 1958. In 1984, on the occasion of the 125th anniversary of New Westminster, a monument to Richard Clement Moody at the entrance of the park was unveiled by Mayor Tom Baker. For Moody's achievements in the Falkland Islands and in British Columbia, the British diplomat David Tatham CMG, who served as Governor of the Falkland Islands, described Moody as an 'Empire builder'. In January 2014, with the support of the Friends of the British Columbia Archives and of the Royal British Columbia Museum Foundation, the Royal British Columbia Museum purchased a photograph album that had belonged to Richard Clement Moody. The album contains over 100 photographs of the early settlement of British Columbia, including some of the earliest known photographs of First Nations peoples.
Sources
.
Margaret A. Ormsby, "Richard Clement Moody" in Dictionary of Canadian Biography Online, (1982)
References
Works cited
External links
Royal Engineers Living History Group
City of New Westminster
Short Documentary about the Royal Engineers
History of the Pacific Northwest
Interior of British Columbia
History of British Columbia
British explorers of North America
Explorers of British Columbia
English explorers
English surveyors
Military engineering
Graduates of the Royal Military Academy, Woolwich
Royal Engineers
Royal Engineers officers
Colony of British Columbia (1858–1866) people
British colonial governors and administrators in the Americas
Lieutenant governors of British Columbia | Royal Engineers, Columbia Detachment | [
"Engineering"
] | 2,684 | [
"Construction",
"Military engineering"
] |
11,876,255 | https://en.wikipedia.org/wiki/Wildlife%20of%20South%20Africa | The wildlife of South Africa consists of the flora and fauna of this country in Southern Africa. The country has a range of different habitat types and an ecologically rich and diverse wildlife, vascular plants being particularly abundant, many of them endemic to the country. There are few forested areas, much savanna grassland, semi-arid Karoo vegetation and the fynbos of the Cape Floristic Region. Famed for its national parks and big game, 297 species of mammal have been recorded in South Africa, as well as 849 species of bird and over 20,000 species of vascular plants.
Geography
South Africa is located in subtropical southern Africa, lying between 22°S and 35°S. It is bordered by Namibia, Botswana and Zimbabwe to the north, by Mozambique and Eswatini (Swaziland) to the northeast, by the Indian Ocean to the east and south, and by the Atlantic Ocean to the west, along a long coastline. The interior of the country consists of a large, nearly flat plateau. The eastern, and highest, part of this is the Drakensberg, whose highest point, Mafadi, is on the border with Lesotho, a country surrounded by South Africa.
The south and south-western parts of the plateau, together with the adjoining plain below, are known as the Great Karoo, which consists of sparsely populated shrubland. To the north the Great Karoo fades into the drier and more arid Bushmanland, which eventually becomes the Kalahari Desert in the far north-west of the country. The mid-eastern, and highest, part of the plateau is known as the Highveld. This relatively well-watered area is home to a great proportion of the country's commercial farmlands. To the north of the Highveld, the plateau slopes downwards into the Bushveld, which ultimately gives way to the Limpopo lowlands, or Lowveld.
The climate of South Africa is influenced by its position between two oceans and its elevation. Winters are mild in coastal regions, particularly in the Eastern Cape. Cold and warm coastal currents running north-west and north-east respectively account for the difference in climates between west and east coasts. The weather pattern is also influenced by the El Niño–Southern Oscillation. In the plateau area, the influence of the sea is reduced, and the daily temperature range is much wider; here the summer days are very hot, while the nights are usually cool, with the possibility of frosts in winter. The country experiences a high degree of sunshine with rainfall about half of the global average, increasing from west to east, and with semi-desert regions in the north-west. The Western Cape experiences a Mediterranean climate with winter rainfall, but most of the country has more rain in summer.
Flora
A total of 23,420 species of vascular plant has been recorded in South Africa, making it the sixth most species-rich country in the world and the most species-rich country on the African continent. Of these, 153 species are considered to be threatened. Nine biomes have been described in South Africa: Fynbos, Succulent Karoo, desert, Nama Karoo, grassland, savanna, Albany thickets, the Indian Ocean coastal belt, and forests.
The most prevalent biome in the country is the grassland, particularly on the Highveld, where the plant cover is dominated by different species of grass; fires, frosts and grazing pressure result in few trees occurring here, but geophytes (bulbs) are plentiful and there is a high level of plant diversity, especially on the escarpments. Vegetation becomes even more sparse towards the northwest due to low rainfall. There are several species of water-storing succulents, like aloes and euphorbias, in the very hot and dry Namaqualand area. The grass and thorn savannah turns slowly into a bush savannah towards the north-east of the country, with denser growth. There are significant numbers of baobab trees in this area, near the northern end of Kruger National Park.
There are few forests in the country, these being largely restricted to patches on mountains and escarpments in high rainfall areas and gallery forests, and much of the plateau area is covered by grassland and savanna. The karoo occupies much of the drier western half of the country; this area is influenced by its proximity to the Atlantic and has winter rainfall. The vegetation here is dominated by dwarf succulent plants, with many endemic species of both plants and animals. Fynbos is a belt of natural shrubland located in the Western Cape and Eastern Cape provinces with a unique flora dominated by ericas, proteas and restios. This area is part of the Cape Floristic Region. The World Wide Fund for Nature divides this region into three ecoregions: the Lowland fynbos and renosterveld, the Montane fynbos and renosterveld and the Albany thickets. There is some concern that the Cape Floristic Region is experiencing one of the most rapid rates of extinction in the world due to habitat destruction, land degradation, and invasive alien plants.
The Cape Floral Region Protected Areas is a UNESCO World Heritage Site, a group of about thirteen protected areas that together cover an area of over a million hectares. This is a hotspot of diversity of endemic plants, many of which are threatened, and demonstrates ongoing ecological and evolutionary processes. This region occupies less than 0.5% of the area of the African continent yet has almost 20% of its plant species, with almost 70% of its 9,000 plant species endemic to the region. The Fynbos vegetation consists mainly of sclerophyllous shrubland. Of special interest are the pollination biology of the plants, many of which rely on ants, termites, birds or mammals for this function, the adaptations they have made to fire risk, and the high level of adaptive radiation and speciation. The Mediterranean climate produces hot, dry summers, and many of the plants have underground storage organs allowing them to resprout after fires. A typical species is the silver tree, which grows naturally only on Table Mountain. Fire kills many of the trees but triggers the germination of the seeds, founding the next generation of these short-lived trees.
Fauna
Mammals
Some 297 species of mammal have been recorded in South Africa, of which 30 species are considered threatened. The Kruger National Park, in the east of the country, is one of the largest national parks in the world, comprising a vast area of grassland with scattered trees. It supports a wide range of ungulates including Burchell's zebra, impala, greater kudu, blue wildebeest, waterbuck, warthog, Cape buffalo, giraffe and hippopotamus. There are also black and white rhinoceroses, African elephant, African wild dog, cheetah, leopard, lion and spotted hyena.
Elsewhere in the country there are gemsbok, alternatively known as oryx, nyala, bushbuck and springbok. There are seventeen species of golden mole, a family limited to southern Africa, five species of elephant shrew, many species of shrews, the southern African hedgehog, the aardvark, various hares and the critically endangered riverine rabbit. There are numerous species of bat and a great many species of rodent. Primates are represented by the Mohol bushbaby, the brown greater galago, the Sykes' monkey, the vervet monkey and the chacma baboon. Smaller carnivores include mongooses, genets, the caracal, the serval, the African wildcat, the Cape fox, the side-striped jackal, the black-backed jackal, meerkats, and the African clawless otter. The brown fur seal and other species of seal occur on the coasts and the waters around the country are visited by numerous species of whale and dolphin.
Birds
With its diverse habitat types, South Africa has a wide range of residential and migratory species. According to the 2018 edition of The Clements Checklist of Birds of the World, 849 species of bird have been recorded in South Africa and its offshore islands. Of these, 125 species are vagrants, and about 30 are endemic either to South Africa, or the more inclusive South Africa/Lesotho/Eswatini region. The endemic species include the southern black and blue korhaans, the grey-winged francolin, the Knysna turaco, the Fynbos buttonquail, the southern bald ibis, the forest buzzard, the ground woodpecker, the Cape and Drakensberg rockjumpers, the Cape, eastern and Agulhas long-billed larks, the red, Karoo, Rudd's and Botha's larks, the Cape bulbul, the Victorin's and Knysna warblers, the Drakensberg prinia, the bush blackcap, the Cape sugarbird, the chorister robin-chat, the sentinel and Cape rock thrushes, the buff-streaked chat, the pied starling, the African Penguin, and the orange-breasted sunbird.
The common ostrich is plentiful on the open grassland and savannah areas. Some birds breed elsewhere but migrate to South Africa to overwinter, while others breed in the country but migrate away in the non-breeding season. Migratory species include the greater striped swallow, white-rumped swift, white stork, African pygmy kingfisher, yellow-billed kite and the European bee-eater.
Reptiles and amphibians
There is a rich fauna of reptiles and amphibians, with 447 species of reptile recorded in the country (as compiled by the Reptile Database), and 132 species of amphibian (compiled by AmphibiaWeb). South Africa has the richest diversity of reptiles of any African country. Endemic species include the angulate tortoise and geometric tortoise, the Zululand dwarf chameleon, the Transkei dwarf chameleon and the Robertson dwarf chameleon, the Broadley's flat lizard, the dwarf Karoo girdled lizard, the Soutpansberg rock lizard, and the yellow-bellied house snake.
Also included among the fauna are the Nile crocodile, the leopard tortoise, the Speke's hinge-back tortoise, the serrated hinged terrapin, various chameleons, lizards, geckos and skinks, the cape cobra, the black mamba, the eastern green mamba, the puff adder, the mole snake and a range of other venomous and non-venomous snakes.
Amphibian diversity reflects the many diverse habitats around the country. Species of interest include the endemic western leopard toad and the arum frog, the bronze caco, the spotted snout-burrower and the critically endangered Rose's ghost frog, found only on the slopes of Table Mountain. Another endangered endemic species is the Natal diving frog.
National parks
The following have been designated as national parks in South Africa:
Addo Elephant National Park
Agulhas National Park
Augrabies Falls National Park
Bontebok National Park
Camdeboo National Park
Garden Route National Park
Golden Gate Highlands National Park
Karoo National Park
Kgalagadi Transfrontier Park
Kruger National Park
Mapungubwe National Park
Marakele National Park
Mokala National Park
Mountain Zebra National Park
Namaqua National Park
Richtersveld National Park
Table Mountain National Park
Tankwa Karoo National Park
West Coast National Park
South African endangered species
Some animals occurring in South Africa are classified as "endangered" or "critically endangered". These include:
Giant golden mole, Chrysospalax trevelyani
Van Zyl's golden mole, Cryptochloris zyli
Marley's golden mole, Amblysomus marleyi
Gunning's golden mole, Neamblysomus gunningi
Juliana's golden mole, Neamblysomus julianae
White-tailed rat, Mystromys albicaudatus
African wild dog, Lycaon pictus
Sei whale, Balaenoptera borealis
Blue whale, Balaenoptera musculus
African penguin, Spheniscus demersus
Critically endangered
De Winton's golden mole, Cryptochloris wintoni
Riverine rabbit, Bunolagus monticularis
Hooded vulture, Necrosyrtes monachus
White-headed vulture, Trigonoceps occipitalis
White-backed vulture, Gyps africanus
References
South Africa
Biota of South Africa | Wildlife of South Africa | [
"Biology"
] | 2,626 | [
"Biota by country",
"Biota of South Africa",
"Wildlife by country"
] |
11,877,899 | https://en.wikipedia.org/wiki/Vruk | The Vruk is a proprietary bass drum pedal design produced by Vruk Corporation. The term vruk also refers to playing techniques associated with this design, and related accessories produced by the corporation for attachment to other brands of pedal.
Proponents claim that the technique gives greater control and in particular allows greater speed.
The name VRUK (capitalised) is also used by Vineyard Records UK.
Players
Chris Adler
Tim Waterson
References
Percussion instrument beaters
Musical instrument parts and accessories
Drum kit components | Vruk | [
"Technology"
] | 97 | [
"Components",
"Musical instrument parts and accessories"
] |
11,879,346 | https://en.wikipedia.org/wiki/Spirit%20world%20%28Spiritualism%29 | The spirit world, according to spiritualism, is the world or realm inhabited by spirits, both good or evil of various spiritual manifestations. This spirit world is regarded as an external environment for spirits. The Spiritualism religious movement in the nineteenth century espoused a belief in an afterlife where individual's awareness persists beyond death. Although independent from one another, both the spirit world and the physical world are in constant interaction. Through séances, trances, and other forms of mediumship, these worlds can consciously communicate with each other. The spirit world is sometimes described by mediums from the natural world in trance.
History
By the mid-19th century most Spiritualist writers concurred that the spirit world was of "tangible substance" and a place consisting of "spheres" or "zones". Although specific details differed, the construct suggested organization and centralization. An 18th-century writer, Emanuel Swedenborg, influenced Spiritualist views of the spirit world. He described a series of concentric spheres each including a hierarchical organization of spirits in a setting more earth-like than theocentric. The spheres become gradually more illuminated and celestial. Spiritualists added a concept of limitlessness, or infinity to these spheres. Furthermore, it was defined that Laws initiated by God apply to earth as well as the spirit world.
Another common Spiritualist conception was that the spirit world is inherently good and related to truth-seeking, in opposition to things that are bad, which reside in a "spiritual darkness". This conception implied, as in the biblical parable of Lazarus and Dives, that the distance between good and bad spirits is greater than that between the dead and the living. The spirit world is also "The Home of the Soul", as described by the Theosophist C. W. Leadbeater, suggesting that for a living human to experience the spirit world is a blissful, meaningful and life-changing experience.
Yet John Worth Edmonds stated in his 1853 work Spiritualism, "Man's relation spiritually with the spirit-world is no more wonderful than his connection with the natural world. The two parts of his nature respond to the same affinities in the natural and spiritual worlds." He asserted, quoting Swedenborg through mediumship, that the relationship between man and the spirit world is reciprocal and thus could contain sorrow, though ultimately, "wandering through the spheres" along a path of goodness, the soul "is received at last by that Spirit whose thought is universal love forever".
See also
Afterlife
Astral plane
Celtic Otherworld
Exorcism
Ghost
Hell
Heaven
List of death deities
Paradise
Shamanism
Soul flight
Spirit possession
Spirit photography
Spiritual warfare
Spiritual mapping
Territorial spirit
The Dreaming
Underworld/Netherworld
References
Afterlife places
Spiritism
Spiritualism
Spirituality | Spirit world (Spiritualism) | [
"Biology"
] | 554 | [
"Behavior",
"Human behavior",
"Spirituality"
] |
11,879,874 | https://en.wikipedia.org/wiki/Oden%20Institute%20for%20Computational%20Engineering%20and%20Sciences | The Oden Institute for Computational Engineering and Sciences is an interdisciplinary research unit and graduate program at The University of Texas at Austin dedicated to advancing computational science and engineering through a variety of programs and research centers. The Institute currently supports 16 research centers, seven research groups and maintains the Computational Sciences, Engineering and Mathematics Program, a graduate degree program leading to the M.S. and Ph.D. degrees in Computational Science, Engineering and Mathematics. The interdisciplinary programs underway at the Oden Institute involve 123 faculty representing 23 academic departments and five schools and colleges. Oden Institute faculty hold positions in the Cockrell School of Engineering, College of Natural Sciences, Jackson School of Geosciences, Dell Medical School and McCombs School of Business. The Institute also supports the Peter O'Donnell, Jr. Postdoctoral Fellowship program and a program for visiting scholars through the J. Tinsley Oden Faculty Fellowship Research Fund. Organizationally, the Oden Institute reports to the Vice President for Research.
Research centers and groups
The Oden Institute supports 23 research centers and research groups. Each center and group is organized around a research topic and is directed by an Oden Institute faculty member.
Applied Mathematics Group
Autonomous Systems Group
Center for Computational Astronautical Sciences and Technologies (CAST)
Center for Computational GeoSciences and Optimization
Center for Computational Life Sciences and Biology
Center for Computational Materials
Center for Computational Molecular Science
Center for Computational Oncology
Center for Distributed and Grid Computing
Center for Numerical Analysis
Center for Predictive Engineering and Computational Science
Center for Quantum Materials Engineering
Center for Scientific Machine Learning
Center for Subsurface Modeling
Computational Hydraulics Group
Computational Mechanics Group
Computational Research in Ice and Ocean Systems
Computational Visualization Center
Electromagnetics and Acoustics Group
Parallel Algorithms for Data Analysis and Simulation Group
Probabilistic and High Order Inference, Computation, Estimation and Simulation
Science of High-Performance Computing Group
Willerson Center for Cardiovascular Modeling and Simulation
Programs
The Oden Institute supports seven major programs that seek to promote computational science at various levels.
The Computational Sciences, Engineering and Mathematics Program (CSEM)
A graduate program for MS and PhD students
Peter O'Donnell, Jr. Postdoctoral Fellowship Program
Supports the research of recent doctorates
J. Tinsley Oden Faculty Fellowship Research Program
Brings researchers and scholars from academia, government and industry to the institute to collaborate with Oden Institute researchers
Moncrief Endowed Position
Used to support outstanding junior faculty
Moncrief Grand Challenge Awards Program
Provides funding and resources for University of Texas at Austin faculty who work on challenges that affect national competitiveness
The Moncrief Undergraduate Summer Internship Program
Supports undergraduate interns who work with Oden Institute faculty during the summer
Undergraduate certificate program
Allows junior and senior level students at The University of Texas at Austin the opportunity to study computational engineering and sciences, and have their studies recognized with a certificate.
Notable faculty
Ivo Babuška
Chandrajit Bajaj
Luis Caffarelli
James R. Chelikowsky
Bjorn Engquist
Irene M. Gamba
Omar Ghattas
Thomas J.R. Hughes
Moriba K. Jah
Robert Moser
J. Tinsley Oden
William H. Press
Mary Wheeler
Karen Willcox
References
External links
Oden Institute for Computational Engineering and Sciences
University of Texas at Austin
Research institutes in Texas
Educational institutions established in 2003
Computational science
Engineering education in the United States
Mathematics organizations
Science and technology in Texas | Oden Institute for Computational Engineering and Sciences | [
"Mathematics"
] | 662 | [
"Computational science",
"Applied mathematics"
] |
11,880,077 | https://en.wikipedia.org/wiki/Mobile%20packet%20data%20service | MPDS or Mobile Packet Data Service provides remote users with IP capability over satellite for portable and extremely reliable communications for Internet applications such as World Wide Web access, file transfer and e-mail. It was launched by COMSAT mobile communications (CMC).
Mobile telecommunications | Mobile packet data service | [
"Technology"
] | 55 | [
"Mobile telecommunications"
] |
11,880,456 | https://en.wikipedia.org/wiki/Cam%20plastometer | The cam plastometer is a physical testing machine. It measures the resistance of non-brittle materials to compressive deformation at constant true-strain rates. In this way, it can be compared a bit to the Gleeble. In the early days, the machine operates at relatively low strain rates, but over time it has been enhanced and currently it can operate over a wide range of strain rates
The machine is patented as United States Patent 4,109,516.
In the machine, compressive deformation forces are applied by two flat, opposing platens which impact a flat, rectangular specimen. The deformation forces can be varied during operation to simulate conditions which occur during industrial pressing and forming operations. The plastometer is also capable of torsional testing of specimens.
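As general background on the constant true-strain-rate requirement (a sketch of the kinematics, not a description of the patented mechanism): true strain is ln(h0/h), so holding the true strain rate constant requires the specimen height to decrease exponentially with time, which is the motion the cam profile is shaped to deliver. Illustratively:

import math

def specimen_height(h0_mm, true_strain_rate, t_seconds):
    """Height under a constant true strain rate: h(t) = h0 * exp(-rate * t)."""
    return h0_mm * math.exp(-true_strain_rate * t_seconds)

# A 20 mm specimen compressed at a true strain rate of 10 per second:
for t in (0.0, 0.05, 0.1):
    print(t, round(specimen_height(20.0, 10.0, t), 2))  # 20.0, 12.13, 7.36 mm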
The cam plastometers are expensive and there are only a few of them in the world.
References
Mechanical engineering
Measuring instruments | Cam plastometer | [
"Physics",
"Technology",
"Engineering"
] | 190 | [
"Applied and interdisciplinary physics",
"Mechanical engineering",
"Measuring instruments"
] |
11,880,615 | https://en.wikipedia.org/wiki/Azoxystrobin | Azoxystrobin is a broad spectrum systemic fungicide widely used in agriculture to protect crops from fungal diseases. It was first marketed in 1996 using the brand name Amistar and by 1999 it had been registered in 48 countries on more than 50 crops. In the year 2000 it was announced that it had been granted UK Millennium product status.
History
In 1977, academic research groups in Germany published details of two new antifungal antibiotics they had isolated from the basidiomycete fungus Strobilurus tenacellus. They named these strobilurin A and B but did not provide detailed structures, only data based on their high-resolution mass spectra, which showed that the simpler of the two had molecular formula C16H18O3. In the following year, further details including structures were published and a related fungicide, oudemansin A from the fungus Oudemansiella mucida, whose identity had been determined by X-ray crystallography, was disclosed.
When the fungicidal effects were shown to stem from what was then a novel mode of action, chemists at the Imperial Chemical Industries (ICI) research site at Jealott's Hill became interested to use them as leads to develop new fungicides suitable for use in agriculture. The first task was to synthesize a sample of strobilurin A for testing. In doing so, it was discovered that the structure that had been published was incorrect in the stereochemistry of one of the double bonds: the strobilurins, in fact, have the E,Z,E not E,E,E configuration. Once this was realised and the correct material was made and tested it was shown, as expected, to be active in vitro but insufficiently stable to light to be active in the glasshouse. A large programme of chemistry to make analogues was begun when it was discovered that a new stilbene structure containing the β-methoxyacrylate portion (shown in blue and believed to be the toxophore) had good activity in glasshouse tests but still lacked sufficient photostability. After more than 1400 analogues had been made and tested the team chose azoxystrobin for commercialisation and it was developed under the code number ICIA5504.
First sales were in 1996 using the brand name Amistar: it then gained fast-track registration in the United States, where it was marketed in 1997 as Heritage. By 1999 it had been registered in 48 countries on more than 50 crops. In the year 2000 it was announced that it had been granted Millennium product status by the UK Prime Minister, Tony Blair, as it had become in three years the world's best-selling fungicide. Meanwhile, BASF scientists who were collaborating with the German academic groups that had discovered strobilurin A had independently invented kresoxim-methyl, which was also launched in 1996.
Synthesis
The first synthesis of azoxystrobin was disclosed in patents filed by the ICI group. The sequence of the two substitution reactions allows for the synthesis of a diverse range of structural analogs via an Ullmann-type etherification between the aryl chloride of the first intermediate and a substituted phenol, thus introducing a range of structural diversity while maintaining the central strobin toxophore. The final choice of 2-cyano phenol in the second step of the synthesis was made after many other alternatives had been tested for their fungicidal properties.
The crystal structure was published in 2008.
Mechanism of action
Azoxystrobin and other strobilurins inhibit mitochondrial respiration by blocking electron transport. They bind at the quinol outer binding site of the cytochrome b-c1 complex, where ubiquinone (coenzyme Q10) would normally bind when carrying electrons to that protein. Thus production of ATP is prevented. The generic name for this mode of action is "Quinone Outside Inhibitors" QoI.
Formulations
Azoxystrobin is made available to end-users only in formulated products. Since the active ingredient has moderate solubility in water, formulations aid its use in water-based sprays by creating an emulsion when diluted. Modern products use non-powdery formulations with reduced or no use of hazardous solvents, for example suspension concentrates. The fungicide is compatible with many other pesticides and adjuvants when mixed by the farmer for spraying.
Usage
Azoxystrobin is a xylem-mobile systemic fungicide with translaminar, protectant and curative properties. In cereal crops, its main outlet, the length of disease control is generally about four to six weeks during the period of active stem elongation. All pesticides are required to seek registration from appropriate authorities in the country in which they will be used. In the United States, the Environmental Protection Agency (EPA) is responsible for regulating pesticides under the Federal Insecticide, Fungicide, and Rodenticide Act (FIFRA) and the Food Quality Protection Act (FQPA). A pesticide can only be used legally according to the directions on the label that is included at the time of the sale of the pesticide. The purpose of the label is "to provide clear directions for effective product performance while minimizing risks to human health and the environment". A label is a legally binding document that mandates how the pesticide can and must be used and failure to follow the label as written when using the pesticide is a federal offence.
Within the European Union, a 2-tiered approach is used for the approval and authorisation of pesticides. Firstly, before a formulated product can be developed for market, the active substance must be approved for the European Union. After this has been achieved, authorisation for the specific product must be sought from every Member State that the applicant wants to sell it to. Afterwards, there is a monitoring programme to make sure the pesticide residues in food are below the limits set by the European Food Safety Authority.
Agriculture and Horticulture
Azoxystrobin possesses a broad spectrum of activity, in common with other QoI inhibitors. Examples of the fungal groups on which it is effective are Ascomycota, Deuteromycota, and Basidiomycota, as well as the oomycetes. In addition, its properties mean that it can move systemically through plant tissue to protect parts of the crop that were not in contact with the spray. This combination of properties has meant that it achieved widespread use very quickly and has reached annual sales of more than $500 million. Important diseases which it controls include leaf spot, rusts, powdery mildew, downy mildew, net blotch and blight.
Worldwide, azoxystrobin is registered for use on all important crops. For example, in the European Union and United States it is registered for use in wheat, barley, oats, rye, soya, cotton, rice, strawberry, peas, beans, onions and many other vegetables. The advantage to the farmer comes in the form of improved yield at harvest. Farmers can act in their best economic interest: the value of the additional yield can be estimated, and the total cost of using the fungicide informs the decision to purchase. This cost-benefit analysis by the end user sets a maximum price which the supplier can demand, and in practice pesticide prices fluctuate according to the current market value of the crops in which they are used. The estimated annual use of azoxystrobin in US agriculture is mapped by the US Geological Survey and shows an increasing trend from its introduction in 1997 to 2019, the latest date for which figures are available.
Home and garden
One of the earliest uses of azoxystrobin was to control fungal diseases of turf and it has been used on golf courses and lawns. It is now available for domestic markets under brand names such as Heritage and Azoxy 2SC.
Azoxystrobin is added to mold-resistant Purple wallboards (optiSHIELD AT, mixture of azoxystrobin and thiabendazole) and can leach into house dust, potentially providing a source of life-long exposure to children and adults.
Human safety
Azoxystrobin has little toxicity to mammals with an LD50 of over 5000 mg/kg (rats, oral). However, it can cause skin and eye irritation. First aid information is included with the label.
The World Health Organization (WHO) and Food and Agriculture Organization (FAO) joint meeting on pesticide residues has determined that the acceptable daily intake for azoxystrobin is 0-0.2 mg/kg bodyweight per day.
The Codex Alimentarius database maintained by the FAO lists the maximum residue limits for azoxystrobin in various food products.
Effects on the environment
Azoxystrobin is categorized as having a low potential for bioconcentration and of moderate risk to fish, earthworms and bees but of high risk to aquatic crustaceans, so care must be taken to avoid runoff into water bodies. Its main degradation product, the carboxylic acid resulting from hydrolysis of its methyl ester, is also potentially harmful to aquatic environments. The benefits and risks of use of QoI fungicides have been reviewed and there is extensive literature on azoxystrobin's environmental profile.
Ultimately it is the regulatory authorities in each country who must weigh up the benefits to end users and balance these against the compound's inherent hazards and consequent risks to consumers and the wider environment.
Resistance Management
Fungal populations have the ability to develop resistance to QoI inhibitors. This potential can be mitigated by careful management. Reports of individual pest species becoming resistant to azoxystrobin are monitored by manufacturers, regulatory bodies such as the EPA and the Fungicides Resistance Action Committee (FRAC), who assign fungicides into classes by mode of action. In some cases, the risks of resistance developing can be reduced by using a mixture of two or more fungicides which each have activity on relevant pests but with unrelated mechanisms of action. FRAC assigns fungicides into classes so as to facilitate this. On cereal crops in the US, for example, azoxystrobin may only be used in mixture, usually with an azole fungicide such as difenoconazole.
Brands
Azoxystrobin is the ISO common name for the active ingredient which is formulated into the branded product sold to end-users. By international convention and in many countries the law, pesticide labels are required to include the common name of the active ingredients. These names are not the exclusive property of the holder of any patent or trademark and as such they are the easiest way for non-experts to refer to individual chemicals. Companies selling pesticides normally do so using a brand name or wordmark which allows them to distinguish their product from competitor products having the same active ingredient. In many cases, this branding is country and formulation-specific so after several years of sales there can be multiple brand names for a given active ingredient. The situation is made even more complicated when companies license their ingredients to others, as is often done. In addition, the product may be pre-mixed with other pesticides under a new brand name.
It is therefore difficult to provide a comprehensive list of brand names for products containing azoxystrobin. They include Amistar, Abound, Heritage, Olympus, Ortiva, Priori Xtra, Scotts DiseaseEx, Haedes and Quadris. Suppliers and brand names in the United States are listed in the National Pesticide Information Retrieval System.
References
Further reading
External links
National Pesticide Information Center
Quadris Fungicide Product - Syngenta US
Azoxystrobin on Pubchem
Fungicides
Pyrimidines
Phenol ethers
Benzonitriles
Methyl esters
Strobilurins
Diaryl ethers
Nitriles | Azoxystrobin | [
"Chemistry",
"Biology"
] | 2,459 | [
"Fungicides",
"Nitriles",
"Biocides",
"Functional groups"
] |
11,880,982 | https://en.wikipedia.org/wiki/Material%20nonimplication | Material nonimplication or abjunction (Latin ab = "away", junctio= "to join") is a term referring to a logic operation used in generic circuits and Boolean algebra. It is the negation of material implication. That is to say that for any two propositions and , the material nonimplication from to is true if and only if the negation of the material implication from to is true. This is more naturally stated as that the material nonimplication from to is true only if is true and is false.
It may be written using logical notation as p ↛ q, p ⊅ q, or "Lpq" (in Bocheński notation), and is logically equivalent to ¬(p → q), and to p ∧ ¬q.
Definition
Truth table

p   q   p ↛ q
T   T   F
T   F   T
F   T   F
F   F   F
Logical Equivalences
Material nonimplication may be defined as the negation of material implication.
In classical logic, it is also equivalent to the negation of the disjunction of ¬p and q, that is, ¬(¬p ∨ q), and to the conjunction of p and ¬q, that is, p ∧ ¬q.
Properties
falsehood-preserving: The interpretation under which all variables are assigned a truth value of "false" produces a truth value of "false" as a result of material nonimplication.
Symbol
The symbol for material nonimplication is simply a crossed-out material implication symbol. Its Unicode code point is U+219B (decimal 8603): ↛.
Natural language
Grammatical
"p minus q."
"p without q."
Rhetorical
"p but not q."
"q is false, in spite of p."
Computer science
Bitwise operation: A & ~B. This is usually called "bit clear" (BIC) or "and not" (ANDN).
Logical operation: A && !B.
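Both forms can be sketched in Python (the function name is illustrative):

def material_nonimplication(p, q):
    """True exactly when p is true and q is false ("p but not q")."""
    return p and not q

assert material_nonimplication(True, False) is True
assert material_nonimplication(True, True) is False

# Bitwise "bit clear" (AND NOT) over integer masks:
a, b = 0b1101, 0b1011
print(bin(a & ~b & 0b1111))  # 0b100: the bits set in a but not in b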
See also
Implication
Set difference
References
External links
Logical connectives | Material nonimplication | [
"Mathematics"
] | 354 | [
"Mathematical logic stubs",
"Mathematical logic"
] |
11,881,071 | https://en.wikipedia.org/wiki/Kilpatrick%20limit | In particle accelerators, a common mechanism for accelerating a charged particle beam is via copper resonant cavities in which electric and magnetic fields form a standing wave, the mode of which is designed so that the E field points along the axis of the accelerator, producing forward acceleration of the particles when in the correct phase.
The maximum electric field achievable is limited by a process known as RF breakdown. The reliable limits for various RF frequencies were tested experimentally in the 1950s by W. D. Kilpatrick.
An approximate relation obtained by least-squares optimization of the data yields

f = 1.64 · Ek² · e^(−8.5/Ek)

with f the frequency in MHz and Ek the limiting electric field in MV/m (megavolts per metre).
This relation is known as the Kilpatrick Limit.
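Since f increases monotonically with the field over the range of interest, the limit field for a given RF frequency can be found by inverting the relation numerically. A minimal sketch, assuming the form of the relation quoted above (bisection avoids external dependencies):

import math

def kilpatrick_frequency(e_mv_per_m):
    """Frequency f in MHz for which e_mv_per_m (MV/m) is the Kilpatrick field."""
    return 1.64 * e_mv_per_m ** 2 * math.exp(-8.5 / e_mv_per_m)

def kilpatrick_field(f_mhz, lo=1.0, hi=200.0):
    """Invert the relation by bisection: Kilpatrick field in MV/m for f in MHz."""
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if kilpatrick_frequency(mid) < f_mhz:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

print(round(kilpatrick_field(350.0), 1))  # ~18.4 MV/m at 350 MHz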
References
Accelerator physics | Kilpatrick limit | [
"Physics"
] | 145 | [
"Applied and interdisciplinary physics",
"Accelerator physics",
"Experimental physics"
] |
11,881,741 | https://en.wikipedia.org/wiki/Bifeprunox | Bifeprunox (INN) (code name DU-127,090) is an atypical antipsychotic which, similarly to aripiprazole, combines minimal D2 receptor agonism with serotonin receptor agonism. It was under development for the treatment of schizophrenia, psychosis and Parkinson's disease.
In a multi-center, placebo-controlled study, 20 mg of bifeprunox was found to be significantly more effective than placebo at reducing symptoms of schizophrenia, with a low incidence of side effects. A New Drug Application (NDA) for bifeprunox was filed with the U.S. Food and Drug Administration in January 2007; the FDA rejected the application in August 2007. In June 2009, Solvay and Wyeth ceased development because "efficacy data did not support pursuing the existing development strategy of stabilisation of non-acute patients with schizophrenia."
Pharmacodynamics
Bifeprunox is an atypical antipsychotic that is a partial D2 agonist.
See also
Aripiprazole
Pardoprunox
References
5-HT7 agonists
Atypical antipsychotics
Phenylpiperazines
Carbamates
Benzoxazoles
Biphenyls
Abandoned drugs | Bifeprunox | [
"Chemistry"
] | 267 | [
"Drug safety",
"Abandoned drugs"
] |
11,881,862 | https://en.wikipedia.org/wiki/RADiations%20Effects%20on%20Components%20and%20Systems | The RADECS association is a non-profit professional organization that promotes basic and applied research in the field of radiation and its effects on materials, components and systems. The acronym RADECS stands for "RADiations Effects on Components and Systems.
History
The first "Radiation and its Effects on Components and Systems" (RADECS) conference was held in Montpellier, France in 1989 as a French national conference. In 1991, the members of the organizing committee expanded the scope of RADECS to become a European conference. Since then, the RADECS Conference and RADECS Workshop have run in alternate years.
The activities of the RADECS association are as follows:
RADECS biennial European Conference
Biennial Technical Workshop
Promote research activities on radiation effects due to charged and uncharged particles and ionizing radiation
Scientific publications or promotion of scientific publications
Cooperation and exchange with other organizations (e.g. IEEE Nuclear and Plasma Sciences Society)
RADECS conferences
The RADECS conference and workshops address technical issues related to radiation effects on devices, integrated circuits, sensors, and systems, as well as radiation hardening, testing, and environmental modeling methods. Papers from the events are published in a biennial issue of the IEEE Transactions on Nuclear Science journal.
References
External links
EASii IC – Company for RADECS website
International professional associations
Radiation Effects
Nuclear organizations
International organizations based in France
Professional associations based in France
Organizations based in Montpellier | RADiations Effects on Components and Systems | [
"Astronomy",
"Engineering"
] | 292 | [
"Space technology",
"Energy organizations",
"Nuclear organizations",
"Outer space"
] |
11,882,025 | https://en.wikipedia.org/wiki/Translocated%20promoter%20region | Translocated promoter region is a component of the tpr-met fusion protein.
External links | Translocated promoter region | [
"Chemistry",
"Biology"
] | 21 | [
"Biochemistry stubs",
"Biotechnology stubs",
"Biochemistry"
] |
11,882,041 | https://en.wikipedia.org/wiki/Tpr-met%20fusion%20protein | Tpr-Met fusion protein is an oncogene fusion protein consisting of TPR and MET.
Structure
Tpr-Met was generated following a chromosomal rearrangement induced by the treatment of a human osteogenic sarcoma cell line with the carcinogen N-methyl-N′-nitro-N-nitrosoguanidine. The genomic rearrangement fuses two genetic loci: translocated promoter region, from chromosome 1q25, which encodes a dimerization leucine zipper motif, and MET, from chromosome 7q31, which contributes the kinase domain and carboxy-terminus of the Met RTK. The resulting 65 kDa cytoplasmic Tpr-Met oncoprotein forms a dimer mediated through the Tpr leucine zipper.
The Tpr-Met fusion protein lacks the extracellular, transmembrane and juxtamembrane domains of c-Met receptor, and has gained the Tpr dimerization motif, which allows constitutive and ligand-independent activation of the kinase. The loss of juxtamembrane sequences, necessary for the negative regulation of kinase activity and receptor degradation, prolongs duration of Met signalling.
Experimental evidence
Effects in muscle
Skeletal muscle
Specific expression of Tpr-Met in terminally-differentiated skeletal muscle causes muscle wasting in vivo and exerts anti-differentiation effects in terminally differentiated myotubes. Constitutive activation of MET signaling has been suggested to cause defects in myogenic differentiation, contributing to rhabdomyosarcoma development and progression.
Cardiac muscle
In a transgenic model, cardiac-specific expression of the Tpr-Met oncogene during postnatal life causes early-onset heart failure.
References
External links
Engineered proteins | Tpr-met fusion protein | [
"Chemistry"
] | 365 | [
"Biochemistry stubs",
"Protein stubs"
] |
11,882,107 | https://en.wikipedia.org/wiki/Space%20Research%20and%20Technology%20Institute | The Space Research and Technology Institute (SRTI) of the Bulgarian Academy of Sciences is a primary research body in the field of space science in Bulgaria.
The mission of SRTI-BAS is to conduct fundamental and applied studies in the field of Space Physics, Remote Sensing of the Earth and Planets, and Aerospace Systems and Technologies.
Scope
The field of activity of SRTI ranges over fundamental and applied investigations in space physics, astrophysics, image processing, remote sensing, life sciences, and scientific equipment; the preparation and implementation of experiments in space exploration on board automatic and piloted spacecraft; investigations of control systems for aircraft and spacecraft and equipment for them; work on the creation of space materials and technologies and their transfer into the national economy; and the education of postgraduate and master's students.
History
The organized participation of Bulgarian scientists in space research started in 1969 with the creation of a Scientific Group of Space Physics (SGSP) at the Presidium of the Bulgarian Academy of Sciences. In 1974, based on the SGSP, the Central Laboratory for Space Research (CLSR) was founded. The Space Research Institute (SRI) at the Bulgarian Academy of Sciences succeeded the Central Laboratory for Space Research in 1987. Under the reform carried out at the Bulgarian Academy of Sciences, by a Resolution of the General Assembly of the BAS of 23 March 2010, the SRI and the Solar-Terrestrial Influences Institute (STII) merged to form a new unit, the Space and Solar-Terrestrial Research Institute at the BAS (SSTRI–BAS), which was renamed in 2012 to the Space Research and Technology Institute (SRTI).
Bulgarian scientists from SRTI-BAS successfully participated in the Intercosmos program, preparing experiments and designing equipment for several satellites and rockets.
In 1979, the first Bulgarian cosmonaut Georgi Ivanov flew into space on board Soyuz 33.
In 1981 two satellites were launched - Bulgaria 1300 and Meteor-Priroda 2-4 (Meteor 1-31), furnished entirely with Bulgarian equipment, aimed at studying the ionospheric-magnetospheric relationship and remote sensing of the Earth from space.
In 1984 teams from SRTI-BAS took part in the international Vega 1 and 2 missions, for realization of the "Venus–Halley's Comet" project.
In 1988 the second Bulgarian cosmonaut Aleksandar Aleksandrov flew on board Soyuz TM-5 to the Mir space station. Institute teams also contributed to the "Active" project (1989), for determination of the electrostatic field around a satellite, and developed the "VSK-FREGAT" apparatus (1989), which transmitted images of Phobos, a satellite of Mars, within the "Phobos" programme. The institute also created the space greenhouse "SVET", with which Russian and American astronauts carried out successful experiments on board the Mir space station, including the cultivation of plants from "seed to seed".
Until 2001, the "NEVROLAB-B" system for complex physiological study of astronauts and the "R-400" radiometer, which obtained data on parameters of the Earth's surface, also operated on board the Mir space station.
In recent years the institute has participated actively in calls under the EU's 6th, 7th, and Horizon 2020 framework programmes, the PHARE programme, NATO programmes, and others.
Publishing activity
Since 2004 SRTI-BAS has organized an annual conference, "Space, Ecology, Safety", whose proceedings can be found on the SRTI-BAS website. A few more workshops and conferences were organized by STIL-BAS before the reform in 2010. Their proceedings can also be found in the 'Publishing activity' section of the SRTI-BAS website.
The Aerospace Research in Bulgaria journal was founded in 1978 under the name Space Research in Bulgaria. Its founder and first editor was Acad. Kiril Serafimov (1978–1990). Over the years, its editors were Prof. Boris Bonev (1991–1996), Prof. Nikola Georgiev (1996–2006), and Prof. Garo Mardirossian (2006–present). The journal has changed its name twice. It was first issued as Space Research in Bulgaria (No. 1–8); from No. 9 to No. 15 its name was "Аерокосмически изследвания в България" (Aerospace Research in Bulgaria), continuing the policy from the first issues of publishing in Bulgarian, Russian, and English. Since 2001, the journal's name has been Aerospace Research in Bulgaria (No. 16 onward), and its content is entirely in English, with summaries in Bulgarian or Russian.
See also
List of government space agencies
References
Institutes of the Bulgarian Academy of Sciences
Space program of Bulgaria
Space organizations
Space agencies | Space Research and Technology Institute | [
"Astronomy"
] | 992 | [
"Astronomy organizations",
"Space organizations"
] |
164,117 | https://en.wikipedia.org/wiki/Duration%20%28music%29 | In music, duration is an amount of time or how long or short a note, phrase, section, or composition lasts. "Duration is the length of time a pitch, or tone, is sounded." A note may last less than a second, while a symphony may last more than an hour. One of the fundamental features of rhythm, or encompassing rhythm, duration is also central to meter and musical form. Release plays an important part in determining the timbre of a musical instrument and is affected by articulation.
The concept of duration can be further broken down into those of beat and meter, where the beat is seen as (usually, but certainly not always) a 'constant', with rhythms being longer than, shorter than, or the same length as the beat. Pitch may even be considered a part of duration. In serial music either the beginning of a note may be considered, or its duration (for example, is a 6 the note which begins at the sixth beat, or the note which lasts six beats?).
Durations, and their beginnings and endings, may be described as long, short, or taking a specific amount of time. Often duration is described according to terms borrowed from descriptions of pitch. As such, the duration complement is the amount of different durations used, the duration scale is an ordering (scale) of those durations from shortest to longest, the duration range is the difference in length between the shortest and longest, and the duration hierarchy is an ordering of those durations based on frequency of use.
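As a minimal illustrative sketch (Python; the note lengths are invented example data), these four notions can be computed directly from a list of durations:

from collections import Counter

durations = [0.5, 1.0, 0.5, 2.0, 1.0, 0.5, 4.0]  # note lengths in beats (example data)

complement = set(durations)                    # the different durations used
scale = sorted(complement)                     # ordered from shortest to longest
duration_range = scale[-1] - scale[0]          # longest minus shortest
hierarchy = [d for d, _ in Counter(durations).most_common()]  # by frequency of use

print(scale, duration_range, hierarchy)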
Durational patterns are the foreground details projected against a background metric structure, which includes meter, tempo, and all rhythmic aspects which produce temporal regularity or structure. Duration patterns may be divided into rhythmic units and rhythmic gestures (Winold, 1975, chap. 3). But they may also be described using terms borrowed from the metrical feet of poetry: iamb (weak–strong), anapest (weak–weak–strong), trochee (strong–weak), dactyl (strong–weak–weak), and amphibrach (weak–strong–weak), which may overlap to explain ambiguity.
See also
tuplet
References
Elements of music
Rhythm and meter | Duration (music) | [
"Physics",
"Technology"
] | 452 | [
"Physical quantities",
"Time",
"Elements of music",
"Rhythm and meter",
"Spacetime",
"Components"
] |
164,174 | https://en.wikipedia.org/wiki/Optical%20communication | Optical communication, also known as optical telecommunication, is communication at a distance using light to carry information. It can be performed visually or by using electronic devices. The earliest basic forms of optical communication date back several millennia, while the earliest electrical device created to do so was the photophone, invented in 1880.
An optical communication system uses a transmitter, which encodes a message into an optical signal, a channel, which carries the signal to its destination, and a receiver, which reproduces the message from the received optical signal. When electronic equipment is not employed the 'receiver' is a person visually observing and interpreting a signal, which may be either simple (such as the presence of a beacon fire) or complex (such as lights using color codes or flashed in a Morse code sequence).
Modern communication relies on optical networking systems using optical fiber, optical amplifiers, lasers, switches, routers, and other related technologies. Free-space optical communication uses lasers to transmit signals in space, while terrestrial forms are naturally limited by geography and weather. This article provides a basic introduction to different forms of optical communication.
Visual forms
Visual techniques such as smoke signals, beacon fires, hydraulic telegraphs, ship flags and semaphore lines were the earliest forms of optical communication. Hydraulic telegraph semaphores date back to 4th-century BCE Greece. Distress flares are still used by mariners in emergencies, while lighthouses and navigation lights are used to communicate navigation hazards.
The heliograph uses a mirror to reflect sunlight to a distant observer. When a signaler tilts the mirror to reflect sunlight, the distant observer sees flashes of light that can be used to transmit a prearranged signaling code. Naval ships often use signal lamps and Morse code in a similar way.
Aircraft pilots often use visual approach slope indicator (VASI) projected light systems to land safely, especially at night. Military aircraft landing on an aircraft carrier use a similar system to land correctly on a carrier deck. The coloured light system communicates the aircraft's height relative to a standard landing glideslope. As well, airport control towers still use Aldis lamps to transmit instructions to aircraft whose radios have failed.
Semaphore line
A 'semaphore telegraph', also called a 'semaphore line', 'optical telegraph', 'shutter telegraph chain', 'Chappe telegraph', or 'Napoleonic semaphore', is a system used for conveying information by means of visual signals, using towers with pivoting arms or shutters, also known as blades or paddles. Information is encoded by the position of the mechanical elements; it is read when the shutter is in a fixed position.
Semaphore lines were a precursor of the electrical telegraph. They were far faster than post riders for conveying a message over long distances, but far more expensive and less private than the electrical telegraph lines which would later replace them. The maximum distance that a pair of semaphore telegraph stations can bridge is limited by geography, weather and the availability of light; thus, in practical use, most optical telegraphs used lines of relay stations to bridge longer distances. Each relay station would also require its complement of skilled operator-observers to convey messages back and forth across the line.
The modern design of semaphores was first foreseen by the British polymath Robert Hooke, who first gave a vivid and comprehensive outline of visual telegraphy in a 1684 submission to the Royal Society. His proposal (which was motivated by military concerns following the Battle of Vienna the preceding year) was not put into practice during his lifetime.
The first operational optical semaphore line arrived in 1792, created by the French engineer Claude Chappe and his brothers, who succeeded in covering France with a network of 556 stations stretching a total distance of some 4,800 kilometres. It was used for military and national communications until the 1850s.
Many national services adopted signaling systems different from the Chappe system. For example, Britain and Sweden adopted systems of shuttered panels (in contradiction to the Chappe brothers' contention that angled rods are more visible). In Spain, the engineer Agustín de Betancourt developed his own system which was adopted by that state. This system was considered by many experts in Europe better than Chappe's, even in France.
These systems were popular in the late 18th to early 19th century but could not compete with the electrical telegraph, and went completely out of service by 1880.
Semaphore signal flags
Semaphore flags are the system for conveying information at a distance by means of visual signals with hand-held flags, rods, disks, paddles, or occasionally bare or gloved hands. Information is encoded by the position of the flags, objects or arms; it is read when they are in a fixed position.
Semaphores were adopted and widely used (with hand-held flags replacing the mechanical arms of shutter semaphores) in the maritime world in the 19th century. They are still used during underway replenishment at sea and are acceptable for emergency communication in daylight or, using lighted wands instead of flags, at night.
The newer flag semaphore system uses two short poles with square flags, which a signaler holds in different positions to convey letters of the alphabet and numbers. The transmitter holds one pole in each hand, and extends each arm in one of eight possible directions. Except in the rest position, the flags cannot overlap. The flags are colored differently based on whether the signals are sent by sea or by land. At sea, the flags are colored red and yellow (the Oscar flags), while on land, they are white and blue (the Papa flags). Flags are not required; they just make the characters more obvious.
Signal lamps
Signal lamps (such as Aldis lamps), are visual signaling devices for optical communication (typically using Morse code). Modern signal lamps are a focused lamp which can produce a pulse of light. In large versions this pulse is achieved by opening and closing shutters mounted in front of the lamp, either via a manually operated pressure switch or, in later versions, automatically.
With hand held lamps, a concave mirror is tilted by a trigger to focus the light into pulses. The lamps are usually equipped with some form of optical sight, and are most commonly deployed on naval vessels and also used in airport control towers with coded aviation light signals.
Aviation light signals are used in the case of a radio failure, an aircraft not equipped with a radio, or in the case of a hearing-impaired pilot. Air traffic controllers have long used signal light guns to direct such aircraft. The light gun's lamp has a focused bright beam capable of emitting three different colors: red, white and green. These colors may be flashing or steady, and provide different instructions to aircraft in flight or on the ground (for example, "cleared to land" or "cleared for takeoff"). Pilots can acknowledge the instructions by wiggling their plane's wings, moving their ailerons if they are on the ground, or by flashing their landing or navigation lights during night time. Only 12 simple standardized instructions are directed at aircraft using signal light guns as the system is not utilized with Morse code.
Heliograph
A heliograph (from Greek helios, meaning "sun", and graphein, meaning "write") is a wireless solar telegraph that signals by flashes of sunlight (generally using Morse code) reflected by a mirror. The flashes are produced by momentarily pivoting the mirror, or by interrupting the beam with a shutter.
The heliograph was a simple but effective instrument for instantaneous optical communication over long distances during the late 19th and early 20th century. Its main uses were in military, surveys and forest protection work. They were standard issue in the British and Australian armies until the 1960s, and were used by the Pakistani army as late as 1975.
Electronic forms
In the present day a variety of electronic systems optically transmit and receive information carried by pulses of light. Fiber-optic communication cables are employed to carry electronic data and telephone traffic. Free-space optical communications are also used every day in various applications.
Optical fiber
Optical fiber is the most common type of channel for optical communications. The transmitters in optical fiber links are generally light-emitting diodes (LEDs) or laser diodes. Infrared light is used more commonly than visible light, because optical fibers transmit infrared wavelengths with less attenuation and dispersion. The signal encoding is typically simple intensity modulation, although historically optical phase and frequency modulation have been demonstrated in the lab. The need for periodic signal regeneration was largely superseded by the introduction of the erbium-doped fiber amplifier, which extended link distances at significantly lower cost. The commercial introduction of dense wavelength-division multiplexing (WDM) in 1996 by Ciena Corp was the real start of optical networking. WDM is now the common basis of nearly every high-capacity optical system in the world.
The first optical communication systems were designed and delivered to the U.S. Army and Chevron by Optelecom, Inc., the venture co-founded by Gordon Gould, the inventor of the optical amplifier and the laser.
Photophone
The photophone (originally given an alternate name, radiophone) is a communication device which allowed for the transmission of speech on a beam of light. It was invented jointly by Alexander Graham Bell and his assistant Charles Sumner Tainter on February 19, 1880, at Bell's 1325 'L' Street laboratory in Washington, D.C. Both were later to become full associates in the Volta Laboratory Association, created and financed by Bell.
On June 21, 1880, Bell's assistant transmitted a wireless voice telephone message over a considerable distance, from the roof of the Franklin School to the window of Bell's laboratory, some 213 meters (about 700 ft) away.
Bell believed the photophone was his most important invention. Of the 18 patents granted in Bell's name alone, and the 12 he shared with his collaborators, four were for the photophone, which Bell referred to as his "greatest achievement", telling a reporter shortly before his death that the photophone was "the greatest invention [I have] ever made, greater than the telephone".
The photophone was a precursor to the fiber-optic communication systems which achieved popular worldwide usage starting in the 1980s. The master patent for the photophone (Apparatus for Signalling and Communicating, called Photophone) was issued in December 1880, many decades before its principles came to have practical applications.
Free-space optical communication
Free-space optics (FSO) systems are employed for 'last mile' telecommunications and can function over distances of several kilometers as long as there is a clear line of sight between the source and the destination, and the optical receiver can reliably decode the transmitted information. Other free-space systems can provide high-data-rate, long-range links using small, low-mass, low-power-consumption subsystems which make them suitable for communications in space. Various planned satellite constellations intended to provide global broadband coverage take advantage of these benefits and employ laser communication for inter-satellite links between the several hundred to several thousand satellites, effectively creating a space-based optical mesh network.
More generally, transmission of unguided optical signals is known as optical wireless communications (OWC). Examples include medium-range visible light communication and short-distance IrDA, using infrared LEDs.
See also
Fiber tapping
Interconnect bottleneck
Jun-Ichi Nishizawa, an inventor of optical communication
Modulating retro-reflector
OECC (OptoElectronics and Communications Conference)
Optical interconnect
Opto-isolator
Parallel optical interface
References
Citations
Bibliography
Alwayn, Vivek. Fiber-Optic Technologies, Cisco Press, Apr 23, 2004.
Bruce, Robert V. Bell: Alexander Graham Bell and the Conquest of Solitude, Ithaca, New York: Cornell University Press, 1990.
Mims III, Forrest M. The First Century of Lightwave Communications, Fiber Optics Weekly Update, Information Gatekeepers, February 10–26, 1982, pp. 6–23.
Paschotta, Rüdiger. Encyclopedia of Laser Physics and Technology, RP-Photonics.com website, 2012.
Further reading
Bayvel, Polina Future High-Capacity Optical Telecommunication Networks, Philosophical Transactions: Mathematical, Physical and Engineering Sciences, Vol. 358, No. 1765, January 2000, Science into the Next Millennium: Young Scientists Give Their Visions of the Future: II. Mathematics, Physics and Engineering, pp. 303–329, stable article URL: https://www.jstor.org/stable/2666790, published by The Royal Society.
Dilhac, J-M. The Telegraph of Claude Chappe -An Optical Telecommunication Network For The XVIII Century, Toulouse: Institut National des Sciences Appliquées de Toulouse. Retrieved from IEEE Global History Network.
Telecommunications | Optical communication | [
"Technology",
"Engineering"
] | 2,636 | [
"Information and communications technology",
"Telecommunications engineering",
"Telecommunications",
"Optical communications"
] |
164,217 | https://en.wikipedia.org/wiki/Tobacco%20mosaic%20virus | Tobacco mosaic virus (TMV) is a positive-sense single-stranded RNA virus species in the genus Tobamovirus that infects a wide range of plants, especially tobacco and other members of the family Solanaceae. The infection causes characteristic patterns, such as "mosaic"-like mottling and discoloration on the leaves (hence the name). TMV was the first virus to be discovered. Although it was known from the late 19th century that a non-bacterial infectious disease was damaging tobacco crops, it was not until 1930 that the infectious agent was determined to be a virus. It is the first pathogen identified as a virus. The virus was crystallised by Wendell Meredith Stanley. It is similar in size to the largest synthetic molecule, known as PG5, with comparable length and diameter.
History
In 1886, Adolf Mayer first described the tobacco mosaic disease that could be transferred between plants, similar to bacterial infections. In 1892, Dmitri Ivanovsky gave the first concrete evidence for the existence of a non-bacterial infectious agent, showing that infected sap remained infectious even after filtering through the finest Chamberland filters. Later, in 1903, Ivanovsky published a paper describing abnormal crystal intracellular inclusions in the host cells of the affected tobacco plants and argued the connection between these inclusions and the infectious agent. However, Ivanovsky remained rather convinced, despite repeated failures to produce evidence, that the causal agent was an unculturable bacterium, too small to be retained on the employed Chamberland filters and to be detected in the light microscope. In 1898, Martinus Beijerinck independently replicated Ivanovsky's filtration experiments and then showed that the infectious agent was able to reproduce and multiply in the host cells of the tobacco plant. Beijerinck adopted the term "virus" to indicate that the causal agent of tobacco mosaic disease was of non-bacterial nature. Tobacco mosaic virus was the first virus to be crystallized; it exhibits liquid-crystal phases above a critical density. Crystallization was achieved by Wendell Meredith Stanley in 1935, who also showed that TMV remains active even after crystallization. For his work, he was awarded 1/4 of the Nobel Prize in Chemistry in 1946, even though it was later shown some of his conclusions (in particular, that the crystals were pure protein, and assembled by autocatalysis) were incorrect. The first electron microscopical images of TMV were made in 1939 by Gustav Kausche, Edgar Pfankuch and Helmut Ruska – the brother of Nobel Prize winner Ernst Ruska. In 1955, Heinz Fraenkel-Conrat and Robley Williams showed that purified TMV RNA and its capsid (coat) protein assemble by themselves into functional viruses, indicating that this is the most stable structure (the one with the lowest free energy).
The crystallographer Rosalind Franklin worked for Stanley for about a month at Berkeley, and later designed and built a model of TMV for the 1958 World's Fair at Brussels. In 1958, she speculated that the virus was hollow, not solid, and hypothesized that the RNA of TMV is single-stranded. This conjecture was proven to be correct after her death and is now known to be the + strand. The investigations of tobacco mosaic disease and subsequent discovery of its viral nature were instrumental in the establishment of the general concepts of virology.
Structure
Tobacco mosaic virus has a rod-like appearance. Its capsid is made from 2130 molecules of coat protein and one molecule of genomic single-stranded RNA, 6400 bases long. The coat protein self-assembles into the rod-like helical structure (16.3 proteins per helix turn) around the RNA, which forms a hairpin loop structure (see the electron micrograph above). The structural organization of the virus gives stability. The protein monomer consists of 158 amino acids which are assembled into four main alpha-helices, which are joined by a prominent loop proximal to the axis of the virion. Virions are ~300 nm in length and ~18 nm in diameter. Negatively stained electron microphotographs show a distinct inner channel of radius ~2 nm. The RNA is located at a radius of ~4 nm and is protected from the action of cellular enzymes by the coat protein. The X-ray fiber diffraction structure of the intact virus was studied based on an electron density map at 3.6 Å resolution. Inside the capsid helix, near the core, is the coiled RNA molecule, which is made up of 6,395 ±10 nucleotides. The structure of the virus plays an important role in the recognition of the viral RNA. This happens due to the formation of an obligatory intermediate, produced from the coat protein, that allows the virus to recognize a specific RNA hairpin structure. The intermediate induces the nucleation of TMV self-assembly by binding with the hairpin structure.
Genome
The TMV genome consists of a 6.3–6.5 kb single-stranded (ss) RNA. The 3′-terminus has a tRNA-like structure, and the 5′-terminus has a methylated nucleotide cap (m7G5′pppG). The genome encodes 4 open reading frames (ORFs), two of which produce a single protein due to ribosomal readthrough of a leaky UAG stop codon. The 4 genes encode a replicase (with methyltransferase [MT] and RNA helicase [Hel] domains), an RNA-dependent RNA polymerase, a so-called movement protein (MP) and a capsid protein (CP). The coding sequence starts with the first reading frame, which is 69 nucleotides away from the 5′ end of the RNA. The noncoding region at the 5′ end can vary between individual virions, but no variation has been found between virions in the noncoding region at the 3′ end.
Physicochemical properties
TMV is a thermostable virus. On a dried leaf, it can withstand temperatures of up to 50 °C (122 °F) for 30 minutes.
TMV has an index of refraction of about 1.57.
Disease cycle
TMV does not have a distinct overwintering structure. Rather, it over-winters in infected tobacco stalks and leaves in the soil and on the surface of contaminated seed (TMV can even survive in contaminated tobacco products for many years, so smokers can accidentally transmit it by touch, although not in the smoke itself). Through direct contact with host plants via its vectors (normally insects such as aphids and leafhoppers), TMV goes through the infection process and then the replication process.
Infection and transmission
After its multiplication, it enters the neighboring cells through plasmodesmata. The infection does not spread through contact with insects, but instead spreads by direct contact to the neighboring cells. For its smooth entry, TMV produces a 30 kDa movement protein called P30 which enlarges the plasmodesmata. TMV most likely moves from cell-to-cell as a complex of the RNA, P30, and replicate proteins.
It can also spread through phloem for longer distance movement within the plant. Moreover, TMV can be transmitted from one plant to another by direct contact. Although TMV does not have defined transmission vectors, the virus can be easily transmitted from the infected hosts to the healthy plants by human handling.
Replication
Following entry into its host via mechanical inoculation, TMV uncoats itself to release its viral [+]RNA strand. As uncoating occurs, the MetHel:Pol gene is translated to make the capping enzyme MetHel and the RNA Polymerase. Then the viral genome will further replicate to produce multiple mRNAs via a [-]RNA intermediate primed by the tRNAHIS at the [+]RNA 3' end. The resulting mRNAs encode several proteins, including the coat protein and an RNA-dependent RNA polymerase (RdRp), as well as the movement protein. Thus TMV can replicate its own genome.
After the coat protein and RNA genome of TMV have been synthesized, they spontaneously assemble into complete TMV virions in a highly organized process. The protomers come together to form disks or 'lockwashers' composed of two layers of protomers arranged in a helix. The helical capsid grows by the addition of protomers to the end of the rod. As the rod lengthens, the RNA passes through a channel in its center and forms a loop at the growing end. In this way the RNA can easily fit as a spiral into the interior of the helical capsid.
Host and symptoms
Like other plant pathogenic viruses, TMV has a very wide host range and has different effects depending on the host being infected. Tobacco mosaic virus has been known to cause a production loss for flue-cured tobacco of up to two percent in North Carolina. It is known to infect members of nine plant families, and at least 125 individual species, including tobacco, tomato, pepper (all members of the Solanaceae), cucumbers, a number of ornamental flowers, and beans including Phaseolus vulgaris and Vigna unguiculata. There are many different strains. The first symptom of this virus disease is a light green coloration between the veins of young leaves. This is followed quickly by the development of a "mosaic" or mottled pattern of light and dark green areas in the leaves. Rugosity may also be seen where the infected plant leaves display small localized random wrinkles. These symptoms develop quickly and are more pronounced on younger leaves. Its infection does not result in plant death, but if infection occurs early in the season, plants are stunted. Lower leaves are subjected to "mosaic burn" especially during periods of hot and dry weather. In these cases, large dead areas develop in the leaves. This constitutes one of the most destructive phases of Tobacco mosaic virus infection. Infected leaves may be crinkled, puckered, or elongated. However, if TMV infects crops like grape and apple, it is almost symptomless. TMV is also able to infect and complete its replication cycle in plant pathogenic fungi: it can enter and replicate in cells of C. acutatum, C. clavatum, and C. theobromicola. This may not be an exception, although the phenomenon has neither been found nor, probably, been searched for in nature.
Environment
TMV is one of the most stable viruses and has a wide survival range. As long as the surrounding temperature remains below approximately 40 degrees Celsius, TMV can sustain its stable form. All it needs is a host to infect. Greenhouses and botanical gardens provide the most favorable conditions for TMV to spread, due to the high population density of possible hosts and the constant temperature throughout the year. TMV can also be cultured in vitro in sap, in which it can survive for up to 3,000 days.
Treatment and management
One of the common control methods for TMV is sanitation, which includes removing infected plants and washing hands in between each planting. Crop rotation should also be employed to avoid infected soil/seed beds for at least two years. As for any plant disease, looking for resistant strains against TMV may also be advised. Furthermore, the cross protection method can be administered, where the stronger strain of TMV infection is inhibited by infecting the host plant with a mild strain of TMV, similar to the effect of a vaccine.
In the past ten years, the application of genetic engineering on a host plant genome has been developed to allow the host plant to produce the TMV coat protein within their cells. It was hypothesized that the TMV genome will be re-coated rapidly upon entering the host cell, thus it prevents the initiation of TMV replication. Later it was found that the mechanism that protects the host from viral genome insertion is through gene silencing.
TMV is inhibited by a product of the myxomycete slime mold Physarum polycephalum. Both tobacco and the beans P. vulgaris and V. sinensis suffered almost no lesioning in vitro from TMV when treated with a P. polycephalum extract.
Research has shown that Bacillus spp. can be used to reduce the severity of symptoms from TMV in tobacco plants. In the study, treated tobacco plants had more growth and less build-up of TMV virions than tobacco plants that hadn't been treated.
Research conducted by H. Fraenkel-Conrat has shown the influence of acetic acid on Tobacco mosaic virus: according to this work, treatment with 67% acetic acid resulted in degradation of the virus.
Another possible source of prevention for TMV is the use of salicylic acid. A study completed by a research team at the University of Cambridge found that treating plants with salicylic acid reduced the amount of TMV viral RNAs and viral coat protein present in the tobacco plants. Their research showed that salicylic acid most likely was disrupting replication and transcription and more specifically, the RdRp complex.
Research has also revealed that humans have antibodies against Tobacco mosaic virus.
Scientific and environmental impact
The large amount of literature about TMV and its choice for many pioneering investigations in structural biology (including X-ray diffraction and X-ray crystallography), virus assembly and disassembly, and so on, are fundamentally due to the large quantities that can be obtained, plus the fact that it does not infect animals. After growing several hundred infected tobacco plants in a greenhouse, followed by a few simple laboratory procedures, a scientist can produce several grams of the virus. In fact, tobacco mosaic virus is so prolific that its inclusion bodies can be seen with only a light microscope.
James D. Watson, in his memoir The Double Helix, cites his x-ray investigation of TMV's helical structure as an important step in deducing the nature of the DNA molecule.
Applications
Plant viruses can be used to engineer viral vectors, tools commonly used by molecular biologists to deliver genetic material into plant cells; they are also sources of biomaterials and nanotechnology devices. Viral vectors based on TMV include those of the magnICON and TRBO plant expression technologies. Due to its cylindrical shape, high aspect ratio, self-assembling nature, and ability to incorporate metal coatings (nickel and cobalt) into its shell, TMV is an ideal candidate to be incorporated into battery electrodes. Addition of TMV to a battery electrode increases the reactive surface area by an order of magnitude, resulting in an increase in the battery's capacity by up to six times compared to a planar electrode geometry. RNAi can also be expressed in the phytopathogenic fungus Colletotrichum acutatum by virus-induced gene silencing (VIGS), using a recombinant vector based on TMV in which the ORF of the gene encoding green fluorescent protein (GFP) was transcribed in fungal cells from a duplicate of the TMV coat protein (CP) subgenomic mRNA promoter; this demonstrated that the approach can be used to obtain foreign protein expression in fungi. The TMV-based vector also enabled C. acutatum to transiently express exogenous GFP for up to six subcultures and for at least two months after infection, without the need to develop transformation technology.
Notes
References
Further reading
External links
Description of plant viruses – TMV – contains information on symptoms, hosts species, purification etc.
Further information
Electron microscope image of TM
Species described in 1892
Viruses described in the 19th century
Tobamovirus
Model organisms
Tobacco diseases
Viral plant pathogens and diseases
1898 in biology
1890s in science
1890s in the Netherlands
Biology in the Netherlands
Martinus Beijerinck | Tobacco mosaic virus | [
"Biology"
] | 3,331 | [
"Model organisms",
"Biological models"
] |
164,313 | https://en.wikipedia.org/wiki/Gravity%20wave | In fluid dynamics, gravity waves are waves in a fluid medium or at the interface between two media when the force of gravity or buoyancy tries to restore equilibrium. An example of such an interface is that between the atmosphere and the ocean, which gives rise to wind waves.
A gravity wave results when fluid is displaced from a position of equilibrium. The restoration of the fluid to equilibrium will produce a movement of the fluid back and forth, called a wave orbit. Gravity waves on an air–sea interface of the ocean are called surface gravity waves (a type of surface wave), while gravity waves within the body of the water (such as between parts of different densities) are called internal waves. Wind-generated waves on the water surface are examples of gravity waves, as are tsunamis, ocean tides, and the wakes of surface vessels.
The periods of wind-generated gravity waves on the free surface of the Earth's ponds, lakes, seas and oceans are predominantly between 0.3 and 30 seconds (corresponding to frequencies between 3 Hz and 0.03 Hz). Shorter waves are also affected by surface tension and are called gravity–capillary waves and (if hardly influenced by gravity) capillary waves. Alternatively, so-called infragravity waves, which are due to subharmonic nonlinear wave interaction with the wind waves, have periods longer than the accompanying wind-generated waves.
Atmosphere dynamics on Earth
In the Earth's atmosphere, gravity waves are a mechanism that produce the transfer of momentum from the troposphere to the stratosphere and mesosphere. Gravity waves are generated in the troposphere by frontal systems or by airflow over mountains. At first, waves propagate through the atmosphere without appreciable change in mean velocity. But as the waves reach more rarefied (thin) air at higher altitudes, their amplitude increases, and nonlinear effects cause the waves to break, transferring their momentum to the mean flow. This transfer of momentum is responsible for the forcing of the many large-scale dynamical features of the atmosphere. For example, this momentum transfer is partly responsible for the driving of the Quasi-Biennial Oscillation, and in the mesosphere, it is thought to be the major driving force of the Semi-Annual Oscillation. Thus, this process plays a key role in the dynamics of the middle atmosphere.
The effect of gravity waves in clouds can look like altostratus undulatus clouds, and the two are sometimes confused, but the formation mechanism is different. Atmospheric gravity waves reaching the ionosphere are responsible for the generation of traveling ionospheric disturbances and can be observed by radar.
Quantitative description
Deep water
The phase velocity $c$ of a linear gravity wave with wavenumber $k$ is given by the formula
$$c = \sqrt{\frac{g}{k}},$$
where $g$ is the acceleration due to gravity. When surface tension is important, this is modified to
$$c = \sqrt{\frac{g}{k} + \frac{\sigma k}{\rho}},$$
where $\sigma$ is the surface tension coefficient and $\rho$ is the density.
The gravity wave represents a perturbation around a stationary state, in which there is no velocity. Thus, the perturbation introduced to the system is described by a velocity field of infinitesimally small amplitude, $(u'(x,z,t),\, w'(x,z,t))$. Because the fluid is assumed incompressible, this velocity field has the streamfunction representation
$$\mathbf{u}' = (u', w') = (\psi_z, -\psi_x),$$
where the subscripts indicate partial derivatives. In this derivation it suffices to work in two dimensions $(x, z)$, where gravity points in the negative z-direction. Next, in an initially stationary incompressible fluid, there is no vorticity, and the fluid stays irrotational, hence $\nabla \times \mathbf{u}' = 0$. In the streamfunction representation, $\nabla^2 \psi = 0$. Next, because of the translational invariance of the system in the x-direction, it is possible to make the ansatz
$$\psi(x, z, t) = e^{ik(x - ct)}\, \Psi(z),$$
where $k$ is a spatial wavenumber. Thus, the problem reduces to solving the equation
$$\left(D^2 - k^2\right)\Psi = 0, \qquad D = \frac{d}{dz}.$$
We work in a sea of infinite depth, so the boundary condition is at $z = -\infty$. The undisturbed surface is at $z = 0$, and the disturbed or wavy surface is at $z = \eta$, where $\eta$ is small in magnitude. If no fluid is to leak out of the bottom, we must have the condition
$$\Psi \to 0 \quad \text{as } z \to -\infty.$$
Hence, $\Psi = A e^{kz}$ on $z \le 0$, where $A$ and the wave speed $c$ are constants to be determined from conditions at the interface.
The free-surface condition: At the free surface $z = \eta(x,t)$, the kinematic condition holds:
$$\frac{\partial \eta}{\partial t} + u' \frac{\partial \eta}{\partial x} = w'(\eta).$$
Linearizing, this is simply
$$\frac{\partial \eta}{\partial t} = w'(0),$$
where the velocity $w'$ is linearized on to the surface $z = 0$. Using the normal-mode and streamfunction representations, this condition is $c\eta = \Psi$, the second interfacial condition.
Pressure relation across the interface: For the case with surface tension, the pressure difference over the interface at $z = \eta$ is given by the Young–Laplace equation:
$$p(z = \eta) = -\sigma \kappa,$$
where $\sigma$ is the surface tension and $\kappa$ is the curvature of the interface, which in a linear approximation is
$$\kappa = \nabla^2 \eta = \eta_{xx}.$$
Thus,
$$p(z = \eta) = -\sigma \eta_{xx}.$$
However, this condition refers to the total pressure (base + perturbed), thus
$$P(\eta) + p'(z = \eta) = -\sigma \eta_{xx}.$$
(As usual, the perturbed quantities can be linearized onto the surface $z = 0$.) Using hydrostatic balance, in the form
$$P = -\rho g z + \mathrm{const.},$$
this becomes
$$p' = g\eta\rho - \sigma \eta_{xx} \quad \text{on } z = 0.$$
The perturbed pressures are evaluated in terms of streamfunctions, using the horizontal momentum equation of the linearised Euler equations for the perturbations,
$$\frac{\partial u'}{\partial t} = -\frac{1}{\rho} \frac{\partial p'}{\partial x},$$
to yield
$$p' = \rho c\, D\Psi.$$
Putting this last equation and the jump condition together,
$$c\rho\, D\Psi = g\eta\rho - \sigma \eta_{xx}.$$
Substituting the second interfacial condition $c\eta = \Psi$ and using the normal-mode representation, this relation becomes
$$c^2 \rho\, D\Psi = g\rho\Psi + \sigma k^2 \Psi.$$
Using the solution $\Psi = A e^{kz}$, this gives
$$c = \sqrt{\frac{g}{k} + \frac{\sigma k}{\rho}}.$$
Since $c = \omega/k$ is the phase speed in terms of the angular frequency $\omega$ and the wavenumber $k$, the gravity wave angular frequency can be expressed as
$$\omega = \sqrt{gk}.$$
The group velocity of a wave (that is, the speed at which a wave packet travels) is given by
$$c_g = \frac{d\omega}{dk},$$
and thus for a gravity wave,
$$c_g = \frac{1}{2} \sqrt{\frac{g}{k}} = \frac{1}{2} c.$$
The group velocity is one half the phase velocity. A wave in which the group and phase velocities differ is called dispersive.
Shallow water
Gravity waves traveling in shallow water (where the depth is much less than the wavelength) are nondispersive: the phase and group velocities are identical and independent of wavelength and frequency. When the water depth is $h$,
$$c_p = c_g = \sqrt{gh}.$$
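As a minimal numerical sketch of these dispersion relations (Python; the water-property values are typical assumed numbers, not taken from this article):

import math

g = 9.81        # gravitational acceleration, m/s^2 (assumed)
sigma = 0.074   # surface tension of clean water, N/m (assumed)
rho = 1000.0    # density of water, kg/m^3 (assumed)

def phase_speed_deep(k: float) -> float:
    """Deep-water gravity wave: c = sqrt(g/k)."""
    return math.sqrt(g / k)

def phase_speed_capillary_gravity(k: float) -> float:
    """With surface tension: c = sqrt(g/k + sigma*k/rho)."""
    return math.sqrt(g / k + sigma * k / rho)

def group_speed_deep(k: float) -> float:
    """c_g = d(omega)/dk with omega = sqrt(g*k): half the phase speed."""
    return 0.5 * math.sqrt(g / k)

def speed_shallow(h: float) -> float:
    """Shallow water: nondispersive, c_p = c_g = sqrt(g*h)."""
    return math.sqrt(g * h)

# A deep-water wave of 100 m wavelength (k = 2*pi/wavelength):
k = 2 * math.pi / 100.0
print(phase_speed_deep(k), group_speed_deep(k))  # ~12.5 m/s and ~6.2 m/s
print(speed_shallow(4000.0))                     # ~198 m/s in a 4 km deep ocean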
Generation of ocean waves by wind
Wind waves, as their name suggests, are generated by wind transferring energy from the atmosphere to the ocean's surface, and capillary-gravity waves play an essential role in this effect. There are two distinct mechanisms involved, called after their proponents, Phillips and Miles.
In the work of Phillips, the ocean surface is imagined to be initially flat (glassy), and a turbulent wind blows over the surface. When a flow is turbulent, one observes a randomly fluctuating velocity field superimposed on a mean flow (contrast with a laminar flow, in which the fluid motion is ordered and smooth). The fluctuating velocity field gives rise to fluctuating stresses (both tangential and normal) that act on the air-water interface. The normal stress, or fluctuating pressure acts as a forcing term (much like pushing a swing introduces a forcing term). If the frequency and wavenumber of this forcing term match a mode of vibration of the capillary-gravity wave (as derived above), then there is a resonance, and the wave grows in amplitude. As with other resonance effects, the amplitude of this wave grows linearly with time.
The air-water interface is now endowed with a surface roughness due to the capillary-gravity waves, and a second phase of wave growth takes place. A wave established on the surface either spontaneously as described above, or in laboratory conditions, interacts with the turbulent mean flow in a manner described by Miles. This is the so-called critical-layer mechanism. A critical layer forms at a height where the wave speed c equals the mean turbulent flow U. As the flow is turbulent, its mean profile is logarithmic, and its second derivative is thus negative. This is precisely the condition for the mean flow to impart its energy to the interface through the critical layer. This supply of energy to the interface is destabilizing and causes the amplitude of the wave on the interface to grow in time. As in other examples of linear instability, the growth rate of the disturbance in this phase is exponential in time.
This Miles–Phillips Mechanism process can continue until an equilibrium is reached, or until the wind stops transferring energy to the waves (i.e., blowing them along) or when they run out of ocean distance, also known as fetch length.
Analog gravity models and surface gravity waves
Surface gravity waves have been recognized as a powerful tool for studying analog gravity models, providing experimental platforms for phenomena typically found in black hole physics. In an experiment, surface gravity waves were utilized to simulate phase space horizons, akin to event horizons of black holes. This experiment observed logarithmic phase singularities, which are central to phenomena like Hawking radiation, and the emergence of Fermi-Dirac distributions, which parallel quantum mechanical systems.
By propagating surface gravity water waves, researchers were able to recreate the energy wave functions of an inverted harmonic oscillator, a system that serves as an analog for black hole physics. The experiment demonstrated how the free evolution of these classical waves in a controlled laboratory environment can reveal the formation of horizons and singularities, shedding light on fundamental aspects of gravitational theories and quantum mechanics.
See also
Acoustic wave
Asteroseismology
Green's law
Horizontal convective rolls
Lee wave
Lunitidal interval
Mesosphere#Dynamic features
Morning Glory cloud
Orr–Sommerfeld equation
Rayleigh–Taylor instability
Rogue wave
Skyquake
Notes
References
Gill, A. E., "Gravity wave". Glossary of Meteorology. American Meteorological Society (15 December 2014).
Crawford, Frank S., Jr. (1968). Waves (Berkeley Physics Course, Vol. 3), (McGraw-Hill, 1968) Free online version
Alexander, P., A. de la Torre, and P. Llamedo (2008), Interpretation of gravity wave signatures in GPS radio occultations, J. Geophys. Res., 113, D16117, doi:10.1029/2007JD009390.
Further reading
External links | Gravity wave | [
"Chemistry"
] | 2,051 | [
"Gravity waves",
"Fluid dynamics"
] |
164,318 | https://en.wikipedia.org/wiki/Quasi-biennial%20oscillation | The quasi-biennial oscillation (QBO) is a quasiperiodic oscillation of the equatorial zonal wind between easterlies and westerlies in the tropical stratosphere with a mean period of 28 to 29 months. The alternating wind regimes develop at the top of the lower stratosphere and propagate downwards at about 1 km per month until they are dissipated at the tropopause. Downward motion of the easterlies is usually more irregular than that of the westerlies. The amplitude of the easterly phase is about twice as strong as that of the westerly phase. At the top of the vertical QBO domain, easterlies dominate, while at the bottom, westerlies are more likely to be found. At the 30 hPa level, with regards to monthly mean zonal winds, the strongest recorded easterly was 29.55 m/s in November 2005, while the strongest recorded westerly was only 15.62 m/s in June 1995.
Theory
In 1883, the eruption of Krakatoa led to visual tracking of subsequent volcanic ash in the stratosphere. This visual tracking led to the discovery of easterly winds between 25 and 30 km above the surface. The winds were then called the Krakatau easterlies. In 1908, data balloons launched above Lake Victoria in Africa recorded westerly winds in the stratospheric levels of the atmosphere. These findings, at the time, were thought to contradict the 1883 findings. However, the winds that would become known as the QBO were discovered to oscillate between westerly and easterly in the 1950s by researchers at the UK Meteorological Office. The cause of these QBO winds remained unclear for some time. Radiosonde soundings showed that its phase was not related to the annual cycle, as is the case for many other stratospheric circulation patterns. In the 1970s it was recognized by Richard Lindzen and James Holton that the periodic wind reversal was driven by atmospheric waves emanating from the tropical troposphere that travel upwards and are dissipated in the stratosphere by radiative cooling. The precise nature of the waves responsible for this effect was heavily debated; in recent years, however, gravity waves have come to be seen as a major contributor and the QBO is now simulated in a growing number of climate models.
Effects
Effects of the QBO include mixing of stratospheric ozone by the secondary circulation caused by the QBO, modification of monsoon precipitation, and an influence on stratospheric circulation in northern hemisphere winter (mediated partly by a change in the frequency of sudden stratospheric warmings). Easterly phases of the QBO often coincide with more sudden stratospheric warmings, a weaker Atlantic jet stream, and cold winters in Northern Europe and the Eastern U.S. In contrast, westerly phases of the QBO often coincide with mild winters in the Eastern U.S. and a strong Atlantic jet stream with mild, wet winters in Northern Europe. In addition, the QBO has been shown to affect hurricane frequency during hurricane seasons in the Atlantic. Research has also been conducted investigating a possible relationship between ENSO (El Niño–Southern Oscillation) and the QBO.
Observation of the QBO with weather balloons
The Free University of Berlin supplies a QBO data set that comprises radiosonde observations from Canton Island, Gan, and Singapore. The plot below shows the QBO during the 1980s.
Recent observations
The first significant observed deviation from the normal QBO since its discovery in the early 1950s was noted beginning in February 2016, when the transition to easterly winds was disrupted by a new band of westerly winds that formed unexpectedly. The lack of a reliable QBO cycle deprives forecasters of a valuable tool. Since the QBO has a strong influence on the North Atlantic Oscillation and thereby northern European weather, scientists speculated that the coming winter could be warmer and stormier in that region.
NASA scientists have been researching to test if the extremely strong 2014–2016 El Niño, climate change, or some other factor might be involved. They are trying to determine whether this is more of a once-in-a-generation event or a sign of the changing climate.
See also
North Atlantic oscillation
References
Further reading
External links
The Berlin QBO data series since 2024 provided by the Institute of Meteorology and Climate Research at the Karlsruhe Institute of Technology (1953–present) also available as netcdf on zenodo
NASA Goddard QBO web page (1980-present)
Tropical meteorology
Atmospheric dynamics
Regional climate effects
Climate oscillations | Quasi-biennial oscillation | [
"Chemistry"
] | 941 | [
"Atmospheric dynamics",
"Fluid dynamics"
] |
164,321 | https://en.wikipedia.org/wiki/Secondary%20circulation | In fluid dynamics, a secondary circulation or secondary flow is a weak circulation that plays a key maintenance role in sustaining a stronger primary circulation that contains most of the kinetic energy and momentum of a flow. For example, a tropical cyclone's primary winds are tangential (horizontally swirling), but its evolution and maintenance against friction involves an in-up-out secondary circulation flow that is also important to its clouds and rain. On a planetary scale, Earth's winds are mostly east–west or zonal, but that flow is maintained against friction by the Coriolis force acting on a small north–south or meridional secondary circulation.
See also
Hough function
Primitive equations
Secondary flow
References
Geophysics
Physical oceanography
Atmospheric dynamics
Fluid mechanics | Secondary circulation | [
"Physics",
"Chemistry",
"Engineering"
] | 150 | [
"Applied and interdisciplinary physics",
"Atmospheric dynamics",
"Civil engineering",
"Physical oceanography",
"Geophysics",
"Fluid mechanics",
"Fluid dynamics stubs",
"Fluid dynamics"
] |
164,332 | https://en.wikipedia.org/wiki/Chiaroscuro | In art, chiaroscuro (Italian for "light–dark") is the use of strong contrasts between light and dark, usually bold contrasts affecting a whole composition. It is also a technical term used by artists and art historians for the use of contrasts of light to achieve a sense of volume in modelling three-dimensional objects and figures. Similar effects in cinema, and black and white and low-key photography, are also called chiaroscuro. Taken to its extreme, the use of shadow and contrast to focus strongly on the subject of a painting is called tenebrism.
Further specialized uses of the term include chiaroscuro woodcut for colour woodcuts printed with different blocks, each using a different coloured ink; and chiaroscuro for drawings on coloured paper in a dark medium with white highlighting.
Chiaroscuro originated in the Renaissance period but is most notably associated with Baroque art. Chiaroscuro is one of the canonical painting modes of the Renaissance (alongside cangiante, sfumato and unione) (see also Renaissance art). Artists known for using the technique include Leonardo da Vinci, Caravaggio, Rembrandt, Vermeer, Goya, and Georges de La Tour.
History
Origin in the chiaroscuro drawing
The term chiaroscuro originated during the Renaissance as drawing on coloured paper, where the artist worked from the paper's base tone toward light using white gouache, and toward dark using ink, bodycolour or watercolour. These in turn drew on traditions in illuminated manuscripts going back to late Roman Imperial manuscripts on purple-dyed vellum. Such works are called "chiaroscuro drawings", but may only be described in modern museum terminology by such formulae as "pen on prepared paper, heightened with white bodycolour". Chiaroscuro woodcuts began as imitations of this technique. When discussing Italian art, the term sometimes is used to mean painted images in monochrome or two colours, more generally known in English by the French equivalent, grisaille. The term broadened in meaning early on to cover all strong contrasts in illumination between light and dark areas in art, which is now the primary meaning.
Chiaroscuro modelling
The more technical use of the term chiaroscuro is the effect of light modelling in painting, drawing, or printmaking, where three-dimensional volume is suggested by the value gradation of colour and the analytical division of light and shadow shapes—often called "shading". The invention of these effects in the West, "skiagraphia" or "shadow-painting" to the Ancient Greeks, traditionally was ascribed to the famous Athenian painter of the fifth century BC, Apollodoros. Although few Ancient Greek paintings survive, their understanding of the effect of light modelling still may be seen in the late-fourth-century BC mosaics of Pella, Macedonia, in particular the Stag Hunt Mosaic, in the House of the Abduction of Helen, inscribed gnosis epoesen, or 'knowledge did it'.
The technique also survived in rather crude standardized form in Byzantine art and was refined again in the Middle Ages to become standard by the early fifteenth century in painting and manuscript illumination in Italy and Flanders, and then spread to all Western art.
According to the theory of the art historian Marcia B. Hall, which has gained considerable acceptance, chiaroscuro is one of four modes of painting colours available to Italian High Renaissance painters, along with cangiante, sfumato and unione.
The Raphael painting illustrated, with light coming from the left, demonstrates both delicate modelling chiaroscuro to give volume to the body of the model, and strong chiaroscuro in the more common sense, in the contrast between the well-lit model and the very dark background of foliage. To further complicate matters, however, the compositional chiaroscuro of the contrast between model and background probably would not be described using this term, as the two elements are almost completely separated. The term is mostly used to describe compositions where at least some principal elements of the main composition show the transition between light and dark, as in the Baglioni and Geertgen tot Sint Jans paintings illustrated above and below.
Chiaroscuro modelling is now taken for granted, but it has had some opponents: the English portrait miniaturist Nicholas Hilliard cautioned in his treatise on painting against all but the minimal use we see in his works, reflecting the views of his patron Queen Elizabeth I of England: "seeing that best to show oneself needeth no shadow of place but rather the open light... Her Majesty... chose her place to sit for that purpose in the open alley of a goodly garden, where no tree was near, nor any shadow at all..."
In drawings and prints, modelling chiaroscuro often is achieved by the use of hatching, or shading by parallel lines. Washes, stipple or dotting effects, and "surface tone" in printmaking are other techniques.
Chiaroscuro woodcuts
Chiaroscuro woodcuts are old master prints in woodcut using two or more blocks printed in different colours; they do not necessarily feature strong contrasts of light and dark. They were first produced to achieve similar effects to chiaroscuro drawings. After some early experiments in book-printing, the true chiaroscuro woodcut conceived for two blocks was probably first invented by Lucas Cranach the Elder in Germany in 1508 or 1509, though he backdated some of his first prints and added tone blocks to some prints first produced for monochrome printing, swiftly followed by Hans Burgkmair the Elder. The formschneider or block-cutter who worked in the press of Johannes Schott in Strasbourg is claimed to be the first to achieve chiaroscuro woodcuts with three blocks. Despite Vasari's claim of Italian precedence for Ugo da Carpi, it is clear that his examples, the first Italian ones, date to around 1516. Other sources suggest that the first chiaroscuro woodcut was the Triumph of Julius Caesar, created by the Italian painter Andrea Mantegna between 1470 and 1500. Another view states that "Lucas Cranach backdated two of his works in an attempt to grab the glory" and that the technique was invented "in all probability" by Burgkmair, "who was commissioned by the emperor Maximilian to find a cheap and effective way of getting the imperial image widely disseminated as he needed to drum up money and support for a crusade".
Other printmakers who have used this technique include Hans Wechtlin, Hans Baldung Grien, and Parmigianino. In Germany, the technique achieved its greatest popularity around 1520, but it was used in Italy throughout the sixteenth century. Later artists such as Goltzius sometimes made use of it. In most German two-block prints, the keyblock (or "line block") was printed in black and the tone block or blocks had flat areas of colour. In Italy, chiaroscuro woodcuts were produced without keyblocks to achieve a very different effect.
Compositional chiaroscuro to Caravaggio
Manuscript illumination was, as in many areas, especially experimental in attempting ambitious lighting effects since the results were not for public display. The development of compositional chiaroscuro received a considerable impetus in northern Europe from the vision of the Nativity of Jesus of Saint Bridget of Sweden, a very popular mystic. She described the infant Jesus as emitting light; depictions increasingly reduced other light sources in the scene to emphasize this effect, and the Nativity remained very commonly treated with chiaroscuro through to the Baroque. Hugo van der Goes and his followers painted many scenes lit only by candle or the divine light from the infant Christ. As with some later painters, in their hands the effect was of stillness and calm rather than the drama with which it would be used during the Baroque.
Strong chiaroscuro became a popular effect during the sixteenth century in Mannerism and Baroque art. Divine light continued to illuminate, often rather inadequately, the compositions of Tintoretto, Veronese, and their many followers. The use of dark subjects dramatically lit by a shaft of light from a single constricted and often unseen source, was a compositional device developed by Ugo da Carpi (c. 1455 – c. 1523), Giovanni Baglione (1566–1643), and Caravaggio (1571–1610), the last of whom was crucial in developing the style of tenebrism, where dramatic chiaroscuro becomes a dominant stylistic device.
17th and 18th centuries
Tenebrism was especially practiced in Spain and the Spanish-ruled Kingdom of Naples, by Jusepe de Ribera and his followers. Adam Elsheimer (1578–1610), a German artist living in Rome, produced several night scenes lit mainly by fire, and sometimes moonlight. Unlike Caravaggio's, his dark areas contain very subtle detail and interest. The influences of Caravaggio and Elsheimer were strong on Peter Paul Rubens, who exploited their respective approaches to tenebrosity for dramatic effect in paintings such as The Raising of the Cross (1610–1611). Artemisia Gentileschi (1593–1656), a Baroque artist who was a follower of Caravaggio, was also an outstanding exponent of tenebrism and chiaroscuro.
A particular genre that developed was the nocturnal scene lit by candlelight, which looked back to earlier northern artists such as Geertgen tot Sint Jans and more immediately, to the innovations of Caravaggio and Elsheimer. This theme played out with many artists from the Low Countries in the first few decades of the seventeenth century, where it became associated with the Utrecht Caravaggisti such as Gerrit van Honthorst and Dirck van Baburen, and with Flemish Baroque painters such as Jacob Jordaens. Rembrandt van Rijn's (1606–1669) early works from the 1620s also adopted the single-candle light source. The nocturnal candle-lit scene re-emerged in the Dutch Republic in the mid-seventeenth century on a smaller scale in the works of fijnschilders such as Gerrit Dou and Gottfried Schalken.
Rembrandt's own interest in effects of darkness shifted in his mature works. He relied less on the sharp contrasts of light and dark that marked the Italian influences of the earlier generation, a factor found in his mid-seventeenth-century etchings. In that medium he shared many similarities with his contemporary in Italy, Giovanni Benedetto Castiglione, whose work in printmaking led him to invent the monotype.
Outside the Low Countries, artists such as Georges de La Tour and Trophime Bigot in France and Joseph Wright of Derby in England, carried on with such strong, but graduated, candlelight chiaroscuro. Watteau used a gentle chiaroscuro in the leafy backgrounds of his fêtes galantes, and this was continued in paintings by many French artists, notably Fragonard. At the end of the century Fuseli and others used a heavier chiaroscuro for romantic effect, as did Delacroix and others in the nineteenth century.
Use of the term
The French use of the term, clair-obscur, was introduced by the seventeenth-century art critic Roger de Piles in the course of a famous argument (Débat sur le coloris) on the relative merits of drawing and colour in painting (his Dialogues sur le coloris, 1673, was a key contribution to the Débat).
In English, the Italian term has been used since at least the late seventeenth century. The term is less frequently used of art after the late nineteenth century, although the Expressionist and other modern movements make great use of the effect.
Especially since the strong twentieth-century rise in the reputation of Caravaggio, in non-specialist use the term is mainly used for strong chiaroscuro effects such as his, or Rembrandt's. As the Tate puts it: "Chiaroscuro is generally only remarked upon when it is a particularly prominent feature of the work, usually when the artist is using extreme contrasts of light and shade".
Cinema and photography
Chiaroscuro is used in cinematography for extreme low-key and high-contrast lighting to create distinct areas of light and darkness in films, especially in black and white films. Classic examples are The Cabinet of Dr. Caligari (1920), Nosferatu (1922), Metropolis (1927), The Hunchback of Notre Dame (1939), The Devil and Daniel Webster (1941), and the black and white scenes in Andrei Tarkovsky's Stalker (1979).
For example, in Metropolis, chiaroscuro lighting creates contrast between light and dark mise-en-scene and figures. The effect highlights the differences between the capitalist elite and the workers.
In photography, chiaroscuro can be achieved by using "Rembrandt lighting". In more highly developed photographic processes, the technique may be termed "ambient/natural lighting", although when done so for the effect, the look is artificial and not generally documentary in nature. In particular, Bill Henson along with others, such as W. Eugene Smith, Josef Koudelka, Lothar Wolleh, Annie Leibovitz, Floria Sigismondi, and Ralph Gibson may be considered some of the modern masters of chiaroscuro in documentary photography.
Perhaps the most direct use of chiaroscuro in filmmaking is Stanley Kubrick's 1975 film Barry Lyndon. When informed that no lens then had a sufficiently wide aperture to shoot a costume drama set in grand palaces using only candlelight, Kubrick bought and retrofitted a special lens for the purpose: a modified Mitchell BNC camera and a Zeiss lens manufactured for the rigors of space photography, with a maximum aperture of f/0.7. The natural, unaugmented lighting of the sets in the film exemplified low-key, natural lighting in filmwork at its most extreme, outside of the Eastern European/Soviet filmmaking tradition (itself exemplified by the harsh low-key lighting style employed by Soviet filmmaker Sergei Eisenstein).
Sven Nykvist, the longtime collaborator of Ingmar Bergman, also informed much of his photography with chiaroscuro realism, as did Gregg Toland, who influenced such cinematographers as László Kovács, Vilmos Zsigmond, and Vittorio Storaro with his use of deep and selective focus augmented with strong horizon-level key lighting penetrating through windows and doorways. Much of the celebrated film noir tradition relies on techniques related to chiaroscuro that Toland perfected in the early 1930s (though high-key lighting, stage lighting, frontal lighting, and other film noir effects are interspersed in ways that diminish the chiaroscuro claim).
Gallery
Chiaroscuro in modelling; paintings
Chiaroscuro in modelling; prints and drawings
Chiaroscuro as a major element in composition: painting
Chiaroscuro as a major element in composition: photography
Chiaroscuro faces
Chiaroscuro drawings and woodcuts
See also
Light-and-shade watermark
Notes
References
David Landau & Peter Parshall, The Renaissance Print, pp. 179–202; 273–81 & passim; Yale, 1996.
External links
Chiaroscuro Woodcut from the Metropolitan Museum of Art Timeline of Art History
Chiaroscuro woodcut from Spencer Museum of Art, Kansas
(Modelling) chiaroscuro from Evansville University
Visual arts terminology
Artistic techniques
Italian words and phrases
Composition in visual art
Shadows | Chiaroscuro | [
"Physics"
] | 3,272 | [
"Optical phenomena",
"Physical phenomena",
"Shadows"
] |
164,346 | https://en.wikipedia.org/wiki/Society%20of%20Motion%20Picture%20and%20Television%20Engineers | The Society of Motion Picture and Television Engineers (SMPTE), founded in 1916 as the Society of Motion Picture Engineers or SMPE, is a global professional association of engineers, technologists, and executives working in the media and entertainment industry. As an internationally recognized standards organization, SMPTE has published more than 800 technical standards and related documents for broadcast, filmmaking, digital cinema, audio recording, information technology (IT), and medical imaging.
SMPTE also publishes the SMPTE Motion Imaging Journal, provides networking opportunities for its members, produces academic conferences and exhibitions, and performs other industry-related functions. SMPTE membership is open to any individual or organization with an interest in the subject matter. In the US, SMPTE is a 501(c)3 non-profit charitable organization.
History
An informal organizational meeting was held in April 1916 at the Astor Hotel in New York City. Enthusiasm and interest increased, and meetings were held in New York and Chicago, culminating in the founding of the Society of Motion Picture Engineers in the Oak Room of the Raleigh Hotel, Washington DC, on 24 July. Ten industry stakeholders attended and signed the Articles of Incorporation. Papers of incorporation were executed on 24 July 1916 and filed on 10 August in Washington DC. With a second meeting scheduled, invitations were telegraphed to the industry friends of founder C. Francis Jenkins, i.e., key players and engineering executives in the motion picture industry.
Three months later, 26 people attended the first “official” meeting of the Society, the SMPE, at the Hotel Astor in New York City, on 2 and 3 October 1916. Jenkins was formally elected president, a constitution was ratified, an emblem for the Society was approved, and six committees were established.
At the July 1917 Society Convention in Chicago, a set of specifications including the dimensions of 35 mm film, 16 frames per second, etc. were adopted. SMPE set and issued a formal document reached by consensus, its first as an accredited Standards Development Organization (SDO), registering the specifications with the United States Bureau of Standards.
The SMPTE Centennial Gala took place on Friday, 28 October 2016, following the annual Conference and Exhibition; James Cameron and Douglas Trumbull received SMPTE's top honors. SMPTE officially bestowed Honorary Membership, the Society's highest honor, upon Avatar and Titanic director Cameron in recognition of his work advancing visual effects (VFX), motion capture, and stereoscopic 3D photography, as well as his experimentation in high-frame-rate (HFR) filmmaking. SMPTE honored Trumbull, who was responsible for the VFX in 2001: A Space Odyssey and Blade Runner, with the Society's most prestigious award, the Progress Medal, presented by Oscar-winning special effects cinematographer Richard Edlund. The award recognized Trumbull's contributions to VFX, stereoscopic 3D, and HFR cinema, including his current work to enable stereoscopic 3D with his 120-frames-per-second Magi system.
Educational and professional development activities
SMPTE's educational and professional development activities include technical presentations at regular meetings of its local Sections, annual and biennial conferences in the US and Australia and the SMPTE Motion Imaging Journal. The society sponsors many awards, the oldest of which are the SMPTE Progress Medal, the Samuel Warner Memorial Medal, and the David Sarnoff Medal. SMPTE also has a number of Student Chapters and sponsors scholarships for college students in the motion imaging disciplines.
Standards
SMPTE standards documents are copyrighted and may be purchased by the general public from the SMPTE website or from other distributors of technical standards. Significant standards promulgated by SMPTE include:
All film and television transmission formats and media, including digital.
Physical interfaces for transmission of television signals and related data (such as SMPTE timecode and the serial digital interface) (SDI)
SMPTE color bars
Test card patterns and other diagnostic tools
The Material Exchange Format (MXF)
SMPTE 2110
SMPTE ST 421:2013 (VC-1 video codec)
Film format
SMP(T)E's first standard established a 35 mm film width, four sprocket holes per frame, and a 1.37:1 picture ratio. Until then, there were competing film formats. With the standard, theaters could all run the same films.
Film frame rate
SMP(T)E's 1927 standard set the speed at which sound film is shown at 24 frames per second.
3D television
SMPTE's taskforce on "3D to the home" produced a report on the issues and challenges and suggested minimum standards for the 3D home master that would be distributed after post-production to the ingest points of distribution channels for 3D video content. A group within the standards committees has begun to work on the formal definition of the SMPTE 3D Home Master.
Digital cinema
In 1999, SMPTE established the DC28 technology committee, for the foundations of Digital Cinema.
Membership
SMPTE Fellows
Terry Adams, NBC Olympics, LLC
Andy Beale, BT Sport
Lynn D. Claudy, National Association of Broadcasters
Lawrence R. Kaplan, CEO of SDVI
Honors and awards program
The SMPTE presents awards to individuals for outstanding contributions in fields of the society.
Honorary membership and the honor roll
Recipients include:
Renville "Ren" H. McMann Jr. (2017)
James Cameron (2016)
Oscar B. "O.B." Hanson (2015)
George Lucas (2014)
John Logie Baird (2014)
Philo Taylor Farnsworth (1996)
Ray M. Dolby (1992)
Linwood G. Dunn (1984)
Herbert T. Kalmus (1958)
Walt Disney (1955)
Vladimir K. Zworykin (1950)
Samuel L. Warner (1946)
George Eastman (1928)
Thomas Alva Edison (1928)
Louis Lumiere (1928)
C. Francis Jenkins (1926)
Progress Medal
The Progress Medal, instituted in 1935, is SMPTE's oldest and most prestigious medal, and is awarded annually for contributions to engineering aspects of the film and/or television industries.
Recipients include:
Douglas Trumbull (2016)
Ioan Allen (2014)
David Wood (2012)
Edwin Catmull (2011)
Birney Dayton (2008)
Clyde D. Smith (2007)
Roderick Snell (2006)
S. Merrill Weiss (2005)
Dr. Kees Immink (2004)
Stanley N. Baron (2003)
William C. Miller (2002)
Bernard J. Lechner (2001)
Edwin E. Catmall (1996)
Ray Dolby (1983)
Harold E. Edgerton (1959)
Fred Waller (1953)
Vladimir K. Zworykin (1950)
John G. Frayne (1947)
Walt Disney (1940)
Herbert Kalmus (1938)
Edward W. Kellogg (1937)
Kenneth Mees (1936)
David Sarnoff Gold Medal
Chuck Pagano (2013)
James M. DeFilippis (2012)
Bernard J. Lechner (1996)
Stanley N. Baron (1991)
William F. Schreiber (1990)
Adrian Ettlinger (1976)
Joseph A. Flaherty, Jr. (1974)
Peter C. Goldmark (1969)
W. R. G. Baker (1959)
Albert Rose (1958)
Charles Ginsburg (1957)
Robert E. Shelby (1956)
Arthur V. Loughren (1953)
Otto H. Schade (1951)
Eastman Kodak Gold Medal
The Eastman Kodak Gold Medal, instituted in 1967, recognizes outstanding contributions that lead to new or unique educational programs utilizing motion pictures, television, high-speed and instrumentation photography or other photography sciences. Recent recipients are
Andrew Laszlo (2006)
James MacKay (2005)
Dr. Roderick T. Ryan (2004)
George Spiro Dibie (2003)
Jean-Pierre Beauviala (2002)
Related organizations
Related organizations include
Advanced Television Systems Committee (ATSC)
Audio Engineering Society (AES)
BBC Research Department
Digital Video Broadcasting
European Broadcasting Union (EBU)
ITU Radiocommunication Sector (formerly known as the CCIR)
ITU Telecommunication Sector (formerly known as the CCITT)
Institute of Electrical and Electronics Engineers (IEEE)
Joint Photographic Experts Group (JPEG)
Moving Picture Experts Group (MPEG)
See also
Digital Picture Exchange
General Exchange Format (GXF)
Glossary of video terms
Outline of film (Extensive alphabetical listing)
Media Dispatch Protocol SMPTE 2032 parts 1, 2 and 3
Video tape recorder (VTR) standards defined by SMPTE
References
Bibliography
Charles S. Swartz (editor). Understanding Digital Cinema. A Professional Handbook. Elsevier, 2005.
Philip J. Cianci (Editorial Content Director), The SMPTE Chronicle, Vol. I 1916 – 1949 Motion Pictures, Vol. II 1950 – 1989 Television, Vol III. 1990 – 2016 Digital Media, SMPTE, 2022.
Philip J. Cianci (Editorial Content Director), Magic and Miracles - 100 Years of Moving Image Science and Technology - The Work of the Society of Motion Picture and Television Engineers, SMPTE, 2017.
Philip J. Cianci (Editorial Content Director), The Honor Roll and Honorary Members of The Society of Motion Picture and Television Engineers, SMPTE, 2016
1916 establishments in the United States
3D imaging
Broadcast engineering
Economy of Westchester County, New York
Film and video technology
Organizations awarded an Academy Honorary Award
Organizations based in New York (state)
Science and technology in New York (state)
Television terminology
White Plains, New York | Society of Motion Picture and Television Engineers | [
"Engineering"
] | 1,959 | [
"Broadcast engineering",
"Electronic engineering"
] |
164,402 | https://en.wikipedia.org/wiki/Magnetic%20dipole | In electromagnetism, a magnetic dipole is the limit of either a closed loop of electric current or a pair of poles as the size of the source is reduced to zero while keeping the magnetic moment constant.
It is a magnetic analogue of the electric dipole, but the analogy is not perfect. In particular, a true magnetic monopole, the magnetic analogue of an electric charge, has never been observed in nature. However, magnetic monopole quasiparticles have been observed as emergent properties of certain condensed matter systems. Moreover, one form of magnetic dipole moment is associated with a fundamental quantum property—the spin of elementary particles.
Because magnetic monopoles do not exist, the magnetic field at a large distance from any static magnetic source looks like the field of a dipole with the same dipole moment. For higher-order sources (e.g. quadrupoles) with no dipole moment, their field decays towards zero with distance faster than a dipole field does.
External magnetic field produced by a magnetic dipole moment
In classical physics, the magnetic field of a dipole is calculated as the limit of either a current loop or a pair of charges as the source shrinks to a point while keeping the magnetic moment constant. For the current loop, this limit is most easily derived from the vector potential:

$$\mathbf{A}(\mathbf{r}) = \frac{\mu_0}{4\pi r^2}\,\mathbf{m}\times\hat{\mathbf{r}},$$

where μ0 is the vacuum permeability constant and 4πr² is the surface of a sphere of radius r.
The magnetic flux density (strength of the B-field) is then

$$\mathbf{B}(\mathbf{r}) = \nabla\times\mathbf{A} = \frac{\mu_0}{4\pi}\,\frac{3\hat{\mathbf{r}}(\hat{\mathbf{r}}\cdot\mathbf{m}) - \mathbf{m}}{r^3}.$$
Alternatively one can obtain the scalar potential first from the magnetic pole limit,

$$\psi(\mathbf{r}) = \frac{\mathbf{m}\cdot\hat{\mathbf{r}}}{4\pi r^2},$$
and hence the magnetic field strength (or strength of the H-field) is

$$\mathbf{H}(\mathbf{r}) = -\nabla\psi = \frac{1}{4\pi}\,\frac{3\hat{\mathbf{r}}(\hat{\mathbf{r}}\cdot\mathbf{m}) - \mathbf{m}}{r^3}.$$
The magnetic field strength is symmetric under rotations about the axis of the magnetic moment.
In spherical coordinates, with $m = |\mathbf{m}|$ and the magnetic moment aligned with the z-axis, the field strength can more simply be expressed as

$$\mathbf{H}(\mathbf{r}) = \frac{m}{4\pi r^3}\left(2\cos\theta\,\hat{\mathbf{r}} + \sin\theta\,\hat{\boldsymbol{\theta}}\right).$$
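As a numerical sanity check on these expressions, the following sketch (an illustration in SI units; the function name dipole_B and the sample values are ours, not from a standard library) evaluates the exterior field directly from the vector form above:

```python
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability, T*m/A

def dipole_B(m, r):
    """Exterior B-field (tesla) of a point dipole with moment m (A*m^2),
    evaluated at displacement r (metres) from the dipole; r must be nonzero."""
    m = np.asarray(m, dtype=float)
    r = np.asarray(r, dtype=float)
    d = np.linalg.norm(r)
    rhat = r / d
    return MU0 / (4 * np.pi) * (3 * rhat * np.dot(rhat, m) - m) / d**3

# Moment along z: on the equator (theta = 90 deg) the field is antiparallel
# to m, while on the axis (theta = 0) it is parallel and twice as strong.
m = np.array([0.0, 0.0, 1.0])
print(dipole_B(m, [1.0, 0.0, 0.0]))  # ~[0, 0, -1e-7] T
print(dipole_B(m, [0.0, 0.0, 1.0]))  # ~[0, 0, +2e-7] T
```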
Internal magnetic field of a dipole
The two models for a dipole (current loop and magnetic poles) give the same predictions for the magnetic field far from the source. However, inside the source region they give different predictions. The magnetic field between poles is in the opposite direction to the magnetic moment (which points from the negative charge to the positive charge), while inside a current loop it is in the same direction. Clearly, the limits of these fields must also be different as the sources shrink to zero size. This distinction only matters if the dipole limit is used to calculate fields inside a magnetic material.
If a magnetic dipole is formed by making a current loop smaller and smaller, but keeping the product of current and area constant, the limiting field is

$$\mathbf{B}(\mathbf{r}) = \frac{\mu_0}{4\pi}\,\frac{3\hat{\mathbf{r}}(\hat{\mathbf{r}}\cdot\mathbf{m}) - \mathbf{m}}{r^3} + \frac{2\mu_0}{3}\,\mathbf{m}\,\delta(\mathbf{r}),$$

where δ(r) is the Dirac delta function in three dimensions. Unlike the expressions in the previous section, this limit is correct for the internal field of the dipole.
If a magnetic dipole is formed by taking a "north pole" and a "south pole", bringing them closer and closer together but keeping the product of magnetic pole-charge and distance constant, the limiting field is

$$\mathbf{H}(\mathbf{r}) = \frac{1}{4\pi}\,\frac{3\hat{\mathbf{r}}(\hat{\mathbf{r}}\cdot\mathbf{m}) - \mathbf{m}}{r^3} - \frac{1}{3}\,\mathbf{m}\,\delta(\mathbf{r}).$$
These fields are related by $\mathbf{B} = \mu_0(\mathbf{H} + \mathbf{M})$, where

$$\mathbf{M}(\mathbf{r}) = \mathbf{m}\,\delta(\mathbf{r})$$

is the magnetization.
Forces between two magnetic dipoles
The force $\mathbf{F}$ exerted by one dipole moment $\mathbf{m}_1$ on another $\mathbf{m}_2$ separated in space by a vector $\mathbf{r}$ (pointing from $\mathbf{m}_1$ to $\mathbf{m}_2$) can be calculated using:

$$\mathbf{F} = \frac{3\mu_0}{4\pi r^5}\left[(\mathbf{m}_1\cdot\mathbf{r})\,\mathbf{m}_2 + (\mathbf{m}_2\cdot\mathbf{r})\,\mathbf{m}_1 + (\mathbf{m}_1\cdot\mathbf{m}_2)\,\mathbf{r} - \frac{5\,(\mathbf{m}_1\cdot\mathbf{r})(\mathbf{m}_2\cdot\mathbf{r})}{r^2}\,\mathbf{r}\right],$$

or

$$\mathbf{F} = \nabla\!\left(\mathbf{m}_2\cdot\mathbf{B}_1\right),$$

where $r$ is the distance between the dipoles and $\mathbf{B}_1$ is the field of the first dipole. The force acting on $\mathbf{m}_1$ is in the opposite direction.
The torque exerted on $\mathbf{m}_2$ by the field of $\mathbf{m}_1$ can be obtained from the formula

$$\boldsymbol{\tau} = \mathbf{m}_2 \times \mathbf{B}_1.$$
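A minimal numerical sketch of the force and torque formulas above (SI units assumed; the vector r points from dipole 1 to dipole 2; function names are illustrative):

```python
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability, T*m/A

def dipole_force(m1, m2, r):
    """Force (N) on dipole m2 due to dipole m1, with r (m) from m1 to m2."""
    m1, m2, r = (np.asarray(v, dtype=float) for v in (m1, m2, r))
    d = np.linalg.norm(r)
    return (3 * MU0 / (4 * np.pi * d**5)) * (
        np.dot(m1, r) * m2 + np.dot(m2, r) * m1 + np.dot(m1, m2) * r
        - 5 * np.dot(m1, r) * np.dot(m2, r) / d**2 * r)

def dipole_torque(m1, m2, r):
    """Torque (N*m) on dipole m2 in the field B1 of dipole m1."""
    m1, m2, r = (np.asarray(v, dtype=float) for v in (m1, m2, r))
    d = np.linalg.norm(r)
    rhat = r / d
    B1 = MU0 / (4 * np.pi) * (3 * rhat * np.dot(rhat, m1) - m1) / d**3
    return np.cross(m2, B1)

# Two coaxial, aligned moments 10 cm apart attract each other:
print(dipole_force([0, 0, 1], [0, 0, 1], [0, 0, 0.1]))  # ~[0, 0, -6e-3] N
```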
Dipolar fields from finite sources
The magnetic scalar potential produced by a finite source, but external to it, can be represented by a multipole expansion. Each term in the expansion is associated with a characteristic moment and a potential having a characteristic rate of decrease with distance from the source. Monopole moments have a $1/r$ rate of decrease, dipole moments have a $1/r^2$ rate, quadrupole moments have a $1/r^3$ rate, and so on. The higher the order, the faster the potential drops off. Since the lowest-order term observed in magnetic sources is the dipole term, it dominates at large distances. Therefore, at large distances any magnetic source looks like a dipole of the same magnetic moment.
Notes
References
Magnetostatics
Magnetism
Electric and magnetic fields in matter | Magnetic dipole | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 842 | [
"Condensed matter physics",
"Electric and magnetic fields in matter",
"Materials science"
] |
164,483 | https://en.wikipedia.org/wiki/Scattering | In physics, scattering is a wide range of physical processes where moving particles or radiation of some form, such as light or sound, are forced to deviate from a straight trajectory by localized non-uniformities (including particles and radiation) in the medium through which they pass. In conventional use, this also includes deviation of reflected radiation from the angle predicted by the law of reflection. Reflections of radiation that undergo scattering are often called diffuse reflections and unscattered reflections are called specular (mirror-like) reflections. Originally, the term was confined to light scattering (going back at least as far as Isaac Newton in the 17th century). As more "ray"-like phenomena were discovered, the idea of scattering was extended to them, so that William Herschel could refer to the scattering of "heat rays" (not then recognized as electromagnetic in nature) in 1800. John Tyndall, a pioneer in light scattering research, noted the connection between light scattering and acoustic scattering in the 1870s. Near the end of the 19th century, the scattering of cathode rays (electron beams) and X-rays was observed and discussed. With the discovery of subatomic particles (e.g. Ernest Rutherford in 1911) and the development of quantum theory in the 20th century, the sense of the term became broader as it was recognized that the same mathematical frameworks used in light scattering could be applied to many other phenomena.
Scattering can refer to the consequences of particle-particle collisions between molecules, atoms, electrons, photons and other particles. Examples include: cosmic ray scattering in the Earth's upper atmosphere; particle collisions inside particle accelerators; electron scattering by gas atoms in fluorescent lamps; and neutron scattering inside nuclear reactors.
The types of non-uniformities which can cause scattering, sometimes known as scatterers or scattering centers, are too numerous to list, but a small sample includes particles, bubbles, droplets, density fluctuations in fluids, crystallites in polycrystalline solids, defects in monocrystalline solids, surface roughness, cells in organisms, and textile fibers in clothing. The effects of such features on the path of almost any type of propagating wave or moving particle can be described in the framework of scattering theory.
Some areas where scattering and scattering theory are significant include radar sensing, medical ultrasound, semiconductor wafer inspection, polymerization process monitoring, acoustic tiling, free-space communications and computer-generated imagery. Particle-particle scattering theory is important in areas such as particle physics, atomic, molecular, and optical physics, nuclear physics and astrophysics. In particle physics the quantum interaction and scattering of fundamental particles is described by the Scattering Matrix or S-Matrix, introduced and developed by John Archibald Wheeler and Werner Heisenberg.
Scattering is quantified using many different concepts, including scattering cross section (σ), attenuation coefficients, the bidirectional scattering distribution function (BSDF), S-matrices, and mean free path.
Single and multiple scattering
When radiation is only scattered by one localized scattering center, this is called single scattering. It is more common that scattering centers are grouped together; in such cases, radiation may scatter many times, in what is known as multiple scattering. The main difference between the effects of single and multiple scattering is that single scattering can usually be treated as a random phenomenon, whereas multiple scattering, somewhat counterintuitively, can be modeled as a more deterministic process because the combined results of a large number of scattering events tend to average out. Multiple scattering can thus often be modeled well with diffusion theory.
Because the location of a single scattering center is not usually well known relative to the path of the radiation, the outcome, which tends to depend strongly on the exact incoming trajectory, appears random to an observer. This type of scattering would be exemplified by an electron being fired at an atomic nucleus. In this case, the atom's exact position relative to the path of the electron is unknown and would be unmeasurable, so the exact trajectory of the electron after the collision cannot be predicted. Single scattering is therefore often described by probability distributions.
With multiple scattering, the randomness of the interaction tends to be averaged out by a large number of scattering events, so that the final path of the radiation appears to be a deterministic distribution of intensity. This is exemplified by a light beam passing through thick fog. Multiple scattering is highly analogous to diffusion, and the terms multiple scattering and diffusion are interchangeable in many contexts. Optical elements designed to produce multiple scattering are thus known as diffusers. Coherent backscattering, an enhancement of backscattering that occurs when coherent radiation is multiply scattered by a random medium, is usually attributed to weak localization.
Not all single scattering is random, however. A well-controlled laser beam can be exactly positioned to scatter off a microscopic particle with a deterministic outcome, for instance. Such situations are encountered in radar scattering as well, where the targets tend to be macroscopic objects such as people or aircraft.
Similarly, multiple scattering can sometimes have somewhat random outcomes, particularly with coherent radiation. The random fluctuations in the multiply scattered intensity of coherent radiation are called speckles. Speckle also occurs if multiple parts of a coherent wave scatter from different centers. In certain rare circumstances, multiple scattering may only involve a small number of interactions such that the randomness is not completely averaged out. These systems are considered to be some of the most difficult to model accurately.
The description of scattering and the distinction between single and multiple scattering are tightly related to wave–particle duality.
Theory
Scattering theory is a framework for studying and understanding the scattering of waves and particles. Wave scattering corresponds to the collision and scattering of a wave with some material object, for instance (sunlight) scattered by rain drops to form a rainbow. Scattering also includes the interaction of billiard balls on a table, the Rutherford scattering (or angle change) of alpha particles by gold nuclei, the Bragg scattering (or diffraction) of electrons and X-rays by a cluster of atoms, and the inelastic scattering of a fission fragment as it traverses a thin foil. More precisely, scattering consists of the study of how solutions of partial differential equations, propagating freely "in the distant past", come together and interact with one another or with a boundary condition, and then propagate away "to the distant future".
The direct scattering problem is the problem of determining the distribution of scattered radiation/particle flux basing on the characteristics of the scatterer. The inverse scattering problem is the problem of determining the characteristics of an object (e.g., its shape, internal constitution) from measurement data of radiation or particles scattered from the object.
Attenuation due to scattering
When the target is a set of many scattering centers whose relative position varies unpredictably, it is customary to think of a range equation whose arguments take different forms in different application areas. In the simplest case consider an interaction that removes particles from the "unscattered beam" at a uniform rate that is proportional to the incident number of particles per unit area per unit time ($I$), i.e. that

$$\frac{dI}{dx} = -Q\,I,$$

where $Q$ is an interaction coefficient and $x$ is the distance traveled in the target.

The above ordinary first-order differential equation has solutions of the form:

$$I = I_o\,e^{-Q\,\Delta x} = I_o\,e^{-\Delta x/\lambda} = I_o\,e^{-\eta\sigma\,\Delta x} = I_o\,e^{-\rho\,\Delta x/\tau},$$

where $I_o$ is the initial flux, path length $\Delta x \equiv x - x_o$, the second equality defines an interaction mean free path $\lambda$, the third uses the number of targets per unit volume $\eta$ to define an area cross-section $\sigma$, and the last uses the target mass density $\rho$ to define a density mean free path $\tau$. Hence one converts between these quantities via $Q = 1/\lambda = \eta\sigma = \rho/\tau$.
In electromagnetic absorption spectroscopy, for example, interaction coefficient (e.g. Q in cm−1) is variously called opacity, absorption coefficient, and attenuation coefficient. In nuclear physics, area cross-sections (e.g. σ in barns or units of 10−24 cm2), density mean free path (e.g. τ in grams/cm2), and its reciprocal the mass attenuation coefficient (e.g. in cm2/gram) or area per nucleon are all popular, while in electron microscopy the inelastic mean free path (e.g. λ in nanometers) is often discussed instead.
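A short worked sketch of these interconversions (sample numbers are illustrative only, not data for a specific material):

```python
import math

def unscattered_flux(I0, x, Q):
    """Flux surviving unscattered after path x in a target with coefficient Q."""
    return I0 * math.exp(-Q * x)

# Equivalent ways of stating the same interaction strength:
# Q = 1/lambda = eta * sigma = rho / tau
eta = 5e22          # scattering centers per cm^3 (illustrative)
sigma = 2e-24       # cross-section per center in cm^2, i.e. 2 barns
Q = eta * sigma     # interaction coefficient, 1/cm  -> 0.1
lam = 1.0 / Q       # mean free path, cm             -> 10.0

print(Q, lam)
print(unscattered_flux(1.0, 23.0, Q))  # ~0.10: ~90% of the beam scattered after 23 cm
```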
Elastic and inelastic scattering
The term "elastic scattering" implies that the internal states of the scattering particles do not change, and hence they emerge unchanged from the scattering process. In inelastic scattering, by contrast, the particles' internal state is changed, which may amount to exciting some of the electrons of a scattering atom, or the complete annihilation of a scattering particle and the creation of entirely new particles.
The example of scattering in quantum chemistry is particularly instructive, as the theory is reasonably complex while still having a good foundation on which to build an intuitive understanding. When two atoms are scattered off one another, one can understand them as being the bound state solutions of some differential equation. Thus, for example, the hydrogen atom corresponds to a solution to the Schrödinger equation with a negative inverse-power (i.e., attractive Coulombic) central potential. The scattering of two hydrogen atoms will disturb the state of each atom, resulting in one or both becoming excited, or even ionized, representing an inelastic scattering process.
The term "deep inelastic scattering" refers to a special kind of scattering experiment in particle physics.
Mathematical framework
In mathematics, scattering theory deals with a more abstract formulation of the same set of concepts. For example, if a differential equation is known to have some simple, localized solutions, and the solutions are a function of a single parameter, that parameter can take the conceptual role of time. One then asks what might happen if two such solutions are set up far away from each other, in the "distant past", and are made to move towards each other, interact (under the constraint of the differential equation) and then move apart in the "future". The scattering matrix then pairs solutions in the "distant past" to those in the "distant future".
Solutions to differential equations are often posed on manifolds. Frequently, the means to the solution requires the study of the spectrum of an operator on the manifold. As a result, the solutions often have a spectrum that can be identified with a Hilbert space, and scattering is described by a certain map, the S matrix, on Hilbert spaces. Solutions with a discrete spectrum correspond to bound states in quantum mechanics, while a continuous spectrum is associated with scattering states. The study of inelastic scattering then asks how discrete and continuous spectra are mixed together.
An important, notable development is the inverse scattering transform, central to the solution of many exactly solvable models.
Theoretical physics
In mathematical physics, scattering theory is a framework for studying and understanding the interaction or scattering of solutions to partial differential equations. In acoustics, the differential equation is the wave equation, and scattering studies how its solutions, the sound waves, scatter from solid objects or propagate through non-uniform media (such as sound waves, in sea water, coming from a submarine). In the case of classical electrodynamics, the differential equation is again the wave equation, and the scattering of light or radio waves is studied. In particle physics, the equations are those of Quantum electrodynamics, Quantum chromodynamics and the Standard Model, the solutions of which correspond to fundamental particles.
In regular quantum mechanics, which includes quantum chemistry, the relevant equation is the Schrödinger equation, although equivalent formulations, such as the Lippmann-Schwinger equation and the Faddeev equations, are also largely used. The solutions of interest describe the long-term motion of free atoms, molecules, photons, electrons, and protons. The scenario is that several particles come together from an infinite distance away. These reagents then collide, optionally reacting, getting destroyed or creating new particles. The products and unused reagents then fly away to infinity again. (The atoms and molecules are effectively particles for our purposes. Also, under everyday circumstances, only photons are being created and destroyed.) The solutions reveal which directions the products are most likely to fly off to and how quickly. They also reveal the probability of various reactions, creations, and decays occurring. There are two predominant techniques of finding solutions to scattering problems: partial wave analysis, and the Born approximation.
Electromagnetics
Electromagnetic waves are one of the best known and most commonly encountered forms of radiation that undergo scattering. Scattering of light and radio waves (especially in radar) is particularly important. Several different aspects of electromagnetic scattering are distinct enough to have conventional names. Major forms of elastic light scattering (involving negligible energy transfer) are Rayleigh scattering and Mie scattering. Inelastic scattering includes Brillouin scattering, Raman scattering, inelastic X-ray scattering and Compton scattering.
Light scattering is one of the two major physical processes that contribute to the visible appearance of most objects, the other being absorption. Surfaces described as white owe their appearance to multiple scattering of light by internal or surface inhomogeneities in the object, for example by the boundaries of transparent microscopic crystals that make up a stone or by the microscopic fibers in a sheet of paper. More generally, the gloss (or lustre or sheen) of the surface is determined by scattering. Highly scattering surfaces are described as being dull or having a matte finish, while the absence of surface scattering leads to a glossy appearance, as with polished metal or stone.
Spectral absorption, the selective absorption of certain colors, determines the color of most objects with some modification by elastic scattering. The apparent blue color of veins in skin is a common example where both spectral absorption and scattering play important and complex roles in the coloration. Light scattering can also create color without absorption, often shades of blue, as with the sky (Rayleigh scattering), the human blue iris, and the feathers of some birds (Prum et al. 1998). However, resonant light scattering in nanoparticles can produce many different highly saturated and vibrant hues, especially when surface plasmon resonance is involved (Roqué et al. 2006).
Models of light scattering can be divided into three domains based on a dimensionless size parameter, $\alpha$, which is defined as

$$\alpha = \frac{\pi D_p}{\lambda},$$

where $\pi D_p$ is the circumference of a particle and $\lambda$ is the wavelength of incident radiation in the medium. Based on the value of $\alpha$, these domains are:
α ≪ 1: Rayleigh scattering (small particle compared to wavelength of light);
α ≈ 1: Mie scattering (particle about the same size as wavelength of light, valid only for spheres);
α ≫ 1: geometric scattering (particle much larger than wavelength of light).
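A minimal sketch of this classification (the numeric cutoffs stand in for "much smaller" and "much larger" and are conventional rules of thumb, not sharp physical boundaries):

```python
import math

def size_parameter(diameter, wavelength):
    """Dimensionless size parameter alpha = pi * D_p / lambda (same units)."""
    return math.pi * diameter / wavelength

def regime(alpha, small=0.1, large=10.0):
    """Classify the scattering regime by the size parameter alpha."""
    if alpha < small:
        return "Rayleigh"
    if alpha > large:
        return "geometric"
    return "Mie"

# Visible light (450 nm) on a 10 nm aerosol versus a 10 um droplet:
print(regime(size_parameter(10e-9, 450e-9)))   # Rayleigh (alpha ~ 0.07)
print(regime(size_parameter(10e-6, 450e-9)))   # geometric (alpha ~ 70)
```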
Rayleigh scattering is a process in which electromagnetic radiation (including light) is scattered by a small spherical volume of variant refractive indexes, such as a particle, bubble, droplet, or even a density fluctuation. This effect was first modeled successfully by Lord Rayleigh, from whom it gets its name. In order for Rayleigh's model to apply, the sphere must be much smaller in diameter than the wavelength (λ) of the scattered wave; typically the upper limit is taken to be about 1/10 the wavelength. In this size regime, the exact shape of the scattering center is usually not very significant and can often be treated as a sphere of equivalent volume. The inherent scattering that radiation undergoes passing through a pure gas is due to microscopic density fluctuations as the gas molecules move around, which are normally small enough in scale for Rayleigh's model to apply. This scattering mechanism is the primary cause of the blue color of the Earth's sky on a clear day, as the shorter blue wavelengths of sunlight passing overhead are more strongly scattered than the longer red wavelengths according to Rayleigh's famous 1/λ4 relation. Along with absorption, such scattering is a major cause of the attenuation of radiation by the atmosphere. The degree of scattering varies as a function of the ratio of the particle diameter to the wavelength of the radiation, along with many other factors including polarization, angle, and coherence.
For larger diameters, the problem of electromagnetic scattering by spheres was first solved by Gustav Mie, and scattering by spheres larger than the Rayleigh range is therefore usually known as Mie scattering. In the Mie regime, the shape of the scattering center becomes much more significant and the theory only applies well to spheres and, with some modification, spheroids and ellipsoids. Closed-form solutions for scattering by certain other simple shapes exist, but no general closed-form solution is known for arbitrary shapes.
Both Mie and Rayleigh scattering are considered elastic scattering processes, in which the energy (and thus wavelength and frequency) of the light is not substantially changed. However, electromagnetic radiation scattered by moving scattering centers does undergo a Doppler shift, which can be detected and used to measure the velocity of the scattering center(s) with techniques such as lidar and radar. This shift involves a slight change in energy.
At values of the ratio of particle diameter to wavelength more than about 10, the laws of geometric optics are mostly sufficient to describe the interaction of light with the particle. Mie theory can still be used for these larger spheres, but the solution often becomes numerically unwieldy.
For modeling of scattering in cases where the Rayleigh and Mie models do not apply such as larger, irregularly shaped particles, there are many numerical methods that can be used. The most common are finite-element methods which solve Maxwell's equations to find the distribution of the scattered electromagnetic field. Sophisticated software packages exist which allow the user to specify the refractive index or indices of the scattering feature in space, creating a 2- or sometimes 3-dimensional model of the structure. For relatively large and complex structures, these models usually require substantial execution times on a computer.
Electrophoresis involves the migration of macromolecules under the influence of an electric field. Electrophoretic light scattering involves passing an electric field through a liquid which makes particles move. The bigger the charge is on the particles, the faster they are able to move.
See also
Attenuation#Light scattering
Backscattering
Bragg diffraction
Brillouin scattering
Characteristic mode analysis
Compton scattering
Coulomb scattering
Deep scattering layer
Diffuse sky radiation
Doppler effect
Dynamic Light Scattering
Electron diffraction
Electron scattering
Electrophoretic light scattering
Extinction
Haag–Ruelle scattering theory
Kikuchi line
Levinson's theorem
Light scattering by particles
Linewidth
Mie scattering
Mie theory
Molecular scattering
Mott scattering
Neutron scattering
Phase space measurement with forward modeling
Photon diffusion
Powder diffraction
Raman scattering
Rayleigh scattering
Resonances in scattering from potentials
Rutherford scattering
Small-angle scattering
Scattering amplitude
Scattering from rough surfaces
Scintillation (physics)
S-Matrix
Tyndall effect
Thomson scattering
Wolf effect
X-ray crystallography
References
External links
Research group on light scattering and diffusion in complex systems
Multiple light scattering from a photonic science point of view
Neutron Scattering Web
Neutron and X-Ray Scattering
World directory of neutron scattering instruments
Scattering and diffraction
Optics Classification and Indexing Scheme (OCIS), Optical Society of America, 1997
Lectures of the European school on theoretical methods for electron and positron induced chemistry, Prague, Feb. 2005
E. Koelink, Lectures on scattering theory, Delft the Netherlands 2006
Physical phenomena
Atomic physics
Nuclear physics
Particle physics
Radar theory
Scattering, absorption and radiative transfer (optics) | Scattering | [
"Physics",
"Chemistry",
"Materials_science"
] | 4,024 | [
"Physical phenomena",
" absorption and radiative transfer (optics)",
"Nuclear physics",
"Quantum mechanics",
"Scattering",
"Atomic physics",
"Particle physics",
"Condensed matter physics",
"Atomic",
" molecular",
" and optical physics"
] |
164,494 | https://en.wikipedia.org/wiki/Computer%20magazine | Computer magazines are about computers and related subjects, such as networking and the Internet. Most computer magazines offer (or offered) advice; some offer programming tutorials, reviews of the latest technologies, and advertisements.
History
1940s–1950s
Mathematics of Computation, established in 1943; articles about computers appeared in it from 1946 (Volume 2, Number 15) to the end of 1954. Scientific journal.
Digital Computer Newsletter, (1949–1968), founded by Albert Eugene Smith.
Computers and People (1951–1988) was arguably the first computer magazine. It began as Roster of Organizations in the Field of Automatic Computing Machinery (1951–1952), and then The Computing Machinery Field (1952–1953). It was published by Edmund Berkeley. Computers and Automation held the first Computer Art Contest in 1963 and maintained a bibliography on computer art starting in 1966. It also included a monthly estimated census of all installed computer systems starting in 1962. In 1973 the name changed to Computers and Automation and People, and finally in 1975 to Computers and People.
AFIPS conference proceedings (AFIPS Joint Computer Conferences) (1952–1987).
ACM National Conference proceedings (Proceedings of National Meetings) (1952, 1956–1987, 1997)
IEEE Transactions on Computers from 1952, scientific journal.
Computing News (1953–1963), was an early computer magazine produced by Jackson W. Granholm out of Thousand Oaks, California. The first documented copyright was applied for on September 1, 1954, for issue #36. The magazine was released on the 1st and 15th of each month, which places issue #1 at March 15, 1953. The last documented release was issue #217 on March 15, 1962.
Journal of the ACM from 1954, scientific journal.
Datamation from 1957, was another early computer and data processing magazine. It is still being published as an e-publication on the Internet. Futurist Donald Prell was its founder.
Information and Computation from 1957, scientific journal.
IBM Journal of Research and Development from 1957, scientific journal.
Communications of the ACM from 1958, mix of science magazine, trade magazine, and a scientific journal
The Computer Journal from 1958, scientific journal.
1960s–1970s
ACS Newsletter (1966–1976), Amateur Computer Society newsletter.
Computerworld (1967)
People's Computer Company Newsletter (1972–1981)
Amateur Computer Club Newsletter (ACCN; 1973–)
Dr. Dobb's Journal (1976–2014) was the first microcomputer magazine to focus on software, rather than hardware.
1980s
In the 1980s, computer magazines skewed their content towards the hobbyist end of the then-microcomputer market, and used to contain type-in programs, but these have gone out of fashion. The first magazine devoted to this class of computers was Creative Computing. Byte was an influential technical journal that published until the 1990s.
In 1983, an average of one new computer magazine appeared each week. By late that year more than 200 existed. Their numbers and size grew rapidly with the industry they covered, and BYTE and 80 Micro were among the three thickest magazines of any kind per issue. Compute!'s editor in chief reported in the December 1983 issue that "all of our previous records are being broken: largest number of pages, largest number of four-color advertising pages, largest number of printing pages, and the largest number of editorial pages".
Computers were the only industry with product-specific magazines, like 80 Micro, PC Magazine, and Macworld; their editors vowed to impartially cover their computers whether or not doing so hurt their readers' and advertisers' market, while claiming that their rivals pandered to advertisers by only publishing positive news.
BYTE, in March 1984, apologized for publishing articles by authors with promotional material for companies without describing them as such, and in April suggested that other magazines adopt its rules of conduct for writers, such as prohibiting employees from accepting gifts or discounts.
InfoWorld stated in June that many of the "150 or so" industry magazines published articles without clearly identifying authors' affiliations and conflicts of interest.
Around 1985, many magazines ended, as their number had grown to exceed the available advertising revenue, even though revenue in the first half of the year was five times that of the same period in 1982. Consumers typically bought computer magazines more for advertising than articles, which benefited already leading journals like BYTE and PC Magazine and hurt weaker ones. Also affecting magazines were the computer industry's economic difficulties, including the video game crash of 1983, which badly hurt the home-computer market.
Dan Gutman, the founder of Computer Games, recalled in 1987 that "the computer games industry crashed and burned like a bad night of Flight Simulator—with my magazine on the runway". Antic's advertising sales declined by 50% in 90 days, Compute!'s number of pages declined from 392 in December 1983 to 160 ten months later, and Compute! and Compute!'s Gazette's publisher assured readers in an editorial that his company "is and continues to be quite successful ... even during these particularly difficult times in the industry". Computer Gaming World stated in 1988 that it was the only one of the 18 color magazines that covered computer games in 1983 to survive the crash. Compute! similarly stated that year that it was the only general-interest survivor of about 150 consumer-computing magazines published in 1983.
Some computer magazines in the 1980s and 1990s were issued only on disk (or cassette tape, or CD-ROM) with no printed counterpart; such publications are collectively (though somewhat inaccurately) known as disk magazines and are listed separately.
1990s
In some ways, printed computer magazines had their heyday during the 1990s. During this period, a large number of computer manufacturers took out advertisements in computer magazines, so the magazines became quite thick and could afford to carry quite a number of articles in each issue. Computer Shopper was a good example of this trend.
Some printed computer magazines used to include covermount floppy disks, CDs, or other media as inserts; they typically contained software, demos, and electronic versions of the print issue.
2000s–2010s
However, with the rise in popularity of the Internet, many computer magazines went bankrupt or transitioned to an online-only existence. Exceptions include Wired, which is more of a technology magazine than a computer magazine.
List of computer magazines
Notable regular contributors to print computer magazines
See also
Online magazine
Magazine
Online newspaper
Notes
References
Magazine
Magazine genres | Computer magazine | [
"Technology"
] | 1,312 | [
"Works about computing",
"Computer magazines",
"Computers",
"History of computing"
] |
164,501 | https://en.wikipedia.org/wiki/Nimber | In mathematics, the nimbers, also called Grundy numbers, are introduced in combinatorial game theory, where they are defined as the values of heaps in the game Nim. The nimbers are the ordinal numbers endowed with nimber addition and nimber multiplication, which are distinct from ordinal addition and ordinal multiplication.
Because of the Sprague–Grundy theorem which states that every impartial game is equivalent to a Nim heap of a certain size, nimbers arise in a much larger class of impartial games. They may also occur in partisan games like Domineering.
The nimber addition and multiplication operations are associative and commutative. Each nimber is its own additive inverse. In particular for some pairs of ordinals, their nimber sum is smaller than either addend. The minimum excludant operation is applied to sets of nimbers.
Uses
Nim
Nim is a game in which two players take turns removing objects from distinct heaps. As moves depend only on the position and not on which of the two players is currently moving, and where the payoffs are symmetric, Nim is an impartial game. On each turn, a player must remove at least one object, and may remove any number of objects provided they all come from the same heap. The goal of the game is to be the player who removes the last object. The nimber of a heap is simply the number of objects in that heap. Using nim addition, one can calculate the nimber of the game as a whole. The winning strategy is to force the nimber of the game to 0 for the opponent's turn.
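The strategy can be made concrete with a short sketch (our own illustration, using the XOR characterization of nim-addition described in the Addition section below):

```python
from functools import reduce

def nim_sum(heaps):
    """Nim-value of a position: the XOR of all heap sizes."""
    return reduce(lambda a, b: a ^ b, heaps, 0)

def winning_move(heaps):
    """Return (heap_index, new_size) leaving nim-sum 0, or None if losing."""
    s = nim_sum(heaps)
    if s == 0:
        return None  # every move hands the opponent a zero position
    for i, h in enumerate(heaps):
        target = h ^ s       # heap size that zeroes the overall nim-sum
        if target < h:       # legal only if it shrinks the heap
            return (i, target)

print(nim_sum([7, 14]))         # 9
print(winning_move([3, 4, 5]))  # (0, 1): reduce the 3-heap to 1
```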
Cram
Cram is a game often played on a rectangular board in which players take turns placing dominoes either horizontally or vertically until no more dominoes can be placed. The first player who cannot make a move loses. As the possible moves for both players are the same, it is an impartial game and can have a nimber value. For example, any board that is an even size by an even size will have a nimber of 0. Any board that is even by odd will have a non-zero nimber. Any 2 × n board will have a nimber of 0 for all even n and a nimber of 1 for all odd n.
Northcott's game
In Northcott's game, pegs for each player are placed along a column with a finite number of spaces. Each turn each player must move the piece up or down the column, but may not move past the other player's piece. Several columns are stacked together to add complexity. The player that can no longer make any moves loses. Unlike many other nimber-related games, here the numbers of spaces between the two tokens on each row are simply the sizes of the Nim heaps. If your opponent increases the number of spaces between two tokens, just decrease it on your next move. Otherwise, play the game of Nim and make the Nim-sum of the number of spaces between the tokens on each row be 0.
Hackenbush
Hackenbush is a game invented by mathematician John Horton Conway. It may be played on any configuration of colored line segments connected to one another by their endpoints and to a "ground" line. Players take turns removing line segments. An impartial game version, thereby a game able to be analyzed using nimbers, can be found by removing distinction from the lines, allowing either player to cut any branch. Any segments reliant on the newly removed segment in order to connect to the ground line are removed as well. In this way, each connection to the ground can be considered a nim heap with a nimber value. Additionally, all the separate connections to the ground line can also be summed for a nimber of the game state.
Addition
Nimber addition (also known as nim-addition) can be used to calculate the size of a single nim heap equivalent to a collection of nim heaps. It is defined recursively by

α ⊕ β = mex({α′ ⊕ β : α′ < α} ∪ {α ⊕ β′ : β′ < β}),

where the minimum excludant mex(S) of a set S of ordinals is defined to be the smallest ordinal that is not an element of S.
For finite ordinals, the nim-sum is easily evaluated on a computer by taking the bitwise exclusive or (XOR, denoted by ⊕) of the corresponding numbers. For example, the nim-sum of 7 and 14 can be found by writing 7 as 111 and 14 as 1110; the ones place adds to 1; the twos place adds to 2, which we replace with 0; the fours place adds to 2, which we replace with 0; the eights place adds to 1. So the nim-sum is written in binary as 1001, or in decimal as 9.
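Before the formal argument below, this agreement between the recursive mex definition and the XOR rule can be checked mechanically for small values; the following Python sketch is an added illustration (the function names are ours, not from the article):

from functools import lru_cache

def mex(s):
    # Minimum excludant: the smallest non-negative integer not in s.
    n = 0
    while n in s:
        n += 1
    return n

@lru_cache(maxsize=None)
def nim_add(a, b):
    # Nimber addition via the recursive mex definition (finite ordinals only).
    return mex({nim_add(x, b) for x in range(a)} |
               {nim_add(a, y) for y in range(b)})

assert all(nim_add(a, b) == a ^ b for a in range(16) for b in range(16))
print(nim_add(7, 14))  # 9, matching the worked example above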
This property of addition follows from the fact that both mex and XOR yield a winning strategy for Nim and there can be only one such strategy; or it can be shown directly by induction: Let α and β be two finite ordinals, and assume that the nim-sum of all pairs with one of them reduced is already defined. The only number whose XOR with α is α ⊕ β is β, and vice versa; thus α ⊕ β is excluded.

On the other hand, for any ordinal γ < α ⊕ β, XORing ξ := α ⊕ β ⊕ γ with all of α, β and γ must lead to a reduction for one of them (since the leading 1 in ξ must be present in at least one of the three); since

ξ ⊕ γ = α ⊕ β > γ,

we must have either

α > ξ ⊕ α = β ⊕ γ or β > ξ ⊕ β = α ⊕ γ;

thus γ is included as either

(ξ ⊕ α) ⊕ β or α ⊕ (ξ ⊕ β),

and hence α ⊕ β is the minimum excluded ordinal.
Nimber addition is associative and commutative, with 0 as the additive identity element. Moreover, a nimber is its own additive inverse. It follows that α ⊕ β = 0 if and only if α = β.
Multiplication
Nimber multiplication (nim-multiplication) is defined recursively by

α β = mex({α′ β ⊕ α β′ ⊕ α′ β′ : α′ < α, β′ < β}).
Nimber multiplication is associative and commutative, with the ordinal 1 as the multiplicative identity element. Moreover, nimber multiplication distributes over nimber addition.
Thus, except for the fact that nimbers form a proper class and not a set, the class of nimbers forms a ring. In fact, it even determines an algebraically closed field of characteristic 2, with the nimber multiplicative inverse of a nonzero ordinal α given by

α⁻¹ = mex(S),

where S is the smallest set of ordinals (nimbers) such that

0 is an element of S;
if 0 < α′ < α and β′ is an element of S, then [1 ⊕ (α′ ⊕ α) β′]/α′ is also an element of S (all operations here being nimber operations).
For all natural numbers n, the set of nimbers less than 2^(2^n) forms the Galois field GF(2^(2^n)) of order 2^(2^n). Therefore, the set of finite nimbers is isomorphic to the direct limit as n → ∞ of the fields GF(2^(2^n)). This subfield is not algebraically closed, since no field GF(2^k) with k not a power of 2 is contained in any of those fields, and therefore not in their direct limit; for instance the polynomial x³ + x + 1, which has a root in GF(2³), does not have a root in the set of finite nimbers.
Just as in the case of nimber addition, there is a means of computing the nimber product of finite ordinals. This is determined by the rules that
The nimber product of a Fermat 2-power (numbers of the form 2^(2^n)) with a smaller number is equal to their ordinary product;
The nimber square of a Fermat 2-power x is equal to 3x/2 as evaluated under the ordinary multiplication of natural numbers.
A computational check of these rules is sketched below.
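Assuming the recursive mex definition of multiplication given above, these rules can be verified for small cases with a memoized Python sketch (an added illustration; the nimber sums inside the set are computed with XOR, as established in the Addition section):

from functools import lru_cache

def mex(s):
    # Minimum excludant: the smallest non-negative integer not in s.
    n = 0
    while n in s:
        n += 1
    return n

@lru_cache(maxsize=None)
def nim_mul(a, b):
    # Nimber multiplication via its recursive mex definition (finite case);
    # the nimber additions are XORs of the three sub-products.
    return mex({nim_mul(x, b) ^ nim_mul(a, y) ^ nim_mul(x, y)
                for x in range(a) for y in range(b)})

assert nim_mul(4, 3) == 12    # Fermat 2-power times a smaller number: ordinary product
assert nim_mul(2, 2) == 3     # nimber square of 2 is (3/2)*2 = 3
assert nim_mul(4, 4) == 6     # nimber square of 4 is (3/2)*4 = 6
assert nim_mul(16, 16) == 24  # nimber square of 16 is (3/2)*16 = 24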
The smallest algebraically closed field of nimbers is the set of nimbers less than the ordinal ω^(ω^ω), where ω is the smallest infinite ordinal. It follows that as a nimber, ω^(ω^ω) is transcendental over the field.
Addition and multiplication tables
The following tables exhibit addition and multiplication among the first 16 nimbers.
This subset is closed under both operations, since 16 is of the form 2^(2^n).
See also
Surreal number
Notes
References
which discusses games, surreal numbers, and nimbers.
Combinatorial game theory
Finite fields
Ordinal numbers | Nimber | [
"Mathematics"
] | 1,646 | [
"Ordinal numbers",
"Recreational mathematics",
"Mathematical objects",
"Combinatorics",
"Game theory",
"Order theory",
"Combinatorial game theory",
"Numbers"
] |
164,511 | https://en.wikipedia.org/wiki/Radio%20clock | A radio clock or radio-controlled clock (RCC), and often colloquially (and incorrectly) referred to as an "atomic clock", is a type of quartz clock or watch that is automatically synchronized to a time code transmitted by a radio transmitter connected to a time standard such as an atomic clock. Such a clock may be synchronized to the time sent by a single transmitter, such as many national or regional time transmitters, or may use the multiple transmitters used by satellite navigation systems such as Global Positioning System. Such systems may be used to automatically set clocks or for any purpose where accurate time is needed. Radio clocks may include any feature available for a clock, such as alarm function, display of ambient temperature and humidity, broadcast radio reception, etc.
One common style of radio-controlled clock uses time signals transmitted by dedicated terrestrial longwave radio transmitters, which emit a time code that can be demodulated and displayed by the radio controlled clock. The radio controlled clock will contain an accurate time base oscillator to maintain timekeeping if the radio signal is momentarily unavailable. Other radio controlled clocks use the time signals transmitted by dedicated transmitters in the shortwave bands. Systems using dedicated time signal stations can achieve accuracy of a few tens of milliseconds.
GPS satellite receivers also internally generate accurate time information from the satellite signals. Dedicated GPS timing receivers are accurate to better than 1 microsecond; however, general-purpose or consumer grade GPS may have an offset of up to one second between the internally calculated time, which is much more accurate than 1 second, and the time displayed on the screen.
Other broadcast services may include timekeeping information of varying accuracy within their signals. Timepieces with Bluetooth radio support, ranging from watches with basic control of functionality via a mobile app to full smartwatches, obtain time information from a connected phone, with no need to receive time signal broadcasts.
Single transmitter
Radio clocks synchronized to a terrestrial time signal can usually achieve an accuracy within a hundredth of a second relative to the time standard, generally limited by uncertainties and variability in radio propagation. Some timekeepers, particularly watches such as some Casio Wave Ceptors which are more likely than desk clocks to be used when travelling, can synchronise to any one of several different time signals transmitted in different regions.
Longwave and shortwave transmissions
Radio clocks depend on coded time signals from radio stations. The stations vary in broadcast frequency, in geographic location, and in how the signal is modulated to identify the current time. In general, each station has its own format for the time code.
List of radio time signal stations
Descriptions
Many other countries can receive these signals (JJY can sometimes be received in New Zealand, Western Australia, Tasmania, Southeast Asia, parts of Western Europe and the Pacific Northwest of North America at night), but success depends on the time of day, atmospheric conditions, and interference from intervening buildings. Reception is generally better if the clock is placed near a window facing the transmitter. There is also a propagation delay of approximately 1 ms for every 300 km (190 mi) the receiver is from the transmitter.
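The delay is simply the distance divided by the speed of light, as this small Python sketch (an added illustration) shows:

SPEED_OF_LIGHT_KM_PER_MS = 299.792458  # kilometres travelled per millisecond

def propagation_delay_ms(distance_km):
    # One-way delay of the time code, assuming straight-line travel at c.
    return distance_km / SPEED_OF_LIGHT_KM_PER_MS

print(round(propagation_delay_ms(300), 2))   # ~1.0 ms
print(round(propagation_delay_ms(3000), 2))  # ~10.01 ms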
Clock receivers
A number of manufacturers and retailers sell radio clocks that receive coded time signals from a radio station, which, in turn, derives the time from a true atomic clock.
One of the first radio clocks was offered by Heathkit in late 1983. Their model GC-1000 "Most Accurate Clock" received shortwave time signals from radio station WWV in Fort Collins, Colorado. It automatically switched between WWV's 5, 10, and 15 MHz frequencies to find the strongest signal as conditions changed through the day and year. It kept time during periods of poor reception with a quartz-crystal oscillator. This oscillator was disciplined, meaning that the microprocessor-based clock used the highly accurate time signal received from WWV to trim the crystal oscillator. The timekeeping between updates was thus considerably more accurate than the crystal alone could have achieved. Time down to the tenth of a second was shown on an LED display. The GC-1000 originally sold for US$250 in kit form and US$400 preassembled, and was considered impressive at the time. Heath Company was granted a patent for its design.
By 1990, engineers from German watchmaker Junghans had miniaturized this technology to fit into the case of a digital wristwatch. The following year the analog version Junghans MEGA with hands was launched.
In the 2000s, radio-based "atomic clocks" became common in retail stores; as of 2010, prices start at around US$15 in many countries. Clocks may have other features such as indoor thermometers and weather station functionality. These use signals transmitted by the appropriate transmitter for the country in which they are to be used. Depending upon signal strength they may require placement in a location with a relatively unobstructed path to the transmitter and need fair to good atmospheric conditions to successfully update the time. Inexpensive clocks keep track of the time between updates, or in their absence, with a non-disciplined quartz-crystal clock, with the accuracy typical of non-radio-controlled quartz timepieces. Some clocks include indicators to alert users to possible inaccuracy when synchronization has not been recently successful.
The United States National Institute of Standards and Technology (NIST) has published guidelines recommending that radio clock movements keep time between synchronizations to within ±0.5 seconds to keep time correct when rounded to the nearest second. Some of these movements can keep time between synchronizations to within ±0.2 seconds by synchronizing more than once spread over a day.
Other broadcasts
Attached to other broadcast stations: Broadcast stations in many countries have carriers precisely synchronized to a standard phase and frequency, such as the BBC Radio 4 longwave service on 198 kHz, and some also transmit sub-audible or even inaudible time-code information, like the Radio France longwave transmitter on 162 kHz. Attached time signal systems generally use audible tones or phase modulation of the carrier wave.
Teletext (TTX): Digital text pages embedded in television video also provide accurate time. Many modern TV sets and VCRs with TTX decoders can obtain accurate time from Teletext and set the internal clock. However, the TTX time can be off by up to 5 minutes.
Many digital radio and digital television schemes also include provisions for time-code transmission.
Digital Terrestrial Television: The DVB and ATSC standards have two packet types that send time and date information to the receiver. Digital television systems can equal GPS stratum 2 accuracy (with short-term clock discipline) and stratum 1 (with long-term clock discipline) provided the transmitter site (or network) supports that level of functionality.
VHF FM Radio Data System (RDS): RDS can send a clock signal with sub-second precision but with an accuracy no greater than 100 ms and with no indication of clock stratum. Not all RDS networks or stations using RDS send accurate time signals. The time stamp format for this technology is Modified Julian Date (MJD) plus UTC hours, UTC minutes and a local time offset.
L-band and VHF Digital Audio Broadcasting (DAB): DAB systems provide a time signal that has a precision equal to or better than Digital Radio Mondiale (DRM) but, like FM RDS, do not indicate clock stratum. DAB systems can equal GPS stratum 2 accuracy (short-term clock discipline) and stratum 1 (long-term clock discipline) provided the transmitter site (or network) supports that level of functionality. The time stamp format for this technology is BCD.
Digital Radio Mondiale (DRM): DRM is able to send a clock signal, but one not as precise as navigation satellite clock signals. DRM timestamps received via shortwave (or multiple-hop mediumwave) can be up to 200 ms off due to path delay. The time stamp format for this technology is BCD.
Gallery
Multiple transmitters
A radio clock receiver may combine multiple time sources to improve its accuracy. This is what is done in satellite navigation systems such as the Global Positioning System, Galileo, and GLONASS. Satellite navigation systems have one or more caesium, rubidium or hydrogen maser atomic clocks on each satellite, referenced to a clock or clocks on the ground. Dedicated timing receivers can serve as local time standards, with a precision better than 50 ns. The recent revival and enhancement of LORAN, a land-based radio navigation system, will provide another multiple source time distribution system.
GPS clocks
Many modern radio clocks use satellite navigation systems such as the Global Positioning System to provide more accurate time than can be obtained from terrestrial radio stations. These GPS clocks combine time estimates from multiple satellite atomic clocks with error estimates maintained by a network of ground stations. Due to effects inherent in radio propagation and ionospheric spread and delay, GPS timing requires averaging of these phenomena over several periods. No GPS receiver directly computes time or frequency; rather, they use GPS to discipline an oscillator that may range from a quartz crystal in a low-end navigation receiver, through oven-controlled crystal oscillators (OCXO) in specialized units, to atomic oscillators (rubidium) in some receivers used for synchronization in telecommunications. For this reason, these devices are technically referred to as GPS-disciplined oscillators.
GPS units intended primarily for time measurement as opposed to navigation can be set to assume the antenna position is fixed. In this mode, the device will average its position fixes. After approximately a day of operation, it will know its position to within a few meters. Once it has averaged its position, it can determine accurate time even if it can pick up signals from only one or two satellites.
GPS clocks provide the precise time needed for synchrophasor measurement of voltage and current on the commercial power grid to determine the health of the system.
Astronomy timekeeping
Although any satellite navigation receiver that is performing its primary navigational function must have an internal time reference accurate to a small fraction of a second, the displayed time is often not as precise as the internal clock. Most inexpensive navigation receivers have one CPU that is multitasking. The highest-priority task for the CPU is maintaining satellite lock, not updating the display. Multicore CPUs for navigation systems can only be found on high-end products.
For serious precision timekeeping, a more specialized GPS device is needed. Some amateur astronomers, most notably those who time grazing lunar occultation events when the moon blocks the light from stars and planets, require the highest precision available for persons working outside large research institutions. The Web site of the International Occultation Timing Association has detailed technical information about precision timekeeping for the amateur astronomer.
Daylight saving time
Various formats listed above include a flag indicating the status of daylight saving time (DST) in the home country of the transmitter. This signal is typically used by clocks to adjust the displayed time to meet user expectations.
See also
Casio Wave Ceptor
Clock network
Speaking clock
Standard frequency and time signal service
Time from NPL
Time and frequency transfer
Time synchronization in North America
References
External links
IOTA Observers Manual This manual from the International Occultation Timing Association has very extensive details on methods of accurate time measurement.
NIST website: WWVB Radio Controlled Clocks
NTP Project Development Website
Clocks
Watches
Clock | Radio clock | [
"Physics",
"Technology",
"Engineering"
] | 2,362 | [
"Information and communications technology",
"Machines",
"Telecommunications engineering",
"Clocks",
"Radio technology",
"Measuring instruments",
"Physical systems"
] |
164,524 | https://en.wikipedia.org/wiki/Jam%20sync | Jam sync refers to the practice of applying a phase hit to a system to bring it in synchronization with another. The term originates from the use of this technique to replace defective time code on a video tape recording by replacing it with a new time code sequence, which may be an extension of a previous good time code sequence on an earlier part of the source material.
Synchronization | Jam sync | [
"Engineering"
] | 81 | [
"Telecommunications engineering",
"Synchronization"
] |
164,547 | https://en.wikipedia.org/wiki/Sudden%20stratospheric%20warming | A sudden stratospheric warming (SSW) is an event in which polar stratospheric temperatures rise by several tens of kelvins (up to increases of about 50 °C (90 °F)) over the course of a few days. The warming is preceded by a slowing then reversal of the westerly winds in the stratospheric polar vortex, commonly measured at 60 ° latitude at the 10 hPa level. SSWs occur about six times per decade in the northern hemisphere (NH), and about once every 20-30 years in the southern hemisphere (SH). In the SH, SSW accompanied by a reversal of the vortex westerly (which is soon after followed by a vortex recovery) was observed once during the period 1979–2024; this was in September 2002. Stratospheric warming in September 2019 was comparable to or even greater than that of 2002, but the wind reversal did not occur.
History
The first continued measurements of the stratosphere were taken by Richard Scherhag in 1951 using radiosondes to take reliable temperature readings in the upper stratosphere (~40 km) and he became the first to observe stratospheric warming on 27 January 1952. After his discovery, he assembled a team of meteorologists at the Free University of Berlin specifically to study the stratosphere, and this group continued to map the NH stratospheric temperature and geopotential height for many years using radiosondes and rocketsondes.
When the weather satellites era began, meteorological measurements became far more frequent. Although satellites were primarily used for the troposphere, they also recorded data for the stratosphere. Today both satellites and stratospheric radiosondes are used to take measurements of the stratosphere.
Classification and description
SSW is closely associated with polar vortex breakdown. Meteorologists typically classify vortex breakdown into three categories: major, minor, and final. No unambiguous standard definition of these has so far been adopted. However, differences in the methodology to detect SSWs are not relevant as long as circulation in the polar stratosphere reverses. "Major SSWs occur when the winter polar stratospheric westerlies reverse to easterlies. In minor warmings, the polar temperature gradient reverses but the circulation does not, and in final warmings, the vortex breaks down and remains easterly until the following boreal autumn". However, this classification is based on the NH SSWs, as no major SSW by this definition has been observed in the SH in winter.
Sometimes a fourth category, the Canadian warming, is included because of its unique and distinguishing structure and evolution.
"There are two main types of SSW: displacement events in which the stratospheric polar vortex is displaced from the pole and split events in which the vortex splits into two or more vortices. Some SSWs are a combination of both types".
Major
These occur when the westerly winds at 60°N and 10 hPa reverse, i.e. become easterly. A complete disruption of the polar vortex is observed and the vortex will either be split into daughter vortices, or displaced from its normal location over the pole.
According to the World Meteorological Organization's Commission for Atmospheric Sciences: "a stratospheric warming can be said to be major if at 10 mb or below the latitudinal mean temperature increases poleward from 60 degree latitude and an associated circulation reversal is observed (that is, the prevailing mean westerly winds poleward of 60° latitude are succeeded by mean easterlies in the same area)."
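This criterion translates directly into a simple diagnostic. The Python sketch below (an added illustration; the function name and toy data are assumptions, and the accompanying temperature-gradient condition is not checked) flags days on which a daily series of zonal-mean zonal wind at 60° latitude and 10 hPa turns from westerly to easterly:

import numpy as np

def flag_wind_reversals(u_wind):
    # Indices where westerly (positive) wind becomes easterly (<= 0),
    # i.e. candidate major-SSW central dates under the wind criterion.
    u = np.asarray(u_wind, dtype=float)
    return np.where((u[:-1] > 0) & (u[1:] <= 0))[0] + 1

# Toy series [m/s]: strong vortex, rapid deceleration, reversal, recovery.
u = [40, 35, 20, 5, -10, -15, -5, 10, 25]
print(flag_wind_reversals(u))  # [4] -> the reversal occurs on day 4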
Minor
Minor warmings are similar to major warmings; however, they are less dramatic: the westerly winds are slowed but do not reverse. Therefore, a breakdown of the vortex before its final breakdown is never observed. All the SH SSWs observed since 1979 were minor warmings except for that in September 2002.
McInturff cites the WMO's Commission for Atmospheric Sciences: "a stratospheric warming is called minor if a significant temperature increase is observed (that is, at least 25 degrees in a period of a week or less) at any stratospheric level in any area of the wintertime hemisphere. The polar vortex is not broken down and the wind reversal from westerly to easterly is less extensive."
Final
The radiative cycle in the stratosphere means that during winter the mean flow is westerly and during summer it is easterly. A final warming occurs on this transition, so that the polar vortex winds change direction for the warming and do not change back until the following winter. This is because the stratosphere has entered the summer easterly phase. It is final because another warming cannot occur over the summer, so it is the final warming of the current winter. Most of the SH SSWs fall into this category as their onsets most commonly occur sometime in austral spring months, and the stratospheric wind and temperature anomalies tend to persist until early summer. In this sense, SH SSWs represent faster-than-normal seasonal march of the westerly polar vortex.
Canadian
Canadian warmings occur in early winter in the stratosphere of the NH, typically from mid-November to early December. They have no counterpart in the SH.
Dynamics
In a usual NH winter, several minor warming events occur, with a major event occurring roughly every two years. One reason for major stratospheric warmings in the NH is that orography and land-sea temperature contrasts are responsible for the generation of long (wavenumber 1 or 2) Rossby waves in the troposphere. These planetary-scale waves travel upward to the stratosphere and dissipate there, decelerating the westerly winds and warming the Arctic. This is the reason that major warmings are usually only observed in the NH, with an exception observed in September 2002. As the SH is largely an ocean hemisphere, the planetary-scale wave activity is much weaker, and the SH vortex westerly is much stronger in winter, which partly explains why major SSW has not been observed in the SH winter at least in the instrumental observation era.
At an initial time, a blocking-type circulation pattern becomes established in the troposphere. This blocking pattern causes Rossby waves with zonal wavenumber 1 and/or 2 to grow to unusually large amplitudes. The growing wave propagates into the stratosphere and decelerates the westerly mean zonal winds. Thus the polar night jet weakens and simultaneously becomes distorted by the growing planetary waves. Because the wave amplitude increases with decreasing density, this easterly acceleration process is not effective at fairly high levels. If the waves are sufficiently strong, the mean zonal flow may decelerate sufficiently so that the winter westerlies turn easterly. At this point planetary waves may no longer penetrate into the stratosphere. Hence, further upward transfer of energy is completely blocked and a very rapid easterly acceleration and the polar warming occur at this critical level, which must then move downward until eventually the warming and zonal wind reversal affect the entire polar stratosphere. This wave-mean flow interaction explains the stratosphere-troposphere downward coupling during the SH SSW events as well. The upward propagation of planetary waves and their interaction with the stratospheric mean flow is traditionally diagnosed via so-called Eliassen-Palm fluxes.
There is a link between sudden stratospheric warmings and the quasi-biennial oscillation (QBO): if the QBO is in its easterly phase, the atmospheric waveguide is modified in such a way that upward-propagating Rossby waves are focused on the polar vortex, intensifying their interaction with the mean flow. Thus, there exists a statistically significant imbalance between the frequency of sudden stratospheric warmings if these events are grouped according to the QBO phase (easterly or westerly). However, the QBO-polar vortex relationship is less statistically significant in the SH.
Weather and climate effects
Although sudden stratospheric warmings are mainly forced by planetary-scale waves which propagate up from the lower atmosphere, there is also a subsequent return effect of sudden stratospheric warmings on surface weather and climate. Following a sudden stratospheric warming, the high altitude westerly winds reverse and are replaced by easterlies. The easterly winds progress down through the atmosphere, often leading to a weakening of the tropospheric westerly winds, resulting in dramatic reductions in temperature in Northern Europe. This process can take a few days to a few weeks to occur.
Similar downward processes are found in the SH in the austral late spring to early summer seasons. SH SSWs in austral spring tend to cause the Antarctic ozone concentration to be higher than normal from spring to early summer, and both weaker vortex and higher Antarctic ozone act to cause the tropospheric jet to shift equatorward, which is expressed as a negative phase of the Southern Annular Mode (SAM) in the SH extratropical geopotential height and surface pressure fields in the subsequent late spring to early summer seasons. SSWs in austral spring have been found to result in warmer and drier conditions over eastern Australia during late spring-early summer, increasing the risk of forest/bushfires, but cooler and wetter conditions over Patagonia. Also, austral spring to late spring SSWs influence the Antarctic sea-ice extent in the subsequent early summer season.
Table of major mid-winter SSW events in reanalyses products
See also
Polar amplification
Teleconnection
References
Further reading
External links
UK Met Office: What is a sudden stratospheric warming (SSW)?
Weather and Climate Discussion, Reading Meteorology WCD Blog: Sudden Stratospheric Stirrings
GEOS-5 Analyses and Forecasts of the Major Stratospheric Sudden Warming of January 2013 NASA Global Modelling and Assimilation Office
Atmospheric dynamics | Sudden stratospheric warming | [
"Chemistry"
] | 2,059 | [
"Atmospheric dynamics",
"Fluid dynamics"
] |
164,557 | https://en.wikipedia.org/wiki/Semaphore%20%28programming%29 | In computer science, a semaphore is a variable or abstract data type used to control access to a common resource by multiple threads and avoid critical section problems in a concurrent system such as a multitasking operating system. Semaphores are a type of synchronization primitive. A trivial semaphore is a plain variable that is changed (for example, incremented or decremented, or toggled) depending on programmer-defined conditions.
A useful way to think of a semaphore as used in a real-world system is as a record of how many units of a particular resource are available, coupled with operations to adjust that record safely (i.e., to avoid race conditions) as units are acquired or become free, and, if necessary, wait until a unit of the resource becomes available.
Though semaphores are useful for preventing race conditions, they do not guarantee their absence. Semaphores that allow an arbitrary resource count are called counting semaphores, while semaphores that are restricted to the values 0 and 1 (or locked/unlocked, unavailable/available) are called binary semaphores and are used to implement locks.
The semaphore concept was invented by Dutch computer scientist Edsger Dijkstra in 1962 or 1963, when Dijkstra and his team were developing an operating system for the Electrologica X8. That system eventually became known as the THE multiprogramming system.
Library analogy
Suppose a physical library has ten identical study rooms, to be used by one student at a time. Students must request a room from the front desk. If no rooms are free, students wait at the desk until someone relinquishes a room. When a student has finished using a room, the student must return to the desk and indicate that the room is free.
In the simplest implementation, the clerk at the front desk knows only the number of free rooms available. This requires that all of the students use their room while they have signed up for it and return it when they are done. When a student requests a room, the clerk decreases this number. When a student releases a room, the clerk increases this number. The room can be used for as long as desired, and so it is not possible to book rooms ahead of time.
In this scenario, the front desk count-holder represents a counting semaphore, the rooms are the resource, and the students represent processes/threads. The value of the semaphore in this scenario is initially 10, with all rooms empty. When a student requests a room, they are granted access, and the value of the semaphore is changed to 9. After the next student comes, it drops to 8, then 7, and so on. If someone requests a room and the current value of the semaphore is 0, they are forced to wait until a room is freed (when the count is increased from 0). If one of the rooms was released, but there are several students waiting, then any method can be used to select the one who will occupy the room (like FIFO or randomly picking one). And of course, a student must inform the clerk about releasing their room only after really leaving it.
Important observations
When used to control access to a pool of resources, a semaphore tracks only how many resources are free. It does not keep track of which of the resources are free. Some other mechanism (possibly involving more semaphores) may be required to select a particular free resource.
The paradigm is especially powerful because the semaphore count may serve as a useful trigger for a number of different actions. The librarian above may turn the lights off in the study hall when there are no students remaining, or may place a sign that says the rooms are very busy when most of the rooms are occupied.
The success of the protocol requires applications to follow it correctly. Fairness and safety are likely to be compromised (which practically means a program may behave slowly, act erratically, hang, or crash) if even a single process acts incorrectly. This includes:
requesting a resource and forgetting to release it;
releasing a resource that was never requested;
holding a resource for a long time without needing it;
using a resource without requesting it first (or after releasing it).
Even if all processes follow these rules, multi-resource deadlock may still occur when there are different resources managed by different semaphores and when processes need to use more than one resource at a time, as illustrated by the dining philosophers problem.
Semantics and implementation
Counting semaphores are equipped with two operations, historically denoted as P and V (see the Operation names section below for alternative names). Operation V increments the semaphore S, and operation P decrements it.
The value of the semaphore S is the number of units of the resource that are currently available. The P operation wastes time or sleeps until a resource protected by the semaphore becomes available, at which time the resource is immediately claimed. The V operation is the inverse: it makes a resource available again after the process has finished using it.
One important property of semaphore S is that its value cannot be changed except by using the V and P operations.
A simple way to understand wait (P) and signal (V) operations is:
wait (P): Decrements the value of the semaphore variable by 1. If the new value of the semaphore variable is negative, the process executing wait is blocked (i.e., added to the semaphore's queue). Otherwise, the process continues execution, having used a unit of the resource.
signal (V): Increments the value of the semaphore variable by 1. After the increment, if the pre-increment value was negative (meaning there are processes waiting for a resource), it transfers a blocked process from the semaphore's waiting queue to the ready queue.
Many operating systems provide efficient semaphore primitives that unblock a waiting process when the semaphore is incremented. This means that processes do not waste time checking the semaphore value unnecessarily.
The counting semaphore concept can be extended with the ability to claim or return more than one "unit" from the semaphore, a technique implemented in Unix. The modified V and P operations are as follows, using square brackets to indicate atomic operations, i.e., operations that appear indivisible to other processes:
function V(semaphore S, integer I):
[S ← S + I]
function P(semaphore S, integer I):
repeat:
[if S ≥ I:
S ← S − I
break]
However, the rest of this section refers to semaphores with unary V and P operations, unless otherwise specified.
To avoid starvation, a semaphore has an associated queue of processes (usually with FIFO semantics). If a process performs a P operation on a semaphore that has the value zero, the process is added to the semaphore's queue and its execution is suspended. When another process increments the semaphore by performing a V operation, and there are processes on the queue, one of them is removed from the queue and resumes execution. When processes have different priorities, the queue may be ordered by priority, so that the highest-priority process is taken from the queue first.
If the implementation does not ensure atomicity of the increment, decrement, and comparison operations, there is a risk of increments or decrements being forgotten, or of the semaphore value becoming negative. Atomicity may be achieved by using a machine instruction that can read, modify, and write the semaphore in a single operation. Without such a hardware instruction, an atomic operation may be synthesized by using a software mutual exclusion algorithm. On uniprocessor systems, atomic operations can be ensured by temporarily suspending preemption or disabling hardware interrupts. This approach does not work on multiprocessor systems where it is possible for two programs sharing a semaphore to run on different processors at the same time. To solve this problem in a multiprocessor system, a locking variable can be used to control access to the semaphore. The locking variable is manipulated using a test-and-set-lock command.
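As an illustrative sketch of the locking-variable idea (not a production implementation): Python exposes no raw test-and-set instruction, but the non-blocking acquire of a threading.Lock behaves like an atomic try-lock, which is enough to show the spin-wait structure:

import threading

class SpinGuard:
    # Spin-waits on an atomic try-lock; a real test-and-set loop would use
    # a hardware instruction, for which Lock.acquire(False) stands in here.
    def __init__(self):
        self._lock = threading.Lock()

    def enter(self):
        while not self._lock.acquire(blocking=False):
            pass  # spin until the atomic try-lock succeeds

    def leave(self):
        self._lock.release()

A semaphore implementation could hold such a guard around its counter updates to keep them atomic on a multiprocessor.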
Examples
Trivial example
Consider a variable A and a boolean variable S. A is only accessed when S is marked true. Thus, S is a semaphore for A.
One can imagine a stoplight signal (S) just before a train station (A). In this case, if the signal is green, then one can enter the train station. If it is yellow or red (or any other color), the train station cannot be accessed.
Login queue
Consider a system that can only support ten users (S=10). Whenever a user logs in, P is called, decrementing the semaphore S by 1. Whenever a user logs out, V is called, incrementing S by 1 representing a login slot that has become available. When S is 0, any users wishing to log in must wait until S increases. The login request is enqueued onto a FIFO queue until a slot is freed. Mutual exclusion is used to ensure that requests are enqueued in order. Whenever S increases (login slots available), a login request is dequeued, and the user owning the request is allowed to log in. If S is already greater than 0, then login requests are immediately dequeued.
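The scenario maps directly onto a counting semaphore primitive. The sketch below (an added illustration) models the ten login slots with Python's threading.BoundedSemaphore; note that CPython does not guarantee FIFO wake-up order, so it captures the counting behaviour rather than the strict queue discipline described above:

import threading
import time

slots = threading.BoundedSemaphore(10)  # S = 10 login slots

def session(user_id):
    slots.acquire()        # P: claim a slot, blocking while S == 0
    try:
        time.sleep(0.1)    # the user is logged in and doing work
    finally:
        slots.release()    # V: free the slot for a waiting user

threads = [threading.Thread(target=session, args=(u,)) for u in range(25)]
for t in threads:
    t.start()
for t in threads:
    t.join()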
Producer–consumer problem
In the producer–consumer problem, one process (the producer) generates data items and another process (the consumer) receives and uses them. They communicate using a queue of maximum size N and are subject to the following conditions:
the consumer must wait for the producer to produce something if the queue is empty;
the producer must wait for the consumer to consume something if the queue is full.
The semaphore solution to the producer–consumer problem tracks the state of the queue with two semaphores: emptyCount, the number of empty places in the queue, and fullCount, the number of elements in the queue. To maintain integrity, emptyCount may be lower (but never higher) than the actual number of empty places in the queue, and fullCount may be lower (but never higher) than the actual number of items in the queue. Empty places and items represent two kinds of resources, empty boxes and full boxes, and the semaphores emptyCount and fullCount maintain control over these resources.
The binary semaphore useQueue ensures that the integrity of the state of the queue itself is not compromised, for example, by two producers attempting to add items to an empty queue simultaneously, thereby corrupting its internal state. Alternatively a mutex could be used in place of the binary semaphore.
The emptyCount is initially N, fullCount is initially 0, and useQueue is initially 1.
The producer does the following repeatedly:
produce:
P(emptyCount)
P(useQueue)
putItemIntoQueue(item)
V(useQueue)
V(fullCount)
The consumer does the following repeatedly:
consume:
P(fullCount)
P(useQueue)
item ← getItemFromQueue()
V(useQueue)
V(emptyCount)
Below is a worked example:
A single consumer enters its critical section. Since fullCount is 0, the consumer blocks.
Several producers enter the producer critical section. No more than N producers may enter their critical section due to emptyCount constraining their entry.
The producers, one at a time, gain access to the queue through useQueue and deposit items in the queue.
Once the first producer exits its critical section, fullCount is incremented, allowing one consumer to enter its critical section.
Note that emptyCount may be much lower than the actual number of empty places in the queue, for example, where many producers have decremented it but are waiting their turn on useQueue before filling empty places. Note that emptyCount + fullCount ≤ N always holds, with equality if and only if no producers or consumers are executing their critical sections.
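The pseudocode above translates almost line for line into runnable Python (an added illustration; the names are ours). threading.Semaphore plays the role of the counting semaphores, and a semaphore initialized to 1 serves as useQueue:

import threading
from collections import deque

N = 8
queue = deque()
empty_count = threading.Semaphore(N)  # emptyCount = N
full_count = threading.Semaphore(0)   # fullCount = 0
use_queue = threading.Semaphore(1)    # binary semaphore guarding the queue

def producer(items):
    for item in items:
        empty_count.acquire()         # P(emptyCount)
        use_queue.acquire()           # P(useQueue)
        queue.append(item)
        use_queue.release()           # V(useQueue)
        full_count.release()          # V(fullCount)

def consumer(count):
    for _ in range(count):
        full_count.acquire()          # P(fullCount)
        use_queue.acquire()           # P(useQueue)
        item = queue.popleft()
        use_queue.release()           # V(useQueue)
        empty_count.release()         # V(emptyCount)
        print("consumed", item)

p = threading.Thread(target=producer, args=(range(20),))
c = threading.Thread(target=consumer, args=(20,))
p.start(); c.start(); p.join(); c.join()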
Passing the baton pattern
The "Passing the baton" pattern proposed by Gregory R. Andrews is a generic scheme to solve many complex concurrent programming problems in which multiple processes compete for the same resource with complex access conditions (such as satisfying specific priority criteria or avoiding starvation). Given a shared resource, the pattern requires a private "priv" semaphore (initialized to zero) for each process (or class of processes) involved and a single mutual exclusion "mutex" semaphore (initialized to one).
The pseudo-code for each process is:
void process(int proc_id, int res_id)
{
resource_acquire(proc_id, res_id);
<use the resource res_id>;
resource_release(proc_id, res_id);
}
The pseudo-code of the resource acquisition and release primitives are:
void resource_acquire(int proc_id, int res_id)
{
P(mutex);
if(<the condition to access res_id is not verified for proc_id>)
{
<indicate that proc_id is suspended for res_id>;
V(mutex);
P(priv[proc_id]);
<indicate that proc_id is not suspended for res_id anymore>;
}
<indicate that proc_id is accessing the resource>;
pass_the_baton(); // See below
}
void resource_release(int proc_id, int res_id)
{
P(mutex);
<indicate that proc_id is not accessing the resource res_id anymore>;
pass_the_baton(); // See below
}
Both primitives in turn use the "pass_the_baton" method, whose pseudo-code is:
void pass_the_baton(int res_id)
{
if <the condition to access res_id is true for at least one suspended process>
{
int p = <choose the process to wake>;
V(priv[p]);
}
else
{
V(mutex);
}
}
Remarks
The pattern is called "passing the baton" because a process that releases the resource as well as a freshly reactivated process will activate at most one suspended process, that is, shall "pass the baton to it". The mutex is released only when a process is going to suspend itself (resource_acquire), or when pass_the_baton is unable to reactivate another suspended process.
Operation names
The canonical names V and P come from the initials of Dutch words. V is generally explained as verhogen ("increase"). Several explanations have been offered for P, including proberen ("to test" or "to try"), passeren ("pass"), and pakken ("grab"). Dijkstra's earliest paper on the subject gives passering ("passing") as the meaning for P, and vrijgave ("release") as the meaning for V. It also mentions that the terminology is taken from that used in railroad signals. Dijkstra subsequently wrote that he intended P to stand for prolaag, short for probeer te verlagen, literally "try to reduce", or to parallel the terms used in the other case, "try to decrease".
In ALGOL 68, the Linux kernel, and in some English textbooks, the V and P operations are called, respectively, up and down. In software engineering practice, they are often called signal and wait, release and acquire (standard Java library), or post and pend. Some texts call them vacate and procure to match the original Dutch initials.
Semaphores vs. mutexes
A mutex is a locking mechanism that sometimes uses the same basic implementation as the binary semaphore. However, they differ in how they are used. While a binary semaphore may be colloquially referred to as a mutex, a true mutex has a more specific use-case and definition, in that only the task that locked the mutex is supposed to unlock it. This constraint aims to handle some potential problems of using semaphores:
Priority inversion: If the mutex knows who locked it and is supposed to unlock it, it is possible to promote the priority of that task whenever a higher-priority task starts waiting on the mutex.
Premature task termination: Mutexes may also provide deletion safety, where the task holding the mutex cannot be accidentally deleted.
Termination deadlock: If a mutex-holding task terminates for any reason, the OS can release the mutex and signal waiting tasks of this condition.
Recursion deadlock: a task is allowed to lock a reentrant mutex multiple times, provided that it unlocks it an equal number of times.
Accidental release: An error is raised on the release of the mutex if the releasing task is not its owner.
See also
Async/await
Flag (programming)
Synchronization (computer science)
Cigarette smokers problem
Dining philosophers problem
Readers–writers problem
Sleeping barber problem
Monitor
Spurious wakeup
References
External links
Introductions
Hilsheimer, Volker (2004). "Implementing a Read/Write Mutex" (Web page). Qt Quarterly, Issue 11 - Q3 2004
References
(September 1965)
Computer science
Concurrency control
Concurrency (computer science)
Parallel computing
Computer-mediated communication
Edsger W. Dijkstra
Dutch inventions | Semaphore (programming) | [
"Technology",
"Engineering"
] | 3,707 | [
"Telecommunications engineering",
"Information systems",
"Computing and society",
"Computer-mediated communication",
"Synchronization"
] |
164,570 | https://en.wikipedia.org/wiki/Wavenumber | In the physical sciences, the wavenumber (or wave number), also known as repetency, is the spatial frequency of a wave, measured in cycles per unit distance (ordinary wavenumber) or radians per unit distance (angular wavenumber). It is analogous to temporal frequency, which is defined as the number of wave cycles per unit time (ordinary frequency) or radians per unit time (angular frequency).
In multidimensional systems, the wavenumber is the magnitude of the wave vector. The space of wave vectors is called reciprocal space. Wave numbers and wave vectors play an essential role in optics and the physics of wave scattering, such as X-ray diffraction, neutron diffraction, electron diffraction, and elementary particle physics. For quantum mechanical waves, the wavenumber multiplied by the reduced Planck constant is the canonical momentum.
Wavenumber can be used to specify quantities other than spatial frequency. For example, in optical spectroscopy, it is often used as a unit of temporal frequency assuming a certain speed of light.
Definition
Wavenumber, as used in spectroscopy and most chemistry fields, is defined as the number of wavelengths per unit distance, typically centimeters (cm−1):

ν̃ = 1/λ,

where λ is the wavelength. It is sometimes called the "spectroscopic wavenumber". It equals the spatial frequency.
For example, a wavenumber in inverse centimeters can be converted to a frequency expressed in the unit gigahertz by multiplying by 29.9792458 (the speed of light, in centimeters per nanosecond); conversely, an electromagnetic wave at 29.9792458 GHz has a wavelength of 1 cm in free space.
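In code, the conversion is a single multiplication (an added Python illustration):

C_CM_PER_NS = 29.9792458  # speed of light in centimetres per nanosecond

def wavenumber_cm_to_ghz(nu_tilde):
    # Convert a spectroscopic wavenumber in cm^-1 to a frequency in GHz.
    return nu_tilde * C_CM_PER_NS

print(wavenumber_cm_to_ghz(1.0))     # 29.9792458 GHz, i.e. a 1 cm wave
print(wavenumber_cm_to_ghz(1000.0))  # ~29979 GHz (about 30 THz, mid-infrared)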
In theoretical physics, a wave number, defined as the number of radians per unit distance, sometimes called "angular wavenumber", is more often used:

k = 2π/λ.

When wavenumber is represented by the symbol ν̃, a frequency is still being represented, albeit indirectly. As described in the spectroscopy section, this is done through the relationship

ν_s/c = ν̃,

where ν_s is a frequency expressed in the unit hertz. This is done for convenience as frequencies tend to be very large.
Wavenumber has dimensions of reciprocal length, so its SI unit is the reciprocal of meters (m−1). In spectroscopy it is usual to give wavenumbers in cgs unit (i.e., reciprocal centimeters; cm−1); in this context, the wavenumber was formerly called the kayser, after Heinrich Kayser (some older scientific papers used this unit, abbreviated as K, where 1K = 1cm−1). The angular wavenumber may be expressed in the unit radian per meter (rad⋅m−1), or as above, since the radian is dimensionless.
For electromagnetic radiation in vacuum, wavenumber is directly proportional to frequency and to photon energy. Because of this, wavenumbers are used as a convenient unit of energy in spectroscopy.
Complex
A complex-valued wavenumber can be defined for a medium with complex-valued relative permittivity ε_r, relative permeability μ_r and refraction index n as:

k = k₀ √(ε_r μ_r) = k₀ n,

where k₀ is the free-space wavenumber, as above. The imaginary part of the wavenumber expresses attenuation per unit distance and is useful in the study of exponentially decaying evanescent fields.
Plane waves in linear media
The propagation factor of a sinusoidal plane wave propagating in the positive x direction in a linear material is given by

P = e^(−jkx),

where the complex wavenumber k = k′ − jk″ is determined by the angular frequency and the material properties, and

k′ = phase constant in the units of radians/meter
k″ = attenuation constant in the units of nepers/meter
ω = angular frequency
x = distance traveled in the x direction
σ = conductivity in siemens/meter
ε = ε′ − jε″ = complex permittivity
μ = μ′ − jμ″ = complex permeability
j = the imaginary unit, √(−1)
The sign convention is chosen for consistency with propagation in lossy media. If the attenuation constant is positive, then the wave amplitude decreases as the wave propagates in the x-direction.
Wavelength, phase velocity, and skin depth have simple relationships to the components of the wavenumber:

λ = 2π/k′,  v_p = ω/k′,  δ = 1/k″.
In wave equations
Here we assume that the wave is regular in the sense that the different quantities describing the wave such as the wavelength, frequency and thus the wavenumber are constants. See wavepacket for discussion of the case when these quantities are not constant.
In general, the angular wavenumber k (i.e. the magnitude of the wave vector) is given by

k = 2π/λ = 2πν/v_p = ω/v_p,

where ν is the frequency of the wave, λ is the wavelength, ω = 2πν is the angular frequency of the wave, and v_p is the phase velocity of the wave. The dependence of the wavenumber on the frequency (or more commonly the frequency on the wavenumber) is known as a dispersion relation.
For the special case of an electromagnetic wave in a vacuum, in which the wave propagates at the speed of light, k is given by:

k = ω/c = E/(ħc),

where E is the energy of the wave, ħ is the reduced Planck constant, and c is the speed of light in a vacuum.
For the special case of a matter wave, for example an electron wave, in the non-relativistic approximation (in the case of a free particle, that is, the particle has no potential energy):

k = 2π/λ = p/ħ = √(2mE)/ħ.

Here p is the momentum of the particle, m is the mass of the particle, E is the kinetic energy of the particle, and ħ is the reduced Planck constant.
Wavenumber is also used to define the group velocity.
In spectroscopy
In spectroscopy, "wavenumber" (in reciprocal centimeters, cm−1) refers to a temporal frequency (in hertz) which has been divided by the speed of light in vacuum (usually in centimeters per second, cm⋅s−1):

ν̃ = ν/c.
The historical reason for using this spectroscopic wavenumber rather than frequency is that it is a convenient unit when studying atomic spectra by counting fringes per cm with an interferometer: the spectroscopic wavenumber is the reciprocal of the wavelength of light in vacuum:

ν̃ = 1/λ_vac,
which remains essentially the same in air, and so the spectroscopic wavenumber is directly related to the angles of light scattered from diffraction gratings and the distance between fringes in interferometers, when those instruments are operated in air or vacuum. Such wavenumbers were first used in the calculations of Johannes Rydberg in the 1880s. The Rydberg–Ritz combination principle of 1908 was also formulated in terms of wavenumbers. A few years later spectral lines could be understood in quantum theory as differences between energy levels, energy being proportional to wavenumber, or frequency. However, spectroscopic data kept being tabulated in terms of spectroscopic wavenumber rather than frequency or energy.
For example, the spectroscopic wavenumbers of the emission spectrum of atomic hydrogen are given by the Rydberg formula:

ν̃ = R(1/n_f² − 1/n_i²),

where R is the Rydberg constant, and n_i and n_f are the principal quantum numbers of the initial and final levels respectively (n_i is greater than n_f for emission).
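As a worked example, the Python sketch below (an added illustration) evaluates the formula for the first Balmer line of hydrogen (n_i = 3 to n_f = 2); it uses the infinite-nuclear-mass Rydberg constant R∞, so the result differs very slightly from the measured hydrogen value:

R_INF = 109737.31568  # Rydberg constant R_infinity in cm^-1

def hydrogen_wavenumber(n_i, n_f):
    # Spectroscopic wavenumber (cm^-1) of the n_i -> n_f emission line.
    return R_INF * (1.0 / n_f**2 - 1.0 / n_i**2)

nu = hydrogen_wavenumber(3, 2)         # ~15241 cm^-1 (H-alpha)
print(nu, "cm^-1 ->", 1e7 / nu, "nm")  # vacuum wavelength, ~656 nm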
A spectroscopic wavenumber can be converted into energy per photon E by Planck's relation:

E = hcν̃.
It can also be converted into wavelength of light:

λ = 1/(n ν̃),

where n is the refractive index of the medium. Note that the wavelength of light changes as it passes through different media; however, the spectroscopic wavenumber (i.e., frequency) remains constant.
Often spatial frequencies are stated by some authors "in wavenumbers", incorrectly transferring the name of the quantity to the CGS unit cm−1 itself.
See also
Angular wavelength
Spatial frequency
Refractive index
Zonal wavenumber
References
External links
Wave mechanics
Scalar physical quantities
Units of frequency
Quotients | Wavenumber | [
"Physics",
"Mathematics"
] | 1,562 | [
"Scalar physical quantities",
"Physical phenomena",
"Physical quantities",
"Quotients",
"Quantity",
"Classical mechanics",
"Waves",
"Wave mechanics",
"Arithmetic",
"Units of frequency",
"Units of measurement"
] |
164,571 | https://en.wikipedia.org/wiki/Atmospheric%20waveguide | An atmospheric waveguide is an atmospheric flow feature that improves the propagation of certain atmospheric waves.
The effect arises because wave parameters such as group velocity or vertical wavenumber depend on mean flow direction and strength. Thus, for instance, westerlies might be a good waveguide for eastward-traveling waves, but might strongly dissipate westward-traveling waves, by increasing or decreasing their vertical wavenumber, respectively. Modification of the waves' group velocity will change their meridional propagation speed, directing them more polewards or more equatorwards.
References
Atmosphere | Atmospheric waveguide | [
"Astronomy"
] | 116 | [
"Astronomy stubs",
"Planetary science stubs"
] |
164,572 | https://en.wikipedia.org/wiki/Dissipation | In thermodynamics, dissipation is the result of an irreversible process that affects a thermodynamic system. In a dissipative process, energy (internal, bulk flow kinetic, or system potential) transforms from an initial form to a final form, where the capacity of the final form to do thermodynamic work is less than that of the initial form. For example, transfer of energy as heat is dissipative because it is a transfer of energy other than by thermodynamic work or by transfer of matter, and spreads previously concentrated energy. Following the second law of thermodynamics, in conduction and radiation from one body to another, the entropy varies with temperature (reduces the capacity of the combination of the two bodies to do work), but never decreases in an isolated system.
In mechanical engineering, dissipation is the irreversible conversion of mechanical energy into thermal energy with an associated increase in entropy.
Processes with defined local temperature produce entropy at a certain rate. The entropy production rate times local temperature gives the dissipated power. Important examples of irreversible processes are: heat flow through a thermal resistance, fluid flow through a flow resistance, diffusion (mixing), chemical reactions, and electric current flow through an electrical resistance (Joule heating).
Definition
Dissipative thermodynamic processes are essentially irreversible because they produce entropy. Planck regarded friction as the prime example of an irreversible thermodynamic process. In a process in which the temperature is locally continuously defined, the local density of rate of entropy production times local temperature gives the local density of dissipated power.
A particular occurrence of a dissipative process cannot be described by a single individual Hamiltonian formalism. A dissipative process requires a collection of admissible individual Hamiltonian descriptions, exactly which one describes the actual particular occurrence of the process of interest being unknown. This includes friction and hammering, and all similar forces that result in decoherency of energy—that is, conversion of coherent or directed energy flow into an indirected or more isotropic distribution of energy.
Energy
"The conversion of mechanical energy into heat is called energy dissipation." – François Roddier The term is also applied to the loss of energy due to generation of unwanted heat in electric and electronic circuits.
Computational physics
In computational physics, numerical dissipation (also known as "Numerical diffusion") refers to certain side-effects that may occur as a result of a numerical solution to a differential equation. When the pure advection equation, which is free of dissipation, is solved by a numerical approximation method, the energy of the initial wave may be reduced in a way analogous to a diffusional process. Such a method is said to contain 'dissipation'. In some cases, "artificial dissipation" is intentionally added to improve the numerical stability characteristics of the solution.
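A standard way to see this effect is to advect a sharp profile with the first-order upwind scheme: the exact solution merely translates the profile, while the numerical one also smears it. The following Python sketch is an added illustration with arbitrarily chosen parameters:

import numpy as np

# First-order upwind scheme for the advection equation u_t + a u_x = 0
# on a periodic domain; the scheme is stable for CFL <= 1 but dissipative.
nx, a, cfl = 200, 1.0, 0.5
dx = 1.0 / nx
dt = cfl * dx / a
x = np.arange(nx) * dx
u = np.where(np.abs(x - 0.3) < 0.05, 1.0, 0.0)  # initial square pulse

for _ in range(200):
    u = u - a * dt / dx * (u - np.roll(u, 1))   # upwind difference (a > 0)

print("peak after advection:", u.max())  # < 1.0: amplitude lost to dissipation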
Mathematics
A formal, mathematical definition of dissipation, as commonly used in the mathematical study of measure-preserving dynamical systems, is given in the article wandering set.
Examples
In hydraulic engineering
Dissipation is the process of converting mechanical energy of downward-flowing water into thermal and acoustical energy. Various devices are designed in stream beds to reduce the kinetic energy of flowing waters to reduce their erosive potential on banks and river bottoms. Very often, these devices look like small waterfalls or cascades, where water flows vertically or over riprap to lose some of its kinetic energy.
Irreversible processes
Important examples of irreversible processes are:
Heat flow through a thermal resistance
Fluid flow through a flow resistance
Diffusion (mixing)
Chemical reactions
Electrical current flow through an electrical resistance (Joule heating).
Waves or oscillations
Waves or oscillations lose energy over time, typically through friction or turbulence. In many cases, the "lost" energy raises the temperature of the system. For example, a wave that loses amplitude is said to dissipate. The precise nature of the effects depends on the nature of the wave: an atmospheric wave, for instance, may dissipate close to the surface due to friction with the land mass, and at higher levels due to radiative cooling.
History
The concept of dissipation was introduced in the field of thermodynamics by William Thomson (Lord Kelvin) in 1852. Lord Kelvin deduced that a subset of the above-mentioned irreversible dissipative processes will occur unless a process is governed by a "perfect thermodynamic engine". The processes that Lord Kelvin identified were friction, diffusion, conduction of heat and the absorption of light.
See also
Entropy production
General equation of heat transfer
Flood control
Principle of maximum entropy
Two-dimensional gas
References
Thermodynamic processes
Thermodynamic entropy
Non-equilibrium thermodynamics
Dynamical systems | Dissipation | [
"Physics",
"Chemistry",
"Mathematics"
] | 1,016 | [
"Physical quantities",
"Non-equilibrium thermodynamics",
"Thermodynamic processes",
"Thermodynamic entropy",
"Entropy",
"Mechanics",
"Thermodynamics",
"Statistical mechanics",
"Dynamical systems"
] |
164,597 | https://en.wikipedia.org/wiki/Mean%20flow | In fluid dynamics, the fluid flow is often decomposed into a mean flow and deviations from the mean. The averaging can be done either in space or in time, or by ensemble averaging.
Example
Calculation of the mean flow may often be as simple as the mathematical mean: simply add up the given flow rates and then divide the final figure by the number of initial readings.
For example, given two discharges (Q) of 3 m³/s and 5 m³/s, we can use these flow rates to calculate the mean flow rate: Qmean = (3 + 5)/2 = 4 m³/s.
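In code form (a trivial added sketch):

def mean_flow(discharges):
    # Arithmetic mean of a list of flow rates; units are preserved.
    return sum(discharges) / len(discharges)

print(mean_flow([3, 5]))  # 4.0 m³/s, as in the example above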
See also
Generalized Lagrangian mean
References
Fluid dynamics | Mean flow | [
"Chemistry",
"Engineering"
] | 141 | [
"Piping",
"Chemical engineering",
"Fluid dynamics stubs",
"Fluid dynamics"
] |