Lophocytes are amoeba-like cells that move slowly through the mesohyl and secrete collagen fibres.
Collencytes are another type of collagen-producing cell.
Rhabdiferous cells secrete polysaccharides that also form part of the mesohyl.
Oocytes and spermatocytes are reproductive cells.
Sclerocytes secrete the mineralized spicules ("little spines") that form the skeletons of many sponges and in some species provide some defense against predators.
In addition to or instead of sclerocytes, demosponges have spongocytes that secrete a form of collagen that polymerizes into spongin, a thick fibrous material that stiffens the mesohyl.
Myocytes ("muscle cells") conduct signals and cause parts of the animal to contract.
"Grey cells" act as sponges' equivalent of an immune system.
Archaeocytes (or amoebocytes) are amoeba-like cells that are totipotent, in other words, each is capable of transformation into any other type of cell. They also have important roles in feeding and in clearing debris that block the ostia.
Many larval sponges possess neuron-less eyes that are based on cryptochromes. They mediate phototactic behavior.
Glass sponges present a distinctive variation on this basic plan. Their spicules, which are made of silica, form a scaffolding-like framework between whose rods the living tissue is suspended like a cobweb that contains most of the cell types. This tissue is a syncytium that in some ways behaves like many cells that share a single external membrane, and in others like a single cell with multiple nuclei.
Water flow and body structures
Most sponges work rather like chimneys: they take in water at the bottom and eject it from the osculum at the top. Since ambient currents are faster at the top, the suction effect that they produce by Bernoulli's principle does some of the work for free. Sponges can control the water flow by various combinations of wholly or partially closing the osculum and ostia (the intake pores) and varying the beat of the flagella, and may shut it down if there is a lot of sand or silt in the water.
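The "free" suction described above follows from Bernoulli's principle: faster ambient flow over the osculum means lower pressure there than at the slow-moving base, drawing water up through the sponge. A minimal sketch of that pressure difference (the current speeds and seawater density below are illustrative assumptions, not measured values):

```python
def bernoulli_pressure_drop(v_top, v_bottom, density=1025.0):
    """Pressure difference (Pa) between the slow water at the base and the
    faster water over the osculum. Along a streamline Bernoulli's principle
    gives P + 0.5 * rho * v**2 = constant, so the drop is
    0.5 * rho * (v_top**2 - v_bottom**2)."""
    return 0.5 * density * (v_top**2 - v_bottom**2)

# Hypothetical speeds: 10 cm/s over the osculum, 2 cm/s at the base,
# with seawater density ~1025 kg/m^3. Yields roughly 5 Pa of suction.
suction = bernoulli_pressure_drop(0.10, 0.02)
```

Even a few pascals of passive suction reduces the pumping work the flagella must do, which is why the chimney shape pays off.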
Although the layers of pinacocytes and choanocytes resemble the epithelia of more complex animals, they are not bound tightly by cell-to-cell connections or a basal lamina (thin fibrous sheet underneath). The flexibility of these layers and re-modeling of the mesohyl by lophocytes allow the animals to adjust their shapes throughout their lives to take maximum advantage of local water currents.
The simplest body structure in sponges is a tube or vase shape known as "asconoid", but this severely limits the size of the animal. The body structure is characterized by a stalk-like spongocoel surrounded by a single layer of choanocytes. If it is simply scaled up, the ratio of its volume to surface area increases, because surface increases as the square of length or width while volume increases proportionally to the cube. The amount of tissue that needs food and oxygen is determined by the volume, but the pumping capacity that supplies food and oxygen depends on the area covered by choanocytes. Asconoid sponges seldom exceed in diameter.
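The square-cube limit on asconoid sponges can be checked numerically. Modeling the body as an idealized cylinder whose inner wall is lined with choanocytes (a simplifying assumption; real asconoid bodies are irregular), the volume-to-surface ratio grows linearly with the scale factor, so each unit of pumping surface must support ever more tissue:

```python
import math

def volume_to_surface(radius, height):
    """Volume-to-lateral-surface ratio of an idealized cylindrical
    (asconoid) sponge whose inner wall is lined with choanocytes.
    Algebraically this reduces to radius / 2."""
    volume = math.pi * radius**2 * height        # tissue needing food/oxygen
    surface = 2 * math.pi * radius * height      # choanocyte-lined pumping wall
    return volume / surface

# Doubling every linear dimension doubles the burden on each unit of
# pumping surface (arbitrary units):
small = volume_to_surface(1.0, 10.0)   # 0.5
large = volume_to_surface(2.0, 20.0)   # 1.0
```

This is why simple scaling-up fails, and why the syconoid and leuconoid plans described next add choanocyte surface instead.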
Some sponges overcome this limitation by adopting the "syconoid" structure, in which the body wall is pleated. The inner pockets of the pleats are lined with choanocytes, which connect to the outer pockets of the pleats by ostia. This increase in the number of choanocytes and hence in pumping capacity enables syconoid sponges to grow up to a few centimeters in diameter.
The "leuconoid" pattern boosts pumping capacity further by filling the interior almost completely with mesohyl that contains a network of chambers lined with choanocytes and connected to each other and to the water intakes and outlet by tubes. Leuconoid sponges grow to over in diameter, and the fact that growth in any direction increases the number of choanocyte chambers enables them to take a wider range of forms, for example, "encrusting" sponges whose shapes follow those of the surfaces to which they attach. All freshwater and most shallow-water marine sponges have leuconoid bodies. The networks of water passages in glass sponges are similar to the leuconoid structure.
In all three types of structure, the cross-sectional area of the choanocyte-lined regions is much greater than that of the intake and outlet channels. This makes the flow slower near the choanocytes and thus makes it easier for them to trap food particles. For example, in Leuconia, a small leuconoid sponge about tall and in diameter, water enters each of more than 80,000 intake canals at 6 cm per minute. However, because Leuconia has more than 2 million flagellated chambers whose combined diameter is much greater than that of the canals, water flow through chambers slows to 3.6 cm per hour, making it easy for choanocytes to capture food. All the water is expelled through a single osculum at about 8.5 cm per second, fast enough to carry waste products some distance away.
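The Leuconia slowdown follows from conservation of volume flow: the same volume of water per unit time crosses every stage of the system, so flow speed varies inversely with total cross-sectional area. A sketch of that relation (the speeds are the figures quoted above; the implied 100-fold area ratio is our inference from them):

```python
def downstream_speed(v_in, area_in, area_out):
    """Continuity of incompressible flow: the volume flow rate Q = v * A
    is constant, so v_out = v_in * area_in / area_out."""
    return v_in * area_in / area_out

# The text's figures: 6 cm/min in the intake canals slowing to
# 3.6 cm/h (= 0.06 cm/min) in the flagellated chambers implies the
# chambers' combined cross-section is about 100x that of the canals.
v_canals = 6.0            # cm per minute
v_chambers = 3.6 / 60.0   # 3.6 cm per hour, converted to cm per minute
area_ratio = v_canals / v_chambers
```

The same relation explains the fast exit jet: the single osculum has a far smaller cross-section than the combined chambers, so the expelled water speeds back up.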
Skeleton
In zoology, a skeleton is any fairly rigid structure of an animal, irrespective of whether it has joints and irrespective of whether it is biomineralized. The mesohyl functions as an endoskeleton in most sponges, and is the only skeleton in soft sponges that encrust hard surfaces such as rocks. More commonly the mesohyl is stiffened by mineral spicules, by spongin fibers or both. Spicules, which are present in most but not all species, may be made of silica or calcium carbonate, and vary in shape from simple rods to three-dimensional "stars" with up to six rays. Spicules are produced by sclerocyte cells, and may be separate, connected by joints, or fused.
Some sponges also secrete exoskeletons that lie completely outside their organic components. For example, sclerosponges ("hard sponges") have massive calcium carbonate exoskeletons over which the organic matter forms a thin layer with choanocyte chambers in pits in the mineral. These exoskeletons are secreted by the pinacocytes that form the animals' skins.
Vital functions
Movement
Although adult sponges are fundamentally sessile animals, some marine and freshwater species can move across the sea bed at speeds of per day, as a result of amoeba-like movements of pinacocytes and other cells. A few species can contract their whole bodies, and many can close their oscula and ostia. Juveniles drift or swim freely, while adults are stationary.
Respiration, feeding and excretion
Sponges do not have distinct circulatory, respiratory, digestive, and excretory systems – instead, the water flow system supports all these functions. They filter food particles out of the water flowing through them. Particles larger than 50 micrometers cannot enter the ostia and pinacocytes consume them by phagocytosis (engulfing and intracellular digestion). Particles from 0.5 μm to 50 μm are trapped in the ostia, which taper from the outer to inner ends. These particles are consumed by pinacocytes or by archaeocytes which partially extrude themselves through the walls of the ostia. Bacteria-sized particles, below 0.5 micrometers, pass through the ostia and are caught and consumed by choanocytes. Since the smallest particles are by far the most common, choanocytes typically capture 80% of a sponge's food supply. Archaeocytes transport food packaged in vesicles from cells that directly digest food to those that do not. At least one species of sponge has internal fibers that function as tracks for use by nutrient-carrying archaeocytes, and these tracks also move inert objects.
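The division of feeding labor above amounts to a three-way sort on particle diameter. A minimal sketch using the thresholds from the text (the function name and return labels are ours, for illustration only):

```python
def capture_site(diameter_um):
    """Where a food particle of the given diameter (micrometers) is
    captured, per the size thresholds described in the text:
    >50 um at the surface, 0.5-50 um in the tapering ostia,
    <0.5 um (bacteria-sized) by the choanocytes."""
    if diameter_um > 50:
        return "pinacocytes (phagocytosis at the surface)"
    elif diameter_um >= 0.5:
        return "ostia (pinacocytes or archaeocytes)"
    else:
        return "choanocytes"

# Bacteria-sized particles, by far the most common food, reach the
# choanocytes, which is why they capture ~80% of the food supply:
site = capture_site(0.2)
```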
It used to be claimed that glass sponges could live on nutrients dissolved in sea water and were very averse to silt. However, a study in 2007 found no evidence of this and concluded that they extract bacteria and other micro-organisms from water very efficiently (about 79%) and process suspended sediment grains to extract such prey. Collar bodies digest food and distribute it wrapped in vesicles that are transported by dynein "motor" molecules along bundles of microtubules that run throughout the syncytium.
Sponges' cells absorb oxygen by diffusion from water into cells as water flows through the body, into which carbon dioxide and other soluble waste products such as ammonia also diffuse. Archeocytes remove mineral particles that threaten to block the ostia, transport them through the mesohyl and generally dump them into the outgoing water current, although some species incorporate them into their skeletons.
Carnivorous sponges
In waters where the supply of food particles is very poor, some species prey on crustaceans and other small animals. So far only 137 species have been discovered. Most belong to the family Cladorhizidae, but a few members of the Guitarridae and Esperiopsidae are also carnivores. In most cases, little is known about how they actually capture prey, although some species are thought to use either sticky threads or hooked spicules. Most carnivorous sponges live in deep waters, up to , and the development of deep-ocean exploration techniques is expected to lead to the discovery of several more. However, one species has been found in Mediterranean caves at depths of , alongside the more usual filter-feeding sponges. The cave-dwelling predators capture crustaceans under long by entangling them with fine threads, digest them by enveloping them with further threads over the course of a few days, and then return to their normal shape; there is no evidence that they use venom.
Most known carnivorous sponges have completely lost the water flow system and choanocytes. However, the genus Chondrocladia uses a highly modified water flow system to inflate balloon-like structures that are used for capturing prey.
Endosymbionts
Freshwater sponges often host green algae as endosymbionts within archaeocytes and other cells and benefit from nutrients produced by the algae. Many marine species host other photosynthesizing organisms, most commonly cyanobacteria but in some cases dinoflagellates. Symbiotic cyanobacteria may form a third of the total mass of living tissue in some sponges, and some sponges gain 48% to 80% of their energy supply from these micro-organisms. In 2008, a University of Stuttgart team reported that spicules made of silica conduct light into the mesohyl, where the photosynthesizing endosymbionts live. Sponges that host photosynthesizing organisms are most common in waters with relatively poor supplies of food particles and often have leafy shapes that maximize the amount of sunlight they collect.
A recently discovered carnivorous sponge that lives near hydrothermal vents hosts methane-eating bacteria and digests some of them.
"Immune" system
Sponges do not have the complex immune systems of most other animals. However, they reject grafts from other species but accept them from other members of their own species. In a few marine species, grey cells play the leading role in rejection of foreign material. When invaded, they produce a chemical that stops movement of other cells in the affected area, thus preventing the intruder from using the sponge's internal transport systems. If the intrusion persists, the grey cells concentrate in the area and release toxins that kill all cells in the area. The "immune" system can stay in this activated state for up to three weeks.
Reproduction
Asexual
Sponges have three asexual methods of reproduction: after fragmentation, by budding, and by producing gemmules. Fragments of sponges may be detached by currents or waves. They use the mobility of their pinacocytes and choanocytes and reshaping of the mesohyl to re-attach themselves to a suitable surface and then rebuild themselves as small but functional sponges over the course of several days. The same capabilities enable sponges that have been squeezed through a fine cloth to regenerate. A sponge fragment can only regenerate if it contains both collencytes to produce mesohyl and archeocytes to produce all the other cell types. A very few species reproduce by budding.
Gemmules are "survival pods" which a few marine sponges and many freshwater species produce by the thousands when dying and which some, mainly freshwater species, regularly produce in autumn. Spongocytes make gemmules by wrapping shells of spongin, often reinforced with spicules, round clusters of archeocytes that are full of nutrients. Freshwater gemmules may also include photosynthesizing symbionts. The gemmules then become dormant, and in this state can survive cold, drying out, lack of oxygen and extreme variations in salinity. Freshwater gemmules often do not revive until the temperature drops, stays cold for a few months and then reaches a near-"normal" level. When a gemmule germinates, the archeocytes round the outside of the cluster transform into pinacocytes, a membrane over a pore in the shell bursts, the cluster of cells slowly emerges, and most of the remaining archeocytes transform into other cell types needed to make a functioning sponge. Gemmules from the same species but different individuals can join forces to form one sponge. Some gemmules are retained within the parent sponge, and in spring it can be difficult to tell whether an old sponge has revived or been "recolonized" by its own gemmules.
Sexual
Most sponges are hermaphrodites (function as both sexes simultaneously), although sponges have no gonads (reproductive organs). Sperm are produced by choanocytes or entire choanocyte chambers that sink into the mesohyl and form spermatic cysts while eggs are formed by transformation of archeocytes, or of choanocytes in some species. Each egg generally acquires a yolk by consuming "nurse cells". During spawning, sperm burst out of their cysts and are expelled via the osculum. If they contact another sponge of the same species, the water flow carries them to choanocytes that engulf them but, instead of digesting them, metamorphose to an ameboid form and carry the sperm through the mesohyl to eggs, which in most cases engulf the carrier and its cargo.
A few species release fertilized eggs into the water, but most retain the eggs until they hatch. By retaining the eggs, the parents can transfer symbiotic microorganisms directly to their offspring through vertical transmission, while species that release their eggs into the water have to acquire symbionts horizontally (a combination of both is probably most common, where larvae with vertically transmitted symbionts also acquire others horizontally). There are four types of larvae, but all are lecithotrophic (non-feeding) balls of cells with an outer layer of cells whose flagella or cilia enable the larvae to move. After swimming for a few days the larvae sink and crawl until they find a place to settle. Most of the cells transform into archeocytes and then into the types appropriate for their locations in a miniature adult sponge.
Glass sponge embryos start by dividing into separate cells, but once 32 cells have formed they rapidly transform into larvae that externally are ovoid with a band of cilia round the middle that they use for movement, but internally have the typical glass sponge structure of spicules with a cobweb-like main syncytium draped around and between them and choanosyncytia with multiple collar bodies in the center. The larvae then leave their parents' bodies.
Meiosis
The cytological progression of poriferan oogenesis and spermatogenesis (gametogenesis) is very similar to that of other metazoans. Most of the genes from the classic set of meiotic genes, including genes for DNA recombination and double-strand break repair, that are conserved in eukaryotes are expressed in sponges (e.g. Geodia hentscheli and Geodia phlegraei). Since Porifera are considered to be the earliest-diverging animals, these findings indicate that the basic toolkit of meiosis, including capabilities for recombination and DNA repair, was present early in eukaryote evolution.
Life cycle
Sponges in temperate regions live for at most a few years, but some tropical species and perhaps some deep-ocean ones may live for 200 years or more. Some calcified demosponges grow by only per year and, if that rate is constant, specimens wide must be about 5,000 years old. Some sponges start sexual reproduction when only a few weeks old, while others wait until they are several years old.
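The age estimate for slow-growing calcified demosponges is simple arithmetic under the stated assumption of a constant growth rate. A sketch (the document's own growth rate and specimen width are elided, so the figures below are purely hypothetical stand-ins chosen to give an age of about 5,000 years):

```python
def estimated_age_years(width_mm, growth_mm_per_year):
    """Age estimate for a calcified sponge under the text's assumption
    of a constant growth rate: total width divided by the annual
    width increment."""
    return width_mm / growth_mm_per_year

# Hypothetical figures: a specimen 1,000 mm wide that adds 0.2 mm of
# width per year would be about 5,000 years old.
age = estimated_age_years(1000.0, 0.2)
```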
Coordination of activities
Adult sponges lack neurons or any other kind of nervous tissue. However, most species have the ability to perform movements that are coordinated all over their bodies, mainly contractions of the pinacocytes, squeezing the water channels and thus expelling excess sediment and other substances that may cause blockages. Some species can contract the osculum independently of the rest of the body. Sponges may also contract in order to reduce the area that is vulnerable to attack by predators. In cases where two sponges are fused, for example if there is a large but still unseparated bud, these contraction waves slowly become coordinated in both of the "Siamese twins". The coordinating mechanism is unknown, but may involve chemicals similar to neurotransmitters. However, glass sponges rapidly transmit electrical impulses through all parts of the syncytium, and use this to halt the motion of their flagella if the incoming water contains toxins or excessive sediment. Myocytes are thought to be responsible for closing the osculum and for transmitting signals between different parts of the body.
Sponges contain genes very similar to those that contain the "recipe" for the post-synaptic density, an important signal-receiving structure in the neurons of all other animals. However, in sponges these genes are only activated in "flask cells" that appear only in larvae and may provide some sensory capability while the larvae are swimming. This raises questions about whether flask cells represent the predecessors of true neurons or are evidence that sponges' ancestors had true neurons but lost them as they adapted to a sessile lifestyle.
Ecology
Habitats
Sponges are worldwide in their distribution, living in a wide range of ocean habitats, from the polar regions to the tropics. Most live in quiet, clear waters, because sediment stirred up by waves or currents would block their pores, making it difficult for them to feed and breathe. The greatest numbers of sponges are usually found on firm surfaces such as rocks, but some sponges can attach themselves to soft sediment by means of a root-like base.
Sponges are more abundant but less diverse in temperate waters than in tropical waters, possibly because organisms that prey on sponges are more abundant in tropical waters. Glass sponges are the most common in polar waters and in the depths of temperate and tropical seas, as their very porous construction enables them to extract food from these resource-poor waters with the minimum of effort. Demosponges and calcareous sponges are abundant and diverse in shallower non-polar waters.
The different classes of sponge live in different ranges of habitat:
{|class="wikitable"
|-
! Class !! Water type !! Depth !! Type of surface
|-
! Calcarea
|Marine ||less than ||Hard
|-
! Glass sponges
|Marine ||Deep ||Soft or firm sediment
|-
! Demosponges
|Marine, brackish; and about 150 freshwater species ||Inter-tidal to abyssal; a carnivorous demosponge has been found at ||Any
|}
As primary producers
Sponges with photosynthesizing endosymbionts produce up to three times more oxygen than they consume, as well as more organic matter than they consume. Such contributions to their habitats' resources are significant along Australia's Great Barrier Reef but relatively minor in the Caribbean.
Defenses
Many sponges shed spicules, forming a dense carpet several meters deep that keeps away echinoderms which would otherwise prey on the sponges. They also produce toxins that prevent other sessile organisms such as bryozoans or sea squirts from growing on or near them, making sponges very effective competitors for living space. One of many examples includes ageliferin.
A few species, such as the Caribbean fire sponge Tedania ignis, cause a severe rash in humans who handle them. Turtles and some fish feed mainly on sponges. It is often said that sponges produce chemical defenses against such predators. However, experiments have been unable to establish a relationship between the toxicity of chemicals produced by sponges and how they taste to fish, which would diminish the usefulness of chemical defenses as deterrents. Predation by fish may even help to spread sponges by detaching fragments. However, some studies have shown fish preferring non-chemically defended sponges, and another study found that high levels of coral predation did predict the presence of chemically defended species.
Glass sponges produce no toxic chemicals, and live in very deep water where predators are rare.
Predation
Spongeflies, also known as spongillaflies (Neuroptera, Sisyridae), are specialist predators of freshwater sponges. The female lays her eggs on vegetation overhanging water. The larvae hatch and drop into the water where they seek out sponges to feed on. They use their elongated mouthparts to pierce the sponge and suck the fluids within. The larvae of some species cling to the surface of the sponge while others take refuge in the sponge's internal cavities. The fully grown larvae leave the water and spin a cocoon in which to pupate.
Bioerosion
The Caribbean chicken-liver sponge Chondrilla nucula secretes toxins that kill coral polyps, allowing the sponges to grow over the coral skeletons. Others, especially in the family Clionaidae, use corrosive substances secreted by their archeocytes to tunnel into rocks, corals and the shells of dead mollusks. Sponges may remove up to per year from reefs, creating visible notches just below low-tide level.
Diseases
Caribbean sponges of the genus Aplysina suffer from Aplysina red band syndrome. This causes Aplysina to develop one or more rust-colored bands, sometimes with adjacent bands of necrotic tissue. These lesions may completely encircle branches of the sponge. The disease appears to be contagious and impacts approximately ten percent of A. cauliformis on Bahamian reefs. The rust-colored bands are caused by a cyanobacterium, but it is unknown whether this organism actually causes the disease.
Collaboration with other organisms
In addition to hosting photosynthesizing endosymbionts, sponges are noted for their wide range of collaborations with other organisms. The relatively large encrusting sponge Lissodendoryx colombiensis is most common on rocky surfaces, but has extended its range into seagrass meadows by letting itself be surrounded or overgrown by seagrass sponges, which are distasteful to the local starfish and therefore protect Lissodendoryx against them; in return, the seagrass sponges get higher positions away from the sea-floor sediment.
Shrimps of the genus Synalpheus form colonies in sponges, and each shrimp species inhabits a different sponge species, making Synalpheus one of the most diverse crustacean genera. Specifically, Synalpheus regalis utilizes the sponge not only as a food source, but also as a defense against other shrimp and predators. As many as 16,000 individuals inhabit a single loggerhead sponge, feeding off the larger particles that collect on the sponge as it filters the ocean to feed itself. Other crustaceans, such as hermit crabs, commonly have a specific sponge species, Pseudospongosorites, grow on them. Both the sponge and the crab initially occupy a gastropod shell; when they outgrow it, the crab uses the sponge's body as protection in place of the shell until it finds a suitable replacement shell.
Sponge loop
Most sponges are detritivores which filter organic debris particles and microscopic life forms from ocean water. In particular, sponges occupy an important role as detritivores in coral reef food webs by recycling detritus to higher trophic levels.
The hypothesis has been made that coral reef sponges facilitate the transfer of coral-derived organic matter to their associated detritivores via the production of sponge detritus, as shown in the diagram. Several sponge species are able to convert coral-derived DOM into sponge detritus, and transfer organic matter produced by corals further up the reef food web. Corals release organic matter as both dissolved and particulate mucus, as well as cellular material such as expelled Symbiodinium.
Organic matter could be transferred from corals to sponges by all these pathways, but DOM likely makes up the largest fraction, as the majority (56 to 80%) of coral mucus dissolves in the water column, and coral loss of fixed carbon due to expulsion of Symbiodinium is typically negligible (0.01%) compared with mucus release (up to ~40%). Coral-derived organic matter could also be indirectly transferred to sponges via bacteria, which can also consume coral mucus.
Sponge holobiont
Besides one-to-one symbiotic relationships, it is possible for a host to become symbiotic with a microbial consortium, resulting in a diverse sponge microbiome. Sponges are able to host a wide range of microbial communities that can also be very specific. The microbial communities that form a symbiotic relationship with the sponge can amount to as much as 35% of the biomass of its host. This specific symbiotic relationship, in which a microbial consortium pairs with a host, is called a holobiotic relationship. The sponge, as well as the microbial community associated with it, produces a large range of secondary metabolites that help protect it against predators through mechanisms such as chemical defense.
Some of these relationships include endosymbionts within bacteriocyte cells, and cyanobacteria or microalgae found below the pinacoderm cell layer where they are able to receive the highest amount of light, used for phototrophy. They can host over 50 different microbial phyla and candidate phyla, including Alphaproteobacteria, Actinomycetota, Chloroflexota, Nitrospirota, "Cyanobacteria", the taxa Gamma-, the candidate phylum Poribacteria, and Thaumarchaea.
Systematics
Taxonomy
Carl Linnaeus, who classified most kinds of sessile animals as belonging to the order Zoophyta in the class Vermes, mistakenly identified the genus Spongia as plants in the order Algae. For a long time thereafter, sponges were assigned to subkingdom Parazoa ("beside the animals") separated from the Eumetazoa which formed the rest of the kingdom Animalia.
The phylum Porifera is further divided into classes mainly according to the composition of their skeletons:
Hexactinellida (glass sponges) have silicate spicules, the largest of which have six rays and may be individual or fused. The main components of their bodies are syncytia in which large numbers of cells share a single external membrane.
Calcarea have skeletons made of calcite, a form of calcium carbonate, which may form separate spicules or large masses. All the cells have a single nucleus and membrane.
Most Demospongiae have silicate spicules or spongin fibers or both within their soft tissues. However, a few also have massive external skeletons made of aragonite, another form of calcium carbonate. All the cells have a single nucleus and membrane.
Archeocyatha are known only as fossils from the Cambrian period.
In the 1970s, sponges with massive calcium carbonate skeletons were assigned to a separate class, Sclerospongiae, otherwise known as "coralline sponges".
However, in the 1980s, it was found that these were all members of either the Calcarea or the Demospongiae.
So far scientific publications have identified about 9,000 poriferan species, of which about 400 are glass sponges, about 500 are calcareous species, and the rest are demosponges. However, some types of habitat, such as vertical rock and cave walls and galleries in rock and coral boulders, have been investigated very little, even in shallow seas.
Classes
Sponges were traditionally distributed in three classes: calcareous sponges (Calcarea), glass sponges (Hexactinellida) and demosponges (Demospongiae). However, studies have now shown that the Homoscleromorpha, a group thought to belong to the Demospongiae, has a genetic relationship well separated from other sponge classes. Therefore, they have recently been recognized as the fourth class of sponges.
Sponges are divided into classes mainly according to the composition of their skeletons. In the table below, the classes are arranged in evolutionary order from top to bottom:
{|class="wikitable"
! Class !! Type of cells !! Spicules !! Spongin fibers !! Massive exoskeleton !! Body form
|-
! Hexactinellida
|Mostly syncytia in all species ||Silica; may be individual or fused ||Never ||Never ||Leuconoid
|-
! Demospongiae
|Single nucleus, single external membrane ||Silica ||In many species ||In some species; made of aragonite if present ||Leuconoid
|-
! Calcarea
|Single nucleus, single external membrane ||Calcite; may be individual or large masses ||Never ||Common; made of calcite if present ||Asconoid, syconoid, leuconoid or solenoid
|-
! Homoscleromorpha
|Single nucleus, single external membrane||Silica ||In many species ||Never ||Sylleibid or leuconoid
|}
Phylogeny
The phylogeny of sponges has been debated heavily since the advent of phylogenetics. Originally thought to be the most basal animal phylum, there is now considerable evidence that Ctenophora may hold that title instead. Additionally, the monophyly of the phylum is now under question. Several studies have concluded that all other animals emerged from within the sponges, and usually recover that the calcareous sponges and Homoscleromorpha are closer to other animals than to demosponges. The internal relationships of Porifera have proven to be less uncertain. A close relationship of Homoscleromorpha and Calcarea has been recovered in nearly all studies, whether or not they support sponge or eumetazoan monophyly. The position of glass sponges is also fairly certain, with a majority of studies recovering them as the sister of the demosponges. Thus, the uncertainty at the base of the animal family tree is probably best represented by the below cladogram.
Evolutionary history
Fossil record | Sponge | Wikipedia | 466 | 47271 | https://en.wikipedia.org/wiki/Sponge | Biology and health sciences | Porifera | null |
Although molecular clocks and biomarkers suggest sponges existed well before the Cambrian explosion of life, silica spicules like those of demosponges are absent from the fossil record until the Cambrian. An unsubstantiated 2002 report exists of spicules in rocks dated around . Well-preserved fossil sponges from about in the Ediacaran period have been found in the Doushantuo Formation. These fossils, which include spicules, pinacocytes, porocytes, archeocytes, sclerocytes, and the internal cavity, have been classified as demosponges. The Ediacaran record of sponges also contains two other genera: the stem-hexactinellid Helicolocellus from the Dengying Formation and the possible stem-archaeocyathan Arimasia from the Nama Group. Both genera are known only from the “Nama assemblage” of Ediacaran biota, although whether their absence from earlier assemblages is genuine or a preservational bias is uncertain. Fossils of glass sponges have been found from around in rocks in Australia, China, and Mongolia. Early Cambrian sponges from Mexico belonging to the genus Kiwetinokia show evidence of fusion of several smaller spicules to form a single large spicule. Calcium carbonate spicules of calcareous sponges have been found in Early Cambrian rocks from about in Australia. Other probable demosponges have been found in the Early Cambrian Chengjiang fauna, from . Fossils found in the Canadian Northwest Territories dating to may be sponges; if this finding is confirmed, it suggests the first animals appeared before the Neoproterozoic oxygenation event.
Freshwater sponges appear to be much younger, as the earliest known fossils date from the Mid-Eocene period about . Although about 90% of modern sponges are demosponges, fossilized remains of this type are less common than those of other types because their skeletons are composed of relatively soft spongin that does not fossilize well.
The earliest sponge symbionts are known from the early Silurian. | Sponge | Wikipedia | 428 | 47271 | https://en.wikipedia.org/wiki/Sponge | Biology and health sciences | Porifera | null |
One chemical tracer of ancient sponges is 24-isopropyl cholestane, a stable derivative of 24-isopropyl cholesterol, which is said to be produced by demosponges but not by eumetazoans ("true animals", i.e. cnidarians and bilaterians). Since choanoflagellates are thought to be animals' closest single-celled relatives, a team of scientists examined the biochemistry and genes of one choanoflagellate species. They concluded that this species could not produce 24-isopropyl cholesterol, but that investigation of a wider range of choanoflagellates would be necessary to prove that the fossil 24-isopropyl cholestane could only have been produced by demosponges.
Although a previous publication reported traces of the chemical 24-isopropyl cholestane in ancient rocks dating to , recent research using a much more accurately dated rock series has revealed that these biomarkers only appear before the end of the Marinoan glaciation approximately , and that "Biomarker analysis has yet to reveal any convincing evidence for ancient sponges pre-dating the first globally extensive Neoproterozoic glacial episode (the Sturtian, ~ in Oman)". While it has been argued that this 'sponge biomarker' could have originated from marine algae, recent research suggests that the algae's ability to produce this biomarker evolved only in the Carboniferous; as such, the biomarker remains strongly supportive of the presence of demosponges in the Cryogenian.
Archaeocyathids, which some classify as a type of coralline sponge, are very common fossils in rocks from the Early Cambrian about , but apparently died out by the end of the Cambrian .
It has been suggested that archaeocyathid fossils were produced by sponges, cnidarians, algae, foraminiferans, a completely separate phylum of animals (Archaeocyatha), or even a completely separate kingdom of life, labeled Archaeata or Inferibionta. Since the 1990s, archaeocyathids have been regarded as a distinctive group of sponges.
It is difficult to fit chancelloriids into classifications of sponges or more complex animals. An analysis in 1996 concluded that they were closely related to sponges on the grounds that the detailed structure of chancelloriid sclerites ("armor plates") is similar to that of fibers of spongin, a collagen protein, in modern keratose (horny) demosponges such as Darwinella. However, another analysis in 2002 concluded that chancelloriids are not sponges and may be intermediate between sponges and more complex animals, among other reasons because their skins were thicker and more tightly connected than those of sponges. In 2008, a detailed analysis of chancelloriids' sclerites concluded that they were very similar to those of halkieriids, mobile bilaterian animals that looked like slugs in chain mail and whose fossils are found in rocks from the very Early Cambrian to the Mid Cambrian. If this is correct, it would create a dilemma, as it is extremely unlikely that totally unrelated organisms could have developed such similar sclerites independently, but the huge difference in the structures of their bodies makes it hard to see how they could be closely related.
Relationships to other animal groups
In the 1990s, sponges were widely regarded as a monophyletic group, all of them having descended from a common ancestor that was itself a sponge, and as the "sister-group" to all other metazoans (multi-celled animals), which themselves form a monophyletic group. On the other hand, some 1990s analyses also revived the idea that animals' nearest evolutionary relatives are choanoflagellates, single-celled organisms very similar to sponges' choanocytes – which would imply that most Metazoa evolved from very sponge-like ancestors and therefore that sponges may not be monophyletic, as the same sponge-like ancestors may have given rise both to modern sponges and to non-sponge members of Metazoa. | Sponge | Wikipedia | 404 | 47271 | https://en.wikipedia.org/wiki/Sponge | Biology and health sciences | Porifera | null |
Analyses since 2001 have concluded that Eumetazoa (more complex than sponges) are more closely related to particular groups of sponges than to other sponge groups. Such conclusions imply that sponges are not monophyletic, because the last common ancestor of all sponges would also be a direct ancestor of the Eumetazoa, which are not sponges. A study in 2001 based on comparisons of ribosome DNA concluded that the most fundamental division within sponges was between glass sponges and the rest, and that Eumetazoa are more closely related to calcareous sponges (those with calcium carbonate spicules) than to other types of sponge. In 2007, one analysis based on comparisons of RNA and another based mainly on comparison of spicules concluded that demosponges and glass sponges are more closely related to each other than either is to the calcareous sponges, which in turn are more closely related to Eumetazoa.
Other anatomical and biochemical evidence links the Eumetazoa with Homoscleromorpha, a sub-group of demosponges. A comparison in 2007 of nuclear DNA, excluding glass sponges and comb jellies, concluded that:
Homoscleromorpha are most closely related to Eumetazoa;
calcareous sponges are the next closest;
the other demosponges are evolutionary "aunts" of these groups; and
the chancelloriids, bag-like animals whose fossils are found in Cambrian rocks, may be sponges.
The sperm of Homoscleromorpha share features with the sperm of Eumetazoa that the sperm of other sponges lack. In both Homoscleromorpha and Eumetazoa, layers of cells are bound together by attachment to a carpet-like basal membrane composed mainly of "type IV" collagen, a form of collagen not found in other sponges – although the spongin fibers that reinforce the mesohyl of all demosponges are similar to "type IV" collagen.
The analyses described above concluded that sponges are closest to the ancestors of all Metazoa, of all multi-celled animals including both sponges and more complex groups. However, another comparison in 2008 of 150 genes in each of 21 genera, ranging from fungi to humans but including only two species of sponge, suggested that comb jellies (Ctenophora) are the most basal lineage of the Metazoa included in the sample. If this is correct, either modern comb jellies developed their complex structures independently of other Metazoa, or sponges' ancestors were more complex and all known sponges are drastically simplified forms. The study recommended further analyses using a wider range of sponges and other simple Metazoa such as Placozoa.
However, reanalysis of the data showed that the computer algorithms used for analysis were misled by the presence of specific ctenophore genes that were markedly different from those of other species, leaving sponges as either the sister group to all other animals, or an ancestral paraphyletic grade. 'Family trees' constructed using a combination of all available data – morphological, developmental and molecular – concluded that the sponges are in fact a monophyletic group, and with the cnidarians form the sister group to the bilaterians.
A very large and internally consistent alignment of 1,719 proteins at the metazoan scale, published in 2017, showed that (i) sponges – represented by Homoscleromorpha, Calcarea, Hexactinellida, and Demospongiae – are monophyletic, (ii) sponges are the sister group to all other multicellular animals, (iii) ctenophores emerge as the second-earliest branching animal lineage, and (iv) placozoans emerge as the third animal lineage, followed by cnidarians as the sister group to bilaterians.
In March 2021, scientists from Dublin found additional evidence that sponges are the sister group to all other animals, while in May 2023, Schultz et al. found patterns of irreversible change in genome synteny that provide strong evidence that ctenophores are the sister group to all other animals instead.
Notable spongiologists | Sponge | Wikipedia | 460 | 47271 | https://en.wikipedia.org/wiki/Sponge | Biology and health sciences | Porifera | null |
Céline Allewaert
Patricia Bergquist
James Scott Bowerbank
Maurice Burton
Henry John Carter
Max Walker de Laubenfels
Arthur Dendy
Édouard Placide Duchassaing de Fontbressin
Randolph Kirkpatrick
Robert J. Lendlmayer von Lendenfeld
Edward Alfred Minchin
Giovanni Domenico Nardo
Eduard Oscar Schmidt
Émile Topsent
Use
By dolphins
A report in 1997 described the use of sponges as tools by bottlenose dolphins in Shark Bay in Western Australia. A dolphin will attach a marine sponge to its rostrum, presumably to protect it when searching for food on the sandy sea bottom. The behavior, known as sponging, has only been observed in this bay and is almost exclusively shown by females. A study in 2005 concluded that mothers teach the behavior to their daughters and that all the sponge users are closely related, suggesting that it is a fairly recent innovation.
By humans
Skeleton
The calcium carbonate or silica spicules of most sponge genera make them too rough for most uses, but two genera, Hippospongia and Spongia, have soft, entirely fibrous skeletons. Early Europeans used soft sponges for many purposes, including padding for helmets, portable drinking utensils and municipal water filters. Until the invention of synthetic sponges, they were used as cleaning tools, applicators for paints and ceramic glazes and discreet contraceptives. However, by the mid-20th century, overfishing brought both the animals and the industry close to extinction.
Many objects with sponge-like textures are now made of substances not derived from poriferans. Synthetic sponges include personal and household cleaning tools, breast implants, and contraceptive sponges. Typical materials used are cellulose foam, polyurethane foam, and less frequently, silicone foam.
The luffa "sponge", also spelled loofah, which is commonly sold for use in the kitchen or the shower, is not derived from an animal but mainly from the fibrous "skeleton" of the sponge gourd (Luffa aegyptiaca, Cucurbitaceae).
Antibiotic compounds
Sponges have medicinal potential due to the presence in sponges themselves or their microbial symbionts of chemicals that may be used to control viruses, bacteria, tumors and fungi.
Other biologically active compounds | Sponge | Wikipedia | 481 | 47271 | https://en.wikipedia.org/wiki/Sponge | Biology and health sciences | Porifera | null |
Lacking any protective shell or means of escape, sponges have evolved to synthesize a variety of unusual compounds. One such class is the oxidized fatty acid derivatives called oxylipins. Members of this family have been found to have anti-cancer, anti-bacterial and anti-fungal properties. One example isolated from the Okinawan Plakortis sponges, plakoridine A, has shown potential as a cytotoxin to murine lymphoma cells. | Sponge | Wikipedia | 100 | 47271 | https://en.wikipedia.org/wiki/Sponge | Biology and health sciences | Porifera | null |
Movable type (US English; moveable type in British English) is the system and technology of printing and typography that uses movable components to reproduce the elements of a document (usually individual alphanumeric characters or punctuation marks) usually on the medium of paper.
Overview
The world's first movable type printing technology for paper books was made of porcelain materials and was invented around 1040 AD in China during the Northern Song dynasty by the inventor Bi Sheng (990–1051). The earliest printed paper money with movable metal type to print the identifying code of the money was made in 1161 during the Song dynasty. In 1193, a book in the Song dynasty documented how to use the copper movable type. The oldest extant book printed with movable metal type, Jikji, was printed in Korea in 1377 during the Goryeo dynasty.
The spread of both movable-type systems was, to some degree, limited to primarily East Asia. The creation of the printing press in Europe may have been influenced by various sporadic reports of movable type technology brought back to Europe by returning business people and missionaries to China. Some of these medieval European accounts are still preserved in the library archives of the Vatican and Oxford University among many others.
Around 1450, German goldsmith Johannes Gutenberg invented the metal movable-type printing press, along with innovations in casting the type based on a matrix and hand mould. The small number of alphabetic characters needed for European languages was an important factor. Gutenberg was the first to create his type pieces from an alloy of lead, tin, and antimony—and these materials remained standard for 550 years.
For alphabetic scripts, movable-type page setting was quicker than woodblock printing. The metal type pieces were more durable and the lettering was more uniform, leading to typography and fonts. The high quality and relatively low price of the Gutenberg Bible (1455) established the superiority of movable type in Europe and the use of printing presses spread rapidly. The printing press may be regarded as one of the key factors fostering the Renaissance and, due to its effectiveness, its use spread around the globe.
The 19th-century invention of hot metal typesetting and its successors caused movable type to decline in the 20th century.
Precursors to movable type | Movable type | Wikipedia | 476 | 47300 | https://en.wikipedia.org/wiki/Movable%20type | Technology | Printing | null |
Letter punch and coins
The technique of imprinting multiple copies of symbols or glyphs with a master type punch made of hard metal first developed around 3000 BC in ancient Sumer. These metal punch types can be seen as precursors of the letter punches adapted in later millennia to printing with movable metal type. Cylinder seals were used in Mesopotamia to create an impression on a surface by rolling the seal on wet clay.
Seals and stamps
Seals and stamps may have been precursors to movable type. The uneven spacing of the impressions on brick stamps found in the Mesopotamian cities of Uruk and Larsa, dating from the 2nd millennium BC, has been conjectured by some archaeologists as evidence that the stamps were made using movable type. The enigmatic Minoan Phaistos Disc of –1600 BC has been considered by one scholar as an early example of a body of text being reproduced with reusable characters: it may have been produced by pressing pre-formed hieroglyphic "seals" into the soft clay. A few authors even view the disc as technically meeting all definitional criteria to represent an early incidence of movable-type printing.
Woodblock printing
Bones, shells, bamboo slips, metal tablets, stone tablets, silk, and other materials were previously used for writing. However, following the invention of paper during the Chinese Han dynasty, writing materials became more portable and economical. Yet copying books by hand was still labour-intensive. Not until the Xiping Era (172–178 AD), towards the end of the Eastern Han dynasty, did sealing print and monotype appear. These were used to print designs on fabrics and to print texts.
By about the 8th century, during the Tang dynasty, woodblock printing was invented and worked as follows. First, a neat hand-copied script was stuck face-down on a relatively thick and smooth board; the paper was so thin it was transparent, so the characters showed distinctly in reverse and every stroke could be easily recognized. Then, carvers cut away the parts of the board that were not part of a character, so that the characters were cut in relief, completely differently from those cut intaglio. When printing, the raised characters would have ink spread on them and be covered by paper; by rubbing the back of the paper gently, workers transferred the characters onto it. By the Song dynasty, woodblock printing reached its heyday. Although woodblock printing played an influential role in spreading culture, it had significant drawbacks: carving the printing plates required considerable time, labour, and materials; storing the plates was inconvenient; and correcting mistakes was difficult.
History
Ceramic movable type
Bi Sheng (990–1051) developed the first known movable-type system for printing in China around 1040 AD during the Northern Song dynasty, using ceramic materials. As described by the Chinese scholar Shen Kuo (沈括) (1031–1095):
After his death, the ceramic movable type passed on to his descendants. In 1193, Zhou Bida, an officer of the Southern Song dynasty, made a set of clay movable type according to the method described by Shen Kuo in his Dream Pool Essays, and printed his book
Isostatic depression is the sinking of large parts of the Earth's crust into the asthenosphere caused by a heavy weight placed on the Earth's surface, often glacial ice during continental glaciation. Isostatic depression and isostatic rebound occur at rates of centimeters per year. Greenland is an example of an isostatically depressed region.
Glacial isostatic depression
Isostatic depression is a phase of glacial isostasy, along with isostatic rebound. Glacial isostasy is the Earth's response to changing surface loads of ice and water during the expansion and contraction of large ice sheets. The Earth's asthenosphere acts viscoelastically, flowing when exposed to loads and non-hydrostatic stress, such as ice sheets, for an extended period of time. The Earth's crust is depressed by the product of thickness of ice and the ratio of ice and mantle densities. This large ice load results in elastic deformation of the entire lithospheric mantle over the span of 10,000-100,000 years, with the load eventually supported by the lithosphere after the limit of local isostatic depression has been attained. Historically, isostatic depression has been used to estimate global ice volume by relating the magnitude of depression to the density of ice and upper mantle material.
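The depression relation stated above can be sketched numerically. Assuming typical textbook densities of roughly 917 kg/m³ for glacial ice and about 3,300 kg/m³ for the upper mantle (illustrative values, not taken from this article), the equilibrium depression is the ice thickness times the density ratio:

```python
# Illustrative sketch of local isostatic depression:
#   depression = ice_thickness * (density_ice / density_mantle)
# Density values are common textbook figures, assumed here for illustration.

RHO_ICE = 917.0      # density of glacial ice, kg/m^3 (assumed)
RHO_MANTLE = 3300.0  # density of upper mantle, kg/m^3 (assumed)

def isostatic_depression(ice_thickness_m: float) -> float:
    """Equilibrium crustal depression (m) under an ice load of given thickness."""
    return ice_thickness_m * (RHO_ICE / RHO_MANTLE)

# A 3 km thick ice sheet, broadly comparable to central Greenland:
print(round(isostatic_depression(3000.0), 1))  # about 833.6 m
```

With these assumed densities, a 3 km ice sheet depresses the crust by roughly 830 m once local isostatic equilibrium is reached, consistent with the ratio-based estimate described above.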
Glacial megalakes can form in regional depressions under the influence of glacial load.
Isostatic depression in Greenland
Greenland is isostatically depressed by the Greenland ice sheet. However, due to deglaciation induced by climate change, the regions near the shrinking ice sheet have begun to uplift, a process known as post-glacial rebound. Modeling these glacial isostatic adjustments has been an area of interest for some time now as the entire topography of Greenland is affected by these movements. These movements are unique in that they can be observed on a human time scale unlike other geological processes. Models have been created to assess what future equilibrium states of the Greenland ice sheet will look like. | Isostatic depression | Wikipedia | 401 | 11782509 | https://en.wikipedia.org/wiki/Isostatic%20depression | Physical sciences | Landforms: General | Earth science |
A hydraulic motor is a mechanical actuator that converts hydraulic pressure and flow into torque and angular displacement (rotation). The hydraulic motor is the rotary counterpart of the hydraulic cylinder as a linear actuator. Most broadly, the category of devices called hydraulic motors has sometimes included those that run on hydropower (namely, water engines and water motors) but in today's terminology the name usually refers more specifically to motors that use hydraulic fluid as part of closed hydraulic circuits in modern hydraulic machinery.
Conceptually, a hydraulic motor should be interchangeable with a hydraulic pump because it performs the opposite function – similar to the way a DC electric motor is theoretically interchangeable with a DC electrical generator. However, many hydraulic pumps cannot be used as hydraulic motors because they cannot be backdriven. Also, a hydraulic motor is usually designed for working pressure at both sides of the motor, whereas most hydraulic pumps rely on low pressure provided from the reservoir at the input side and would leak fluid when abused as a motor.
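The pressure-and-flow-to-torque conversion described above is commonly captured by idealized (loss-free) relations from hydraulics textbooks, not stated in this article: torque equals pressure drop times displacement divided by 2π, and shaft speed equals flow divided by displacement. A minimal sketch with hypothetical operating values:

```python
import math

def ideal_motor(delta_p_pa: float, flow_m3_s: float, displacement_m3_rev: float):
    """Idealized hydraulic motor relations (no mechanical or volumetric losses):
    torque T = dp * D / (2*pi), speed n = Q / D."""
    torque_nm = delta_p_pa * displacement_m3_rev / (2.0 * math.pi)
    speed_rev_s = flow_m3_s / displacement_m3_rev
    return torque_nm, speed_rev_s

# Hypothetical example: 200 bar pressure drop, 60 L/min flow, 100 cc/rev motor.
torque, speed = ideal_motor(delta_p_pa=200e5,
                            flow_m3_s=60e-3 / 60.0,      # 60 L/min -> m^3/s
                            displacement_m3_rev=100e-6)  # 100 cc/rev
print(round(torque, 1), round(speed * 60.0))  # ~318.3 N*m at 600 rpm
```

Real motors deliver less torque and speed than these ideal figures because of mechanical friction and internal leakage, which is why the efficiency trade-offs of the motor types below matter in practice.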
History of hydraulic motors
One of the first rotary hydraulic motors to be developed was that constructed by William Armstrong for his Swing Bridge over the River Tyne. Two motors were provided, for reliability. Each one was a three-cylinder single-acting oscillating engine. Armstrong developed a wide range of hydraulic motors, linear and rotary, that were used for a wide range of industrial and civil engineering tasks, particularly for docks and moving bridges.
The first simple fixed-stroke hydraulic motors had the disadvantage that they used the same volume of water whatever the load and so were wasteful at part-power. Unlike steam engines, as water is incompressible, they could not be throttled or their valve cut-off controlled. To overcome this, motors with variable stroke were developed. Adjusting the stroke, rather than controlling admission valves, now controlled the engine power and water consumption. One of the first of these was Arthur Rigg's patent engine of 1886. This used a double eccentric mechanism, as used on variable stroke power presses, to control the stroke length of a three cylinder radial engine. Later, the swashplate engine with an adjustable swashplate angle would become a popular way to make variable stroke hydraulic motors.
Hydraulic motor types
Vane motors | Hydraulic motor | Wikipedia | 451 | 6995526 | https://en.wikipedia.org/wiki/Hydraulic%20motor | Technology | Hydraulics and pneumatics | null |
A vane motor consists of a housing with an eccentric bore, in which runs a rotor with vanes in it that slide in and out. The force differential created by the unbalanced force of the pressurized fluid on the vanes causes the rotor to spin in one direction. A critical element in vane motor design is how the vane tips are machined at the contact point between vane tip and motor housing. Several types of "lip" designs are used, and the main objective is to provide a tight seal between the inside of the motor housing and the vane, and at the same time to minimize wear and metal-to-metal contact.
Gear motors
A gear motor (external gear) consists of two gears, the driven gear (attached to the output shaft by way of a key, etc.) and the idler gear. High pressure oil is ported into one side of the gears, where it flows around the periphery of the gears, between the gear tips and the wall housings in which it resides, to the outlet port. The gears mesh, not allowing the oil from the outlet side to flow back to the inlet side. For lubrication, the gear motor uses a small amount of oil from the pressurized side of the gears, bleeds this through the (typically) hydrodynamic bearings, and vents the same oil either to the low pressure side of the gears, or through a dedicated drain port on the motor housing, which is usually connected to a line that vents the motor's case pressure to the system's reservoir. An especially positive attribute of the gear motor is that catastrophic breakdown is less common than in most other types of hydraulic motors. This is because the gears gradually wear down the housing and/or main bushings, reducing the volumetric efficiency of the motor gradually until it is all but useless. This often happens long before wear causes the unit to seize or break down.
Gear motors can be supplied in single- or double-directional versions based on their usage, and are offered in either aluminum or cast iron bodies, depending on application conditions. They offer design options that can handle radial loads. Additionally, alternative configurations include a pressure relief valve, an anti-cavitation valve, and a speed sensor to meet specific application needs.
Gerotor motors | Hydraulic motor | Wikipedia | 457 | 6995526 | https://en.wikipedia.org/wiki/Hydraulic%20motor | Technology | Hydraulics and pneumatics | null |
The gerotor motor is in essence a rotor with n − 1 teeth, rotating off center in a rotor/stator with n teeth. Pressurized fluid is guided into the assembly using a (usually) axially placed plate-type distributor valve. Several different designs exist, such as the Geroller (internal or external rollers) and Nichols motors. Typically, the Gerotor motors are low-to-medium speed and medium-to-high torque.
Axial plunger motors
For high quality rotating drive systems, plunger motors are generally used. Whereas the speed of hydraulic pumps ranges from 1200 to 1800 rpm, the machinery to be driven by the motor often requires a much lower speed. This means that when an axial plunger motor (swept volume maximum 2 litres) is used, a gearbox is usually needed. For a continuously adjustable swept volume, axial piston motors are used.
Like piston (reciprocating) type pumps, the most common design of the piston type of motor is the axial. This type of motor is the most commonly used in hydraulic systems. These motors are, like their pump counterparts, available in both variable and fixed displacement designs. Typical usable (within acceptable efficiency) rotational speeds range from below 50 rpm to above 14000 rpm. Efficiencies and minimum/maximum rotational speeds are highly dependent on the design of the rotating group, and many different types are in use.
Radial piston motors
Radial piston motors are available in two basic types: Pistons pushing inward, and pistons pushing outward.
Pistons pushing inward
The crankshaft type (e.g. Staffa or SAI hydraulic motors) with a single cam and the pistons pushing inwards is basically an old design but is one which has extremely high starting torque characteristics. They are available in displacements from 40 cc/rev up to about 50 litres/rev but can sometimes be limited in power output. Crankshaft type radial piston motors are capable of running at "creep" speeds and some can run seamlessly up to 1500 rpm whilst offering virtually constant output torque characteristics. This makes them still the most versatile design. | Hydraulic motor | Wikipedia | 419 | 6995526 | https://en.wikipedia.org/wiki/Hydraulic%20motor | Technology | Hydraulics and pneumatics | null |
The single-cam-type radial piston motor exists in many different designs itself. Usually the difference lies in the way the fluid is distributed to the different pistons or cylinders, and also the design of the cylinders themselves. Some motors have pistons attached to the cam using rods (much like in an internal combustion engine), while others employ floating "shoes", and even spherical contact telescopic cylinders like the Parker Denison Calzoni type. Each design has its own set of pros and cons, such as freewheeling ability, high volumetric efficiency, high reliability and so on.
Pistons pushing outward
Multi-lobe cam ring types (e.g. Black Bruin, Rexroth, Hägglunds Drives, Poclain, Rotary Power or Eaton Hydre-MAC type) have a cam ring with multiple lobes and the piston rollers push outward against the cam ring. This produces a very smooth output with high starting torque but they are often limited in the upper speed range. This type of motor is available in a very wide range from about 1 litre/rev to 250 litres/rev. These motors are particularly good on low speed applications and can develop very high power.
Braking
Hydraulic motors usually have a drain connection for the internal leakage, which means that when the power unit is turned off the hydraulic motor in the drive system will move slowly if an external load is acting on it. Thus, for applications such as a crane or winch with suspended load, there is always a need for a brake or a locking device.
Uses
Hydraulic pumps, motors, and cylinders can be combined into hydraulic drive systems. One or more hydraulic pumps, coupled to one or more hydraulic motors, constitute a hydraulic transmission.
Hydraulic motors are used for many applications now such as winches and crane drives, wheel motors for military vehicles, self-driven cranes, excavators, conveyor and feeder drives, cooling fan drives, mixer and agitator drives, roll mills, drum drives for digesters, trommels and kilns, shredders, drilling rigs, trench cutters, high-powered lawn trimmers, and plastic injection machines.
Hydraulic motors are also used in heat transfer applications. | Hydraulic motor | Wikipedia | 449 | 6995526 | https://en.wikipedia.org/wiki/Hydraulic%20motor | Technology | Hydraulics and pneumatics | null |
Varicella vaccine, also known as chickenpox vaccine, is a vaccine that protects against chickenpox. One dose of vaccine prevents 95% of moderate disease and 100% of severe disease. Two doses of vaccine are more effective than one. If given to those who are not immune within five days of exposure to chickenpox it prevents most cases of the disease. Vaccinating a large portion of the population also protects those who are not vaccinated. It is given by injection just under the skin. Another vaccine, known as zoster vaccine, is used to prevent diseases caused by the same virus – the varicella zoster virus.
The World Health Organization (WHO) recommends routine vaccination only if a country can keep more than 80% of people vaccinated. If only 20% to 80% of people are vaccinated it is possible that more people will get the disease at an older age and outcomes overall may worsen. Either one or two doses of the vaccine are recommended. In the United States two doses are recommended starting at twelve to fifteen months of age. , twenty-three countries recommend all non-medically exempt children receive the vaccine, nine recommend it only for high-risk groups, three additional countries recommend use in only parts of the country, while other countries make no recommendation. Not all countries provide the vaccine due to its cost. In the United Kingdom, Varilrix, a live viral vaccine is approved from the age of 12 months, but only recommended for certain at risk groups.
Minor side effects may include pain at the site of injection, fever, and rash. Severe side effects are rare and occur mostly in those with poor immune function. It should be used with care in people with HIV/AIDS. It is not recommended during pregnancy; however, in the few instances in which it has been given during pregnancy, no problems resulted. The vaccine is available either by itself or along with the MMR vaccine, in a version known as the MMRV vaccine. It is made from weakened virus.
A live attenuated varicella vaccine, the Oka strain, was developed by Michiaki Takahashi and his colleagues in Japan in the early 1970s. American vaccinologist Maurice Hilleman's team developed a chickenpox vaccine in the United States in 1981, based on the "Oka strain" of the varicella virus. The chickenpox vaccine first became commercially available in 1984. It is on the WHO Model List of Essential Medicines. | Varicella vaccine | Wikipedia | 508 | 8649736 | https://en.wikipedia.org/wiki/Varicella%20vaccine | Biology and health sciences | Vaccines | Health |
Medical uses
Varicella vaccine is 70% to 90% effective for preventing varicella and more than 95% effective for preventing severe varicella. Follow-up evaluations of immunized children in the United States revealed protection for at least 11 years, and studies conducted in Japan indicated protection for at least 20 years.
People who do not develop enough protection when they get the vaccine may develop a mild case of the disease when in close contact with a person with chickenpox. In such cases, people show very little sign of illness. This has been observed in children who received the vaccine in early childhood and later had contact with children with chickenpox; some of them develop mild chickenpox, also known as breakthrough disease.
Another vaccine, known as zoster vaccine, is simply a larger-than-normal dose of the same vaccine used against chickenpox and is used in older adults to reduce the risk of shingles (also called herpes zoster) and postherpetic neuralgia, which are caused by the same virus. The recombinant zoster (shingles) vaccine is recommended for adults aged 50 years and older.
Duration of immunity
The long-term duration of protection from varicella vaccine is unknown, but there are now persons vaccinated twenty years ago with no evidence of waning immunity, while others have become vulnerable in as few as six years. Assessments of the duration of immunity are complicated in an environment where natural disease is still common, which typically leads to an overestimation of effectiveness.
Some vaccinated children have been found to lose their protective antibodies in as little as five to eight years. However, according to the World Health Organization (WHO): "After observation of study populations for periods of up to 20 years in Japan and 10 years in the United States, more than 90% of immunocompetent persons who were vaccinated as children were still protected from varicella." Since only one out of five Japanese children was vaccinated, however, the annual exposure of these vaccinees to children with natural chickenpox boosted the vaccinees' immune systems. In the United States, where universal varicella vaccination has been practiced, the majority of children no longer receive exogenous (outside) boosting; thus, their cell-mediated immunity to VZV (varicella zoster virus) wanes, necessitating booster chickenpox vaccinations. As time goes on, boosters may be necessary. Persons exposed to the virus after vaccination tend to experience milder cases of chickenpox if they develop the disease.
Chickenpox
Prior to the widespread introduction of the vaccine in the United States in 1995 (1986 in Japan and 1988 in Korea), there were around 4,000,000 cases per year, mostly in children, with typically 10,500–13,000 hospital admissions (range, 8,000–18,000) and 100–150 deaths each year. Most of the deaths were among young children.
During 2003 and the first half of 2004, the CDC reported eight deaths from varicella, six of them children or adolescents. These deaths and hospital admissions have substantially declined in the US due to vaccination, though the rate of shingles infection has increased as adults are less exposed to infected children (which would otherwise help protect against shingles). Ten years after the vaccine was recommended in the US, the CDC reported as much as a 90% drop in chickenpox cases, a 71% decline in varicella-related hospital admissions, and a 97% drop in chickenpox deaths among those under 20.
Vaccines are less effective among high-risk patients, as well as being more dangerous because they contain attenuated live viruses. In a study performed on children with an impaired immune system, 30% had lost the antibody after five years, and 8% had already caught wild chickenpox in those five years.
Herpes zoster
Herpes zoster (shingles) most often occurs in the elderly and is only rarely seen in children. The incidence of herpes zoster in vaccinated adults is 0.9/1000 person-years, and is 0.33/1000 person-years in vaccinated children; this is lower than the overall incidence of 3.2–4.2/1000 person-years.
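The relationship between these incidence rates can be made concrete with a little arithmetic. The sketch below simply takes ratios of the per-1,000-person-year figures quoted above; note that the populations being compared are not matched cohorts (the text's separate "approximately 80% lower" figure comes from a direct comparison of vaccinated versus unvaccinated children), so these crude ratios are illustrative only. The helper name `percent_reduction` is ours, not from any cited source.

```python
# Illustrative arithmetic on the incidence figures quoted above.
# Rates are per 1,000 person-years; the compared populations are not
# matched cohorts, so these ratios are rough indications only.

def percent_reduction(vaccinated_rate: float, baseline_rate: float) -> float:
    """Percent reduction of the vaccinated rate relative to a baseline rate."""
    return (1 - vaccinated_rate / baseline_rate) * 100

# Vaccinated children (0.33/1000) vs the overall rate (3.2-4.2/1000):
low = percent_reduction(0.33, 3.2)   # ~90%
high = percent_reduction(0.33, 4.2)  # ~92%
print(f"Crude reduction for vaccinated children: {low:.0f}%-{high:.0f}%")

# Vaccinated adults (0.9/1000) vs the same overall rate:
print(f"Crude reduction for vaccinated adults: {percent_reduction(0.9, 3.2):.0f}%")
```

The crude ratios come out higher than the matched-cohort 80% figure, which is expected: the "overall" rate includes older adults, in whom shingles is far more common.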
The risk of developing shingles is reduced for children who receive the varicella vaccine, but not eliminated. The CDC stated in 2014: "Chickenpox vaccines contain weakened live VZV, which may cause latent (dormant) infection. The vaccine-strain VZV can reactivate later in life and cause shingles. However, the risk of getting shingles from vaccine-strain VZV after chickenpox vaccination is much lower than getting shingles after natural infection with wild-type VZV."
The risk of shingles is significantly lower among children who have received varicella vaccination, including those who are immunocompromised. The risk of shingles is approximately 80% lower among healthy vaccinated children compared to unvaccinated children who had wild-type varicella. A population with high varicella vaccination also has lower incidence of shingles in unvaccinated children, due to herd immunity.
Schedule
The WHO recommends one or two doses, with the initial dose given at 12 to 18 months of age. A second dose, if given, should occur at least one to three months later and provides the additional benefit of improved protection against all varicella. The vaccine is given as a shot subcutaneously (under the skin). It is recommended for all children under 13 and for everyone 13 or older who has never had chickenpox.
In the United States, two doses are recommended by the CDC. For a routine vaccination, the first dose is administered at 12 to 15 months of age and the second dose at age 4–6 years. However, the second dose can be given as early as 3 months after the first dose. If an individual misses the timing for the routine vaccination, the individual is eligible to receive a catch-up vaccination. For a catch-up vaccination, individuals between 7 and 12 years old should receive a two-dose series 3 months apart (a minimum interval of 4 weeks). For individuals 13–18 years old, the catch-up vaccination should be given 4 to 8 weeks apart (a minimum interval of 4 weeks). The varicella vaccine did not become widely available in the United States until 1995.
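The US interval rules described above can be summarized in a small lookup. This is a simplified, illustrative encoding of the paragraph's age brackets and intervals only (the function name and return shape are ours); real clinical decisions follow the full CDC schedule, not this sketch.

```python
# Simplified encoding of the US two-dose varicella intervals described
# above. Illustrative only; not a substitute for the CDC schedule.

def second_dose_interval(age_years: int) -> tuple[int, str]:
    """Return (minimum_interval_weeks, recommended_interval) for dose 2."""
    if age_years < 7:
        # Routine: dose 1 at 12-15 months, dose 2 at age 4-6 years,
        # but dose 2 may be given as early as 3 months after dose 1.
        return (13, "3 months (routine: second dose at age 4-6 years)")
    if age_years <= 12:
        # Catch-up, ages 7-12: 3 months apart, minimum 4 weeks.
        return (4, "3 months")
    # Catch-up, ages 13-18: 4 to 8 weeks apart, minimum 4 weeks.
    return (4, "4 to 8 weeks")
```

For example, `second_dose_interval(10)` returns a 4-week minimum with a recommended 3-month gap, matching the catch-up rule for 7- to 12-year-olds.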
In the UK, the vaccine is only available on the National Health Service for those in close contact with someone particularly vulnerable to chickenpox. Because chickenpox is usually a mild illness, and because adults' risk of shingles may rise if they lose the natural boosting of immunity provided by contact with chickenpox-infected children, the NHS cites concerns that unvaccinated children would instead catch chickenpox as adults, when it is more dangerous. However, the vaccine is approved for ages 12 months and up and is available privately, with a second dose to be given a year after the first.
Contraindications
The varicella vaccine is not recommended for seriously ill people, pregnant women, people who have tuberculosis, people who have experienced a serious allergic reaction to the varicella vaccine in the past, people who are allergic to gelatin or neomycin, people receiving high doses of steroids, people receiving treatment for cancer with X-rays or chemotherapy, or people who have received blood products or transfusions during the past five months. Additionally, the varicella vaccine is not recommended for people who are taking salicylates (e.g. aspirin); after receiving the varicella vaccine, the use of salicylates should be avoided for at least six weeks. The varicella vaccine is also not recommended for individuals who have received another live vaccine in the last four weeks, because live vaccines administered too close together may not be as effective. It may be usable in people with HIV infection who have a good blood count and are receiving appropriate treatment. Specific antiviral medications, such as acyclovir, famciclovir, or valacyclovir, are not recommended from 24 hours before until 14 days after vaccination.
Side effects
Serious side effects are very rare. From 1998 to 2013, only one vaccine-related death was reported: an English child with pre-existing leukemia. On some occasions, severe reactions such as meningitis and pneumonia have been reported (mainly in inadvertently vaccinated immunocompromised children), as well as anaphylaxis.
The possible mild side effects include redness, stiffness, and soreness at the injection site, as well as fever. A few people may develop a mild rash, which usually appears around the injection site.
There is a short-term risk of developing herpes zoster (shingles) following vaccination. However, this risk is less than the risk due to a natural infection resulting in chickenpox. Most of the cases reported have been mild and have not been associated with serious complications.
Approximately 5% of children who receive the vaccine develop a fever or rash. Adverse reaction reports for the period 1995 to 2005 found no deaths attributed to the vaccine despite approximately 55.7 million doses being delivered. Cases of vaccine-related chickenpox have been reported in patients with a weakened immune system, but no deaths.
The literature contains several reports of adverse reactions following varicella vaccination, including vaccine-strain zoster in children and adults.
History
The varicella-zoster vaccine is made from the Oka/Merck strain of live attenuated varicella virus. The Oka virus was initially obtained from a child with natural varicella, introduced into human embryonic lung cell cultures, adapted to and propagated in embryonic guinea pig cell cultures, and finally propagated in a human diploid cell line originally derived from fetal tissues (WI-38). Takahashi and his colleagues used the Oka strain to develop a live attenuated varicella vaccine in Japan in the early 1970s. This strain was further developed by pharmaceutical companies such as Merck & Co. and GlaxoSmithKline. American vaccinologist Maurice Hilleman's team at Merck then used the Oka strain to prepare a chickenpox vaccine in 1981.
Japan was among the first countries to vaccinate for chickenpox. The vaccine developed by Hilleman was first licensed in the United States in 1995. Routine vaccination against varicella zoster virus is also performed in the United States, and the incidence of chickenpox has been dramatically reduced there (from four million cases per year in the pre-vaccine era to approximately 390,000 cases per year).
Standalone varicella vaccines are available in all 27 European Union member countries, and 16 countries also offer a combined measles, mumps, rubella, and varicella vaccine (MMRV). Twelve European countries (Austria, Andorra, Cyprus, Czech Republic, Finland, Germany, Greece, Hungary, Italy, Latvia, Luxembourg, and Spain) have universal varicella vaccination (UVV) policies, though only six of these countries have made it available at no cost via government funding. EU member states that have not implemented UVV cite reasons such as "a perceived low disease burden and low public health priority," the cost and cost-effectiveness, the possible risk of herpes zoster when vaccinating older adults, and rare fevers leading to seizures after the first dose of the MMRV vaccine. "Countries that implemented UVV experienced decreases in varicella incidence, hospitalizations, and complications, showing overall beneficial impact."
Varicella vaccination is recommended in Canada for all healthy children aged 1 to 12, as well as susceptible adolescents and adults 50 years of age and younger; it "may be considered" for people with select immunodeficiency disorders, and "should be prioritized" for susceptible individuals, including "non-pregnant women of childbearing age, household contacts of immunocompromised individuals, members of a household expecting a newborn, health care workers, adults who may be exposed occupationally to varicella (for example, people who work with young children), immigrants and refugees from tropical regions, people receiving chronic salicylate therapy (for example, acetylsalicylic acid [ASA])," and others.
Australia has adopted recommendations for routine immunization of children and susceptible adults against chickenpox.
Other countries, such as the United Kingdom, have targeted recommendations for the vaccine, e.g., for susceptible healthcare workers at risk of varicella exposure. In the UK, varicella antibodies are measured as part of the routine of prenatal care, and by 2005 all National Health Service personnel had determined their immunity and been immunized if they were non-immune and had direct patient contact. Population-based immunization against varicella is not otherwise practised in the UK.
Since 2013, the MMRV vaccine has been offered for free to all Brazilian citizens.
Society and culture
Catholic Church
The use of fetal tissue in vaccine development is the practice of researching, developing, and producing vaccines through growing viruses in cultured (laboratory-grown) cells that were originally derived from human fetal tissue. Since the cell strains in use originate from abortions, there has been some opposition to the practice and the resulting vaccines on religious and moral grounds.
The Roman Catholic Church is opposed to abortion. Nevertheless, the Pontifical Academy for Life stated in 2017 that "clinically recommended vaccinations can be used with a clear conscience and that the use of such vaccines does not signify some sort of cooperation with voluntary abortion". On 21 December 2020, the Vatican's doctrinal office, the Congregation for the Doctrine of the Faith, further clarified that it is "morally acceptable" for Catholics to receive vaccines derived from fetal cell lines or in which such lines were used in testing or development, because "passive material cooperation in the procured abortion from which these cell lines originate is, on the part of those making use of the resulting vaccines, remote" and "does not and should not in any way imply that there is a moral endorsement of the use of cell lines proceeding from aborted fetuses".
Artillery are ranged weapons that launch munitions far beyond the range and power of infantry firearms. Early artillery development focused on the ability to breach defensive walls and fortifications during sieges, and led to heavy, fairly immobile siege engines. As technology improved, lighter, more mobile field artillery cannons developed for battlefield use. This development continues today; modern self-propelled artillery vehicles are highly mobile weapons of great versatility generally providing the largest share of an army's total firepower.
Originally, the word "artillery" referred to any group of soldiers primarily armed with some form of manufactured weapon or armour. Since the introduction of gunpowder and cannon, "artillery" has largely meant cannon, and in contemporary usage, usually refers to shell-firing guns, howitzers, and mortars (collectively called barrel artillery, cannon artillery or gun artillery) and rocket artillery. In common speech, the word "artillery" is often used to refer to individual devices, along with their accessories and fittings, although these assemblages are more properly called "equipment". However, there is no generally recognized generic term for a gun, howitzer, mortar, and so forth: the United States uses "artillery piece", but most English-speaking armies use "gun" and "mortar". The projectiles fired are typically either "shot" (if solid) or "shell" (if not solid). Historically, variants of solid shot including canister, chain shot and grapeshot were also used. "Shell" is a widely used generic term for a projectile, which is a component of munitions.
By association, artillery may also refer to the arm of service that customarily operates such engines. In some armies, the artillery arm has operated field, coastal, anti-aircraft, and anti-tank artillery; in others these have been separate arms, and with some nations coastal has been a naval or marine responsibility.
In the 20th century, target acquisition devices (such as radar) and techniques (such as sound ranging and flash spotting) emerged, primarily for artillery. These are usually utilized by one or more of the artillery arms. The widespread adoption of indirect fire in the early 20th century introduced the need for specialist data for field artillery, notably survey and meteorological, and in some armies, provision of these are the responsibility of the artillery arm. The majority of combat deaths in the Napoleonic Wars, World War I, and World War II were caused by artillery. In 1944, Joseph Stalin said in a speech that artillery was "the god of war".
Artillery piece | Artillery | Wikipedia | 512 | 2508 | https://en.wikipedia.org/wiki/Artillery | Technology | Artillery and siege | null |
Although not called by that name, siege engines performing the role recognizable as artillery have been employed in warfare since antiquity. The first known catapult was developed in Syracuse in 399 BC. Until the introduction of gunpowder into western warfare, artillery was dependent upon mechanical energy, which not only severely limited the kinetic energy of the projectiles, but also required the construction of very large engines to accumulate sufficient energy. A 1st-century BC Roman catapult launching stones achieved a kinetic energy of 16 kilojoules, compared to a mid-19th-century 12-pounder gun, which fired a round with a kinetic energy of 240 kilojoules, or a 20th-century US battleship that fired a projectile from its main battery with an energy level surpassing 350 megajoules.
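The growth in muzzle energy quoted above follows directly from the kinetic-energy formula E = ½mv². A minimal sketch (the projectile masses and velocities below are illustrative assumptions chosen to reproduce the quoted energies, not figures from historical records):

```python
# Kinetic energy E = 0.5 * m * v^2, in joules.
# Masses (kg) and muzzle velocities (m/s) below are illustrative
# assumptions picked to reproduce the energy figures in the text.

def kinetic_energy(mass_kg: float, velocity_ms: float) -> float:
    return 0.5 * mass_kg * velocity_ms ** 2

examples = {
    "Roman catapult stone":    (26.0,    35.0),  # ~16 kJ
    "12-pounder gun round":    (5.4,    298.0),  # ~240 kJ
    "battleship main battery": (1225.0, 760.0),  # ~354 MJ
}

for name, (m, v) in examples.items():
    print(f"{name}: {kinetic_energy(m, v) / 1000:,.0f} kJ")
```

The comparison shows that most of the gain comes from velocity, since energy scales with its square: the gunpowder round travels roughly ten times faster than the catapult stone at a fifth of the mass, yet carries fifteen times the energy.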
From the Middle Ages through most of the modern era, artillery pieces on land were moved by horse-drawn gun carriages. In the contemporary era, artillery pieces and their crew relied on wheeled or tracked vehicles as transportation. These land versions of artillery were dwarfed by railway guns; the largest of these large-calibre guns ever conceived – Project Babylon of the Supergun affair – was theoretically capable of putting a satellite into orbit. Artillery used by naval forces has also changed significantly, with missiles generally replacing guns in surface warfare.
Over the course of military history, projectiles were manufactured from a wide variety of materials, into a wide variety of shapes, using many different methods in which to target structural/defensive works and inflict enemy casualties. The engineering applications for ordnance delivery have likewise changed significantly over time, encompassing some of the most complex and advanced technologies in use today.
In some armies, the weapon of artillery is the projectile, not the equipment that fires it. The process of delivering fire onto the target is called gunnery. The actions involved in operating an artillery piece are collectively called "serving the gun" by the "detachment" or gun crew, constituting either direct or indirect artillery fire. The manner in which gunnery crews (or formations) are employed is called artillery support. At different periods in history, this may refer to weapons designed to be fired from ground-, sea-, and even air-based weapons platforms.
Crew
Some armed forces use the term "gunners" for the soldiers and sailors with the primary function of using artillery.
The gunners and their guns are usually grouped in teams called either "crews" or "detachments". Several such crews and teams with other functions are combined into a unit of artillery, usually called a battery, although sometimes called a company. In gun detachments, each role is numbered, starting with "1" the Detachment Commander, and the highest number being the Coverer, the second-in-command. "Gunner" is also the lowest rank, and junior non-commissioned officers are "Bombardiers" in some artillery arms.
Batteries are roughly equivalent to a company in the infantry, and are combined into larger military organizations for administrative and operational purposes, either battalions or regiments, depending on the army. These may be grouped into brigades; the Russian army also groups some brigades into artillery divisions, and the People's Liberation Army has artillery corps.
The term "artillery" also designates a combat arm of most military services when used organizationally to describe units and formations of the national armed forces that operate the weapons.
Tactics
During military operations, field artillery has the role of providing support to other arms in combat or of attacking targets, particularly in-depth. Broadly, these effects fall into two categories, aiming either to suppress or neutralize the enemy, or to cause casualties, damage, and destruction. This is mostly achieved by delivering high-explosive munitions to suppress, or inflict casualties on the enemy from casing fragments and other debris and from blast, or by destroying enemy positions, equipment, and vehicles. Non-lethal munitions, notably smoke, can also suppress or neutralize the enemy by obscuring their view.
Fire may be directed by an artillery observer or another observer, including crewed and uncrewed aircraft, or called onto map coordinates.
Military doctrine has had a significant influence on the core engineering design considerations of artillery ordnance through its history, in seeking to achieve a balance between the delivered volume of fire and ordnance mobility. However, during the modern period, the consideration of protecting the gunners also arose, due to the late-19th-century introduction of a new generation of infantry weapons using the conoidal bullet, better known as the Minié ball, with a range almost as long as that of field artillery.
The gunners' increasing proximity to and participation in direct combat against other combat arms and attacks by aircraft made the introduction of a gun shield necessary. The problems of how to employ a fixed or horse-towed gun in mobile warfare necessitated the development of new methods of transporting the artillery into combat. Two distinct forms of artillery were developed: the towed gun, used primarily to attack or defend a fixed-line; and the self-propelled gun, intended to accompany a mobile force and to provide continuous fire support and/or suppression. These influences have guided the development of artillery ordnance, systems, organizations, and operations until the present, with artillery systems capable of providing support at ranges from as little as 100 m to the intercontinental ranges of ballistic missiles. The only combat in which artillery is unable to take part is close-quarters combat, with the possible exception of artillery reconnaissance teams.
Etymology
The word as used in the current context originated in the Middle Ages. One suggestion is that it comes from French atelier, meaning the place where manual work is done.
Another suggestion is that it originates from the 13th-century Old French artillier, designating craftsmen and manufacturers of all war materials and equipment (spears, swords, armor, war machines); for the next 250 years, the sense of the word "artillery" covered all forms of military weapons. Hence the naming of the Honourable Artillery Company, which was essentially an infantry unit until the 19th century.
Another suggestion is that it comes from the Italian arte de tirare (art of shooting), coined by one of the first theorists on the use of artillery, Niccolò Tartaglia. The term was used by Girolamo Ruscelli (died 1566) in his Precepts of Modern Militia published posthumously in 1572.
History
Mechanical systems used for throwing ammunition in ancient warfare, also known as "engines of war", like the catapult, onager, trebuchet, and ballista, are also referred to by military historians as artillery.
Medieval
During medieval times, more types of artillery were developed, most notably the counterweight trebuchet. Traction trebuchets, using manpower to launch projectiles, had been in use in ancient China since the 4th century BC as anti-personnel weapons. The much more powerful counterweight trebuchet was invented in the eastern Mediterranean region in the 12th century, with the earliest definite attestation in 1187.
Invention of gunpowder
Early Chinese artillery had vase-like shapes. This includes the "long range awe inspiring" cannon dated from 1350 and found in the 14th century Ming dynasty treatise Huolongjing. With the development of better metallurgy techniques, later cannons abandoned the vase shape of early Chinese artillery. This change can be seen in the bronze "thousand ball thunder cannon", an early example of field artillery. These small, crude weapons diffused into the Middle East (the madfaa) and reached Europe in the 13th century, in a very limited manner.
In Asia, the Mongols adopted Chinese artillery and used it effectively in their great conquests. By the late 14th century, Chinese rebels used organized artillery and cavalry to drive the Mongols out.
As small smooth-bore barrels, these were initially cast in iron or bronze around a core, with the first drilled bore ordnance recorded in operation near Seville in 1247. They fired lead, iron, or stone balls, sometimes large arrows and on occasions simply handfuls of whatever scrap came to hand. During the Hundred Years' War, these weapons became more common, initially as the bombard and later the cannon. Cannons were always muzzle-loaders. While there were many early attempts at breech-loading designs, a lack of engineering knowledge rendered these even more dangerous to use than muzzle-loaders.
Expansion of use
In 1415, the Portuguese invaded the Mediterranean port town of Ceuta. While it is difficult to confirm the use of firearms in the siege of the city, it is known that the Portuguese defended it thereafter with firearms, namely bombardas, colebratas, and falconetes. In 1419, Sultan Abu Sa'id led an army to reconquer the fallen city, and the Marinids brought cannons and used them in the assault on Ceuta. Finally, hand-held firearms and riflemen appeared in Morocco in 1437, in an expedition against the people of Tangiers. It is clear these weapons had developed into several different forms, from small guns to large artillery pieces.
The artillery revolution in Europe caught on during the Hundred Years' War and changed the way that battles were fought. In the preceding decades, the English had even used a gunpowder-like weapon in military campaigns against the Scottish. However, at this time, the cannons used in battle were very small and not particularly powerful. Cannons were only useful for the defense of a castle, as demonstrated at Breteuil in 1356, when the besieged English used a cannon to destroy an attacking French assault tower. By the end of the 14th century, cannons were only powerful enough to knock in roofs, and could not penetrate castle walls.
However, a major change occurred between 1420 and 1430, when artillery became much more powerful and could now batter strongholds and fortresses quite efficiently. The English, French, and Burgundians all advanced in military technology, and as a result the traditional advantage that went to the defense in a siege was lost. Cannons during this period were elongated, and the recipe for gunpowder was improved to make it three times as powerful as before. These changes led to the increased power in the artillery weapons of the time.
Joan of Arc encountered gunpowder weaponry several times. When she led the French against the English at the Battle of the Tourelles in 1429, she faced heavy gunpowder fortifications, and yet her troops prevailed in that battle. In addition, she led assaults against the English-held towns of Jargeau, Meung, and Beaugency, all with the support of large artillery units. When she led the assault on Paris, Joan faced stiff artillery fire, especially from the suburb of St. Denis, which ultimately led to her defeat in this battle. In April 1430, she went to battle against the Burgundians, whose support was purchased by the English. At this time, the Burgundians had the strongest and largest gunpowder arsenal among the European powers, and yet the French, under Joan of Arc's leadership, were able to beat back the Burgundians and defend themselves. As a result, most of the battles of the Hundred Years' War that Joan of Arc participated in were fought with gunpowder artillery.
The army of Mehmet the Conqueror, which conquered Constantinople in 1453, included both artillery and foot soldiers armed with gunpowder weapons. The Ottomans brought to the siege sixty-nine guns in fifteen separate batteries and trained them at the walls of the city. The barrage of Ottoman cannon fire lasted forty days, and they are estimated to have fired 19,320 times. Artillery also played a decisive role in the Battle of St. Jakob an der Birs of 1444. Early cannon were not always reliable; King James II of Scotland was killed by the accidental explosion of one of his own cannon, imported from Flanders, at the siege of Roxburgh Castle in 1460.
The able use of artillery supported in large measure the expansion and defense of the Portuguese Empire, as it was a necessary tool that allowed the Portuguese to face overwhelming odds both on land and at sea, from Morocco to Asia. In great sieges and in sea battles, the Portuguese demonstrated a level of proficiency in the use of artillery after the beginning of the 16th century unequalled by contemporary European neighbours, in part due to the experience gained in intense fighting in Morocco, which served as a proving ground for artillery and its practical application and made Portugal a forerunner in gunnery for decades. During the reign of King Manuel (1495–1521), at least 2,017 cannon were sent to Morocco for garrison defense, with more than 3,000 cannon estimated to have been required during that 26-year period. An especially noticeable division between siege guns and anti-personnel guns enhanced the use and effectiveness of Portuguese firearms above contemporary powers, making cannon the most essential element in the Portuguese arsenal.
The three major classes of Portuguese artillery were anti-personnel guns with a high bore length (including the rebrodequim, berço, falconete, falcão, sacre, áspide, cão, serpentina and passavolante); bastion guns that could batter fortifications (camelete, leão, pelicano, basilisco, águia, camelo, roqueira, urso); and howitzers, weighing up to 4,000 pounds, that fired large stone cannonballs in an elevated arc and could also fire incendiary devices, such as a hollow iron ball filled with pitch and fuse, designed to be fired at close range and burst on contact. The most popular piece in Portuguese arsenals was the berço, a 5 cm, one-pounder bronze breech-loading cannon that weighed 150 kg and had an effective range of 600 meters.
A tactical innovation the Portuguese introduced in fort defense was the use of combinations of projectiles against massed assaults. Although canister shot had been developed in the early 15th century, the Portuguese were the first to employ it extensively, and Portuguese engineers invented a canister round consisting of a thin lead case filled with iron pellets, which broke up at the muzzle and scattered its contents in a narrow pattern. Another innovation which Portugal adopted in advance of other European powers was the fuse-delayed action shell, in common use by 1505. Although dangerous, these shells were effective enough that a sixth of all rounds used by the Portuguese in Morocco were of the fused-shell variety.
The new Ming Dynasty established the "Divine Engine Battalion" (神机营), which specialized in various types of artillery. Light cannons and cannons with multiple volleys were developed. In a campaign to suppress a local minority rebellion near today's Burmese border, "the Ming army used a 3-line method of arquebuses/muskets to destroy an elephant formation".
When the Portuguese and Spanish arrived in Southeast Asia, they found that the local kingdoms were already using cannon. The Portuguese and Spanish invaders were unpleasantly surprised and even outgunned on occasion. Duarte Barbosa, ca. 1514, said that the inhabitants of Java were great masters in casting artillery and very good artillerymen. They made many one-pounder cannons (cetbang or rentaka), long muskets, spingarde (arquebuses), schioppi (hand cannon), Greek fire, guns (cannon), and other fire-works. In all aspects the Javanese were considered excellent in casting artillery and in the knowledge of using it. In 1513, the Javanese fleet led by Pati Unus sailed to attack Portuguese Malacca "with much artillery made in Java, for the Javanese are skilled in founding and casting, and in all works in iron, over and above what they have in India". By the early 16th century, the Javanese had already started locally producing large guns, dubbed "sacred cannon" or "holy cannon", some of which have survived to the present day, though in limited numbers. These cannon ranged from 180- to 260-pounders, weighing between 3 and 8 tons and measuring between 3 and 6 m.
Between 1593 and 1597, about 200,000 Korean and Chinese troops fighting against Japan in Korea actively used heavy artillery in both siege and field combat. Korean forces mounted artillery in ships as naval guns, providing an advantage over the Japanese navy, which used the Kunikuzushi (国崩し – Japanese breech-loading swivel gun) and Ōzutsu (大筒 – large-size Tanegashima) as its largest firearms.
Smoothbores
Bombards were of value mainly in sieges. A famous Turkish example used at the siege of Constantinople in 1453 weighed 19 tons, took 200 men and sixty oxen to emplace, and could fire just seven times a day. The Fall of Constantinople was perhaps "the first event of supreme importance whose result was determined by the use of artillery" when the huge bronze cannons of Mehmed II breached the city's walls, ending the Byzantine Empire, according to Sir Charles Oman.
Bombards developed in Europe were massive smoothbore weapons distinguished by their lack of a field carriage, immobility once emplaced, highly individual design, and noted unreliability. Their large size precluded the barrels being cast, and they were instead constructed out of metal staves or rods bound together with hoops like a barrel, giving their name to the gun barrel.
The use of the word "cannon" marks the introduction in the 15th century of a dedicated field carriage with axle, trail and animal-drawn limber—this produced mobile field pieces that could move and support an army in action, rather than being confined to sieges and static defense. The reduction in the size of the barrel was due to improvements in both iron technology and gunpowder manufacture, while the development of trunnions—projections at the side of the cannon cast as an integral part of the barrel—allowed it to be fixed to a more movable base and made raising or lowering the barrel much easier.
The first land-based mobile weapon is usually credited to Jan Žižka, who deployed his oxen-hauled cannon during the Hussite Wars of Bohemia (1418–1424). However, cannons were still large and cumbersome. With the rise of musketry in the 16th century, cannon were largely (though not entirely) displaced from the battlefield—the cannon were too slow and cumbersome to be used and too easily lost to a rapid enemy advance.
The combining of shot and powder into a single unit, a cartridge, occurred in the 1620s with a simple fabric bag, and was quickly adopted by all nations. It sped up loading and made it safer, but unexpelled bag fragments fouled the gun barrel, and a new tool—a worm—was introduced to remove them. Gustavus Adolphus is identified as the general who made cannon an effective force on the battlefield—pushing the development of much lighter and smaller weapons and deploying them in far greater numbers than previously. The outcome of battles was still determined by the clash of infantry.
Shells, explosive-filled fused projectiles, were in use by the 15th century. The development of specialized pieces—shipboard artillery, howitzers and mortars—was also begun in this period. More esoteric designs, like the multi-barrel ribauldequin (known as "organ guns"), were also produced.
Kazimierz Siemienowicz's 1650 book Artis Magnae Artilleriae pars prima was one of the most important contemporary publications on the subject of artillery. For over two centuries it was used in Europe as a basic artillery manual.
One of the most significant effects of artillery during this period was, however, somewhat more indirect: by easily reducing to rubble any medieval-type fortification or city wall (some of which had stood since Roman times), it abolished millennia of siege-warfare strategies and styles of fortification building. This led, among other things, to a frenzy of new bastion-style fortifications being built all over Europe and in its colonies, but it also had a strong integrating effect on emerging nation-states, as kings were able to use their newfound artillery superiority to force local dukes or lords to submit to their will, setting the stage for the absolutist kingdoms to come.
Modern rocket artillery can trace its heritage back to the Mysorean rockets of India. Their first recorded use was in 1780, and they were employed as weapons throughout the Second, Third and Fourth Mysore Wars fought between the British East India Company and the Kingdom of Mysore. In the Battle of Pollilur, the Siege of Seringapatam (1792) and the Battle of Seringapatam in 1799, these rockets were used with considerable effect against the British. After the wars, several Mysore rockets were sent to England, but experiments with heavier payloads were unsuccessful. In 1804, William Congreve, considering the Mysorean rockets to have too short a range (less than 1,000 yards), developed rockets in numerous sizes with ranges up to 3,000 yards, eventually using iron casings; the resulting Congreve rocket was used effectively during the Napoleonic Wars and the War of 1812.
Napoleonic
With the Napoleonic Wars, artillery experienced changes in both physical design and operation. Rather than being overseen by "mechanics", artillery was viewed as its own service branch with the capability of dominating the battlefield. The success of the French artillery companies was at least in part due to the presence of specially trained artillery officers leading and coordinating during the chaos of battle. Napoleon, himself a former artillery officer, perfected the tactic of massed artillery batteries unleashed upon a critical point in his enemies' line as a prelude to a decisive infantry and cavalry assault.
Physically, cannons continued to become smaller and lighter. During the Seven Years' War, King Frederick II of Prussia used these advances to deploy horse artillery that could move throughout the battlefield. Frederick also introduced the reversible iron ramrod, which was much more resistant to breakage than older wooden designs. Reversibility also helped increase the rate of fire, since a soldier no longer had to worry about which end of the ramrod he was using.
Jean-Baptiste de Gribeauval, a French artillery engineer, introduced the standardization of cannon design in the mid-18th century. He developed a 6-inch (150 mm) field howitzer whose gun barrel, carriage assembly and ammunition specifications were made uniform for all French cannons. The standardized interchangeable parts of these cannons, down to the nuts, bolts and screws, made their mass production and repair much easier. While the Gribeauval system made for more efficient production and assembly, the carriages used were heavy and the gunners were forced to march on foot (instead of riding on the limber and gun as in the British system). Each cannon was named for the weight of its projectile, giving variants such as the 4-, 8- and 12-pounder. The projectiles themselves included solid balls and canister containing lead bullets or other material. Canister shot acted as a massive shotgun, peppering the target with hundreds of projectiles at close range. The solid balls, known as round shot, were most effective when fired at shoulder height across a flat, open area: the ball would tear through the ranks of the enemy or bounce along the ground, breaking legs and ankles.
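The pounder naming convention maps roughly onto bore size, because for a solid cast-iron ball the diameter grows with the cube root of the weight. The sketch below is purely illustrative — the density figure is an assumed value for cast iron, and real bores were bored slightly larger than the shot to allow windage:

```python
import math

def round_shot_diameter_mm(weight_lb, density_kg_m3=7200.0):
    """Approximate diameter of a solid cast-iron round shot of the
    given weight; the density is an assumed figure for cast iron."""
    mass_kg = weight_lb * 0.45359237
    volume_m3 = mass_kg / density_kg_m3
    # invert the sphere-volume formula V = (pi/6) * d**3
    diameter_m = (6.0 * volume_m3 / math.pi) ** (1.0 / 3.0)
    return diameter_m * 1000.0

for lb in (4, 8, 12):
    print(f"{lb}-pounder: shot diameter ~{round_shot_diameter_mm(lb):.0f} mm")
```

Doubling the shot weight therefore increases the diameter by only about 26%, which is why the pounder scale climbs so steeply relative to caliber.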
Modern
The development of modern artillery occurred in the mid to late 19th century as a result of the convergence of various improvements in the underlying technology. Advances in metallurgy allowed for the construction of breech-loading rifled guns that could fire at a much greater muzzle velocity.
After the British artillery was shown up in the Crimean War as having barely changed since the Napoleonic Wars, the industrialist William Armstrong was awarded a contract by the government to design a new piece of artillery. Production started in 1855 at the Elswick Ordnance Company and the Royal Arsenal at Woolwich, and the outcome was the revolutionary Armstrong Gun, which marked the birth of modern artillery. Three of its features particularly stand out. | Artillery | Wikipedia | 381 | 2508 | https://en.wikipedia.org/wiki/Artillery | Technology | Artillery and siege | null |
First, the piece was rifled, which allowed for a much more accurate and powerful action. Although rifling had been tried on small arms since the 15th century, the necessary machinery to accurately rifle artillery was not available until the mid-19th century. Martin von Wahrendorff and Joseph Whitworth independently produced rifled cannon in the 1840s, but it was Armstrong's gun that was first to see widespread use, during the Crimean War. The cast iron shell of the Armstrong gun was similar in shape to a Minié ball and had a thin lead coating which made it fractionally larger than the gun's bore and which engaged with the gun's rifling grooves to impart spin to the shell. This spin, together with the elimination of windage as a result of the tight fit, enabled the gun to achieve greater range and accuracy than existing smooth-bore muzzle-loaders with a smaller powder charge.
His gun was also a breech-loader. Although attempts at breech-loading mechanisms had been made since medieval times, the essential engineering problem was that the mechanism could not withstand the explosive charge. It was only with the advances in metallurgy and precision engineering capabilities during the Industrial Revolution that Armstrong was able to construct a viable solution. The gun combined all the properties that make up an effective artillery piece. The gun was mounted on a carriage in such a way as to return the gun to firing position after the recoil.
What made the gun really revolutionary was the technique of constructing the gun barrel, which allowed it to withstand much more powerful explosive forces. The "built-up" method involved assembling the barrel from wrought-iron (later mild steel) tubes of successively smaller diameter. Each tube was heated to allow it to expand and fit over the previous one. When it cooled, the tube contracted, though not back to its original size, leaving the inner tubes under an even inward compression along the walls of the gun that counteracted the outward force the gun's firing exerted on the barrel.
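The heating step of the built-up method can be estimated from ordinary thermal expansion. The sketch below is purely illustrative — the interference, tube diameter and expansion coefficient are assumed figures for wrought iron or steel, not Armstrong's actual workshop specifications:

```python
def shrink_fit_heating(interference_m, tube_diameter_m, alpha_per_k=12e-6):
    """Temperature rise needed for an outer tube to expand enough to
    slip over the tube beneath it. alpha_per_k is an assumed linear
    thermal-expansion coefficient for wrought iron/steel."""
    # diametral growth is d * alpha * dT; it must exceed the interference
    return interference_m / (tube_diameter_m * alpha_per_k)

# e.g. a 0.5 mm interference fit on a 200 mm tube (illustrative figures)
dT = shrink_fit_heating(0.5e-3, 0.200)
print(f"required heating: ~{dT:.0f} K above ambient")
```

A fit of half a millimetre on a 200 mm tube thus needs roughly 200 K of heating — well within forge temperatures, which is what made the process practical.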
Another innovative feature, more usually associated with 20th-century guns, was what Armstrong called its "grip", which was essentially a squeeze bore; the 6 inches of the bore at the muzzle end was of slightly smaller diameter, which centered the shell before it left the barrel and at the same time slightly swaged down its lead coating, reducing its diameter and slightly improving its ballistic qualities. | Artillery | Wikipedia | 492 | 2508 | https://en.wikipedia.org/wiki/Artillery | Technology | Artillery and siege | null |
Armstrong's system was adopted in 1858, initially for "special service in the field" and initially he produced only smaller artillery pieces, 6-pounder (2.5 in/64 mm) mountain or light field guns, 9-pounder (3 in/76 mm) guns for horse artillery, and 12-pounder (3 inches /76 mm) field guns.
The first cannon to contain all 'modern' features is generally considered to be the French 75 of 1897. It used cased ammunition, was breech-loading, and had modern sights and a self-contained firing mechanism. It was the first field gun to include a hydro-pneumatic recoil mechanism, which kept the gun's trail and wheels perfectly still during the firing sequence. Since it did not need to be re-aimed after each shot, the crew could fire as soon as the barrel returned to its resting position. In typical use, the French 75 could deliver fifteen rounds per minute on its target, either shrapnel or melinite high-explosive, up to about 5 miles (8,500 m) away. Its firing rate could even reach close to 30 rounds per minute, albeit only for a very short time and with a highly experienced crew. These were rates that contemporary bolt-action rifles could not match.
Indirect fire
Indirect fire, the firing of a projectile without relying on direct line of sight between the gun and the target, possibly dates back to the 16th century. Early battlefield use of indirect fire may have occurred at Paltzig in July 1759, when the Russian artillery fired over the tops of trees, and at the Battle of Waterloo, where a battery of the Royal Horse Artillery fired shrapnel indirectly against advancing French troops.
In 1882, Russian Lieutenant Colonel KG Guk published Indirect Fire for Field Artillery, which provided a practical method of using aiming points for indirect fire by describing, "all the essentials of aiming points, crest clearance, and corrections to fire by an observer". | Artillery | Wikipedia | 402 | 2508 | https://en.wikipedia.org/wiki/Artillery | Technology | Artillery and siege | null |
A few years later, the Richtfläche (lining-plane) sight was invented in Germany and provided a means of indirect laying in azimuth, complementing the clinometers for indirect laying in elevation which already existed. Despite conservative opposition within the German army, indirect fire was adopted as doctrine by the 1890s. In the early 1900s, Goertz in Germany developed an optical sight for azimuth laying. It quickly replaced the lining-plane; in English, it became the 'Dial Sight' (UK) or 'Panoramic Telescope' (US).
The British had experimented halfheartedly with indirect fire techniques since the 1890s, but with the onset of the Boer War in 1899, they became the first to apply the theory in practice, although they had to improvise without a lining-plane sight.
In the next 15 years leading up to World War I, the techniques of indirect fire became available for all types of artillery. Indirect fire was the defining characteristic of 20th-century artillery and led to undreamt of changes in the amount of artillery, its tactics, organisation, and techniques, most of which occurred during World War I.
One implication of indirect fire and improving guns was increasing range between gun and target, which increased the time of flight and raised the vertex of the trajectory. The result was decreasing accuracy (an increasing distance between the target and the mean point of impact of the shells aimed at it) caused by the growing effects of non-standard conditions. Indirect firing data was based on standard conditions, including a specific muzzle velocity, zero wind, standard air temperature and density, and standard propellant temperature. In practice, this combination of conditions almost never existed: they varied throughout the day and from day to day, and the greater the time of flight, the greater the inaccuracy. An added complication was the need for survey to accurately fix the coordinates of the gun position and provide accurate orientation for the guns. Targets, of course, had to be accurately located as well, but by 1916, air photo interpretation techniques enabled this, and ground survey techniques could sometimes be used.
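The sensitivity to non-standard conditions can be illustrated with even a deliberately crude in-vacuo model (real ballistic computation must also account for drag, wind and air density; the gun data below are assumptions, not any particular weapon's figures):

```python
import math

def vacuum_range_m(muzzle_velocity_ms, elevation_deg, g=9.81):
    """Range of a projectile in a vacuum -- a crude model, used here
    only to show how sensitive range is to muzzle velocity."""
    theta = math.radians(elevation_deg)
    return muzzle_velocity_ms ** 2 * math.sin(2.0 * theta) / g

nominal = vacuum_range_m(500.0, 30.0)   # assumed standard-condition data
worn    = vacuum_range_m(495.0, 30.0)   # muzzle velocity just 1% low
print(f"nominal range: {nominal:.0f} m, shortfall: {nominal - worn:.0f} m")
```

Because range scales with the square of muzzle velocity, a 1% velocity loss (a worn barrel or cold propellant) shifts the fall of shot by roughly 2% of the range — hundreds of metres at long range, which is why predicted fire required measured corrections rather than standard-condition tables alone.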
In 1914, the methods of correcting firing data for the actual conditions were often convoluted, and the availability of data about actual conditions was rudimentary or non-existent; the assumption was that fire would always be ranged (adjusted). British heavy artillery worked energetically to progressively solve all these problems from late 1914 onwards, and by early 1918 had effective processes in place for both field and heavy artillery. These processes enabled 'map-shooting', later called 'predicted fire': effective fire could be delivered against an accurately located target without ranging. Nevertheless, the mean point of impact was still some tens of yards from the target-centre aiming point. It was not precision fire, but it was good enough for concentrations and barrages. These processes remain in use into the 21st century, with refinements to calculations enabled by computers and improved data capture about non-standard conditions.
The British Major General Henry Hugh Tudor pioneered armour and artillery cooperation at the breakthrough Battle of Cambrai. The improvements in providing and using data for non-standard conditions (propellant temperature, muzzle velocity, wind, air temperature, and barometric pressure) were developed by the major combatants throughout the war and enabled effective predicted fire. The effectiveness of this was demonstrated by the British in 1917 (at Cambrai) and by Germany the following year (Operation Michael).
An estimated 75,000 French soldiers were casualties of friendly artillery fire in the four years of World War I.
Precision-guidance | Artillery | Wikipedia | 319 | 2508 | https://en.wikipedia.org/wiki/Artillery | Technology | Artillery and siege | null |
Modern artillery is most obviously distinguished by its long range, firing an explosive shell or rocket and a mobile carriage for firing and transport. However, its most important characteristic is the use of indirect fire, whereby the firing equipment is aimed without seeing the target through its sights. Indirect fire emerged at the beginning of the 20th century and was greatly enhanced by the development of predicted fire methods in World War I. However, indirect fire was area fire; it was and is not suitable for destroying point targets; its primary purpose is area suppression. Nevertheless, by the late 1970s precision-guided munitions started to appear, notably the US 155 mm Copperhead and its Soviet 152 mm Krasnopol equivalent that had success in Indian service. These relied on laser designation to 'illuminate' the target that the shell homed onto. However, in the early 21st century, the Global Positioning System (GPS) enabled relatively cheap and accurate guidance for shells and missiles, notably the US 155 mm Excalibur and the 227 mm GMLRS rocket. The introduction of these led to a new issue, the need for very accurate three-dimensional target coordinates—the mensuration process.
Weapons covered by the term 'modern artillery' include "cannon" artillery (such as howitzer, mortar, and field gun) and rocket artillery. Certain smaller-caliber mortars are more properly designated small arms rather than artillery, albeit indirect-fire small arms. This term also came to include coastal artillery which traditionally defended coastal areas against seaborne attack and controlled the passage of ships. With the advent of powered flight at the start of the 20th century, artillery also included ground-based anti-aircraft batteries.
The term "artillery" has traditionally not been used for projectiles with internal guidance systems, which are instead termed "missilery", though some modern artillery units employ surface-to-surface missiles. Advances in terminal guidance systems for small munitions have allowed large-caliber guided projectiles to be developed, blurring this distinction; see Long Range Precision Fires (LRPF) and joint terminal attack controller.
Ammunition
One of the most important roles of logistics is the supply of munitions as a primary type of artillery consumable, their storage (ammunition dump, arsenal, magazine) and the provision of fuzes, detonators and warheads at the point where artillery troops will assemble the charge, projectile, bomb or shell.
A round of artillery ammunition comprises four components:
Fuze
Projectile
Propellant
Primer
Fuzes | Artillery | Wikipedia | 496 | 2508 | https://en.wikipedia.org/wiki/Artillery | Technology | Artillery and siege | null |
Fuzes are the devices that initiate an artillery projectile, either to detonate its High Explosive (HE) filling or eject its cargo (illuminating flare or smoke canisters being examples). The official military spelling is "fuze". Broadly there are four main types:
impact (including graze and delay)
mechanical time including airburst
proximity sensor including airburst
programmable electronic detonation including airburst
Most artillery fuzes are nose fuzes. However, base fuzes have been used with armor-piercing shells and for squash head (High-Explosive Squash Head (HESH) or High Explosive, Plastic (HEP) anti-tank shells). At least one nuclear shell and its non-nuclear spotting version also used a multi-deck mechanical time fuze fitted into its base.
Impact fuzes were, and in some armies remain, the standard fuze for HE projectiles. Their default action is normally 'superquick'; some have had a 'graze' action which allows them to penetrate light cover, and others have 'delay'. Delay fuzes allow the shell to penetrate the ground before exploding. Armor-Piercing (AP) and Concrete-Piercing (CP) fuzes are specially hardened. During World War I and later, ricochet fire with delay- or graze-fuzed HE shells, fired with a flat angle of descent, was used to achieve airburst.
HE shells can be fitted with other fuzes. Airburst fuzes usually have a combined airburst and impact function. However, until the introduction of proximity fuzes, the airburst function was mostly used with cargo munitions—for example, shrapnel, illumination, and smoke. The larger calibers of anti-aircraft artillery are almost always used airburst. Airburst fuzes have to have the fuze length (running time) set on them. This is done just before firing using either a wrench or a fuze setter pre-set to the required fuze length. | Artillery | Wikipedia | 408 | 2508 | https://en.wikipedia.org/wiki/Artillery | Technology | Artillery and siege | null |
Early airburst fuzes used igniferous timers which lasted into the second half of the 20th century. Mechanical time fuzes appeared in the early part of the century. These required a means of powering them. The Thiel mechanism used a spring and escapement (i.e. 'clockwork'), Junghans used centrifugal force and gears, and Dixi used centrifugal force and balls. From about 1980, electronic time fuzes started replacing mechanical ones for use with cargo munitions.
Proximity fuzes have been of two types: photo-electric or radar. The former was not very successful and seems only to have been used with British anti-aircraft artillery 'unrotated projectiles' (rockets) in World War II. Radar proximity fuzes were a big improvement over the mechanical (time) fuzes which they replaced. Mechanical time fuzes required an accurate calculation of their running time, which was affected by non-standard conditions. With HE (requiring a burst 20 to 30 feet above the ground), if this was very slightly wrong the rounds would either hit the ground or burst too high. Accurate running time was less important with cargo munitions that burst much higher.
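Why accurate running time mattered so much can be sketched with the same crude in-vacuo reasoning: near the end of its flight a shell is descending at roughly its initial vertical speed, so any error in the fuze's running time translates directly into a height-of-burst error. All figures below are assumptions chosen for illustration:

```python
import math

def descent_speed_ms(muzzle_velocity_ms, elevation_deg):
    """Vertical speed near the end of an in-vacuo trajectory -- a crude
    stand-in for real ballistic data, for illustration only."""
    return muzzle_velocity_ms * math.sin(math.radians(elevation_deg))

def height_of_burst_error_m(muzzle_velocity_ms, elevation_deg, time_error_s):
    """Shift in burst height caused by a fuze running-time error."""
    return descent_speed_ms(muzzle_velocity_ms, elevation_deg) * time_error_s

# assumed data: 500 m/s at 30 degrees, fuze running time off by 0.1 s
err = height_of_burst_error_m(500.0, 30.0, 0.1)
print(f"height-of-burst error: ~{err:.0f} m")
```

Even a tenth of a second of timing error moves the burst by tens of metres, far more than the desired burst height itself — hence the attraction of a fuze that senses the ground directly.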
The first radar proximity fuzes (perhaps originally codenamed 'VT' and later called Variable Time (VT)) were invented by the British and developed by the US, and were initially used against aircraft in World War II. Their ground use was delayed for fear of the enemy recovering 'blinds' (artillery shells which failed to detonate) and copying the fuze. The first proximity fuzes were designed to detonate a set height above the ground. These air-bursts are much more lethal against personnel than ground bursts because they deliver a greater proportion of useful fragments, and deliver them into terrain where a prone soldier would be protected from ground bursts.
However, proximity fuzes can suffer premature detonation because of the moisture in heavy rain clouds. This led to 'Controlled Variable Time' (CVT) fuzes after World War II. These fuzes have a mechanical timer that switches on the radar about 5 seconds before expected impact; they also detonate on impact.
The proximity fuze emerged on the battlefields of Europe in late December 1944. They have become known as the U.S. Artillery's "Christmas present", and were much appreciated when they arrived during the Battle of the Bulge. They were also used to great effect in anti-aircraft projectiles in the Pacific against kamikaze as well as in Britain against V-1 flying bombs. | Artillery | Wikipedia | 511 | 2508 | https://en.wikipedia.org/wiki/Artillery | Technology | Artillery and siege | null |
Electronic multi-function fuzes started to appear around 1980. Using solid-state electronics they were relatively cheap and reliable, and became the standard fitted fuze in operational ammunition stocks in some western armies. The early versions were often limited to proximity airburst, albeit with height of burst options, and impact. Some offered a go/no-go functional test through the fuze setter.
Later versions introduced induction fuze setting and testing instead of physically placing a fuze setter on the fuze. The latest, such as Junghans' DM84U, provide options for superquick, delay, a choice of proximity heights of burst, time, and a choice of foliage penetration depths.
Projectiles
The projectile is the munition or "bullet" fired downrange. This may be an explosive device. Projectiles have traditionally been classified as "shot" or "shell", the former being solid and the latter having some form of "payload".
Shells can be divided into three configurations: bursting, base ejection or nose ejection. The latter is sometimes called the shrapnel configuration. The most modern is base ejection, which was introduced in World War I. Base and nose ejection are almost always used with airburst fuzes. Bursting shells use various types of fuze depending on the nature of the payload and the tactical need at the time.
Payloads have included:
Bursting: high-explosive, white phosphorus, coloured marker, chemical, nuclear devices; high-explosive anti-tank and canister may be considered special types of bursting shell.
Nose ejection: shrapnel, star, incendiary and flechette (a more modern version of shrapnel).
Base ejection: Dual-Purpose Improved Conventional Munition bomblets, which arm themselves and function after a set number of rotations after having been ejected from the projectile (this produces unexploded sub-munitions, or "duds", which remain dangerous), scatterable mines, illuminating, coloured flare, smoke, incendiary, propaganda, chaff (foil to jam radars) and modern exotics such as electronic payloads and sensor-fuzed munitions. | Artillery | Wikipedia | 459 | 2508 | https://en.wikipedia.org/wiki/Artillery | Technology | Artillery and siege | null |
Stabilization
Rifled: Artillery projectiles have traditionally been spin-stabilised, meaning that they spin in flight so that gyroscopic forces prevent them from tumbling. Spin is induced by gun barrels having rifling, which engages a soft metal band around the projectile, called a "driving band" (UK) or "rotating band" (U.S.). The driving band is usually made of copper, but synthetic materials have been used.
Smoothbore/fin-stabilized: In modern artillery, smoothbore barrels have been used mostly by mortars. These projectiles use fins in the airflow at their rear to maintain correct orientation. The primary benefits over rifled barrels are reduced barrel wear; longer range, due to the reduced loss of energy to friction and to gas escaping around the projectile via the rifling; and a larger explosive core for a given caliber, since the projectile case can be made thinner when the smooth bore applies less force to the sides of the shell.
Rifled/fin-stabilized: A combination of the above can be used, where the barrel is rifled, but the projectile also has deployable fins for stabilization, guidance or gliding.
Propellant
Most forms of artillery require a propellant to propel the projectile to the target. Propellant is always a low explosive, which means it deflagrates, rather than detonating like high explosives. The shell is accelerated to a high velocity in a very short time by the rapid generation of gas from the burning propellant. This high pressure is achieved by burning the propellant in a contained area, either the chamber of a gun barrel or the combustion chamber of a rocket motor. | Artillery | Wikipedia | 356 | 2508 | https://en.wikipedia.org/wiki/Artillery | Technology | Artillery and siege | null |
Until the late 19th century, the only available propellant was black powder. It had many disadvantages as a propellant: it had relatively low power, requiring large amounts of powder to fire projectiles, and created thick clouds of white smoke that would obscure the targets, betray the positions of guns, and make aiming impossible. In 1846, nitrocellulose (also known as guncotton) was discovered, and the high explosive nitroglycerin was discovered at nearly the same time. Nitrocellulose was significantly more powerful than black powder, and was smokeless. Early guncotton was unstable, however, and burned very fast and hot, leading to greatly increased barrel wear. Widespread introduction of smokeless powder had to wait for the advent of the double-base powders, which combine nitrocellulose and nitroglycerin to produce a powerful, smokeless, stable propellant.
Many other formulations were developed in the following decades, generally trying to find the optimum characteristics of a good artillery propellant – low temperature, high energy, non-corrosive, highly stable, cheap, and easy to manufacture in large quantities. Modern gun propellants are broadly divided into three classes: single-base propellants that are mainly or entirely nitrocellulose based, double-base propellants consisting of a combination of nitrocellulose and nitroglycerin, and triple base composed of a combination of nitrocellulose and nitroglycerin and nitroguanidine.
Artillery shells fired from a barrel can be assisted to greater range in three ways:
Rocket-assisted projectiles enhance and sustain the projectile's velocity by providing additional 'push' from a small rocket motor that is part of the projectile's base.
Base bleed uses a small pyrotechnic charge at the base of the projectile to introduce sufficient combustion products into the low-pressure region behind the base of the projectile responsible for a large proportion of the drag.
Ramjet-assisted, similar to rocket-assisted, but using a ramjet instead of a rocket motor; it is anticipated that a ramjet-assisted 120-mm mortar shell could reach a range of .
Propelling charges for barrel artillery can be provided either as cartridge bags or in metal cartridge cases. Generally, anti-aircraft artillery and smaller-caliber (up to 3" or 76.2 mm) guns use metal cartridge cases that include the round and propellant, similar to a modern rifle cartridge. This simplifies loading and is necessary for very high rates of fire. Bagged propellant allows the amount of powder to be raised or lowered, depending on the range to the target. It also makes handling of larger shells easier. Cases and bags require totally different types of breech. A metal case holds an integral primer to initiate the propellant and provides the gas seal to prevent the gases leaking out of the breech; this is called obturation. With bagged charges, the breech itself provides obturation and holds the primer. In either case, the primer is usually percussion, but electrical is also used, and laser ignition is emerging. Modern 155 mm guns have a primer magazine fitted to their breech.
Artillery ammunition has four classifications according to use:
Service: ammunition used in live fire training or for wartime use in a combat zone. Also known as "warshot" ammunition.
Practice: Ammunition with a non- or minimally-explosive projectile that mimics the characteristics (range, accuracy) of live rounds for use under training conditions. Practice artillery ammunition often utilizes a colored-smoke-generating bursting charge for marking purposes in place of the normal high-explosive charge.
Dummy: Ammunition with an inert warhead, inert primer, and no propellant; used for training or display.
Blank: Ammunition with live primer, greatly reduced propellant charge (typically black powder), and no projectile; used for training, demonstration or ceremonial use.
Field artillery system
Because modern field artillery mostly uses indirect fire, the guns have to be part of a system that enables them to attack targets invisible to them, in accordance with the combined arms plan.
The main functions in the field artillery system are:
Communications
Command: authority to allocate resources;
Target acquisition: detect, identify and deduce the location of targets;
Control: authority to decide which targets to attack and allot fire units to the attack;
Computation of firing data: to deliver fire from a fire unit onto its target;
Fire units: guns, launchers or mortars grouped together;
Specialist services: produce data to support the production of accurate firing data;
Logistic services: to provide combat supplies, particularly ammunition, and equipment support.
All these calculations to produce a quadrant elevation (or range) and azimuth were done manually using instruments, tabulated data, data of the moment, and approximations until battlefield computers started appearing in the 1960s and 1970s. While some early calculators copied the manual method (typically substituting polynomials for tabulated data), computers use a different approach. They simulate a shell's trajectory by 'flying' it in short steps and applying data about the conditions affecting the trajectory at each step. This simulation is repeated until it produces a quadrant elevation and azimuth that lands the shell within the required 'closing' distance of the target coordinates.
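The stepwise approach can be sketched in a few lines. The following toy Python model is an illustration only, not a real ballistic kernel: it assumes a point-mass shell, an invented v-squared drag constant, and a bisection search on quadrant elevation for the low-angle solution.

```python
import math

def fly(qe_deg, muzzle_v, dt=0.01, drag_k=1e-4):
    """Step a point-mass shell until it returns to ground level.
    Returns the horizontal distance travelled in metres (toy model)."""
    g = 9.81
    vx = muzzle_v * math.cos(math.radians(qe_deg))
    vy = muzzle_v * math.sin(math.radians(qe_deg))
    x = y = 0.0
    while True:
        v = math.hypot(vx, vy)
        # drag opposes motion, proportional to v^2 (invented constant)
        vx -= drag_k * v * vx * dt
        vy -= (g + drag_k * v * vy) * dt
        x += vx * dt
        y += vy * dt
        if y <= 0 and vy < 0:
            return x

def solve_qe(target_range, muzzle_v, closing=5.0):
    """Bisect on quadrant elevation (low-angle solution) until the
    simulated impact lands within `closing` metres of the target."""
    lo, hi = 0.1, 45.0
    for _ in range(60):
        qe = (lo + hi) / 2
        r = fly(qe, muzzle_v)
        if abs(r - target_range) <= closing:
            return qe
        if r < target_range:
            lo = qe
        else:
            hi = qe
    return qe  # best effort after the iteration budget
```

A real kernel would also apply data of the moment (meteorology, muzzle velocity wear, projectile batch) at each step; here the conditions are fixed to keep the sketch short.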
NATO has a standard ballistic model for computer calculations and has expanded the scope of this into the NATO Armaments Ballistic Kernel (NABK) within the SG2 Shareable (Fire Control) Software Suite (S4).
Logistics
Supply of artillery ammunition has always been a major component of military logistics. Up until World War I some armies made artillery responsible for all forward ammunition supply because the load of small arms ammunition was trivial compared to artillery. Different armies use different approaches to ammunition supply, which can vary with the nature of operations. Differences include where the logistic service transfers artillery ammunition to artillery, the amount of ammunition carried in units and extent to which stocks are held at unit or battery level. A key difference is whether supply is 'push' or 'pull'. In the former the 'pipeline' keeps pushing ammunition into formations or units at a defined rate. In the latter units fire as tactically necessary and replenish to maintain or reach their authorised holding (which can vary), so the logistic system has to be able to cope with surge and slack.
Classification
Artillery types can be categorised in several ways, for example by type or size of weapon or ordnance, by role or by organizational arrangements.
Types of ordnance
The types of cannon artillery are generally distinguished by the velocity at which they fire projectiles.
Types of artillery:
Cannon: The oldest type of artillery, with a direct firing trajectory.
Bombard: A type of large-calibre, muzzle-loading artillery piece, a cannon or mortar, used during sieges to shoot round stone projectiles at the walls of enemy fortifications.
Falconet: A type of light cannon developed in the late 15th century that fired a smaller shot than the similar falcon.
Swivel gun: A type of small cannon mounted on a swiveling stand or fork, which allows a very wide arc of movement. Camel-mounted swivel guns called zamburak were used by the Gunpowder Empires as self-propelled artillery.
Volley gun: A gun with multiple single-shot barrels that were volley fired simultaneously or sequentially in quick succession. Although capable of unleashing intense firepower, volley guns differ from modern machine guns in that they lack autoloading and automatic fire mechanisms.
Siege artillery: Large-caliber artillery with limited mobility and an indirect firing trajectory, used to bombard targets at long distances.
Large-caliber artillery.
Field artillery: Mobile weapons used to support armies in the field. Subcategories include:
Infantry support guns: Directly support infantry units.
Mountain guns: Lightweight guns that can be disassembled and transported through difficult terrain.
Field guns: Capable of long-range direct fires.
Howitzers: Capable of high-angle fire, they are most often employed for indirect fire.
Gun-howitzers: Capable of high or low-angle fire with a longer barrel.
Mortars: Typically muzzle-loaded, short-barreled, high-trajectory weapons designed primarily for an indirect-fire role.
Gun-mortars: Typically breech-loaded, capable of high or low-angle fire with a longer barrel.
Tank guns: Large-caliber guns mounted on tanks to provide mobile direct fire.
Anti-tank artillery: Guns, usually mobile, designed primarily for direct fire to destroy armored fighting vehicles with heavy armor.
Anti-tank gun: Guns designed for direct fire to destroy tanks and other armored fighting vehicles.
Anti-aircraft artillery: Guns, usually mobile, designed for attacking aircraft, whether based on land or at sea. Some guns were suitable for the dual roles of anti-aircraft and anti-tank warfare.
Rocket artillery: Launches rockets or missiles, instead of shot or shell.
Railway gun: Large-caliber weapons that are mounted on, transported by and fired from specially-designed railway wagons.
Naval artillery: Guns mounted on warships to be used either against other naval vessels or to bombard coastal targets in support of ground forces. The crowning achievement of naval artillery was the battleship, but the advent of air power and missiles has rendered this type of artillery largely obsolete. They are typically longer-barreled, low-trajectory, high-velocity weapons designed primarily for a direct-fire role.
Coastal artillery: Fixed-position weapons dedicated to defense of a particular location, usually a coast (for example, the Atlantic Wall in World War II) or harbor. Not needing to be mobile, coastal artillery used to be much larger than equivalent field artillery pieces, giving them longer range and more destructive power. Modern coastal artillery (for example, Russia's "Bereg" system) is often self-propelled, (allowing it to avoid counter-battery fire) and fully integrated, meaning that each battery has all of the support systems that it requires (maintenance, targeting radar, etc.) organic to its unit.
Aircraft artillery: Large-caliber guns mounted on attack aircraft, this is typically found on slow-flying gunships.
Nuclear artillery: Artillery which fires nuclear shells.
Modern field artillery can also be split into two other subcategories: towed and self-propelled. As the name suggests, towed artillery has a prime mover, usually an artillery tractor or truck, to move the piece, crew, and ammunition around. Towed artillery is in some cases equipped with an auxiliary power unit (APU) for small displacements. Self-propelled artillery is permanently mounted on a carriage or vehicle with room for the crew and ammunition and is thus capable of moving quickly from one firing position to another, both to support the fluid nature of modern combat and to avoid counter-battery fire. It includes mortar carrier vehicles, many of which allow the mortar to be removed from the vehicle and used dismounted, potentially in terrain in which the vehicle cannot navigate, or in order to avoid detection.
Organizational types
At the beginning of the modern artillery period, in the late 19th century, many armies had three main types of artillery; in some cases they were sub-branches within the artillery branch, in others separate branches or corps. There were also other types, excluding the armament fitted to warships:
Horse artillery, first formed as regular units in the late 18th century with the role of supporting cavalry; they were distinguished by the entire crew being mounted.
Field or "foot" artillery, the main artillery arm of the field army, using either guns, howitzers, or mortars. In World War II this branch again started using rockets and later surface to surface missiles.
Fortress or garrison artillery, operated a nation's fixed defences using guns, howitzers or mortars, either on land or coastal frontiers. Some had deployable elements to provide heavy artillery to the field army. In some nations coast defence artillery was a naval responsibility.
Mountain artillery, a few nations treated mountain artillery as a separate branch; in others it was a speciality within another artillery branch. They used light guns or howitzers, usually designed for pack-animal transport and easily broken down into small, easily handled loads.
Naval artillery, some nations carried pack artillery on some warships for use and manhandling by naval (or marine) landing parties. At times, part of a ship's armament would be unshipped and mated to makeshift carriages and limbers for actions ashore, for example during the Second Boer War; during the First World War the guns from the stricken SMS Königsberg formed the main artillery strength of the German forces in East Africa.
After World War I many nations merged these different artillery branches, in some cases keeping some as sub-branches. Naval artillery disappeared apart from that belonging to marines. However, two new branches of artillery emerged during that war and its aftermath. Both used specialised guns (and a few rockets) with direct rather than indirect fire, and in the 1950s and 1960s both started to make extensive use of missiles:
Anti-tank artillery, also under various organisational arrangements but typically either field artillery or a specialist branch and additional elements integral to infantry, etc., units. However, in most armies field and anti-aircraft artillery also had at least a secondary anti-tank role. After World War II anti-tank in Western armies became mostly the responsibility of infantry and armoured branches and ceased to be an artillery matter, with some exceptions.
Anti-aircraft artillery, under various organisational arrangements including being part of artillery, a separate corps, even a separate service or being split between army for the field and air force for home defence. In some cases infantry and the new armoured corps also operated their own integral light anti-aircraft artillery. Home defence anti-aircraft artillery often used fixed as well as mobile mountings. Some anti-aircraft guns could also be used as field or anti-tank artillery, providing they had suitable sights.
However, the general switch by artillery to indirect fire before and during World War I led to a reaction in some armies. The result was accompanying or infantry guns. These were usually small, short-range guns that could be easily man-handled and were used mostly for direct fire, though some could use indirect fire. Some were operated by the artillery branch but under command of the supported unit. In World War II they were joined by self-propelled assault guns, although other armies adopted infantry or close-support tanks in armoured branch units for the same purpose; subsequently tanks generally took on the accompanying role.
Equipment types
The three main types of artillery "gun" are field guns, howitzers, and mortars. During the 20th century, guns and howitzers steadily merged in artillery use, making the distinction between the terms somewhat meaningless. By the end of the 20th century, true guns with calibers larger than about 60 mm had become very rare in artillery use, the main users being tanks, ships, and a few residual anti-aircraft and coastal guns. The term "cannon" is a United States generic term that includes guns, howitzers, and mortars; it is not used in other English-speaking armies.
The traditional definitions differentiated between guns and howitzers in terms of maximum elevation (well less than 45° as opposed to close to or greater than 45°), number of charges (one or more than one charge), and having higher or lower muzzle velocity, sometimes indicated by barrel length. These three criteria give eight possible combinations, of which guns and howitzers are but two. However, modern "howitzers" have higher velocities and longer barrels than the equivalent "guns" of the first half of the 20th century.
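The three binary criteria can be enumerated directly. A minimal Python sketch (the labels paraphrase the text; real equipment is less tidy) shows that "gun" and "howitzer" occupy only two of the eight combinations:

```python
from itertools import product

# Each traditional criterion reduced to a binary choice (paraphrased labels).
ELEVATION = ("well below 45 deg", "close to or above 45 deg")
CHARGES = ("single charge", "multiple charges")
VELOCITY = ("high muzzle velocity", "lower muzzle velocity")

def classify(elevation, charges, velocity):
    """Return the traditional name for a combination, if it has one."""
    combo = (elevation, charges, velocity)
    if combo == ("well below 45 deg", "single charge", "high muzzle velocity"):
        return "gun"
    if combo == ("close to or above 45 deg", "multiple charges",
                 "lower muzzle velocity"):
        return "howitzer"
    return "intermediate"  # the territory of terms like "gun howitzer"

all_combos = list(product(ELEVATION, CHARGES, VELOCITY))
named = [classify(*c) for c in all_combos]
```

The six "intermediate" combinations are exactly the cases the next paragraph describes, which most armies folded into a widened definition of "gun" or "howitzer".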
True guns are characterized by long range, having a maximum elevation significantly less than 45°, a high muzzle velocity and hence a relatively long barrel, smooth bore (no rifling) and a single charge. The latter often led to fixed ammunition where the projectile is locked to the cartridge case. There is no generally accepted minimum muzzle velocity or barrel length associated with a gun.
Howitzers can fire at maximum elevations at least close to 45°; elevations up to about 70° are normal for modern howitzers. Howitzers also have a choice of charges, meaning that the same elevation angle of fire will achieve a different range depending on the charge used. They have rifled bores, lower muzzle velocities and shorter barrels than equivalent guns. All this means they can deliver fire with a steep angle of descent. Because of their multi-charge capability, their ammunition is mostly separate loading (the projectile and propellant are loaded separately).
That leaves six combinations of the three criteria, some of which have been termed gun howitzers. The term was first used in the 1930s when howitzers with relatively high maximum muzzle velocities were introduced; it never became widely accepted, most armies electing to widen the definition of "gun" or "howitzer". By the 1960s, most equipment had maximum elevations up to about 70°, was multi-charge, and had quite high maximum muzzle velocities and relatively long barrels.
Mortars are simpler. The modern mortar originated in World War I and there were several patterns. After that war, most mortars settled on the Stokes pattern, characterized by a short barrel, smooth bore, low muzzle velocity, elevation angle of firing generally greater than 45°, and a very simple and light mounting using a "baseplate" on the ground. The projectile with its integral propelling charge was dropped down the barrel from the muzzle to hit a fixed firing pin. Since that time, a few mortars have become rifled and adopted breech loading.
There are other recognized typifying characteristics for artillery. One such characteristic is the type of obturation used to seal the chamber and prevent gases escaping through the breech. This may use a metal cartridge case that also holds the propelling charge, a configuration called "QF" or "quickfiring" by some nations. The alternative does not use a metal cartridge case, the propellant being merely bagged or in combustible cases with the breech itself providing all the sealing. This is called "BL" or "breech loading" by some nations.
A second characteristic is the form of propulsion. Modern equipment can either be towed or self-propelled (SP). A towed gun fires from the ground and any inherent protection is limited to a gun shield. Towing by horse teams lasted throughout World War II in some armies, but others were fully mechanized with wheeled or tracked gun towing vehicles by the outbreak of that war. The size of a towing vehicle depends on the weight of the equipment and the amount of ammunition it has to carry.
A variation of towed is portee, where the vehicle carries the gun which is dismounted for firing. Mortars are often carried this way. A mortar is sometimes carried in an armored vehicle and can either fire from it or be dismounted to fire from the ground. Since the early 1960s it has been possible to carry lighter towed guns and most mortars by helicopter. Even before that, they were parachuted or landed by glider from the time of the first airborne trials in the USSR in the 1930s.
In SP equipment, the gun is an integral part of the vehicle that carries it. SPs first appeared during World War I, but did not really develop until World War II. They are mostly tracked vehicles, but wheeled SPs started to appear in the 1970s. Some SPs have no armor and carry few or no other weapons and ammunition. Armored SPs usually carry a useful ammunition load. Early armored SPs were mostly a "casemate" configuration, in essence an open top armored box offering only limited traverse. However, most modern armored SPs have a fully enclosed armored turret, usually giving full traverse for the gun. Many SPs cannot fire without deploying stabilizers or spades, sometimes hydraulic. A few SPs are designed so that the recoil forces of the gun are transferred directly onto the ground through a baseplate. A few towed guns have been given limited self-propulsion by means of an auxiliary engine.
Two other forms of tactical mobility were used in the first half of the 20th century: railways, and transporting the equipment by road as two or three separate loads, with disassembly and re-assembly at the beginning and end of the journey. Railway artillery took two forms: railway mountings for heavy and super-heavy guns and howitzers, and armored trains as "fighting vehicles" armed with light artillery in a direct fire role. Disassembled transport was also used with heavy and super-heavy weapons and lasted into the 1950s.
Caliber categories
A third form of artillery typing is to classify it as "light", "medium", "heavy" and various other terms. This appears to have been introduced in World War I, which spawned a very wide array of artillery in all sorts of sizes, so a simple categorical system was needed. Some armies defined these categories by bands of calibers, with different bands for different types of weapons: field guns, mortars, anti-aircraft guns and coastal guns.
Modern operations
Artillery is used in a variety of roles depending on its type and caliber. The general role of artillery is to provide fire support—"the application of fire, coordinated with the manoeuvre of forces to destroy, neutralize or suppress the enemy". This NATO definition makes artillery a supporting arm, although not all NATO armies agree with this logic. The italicised terms are NATO's.
Unlike rockets, guns (or howitzers as some armies still call them) and mortars are suitable for delivering close supporting fire. However, they are all suitable for providing deep supporting fire although the limited range of many mortars tends to exclude them from the role. Their control arrangements and limited range also mean that mortars are most suited to direct supporting fire. Guns are used either for this or general supporting fire while rockets are mostly used for the latter. However, lighter rockets may be used for direct fire support. These rules of thumb apply to NATO armies.
Modern mortars, because of their lighter weight and simpler, more transportable design, are usually an integral part of infantry and, in some armies, armour units. This means they generally do not have to concentrate their fire so their shorter range is not a disadvantage. Some armies also consider infantry operated mortars to be more responsive than artillery, but this is a function of the control arrangements and not the case in all armies. However, mortars have always been used by artillery units and remain with them in many armies, including a few in NATO.
In NATO armies artillery is usually assigned a tactical mission that establishes its relationship and responsibilities to the formation or units it is assigned to. It seems that not all NATO nations use the terms and outside NATO others are probably used. The standard terms are: direct support, general support, general support reinforcing and reinforcing. These tactical missions are in the context of the command authority: operational command, operational control, tactical command or tactical control.
In NATO direct support generally means that the directly supporting artillery unit provides observers and liaison to the manoeuvre troops being supported, typically an artillery battalion or equivalent is assigned to a brigade and its batteries to the brigade's battalions. However, some armies achieve this by placing the assigned artillery units under command of the directly supported formation. Nevertheless, the batteries' fire can be concentrated onto a single target, as can the fire of units in range and with the other tactical missions.
Application of fire
There are several dimensions to this subject. The first is the notion that fire may be against an opportunity target or may be arranged; if the latter, it may be either on-call or scheduled. Arranged targets may be part of a fire plan. Fire may be either observed or unobserved: if the former, it may be adjusted; if the latter, it has to be predicted. Observation of adjusted fire may be directly by a forward observer or indirectly via some other target acquisition system.
NATO also recognises several different types of fire support for tactical purposes:
Counterbattery fire: delivered for the purpose of destroying or neutralizing the enemy's fire support system.
Counterpreparation fire: intensive prearranged fire delivered when the imminence of the enemy attack is discovered.
Covering fire: used to protect troops when they are within range of enemy small arms.
Defensive fire: delivered by supporting units to assist and protect a unit engaged in a defensive action.
Final protective fire: an immediately available prearranged barrier of fire designed to impede enemy movement across defensive lines or areas.
Harassing fire: a random number of shells are fired at random intervals, without any pattern to it that the enemy can predict. This process is designed to hinder enemy forces' movement, and, by the constantly imposed stress, threat of losses and inability of enemy forces to relax or sleep, lowers their morale.
Interdiction fire: placed on an area or point to prevent the enemy from using the area or point.
Preparation fire: delivered before an attack to weaken the enemy position.
These purposes have existed for most of the 20th century, although their definitions have evolved and will continue to do so; the lack of suppression in the counterbattery definition is an omission. Broadly they can be defined as either:
Deep supporting fire: directed at objectives not in the immediate vicinity of own force, for neutralizing or destroying enemy reserves and weapons, and interfering with enemy command, supply, communications and observation; or
Close supporting fire: placed on enemy troops, weapons or positions which, because of their proximity present the most immediate and serious threat to the supported unit.
Two other NATO terms also need definition:
Neutralization fire: delivered to render a target temporarily ineffective or unusable; and
Suppression fire: that degrades the performance of a target below the level needed to fulfill its mission. Suppression is usually only effective for the duration of the fire.
The tactical purposes also include various "mission verbs", a rapidly expanding subject with the modern concept of "effects based operations".
Targeting is the process of selecting targets and matching the appropriate response to them, taking account of operational requirements and capabilities. It requires consideration of the type of fire support required and the extent of coordination with the supported arm. It involves decisions about:
what effects are required, for example, neutralization or suppression;
the proximity of and risks to own troops or non-combatants;
what types of munitions, including their fuzing, are to be used and in what quantities;
when the targets should be attacked and possibly for how long;
what methods should be used, for example, converged or distributed, whether adjustment is permissible or surprise essential, the need for special procedures such as precision or danger close;
how many fire units are needed and which ones they should be from those that are available (in range, with the required munitions type and quantity, not allotted to another target, have the most suitable line of fire if there is a risk to own troops or non-combatants);
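The fire-unit selection in the last item amounts to filtering candidates against constraints. This Python sketch is purely illustrative (the class, field names and units are invented); it checks the conditions the text lists: not allotted to another target, in range, and holding enough of the required munition.

```python
from dataclasses import dataclass
from math import hypot

@dataclass
class FireUnit:
    name: str
    x: float            # grid position, metres (illustrative)
    y: float
    max_range: float    # metres
    munitions: dict     # munition type -> rounds on hand
    allotted: bool = False

def eligible_units(units, target_xy, munition, rounds_needed):
    """Return the fire units that satisfy the basic availability
    constraints for a target (a deliberately simplified filter)."""
    tx, ty = target_xy
    return [
        u for u in units
        if not u.allotted
        and hypot(tx - u.x, ty - u.y) <= u.max_range
        and u.munitions.get(munition, 0) >= rounds_needed
    ]
```

A real tactical fire control system would also weigh line of fire relative to own troops and non-combatants, as the text notes, which a position-and-range check alone cannot capture.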
The targeting process is the key aspect of tactical fire control. Depending on the circumstances and national procedures, it may all be undertaken in one place or may be distributed. In armies practicing control from the front, most of the process may be undertaken by a forward observer or other target acquirer. This is particularly the case for a smaller target requiring only a few fire units. The extent to which the process is formal or informal and makes use of computer-based systems, documented norms or experience and judgement also varies widely between armies and with other circumstances.
Surprise may be essential or irrelevant. It depends on what effects are required and whether or not the target is likely to move or quickly improve its protective posture. During World War II, UK researchers concluded that for impact-fuzed munitions the relative risks were as follows:
men standing – 1
men lying – 1/3
men firing from trenches – 1/15–1/50
men crouching in trenches – 1/25–1/100
Airburst munitions significantly increase the relative risk for lying men, etc. Historically most casualties occur in the first 10–15 seconds of fire, i.e. the time needed to react and improve protective posture; however, this is less relevant if airburst is used.
There are several ways of making best use of this brief window of maximum vulnerability:
ordering the guns to fire together, either by executive order or by a "fire at" time. The disadvantage is that if the fire is concentrated from many dispersed fire units then there will be different times of flight and the first rounds will be spread in time. To some extent a large concentration offsets the problem, because it may mean that only one round is required from each gun and most of these could arrive in the 15-second window.
burst fire, a rate of fire to deliver three rounds from each gun within 10 or 15 seconds; this reduces the number of guns, and hence fire units, needed, which means they may be less dispersed and have less variation in their times of flight. Smaller caliber guns, such as 105 mm, have always been able to deliver three rounds in 15 seconds; larger calibers firing fixed rounds could also do it, but it was not until the 1970s that a multi-charge 155 mm howitzer, FH-70, first gained the capability.
multiple round simultaneous impact (MRSI), where a single weapon or multiple individual weapons fire multiple rounds at differing trajectories so that all rounds arrive on target at the same time.
time on target, fire units fire at the nominated impact time less their time of flight; this works well with prearranged scheduled fire, but is less satisfactory for opportunity targets because it means delaying the delivery of fire by selecting a 'safe' time that all or most fire units can achieve. It can be used with both the previous two methods.
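The time-on-target arithmetic above is simple: each unit fires at the common impact time minus its own time of flight. A minimal Python sketch (unit names and flight times are invented):

```python
def fire_times(times_of_flight, time_on_target=None):
    """Given each fire unit's time of flight to the target in seconds,
    return the moment (seconds from now) each should fire so that all
    rounds land together. With no time on target given, use the
    earliest achievable one: the longest time of flight, so the
    slowest unit fires immediately. This is the 'safe' delay the text
    describes for opportunity targets."""
    if time_on_target is None:
        time_on_target = max(times_of_flight.values())
    return {unit: time_on_target - tof
            for unit, tof in times_of_flight.items()}
```

For three dispersed units with flights of 20, 28 and 35 seconds, the slowest fires at once while the others hold for 15 and 7 seconds, and all rounds arrive 35 seconds after the order.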
Counter-battery fire
Modern counter-battery fire developed in World War I, with the objective of defeating the enemy's artillery. Typically such fire was used to suppress enemy batteries when they were or were about to interfere with the activities of friendly forces (such as to prevent enemy defensive artillery fire against an impending attack) or to systematically destroy enemy guns. In World War I the latter required air observation. The first indirect counter-battery fire was in May 1900 by an observer in a balloon.
Enemy artillery can be detected in two ways: by direct observation of the guns from the air or by ground observers (including specialist reconnaissance), or from their firing signatures. The latter includes radars tracking the shells in flight to determine their point of origin, sound ranging, which detects guns firing and resects their position from pairs of microphones, and cross-observation of gun flashes by human observers or opto-electronic devices, although the widespread adoption of 'flashless' propellant limited the effectiveness of flash spotting.
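Sound ranging reduces, in essence, to comparing the measured differences in a report's arrival time across microphone pairs with the differences predicted for a candidate gun position. The toy sketch below resects a gun position this way; the microphone layout, gun position and constant speed of sound are illustrative assumptions, and real systems correct for wind and temperature and use far better solvers than a grid search.

```python
import itertools
import math

SPEED_OF_SOUND = 340.0  # m/s, assumed constant for this sketch

def locate_gun(mics, arrival_times, search_area, step=50.0):
    """Grid-search the point whose predicted arrival-time differences
    best match those measured across every microphone pair."""
    (xmin, xmax), (ymin, ymax) = search_area
    best, best_err = None, float("inf")
    x = xmin
    while x <= xmax:
        y = ymin
        while y <= ymax:
            err = 0.0
            for i, j in itertools.combinations(range(len(mics)), 2):
                di = math.dist((x, y), mics[i]) / SPEED_OF_SOUND
                dj = math.dist((x, y), mics[j]) / SPEED_OF_SOUND
                # predicted vs measured arrival-time difference for this pair
                err += ((di - dj) - (arrival_times[i] - arrival_times[j])) ** 2
            if err < best_err:
                best, best_err = (x, y), err
            y += step
        x += step
    return best

# Microphone base laid out along a baseline; a hypothetical gun at (2000, 5000).
mics = [(0.0, 0.0), (1000.0, 0.0), (2000.0, 0.0), (3000.0, 0.0)]
gun = (2000.0, 5000.0)
times = [math.dist(gun, m) / SPEED_OF_SOUND for m in mics]
print(locate_gun(mics, times, ((0, 4000), (0, 8000))))
```

Only time *differences* are used, so the absolute moment of firing does not need to be known — which is exactly why the technique works on a hostile gun.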
Once hostile batteries have been detected they may be engaged immediately by friendly artillery or later at an optimum time, depending on the tactical situation and the counter-battery policy. Air strike is another option. In some situations the task is to locate all active enemy batteries for attack by counter-battery fire at the appropriate moment, in accordance with a plan developed by artillery intelligence staff. In other situations counter-battery fire may occur whenever a battery is located with sufficient accuracy.
Modern counter-battery target acquisition uses unmanned aircraft, counter-battery radar, ground reconnaissance and sound-ranging. Counter-battery fire may be adjusted by some of the systems, for example the operator of an unmanned aircraft can 'follow' a battery if it moves. Defensive measures by batteries include frequently changing position or constructing defensive earthworks, the tunnels used by North Korea being an extreme example. Counter-measures include air defence against aircraft and attacking counter-battery radars physically and electronically.
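A counter-battery radar works by tracking a shell over a short arc of its flight and extrapolating the trajectory back to its origin. A minimal sketch of the idea, assuming drag-free two-dimensional ballistics and a launch at ground level (real radars fit far richer ballistic models to many track points):

```python
import math

G = 9.81  # m/s^2; drag is ignored, so this is only a sketch of the idea

def backtrack_origin(t1, p1, t2, p2):
    """Given two radar fixes (time, (x, y)) on a shell in flight, estimate
    the x-coordinate it was fired from, assuming launch at ground level."""
    dt = t2 - t1
    vx = (p2[0] - p1[0]) / dt
    # vertical velocity at the first fix, recovered under constant gravity
    vy = (p2[1] - p1[1] + 0.5 * G * dt * dt) / dt
    # time tau before the first fix at which the shell was at ground level:
    # solve 0 = y1 - vy*tau - 0.5*G*tau^2 for the positive root
    y1 = p1[1]
    tau = (-vy + math.sqrt(vy * vy + 2.0 * G * y1)) / G
    return p1[0] - vx * tau

# Synthetic track: hypothetical gun at x = 0 firing at 300 m/s and 45 degrees.
v0 = 300.0
vx0 = vy0 = v0 * math.sin(math.radians(45))
fix = lambda t: (t, (vx0 * t, vy0 * t - 0.5 * G * t * t))
print(backtrack_origin(*fix(4.0), *fix(6.0)))  # close to 0.0, the gun's position
```

The same extrapolation run forward rather than backward predicts the impact point, which is how such radars also support registration of friendly fire.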
Field artillery team
'Field Artillery Team' is a US term, and the following description and terminology apply to the US; other armies are broadly similar but differ in significant details. Modern field artillery (post–World War I) has three distinct parts: the Forward Observer (FO), the Fire Direction Center (FDC) and the guns themselves. The forward observer observes the target using tools such as binoculars, laser rangefinders and designators, and calls back fire missions on his radio, or relays the data through a portable computer via an encrypted digital radio connection protected from jamming by computerized frequency hopping. A lesser-known part of the team is the FAS, or Field Artillery Survey team, which sets up the "Gun Line" for the cannons. Today most artillery battalions use an "Aiming Circle", which allows for faster setup and more mobility. FAS teams are still used for checks-and-balances purposes, and if a gun battery has issues with the "Aiming Circle" an FAS team will do it for them.
The FO can communicate directly with the battery FDC, of which there is one for each battery of 4–8 guns. Otherwise the several FOs communicate with a higher FDC, such as at battalion level, and the higher FDC prioritizes the targets and allocates fires to individual batteries as needed to engage the targets spotted by the FOs or to perform preplanned fires.
The Battery FDC computes the firing data (the ammunition to be used, the powder charge, fuse settings, the direction to the target, the quadrant elevation needed to reach the target, which gun will fire any rounds needed for adjusting on the target, and the number of rounds to be fired on the target by each gun once the target has been accurately located) and sends it to the guns. Traditionally this data is relayed via radio or wire communications as a warning order to the guns, followed by orders specifying the type of ammunition and fuse setting, the direction and elevation needed to reach the target, and the method of adjustment or orders for fire for effect (FFE). In more advanced artillery units, this data is relayed through a digital radio link.
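Conceptually, the FDC's core computation is an inversion of the guns' ballistics: given a target range, find a charge and a quadrant elevation that reach it. The sketch below is a toy stand-in using vacuum ballistics and invented muzzle velocities, not any real firing table:

```python
import math

G = 9.81  # m/s^2
# Hypothetical muzzle velocities per charge, m/s (not from any real firing table)
CHARGES = {1: 240.0, 2: 320.0, 3: 450.0, 4: 560.0}

def firing_data(range_m):
    """Pick the smallest charge that reaches the target and return
    (charge, quadrant elevation in mils), using vacuum ballistics."""
    for charge, v in sorted(CHARGES.items()):
        if range_m <= v * v / G:  # maximum vacuum range occurs at 45 degrees
            theta = 0.5 * math.asin(G * range_m / (v * v))  # low-angle solution
            return charge, round(math.degrees(theta) * 6400 / 360)  # deg -> mils
    raise ValueError("target out of range for all charges")

print(firing_data(8000.0))  # smallest charge that reaches 8 km, and its QE
```

Real firing tables account for drag, projectile weight, propellant temperature and meteorological data, which is why the FDC also consumes the meteorological analysis described below.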
Other parts of the field artillery team include meteorological analysis to determine the temperature, humidity and pressure of the air, and the wind direction and speed at different altitudes. Radar is also used, both to determine the location of enemy artillery and mortar batteries and to determine the precise strike points of rounds fired by the battery; comparing those points with the expected ones allows a registration to be computed, so that future rounds can be fired with much greater accuracy.
Time on target
A technique called time on target (TOT) was developed by the British Army in North Africa at the end of 1941 and early 1942, particularly for counter-battery fire and other concentrations, and it proved very popular. It relied on BBC time signals to enable officers to synchronize their watches to the second, which avoided the need to use military radio networks (with the possibility of losing surprise) and the need for field telephone networks in the desert. With this technique the time of flight from each fire unit (battery or troop) to the target is taken from the range or firing tables, or from the computer, and each engaging fire unit subtracts its time of flight from the TOT to determine its time to fire. An executive order to fire is given to all guns in the fire unit at the correct moment. When each fire unit fires at its individual firing time, all the opening rounds reach the target area almost simultaneously. This is especially effective when combined with techniques that allow fire for effect to be delivered without preliminary adjusting fire.
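The timing arithmetic described above is simple: each fire unit's firing time is the TOT minus its own time of flight. A minimal sketch, with invented unit names and flight times:

```python
# Flight times (seconds) are invented for illustration; a real fire unit
# takes them from firing tables or a ballistic computer.
flight_times = {"A Battery": 23.4, "B Battery": 31.0, "C Troop": 27.8}

def fire_schedule(tot, flights):
    """Each unit fires at the time on target minus its own time of flight,
    so every opening round arrives at the target simultaneously."""
    return {unit: tot - tof for unit, tof in flights.items()}

tot = 120.0  # common impact time, seconds after some reference instant
for unit, fire_time in sorted(fire_schedule(tot, flight_times).items()):
    print(f"{unit}: fire at +{fire_time:.1f} s")
```

The unit with the longest time of flight fires first; synchronized watches are what make the shared time reference possible without radio traffic.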
Multiple round simultaneous impact
Multiple round simultaneous impact (MRSI) is a modern version of the earlier time-on-target concept. In MRSI a single gun fires multiple shells so that all arrive at the same target simultaneously. This is possible because there is more than one trajectory along which a round can fly to any given target: typically one is below 45 degrees from horizontal and the other is above it, and by using different-sized propellant charges with each shell it is possible to exploit more than two trajectories. Because the higher trajectories cause the shells to arc higher into the air, they take longer to reach the target. If shells are fired on higher trajectories in the initial volleys (starting with the shell with the most propellant and working down) and later volleys are fired on the lower trajectories, then with the correct timing all the shells will arrive at the same target simultaneously. This is useful because many more shells can land on the target with no warning: with traditional methods of firing, the target area may have time (however long it takes to reload and re-fire the guns) to take cover between volleys. However, guns capable of burst fire can deliver multiple rounds in a few seconds if they use the same firing data for each, and if guns in more than one location are firing on one target they can use time-on-target procedures so that all their shells arrive at the same time.
MRSI has a few prerequisites. The first is guns with a high rate of fire. The second is the ability to use different-sized propellant charges. The third is a fire control computer that can compute MRSI volleys and produce the firing data, send it to each gun, and present it to each gun commander in the correct order. The number of rounds that can be delivered in MRSI depends primarily on the range to the target and the rate of fire. To allow the most shells to reach the target, the target has to be within range of the lowest propellant charge.
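The scheduling can be sketched with drag-free ballistics: for each charge, take the high-angle solution to the target range, then fire the longest-flying (largest-charge) round first and delay the others so all impact together. The muzzle velocities below are invented for illustration:

```python
import math

G = 9.81  # m/s^2; vacuum ballistics, so trajectories are only illustrative

def mrsi_schedule(range_m, muzzle_velocities):
    """For one gun firing several charges at one target: take each charge's
    high-angle trajectory, then delay the shorter flights so that every
    round impacts at the same instant."""
    shots = []
    for v in muzzle_velocities:
        s = G * range_m / (v * v)
        if s > 1.0:
            continue                               # charge cannot reach the target
        theta = 0.5 * (math.pi - math.asin(s))     # high-angle solution
        tof = 2.0 * v * math.sin(theta) / G        # time of flight
        shots.append((v, math.degrees(theta), tof))
    if not shots:
        raise ValueError("no charge reaches the target")
    shots.sort(key=lambda shot: -shot[2])          # longest flight fires first
    t_impact = shots[0][2]
    return [(v, round(el, 1), round(t_impact - tof, 2)) for v, el, tof in shots]

# Invented muzzle velocities for four charges, target at 10 km
for v, elevation, delay in mrsi_schedule(10_000.0, [380.0, 420.0, 470.0, 520.0]):
    print(f"charge {v:.0f} m/s: elevation {elevation} deg, fire at +{delay} s")
```

In this high-angle regime the largest charge flies highest and longest, which matches the text's "most propellant first" ordering; a real fire control computer would also mix in the low-angle solutions and correct for drag and meteorology.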
Examples of guns with a rate of fire that makes them suitable for MRSI include the UK's AS-90, South Africa's Denel G6-52 (which can land six rounds simultaneously at targets at least away), Germany's Panzerhaubitze 2000 (which can land five rounds simultaneously at targets at least away), Slovakia's 155 mm SpGH ZUZANA 2, and South Korea's K9 Thunder.
The Archer project (developed by BAE Systems Bofors in Sweden) is a 155 mm howitzer on a wheeled chassis which is claimed to be able to deliver up to six shells on target simultaneously from the same gun. The 120 mm twin-barrel AMOS mortar system, jointly developed by Hägglunds (Sweden) and Patria (Finland), is capable of 7 + 7 shell MRSI. The United States' Crusader program (now cancelled) was slated to have MRSI capability. It is unclear how many fire control computers have the necessary capabilities.
Two-round MRSI firings were a popular artillery demonstration in the 1960s, where well trained detachments could show off their skills for spectators.
Air burst
The destructiveness of artillery bombardments can be enhanced when some or all of the shells are set for airburst, meaning that they explode in the air above the target instead of on impact. This can be accomplished either through time fuzes or proximity fuzes. Time fuzes use a precise timer to detonate the shell after a preset delay. This technique is tricky: slight variations in the functioning of the fuze can cause the shell to explode too high and be ineffective, or to strike the ground instead of exploding above it. Since December 1944 (the Battle of the Bulge), proximity-fuzed artillery shells have been available that take the guesswork out of this process. These employ a miniature, low-powered radar transmitter in the fuze to detect the ground and explode the shell at a predetermined height above it. The return of the weak radar signal completes an electrical circuit in the fuze, which detonates the shell. The proximity fuze itself was developed by the British to increase the effectiveness of anti-aircraft warfare.
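A rough bit of arithmetic shows why time fuzes are tricky: the burst-height error is approximately the shell's descent speed multiplied by the fuze timing error, so even a small timing scatter swamps a low intended height of burst. All figures below are assumed for illustration, not taken from any firing table:

```python
# Why time fuzes are "tricky": near the end of its arc a shell may be
# descending at hundreds of m/s, so small fuze timing scatter becomes
# large burst-height scatter. All values are illustrative assumptions.
descent_speed = 300.0   # m/s, terminal descent speed (assumed)
intended_hob = 7.0      # m, intended height of burst (assumed)
fuze_error = 0.05       # s, fuze timing scatter (assumed)

height_error = descent_speed * fuze_error  # m
print(f"±{fuze_error} s of fuze scatter -> ±{height_error:.0f} m of burst height")
# With a 7 m intended height, scatter of this size means many rounds burst
# far too high or hit the ground first — the problem proximity fuzes solved.
```

A proximity fuze sidesteps this entirely by measuring height directly rather than inferring it from elapsed time.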
This is a very effective tactic against infantry and light vehicles, because it scatters the fragmentation of the shell over a larger area and prevents it from being blocked by terrain or entrenchments that do not include some form of robust overhead cover. Combined with TOT or MRSI tactics that give no warning of the incoming rounds, these rounds are especially devastating because many enemy soldiers are likely to be caught in the open; even more so if the attack is launched against an assembly area or troops moving in the open rather than a unit in an entrenched tactical position.
Use in monuments
Numerous war memorials around the world incorporate an artillery piece that was used in the war or battle commemorated.
The Airbus A300 is Airbus' first production aircraft and the world's first twin-engine, double-aisle (wide-body) airliner. It was developed by Airbus Industrie GIE, now merged into Airbus SE, and manufactured from 1971 to 2007.
In September 1967, aircraft manufacturers in France, West Germany and the United Kingdom signed an initial memorandum of understanding to collaborate to develop an innovative large airliner. The French and West Germans reached a firm agreement on 29 May 1969, after the British withdrew from the project on 10 April 1969. A new collaborative aerospace company, Airbus Industrie GIE, was formally created on 18 December 1970 to develop and produce it. The A300 prototype first flew on 28 October 1972.
The first twin-engine widebody airliner, the A300 typically seats 247 passengers in two classes over a range of 5,375 to 7,500 km (2,900 to 4,050 nmi).
Initial variants are powered by General Electric CF6-50 or Pratt & Whitney JT9D turbofans and have a three-crew flight deck. The improved A300-600 has a two-crew cockpit and updated CF6-80C2 or PW4000 engines; it made its first flight on 8 July 1983 and entered service later that year. The A300 is the basis of the smaller A310 (first flown in 1982) and was adapted into a freighter version. Its cross-section was retained for the larger four-engined A340 (1991) and the larger twin-engined A330 (1992). It is also the basis for the oversize Beluga transport (1994). Unlike most Airbus products, it has a yoke and does not use a fly-by-wire system.
Launch customer Air France introduced the type on 23 May 1974.
After limited demand initially, sales took off as the type was proven in early service, beginning three decades of steady orders. It has a similar capacity to the Boeing 767-300, introduced in 1986, but lacked the 767-300ER range. During the 1990s, the A300 became popular with cargo aircraft operators, as both passenger airliner conversions and as original builds. Production ceased in July 2007 after 561 deliveries.
197 A300 family aircraft remain in commercial service.
Development
Origins | Airbus A300 | Wikipedia | 480 | 2524 | https://en.wikipedia.org/wiki/Airbus%20A300 | Technology | Specific aircraft_2 | null |
During the 1960s, European aircraft manufacturers such as Hawker Siddeley and the British Aircraft Corporation, based in the UK, and Sud Aviation of France had ambitions to build a new 200-seat airliner for the growing civil aviation market. While studies were performed and considered, such as a stretched twin-engine variant of the Hawker Siddeley Trident and an expanded development of the British Aircraft Corporation (BAC) One-Eleven, designated the BAC Two-Eleven, it was recognized that if each of the European manufacturers were to launch similar aircraft into the market at the same time, none would achieve the sales volume needed to make them viable. In 1965, a British government study, known as the Plowden Report, had found British aircraft production costs to be between 10% and 20% higher than American counterparts due to shorter production runs, which were in part due to the fractured European market. To overcome this factor, the report recommended the pursuit of multinational collaborative projects between the region's leading aircraft manufacturers.
European manufacturers were keen to explore prospective programmes; the proposed 260-seat wide-body HBN 100 between Hawker Siddeley, Nord Aviation, and Breguet Aviation being one such example. National governments were also keen to support such efforts amid a belief that American manufacturers could dominate the European Economic Community; in particular, Germany had ambitions for a multinational airliner project to invigorate its aircraft industry, which had declined considerably following the Second World War. During the mid-1960s, both Air France and American Airlines had expressed interest in a short-haul twin-engine wide-body aircraft, indicating a market demand for such an aircraft to be produced. In July 1967, during a high-profile meeting between French, German, and British ministers, an agreement was made for greater cooperation between European nations in the field of aviation technology, and "for the joint development and production of an airbus". The word airbus at this point was a generic aviation term for a larger commercial aircraft, and was considered acceptable in multiple languages, including French.
Shortly after the July 1967 meeting, French engineer Roger Béteille was appointed as the technical director of what would become the A300 programme, while Henri Ziegler, chief operating officer of Sud Aviation, was appointed as the general manager of the organisation, and German politician Franz Josef Strauss became the chairman of the supervisory board. Béteille drew up an initial work-share plan for the project, under which French firms would produce the aircraft's cockpit, the control systems, and the lower-centre portion of the fuselage; Hawker Siddeley would manufacture the wings; and German companies would produce the forward, rear and upper parts of the centre fuselage sections. Additional work included the moving elements of the wings being produced in the Netherlands, and Spain producing the horizontal tailplane.
An early design goal for the A300 whose importance Béteille had stressed was the incorporation of a high level of technology, which would serve as a decisive advantage over prospective competitors. For this reason, the A300 would feature the first use of composite materials on any passenger aircraft, with the leading and trailing edges of the tail fin composed of glass-fibre reinforced plastic. Béteille opted for English as the working language for the developing aircraft, and decided against using metric instrumentation and measurements, as most airlines already had US-built aircraft. These decisions were partially influenced by feedback from various airlines, such as Air France and Lufthansa, as an emphasis had been placed on determining the specifics of what kind of aircraft potential operators were seeking. According to Airbus, this cultural approach to market research had been crucial to the company's long-term success.
Workshare and redefinition
On 26 September 1967, the French, West German and British governments signed a Memorandum of Understanding to start the development of the 300-seat Airbus A300. At this point, the A300 was only the second major joint aircraft programme in Europe, the first being the Anglo-French Concorde. Under the terms of the memorandum, the French and British were each to receive a 37.5 per cent work share on the project, while the West Germans would receive a 25 per cent share. Sud Aviation was recognized as the lead contractor for the A300, with Hawker Siddeley selected as the British partner company. At the time, the news of the announcement had been clouded by the British Government's support for the Airbus, which coincided with its refusal to back BAC's proposed competitor, the BAC 2–11, despite a preference for the latter expressed by British European Airways (BEA). Another parameter was the requirement for a new engine to be developed by Rolls-Royce to power the proposed airliner: a derivative of the in-development Rolls-Royce RB211, the triple-spool RB207.
The programme cost was US$4.6 billion in 1993 dollars.
In December 1968, the French and British partner companies (Sud Aviation and Hawker Siddeley) proposed a revised configuration, the 250-seat Airbus A250. It had been feared that the original 300-seat proposal was too large for the market, thus it had been scaled down to produce the A250. The dimensional changes involved in the shrink reduced the length and diameter of the fuselage, reducing the overall weight. For increased flexibility, the cabin floor was raised so that standard LD3 freight containers could be accommodated side-by-side, allowing more cargo to be carried. Refinements made by Hawker Siddeley to the wing's design provided greater lift and overall performance; this gave the aircraft the ability to climb faster and attain a level cruising altitude sooner than any other passenger aircraft. It was later renamed the A300B.
Perhaps the most significant change of the A300B was that it would not require new engines to be developed, being of a suitable size to be powered by Rolls-Royce's RB211, or alternatively the American Pratt & Whitney JT9D and General Electric CF6 powerplants; this switch was recognized as considerably reducing the project's development costs. To attract potential customers in the US market, it was decided that General Electric CF6-50 engines would power the A300 in place of the British RB207; these engines would be produced in co-operation with French firm Snecma. By this time, Rolls-Royce had been concentrating their efforts upon developing their RB211 turbofan engine instead and progress on the RB207's development had been slow for some time, the firm having suffered due to funding limitations, both of which had been factors in the engine switch decision.
On 10 April 1969, a few months after the decision to drop the RB207 had been announced, the British government announced that they would withdraw from the Airbus venture. In response, West Germany proposed to France that they would be willing to contribute up to 50% of the project's costs if France was prepared to do the same. Additionally, the managing director of Hawker Siddeley, Sir Arnold Alexander Hall, decided that his company would remain in the project as a favoured sub-contractor, developing and manufacturing the wings for the A300, which would prove to be an important contributor to the performance of subsequent versions. Hawker Siddeley spent £35 million of its own funds, along with a further £35 million loan from the West German government, on the machine tooling to design and produce the wings.
Programme launch | Airbus A300 | Wikipedia | 353 | 2524 | https://en.wikipedia.org/wiki/Airbus%20A300 | Technology | Specific aircraft_2 | null |
On 29 May 1969, during the Paris Air Show, French transport minister Jean Chamant and German economics minister Karl Schiller signed an agreement officially launching the Airbus A300, the world's first twin-engine widebody airliner. The intention of the project was to produce an aircraft that was smaller, lighter, and more economical than its three-engine American rivals, the McDonnell Douglas DC-10 and the Lockheed L-1011 TriStar. In order to meet Air France's demand for an aircraft larger than the 250-seat A300B, it was decided to stretch the fuselage to create a new variant, designated the A300B2, which would be offered alongside the original 250-seat A300B, henceforth referred to as the A300B1. On 3 September 1970, Air France signed a letter of intent for six A300s, marking the first order won for the new airliner.
In the aftermath of the Paris Air Show agreement, it was decided that, in order to provide effective management of responsibilities, a Groupement d'intérêt économique would be established, allowing the various partners to work together on the project while remaining separate business entities. On 18 December 1970, Airbus Industrie was formally established following an agreement between Aérospatiale (the newly merged Sud Aviation and Nord Aviation) of France and the antecedents to Deutsche Aerospace of Germany, each receiving a 50 per cent stake in the newly formed company. In 1971, the consortium was joined by a third full partner, the Spanish firm CASA, who received a 4.2 per cent stake, the other two members reducing their stakes to 47.9 per cent each. In 1979, Britain joined the Airbus consortium via British Aerospace, which Hawker Siddeley had merged into, which acquired a 20 per cent stake in Airbus Industrie with France and Germany each reducing their stakes to 37.9 per cent.
Prototype and flight testing
Airbus Industrie was initially headquartered in Paris, where design, development, flight testing, sales, marketing, and customer support activities were centred; the headquarters was relocated to Toulouse in January 1974. The final assembly line for the A300 was located adjacent to Toulouse–Blagnac International Airport. The manufacturing process necessitated transporting each aircraft section, produced by partner companies scattered across Europe, to this one location. A combination of ferries and roads was used for the assembly of the first A300; however, this was time-consuming and not viewed as ideal by Felix Kracht, Airbus Industrie's production director. Kracht's solution was to have the various A300 sections brought to Toulouse by a fleet of Boeing 377-derived Aero Spacelines Super Guppy aircraft, by which means none of the manufacturing sites was more than two hours away. Having the sections airlifted in this manner made the A300 the first airliner to use just-in-time manufacturing techniques, and allowed each company to manufacture its sections as fully equipped, ready-to-fly assemblies.
In September 1969, construction of the first prototype A300 began. On 28 September 1972, this first prototype was unveiled to the public; it conducted its maiden flight from Toulouse–Blagnac International Airport on 28 October that year. This maiden flight, which was performed a month ahead of schedule, lasted one hour and 25 minutes; the captain was Max Fischl and the first officer was Bernard Ziegler, son of Henri Ziegler. In 1972, the unit cost was US$17.5 million. On 5 February 1973, the second prototype performed its maiden flight. The flight test programme, which involved a total of four aircraft, was relatively problem-free, accumulating 1,580 flight hours throughout. In September 1973, as part of promotional efforts for the A300, the new aircraft was taken on a six-week tour of North and South America, to demonstrate it to airline executives, pilots, and would-be customers. Amongst the consequences of this expedition, it allegedly brought the A300 to the attention of Frank Borman, the CEO of Eastern Airlines, one of the "big four" U.S. airlines.
Entry into service
On 15 March 1974, type certificates were granted for the A300 from both German and French authorities, clearing the way for its entry into revenue service. On 23 May 1974, Federal Aviation Administration (FAA) certification was received. The first production model, the A300B2, entered service in 1974, followed by the A300B4 one year later. Initially, the success of the consortium was poor, in part due to the economic consequences of the 1973 oil crisis, but by 1979 there were 81 A300 passenger liners in service with 14 airlines, alongside 133 firm orders and 88 options. Ten years after the official launch of the A300, the company had achieved a 26 per cent market share in terms of dollar value, enabling Airbus to proceed with the development of its second aircraft, the Airbus A310.
Design
The Airbus A300 is a wide-body medium-to-long range airliner; it has the distinction of being the first twin-engine wide-body aircraft in the world. In 1977, the A300 became the first Extended Range Twin Operations (ETOPS)-compliant aircraft, due to its high performance and safety standards. Another world-first of the A300 is the use of composite materials on a commercial aircraft, which were used on both secondary and later primary airframe structures, decreasing overall weight and improving cost-effectiveness. Other firsts included the pioneering use of centre-of-gravity control, achieved by transferring fuel between various locations across the aircraft, and electrically signalled secondary flight controls.
The A300 is powered by a pair of underwing turbofan engines, either General Electric CF6 or Pratt & Whitney JT9D engines; the sole use of underwing engine pods allowed for any suitable turbofan engine to be more readily used. The lack of a third tail-mounted engine, as per the trijet configuration used by some competing airliners, allowed for the wings to be located further forwards and to reduce the size of the vertical stabiliser and elevator, which had the effect of increasing the aircraft's flight performance and fuel efficiency.
Airbus partners had employed the latest technology, some of which having been derived from Concorde, on the A300. According to Airbus, new technologies adopted for the airliner were selected principally for increased safety, operational capability, and profitability. Upon entry into service in 1974, the A300 was a very advanced plane, which went on to influence later airliner designs. The technological highlights include advanced wings by de Havilland (later BAE Systems) with supercritical airfoil sections for economical performance and advanced aerodynamically efficient flight control surfaces.
The circular fuselage section allows eight-abreast passenger seating and is wide enough for two LD3 cargo containers side by side. Structures are made from metal billets, reducing weight. It is the first airliner to be fitted with wind-shear protection. Its advanced autopilots are capable of flying the aircraft from climb-out to landing, and it has an electrically controlled braking system.
Later A300s incorporated other advanced features such as the Forward-Facing Crew Cockpit (FFCC), which enabled a two-pilot flight crew to fly the aircraft without a flight engineer, whose functions were automated; this two-man cockpit concept was a world first for a wide-body aircraft. Glass cockpit flight instrumentation, which used cathode-ray tube (CRT) monitors to display flight, navigation, and warning information, along with fully digital dual autopilots and digital flight control computers for controlling the spoilers, flaps, and leading-edge slats, was also adopted on later-built models. Later aircraft also made greater use of composite materials such as carbon-fibre-reinforced polymer (CFRP), in an increasing proportion of the aircraft's components, including the spoilers, rudder, air brakes, and landing gear doors. Another feature of later aircraft was the addition of wingtip fences, which improved aerodynamic performance and reduced cruise fuel consumption by about 1.5% for the A300-600.
In addition to passenger duties, the A300 became widely used by air freight operators; according to Airbus, it is the best-selling freight aircraft of all time. Various variants of the A300 were built to meet customer demands, often for diverse roles such as aerial refueling tankers, freighter models (new-build and conversions), combi aircraft, military airlifter, and VIP transport. Perhaps the most visually unique of the variants is the A300-600ST Beluga, an oversized cargo-carrying model operated by Airbus to carry aircraft sections between their manufacturing facilities. The A300 was the basis for, and retained a high level of commonality with, the second airliner produced by Airbus, the smaller Airbus A310.
Operational history
On 23 May 1974, the first A300 to enter service performed the first commercial flight of the type, flying from Paris to London, for Air France.
Immediately after the launch, sales of the A300 were weak for some years, with most orders going to airlines that had an obligation to favor the domestically made product – notably Air France and Lufthansa, the first two airlines to place orders for the type. Following the appointment of Bernard Lathière as Henri Ziegler's replacement, an aggressive sales approach was adopted. Indian Airlines was the world's first domestic airline to purchase the A300, ordering three aircraft with three options. However, between December 1975 and May 1977, there were no sales for the type. During this period a number of "whitetail" A300s – completed but unsold aircraft – were stored at Toulouse, and production fell to half an aircraft per month amid calls to pause production completely.
During the flight testing of the A300B2, Airbus held a series of talks with Korean Air on the topic of developing a longer-range version of the A300, which would become the A300B4. In September 1974, Korean Air placed an order for four A300B4s with options for two further aircraft; this sale was viewed as significant because Korean Air was the first non-European international airline to order Airbus aircraft. Airbus had viewed South-East Asia as a vital market that was ready to be opened up and believed Korean Air to be the 'key'.
Airlines operating the A300 on short-haul routes were forced to reduce frequencies to try to fill the aircraft. As a result, they lost passengers to airlines operating more frequent narrow-body flights. Eventually, Airbus had to build its own narrow-body aircraft (the A320) to compete with the Boeing 737 and McDonnell Douglas DC-9/MD-80. The saviour of the A300 was the advent of ETOPS, a revised FAA rule that allowed twin-engine jets to fly long-distance routes previously off-limits to them. This enabled Airbus to develop the aircraft as a medium/long-range airliner.
In 1977, US carrier Eastern Air Lines leased four A300s as an in-service trial. CEO Frank Borman was impressed that the A300 consumed 30% less fuel than Eastern's fleet of L-1011s, even better than expected. The A300 would replace the aging DC-9s and 727-100s, but in smaller numbers, being a twinjet sized between the TriStars and 727-200s and capable of operating from airports with short runways, with sufficient range to fly from New York City to Miami. Borman proceeded to order 23 A300s, becoming the first U.S. customer for the type. This order is often cited as the point at which Airbus came to be seen as a serious competitor to the large American aircraft manufacturers Boeing and McDonnell Douglas. Aviation author John Bowen alleged that various concessions, such as loan guarantees from European governments and compensation payments, were a factor in the decision as well. Although the A300 was initially too large for Eastern's existing routes, Airbus provided a fixed subsidy at a 57% load factor, which decreased for every percentage point above that figure. The Eastern Air Lines breakthrough was shortly followed by an order from Pan Am. From then on, the A300 family sold well, eventually reaching a total of 561 delivered aircraft.
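The sliding subsidy arrangement described above can be sketched numerically. This is an illustrative model only: the 57% break-even load factor comes from the text, but the subsidy amount and per-point reduction below are invented placeholders, not figures from the actual agreement.

```python
# Hypothetical sketch of the Eastern Air Lines sliding subsidy: full support
# at a 57% load factor, shrinking for every percentage point flown above it.
# ASSUMED_FULL_SUBSIDY and ASSUMED_STEP are invented placeholder values.
BREAK_EVEN_LOAD_FACTOR = 57.0       # percent, from the agreement as described
ASSUMED_FULL_SUBSIDY = 100_000.0    # hypothetical subsidy per period, USD
ASSUMED_STEP = 2_000.0              # hypothetical reduction per point above 57%

def subsidy(load_factor_pct: float) -> float:
    """Subsidy owed for a period at the given achieved load factor."""
    excess = max(0.0, load_factor_pct - BREAK_EVEN_LOAD_FACTOR)
    return max(0.0, ASSUMED_FULL_SUBSIDY - excess * ASSUMED_STEP)

print(subsidy(57.0))  # full support at the break-even load factor
print(subsidy(62.0))  # reduced by five points' worth of support
```

The structure shifted the load-factor risk onto Airbus: the airline paid nothing extra for flying emptier than break-even, while Airbus recovered its support as utilisation improved.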
In December 1977, Aerocondor Colombia became the first Airbus operator in Latin America, leasing one Airbus A300B4-2C, named Ciudad de Barranquilla.
During the late 1970s, Airbus adopted a so-called 'Silk Road' strategy, targeting airlines in the Far East. As a result, the aircraft found particular favor with Asian airlines, being bought by Japan Air System, Korean Air, China Eastern Airlines, Thai Airways International, Singapore Airlines, Malaysia Airlines, Philippine Airlines, Garuda Indonesia, China Airlines, Pakistan International Airlines, Indian Airlines, Trans Australia Airlines and many others. As Asia did not have restrictions similar to the FAA's 60-minute rule for twin-engine airliners that existed at the time, Asian airlines used A300s for routes across the Bay of Bengal and the South China Sea.
In 1977, the A300B4 became the first ETOPS-compliant aircraft, qualifying for extended twin-engine operations over water and providing operators with more versatility in routing. In 1982, Garuda Indonesian Airways became the first airline to fly the A300B4-200FFCC, featuring the new Forward-Facing Crew Cockpit concept; it was the world's first wide-body aircraft to be operated by a two-person cockpit crew. By 1981, Airbus was growing rapidly, with over 400 aircraft sold to over forty airlines.
In 1989, Chinese operator China Eastern Airlines received its first A300; by 2006, the airline operated around 18 A300s, making it the largest operator of both the A300 and the A310 at that time. On 31 May 2014, China Eastern officially retired the last A300-600 in its fleet, having begun drawing down the type in 2010.
From 1997 to 2014, a single A300, designated A300 Zero-G, was operated by the European Space Agency (ESA), the Centre National d'Études Spatiales (CNES) and the German Aerospace Center (DLR) as a reduced-gravity aircraft for conducting research into microgravity; the A300 is the largest aircraft ever to have been used in this capacity. A typical flight would last for two and a half hours, enabling up to 30 parabolas to be performed per flight.
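The flight-profile figures above imply a simple average cadence, sketched here as a back-of-the-envelope check:

```python
# Average parabola cadence on an A300 Zero-G research flight, using the
# figures quoted above: a 2.5-hour flight performing up to 30 parabolas.
flight_minutes = 2.5 * 60   # total flight time in minutes
parabolas = 30              # maximum parabolas per flight

# Each parabola cycle includes the pull-up, the free-fall phase, the
# recovery, and repositioning for the next manoeuvre.
print(flight_minutes / parabolas)  # 5.0 minutes per parabola cycle, on average
```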
By the 1990s, the A300 was being heavily promoted as a cargo freighter. The largest freight operator of the A300 is FedEx Express, which has 70 A300 aircraft in service as of September 2022. UPS Airlines also operates 52 freighter versions of the A300.
The final version, the A300-600R, is rated for 180-minute ETOPS. The A300 has enjoyed renewed interest in the secondhand market for conversion to freighters; large numbers were converted during the late 1990s. The freighter versions – either new-build A300-600s or converted ex-passenger A300-600s, A300B2s and B4s – account for most of the world's freighter fleet after the Boeing 747 freighter.
The A300 provided Airbus with the experience of manufacturing and selling airliners competitively. The basic fuselage of the A300 was later stretched (A330 and A340), shortened (A310), or modified into derivatives (A300-600ST Beluga Super Transporter). In 2006, the unit cost of an A300-600F was $105 million. In March 2006, Airbus announced the impending closure of the A300/A310 final assembly line, making them the first Airbus aircraft to be discontinued. The final production A300, an A300F freighter, performed its initial flight on 18 April 2007, and was delivered to FedEx Express on 12 July 2007. Airbus has announced a support package to keep A300s flying commercially.
Airbus offers the A330-200F freighter as a replacement for the A300 cargo variants.
The life of UPS's fleet of 52 A300s, delivered from 2000 to 2006, will be extended to 2035 by a flight deck upgrade based around Honeywell Primus Epic avionics: new displays and a flight management system (FMS), improved weather radar, a central maintenance system, and a new version of the current enhanced ground proximity warning system.
With light usage of only two to three cycles per day, the fleet will not reach its maximum number of airframe cycles by then.
The first modification will be made at Airbus Toulouse in 2019 and certified in 2020.
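The cycle-life claim above can be sanity-checked with rough arithmetic. The design-limit figure used below is an assumed placeholder for illustration, not an Airbus-published number; only the utilisation rate and service period come from the text.

```python
# Rough sanity check: cumulative flight cycles for a UPS A300 delivered in
# 2000 and flown until 2035 at a light utilisation of 2-3 cycles per day.
# ASSUMED_CYCLE_LIMIT is a hypothetical placeholder, not an official figure.
ASSUMED_CYCLE_LIMIT = 40_000  # assumed airframe design limit, in cycles

def cycles_accumulated(years_in_service: float, cycles_per_day: float) -> int:
    """Approximate total flight cycles over a service life."""
    return round(years_in_service * 365 * cycles_per_day)

for rate in (2, 3):
    total = cycles_accumulated(35, rate)  # 2000 to 2035 is 35 years
    status = "under" if total < ASSUMED_CYCLE_LIMIT else "over"
    print(f"{rate} cycles/day over 35 years -> {total} cycles ({status} the assumed limit)")
```

Even at the upper end of the quoted utilisation, the accumulated cycles stay modest for a 35-year span, which is consistent with the article's statement that the limit will not be reached.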
As of July 2017, there are 211 A300s in service with 22 operators, with the largest operator being FedEx Express with 68 A300-600F aircraft.
Variants
A300B1
The A300B1 was the first variant to take flight. It had a maximum takeoff weight (MTOW) of , was long and was powered by two General Electric CF6-50A engines. Only two prototypes of the variant were built before it was adapted into the A300B2, the first production variant of the airliner. The second prototype was leased to Trans European Airways in 1974.
A300B2
A300B2-100
Responding to a need for more seats from Air France, Airbus decided that the first production variant should be larger than the original prototype A300B1. The CF6-50A powered A300B2-100 was longer than the A300B1 and had an increased MTOW of , allowing for 30 additional seats and bringing the typical passenger count up to 281, with capacity for 20 LD3 containers. Two prototypes were built and the variant made its maiden flight on 28 June 1973, became certified on 15 March 1974 and entered service with Air France on 23 May 1974.
A300B2-200
For the A300B2-200, originally designated the A300B2K, Krueger flaps were introduced at the leading-edge root, the slat angles were reduced from 20 degrees to 16 degrees, and other lift-related changes were made in order to introduce a high-lift system. This was done to improve performance when operating at high-altitude airports, where the air is less dense and lift generation is reduced. The variant had an increased MTOW and was powered by CF6-50C engines; it was certified on 23 June 1976 and entered service with South African Airways in November 1976. CF6-50C1 and CF6-50C2 models were also later fitted depending on customer requirements; these were certified on 22 February 1978 and 21 February 1980 respectively.
A300B2-320
The A300B2-320 introduced the Pratt & Whitney JT9D powerplant and was powered by JT9D-59A engines. It retained the MTOW of the B2-200, was certified on 4 January 1980, and entered service with Scandinavian Airlines on 18 February 1980, with only four being produced.
A300B4
A300B4-100
The initial A300B4 variant, later named the A300B4-100, included a centre fuel tank for an increased fuel capacity of , and had an increased MTOW of . It also featured Krueger flaps and had a similar high-lift system to what was later fitted to the A300B2-200. The variant made its maiden flight on 26 December 1974, was certified on 26 March 1975, and entered service with Germanair in May 1975.
A300B4-200
The A300B4-200 had an increased MTOW of and featured an additional optional fuel tank in the rear cargo hold, which would reduce the cargo capacity by two LD3 containers. The variant was certified on 26 April 1979.
A300B4-200FFCC
The A300B4-200FFCC was an A300B4-200 fitted with a Forward-Facing Crew Cockpit, which eliminated the flight engineer's station while retaining analog flight instruments. It was introduced by Garuda Indonesian Airways in 1982.
A300-600
The A300-600, officially designated the A300B4-600, was slightly longer than the A300B2 and A300B4 variants and had increased interior space from using a rear fuselage similar to that of the Airbus A310, which allowed it to have two additional rows of seats. It was initially powered by Pratt & Whitney JT9D-7R4H1 engines, but was later fitted with General Electric CF6-80C2 engines, with Pratt & Whitney PW4156 or PW4158 engines being introduced in 1986. Other changes include an improved wing featuring a recambered trailing edge, the incorporation of simpler single-slotted Fowler flaps, the deletion of slat fences, and the removal of the outboard ailerons after they were deemed unnecessary on the A310. The variant made its first flight on 8 July 1983, was certified on 9 March 1984, and entered service in June 1984 with Saudi Arabian Airlines. A total of 313 A300-600s (all versions) have been sold. The A300-600 uses the A310's cockpit, featuring digital technology and electronic displays and eliminating the need for a flight engineer. The FAA issues a single type rating which allows operation of both the A310 and A300-600.
A300-600: (Official designation: A300B4-600) The baseline model of the −600 series.
A300-620C: (Official designation: A300C4-620) A convertible-freighter version. Four delivered between 1984 and 1985.
A300-600F: (Official designation: A300F4-600) The freighter version of the baseline −600.
A300-600R: (Official designation: A300B4-600R) The increased-range −600, achieved by an additional trim fuel tank in the tail. First delivery in 1988 to American Airlines; all A300s built since 1989 (freighters included) are −600Rs. Japan Air System (later merged into Japan Airlines) took delivery of the last new-built passenger A300, an A300-622R, in November 2002.
A300-600RC: (Official designation: A300C4-600R) The convertible-freighter version of the −600R. Two were delivered in 1999.
A300-600RF: (Official designation: A300F4-600R) The freighter version of the −600R. All A300s delivered between November 2002 and 12 July 2007 (last ever A300 delivery) were A300-600RFs.
A300B10 (A310)
Airbus identified demand for an aircraft smaller than the A300.
On 7 July 1978, the A310 (initially the A300B10) was launched with orders from Swissair and Lufthansa.
On 3 April 1982, the first prototype conducted its maiden flight and it received its type certification on 11 March 1983.
Keeping the same eight-abreast cross-section, the A310 is shorter than the initial A300 variants and has a smaller wing. The A310 introduced a two-crew glass cockpit, later adopted for the A300-600 with a common type rating. It was powered by the same General Electric CF6-80 or Pratt & Whitney JT9D, and later PW4000, turbofans. It can seat 220 passengers in two classes, or 240 in all-economy.
It has overwing exits between the two main front and rear door pairs.
In April 1983, the aircraft entered revenue service with Swissair and competed with the Boeing 767–200, introduced six months before.
Its longer range and ETOPS regulations allowed it to be operated on transatlantic flights. Until the last delivery in June 1998, 255 aircraft were produced, as it was succeeded by the larger Airbus A330-200.
It has cargo aircraft versions, and was derived into the Airbus A310 MRTT military tanker/transport.
A300-600ST
Commonly referred to as the Airbus Beluga or "Airbus Super Transporter", these five airframes are used by Airbus to ferry parts between the company's disparate manufacturing facilities, thus enabling workshare distribution. They replaced the four Aero Spacelines Super Guppys previously used by Airbus.
ICAO code: A3ST
Operators
There are 197 A300 family aircraft in commercial service.
The five largest operators were FedEx Express (70), UPS Airlines (52), European Air Transport Leipzig (23), Iran Air (11), and Mahan Air (11).
Deliveries
Data through end of December 2007.
Accidents and incidents
As of June 2021, the A300 has been involved in 77 occurrences, including 24 hull-loss accidents causing 1,133 fatalities, as well as criminal occurrences and hijackings with fatalities.
Accidents with fatalities
21 September 1987: An Egyptair Airbus A300B4-203 touched down past the runway threshold during a training flight. The right main gear hit the runway lights and the aircraft collided with an antenna and fences. No passengers were on board the plane, but 5 crew members were killed. The aircraft was written off. This was the first fatal accident of an Airbus A300.
28 September 1992: PIA Flight 268, an A300B4 crashed on approach near Kathmandu, Nepal. All 12 crew and 155 passengers died.
26 April 1994: China Airlines Flight 140 crashed at the end of runway at Nagoya, Japan, killing all 15 crew and 249 of 256 passengers on board.
26 September 1997: Garuda Indonesia Flight 152 was on approach to Polonia International Airport in Medan when it crashed into a ravine in Buah Nabar, due to ATC error and the haze covering the country, which limited visibility. All 234 passengers and crew aboard were killed in Indonesia's deadliest crash.
16 February 1998: China Airlines Flight 676 crashed into a residential area close to CKS International Airport near Taipei, Taiwan. All 196 people on board were killed, including Taiwan's central bank president. Six people on the ground were also killed.
2 February 2000: During take-off, a Lockheed C-130 Hercules owned by the Iranian Air Force lost control and veered off the runway, striking an Airbus A300B2-203 owned by Iran Air, killing eight people.
12 November 2001: American Airlines Flight 587 crashed into Belle Harbor—a neighbourhood in Queens, New York, United States—shortly after takeoff from John F. Kennedy International Airport. The vertical stabiliser ripped off the aircraft after the rudder was mishandled during wake turbulence. All 260 people on board were killed, along with 5 people on the ground. It is the second-deadliest incident involving an A300 to date and the second-deadliest aircraft incident in the United States.
14 April 2010: AeroUnion Flight 302, an A300B4-203F, crashed on a road short of the runway while attempting to land at Monterrey Airport in Mexico. Seven people (five crew members and two on the ground) were killed.
14 August 2013: UPS Flight 1354, an Airbus A300F4-622R, crashed outside the perimeter fence on approach to Birmingham–Shuttlesworth International Airport in Birmingham, Alabama, United States. Both crew members died.
Non-fatal hull losses
18 December 1983: Malaysian Airline System Flight 684, an Airbus A300B4 leased from Scandinavian Airlines System (SAS), registration OY-KAA, crashed short of the runway at Kuala Lumpur in bad weather while attempting to land on a flight from Singapore. All 247 people aboard escaped unharmed but the aircraft was destroyed in the resulting fire.
24 April 1993: an Air Inter Airbus A300B2-1C was written off after colliding with a light pole while being pushed back at Montpellier.
15 November 1993: an Indian Airlines Airbus A300, registered VT-EDV, crash-landed near Hyderabad Airport. There were no deaths, but the aircraft was written off.
10 August 1994: Korean Air Flight 2033, an Airbus A300 flying from Seoul to Jeju, approached faster than usual to avoid potential windshear. Fifty feet above the runway, the co-pilot, who was not flying the aircraft, decided that there was insufficient runway left to land and tried to perform a go-around against the captain's wishes. The aircraft touched down 1,773 meters beyond the runway threshold, could not be stopped on the remaining 1,227 meters of runway, and overran at a speed of 104 knots. After striking the airport wall and a guard post at 30 knots, the aircraft burst into flames and was incinerated. The cabin crew was credited with safely evacuating all passengers even though only half of the aircraft's emergency exits were usable.
17 October 2001: Pakistan International Airlines flight PK231, registration AP-BCJ, from Islamabad via Peshawar to Dubai veered off the side of the runway after the right hand main landing gear collapsed as it touched down. The aircraft skidded and eventually came to rest in sand 50 meters from the runway. The aircraft sustained damage to its right wing structure and its no. 2 engine, which partly broke off the wing. All 205 passengers and crew survived.
1 March 2004: Pakistan International Airlines Flight 2002 burst two tyres while taking off from King Abdulaziz International Airport. Fragments of the tyres were ingested by the engines, causing them to catch fire, and the takeoff was aborted. The fire caused substantial damage to the engine and the left wing, and the aircraft was written off. All 261 passengers and 12 crew survived.
16 November 2012: an Air Contractors Airbus A300B4-203(F), EI-EAC, operating flight QY6321 on behalf of EAT Leipzig from Leipzig (Germany) to Bratislava (Slovakia), suffered a nose wheel collapse during roll-out after landing at Bratislava's M. R. Štefánik Airport. All three crew members survived unharmed, but the aircraft was written off. As of December 2017, the aircraft was still parked in a remote area of the airport between runways 13 and 22.
12 October 2015: an Airbus A300B4-200F freighter operated by the Egyptian cargo carrier Tristar crashed in Mogadishu, Somalia. All passengers and crew members survived the crash.
1 October 2016: an Airbus A300B4, registration PR-STN, on a cargo flight between São Paulo–Guarulhos and Recife suffered a runway excursion after the aft landing gear collapsed upon touchdown.
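The runway figures quoted in the Korean Air Flight 2033 entry above are internally consistent, as simple arithmetic shows:

```python
# Cross-checking the Korean Air Flight 2033 overrun figures: the touchdown
# point beyond the threshold plus the remaining runway gives the landing
# distance available, and the quoted overrun speed can be converted to m/s.
touchdown_beyond_threshold_m = 1_773
runway_remaining_m = 1_227

runway_length_m = touchdown_beyond_threshold_m + runway_remaining_m
print(runway_length_m)  # 3000 m of landing distance available in total

# 1 knot = 0.5144 m/s
overrun_speed_ms = 104 * 0.5144
print(round(overrun_speed_ms, 1))  # ~53.5 m/s at the end of the paved surface
```

Touching down with well over half the runway behind the aircraft left far too little distance to stop from approach speed, which is why the overrun occurred at such high speed.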