Okinawans "were often surprised at the comparatively humane treatment they received from the American enemy". "Islands of Discontent: Okinawan Responses to Japanese and American Power" by Mark Selden states that the Americans "did not pursue a policy of torture, rape, and murder of civilians as Japanese military officials had warned". American Military Intelligence Corps combat translators such as Teruto Tsubota managed to convince many civilians not to kill themselves. Survivors of the mass suicides also blamed the indoctrination of the education system of the time, in which Okinawans were taught to become "more Japanese than the Japanese" and were expected to prove it.
Witnesses and historians claim that both American and Japanese soldiers raped Okinawan women during the battle. Rape by Japanese troops reportedly "became common" in June, after it became clear that the Imperial Japanese Army had been defeated. Marine Corps officials in Okinawa and Washington said that they knew of no rapes by American personnel in Okinawa at the end of the war. There are, however, numerous credible testimonies noting that a large number of rapes were committed by American forces during the battle. These include accounts of rape following the trading of sexual favors or even marriage to Americans, such as the alleged incident in the village of Katsuyama, where civilians said they had formed a vigilante group to ambush and kill three black American soldiers who they claimed frequently raped the local girls.
MEXT textbook controversy.
There is ongoing disagreement between Okinawa's local government and Japan's national government over the role of the Japanese military in civilian mass suicides during the battle. In March 2007, the national Ministry of Education, Culture, Sports, Science and Technology (MEXT) advised textbook publishers to reword descriptions that the embattled Imperial Japanese Army forced civilians to kill themselves in the war to avoid being taken prisoner. MEXT preferred descriptions that just say that civilians received hand grenades from the Japanese military. This move sparked widespread protests among Okinawans. In June 2007, the Okinawa Prefectural Assembly adopted a resolution stating, "We strongly call on the (national) government to retract the instruction and to immediately restore the description in the textbooks so the truth of the Battle of Okinawa will be handed down correctly and a tragic war will never happen again."
The Nobel Prize-winning author Kenzaburō Ōe wrote a booklet stating that the mass suicide order was given by the military during the battle. He was sued by revisionists, including a wartime commander during the battle, who disputed this and wanted to stop publication of the booklet. At a court hearing, Ōe testified: "Mass suicides were forced on Okinawa islanders under Japan's hierarchical social structure that ran through the state of Japan, the Japanese armed forces and local garrisons." In March 2008, the Osaka District Court ruled in favor of Ōe, stating, "It can be said the military was deeply involved in the mass suicides." The court recognized the military's involvement in the mass suicides and murder-suicides, citing testimony about the distribution of grenades for suicide by soldiers and the fact that mass suicides were not recorded on islands where the military was not stationed.
In 2012, Korean-Japanese director Pak Su-nam announced her work on the documentary "Nuchigafu" (Okinawan for "only if one is alive") collecting living survivors' accounts to show "the truth of history to many people", alleging that "there were two types of orders for 'honorable deaths'—one for residents to kill each other and the other for the military to kill all residents". In March 2013, Japanese textbook publisher Shimizu Shoin was permitted by MEXT to publish the statements that "Orders from Japanese soldiers led to Okinawans committing group suicide" and "The [Japanese] army caused many tragedies in Okinawa, killing local civilians and forcing them to commit mass suicide."
Aftermath.
Military historian and journalist Hanson W. Baldwin commented on the scale and ferocity of the battle, especially for American forces. According to historian George Feifer, Okinawa was the "site of the largest land-sea-air battle in history" and the battle was the "last major one before the start of the atomic age". At least 90% of the buildings on the island were destroyed, along with countless historical documents, artifacts, and cultural treasures, and the tropical landscape was turned into "a vast field of mud, lead, decay and maggots". The military value of Okinawa was significant, as it provided a fleet anchorage, troop staging areas, and airfields in proximity to Japan. After the battle, the US cleared the surrounding waters of mines in Operation Zebra, occupied Okinawa, and set up the United States Civil Administration of the Ryukyu Islands, a form of military government. In 2011, one official of the prefectural government told David Hearst of "The Guardian":
Effect on the wider war.
Because the next major event after the Battle of Okinawa was the total surrender of Japan, the battle's effect on the wider war is difficult to assess. Japan's surrender meant that the anticipated series of battles and the invasion of the Japanese homeland never occurred, and all military strategies on both sides that presupposed this apparently inevitable next development were immediately rendered moot.
Some military historians believe that the Okinawa campaign led directly to the atomic bombings of Hiroshima and Nagasaki, as a means of avoiding the planned ground invasion of the Japanese mainland. This view is explained by Victor Davis Hanson in his book "Ripples of Battle":
Meanwhile, many parties continue to debate the broader question of "why Japan surrendered", attributing the surrender to a number of possible reasons including: the atomic bombings, the Soviet invasion of Manchuria, and Japan's depleted resources.
Memorial.
In 1995, the Okinawa government erected a memorial monument named the Cornerstone of Peace in Mabuni, the site of the last fighting in southeastern Okinawa. The memorial lists all the known names of those who died in the battle, civilian and military, Japanese and foreign. As of 2024, the monument lists 242,225 names.
Modern US base.
Significant US forces remain garrisoned on Okinawa as the United States Forces Japan, which the Japanese government sees as an important guarantee of regional stability, and Kadena remains the largest US air base in Asia. Local residents have long protested against the size and presence of the base.
Battle of El Alamein
There were two Battles of El Alamein in World War II, both fought in 1942. The battles occurred during the North African campaign in Egypt, in and around an area named after a railway stop called El Alamein.
In addition, the Battle of Alam el Halfa (30 August – 5 September 1942) was fought during the same period and in the same location.
Brezhnev Doctrine
The Brezhnev Doctrine was a Soviet foreign policy that proclaimed that any threat to "socialist rule" in any state of the Soviet Bloc in Central and Eastern Europe was a threat to all of them, and therefore, it justified the intervention of fellow socialist states. It was proclaimed in order to justify the Soviet-led occupation of Czechoslovakia earlier in 1968, with the overthrow of the reformist government there. The references to "socialism" meant control by the communist parties which were loyal to the Kremlin. Soviet leader Mikhail Gorbachev repudiated the doctrine in the late 1980s, as the Kremlin accepted the peaceful overthrow of Soviet rule in all its satellite countries in Eastern Europe.
The policy was first and most clearly outlined by Sergei Kovalev in a September 26, 1968 "Pravda" article entitled "Sovereignty and the International Obligations of Socialist Countries". Leonid Brezhnev reiterated it in a speech at the Fifth Congress of the Polish United Workers' Party on November 13, 1968, which stated: "When forces that are hostile to socialism try to turn the development of some socialist country towards capitalism, it becomes not only a problem of the country concerned, but a common problem and concern of all socialist countries."
This doctrine was announced to retroactively justify the invasion of Czechoslovakia in August 1968 that ended the Prague Spring, along with earlier Soviet military interventions, such as the invasion of Hungary in 1956. These interventions were meant to put an end to liberalization efforts and uprisings that had the potential to compromise Soviet hegemony inside the Soviet Bloc, which the Soviet Union considered an essential defensive and strategic buffer in case hostilities with NATO were to break out.
In practice, the policy meant that only limited independence of the satellite states' communist parties was allowed and that none would be allowed to compromise the cohesiveness of the Eastern Bloc in any way. That is, no country could leave the Warsaw Pact or disturb a ruling communist party's monopoly on power. Implicit in this doctrine was that the leadership of the Soviet Union reserved, for itself, the power to define "socialism" and "capitalism". Following the announcement of the Brezhnev Doctrine, numerous treaties were signed between the Soviet Union and its satellite states to reassert these points and to further ensure inter-state cooperation. The principles of the doctrine were so broad that the Soviets even used it to justify their military intervention in the communist (but non-Warsaw Pact) nation of Afghanistan in 1979. The Brezhnev Doctrine stayed in effect until the Soviet decision not to intervene militarily in the Polish crisis of 1980–1981 effectively ended it.
Mikhail Gorbachev refused to use military force when Poland held free elections in 1989 and Solidarity defeated the Polish United Workers' Party. The doctrine was superseded by the facetiously named Sinatra Doctrine in 1989, an allusion to the Frank Sinatra song "My Way". The refusal to intervene in the emancipation of the Eastern European satellite states and in the Pan-European Picnic then led to the fall of the Iron Curtain and the largely peaceful collapse of the Eastern Bloc.
Origins.
1956 Hungarian Revolution and Soviet invasion.
The period between 1953 and 1968 was saturated with dissidence and reformation within the Soviet satellite states. 1953 saw the death of Soviet leader Joseph Stalin, followed closely by Nikita Khrushchev's 1956 "Secret Speech" denouncing Stalin. This denouncement of the former leader led to a period of the Soviet era known commonly as "De-Stalinization." Under the blanket reforms of this process, Imre Nagy came to power in Hungary as the new prime minister, taking over from Mátyás Rákosi. Almost immediately, Nagy set out on a path of reform: police power was reduced, collectivized farms were broken up and returned to individual peasants, industry and food production shifted, and religious tolerance became more prominent. These reforms shocked the Hungarian Communist Party, and Nagy was quickly overthrown by Rákosi in 1955 and stripped of his positions. Shortly afterwards, Khrushchev signed the Belgrade Declaration, which stated that "separate paths to socialism were permissible within the Soviet Bloc." With hopes for serious reform just having been extinguished in Hungary, the declaration was not received well by the Hungarians. Tensions quickly mounted in Hungary, with demonstrations and calls not only for the withdrawal of Soviet troops but for a Hungarian withdrawal from the Warsaw Pact as well.
By October 23, Soviet forces had landed in Budapest. A chaotic and bloody suppression of revolutionary forces lasted from October 24 until November 7, ending with thousands of Hungarians killed and many more fleeing the country. Although order was restored, tensions remained on both sides of the conflict. Hungarians resented the end of the reformation, and the Soviets wanted to avoid a similar crisis occurring again anywhere in the Soviet Bloc.
A peaceful Brezhnev Doctrine.
When the Hungarian Revolution of 1956 was suppressed, the Soviets adopted the mindset that governments supporting both communism and capitalism must coexist and, more importantly, build relations. This idea stressed that all people are equal and have the right to solve the problems of their own countries themselves, and that for both kinds of state to coexist peacefully, neither can exercise a right to interfere in the other's internal affairs. While this idea was raised following the events in Hungary, it was not put into effect for a great deal of time. This is further explained in the Renunciation section.
1968 Prague Spring.
Notions of reform had been slowly growing in Czechoslovakia since the early-to-mid 1960s. However, once the Stalinist President Antonín Novotný resigned as head of the Communist Party of Czechoslovakia in January 1968, the Prague Spring began to take shape. Alexander Dubček replaced Novotný as head of the party and was initially thought to be a friend to the Soviet Union. It was not long before Dubček began making serious liberal reforms. In an effort to establish what he called "developed socialism", Dubček instituted changes in Czechoslovakia to create a much more free and liberal version of the socialist state. Aspects of a market economy were implemented, travel restrictions were eased for citizens, state censorship loosened, the power of the StB secret police was limited, and steps were taken to improve relations with the West. As the reforms piled up, the Kremlin quickly grew uneasy, hoping not only to preserve its power within Czechoslovakia but also to avoid another Hungarian-style revolution. Soviet panic compounded in March 1968 when student protests erupted in Poland and Antonín Novotný resigned as the Czechoslovak president. On March 21, Yuri Andropov, the KGB Chairman, issued a grave statement concerning the reforms taking place under Dubček: "The methods and forms by which the work is progressing in Czechoslovakia remind one very much of Hungary. In this outward appearance of chaos…there is a certain order. It all began like this in Hungary also, but then came the first and second echelons, and then, finally the social democrats."
On March 21, with the Politburo convened to discuss the situation in Czechoslovakia, Leonid Brezhnev sought clarification from Dubček. Eager to avoid a fate similar to Imre Nagy's, Dubček reassured Brezhnev that the reforms were totally under control and not on a path similar to those seen in 1956 in Hungary. Despite Dubček's assurances, other Soviet allies grew uneasy about the reforms taking place in an Eastern European neighbor. The First Secretary of the Ukrainian Communist Party called on Moscow for an immediate invasion of Czechoslovakia in order to stop Dubček's "socialism with a human face" from spreading into the Ukrainian SSR and sparking unrest. By May 6, Brezhnev had condemned Dubček's system, declaring it a step toward "the complete collapse of the Warsaw Pact." After three months of negotiations, agreements, and rising tensions between Moscow and Czechoslovakia, the Soviet/Warsaw Pact invasion began on the night of August 20, 1968; it was met with great Czechoslovak discontent and resistance that continued for many months into 1970.
Formation of the Doctrine.
Brezhnev realized the need for a shift from Nikita Khrushchev's idea of "different paths to socialism" towards one that fostered a more unified vision throughout the socialist camp. "Economic integration, political consolidation, a return to ideological orthodoxy, and inter-Party cooperation became the new watchwords of Soviet bloc relations." On November 12, 1968, Brezhnev stated that "[w]hen external and internal forces hostile to socialism try to turn the development of a given socialist country in the direction of … the capitalist system ... this is no longer merely a problem for that country's people, but a common problem, the concern of all socialist countries." Brezhnev's statement at the Fifth Congress of the Polish United Workers' Party effectively classified the issue of sovereignty as less important than the preservation of Soviet-style socialism. While no new doctrine had been officially announced, it was clear that Soviet intervention was imminent if Moscow perceived any satellite to be at risk of jeopardizing Soviet hegemony.
Brezhnev Doctrine in practice.
The vague, broad nature of the Brezhnev Doctrine allowed it to be applied to any international situation the USSR saw fit. This is clearly evident not only in the Prague Spring of 1968 and the indirect pressure on Poland from 1980 to 1981, but also in the Soviet involvement in Afghanistan starting in the 1970s. In any instance in which the USSR questioned whether a country was becoming a "risk to international socialism", the use of military intervention was, in Soviet eyes, not only justified but necessary.
Invasion of Afghanistan in 1979.
The Soviet government's desire to link its foreign policy to the Brezhnev Doctrine was evoked again when it ordered a military intervention in Afghanistan in 1979. This was perhaps the last chapter of this doctrine's saga.
In April 1978, a coup in Kabul brought the Afghan Communist Party to power, with Nur Muhammad Taraki installed as the second president of Afghanistan. The previous president, Mohammed Daoud Khan, was killed during the coup. The Saur Revolution (as the coup became known) took Moscow by surprise, as it had preferred that the pro-Soviet Daoud Khan stay in power. The previous regime had maintained a pro-Soviet foreign policy, as Daoud Khan was a Pashtun who rejected the Durand Line as the frontier with Pakistan. The Afghan Communist Party was divided by a factional struggle between the Khalq and the Parcham. The Parcham was the more moderate of the two factions, arguing that Afghanistan was not ready for socialism and required a more gradual process, while the ultra-Communist Khalq favored a more radical approach. The Khalq faction was victorious, and the leader of the Parcham faction, Babrak Karmal, fled to Moscow in fear for his life, taking up the position of Afghan ambassador there.
Islamic fundamentalists took issue with the Communist party in power, and a "jihad" was proclaimed against the Communist government. Brezhnev and other Soviet leaders falsely portrayed the United States as the instigator of the "jihad" in Afghanistan. The rebellion was thus seen in Moscow not so much in the context of Afghan politics, with an unpopular government pursuing policies that much of the population rejected (such as the collectivisation of agriculture), but rather in the context of the Cold War, as the first stage of an alleged American plot to instigate a "jihad" in Soviet Central Asia, where the majority of the population was Muslim. To assist the government, the Soviet Union drastically increased its military aid to Afghanistan and sent Soviet advisers to train the Afghan military.
During his time as ambassador in Moscow, Karmal coordinated with the Soviet government to replace Hafizullah Amin, who had seized power from Taraki in 1979. This coordination led to Soviet soldiers and airborne units organizing a coup against the Amin-led Afghan government, during which Amin was assassinated. In his place, the Soviets installed their ally, former ambassador Babrak Karmal, as the new head of the government in Afghanistan.
The Soviet Union once again fell back on the Brezhnev Doctrine for its rationale, claiming that the intervention was both morally and politically justified. The Soviets also explained that they owed help to their friend and ally Babrak Karmal.
Renunciation.
The long-lasting struggle of the war in Afghanistan made the Soviets realize that their reach and influence were in fact limited. "[The war in Afghanistan] had shown that socialist internationalism and Soviet national interests were not always compatible." Tensions between the USSR and Czechoslovakia since 1968, as well as with Poland in 1980, exposed the inefficiencies inherent in the Brezhnev Doctrine. The Solidarity trade union protests in Poland were suppressed without outside intervention, leaving the Brezhnev Doctrine effectively dead. Although the Kremlin wanted to preserve communism in its satellites, it decided not to intervene. Gorbachev's Glasnost and Perestroika finally opened the door for Soviet Bloc countries and republics to make reforms without fear of Soviet intervention. When East Germany desperately asked for Soviet troops to put down growing unrest in 1989, Gorbachev flatly refused.
Post-Brezhnev Doctrine.
The abandonment of the doctrine had a major effect on the way the Soviets dealt with the countries they had once tried to control. The new Sinatra Doctrine allowed countries that had been subject to communist intervention to pursue their own political reform. This carried over internally as well: the Soviet Union's biggest problem after the removal of the Brezhnev Doctrine was the Khrushchev Dilemma, the question of not how to stop internal political reform, but how to tame the physical violence that came along with it. It had become clear that the Soviet Union was beginning to loosen up.
It is possible to pinpoint the renunciation of the Brezhnev Doctrine as a factor in the dissolution of the Soviet Union. Countries that had once been micromanaged could now do as they wished politically, because the Soviets would no longer intervene wherever they saw fit. With that, the Soviet Union began to collapse. While the communist agenda had caused immense problems for other countries, it was also a driving force behind the Soviet Union staying together. The removal of the incentive to conquer and to force communism upon other nations undercut the one thing Soviet Russia had always been about: the expansion of communism.
With the fall of the Brezhnev Doctrine came the fall of the Warsaw Pact and, perhaps the final moment for the Soviet Union, the fall of the Berlin Wall, which had prevented the migration of East Germans to West Germany. The end of the Brezhnev Doctrine was perhaps the beginning of the end for one of the strongest empires in world history, the Soviet Union.
In other Communist countries.
The Soviet Union was not the only Communist country to intervene militarily in fellow Communist countries. Vietnam deposed the Khmer Rouge in the Cambodian–Vietnamese War of 1978, which was followed by a retaliatory Chinese invasion of Vietnam in the Sino-Vietnamese War of 1979.
Bain-marie
A bain-marie, also known as a water bath or double boiler, is a type of heated bath used in science, industry, and cooking to heat materials gently or to keep materials warm over a period of time. A bain-marie is also used to melt ingredients for cooking.
History.
The name comes from the French "bain de Marie", in turn derived from the medieval Latin "balneum Mariae" and an earlier Arabic name, all meaning 'Mary's bath'. In his books, the alchemist Zosimos of Panopolis (c. 300 AD) credits the invention of the device to Mary the Jewess, an ancient alchemist. However, the water bath was known many centuries earlier (to Hippocrates and Theophrastus), and the "balneum Mariae" attributed to Mary the Jewess was used to heat its contents to higher temperatures, while the bain-marie that continues to be used today provides only a gentle heat.
Description.
The double boiler comes in a wide variety of shapes, sizes, and types, but traditionally is a wide, cylindrical, usually metal container made of three or four basic parts: a handle, an outer (or lower) container that holds the working fluid, an inner (or upper), smaller container that fits inside the outer one and which holds the material to be heated or cooked, and sometimes a base underneath. Under the outer container of the bain-marie (or built into its base) is a heat source.
Typically, the inner container is immersed about halfway into the working fluid.
The inner container, filled with the substance to be heated, fits inside the outer container filled with the working fluid (often water, but alternatively steam or oil). The outer container is heated at or below the base, causing the temperature of the working fluid to rise and thus transferring heat to the inner container. The maximum obtainable temperature of the working fluid is dictated by its composition and boiling point at the ambient pressure. Since the surface of the inner container is always in contact with the working fluid, the double boiler serves as a constant-temperature heat source for the substance being heated, without hot or cold spots that can affect its properties.
When the working fluid is water and the bain-marie is used at sea level, the maximum temperature of the working fluid in the lower container will not exceed 100 °C, the boiling point of water at sea level. Using a different working fluid, such as oil, in the outer container, or pressurizing the outer container, will result in different maximum temperatures obtainable in the inner container.
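As an illustrative aside, the pressure dependence of this temperature ceiling can be sketched with the Clausius–Clapeyron relation. This is a simplified model, not part of the original text: it assumes a constant enthalpy of vaporization, and the constants below are approximate values for water.

```python
import math

def boiling_point_K(pressure_pa,
                    h_vap=40660.0,   # J/mol, approximate enthalpy of vaporization of water
                    t_ref=373.15,    # K, boiling point at the reference pressure
                    p_ref=101325.0): # Pa, standard sea-level pressure
    """Estimate water's boiling point at a given ambient pressure using the
    Clausius-Clapeyron relation (assumes h_vap is constant over the range)."""
    R = 8.314  # J/(mol*K), gas constant
    inv_t = 1.0 / t_ref - R * math.log(pressure_pa / p_ref) / h_vap
    return 1.0 / inv_t

# At sea level the working fluid tops out at the familiar 100 C:
print(round(boiling_point_K(101325.0) - 273.15, 1))  # 100.0
# At roughly 70 kPa (high-altitude ambient pressure) the ceiling drops to about 90 C:
print(round(boiling_point_K(70000.0) - 273.15, 1))
```

This is why a water bain-marie used at altitude holds its contents at a noticeably lower maximum temperature than one used at sea level.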
Alternatives.
A contemporary alternative to the traditional, liquid-filled bain-marie is the electric "dry-heat" bain-marie, heated by elements below both pots. The dry-heat form of electric bains-marie often consumes less energy, requires little cleaning, and can be heated more quickly than traditional versions. They can also operate at higher temperatures, and are often much less expensive than their traditional counterparts.
Electric bains-marie can also be wet, using either hot water or steam in the heating process. The open, bath-type bain-marie heats via a small hot-water tub (or "bath"), and the vapour-type bain-marie heats with scalding-hot steam.
Culinary applications.
In cooking applications, a bain-marie usually consists of a pan or pot of water in which another container or containers of food to be cooked is/are placed.
Other uses.
In small scale soap-making, a bain-marie's inherent control over maximum temperature makes it optimal for liquefying melt-and-pour soap bases prior to molding them into bars. It offers the advantage of maintaining the base in a liquid state, or reliquefying a solidified base, with minimal deterioration. Similarly, using a water bath, traditional wood glue can be melted and kept in a stable liquid state over many hours without damage to the animal proteins it incorporates.
Belgian
Belgian may refer to:
Ballu tundu
The ballu tundu is a traditional Sardinian folk dance which is typically danced in a closed or open circle. The dance was described as early as 1805 by Mameli and in 1825 by La Marmora. In northern and central Sardinia, the dance is lively and animated, with leaps and agile movements, and is usually accompanied by a choir of three or more singers in the center of the circle. In other areas, the dance is done to the launeddas and other traditional shepherd's instruments, but the accordion had also made its appearance by the 19th century. The introduction and the dance proper are in different time signatures.
At least in the past, the manner of holding hands was very important and followed strict rules. Married or engaged couples could hold hands palm to palm with fingers entwined, but a man could not do this with a young girl or another man's wife. If a stranger entered the circle, he had to do so to the woman's right so as not to come between her and her husband.
Barbagia
Barbagia is a geographical, cultural and natural region of inner Sardinia, contained for the most part in the provinces of Nuoro and Ogliastra and located alongside the Gennargentu massif.
The name comes from Cicero, who described the land as inhabited by barbarians; Roman domination over this part of the island was in fact never more than nominal as a result of the Roman-Sardinian Wars. This word shares its etymology with the now antiquated "Barbary".
The Sardinians, many of whose revolts came from this area, were also mocked by the ancient Romans with the pejorative term 'thieves wearing rough woolen garments'.
In 594, Pope Gregory the Great wrote a letter to Hospito, a Christian whom he calls the "leader of the Barbaricini" (). Hospito apparently permitted the evangelisation of pagan Barbagia by Christian missionaries.
The area is usually divided into five Barbagias: the Barbagia of Ollolai, the Barbagia of Seulo, the Barbagia of Belvì, the Mandrolisai, and finally the area of Ogliastra under its historical name. The latter two are named after a sub-region, and the others after their main villages.
The area comprises mainly rocky, steep hills and mountains, and there is little human presence. Barbagia is one of the least populated areas in Europe, which has allowed it to better preserve the island's cultural and natural treasures. According to a thesis by the archaeologist Giovanni Lilliu, Sardinian history has always been characterised by what he called the "constant of Sardinian resistance" against the invaders who attempted at various times to lord over the indigenous inhabitants. Barbagia is one of the few Sardinian regions where the Sardinian language, in both its Nuorese and Campidanese varieties, is still spoken on an everyday basis, while the rest of the island has mostly undergone thorough Italianization and a language shift to Italian.
One of the most important villages is Gavoi. Orgosolo was famous for its bandits and kidnappers and typical murals. Oliena is well known for its wines (especially the "Nepente", a wine made with Cannonau grapes). Another well known town is Fonni, the highest town in Sardinia at more than 1,000 meters above sea level. Fonni is also the gateway to the Gennargentu mountain system.
The economy consists of agriculture, sheep breeding, art and tradition related business, tourism and light industry.
Brabham
Motor Racing Developments Ltd., commonly known as Brabham, was a British racing car manufacturer and Formula One racing team. It was founded in 1960 by the Australian driver Jack Brabham and the British-Australian designer Ron Tauranac. The team had a successful thirty-year history, winning four FIA Formula One World Drivers' Championships and two World Constructors' Championships.
Under Brabham and Tauranac, Brabham won double world championships in 1966 and 1967, with the 1966 drivers' title going to Jack Brabham and the 1967 title going to Denny Hulme. Jack Brabham is the only Formula One driver to win a Drivers' Championship in a car bearing his own name. Brabham was the first Formula One team to use a wind tunnel to design cars. It became the world's largest manufacturer of open-wheel racing cars sold to customer teams, having built more than 500 cars by 1970. Teams using Brabham cars won championships in Formula Two and Formula Three. The cars also competed in events like the Indianapolis 500 and Formula 5000 racing.
The businessman Bernie Ecclestone owned Brabham during most of the 1970s and 1980s, and later became responsible for administering the commercial aspects of Formula One. Under Ecclestone and chief designer Gordon Murray, the team won two more Drivers' Championships in the 1980s with Brazilian Nelson Piquet. During this period, the team withdrew from manufacturing customer cars but introduced innovations such as carbon brakes and hydropneumatic suspension; it also reintroduced in-race refuelling. Its unique 'fan car' won its only race, in 1978, before being withdrawn. Piquet won his first championship in 1981 in the ground effect BT49-Ford. In 1983, he became the first driver to win a title with a turbocharged car, the Brabham BT52, which was powered by BMW's M12 straight-four engine and won four Grands Prix that season. Ecclestone sold the team in 1988.
Midway through the 1992 season, the team collapsed financially and was investigated for fraud, as its new owner, Japanese engineering firm Middlebridge, failed to make its loan repayments. In 2009, a German organisation unsuccessfully attempted to enter the 2010 Formula One season using the Brabham name.
Origins.
The Brabham team was founded by Jack Brabham and Ron Tauranac, who met in 1951 while both were successfully building and racing cars in their native Australia. Brabham, a highly successful Speedcar driver on dirt speedway ovals with multiple Australian national and state titles to his credit before moving full time into road racing in 1953, was the more successful driver of the two and went to the United Kingdom in 1955 to further his racing career. There he started driving for the Cooper Car Company works team and by 1958 had progressed with them to Formula One, the highest category of open-wheel racing defined by the Fédération Internationale de l'Automobile (FIA), motor sport's world governing body. In 1959 and 1960, Brabham won the Formula One World Drivers' Championship in Cooper's revolutionary mid-engined cars.
Despite their innovation of putting the engine behind the driver, the Coopers and their chief designer, Owen Maddock, were generally resistant to developing their cars. Brabham pushed for further advances, and played a significant role in developing Cooper's highly successful 1960 T53 "lowline" car, with input from his friend Tauranac. Brabham was confident he could do better than Cooper, and in late 1959 he asked Tauranac to come to the UK and work with him, initially producing upgrade kits for Sunbeam Rapier and Triumph Herald road cars at his car dealership, Jack Brabham Motors, but with the long-term aim of designing racing cars. Brabham described Tauranac as "absolutely the only bloke I'd have gone into partnership with". Later, Brabham offered a Coventry-Climax FWE-engined version of the Herald, with uprated suspension to match the extra power.
To meet that aim, Brabham and Tauranac set up Motor Racing Developments Ltd. (MRD), deliberately avoiding the use of either man's name. The new company would compete with Cooper in the market for customer racing cars. As Brabham was still employed by Cooper, Tauranac produced the first MRD car, for the entry-level Formula Junior class, in secrecy. Unveiled in the summer of 1961, the "MRD" was soon renamed. Motoring journalist Jabby Crombac pointed out that "[the] way a Frenchman pronounces those initials—written phonetically, 'em air day'—sounded perilously like the French word 'merde'". Gavin Youl achieved a second-place finish at Goodwood and another at Mallory Park in the MRD-Ford. The cars were subsequently known as Brabhams, with type numbers starting with BT for "Brabham Tauranac".
By the 1961 Formula One season, the Lotus and Ferrari teams had developed the mid-engined approach further than Cooper. Brabham had a poor season, scoring only four points, and—having run his own private Coopers in non-championship events during 1961—left the company in 1962 to drive for his own team: the Brabham Racing Organisation, using cars built by Motor Racing Developments. The team was based at Chessington, England and held the British licence.
Racing history—Formula One.
Jack Brabham and Ron Tauranac (1961–1970).
Motor Racing Developments initially concentrated on making money by building cars for sale to customers in lower formulae, so the new car for the Formula One team was not ready until partway through the 1962 Formula One season. The Brabham Racing Organisation (BRO) started the year fielding a customer Lotus chassis, which was delivered at 3am to keep it a secret. Brabham took two points finishes in Lotuses, before the turquoise-liveried Brabham BT3 car made its debut at the 1962 German Grand Prix. It retired with a throttle problem after 9 of the 15 laps, but went on to take a pair of fourth places at the end of the season.
From the 1963 season, Brabham was partnered by American driver Dan Gurney, the pair now running in Australia's racing colours of green and gold. Brabham took the team's first win at the non-championship Solitude Grand Prix in 1963. Gurney took the marque's first two wins in the world championship, at the 1964 French and Mexican Grands Prix. Brabham works and customer cars took another three non-championship wins during the 1964 season. The 1965 season was less successful, with no championship wins. Brabham finished third or fourth in the Constructors' Championship for three years running, but poor reliability marred promising performances on several occasions. Motor sport authors Mike Lawrence and David Hodges have said that a lack of resources may have cost the team results, a view echoed by Tauranac.
The FIA doubled the Formula One engine capacity limit to 3 litres for the 1966 season and suitable engines were scarce. Brabham used engines from Australian engineering firm Repco, which had never produced a Formula One engine before, based on aluminium V8 engine blocks from the defunct American Oldsmobile F85 road car project, and other off-the-shelf parts. Consulting and design engineer Phil Irving (of Vincent Motorcycle fame) was the project engineer responsible for producing the initial version of the engine. Few expected the Brabham-Repcos to be competitive, but the light and reliable cars ran at the front from the start of the season. At the French Grand Prix at Reims-Gueux, Brabham became the first man to win a Formula One world championship race in a car bearing his own name. Only his former teammate, Bruce McLaren, has since matched the achievement. It was the first in a run of four straight wins for the Australian veteran. Brabham won his third title in 1966, becoming the only driver to win the Formula One World Championship in a car carrying his own name (cf. Surtees, Hill and Fittipaldi Automotive). In 1967, the title went to Brabham's teammate, New Zealander Denny Hulme. Hulme had better reliability through the year, possibly due to Brabham's desire to try new parts first. The Brabham team took the Constructors' World Championship in both years.
For 1968, Austrian Jochen Rindt replaced Hulme, who had left to join McLaren. Repco produced a more powerful version of their V8 to maintain competitiveness against Ford's new Cosworth DFV, but it proved very unreliable. Slow communications between the UK and Australia had always made identifying and correcting problems very difficult. The car was fast—Rindt set pole position twice during the season—but Brabham and Rindt finished only three races between them, and ended the year with only ten points.
Although Brabham bought Cosworth DFV engines for the 1969 season, Rindt left to join Lotus. His replacement, Jacky Ickx, had a strong second half to the season, winning in Germany and Canada, after Brabham was sidelined by a testing accident. Ickx finished second in the Drivers' Championship, with 37 points to Jackie Stewart's 63. Brabham himself took a couple of pole positions and two top-3 finishes, but did not finish half the races. The team were second in the Constructors' Championship, aided by second places at Monaco and Watkins Glen scored by Piers Courage, driving a Brabham for the Frank Williams Racing Cars privateer squad.
Brabham took his last win in the opening race of the 1970 season and was competitive throughout the year, although mechanical failures blunted his challenge. After losing near-certain victories on the final laps at both Monaco and the British Grand Prix, Jack decided he had had enough, and at the end of the year sold his share in the company to Bernie Ecclestone, a businessman and Jochen Rindt's former manager. Aided by number-two driver Rolf Stommelen, the team came fourth in the Constructors' Championship.
Ron Tauranac (1971).
Tauranac signed double world champion Graham Hill and young Australian Tim Schenken to drive for the 1971 season. Tauranac designed the unusual 'lobster claw' BT34, featuring twin radiators mounted ahead of the front wheels, a single example of which was built for Hill. Although Hill, no longer a front-runner since his 1969 accident, took his final Formula One win in the non-championship BRDC International Trophy at Silverstone, the team scored only seven championship points.
Bernie Ecclestone (1972–1988).
Tauranac left Brabham early in the 1972 season after Ecclestone changed the way the company was organised without consulting him. Ecclestone has since said "In retrospect, the relationship was never going to work", noting that "[Tauranac and I] both take the view: 'Please be reasonable, do it my way'". The highlights of an aimless year, during which the team ran three different models, were pole position for Argentinian driver Carlos Reutemann at his home race at Buenos Aires and a victory in the non-championship Interlagos Grand Prix. For the 1973 season, Ecclestone promoted the young South African engineer Gordon Murray to chief designer and moved Herbie Blash from the Formula Two programme to become the Formula One team manager. Both would remain with the team for the next 15 years. For 1973, Murray produced the triangular cross-section BT42, with which Reutemann scored two podium finishes and finished seventh in the Drivers' Championship.
In the 1974 season, Reutemann took the first three victories of his Formula One career, and Brabham's first since 1970. The team finished a close fifth in the Constructors' Championship, fielding the much more competitive BT44s. After a strong finish to the 1974 season, many observers felt the team were favourites to win the 1975 title. The year started well, with a first win for Brazilian driver Carlos Pace at the Interlagos circuit in his native São Paulo. However, as the season progressed, tyre wear frequently slowed the cars in races, and the team was constantly outperformed by Ferrari and McLaren. Pace took another two podiums and finished sixth in the championship, while Reutemann had five podium finishes, including a dominant win in the 1975 German Grand Prix, and finished third in the Drivers' Championship. The team finished second in the Constructors' Championship at the end of the year.
While rival teams Lotus and McLaren relied on the Cosworth DFV engine from the late 1960s to the early 1980s, Ecclestone sought a competitive advantage by investigating other options. Despite the success of Murray's Cosworth-powered cars, Ecclestone signed a deal with Italian motor manufacturer Alfa Romeo to use their large and powerful flat-12 engine from the 1976 season. The engines were free, but they rendered the new BT45s, now in red Martini Racing livery, unreliable and overweight. At that time, designer David North was hired to work alongside Murray. The 1976 and 1977 seasons saw Brabham fall toward the back of the field again. Reutemann negotiated a release from his contract before the end of the 1976 season and signed with Ferrari. Ulsterman John Watson replaced him at Brabham for 1977. Watson lost a near-certain victory at that year's French Grand Prix at Dijon when his car ran low on fuel on the last lap and he was passed by Mario Andretti's Lotus; his second place was the team's best result of the season. The car often ran at the head of races, but the unreliability of the Alfa Romeo engine was a major problem. The team lost Pace early in the 1977 season when he died in a light aircraft accident.
For the 1978 season, Murray's BT46 featured several new technologies to overcome the weight and packaging difficulties caused by the Alfa Romeo engines. Ecclestone signed the then two-time Formula One world champion Niki Lauda from Ferrari through a deal with Italian dairy products company Parmalat, which met the cost of Lauda ending his Ferrari contract and made up his salary to the £200,000 Ferrari was offering. 1978 was the year of the dominant Lotus 79 "wing car", which used aerodynamic ground effect to stick to the track when cornering, but Lauda won two races in the BT46, one with the controversial "B" or "fan car" version.
The partnership with Alfa Romeo ended during the 1979 season, the team's first with young Brazilian driver Nelson Piquet. Murray designed the full-ground-effect BT48 around a rapidly developed new Alfa Romeo V12 engine and incorporated an effective "carbon-carbon" braking system—a technology Brabham had pioneered in 1976. However, unexpected movement of the car's aerodynamic centre of pressure made its handling unpredictable, and the new engine was unreliable. The team dropped to eighth in the Constructors' Championship by the end of the season. Alfa Romeo started testing their own Formula One car during the season, prompting Ecclestone to revert to Cosworth DFV engines, a move Murray described as being "like having a holiday". The new, lighter, Cosworth-powered BT49 was introduced before the end of the year at the Canadian Grand Prix, where after practice Lauda announced his immediate retirement from driving, later saying that he "was no longer getting any pleasure from driving round and round in circles".
The team used the BT49 over four seasons. In the 1980 season Piquet scored three wins and the team took third in the Constructors' Championship with Piquet second in the Drivers' Championship. This season saw the introduction of the blue and white livery that the cars would wear through several changes of sponsor, until the team's demise in 1992. With a better understanding of ground effect, the team further developed the BT49C for the 1981 season, incorporating a hydropneumatic suspension system to avoid ride height limitations intended to reduce downforce. Piquet, who had developed a close working relationship with Murray, took the drivers' title with three wins, albeit amid accusations of cheating. The team finished second in the Constructors' Championship, behind the Williams team.
Piquet took the team's last wins: two in 1984 by winning the seventh and eighth races of that season, the Canadian Grand Prix and the Detroit Grand Prix, and one in 1985 by winning the French Grand Prix. He finished fifth in 1984 and a mere eighth in 1985 in the respective Drivers' Championships. After seven years and two world championships, Piquet felt he was worth more than Ecclestone's salary offer for 1986, and reluctantly left for the Williams team at the end of the season.
For the 1986 season, Riccardo Patrese returned to Brabham, joined by Elio de Angelis. The season was a disaster: the team scored only two points. Murray's radical long and low BT55, with its BMW M12 engine tilted over to improve the aerodynamics and lower the centre of gravity, had severe reliability issues, and the Pirelli tyres performed poorly. De Angelis became the Formula One team's only fatality when he died in a testing accident at the Paul Ricard circuit. Derek Warwick, who replaced de Angelis, was close to scoring two points for fifth place in the British Grand Prix, but a problem on the last lap dropped him out of the points.
In August, BMW, after considering running their own in-house team, announced their departure from Formula One at the end of the season. Murray, who had largely taken over the running of the team as Ecclestone became more involved with his role at the Formula One Constructors Association, felt that "the way the team had operated for 15 years broke down". He left Brabham in November to join McLaren.
Ecclestone held BMW to their contract for the 1987 season, but the German company would only supply the laydown engine. The upright units, around which Brabham had designed their new car, were sold for use by the Arrows team. Senior figures at Brabham, including Murray, have admitted that by this stage Ecclestone had lost interest in running the team. The 1987 season was only slightly more successful than the previous year, with Patrese and Andrea de Cesaris scoring 10 points between them, including two third places, at the Belgian and Mexican Grands Prix. Unable to locate a suitable engine supplier, the team missed the FIA deadline for entry into the 1988 world championship, and Ecclestone finally announced the team's withdrawal from Formula One at the Brazilian Grand Prix in April 1988. During the season-ending Australian Grand Prix, Ecclestone announced he had sold MRD to EuroBrun team owner Walter Brun for an undisclosed price.
Joachim Lüthi (1989).
Brun soon sold the team on, this time to Swiss financier Joachim Lüthi, who brought it back into Formula One for the 1989 season. The new Brabham BT58, powered by a V8 engine from Judd (originally another of Jack Brabham's companies), was produced for that season. Italian driver Stefano Modena, who had made a one-off appearance for the team at the 1987 Australian Grand Prix, drove alongside the more experienced Martin Brundle, who was returning to Formula One after spending 1988 winning the World Sportscar Championship for Jaguar. Modena took the team's last podium: a third place at the Monaco Grand Prix. There, Brundle—who had scraped through pre-qualifying by 0.021 seconds before qualifying fourth—had been running third, but was forced to stop to replace a flat battery and finished sixth. The team at times failed to make the grid: Brundle failed to pre-qualify at the Canadian and French Grands Prix. The team finished ninth in the Constructors' Championship at the end of the season.
Middlebridge Racing (1989–1992).
After Lüthi's arrest on tax fraud charges in mid-1989, several parties disputed the ownership of the team. Middlebridge Group Limited, a Japanese engineering firm owned by billionaire Koji Nakauchi, was already involved with established Formula 3000 team Middlebridge Racing and gained control of Brabham for the 1990 season. Herbie Blash had returned to run the team in 1989 and continued to do so in 1990. Middlebridge paid for its purchase using £1 million loaned to them by finance company Landhurst Leasing, but the team remained underfunded and would only score a few more points finishes in its last three seasons.
Jack Brabham's youngest son, David, raced for the Formula One team for a short time in 1990, including the season-ending Australian Grand Prix (the first time a Brabham had driven a Brabham car in an Australian Grand Prix since 1968). 1990 was another disastrous year, with Modena's fifth place in the season-opening United States Grand Prix the only top-six finish. The team finished ninth in the Constructors' Championship. Brundle and fellow Briton Mark Blundell scored only three points during the 1991 season. Due to poor results in the first half of 1991, they had to pre-qualify in the second half of the season; Blundell failed to do so in Japan, as did Brundle in Australia. The team finished 10th in the Constructors' Championship, behind another struggling British team, Lotus. The 1992 season started with Eric van de Poele and Giovanna Amati driving, after Japanese Formula 3000 driver Akihiko Nakaya was denied a superlicence. Damon Hill, the son of another former Brabham driver and world champion, Graham Hill, debuted for the team after Amati was dropped when her sponsorship failed to materialise. Amati, the fifth and most recent woman to race in Formula One, ended her career with three DNQs.
Argentine Sergio Rinland designed the team's final cars around Judd engines, except for 1991 when Yamaha powered the cars. In the 1992 season the cars (which were updated versions of the 1991 car) rarely qualified for races. Hill gave the team its final finish, at the Hungarian Grand Prix, where he crossed the finish line 11th and last, four laps behind the winner, Ayrton Senna. After the end of that race the team ran out of funds and collapsed.
Middlebridge Group Limited had been unable to continue making repayments against the £6 million ultimately provided by Landhurst Leasing, which went into administration. The Serious Fraud Office investigated the case, and Landhurst's managing directors were found guilty of corruption and imprisoned, having accepted bribes for further loans to Middlebridge. Brabham was one of four teams to leave Formula One that year, along with March Engineering, Fondmetal and Andrea Moda Formula. Although there was talk of reviving the team for the following year, its assets passed to Landhurst Leasing and were auctioned by the company's receivers in 1993. Among these was the team's old factory in Chessington, which was acquired by Yamaha Motor Sports and used to house Activa Technology Limited, a company run by Herbie Blash that manufactured composite components for race and road cars. The factory was bought by the Carlin DPR GP2 motor racing team in 2006.
Motor Racing Developments.
Brabham cars were also widely used by other teams, and not just in Formula One. Motor Racing Developments (MRD), the company Jack Brabham and Ron Tauranac set up in 1961 to design and build formula racing cars for customer teams, had a large portfolio of other activities. Initially, Brabham and Tauranac each held 50 per cent of the shares. Tauranac was responsible for design and for running the business, while Brabham was the test driver and arranged corporate deals such as the Repco engine supply and the use of the MIRA wind tunnel. He also contributed ideas to the design process and often machined parts and helped build the cars.
From 1963 to 1965, MRD was not directly involved in Formula One, and often ran works cars in other formulae. A separate company, Jack Brabham's Brabham Racing Organisation, ran the Formula One works entry. Like other customers, BRO bought its cars from MRD, initially at £3,000 per car, although it did not pay for development parts. Tauranac was unhappy with his distance from the Formula One operation and before the 1966 season suggested that he was no longer interested in producing cars for Formula One under this arrangement. Brabham investigated other chassis suppliers for BRO; however, the two reached an agreement, and from 1966 MRD was much more closely involved in this category. After Jack Brabham sold his shares in MRD to Ron Tauranac at the end of 1969, the works Formula One team was run directly by MRD.
Despite only building its first car in 1961, by the mid-1960s MRD had overtaken established constructors like Cooper to become the largest manufacturer of single-seat racing cars in the world, and by 1970 had built over 500 cars. Of the other Formula One teams which used Brabhams, Frank Williams Racing Cars and the Rob Walker Racing Team were the most successful. The 1965 British Grand Prix saw seven Brabhams compete, only two of them from the works team, and there were usually four or five at championship Grands Prix throughout that season. The firm built scores of cars for the lower formulae each year, peaking with 89 cars in 1966. Brabham had the reputation of providing customers with cars of a standard equal to those used by the works team, which worked "out of the box". The company provided a high degree of support to its customers—including Jack Brabham helping customers set up their cars. During this period the cars were usually known as "Repco Brabhams", not because of the Repco engines used in Formula One between 1966 and 1968, but because of a smaller-scale sponsorship deal through which the Australian company had been providing parts to Jack Brabham since his Cooper days.
At the end of 1971 Bernie Ecclestone bought MRD. He retained the Brabham brand, as did subsequent owners. Although the production of customer cars continued briefly under Ecclestone's ownership, he believed the company needed to focus on Formula One to succeed. The last production customer Brabhams were the Formula Two BT40 and the Formula Three BT41 of 1973, although Ecclestone sold ex-works Formula One BT44Bs to RAM Racing as late as 1976.
In 1988 Ecclestone sold Motor Racing Developments to Alfa Romeo. The Formula One team did not compete that year, but Alfa Romeo put the company to use designing and building a prototype "Procar"—a racing car with the silhouette of a large saloon (the Alfa Romeo 164) covering a composite racing car chassis and mid-mounted race engine. This was intended for a racing series for major manufacturers to support Formula One Grands Prix, and was designated the Brabham BT57.
Racing history—other categories.
IndyCar.
Formula Two.
In the 1960s and early 1970s, drivers who had reached Formula One often continued to compete in Formula Two. In 1966 MRD produced the BT18 for the lower category, with a Honda engine acting as a stressed component. The car was extremely successful, winning 11 consecutive Formula Two races in the hands of the Formula One pairing of Brabham and Hulme. Cars were entered by MRD and not by the Brabham Racing Organisation, avoiding a direct conflict with Repco, their Formula One engine supplier.
Formula Three.
The first Formula Three Brabham, the BT9, won only four major races in 1964. The BT15, which followed in 1965, was a highly successful design: 58 cars were sold, and they won 42 major races. Further developments of the same concept, including wings by the end of the decade, were highly competitive up until 1971. The BT38C of 1972 was Brabham's first production monocoque and the first not designed by Tauranac. Although 40 were ordered, it was less successful than its predecessors. The angular BT41 was the final Formula Three Brabham.
Formula 5000.
Brabham made one car for Formula 5000 racing, the Brabham BT43. Rolled out in late 1973, it was tested in early 1974 by John Watson at Silverstone before making its debut at the Rothmans F5000 Championship round at Monza on 30 June 1974, driven by Martin Birrane. Former Australian Drivers' Champion Kevin Bartlett used the Chevrolet-powered BT43 to finish third in the 1978 Australian Drivers' Championship, including fifth place in the 1978 Australian Grand Prix.
Sports cars.
Tauranac did not enjoy designing sports cars and could only spare a small amount of his time from MRD's very successful single-seater business. Only 14 sports car models were built between 1961 and 1972, out of a total production of almost 600 chassis. The BT8A was the only one built in any numbers, and was quite successful in national level racing in the UK in 1964 and 1965. The design was "stretched" in 1966 to become the one-off BT17, originally fitted with the 4.3-litre version of the Repco engine for Can-Am racing. It was quickly abandoned by MRD after engine reliability problems became evident.
Technical innovation.
Brabham was considered a technically conservative team in the 1960s, chiefly because it persevered with traditional spaceframe cars long after Lotus introduced lighter, stiffer monocoque chassis to Formula One in 1962. Chief designer Tauranac reasoned that monocoques of the time were not usefully stiffer than well designed spaceframe chassis, and were harder to repair and less suitable for MRD's customers. His "old fashioned" cars won the Brabham team the 1966 and 1967 championships, and were competitive in Formula One until rule changes forced a move to monocoques in 1970.
Despite the perceived conservatism, in 1963 Brabham was the first Formula One team to use a wind tunnel to hone its designs, reducing drag and stopping the cars lifting off the ground at speed. The practice only became the norm in the early 1980s, and is possibly the most important factor in the design of modern cars. Towards the end of the 1960s, teams began to exploit aerodynamic downforce to push the cars' tyres down harder on the track and enable them to maintain faster speeds through high-speed corners. At the 1968 Belgian Grand Prix, Brabham was the first, alongside Ferrari, to introduce full-width rear wings to this effect.
The team's most fertile period of technical innovation came in the 1970s and 1980s when Gordon Murray became technical director. During 1976, the team introduced carbon-carbon brakes to Formula One, which promised reduced unsprung weight and better stopping performance due to carbon's greater coefficient of friction. The initial versions used carbon-carbon composite brake pads and a steel disc faced with carbon "pucks". The technology was not reliable at first; in 1976, Carlos Pace crashed at the Österreichring circuit after heat build-up in the brakes boiled the brake fluid, leaving him with no way of stopping the car. By 1979, Brabham had developed an effective carbon-carbon braking system, combining structural carbon discs with carbon brake pads.
Although Brabham experimented with airdams and underbody skirts in the mid-1970s, the team, like the rest of the field, did not immediately understand Lotus's development of a ground effect car in 1977. The Brabham BT46B "fan car" of 1978 generated enormous downforce with a fan, which sucked air from beneath the car, although its claimed purpose was engine cooling. The car raced only once in the Formula One World Championship—Niki Lauda winning the 1978 Swedish Grand Prix—before a loophole in the regulations was closed by the FIA.
Although in 1979 Murray was the first to use lightweight carbon fibre composite panels to stiffen Brabham's aluminium alloy monocoques, he echoed his predecessor Tauranac in being the last to switch to the new fully composite monocoques. Murray was reluctant to build the entire chassis from composite materials until he understood their behaviour in a crash, an understanding achieved in part through an instrumented crash test of a BT49 chassis. The team did not follow McLaren's 1981 MP4/1 with a fully composite chassis of its own until the "lowline" BT55 in 1986. This technology is now used in all top-level single-seater racing cars.
For the 1981 season the FIA introduced a minimum ride height for the cars, intended to slow them in corners by limiting the downforce created by aerodynamic ground effect. Gordon Murray devised a hydropneumatic suspension system for the BT49C, which allowed the car to settle to a much lower ride height at speed. Brabham was accused of cheating by other teams, although Murray believes that the system met the letter of the regulations. No action was taken against the team and others soon produced systems with similar effects.
At the 1982 British Grand Prix, Brabham reintroduced the idea of refuelling and changing the car's tyres during the race, unseen since the 1957 Formula One season, to allow its drivers to sprint away at the start of races on a light fuel load and soft tyres. After studying techniques used at the Indianapolis 500 and in NASCAR racing in the United States, the team was able to refuel and re-tyre the car in 14 seconds in tests ahead of the race. In 1982 Murray felt the tactic did little more than "get our sponsors noticed at races we had no chance of winning", but in 1983 the team made good use of it. Refuelling was banned for 1984, although it reappeared between 1994 and 2009; tyre changes have remained part of Formula One.
Controversy.
The fan car and hydropneumatic suspension exploited loopholes in the sporting regulations. In the early 1980s, Brabham was accused of going further and breaking the regulations. During 1981, Piquet's first championship year, rumours circulated of illegal underweight Brabham chassis. Driver Jacques Laffite was among those to claim that the cars were fitted with heavily ballasted bodywork before being weighed at scrutineering. The accusation was denied by Brabham's management. No formal protest was made against the team and no action was taken against it by the sporting authorities.
From 1978, Ecclestone was president of the Formula One Constructors Association (FOCA), a body formed by the teams to represent their interests. This left his team open to accusations of having advance warning of rule changes. Ecclestone denies that the team benefited from this and Murray has noted that, contrary to this view, at the end of 1982 the team had to abandon its new BT51 car, built on the basis that ground effect would be permitted in 1983. Brabham had to design and build a replacement, the BT52, in only three months. At the end of the 1983 season, Renault and Ferrari, both beaten to the Drivers' Championship by Piquet, protested that the Research Octane Number (RON) 102.4 of the team's fuel was above the legal limit of 102. The FIA declared that a figure of up to 102.9 was permitted under the rules, and that Brabham had not exceeded this limit.
Later use of the Brabham name.
Revival attempts.
On 4 June 2009, Franz Hilmer confirmed that he had used the name to lodge an entry for the 2010 Formula One season as a cost-capped team under the new budget cap regulations. The Brabham family was not involved and announced that it was seeking legal advice over the use of the name. The team's entry was not accepted, and the Brabham family later obtained legal recognition of their exclusive rights to the Brabham brand.
Brabham Racing.
In September 2014, David Brabham—the son of Brabham founder Sir Jack Brabham—announced the reformation of the Brabham Racing team under the name Project Brabham, with plans to enter the 2015 FIA World Endurance Championship and 2015 24 Hours of Le Mans in the LMP2 category using a crowdsourcing business model. The company also expressed interest in returning to Formula One, but did not have the financial capacity to do so.
In 2019, Brabham Automotive announced its goal to enter the 2021 FIA World Endurance Championship using a BT62 in the GTE class. The team competed in the 2019 GT Cup Championship. It also entered the final two races of the 2019 Britcar Endurance Championship, winning on its debut.
In 2021, Brabham Automotive debuted its BT63 GT2 car at the season finale of the 2021 GT2 European Series.
Championship results.
Results achieved by the "works" Brabham team. Bold results indicate a championship win.
References.
All race and championship results are taken from the official Formula 1 website: "1962 Season review", www.formula1.com, retrieved 27 April 2006.
External links.
Boeing B-17 Flying Fortress
The Boeing B-17 Flying Fortress is an American four-engined heavy bomber aircraft developed in the 1930s for the United States Army Air Corps (USAAC). A fast and high-flying bomber, the B-17 dropped more bombs than any other aircraft during World War II, used primarily in the European Theater of Operations. It is the third-most produced bomber of all time, behind the American four-engined Consolidated B-24 Liberator and the German multirole, twin-engined Junkers Ju 88. The B-17 was also employed in transport, anti-submarine warfare, and search and rescue roles.
In a USAAC competition, Boeing's prototype Model 299/XB-17 outperformed two other entries but crashed, losing the initial 200-bomber contract to the Douglas B-18 Bolo. Still, the Air Corps ordered 13 more B-17s for further evaluation, which were introduced into service in 1938. The B-17 evolved through numerous design advances but from its inception, the USAAC (from 1941 the United States Army Air Forces, USAAF) promoted the aircraft as a strategic weapon. It was a relatively fast, high-flying, long-range bomber with heavy defensive armament at the expense of bomb load. It also developed a reputation for toughness based upon stories and photos of badly damaged B-17s safely returning to base.
The B-17 saw early action in the Pacific War, where it conducted air raids against Japanese shipping and airfields. But it was primarily employed by the USAAF in the daylight component of the Allied strategic bombing campaign over Europe, complementing RAF Bomber Command's night bombers in attacking German industrial, military and civilian targets. Of the roughly of bombs dropped on Nazi Germany and its occupied territories by Allied aircraft, over (42.6%) were dropped from B-17s.
As of January 2025, four aircraft remain in flying condition. About 50 survive in storage or on static display, the oldest of which is "The Swoose", a B-17D flown in combat in the Pacific on the first day of the United States' involvement in World War II. Several reasonably complete wrecks have also been found, some underwater. B-17 survivors gained national attention in the United States in 2022, when one was destroyed in a fatal mid-air collision with another warbird at an airshow.
Development.
Origins.
On 8 August 1934, the USAAC tendered a proposal for a multiengine bomber to replace the Martin B-10. The Air Corps was looking for a bomber capable of reinforcing the air forces in Hawaii, Panama, and Alaska. Requirements were for it to carry a "useful bombload" at an altitude of for 10 hours with a top speed of at least .
They also desired, but did not require, a bomber with a range of and a speed of . The competition for the air corps contract was to be decided by a "fly-off" between Boeing's design, the Douglas DB-1, and the Martin Model 146 at Wilbur Wright Field in Dayton, Ohio.
The prototype B-17, with the Boeing factory designation of Model 299, was designed by a team of engineers led by E. Gifford Emery and Edward Curtis Wells, and was built at Boeing's own expense. It combined features of the company's experimental XB-15 bomber and 247 transport. The B-17's armament consisted of five .30 caliber (7.62 mm) machine guns, with a payload up to of bombs on two racks in the bomb bay behind the cockpit. The aircraft was powered by four Pratt & Whitney R-1690 Hornet radial engines, each producing at .
The first flight of the Model 299 was on 1935 with Boeing chief test pilot Leslie Tower at the controls. The day before, Richard Williams, a reporter for "The Seattle Times", coined the name "Flying Fortress" when – observing the large number of machine guns sticking out from the new aircraft – he described it as a "15-ton flying fortress" in a picture caption. The most distinctive mount was in the nose, which allowed the single machine gun to be fired toward nearly all frontal angles.
Boeing was quick to see the value of the name and had it trademarked for use. Boeing also claimed in some of the early press releases that Model 299 was the first combat aircraft that could continue its mission if one of its four engines failed. On , the prototype flew from Seattle to Wright Field in nine hours and three minutes with an average ground speed of , much faster than the competition.
At the fly-off, the performance of Boeing's four-engined design was superior to that of the twin-engined DB-1 and Model 146. In March 1935, Army Chief of Staff General Douglas MacArthur created GHQ Air Force and promoted Lieutenant Colonel Frank Maxwell Andrews to brigadier general to head it. MacArthur and Andrews both believed that the capabilities of large four-engined aircraft exceeded those of shorter-ranged, twin-engined aircraft, and that the B-17 was better suited to emerging USAAC doctrine. Their opinions were shared by the Air Corps procurement officers, and even before the competition had finished, they suggested buying 65 B-17s.
On 30 October 1935, a test flight determining the rate of climb and service ceiling was planned. The command pilot was Major Ployer Peter Hill, Wright Field Material Division Chief of the Flying Branch, his first flight in the Model 299. Copilot was Lieutenant Donald Putt, while Boeing chief test pilot Leslie R. Tower was behind the pilots in an advisory role. Also on board were Wright Field test observer John Cutting and mechanic Mark Koegler. The plane stalled and spun into the ground soon after takeoff, bursting into flames. Though initially surviving the impact, Hill died within a few hours, and Tower on 19 November. Post-accident interviews with Tower and Putt determined the control surface gust lock had not been released. Doyle notes, "The loss of Hill and Tower, and the Model 299, was directly responsible for the creation of the modern written checklist used by pilots to this day."
The crashed Model 299 could not finish the evaluation, disqualifying it from the competition. While the Air Corps was still enthusiastic about the aircraft's potential, Army officials were daunted by its cost: Douglas quoted a unit price of $58,200 based on a production order of 220 aircraft, compared with $99,620 from Boeing. MacArthur's successor as Army Chief of Staff, Malin Craig, canceled the order for 65 YB-17s and ordered 133 twin-engined Douglas B-18 Bolos instead. In October 1938, Secretary of War Harry Hines Woodring decided that the War Department would purchase no four-engined bombers, including B-17s, in 1939.
Initial orders.
Despite the crash, the USAAC had been impressed by the prototype's performance, and on 1936, through a legal loophole, the Air Corps ordered 13 YB-17s (designated Y1B-17 after November 1936 to denote their special F-1 funding) for service testing. The YB-17 incorporated a number of significant changes from the Model 299, including more powerful Wright R-1820-39 Cyclone engines. Although the prototype was company-owned and never received a military serial (the B-17 designation itself did not appear officially until January 1936, nearly three months after the prototype crashed), the term "XB-17" was retroactively applied to the NX13372 airframe and has entered the lexicon to describe the first Flying Fortress.
Between 1 March and 4 August 1937, 12 of the 13 Y1B-17s were delivered to the 2nd Bombardment Group at Langley Field in Virginia for operational development and flight tests. One suggestion adopted was the use of a preflight checklist to avoid accidents such as that which befell the Model 299. In one of their first missions, three B-17s, directed by lead navigator Lieutenant Curtis LeMay, were sent by General Andrews to "intercept" and photograph the Italian ocean liner "Rex" off the Atlantic coast. The mission was successful and widely publicized. The 13th Y1B-17 was delivered to the Material Division at Wright Field, Ohio, to be used for flight testing.
A 14th Y1B-17 ("37-369"), originally constructed for ground testing of the airframe's strength, was upgraded by Boeing with exhaust-driven General Electric turbo-superchargers, and designated Y1B-17A. Designed by Sanford Moss, engine exhaust gases turned the turbine's steel-alloy blades, forcing high-pressure air into the Wright Cyclone GR-1820-39 engine supercharger. Scheduled to fly in 1937, it encountered problems with the turbochargers, and its first flight was delayed until 1938. The aircraft was delivered to the Army on 1939. Once service testing was complete, the Y1B-17s and Y1B-17A were redesignated B-17 and B-17A, respectively, to signify the change to operational status. The Y1B-17A had a maximum speed of , at its best operational altitude, compared to for the Y1B-17. Also, the Y1B-17A's new service ceiling was more than higher at , compared to the Y1B-17's . These turbo-superchargers were incorporated into the B-17B.
Opposition to the Air Corps' ambitions for the acquisition of more B-17s faded, and in late 1937, 10 more aircraft, designated B-17B, were ordered to equip two bombardment groups, one on each U.S. coast. Improved with larger flaps and rudder and a well-framed, 10-panel Plexiglas nose, the B-17Bs were delivered in five small batches between July 1939 and March 1940. In July 1940, an order for 512 B-17s was issued, but at the time of the attack on Pearl Harbor, fewer than 200 were in service with the Army.
A total of 155 B-17s of all variants were delivered between 1937 and 1941, but production quickly accelerated, with the B-17 once holding the record for the highest production rate for any large aircraft. The aircraft went on to serve in every World War II combat zone, and by the time production ended in May 1945, 12,731 B-17s had been built by Boeing, Douglas, and Vega (a subsidiary of Lockheed).
Design and variants.
The aircraft went through several alterations in each of its design stages and variants. Of the 13 YB-17s ordered for service testing, 12 were used by the 2nd Bomb Group at Langley Field, Virginia, to develop heavy bombing techniques, and the 13th was used for flight testing at the Material Division at Wright Field, Ohio. Experiments on this aircraft led to the use of 4 General Electric turbo-superchargers, which later became standard on the B-17 line. A 14th aircraft, the YB-17A, originally destined for ground testing only and upgraded with the turbochargers, was redesignated B-17A after testing had finished.
As the production line developed, Boeing engineers continued to improve upon the basic design. To enhance performance at slower speeds, the B-17B was altered to include larger rudders and flaps. The B-17C changed from three bulged, oval-shaped gun blisters to two flush, oval-shaped gun window openings, and on the lower fuselage, a single "bathtub" gun gondola housing, which resembled the similarly configured and located "Bodenlafette"/"Bola" ventral defensive emplacement on the German Heinkel He 111P-series medium bomber.
While models A through D of the B-17 were designed defensively, the large-tailed B-17E was the first model primarily focused on offensive warfare. The B-17E was an extensive revision of the Model 299 design: the fuselage was extended by ; a much larger rear fuselage, vertical tailfin, rudder, and horizontal stabilizer were added; a gunner's position was added in the new tail; the nose (especially the bombardier's framed, 10-panel nose glazing) remained much the same as in the earlier B through D versions; a Sperry electrically powered manned dorsal gun turret was added just behind the cockpit; and a similarly powered manned ventral ball turret, also built by Sperry and mounted just aft of the bomb bay, replaced the relatively hard-to-use, remotely operated Sperry model 645705-D ventral turret fitted to the earliest examples of the E variant. These modifications resulted in a 20% increase in aircraft weight. The B-17's turbocharged Wright R-1820 Cyclone 9 engines were upgraded to increasingly powerful versions of the same powerplant throughout production, and the number of machine gun emplacements was likewise increased.
The B-17F variant was the primary version used by the Eighth Air Force to face the Germans in 1943, and standardized the manned Sperry ball turret for ventral defense, also replacing the earlier, 10-panel framed bombardier's nose glazing from the B subtype with an enlarged, nearly frameless Plexiglas bombardier's nose enclosure for improved forward vision.
Two experimental versions of the B-17 were flown under different designations, the XB-38 'Flying Fortress' and the YB-40 'Flying Fortress.' The XB-38 was an engine testbed for Allison V-1710 liquid-cooled engines, should the Wright engines normally used on the B-17 become unavailable. The only prototype XB-38 to fly crashed on its ninth flight, and the concept was abandoned. The Allison V-1710 was reallocated to fighter aircraft.
The YB-40 was a heavily armed modification of the standard B-17, used before the North American P-51 Mustang, an effective long-range fighter, became available as an escort. Additional armament included an extra dorsal turret in the radio room, a remotely operated and fired Bendix-built "chin turret" directly below the bombardier's accommodation, and twin .50 caliber (12.7 mm) guns in each of the waist positions. The ammunition load was over 11,000 rounds. All of these modifications made the YB-40 well over heavier than a fully loaded B-17F. The YB-40s, with their greater weight, had trouble keeping up with the lighter bombers once those had dropped their bombs, so the project was abandoned and finally phased out in July 1943. The final production blocks of the B-17F from Douglas's plants did, however, adopt the YB-40's "chin turret", giving them a much-improved forward defense capability.
By the time the definitive B-17G appeared, the number of guns had been increased from seven to 13, the designs of the gun stations were finalized, and other adjustments were completed. The B-17G was the final version of the Flying Fortress, incorporating all changes made to its predecessor, the B-17F, and in total, 8,680 were built, the last (by Lockheed) on 1945. Many B-17Gs were converted for other missions such as cargo hauling, engine testing, and reconnaissance. Initially designated SB-17G, a number of B-17Gs were also converted for search-and-rescue duties, later to be redesignated B-17H.
Late in World War II, at least 25 B-17s were fitted with radio controls and television cameras, loaded with of high explosives and designated BQ-7 "Aphrodite missiles" for Operation Aphrodite against bombing-resistant German bunkers. The operation, which involved remotely flying the Aphrodite drones onto their targets by accompanying CQ-17 "mothership" control aircraft, was approved on 1944, and assigned to the 388th Bombardment Group stationed at RAF Fersfield, a satellite of RAF Knettishall.
The first four drones were sent against Mimoyecques (the V-3 site), the Siracourt V-1 bunker, the V-2 Blockhaus d'Éperlecques at Watten, and La Coupole at Wizernes on 4 August, causing little damage; two pilots were killed. On 12 August, a Consolidated B-24 Liberator, part of the United States Navy's contribution ("Project Anvil"), en route for Heligoland and piloted by Lieutenant Joseph P. Kennedy Jr. (elder brother of future U.S. president John F. Kennedy), exploded over the Blyth estuary. Blast damage was caused over a radius of . Naval flights stopped, but a few more missions were flown by the USAAF. The Aphrodite project was effectively scrapped in early 1945.
Operational history.
The B-17 began operations in World War II with the Royal Air Force (RAF) in 1941, and in the Southwest Pacific with the U.S. Army.
During World War II, the B-17 equipped 32 overseas combat groups, inventory peaking in August 1944 at 4,574 USAAF aircraft worldwide. The British heavy bombers, the Avro Lancaster and Handley Page Halifax, dropped and respectively.
RAF use.
The RAF entered World War II without a sufficient supply of modern heavy bombers; the largest long-range medium bomber available in any numbers was the Vickers Wellington, which could carry of bombs. While the Short Stirling and Handley Page Halifax became its primary bombers by 1941, in early 1940 the RAF agreed with the U.S. Army Air Corps to acquire 20 B-17Cs, which were given the service name Fortress Mk.I. Their first operation, against Wilhelmshaven on 1941, was unsuccessful. On three B-17s of 90 Squadron took part in a raid on the German warships "Gneisenau" and "Prinz Eugen", anchored in Brest, from , to draw German fighters away from 18 Handley Page Hampdens attacking at lower altitudes, and in time for 79 Vickers Wellingtons to attack later while the German fighters were refueling. The operation did not work as expected, with 90 Squadron's Fortresses going unopposed.
By September, the RAF had lost eight B-17Cs in combat and had experienced numerous mechanical problems, and Bomber Command abandoned daylight bombing raids using the Fortress I because of the aircraft's poor performance. The experience showed both the RAF and USAAF that the B-17C was not ready for combat, and that improved defenses, larger bomb loads, and more accurate bombing methods were required. However, the USAAF continued using the B-17 as a day bomber, despite misgivings by the RAF that attempts at daylight bombing would be ineffective.
As use by Bomber Command had been curtailed, the RAF transferred its remaining Fortress Mk.I aircraft to Coastal Command for use as a long-range maritime patrol aircraft. These were augmented starting in July 1942 by 45 Fortress Mk.IIA (B-17E) followed by 19 Fortress Mk II (B-17F) and three Fortress Mk III (B-17G). A Fortress IIA from No. 206 Squadron RAF sank U-627 on 1942, the first of 11 U-boat kills credited to RAF Fortress bombers during the war.
As sufficient Consolidated Liberators finally became available, Coastal Command withdrew the Fortress from the Azores, transferring the type to the meteorological reconnaissance role. Three squadrons undertook Met profiles from airfields in Iceland, Scotland, and England, gathering data for vital weather forecasting purposes.
The RAF's No. 223 Squadron, as part of 100 Group, operated several Fortresses equipped with an electronic warfare system known as "Airborne Cigar" (ABC). This was operated by German-speaking radio operators to identify and jam German ground controllers' broadcasts to their nightfighters. They could also pose as ground controllers themselves to steer nightfighters away from the bomber streams.
Initial USAAF operations over Europe.
The air corps – renamed United States Army Air Forces (USAAF) on 20 June 1941 – used the B-17 and other bombers to bomb from high altitudes with the aid of the then-secret Norden bombsight, known as the "Blue Ox", which was an optical electromechanical gyrostabilized analog computer. The device was able to determine, from variables put in by the bombardier, the point at which the bombs should be released to hit the target. The bombardier essentially took over flight control of the aircraft during the bomb run, maintaining a level altitude during the final moments before release.
The USAAF began building up its air forces in Europe using B-17Es soon after entering the war. The first Eighth Air Force units arrived in High Wycombe, England, on 1942, to form the 97th Bomb Group. On 1942, 12 B-17Es of the 97th, the lead aircraft piloted by Major Paul Tibbets and carrying Brigadier General Ira Eaker as an observer, were closely escorted by four squadrons of RAF Spitfire IXs (with a further five squadrons of Spitfire Vs covering the withdrawal) on the first USAAF heavy bomber raid over Europe, against the large railroad marshalling yards at Rouen-Sotteville in France, while a further six aircraft flew a diversionary raid along the French coast. The operation, carried out in good visibility, was a success, with only minor damage to one aircraft (unrelated to enemy action) and half the bombs landing in the target area.
Two additional groups arrived in Britain at the same time, bringing with them the first B-17Fs, which served as the primary AAF heavy bomber fighting the Germans until September 1943. As the raids of the American bombing campaign grew in numbers and frequency, German interception efforts grew in strength (such as during the attempted bombing of Kiel on 13 June 1943), such that unescorted bombing missions came to be discouraged.
Combined offensive.
The two different strategies of the American and British bomber commands were organized at the Casablanca Conference in January 1943. The resulting "Combined Bomber Offensive" weakened the Wehrmacht, destroyed German morale, and established air superiority through Operation Pointblank's destruction of German fighter strength in preparation for a ground offensive. The USAAF bombers attacked by day, with British operations – chiefly against industrial cities – by night.
Operation Pointblank opened with attacks on targets in Western Europe. General Ira C. Eaker and the Eighth Air Force placed highest priority on attacks on the German aircraft industry, especially fighter assembly plants, engine factories, and ball-bearing manufacturers. Attacks began in April 1943 on heavily fortified key industrial plants in Bremen and Recklinghausen.
Since the airfield bombings were not appreciably reducing German fighter strength, additional B-17 groups were formed, and Eaker ordered major missions deeper into Germany against important industrial targets. The 8th Air Force then targeted the ball-bearing factories in Schweinfurt, hoping to cripple the war effort there. The first raid on 1943 did not result in critical damage to the factories, with the 230 attacking B-17s being intercepted by an estimated 300 Luftwaffe fighters. The Germans shot down 36 aircraft with the loss of 200 men, and coupled with a raid earlier in the day against Regensburg, a total of 60 B-17s were lost that day.
A second attempt on Schweinfurt on 14 October 1943 later came to be known as "Black Thursday". While the attack was successful at disrupting the entire works, severely curtailing work there for the remainder of the war, it was at an extreme cost. Of the 291 attacking Fortresses, 60 were shot down over Germany, five crashed on approach to Britain, and 12 more were scrapped due to damage – a loss of 77 B-17s. Additionally, 122 bombers were damaged and needed repairs before their next flights. Of 2,900 men in the crews, about 650 did not return, although some survived as prisoners of war. Only 33 bombers landed without damage. These losses were a result of concentrated attacks by over 300 German fighters.
Such high losses of aircrews could not be sustained, and the USAAF, recognizing the vulnerability of heavy bombers to interceptors when operating alone, suspended daylight bomber raids deep into Germany until the development of an escort fighter that could protect the bombers all the way from the United Kingdom to Germany and back. At the same time, the German nightfighting ability noticeably improved to counter the nighttime strikes, challenging the conventional faith in the cover of darkness. The 8th Air Force alone lost 176 bombers in October 1943, and was to suffer similar casualties on 1944 on missions to Oschersleben, Halberstadt, and Brunswick. Lieutenant General James Doolittle, commander of the 8th, had ordered the second Schweinfurt mission to be cancelled as the weather deteriorated, but the lead units had already entered hostile air space and continued with the mission. Most of the escorts turned back or missed the rendezvous, and as a result, 60 B-17s were destroyed.
A third raid on Schweinfurt on 1944 highlighted what came to be known as "Big Week", during which the bombing missions were directed against German aircraft production. German fighters needed to respond, and the North American P-51 Mustang and Republic P-47 Thunderbolt fighters (equipped with improved drop tanks to extend their range) accompanying the American heavies all the way to and from the targets engaged them. The escort fighters reduced the loss rate to below 7%, with a total of 247 B-17s lost in 3,500 sorties while taking part in the Big Week raids.
By September 1944, 27 of the 42 bomb groups of the 8th Air Force and six of the 21 groups of the 15th Air Force used B-17s. Losses to flak continued to take a high toll of heavy bombers through 1944, but the war in Europe was being won by the Allies. By 1945, two days after the last heavy bombing mission in Europe, the rate of aircraft loss was so low that replacement aircraft were no longer arriving and the number of bombers per bomb group was reduced. The Combined Bomber Offensive was effectively complete.
Pacific Theater.
On 7 December 1941, a group of 12 B-17s of the 38th (four B-17C) and 88th (eight B-17E) Reconnaissance Squadrons, en route to reinforce the Philippines, was flown into Pearl Harbor from Hamilton Field, California, arriving while the surprise attack on Pearl Harbor was going on. Leonard "Smitty" Smith Humiston, co-pilot on First Lieutenant Robert H. Richards' B-17C, AAF S/N "40-2049", reported that he thought the U.S. Navy was giving the flight a 21-gun salute to celebrate the arrival of the bombers, after which he realized that Pearl Harbor was under attack. The Fortress came under fire from Japanese fighter aircraft, though the crew was unharmed with the exception of one member who suffered an abrasion on his hand. Japanese activity forced them to divert from Hickam Field to Bellows Field. On landing, the aircraft overran the runway and ran into a ditch, where it was then strafed. Although initially deemed repairable, "40-2049" (11th BG / 38th RS) received more than 200 bullet holes and never flew again. Ten of the 12 Fortresses survived the attack.
By 1941, the Far East Air Force (FEAF) based at Clark Field in the Philippines had 35 B-17s, with the War Department eventually planning to raise that to 165. When the FEAF received word of the attack on Pearl Harbor, General Lewis H. Brereton sent his bombers and fighters on various patrol missions to prevent them from being caught on the ground. Brereton planned B-17 raids on Japanese airfields in Formosa, in accordance with Rainbow 5 war plan directives, but this was overruled by General Douglas MacArthur. A series of disputed discussions and decisions, followed by several confusing and false reports of air attacks, delayed the authorization of the sortie. By the time the B-17s and escorting Curtiss P-40 Warhawk fighters were about to get airborne, they were destroyed by Japanese bombers of the 11th Air Fleet. The FEAF lost half its aircraft during the first strike, and was all but destroyed over the next few days.
Another early World War II Pacific engagement, on 1941, involved Colin Kelly, who reportedly crashed his B-17 into the Japanese battleship "Haruna"; the attack was later acknowledged to have been a near miss on the heavy cruiser "Ashigara". Nonetheless, the deed made him a celebrated war hero. Kelly's B-17C, AAF S/N "40-2045" (19th BG / 30th BS), crashed about from Clark Field after he held the burning Fortress steady long enough for the surviving crew to bail out. Kelly was posthumously awarded the Distinguished Service Cross.
Noted Japanese ace Saburō Sakai is credited with this kill, and in the process, came to respect the ability of the Fortress to absorb punishment.
B-17s were used in early battles of the Pacific with little success, notably the Battle of the Coral Sea and the Battle of Midway. In the Southwest Pacific, the Fifth Air Force's B-17s were tasked with disrupting the Japanese sea lanes. Air Corps doctrine dictated bombing runs from high altitude, but crews soon found that only 1% of their bombs hit their targets. The B-17s did, however, operate at heights too great for most A6M Zero fighters to reach.
The B-17's greatest success in the Pacific was in the Battle of the Bismarck Sea, in which aircraft of this type were responsible for damaging and sinking several Japanese transport ships. On 2 March 1943, six B-17s of the 64th Squadron flying at attacked a major Japanese troop convoy off New Guinea, using skip bombing to sink , which carried 1,200 army troops, and damage two other transports, "Teiyo Maru" and "Nojima". On 3 March 1943, 13 B-17s flying at bombed the convoy, forcing it to disperse and reducing the concentration of its anti-aircraft defenses. The B-17s attracted a number of Mitsubishi A6M Zero fighters, which were in turn attacked by the P-38 Lightning escorts. One B-17 broke up in the air, and its crew was forced to take to their parachutes. Japanese fighter pilots machine-gunned some of the B-17 crew members as they descended and attacked others in the water after they landed. Five of the Japanese fighters strafing the B-17 aircrew were promptly engaged and shot down by three Lightnings, though these were also then lost. The Allied fighter pilots claimed 15 Zeros destroyed, while the B-17 crews claimed five more; actual Japanese fighter losses for the day were seven destroyed and three damaged. The remaining seven transports and three of the eight destroyers were then sunk by a combination of low-level strafing runs by Royal Australian Air Force Beaufighters and skip bombing by USAAF North American B-25 Mitchells at , while B-17s claimed five hits from higher altitudes. On the morning of 4 March 1943, a B-17 sank the destroyer "Asashio" with a bomb while she was picking up survivors from "Arashio".
At their peak, 168 B-17 bombers were in the Pacific theater in September 1942, but already in mid-1942 Gen. Arnold had decided that the B-17 was unsuitable for the kind of operations required in the Pacific and made plans to replace all of the B-17s in the theater with B-24s (and later, B-29s) as soon as they became available. Although the conversion was not complete until mid-1943, B-17 combat operations in the Pacific theater came to an end after a little over a year. Surviving aircraft were reassigned to the 54th Troop Carrier Wing's special airdrop section and were used to drop supplies to ground forces operating in close contact with the enemy. Special airdrop B-17s supported Australian commandos operating near the Japanese stronghold at Rabaul, which had been the primary B-17 target in 1942 and early 1943.
B-17s were still used in the Pacific later in the war, however, mainly in the combat search and rescue role. A number of B-17Gs, redesignated B-17Hs and later SB-17Gs, were used in the Pacific during the final year of the war to carry and drop lifeboats to stranded bomber crews who had been shot down or crashed at sea. These aircraft were nicknamed Dumbos, and remained in service for many years after the end of World War II.
Bomber defense.
Before the advent of long-range fighter escorts, B-17s had only their .50 caliber M2 Browning machine guns to rely on for defense during the bombing runs over Europe. As the war intensified, Boeing used feedback from aircrews to improve each new variant with increased armament and armor. Defensive armament increased from four machine guns and one nose gun in the B-17C to thirteen machine guns in the B-17G. But because the bombers could not maneuver when attacked by fighters and needed to be flown straight and level during their final bomb run, individual aircraft struggled to fend off a direct attack.
A 1943 survey by the USAAF found that over half the bombers shot down by the Germans had left the protection of the main formation. To address this problem, the United States developed the bomb-group formation, which evolved into the staggered combat box formation in which all the B-17s could safely cover any others in their formation with their machine guns. This made a formation of bombers a dangerous target to engage by enemy fighters. In order to more quickly form these formations, assembly ships, planes with distinctive paint schemes, were utilized to guide bombers into formation, saving assembly time. "Luftwaffe" fighter pilots likened attacking a B-17 combat box formation to encountering a "fliegendes Stachelschwein", "flying porcupine", with dozens of machine guns in a combat box aimed at them from almost every direction. However, the use of this rigid formation meant that individual aircraft could not engage in evasive maneuvers: they had to fly constantly in a straight line, which made them vulnerable to German flak. Moreover, German fighter aircraft later developed the tactic of high-speed strafing passes rather than engaging with individual aircraft to inflict damage with minimum risk.
The B-17 was noted for its ability to absorb battle damage, still reach its target and bring its crew home safely. Wally Hoffman, a B-17 pilot with the Eighth Air Force during World War II, said, "The plane can be cut and slashed almost to pieces by enemy fire and bring its crew home." Martin Caidin reported one instance in which a B-17 suffered a midair collision with a Focke-Wulf Fw 190, losing an engine and suffering serious damage to both the starboard horizontal stabilizer and the vertical stabilizer, and being knocked out of formation by the impact. The B-17 was reported as shot down by observers, but it survived and brought its crew home without injury. Its toughness was compensation for its shorter range and lighter bomb load compared to the B-24 and British Avro Lancaster heavy bombers. Stories circulated of B-17s returning to base with tails shredded, engines destroyed and large portions of their wings destroyed by flak. This durability, together with the large operational numbers in the Eighth Air Force and the fame achieved by the "Memphis Belle", made the B-17 a key bomber aircraft of the war. Other factors such as combat effectiveness and political issues also contributed to the B-17's success.
The B-17 adopted early electronic countermeasures, such as Window and Carpet, to confuse German radar. These greatly reduced the effectiveness of German flak, by perhaps as much as 75%, with one estimate crediting the technologies with saving some 450 bombers.
Luftwaffe attacks.
After examining wrecked B-17s and B-24s, Luftwaffe officers discovered that on average it took about 20 hits with 20 mm shells fired from the rear to bring them down. Pilots of average ability hit the bombers with only about two percent of the rounds they fired, so to obtain 20 hits, the average pilot had to fire one thousand rounds at a bomber. Early versions of the Fw 190, one of the best German interceptor fighters, were equipped with two MG FF cannons, which carried only 500 rounds when belt-fed (normally using 60-round drum magazines in earlier installations), and later with the better Mauser MG 151/20 cannons, which had a longer effective range than the MG FF weapon. Later versions carried four or even six MG 151/20 cannon and twin 13 mm machine guns. The German fighters found that when attacking from the front, where fewer defensive guns were mounted (and where the pilot was exposed and not protected by armor as he was from the rear), it took only four or five hits to bring a bomber down.
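The Luftwaffe estimate above is a simple expected-value calculation, which can be sketched as follows; the hit counts and the two-percent hit rate are the figures quoted in the text, used purely as illustration.

```python
# Expected rounds an average fighter pilot had to fire to down a bomber,
# using the figures quoted above (illustrative arithmetic only).

def rounds_needed(hits_to_kill: int, hit_probability: float) -> int:
    """Expected number of rounds fired to score `hits_to_kill` hits."""
    return round(hits_to_kill / hit_probability)

# Attacking from the rear: ~20 hits of 20 mm fire needed, ~2% of rounds hit.
print(rounds_needed(20, 0.02))  # 1000 rounds

# Attacking head-on: only 4-5 hits needed at the same hit rate.
print(rounds_needed(5, 0.02))   # 250 rounds
```

This is why the head-on attack, despite being harder to fly, was so much more economical in ammunition for the attacker.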
To rectify the Fw 190's shortcomings, the number of cannons fitted was doubled to four, with a corresponding increase in the amount of ammunition carried, creating the "Sturmbock" bomber-destroyer version. This type replaced the vulnerable twin-engine "Zerstörer" heavy fighters, which could not survive interception by the P-51 Mustangs that, from very early in 1944, flew well ahead of the combat boxes in an air-supremacy role to clear the skies of Luftwaffe defensive fighters. By 1944, a further upgrade to Rheinmetall-Borsig's MK 108 cannons, mounted either in the wing or in underwing conformal gun pods, was made for the "Sturmbock" Focke-Wulfs as the /R2 or /R8 field-modification kits, enabling an aircraft to bring a bomber down with just a few hits.
Luftwaffe-captured B-17s.
During World War II approximately 40 B-17s were captured and refurbished by Germany after crash-landing or being forced down, with about a dozen put back into the air. Given German "Balkenkreuz" national markings on their wings and fuselage sides, and swastika tail-fin flashes, the captured B-17s were used to determine the B-17's vulnerabilities and to train German interceptor pilots in attack tactics. Others, under the cover designations Dornier Do 200 and Do 288, were used as long-range transports by the "Kampfgeschwader" 200 special-duties unit, carrying out agent drops and supplying secret airstrips in the Middle East and North Africa. They were chosen for these missions as being more suitable for the role than other available German aircraft; they never attempted to deceive the Allies and always wore full "Luftwaffe" markings. One B-17 of KG 200, bearing the "Luftwaffe" "Geschwaderkennung" (combat wing code) markings "A3+FB", was interned by Spain when it landed at Valencia airfield in 1944, remaining there for the rest of the war. It has been alleged that some B-17s kept their Allied markings and were used by the "Luftwaffe" in attempts to infiltrate B-17 bombing formations and report on their positions and altitudes. According to these allegations, the practice was initially successful, but Army Air Forces combat aircrews quickly developed and established standard procedures to first warn off, and then fire upon, any "stranger" trying to join a group's formation.
Soviet-interned B-17s.
The U.S. did not offer B-17s to the Soviet Union as part of its war materiel assistance program, but at least 73 aircraft were acquired by the Soviet Air Force. These aircraft had landed with mechanical trouble during the shuttle bombing raids over Germany or had been damaged by a "Luftwaffe" raid at Poltava. The Soviets restored 23 to flying condition and concentrated them in the 890th Bomber Regiment of the 45th Bomber Aviation Division, but they never saw combat. In 1946 (or 1947, according to Holm), the regiment was assigned to the Kazan factory (moving from Baranovichi) to help the Soviet effort to reproduce the more advanced Boeing B-29 as the Tupolev Tu-4.
Swiss-interned B-17s.
During the Allied bomber offensive, some US and British bombers landed in Switzerland and were interned. Some had been damaged and were unable to get back to Allied bases. Others flew into Swiss airspace due to navigation errors, and on rare occasions, accidentally bombed Swiss cities. Swiss fighter aircraft intercepted such aircraft and sought to force them to land.
In October 1943, a B-17F-25-VE (tail number 25841) developed engine trouble after a raid over Germany and was forced to land in Switzerland. The plane and its US flight crew were interned. The aircraft was turned over to the Swiss Air Force, which flew the bomber until the end of the war, using other interned but non-airworthy B-17s for spare parts. The bomber's topside surfaces were repainted a dark olive drab, but it retained its light gray underwing and lower fuselage surfaces. It carried the Swiss national white cross insignia in red squares on the topside and underside of its wings, and on both sides of its rudder and its fuselage, with the light gray flash letters "RD" and "I" on either side of the fuselage insignias.
Japanese-captured B-17s.
In 1942, Japanese technicians and mechanics rebuilt three damaged B-17s, one "D" and two "E" series, using parts salvaged from abandoned B-17 wrecks in the Philippines and on Java in the East Indies. The three bombers, which still contained their top-secret Norden bombsights, were ferried to Japan, where they underwent extensive technical evaluation by the "Giken", the Imperial Japanese Army Air Force's Air Technical Research Institute ("Koku Gijutsu Kenkyujo"), at Tachikawa's airfield. The "D" model, later deemed an obsolescent design, was used in Japanese training and propaganda films. The two "E"s were used to develop air combat tactics for use against B-17s; they were also used as enemy aircraft in pilot and crew training films. One of the two "E"s was photographed late in the war by US aerial reconnaissance. It was code-named "Tachikawa 105" after the mystery aircraft's estimated wingspan, and was not correctly identified as a captured B-17 until after the war. No traces of the three captured Flying Fortresses were ever found in Japan by Allied occupation forces. The bombers were assumed either lost by various means or scrapped late in the war for their vital war materials.
Postwar history.
U.S. Air Force.
After World War II, the B-17 was quickly phased out of use as a bomber and the Army Air Forces retired most of its fleet. Flight crews ferried the bombers back across the Atlantic to the United States where the majority were sold for scrap and melted down, although many remained in use in second-line roles such as VIP transports, air-sea rescue and photo-reconnaissance. Strategic Air Command (SAC), established in 1946, used reconnaissance B-17s (at first called F-9 ["F" for "Fotorecon"], later RB-17) until 1949.
The USAF Air Rescue Service of the Military Air Transport Service (MATS) operated B-17s as so-called "Dumbo" air-sea rescue aircraft. Work on using B-17s to carry airborne lifeboats had begun in 1943, but they entered service in the European theater only in February 1945. They were also used to provide search and rescue support for B-29 raids against Japan. About 130 B-17s were converted to the air-sea rescue role, at first designated B-17H and later SB-17G. Some SB-17s had their defensive guns removed, while others retained their guns to allow use close to combat areas. The SB-17 served through the Korean War, remaining in service with USAF until the mid-1950s.
In 1946, surplus B-17s were chosen as drone aircraft for atmospheric sampling during the Operation Crossroads atomic bomb tests, being able to fly close to or even through the mushroom clouds without endangering a crew. This led to more widespread conversion of B-17s as drones and drone-control aircraft, both for further use in atomic testing and as targets for testing surface-to-air and air-to-air missiles. The last operational mission flown by a USAF Fortress was conducted in 1959, when a DB-17P, serial "44-83684", directed a QB-17G out of Holloman Air Force Base, New Mexico, as a target for an AIM-4 Falcon air-to-air missile fired from a McDonnell F-101 Voodoo. A retirement ceremony was held several days later at Holloman AFB, after which "44-83684" was retired. It was subsequently used in various films and in the 1960s television show "12 O'Clock High" before being retired to the Planes of Fame aviation museum in Chino, California. Perhaps the most famous B-17, the "Memphis Belle", has been restored to her World War II wartime appearance by the National Museum of the United States Air Force at Wright-Patterson Air Force Base, Ohio, with restoration of the B-17D "The Swoose" under way.
U.S. Navy and Coast Guard.
During the last year of World War II and shortly thereafter, the United States Navy (USN) acquired 48 ex-USAAF B-17s for patrol and air-sea rescue work. The first two ex-USAAF B-17s, a B-17F (later modified to B-17G standard) and a B-17G were obtained by the Navy for various development programs. At first, these aircraft operated under their original USAAF designations, but on 31 July 1945 they were assigned the naval aircraft designation PB-1, a designation which had originally been used in 1925 for the Boeing Model 50 experimental flying boat.
Thirty-two B-17Gs were used by the Navy under the designation PB-1W, the suffix -W indicating an airborne early warning role. A large radome for an S-band AN/APS-20 search radar was fitted underneath the fuselage and additional internal fuel tanks were added for longer range, with the provision for additional underwing fuel tanks. Originally, the B-17 was also chosen because of its heavy defensive armament, but this was later removed. These aircraft were painted dark blue, the standard Navy paint scheme which had been adopted in late 1944. PB-1Ws continued in USN service until 1955, gradually being phased out in favor of the Lockheed WV-2 (known in the USAF as the EC-121, a designation adopted by the USN in 1962), a military version of the Lockheed 1049 Constellation commercial airliner.
In July 1945, 16 B-17s were transferred to the Coast Guard via the Navy; these aircraft were initially assigned U.S. Navy Bureau Numbers (BuNo), but were delivered to the Coast Guard designated as PB-1Gs beginning in July 1946. Coast Guard PB-1Gs were stationed at a number of bases in the U.S. and Newfoundland, with five at Coast Guard Air Station Elizabeth City, North Carolina, two at CGAS San Francisco, two at NAS Argentia, Newfoundland, one at CGAS Kodiak, Alaska, and one in Washington state. They were used primarily in the "Dumbo" air-sea rescue role, but were also used for iceberg patrol duties and for photo mapping. The Coast Guard PB-1Gs served throughout the 1950s, the last example not being withdrawn from service until 14 October 1959.
Special operations.
B-17s were used by the CIA front companies Civil Air Transport, Air America and Intermountain Aviation for special missions. These included B-17G "44-85531", registered as N809Z. These aircraft were primarily used for agent drop missions over the People's Republic of China, flying from Taiwan, with Taiwanese crews. Four B-17s were shot down in these operations.
By 1957 the surviving B-17s had been stripped of all weapons and painted black. One of these Taiwan-based B-17s was flown to Clark Air Base in the Philippines in mid-September, assigned to covert missions into Tibet.
On 28 May 1962, N809Z, piloted by Connie Seigrist and Douglas Price, flew Major James Smith, USAF and Lieutenant Leonard A. LeSchack, USNR to the abandoned Soviet arctic ice station NP 8, as Operation Coldfeet. Smith and LeSchack parachuted from the B-17 and searched the station for several days. On 1 June, Seigrist and Price returned and picked up Smith and LeSchack using a Fulton Skyhook system installed on the B-17. N809Z was used to perform a Skyhook pick up in the James Bond movie "Thunderball" in 1965. This aircraft, now restored to its original B-17G configuration, was on display in the Evergreen Aviation & Space Museum in McMinnville, Oregon until it was sold to the Collings Foundation in 2015.
Operators.
The B-17, a versatile aircraft, served in dozens of USAAF units in theaters of combat throughout World War II, and in other roles for the RAF. Its main use was in Europe, where its shorter range and smaller bombload relative to other aircraft did not hamper it as much as in the Pacific Theater. Peak USAAF inventory (in August 1944) was 4,574 worldwide.
Surviving aircraft and wrecks.
Of the more than 12,000 B-17s made, four were known to be airworthy as of January 2025. About 40 B-17s are held in collections in the United States, and about 46 survive globally.
Nearly complete or partially complete B-17 wrecks have also been discovered; one example is a B-17F that ditched in the Pacific on 11 July 1943 and was located in 1986.
Fortresses as a symbol.
The B-17 Flying Fortress became symbolic of the United States of America's air power. In a 1943 Consolidated Aircraft poll of 2,500 men in cities where Consolidated advertisements had been run in newspapers, 73% had heard of the B-24 and 90% knew of the B-17.
After the first Y1B-17s were delivered to the Army Air Corps 2nd Bombardment Group, they were used on flights to promote their long range and navigational capabilities. In January 1938, group commander Colonel Robert Olds flew a Y1B-17 from the U.S. east coast to the west coast, setting a transcontinental record of 13 hours 27 minutes. He also broke the west-to-east coast record on the return trip, completing it in 11 hours 1 minute. Six bombers of the 2nd Bombardment Group took off from Langley Field in 1938 as part of a goodwill flight to Buenos Aires, Argentina; on their return, seven aircraft set off on a flight to Rio de Janeiro, Brazil, three days later. In a well-publicized mission on 12 May of the same year, three Y1B-17s "intercepted" and took photographs of the Italian ocean liner SS "Rex" off the Atlantic coast.
Many pilots who flew both the B-17 and the B-24 preferred the B-17 for its greater stability and ease in formation flying. The electrical systems were less vulnerable to damage than the B-24's hydraulics, and the B-17 was easier to fly than a B-24 when missing an engine. During the war, the largest offensive bombing force, the Eighth Air Force, had an open preference for the B-17. Lieutenant General Jimmy Doolittle wrote about his preference for equipping the Eighth with B-17s, citing the logistical advantage in keeping field forces down to a minimum number of aircraft types with their individual servicing and spares. For this reason, he wanted B-17 bombers and P-51 fighters for the Eighth. His views were supported by Eighth Air Force statisticians, whose mission studies showed that the Flying Fortress's utility and survivability was much greater than those of the B-24 Liberator. Making it back to base on numerous occasions, despite extensive battle damage, the B-17's durability became legendary; stories and photos of B-17s surviving battle damage were widely circulated during the war. Despite an inferior performance and smaller bombload than the more numerous B-24 Liberators, a survey of Eighth Air Force crews showed a much higher rate of satisfaction with the B-17.
Accidents and incidents.
Most losses occurred during World War II; however, because surviving aircraft continue to fly as warbirds, there have been losses in the 2020s as well.
Noted B-17 pilots and crew members.
Medal of Honor recipients.
Many B-17 crew members received military honors and 17 received the Medal of Honor, the highest military decoration awarded by the United States:
Notable appearances in media.
A Douglas Aircraft B-17 assembly line is featured in the 1944 drama "An American Romance". Hollywood featured the B-17 in its period films, such as director Howard Hawks' "Air Force" starring John Garfield and "Twelve O'Clock High" starring Gregory Peck. Both films were made with the full cooperation of the United States Army Air Forces and used USAAF aircraft and (for "Twelve O'Clock High") combat footage. In 1964, the latter film was made into a television show of the same name, which ran for three years on ABC TV. Footage from "Twelve O'Clock High" was also used, along with three restored B-17s, in the 1962 film "The War Lover". An early-model YB-17 also appeared in the 1938 film "Test Pilot" with Clark Gable and Spencer Tracy; the B-17 appeared again with Clark Gable in "Command Decision" in 1948, in "Tora! Tora! Tora!" in 1970, and in "Memphis Belle" with Matthew Modine, Eric Stoltz, Billy Zane, and Harry Connick Jr. in 1990. The most famous B-17, the "Memphis Belle", toured the U.S. with her crew to reinforce national morale and to sell war bonds, and was featured in a USAAF documentary.
The Flying Fortress has also been featured in artistic works expressing the physical and psychological stress of the combat conditions and the high casualty rates that crews suffered. Works such as Randall Jarrell's poem "The Death of the Ball Turret Gunner" and the "B-17" segment of "Heavy Metal" depict the nature of these missions. The ball turret itself inspired works such as Steven Spielberg's "The Mission". Artists who served in the bomber units also created paintings and drawings depicting the combat conditions of World War II. "Masters of the Air", a 2024 American war drama television miniseries created by John Shiban and John Orloff, based on the 2007 book "Masters of the Air: America's Bomber Boys Who Fought the Air War Against Nazi Germany" by Donald L. Miller, follows the actions of the 100th Bomb Group, a B-17 unit in eastern England during World War II.
Trieste (bathyscaphe)
Trieste is a Swiss-designed, Italian-built deep-diving research bathyscaphe. In 1960, it became the first crewed vessel to reach the bottom of Challenger Deep in the Mariana Trench, the deepest point in Earth's seabed. The mission was the final goal of Project Nekton, a series of dives conducted by the United States Navy in the Pacific Ocean near Guam. The vessel was piloted by Swiss oceanographer Jacques Piccard and US Navy lieutenant Don Walsh.
The bathyscaphe was designed by Swiss scientist Auguste Piccard, the father of pilot Jacques Piccard. It was built in Italy and first launched in 1953. The vessel was first owned and operated by the French Navy until it was purchased by the US Navy in 1958. It was taken out of service in 1966. Since the 1980s, it has been on exhibit in the National Museum of the United States Navy in Washington, D.C.
Design.
"Trieste" was designed by the Swiss scientist Auguste Piccard, based on his previous experience with the bathyscaphe "FNRS-2". The term bathyscaphe refers to its capacity to dive and manoeuvre untethered to a ship, in contrast to a bathysphere; "bathys" is ancient Greek for "deep" and "scaphe" a light, bowl-shaped boat.
Built in Italy and launched on 26 August 1953 near the Isle of Capri in the Mediterranean Sea, it was operated in the Mediterranean by the French Navy for several years until it was purchased by the United States Navy in 1958 for US$250,000.
"Trieste" consisted of a heavy crew sphere suspended from a hull containing tanks filled with gasoline (petrol) for buoyancy, ballast hoppers filled with iron shot, and floodable water tanks for sinking. This general configuration remained the same, although the hull was modified and lengthened for Project Nekton, which included the dive to Challenger Deep.
The hull was built by Cantieri Riuniti dell'Adriatico in the Free Territory of Trieste, on the border between Italy and Yugoslavia and now in Italy, hence the name. The pressure sphere was built separately and installed on the hull at the Cantiere navale di Castellammare di Stabia, near Naples.
The pressure sphere was attached to the underside of the hull and accommodated two crew who accessed it via a vertical shaft through the hull; this access shaft was not pressurized and flooded with seawater on descent. The sphere was completely self-contained, having a closed-circuit rebreather system with oxygen provided from cylinders while carbon dioxide was scrubbed from the air by being passed through canisters of soda-lime. Batteries provided electrical power.
Piccard's original pressure sphere was built by Acciaierie di Terni from steel forged in two hemispheres and welded together to form a sphere. This pressure sphere was replaced in December 1958 with another, cast by the Krupp Steel Works of Essen, Germany, in three sections (an equatorial ring and two caps), which were finely machined and joined by the Ateliers de Constructions Mécaniques de Vevey. The new sphere was also steel, but smaller in diameter and with thicker walls, calculated to withstand the pressure at the bottom of Challenger Deep plus a substantial factor of safety. The new sphere weighed 13 tonnes in air and 8 tonnes in water, giving it an average specific gravity 2.6 times (or 1.6 times greater than) that of seawater (13÷(13−8)).
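The specific-gravity figure follows directly from the sphere's weight in air and in water, since their difference equals the weight of the displaced seawater. A minimal sketch, using the 13 and 8 tonne values implied by the (13÷(13−8)) calculation above:

```python
# Specific gravity of the pressure sphere relative to seawater, computed
# from its weight in air and its apparent weight submerged in seawater.

def specific_gravity(weight_air: float, weight_water: float) -> float:
    # Buoyant force = weight_air - weight_water = weight of displaced seawater,
    # so density relative to seawater = weight_air / buoyant force.
    return weight_air / (weight_air - weight_water)

print(specific_gravity(13.0, 8.0))  # 2.6
```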
Outside observations by the crew were made through a porthole made from a single tapered block of acrylic glass, the only transparent material available that could withstand the pressure. Outside illumination was by quartz arc-light bulbs, which could withstand the pressure without modification.
The buoyancy tanks were filled with gasoline, which floats on water and is similarly nearly incompressible. Changes in the volume of the gasoline caused by slight compression or temperature changes were accommodated by the free flow of seawater into and out of the bottom of the tanks through valves during a dive, equalising the pressure and allowing the tanks to be lightly built.
Ballast was held in two conical hoppers fore and aft of the crew sphere, each containing iron shot. This shot ballast allowed the craft to sink, and its release caused it to ascend. The iron shot was locked in place at the throats of the hoppers by electromagnets, so it was released either by switching the electromagnets off or automatically in the event of an electrical failure. Progressive release allowed buoyancy trim. The compressed-air-driven variable-buoyancy pressure vessels typically used in submarines are not feasible at extreme pressure.
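The dive logic described above (sink while the shot is aboard, ascend once enough of it is released) reduces to a sign check on net buoyancy. A minimal sketch, using illustrative placeholder masses rather than the craft's actual figures:

```python
# Sketch of the bathyscaphe's ballast principle: the craft descends while
# total weight exceeds the lift of the gasoline floats, and ascends once
# enough iron shot has been dropped. All masses are illustrative placeholders.

def net_buoyancy(float_lift: float, hull_weight: float, shot: float) -> float:
    """Positive result -> craft ascends; negative -> craft descends."""
    return float_lift - (hull_weight + shot)

FLOAT_LIFT = 50.0   # tonnes of lift from the gasoline tanks (placeholder)
HULL_WEIGHT = 45.0  # tonnes for hull, sphere, and crew (placeholder)

print(net_buoyancy(FLOAT_LIFT, HULL_WEIGHT, shot=9.0) < 0)  # True: descending
print(net_buoyancy(FLOAT_LIFT, HULL_WEIGHT, shot=0.0) > 0)  # True: ascending
```

Because the electromagnets fail open, any loss of electrical power releases the shot and returns the craft to the surface by default.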
Water tanks at each end of the hull were pumped out for flotation, lifting, and towing on the surface, and fully flooded to allow sinking.
Following its acquisition by the United States Navy, "Trieste" was modified extensively by the Naval Electronics Laboratory in San Diego, California, and tested in the Pacific Ocean over the next few years, culminating in a dive to the bottom of Challenger Deep on 23 January 1960.
The Mariana Trench dives.
"Trieste" departed San Diego on 5 October 1959 for Guam aboard the freighter "Santa Maria" to participate in "Project Nekton", a series of very deep dives in the Mariana Trench.
On 23 January 1960, it reached the ocean floor in the Challenger Deep (the deepest southern part of the Mariana Trench), carrying Jacques Piccard and Don Walsh. This was the first time a vessel, crewed or uncrewed, had reached the deepest known point of the Earth's oceans. The depth indicated by the onboard systems was later revised, and more recent, more accurate measurements have further refined the known depth of Challenger Deep.
The descent to the ocean floor took 4 hours 47 minutes. During the descent, one of the outer Plexiglas window panes cracked, shaking the entire vessel. The two men spent twenty minutes on the ocean floor; the temperature in the cabin was 7 °C (45 °F) at the time. While at maximum depth, Piccard and Walsh unexpectedly regained the ability to communicate with the support ship, USS "Wandank" (ATA-204), using a sonar/hydrophone voice communications system. Traveling at the speed of sound in water, about five times the speed of sound in air, a voice message took about seven seconds to travel from the craft to the support ship, and answers took another seven seconds to return.
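The seven-second delay is consistent with the acoustic path length. A quick check, assuming a depth near 10,900 m and a sound speed in seawater of roughly 1,500 m/s (both typical textbook values, not figures from the source):

```python
# One-way travel time of a voice message over the sonar/hydrophone link.
# Depth and sound speed are assumed textbook values, not source figures.

DEPTH_M = 10_900             # approximate depth of Challenger Deep (assumed)
SOUND_SPEED_WATER = 1_500    # m/s in seawater (assumed); ~5x the ~340 m/s in air

one_way_seconds = DEPTH_M / SOUND_SPEED_WATER
print(round(one_way_seconds, 1))  # 7.3 seconds each way
```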
While at the bottom, Piccard and Walsh reported observing a number of sole and flounder (both flatfish). The accuracy of this observation has since been questioned, and recent authorities do not recognize it as valid: there is a theoretical maximum depth for fish, beyond which they would become hyperosmotic, and Challenger Deep lies well below it. Invertebrates such as sea cucumbers, some of which could potentially be mistaken for flatfish, have been confirmed at such depths and greater. Walsh later said that their original observation could have been mistaken, as their knowledge of biology was limited. Piccard and Walsh noted that the floor of the Challenger Deep consisted of "diatomaceous ooze". The ascent took 3 hours and 15 minutes. The National Museum of the Navy commemorated the 60th anniversary of the dive in January 2020.
Other deep dives and retirement.
The "Trieste" performed a number of deep dives in the Mediterranean before being purchased by the U.S. Navy in 1958, conducting 48 deep dives between 1953 and 1957 as the "Batiscafo Trieste".