The form and function of modern humans' upper bodies appear to have evolved from life in a more forested setting, where the ability to travel arboreally would have been advantageous. Although different from human walking, bipedal locomotion in trees is also thought to have been advantageous. It has likewise been proposed that, like some modern apes, early hominins passed through a knuckle-walking stage before adapting the hind limbs for bipedality while retaining forearms capable of grasping. Proposed causes of the evolution of human bipedalism include freeing the hands for carrying and using tools, sexual dimorphism in provisioning, changes in climate and environment (from jungle to savanna) that favored a more elevated eye position, and reduction of the amount of skin exposed to the tropical sun. Bipedalism may have provided a variety of benefits to hominin species, and scientists have suggested multiple reasons for its evolution. There is also not only the question of why the earliest hominins were partially bipedal but also of why hominins became more bipedal over time. For example, the postural feeding hypothesis describes how the earliest hominins became bipedal in order to reach food in trees, while the savanna-based theory describes how the late hominins that began to settle on the ground became increasingly bipedal.
Multiple factors.
Napier (1963) argued that it is unlikely that a single factor drove the evolution of bipedalism. He stated, "It seems unlikely that any single factor was responsible for such a dramatic change in behaviour. In addition to the advantages accruing from ability to carry objects – food or otherwise – the improvement of the visual range and the freeing of the hands for purposes of defence and offence may equally have played their part as catalysts." Sigmon (1971) demonstrated that chimpanzees exhibit bipedalism in different contexts and argued that no single factor should be used to explain bipedalism; rather, these varied bipedal behaviors should be seen as a preadaptation for human bipedalism. Day (1986) emphasized three major pressures that drove the evolution of bipedalism: food acquisition, predator avoidance, and reproductive success. Ko (2015) stated that there are two main questions regarding bipedalism: (1) why were the earliest hominins partially bipedal? and (2) why did hominins become more bipedal over time? He argued that these questions can be answered with a combination of prominent theories such as the savanna-based, postural feeding, and provisioning models.
Savanna-based theory.
According to the savanna-based theory, hominines came down from the trees' branches and adapted to life on the savanna by walking erect on two feet. The theory suggests that early hominids were forced to adapt to bipedal locomotion on the open savanna after they left the trees. One of the proposed mechanisms was the knuckle-walking hypothesis, which states that human ancestors used quadrupedal locomotion on the savanna, as evidenced by morphological characteristics found in the forelimbs of "Australopithecus anamensis" and "Australopithecus afarensis", and that it is less parsimonious to assume that knuckle walking evolved twice, independently in "Pan" and "Gorilla", than that it evolved once as a synapomorphy for "Pan" and "Gorilla" and was later lost in "Australopithecus". The evolution of an orthograde posture would have been very helpful on a savanna, as it would allow an animal to look over tall grasses to watch out for predators, or to hunt terrestrially and sneak up on prey. It was also suggested in P. E. Wheeler's "The evolution of bipedality and loss of functional body hair in hominids" that a possible advantage of bipedalism in the savanna was reducing the surface area of the body exposed to the sun, helping regulate body temperature. Elizabeth Vrba's turnover pulse hypothesis supports the savanna-based theory by explaining how the shrinking of forested areas due to global warming and cooling forced animals out into the open grasslands and created the need for hominids to acquire bipedality.
Others state that hominines had already achieved the bipedal adaptation before it was used in the savanna. The fossil evidence reveals that early bipedal hominins were still adapted to climbing trees at the time they were also walking upright. It is possible that bipedalism evolved in the trees and was later retained when hominins moved onto the savanna. Humans and orangutans are unique among apes in showing a bipedal reactive adaptation when climbing on thin branches: both increase hip and knee extension in relation to the diameter of the branch, which can enlarge an arboreal feeding range and may reflect a convergent evolution of bipedalism in arboreal environments. Hominine fossils found in dry grassland environments led anthropologists to conclude that hominines lived, slept, walked upright, and died only in those environments, because no hominine fossils had been found in forested areas. However, fossilization is a rare occurrence: the conditions must be just right for an organism that dies to become fossilized, and for somebody to find the fossil later is rarer still. The fact that no hominine fossils were found in forests does not ultimately lead to the conclusion that no hominines ever died there. The convenience of the savanna-based theory caused this point to be overlooked for over a hundred years.
Some of the fossils found actually show a continuing adaptation to arboreal life. For example, Lucy, the famous "Australopithecus afarensis" found at Hadar in Ethiopia, in an area that may have been forested at the time of her death, had curved fingers that would still have given her the ability to grasp tree branches, yet she walked bipedally. "Little Foot", a nearly complete specimen of "Australopithecus africanus", has a divergent big toe as well as the ankle strength to walk upright. "Little Foot" could grasp things with his feet like an ape, perhaps tree branches, and he was bipedal. Ancient pollen found in the soil at the locations where these fossils were found suggests that the area used to be much wetter and covered in thick vegetation, and has only recently become the arid desert it is now.
Traveling efficiency hypothesis.
An alternative explanation is that the mixture of savanna and scattered forests increased terrestrial travel by proto-humans between clusters of trees, and bipedalism offered greater efficiency for long-distance travel between these clusters than quadrupedalism. In an experiment monitoring chimpanzee metabolic rate via oxygen consumption, quadrupedal and bipedal energy costs were found to be very similar, implying that this transition in early ape-like ancestors would not have been very difficult or energetically costly. This increased travel efficiency is likely to have been selected for, as it assisted foraging across widely dispersed resources.
Postural feeding hypothesis.
The postural feeding hypothesis has recently been supported by Dr. Kevin Hunt, a professor at Indiana University. This hypothesis asserts that chimpanzees are bipedal only when they eat: while on the ground, they reach up for fruit hanging from small trees, and while in trees, they use bipedalism to reach overhead branches. These bipedal movements may have evolved into regular habits because they were so convenient in obtaining food. Hunt's hypothesis also states that these movements coevolved with chimpanzee arm-hanging, as this movement was very effective and efficient in harvesting food. In fossil anatomy, "Australopithecus afarensis" has hand and shoulder features very similar to those of the chimpanzee, which indicates arm-hanging. The "Australopithecus" hip and hind limb clearly indicate bipedalism, but these fossils also indicate very inefficient locomotion compared to humans. For this reason, Hunt argues that bipedalism evolved more as a terrestrial feeding posture than as a walking posture.
In a related study, University of Birmingham professor Susannah Thorpe examined the most arboreal great ape, the orangutan, which holds onto supporting branches in order to navigate branches that would otherwise be too flexible or unstable. In more than 75 percent of observations, the orangutans used their forelimbs to stabilize themselves while navigating thinner branches. Increased fragmentation of the forests where "A. afarensis" and other ancestors of modern humans and other apes resided could have contributed to this increase of bipedalism in order to navigate the diminishing forests. The findings could also shed light on discrepancies observed in the anatomy of "A. afarensis", such as an ankle joint that allowed it to "wobble" and long, highly flexible forelimbs. If bipedalism started as upright navigation in trees, it could explain both the increased flexibility in the ankle and the long forelimbs used to grab hold of branches.
Provisioning model.
One influential behavioural model is C. Owen Lovejoy's provisioning model (1981), which proposes that the earliest hominins were pair-bonded, with males provisioning their mates and offspring with gathered food; carrying food over distance strongly favors walking on two legs with the hands left free.
However, this model has been debated, as others have argued that early bipedal hominids were instead polygynous. Among most monogamous primates, males and females are about the same size; that is, sexual dimorphism is minimal. Other studies, though, have suggested that "Australopithecus afarensis" males were nearly twice the weight of females. Lovejoy's model posits that the larger range a provisioning male would have to cover (to avoid competing with the female for resources she could obtain herself) would select for increased male body size to limit predation risk. Furthermore, as the species became more bipedal, specialized feet would have prevented the infant from conveniently clinging to the mother, hampering the mother's freedom and thus making her and her offspring more dependent on resources collected by others. Modern monogamous primates such as gibbons tend also to be territorial, but fossil evidence indicates that "Australopithecus afarensis" lived in large groups. However, while both gibbons and hominids have reduced canine sexual dimorphism, female gibbons enlarge ('masculinize') their canines so they can actively share in the defense of their home territory. In contrast, the reduction of the male hominid canine is consistent with reduced inter-male aggression in a pair-bonded though group-living primate.
Early bipedalism in Homininae model.
Recent studies of the 4.4-million-year-old "Ardipithecus ramidus" suggest bipedalism. It is thus possible that bipedalism evolved very early in Homininae and was reduced in chimpanzees and gorillas when they became more specialized. Other recent studies of the foot structure of "Ardipithecus ramidus" suggest that the species was closely related to African-ape ancestors, possibly providing a species close to the true connection between fully bipedal hominins and quadrupedal apes. According to Richard Dawkins in his book "The Ancestor's Tale", chimpanzees and bonobos are descended from "Australopithecus" gracile-type species while gorillas are descended from "Paranthropus". These apes may once have been bipedal, but then lost this ability when they were forced back into an arboreal habitat, presumably by the australopithecines from whom hominins eventually evolved. Early hominines such as "Ardipithecus ramidus" may have possessed an arboreal type of bipedalism that later evolved independently towards knuckle-walking in chimpanzees and gorillas and towards efficient walking and running in modern humans. It has also been proposed that one cause of Neanderthal extinction was less efficient running.
Warning display (aposematic) model.
Joseph Jordania from the University of Melbourne suggested in 2011 that bipedalism was one of the central elements of the general defense strategy of early hominids, based on aposematism, or warning display: the intimidation of potential predators and competitors with exaggerated visual and auditory signals. According to this model, hominids were trying to stay as visible and as loud as possible at all times. Several morphological and behavioral developments were employed to achieve this goal: upright bipedal posture, longer legs, long tightly coiled hair on the top of the head, body painting, threatening synchronous body movements, loud voices, and extremely loud rhythmic singing, stomping, and drumming on external objects. Slow locomotion and strong body odor (both characteristic of hominids and humans) are other features often employed by aposematic species to advertise their unprofitability to potential predators.
Other behavioural models.
A variety of ideas promote a specific change in behaviour as the key driver of the evolution of hominid bipedalism. For example, Wescott (1967) and later Jablonski & Chaplin (1993) suggest that bipedal threat displays could have been the transitional behaviour that led some groups of apes to adopt bipedal postures more often. Others (e.g. Dart 1925) have offered the idea that the need for greater vigilance against predators could have provided the initial motivation. Dawkins (e.g. 2004) has argued that it could have begun as a kind of fashion that just caught on and then escalated through sexual selection. It has even been suggested (e.g. Tanner 1981:165) that male phallic display, or increased sexual signaling in upright female posture, could have been the initial incentive.
Thermoregulatory model.
The thermoregulatory model explaining the origin of bipedalism is one of the simplest theories so far advanced, but it is a viable explanation. Dr. Peter Wheeler, a professor of evolutionary biology, proposes that bipedalism raises more of the body's surface area higher above the ground, which reduces heat gain and helps heat dissipation. Higher above the ground, an organism has access to more favorable wind speeds and temperatures. In hot periods, greater wind flow results in higher heat loss, which makes the organism more comfortable. Wheeler also explains that a vertical posture minimizes direct exposure to the sun, whereas quadrupedalism exposes more of the body. Analysis and interpretations of "Ardipithecus" reveal that this hypothesis needs modification: the forest and woodland environmental preadaptation of early-stage hominid bipedalism appears to have preceded further refinement of bipedalism under natural selection. Bipedalism would then have allowed more efficient exploitation of the ecological niche offered by hotter conditions, rather than hotter conditions being bipedalism's initial stimulus. A feedback mechanism from the advantages of bipedality in hot, open habitats would then in turn have solidified the forest preadaptation as a permanent state.
Carrying models.
Charles Darwin wrote that "Man could not have attained his present dominant position in the world without the use of his hands, which are so admirably adapted to act in obedience to his will". Darwin (1871:52) and many models of bipedal origins are based on this line of thought. Gordon Hewes (1961) suggested that the carrying of meat "over considerable distances" (Hewes 1961:689) was the key factor. Isaac (1978) and Sinclair et al. (1986) offered modifications of this idea, as indeed did Lovejoy (1981) with his "provisioning model" described above. Others, such as Nancy Tanner (1981), have suggested that infant carrying was key, while others again have suggested that stone tools and weapons drove the change. This stone-tool theory is very unlikely: although ancient humans are known to have hunted, the earliest known stone tools appeared long after the origin of bipedalism, chronologically precluding them from being a driving force of evolution. (Wooden tools and spears fossilize poorly, so it is difficult to judge their potential usage.)
Wading models.
The observation that large primates, especially the great apes, which predominantly move quadrupedally on dry land, tend to switch to bipedal locomotion in waist-deep water has led to the idea that the origin of human bipedalism may have been influenced by waterside environments. This idea, labelled "the wading hypothesis", was originally suggested by the Oxford marine biologist Alister Hardy, who said: "It seems to me likely that Man learnt to stand erect first in water and then, as his balance improved, he found he became better equipped for standing up on the shore when he came out, and indeed also for running." It was then promoted by Elaine Morgan, as part of the aquatic ape hypothesis, who cited bipedalism among a cluster of other human traits unique among primates, including voluntary control of breathing, hairlessness and subcutaneous fat. The "aquatic ape hypothesis", as originally formulated, has not been accepted or considered a serious theory within the anthropological scholarly community. Others, however, have sought to promote wading as a factor in the origin of human bipedalism without referring to further ("aquatic ape" related) factors. Since 2000 Carsten Niemitz has published a series of papers and a book on a variant of the wading hypothesis, which he calls the "amphibian generalist theory".
Other theories propose that wading and the exploitation of aquatic food sources (providing essential nutrients for human brain evolution or critical fallback foods) may have exerted evolutionary pressures on human ancestors, promoting adaptations that later assisted full-time bipedalism. It has also been suggested that consistent water-based food sources fostered early hominid dependency on such foods and facilitated dispersal along seas and rivers.
Consequences.
Prehistoric fossil records show that early hominins developed bipedalism first, followed by an increase in brain size. The consequences of these two changes resulted in painful and difficult labor: selection for a narrow pelvis suited to bipedalism was countered by the larger heads that had to pass through the constricted birth canal. This phenomenon is commonly known as the obstetrical dilemma.
Non-human primates habitually deliver their young on their own, but the same cannot be said for modern humans. Isolated birth appears to be rare and actively avoided cross-culturally, even though birthing methods differ between cultures. This is because the narrowing of the hips and the change in pelvic angle created a discrepancy between the size of the head and the size of the birth canal. The result is greater difficulty in birthing for hominins in general, let alone giving birth alone.
Physiology.
Bipedal movement occurs in a number of ways and requires many mechanical and neurological adaptations. Some of these are described below.
Biomechanics.
Standing.
Energy-efficient means of standing bipedally involve constant adjustment of balance, and these adjustments must avoid overcorrection. The difficulty of simply standing upright in humans is highlighted by the greatly increased risk of falling among the elderly, even with minimal reductions in the effectiveness of the control system.
Shoulder stability.
Shoulder stability decreased with the evolution of bipedalism, and shoulder mobility increased, because the need for a stable shoulder exists only in arboreal habitats. Greater shoulder mobility would have supported the suspensory locomotion behaviors that persisted alongside early bipedalism. Because the forelimbs were freed from weight-bearing requirements, the shoulder is a key place to look for evidence of the evolution of bipedalism.
Walking.
Unlike non-human apes able to practice bipedality, such as "Pan" and "Gorilla", hominins can move bipedally without using a bent-hip-bent-knee (BHBK) gait, which requires engagement of both the hip and the knee joints. This human ability to walk is made possible by the spinal curvature that humans have and non-human apes lack. Instead, human walking is characterized by an "inverted pendulum" movement in which the center of gravity vaults over a stiff leg with each step. Force plates can be used to quantify whole-body kinetic and potential energy, with walking displaying an out-of-phase relationship that indicates exchange between the two. This model applies to all walking organisms regardless of the number of legs, so bipedal locomotion does not differ in terms of whole-body kinetics.
In humans, walking is composed of several separate processes:
Running.
Early hominins underwent post-cranial changes to better adapt to bipedality, especially running. One of these changes was the evolution of hindlimbs that are longer in proportion to the forelimbs. As previously mentioned, longer hindlimbs assist in thermoregulation by reducing the total surface area exposed to direct sunlight while allowing more space for cooling winds. Longer limbs are also more energy-efficient, since they lessen overall muscle strain. Better energy efficiency, in turn, means higher endurance, particularly when running long distances.
Running is characterized by a spring-mass movement. Kinetic and potential energy are in phase, and energy is stored and released by a spring-like limb during foot contact, a function served by the plantar arch and the Achilles tendon in the foot and leg, respectively. Again, the whole-body kinetics are similar to those of animals with more limbs.
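These two energy patterns can be illustrated with a toy numeric sketch (purely illustrative sinusoids, not measured gait data): modelling walking as an inverted pendulum puts kinetic and potential energy out of phase, while the spring-mass model of running puts them in phase.

```python
import math

# Toy gait-energetics sketch (illustrative numbers, not measured data):
# walking exchanges kinetic and potential energy out of phase, like an
# inverted pendulum; running stores and releases them in phase, like a spring.
def energies(phase_shift, steps=100):
    ke, pe = [], []
    for i in range(steps):
        t = 2 * math.pi * i / steps
        ke.append(1.0 + 0.5 * math.cos(t))                # kinetic energy
        pe.append(1.0 + 0.5 * math.cos(t + phase_shift))  # potential energy
    return ke, pe

def correlation(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / math.sqrt(vx * vy)

walk_ke, walk_pe = energies(phase_shift=math.pi)  # out of phase: pendulum-like
run_ke, run_pe = energies(phase_shift=0.0)        # in phase: spring-like
print(f"walking KE/PE correlation: {correlation(walk_ke, walk_pe):+.2f}")  # about -1
print(f"running  KE/PE correlation: {correlation(run_ke, run_pe):+.2f}")   # about +1
```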
Musculature.
Bipedalism requires strong leg muscles, particularly in the thighs. In domesticated poultry, contrast the well-muscled legs with the small and bony wings. Likewise in humans, the quadriceps and hamstring muscles of the thigh are both so crucial to bipedal activities that each alone is much larger than the well-developed biceps of the arms. In addition to the leg muscles, the increased size of the gluteus maximus in humans is an important adaptation, as it provides support and stability to the trunk and lessens the stress on the joints when running.
Respiration.
Quadrupeds have more restricted breathing while moving than do bipedal humans. "Quadrupedal species normally synchronize the locomotor and respiratory cycles at a constant ratio of 1:1 (strides per breath) in both the trot and gallop. Human runners differ from quadrupeds in that while running they employ several phase-locked patterns (4:1, 3:1, 2:1, 1:1, 5:2, and 3:2), although a 2:1 coupling ratio appears to be favored. Even though the evolution of bipedal gait has reduced the mechanical constraints on respiration in man, thereby permitting greater flexibility in breathing pattern, it has seemingly not eliminated the need for the synchronization of respiration and body motion during sustained running."
Respiration through bipedality means that bipeds have better breath control, which can be associated with brain growth. The modern brain uses approximately 20% of the energy taken in through breathing and eating; by contrast, species like chimpanzees use twice as much energy as humans for the same amount of movement. This energy surplus, which supported brain growth, also supported the development of verbal communication, because breath control means that the muscles associated with breathing can be manipulated to create sounds. The onset of bipedality, by making breathing more efficient, may therefore be related to the origin of verbal language.
Bipedal robots.
For nearly the whole of the 20th century, bipedal robots were very difficult to construct and robot locomotion involved only wheels, treads, or multiple legs. Recent cheap and compact computing power has made two-legged robots more feasible. Some notable biped robots are ASIMO, HUBO, MABEL and QRIO. Recently, spurred by the success of creating a fully passive, un-powered bipedal walking robot, those working on such machines have begun using principles gleaned from the study of human and animal locomotion, which often relies on passive mechanisms to minimize power consumption.
Bootstrapping
In general, bootstrapping refers to a self-starting process that is supposed to continue or grow without external input. Many analytical techniques are called bootstrap methods in reference to their self-starting or self-supporting implementation, such as bootstrapping (statistics), bootstrapping (finance), or bootstrapping (linguistics).
Etymology.
Tall boots may have a tab, loop or handle at the top known as a bootstrap, allowing one to use fingers or a boot hook tool to help pull the boots on. The saying "to pull oneself up by one's bootstraps" was already in use during the 19th century as an example of an impossible task. The idiom dates at least to 1834, when it appeared in the "Workingman's Advocate": "It is conjectured that Mr. Murphee will now be enabled to hand himself over the Cumberland river or a barn yard fence by the straps of his boots." In 1860 it appeared in a comment on philosophy of mind: "The attempt of the mind to analyze itself [is] an effort analogous to one who would lift himself by his own bootstraps." Bootstrap as a metaphor, meaning to better oneself by one's own unaided efforts, was in use by 1922. This metaphor spawned additional metaphors for a series of self-sustaining processes that proceed without external help.
The term is sometimes attributed to a story in Rudolf Erich Raspe's "The Surprising Adventures of Baron Munchausen", but in that story Baron Munchausen pulls himself (and his horse) out of a swamp by his hair (specifically, his pigtail), not by his bootstraps, and no explicit reference to bootstraps has been found elsewhere in the various versions of the Munchausen tales.
Originally referring to an attempt at something ludicrously far-fetched or even impossible, the phrase "Pull yourself up by your bootstraps!" has since been used as a narrative of economic mobility or a cure for depression. That idea is believed to have been popularized by American writer Horatio Alger in the 19th century. To request that someone "bootstrap" is to suggest that they might overcome great difficulty by sheer force of will.
Critics have observed that the phrase is used to portray unfair situations as far more meritocratic than they really are. A 2009 study found that 77% of Americans believe that wealth is often the result of hard work. Various studies have found that the main predictor of future wealth is not IQ or hard work, but initial wealth.
Applications.
Computing.
In computer technology, the term bootstrapping refers to language compilers that can be coded in the same language they compile. (For example, a C compiler is now typically written in the C language. Once a basic compiler is written, improvements can be made iteratively, thus "pulling the language up by its bootstraps".) Also, booting usually refers to the process of loading the basic software into the memory of a computer after power-on or general reset; the loaded kernel of the operating system then takes care of loading other device drivers and software as needed.
Software loading and execution.
Booting is the process of starting a computer, specifically with regard to starting its software. The process involves a chain of stages, in which at each stage, a relatively small and simple program loads and then executes the larger, more complicated program of the next stage. It is in this sense that the computer "pulls itself up by its bootstraps"; i.e., it improves itself by its own efforts. Booting is a chain of events that starts with execution of hardware-based procedures and may then hand off to firmware and software which is loaded into main memory. Booting often involves processes such as performing self-tests, loading configuration settings, loading a BIOS, resident monitors, a hypervisor, an operating system, or utility software.
The computer term bootstrap began as a metaphor in the 1950s. In computers, pressing a bootstrap button caused a hardwired program to read a bootstrap program from an input unit. The computer would then execute the bootstrap program, which caused it to read more program instructions. It became a self-sustaining process that proceeded without external help from manually entered instructions. As a computing term, bootstrap has been used since at least 1953.
Software development.
Bootstrapping can also refer to the development of successively more complex, faster programming environments. The simplest environment will be, perhaps, a very basic text editor ("e.g.", ed) and an assembler program. Using these tools, one can write a more complex text editor, and a simple compiler for a higher-level language and so on, until one can have a graphical IDE and an extremely high-level programming language.
The term was also championed by Doug Engelbart to refer to his belief that organizations could better evolve by improving the process they use for improvement (thus obtaining a compounding effect over time). His SRI team that developed the NLS hypertext system applied this strategy by using the tool they had developed to improve the tool.
Compilers.
The development of compilers for new programming languages, which are first developed in an existing language and then rewritten in the new language and compiled by themselves, is another example of the bootstrapping notion.
Installers.
During the installation of computer programs, it is sometimes necessary to update the installer or package manager itself. The common pattern for this is to use a small executable bootstrapper file ("e.g.," setup.exe) which updates the installer and starts the real installation after the update. Sometimes the bootstrapper also installs other prerequisites for the software during the bootstrapping process.
Overlay networks.
A bootstrapping node, also known as a rendezvous host, is a node in an overlay network that provides initial configuration information to newly joining nodes so that they may successfully join the overlay network.
Discrete-event simulation.
A type of computer simulation called discrete-event simulation represents the operation of a system as a chronological sequence of events. A technique called "bootstrapping the simulation model" is used, which bootstraps initial data points using a pseudorandom number generator to schedule an initial set of pending events, which schedule additional events, and with time, the distribution of event times approaches its steady state—the bootstrapping behavior is overwhelmed by steady-state behavior.
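A minimal sketch of this pattern, using illustrative exponentially distributed event times rather than any particular system's distribution, might look like this:

```python
import heapq
import random

# Sketch of "bootstrapping" a discrete-event simulation: a pseudorandom
# number generator seeds an initial set of pending events, and each event
# handled schedules a follow-on event, so the simulation sustains itself.
random.seed(42)

clock = 0.0
pending = []                        # priority queue ordered by event time
for _ in range(5):                  # bootstrap: schedule the initial events
    heapq.heappush(pending, clock + random.expovariate(1.0))

for _ in range(20):                 # run: each event schedules a successor
    clock = heapq.heappop(pending)
    heapq.heappush(pending, clock + random.expovariate(1.0))
    print(f"event handled at t = {clock:.3f}")
```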
Artificial intelligence and machine learning.
Bootstrapping is a technique used to iteratively improve a classifier's performance. Typically, multiple classifiers are trained on different bootstrap samples of the input data, and on prediction tasks their outputs are combined.
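A minimal sketch of this idea, often called bootstrap aggregating or "bagging", follows; the data set, the weak threshold learner, and all parameters are invented purely for illustration:

```python
import random

# Bagging sketch: train weak 1-D threshold classifiers on bootstrap
# resamples of a toy data set and combine their predictions by majority vote.
random.seed(0)
data = [(x / 10.0, 1 if x > 50 else 0) for x in range(100)]  # toy labeled points

def train_stump(sample):
    # Pick the decision threshold that best separates the resampled labels.
    best_t, best_acc = 0.0, 0.0
    for t in range(0, 100, 5):
        thr = t / 10.0
        acc = sum((x > thr) == (y == 1) for x, y in sample) / len(sample)
        if acc > best_acc:
            best_t, best_acc = thr, acc
    return best_t

stumps = []
for _ in range(25):
    resample = [random.choice(data) for _ in data]  # sample with replacement
    stumps.append(train_stump(resample))

def predict(x):
    votes = sum(x > t for t in stumps)              # combine by majority vote
    return 1 if votes > len(stumps) / 2 else 0

print(predict(7.3), predict(2.1))                   # expected output: 1 0
```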
Seed AI is a hypothesized type of artificial intelligence capable of recursive self-improvement. Having improved itself, it would become better at improving itself, potentially leading to an exponential increase in intelligence. No such AI is known to exist, but it remains an active field of research. Seed AI is a significant part of some theories about the technological singularity: proponents believe that the development of seed AI will rapidly yield ever-smarter intelligence (via bootstrapping) and thus a new era.
Statistics.
Bootstrapping is a resampling technique used to obtain estimates of summary statistics.
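For example, here is a minimal sketch with made-up observations: resampling the data with replacement many times yields an empirical distribution of the sample mean, whose spread estimates the mean's standard error.

```python
import random
import statistics

# Bootstrap sketch (toy data): estimate the standard error of the sample
# mean by repeatedly resampling the observed values with replacement.
random.seed(1)
observed = [2.3, 1.9, 3.1, 2.8, 2.2, 3.4, 1.7, 2.9]

means = []
for _ in range(10_000):
    resample = [random.choice(observed) for _ in observed]
    means.append(statistics.mean(resample))

print(f"bootstrap estimate of the mean: {statistics.mean(means):.3f}")
print(f"bootstrap standard error:       {statistics.stdev(means):.3f}")
```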
Business.
Bootstrapping in business means starting a business without external help or working capital. Entrepreneurs in the startup development phase of their company survive through internal cash flow and are very cautious with their expenses. Generally, at the start of a venture, a small amount of money is set aside for the bootstrap process. Bootstrapping can also be a supplement for econometric models. Bootstrapping was expanded upon in the book "Bootstrap Business" by Richard Christiansen, the Harvard Business Review article "The Art of Bootstrapping", and the follow-up book "The Origin and Evolution of New Businesses" by Amar Bhide; Seth Godin's "The Bootstrapper's Bible" is an entire guide on how to bootstrap properly.
Experts have noted that several common stages exist for bootstrapping a business venture:
Many types of companies are eligible for bootstrapping. Early-stage companies that do not necessarily require large influxes of capital (particularly from outside sources) qualify; bootstrapping allows such a business flexibility and time to grow. Serial entrepreneurs' companies may also reap the benefits of bootstrapping: these are organizations whose founder has money from the sale of a previous company that can be used to invest.
There are different methods of bootstrapping. Future business owners aspiring to use bootstrapping as a way of launching their product or service often use the following methods:
Bootstrapping is often considered successful. According to statistics provided by Fundera, approximately 77% of small businesses rely on some form of personal investment and/or savings to fund their startup ventures. The average small business venture requires approximately $10,000 in startup capital, with a third of small businesses launching with less than $5,000 bootstrapped.
Based on startup data presented by Entrepreneur.com, bootstrapping is more commonly used than other methods of funding: "0.91% of startups are funded by angel investors, while 0.05% are funded by VCs. In contrast, 57 percent of startups are funded by personal loans and credit, while 38 percent receive funding from family and friends."
One example of a successful entrepreneur who has used bootstrapping to finance his businesses is serial entrepreneur Mark Cuban. He has publicly endorsed bootstrapping, claiming, "If you can start on your own … do it by [yourself] without having to go out and raise money." When asked why he believed this approach was most necessary, he replied, "I think the biggest mistake people make is once they have an idea and the goal of starting a business, they think they have to raise money. And once you raise money, that's not an accomplishment, that's an obligation" because "now, you're reporting to whoever you raised money from."
Bootstrapped companies such as Apple Inc. (AAPL), eBay Inc. (EBAY) and Coca-Cola Co. have also claimed that they attribute some of their success to the fact that this method of funding enabled them to remain highly focused on a specific array of profitable products.
Startups can grow by reinvesting profits in their own growth if bootstrapping costs are low and return on investment is high. This financing approach allows owners to maintain control of their business and forces them to spend with discipline. In addition, bootstrapping allows startups to focus on customers rather than investors, thereby increasing the likelihood of creating a profitable business. It can also leave startups with a better exit strategy and greater returns.
Leveraged buyouts, or highly leveraged or "bootstrap" transactions, occur when an investor acquires a controlling interest in a company's equity and where a significant percentage of the purchase price is financed through leverage, i.e. borrowing by the acquired company.
Operation Bootstrap ("Operación Manos a la Obra") refers to the ambitious projects that industrialized Puerto Rico in the mid-20th century.
Biology.
Richard Dawkins in his book "River Out of Eden" used the computer bootstrapping concept to explain how biological cells differentiate: "Different cells receive different combinations of chemicals, which switch on different combinations of genes, and some genes work to switch other genes on or off. And so the bootstrapping continues, until we have the full repertoire of different kinds of cells."
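Dawkins's description maps naturally onto a toy Boolean gene-network sketch; the gene names and switching rules below are invented purely for illustration:

```python
# Toy Boolean gene network: an initial chemical signal switches one gene
# on, and that gene in turn switches other genes on or off, step by step,
# until the cell's expression pattern reaches a stable state.
rules = {
    "gene_a": lambda s: s["signal"],                      # induced by the signal
    "gene_b": lambda s: s["gene_a"],                      # switched on by gene_a
    "gene_c": lambda s: s["gene_a"] and not s["gene_b"],  # repressed by gene_b
}

state = {"signal": True, "gene_a": False, "gene_b": False, "gene_c": False}
for step in range(4):
    # Evaluate every rule against the current state, then update all at once.
    state.update({gene: rule(state) for gene, rule in rules.items()})
    print(step, state)  # settles into a stable expression pattern
```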
Phylogenetics.
Bootstrapping analysis gives a way to judge the strength of support for clades on phylogenetic trees. A number written next to a node reflects the percentage of bootstrap trees that also resolve the clade at the endpoints of that branch.
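A deliberately crude sketch of the procedure follows (a four-taxon toy alignment; the "tree inference" is reduced to pairing the two most similar sequences, whereas real analyses rebuild a full tree for every replicate):

```python
import random

# Phylogenetic bootstrap sketch: resample alignment columns with
# replacement, re-infer a grouping from each resample, and report how
# often the clade of interest reappears across replicates.
random.seed(3)
alignment = {
    "A": "ACGTACGTAC",
    "B": "ACGTACGAAC",
    "C": "TGCAAGGTTC",
    "D": "TGCATGGTTC",
}

def closest_pair(columns):
    """Crude stand-in for tree building: pair the two most similar taxa."""
    taxa = list(alignment)
    best, best_diff = None, float("inf")
    for i, t1 in enumerate(taxa):
        for t2 in taxa[i + 1:]:
            diff = sum(alignment[t1][c] != alignment[t2][c] for c in columns)
            if diff < best_diff:
                best, best_diff = frozenset((t1, t2)), diff
    return best

n_sites = len(alignment["A"])
clade, hits, replicates = frozenset(("A", "B")), 0, 1000
for _ in range(replicates):
    cols = [random.randrange(n_sites) for _ in range(n_sites)]  # resample sites
    hits += closest_pair(cols) == clade

print(f"bootstrap support for clade (A,B): {100 * hits / replicates:.0f}%")
```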
Law.
Bootstrapping is a rule preventing the admission of hearsay evidence in conspiracy cases.
Linguistics.
Bootstrapping is a theory of language acquisition.
Physics.
Flatness.
Whitworth's three-plates method does not rely on other flat reference surfaces or on other precision instruments, and thus solves the problem of how to create a precisely flat surface from scratch.
Quantum theory.
Bootstrapping is the use of very general consistency criteria to determine the form of a quantum theory from some assumptions about the spectrum of particles or operators.
Magnetically confined fusion plasmas.
In tokamak fusion devices, bootstrapping refers to the process in which a bootstrap current is self-generated by the plasma, which reduces or eliminates the need for an external current driver. Maximising the bootstrap current is a major goal of advanced tokamak designs.
Inertially confined fusion plasmas.
Bootstrapping in inertial confinement fusion refers to the alpha particles produced in the fusion reaction providing further heating to the plasma. This heating leads to ignition and an overall energy gain.
Electronics.
Bootstrapping is a form of positive feedback in analog circuit design.
Electric power grid.
An electric power grid is almost never brought down intentionally. Generators and power stations are started and shut down as necessary. A typical power station requires power for start up prior to being able to generate power. This power is obtained from the grid, so if the entire grid is down these stations cannot be started.
Therefore, to get a grid started, there must be at least a small number of power stations that can start entirely on their own. A black start is the process of restoring a power station to operation without relying on external power. In the absence of grid power, one or more black-start stations are used to bootstrap the grid.
Nuclear power.
A nuclear power plant always needs a way to remove decay heat, which is usually done with electric cooling pumps. But in the rare case of a complete loss of electrical power, this can still be achieved by bootstrapping the plant's turbine generator. As steam builds up in the steam generator, it can be used to power the turbine generator (initially with no oil pumps, circulating water pumps, or condensate pumps). Once the turbine generator is producing electricity, the auxiliary pumps can be powered on, and the reactor cooling pumps can be run momentarily. Eventually the steam pressure becomes insufficient to power the turbine generator, and the process is shut down in reverse order. The process can be repeated until no longer needed. This can cause great damage to the turbine generator, but more importantly, it saves the nuclear reactor.
Cellular networks.
A Bootstrapping Server Function (BSF) is an intermediary element in cellular networks which provides application-independent functions for mutual authentication of user equipment and servers unknown to each other and for 'bootstrapping' the exchange of secret session keys afterwards. The term 'bootstrapping' is related to building a security relation with a previously unknown device first and to allow installing security elements (keys) in the device and the BSF afterwards.
Baltic languages
The Baltic languages are a branch of the Indo-European language family spoken natively or as a second language by a population of about 6.5–7.0 million people mainly in areas extending east and southeast of the Baltic Sea in Europe. Together with the Slavic languages, they form the Balto-Slavic branch of the Indo-European family.
Scholars usually regard them as a single subgroup divided into two branches: West Baltic (containing only extinct languages) and East Baltic (containing at least two living languages, Lithuanian and Latvian, with some counts including Latgalian and Samogitian as separate languages rather than dialects of those two). In addition, the existence of the Dnieper-Oka language is hypothesized, with the extinct Golyad language being the only known member. The range of East Baltic linguistic influence may once have reached as far as the Ural Mountains, but this hypothesis has been questioned.
Old Prussian, a Western Baltic language that became extinct in the 18th century, had possibly conserved the greatest number of properties from Proto-Baltic.
Although related, Lithuanian, Latvian, and particularly Old Prussian have lexicons that differ substantially from one another, and so the languages are not mutually intelligible. Relatively little mutual interaction between these neighbouring languages historically led to gradual erosion of mutual intelligibility and to the development of linguistic innovations that did not exist in shared Proto-Baltic. The substantial number of false friends and the various uses and sources of loanwords from surrounding languages are considered to be the major reasons for poor mutual intelligibility today.
Branches.
Within Indo-European, the Baltic languages are generally classified as forming a single family with two branches: Eastern and Western Baltic. But these two branches are sometimes classified as independent branches of Balto-Slavic itself.
History.
It is believed that the Baltic languages are among the most conservative of the currently remaining Indo-European languages, despite their late attestation.
Although the Baltic Aesti tribe was mentioned by ancient historians such as Tacitus as early as 98 CE, the first attestation of a Baltic language was in 1369, in a Basel epigram of two lines written in Old Prussian. Lithuanian was first attested in a printed book, a catechism by Martynas Mažvydas published in 1547. Latvian appeared in a printed catechism in 1585.
One reason for the late attestation is that the Baltic peoples resisted Christianization longer than any other Europeans, which delayed the introduction of writing and isolated their languages from outside influence.
With the establishment of a German state in Prussia, and the mass influx of Germanic (and to a lesser degree Slavic-speaking) settlers, the Prussians began to be assimilated, and by the end of the 17th century, the Prussian language had become extinct.
After the Partitions of the Polish-Lithuanian Commonwealth, most of the Baltic lands were under the rule of the Russian Empire, where the native languages or alphabets were sometimes prohibited from being written down or used publicly in a Russification effort (see Lithuanian press ban for the ban in force from 1864 to 1904).
Geographic distribution.
Speakers of modern Baltic languages are generally concentrated within the borders of Lithuania and Latvia, and in emigrant communities in the United States, Canada, Australia and the countries within the former borders of the Soviet Union.
Historically the languages were spoken over a larger area: west to the mouth of the Vistula river in present-day Poland, at least as far east as the Dnieper river in present-day Belarus, perhaps even to Moscow, and perhaps as far south as Kyiv. Key evidence of Baltic-language presence in these regions is found in hydronyms (names of bodies of water) that are characteristically Baltic. The use of hydronyms is generally accepted to determine the extent of a culture's influence, but "not" the date of such influence.
The eventual expansion of the use of Slavic languages in the south and east, and Germanic languages in the west, reduced the geographic distribution of Baltic languages to a fraction of the area that they formerly covered. The Russian geneticist Oleg Balanovsky speculated that there is a predominance of an assimilated pre-Slavic substrate in the genetics of East and West Slavic populations. According to him, the common genetic structure that sets East Slavs and Balts apart from other populations may suggest that the pre-Slavic substrate of the East Slavs consisted most significantly of Baltic speakers, who predated the Slavs in the cultures of the Eurasian steppe according to the archaeological references he cites.
Contact with Uralic languages.
Though Estonia is geopolitically included among the Baltic states due to its location, Estonian is a Finnic language of the Uralic language family and is not related to the Baltic languages, which are Indo-European.
The Mordvinic languages, spoken mainly along western tributaries of the Volga, show several dozen loanwords from one or more Baltic languages. These may have been mediated by contacts with the Eastern Balts along the river Oka. With regard to the same geographical location, Asko Parpola, in a 2013 article, suggested that the Baltic presence in this area, dated to –600 CE, is due to an "elite superstratum". However, another linguist has argued that the Volga-Oka region is a "secondary" Baltic-speaking area, expanding from East Baltic, because of the large number of Baltic loanwords in Finnic and Saami.
Finnish scholars also indicate that Latvian had extensive contacts with Livonian and, to a lesser extent, with Estonian and South Estonian. This contact accounts for the number of Finnic hydronyms in Lithuania and Latvia, which increases in a northwards direction.
Parpola, in the same article, supposed the existence of a Baltic substratum for Finnic, in Estonia and coastal Finland. In the same vein, Kallio argues for the existence of a lost "North Baltic language" that would account for loanwords during the evolution of the Finnic branch.
Comparative linguistics.
Genetic relatedness.
The Baltic languages are of particular interest to linguists because they retain many archaic features, which are thought to have been present in the early stages of the Proto-Indo-European language. However, linguists have had a hard time establishing the precise relationship of the Baltic languages to other languages in the Indo-European family. Several of the extinct Baltic languages have a limited or nonexistent written record, their existence being known only from the records of ancient historians and personal or place names. All of the languages in the Baltic group (including the living ones) were first written down relatively late in their probable existence as distinct languages. These two factors combined with others have obscured the history of the Baltic languages, leading to a number of theories regarding their position in the Indo-European family.
The Baltic languages show a close relationship with the Slavic languages, and are grouped with them in a Balto-Slavic family by most scholars. This family is considered to have developed from a common ancestor, Proto-Balto-Slavic. Later on, several lexical, phonological and morphological dialectisms developed, separating the various Balto-Slavic languages from each other. Although it is generally agreed that the Slavic languages developed from a single more-or-less unified dialect (Proto-Slavic) that split off from common Balto-Slavic, there is more disagreement about the relationship between the Baltic languages.
The traditional view is that the Balto-Slavic languages split into two branches, Baltic and Slavic, with each branch developing as a single common language (Proto-Baltic and Proto-Slavic) for some time afterwards. Proto-Baltic is then thought to have split into East Baltic and West Baltic branches. However, more recent scholarship has suggested that there was no unified Proto-Baltic stage, but that Proto-Balto-Slavic split directly into three groups: Slavic, East Baltic and West Baltic. Under this view, the Baltic family is paraphyletic, and consists of all Balto-Slavic languages that are not Slavic. In the 1960s Vladimir Toporov and Vyacheslav Ivanov made the following conclusions about the relationship between the Baltic and Slavic languages:
These scholars' theses do not contradict the close relationship between the Baltic and Slavic languages and, from a historical perspective, they specify the evolution of the Balto-Slavic languages. The terms 'Baltic' and 'Slavic' are relevant only from the point of view of the present time, that is, in light of diachronic changes; the oldest stage of development could be called both Baltic and Slavic. This concept does not contradict the traditional thesis that the Proto-Slavic and Proto-Baltic languages coexisted for a long time after their formation: between the 2nd millennium BC and circa the 5th century BC, Proto-Slavic was a continuum of Proto-Baltic dialects, or, more precisely, Proto-Slavic should be localized in the peripheral circle of the Proto-Baltic dialects.
Finally, a minority of scholars argue that Baltic descended directly from Proto-Indo-European, without an intermediate common Balto-Slavic stage. They argue that the many similarities and shared innovations between Baltic and Slavic are caused by several millennia of contact between the groups, rather than a shared heritage.
Thracian hypothesis.
The Baltic-speaking peoples likely encompassed an area in eastern Europe much larger than their modern range. As in the case of the Celtic languages of Western Europe, they were reduced by invasion, extermination and assimilation. Studies in comparative linguistics point to a genetic relationship between the languages of the Baltic family and the following extinct languages:
The Baltic classification of Dacian and Thracian has been proposed by the Lithuanian scientist Jonas Basanavičius, who insisted that this was the most important work of his life and listed 600 identical words in Baltic and Thracian. His theory included Phrygian in the related group, but this found no support and was rejected by other authors; a later analysis found Phrygian completely lacking parallels with either Thracian or Baltic languages.
Romanian linguist Sorin Paliga, analysing and criticizing Harvey Mayer's study, did admit "great likeness" between Thracian, the substrate of Romanian, and "some Baltic forms".
Bioinformatics
Bioinformatics is an interdisciplinary field of science that develops methods and software tools for understanding biological data, especially when the data sets are large and complex. Bioinformatics uses biology, chemistry, physics, computer science, data science, computer programming, information engineering, mathematics and statistics to analyze and interpret biological data. The process of analyzing and interpreting data is sometimes referred to as computational biology, although this distinction between the two terms is often disputed. To some, the term "computational biology" refers to building and using models of biological systems.
Computational, statistical, and computer programming techniques have been used for computer simulation analyses of biological queries. These include reusable specific analysis "pipelines", particularly in the field of genomics, such as those for the identification of genes and single nucleotide polymorphisms (SNPs). These pipelines are used to better understand the genetic basis of disease, unique adaptations, desirable properties (especially in agricultural species), or differences between populations. Bioinformatics also includes proteomics, which tries to understand the organizational principles within nucleic acid and protein sequences.
Image and signal processing allow extraction of useful results from large amounts of raw data. In the field of genetics, it aids in sequencing and annotating genomes and their observed mutations. Bioinformatics includes text mining of biological literature and the development of biological and gene ontologies to organize and query biological data. It also plays a role in the analysis of gene and protein expression and regulation. Bioinformatics tools aid in comparing, analyzing and interpreting genetic and genomic data and more generally in the understanding of evolutionary aspects of molecular biology. At a more integrative level, it helps analyze and catalogue the biological pathways and networks that are an important part of systems biology. In structural biology, it aids in the simulation and modeling of DNA, RNA, proteins as well as biomolecular interactions.
History.
The term "bioinformatics" was first defined by Paulien Hogeweg and Ben Hesper in 1970, to refer to the study of information processes in biotic systems. This definition placed bioinformatics as a field parallel to biochemistry (the study of chemical processes in biological systems).
Bioinformatics and computational biology involved the analysis of biological data, particularly DNA, RNA, and protein sequences. The field of bioinformatics experienced explosive growth starting in the mid-1990s, driven largely by the Human Genome Project and by rapid advances in DNA sequencing technology.
Analyzing biological data to produce meaningful information involves writing and running software programs that use algorithms from graph theory, artificial intelligence, soft computing, data mining, image processing, and computer simulation. The algorithms in turn depend on theoretical foundations such as discrete mathematics, control theory, system theory, information theory, and statistics.
Sequences.
There has been a tremendous advance in speed and cost reduction since the completion of the Human Genome Project, with some labs able to sequence over 100,000 billion bases each year, and a full genome can be sequenced for $1,000 or less.
Computers became essential in molecular biology when protein sequences became available after Frederick Sanger determined the sequence of insulin in the early 1950s. Comparing multiple sequences manually turned out to be impractical. Margaret Oakley Dayhoff, a pioneer in the field, compiled one of the first protein sequence databases, initially published as books, and pioneered methods of sequence alignment and molecular evolution. Another early contributor to bioinformatics was Elvin A. Kabat, who pioneered biological sequence analysis in 1970 with his comprehensive volumes of antibody sequences, released online with Tai Te Wu between 1980 and 1991.
In the 1970s, new techniques for sequencing DNA were applied to the bacteriophages MS2 and ΦX174, and the extended nucleotide sequences were then parsed with informational and statistical algorithms. These studies illustrated that well-known features, such as the coding segments and the triplet code, are revealed in straightforward statistical analyses, and were proof of the concept that bioinformatics would be insightful.
Goals.
In order to study how normal cellular activities are altered in different disease states, raw biological data must be combined to form a comprehensive picture of these activities. Therefore, the field of bioinformatics has evolved such that the most pressing task now involves the analysis and interpretation of various types of data. This also includes nucleotide and amino acid sequences, protein domains, and protein structures.
Important sub-disciplines within bioinformatics and computational biology include:
The primary goal of bioinformatics is to increase the understanding of biological processes. What sets it apart from other approaches is its focus on developing and applying computationally intensive techniques to achieve this goal. Examples include: pattern recognition, data mining, machine learning algorithms, and visualization. Major research efforts in the field include sequence alignment, gene finding, genome assembly, drug design, drug discovery, protein structure alignment, protein structure prediction, prediction of gene expression and protein–protein interactions, genome-wide association studies, the modeling of evolution and cell division/mitosis.
Bioinformatics entails the creation and advancement of databases, algorithms, computational and statistical techniques, and theory to solve formal and practical problems arising from the management and analysis of biological data.
Over the past few decades, rapid developments in genomic and other molecular research technologies and developments in information technologies have combined to produce a tremendous amount of information related to molecular biology. Bioinformatics is the name given to these mathematical and computing approaches used to glean understanding of biological processes.
Common activities in bioinformatics include mapping and analyzing DNA and protein sequences, aligning DNA and protein sequences to compare them, and creating and viewing 3-D models of protein structures.
Sequence analysis.
Since the bacteriophage ΦX174 was sequenced in 1977, the DNA sequences of thousands of organisms have been decoded and stored in databases. This sequence information is analyzed to determine genes that encode proteins, RNA genes, regulatory sequences, structural motifs, and repetitive sequences. A comparison of genes within a species or between different species can show similarities between protein functions, or relations between species (the use of molecular systematics to construct phylogenetic trees). With the growing amount of data, it long ago became impractical to analyze DNA sequences manually. Computer programs such as BLAST are used routinely to search sequences—as of 2008, from more than 260,000 organisms, containing over 190 billion nucleotides.
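The kind of pairwise comparison such programs perform can be sketched with a bare-bones global-alignment scorer (a minimal Needleman-Wunsch dynamic program with arbitrary scoring parameters; BLAST itself uses a faster heuristic seed-and-extend strategy):

```python
# Minimal Needleman-Wunsch global-alignment score: dynamic programming
# over match, mismatch, and gap moves between two sequences.
def align_score(a, b, match=1, mismatch=-1, gap=-2):
    rows, cols = len(a) + 1, len(b) + 1
    score = [[0] * cols for _ in range(rows)]
    for i in range(rows):
        score[i][0] = i * gap                # leading gaps in sequence b
    for j in range(cols):
        score[0][j] = j * gap                # leading gaps in sequence a
    for i in range(1, rows):
        for j in range(1, cols):
            diag = score[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            score[i][j] = max(diag,
                              score[i - 1][j] + gap,   # gap in b
                              score[i][j - 1] + gap)   # gap in a
    return score[-1][-1]

print(align_score("GATTACA", "GATCACA"))  # 5: six matches, one mismatch
```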
|
DNA sequencing.
Before sequences can be analyzed, they are obtained from a data storage bank, such as GenBank. DNA sequencing is still a non-trivial problem as the raw data may be noisy or affected by weak signals. Algorithms have been developed for base calling for the various experimental approaches to DNA sequencing.
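As an illustration of one simple quality-control step downstream of base calling, the following sketch, assuming Biopython and a hypothetical input file named reads.fastq, discards reads whose mean Phred quality falls below an arbitrary threshold:

```python
# A minimal sketch of filtering noisy reads by base-call quality, assuming
# Biopython; "reads.fastq" is a placeholder file name.
from Bio import SeqIO

MIN_MEAN_PHRED = 20  # arbitrary threshold chosen for this illustration

kept = []
for record in SeqIO.parse("reads.fastq", "fastq"):
    qualities = record.letter_annotations["phred_quality"]
    if qualities and sum(qualities) / len(qualities) >= MIN_MEAN_PHRED:
        kept.append(record)

SeqIO.write(kept, "reads.filtered.fastq", "fastq")
print(f"kept {len(kept)} reads")
```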
Sequence assembly.
Most DNA sequencing techniques produce short fragments of sequence that need to be assembled to obtain complete gene or genome sequences. The shotgun sequencing technique (used by The Institute for Genomic Research (TIGR) to sequence the first bacterial genome, "Haemophilus influenzae") generates the sequences of many thousands of small DNA fragments (ranging from 35 to 900 nucleotides long, depending on the sequencing technology). The ends of these fragments overlap and, when aligned properly by a genome assembly program, can be used to reconstruct the complete genome. Shotgun sequencing yields sequence data quickly, but the task of assembling the fragments can be quite complicated for larger genomes. For a genome as large as the human genome, it may take many days of CPU time on large-memory, multiprocessor computers to assemble the fragments, and the resulting assembly usually contains numerous gaps that must be filled in later. Shotgun sequencing is the method of choice for virtually all genomes sequenced (rather than chain-termination or chemical degradation methods), and genome assembly algorithms are a critical area of bioinformatics research.
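The overlap idea can be illustrated with a toy greedy assembler; real assemblers use far more sophisticated graph-based methods, and the fragments below are fabricated:

```python
# A toy greedy assembler illustrating the overlap idea behind shotgun
# assembly; not a realistic implementation.

def overlap(a: str, b: str, min_len: int = 3) -> int:
    """Length of the longest suffix of `a` matching a prefix of `b`."""
    for n in range(min(len(a), len(b)), min_len - 1, -1):
        if a.endswith(b[:n]):
            return n
    return 0

def greedy_assemble(fragments: list[str]) -> str:
    frags = fragments[:]
    while len(frags) > 1:
        # Find the pair with the largest overlap and merge it.
        best = (0, 0, 1)
        for i, a in enumerate(frags):
            for j, b in enumerate(frags):
                if i != j:
                    n = overlap(a, b)
                    if n > best[0]:
                        best = (n, i, j)
        n, i, j = best
        if n == 0:
            break  # no overlaps left; remaining fragments are disjoint
        merged = frags[i] + frags[j][n:]
        frags = [f for k, f in enumerate(frags) if k not in (i, j)]
        frags.append(merged)
    return max(frags, key=len)

# Fragments of the made-up sequence "ATGGCGTGCAATG"
print(greedy_assemble(["ATGGCGT", "GCGTGCA", "TGCAATG"]))
```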
|
Genome annotation.
In genomics, annotation refers to the process of marking the start and stop regions of genes and other biological features in a sequenced genome. Many genomes are too large to be annotated by hand. As the rate of sequencing exceeds the rate of genome annotation, genome annotation has become the new bottleneck in bioinformatics.
Genome annotation can be classified into three levels: the nucleotide, protein, and process levels.
Gene finding is a chief aspect of nucleotide-level annotation. For complex genomes, a combination of ab initio gene prediction and sequence comparison with expressed sequence databases and other organisms can be successful. Nucleotide-level annotation also allows the integration of genome sequence with other genetic and physical maps of the genome.
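A drastically simplified form of ab initio gene finding is scanning one strand for open reading frames (start codon to in-frame stop codon). The sketch below, with a made-up sequence, illustrates the idea only; real gene predictors rely on probabilistic models:

```python
# A toy open-reading-frame scanner on the forward strand only.

STOPS = {"TAA", "TAG", "TGA"}

def find_orfs(dna: str, min_codons: int = 3):
    """Yield (start, end) of ORFs, 0-based, end-exclusive."""
    dna = dna.upper()
    for frame in range(3):
        start = None
        for i in range(frame, len(dna) - 2, 3):
            codon = dna[i:i + 3]
            if codon == "ATG" and start is None:
                start = i
            elif codon in STOPS and start is not None:
                if (i + 3 - start) // 3 >= min_codons:
                    yield (start, i + 3)
                start = None

# Toy sequence containing one short ORF: ATG AAA TGC TAA
for start, end in find_orfs("CCATGAAATGCTAACC", min_codons=3):
    print(start, end)  # prints: 2 14
```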
The principal aim of protein-level annotation is to assign function to the protein products of the genome. Databases of protein sequences and functional domains and motifs are used for this type of annotation. About half of the predicted proteins in a new genome sequence tend to have no obvious function.
|
Understanding the function of genes and their products in the context of cellular and organismal physiology is the goal of process-level annotation. An obstacle of process-level annotation has been the inconsistency of terms used by different model systems. The Gene Ontology Consortium is helping to solve this problem.
The first description of a comprehensive annotation system was published in 1995 by The Institute for Genomic Research, which performed the first complete sequencing and analysis of the genome of a free-living (non-symbiotic) organism, the bacterium "Haemophilus influenzae". The system identifies the genes encoding all proteins, transfer RNAs, and ribosomal RNAs, and makes initial functional assignments. The GeneMark program trained to find protein-coding genes in "Haemophilus influenzae" is constantly changing and improving.
To pursue the goals that the Human Genome Project left to achieve after its closure in 2003, the National Human Genome Research Institute developed the ENCODE project. This project is a collaborative effort to catalogue the functional elements of the human genome using next-generation DNA-sequencing technologies and genomic tiling arrays, technologies able to automatically generate large amounts of data at a dramatically reduced per-base cost while maintaining the same accuracy (base call error) and fidelity (assembly error).
|
Gene function prediction.
While genome annotation is primarily based on sequence similarity (and thus homology), other properties of sequences can be used to predict the function of genes. In fact, most "gene" function prediction methods focus on "protein" sequences as they are more informative and more feature-rich. For instance, the distribution of hydrophobic amino acids predicts transmembrane segments in proteins. However, protein function prediction can also use external information such as gene (or protein) expression data, protein structure, or protein-protein interactions.
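A classic illustration of this is the Kyte-Doolittle hydropathy plot: a sliding-window average of residue hydrophobicity, where sustained high values suggest a transmembrane segment. The sketch below uses the published Kyte-Doolittle scale; the peptide and the 1.6 threshold are illustrative only:

```python
# A minimal sketch of transmembrane-segment prediction from hydrophobicity
# using the Kyte-Doolittle scale and a sliding window.

KD = {"A": 1.8, "R": -4.5, "N": -3.5, "D": -3.5, "C": 2.5, "Q": -3.5,
      "E": -3.5, "G": -0.4, "H": -3.2, "I": 4.5, "L": 3.8, "K": -3.9,
      "M": 1.9, "F": 2.8, "P": -1.6, "S": -0.8, "T": -0.7, "W": -0.9,
      "Y": -1.3, "V": 4.2}

def hydropathy(seq: str, window: int = 19):
    """Mean Kyte-Doolittle score at each window position."""
    scores = [KD[aa] for aa in seq.upper()]
    return [sum(scores[i:i + window]) / window
            for i in range(len(scores) - window + 1)]

seq = "MKTLLILAVLFLGILFLIIGAYLSDEEKRRQ"  # made-up peptide
for i, h in enumerate(hydropathy(seq)):
    flag = "  <- possible TM segment" if h > 1.6 else ""
    print(f"{i:3d} {h:5.2f}{flag}")
```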
Computational evolutionary biology.
Evolutionary biology is the study of the origin and descent of species, as well as their change over time. Informatics has assisted evolutionary biologists by enabling researchers to trace the evolution of a large number of organisms by measuring changes in their DNA rather than through physical taxonomy or physiological observations alone, to compare entire genomes (permitting the study of complex evolutionary events such as gene duplication and horizontal gene transfer), and to build computational models of populations to predict their behaviour over time.
Comparative genomics.
The core of comparative genome analysis is the establishment of the correspondence between genes (orthology analysis) or other genomic features in different organisms. Intergenomic maps are made to trace the evolutionary processes responsible for the divergence of two genomes. A multitude of evolutionary events acting at various organizational levels shape genome evolution. At the lowest level, point mutations affect individual nucleotides. At a higher level, large chromosomal segments undergo duplication, lateral transfer, inversion, transposition, deletion and insertion. Entire genomes are involved in processes of hybridization, polyploidization and endosymbiosis that lead to rapid speciation. The complexity of genome evolution poses many exciting challenges to developers of mathematical models and algorithms, who have recourse to a spectrum of algorithmic, statistical and mathematical techniques, ranging from exact, heuristics, fixed parameter and approximation algorithms for problems based on parsimony models to Markov chain Monte Carlo algorithms for Bayesian analysis of problems based on probabilistic models.
|
Many of these studies are based on the detection of sequence homology to assign sequences to protein families.
Pan genomics.
Pan genomics is a concept introduced in 2005 by Tettelin and Medini. The pan genome is the complete gene repertoire of a particular monophyletic taxonomic group. Although initially applied to closely related strains of a species, it can be applied to a larger context like genus, phylum, etc. It is divided into two parts: the core genome, a set of genes common to all the genomes under study (often housekeeping genes vital for survival), and the dispensable/flexible genome, a set of genes present in only one or some of the genomes under study. The bioinformatics tool BPGA can be used to characterize the pan genome of bacterial species.
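The core/dispensable split can be expressed directly as set operations over per-genome gene inventories, as in this sketch with fabricated strains (the gene names are real bacterial genes used only as labels):

```python
# A minimal sketch of the core/dispensable split in pan-genomics,
# using made-up gene inventories for three hypothetical strains.

genomes = {
    "strain_A": {"gyrA", "rpoB", "recA", "blaZ"},
    "strain_B": {"gyrA", "rpoB", "recA", "mecA"},
    "strain_C": {"gyrA", "rpoB", "recA"},
}

pan_genome = set.union(*genomes.values())          # every gene seen anywhere
core_genome = set.intersection(*genomes.values())  # genes in all strains
dispensable = pan_genome - core_genome             # genes in only some strains

print("pan:        ", sorted(pan_genome))
print("core:       ", sorted(core_genome))
print("dispensable:", sorted(dispensable))
```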
Genetics of disease.
As of 2013, efficient high-throughput next-generation sequencing technology allows for the identification of the causes of many different human disorders. Simple Mendelian inheritance has been observed for over 3,000 disorders identified in the Online Mendelian Inheritance in Man database, but complex diseases are more difficult. Association studies have found many genetic regions that are individually weakly associated with complex diseases (such as infertility, breast cancer and Alzheimer's disease), rather than a single cause. There are currently many challenges to using genes for diagnosis and treatment, such as uncertainty about which genes are important and about how stable the choices an algorithm provides are.
|
Genome-wide association studies have successfully identified thousands of common genetic variants for complex diseases and traits; however, these common variants only explain a small fraction of heritability. Rare variants may account for some of the missing heritability. Large-scale whole genome sequencing studies have rapidly sequenced millions of whole genomes, and such studies have identified hundreds of millions of rare variants. Functional annotations predict the effect or function of a genetic variant and help to prioritize rare functional variants, and incorporating these annotations can effectively boost the power of rare-variant association analysis in whole genome sequencing studies. Some tools provide all-in-one rare-variant association analysis for whole-genome sequencing data, including integration of genotype data and their functional annotations, association analysis, result summary, and visualization. Meta-analysis of whole genome sequencing studies provides an attractive solution to the problem of collecting the large sample sizes needed for discovering rare variants associated with complex phenotypes.
|
Analysis of mutations in cancer.
In cancer, the genomes of affected cells are rearranged in complex or unpredictable ways. In addition to single-nucleotide polymorphism arrays identifying point mutations that cause cancer, oligonucleotide microarrays can be used to identify chromosomal gains and losses (called comparative genomic hybridization). These detection methods generate terabytes of data per experiment. The data is often found to contain considerable variability, or noise, and thus Hidden Markov model and change-point analysis methods are being developed to infer real copy number changes.
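A toy version of change-point analysis picks the split point that minimizes the squared error around two segment means; production methods use hidden Markov models or multi-change-point algorithms. The copy-number signal below is simulated:

```python
# A toy single change-point detector for noisy copy-number signals.
import numpy as np

def best_changepoint(signal: np.ndarray) -> int:
    """Index that best splits the signal into two constant segments."""
    costs = []
    for k in range(1, len(signal)):
        left, right = signal[:k], signal[k:]
        cost = (((left - left.mean()) ** 2).sum()
                + ((right - right.mean()) ** 2).sum())
        costs.append(cost)
    return int(np.argmin(costs)) + 1

rng = np.random.default_rng(0)
# Simulated log-ratio data: copy-neutral, then a gain starting at probe 60.
signal = np.concatenate([rng.normal(0.0, 0.2, 60), rng.normal(0.6, 0.2, 40)])
print("estimated breakpoint:", best_changepoint(signal))  # expect ~60
```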
Two important principles can be used to identify cancer by mutations in the exome. First, cancer is a disease of accumulated somatic mutations in genes. Second, cancer contains driver mutations which need to be distinguished from passengers.
Further improvements in bioinformatics could allow for classifying types of cancer by analysis of cancer-driving mutations in the genome. Furthermore, tracking patients as the disease progresses may become possible with the sequencing of cancer samples. Another type of data that requires novel informatics development is the analysis of lesions found to be recurrent among many tumors.
|
Gene and protein expression.
Analysis of gene expression.
The expression of many genes can be determined by measuring mRNA levels with multiple techniques including microarrays, expressed cDNA sequence tag (EST) sequencing, serial analysis of gene expression (SAGE) tag sequencing, massively parallel signature sequencing (MPSS), RNA-Seq, also known as "Whole Transcriptome Shotgun Sequencing" (WTSS), or various applications of multiplexed in-situ hybridization. All of these techniques are extremely noise-prone and/or subject to bias in the biological measurement, and a major research area in computational biology involves developing statistical tools to separate signal from noise in high-throughput gene expression studies. Such studies are often used to determine the genes implicated in a disorder: one might compare microarray data from cancerous epithelial cells to data from non-cancerous cells to determine the transcripts that are up-regulated and down-regulated in a particular population of cancer cells.
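As a minimal sketch of such a comparison, the following applies an ordinary per-gene t-test (using SciPy) to fabricated expression values from two groups; real pipelines add normalization and multiple-testing correction:

```python
# A minimal sketch of two-group differential expression on made-up values.
from scipy import stats

expression = {
    # gene: ([tumor samples], [normal samples]) -- fabricated numbers
    "GENE1": ([8.1, 7.9, 8.4, 8.2], [5.0, 5.3, 4.8, 5.1]),
    "GENE2": ([6.0, 6.2, 5.9, 6.1], [6.1, 6.0, 6.2, 5.9]),
}

for gene, (tumor, normal) in expression.items():
    t, p = stats.ttest_ind(tumor, normal)
    direction = "up" if t > 0 else "down"
    print(f"{gene}: t={t:+.2f}, p={p:.4f} ({direction}-regulated in tumor)")
```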
Analysis of protein expression.
|
Protein microarrays and high-throughput (HT) mass spectrometry (MS) can provide a snapshot of the proteins present in a biological sample. The former approach faces problems similar to those of microarrays targeted at mRNA; the latter involves the problem of matching large amounts of mass data against predicted masses from protein sequence databases, as well as the complicated statistical analysis of samples in which multiple, incomplete peptides from each protein are detected. Cellular protein localization in a tissue context can be achieved through affinity proteomics displayed as spatial data based on immunohistochemistry and tissue microarrays.
Analysis of regulation.
Gene regulation is the complex process by which a signal, such as an extracellular hormone, eventually leads to an increase or decrease in the activity of one or more proteins. Bioinformatics techniques have been applied to explore various steps in this process.
For example, gene expression can be regulated by nearby elements in the genome. Promoter analysis involves the identification and study of sequence motifs in the DNA surrounding the protein-coding region of a gene. These motifs influence the extent to which that region is transcribed into mRNA. Enhancer elements far away from the promoter can also regulate gene expression, through three-dimensional looping interactions. These interactions can be determined by bioinformatic analysis of chromosome conformation capture experiments.
|
Expression data can be used to infer gene regulation: one might compare microarray data from a wide variety of states of an organism to form hypotheses about the genes involved in each state. In a single-cell organism, one might compare stages of the cell cycle, along with various stress conditions (heat shock, starvation, etc.). Clustering algorithms can then be applied to expression data to determine which genes are co-expressed. For example, the upstream regions (promoters) of co-expressed genes can be searched for over-represented regulatory elements. Examples of clustering algorithms applied in gene clustering are k-means clustering, self-organizing maps (SOMs), hierarchical clustering, and consensus clustering methods.
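A minimal sketch of this approach, assuming scikit-learn and a fabricated genes-by-conditions expression matrix, groups genes by expression pattern with k-means:

```python
# A minimal sketch of co-expression clustering with k-means, assuming
# scikit-learn; the expression matrix is fabricated (genes x conditions).
import numpy as np
from sklearn.cluster import KMeans

genes = ["g1", "g2", "g3", "g4", "g5", "g6"]
expr = np.array([
    [1.0, 5.0, 1.2, 5.1],   # g1, g2: high under conditions 2 and 4
    [0.9, 4.8, 1.1, 5.3],
    [4.9, 1.0, 5.2, 0.8],   # g3, g4: the opposite pattern
    [5.1, 1.2, 4.8, 1.1],
    [2.0, 2.1, 1.9, 2.2],   # g5, g6: flat
    [2.2, 1.9, 2.1, 2.0],
])

labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(expr)
for gene, label in zip(genes, labels):
    print(gene, "-> cluster", label)
```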
Analysis of cellular organization.
Several approaches have been developed to analyze the location of organelles, genes, proteins, and other components within cells. A gene ontology category, "cellular component", has been devised to capture subcellular localization in many biological databases.
|
Microscopy and image analysis.
Microscope images allow the locations of organelles as well as molecules to be determined, which may be the source of abnormalities in diseases.
Protein localization.
Finding the location of proteins allows us to predict what they do. This is called protein function prediction. For instance, if a protein is found in the nucleus it may be involved in gene regulation or splicing. By contrast, if a protein is found in mitochondria, it may be involved in respiration or other metabolic processes. There are well developed protein subcellular localization prediction resources available, including protein subcellular location databases, and prediction tools.
Nuclear organization of chromatin.
Data from high-throughput chromosome conformation capture experiments, such as Hi-C and ChIA-PET, can provide information on the three-dimensional structure and nuclear organization of chromatin. Bioinformatic challenges in this field include partitioning the genome into domains, such as topologically associating domains (TADs), that are organised together in three-dimensional space.
|
Structural bioinformatics.
Finding the structure of proteins is an important application of bioinformatics. The Critical Assessment of Protein Structure Prediction (CASP) is an open competition in which research groups worldwide submit predicted protein models, which are then evaluated against experimentally determined structures that were withheld from the predictors.
Amino acid sequence.
The linear amino acid sequence of a protein is called the primary structure. The primary structure can be easily determined from the sequence of codons on the DNA gene that codes for it. In most proteins, the primary structure uniquely determines the 3-dimensional structure of a protein in its native environment. An exception is the misfolded prion protein involved in bovine spongiform encephalopathy. This structure is linked to the function of the protein. Additional structural information includes the "secondary", "tertiary" and "quaternary" structure. A viable general solution to predicting protein structure from sequence alone remains an open problem. Most efforts have so far been directed towards heuristics that work most of the time.
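Deriving the primary structure from a coding DNA sequence is mechanical translation of codons. A minimal sketch, assuming Biopython and using a toy open reading frame:

```python
# A minimal sketch of deriving a primary (amino acid) sequence from a
# coding DNA sequence, assuming Biopython; the ORF is a made-up example.
from Bio.Seq import Seq

coding_dna = Seq("ATGGCCATTGTAATGGGCCGCTGAAAGGGTGCCCGATAG")
protein = coding_dna.translate(to_stop=True)
print(protein)  # MAIVMGR -- translation halts at the first stop codon
```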
|
Homology.
In the genomic branch of bioinformatics, homology is used to predict the function of a gene: if the sequence of gene "A", whose function is known, is homologous to the sequence of gene "B," whose function is unknown, one could infer that B may share A's function. In structural bioinformatics, homology is used to determine which parts of a protein are important in structure formation and interaction with other proteins. Homology modeling is used to predict the structure of an unknown protein from existing homologous proteins.
One example of this is hemoglobin in humans and the hemoglobin in legumes (leghemoglobin), which are distant relatives from the same protein superfamily. Both serve the same purpose of transporting oxygen in the organism. Although both of these proteins have very different amino acid sequences, their protein structures are very similar, reflecting their shared function and shared ancestor.
Other techniques for predicting protein structure include protein threading and "de novo" (from scratch) physics-based modeling.
|
Another aspect of structural bioinformatics is the use of protein structures for virtual screening models, such as quantitative structure-activity relationship (QSAR) models and proteochemometric models (PCM). Furthermore, a protein's crystal structure can be used in simulations of, for example, ligand-binding and "in silico" mutagenesis studies.
In 2021, AlphaFold, a deep-learning-based program developed by Google's DeepMind, was shown to greatly outperform all other prediction methods, and it has released predicted structures for hundreds of millions of proteins in the AlphaFold Protein Structure Database.
Network and systems biology.
"Network analysis" seeks to understand the relationships within biological networks such as metabolic or protein–protein interaction networks. Although biological networks can be constructed from a single type of molecule or entity (such as genes), network biology often attempts to integrate many different data types, such as proteins, small molecules, gene expression data, and others, which are all connected physically, functionally, or both.
|
"Systems biology" involves the use of computer simulations of cellular subsystems (such as the networks of metabolites and enzymes that comprise metabolism, signal transduction pathways and gene regulatory networks) to both analyze and visualize the complex connections of these cellular processes. Artificial life or virtual evolution attempts to understand evolutionary processes via the computer simulation of simple (artificial) life forms.
Molecular interaction networks.
Tens of thousands of three-dimensional protein structures have been determined by X-ray crystallography and protein nuclear magnetic resonance spectroscopy (protein NMR) and a central question in structural bioinformatics is whether it is practical to predict possible protein–protein interactions only based on these 3D shapes, without performing protein–protein interaction experiments. A variety of methods have been developed to tackle the protein–protein docking problem, though it seems that there is still much work to be done in this field.
|
Other interactions encountered in the field include protein–ligand (including drug) and protein–peptide interactions. Molecular dynamics simulation of the movement of atoms about rotatable bonds is the fundamental principle behind computational algorithms, termed docking algorithms, for studying molecular interactions.
Biodiversity informatics.
Biodiversity informatics deals with the collection and analysis of biodiversity data, such as taxonomic databases, or microbiome data. Examples of such analyses include phylogenetics, niche modelling, species richness mapping, DNA barcoding, or species identification tools. A growing area is also macro-ecology, i.e. the study of how biodiversity is connected to ecology and human impact, such as climate change.
Others.
Literature analysis.
The enormous volume of published literature makes it virtually impossible for individuals to read every paper, resulting in disjointed sub-fields of research. Literature analysis aims to employ computational and statistical linguistics to mine this growing library of text resources.
|
The area of research draws from statistics and computational linguistics.
High-throughput image analysis.
Computational technologies are used to automate the processing, quantification, and analysis of large amounts of high-information-content biomedical imagery. Modern image analysis systems can improve an observer's accuracy, objectivity, or speed. Image analysis is important for both diagnostics and research.
High-throughput single cell data analysis.
Computational techniques are used to analyse high-throughput, low-measurement single cell data, such as that obtained from flow cytometry. These methods typically involve finding populations of cells that are relevant to a particular disease state or experimental condition.
Ontologies and data integration.
Biological ontologies are directed acyclic graphs of controlled vocabularies. They create categories for biological concepts and descriptions so they can be easily analyzed with computers. When categorised in this way, it is possible to gain added value from holistic and integrated analysis.
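A minimal sketch of how such a graph supports integrated analysis: each term points to its parent terms, and annotating a gene with a term implies all of its ancestors. The term names below are illustrative, not real Gene Ontology identifiers:

```python
# A toy ontology DAG: mapping from each term to its parent terms.
# The term names are illustrative placeholders.

parents = {
    "glucose metabolism": {"carbohydrate metabolism"},
    "carbohydrate metabolism": {"metabolic process"},
    "metabolic process": {"biological process"},
    "biological process": set(),
}

def ancestors(term: str) -> set[str]:
    """All terms implied by annotating something with `term`."""
    found: set[str] = set()
    frontier = set(parents.get(term, ()))
    while frontier:
        t = frontier.pop()
        if t not in found:
            found.add(t)
            frontier |= parents.get(t, set())
    return found

print(ancestors("glucose metabolism"))
# {'carbohydrate metabolism', 'metabolic process', 'biological process'}
```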
|
The OBO Foundry was an effort to standardise certain ontologies. One of the most widespread is the Gene Ontology, which describes gene function. There are also ontologies that describe phenotypes.
Databases.
Databases are essential for bioinformatics research and applications. Databases exist for many different information types, including DNA and protein sequences, molecular structures, phenotypes and biodiversity. Databases can contain both empirical data (obtained directly from experiments) and predicted data (obtained from analysis of existing data). They may be specific to a particular organism, pathway or molecule of interest. Alternatively, they can incorporate data compiled from multiple other databases. Databases can have different formats, access mechanisms, and be public or private.
Commonly used databases include GenBank for nucleotide sequences, UniProt for protein sequences, and the Protein Data Bank for molecular structures.
Software and tools.
Software tools for bioinformatics include simple command-line tools, more complex graphical programs, and standalone web-services. They are made by bioinformatics companies or by public institutions.
|
Open-source bioinformatics software.
Many free and open-source software tools have existed and continued to grow since the 1980s. The combination of a continued need for new algorithms for the analysis of emerging types of biological readouts, the potential for innovative "in silico" experiments, and freely available open code bases has created opportunities for research groups to contribute to both bioinformatics and the range of available open-source software, regardless of their funding arrangements. The open source tools often act as incubators of ideas, or community-supported plug-ins in commercial applications. They may also provide "de facto" standards and shared object models for assisting with the challenge of bioinformation integration.
Open-source bioinformatics software includes Bioconductor, BioPerl, Biopython, BioJava, BioJS, BioRuby, Bioclipse, EMBOSS, .NET Bio, Orange with its bioinformatics add-on, Apache Taverna, UGENE and GenoCAD.
The non-profit Open Bioinformatics Foundation and the annual Bioinformatics Open Source Conference promote open-source bioinformatics software.
|
Web services in bioinformatics.
SOAP- and REST-based interfaces have been developed to allow client computers to use algorithms, data, and computing resources from servers in other parts of the world. The main advantage is that end users do not have to deal with software and database maintenance overheads.
Basic bioinformatics services are classified by the EBI into three categories: SSS (Sequence Search Services), MSA (Multiple Sequence Alignment), and BSA (Biological Sequence Analysis). The availability of these service-oriented bioinformatics resources demonstrates the applicability of web-based bioinformatics solutions, which range from a collection of standalone tools with a common data format under a single web-based interface to integrative, distributed and extensible bioinformatics workflow management systems.
Bioinformatics workflow management systems.
A bioinformatics workflow management system is a specialized form of a workflow management system designed specifically to compose and execute a series of computational or data manipulation steps, or a workflow, in a bioinformatics application. Such systems are designed to provide an easy-to-use environment for individual scientists to create their own workflows, provide interactive tools enabling them to execute workflows and view results in real time, simplify the sharing and reuse of workflows, and enable scientists to track the provenance of workflow execution results and of the workflow creation steps.
|
Platforms providing this service include Galaxy, Kepler, Taverna, UGENE, Anduril, and HIVE.
BioCompute and BioCompute Objects.
In 2014, the US Food and Drug Administration sponsored a conference, held at the National Institutes of Health Bethesda Campus, to discuss reproducibility in bioinformatics. Over the next three years, a consortium of stakeholders met regularly to discuss what would become the BioCompute paradigm. These stakeholders included representatives from government, industry, and academic entities. Session leaders represented numerous branches of the FDA and NIH Institutes and Centers, non-profit entities including the Human Variome Project and the European Federation for Medical Informatics, and research institutions including Stanford, the New York Genome Center, and the George Washington University.
It was decided that the BioCompute paradigm would take the form of digital 'lab notebooks' that allow for the reproducibility, replication, review, and reuse of bioinformatics protocols. This was proposed to enable greater continuity within a research group over the course of normal personnel flux while furthering the exchange of ideas between groups. The US FDA funded this work so that information on pipelines would be more transparent and accessible to their regulatory staff.
|
In 2016, the group reconvened at the NIH in Bethesda and discussed the potential for a BioCompute Object, an instance of the BioCompute paradigm. This work was released as both a "standard trial use" document and a preprint paper uploaded to bioRxiv. The BioCompute Object allows the JSON-formatted record to be shared among employees, collaborators, and regulators.
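As a heavily simplified sketch of the idea of a shareable, JSON-formatted pipeline record, the following serializes a hypothetical two-step workflow; the field names are illustrative placeholders, not the actual BioCompute schema:

```python
# A heavily simplified sketch in the spirit of a BioCompute-style record;
# all field names and values below are hypothetical placeholders.
import json

record = {
    "name": "example variant-calling pipeline",  # hypothetical pipeline
    "version": "1.0",
    "steps": [
        {"tool": "read aligner", "inputs": ["reads.fastq"],
         "outputs": ["aln.bam"]},
        {"tool": "variant caller", "inputs": ["aln.bam"],
         "outputs": ["calls.vcf"]},
    ],
}

print(json.dumps(record, indent=2))  # a record that could be shared as JSON
```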
Education platforms.
While bioinformatics is taught as an in-person master's degree at many universities, there are many other methods and technologies available to learn and obtain certification in the subject. The computational nature of bioinformatics lends it to computer-aided and online learning. Software platforms designed to teach bioinformatics concepts and methods include Rosalind and online courses offered through the Swiss Institute of Bioinformatics Training Portal. The Canadian Bioinformatics Workshops provides videos and slides from training workshops on its website under a Creative Commons license. The 4273π (4273pi) project also offers open-source educational materials for free. The course runs on low-cost Raspberry Pi computers and has been used to teach adults and school pupils. 4273π is actively developed by a consortium of academics and research staff who have run research-level bioinformatics using Raspberry Pi computers and the 4273π operating system.
|
MOOC platforms also provide online certifications in bioinformatics and related disciplines, including Coursera's Bioinformatics Specialization at the University of California, San Diego, Genomic Data Science Specialization at Johns Hopkins University, and EdX's Data Analysis for Life Sciences XSeries at Harvard University.
Conferences.
There are several large conferences concerned with bioinformatics. Some of the most notable examples are Intelligent Systems for Molecular Biology (ISMB), the European Conference on Computational Biology (ECCB), and Research in Computational Molecular Biology (RECOMB).
|
Brian De Palma
Brian Russell De Palma (; born September 11, 1940) is an American film director and screenwriter. With a career spanning over 50 years, he is best known for work in the suspense, crime and psychological thriller genres. De Palma was a leading member of the New Hollywood generation.
"Carrie" (1976), his adaptation of Stephen King's novel of the same name, put him on the map. He enjoyed commercial success with "Dressed to Kill" (1980), "The Untouchables" (1987) and (1996) and made cult classics such as "Greetings" (1968), "Hi, Mom!" (1970), "Sisters" (1972), "Phantom of the Paradise" (1974), and "The Fury" (1978).
As a young director, De Palma dreamed of being the "American Godard". His style is allusive; he paid homage to Alfred Hitchcock in "Obsession" (1976) and "Body Double" (1984); "Blow Out" (1981) is based on Michelangelo Antonioni's "Blowup" (1966) and "Scarface" (1983), his remake of Howard Hawks's 1932 film, is dedicated to Hawks and Ben Hecht. His work has been criticized for its violence and sexual content but has also been championed by American critics such as Roger Ebert and Pauline Kael. In 2015, he was interviewed about his work in a well-received documentary by Noah Baumbach.
|
Early life and education.
De Palma was born on September 11, 1940, in Newark, New Jersey, the youngest of three boys. His Italian-American parents were Vivienne DePalma (née Muti), and Anthony F. DePalma, an orthopedic surgeon who was the son of immigrants from Alberona, Province of Foggia. He was raised in Philadelphia, Pennsylvania and New Hampshire, and attended various Protestant and Quaker schools, eventually graduating from Friends' Central School. He had a poor relationship with his father, and would secretly follow him to record his adulterous behavior; this would eventually inspire the teenage character in De Palma's "Dressed to Kill" (1980). When he was in high school, he built computers. He won a regional science-fair prize for his project "An Analog Computer to Solve Differential Equations".
Enrolled at Columbia University as a physics student, De Palma became enraptured with filmmaking after seeing Orson Welles's "Citizen Kane" (1941) and Alfred Hitchcock's "Vertigo" (1958). After receiving his undergraduate degree in 1962, De Palma enrolled at the newly mixed-gender Sarah Lawrence College as a graduate student in their theater department, earning an M.A. in the discipline in 1964 and becoming one of the first male students in a predominantly female school. Once there, influences as various as drama teacher Wilford Leach, the Maysles brothers, Michelangelo Antonioni, Andy Warhol and Jean-Luc Godard, impressed upon De Palma the many styles and themes that would shape his work in the coming decades.
|
Career.
1963–1976: Rise to prominence.
An early association with a young Robert De Niro resulted in "The Wedding Party". The film, co-directed with Wilford Leach and producer Cynthia Munroe, had been shot in 1963 but remained unreleased until 1969, when De Palma's star had risen sufficiently in the Greenwich Village filmmaking scene. De Niro was unknown at the time; the credits mistakenly misspell his name as "Robert Denero". The film is noteworthy for its invocation of silent film techniques and use of the jump-cut. De Palma followed this style with various small films for the NAACP and the Treasury Department.
During the 1960s, De Palma began making a living producing documentaries, notably "The Responsive Eye" (1966), about "The Responsive Eye" op-art exhibit curated by William Seitz for MoMA in 1965. In an interview with Joseph Gelmis from 1969, De Palma described the film as "very good and very successful. It's distributed by Pathe Contemporary and makes lots of money. I shot it in four hours, with synched sound. I had two other guys shooting people's reactions to the paintings, and the paintings themselves."
|
"Dionysus in '69" (1969) was De Palma's other major documentary from this period. The film records the Performance Group's performance of Euripides's "The Bacchae", starring, amongst others, De Palma regular William Finley. The play is noted for breaking traditional barriers between performers and audience. The film's most striking quality is its extensive use of the split-screen. De Palma recalls that he was "floored" by this performance upon first sight, and in 1973 recounts how he "began to try and figure out a way to capture it on film. I came up with the idea of split-screen, to be able to show the actual audience involvement, to trace the life of the audience and that of the play as they merge in and out of each other."
De Palma's most significant features from this decade are "Greetings" (1968) and "Hi, Mom!" (1970). Both films star De Niro and espouse a leftist revolutionary viewpoint in the spirit of the time. "Greetings" was entered into the 19th Berlin International Film Festival, where it won a Silver Bear award. His other major film from this period is the slasher comedy "Murder a la Mod" (1968). Each of these films experiments with narrative and intertextuality, reflecting De Palma's stated intention to become the "American Godard".
|
In 1970, De Palma left New York for Hollywood at age thirty to make "Get to Know Your Rabbit" (1972), starring Orson Welles and Tommy Smothers. Making the film was a crushing experience for De Palma, as Smothers did not like many of De Palma's ideas. Here he made several small, studio and independently released films. Among them were the horror film "Sisters" (1972), the rock musical "Phantom of the Paradise" (1974) and "Obsession" (1976), a variation on theme of Alfred Hitchcock's "Vertigo" (1958) scored by Hitchcock's frequent collaborator Bernard Herrmann.
1976–1979: Breakthrough.
In November 1976, De Palma released an adaptation of Stephen King's novel "Carrie". Though some see the psychic thriller as De Palma's bid for a blockbuster, the project was in fact small, underfunded by United Artists, and well under the cultural radar during the early months of production, as King's novel was not yet a bestseller. De Palma gravitated toward the project and changed crucial plot elements based upon his own predilections. The cast was mostly young and relatively new, though Sissy Spacek and John Travolta had gained attention for previous work in, respectively, film and sitcoms. "Carrie" became De Palma's first genuine box-office success, garnering Spacek and Piper Laurie Oscar nominations for their performances. Pre-production for the film had coincided with the casting process for George Lucas's "Star Wars", and many of the actors cast in De Palma's film had been earmarked as contenders for Lucas's movie, and vice versa. Its suspense sequences are buttressed by teen comedy tropes, and its use of split-screen, split-diopter and slow motion shots tell the story visually rather than through dialogue. As for Lucas's project, De Palma complained in an early viewing of "Star Wars" that the opening text crawl was poorly written and volunteered to help edit the text to a more concise and engaging form.
|
The financial and critical success of "Carrie" allowed De Palma to pursue more personal material. Alfred Bester's novel "The Demolished Man" had fascinated De Palma since the late 1950s and appealed to his background in mathematics and avant-garde storytelling. Its unconventional unfolding of plot (exemplified in its mathematical layout of dialogue) and its stress on perception have analogs in De Palma's filmmaking. He sought to adapt it numerous times, though the project would carry a substantial price tag, and has yet to appear on-screen (Steven Spielberg's 2002 adaptation of Philip K. Dick's "Minority Report" bears striking similarities to De Palma's visual style and some of the themes of "The Demolished Man"). The result of his experience with adapting "The Demolished Man" was the 1978 science fiction psychic thriller "The Fury", starring Kirk Douglas, Carrie Snodgress, John Cassavetes and Amy Irving. The film was admired by Jean-Luc Godard, who featured a clip in his mammoth "Histoire(s) du cinéma", and Pauline Kael, who championed both "The Fury" and De Palma. The film boasted a larger budget than "Carrie", though the consensus view at the time was that De Palma was repeating himself, with diminishing returns.
1980–1996: Established career.
|
The 1980s were marked by some of De Palma's best known films, including the erotic thriller "Dressed to Kill" (1980) starring Michael Caine and Angie Dickinson. Although the film received critical acclaim, it caused controversy for its negative depiction of the transgender community. The following year he directed "Blow Out" (1981), a variation on Michelangelo Antonioni's "Blow-Up" (1966) and Francis Ford Coppola's "The Conversation" (1974) starring John Travolta, Nancy Allen and John Lithgow. The film received critical acclaim. Kael wrote: "De Palma has sprung to the place that Robert Altman achieved with films such as "McCabe & Mrs. Miller" and "Nashville" and that Francis Ford Coppola reached with "The Godfather" films—that is, to the place where genre is transcended and what we're moved by is an artist's vision. It's a great movie."
De Palma directed "Scarface" (1983), a remake of Howard Hawks's 1932 film, starring Al Pacino and Michelle Pfeiffer, with a screenplay by Oliver Stone. The film received mixed reviews, drawing criticism for its negative ethnic stereotypes as well as its violence and profanity. It has since been re-evaluated and is now considered a cult classic. The following year he made another erotic thriller, "Body Double" (1984), starring Craig Wasson and Melanie Griffith. The film also received mixed reviews but has since been reassessed and found acclaim. De Palma directed the music video for Bruce Springsteen's single "Dancing in the Dark" the same year.
|
In 1987, De Palma directed the crime film "The Untouchables", loosely based on the book of the same name and adapted by David Mamet. The film stars Kevin Costner, Andy Garcia, Robert De Niro and Sean Connery, the last of whom won the Academy Award for Best Supporting Actor for the film. It received critical acclaim and box-office success. De Palma's Vietnam War film "Casualties of War" (1989) won critical praise but performed poorly in theatres, and "The Bonfire of the Vanities" (1990) was a notorious failure with both critics and audiences. De Palma then had subsequent successes with "Raising Cain" (1992) and "Carlito's Way" (1993). "Mission: Impossible" (1996) was his highest-grossing film and launched the "Mission: Impossible" franchise.
1998–present: Career slump.
De Palma's work after "Mission: Impossible" has been less well received. His ensuing films "Snake Eyes" (1998), "Mission to Mars" (2000), and "Femme Fatale" (2002) all failed at the box office and received generally poor reviews, though "Femme Fatale" has since been revived in the eyes of many film critics and became a cult classic. His 2006 adaptation of "The Black Dahlia" was also unsuccessful and is currently the last movie De Palma has directed with backing from Hollywood.
|
A political controversy erupted over the portrayal of US soldiers in De Palma's 2007 film "Redacted". Loosely based on the 2006 Mahmudiyah killings by American soldiers in Iraq, the film echoes themes that appeared in "Casualties of War". "Redacted" received a limited release in the United States and grossed less than $1 million against a $5 million budget.
De Palma's output has slowed since the release of "Redacted", with subsequent projects often falling into development hell, due mostly to creative differences. In 2012, his film "Passion" starring Rachel McAdams and Noomi Rapace was selected to compete for the Golden Lion at the 69th Venice International Film Festival but received mixed reviews and was financially unsuccessful.
De Palma's next project was the thriller "Domino" (2019), released two years after the film began production. It received generally negative reviews and was released direct-to-VOD in the United States, grossing less than half a million dollars internationally. De Palma has also expressed dissatisfaction with both the production of the film and the final result; "I never experienced such a horrible movie set."
|
In 2018, De Palma published his debut novel in France, "Les serpents sont-ils nécessaires?" (English translation: "Are Snakes Necessary?"), co-written with Susan Lehman. It was published in the U.S. in 2020. De Palma and Lehman also wrote a second book, currently unpublished, called "Terry", based on one of De Palma's passion projects about a French film production making an adaptation of "Thérèse Raquin".
It was announced in 2018 that De Palma would write and direct a horror film titled "Predator", inspired by the Harvey Weinstein sexual abuse cases, and would direct Wagner Moura in a film titled "Sweet Vengeance", based on two real-life murder cases. Filming on the latter was to have begun in early 2019 in Montevideo. In a 2020 interview with the "Associated Press", De Palma confirmed that "Predator" was retitled "Catch and Kill" and added that he was to have started filming in August that same year.
Despite rumors of his supposed retirement after having had "Sweet Vengeance" and "Catch and Kill" fall through, De Palma revealed to "Vulture" in September 2024 that he had "one other" undisclosed film he was planning to make, and that he was in the process of trying to cast it.
|
Filmmaking style, techniques and trademarks.
De Palma's films tend to fall into two categories: his thrillers ("Sisters", "Body Double", "Obsession", "Dressed to Kill", "Blow Out", "Raising Cain") and his more commercial films ("The Untouchables", "Carlito's Way", and "Mission: Impossible"). He has often made several "De Palma" films in a row before directing in a different genre, but he always returns to familiar territory. Because of the subject matter and graphic violence of some of his films, such as "Dressed to Kill", "Scarface" and "Body Double", they have often been at the center of controversy with the Motion Picture Association of America, film critics, and the viewing public.
Inspirations.
De Palma frequently quotes and references other directors' work. His early work was inspired by the films of Jean-Luc Godard. Michelangelo Antonioni's "Blowup" and Francis Ford Coppola's "The Conversation" plots were used for the basis of "Blow Out". "The Untouchables" finale shoot out in the train station is a clear borrowing from the Odessa Steps sequence in Sergei Eisenstein's "The Battleship Potemkin". The main plot from "Rear Window" was used for "Body Double", while it also used elements of "Vertigo". "Vertigo" was also the basis for "Obsession". "Dressed to Kill" was a note-for-note homage to Hitchcock's "Psycho", including such moments as the surprise death of the lead actress and the exposition scene by the psychiatrist at the end.
|
Camera shots.
Film critics have often noted De Palma's penchant for unusual camera angles and compositions. He often frames characters against the background using a canted-angle shot. Split-screen techniques have been used to show two separate events happening simultaneously. To emphasize the dramatic impact of a certain scene, De Palma has employed a 360-degree camera pan. Slow sweeping, panning, and tracking shots are often used throughout his films, often through precisely choreographed long takes lasting for minutes without cutting. Split-focus shots, often referred to as split-diopter shots, are used by De Palma to emphasize the foreground person or object while simultaneously keeping a background person or object in focus. Slow motion is frequently used in his films to increase suspense.
Personal life.
De Palma has been married and divorced three times, to actress Nancy Allen (1979–1983), producer Gale Anne Hurd (1991–1993), and Darnell Gregorio (1995–1997). He has one daughter from his marriage to Hurd, and one daughter from his marriage to Gregorio. He resides in Manhattan, New York.
|
Reception and legacy.
De Palma is often cited as a leading member of the New Hollywood generation of film directors, a distinct pedigree who either emerged from film schools or are overtly cine-literate. His contemporaries include Martin Scorsese, Paul Schrader, John Milius, George Lucas, Francis Ford Coppola, Steven Spielberg, John Carpenter, and Ridley Scott. His artistry in directing and use of cinematography and suspense in several of his films has often been compared to the work of Alfred Hitchcock. Psychologists have been intrigued by De Palma's fascination with pathology, by the aberrant behavior aroused in characters who find themselves manipulated by others.
De Palma has encouraged and fostered the filmmaking careers of directors such as Mark Romanek and Keith Gordon, the latter of whom collaborated with him twice as an actor, both in 1979's "Home Movies" and 1980's "Dressed to Kill". Filmmakers influenced by De Palma include Terrence Malick, Quentin Tarantino, Ronny Yu, Don Mancini, Nacho Vigalondo, and Jack Thomas Smith. During an interview with De Palma, Quentin Tarantino said that "Blow Out" is one of his all-time favorite films, and that after watching "Scarface" he knew how to make his own film. John Travolta's performance as Jack Terry in "Blow Out" even resulted in Tarantino casting him as Vincent Vega in his 1994 film "Pulp Fiction", which would go on to reinvigorate Travolta's then-declining career. Tarantino also placed "Carrie" at number eight in a list of his favorite films.
|
Critics who frequently admire De Palma's work include Pauline Kael and Roger Ebert. Kael wrote in her review of "Blow Out", "At forty, Brian De Palma has more than twenty years of moviemaking behind him, and he has been growing better and better. Each time a new film of his opens, everything he has done before seems to have been preparation for it." In his review of "Femme Fatale", Roger Ebert wrote about the director: "De Palma deserves more honor as a director. Consider also these titles: "Sisters", "Blow Out", "The Fury", "Dressed to Kill", "Carrie", "Scarface", "Wise Guys", "Casualties of War", "Carlito's Way", "Mission: Impossible". Yes, there are a few failures along the way ("Snake Eyes", "Mission to Mars", "The Bonfire of the Vanities"), but look at the range here, and reflect that these movies contain treasure for those who admire the craft as well as the story, who sense the glee with which De Palma manipulates images and characters for the simple joy of being good at it. It's not just that he sometimes works in the style of Hitchcock, but that he has the nerve to."
|
The influential French film magazine "Cahiers du Cinéma" has placed five of De Palma's films ("Carlito's Way", "Mission: Impossible", "Snake Eyes", "Mission to Mars", and "Redacted") on its annual top ten list, with "Redacted" placing first on the 2008 list. The magazine also listed "Carlito's Way" as the greatest film of the 1990s.
Julie Salamon has written that critics have accused De Palma of being "a perverse misogynist", to which De Palma has responded with, "I'm always attacked for having an erotic, sexist approach chopping up women, putting women in peril. I'm making suspense movies! What else is going to happen to them?"
His films have also been interpreted as feminist and examined for their perceived queer affinities. In "Film Comment"'s "Queer and Now and Then" column on "Femme Fatale", film critic Michael Koresky writes that "De Palma's films radiate an undeniable queer energy" and notes the "intense appeal" De Palma's films have for gay critics. In her book "The Erotic Thriller in Contemporary Cinema", Linda Ruth Williams writes that "De Palma understood the cinematic potency of dangerous fucking, perhaps earlier than his feminist detractors".
|
Robin Wood considered "Sisters" an overtly feminist film, writing that "one can define the monster of 'Sisters' as women's liberation; adding only that the film follows the time-honored horror film tradition of making the monster emerge as the most sympathetic character and its emotional center." Pauline Kael's review of "Casualties of War", "A Wounded Apparition", describes the film as "feminist" and notes that "De Palma was always involved in examining (and sometimes satirizing) victimization, but he was often accused of being a victimizer". Helen Grace, in a piece for "Lola", writes that upon seeing "Dressed to Kill" amidst calls for a boycott from the feminist groups Women Against Violence Against Women and Women Against Pornography, the film "seemed to say more about masculine anxiety than about the fears that women were expressing in relation to the film". De Palma has also expressed contrition for the depiction of a transgender murderer in the film, saying in a 2016 interview, "I don't know what the transgender community would think [of the film now]... Obviously I realize that it's not good for their image to be transgender and also be a psychopathic murderer. But I think that [perception] passes with time. We're in a different time." In the same interview, he said he was "glad" that the film had become "a favorite of the gay community".
|
David Thomson wrote in his entry for De Palma, "There is a self-conscious cunning in De Palma's work, ready to control everything except his own cruelty and indifference." Matt Zoller Seitz objected to this characterisation, writing that there are films from the director which can be seen as "straightforwardly empathetic and/or moralistic".
His life and career in his own words were the subject of the 2015 documentary "De Palma", directed by Noah Baumbach and Jake Paltrow.
|
North American B-25 Mitchell
The North American B-25 Mitchell is an American medium bomber that was introduced in 1941 and named in honor of Brigadier General William "Billy" Mitchell, a pioneer of U.S. military aviation. Used by many Allied air forces, the B-25 served in every theater of World War II, and after the war ended, many remained in service, operating across four decades. Produced in numerous variants, nearly 10,000 B-25s were built. It was the most-produced American medium bomber and the third most-produced American bomber overall. These included several limited models such as the F-10 reconnaissance aircraft, the AT-24 crew trainers, and the United States Marine Corps' PBJ-1 patrol bomber.
Design and development.
In March 1939, the US Army Air Corps issued a specification for a medium bomber capable of carrying a payload of 2,400 lb (1,100 kg) over 1,200 mi (1,900 km) at 300 mph (480 km/h). North American Aviation (NAA) used its NA-40B design to develop the NA-62, which competed for the medium bomber contract. No YB-25 was available for prototype service tests. In September 1939, the Air Corps ordered the NA-62 into production as the B-25, along with the other new Air Corps medium bomber, the Martin B-26 Marauder, "off the drawing board".
|
Early into B-25 production, NAA incorporated a significant redesign of the wing dihedral. The first nine aircraft had a constant dihedral, meaning the wing had a consistent upward angle from the fuselage to the wingtip. This design caused stability problems. "Flattening" the outer wing panels just outboard of the engine nacelles nullified the problem and gave the B-25 its gull-wing configuration. Less noticeable changes during this period included an increase in the size of the tail fins and a decrease in their inward tilt at their tops.
NAA continued design and development in 1940 and 1941. Both the B-25A and B-25B series entered USAAF service. The B-25B was operational in 1942. Combat requirements led to further developments. Before the year was over, NAA was producing the B-25C and B-25D series at different plants. Also in 1942, the manufacturer began design work on the cannon-armed B-25G series. The NA-100 of 1943 and 1944 was an interim armament development at the Kansas City complex known as the B-25D2. Similar armament upgrades by U.S.-based commercial modification centers involved about half of the B-25G series. Further development led to the B-25H, B-25J, and B-25J2. The gunship design concept dates to late 1942, when NAA sent a field technical representative to the Southwest Pacific Area (SWPA). The factory-produced B-25G entered production during the NA-96 order, followed by the redesigned B-25H gunship. The B-25J reverted to the bomber role, but it, too, could be outfitted as a strafer.
|
NAA manufactured more aircraft during World War II than any other U.S. company, and it was the first company to produce trainers, bombers, and fighters simultaneously (the AT-6/SNJ Texan/Harvard, the B-25 Mitchell, and the P-51 Mustang). It produced B-25s at its Inglewood, California, main plant and built an additional 6,608 aircraft at its Kansas City, Kansas, plant at Fairfax Airport.
After the war, the USAF placed a contract for the TB-25L trainer in 1952. This was a modification program by Hayes of Birmingham, Alabama. Its primary role was reciprocating engine pilot training.
A development of the B-25 was the North American XB-28 Dragon, designed as a high-altitude bomber. Two prototypes were built with the second prototype, the XB-28A, evaluated as a photo-reconnaissance platform, but the aircraft did not enter production.
Flight characteristics.
The B-25 was a safe and forgiving aircraft to fly. With one engine out, 60° banking turns into the dead engine were possible, and control could be easily maintained down to 145 mph (230 km/h). The pilot had to remember to use the rudder to maintain engine-out directional control at low speeds after takeoff; if this maneuver were attempted with ailerons, the aircraft could snap out of control. The tricycle landing gear made for excellent visibility while taxiing. The only significant complaint about the B-25 was its extremely noisy engines; as a result, many pilots eventually suffered from some degree of hearing loss.
|
The high noise level was due to design and space restrictions in the engine cowlings, which resulted in the exhaust "stacks" protruding directly from the cowling ring and partly covered by a small triangular fairing. This arrangement directed exhaust and noise directly at the pilot and crew compartments.
Durability.
The Mitchell was exceptionally sturdy and could withstand tremendous punishment. One B-25C of the 321st Bomb Group was nicknamed "Patches" because its crew chief painted all the aircraft's flak hole patches with bright yellow zinc chromate primer. By the end of the war, this aircraft had completed over 300 missions, had been belly-landed six times, and had over 400 patched holes. The airframe of "Patches" was so distorted from battle damage that straight-and-level flight required 8° of left aileron trim and 6° of right rudder, causing the aircraft to "crab" sideways across the sky.
Operational history.
Asia-Pacific.
Most B-25s in American service were used in the war against Japan in Asia and the Pacific. The Mitchell fought from the Northern Pacific to the South Pacific and the Far East. These areas included the campaigns in the Aleutian Islands, Papua New Guinea, the Solomon Islands, New Britain, China, Burma, and the island-hopping campaign in the Central Pacific, as well as the Doolittle Raid. The aircraft's potential as a ground-attack aircraft emerged during the Pacific war: the jungle environment reduced the usefulness of medium-level bombing and made low-level attack the best tactic. Using mast-height attack tactics and skip bombing, the B-25 proved itself a capable anti-shipping weapon and sank many enemy vessels. An ever-increasing number of forward-firing guns made the B-25 a formidable strafing aircraft for island warfare. The strafer models were the B-25C1/D1, the B-25J1, and, with the NAA strafer nose, the J2 subseries.
|
In Burma, the B-25 was used to attack Japanese communication links, especially bridges in central Burma. It also helped supply the besieged troops at Imphal in 1944. The China Air Task Force, the Chinese American Composite Wing, the First Air Commando Group, the 341st Bomb Group, and eventually, the relocated 12th Bomb Group, all operated the B-25 in the China Burma India Theater. Many of these missions involved battle-field isolation, interdiction, and close air support.
Later in the war, as the USAAF acquired bases in other parts of the Pacific, the Mitchell could strike targets in Indochina, Formosa, and Kyushu, increasing the usefulness of the B-25. It was also used in some of the shortest raids of the Pacific War, striking from Saipan against Guam and Tinian. The 41st Bomb Group used it against Japanese-occupied islands that had been bypassed by the main campaign, such as the Marshall Islands.
Middle East and Italy.
The first B-25s arrived in Egypt and were carrying out independent operations by October 1942. Operations there against Axis airfields and motorized vehicle columns supported the ground actions of the Second Battle of El Alamein. Thereafter, the aircraft took part in the rest of the campaign in North Africa, the invasion of Sicily, and the advance up Italy. From the Strait of Messina to the Aegean Sea, the B-25 conducted sea sweeps as part of the coastal air forces. In Italy, the B-25 was used in the ground attack role, concentrating on attacks against road and rail links in Italy, Austria, and the Balkans. The B-25 had a longer range than the Douglas A-20 Havoc and Douglas A-26 Invader, allowing it to reach further into occupied Europe. The five bombardment groups – 20 squadrons – of the Ninth and Twelfth Air Forces that used the B-25 in the Mediterranean Theater of Operations were the only U.S. units to employ the B-25 in Europe.
Europe.
The RAF received nearly 900 Mitchells, using them to replace Douglas Bostons, Lockheed Venturas, and Vickers Wellington bombers. The Mitchell entered active RAF service on 22 January 1943 and was at first used to bomb targets in occupied Europe. After the Normandy invasion, RAF and French units used Mitchells in support of the Allied advance, and several squadrons moved to forward airbases on the continent. Within Europe, the USAAF employed the B-25 in combat only in the Mediterranean Theater of Operations.
US Army Air Forces.
The B-25B found fame as the bomber used in the 18 April 1942 Doolittle Raid, in which 16 B-25Bs led by Lieutenant Colonel Jimmy Doolittle attacked mainland Japan, four months after the Japanese attack on Pearl Harbor. The mission gave a much-needed lift in morale to the Americans and alarmed the Japanese, who had believed their home islands to be inviolable by enemy forces. Although the actual damage done was relatively minor, the raid forced the Japanese to divert troops to home defense for the remainder of the war.
The raiders took off from the carrier USS "Hornet" and bombed Tokyo and four other Japanese cities. Fifteen of the bombers subsequently crash-landed en route to recovery fields in eastern China. The losses resulted from the task force being spotted by a Japanese vessel, which forced the bombers to take off early; fuel exhaustion; stormy nighttime conditions with zero visibility; and the failure to activate electronic homing aids at the recovery bases. Only one B-25 landed intact, in Vladivostok, where its five-man crew was interned and the aircraft confiscated. Of the 80 aircrew members, 69 survived their historic mission and eventually made it back to American lines.
Following additional modifications, including a Plexiglas dome for navigational sightings in place of the navigator's overhead window, heavier nose armament, and de-icing and anti-icing equipment, the B-25C entered USAAF operations. Through block 20, the B-25C and B-25D differed only in the location of manufacture: the C series at Inglewood, California, and the D series at Kansas City, Kansas. After block 20, some NA-96s began the transition to the G series, while some NA-87s acquired interim modifications eventually produced as the B-25D2 and ordered as the NA-100. NAA built a total of 3,915 B-25Cs and Ds during World War II.
Although the B-25 was designed to bomb from medium altitudes in level flight, it was frequently used in the Southwest Pacific theater for treetop-level strafing and for missions with parachute-retarded fragmentation bombs against Japanese airfields in New Guinea and the Philippines. These heavily armed Mitchells were field-modified at Townsville, Australia, under the direction of Major Paul I. "Pappy" Gunn and North American technical representative Jack Fox. These "commerce destroyers" were also used on strafing and skip-bombing missions against Japanese shipping trying to resupply their armies.
Under the leadership of Lieutenant General George C. Kenney, Mitchells of the Far East Air Forces and its components, the Fifth and Thirteenth Air Forces, devastated Japanese targets in the Southwest Pacific Theater during 1944 and 1945, and the type played a significant role in pushing the Japanese back to their home islands. The B-25 also operated with great effect in the Central Pacific, Alaska, North Africa, Mediterranean, and China-Burma-India theaters.
The USAAF Antisubmarine Command (AAFAC) made great use of the B-25 in 1942 and 1943. Some of the earliest B-25 bomb groups had also flown the Mitchell on coastal patrols after the Pearl Harbor attack, before AAFAC was organized. Many of the two dozen or so antisubmarine squadrons flew the B-25C, D, and G series in the American Theater antisubmarine campaign, often in the distinctive white sea-search camouflage.
Combat developments.
Use as a gunship.
In anti-shipping operations, the USAAF had an urgent need for hard-hitting aircraft, and North American responded with the B-25G. In this series, the transparent nose and the bombardier/navigator position gave way to a shorter, hatched nose mounting two fixed .50 in (12.7 mm) machine guns and a manually loaded 75 mm (2.95 in) M4 cannon, one of the largest weapons fitted to an aircraft. Comparable installations were the British 57 mm gun-armed Mosquito Mk. XVIII and the autoloading German 75 mm long-barrel "Bordkanone BK 7,5" fitted to both the Henschel Hs 129B-3 and the Junkers Ju 88P-1. The B-25G's shorter nose placed the cannon breech behind the pilot, where it could be manually loaded and serviced by the navigator, whose crew station was moved to a position just behind the pilot. The navigator signaled the pilot when the gun was ready, and the pilot fired the weapon using a button on his control wheel.
The Royal Air Force, U.S. Navy, and Soviet VVS each conducted trials with this series, but none adopted it. The G series comprised one prototype, five preproduction C conversions, 58 C series modifications, and 400 production aircraft, for a total of 464 B-25Gs. The final version, the G-12, incorporated an interim armament modification that eliminated the lower Bendix turret and added a starboard dual gun pack, waist guns, and a canopy for the tail gunner to improve his view when firing the single tail gun. In April 1945, the air depots in Hawaii refurbished about two dozen of these aircraft, adding the eight-gun nose and rocket launchers in the upgrade.
The B-25H series continued the development of the gunship version; NAA's Inglewood plant produced 1,000 of them. The H had even more firepower. Most replaced the M4 gun with the lighter T13E1, designed specifically for the aircraft, though some 20 H-1 block aircraft completed by the Republic Aviation modification center at Evansville retained the M4 and the two-machine-gun nose armament. The 75 mm (2.95 in) gun fired at a muzzle velocity of approximately 619 m/s (2,030 ft/s). Because of its slow rate of fire (about four rounds could be fired in a single strafing run), its relative ineffectiveness against ground targets, and its substantial recoil, the 75 mm gun was sometimes removed from both G and H models and replaced with two additional .50 in (12.7 mm) machine guns as a field modification. In the new FEAF, these were redesignated the G1 and H1 series, respectively.
The H series normally came from the factory mounting four fixed, forward-firing .50 in (12.7 mm) machine guns in the nose; four in a pair of under-cockpit conformal flank-mount gun pod packages (two guns per side); two more in the manned dorsal turret, relocated forward to a position just behind the cockpit (which became standard for the J-model); one each in a pair of new waist positions, introduced simultaneously with the forward-relocated dorsal turret; and lastly, a pair of guns in a new tail-gunner's position. Company promotional material bragged that the B-25H could "bring to bear 10 machine guns coming and four going, in addition to the 75 mm cannon, eight rockets, and 3,000 lb (1,360 kg) of bombs."
The H had a modified cockpit with single flight controls operated by the pilot. The co-pilot's station and controls were removed and replaced by a smaller seat for the navigator/cannoneer. The radio operator's crew position was aft of the bomb bay, with access to the waist guns. Factory production totals were 405 B-25Gs and 1,000 B-25Hs, with 248 of the latter used by the Navy as PBJ-1Hs. Elimination of the co-pilot saved weight, and moving the dorsal turret forward partially counterbalanced the waist guns and the manned rear turret.
Return to medium bomber.
Following the two gunship series, NAA returned to the medium bomber configuration with the B-25J series. It combined features of the interim NA-100 and the H series, having both the bombardier's station and fixed guns of the D series and the forward turret and refined armament of the H series. NAA also produced a strafer nose, first shipped to air depots as kits and then introduced on the production line in blocks alternating with the bombardier nose. The solid metal "strafer" nose housed eight centerline Browning M2 .50 caliber machine guns; the remainder of the armament was as in the H-5. NAA also supplied kits to mount eight underwing 5-inch High Velocity Aircraft Rockets (HVARs) just outside the propeller arcs, on zero-length launch rails, four per wing.
Postwar (USAF) use.
In 1947, legislation created an independent United States Air Force, and by that time the B-25 inventory numbered only a few hundred. Some B-25s continued in service into the 1950s in training, reconnaissance, and support roles. The principal use during this period was undergraduate training of multiengine pilots slated for reciprocating-engine or turboprop cargo, aerial refueling, or reconnaissance aircraft. Others were assigned to Air National Guard units in training roles supporting Northrop F-89 Scorpion and Lockheed F-94 Starfire operations.
During its USAF tenure, many B-25s received the so-called "Hayes modification"; as a result, surviving B-25s often have exhaust systems with a semi-collector ring that divides the exhaust into two separate paths: the upper seven cylinders exhaust into the collector ring, while the remaining cylinders retain individual ports.
TB-25J-25-NC Mitchell "44-30854", the last B-25 in the USAF inventory, assigned to March AFB, California, as of March 1960, was flown from Turner Air Force Base, Georgia, to Eglin AFB, Florida, on 21 May 1960, the last flight by a USAF B-25. It was presented by Brigadier General A. J. Russell, commander of SAC's 822d Air Division at Turner AFB, to the Air Proving Ground Center commander, Brigadier General Robert H. Warren, who in turn presented the bomber to Valparaiso, Florida, Mayor Randall Roberts on behalf of the Niceville-Valparaiso Chamber of Commerce. Four of the original Tokyo Raiders were present for the ceremony: Colonel (later Major General) David Jones, Colonel Jack Simms, Lieutenant Colonel Joseph Manske, and retired Master Sergeant Edwin W. Horton. The aircraft was donated back to the Air Force Armament Museum c. 1974 and marked as Doolittle's "40-2344".
U.S. Navy and USMC.
The U.S. Navy designation for the Mitchell was PBJ-1; apart from increased use of radar, it was configured like its Army Air Forces counterparts. Under the pre-1962 USN/USMC/USCG aircraft designation system, PBJ-1 stood for Patrol (P) Bomber (B) built by North American Aviation (J), first variant (-1). The PBJ had its origin in a mid-1942 inter-service agreement between the Navy and the USAAF: the Navy's Boeing XPBB Sea Ranger flying boat, which competed with the B-29 Superfortress for engines, was cancelled and the Boeing Renton plant given over to B-29 production, in exchange for which the Navy received part of the Kansas City Mitchell production, including the interservice transfer of 50 B-25Cs and 152 B-25Ds. The bombers carried Navy bureau numbers (BuNos), beginning with BuNo 34998. The first PBJ-1 arrived in February 1943, and nearly all reached Marine Corps squadrons, beginning with Marine Bombing Squadron 413 (VMB-413). Following the AAFAC format, the Marine Mitchells had search radar in a retractable radome replacing the remotely operated ventral turret. Later D and J series had nose-mounted APS-3 radar, and later still, J and H series mounted radar in the starboard wingtip. B-25H and J series aircraft, delivered in large quantities, became the PBJ-1H and PBJ-1J, respectively, and often operated alongside earlier PBJ series in Marine squadrons.
The PBJs were operated almost exclusively by the Marine Corps as land-based bombers. The U.S. Marine Corps established Marine bomber squadrons (VMB), beginning with VMB-413, in March 1943 at MCAS Cherry Point, North Carolina. Eight VMB squadrons were flying PBJs by the end of 1943 as the initial Marine medium bombardment group. Four more squadrons were in the process of formation in late 1945, but had not yet deployed by the time the war ended.
Marine Corps PBJ-1 operations began in March 1944. The Marine PBJs flew from the Philippines, Saipan, Iwo Jima, and Okinawa during the last few months of the Pacific war. Their primary mission was long-range interdiction of enemy shipping attempting to run the blockade that was strangling Japan. The weapon of choice on these missions was usually the five-inch HVAR rocket, eight of which could be carried. Some VMB-612 intruder PBJ-1D and J series aircraft flew without top turrets to save weight and increase range on night patrols, especially toward the end of the war when air superiority had been achieved.