to fail. This was promptly called play farming, after the idea of "farmers" experimenting with agriculture. In addition, the ability of humans to stay in one place for food and create permanent settlements accelerated the process. During this transitional period, crops began to acclimate and evolve alongside humans, encouraging humans to invest further in them. Over time this reliance on plant breeding has created problems, as highlighted in the book The Botany of Desire, where Michael Pollan traces the connection between basic human desires and four different plants: apples for sweetness, tulips for beauty, cannabis for intoxication, and potatoes for control. In a form of reciprocal evolution known as coevolution, humans have influenced these plants as much as the plants have influenced the people who consume them. Selective breeding is also used in research to produce transgenic animals that breed "true" (i.e., are homozygous) for artificially inserted or deleted genes.

== Selective breeding in aquaculture ==
Selective breeding in aquaculture holds high potential for the genetic improvement of fish and shellfish for production. Unlike for terrestrial livestock, the potential benefits of selective breeding in aquaculture were not realized until recently. This is because high mortality led to the selection of only a few broodstock, causing inbreeding depression, which then forced the use of wild broodstock. This was evident in selective breeding programs for growth rate, which resulted in slow growth and high mortality. Lack of control over the reproductive cycle was one of the main obstacles, as such control is a prerequisite for selective breeding programs. Artificial reproduction was not achieved because of the difficulties in hatching or feeding some farmed species, such as eel and yellowtail. A suspected reason for the late realization of success in selective breeding programs in aquaculture was the
{
  "page_id": 200646,
  "source": null,
  "title": "Selective breeding"
}
education of the people concerned – researchers, advisory personnel and fish farmers. The education of fish biologists paid less attention to quantitative genetics and breeding plans. Another reason was the failure to document genetic gains in successive generations, which in turn led to a failure to quantify the economic benefits that successful selective breeding programs produce. Documentation of genetic changes is considered important because it helps in fine-tuning further selection schemes.

=== Quality traits in aquaculture ===
Aquaculture species are reared for particular traits such as growth rate, survival rate, meat quality, resistance to diseases, age at sexual maturation, fecundity, and shell traits such as shell size and shell color.

Growth rate – growth rate is normally measured as either body weight or body length. This trait is of great economic importance for all aquaculture species, as a faster growth rate speeds up the turnover of production. Improved growth rates show that farmed animals utilize their feed more efficiently through a positively correlated response.

Survival rate – survival rate may take into account the degree of resistance to diseases. It may also reflect the stress response, as fish under stress are highly vulnerable to disease. The stress fish experience can be of biological, chemical or environmental origin.

Meat quality – the quality of fish is of great economic importance in the market. Fish quality usually takes into account size, meatiness, percentage of fat, color of flesh, taste, shape of the body, and ideal oil and omega-3 content.

Age at sexual maturation – the age of maturity in aquaculture species is another very important attribute for farmers, as during early maturation the species divert all their energy to gonad production, affecting growth and meat production and making them more susceptible to health problems (Gjerde 1986).

Fecundity – as the fecundity of fish and shellfish
is usually high, it is not considered a major trait for improvement. However, selective breeding practices may consider the size of the egg and correlate it with survival and early growth rate.

=== Finfish response to selection ===

==== Salmonids ====
Gjedrem (1979) showed that selection of Atlantic salmon (Salmo salar) led to an increase in body weight of 30% per generation. A comparative study on the performance of selected Atlantic salmon versus wild fish was conducted by the AKVAFORSK Genetics Centre in Norway. The traits for which selection was done included growth rate, feed consumption, protein retention, energy retention, and feed conversion efficiency. Selected fish had twice the growth rate, a 40% higher feed intake, and increased protein and energy retention, leading to an overall 20% better feed conversion efficiency compared to the wild stock. Atlantic salmon have also been selected for resistance to bacterial and viral diseases. Selection was done to check resistance to infectious pancreatic necrosis virus (IPNV): low-resistance strains showed 66.6% mortality whereas high-resistance strains showed 29.3% mortality. Rainbow trout (S. gairdneri) were reported to show large improvements in growth rate after 7–10 generations of selection. Kincaid et al. (1977) showed that growth gains of 30% could be achieved by selectively breeding rainbow trout for three generations. A 7% increase in growth per generation was recorded for rainbow trout by Kause et al. (2005). In Japan, high resistance to IPNV in rainbow trout has been achieved by selectively breeding the stock: resistant strains were found to have an average mortality of 4.3%, whereas 96.1% mortality was observed in a highly sensitive strain. Coho salmon (Oncorhynchus kisutch) weight increases of more than 60% were found after four generations of selective breeding. In
Chile, Neira et al. (2006) conducted experiments on early spawning dates in coho salmon. After selectively breeding the fish for four generations, spawning dates were 13–15 days earlier.

==== Cyprinids ====
Selective breeding programs for the common carp (Cyprinus carpio) include improvement in growth, shape and resistance to disease. Experiments carried out in the USSR used crossings of broodstocks to increase genetic diversity and then selected for traits such as growth rate, exterior traits, viability, and/or adaptation to environmental conditions such as variations in temperature. Kirpichnikov et al. (1974) and Babouchkine (1987) selected carp for fast growth and tolerance to cold, producing the Ropsha carp. The results showed a 30–40% to 77.4% improvement in cold tolerance but provided no data on growth rate. An increase in growth rate was observed in the second generation in Vietnam. Moav and Wohlfarth (1976) showed positive results when selecting for slower growth for three generations, compared to selecting for faster growth. Schaperclaus (1962) showed resistance to dropsy disease, wherein selected lines suffered low mortality (11.5%) compared to unselected lines (57%).

==== Channel Catfish ====
Growth was seen to increase by 12–20% in selectively bred Ictalurus punctatus. More recently, the response of the channel catfish to selection for improved growth rate was found to be approximately 80%, that is, an average of 13% per generation.

=== Shellfish response to selection ===

==== Oysters ====
Selection for live weight of Pacific oysters showed improvements ranging from 0.4% to 25.6% compared to the wild stock. Sydney rock oysters (Saccostrea commercialis) showed a 4% increase after one generation and a 15% increase after two generations. Chilean oysters (Ostrea chilensis) selected for improvement in live weight and shell length showed a 10–13% gain in one generation. Bonamia ostreae is a protistan parasite that causes catastrophic losses (nearly 98%) in the European
flat oyster Ostrea edulis L. This protistan parasite is endemic to three oyster regions in Europe. Selective breeding programs show that O. edulis susceptibility to the infection differs across oyster strains in Europe. A study carried out by Culloty et al. showed that 'Rossmore' oysters in Cork Harbour, Ireland had better resistance compared to other Irish strains. A selective breeding program at Cork Harbour uses broodstock from 3- to 4-year-old survivors, which is further controlled until a viable percentage reaches market size. Over the years, 'Rossmore' oysters have shown lower prevalence of B. ostreae infection and lower percentage mortality. Ragone Calvo et al. (2003) selectively bred the eastern oyster, Crassostrea virginica, for resistance against the co-occurring parasites Haplosporidium nelsoni (MSX) and Perkinsus marinus (Dermo). They achieved dual resistance to the diseases in four generations of selective breeding: the oysters showed higher growth and survival rates and lower susceptibility to the infections. At the end of the experiment, artificially selected C. virginica showed a 34–48% higher survival rate.

==== Penaeid shrimps ====
Selection for growth in penaeid shrimps has yielded successful results. A selective breeding program for Litopenaeus stylirostris saw an 18% increase in growth after the fourth generation and 21% after the fifth generation. Marsupenaeus japonicus showed a 10.7% increase in growth after the first generation. Argue et al. (2002) conducted a selective breeding program on the Pacific white shrimp, Litopenaeus vannamei, at The Oceanic Institute, Waimanalo, USA from 1995 to 1998. They reported significant responses to selection compared to unselected control shrimps: after one generation, a 21% increase in growth and an 18.4% increase in survival against Taura syndrome virus (TSV) were observed. TSV causes mortalities of 70% or more in shrimps. C.I. Oceanos S.A. in Colombia selected the survivors of the disease from infected ponds and used them
as parents for the next generation. They achieved satisfying results in two or three generations, with survival rates approaching levels seen before the outbreak of the disease. Heavy losses (up to 90%) caused by infectious hypodermal and haematopoietic necrosis virus (IHHNV) led a number of shrimp farming industries to selectively breed shrimps resistant to this disease. Successful outcomes led to the development of Super Shrimp, a selected line of L. stylirostris that is resistant to IHHNV infection. Tang et al. (2000) confirmed this by showing no mortalities in IHHNV-challenged Super Shrimp postlarvae and juveniles.

=== Aquatic species versus terrestrial livestock ===
Selective breeding programs for aquatic species provide better outcomes than those for terrestrial livestock. This higher response to selection in farmed aquatic species can be attributed to the following:

High fecundity in both sexes of fish and shellfish, enabling higher selection intensity.

Large phenotypic and genetic variation in the selected traits.

Selective breeding in aquaculture provides remarkable economic benefits to the industry, the primary one being reduced production costs due to faster turnover rates. These stem from faster growth, decreased maintenance, increased energy and protein retention, and better feed efficiency. When selective breeding is carried out, some characteristics are traded for others that may suit a specific environment or situation. Applying genetic improvement programs to aquaculture species will increase their productivity, allowing them to meet the increasing demands of growing populations. Conversely, selective breeding within aquaculture can create problems for the biodiversity of both stock and wild fish, which can hurt the industry down the road. Although there is great potential to improve aquaculture given the current lack of domestication, it is essential that the genetic diversity of the fish is preserved through proper genetic management as we domesticate these species.
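The per-generation gains quoted throughout this section (for example, roughly 13% per generation for channel catfish) compound multiplicatively across generations. A minimal sketch of that arithmetic; the figures here are illustrative, not additional data from the studies cited above:

```python
def cumulative_gain(per_generation_gain: float, generations: int) -> float:
    """Total fractional gain after compounding a constant per-generation gain."""
    return (1.0 + per_generation_gain) ** generations - 1.0

# A 13% gain per generation over five generations compounds to ~84%,
# consistent in magnitude with the ~80% overall response reported above.
total = cumulative_gain(0.13, 5)
print(f"Total gain after 5 generations: {total:.1%}")
```

This also shows why documenting gains per generation matters: small differences in the per-generation rate compound into large differences over a multi-generation program.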
It is not uncommon for fish to escape the nets or pens in which they are kept, sometimes en masse. If these fish are farmed in areas to which they are not native, they may establish themselves, outcompete native populations, and cause ecological harm as an invasive species. Even where the farmed fish are native, their genetics reflect selective breeding rather than natural selection in the wild. If these farmed fish breed with natives, traits bred for consumption rather than survival can spread, resulting in an overall decrease in genetic diversity and rendering local fish populations less fit for survival. If proper management is not in place, both the economic benefits and the diversity of the fish species will falter.

== Advantages and disadvantages ==
Selective breeding is a direct way to determine whether a specific trait can evolve in response to selection. A single-generation method of breeding is not as accurate or direct. The process is also more practical and easier to understand than sibling analysis. Selective breeding is better for traits such as physiology and behavior that are hard to measure, because it requires fewer individuals to test than single-generation testing. However, there are disadvantages to this process. A single selective breeding experiment cannot be used to assess an entire group of genetic variances; individual experiments must be done for every trait of interest. Also, because selective breeding experiments require maintaining the organisms tested in a lab or greenhouse, it is impractical to use this breeding method on many organisms. Controlled mating instances are difficult to carry out in this case, and they are a necessary component of selective
breeding. Additionally, selective breeding can lead to a variety of issues, including reduction of genetic diversity and physical problems. Selective breeding can create physical issues for plants or animals; for example, dogs selectively bred for extremely small size dislocate their kneecaps at a much higher rate than other dogs. An example in the plant world is the Lenape potato, selectively bred for disease and pest resistance, which was attributed to its high levels of the toxic glycoalkaloid solanine, normally present only in small amounts in potatoes fit for human consumption. Loss of genetic diversity can also leave populations without genetic alternatives for adapting to new events. This becomes an issue of biodiversity: because particular attributes are so widespread, they can result in mass epidemics, as seen in the southern corn leaf blight epidemic of 1970, which wiped out 15% of the United States corn crop. A widely used Texan corn strain had been artificially selected for its sterile pollen to make farming easier, but it was at the same time more vulnerable to southern corn leaf blight.

== See also ==

== References ==

== Bibliography ==
Darwin, Charles (2004). The Origin of Species. London: CRW Publishing Limited. ISBN 978-1-904633-78-5.

== Further reading ==

== External links ==
The Global Plan of Action for Animal Genetic Resources
In chemistry, the term chemically inert is used to describe a substance that is not chemically reactive. From a thermodynamic perspective, a substance is inert, or nonlabile, if it is thermodynamically unstable (positive standard Gibbs free energy of formation) yet decomposes at a slow or negligible rate. Most of the noble gases, which appear in the last column of the periodic table, are classified as inert (or unreactive). These elements are stable in their naturally occurring (gaseous) form and are called inert gases.

== Noble gas ==
The noble gases (helium, neon, argon, krypton, xenon and radon) were previously known as 'inert gases' because of their perceived lack of participation in any chemical reactions. The reason for this is that their outermost electron shells (valence shells) are completely filled, so they have little tendency to gain or lose electrons. They are said to have a noble gas configuration, or a full electron configuration. It is now known that most of these gases do in fact react to form chemical compounds, such as xenon tetrafluoride. Hence, they have been renamed 'noble gases': the only two known to be truly inert are helium and neon. However, a large amount of energy is required to drive such reactions, usually in the form of heat, pressure, or radiation, often assisted by catalysts. The resulting compounds often need to be kept in moisture-free conditions at low temperatures to prevent rapid decomposition back into their elements.

== Inert gas ==
The term inert may also be applied in a relative sense. For example, molecular nitrogen is an inert gas under ordinary conditions, existing as diatomic molecules, N2. The presence of a strong triple covalent bond in the N2 molecule renders it unreactive under normal circumstances. Nevertheless, nitrogen gas
{
  "page_id": 17633222,
  "source": null,
  "title": "Chemically inert"
}
does react with the alkali metal lithium to form the compound lithium nitride (Li3N), even under ordinary conditions. Under high pressures and temperatures and with the right catalysts, nitrogen becomes more reactive; the Haber process uses such conditions to produce ammonia from atmospheric nitrogen.

=== Main uses ===
Inert atmospheres consisting of gases such as argon, nitrogen, or helium are commonly used in chemical reaction chambers and in storage containers for oxygen- or water-sensitive substances, to prevent unwanted reactions of these substances with oxygen or water. Argon is widely used in fluorescent tubes and low-energy light bulbs; it helps protect the metal filament inside the bulb from reacting with oxygen and corroding at high temperature. Neon is used in making advertising signs: neon gas in a vacuum tube glows bright red when electricity is passed through it, and differently coloured lights can be made by using other gases. Helium gas is mainly used to fill balloons, which float upwards because helium is less dense than air.

== See also ==
Noble metal

== References ==
A compositional domain in genetics is a region of DNA with a distinct guanine (G) and cytosine (C) content (collectively, GC content). The homogeneity of a compositional domain is assessed relative to that of the chromosome on which it resides; as such, compositional domains can be homogeneous or nonhomogeneous. Compositionally homogeneous domains that are sufficiently long (≥ 300 kb) are termed isochores or isochoric domains. The compositional domain model was proposed as an alternative to the isochore model. The isochore model was proposed by Bernardi and colleagues to explain the observed non-uniformity of genomic fragments in the genome. However, recent analyses of complete genome sequences refuted the isochore model. Its main predictions were:

GC content of the third codon position (GC3) of protein-coding genes is correlated with the GC content of the isochores embedding the corresponding genes. This prediction was found to be incorrect: GC3 could not predict the GC content of nearby sequences.

The genome organization of warm-blooded vertebrates is a mosaic of isochores. This prediction was rejected by many studies that used the complete human genome data.

The genome organization of cold-blooded vertebrates is characterized by low GC content levels and lower compositional heterogeneity. This prediction was disproved by the finding of both high and low GC content domains in fish genomes.

The compositional domain model describes the genome as a mosaic of short and long, homogeneous and nonhomogeneous domains. The composition and organization of the domains were shaped by different evolutionary processes that either fused or broke down the domains. This genomic organization model has been confirmed in many new genomic studies of cow, honeybee, sea urchin, body louse, Nasonia, beetle, and ant genomes. The human genome was described as consisting of a mixture of compositionally nonhomogeneous domains with numerous short compositionally homogeneous domains and relatively few long
{
  "page_id": 35590090,
  "source": null,
  "title": "Compositional domain"
}
ones.

== References ==

== External links ==
IsoPlotter — a free, open source program to calculate and visualize isochores in a given genome
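As a concrete illustration of the quantity these models are built on, GC content can be computed over fixed-size windows. A toy sketch follows; this is not the segmentation algorithm IsoPlotter implements, and the window size and sequence are made up for illustration:

```python
def gc_content(seq: str) -> float:
    """Fraction of G and C bases in a DNA sequence."""
    seq = seq.upper()
    return (seq.count("G") + seq.count("C")) / len(seq)

def gc_windows(seq: str, window: int) -> list[float]:
    """GC content over consecutive non-overlapping windows."""
    return [gc_content(seq[i:i + window])
            for i in range(0, len(seq) - window + 1, window)]

# A toy sequence with a GC-rich region followed by an AT-rich one:
seq = "GCGCGCGCGC" + "ATATATATAT"
print(gc_windows(seq, 10))  # -> [1.0, 0.0]
```

A real domain-segmentation method would instead look for positions where GC content changes significantly, producing variable-length homogeneous and nonhomogeneous domains rather than fixed windows.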
A microbial electrolysis cell (MEC) is a technology related to microbial fuel cells (MFCs). Whilst MFCs produce an electric current from the microbial decomposition of organic compounds, MECs partially reverse the process to generate hydrogen or methane from organic material by applying an electric current. The electric current would ideally be produced by a renewable source of power. The hydrogen or methane produced can be used to generate electricity by means of an additional PEM fuel cell or internal combustion engine.

== Microbial electrolysis cells ==
MEC systems are based on a number of components:

Microorganisms – attached to the anode. The identity of the microorganisms determines the products and efficiency of the MEC.

Materials – the anode material in a MEC can be the same as in an MFC, such as carbon cloth, carbon paper, graphite felt, graphite granules or graphite brushes. Platinum can be used as a catalyst to reduce the overpotential required for hydrogen production; its high cost is driving research into biocathodes as an alternative. Stainless steel plates have also been used as cathode and anode materials. Other components include membranes (although some MECs are membraneless) and tubing and gas collection systems.

== Generating hydrogen ==
Electrogenic microorganisms consuming an energy source (such as acetic acid) release electrons and protons, creating an electrical potential of up to 0.3 volts. In a conventional MFC, this voltage is used to generate electrical power. In a MEC, an additional voltage is supplied to the cell from an outside source. The combined voltage is sufficient to reduce protons, producing hydrogen gas. As part of the energy for this reduction is derived from bacterial activity, the total electrical energy that has to be supplied is less than for electrolysis of water in the absence
{
  "page_id": 23400395,
  "source": null,
  "title": "Microbial electrolysis cell"
}
of microbes. Hydrogen production has reached up to 3.12 m³ H₂ per m³ of reactor per day with an input voltage of 0.8 volts. The efficiency of hydrogen production depends on which organic substances are used. Lactic and acetic acid achieve 82% efficiency, while the values for untreated cellulose or glucose are close to 63%. The efficiency of conventional water electrolysis is 60 to 70 percent. As MECs convert otherwise unusable biomass into usable hydrogen, they can produce 144% more usable energy than they consume as electrical energy. Depending on the organisms present at the cathode, MECs can also produce methane by a related mechanism.

Calculations: Overall hydrogen recovery was calculated as R_H2 = CE × R_Cat. The Coulombic efficiency is CE = n_CE / n_th, where n_th is the moles of hydrogen that could theoretically be produced and n_CE = C_P / (2F) is the moles of hydrogen that could be produced from the measured current; here C_P is the total coulombs calculated by integrating the current over time, F is Faraday's constant, and 2 is the moles of electrons per mole of hydrogen. The cathodic hydrogen recovery was calculated as R_Cat = n_H2 / n_CE, where n_H2 is the total moles of hydrogen produced. Hydrogen yield was calculated as Y_H2 = n_H2 / n_s, where n_s is the substrate removal calculated on the basis of chemical oxygen demand.

== Uses ==
Hydrogen and methane can both be used as alternatives to fossil fuels in internal combustion engines or for power generation. Like MFCs or bioethanol production plants, MECs have the potential to convert waste organic matter into a valuable energy source. Hydrogen can also be combined with nitrogen from the air to produce ammonia, which can be used to make ammonium fertilizer. Ammonia has been proposed as a practical alternative to fossil fuel for internal combustion engines.

== See also ==
Hydrogen technologies
Microbial electrosynthesis
Microbial fuel cells
Microbial electrolysis carbon capture

== References ==

== External links ==
National Science Foundation
The University of Queensland
Scientific Blogging
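The recovery and efficiency definitions in the Calculations passage above translate directly into code. A minimal sketch; the input figures (total coulombs, theoretical and measured moles of H2) are illustrative assumptions, not values from the text:

```python
F = 96485.0  # Faraday's constant, coulombs per mole of electrons

def moles_h2_from_coulombs(coulombs: float) -> float:
    """n_CE: moles of H2 producible from the measured charge (2 e- per H2)."""
    return coulombs / (2 * F)

def coulombic_efficiency(n_ce: float, n_th: float) -> float:
    """CE = n_CE / n_th."""
    return n_ce / n_th

def cathodic_recovery(n_h2: float, n_ce: float) -> float:
    """R_Cat = n_H2 / n_CE."""
    return n_h2 / n_ce

def overall_recovery(ce: float, r_cat: float) -> float:
    """R_H2 = CE * R_Cat."""
    return ce * r_cat

# Illustrative (assumed) measurements:
cp = 500.0    # total coulombs from integrating current over time
n_th = 0.004  # theoretical moles of H2 obtainable from the substrate
n_h2 = 0.002  # moles of H2 actually collected

n_ce = moles_h2_from_coulombs(cp)
r_h2 = overall_recovery(coulombic_efficiency(n_ce, n_th),
                        cathodic_recovery(n_h2, n_ce))
print(f"R_H2 = {r_h2:.2f}")
```

Note that algebraically R_H2 reduces to n_H2 / n_th, so the overall recovery is simply the fraction of the theoretically available hydrogen that was actually collected.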
A truss is an assembly of members such as beams, connected by nodes, that creates a rigid structure. In engineering, a truss is a structure that "consists of two-force members only, where the members are organized so that the assemblage as a whole behaves as a single object". A two-force member is a structural component where force is applied to only two points. Although this rigorous definition allows the members to have any shape connected in any stable configuration, architectural trusses typically comprise five or more triangular units constructed with straight members whose ends are connected at joints referred to as nodes. In this typical context, external forces and reactions to those forces are considered to act only at the nodes and result in forces in the members that are either tensile or compressive. For straight members, moments (torques) are explicitly excluded because, and only because, all the joints in a truss are treated as revolutes, as is necessary for the links to be two-force members. A planar truss is one where all members and nodes lie within a two-dimensional plane, while a space frame has members and nodes that extend into three dimensions. The top beams in a truss are called top chords and are typically in compression; the bottom beams are called bottom chords and are typically in tension. The interior beams are called webs, and the areas inside the webs are called panels or, from graphic statics (see Cremona diagram), polygons.

== Etymology ==
Truss derives from the Old French word trousse, from around 1200 AD, which means "collection of things bound together". The term truss has often been used to describe any assembly of members such as a cruck frame or a couple of rafters.

== Characteristics ==
A truss consists of typically (but not
{
  "page_id": 397263,
  "source": null,
  "title": "Truss"
}
necessarily) straight members connected at joints, traditionally termed panel points. Trusses are typically (but not necessarily) composed of triangles because of the structural stability of that shape and design. A triangle is the simplest geometric figure that will not change shape when the lengths of the sides are fixed. In comparison, both the angles and the lengths of a four-sided figure must be fixed for it to retain its shape.

=== Simple truss ===
The simplest form of a truss is one single triangle. This type of truss is seen in a framed roof consisting of rafters and a ceiling joist, and in other mechanical structures such as bicycles and aircraft. Because of the stability of this shape and the methods of analysis used to calculate the forces within it, a truss composed entirely of triangles is known as a simple truss. However, a simple truss is often defined more restrictively by demanding that it can be constructed through successive addition of pairs of members, each connected to two existing joints and to each other to form a new joint, and this definition does not require a simple truss to comprise only triangles. The traditional diamond-shape bicycle frame, which uses two conjoined triangles, is an example of a simple truss.

=== Planar truss ===
A planar truss lies in a single plane. Planar trusses are typically used in parallel to form roofs and bridges. The depth of a truss, or the height between the upper and lower chords, is what makes it an efficient structural form. A solid girder or beam of equal strength would have substantial weight and material cost as compared to a truss. For a given span, a deeper truss will require less material in the chords and greater material in the verticals and diagonals. An optimum
depth of the truss will maximize the efficiency.

=== Space frame truss ===
A space frame truss is a three-dimensional framework of members pinned at their ends. A tetrahedron shape is the simplest space truss, consisting of six members that meet at four joints. Large planar structures may be composed from tetrahedrons with common edges, and they are also employed in the base structures of large free-standing power-line pylons.

== Types ==
For more truss types, see truss types used in bridges. There are two basic types of truss: The pitched truss, or common truss, is characterized by its triangular shape. It is most often used for roof construction. Some common trusses are named according to their web configuration. The chord size and web configuration are determined by span, load, and spacing. The parallel-chord truss, or flat truss, gets its name from its parallel top and bottom chords. It is often used for floor construction. A combination of the two is a truncated truss, used in hip roof construction. A metal-plate-connected wood truss is a roof or floor truss whose wood members are connected with metal connector plates.

=== Warren truss ===
Truss members form a series of equilateral triangles, alternating up and down.

=== Octet truss ===
Truss members are made up of all equivalent equilateral triangles. The minimum composition is two regular tetrahedrons along with an octahedron. They fill up three-dimensional space in a variety of configurations.

=== Pratt truss ===
The Pratt truss was patented in 1844 by two Boston railway engineers, Caleb Pratt and his son Thomas Willis Pratt. The design uses vertical members for compression and diagonal members to respond to tension. The Pratt truss design remained popular as bridge designers switched from wood to iron, and from iron to steel. This continued popularity of
the Pratt truss is probably due to the fact that the configuration of the members means that longer diagonal members are only in tension for gravity load effects. This allows these members to be used more efficiently, as slenderness effects related to buckling under compression loads (which are compounded by the length of the member) will typically not control the design. Therefore, for a given planar truss with a fixed depth, the Pratt configuration is usually the most efficient under static, vertical loading. The Southern Pacific Railroad bridge in Tempe, Arizona, is a 393-meter-long (1,289 ft) truss bridge built in 1912. The structure, still in use today, consists of nine Pratt truss spans of varying lengths. The Wright Flyer used a Pratt truss in its wing construction, as the minimization of compression member lengths allowed for lower aerodynamic drag.

=== Town's lattice truss ===
American architect Ithiel Town designed Town's lattice truss as an alternative to heavy-timber bridges. His design, patented in 1820 and 1835, uses easy-to-handle planks arranged diagonally with short spaces in between them to form a lattice.

=== Bowstring truss ===
Named for their shape, bowstring trusses were first used for arched truss bridges, often confused with tied-arch bridges. Thousands of bowstring trusses were used during World War II for holding up the curved roofs of aircraft hangars and other military buildings. Many variations exist in the arrangements of the members connecting the nodes of the upper arc with those of the lower, straight sequence of members, from nearly isosceles triangles to a variant of the Pratt truss.

=== King and queen post trusses ===
One of the simplest truss styles to implement, the king post consists of two angled supports leaning into a common vertical support. The queen post truss, sometimes queenpost or queenspost, is similar
to a king post truss in that the outer supports are angled towards the centre of the structure. The primary difference is the horizontal extension at the centre which relies on beam action to provide mechanical stability. This truss style is only suitable for relatively short spans. === Lenticular truss === Lenticular trusses, patented in 1878 by William Douglas (although the Gaunless Bridge of 1823 was the first of the type), have the top and bottom chords of the truss arched, forming a lens shape. A lenticular pony truss bridge is a bridge design that involves a lenticular truss extending above and below the roadbed. === Vierendeel structure === The members of a Vierendeel structure are not triangulated but form rectangular openings. The structure has a frame with fixed joints that are capable of transferring and resisting bending moments. As such, it does not fit the definition of a truss, since it contains non-two-force members: regular trusses comprise members that are commonly assumed to have pinned joints, with the implication that no moments exist at the jointed ends. This style of structure was named after the Belgian engineer Arthur Vierendeel, who developed the design in 1896. It is rarely used for bridges because of higher costs compared to a triangulated truss, but in buildings it has the advantage that a large amount of the exterior envelope remains unobstructed and it can therefore be used for windows and door openings. In some applications this is preferable to a braced-frame system, which would leave some areas obstructed by the diagonal braces. == Statics == A truss that is assumed to comprise members that are connected by means of pin joints, and which is supported at both ends by means of hinged joints and rollers, is described as being statically determinate. Newton's laws
apply to the structure as a whole, as well as to each node or joint. In order for any node that may be subject to an external load or force to remain static in space, the following conditions must hold: the sums of all (horizontal and vertical) forces, as well as all moments acting about the node, equal zero. Analysis of these conditions at each node yields the magnitude of the compression or tension forces. Trusses that are supported at more than two positions are said to be statically indeterminate, and the application of Newton's laws alone is not sufficient to determine the member forces. In order for a truss with pin-connected members to be stable, it does not need to be entirely composed of triangles. In mathematical terms, the following necessary condition for stability of a simple truss exists: m ≥ 2j − r (a), where m is the total number of truss members, j is the total number of joints and r is the number of reactions (generally equal to 3) in a 2-dimensional structure. When m = 2j − 3, the truss is said to be statically determinate, because the m + 3 internal member forces and support reactions can then be completely determined by 2j equilibrium equations, once the external loads and the geometry of the truss are known. Given a certain number of joints, this is the minimum number of members, in the sense that if any member is taken out (or fails), then the truss as a whole fails. While the relation (a) is necessary, it is not sufficient for stability, which also depends on the truss geometry, support conditions and the load carrying capacity of the members. Some structures are built
with more than this minimum number of truss members. Those structures may survive even when some of the members fail. Their member forces depend on the relative stiffness of the members, in addition to the equilibrium condition described. == Analysis == Because the forces in each of its two main girders are essentially planar, a truss is usually modeled as a two-dimensional plane frame. However, if there are significant out-of-plane forces, the structure must be modeled as a three-dimensional space frame. The analysis of trusses often assumes that loads are applied to joints only and not at intermediate points along the members. The weight of the members is often insignificant compared to the applied loads and so is often omitted; alternatively, half of the weight of each member may be applied to its two end joints. Provided that the members are long and slender, the moments transmitted through the joints are negligible, and the junctions can be treated as "hinges" or "pin-joints". Under these simplifying assumptions, every member of the truss is then subjected to pure compression or pure tension forces – shear, bending moment, and other more-complex stresses are all practically zero. Trusses are physically stronger than other ways of arranging structural elements, because nearly every material can resist a much larger load in tension or compression than in shear, bending, torsion, or other kinds of force. These simplifications make trusses easier to analyze. Structural analysis of trusses of any type can readily be carried out using a matrix method such as the direct stiffness method, the flexibility method, or the finite-element method. === Forces in members === Illustrated is a simple, statically determinate flat truss with 9 joints and (2 × 9) − 3 = 15 members. External loads are concentrated in the outer joints. Since this is
a symmetrical truss with symmetrical vertical loads, the reactive forces at A and B are vertical, equal, and half the total load. The internal forces in the members of the truss can be calculated in a variety of ways, including graphical methods: Cremona diagram Culmann diagram Ritter analytical method (method of sections) === Design of members === A truss can be thought of as a beam where the web consists of a series of separate members instead of a continuous plate. In the truss, the lower horizontal member (the bottom chord) and the upper horizontal member (the top chord) carry tension and compression, fulfilling the same function as the flanges of an I-beam. Which chord carries tension and which carries compression depends on the overall direction of bending. In the truss pictured above right, the bottom chord is in tension, and the top chord in compression. The diagonal and vertical members form the truss web, and carry the shear stress. Individually, they are also in tension and compression; the exact arrangement of forces depends on the type of truss and again on the direction of bending. In the truss shown above right, the vertical members are in tension, and the diagonals are in compression. In addition to carrying the static forces, the members serve additional functions of stabilizing each other, preventing buckling. In the adjacent picture, the top chord is prevented from buckling by the presence of bracing and by the stiffness of the web members. The inclusion of the elements shown is largely an engineering decision based upon economics, being a balance between the costs of raw materials, off-site fabrication, component transportation, on-site erection, the availability of machinery, and the cost of labor. In other cases the appearance of the structure may take on greater importance and so influence
the design decisions beyond mere matters of economics. Modern materials such as prestressed concrete and fabrication methods such as automated welding have significantly influenced the design of modern bridges. Once the force on each member is known, the next step is to determine the cross section of the individual truss members. For members under tension the cross-sectional area A can be found using A = F × γ / σy, where F is the force in the member, γ is a safety factor (typically 1.5 but dependent on building codes), and σy is the yield tensile strength of the steel used. The members under compression also have to be designed to be safe against buckling. The weight of a truss member depends directly on its cross section—that weight partially determines how strong the other members of the truss need to be. Giving one member a larger cross section than on a previous iteration requires giving other members a larger cross section as well, to hold the greater weight of the first member—one needs to go through another iteration to find exactly how much greater the other members need to be. Sometimes the designer goes through several iterations of the design process to converge on the "right" cross section for each member. On the other hand, reducing the size of one member from the previous iteration merely makes the other members have a larger (and more expensive) safety factor than is technically necessary, but does not require another iteration to find a buildable truss. The effect of the weight of the individual truss members in a large truss, such as a bridge, is usually insignificant compared to the force of the external loads. === Design of joints === After determining the minimum cross section of the members, the last step in
the design of a truss would be detailing of the bolted joints, e.g., involving shear stress of the bolt connections used in the joints. Based on the needs of the project, truss internal connections (joints) can be designed as rigid, semi-rigid, or hinged. Rigid connections can allow transfer of bending moments, leading to development of secondary bending moments in the members. == Applications == === Post-frame structures === Component connections are critical to the structural integrity of a framing system. In buildings with large, clearspan wood trusses, the most critical connections are those between the truss and its supports. In addition to gravity-induced forces (a.k.a. bearing loads), these connections must resist shear forces acting perpendicular to the plane of the truss and uplift forces due to wind. Depending upon overall building design, the connections may also be required to transfer bending moment. Wood posts enable the fabrication of strong, direct, yet inexpensive connections between large trusses and walls. Exact details for post-to-truss connections vary from designer to designer, and may be influenced by post type. Solid-sawn timber and glulam posts are generally notched to form a truss-bearing surface. The truss is rested on the notches and bolted into place. A special plate/bracket may be added to increase connection load-transfer capabilities. With mechanically-laminated posts, the truss may rest on a shortened outer-ply or on a shortened inner-ply. The latter scenario places the bolts in double shear and is a very effective connection. == Gallery == == See also == Brown truss Convex uniform honeycomb Geodesic dome Lattice tower Serrurier truss Stress: Compressive stress Tensile stress Structural mechanics Structural steel Tensegrity Truss rod == References ==
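As a worked illustration of the counting condition m ≥ 2j − r described under Statics, the following sketch classifies a planar truss from its member, joint, and reaction counts. The function name is illustrative, and the check is necessary only: as the text notes, actual stability also depends on geometry and support conditions.

```python
def classify_planar_truss(m: int, j: int, r: int = 3) -> str:
    """Apply the counting condition for a 2D pin-jointed truss.

    m: total number of members, j: total number of joints,
    r: number of support reactions (generally 3 in 2D).
    m >= 2j - r is necessary for stability; equality means the
    truss is statically determinate (2j equilibrium equations
    exactly determine the m member forces and r reactions).
    """
    if m < 2 * j - r:
        return "unstable"
    if m == 2 * j - r:
        return "determinate"
    return "indeterminate"


# The example truss from the analysis section:
# 9 joints and (2 * 9) - 3 = 15 members.
print(classify_planar_truss(m=15, j=9))  # determinate
```

Removing any one member from the determinate case (m = 14, j = 9) makes the count fall below 2j − r, matching the statement that the determinate configuration is the minimum: lose one member and the truss as a whole fails.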
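The member-sizing rule from the design section, A = F × γ / σy, can likewise be sketched in a few lines. This is a minimal illustration, not a code-compliant design: the 120 kN force, the 355 MPa yield strength (typical of S355 structural steel), and the γ = 1.5 safety factor are assumed example values, and compression members would additionally require the buckling check the text mentions.

```python
def required_tension_area(force: float, yield_strength: float,
                          gamma: float = 1.5) -> float:
    """Minimum cross-sectional area A = F * gamma / sigma_y
    for a truss member in pure tension.

    force in newtons, yield_strength in pascals (SI units assumed);
    gamma is the safety factor (typically 1.5, but code-dependent).
    Returns the area in square metres.
    """
    return force * gamma / yield_strength


# Illustrative values only: 120 kN of tension, sigma_y = 355 MPa.
area_m2 = required_tension_area(120e3, 355e6)
print(f"required area: {area_m2 * 1e6:.0f} mm^2")  # required area: 507 mm^2
```

Because a larger cross section adds self-weight that other members must then carry, this calculation would be repeated over several iterations in practice, as the design section describes.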
Hundreds of replicas of the Statue of Liberty (Liberty Enlightening the World) have been created worldwide. The original Statue of Liberty, designed by sculptor Frédéric Auguste Bartholdi, is 151 feet tall and stands on a pedestal that is 154 feet tall, making the height of the entire sculpture 305 feet. The design for the original Statue of Liberty began in 1865, with final installation in 1886. == France == === Paris === ==== Musée d'Orsay ==== On the occasion of the Exposition Universelle of 1900, sculptor Frédéric Bartholdi crafted a 1/16 scale, 2.74-metre (9 ft) version of his Liberty Enlightening the World. It was cast in 1889 and he subsequently gave it to the Musée du Luxembourg. In 1906, the statue was placed outside the museum in the Jardin du Luxembourg, where it stood for over a century, until 2011. Since 2012 it has stood within the entrance hall to the Musée d'Orsay, and a newly constructed bronze replica stands in its place in the Jardin du Luxembourg. ==== Île aux Cygnes ==== This statue was given in 1889 to France by U.S. citizens living in Paris, only three years after the main statue in New York was inaugurated, to celebrate the centennial of the French Revolution. Originally, the statue was turned towards the east in order to face the Eiffel Tower. In 1937 it was turned towards the west so that it would be facing the original statue in New York. It is one of three replicas in Paris. The statue is near the Pont de Grenelle on the Île aux Cygnes, a man-made island in the Seine (48°51′0″N 2°16′47″E). It is a quarter-scale version (11.50 metres (37 feet 9 inches) high), and was one of the working models used during construction of the actual Statue of Liberty. The
|
{
"page_id": 5378000,
"source": null,
"title": "Replicas of the Statue of Liberty"
}
|
statue is a short distance away from the Eiffel Tower and Alexandre-Gustave Eiffel designed its interior. This model weighs 14 tons. It was inaugurated on 4 July 1889. Its tablet bears two dates: "IV JUILLET 1776" (4 July 1776: the United States Declaration of Independence), like the New York statue, and "XIV JUILLET 1789" (14 July 1789: the storming of the Bastille), joined by an equal sign. This statue is shown in the film National Treasure: Book of Secrets as a historic location. ==== Musée des Arts et Métiers ==== The 2.86-metre (9.4 ft) tall original plaster maquette, finished in 1878 by Auguste Bartholdi and used to make the statue in New York, is in the Musée des Arts et Métiers in Paris. This original plaster model was bequeathed by the artist's widow in 1907, together with part of the artist's estate. On the square outside the entrance to the Musée des Arts et Métiers stood a bronze copy made from the plaster maquette, number 1 from an original edition of 12, made by the museum and cast by Susse Fondeur Paris. It was this replica that was shipped to the U.S. under a joint effort by the Embassy of France in the United States, the Conservatoire national des arts et métiers and the shipping company CMA CGM Group. After spending time on Ellis Island for Independence Day 2021, it now resides at the French ambassador's residence in Washington, D.C. ==== Flame of Liberty ==== A life-size copy of the torch, Flame of Liberty, can be seen above the entrance to the Pont de l'Alma tunnel near the Champs-Élysées in Paris. It was given to the city as a return gift in honour of the centennial celebration of the statue's dedication. In 1997, the torch became an unofficial memorial to Diana,
Princess of Wales after she was killed in a car accident in the Pont de l'Alma tunnel. === Barentin === There is a 13.5 m (44 feet) polyester replica in the northwest of France, in the small town of Barentin near Rouen. It was made for the 1969 French film, Le Cerveau ("The Brain"), directed by Gérard Oury and featuring actors Jean-Paul Belmondo and Bourvil. === Bordeaux === There is a 2.5 m (8.2 ft) replica of the statue in the city of Bordeaux. The first Bordeaux statue was seized and melted down by the Nazis in World War II. The statue was replaced in 2000 and a plaque was added to commemorate the victims of the 11 September terrorist attacks. On the night of 25 March 2003, unknown vandals poured red paint and gasoline on the replica and set it on fire. The vandals also cracked the pedestal of the plaque. The mayor of Bordeaux, former prime minister Alain Juppé, condemned the attack. === Colmar === A 12 m (39 ft 4 in) replica of the Statue of Liberty in Colmar, the city of Bartholdi's birth, was dedicated on 4 July 2004, to commemorate the 100th anniversary of his death. It stands at the north entrance of the city. The Bartholdi Museum in Colmar contains numerous models of various sizes made by Bartholdi during the process of designing the statue. === Saint-Cyr-sur-Mer === Frédéric Bartholdi donated a copy of the Statue of Liberty to the town square of Saint-Cyr-sur-Mer. === Other French cities === Other Liberty Enlightening the World statues are displayed in Poitiers and Lunel. The Musée des beaux-arts de Lyon owns a terracotta version. Near Chaumont, Haute Marne, is a miniature replica in the flag plaza of the former Chaumont Air Base. This was the home of
the US 48th Tactical Fighter Wing, now based at Lakenheath, England, with its own statue at the flag plaza. The 48th TFW is the only USAF wing with a name: "The Statue of Liberty Wing". Another Liberty Enlightening the World replica stands in Châteauneuf-la-Forêt, near the city of Limoges in the area of Haute-Vienne, Limousin. There is another "original" Bartholdi replica at Roybon (near Grenoble). There is a small replica on the Promenade des Anglais in Nice. This is one of Bartholdi's first models and overlooks the Mediterranean Sea on the Quai des Etats-Unis (the promenade of the United States). == Other European countries == === Austria === In Minimundus, a miniature park located at the Wörthersee in Carinthia, Austria, is another replica of the Statue of Liberty. In Graz, between the Opera House and the NextLiberty Theater, stands a structure of steel beams that depicts the Statue of Liberty at its original size, as it appeared before the plates of its final form were put into place. Instead of a torch, this depiction holds a sword in its extended left arm and, in its right arm, a sphere representing the world. === Denmark === A small Lego replica is situated in the original Legoland in Billund. The replica is made from 400,000 Lego bricks, and copies of it stand in other Lego theme parks. === Germany === A 35 m (115 ft) copy is in the German Heidepark Soltau theme park, located on a lake with cruising Mississippi steamboats. It weighs 28 metric tons (31 short tons), is made of plastic foam on a steel frame with polyester cladding, and was designed by the Dutch artist Gerla Spee. === Ireland === A green painted replica of the Statue of Liberty can be found near Mulnamina
More, County Donegal, Ireland. === Kosovo === A replica stands atop the Hotel Victory in Pristina, Kosovo. The hotel is now closed, and the building is used by the Kosovo Police. === Netherlands === A 33 ft (10 m) replica has a temporary location in the Dutch city of Assen. The statue bears characteristic features that represent the culture and landscape of the region, like a can of beans instead of the original torch. The replica, by sculptor Natasja Bennink, was on display for the duration of an exhibition on American realism in the Drents Museum until 27 May 2018. === Norway === A smaller replica is in the Norwegian village of Visnes, where the copper used in the original statue was mined. A replica is also on the facade of a pub in Bleik, in the county of Nordland. === Romania === There is a 2.5 m tall statue in Boldeşti-Scăeni, Prahova County. === Spain === In 1897 a 123 cm (4 ft 0 in) replica in iron and bronze was erected in Cenicero, Spain, to honor local fighters during the First Carlist War. In 1936 it was removed during the dictatorship of Francisco Franco. It was restored in 1976. The Rossend Arús public library in Barcelona has in its entrance a 2 m (6 ft 7 in) replica from 1894. She welcomes visitors to the library, which is devoted to the labour movement, anarchism, and freemasonry. Cadaqués, a small village that was the residence of Salvador Dalí, has an unusual version, with both arms and hands up holding torches. It is on top of a small tourism information office. === Ukraine === There is a unique "sitting" Statue of Liberty in the Ukrainian city of Lviv. It is a sculpture on the dome of a house (15, Liberty Avenue) built by
architect Yuriy Zakharevych and decorated by sculptor Leandro Marconi in 1874–1891. === United Kingdom === A 17 ft (5.2 m), 9,200 kg (9.2 tons) replica stood atop the Liberty Shoe factory in Leicester, England, until 2002, when the building was demolished. The statue was put into storage while the building was replaced. The statue, which dates back to the 1920s, was initially going to be put back on the replacement building, but was too heavy, so in December 2008, following restoration, it was placed on a pedestal near Liberty Park Halls of Residence on a traffic island, "Liberty Circus", close to where it originally stood. There used to be a 10-foot-high (3.0 m) replica in the stairwell of the bowling alley LA Bowl, in Warrington, England. Prior to that it was above the entrance of Liberty Street, a nearby restaurant. It is thought that this is now situated approximately 4 miles away on Mustard Lane in Croft. There is also a small replica located at RAF Lakenheath, England, at the base flag plaza, made from leftover copper from the original. == North America == === Canada === In Coquitlam, British Columbia, a small replica stood on Delestre Avenue just east of North Road. The statue was removed in 2019 when the hotel behind it was demolished. === Mexico === In Campeche, Mexico, there is a small replica in the small town of Palizada. In Durango, Mexico, a small replica is in Parque Guadiana. This park also has other small reproductions, such as the Eiffel Tower and Taj Mahal. === United States === From 1902 to 2002, visitors to midtown Manhattan were occasionally disoriented by what seemed to be an impossibly nearby view of the statue. They were seeing a 30-foot-high (9.1 m) replica located at 43 West 64th Street atop the
Liberty Warehouse. In February 2002, the statue was removed by the building's owners to allow the building to be expanded. It was donated to the Brooklyn Museum of Art, which installed it in its sculpture garden in October 2005, with plans to restore it on site in spring 2006. A replica that used to reside at the Musée des Arts et Métiers in Paris was shipped to the U.S. under a joint effort by the Embassy of France in the United States, the Conservatoire national des arts et métiers and the shipping company CMA CGM Group. After spending time on Ellis Island for Independence Day 2021, it now resides at the French ambassador's residence in Washington, D.C. It will remain on display at the residence until 2031. A bronze sculpture of the Statue of Liberty is on display in the Metropolitan Museum of Art in New York City. This statue is known as a "committee model." The work was part of an edition of replicas that were sold to help finance the final monument. The sculpture stands 47 3/4 inches tall and weighs 56.4 pounds. Duluth, Minnesota, has a small copy on the south corner of the Duluth Entertainment Convention Center property, in the center of a clearing surrounded by pine trees, where it may be passed unnoticed. It was presented to the city by some of Bartholdi's descendants residing in Duluth. The Boy Scouts of America celebrated their fortieth anniversary in 1950 with the theme of "Strengthen the Arm of Liberty". Between 1949 and 1952, approximately two hundred 100-inch (2.5 m) replicas of the statue, made of stamped copper, were purchased by Boy Scout troops and donated in 39 states in the U.S. and several of its possessions and territories. The project was the brainchild of Kansas City
businessman J.P. Whitaker, who was then Scout Commissioner of the Kansas City Area Council. The copper statues were manufactured by Friedley-Voshardt Co. (Chicago, Illinois) and purchased through the Kansas City Boy Scout office by those wanting one. The statues are approximately 8+1⁄2 feet (2.6 m) tall without the base, are constructed of sheet copper, weigh 290 pounds (130 kg), and originally cost $350 plus freight. The mass-produced statues are neither great art nor meticulously accurate (a conservator notes that "her face isn't as mature as the real Liberty. It's rounder and more like a little girl's"), but they are cherished, particularly since 9/11. Many have been lost or destroyed, but preservationists have been able to account for about a hundred of them, and BSA Troop 101 of Cheyenne, Wyoming, has collected photographs of over 100 of them. They are commonly installed at city halls, libraries, and schools. One of these statues was sent to the Philippines. After some years at the mouth of the Pasig River, Manila, it was kept in a store room at the Scout Reservation, Makiling, Laguna, for about two decades. It is now stored at the national office of the Boy Scouts of the Philippines, Manila. Three of the Boy Scout statues are in Wyoming: in Lions Park in Cheyenne, at the Carbon County Courthouse in Rawlins, and at the Goshen County District Courthouse in Torrington. A nine-foot-tall replica of the statue, built in 1950, stands in Warner Park in Madison, Wisconsin. A replica of the original statue was unveiled on 12 October 2011, at 667 Madison Avenue in Manhattan. Its owner, billionaire Leonard N. Stern, purchased it after reading about it in the local news. The replica is one of only 12 cast from the original mold created by Frédéric Auguste Bartholdi using digital surface scanning
and lost-wax casting methods, and is the only one currently on public display. The statue itself is 9 feet tall and 15 feet including the pedestal on which it stands. There is a half-size replica at the New York-New York Hotel and Casino in Las Vegas, Nevada. In April 2011, the U.S. Postal Service announced that three billion postage stamps mistakenly based on a photograph of this replica were produced and would be sold to the public. In November 2013, the statue's sculptor, Robert Davidson, filed a copyright infringement suit against the U.S. government. Another small replica exists in Las Vegas on Route 589 near Arville St in a plaza parking lot. The city of Sioux Falls, South Dakota, erected a replacement bronze reproduction standing 9 ft (2.7 m) tall in McKennan Park atop the original pedestal of a long-vanished wooden replica. A 36-foot-tall (11 m) bronze replica, accurately based on Bartholdi's Liberty Enlightening the World, stands in Vestavia Hills, a suburb of Birmingham, Alabama. It was cast in 1956 at the Société Antoine Durenne foundry in Somerville, Haute-Marne, France, for placement in 1958 atop the Liberty National Life Insurance Company building in downtown Birmingham. It was relocated and placed on a 60-foot-tall (18 m) granite pedestal adjacent to Interstate 459 in 1989. Two 30-foot (9.1 m) copper replicas by sculptor Leo Lentelli stand atop the Liberty National Bank Building in Buffalo, New York, nearly 108 m (354 ft) above street level. A 25-foot-tall (7.6 m) replica sits on a platform (pier) of the ruined Marysville Bridge in the Dauphin Narrows of the Susquehanna River north of Harrisburg. The replica was built by local activist Gene Stilp on July 2, 1986; it was made of Venetian blinds and stood 18 feet (5.5 m) tall. Six years
later, after it was destroyed in a windstorm, it was rebuilt by Stilp and other local citizens, of wood, metal, glass and fiberglass, to a height of 25 feet (7.6 m). A Lego replica of the Statue of Liberty consisting of 2882 bricks and standing 0.9 m (3.0 ft) is a popular sculpture among Lego enthusiasts. The statue went out of production, but due to popular demand was returned to sale. A 1/12 replica of the Statue of Liberty made essentially out of junk stands at the intersection of US 280 and US 341 in McRae, Georgia. The head is made out of a stump from a nearby swamp, the arm holding the torch is made from styrofoam and the hand holding the book is actually an electric lineman's glove. The town's Lions Club erected the replica in 1986 during the statue's centennial. An 11-foot (3.4 m) miniature Statue of Liberty (holding a Bible instead of a tablet) currently stands atop a 15-foot (4.6 m) pedestal outside the Liberty Recycling plant in San Marcos, California. The company was named after the statue, which has been moved throughout northern San Diego County for over 80 years, originating at the Liberty Hotel in Leucadia, in the 1920s. A 25-foot (7.6 m) replica of the statue, lofting a Christian cross, holding the Ten Commandments, and named the Statue of Liberation through Christ, was erected by a predominantly African American church in Memphis, Tennessee, on 4 July 2006. A small replica stands on the grounds of the Cherokee Capitol Building in Tahlequah, Oklahoma, a gift from the local Boy Scouts in 1950 (presumably as part of the above-mentioned national Boy Scout celebration). Fargo, North Dakota, also had a replica of the Statue of Liberty on the corner of Main Avenue and 2nd Street at
the entrance of the Main Avenue bridge, which was reported stolen on July 26, 2019. There is a replica on the shoreline of Lake Chaubunagungamaug in Webster, Massachusetts. A 1/6-scale replica (≈50 feet including pedestal) stands in a parking lot of a strip mall in Milwaukie, Oregon, off McLoughlin Blvd at 4255 SE Roethe Rd. A 6-foot (1.8 m) replica stands at Statue of Liberty Plaza in West Seattle, Washington, at Alki Beach Park. A 10-foot (3.0 m) replica overlooks Interstate 5 in Everett, Washington, from a private residence. A replica of the Statue of Liberty stands on Mackinac Island, Michigan. A replica of the Statue of Liberty is located in the downtown area of New Castle, Pennsylvania. A replica of the Statue of Liberty is located near the Lincoln High School in Ellwood City, Pennsylvania. A bronze replica of the Statue of Liberty resides in Neenah, Wisconsin. It was cast in California by the Great American Bronze Works. This version of the Statue of Liberty is 14 feet, 6 inches tall. It is 10 percent the size of the original. A replica approximately the same size as an adult person is located alongside Highway 80 at the west end of Forney, Texas. An earlier installation stood from 1986 until May 2016, when it was removed to make way for highway construction. As of November 2019, it has been replaced in nearly the same spot, this time painted a darker green and with an illuminated torch. A small statue stands on the grounds of the Chimborazo Hospital Museum on the Richmond National Battlefield in Richmond, Virginia. A replica of the Statue of Liberty stands in Liberty Park at the entrance to the city of Schenectady, New York. A replica in Newton Falls, Ohio, used to stand in front of Liberty Tax
Service in Leavittsburg, Ohio. It was donated to Newton Falls by the former owner of Liberty Tax Service when she closed the business. A 7-foot, 7-inch replica of the statue, built in 2013, stands in Old Church Park at the corner of Cherry and Morning Streets in Sunbury, Ohio. A replica of the statue stands in Guam, at the Paseo de Susana park, adjacent to the Hagatna Boat Basin. It was erected in 1950 by the Boy Scouts of America. == South America == === Argentina === In Buenos Aires there is a small cast iron original replica in Barrancas de Belgrano Park, located near the intersection of "La Pampa" and "Arribeños" streets. It was cast by Bartholdi from the same mould as those cast in Paris, although it is much smaller (3 meters tall). It was inaugurated on October 3, 1886, 25 days before the one in New York. Its base bears the inscription “Val d'Osne – 8 Rue Voltaire, Paris”, the name of the French workshop, and “1884”, probably the year of creation. Another replica was bought by the government and placed in a school, Colegio Nacional Sarmiento, at about the same date. There is another replica in Plaza Libertad (Liberty Square) in the city of Villa Aberastain, San Juan. This one was installed on the city square in 1931. There are also two cheaper non-metallic replicas; one is 6 m tall, located in the "New York" Casino in San Luis, and the other crowns a commercial gallery, "Galería de Fabricantes", in Munro, a city in the northeast suburbs of Buenos Aires. === Brazil === In Bangu, Rio de Janeiro, there is a nickel replica made by Bartholdi in 1899. Bartholdi was commissioned by José Paranhos, Baron of Rio Branco, to make a replica in order to
celebrate the 10th anniversary of the Republic of Brazil. Until 1940, the statue was Paranhos family property. In 1940 the statue was passed to Guanabara State. On 20 January 1964, Carlos Lacerda, governor of Guanabara State, placed the statue in Miami Square, Bangu. 22.856681°S 43.492577°W / -22.856681; -43.492577 A small-scale cast metal replica can be found in Maceió, the capital of Alagoas State, in northeast Brazil. The replica is in front of a building constructed in 1869 as the seat of the Conselho Provincial (Provincial Council), which today is the Museu da Imagem e do Som de Alagoas (Museum of Image and Sound of Alagoas). This replica is possibly a casting produced by the Fundição Val d'Osne in France, as in the Praça Lavenere Machado (formerly Praça Dois Leões) on the opposite side of the museum, there are four somewhat larger-than-life-size cast metal statues of wild animals, at least one of which is embossed with the name of the foundry. These castings and the replica all appear to be made of similar material and to be of similar age. It is also probable that they are near contemporaries of the actual Statue of Liberty. 9.6728563°S 35.7223114°W / -9.6728563; -35.7223114 A large modern replica stands in front of the New York City Center, a shopping center constructed in 1999 in Barra da Tijuca in the State of Rio de Janeiro. 22.9992837°S 43.360481°W / -22.9992837; -43.360481 The Havan department store chain has replicas at many of its stores. The largest of these, 57 meters tall, is allegedly at the Barra Velha branch, in the state of Santa Catarina. 26.635870°S 48.698708°W / -26.635870; -48.698708 There is another large replica in the parking area of a Havan department store on the outskirts of Curitiba, in the State of Paraná, opened in 2000. 25.4639912°S
49.2521676°W / -25.4639912; -49.2521676 Also, there is a small replica of the statue in Belém, in front of a Belém Importados store, near the city's port. === Ecuador === In Guayaquil, a little replica gives the name "New York" to a neighborhood in the Valle Alto area. === Peru === In Lima, the New York Casino in the Jesús María District has a small replica at the main entrance. The casino is a tribute to the state of New York and the USA. In Arequipa, the Plaza Las Américas in the Cerro Colorado district has a small replica in the monument in the middle of the square, on top of a globe pointing to the American continent. == Asia == === India === A small replica can be found in Vardhaman Fantasy, an amusement park in Mira Road, Mumbai, along with six other wonders of the world. Replicas of the seven wonders of the world, including the statue, are also found in Eco Park, Kolkata, West Bengal. Another small replica can be found in Seven Wonders Park, a park in Kotri, Kota, Rajasthan, along with six other wonders of the world. === Malaysia === A large replica can be found in Genting Highlands in the state of Pahang. === Singapore === A small replica can be found in Haw Par Villa, a theme park. === China === ==== Guangzhou ==== A replica sits on top of the memorial tomb of the "72 Martyrs of Huanghuagang" (see Huanghuagang Uprising). The current one was rebuilt in 1981. ==== Beijing ==== During the Tiananmen Square protest of 1989, Chinese student demonstrators in Beijing built a 10 m (33 ft) image called the Goddess of Democracy, which sculptor Tsao Tsing-yuan said was intentionally "dissimilar" to the Statue of Liberty to avoid being "too openly pro-American." (See article for a list of replicas
of that statue.) ==== Shenzhen ==== A replica can be found in Window of the World Park. === Israel === A 15-foot-high replica of the Statue of Liberty stands at the western entrance of the village of Arraba in Israel, near a local restaurant. At a highway intersection in Jerusalem called "New York Square," there is an abstract skeletal replica of the statue. A 13-foot-high replica, crafted by YouFine Sculpture, was custom-made for an Israeli resident and installed in front of his newly constructed yard. "Custom Made Large Statue of Liberty". YouFine. Retrieved August 7, 2024. === Japan === The French Statue of Liberty from the Île aux Cygnes came to Odaiba, Tokyo, from April 1998 to May 1999 in commemoration of "The French Year in Japan". Because of its popularity, in 2000 a replica of the French Statue of Liberty was erected at the same place. The Tokyo Bay statue is about 1/7th the size of the statue in New York Harbor. A small Statue of Liberty stands in the Amerika-mura (American Village) shopping district in Osaka. Another replica is in Oirase near the town of Shimoda, south of Misawa in Aomori Prefecture, where the United States has an 8,000-person U.S. Air Force base. A replica of the Statue of Liberty in Ishinomaki, Miyagi Prefecture, was damaged by the 2011 Tōhoku earthquake and tsunami. There is also a replica in Oyabe, Toyama. === Pakistan === There are replicas of the Statue of Liberty in Bahria Town, Lahore, and also in Bahria Town Phase 8, Islamabad. === Philippines === As early as January 1945, there was already news of a campaign that would help erect a Statue of Liberty
replica in the Philippines. The said monument was supposed to be sponsored by The Chicago Daily Times, whose goal was "to commemorate one of the great epics in the struggle for human freedom–the liberation of the Philippines." In 1950, the Boy Scouts of America marked its 40th anniversary. Jack P. Whitaker, the Scout Commissioner of the Kansas City Area Council at the time, had previously proposed the idea of creating and distributing replicas of the Statue of Liberty to every U.S. state and territory, as well as the Philippines. The eight-foot statues, which were cast in bronze, were distributed all over the U.S. and the world from 1949 to 1951. Almost 200 replicas were delivered to 39 U.S. states and to places such as Panama and Puerto Rico. The Boy Scouts of the Philippines received its own replica in the early part of 1950. The statues were donated by the Boy Scouts of America as "an expression of scout brotherhood and goodwill." Their 40th-anniversary theme was "Strengthen the Arm of Liberty." Miniature versions of the statue were also given as gifts. The Philippines became the first independent nation to receive one of the 4,000 eight-inch statues from the Boy Scouts of America. In April 1950, the said statue was officially given by Chief Scout Executive Arthur A. Shuck to Carlos P. Romulo, then chief of the Philippine Mission to the United Nations. In the Philippines, several places were suggested as the site where the eight-foot bronze replica would be erected. The task of choosing the perfect site was delegated to the National Urban Planning Commission, and among those it considered were “Engineer Island, atop the proposed reviewing stand on the Rizal Park, and on the center island rotunda between the Old Legislative building
and Manila City Hall.” In the end, the Boy Scouts of the Philippines (BSP) erected the statue just outside Intramuros. As an icon of the United States, the replica of Lady Liberty would survive several attacks by student protesters in the 1960s. It remained standing until the early 1970s, when the BSP decided to transfer it to the Scout Reservation in Mt. Makiling, which would serve as the statue's home for two decades or so. In a 2002 article published by the Philippine Star, then BSP PR head Nixon Canlapan revealed that the Statue of Liberty was eventually moved and stored at the BSP headquarters on Concepcion Street (now Natividad Almeda-Lopez) in Ermita, Manila. As it turns out, the U.S.-sponsored replica was not the first Lady Liberty in Manila. In the 1930s, one of Manila's biggest shopping stores at the time became the talk of the town, not just for its products but also for its unique multi-story building. Located on Juan Luna Street, the L.R. Aguinaldo's Emporium had an Art Deco facade featuring two contrasting statues: Andres Bonifacio on the right and the Statue of Liberty on the left. Founded by the Philippine retailing pioneer Leopoldo R. Aguinaldo, the establishment would eventually be recognized as Aguinaldo's Department Store. Following the war, Leopoldo's son, Francisco, assumed control of the business, and subsequently the store relocated to Echague. The Echague branch in the 1950s was known for introducing its customers to quality products both from the Philippines and abroad. It also commissioned young interior designers to update the store's furniture section. Thus, the store catapulted the careers of famous designers like Myra Cruz, Edgar Ramirez, and Bonnie Ramos, among others. Aguinaldo's succumbed to the competition and closed in the 1960s. The original building on Juan Luna Street still stands, along with both the
Bonifacio and the Liberty statues. Since the creation of the Liberty statues in Intramuros and on Juan Luna Street, other Philippine provinces soon followed suit. Statue of Liberty replicas can be found in Pangasinan and as far away as the Camp John Hay amphitheater in Baguio. === Thailand === The Mini Siam and Mini Europe model village, in Pattaya, has a miniature Statue of Liberty among others. === Taiwan === There are at least two Statue of Liberty replicas (greater than 30 feet in height) in Taiwan. These two statues are in the cities of Keelung and Taipei. === Vietnam === From 1887 to 1945, Hanoi was home to another copy of the statue. Measuring 2.85 m (9 ft 4 in) tall, it was erected by the French colonial government after being sent from France for an exhibition. It was known to locals unaware of its history as Tượng Bà đầm xòe (Statue of the Western lady wearing a dress). When the French lost control of French Indochina during World War II, the statue was toppled on 1 August 1945, after being deemed a vestige of the colonial government along with other statues erected by the French. == Australia == A 30-foot replica was once found at the Westfield Marion shopping complex in Adelaide, South Australia. The statue was demolished in 2019. == References == == External links == Statues of Liberty in the world Replica Statue of Liberty Search Quick view of Statue of Liberty
Kampō (or Kanpō, 漢方) medicine is the Japanese study and adaptation of traditional Chinese medicine. In 1967, the Japanese Ministry of Health, Labour and Welfare approved four kampo medicines for reimbursement under the National Health Insurance (NHI) program. In 1976, 82 kampo medicines were approved by the Ministry of Health, Labour and Welfare. Currently, 148 kampo medicines are approved for reimbursement. The 14th edition of the Japanese Pharmacopoeia (JP) (日本薬局方 Nihon yakkyokuhō) lists 165 herbal ingredients that are approved for use in kampo remedies. Tsumura (ツムラ) is the leading maker, producing 128 of the 148 kampo medicines. The "count" column shows in how many of these 128 formulae each herb is found. The most common herb is Glycyrrhizae Radix (Chinese liquorice root), found in 94 of the 128 Tsumura formulae. Other common herbs are Zingiberis Rhizoma (ginger) (51 of 128 formulae) and Paeoniae Radix (Chinese peony root) (44 of 128 formulae). Note 1: this character cannot be displayed correctly on a computer. "庶" is usually substituted in Chinese and Japanese. The "灬" in "庶" should be replaced with "虫". Note 2: this character cannot be displayed correctly on a computer. "梨" is usually substituted in Chinese. "梨" or "藜" is usually substituted in Japanese. The "勿" in "藜" should be replaced with "刂". == See also == Kampo list Chinese classic herbal formula List of plants used in herbalism Pharmacopoeia == References == Tsumura Herb Handbook (in Japanese) Bensky, Dan, Steve Clavey, Erich Stöger, and Andrew Gamble. "Chinese Herbal Medicine: Materia Medica", 3rd ed. Eastland Press, 2004. (ISBN 0-939616-42-4) Eastland Press Herb List Arranged by Pinyin Wiseman, Nigel. "Learner's Dictionary of Chinese Medicine" == External links == The World of Kampo
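The herb-frequency figures quoted above are simple fractions of the 128 Tsumura formulae; the short sketch below (counts and herb names taken from the text) makes the proportions explicit.

```python
# Fractions of the 128 Tsumura formulae containing each common herb,
# using the counts quoted in the text above.
counts = {
    "Glycyrrhizae Radix (Chinese liquorice root)": 94,
    "Zingiberis Rhizoma (ginger)": 51,
    "Paeoniae Radix (Chinese peony root)": 44,
}
TOTAL_FORMULAE = 128
for herb, n in counts.items():
    print(f"{herb}: {n}/{TOTAL_FORMULAE} = {n / TOTAL_FORMULAE:.0%}")
# Glycyrrhizae Radix appears in about 73% of the formulae.
```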
|
{
"page_id": 9179093,
"source": null,
"title": "List of kampo herbs"
}
|
DIMPL (Discovery of Intergenic Motifs PipeLine) is a bioinformatic pipeline that enables the extraction and selection of bacterial GC-rich intergenic regions (IGRs) that are enriched for structured non-coding RNAs (ncRNAs). The method of enriching bacterial IGRs for ncRNA motif discovery was first reported in the study "Genome-wide discovery of structured noncoding RNAs in bacteria". The DIMPL pipeline automates the process of total genome analysis by extracting IGRs, filtering them by length and nucleic acid composition, and collecting the data necessary to identify candidate motifs and assign their possible functions. The pipeline provides a reproducible technique for identifying genomic regions enriched for ncRNA through support vector machine (SVM) classifiers. It can be used to look for nucleic acid and protein motifs, including riboswitch-like elements, upstream open reading frames (uORFs), short open reading frames (sORFs), ribosomal protein leader sequences, selfish genetic elements and other structured RNA motifs of unknown function. DIMPL uses various sequence analysis resources, including: the Rfam database, as a reference of known RNA families; the BLASTX search tool, to eliminate unannotated protein-coding regions; the INFERNAL package, to search the IGR sequences; CMfinder, to look for possible RNA secondary structure features; the R-scape software and R2R drawing algorithm, to generate the consensus model; RNAcode, to look for the presence of coding regions; and GenomeView, to visualize the genetic context of the RNA motif. RNA motifs discovered using DIMPL include the HMP-PP riboswitch and the icd-II, carA and ldh2 ncRNA motifs, among others. == References ==
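The core filtering step described above can be sketched in outline: keep intergenic regions within a length window and above a GC-content threshold. The thresholds, input format and function names below are illustrative assumptions, not DIMPL's actual defaults.

```python
# Sketch of an IGR filter: retain intergenic regions that fall within a
# length window and exceed a GC-content threshold (values are assumed).

def gc_content(seq: str) -> float:
    """Fraction of G/C bases in a sequence."""
    seq = seq.upper()
    return (seq.count("G") + seq.count("C")) / len(seq) if seq else 0.0

def filter_igrs(igrs, min_len=50, max_len=600, min_gc=0.55):
    """Return (name, seq) pairs passing the length and GC filters."""
    return [(name, seq) for name, seq in igrs
            if min_len <= len(seq) <= max_len and gc_content(seq) >= min_gc]

igrs = [("igr1", "GCGCGGCCGGCGCC" * 5),   # GC-rich, 70 nt -> kept
        ("igr2", "ATATATATAT" * 10),      # AT-rich -> filtered out
        ("igr3", "GCGC")]                 # too short -> filtered out
kept = filter_igrs(igrs)
print([name for name, _ in kept])  # ['igr1']
```

In the real pipeline the retained regions would then feed the SVM classifier and the downstream motif-search tools (INFERNAL, CMfinder, and so on).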
|
{
"page_id": 69144535,
"source": null,
"title": "DIMPL"
}
|
rnn is an open-source machine learning framework that implements recurrent neural network architectures, such as LSTM and GRU, natively in the R programming language; it has been downloaded over 100,000 times (from the RStudio servers alone). The rnn package is distributed through the Comprehensive R Archive Network under the open-source GPL v3 license. == Workflow == An example in the rnn documentation shows how to train a recurrent neural network to solve the problem of bit-by-bit binary addition. == sigmoid == The sigmoid functions and derivatives used in the package were originally included in the package itself; from version 0.8.0 onwards, they were released in a separate R package, sigmoid, with the intention of enabling more general use. The sigmoid package is a dependency of the rnn package and is therefore automatically installed with it. == Reception == With the release of version 0.3.0 in April 2016, use in production and research environments became more widespread. The package was reviewed several months later on the R blog The Beginner Programmer, which wrote that "R provides a simple and very user friendly package named rnn for working with recurrent neural networks.", further increasing usage. The book Neural Networks in R by Balaji Venkateswaran and Giuseppe Ciaburro uses rnn to demonstrate recurrent neural networks to R users. It is also used in the r-exercises.com course "Neural network exercises". The RStudio CRAN mirror download logs show that the package is downloaded on average about 2,000 times per month from those servers, with a total of over 100,000 downloads since the first release; according to RDocumentation.org, this puts the package in the 15th percentile of the most popular R packages. == References == == External links == Repository on GitHub rnn package on CRAN
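The bit-by-bit binary addition task that the documentation example trains on can be illustrated independently of R. The sketch below (plain Python, with an assumed helper name) shows only the data framing, not the package's API.

```python
# Sketch of the bit-by-bit binary addition task that the rnn package's
# documentation example trains a recurrent network on. This illustrates
# only the data framing, not the R package itself.

def int2bin(x, width=8):
    """Integer -> list of bits, most-significant bit first."""
    return [(x >> i) & 1 for i in reversed(range(width))]

a, b = 45, 77
bits_a, bits_b, bits_sum = int2bin(a), int2bin(b), int2bin(a + b)

# A recurrent network reads the bit pairs least-significant bit first, so
# the carry can propagate through the hidden state, one time step per bit.
for x1, x2, y in zip(bits_a[::-1], bits_b[::-1], bits_sum[::-1]):
    print(f"input=({x1},{x2}) target={y}")
```

Processing from the least-significant bit is the key design choice: the hidden state only needs to remember a one-bit carry from the previous step.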
|
{
"page_id": 57741272,
"source": null,
"title": "Rnn (software)"
}
|
Rodentology is a branch of mammalogy concerned with the study of rodents; a practitioner is a rodentologist. The rodents studied include, but are not limited to, mice, rats, and squirrels. From the perspective of zoology, rodentology investigates the behaviour, biology and classification of various rodent species. The study of rodents includes their genetics and their place in the ecosystem. Furthermore, research may be conducted into pest control, disease vectors, disease management in agriculture, environmental degradation, globalization and disease, effects on human society, and pathogen transmission. == Benefits to society == Rodentology may benefit human society with contributions to agriculture, the balance of nature, food protection, global health, human health, infrastructure protection, integrated pest management, public health and scientific research. == References == == External links ==
|
{
"page_id": 77729754,
"source": null,
"title": "Rodentology"
}
|
The flatness problem (also known as the oldness problem) is a cosmological fine-tuning problem within the Big Bang model of the universe. Such problems arise from the observation that some of the initial conditions of the universe appear to be fine-tuned to very 'special' values, and that small deviations from these values would have extreme effects on the appearance of the universe at the current time. In the case of the flatness problem, the parameter which appears fine-tuned is the density of matter and energy in the universe. This value affects the curvature of space-time, with a very specific critical value being required for a flat universe. The current density of the universe is observed to be very close to this critical value. Since any departure of the total density from the critical value would increase rapidly over cosmic time, the early universe must have had a density even closer to the critical density, departing from it by one part in 10^62 or less. This leads cosmologists to question how the initial density came to be so closely fine-tuned to this 'special' value. The problem was first mentioned by Robert Dicke in 1969. The most commonly accepted solution among cosmologists is cosmic inflation, the idea that the universe went through a brief period of extremely rapid expansion in the first fraction of a second after the Big Bang; along with the monopole problem and the horizon problem, the flatness problem is one of the three primary motivations for inflationary theory. == Energy density and the Friedmann equation == According to Einstein's field equations of general relativity, the structure of spacetime is affected by the presence of matter and energy. On small scales space appears flat, as does the surface of the Earth if one looks at
|
{
"page_id": 1118171,
"source": null,
"title": "Flatness problem"
}
|
a small area. On large scales, however, space is bent by the gravitational effect of matter. Since relativity indicates that matter and energy are equivalent, this effect is also produced by the presence of energy (such as light and other electromagnetic radiation) in addition to matter. The amount of bending (or curvature) of the universe depends on the density of matter/energy present. This relationship can be expressed by the first Friedmann equation. In a universe without a cosmological constant, this is: H^2 = (8πG/3)ρ − kc^2/a^2. Here H is the Hubble parameter, a measure of the rate at which the universe is expanding; ρ is the total density of mass and energy in the universe; a is the scale factor (essentially the 'size' of the universe); and k is the curvature parameter, that is, a measure of how curved spacetime is. A positive, zero or negative value of k corresponds to a respectively closed, flat or open universe. The constants G and c are Newton's gravitational constant and the speed of light, respectively. Cosmologists often simplify this equation by defining a critical density, ρ_c. For a given value of H, this is defined as the density required for a flat universe, i.e. for k = 0. Thus the above equation implies ρ_c = 3H^2/(8πG). Since the constant G is known and the expansion rate H can be measured by observing the speed at which distant galaxies are receding from us, ρ_c
can be determined. Its value is currently around 10^−26 kg m^−3. The ratio of the actual density to this critical value is called Ω, and its difference from 1 determines the geometry of the universe: Ω > 1 corresponds to a greater than critical density, ρ > ρ_c, and hence a closed universe. Ω < 1 gives a low-density open universe, and Ω equal to exactly 1 gives a flat universe. The Friedmann equation, (3a^2/(8πG))H^2 = ρa^2 − 3kc^2/(8πG), can be re-arranged into ρ_c a^2 − ρa^2 = −3kc^2/(8πG), which after factoring out ρa^2, and using Ω = ρ/ρ_c, leads to (Ω^−1 − 1)ρa^2 = −3kc^2/(8πG). The right-hand side of the last expression above contains constants only, and therefore the left-hand side must remain constant throughout the evolution of the universe. As the universe expands the scale factor a increases, but the density ρ decreases as matter (or energy) becomes spread out. For the standard model of the universe, which contains mainly matter and radiation for most of its history, ρ decreases more quickly than a^2 increases, and so the factor ρa^2 will decrease. Since the time of the Planck era, shortly after the Big Bang, this
term has decreased by a factor of around 10^60, and so (Ω^−1 − 1) must have increased by a similar amount to retain the constant value of their product. == Current value of Ω == === Measurement === The value of Ω at the present time is denoted Ω0. This value can be deduced by measuring the curvature of spacetime (since Ω = 1, or ρ = ρ_c, is defined as the density for which the curvature k = 0). The curvature can be inferred from a number of observations. One such observation is that of anisotropies (that is, variations with direction; see below) in the Cosmic Microwave Background (CMB) radiation. The CMB is electromagnetic radiation which fills the universe, left over from an early stage in its history when it was filled with photons and a hot, dense plasma. This plasma cooled as the universe expanded, and when it cooled enough to form stable atoms it no longer absorbed the photons. The photons present at that stage have been propagating ever since, growing fainter and less energetic as they spread through the ever-expanding universe. The temperature of this radiation is almost the same at all points on the sky, but there is a slight variation (around one part in 100,000) between the temperature received from different directions. The angular scale of these fluctuations (the typical angle between a hot patch and a cold patch on the sky) depends on the curvature of the universe, which in turn depends on its density as described above. Thus, measurements of this angular scale allow an estimation of Ω0. Another probe of Ω0 is the frequency of Type-Ia supernovae at different distances from
Earth. These supernovae, the explosions of degenerate white dwarf stars, are a type of standard candle; this means that the processes governing their intrinsic brightness are well understood, so that a measure of apparent brightness when seen from Earth can be used to derive accurate distance measures for them (the apparent brightness decreasing in proportion to the square of the distance; see luminosity distance). Comparing this distance to the redshift of the supernovae gives a measure of the rate at which the universe has been expanding at different points in history. Since the expansion rate evolves differently over time in cosmologies with different total densities, Ω0 can be inferred from the supernovae data. Data from the Wilkinson Microwave Anisotropy Probe (WMAP, measuring CMB anisotropies) combined with that from the Sloan Digital Sky Survey and observations of type-Ia supernovae constrain Ω0 to be 1 within 1%. In other words, the term |Ω − 1| is currently less than 0.01, and therefore must have been less than 10^−62 at the Planck era. The cosmological parameters measured by the Planck spacecraft mission reaffirmed the previous results from WMAP. === Implication === This tiny value is the crux of the flatness problem. If the initial density of the universe could take any value, it would seem extremely surprising to find it so 'finely tuned' to the critical value ρ_c. Indeed, a very small departure of Ω from 1 in the early universe would have been magnified during billions of years of expansion to create a current density very far from critical. In the case of an overdensity (ρ > ρ_c) this would lead to a universe so dense it would cease expanding and collapse into a Big Crunch (an opposite to the Big
Bang, in which all matter and energy falls back into an extremely dense state) in a few years or less; in the case of an underdensity (ρ < ρ_c) it would expand so quickly and become so sparse that it would soon seem essentially empty, and gravity would not be strong enough by comparison to cause matter to collapse and form galaxies, resulting in a Big Freeze. In either case the universe would contain no complex structures such as galaxies, stars, planets or any form of life. This problem with the Big Bang model was first pointed out by Robert Dicke in 1969, and it motivated a search for some reason the density should take such a specific value. == Solutions to the problem == Some cosmologists agreed with Dicke that the flatness problem was a serious one, in need of a fundamental reason for the closeness of the density to criticality. But there was also a school of thought which denied that there was a problem to solve, arguing instead that since the universe must have some density it may as well have one close to ρ_c as far from it, and that speculating on a reason for any particular value was "beyond the domain of science". That, however, is a minority viewpoint, even among those sceptical of the existence of the flatness problem. Several cosmologists have argued that, for a variety of reasons, the flatness problem is based on a misunderstanding. === Anthropic principle === One solution to the problem is to invoke the anthropic principle, which states that humans should take into account the conditions necessary for them to exist when speculating about causes of the universe's properties. If two types of universe seem equally likely but
only one is suitable for the evolution of intelligent life, the anthropic principle suggests that finding ourselves in that universe is no surprise: if the other universe had existed instead, there would be no observers to notice the fact. The principle can be applied to solve the flatness problem in two somewhat different ways. The first (an application of the 'strong anthropic principle') was suggested by C. B. Collins and Stephen Hawking, who in 1973 considered the existence of an infinite number of universes such that every possible combination of initial properties was held by some universe. In such a situation, they argued, only those universes with exactly the correct density for forming galaxies and stars would give rise to intelligent observers such as humans: therefore, the fact that we observe Ω to be so close to 1 would be "simply a reflection of our own existence". An alternative approach, which makes use of the 'weak anthropic principle', is to suppose that the universe is infinite in size, but with the density varying in different places (i.e. an inhomogeneous universe). Thus some regions will be over-dense (Ω > 1) and some under-dense (Ω < 1). These regions may be extremely far apart, perhaps so far that light has not had time to travel from one to another during the age of the universe (that is, they lie outside one another's cosmological horizons). Therefore, each region would behave essentially as a separate universe: if we happened to live in a large patch of almost-critical density, we would have no way of knowing of the existence of far-off under- or over-dense patches, since no light or other signal has reached us from them. An appeal to the anthropic principle can then be made, arguing that intelligent life would only arise in
those patches with Ω very close to 1, and that therefore our living in such a patch is unsurprising. This latter argument makes use of a version of the anthropic principle which is 'weaker' in the sense that it requires no speculation on multiple universes, or on the probabilities of various different universes existing instead of the current one. It requires only a single universe which is infinite, or merely large enough that many disconnected patches can form, and in which the density varies in different regions (which is certainly the case on smaller scales, giving rise to galactic clusters and voids). However, the anthropic principle has been criticised by many scientists. For example, in 1979 Bernard Carr and Martin Rees argued that the principle "is entirely post hoc: it has not yet been used to predict any feature of the Universe." Others have objected to its philosophical basis, with Ernan McMullin writing in 1994 that "the weak Anthropic principle is trivial ... and the strong Anthropic principle is indefensible." Since many physicists and philosophers of science do not consider the principle to be compatible with the scientific method, another explanation for the flatness problem was needed. === Inflation === The standard solution to the flatness problem invokes cosmic inflation, a process whereby the universe expands exponentially quickly (i.e. a grows as e^{λt} with time t, for some constant λ) during a short period in its early history. The theory of inflation was first proposed in 1979, and published in 1981, by Alan Guth. His two main motivations for doing so were the flatness problem and the horizon problem, another fine-tuning problem of physical cosmology. However, "In December, 1980 when Guth was developing his
inflation model, he was not trying to solve either the flatness or horizon problems. Indeed, at that time, he knew nothing of the horizon problem and had never quantitatively calculated the flatness problem". He was a particle physicist trying to solve the magnetic monopole problem. The proposed cause of inflation is a field which permeates space and drives the expansion. The field contains a certain energy density, but unlike the density of the matter or radiation present in the late universe, which decreases over time, the density of the inflationary field remains roughly constant as space expands. Therefore, the term ρa^2 increases extremely rapidly as the scale factor a grows exponentially. Recalling the Friedmann equation (Ω^−1 − 1)ρa^2 = −3kc^2/(8πG), and the fact that the right-hand side of this expression is constant, the term |Ω^−1 − 1| must therefore decrease with time. Thus if |Ω^−1 − 1| initially takes any arbitrary value, a period of inflation can force it down towards 0 and leave it extremely small, around 10^−62 as required above, for example. Subsequent evolution of the universe will cause the value to grow, bringing it to the currently observed value of around 0.01. Thus the sensitive dependence on the initial value of Ω has been removed: a large and therefore 'unsurprising' starting value need not become amplified and lead to a very curved universe with no opportunity to form galaxies and other structures. This success in solving the flatness problem is considered one of the major motivations for inflationary theory. However,
some physicists deny that inflationary theory resolves the flatness problem, arguing that it merely moves the fine-tuning from the probability distribution to the potential of a field, or even deny that it is a scientific theory. === Post inflation === Although inflationary theory is regarded as having had much success, and the evidence for it is compelling, it is not universally accepted: cosmologists recognize that there are still gaps in the theory and are open to the possibility that future observations will disprove it. In particular, in the absence of any firm evidence for what the field driving inflation should be, many different versions of the theory have been proposed. Many of these contain parameters or initial conditions which themselves require fine-tuning in much the way that the early density does without inflation. For these reasons work is still being done on alternative solutions to the flatness problem. These have included non-standard interpretations of the effect of dark energy and gravity, particle production in an oscillating universe, and use of a Bayesian statistical approach to argue that the problem is non-existent. The latter argument, suggested for example by Evrard and Coles, maintains that the idea that Ω being close to 1 is 'unlikely' is based on assumptions about the likely distribution of the parameter which are not necessarily justified. Despite this ongoing work, inflation remains by far the dominant explanation for the flatness problem. The question arises, however, whether it is still the dominant explanation because it is the best explanation, or because the community is unaware of progress on this problem. In particular, in addition to the idea that Ω is not a suitable parameter in this context, other arguments against the flatness problem have been presented: if the universe collapses in the future, then the flatness problem "exists",
but only for a relatively short time, so a typical observer would not expect to measure Ω appreciably different from 1; in the case of a universe which expands forever with a positive cosmological constant, fine-tuning is needed not to achieve a (nearly) flat universe, but to avoid it. === Einstein–Cartan theory === The flatness problem is naturally solved by the Einstein–Cartan–Sciama–Kibble theory of gravity, without the exotic form of matter required in inflationary theory. This theory extends general relativity by removing a constraint of the symmetry of the affine connection and regarding its antisymmetric part, the torsion tensor, as a dynamical variable. It has no free parameters. Including torsion gives the correct conservation law for the total (orbital plus intrinsic) angular momentum of matter in the presence of gravity. The minimal coupling between torsion and Dirac spinors obeying the nonlinear Dirac equation generates a spin-spin interaction which is significant in fermionic matter at extremely high densities. Such an interaction averts the unphysical big bang singularity, replacing it with a bounce at a finite minimum scale factor, before which the Universe was contracting. The rapid expansion immediately after the big bounce explains why the present Universe at largest scales appears spatially flat, homogeneous and isotropic. As the density of the Universe decreases, the effects of torsion weaken and the Universe smoothly enters the radiation-dominated era. == See also == Magnetic monopole Horizon problem == Notes == == References ==
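The e-fold bookkeeping behind the inflationary dilution argument above can be checked with a few lines of arithmetic (a sketch; it assumes, per the text, that ρ stays roughly constant during inflation and that |Ω⁻¹ − 1| must be driven down to about 10⁻⁶²):

```python
import math

# During inflation rho is ~constant while a grows as e^(lambda t), so
# |Omega^{-1} - 1| ~ 1/(rho a^2) shrinks by a factor e^(-2N) over N e-folds.
target = 1e-62                     # smallness required by the text
N = 0.5 * math.log(1.0 / target)   # solve e^(-2N) = target, starting from O(1)
# N comes out near 71, comparable to the ~60+ e-folds usually quoted
# as the minimum amount of inflation.
```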
|
{
  "page_id": 1118171,
  "source": null,
  "title": "Flatness problem"
}
|
In organophosphorus chemistry, the Kabachnik–Fields reaction is a three-component organic reaction forming α-aminomethylphosphonates from an amine, a carbonyl compound, and a dialkyl phosphonate, (RO)2P(O)H (also called a dialkyl phosphite). Aminophosphonates are synthetic targets of some importance as phosphorus analogues of α-amino acids (a bioisostere). This multicomponent reaction was independently discovered by Martin Kabachnik and Ellis K. Fields in 1952. The reaction is very similar to the two-component Pudovik reaction, which involves condensation of the phosphite and a preformed imine. The first step in this reaction is the formation of an imine, followed by a hydrophosphonylation step in which the phosphonate P–H bond adds across the C=N double bond. The starting carbonyl component is usually an aldehyde and sometimes a ketone. The reaction can be accelerated with a combination of a dehydrating reagent and a Lewis acid. Enantioselective variants of the Kabachnik–Fields reaction have been developed; for example, employing α-methylbenzylamine provides a chiral, non-racemic α-aminophosphonate. == References ==
|
{
"page_id": 22024157,
"source": null,
"title": "Kabachnik–Fields reaction"
}
|
The Rolf Wideroe Prize is awarded every third year by the Accelerator Group of the European Physical Society (EPS), in memory of Rolf Widerøe, to individuals in recognition of outstanding work in the field of accelerator physics. The prize was awarded for the first time in 1996, but was only named the Rolf Wideroe Prize in 2011. Before then, the prize was simply referred to as the EPS Accelerator Group Prize. == Laureates ==
2020: Lucio Rossi
2017: Lyn Evans
2014: Mikael Eriksson
2011: Shin-Ichi Kurokawa
2008: Alexander Chao
2006: Vladimir Teplyakov
2004: Igor Meshkov
2002: Kurt Hübner
2000: Eberhard Keil
1998: Cristoforo Benvenuti
1996: R.D. Kohaupt and the DESY Feedback Group: M. Ebert, D. Heins, J. Klute, K.-H. Matthiesen, H. Musfeldt, S. Pätzold, J. Rümmler, M. Schweiger, J. Theiss
== See also == List of physics awards == References ==
|
{
"page_id": 64032738,
"source": null,
"title": "Rolf Wideroe Prize"
}
|
In general relativity, the Regge–Wheeler–Zerilli equations are a pair of equations that describe gravitational perturbations of a Schwarzschild black hole, named after Tullio Regge, John Archibald Wheeler and Frank J. Zerilli. The perturbations of a Schwarzschild metric are classified into two types, namely, axial and polar perturbations, a terminology introduced by Subrahmanyan Chandrasekhar. Axial perturbations induce frame dragging by imparting rotation to the black hole and change sign when the azimuthal direction is reversed, whereas polar perturbations do not impart rotation and do not change sign under the reversal of azimuthal direction. The equation for axial perturbations is called the Regge–Wheeler equation and the equation governing polar perturbations is called the Zerilli equation. When assuming a harmonic time dependence, the equations take the same form as the one-dimensional Schrödinger equation. The equations read as ( d 2 d r ∗ 2 + σ 2 ) Z ± = V ± Z ± {\displaystyle \left({\frac {d^{2}}{dr_{*}^{2}}}+\sigma ^{2}\right)Z^{\pm }=V^{\pm }Z^{\pm }} where Z + {\displaystyle Z^{+}} characterises the polar perturbations and Z − {\displaystyle Z^{-}} the axial perturbations. Here r ∗ = r + 2 M ln ( r / 2 M − 1 ) {\displaystyle r_{*}=r+2M\ln(r/2M-1)} is the tortoise coordinate (we set G = c = 1 {\displaystyle G=c=1} ), r {\displaystyle r} belongs to the Schwarzschild coordinates ( t , r , θ , φ ) {\displaystyle (t,r,\theta ,\varphi )} , 2 M {\displaystyle 2M} is the Schwarzschild radius and σ {\displaystyle \sigma } represents the time frequency of the perturbations appearing in the form e i σ t {\displaystyle e^{i\sigma t}} . The Regge–Wheeler potential and Zerilli potential are respectively given by V − = 2 ( r 2 − 2 M r ) r 5 [ ( n + 1 ) r − 3 M ] {\displaystyle V^{-}={\frac {2(r^{2}-2Mr)}{r^{5}}}[(n+1)r-3M]} V +
= 2 ( r 2 − 2 M r ) r 5 ( n r + 3 M ) 2 [ n 2 ( n + 1 ) r 3 + 3 M n 2 r 2 + 9 M 2 n r + 9 M 3 ] {\displaystyle V^{+}={\frac {2(r^{2}-2Mr)}{r^{5}(nr+3M)^{2}}}[n^{2}(n+1)r^{3}+3Mn^{2}r^{2}+9M^{2}nr+9M^{3}]} where 2 n = ( l − 1 ) ( l + 2 ) {\displaystyle 2n=(l-1)(l+2)} and l = 2 , 3 , 4 , … {\displaystyle l=2,3,4,\dots } characterizes the eigenmode for the θ {\displaystyle \theta } coordinate. For gravitational perturbations, the modes l = 0 , 1 {\displaystyle l=0,\,1} are irrelevant because they do not evolve with time. Physically, gravitational perturbations in the l = 0 {\displaystyle l=0} (monopole) mode represent a change in the black hole mass, whereas the l = 1 {\displaystyle l=1} (dipole) mode corresponds to a shift in the location and value of the black hole's angular momentum. The shapes of the above potentials are exhibited in the figure. In tortoise coordinates, r ∗ → − ∞ {\displaystyle r_{*}\rightarrow -\infty } denotes the event horizon and r ∗ → ∞ {\displaystyle r_{*}\rightarrow \infty } is equivalent to r → ∞ {\displaystyle r\rightarrow \infty } i.e., to distances far away from the black hole. The potentials are short-ranged as they decay faster than 1 / r ∗ {\displaystyle 1/r_{*}} ; as r ∗ → ∞ {\displaystyle r_{*}\rightarrow \infty } we have V ± → 2 ( n + 1 ) / r 2 {\displaystyle V^{\pm }\rightarrow 2(n+1)/r^{2}} and as r ∗ → − ∞ {\displaystyle r_{*}\rightarrow -\infty } , we have V ± ∼ e r ∗ / 2 M . {\displaystyle V^{\pm }\sim e^{r_{*}/2M}.} Consequently, the asymptotic behaviour of the solutions for r ∗ → ± ∞ {\displaystyle r_{*}\rightarrow \pm \infty } is e ±
i σ r ∗ . {\displaystyle e^{\pm i\sigma r_{*}}.} == Relations between the two problems == In 1975, Subrahmanyan Chandrasekhar and Steven Detweiler discovered a one-to-one mapping between the two equations, leading to the consequence that the spectra corresponding to the two potentials are identical. The two potentials can also be written as V ± = ± 6 M d f d r ∗ + ( 6 M f ) 2 + 4 n ( n + 1 ) f , f = r 2 − 2 M r 2 r 3 ( n r + 3 M ) . {\displaystyle V^{\pm }=\pm 6M{\frac {df}{dr_{*}}}+(6Mf)^{2}+4n(n+1)f,\quad f={\frac {r^{2}-2Mr}{2r^{3}(nr+3M)}}.} The relations between Z + {\displaystyle Z^{+}} and Z − {\displaystyle Z^{-}} are given by [ 4 n ( n + 1 ) ± 12 i σ M ] Z ± = [ 4 n ( n + 1 ) + 72 M 2 ( r 2 − 2 M r ) r 3 ( 2 n r + 6 M ) ] Z ∓ ± 12 M d Z ∓ d r ∗ . {\displaystyle [4n(n+1)\pm 12i\sigma M]Z^{\pm }=\left[4n(n+1)+{\frac {72M^{2}(r^{2}-2Mr)}{r^{3}(2nr+6M)}}\right]Z^{\mp }\pm 12M{\frac {dZ^{\mp }}{dr_{*}}}.} == Reflection and transmission coefficients == Here V ± {\displaystyle V^{\pm }} is always positive and the problem is one of reflection and transmission of waves incident from r ∗ → ∞ {\displaystyle r_{*}\rightarrow \infty } to r ∗ → − ∞ {\displaystyle r_{*}\rightarrow -\infty } . The problem is essentially the same as that of a reflection and transmission problem by a potential barrier in quantum mechanics. Let the incident wave with unit amplitude be e + i σ r ∗ {\displaystyle e^{+i\sigma r_{*}}} , then the asymptotic behaviours of the solution are given by Z ± = e + i σ r ∗ + R ± e
− i σ r ∗ as r ∗ → + ∞ {\displaystyle Z^{\pm }=e^{+i\sigma r_{*}}+R^{\pm }e^{-i\sigma r_{*}}\quad {\text{as}}\quad r_{*}\rightarrow +\infty } Z ± = T ± e i σ r ∗ as r ∗ → − ∞ {\displaystyle Z^{\pm }=T^{\pm }e^{i\sigma r_{*}}\quad {\text{as}}\quad r_{*}\rightarrow -\infty } where R = R ( σ ) {\displaystyle R=R(\sigma )} and T = T ( σ ) {\displaystyle T=T(\sigma )} are respectively the reflection and transmission amplitudes. In the second equation, we have imposed the physical requirement that no waves emerge from the event horizon. The reflection and transmission coefficients are thus defined as R ± = | R ± | 2 , T ± = | T ± | 2 {\displaystyle {\mathcal {R}}^{\pm }=|R^{\pm }|^{2},\quad {\mathcal {T}}^{\pm }=|T^{\pm }|^{2}} subjected to the condition R ± + T ± = 1. {\displaystyle {\mathcal {R}}^{\pm }+{\mathcal {T}}^{\pm }=1.} Because of the inherent connection between the two equations as outlined in the previous section, it turns out T + = T − , R + = e i δ R − , e i δ = n ( n + 1 ) − 3 i σ M n ( n + 1 ) + 3 i σ M {\displaystyle T^{+}=T^{-},\quad R^{+}=e^{i\delta }R^{-},\quad e^{i\delta }={\frac {n(n+1)-3i\sigma M}{n(n+1)+3i\sigma M}}} and thus consequently, since R + {\displaystyle R^{+}} and R − {\displaystyle R^{-}} differ only in their phases, we get T ≡ T + = T − , R ≡ R + = R − . {\displaystyle {\mathcal {T}}\equiv {\mathcal {T}}^{+}={\mathcal {T}}^{-},\quad {\mathcal {R}}\equiv {\mathcal {R}}^{+}={\mathcal {R}}^{-}.} It is clear from the figure for the reflection coefficient that small-frequency perturbations are readily reflected by the black hole whereas large-frequency ones are absorbed by the black hole. The transition arises around the fundamental quasi-normal mode frequency (see below) for each
multipole. == Quasi-normal modes == Quasi-normal modes correspond to pure tones of the black hole. These tones are excited when arbitrary, but small, perturbations impinge on a black hole, such as an object falling into it, accretion of matter surrounding it, the last stage of slightly aspherical collapse, the last stage of a binary merger, etc. Unlike the reflection and transmission coefficient problem, quasi-normal modes are characterised by complex-valued σ {\displaystyle \sigma } 's with the convention R e { σ } > 0 {\displaystyle \mathrm {Re} \{\sigma \}>0} . The required boundary conditions are Z ± = A ± e − i σ r ∗ as r ∗ → + ∞ {\displaystyle Z^{\pm }=A^{\pm }e^{-i\sigma r_{*}}\quad {\text{as}}\quad r_{*}\rightarrow +\infty } Z ± = e i σ r ∗ as r ∗ → − ∞ {\displaystyle Z^{\pm }=e^{i\sigma r_{*}}\quad {\text{as}}\quad r_{*}\rightarrow -\infty } indicating that we have purely outgoing waves with amplitude A ± {\displaystyle A^{\pm }} and purely ingoing waves at the horizon. The problem becomes an eigenvalue problem. The quasi-normal modes are of damping type in time, although these waves diverge in space as r ∗ → ± ∞ {\displaystyle r^{*}\to \pm \infty } (this is due to the implicit assumption that the perturbation in quasi-normal modes is 'infinite' in the remote past). Again, because of the relation mentioned between the two problems, the spectra of Z + {\displaystyle Z^{+}} and Z − {\displaystyle Z^{-}} are identical and thus it is enough to consider the spectrum of Z − . {\displaystyle Z^{-}.} The problem is simplified by introducing Z − = exp ( i ∫ r ∗ ϕ d r ∗ ) . {\displaystyle Z^{-}=\exp \left(i\int ^{r_{*}}\phi \,dr_{*}\right).} The nonlinear eigenvalue problem is given by i d ϕ d r ∗ + σ 2 − ϕ 2 −
V − = 0 , ϕ ( − ∞ ) = + σ , ϕ ( + ∞ ) = − σ . {\displaystyle i{\frac {d\phi }{dr_{*}}}+\sigma ^{2}-\phi ^{2}-V^{-}=0,\quad \phi (-\infty )=+\sigma ,\quad \phi (+\infty )=-\sigma .} The solution is found to exist only for a discrete set of values of σ . {\displaystyle \sigma .} This equation also implies the identity − 2 i σ + ∫ − ∞ + ∞ ( σ 2 − ϕ 2 ) d r ∗ = ∫ − ∞ + ∞ V − d r ∗ = 4 n + 1 4 M . {\displaystyle -2i\sigma +\int _{-\infty }^{+\infty }(\sigma ^{2}-\phi ^{2})dr_{*}=\int _{-\infty }^{+\infty }V^{-}dr_{*}={\frac {4n+1}{4M}}.} == See also == Chandrasekhar–Page equations Teukolsky equations == References ==
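The unified form V^± = ±6M df/dr* + (6Mf)² + 4n(n+1)f quoted in the relations section can be checked numerically against the closed-form Regge–Wheeler and Zerilli potentials. A small sketch (the sample values M = 1 and l = 2, hence n = 2, and the test point r = 4 are illustrative assumptions):

```python
# Spot check: closed-form V^- (Regge-Wheeler) and V^+ (Zerilli) potentials
# versus the unified expression in terms of f, in G = c = 1 units.
M, n = 1.0, 2.0  # assumed: unit mass, l = 2 so 2n = (l-1)(l+2) = 4

def f(r):
    return (r * r - 2 * M * r) / (2 * r**3 * (n * r + 3 * M))

def df_drstar(r, h=1e-6):
    # Tortoise-coordinate chain rule: d/dr* = (1 - 2M/r) d/dr
    return (1 - 2 * M / r) * (f(r + h) - f(r - h)) / (2 * h)

def V_minus(r):
    # Regge-Wheeler (axial) potential
    return 2 * (r * r - 2 * M * r) / r**5 * ((n + 1) * r - 3 * M)

def V_plus(r):
    # Zerilli (polar) potential
    return (2 * (r * r - 2 * M * r) / (r**5 * (n * r + 3 * M) ** 2)
            * (n * n * (n + 1) * r**3 + 3 * M * n * n * r * r
               + 9 * M * M * n * r + 9 * M**3))

def unified(r, sign):
    # V^{+-} = +-6M df/dr* + (6Mf)^2 + 4n(n+1)f
    return sign * 6 * M * df_drstar(r) + (6 * M * f(r)) ** 2 + 4 * n * (n + 1) * f(r)

r = 4.0  # an arbitrary point outside the horizon r = 2M
assert abs(V_minus(r) - unified(r, -1)) < 1e-9
assert abs(V_plus(r) - unified(r, +1)) < 1e-9
```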
|
{
"page_id": 71241698,
"source": null,
"title": "Regge–Wheeler–Zerilli equations"
}
|
An enyne is an organic compound containing a C=C double bond (alkene) and a C≡C triple bond (alkyne). It is called a conjugated enyne when the double and triple bonds are conjugated. The term is a contraction of the terms alkene and alkyne. The simplest enyne is vinylacetylene. Naturally occurring enynes include isanolic acid and exocarpic acid. == See also == Enyne metathesis Enediyne Polyyne == References ==
|
{
"page_id": 3411940,
"source": null,
"title": "Enyne"
}
|
The molecular formula C15H11NO3 (molar mass: 253.25 g/mol, exact mass: 253.073894 u) may refer to: Cridanimod Furegrelate
|
{
"page_id": 79171557,
"source": null,
"title": "C15H11NO3"
}
|
A gamma ray, also known as gamma radiation (symbol γ), is a penetrating form of electromagnetic radiation arising from high energy interactions like the radioactive decay of atomic nuclei or astronomical events like solar flares. It consists of the shortest wavelength electromagnetic waves, typically shorter than those of X-rays. With frequencies above 30 exahertz (3×1019 Hz) and wavelengths less than 10 picometers (1×10−11 m), gamma ray photons have the highest photon energy of any form of electromagnetic radiation. Paul Villard, a French chemist and physicist, discovered gamma radiation in 1900 while studying radiation emitted by radium. In 1903, Ernest Rutherford named this radiation gamma rays based on their relatively strong penetration of matter; in 1900, he had already named two less penetrating types of decay radiation (discovered by Henri Becquerel) alpha rays and beta rays in ascending order of penetrating power. Gamma rays from radioactive decay are in the energy range from a few kiloelectronvolts (keV) to approximately 8 megaelectronvolts (MeV), corresponding to the typical energy levels in nuclei with reasonably long lifetimes. The energy spectrum of gamma rays can be used to identify the decaying radionuclides using gamma spectroscopy. Very-high-energy gamma rays in the 100–1000 teraelectronvolt (TeV) range have been observed from astronomical sources such as the Cygnus X-3 microquasar. Natural sources of gamma rays originating on Earth are mostly a result of radioactive decay and secondary radiation from atmospheric interactions with cosmic ray particles. However, there are other rare natural sources, such as terrestrial gamma-ray flashes, which produce gamma rays from electron action upon the nucleus. Notable artificial sources of gamma rays include fission, such as that which occurs in nuclear reactors, and high energy physics experiments, such as neutral pion decay and nuclear fusion. 
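The threshold figures quoted above are mutually consistent, as a short unit conversion shows (a sketch using standard CODATA constants; the ~124 keV photon energy at 30 EHz is derived here rather than stated in the text):

```python
h = 6.62607015e-34    # Planck constant, J*s
c = 2.99792458e8      # speed of light, m/s
eV = 1.602176634e-19  # joules per electronvolt

nu = 3e19                       # 30 exahertz, the frequency quoted above
wavelength = c / nu             # -> ~1.0e-11 m, i.e. about 10 picometers
energy_keV = h * nu / eV / 1e3  # -> ~124 keV, well above typical X-ray energies
assert 9.9e-12 < wavelength < 1.01e-11
assert 123 < energy_keV < 125
```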
The energy ranges of gamma rays and X-rays overlap in the electromagnetic spectrum,
so the terminology for these electromagnetic waves varies between scientific disciplines. In some fields of physics, they are distinguished by their origin: gamma rays are created by nuclear decay while X-rays originate outside the nucleus. In astrophysics, gamma rays are conventionally defined as having photon energies above 100 keV and are the subject of gamma-ray astronomy, while radiation below 100 keV is classified as X-rays and is the subject of X-ray astronomy. Gamma rays are ionizing radiation and are thus hazardous to life. They can cause DNA mutations, cancer and tumors, and at high doses burns and radiation sickness. Due to their high penetration power, they can damage bone marrow and internal organs. Unlike alpha and beta rays, they easily pass through the body and thus pose a formidable radiation protection challenge, requiring shielding made from dense materials such as lead or concrete. On Earth, the magnetosphere protects life from most types of lethal cosmic radiation other than gamma rays. == History of discovery == The first gamma ray source to be discovered was the radioactive decay process called gamma decay. In this type of decay, an excited nucleus emits a gamma ray almost immediately upon formation. Paul Villard, a French chemist and physicist, discovered gamma radiation in 1900, while studying radiation emitted from radium. Villard knew that his described radiation was more powerful than previously described types of rays from radium, which included beta rays, first noted as "radioactivity" by Henri Becquerel in 1896, and alpha rays, discovered as a less penetrating form of radiation by Rutherford, in 1899. However, Villard did not consider naming them as a different fundamental type. Later, in 1903, Villard's radiation was recognized as being of a type fundamentally different from previously named rays by Ernest Rutherford, who named Villard's rays "gamma rays" by
analogy with the beta and alpha rays that Rutherford had differentiated in 1899. The "rays" emitted by radioactive elements were named in order of their power to penetrate various materials, using the first three letters of the Greek alphabet: alpha rays as the least penetrating, followed by beta rays, followed by gamma rays as the most penetrating. Rutherford also noted that gamma rays were not deflected (or at least, not easily deflected) by a magnetic field, another property making them unlike alpha and beta rays. Gamma rays were first thought to be particles with mass, like alpha and beta rays. Rutherford initially believed that they might be extremely fast beta particles, but their failure to be deflected by a magnetic field indicated that they had no charge. In 1914, gamma rays were observed to be reflected from crystal surfaces, proving that they were electromagnetic radiation. Rutherford and his co-worker Edward Andrade measured the wavelengths of gamma rays from radium, and found they were similar to X-rays, but with shorter wavelengths and thus, higher frequency. This was eventually recognized as giving them more energy per photon, as soon as the latter term became generally accepted. A gamma decay was then understood to usually emit a gamma photon. == Sources == Natural sources of gamma rays on Earth include gamma decay from naturally occurring radioisotopes such as potassium-40, and also as a secondary radiation from various atmospheric interactions with cosmic ray particles. Natural terrestrial sources that produce gamma rays include lightning strikes and terrestrial gamma-ray flashes, which produce high energy emissions from natural high-energy voltages. Gamma rays are produced by a number of astronomical processes in which very high-energy electrons are produced. Such electrons produce secondary gamma rays by the mechanisms of bremsstrahlung, inverse Compton scattering and synchrotron radiation. A large fraction
of such astronomical gamma rays are screened by Earth's atmosphere. Notable artificial sources of gamma rays include fission, such as occurs in nuclear reactors, as well as high energy physics experiments, such as neutral pion decay and nuclear fusion. A sample of gamma ray-emitting material that is used for irradiating or imaging is known as a gamma source. It is also called a radioactive source, isotope source, or radiation source, though these more general terms also apply to alpha and beta-emitting devices. Gamma sources are usually sealed to prevent radioactive contamination, and transported in heavy shielding. === Radioactive decay (gamma decay) === Gamma rays are produced during gamma decay, which normally occurs after other forms of decay occur, such as alpha or beta decay. A radioactive nucleus can decay by the emission of an α or β particle. The daughter nucleus that results is usually left in an excited state. It can then decay to a lower energy state by emitting a gamma ray photon, in a process called gamma decay. The emission of a gamma ray from an excited nucleus typically requires only 10−12 seconds. Gamma decay may also follow nuclear reactions such as neutron capture, nuclear fission, or nuclear fusion. Gamma decay is also a mode of relaxation of many excited states of atomic nuclei following other types of radioactive decay, such as beta decay, so long as these states possess the necessary component of nuclear spin. When high-energy gamma rays, electrons, or protons bombard materials, the excited atoms emit characteristic "secondary" gamma rays, which are products of the creation of excited nuclear states in the bombarded atoms. Such transitions, a form of nuclear gamma fluorescence, form a topic in nuclear physics called gamma spectroscopy. Formation of fluorescent gamma rays is a rapid subtype of radioactive gamma decay.
|
{
"page_id": 18616290,
"source": null,
"title": "Gamma ray"
}
|
In certain cases, the excited nuclear state that follows the emission of a beta particle or other type of excitation, may be more stable than average, and is termed a metastable excited state, if its decay takes (at least) 100 to 1000 times longer than the average 10−12 seconds. Such relatively long-lived excited nuclei are termed nuclear isomers, and their decays are termed isomeric transitions. Such nuclei have half-lives that are more easily measurable, and rare nuclear isomers are able to stay in their excited state for minutes, hours, days, or occasionally far longer, before emitting a gamma ray. The process of isomeric transition is therefore similar to any gamma emission, but differs in that it involves the intermediate metastable excited state(s) of the nuclei. Metastable states are often characterized by high nuclear spin, requiring a change in spin of several units or more with gamma decay, instead of a single unit transition that occurs in only 10−12 seconds. The rate of gamma decay is also slowed when the energy of excitation of the nucleus is small. An emitted gamma ray from any type of excited state may transfer its energy directly to any electrons, but most probably to one of the K shell electrons of the atom, causing it to be ejected from that atom, in a process generally termed the photoelectric effect (external gamma rays and ultraviolet rays may also cause this effect). The photoelectric effect should not be confused with the internal conversion process, in which a gamma ray photon is not produced as an intermediate particle (rather, a "virtual gamma ray" may be thought to mediate the process). ==== Decay schemes ==== One example of gamma ray production due to radionuclide decay is the decay scheme for cobalt-60, as illustrated in the accompanying diagram. First, 60Co
decays to excited 60Ni by beta decay emission of an electron of 0.31 MeV. Then the excited 60Ni decays to the ground state (see nuclear shell model) by emitting gamma rays in succession of 1.17 MeV followed by 1.33 MeV. This path is followed 99.88% of the time: Another example is the alpha decay of 241Am to form 237Np; which is followed by gamma emission. In some cases, the gamma emission spectrum of the daughter nucleus is quite simple, (e.g. 60Co/60Ni) while in other cases, such as with (241Am/237Np and 192Ir/192Pt), the gamma emission spectrum is complex, revealing that a series of nuclear energy levels exist. === Particle physics === Gamma rays are produced in many processes of particle physics. Typically, gamma rays are the products of neutral systems which decay through electromagnetic interactions (rather than a weak or strong interaction). For example, in an electron–positron annihilation, the usual products are two gamma ray photons. If the annihilating electron and positron are at rest, each of the resulting gamma rays has an energy of ~ 511 keV and frequency of ~ 1.24×1020 Hz. Conversely, gamma rays above 1022 keV can interact with nuclei via pair production of an electron and positron. Similarly, a neutral pion most often decays into two photons. Many other hadrons and massive bosons also decay electromagnetically. High energy physics experiments, such as the Large Hadron Collider, accordingly employ substantial radiation shielding. Because subatomic particles mostly have far shorter wavelengths than atomic nuclei, particle physics gamma rays are generally several orders of magnitude more energetic than nuclear decay gamma rays. Since gamma rays are at the top of the electromagnetic spectrum in terms of energy, all extremely high-energy photons are gamma rays; for example, a photon having the Planck energy would be a gamma ray. === Other
sources === A few gamma rays in astronomy are known to arise from gamma decay (see discussion of SN1987A), but most do not. Photons from astrophysical sources that carry energy in the gamma radiation range are often explicitly called gamma-radiation. In addition to nuclear emissions, they are often produced by sub-atomic particle and particle-photon interactions. Those include electron-positron annihilation, neutral pion decay, bremsstrahlung, inverse Compton scattering, and synchrotron radiation. ==== Laboratory sources ==== In October 2017, scientists from various European universities proposed a means for sources of GeV photons using lasers as exciters through a controlled interplay between the cascade and anomalous radiative trapping. ==== Terrestrial thunderstorms ==== Thunderstorms can produce a brief pulse of gamma radiation called a terrestrial gamma-ray flash. These gamma rays are thought to be produced by high intensity static electric fields accelerating electrons, which then produce gamma rays by bremsstrahlung as they collide with and are slowed by atoms in the atmosphere. Gamma rays up to 100 MeV can be emitted by terrestrial thunderstorms, and were discovered by space-borne observatories. This raises the possibility of health risks to passengers and crew on aircraft flying in or near thunderclouds. ==== Solar flares ==== The most effusive solar flares emit across the entire EM spectrum, including γ-rays. The first confident observation occurred in 1972. ==== Cosmic rays ==== Extraterrestrial, high energy gamma rays include the gamma ray background produced when cosmic rays (either high speed electrons or protons) collide with ordinary matter, producing pair-production gamma rays at 511 keV. Alternatively, bremsstrahlung photons are produced at energies of tens of MeV or more when cosmic ray electrons interact with nuclei of sufficiently high atomic number (see gamma ray image of the Moon near the end of this article, for illustration). 
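The 511 keV figure for annihilation and pair-production photons is just the electron rest energy m_e c², and the ~1.24×10^20 Hz annihilation-photon frequency mentioned earlier in the article follows from E = hν. A quick sketch with standard CODATA values:

```python
m_e = 9.1093837015e-31  # electron mass, kg
c = 2.99792458e8        # speed of light, m/s
h = 6.62607015e-34      # Planck constant, J*s
eV = 1.602176634e-19    # joules per electronvolt

E = m_e * c**2              # rest energy carried by each annihilation photon
energy_keV = E / eV / 1e3   # -> ~511 keV
frequency = E / h           # -> ~1.24e20 Hz, via E = h*nu
```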
==== Pulsars and magnetars ==== The gamma ray sky
(see illustration at right) is dominated by the more common and longer-term production of gamma rays that emanate from pulsars within the Milky Way. Sources from the rest of the sky are mostly quasars. Pulsars are thought to be neutron stars with magnetic fields that produce focused beams of radiation, and are far less energetic, more common, and much nearer sources (typically seen only in our own galaxy) than are quasars or the rarer gamma-ray burst sources of gamma rays. Pulsars have relatively long-lived magnetic fields that produce focused beams of relativistic speed charged particles, which emit gamma rays (bremsstrahlung) when those strike gas or dust in their nearby medium, and are decelerated. This is a similar mechanism to the production of high-energy photons in megavoltage radiation therapy machines (see bremsstrahlung). In inverse Compton scattering, charged particles (usually electrons) impart energy to low-energy photons, boosting them to higher-energy photons; such impacts of photons on relativistic charged-particle beams are another possible mechanism of gamma ray production. Neutron stars with a very high magnetic field (magnetars), thought to produce astronomical soft gamma repeaters, are another relatively long-lived star-powered source of gamma radiation. ==== Quasars and active galaxies ==== More powerful gamma rays from very distant quasars and closer active galaxies are thought to have a gamma ray production source similar to a particle accelerator. High energy electrons produced by the quasar, and subjected to inverse Compton scattering, synchrotron radiation, or bremsstrahlung, are the likely source of the gamma rays from those objects. It is thought that a supermassive black hole at the center of such galaxies provides the power source that intermittently destroys stars and focuses the resulting charged particles into beams that emerge from their rotational poles. When those beams interact with gas, dust, and lower energy photons
they produce X-rays and gamma rays. These sources are known to fluctuate with durations of a few weeks, suggesting their relatively small size (less than a few light-weeks across). Such sources of gamma and X-rays are the most commonly visible high intensity sources outside the Milky Way galaxy. They shine not in bursts (see illustration), but relatively continuously when viewed with gamma ray telescopes. The power of a typical quasar is about 10⁴⁰ watts, a small fraction of which is gamma radiation. Much of the rest is emitted as electromagnetic waves of all frequencies, including radio waves. ==== Gamma-ray bursts ==== The most intense sources of gamma rays are also the most intense sources of any type of electromagnetic radiation presently known. They are the "long duration burst" sources of gamma rays in astronomy ("long" in this context, meaning a few tens of seconds), and they are rare compared with the sources discussed above. By contrast, "short" gamma-ray bursts of two seconds or less, which are not associated with supernovae, are thought to produce gamma rays during the collision of pairs of neutron stars, or a neutron star and a black hole. The so-called long-duration gamma-ray bursts produce a total energy output of about 10⁴⁴ joules (as much energy as the Sun will produce in its entire lifetime) but in a period of only 20 to 40 seconds. Gamma rays are approximately 50% of the total energy output. The leading hypotheses for the mechanism of production of these highest-known intensity beams of radiation are inverse Compton scattering and synchrotron radiation from high-energy charged particles. These processes occur as relativistic charged particles leave the region of the event horizon of a newly formed black hole created during a supernova explosion. The beam of particles moving at relativistic speeds is focused for a
few tens of seconds by the magnetic field of the exploding hypernova. The fusion explosion of the hypernova drives the energetics of the process. If the narrowly directed beam happens to be pointed toward the Earth, it shines at gamma ray frequencies with such intensity that it can be detected even at distances of up to 10 billion light years, which is close to the edge of the visible universe. == Properties == === Penetration of matter === Due to their penetrating nature, gamma rays require large amounts of shielding mass to reduce them to levels which are not harmful to living cells, in contrast to alpha particles, which can be stopped by paper or skin, and beta particles, which can be shielded by thin aluminium. Gamma rays are best absorbed by materials with high atomic numbers (Z) and high density, which contribute to the total stopping power. Because of this, a lead (high Z) shield is 20–30% better as a gamma shield than an equal mass of another low-Z shielding material, such as aluminium, concrete, water, or soil; lead's major advantage is not in lower weight, but rather its compactness due to its higher density. Protective clothing, goggles and respirators can protect from internal contact with or ingestion of alpha or beta emitting particles, but provide no protection from gamma radiation from external sources. The higher the energy of the gamma rays, the thicker the shielding of a given material that is required. Materials for shielding gamma rays are typically characterized by the thickness required to reduce the intensity of the gamma rays by one half (the half-value layer or HVL). For example, gamma rays that require 1 cm (0.4 inch) of lead to reduce their intensity by 50% will also have their intensity reduced in half by
4.1 cm of granite rock, 6 cm (2.5 inches) of concrete, or 9 cm (3.5 inches) of packed soil. However, the mass of this much concrete or soil is only 20–30% greater than that of lead with the same absorption capability. Depleted uranium is sometimes used for shielding in portable gamma ray sources, due to the smaller half-value layer when compared to lead (around 0.6 times the thickness for common gamma ray sources, e.g. iridium-192 and cobalt-60) and cheaper cost compared to tungsten. In a nuclear power plant, shielding can be provided by steel and concrete in the pressure and particle containment vessel, while water provides radiation shielding of fuel rods during storage or transport into the reactor core. The loss of water or removal of a "hot" fuel assembly into the air would result in much higher radiation levels than when kept under water. === Matter interaction === When a gamma ray passes through matter, the probability for absorption is proportional to the thickness of the layer, the density of the material, and the absorption cross section of the material. The total absorption shows an exponential decrease of intensity with distance from the incident surface: I(x) = I₀ · e^(−μx), where x is the thickness of the material from the incident surface, μ = nσ is the absorption coefficient, measured in cm⁻¹, n is the number of atoms per cm³ of the material (atomic density) and σ is the absorption cross section in cm². As it passes through matter, gamma radiation ionizes via three processes: The photoelectric effect: This describes the case in which a gamma photon interacts with and transfers its energy to an atomic electron, causing the ejection of that electron from the atom. The kinetic energy of
the resulting photoelectron is equal to the energy of the incident gamma photon minus the energy that originally bound the electron to the atom (binding energy). The photoelectric effect is the dominant energy transfer mechanism for X-ray and gamma ray photons with energies below 50 keV (thousand electronvolts), but it is much less important at higher energies. Compton scattering: This is an interaction in which an incident gamma photon loses enough energy to an atomic electron to cause its ejection, with the remainder of the original photon's energy emitted as a new, lower energy gamma photon whose emission direction is different from that of the incident gamma photon, hence the term "scattering". The probability of Compton scattering decreases with increasing photon energy. It is thought to be the principal absorption mechanism for gamma rays in the intermediate energy range 100 keV to 10 MeV. It is relatively independent of the atomic number of the absorbing material, which is why very dense materials like lead are only modestly better shields, on a per weight basis, than are less dense materials. Pair production: This becomes possible with gamma energies exceeding 1.02 MeV, and becomes important as an absorption mechanism at energies over 5 MeV (see illustration at right, for lead). By interaction with the electric field of a nucleus, the energy of the incident photon is converted into the mass of an electron-positron pair. Any gamma energy in excess of the equivalent rest mass of the two particles (totaling at least 1.02 MeV) appears as the kinetic energy of the pair and in the recoil of the emitting nucleus. At the end of the positron's range, it combines with a free electron, and the two annihilate, and the entire mass of these two is then converted into two gamma photons of at
least 0.51 MeV energy each (or higher according to the kinetic energy of the annihilated particles). The secondary electrons (and/or positrons) produced in any of these three processes frequently have enough energy to produce much ionization themselves. Additionally, gamma rays, particularly high energy ones, can interact with atomic nuclei, resulting in the ejection of particles in photodisintegration, or in some cases, even nuclear fission (photofission). === Light interaction === High-energy (from 80 GeV to ~10 TeV) gamma rays arriving from far-distant quasars are used to estimate the extragalactic background light in the universe: The highest-energy rays interact more readily with the background light photons, and thus the density of the background light may be estimated by analyzing the incoming gamma ray spectra. === Gamma spectroscopy === Gamma spectroscopy is the study of the energetic transitions in atomic nuclei, which are generally associated with the absorption or emission of gamma rays. As in optical spectroscopy (see Franck–Condon effect) the absorption of gamma rays by a nucleus is especially likely (i.e., peaks in a "resonance") when the energy of the gamma ray is the same as that of an energy transition in the nucleus. In the case of gamma rays, such a resonance is seen in the technique of Mössbauer spectroscopy. In the Mössbauer effect the narrow resonance absorption for nuclear gamma absorption can be successfully attained by physically immobilizing atomic nuclei in a crystal. The immobilization of nuclei at both ends of a gamma resonance interaction is required so that no gamma energy is lost to the kinetic energy of recoiling nuclei at either the emitting or absorbing end of a gamma transition. Such loss of energy causes gamma ray resonance absorption to fail. However, when emitted gamma rays carry essentially all of the energy of the atomic nuclear de-excitation that produces
them, this energy is also sufficient to excite the same energy state in a second immobilized nucleus of the same type. == Applications == Gamma rays provide information about some of the most energetic phenomena in the universe; however, they are largely absorbed by the Earth's atmosphere. Instruments aboard high-altitude balloon and satellite missions, such as the Fermi Gamma-ray Space Telescope, provide our only view of the universe in gamma rays. Gamma-induced molecular changes can also be used to alter the properties of semi-precious stones, and irradiation is often used to change white topaz into blue topaz. Non-contact industrial sensors commonly use sources of gamma radiation in the refining, mining, chemicals, food, soaps and detergents, and pulp and paper industries, for the measurement of levels, density, and thicknesses. Gamma-ray sensors are also used for measuring fluid levels in the water and oil industries. Typically, these use Co-60 or Cs-137 isotopes as the radiation source. In the US, gamma ray detectors are beginning to be used as part of the Container Security Initiative (CSI). These machines are advertised to be able to scan 30 containers per hour. Gamma radiation is often used to kill living organisms, in a process called irradiation. Applications of this include the sterilization of medical equipment (as an alternative to autoclaves or chemical means), the removal of decay-causing bacteria from many foods and the prevention of the sprouting of fruit and vegetables to maintain freshness and flavor. Despite their cancer-causing properties, gamma rays are also used to treat some types of cancer, since the rays also kill cancer cells. In the procedure called gamma-knife surgery, multiple concentrated beams of gamma rays are directed at the growth in order to kill the cancerous cells. The beams are aimed from different angles to concentrate the radiation on the growth while minimizing
damage to surrounding tissues. Gamma rays are also used for diagnostic purposes in nuclear medicine in imaging techniques. A number of different gamma-emitting radioisotopes are used. For example, in a PET scan a radiolabeled sugar called fluorodeoxyglucose emits positrons that annihilate with electrons, producing pairs of gamma rays that highlight cancer, as cancers often have a higher metabolic rate than the surrounding tissues. The most common gamma emitter used in medical applications is the nuclear isomer technetium-99m, which emits gamma rays in the same energy range as diagnostic X-rays. When this radionuclide tracer is administered to a patient, a gamma camera can be used to form an image of the radioisotope's distribution by detecting the gamma radiation emitted (see also SPECT). Depending on which molecule has been labeled with the tracer, such techniques can be employed to diagnose a wide range of conditions (for example, the spread of cancer to the bones via bone scan). == Health effects == Gamma rays cause damage at a cellular level and are penetrating, causing diffuse damage throughout the body. However, they are less ionising than alpha or beta particles, which are less penetrating. Low levels of gamma rays cause a stochastic health risk, which for radiation dose assessment is defined as the probability of cancer induction and genetic damage. The International Commission on Radiological Protection says "In the low dose range, below about 100 mSv, it is scientifically plausible to assume that the incidence of cancer or heritable effects will rise in direct proportion to an increase in the equivalent dose in the relevant organs and tissues". High doses produce deterministic effects: acute tissue damage that is certain to occur, with a severity that increases with dose. These effects are related to the physical quantity absorbed dose, measured by the unit gray
(Gy). === Effects and body response === When gamma radiation breaks DNA molecules, a cell may be able to repair the damaged genetic material, within limits. However, a study by Rothkamm and Löbrich concerning X-ray radiation has shown that this repair process works well after high-dose exposure but is much slower in the case of a low-dose exposure. Studies have shown that even low-dose gamma radiation may be enough to cause cancer. In one study, mice given continuous human-relevant low-dose gamma radiation showed genotoxic effects after 45 days, with significant increases in chromosomal damage, DNA lesions and phenotypic mutations in the blood cells of the irradiated animals, covering all three types of genotoxic activity. Another study examined the effects of acute ionizing gamma radiation in rats at doses up to 10 Gy, which produced acute oxidative protein damage, DNA damage, cardiac troponin T carbonylation, and long-term cardiomyopathy. === Risk assessment === The natural outdoor exposure in the United Kingdom ranges from 0.1 to 0.5 μSv/h, with significant increases around known nuclear and contaminated sites. Natural exposure to gamma rays is about 1 to 2 mSv per year, and the average total amount of radiation received in one year per inhabitant in the USA is 3.6 mSv. There is a small increase in the dose, due to naturally occurring gamma radiation, around small particles of high atomic number materials in the human body caused by the photoelectric effect. By comparison, the radiation dose from chest radiography (about 0.06 mSv) is a fraction of the annual naturally occurring background radiation dose. A chest CT delivers 5 to 8 mSv. A whole-body PET/CT scan can deliver 14 to 32 mSv depending on the protocol. The dose from fluoroscopy of the stomach is much higher, approximately 50 mSv (14 times
the annual background). An acute full-body equivalent single exposure dose of 1 Sv (1000 mSv), or 1 Gy, will cause mild symptoms of acute radiation sickness, such as nausea and vomiting; and a dose of 2.0–3.5 Sv (2.0–3.5 Gy) causes more severe symptoms (i.e. nausea, diarrhea, hair loss, hemorrhaging, and inability to fight infections), and will cause death in a sizable number of cases (about 10% to 35% without medical treatment). A dose of 3–5 Sv (3–5 Gy) is considered approximately the LD50 (the lethal dose for 50% of an exposed population) for an acute exposure to radiation even with standard medical treatment. A dose higher than 5 Sv (5 Gy) brings an increasing chance of death above 50%. Above 7.5–10 Sv (7.5–10 Gy) to the entire body, even extraordinary treatment, such as bone-marrow transplants, will not prevent the death of the individual exposed (see radiation poisoning). (Doses much larger than this may, however, be delivered to selected parts of the body in the course of radiation therapy.) For low-dose exposure, for example among nuclear workers, who receive an average yearly radiation dose of 19 mSv, the risk of dying from cancer (excluding leukemia) increases by 2 percent. For a dose of 100 mSv, the risk increase is 10 percent. By comparison, the risk of dying from cancer was increased by 32 percent for the survivors of the atomic bombing of Hiroshima and Nagasaki. == Units of measurement and exposure == The following table shows radiation quantities in SI and non-SI units: The measure of the ionizing effect of gamma and X-rays in dry air is called the exposure, for which a legacy unit, the röntgen, was used from 1928. This has been replaced by kerma, now used mainly for instrument calibration purposes but not for received dose effect. The effect of
gamma and other ionizing radiation on living tissue is more closely related to the amount of energy deposited in tissue rather than the ionisation of air, and replacement radiometric units and quantities for radiation protection have been defined and developed from 1953 onwards. These are: The gray (Gy) is the SI unit of absorbed dose, which is the amount of radiation energy deposited in the irradiated material. For gamma radiation this is numerically equivalent to the equivalent dose measured by the sievert, which indicates the stochastic biological effect of low levels of radiation on human tissue. The radiation weighting conversion factor from absorbed dose to equivalent dose is 1 for gamma, whereas alpha particles have a factor of 20, reflecting their greater ionising effect on tissue. The rad is the deprecated CGS unit for absorbed dose and the rem is the deprecated CGS unit of equivalent dose, used mainly in the USA. == Distinction from X-rays == The conventional distinction between X-rays and gamma rays has changed over time. Originally, the electromagnetic radiation emitted by X-ray tubes almost invariably had a longer wavelength than the radiation (gamma rays) emitted by radioactive nuclei. Older literature distinguished between X- and gamma radiation on the basis of wavelength, with radiation shorter than some arbitrary wavelength, such as 10⁻¹¹ m, defined as gamma rays. Since the energy of photons is proportional to their frequency and inversely proportional to their wavelength, this past distinction between X-rays and gamma rays can also be thought of in terms of energy, with gamma rays considered to be higher energy electromagnetic radiation than X-rays. However, since current artificial sources are now able to duplicate any electromagnetic radiation that originates in the nucleus, as well as far higher energies, the wavelengths characteristic of radioactive gamma ray sources vs. other types
now completely overlap. Thus, gamma rays are now usually distinguished by their origin: X-rays are emitted by definition by electrons outside the nucleus, while gamma rays are emitted by the nucleus. Exceptions to this convention occur in astronomy, where gamma decay is seen in the afterglow of certain supernovae, but radiation from high energy processes known to involve sources other than radioactive decay is still classed as gamma radiation. For example, modern high-energy X-rays produced by linear accelerators for megavoltage treatment in cancer often have higher energy (4 to 25 MeV) than do most classical gamma rays produced by nuclear gamma decay. One of the most common gamma ray emitting isotopes used in diagnostic nuclear medicine, technetium-99m, produces gamma radiation of the same energy (140 keV) as that produced by diagnostic X-ray machines, but of significantly lower energy than therapeutic photons from linear particle accelerators. In the medical community today, the convention that radiation produced by nuclear decay is the only type referred to as "gamma" radiation is still respected. Due to this broad overlap in energy ranges, in physics the two types of electromagnetic radiation are now often defined by their origin: X-rays are emitted by electrons (either in orbitals outside of the nucleus, or while being accelerated to produce bremsstrahlung-type radiation), while gamma rays are emitted by the nucleus or by means of other particle decays or annihilation events. There is no lower limit to the energy of photons produced by nuclear reactions, and thus ultraviolet or lower energy photons produced by these processes would also be defined as "gamma rays" (indeed, this happens for the isomeric transition of the extremely low-energy isomer 229mTh). The only naming convention that is still universally respected is the rule that electromagnetic radiation that is known to be of atomic nuclear origin
is always referred to as "gamma rays", and never as X-rays. However, in physics and astronomy, the converse convention (that all gamma rays are considered to be of nuclear origin) is frequently violated. In astronomy, higher energy gamma and X-rays are defined by energy, since the processes that produce them may be uncertain and photon energy, not origin, determines the astronomical detectors needed. High-energy photons occur in nature that are known to be produced by processes other than nuclear decay but are still referred to as gamma radiation. An example is "gamma rays" from lightning discharges at 10 to 20 MeV, known to be produced by the bremsstrahlung mechanism. Another example is gamma-ray bursts, now known to be produced from processes too powerful to involve simple collections of atoms undergoing radioactive decay. This is part and parcel of the general realization that many gamma rays produced in astronomical processes result not from radioactive decay or particle annihilation, but rather from non-radioactive processes similar to those that produce X-rays. Although the gamma rays of astronomy often come from non-radioactive events, a few gamma rays in astronomy are specifically known to originate from gamma decay of nuclei (as demonstrated by their spectra and emission half-life). A classic example is that of supernova SN 1987A, which emits an "afterglow" of gamma-ray photons from the decay of newly made radioactive nickel-56 and cobalt-56. Most gamma rays in astronomy, however, arise by other mechanisms. == See also == Annihilation Galactic Center GeV excess Gaseous ionization detectors Very-high-energy gamma ray Ultra-high-energy gamma ray == Explanatory notes == == References == == External links == Basic reference on several types of radiation Archived 2018-04-25 at the Wayback Machine Radiation Q & A GCSE information Radiation information Archived 2010-06-11 at the Wayback Machine Gamma-ray bursts The Lund/LBNL Nuclear
Data Search – Contains information on gamma-ray energies from isotopes. Mapping soils with airborne detectors Archived 2010-11-11 at the Wayback Machine The LIVEChart of Nuclides – IAEA with filter on gamma-ray energy Health Physics Society Public Education Website
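The exponential attenuation law I(x) = I₀·e^(−μx) and the half-value-layer relationship described in the Penetration of matter and Matter interaction sections above can be illustrated with a short numerical sketch. The absorption coefficient below is a hypothetical value, chosen only so that the half-value layer comes out to 1 cm, matching the lead example in the text; it is not a measured constant for any real material or photon energy.

```python
import math

def intensity(i0, mu, x):
    """Attenuated intensity I(x) = I0 * exp(-mu * x).

    i0 is the incident intensity, mu the linear absorption
    coefficient in cm^-1, and x the absorber thickness in cm.
    """
    return i0 * math.exp(-mu * x)

def half_value_layer(mu):
    """Thickness that halves the intensity: HVL = ln(2) / mu."""
    return math.log(2) / mu

# Hypothetical coefficient chosen so that the HVL is exactly 1 cm,
# as in the article's lead-shielding example (assumption, not data).
mu_example = math.log(2) / 1.0  # cm^-1

print(half_value_layer(mu_example))          # HVL in cm
print(intensity(100.0, mu_example, 1.0))     # one HVL halves the beam
print(intensity(100.0, mu_example, 3.0))     # three HVLs leave 1/8
```

Because each half-value layer multiplies the intensity by 1/2, n layers leave a fraction 2^(−n); this is why the text can quote equivalent thicknesses of lead, concrete, and soil for the same 50% reduction.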
A transitional fossil is any fossilized remains of a life form that exhibits traits common to both an ancestral group and its derived descendant group. This is especially important where the descendant group is sharply differentiated by gross anatomy and mode of living from the ancestral group. These fossils serve as a reminder that taxonomic divisions are human constructs that have been imposed in hindsight on a continuum of variation. Because of the incompleteness of the fossil record, there is usually no way to know exactly how close a transitional fossil is to the point of divergence. Therefore, it cannot be assumed that transitional fossils are direct ancestors of more recent groups, though they are frequently used as models for such ancestors. In 1859, when Charles Darwin's On the Origin of Species was first published, the fossil record was poorly known. Darwin described the perceived lack of transitional fossils as "the most obvious and gravest objection which can be urged against my theory," but he explained it by relating it to the extreme imperfection of the geological record. He noted the limited collections available at the time but described the available information as showing patterns that followed from his theory of descent with modification through natural selection. Indeed, Archaeopteryx was discovered just two years later, in 1861, and represents a classic transitional form between earlier, non-avian dinosaurs and birds. Many more transitional fossils have been discovered since then, and there is now abundant evidence of how all classes of vertebrates are related, including many transitional fossils. Specific examples of class-level transitions are: tetrapods and fish, birds and dinosaurs, and mammals and "mammal-like reptiles". The term "missing link" has been used extensively in popular writings on human evolution to refer to a perceived gap in the hominid evolutionary record. It is
|
{
"page_id": 331755,
"source": null,
"title": "Transitional fossil"
}
|
most commonly used to refer to any new transitional fossil finds. Scientists, however, do not use the term, as it refers to a pre-evolutionary view of nature. == Evolutionary and phylogenetic taxonomy == === Transitions in phylogenetic nomenclature === In evolutionary taxonomy, the prevailing form of taxonomy during much of the 20th century and still used in non-specialist textbooks, taxa based on morphological similarity are often drawn as "bubbles" or "spindles" branching off from each other, forming evolutionary trees. Transitional forms are seen as falling between the various groups in terms of anatomy, having a mixture of characteristics from inside and outside the newly branched clade. With the establishment of cladistics in the 1990s, relationships commonly came to be expressed in cladograms that illustrate the branching of the evolutionary lineages in stick-like figures. The different so-called "natural" or "monophyletic" groups form nested units, and only these are given phylogenetic names. While in traditional classification tetrapods and fish are seen as two different groups, phylogenetically tetrapods are considered a branch of fish. Thus, with cladistics there is no longer a transition between established groups, and the term "transitional fossils" is a misnomer. Differentiation occurs within groups, represented as branches in the cladogram. In a cladistic context, transitional organisms can be seen as representing early examples of a branch, where not all of the traits typical of the previously known descendants on that branch have yet evolved. Such early representatives of a group are usually termed "basal taxa" or "sister taxa," depending on whether the fossil organism belongs to the daughter clade or not. === Transitional versus ancestral === A source of confusion is the notion that a transitional form between two different taxonomic groups must be a direct ancestor of one or both groups. The difficulty is exacerbated by the fact
that one of the goals of evolutionary taxonomy is to identify taxa that were ancestors of other taxa. However, because evolution is a branching process that produces a complex bush pattern of related species rather than a linear process producing a ladder-like progression, and because of the incompleteness of the fossil record, it is unlikely that any particular form represented in the fossil record is a direct ancestor of any other. Cladistics deemphasizes the concept of one taxonomic group being an ancestor of another, and instead emphasizes the identification of sister taxa that share a more recent common ancestor with one another than they do with other groups. There are a few exceptional cases, such as some marine plankton microfossils, where the fossil record is complete enough to suggest with confidence that certain fossils represent a population that was actually ancestral to a later population of a different species. But, in general, transitional fossils are considered to have features that illustrate the transitional anatomical features of actual common ancestors of different taxa, rather than to be actual ancestors. == Prominent examples == === Archaeopteryx === Archaeopteryx is a genus of theropod dinosaur closely related to the birds. Since the late 19th century, it has been accepted by palaeontologists, and celebrated in lay reference works, as being the oldest known bird, though a study in 2011 has cast doubt on this assessment, suggesting instead that it is a non-avialan dinosaur closely related to the origin of birds. It lived in what is now southern Germany in the Late Jurassic period around 150 million years ago, when Europe was an archipelago in a shallow warm tropical sea, much closer to the equator than it is now. Similar in shape to a European magpie, with the largest individuals possibly attaining the size of
a raven, Archaeopteryx could grow to about 0.5 metres (1.6 ft) in length. Despite its small size, broad wings, and inferred ability to fly or glide, Archaeopteryx has more in common with other small Mesozoic dinosaurs than it does with modern birds. In particular, it shares the following features with the deinonychosaurs (dromaeosaurs and troodontids): jaws with sharp teeth, three fingers with claws, a long bony tail, hyperextensible second toes ("killing claw"), feathers (which suggest homeothermy), and various skeletal features. These features make Archaeopteryx a clear candidate for a transitional fossil between dinosaurs and birds, making it important in the study both of dinosaurs and of the origin of birds. The first complete specimen was announced in 1861, and ten more Archaeopteryx fossils have been found since then. Most of the eleven known fossils include impressions of feathers—among the oldest direct evidence of such structures. Moreover, because these feathers take the advanced form of flight feathers, Archaeopteryx fossils are evidence that feathers began to evolve before the Late Jurassic. === Australopithecus afarensis === The hominid Australopithecus afarensis represents an evolutionary transition between modern bipedal humans and their quadrupedal ape ancestors. A number of traits of the A. afarensis skeleton strongly reflect bipedalism, to the extent that some researchers have suggested that bipedality evolved long before A. afarensis. In overall anatomy, the pelvis is far more human-like than ape-like. The iliac blades are short and wide, the sacrum is wide and positioned directly behind the hip joint, and there is clear evidence of a strong attachment for the knee extensors, implying an upright posture. While the pelvis is not entirely like that of a human (being markedly wide, or flared, with laterally orientated iliac blades), these features point to a structure radically remodelled to accommodate a significant degree of bipedalism.
The femur angles in toward the knee from the hip. This trait allows the foot to fall closer to the midline of the body, and strongly indicates habitual bipedal locomotion. Present-day humans, orangutans and spider monkeys possess this same feature. The feet feature adducted big toes, making it difficult if not impossible to grasp branches with the hindlimbs. Besides locomotion, A. afarensis also had a slightly larger brain than a modern chimpanzee (the closest living relative of humans) and had teeth that were more human than ape-like. === Pakicetids, Ambulocetus === The cetaceans (whales, dolphins and porpoises) are marine mammal descendants of land mammals. The pakicetids are an extinct family of hoofed mammals that are the earliest whales, whose closest sister group is Indohyus from the family Raoellidae. They lived in the Early Eocene, around 53 million years ago. Their fossils were first discovered in North Pakistan in 1979, at a river not far from the shores of the former Tethys Sea. Pakicetids could hear under water, using enhanced bone conduction, rather than depending on tympanic membranes like most land mammals. This arrangement does not give directional hearing under water. Ambulocetus natans, which lived about 49 million years ago, was discovered in Pakistan in 1994. It was probably amphibious, and looked like a crocodile. In the Eocene, ambulocetids inhabited the bays and estuaries of the Tethys Ocean in northern Pakistan. The fossils of ambulocetids are always found in near-shore shallow marine deposits associated with abundant marine plant fossils and littoral molluscs. Although they are found only in marine deposits, their oxygen isotope values indicate that they consumed water with a range of degrees of salinity, some specimens showing no evidence of sea water consumption and others none of fresh water consumption at the time when their teeth were fossilized. It
is clear that ambulocetids tolerated a wide range of salt concentrations. Their diet probably included land animals that approached water for drinking, or freshwater aquatic organisms that lived in the river. Hence, ambulocetids represent the transitional phase of cetacean ancestors between freshwater and marine habitats. === Tiktaalik === Tiktaalik is a genus of extinct sarcopterygian (lobe-finned fish) from the Late Devonian period, with many features akin to those of tetrapods (four-legged animals). It is one of several lines of ancient sarcopterygians to develop adaptations to the oxygen-poor shallow water habitats of its time—adaptations that led to the evolution of tetrapods. Well-preserved fossils were found in 2004 on Ellesmere Island in Nunavut, Canada. Tiktaalik lived approximately 375 million years ago. Paleontologists suggest that it is representative of the transition between non-tetrapod vertebrates such as Panderichthys, known from fossils 380 million years old, and early tetrapods such as Acanthostega and Ichthyostega, known from fossils about 365 million years old. Its mixture of primitive fish and derived tetrapod characteristics led one of its discoverers, Neil Shubin, to characterize Tiktaalik as a "fishapod." Unlike many previous, more fish-like transitional fossils, the "fins" of Tiktaalik have basic wrist bones and simple rays reminiscent of fingers. They may have been weight-bearing. Like all modern tetrapods, it had rib bones, a mobile neck with a separate pectoral girdle, and lungs, though it had the gills, scales, and fins of a fish. However, in a 2008 paper Boisvert et al. noted that Panderichthys, due to its more derived distal portion, might be closer to tetrapods than Tiktaalik, which might have independently developed similarities to tetrapods by convergent evolution.
Tetrapod footprints found in Poland and reported in Nature in January 2010 were "securely dated" at 10 million years older than the oldest known elpistostegids (of which
Tiktaalik is an example), implying that animals like Tiktaalik, possessing features that evolved around 400 million years ago, were "late-surviving relics rather than direct transitional forms, and they highlight just how little we know of the earliest history of land vertebrates." === Amphistium === Pleuronectiformes (flatfish) are an order of ray-finned fish. The most obvious characteristic of the modern flatfish is their asymmetry, with both eyes on the same side of the head in the adult fish. In some families the eyes are always on the right side of the body (dextral or right-eyed flatfish) and in others they are always on the left (sinistral or left-eyed flatfish). The primitive spiny turbots include equal numbers of right- and left-eyed individuals, and are generally less asymmetrical than the other families. Other distinguishing features of the order are the presence of protrusible eyes, another adaptation to living on the seabed (benthos), and the extension of the dorsal fin onto the head. Amphistium is a 50-million-year-old fossil fish identified as an early relative of the flatfish, and as a transitional fossil. In Amphistium, the transition from the typical symmetric head of a vertebrate is incomplete, with one eye placed near the top-center of the head. Paleontologists concluded that "the change happened gradually, in a way consistent with evolution via natural selection—not suddenly, as researchers once had little choice but to believe." Amphistium is among the many fossil fish species known from the Monte Bolca Lagerstätte of Lutetian Italy. Heteronectes is a related and very similar fossil from slightly earlier strata in France. === Runcaria === A Middle Devonian precursor to seed plants has been identified from Belgium, predating the earliest seed plants by about 20 million years. Runcaria, small and radially symmetrical, is an integumented megasporangium surrounded by a cupule. The megasporangium bears
an unopened distal extension protruding above the multilobed integument. It is suspected that the extension was involved in anemophilous pollination. Runcaria sheds new light on the sequence of character acquisition leading to the seed, having all the qualities of seed plants except for a solid seed coat and a system to guide the pollen to the seed. == Fossil record == Not every transitional form appears in the fossil record, because the fossil record is not complete. Organisms are only rarely preserved as fossils in the best of circumstances, and only a fraction of such fossils have been discovered. Paleontologist Donald Prothero noted that this is illustrated by the fact that the number of species known through the fossil record was less than 5% of the number of known living species, suggesting that the number of species known through fossils must be far less than 1% of all the species that have ever lived. Because of the specialized and rare circumstances required for a biological structure to fossilize, logic dictates that known fossils represent only a small percentage of all life-forms that ever existed—and that each discovery represents only a snapshot of evolution. The transition itself can only be illustrated and corroborated by transitional fossils, which never demonstrate an exact half-way point between clearly divergent forms. The fossil record is very uneven and, with few exceptions, is heavily slanted toward organisms with hard parts, leaving most groups of soft-bodied organisms with little to no fossil record. The groups considered to have a good fossil record, including a number of transitional fossils between traditional groups, are the vertebrates, the echinoderms, the brachiopods and some groups of arthropods. == History == === Post-Darwin === The idea that animal and plant species were not constant, but changed over time, was suggested as far
back as the 18th century. Darwin's On the Origin of Species, published in 1859, gave it a firm scientific basis. A weakness of Darwin's work, however, was the lack of palaeontological evidence, as pointed out by Darwin himself. While it is easy to imagine natural selection producing the variation seen within genera and families, the transmutation between the higher categories was harder to imagine. The dramatic find of the London specimen of Archaeopteryx in 1861, only two years after the publication of Darwin's work, offered for the first time a link between the class of the highly derived birds, and that of the more basal reptiles. In a letter to Darwin, the palaeontologist Hugh Falconer wrote: Had the Solnhofen quarries been commissioned—by august command—to turn out a strange being à la Darwin—it could not have executed the behest more handsomely—than in the Archaeopteryx. Thus, transitional fossils like Archaeopteryx came to be seen as not only corroborating Darwin's theory, but as icons of evolution in their own right. For example, the Swedish encyclopedic dictionary Nordisk familjebok of 1904 showed an inaccurate Archaeopteryx reconstruction (see illustration) of the fossil, "ett af de betydelsefullaste paleontologiska fynd, som någonsin gjorts" ("one of the most significant paleontological discoveries ever made"). === The rise of plants === Transitional fossils are not only those of animals. With the increasing mapping of the divisions of plants at the beginning of the 20th century, the search began for the ancestor of the vascular plants. In 1917, Robert Kidston and William Henry Lang found the remains of an extremely primitive plant in the Rhynie chert in Aberdeenshire, Scotland, and named it Rhynia. The Rhynia plant was small and stick-like, with simple dichotomously branching stems without leaves, each tipped by a sporangium. The simple form echoes that of the sporophyte of
mosses, and it has been shown that Rhynia had an alternation of generations, with a corresponding gametophyte in the form of crowded tufts of diminutive stems only a few millimetres in height. Rhynia thus falls midway between mosses and early vascular plants like ferns and clubmosses. From a carpet of moss-like gametophytes, the larger Rhynia sporophytes grew much like simple clubmosses, spreading by means of horizontally growing stems that put out rhizoids to anchor the plant to the substrate. The unusual mix of moss-like and vascular traits and the extreme structural simplicity of the plant had huge implications for botanical understanding. == Missing links == The idea of all living things being linked through some sort of transmutation process predates Darwin's theory of evolution. Jean-Baptiste Lamarck envisioned that life was generated constantly in the form of the simplest creatures, and strove towards complexity and perfection (i.e. humans) through a progressive series of lower forms. In his view, lower animals were simply newcomers on the evolutionary scene. After On the Origin of Species, the idea of "lower animals" representing earlier stages in evolution lingered, as demonstrated in Ernst Haeckel's figure of the human pedigree. While the vertebrates were then seen as forming a sort of evolutionary sequence, the various classes were distinct, the undiscovered intermediate forms being called "missing links." The term was first used in a scientific context by Charles Lyell in the third edition (1851) of his book Elements of Geology in relation to missing parts of the geological column, but it was popularized in its present meaning by its appearance on page xi of his book Geological Evidences of the Antiquity of Man of 1863. By that time, it was generally thought that the end of the last glacial period marked the first appearance of humanity; Lyell drew on new
findings in his Antiquity of Man to put the origin of human beings much further back. Lyell wrote that it remained a profound mystery how the huge gulf between man and beast could be bridged. Lyell's vivid writing fired the public imagination, inspiring Jules Verne's Journey to the Center of the Earth (1864) and Louis Figuier's 1867 second edition of La Terre avant le déluge ("Earth before the Flood"), which included dramatic illustrations of savage men and women wearing animal skins and wielding stone axes, in place of the Garden of Eden shown in the 1863 edition. The search for a fossil showing transitional traits between apes and humans, however, was fruitless until the young Dutch geologist Eugène Dubois found a skullcap, a molar and a femur on the banks of Solo River, Java in 1891. The find combined a low, ape-like skull roof with a brain estimated at around 1000 cc, midway between that of a chimpanzee and an adult human. The single molar was larger than any modern human tooth, but the femur was long and straight, with a knee angle showing that "Java Man" had walked upright. Given the name Pithecanthropus erectus ("erect ape-man"), it became the first in what is now a long list of human evolution fossils. At the time it was hailed by many as the "missing link," helping set the term as primarily used for human fossils, though it is sometimes used for other intermediates, like the dinosaur-bird intermediary Archaeopteryx. While "missing link" is still a popular term, well-recognized by the public and often used in the popular media, the term is avoided in scientific publications. Some bloggers have called it "inappropriate"; both because the links are no longer "missing", and because human evolution is no longer believed to have occurred in terms