| text | source |
|---|---|
Oceanography In 1881 the geographer John Francon Williams published a seminal book, "Geography of the Oceans". Between 1907 and 1911 Otto Krümmel published the "Handbuch der Ozeanographie", which became influential in awakening public interest in oceanography. The four-month 1910 North Atlantic expedition headed by John Murray and Johan Hjort was the most ambitious oceanographic and marine zoological research project ever mounted until then, and led to the classic 1912 book "The Depths of the Ocean". The first acoustic measurement of sea depth was made in 1914. Between 1925 and 1927 the "Meteor" expedition gathered 70,000 ocean depth measurements using an echo sounder, surveying the Mid-Atlantic Ridge. Sverdrup, Johnson and Fleming published "The Oceans" in 1942, which was a major landmark. "The Sea" (in three volumes, covering physical oceanography, seawater and geology) edited by M.N. Hill was published in 1962, while Rhodes Fairbridge's "Encyclopedia of Oceanography" was published in 1966. The Great Global Rift, running along the Mid-Atlantic Ridge, was discovered by Maurice Ewing and Bruce Heezen in 1953; in 1954 a mountain range under the Arctic Ocean was found by the Arctic Institute of the USSR. The theory of seafloor spreading was developed in 1960 by Harry Hammond Hess. The Deep Sea Drilling Project started in 1966. Deep-sea vents were discovered in 1977 by Jack Corliss and Robert Ballard in the submersible "Alvin". | https://en.wikipedia.org/wiki?curid=44044 |
Oceanography In the 1950s, Auguste Piccard invented the bathyscaphe and used it to investigate the ocean's depths. The United States nuclear submarine "Nautilus" made the first journey under the ice to the North Pole in 1958. In 1962 the FLIP (Floating Instrument Platform), a spar buoy, was first deployed. From the 1970s, there has been much emphasis on the application of large scale computers to oceanography to allow numerical predictions of ocean conditions and as a part of overall environmental change prediction. An oceanographic buoy array was established in the Pacific to allow prediction of El Niño events. 1990 saw the start of the World Ocean Circulation Experiment (WOCE), which continued until 2002. Geosat seafloor mapping data became available in 1995. In recent years, studies have advanced knowledge of ocean acidification, ocean heat content, ocean currents, the El Niño phenomenon, mapping of methane hydrate deposits, the carbon cycle, coastal erosion, weathering, and climate feedbacks in regard to climate change interactions. Study of the oceans is linked to understanding global climate changes, potential global warming and related biosphere concerns. The atmosphere and ocean are linked because of evaporation and precipitation as well as thermal flux (and solar insolation). Wind stress is a major driver of ocean currents, while the ocean is a sink for atmospheric carbon dioxide. All these factors relate to the ocean's biogeochemical setup | https://en.wikipedia.org/wiki?curid=44044 |
Oceanography Further understanding of the world's oceans permits scientists to better predict changes in weather and climate, which in turn guides more reliable use of Earth's resources. The study of oceanography is divided into these four branches: Biological oceanography investigates the ecology of marine organisms in the context of the physical, chemical and geological characteristics of their ocean environment and the biology of individual marine organisms. Chemical oceanography is the study of the chemistry of the ocean. Whereas chemical oceanography is primarily occupied with the study and understanding of seawater properties and its changes, ocean chemistry focuses primarily on the geochemical cycles. The following is a central topic investigated by chemical oceanography. Ocean acidification describes the decrease in ocean pH that is caused by anthropogenic carbon dioxide (CO2) emissions into the atmosphere. Seawater is slightly alkaline and had a preindustrial pH of about 8.2. More recently, anthropogenic activities have steadily increased the carbon dioxide content of the atmosphere; about 30–40% of the added CO2 is absorbed by the oceans, forming carbonic acid and lowering the pH (now below 8.1) through ocean acidification. The pH is expected to reach 7.7 by the year 2100. An important element for the skeletons of marine animals is calcium, but calcium carbonate becomes more soluble with pressure, so carbonate shells and skeletons dissolve below the carbonate compensation depth | https://en.wikipedia.org/wiki?curid=44044 |
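Because pH is a base-10 logarithmic scale, the figures quoted above imply a large relative change in hydrogen-ion concentration: the projected drop from a preindustrial 8.2 to 7.7 by 2100 corresponds to roughly a threefold increase in [H+]. A minimal sketch of the arithmetic (Python; the helper name is our own):

```python
# pH = -log10([H+]), so the ratio of hydrogen-ion concentrations
# between two pH values is 10 raised to their difference.

def h_ion_increase(ph_before: float, ph_after: float) -> float:
    """Factor by which [H+] rises when pH falls from ph_before to ph_after."""
    return 10 ** (ph_before - ph_after)

print(h_ion_increase(8.2, 8.1))  # ~1.26x: preindustrial pH to today's
print(h_ion_increase(8.2, 7.7))  # ~3.16x: preindustrial pH to the 2100 projection
```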
Oceanography Calcium carbonate becomes more soluble at lower pH, so ocean acidification is likely to affect marine organisms with calcareous shells, such as oysters, clams, sea urchins and corals, and the carbonate compensation depth will rise closer to the sea surface. Affected planktonic organisms will include pteropods, coccolithophorids and foraminifera, all important in the food chain. In tropical regions, corals are likely to be severely affected as they become less able to build their calcium carbonate skeletons, in turn adversely impacting other reef dwellers. The current rate of ocean chemistry change seems to be unprecedented in Earth's geological history, making it unclear how well marine ecosystems will adapt to the shifting conditions of the near future. Of particular concern is the manner in which the combination of acidification with the expected additional stressors of higher temperatures and lower oxygen levels will impact the seas. Geological oceanography is the study of the geology of the ocean floor including plate tectonics and paleoceanography. Physical oceanography studies the ocean's physical attributes including temperature-salinity structure, mixing, surface waves, internal waves, surface tides, internal tides, and currents. The following are central topics investigated by physical oceanography. Since the earliest ocean expeditions, a major interest has been the study of ocean currents and the measurement of temperature | https://en.wikipedia.org/wiki?curid=44044 |
Oceanography The tides, the Coriolis effect, changes in the direction and strength of wind, salinity and temperature are the main factors determining ocean currents. The thermohaline circulation (THC) ("thermo-" referring to temperature and "-haline" referring to salt content) connects the ocean basins and is primarily dependent on the density of sea water. It is becoming more common to refer to this system as the 'meridional overturning circulation' because it more accurately accounts for other driving factors beyond temperature and salinity. Oceanic heat content (OHC) refers to the heat stored in the ocean. Changes in ocean heat content play an important role in sea level rise because of thermal expansion. Ocean warming accounted for about 90% of the energy accumulation from global warming between 1971 and 2010. Paleoceanography is the study of the history of the oceans in the geologic past with regard to circulation, chemistry, biology, geology and patterns of sedimentation and biological productivity. Paleoceanographic studies using environmental models and different proxies enable the scientific community to assess the role of oceanic processes in the global climate by reconstructing past climate at various intervals. Paleoceanographic research is also intimately tied to palaeoclimatology. The first international organization of oceanography was created in 1902 as the International Council for the Exploration of the Sea | https://en.wikipedia.org/wiki?curid=44044 |
Oceanography In 1903 the Scripps Institution of Oceanography was founded, followed by Woods Hole Oceanographic Institution in 1930, Virginia Institute of Marine Science in 1938, and later the Lamont-Doherty Earth Observatory at Columbia University, and the School of Oceanography at the University of Washington. In Britain, the National Oceanography Centre (an institute of the Natural Environment Research Council) is the successor to the UK's Institute of Oceanographic Sciences. In Australia, CSIRO Marine and Atmospheric Research (CMAR) is a leading centre. In 1921 the International Hydrographic Bureau (IHB) was formed in Monaco. | https://en.wikipedia.org/wiki?curid=44044 |
Biomimetics or biomimicry is the imitation of the models, systems, and elements of nature for the purpose of solving complex human problems. The terms "biomimetics" and "biomimicry" derive from the Ancient Greek βίος ("bios"), life, and μίμησις ("mīmēsis"), imitation, from μιμεῖσθαι ("mīmeisthai"), to imitate, from μῖμος ("mimos"), actor. A closely related field is bionics. Living organisms have evolved well-adapted structures and materials over geological time through natural selection. Biomimetics has given rise to new technologies inspired by biological solutions at macro and nanoscales. Humans have looked at nature for answers to problems throughout our existence. Nature has solved engineering problems such as self-healing abilities, environmental exposure tolerance and resistance, hydrophobicity, self-assembly, and harnessing solar energy. One of the early examples of biomimicry was the study of birds to enable human flight. Although never successful in creating a "flying machine", Leonardo da Vinci (1452–1519) was a keen observer of the anatomy and flight of birds, and made numerous notes and sketches on his observations as well as sketches of "flying machines". The Wright Brothers, who succeeded in flying the first heavier-than-air aircraft in 1903, allegedly derived inspiration from observations of pigeons in flight. During the 1950s the American biophysicist and polymath Otto Schmitt developed the concept of "biomimetics" | https://en.wikipedia.org/wiki?curid=45784 |
Biomimetics During his doctoral research he developed the Schmitt trigger by studying the nerves in squid, attempting to engineer a device that replicated the biological system of nerve propagation. He continued to focus on devices that mimic natural systems, and by 1957 he had perceived a converse to the standard view of biophysics at that time, a view he would come to call biomimetics. In 1960 Jack E. Steele coined a similar term, "bionics", at Wright-Patterson Air Force Base in Dayton, Ohio, where Otto Schmitt also worked. Steele defined bionics as "the science of systems which have some function copied from nature, or which represent characteristics of natural systems or their analogues". Schmitt expanded on the concept at a later meeting in 1963. In 1969, Schmitt used the term "biomimetic" in the title of one of his papers, and by 1974 it had found its way into Webster's Dictionary; "bionics" had entered the same dictionary earlier, in 1960, as "a science concerned with the application of data about the functioning of biological systems to the solution of engineering problems". Bionic took on a different connotation when Martin Caidin referenced Jack Steele and his work in the novel "Cyborg", which later resulted in the 1974 television series "The Six Million Dollar Man" and its spin-offs. The term bionic then became associated with "the use of electronically operated artificial body parts" and "having ordinary human powers increased by or as if by the aid of such devices" | https://en.wikipedia.org/wiki?curid=45784 |
Biomimetics Because the term "bionic" took on the implication of supernatural strength, the scientific community in English-speaking countries largely abandoned it. The term "biomimicry" appeared as early as 1982. Biomimicry was popularized by scientist and author Janine Benyus in her 1997 book "Biomimicry: Innovation Inspired by Nature". Biomimicry is defined in the book as a "new science that studies nature's models and then imitates or takes inspiration from these designs and processes to solve human problems". Benyus suggests looking to Nature as a "Model, Measure, and Mentor" and emphasizes sustainability as an objective of biomimicry. One of the latest examples of biomimicry has been created by Johannes-Paul Fladerer and Ernst Kurzmann with their description of "managemANT". This term (a combination of the words "management" and "ant") describes the use of the behavioural strategies of ants in economic and management strategies. Biomimetics could in principle be applied in many fields. Because of the diversity and complexity of biological systems, the number of features that might be imitated is large. Biomimetic applications are at various stages of development, from technologies that might become commercially usable to prototypes. Murray's law, which in conventional form determines the optimum diameter of blood vessels, has been re-derived to provide simple equations for the pipe or tube diameter which gives a minimum-mass engineering system. Aircraft wing design and flight techniques are being inspired by birds and bats | https://en.wikipedia.org/wiki?curid=45784 |
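Murray's law in its conventional form can be stated compactly: at a branch point, the cube of the parent vessel's diameter equals the sum of the cubes of the daughter diameters. A small numeric check (Python; variable names are our own) for a symmetric bifurcation:

```python
# Murray's law (conventional form): d_parent**3 == d_1**3 + d_2**3 at a
# vascular branch point. For a symmetric bifurcation each daughter vessel
# has diameter 2**(-1/3), i.e. about 0.794, times the parent diameter.

d_parent = 1.0
d_child = d_parent / 2 ** (1 / 3)
assert abs(d_parent ** 3 - 2 * d_child ** 3) < 1e-12
print(f"daughter/parent diameter ratio: {d_child:.3f}")  # 0.794
```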
Biomimetics Biorobots based on the physiology and methods of locomotion of animals include BionicKangaroo, which moves like a kangaroo, saving energy from one jump and transferring it to its next jump. Kamigami Robots, a children's toy, mimic cockroach locomotion to run quickly and efficiently over indoor and outdoor surfaces. Researchers studied the ability of termites to maintain virtually constant temperature and humidity in their mounds in Africa despite outside temperatures that vary from 1.5 °C to 40 °C (35 °F to 104 °F). Researchers initially scanned a termite mound and created 3-D images of the mound structure, which revealed construction that could influence human building design. The Eastgate Centre, a mid-rise office complex in Harare, Zimbabwe, stays cool without air conditioning and uses only 10% of the energy of a conventional building of the same size. Researchers at the Sapienza University of Rome were inspired by the natural ventilation in termite mounds and designed a double façade that significantly cuts down on over-lit areas in a building. Scientists have imitated the porous nature of mound walls by designing a façade with double panels that was able to reduce heat gained by radiation and increase heat loss by convection in the cavity between the two panels. The overall cooling load on the building's energy consumption was reduced by 15%. A similar inspiration was drawn from the porous walls of termite mounds to design a naturally ventilated façade with a small ventilation gap | https://en.wikipedia.org/wiki?curid=45784 |
Biomimetics This design of façade is able to induce air flow due to the Venturi effect and continuously circulates rising air in the ventilation slot. Significant transfer of heat between the building's external wall surface and the air flowing over it was observed. The design is coupled with greening of the façade. The green wall facilitates additional natural cooling via evaporation, respiration and transpiration in plants. The damp plant substrate further supports the cooling effect. Scientists at Shanghai University were able to replicate the complex microstructure of the clay conduit network in the mound to mimic the excellent humidity control in mounds. They proposed a porous humidity control material (HCM) using sepiolite and calcium chloride with a water vapor adsorption-desorption content of 550 grams per square meter. Calcium chloride is a desiccant and improves the water vapor adsorption-desorption property of the bio-HCM. The proposed bio-HCM has a regime of interfiber mesopores which acts as a mini reservoir. The flexural strength of the proposed material was estimated to be 10.3 MPa using computational simulations. In structural engineering, the Swiss Federal Institute of Technology (EPFL) has incorporated biomimetic characteristics in an adaptive deployable "tensegrity" bridge. The bridge can carry out self-diagnosis and self-repair. The arrangement of leaves on a plant has been adapted for better solar power collection | https://en.wikipedia.org/wiki?curid=45784 |
Biomimetics Analysis of the elastic deformation that occurs when a pollinator lands on the sheath-like perch part of the flower "Strelitzia reginae" (known as the Bird-of-Paradise flower) has inspired architects and scientists from the University of Freiburg and University of Stuttgart to create hingeless shading systems that can react to their environment. These bio-inspired products are sold under the name Flectofin. Other hingeless bioinspired systems include Flectofold, which was inspired by the trapping system developed by the carnivorous plant "Aldrovanda vesiculosa". There is a great need for new structural materials that are lightweight but offer exceptional combinations of stiffness, strength, and toughness. Such materials would need to be manufactured into bulk materials with complex shapes at high volume and low cost and would serve a variety of fields such as construction, transportation, energy storage and conversion. In a classic design problem, strength and toughness tend to be mutually exclusive, i.e., strong materials are brittle and tough materials are weak. However, natural materials with complex and hierarchical material gradients that span from nano- to macro-scales are both strong and tough. Generally, most natural materials utilize limited chemical components but complex material architectures that give rise to exceptional mechanical properties | https://en.wikipedia.org/wiki?curid=45784 |
Biomimetics Understanding the highly diverse and multifunctional biological materials and discovering approaches to replicate such structures will lead to advanced and more efficient technologies. Bone, nacre (abalone shell), teeth, the dactyl clubs of stomatopod shrimps and bamboo are great examples of damage-tolerant materials. The exceptional resistance to fracture of bone is due to complex deformation and toughening mechanisms that operate at different size scales, from the nanoscale structure of protein molecules to the macroscopic physiological scale. Nacre exhibits similar mechanical properties, however, with a rather simpler structure. Nacre shows a brick-and-mortar-like structure, with thick mineral layers (0.2–0.9 μm) of closely packed aragonite and a thin organic matrix (∼20 nm). While thin films and micrometer-sized samples that mimic these structures are already produced, successful production of bulk biomimetic structural materials is yet to be realized. However, numerous processing techniques have been proposed for producing nacre-like materials. Biomorphic mineralization is a technique that produces materials with morphologies and structures resembling those of natural living organisms by using bio-structures as templates for mineralization. Compared to other methods of material production, biomorphic mineralization is facile, environmentally benign and economic | https://en.wikipedia.org/wiki?curid=45784 |
Biomimetics Freeze casting (ice templating), an inexpensive method to mimic natural layered structures, was employed by researchers at Lawrence Berkeley National Laboratory to create alumina-Al-Si and IT-HAP-epoxy layered composites that match the mechanical properties of bone with an equivalent mineral/organic content. Various further studies also employed similar methods to produce high-strength and high-toughness composites involving a variety of constituent phases. Recent studies demonstrated production of cohesive and self-supporting macroscopic tissue constructs that mimic living tissues by printing tens of thousands of heterologous picoliter droplets in software-defined, 3D millimeter-scale geometries. Efforts are also taken up to mimic the design of nacre in artificial composite materials using fused deposition modelling and the helicoidal structures of stomatopod clubs in the fabrication of high-performance carbon fiber-epoxy composites. Various established and novel additive manufacturing technologies like PolyJet printing, direct ink writing, 3D magnetic printing, multi-material magnetically assisted 3D printing and magnetically assisted slip casting have also been utilized to mimic the complex micro-scale architectures of natural materials and provide huge scope for future research. Spider web silk is as strong as the Kevlar used in bulletproof vests | https://en.wikipedia.org/wiki?curid=45784 |
Biomimetics Engineers could in principle use such a material, if it could be reengineered to have a long enough life, for parachute lines, suspension bridge cables, artificial ligaments for medicine, and other purposes. The self-sharpening teeth of many animals have been copied to make better cutting tools. New ceramics that exhibit giant electret hysteresis have also been realized. In biological systems, self-healing generally occurs via chemical signals released at the site of fracture, which initiate a systemic response that transports repairing agents to the fracture site, thereby promoting autonomic healing. To demonstrate the use of micro-vascular networks for autonomic healing, researchers developed a microvascular coating–substrate architecture that mimics human skin. Bio-inspired self-healing structural color hydrogels that maintain the stability of an inverse opal structure and its resultant structural colors were developed. A self-repairing membrane inspired by rapid self-sealing processes in plants was developed for inflatable lightweight structures such as rubber boats or Tensairity® constructions. The researchers applied a thin soft cellular polyurethane foam coating on the inside of a fabric substrate, which closes the crack if the membrane is punctured with a spike. Self-healing materials, polymers and composite materials capable of mending cracks have been produced based on biological materials. Surfaces that recreate the properties of shark skin are intended to enable more efficient movement through water | https://en.wikipedia.org/wiki?curid=45784 |
Biomimetics Efforts have been made to produce fabric that emulates shark skin. Surface tension biomimetics are being researched for technologies such as hydrophobic or hydrophilic coatings and microactuators. Some amphibians, such as tree and torrent frogs and arboreal salamanders, are able to attach to and move over wet or even flooded environments without falling. These organisms have toe pads that are permanently wetted by mucus secreted from glands that open into the channels between epidermal cells. They attach to mating surfaces by wet adhesion and are capable of climbing on wet rocks even when water is flowing over the surface. Tire treads have also been inspired by the toe pads of tree frogs. Marine mussels can stick easily and efficiently to surfaces underwater under the harsh conditions of the ocean. Mussels use strong filaments to adhere to rocks in the inter-tidal zones of wave-swept beaches, preventing them from being swept away in strong sea currents. Mussel foot proteins attach the filaments to rocks, boats and practically any surface in nature, including other mussels. These proteins contain a mix of amino acid residues that has been adapted specifically for adhesive purposes. Researchers from the University of California Santa Barbara borrowed and simplified the chemistries that the mussel foot uses to overcome this engineering challenge of wet adhesion, creating copolyampholytes and one-component adhesive systems with potential for employment in nanofabrication protocols | https://en.wikipedia.org/wiki?curid=45784 |
Biomimetics Other research has proposed adhesive glue from mussels. Leg attachment pads of several animals, including many insects (e.g. beetles and flies), spiders and lizards (e.g. geckos), are capable of attaching to a variety of surfaces and are used for locomotion, even on vertical walls or across ceilings. Attachment systems in these organisms have similar structures at their terminal elements of contact, known as setae. Such biological examples have offered inspiration for climbing robots, boots and tape. Synthetic setae have also been developed for the production of dry adhesives. Biomimetic materials are gaining increasing attention in the field of optics and photonics. There are still few known bioinspired or biomimetic products involving the photonic properties of plants or animals. However, understanding how nature designed such optical materials from biological resources is worth pursuing and might lead to future commercial products. For instance, the chiral self-assembly of cellulose inspired by the "Pollia condensata" berry has been exploited to make optically active films. Such films are made from cellulose, which is a biodegradable and biobased resource obtained from wood or cotton. The structural colours can potentially be everlasting and have more vibrant colour than the ones obtained from chemical absorption of light. "Pollia condensata" is not the only fruit showing a structurally coloured skin; iridescence is also found in berries of other species such as "Margaritaria nobilis" | https://en.wikipedia.org/wiki?curid=45784 |
Biomimetics These fruits show iridescent colors in the blue-green region of the visible spectrum, which gives the fruit a strong metallic and shiny visual appearance. The structural colours come from the organisation of cellulose chains in the fruit's epicarp, a part of the fruit skin. Each cell of the epicarp is made of a multilayered envelope that behaves like a Bragg reflector. However, the light which is reflected from the skin of these fruits is not polarised, unlike that arising from man-made replicates obtained from the self-assembly of cellulose nanocrystals into helicoids, which only reflect left-handed circularly polarised light. The fruit of "Elaeocarpus angustifolius" also shows structural colour that arises from the presence of specialised cells called iridosomes, which have layered structures. Similar iridosomes have also been found in "Delarbrea michieana" fruits. In plants, multilayer structures can be found either at the surface of the leaves (on top of the epidermis), such as in "Selaginella willdenowii", or within specialized intra-cellular organelles, the so-called iridoplasts, which are located inside the cells of the upper epidermis. For instance, the rainforest plant "Begonia pavonina" has iridoplasts located inside its epidermal cells. Structural colours have also been found in several algae, such as in the red alga "Chondrus crispus" (Irish moss). Structural coloration produces the rainbow colours of soap bubbles, butterfly wings and many beetle scales | https://en.wikipedia.org/wiki?curid=45784 |
Biomimetics Phase-separation has been used to fabricate ultra-white scattering membranes from polymethylmethacrylate, mimicking the beetle "Cyphochilus". LED lights can be designed to mimic the patterns of scales on fireflies' abdomens, improving their efficiency. "Morpho" butterfly wings are structurally coloured to produce a vibrant blue that does not vary with angle. This effect can be mimicked by a variety of technologies. Lotus Cars claim to have developed a paint that mimics the "Morpho" butterfly's structural blue colour. In 2007, Qualcomm commercialised an interferometric modulator display technology, "Mirasol", using "Morpho"-like optical interference. In 2010, the dressmaker Donna Sgro made a dress from Teijin Fibers' Morphotex, an undyed fabric woven from structurally coloured fibres, mimicking the microstructure of "Morpho" butterfly wing scales. Canon Inc.'s SubWavelength structure Coating uses wedge-shaped structures the size of the wavelength of visible light. The wedge-shaped structures cause a continuously changing refractive index as light travels through the coating, significantly reducing lens flare. This imitates the structure of a moth's eye. Holistic planned grazing, using fencing and/or herders, seeks to restore grasslands by carefully planning movements of large herds of livestock to mimic the vast herds found in nature | https://en.wikipedia.org/wiki?curid=45784 |
Biomimetics The natural system being mimicked and used as a template is grazing animals concentrated by pack predators that must move on after eating, trampling, and manuring an area, returning only after it has fully recovered. Developed by Allan Savory, who in turn was inspired by the work of André Voisin, this method of grazing holds tremendous potential in building soil, increasing biodiversity, reversing desertification, and mitigating global warming, similar to what occurred during the past 40 million years as the expansion of grass-grazer ecosystems built deep grassland soils, sequestering carbon and cooling the planet. Permaculture is a set of design principles centered around whole-systems thinking, simulating or directly utilizing the patterns and resilient features observed in natural ecosystems. It uses these principles in a growing number of fields, from regenerative agriculture and rewilding to community and organizational design and development. Some air conditioning systems use biomimicry in their fans to increase airflow while reducing power consumption. Technologists like Jas Johl have speculated that the functionality of vacuoles could be used to design highly adaptable security systems. "The functionality of a vacuole, a biological structure that guards and promotes growth, illuminates the value of adaptability as a guiding principle for security | https://en.wikipedia.org/wiki?curid=45784 |
Biomimetics " The functions and significance of vacuoles are fractal in nature: the organelle has no basic shape or size; its structure varies according to the requirements of the cell. Vacuoles not only isolate threats, contain what's necessary, export waste, and maintain pressure; they also help the cell scale and grow. Johl argues these functions are necessary for any security system design. The 500 Series Shinkansen used biomimicry to reduce energy consumption and noise levels while increasing passenger comfort. Protein folding has been used to control material formation for self-assembled functional nanostructures. Polar bear fur has inspired the design of thermal collectors and clothing. The light refractive properties of the moth's eye have been studied to reduce the reflectivity of solar panels. The Bombardier beetle's powerful repellent spray inspired a Swedish company to develop a "micro mist" spray technology, which is claimed to have a low carbon impact (compared to aerosol sprays). The beetle mixes chemicals and releases its spray via a steerable nozzle at the end of its abdomen, stinging and confusing the victim. Most viruses have an outer capsule 20 to 300 nm in diameter. Virus capsules are remarkably robust and capable of withstanding temperatures as high as 60 °C; they are stable across the pH range 2-10. Viral capsules can be used to create nanodevice components such as nanowires, nanotubes, and quantum dots | https://en.wikipedia.org/wiki?curid=45784 |
Biomimetics Tubular virus particles such as the tobacco mosaic virus (TMV) can be used as templates to create nanofibers and nanotubes, since both the inner and outer layers of the virus are charged surfaces which can induce nucleation of crystal growth. This was demonstrated through the production of platinum and gold nanotubes using TMV as a template. Mineralized virus particles have been shown to withstand various pH values when mineralized with different materials such as silicon, PbS, and CdS, and could therefore serve as useful carriers of material. A spherical plant virus called cowpea chlorotic mottle virus (CCMV) has interesting expanding properties when exposed to environments of pH higher than 6.5. Above this pH, 60 independent pores with diameters of about 2 nm begin to exchange substance with the environment. The structural transition of the viral capsid can be utilized in biomorphic mineralization for selective uptake and deposition of minerals by controlling the solution pH. Possible applications include using the viral cage to produce uniformly shaped and sized quantum dot semiconductor nanoparticles through a series of pH washes. This is an alternative to the apoferritin cage technique currently used to synthesize uniform CdSe nanoparticles. Such materials could also be used for targeted drug delivery, since particles release contents upon exposure to specific pH levels. | https://en.wikipedia.org/wiki?curid=45784 |
National Institutes of Health The National Institutes of Health (NIH) is the primary agency of the United States government responsible for biomedical and public health research. It was founded in the late 1880s and is now part of the United States Department of Health and Human Services. The majority of NIH facilities are located in Bethesda, Maryland. The NIH conducts its own scientific research through its Intramural Research Program (IRP) and provides major biomedical research funding to non-NIH research facilities through its Extramural Research Program. In recent years the Intramural Research Program (IRP) has had 1,200 principal investigators and more than 4,000 postdoctoral fellows in basic, translational, and clinical research, making it the largest biomedical research institution in the world, while, as of 2003, the extramural arm provided 28% of biomedical research funding spent annually in the U.S., or about US$26.4 billion. The NIH comprises 27 separate institutes and centers of different biomedical disciplines and is responsible for many scientific accomplishments, including the discovery of fluoride to prevent tooth decay, the use of lithium to manage bipolar disorder, and the creation of vaccines against hepatitis, "Haemophilus influenzae" (HIB), and human papillomavirus (HPV). In 2019, the NIH was ranked number 2 in the world for biomedical sciences by the Nature Index, which measured the largest contributors to papers published in a subset of leading journals from 2015–2018 | https://en.wikipedia.org/wiki?curid=46174 |
National Institutes of Health NIH's roots extend back to the Marine Hospital Service, established in the late 1790s to provide medical relief to sick and disabled men in the U.S. Navy. By 1870, a network of marine hospitals had developed and was placed under the charge of a medical officer within the Bureau of the Treasury Department. In the late 1870s, Congress allocated funds to investigate the causes of epidemics like cholera and yellow fever, and it created the National Board of Health, making medical research an official government initiative. In 1887, a laboratory for the study of bacteria, the Hygienic Laboratory, was established at the Marine Hospital in New York. In the early 1900s, Congress began appropriating funds for the Marine Hospital Service. By 1922, this organization had changed its name to the Public Health Service and established a Special Cancer Investigations laboratory at Harvard Medical School. This marked the beginning of a partnership with universities. In 1930, the Hygienic Laboratory was re-designated as the National Institute of Health by the Ransdell Act and was given $750,000 to construct two NIH buildings. Over the next few decades, Congress would markedly increase funding of the NIH, and various institutes and centers within the NIH were created for specific research programs. In 1944, the Public Health Service Act was approved, and the National Cancer Institute became a division of NIH. In 1948, the name changed from National Institute of Health to National Institutes of Health | https://en.wikipedia.org/wiki?curid=46174 |
National Institutes of Health In the 1960s, virologist and cancer researcher Chester M. Southam injected HeLa cancer cells into patients at the Jewish Chronic Disease Hospital. When three doctors resigned after refusing to inject patients without their consent, the experiment gained considerable media attention. The NIH was a major source of funding for Southam's research and had required all research involving human subjects to obtain their consent prior to any experimentation. Upon investigating all of its grantee institutions, the NIH discovered that the majority of them did not protect the rights of human subjects. From then on, the NIH has required all grantee institutions to use review boards to approve any research proposals involving human experimentation. In 1967, the Division of Regional Medical Programs was created to administer grants for research on heart disease, cancer, and strokes. That same year, the NIH director lobbied the White House for increased federal funding in order to increase research and the speed with which health benefits could be brought to the people. An advisory committee was formed to oversee further development of the NIH and its research programs. By 1971 cancer research was in full force and President Nixon signed the National Cancer Act, initiating a National Cancer Program, President's Cancer Panel, National Cancer Advisory Board, and 15 new research, training, and demonstration centers | https://en.wikipedia.org/wiki?curid=46174 |
National Institutes of Health Funding for the NIH has often been a source of contention in Congress, serving as a proxy for the political currents of the time. In 1992, the NIH encompassed nearly 1 percent of the federal government's operating budget and controlled more than 50 percent of all funding for health research, and 85 percent of all funding for health studies in universities. While government funding for research in other disciplines has been increasing at a rate similar to inflation since the 1970s, research funding for the NIH nearly tripled through the 1990s and early 2000s, but has remained relatively stagnant since then. By the 1990s, the NIH's focus had shifted to DNA research, and it launched the Human Genome Project. The NIH Office of the Director is the central office responsible for setting policy for NIH and for planning, managing and coordinating the programs and activities of all NIH components. The NIH Director plays an active role in shaping the agency's activities and outlook. The Director is responsible for providing leadership to the Institutes and Centers by identifying needs and opportunities, especially in efforts involving multiple Institutes. Within this Office is the Division of Program Coordination, Planning and Strategic Initiatives, which has 12 divisions. Intramural research is primarily conducted at the main campus in Bethesda, Maryland and Rockville, Maryland, and the surrounding communities | https://en.wikipedia.org/wiki?curid=46174 |
National Institutes of Health The Bayview Campus in Baltimore, Maryland houses the research programs of the National Institute on Aging, National Institute on Drug Abuse, and National Human Genome Research Institute, with nearly 1,000 scientists and support staff. The Frederick National Laboratory in Frederick, MD and the nearby Riverside Research Park house many components of the National Cancer Institute, including the Center for Cancer Research, Office of Scientific Operations, Management Operations Support Branch, the Division of Cancer Epidemiology and Genetics and the Division of Cancer Treatment and Diagnosis. The National Institute of Environmental Health Sciences is located in the Research Triangle region of North Carolina. Other ICs have satellite locations in addition to operations at the main campus. The National Institute of Allergy and Infectious Diseases maintains its Rocky Mountain Labs in Hamilton, Montana, with an emphasis on BSL3 and BSL4 laboratory work. NIDDK operates the Phoenix Epidemiology and Clinical Research Branch in Phoenix, AZ. As of 2017, 153 scientists receiving financial support from the NIH have been awarded a Nobel Prize and 195 have been awarded a Lasker Award. NIH devotes 10% of its funding to research within its own facilities (intramural research), and gives more than 80% of its funding in research grants to extramural (outside) researchers. Of this extramural funding, a certain percentage (2.8% in 2014) must be granted to small businesses under the SBIR/STTR program | https://en.wikipedia.org/wiki?curid=46174 |
National Institutes of Health In recent years, the extramural funding has consisted of about 50,000 grants to more than 325,000 researchers at more than 3,000 institutions, and this rate of granting has remained reasonably steady at about 47,000 grants to 2,700 organizations. NIH spending (not including temporary funding from the American Recovery and Reinvestment Act of 2009) has covered clinical research, genetics-related research, prevention research, cancer research, and biotechnology. In 2008 a Congressional mandate called for investigators funded by the NIH to submit an electronic version of their final manuscripts to the National Library of Medicine's research repository, PubMed Central (PMC), no later than 12 months after the official date of publication. The NIH Public Access Policy was the first public access mandate for a U.S. public funding agency. On February 13, 2012, the NIH announced a new group of individuals assigned to research pain. This committee is composed of researchers from different organizations and will work to "coordinate pain research activities across the federal government with the goals of stimulating pain research collaboration… and providing an important avenue for public involvement" ("Members of new," 2012). With such a committee, research will be conducted not by each individual organization or person but by a collaborating group, which will increase the information available. It is hoped that this will make more pain management options available, including techniques for arthritis sufferers | https://en.wikipedia.org/wiki?curid=46174 |
National Institutes of Health In 2000, the Joint Economic Committee of Congress reported on NIH research, which was funded at $16 billion a year in 2000, noting that some econometric studies had given it a rate of return of 25 to 40 percent per year through reducing the economic cost of illness in the US. It found that of the 21 drugs with the highest therapeutic impact on society introduced between 1965 and 1992, public funding was "instrumental" for 15. As of 2011, NIH-supported research had helped to discover 153 new FDA-approved drugs, vaccines, and new indications for drugs in the 40 years prior. One study found NIH funding aided either directly or indirectly in developing the drugs or drug targets for all of the 210 FDA-approved drugs from 2010 to 2016. In 2015, Pierre Azoulay et al. estimated that $10 million invested in research generated two to three new patents. Since its inception, the NIH intramural research program has been a source of many pivotal scientific and medical discoveries. In September 2006, the NIH Blueprint for Neuroscience Research started a contract for the NIH Toolbox for the Assessment of Neurological and Behavioral Function to develop a set of state-of-the-art measurement tools to enhance collection of data in large cohort studies. Scientists from more than 100 institutions nationwide contributed. In September 2012, the NIH Toolbox was rolled out to the research community. NIH Toolbox assessments are based, where possible, on Item Response Theory and adapted for testing by computer | https://en.wikipedia.org/wiki?curid=46174 |
National Institutes of Health To allocate funds, the NIH must first obtain its budget from Congress. This process begins with institute and center (IC) leaders collaborating with scientists to determine the most important and promising research areas within their fields. IC leaders discuss research areas with NIH management who then develops a budget request for continuing projects, new research proposals, and new initiatives from the Director. NIH submits its budget request to the Department of Health and Human Services (HHS), and the HHS considers this request as a portion of its budget. Many adjustments and appeals occur between NIH and HHS before the agency submits NIH's budget request to the Office of Management and Budget (OMB). OMB determines what amounts and research areas are approved for incorporation into the President's final budget. The President then sends NIH's budget request to Congress in February for the next fiscal year's allocations. The House and Senate Appropriations Subcommittees deliberate and by fall, Congress usually appropriates funding. This process takes approximately 18 months before the NIH can allocate any actual funds. When a government shutdown occurs, the NIH continues to treat people who are already enrolled in clinical trials, but does not start any new clinical trials and does not admit new patients who are not already enrolled in a clinical trial, except for the most critically ill, as determined by the NIH Director | https://en.wikipedia.org/wiki?curid=46174 |
National Institutes of Health Over the last century, the responsibility to allocate funding has shifted from the OD and Advisory Committee to the individual ICs, and Congress has increasingly set aside funding for particular causes. In the 1970s, Congress began to earmark funds specifically for cancer research, and in the 1980s a significant amount was allocated for AIDS/HIV research. Funding for the NIH has often been a source of contention in Congress, serving as a proxy for the political currents of the time. During the 1980s, President Reagan repeatedly tried to cut funding for research, only to see Congress partly restore funding. The political contention over NIH funding slowed the nation's response to the AIDS epidemic; while AIDS was reported in newspaper articles from 1981, no funding was provided for research on the disease. In 1984 National Cancer Institute scientists found implications that "variants of a human cancer virus called HTLV-III are the primary cause of acquired immunodeficiency syndrome (AIDS)," a new epidemic that gripped the nation. In 1992, the NIH encompassed nearly 1 percent of the federal government's operating budget and controlled more than 50 percent of all funding for health research and 85 percent of all funding for health studies in universities. From 1993 to 2001 the NIH budget doubled. Since then, funding has remained essentially flat, and during the decade following the financial crisis, the NIH budget struggled to keep up with inflation. In 1999 Congress increased the NIH's budget by $2.3 billion to $17.2 billion in 2000. | https://en.wikipedia.org/wiki?curid=46174 |
National Institutes of Health In 2009 Congress again increased the NIH budget, to $31 billion in 2010. In 2017 and 2018, Congress passed laws with bipartisan support that substantially increased appropriations for NIH, which reached $37.3 billion annually in FY2018. Researchers at universities or other institutions outside of NIH can apply for research project grants (RPGs) from the NIH. There are numerous funding mechanisms for different project types (e.g., basic research, clinical research, etc.) and career stages (e.g., early career, postdoc fellowships, etc.). The NIH regularly issues "requests for applications" (RFAs), e.g., on specific programmatic priorities or timely medical problems (such as Zika virus research in early 2016). In addition, researchers can apply for "investigator-initiated grants" whose subject is determined by the scientist. The total number of applicants has increased substantially, from about 60,000 investigators who applied during the period from 1999 to 2003 to slightly less than 90,000 who applied during the period from 2011 to 2015. Due to this, the "cumulative investigator rate," that is, the likelihood that unique investigators are funded over a 5-year window, has declined from 43% to 31%. R01 grants are the most common funding mechanism and include investigator-initiated projects. The roughly 27,000 to 29,000 R01 applications had a funding success rate of 17-19% during 2012 through 2014 | https://en.wikipedia.org/wiki?curid=46174 |
National Institutes of Health Similarly, the 13,000 to 14,000 R21 applications had a funding success rate of 13-14% during the same period. In FY 2016, the total number of grant applications received by the NIH was 54,220, with approximately 19% being awarded funding. Institutes have varying funding rates. The National Cancer Institute awarded funding to 12% of applicants, while the National Institute of General Medical Sciences awarded funding to 30% of applicants. NIH employs five broad decision criteria in its funding policy. First, ensure the highest quality of scientific research by employing an arduous peer review process. Second, seize opportunities that have the greatest potential to yield new knowledge and that will lead to better prevention and treatment of disease. Third, maintain a diverse research portfolio in order to capitalize on major discoveries in a variety of fields such as cell biology, genetics, physics, engineering, and computer science. Fourth, address public health needs according to the disease burden (e.g., prevalence and mortality). And fifth, construct and support the scientific infrastructure (e.g., well-equipped laboratories and safe research facilities) necessary to conduct research. Advisory committee members advise the Institute on policy and procedures affecting the external research programs and provide a second level of review for all grant and cooperative agreement applications considered by the Institute for funding | https://en.wikipedia.org/wiki?curid=46174 |
National Institutes of Health In 2014, it was announced that the NIH is directing scientists to perform their experiments with both female and male animals, or cells derived from females as well as males if they are studying cell cultures, and that the NIH would take the balance of each study design into consideration when awarding grants. The announcement also stated that this rule would probably not apply when studying sex-specific diseases (for example, ovarian or testicular cancer). One of the goals of the NIH is to "expand the base in medical and associated sciences in order to ensure a continued high return on the public investment in research." The NIH is funded with taxpayer dollars, making taxpayers the primary beneficiaries of advances in research. Thus, the general public is a key stakeholder in the decisions resulting from the NIH funding policy. However, some in the general public do not feel their interests are being represented, and individuals have formed patient advocacy groups to represent their own interests. Important stakeholders of the NIH funding policy include researchers and scientists. Extramural researchers differ from intramural researchers in that they are not employed by the NIH but may apply for funding. Throughout the history of the NIH, the amount of funding received has increased, but the proportion to each IC remains relatively constant. The individual ICs then decide who will receive the grant money and how much will be allotted | https://en.wikipedia.org/wiki?curid=46174 |
National Institutes of Health Policy changes on who receives funding significantly affect researchers. For example, the NIH has recently attempted to approve more first-time NIH R01 applicants, or the research grant applications of young scientists. To encourage the participation of young scientists, the application process has been shortened and made easier. In addition, first-time applicants are being offered more funding for their research grants than those who have received grants in the past. In 2011 and 2012, the Department of Health and Human Services Office of Inspector General published a series of audit reports revealing that throughout the fiscal years 2000–2010, institutes under the aegis of the NIH did not comply with the time and amount requirements specified in appropriations statutes in awarding federal contracts to commercial partners, committing the federal government to tens of millions of dollars of expenditure ahead of appropriation of funds from Congress. The NIH is composed of 27 separate institutes and centers (ICs) that conduct and coordinate research across different disciplines of biomedical science. In addition to the current institutes and centers, the National Center for Research Resources operated from April 13, 1962, to December 23, 2011. | https://en.wikipedia.org/wiki?curid=46174 |
Azimuth An azimuth (; from Arabic اَلسُّمُوت "as-sumūt", 'the directions', the plural form of the Arabic noun السَّمْت "as-samt", meaning 'the direction') is an angular measurement in a spherical coordinate system. The vector from an observer (origin) to a point of interest is projected perpendicularly onto a reference plane; the angle between the projected vector and a reference vector on the reference plane is called the azimuth. When used as a celestial coordinate, the azimuth is the horizontal direction of a star or other astronomical object in the sky. The star is the point of interest, the reference plane is the local area (e.g. a circular area 5 km in radius at sea level) around an observer on Earth's surface, and the reference vector points to true north. The azimuth is the angle between the north vector and the star's vector on the horizontal plane. Azimuth is usually measured in degrees (°). The concept is used in navigation, astronomy, engineering, mapping, mining, and ballistics. In land navigation, azimuth is usually denoted alpha, "α", and defined as a horizontal angle measured clockwise from a north base line or "meridian". "Azimuth" has also been more generally defined as a horizontal angle measured clockwise from any fixed reference plane or easily established base direction line. Today, the reference plane for an azimuth is typically true north, measured as a 0° azimuth, though other angular units (grad, mil) can be used | https://en.wikipedia.org/wiki?curid=47487 |
Azimuth Moving clockwise on a 360 degree circle, east has azimuth 90°, south 180°, and west 270°. There are exceptions: some navigation systems use south as the reference vector. Any direction can be the reference vector, as long as it is clearly defined. Quite commonly, azimuths or compass bearings are stated in a system in which either north or south can be the zero, and the angle may be measured clockwise or anticlockwise from the zero. For example, a bearing might be described as "(from) south, (turn) thirty degrees (toward the) east" (the words in brackets are usually omitted), abbreviated "S30°E", which is the bearing 30 degrees in the eastward direction from south, i.e. the bearing 150 degrees clockwise from north. The reference direction, stated first, is always north or south, and the turning direction, stated last, is east or west. The directions are chosen so that the angle, stated between them, is positive, between zero and 90 degrees. If the bearing happens to be exactly in the direction of one of the cardinal points, a different notation, e.g. "due east", is used instead. The cartographical azimuth (in decimal degrees) can be calculated when the coordinates of 2 points are known in a flat plane (cartographical coordinates): "α" = (180/π) atan2(X2 − X1, Y2 − Y1). Note that the reference axes are swapped relative to the (counterclockwise) mathematical polar coordinate system and that the azimuth is clockwise relative to the north. This is the reason why the X and Y axes in the above formula are swapped | https://en.wikipedia.org/wiki?curid=47487 |
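A minimal sketch of this calculation (Python; the function name is our own, with X increasing eastward and Y northward as in the convention above):

```python
import math

def cartographic_azimuth(x1: float, y1: float, x2: float, y2: float) -> float:
    """Azimuth in decimal degrees from (x1, y1) to (x2, y2) in a flat
    cartographic plane, measured clockwise from north (the +Y axis).
    The atan2 arguments are swapped relative to the usual mathematical
    convention atan2(dy, dx), as noted above."""
    azimuth = math.degrees(math.atan2(x2 - x1, y2 - y1))
    return azimuth + 360.0 if azimuth < 0 else azimuth

print(cartographic_azimuth(0, 0, 1, 1))   # 45.0   (towards the north-east)
print(cartographic_azimuth(0, 0, -1, 0))  # 270.0  (due west)
```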
Azimuth If the azimuth becomes negative, one can always add 360°. The formula in radians would be slightly easier: "α" = atan2(X2 − X1, Y2 − Y1). Note the swapped input order atan2(ΔX, ΔY), in contrast to the normal atan2(Δy, Δx) order. When the coordinates (X1, Y1) of one point, the distance "D", and the azimuth "α" to another point (X2, Y2) are known, one can calculate its coordinates: X2 = X1 + "D" sin "α" and Y2 = Y1 + "D" cos "α". This is typically used in triangulation and azimuth identification (AzID), especially in radar applications. We are standing at latitude "φ"1, longitude zero; we want to find the azimuth from our viewpoint to Point 2 at latitude "φ"2, longitude "L" (positive eastward). We can get a fair approximation by assuming the Earth is a sphere, in which case the azimuth "α" is given by tan "α" = sin "L" / (cos "φ"1 tan "φ"2 − sin "φ"1 cos "L"). A better approximation assumes the Earth is a slightly-squashed sphere (an "oblate spheroid"); "azimuth" then has at least two very slightly different meanings. "Normal-section azimuth" is the angle measured at our viewpoint by a theodolite whose axis is perpendicular to the surface of the spheroid; "geodetic azimuth" is the angle between north and the "geodesic"; that is, the shortest path on the surface of the spheroid from our viewpoint to Point 2. The difference is usually immeasurably small; if Point 2 is not more than 100 km away, the difference will not exceed 0.03 arc second. Various websites will calculate geodetic azimuth; e.g., the GeoScience Australia site. Formulas for calculating geodetic azimuth are linked in the distance article | https://en.wikipedia.org/wiki?curid=47487 |
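A sketch of the two calculations just described (Python; function names are our own, angles in degrees, azimuth clockwise from north, "L" positive eastward, and a spherical Earth assumed for the second function):

```python
import math

def forward_point(x1: float, y1: float, distance: float, azimuth_deg: float):
    """Flat-plane coordinates of the point at the given distance and
    azimuth from (x1, y1): X2 = X1 + D*sin(a), Y2 = Y1 + D*cos(a)."""
    a = math.radians(azimuth_deg)
    return x1 + distance * math.sin(a), y1 + distance * math.cos(a)

def spherical_azimuth(lat1_deg: float, lat2_deg: float, dlon_deg: float) -> float:
    """Approximate azimuth to Point 2 on a spherical Earth:
    tan(a) = sin(L) / (cos(lat1)*tan(lat2) - sin(lat1)*cos(L))."""
    p1, p2, L = map(math.radians, (lat1_deg, lat2_deg, dlon_deg))
    a = math.degrees(math.atan2(math.sin(L),
                                math.cos(p1) * math.tan(p2)
                                - math.sin(p1) * math.cos(L)))
    return a + 360.0 if a < 0 else a

print(forward_point(0.0, 0.0, 10.0, 90.0))  # (10.0, ~0.0): 10 units due east
print(spherical_azimuth(0.0, 0.0, 90.0))    # 90.0: due east along the equator
```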
Azimuth Normal-section azimuth is simpler to calculate; Bomford says Cunningham's formula is exact for any distance. The formula is expressed in terms of the flattening "f" and the eccentricity "e" of the chosen spheroid (e.g., 1/"f" = 298.257223563 for WGS84), and it simplifies further in the special case "φ"1 = 0, i.e. when the observer is on the equator. To calculate the azimuth of the sun or a star given its declination and hour angle at our location, we modify the formula for a spherical earth: replace "φ"2 with declination and the longitude difference with the hour angle, and change the sign (since the hour angle is positive westward instead of east). There is a wide variety of azimuthal map projections. They all have the property that directions (the azimuths) from a central point are preserved. Some navigation systems use south as the reference plane. However, any direction can serve as the plane of reference, as long as it is clearly defined for everyone using that system. Used in celestial navigation, an "azimuth" is the direction of a celestial body from the observer. In astronomy, an "azimuth" is sometimes referred to as a bearing. In modern astronomy azimuth is nearly always measured from the north. (The article on coordinate systems, for example, uses a convention measuring from the south.) In former times, it was common to refer to azimuth from the south, as it was then zero at the same time that the hour angle of a star was zero. This assumes, however, that the star (upper) culminates in the south, which is only true if the star's declination is less than (i.e. further south than) the observer's latitude | https://en.wikipedia.org/wiki?curid=47487 |
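Following the substitution just described (declination in place of "φ"2, hour angle in place of the longitude difference, with the sign changed because the hour angle is positive westward), a hedged sketch (Python; the function name is our own):

```python
import math

def celestial_azimuth(lat_deg: float, dec_deg: float, hour_angle_deg: float) -> float:
    """Azimuth of a celestial body, degrees clockwise from north, using the
    spherical formula with declination substituted for the target latitude
    and the hour angle (positive westward) substituted, sign flipped, for
    the longitude difference."""
    phi, dec, H = map(math.radians, (lat_deg, dec_deg, hour_angle_deg))
    a = math.degrees(math.atan2(-math.sin(H),
                                math.cos(phi) * math.tan(dec)
                                - math.sin(phi) * math.cos(H)))
    return a % 360.0

# A body on the celestial equator (declination 0) seen from 40 deg N,
# two hours (30 deg) past the meridian, stands in the south-west:
print(celestial_azimuth(40.0, 0.0, 30.0))  # ~222 deg
```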
Azimuth If, instead of measuring from and along the horizon, the angles are measured from and along the celestial equator, the angles are called right ascension if referenced to the Vernal Equinox, or hour angle if referenced to the celestial meridian. In the horizontal coordinate system, used in celestial navigation and satellite dish installation, azimuth is one of the two coordinates. The other is altitude, sometimes called elevation above the horizon. See also: Sat finder. In mathematics, the azimuth angle of a point in cylindrical coordinates or spherical coordinates is the anticlockwise angle between the positive "x"-axis and the projection of the vector onto the "xy"-plane. The angle is the same as an angle in polar coordinates of the component of the vector in the "xy"-plane and is normally measured in radians rather than degrees. As well as measuring the angle differently, mathematical applications very often use theta, "θ", to represent the azimuth rather than phi, "φ". For magnetic tape drives, "azimuth" refers to the angle between the tape head(s) and tape. In sound localization experiments and literature, the "azimuth" refers to the angle the sound source makes compared to the imaginary straight line that is drawn from within the head through the area between the eyes. An azimuth thruster in shipbuilding is a propeller that can be rotated horizontally. The word azimuth is found in all European languages today | https://en.wikipedia.org/wiki?curid=47487 |
Azimuth It originates from medieval Arabic "al-sumūt", pronounced "as-sumūt" in Arabic, meaning "the directions" (plural of Arabic "al-samt" = "the direction"). The Arabic word entered late medieval Latin in an astronomy context and in particular in the use of the Arabic version of the astrolabe astronomy instrument. The word's first record in English is in the 1390s in "Treatise on the Astrolabe" by Geoffrey Chaucer. The first known record in any Western language is in Spanish in the 1270s in an astronomy book that was largely derived from Arabic sources, the "Libros del saber de astronomía" commissioned by King Alfonso X of Castile. | https://en.wikipedia.org/wiki?curid=47487 |
Cloud albedo is a measure of the albedo of a cloud. Higher values indicate that a cloud reflects a larger amount of solar radiation and transmits a smaller amount of radiation. Cloud albedo depends on the total mass of water, the size and shape of the droplets or particles and their distribution in space. Cloud albedo, along with the greenhouse effect of clouds, strongly influences the Earth's energy budget. Thick clouds (such as stratocumulus) reflect a large amount of incoming solar radiation, meaning they have a high albedo. Thin clouds (such as cirrus) tend to transmit most solar radiation, so have low albedo. Studies have shown that cloud liquid water path varies with changing cloud droplet size, which may alter the behavior of clouds and their albedo. The variations of the albedo of typical clouds in the atmosphere are dominated by the column amount of liquid water and ice in the cloud. Cloud albedo varies from less than 10% to more than 90% and depends on drop sizes, liquid water or ice content, thickness of the cloud, and the sun's zenith angle. The smaller the drops and the greater the liquid water content, the greater the cloud albedo, if all other factors are the same. Addition of cloud nuclei by pollution can lead to an increase in solar radiation reflected by clouds. Increasing aerosol concentration and aerosol density increases cloud droplet concentration, decreases cloud droplet size, and increases cloud albedo | https://en.wikipedia.org/wiki?curid=47514 |
Cloud albedo In macrophysically identical clouds, a cloud with fewer, larger drops will have a lower albedo than a cloud with more, smaller drops. The cloud albedo increases with the total water content or depth of the cloud and with the solar zenith angle. The variation of albedo with zenith angle is most rapid when the sun is near the horizon, and least when the sun is overhead. Absorption of solar radiation by plane-parallel clouds decreases with increasing zenith angle because radiation that is reflected to space at the higher zenith angles penetrates less deeply into the cloud and is therefore less likely to be absorbed. | https://en.wikipedia.org/wiki?curid=47514 |
Planetary body A planetary body or planetary object is any secondary body in the Solar System that is geologically differentiated or in hydrostatic equilibrium and thus has a planet-like geology: a planet, dwarf planet, or planetary-mass moon. In 2002, planetary scientists Alan Stern and Harold Levison proposed the following algorithm to determine whether an object in space satisfies the definition for a planetary body. The body must be massive enough that its shape is determined by gravity rather than mechanical strength (that is, it must be in hydrostatic equilibrium), yet not massive enough to have ever generated energy in its interior by nuclear fusion. This definition excludes brown dwarfs and stars, as well as small bodies such as planetesimals. | https://en.wikipedia.org/wiki?curid=47517 |
Cryosphere The cryosphere (from the Greek "kryos", "cold", "frost" or "ice" and "sphaira", "globe, ball") is an all-encompassing term for those portions of Earth's surface where water is in solid form, including sea ice, lake ice, river ice, snow cover, glaciers, ice caps, ice sheets, and frozen ground (which includes permafrost). Thus, there is a wide overlap with the hydrosphere. The cryosphere is an integral part of the global climate system with important linkages and feedbacks generated through its influence on surface energy and moisture fluxes, clouds, precipitation, hydrology, atmospheric and oceanic circulation. Through these feedback processes, the cryosphere plays a significant role in the global climate and in climate model response to global changes. The term deglaciation describes the retreat of cryospheric features. Cryology is the study of cryospheres. Frozen water is found on the Earth’s surface primarily as snow cover, freshwater ice in lakes and rivers, sea ice, glaciers, ice sheets, and frozen ground and permafrost (permanently frozen ground). The residence time of water in each of these cryospheric sub-systems varies widely. Snow cover and freshwater ice are essentially seasonal, and most sea ice, except for ice in the central Arctic, lasts only a few years if it is not seasonal. A given water particle in glaciers, ice sheets, or ground ice, however, may remain frozen for 10–100,000 years or longer, and deep ice in parts of East Antarctica may have an age approaching 1 million years | https://en.wikipedia.org/wiki?curid=47527 |
Cryosphere Most of the world's ice volume is in Antarctica, principally in the East Antarctic Ice Sheet. In terms of areal extent, however, Northern Hemisphere winter snow and ice extent comprise the largest area, amounting to an average 23% of hemispheric surface area in January. The large areal extent and the important climatic roles of snow and ice, related to their unique physical properties, indicate that the ability to observe and model snow and ice-cover extent, thickness, and physical properties (radiative and thermal properties) is of particular significance for climate research. There are several fundamental physical properties of snow and ice that modulate energy exchanges between the surface and the atmosphere. The most important properties are the surface reflectance (albedo), the ability to transfer heat (thermal diffusivity), and the ability to change state (latent heat). These physical properties, together with surface roughness, emissivity, and dielectric characteristics, have important implications for observing snow and ice from space. For example, surface roughness is often the dominant factor determining the strength of radar backscatter. Physical properties such as crystal structure, density, length, and liquid water content are important factors affecting the transfers of heat and water and the scattering of microwave energy. The surface reflectance of incoming solar radiation is important for the surface energy balance (SEB) | https://en.wikipedia.org/wiki?curid=47527 |
Cryosphere It is the ratio of reflected to incident solar radiation, commonly referred to as albedo. Climatologists are primarily interested in albedo integrated over the shortwave portion of the electromagnetic spectrum (~300 to 3500 nm), which coincides with the main solar energy input. Typically, albedo values for non-melting snow-covered surfaces are high (~80–90%) except in the case of forests. The higher albedos for snow and ice cause rapid shifts in surface reflectivity in autumn and spring in high latitudes, but the overall climatic significance of this increase is spatially and temporally modulated by cloud cover. (Planetary albedo is determined principally by cloud cover, and by the small amount of total solar radiation received in high latitudes during winter months.) Summer and autumn are times of high-average cloudiness over the Arctic Ocean so the albedo feedback associated with the large seasonal changes in sea-ice extent is greatly reduced. Groisman "et al." observed that snow cover exhibited the greatest influence on the Earth radiative balance in the spring (April to May) period when incoming solar radiation was greatest over snow-covered areas. The thermal properties of cryospheric elements also have important climatic consequences. Snow and ice have much lower thermal diffusivities than air. Thermal diffusivity is a measure of the speed at which temperature waves can penetrate a substance. Snow and ice are many orders of magnitude less efficient at diffusing heat than air | https://en.wikipedia.org/wiki?curid=47527 |
Cryosphere Snow cover insulates the ground surface, and sea ice insulates the underlying ocean, decoupling the surface-atmosphere interface with respect to both heat and moisture fluxes. The flux of moisture from a water surface is eliminated by even a thin skin of ice, whereas the flux of heat through thin ice continues to be substantial until it attains a thickness in excess of 30 to 40 cm. However, even a small amount of snow on top of the ice will dramatically reduce the heat flux and slow down the rate of ice growth. The insulating effect of snow also has major implications for the hydrological cycle. In non-permafrost regions, the insulating effect of snow is such that only near-surface ground freezes and deep-water drainage is uninterrupted. While snow and ice act to insulate the surface from large energy losses in winter, they also act to retard warming in the spring and summer because of the large amount of energy required to melt ice (the latent heat of fusion, 3.34 × 10⁵ J/kg at 0 °C). However, the strong static stability of the atmosphere over areas of extensive snow or ice tends to confine the immediate cooling effect to a relatively shallow layer, so that associated atmospheric anomalies are usually short-lived and local to regional in scale. In some areas of the world such as Eurasia, however, the cooling associated with a heavy snowpack and moist spring soils is known to play a role in modulating the summer monsoon circulation | https://en.wikipedia.org/wiki?curid=47527 |
Cryosphere Gutzler and Preston (1997) recently presented evidence for a similar snow-summer circulation feedback over the southwestern United States. The role of snow cover in modulating the monsoon is just one example of a short-term cryosphere-climate feedback involving the land surface and the atmosphere. There are numerous cryosphere-climate feedbacks in the global climate system. These operate over a wide range of spatial and temporal scales from local seasonal cooling of air temperatures to hemispheric-scale variations in ice sheets over time-scales of thousands of years. The feedback mechanisms involved are often complex and incompletely understood. For example, Curry "et al." (1995) showed that the so-called “simple” sea ice-albedo feedback involved complex interactions with lead fraction, melt ponds, ice thickness, snow cover, and sea-ice extent. Snow cover has the second-largest areal extent of any component of the cryosphere, with a mean maximum areal extent of approximately 47 million km². Most of the Earth's snow-covered area (SCA) is located in the Northern Hemisphere, and temporal variability is dominated by the seasonal cycle; Northern Hemisphere snow-cover extent ranges from 46.5 million km² in January to 3.8 million km² in August. North American winter SCA has exhibited an increasing trend over much of this century largely in response to an increase in precipitation | https://en.wikipedia.org/wiki?curid=47527 |
Cryosphere However, the available satellite data show that the hemispheric winter snow cover has exhibited little interannual variability over the 1972–1996 period, with a coefficient of variation (COV=s.d./mean) for January Northern Hemisphere snow cover of < 0.04. According to Groisman "et al." Northern Hemisphere spring snow cover should exhibit a decreasing trend to explain an observed increase in Northern Hemisphere spring air temperatures this century. Preliminary estimates of SCA from historical and reconstructed in situ snow-cover data suggest this is the case for Eurasia, but not for North America, where spring snow cover has remained close to current levels over most of this century. Because of the close relationship observed between hemispheric air temperature and snow-cover extent over the period of satellite data (IPCC 1996), there is considerable interest in monitoring Northern Hemisphere snow-cover extent for detecting and monitoring climate change. Snow cover is an extremely important storage component in the water balance, especially seasonal snowpacks in mountainous areas of the world. Though limited in extent, seasonal snowpacks in the Earth’s mountain ranges account for the major source of the runoff for stream flow and groundwater recharge over wide areas of the midlatitudes. For example, over 85% of the annual runoff from the Colorado River basin originates as snowmelt | https://en.wikipedia.org/wiki?curid=47527 |
Cryosphere Snowmelt runoff from the Earth's mountains fills the rivers and recharges the aquifers that over a billion people depend on for their water resources. Furthermore, over 40% of the world's protected areas are in mountains, attesting to their value both as unique ecosystems needing protection and as recreation areas for humans. Climate warming is expected to result in major changes to the partitioning of snow and rainfall, and to the timing of snowmelt, which will have important implications for water use and management. These changes also involve potentially important decadal and longer time-scale feedbacks to the climate system through temporal and spatial changes in soil moisture and runoff to the oceans (Walsh 1995). Freshwater fluxes from the snow cover into the marine environment may be important, as the total flux is probably of the same magnitude as that from the desalinated ridging and rubble areas of sea ice. In addition, there is an associated pulse of precipitated pollutants which accumulate over the Arctic winter in snowfall and are released into the ocean upon ablation of the sea ice. Sea ice covers much of the polar oceans and forms by freezing of sea water. Satellite data since the early 1970s reveal considerable seasonal, regional, and interannual variability in the sea-ice covers of both hemispheres. Seasonally, sea-ice extent in the Southern Hemisphere varies by a factor of 5, from a minimum of 3–4 million km² in February to a maximum of 17–20 million km² in September | https://en.wikipedia.org/wiki?curid=47527 |
Cryosphere The seasonal variation is much less in the Northern Hemisphere where the confined nature and high latitudes of the Arctic Ocean result in a much larger perennial ice cover, and the surrounding land limits the equatorward extent of wintertime ice. Thus, the seasonal variability in Northern Hemisphere ice extent varies by only a factor of 2, from a minimum of 7–9 million km² in September to a maximum of 14–16 million km² in March. The ice cover exhibits much greater interannual variability on the regional scale than on the hemispheric scale. For instance, in the region of the Sea of Okhotsk and Japan, maximum ice extent decreased from 1.3 million km² in 1983 to 0.85 million km² in 1984, a decrease of 35%, before rebounding the following year to 1.2 million km². The regional fluctuations in both hemispheres are such that for any several-year period of the satellite record some regions exhibit decreasing ice coverage while others exhibit increasing ice cover. The overall trend indicated in the passive microwave record from 1978 through mid-1995 shows that the extent of Arctic sea ice is decreasing 2.7% per decade. Subsequent work with the satellite passive-microwave data indicates that from late October 1978 through the end of 1996 the extent of Arctic sea ice decreased by 2.9% per decade while the extent of Antarctic sea ice increased by 1.3% per decade | https://en.wikipedia.org/wiki?curid=47527 |
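The percent-per-decade figures quoted above are obtained by fitting a linear trend to an extent series and normalizing by its mean. A minimal sketch of that computation, on a synthetic record (the numbers below are illustrative, not the actual passive-microwave data):

```python
import numpy as np

def trend_percent_per_decade(years, extent):
    """Least-squares linear trend of a sea-ice extent series, expressed
    as percent of the series mean per decade, the form in which the
    satellite-era trends are quoted above."""
    slope, _ = np.polyfit(years, extent, 1)       # extent units per year
    return 100.0 * slope * 10.0 / np.mean(extent)

# Hypothetical record: ~7.5 million km^2 mean with a slight decline
years = np.arange(1979, 1997)
extent = (7.5 - 0.02 * (years - 1979)
          + np.random.default_rng(0).normal(0, 0.05, years.size))
print(f"{trend_percent_per_decade(years, extent):+.1f}% per decade")
```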
Cryosphere The Intergovernmental Panel on Climate Change publication "Climate change 2013: The Physical Science Basis" stated that sea ice extent for the Northern Hemisphere showed a decrease of 3.8% ± 0.3% per decade from November 1978 to December 2012. Ice forms on rivers and lakes in response to seasonal cooling. The sizes of the ice bodies involved are too small to exert anything other than localized climatic effects. However, the freeze-up/break-up processes respond to large-scale and local weather factors, such that considerable interannual variability exists in the dates of appearance and disappearance of the ice. Long series of lake-ice observations can serve as a proxy climate record, and the monitoring of freeze-up and break-up trends may provide a convenient integrated and seasonally-specific index of climatic perturbations. Information on river-ice conditions is less useful as a climatic proxy because ice formation is strongly dependent on river-flow regime, which is affected by precipitation, snow melt, and watershed runoff as well as being subject to human interference that directly modifies channel flow, or that indirectly affects the runoff via land-use practices. Lake freeze-up depends on the heat storage in the lake and therefore on its depth, the rate and temperature of any inflow, and water-air energy fluxes. Information on lake depth is often unavailable, although some indication of the depth of shallow lakes in the Arctic can be obtained from airborne radar imagery during late winter (Sellman "et al | https://en.wikipedia.org/wiki?curid=47527 |
Cryosphere " 1975) and spaceborne optical imagery during summer (Duguay and Lafleur 1997). The timing of breakup is modified by snow depth on the ice as well as by ice thickness and freshwater inflow. Frozen ground (permafrost and seasonally frozen ground) occupies approximately 54 million km of the exposed land areas of the Northern Hemisphere (Zhang et al., 2003) and therefore has the largest areal extent of any component of the cryosphere. Permafrost (perennially frozen ground) may occur where mean annual air temperatures (MAAT) are less than −1 or −2 °C and is generally continuous where MAAT are less than −7 °C. In addition, its extent and thickness are affected by ground moisture content, vegetation cover, winter snow depth, and aspect. The global extent of permafrost is still not completely known, but it underlies approximately 20% of Northern Hemisphere land areas. Thicknesses exceed 600 m along the Arctic coast of northeastern Siberia and Alaska, but, toward the margins, permafrost becomes thinner and horizontally discontinuous. The marginal zones will be more immediately subject to any melting caused by a warming trend. Most of the presently existing permafrost formed during previous colder conditions and is therefore relic. However, permafrost may form under present-day polar climates where glaciers retreat or land emergence exposes unfrozen ground | https://en.wikipedia.org/wiki?curid=47527 |
Cryosphere Washburn (1973) concluded that most continuous permafrost is in balance with the present climate at its upper surface, but changes at the base depend on the present climate and geothermal heat flow; in contrast, most discontinuous permafrost is probably unstable or "in such delicate equilibrium that the slightest climatic or surface change will have drastic disequilibrium effects". Under warming conditions, the increasing depth of the summer active layer has significant impacts on the hydrologic and geomorphic regimes. Thawing and retreat of permafrost have been reported in the upper Mackenzie Valley and along the southern margin of its occurrence in Manitoba, but such observations are not readily quantified and generalized. Based on average latitudinal gradients of air temperature, an average northward displacement of the southern permafrost boundary by 50 to 150 km could be expected, under equilibrium conditions, for a 1 °C warming. Only a fraction of the permafrost zone consists of actual ground ice. The remainder (dry permafrost) is simply soil or rock at subfreezing temperatures. The ice volume is generally greatest in the uppermost permafrost layers and mainly comprises pore and segregated ice in Earth material. Measurements of bore-hole temperatures in permafrost can be used as indicators of net changes in temperature regime | https://en.wikipedia.org/wiki?curid=47527 |
Cryosphere Gold and Lachenbruch (1973) infer a 2–4 °C warming over 75 to 100 years at Cape Thompson, Alaska, where the upper 25% of the 400-m thick permafrost is unstable with respect to an equilibrium profile of temperature with depth (for the present mean annual surface temperature of −5 °C). Maritime influences may have biased this estimate, however. At Prudhoe Bay similar data imply a 1.8 °C warming over the last 100 years (Lachenbruch "et al." 1982). Further complications may be introduced by changes in snow-cover depths and the natural or artificial disturbance of the surface vegetation. The potential rates of permafrost thawing have been established by Osterkamp (1984) to be two centuries or less for 25-meter-thick permafrost in the discontinuous zone of interior Alaska, assuming warming from −0.4 to 0 °C in 3–4 years, followed by a further 2.6 °C rise. Although the response of permafrost (depth) to temperature change is typically a very slow process (Osterkamp 1984; Koster 1993), there is ample evidence for the fact that the active layer thickness quickly responds to a temperature change (Kane "et al." 1991). Under either a warming or a cooling scenario, global climate change would have a significant effect on the duration of frost-free periods in regions with both seasonally and perennially frozen ground. Ice sheets and glaciers are flowing ice masses that rest on solid land. They are controlled by snow accumulation, surface and basal melt, calving into surrounding oceans or lakes and internal dynamics | https://en.wikipedia.org/wiki?curid=47527 |
Cryosphere The latter results from gravity-driven creep flow ("glacial flow") within the ice body and sliding on the underlying land, which leads to thinning and horizontal spreading. Any imbalance of this dynamic equilibrium between mass gain, loss and transport due to flow results in either growing or shrinking ice bodies. Ice sheets are the greatest potential source of global freshwater, holding approximately 77% of the global total. This corresponds to 80 m of world sea-level equivalent, with Antarctica accounting for 90% of this. Greenland accounts for most of the remaining 10%, with other ice bodies and glaciers accounting for less than 0.5%. Because of their size in relation to annual rates of snow accumulation and melt, the residence time of water in ice sheets can extend to 100,000 or 1 million years. Consequently, any climatic perturbations produce slow responses, occurring over glacial and interglacial periods. Valley glaciers respond rapidly to climatic fluctuations with typical response times of 10–50 years. However, the response of individual glaciers may be asynchronous to the same climatic forcing because of differences in glacier length, elevation, slope, and speed of motion. Oerlemans (1994) provided evidence of coherent global glacier retreat which could be explained by a linear warming trend of 0.66 °C per 100 years | https://en.wikipedia.org/wiki?curid=47527 |
Cryosphere While glacier variations are likely to have minimal effects upon global climate, their recession may have contributed one third to one half of the observed 20th-century rise in sea level (Meier 1984; IPCC 1996). Furthermore, it is extremely likely that such extensive glacier recession as is currently observed in the Western Cordillera of North America, where runoff from glacierized basins is used for irrigation and hydropower, involves significant hydrological and ecosystem impacts. Effective water-resource planning and impact mitigation in such areas depends upon developing a sophisticated knowledge of the status of glacier ice and the mechanisms that cause it to change. Furthermore, a clear understanding of the mechanisms at work is crucial to interpreting the global-change signals that are contained in the time series of glacier mass balance records. Combined glacier mass balance estimates of the large ice sheets carry an uncertainty of about 20%. Studies based on estimated snowfall and mass output tend to indicate that the ice sheets are near balance or taking some water out of the oceans. Marine-based studies suggest sea-level rise from the Antarctic or rapid ice-shelf basal melting. Some authors (Paterson 1993; Alley 1997) have suggested that the difference between the observed rate of sea-level rise (roughly 2 mm/y) and the explained rate of sea-level rise from melting of mountain glaciers, thermal expansion of the ocean, etc | https://en.wikipedia.org/wiki?curid=47527 |
Cryosphere (roughly 1 mm/y or less) is similar to the modeled imbalance in the Antarctic (roughly 1 mm/y of sea-level rise; Huybrechts 1990), suggesting a contribution of sea-level rise from the Antarctic. Relationships between global climate and changes in ice extent are complex. The mass balance of land-based glaciers and ice sheets is determined by the accumulation of snow, mostly in winter, and warm-season ablation due primarily to net radiation and turbulent heat fluxes to melting ice and snow from warm-air advection (Munro 1990). However, most of Antarctica never experiences surface melting. Where ice masses terminate in the ocean, iceberg calving is the major contributor to mass loss. In this situation, the ice margin may extend out into deep water as a floating ice shelf, such as that in the Ross Sea. Despite the possibility that global warming could result in losses to the Greenland ice sheet being offset by gains to the Antarctic ice sheet, there is major concern about the possibility of a West Antarctic Ice Sheet collapse. The West Antarctic Ice Sheet is grounded on bedrock below sea level, and its collapse has the potential of raising the world sea level 6–7 m over a few hundred years. Most of the discharge of the West Antarctic Ice Sheet is via the five major ice streams (faster flowing ice) entering the Ross Ice Shelf, the Rutford Ice Stream entering Ronne-Filchner shelf of the Weddell Sea, and the Thwaites Glacier and Pine Island Glacier entering the Amundsen Ice Shelf | https://en.wikipedia.org/wiki?curid=47527 |
Cryosphere Opinions differ as to the present mass balance of these systems (Bentley 1983, 1985), principally because of the limited data. The West Antarctic Ice Sheet is stable so long as the Ross Ice Shelf is constrained by drag along its lateral boundaries and pinned by local grounding. | https://en.wikipedia.org/wiki?curid=47527 |
Celestial coordinate system In astronomy, a celestial coordinate system (or celestial reference system) is a system for specifying positions of satellites, planets, stars, galaxies, and other celestial objects. Coordinate systems can specify an object's position in three-dimensional space or plot merely its direction on a celestial sphere, if the object's distance is unknown or trivial. The coordinate systems are implemented in either spherical or rectangular coordinates. Spherical coordinates, projected on the celestial sphere, are analogous to the geographic coordinate system used on the surface of Earth. These differ in their choice of fundamental plane, which divides the celestial sphere into two equal hemispheres along a great circle. Rectangular coordinates, in appropriate units, are simply the cartesian equivalent of the spherical coordinates, with the same fundamental ("x"–"y") plane and primary ("x"-axis) direction. Each coordinate system is named after its choice of fundamental plane. The following table lists the common coordinate systems in use by the astronomical community. The fundamental plane divides the celestial sphere into two equal hemispheres and defines the baseline for the latitudinal coordinates, similar to the equator in the geographic coordinate system. The poles are located at ±90° from the fundamental plane. The primary direction is the starting point of the longitudinal coordinates | https://en.wikipedia.org/wiki?curid=48381 |
Celestial coordinate system The origin is the zero distance point, the "center of the celestial sphere", although the definition of celestial sphere is ambiguous about the definition of its center point. The "horizontal", or altitude-azimuth, system is based on the position of the observer on Earth, which revolves around its own axis once per sidereal day (23 hours, 56 minutes and 4.091 seconds) in relation to the star background. The positioning of a celestial object by the horizontal system varies with time, but is a useful coordinate system for locating and tracking objects for observers on Earth. It is based on the position of stars relative to an observer's ideal horizon. The "equatorial" coordinate system is centered at Earth's center, but fixed relative to the celestial poles and the March equinox. The coordinates are based on the location of stars relative to Earth's equator if it were projected out to an infinite distance. The equatorial describes the sky as seen from the Solar System, and modern star maps almost exclusively use equatorial coordinates. The "equatorial" system is the normal coordinate system for most professional and many amateur astronomers having an equatorial mount that follows the movement of the sky during the night. Celestial objects are found by adjusting the telescope's or other instrument's scales so that they match the equatorial coordinates of the selected object to observe | https://en.wikipedia.org/wiki?curid=48381 |
Celestial coordinate system Popular choices of pole and equator are the older B1950 and the modern J2000 systems, but a pole and equator "of date" can also be used, meaning one appropriate to the date under consideration, such as when a measurement of the position of a planet or spacecraft is made. There are also subdivisions into "mean of date" coordinates, which average out or ignore nutation, and "true of date," which include nutation. The fundamental plane is the plane of the Earth's orbit, called the ecliptic plane. There are two principal variants of the ecliptic coordinate system: geocentric ecliptic coordinates centered on the Earth and heliocentric ecliptic coordinates centered on the center of mass of the Solar System. The geocentric ecliptic system was the principal coordinate system for ancient astronomy and is still useful for computing the apparent motions of the Sun, Moon, and planets. The heliocentric ecliptic system describes the planets' orbital movement around the Sun, and centers on the barycenter of the Solar System (i.e. very close to the center of the Sun). The system is primarily used for computing the positions of planets and other Solar System bodies, as well as defining their orbital elements. The galactic coordinate system uses the approximate plane of our galaxy as its fundamental plane. The Solar System is still the center of the coordinate system, and the zero point is defined as the direction towards the galactic center | https://en.wikipedia.org/wiki?curid=48381 |
Celestial coordinate system Galactic latitude resembles the elevation above the galactic plane and galactic longitude determines direction relative to the center of the galaxy. The supergalactic coordinate system corresponds to a fundamental plane that contains a higher than average number of local galaxies in the sky as seen from Earth. Conversions between the various coordinate systems are given. See the notes before using these equations. The classical equations, derived from spherical trigonometry, for the longitudinal coordinate are presented to the right of a bracket; simply dividing the first equation by the second gives the convenient tangent equation seen on the left. The rotation matrix equivalent is given beneath each case. This division is ambiguous because tan has a period of 180° (π radians) whereas cos and sin have periods of 360° (2π radians). Note that azimuth ("A") is measured from the south point, turning positive to the west. Zenith distance, the angular distance along the great circle from the zenith to a celestial object, is simply the complementary angle of the altitude: "z" = 90° − "a". In solving the equation for "A", in order to avoid the ambiguity of the arctangent, use of the two-argument arctangent, denoted atan2("y", "x"), is recommended. The two-argument arctangent computes the arctangent of "y"/"x", and accounts for the quadrant in which it is being computed | https://en.wikipedia.org/wiki?curid=48381 |
Celestial coordinate system Thus, consistent with the convention of azimuth being measured from the south and opening positive to the west, the azimuth "A" follows from atan2; if the formula produces a negative value for "A", it can be rendered positive by simply adding 360°. Again, in solving the equation for the hour angle, use of the two-argument arctangent that accounts for the quadrant is recommended, again consistent with the convention of azimuth being measured from the south and opening positive to the west. These equations are for converting equatorial coordinates to Galactic coordinates: (α_G, δ_G) are the equatorial coordinates of the North Galactic Pole and l_NCP is the Galactic longitude of the North Celestial Pole. Referred to J2000.0, the values of these quantities are α_G = 192.85948°, δ_G = 27.12825° and l_NCP = 122.93192°. If the equatorial coordinates are referred to another equinox, they must be precessed to their place at J2000.0 before applying these formulae. The inverse equations convert back to equatorial coordinates referred to J2000.0. | https://en.wikipedia.org/wiki?curid=48381 |
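A minimal sketch of the two conversions discussed above, in Python. The horizontal-coordinate step uses the stated convention (azimuth from the south point, positive to the west) with atan2 resolving the quadrant; since the document's own equations were lost in extraction, the spherical-trigonometry forms below are the standard ones and should be checked against a reference before use.

```python
import math

# J2000.0 orientation of the galactic frame, as given above:
ALPHA_NGP = math.radians(192.85948)  # RA of the North Galactic Pole
DELTA_NGP = math.radians(27.12825)   # Dec of the North Galactic Pole
L_NCP = math.radians(122.93192)      # galactic longitude of the NCP

def equatorial_to_horizontal(dec_deg, hour_angle_deg, lat_deg):
    """Declination and hour angle to (altitude, azimuth), azimuth
    measured from the south point and positive toward the west."""
    dec, h, phi = map(math.radians, (dec_deg, hour_angle_deg, lat_deg))
    alt = math.asin(math.sin(phi) * math.sin(dec)
                    + math.cos(phi) * math.cos(dec) * math.cos(h))
    az = math.atan2(math.sin(h),
                    math.cos(h) * math.sin(phi)
                    - math.tan(dec) * math.cos(phi))
    return math.degrees(alt), math.degrees(az) % 360.0

def equatorial_to_galactic(ra_deg, dec_deg):
    """Equatorial (RA, Dec), J2000.0, to galactic (l, b); atan2
    resolves the quadrant ambiguity of the tangent form."""
    ra, dec = math.radians(ra_deg), math.radians(dec_deg)
    da = ra - ALPHA_NGP
    b = math.asin(math.sin(DELTA_NGP) * math.sin(dec)
                  + math.cos(DELTA_NGP) * math.cos(dec) * math.cos(da))
    l = L_NCP - math.atan2(
        math.cos(dec) * math.sin(da),
        math.cos(DELTA_NGP) * math.sin(dec)
        - math.sin(DELTA_NGP) * math.cos(dec) * math.cos(da))
    return math.degrees(l) % 360.0, math.degrees(b)

# The galactic centre (roughly RA 266.4 deg, Dec -28.9 deg) should map
# to l and b both near zero:
print(equatorial_to_galactic(266.4, -28.9))
```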
Volatilisation Volatilization is the process whereby a dissolved sample is vaporised. In atomic spectroscopy this is usually a two-step process. The analyte is turned into small droplets in a nebuliser which are entrained in a gas flow which is in turn volatilised in a high-temperature flame in the case of AAS, or volatilised in a gas plasma torch in the case of ICP spectroscopy. Herbicide volatilisation refers to evaporation or sublimation of a volatile herbicide. The effect of the gaseous chemical is lost at its intended place of application, and it may move downwind and affect other plants not intended to be affected, causing crop damage. Herbicides vary in their susceptibility to volatilisation. Prompt incorporation of the herbicide into the soil may reduce or prevent volatilisation. Wind, temperature, and humidity also affect the rate of volatilisation, with humidity reducing it. 2,4-D and dicamba are commonly used chemicals that are known to be subject to volatilisation, but there are many others. Application of herbicides later in the season to protect herbicide-resistant genetically modified plants increases the risk of volatilisation as the temperature is higher and incorporation into the soil impractical. Herbicide applied as a powder or a mist can also drift in the wind in solid form as dust or liquid form as tiny drops. However, a transformation of known herbicides, such as glyphosate, dicamba or MCPA, into the form of herbicidal ionic liquids proved to be a solution to this particular problem | https://en.wikipedia.org/wiki?curid=49484 |
Volatilisation (doi:10.1021/acssuschemeng.7b01224), since herbicidal ionic systems express lower susceptibility to volatilisation (doi:10.1002/cplu.201800251). | https://en.wikipedia.org/wiki?curid=49484 |
Interacting boson model The interacting boson model (IBM) is a model in nuclear physics in which nucleons (protons or neutrons) pair up, essentially acting as a single particle with boson properties, with integral spin of 0, 2 or 4. It is sometimes known as the Interacting boson approximation (IBA). The IBM1/IBM-I model treats both types of nucleons the same and considers only pairs of nucleons coupled to total angular momentum 0 and 2, called respectively, s and d bosons. The IBM2/IBM-II model treats protons and neutrons separately. Both models are restricted to nuclei with even numbers of protons and neutrons. The model can be used to predict vibrational and rotational modes of non-spherical nuclei. This model was invented by Akito Arima and Francesco Iachello in 1974. | https://en.wikipedia.org/wiki?curid=50604 |
Astronomy (from the Greek ἀστρονομία) is a natural science that studies celestial objects and phenomena. It uses mathematics, physics, and chemistry in order to explain their origin and evolution. Objects of interest include planets, moons, stars, nebulae, galaxies, and comets. Relevant phenomena include supernova explosions, gamma ray bursts, quasars, blazars, pulsars, and cosmic microwave background radiation. More generally, astronomy studies everything that originates outside Earth's atmosphere. Cosmology is a branch of astronomy. It studies the Universe as a whole. Astronomy is one of the oldest natural sciences. The early civilizations in recorded history made methodical observations of the night sky. These include the Babylonians, Greeks, Indians, Egyptians, Chinese, Maya, and many ancient indigenous peoples of the Americas. In the past, astronomy included disciplines as diverse as astrometry, celestial navigation, observational astronomy, and the making of calendars. Nowadays, professional astronomy is often said to be the same as astrophysics. Professional astronomy is split into observational and theoretical branches. Observational astronomy is focused on acquiring data from observations of astronomical objects. This data is then analyzed using basic principles of physics. Theoretical astronomy is oriented toward the development of computer or analytical models to describe astronomical objects and phenomena. These two fields complement each other | https://en.wikipedia.org/wiki?curid=50650 |
Astronomy Theoretical astronomy seeks to explain observational results and observations are used to confirm theoretical results. Amateurs play an active role in astronomy. It is one of the few sciences in which this is the case. This is especially true for the discovery and observation of transient events. Amateur astronomers have helped with many important discoveries, such as finding new comets. "Astronomy" (from the Greek ἀστρονομία from ἄστρον "astron", "star" and -νομία "-nomia" from νόμος "nomos", "law" or "culture") means "law of the stars" (or "culture of the stars" depending on the translation). Astronomy should not be confused with astrology, the belief system which claims that human affairs are correlated with the positions of celestial objects. Although the two fields share a common origin, they are now entirely distinct. "Astronomy" and "astrophysics" are synonyms. Based on strict dictionary definitions, "astronomy" refers to "the study of objects and matter outside the Earth's atmosphere and of their physical and chemical properties," while "astrophysics" refers to the branch of astronomy dealing with "the behavior, physical properties, and dynamic processes of celestial objects and phenomena". In some cases, as in the introduction of the introductory textbook "The Physical Universe" by Frank Shu, "astronomy" may be used to describe the qualitative study of the subject, whereas "astrophysics" is used to describe the physics-oriented version of the subject | https://en.wikipedia.org/wiki?curid=50650 |
Astronomy However, since most modern astronomical research deals with subjects related to physics, modern astronomy could actually be called astrophysics. Some fields, such as astrometry, are purely astronomy rather than also astrophysics. Various departments in which scientists carry out research on this subject may use "astronomy" and "astrophysics", partly depending on whether the department is historically affiliated with a physics department, and many professional astronomers have physics rather than astronomy degrees. Some titles of the leading scientific journals in this field include "The Astronomical Journal", "The Astrophysical Journal", and "Astronomy & Astrophysics". In early historic times, astronomy only consisted of the observation and predictions of the motions of objects visible to the naked eye. In some locations, early cultures assembled massive artifacts that possibly had some astronomical purpose. In addition to their ceremonial uses, these observatories could be employed to determine the seasons, an important factor in knowing when to plant crops and in understanding the length of the year. Before tools such as the telescope were invented, early study of the stars was conducted using the naked eye. As civilizations developed, most notably in Mesopotamia, Greece, Persia, India, China, Egypt, and Central America, astronomical observatories were assembled and ideas on the nature of the Universe began to develop | https://en.wikipedia.org/wiki?curid=50650 |
Astronomy Most early astronomy consisted of mapping the positions of the stars and planets, a science now referred to as astrometry. From these observations, early ideas about the motions of the planets were formed, and the nature of the Sun, Moon and the Earth in the Universe were explored philosophically. The Earth was believed to be the center of the Universe with the Sun, the Moon and the stars rotating around it. This is known as the geocentric model of the Universe, or the Ptolemaic system, named after Ptolemy. A particularly important early development was the beginning of mathematical and scientific astronomy, which began among the Babylonians, who laid the foundations for the later astronomical traditions that developed in many other civilizations. The Babylonians discovered that lunar eclipses recurred in a repeating cycle known as a saros. Following the Babylonians, significant advances in astronomy were made in ancient Greece and the Hellenistic world. Greek astronomy is characterized from the start by seeking a rational, physical explanation for celestial phenomena. In the 3rd century BC, Aristarchus of Samos estimated the size and distance of the Moon and Sun, and he proposed a model of the Solar System where the Earth and planets rotated around the Sun, now called the heliocentric model. In the 2nd century BC, Hipparchus discovered precession, calculated the size and distance of the Moon and invented the earliest known astronomical devices such as the astrolabe | https://en.wikipedia.org/wiki?curid=50650 |
Astronomy Hipparchus also created a comprehensive catalog of 1020 stars, and most of the constellations of the northern hemisphere derive from Greek astronomy. The Antikythera mechanism (c. 150–80 BC) was an early analog computer designed to calculate the location of the Sun, Moon, and planets for a given date. Technological artifacts of similar complexity did not reappear until the 14th century, when mechanical astronomical clocks appeared in Europe. Medieval Europe housed a number of important astronomers. Richard of Wallingford (1292–1336) made major contributions to astronomy and horology, including the invention of the first astronomical clock, the Rectangulus which allowed for the measurement of angles between planets and other astronomical bodies, as well as an equatorium called the "Albion" which could be used for astronomical calculations such as lunar, solar and planetary longitudes and could predict eclipses. Nicole Oresme (1320–1382) and Jean Buridan (1300–1361) first discussed evidence for the rotation of the Earth; furthermore, Buridan also developed the theory of impetus (predecessor of the modern scientific theory of inertia) which was able to show planets were capable of motion without the intervention of angels. Georg von Peuerbach (1423–1461) and Regiomontanus (1436–1476) helped make astronomical progress instrumental to Copernicus's development of the heliocentric model decades later. Astronomy flourished in the Islamic world and other parts of the world | https://en.wikipedia.org/wiki?curid=50650 |
Astronomy This led to the emergence of the first astronomical observatories in the Muslim world by the early 9th century. In 964, the Andromeda Galaxy, the largest galaxy in the Local Group, was described by the Persian Muslim astronomer Abd al-Rahman al-Sufi in his "Book of Fixed Stars". The SN 1006 supernova, the brightest apparent magnitude stellar event in recorded history, was observed by the Egyptian Arabic astronomer Ali ibn Ridwan and Chinese astronomers in 1006. Some of the prominent Islamic (mostly Persian and Arab) astronomers who made significant contributions to the science include Al-Battani, Thebit, Abd al-Rahman al-Sufi, Biruni, Abū Ishāq Ibrāhīm al-Zarqālī, Al-Birjandi, and the astronomers of the Maragheh and Samarkand observatories. Astronomers during that time introduced many Arabic names now used for individual stars. It is also believed that the ruins at Great Zimbabwe and Timbuktu may have housed astronomical observatories. Europeans had previously believed that there had been no astronomical observation in sub-Saharan Africa during the pre-colonial Middle Ages, but modern discoveries show otherwise. For over six centuries (from the recovery of ancient learning during the late Middle Ages into the Enlightenment), the Roman Catholic Church gave more financial and social support to the study of astronomy than probably all other institutions. Among the Church's motives was finding the date for Easter. During the Renaissance, Nicolaus Copernicus proposed a heliocentric model of the solar system | https://en.wikipedia.org/wiki?curid=50650 |
Astronomy His work was defended by Galileo Galilei and expanded upon by Johannes Kepler. Kepler was the first to devise a system that correctly described the details of the motion of the planets around the Sun. However, Kepler did not succeed in formulating a theory behind the laws he wrote down. It was Isaac Newton, with his invention of celestial dynamics and his law of gravitation, who finally explained the motions of the planets. Newton also developed the reflecting telescope. Improvements in the size and quality of the telescope led to further discoveries. The English astronomer John Flamsteed catalogued over 3000 stars. More extensive star catalogues were produced by Nicolas Louis de Lacaille. The astronomer William Herschel made a detailed catalog of nebulosity and clusters, and in 1781 discovered the planet Uranus, the first new planet found. The first distance to a star was announced in 1838, when the parallax of 61 Cygni was measured by Friedrich Bessel. During the 18th–19th centuries, the study of the three-body problem by Leonhard Euler, Alexis Claude Clairaut, and Jean le Rond d'Alembert led to more accurate predictions about the motions of the Moon and planets. This work was further refined by Joseph-Louis Lagrange and Pierre Simon Laplace, allowing the masses of the planets and moons to be estimated from their perturbations. Significant advances in astronomy came about with the introduction of new technology, including the spectroscope and photography | https://en.wikipedia.org/wiki?curid=50650 |
Astronomy Joseph von Fraunhofer discovered about 600 bands in the spectrum of the Sun in 1814–15, which, in 1859, Gustav Kirchhoff ascribed to the presence of different elements. Stars were proven to be similar to the Earth's own Sun, but with a wide range of temperatures, masses, and sizes. The existence of the Earth's galaxy, the Milky Way, as its own group of stars was only proved in the 20th century, along with the existence of "external" galaxies. The observed recession of those galaxies led to the discovery of the expansion of the Universe. Theoretical astronomy led to speculations on the existence of objects such as black holes and neutron stars, which have been used to explain such observed phenomena as quasars, pulsars, blazars, and radio galaxies. Physical cosmology made huge advances during the 20th century. In the early 20th century the model of the Big Bang theory was formulated, heavily evidenced by cosmic microwave background radiation, Hubble's law, and the cosmological abundances of elements. Space telescopes have enabled measurements in parts of the electromagnetic spectrum normally blocked or blurred by the atmosphere. In February 2016, it was revealed that the LIGO project had detected evidence of gravitational waves in the previous September. The main source of information about celestial bodies and other objects is visible light, or more generally electromagnetic radiation | https://en.wikipedia.org/wiki?curid=50650 |
Astronomy Observational astronomy may be categorized according to the corresponding region of the electromagnetic spectrum on which the observations are made. Some parts of the spectrum can be observed from the Earth's surface, while other parts are only observable from either high altitudes or outside the Earth's atmosphere. Specific information on these subfields is given below. Radio astronomy uses radiation with wavelengths greater than approximately one millimeter, outside the visible range. Radio astronomy is different from most other forms of observational astronomy in that the observed radio waves can be treated as waves rather than as discrete photons. Hence, it is relatively easier to measure both the amplitude and phase of radio waves, whereas this is not as easily done at shorter wavelengths. Although some radio waves are emitted directly by astronomical objects, a product of thermal emission, most of the radio emission that is observed is the result of synchrotron radiation, which is produced when electrons orbit magnetic fields. Additionally, a number of spectral lines produced by interstellar gas, notably the hydrogen spectral line at 21 cm, are observable at radio wavelengths. A wide variety of other objects are observable at radio wavelengths, including supernovae, interstellar gas, pulsars, and active galactic nuclei. Infrared astronomy is founded on the detection and analysis of infrared radiation, wavelengths longer than red light and outside the range of our vision | https://en.wikipedia.org/wiki?curid=50650 |
Astronomy The infrared spectrum is useful for studying objects that are too cold to radiate visible light, such as planets, circumstellar disks or nebulae whose light is blocked by dust. The longer wavelengths of infrared can penetrate clouds of dust that block visible light, allowing the observation of young stars embedded in molecular clouds and the cores of galaxies. Observations from the Wide-field Infrared Survey Explorer (WISE) have been particularly effective at unveiling numerous Galactic protostars and their host star clusters. With the exception of infrared wavelengths close to visible light, such radiation is heavily absorbed by the atmosphere, or masked, as the atmosphere itself produces significant infrared emission. Consequently, infrared observatories have to be located in high, dry places on Earth or in space. Some molecules radiate strongly in the infrared. This allows the study of the chemistry of space; more specifically it can detect water in comets. Historically, optical astronomy, also called visible light astronomy, is the oldest form of astronomy. Images of observations were originally drawn by hand. In the late 19th century and most of the 20th century, images were made using photographic equipment. Modern images are made using digital detectors, particularly using charge-coupled devices (CCDs) and recorded on modern media | https://en.wikipedia.org/wiki?curid=50650 |
Astronomy Although visible light itself extends from approximately 4000 Å to 7000 Å (400 nm to 700 nm), that same equipment can be used to observe some near-ultraviolet and near-infrared radiation. Ultraviolet astronomy employs ultraviolet wavelengths between approximately 100 and 3200 Å (10 to 320 nm). Light at those wavelengths is absorbed by the Earth's atmosphere, requiring observations at these wavelengths to be performed from the upper atmosphere or from space. Ultraviolet astronomy is best suited to the study of thermal radiation and spectral emission lines from hot blue stars (OB stars) that are very bright in this wave band. This includes the blue stars in other galaxies, which have been the targets of several ultraviolet surveys. Other objects commonly observed in ultraviolet light include planetary nebulae, supernova remnants, and active galactic nuclei. However, as ultraviolet light is easily absorbed by interstellar dust, an adjustment of ultraviolet measurements is necessary. X-ray astronomy uses X-ray wavelengths. Typically, X-ray radiation is produced by synchrotron emission (the result of electrons orbiting magnetic field lines), thermal emission from thin gases above 10⁷ (10 million) kelvins, and thermal emission from thick gases above 10⁷ kelvins. Since X-rays are absorbed by the Earth's atmosphere, all X-ray observations must be performed from high-altitude balloons, rockets, or X-ray astronomy satellites | https://en.wikipedia.org/wiki?curid=50650 |
Astronomy Notable X-ray sources include X-ray binaries, pulsars, supernova remnants, elliptical galaxies, clusters of galaxies, and active galactic nuclei. Gamma ray astronomy observes astronomical objects at the shortest wavelengths of the electromagnetic spectrum. Gamma rays may be observed directly by satellites such as the Compton Gamma Ray Observatory or by specialized telescopes called atmospheric Cherenkov telescopes. The Cherenkov telescopes do not detect the gamma rays directly but instead detect the flashes of visible light produced when gamma rays are absorbed by the Earth's atmosphere. Most gamma-ray emitting sources are actually gamma-ray bursts, objects which only produce gamma radiation for a few milliseconds to thousands of seconds before fading away. Only 10% of gamma-ray sources are non-transient sources. These steady gamma-ray emitters include pulsars, neutron stars, and black hole candidates such as active galactic nuclei. In addition to electromagnetic radiation, a few other events originating from great distances may be observed from the Earth. In neutrino astronomy, astronomers use heavily shielded underground facilities such as SAGE, GALLEX, and Kamioka II/III for the detection of neutrinos. The vast majority of the neutrinos streaming through the Earth originate from the Sun, but 24 neutrinos were also detected from supernova 1987A | https://en.wikipedia.org/wiki?curid=50650 |
Astronomy Cosmic rays, which consist of very high energy particles (atomic nuclei) that can decay or be absorbed when they enter the Earth's atmosphere, result in a cascade of secondary particles which can be detected by current observatories. Some future neutrino detectors may also be sensitive to the particles produced when cosmic rays hit the Earth's atmosphere. Gravitational-wave astronomy is an emerging field of astronomy that employs gravitational-wave detectors to collect observational data about distant massive objects. A few observatories have been constructed, such as the "Laser Interferometer Gravitational-Wave Observatory" (LIGO). LIGO made its first detection on 14 September 2015, observing gravitational waves from a binary black hole. A second gravitational wave was detected on 26 December 2015 and additional observations should continue, but gravitational waves require extremely sensitive instruments. The combination of observations made using electromagnetic radiation, neutrinos or gravitational waves and other complementary information, is known as multi-messenger astronomy. One of the oldest fields in astronomy, and in all of science, is the measurement of the positions of celestial objects. Historically, accurate knowledge of the positions of the Sun, Moon, planets and stars has been essential in celestial navigation (the use of celestial objects to guide navigation) and in the making of calendars | https://en.wikipedia.org/wiki?curid=50650 |
Astronomy Careful measurement of the positions of the planets has led to a solid understanding of gravitational perturbations, and an ability to determine past and future positions of the planets with great accuracy, a field known as celestial mechanics. More recently, the tracking of near-Earth objects allows for predictions of close encounters or potential collisions of the Earth with those objects. The measurement of stellar parallax of nearby stars provides a fundamental baseline in the cosmic distance ladder that is used to measure the scale of the Universe. Parallax measurements of nearby stars provide an absolute baseline for the properties of more distant stars, as their properties can be compared. Measurements of the radial velocity and proper motion of stars allow astronomers to plot the movement of these systems through the Milky Way galaxy. Astrometric results are the basis used to calculate the distribution of the dark matter inferred to exist in the galaxy. During the 1990s, the measurement of the stellar wobble of nearby stars was used to detect large extrasolar planets orbiting those stars. Theoretical astronomers use several tools including analytical models and computational numerical simulations; each has its particular advantages. Analytical models of a process are better for giving broader insight into the heart of what is going on. Numerical models reveal the existence of phenomena and effects otherwise unobserved | https://en.wikipedia.org/wiki?curid=50650 |
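As a brief illustration of the parallax baseline described above, the standard distance–parallax relation is shown below; the example values are rounded and are not taken from the source article.

```latex
% A star observed with annual parallax p (in arcseconds) lies at a distance
d\ [\mathrm{pc}] = \frac{1}{p\ [\mathrm{arcsec}]}

% Example: p = 0.1'' gives d = 10 pc (about 32.6 light-years);
% Proxima Centauri, with p \approx 0.77'', lies at d \approx 1.3 pc.
```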
Astronomy Theorists in astronomy endeavor to create theoretical models and from the results predict observational consequences of those models. The observation of a phenomenon predicted by a model allows astronomers to select between several alternate or conflicting models as the one best able to describe the phenomena. Theorists also try to generate or modify models to take into account new data. In the case of an inconsistency between the data and model's results, the general tendency is to try to make minimal modifications to the model so that it produces results that fit the data. In some cases, a large amount of inconsistent data over time may lead to total abandonment of a model. Phenomena modeled by theoretical astronomers include: stellar dynamics and evolution; galaxy formation; large-scale distribution of matter in the Universe; origin of cosmic rays; general relativity and physical cosmology, including string cosmology and astroparticle physics. Astrophysical relativity serves as a tool to gauge the properties of large scale structures for which gravitation plays a significant role in physical phenomena investigated and as the basis for black hole ("astro")physics and the study of gravitational waves. Some widely accepted and studied theories and models in astronomy, now included in the Lambda-CDM model are the Big Bang, dark matter and fundamental theories of physics | https://en.wikipedia.org/wiki?curid=50650 |
Astronomy Along with cosmic inflation, dark matter and dark energy are the current leading topics in astronomy, as their discovery and the controversy surrounding them originated during the study of galaxies. Astrophysics is the branch of astronomy that employs the principles of physics and chemistry "to ascertain the nature of the astronomical objects, rather than their positions or motions in space". Among the objects studied are the Sun, other stars, galaxies, extrasolar planets, the interstellar medium and the cosmic microwave background. Their emissions are examined across all parts of the electromagnetic spectrum, and the properties examined include luminosity, density, temperature, and chemical composition. Because astrophysics is a very broad subject, "astrophysicists" typically apply many disciplines of physics, including mechanics, electromagnetism, statistical mechanics, thermodynamics, quantum mechanics, relativity, nuclear and particle physics, and atomic and molecular physics. In practice, modern astronomical research often involves a substantial amount of work in the realms of theoretical and observational physics. Some areas of study for astrophysicists include their attempts to determine the properties of dark matter, dark energy, and black holes; whether or not time travel is possible, wormholes can form, or the multiverse exists; and the origin and ultimate fate of the universe | https://en.wikipedia.org/wiki?curid=50650 |
Astronomy Topics also studied by theoretical astrophysicists include Solar System formation and evolution; stellar dynamics and evolution; galaxy formation and evolution; magnetohydrodynamics; large-scale structure of matter in the universe; origin of cosmic rays; general relativity and physical cosmology, including string cosmology and astroparticle physics. Astrochemistry is the study of the abundance and reactions of molecules in the Universe, and their interaction with radiation. The discipline is an overlap of astronomy and chemistry. The word "astrochemistry" may be applied to both the Solar System and the interstellar medium. The study of the abundance of elements and isotope ratios in Solar System objects, such as meteorites, is also called cosmochemistry, while the study of interstellar atoms and molecules and their interaction with radiation is sometimes called molecular astrophysics. The formation, atomic and chemical composition, evolution and fate of molecular gas clouds is of special interest, because it is from these clouds that solar systems form. Studies in this field contribute to the understanding of the formation of the Solar System, Earth's origin and geology, abiogenesis, and the origin of climate and oceans. Astrobiology is an interdisciplinary scientific field concerned with the origins, early evolution, distribution, and future of life in the universe. Astrobiology considers the question of whether extraterrestrial life exists, and how humans can detect it if it does | https://en.wikipedia.org/wiki?curid=50650 |
Astronomy The term exobiology is similar. Astrobiology makes use of molecular biology, biophysics, biochemistry, chemistry, astronomy, physical cosmology, exoplanetology and geology to investigate the possibility of life on other worlds and help recognize biospheres that might be different from that on Earth. The origin and early evolution of life is an inseparable part of the discipline of astrobiology. Astrobiology concerns itself with interpretation of existing scientific data, and although speculation is entertained to give context, astrobiology concerns itself primarily with hypotheses that fit firmly into existing scientific theories. This interdisciplinary field encompasses research on the origin of planetary systems, origins of organic compounds in space, rock-water-carbon interactions, abiogenesis on Earth, planetary habitability, research on biosignatures for life detection, and studies on the potential for life to adapt to challenges on Earth and in outer space. Cosmology (from the Greek κόσμος ("kosmos") "world, universe" and λόγος ("logos") "word, study" or literally "logic") could be considered the study of the Universe as a whole. Observations of the large-scale structure of the Universe, a branch known as physical cosmology, have provided a deep understanding of the formation and evolution of the cosmos. Fundamental to modern cosmology is the well-accepted theory of the Big Bang, wherein our Universe began at a single point in time, and thereafter expanded over the course of 13.8 billion years to its present condition | https://en.wikipedia.org/wiki?curid=50650 |
Astronomy The concept of the Big Bang can be traced back to the discovery of the microwave background radiation in 1965. In the course of this expansion, the Universe underwent several evolutionary stages. In the very early moments, it is theorized that the Universe experienced a very rapid cosmic inflation, which homogenized the starting conditions. Thereafter, nucleosynthesis produced the elemental abundance of the early Universe. (See also nucleocosmochronology.) When the first neutral atoms formed from a sea of primordial ions, space became transparent to radiation, releasing the energy viewed today as the microwave background radiation. The expanding Universe then underwent a Dark Age due to the lack of stellar energy sources. A hierarchical structure of matter began to form from minute variations in the mass density of space. Matter accumulated in the densest regions, forming clouds of gas and the earliest stars, the Population III stars. These massive stars triggered the reionization process and are believed to have created many of the heavy elements in the early Universe, which, through nuclear decay, create lighter elements, allowing the cycle of nucleosynthesis to continue longer. Gravitational aggregations clustered into filaments, leaving voids in the gaps. Gradually, organizations of gas and dust merged to form the first primitive galaxies | https://en.wikipedia.org/wiki?curid=50650 |
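The link between recombination and the microwave background radiation observed today can be made quantitative with the standard temperature–redshift relation; the numbers below are commonly quoted approximate values, included only as an illustration.

```latex
% Background radiation temperature as a function of redshift z
T(z) = T_{0}\,(1 + z), \qquad T_{0} \approx 2.73\ \mathrm{K}

% Recombination (last scattering) occurred at z \approx 1100, so
T(z \approx 1100) \approx 2.73\ \mathrm{K} \times 1100 \approx 3000\ \mathrm{K}
```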
Astronomy Over time, these pulled in more matter, and were often organized into groups and clusters of galaxies, then into larger-scale superclusters. Various fields of physics are crucial to studying the universe. Interdisciplinary studies involve the fields of quantum mechanics, particle physics, plasma physics, condensed matter physics, statistical mechanics, optics, and nuclear physics. Fundamental to the structure of the Universe is the existence of dark matter and dark energy. These are now thought to be its dominant components, forming about 96% of the mass–energy content of the Universe. For this reason, much effort is expended in trying to understand the physics of these components. The study of objects outside our galaxy is a branch of astronomy concerned with the formation and evolution of galaxies, their morphology (description) and classification, the observation of active galaxies, and at a larger scale, the groups and clusters of galaxies. The latter is important for the understanding of the large-scale structure of the cosmos. Most galaxies are organized into distinct shapes that allow for classification schemes. They are commonly divided into spiral, elliptical and irregular galaxies. As the name suggests, an elliptical galaxy has the cross-sectional shape of an ellipse. The stars move along random orbits with no preferred direction. These galaxies contain little or no interstellar dust, few star-forming regions, and older stars | https://en.wikipedia.org/wiki?curid=50650 |
Astronomy Elliptical galaxies are more commonly found at the core of galactic clusters, and may have been formed through mergers of large galaxies. A spiral galaxy is organized into a flat, rotating disk, usually with a prominent bulge or bar at the center, and trailing bright arms that spiral outward. The arms are dusty regions of star formation within which massive young stars produce a blue tint. Spiral galaxies are typically surrounded by a halo of older stars. Both the Milky Way and one of our nearest galaxy neighbors, the Andromeda Galaxy, are spiral galaxies. Irregular galaxies are chaotic in appearance, and are neither spiral nor elliptical. About a quarter of all galaxies are irregular, and the peculiar shapes of such galaxies may be the result of gravitational interaction. An active galaxy is a formation that emits a significant amount of its energy from a source other than its stars, dust and gas. It is powered by a compact region at the core, thought to be a super-massive black hole that is emitting radiation from in-falling material. A radio galaxy is an active galaxy that is very luminous in the radio portion of the spectrum, and is emitting immense plumes or lobes of gas. Active galaxies that emit shorter-wavelength, high-energy radiation include Seyfert galaxies, quasars, and blazars. Quasars are believed to be the most consistently luminous objects in the known universe. The large-scale structure of the cosmos is represented by groups and clusters of galaxies | https://en.wikipedia.org/wiki?curid=50650 |
Astronomy This structure is organized into a hierarchy of groupings, with the largest being the superclusters. The collective matter is formed into filaments and walls, leaving large voids between. The Solar System orbits within the Milky Way, a barred spiral galaxy that is a prominent member of the Local Group of galaxies. It is a rotating mass of gas, dust, stars and other objects, held together by mutual gravitational attraction. As the Earth is located within the dusty outer arms, there are large portions of the Milky Way that are obscured from view. In the center of the Milky Way is the core, a bar-shaped bulge with what is believed to be a supermassive black hole at its center. This is surrounded by four primary arms that spiral from the core. The arms are regions of active star formation that contain many younger, population I stars. The disk is surrounded by a spheroid halo of older, population II stars, as well as relatively dense concentrations of stars known as globular clusters. Between the stars lies the interstellar medium, a region of sparse matter. In the densest regions, molecular clouds of molecular hydrogen and other elements create star-forming regions. These begin as compact pre-stellar cores or dark nebulae, which concentrate and collapse (in volumes determined by the Jeans length) to form compact protostars. As the more massive stars appear, they transform the cloud into an H II region (ionized atomic hydrogen) of glowing gas and plasma | https://en.wikipedia.org/wiki?curid=50650 |
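The Jeans length mentioned above sets the scale below which a cloud fragment collapses under its own gravity. One common form of the expression is shown here for illustration; it is a standard result, not quoted from the source article.

```latex
% Jeans length for gas with sound speed c_s and mass density \rho
\lambda_{J} = c_{s}\sqrt{\frac{\pi}{G\rho}}

% Regions larger than \lambda_J (equivalently, more massive than the Jeans mass)
% are unstable to gravitational collapse; colder or denser gas has a smaller
% \lambda_J and therefore fragments more readily into protostars.
```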
Astronomy The stellar wind and supernova explosions from these stars eventually cause the cloud to disperse, often leaving behind one or more young open clusters of stars. These clusters gradually disperse, and the stars join the population of the Milky Way. Kinematic studies of matter in the Milky Way and other galaxies have demonstrated that there is more mass than can be accounted for by visible matter. A dark matter halo appears to dominate the mass, although the nature of this dark matter remains undetermined. The study of stars and stellar evolution is fundamental to our understanding of the Universe. The astrophysics of stars has been determined through observation and theoretical understanding, and from computer simulations of the interior. Star formation occurs in dense regions of dust and gas, known as giant molecular clouds. When destabilized, cloud fragments can collapse under the influence of gravity, to form a protostar. A sufficiently dense and hot core region will trigger nuclear fusion, thus creating a main-sequence star. Almost all elements heavier than hydrogen and helium were created inside the cores of stars. The characteristics of the resulting star depend primarily upon its starting mass. The more massive the star, the greater its luminosity, and the more rapidly it fuses its hydrogen fuel into helium in its core. Over time, this hydrogen fuel is completely converted into helium, and the star begins to evolve. The fusion of helium requires a higher core temperature | https://en.wikipedia.org/wiki?curid=50650 |
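The statement that more massive stars are more luminous and exhaust their hydrogen faster is often summarized by an approximate empirical mass–luminosity relation. The exponents below are rough textbook values for Sun-like main-sequence stars, given only as an illustration.

```latex
% Approximate main-sequence mass--luminosity relation (solar units)
\frac{L}{L_{\odot}} \approx \left(\frac{M}{M_{\odot}}\right)^{3.5}

% Fuel scales roughly with M while the burn rate scales with L, so the
% main-sequence lifetime falls steeply with mass:
t_{\mathrm{MS}} \approx 10\ \mathrm{Gyr} \times \left(\frac{M}{M_{\odot}}\right)^{-2.5}
% e.g. a 10 M_\odot star lives only about 30 Myr, versus about 10 Gyr for the Sun.
```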
Astronomy A star with a high enough core temperature will push its outer layers outward while increasing its core density. The resulting red giant formed by the expanding outer layers enjoys a brief life span, before the helium fuel in the core is in turn consumed. Very massive stars can also undergo a series of evolutionary phases, as they fuse increasingly heavier elements. The final fate of the star depends on its mass, with stars of mass greater than about eight times the Sun becoming core collapse supernovae; while smaller stars blow off their outer layers and leave behind the inert core in the form of a white dwarf. The ejection of the outer layers forms a planetary nebula. The remnant of a supernova is a dense neutron star, or, if the stellar mass was at least three times that of the Sun, a black hole. Closely orbiting binary stars can follow more complex evolutionary paths, such as mass transfer onto a white dwarf companion that can potentially cause a supernova. Planetary nebulae and supernovae distribute the "metals" produced in the star by fusion to the interstellar medium; without them, all new stars (and their planetary systems) would be formed from hydrogen and helium alone. At a distance of about eight light-minutes, the most frequently studied star is the Sun, a typical main-sequence dwarf star of stellar class G2 V, and about 4.6 billion years (Gyr) old. The Sun is not considered a variable star, but it does undergo periodic changes in activity known as the sunspot cycle | https://en.wikipedia.org/wiki?curid=50650 |
Astronomy This is an 11-year oscillation in sunspot number. Sunspots are regions of lower-than-average temperatures that are associated with intense magnetic activity. The Sun has steadily increased in luminosity by 40% since it first became a main-sequence star. The Sun has also undergone periodic changes in luminosity that can have a significant impact on the Earth. The Maunder minimum, for example, is believed to have contributed to the Little Ice Age phenomenon. The visible outer surface of the Sun is called the photosphere. Above this layer is a thin region known as the chromosphere. This is surrounded by a transition region of rapidly increasing temperatures, and finally by the super-heated corona. At the center of the Sun is the core region, a volume of sufficient temperature and pressure for nuclear fusion to occur. Above the core is the radiation zone, where the plasma conveys the energy flux by means of radiation. Above that is the convection zone where the gas material transports energy primarily through physical displacement of the gas known as convection. It is believed that the movement of mass within the convection zone creates the magnetic activity that generates sunspots. A solar wind of plasma particles constantly streams outward from the Sun until, at the outermost limit of the Solar System, it reaches the heliopause | https://en.wikipedia.org/wiki?curid=50650 |
Astronomy As the solar wind passes the Earth, it interacts with the Earth's magnetic field (the magnetosphere), which deflects most of the solar wind but traps some of its particles, creating the Van Allen radiation belts that envelop the Earth. The aurorae are created when solar wind particles are guided by the magnetic flux lines into the Earth's polar regions, where the lines descend into the atmosphere. Planetary science is the study of the assemblage of planets, moons, dwarf planets, comets, asteroids, and other bodies orbiting the Sun, as well as extrasolar planets. The Solar System has been relatively well-studied, initially through telescopes and then later by spacecraft. This has provided a good overall understanding of the formation and evolution of the Sun's planetary system, although many new discoveries are still being made. The Solar System is subdivided into the inner planets, the asteroid belt, and the outer planets. The inner terrestrial planets consist of Mercury, Venus, Earth, and Mars. The outer gas giant planets are Jupiter, Saturn, Uranus, and Neptune. Beyond Neptune lies the Kuiper belt, and finally the Oort Cloud, which may extend as far as a light-year. The planets were formed 4.6 billion years ago in the protoplanetary disk that surrounded the early Sun. Through a process that included gravitational attraction, collision, and accretion, the disk formed clumps of matter that, with time, became protoplanets | https://en.wikipedia.org/wiki?curid=50650 |
Astronomy The radiation pressure of the solar wind then expelled most of the unaccreted matter, and only those planets with sufficient mass retained their gaseous atmosphere. The planets continued to sweep up, or eject, the remaining matter during a period of intense bombardment, evidenced by the many impact craters on the Moon. During this period, some of the protoplanets may have collided and one such collision may have formed the Moon. Once a planet reaches sufficient mass, the materials of different densities segregate within, during planetary differentiation. This process can form a stony or metallic core, surrounded by a mantle and an outer crust. The core may include solid and liquid regions, and some planetary cores generate their own magnetic field, which can protect their atmospheres from solar wind stripping. A planet or moon's interior heat is produced from the collisions that created the body, by the decay of radioactive materials ("e.g." uranium, thorium, and ²⁶Al), or tidal heating caused by interactions with other bodies. Some planets and moons accumulate enough heat to drive geologic processes such as volcanism and tectonics. Those that accumulate or retain an atmosphere can also undergo surface erosion from wind or water. Smaller bodies, without tidal heating, cool more quickly, and their geological activity ceases, with the exception of impact cratering. Astronomy and astrophysics have developed significant interdisciplinary links with other major scientific fields | https://en.wikipedia.org/wiki?curid=50650 |
Astronomy Archaeoastronomy is the study of ancient or traditional astronomies in their cultural context, utilizing archaeological and anthropological evidence. Astrobiology is the study of the advent and evolution of biological systems in the Universe, with particular emphasis on the possibility of non-terrestrial life. Astrostatistics is the application of statistics to astrophysics for the analysis of the vast amounts of observational astrophysical data. The study of chemicals found in space, including their formation, interaction and destruction, is called astrochemistry. These substances are usually found in molecular clouds, although they may also appear in low temperature stars, brown dwarfs and planets. Cosmochemistry is the study of the chemicals found within the Solar System, including the origins of the elements and variations in the isotope ratios. Both of these fields represent an overlap of the disciplines of astronomy and chemistry. As "forensic astronomy", finally, methods from astronomy have been used to solve problems of law and history. Astronomy is one of the sciences to which amateurs can contribute the most. Collectively, amateur astronomers observe a variety of celestial objects and phenomena sometimes with equipment that they build themselves. Common targets of amateur astronomers include the Sun, the Moon, planets, stars, comets, meteor showers, and a variety of deep-sky objects such as star clusters, galaxies, and nebulae | https://en.wikipedia.org/wiki?curid=50650 |
Astronomy clubs are located throughout the world and many have programs to help their members set up and complete observational programs including those to observe all the objects in the Messier (110 objects) or Herschel 400 catalogues of points of interest in the night sky. One branch of amateur astronomy, amateur astrophotography, involves the taking of photos of the night sky. Many amateurs like to specialize in the observation of particular objects, types of objects, or types of events which interest them. Most amateurs work at visible wavelengths, but a small minority experiment with wavelengths outside the visible spectrum. This includes the use of infrared filters on conventional telescopes, and also the use of radio telescopes. The pioneer of amateur radio astronomy was Karl Jansky, who started observing the sky at radio wavelengths in the 1930s. A number of amateur astronomers use either homemade telescopes or radio telescopes which were originally built for astronomy research but which are now available to amateurs ("e.g." the One-Mile Telescope). Amateur astronomers continue to make scientific contributions to the field of astronomy and it is one of the few scientific disciplines where amateurs can still make significant contributions. Amateurs can make occultation measurements that are used to refine the orbits of minor planets. They can also discover comets, and perform regular observations of variable stars | https://en.wikipedia.org/wiki?curid=50650 |
Astronomy Improvements in digital technology have allowed amateurs to make impressive advances in the field of astrophotography. Although the scientific discipline of astronomy has made tremendous strides in understanding the nature of the Universe and its contents, there remain some important unanswered questions. Answers to these may require the construction of new ground- and space-based instruments, and possibly new developments in theoretical and experimental physics. | https://en.wikipedia.org/wiki?curid=50650 |
Conventional superconductor Conventional superconductors are materials that display superconductivity as described by BCS theory or its extensions. This is in contrast to unconventional superconductors, which do not. Conventional superconductors can be either type-I or type-II. Most elemental superconductors are conventional. Niobium and vanadium are type-II, while most other elemental superconductors are type-I. Most compound and alloy superconductors are type-II materials. The most commonly used conventional superconductor in applications is a niobium-titanium alloy - this is a type-II superconductor with a superconducting critical temperature of 11 K. The highest critical temperature so far achieved in a conventional superconductor was 39 K (−234 °C) in magnesium diboride. Ba₁₋ₓKₓBiO₃ is an unusual superconductor (a non-cuprate oxide) - but considered 'conventional' in the sense that the BCS theory applies. | https://en.wikipedia.org/wiki?curid=51070 |
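BCS theory also makes quantitative predictions that can be compared with the critical temperatures quoted above. The relations below are the standard weak-coupling results, shown for illustration rather than taken from this text.

```latex
% BCS weak-coupling estimate of the critical temperature
k_{B}T_{c} \approx 1.13\,\hbar\omega_{D}\,\exp\!\left(-\frac{1}{N(0)V}\right)
% \omega_D: Debye frequency; N(0): electronic density of states at the
% Fermi level; V: effective electron--phonon pairing interaction.

% The zero-temperature energy gap is tied to T_c by
2\Delta(0) \approx 3.52\,k_{B}T_{c}
```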
Giant-impact hypothesis The giant-impact hypothesis, sometimes called the Big Splash, or the Theia Impact, suggests that Luna (the Moon) formed from the ejecta of a collision between the proto-Earth and a Mars-sized planetesimal, approximately 4.5 billion years ago, in the Hadean eon (about 20 to 100 million years after the Solar System coalesced). The colliding body is sometimes called Theia, from the name of the mythical Greek Titan who was the mother of Selene, the goddess of the Moon. Analysis of lunar rocks, published in a 2016 report, suggests that the impact may have been a direct hit, causing a thorough mixing of both parent bodies. The giant-impact hypothesis is currently the favored scientific hypothesis for the formation of the Moon. Several lines of evidence support it; however, there remain several questions concerning the best current models of the giant-impact hypothesis. The energy of such a giant impact is predicted to have heated Earth to produce a global magma ocean, and evidence of the resultant planetary differentiation of the heavier material sinking into Earth's mantle has been documented. However, there is no self-consistent model that starts with the giant-impact event and follows the evolution of the debris into a single moon. Other remaining questions include when the Moon lost its share of volatile elements and why Venus—which experienced giant impacts during its formation—does not host a similar moon. In 1898, George Darwin made the suggestion that the Earth and Moon were once a single body | https://en.wikipedia.org/wiki?curid=51143 |