text | source |
|---|---|
Old age For Thomas More, on the island of Utopia, when people are so old as to have "out-lived themselves" and are terminally ill, in pain, and a burden to everyone, the priests exhort them about choosing to die. The priests assure them that "they shall be happy after death." If they choose to die, they end their lives by starvation or by taking opium. Antonio de Guevara's utopian nation "had a custom, not to live longer than sixty five years". At that age, they practiced self-immolation. Rather than condemn the practice, Bishop Guevara called it a "golden world" in which people "have overcome the natural appetite to desire to live". In the Modern period, the "cultural status" of old people has declined in many cultures. Joan Erikson observed that "aged individuals are often ostracized, neglected, and overlooked; elders are seen no longer as bearers of wisdom but as embodiments of shame." Research on age-related attitudes consistently finds that negative attitudes exceed positive attitudes toward old people because of their looks and behavior. In his study "Aging and Old Age", Posner finds "resentment and disdain of older people" in American society. Harvard University's implicit-association test measures implicit "attitudes and beliefs" about the young vis-à-vis the old. "Blindspot: Hidden Biases of Good People", a book about the test, reports that 80% of Americans have an "automatic preference for the young over old" and that this attitude holds true worldwide. | https://en.wikipedia.org/wiki?curid=229060 |
Old age The young are "consistent in their negative attitude" toward the old. "Ageism" documents that Americans generally have "little tolerance for older persons and very few reservations about harboring negative attitudes" about them. Despite its prevalence, ageism is seldom the subject of public discourse. In 2014, a documentary film called "The Age of Love" followed the humorous and poignant adventures of 30 seniors who attended a speed dating event for 70- to 90-year-olds, and explored how the search for romance changes, or does not change, from childhood sweetheart days to older age. Simone de Beauvoir wrote that "there is one form of experience that belongs only to those that are old – that of old age itself." Nevertheless, simulations of old age attempt to help younger people gain some understanding. Texas A&M University offers a plan for an "Aging Simulation" workshop. The workshop is adapted from "Sensitizing People to the Processes of Aging". Some of the simulations follow: The Macklin Intergenerational Institute conducts Xtreme Aging workshops, as depicted in "The New York Times". A condensed version was presented on NBC's Today Show and is available online. One exercise was to lay out 3 sets of 5 slips of paper. On set #1, write your 5 most enjoyed activities; on set #2, write your 5 most valued possessions; on set #3, write your 5 most loved people. Then "lose" them one by one, trying to feel each loss, until you have lost them all, as happens in old age. | https://en.wikipedia.org/wiki?curid=229060 |
Old age Most people in the age range of 60–80 (the years of retirement and early old age) enjoy rich possibilities for a full life, but the condition of frailty, distinguished by "bodily failure" and greater dependence, becomes increasingly common after that. In the United States, hospital discharge data from 2003 to 2011 show that injury was the most common reason for hospitalization among patients aged 65+. Gerontologists note the lack of research regarding frailty and the difficulty in defining it. However, they add that physicians recognize frailty when they see it. A group of geriatricians proposed a general definition of frailty as "a physical state of increased vulnerability to stressors that results from decreased reserves and dysregulation in multiple physiological systems". Frailty is a common condition in later old age, but different definitions of frailty produce diverse assessments of prevalence. One study placed the prevalence of frailty for ages 65+ at 10.7%. Another study placed the prevalence of frailty in the 65+ population at 22% for women and 15% for men. A Canadian study illustrated how frailty increases with age and calculated the prevalence for 65+ as 22.4% and for 85+ as 43.7%. A worldwide study of "patterns of frailty" based on data from 20 nations found (a) a consistent correlation between frailty and age, (b) a higher frequency among women, and (c) more frailty in wealthier nations, where greater support and medical care increase longevity. | https://en.wikipedia.org/wiki?curid=229060 |
Old age In Norway, a 20-year longitudinal study of 400 people found that bodily failure and greater dependence became prevalent in the 80+ years. The study calls these years the "fourth age" or "old age in the real meaning of the term". Similarly, the "Berlin Aging Study" rated overall functionality on four levels: good, medium, poor, and very poor. People in their 70s were mostly rated good. In the 80–90 year range, the four levels of functionality were divided equally. By the 90–100 year range, 60% would be considered frail because of very poor functionality and only 5% still possessed good functionality. In the United States, the 85+ age group is the fastest growing, a group that is almost sure to face the "inevitable decrepitude" of survivors. (Frailty and decrepitude are synonyms.) Three unique markers of frailty have been proposed: (a) loss of any notion of invincibility, (b) loss of ability to do things essential to one's care, and (c) loss of possibility for a subsequent life stage. On average, survivors deteriorate from agility in their 65–80 years to a period of frailty preceding death. This deterioration is gradual for some and precipitous for others. Frailty is marked by an array of chronic physical and mental problems, which means that frailty is not treatable as a specific disease. These problems, coupled with increased dependency in the basic activities of daily living (ADLs) required for personal care, add emotional problems: depression and anxiety. | https://en.wikipedia.org/wiki?curid=229060 |
Old age In sum, frailty has been depicted as a group of "complex issues," distinct but "causally interconnected," that often include "comorbid diseases", progressive weakness, stress, exhaustion, and depression. Johnson and Barer did a pioneering study of "Life Beyond 85 Years" through interviews over a six-year period. In talking with people aged 85 and older, they found some popular conceptions about old age to be erroneous. Such erroneous conceptions include (1) that people in old age have at least one family member for support, (2) that old age well-being requires social activity, and (3) that "successful adaptation" to age-related changes demands a continuity of self-concept. In their interviews, Johnson and Barer found that 24% of the 85+ had no face-to-face family relationships; many had outlived their families. Second, contrary to popular notions, the interviews revealed that the reduced activity and socializing of the over-85s does not harm their well-being; they "welcome increased detachment". Third, rather than a continuity of self-concept, as the interviewees faced new situations they changed their "cognitive and emotional processes" and reconstituted their "self-representation". Frail people require a high level of care. Medical advances have made it possible to "postpone death" for years. This added time costs many frail people "prolonged sickness, dependence, pain, and suffering". | https://en.wikipedia.org/wiki?curid=229060 |
Old age According to a study by the Agency for Healthcare Research and Quality (AHRQ), the rate of emergency department visits was consistently highest among patients aged 85 years and older in 2006–2011 in the United States. Additionally, patients aged 65+ had the highest percentage of hospital stays for adults with multiple chronic conditions but the second highest percentage of hospital costs in 2003–2014. These final years are also costly in economic terms. One out of every four Medicare dollars is spent on the frail in their last year of life . . . in attempts to postpone death. Medical treatments in the final days are not only economically costly, they are often unnecessary, even harmful. Nortin Hadler, M.D., warns against the tendency to medicalize and overtreat the frail. In her "Choosing Medical Care in Old Age", Muriel R. Gillick, M.D., argues that appropriate medical treatment for the frail is not the same as for the robust. The frail are vulnerable to "being tipped over" by any physical stress put on the system, such as medical interventions. Old age, death, and frailty are linked because approximately half the deaths in old age are preceded by months or years of frailty. "Older Adults' Views on Death" is based on interviews with 109 people in the 70–90 age range, with a mean age of 80.7. Almost 20% of the people wanted to use whatever treatment might postpone death. About the same number said that, given a terminal illness, they would choose assisted suicide. | https://en.wikipedia.org/wiki?curid=229060 |
Old age Roughly half chose doing nothing except living day by day until death comes naturally, without medical or other intervention designed to prolong life. This choice was coupled with a desire to receive palliative care if needed. About half of older adults suffer multimorbidity, that is, they have three or more chronic conditions. Medical advances have made it possible to "postpone death," but in many cases this postponement adds "prolonged sickness, dependence, pain, and suffering," a time that is costly in social, psychological, and economic terms. The longitudinal interviews of 150 people aged 85+ summarized in "Life Beyond 85 Years" found "progressive terminal decline" in the year prior to death: constant fatigue, much sleep, detachment from people, things, and activities, and simplified lives. Most of the interviewees did not fear death; some would welcome it. One person said, "Living this long is pure hell." However, nearly everyone feared a long process of dying. Some wanted to die in their sleep; others wanted to die "on their feet". The study of "Older Adults' Views on Death" found that the more frail people were, and the more "pain, suffering, and struggles" they were enduring, the more likely they were to "accept and welcome" death as a release from their misery. Their fear about the process of dying was that it would prolong their distress. Besides being a release from misery, some saw death as a way to be reunited with departed loved ones. | https://en.wikipedia.org/wiki?curid=229060 |
Old age Others saw death as a way to free their caretakers from the burden of their care. Generally speaking, old people have always been more religious than young people. At the same time, wide cultural variations exist. In the United States, 90% of older Hispanics view themselves as very, quite, or somewhat religious. The Pew Research Center's study of black and white old people found that 62% of those aged 65–74 and 70% of those aged 75+ asserted that religion was "very important" to them. For all people 65+, more women (76%) than men (53%) and more blacks (87%) than whites (63%) consider religion "very important" to them. This compares to 54% in the 30–49 age range. In a British 20-year longitudinal study, less than half of the old people surveyed said that religion was "very important" to them, and a quarter said they had become less religious in old age. The late-life rise in religiosity is stronger in Japan than in the United States, but in the Netherlands it is minimal. In the practice of religion, a study of people aged 60+ found that 25% read the Bible every day and over 40% watch religious TV. Pew Research found that in the age 65+ range, 75% of whites and 87% of blacks pray daily. Participation in organized religion is not a good indicator of religiosity because transportation and health problems often hinder participation. In the industrialized countries, life expectancy and, thus, the old age population have increased consistently over recent decades. | https://en.wikipedia.org/wiki?curid=229060 |
Old age In the United States the proportion of people aged 65 or older increased from 4% in 1900 to about 12% in 2000. In 1900, only about 3 million of the nation's citizens were 65 or older (out of 76 million total American citizens). By 2000, the number of senior citizens had increased to about 35 million (of 280 million US citizens). Population experts estimate that more than 50 million Americans—about 17 percent of the population—will be 65 or older in 2020. By 2050, it is projected that at least 400,000 Americans will be 100 or older. The number of old people is growing around the world chiefly because of the post–World War II baby boom and increases in the provision and standards of health care. By 2050, 33% of the developed world's population and almost 20% of the less developed world's population will be over 60 years old. The growing number of people living to their 80s and 90s in the developed world has strained public welfare systems and has also resulted in increased incidence of diseases like cancer and dementia that were rarely seen in premodern times. When the United States Social Security program was created, persons older than 65 numbered only around 5% of the population, and the average life expectancy of a 65-year-old in 1936 was approximately 5 years, while in 2011 it could often range from 10 to 20 years. Other issues that can arise from an increasing older population are growing demands for health care and an increase in demand for different types of services. | https://en.wikipedia.org/wiki?curid=229060 |
Old age Of the roughly 150,000 people who die each day across the globe, about two thirds—100,000 per day—die of age-related causes. In industrialized nations, the proportion is much higher, reaching 90%. According to Erik Erikson's "Stages of Psychosocial Development", the human personality is developed in a series of eight stages that take place from the time of birth and continue throughout an individual's entire life. He characterises old age as a period of "Integrity vs. Despair", during which a person focuses on reflecting back on his life. Those who are unsuccessful during this phase will feel that their life has been wasted and will experience many regrets. The individual will be left with feelings of bitterness and despair. Those who feel proud of their accomplishments will feel a sense of integrity. Successfully completing this phase means looking back with few regrets and a general feeling of satisfaction. These individuals will attain wisdom, even when confronting death. Coping is a very important skill needed in the aging process to move forward with life and not be 'stuck' in the past. The way a person adapts and copes reflects his aging process on a psycho-social level. For people in their 80s and 90s, Joan Erikson added a ninth stage in "The Life Cycle Completed: Extended Version". | https://en.wikipedia.org/wiki?curid=229060 |
Old age As she wrote, she added the ninth stage because the Integrity of the eighth stage imposes "a serious demand on the senses of elders" and the Wisdom of the eighth stage requires capacities that ninth stage elders "do not usually have". Newman & Newman also proposed a ninth stage of life, Elderhood. Elderhood refers to those individuals who live past the life expectancy of their birth cohorts. There are two different types of people described in this stage of life. The "young old" are the healthy individuals who can function on their own without assistance and can complete their daily tasks independently. The "old old" are those who depend on specific services due to declining health or diseases. This period of life is characterized as a period of "immortality vs. extinction". Immortality is the belief that one's life will go on past death; examples include an afterlife or living on through one's family. Extinction refers to feeling as if life has no purpose. Social theories, or concepts, propose explanations for the distinctive relationships between old people and their societies. One of these theories is the disengagement theory, proposed in 1961. This theory proposes that in old age a mutual disengagement between people and their society occurs in anticipation of death. By becoming disengaged from work and family responsibilities, according to this concept, people are enabled to enjoy their old age without stress. | https://en.wikipedia.org/wiki?curid=229060 |
Old age This theory has been subjected to the criticism that old age disengagement is neither natural, inevitable, nor beneficial. Furthermore, disengaging from social ties in old age is not across the board: unsatisfactory ties are dropped and satisfying ones kept. In opposition to the disengagement theory, the activity theory of old age argues that disengagement in old age occurs not by desire, but because of the barriers to social engagement imposed by society. This theory has been faulted for not factoring in psychological changes that occur in old age, as shown by reduced activity even when activity is available. It has also been found that happiness in old age is not proportional to activity. According to the continuity theory, in spite of the inevitable differences imposed by their old age, most people try to maintain continuity in personhood, activities, and relationships with their younger days. Socioemotional selectivity theory also depicts how people maintain continuity in old age. The focus of this theory is continuity sustained by social networks, albeit networks narrowed by choice and by circumstances. The choice is for more harmonious relationships. The circumstances are loss of relationships by death and distance. Life expectancy by nation at birth in the year 2011 ranged from 48 to 82 years. Low values indicate high death rates for infants and children. | https://en.wikipedia.org/wiki?curid=229060 |
Old age In most parts of the world women live, on average, longer than men; even so, the disparities vary from 12 years in Russia to no difference, or a higher life expectancy for men, in countries such as Zimbabwe and Uganda. The number of elderly persons worldwide began to surge in the second half of the 20th century. Up to that time (and still true in underdeveloped countries), five percent or less of the population was over 65. Few lived longer than their 70s, and people who attained advanced age (i.e., their 80s) were rare enough to be a novelty and were revered as wise sages. The worldwide over-65 population in 1960 was one-third of the under-5 population. By 2013, the over-65 population had grown to equal the under-5 population. The over-65 population is projected to be double the under-5 population by 2050. Before the surge in the over-65 population, accidents and disease claimed many people before they could attain old age, and health problems in those over 65 meant a quick death in most cases. If a person lived to an advanced age, it was due to genetic factors and/or a relatively easy lifestyle, since diseases of old age could not be treated before the 20th century. In October 2016, scientists identified the maximum human lifespan at an average age of 115, with an absolute upper limit of 125 years. However, the concept of a maximum lifespan in humans is still widely debated among the scientific community. | https://en.wikipedia.org/wiki?curid=229060 |
Old age German chancellor Otto von Bismarck created the world's first comprehensive government social safety net in the 1880s, providing for old age pensions. In the United States and the United Kingdom, 65 (60 for women in the UK) was traditionally the age of retirement with full old age benefits. In 2003, the age at which a United States citizen became eligible for full Social Security benefits began to increase gradually, and will continue to do so until it reaches 67 in 2027. Full retirement age for Social Security benefits for people retiring in 2012 is age 66. In the United Kingdom, the state pension age for men and women will rise to 66 in 2020, with further increases scheduled after that. Originally, the purpose of old age pensions was to prevent elderly persons from being reduced to beggary, which is still common in some underdeveloped countries, but growing life expectancies and older populations have brought into question the model under which pension systems were designed. By 1990, the United States was spending 30 per cent of its budget on the elderly, compared with 2 per cent on education. The dominant perception of the American old age population changed from "needy" and "worthy" to "powerful" and "greedy," with old people getting more than their share of the nation's resources. However, in 2011, using a Supplemental Poverty Measure (SPM), the old age American poverty rate was measured as 15.9%. | https://en.wikipedia.org/wiki?curid=229060 |
Old age In the United States in 2008, 11 million people aged 65+ lived alone: 5 million or 22% of those aged 65–74, 4 million or 34% of those aged 75–84, and 2 million or 41% of those aged 85+. The 2007 gender breakdown for all people 65+ living alone was 19% of men and 39% of women. Many new assistive devices made especially for the home have enabled more old people to care for themselves and carry out activities of daily living (ADLs). AbleData lists 40,000 assistive technology products in 20 categories. Some examples of devices are a medical alert and safety system, a shower seat (so that the person does not get tired in the shower and fall), a bed cane (offering support to those with unsteadiness getting in and out of bed), and an ADL cuff (used with eating utensils for people with paralysis or hand weakness). A Swedish study found that at age 76, 46% of the subjects used assistive devices. When they reached age 86, 69% used them. The subjects were ambivalent regarding the use of the assistive devices: as "enablers" or as "disablers". People who view assistive devices as enabling greater independence accept and use them. Those who see them as symbols of disability reject them. However, organizations like Love for the Elderly aim to combat such age-related prejudice by educating the public about the importance of appreciating growing older, while also providing services of kindness to elders in senior homes. | https://en.wikipedia.org/wiki?curid=229060 |
Old age Even with assistive devices, as of 2006, 8.5 million Americans needed personal assistance because of impaired basic activities of daily living required for personal care or impaired instrumental activities of daily living (IADL) required for independent living. Projections place this number at 21 million by 2030, when 40% of Americans over 70 will need assistance. There are many options for such long-term care for those who require it. There is home care, in which a family member, volunteer, or trained professional aids the person in need and helps with daily activities. Another option is community services, which can provide the person with transportation, meal plans, or activities in senior centers. A third option is assisted living, where round-the-clock supervision is given with aid in eating, bathing, dressing, etc. A final option is a nursing home, which provides professional nursing care. A scholarly literature has emerged, especially in Britain, showing historical trends in the visual depiction of old age. | https://en.wikipedia.org/wiki?curid=229060 |
Cryoelectronics In electronics, cryoelectronics or cryolectronics is the study of superconductivity under cryogenic conditions and its applications. It is also described as the operation of power electronic devices at cryogenic temperatures. The practical applications of this field are quite broad, although it is particularly useful in areas where a cryogenic environment already exists, such as superconducting technologies and spacecraft design. It has also become a special branch of cryophysics and cryotechnics and plays a role in operations that require high-resolution and precision measurements. Cryoelectronic devices include SQUIDs, or superconducting quantum interference devices, which are magnetic sensors of the highest sensitivity. They serve as the backbone of applications ranging from materials evaluation to geological and environmental prospecting and medical diagnostics, among others. A key factor in the production of new technologies is whether they are cost effective and useful. Devices that make use of cryoelectronics and the applications of superconductivity, such as computers, information transmission lines, and magnetocardiography, have potential for commercial value outside of a few specific devices for singular purposes. At the same time, other devices with highly specialized functions can be competitively marketed without having to rely on a large market. | https://en.wikipedia.org/wiki?curid=233631 |
Reverse transcription polymerase chain reaction (RT-PCR) is a laboratory technique combining reverse transcription of RNA into DNA (in this context called complementary DNA or cDNA) and amplification of specific DNA targets using polymerase chain reaction (PCR). It is primarily used to measure the amount of a specific RNA. This is achieved by monitoring the amplification reaction using fluorescence, a technique called real-time PCR or quantitative PCR (qPCR). Combined RT-PCR and qPCR are routinely used for analysis of gene expression and quantification of viral RNA in research and clinical settings. The close association between RT-PCR and qPCR has led to metonymic use of the term qPCR to mean RT-PCR. Such use may be confusing, as RT-PCR can be used without qPCR, for example to enable molecular cloning, sequencing or simple detection of RNA. Conversely, qPCR may be used without RT-PCR, for example to quantify the copy number of a specific piece of DNA. The combined RT-PCR and qPCR technique has been described as quantitative RT-PCR or real-time RT-PCR (sometimes even called quantitative real-time RT-PCR) and has been variously abbreviated as qRT-PCR, RT-qPCR, RRT-PCR, and rRT-PCR. In order to avoid confusion, the following abbreviations will be used consistently throughout this article: Not all authors, especially earlier ones, use this convention and the reader should be cautious when following links. RT-PCR has been used to indicate both real-time PCR (qPCR) and reverse transcription PCR (RT-PCR). | https://en.wikipedia.org/wiki?curid=235077 |
Reverse transcription polymerase chain reaction Since its introduction in 1977, the Northern blot has been used extensively for RNA quantification despite its shortcomings: (a) it is a time-consuming technique, (b) it requires a large quantity of RNA for detection, and (c) it is quantitatively inaccurate at low RNA abundance. However, the discovery of reverse transcriptase during the study of viral replication of genetic material led to the development of RT-PCR, which has since displaced the Northern blot as the method of choice for RNA detection and quantification. RT-PCR has risen to become the benchmark technology for the detection and/or comparison of RNA levels for several reasons: (a) it does not require post-PCR processing, (b) a wide range (>10^7-fold) of RNA abundance can be measured, and (c) it provides insight into both qualitative and quantitative data. Due to its simplicity, specificity and sensitivity, RT-PCR is used in a wide range of applications, from experiments as simple as quantification of yeast cells in wine to more complex uses as a diagnostic tool for detecting infectious agents such as the avian flu virus and SARS-CoV-2 (2019-nCoV), the virus that causes COVID-19. In RT-PCR, the RNA template is first converted into complementary DNA (cDNA) using a reverse transcriptase. The cDNA is then used as a template for exponential amplification using PCR. QT-NASBA is currently the most sensitive method of RNA detection available. | https://en.wikipedia.org/wiki?curid=235077 |
Reverse transcription polymerase chain reaction The use of RT-PCR for the detection of RNA transcripts has revolutionized the study of gene expression in the following important ways: The quantification of mRNA using RT-PCR can be achieved as either a one-step or a two-step reaction. The difference between the two approaches lies in the number of tubes used when performing the procedure. The two-step reaction requires that the reverse transcriptase reaction and the PCR amplification be performed in separate tubes. The disadvantage of the two-step approach is susceptibility to contamination due to more frequent sample handling. On the other hand, in the one-step approach the entire reaction, from cDNA synthesis to PCR amplification, occurs in a single tube. The one-step approach is thought to minimize experimental variation by containing all of the enzymatic reactions in a single environment. It eliminates the step of pipetting cDNA product into the PCR reaction, which is labor-intensive and prone to contamination. The further use of inhibitor-tolerant polymerases and polymerase enhancers, with an optimized one-step RT-PCR condition, supports the reverse transcription of RNA from unpurified or crude samples, such as whole blood and serum. However, the starting RNA templates are prone to degradation in the one-step approach, and the use of this approach is not recommended when repeated assays from the same sample are required. Additionally, the one-step approach is reported to be less accurate than the two-step approach. | https://en.wikipedia.org/wiki?curid=235077 |
Reverse transcription polymerase chain reaction It is also the preferred method of analysis when using DNA-binding dyes such as SYBR Green, since the elimination of primer-dimers can be achieved through a simple change in the melting temperature. Nevertheless, the one-step approach is a relatively convenient solution for the rapid detection of target RNA directly in biosensing. Quantification of RT-PCR products can largely be divided into two categories: end-point and real-time. The use of end-point RT-PCR is preferred for measuring gene expression changes in a small number of samples, but real-time RT-PCR has become the gold standard method for validating results obtained from array analyses or gene expression changes on a global scale. The measurement approaches of end-point RT-PCR require the detection of gene expression levels by the use of fluorescent dyes like ethidium bromide, 32P labeling of PCR products using a phosphorimager, or scintillation counting. End-point RT-PCR is commonly achieved using three different methods: relative, competitive and comparative. The emergence of novel fluorescent DNA labeling techniques in the past few years has enabled the analysis and detection of PCR products in real time and has consequently led to the widespread adoption of real-time RT-PCR for the analysis of gene expression. Not only is real-time RT-PCR now the method of choice for quantification of gene expression, it is also the preferred method of obtaining results from array analyses and gene expression studies on a global scale. | https://en.wikipedia.org/wiki?curid=235077 |
Reverse transcription polymerase chain reaction Currently, there are four different fluorescent DNA probes available for the real-time RT-PCR detection of PCR products: SYBR Green, TaqMan, molecular beacons, and scorpion probes. All of these probes allow the detection of PCR products by generating a fluorescent signal. While the SYBR Green dye emits its fluorescent signal simply by binding to the double-stranded DNA in solution, the generation of fluorescence by TaqMan probes, molecular beacons, and scorpions depends on Förster resonance energy transfer (FRET) coupling of the dye molecule and a quencher moiety to the oligonucleotide substrates. Two strategies are commonly employed to quantify the results obtained by real-time RT-PCR: the standard curve method and the comparative threshold method. The exponential amplification via reverse transcription polymerase chain reaction provides for a highly sensitive technique in which a very low copy number of RNA molecules can be detected. RT-PCR is widely used in the diagnosis of genetic diseases and, semiquantitatively, in the determination of the abundance of specific different RNA molecules within a cell or tissue as a measure of gene expression. RT-PCR is commonly used in research to measure gene expression. For example, Lin et al. used qRT-PCR to measure expression of Gal genes in yeast cells. First, Lin et al. engineered a mutation of a protein suspected to participate in the regulation of Gal genes. This mutation was hypothesized to selectively abolish Gal expression. | https://en.wikipedia.org/wiki?curid=235077 |
Reverse transcription polymerase chain reaction To confirm this, the gene expression levels of yeast cells containing this mutation were analyzed using qRT-PCR. The researchers were able to conclusively determine that the mutation of this regulatory protein reduced Gal expression. Northern blot analysis can be used to study the gene's RNA expression further. RT-PCR can also be very useful in the insertion of eukaryotic genes into prokaryotes. Because most eukaryotic genes contain introns, which are present in the genome but not in the mature mRNA, the cDNA generated from an RT-PCR reaction is the exact (without regard to the error-prone nature of reverse transcriptases) DNA sequence that would be directly translated into protein after transcription. When these genes are expressed in prokaryotic cells for the sake of protein production or purification, the RNA produced directly from transcription need not undergo splicing as the transcript contains only exons. (Prokaryotes, such as E. coli, lack the mRNA splicing mechanism of eukaryotes.) RT-PCR can be used to diagnose genetic diseases such as Lesch–Nyhan syndrome. This genetic disease is caused by a malfunction in the HPRT1 gene, which clinically leads to fatal uric acid urinary stones and symptoms similar to gout. Analyzing a pregnant mother and a fetus for mRNA expression levels of HPRT1 will reveal if the mother is a carrier and if the fetus is likely to develop Lesch–Nyhan syndrome. Scientists are working on ways to use RT-PCR in cancer detection to help improve prognosis and monitor response to therapy. | https://en.wikipedia.org/wiki?curid=235077 |
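A reduction in expression like the one Lin et al. reported is typically quantified with the comparative threshold strategy (often written 2^-ΔΔCq) mentioned earlier. The Python snippet below is a minimal sketch of that calculation, not taken from the article: the Cq values are hypothetical, and roughly equal, near-100% amplification efficiencies for the target and reference assays are assumed.

```python
# Minimal sketch of the comparative threshold (2^-ddCq) calculation.
# All Cq values below are hypothetical; ~100% amplification efficiency
# (one doubling per cycle) is assumed for both assays.

def fold_change(cq_target_test, cq_ref_test, cq_target_ctrl, cq_ref_ctrl):
    """Relative expression of a target gene in a test sample versus a control
    sample, each normalised to a reference gene measured in the same sample."""
    d_test = cq_target_test - cq_ref_test    # dCq, test sample
    d_ctrl = cq_target_ctrl - cq_ref_ctrl    # dCq, control sample
    dd_cq = d_test - d_ctrl                  # ddCq
    return 2 ** (-dd_cq)

# Hypothetical quantification cycles: target and reference gene measured
# in a treated sample and an untreated control.
print(fold_change(cq_target_test=24.0, cq_ref_test=18.0,
                  cq_target_ctrl=27.0, cq_ref_ctrl=18.0))
# -> 8.0, i.e. roughly 8-fold higher target expression in the treated sample
```

When the target and reference assays amplify with noticeably different efficiencies, the standard curve method is generally preferred, since the 2^-ΔΔCq shortcut assumes the efficiencies are similar.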
Reverse transcription polymerase chain reaction Circulating tumor cells produce unique mRNA transcripts depending on the type of cancer. The goal is to determine which mRNA transcripts serve as the best biomarkers for a particular cancer cell type and then analyze their expression levels with RT-PCR. RT-PCR is commonly used in studying the genomes of viruses whose genomes are composed of RNA, such as Influenzavirus A, retroviruses like HIV, and SARS-CoV-2. Despite its major advantages, RT-PCR is not without drawbacks. The exponential growth of the reverse-transcribed complementary DNA (cDNA) during the multiple cycles of PCR produces inaccurate end-point quantification due to the difficulty in maintaining linearity. In order to provide accurate detection and quantification of RNA content in a sample, qRT-PCR was developed using fluorescence-based modifications to monitor the amplification products during each cycle of PCR. The extreme sensitivity of the technique can be a double-edged sword, since even the slightest DNA contamination can lead to undesirable results. A simple method for the elimination of false positive results is to add anchors, or tags, to the 5' region of a gene-specific primer. Additionally, planning and design of quantification studies can be technically challenging due to the existence of numerous sources of variation, including template concentration and amplification efficiency. RT-PCR can be carried out by the one-step RT-PCR protocol or the two-step RT-PCR protocol. | https://en.wikipedia.org/wiki?curid=235077 |
Reverse transcription polymerase chain reaction One-step RT-PCR takes mRNA targets (up to 6 kb) and subjects them to reverse transcription and then PCR amplification in a single test tube. Use only intact, high-quality RNA for the best results. Be sure to use a sequence-specific primer. Two-step RT-PCR, as the name implies, occurs in two steps: first the reverse transcription and then the PCR. This method is more sensitive than the one-step method. Kits are also useful for two-step RT-PCR. Just as for one-step, use only intact, high-quality RNA for the best results. The primer for two-step does not have to be sequence-specific. The quantitative RT-PCR assay is considered to be the gold standard for measuring the number of copies of specific cDNA targets in a sample, but it is poorly standardized. As a result, while there are numerous publications utilizing the technique, many provide inadequate experimental detail and use unsuitable data analysis to draw inappropriate conclusions. Due to the inherent variability in the quality of any quantitative PCR data, not only do reviewers have a difficult time evaluating these manuscripts, but the studies also become impossible to replicate. Recognizing the need for the standardization of the reporting of experimental conditions, the Minimum Information for Publication of Quantitative Real-Time PCR Experiments (MIQE, pronounced mykee) guidelines have been published by an international consortium of academic scientists. | https://en.wikipedia.org/wiki?curid=235077 |
Reverse transcription polymerase chain reaction The MIQE guidelines describe the minimum information necessary for evaluating quantitative PCR experiments that should be required for publication, in order to encourage better experimental practice and ensure the relevance, accuracy, correct interpretation, and repeatability of quantitative PCR data. Besides reporting guidelines, the MIQE stresses the need to standardize the nomenclature associated with quantitative PCR to avoid confusion; for example, the abbreviation qPCR should be used for quantitative real-time PCR and RT-qPCR should be used for reverse transcription-qPCR, and genes used for normalisation should be referred to as reference genes instead of housekeeping genes. It also proposes that commercially derived terms like TaqMan probes should not be used, but instead referred to as hydrolysis probes. Additionally, it is proposed that quantification cycle (Cq) be used to describe the PCR cycle used for quantification instead of threshold cycle (Ct), crossing point (Cp), and takeoff point (TOP), which refer to the same value but were coined by different manufacturers of real-time instruments. The guideline consists of the following elements: 1) experimental design, 2) sample, 3) nucleic acid extraction, 4) reverse transcription, 5) qPCR target information, 6) oligonucleotides, 7) protocol, 8) validation, and 9) data analysis. Specific items within each element carry a label of either E (essential) or D (desirable). | https://en.wikipedia.org/wiki?curid=235077 |
Reverse transcription polymerase chain reaction Those labelled E are considered critical and indispensable, while those labelled D are considered peripheral yet important for best practices. | https://en.wikipedia.org/wiki?curid=235077 |
Trace element A trace element is a chemical element whose concentration (or other measure of amount) is very low (a "trace amount"). The exact definition depends on the field of science: | https://en.wikipedia.org/wiki?curid=235175 |
Volcanology (also spelled vulcanology) is the study of volcanoes, lava, magma, and related geological, geophysical and geochemical phenomena (volcanism). The term "volcanology" is derived from the Latin word "vulcan"; Vulcan was the ancient Roman god of fire. A volcanologist is a geologist who studies the eruptive activity and formation of volcanoes, and their current and historic eruptions. Volcanologists frequently visit volcanoes, especially active ones, to observe volcanic eruptions and collect eruptive products, including tephra (such as ash or pumice) and rock and lava samples. One major focus of enquiry is the prediction of eruptions; there is currently no accurate way to do this, but predicting eruptions, like predicting earthquakes, could save many lives. In 1841, the first volcanological observatory, the Vesuvius Observatory, was founded in the Kingdom of the Two Sicilies. Seismic observations are made using seismographs deployed near volcanic areas, watching out for increased seismicity during volcanic events, in particular looking for long-period harmonic tremors, which signal magma movement through volcanic conduits. Surface deformation monitoring includes the use of geodetic techniques such as leveling, tilt, strain, angle and distance measurements through tiltmeters, total stations and EDMs. This also includes GNSS observations and InSAR. Surface deformation indicates magma upwelling: increased magma supply produces bulges in the volcanic center's surface. | https://en.wikipedia.org/wiki?curid=238112 |
Volcanology Gas emissions may be monitored with equipment including portable ultraviolet spectrometers (COSPEC, now superseded by the miniDOAS), which analyze the presence of volcanic gases such as sulfur dioxide, or by infrared spectroscopy (FTIR). Increased gas emissions, and more particularly changes in gas compositions, may signal an impending volcanic eruption. Temperature changes are monitored using thermometers and by observing changes in the thermal properties of volcanic lakes and vents, which may indicate upcoming activity. Satellites are widely used to monitor volcanoes, as they allow a large area to be monitored easily. They can measure the spread of an ash plume, such as the one from Eyjafjallajökull's 2010 eruption, as well as SO2 emissions. InSAR and thermal imaging can monitor large, scarcely populated areas where it would be too expensive to maintain instruments on the ground. Other geophysical techniques (electrical, gravity and magnetic observations) include monitoring fluctuations and sudden changes in resistivity, gravity anomalies or magnetic anomaly patterns that may indicate volcano-induced faulting and magma upwelling. Stratigraphic analyses include analyzing tephra and lava deposits and dating these to give volcano eruption patterns, with estimated cycles of intense activity and sizes of eruptions. Volcanology has an extensive history. The earliest known recording of a volcanic eruption may be on a wall painting dated to about 7,000 BCE found at the Neolithic site at Çatal Höyük in Anatolia, Turkey. | https://en.wikipedia.org/wiki?curid=238112 |
Volcanology This painting has been interpreted as a depiction of a twin-peaked volcano in eruption, with a cluster of houses at its base (though archaeologists now question this interpretation). The volcano may be either Hasan Dağ, or its smaller neighbour, Melendiz Dağ. The classical world of Greece and the early Roman Empire explained volcanoes as the sites of various gods. The Greeks considered that Hephaestus, the god of fire, sat below the volcano Etna, forging the weapons of Zeus. The Greek word used to describe volcanoes was "etna", or "hiera", after Heracles, the son of Zeus. The Roman poet Virgil, in interpreting the Greek mythos, held that the giant Enceladus was buried beneath Etna by the goddess Athena as punishment for rebellion against the gods; the mountain's rumblings were his tormented cries, the flames his breath and the tremors his railing against the bars of his prison. Enceladus' brother Mimas was buried beneath Vesuvius by Hephaestus, and the blood of other defeated giants welled up in the Phlegrean Fields surrounding Vesuvius. The Greek philosopher Empedocles (c. 490–430 BCE) saw the world divided into four elemental forces, of Earth, Air, Fire and Water. Volcanoes, Empedocles maintained, were the manifestation of Elemental Fire. Plato contended that channels of hot and cold waters flow in inexhaustible quantities through subterranean rivers. In the depths of the earth snakes a vast river of fire, the "Pyriphlegethon", which feeds all the world's volcanoes. | https://en.wikipedia.org/wiki?curid=238112 |
Volcanology Aristotle considered underground fire as the result of "the...friction of the wind when it plunges into narrow passages." Wind played a key role in volcano explanations until the 16th century. Lucretius, a Roman philosopher, claimed Etna was completely hollow and that the fires of the underground were driven by a fierce wind circulating near sea level. Ovid believed that the flame was fed from "fatty foods" and that eruptions stopped when the food ran out. Vitruvius contended that sulfur, alum and bitumen fed the deep fires. Pliny the Elder noted that earthquakes preceded an eruption; he died in the eruption of Vesuvius in 79 CE while investigating it at Stabiae. His nephew, Pliny the Younger, gave detailed descriptions of the eruption in which his uncle died, attributing his death to the effects of toxic gases. Such eruptions have been named Plinian in honour of the two authors. "Nuées ardentes" were described from the Azores in 1580. Georgius Agricola argued that the rays of the sun, as later proposed by Descartes, had nothing to do with volcanoes. Agricola believed vapor under pressure caused eruptions of 'mountain oil' and basalt. The Jesuit Athanasius Kircher (1602–1680) witnessed eruptions of Mount Etna and Stromboli, then visited the crater of Vesuvius and published his view of an Earth with a central fire connected to numerous others caused by the burning of sulfur, bitumen and coal. | https://en.wikipedia.org/wiki?curid=238112 |
Volcanology Johannes Kepler considered volcanoes to be conduits for the tears and excrement of the Earth, voiding bitumen, tar and sulfur. Descartes, pronouncing that God had created the Earth in an instant, declared he had done so in three layers: the fiery depths, a layer of water, and the air. Volcanoes, he said, were formed where the rays of the sun pierced the earth. Science wrestled with the ideas of the combustion of pyrite with water, that rock was solidified bitumen, and with notions of rock being formed from water (Neptunism). Of the volcanoes then known, all were near water, hence the action of the sea upon the land was used to explain volcanism. Tribal legends of volcanoes abound from the Pacific Ring of Fire and the Americas, usually invoking the forces of the supernatural or the divine to explain the violent outbursts of volcanoes. Taranaki and Tongariro, according to Māori mythology, were lovers who both fell in love with Pihanga, and a spiteful jealous fight ensued. To this day, Māori will not live between Tongariro and Taranaki for fear of the dispute flaring up again. In the Hawaiian religion, Pele is the goddess of volcanoes and a popular figure in Hawaiian mythology. Pele's name has been used for various scientific terms, such as Pele's hair, Pele's tears, and Limu o Pele (Pele's seaweed). A volcano on the Jovian moon Io is also named Pele. Saint Agatha is the patron saint of Catania, close to Mount Etna, and remains to this day a highly venerated example of the virgin martyrs of Christian antiquity. | https://en.wikipedia.org/wiki?curid=238112 |
Volcanology In 253 CE, one year after her violent death, the stilling of an eruption of Mt. Etna was attributed to her intercession. Catania was, however, nearly completely destroyed by the eruption of Mt. Etna in 1169, and over 15,000 of its inhabitants died. Nevertheless, she was invoked again during the 1669 Etna eruption and during an outbreak endangering Nicolosi in 1886. The way she is invoked and dealt with in Italian folk religion, a sort of quid pro quo approach to saints, has been related (in the tradition of James Frazer) to earlier pagan beliefs. In 1660 the eruption of Vesuvius rained twinned pyroxene crystals and ash upon the nearby villages. The crystals resembled the crucifix and this was interpreted as the work of Saint Januarius. In Naples, the relics of St Januarius are paraded through town at every major eruption of Vesuvius. The register of these processions and the 1779 and 1794 diary of Father Antonio Piaggio allowed the British diplomat and amateur naturalist Sir William Hamilton to provide a detailed chronology and description of Vesuvius' eruptions. | https://en.wikipedia.org/wiki?curid=238112 |
Maritime nation A maritime nation is any nation which borders the sea and is dependent on its use for the majority of the following state activities: commerce and transport, war, the definition of a territorial boundary, or any maritime activity (activities using the sea to convey or produce an end result). Historically, the term has been used to refer to a thalassocracy such as Carthage and Phoenicia, but during the medieval period it increasingly became associated with the Maritime Republics of Venice, Pisa, Genoa, Amalfi, Gaeta, Ancona and Ragusa. | https://en.wikipedia.org/wiki?curid=238250 |
Adverse pressure gradient In fluid dynamics, an adverse pressure gradient occurs when the static pressure increases in the direction of the flow. Mathematically this is expressed as $dp/dx > 0$ for a flow in the positive $x$-direction. This is important for boundary layers: increasing the fluid pressure is akin to increasing the potential energy of the fluid, leading to reduced kinetic energy and a deceleration of the fluid. Since the fluid in the inner part of the boundary layer is slower, it is more greatly affected by the increasing pressure gradient. For a large enough pressure increase, this fluid may slow to zero velocity or even become reversed, causing a flow separation. This has very significant consequences in aerodynamics, since flow separation significantly modifies the pressure distribution along the surface and hence the lift and drag characteristics. Turbulent boundary layers tend to be able to sustain an adverse pressure gradient better than an equivalent laminar boundary layer. The more efficient mixing which occurs in a turbulent boundary layer transports kinetic energy from the edge of the boundary layer to the low-momentum flow at the solid surface, often preventing the separation that would occur for a laminar boundary layer under the same conditions. This physical fact has led to a variety of schemes to actually produce turbulent boundary layers when boundary layer separation is dominant at high Reynolds numbers. | https://en.wikipedia.org/wiki?curid=238493 |
Adverse pressure gradient The dimples on a golf ball, the fuzz on a tennis ball, or the seams on a baseball are good examples. Aeroplane wings are often engineered with vortex generators on the upper surface to produce a turbulent boundary layer. | https://en.wikipedia.org/wiki?curid=238493 |
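As a minimal numerical illustration of the adverse-gradient condition $dp/dx > 0$ defined above (a sketch only; the surface pressure values below are invented for illustration), the streamwise pressure gradient can be estimated from sampled pressures and the region where it turns adverse flagged:

```python
# Sketch: flag adverse-pressure-gradient regions (dp/dx > 0) along a surface
# from a sampled pressure distribution. All numbers are illustrative only.
import numpy as np

x = np.linspace(0.0, 1.0, 11)   # streamwise position in metres
p = np.array([101325.0, 101000.0, 100700.0, 100500.0, 100450.0,            # falling pressure
              100500.0, 100650.0, 100850.0, 101050.0, 101200.0, 101300.0])  # rising pressure

dpdx = np.gradient(p, x)        # finite-difference estimate of dp/dx in Pa/m
adverse = dpdx > 0.0            # True where static pressure rises downstream

for xi, gi, flag in zip(x, dpdx, adverse):
    print(f"x = {xi:4.2f} m  dp/dx = {gi:9.1f} Pa/m  ({'adverse' if flag else 'favourable'})")
```

In a real aerodynamic analysis the pressure distribution would come from measurement or a flow solver, and separation would be judged from the boundary-layer behaviour rather than from the sign of $dp/dx$ alone.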
Otto Bütschli Johann Adam Otto Bütschli (3 May 1848 – 2 February 1920) was a German zoologist and professor at the University of Heidelberg. He specialized in invertebrates and insect development. Many of the groups of protists were first recognized by him. Bütschli was born in Frankfurt am Main. He studied mineralogy, chemistry, and paleontology in Karlsruhe and became an assistant of Karl Alfred von Zittel (geology and paleontology). He moved to Heidelberg in 1866 and worked with Robert Bunsen (chemistry). He received his PhD from the University of Heidelberg in 1868, after passing examinations in geology, paleontology, and zoology. He joined Rudolf Leuckart at the University of Leipzig in 1869. After leaving his studies to serve as an officer in the Franco-Prussian War (1870–1871), Bütschli worked in his private laboratory and then for two years (1873–1874) with Karl Möbius at the University of Kiel. After that, he worked privately. In 1876, he completed his Habilitation. He became professor at the University of Heidelberg, as successor of Alexander Pagenstecher, in 1878. He held this position for over 40 years. | https://en.wikipedia.org/wiki?curid=239389 |
Virtual microscope The Virtual Microscope project is an initiative to make the micromorphology and behavior of some small organisms available online. Images from Antarctica and the Baltic Sea are available at no cost. Images are offered at higher magnification or lower resolution. The varieties of images offered include scanning electron microscopy and transmission electron microscopy, and they are accompanied by related publications for research. The site interface is deliberately kept simple, with tutorials offered in several areas. The editorial board consists of professors from several universities worldwide. Its global scope was added after its foundation, and it is supervised by Rutgers University. | https://en.wikipedia.org/wiki?curid=240693 |
Gaussian year A Gaussian year is defined as 365.2568983 days. It was adopted by Carl Friedrich Gauss as the length of the sidereal year in his studies of the dynamics of the solar system. A slightly different value is now accepted as the length of the sidereal year, and the value accepted by Gauss is given a special name. A particle of negligible mass that orbits a body of 1 solar mass in this period has an orbital semi-major axis of 1 astronomical unit by definition. The value is derived from Kepler's third law as $T = 2\pi/k$, where $k$ is the Gaussian gravitational constant. | https://en.wikipedia.org/wiki?curid=240953 |
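As a quick numerical check (a sketch, not part of the original text), the quoted period follows from Kepler's third law with a semi-major axis of 1 AU and negligible particle mass, using the Gaussian gravitational constant k = 0.01720209895 radians per day, a value adopted by Gauss but not stated above:

```python
# Sketch: recover the Gaussian year from Kepler's third law with a = 1 AU and
# negligible particle mass. The Gaussian gravitational constant k below is an
# assumed standard value (it is not given in the text above).
import math

k = 0.01720209895        # Gaussian gravitational constant, rad/day
T = 2 * math.pi / k      # orbital period in days for a = 1 AU, m << 1 solar mass
print(f"{T:.7f} days")   # ~365.2568983 days, matching the value quoted above
```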
Chemical species A chemical species is a chemical substance or ensemble composed of chemically identical molecular entities that can explore the same set of molecular energy levels on a characteristic or delineated time scale. These energy levels determine the way the chemical species will interact with others (engaging in chemical bonds, etc.). The species can be an atom, molecule, ion, or radical, and it has a chemical name and chemical formula. The term is also applied to a set of chemically identical atomic or molecular structural units in a solid array. In supramolecular chemistry, chemical species are those supramolecular structures whose interactions and associations are brought about via intermolecular bonding and debonding actions, and which function to form the basis of this branch of chemistry. For instance: | https://en.wikipedia.org/wiki?curid=241761 |
Nuclear chemistry is the sub-field of chemistry dealing with radioactivity, nuclear processes, and transformations in the nuclei of atoms, such as nuclear transmutation and nuclear properties. It is the chemistry of radioactive elements such as the actinides, radium and radon, together with the chemistry associated with equipment (such as nuclear reactors) which is designed to perform nuclear processes. This includes the corrosion of surfaces and the behavior under conditions of both normal and abnormal operation (such as during an accident). An important area is the behavior of objects and materials after being placed into a nuclear waste storage or disposal site. It includes the study of the chemical effects resulting from the absorption of radiation within living animals, plants, and other materials. Radiation chemistry controls much of radiation biology, as radiation has an effect on living things at the molecular scale: radiation alters the biochemicals within an organism, the alteration of these bio-molecules then changes the chemistry which occurs within the organism, and this change in chemistry can then lead to a biological outcome. As a result, nuclear chemistry greatly assists the understanding of medical treatments (such as cancer radiotherapy) and has enabled these treatments to improve. It includes the study of the production and use of radioactive sources for a range of processes. | https://en.wikipedia.org/wiki?curid=242001 |
Nuclear chemistry These include radiotherapy in medical applications; the use of radioactive tracers within industry, science and the environment; and the use of radiation to modify materials such as polymers. It also includes the study and use of nuclear processes in "non-radioactive" areas of human activity. For instance, nuclear magnetic resonance (NMR) spectroscopy is commonly used in synthetic organic chemistry and physical chemistry and for structural analysis in macro-molecular chemistry. Nuclear chemistry is concerned with the study of the nucleus, changes occurring in the nucleus, the properties of the particles present in the nucleus, and the emission or absorption of radiation from the nucleus. After Wilhelm Röntgen discovered X-rays in 1895, many scientists began to work on ionizing radiation. One of these was Henri Becquerel, who investigated the relationship between phosphorescence and the blackening of photographic plates. When Becquerel (working in France) discovered that, with no external source of energy, uranium generated rays which could blacken (or "fog") the photographic plate, radioactivity was discovered. Marie Curie (working in Paris) and her husband Pierre Curie isolated two new radioactive elements from uranium ore. They used radiometric methods to identify which stream the radioactivity was in after each chemical separation; they separated the uranium ore into each of the different chemical elements that were known at the time, and measured the radioactivity of each fraction. | https://en.wikipedia.org/wiki?curid=242001 |
Nuclear chemistry They then attempted to separate these radioactive fractions further, to isolate a smaller fraction with a higher specific activity (radioactivity divided by mass). In this way, they isolated polonium and radium. It was noticed in about 1901 that high doses of radiation could cause an injury in humans. Henri Becquerel had carried a sample of radium in his pocket and as a result he suffered a highly localized dose which resulted in a radiation burn. This injury resulted in the biological properties of radiation being investigated, which in time resulted in the development of medical treatments. Ernest Rutherford, working in Canada and England, showed that radioactive decay can be described by a simple equation (a linear first-order differential equation, now called first-order kinetics), implying that a given radioactive substance has a characteristic "half-life" (the time taken for the amount of radioactivity present in a source to diminish by half). He also coined the terms alpha, beta and gamma rays, he converted nitrogen into oxygen, and most importantly he supervised the students who did the Geiger–Marsden experiment (gold foil experiment) which showed that the 'plum pudding model' of the atom was wrong. In the plum pudding model, proposed by J. J. Thomson in 1904, the atom is composed of electrons surrounded by a 'cloud' of positive charge to balance the electrons' negative charge | https://en.wikipedia.org/wiki?curid=242001 |
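The first-order kinetics Rutherford described can be made explicit. As a minimal sketch in standard modern notation (not the notation of the original work), the decay law and half-life follow directly from the rate equation:

$$\frac{\mathrm{d}N}{\mathrm{d}t} = -\lambda N \quad\Longrightarrow\quad N(t) = N_0\,e^{-\lambda t}, \qquad t_{1/2} = \frac{\ln 2}{\lambda},$$

where $N$ is the number of undecayed nuclei, $N_0$ the initial number, and $\lambda$ the decay constant. After each interval of length $t_{1/2}$ the activity of the source halves, which is the characteristic behaviour referred to above.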
Nuclear chemistry To Rutherford, the gold foil experiment implied that the positive charge was confined to a very small nucleus leading first to the Rutherford model, and eventually to the Bohr model of the atom, where the positive nucleus is surrounded by the negative electrons. In 1934 Marie Curie's daughter (Irène Joliot-Curie) and son-in-law (Frédéric Joliot-Curie) were the first to create artificial radioactivity: they bombarded boron with alpha particles to make the neutron-poor isotope nitrogen-13; this isotope emitted positrons. In addition, they bombarded aluminium and magnesium with alpha particles to make new radioisotopes. Radiochemistry is the chemistry of radioactive materials, in which radioactive isotopes of elements are used to study the properties and chemical reactions of non-radioactive isotopes (often within radiochemistry the absence of radioactivity leads to a substance being described as being "inactive" as the isotopes are "stable"). For further details please see the page on radiochemistry. Radiation chemistry is the study of the chemical effects of radiation on matter; this is very different from radiochemistry as no radioactivity needs to be present in the material which is being chemically changed by the radiation. An example is the conversion of water into hydrogen gas and hydrogen peroxide. Prior to radiation chemistry, it was commonly believed that pure water could not be destroyed. Initial experiments were focused on understanding the effects of radiation on matter | https://en.wikipedia.org/wiki?curid=242001 |
Nuclear chemistry Using an X-ray generator, Fricke studied the biological effects of radiation as it became a common treatment option and diagnostic method. Fricke proposed, and subsequently proved, that the energy from X-rays was able to convert water into activated water, allowing it to react with dissolved species. Radiochemistry, radiation chemistry and nuclear chemical engineering play a very important role for uranium and thorium fuel precursors synthesis, starting from ores of these elements, fuel fabrication, coolant chemistry, fuel reprocessing, radioactive waste treatment and storage, monitoring of radioactive elements release during reactor operation and radioactive geological storage, etc. A combination of radiochemistry and radiation chemistry is used to study nuclear reactions such as fission and fusion. Some early evidence for nuclear fission was the formation of a short-lived radioisotope of barium which was isolated from neutron-irradiated uranium (barium-139, with a half-life of 83 minutes, and barium-140, with a half-life of 12.8 days, are major fission products of uranium). At the time, it was thought that this was a new radium isotope, as it was then standard radiochemical practice to use a barium sulfate carrier precipitate to assist in the isolation of radium | https://en.wikipedia.org/wiki?curid=242001 |
Nuclear chemistry More recently, a combination of radiochemical methods and nuclear physics has been used to try to make new 'superheavy' elements; it is thought that islands of relative stability exist where the nuclides have half-lives of years, thus enabling weighable amounts of the new elements to be isolated. For more details of the original discovery of nuclear fission see the work of Otto Hahn. Nuclear fuel cycle chemistry is the chemistry associated with any part of the nuclear fuel cycle, including nuclear reprocessing. The fuel cycle includes all the operations involved in producing fuel, from mining, ore processing and enrichment to fuel production (the "front end" of the cycle). It also includes the 'in-pile' behavior (use of the fuel in a reactor) before the "back end" of the cycle. The "back end" includes the management of the used nuclear fuel in either a spent fuel pool or dry storage, before it is disposed of into an underground waste store or reprocessed. The nuclear chemistry associated with the nuclear fuel cycle can be divided into two main areas: one area is concerned with operation under the intended conditions, while the other is concerned with maloperation conditions, where some alteration from the normal operating conditions has occurred or (more rarely) an accident is occurring. In the United States, it is normal to use fuel once in a power reactor before placing it in a waste store. The long-term plan is currently to place the used civilian reactor fuel in a deep store | https://en.wikipedia.org/wiki?curid=242001 |
Nuclear chemistry This non-reprocessing policy was started in March 1977 because of concerns about nuclear weapons proliferation. President Jimmy Carter issued a Presidential directive which indefinitely suspended the commercial reprocessing and recycling of plutonium in the United States. This directive was likely an attempt by the United States to lead other countries by example, but many other nations continue to reprocess spent nuclear fuels. The Russian government under President Vladimir Putin repealed a law which had banned the import of used nuclear fuel, which makes it possible for Russians to offer a reprocessing service for clients outside Russia (similar to that offered by BNFL). The current method of choice is to use the PUREX liquid-liquid extraction process, which uses a tributyl phosphate/hydrocarbon mixture to extract both uranium and plutonium from nitric acid. This extraction is of the nitrate salts and is classed as being of a solvation mechanism. For example, the extraction of plutonium(IV) by an extraction agent (S) in a nitrate medium occurs by a reaction of the form $\mathrm{Pu^{4+}_{(aq)} + 4\,NO_3^-{}_{(aq)} + 2\,S_{(org)} \rightleftharpoons Pu(NO_3)_4{\cdot}2S_{(org)}}$. A complex bond is formed between the metal cation, the nitrates and the tributyl phosphate, and a model compound of a dioxouranium(VI) complex with two nitrates and two triethyl phosphates has been characterised by X-ray crystallography. When the nitric acid concentration is high the extraction into the organic phase is favored, and when the nitric acid concentration is low the extraction is reversed (the organic phase is "stripped" of the metal) | https://en.wikipedia.org/wiki?curid=242001 |
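The dioxouranium(VI) complex mentioned above corresponds to the uranium analogue of this solvation equilibrium. As an illustrative sketch (the stoichiometry shown is the commonly cited textbook form, not a detail taken from this text), uranium extraction by tributyl phosphate (TBP) from nitrate media can be written as

$$\mathrm{UO_2^{2+}{}_{(aq)} + 2\,NO_3^-{}_{(aq)} + 2\,TBP_{(org)} \rightleftharpoons UO_2(NO_3)_2{\cdot}2TBP_{(org)}}$$

Because nitrate ions appear on the left-hand side, raising the nitric acid concentration drives the equilibrium towards the organic phase (extraction), while contacting the loaded organic phase with dilute acid reverses it (stripping), which is the behaviour described above.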
Nuclear chemistry It is normal to dissolve the used fuel in nitric acid; after the removal of the insoluble matter, the uranium and plutonium are extracted from the highly active liquor. It is normal to then back-extract the loaded organic phase to create a "medium active" liquor which contains mostly uranium and plutonium with only small traces of fission products. This medium active aqueous mixture is then extracted again by tributyl phosphate/hydrocarbon to form a new organic phase; the metal-bearing organic phase is then stripped of the metals to form an aqueous mixture of only uranium and plutonium. The two stages of extraction are used to improve the purity of the actinide product; the organic phase used for the first extraction will suffer a far greater dose of radiation. The radiation can degrade the tributyl phosphate into dibutyl hydrogen phosphate. The dibutyl hydrogen phosphate can act as an extraction agent for both the actinides and other metals such as ruthenium. The dibutyl hydrogen phosphate can make the system behave in a more complex manner, as it tends to extract metals by an ion exchange mechanism (extraction favoured by low acid concentration). To reduce the effect of the dibutyl hydrogen phosphate, it is common for the used organic phase to be washed with sodium carbonate solution to remove the acidic degradation products of the tributyl phosphate | https://en.wikipedia.org/wiki?curid=242001 |
Nuclear chemistry The PUREX process can be modified to make a UREX (URanium EXtraction) process which could be used to save space inside high level nuclear waste disposal sites, such as the Yucca Mountain nuclear waste repository, by removing the uranium which makes up the vast majority of the mass and volume of used fuel and recycling it as reprocessed uranium. The UREX process is a PUREX process which has been modified to prevent the plutonium being extracted. This can be done by adding a plutonium reductant before the first metal extraction step. In the UREX process, ~99.9% of the uranium and >95% of the technetium are separated from each other and from the other fission products and actinides. The key is the addition of acetohydroxamic acid (AHA) to the extraction and scrub sections of the process. The addition of AHA greatly diminishes the extractability of plutonium and neptunium, providing greater proliferation resistance than with the plutonium extraction stage of the PUREX process. By adding a second extraction agent, octyl(phenyl)-N,N-dibutyl carbamoylmethyl phosphine oxide (CMPO), in combination with tributyl phosphate (TBP), the PUREX process can be turned into the TRUEX (TRansUranic EXtraction) process; this process was invented in the US by Argonne National Laboratory and is designed to remove the transuranic metals (Am/Cm) from waste. The idea is that by lowering the alpha activity of the waste, the majority of the waste can then be disposed of with greater ease | https://en.wikipedia.org/wiki?curid=242001 |
Nuclear chemistry In common with PUREX this process operates by a solvation mechanism. As an alternative to TRUEX, an extraction process using a malondiamide has been devised. The DIAMEX (DIAMide EXtraction) process has the advantage of avoiding the formation of organic waste which contains elements other than carbon, hydrogen, nitrogen, and oxygen. Such an organic waste can be burned without the formation of acidic gases which could contribute to acid rain. The DIAMEX process is being worked on in Europe by the French CEA. The process is sufficiently mature that an industrial plant could be constructed with the existing knowledge of the process. In common with PUREX this process operates by a solvation mechanism. Selective Actinide Extraction (SANEX) is a further proposed separation step. As part of the management of minor actinides, it has been proposed that the lanthanides and trivalent minor actinides should be removed from the PUREX raffinate by a process such as DIAMEX or TRUEX. In order to allow the actinides such as americium to be either reused in industrial sources or used as fuel, the lanthanides must be removed. The lanthanides have large neutron cross sections and hence they would poison a neutron-driven nuclear reaction. To date, the extraction system for the SANEX process has not been defined, but currently several different research groups are working towards a process. For instance, the French CEA is working on a bis-triazinyl pyridine (BTP) based process. Other systems such as the dithiophosphinic acids are being worked on by other groups | https://en.wikipedia.org/wiki?curid=242001 |
Nuclear chemistry This is the "UNiversal" "EX"traction process which was developed in Russia and the Czech Republic, it is a process designed to remove all of the most troublesome (Sr, Cs and minor actinides) radioisotopes from the raffinates left after the extraction of uranium and plutonium from used nuclear fuel. The chemistry is based upon the interaction of caesium and strontium with poly ethylene oxide (poly ethylene glycol) and a cobalt carborane anion (known as chlorinated cobalt dicarbollide). The actinides are extracted by CMPO, and the diluent is a polar aromatic such as nitrobenzene. Other dilents such as "meta"-nitrobenzotrifluoride and phenyl trifluoromethyl sulfone have been suggested as well. Another important area of nuclear chemistry is the study of how fission products interact with surfaces; this is thought to control the rate of release and migration of fission products both from waste containers under normal conditions and from power reactors under accident conditions. Like chromate and molybdate, the TcO anion can react with steel surfaces to form a corrosion resistant layer. In this way, these metaloxo anions act as anodic corrosion inhibitors. The formation of TcO on steel surfaces is one effect which will retard the release of Tc from nuclear waste drums and nuclear equipment which has been lost before decontamination (e.g. submarine reactors lost at sea). This TcO layer renders the steel surface passive, inhibiting the anodic corrosion reaction | https://en.wikipedia.org/wiki?curid=242001 |
Nuclear chemistry The radioactive nature of technetium makes this corrosion protection impractical in almost all situations. It has also been shown that pertechnetate (TcO4−) anions react to form a layer on the surface of activated carbon (charcoal) or aluminium. A short review of the biochemical properties of a series of key long-lived radioisotopes can be read online. Technetium in nuclear waste may exist in chemical forms other than the pertechnetate anion; these other forms have different chemical properties. Similarly, the release of iodine-131 in a serious power reactor accident could be retarded by absorption on metal surfaces within the nuclear plant. Despite the growing use of nuclear medicine, the potential expansion of nuclear power plants, and worries about protection against nuclear threats and the management of the nuclear waste generated in past decades, the number of students opting to specialize in nuclear and radiochemistry has decreased significantly over the past few decades. Now, with many experts in these fields approaching retirement age, action is needed to avoid a workforce gap in these critical fields, for example by building student interest in these careers, expanding the educational capacity of universities and colleges, and providing more specific on-the-job training. Nuclear and Radiochemistry (NRC) is mostly taught at university level, usually first at the Master's and PhD degree level. In Europe, a substantial effort is being made to harmonize and prepare NRC education for the industry's and society's future needs | https://en.wikipedia.org/wiki?curid=242001 |
Nuclear chemistry This effort is being coordinated in a project funded by the Coordinated Action supported by the European Atomic Energy Community's 7th Framework Programme. Although NucWik is primarily aimed at teachers, anyone interested in nuclear and radiochemistry is welcome and can find a lot of information and material explaining topics related to NRC. Some methods first developed within nuclear chemistry and physics have become so widely used within chemistry and other physical sciences that they may be best thought of as separate from "normal" nuclear chemistry. For example, the isotope effect is used so extensively to investigate chemical mechanisms, and cosmogenic and long-lived unstable isotopes are used so extensively in geology, that it is best to consider much of isotopic chemistry as separate from nuclear chemistry. The mechanisms of chemical reactions can be investigated by observing how the kinetics of a reaction is changed by making an isotopic modification of a substrate, known as the kinetic isotope effect. This is now a standard method in organic chemistry. Briefly, replacing normal hydrogen (protons) by deuterium within a molecule causes the molecular vibrational frequency of X-H (for example C-H, N-H and O-H) bonds to decrease, which leads to a decrease in vibrational zero-point energy. This can lead to a decrease in the reaction rate if the rate-determining step involves breaking a bond between hydrogen and another atom | https://en.wikipedia.org/wiki?curid=242001 |
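A minimal quantitative sketch of this zero-point-energy argument, treating the X-H stretch as a harmonic oscillator (an approximation introduced here purely for illustration): the zero-point energy is

$$E_0 = \tfrac{1}{2}\hbar\omega, \qquad \omega = \sqrt{k/\mu},$$

where $k$ is the bond force constant and $\mu$ the reduced mass of the vibrating pair. Substituting deuterium for hydrogen roughly doubles $\mu$ while leaving $k$ essentially unchanged, so $\omega$ and hence $E_0$ drop by a factor of about $\sqrt{2}$; the deuterated bond therefore sits deeper in its potential well and requires more energy to break, which is why rate ratios $k_\mathrm{H}/k_\mathrm{D} > 1$ are observed when that bond is broken in the rate-determining step.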
Nuclear chemistry Thus, if the reaction changes in rate when protons are replaced by deuteriums, it is reasonable to assume that the breaking of the bond to hydrogen is part of the step which determines the rate. Cosmogenic isotopes are formed by the interaction of cosmic rays with the nucleus of an atom. These can be used for dating purposes and as natural tracers. In addition, by careful measurement of the ratios of some stable isotopes it is possible to obtain new insights into the origin of bullets, the ages of ice samples, the ages of rocks, and even a person's diet, which can be identified from a hair or other tissue sample. (See Isotope geochemistry and Isotopic signature for further details). Within living things, isotopic labels (both radioactive and nonradioactive) can be used to probe how the complex web of reactions which makes up the metabolism of an organism converts one substance to another. For instance, a green plant uses light energy to convert water and carbon dioxide into glucose by photosynthesis. If the oxygen in the water is labeled, then the label appears in the oxygen gas formed by the plant and not in the glucose formed in the chloroplasts within the plant cells. For biochemical and physiological experiments and medical methods, a number of specific isotopes have important applications. By organic synthesis it is possible to create a complex molecule with a radioactive label that can be confined to a small area of the molecule | https://en.wikipedia.org/wiki?curid=242001 |
Nuclear chemistry For short-lived isotopes such as carbon-11, very rapid synthetic methods have been developed to permit the rapid addition of the radioactive isotope to the molecule. For instance, a palladium-catalysed carbonylation reaction in a microfluidic device has been used to rapidly form amides, and it might be possible to use this method to form radioactive imaging agents for PET imaging. Nuclear spectroscopy comprises methods that use the nucleus to obtain information about the local structure of matter. Important methods are NMR (see below), Mössbauer spectroscopy and perturbed angular correlation. These methods use the interaction of the hyperfine field with the nucleus's spin. The field can be magnetic and/or electric and is created by the electrons of the atom and its surrounding neighbours. Thus, these methods investigate the local structure of matter, mainly condensed matter in condensed matter physics and solid state chemistry. NMR spectroscopy uses the net spin of nuclei in a substance upon energy absorption to identify molecules. This has now become a standard spectroscopic tool within synthetic chemistry. One major use of NMR is to determine the bond connectivity within an organic molecule. NMR imaging also uses the net spin of nuclei (commonly protons) for imaging. This is widely used for diagnostic purposes in medicine, and can provide detailed images of the inside of a person without inflicting any radiation upon them | https://en.wikipedia.org/wiki?curid=242001 |
Nuclear chemistry In a medical setting, NMR is often known simply as "magnetic resonance" imaging, as the word 'nuclear' has negative connotations for many people. | https://en.wikipedia.org/wiki?curid=242001 |
Nucleocosmochronology or nuclear cosmochronology is a technique used to determine timescales for astrophysical objects and events. It compares the observed ratios of abundances of heavy radioactive and stable nuclides to the primordial ratios predicted by nucleosynthesis theory in order to calculate the age of formation of astronomical objects. Nucleocosmochronology has been employed to determine the age of the Sun and of the Galactic thin disk, among others. It has also been used to estimate the age of the Milky Way itself, as exemplified by a recent study of Cayrel's Star in the Galactic halo, which, due to its low metallicity, is believed to have formed early in the history of the Galaxy. Limiting factors in its precision are the quality of observations of faint stars and the uncertainty of the primordial abundances of r-process elements. | https://en.wikipedia.org/wiki?curid=242282 |
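As a schematic illustration of how such an age is extracted (a simplified single-chronometer form; actual studies combine several radioactive/stable pairs and model the production history rather than assuming a single event), comparing the predicted production ratio of a radioactive nuclide r to a stable reference nuclide s with the ratio observed today gives

$$t = \frac{1}{\lambda_r}\,\ln\!\left[\frac{(N_r/N_s)_{\mathrm{production}}}{(N_r/N_s)_{\mathrm{observed}}}\right],$$

where $\lambda_r$ is the decay constant of the radioactive species. The result is only as good as the assumed production ratio, which is why the uncertainty in primordial r-process abundances limits the precision, as noted above.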
Inferior and superior planets In the Solar System, a planet is said to be inferior or interior with respect to another planet if its orbit lies inside the other planet's orbit around the Sun. In this situation, the latter planet is said to be superior to the former. In the reference frame of the Earth, in which the terms were originally used, the inferior planets are Mercury and Venus, while the superior planets are Mars, Jupiter, Saturn, Uranus and Neptune. Dwarf planets like Ceres or Pluto and most asteroids are 'superior' in the sense that they almost all orbit outside the orbit of Earth. These terms were originally used in the geocentric cosmology of Claudius Ptolemy to differentiate as inferior those planets (Mercury and Venus) whose epicycle remained co-linear with the Earth and Sun, and as superior those planets (Mars, Jupiter, and Saturn) that did not. In the 16th century, the terms were modified by Copernicus, who rejected Ptolemy's geocentric model, to refer to the size of a planet's orbit in relation to the Earth's. When Earth is stated or assumed to be the reference point, these are the standard assignments. The terms are sometimes used more generally; for example, Earth is an inferior planet relative to Mars. Interior planet now seems to be the preferred term for astronomers. Inferior/interior and superior are different from the terms inner planet and outer planet, which designate those planets which lie inside the asteroid belt and those that lie outside it, respectively. Inferior planet is also different from minor planet or dwarf planet | https://en.wikipedia.org/wiki?curid=243441 |
Inferior and superior planets Superior planet is also different from gas giant. | https://en.wikipedia.org/wiki?curid=243441 |
Scientific law Scientific laws or laws of science are statements, based on repeated experiments or observations, that describe or predict a range of natural phenomena. The term "law" has diverse usage in many cases (approximate, accurate, broad, or narrow) across all fields of natural science (physics, chemistry, biology, Earth science). Laws are developed from data and can be further developed through mathematics; in all cases they are directly or indirectly based on empirical evidence. It is generally understood that they implicitly reflect, though they do not explicitly assert, causal relationships fundamental to reality, and are discovered rather than invented. Scientific laws summarize the results of experiments or observations, usually within a certain range of application. In general, the accuracy of a law does not change when a new theory of the relevant phenomenon is worked out, but rather the scope of the law's application, since the mathematics or statement representing the law does not change. As with other kinds of scientific knowledge, laws do not have absolute certainty (as mathematical theorems or identities do), and it is always possible for a law to be contradicted, restricted, or extended by future observations. A law can usually be formulated as one or several statements or equations, so that it can be used to predict the outcome of an experiment, given the circumstances of the processes taking place | https://en.wikipedia.org/wiki?curid=244629 |
Scientific law Laws differ from hypotheses and postulates, which are proposed during the scientific process before and during validation by experiment and observation. Hypotheses and postulates are not laws since they have not been verified to the same degree, although they may lead to the formulation of laws. Laws are narrower in scope than scientific theories, which may entail one or several laws. Science distinguishes a law or theory from facts. Calling a law a fact is ambiguous, an overstatement, or an equivocation. The nature of scientific laws has been much discussed in philosophy, but in essence scientific laws are simply empirical conclusions reached by scientific method; they are intended to be neither laden with ontological commitments nor statements of logical absolutes. A scientific law always applies to a physical system under the same conditions, and it implies that there is a causal relationship involving the elements of the system. Factual and well-confirmed statements like "Mercury is liquid at standard temperature and pressure" are considered too specific to qualify as scientific laws. A central problem in the philosophy of science, going back to David Hume, is that of distinguishing causal relationships (such as those implied by laws) from principles that arise due to constant conjunction. Laws differ from scientific theories in that they do not posit a mechanism or explanation of phenomena: they are merely distillations of the results of repeated observation | https://en.wikipedia.org/wiki?curid=244629 |
Scientific law As such, a law is limited in applicability to circumstances resembling those already observed, and may be found false when extrapolated. Ohm's law only applies to linear networks, Newton's law of universal gravitation only applies in weak gravitational fields, the early laws of aerodynamics such as Bernoulli's principle do not apply in case of compressible flow such as occurs in transonic and supersonic flight, Hooke's law only applies to strain below the elastic limit, Boyle's law applies with perfect accuracy only to the ideal gas, etc. These laws remain useful, but only under the conditions where they apply. Many laws take mathematical forms, and thus can be stated as an equation; for example, the law of conservation of energy can be written as $\frac{\mathrm{d}E}{\mathrm{d}t} = 0$, where $E$ is the total amount of energy in the universe. Similarly, the first law of thermodynamics can be written as $\mathrm{d}U = \delta Q - \delta W$, and Newton's second law can be written as $F = \frac{\mathrm{d}p}{\mathrm{d}t}$. While these scientific laws explain what our senses perceive, they are still empirical, and so are not like mathematical theorems (which can be proved purely by mathematics and not by scientific experiment). Like theories and hypotheses, laws make predictions (specifically, they predict that new observations will conform to the law), and can be falsified if they are found in contradiction with new data. Some laws are only approximations of other more general laws, and are good approximations with a restricted domain of applicability | https://en.wikipedia.org/wiki?curid=244629 |
Scientific law For example, Newtonian dynamics (which is based on Galilean transformations) is the low-speed limit of special relativity (since the Galilean transformation is the low-speed approximation to the Lorentz transformation). Similarly, the Newtonian gravitation law is a low-mass approximation of general relativity, and Coulomb's law is an approximation to Quantum Electrodynamics at large distances (compared to the range of weak interactions). In such cases it is common to use the simpler, approximate versions of the laws, instead of the more accurate general laws. Laws are constantly being tested experimentally to higher and higher degrees of precision. This is one of the main goals of science. Just because laws have never been observed to be violated does not preclude testing them at increased accuracy or in new kinds of conditions to confirm whether they continue to hold, or whether they break, and what can be discovered in the process. It is always possible for laws to be invalidated or proven to have limitations, by repeatable experimental evidence, should any be observed. Well-established laws have indeed been invalidated in some special cases, but the new formulations created to explain the discrepancies generalize upon, rather than overthrow, the originals. That is, the invalidated laws have been found to be only close approximations, to which other terms or factors must be added to cover previously unaccounted-for conditions, e.g | https://en.wikipedia.org/wiki?curid=244629 |
Scientific law very large or very small scales of time or space, enormous speeds or masses, etc. Thus, rather than unchanging knowledge, physical laws are better viewed as a series of improving and more precise generalizations. Scientific laws are typically conclusions based on repeated scientific experiments and observations over many years and which have become accepted universally within the scientific community. A scientific law is "inferred from particular facts, applicable to a defined group or class of phenomena, and expressible by the statement that a particular phenomenon always occurs if certain conditions be present." The production of a summary description of our environment in the form of such laws is a fundamental aim of science. Several general properties of scientific laws, particularly when referring to laws in physics, have been identified. The term "scientific law" is traditionally associated with the natural sciences, though the social sciences also contain laws. For example, Zipf's law is a law in the social sciences which is based on mathematical statistics. In these cases, laws may describe general trends or expected behaviors rather than being absolutes. Some laws reflect mathematical symmetries found in Nature (e.g. the Pauli exclusion principle reflects identity of electrons, conservation laws reflect homogeneity of space, time, and Lorentz transformations reflect rotational symmetry of spacetime) | https://en.wikipedia.org/wiki?curid=244629 |
Scientific law Many fundamental physical laws are mathematical consequences of various symmetries of space, time, or other aspects of nature. Specifically, Noether's theorem connects some conservation laws to certain symmetries. For example, conservation of energy is a consequence of the shift symmetry of time (no moment of time is different from any other), while conservation of momentum is a consequence of the symmetry (homogeneity) of space (no place in space is special, or different from any other). The indistinguishability of all particles of each fundamental type (say, electrons, or photons) results in the Fermi–Dirac and Bose–Einstein quantum statistics which in turn result in the Pauli exclusion principle for fermions and in Bose–Einstein condensation for bosons. The rotational symmetry between time and space coordinate axes (when one is taken as imaginary, another as real) results in Lorentz transformations which in turn result in special relativity theory. Symmetry between inertial and gravitational mass results in general relativity. The inverse square law of interactions mediated by massless bosons is the mathematical consequence of the 3-dimensionality of space. One strategy in the search for the most fundamental laws of nature is to search for the most general mathematical symmetry group that can be applied to the fundamental interactions. Most significant laws in science are conservation laws. These fundamental laws follow from homogeneity of space, time and phase, in other words "symmetry" | https://en.wikipedia.org/wiki?curid=244629 |
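A minimal worked illustration of the momentum case, offered as a sketch rather than a general proof of Noether's theorem: consider two particles whose potential energy depends only on their separation, $V = V(x_1 - x_2)$, which is exactly what homogeneity of space (no special location) requires. Then

$$F_1 = -\frac{\partial V}{\partial x_1} = -V'(x_1 - x_2), \qquad F_2 = -\frac{\partial V}{\partial x_2} = +V'(x_1 - x_2),$$

so $F_1 + F_2 = 0$ and therefore $\frac{\mathrm{d}}{\mathrm{d}t}(p_1 + p_2) = 0$: the total momentum is conserved. Breaking the symmetry, for instance with an external potential that singles out particular positions, destroys the conservation law in the same way.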
Scientific law Conservation laws can be expressed using the general continuity equation: for a conserved quantity, it can be written in differential form as $\frac{\partial\rho}{\partial t} + \nabla\cdot\mathbf{J} = 0$, where ρ is some quantity per unit volume and J is the flux of that quantity (change in quantity per unit time per unit area). Intuitively, the divergence (denoted ∇•) of a vector field is a measure of flux diverging radially outwards from a point, so its negative is the amount piling up at a point; hence the rate of change of density in a region of space must be the amount of flux leaving or collecting in that region (see the main article for details). In the table below, the fluxes, flows for various physical quantities in transport, and their associated continuity equations, are collected for comparison. More general equations are the convection–diffusion equation and the Boltzmann transport equation, which have their roots in the continuity equation. All of classical mechanics, including Newton's laws, Lagrange's equations, Hamilton's equations, etc., can be derived from the very simple principle of stationary action, $\delta S = 0$, where $S = \int_{t_1}^{t_2} L\,\mathrm{d}t$ is the action: the integral of the Lagrangian of the physical system between two times $t_1$ and $t_2$. The kinetic energy of the system is T (a function of the rate of change of the configuration of the system), and the potential energy is V (a function of the configuration and its rate of change). The configuration of a system which has N degrees of freedom is defined by generalized coordinates $\mathbf{q} = (q_1, q_2, \dots, q_N)$ | https://en.wikipedia.org/wiki?curid=244629 |
Scientific law There are generalized momenta conjugate to these coordinates, $\mathbf{p} = (p_1, p_2, \dots, p_N)$, where $p_i = \partial L/\partial \dot q_i$. The action and Lagrangian both contain the dynamics of the system for all times. The term "path" simply refers to a curve traced out by the system in terms of the generalized coordinates in the configuration space, i.e. the curve q(t), parameterized by time (see also parametric equation for this concept). The action is a "functional" rather than a "function", since it depends on the Lagrangian, and the Lagrangian depends on the path q(t), so the action depends on the "entire" "shape" of the path for all times (in the time interval from $t_1$ to $t_2$). Between two instants of time, there are infinitely many paths, but the one for which the action is stationary (to first order) is the true path. The stationary value for the "entire continuum" of Lagrangian values corresponding to some path, "not just one value" of the Lagrangian, is required (in other words it is "not" as simple as "differentiating a function and setting it to zero, then solving the equations to find the points of maxima and minima etc."; rather, this idea is applied to the entire "shape" of the function; see calculus of variations for more details on this procedure). Notice that L is "not" the total energy E of the system, because it involves the difference rather than the sum: $L = T - V$, whereas $E = T + V$. The following general approaches to classical mechanics are summarized below in the order of establishment | https://en.wikipedia.org/wiki?curid=244629 |
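A compact statement of what the stationarity condition implies, offered as a standard sketch rather than material from the original text: requiring $\delta S = 0$ for all variations of the path that vanish at the endpoints yields the Euler–Lagrange equations,

$$\frac{\mathrm{d}}{\mathrm{d}t}\frac{\partial L}{\partial \dot q_i} - \frac{\partial L}{\partial q_i} = 0 .$$

For a single particle with $L = \tfrac{1}{2}m\dot q^{2} - V(q)$ this reduces to $m\ddot q = -\mathrm{d}V/\mathrm{d}q$, i.e. Newton's second law, which illustrates the equivalence of the formulations discussed next.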
Scientific law They are equivalent formulations; Newton's is very commonly used due to simplicity, but Hamilton's and Lagrange's equations are more general, and their range can extend into other branches of physics with suitable modifications. From the above, any equation of motion in classical mechanics can be derived. Equations describing fluid flow in various situations can be derived, using the above classical equations of motion and often conservation of mass, energy and momentum. Some elementary examples follow. Some of the more famous laws of nature are found in Isaac Newton's theories of (now) classical mechanics, presented in his "Philosophiae Naturalis Principia Mathematica", and in Albert Einstein's theory of relativity. The postulates of special relativity are not "laws" in themselves, but assumptions about the nature of "relative motion". Often two are stated as "the laws of physics are the same in all inertial frames" and "the speed of light is constant". However the second is redundant, since the speed of light is predicted by Maxwell's equations. Essentially there is only one. The said postulate leads to the Lorentz transformations – the transformation law between two frames of reference moving relative to each other. For any 4-vector this replaces the Galilean transformation law from classical mechanics. The Lorentz transformations reduce to the Galilean transformations for low velocities much less than the speed of light "c" | https://en.wikipedia.org/wiki?curid=244629 |
Scientific law The magnitudes of 4-vectors are invariants: "not" conserved, but the same for all inertial frames (i.e. every observer in an inertial frame will agree on the same value). In particular, if "A" is the four-momentum, its magnitude yields the famous invariant equation relating mass, energy and momentum (see invariant mass), $E^2 = (pc)^2 + (mc^2)^2$, in which the (more famous) mass-energy equivalence $E = mc^2$ is a special case. General relativity is governed by the Einstein field equations, which describe the curvature of space-time due to mass-energy equivalent to the gravitational field. Solving the equation for the geometry of space warped due to the mass distribution gives the metric tensor. Using the geodesic equation, the motion of masses falling along the geodesics can be calculated. In a relatively flat spacetime due to weak gravitational fields, gravitational analogues of Maxwell's equations can be found: the GEM equations, which describe an analogous "gravitomagnetic field". They are well established by the theory, and experimental tests form ongoing research. These equations can be modified to include magnetic monopoles, and are consistent with our observations of monopoles either existing or not existing; if they do not exist, the generalized equations reduce to the ones above, if they do, the equations become fully symmetric in electric and magnetic charges and currents | https://en.wikipedia.org/wiki?curid=244629 |
Scientific law Indeed, there is a duality transformation where electric and magnetic charges can be "rotated into one another", and still satisfy Maxwell's equations. These laws were found before the formulation of Maxwell's equations. They are not fundamental, since they can be derived from Maxwell's equations. Coulomb's law can be found from Gauss's law (electrostatic form) and the Biot–Savart law can be deduced from Ampère's law (magnetostatic form). Lenz's law and Faraday's law can be incorporated into the Maxwell–Faraday equation. Nonetheless they are still very effective for simple calculations. Classically, optics is based on a variational principle: light travels from one point in space to another in the shortest time. In geometric optics laws are based on approximations in Euclidean geometry (such as the paraxial approximation). In physical optics, laws are based on physical properties of materials. In actuality, optical properties of matter are significantly more complex and require quantum mechanics. Quantum mechanics has its roots in postulates. This leads to results which are not usually called "laws", but hold the same status, in that all of quantum mechanics follows from them. One postulate is that a particle (or a system of many particles) is described by a wavefunction, and this satisfies a quantum wave equation: namely the Schrödinger equation (which can be written as a non-relativistic wave equation, or a relativistic wave equation) | https://en.wikipedia.org/wiki?curid=244629 |
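For concreteness, the non-relativistic form of that wave equation is the time-dependent Schrödinger equation, quoted here as a standard statement with $\hat H$ the Hamiltonian operator:

$$ i\hbar\,\frac{\partial}{\partial t}\Psi(\mathbf{r},t) = \hat H\,\Psi(\mathbf{r},t) = \left[-\frac{\hbar^{2}}{2m}\nabla^{2} + V(\mathbf{r},t)\right]\Psi(\mathbf{r},t), $$

with the Klein–Gordon and Dirac equations as the usual relativistic counterparts.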
Scientific law Solving this wave equation predicts the time-evolution of the system's behaviour, analogous to solving Newton's laws in classical mechanics. Other postulates change the idea of physical observables: observables are represented by quantum operators; some measurements cannot be made at the same instant of time (uncertainty principles); and particles are fundamentally indistinguishable. Another postulate, the wavefunction collapse postulate, counters the usual idea of a measurement in science. Applying electromagnetism, thermodynamics and quantum mechanics to atoms and molecules, some laws of electromagnetic radiation and light are as follows. Chemical laws are those laws of nature relevant to chemistry. Historically, observations led to many empirical laws, though now it is known that chemistry has its foundations in quantum mechanics. The most fundamental concept in chemistry is the law of conservation of mass, which states that there is no detectable change in the quantity of matter during an ordinary chemical reaction. Modern physics shows that it is actually energy that is conserved, and that energy and mass are related; a concept which becomes important in nuclear chemistry. Conservation of energy leads to the important concepts of equilibrium, thermodynamics, and kinetics. Additional laws of chemistry elaborate on the law of conservation of mass | https://en.wikipedia.org/wiki?curid=244629 |
Scientific law Joseph Proust's law of definite composition says that pure chemicals are composed of elements in a definite formulation; we now know that the structural arrangement of these elements is also important. Dalton's law of multiple proportions says that these chemicals will present themselves in proportions that are small whole numbers; although in many systems (notably biomacromolecules and minerals) the ratios tend to require large numbers, and are frequently represented as a fraction. The law of definite composition and the law of multiple proportions are the first two of the three laws of stoichiometry, the proportions by which the chemical elements combine to form chemical compounds. The third law of stoichiometry is the law of reciprocal proportions, which provides the basis for establishing equivalent weights for each chemical element. Elemental equivalent weights can then be used to derive atomic weights for each element. More modern laws of chemistry define the relationship between energy and its transformations. Some mathematical theorems and axioms are referred to as laws because they provide logical foundation to empirical laws. Examples of other observed phenomena sometimes described as laws include the Titius–Bode law of planetary positions, Zipf's law of linguistics, and Moore's law of technological growth. Many of these laws fall within the scope of uncomfortable science. Other laws are pragmatic and observational, such as the law of unintended consequences | https://en.wikipedia.org/wiki?curid=244629 |
Scientific law By analogy, principles in other fields of study are sometimes loosely referred to as "laws". These include Occam's razor as a principle of philosophy and the Pareto principle of economics. The observation that there are underlying regularities in nature dates from prehistoric times, since the recognition of cause-and-effect relationships is an implicit recognition that there are laws of nature. The recognition of such regularities as independent scientific laws "per se", though, was limited by their entanglement in animism, and by the attribution of many effects that do not have readily obvious causes—such as meteorological, astronomical and biological phenomena—to the actions of various gods, spirits, supernatural beings, etc. Observation and speculation about nature were intimately bound up with metaphysics and morality. According to a positivist view, when compared to pre-modern accounts of causality, laws of nature replace the need for divine causality on the one hand, and accounts such as Plato's theory of forms on the other. In Europe, systematic theorizing about nature ("physis") began with the early Greek philosophers and scientists and continued into the Hellenistic and Roman imperial periods, during which times the intellectual influence of Roman law increasingly became paramount. The formula "law of nature" first appears as "a live metaphor" favored by Latin poets Lucretius, Virgil, Ovid, Manilius, in time gaining a firm theoretical presence in the prose treatises of Seneca and Pliny | https://en.wikipedia.org/wiki?curid=244629 |
Scientific law Why this Roman origin? According to [historian and classicist Daryn] Lehoux's persuasive narrative, the idea was made possible by the pivotal role of codified law and forensic argument in Roman life and culture. For the Romans . . . the place par excellence where ethics, law, nature, religion and politics overlap is the law court. When we read Seneca's "Natural Questions", and watch again and again just how he applies standards of evidence, witness evaluation, argument and proof, we can recognize that we are reading one of the great Roman rhetoricians of the age, thoroughly immersed in forensic method. And not Seneca alone. Legal models of scientific judgment turn up all over the place, and for example prove equally integral to Ptolemy's approach to verification, where the mind is assigned the role of magistrate, the senses that of disclosure of evidence, and dialectical reason that of the law itself. The precise formulation of what are now recognized as modern and valid statements of the laws of nature dates from the 17th century in Europe, with the beginning of accurate experimentation and development of advanced forms of mathematics. During this period, natural philosophers such as Isaac Newton were influenced by a religious view which held that God had instituted absolute, universal and immutable physical laws. In chapter 7 of "The World", René Descartes described "nature" as matter itself, unchanging as created by God, thus changes in parts "are to be attributed to nature | https://en.wikipedia.org/wiki?curid=244629 |
Scientific law The rules according to which these changes take place I call the 'laws of nature'." The modern scientific method which took shape at this time (with Francis Bacon and Galileo) aimed at total separation of science from theology, with minimal speculation about metaphysics and ethics. Natural law in the political sense, conceived as universal (i.e., divorced from sectarian religion and accidents of place), was also elaborated in this period (by Grotius, Spinoza, and Hobbes, to name a few). The distinction between natural law in the political-legal sense and law of nature or physical law in the scientific sense is a modern one, both concepts being equally derived from "physis", the Greek word (translated into Latin as "natura") for "nature". | https://en.wikipedia.org/wiki?curid=244629 |
Proplyd A proplyd, a syllabic abbreviation of an ionized protoplanetary disk, is an externally illuminated photoevaporating disk around a young star. Nearly 180 proplyds have been discovered in the Orion Nebula. Images of proplyds in other star-forming regions are rare, while Orion is the only region with a large known sample due to its relative proximity to Earth. In 1979 observations with the Lallemand electronic camera at the Pic-du-Midi Observatory showed six unresolved high-ionization sources near the Trapezium Cluster. These sources were not interpreted as proplyds, but as partly ionized globules (PIGs). The idea was that these objects are being ionized from the outside by M42. Later observations with the Very Large Array showed solar-system-sized condensations associated with these sources. Here the idea appeared that these objects might be low-mass stars surrounded by an evaporating protostellar accretion disk. Proplyds were clearly resolved in 1993 using images of the Hubble Space Telescope Wide Field Camera and the term "proplyd" was used. In the Orion Nebula the proplyds observed are usually one of two types. Some proplyds glow around luminous stars, in cases where the disk is found close to the star, glowing from the star's luminosity. Other proplyds are found at a greater distance from the host star and instead show up as dark silhouettes due to the self-obscuration of cooler dust and gases from the disk itself. Some proplyds show signs of movement from solar irradiance shock waves pushing the proplyds | https://en.wikipedia.org/wiki?curid=245434 |
Proplyd The Orion Nebula is approximately 1,500 light-years from the Sun with very active star formation. The Orion Nebula and the Sun are in the same spiral arm of the Milky Way galaxy. A proplyd may form new planets and planetesimal systems. Current models show that the metallicity of the star and proplyd, along with the correct planetary system temperature and distance from the star, are keys to planet and planetesimal formation. To date, the solar system, with 8 planets, 5 dwarf planets and 5 planetesimal systems, is the largest planetary system found. Most proplyds develop into a system with no planetesimal systems, or into one very large planetesimal system. Photoevaporating proplyds in other star-forming regions were found with the Hubble Space Telescope. NGC 1977 currently represents the star-forming region with the largest number of proplyds outside of the Orion Nebula, with 7 proplyds. It is also the first and currently only instance where a B-type star, 42 Orionis, is responsible for the photoevaporation. Another type of photoevaporating proplyd was discovered with the Spitzer Space Telescope; these show cometary tails that represent dust being pulled away from the disks. Westerhout 5 is a region with many dusty proplyds, especially around HD 17505. These dusty proplyds are depleted of any gas in the outer regions of the disk, but the photoevaporation could leave an inner, more robust, and possibly gas-rich disk component of radius 5-10 astronomical units | https://en.wikipedia.org/wiki?curid=245434 |
Proplyd The proplyds in the Orion Nebula and other star-forming regions represent proto-planetary disks around low-mass stars being externally photoevaporated. These low-mass proplyds are usually found within 0.3 parsec (60,000 astronomical units) of the massive OB star, and the dusty proplyds have tails with a length of 0.1 to 0.2 parsec (20,000 to 40,000 au). There is a proposed type of intermediate-mass counterpart, called proplyd-like objects. Objects in NGC 3603, and later in Cygnus OB2, were proposed as intermediate-mass versions of the bright proplyds found in the Orion Nebula. The proplyd-like objects in Cygnus OB2, for example, are 6 to 14 parsecs distant from a large collection of OB stars and have tail lengths of 0.11 to 0.55 parsec (24,000 to 113,000 au). The nature of proplyd-like objects as intermediate-mass proplyds is partly supported by a spectrum of one object, which showed that the mass loss rate is higher than the mass accretion rate. Another object did not show any outflow, but did show accretion. | https://en.wikipedia.org/wiki?curid=245434 |
Sonology is a neologism used to describe the study of sound in a variety of disciplines. In medicine, the term is used in the field of medical imaging to describe the practice of medical ultrasonography. According to some scholars, sonology may represent a more advanced application of clinical sonography, chiefly due to the requirement for the critical application of both cognitive and radiographic skills in making the diagnostic determination at the time of bedside application of focused ultrasound. The term is also used to describe interdisciplinary research in the field of electronic music and computer music, drawing upon disciplines such as acoustics, electronics, informatics, composition and psychoacoustics. This sense of the term is widely associated with the Institute of Sonology, which was established by composer Gottfried Michael Koenig at the University of Utrecht in 1967 and later moved to the Royal Conservatory of The Hague in 1986. The term has also been adopted to describe the study of electronic music at other institutions, including the Center for Computational Sonology (now "Sound and Music Computing") at the University of Padua, the Kunitachi College of Music in Tokyo, the Catalonia College of Music in Barcelona and the Federal University of Minas Gerais in Brazil. The term has been less commonly used to describe the use of sound for therapeutic and religious purposes. | https://en.wikipedia.org/wiki?curid=261581 |
Bombay Natural History Society The Bombay Natural History Society, founded on 15 September 1883, is one of the largest non-governmental organisations in India engaged in conservation and biodiversity research. It supports many research efforts through grants and publishes the "Journal of the Bombay Natural History Society". Many prominent naturalists, including the ornithologists Sálim Ali and S. Dillon Ripley, have been associated with it. The society is commonly known by its initials, BNHS. On 15 September 1883 eight men interested in natural history met at Bombay in the then Victoria and Albert Museum (now Bhau Daji Lad Museum). According to E. H. Aitken (the first honorary secretary, September 1883 – March 1886), Dr G. A. Maconochie was the "fons et origo" (Latin for "source and origin") of the society. The other founders were Dr D. MacDonald, Col. C. Swinhoe, Mr J. C. Anderson, Mr J. Johnston, Dr Atmaram Pandurang and Dr Sakharam Arjun. Mr H. M. Phipson (second honorary secretary, 1886–1906) was a part of the founding group. He lent a part of his wine shop at 18 Forbes Street to the BNHS as an office. In 1911, R. C. Wroughton, a BNHS member and forest officer, organised a survey of mammals, making use of the members spread through the Indian subcontinent to provide specimens. This was perhaps the first collaborative natural history study in the world. It resulted in a collection of 50,000 specimens in 12 years. Several new species were discovered, 47 publications were published, and the understanding of biogeographic boundaries was improved | https://en.wikipedia.org/wiki?curid=264765 |
Bombay Natural History Society In the early years, the "Journal of the BNHS" reviewed contemporary literature from other parts of the world. The description of ant-bird interactions in German by Erwin Stresemann was reviewed in a 1935 issue, leading to the introduction of the term "anting" into English. Today the BNHS is headquartered in the specially constructed 'Hornbill House' in southern Mumbai. It sponsors studies in Indian wildlife and conservation, and publishes a four-monthly journal, "Journal of the Bombay Natural History Society" ("JBNHS"), as well as a quarterly magazine, "Hornbill". BNHS is the partner of BirdLife International in India. It has been designated as a 'Scientific and Industrial Research Organisation' by the Department of Science and Technology. Its headquarters is in Mumbai, and it has one regional centre at the Wetland Research and Training Centre near Chilika Lake, Odisha. The BNHS logo is the great hornbill, inspired by a great hornbill named William, who lived on the premises of the Society from 1894 until 1920, during the honorary secretaryships of H. M. Phipson until 1906 and W. S. Millard from 1906 to 1920. The logo was created in 1933, the golden-jubilee year of the Society's founding. According to H. M. Phipson, William was born in May 1894 and presented to the Society three months later by H. Ingle of Karwar. He reached his full length by the end of his third year. His diet consisted of fruit (like plantains and wild figs) and also of live mice, scorpions, and plain raw meat, which he ate with relish | https://en.wikipedia.org/wiki?curid=264765 |
Bombay Natural History Society He apparently did not drink water, nor use it for bathing. William was known for catching tennis balls thrown at him from a distance of some 30 feet with his beak. In his obituary of W. S. Millard, Sir Norman Kinnear made the following remarks about William: | https://en.wikipedia.org/wiki?curid=264765 |
Gakkel Ridge The Gakkel Ridge (formerly known as the Nansen Cordillera and Arctic Mid-Ocean Ridge) is a mid-oceanic ridge, a divergent tectonic plate boundary between the North American Plate and the Eurasian Plate. It is located in the Eurasian Basin of the Arctic Ocean, between Greenland and Siberia, and has a length of about 1,800 kilometers. Geologically, it connects the northern end of the Mid-Atlantic Ridge with the Laptev Sea Rift. The existence and approximate location of the Gakkel Ridge were predicted by Soviet polar explorer Yakov Yakovlevich Gakkel, and confirmed on Soviet expeditions in the Arctic around 1950. The ridge is named after him, and the name was recognized in April 1987 by SCUFN (under that body's old name, the Sub-Committee on Geographical Names and Nomenclature of Ocean Bottom Features). The ridge is the slowest known spreading ridge on Earth, with a rate of less than one centimeter per year. Until 1999, it was believed to be non-volcanic; that year, scientists operating from a nuclear submarine discovered active volcanoes along it. In 2001 two research icebreakers, the German "Polarstern" and the American "Healy", with several groups of scientists, cruised to the Gakkel Ridge to explore it and collect petrological samples. Among other discoveries, this expedition found evidence of hydrothermal vents | https://en.wikipedia.org/wiki?curid=271269 |
Gakkel Ridge In 2007, Woods Hole Oceanographic Institution conducted the "Arctic Gakkel Vents Expedition" (AGAVE), which made some unanticipated discoveries, including the unconsolidated fragmented pyroclastic volcanic deposits that cover the axial valley of the ridge (whose area is greater than 10 km²). These suggest volatile substances in concentrations ten times those in the magmas of normal mid-ocean ridges. Using "free-swimming" robotic submersibles on the Gakkel Ridge, the AGAVE expedition also discovered what they called "bizarre 'mats' of microbial communities containing a half dozen or more new species". The Gakkel Ridge is remarkable in that it is not offset by any transform faults. The ridge does have segments with variable orientation and varying degrees of volcanism: the Western Volcanic Zone (from the Lena Trough, 7° W, to 3° E longitude), the Sparsely Magmatic Zone (from 3° E to 29° E longitude), and the Eastern Magmatic Zone (from 29° E to 89° E). The gaps of volcanic activity imply very cold crust and mantle, probably related to the very low spreading rate, but it is not yet known why some parts of the ridge are more magmatic than others. Some earthquakes have been detected from the mantle, below the crust, which is very unusual for a mid-ocean ridge. It confirms that the mantle and crust of the Gakkel Ridge, like some segments of the Southwest Indian Ridge, are very cold. | https://en.wikipedia.org/wiki?curid=271269 |
Alcoholate Originally, an alcoholate was the crystalline form of a salt in which alcohol took the place of water of crystallization, such as [SnCl(OCH)·CHOH] and CHNO·CHOH. The second meaning of the word is that of a tincture, or alcoholic extract of plant material. The third, and more usual meaning of the word is as a synonym for alkoxide—a compound formed by the substitution of the hydrogen atom of the hydroxyl group of an alcohol by a metal atom. | https://en.wikipedia.org/wiki?curid=277262 |
Trace metal Trace metals are the metals subset of trace elements; that is, metals normally present in small but measurable amounts in animal and plant cells and tissues and that are a necessary part of nutrition and physiology. Many biometals are trace metals. Ingestion of, or exposure to, excessive quantities can be toxic. However, insufficient plasma or tissue levels of certain trace metals can also cause pathology, as is the case with iron. Trace metals within the human body include iron, lithium, zinc, copper, chromium, nickel, cobalt, vanadium, molybdenum, manganese and others. Trace metals are metals needed by living organisms to function properly and are depleted through the expenditure of energy by various metabolic processes. They are replenished in animals through diet as well as environmental exposure, and in plants through the uptake of nutrients from the soil in which the plant grows. Human vitamin pills and plant fertilizers can be a source of trace metals. Trace metals are sometimes referred to as trace elements, although the latter category is broader and also includes minerals. See also Dietary mineral. Trace elements are required by the body for specific functions. Vitamins, sports drinks, and fresh fruits and vegetables are sources. Taken in excessive amounts, trace elements can cause problems. For example, fluorine is required for the formation of bones and enamel on teeth | https://en.wikipedia.org/wiki?curid=277810 |
Trace metal However, when taken in excessive amounts, fluorine can cause a disease called fluorosis, in which bone deformation and yellowing of the teeth are seen. Fluorine can occur naturally in ground water in some areas. Trace metals include iron, which can help to prevent anemia, and zinc, which is a cofactor in over 100 enzyme reactions. | https://en.wikipedia.org/wiki?curid=277810 |
Néel temperature The Néel temperature or magnetic ordering temperature, "T_N", is the temperature above which an antiferromagnetic material becomes paramagnetic—that is, the thermal energy becomes large enough to destroy the microscopic magnetic ordering within the material. The Néel temperature is analogous to the Curie temperature, "T_C", for ferromagnetic materials. It is named after Louis Néel (1904–2000), who received the 1970 Nobel Prize in Physics for his work in the area. Listed below are the Néel temperatures of several materials: | https://en.wikipedia.org/wiki?curid=284139 |
Mixture In chemistry, a mixture is a material made up of two or more different substances which are physically combined. A mixture is the physical combination of two or more substances in which each retains its own identity; mixtures take the form of solutions, suspensions, and colloids. Mixtures are one product of mechanically blending or mixing chemical substances such as elements and compounds, without chemical bonding or other chemical change, so that each ingredient substance retains its own chemical properties and makeup. Although there are no chemical changes to its constituents, the physical properties of a mixture, such as its melting point, may differ from those of the components. Some mixtures can be separated into their components by physical (mechanical or thermal) means. Azeotropes are one kind of mixture that usually poses considerable difficulties regarding the separation processes required to obtain their constituents (physical or chemical processes, or even a blend of them). Mixtures can be either homogeneous or heterogeneous. A mixture whose constituents are distributed uniformly is called a homogeneous mixture, such as salt in water. A mixture whose constituents are not distributed uniformly is called a heterogeneous mixture, such as sand in water. One example of a mixture is air. Air is a homogeneous mixture of the gaseous substances nitrogen, oxygen, and smaller amounts of other substances | https://en.wikipedia.org/wiki?curid=286069 |
Mixture Salt, sugar, and many other substances dissolve in water to form homogeneous mixtures. A homogeneous mixture in which both a solute and a solvent are present is also a solution. Mixtures can contain their ingredients in any proportions. Mixtures are unlike chemical compounds in several respects. The following table shows the main properties of the three families of mixtures and examples of the three types of mixture. A "heterogeneous mixture" is a mixture of two or more chemical substances (elements or compounds). Examples are: mixtures of sand and water or sand and iron filings, a conglomerate rock, water and oil, a salad, trail mix, and concrete (not cement). A mixture of powdered silver metal and powdered gold metal would represent a heterogeneous mixture of two elements. Making a distinction between "homogeneous" and "heterogeneous" mixtures is a matter of the scale of sampling. On a coarse enough scale, any mixture can be said to be homogeneous, if the entire article is allowed to count as a "sample" of it. On a fine enough scale, any mixture can be said to be heterogeneous, because a sample could be as small as a single molecule. In practical terms, if the property of interest of the mixture is the same regardless of which sample of it is taken for examination, the mixture is homogeneous | https://en.wikipedia.org/wiki?curid=286069 |
Mixture Gy's sampling theory quantitatively defines the heterogeneity of a particle as $h_i = \frac{(a_i - a)\, m_i}{a\, \bar{m}}$, where $h_i$, $a_i$, $a$, $m_i$, and $\bar{m}$ are respectively: the heterogeneity of the $i$th particle of the population, the mass concentration of the property of interest in the $i$th particle of the population, the mass concentration of the property of interest in the population, the mass of the $i$th particle in the population, and the average mass of a particle in the population. During sampling of heterogeneous mixtures of particles, the variance of the sampling error is generally non-zero. Pierre Gy derived, from the Poisson sampling model, the following formula for the variance of the sampling error in the mass concentration in a sample: $V = \frac{1}{\left(\sum_{i=1}^N q_i m_i\right)^2} \sum_{i=1}^N q_i (1 - q_i)\, m_i^2 \left(a_i - \frac{\sum_{j=1}^N q_j a_j m_j}{\sum_{j=1}^N q_j m_j}\right)^2$, in which $V$ is the variance of the sampling error, $N$ is the number of particles in the population (before the sample was taken), $q_i$ is the probability of including the $i$th particle of the population in the sample (i.e. the first-order inclusion probability of the $i$th particle), $m_i$ is the mass of the $i$th particle of the population and $a_i$ is the mass concentration of the property of interest in the $i$th particle of the population. The above equation for the variance of the sampling error is an approximation based on a linearization of the mass concentration in a sample. In the theory of Gy, correct sampling is defined as a sampling scenario in which all particles have the same probability of being included in the sample | https://en.wikipedia.org/wiki?curid=286069 |
Mixture This implies that $q_i$ no longer depends on $i$, and can therefore be replaced by the symbol $q$. Gy's equation for the variance of the sampling error then becomes $V = \frac{1-q}{q\, M^2} \sum_{i=1}^N m_i^2 \left(a_i - a\right)^2$, where $a$ is the concentration of the property of interest in the population from which the sample is to be drawn and $M$ is the mass of the population from which the sample is to be drawn. | https://en.wikipedia.org/wiki?curid=286069 |
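To make the formulas above concrete, the following Python sketch (an illustration added here, not part of the source text) evaluates the per-particle heterogeneity $h_i$ and the correct-sampling variance $V$ for a synthetic particle population; all masses, concentrations, and the inclusion probability q are invented example values.

```python
import numpy as np

# Synthetic population: made-up particle masses and per-particle mass
# concentrations of the property of interest (not data from the text).
rng = np.random.default_rng(0)
m = rng.lognormal(mean=0.0, sigma=0.5, size=1000)   # particle masses
a_i = rng.uniform(0.0, 0.1, size=1000)              # per-particle concentration

M = m.sum()                      # mass of the population
a = (a_i * m).sum() / M          # mass concentration of the population
m_bar = m.mean()                 # average particle mass

# Heterogeneity of each particle: h_i = (a_i - a) * m_i / (a * m_bar)
h = (a_i - a) * m / (a * m_bar)

# Variance of the sampling error under correct sampling, where every particle
# has the same inclusion probability q:
#   V = (1 - q) / (q * M^2) * sum(m_i^2 * (a_i - a)^2)
q = 0.05
V = (1.0 - q) / (q * M**2) * np.sum(m**2 * (a_i - a) ** 2)

print(f"population concentration a = {a:.4f}")
print(f"sampling-error variance  V = {V:.3e}")
```

Under these assumptions, raising q (taking a larger correct sample) drives V toward zero, matching the intuition that a larger correct sample is more representative.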
Settling is the process by which particulates settle to the bottom of a liquid and form a sediment. Particles that experience a force, either due to gravity or due to centrifugal motion, will tend to move in a uniform manner in the direction exerted by that force. For gravity settling, this means that the particles will tend to fall to the bottom of the vessel, forming a slurry at the vessel base. Settling is an important operation in many applications, such as mining, wastewater treatment, biological science, space propellant reignition, and scooping. For settling particles that are considered individually, i.e. dilute particle suspensions, there are two main forces acting upon any particle: an applied force, such as gravity, and a drag force due to the motion of the particle through the fluid. The applied force is usually not affected by the particle's velocity, whereas the drag force is a function of the particle velocity. For a particle at rest no drag force will be exhibited, which causes the particle to accelerate due to the applied force. When the particle accelerates, the drag force acts in the direction opposite to the particle's motion, retarding further acceleration; in the absence of other forces, drag directly opposes the applied force. As the particle's velocity increases, the drag force and the applied force eventually approximately equate, causing no further change in the particle's velocity | https://en.wikipedia.org/wiki?curid=286454 |
Settling This velocity is known as the terminal velocity, "settling velocity" or "fall velocity" of the particle. It is readily measurable by examining the rate of fall of individual particles. The terminal velocity of the particle is affected by many parameters, i.e. anything that will alter the particle's drag. Hence the terminal velocity is most notably dependent upon grain size, the shape (roundness and sphericity) and density of the grains, and the viscosity and density of the fluid. For dilute suspensions, Stokes' law predicts the settling velocity of small spheres in a fluid, either air or water. This is because the viscous forces at the surface of the particle provide the majority of the retarding force. Stokes' law finds many applications in the natural sciences, and is given by $w = \frac{2\,(\rho_p - \rho_f)\, g\, r^2}{9 \mu}$, where $w$ is the settling velocity, $\rho$ is density (the subscripts $p$ and $f$ indicate particle and fluid respectively), $g$ is the acceleration due to gravity, $r$ is the radius of the particle and $\mu$ is the dynamic viscosity of the fluid. Stokes' law applies when the Reynolds number, Re, of the particle is less than 0.1. Experimentally, Stokes' law is found to hold within 1% for $\mathrm{Re} \le 0.1$, within 3% for $\mathrm{Re} \le 0.5$, and within 9% for $\mathrm{Re} \le 1.0$. With increasing Reynolds numbers, Stokes' law begins to break down due to the increasing importance of fluid inertia, requiring the use of empirical solutions to calculate drag forces | https://en.wikipedia.org/wiki?curid=286454 |
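As a quick illustration of the Stokes'-law expression just given, the Python sketch below computes a settling velocity and the particle Reynolds number used to check the law's range of validity; the particle and fluid properties are assumed example values (a fine quartz grain in 20 °C water), not figures from the text.

```python
# Stokes'-law settling velocity for a small sphere, w = 2 (rho_p - rho_f) g r^2 / (9 mu),
# plus the particle Reynolds number Re = rho_f * w * d / mu used to check validity.
def stokes_settling_velocity(radius, rho_p, rho_f, mu, g=9.81):
    return 2.0 * (rho_p - rho_f) * g * radius**2 / (9.0 * mu)

def particle_reynolds(w, radius, rho_f, mu):
    return rho_f * w * (2.0 * radius) / mu  # characteristic length = diameter

r = 15e-6        # particle radius in metres (30 micron diameter), assumed
rho_p = 2650.0   # quartz density, kg/m^3, assumed
rho_f = 998.0    # water density at ~20 C, kg/m^3
mu = 1.0e-3      # dynamic viscosity of water, Pa*s

w = stokes_settling_velocity(r, rho_p, rho_f, mu)
Re = particle_reynolds(w, r, rho_f, mu)
print(f"settling velocity = {w*1000:.2f} mm/s, Re = {Re:.3f}")
# The result is only trusted where Re < 0.1, per the validity range quoted above.
```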
Settling Defining a drag coefficient, $C_d$, as the ratio of the force experienced by the particle to the impact pressure of the fluid, a coefficient that can be considered as the transfer of available fluid force into drag is established. In this region the inertia of the impacting fluid is responsible for the majority of force transfer to the particle. For a spherical particle in the Stokes regime this value is not constant; however, in the Newtonian drag regime the drag coefficient of a sphere can be approximated by a constant, 0.44. This constant value implies that the efficiency of transfer of energy from the fluid to the particle is not a function of fluid velocity. As such, the terminal velocity of a particle in the Newtonian regime can again be obtained by equating the drag force to the applied force, resulting in the expression $w = \sqrt{\frac{4\, g\, d\, (\rho_p - \rho_f)}{3\, C_d\, \rho_f}}$, where $d$ is the particle diameter and $C_d \approx 0.44$. In the intermediate region between Stokes drag and Newtonian drag, there exists a transitional regime, where the analytical solution to the problem of a falling sphere becomes problematic. To solve this, empirical expressions are used to calculate drag in this region. One such empirical correlation is that of Schiller and Naumann, $C_d = \frac{24}{\mathrm{Re}}\left(1 + 0.15\, \mathrm{Re}^{0.687}\right)$, which may be valid for $\mathrm{Re} < 1000$. Stokes, transitional and Newtonian settling describe the behaviour of a single spherical particle in an infinite fluid, known as free settling. However, this model has limitations in practical application | https://en.wikipedia.org/wiki?curid=286454 |
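The transitional-regime approach described above lends itself to a simple fixed-point iteration: guess a velocity, compute the Reynolds number and drag coefficient, and update the velocity from the force balance until it stops changing. The Python sketch below illustrates this using the Schiller and Naumann correlation; the particle and fluid properties are assumed example values (a 0.5 mm sand grain in water), not data from the text.

```python
# Iterative terminal-velocity calculation spanning the transitional and
# Newtonian regimes, using the Schiller-Naumann drag correlation.
def drag_coefficient(Re):
    # C_d = 24/Re * (1 + 0.15 Re^0.687), floored at the Newtonian value 0.44
    return max(24.0 / Re * (1.0 + 0.15 * Re**0.687), 0.44)

def terminal_velocity(d, rho_p, rho_f, mu, g=9.81, tol=1e-10, max_iter=200):
    w = 1e-3  # initial guess for the settling velocity, m/s
    for _ in range(max_iter):
        Re = max(rho_f * w * d / mu, 1e-12)          # particle Reynolds number
        Cd = drag_coefficient(Re)
        w_new = (4.0 * g * d * (rho_p - rho_f) / (3.0 * Cd * rho_f)) ** 0.5
        if abs(w_new - w) < tol:
            return w_new
        w = w_new
    return w

# Assumed example: a 0.5 mm quartz grain settling in water
w = terminal_velocity(d=0.5e-3, rho_p=2650.0, rho_f=998.0, mu=1.0e-3)
print(f"terminal velocity ~ {w*1000:.1f} mm/s")
```

For large particles the same routine reduces to the constant-0.44 Newtonian result, since the correlation is floored at that value.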
Settling Alternate considerations, such as the interaction of particles in the fluid, or the interaction of the particles with the container walls, can modify the settling behaviour. Settling in which these forces are of appreciable magnitude is known as hindered settling. Subsequently, semi-analytic or empirical solutions may be used to perform meaningful hindered settling calculations. Solid–gas flow systems are present in many industrial applications, such as dry, catalytic reactors, settling tanks, and pneumatic conveying of solids, among others. In industrial operations the drag behaviour is not as simple as that of a single sphere settling in a stationary fluid. However, this knowledge indicates how drag behaves in more complex systems, which are designed and studied by engineers applying empirical and more sophisticated tools. For example, settling tanks are used for separating solids and/or oil from another liquid. In food processing, the vegetable is crushed and placed inside of a settling tank with water. The oil floats to the top of the water and is then collected. In water and waste water treatment a flocculant is often added prior to settling to form larger particles that settle out quickly in a settling tank or an inclined plate settler, leaving the water with a lower turbidity. In winemaking, the French term for this process is "débourbage". This step usually occurs in white wine production before the start of fermentation. Settleable solids are the particulates that settle out of a still fluid | https://en.wikipedia.org/wiki?curid=286454 |
Settling Settleable solids can be quantified for a suspension using an Imhoff cone. The standard Imhoff cone of transparent glass or plastic holds one liter of liquid and has calibrated markings to measure the volume of solids accumulated in the bottom of the conical container after settling for one hour. A standardized Imhoff cone procedure is commonly used to measure suspended solids in wastewater or stormwater runoff. The simplicity of the method makes it popular for estimating water quality. To numerically gauge the stability of suspended solids and predict agglomeration and sedimentation events, zeta potential is commonly analyzed. This parameter indicates the electrostatic repulsion between solid particles and can be used to predict whether aggregation and settling will occur over time. The water sample to be measured should be representative of the total stream. Samples are best collected from the discharge falling from a pipe or over a weir, because samples skimmed from the top of a flowing channel may fail to capture larger, high-density solids moving along the bottom of the channel. The sampling bucket is vigorously stirred to uniformly re-suspend all collected solids immediately before pouring the volume required to fill the cone. The filled cone is immediately placed in a stationary holding rack to allow quiescent settling. The rack should be located away from heating sources, including direct sunlight, which might cause currents within the cone from thermal density changes of the liquid contents | https://en.wikipedia.org/wiki?curid=286454 |
Settling After 45 minutes of settling, the cone is partially rotated about its axis of symmetry just enough to dislodge any settled material adhering to the side of the cone. Accumulated sediment is observed and measured fifteen minutes later, after one hour of total settling time. | https://en.wikipedia.org/wiki?curid=286454 |
CSIRO The Commonwealth Scientific and Industrial Research Organisation (CSIRO) is an Australian federal government agency responsible for scientific research. CSIRO works with leading organisations around the world. From its headquarters in Canberra, CSIRO maintains more than 50 sites across Australia and in France, Chile and the United States, employing about 5,500 people. Federally funded scientific research began in Australia more than a century ago. The Advisory Council of Science and Industry was established in 1916 but was hampered by insufficient available finance. In 1926 the research effort was reinvigorated by the establishment of the Council for Scientific and Industrial Research (CSIR), which strengthened national science leadership and increased research funding. CSIR grew rapidly and achieved significant early successes. In 1949 further legislated changes included renaming the organisation as CSIRO. Notable developments by CSIRO have included the invention of atomic absorption spectroscopy, essential components of Wi-Fi technology, the development of the first commercially successful polymer banknote, the invention of the insect repellent used in Aerogard, and the introduction of a series of biological controls into Australia, such as myxomatosis and rabbit calicivirus for the control of rabbit populations. CSIRO is governed by a board appointed by the Australian Government, currently chaired by David Thodey. There are nine directors inclusive of the Chief Executive, presently Dr | https://en.wikipedia.org/wiki?curid=288262 |